
Deepfakes, Phrenology, Surveillance, and More! A Taxonomy of AI Privacy Risks

WorkDifferentWithAI.com Academic Paper Alert!

Written by Hao-Ping Lee, Yu-Ju Yang, Thomas Serban von Davier, Jodi Forlizzi, Sauvik Das

Category: AI Strategy & Governance

Article Section: Ethics of enterprise AI

Publication Date: 2023-10-11

SEO Description: This post discusses a study of AI privacy risks, highlighting issues such as deepfakes and surveillance, the creation of new privacy risks and the aggravation of existing ones, and the limitations of current privacy-preserving AI/ML methods.

Claude.ai-Generated Paper Summary

Here are some key takeaways from analyzing the paper on a taxonomy of AI privacy risks:

Novelty:

  • Presents a taxonomy of 12 categories of privacy risks created or exacerbated by AI capabilities and data requirements, grounded in an analysis of 321 real-world AI privacy incidents.
  • New risks such as deepfakes and the inference of sensitive attributes emerge from AI’s generative abilities and its capacity to learn arbitrary input-output correlations.
  • AI exacerbates known risks such as surveillance, disclosure, and accessibility by enabling new scale, lower latency, and greater ubiquity in data collection and dissemination.
  • The phrenology/physiognomy risk is newly created: AI’s pattern-recognition abilities have revived these discredited pseudosciences.
  • The taxonomy reveals that AI-specific guidance is needed to address utility-intrusiveness tradeoffs in design.

Commercial Applications:

  • Understanding the taxonomy could help organizations anticipate and mitigate AI privacy risks in their products and services.
  • Consulting services that conduct privacy impact assessments and recommend controls aligned to the taxonomy.
  • Tools that automatically flag AI privacy risks during the ideation and design phases, based on the taxonomy.
  • Training and certification programs on AI privacy risks for product teams.
  • An expanded design space for techniques such as federated learning and differential privacy, to address a broader set of risks.

Overall, the taxonomy provides a comprehensive, applied perspective on AI privacy risks, grounded in documented incidents. It highlights the need for AI-specific strategies to preserve privacy, beyond data protection methods alone. The taxonomy could inform the development of practical AI privacy guidance and tools.
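To make the privacy-preserving methods mentioned above concrete, here is a minimal sketch (not from the paper) of differential privacy's standard Laplace mechanism applied to a counting query. The dataset, the `dp_count` helper, and the chosen `epsilon` are all illustrative assumptions; real deployments use audited libraries rather than hand-rolled noise.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample from Laplace(0, scale) via the inverse-CDF method."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(records, predicate, epsilon: float) -> float:
    """Differentially private count: true count plus Laplace noise.

    A counting query has sensitivity 1 (adding or removing one
    person changes the count by at most 1), so the required noise
    scale is 1 / epsilon.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical data: how many people are 40 or older?
ages = [23, 35, 41, 29, 52, 47, 31]
noisy = dp_count(ages, lambda a: a >= 40, epsilon=1.0)
```

As the paper's abstract notes, such mechanisms protect the data an AI system learns from, but they do not by themselves address risks like deepfake exposure or physiognomic inference, which stem from what the model can generate or infer.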

Keywords

Deepfakes, Phrenology, Surveillance, AI Privacy Risks, Ethical AI Technologies

Author’s Abstract

Privacy is a key principle for developing ethical AI technologies, but how does including AI technologies in products and services change privacy risks? We constructed a taxonomy of AI privacy risks by analyzing 321 documented AI privacy incidents. We codified how the unique capabilities and requirements of AI technologies described in those incidents generated new privacy risks, exacerbated known ones, or otherwise did not meaningfully alter the risk. We present 12 high-level privacy risks that AI technologies either newly created (e.g., exposure risks from deepfake pornography) or exacerbated (e.g., surveillance risks from collecting training data). One upshot of our work is that incorporating AI technologies into a product can alter the privacy risks it entails. Yet, current privacy-preserving AI/ML methods (e.g., federated learning, differential privacy) only address a subset of the privacy risks arising from the capabilities and data requirements of AI.

Read the full paper here

Last updated on October 22nd, 2023.