Researchers Provide Breakdown of Generative AI Misuse

In an effort to clarify the potential risks of GenAI and provide "a concrete understanding of how GenAI models are specifically exploited or abused in practice, including the tactics employed to inflict harm," a group of researchers from Google DeepMind, Jigsaw, and Google.org recently published a paper titled "Generative AI Misuse: A Taxonomy of Tactics and Insights from Real-World Data."

The authors of the paper, Nahema Marchal, Rachel Xu, Rasmi Elasmar, Iason Gabriel, Beth Goldberg, and William Isaac, emphasized that, as GenAI capabilities continue to advance, understanding the specific ways in which these tools are exploited is critical for developing effective safeguards. Their "taxonomy of GenAI misuse tactics" is meant to provide a framework for identifying and addressing the potential harms associated with these technologies, they wrote, ultimately aiming to ensure their responsible and ethical use.

The researchers based their study on the qualitative analysis of approximately 200 incidents reported between January 2023 and March 2024. That analysis revealed key patterns and motivations behind the misuse of GenAI, including:

  • Manipulation of human likeness. The most prevalent tactics involve the manipulation of human likeness, such as impersonation, "sockpuppeting," and "non-consensual intimate imagery."
  • Low-tech exploitation. Most misuse cases do not involve sophisticated technological attacks, but rather exploit easily accessible GenAI capabilities requiring minimal technical expertise.
  • Emergence of new forms of misuse. The availability and accessibility of GenAI tools have introduced new forms of misuse that, although not overtly malicious or policy-violative, have concerning ethical implications, such as blurring the lines between authenticity and deception in political outreach and self-promotion.

The study also identified two categories of misuse tactics:

Exploitation of GenAI Capabilities

  • Impersonation: Creating AI-generated audio or video to mimic real people.
  • Appropriated likeness: Using or altering a person's likeness without consent.
  • Sockpuppeting: Creating synthetic online personas.
  • NCII (non-consensual intimate imagery): Generating explicit content without consent.
  • Falsification: Fabricating evidence such as reports or documents.
  • IP infringement: Using someone’s intellectual property without permission.
  • Counterfeit: Producing items that imitate original works and pass as real.
  • Scaling and amplification: Automating and amplifying content distribution.
  • Targeting & personalization: Refining outputs for targeted attacks.

Compromise of GenAI Systems

  • Adversarial inputs: Modifying inputs to cause a model to malfunction.
  • Prompt injections: Manipulating text instructions to produce harmful outputs.
  • Jailbreaking: Bypassing model restrictions and safety filters.
  • Model diversion: Repurposing models for unintended uses.
  • Steganography: Hiding messages within model outputs.
  • Data poisoning: Corrupting training datasets to introduce vulnerabilities.
  • Privacy compromise: Revealing sensitive information from training data.
  • Data exfiltration: Illicitly obtaining training data.
  • Model extraction: Stealing model architecture and parameters.

The paper provides insights for policymakers, trust and safety teams, and researchers to help them develop strategies for AI governance and mitigate real-world harms, the authors wrote. To protect against the diverse and growing threats posed by GenAI, they called for better technical safeguards, non-technical user-facing interventions, and ongoing monitoring of the evolving misuse landscape.

About the Author

John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI, and future tech. He's been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he's written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at [email protected].
