University of Chicago Researchers Develop Technique to Poison Generative AI Image Scraping

Tool seen as a way to protect artists' copyright

Researchers at the University of Chicago have developed a technique that can "poison" generative text-to-image machine learning models such as Stable Diffusion XL and OpenAI's DALL-E when they scrape the internet for training images. And it can do so with as few as 100 poisoned images, they said.

The tool, dubbed Nightshade, has implications for publishers, filmmakers, museums, art departments, educators, and artists wanting to protect their works against generative AI companies violating their copyrights.

University of Chicago computer science department researchers Shawn Shan, Wenxin Ding, Josephine Passananti, Haitao Zheng, and Ben Y. Zhao have submitted their paper, "Prompt-Specific Poisoning Attacks on Text-to-Image Generative Models," for peer review.

Earlier this year, the same team released the free, open source software Glaze, which allows image makers to "cloak" their works in a style different from their own, preventing an AI from stealing the original image, researchers said in an FAQ.

The poisoning attacks on generative AI are prompt-specific, researchers said, targeting a model's ability to respond to individual prompts. Further, because a doctored image contains specific but seemingly random poisoned pixels, it is nearly impossible to detect as different from the original, and thus to correct.

"Surprisingly, we show that a moderate number of Nightshade attacks can destabilize general features in a text-to-image generative model, effectively disabling its ability to generate meaningful images," they said.

In addition, Nightshade prompt samples can "bleed through" to similar prompts. For example, poisoning the prompt "fantasy art" can also poison the prompts "dragon" and fantasy artist "Michael Whelan." Multiple Nightshade poison prompts can be stacked, with cumulative effect: when enough of these attacks are deployed, they can collapse the image generation model's function altogether.
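Nightshade itself crafts subtle, optimized pixel perturbations rather than swapping whole images, and the details are in the researchers' paper. But the core idea of a prompt-specific poisoning attack can be sketched with a toy example, assuming a training corpus of (caption, image) pairs; all names and values below are illustrative, not taken from the paper:

```python
import random

def poison_dataset(dataset, target_prompt, decoy_images, fraction=0.1, seed=0):
    """Conceptual sketch of prompt-specific poisoning: replace a fraction
    of the images paired with `target_prompt` by unrelated decoys, so a
    model trained on the result learns the wrong association for that
    one prompt while other prompts remain untouched."""
    rng = random.Random(seed)
    poisoned = []
    for caption, image in dataset:
        if caption == target_prompt and rng.random() < fraction:
            image = rng.choice(decoy_images)  # poison this sample
        poisoned.append((caption, image))
    return poisoned

# Toy corpus: 1,000 images all captioned "dog" (filenames stand in for pixels).
clean = [("dog", f"dog_{i}.png") for i in range(1000)]
poisoned = poison_dataset(clean, "dog", ["decoy_a.png", "decoy_b.png"],
                          fraction=0.1)
swapped = sum(1 for _, img in poisoned if img.startswith("decoy"))
```

In this toy version the mismatch is obvious to a human reviewer; the point of the actual technique is that its perturbations are imperceptible, which is what makes the poisoned samples so hard to filter out of a scraped corpus.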

"Moving forward, it is possible poison attacks may have potential value as tools to encourage model trainers and content owners to negotiate a path towards licensed procurement of training data for future models," the researchers conclude.

To read and/or download the full abstract, visit this page.
