University of Chicago Researchers Develop Technique to Poison Generative AI Image Scraping

Tool seen as a way to protect artists' copyright

Researchers at the University of Chicago have developed a technique that can "poison" generative text-to-image machine learning models such as Stable Diffusion XL and OpenAI's DALL-E when they scrape the internet for training images. And it can do so with as few as 100 poisoned images, they said.

The tool, dubbed Nightshade, has implications for publishers, filmmakers, museums, art departments, educators, and artists wanting to protect their works against generative AI companies violating their copyrights.

University of Chicago computer science department researchers Shawn Shan, Wenxin Ding, Josephine Passananti, Haitao Zheng, and Ben Y. Zhao have released their paper, "Prompt-Specific Poisoning Attacks on Text-to-Image Generative Models," for peer review.

Earlier this year, the same team released Glaze, a free, open source tool that allows image makers to "cloak" their works in a style different from their own, preventing an AI model from stealing the original image, the researchers said in an FAQ.

The poisoning attacks on generative AI are prompt-specific, the researchers said: they target a model's ability to respond to individual prompts. Further, because the poisoned pixels in a doctored image are carefully chosen yet appear random, the image is nearly impossible to detect as different from the original, and thus to correct.
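Conceptually, and only as a loose illustration rather than the authors' published method, a poison of this kind can be sketched as a bounded optimization: perturb an image of one concept so that an image encoder's features match those of an unrelated "anchor" concept, while a small pixel budget keeps the change invisible to humans. The Python sketch below uses torchvision's ResNet-18 as a stand-in for the generative model's own encoder; the eps, steps, and lr values are hypothetical and chosen for illustration.

```python
# Simplified sketch of feature-space poisoning (NOT the Nightshade code).
# Idea: perturb a photo captioned "dog" so a feature extractor sees
# something closer to a "cat" anchor image, while an L-infinity budget
# keeps the edit imperceptible. ResNet-18 stands in for the diffusion
# model's encoder; input normalization is omitted for brevity.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18, ResNet18_Weights

def poison(source_img, anchor_img, eps=8 / 255, steps=200, lr=0.01):
    """Return source_img plus a bounded perturbation whose features
    mimic anchor_img's features under the stand-in encoder.
    Both inputs: float tensors of shape (1, 3, 224, 224) in [0, 1]."""
    encoder = resnet18(weights=ResNet18_Weights.DEFAULT).eval()
    encoder.fc = torch.nn.Identity()          # use penultimate features
    for p in encoder.parameters():
        p.requires_grad_(False)

    with torch.no_grad():
        anchor_feat = encoder(anchor_img)     # features to mimic

    delta = torch.zeros_like(source_img, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.mse_loss(encoder(source_img + delta), anchor_feat)
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)           # imperceptibility budget
            # keep the poisoned pixels inside the valid [0, 1] range
            delta.add_(source_img).clamp_(0, 1).sub_(source_img)
    return (source_img + delta).detach()
```

In the paper's threat model, an image like this is scraped into a training set alongside text for the source concept, so the model associates that prompt with the wrong visual features; because the perturbation is capped at eps, it is hard to spot by eye, which is the property the article describes.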

"Surprisingly, we show that a moderate number of Nightshade attacks can destabilize general features in a text-to-image generative model, effectively disabling its ability to generate meaningful images," they said.

In addition, Nightshade poison samples can "bleed through" to related prompts. Poisoning the prompt "fantasy art," for example, also poisons the prompts "dragon" and "Michael Whelan," the name of a fantasy artist. Multiple Nightshade poison attacks can also be stacked in a single prompt, with cumulative effect; when enough of these attacks are deployed, they can collapse the image generation model's function altogether.

"Moving forward, it is possible poison attacks may have potential value as tools to encourage model trainers and content owners to negotiate a path towards licensed procurement of training data for future models," the researchers conclude.

To read and/or download the full abstract, visit this page.
