University of Chicago Researchers Develop Technique to Poison Generative AI Image Scraping

Tool seen as a way to protect artists' copyright

Researchers at the University of Chicago have developed a technique that can "poison" generative text-to-image machine learning models such as Stable Diffusion XL and OpenAI's DALL-E when they scrape the internet for training images. And it can do so with as few as 100 poisoned images, they said.

The tool, dubbed Nightshade, has implications for publishers, filmmakers, museums, art departments, educators, and artists wanting to protect their works against generative AI companies violating their copyrights.

University of Chicago computer science department researchers Shawn Shan, Wenxin Ding, Josephine Passananti, Haitao Zheng, and Ben Y. Zhao have published their paper, "Prompt-Specific Poisoning Attacks on Text-to-Image Generative Models," for peer review.

Earlier this year, the same team released Glaze, a free, open-source tool that allows image makers to "cloak" their works in a style different from their own, preventing an AI from stealing the original image, researchers said in an FAQ.

The poisoning attacks on generative AI are prompt-specific, researchers said, targeting a model's ability to respond to individual prompts. Further, because a doctored image contains specific but randomly placed poisoned pixels, it is nearly impossible to detect as different from the original, and thus to correct.
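The paper details the researchers' actual optimization; purely as an illustration of the general idea described above, the sketch below perturbs an image within a small per-pixel budget so that its feature representation drifts toward a different "anchor" concept while the picture stays visually unchanged. The encoder, file paths, and hyperparameters (epsilon, steps, lr) are stand-ins chosen for this example, not Nightshade's actual components.

```python
# Illustrative sketch only -- NOT the Nightshade implementation.
# Idea: keep pixel changes tiny (imperceptible) while pulling the image's
# features toward those of a different "anchor" concept image.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

# A generic pretrained encoder stands in for the generative model's own
# feature extractor, which the real attack targets.
encoder = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
encoder.fc = torch.nn.Identity()  # use penultimate-layer features
encoder.eval().to(device)

preprocess = T.Compose([T.Resize((224, 224)), T.ToTensor()])

def poison(original_path: str, anchor_path: str,
           epsilon: float = 8 / 255, steps: int = 200, lr: float = 1e-2):
    """Return a perturbed copy of the original image whose features move
    toward the anchor image's, with each pixel changed by at most epsilon."""
    x = preprocess(Image.open(original_path).convert("RGB")).unsqueeze(0).to(device)
    anchor = preprocess(Image.open(anchor_path).convert("RGB")).unsqueeze(0).to(device)

    with torch.no_grad():
        target_feat = encoder(anchor)

    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)

    for _ in range(steps):
        opt.zero_grad()
        feat = encoder((x + delta).clamp(0, 1))
        loss = torch.nn.functional.mse_loss(feat, target_feat)
        loss.backward()
        opt.step()
        # Constrain the perturbation so the image still looks unchanged.
        with torch.no_grad():
            delta.clamp_(-epsilon, epsilon)

    return (x + delta).clamp(0, 1).detach().cpu()
```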

"Surprisingly, we show that a moderate number of Nightshade attacks can destabilize general features in a text-to-image generative model, effectively disabling its ability to generate meaningful images," they said.

In addition, Nightshade poison samples can "bleed through" to similar prompts. For example, poisoning the prompt "fantasy art" can also poison the prompts "dragon" and "Michael Whelan" (a fantasy artist). Multiple Nightshade poison prompts can be stacked into a single prompt, with cumulative effect: when enough of these attacks are deployed, they can collapse the image generation model's function altogether.

"Moving forward, it is possible poison attacks may have potential value as tools to encourage model trainers and content owners to negotiate a path towards licensed procurement of training data for future models," the researchers conclude.

To read and/or download the full abstract, visit this page.
