University of Chicago Researchers Develop Technique to Poison Generative AI Image Scraping

Tool seen as a way to protect artists' copyright

Researchers at the University of Chicago have developed a technique that can "poison" generative text-to-image machine learning models such as Stable Diffusion XL and OpenAI's DALL-E when they scrape the internet for training images. The attack requires as few as 100 poisoned images, they said.

The tool, dubbed Nightshade, has implications for publishers, filmmakers, museums, art departments, educators, and artists wanting to protect their works against generative AI companies violating their copyrights.

University of Chicago computer science department researchers Shawn Shan, Wenxin Ding, Josephine Passananti, Haitao Zheng, and Ben Y. Zhao have published their paper, "Prompt-Specific Poisoning Attacks on Text-to-Image Generative Models," for peer review.

Earlier this year, the same team released Glaze, free open source software that allows image makers to "cloak" their works in a style different from their own, preventing an AI from stealing the original image, researchers said in an FAQ.

The poisoning attacks on generative AI are prompt-specific, researchers said, targeting a model's ability to respond to individual prompts. Further, because a doctored image contains specific but randomly placed poisoned pixels, it is nearly impossible to detect as different from the original, and thus to correct.

"Surprisingly, we show that a moderate number of Nightshade attacks can destabilize general features in a text-to-image generative model, effectively disabling its ability to generate meaningful images," they said.

In addition, Nightshade poison samples can "bleed through" to related prompts. For example, poisoning the prompt "fantasy art" can also poison the prompts "dragon" and "Michael Whelan," a fantasy artist. Multiple Nightshade attacks can be stacked in a single prompt, with cumulative effect; when enough of these attacks are deployed, they can collapse the image generation model's function altogether.
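The prompt-specific behavior described above can be illustrated with a toy sketch. This is not the researchers' code or method; it simply shows, under illustrative assumptions, how a small number of mislabeled text/image pairs slipped into a much larger training set can flip what a naive "model" associates with one prompt while leaving other prompts untouched. All names here ("dog", "cat_image", etc.) are hypothetical.

```python
# Conceptual sketch of prompt-specific data poisoning (illustrative only,
# not Nightshade itself). Poison samples pair a prompt with imagery of a
# different concept; a model trained on the mix learns the wrong
# association for that one prompt.
from collections import Counter

def train_concept_map(samples):
    """Toy 'model': for each prompt, remember the most common image label."""
    by_prompt = {}
    for prompt, image_label in samples:
        by_prompt.setdefault(prompt, Counter())[image_label] += 1
    return {p: c.most_common(1)[0][0] for p, c in by_prompt.items()}

# Clean data: prompts paired with matching imagery.
clean = [("dog", "dog_image")] * 50 + [("castle", "castle_image")] * 500

# Poison: ~100 samples captioned "dog" but carrying imagery of another
# concept -- a small fraction of the overall training set.
poison = [("dog", "cat_image")] * 100

model = train_concept_map(clean + poison)
print(model["dog"])     # the targeted prompt now maps to "cat_image"
print(model["castle"])  # unrelated prompts are unaffected
```

A real diffusion model learns continuous features rather than a lookup table, but the principle is the same: the attack needs only enough poisoned samples to outweigh the clean signal for the targeted concept, not for the whole dataset.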

"Moving forward, it is possible poison attacks may have potential value as tools to encourage model trainers and content owners to negotiate a path towards licensed procurement of training data for future models," the researchers conclude.

To read or download the full paper, visit this page.
