University of Chicago Researchers Develop Technique to Poison Generative AI Image Scraping

Tool seen as a way to protect artists' copyright

Researchers at the University of Chicago have developed a technique that can "poison" generative text-to-image machine learning models such as Stable Diffusion XL and OpenAI's DALL·E when they scrape the internet for training images. And it can do so with as few as 100 poisoned images, they said.

The tool, dubbed Nightshade, has implications for publishers, filmmakers, museums, art departments, educators, and artists wanting to protect their works against generative AI companies violating their copyrights.

University of Chicago computer science department researchers Shawn Shan, Wenxin Ding, Josephine Passananti, Haitao Zheng, and Ben Y. Zhao have published their paper, "Prompt-Specific Poisoning Attacks on Text-to-Image Generative Models," for peer review.

Earlier this year, the same team released Glaze, a free, open source tool that allows image makers to "cloak" their works in a style different from their own, preventing an AI model from stealing the original image, the researchers said in an FAQ.

The poisoning attacks on generative AI are prompt-specific, the researchers said, targeting a model's ability to respond to individual prompts. Further, because a doctored image contains subtle, targeted changes to its pixels, it is nearly impossible to detect as different from the original, and thus to correct.
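The core idea of such an attack can be illustrated with a toy sketch (this is not the researchers' actual method, and the linear "feature extractor," sizes, and perturbation budget below are all invented for illustration): nudge an image's pixels within a small, imperceptible budget so that its extracted features drift toward a different target concept.

```python
# Toy data-poisoning sketch, assuming a frozen linear feature extractor W.
# An image captioned as one concept is perturbed so its features move toward
# a different target concept, while each pixel changes by at most `eps`.
import numpy as np

rng = np.random.default_rng(0)

D, F = 64, 8                      # toy pixel and feature dimensions
W = rng.normal(size=(F, D))       # stand-in for a frozen feature extractor

x = rng.uniform(0, 1, D)          # original image (e.g. captioned "dog")
target_feat = W @ rng.uniform(0, 1, D)  # features of a target concept ("cat")

eps = 0.05                        # per-pixel perturbation budget (keeps poison subtle)
delta = np.zeros(D)
for _ in range(500):
    feat = W @ (x + delta)
    grad = W.T @ (feat - target_feat)   # gradient of squared feature distance
    delta -= 0.001 * grad               # gradient step toward target features
    delta = np.clip(delta, -eps, eps)   # project back into the pixel budget

poisoned = np.clip(x + delta, 0, 1)

print("max pixel change:", np.abs(poisoned - x).max())
print("feature distance before:", np.linalg.norm(W @ x - target_feat))
print("feature distance after: ", np.linalg.norm(W @ poisoned - target_feat))
```

The projection step is what makes the poison hard to spot: to a human (or a simple filter), the image looks unchanged, while a model training on it learns features that point at the wrong concept.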

"Surprisingly, we show that a moderate number of Nightshade attacks can destabilize general features in a text-to-image generative model, effectively disabling its ability to generate meaningful images," they said.

In addition, Nightshade poison samples can "bleed through" to similar prompts. For example, poisoning the prompt "fantasy art" can also poison the prompts "dragon" and the name of fantasy artist Michael Whelan. Multiple Nightshade poisons can be stacked into a single prompt, with cumulative effect: when enough of these attacks are deployed, they can collapse the image generation model's function altogether.

"Moving forward, it is possible poison attacks may have potential value as tools to encourage model trainers and content owners to negotiate a path towards licensed procurement of training data for future models," the researchers conclude.
