UIUC Study: AI Agents Can Exploit Cybersecurity Vulnerabilities

In a new study from the University of Illinois Urbana-Champaign (UIUC), researchers demonstrated that large language model (LLM) agents can autonomously exploit real-world cybersecurity vulnerabilities, raising critical concerns about the widespread deployment and security of these advanced AI systems.

The study, "LLM Agents Can Autonomously Exploit One-Day Vulnerabilities," conducted by Richard Fang, Rohan Bindu, Akul Gupta, and Daniel Kang, found that GPT-4, OpenAI's leading LLM, can successfully exploit 87% of one-day vulnerabilities when provided with their Common Vulnerabilities and Exposures (CVE) descriptions. (CVE is a public catalog of known security vulnerabilities.)

This constitutes a massive leap from the 0% success rate achieved by previous models and open-source vulnerability scanners, such as the ZAP web application scanner and the Metasploit penetration testing framework.

The researchers collected a dataset of 15 real-world, one-day vulnerabilities, including several categorized as critical severity in their CVE descriptions. When tested, GPT-4 exploited 87% of these vulnerabilities, while GPT-3.5 and the open-source LLMs tested failed to exploit any. Without the CVE descriptions, GPT-4's success rate plummeted to 7%, indicating that while GPT-4 is adept at exploiting known vulnerabilities, it struggles to identify them independently.

These findings are both impressive and concerning. The ability of LLM agents to autonomously exploit vulnerabilities poses a significant threat to cybersecurity. As AI models become more powerful, their potential misuse for malicious purposes becomes more likely. The study highlights the need for the cybersecurity community and AI developers to carefully consider the deployment and capabilities of these agents.

"We need to balance the incredible potential of these AI systems with the very real risks they pose," study co-author Kang said in a statement. "Our findings suggest that while GPT-4 can be a powerful tool for finding and exploiting vulnerabilities, it also underscores the need for robust safeguards and responsible deployment."

The study's authors call for more research into improving the planning and exploration capabilities of AI agents, as well as the development of more sophisticated defense mechanisms. Enhancing the security of AI systems and ensuring they are used ethically will be crucial in preventing potential misuse.

"Our work shows the dual-edged nature of these powerful AI tools," co-author Fang said. "While they hold great promise for advancing many fields, including cybersecurity, we must be vigilant about their potential for harm."

As LLMs continue to evolve, their capabilities will only increase. This study serves as a stark reminder of the need for careful oversight and ethical considerations in the development and deployment of these technologies. The cybersecurity community must stay ahead of potential threats by continuously improving defensive measures and fostering collaboration between researchers, developers, and policymakers.

The full report is available here.

About the Author

John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI, and future tech. He's been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he's written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at [email protected].
