Cyber Espionage Campaign Exploits Claude Code Tool to Infiltrate Global Targets

Anthropic recently reported that attackers linked to China leveraged its Claude Code AI tool to carry out intrusions against about 30 global organizations. According to the San Francisco-based AI developer, the campaign occurred in mid-September and primarily targeted tech companies, financial firms, government agencies and chemical manufacturers.

"The threat actor — whom we assess with high confidence was a Chinese state-sponsored group — manipulated our Claude Code tool into attempting infiltration into roughly thirty global targets and succeeded in a small number of cases," said the company in a blog post.

The attackers reportedly began by manually selecting high-value targets and then used a jailbreak technique to circumvent Claude's security guardrails. Once activated, the model autonomously handled much of the operation, conducting reconnaissance, generating exploits, compromising credentials and facilitating data exfiltration.

Anthropic said it discovered the activity after internal monitoring flagged atypical use patterns. It subsequently disabled the affected accounts, notified relevant parties and worked with authorities to analyze the incident.

The disclosure reflects a growing concern in the cybersecurity community about the potential for advanced AI to accelerate or even automate sophisticated attacks, according to Anthropic.

"These attacks are likely to only grow in their effectiveness," the company added. "To keep pace with this rapidly advancing threat, we've expanded our detection capabilities and developed better classifiers to flag malicious activity. We're continually working on new methods of investigating and detecting large-scale, distributed attacks like this one."

In related research, Anthropic recently demonstrated how its Claude Sonnet 4.5 model can assist defenders by identifying vulnerabilities and improving patching workflows. But the company acknowledged that many of the same capabilities — especially AI-driven agency — can also be used for malicious activities.

Anthropic's proposed solution is for AI companies and service providers to prioritize safety from the outset of development. "While we will continue to invest in detecting and disrupting malicious attackers, we think the most scalable solution is to build AI systems that empower those safeguarding our digital environments — like security teams protecting businesses and governments, cybersecurity researchers and maintainers of critical open-source software."

Anthropic also stressed that safeguarding AI models and sharing threat intelligence across sectors will be critical to mitigating future misuse. For IT teams, the incident underscores the urgency of integrating AI-enabled defense systems into security operations.

For more information, read the Anthropic blog.

About the Author

Chris Paoli (@ChrisPaoli5) is the associate editor for Converge360.
