Cyber Espionage Campaign Exploits Claude Code Tool to Infiltrate Global Targets

Anthropic recently reported that attackers linked to China leveraged its Claude Code AI to carry out intrusions against about 30 global organizations. According to the San Francisco-based AI developer, the campaign occurred in mid-September and primarily targeted tech companies, financial firms, government agencies and chemical manufacturers.

"The threat actor — whom we assess with high confidence was a Chinese state-sponsored group — manipulated our Claude Code tool into attempting infiltration into roughly thirty global targets and succeeded in a small number of cases," said the company in a blog post.

The attackers reportedly began by manually selecting high-value targets and then used a jailbreak technique to circumvent Claude's security guardrails. Once activated, the model autonomously handled much of the operation, conducting reconnaissance, generating exploits, compromising credentials and facilitating data exfiltration.

Anthropic said it discovered the activity after internal monitoring flagged atypical use patterns. It subsequently disabled the affected accounts, notified relevant parties and worked with authorities to analyze the incident.

The disclosure reflects a growing concern in the cybersecurity community about the potential for advanced AI to accelerate or even automate sophisticated attacks, according to Anthropic.

"These attacks are likely to only grow in their effectiveness. To keep pace with this rapidly-advancing threat, we've expanded our detection capabilities and developed better classifiers to flag malicious activity. We're continually working on new methods of investigating and detecting large-scale, distributed attacks like this one."

In related research, Anthropic recently demonstrated how its Claude Sonnet 4.5 model can assist defenders by identifying vulnerabilities and improving patching workflows. But the company acknowledged that many of the same capabilities — especially agentic, autonomous operation — can also be put to malicious use.

Anthropic's proposed solution is for AI providers to prioritize safety from the outset of development. "While we will continue to invest in detecting and disrupting malicious attackers, we think the most scalable solution is to build AI systems that empower those safeguarding our digital environments — like security teams protecting businesses and governments, cybersecurity researchers and maintainers of critical open-source software."

Anthropic also stressed that safeguarding AI models and sharing threat intelligence across sectors will be critical to mitigating future misuse. For IT teams, the incident underscores the urgency of integrating AI-enabled defense systems into security operations.

For more information, read the Anthropic blog.

About the Author

Chris Paoli (@ChrisPaoli5) is the associate editor for Converge360.
