OpenAI Report Identifies Malicious Use of AI in Cloud-Based Cyber Threats

A report from OpenAI identifies the misuse of artificial intelligence in cybercrime, social engineering, and influence operations, particularly those targeting or operating through cloud infrastructure. In "Disrupting Malicious Uses of AI: June 2025," the company outlines how threat actors are weaponizing large language models for malicious ends — and how OpenAI is pushing back.

The report highlights a growing reliance on AI by adversaries to scale scams, automate phishing and deploy tailored misinformation across platforms like Telegram, TikTok and Facebook. OpenAI says it is countering these threats using its own AI systems alongside human analysts, while coordinating with cloud providers and global security partners to take action against offenders.

In the three months since its previous update, the company says it has detected and disrupted activity including:

  • Cyber operations targeting cloud-based infrastructure and software.
  • Social engineering and scams scaling through AI-assisted content creation.
  • Influence operations attempting to manipulate public discourse using AI-generated posts on platforms like X, TikTok, Telegram and Facebook.

The report details 10 case studies where OpenAI banned user accounts and shared findings with industry partners and authorities to strengthen collective defenses.

Here's how the company detailed the tactics, techniques, and procedures (TTPs) in one representative case: a North Korea-linked job scam operation that used ChatGPT to generate fake résumés and spoof interviews. Each activity is mapped to an LLM ATT&CK framework category:

  • Automating the systematic fabrication of detailed résumés aligned to various tech job descriptions, personas, and industry norms, generating consistent work histories, educational backgrounds, and references via looping scripts. (LLM Supported Social Engineering)
  • Using the model to answer likely employment application questions, coding assignments, and real-time interview questions based on particular uploaded résumés. (LLM Supported Social Engineering)
  • Seeking guidance on remotely configuring corporate-issued laptops to appear domestically located, including advice on geolocation masking and endpoint security evasion methods. (LLM-Enhanced Anomaly Detection Evasion)
  • Using LLM-assisted coding to build tools that move the mouse automatically or keep a computer awake remotely, possibly to support remote-working infrastructure setups; a sketch of this kind of utility follows the list. (LLM Aided Development)
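The report doesn't reproduce the actors' code, but the last category describes commodity "keep-awake" tooling. As a purely illustrative sketch, assuming the third-party pyautogui package (nothing here is recovered from the campaign), a utility of that kind can be as simple as:

```python
# Illustrative sketch only: a trivial "keep the computer awake" utility of the
# kind the report describes. Assumes the third-party pyautogui package
# (pip install pyautogui); this is NOT code recovered from the campaign.
import time

import pyautogui

JIGGLE_INTERVAL_SECONDS = 60  # how often to nudge the cursor


def keep_awake() -> None:
    """Nudge the mouse one pixel back and forth so the OS registers activity."""
    while True:
        pyautogui.moveRel(1, 0)   # move cursor 1 px right...
        pyautogui.moveRel(-1, 0)  # ...and back, leaving it where it started
        time.sleep(JIGGLE_INTERVAL_SECONDS)


if __name__ == "__main__":
    keep_awake()
```

The simplicity is the point: endpoint monitoring can flag exactly this signature, since perfectly periodic, pixel-scale input events are something no human produces.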

Beyond the employment scam case, OpenAI's report outlines multiple campaigns involving threat actors abusing AI in cloud-centric and infrastructure-based attacks.

Cloud-Centric Threat Activity

Many of the campaigns OpenAI disrupted either targeted cloud environments or used cloud-based platforms to scale their impact:

  • A Russian-speaking group (Operation ScopeCreep) used ChatGPT to assist in the iterative development of sophisticated Windows malware, distributed via a trojanized gaming tool. The campaign leveraged cloud-based GitHub repositories for malware distribution and used Telegram-based command-and-control (C2) channels.
  • Chinese-linked groups (KEYHOLE PANDA and VIXEN PANDA) used ChatGPT to support AI-driven penetration testing, credential harvesting, network reconnaissance, and automation of social media influence. Their targets included US federal defense industry networks and government communications systems.
  • An operation dubbed Uncle Spam, also linked to China, generated polarizing US political content using AI and pushed it via social media profiles on X and Bluesky.
  • Wrong Number, likely based in Cambodia, used AI-generated multilingual content to run task scams via SMS, WhatsApp, and Telegram, luring victims into cloud-based crypto payment schemes.
    [Image: SMS randomly sent to an OpenAI investigator, generated using ChatGPT (source: OpenAI).]

Defensive AI in Action

OpenAI says it is using AI as a "force multiplier" for its investigative teams, enabling it to detect abusive activity at scale. The report also highlights a paradox: threat actors' reliance on AI models can expose them, because their prompts and workflows give investigators visibility into what they are attempting.
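OpenAI doesn't publish its internal investigative tooling, but the general pattern of AI-assisted abuse triage can be sketched with its public moderation endpoint. The snippet below is a minimal, hypothetical example, assuming the official openai Python SDK and an OPENAI_API_KEY environment variable; it flags suspect message text for human review rather than replacing the analyst:

```python
# Minimal sketch of AI-assisted abuse triage using OpenAI's public moderation
# endpoint. Assumes the official "openai" Python SDK (pip install openai) and
# an OPENAI_API_KEY environment variable; this is illustrative only and is not
# OpenAI's internal investigative tooling.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def triage(messages: list[str]) -> list[str]:
    """Return the messages the moderation model flags for analyst review."""
    flagged = []
    for text in messages:
        result = client.moderations.create(
            model="omni-moderation-latest",  # current public moderation model
            input=text,
        )
        if result.results[0].flagged:
            flagged.append(text)
    return flagged


if __name__ == "__main__":
    suspects = [
        "Earn $500 a day liking videos! Send a small crypto deposit to start.",
        "Hi, just confirming our meeting at 3pm tomorrow.",
    ]
    for msg in triage(suspects):
        print("Escalate to analyst:", msg)
```

The division of labor mirrors what the report describes: the model does the scale work of screening high message volumes, while disruption decisions stay with human investigators.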

"AI investigations are an evolving discipline," the report notes. "Every operation we disrupt gives us a better understanding of how threat actors are trying to abuse our models, and enables us to refine our defenses."

The company calls for continued collaboration across the industry to strengthen defenses, noting that AI is only one part of the broader internet security ecosystem.

For cloud architects, platform engineers and security professionals, the report is a useful read. It illustrates not only how attackers are using AI to speed up traditional tactics, but also how cloud-based services are central both to their targets and to the infrastructure of modern threat campaigns.

The full report, "Disrupting Malicious Uses of AI: June 2025," is available on the OpenAI site.

About the Author

David Ramel is an editor and writer at Converge 360.
