Microsoft, OpenAI Shut Down State-Sponsored Hackers Using OpenAI LLMs

Microsoft and OpenAI have jointly shut down five state-sponsored hacking groups that were using OpenAI's LLMs "in support of malicious cyber activities," the companies announced in a blog post.

Working with OpenAI, Microsoft's Threat Intelligence group identified the following groups that were, as OpenAI put it in its own blog, using OpenAI technology "for querying open-source information, translating, finding coding errors, and running basic coding tasks":

  • Forest Blizzard, a Russian military-backed group known to target organizations related to Russia's ongoing war with Ukraine. Per Microsoft, "Forest Blizzard's use of LLMs has involved research into various satellite and radar technologies that may pertain to conventional military operations in Ukraine, as well as generic research aimed at supporting their cyber operations."
  • Emerald Sleet, a North Korean spear-phishing group known to impersonate universities and nonprofits to extract intelligence from foreign policy experts. The group used LLMs to research potential targets, as well as "to understand publicly known vulnerabilities, to troubleshoot technical issues, and for assistance with using various web technologies."
  • Crimson Sandstorm, an Iranian group that specializes in delivering .NET malware to targets in the defense, maritime shipping, health care and other industries. The group is known to use LLMs to "[request] support around social engineering, assistance in troubleshooting errors, .NET development, and ways in which an attacker might evade detection when on a compromised machine."
  • Charcoal Typhoon, a group affiliated with the Chinese government that has been known to target organizations and individuals that are deemed oppositional to Chinese government policies. This group has been found to use LLMs "to support tooling development, scripting, understanding various commodity cybersecurity tools, and for generating content that could be used to social engineer targets."
  • Salmon Typhoon, another group affiliated with China, known for disseminating malware to U.S. government agencies and defense contractors. In the past year, researchers observed this group using LLMs in an "exploratory" way, suggesting that "it is evaluating the effectiveness of LLMs in sourcing information on potentially sensitive topics, high profile individuals, regional geopolitics, US influence, and internal affairs."

OpenAI has disabled all accounts associated with these groups. However, while the groups raised red flags with Microsoft's research team, their activities had not amounted to major LLM-driven attacks. "Importantly, our research with OpenAI has not identified significant attacks employing the LLMs we monitor closely," according to Microsoft.

Moreover, OpenAI argued that its platform would not have given these groups a noteworthy advantage, even if their actions had led to material attacks. "GPT-4 offers only limited, incremental capabilities for malicious cybersecurity tasks beyond what is already achievable with publicly available, non-AI powered tools," it said.

Hallmarks of an LLM-Based Attack Strategy

Just as IT professionals are looking for ways to harden their security postures using AI, threat actors are turning to AI to facilitate and improve their attacks. As Microsoft notes, a typical malicious attack strategy requires reconnaissance, coding and proficiency with the targets' native languages -- all of which are tasks that can be expedited with AI.

Microsoft shared a list of nine common LLM-related attack tactics, techniques and procedures (TTPs) used by nation-state groups. They are as follows:

  • LLM-informed reconnaissance: Employing LLMs to gather actionable intelligence on technologies and potential vulnerabilities.
  • LLM-enhanced scripting techniques: Utilizing LLMs to generate or refine scripts that could be used in cyberattacks, or for basic scripting tasks such as programmatically identifying certain user events on a system and assistance with troubleshooting and understanding various web technologies.
  • LLM-aided development: Utilizing LLMs in the development lifecycle of tools and programs, including those with malicious intent, such as malware.
  • LLM-supported social engineering: Leveraging LLMs for assistance with translations and communication, likely to establish connections or manipulate targets.
  • LLM-assisted vulnerability research: Using LLMs to understand and identify potential vulnerabilities in software and systems, which could be targeted for exploitation.
  • LLM-optimized payload crafting: Using LLMs to assist in creating and refining payloads for deployment in cyberattacks.
  • LLM-enhanced anomaly detection evasion: Leveraging LLMs to develop methods that help malicious activities blend in with normal behavior or traffic to evade detection systems.
  • LLM-directed security feature bypass: Using LLMs to find ways to circumvent security features, such as two-factor authentication, CAPTCHA, or other access controls.
  • LLM-advised resource development: Using LLMs in tool development, tool modifications, and strategic operational planning.

More information is available in Microsoft's Cyber Signals report.
