Microsoft, OpenAI Shut Down State-Sponsored Hackers Using OpenAI LLMs

Microsoft and OpenAI have jointly shut down five state-sponsored hacking groups that were using OpenAI's LLMs "in support of malicious cyber activities," the companies announced in a blog post.

Working with OpenAI, Microsoft's Threat Intelligence group identified the following groups that were, as OpenAI put it in its own blog, using OpenAI technology "for querying open-source information, translating, finding coding errors, and running basic coding tasks":

  • Forest Blizzard, a Russian military-backed group known to target organizations related to Russia's ongoing war with Ukraine. Per Microsoft, "Forest Blizzard's use of LLMs has involved research into various satellite and radar technologies that may pertain to conventional military operations in Ukraine, as well as generic research aimed at supporting their cyber operations."
  • Emerald Sleet, a North Korean spear-phishing group known to impersonate universities and nonprofits to extract intelligence from foreign policy experts. The group used LLMs to research potential targets, as well as "to understand publicly known vulnerabilities, to troubleshoot technical issues, and for assistance with using various web technologies."
  • Crimson Sandstorm, an Iranian group that specializes in delivering .NET malware to targets in the defense, maritime shipping, health care and other industries. The group is known to use LLMs to "[request] support around social engineering, assistance in troubleshooting errors, .NET development, and ways in which an attacker might evade detection when on a compromised machine."
  • Charcoal Typhoon, a group affiliated with the Chinese government that has been known to target organizations and individuals that are deemed oppositional to Chinese government policies. This group has been found to use LLMs "to support tooling development, scripting, understanding various commodity cybersecurity tools, and for generating content that could be used to social engineer targets."
  • Salmon Typhoon, also affiliated with China, is known to be proficient at disseminating malware to U.S. government agencies and defense contractors. In the past year, researchers observed this group using LLMs in an "exploratory" way, suggesting that "it is evaluating the effectiveness of LLMs in sourcing information on potentially sensitive topics, high profile individuals, regional geopolitics, US influence, and internal affairs."

OpenAI has disabled all accounts related to each of these groups. However, while the groups raised red flags with Microsoft's research team, their activities hadn't actually amounted to major LLM-driven attacks. "Importantly, our research with OpenAI has not identified significant attacks employing the LLMs we monitor closely," according to Microsoft.

Moreover, OpenAI argued that its platform would not have given these groups a noteworthy advantage, even if their actions had led to material attacks. "GPT-4 offers only limited, incremental capabilities for malicious cybersecurity tasks beyond what is already achievable with publicly available, non-AI powered tools," it said.

Hallmarks of an LLM-Based Attack Strategy

Just as IT professionals are looking for ways to harden their security postures using AI, threat actors are turning to AI to facilitate and improve their attacks. As Microsoft notes, a typical malicious attack strategy requires reconnaissance, coding, and proficiency in the targets' native languages -- all tasks that AI can expedite.

Microsoft shared a list of nine common LLM-related attack tactics, techniques and procedures (TTPs) used by nation-state groups. They are as follows:

  • LLM-informed reconnaissance: Employing LLMs to gather actionable intelligence on technologies and potential vulnerabilities.
  • LLM-enhanced scripting techniques: Utilizing LLMs to generate or refine scripts that could be used in cyberattacks, or for basic scripting tasks such as programmatically identifying certain user events on a system and assistance with troubleshooting and understanding various web technologies.
  • LLM-aided development: Utilizing LLMs in the development lifecycle of tools and programs, including those with malicious intent, such as malware.
  • LLM-supported social engineering: Leveraging LLMs for assistance with translations and communication, likely to establish connections or manipulate targets.
  • LLM-assisted vulnerability research: Using LLMs to understand and identify potential vulnerabilities in software and systems, which could be targeted for exploitation.
  • LLM-optimized payload crafting: Using LLMs to assist in creating and refining payloads for deployment in cyberattacks.
  • LLM-enhanced anomaly detection evasion: Leveraging LLMs to develop methods that help malicious activities blend in with normal behavior or traffic to evade detection systems.
  • LLM-directed security feature bypass: Using LLMs to find ways to circumvent security features, such as two-factor authentication, CAPTCHA, or other access controls.
  • LLM-advised resource development: Using LLMs in tool development, tool modifications, and strategic operational planning.

More information is available in Microsoft's Cyber Signals report.
