Microsoft Adds New Agentic AI Tools to Security Copilot

Microsoft has announced a major expansion of its AI-powered cybersecurity platform, introducing a suite of autonomous agents to help organizations counter rising threats and manage the growing complexity of cloud and AI security.

The update marks the next phase for Microsoft Security Copilot, launched a year ago, as the company adds 11 AI-powered agents to automate tasks like phishing detection, data protection, vulnerability management, and threat analysis. The move underscores Microsoft's strategy to use AI not only as a target for protection, but also as a frontline defense against increasingly sophisticated cyber attacks.

"With over 30 billion phishing e-mails detected in 2024 alone and cyber attacks now exceeding human capacity to respond, agent-based AI security has become an imperative," said Vasu Jakkal, corporate vice president for Microsoft's Security Group, in a blog post.

Six of the new AI agents are developed in-house and five are built by Microsoft's security partners, including OneTrust, Aviatrix, and Tanium. The tools will begin rolling out in preview starting April 2025.

"An agentic approach to privacy will be game-changing for the industry," said Blake Brannon, chief product and strategy officer, OneTrust, in a statement. "Autonomous AI agents will help our customers scale, augment, and increase the effectiveness of their privacy operations. Built using Microsoft Security Copilot, the OneTrust Privacy Breach Response Agent demonstrates how privacy teams can analyze and meet increasingly complex regulatory requirements in a fraction of the time required historically."

Among the new additions is a Phishing Triage Agent in Microsoft Defender, designed to filter and prioritize phishing alerts, providing explanations and improving with user feedback. Another, the Conditional Access Optimization Agent, monitors identity systems to spot policy gaps and recommend fixes. Microsoft is also debuting an AI-powered Threat Intelligence Briefing Agent that curates threat insights tailored to each organization's risk profile.

The release comes amid surging global interest in generative AI and a parallel rise in what Microsoft calls "shadow AI" — unauthorized AI use within organizations, often outside of IT oversight. Microsoft estimates that 57% of enterprises have seen an uptick in security incidents tied to AI, even as 60% admit they have not implemented adequate controls.

To address this, Microsoft is extending its AI security posture management across multiple clouds and models. Starting May 2025, Microsoft Defender will support AI security visibility across Azure, AWS, and Google Cloud, including models like OpenAI's GPT, Meta's Llama, and Google's Gemini.

Other new safeguards include browser-based data loss prevention (DLP) tools to block sensitive information from being entered into generative AI apps like ChatGPT and Google Gemini, as well as enhanced phishing protection in Microsoft Teams — long a target of e-mail-like attacks.

"The rise of AI has introduced new cyber risk vectors, but it's also our greatest ally," said Alexander Stojanovic, vice president of Microsoft Security AI Applied Research, in a statement. "This is just the beginning of what security agents can do."

For more information, visit the Microsoft blog.

About the Author

John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI and future tech. He's been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he's written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at [email protected].
