Microsoft Adds New Agentic AI Tools to Security Copilot

Microsoft has announced a major expansion of its AI-powered cybersecurity platform, introducing a suite of autonomous agents to help organizations counter rising threats and manage the growing complexity of cloud and AI security.

The update marks the next phase for Microsoft Security Copilot, launched a year ago, as the company adds 11 AI-powered agents to automate tasks like phishing detection, data protection, vulnerability management, and threat analysis. The move underscores Microsoft's strategy to use AI not only as a target for protection, but also as a frontline defense against increasingly sophisticated cyber attacks.

"With over 30 billion phishing e-mails detected in 2024 alone and cyber attacks now exceeding human capacity to respond, agent-based AI security has become an imperative," said Vasu Jakkal, corporate vice president for Microsoft's Security Group, in a blog post.

Six of the new AI agents are developed in-house and five are built by Microsoft's security partners, including OneTrust, Aviatrix, and Tanium. The tools will begin rolling out in preview starting April 2025.

"An agentic approach to privacy will be game-changing for the industry," said Blake Brannon, chief product and strategy officer, OneTrust, in a statement. "Autonomous AI agents will help our customers scale, augment, and increase the effectiveness of their privacy operations. Built using Microsoft Security Copilot, the OneTrust Privacy Breach Response Agent demonstrates how privacy teams can analyze and meet increasingly complex regulatory requirements in a fraction of the time required historically."

Among the new additions is a Phishing Triage Agent in Microsoft Defender, designed to filter and prioritize phishing alerts, providing explanations and improving with user feedback. Another, the Conditional Access Optimization Agent, monitors identity systems to spot policy gaps and recommend fixes. Microsoft is also debuting an AI-powered Threat Intelligence Briefing Agent that curates threat insights tailored to each organization's risk profile.

The release comes amid surging global interest in generative AI and a parallel rise in what Microsoft calls "shadow AI" — unauthorized AI use within organizations, often outside of IT oversight. Microsoft estimates that 57% of enterprises have seen an uptick in security incidents tied to AI, even as 60% admit they have not implemented adequate controls.

To address this, Microsoft is extending its AI security posture management across multiple clouds and models. Starting May 2025, Microsoft Defender will support AI security visibility across Azure, AWS, and Google Cloud, including models like OpenAI's GPT, Meta's Llama, and Google's Gemini.

Other new safeguards include browser-based data loss prevention (DLP) tools to block sensitive information from being entered into generative AI apps like ChatGPT and Google Gemini, as well as enhanced phishing protection in Microsoft Teams — long a target of e-mail-like attacks.

"The rise of AI has introduced new cyber risk vectors, but it's also our greatest ally," said Alexander Stojanovic, vice president of Microsoft Security AI Applied Research, in a statement. "This is just the beginning of what security agents can do."

For more information, visit the Microsoft blog.

About the Author

John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI, and future tech. He's been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he's written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at [email protected].
