NIST Proposes New Cybersecurity Guidelines for AI Systems

The National Institute of Standards and Technology (NIST) has announced plans to issue a new set of cybersecurity guidelines aimed at safeguarding artificial intelligence (AI) systems, citing rising concerns over risks tied to generative models, predictive analytics, and autonomous agents.

The concept paper outlines a framework called Control Overlays for Securing AI Systems (COSAIS), which adapts the agency's existing catalog of federal cybersecurity controls, Special Publication (SP) 800-53, to address vulnerabilities unique to AI. NIST said the overlays will provide practical, implementation-focused security measures for organizations deploying AI technologies, from large language models to predictive decision-making systems.

"AI systems introduce risks that are distinct from traditional software, particularly around model integrity, training data security, and potential misuse," according to the concept paper. "By leveraging familiar SP 800-53 controls, COSAIS offers a technical foundation that organizations can adapt to AI-specific threats."

The initial overlays will cover five categories of use: generative AI applications such as chatbots and image generators; predictive AI systems used in business and finance; single-agent AI systems designed for automation; multi-agent AI systems; and secure software development practices for AI developers. Each overlay will address risks to model training, deployment, and outputs, with a focus on protecting data confidentiality, integrity, and availability.

The effort builds on NIST's existing AI Risk Management Framework and related guidelines on adversarial machine learning and dual-use foundation models. COSAIS will also complement the agency's work on a Cybersecurity Framework Profile for AI, ensuring consistency across risk management approaches.

NIST is inviting feedback from AI developers, cybersecurity professionals, and industry groups on the draft, including whether the proposed use cases capture real-world adoption patterns and how the overlays should be prioritized. The agency plans to release a public draft of the first overlay in fiscal year 2026, alongside a stakeholder workshop.

Interested parties can share feedback via email or through a Slack channel dedicated to the project.

For more information, visit the NIST site.

About the Author

John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI, and future tech. He's been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he's written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at [email protected].
