NIST Proposes New Cybersecurity Guidelines for AI Systems

The National Institute of Standards and Technology (NIST) has announced plans to issue a new set of cybersecurity guidelines aimed at safeguarding artificial intelligence (AI) systems, citing rising concerns over risks tied to generative models, predictive analytics, and autonomous agents.

The concept paper outlines a framework called Control Overlays for Securing AI Systems (COSAIS), which adapts existing federal cybersecurity standards (SP 800-53) to address unique vulnerabilities in AI. NIST said the overlays will provide practical, implementation-focused security measures for organizations deploying AI technologies, from large language models to predictive decision-making systems.

"AI systems introduce risks that are distinct from traditional software, particularly around model integrity, training data security, and potential misuse," according to the concept paper. "By leveraging familiar SP 800-53 controls, COSAIS offers a technical foundation that organizations can adapt to AI-specific threats."

The initial overlays will cover five categories of use: generative AI applications such as chatbots and image generators; predictive AI systems used in business and finance; single-agent AI systems designed for automation; multi-agent AI systems; and secure software development practices for AI developers. Each overlay will address risks to model training, deployment, and outputs, with a focus on protecting data confidentiality, integrity, and availability.

The effort builds on NIST's existing AI Risk Management Framework and related guidelines on adversarial machine learning and dual-use foundation models. COSAIS will also complement the agency's work on a Cybersecurity Framework Profile for AI, ensuring consistency across risk management approaches.

NIST is inviting feedback from AI developers, cybersecurity professionals, and industry groups on the draft, including whether the proposed use cases capture real-world adoption patterns and how the overlays should be prioritized. The agency plans to release a public draft of the first overlay in fiscal year 2026, alongside a stakeholder workshop.

Interested parties can share feedback via e-mail or through a Slack channel dedicated to the project.

For more information, visit the NIST site.

About the Author

John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI and future tech. He's been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he's written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at [email protected].
