U.S. Department of Commerce Proposes Mandatory Reporting Requirement for AI, Cloud Providers

The United States Department of Commerce is proposing a new mandatory reporting requirement for AI developers and cloud providers. This proposed rule from the department's Bureau of Industry and Security (BIS) aims to enhance national security by establishing reporting requirements for the development of advanced AI models and computing clusters.

Specifically, BIS is asking for reporting on developmental activities, cybersecurity measures, and outcomes from red-teaming efforts, which involve testing AI models for dangerous capabilities, such as assisting in cyberattacks or enabling non-experts to develop weapons.

The rule is designed to help the Department of Commerce assess the defense-relevant capabilities of advanced AI systems and ensure they meet stringent safety and reliability standards. This initiative follows a pilot survey conducted earlier this year by BIS and aims to safeguard against potential abuses that could undermine global security, officials said.

"As AI is progressing rapidly, it holds both tremendous promise and risk," said Secretary of Commerce Gina M. Raimondo in a Sept. 9 news release. "This proposed rule would help us keep pace with new developments in AI technology to bolster our national defense and safeguard our national security."

Separately, under Memoranda of Understanding, the U.S. AI Safety Institute will gain access to new AI models from OpenAI and Anthropic before and after their public release. This collaboration aims to assess the capabilities and risks of these models and develop methods to mitigate potential safety concerns.

All of these efforts, emerging in such a short span of time, speak to the urgency with which governments, organizations, and industry leaders are moving to address AI regulation.

"The information collected through the proposed reporting requirement will be vital for ensuring these technologies meet stringent standards for safety and reliability, can withstand cyberattacks, and have limited risk of misuse by foreign adversaries or non-state actors, all of which are imperative for maintaining national defense and furthering America's technological leadership," the BIS news release said. "With this proposed rule, the United States continues to foster innovation while safeguarding against potential abuses that could undermine global security and stability."

About the Author

David Ramel is an editor and writer at Converge 360.
