U.S. Department of Commerce Proposes Mandatory Reporting Requirement for AI, Cloud Providers

The United States Department of Commerce is proposing a new mandatory reporting requirement for AI developers and cloud providers. This proposed rule from the department's Bureau of Industry and Security (BIS) aims to enhance national security by establishing reporting requirements for the development of advanced AI models and computing clusters.

Specifically, the BIS is asking for reporting on developmental activities, cybersecurity measures, and outcomes from red-teaming efforts, which involve testing AI models for dangerous capabilities, such as assisting in cyber attacks or enabling the development of weapons by non-experts.

The rule is designed to help the Department of Commerce assess the defense-relevant capabilities of advanced AI systems and ensure they meet stringent safety and reliability standards. This initiative follows a pilot survey conducted earlier this year by BIS and aims to safeguard against potential abuses that could undermine global security, officials said.

"As AI is progressing rapidly, it holds both tremendous promise and risk," said Secretary of Commerce Gina M. Raimondo in a Sept. 9 news release. "This proposed rule would help us keep pace with new developments in AI technology to bolster our national defense and safeguard our national security."

Under Memoranda of Understanding, the U.S. AI Safety Institute will gain access to new AI models from both companies before and after their public release. This collaboration aims to assess the capabilities and risks of these models and to develop methods for mitigating potential safety concerns.

All of these efforts, emerging in such a short span of time, speak to the urgency with which governments, organizations and industry leaders are moving to address AI regulation.

"The information collected through the proposed reporting requirement will be vital for ensuring these technologies meet stringent standards for safety and reliability, can withstand cyberattacks, and have limited risk of misuse by foreign adversaries or non-state actors, all of which are imperative for maintaining national defense and furthering America's technological leadership," the BIS news release said. "With this proposed rule, the United States continues to foster innovation while safeguarding against potential abuses that could undermine global security and stability."

About the Author

David Ramel is an editor and writer for Converge360.
