Western Governors University has announced the acquisition of Craft Education, an ed tech platform for managing job training aligned with degree programs and work-ready skills.
Seventy-seven percent of institutions across K-12 and higher education have uncovered a cyber attack on their infrastructure within the past 12 months, according to a new survey from cybersecurity company Netwrix.
A proposed rule from the U.S. Department of Commerce's Bureau of Industry and Security aims to enhance national security by establishing reporting requirements for the development of advanced AI models and computing clusters.
California lawmakers have overwhelmingly approved a bill that would impose new restrictions on AI technologies, potentially setting a national precedent for regulating the rapidly evolving field. The legislation, known as S.B. 1047, now heads to Governor Gavin Newsom's desk. He has until the end of September to decide whether to sign it into law.
The United States, the United Kingdom, the European Union, and several other countries have signed "The Framework Convention on Artificial Intelligence, Human Rights, Democracy, and the Rule of Law," the world's first legally binding treaty aimed at regulating the use of artificial intelligence (AI).
The U.S. AI Safety Institute, part of the National Institute of Standards and Technology (NIST), has formalized agreements with AI companies Anthropic and OpenAI to collaborate on AI safety research, testing, and evaluation.
In a recent survey from the Digital Education Council, a global alliance of universities and industry representatives focused on education innovation, the majority of students (86%) said they use artificial intelligence in their studies.
The Cloud Security Alliance (CSA), a not-for-profit organization dedicated to defining and raising awareness of best practices for secure cloud computing, has released a new report offering guidance on securing systems that leverage large language models (LLMs) to address business challenges.
ChatGPT creator OpenAI is backing a California bill that would require tech companies to label AI-generated content in the form of a digital "watermark." The proposed legislation, known as the "California Digital Content Provenance Standards" (AB 3211), aims to ensure transparency in digital media by identifying content created through artificial intelligence. This requirement would apply to a broad range of AI-generated material, from harmless memes to deepfakes that could be used to spread misinformation about political candidates.
Anthropic has announced its support for an amended version of California’s Senate Bill 1047 (SB 1047), the "Safe and Secure Innovation for Frontier Artificial Intelligence Models Act," citing revisions to the bill that the company helped shape, though it voiced some reservations.