World Leaders Sign First Global AI Treaty

The United States, the United Kingdom, the European Union, and several other countries have signed the world's first legally binding treaty aimed at regulating the use of artificial intelligence (AI). The Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law was developed by the Council of Europe and opened for signatures on September 5, 2024. The primary goal of the treaty is to ensure that AI systems are designed, developed, deployed, and decommissioned in ways that respect human rights, support democratic institutions, and uphold the rule of law.

"This first-of-a-kind treaty will ensure that the rise of Artificial Intelligence upholds Council of Europe legal standards in human rights, democracy and the rule of law," said Marija Pejčinović Burić, Secretary General of the Council of Europe, in a statement. "Its finalization by our Committee on Artificial Intelligence (CAI) is an extraordinary achievement and should be celebrated as such."

The treaty was created to mitigate risks while promoting responsible innovation, establishing regulations for AI systems and setting a global standard for transparency, safety, and accountability in AI use. It was adopted by the Council of Europe on May 17, 2024.

The treaty sets out a number of conditions, including:

  • Human-centric AI: AI systems must align with human rights principles and uphold democratic values.
  • Transparency and accountability: The treaty requires transparency in how AI systems operate, especially in cases where AI interacts with humans. Governments must also provide legal remedies if AI systems violate human rights.
  • Risk management and oversight: It establishes frameworks for managing risks posed by AI and sets up oversight mechanisms to ensure that AI systems comply with safety and ethical standards.
  • Protection against misuse: It includes safeguards to prevent AI from undermining democratic processes and institutions, such as judicial independence and public access to justice.

The treaty applies to all AI systems except those used for national security or defense, though it still requires that such activities respect international law and democratic principles. The treaty must be ratified by five signatories before it enters into force, and it builds on prior AI regulatory efforts, such as the EU AI Act. Other signatories include Israel, Norway, and Iceland.

Although the treaty emphasizes preventing AI from undermining democratic institutions, some critics argue that its broad principles may lack enforceability, particularly in areas such as national security, which are exempt from full scrutiny. Nonetheless, this treaty marks a significant step toward global AI governance.

How exactly would the treaty be enforced? It outlines several key enforcement mechanisms:

  • Legal Accountability: Countries that sign and ratify the treaty are required to adopt legislative and administrative measures to ensure AI systems comply with the treaty's principles. This includes protecting human rights and promoting transparency and accountability in AI deployment.
  • Monitoring and Oversight: The treaty introduces oversight mechanisms that monitor the adherence of AI systems to the established standards. However, critics have pointed out that the enforcement mechanism may largely rely on national governments monitoring their AI sectors, which may not always be consistent or effective.
  • Remedies for Violations: The treaty mandates that signatories provide legal remedies for individuals harmed by AI-related human rights violations. This could involve procedures for individuals to challenge AI decisions or seek compensation when AI systems cause harm.
  • International Cooperation: The treaty encourages collaboration between signatories to harmonize AI standards, share best practices, and address cross-border AI issues. This is crucial as AI technologies often transcend national borders.
  • Adaptability: The framework is designed to be technology-neutral, allowing it to evolve as AI systems develop over time. This adaptability is key to maintaining relevant and enforceable standards as AI technologies rapidly change.

Although these mechanisms create a structure for enforcement, their effectiveness remains to be seen, especially when considering the exceptions the treaty provides in areas such as national security.

The treaty was made available for signature at a conference of Council of Europe justice ministers held in Vilnius, the capital of Lithuania. It follows the final approval of the EU's Artificial Intelligence Act by the bloc's ministers just a few months earlier, which aimed to regulate the use of AI in "high-risk" sectors.

The full text of the treaty can be found here.

About the Author

John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI and future tech. He's been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he's written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at [email protected].
