World Leaders Sign First Global AI Treaty

The United States, the United Kingdom, the European Union, and several other countries have signed the world's first legally binding treaty aimed at regulating the use of artificial intelligence (AI). "The Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law" was developed by the Council of Europe and opened for signature on September 5, 2024. The primary goal of the treaty is to ensure that AI systems are designed, developed, deployed, and decommissioned in ways that respect human rights, support democratic institutions, and uphold the rule of law.

"This first-of-a-kind treaty will ensure that the rise of Artificial Intelligence upholds Council of Europe legal standards in human rights, democracy and the rule of law," said Marija Pejčinović Burić, Secretary General of the Council of Europe, in a statement. "Its finalization by our Committee on Artificial Intelligence (CAI) is an extraordinary achievement and should be celebrated as such."

The treaty was created to mitigate risks while promoting responsible innovation by establishing regulations for AI systems and setting a global standard for transparency, safety, and accountability in AI use. It was adopted by the Council of Europe on May 17, 2024.

The treaty sets out a number of conditions, including:

  • Human-centric AI: AI systems must align with human rights principles and uphold democratic values.
  • Transparency and accountability: The treaty requires transparency in how AI systems operate, especially in cases where AI interacts with humans. Governments must also provide legal remedies if AI systems violate human rights.
  • Risk management and oversight: It establishes frameworks for managing risks posed by AI and sets up oversight mechanisms to ensure that AI systems comply with safety and ethical standards.
  • Protection against misuse: It includes safeguards to prevent AI from undermining democratic processes and institutions, such as judicial independence and public access to justice.

The treaty applies to all AI systems except those used for national security or defense, though it still requires that such activities respect international law and democratic principles. It will enter into force once it has been ratified by five signatories, and it builds on prior AI regulatory efforts, such as the EU AI Act. Other signatories include Israel, Norway, and Iceland.

Although the treaty emphasizes preventing AI from undermining democratic institutions, some critics argue that its broad principles may lack enforceability, particularly in areas such as national security, which are exempt from full scrutiny. Nonetheless, this treaty marks a significant step toward global AI governance.

How exactly would the treaty be enforced? It outlines several key enforcement mechanisms:

  • Legal Accountability: Countries that sign and ratify the treaty are required to adopt legislative and administrative measures to ensure AI systems comply with the treaty's principles. This includes protecting human rights and promoting transparency and accountability in AI deployment.
  • Monitoring and Oversight: The treaty introduces oversight mechanisms that monitor the adherence of AI systems to the established standards. However, critics have pointed out that the enforcement mechanism may largely rely on national governments monitoring their AI sectors, which may not always be consistent or effective.
  • Remedies for Violations: The treaty mandates that signatories provide legal remedies for individuals harmed by AI-related human rights violations. This could involve procedures for individuals to challenge AI decisions or seek compensation when AI systems cause harm.
  • International Cooperation: The treaty encourages collaboration between signatories to harmonize AI standards, share best practices, and address cross-border AI issues. This is crucial as AI technologies often transcend national borders.
  • Adaptability: The framework is designed to be technology-neutral, allowing it to evolve as AI systems develop over time. This adaptability is key to maintaining relevant and enforceable standards as AI technologies rapidly change.

Although these mechanisms create a structure for enforcement, their effectiveness remains to be seen, especially when considering the exceptions the treaty provides in areas such as national security.

The treaty was opened for signature at a conference of Council of Europe justice ministers held in Vilnius, the capital of Lithuania. It follows the final approval of the EU's Artificial Intelligence Act by the bloc's ministers just a few months earlier, a law aimed at regulating the use of AI in "high-risk" sectors.

The full text of the treaty can be found here.

About the Author

John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI and future tech. He's been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he's written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at [email protected].
