EU Parliament Passes Major AI Regulation Law

The European Union (EU) Parliament has formally passed the Artificial Intelligence Act, a regulation establishing comprehensive rules for trustworthy AI systems.

The law is seen as the world's first major legislative framework for classifying products and services that use generative AI based on risk and security.

"The AI Act has nudged the future of AI in a human-centric direction, in a direction where humans are in control of the technology, and where it — the technology — helps us leverage new discoveries, economic growth, societal progress and unlock human potential," said Dragos Tudorache, a Romanian lawmaker, before the vote on social media.

The AI Act has been crafted with key objectives at its core. Its primary aim is to protect essential freedoms and ensure the safety of users by setting rigorous standards for AI systems deemed high risk. This category encompasses use cases in sectors like healthcare, law enforcement, and critical infrastructure, where AI technologies could have an outsized impact on people's safety and rights.

Products and services deploying AI technologies will be placed in one of four risk tiers: "minimal," "limited," "high" or "unacceptable." The AI Act bans outright those rated unacceptable, such as social scoring systems, emotion recognition systems and predictive policing.
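
For teams wondering where a given system might land, the short Python sketch below pictures the tiering as a simple lookup. It is purely illustrative: the example use cases and their assigned tiers are assumptions made for demonstration, and real classification depends on the Act's annexes and legal review.

    from enum import Enum

    class RiskTier(Enum):
        """The Act's four risk tiers (names paraphrased for illustration)."""
        MINIMAL = "minimal"
        LIMITED = "limited"
        HIGH = "high"
        UNACCEPTABLE = "unacceptable"

    # Hypothetical mapping of example use cases to tiers; actual classification
    # follows the regulation's annexes, not this dictionary.
    EXAMPLE_CLASSIFICATION = {
        "spam filtering": RiskTier.MINIMAL,
        "customer service chatbot": RiskTier.LIMITED,
        "resume screening": RiskTier.HIGH,
        "social scoring": RiskTier.UNACCEPTABLE,
        "predictive policing": RiskTier.UNACCEPTABLE,
    }

    def is_banned(use_case: str) -> bool:
        """True when the illustrative mapping marks a use case as unacceptable."""
        return EXAMPLE_CLASSIFICATION.get(use_case, RiskTier.MINIMAL) is RiskTier.UNACCEPTABLE

    for case, tier in EXAMPLE_CLASSIFICATION.items():
        flag = " (banned)" if is_banned(case) else ""
        print(f"{case}: {tier.value}{flag}")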

Furthermore, the legislation aims to promote innovation and confidence in AI solutions, positioning European organizations as formidable players in AI development.

For businesses and developers, the AI Act will bring new considerations when using and deploying generative AI. Companies deploying AI systems classified as high risk will need to implement comprehensive risk assessment and mitigation strategies, maintain detailed documentation, and ensure transparency and accountability. These requirements aim to build public trust in AI systems by making their decisions understandable, traceable, and challengeable by individuals.
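
As a rough mental model of those obligations (not legal guidance), the sketch below gathers the requirements named above into a hypothetical checklist; the field names are assumptions chosen for illustration rather than terms drawn from the regulation.

    from dataclasses import dataclass, fields

    @dataclass
    class HighRiskComplianceChecklist:
        """Hypothetical checklist mirroring the obligations described above."""
        risk_assessment_completed: bool = False
        mitigation_plan_in_place: bool = False
        technical_documentation_maintained: bool = False
        decisions_traceable_and_explainable: bool = False
        human_oversight_defined: bool = False

    def outstanding_items(checklist: HighRiskComplianceChecklist) -> list[str]:
        """Return the names of obligations not yet marked as satisfied."""
        return [f.name for f in fields(checklist) if not getattr(checklist, f.name)]

    status = HighRiskComplianceChecklist(risk_assessment_completed=True)
    print("Outstanding:", outstanding_items(status))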

Next, the EU and its member states will begin implementing the new law. First, the AI Act will officially become law in the next two to three months, pending final formalities. Within the first six months after that, individual member states must ban products and services deemed unacceptable. Finally, agreed-upon rules for general-purpose AI products and services will start applying one year after the law has been formally adopted.
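
To make the staggered deadlines concrete, the sketch below computes the milestones from a placeholder entry-into-force date; the date is an assumption, and the month offsets simply restate the six-month and one-year windows described above.

    from datetime import date

    def add_months(start: date, months: int) -> date:
        """Shift a date forward by whole months, clamping to the 1st for simplicity."""
        month_index = start.month - 1 + months
        return date(start.year + month_index // 12, month_index % 12 + 1, 1)

    # Placeholder entry-into-force date; the real date depends on publication
    # in the EU's Official Journal.
    ENTRY_INTO_FORCE = date(2024, 8, 1)

    milestones = {
        "Bans on unacceptable-risk systems apply": add_months(ENTRY_INTO_FORCE, 6),
        "Rules for general-purpose AI apply": add_months(ENTRY_INTO_FORCE, 12),
    }

    for label, deadline in milestones.items():
        print(f"{label}: {deadline.isoformat()}")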

For its part, the EU will establish the AI Office, a central governing body headquartered in Brussels, with each member state creating its own watchdog agency to facilitate communication between regulators and the public.

"Thanks to Parliament, unacceptable AI practices will be banned in Europe and the rights of workers and citizens will be protected," said the Internal Market Committee co-rapporteur Brando Benifei. "The AI Office will now be set up to support companies to start complying with the rules before they enter into force. We ensured that human beings and European values are at the very center of AI's development."

The full text of the act is available on the EU's website.

About the Author

Chris Paoli (@ChrisPaoli5) is the associate editor for Converge360.
