PRESS RELEASE: AI Act approved

Amsterdam, May 21, 2024

The Council of the European Union, specifically the EU Ministers responsible for Telecommunications, met on May 21st, 2024, in Brussels, where they adopted the AI Act. With this, the final hurdle for the AI Act to become law was cleared, making Europe the first continent to regulate AI.

What started in 2017 with Catelijne Muller’s EESC report on AI & Society, followed by the European AI Strategy and the Ethics Guidelines for Trustworthy AI, has now culminated in a law that sets clear conditions for trustworthy AI in the EU. A law that already resonates around the globe.

A “hybrid” law

With the aim of protecting health, safety and fundamental rights, the AI Act not only demands that AI be safe, of high quality and in line with European values, but also describes, in quite some detail, how to achieve this.

“We welcome the AI Act and particularly its hybrid nature,” says ALLAI President Catelijne Muller. “The overarching objective of protecting fundamental rights, paired with the granular approach of a product regulation, provides the much-needed conditions for AI that is trustworthy, beneficial and safe and that respects European values. I am honoured and proud to have been involved in the process from the very beginning, and that ALLAI was able to make quite a number of valuable contributions to the final text of the AI Act.”

The European Commission introduced the proposal for the AI Act in April 2021, and a significant milestone was reached over two years later, on 8 December 2023, when agreement was reached on the final text. European Parliamentarians Brando Benifei and Dragoş Tudorache oversaw the law-making process as the European Parliament’s rapporteurs on the file. Within this process, ALLAI played a significant role in advising the European Parliament’s co-rapporteur Brando Benifei and his team, as well as various shadow rapporteurs, on matters including biometric recognition, general-purpose AI (foundation models), and high-risk AI systems.

A risk-based approach

The AI Act follows a ‘risk-based’ approach: the higher the risk, the stricter the rules. It also prohibits quite a number of AI practices, such as harmful behavioural manipulation, social scoring, emotion recognition in the workplace and in education, predictive policing and real-time remote biometric recognition by (or for) law enforcement. While AI systems with a ‘limited risk’ only have to comply with transparency requirements, ‘high-risk’ AI systems must follow a strict set of requirements and obligations before they can be used (or have an effect) on the EU market. The list of high-risk AI systems is pre-defined and includes domains such as education, healthcare, workplaces, law enforcement, critical infrastructure, public services, and health and life insurance.

It is worth noting that penalties for infringements of the AI Act are substantial, ranging from €7.5 million to €35 million, or 1% to 7% of an organisation’s global annual turnover (whichever is higher).


To foster innovation, the AI Act also foresees AI regulatory sandboxes: controlled environments for developing, testing and validating innovative AI systems, including testing in real-world conditions.

“I strongly believe that the AI Act can act as a stepping stone for innovation,” says Catelijne. “It provides a level playing field where everyone is held to the same standard, but more importantly, it helps improve AI systems. And rightly so. We have unfortunately seen many serious incidents with low-quality AI systems that have led to serious harm, even up to the point where a government had to resign. And it is not the first time that we regulate a product that is potentially harmful. We regulate airplanes. They can bring us to beautiful places, but they can also kill us. We regulate medication. It can cure us, but also poison us. And for those who say that the rules are too strict or technically impossible to meet I say: try harder. If we had not been able to build brakes into cars, we would not have left out the brakes, we would have tried harder to build a better car.”

Question zero

Being the result of fierce negotiations and intense lobbying, the AI Act, while laying an excellent basis for trustworthy AI, still leaves ample need for ethical reflection. We hence urge all parties to keep asking ‘question zero’. With the recent rapid developments in AI, it is well worth taking a step back and asking ourselves whether we want to use AI in the first place. Whether it serves the society we want to live in. Whether it truly shapes it for the better.

What’s next

It is crucial that all organisations, public and private, that are developing, procuring, buying, using or anticipating any of the former, start preparing now for compliance (the prohibitions, for example, already apply by the end of this year). The AI Act is complex and hard to navigate, and its hybrid character makes interpretation challenging. Compliance with the AI Act is not just a legal exercise; it requires a deep understanding of the technology itself as well as of its ethical, societal and fundamental rights implications.

Read the approved text of the AI Act

Press inquiries: +31 (0)6 11 60 18 25