EU’s AI Regulation: Europe puts Fundamental Rights and Values Front and Center
PRESS RELEASE – 21 April 2021
In its highly anticipated legislative proposal for AI, the European Commission today delivered a clear message: fundamental rights and European values are at the core of Europe’s approach to AI. Europe is essentially saying: when it comes to this technology, ‘anything goes’ is no longer the norm. We will not allow everything simply because it can be done. And we do not regulate just bits and pieces of the technology; we set EU-wide rules that will resonate across the globe.
An important milestone
ALLAI welcomes the proposal, as Europe is the first in the world to set an all-encompassing legal framework for the responsible development, deployment and use of AI, something we have been advocating since the very beginning. Most recommendations we have made in our various roles and positions over the past couple of years have, in one way or another, found their way into this regulatory proposal. But there is still quite some work to be done. The proposal is complex, contains numerous ‘backdoors’, exceptions and caveats, and leaves a lot of room for discussion and interpretation. The proof of the pudding will be in the eating, which will happen in the months to come. Nevertheless, this is an important milestone for Europe.
Red lines, but blurry
ALLAI welcomes the Commission’s courage to actually aim to prohibit certain AI practices. Harmful manipulation, exploitation of vulnerabilities, indiscriminate mass surveillance for law enforcement and social scoring are all mentioned as practices that have no place in the EU.
“There are, however, quite a few potential ‘loopholes’ that we worry could be exploited and that need further refinement,” says ALLAI President and co-founder Catelijne Muller.
We also welcome the ‘translation’ of most of the 7 requirements of the Ethics Guidelines for Trustworthy AI into specific requirements for ‘high-risk’ AI. What stands out, though, is that one of the most important requirements, ‘inclusivity, non-discrimination and fairness’, is not explicitly dealt with in the proposal. If the Commission hopes that high-quality data will deal with any discriminatory bias or unfair outcomes, it overlooks the important fact that not all biases are the result of low-quality data.
“The design of any artefact is in itself an accumulation of choices, and choices are biased by nature as they involve selecting one option over another. We should not merely focus on technical solutions at dataset level, but rather develop socio-technical processes that help avoid any discriminatory or unfair outcomes of AI,” says ALLAI co-founder Virginia Dignum.
The rules on remote biometric identification cause concern
The proposal aims to ban real-time remote biometric identification (for example with facial recognition) for law enforcement and categorises it as ‘high risk’ when used for other purposes. The word ‘identification’ rather than ‘recognition’ causes concern here. Many biometric recognition technologies are not aimed at identifying a person, but rather at assessing a person’s behaviour or characteristics (e.g. by evaluating a person’s facial features, expressions, eye movement, gait, temperature, heart rate etc.).
“Surprisingly, this type of biometric (or ‘affect’) recognition is placed at the second-lowest risk level of the ‘risk pyramid’, merely requiring transparency about its use, while it is highly intrusive and scientifically questionable,” says Catelijne Muller. “It should be much higher up the pyramid.”
Regulation on top of regulation
ALLAI stresses that this regulation is complementary. Even before this proposal, AI was not a ‘completely unregulated technology’.
“Many existing rules already apply to AI and its effects, much as they apply to any other tool or practice. AI never operated in a lawless world,” says Catelijne Muller. “This regulation should come on top of existing legislation and cannot in any way replace it.”
According to Vice-President Vestager, the proposed regulation looks not only at the technology itself, but also at what it is used for and how it is used. Nevertheless, it adds a list of AI techniques that a technology should incorporate to be considered AI and fall within the scope of the regulation. We expect this to give rise to a wide discussion in the months to come on what exactly it is we want to regulate.
“AI technology comes in many ‘flavours’, some more controversial than others, some of which are fully transparent, deterministic and explainable, but still invasive. I would rather see the regulation focus on properties or possible results or impacts than on techniques,” says Virginia Dignum.
Fostering AI Innovation
The proposal makes clear links between regulation and innovation, but mostly presents them as pulling in opposite directions. We would like to see a much stronger proposition of regulation as a stepping stone for innovation. Moreover, lessons from the past show that good regulation does not stifle innovation.
“In fact, when we regulate we are in a much stronger position to drive innovation,” says Virginia Dignum.
“Indeed…” says Catelijne Muller. “Uncertainty hampers innovation, and regulation creates certainty. You know what you can and cannot do, and that the same rules apply to your competitors.”
For press inquiries contact: firstname.lastname@example.org