Draft AI Act: ALLAI analysis and recommendations
In its highly anticipated legislative proposal for AI, the European Commission projects a clear message: fundamental rights and European values are at the core of Europe’s approach to AI. Europe is essentially saying: when it comes to this technology, ‘anything goes’ is no longer the norm. We will not allow everything just because it can be done. And we don’t just regulate bits and pieces of the technology; we set EU-wide rules that will resonate across the globe.
Catelijne Muller and Virginia Dignum
Catelijne Muller and Virginia Dignum conducted a thorough analysis of the draft AIA and formulated a number of practical recommendations for improvement. To give policymakers something concrete to work with, their analysis does not resort to general insights or a repetition of high-level principles, but provides actual textual proposals to improve and clarify a number of provisions of the AIA.
Objective, scope and definition
- ALLAI welcomes the fact that the AIA puts health, safety and fundamental rights at its centre.
- ALLAI welcomes the external effect of the AIA, which ensures that AI developed outside the EU must meet the same legal standards if it is deployed or has an impact within the EU.
- A more effective definition of AI should focus on the characteristics or properties of a system that make it relevant to regulate.
- The AIA lacks more general notions such as the prerogative of human decision-making, the need for human agency and autonomy, the strength of human-machine collaboration and the full involvement of stakeholders.
- AI does not operate in a lawless world, and the AIA should be clear(er) on the fact that existing laws and regulations (beyond the GDPR) apply to AI and the way we use it.
- ALLAI recommends including ‘legacy AI’ within the scope of the AIA.
Prohibited AI practices
- ALLAI welcomes the fact that the European Commission had the courage to actually prohibit certain AI practices.
- The first two prohibitions centre around ‘distorting a person’s behaviour’, but in their current form they only capture rare cases. They do, however, provide a grand opportunity to address one of the most worrying and widespread capabilities of AI: harmful conditioning and manipulation.
- It is important that the AIA halts the current trajectory of public and private actors using ever more information to assess, categorise and score us. The AIA should attempt to draw a clear line between what is considered ‘social scoring’ and what can be considered an acceptable form of evaluation for a certain purpose. ALLAI believes this line can be drawn at the point where the information used for the assessment is not reasonably relevant for, related to, or proportionate to the assessment.
- Broadly in line with the EDPB and EDPS, ALLAI calls for a ban on biometric recognition (which includes biometric identification, but also all forms of ‘emotion/behaviour/affect/intent/trait recognition’ based on biometrics, currently categorised as medium-risk and, in some areas, high-risk), both by private organisations and by or on behalf of (semi-)public authorities.
- ALLAI believes that the further proliferation of widespread tracking of our entire lives by public and private actors should be curbed.
High-risk AI
- ALLAI welcomes the fact that the AIA draws heavily on the Ethics Guidelines for Trustworthy AI.
- The AIA lays down the criteria for what can be considered a ‘risk of harm to health, safety and fundamental rights’ posed by AI. This limits the broad interpretation of our fundamental rights framework, which allows for the consideration of all relevant circumstances.
- The ‘high-risk AI approach’ can normalise and mainstream quite a number of AI practices that are still heavily criticised, often due to their lack of sufficient social benefit.
- Moreover, the risks these AI practices pose cannot always be sufficiently mitigated by the current requirements for high-risk AI.
- We also think that a number of AI uses, such as in election and voting processes and for content moderation, should be added to Annex III.
- We welcome the fact that the requirements for high-risk AI strongly reflect the requirements for trustworthy AI of the Ethics Guidelines for Trustworthy AI.
- We recommend also including the currently missing requirements of (i) human agency, (ii) privacy, (iii) diversity, non-discrimination and fairness, (iv) explainability and (v) environmental and societal well-being.
- In line with our long-advocated ‘human-in-command’ approach to AI, ALLAI strongly recommends that the AIA reserve certain decisions as the prerogative of humans, particularly in domains where these decisions have a strong moral component and legal and/or societal impact (such as the judiciary, law enforcement, social services, healthcare, housing, financial services, labour relations and education). Not all decisions can or should be reduced to ‘ones and zeros’.