In this series of papers we take deep dives into the European Commission's proposal for a Regulation on Artificial Intelligence (AIA). We cut the AIA into digestible pieces and analyze the true scope and meaning of its rules. In each paper, we assess whether the rules indeed reflect the overall objective of the AIA: to protect health, safety and fundamental rights while supporting innovation.
#1. Objective, Scope and Definition
- Strengthen the aim and purpose of the AIA by drawing inspiration from existing regulations for potentially harmful products or practices, such as REACH.
- Include a mandatory Fundamental Rights Impact Assessment for all High-Risk AI systems.
- Develop an EU taxonomy aimed at ensuring digitally sustainable economic activities (inspired by the EU taxonomy for environmentally sustainable economic activities).
- Multiple (proposed) exclusions of AI systems or domains from the scope of the AIA render the AIA’s protection of health, safety and fundamental rights less effective and should be reconsidered.
- AI does not operate in a lawless world. Defining the scope of the AIA should include a clear statement of its interplay with existing (and upcoming) primary and secondary EU law, UN Human Rights Treaties, Council of Europe Conventions and national laws.
- Both definitions of AI (in the AIA and the Slovenian Presidency compromise proposal) focus on AI techniques; it is better to focus on the characteristics or properties of a system that make it relevant to regulate.
- The paper proposes an alternative definition of AI that provides sufficient legal certainty as to what is covered, is future-proof, and leaves the necessary room for interpretation.
#2. Prohibited AI Practices
- Prohibition of AI-driven manipulation is limited to a very rare and narrow set of practices.
- The AIA presents a grand opportunity to address the wider societal harms that AI-driven manipulation can bring and curb the trajectory towards the Internet-of-Minds.
- Prohibition of social scoring is welcome but should be widened to private actors and clarified.
- For social scoring to be effectively banned in Europe, a clearer line should be drawn between what is considered ‘social scoring’ and what can be considered an acceptable form of evaluation for a certain purpose.
- Prohibition of real time remote biometric identification only covers a narrow set of practices.
- Under the current prohibition, many biometric recognition practices (among others, biometric assessment and biometric categorization) remain allowed, including by law enforcement.
- Outside of law enforcement, all biometric recognition practices remain possible.
#3a. High Risk AI Classification
- Classifying AI as high-risk is based on a limited set of criteria, prioritizing some considerations and excluding others. This is contrary to our fundamental rights doctrine.
- Adding new high-risk AI is only allowed in pre-determined domains, making the AIA less 'future-proof'.
- Not just the intended purpose of the AI system, but also its 'reasonably foreseeable use', should be taken into consideration.
- We see no reason to exclude the harmonized sectors of Annex II.B from the scope of the AIA.
- Biometric identification (one-to-many), categorisation and assessment should be moved to the prohibitions of art. 5 AIA.
- Telecom, internet and financial infrastructure, as well as air, rail and water traffic management, should be added to para. 2 as critical infrastructures.
- AI-driven personalised education should be added to para. 3.
- Certain AI(-driven) decisions in employment, e.g. on hiring and termination, should be moved to art. 5 AIA.
- AI determining or predicting the lawful use of public services (e.g. fraud risk prediction) should be added to para. 5.
- Clarify what kinds of private services are to be considered 'essential' (e.g. housing, internet, telecom, financial services) (para. 5).
- Predictive policing, criminal profiling and biometric lie detection in law enforcement, criminal justice and asylum, migration and border control should be moved to the prohibitions of art. 5 AIA.
- AI used to make judicial decisions should be moved to art. 5 AIA, and 'the judiciary' should be clarified in para. 8.
- AI used for vote counting in elections should be added to art. 5 AIA.
- Content moderation in democracy-critical processes should be added to para. 8.
#3b. High Risk AI Requirements
- Many but not all requirements of the Ethics Guidelines have been fully incorporated in the AIA.
- 'Missing' requirements are: human autonomy, privacy, explainability, non-discrimination, environmental well-being and societal well-being.
- The requirements should be seen not only in light of the intended purpose but also of the reasonably foreseeable use of the AI system.
- Include the obligation to involve relevant stakeholders in the development and review phases of the risk management system, and promote multi-disciplinarity in the design and development of AI.
- Delete para. 5 of art. 10 (bias monitoring, detection and correction).
- Delete art. 42(1) (presumption of conformity with para. 4 of art. 10).
- Apply the requirements for logging to all high-risk AI.
- Define 'affectee(s)': natural or legal persons affected by high-risk AI.
- Add requirements to ensure explainability of AI decisions towards affectees, and the right of affectees to request an explanation.
- Add measures to ensure the independence and protection of persons tasked with exercising human oversight.
- Include requirements regarding model quality, such as justifiable and reasonable parameters and features, and generalisability.
- Add requirements to protect human autonomy, privacy, non-discrimination, societal well-being, democracy, the rule of law and the environment.
- Add a fiduciary duty to act in the interest of affectees.
- Have certain decisions remain the ultimate prerogative of humans.
- Have all high-risk AI systems assessed by a third party.