AIA in-depth #3a | High-Risk AI Classification

This report is the third in a series of in-depth analyses of the European Commission proposal for a Regulation for Artificial Intelligence (AIA).

In this third report we dive deeper into the main elements of Chapter 1 of Title III of the AIA: Classification of High-Risk AI (articles 6 and 7 and ANNEXES II and III AIA). We evaluate the methodology used to classify AI systems and areas as high-risk, and the implications of those classifications.

Main findings

1 Classification criteria and future-proofing:

  • Classifying AI as high-risk is based on a limited set of criteria, prioritising some criteria while excluding others. This is contrary to our fundamental rights doctrine
  • Adding new high-risk AI is only allowed in pre-determined domains, making the AIA less ‘future-proof’
  • Not just the intended purpose of the AI system, but also its ‘reasonably foreseeable use’ should be taken into consideration

2 Harmonized products with AI:

  • We see no reason to exclude the harmonized sectors of ANNEX II.B from the scope of the AIA

3 Stand-alone High-Risk AI:

  • Biometric identification (one-to-many), categorisation and assessment should be moved to art. 5 AIA
  • Telecom, internet, financial infrastructure as well as air, rail and water traffic management should be added to para. 2 as critical infrastructures
  • AI-driven personalised education should be added to para. 3
  • Certain AI(-driven) decisions in employment, e.g. on hiring and termination, should be moved to art. 5 AIA
  • AI determining or predicting the lawful use of public services (e.g. fraud risk prediction) should be added to para. 5
  • Clarify what kind of private services are to be considered ‘essential’ (e.g. housing, internet, telecom, financial services) (para. 5)
  • Predictive policing, criminal profiling and biometric lie detection in law enforcement, criminal justice and asylum, migration and border control should be added to art. 5 AIA
  • AI to make judicial decisions should be moved to art. 5 AIA and ‘the judiciary’ should be clarified in para. 8
  • AI used for vote counting in elections should be added to art. 5 AIA
  • Content moderation in democracy-critical processes should be added to para. 8