AIA in-depth #3b | High-Risk AI Requirements

This report is the fourth in a series of in-depth analyses of the European Commission's proposal for a Regulation on Artificial Intelligence (AIA).

In this paper we dive deeper into the main elements of Chapter 2 of Title III of the AIA: the requirements for high-risk AI (articles 9–15, 42, 43).

Main findings

1 The AIA versus the Ethics Guidelines for Trustworthy AI:

  • Many, but not all, requirements of the Ethics Guidelines have been fully incorporated in the AIA
  • ‘Missing’ requirements are: human autonomy, privacy, explainability, non-discrimination, environmental well-being and societal well-being

2 The requirements should be seen in light not only of the intended purpose of the AI system but also of its reasonably foreseeable use

3 Include the obligation to involve relevant stakeholders in the development and review phases of the risk management system, and promote multidisciplinarity in the design and development of AI

4 Main adjustments to the requirements of articles 10–15:

  • Delete para. 5 of article 10 (bias monitoring, detection and correction)
  • Delete article 42(1) (presumption of conformity with para. 4 of article 10)
  • Apply the recommendations for logging to all high-risk AI
  • Define ‘affectee(s)’: natural or legal persons affected by high-risk AI
  • Add requirements to ensure explainability of AI decisions towards affectees, and a right for affectees to request an explanation
  • Add measures to ensure the independence and protection of the person(s) entrusted with exercising human oversight
  • Include requirements regarding model quality, such as justifiable and reasonable parameters and features, and generalisability

5 Strengthen the requirements:

  • Add requirements to protect human autonomy, privacy, non-discrimination, societal well-being, democracy, the rule of law and the environment
  • Add a fiduciary duty to act in the interest of affectees
  • Have certain decisions remain the ultimate prerogative of humans
  • Have all high-risk AI systems assessed by a third party