AIA in-depth #1 | Objective, Scope, Definition

This report is the first in a series of in-depth analyses of the European Commission's proposal for a Regulation on Artificial Intelligence (the Artificial Intelligence Act, AIA).

In this first report, we dive deeper into the main elements of Chapter I of the AIA. By evaluating its objective, scope and technical definitions (of AI and data), we assess whether Chapter I indeed reflects the overall objective of the AIA: to protect health, safety and fundamental rights while supporting innovation.

Main findings

1 Three elements could be considered to strengthen the objective of the AIA, namely the protection of health, safety and fundamental rights against the harmful effects of AI:

  • Strengthening the aim and purpose of the AIA by drawing inspiration from existing regulations for potentially harmful products or practices, such as REACH.
  • Introducing a mandatory Fundamental Rights Impact Assessment for all high-risk AI systems.
  • Creating an EU taxonomy aimed at ensuring digitally sustainable economic activities, inspired by the EU taxonomy for environmentally sustainable economic activities.

2 Multiple (proposed) exclusions of AI systems or domains from the scope of the AIA render its protection of health, safety and fundamental rights less effective. Most of these exclusions should be reconsidered; the R&D exclusion could be replaced with exemptions from certain requirements, combined with specific notification and labelling obligations when AI systems are used in R&D.

3 AI does not operate in a legal vacuum. Defining the scope of the AIA should include a clear articulation of its interplay with existing (and upcoming) primary and secondary EU law, UN human rights treaties, Council of Europe conventions and national laws.

4 Both definitions of AI (in the AIA and in the Slovenian Presidency compromise proposal) focus on AI techniques, whereas it would be better to focus on the characteristics or properties of a system that make it relevant to regulate. The focus on specific technologies can create loopholes and legal uncertainty, and is not necessary. This paper proposes an alternative definition of AI that provides sufficient legal certainty as to what is covered, is future-proof, and still leaves the necessary room for interpretation.

For more information or inquiries: contact us at