Next phase of the European AI regulation process: ALLAI gives feedback to the Commission
In its Whitepaper on Artificial Intelligence, Europe took a clear stance on AI: foster uptake of AI technologies, underpinned by what it calls ‘an ecosystem of excellence’, while also ensuring their compliance with European ethical norms, legal requirements and social values, ‘an ecosystem of trust’. The Inception Impact Assessment of a “Proposal for a legal act of the European Parliament and the Council laying down requirements for Artificial Intelligence” now presents a number of objectives and policy options. ALLAI provided feedback on these objectives and policy options.
First and foremost, we expressed our support for the European Commission’s efforts to establish an appropriate regulatory framework for AI. In establishing such a framework, one should both look at existing laws and regulations to determine whether they are ‘fit for purpose’ for a world with AI, and consider establishing new rules where current legislation is not adequate.
In general, we recommend broadening the description of the problem that the initiative aims to tackle, i.e. addressing a number of ethical and legal issues raised by AI, to include “societal issues raised by AI”. In the same spirit, we recommend broadening the description of the ultimate policy objective of the proposal, i.e. to foster the development and uptake of safe and lawful AI that respects fundamental rights across the Single Market by both private and public actors while ensuring inclusive societal outcomes, so as to include “fair societal outcomes”.
Defining AI for regulatory purposes
The issue of defining the scope of a new legislative initiative for AI is the core element that needs to be addressed. Whereas the Inception Impact Assessment mentions a number of AI techniques that either should or should not be covered by the instrument, we would like to recommend a different approach toward defining the scope of the instrument: an approach that looks at the level of impact of the technology on people and society at large, rather than (merely) at the technical specifications of a particular AI system. An impact-level based approach lowers the risk of loopholes that could be exploited.
Legal AI stress test
As for existing legislation, we call for a broad legal AI stress test, because we see a large number of additional legal lacunae when it comes to AI that were not mentioned in the Inception Impact Assessment, in areas such as the GDPR, law enforcement, competition law, transportation, trade in dual-use technology, medical devices, energy and the environment, to name a few.
The Inception Impact Assessment lays down five policy options, ranging from keeping the ‘Baseline scenario’ to a combination of several policy options.
ALLAI would be most in favor of a combination of policy options 2, 3a and 3b as described in the Inception Impact Assessment. This combination would entail soft law for low-impact AI applications (or uses), including voluntary labelling, and an EU instrument with mandatory labelling covering two elements: (i) clear restrictions, conditions, safeguards and/or boundaries for a limited number of exceptionally impactful AI applications or uses, and (ii) mandatory requirements for medium- to high-impact AI, based on common denominators to determine the level of impact.
Finally, we call for an ex-durante mechanism (encompassing both ex-ante and ex-post mechanisms) to ensure a continuous, systematic, socio-technical governance approach, looking at the technology from all perspectives and through various lenses. For this we recommend setting up a European AI Authority as part of a global framework of AI Authorities.