AIA Trilogue Topics: An Extra Layer for High-Risk AI

With the EP having adopted its position on the AIA on 14 June, the legislative process recently entered its final phase: the Trilogue. During this phase, the co-legislators (EP and Council) will negotiate the final text of the AIA, with the European Commission acting as broker.

On 18 July, the first substantive Trilogue meeting took place, with the classification of high-risk AI among the topics on the agenda.

An important topic, because both co-legislators suggest adding an extra layer to the classification of high-risk AI systems. The EP proposes that AI systems listed in Annex III shall be considered high-risk only if they pose a significant risk of harm to the health, safety or fundamental rights of natural persons. Providers can submit a reasoned notification to the national supervisory authority when they conclude that they are not subject to the requirements of Title III, Chapter 2 of the Regulation because their system does not pose such a significant risk. A misclassification could result in hefty fines.

The Council takes the opposite approach, proposing that AI systems referred to in Annex III shall be considered high-risk unless the output of the system is purely accessory in respect of the relevant action or decision to be taken and is therefore not likely to lead to a significant risk to health, safety or fundamental rights.

Both ‘accessoriness’ (Council) and ‘significance’ (Council and EP) are rather vague concepts that, in the case of AI systems, are even harder to grasp. Moreover, neither concept ensures that health, safety and fundamental rights are fully protected.

One example that illustrates this is the Dutch childcare benefit scandal, which is seen as the largest ‘AI scandal’ in Europe. It evolved from a complex interplay of factors, one of which was a flawed AI system for fraud prediction. On top of that, there was heavy political pressure to ‘detect fraud before it happened’, leading to internal pressure to fast-track the deployment of a system unfit for purpose. There were also allegations of ‘institutional racism’ within the tax authority. And there was a general lack of understanding of the workings of the AI system, combined with a tendency to take the system’s predictions at face value. This led to Kafkaesque situations in which families were constantly targeted and mistrusted by the tax authority.

Because of this combination of factors, it has indeed been argued that the AI system was merely an ‘accessory’ to the scandal and did not in itself pose a significant risk to health, safety or fundamental rights. Yet the fraud prediction system, however ‘accessory’ or ‘insignificant’, contributed to 14,000 children being taken from their homes, one suicide, severe (mental) health problems, bankruptcies, job losses and ultimately the resignation of the Dutch Government.

The proposed ‘horizontal layers’ could lead to a situation where an AI system such as the one used in the Dutch childcare benefit scandal would not be classified as high-risk, and would therefore not have to meet any of the requirements designed to protect exactly those fundamental rights that were affected in the scandal.

We truly believe that the European Commission, in its proposal for the AIA, already made a proper risk assessment and rightly concluded that the systems listed in Annex III pose a significant enough risk to health, safety or fundamental rights.

For these reasons, we strongly advise against both extra layers. They do not ensure proper protection, create loopholes, prolong legal uncertainty and leave too much room for interpretation. This, in turn, could stifle innovation.