Encoding the same biases: Artificial Intelligence’s limitations in coronavirus response

Interview with Catelijne Muller (among others) in Horizon Magazine, Sept. 2020

Catelijne Muller was interviewed alongside AI Now’s Meredith Whittaker for Horizon Magazine, the EU Research and Innovation Magazine, on the use of AI in times of corona.

Catelijne noted a number of elements that should be taken into account when deploying AI to tackle the corona crisis.

Perpetuating bias, for example. Black people in the US are more likely to be severely affected by COVID-19, yet vaccine trials might not take this into account. If the outcomes of such trials are fed into an AI system that makes future predictions, those predictions could disadvantage black people.
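
To make that mechanism concrete, here is a minimal sketch in Python (synthetic data, assuming scikit-learn is available; the groups, the single clinical feature and the risk thresholds are entirely hypothetical). A model trained on trial data that under-represents one group fits the majority group’s risk pattern and, as a result, predicts worse for the under-represented group:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, risk_threshold):
    """Synthetic 'trial' data: one clinical feature, binary severe outcome."""
    x = rng.normal(size=(n, 1))
    y = (x[:, 0] + rng.normal(scale=0.3, size=n) > risk_threshold).astype(int)
    return x, y

# Group A dominates the training data; group B is barely represented,
# and severe outcomes occur at lower feature values for group B.
x_a, y_a = make_group(1000, risk_threshold=1.0)
x_b, y_b = make_group(50, risk_threshold=0.2)

model = LogisticRegression().fit(np.vstack([x_a, x_b]),
                                 np.concatenate([y_a, y_b]))

# On fresh samples, the pooled model's decision boundary sits where it
# fits the majority group, so the under-represented group fares worse.
print("accuracy, well-represented group:  %.2f" % model.score(*make_group(2000, 1.0)))
print("accuracy, under-represented group: %.2f" % model.score(*make_group(2000, 0.2)))
```

The model never sees group membership at all; the disadvantage arises purely because the minority group’s different risk relation is averaged away by the majority’s data.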

She also called for asking ‘question zero’: what is my problem and how can I solve it? Do I solve it with artificial intelligence or with something else? If with AI, is this application good enough? Does it harm fundamental rights?

She emphasized that many people think AI is a magic wand that will solve everything. But sometimes it doesn’t solve anything, because it isn’t fit for the problem. And sometimes it is so invasive that it solves one problem but creates a large, different one.

When it comes to using AI in the context of COVID-19, there is an eruption of data, but that data needs to be reliable and properly curated. Data cannot simply be thrown at yet another algorithm. Data-driven AI works by finding correlations: it doesn’t understand what a virus is.
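
The “correlations, not understanding” point can be illustrated with another minimal sketch (again synthetic data and scikit-learn; the “ward” routing policy and the noisy clinical marker are hypothetical). A model that leans on an incidental correlation looks accurate until the circumstances behind that correlation change:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_data(n, severe_routed_to_ward_1):
    severity = rng.normal(size=n)                    # true driver, never observed directly
    y = (severity > 0.5).astype(int)                 # severe outcome
    if severe_routed_to_ward_1:
        ward = y.astype(float)                       # admission policy: severe cases -> ward 1
    else:
        ward = rng.integers(0, 2, size=n).astype(float)  # policy change: routing now arbitrary
    marker = severity + rng.normal(scale=2.0, size=n)    # weak, noisy clinical signal
    return np.column_stack([marker, ward]), y

X_train, y_train = make_data(2000, severe_routed_to_ward_1=True)
model = LogisticRegression().fit(X_train, y_train)

# The model leans on the ward feature, which merely correlates with
# severity through an admission policy, not through any causal link.
print("accuracy while the ward correlation holds: %.2f"
      % model.score(*make_data(2000, True)))
print("accuracy after the routing policy changes: %.2f"
      % model.score(*make_data(2000, False)))
```

Nothing about the disease changed between the two evaluations; only an administrative routine did, and the model’s apparent competence collapsed with it.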

The article also refers to ALLAI’s report analysing a proposal that EU regulators are working on to regulate ‘high-risk’ AI applications, such as those used in recruitment, biometric recognition or healthcare. It highlights ALLAI’s recommendations: audit every decision cycle of the AI system, monitor the system’s operation, safeguard the discretion to decide when and how to use the system in any particular situation, and preserve the ability to override a decision made by the system.