A Framework for Responsible AI in Times of Corona

Abstract

In efforts to tackle the Corona crisis, many public and private parties are considering deploying, or are already deploying, AI-driven applications for both medical and societal uses. Examples of such applications include AI-driven disease prediction, AI-assisted vaccine discovery, face mask detection cameras and AI-based remote surveillance of workers. There is increasing concern about the implications of these applications for fundamental rights, ethical principles and societal values. Many of these applications are deployed through a fast-tracked decision cycle, often driven by ‘techno-solutionism’: the belief that a technological solution can be developed for every complex social, and in this case epidemiological, problem. Even when basic elements such as effectiveness are uncertain, there is a tendency to believe that in desperate times all possible avenues for solutions should be exhausted. Yet it is well known that when powerful technologies such as AI are used unwisely, they can have serious unintended consequences.

Given the increasing number of solutions and the stated intention of many governments and public organisations to implement AI-driven systems to solve or alleviate the effects of this pandemic, the evaluation of such solutions is very important. Deploying organisations and end users need to have the means to measure the effects of these solutions in order to trust them. Such evaluation needs to go beyond the technical characteristics of the systems and include means to assess their societal, ethical and legal impact.

This paper contains a first outline of evaluation criteria to assess the efficacy and the legal, ethical and societal impact of AI applications used during the Corona pandemic, in order to guide their responsible use. These criteria will help organisations perform a ‘quickscan’ of the AI application they want to develop, procure or deploy to tackle the challenges of this crisis. The quickscan is a self-assessment tool to quickly identify the relevant elements of responsible AI and the level of adherence to these elements. It helps determine the level of impact of the AI application and provides options for balancing different tensions and interests.

These criteria are grouped around three topics, following the EU Ethics Guidelines for Trustworthy AI:

  • Impact on Citizens and Society
  • Technology
  • Governance

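To illustrate how such a quickscan could be organised in practice, below is a minimal sketch in Python, assuming a simple per-topic aggregation of adherence levels. It is purely illustrative and not part of the Framework itself; the Adherence levels, the Criterion fields and the example questions are all hypothetical.

    from dataclasses import dataclass
    from enum import Enum

    # Hypothetical adherence levels for a quickscan item; the actual
    # Framework may define these differently.
    class Adherence(Enum):
        NOT_MET = 0
        PARTIALLY_MET = 1
        FULLY_MET = 2

    @dataclass
    class Criterion:
        topic: str      # one of the three topics above
        question: str   # self-assessment question
        adherence: Adherence

    def quickscan_summary(criteria: list[Criterion]) -> dict[str, float]:
        """Aggregate adherence per topic as a fraction of the maximum score."""
        totals: dict[str, list[int]] = {}
        for c in criteria:
            totals.setdefault(c.topic, []).append(c.adherence.value)
        max_score = max(a.value for a in Adherence)
        return {topic: sum(scores) / (max_score * len(scores))
                for topic, scores in totals.items()}

    # Example: a two-item scan of a hypothetical face mask detection pilot.
    scan = [
        Criterion("Impact on Citizens and Society",
                  "Has the impact on fundamental rights been assessed?",
                  Adherence.PARTIALLY_MET),
        Criterion("Technology",
                  "Has the system's effectiveness been validated?",
                  Adherence.NOT_MET),
    ]
    print(quickscan_summary(scan))
    # {'Impact on Citizens and Society': 0.5, 'Technology': 0.0}

A per-topic score of this kind could, for instance, flag the topics where adherence is lowest and a fuller assessment is most urgently needed.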
As an example use case, the criteria are checked against the face mask detection camera pilot that the RATP briefly ran at the Paris metro station Châtelet-Les Halles.

This framework will be piloted with a selected group of organisations, including the Municipality of Amsterdam, that are developing or deploying AI to tackle the various challenges of the Corona crisis.

Read the Framework

“A Draft Framework for Responsible AI in Times of Corona”

By Catelijne Muller, Virginia Dignum and Noah Schöppl

Become a Pilot Partner

Developing or deploying an AI application to alleviate the Corona crisis? Interested in piloting the draft Framework? Let us know.