Responsible AI & Corona officially launched at WorldSummitAI
October 14, 2020
With a panel titled “Responsible AI & Corona” ALLAI formally launched the project on Wednesday October 14th at WorldSummitAI.
Why this Project?
First of all, AI can play an important role in this crisis. AI can contribute to the understanding of the spread of the virus, to the search for a vaccine and to the treatment of COVID-19. It can also guide policy measures and give insight into their long-term impact.
On the other hand, we know that AI can be very invasive and can have adverse effects. We see a proliferation of these invasive AI applications in this crisis. Remote temperature scanning. Wearables that measure our blood oxygen. Facial recognition that detects the use of face masks or social distancing.
But also applications that remotely monitor workers, now that many of us work from home. Or grading algorithms, like the UK algorithm that graded students out of an Oxford education.
In times of crisis, we tend to turn more quickly to these kinds of ‘invasive’ technology and to ‘turn a blind eye’ when it comes to fundamental rights, ethical values and even effectiveness. We tend to think that desperate times call for desperate measures, and the motto is often: no harm, no foul. And we tend to believe that an invasive technology that is harmful will be dismantled after the crisis. Unfortunately, past crises have shown that once a technology is there, it doesn’t go away.
And that is why we feel that this crisis needs Responsible AI. Despite the urgency, or rather, because of the urgency, now more than ever. To ensure that AI actually helps in tackling the Corona crisis without causing harm to society along the way.
What does the Project involve?
- We will set up an observatory where we will map the different AI applications that are being used in this crisis. Medical as well as social.
- We will organize expert sessions for policy makers, doctors and the media to raise awareness and knowledge about the need for Responsible AI during this crisis.
- We will perform research into what Responsible AI in times of crisis actually involves: what trade-offs we face and how we bring principles into practice.
- We will conduct expert interviews and provide relevant news, which you will find on our webpage.
- We will perform a limited number of Quickscans, where we subject AI applications to a first rapid assessment of whether they align with the elements of responsible AI.
The Project is led by Virginia Dignum (Scientific Lead), co-founder of ALLAI and Professor of Ethical and Social AI, and Catelijne Muller (Project Lead), President of ALLAI and Expert in AI & Law, Democracy and Society.
We are honored to have a great number of experts from various fields on board for this project:
- Frank Dignum, Socially Aware AI, Umeå University
- Nathalie Smuha, Assistant Lecturer & Researcher (EU) Law & AI, KU Leuven
- Jan van Gemert, Head of Computer Vision Lab, TU Delft
- Andreas Theodorou, Postdoctoral Fellow in Computer Science, Umeå University
- Gabrielle Speijer, MD oncology-radiology, Haga Hospital, owner Catalyzit
- Roel Dobbe, Postdoctoral Researcher, AI Now Institute, New York
- Frederik Zuiderveen Borgesius, ICT & Private Law, Radboud University
- Dagmar Monett-Diaz, Computer Science, Berlin School of Economics and Law
- Catholijn Jonker, Explainable AI, TU Delft
- Filippo Santoni de Sio, Ethics & Technology, TU Delft
- Bart Geerts, MD, MSc, PhD, CEO Healthplus AI