Responsible AI & Corona

A project to promote the responsible use of AI in the fight against Corona

WHY this project?

  • AI can contribute to tackling the Corona crisis

    AI can play an important role during this crisis. AI applications can contribute to our understanding of the spread of the virus, to the search for a vaccine and to the treatment of COVID-19. AI can also guide policy measures and provide insight into their long-term impact on society and the economy.

  • If it doesn’t work, it can do harm

    In times of crisis, we tend to turn more quickly to ‘invasive’ technology and to turn a blind eye to fundamental rights, ethical values and even effectiveness. The motto is often: no harm, no foul. With AI, unfortunately, that does not hold: if it does not help, it may very well harm.

  • Desperate times, desperate measures?

    We see false dichotomies such as: we have to choose between ethics and health, or between fundamental rights and reopening the economy. It is also often assumed that an invasive or harmful technology will be dismantled after the crisis. In reality this is often not the case, and we risk setting unwanted precedents for the future.

  • This crisis needs Responsible AI

    Despite the urgency of this crisis, it is important that AI is developed and deployed in a responsible manner. Robustness, effectiveness, transparency and explainability, but also fundamental rights, inclusion and ethics, are essential to ensure that AI actually helps tackle the Corona crisis without causing harm to society along the way.

What are the objectives?

Systematically assess the opportunities and challenges of AI in tackling the Corona crisis, and devise AI strategies for the current as well as future epidemics.

Highlight responsible, useful, reliable and safe AI & Corona applications, while at the same time drawing attention to unsafe, irresponsible or undesirable AI.

Increase knowledge and awareness among policymakers, stakeholders and society about the importance of responsible AI, especially in times of crisis.

What does the project involve?

1. The Observatory will contain monitors to provide insights into AI applications that are being deployed or launched in connection with the Corona crisis.

2. The Quickscans will entail a quick first assessment of a number of carefully selected AI applications deployed in the Corona crisis. We will look at elements such as ethical impact, legal impact, societal impact, etc.

3. The research, expert and advocacy papers will delve into the need for Responsible AI during Corona, the outcomes of the Quickscans and the results of the expert sessions.

4. Interviews and information will be provided on a continuous basis throughout the project to raise awareness and understanding of AI during Corona and the need to ensure its responsible development and use.

5. Several Expert Sessions will be organized for selected groups to raise knowledge and awareness of AI during Corona, including its opportunities and challenges.

KEY dates

  • Oct. 14, 2020

    Project Launch

  • Oct. 2020

    Project Inception Paper

  • Nov. 2020

    Expert Session for Policy Makers

  • Nov. 2020

    Selection of Quickscan Candidates

  • Dec. 2020

    Observatory Launch

  • Feb. 2021

    Expert Session for Media

  • T.B.C.

Catelijne Muller, Project Lead

Expertise: AI & Society, Law and Policy

President of ALLAI; Master of Laws; EU policy expert; Member of the EU High Level Expert Group on AI; AI Rapporteur of the European Economic and Social Committee; Expert advisor on AI to the Council of Europe.

Virginia Dignum, Scientific Lead

Expertise: Social and Ethical AI

Co-founder and board member of ALLAI; Professor of Ethical and Social Artificial Intelligence; Member of the EU High Level Expert Group on AI; Member of the Global Partnership on AI; author of the book “Responsible AI”.

Multidisciplinary expertise

Prof. Dagmar Monett

Expertise: Artificial Intelligence

Full Professor of Computer Science HWR Berlin; Co-founder AGI Sentinel Initiative, AGISI.org; AI expert at Ms.AI, “Artificial Intelligence for and with Women.”

Prof. Filippo Santoni de Sio

Expertise: Ethics of Technology

Associate Professor in Ethics of Technology TU Delft; adjunct professor in Ethics of Transportation Politecnico di Milano; Rapporteur of the EU Expert Group on ethical issues of driverless mobility.

Bart Geerts, MD

Expertise: AI & Healthcare

Anaesthetist; CEO and founder of Healthplus AI, a company that builds, validates and scales machine learning to transform health data into valuable insights for patients and doctors.

Andreas Theodorou, PhD

Expertise: Responsible Artificial Intelligence

Postdoctoral Researcher and Research Engineer in Responsible AI at Umeå University; CEO and Co-Founder of VeRAI AB, a company that develops methods for responsible AI.

Prof. Frank Dignum

Expertise: Social Simulations & AI

Wallenberg Chair in AI at Umeå University in Sweden; affiliated with Utrecht University; Honorary Principal Research Fellow at the University of Melbourne; EurAI Fellow.

Gabrielle Speijer, MD

Expertise: AI, Healthcare & Data

Radiation Oncologist at the Haga Hospital; founder of health innovation company CatalyzIT; board member of ICT&health International/NL; member of the HIMSS Global Future50 community.

Nathalie Smuha, LLM.

Expertise: AI & EU Regulation and Law

Researcher at the KU Leuven Faculty of Law and the Leuven.AI Institute; Coordinator of the EU High Level Expert Group on AI; Admitted to the New York Bar.

Prof. Jan van Gemert

Expertise: Image Recognition

Lead of the Computer Vision Lab at TU Delft. His research focuses on visual inductive priors for deep learning for automatic image and video understanding.

Prof. Catholijn Jonker

Expertise: Explainable AI

Professor of Interactive Intelligence at TU Delft. Board member of the European Artificial Intelligence Association.

Prof. Frederik Zuiderveen Borgesius

Expertise: Privacy and Data Protection

Professor of ICT and Law at Radboud University Nijmegen, where he is affiliated with iHub, a multidisciplinary research hub on Security, Privacy and Data Governance.

Assistant Prof. Roel Dobbe

Expertise: Safety, Sustainability and Democratization of AI

Assistant Professor at TU Delft. His research covers the safety, sustainability and democratization of algorithmic systems and complex digital infrastructures. Former postdoctoral fellow at the AI Now Institute in New York.

Mustafa Kaynak

Core team member: Partnerships

ALLAI network manager. Sales professional with decades of experience in international sales processes and consultancy. Mustafa is responsible for the Project’s partnerships.

Noah Schoppl

Core team member: Observatory and Research

AI Governance Researcher and Consultant at ALLAI. Noah will co-develop and maintain the Observatory and perform research activities for the Project.

What our Experts say

“Especially in times of crisis, it is essential to acknowledge that – even if AI has a great potential to do good – its development and use needs to be lawful, ethical and robust to actually materialize that potential.”

Nathalie Smuha, LLM., Researcher AI & Law

“AI can support decision making of governments, but only if its contributions can be explained and adapted to a new reality.”

Prof. Frank Dignum, Wallenberg Chair in AI

“Emergencies like the Corona pandemic put some of our basic and political values under pressure: freedom, responsibility, privacy. In these times it is even more important to keep reflecting on the meaning of these concepts and how to embed them in AI-based technologies.”

Prof. Filippo Santoni de Sio, Ethics of Emerging Technologies

“AI can and will exploit all patterns and biases in data; what I admire about this project is that it helps to identify unwanted AI bias.”

Prof. Jan van Gemert, Image Recognition

“The corona crisis motivates the rapid development of AI applications that increase the power of the government and technology companies. The ALLAI Project Responsible AI & Corona is therefore of great importance to keep one’s finger on the pulse and protect our fundamental rights, democracy and values.”

Assistant Prof. Roel Dobbe, Safety, Sustainability and Democratization of AI

“Responsible AI needs responsible humans, both in pandemic times and beyond. I am certain the ALLAI project ‘Responsible AI & Corona’ will set the tone in this respect.”

Prof. Dagmar Monett, Artificial Intelligence

“Doctors are the trusted link: they determine the quality of care and consequently the tools, including artificial intelligence. Leading sustainable IT orchestration from the clinical perspective as an expert in ‘Responsible AI & Corona’ in open collaboration and cross-domain is crucial now for society as a whole.”

Gabrielle Speijer, MD, Oncology-Radiology, AI & Data in Healthcare

This project is supported by:

CONTACT

ALLAI
Prinseneiland 23A
Amsterdam, NL
welkom@allai.nl