Recap: Masterclass Responsible AI & Corona for (Dutch) Policymakers
On February 19, together with the Dutch Ministry of the Interior, ALLAI organised a Masterclass Responsible AI & Corona for policymakers.
Mildo van Staden (Senior Policy Officer, Ministry of the Interior) welcomed all participants and introduced the activities of the Ministry’s team on “new technologies, human rights and public values”, which investigates the impact of AI on human rights and public values. Mildo emphasized that a good legal framework for AI already exists, but that elements of it need further specification, for example for AI auditing.
Responsible AI
The second speaker was Prof. Virginia Dignum (Co-founder of ALLAI and Professor of Ethical & Social AI), who gave an introduction to AI and “Responsible AI”. She explained that AI is more than just algorithms, more than data, more than robots, more than machine learning and, most importantly, that AI does not equal (human) intelligence.
AI can operate well in predictable situations for which it has been trained; if the situation is even slightly different, AI will make mistakes, she explained.
The characteristics of an AI system are that it is 1) autonomous, 2) interactive and 3) adaptive. Moreover, every AI system is part of a larger socio-technical system.
What is responsible AI? Every AI system has limits, so we humans bear responsibility for AI. Responsible AI recognizes that AI is a tool for which we set the goals; the key question is not what AI can do, but what we want it to do. Important follow-up questions are: What and whose values should be taken into account? How do we deal with dilemmas? How should values be prioritized?
Responsible AI in times of Corona
The third speaker was Catelijne Muller (President and co-founder of ALLAI), who gave an introduction to the need for “responsible AI” during Corona. She explained that AI can contribute to the fight against the Corona crisis (e.g. in research into the spread of the virus and in vaccine development), but also entails new risks. In a crisis we often think “it can’t hurt to try”, but with AI this unfortunately does not hold: an AI system that does not work can still cause harm. In times of crisis, we are also inclined to turn a blind eye to the ethical, social and even legal aspects of AI.
For this project we look at two ‘types’ of AI: medical AI (e.g. for diagnostics, prediction of disease progression, vaccine development, etc.) and societal AI (e.g. for social distancing monitoring, algorithmic grading, face mask detection, etc.). All of these applications need an abundance of data to function at an acceptable level. One of the particularities of this crisis is that such an abundance of data is not available: there is relatively little data on the spread of the virus and the course of the disease, and there are also “new” societal phenomena, such as changed public behaviour due to the lockdowns (e.g. less traffic and more online purchases).
Catelijne then explained the “Framework for Responsible AI in times of Corona”, which sets out requirements in three areas: 1) impact on people and society, 2) technology and effectiveness, and 3) governance and responsibility.
The framework also examines trade-offs between the different requirements, and whether living in a crisis leads to different trade-offs than under normal circumstances.
She then presented an assessment against the Framework of the RATP face mask detection system, which was trialled at the Paris metro station Châtelet-les Halles. RATP terminated the trial prematurely, partly because of strong criticism from CNIL, the French privacy watchdog, concerning the system’s privacy aspects, but also concerning its impact on the democratic rule of law.
Modelling a pandemic
The last speaker was Prof. Frank Dignum (Professor of Social AI Simulations), who introduced Simassocc, an agent-based simulation platform that provides insight into the effects of corona measures. He explained that when modelling a pandemic, the type of model and the type of data strongly influence the outcome. Epidemiological models have often guided policy in this pandemic. An assumption behind the curfew, for example, is that people have fewer contacts and that the virus will therefore spread less. But people often respond to new measures with alternative behaviour: they visit each other during the day instead, and work from home during the evening curfew. Social contact is a fundamental human need: we can go without it for a day, but not for five months. Also, what worked six months ago may no longer work at all, because people react differently to the same measures over time; they grow tired of Corona. None of these elements appear in an epidemiological model.
In Simassocc, Frank combines epidemiological models with economic and behavioural models based on daily-life patterns and census data. This work has shown that basing policy on realistic models is more difficult, but no less important.
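Frank did not show the Simassocc code itself, but the intuition behind such agent-based models can be illustrated with a minimal sketch. The toy simulation below (in Python, with invented agent counts, contact numbers and infection probabilities that are our own assumptions and unrelated to Simassocc’s actual implementation) shows how agents who compensate for an evening curfew with extra daytime visits can blunt the contact reduction a purely epidemiological assumption would predict.

```python
import random

random.seed(42)

N_AGENTS = 1000        # hypothetical population size
DAYS = 60
INFECTION_PROB = 0.05  # hypothetical transmission chance per contact
RECOVERY_DAYS = 10
CURFEW_START = 20      # hypothetical: evening curfew from day 20


class Agent:
    def __init__(self):
        self.state = "S"           # S(usceptible), I(nfectious) or R(ecovered)
        self.days_infected = 0
        self.skipped_contacts = 0  # builds up the "need for contact"


def contacts_today(agent, curfew_active):
    """Toy behavioural rule: a curfew removes the 4 evening contacts,
    but agents with a built-up need for contact shift visits to the day."""
    day_contacts, evening_contacts = 4, 4
    if curfew_active:
        evening_contacts = 0
        agent.skipped_contacts += 4
        # Alternative behaviour: compensate with daytime visits (capped).
        day_contacts += min(agent.skipped_contacts // 8, 4)
    return day_contacts + evening_contacts


agents = [Agent() for _ in range(N_AGENTS)]
for seed_case in random.sample(agents, 10):  # 10 initial infections
    seed_case.state = "I"

for day in range(DAYS):
    curfew = day >= CURFEW_START
    newly_infected = []
    for agent in agents:
        if agent.state != "S":
            continue
        for _ in range(contacts_today(agent, curfew)):
            partner = random.choice(agents)
            if partner.state == "I" and random.random() < INFECTION_PROB:
                newly_infected.append(agent)
                break
    for agent in agents:  # progress existing infections
        if agent.state == "I":
            agent.days_infected += 1
            if agent.days_infected >= RECOVERY_DAYS:
                agent.state = "R"
    for agent in newly_infected:
        agent.state = "I"
    if day % 10 == 0:
        counts = {s: sum(a.state == s for a in agents) for s in "SIR"}
        print(f"day {day:2d}  curfew={curfew}  {counts}")
```

Running the sketch with and without the compensation step in contacts_today illustrates the point made in the masterclass: it is the behavioural response, not the measure on paper, that determines how much the spread actually slows.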
Read the Framework
“A Draft Framework for Responsible AI in times of Corona”
By Catelijne Muller, Virginia Dignum and Noah Schöppl