Op-ed: Responsible AI in times of Pandemic

Virginia Dignum, March 2020

Over the last few days, I’ve received several questions about the possible contributions of Artificial Intelligence (AI) to the analysis of the current situation and to understanding the possible effects of the different policies being put in place, such as local or national shutdowns, quarantine or social distancing. And, most specifically, about how to do AI responsibly.

To start, I would like to stress that AI can indeed contribute to understanding the current situation. The responsible thing for AI researchers and professionals to do at this moment is therefore to put our expertise to use in analysing and understanding the spread of the coronavirus, the possible effects of policies, the long-term impact on society and the economy, and other such questions.

AI is not magic

However, in order to do this responsibly, it is important to be open and clear about what AI can and cannot do. As I often stress, AI is not magic, nor is it a solution to all our problems. Some of the main requirements for the responsible development and use of AI include robustness, transparency and respect for human rights and principles.

Results from the past are no guarantee for the future

Firstly, systems need to be robust. We need to look carefully at the techniques and approaches used. In particular, the use of data-driven methods to forecast the spread of the coronavirus is potentially problematic. These methods ‘learn’ by correlating data from the past, and at this moment we just don’t have enough data about similar situations[1]. Results from the past are no guarantee for the future, especially when the future looks to be so very different from anything we knew in the past. Moreover, the little data that is available is incomplete and biased. For example, it is certain that there are many more infected persons than those that have been confirmed, and there is no complete accounting of those that have recovered. Using the existing data for machine learning approaches can lead to many false negatives and false positives.
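
To make the point about biased data concrete, here is a minimal, purely illustrative sketch in Python. All numbers are invented: it assumes a ‘true’ epidemic growing 20% per day and two hypothetical testing regimes, and shows how a detection rate that ramps up over time inflates the growth rate fitted from confirmed cases alone.

    import math

    # Illustrative (not real) daily infection counts growing 20% per day.
    true_cases = [100 * 1.2 ** day for day in range(10)]

    # Confirmed counts under two hypothetical testing regimes:
    # a constant detection rate, and one that ramps up as testing expands.
    constant_detection = [c * 0.10 for c in true_cases]
    ramping_detection = [c * (0.05 + 0.02 * day) for day, c in enumerate(true_cases)]

    def fitted_daily_growth(cases):
        """Least-squares slope of log counts, i.e. the apparent daily growth factor."""
        logs = [math.log(c) for c in cases]
        days = range(len(cases))
        mx = sum(days) / len(cases)
        my = sum(logs) / len(cases)
        slope = (sum((x - mx) * (y - my) for x, y in zip(days, logs))
                 / sum((x - mx) ** 2 for x in days))
        return math.exp(slope)

    print("true growth per day:           1.20")
    print(f"fitted, constant detection:    {fitted_daily_growth(constant_detection):.2f}")
    print(f"fitted, ramping-up detection:  {fitted_daily_growth(ramping_detection):.2f}")

In this toy setting, the growth rate fitted under expanding testing comes out noticeably above the true 20% per day, which is exactly the kind of artefact that can turn incomplete, biased data into misleading forecasts.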

Model driven AI

The current situation may be a case for which model-driven methods are more suitable than data-driven ones. Indeed, several groups are at this moment working on (agent-based) simulations. However, the selection of models needs to be grounded in sound research in epidemiology, sociology and psychology, together with suitable computational representations. Models of this type tend to be sensitive to design assumptions and initial parameters. Before proposing these systems to support policy makers, sensitivity analysis tests need to be conducted. From a research perspective, hybrid approaches that combine data-driven and model-driven methods are especially important to work on at the moment.
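
As an illustration of that sensitivity, the following is a minimal sketch of a discrete-time SIR model, not any of the actual simulations mentioned above; the population size, infectious period and reproduction numbers are assumed values chosen only for demonstration.

    def simulate_sir(r0, infectious_days=10, population=1_000_000,
                     initially_infected=100, days=300):
        """Minimal discrete-time SIR model; returns the peak number of simultaneously infected."""
        beta = r0 / infectious_days      # transmission rate per day (assumed)
        gamma = 1.0 / infectious_days    # recovery rate per day (assumed)
        s, i, r = population - initially_infected, initially_infected, 0
        peak = i
        for _ in range(days):
            new_infections = beta * s * i / population
            new_recoveries = gamma * i
            s -= new_infections
            i += new_infections - new_recoveries
            r += new_recoveries
            peak = max(peak, i)
        return peak

    # Small changes in the assumed reproduction number give very different peaks,
    # which is why sensitivity analysis is needed before advising policy makers.
    for r0 in (1.5, 2.0, 2.5):
        print(f"R0 = {r0}: peak ~ {simulate_sir(r0):,.0f} simultaneously infected")

Even this toy model shows the peak number of simultaneous infections varying by a factor of several between plausible reproduction numbers, which is why such assumptions need to be made explicit and systematically varied before results reach policy makers.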

Transparency

In either case, data-driven or model-driven, transparency is paramount. Which models and datasets have been used and why, which experts have been consulted, how has the system been tested and evaluated… Without explicit answers to these questions, results cannot be trusted. At this moment more than ever, the world cannot afford to take decisions based on ‘black boxes’. If your system cannot provide an explanation of its results, please think twice before considering using it to provide any analysis or forecast on the pandemic. At the same time, efforts to use AI to support the identification of fake news and to limit its spread are highly recommended.
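
One lightweight way to make those answers explicit is to publish a documentation record alongside any released model. The sketch below is hypothetical: the fields and values are not taken from any real system; they only illustrate the kind of information that should accompany results offered to policy makers.

    # A hypothetical, minimal "model documentation" record: none of these fields
    # or values come from a real system; they only show the kind of answers that
    # should be stated explicitly and published with the results.
    model_documentation = {
        "model": "agent-based simulation of regional contact networks (illustrative)",
        "why_this_model": "data too sparse for purely data-driven forecasting",
        "datasets": ["hypothetical regional census data", "hypothetical mobility survey"],
        "experts_consulted": ["epidemiology", "sociology", "psychology"],
        "testing_and_evaluation": "sensitivity analysis over initial parameters; "
                                  "comparison against published outbreak curves",
        "known_limitations": "under-reporting of cases; assumptions about compliance",
    }

    for question, answer in model_documentation.items():
        print(f"{question}: {answer}")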

Human rights, ethical principles and existing legislation

As a means to control the pandemic, many apps are being developed at this moment to collect data about individuals, their condition and their movements. Even though we can agree that such data is essential for decision making by governments and local authorities, and is of great value to the development of AI tools, we cannot forget the fundamental respect for human rights, ethical principles and existing legislation. Combating the pandemic does not mean that human rights can be forgotten. The rights to privacy and to respect for human dignity still hold. Applications that collect personal information need to respect these now as before. Ensuring proper data management and secure storage is maybe even more important now, when in many countries people have lost some of their freedoms. This does not mean that these apps cannot be developed and used, but that it needs to be done according to the prevailing ethical standards for research. Maintaining the trust of citizens at a time of crisis like the current one must be a priority. This includes respecting the requirements for the treatment of personal data. However, given the gravity of the situation, I would like to see research ethics boards prioritize and speed up the handling of applications concerning projects that aim to support the control of the pandemic.

Fairness and inclusion

Finally, we need to ensure fairness and inclusion. The systems being developed should be available to all and consider the societal and cultural differences of the different countries affected. Now is the time for open source and open access. We are in this together; let’s make sure that all can use and benefit from any support that AI can provide.