The Fatal Combination of AI Bias, Health Inequality and COVID-19

Mónica Fernández-Peñalver, Feb. 2022

First published in Turning Magazine, Ed. 5, 2022


Bias and discrimination from AI systems risk worsening existing health inequity during the fight against COVID-19. Are these systems helping society, or damaging it?


After a year of lockdowns, a lot of praise has been given to AI for its crucial role in combating COVID-19. AI has helped society weather the deadly waves of the pandemic by predicting the spread of the virus, assisting with diagnostics, accelerating vaccine discovery and more [1,2,3]. Yet, despite these successes, researchers argue that bias and discrimination persist in the design and use of these systems. These biases fall hardest on minority populations, especially during times of crisis.

“If not properly addressed, propagating these biases under the mantle of AI has the potential to exaggerate the health disparities faced by minority populations already bearing the highest disease burden,” wrote Eliane Röösli in the Journal of the American Medical Informatics Association [4].

Minority populations, i.e. minoritized ethnicities, immigrants, and socioeconomically disadvantaged groups, have limited and lower-quality access to healthcare. These groups are more likely to become infected and are therefore at a higher risk of death. This is already a major societal problem, but it gets worse when data obtained from an already discriminatory healthcare process is combined with a biased AI system.

How can an AI be biased?

It is easy to assume that an AI system can make accurate predictions based on the great amount of data it is fed. However, it turns out that the reason behind AI's successes is also the reason behind its failures: datasets. Bias can sneak into datasets in many ways. If the data is not collected properly or is shaped by discriminatory institutional processes, it can become unrepresentative, discriminatory, and biased.

During the pandemic, for example, the health records collected about COVID-19 patients disproportionately represent the upper class, who have access to “digitally mature” hospitals (because datasets require a degree of quality and integrity that only well-resourced hospitals can provide) [5]. Likewise, COVID-related data collected through contact-tracing apps does not represent people who lack access to smartphones. Unrepresentative data is a problem because the prevalence of diseases varies by population group. If data from minority groups isn't used when making medical decisions, those groups will be negatively affected by these systems.
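To make the mechanism concrete, here is a minimal sketch in Python (using NumPy and scikit-learn, with entirely synthetic data; the groups, numbers, and the `make_group` helper are illustrative assumptions, not anything from the studies cited above). A model is trained on data dominated by one group, then evaluated on both:

```python
# A minimal sketch (hypothetical, synthetic data) of how an
# unrepresentative training set can hurt an underrepresented group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Simulate one group: a single risk feature and a disease label
    whose relationship to the feature differs by `shift`."""
    x = rng.normal(0, 1, size=(n, 1))
    y = (x[:, 0] + shift + rng.normal(0, 0.5, size=n) > 0).astype(int)
    return x, y

# Group A dominates the training data; group B is underrepresented
# and has a different feature-label relationship.
xa, ya = make_group(5000, shift=0.0)
xb, yb = make_group(100, shift=1.0)

X = np.vstack([xa, xb])
y = np.concatenate([ya, yb])

model = LogisticRegression().fit(X, y)

# Evaluate on fresh samples from each group.
for name, shift in [("A (majority)", 0.0), ("B (minority)", 1.0)]:
    xt, yt = make_group(2000, shift)
    pred = model.predict(xt)
    # False negative rate: sick patients the model misses.
    fnr = np.mean(pred[yt == 1] == 0)
    print(f"Group {name}: false negative rate = {fnr:.2%}")
```

Because the model learns the majority group's relationship between the risk feature and the disease, its decision threshold is wrong for the minority group, and the false negative rate for that group comes out several times higher.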


Why is COVID-19 making the situation worse?

Developing an AI system is a laborious process, but the demand for a rapid response to the virus sped up the task. The rush and the urgency to find solutions have also led to a power imbalance in setting the agendas. Why? Because those who set research agendas rarely belong to the groups negatively affected by biased AI, as diversity statistics in the AI industry show [5, 7]. So when a system is developed under time constraints or during a crisis, developers and deployers overlook the measures needed to ensure that the product is responsibly used and unbiased.

A shocking example of this is the misuse of AI by the United States' prison system, which repurposed AI tools that predict recidivism to determine which inmates should be released to home confinement [6]. Given the well-documented racial bias of these tools, Black inmates were less likely to be released and were left exposed to a higher risk of COVID-19 infection and, consequently, death. This example clearly shows how misusing AI systems to manage COVID-19 can lead to fatal consequences.


What can be done?

The worsening of health inequity demands urgent research and regulation against AI bias. The rapid development and deployment of these technologies in healthcare leave no choice but to fight a discriminatory process that puts millions of lives at risk. Researchers, companies, and governments should integrate diligent bias detection protocols and prioritise inclusive technology policies that hold up even during times of crisis. Finally, we must deal with the root of the problem: systemic racism, wealth disparities, and other structural inequalities, as these are the major sources of the bias that sneaks into our datasets and AI systems.
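As an illustration of what such a bias detection protocol could start with, here is a minimal sketch in Python (the function name, tolerance, and data are hypothetical assumptions, not a standard from the article): it compares false negative rates across demographic groups and flags the model when the gap exceeds a tolerance.

```python
# A minimal sketch of one check a bias detection protocol could
# include: compare error rates across demographic groups and flag
# gaps above a tolerance. Group names, threshold, and data are
# illustrative assumptions, not a standard from the article.
import numpy as np

def audit_false_negative_rates(y_true, y_pred, groups, tolerance=0.05):
    """Report the false negative rate per group and flag the model
    if the gap between the best- and worst-served group exceeds
    `tolerance` (one slice of an 'equalized odds' style check)."""
    rates = {}
    for g in np.unique(groups):
        mask = (groups == g) & (y_true == 1)          # sick patients in group g
        rates[g] = float(np.mean(y_pred[mask] == 0))  # ...that the model missed
    gap = max(rates.values()) - min(rates.values())
    return rates, gap, gap > tolerance

# Illustrative usage with made-up predictions.
y_true = np.array([1, 1, 1, 1, 0, 0, 1, 1, 1, 0])
y_pred = np.array([1, 1, 0, 1, 0, 0, 0, 0, 1, 0])
groups = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

rates, gap, flagged = audit_false_negative_rates(y_true, y_pred, groups)
print(rates, f"gap={gap:.2f}", "FLAGGED" if flagged else "ok")
```

A real protocol would go much further, covering multiple error metrics, subgroup intersections, and the data collection process itself, but even a check this simple makes disparities visible before deployment.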

[1] Malik, Y. S., Sircar, S., Bhat, S., Ansari, M. I., Pande, T., Kumar, P., … & Dhama, K. (2021). How artificial intelligence may help the Covid‐19 pandemic: Pitfalls and lessons for the future. Reviews in Medical Virology, 31(5), 1-11.

[2] Belfiore, M. P., Urraro, F., Grassi, R., Giacobbe, G., Patelli, G., Cappabianca, S., & Reginelli, A. (2020). Artificial intelligence to codify lung CT in Covid-19 patients. La Radiologia Medica, 125(5), 500-504.

[3] Keshavarzi Arshadi, A., Webb, J., Salem, M., Cruz, E., Calad-Thomson, S., Ghadirian, N., … & Yuan, J. S. (2020). Artificial intelligence for COVID-19 drug discovery and vaccine development. Frontiers in Artificial Intelligence, 3, 65.

[4] Röösli, E., Rice, B., & Hernandez-Boussard, T. (2021). Bias at warp speed: how AI may contribute to the disparities gap in the time of COVID-19. Journal of the American Medical Informatics Association, 28(1), 190-192.

[5] Leslie, D., Mazumder, A., Peppin, A., Wolters, M. K., & Hagerty, A. (2021). Does “AI” stand for augmenting inequality in the era of covid-19 healthcare? BMJ, 372.

[6] Partnership on AI. Why PATTERN should not be used: The perils of using algorithmic risk assessment tools during COVID-19. https://partnershiponai.org/why-pattern-should-not-be-used-the-perils-of-using-algorithmic-risk-assessment-tools-during-covid-19/

[7] West, S. M., Whittaker, M., & Crawford, K. (2019). Discriminating Systems: Gender, Race, and Power in AI. AI Now Institute.

Literature review: How AI Bias Is Exacerbating Health Inequality in Times of Corona

An analysis of the literature suggests that AI bias is exacerbating inequality in the fight against COVID-19. Findings suggest that the rush to find technical solutions has been the main source of bias, giving rise to two further sources: poor methodological conduct and the repurposing of AI systems. In this review, I evaluated how these situations can exacerbate health inequality in different ways, as well as possible measures that can be taken in order to mitigate AI bias through research and policy.

Read the review >>