Bias and discrimination from AI systems risk worsening existing health inequity during the fight against COVID-19. Are these systems helping society, or damaging it?
After a year of lockdown, much praise has been given to AI for its crucial role in combating COVID-19. AI has helped society weather the deadly waves of the pandemic by predicting the spread of the virus, helping with diagnostics, accelerating vaccine discovery, and more [1,2,3]. Yet, despite these successes, researchers argue that bias and discrimination persist in the design and use of these systems. These biases have a severe negative impact on minority populations, especially during times of crisis.
“If not properly addressed, propagating these biases under the mantle of AI has the potential to exaggerate the health disparities faced by minority populations already bearing the highest disease burden,” wrote Eliane Röösli in the Journal of the American Medical Informatics Association.
Minority populations, i.e. minoritized ethnicities, immigrants, and socioeconomically disadvantaged groups, often have limited and lower-quality access to healthcare. These groups are more likely to become infected and are therefore at a higher risk of death. Although this is already a major societal problem, it gets worse when the data obtained from an already discriminatory healthcare process gets combined with a biased AI system.
How can an AI be biased?
It is easy to assume that an AI system can make accurate predictions based on the great amount of data being fed into it. However, it turns out that the reason behind AI's successes is also the reason behind its failures: datasets. Bias can sneak into datasets in many ways. If the data is not collected correctly or is shaped by discriminatory institutional processes, it can become unrepresentative, discriminatory, and biased.
During the pandemic, for example, health records collected about COVID-19 patients disproportionately represent wealthier patients who have access to “digitally mature” hospitals, because datasets require a degree of quality and integrity that only well-resourced hospitals can provide. Likewise, if COVID-related data is collected through contact-tracing apps, it does not represent people who lack access to smartphones. This unrepresentativeness poses a problem because the prevalence of diseases varies across population groups. If data from minority groups isn't included when medical decisions are made, those groups will be negatively affected by these systems.
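The representativeness problem described above can be sketched in a few lines of Python. Everything here is hypothetical: the `representation_gaps` function, the toy records, and the population shares are a minimal illustration of the idea, not a real audit tool.

```python
from collections import Counter

def representation_gaps(records, population_shares):
    """Compare each group's share of a dataset against its share of the
    real population. A large negative gap flags an under-represented group."""
    counts = Counter(r["group"] for r in records)
    total = sum(counts.values())
    return {
        group: counts.get(group, 0) / total - share
        for group, share in population_shares.items()
    }

# Toy data: group B makes up 30% of the population,
# but only 10% of the collected health records.
records = [{"group": "A"}] * 90 + [{"group": "B"}] * 10
gaps = representation_gaps(records, {"A": 0.70, "B": 0.30})
print(gaps)  # group B's gap is roughly -0.20: badly under-represented
```

A model trained on these records would see group B a third as often as it should, which is exactly how skewed data collection (wealthier hospitals, smartphone-only contact tracing) turns into skewed predictions.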
Why is COVID-19 making the situation worse?
Developing an AI system is a laborious process, but the demand for a rapid response to the virus sped up the task. The rush and the urgency to find solutions have also led to a power imbalance in setting research agendas. Why? Because those who set research agendas rarely belong to the groups negatively affected by a biased AI, as diversity statistics in the AI industry show [5, 7]. As a result, when a system is developed under time constraints or during a crisis, developers and deployers overlook the measures needed to ensure that the product is unbiased and used responsibly.
A shocking example of this is the misuse of AI by the United States' prison system, which repurposed AI tools that predict recidivism to determine which inmates should be released to home confinement. Given the strong racial bias of these tools, Black inmates were less likely to be released, leaving them exposed to a higher risk of COVID-19 infection and, consequently, death. This example clearly shows how misusing AI systems to manage COVID-19 can lead to fatal consequences.
What can be done?
The worsening of health inequity demands an urgent call for research and regulation against AI bias. The rapid development and application of these technologies in healthcare leave no choice but to fight against a discriminatory process that puts millions of lives at risk. Researchers, companies, and governments should focus on integrating diligent bias detection protocols and prioritise inclusive technology policies that work even during times of crisis. Finally, we must deal with the root of the problem: systemic racism, wealth disparities, and other structural inequalities, as these are the major sources of the bias that sneaks into our datasets and AI systems.
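One concrete piece of such a bias detection protocol is routinely comparing a model's error rate across demographic groups before deployment. The sketch below is a hypothetical, minimal example of that kind of check, with made-up predictions; a real fairness audit would use richer metrics and real evaluation data.

```python
def error_rates_by_group(y_true, y_pred, groups):
    """Per-group error rate: a large spread across groups is a simple
    first signal that a model may be treating some groups worse."""
    stats = {}
    for t, p, g in zip(y_true, y_pred, groups):
        correct, total = stats.get(g, (0, 0))
        stats[g] = (correct + (t == p), total + 1)
    return {g: 1 - c / n for g, (c, n) in stats.items()}

# Made-up labels and predictions: the model errs far more on group "B".
y_true = [1, 0, 1, 1, 0, 1, 1, 0]
y_pred = [1, 0, 1, 0, 1, 0, 0, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(error_rates_by_group(y_true, y_pred, groups))
# A gap like this (0.25 vs 1.0) should block deployment pending review.
```

Running a check of this kind on every release, with results reviewed by people accountable for the decision, is the sort of diligence that tends to be skipped when systems are rushed out during a crisis.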