Many studies have revealed health inequalities: populations such as front-line workers, marginalized communities, and racial minority groups face a higher risk of morbidity from COVID-19 [1, 2, 3, 4, 5].
Part of this vulnerability is linked to a higher prevalence of comorbidities (e.g. high BMI, respiratory diseases), which have also been identified as major risk factors for illness and death from COVID-19 [5, 6].
Researchers have therefore been working on developing AI models that can identify people in the general population at risk of COVID-19 infection or of hospital admission with the disease.
These models may work by taking multiple comorbidities or social factors (e.g. demographics, occupation, etc.) into account.
The methodologies differ widely. One researched model uses thermal cameras and face recognition techniques to detect abnormal breathing. Other models use medical datasets (e.g. hospital admissions for non-tuberculosis pneumonia, influenza, or acute bronchitis) as proxy outcomes to determine vulnerability. Lastly, another model used demographics, symptoms, and contact history collected via a mobile app to assist general practitioners in gathering data and to risk-stratify patients.
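To make concrete how such models combine comorbidities and social factors into a risk estimate, the following is a purely illustrative sketch of a logistic-style risk score. It does not reproduce any of the published models discussed here; all feature names, weights, and the threshold are invented for illustration.

```python
import math

# Hypothetical weights for binary risk factors (invented for illustration,
# not taken from any published COVID-19 prediction model).
WEIGHTS = {
    "age_over_65": 1.2,
    "high_bmi": 0.8,
    "respiratory_disease": 1.0,
    "front_line_worker": 0.6,
}
INTERCEPT = -3.0  # baseline log-odds when no risk factor is present

def covid_risk_score(features):
    """Return a probability-like score in (0, 1) from binary features."""
    z = INTERCEPT + sum(WEIGHTS[name] for name, present in features.items() if present)
    return 1.0 / (1.0 + math.exp(-z))  # logistic (sigmoid) function

def risk_stratify(score, threshold=0.5):
    """Map a score to a coarse risk category (threshold is arbitrary)."""
    return "high" if score >= threshold else "low"
```

In practice such a model would be fitted to data rather than hand-weighted, and, as the assessment below notes, the choice of participants, outcomes, and analysis strongly determines whether the resulting scores are biased.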
Technological robustness and efficacy (no evidence)
Given the complexity of the interplay between social factors and comorbidities, predicting population vulnerability is not an easy task.
Although the models mentioned above claim high accuracy, the PRECISE living review revealed that all of them had an unclear or high risk of bias regarding participant selection, outcome definition, and analysis.
Impact on citizens and society (blue=pos, red=neg)
Given that most of the models we found were poorly developed, relied on weak methodology, or displayed technical biases, we consider this type of AI use to have a predominantly negative impact on citizens and society at this point.
Using models with a high risk of bias can exacerbate health inequality and other social inequalities. During the COVID-19 pandemic, this can even lead to harmful or fatal consequences.
Furthermore, the use of invasive techniques such as facial recognition and biometric identification in public places entails further negative impacts on citizens and society (see the assessments on COVID risk detection and face mask detection to learn more).
Governance and Accountability
Sharing data and expertise for the validation and updating of COVID-19-related prediction models is urgently needed.
The issues concerning bias should be addressed by having researchers adhere to reporting guidelines (e.g. TRIPOD or MINIMAR) to improve reporting and guide modelling choices, and by assessing risk of bias (e.g. using PROBAST).
At the policy level, policy makers, public health officials, and AI developers should come together with affected stakeholders to determine how to include vulnerable populations in policies, initiatives, and innovations.
Assessment prepared by Mónica Fernández-Peñalver
Amsterdam Science Park 900
1098 XH Amsterdam