FACE MASK DETECTION

Scope of Use (COVERT)
  • Numerous face mask detection applications are being developed
  • Developers widely share methods for training a face mask detection system (e.g. via GitHub)
  • A number of commercial organisations actively offer face mask detection systems
  • Actual public or private use of these systems is difficult to determine
  • There are indications that face mask detection is tested and/or used covertly, e.g. by governments.
Technological robustness and efficacy
  • Accurate face mask detection is difficult to achieve: research shows these systems generalise poorly
  • Such a system works well in a controlled environment where people are well lit, look straight into the camera, pass at an even pace and wear the same type of mask.
  • It is well known that these systems are less accurate in real-life settings (e.g. poor lighting, crowds, people wearing scarves, hoodies, caps or glasses); a sketch of a typical pipeline follows this list.
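
To make this concrete, below is a minimal sketch of how such detection pipelines are typically assembled: a stock face detector feeding crops to a binary mask/no-mask classifier. This is an illustration under assumptions, not the RATP system or any vendor's product; the model file 'mask_classifier.h5' and the 128x128 input size are hypothetical placeholders.

    # Minimal sketch of a typical two-stage mask detection pipeline.
    # NOT the RATP system: 'mask_classifier.h5' and the 128x128 input
    # size are hypothetical placeholders.
    import cv2
    import numpy as np
    import tensorflow as tf

    # Stage 1: OpenCV's stock frontal-face Haar cascade. It only fires
    # reliably on well-lit, unoccluded faces looking straight at the
    # camera -- i.e. the controlled setting described above.
    face_detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )

    # Stage 2: a hypothetical pre-trained binary classifier that maps a
    # face crop to P(mask worn), assumed to output a single sigmoid unit.
    classifier = tf.keras.models.load_model("mask_classifier.h5")

    def detect_masks(frame_bgr):
        """Return (x, y, w, h, p_mask) for each face found in a frame."""
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        results = []
        for (x, y, w, h) in faces:
            crop = cv2.resize(frame_bgr[y:y + h, x:x + w], (128, 128))
            crop = cv2.cvtColor(crop, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
            p_mask = float(classifier.predict(crop[np.newaxis], verbose=0)[0][0])
            results.append((x, y, w, h, p_mask))
        return results

Note that a face turned away from the camera, or occluded by a scarf or hood, typically never passes the first stage at all, so the system silently undercounts rather than failing visibly. This is one reason why accuracy measured in a controlled demo does not transfer to a busy station.
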
Impact on citizens and society
  • We consider this type of AI use to have a predominantly negative impact on citizens and society.
  • It could set an undesired precedent for the future: more (acceptance of) surveillance.
  • There is significant impact on the human right to a private life and on democracy.
  • Compliance with the GDPR is difficult if not impossible (e.g. people cannot consent).
  • There is impact on human agency and free will, as submission to the system can only be avoided by not visiting certain places.
Governance and Accountability
  • Unable to determine whether an appropriate legal basis exists, because actual use is unknown (‘under the radar’)
  • Unknown whether purpose limitation and, for example, a sunset clause are in place.
  • Any form of publicly available documentation on the actual use of face mask detection systems is lacking
  • Submission to these systems is not voluntary (because the public is not made aware of the use of the system).
Acceptable trade-offs in times of crisis

Given the intrusiveness of facial recognition applications and the fact that multiple governments and legislative bodies have implemented, are developing or are calling for strict regulation or even a ban on facial recognition, we see no acceptable trade-off at this stage for the use of face mask detection cameras.

Face Mask Detection Test at the Paris Châtelet-Les Halles Metro Station

The French public transport operator RATP cut short a test of mask detection at its Paris metro station Châtelet-Les Halles after criticism from, among others, the French privacy watchdog CNIL. CNIL predominantly looked at the impact of the technology on human rights and the GDPR, and called for vigilance in the use of these kinds of surveillance technologies.

We have tested the technology against the 24 requirements of the “Framework for Responsible AI in times of Corona”, divided into 3 categories: (i) Society; (ii) Technology; and (iii) Governance.

Each requirement is given a score from ‘0’ to ‘3’:

  • 0 = unknown
  • 1 = no compliance
  • 2 = partial compliance
  • 3 = full compliance
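
The scheme maps naturally onto a small data structure for tallying results per category. The sketch below is purely illustrative: the requirement names and scores are made-up placeholders, not the framework's actual 24 requirements or this assessment's outcomes.

    # Illustrative tally of the 0-3 scoring scheme described above.
    # Requirement names and scores are placeholders, not ALLAI's actual
    # 24 requirements or assessment results.
    SCORE_LABELS = {0: "unknown", 1: "no compliance",
                    2: "partial compliance", 3: "full compliance"}

    scores = {
        "Society":    {"privacy impact": 1, "explicability": 0},
        "Technology": {"problem definition": 2, "generalisability": 0},
        "Governance": {"legal basis": 1, "documentation": 0},
    }

    for category, requirements in scores.items():
        tally = {label: 0 for label in SCORE_LABELS.values()}
        for score in requirements.values():
            tally[SCORE_LABELS[score]] += 1
        print(category, tally)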

USE CASE

Society

  • Face mask detection could set undesired precedent: more (acceptance of) surveillance
  • Impact on human right to a private life
  • Chilling effect of constantly being surveilled
  • Risk of generalizing a feeling of surveillance among citizens
  • Risk of habituation and trivialization of intrusive technologies
  • Risk of generating increased surveillance
  • Together, these risks undermine the proper functioning of our democratic society
  • No compliance with GDPR
  • “Shaking head” is no solution for withholding consent
  • Subjection to the AI system could only be avoided by not using the Châtelet-Les Halles metro station, thus limiting people’s freedom of movement.
  • Behavioural ‘chilling-effect’ on people who could feel ‘watched’ all the time
  • Unknown whether the ‘decisions’ of the application were explicable
  • Unknown whether the application was ‘fair’ in the sense of free from bias and unequal treatment
  • Unknown whether the application was inclusive (‘accessible’) to all

Technology

  • Problem definition has several elements:
    • Detecting people wearing or not wearing a mask in public spaces
    • Managing health security for staff and the public
    • Distributing masks adapted to needs and promoting mask wearing in public space
    • Providing a real-time estimate of the number of travellers who comply with health instructions
  • Unclear whether alternative solutions (e.g. counting by hand) were investigated
  • Unknown whether goals were reached; the test was abandoned prematurely
  • Unknown (the test was abandoned prematurely)
  • Unknown whether potential adverse effects were identified
  • Level of resilience to (cyber)attacks: unknown
  • Proper mask wearing is correctly detected only in a limited setting (the demo video shows four white men directly facing the camera, passing at an even pace, with and without a properly worn mask, all wearing the same mask)
  • Unknown, but unlikely, that the system generalises to ‘real life’ settings (faces not directly facing the camera, people passing at different speeds, scarves or hoods, darker skin tones, poor lighting, crowds, etc.); see the evaluation sketch after this list
  • Unknown
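
Whether a system generalises can be tested empirically with a stratified evaluation: measuring accuracy separately per capture condition rather than as one aggregate number. Below is a minimal sketch; the condition tags and sample data are invented for illustration and are not RATP results.

    # Sketch of a per-condition ("stratified") evaluation that would expose
    # generalisability gaps. All data below is made up for illustration; a
    # real test set would tag each sample with its capture conditions.
    from collections import defaultdict

    # Each record: (condition tag, mask actually worn?, model prediction)
    samples = [
        ("frontal, well lit", True, True),
        ("frontal, well lit", False, False),
        ("angled, poor light", True, False),
        ("scarf or hood", True, False),
        ("crowded frame", False, True),
    ]

    correct = defaultdict(int)
    total = defaultdict(int)
    for condition, truth, prediction in samples:
        total[condition] += 1
        correct[condition] += int(prediction == truth)

    for condition in total:
        accuracy = correct[condition] / total[condition]
        print(f"{condition}: {accuracy:.0%} accuracy (n={total[condition]})")

A system that scores well only on the “frontal, well lit” slice exhibits exactly the brittleness described above, even if its overall average looks acceptable.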

Governance

  • No legal basis in French law for facial recognition
  • Both unknown because of unclear problem definition and multiple goals
  • Unknown whether use of the system and its in- and outputs were limited to a specific goal of tackling a defined problem
  • Unknown whether the public was made aware of the use of the application
  • No voluntary submission to the application other than by avoiding the metro station altogether
  • “Shaking head” is insufficient to establish non-consent (CNIL)
  • The project was an experiment and was abandoned prematurely
  • No indication that it would be dismantled after the crisis
  • Unknown
  • No public information on whether the use of the system was documented for accountability purposes

EXPERT VIEW

Catelijne Muller, LL.M.

Co-founder and President of ALLAI, former member of the EU High-Level Expert Group on AI, AI Rapporteur for the EESC, and AI advisor to the Council of Europe.

“The face mask detection system of the RATP shows an overall low score, which is not surprising since the test was prematurely abandoned due to strong criticism from the French privacy watchdog CNIL.

CNIL, however, only looked at the privacy and democracy impact of the system, while this assessment shows that the system is also technically very brittle, or at least not properly tested for generalisability. This means that while it may perform accurately on white males wearing the same mask and looking straight into the camera, it will not necessarily do so on, for example, dark-skinned women wearing a scarf and passing at high speed.

Notwithstanding this technical brittleness, even a fully functioning face mask detection system poses significant ethical and legal risks and will set an undesired precedent for the future.

One of the main issues with this particular use case is the lack of publicly available information on the system. This shows that without adherence to the requirements of transparency and accountability, the AI application cannot meet many of the other requirements either.”

Listen to the first episode of the ALLAI Podcast, in which Catelijne speaks about this use case.