“The face mask detection system of the RATP shows an overall low score, which is not surprising since the test was prematurely abandoned due to strong criticism by the French privacy watchdog CNIL.
CNIL, however, only examined the privacy and democracy impact of the system, while this assessment shows that the system is also technically very brittle, or at least not properly tested for generalisability. This means that it may perform accurately on white males wearing the same masks and looking straight into the camera, but not, for example, on dark-skinned women wearing a scarf and passing by at high speed.
Notwithstanding this technical brittleness, even a fully functioning face mask detection system poses significant ethical and legal risks and will set an undesired precedent for the future.
One of the main issues with this particular use case is the lack of publicly available information on the system. This shows that without adherence to the requirements of transparency and accountability, an AI application cannot meet many of the other requirements either.”
Listen to the first episode of the ALLAI Podcast, in which Catelijne speaks about this use case.