PEOPLE COUNTING CAMERAS

Scope of Use
  • Since the start of the pandemic, there has been an increased use of AI-based surveillance technology to monitor the number of people in establishments and enforce social distancing.
  • The technology is being applied in shops, schools, and public spaces.
  • MOBOTIX, LinkVision, Hikvision, V-count and Canon are a few of the many companies offering this technology.
  • AI-based counting technologies are part of the 43 AI-based surveillance measures that have been adopted in 27 countries. Europe introduced more surveillance measures than any other region.
Technological robustness and efficacy
  • Companies offering AI-driven counting sensors claim they provide an efficient way of counting people in real-time to help with social distancing.
  • Different technologies can be used for the same purpose (counting people) but diverge in their efficiency and limitations. Wi-Fi tracking and thermal cameras, for example, are less efficient than optical vision sensors; optical vision sensors, however, raise issues of privacy and data security.
Impact on citizens and society
  • Depending on the methodology and extra features, people counting technologies differ in their intrusiveness.
  • People counting cameras, for example, can contain additional AI-based features, e.g. face recognition or gender detection, that make them more intrusive than systems that simply count people. Such features exacerbate their negative impact on privacy and autonomy due to the processing of sensitive personal data.
  • People counting technologies create an unavoidable ‘chilling’ effect, especially if applied in public spaces.
Governance and Accountability
  • AI-based surveillance cameras installed in public spaces without transparency about their workings are not in compliance with the GDPR.
  • Although there seems to be no clear and specific legislation put in place for these AI-based surveillance technologies, the responsibility to comply with policy and current legislation falls upon the controller of the AI.
  • Given the extensive capabilities of some people counting technologies, it is unknown whether additional data is collected beyond what they are strictly designed for, or what is strictly necessary.
Acceptable trade-offs in times of crisis

The use of people counting technologies has been justified by many European countries amid the pandemic as an aid to social distancing measures. In airports and train stations, for example, analysing the number of people present can help optimize passenger flow, enable real-time queue management and improve operations. However, some of these technologies include intrusive AI-based features such as face recognition or gender detection, which come at a significant cost to people’s privacy and autonomy. Furthermore, deploying these systems in public spaces without clear communication and transparency about their workings does not comply with legal and policy frameworks such as the GDPR.
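
To make the counting mechanism concrete, the sketch below shows the basic logic such a system relies on: an occupancy counter driven by entry/exit events that flags when a space exceeds its distancing threshold. This is a minimal illustration in Python; the class and method names are our own assumptions, not any vendor’s actual API.

    from dataclasses import dataclass

    @dataclass
    class OccupancyMonitor:
        capacity: int       # maximum allowed occupancy for the space
        occupancy: int = 0  # current estimated head count

        def on_entry(self) -> None:
            # Called when the sensor detects a person crossing inwards
            self.occupancy += 1

        def on_exit(self) -> None:
            # Called when the sensor detects a person crossing outwards
            self.occupancy = max(0, self.occupancy - 1)

        def over_capacity(self) -> bool:
            # True when the space exceeds its distancing threshold
            return self.occupancy > self.capacity

    monitor = OccupancyMonitor(capacity=50)
    monitor.on_entry()
    monitor.on_entry()
    monitor.on_exit()
    print(monitor.occupancy, monitor.over_capacity())  # -> 1 False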

People counting cameras at Leiden University

Leiden University deployed 350 AI-driven ‘counting sensors’ manufactured by Xovis during the lockdown of early 2021. The sensors were installed in classrooms and corridors to monitor the number of people on university premises. However, they can do much more than count: Xovis offers AI extensions such as gender statistics, view direction, face mask detection, staff exclusion, and group counting. How much data the user sees, and how anonymous it is, depends on the settings at which the sensors are used. Leiden University claims to use them at a level where people are identifiable only as silhouettes, and guarantees that no specific characteristics of people are registered. However, the university only informed its students and staff about the sensors after their use had been exposed by Mare, the university’s independent weekly magazine. The excessive capabilities of the technology, and the fact that the university did not inform its students and staff for over a year, give reason for concern about the use of such an intrusive technology.
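
As a purely hypothetical illustration of how a configurable data protection level could determine what leaves a sensor, consider the sketch below. The level names and output fields are assumptions derived from the description above; they are not Xovis’s actual settings or API.

    from enum import Enum

    class PrivacyLevel(Enum):
        COUNT_ONLY = 1  # only aggregate counts leave the sensor
        SILHOUETTE = 2  # anonymised silhouettes, the level Leiden University reports using
        FULL_VIDEO = 3  # raw imagery: the most intrusive setting

    def exported_data(level: PrivacyLevel, frame_stats: dict) -> dict:
        # Return only the fields permitted at the configured level
        if level is PrivacyLevel.COUNT_ONLY:
            return {"count": frame_stats["count"]}
        if level is PrivacyLevel.SILHOUETTE:
            return {"count": frame_stats["count"],
                    "silhouettes": frame_stats["silhouettes"]}
        return frame_stats  # FULL_VIDEO exposes everything

    stats = {"count": 12, "silhouettes": ["..."], "raw_frame": b"..."}
    print(exported_data(PrivacyLevel.SILHOUETTE, stats))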

We have assessed the use of people counting cameras at Leiden University against the 24 requirements of the “Framework for Responsible AI in times of Corona”, divided into 3 categories: (i) Society; (ii) Technology; and (iii) Governance.

Each requirement is given a score from ‘0’ to ‘3’:

  • 0 = unknown
  • 1 = no compliance
  • 2 = partial compliance
  • 3 = full compliance
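
For illustration, the sketch below shows one way such per-requirement scores could be aggregated per category on this 0–3 scale; the scores in it are placeholders, not the actual assessment results.

    SCALE = {0: "unknown", 1: "no compliance",
             2: "partial compliance", 3: "full compliance"}

    scores = {
        "Society":    [1, 2, 0],  # placeholder requirement scores
        "Technology": [2, 1, 1],
        "Governance": [1, 0, 2],
    }

    for category, values in scores.items():
        labels = [SCALE[v] for v in values]
        known = [v for v in values if v != 0]          # 'unknown' is excluded
        mean = sum(known) / len(known) if known else 0
        print(f"{category}: {labels} -> mean {mean:.1f} / 3")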

USE CASE

Society

  • People counting cameras could set an undesired precedent: more (acceptance of) surveillance
  • The lack of transparency from the university has left students and staff feeling disrespected, eroding trust
  • The extra AI-based features make some students and staff feel intimidated and unsafe
  • There is a ‘whistle-blower mechanism’ available at Xovis and at Leiden University
  • However, a human rights impact assessment did not take place prior to the system’s deployment
  • The technology impacts the human right to privacy and autonomy
  • No consent was given by students or staff, posing a threat to democratic values and creating a risk of systemic failure or disruption.
  • Although the application does provide correct information, it is only available to the controller.
  • Images used to detect the presence of a silhouette do not leave the sensor and are deleted within 0.2 seconds (see the sketch after this list)
  • Despite this deletion, the images still undergo personal data processing
  • Students and staff were not informed about this data processing, nor did they give consent.
  • The user can set the data protection level of the sensors, giving a sense of user autonomy.
  • However, once the setting is in place, assurance rests solely on trust in the AI’s ability to accurately detect features.
  • Xovis explains the different functionalities of the system, but does not provide a detailed explanation of the type of AI used to make its predictions.
  • The system provides the user with visualisation maps in real-time showing an accurate representation of the tracking situation [6]
  • The system does not show signs of unfair bias towards individuals or groups
  • The AI system is used for the monitoring of all, irrespective of age, demographic, disability, language, digital literacy, and financial capacity
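
The on-device processing described above (silhouette detection, with images deleted within 0.2 seconds) can be sketched as follows. The detector is a stub and all function names are illustrative assumptions; the point is that only the aggregate count ever leaves the device in this model.

    import time

    def detect_silhouettes(frame: bytes) -> int:
        # Stand-in for the sensor's on-device vision model
        return 0

    def process_frame(frame: bytes) -> int:
        # Derive a head count from a raw frame, then discard the frame
        captured = time.monotonic()
        count = detect_silhouettes(frame)
        del frame  # the raw image never leaves the device
        assert time.monotonic() - captured < 0.2  # retention bound
        return count  # only this aggregate is exported

    print(process_frame(b"raw-sensor-frame"))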

Technology

  • Counting scanners are used to count incoming students, employees and visitors to help comply with regulatory measures deployed to decrease the rate of Covid-19 infections.
  • Infrared or manual thermometers are more effective at an individual level
  • They are also less intrusive
  • The counting sensor significantly contributes to a solution for the problem by counting the number of people.
  • However, whether measures are being taken in response to the outcome of the system is unknown. The deployment of the system would only make sense if there were appropriate measures taken in response.
  • The university identified students’ concerns only after a protest held on the 8th of December 2021; it had not tried to identify such adverse effects itself, for example by consulting students prior to installation.
  • Level of resilience to (cyber)attacks: low
  • Until recently, the login page of the cameras was accessible via the public internet without protection, and the data the cameras collected was “protected” only by an unencrypted password
  • The producer claims 99% accuracy, but there is no information confirming that this holds when the sensors are employed on educational premises
  • Xovis cameras were designed for use at airports, in retail, and in transportation; we have not found external validation tests confirming the claimed accuracy in these settings.
  • It is possible to assess the AI’s judgement by comparing it to manual counting, although not at the same rate as the AI (see the sketch below)
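
A validation along the lines of the last two bullets could look like the sketch below: paired sensor and manual counts are compared and an overall accuracy is derived. The numbers are illustrative, not measured data.

    sensor_counts = [42, 57, 31, 88]  # counts reported by the sensor
    manual_counts = [41, 57, 33, 90]  # counts taken by a human observer

    errors = [abs(s - m) for s, m in zip(sensor_counts, manual_counts)]
    accuracy = 1 - sum(errors) / sum(manual_counts)
    print(f"Observed accuracy: {accuracy:.1%}")  # compare to the claimed 99%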

Governance

  • If equipped with additional AI features, an extra legal basis or policy framework should exist for the use of these technologies
  • No such framework exists
  • The use of counting sensors with extra intrusive functionalities does not seem proportional and necessary to tackle the problem
  • A processing agreement with Xovis states that images may not be used for any other purpose, except on the instruction of the university
  • The use of these cameras for other purposes is prohibited for the provider but not for the university (the controller)
  • No clear, open and direct communication on their workings and use
  • No direct warnings to students, staff, or visitors
  • No explanation about the nature of the technology
  • People entering university buildings cannot refuse to be subjected to the system
  • There is no sunset process for the use of counting sensors on educational premises: no clear end date or dismantling process has been defined.
  • An independent Data Protection Officer has given advice on the processing agreement with Xovis, the camera supplier.
  • Students and staff have not been involved in the decision-making process.
  • There is proper documentation available on the workings and use of the AI system.
  • There is a processing agreement showing the university’s ‘ownership’ of the application

Assessment prepared by Mónica Fernández-Peñalver