Council of Europe proposes ban on certain facial recognition applications

In a new set of guidelines addressed to governments, legislators and businesses, the Council of Europe (a 47-state human rights organisation) proposes a number of restrictions and bans on certain facial recognition uses. We have been advising the Council of Europe on regulating, limiting and banning certain types of biometric recognition, so we welcome these guidelines.

A broad ban

The Council proposes prohibiting the use of facial recognition for the sole purpose of determining a person’s skin colour, religious or other beliefs, sex, racial or ethnic origin, age, health or social status.

Affect recognition

According to the press release: “This ban should also be applied to “affect recognition” technologies – which can identify emotions and be used to detect personality traits, inner feelings, mental health condition or workers’ level of engagement – since they pose important risks in fields such as employment, access to insurance and education.”

It should be noted that no sound scientific evidence exists corroborating that a person’s inner emotions or mental state can be accurately ‘read’ from their face, gait, heart rate, tone of voice or temperature, let alone that future behaviour could be predicted from them. In a recent meta-study, a group of scientists concluded that AI-driven emotion recognition could, at most, recognise how one person subjectively interprets a certain biometric feature of another person. That interpretation does not necessarily align with how the observed person actually feels; the AI is merely labelling an interpretation that is highly dependent on context and culture. Far-fetched claims, for example that AI could determine whether someone will be successful in a job based on micro-expressions or tone of voice, are simply without scientific basis.

Scraping publicly available images for facial recognition systems

The proposal also refers to the use of images that we have made public ourselves (on social media, for example). Such images cannot be used indiscriminately to ‘integrate them in biometric systems’. This means, for example, that images scraped from social media sites or search engines cannot be used to train facial recognition systems, unless it is for “overriding legitimate purposes and it is provided by law and strictly necessary and proportionate for these purposes (for instance law enforcement or medical purposes)”. The mere fact that we made these images widely available ourselves should not be a ‘free for all’, according to the Council.

This is an important provision, rightly aimed at internet-wide image scraping operations such as Clearview AI, but also at the facial recognition efforts of social network and search engine companies themselves. Many facial recognition research projects also use image sets that fall under this category.

Consent is not enough

In an unexpected but welcome statement, the Council argues that consent should not, as a rule, be the legal ground for facial recognition performed by public authorities, or by private entities authorised to carry out tasks similar to those of public authorities. The reason for this is the imbalance of power between the data subjects and public authorities (including private entities that perform public tasks).

What about other forms of biometric recognition?

We welcome this proposal, but we would advise the Council of Europe to include other forms of biometric recognition, such as gait, voice, heart rate and temperature, as these biometrics can be, and are, used to determine a person’s sex, age, health or social status, as well as mental health, personality traits and emotional state.

Photo credit: The Verge