It’s Human Rights Day: How AI impacts virtually all human rights

It’s Human Rights Day. What does AI have to do with that? Over the past couple of months we have seen a ‘shift’ in the discussion on responsible AI. Where the ethics of AI were at the center of the debate in 2018 and 2019, the impact of AI on human rights, democracy and the rule of law finally seems to be getting the attention it deserves.

In March of 2020 I presented a report for the Council of Europe’s CAHAI (Ad Hoc Committee on Artificial Intelligence) on the impact of AI on human rights, democracy and the rule of law. This report will serve as a basis for a possible legally binding instrument on AI from the Council. This blogpost contains a summary of that report. For readability, the post does not contain the references; these can be found in the actual report.

The impact of AI goes beyond non-discrimination and privacy

While the impact on non-discrimination has received much attention over the past years (where the use of AI has led to the disadvantaging or harming of minorities, people of color, women, etc.), the impact on our human rights is much broader and deeper than one would think at first glance. AI impacts virtually all human rights that are covered by the human rights ‘families’: (i) respect for human value, (ii) freedom of the individual, (iii) equality, non-discrimination and solidarity and (iv) social and economic rights. Below is an overview of these rights and how they could be impacted by AI.

Right to liberty and security, a fair trial and no punishment without law: The fact that AI can perpetuate or amplify existing biases is particularly pertinent when AI is used in situations where physical liberty or personal security is at stake, such as predictive policing, recidivism risk determination and sentencing. Moreover, when such a system is a black box, it becomes impossible for legal professionals, such as judges, lawyers and district attorneys, to understand the reasoning behind its outcomes. This complicates the motivation and appeal of a judgement and thus affects the right to a fair trial.

Right to reasonable suspicion and prohibition of arbitrary arrest: Less obvious is the impact of AI on the right to reasonable suspicion and the prohibition of arbitrary arrest. AI-applications used for predictive policing merely seek correlations based on shared characteristics with other ‘cases’. Suspicion in these instances is not based on actual suspicion of a crime or misdemeanor committed by the particular suspect, but merely on characteristics the suspect shares with others.
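To make this concrete, here is a deliberately simplified, hypothetical sketch of such correlation-based ‘suspicion’; all names, fields and data below are invented for illustration. The score is driven entirely by what a person has in common with past cases, not by any evidence about that person:

```python
# Hypothetical sketch: 'suspicion' assigned purely by similarity to past
# cases. All names, fields and data are invented for illustration.
from dataclasses import dataclass

@dataclass
class Profile:
    neighborhood: str
    age_bracket: str
    prior_police_contacts: int  # contacts, not convictions

# Past 'cases' the system draws its correlations from
past_cases = [
    Profile("district_9", "18-25", 2),
    Profile("district_9", "18-25", 1),
]

def similarity(a: Profile, b: Profile) -> float:
    """Fraction of shared characteristics; no individual evidence involved."""
    shared = (
        (a.neighborhood == b.neighborhood)
        + (a.age_bracket == b.age_bracket)
        + (a.prior_police_contacts > 0 and b.prior_police_contacts > 0)
    )
    return shared / 3

def risk_score(person: Profile) -> float:
    """The 'suspicion' score is just average similarity to previous cases."""
    return sum(similarity(person, case) for case in past_cases) / len(past_cases)

# A bystander who merely lives in the same area and falls in the same age
# bracket inherits a high score without any act or evidence of their own.
bystander = Profile("district_9", "18-25", 0)
print(f"risk score: {risk_score(bystander):.2f}")  # prints 0.67
```

The bystander in this sketch has done nothing; the system’s ‘suspicion’ is pure guilt by statistical association.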

Right to a private life and physical and mental integrity: It should be noted that, while data privacy is indeed an important element, the impact of AI on our privacy goes well beyond our data. It involves a person’s private life, a person’s physical, psychological or moral integrity, and a person’s identity and autonomy.

AI-driven (mass) surveillance, for example with facial recognition, also affects our ‘general’ privacy, identity and autonomy, in that it creates a situation where we are (constantly) being watched, followed and identified. This can have a psychological ‘chilling’ effect: people might feel inclined to adapt their behavior to a certain norm, which affects their ability to act autonomously. One could argue that the same goes for the indiscriminate on- and offline tracking of all aspects of our lives (through our online behavior, our location data and our IoT data from smart devices).

Other forms of AI-driven biometric recognition have an even greater impact on our psychological integrity. Recognition of micro-expressions, gait, (tone of) voice, heart rate, temperature, etc. is currently being used to assess or even predict our behavior, mental state and emotions. These kinds of AI techniques, used for example in recruitment, law enforcement and schools, severely impact a person’s physical, psychological or moral integrity and thus elements of that person’s private life.

It should also be noted that no sound scientific evidence exists corroborating that a person’s inner emotions or mental state can be accurately ‘read’ from that person’s face, gait, heart rate, tone of voice or temperature, let alone that future behavior could be predicted from them. Far-fetched claims that AI could, for example, determine whether someone will be successful in a job based on micro-expressions or tone of voice are simply without scientific basis.

Freedom of expression: Using facial recognition in public areas may interfere with a person’s freedom of opinion and expression, simply because the protection of ‘group anonymity’ no longer exists if everyone in the group can potentially be recognized. This could lead individuals to change their behavior, for example by no longer partaking in peaceful demonstrations.

The same goes for the situation where all our data is used for AI-enabled scoring, assessment and performance prediction (e.g. to receive credit, a mortgage, a loan, a job, a promotion, etc.). People might become more hesitant to openly express a certain opinion, read certain books or newspapers online or watch certain online media.

Online personalized information, filter bubbles and the proliferation of fake news, disinformation and propaganda, all driven by AI, affect the capacity of individuals to form and develop opinions and to receive and impart information and ideas, and thus impact our freedom of expression.

Freedom of assembly and association: The internet and social media have proven to be helpful tools for people to exercise their right to peaceful assembly and association. At the same time, however, the use of AI could also jeopardize these rights, when people or groups of people are automatically tracked and identified, and perhaps even ‘excluded’ from demonstrations or protests, or where people become hesitant to partake in protests or to form a union. Examples of this were already seen in Hong Kong, where protesters started wearing masks and using lasers to avoid being ‘caught’ by facial recognition cameras.

Non-discrimination: One of the most reported impacts of AI on human rights is the impact on the prohibition of discrimination and the right to equal treatment. As noted earlier, in many cases AI has been shown to perpetuate, amplify and possibly enshrine discriminatory or otherwise unacceptable biases. AI can also enlarge the group of impacted people when it groups them based on shared characteristics. Moreover, these data-driven systems obscure the existence of biases, marginalizing the social control mechanisms that govern human behavior.

Contrary to popular belief, not all biases are the result of low-quality data. The design of any artefact is in itself an accumulation of biased choices, ranging from the inputs considered to the goals it is set to optimize for: does the system optimize for pure efficiency, or does it take the effect on workers and the environment into account? Is the goal of the system to find as many potential fraudsters as possible, or to avoid flagging innocent people? All these choices are in one way or another driven by the inherent biases of the person(s) making them. In short, suggesting that we can remove all biases in (or even with) AI is wishful thinking.
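A deliberately simple, hypothetical sketch (all scores and labels invented) of how even a single design choice, here the decision threshold of a fraud-scoring model, encodes a value judgment about who bears the cost of errors:

```python
# Hypothetical sketch: one fraud-scoring model, two different goals.
# Scores and ground-truth labels are invented for illustration.
scores = [0.95, 0.90, 0.80, 0.75, 0.60, 0.55, 0.40, 0.30, 0.20, 0.10]
labels = [1,    1,    0,    1,    0,    1,    0,    0,    0,    0]  # 1 = fraudster

def flag_counts(threshold: float) -> tuple[int, int]:
    """Return (fraudsters caught, innocent people wrongly flagged)."""
    caught = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    wrongly_flagged = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    return caught, wrongly_flagged

# Goal A: find as many potential fraudsters as possible -> low threshold
print(flag_counts(0.35))  # (4, 3): every fraudster caught, 3 innocents flagged

# Goal B: avoid flagging innocent people -> high threshold
print(flag_counts(0.85))  # (2, 0): no innocents flagged, 2 fraudsters missed
```

Neither threshold is technically ‘wrong’; the choice between catching every fraudster and sparing every innocent person is a normative one, made by people, long before any data quality issue comes into play.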

Social and economic rights: AI can have major benefits when used for hazardous, heavy, exhausting, dirty, unpleasant, repetitive or boring work. AI systems are, however, also increasingly being used to monitor and track workers, distribute work without human intervention, and assess and predict worker potential and performance in hiring and firing situations. These applications of AI could jeopardize the right to just conditions of work, safe and healthy working conditions and dignity at work, as well as the right to organize. If workers are constantly monitored by their employers, they might become more hesitant to organize. AI-systems that assess and predict the performance of workers could jeopardize the right to equal opportunities and equal treatment in matters of employment and occupation without discrimination on the grounds of sex, especially when these systems enshrine biases present in the data or held by their creators.

How to address the impact of AI on Human Rights

Many AI developers, deployers and users (public and private) seem to be unaware of the (potential) impacts of AI on human rights. As a first step, an iteration or (re)articulation exercise in which existing human rights are ‘translated’ to an AI context would be very useful.

Compliance could start with what Paul Nemitz has recently proposed as a new culture of “Human Rights, Democracy and Rule of Law by design”. In such a culture, developers, deployers and users of AI would, from the outset, reflect on how the technology might affect human rights, democracy and the rule of law, and adjust the technology or its use accordingly.

More importantly, however, many AI-applications are developed and deployed by only a handful of large private actors. These companies dominate both the development of AI and the (eco)systems AI operates in. While states are obliged to protect individuals and groups against breaches of human rights perpetrated by other actors, appreciation of non-state actors’ influence on human rights has steadily grown. AI might serve as a good opportunity to think of a structure that would legally oblige private actors to comply with human rights and to grant access to justice if they fail to do so. The basic question is whether to (a) accept the private power of AI companies and make sure they use it responsibly, or (b) challenge it and try to reassert the power of the state.

What if current human rights fail to adequately protect us: Red Lines, New Human Rights?

Due to the invasiveness of some AI-applications or uses, there might be situations in which our current framework of human rights fails to adequately protect us and where we need to pause for reflection and find the answer to “question zero”: do we want to allow this particular AI-system or use at all, and if so, under what conditions? Answering this question should force us to look at the AI-system or use from all perspectives and decide whether we want to draw ‘red lines’ around AI-systems or uses that are considered too impactful to be left uncontrolled or unregulated, or even to be allowed at all. Red lines could for example consist of a ban, a moratorium, or strong restrictions or conditions for exceptional, controlled use of the application. I advised the Council of Europe to consider red lines for the following AI-systems and uses:

  • Indiscriminate use of facial recognition and other forms of biometric recognition either by state actors or by private actors
  • AI-powered mass surveillance (using facial/biometric recognition, location tracking, online behavior tracking, etc.)
  • Personal, physical or mental tracking, assessment, profiling, scoring and nudging through biometric and behavior recognition
  • AI-enabled Social Scoring
  • Covert AI systems and deep fakes
  • Human-AI interfaces

What about a longer-term horizon?

Extrapolating into the future with a longer time horizon, certain critical long-term concerns can be hypothesized and are being researched, necessitating a precautionary approach in view of possible unknown unknowns and “black swans”. While some consider artificial general intelligence, artificial consciousness, artificial moral agents and super-intelligence (all currently non-existent) to be examples of such long-term concerns, many others believe them to be unrealistic. Nevertheless, close monitoring of these developments is necessary in order to determine whether ongoing adaptations to our human rights, democracy and rule of law systems are needed.

Catelijne Muller

Author

Catelijne Muller is co-founder and president of ALLAI, member of the European High-Level Expert Group on AI, Rapporteur on AI for the European Economic and Social Committee, and currently working as an expert advisor for the Council of Europe on AI & Human Rights, Democracy and the Rule of Law.