Catelijne Muller opens first meeting of CAHAI at the Council of Europe

The Council of Europe established the Ad Hoc Committee on Artificial Intelligence (CAHAI). The Committee will examine the feasibility and potential elements, on the basis of broad multi-stakeholder consultations, of a legal framework for the development, design and application of artificial intelligence, based on the Council of Europe’s standards on human rights, democracy and the rule of law.

Main tasks

Under the authority of the Committee of Ministers, the CAHAI is instructed to:

  • examine the feasibility and potential elements, on the basis of broad multi-stakeholder consultations, of a legal framework for the development, design and application of artificial intelligence, based on the Council of Europe’s standards on human rights, democracy and the rule of law.

When fulfilling this task, the Ad Hoc Committee shall:

  • take into account the standards of the Council of Europe relevant to the design, development and application of digital technologies, in the fields of human rights, democracy and the rule of law, in particular on the basis of existing legal instruments;
  • take into account relevant existing universal and regional international legal instruments, work undertaken by other Council of Europe bodies as well as ongoing work in other international and regional organisations;
  • take due account of a gender perspective, building cohesive societies and promoting and protecting the rights of persons with disabilities in the performance of its tasks.

First meeting – opening speech Catelijne Muller on behalf of ALLAI

The first meeting of CAHAI was held from 18 to 20 November 2019 in Strasbourg and was aimed at ‘setting the scene’ for CAHAI’s work. Catelijne Muller, on behalf of ALLAI, and ALLAI Advisor Professor Barry O’Sullivan, on behalf of University College Cork, delivered introductory remarks alongside Lord Timothy Clement-Jones (House of Lords Select Committee on AI, UK) and Matthias Spielkamp (AlgorithmWatch), among others.

* * *

Catelijne’s opening speech:

Good morning, and thank you, Jan [Kleijssen], for inviting me.

Three years ago, I started a journey to put the impact of AI on society on the agenda. We have come a long way since then. Europe has an AI strategy, the High-Level Expert Group on AI produced Ethics Guidelines for Trustworthy AI, the OECD put forward ethical guidelines for AI, and AI is now mentioned in the same breath as ethics.

But this is just a first step. AI can be very invasive and has a much broader impact than just an ethical one. Think of its impact on work, education, safety, warfare and the law, including fundamental rights.

Your committee has been given the mandate to examine the feasibility of a legal instrument for AI. But how do you go about that? Please allow me to give you some direction based on a couple of examples.

In 2017, a 14-year-old girl was arrested for stealing a bike. She had no record. At the police station she was flagged as someone at high risk of committing another crime. She was denied bail and spent three nights in jail before she was released without charges. She was African-American. Around that same time, a man was arrested. He was a seasoned criminal with a long record. At the police station he was flagged as someone at low risk of committing another crime. He was let out on bail and almost immediately committed a crime again. He was white.

The flagging was done by an AI system that turned out to be biased against African-Americans. It was trained on historical data, and there is a misconception about data: that data are facts. But data are not facts. Data can be messy, incomplete and biased. And you might think: well, if this affects only potential criminals, then let’s fix it and be done with it. But these systems increasingly decide whether you get a mortgage or a loan, or whether your child gets into a certain school.

That is why we need more people at the table. Because let’s face it: if you are going to develop an AI system that could potentially send somebody to prison, you might want to talk to a judge, a prosecutor, a lawyer or a legal scholar and ask them: what do you need to become better at your job?

“So isn’t this already regulated?” you might ask. In fact, it is. That girl at the police station had quite a few of her fundamental rights trampled upon. First, her right not to be discriminated against. But also her right to a fair trial: one might ask whether the use of this AI system was compatible with a fair trial. And her right to liberty: she was held without cause.

Another example. Over the past weeks you have probably seen images of protesters in Hong Kong tearing down lamp posts with cameras on them and covering their faces with masks. They are hiding from facial recognition cameras. These cameras are apparently everywhere in China. But make no mistake: on this side of the globe, facial recognition is quickly finding its way into our public and private spaces. In the US this has led to several bans and moratoriums on the use of facial recognition by law enforcement.

“But hasn’t this been regulated?” you might ask. Yes, it has. There exists a right to privacy, which is about more than just the right not to be identified or the right to data privacy. The right to privacy also includes the right to a private life. These kinds of invasive applications of AI also impinge on the right to assembly and free speech. We have a right to human integrity. And while our face is our most recognizable feature, our voice, our gait, our posture, all of that can be analyzed by AI through biometric recognition. Our biometric features, and the conclusions drawn from them, are in my opinion all elements of our human integrity. Elements that should not be up for grabs to profile us, oppress us, surveil us or nudge us.

Which brings me to Cambridge Analytica. Cambridge Analytica was able to build voter profiles based on massive amounts of Facebook data; it claimed to have thousands of data points per potential voter. Based on these profiles, it was able to send targeted ads to those voters who needed a little push, a nudge, in a certain direction. Again, hasn’t this been regulated? Yes. We have a right to freedom of choice and to privacy.

So where does this leave your committee, you might think, if so much is already taken care of? I think it leaves you with three important tasks. First, you need to determine how to make sure that existing fundamental rights are upheld in a world with AI. Second, you need to determine where these rights do not function properly when it comes to AI, and fix that. But most importantly, you need to be brave and ask what I call “Question Zero”.

Once a technology is out there, we tend to accept that it is there and focus only on fixing its flaws. Question Zero forces us to take a step back and ask ourselves: even if the technology were flawless, bias-free, safe, cyber-proof, transparent… do we want to use it in the first place?

Do we allow emotion detection without scientific proof? In all circumstances? Or perhaps only in controlled environments, such as a mental hospital, where it might be of positive value? Do we share our biometric data with everybody, allowing them to profile us? Or just with our doctor? Do we allow brain-machine interfaces? Or only in controlled environments, such as hospitals, perhaps to communicate with locked-in patients? Do we use facial recognition on a massive scale? Or only when there is an actual threat, and with a prior warrant from a judge?

Asking Question Zero is what I call “Human-in-Command”. Technology does not overcome us, and neither does this technology. We can, and we must, decide if, when and how we want to use AI in our daily lives. Human-in-Command.

Thank you very much.

* * *

“In your opinion, how should AI be regulated?”

  • Catelijne Muller
  • Barry O’Sullivan