ALLAI calls for new fundamental principles in the AI Treaty
Catelijne Muller attended the 6th plenary meeting of the Council of Europe’s Committee on Artificial Intelligence (CAI) to negotiate elements of the upcoming AI Treaty, the other large AI regulation effort coming from Europe.
In light of the recent developments around large language models and other generative AI models, ALLAI called upon the parties to consider new fundamental principles, to protect against the specific risks these systems pose to our human rights, democracy and the rule of law. Read Catelijne’s intervention below.
Thank you Chair,
First of all, the recent developments have shown an ever-stronger capability of AI to manipulate and deceive us without our knowing. Let me give you just one example. Snapchat’s My AI, based on ChatGPT, tried to set up a meeting with a young boy at Amsterdam Central Station. But there are many more examples where people are tricked into behaviour or decisions they would not have made otherwise. In fact, the FTC put out a statement not so long ago warning that it would be critical of commercial AI manipulation that makes people take decisions they would not otherwise take. There are people who say AI could become the best manipulator ever. The CAI could address this by introducing the ‘principle of non-manipulation by AI’.
The second challenge is agency. If we are to believe some of the people who build these ever-stronger systems, there is a risk of losing control or agency over them. The CAI could address this by introducing a ‘principle of human agency over AI’.
The third challenge is experimentation. Snapchat says that it has fixed the problem of My AI making appointments with children and that it now warns that the chatbot is experimental. I think that society is not a lab, and children especially should not be experimented on. The CAI could address this by introducing the ‘principle of non-experimentation with AI’. The tech practice of rolling out imperfect systems and then patching them later based on user experience might work for some tools, but it should not be the modus operandi for AI.
One thing I would like to add is that the company Neuralink just received FDA approval to start testing brain-machine interfaces on humans. I know that this house has a Committee on Bioethics that has already done extensive work on neurotechnology in biomedicine and has shown that AI plays a crucial role in the decoding of human cognition. I would like to propose that the CAI asks the Committee on Bioethics to assist it in developing a position on AI and neurotechnology for this Treaty.
ALLAI has formal Observer status with the CAI, which is currently negotiating a global convention on artificial intelligence, and has been involved in the process towards this treaty since the beginning, as advisor to the CAHAI (the Ad Hoc Committee on Artificial Intelligence of the Council of Europe). Below you can find ALLAI’s report to the CAHAI on the impact of AI on human rights, democracy and the rule of law.