Catelijne Muller presents EESC opinion on EU Whitepaper on AI at Plenary Meeting
On 15 July, Catelijne Muller presented her EESC opinion on the EU Whitepaper on AI. Catelijne is AI Rapporteur for the EESC and wrote this opinion in collaboration with a study group of 12 members from the three representative groups at the EESC.
The main conclusions of the opinion are:
- The EESC congratulates the Commission for its strategy to encourage the uptake of AI technologies while also ensuring their compliance with European ethical norms, legal requirements and social values.
- AI innovation should be fostered to maximize the benefits of AI systems, while at the same time preventing and minimizing their risks.
- The focus on merely data-driven AI is too narrow to make the EU a true leader in cutting-edge, trustworthy and competitive AI. The EESC urges the Commission to also promote a new generation of AI systems that are knowledge-driven and reasoning-based, and that uphold human values and principles.
- Foster multidisciplinarity in research by involving other disciplines such as law, ethics, philosophy, psychology, labour sciences, humanities and economics.
- Involve relevant stakeholders (trade unions, professional organisations, business organisations, consumer organisations, NGOs) in the debate around AI and as equal partners in EU-funded research and other projects.
- Keep educating and informing the broader public on the opportunities and challenges of AI.
- The EESC urges the Commission to consider in more depth the impact of AI on the full spectrum of fundamental rights and freedoms, including, but not limited to, the right to a fair trial, the right to fair and open elections, the right to assembly and demonstration, and the right to non-discrimination.
- The EESC continues to oppose the introduction of any form of legal personality for AI.
- The EESC asks for a continuous, systematic socio-technical approach, looking at the technology from all perspectives and through various lenses, rather than a one-off (or even regularly repeated) prior conformity assessment of high-risk AI.
- The EESC warns that the "high-risk" sector requirement could exclude many AI applications and uses that are intrinsically high-risk. The EESC recommends that the Commission draw up a list of common characteristics of AI applications or uses that are considered intrinsically high-risk, irrespective of the sector.
- The widespread use of AI-driven biometric recognition for surveillance or to track, assess or categorise humans or human behaviour or emotions should be prohibited.
- The EESC advocates early and close involvement of the social partners when introducing AI systems at workplaces.
- The EESC also advocates early and close involvement of relevant employees within organisations when deciding on and introducing AI systems.
- AI techniques and approaches used to fight the coronavirus pandemic should be robust, effective, transparent and explainable. They should also uphold human rights, ethical principles and existing legislation, and be fair, inclusive and voluntary.
- The EESC calls on the Commission to assume a leadership role so as to ensure better coordination within Europe of applied AI solutions and approaches used to fight the coronavirus pandemic.