EU Ethics Guidelines for Trustworthy AI – Open For Consultation
The EU High-Level Expert Group on AI, of which the three ALLAI founders are members, has issued its draft Ethics Guidelines for Trustworthy AI. The Group welcomes input from society: the stakeholder consultation is open until February 1st.
The group has taken an approach that maximises the benefits of AI while minimising its risks. To ensure that we stay on the right track, it argues, a human-centric approach to AI is needed: the development and use of AI should not be seen as an end in itself, but as a means to increase human well-being. Trustworthy AI is the group's north star, since human beings will only be able to confidently and fully reap the benefits of AI if they can trust the technology. Trustworthy AI has two components: (1) it should respect fundamental rights, applicable regulation and core principles and values, ensuring an "ethical purpose"; and (2) it should be technically robust and reliable, since even with good intentions, a lack of technological mastery can cause unintentional harm.
The guidelines then set out a framework for Trustworthy AI:
- Chapter I deals with ensuring AI’s ethical purpose, by setting out the fundamental rights, principles and values that it should comply with.
- From those principles, Chapter II derives guidance on the realisation of Trustworthy AI, tackling both ethical purpose and technical robustness. This is done by listing the requirements for Trustworthy AI and offering an overview of technical and non-technical methods that can be used for its implementation.
- Chapter III subsequently operationalises the requirements by providing a concrete but non-exhaustive assessment list for Trustworthy AI. This list is then adapted to specific use cases.
You can take part in the stakeholder consultation via https://ec.europa.eu/futurium/en/ai-alliance-stakeholders-consultation/stakeholders-consultation-draft-ai-ethics-guidelines