Artificial Intelligence (AI) continues to be a white-hot area of rapid technological development and innovation. Besides its evident benefits, AI technologies carry significant risks. To take a recent example, over the past several months a range of powerful new AI applications has been made available to non-professional users around the world, including large language models such as ChatGPT and image generation tools such as DALL·E 2 and Midjourney. The sudden availability of these tools has triggered a media and social media frenzy, with widespread public excitement about the new possibilities they open up. It has also raised profound and growing ethical and legal concerns about a wide range of issues: the potential for misinformation at scale; the reproduction of harmful stereotypes and biases in generated texts and images; plagiarism, authorship, and human creative work; new vectors for cyberattacks; and questions about the legitimate and lawful use of online data and artwork by the companies producing AI models. These tools also pose immediate and direct challenges to creative industries and education, which are being forced to adapt quickly, even as they open up potential opportunities for both. Meanwhile, less high-profile but equally consequential developments and deployments of AI are continuing across all sectors.
In many ways, these prominent tools are simply the latest examples of why ethical and responsible approaches are so vital to the deployment of high-impact systems that are likely to reshape our work and daily lives over the coming years and decades. Although often developed in Silicon Valley and presented as "experimental" releases, these tools reach a global public, and there appears to have been little effective corporate assessment of their ethical impacts or multi-stakeholder engagement with potentially affected end users. These recent cases highlight the vital importance of the UNESCO Recommendation on the Ethics of AI and the need for Member States to begin implementing its values, moving from principles to practice.
The Recommendation, adopted in 2021, serves as a comprehensive and actionable framework for the ethical development and use of AI that encompasses the full spectrum of human rights. It is intended to provide the foundation for identifying, thinking through, and beginning to address the kinds of ethical concerns mentioned above, as well as many others. As the Recommendation now progresses to the operationalization phase, UNESCO is working to develop and pilot tools to help Member States implement its values and principles, including a Readiness Assessment Methodology and an Ethical Impact Assessment.
With reference to these recent developments, this panel focuses on the need to embed ethics at every stage of the AI system lifecycle and to use UNESCO's Recommendation on the Ethics of AI to ensure a responsible, human-rights-based approach to ethical AI governance that encompasses design, development, deployment, and procurement in a mutually supportive, inclusive, and holistic manner.