
March 16 @ 16:15 CET


Artificial Intelligence (AI) continues to be a white-hot area of rapid technological development and innovation. Besides the evident benefits, AI technologies also carry significant risks. To take a recent example, over the past several months a range of powerful new AI applications has been made available to non-professional users around the world, including large language models such as ChatGPT and image generation tools such as DALL·E 2 and Midjourney. The sudden availability of these tools has triggered a media and social media frenzy, with widespread public excitement about the new possibilities that have been opened up. It has also raised profound and growing ethical and legal concerns about a wide range of issues, including the potential for misinformation at scale, the reproduction of harmful stereotypes and biases in the texts and images being generated, plagiarism, authorship and human creative work, new vectors for cyberattacks, and questions about the legitimate and lawful use of online data and artwork by companies producing AI models. These tools also pose immediate and direct challenges to – but also potential opportunities for – the creative industries and education, which are being forced to adapt quickly in response. Meanwhile, less high-profile but equally consequential developments and deployments of AI are continuing across all sectors.

In many ways, these prominent tools are simply the latest examples of why ethical and responsible approaches are so vital to the deployment of high-impact AI systems that are likely to reshape our work and daily lives over the coming years and decades. Often developed in Silicon Valley and presented as “experimental” releases – albeit for a global public – these tools appear to have been launched with little effective corporate assessment of ethical impacts or multi-stakeholder engagement with potentially affected end users. These recent cases highlight the vital importance of the UNESCO Recommendation on the Ethics of AI and the need for Member States to begin implementing its values – moving from principles to practice.

The Recommendation was adopted in 2021 and serves as a comprehensive and actionable framework for the ethical development and use of AI that encompasses the full spectrum of human rights. It is intended to provide the foundation to identify, think through, and begin to address the kinds of ethical concerns mentioned above, as well as many others. As it now progresses to the operationalization phase, UNESCO is working to develop and pilot tools to help Member States implement the values and principles contained in the Recommendation, including a Readiness Assessment Methodology and an Ethical Impact Assessment.

With reference to recent developments, this panel focuses on the need to embed ethics at every stage of the AI system lifecycle and to use UNESCO’s Recommendation on the Ethics of AI to ensure a responsible, human-rights-based approach to ethical AI governance encompassing design, development, deployment, and procurement in a mutually supportive, inclusive, and holistic manner.

Related events

WSIS Forum 2023: GovTech 4 Impact
March 13 @ 15:00 - 15:45 CET

WSIS Forum 2023: Hack the Digital Divide
March 13 @ 16:00 - 16:45 CET

WSIS Forum 2023: Afternoon Sessions
March 14 @ 13:45 - 17:15 CET

WSIS Forum 2023: High-Level Dialogue
March 15 @ 11:00 - 12:00 CET

WSIS Forum 2023: WSIS Action Lines C7
March 16 @ 08:45 - 10:00 CET

WSIS Forum 2023: AI for Good
March 16 @ 16:15 CET

WSIS Forum 2023: EQUALS – EU
March 17 @ 10:15 CET

WSIS Forum 2023: Closing Ceremony
March 17 @ 16:15 - 17:15 CET

Details

Date: March 16
Time: 16:15 CET
Access: public