Can a global framework regulate AI ethics?

The United Kingdom hosted the first-ever global Artificial Intelligence (AI) Safety Summit on 1 and 2 November 2023, an event that brought international attention to the regulation of AI. UK Prime Minister Rishi Sunak underscored the urgency of global collaboration in the governance of AI, a technology that defies national boundaries and demands a collective regulatory approach. The summit's outcome was the Bletchley Declaration, a non-binding commitment signed by leading nations, including Australia, China, the European Union, France, Germany, India, Japan, Switzerland, the United Kingdom, and the United States, to enhance cooperation in the development and regulatory oversight of AI technologies.

In a recently published article, authors Anahita Thoms, Alexander Ehrle and Kimberly Fischer take this as an opportunity to survey the current regulatory landscape and ask whether a global framework is the right way to regulate the risks AI poses to human rights. Their focus is on AI ethics. While definitions vary, the authors use AI ethics as an umbrella term for a broad set of considerations for responsible AI, combining safety, security, human concerns, and environmental concerns.
