Press Release

Secretary-General: Sustained, Structured Conversation around Risks, Challenges, Opportunities of Artificial Intelligence Vital

03 November 2023

Caption: Secretary-General António Guterres (second from right at table) attends the first global AI Safety Summit held in Milton Keynes, United Kingdom.

Following are UN Secretary-General António Guterres’ remarks on the occasion of the United Kingdom AI [artificial intelligence] Safety Summit, held in Milton Keynes, United Kingdom, on 2 November:

The speed and reach of today’s AI technology are unprecedented.  The paradox is that, in the future, it will never again move as slowly as it does today.  The gap between AI and its governance is wide and growing.

AI-associated risks are many and varied.  Like AI itself, they are still emerging, and they demand new solutions.  But let’s be clear: they do not demand new principles.

The principles for AI governance should be based on the United Nations Charter and the Universal Declaration of Human Rights.  We urgently need to incorporate those principles into AI safety.  I see three areas for action.

First, we are playing catch-up on today’s threats.  We need to get ahead of the wave.  In the past year, we have seen the release of powerful AI models with little consideration for the safety and security of users.  Every time this happens, it increases the risk that the technology will be used maliciously by criminals or even terrorists; that it will undermine security or information integrity; that people could lose control of it; and that it could develop in unintended directions.  We urgently need frameworks to deal with these risks, so that both developers and the public are safe and can have confidence in AI.

The second area for action concerns AI’s possible long-term negative consequences.  These include disruption to job markets and economies, and the loss of cultural diversity that could result from algorithms that perpetuate biases and stereotypes.  The concentration of AI in a few countries and companies could increase geopolitical tensions.  Right now, the vast majority of advanced AI chips are made in one of the most geopolitically sensitive places on Earth.

Longer-term harms extend to the potential development of dangerous new AI-enabled weapons… the malicious combination of AI with biotechnology… and threats to democracy and human rights from AI-assisted misinformation, manipulation and surveillance.  We need frameworks to monitor and analyse these trends, in order to prevent them.

The third concern is that, without immediate action, AI will exacerbate the enormous inequalities that already plague our world.  This is not a risk; it’s a reality.  One recent report found that no African country is in the top 50 for AI preparedness.  Twenty-one of the 25 lowest-scoring countries were African.

AI has huge potential to help developing economies still recovering from the COVID-19 pandemic and struggling with a mountain of debt.  It can help Governments to budget; help businesses to expand; and help climate scientists to predict droughts and storms.  It can help ordinary people access vital health care and education.  It can be a huge accelerator and enabler for the 17 Sustainable Development Goals.  But, for that to happen, every country and every community must have access to AI -- and to the digital and data infrastructure it requires.  Right now, AI technologies are limited to a few countries and companies.  So, we need a systematic effort to change that.

In response to these three areas of concern, different stakeholders have developed over 100 sets of ethical principles for AI -- which have much in common.

There is broad agreement that AI applications must be reliable, transparent, accountable, overseen by humans and capable of being shut down.  But, without global oversight, there is a real risk of incoherence and gaps.  We need a sustained, structured conversation around risks, challenges and opportunities.  The United Nations -- an inclusive, equitable and universal platform for coordination on AI governance -- is now fully engaged in that conversation.

The multistakeholder High-Level Advisory Body on Artificial Intelligence that I launched last week brings together global expertise from Governments, business, the tech community, civil society and academia.  It is truly universal, with representation from all parts of the world, in order to foster the networked, inclusive, evidence-based solutions that are needed.  Universality means that no single country or group of countries can dominate.  Transparency and mutual accountability are built in.

The Advisory Body will consider how to link and coordinate with various initiatives that are already under way -- including those of the European Union and the G7 [Group of Seven] Hiroshima Process.  It will be at the centre of a global network of science-based action by Governments, the private sector and civil society.  The State of the Science report on the capabilities and risks of frontier AI proposed by the United Kingdom can play an important role in informing its work.

The first task of the Advisory Body is to examine models of technology governance that have worked in the past, with a view to identifying forms that could work for AI governance now and in the future.  It will report back by the end of this year with preliminary recommendations in three areas:  strengthening international cooperation on AI governance; building scientific consensus on risks and challenges; and making AI work for all of humanity.

These recommendations will feed into the Global Digital Compact proposed for adoption by Heads of State at the Summit of the Future next September.  In other words -- its work will embed AI governance in intergovernmental processes and an established global Summit.

We need a united, sustained, global strategy, based on multilateralism and the participation of all stakeholders.  The United Nations is ready to play its part.

***

