Should the UN govern global AI?
By Cameron F. Kerry, Joshua P. Meltzer, Andrea Renda, and Andrew W. Wyckoff, Brookings, 26 February 2024
- The UN AI Advisory Body recently released an interim report advocating for a “global governance framework” for AI.
- Any such approach to global governance must be distributed and iterative, involving participation from a wide range of stakeholders including governments and private companies.
- The UN has a crucial role to play in this effort, but a single governance body should not be the only goal for global governance of AI.
The emergence of OpenAI’s ChatGPT in 2022, followed by several other generative artificial intelligence (AI) models, created unprecedented urgency for guardrails around AI. In the year since, calls for a pause on training large-scale AI models, along with predictions that powerful AI could cause human extinction or a future without work, have sparked a rush of proposals for some form of global governance framework.
One such proposal came from the United Nations’ multi-stakeholder AI Advisory Body, which released an interim report outlining next steps for global AI governance. Though it did not recommend any single model, the report concluded that “a global governance framework is needed.” It identified seven layers of governance functions for “an institution or network of institutions,” starting with expert-led scientific consensus and building to global norm elaboration, compliance, and accountability. We concur with the need to build broad consensus through many voices. But we emphasize that what is needed is a distributed and iterative approach, one that would be, as the UN AI Advisory Body itself put it, “agile, networked, flexible,” and that would make the most of the initiatives already underway.
Debate about governing AI has been on national and international policy agendas at least since 2016. National AI policies have so far taken different forms, with some countries pursuing legislation while others rely primarily on voluntary frameworks. Bridging these approaches is an array of international initiatives: among the most salient are the OECD, with 38 advanced economies; the Global Partnership on AI, with 29 member countries and a network of experts; UNESCO, with 194 member states; and other UN bodies such as the International Telecommunication Union. The list also includes international standard-setting bodies like ISO/IEC and IEEE. In 2023, U.S.-EU work on a code of conduct made its way to the G7, and the U.K. convened an AI safety summit with 28 countries (including China), which resulted in a “Bletchley Declaration” focused on the “risks, opportunities and a forward process for international collaboration on frontier AI safety,” as well as a series of future meetings on AI safety.
As researchers at leading think tanks in Washington and Brussels, working together through the Forum on Cooperation in AI (FCAI), we have tracked AI developments and brought together government officials and experts from industry, civil society, and academia. Since 2020, we have distilled these discussions into recommendations in key areas for international cooperation, such as definitions, risk assessment, standards development, trade agreements, and joint research and development (R&D).
In our opinion, no existing initiative can, on its own, address the challenge of maximizing the opportunities of AI while identifying and minimizing its risks. This is especially true while there is still much to learn about AI. Understanding risks and putting shared principles into practice will require thoughtful implementation across myriad use cases. These tasks will need participation from a wide range of stakeholders: not just governments, but also the private companies that develop and apply AI and a broad range of other voices. And they will need time; setting up bodies like the International Atomic Energy Agency, the nuclear safety and management organization often mentioned as a possible model for AI safety regulation, was the work of decades and involved international treaties.
There are specific measures and paths that governments and international organizations can take to help these processes along. A U.S. pilot project for a “National AI Research Resource,” which would make data and computing available to researchers outside the large tech companies, could be scaled up internationally. Joint R&D projects can help explore and build collaboration while harnessing AI for good; both the UN interim report and our reports point to climate change research as a fruitful and feasible subject for such projects. Private sector initiatives like the Frontier Model Forum and other groups established to control misuse of models and AI-generated content can explore emerging risks and mitigate known ones. And recently created bodies such as the AI safety institutes in the U.S. and U.K. and the EU’s European Centre for Algorithmic Transparency can work together to offer consistent guidance on how to identify and manage risks.
The UN has an essential role because it can convene a broader group of nations than the OECD, the G7, GPAI, or ad hoc coalitions built by current AI leaders, and it can be a vehicle to broaden access to AI and to develop AI in support of the UN Sustainable Development Goals. But the UN should not displace other efforts, nor should a single governance body be the end goal for the foreseeable future. Rather, existing efforts should operate in parallel, sharing a vision articulated by the UN and its member states and oriented toward the deployment of AI for the benefit of all of humanity and the planet.
Authors
Cameron F. Kerry: Ann R. and Andrew H. Tisch Distinguished Visiting Fellow - Governance Studies, Center for Technology Innovation
Joshua P. Meltzer: Senior Fellow - Global Economy and Development
Andrea Renda: Senior Research Fellow and Head of Global Governance, Regulation, Innovation and the Digital Economy (GRID) - Centre for European Policy Studies (CEPS)
Andrew W. Wyckoff: Nonresident Senior Fellow - Governance Studies, Center for Technology Innovation