
Moving AI governance from principles to practice

By ITU News

How can we ensure artificial intelligence (AI) is really used for good?

This was the central question of a virtual media roundtable hosted by the International Telecommunication Union (ITU) ahead of the annual AI for Good Global Summit, happening from 29 to 31 May with an expected 3,000+ participants.

The hope for this year’s summit, explained ITU Secretary-General Doreen Bogdan-Martin, is to shift the global AI conversation from principles to implementation.

This is the idea driving ITU’s dedicated “Governance Day” on 29 May – a first for the United Nations AI summit, which has been convened since 2017.

Over the years, AI for Good has grown into the world’s largest UN-led multi-stakeholder platform on artificial intelligence.

Organized with over 40 UN sister agencies, AI for Good brings together a community of around 28,000 stakeholders from more than 180 countries.

Governance challenges

Even before the release of generative AI applications like ChatGPT in 2022, a bevy of governance efforts had already emerged – and these have ramped up steadily over the past year.

This proactive response, Bogdan-Martin reminded participating journalists, challenges the oft-heard argument that “governments lack initiative” when it comes to tech regulation.

But enforceable ways to prevent people from building unsafe AI systems are severely lacking, contended Stuart Russell, Professor of Computer Science at the University of California, Berkeley.

The status quo where private entities play “Russian roulette with the entire human race for private gain” is simply unacceptable, he continued.

Governance approaches must also be coordinated, said Robert Trager, International Governance Lead at the Centre for the Governance of AI based at the University of Oxford.

Governments have already begun implementing requirements for training AI models, such as explainability, transparency and accountability, as well as compliance with local data protection and privacy laws.

But as training becomes increasingly distributed – where part of an AI model’s training happens in one jurisdiction, and the next in another – enforcement becomes trickier.

“We need to make sure no actor is avoiding regulation by simply changing jurisdictions,” said Trager. “At the summit, we’ll examine how to govern AI in a coordinated way that is effective and inclusive.”

Collective problem solving

For Emilia Javorsky, Director of the Futures Program at the Future of Life Institute, an important governance priority lies in ensuring institutions are up to the task of mitigating myriad AI risks, which range from the ethical to the physical, military, and economic, even extending to epistemic collapse, “when we can no longer tell what’s real.”

This requires not only robust governance and safety engineering in practice, but also setting the right incentive structures for institutions and companies, said Javorsky.

In healthcare, for example, many steps unrelated to AI development are needed to unlock the technology’s full potential, such as higher quality data sets, better information sharing practices, and regulatory reform.

While panellists agreed that AI governance must be a collective effort, some felt “we are nowhere near” the collaboration needed to demonstrably improve safety.

Dr. Ebtesam Almazrouei of the UAE Council for AI and Blockchain cited UN Sustainable Development Goal 17 – Partnerships for the Goals – as particularly important when it comes to moving beyond dialogue.

“No single entity should hold the key to the vast potential of AI,” Almazrouei added.

The role of standards

Governance efforts that have emerged so far share a common belief in technical standards, observed Bogdan-Martin.

Standardization is core to the work of the ITU, which has over 220 AI-related standards published or in development.

Standards should require mathematical proof that an AI system won’t cross so-called “red lines,” Russell argued.

Red lines are behaviours that would be considered unacceptable if machines were to exhibit them, he explained, such as replicating themselves without permission, hacking into critical infrastructure systems, advising terrorists on deploying bioweapons, disclosing classified information, or defaming people.

One of the hotly anticipated summit outcomes is ITU work on watermarking, a method aimed at helping to counter AI-generated misinformation and disinformation.

In addition to serving as a prerequisite for guardrails, standards can also help level the playing field for developing countries at different stages of their AI journey, said Bogdan-Martin.

Capacity building support and policy assistance for those countries is another major part of ITU’s work on AI, she noted.

Another UN agency, the World Intellectual Property Organization (WIPO), is also guiding policymakers and innovators in developing countries. Issues related to intellectual property (IP) are a crucial aspect of equitable AI development.

Common denominators

Asked about WIPO’s post-ChatGPT learnings, Kenichiro Natsume, Assistant Director General at WIPO, described navigating tensions between AI developers who want to use as much data as possible – including copyrighted material scraped from the Internet – and creators who want to protect and profit from their own IP.

“The AI for Good Global Summit is the perfect opportunity for us to identify common denominators among different stakeholders to find a way forward,” said Natsume.

Bogdan-Martin highlighted the UN’s role in achieving “effective multilateralism” and leading multi-stakeholder collaboration towards trusted, safe, ethical, inclusive AI development while leaving no one behind.

“The fact that 2.6 billion [unconnected] people are not part of the digital world means they are not part of the AI world,” she said.

“Global governance discussions must bring all stakeholders to the table, including the Global South.”

