
AI Governance Day
Geneva, Switzerland  29 May 2024


Welcome address
Doreen Bogdan-Martin
Secretary-General, International Telecommunication Union (ITU)

Good afternoon, everyone!

Welcome to Day Zero of the AI for Good Global Summit.

Our eagerly anticipated “Governance Day” is off to a running start. We put our AI experts – our government leaders – to work early this morning.

We've spent the entire morning exchanging ideas on three critical topics. We've been surveying the AI landscape, understanding how it might evolve. We've been looking at how to implement AI governance frameworks. And perhaps most importantly, we've been discussing how we can ensure inclusion and trust as we implement those frameworks.

This morning, we heard about various governance efforts – the areas they have in common, as well as some of their differences. Crucially, I think, we learned from developing countries, because we want to ensure that they are not left out of the process.

All this challenges the argument that governments “lack initiative” when it comes to tech regulation.

In just a few moments, you will hear from some of our amazing roundtable participants, who will share the outcomes of their work.

But first, let me tell you why we're doing this. So why are we here today? What is AI Governance Day all about? And why are we at ITU going to keep doing it?

ITU, as many of you know, is the United Nations (UN) agency for digital technologies – and we have been working to harness AI for Good for the past seven years.

We have been convening the UN system around AI, and we have been co-leading an inter-agency coordination mechanism, as we call it, with UNESCO (the United Nations Educational, Scientific and Cultural Organization) since 2021.

Through our AI for Good platform, which is a multi-stakeholder community of 28,000 people from over 180 countries, our focus has been putting artificial intelligence (AI) at the service of the Sustainable Development Goals (SDGs).

That's been our compass.

What's new is this much sharper, stronger focus on governance.

Because it's not the benefits, it's the risks of artificial intelligence that keep us all awake at night.

So much has been said about AI governance — in the media, in academic circles, from start-ups to tech giants, from local governments, all the way to the United Nations, which recently adopted an historic resolution that recognized AI's potential to advance the SDGs.

But ladies and gentlemen, at the heart of all of this is a conundrum: How do we govern a technology … how do we govern technologies if we don't yet know their full potential?

There is no one answer to that question.

What we do know is that we have been there before.

Twenty years ago, the Internet was met with a similar mix of shock, awe, and skepticism.

It raised the same questions about how our economies, our societies and our environment would transform — for better and for worse. And we are still grappling with those questions, two decades later.

In fact, we still don't know the full potential of the Internet – because a third of humanity has still never been connected.

But before we could even realize the potential — generative AI came along. And yet, even with the convergence of these world-changing, interdependent technologies — governance efforts have emerged. They may not be perfect, but we are not starting from scratch.

The Internet Governance Forum and the WSIS Forum were born out of the World Summit on the Information Society.  

Some of you, like me, were there when this all happened twenty years ago.

And I remember how then, as now, we didn't even have the vocabulary to describe what we were dealing with. But that didn't stop us. It didn't stop us from moving forward.

What we've learned from the WSIS process, is that we actually can take steps toward governance, even if we're building the plane as we fly it.

We can come together as a community, we can share experiences, practices, lessons learned, barriers, challenges. Knowing that once more, there is no one size that fits all when it comes to balancing the benefits and reconciling regulatory risks. Knowing, yet again, that we must look at governance from many different angles, and knowing that the only way forward is through a multi-stakeholder approach.

That's why I'm so glad that today, gathered in this room, we have our WSIS community with us. Welcome. We hope that you will help guide us through these many complex questions and challenges.

After listening closely to this morning's discussions there are three key pieces that I believe must be part of any AI governance effort:

I would say the first piece – and obviously this is very relevant to ITU – is technical standards development.

As we heard this morning, those working on AI governance already recognize how technical standards can help implement effective guardrails and support interoperability.

This is where ITU has such a key role to play. As an international standards development organization, we already have over 200 AI-related standards that we have either developed or are in the process of developing.

As part of the World Standards Cooperation, which is a high-level collaboration between IEC (International Electrotechnical Commission), who is in the room, ISO (International Organization for Standardization), who is also in the room, and ITU, we're helping to advance the development of global standards that can make AI systems more transparent, make them more explainable, more reliable, and of course, more secure.

This provides certainty in the market, and eases innovation for both large and small industry players everywhere, including in developing countries.

The second element is putting human rights, inclusion, and other core UN values at the heart of AI governance.

All stakeholders deserve a voice in shaping AI's present and future.

But who can afford the compute resources that go into producing AI applications?

Who is on the teams that design the foundational models?

Right now, the power of AI is concentrated in the hands of too few. That concentration is risky, and ethically precarious, for humanity.

Ladies and gentlemen, we must work towards an inclusive environment, where diverse perspectives, including those on gender – that was a key element raised this morning – are reflected in the policies that ring true to UN values.

International AI governance efforts must account for the needs of all countries.

That's why the United Nations – together with governments, with companies, with academics, with civil society, with the technical community – must play a key role in ensuring that power is distributed equitably.

This is not going to happen automatically.

And that brings me to my third element: Inclusive development through capacity building.

ITU has a long history of bringing the voices of the Global South to the emerging technology table.

Part of this means making sure that every workforce in the world can deal with the challenges and the risks being brought about by artificial intelligence.

That's why we've been integrating AI capacity support in our digital transformation offerings.

And we'll continue to roll out those initiatives with many of our UN partners in the room, including with UNDP (United Nations Development Programme), where we are focused on countries that have low technological capabilities. We want to make sure that we upskill them, no matter where they are on their AI journey.

Ladies and gentlemen, governance is not a given.

An AI readiness survey that ITU recently conducted among our 193 Member States showed that a majority – 85 per cent of our members – don't have any AI regulations or policies in place.

But today, some might at least start thinking about the policy elements – about what to do next.

And I think that's what makes the work we are doing here today and beyond, absolutely fundamental and essential.

All good governance starts with listening – listening to experts, exchanging ideas, experiences with peers, identifying gaps, and building on potential areas of convergence.

And governance is never “one and done” – it's an iterative, sometimes frustratingly slow, but ultimately necessary, multi-stakeholder process.

Taking stock of the landscape and facilitating deep discussions, as we did this morning, is the first step in transforming principles into practical implementation.

And implementation, ladies and gentlemen, is what today is all about.

I know everyone in this room has a stake in seeing AI used as a force for good in this world.

As we heard from the UN Secretary-General's High-Level Advisory Body, who joined us remotely this morning, we need to take bold decisions, and we need to view governance not as an inhibitor, but as an enabler. An enabler for AI for Good.

That's why today, I'm calling on all of you to get involved – to take action.

Participate actively in the AI governance activities that are happening here and now at ITU.

Let's harness the power of this AI community to govern AI with and for the world.

Let's show them what it looks like.

Let's show them how it's done.

And let's show them together.

Thank you very much.