What Does the Proliferation of Artificial Intelligence Mean for Information Integrity?
Latvian Mission to UN
Session 175
The Global Digital Compact introduces commitments on information integrity: access to relevant, reliable and accurate information and knowledge is essential for an inclusive, open, safe and secure digital space. The Compact also encourages the advancement of safe, secure and trustworthy artificial intelligence systems. The disruptive proliferation of AI requires urgent action to live up to these commitments.
The WSIS+20 review that will be concluded this year should not ignore the recent challenges posed by AI to information integrity. WSIS beyond 20 has to address mitigation strategies for the potential negative impact on human rights, democracy and the rule of law caused by unintended harms or misuse of digital technologies. To ensure that, WSIS needs to maintain and strengthen its human rights-based and multi-stakeholder approach to digital transformation.
As generative AI tools grow more powerful, they are also rapidly becoming more available and accessible. The proliferation of AI brings with it risks of fragmentation, manipulation and loss of trust in the digital public space. Of particular concern are the growing risks stemming from malicious use of AI, for example, information manipulation through increasingly advanced synthetic audio and video technologies and algorithmic microtargeting. Other risks are related to malfunctions that can cause unintended harm, such as reliability issues due to “hallucinations”, and social and political biases.
As countries push ahead with the adoption of AI to ensure economic growth and prosperity, dealing with these risks becomes a practical necessity. Although the full effect AI will have on information integrity is not yet known, some risk mitigation strategies can be pursued already now. First, AI adoption and AI literacy need to advance alongside each other. Second, both governments and civil society need to build practical AI-powered capabilities to detect, understand and address AI-related risks, such as purposeful information manipulation. Third, international norms and national regulatory guardrails are necessary to ensure that AI is open, inclusive, transparent, ethical, safe, secure and trustworthy.
Following up on last year’s event at WSIS, this panel discussion will offer an update on the state of play on capability-building, regulatory developments and research that can help all countries leverage knowledge and experience on ways to support a free, open, safe and secure online environment resilient to the negative impacts of information manipulation in the age of AI.
The speakers will address the following questions:
• What is the current assessment of risks and opportunities that AI technologies present for information integrity today and in the near future?
• What tools are available to governments and civil society to mitigate AI-related risks to information integrity and ensure that AI develops as a trustworthy, human rights-based and transparent technology?
• How can countries and other stakeholders work together to address the above issues on a global scale, including within the UN framework?

- C5. Building confidence and security in use of ICTs
- C9. Media
- C10. Ethical dimensions of the Information Society
- C11. International and regional cooperation
- Goal 16: Promote just, peaceful and inclusive societies