

Trustworthy AI

Geneva, Switzerland

Trustworthy AI is an ITU programme of work under its flagship AI for Good initiative, advancing the standardization of several key privacy-enhancing technologies (PETs).

The technologies behind PETs, such as (fully) homomorphic encryption (FHE), are notoriously complex. With homomorphic encryption, for instance, numerical data is encrypted and yet mathematical operations such as addition and multiplication can still be applied to the data while it remains in ciphertext form. These properties often strike newcomers as counterintuitive.
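The homomorphic property can be made concrete with a toy example. The sketch below implements the classic Paillier cryptosystem (an additively homomorphic scheme, used here purely for illustration; it is not FHE, and real deployments use vetted libraries and keys thousands of bits long, not the tiny primes chosen here for readability):

```python
import math
import random

# Toy Paillier cryptosystem -- illustrative only, NOT secure.
P, Q = 2357, 2551            # far too small for real use
N = P * Q
N2 = N * N
G = N + 1                    # standard choice of generator
LAM = math.lcm(P - 1, Q - 1)
MU = pow(LAM, -1, N)         # inverse of L(G^LAM mod N^2) when G = N + 1

def encrypt(m):
    """Encrypt an integer 0 <= m < N under the public key (N, G)."""
    r = random.randrange(1, N)
    while math.gcd(r, N) != 1:
        r = random.randrange(1, N)
    return (pow(G, m, N2) * pow(r, N, N2)) % N2

def decrypt(c):
    """Decrypt a ciphertext with the private key (LAM, MU)."""
    l = (pow(c, LAM, N2) - 1) // N   # the function L(x) = (x - 1) / N
    return (l * MU) % N

# Homomorphic property: multiplying ciphertexts adds the plaintexts.
c1, c2 = encrypt(20), encrypt(22)
total = decrypt((c1 * c2) % N2)
print(total)  # 42 -- the sum was computed without ever decrypting c1 or c2
```

The key point is the last step: the party holding only ciphertexts can compute an encrypted sum, and only the key holder learns the result.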

Because these techniques are novel and societies and organizations have little experience using them, their adoption depends on demonstrating to potential users that they are indeed safe.

This is the aim of Trustworthy AI: to develop standards, through a multistakeholder framework, that ensure technologies promoted as privacy-preserving are indeed so.

The programme grew out of the ITU-WHO Focus Group on Artificial Intelligence for Health (FG-AI4H). The need to collaborate on data in international contexts while respecting privacy drove the development of reliable standards for privacy-enhancing technologies.

During the COVID-19 pandemic, the AI for Good Global Summit moved online. As part of this shift, a new Trustworthy AI series was launched to address the second mission of AI for Good: ensuring the responsible development of AI. The series is hosted by Professor Wojciech Samek of the Technical University of Berlin and Fraunhofer HHI.

Technologies that are in the process of standardization include:
  1. Homomorphic Encryption
  2. Federated Learning
  3. Multi-Party Computation
  4. Differential Privacy
  5. Zero-Knowledge Proofs
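To illustrate one more item from the list, the sketch below shows the Laplace mechanism, the textbook building block of differential privacy (the function name and dataset are hypothetical examples; production systems rely on vetted libraries rather than hand-rolled noise):

```python
import random

def laplace_mechanism(true_value, sensitivity, epsilon, rng=random):
    """Return true_value perturbed by Laplace noise of scale sensitivity/epsilon.

    Illustrative sketch only: smaller epsilon means more noise and
    stronger privacy for the individuals in the underlying data.
    """
    scale = sensitivity / epsilon
    # A Laplace sample is a random sign times an exponential sample.
    noise = rng.choice([-1.0, 1.0]) * rng.expovariate(1.0 / scale)
    return true_value + noise

# Example: release a count query (sensitivity 1) with epsilon = 0.5.
ages = [34, 29, 41, 57, 23, 38]
true_count = sum(1 for a in ages if a >= 30)          # 4
noisy_count = laplace_mechanism(true_count, sensitivity=1, epsilon=0.5)
print(round(noisy_count, 2))
```

The released value is close to the true count on average, but any single individual's presence or absence changes the output distribution only slightly, which is the formal privacy guarantee.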
CC BY 4.0


