Geneva, Switzerland, 21 January 2019
The International Telecommunication Union organized an open workshop on Artificial Intelligence, Machine Learning and Security held on 21 January 2019 in ITU Headquarters in Geneva, Switzerland. The workshop was held one day before the meeting of ITU-T Study Group 17 on Security which took place on 22-30 January 2019 in the same venue.
Artificial Intelligence (AI) and Machine Learning (ML) technologies are advancing at a remarkable speed, leading to many widely beneficial applications ranging from machine translation to medical image analysis.
AI and ML have the potential to improve cybersecurity in such a way that human analysts become more effective and accurate in their detection of security threats and related decision-making. Diversified data is key to data analytics. However, the volume of available data has grown so large that skilled security analysts are overloaded in identifying potential attacks; the opportunity to leverage AI and ML in security is therefore very clear.
AI-empowered applications and services have been developed with a focus on performance and efficiency in constrained environments, without always considering and protecting against the emergence of new security vulnerabilities, threats or other unintended consequences. If AI and ML are to be part of security defences, there is a need to explore how these defences might be subverted.
Several of the threats (e.g. automated spear phishing, personalised propaganda) rely on attackers gaining access to personal information about individuals. The risks posed by AI and ML to security and privacy should be mitigated; these include threat-detection methods that misclassify malicious threats as benign, automated systems that fail to detect key stimuli, and authentication mechanisms prone to misidentification.
The workshop focused on three critical questions: what is the relationship between AI/ML and security; how can AI and ML be utilized to improve cyber defence capabilities; and which risks, especially privacy risks, should be addressed when building AI- and ML-empowered applications.
Objectives
The objectives of the workshop included, but were not limited to:
- discuss the relationship between AI/ML and security/privacy;
- identify how AI/ML can be used to launch cyber-attacks;
- identify use cases for incorporating AI/ML for security and trust;
- identify use cases for defining security and trust of AI/ML;
- identify security requirements and capabilities of AI/ML enabled applications and services;
- identify security requirements and capabilities for security applications and services incorporating AI/ML;
- share on-going activities among relevant groups (especially ITU-T FG-AI4H, FG-ML5G, SG13, SG16; ISO/IEC JTC 1/SC 42, IETF, IEEE, OASIS, etc.) and industries; and
- identify ways forward for SG17 to undertake in its future study, including potential new work items.
Target Audience
ITU Member States, ICT Regulators, Policymakers, ICT Service/Platform Providers, Mobile Operators, International Standards Organizations, NGOs related to security and privacy protection.