ITU-T work programme

Study period 2017-2020, Study Group 16, Question 24/16


Work item: H.862.5 (ex F.EMO-NN)
Subject/title: Emotion enabled multimodal user interface based on artificial neural networks
Status: Approved on 2021-06-13 
Approval process: AAP
Type of work item: Recommendation
Version: New
Equivalent number: -
Timing: -
Liaison: -
Supporting members: -
Summary: This Recommendation specifies functional entities and an architecture for an emotion-enabled multimodal user interface based on artificial neural networks. Because emotion technology can significantly improve human-computer interaction (HCI), many companies and researchers have been studying it, and applications combining multimodality and emotion analysis are now being introduced together with artificial intelligence technology. However, many current systems still fail to infer human emotion properly, either because they depend too heavily on a single source or because they are not robust under real-world conditions. The proposed architecture therefore describes a multimodal UI based on emotion analysis with artificial neural networks, including its properties, illustrations and data. The multimedia input comprises text, speech and image data. For unimodal emotion analysis, each modality is pre-processed in its corresponding module; for example, text data is pre-processed by data augmentation, person attribute recognition, topic cluster recognition, document summarization, named entity recognition, sentence splitting, keyword clustering and sentence-to-graph functions.
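The text pre-processing chain above can be sketched as a sequence of stages applied to raw input before unimodal emotion analysis. The following is a minimal illustrative sketch, not the Recommendation's normative design: the function names, the naive sentence splitter and the toy NER stage are all assumptions introduced for illustration.

```python
# Hypothetical sketch of the unimodal text pre-processing chain; stage
# implementations are placeholders, not the Recommendation's method.
from typing import Callable, List

def sentence_splitter(text: str) -> List[str]:
    """Split raw text into sentences (naive period-based split)."""
    return [s.strip() for s in text.split(".") if s.strip()]

def named_entity_recognition(sentences: List[str]) -> List[str]:
    """Toy NER stage: tag capitalized tokens as entities."""
    tagged = []
    for s in sentences:
        tokens = ["<ENT>%s</ENT>" % t if t[:1].isupper() else t
                  for t in s.split()]
        tagged.append(" ".join(tokens))
    return tagged

def preprocess_text(text: str, stages: List[Callable]) -> List[str]:
    """Run the text pre-processing stages in order, as in the summary."""
    data = sentence_splitter(text)
    for stage in stages:
        data = stage(data)
    return data

processed = preprocess_text("Alice met Bob. They talked.",
                            [named_entity_recognition])
print(processed)
```

In the described architecture, further stages (document summarization, keyword clustering, sentence-to-graph conversion, etc.) would be appended to the same list before the result feeds the neural emotion classifier.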
Comment: -
Reference(s):
  Historic references:
Contact(s):
Miran Choi, Editor
Sunghan Kim, Editor
Hyun Ho Lee, Editor
ITU-T A.5 justification(s):
-
First registration in the WP: 2019-03-07 17:27:21
Last update: 2021-05-10 11:55:18