Research Area:  Machine Learning
Devices capable of understanding human emotions would significantly improve the way people interact with them. Moreover, if those devices could also influence users' emotions in a positive way, they would improve users' quality of life, especially for frail or dependent users. A first step towards this goal is improving the performance of emotion recognition systems. A multimodal approach is particularly appealing, as the availability of different signals keeps growing. We believe it is important to incorporate new architectures and techniques such as the Transformer and BERT, and to investigate how to use them in a multimodal setting. It is also essential to develop self-supervised learning techniques that take advantage of the considerable quantity of unlabeled data available nowadays. In this extended abstract, we present our research in those directions.
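To make the multimodal idea sketched above concrete, the following is a minimal, illustrative NumPy sketch (not the authors' actual model) of Transformer-style fusion: audio and visual feature sequences are concatenated with a CLS-style summary token, passed through a single self-attention layer, and the CLS output is mapped to emotion-class probabilities. All dimensions, the 7-class emotion set, and the weight matrices are assumptions for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # single-head scaled dot-product self-attention over a token sequence
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d = Q.shape[-1]
    A = softmax(Q @ K.T / np.sqrt(d))
    return A @ V

rng = np.random.default_rng(0)
d = 16                              # shared embedding size (assumed)
audio = rng.normal(size=(10, d))    # 10 audio frame embeddings (dummy)
video = rng.normal(size=(8, d))     # 8 video frame embeddings (dummy)
cls = np.zeros((1, d))              # CLS-style summary token

# multimodal fusion: a single joint token sequence lets attention mix
# information across modalities
tokens = np.concatenate([cls, audio, video], axis=0)   # (19, d)

Wq, Wk, Wv = (rng.normal(size=(d, d)) * 0.1 for _ in range(3))
fused = self_attention(tokens, Wq, Wk, Wv)             # (19, d)

W_out = rng.normal(size=(d, 7)) * 0.1                  # 7 emotion classes (assumed)
probs = softmax(fused[0] @ W_out)                      # classify from CLS token
print(probs.shape)
```

In a real system the random features would come from modality-specific encoders, multiple attention layers and heads would be stacked, and the weights would be trained, possibly with a self-supervised pretext task on unlabeled data first.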
Keywords:  Performance evaluation, Deep learning, Emotion recognition, Affective computing, Computer vision, Conferences, Bit error rate
Author(s) Name:  Juan Vazquez-Rodriguez
Journal name:  
Conference name:  2021 9th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)
Publisher name:  IEEE
DOI:  10.1109/ACIIW52867.2021.9666396
Volume Information:  
Paper Link:  https://ieeexplore.ieee.org/abstract/document/9666396