Research Area:  Machine Learning
Human beings understand and recognize emotions daily by noticing cues such as facial muscle movements, speech, and hand gestures. Automated emotion recognition is an important problem and has been an active research topic in recent years. Many researchers have combined two or more unimodal sources for better performance. This paper presents an approach to emotion recognition that uses three modalities: facial images, audio signals, and electroencephalogram (EEG) signals, drawn from the FER and CK+, RAVDESS, and SEED-IV datasets, respectively. Several fusion techniques were explored, each yielding different results. The maximum accuracy of 71.24% was obtained with an autoencoder-based fusion approach combined with an SVM classifier.
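The pipeline the abstract describes (concatenate multimodal features, compress them with an autoencoder, classify the fused codes with an SVM) can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the feature dimensions, the number of emotion classes, and the synthetic random data are all assumptions, and real inputs would be features extracted from the FER/CK+ images, RAVDESS audio, and SEED-IV EEG recordings.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins for per-sample feature vectors from the three
# modalities (dimensions are illustrative, not from the paper).
n = 300
face = rng.normal(size=(n, 32))    # e.g. facial-image features
audio = rng.normal(size=(n, 20))   # e.g. audio features
eeg = rng.normal(size=(n, 16))     # e.g. EEG features
y = rng.integers(0, 4, size=n)     # hypothetical 4 emotion classes

# Feature-level fusion: concatenate the per-modality features.
X = np.hstack([face, audio, eeg])

# Autoencoder: an MLP trained to reconstruct its own input; the hidden
# layer then serves as a compact fused representation.
ae = MLPRegressor(hidden_layer_sizes=(16,), activation="relu",
                  max_iter=500, random_state=0)
ae.fit(X, X)

def encode(model, X):
    # Hidden-layer (ReLU) activations of the trained autoencoder.
    return np.maximum(0.0, X @ model.coefs_[0] + model.intercepts_[0])

Z = encode(ae, X)

# Classify the fused codes with an SVM.
Xtr, Xte, ytr, yte = train_test_split(Z, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf").fit(Xtr, ytr)
print("test accuracy:", clf.score(Xte, yte))
```

With real extracted features rather than random noise, the SVM operates on the low-dimensional autoencoder codes instead of the raw concatenation, which is the fusion step the abstract credits for the best reported accuracy.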
Keywords:  human beings, facial muscle movements, speech, hand gestures, audio signals, electroencephalogram
Author(s) Name:  Gokul Subramanian, Niranjan Cholendiran, Kotapati Prathyusha, Noviya Balasubramanain, J Aravinth
Journal name:  
Conference name:  2021 Seventh International Conference on Bio Signals, Images, and Instrumentation (ICBSII)
Publisher name:  IEEE
DOI:  https://doi.org/10.1109/ICBSII51839.2021.9445146
Volume Information:  
Paper Link:   https://ieeexplore.ieee.org/abstract/document/9445146