Unimodal and multimodal static facial expression recognition for virtual reality users with EmoHeVRDB

Thorben Ortmann, Qi Wang, Larissa Putzar

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review


Abstract

This study explores the potential of utilizing Facial Expression Activations (FEAs) captured via the Meta Quest Pro Virtual Reality (VR) headset for Facial Expression Recognition (FER) in VR settings. Leveraging the EmojiHeroVR Database (EmoHeVRDB), we compared several unimodal approaches and achieved up to 73.02% accuracy for the static FER task with seven emotion categories. Furthermore, we integrated FEA and image data in multimodal approaches, observing significant improvements in recognition accuracy. An intermediate fusion approach achieved the highest accuracy of 80.42%, significantly surpassing the baseline evaluation result of 69.84% reported for EmoHeVRDB's image data. Our study is the first to utilize EmoHeVRDB's unique FEA data for unimodal and multimodal static FER, establishing new benchmarks for FER in VR settings. Our findings highlight the potential of fusing complementary modalities to enhance FER accuracy in VR settings, where conventional image-based methods are severely limited by the occlusion caused by Head-Mounted Displays (HMDs).
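
As a rough illustration of the intermediate fusion idea described in the abstract, the sketch below encodes an image branch and an FEA branch separately and concatenates their feature vectors before a joint classifier. It is a minimal example under stated assumptions, not the authors' architecture: the ResNet-18 backbone, the layer sizes, and the 63-dimensional FEA input are choices made here for the sketch.

```python
import torch
import torch.nn as nn
from torchvision import models


class IntermediateFusionFER(nn.Module):
    """Hypothetical intermediate-fusion model: image and FEA features
    are encoded separately, concatenated, and classified jointly."""

    def __init__(self, num_fea: int = 63, num_classes: int = 7):
        super().__init__()
        # Image branch: a standard CNN backbone with its final layer removed.
        backbone = models.resnet18(weights=None)
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()
        self.image_encoder = backbone
        # FEA branch: a small MLP over the facial expression activation vector
        # (dimensionality is an assumption for this sketch).
        self.fea_encoder = nn.Sequential(
            nn.Linear(num_fea, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
        )
        # Fusion head: concatenate both feature vectors, then classify.
        self.classifier = nn.Sequential(
            nn.Linear(feat_dim + 128, 256), nn.ReLU(), nn.Dropout(0.3),
            nn.Linear(256, num_classes),
        )

    def forward(self, image: torch.Tensor, fea: torch.Tensor) -> torch.Tensor:
        img_feat = self.image_encoder(image)   # (B, feat_dim)
        fea_feat = self.fea_encoder(fea)       # (B, 128)
        fused = torch.cat([img_feat, fea_feat], dim=1)
        return self.classifier(fused)          # (B, num_classes) logits


# Example forward pass with dummy data: a batch of 4 images and FEA vectors.
model = IntermediateFusionFER()
logits = model(torch.randn(4, 3, 224, 224), torch.randn(4, 63))
print(logits.shape)  # torch.Size([4, 7])
```

Fusing at the feature level, before classification, is what distinguishes intermediate fusion from late fusion, where per-modality predictions would instead be combined at the decision stage.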
Original language: English
Title of host publication: 2025 IEEE International Conference on Artificial Intelligence and eXtended and Virtual Reality (AIxVR)
Publisher: IEEE
Pages: 252-256
Number of pages: 5
ISBN (Electronic): 9798331521578
ISBN (Print): 9798331521585
DOIs
Publication status: Published - 26 Feb 2025

Publication series

Name: IEEE Conference Proceedings
Publisher: IEEE
ISSN (Print): 2771-7445
ISSN (Electronic): 2771-7453

Keywords

  • facial expression recognition
  • emotion recognition
  • multimodal
  • virtual reality
