Abstract
Decoding human emotions through Facial Expression Recognition (FER) is a challenging yet critical endeavor, particularly on resource-limited embedded systems. This research introduces a method centered on an attention-augmented Convolutional Neural Network (CNN) tailored to detect facial Action Units (AUs), the distinct facial muscle movements tied to specific emotions. To suit resource-constrained environments, the model underwent a three-step optimization process: restructuring the CNN architecture, model pruning, and quantization. Despite its compact footprint of only 57,001 parameters, the model delivers robust performance across multiple datasets. Once the AUs are accurately identified, the Facial Action Coding System (FACS) is used to map them to the corresponding emotions, enabling both emotion recognition and explanation. Quantization further compresses the model without compromising its performance, enabling efficient, real-world emotion recognition even within constrained environments.
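The final stage described in the abstract, mapping detected AUs to emotions via FACS, can be sketched as a simple lookup against prototype AU combinations. This is a minimal illustration using commonly cited EMFACS-style prototypes (e.g., happiness as AU6 + AU12); the exact AU-to-emotion table used in the paper may differ.

```python
# Illustrative EMFACS-style prototypes mapping Action Units (AUs) to
# basic emotions. These combinations are commonly cited in the FACS
# literature, not necessarily the exact table used in the paper.
EMOTION_AUS = {
    "happiness": {6, 12},
    "sadness": {1, 4, 15},
    "surprise": {1, 2, 5, 26},
    "fear": {1, 2, 4, 5, 7, 20, 26},
    "anger": {4, 5, 7, 23},
    "disgust": {9, 15, 16},
}


def map_aus_to_emotion(detected_aus):
    """Return the emotion whose prototype AU set best matches the detected AUs."""
    detected = set(detected_aus)
    best, best_score = "neutral", 0.0
    for emotion, proto in EMOTION_AUS.items():
        union = detected | proto
        # Jaccard similarity between detected AUs and the prototype set.
        score = len(detected & proto) / len(union) if union else 0.0
        if score > best_score:
            best, best_score = emotion, score
    return best


print(map_aus_to_emotion([6, 12]))       # -> happiness
print(map_aus_to_emotion([1, 2, 5, 26])) # -> surprise
```

Because the mapping is an explicit table rather than a learned classifier, each prediction can be explained by listing the AUs that matched, which is the explainability benefit the abstract refers to.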
Original language | English |
---|---|
Title of host publication | 2023 30th IEEE International Conference on Electronics, Circuits and Systems (ICECS) |
Place of Publication | Piscataway, NJ |
Publisher | IEEE |
Number of pages | 4 |
ISBN (Electronic) | 9798350326499 |
Publication status | Published - 10 Jan 2024 |
Event | 2023 30th IEEE International Conference on Electronics, Circuits and Systems, Istanbul, Turkey, 4 Dec 2023 → 7 Dec 2023 |
Conference
Conference | 2023 30th IEEE International Conference on Electronics, Circuits and Systems |
---|---|
Abbreviated title | ICECS |
Country/Territory | Turkey |
City | Istanbul |
Period | 4/12/23 → 7/12/23 |
Keywords
- embedded systems
- action unit detection
- facial expression recognition
- model optimization
- convolutional neural networks