Optimizing deep learning models for facial emotion recognition in embedded systems
DOI: https://doi.org/10.18488/76.v13i1.4817

Abstract
Facial emotion recognition (FER) enables intelligent systems to interpret human affect from facial expressions and is increasingly important for human–computer interaction in resource-constrained environments. This work designs and evaluates a real-time FER framework that improves recognition accuracy while maintaining low computational complexity, making the framework suitable for embedded and edge devices. The proposed approach uses transfer learning with deep convolutional neural networks: MobileNetV2 and ResNet50 serve as benchmark models, and EfficientNetB0 is selected as the primary model for optimization. Experiments are conducted on the FER-2013 dataset for both training and evaluation, and the input images are preprocessed to enhance facial feature representation. Fine-tuning of the pretrained networks reduces training time and improves generalization while preserving real-time feasibility through lightweight inference. The experimental results show that EfficientNetB0 achieves an accuracy of 72.3% with low-latency performance appropriate for real-time operation. ResNet50 provides comparatively higher accuracy but demands greater computational resources, whereas MobileNetV2 offers a more balanced trade-off between speed and recognition performance. These findings indicate that EfficientNetB0 is a practical choice for real-time FER systems, supporting deployment on embedded platforms in applications such as assistive technologies, smart monitoring, and interactive systems where computational efficiency is critical.
