Abstract
Facial expressions are essential for non-verbal human communication, as they convey behavioral intentions and emotional states. While facial action units (AUs) can occur bilaterally or unilaterally, existing research in affective computing predominantly concentrates on bilateral expressions, largely due to the lack of datasets with unilateral AU labels. In this study, we present a method for generating unilateral AU labels and assess its efficacy against expert-labeled facial images. Furthermore, we introduce a dedicated model trained on the generated data and evaluate its performance across multiple datasets. Our findings offer insights into feature extraction for unilateral facial expression recognition. This research contributes to advancing the understanding and recognition of nuanced facial expressions, with potential applications in domains such as healthcare and human-computer interaction.
Index Terms—Affective computing, facial expression recognition, action units, unilateral facial expressions.
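The abstract does not describe how the unilateral AU labels are generated, so the following is only a minimal illustrative sketch of one plausible approach, not the authors' method: score each half of the face independently by mirroring it into a symmetric image and running an AU detector on each. Here `detect_aus` is a hypothetical stand-in for any off-the-shelf AU detector returning per-AU activations.

```python
# Hypothetical sketch: deriving unilateral AU labels by comparing AU
# detector activations on the two mirrored half-faces. This is an
# assumption for illustration, not the method described in the paper.
import numpy as np


def mirror_half(face: np.ndarray, side: str) -> np.ndarray:
    """Build a bilaterally symmetric face from one half by mirroring it
    across the vertical midline (assumes a roughly frontal, aligned face)."""
    h, w = face.shape[:2]
    half = face[:, : w // 2] if side == "left" else face[:, w // 2 :]
    flipped = half[:, ::-1]
    if side == "left":
        return np.concatenate((half, flipped), axis=1)
    return np.concatenate((flipped, half), axis=1)


def unilateral_labels(face: np.ndarray, detect_aus, threshold: float = 0.5) -> dict:
    """Label each AU as 'left', 'right', or 'bilateral' by thresholding
    the detector's activation on each mirrored half-face.

    `detect_aus` is a hypothetical callable: image -> {AU id: activation}.
    """
    left = detect_aus(mirror_half(face, "left"))
    right = detect_aus(mirror_half(face, "right"))
    labels = {}
    for au in left:
        on_left = left[au] >= threshold
        on_right = right.get(au, 0.0) >= threshold
        if on_left and on_right:
            labels[au] = "bilateral"
        elif on_left or on_right:
            labels[au] = "left" if on_left else "right"
    return labels
```

Under these assumptions, an AU firing on only one mirrored half-face yields a unilateral label, while agreement across both halves yields a bilateral one; the threshold trades label precision against recall.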
| Original language | English |
|---|---|
| Title of host publication | IEEE Affective Computing and Intelligent Interfaces |
| Number of pages | 9 |
| Publication status | Published - 18 Sept 2024 |