dc.description.abstract | Facial expression recognition (FER) has emerged as a dynamic field within computer vision and human-computer
interaction, with diverse applications such as animation, social robots, and personalized banking. Current studies
apply transfer learning models, built on convolutional neural networks, to facial expression recognition.
The proposed model combines data augmentation with fine-tuned transfer learning models to obtain a better FER model. A
comprehensive collection of training images is crucial for effectively training a convolutional neural network (CNN)
to recognize facial expressions accurately. Hence, the presented research employed data augmentation to increase the
number of input images derived from a pre-existing dataset. Designing and training a CNN from scratch is outdated; therefore, fine-tuned
transfer learning models are used in the proposed study. The novel methodology of the proposed model is to freeze the entire
transfer learning model and then activate only its final eight layers. The values of the dense and dropout layers
within these activated eight layers are then varied, which fine-tunes the transfer learning model. The
CK+ and Facial Recognition Dataset (Human) datasets are used in the proposed model. Subsequently, stratified 5-fold
cross-validation is conducted to assess the model's performance on previously unseen data and to avoid overfitting the proposed
model. The method under consideration applied the transfer learning models DenseNet121, DenseNet201,
DenseNet169, and InceptionV3, along with their fine-tuned counterparts, to the augmented CK+ and Facial
Recognition Dataset (Human) datasets. The outcomes indicate an accuracy of 99.36% for the CK+
dataset and 95.14% for the Facial Recognition Dataset (Human). | en_US |
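The abstract describes freezing a pretrained backbone, re-activating only its final eight layers, and then varying the dense-layer width and dropout rate, with data augmentation applied to the input images. A minimal sketch of that setup, assuming a TensorFlow/Keras implementation with DenseNet121, ImageNet weights, 224x224 RGB inputs, and seven expression classes; none of these specifics are given in the abstract and the augmentation transforms and hyperparameter values below are purely illustrative.

import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import DenseNet121

# Pretrained backbone (ImageNet weights and 224x224 RGB inputs are assumptions).
base = DenseNet121(weights="imagenet", include_top=False,
                   input_shape=(224, 224, 3), pooling="avg")

# Freeze the whole backbone, then re-activate (unfreeze) only its final eight layers.
base.trainable = False
for layer in base.layers[-8:]:
    layer.trainable = True

# On-the-fly data augmentation to enlarge the effective training set
# (illustrative transforms; apply to batches in the input pipeline).
augment = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.1),
    layers.RandomZoom(0.1),
])

# Classification head; the dense width and dropout rate are the hyperparameters
# the abstract says are varied during fine-tuning (256 and 0.5 are placeholders).
model = models.Sequential([
    base,
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(7, activation="softmax"),  # 7 expression classes as in CK+ (assumed)
])

model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

In this arrangement, augmentation would be applied in the data pipeline, e.g. train_ds.map(lambda x, y: (augment(x, training=True), y)), so the frozen backbone sees a larger variety of training images while only the last eight layers and the new head receive gradient updates.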
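The abstract also evaluates with stratified 5-fold cross-validation. A hedged sketch of how such folds could be generated with scikit-learn, assuming the images and integer labels sit in NumPy arrays named images and labels and that build_model() recreates the fine-tuned network above; both names are hypothetical helpers introduced only for illustration.

import numpy as np
from sklearn.model_selection import StratifiedKFold

# images: (N, 224, 224, 3) float array, labels: (N,) integer class ids (assumed layout).
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
fold_scores = []
for fold, (train_idx, val_idx) in enumerate(skf.split(images, labels)):
    # Rebuild the fine-tuned model for each fold so no weights leak between folds.
    model = build_model()
    model.fit(images[train_idx], labels[train_idx],
              validation_data=(images[val_idx], labels[val_idx]),
              epochs=20, batch_size=32, verbose=0)
    _, acc = model.evaluate(images[val_idx], labels[val_idx], verbose=0)
    fold_scores.append(acc)

print(f"mean validation accuracy: {np.mean(fold_scores):.4f}")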