A Facial Expressions Recognition Method Using Residual Network Architecture for Online Learning Evaluation

Bibliographic Details
Published in: Journal of Advanced Computational Intelligence and Intelligent Informatics, Vol. 25, No. 6, pp. 953-962, November 2021
Main Author: Long, Duong Thang
Format: Article
Language: English
Description
Summary: Facial expression recognition (FER) has been widely researched in recent years, with successful applications in a range of domains such as monitoring and warning drivers for safety, surveillance, and recording customer satisfaction. However, FER remains challenging because different people can display the same facial expression in very different ways. Researchers currently approach this problem mainly with convolutional neural networks (CNNs) combined with architectures such as AlexNet, VGGNet, GoogLeNet, ResNet, and SENet. Although FER results are steadily improving as these architectures evolve, there is still room for improvement, especially in practical applications. In this study, we propose a CNN-based model using a residual network architecture for FER. We also augment the images to diversify the training data, improving the model's recognition results and reducing overfitting. Using this model, we propose an integrated system for learning management systems that identifies students and evaluates online learning processes. We run experiments on several published research datasets: CK+, Oulu-CASIA, JAFFE, and images collected from our students (FERS21). Our experimental results indicate that the proposed model performs FER with significantly higher accuracy than other existing methods.
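The abstract's key architectural idea is the residual (skip) connection that ResNet-style models add to a CNN. The sketch below is not the paper's model; it is a minimal, hypothetical illustration of the residual principle using dense layers in place of convolutions (layer sizes and weights are invented for the example):

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def residual_block(x, w1, w2):
    """Simplified residual block: two linear transforms plus an identity
    shortcut. Real ResNet blocks use convolutions and batch normalization;
    this sketch keeps only the skip-connection structure."""
    out = relu(x @ w1)
    out = out @ w2
    # The shortcut adds the block's input to its output, so the block only
    # has to learn a residual correction on top of the identity mapping.
    return relu(out + x)

rng = np.random.default_rng(0)
x = rng.standard_normal((1, 64))            # hypothetical 64-dim feature vector
w1 = rng.standard_normal((64, 64)) * 0.01   # small random weights (illustrative)
w2 = rng.standard_normal((64, 64)) * 0.01
y = residual_block(x, w1, w2)
print(y.shape)  # output keeps the input shape, so the skip add is valid
```

Note that if both weight matrices were zero, the block would reduce to `relu(x)`: the identity path carries the signal through unchanged, which is what lets very deep residual networks train without vanishing gradients.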
ISSN: 1343-0130, 1883-8014
DOI: 10.20965/jaciii.2021.p0953