
Improving the performance of automatic short answer grading using transfer learning and augmentation

Bibliographic Details
Published in: Engineering Applications of Artificial Intelligence, Vol. 123 (August 2023), Article 106292
Main Authors: Bonthu, Sridevi, Rama Sree, S., Krishna Prasad, M.H.M.
Format: Article
Language: English
Summary: The task of grading answers ranging from one phrase to one paragraph using computational techniques is known as Automated Short Answer Grading (ASAG). The performance of existing systems is not good enough because training data is limited and unavailable in many domains. Many ASAG systems have been developed as an outcome of active research in this field. This study builds an effective system for grading short answers in the programming domain by leveraging pre-trained language models and text augmentation. We fine-tuned three sentence transformer models on the SPRAG corpus with five different augmentation techniques: Random Deletion, Synonym Replacement, Random Swap, Backtranslation, and NLPAug. The SPRAG corpus contains student responses involving keywords and special symbols. We experimented with four different data sizes of augmented data to determine the impact of training-data volume on the fine-tuned sentence transformer models. This paper provides an exhaustive analysis of fine-tuning pre-trained sentence transformer models on varying sizes of data with text augmentation techniques applied. We found that applying the Random Swap and Synonym Replacement techniques together while fine-tuning gives a significant improvement, with a 4.91% increase in accuracy and a 3.36% increase in F1-score. All the trained models are publicly available at https://github.com/sridevibonthu/SPRAG/tree/main/augmentation.
•A method to automatically grade short, programming-related objective answers authored by students.
•An evaluation of pre-trained sentence transformers on the ASAG task.
•An extensive analysis of text augmentation techniques.
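As a rough illustration of the two augmentation techniques the abstract singles out (Random Swap and Synonym Replacement), here is a minimal token-level sketch in Python. The function names and the toy synonym table are hypothetical and not taken from the paper's released code, which may implement these operations differently (e.g. via WordNet or the NLPAug library):

```python
import random

def random_swap(tokens, n_swaps=1, seed=None):
    """Randomly swap two token positions, n_swaps times (Random Swap)."""
    rng = random.Random(seed)
    tokens = list(tokens)
    for _ in range(n_swaps):
        if len(tokens) < 2:
            break
        i, j = rng.sample(range(len(tokens)), 2)
        tokens[i], tokens[j] = tokens[j], tokens[i]
    return tokens

def synonym_replacement(tokens, synonyms, n_replacements=1, seed=None):
    """Replace up to n_replacements tokens that have an entry in `synonyms`."""
    rng = random.Random(seed)
    tokens = list(tokens)
    candidates = [i for i, t in enumerate(tokens) if t in synonyms]
    rng.shuffle(candidates)
    for i in candidates[:n_replacements]:
        tokens[i] = rng.choice(synonyms[tokens[i]])
    return tokens

# Toy synonym table for illustration; a real pipeline would typically draw
# synonyms from WordNet (e.g. via nltk) while protecting code keywords.
SYNONYMS = {"loop": ["iteration"], "print": ["display"]}

answer = "a loop will print each element".split()
augmented = synonym_replacement(random_swap(answer, n_swaps=1, seed=0),
                                SYNONYMS, n_replacements=1, seed=0)
```

Both operations preserve the answer length, so the augmented sentences can be paired with the original grade labels to enlarge the training set before fine-tuning.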
ISSN: 0952-1976, 1873-6769
DOI: 10.1016/j.engappai.2023.106292