
Improving the performance of automatic short answer grading using transfer learning and augmentation

The task of grading answers ranging from one phrase to one paragraph using computational techniques is known as Automated Short Answer Grading (ASAG). The performance of existing systems falls short due to limited training data and the lack of data in many domains. Many ASAG systems have been developed as an outcome of active research in this field. This study builds an effective system for grading short answers in the programming domain by leveraging pre-trained language models and text augmentation. We fine-tuned three sentence-transformer models on the SPRAG corpus with five different augmentation techniques: Random Deletion, Synonym Replacement, Random Swap, Backtranslation, and NLPAug. The SPRAG corpus contains student responses involving keywords and special symbols. We experimented with four different training-data sizes to determine the impact of augmented data on the fine-tuned sentence-transformer models. This paper provides an exhaustive analysis of fine-tuning pre-trained sentence-transformer models on varying data sizes with text augmentation applied. We found that applying Random Swap and Synonym Replacement together during fine-tuning gives a significant improvement, with a 4.91% increase in accuracy and a 3.36% increase in F1-score. All trained models are publicly available at https://github.com/sridevibonthu/SPRAG/tree/main/augmentation.

• A method to automatically grade short, programming-related objective answers authored by students.
• An evaluation of pre-trained sentence transformers on the ASAG task.
• An extensive analysis of text augmentation techniques.
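To illustrate the two augmentation techniques the abstract reports as the winning combination, here is a minimal sketch of Random Swap and Synonym Replacement. This is not the authors' implementation: the toy `SYNONYMS` table is a stand-in for a real lexical resource (the record does not specify which synonym source the paper used), and function names are illustrative.

```python
import random

# Toy synonym table; a stand-in for a real resource such as WordNet.
SYNONYMS = {
    "grade": ["score", "mark"],
    "answer": ["response", "reply"],
    "short": ["brief", "concise"],
}

def random_swap(words, n_swaps=1, rng=random):
    """Swap the positions of two randomly chosen words, n_swaps times."""
    words = words[:]
    for _ in range(n_swaps):
        if len(words) < 2:
            break
        i, j = rng.sample(range(len(words)), 2)
        words[i], words[j] = words[j], words[i]
    return words

def synonym_replacement(words, n_repl=1, rng=random):
    """Replace up to n_repl words that have an entry in SYNONYMS."""
    words = words[:]
    candidates = [i for i, w in enumerate(words) if w in SYNONYMS]
    rng.shuffle(candidates)
    for i in candidates[:n_repl]:
        words[i] = rng.choice(SYNONYMS[words[i]])
    return words

def augment(sentence, rng=random):
    """Apply Synonym Replacement, then Random Swap, to one sentence."""
    words = sentence.split()
    return " ".join(random_swap(synonym_replacement(words, rng=rng), rng=rng))

print(augment("grade the short answer carefully"))
```

Both operations preserve sentence length, so each augmented response stays roughly the same size as the original; only word identity (replacement) and word order (swap) are perturbed.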

Bibliographic Details
Published in: Engineering Applications of Artificial Intelligence, 2023-08, Vol. 123, p. 106292, Article 106292
Main Authors: Bonthu, Sridevi; Rama Sree, S.; Krishna Prasad, M.H.M.
Format: Article
Language:English
Online Access: Get full text
DOI: 10.1016/j.engappai.2023.106292
Publisher: Elsevier Ltd
ISSN: 0952-1976
EISSN: 1873-6769
Source: ScienceDirect Journals
Subjects: ASAG; Augmentation; Backtranslation; Random deletion; Random swap; SBERT; SPRAG corpus; Synonym replacement; Transfer learning