Large Language Models for Education: Grading Open-Ended Questions Using ChatGPT

Bibliographic Details
Published in: arXiv.org, 2023-08
Main Authors: Pinto, Gustavo; Cardoso-Pereira, Isadora; Ribeiro, Danilo Monteiro; Lucena, Danilo; de Souza, Alberto; Gama, Kiev
Format: Article
Language: English
EISSN: 2331-8422
Subjects: Chatbots; Feedback; Large language models; Questions; Software; Training
Description: To keep pace with increasingly sophisticated problems, software professionals face the constant challenge of improving their skills, and their study and training process must involve feedback that is both immediate and accurate. In software companies, where many professionals undergo training but few qualified experts are available to provide corrections, delivering effective feedback becomes even more challenging. To address this challenge, this work explores the use of Large Language Models (LLMs) to support the correction of open-ended questions in technical training. In this study, we used ChatGPT to correct answers to open-ended questions given by 42 industry professionals on two topics. Evaluating the corrections and feedback provided by ChatGPT, we observed that it can identify semantic details in responses that other metrics cannot capture. Furthermore, we noticed that, in general, subject matter experts tended to agree with the corrections and feedback given by ChatGPT.
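
As a rough illustration of the kind of grading step described in the abstract, the sketch below sends one open-ended answer, together with a short rubric, to an LLM and asks for a score and written feedback. It assumes the OpenAI Python client (openai >= 1.0) and an OPENAI_API_KEY environment variable; the grade_answer helper, prompt wording, rubric format, and model choice are illustrative assumptions, not the exact setup used in the paper.

    # Minimal sketch of LLM-assisted grading of one open-ended answer.
    # Assumes the OpenAI Python client (openai >= 1.0) and OPENAI_API_KEY;
    # prompt wording, rubric format, and model are illustrative only.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def grade_answer(question: str, rubric: str, answer: str) -> str:
        """Ask the model for a 0-10 score and short written feedback on one answer."""
        prompt = (
            "You are grading an open-ended question from a technical training course.\n"
            f"Question: {question}\n"
            f"Points a good answer should cover: {rubric}\n"
            f"Trainee's answer: {answer}\n"
            "Reply with a score from 0 to 10 and a short justification."
        )
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",  # illustrative model choice (the paper uses ChatGPT)
            messages=[{"role": "user", "content": prompt}],
            temperature=0,  # keep the grading as repeatable as possible
        )
        return response.choices[0].message.content

    if __name__ == "__main__":
        print(grade_answer(
            "What is the difference between unit tests and integration tests?",
            "scope of the code under test; isolation of dependencies; execution speed",
            "Unit tests exercise one function in isolation, while integration tests "
            "run several components together and are usually slower.",
        ))

In the study itself, the model's corrections and feedback were compared against the judgment of subject matter experts; a script along these lines would only produce the model-side output for such a comparison.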