Enhancing topic-detection in computerized assessments of constructed responses with distributional models of language
•Inbuilt Rubric is presented here as a “non-Latent” Semantic Analysis approach.
•This method transforms the latent semantic space into a non-latent and meaningful one.
•It represents meaning in multi-vector representations for text processing.
•Its performance was significantly higher than the cosine-based similarity.
•It could enhance content-detection in expert and intelligent systems applications.
Published in: | Expert systems with applications, 2021-12, Vol. 185, p. 115621, Article 115621 |
---|---|
Main Authors: | Martínez-Huertas, José Á.; Olmos, Ricardo; León, José A. |
Format: | Article |
Language: | English |
Subjects: | Assessments; Automated summary evaluation; Constructed responses; Convergence; Inbuilt rubric; Latent semantic analysis; Methods; Representations; Semantics; Summaries; Topic detection; Validity |
container_start_page | 115621 |
container_title | Expert systems with applications |
container_volume | 185 |
creator | Martínez-Huertas, José Á.; Olmos, Ricardo; León, José A. |
description | •Inbuilt Rubric is presented here as a “non-Latent” Semantic Analysis approach.•This method transforms the latent semantic space into a non-latent and meaningful one.•It represents meaning in multi-vector representations for text processing.•Its performance was significantly higher than the cosine-based similarity.•It could enhance content-detection in expert and intelligent systems applications.
Usually, computerized assessments of constructed responses use a predictive-centered approach instead of a validity-centered one. Here, we compared the convergent and discriminant validity of two computerized assessment methods designed to detect semantic topics in constructed responses: Inbuilt Rubric (IR) and Partial Contents Similarity (PCS). While both methods are distributional models of language and use the same Latent Semantic Analysis (LSA) prior knowledge, they produce different semantic representations. PCS evaluates constructed responses using non-meaningful semantic dimensions (this method is the standard LSA assessment of constructed responses), whereas IR endows the original LSA semantic space coordinates with meaning. In the present study, 255 undergraduate and high school students were each assigned one of three texts and asked to write a summary. A topic-detection task was conducted comparing the IR and PCS methods. Evidence from convergent and discriminant validity was found in favor of the IR method for topic-detection in computerized constructed response assessments. In this line, the multicollinearity of the PCS method was higher than that of the IR method, which means that the former is less capable of discriminating between related concepts or meanings. Moreover, the semantic representations of both methods were qualitatively different; that is, they evaluated different concepts or meanings. The implications of these automated assessment methods are also discussed. First, the meaningful coordinates of the Inbuilt Rubric method can accommodate expert rubrics for computerized assessments of constructed responses, improving computer-assisted language learning. Second, they can provide high-quality computerized feedback by accurately detecting topics in other educational constructed response assessments.
Thus, it is concluded that: (1) the IR method can represent different concepts and contents of a text, simultaneously mapping a considerable variability of contents in constructed responses; (2) the IR method's semantic representations have a qualitatively different meaning from the LSA ones and present a desirable multicollinearity that promotes the discriminant validity of the scores of distributional models of language; and (3) the IR method can extend the performance and the applications of current LSA semantic representations by endowing the dimensions of the semantic space with semantic meanings. |
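The abstract contrasts two scoring schemes built on the same LSA space: PCS scores a summary with one cosine similarity per reference topic text in the original (non-meaningful) latent dimensions, while Inbuilt Rubric re-expresses the response on axes that have been given explicit topic meanings and reads the scores from the transformed coordinates. A minimal sketch of that contrast, with toy 4-dimensional vectors and a deliberately simplified IR transformation (the values, the two-topic setup, and the unit-axis construction are illustrative assumptions, not the authors' actual space or code):

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy LSA vectors (4 latent dimensions); purely illustrative values.
response = [0.9, 0.1, 0.4, 0.2]      # a student's summary
topic_texts = [                      # reference texts, one per topic
    [1.0, 0.0, 0.3, 0.1],
    [0.1, 0.8, 0.2, 0.5],
]

# PCS: one cosine per reference topic text, in the raw latent space.
pcs = [cosine(response, t) for t in topic_texts]

# Inbuilt Rubric (simplified): build axes with assigned meanings from the
# topic descriptors, then read each topic score directly as a coordinate
# of the response in that meaningful basis (a multi-vector representation).
def unit(v):
    n = math.sqrt(sum(a * a for a in v))
    return [a / n for a in v]

rubric_axes = [unit(t) for t in topic_texts]
ir = [sum(a * b for a, b in zip(response, axis)) for axis in rubric_axes]

print("PCS cosines:    ", pcs)
print("IR coordinates: ", ir)
```

In this toy setup both schemes reduce to dot products; the paper's point is that IR's axes carry rubric meanings (supporting discriminant validity and interpretable feedback), whereas PCS's latent dimensions do not.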
doi_str_mv | 10.1016/j.eswa.2021.115621 |
format | article |
publisher | New York: Elsevier Ltd |
fulltext | fulltext |
identifier | ISSN: 0957-4174 |
ispartof | Expert systems with applications, 2021-12, Vol.185, p.115621, Article 115621 |
issn | 0957-4174; 1873-6793 |
language | eng |
recordid | cdi_proquest_journals_2584580434 |
source | Elsevier |
subjects | Assessments; Automated summary evaluation; Constructed responses; Convergence; Inbuilt rubric; Latent semantic analysis; Methods; Representations; Semantics; Summaries; Topic detection; Validity |
title | Enhancing topic-detection in computerized assessments of constructed responses with distributional models of language |