
Quantifying Careless Responses in Student Evaluation of Teaching and Justifying Removal for Data Validity

Surveys are typical for student evaluation of teaching (SET). Survey research consistently confirms the negative impacts of careless responses on research validity, including low data quality and invalid research inferences. SET literature seldom addresses if careless responses are present and how to improve. To improve evaluation practices and validity, the current study proposed a three-step procedure to screen SET data for quantifying careless responses, delete and justify the removal of careless responses, and assess if removing careless responses improved the internal structure of the SET data. For these purposes, a convenience sample was taken from a Chinese university. A web-based survey was administered using a revised version of the Students’ Evaluation of Education Quality. One hundred ninety-nine students evaluated 11 courses with 295 responses. Longstring and Rasch outlier analyses identified 49% of nonrandom and random careless responses. The careless responses impacted evaluation results substantively and were deleted. The subsequent study demonstrated that removal improved data validity, using reliability, separation, and inter-rater agreement from the multi-facet Rasch model and G- and D-coefficients, signal-noise ratios, and error variance from generalizability theory. Removing careless responses improved the data validity in terms of true score variance and discrimination power of the data. Data screening should be a prerequisite to validating SET data based on the research results. Data removal is necessary to improve the research validity only if there is a noticeable change in the estimated teaching abilities. Suggestions and implications were discussed, including developing sound evaluation practices and formative use of SET.

Plain Language Summary

How many careless responses were there in student evaluation of teaching, and was it good to remove them to improve data quality? Student evaluation of teaching (SET) exists everywhere in education. However, people question whether they trust SET data and feedback. The survey is popular in SET. Literature has consistently reported the survey participants’ careless response (CR). CRs mean that participants complete a survey without enough attention to instructions and the content of the survey items. There are two types of CRs: non-random or random. Random CR means that participants choose the options randomly. Nonrandom CR occurs if respondents consistently select the same options. When CRs are present, people question data quality and research inferences. Researchers can take preventive measures during survey development and/or administration to address the CR issue. Some scholars recommend deleting CRs. The current research proposed a three-step procedure to (1) identify CRs and remove them, (2) prove that removing CRs was correct, and (3) evaluate whether removing CRs improved the SET data quality. For these purposes, two types of analyses were performed to identify the CRs. The analyses detected 49% of CRs in the dataset. 54.4% of the teachers’ abilities were misclassified. Thus, CRs impacted the evaluation practically. The evaluation criteria demonstrated that CR removal improved data quality. Based on the results, the evaluators should take necessary measures, including prevention measures during survey development and administration and checking data quality. Deleting CRs should be based on careful research ONLY IF many teachers’ abilities were misclassified. It is also important to use a set of criteria to ensure that data quality improves after deleting CRs. The proposed evaluation criteria can be applied to different evaluation settings.
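The abstract reports that a longstring analysis flagged nonrandom careless responses, i.e., respondents who consistently select the same option. The article does not publish its screening code; the following is a minimal Python sketch of a longstring index under an assumed data shape, and the flagging threshold is an illustrative assumption, not the value used in the study.

```python
# Minimal sketch of a longstring screen for nonrandom careless responses.
# Assumed data shape: each respondent maps to an ordered list of Likert answers.
# The threshold is an illustrative assumption, not the article's value.

def longstring(answers: list[int]) -> int:
    """Length of the longest run of identical consecutive answers."""
    if not answers:
        return 0
    longest = run = 1
    for prev, curr in zip(answers, answers[1:]):
        run = run + 1 if curr == prev else 1
        longest = max(longest, run)
    return longest

def flag_nonrandom(respondents: dict[str, list[int]], threshold: int) -> set[str]:
    """Flag respondents whose longest identical-answer run meets the threshold."""
    return {rid for rid, answers in respondents.items()
            if longstring(answers) >= threshold}

data = {
    "s1": [4, 5, 3, 4, 2, 5, 4, 3, 5, 4, 3, 4],  # varied responding
    "s2": [5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5],  # straight-lining
}
print(flag_nonrandom(data, threshold=10))  # {'s2'}
```

The study's complementary screen for random careless responses used Rasch outlier analysis, which requires a fitted Rasch model and is not sketched here.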

Bibliographic Details
Published in: SAGE Open, 2024-04, Vol. 14 (2)
Main Author: Wang, Yingchen
Format: Article
Language:English
Subjects: Validity
ISSN: 2158-2440
EISSN: 2158-2440
DOI: 10.1177/21582440241256947
Online Access: https://doi.org/10.1177/21582440241256947
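Among the validity criteria named in the abstract are G- and D-coefficients and signal-noise ratios from generalizability theory. As a rough illustration only, assuming a one-facet teacher-by-rater design with hypothetical variance components (the function names and numbers below are not the article's), the coefficients can be computed as follows:

```python
# Hedged sketch: G- and D-coefficients for a one-facet (teacher x rater)
# generalizability design. Variance components would normally come from a
# G-study (ANOVA/REML); the numbers below are hypothetical, not the
# article's estimates.

def g_coefficient(var_p: float, var_pr_e: float, n_raters: int) -> float:
    """Relative (G) coefficient: object-of-measurement variance over that
    variance plus relative error (interaction/residual averaged over raters)."""
    rel_error = var_pr_e / n_raters
    return var_p / (var_p + rel_error)

def d_coefficient(var_p: float, var_r: float, var_pr_e: float, n_raters: int) -> float:
    """Absolute (D, or phi) coefficient: the rater main effect joins the error."""
    abs_error = (var_r + var_pr_e) / n_raters
    return var_p / (var_p + abs_error)

def signal_noise(var_p: float, var_pr_e: float, n_raters: int) -> float:
    """Signal-noise ratio for relative decisions: true variance over error."""
    return var_p / (var_pr_e / n_raters)

# Hypothetical components: teacher variance, rater main effect, residual.
print(round(g_coefficient(0.50, 0.80, 25), 3))        # 0.94
print(round(d_coefficient(0.50, 0.10, 0.80, 25), 3))  # 0.933
print(round(signal_noise(0.50, 0.80, 25), 2))         # 15.62
```

Removing careless responses shrinks the error components relative to the teacher (true score) variance, which is how the study's criteria register an improvement in data validity.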