
Mean scores for self-report surveys completed using paper-and-pencil and computers: A meta-analytic test of equivalence

The large body of literature on the comparability of mean scores for self-report survey responses gathered using paper-and-pencil and computer data collection methodologies has yielded inconclusive results. However, no comprehensive meta-analysis has been conducted in this field, and those that are available for specific measures have typically not differentiated between studies using between-groups and within-subjects designs. Also, few individual studies, and no meta-analyses, have used correct statistical procedures to determine the equivalence of the two methodologies. Consequently, we conducted two meta-analyses assessing quantitative equivalence (i.e., mean scores), with the first consisting of 144 independent effect sizes from studies with between-groups designs and the second including 70 independent effect sizes from studies using within-subjects designs. Both meta-analyses assessing mean scores indicated equivalence across conditions, with large heterogeneity of variance in the between-groups analysis. Presence of others in both the paper-and-pencil and computer conditions accounted for a significant portion of this variance. Heterogeneity of variance was small for the within-subjects design analysis. Overall, results indicated that the mean scores for self-report surveys using paper-and-pencil and the computer are comparable, although heterogeneity differs for the study designs. Equivalence testing was demonstrated to be the recommended statistical procedure for this type of research.
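The equivalence testing the abstract recommends is most commonly carried out as two one-sided tests (TOST). The sketch below is a minimal Python illustration of TOST for a single between-groups comparison of paper-and-pencil versus computer scores; it is not the authors' meta-analytic procedure, and the data, equivalence margin, and function name are hypothetical.

```python
# Minimal TOST (two one-sided tests) sketch for mean equivalence of two
# independent groups. Illustrative only; the margin and data are made up.
import numpy as np
from scipy import stats

def tost_independent(x, y, bound):
    """Return the mean difference and the TOST p-value for a symmetric
    equivalence margin `bound` on the raw score scale (equal-variance form)."""
    nx, ny = len(x), len(y)
    diff = np.mean(x) - np.mean(y)
    # Pooled standard error of the mean difference.
    sp = np.sqrt(((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1))
                 / (nx + ny - 2))
    se = sp * np.sqrt(1 / nx + 1 / ny)
    df = nx + ny - 2
    # Lower test: H0 diff <= -bound; upper test: H0 diff >= +bound.
    p_lower = 1 - stats.t.cdf((diff + bound) / se, df)
    p_upper = stats.t.cdf((diff - bound) / se, df)
    # Equivalence is claimed only if both one-sided tests reject.
    return diff, max(p_lower, p_upper)

# Hypothetical example: two survey administration conditions.
rng = np.random.default_rng(0)
paper = rng.normal(3.50, 0.8, 120)
computer = rng.normal(3.55, 0.8, 120)
diff, p = tost_independent(paper, computer, bound=0.25)
print(f"mean difference = {diff:.3f}, TOST p = {p:.3f}")  # p < .05 -> equivalent within +/-0.25
```

A p-value below the chosen alpha on both one-sided tests supports equivalence within the stated margin; failing to reject a conventional null hypothesis of no difference does not, which is the distinction the abstract draws.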

Bibliographic Details
Published in: Computers in human behavior, 2018-09, Vol. 86, pp. 153-164
Main Authors: Weigold, Arne; Weigold, Ingrid K.; Natera, Sara N.
Format: Article
Language:English
Publisher: Elmsford: Elsevier Ltd
DOI: 10.1016/j.chb.2018.04.038
ISSN: 0747-5632
EISSN: 1873-7692
Source: ScienceDirect Journals
Subjects: Computer; Data acquisition; Design analysis; Equivalence; Equivalence testing; Heterogeneity; Meta-analysis; Paper-and-pencil; Polls & surveys; Self report; Variance analysis