
Moderating with the Mob: Evaluating the Efficacy of Real-Time Crowdsourced Fact-Checking

Bibliographic Details
Published in: Journal of Online Trust & Safety, 2021-10, Vol. 1 (1)
Main Authors: Godel, William; Sanderson, Zeve; Aslett, Kevin; Nagler, Jonathan; Bonneau, Richard; Persily, Nathaniel; Tucker, Joshua
Format: Article
Language: English
Description: Reducing the spread of false news remains a challenge for social media platforms, as the current strategy of using third-party fact-checkers lacks the capacity to address both the scale and speed of misinformation diffusion. Research on the “wisdom of the crowds” suggests one possible solution: aggregating the evaluations of ordinary users to assess the veracity of information. In this study, we investigate the effectiveness of a scalable model for real-time crowdsourced fact-checking. We select 135 popular news stories and have them evaluated by both ordinary individuals and professional fact-checkers within 72 hours of publication, producing 12,883 individual evaluations. Although we find that machine learning-based models using the crowd perform better at identifying false news than simple aggregation rules, our results suggest that neither approach is able to perform at the level of professional fact-checkers. Additionally, both methods perform best when using evaluations only from survey respondents with high political knowledge, suggesting reason for caution for crowdsourced models that rely on a representative sample of the population. Overall, our analyses reveal that while crowd-based systems provide some information on news quality, they are nonetheless limited—and have significant variation—in their ability to identify false news.
DOI: 10.54501/jots.v1i1.15
Publisher: Stanford Internet Observatory, Journal of Online Trust and Safety (Stanford)
Publication Date: 2021-10-28
Rights: Published under the Creative Commons BY-NC-SA 4.0 license (https://creativecommons.org/licenses/by-nc-sa/4.0/); open access.
ISSN: 2770-3142
EISSN: 2770-3142
Source: Publicly Available Content (ProQuest); Coronavirus Research Database
Subjects: Algorithms; Crowdsourcing; Design; Diffusion rate; False information; Investigations; Literature reviews; Machine learning; News; Real time; Social networks