
Shall AI moderators be made visible? Perception of accountability and trust in moderation systems on social media platforms

This study examines how the visibility of a content moderator and the ambiguity of moderated content influence perceptions of the moderation system in a social media environment. In a two-day pre-registered experiment conducted in a realistic social media simulation, participants encountered moderated comments that were either unequivocally harsh or ambiguously worded, and the source of moderation was either unidentified or attributed to other users or to an automated system (AI). The results show that when comments were moderated by an AI rather than by other users, participants perceived less accountability in the moderation system and had less trust in the moderation decision, especially for ambiguously worded harassment as opposed to clear harassment cases. However, no differences emerged in perceived moderation fairness, objectivity, or participants' confidence in their understanding of the moderation process. Overall, the study demonstrates that users tend to question the moderation decision and system more when an AI moderator is visible, which highlights the complexity of effectively managing the visibility of automated content moderation in the social media environment.

Bibliographic Details
Published in: Big data & society, 2022-07, Vol. 9 (2)
Main Authors: Ozanne, Marie; Bhandari, Aparajita; Bazarova, Natalya N.; DiFranzo, Dominic
Format: Article
Language: English
Publisher: SAGE Publications (London, England)
ISSN/EISSN: 2053-9517
DOI: 10.1177/20539517221115666
Subjects: Accountability; Artificial intelligence; Automation; Computer simulation; Content management; Digital media; Harassment; Mass media; Objectivity; Perception; Perceptions; Social media; Social networks; Social systems; Visibility
Rights: © The Author(s) 2022. This work is licensed under the Creative Commons Attribution – Non-Commercial License (https://creativecommons.org/licenses/by-nc/4.0/).