
Auditing YouTube's Recommendation Algorithm for Misinformation Filter Bubbles

In this paper, we present results of an auditing study performed over YouTube aimed at investigating how fast a user can get into a misinformation filter bubble, but also what it takes to "burst the bubble", i.e., revert the bubble enclosure. We employ a sock puppet audit methodology, in which pre-programmed agents (acting as YouTube users) delve into misinformation filter bubbles by watching misinformation promoting content. Then they try to burst the bubbles and reach more balanced recommendations by watching misinformation debunking content. We record search results, home page results, and recommendations for the watched videos. Overall, we recorded 17,405 unique videos, out of which we manually annotated 2,914 for the presence of misinformation. The labeled data was used to train a machine learning model classifying videos into three classes (promoting, debunking, neutral) with an accuracy of 0.82. We use the trained model to classify the remaining videos that would not be feasible to annotate manually. Using both the manually and automatically annotated data, we observe the misinformation bubble dynamics for a range of audited topics. Our key finding is that even though filter bubbles do not appear in some situations, when they do, it is possible to burst them by watching misinformation debunking content (although this manifests differently from topic to topic). We also observe a sudden decrease of the misinformation filter bubble effect when misinformation debunking videos are watched after misinformation promoting videos, suggesting a strong contextuality of recommendations. Finally, when comparing our results with a previous similar study, we do not observe significant improvements in the overall quantity of recommended misinformation content.
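
The abstract mentions training a three-class (promoting, debunking, neutral) video classifier on the manually annotated data and then applying it to the remaining videos. The record does not say which model, features, or tooling the authors used, so the sketch below is only an illustrative stand-in: it trains a TF-IDF plus logistic regression pipeline over video metadata text, and all variable names (e.g. annotated_texts, unlabeled_texts) are hypothetical.

    # Minimal sketch, not the authors' code: a stand-in three-class classifier
    # for video metadata text, assuming TF-IDF features and logistic regression.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import Pipeline, make_pipeline

    CLASSES = ("promoting", "debunking", "neutral")  # labels named in the abstract


    def train_video_classifier(texts: list[str], labels: list[str]) -> Pipeline:
        """Fit a three-class classifier on manually annotated videos.

        texts  -- e.g. concatenated video title, description, and transcript
        labels -- one of CLASSES per video
        """
        # Hold out a stratified test split to estimate accuracy on unseen videos.
        x_train, x_test, y_train, y_test = train_test_split(
            texts, labels, test_size=0.2, stratify=labels, random_state=42
        )
        model = make_pipeline(
            TfidfVectorizer(ngram_range=(1, 2), min_df=2),
            LogisticRegression(max_iter=1000, class_weight="balanced"),
        )
        model.fit(x_train, y_train)
        print("held-out accuracy:", accuracy_score(y_test, model.predict(x_test)))
        return model


    # Hypothetical usage: the 2,914 manually annotated videos would supply
    # annotated_texts/annotated_labels, and the trained model would then label
    # the remaining videos that were not feasible to annotate manually.
    # model = train_video_classifier(annotated_texts, annotated_labels)
    # predicted_labels = model.predict(unlabeled_texts)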

Bibliographic Details
Published in: arXiv.org, 2022-10
Main Authors: Srba, Ivan; Moro, Robert; Tomlein, Matus; Pecher, Branislav; Simko, Jakub; Stefancova, Elena; Kompan, Michal; Hrckova, Andrea; Podrouzek, Juraj; Gavornik, Adrian; Bielikova, Maria
Format: Article
Language: English
Subjects: Algorithms; Bubbles; Classification; False information; Machine learning; Video
DOI: 10.48550/arxiv.2210.10085
EISSN: 2331-8422
Source: Publicly Available Content Database