Disentangling Hate in Online Memes
Hateful and offensive content detection has been extensively explored in a single modality such as text. However, such toxic information could also be communicated via multimodal content such as online memes. Therefore, detecting multimodal hateful content has recently garnered much attention in academic and industry research communities. This paper aims to contribute to this emerging research topic by proposing DisMultiHate, a novel framework for the classification of multimodal hateful content. Specifically, DisMultiHate is designed to disentangle target entities in multimodal memes to improve hateful content classification and explainability. We conduct extensive experiments on two publicly available hateful and offensive meme datasets. Our experimental results show that DisMultiHate outperforms state-of-the-art unimodal and multimodal baselines on the hateful meme classification task. Empirical case studies were also conducted to demonstrate DisMultiHate's ability to disentangle target entities in memes, ultimately showcasing its explainability for multimodal hateful content classification.
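The record describes DisMultiHate only at this high level: target-entity information is disentangled from the fused text-and-image representation of a meme before the hateful/non-hateful decision is made. As a purely illustrative reading aid, the sketch below shows one way such a disentanglement-style multimodal classifier could be wired up. The class name, layer sizes, and the simple two-head split are assumptions for illustration and are not taken from the paper.

```python
# Illustrative sketch only: the record above does not spell out DisMultiHate's
# architecture, so every module, dimension, and head here is an assumption.
# It demonstrates the general idea of splitting a fused meme representation
# into a "target entity" part and a "context" part before classification.
import torch
import torch.nn as nn


class DisentangledMemeClassifier(nn.Module):
    def __init__(self, text_dim=768, image_dim=2048, latent_dim=256):
        super().__init__()
        # Project each modality into a shared latent space.
        self.text_proj = nn.Linear(text_dim, latent_dim)
        self.image_proj = nn.Linear(image_dim, latent_dim)
        # Two heads split the fused features: one is meant (via a separate
        # training objective, not shown) to capture the targeted entity,
        # the other captures the remaining content.
        self.target_head = nn.Linear(2 * latent_dim, latent_dim)
        self.context_head = nn.Linear(2 * latent_dim, latent_dim)
        # Binary hateful / non-hateful classifier over both parts.
        self.classifier = nn.Linear(2 * latent_dim, 2)

    def forward(self, text_feat, image_feat):
        fused = torch.cat(
            [torch.relu(self.text_proj(text_feat)),
             torch.relu(self.image_proj(image_feat))], dim=-1)
        target_repr = self.target_head(fused)    # intended: "who is targeted"
        context_repr = self.context_head(fused)  # intended: everything else
        logits = self.classifier(
            torch.cat([target_repr, context_repr], dim=-1))
        return logits, target_repr


if __name__ == "__main__":
    # Stand-in features; in practice these would come from pretrained
    # text and image encoders rather than random tensors.
    model = DisentangledMemeClassifier()
    text_feat = torch.randn(4, 768)    # batch of 4 meme captions
    image_feat = torch.randn(4, 2048)  # batch of 4 meme images
    logits, target_repr = model(text_feat, image_feat)
    print(logits.shape, target_repr.shape)  # (4, 2) and (4, 256)
```

In a real system the random stand-in features would be replaced by pretrained text and image encoders, and an auxiliary objective would be needed to actually push the target head toward target-entity information; none of those details are specified in this record.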
Published in: | arXiv.org, 2021-08 |
---|---|
Main Authors: | Cao, Rui; Fan, Ziqing; Roy Ka-Wei Lee; Wen-Haw, Chong; Jiang, Jing |
Format: | Article |
Language: | English |
Subjects: | Classification; Freedom of speech |
Online Access: | Get full text |
container_title | arXiv.org |
---|---|
creator | Cao, Rui; Fan, Ziqing; Roy Ka-Wei Lee; Wen-Haw, Chong; Jiang, Jing |
description | Hateful and offensive content detection has been extensively explored in a single modality such as text. However, such toxic information could also be communicated via multimodal content such as online memes. Therefore, detecting multimodal hateful content has recently garnered much attention in academic and industry research communities. This paper aims to contribute to this emerging research topic by proposing DisMultiHate, a novel framework for the classification of multimodal hateful content. Specifically, DisMultiHate is designed to disentangle target entities in multimodal memes to improve hateful content classification and explainability. We conduct extensive experiments on two publicly available hateful and offensive meme datasets. Our experimental results show that DisMultiHate outperforms state-of-the-art unimodal and multimodal baselines on the hateful meme classification task. Empirical case studies were also conducted to demonstrate DisMultiHate's ability to disentangle target entities in memes, ultimately showcasing its explainability for multimodal hateful content classification. |
doi_str_mv | 10.48550/arxiv.2108.06207 |
format | article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2021-08 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2561644605 |
source | Publicly Available Content Database |
subjects | Classification; Freedom of speech |
title | Disentangling Hate in Online Memes |