Decoding the black box: LIME-assisted understanding of Convolutional Neural Network (CNN) in classification of social media tweets
Published in: | Social network analysis and mining 2024-07, Vol.14 (1), p.133 |
Main Authors: | Mazhar, Kashif; Dwivedi, Pragya |
Format: | Article |
Language: | English |
container_issue | 1 |
container_start_page | 133 |
container_title | Social network analysis and mining |
container_volume | 14 |
creator | Mazhar, Kashif; Dwivedi, Pragya |
description | The rise of social media has brought both opportunities and challenges to the digital age, including the proliferation of online trolls that spread misinformation, hate, and disruption. An automated classification system is crucial to mitigating the impact of trolls. This paper presents an innovative approach for classifying social media tweets into troll and non-troll categories using machine learning (ML) methods and a convolutional neural network (CNN). We also employed explainable artificial intelligence (XAI) to address the inherent opacity and complexity of the CNN model, allowing us to provide a comprehensive explanation of the model’s behavior. We achieved an accuracy of 91.45% with the CNN2 model; the best result among the ML methods was achieved by the random forest classifier, at 86.57%. To build trust in the CNN model, we leveraged the local interpretable model-agnostic explanation (LIME) technique within XAI. The model correctly predicted troll tweets with a confidence of 93% and non-troll tweets with a confidence of 97%. This research lays the groundwork for better decision making in the ever-changing field of social media content analysis by bridging the gap between complex neural networks and insights that humans can understand. As social media continues to influence public opinion, the transparency and reliability that LIME brings to public discussion are crucial tools for ensuring the responsible and efficient use of online content. |
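The LIME idea the abstract refers to can be illustrated from first principles. The sketch below is a stripped-down, LIME-flavoured word ablation in plain Python, not the paper's pipeline: it masks words of a tweet at random, queries a black-box scorer on each perturbation, and ranks words by the proximity-weighted change in the score. `black_box` is a hypothetical stand-in for the paper's trained CNN.

```python
import math
import random

def black_box(texts):
    """Hypothetical stand-in for the paper's trained CNN: returns a
    'troll' score per text based on a few trigger words."""
    bad = {"stupid", "fake", "idiot"}
    return [0.1 + 0.4 * sum(w in bad for w in t.split()) for t in texts]

def explain(text, predict, n_samples=2000, seed=0):
    """LIME-flavoured local explanation: mask words at random, query the
    black box on each perturbation, then score every word by the
    proximity-weighted difference between predictions with the word
    kept versus dropped."""
    rng = random.Random(seed)
    words = text.split()
    masks = [[rng.randint(0, 1) for _ in words] for _ in range(n_samples)]
    masks[0] = [1] * len(words)  # always include the unperturbed tweet
    samples = [" ".join(w for w, k in zip(words, m) if k) for m in masks]
    preds = predict(samples)

    def weight(mask):  # perturbations close to the original count more
        return math.exp(-((1 - sum(mask) / len(mask)) ** 2) / 0.25)

    def wavg(rows):  # weighted average prediction over (pred, weight) pairs
        return sum(p * wt for p, wt in rows) / sum(wt for _, wt in rows)

    rows = [(m, p, weight(m)) for m, p in zip(masks, preds)]
    scores = []
    for i, word in enumerate(words):
        kept = [(p, wt) for m, p, wt in rows if m[i]]
        dropped = [(p, wt) for m, p, wt in rows if not m[i]]
        scores.append((word, wavg(kept) - wavg(dropped)))
    return sorted(scores, key=lambda s: -abs(s[1]))

explanation = explain("you are a stupid fake account", black_box)
```

In the published pipeline, `predict` would wrap the trained CNN's inference; here any callable mapping a list of strings to scores works. The real LIME algorithm fits a weighted linear surrogate model over these perturbation masks rather than taking weighted mean differences, but the intuition is the same: words whose removal moves the prediction most are the explanation.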
doi_str_mv | 10.1007/s13278-024-01297-8 |
format | article |
fullrecord | ProQuest record 3077575683; publisher: Springer Nature B.V, Heidelberg; published 2024-07-09; ISSN 1869-5450; EISSN 1869-5469; DOI 10.1007/s13278-024-01297-8; rights: The Author(s), under exclusive licence to Springer-Verlag GmbH Austria, part of Springer Nature 2024 |
fulltext | fulltext |
identifier | ISSN: 1869-5450 |
ispartof | Social network analysis and mining, 2024-07, Vol.14 (1), p.133 |
issn | 1869-5450 1869-5469 |
language | eng |
recordid | cdi_proquest_journals_3077575683 |
source | International Bibliography of the Social Sciences (IBSS); Social Science Premium Collection; Springer Nature |
subjects | Accuracy; Algorithms; Artificial intelligence; Artificial neural networks; Classification; Complexity; Content analysis; Criminal investigations; Decision making; Decision trees; Decoding; Digital media; Evidence; Explainable artificial intelligence; Hate speech; Language; Literature reviews; Machine learning; Misinformation; Network reliability; Neural networks; Performance evaluation; Profanity; Public opinion; Reliability; Sentiment analysis; Social media; Social networks; Transparency; User behavior |
title | Decoding the black box: LIME-assisted understanding of Convolutional Neural Network (CNN) in classification of social media tweets |
url | http://sfxeu10.hosted.exlibrisgroup.com/loughborough?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-21T14%3A19%3A24IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Decoding%20the%20black%20box:%20LIME-assisted%20understanding%20of%20Convolutional%20Neural%20Network%20(CNN)%20in%20classification%20of%20social%20media%20tweets&rft.jtitle=Social%20network%20analysis%20and%20mining&rft.au=Mazhar,%20Kashif&rft.date=2024-07-09&rft.volume=14&rft.issue=1&rft.spage=133&rft.pages=133-&rft.issn=1869-5450&rft.eissn=1869-5469&rft_id=info:doi/10.1007/s13278-024-01297-8&rft_dat=%3Cproquest%3E3077575683%3C/proquest%3E%3Cgrp_id%3Ecdi_FETCH-LOGICAL-c200t-8b7c1eb0bc5d0df170eaf816cc87a3a8c4c06e332e342d5a689049bbaa1a4fb83%3C/grp_id%3E%3Coa%3E%3C/oa%3E%3Curl%3E%3C/url%3E&rft_id=info:oai/&rft_pqid=3077575683&rft_id=info:pmid/&rfr_iscdi=true |