FakeStack: Hierarchical Tri-BERT-CNN-LSTM stacked model for effective fake news detection
Published in: | PloS one 2023-12, Vol.18 (12), p.e0294701-e0294701 |
---|---|
Main Authors: | Keya, Ashfia Jannat; Shajeeb, Hasibul Hossain; Rahman, Md. Saifur; Mridha, M. F |
Format: | Article |
Language: | English |
Subjects: | |
---|---|
container_end_page | e0294701 |
container_issue | 12 |
container_start_page | e0294701 |
container_title | PloS one |
container_volume | 18 |
creator | Keya, Ashfia Jannat ; Shajeeb, Hasibul Hossain ; Rahman, Md. Saifur ; Mridha, M. F |
description | False news articles pose a serious challenge in today's information landscape, impacting public opinion and decision-making. Efforts to counter this issue have led to research in deep learning and machine learning methods. However, a gap exists in effectively using contextual cues and skip connections within models, limiting the development of comprehensive detection systems that harness contextual information and vital data propagation. Thus, we propose FakeStack, a deep learning model that identifies fake news accurately. The model combines the power of pre-trained Bidirectional Encoder Representations from Transformers (BERT) embeddings with a deep Convolutional Neural Network (CNN) containing a skip convolution block and a Long Short-Term Memory (LSTM) network. The model was trained and tested on an English fake news dataset, and various performance metrics were employed to assess its effectiveness. The results showcase the exceptional performance of FakeStack, which achieves an accuracy of 99.74%, precision of 99.67%, recall of 99.80%, and F1-score of 99.74%. We extended the evaluation to two additional datasets: on the LIAR dataset the model reached an accuracy of 75.58%, while on the WELFake dataset it achieved an impressive 98.25%. Comparative analysis with baseline models, including CNN, BERT-CNN, and BERT-LSTM, further highlights the superiority of FakeStack, which surpasses all models evaluated. This study underscores the potential of advanced techniques in combating the spread of false news and ensuring the dissemination of reliable information. |
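The abstract describes a stacked architecture: pre-trained BERT embeddings fed through a CNN with a skip (residual) convolution block, then an LSTM, then a classification head. As a rough illustration only, the PyTorch sketch below wires those stages together over precomputed embeddings. The class names, layer counts, and dimensions (`FakeStackSketch`, two blocks, hidden size 128) are illustrative assumptions, not the authors' published configuration.

```python
import torch
import torch.nn as nn

class SkipConvBlock(nn.Module):
    """1-D convolution with a residual (skip) connection over the input."""
    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        self.conv = nn.Conv1d(channels, channels, kernel_size,
                              padding=kernel_size // 2)  # keep seq length
        self.act = nn.ReLU()

    def forward(self, x):                 # x: (batch, channels, seq_len)
        return self.act(self.conv(x) + x)  # skip connection around the conv

class FakeStackSketch(nn.Module):
    """BERT-embeddings -> CNN with skip blocks -> LSTM -> binary classifier."""
    def __init__(self, embed_dim: int = 768, hidden: int = 128):
        super().__init__()
        self.proj = nn.Conv1d(embed_dim, hidden, kernel_size=1)  # channel projection
        self.blocks = nn.Sequential(SkipConvBlock(hidden), SkipConvBlock(hidden))
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)  # real vs. fake logits

    def forward(self, emb):               # emb: (batch, seq_len, embed_dim)
        x = self.proj(emb.transpose(1, 2))   # -> (batch, hidden, seq_len)
        x = self.blocks(x)
        _, (h, _) = self.lstm(x.transpose(1, 2))
        return self.head(h[-1])           # final hidden state -> (batch, 2)

# Stand-in for BERT output: batch of 4 texts, 32 tokens, 768-dim embeddings.
logits = FakeStackSketch()(torch.randn(4, 32, 768))
print(tuple(logits.shape))
```

In a real pipeline the random tensor would be replaced by the `last_hidden_state` of a pre-trained BERT encoder; the skip connection is what lets gradients and low-level token features propagate past the convolutional stack, which is the gap the abstract highlights.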
doi_str_mv | 10.1371/journal.pone.0294701 |
format | article |
publisher | Public Library of Science, San Francisco |
contributor | Khowaja, Sunder Ali |
rights | 2023 Keya et al. Open access under the Creative Commons Attribution License (CC BY 4.0). |
orcidid | 0000-0002-8553-6546 ; 0009-0001-6547-9126 ; 0000-0001-5738-1631 |
fulltext | fulltext |
identifier | ISSN: 1932-6203 |
ispartof | PloS one, 2023-12, Vol.18 (12), p.e0294701-e0294701 |
issn | 1932-6203 |
language | eng |
recordid | cdi_plos_journals_3072927504 |
source | Publicly Available Content Database (Proquest) (PQ_SDU_P3); PubMed Central Free; Coronavirus Research Database |
subjects | Accuracy ; Analysis ; Artificial neural networks ; Automation ; Classification ; Comparative analysis ; Computational linguistics ; Datasets ; Decision making ; Deep learning ; Detectors ; Disinformation ; False information ; Identification ; Language ; Language processing ; Long short-term memory ; Machine learning ; Natural language ; Natural language interfaces ; Neural networks ; News ; Performance measurement ; Public opinion ; Social media ; Social networks |
title | FakeStack: Hierarchical Tri-BERT-CNN-LSTM stacked model for effective fake news detection |