
Feature fusion based VGGFusionNet model to detect COVID-19 patients utilizing computed tomography scan images

COVID-19 is a life-threatening disease caused by the novel coronavirus, which has afflicted a large part of the human community worldwide. Recovery is far more likely when the disease is detected at an early stage. We propose an automated deep learning approach that detects COVID-19-positive patients from computed tomography (CT) scan images by following a four-phase paradigm: preprocess the CT scan images; remove noise with anisotropic diffusion; segment the preprocessed images; and train and test convolutional neural network (CNN) models for COVID-19 detection. The study evaluates well-known pre-trained models, including AlexNet, ResNet50, VGG16, and VGG19. 80% of the images are used to train the networks and the remaining 20% to test them. The experimental evaluation shows that the pre-trained VGG19 model achieves the best accuracy (98.06%). We used 4861 real-life CT images, comprising 3068 COVID-19-positive and 1793 negative images, acquired from a hospital in São Paulo, Brazil, and two other data sources. The proposed method achieves very high accuracy and can therefore assist professionals in detecting COVID-19 patients accurately.
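The denoising step named in the abstract is anisotropic diffusion. As a point of reference only, here is a minimal Perona–Malik style sketch in Python/NumPy; the iteration count, conduction constant `kappa`, and step size `gamma` are illustrative defaults, not values taken from the paper.

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=15, kappa=30.0, gamma=0.2):
    """Edge-preserving denoising of one 2-D CT slice via Perona-Malik diffusion."""
    out = img.astype(np.float64)
    for _ in range(n_iter):
        # Differences toward the four neighbours (wrap-around boundaries,
        # which is acceptable for a sketch).
        dn = np.roll(out, -1, axis=0) - out
        ds = np.roll(out, 1, axis=0) - out
        de = np.roll(out, -1, axis=1) - out
        dw = np.roll(out, 1, axis=1) - out
        # Conduction coefficients: diffusion is suppressed across strong edges.
        cn = np.exp(-(dn / kappa) ** 2)
        cs = np.exp(-(ds / kappa) ** 2)
        ce = np.exp(-(de / kappa) ** 2)
        cw = np.exp(-(dw / kappa) ** 2)
        out = out + gamma * (cn * dn + cs * ds + ce * de + cw * dw)
    return out
```

For the classification phase, the abstract reports that a pre-trained VGG19 performed best under an 80/20 train/test split. The sketch below shows what such a transfer-learning setup might look like in Keras; the classification head, optimizer, and training schedule are assumptions for illustration and are not the paper's configuration.

```python
import tensorflow as tf
from sklearn.model_selection import train_test_split

def build_vgg19_classifier(input_shape=(224, 224, 3)):
    """Binary COVID / non-COVID classifier on frozen ImageNet VGG19 features."""
    base = tf.keras.applications.VGG19(
        weights="imagenet", include_top=False, input_shape=input_shape)
    base.trainable = False  # use VGG19 as a fixed feature extractor
    x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
    x = tf.keras.layers.Dense(128, activation="relu")(x)
    out = tf.keras.layers.Dense(1, activation="sigmoid")(x)
    model = tf.keras.Model(base.input, out)
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

def train_and_evaluate(images, labels):
    """images: (N, 224, 224, 3) CT slices, labels: (N,) 0/1 — hypothetical inputs."""
    x = tf.keras.applications.vgg19.preprocess_input(images.astype("float32"))
    # 80/20 split, as described in the abstract.
    x_train, x_test, y_train, y_test = train_test_split(
        x, labels, test_size=0.2, stratify=labels, random_state=0)
    model = build_vgg19_classifier()
    model.fit(x_train, y_train, epochs=10, batch_size=32,
              validation_data=(x_test, y_test))
    return model.evaluate(x_test, y_test)
```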

Bibliographic Details
Published in: Scientific Reports, 2022-12, Vol. 12 (1), p. 21796, Article 21796
Main Authors: Uddin, Khandaker Mohammad Mohi; Dey, Samrat Kumar; Babu, Hafiz Md. Hasan; Mostafiz, Rafid; Uddin, Shahadat; Shoombuatong, Watshara; Moni, Mohammad Ali
Format: Article
Language: English
DOI: 10.1038/s41598-022-25539-x
ISSN: 2045-2322
Subjects:
639/705/1042
639/705/117
639/705/258
692/700/1421
Brazil
Computed tomography
Coronaviruses
COVID-19
COVID-19 - diagnostic imaging
Deep learning
Humanities and Social Sciences
Humans
multidisciplinary
Neural networks
Pandemics
Patients
Radionuclide Imaging
Science
Science (multidisciplinary)
Tomography
Tomography, X-Ray Computed