
Deep Learning Method for Automated Classification of Anteroposterior and Posteroanterior Chest Radiographs

Ensuring correct radiograph view labeling is important for machine learning algorithm development and quality control of studies obtained from multiple facilities. The purpose of this study was to develop and test the performance of a deep convolutional neural network (DCNN) for the automated classification of frontal chest radiographs (CXRs) into anteroposterior (AP) or posteroanterior (PA) views. We obtained 112,120 CXRs from the NIH ChestX-ray14 database, a publicly available CXR database of studies performed in adult (106,179 (95%)) and pediatric (5941 (5%)) patients, consisting of 44,810 (40%) AP and 67,310 (60%) PA views. CXRs were used to train, validate, and test a ResNet-18 DCNN for classification of radiographs into AP and PA views. A second DCNN was developed in the same manner using only the pediatric CXRs (2885 (49%) AP and 3056 (51%) PA). Receiver operating characteristic (ROC) curves with area under the curve (AUC) and standard diagnostic measures were used to evaluate each DCNN's performance on the test dataset. The DCNNs trained on the entire CXR dataset and on the pediatric CXR dataset had AUCs of 1.0 and 0.997, respectively, and accuracies of 99.6% and 98%, respectively, for distinguishing between AP and PA CXRs. Sensitivity and specificity were 99.6% and 99.5%, respectively, for the DCNN trained on the entire dataset, and 98% for both sensitivity and specificity for the DCNN trained on the pediatric dataset. The observed difference in performance between the two algorithms was not statistically significant (p = 0.17). Our DCNNs classify the AP/PA orientation of frontal CXRs with high accuracy, with only a slight reduction in performance when the training dataset was reduced by 95%. Rapid classification of CXRs by a DCNN can facilitate annotation of large image datasets for machine learning and quality assurance purposes.
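
The paper's training code is not part of this record, but the pipeline the abstract describes, fine-tuning an ImageNet-pretrained ResNet-18 as a binary AP/PA classifier, can be sketched with the torchvision model zoo. A minimal sketch follows; the folder layout, batch size, learning rate, and epoch count are assumptions for illustration, not the authors' published settings.

```python
# Minimal sketch of the setup the abstract describes: fine-tuning an
# ImageNet-pretrained ResNet-18 as a binary AP/PA classifier.
# Paths and hyperparameters are hypothetical, not the paper's.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Grayscale radiographs are replicated to 3 channels to match the
# pretrained ImageNet input convention.
transform = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical layout: cxr/train/AP/*.png and cxr/train/PA/*.png
train_set = datasets.ImageFolder("cxr/train", transform=transform)
train_loader = DataLoader(train_set, batch_size=64, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)  # two-class AP/PA head

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(5):  # epoch count is an assumption
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```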

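The evaluation the abstract reports (ROC AUC, sensitivity, specificity, accuracy on a held-out test set) can likewise be sketched with scikit-learn, assuming the `model`, `device`, and `transform` from the sketch above and a hypothetical `cxr/test` folder. With `ImageFolder`'s alphabetical class ordering, AP maps to index 0 and PA to index 1, so the score below is the predicted probability of PA.

```python
# Sketch of the reported evaluation: ROC AUC plus sensitivity and
# specificity at a 0.5 threshold. Assumes model/device/transform from
# the training sketch; the test folder path is hypothetical.
import numpy as np
import torch
from sklearn.metrics import confusion_matrix, roc_auc_score
from torch.utils.data import DataLoader
from torchvision import datasets

test_set = datasets.ImageFolder("cxr/test", transform=transform)
test_loader = DataLoader(test_set, batch_size=64, shuffle=False)

model.eval()
scores, labels = [], []
with torch.no_grad():
    for images, targets in test_loader:
        probs = torch.softmax(model(images.to(device)), dim=1)
        scores.extend(probs[:, 1].cpu().numpy())  # P(PA), class index 1
        labels.extend(targets.numpy())

scores, labels = np.array(scores), np.array(labels)
auc = roc_auc_score(labels, scores)

preds = (scores >= 0.5).astype(int)
tn, fp, fn, tp = confusion_matrix(labels, preds).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
accuracy = (tp + tn) / (tp + tn + fp + fn)
print(f"AUC={auc:.3f} sens={sensitivity:.3f} "
      f"spec={specificity:.3f} acc={accuracy:.3f}")
```
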
Bibliographic Details
Published in: Journal of Digital Imaging, 2019-12, Vol. 32 (6), p. 925-930
Main Authors: Kim, Tae Kyung; Yi, Paul H.; Wei, Jinchi; Shin, Ji Won; Hager, Gregory; Hui, Ferdinand K.; Sair, Haris I.; Lin, Cheng Ting
Format: Article
Language: English
Subjects: Adult; Algorithms; Annotations; Artificial intelligence; Artificial neural networks; Automation; Chest; Child; Classification; Databases, Factual; Datasets; Deep Learning; Diagnostic systems; Humans; Image classification; Image quality; Imaging; Learning algorithms; Machine learning; Medicine; Medicine & Public Health; Neural networks; Pediatrics; Quality assurance; Quality control; Radiographic Image Interpretation, Computer-Assisted - methods; Radiographs; Radiography; Radiography, Thoracic - methods; Radiology; Reproducibility of Results; Retrospective Studies; Sensitivity; Sensitivity and Specificity; Statistical analysis
Publisher: Springer International Publishing, Cham
ISSN: 0897-1889
EISSN: 1618-727X
DOI: 10.1007/s10278-019-00208-0
PMID: 30972585
Online Access: https://doi.org/10.1007/s10278-019-00208-0