Self Supervised Super-Resolution PET Using A Generative Adversarial Network
Resolution limitations pose a continuing challenge for PET quantitation. While deep learning architectures based on convolutional neural networks (CNNs) have produced unprecedented accuracy at generating super-resolution (SR) PET images, most existing approaches are based on supervised learning. The latter requires training datasets with paired (low- and high-resolution) images, which are often unavailable for clinical applications. In this paper, we present a self-supervised SR (SSSR) technique for PET based on dual generative adversarial networks (GANs), which obviate the need for paired training data, ensuring wider applicability and adoptability. Our network receives as inputs a low-resolution PET image, a high-resolution anatomical MR image, and spatial information. An imperfect SR image generated by a separately trained auxiliary CNN serves as an additional input to the network. This CNN is trained in a supervised manner using paired simulation datasets. The loss function for training the dual GANs consists of two adversarial loss terms, a cycle consistency term, and a total variation penalty on the SR image. The method was validated on clinical data by comparing the SSSR results with those generated from a supervised approach and from deconvolution stabilized by a total variation penalty. Our results show that SSSR, while weaker than its supervised counterpart, noticeably outperforms deconvolution as indicated by the peak signal-to-noise ratio and structural similarity index measures.
Main Authors: | Song, Tzu-An; Roy Chowdhury, Samadrita; Yang, Fan; Dutta, Joyita |
---|---|
Format: | Conference Proceeding |
Language: | English |
Subjects: | Gallium nitride; Generative adversarial networks; Imaging; Signal resolution; Spatial resolution; Training |
cited_by | |
---|---|
cites | |
container_end_page | 3 |
container_issue | |
container_start_page | 1 |
container_title | 2019 IEEE Nuclear Science Symposium and Medical Imaging Conference (NSS/MIC) |
container_volume | |
creator | Song, Tzu-An; Roy Chowdhury, Samadrita; Yang, Fan; Dutta, Joyita |
description | Resolution limitations pose a continuing challenge for PET quantitation. While deep learning architectures based on convolutional neural networks (CNNs) have produced unprecedented accuracy at generating super-resolution (SR) PET images, most existing approaches are based on supervised learning. The latter requires training datasets with paired (low- and high-resolution) images, which are often unavailable for clinical applications. In this paper, we present a self-supervised SR (SSSR) technique for PET based on dual generative adversarial networks (GANs), which obviate the need for paired training data, ensuring wider applicability and adoptability. Our network receives as inputs a low-resolution PET image, a high-resolution anatomical MR image, and spatial information. An imperfect SR image generated by a separately trained auxiliary CNN serves as an additional input to the network. This CNN is trained in a supervised manner using paired simulation datasets. The loss function for training the dual GANs consists of two adversarial loss terms, a cycle consistency term, and a total variation penalty on the SR image. The method was validated on clinical data by comparing the SSSR results with those generated from a supervised approach and from deconvolution stabilized by a total variation penalty. Our results show that SSSR, while weaker than its supervised counterpart, noticeably outperforms deconvolution as indicated by the peak signal-to-noise ratio and structural similarity index measures. |
doi_str_mv | 10.1109/NSS/MIC42101.2019.9059947 |
format | conference_proceeding |
fulltext | fulltext_linktorsrc |
identifier | EISSN: 2577-0829 |
ispartof | 2019 IEEE Nuclear Science Symposium and Medical Imaging Conference (NSS/MIC), 2019, p.1-3 |
issn | 2577-0829 |
language | eng |
recordid | cdi_ieee_primary_9059947 |
source | IEEE Xplore All Conference Series |
subjects | Gallium nitride; Generative adversarial networks; Imaging; Signal resolution; Spatial resolution; Training |
title | Self Supervised Super-Resolution PET Using A Generative Adversarial Network |
url | http://sfxeu10.hosted.exlibrisgroup.com/loughborough?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-04T11%3A09%3A54IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-ieee_CHZPO&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=proceeding&rft.atitle=Self%20Supervised%20Super-Resolution%20PET%20Using%20A%20Generative%20Adversarial%20Network&rft.btitle=2019%20IEEE%20Nuclear%20Science%20Symposium%20and%20Medical%20Imaging%20Conference%20(NSS/MIC)&rft.au=Song,%20Tzu-An&rft.date=2019-10&rft.spage=1&rft.epage=3&rft.pages=1-3&rft.eissn=2577-0829&rft_id=info:doi/10.1109/NSS/MIC42101.2019.9059947&rft.eisbn=9781728141640&rft.eisbn_list=1728141648&rft_dat=%3Cieee_CHZPO%3E9059947%3C/ieee_CHZPO%3E%3Cgrp_id%3Ecdi_FETCH-LOGICAL-i203t-e847ce4857bb6ea74ef4e4a6e023f4dc26818584a15aaf9e792fe2cd726794b33%3C/grp_id%3E%3Coa%3E%3C/oa%3E%3Curl%3E%3C/url%3E&rft_id=info:oai/&rft_id=info:pmid/&rft_ieee_id=9059947&rfr_iscdi=true |
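The abstract above names three quantitative ingredients of the method: a total variation (TV) penalty on the SR image, a CycleGAN-style cycle consistency term, and evaluation by peak signal-to-noise ratio (PSNR). A minimal NumPy sketch of these quantities follows; the function names, the anisotropic L1 form of the TV penalty, and the L1 cycle term are illustrative assumptions rather than the paper's exact definitions, and the two adversarial loss terms (which require learned generators and discriminators) are omitted.

```python
import numpy as np

def total_variation(img):
    """Anisotropic TV penalty: sum of absolute finite differences
    along both image axes (a common regularizer for SR/deconvolution)."""
    dy = np.abs(np.diff(img, axis=0)).sum()
    dx = np.abs(np.diff(img, axis=1)).sum()
    return dy + dx

def cycle_consistency(x, x_reconstructed):
    """L1 cycle term, mean |F(G(x)) - x|, as in CycleGAN-style training:
    mapping an image through both generators should return it unchanged."""
    return np.mean(np.abs(x - x_reconstructed))

def psnr(ref, test, data_range):
    """Peak signal-to-noise ratio (dB) of `test` against reference `ref`,
    given the dynamic range of the reference image."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)
```

A flat image has zero TV penalty, while the checkerboard `[[0, 1], [1, 0]]` scores 4; in the paper's setting, higher PSNR of an SR output against a reference indicates better reconstruction, which is the basis for the SSSR-vs-deconvolution comparison.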