Multistage supervised contrastive learning for hybrid-degraded image restoration
Published in: | Signal, image and video processing, 2023-03, Vol.17 (2), p.573-581 |
---|---|
Main Authors: | Fu, Bo; Dong, Yuhan; Fu, Shilin; Wu, Yuechu; Ren, Yonggong; Thanh, Dang N. H. |
Format: | Article |
Language: | English |
Subjects: | Computer architecture; Computer Imaging; Computer Science; Image compression; Image degradation; Image Processing and Computer Vision; Image quality; Image restoration; Learning; Multimedia Information Systems; Pattern Recognition and Graphics; Raindrops; Signal, Image and Speech Processing; Vision; Visual effects |
container_end_page | 581 |
container_issue | 2 |
container_start_page | 573 |
container_title | Signal, image and video processing |
container_volume | 17 |
creator | Fu, Bo; Dong, Yuhan; Fu, Shilin; Wu, Yuechu; Ren, Yonggong; Thanh, Dang N. H. |
description | Natural image degradation is frequently unavoidable for various reasons, including noise, blur, compression artifacts, haze, and raindrops. Most previous works have made significant progress; however, they consider only one type of degradation and overlook hybrid degradation factors, which are fairly common in natural images. To tackle this challenge, we propose a multistage network architecture that gradually learns and restores the hybrid degradation model of the image. The model comprises three stages, and each pair of adjacent stages is combined to exchange information between the early and late stages. Meanwhile, we employ a double-pooling channel attention block that combines maximum and average pooling; it can infer more intricate channel attention and enhance the network’s representation capability. Finally, we introduce contrastive learning during model training. Our method outperforms comparable methods in terms of qualitative scores and visual effects and restores more detailed textures, improving image quality. |
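The double-pooling channel attention block described in the abstract (global maximum plus average pooling feeding a channel gate) can be sketched as follows. This is a minimal PyTorch illustration assuming a shared bottleneck MLP, a reduction ratio of 16, and additive fusion of the two pooled descriptors; the paper's exact layer layout may differ.

```python
import torch
import torch.nn as nn


class DoublePoolingChannelAttention(nn.Module):
    """Channel attention combining global max and average pooling.

    Illustrative sketch only: the shared MLP, reduction ratio, and
    additive fusion are assumptions, not the authors' exact design.
    """

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.avg_pool = nn.AdaptiveAvgPool2d(1)  # B x C x 1 x 1 average descriptor
        self.max_pool = nn.AdaptiveMaxPool2d(1)  # B x C x 1 x 1 maximum descriptor
        self.mlp = nn.Sequential(                # shared bottleneck MLP (1x1 convs)
            nn.Conv2d(channels, channels // reduction, kernel_size=1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1, bias=False),
        )
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Fuse the two channel descriptors by summation, then gate the input features.
        attn = self.sigmoid(self.mlp(self.avg_pool(x)) + self.mlp(self.max_pool(x)))
        return x * attn


if __name__ == "__main__":
    block = DoublePoolingChannelAttention(channels=64)
    feats = torch.randn(2, 64, 32, 32)  # dummy feature map
    print(block(feats).shape)           # torch.Size([2, 64, 32, 32])
```

The abstract also states that contrastive learning is introduced during training. One common formulation in restoration networks, shown here only as an assumption about how such a term might look rather than the paper's own loss, treats the restored image as the anchor, the clean ground truth as the positive, and the degraded input as the negative, comparing them in a fixed feature space (`feat_fn` is a hypothetical frozen extractor, e.g. a pretrained VGG slice):

```python
import torch.nn.functional as F


def contrastive_regularization(feat_fn, restored, clean, degraded, eps=1e-7):
    # Pull the restored image toward the clean target and push it away from
    # the degraded input, measured by L1 distance in feat_fn's feature space.
    anchor, positive, negative = feat_fn(restored), feat_fn(clean), feat_fn(degraded)
    d_pos = F.l1_loss(anchor, positive)
    d_neg = F.l1_loss(anchor, negative)
    return d_pos / (d_neg + eps)


# Usage sketch (names hypothetical):
# total_loss = pixel_loss + 0.1 * contrastive_regularization(vgg_feats, out, gt, inp)
```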
doi_str_mv | 10.1007/s11760-022-02262-8 |
format | article |
publisher | London: Springer London |
rights | The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature 2022 |
orcidid | https://orcid.org/0000-0003-2025-8319 |
fulltext | fulltext |
identifier | ISSN: 1863-1703 |
ispartof | Signal, image and video processing, 2023-03, Vol.17 (2), p.573-581 |
issn | 1863-1703; 1863-1711 |
language | eng |
recordid | cdi_proquest_journals_2777849381 |
source | Springer Nature |
subjects | Computer architecture; Computer Imaging; Computer Science; Image compression; Image degradation; Image Processing and Computer Vision; Image quality; Image restoration; Learning; Multimedia Information Systems; Original Paper; Pattern Recognition and Graphics; Raindrops; Signal, Image and Speech Processing; Vision; Visual effects |
title | Multistage supervised contrastive learning for hybrid-degraded image restoration |
url | http://sfxeu10.hosted.exlibrisgroup.com/loughborough?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-29T04%3A08%3A03IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Multistage%20supervised%20contrastive%20learning%20for%20hybrid-degraded%20image%20restoration&rft.jtitle=Signal,%20image%20and%20video%20processing&rft.au=Fu,%20Bo&rft.date=2023-03-01&rft.volume=17&rft.issue=2&rft.spage=573&rft.epage=581&rft.pages=573-581&rft.issn=1863-1703&rft.eissn=1863-1711&rft_id=info:doi/10.1007/s11760-022-02262-8&rft_dat=%3Cproquest_cross%3E2777849381%3C/proquest_cross%3E%3Cgrp_id%3Ecdi_FETCH-LOGICAL-c319t-dfc38e8d5c9d47740375504c6a11c3af9ba151d3c03e8c39302e3d0e4a679f883%3C/grp_id%3E%3Coa%3E%3C/oa%3E%3Curl%3E%3C/url%3E&rft_id=info:oai/&rft_pqid=2777849381&rft_id=info:pmid/&rfr_iscdi=true |