Extreme-Quality Computational Imaging via Degradation Framework
To meet the space limitation of optical elements, free-form surfaces or high-order aspherical lenses are adopted in mobile cameras to compress volume. However, the application of free-form surfaces also introduces the problem of image quality mutation. Existing model-based deconvolution methods are...
Main Authors: | Chen, Shiqi; Feng, Huajun; Gao, Keming; Xu, Zhihai; Chen, Yueting |
---|---|
Format: | Conference Proceeding |
Language: | English |
Subjects: | Cameras; Computational photography; Datasets and evaluation; Deconvolution; Degradation; Image and video synthesis; Image quality; Low-level and physics-based vision; Network architecture; Optical imaging; Visualization |
Online Access: | https://ieeexplore.ieee.org/document/9711061 |
cited_by | |
---|---|
cites | |
container_end_page | 2621 |
container_issue | |
container_start_page | 2612 |
container_title | 2021 IEEE/CVF International Conference on Computer Vision (ICCV) |
container_volume | |
creator | Chen, Shiqi; Feng, Huajun; Gao, Keming; Xu, Zhihai; Chen, Yueting |
description | To meet the space constraints of mobile cameras, free-form surfaces or high-order aspherical lenses are adopted to compress the optical volume. However, free-form surfaces also introduce abrupt spatial variations in image quality. Existing model-based deconvolution methods handle this degradation poorly because it varies widely across the field of view, and deep learning techniques for low-level and physics-based vision suffer from a lack of accurate training data. To address this issue, we develop a degradation framework that estimates the spatially variant point spread functions (PSFs) of mobile cameras. Given extreme-quality digital images as input, the framework generates degraded images that share a common domain with real-world photographs. Trained on these synthetic image pairs, our Field-Of-View shared kernel prediction network (FOV-KPN) performs spatially adaptive reconstruction on real degraded photos. Extensive experiments demonstrate that the proposed approach achieves extreme-quality computational imaging and outperforms state-of-the-art methods. Furthermore, the technique can be integrated into existing postprocessing systems, yielding significantly improved visual quality. |
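The degradation framework summarized in the abstract synthesizes training pairs by degrading sharp images with spatially variant PSFs. The sketch below is an illustrative toy, not the authors' implementation: the per-tile Gaussian PSFs, the radial blur-growth model, and the noise level are hypothetical stand-ins for the camera-specific PSFs the framework would actually estimate.

```python
import numpy as np

def gaussian_psf(size, sigma):
    """Isotropic Gaussian kernel: a toy stand-in for a measured PSF."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def filter_same(img, k):
    """2-D filtering with 'same' output size and edge padding (numpy only)."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), mode="edge")
    out = np.zeros_like(img)
    for dy in range(kh):
        for dx in range(kw):
            out += k[dy, dx] * padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def degrade(image, grid=4, psf_size=9, noise_std=0.01, seed=None):
    """Blur each tile of a grayscale image in [0, 1] with its own PSF, add noise.

    The PSF widens toward the periphery, mimicking field-of-view-dependent
    degradation of free-form mobile lenses (a hypothetical model, not the
    paper's estimated PSFs).
    """
    rng = np.random.default_rng(seed)
    h, w = image.shape
    out = np.empty_like(image)
    cy = cx = (grid - 1) / 2.0
    for i in range(grid):
        for j in range(grid):
            # Blur grows with normalized distance from the image center.
            sigma = 0.5 + 1.5 * np.hypot(i - cy, j - cx) / np.hypot(cy, cx)
            blurred = filter_same(image, gaussian_psf(psf_size, sigma))
            ys = slice(i * h // grid, (i + 1) * h // grid)
            xs = slice(j * w // grid, (j + 1) * w // grid)
            out[ys, xs] = blurred[ys, xs]
    return np.clip(out + rng.normal(0.0, noise_std, image.shape), 0.0, 1.0)
```

Pairing `image` with `degrade(image)` yields the kind of sharp/degraded training pair on which a reconstruction network such as FOV-KPN would be trained.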
doi_str_mv | 10.1109/ICCV48922.2021.00263 |
format | conference_proceeding |
fullrecord | Publisher: IEEE. Conference: 2021 IEEE/CVF International Conference on Computer Vision (ICCV), October 2021. Pages: 2612-2621 (10 pages). EISSN: 2380-7504. EISBN: 9781665428125; 1665428120. CODEN: IEEPAD. DOI: 10.1109/ICCV48922.2021.00263. Full text: https://ieeexplore.ieee.org/document/9711061 |
fulltext | fulltext_linktorsrc |
identifier | EISSN: 2380-7504 |
ispartof | 2021 IEEE/CVF International Conference on Computer Vision (ICCV), 2021, p.2612-2621 |
issn | 2380-7504 |
language | eng |
recordid | cdi_ieee_primary_9711061 |
source | IEEE Xplore All Conference Series |
subjects | Cameras; Computational photography; Datasets and evaluation; Deconvolution; Degradation; Image and video synthesis; Image quality; Low-level and physics-based vision; Network architecture; Optical imaging; Visualization |
title | Extreme-Quality Computational Imaging via Degradation Framework |
url | https://ieeexplore.ieee.org/document/9711061 |