NIQSV+: A No-Reference Synthesized View Quality Assessment Metric
Benefiting from multi-view video plus depth and depth-image-based-rendering technologies, only limited views of a real 3-D scene need to be captured, compressed, and transmitted. However, the quality assessment of synthesized views is very challenging, since some new types of distortions, which are...
Published in: | IEEE transactions on image processing 2018-04, Vol.27 (4), p.1652-1664 |
---|---|
Main Authors: | Shishun Tian, Lu Zhang, Luce Morin, Olivier Deforges |
Format: | Article |
Language: | English |
Subjects: | DIBR; MVD; view synthesis; no-reference; Quality assessment; Distortion measurement; Image coding |
creator | Shishun Tian; Lu Zhang; Morin, Luce; Deforges, Olivier |
description | Benefiting from multi-view video plus depth and depth-image-based-rendering technologies, only limited views of a real 3-D scene need to be captured, compressed, and transmitted. However, the quality assessment of synthesized views is very challenging, since some new types of distortions, which are inherently different from the texture coding errors, are inevitably produced by view synthesis and depth map compression, and the corresponding original views (reference views) are usually not available. Thus the full-reference quality metrics cannot be used for synthesized views. In this paper, we propose a novel no-reference image quality assessment method for 3-D synthesized views (called NIQSV+). This blind metric can evaluate the quality of synthesized views by measuring the typical synthesis distortions: blurry regions, black holes, and stretching, with access to neither the reference image nor the depth map. To evaluate the performance of the proposed method, we compare it with four full-reference 3-D (synthesized view dedicated) metrics, five full-reference 2-D metrics, and three no-reference 2-D metrics. In terms of their correlations with subjective scores, our experimental results show that the proposed no-reference metric approaches the best of the state-of-the-art full reference and no-reference 3-D metrics; and outperforms the widely used no-reference and full-reference 2-D metrics significantly. In terms of its approximation of human ranking, the proposed metric achieves the best performance in the experimental test. |
doi_str_mv | 10.1109/TIP.2017.2781420 |
format | article |
publisher | United States: IEEE |
eissn | 1941-0042 |
pmid | 29324418 |
coden | IIPRE4 |
rights | Distributed under a Creative Commons Attribution 4.0 International License |
identifier | ISSN: 1057-7149 |
ispartof | IEEE transactions on image processing, 2018-04, Vol.27 (4), p.1652-1664 |
issn | 1057-7149; 1941-0042 |
language | eng |
source | IEEE Electronic Library (IEL) Journals |
subjects | Computer Science; DIBR; Distortion; Distortion measurement; Image coding; MVD; no-reference; Quality assessment; Signal and Image Processing; Three-dimensional displays; Two dimensional displays; view synthesis |
title | NIQSV+: A No-Reference Synthesized View Quality Assessment Metric |
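The abstract in this record names three synthesis distortions that NIQSV+ measures without a reference image: blurry regions, black holes, and stretching. As a rough, hypothetical illustration of the simplest of these cues (black holes left by disocclusions), one could count near-black pixels in the synthesized view; note this toy ratio and its threshold are illustrative assumptions, not the paper's actual NIQSV+ formulation.

```python
import numpy as np

def black_hole_ratio(img, threshold=5):
    """Fraction of near-black pixels in an image.

    A toy proxy for the 'black hole' distortion cue mentioned in the
    abstract; the threshold and the ratio itself are illustrative
    choices, not the NIQSV+ algorithm.
    """
    gray = np.asarray(img, dtype=np.float64)
    if gray.ndim == 3:          # collapse RGB channels to a mean luminance
        gray = gray.mean(axis=2)
    return float((gray <= threshold).mean())

# A synthetic 4x4 view with 4 disoccluded (black) pixels out of 16.
view = np.full((4, 4), 128.0)
view[:2, :2] = 0.0
print(black_hole_ratio(view))  # 0.25
```

A real no-reference metric would combine several such cues and calibrate them against subjective scores; this snippet only shows the kind of pixel-level evidence such a metric can draw on.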