
Viewport-Based Omnidirectional Video Quality Assessment: Database, Modeling and Inference

Bibliographic Details
Published in: IEEE Transactions on Circuits and Systems for Video Technology, 2022-01, Vol. 32 (1), pp. 120-134
Main Authors: Meng, Yu; Ma, Zhan
Format: Article
Language: English
Description
Summary: This article first provides a new Viewport-based OmniDirectional Video Quality Assessment (VOD-VQA) database, which includes eighteen salient viewport videos extracted from the original OmniDirectional Videos (ODVs) and 774 corresponding impaired samples generated by compressing the raw viewports using a variety of combinations of their Spatial (frame size $s$), Temporal (frame rate $t$), and Amplitude (quantization stepsize $q$) Resolutions (STAR). A total of 160 subjects assessed the processed viewport videos rendered on a head-mounted display (HMD) after their fixations had stabilized. We then formulated an analytical model that connects the perceptual quality of a compressed viewport video with its STAR variables, denoted as the $Q^{\mathsf{VP}}_{\mathsf{STAR}}$ index. All four model parameters can be predicted using linearly weighted content features, making the proposed metric generalizable to various contents. This model correlates well with the mean opinion scores (MOSs) collected for the processed viewport videos, with both the Pearson Correlation Coefficient (PCC) and Spearman's Rank Correlation Coefficient (SRCC) at 0.95 in an independent validation test, yielding state-of-the-art performance in comparison to popular objective metrics (e.g., Weighted-to-Spherically-uniform Peak Signal-to-Noise Ratio (WS-PSNR), WMS-SSIM, Video Multimethod Assessment Fusion (VMAF), Feature SIMilarity Index (FSIM), and Visual Saliency-based IQA Index (VSI)). Furthermore, this viewport-based quality index $Q^{\mathsf{VP}}_{\mathsf{STAR}}$ is extended to infer the overall ODV quality, a.k.a. $Q^{\mathsf{ODV}}_{\mathsf{STAR}}$, by linearly weighting the saliency-aggregated qualities of salient viewports and the quality of the quick-scanning (or non-salient) area. Experiments have shown that the inferred $Q^{\mathsf{ODV}}_{\mathsf{STAR}}$ can accurately predict the MOS with performance competitive with the state-of-the-art algorithm on another four independent, third-party ODV datasets.
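The viewport model and the ODV-level aggregation described in the summary can be sketched, under assumptions about notation (the content features $f_j$, linear weights $\beta_{k,j}$, saliency weights $w_i$, non-salient-area quality $Q_{\mathsf{ns}}$, and mixing coefficient $\alpha$ are illustrative symbols introduced here, not taken from the paper), as

$$Q^{\mathsf{VP}}_{\mathsf{STAR}}(s, t, q;\, \theta_1,\dots,\theta_4), \qquad \theta_k \approx \sum_{j} \beta_{k,j}\, f_j \quad (k = 1,\dots,4),$$

$$Q^{\mathsf{ODV}}_{\mathsf{STAR}} \approx \alpha \sum_{i=1}^{N} w_i\, Q^{\mathsf{VP}}_{\mathsf{STAR}}(s_i, t_i, q_i) + (1-\alpha)\, Q_{\mathsf{ns}}, \qquad \sum_{i=1}^{N} w_i = 1,$$

i.e., the four parameters of the viewport index are predicted as linear combinations of content features, and the overall ODV score is a linear weighting of the saliency-aggregated qualities of the $N$ salient viewports and the quick-scanning-area quality. The exact functional form and weight values are given in the full text.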
ISSN: 1051-8215, 1558-2205
DOI: 10.1109/TCSVT.2021.3057368