
Learning degradation priors for reliable no-reference image quality assessment

Bibliographic Details
Published in: Journal of Visual Communication and Image Representation, 2024-06, Vol. 102, p. 104189, Article 104189
Main Authors: Zhang, Hua, Shen, Zhuonan, Zheng, Bolun, Chen, Quan, Yu, Dingguo, Chen, Yiru, Yan, Chenggang
Format: Article
Language: English
Description
Summary: The goal of No-Reference Image Quality Assessment (NR-IQA) is to endow computers with a human-like ability to evaluate an image's quality without comparison to a reference. Current deep learning-based methods mainly measure quality in the spatial domain, relying heavily on semantic information and far less on the degradation of the image itself, and therefore struggle to accurately judge the quality of images in similar scenes. In this paper, we propose a novel degradation-priors learning architecture that addresses the NR-IQA task by leveraging learnable degradation priors alongside semantic features. A multi-task learning strategy is introduced to ensure that our model obtains accurate degradation priors for the NR-IQA task. Extensive experiments on public benchmarks demonstrate that our approach outperforms state-of-the-art solutions. Besides, we also collect an additional dataset, named ReD-1K, to illustrate the superiority of our approach in judging image quality in similar scenes.
•We propose a novel architecture involving degradation priors and semantic features for NR-IQA.
•A multi-task learning framework is proposed for NR-IQA, integrating semantic features and frequency-domain degradation features.
•We collect a new dataset, ReD-1K, which consists of 537 pairs of degraded and non-degraded real images.
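The record gives no implementation details, but the two-branch idea in the summary — spatial-domain semantic features combined with frequency-domain degradation features, fused into a quality score — can be sketched roughly in NumPy. Everything below is hypothetical illustration (function names, band count, the linear fusion head, and the intensity-statistics stand-in for a CNN semantic branch are all assumptions), not the authors' method:

```python
import numpy as np

def frequency_degradation_features(img: np.ndarray, bands: int = 4) -> np.ndarray:
    """Hypothetical degradation descriptor: normalized energy in radial
    frequency bands. Degradations such as blur suppress high-frequency energy."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h / 2, xx - w / 2)
    r_max = r.max()
    feats = np.array([
        spec[(r >= b * r_max / bands) & (r < (b + 1) * r_max / bands)].sum()
        for b in range(bands)
    ])
    return feats / feats.sum()

def semantic_features(img: np.ndarray) -> np.ndarray:
    """Stand-in for a learned semantic branch: simple intensity statistics."""
    return np.array([img.mean(), img.std()])

def quality_score(img, w_freq, w_sem, bias):
    """Fuse both branches with a (hypothetical) linear head."""
    return float(w_freq @ frequency_degradation_features(img)
                 + w_sem @ semantic_features(img) + bias)

def box_blur(img: np.ndarray, k: int = 5) -> np.ndarray:
    """Synthetic degradation: separable box blur."""
    kernel = np.ones(k) / k
    tmp = np.apply_along_axis(lambda row: np.convolve(row, kernel, mode="same"), 1, img)
    return np.apply_along_axis(lambda col: np.convolve(col, kernel, mode="same"), 0, tmp)

rng = np.random.default_rng(0)
sharp = rng.random((64, 64))
f_sharp = frequency_degradation_features(sharp)
f_blur = frequency_degradation_features(box_blur(sharp))
print(f_blur[-1] < f_sharp[-1])  # → True: blur suppresses the highest band
```

The point of the sketch is only that frequency-band statistics respond to degradation largely independently of scene content, which is why such priors can separate images of similar scenes that semantic features alone cannot.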
ISSN: 1047-3203, 1095-9076
DOI: 10.1016/j.jvcir.2024.104189