Blind Quality Enhancement for Compressed Video
Published in: IEEE Transactions on Multimedia, 2024, Vol. 26, pp. 5782-5794
Main Authors:
Format: Article
Language: English
Summary: Deep convolutional neural networks (CNNs) have achieved impressive success in enhancing the quality of compressed images and videos. These approaches mostly obtain the noise level in advance and train multiple architecture-identical models, each dedicated to one known noise level. This largely hinders practical application, where the noise level is unknown and resources are limited. To perform quality enhancement practically, we propose a novel blind quality enhancement framework for compressed video (BQEV), which uses a single network to enhance videos compressed at various, unknown quality parameters (QPs). Because videos compressed at different QPs exhibit both feature similarity and feature difference, BQEV exploits this prior to efficiently handle enhancement at blind QPs. The framework consists of a progressive feature extraction subnet and a QP-adaptive feature fusion subnet: the former uses temporal information and feature similarity to progressively extract valuable features, while the latter employs the feature difference to perform reasonable QP-adaptive feature fusion and quality enhancement. In the progressive feature extraction subnet, we first design a quality rank module that assigns more attention to higher-quality frames for efficient use of temporal information, then propose a progressive extraction module to further extract features across different QPs. In the QP-adaptive feature fusion subnet, we develop a quality estimation module to guide the fusion of these extracted progressive features, yielding stable and promising enhancement results over multiple QPs. Experimental results demonstrate that BQEV achieves a 0.31-0.69 dB PSNR improvement over videos compressed at various QPs, outperforming state-of-the-art approaches.
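The core idea of the QP-adaptive fusion described above can be illustrated with a minimal sketch: features extracted at several progressive stages are blended with softmax weights derived from estimated QP-relevance scores. This is only an interpretation of the abstract, not the authors' implementation; the function name, the score inputs, and the softmax weighting scheme are all assumptions introduced here for illustration.

```python
import numpy as np

def qp_adaptive_fusion(features, qp_scores):
    """Illustrative (hypothetical) fusion of progressively extracted
    feature maps: per-stage relevance scores, e.g. produced by a
    quality estimation module, are turned into softmax weights and
    used for a weighted sum over the stages.

    features  : list of equally shaped arrays, one per stage
    qp_scores : 1-D array of relevance scores, one per stage
    """
    # numerically stable softmax over the per-stage scores
    w = np.exp(qp_scores - qp_scores.max())
    w = w / w.sum()
    stacked = np.stack(features, axis=0)      # (stages, H, W)
    return np.tensordot(w, stacked, axes=1)   # weighted sum over stages

# toy example: three stages; the last stage dominates the fusion
feats = [np.full((2, 2), v, dtype=float) for v in (1.0, 2.0, 3.0)]
fused = qp_adaptive_fusion(feats, np.array([0.0, 0.0, 5.0]))
```

In this toy case the third stage receives almost all of the weight, so the fused map stays close to that stage's feature values; in the actual framework the weights would instead track the estimated QP of the input video.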
ISSN: 1520-9210, 1941-0077
DOI: 10.1109/TMM.2023.3339599