Lightweight single image super-resolution with attentive residual refinement network

Bibliographic Details
Published in:Neurocomputing (Amsterdam) 2022-08, Vol.500, p.846-855
Main Authors: Qin, Jinghui, Zhang, Rumin
Format: Article
Language:English
Description
Summary: In recent years, deep convolutional neural network (CNN) based single image super-resolution (SISR) methods have demonstrated impressive performance in terms of quantitative metrics and visual effects. Most CNN-based SISR methods can learn the complex non-linear mapping between low-resolution (LR) images and their corresponding high-resolution (HR) images due to the powerful representation capabilities of deep convolutional neural networks. However, as the depth and width of SISR networks increase, their parameters grow dramatically, leading to huge computational cost and large memory consumption, making them impractical in real-world applications. To address these issues, we propose an accurate and lightweight deep convolutional neural network, named Attentive Residual Refinement Network (ARRFN), to recover the high-resolution image directly from the original low-resolution image for SISR. Our proposed ARRFN consists of three parts: a feature extraction block, a stack of attentive residual refinement blocks (ARRFB), and a multi-scale separable upscaling module (MSSU). Specifically, each ARRFB consists of two branches: a regular residual learning branch and an attentive residual refinement branch. The former conducts regular residual learning through two residual blocks, while the latter refines the residual information from those two residual blocks with an attentive residual mechanism to further enhance the representation capabilities of the network. Furthermore, the MSSU is proposed to replace the regular upsampling operation for better SR results. Extensive experiments on several standard benchmarks show that the proposed method outperforms state-of-the-art SR methods in terms of quantitative metrics, visual quality, memory footprint, and inference time.
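The abstract's description of the ARRFB can be sketched in PyTorch. The following is a minimal illustration, not the authors' implementation: the module names, channel counts, and the exact form of the attention (a channel-attention squeeze over the concatenated residuals of the two blocks) are assumptions; the paper should be consulted for the actual design.

```python
# Hypothetical sketch of the two-branch ARRFB described in the abstract.
# All hyperparameters (channel width, attention reduction ratio) are
# assumptions for illustration, not values from the paper.
import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    """Plain residual block used by the regular residual learning branch."""

    def __init__(self, ch: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)


class ARRFB(nn.Module):
    """Two branches: two stacked residual blocks, plus an attentive branch
    that refines the residual information produced by those blocks."""

    def __init__(self, ch: int):
        super().__init__()
        self.rb1 = ResidualBlock(ch)
        self.rb2 = ResidualBlock(ch)
        # Assumed channel attention over the concatenated residuals.
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * ch, ch // 4, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(ch // 4, 2 * ch, 1),
            nn.Sigmoid(),
        )
        self.fuse = nn.Conv2d(2 * ch, ch, 1)

    def forward(self, x):
        y1 = self.rb1(x)
        y2 = self.rb2(y1)
        # Residuals contributed by each of the two blocks.
        res = torch.cat([y1 - x, y2 - y1], dim=1)
        # Attentively reweight the residuals, then fuse back to ch channels.
        refined = self.fuse(res * self.attn(res))
        return y2 + refined


x = torch.randn(1, 32, 24, 24)
out = ARRFB(32)(x)
print(out.shape)  # spatial size is preserved by the block
```

The block preserves the input's spatial resolution, so a stack of such blocks can sit between the feature extraction block and the upscaling module, with the MSSU alone responsible for increasing resolution.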
ISSN:0925-2312
1872-8286
DOI:10.1016/j.neucom.2022.05.066