
A Multi-Branch Feature Extraction Residual Network for Lightweight Image Super-Resolution

Bibliographic Details
Published in: Mathematics (Basel) 2024-09, Vol. 12 (17), p. 2736
Main Authors: Liu, Chunying; Wan, Xujie; Gao, Guangwei
Format: Article
Language:English
Description
Summary: Single-image super-resolution (SISR) seeks to learn the mapping between low-resolution and high-resolution images. However, high-performance network models often entail a large number of parameters and computations, limiting their practical application. Lightweight design and efficiency therefore become crucial when applying image super-resolution (SR) to real-world scenarios. We propose a straightforward and efficient method, the Multi-Branch Feature Extraction Residual Network (MFERN), which tackles lightweight image SR by fusing multi-information self-calibration with multi-attention information. Specifically, we devise a Multi-Branch Residual Feature Fusion Module (MRFFM) that leverages a multi-branch residual structure to fuse multiple sources of information concisely and effectively. Within the MRFFM, we design a Multi-Scale Attention Feature Fusion Block (MAFFB) to extract features via convolution and self-calibration attention operations. Furthermore, we introduce a Dual Feature Calibration Block (DFCB) that dynamically fuses feature information using weights derived from its upper and lower branches. Finally, since convolution extracts only local information, we incorporate a Transformer module to integrate global information effectively. Experimental results demonstrate that MFERN achieves an excellent balance between parameter count and reconstruction performance.
ISSN: 2227-7390
DOI: 10.3390/math12172736
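The abstract's Dual Feature Calibration Block fuses upper- and lower-branch features with dynamically derived weights. The paper's exact formulation is not given here, so the following is only a minimal NumPy sketch under assumed details: per-channel weights from a sigmoid over each branch's globally pooled response, normalized to form a convex combination. The function name `dual_feature_calibration` and all internals are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dual_feature_calibration(upper, lower):
    """Hypothetical sketch of dynamic two-branch fusion (details assumed,
    not taken from the paper). Inputs are feature maps of shape (C, H, W)."""
    # Global average pool each branch to a per-channel descriptor (C, 1, 1).
    w_u = sigmoid(upper.mean(axis=(1, 2), keepdims=True))
    w_l = sigmoid(lower.mean(axis=(1, 2), keepdims=True))
    # Normalize so the two dynamic weights sum to 1 per channel,
    # making the fusion an elementwise convex combination.
    total = w_u + w_l
    return (w_u / total) * upper + (w_l / total) * lower

# Example: fuse two random feature maps of shape (channels, H, W).
rng = np.random.default_rng(0)
up = rng.standard_normal((4, 8, 8))
lo = rng.standard_normal((4, 8, 8))
fused = dual_feature_calibration(up, lo)
print(fused.shape)  # (4, 8, 8)
```

Because the normalized weights sum to one per channel, each fused value lies between the corresponding upper- and lower-branch values; a learned variant would replace the pooling/sigmoid gate with trainable layers.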