A blind image super-resolution network guided by kernel estimation and structural prior knowledge

Bibliographic Details
Published in: Scientific Reports, 2024-04, Vol. 14(1), p. 9525, Article 9525
Main Authors: Zhang, Jiajun, Zhou, Yuanbo, Bi, Jiang, Xue, Yuyang, Deng, Wei, He, Wenlin, Zhao, Tao, Sun, Kai, Tong, Tong, Gao, Qinquan, Zhang, Qing
Format: Article
Language: English
Description
Summary: The goal of blind image super-resolution (BISR) is to recover the corresponding high-resolution image from a given low-resolution image with unknown degradation. Prior work has focused primarily on exploiting the degradation kernel as prior knowledge to recover the high-frequency components of an image. However, it has overlooked the structural prior information within the same image, leading to unsatisfactory recovery of textures with strong self-similarity. To address this issue, we propose a two-stage blind super-resolution network that is based on a kernel estimation strategy and is capable of integrating structural texture as prior knowledge. In the first stage, we utilize a dynamic kernel estimator to obtain a degradation representation embedding. We then propose triple-path attention groups, consisting of triple-path attention blocks and a global feature fusion block, to extract structural prior information that assists the recovery of details within images. Quantitative and qualitative results on standard benchmarks with various degradation settings, including Gaussian8 and DIV2KRK, validate that the proposed method outperforms state-of-the-art methods in terms of fidelity and recovery of clear details. The relevant code is made available on this link as open source.
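To make the "unknown degradation" setting concrete: in benchmarks such as Gaussian8, the low-resolution input is produced by blurring the high-resolution image with an isotropic Gaussian kernel (of varying width) before downsampling, and a kernel estimator like the one in the first stage tries to infer that kernel from the low-resolution image alone. The sketch below, in plain Python, builds such a normalized Gaussian blur kernel; the function name and defaults are illustrative and not taken from the paper.

```python
import math

def gaussian_kernel(size=21, sigma=2.0):
    """Build a normalized isotropic 2D Gaussian blur kernel.

    This is the family of degradation kernels used by Gaussian8-style
    BISR benchmarks; a kernel estimator attempts to recover sigma (and,
    more generally, the kernel shape) from the LR image. Illustrative
    sketch only, not the paper's implementation.
    """
    c = size // 2  # center index
    k = [[math.exp(-((x - c) ** 2 + (y - c) ** 2) / (2.0 * sigma ** 2))
          for x in range(size)] for y in range(size)]
    # Normalize so the kernel sums to 1 (blur preserves mean intensity).
    s = sum(sum(row) for row in k)
    return [[v / s for v in row] for row in k]

kernel = gaussian_kernel(size=21, sigma=2.6)
```

Convolving a high-resolution image with such a kernel and then subsampling reproduces the degradation model these benchmarks assume.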
ISSN: 2045-2322
DOI:10.1038/s41598-024-60157-9