
Face image deblurring with feature correction and fusion

Bibliographic Details
Published in: The Visual Computer, 2024-05, Vol. 40 (5), p. 3693-3707
Main Authors: Long, Ma; Yu, Xu; Cong, Shu; Zoujian, Wei; Jiangbin, Du; Jiayao, Zhao
Format: Article
Language: English
Summary: The key to image deblurring is to extract and screen valid information from blurred images and use this information to restore sharp images. With this in mind, we carefully design a face deblurring method. We adopt a simple feature extraction technique that extracts multilevel face features using a pretrained feature extraction subnetwork (FEN), e.g., VGGNet. To ensure adequate and accurate face information, we use all but the lowest-level features, which may contain much blurring information, for restoration. The selected features are preprocessed with self-attention modules (SAMs), which correct the features by synthesizing context, and are then injected into a feature fusion network (FFN) to restore clear images. The FFN stem is intentionally designed as a mirror of the FEN, so the corrected features are naturally added to the corresponding (same-sized) features in the FFN without manual compression or expansion. In the FFN, the features are fused layerwise and corrected again in the last layers with SAMs; the deblurring results are finally output. Thus, the resulting images are firmly controlled by the extracted face features, which is advantageous for restoring true faces. To pursue results with both high sharpness and fidelity, the networks are trained with discriminator and fidelity constraints. Experimental results on multiple challenging datasets show that our method achieves results competitive with those of state-of-the-art methods. Moreover, the proposed method requires neither face alignment nor iterative processing; thus, it is simple and practical.
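
The abstract describes a pipeline in which a pretrained feature extraction network (FEN) produces multilevel face features, all but the lowest level are corrected by self-attention modules (SAMs), and the corrected features are added to same-sized features inside a mirrored feature fusion network (FFN) that outputs the restored image. The Python/PyTorch sketch below is only an illustrative reconstruction of that data flow under assumed layer widths, with a toy encoder standing in for the pretrained VGG-style FEN; none of the module definitions, names, or sizes come from the paper itself.

# Minimal sketch (not the authors' code) of the FEN -> SAM -> FFN flow
# described in the abstract. All module names and sizes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SAM(nn.Module):
    """Simple non-local self-attention used here as a stand-in SAM."""
    def __init__(self, channels):
        super().__init__()
        self.q = nn.Conv2d(channels, channels // 2, 1)
        self.k = nn.Conv2d(channels, channels // 2, 1)
        self.v = nn.Conv2d(channels, channels, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.q(x).flatten(2).transpose(1, 2)   # (b, hw, c/2)
        k = self.k(x).flatten(2)                   # (b, c/2, hw)
        v = self.v(x).flatten(2).transpose(1, 2)   # (b, hw, c)
        attn = torch.softmax(q @ k / (q.shape[-1] ** 0.5), dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return x + out                             # residual correction

class FEN(nn.Module):
    """Toy VGG-like encoder standing in for the pretrained FEN."""
    def __init__(self, widths=(32, 64, 128)):
        super().__init__()
        self.stages, in_ch = nn.ModuleList(), 3
        for w in widths:
            self.stages.append(nn.Sequential(
                nn.Conv2d(in_ch, w, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(w, w, 3, stride=2, padding=1), nn.ReLU(inplace=True)))
            in_ch = w

    def forward(self, x):
        feats = []
        for stage in self.stages:
            x = stage(x)
            feats.append(x)                        # low -> high level
        return feats

class FFN(nn.Module):
    """Decoder mirroring the FEN; corrected features are added in."""
    def __init__(self, widths=(32, 64, 128)):
        super().__init__()
        # SAMs only for the higher-level features; the lowest level is
        # skipped, as the abstract says it may carry too much blur.
        self.sams = nn.ModuleList(SAM(w) for w in widths[1:])
        self.ups, rev = nn.ModuleList(), list(reversed(widths))
        for i, w in enumerate(rev):
            out_ch = rev[i + 1] if i + 1 < len(rev) else 3
            self.ups.append(nn.Sequential(
                nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
                nn.Conv2d(w, out_ch, 3, padding=1)))

    def forward(self, feats):
        corrected = [sam(f) for sam, f in zip(self.sams, feats[1:])]
        x = corrected[-1]                          # start from the deepest level
        skips = corrected[:-1]                     # remaining corrected maps
        for i, up in enumerate(self.ups):
            if i > 0 and skips:
                x = x + skips.pop()                # same-sized additive fusion
            x = F.relu(up(x)) if i + 1 < len(self.ups) else torch.sigmoid(up(x))
        return x

if __name__ == "__main__":
    blurred = torch.rand(1, 3, 128, 128)
    feats = FEN()(blurred)
    sharp = FFN()(feats)
    print(sharp.shape)                             # torch.Size([1, 3, 128, 128])

The adversarial and fidelity (e.g., pixel-wise) losses mentioned in the abstract would be applied to the FFN output during training; they are omitted here to keep the sketch focused on the feature correction and fusion path.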
ISSN: 0178-2789; 1432-2315
DOI: 10.1007/s00371-023-03059-7