Revisiting reweighted graph total variation blind deconvolution and beyond
| Field | Value |
|---|---|
| Published in | The Visual Computer, 2024-05, Vol. 40 (5), pp. 3119–3135 |
| Main Authors | |
| Format | Article |
| Language | English |
| Summary | Image priors are essential to blind deconvolution. Reweighted graph total variation (RGTV), a newer prior intended to replace the classical total variation (TV), has been shown to outperform TV and several other state-of-the-art models both theoretically and empirically. This paper goes further, first providing a simpler geometric interpretation of RGTV in the framework of variational partial differential equations (PDEs), rather than the earlier graph spectral interpretation formulated in the graph frequency domain. Surprisingly, this slight shift of perspective leads to a substantial gain in blind deblurring accuracy and efficiency over the previously derived numerical approach, which approximates RGTV by a graph L1-Laplacian regularizer. Building on the simplified RGTV reformulated in this paper, a further contribution explores its potential for blind facial image restoration in combination with unsupervised deep facial models. Experimental results on blind face deblurring and blind face hallucination both demonstrate the necessity and rationale of a joint model-based and learning-based approach to blind face restoration. |
| ISSN | 0178-2789; 1432-2315 |
| DOI | 10.1007/s00371-023-03014-6 |
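
The summary names the RGTV prior and its use in blind deconvolution without stating the formulas. The following is a minimal notation sketch based on how graph TV, its reweighted variant, and a generic MAP-style blind deblurring objective are commonly written in the graph-signal-processing literature; the edge set E, the Gaussian weight kernel with bandwidth σ, and the weights λ, μ are illustrative assumptions, not taken from this paper.

```latex
% Notation sketch (generic, not this paper's exact formulation).
% Graph TV of an image x on a pixel graph G = (V, E) with fixed edge weights w_{ij}:
\[
  \|x\|_{\mathrm{GTV}} \;=\; \sum_{(i,j)\in E} w_{ij}\,\lvert x_i - x_j\rvert .
\]
% Reweighted graph TV (RGTV) lets the weights depend on the signal itself,
% typically through a Gaussian kernel with bandwidth \sigma:
\[
  \|x\|_{\mathrm{RGTV}} \;=\; \sum_{(i,j)\in E} w_{ij}(x)\,\lvert x_i - x_j\rvert ,
  \qquad
  w_{ij}(x) \;=\; \exp\!\left(-\frac{(x_i - x_j)^2}{\sigma^2}\right).
\]
% A generic blind deconvolution objective using such a prior, with blurred
% observation b, latent image x, blur kernel k, and regularization weights \lambda, \mu:
\[
  \min_{x,\,k}\;\; \tfrac{1}{2}\,\|\,x \otimes k - b\,\|_2^2
  \;+\; \lambda\,\|x\|_{\mathrm{RGTV}}
  \;+\; \mu\,\|k\|_2^2 ,
  \qquad k \ge 0,\; \textstyle\sum_i k_i = 1 .
\]
```

The signal-dependent weights are what distinguish RGTV from plain graph TV: edges spanning large intensity differences receive small weights, so sharp image edges are penalized less and tend to be preserved during deblurring.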