Scene Text Deblurring Using Text-Specific Multiscale Dictionaries

Bibliographic Details
Published in: IEEE Transactions on Image Processing, 2015-04, Vol. 24 (4), p. 1302-1314
Main Authors: Xiaochun Cao, Wenqi Ren, Wangmeng Zuo, Xiaojie Guo, Hassan Foroosh
Format: Article
Language: English
Description
Summary: Text in natural scenes carries critical semantic cues for understanding images. When capturing such scenes, especially with handheld cameras, blur is a common artifact. Deblurring techniques are therefore desirable to improve the visual quality of these images, and they also play an important role in character recognition and image understanding. In this paper, we study the problem of recovering clear scene text by exploiting the characteristics of text fields. A series of text-specific multiscale dictionaries (TMD) and a natural scene dictionary are learned to separately model the priors on text and nontext fields. The TMD-based text field reconstruction effectively handles strings at different scales within a blurry image. Furthermore, an adaptive nonuniform deblurring method is proposed to efficiently handle the spatially varying blur encountered in real-world images. Dictionary learning allows more flexible modeling of text field properties, and the combination with the nonuniform method is better suited to real situations, where blur kernel sizes are depth dependent. Experimental results show that the proposed method achieves deblurring results with better visual quality than state-of-the-art methods.
ISSN: 1057-7149 (print), 1941-0042 (electronic)
DOI: 10.1109/TIP.2015.2400217
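
To make the dictionary-based prior in the abstract concrete, below is a minimal Python sketch, not the authors' implementation, of the core idea: learn an overcomplete dictionary from sharp text patches, then reconstruct a degraded image's patches by sparse coding over that dictionary. It covers a single scale only, whereas the paper's TMD uses a series of such dictionaries at multiple scales combined with nonuniform kernel estimation. The function names, training data, and parameter values here are illustrative assumptions, built on scikit-learn's DictionaryLearning.

import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.feature_extraction.image import (
    extract_patches_2d, reconstruct_from_patches_2d)

def learn_text_dictionary(train_patches, n_atoms=128):
    # train_patches: (n_samples, patch_h * patch_w) array of vectorized
    # sharp text patches (hypothetical training data, one scale only).
    dico = DictionaryLearning(n_components=n_atoms,
                              transform_algorithm="omp",
                              transform_n_nonzero_coefs=5,
                              max_iter=20, random_state=0)
    dico.fit(train_patches)
    return dico

def reconstruct_text_field(image, dico, patch_size=(8, 8)):
    # image: 2-D grayscale float array. Sparse-code every overlapping
    # patch over the learned dictionary, then re-assemble the image by
    # averaging the overlapping reconstructions.
    patches = extract_patches_2d(image, patch_size)
    flat = patches.reshape(len(patches), -1)
    means = flat.mean(axis=1, keepdims=True)   # code zero-mean patches
    codes = dico.transform(flat - means)       # sparse coefficients
    recon = codes @ dico.components_ + means   # back to pixel space
    return reconstruct_from_patches_2d(
        recon.reshape(patches.shape), image.shape)

In a full pipeline along the lines the abstract describes, this reconstruction would serve as the text-field prior inside an alternating deblurring loop (estimate the blur kernel, deconvolve, re-impose the prior), with a separate natural-scene dictionary playing the same role for nontext regions.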