A Cycle-GAN-Based Method for Real-World Image Super-Resolution

Bibliographic Details
Main Authors: Imamura, Ryusei; Li, Yinhao; Taga, Hiroshi; Iwasa, Koki; Shichikawa, Ryuichi; Suganami, Makoto; Nakamoto, Kazuhiro; Chen, Yen-Wei
Format: Conference Proceeding
Language: English
Online Access: Request full text
Description
Summary: In recent years, the development of deep learning-based image super-resolution (SR) technology has made it easier to generate high-resolution, clear images. However, in practical scenarios where cameras cannot be placed close to objects, the captured image quality degrades significantly, leading to very low accuracy in the automatic recognition of digits and text. Traditional SR methods still fail to achieve satisfactory results in such situations. Many supervised SR studies train on synthetic low-resolution (LR) images created from high-resolution (HR) images; however, such synthetic LR images do not reflect real-world degradation. Therefore, to reconstruct degraded images captured in real-world scenarios for high-accuracy automatic character recognition, this study generates low-quality images that simulate real-world conditions to train SR models, thereby restoring and enhancing characters in degraded captured images. Our approach comprises two parts: domain correction and SR model training. First, we use CycleGAN to synthesize LR images that include real-world degradation features. Second, the generated LR images are paired with HR images to train the SR model. To validate the effectiveness of the proposed method, we used YOLOv5 for text recognition before and after SR processing. Experimental results show that our method effectively improves both image quality and the accuracy of optical character recognition.
ISSN: 2693-0854
DOI: 10.1109/GCCE62371.2024.10760665
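
For illustration only, the two-stage pipeline described in the summary (CycleGAN-based domain correction, then SR training on the generated pairs) could be wired up roughly as in the following PyTorch sketch. Every architecture, name (DegradationGenerator, TinySR), size, and loss below is an assumption made for demonstration; the paper's actual networks and training details are not given in this record.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DegradationGenerator(nn.Module):
    """Stage 1 (assumed form): a CycleGAN-style generator that maps clean
    synthetic LR images to the real-world degradation domain."""
    def __init__(self, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 3, 3, padding=1),
        )

    def forward(self, x):
        # Residual connection preserves content; the branch adds degradation.
        return x + self.net(x)

class TinySR(nn.Module):
    """Stage 2 (assumed form): a small x4 SR network trained on
    (degraded LR, HR) pairs produced by the stage-1 generator."""
    def __init__(self, scale=4, ch=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 3 * scale * scale, 3, padding=1),
        )
        self.up = nn.PixelShuffle(scale)

    def forward(self, x):
        return self.up(self.body(x))

if __name__ == "__main__":
    g = DegradationGenerator().eval()  # in practice, a pretrained CycleGAN generator
    sr = TinySR()
    opt = torch.optim.Adam(sr.parameters(), lr=1e-4)

    hr = torch.rand(4, 3, 96, 96)                    # HR training crops (dummy data)
    lr_clean = F.interpolate(hr, scale_factor=0.25,  # synthetic bicubic LR
                             mode="bicubic", align_corners=False)
    with torch.no_grad():
        lr_real = g(lr_clean)          # inject real-world degradation features

    # Supervised SR loss on the generated (degraded LR, HR) pairs.
    loss = F.l1_loss(sr(lr_real), hr)
    opt.zero_grad()
    loss.backward()
    opt.step()
    print(f"L1 loss: {loss.item():.4f}")

In practice, the stage-1 generator would be trained as a full CycleGAN with adversarial and cycle-consistency losses on unpaired clean and real-world LR images; it is frozen here only to show how the generated pairs feed the stage-2 SR objective.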