
A Novel Attention Enhanced Residual-In-Residual Dense Network for Text Image Super-Resolution

Natural scene text images captured by handheld devices often suffer from low resolution (LR), which makes subsequent detection and recognition tasks more challenging. To address this problem, LR text images are generally processed with super-resolution (SR) first. In this paper, we propose a novel...

Full description

Saved in:
Bibliographic Details
Main Authors: Xue, Minglong, Huang, Zhiheng, Liu, Ruo-ze, Lu, Tong
Format: Conference Proceeding
Language: English
Subjects:
Online Access: Request full text
container_end_page 6
container_start_page 1
creator Xue, Minglong
Huang, Zhiheng
Liu, Ruo-ze
Lu, Tong
description Natural scene text images captured by handheld devices often suffer from low resolution (LR), which makes subsequent detection and recognition tasks more challenging. To address this problem, LR text images are generally processed with super-resolution (SR) first. In this paper, we propose a novel low-resolution text image super-resolution method. The method adopts a residual-in-residual dense network (RRDN) to extract deeper high-frequency features than the residual dense network (RDN), and then enhances the spatial and channel features with an attention mechanism. Given the characteristics of text, we add a gradient loss to the adversarial learning objective. Experiments show that our method performs well both qualitatively and quantitatively on the latest public text image super-resolution dataset, and the proposed method also achieves state-of-the-art results on text images of natural scenes.
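The abstract names two concrete ingredients: attention over channel and spatial features, and a gradient loss added to adversarial training to keep character strokes sharp. The paper's exact formulation is not given in this record, so the following is only an illustrative NumPy sketch under common assumptions: forward-difference image gradients with an L1 penalty, and squeeze-and-excitation-style channel gating with random (untrained) bottleneck weights. All function names here are hypothetical.

```python
import numpy as np

def image_gradients(img):
    """Forward-difference gradient magnitudes of a 2-D image (H, W).

    Returns (gy, gx): absolute differences along height and width."""
    gy = np.abs(np.diff(img, axis=0))
    gx = np.abs(np.diff(img, axis=1))
    return gy, gx

def gradient_loss(sr, hr):
    """L1 distance between the gradient maps of the super-resolved
    image and the ground truth, penalizing blurred character edges."""
    sr_gy, sr_gx = image_gradients(sr)
    hr_gy, hr_gx = image_gradients(hr)
    return np.mean(np.abs(sr_gy - hr_gy)) + np.mean(np.abs(sr_gx - hr_gx))

def channel_attention(features, reduction=2):
    """Squeeze-and-excitation-style channel gating on a (C, H, W) map.

    Each channel is global-average-pooled, passed through a small
    two-layer bottleneck (random weights here, purely illustrative),
    and rescaled by a sigmoid gate in (0, 1)."""
    c = features.shape[0]
    squeeze = features.mean(axis=(1, 2))               # (C,)
    rng = np.random.default_rng(0)
    w1 = rng.standard_normal((c // reduction, c)) * 0.1
    w2 = rng.standard_normal((c, c // reduction)) * 0.1
    hidden = np.maximum(w1 @ squeeze, 0.0)             # ReLU
    gates = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))       # sigmoid, (C,)
    return features * gates[:, None, None]
```

A gradient term of this kind is a natural fit for text SR because the useful signal is concentrated in stroke edges, which plain pixel-wise or adversarial losses tend to over-smooth.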
doi_str_mv 10.1109/ICME51207.2021.9428128
format conference_proceeding
fulltext fulltext_linktorsrc
identifier EISSN: 1945-788X; EISBN: 9781665438643, 1665438649
ispartof 2021 IEEE International Conference on Multimedia and Expo (ICME), 2021, p.1-6
issn 1945-788X
language eng
recordid cdi_ieee_primary_9428128
source IEEE Xplore All Conference Series
subjects attention
Conferences
Feature extraction
GAN
Handheld computers
Image edge detection
Image quality
Super-resolution
Superresolution
text image
Text recognition
title A Novel Attention Enhanced Residual-In-Residual Dense Network for Text Image Super-Resolution