DeCNT: Deep Deformable CNN for Table Detection
This paper presents a novel approach for the detection of tables present in documents, leveraging the potential of deep neural networks. Conventional approaches for table detection rely on heuristics that are error-prone and specific to a dataset. In contrast, the presented approach harvests the potential of data to recognize tables of arbitrary layout. Most prior approaches for table detection are only applicable to PDFs, whereas the presented approach works directly on images, making it generally applicable to any format. The presented approach is based on a novel combination of a deformable CNN with Faster R-CNN/FPN. A conventional CNN has a fixed receptive field, which is problematic for table detection since tables can be present at arbitrary scales along with arbitrary transformations (orientation). Deformable convolution conditions its receptive field on the input itself, allowing it to mold its receptive field according to the input. This adaptation of the receptive field enables the network to cater for tables of arbitrary layout. The proposed approach was evaluated on two major publicly available table detection datasets: ICDAR-2013 and ICDAR-2017 POD. It surpasses the state-of-the-art performance on both the ICDAR-2013 and ICDAR-2017 POD datasets, with F-measures of 0.994 and 0.968, respectively, indicating its effectiveness and superiority for the task of table detection.
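
To make the mechanism the abstract refers to concrete, the sketch below shows how a deformable convolution conditions its sampling locations on the input: a small ordinary convolution predicts an (x, y) offset for every kernel point at every spatial position, and the deformable convolution then samples its receptive field at those shifted locations. This is not the authors' code; it is a minimal illustration built on PyTorch and torchvision.ops.DeformConv2d, with the block name and layer sizes chosen only for the example.

```python
# Minimal sketch of a deformable convolution block (illustrative, not the paper's implementation).
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d


class DeformableConvBlock(nn.Module):
    """A 3x3 deformable convolution whose sampling grid is predicted from the input."""

    def __init__(self, in_ch: int, out_ch: int, k: int = 3):
        super().__init__()
        # 2 offset values (dx, dy) for each of the k*k kernel sampling points.
        self.offset_pred = nn.Conv2d(in_ch, 2 * k * k, kernel_size=k, padding=k // 2)
        self.deform_conv = DeformConv2d(in_ch, out_ch, kernel_size=k, padding=k // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        offsets = self.offset_pred(x)        # receptive field conditioned on the input itself
        return self.deform_conv(x, offsets)  # features sampled at the shifted locations


if __name__ == "__main__":
    feat = torch.randn(1, 64, 50, 50)        # dummy backbone feature map
    out = DeformableConvBlock(64, 64)(feat)
    print(out.shape)                         # torch.Size([1, 64, 50, 50])
```

With the predicted offsets forced to zero, the block reduces to a standard 3x3 convolution, which makes it easy to see that the learned offsets are what let the receptive field adapt to tables at arbitrary scales and orientations; the paper pairs this idea with a Faster R-CNN/FPN detector.
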
Published in: | IEEE Access, 2018, Vol. 6, pp. 74151-74161 |
---|---|
Main Authors: | Siddiqui, Shoaib Ahmed; Malik, Muhammad Imran; Agne, Stefan; Dengel, Andreas; Ahmed, Sheraz |
Format: | Article |
Language: | English |
Subjects: | Artificial neural networks; Convolution; convolutional neural networks; Data mining; Datasets; Deep learning; deformable convolution; Deformation; faster R-CNN; Feature extraction; Formability; FPN; Hidden Markov models; Layout; Layouts; object detection; representation learning; table detection; table spotting; Task analysis |
Online Access: | Get full text: https://doi.org/10.1109/ACCESS.2018.2880211 |
container_end_page | 74161 |
container_issue | |
container_start_page | 74151 |
container_title | IEEE access |
container_volume | 6 |
creator | Siddiqui, Shoaib Ahmed; Malik, Muhammad Imran; Agne, Stefan; Dengel, Andreas; Ahmed, Sheraz |
description | This paper presents a novel approach for the detection of tables present in documents, leveraging the potential of deep neural networks. Conventional approaches for table detection rely on heuristics that are error-prone and specific to a dataset. In contrast, the presented approach harvests the potential of data to recognize tables of arbitrary layout. Most prior approaches for table detection are only applicable to PDFs, whereas the presented approach works directly on images, making it generally applicable to any format. The presented approach is based on a novel combination of a deformable CNN with Faster R-CNN/FPN. A conventional CNN has a fixed receptive field, which is problematic for table detection since tables can be present at arbitrary scales along with arbitrary transformations (orientation). Deformable convolution conditions its receptive field on the input itself, allowing it to mold its receptive field according to the input. This adaptation of the receptive field enables the network to cater for tables of arbitrary layout. The proposed approach was evaluated on two major publicly available table detection datasets: ICDAR-2013 and ICDAR-2017 POD. It surpasses the state-of-the-art performance on both the ICDAR-2013 and ICDAR-2017 POD datasets, with F-measures of 0.994 and 0.968, respectively, indicating its effectiveness and superiority for the task of table detection. |
doi_str_mv | 10.1109/ACCESS.2018.2880211 |
format | article |
identifier | ISSN: 2169-3536 |
ispartof | IEEE access, 2018, Vol.6, p.74151-74161 |
issn | 2169-3536 (ISSN); 2169-3536 (EISSN) |
language | eng |
source | IEEE Open Access Journals |
subjects | Artificial neural networks; Convolution; convolutional neural networks; Data mining; Datasets; Deep learning; deformable convolution; Deformation; faster R-CNN; Feature extraction; Formability; FPN; Hidden Markov models; Layout; Layouts; object detection; representation learning; table detection; table spotting; Task analysis |
title | DeCNT: Deep Deformable CNN for Table Detection |