Robust Boundary Segmentation in Medical Images Using a Consecutive Deep Encoder-Decoder Network
Image segmentation is typically used to locate objects and boundaries. It is essential in many clinical applications, such as the pathological diagnosis of hepatic diseases, surgical planning, and postoperative assessment. The segmentation task is hampered by fuzzy boundaries, complex backgrounds, and appearances of objects of interest that vary considerably. The success of the procedure still depends heavily on the operator's skill and hand-eye coordination. This paper is therefore motivated by the need for an early, accurate diagnosis of objects detected in medical images. We propose a new polyp segmentation method, CDED-net, built as a combination of multiple deep encoder-decoder networks. The architecture not only captures multi-level contextual information by extracting discriminative features at different effective fields-of-view and multiple image scales, but also learns rich feature representations from missing pixels during training. Moreover, the network captures object boundaries through multiscale effective decoders. We also propose a novel strategy for improving segmentation performance that combines a boundary-emphasization data augmentation method with a new, effective dice loss function. The goal of this strategy is to make the network robust to poorly defined object boundaries caused by the non-specular transition zone between background and foreground regions. To evaluate the generality of the proposed method, the network was trained and evaluated on three well-known polyp datasets: CVC-ColonDB, CVC-ClinicDB, and ETIS-Larib PolypDB. We further evaluated it on the Pedro Hispano Hospital (PH²) and ISBI 2016 skin lesion segmentation datasets, and on a CT healthy abdominal organ segmentation dataset. Our results show that CDED-net significantly surpasses state-of-the-art methods.
Published in: | IEEE Access, 2019, Vol. 7, p. 33795-33808 |
---|---|
Main Authors: | Nguyen, Ngoc-Quang; Lee, Sang-Woong |
Format: | Article |
Language: | English |
container_end_page | 33808 |
container_issue | |
container_start_page | 33795 |
container_title | IEEE access |
container_volume | 7 |
creator | Nguyen, Ngoc-Quang; Lee, Sang-Woong |
description | Image segmentation is typically used to locate objects and boundaries. It is essential in many clinical applications, such as the pathological diagnosis of hepatic diseases, surgical planning, and postoperative assessment. The segmentation task is hampered by fuzzy boundaries, complex backgrounds, and appearances of objects of interest that vary considerably. The success of the procedure still depends heavily on the operator's skill and hand-eye coordination. This paper is therefore motivated by the need for an early, accurate diagnosis of objects detected in medical images. We propose a new polyp segmentation method, CDED-net, built as a combination of multiple deep encoder-decoder networks. The architecture not only captures multi-level contextual information by extracting discriminative features at different effective fields-of-view and multiple image scales, but also learns rich feature representations from missing pixels during training. Moreover, the network captures object boundaries through multiscale effective decoders. We also propose a novel strategy for improving segmentation performance that combines a boundary-emphasization data augmentation method with a new, effective dice loss function. The goal of this strategy is to make the network robust to poorly defined object boundaries caused by the non-specular transition zone between background and foreground regions. To evaluate the generality of the proposed method, the network was trained and evaluated on three well-known polyp datasets: CVC-ColonDB, CVC-ClinicDB, and ETIS-Larib PolypDB. We further evaluated it on the Pedro Hispano Hospital (PH²) and ISBI 2016 skin lesion segmentation datasets, and on a CT healthy abdominal organ segmentation dataset. Our results show that CDED-net significantly surpasses state-of-the-art methods. |
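The abstract refers to an "effective dice loss function" without defining it in this record. For orientation, the standard soft Dice loss that such variants typically extend can be sketched as follows; this is a generic illustration, not the paper's exact formulation:

```python
import numpy as np

def dice_loss(pred, target, smooth=1.0):
    """Soft Dice loss for binary segmentation masks.

    pred: predicted probabilities in [0, 1]; target: binary ground truth.
    The `smooth` term avoids division by zero on empty masks.
    """
    pred = np.asarray(pred, dtype=float).ravel()
    target = np.asarray(target, dtype=float).ravel()
    intersection = (pred * target).sum()
    dice = (2.0 * intersection + smooth) / (pred.sum() + target.sum() + smooth)
    return 1.0 - dice

# A perfect prediction yields a loss of 0.
mask = np.array([[0, 1], [1, 1]], dtype=float)
print(dice_loss(mask, mask))  # 0.0
```

Unlike pixel-wise cross-entropy, the Dice loss scores overlap between whole regions, which is why it is a common choice when foreground objects (such as polyps) occupy only a small fraction of the image.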
doi_str_mv | 10.1109/ACCESS.2019.2904094 |
format | article |
fulltext | fulltext |
identifier | ISSN: 2169-3536 |
ispartof | IEEE access, 2019, Vol.7, p.33795-33808 |
issn | 2169-3536 (ISSN); 2169-3536 (EISSN) |
language | eng |
recordid | cdi_proquest_journals_2455640919 |
source | IEEE Open Access Journals |
subjects | Boundaries; boundary segmentation; Cancer; Coders; continuous network; Datasets; deep convolutional neural network; Diagnosis; encoder-decoder network; Encoders-Decoders; Feature extraction; Image segmentation; Medical diagnostic imaging; medical image segmentation; Medical imaging; Shape; Training |
title | Robust Boundary Segmentation in Medical Images Using a Consecutive Deep Encoder-Decoder Network |