Cross-convolutional transformer for automated multi-organs segmentation in a variety of medical images
Published in: | Physics in medicine & biology, 2023-01, Vol.68 (3), p.35008
---|---
Main Authors: | Wang, Jing; Zhao, Haiyue; Liang, Wei; Wang, Shuyu; Zhang, Yan
Format: | Article
Language: | English
container_end_page | |
container_issue | 3 |
container_start_page | 35008 |
container_title | Physics in medicine & biology |
container_volume | 68 |
creator | Wang, Jing; Zhao, Haiyue; Liang, Wei; Wang, Shuyu; Zhang, Yan
description | Despite the development of deep learning methods, segmenting multiple organs across a variety of medical images with a single, consistent algorithm remains a major challenge. We therefore develop a deep learning method based on a cross-convolutional transformer for automated multi-organ segmentation, aiming at better generalization and accuracy.
We propose a cross-convolutional transformer network (C Former) to solve the segmentation problem. Specifically, we first redesign a novel cross-convolutional self-attention mechanism that integrates local and global contexts and models both long-distance and short-distance dependencies to enhance the semantic understanding of image features. Then a multi-scale feature edge fusion module is proposed to combine image edge features, effectively forming multi-scale feature streams and establishing reliable relational connections in the global context. Finally, we train and test on three different modalities, covering three different anatomical regions, and evaluate multi-organ segmentation performance.
We use the evaluation metrics of Dice similarity coefficient (DSC) and 95% Hausdorff distance (HD95) for each dataset. Experiments showed an average DSC of 83.22% and HD95 of 17.55 mm on the Synapse dataset (CT images of abdominal multi-organ structures), an average DSC of 91.42% and HD95 of 1.06 mm on the ACDC dataset (MRI of cardiac substructures), and an average DSC of 86.78% and HD95 of 16.85 mm on the ISIC 2017 dataset (skin cancer images). On each dataset, our proposed method consistently outperforms the compared networks.
The proposed deep learning network provides a generalized and accurate solution for multi-organ segmentation across the three datasets. It has the potential to be applied to a variety of medical datasets for structural segmentation. |
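The idea of pairing a convolutional (local) branch with a self-attention (global) branch, as the abstract describes, can be illustrated with a deliberately simplified sketch. Everything below is hypothetical pedagogy, not the authors' C Former implementation: weights are fixed rather than learned, there are no query/key/value projections, and the two branches are fused by plain addition.

```python
import math

def softmax(row):
    # Numerically stable softmax over one row of scores.
    m = max(row)
    exps = [math.exp(v - m) for v in row]
    s = sum(exps)
    return [e / s for e in exps]

def attention(x):
    """Global branch: plain scaled dot-product self-attention over a
    sequence of token vectors. For illustration Q = K = V = x."""
    n, d = len(x), len(x[0])
    scale = 1.0 / math.sqrt(d)
    out = []
    for i in range(n):
        scores = [scale * sum(x[i][k] * x[j][k] for k in range(d))
                  for j in range(n)]
        w = softmax(scores)
        out.append([sum(w[j] * x[j][k] for j in range(n)) for k in range(d)])
    return out

def local_conv(x, kernel=(0.25, 0.5, 0.25)):
    """Local branch: a fixed 1-D convolution along the token sequence,
    standing in for the convolutional path that models short-range context."""
    n, d = len(x), len(x[0])
    r = len(kernel) // 2
    out = []
    for i in range(n):
        vec = [0.0] * d
        for off, w in zip(range(-r, r + 1), kernel):
            j = min(max(i + off, 0), n - 1)  # clamp at the sequence borders
            for k in range(d):
                vec[k] += w * x[j][k]
        out.append(vec)
    return out

def cross_conv_attention(x):
    """Fuse both branches by addition: long-range dependencies come from
    attention, short-range dependencies from the convolution."""
    g, l = attention(x), local_conv(x)
    return [[gv + lv for gv, lv in zip(gr, lr)] for gr, lr in zip(g, l)]
```

For a single token, attention returns the token unchanged and the clamped convolution does too, so the fused output is simply twice the input; with longer sequences each output token mixes all tokens globally plus its immediate neighbours locally.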
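The two reported metrics can be computed directly from binary masks. The snippet below is a self-contained sketch for small 2-D masks, not the authors' evaluation code; note that HD95 conventions differ between toolkits, and this version pools both directed surface-distance sets before taking the 95th percentile.

```python
import math

def dice(a, b):
    """Dice similarity coefficient between two binary masks
    (same-shaped nested lists of 0/1)."""
    inter = sum(x * y for ra, rb in zip(a, b) for x, y in zip(ra, rb))
    size = sum(x for r in a for x in r) + sum(x for r in b for x in r)
    return 2.0 * inter / size if size else 1.0

def _surface(mask):
    # Foreground pixels with at least one background (or out-of-bounds)
    # 4-neighbour, i.e. the mask's boundary.
    h, w = len(mask), len(mask[0])
    pts = []
    for i in range(h):
        for j in range(w):
            if mask[i][j] and any(
                i2 < 0 or i2 >= h or j2 < 0 or j2 >= w or not mask[i2][j2]
                for i2, j2 in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
            ):
                pts.append((i, j))
    return pts

def hd95(a, b):
    """95th-percentile Hausdorff distance between the surfaces of two masks,
    pooling both directed nearest-surface distance sets."""
    pa, pb = _surface(a), _surface(b)

    def directed(src, dst):
        return [min(math.dist(p, q) for q in dst) for p in src]

    dists = sorted(directed(pa, pb) + directed(pb, pa))
    idx = min(len(dists) - 1, math.ceil(0.95 * len(dists)) - 1)
    return dists[idx]
```

Identical masks give DSC 1.0 and HD95 0.0; shifting a mask by one pixel lowers the overlap and raises the surface distance accordingly, which is why the two metrics are usually reported together.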
doi_str_mv | 10.1088/1361-6560/acb19a |
format | article |
fulltext | fulltext |
identifier | ISSN: 0031-9155 |
ispartof | Physics in medicine & biology, 2023-01, Vol.68 (3), p.35008 |
issn | 0031-9155 1361-6560 |
language | eng |
recordid | cdi_proquest_miscellaneous_2763335655 |
source | Institute of Physics |
subjects | Algorithms; deep learning; Humans; Image Processing, Computer-Assisted - methods; Magnetic Resonance Imaging; medical image; Neural Networks, Computer; self-attention; Skin Neoplasms; transformer; visual attention mechanism
title | Cross-convolutional transformer for automated multi-organs segmentation in a variety of medical images |