
CCANet: Cross-Modality Comprehensive Feature Aggregation Network for Indoor Scene Semantic Segmentation

Bibliographic Details
Published in: IEEE Transactions on Cognitive and Developmental Systems, 2024-09, p. 1-13
Main Authors: Zihao Zhang, Yale Yang, Huifang Hou, Fanman Meng, Fan Zhang, Kangzhan Xie, Chunsheng Zhuang
Format: Article
Language: English
Subjects: Accuracy; Bidirectional Rectification; Cross-Modality Fusion; Data mining; Feature extraction; Image color analysis; RGB-D; Semantic segmentation; Semantics; Transformers; Vision Transformers
description Semantic segmentation of indoor scenes from RGB and depth information is a long-standing research topic; however, fully exploiting the complementarity of multimodal features and fusing them efficiently remains challenging. To address this challenge, we propose an innovative cross-modality comprehensive feature aggregation network (CCANet) for high-precision semantic segmentation of indoor scenes. We first propose a bidirectional cross-modality feature rectification module (BCFR) that lets the two modalities complement each other and removes noise along both channel and spatial correlations. We then design an adaptive criss-cross attention fusion module (CAF) to perform multi-stage deep multi-modal feature fusion. Finally, a multi-supervision strategy is applied to accurately learn additional details of the target and guide the gradual refinement of the segmentation maps. Thorough experiments on two openly accessible indoor-scene datasets demonstrate that CCANet delivers outstanding performance and robustness in aggregating RGB and depth features.
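
As an illustration of the kind of operation the abstract describes, the following PyTorch-style sketch shows one plausible form of a bidirectional channel- and spatial-wise rectification block in the spirit of BCFR. It is not the authors' released implementation; the class name, layer choices, pooling statistics, and reduction ratio are assumptions made for this example.

# Hypothetical sketch, not the authors' code: bidirectional channel/spatial
# rectification between RGB and depth features, loosely following the BCFR
# idea summarized in the abstract. Layer choices and names are assumptions.
import torch
import torch.nn as nn

class BidirectionalRectification(nn.Module):
    """Let RGB and depth features rectify each other along channel and spatial dims."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        # Channel weights predicted from the concatenated modalities.
        self.channel_mlp = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, 2 * channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial weights predicted from mean/max maps of both modalities.
        self.spatial_conv = nn.Sequential(
            nn.Conv2d(4, 2, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, rgb: torch.Tensor, depth: torch.Tensor):
        fused = torch.cat([rgb, depth], dim=1)

        # Channel-wise rectification weights, one set per modality.
        cw_rgb, cw_depth = self.channel_mlp(fused).chunk(2, dim=1)

        # Spatial-wise rectification weights from pooled cross-modal statistics.
        stats = torch.cat([
            rgb.mean(dim=1, keepdim=True), rgb.amax(dim=1, keepdim=True),
            depth.mean(dim=1, keepdim=True), depth.amax(dim=1, keepdim=True),
        ], dim=1)
        sw_rgb, sw_depth = self.spatial_conv(stats).chunk(2, dim=1)

        # Bidirectional residual correction: each modality is refined by the other.
        rgb_out = rgb + depth * cw_rgb * sw_rgb
        depth_out = depth + rgb * cw_depth * sw_depth
        return rgb_out, depth_out

if __name__ == "__main__":
    block = BidirectionalRectification(channels=64)
    rgb = torch.randn(2, 64, 60, 80)
    depth = torch.randn(2, 64, 60, 80)
    out_rgb, out_depth = block(rgb, depth)
    print(out_rgb.shape, out_depth.shape)  # both torch.Size([2, 64, 60, 80])

In a full pipeline, the rectified features would then feed a multi-stage fusion step such as the criss-cross attention fusion (CAF) described in the abstract; that part is omitted from this sketch.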
doi_str_mv 10.1109/TCDS.2024.3455356
format article
identifier ISSN: 2379-8920
ispartof IEEE Transactions on Cognitive and Developmental Systems, 2024-09, p. 1-13
issn 2379-8920
2379-8939
language eng
recordid cdi_crossref_primary_10_1109_TCDS_2024_3455356
source IEEE Electronic Library (IEL) Journals
subjects Accuracy
Bidirectional Rectification
Cross-Modality Fusion
Data mining
Feature extraction
Image color analysis
RGB-D
Semantic segmentation
Semantics
Transformers
Vision Transformers
title CCANet: Cross-Modality Comprehensive Feature Aggregation Network for Indoor Scene Semantic Segmentation