Pocket convolution Mamba for brain tumor segmentation
Published in: The Journal of Supercomputing, 2025, Vol. 81 (1)
Main Authors:
Format: Article
Language: English
Summary: In the field of brain tumor segmentation, models based on CNNs and transformers have received a great deal of attention. However, CNNs are limited in long-range modeling, and although transformers can model long-distance dependencies, they have quadratic computational complexity. Recently, state space models (SSMs), exemplified by the Mamba model, have achieved linear computational complexity while remaining adept at long-distance interactions. In this paper, we propose pocket convolution Mamba (P-BTS), which utilizes the PocketNet paradigm, SSM, and patch contrastive learning to achieve an efficient segmentation model. Specifically, the encoder follows the PocketNet paradigm, the SSM is applied at the highest level of the encoder to capture rich semantic information, and patch contrastive learning is performed on the outputs of the dual-stream data. Meanwhile, we design a spatial channel attention (SCA) module to enhance control over spatial and channel dimensions, and a feature complement module (FCM) to facilitate the interaction between low-level features and high-level semantic information. We conducted comprehensive experiments on the BraTS2018 and BraTS2019 datasets, and the results show that P-BTS achieves excellent segmentation performance. Our code has been released at https://github.com/zzpr/P-BTS.
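The abstract mentions a spatial channel attention (SCA) module that re-weights both the channel and spatial responses of the feature maps. The authors' actual implementation is available in the linked repository; the PyTorch sketch below is only a minimal illustration of what such a block commonly looks like, and the pooling choices, reduction ratio, and element-wise gating are assumptions rather than details taken from P-BTS.

```python
import torch
import torch.nn as nn


class SpatialChannelAttention(nn.Module):
    """Illustrative spatial-channel attention block (not the P-BTS code).

    The channel branch squeezes the spatial dimensions with global average
    pooling and produces a per-channel gate; the spatial branch collapses the
    channel dimension with a 1x1x1 convolution and produces a per-voxel gate.
    Both gates are applied multiplicatively to the input feature map.
    """

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        # Channel attention: squeeze -> bottleneck 1x1x1 convs -> sigmoid gate
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),
            nn.Conv3d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial attention: collapse channels -> sigmoid gate over voxels
        self.spatial_gate = nn.Sequential(
            nn.Conv3d(channels, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel_gate(x)   # re-weight channels
        x = x * self.spatial_gate(x)   # re-weight spatial positions
        return x


if __name__ == "__main__":
    # Toy 3D feature map: batch of 1, 32 channels, 16^3 volume
    feats = torch.randn(1, 32, 16, 16, 16)
    out = SpatialChannelAttention(32)(feats)
    print(out.shape)  # torch.Size([1, 32, 16, 16, 16])
```

Applying the channel gate before the spatial gate is one common ordering; the paper may fuse the two branches differently, so treat this only as a reading aid for the abstract.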
ISSN: 0920-8542, 1573-0484
DOI: 10.1007/s11227-024-06732-3