
Computation-Efficient Era: A Comprehensive Survey of State Space Models in Medical Image Analysis

Bibliographic Details
Published in: arXiv.org 2024-06
Main Authors: Heidari, Moein; Sina Ghorbani Kolahi; Karimijafarbigloo, Sanaz; Azad, Bobby; Bozorgpour, Afshin; Hatami, Soheila; Azad, Reza; Diba, Ali; Bagci, Ulas; Merhof, Dorit; Hacihaliloglu, Ilker
Format: Article
Language: English
container_title arXiv.org
description Sequence modeling plays a vital role across various domains, with recurrent neural networks historically being the predominant method for these tasks. However, the emergence of transformers has altered this paradigm due to their superior performance. Building on these advances, transformers have joined CNNs as the two leading foundational models for learning visual representations. However, transformers are hindered by the \(\mathcal{O}(N^2)\) complexity of their attention mechanisms, while CNNs lack global receptive fields and dynamic weight allocation. State Space Models (SSMs), and specifically the Mamba model with its selection mechanism and hardware-aware architecture, have recently garnered immense interest in sequential modeling and visual representation learning, challenging the dominance of transformers by supporting very long context lengths and offering substantial efficiency while maintaining linear complexity in the input sequence length. Capitalizing on these advances in computer vision, medical imaging has entered a new epoch with Mamba models. To help researchers navigate this surge, this survey offers an encyclopedic review of Mamba models in medical imaging. Specifically, we start with a comprehensive theoretical review of the foundations of SSMs, including the Mamba architecture and its alternatives for sequence modeling in this context. Next, we provide a structured classification of Mamba models in the medical field and introduce a categorization scheme based on their application, imaging modality, and targeted organ. Finally, we summarize key challenges, discuss future research directions for SSMs in the medical domain, and propose several directions to fulfill the demands of this field. In addition, we have compiled the studies discussed in this paper, along with their open-source implementations, in our GitHub repository.
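The linear-complexity claim in the abstract follows from the SSM recurrence: each step updates a fixed-size hidden state once, so the cost grows linearly with sequence length, unlike attention's pairwise scores. The following is a minimal, illustrative NumPy sketch of a generic discretized SSM scan; it is not Mamba's selective, hardware-aware implementation, and all names (`ssm_scan`, the toy `A`, `B`, `C` matrices) are hypothetical.

```python
import numpy as np

def ssm_scan(x, A, B, C):
    """Run a discretized state space model over a length-N input sequence.

    h_t = A @ h_{t-1} + B * x_t   (state update)
    y_t = C @ h_t                 (readout)

    One fixed-size state update per step => O(N) in sequence length,
    versus self-attention's O(N^2) pairwise score matrix.
    """
    h = np.zeros(A.shape[0])
    ys = []
    for x_t in x:                 # single left-to-right pass
        h = A @ h + B * x_t
        ys.append(C @ h)
    return np.array(ys)

# Toy example: scalar input, 2-D hidden state.
A = np.array([[0.9, 0.0],
              [0.1, 0.8]])       # state transition
B = np.array([1.0, 0.0])         # input projection
C = np.array([0.5, 0.5])         # output projection
y = ssm_scan(np.ones(8), A, B, C)
```

Mamba departs from this fixed-parameter recurrence by making `B`, `C`, and the discretization step input-dependent (the "selection mechanism"), but the linear scan structure sketched here is what yields the efficiency contrast the abstract draws.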
format article
identifier EISSN: 2331-8422
ispartof arXiv.org, 2024-06
issn 2331-8422
language eng
source Publicly Available Content Database (Proquest) (PQ_SDU_P3)
subjects Complexity
Computer architecture
Computer vision
Context
Image analysis
Machine learning
Medical imaging
Modelling
Recurrent neural networks
Representations
State space models
Transformers