
SF2Former: Amyotrophic lateral sclerosis identification from multi-center MRI data using spatial and frequency fusion transformer


Bibliographic Details
Published in: Computerized Medical Imaging and Graphics, 2023-09, Vol. 108, p. 102279 (Article 102279)
Main Authors: Kushol, Rafsanjany, Luk, Collin C., Dey, Avyarthana, Benatar, Michael, Briemberg, Hannah, Dionne, Annie, Dupré, Nicolas, Frayne, Richard, Genge, Angela, Gibson, Summer, Graham, Simon J., Korngut, Lawrence, Seres, Peter, Welsh, Robert C., Wilman, Alan H., Zinman, Lorne, Kalra, Sanjay, Yang, Yee-Hong
Format: Article
Language: English
Description
Summary: Amyotrophic Lateral Sclerosis (ALS) is a complex neurodegenerative disorder characterized by motor neuron degeneration. Significant research has begun to establish brain magnetic resonance imaging (MRI) as a potential biomarker to diagnose and monitor the state of the disease. Deep learning has emerged as a prominent class of machine learning algorithms in computer vision and has shown successful applications in various medical image analysis tasks. However, deep learning methods applied to neuroimaging have not achieved superior performance in classifying ALS patients from healthy controls due to insignificant structural changes correlated with pathological features. Thus, a critical challenge in deep models is to identify discriminative features from limited training data. To address this challenge, this study introduces a framework called SF2Former, which leverages the power of the vision transformer architecture to distinguish ALS subjects from the control group by exploiting the long-range relationships among image features. Additionally, spatial and frequency domain information is combined to enhance the network's performance, as MRI scans are initially captured in the frequency domain and then converted to the spatial domain. The proposed framework is trained using a series of consecutive coronal slices and utilizes pre-trained weights from ImageNet through transfer learning. Finally, a majority voting scheme is employed on the coronal slices of each subject to generate the final classification decision. The proposed architecture is extensively evaluated with multi-modal neuroimaging data (i.e., T1-weighted, R2*, FLAIR) using two well-organized versions of the Canadian ALS Neuroimaging Consortium (CALSNIC) multi-center datasets. The experimental results demonstrate the superiority of the proposed strategy in terms of classification accuracy compared to several popular deep learning-based techniques.
Highlights:
• We propose a novel vision transformer model to classify ALS from healthy controls.
• We analyze two independent and extensive datasets of 120 and 232 MRI scans.
• We leverage multi-center and multi-modal neuroimaging data (T1W, R2*, and FLAIR).
• The proposed method demonstrates state-of-the-art classification accuracy.
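The summary outlines a slice-based pipeline: spatial and frequency representations of coronal MRI slices fed to an ImageNet-pretrained vision transformer, with a per-subject majority vote over slice predictions. The following PyTorch sketch illustrates that general idea only; the abstract does not specify SF2Former's actual fusion mechanism, so stacking the slice with its FFT log-magnitude and phase as input channels is an assumption made here for demonstration, and the function names (spatial_frequency_input, classify_subject) are illustrative rather than from the paper.

```python
# Illustrative sketch only, not the SF2Former implementation.
# Assumptions: torchvision's ViT-B/16 as the ImageNet-pretrained backbone,
# 224x224 coronal slices, and an FFT-based channel stack as a stand-in for
# the paper's spatial-frequency fusion.
import torch
import torch.nn as nn
from torchvision.models import vit_b_16, ViT_B_16_Weights

def spatial_frequency_input(slice_2d: torch.Tensor) -> torch.Tensor:
    """Build a 3-channel input from one coronal slice of shape (H, W):
    spatial intensities, FFT log-magnitude, and FFT phase."""
    spec = torch.fft.fftshift(torch.fft.fft2(slice_2d))
    mag = torch.log1p(spec.abs())
    phase = spec.angle()
    chans = torch.stack([slice_2d, mag, phase])  # (3, H, W)
    # min-max normalize each channel for the ImageNet-pretrained backbone
    lo = chans.amin(dim=(1, 2), keepdim=True)
    hi = chans.amax(dim=(1, 2), keepdim=True)
    return (chans - lo) / (hi - lo + 1e-8)

# ImageNet-pretrained ViT backbone with a 2-class head (ALS vs. control)
model = vit_b_16(weights=ViT_B_16_Weights.IMAGENET1K_V1)
model.heads = nn.Linear(model.hidden_dim, 2)

@torch.no_grad()
def classify_subject(coronal_slices: torch.Tensor) -> int:
    """coronal_slices: (N, 224, 224) consecutive slices of one subject.
    Returns the majority-vote class over per-slice predictions."""
    model.eval()
    batch = torch.stack([spatial_frequency_input(s) for s in coronal_slices])
    preds = model(batch).argmax(dim=1)   # per-slice class labels
    return int(preds.mode().values)      # majority vote across slices
```

In practice the backbone would be fine-tuned on the MRI slices (transfer learning from ImageNet, as the abstract describes) before the voting step is applied at inference time.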
ISSN: 0895-6111, 1879-0771
DOI: 10.1016/j.compmedimag.2023.102279