
Attention module improves both performance and interpretability of four‐dimensional functional magnetic resonance imaging decoding neural network



Bibliographic Details
Published in: Human Brain Mapping, 2022-06, Vol. 43(8), p. 2683–2692
Main Authors: Jiang, Zhoufan, Wang, Yanming, Shi, ChenWei, Wu, Yueyang, Hu, Rongjie, Chen, Shishuo, Hu, Sheng, Wang, Xiaoxiao, Qiu, Bensheng
Format: Article
Language: English
Description
Summary: Decoding brain cognitive states from neuroimaging signals is an important topic in neuroscience. In recent years, deep neural networks (DNNs) have been recruited for multiple brain state decoding tasks and have achieved good performance. However, the open question of how to interpret the DNN black box remains unanswered. Capitalizing on advances in machine learning, we integrated attention modules into brain decoders to facilitate an in‐depth interpretation of DNN channels. A four‐dimensional (4D) convolution operation was also included to extract the temporo‐spatial interactions within the fMRI signal. The experiments showed that the proposed model achieves very high accuracy (97.4%) and outperforms previous studies on seven different task benchmarks from the Human Connectome Project (HCP) dataset. Visualization analysis further illustrated the hierarchical emergence of task‐specific masks with depth. Finally, the model was retrained to regress individual traits within the HCP and to classify viewed images from the BOLD5000 dataset, respectively; transfer learning also achieved good performance. Further visualization showed that, after transfer learning, low‐level attention masks remained similar to the source domain, whereas high‐level attention masks changed adaptively. In conclusion, the proposed 4D model with attention modules performed well and facilitated interpretation of the DNN, which is helpful for subsequent research. The 4DResNet with attention module obtains very high accuracy (97.4%) on HCP brain decoding, and the attention module enables in‐depth interpretability of the fMRI decoding neural network.
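The record describes channel attention modules attached to the decoder but does not give their exact form. As an illustration only, the sketch below implements a minimal squeeze‐and‐excitation‐style channel attention over a 4D (time × depth × height × width) feature map in NumPy; all names, shapes, and the bottleneck ratio are assumptions, not the authors' implementation.

```python
import numpy as np

def channel_attention(x, w1, b1, w2, b2):
    """Squeeze-and-excitation-style channel attention (illustrative sketch).

    x : feature map of shape (C, T, D, H, W) — channels plus 4D fMRI axes.
    w1, b1 : bottleneck layer weights, shapes (C//r, C) and (C//r,).
    w2, b2 : expansion layer weights, shapes (C, C//r) and (C,).
    Returns the reweighted features and the per-channel attention mask,
    which is the quantity one would visualize for interpretability.
    """
    c = x.shape[0]
    # Squeeze: global average pool over all temporo-spatial axes.
    s = x.reshape(c, -1).mean(axis=1)                  # (C,)
    # Excite: small bottleneck MLP with a sigmoid gate.
    h = np.maximum(0.0, w1 @ s + b1)                   # ReLU
    a = 1.0 / (1.0 + np.exp(-(w2 @ h + b2)))           # sigmoid, in (0, 1)
    # Rescale each channel by its attention weight.
    return x * a[:, None, None, None, None], a

# Toy usage with random weights (shapes are hypothetical).
rng = np.random.default_rng(0)
C, r = 8, 2
x = rng.standard_normal((C, 4, 5, 5, 5))
w1 = rng.standard_normal((C // r, C)); b1 = np.zeros(C // r)
w2 = rng.standard_normal((C, C // r)); b2 = np.zeros(C)
y, mask = channel_attention(x, w1, b1, w2, b2)
```

Inspecting `mask` per layer is what would reveal the hierarchical, task‐specific channel weighting the abstract reports.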
ISSN: 1065-9471, 1097-0193
DOI: 10.1002/hbm.25813