Wills Aligner: Multi-Subject Collaborative Brain Visual Decoding
Decoding visual information from human brain activity has seen remarkable advancements in recent research. However, the diversity in cortical parcellation and fMRI patterns across individuals has prompted the development of deep learning models tailored to each subject, and this personalization limits the broader applicability of brain visual decoding in real-world scenarios. To address this issue, we introduce Wills Aligner, a novel approach designed to achieve multi-subject collaborative brain visual decoding. Wills Aligner begins by aligning the fMRI data from different subjects at the anatomical level. It then employs carefully designed mixture-of-brain-expert adapters and a meta-learning strategy to account for individual differences in fMRI patterns. Additionally, Wills Aligner leverages the semantic relations among visual stimuli to guide the learning of inter-subject commonality, enabling visual decoding for each subject to draw insights from other subjects' data. We rigorously evaluate Wills Aligner across various visual decoding tasks, including classification, cross-modal retrieval, and image reconstruction. The experimental results demonstrate that Wills Aligner achieves promising performance.
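The "mixture-of-brain-expert adapters" mentioned in the abstract suggest a per-subject routing pattern that is easy to sketch. The snippet below is a minimal, hypothetical PyTorch reading of that idea, not the authors' released implementation: the expert and gate designs, the dimensions, and all names (`BrainExpertAdapter`, `MixtureOfBrainExperts`) are illustrative assumptions. Anatomically aligned fMRI features pass through a pool of lightweight bottleneck adapters, and a learned per-subject gate mixes the experts, so subjects can share parameters where their response patterns overlap and specialize where they differ.

```python
# Hypothetical sketch of a mixture-of-brain-expert adapter.
# All names, dimensions, and the routing scheme are assumptions,
# not the paper's released code.
import torch
import torch.nn as nn


class BrainExpertAdapter(nn.Module):
    """One lightweight residual bottleneck adapter ("brain expert")."""

    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual bottleneck: x + up(act(down(x)))
        return x + self.up(self.act(self.down(x)))


class MixtureOfBrainExperts(nn.Module):
    """Routes each subject's fMRI features through a weighted mix of experts."""

    def __init__(self, dim: int, num_experts: int, num_subjects: int):
        super().__init__()
        self.experts = nn.ModuleList(
            BrainExpertAdapter(dim) for _ in range(num_experts)
        )
        # One gating vector per subject; softmax turns it into mixing weights.
        self.gate = nn.Embedding(num_subjects, num_experts)

    def forward(self, x: torch.Tensor, subject_id: torch.Tensor) -> torch.Tensor:
        # x: (batch, dim) anatomically aligned fMRI features
        # subject_id: (batch,) integer subject indices
        weights = torch.softmax(self.gate(subject_id), dim=-1)      # (batch, E)
        outs = torch.stack([e(x) for e in self.experts], dim=1)     # (batch, E, dim)
        return (weights.unsqueeze(-1) * outs).sum(dim=1)            # (batch, dim)


# Usage: 4 subjects share a pool of 8 experts over 1024-d aligned features.
moe = MixtureOfBrainExperts(dim=1024, num_experts=8, num_subjects=4)
feats = torch.randn(16, 1024)
subjects = torch.randint(0, 4, (16,))
out = moe(feats, subjects)  # (16, 1024)
```

A soft per-subject softmax gate keeps the routing differentiable end to end; a harder top-k routing over the expert pool would be an equally plausible reading of the abstract.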
| Published in | arXiv.org, 2024-12 |
|---|---|
| Main Authors | Bao, Guangyin; Zhang, Qi; Gong, Zixuan; Zhou, Jialei; Fan, Wei; Yi, Kun; Usman Naseem; Hu, Liang; Miao, Duoqian |
| Format | Article |
| Language | English |
| EISSN | 2331-8422 |
| Publisher | Ithaca: Cornell University Library, arXiv.org |
| Source | Publicly Available Content Database (ProQuest) |
| Subjects | Brain; Cognition; Cognition & reasoning; Cognitive tasks; Commonality; Learning; Performance evaluation; Representations; Robustness; Visual tasks |