Decoding face categories in diagnostic subregions of primary visual cortex
Published in: The European Journal of Neuroscience, April 2013, Vol. 37(7), pp. 1130-1139
Format: Article
Language: English
Summary: Higher visual areas in the occipitotemporal cortex contain discrete regions for face processing, but it remains unclear if V1 is modulated by top-down influences during face discrimination, and if this is widespread throughout V1 or localized to retinotopic regions processing task-relevant facial features. Employing functional magnetic resonance imaging (fMRI), we mapped the cortical representation of two feature locations that modulate higher visual areas during categorical judgements - the eyes and mouth. Subjects were presented with happy and fearful faces, and we measured the fMRI signal of V1 regions processing the eyes and mouth whilst subjects engaged in gender and expression categorization tasks. In a univariate analysis, we used a region-of-interest-based general linear model approach to reveal changes in activation within these regions as a function of task. We then trained a linear pattern classifier to classify facial expression or gender on the basis of V1 data from 'eye' and 'mouth' regions, and from the remaining non-diagnostic V1 region. Using multivariate techniques, we show that V1 activity discriminates face categories both in local 'diagnostic' and widespread 'non-diagnostic' cortical subregions. This indicates that V1 might receive the processed outcome of complex facial feature analysis from other cortical (i.e. fusiform face area, occipital face area) or subcortical areas (amygdala).
Using fMRI, we mapped the cortical representation of the eyes and mouth in V1. We measured the BOLD signal whilst subjects engaged in gender and expression categorization tasks on identical faces. With univariate and multivariate analyses, we reveal activity changes within 'eye' and 'mouth' regions, as well as the remaining V1, as a function of task. Thus, a high-level constraint such as the face categorization task influences V1, possibly via top-down activity from face areas or subcortical input.
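The multivariate analysis described above can be illustrated with a minimal, hypothetical sketch: a linear classifier is trained on trial-wise voxel response patterns from a V1 region of interest and evaluated with cross-validation, with above-chance accuracy taken as evidence that the region carries category information. The record does not specify the classifier implementation, ROI sizes, or trial counts, so scikit-learn's LinearSVC and all data below are illustrative stand-ins on synthetic patterns, not the authors' pipeline.

```python
# Hypothetical MVPA-style decoding sketch: classify face category (fearful vs
# happy) from simulated V1 voxel patterns. All numbers are assumptions.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)

n_trials_per_class = 40   # assumed trials per expression condition
n_voxels = 120            # assumed ROI size (e.g. an 'eye' or 'mouth' subregion)

# Each category has a weak mean voxel pattern; trials add Gaussian noise.
pattern_fearful = rng.normal(0.0, 1.0, n_voxels)
pattern_happy = rng.normal(0.0, 1.0, n_voxels)
X = np.vstack([
    pattern_fearful + rng.normal(0.0, 2.0, (n_trials_per_class, n_voxels)),
    pattern_happy + rng.normal(0.0, 2.0, (n_trials_per_class, n_voxels)),
])
y = np.array([0] * n_trials_per_class + [1] * n_trials_per_class)

# Linear classifier with stratified cross-validation; mean accuracy above
# chance (0.5) indicates the voxel patterns discriminate the two categories.
clf = LinearSVC(C=1.0)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)
mean_acc = scores.mean()
print(f"mean cross-validated accuracy: {mean_acc:.2f}")
```

In the actual study, the same procedure would be repeated per subject for the 'eye', 'mouth', and remaining non-diagnostic V1 regions, and the resulting accuracies compared against chance.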
ISSN: 0953-816X, 1460-9568
DOI: 10.1111/ejn.12129