Visual Feature-Tolerance in the Reading Network

Bibliographic Details
Published in: Neuron (Cambridge, Mass.), 2011-09, Vol. 71 (5), p. 941-953
Main Authors: Rauschecker, Andreas M., Bowen, Reno F., Perry, Lee M., Kevan, Alison M., Dougherty, Robert F., Wandell, Brian A.
Format: Article
Language: English
Description
Summary: A century of neurology and neuroscience shows that seeing words depends on ventral occipital-temporal (VOT) circuitry. Typically, reading is learned using high-contrast line-contour words. We explored whether a specific VOT region, the visual word form area (VWFA), learns to see only these words or recognizes words independent of the specific shape-defining visual features. Word forms were created using atypical features (motion-dots, luminance-dots) whose statistical properties control word visibility. We measured fMRI responses as word-form visibility varied, and we used TMS to interfere with neural processing in specific cortical circuits while subjects performed a lexical decision task. For all features, VWFA responses increased with word visibility and correlated with performance. TMS applied to the motion-specialized area hMT+ disrupted reading performance for motion-dots, but not for line-contours or luminance-dots. A quantitative model describes feature convergence in the VWFA and relates VWFA responses to behavioral performance. These findings suggest how visual feature-tolerance in the reading network arises through signal convergence from feature-specialized cortical areas.
► Word shapes defined by motion features elicit robust BOLD responses in the VWFA
► TMS to hMT+ disrupts visibility of motion-defined but not line-contour-defined words
► A quantitative model relates reading performance to VWFA responses
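The summary names a quantitative model but does not state its functional form. As an illustration only, a minimal feature-convergence and linking model consistent with the description might be written as follows; every symbol below is an assumption introduced for exposition, not the authors' notation.

% Illustrative sketch only; symbols are assumptions, not the authors' model.
% s_i(v): signal from the i-th feature-specialized area (e.g., hMT+ for
%         motion-dots) at word visibility v; w_i: convergence weight onto VWFA.
\[
  R_{\mathrm{VWFA}}(v) \;=\; \sum_i w_i\, s_i(v)
\]
% Behavioral performance (lexical decision accuracy) as a monotone function of
% the VWFA response, here a logistic linking function with free parameters
% \alpha and \beta.
\[
  P(\mathrm{correct} \mid v) \;=\; \frac{1}{1 + e^{-(\alpha\, R_{\mathrm{VWFA}}(v) + \beta)}}
\]

Written this way, the sketch captures the two properties stated in the summary: VWFA responses increase with visibility through the converging feature inputs, and performance rises monotonically with the VWFA response. Removing one input (as TMS over hMT+ would, for motion-dots) degrades only the corresponding feature pathway.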
ISSN: 0896-6273
EISSN: 1097-4199
DOI: 10.1016/j.neuron.2011.06.036