
Depth estimation from retinal disparity requires eye and head orientation signals

Bibliographic Details
Published in: Journal of vision (Charlottesville, Va.), 2008-12, Vol.8 (16), p.3.1-3
Main Authors: Blohm, Gunnar, Khan, Aarlenne Z, Ren, Lei, Schreiber, Kai M, Crawford, J Douglas
Format: Article
Language:English
Description
Summary: To reach for an object, one needs to know its egocentric distance (absolute depth). It remains an unresolved issue which signals the brain requires to compute this absolute depth information. We devised a geometric model of binocular 3D eye orientation and investigated the signals necessary to uniquely determine the depth of a non-foveated object, accounting for naturalistic variations of eye and head orientation. Our model shows that, in the presence of noisy internal estimates of the ocular vergence angle, horizontal and vertical retinal disparities alone are insufficient to calculate the unique depth of a point-like target. Instead, the brain must account for the 3D orientations of the eye and head. We tested the model in a behavioral experiment involving reaches to targets in depth. Our analysis showed that a target with the same retinal disparity produced different estimates of reach depth that varied consistently with different eye and head orientations. The experimental results showed that subjects accurately account for this extraretinal information when they reach. In summary, when estimating the distance of point-like targets, the brain combines all available signals about the object's location and the body's configuration to provide an accurate estimate of the object's distance.
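As a rough, hypothetical illustration of the binocular geometry the summary alludes to (not the authors' 3D model), the sketch below computes the absolute distance of a fixated point from the vergence angle alone and shows how a small error in the internal vergence estimate produces depth errors that grow with viewing distance; the 6.5 cm interocular distance and the 0.25-degree vergence bias are assumed values chosen only for illustration.

```python
import numpy as np

# Illustrative sketch (not the paper's model): classic binocular geometry for a
# fixated point straight ahead, where absolute distance follows from the
# interocular separation and the ocular vergence angle.
IPD = 0.065  # assumed interocular distance in meters


def depth_from_vergence(vergence_rad):
    """Distance to a symmetrically fixated target given the vergence angle."""
    return (IPD / 2.0) / np.tan(vergence_rad / 2.0)


# Sensitivity to a noisy internal vergence estimate: the same small angular
# bias produces increasingly large depth errors at larger viewing distances.
true_depth = np.array([0.3, 0.5, 1.0, 2.0])                # meters
true_vergence = 2.0 * np.arctan((IPD / 2.0) / true_depth)  # exact vergence
noisy_vergence = true_vergence + np.deg2rad(0.25)          # assumed 0.25 deg bias

estimated_depth = depth_from_vergence(noisy_vergence)
print(np.round(estimated_depth, 3))  # recovered distances under the biased estimate
```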
ISSN: 1534-7362
DOI: 10.1167/8.16.3