
LATEST: A Model of Saccadic Decisions in Space and Time


Bibliographic Details
Published in: Psychological Review, 2017-04, Vol. 124(3), pp. 267-300
Main Authors: Tatler, Benjamin W., Brockmole, James R., Carpenter, R. H. S.
Format: Article
Language: English
Description
Summary: Many of our actions require visual information, and for this it is important to direct the eyes to the right place at the right time. Two or three times every second, we must decide both when and where to direct our gaze. Understanding these decisions can reveal the moment-to-moment information priorities of the visual system and the strategies for information sampling employed by the brain to serve ongoing behavior. Most theoretical frameworks and models of gaze control assume that the spatial and temporal aspects of fixation point selection depend on different mechanisms. We present a single model that can simultaneously account for both when and where we look. Underpinning this model is the theoretical assertion that each decision to move the eyes is an evaluation of the relative benefit expected from moving the eyes to a new location compared with that expected by continuing to fixate the current target. The eyes move when the evidence that favors moving to a new location outweighs that favoring staying at the present location. Our model provides not only an account of when the eyes move, but also what will be fixated. That is, an analysis of saccade timing alone enables us to predict where people look in a scene. Indeed, our model accounts for fixation selection as well as (and often better than) current computational models of fixation selection in scene viewing.
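The decision rule described in the summary, in which a saccade is triggered when the accumulating evidence favoring a move to a new location outweighs the evidence favoring continued fixation, can be illustrated with a simple rise-to-threshold simulation in the spirit of Carpenter's LATER model, on which LATEST builds. The sketch below is an assumption-laden illustration rather than the authors' implementation: the function name, the parameters (theta, rate_gain, rate_sd), and the specific way the "move minus stay" evidence sets the mean rate of rise are all hypothetical.

```python
import numpy as np

def simulate_saccade_latencies(move_evidence, stay_evidence,
                               theta=1.0, rate_gain=1.0,
                               rate_sd=0.3, n_trials=10_000,
                               rng=None):
    """Illustrative LATER-style rise-to-threshold simulation.

    A decision signal climbs linearly from 0 toward a threshold `theta`;
    its mean rate of rise is assumed proportional to the evidence
    favoring a saccade ("move") minus the evidence favoring continued
    fixation ("stay"). The rate varies from trial to trial (Gaussian
    noise), producing a distribution of saccade latencies. All parameter
    names and values here are assumptions for illustration, not the
    published LATEST parameterization.
    """
    rng = np.random.default_rng() if rng is None else rng
    mean_rate = rate_gain * (move_evidence - stay_evidence)
    rates = rng.normal(mean_rate, rate_sd, size=n_trials)
    rates = rates[rates > 0]   # non-positive rates never reach threshold
    return theta / rates       # latency = distance to threshold / rate

# Example: stronger "move" evidence yields shorter, less variable latencies.
fast = simulate_saccade_latencies(move_evidence=1.5, stay_evidence=0.5)
slow = simulate_saccade_latencies(move_evidence=0.9, stay_evidence=0.5)
print(f"median latency, strong move evidence: {np.median(fast):.3f}")
print(f"median latency, weak move evidence:   {np.median(slow):.3f}")
```

Under these assumptions, locations offering greater expected benefit relative to the current fixation produce faster saccades on average, which is how an analysis of saccade timing alone can carry information about where the eyes will go next.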
ISSN: 0033-295X, 1939-1471
DOI: 10.1037/rev0000054