Differential Visual and Auditory Effects in a Crossmodal Induced Roelofs Illusion
Published in: Journal of Experimental Psychology: Human Perception and Performance, 2022-03, Vol. 48(3), pp. 232-245
Main Authors: , ,
Format: Article
Language: English
Summary: For vision and audition to accurately inform judgments about an object's location, the brain must reconcile the variable anatomical correspondence of the eyes and ears, and the different frames of reference in which stimuli are initially encoded. To do so, it has been suggested that multisensory cues are eventually represented within a common frame of reference. If this is the case, then they should be similarly susceptible to distortion of this reference frame. Following this reasoning, we asked participants to locate visual and auditory probes in a crossmodal variant of the induced Roelofs effect, a visual illusion in which a large, off-center visual frame biases the observer's perceived straight-ahead. Auditory probes were mislocalized in the same direction and with a similar magnitude as visual probes due to the off-center visual frame. However, an off-center auditory frame did not elicit a significant mislocalization of visual probes, indicating that auditory context does not elicit an induced Roelofs effect. These results suggest that the locations of auditory and visual stimuli are represented within a common frame of reference, but that the brain does not rely on stationary auditory context, as it does visual, to maintain this reference frame.
Public Significance Statement: Human observers maintain a map of their own location within the world around them, and this egocentric reference frame is used to encode the locations of nearby objects. This study demonstrates that observers use visual, but not auditory, cues to maintain this egocentric reference frame as they attempt to determine the locations of nearby sights and sounds.
ISSN: 0096-1523; 1939-1277
DOI: 10.1037/xhp0000983