
A shared neural code for the physics of actions and object events

Bibliographic Details
Published in: Nature Communications, 2023-06, Vol. 14(1), Article 3316
Main Authors: Karakose-Akbiyik, Seda; Caramazza, Alfonso; Wurm, Moritz F.
Format: Article
Language:English
Description
Summary: Observing others’ actions recruits frontoparietal and posterior temporal brain regions – also called the action observation network. It is typically assumed that these regions support recognizing actions of animate entities (e.g., person jumping over a box). However, objects can also participate in events with rich meaning and structure (e.g., ball bouncing over a box). So far, it has not been clarified which brain regions encode information specific to goal-directed actions or more general information that also defines object events. Here, we show a shared neural code for visually presented actions and object events throughout the action observation network. We argue that this neural representation captures the structure and physics of events regardless of animacy. We find that lateral occipitotemporal cortex encodes information about events that is also invariant to stimulus modality. Our results shed light on the representational profiles of posterior temporal and frontoparietal cortices, and their roles in encoding event information.

The authors examine how the brain processes actions performed by humans and events involving objects. Their findings suggest that a common neural code is used in the brain’s action observation network to represent event information, regardless of animacy.
ISSN: 2041-1723
DOI: 10.1038/s41467-023-39062-8