Visual Noise Mask for Human Point-Light Displays: A Coding-Free Approach
Published in: NeuroSci, 2025-01, Vol. 6 (1), p. 2
Main Authors:
Format: Article
Language: English
Summary: Human point-light displays consist of luminous dots representing human articulations, thus depicting actions without pictorial information. These stimuli are widely used in action recognition experiments. Because humans excel in decoding human motion, point-light displays (PLDs) are often masked with additional moving dots (noise masks), thereby challenging stimulus recognition. These noise masks are typically found within proprietary programming software, entail file format restrictions, and demand extensive programming skills. To address these limitations, we present the first user-friendly step-by-step guide to develop visual noise to mask PLDs using free, open-source software that offers compatibility with various file formats, features a graphical interface, and facilitates the manipulation of both 2D and 3D videos. Further, to validate our approach, we tested two generated masks in a pilot experiment with 12 subjects and demonstrated that they effectively jeopardised human agent recognition and, therefore, action visibility. In sum, the main advantages of the presented methodology are its cost-effectiveness and ease of use, making it appealing to novices in programming. This advancement holds the potential to stimulate young researchers' use of PLDs, fostering further exploration and understanding of human motion perception.
ISSN: 2673-4087
DOI: 10.3390/neurosci6010002
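
The summary above describes masking a PLD with additional moving dots whose motion resembles that of the target's joints. The paper's own workflow is coding-free and GUI-based, so the sketch below is only a minimal Python/NumPy illustration of one common scrambling approach, not the authors' method: each noise dot reuses the frame-to-frame motion of a randomly chosen PLD joint but starts from a random screen position. The array layout, variable names, dot count, and display size are hypothetical assumptions, not values taken from the paper.

```python
# Illustrative sketch (assumptions): a 2-D PLD stored as a NumPy array of shape
# (n_frames, n_joints, 2) with x/y pixel coordinates per frame. Names such as
# `scrambled_noise_mask`, the dot count, and the display size are hypothetical.
import numpy as np

rng = np.random.default_rng(seed=0)

def scrambled_noise_mask(pld: np.ndarray, n_noise_dots: int,
                         display_size: tuple[float, float]) -> np.ndarray:
    """Return noise-dot trajectories with shape (n_frames, n_noise_dots, 2).

    Each noise dot copies the local motion of a randomly chosen PLD joint but
    starts at a random position, so its kinematics resemble the target dots
    while carrying no global body structure.
    """
    n_frames, n_joints, _ = pld.shape
    # Choose which joint each noise dot borrows its motion from.
    source = rng.integers(0, n_joints, size=n_noise_dots)
    # Motion of each chosen joint relative to its first-frame position.
    relative_motion = pld[:, source, :] - pld[0, source, :]
    # Random starting positions scattered over the display area.
    starts = rng.uniform(low=(0.0, 0.0), high=display_size,
                         size=(n_noise_dots, 2))
    return starts + relative_motion

# Usage (hypothetical file and parameters): overlay noise on the target dots
# before rendering, so target and mask dots are visually indistinguishable.
# pld = np.load("walker.npy")
# noise = scrambled_noise_mask(pld, n_noise_dots=24, display_size=(800.0, 600.0))
# stimulus = np.concatenate([pld, noise], axis=1)
```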