
Non-Visual Interfaces for Visual Learners: Multisensory Learning of Graphic Primitives

Bibliographic Details
Published in: IEEE Access, 2024, Vol. 12, pp. 189926-189940
Main Authors: Doore, Stacy A., Brown, Justin R., Imai, Saki, Dimmel, Justin K., Giudice, Nicholas A.
Format: Article
Language: English
Description
Summary: Multimodal learning systems have been found to be effective in studies investigating the cognitive theory of multimedia learning. Yet this research is rarely put into practice in Science, Technology, Engineering, and Math (STEM) learning environments, which are dominated by visual graphics. Introducing multimodal learning systems into STEM settings and allowing students to access dual-channel cues beyond visual perception may help more students process information in their preferred modality. The purpose of this study was to investigate the usability, effectiveness, and design of multimodal interfaces for enhancing access to graphical representations. We used existing theories of multisensory information processing to study how sighted participants could learn and interpret spatial primitives and graphical concepts presented via three non-visual conditions: natural language (NL) descriptions, haptic renderings, and an NL-Haptic combination. The results showed that access to haptic-only renderings produced the least accurate responses, whereas NL descriptions with and without haptics led to similar performance by participants when learning graphical content without vision. Performance was also affected by the complexity of the graphical content, with the highest accuracy observed for closed forms, compared to paired line segments and line/polygon intersections. We argue that universally designed, multimodal learning environments can transcend traditional, visual diagrams by utilizing non-visual channels and commercial hardware to support learners with different sensory abilities, preferences, and processing needs. Findings contribute to extending theoretical insights into non-visual information processing to better understand multisensory learning in sighted individuals.
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2024.3513712