
Investigation of multi-modal interface features for adaptive automation of a human–robot system

Bibliographic Details
Published in: International Journal of Human-Computer Studies, 2006-06, Vol. 64 (6), p. 527-540
Main Authors: Kaber, David B.; Wright, Melanie C.; Sheik-Nainar, Mohamed A.
Format: Article
Language: English
Summary: The objective of this research was to assess the effectiveness of using a multi-modal interface for adaptive automation (AA) of human control of a simulated telerobotic (remote-control, semi-autonomous robotic) system. We investigated the use of one or more sensory channels to cue dynamic control allocations to a human operator or computer, as part of AA, and to support operator system/situation awareness (SA) and performance. It was expected that complex auditory and visual cueing through system interfaces might address previously observed SA decrements due to unannounced or unexpected automation-state changes as part of adaptive system control. AA of the telerobot was based on a predetermined schedule of manual- and supervisory-control allocations occurring when operator workload changes were expected due to the stages of a teleoperation task. The task involved simulated underwater mine disposal, and 32 participants were exposed to four types of cueing of task-phase and automation-state changes: icons, earcons, bi-modal (combined) cues, and no cues at all. Fully automated control of the telerobot combined with human monitoring produced superior performance compared to completely manual system control and AA. Cueing, in general, led to better performance than no cueing, but did not appear to completely eliminate temporary SA deficits due to changes in control and associated operator reorienting. Bi-modal cueing of dynamic automation-state changes was more supportive of SA than uni-modal (single sensory channel) cueing. The use of icons and earcons appeared to produce no additional perceived workload in comparison to no cueing. The results of this research may serve as an applicable guide for the design of human–computer interfaces for real telerobotic systems, including those used for military tactical operations, which support operator achievement and maintenance of SA and promote performance in using AA.
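
To make the cueing scheme described in the summary concrete, the following is a minimal illustrative sketch in Python of a schedule-driven adaptive-automation loop that announces automation-state changes through visual and/or auditory channels. The phase names, the schedule, and the class and function names are hypothetical stand-ins and do not reproduce the authors' implementation; the actual study used a simulated underwater mine-disposal task with predetermined allocation points.

# Illustrative sketch (not from the paper): a schedule-driven adaptive-automation
# loop that cues automation-state changes via visual and/or auditory channels.
# Phase names, the schedule, and the cueing conditions below are hypothetical
# stand-ins for the mine-disposal task phases described in the abstract.

from dataclasses import dataclass
from enum import Enum


class ControlMode(Enum):
    MANUAL = "manual"            # operator teleoperates directly
    SUPERVISORY = "supervisory"  # automation drives, operator monitors


class CueCondition(Enum):
    NONE = "none"        # no cueing of state changes
    ICON = "icon"        # visual cue only
    EARCON = "earcon"    # auditory cue only
    BIMODAL = "bimodal"  # combined visual + auditory cue


# Hypothetical predetermined schedule: task phase -> control allocation.
SCHEDULE = {
    "transit_to_site": ControlMode.SUPERVISORY,
    "locate_mine": ControlMode.MANUAL,
    "attach_charge": ControlMode.MANUAL,
    "withdraw": ControlMode.SUPERVISORY,
}


@dataclass
class Telerobot:
    condition: CueCondition
    mode: ControlMode = ControlMode.MANUAL

    def cue(self, new_mode: ControlMode) -> None:
        """Announce an automation-state change through the configured channel(s)."""
        if self.condition in (CueCondition.ICON, CueCondition.BIMODAL):
            print(f"[display] icon: switching to {new_mode.value} control")
        if self.condition in (CueCondition.EARCON, CueCondition.BIMODAL):
            print(f"[audio] earcon: switching to {new_mode.value} control")

    def enter_phase(self, phase: str) -> None:
        """Apply the scheduled allocation for a phase, cueing any mode change."""
        new_mode = SCHEDULE[phase]
        if new_mode is not self.mode:
            self.cue(new_mode)
            self.mode = new_mode
        print(f"phase={phase}, mode={self.mode.value}")


if __name__ == "__main__":
    robot = Telerobot(condition=CueCondition.BIMODAL)
    for phase in SCHEDULE:
        robot.enter_phase(phase)

In this sketch the bi-modal condition simply issues both the visual and the auditory cue at each scheduled control handover, which mirrors the combined cueing condition that the study found most supportive of SA.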
ISSN: 1071-5819, 1095-9300
DOI: 10.1016/j.ijhcs.2005.11.003