Stay back, clever thing! Linking situational control and human uniqueness concerns to the aversion against autonomous technology

Bibliographic Details
Published in: Computers in Human Behavior, 2019-06, Vol. 95, p. 73-82
Main Authors: Stein, Jan-Philipp; Liebold, Benny; Ohler, Peter
Format: Article
Language:English
Description

Summary: As artificial intelligence advances towards unprecedented levels of competence, people's acceptance of autonomous technology has become a hot topic among psychology and HCI scholars. Previous studies suggest that threat perceptions—regarding observers' immediate physical safety (proximal) as well as their more abstract concepts of human uniqueness (distal)—impede the positive reception of self-controlled digital systems. Developing a Model of Autonomous Technology Threat, we propose both of these threat forms as common antecedents of users' general threat experience, which ultimately predicts reduced technology acceptance. In a laboratory study, 125 participants were invited to interact with a virtual reality agent, assuming it to be the embodiment of a fully autonomous personality assessment system. In a path analysis, we found correlational support for the proposed model, as both situational control and human uniqueness attitudes predicted threat experience, which in turn connected to stronger aversion against the presented system. Other potential state and trait influences are discussed.

Highlights:
• Autonomous technology can seem threatening to users in various ways.
• Proximal and distal threat cues are juxtaposed in a newly developed model.
• Participants interact with the VR embodiment of an allegedly autonomous system.
• Both situational and attitudinal factors increase the threat evoked by the AI system.
• Ultimately, situational factors emerge as more relevant than overarching attitudes.
ISSN: 0747-5632; 1873-7692
DOI: 10.1016/j.chb.2019.01.021