Algorithm exploitation: Humans are keen to exploit benevolent AI


Bibliographic Details
Published in: iScience, June 2021, Vol. 24(6), p. 102679, Article 102679
Main Authors: Karpus, Jurgis, Krüger, Adrian, Verba, Julia Tovar, Bahrami, Bahador, Deroy, Ophelia
Format: Article
Language: English
Description
Summary: We cooperate with other people despite the risk of being exploited or hurt. If future artificial intelligence (AI) systems are benevolent and cooperative toward us, what will we do in return? Here we show that our cooperative dispositions are weaker when we interact with AI. In nine experiments, humans interacted with either another human or an AI agent in four classic social dilemma economic games and a newly designed game of Reciprocity that we introduce here. Contrary to the hypothesis that people mistrust algorithms, participants trusted their AI partners to be as cooperative as humans. However, they did not return the AI's benevolence as much, and they exploited the AI more than they exploited humans. These findings warn that future self-driving cars or co-working robots, whose success depends on humans returning their cooperativeness, run the risk of being exploited. This vulnerability calls not just for smarter machines but also for better human-centered policies.

Highlights:
• People predict that AI agents will be as benevolent (cooperative) as humans
• People cooperate less with benevolent AI agents than with benevolent humans
• Reduced cooperation only occurs if it serves people's selfish interests
• People feel guilty when they exploit humans but not when they exploit AI agents

Subjects: Computer science; Social sciences; Sociology
ISSN: 2589-0042
DOI: 10.1016/j.isci.2021.102679