Overtrust in AI Recommendations About Whether or Not to Kill: Evidence from Two Human-Robot Interaction Studies.
- Author
- Holbrook, Colin, Holman, Daniel, Clingo, Joshua, and Wagner, Alan
- Subjects
- Anthropomorphism, Artificial intelligence, Decision-making, Human–computer interaction, Human–robot interaction, Social robotics, Threat-detection, Humans, Robotics, Trust, Male, Female, Artificial Intelligence, Adult, Decision Making, Young Adult, Uncertainty
- Abstract
- This research explores prospective determinants of trust in the recommendations of artificial agents regarding decisions to kill, using a novel visual challenge paradigm simulating threat-identification (enemy combatants vs. civilians) under uncertainty. In Experiment 1, we compared trust in the advice of a physically embodied versus screen-mediated anthropomorphic robot, observing no effects of embodiment; in Experiment 2, we manipulated the relative anthropomorphism of virtual robots, observing modestly greater trust in the most anthropomorphic agent relative to the least. Across studies, when any version of the agent randomly disagreed, participants reversed their threat-identifications and decisions to kill in the majority of cases, substantially degrading their initial performance. Participants' subjective confidence in their decisions tracked whether the agent (dis)agreed, while both decision-reversals and confidence were moderated by appraisals of the agent's intelligence. The overall findings indicate a strong propensity to overtrust unreliable AI in life-or-death decisions made under uncertainty.
- Published
- 2024