1. Judging One’s Own or Another Person’s Responsibility in Interactions With Automation
- Author
- Joachim Meyer and Nir Douer
- Subjects
Warning system, Computer science, Emotions, Intelligent decision support system, Human Factors and Ergonomics, Automation, Behavioral Neuroscience, Social Perception, Human–computer interaction, Humans, Social Behavior, Applied Psychology
- Abstract
Objective: We explore users’ and observers’ subjective assessments of human and automation capabilities and of human causal responsibility for outcomes.

Background: In intelligent systems and advanced automation, human responsibility for outcomes becomes equivocal, as do subjective perceptions of responsibility. In particular, actors who actively work with a system may perceive responsibility differently from observers.

Method: In a laboratory experiment with pairs of participants, one participant (the “actor”) performed a decision task, aided by an automated system, while the other (the “observer”) passively watched the actor. We compared the perceptions of responsibility between the two roles when interacting with two systems of different capabilities.

Results: Actors’ behavior matched the theoretical predictions, and actors and observers assessed the system and human capabilities and the comparative human responsibility similarly. However, actors tended to attribute adverse outcomes more to system characteristics than to their own limitations, whereas observers insufficiently considered system capabilities when evaluating the actors’ comparative responsibility.

Conclusion: When intelligent systems greatly exceed human capabilities, users may correctly feel they contribute little to system performance. They may intervene more than necessary, impairing overall performance. Outside observers, such as managers, may overweight users’ contribution to outcomes, holding users responsible for adverse outcomes even when the users rightly trusted the system.

Application: Presenting users of intelligent systems and others with performance measures and the comparative human responsibility may help them calibrate their subjective assessments of performance, reducing users’ and outside observers’ biases and attribution errors.
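The “comparative human responsibility” referred to in the abstract comes from the authors’ information-theoretic responsibility framework, which is defined in their companion work and not reproduced in this record. As an illustrative sketch only, one could measure the human’s unique contribution as the share of action uncertainty the human resolves beyond what the automation already resolves; the function name, variable layout, and the ratio (H(A|S) − H(A|S,H)) / H(A) below are assumptions chosen for illustration, not the authors’ exact model.

```python
import math
from collections import defaultdict

def _entropy(dist):
    """Shannon entropy (bits) of a dict mapping events to probabilities."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def comparative_responsibility(joint):
    """Toy responsibility measure (illustrative assumption, not the
    published model).

    joint: {(s, h, a): prob} -- joint distribution over the system's
    indication s, the human's private information h, and the action a.
    Returns (H(A|S) - H(A|S,H)) / H(A): the fraction of action
    uncertainty resolved uniquely by the human beyond the system.
    """
    p_a, p_s, p_sh, p_sa = (defaultdict(float) for _ in range(4))
    for (s, h, a), p in joint.items():
        p_a[a] += p
        p_s[s] += p
        p_sh[(s, h)] += p
        p_sa[(s, a)] += p
    h_a = _entropy(p_a)
    # Conditional entropies via the chain rule: H(A|S) = H(S,A) - H(S).
    h_a_given_s = _entropy(p_sa) - _entropy(p_s)
    h_a_given_sh = _entropy(joint) - _entropy(p_sh)
    return (h_a_given_s - h_a_given_sh) / h_a if h_a > 0 else 0.0

# When the action simply follows the human's information, the human's
# comparative responsibility is 1; when it follows the system, it is 0.
follows_human = {(s, h, h): 0.25 for s in (0, 1) for h in (0, 1)}
follows_system = {(s, h, s): 0.25 for s in (0, 1) for h in (0, 1)}
print(comparative_responsibility(follows_human))   # 1.0
print(comparative_responsibility(follows_system))  # 0.0
```

In this toy form, the measure matches the abstract’s qualitative claim: as the automation’s capability grows relative to the human’s, the human’s unique contribution, and hence this responsibility score, shrinks toward zero.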
- Published
- 2020