
Moral judgements of errors by AI systems and humans in civil and criminal law.

Authors :
Parlangeli, Oronzo
Currò, Francesco
Palmitesta, Paola
Guidi, Stefano
Source :
Behaviour & Information Technology. Nov2023, p1-11. 11p. 4 Illustrations, 5 Charts.
Publication Year :
2023

Abstract

The evaluation of the use of Artificial Intelligence (AI) in legal decisions still raises unresolved questions. These concern the perceived seriousness of possible errors, the distribution of responsibility among the different decision-makers (human or artificial), and the evaluation of an error with respect to its benevolent or malevolent consequences for the person sanctioned. Above all, assessing the possible relationships between these variables appears relevant. To this aim, we conducted a study through an online questionnaire (N = 288) in which participants considered different scenarios where a decision-maker, human or artificial, made an error of judgement for offences punishable by a fine (Civil Law infringement) or years in prison (Criminal Law infringement). We found that humans who delegate to AIs are blamed less than solo humans, although the effect of decision-maker was subtle. In addition, people consider an error more serious when committed by a human being if a sentence for a crime under the penal code is mitigated, and when committed by an AI if a penalty for an infringement of the civil code is aggravated. The mitigation of seriousness judgements for joint AI-human errors suggests the potential for strategic scapegoating of AIs. [ABSTRACT FROM AUTHOR]

Details

Language :
English
ISSN :
0144-929X
Database :
Academic Search Index
Journal :
Behaviour & Information Technology
Publication Type :
Academic Journal
Accession number :
173642472
Full Text :
https://doi.org/10.1080/0144929x.2023.2283622