
How to deal with risks of AI suffering.

Authors :
Dung, Leonard
Source :
Inquiry, July 2023, pp. 1-29. 29 pages.
Publication Year :
2023

Abstract

We might create artificial systems which can suffer. Since AI suffering might potentially be astronomical, the moral stakes are huge. Thus, we need an approach which tells us what to do about the risk of AI suffering. I argue that such an approach should ideally satisfy four desiderata: beneficence, action-guidance, feasibility and consistency with our epistemic situation. Scientific approaches to AI suffering risk hold that we can improve our scientific understanding of AI, and AI suffering in particular, to decrease AI suffering risks. However, such approaches tend to conflict with either the desideratum of consistency with our epistemic situation or with feasibility. Thus, we also need an explicitly ethical approach to AI suffering risk. Such an approach tells us what to do in the light of profound scientific uncertainty about AI suffering. After discussing multiple views, I express support for a hybrid approach. This approach is partly based on the maximization of expected value and partly on a deliberative approach to decision-making. [ABSTRACT FROM AUTHOR]

Details

Language :
English
ISSN :
0020-174X
Database :
Academic Search Index
Journal :
Inquiry
Publication Type :
Academic Journal
Accession number :
165108603
Full Text :
https://doi.org/10.1080/0020174x.2023.2238287