
Protecting Life While Preserving Liberty: Ethical Recommendations for Suicide Prevention With Artificial Intelligence

Authors :
Lindsey C. McKernan
Ellen W. Clayton
Colin G. Walsh
Source :
Frontiers in Psychiatry, Vol 9 (2018)
Publication Year :
2018
Publisher :
Frontiers Media S.A., 2018.

Abstract

In the United States, suicide rates increased by 24% over the past 20 years, and suicide risk identification at the point of care remains a cornerstone of the effort to curb this epidemic (1). Because risk identification is difficult, owing to symptom under-reporting, timing, or lack of screening, healthcare systems increasingly rely on risk scoring and now on artificial intelligence (AI) to assess risk. AI is the science of solving problems and accomplishing tasks, through automated or computational means, that normally require human intelligence. The field is decades old and encompasses both traditional predictive statistics and machine learning, yet only in the last few years has it been applied rigorously to suicide risk prediction and prevention. Applying AI in this context raises significant ethical concerns, particularly in balancing beneficence with respect for personal autonomy. To navigate the ethical issues raised by suicide risk prediction, we provide recommendations for both providers and researchers in three areas: communication, consent, and controls (2).

Details

Language :
English
ISSN :
1664-0640
Volume :
9
Database :
Directory of Open Access Journals
Journal :
Frontiers in Psychiatry
Publication Type :
Academic Journal
Accession number :
edsdoj.01c0a6c52d446c892c8661684af870a
Document Type :
Article
Full Text :
https://doi.org/10.3389/fpsyt.2018.00650