AUTHOR=McKernan Lindsey C., Clayton Ellen W., Walsh Colin G. TITLE=Protecting Life While Preserving Liberty: Ethical Recommendations for Suicide Prevention With Artificial Intelligence JOURNAL=Frontiers in Psychiatry VOLUME=9 YEAR=2018 URL=https://www.frontiersin.org/journals/psychiatry/articles/10.3389/fpsyt.2018.00650 DOI=10.3389/fpsyt.2018.00650 ISSN=1664-0640 ABSTRACT=

In the United States, suicide rates increased by 24% over the past 20 years, and suicide risk identification at the point of care remains a cornerstone of the effort to curb this epidemic (1). Because risk identification is difficult owing to symptom under-reporting, timing, or lack of screening, healthcare systems increasingly rely on risk scoring and, now, artificial intelligence (AI) to assess risk. AI is the science of solving problems and accomplishing tasks, through automated or computational means, that normally require human intelligence. This science is decades old and includes traditional predictive statistics and machine learning. Only in the last few years has it been applied rigorously to suicide risk prediction and prevention. Applying AI in this context raises significant ethical concerns, particularly in balancing beneficence with respect for personal autonomy. To navigate the ethical issues raised by suicide risk prediction, we provide recommendations in three areas—communication, consent, and controls—for both providers and researchers (2).