AUTHOR=Sheu Yi-han TITLE=Illuminating the Black Box: Interpreting Deep Neural Network Models for Psychiatric Research JOURNAL=Frontiers in Psychiatry VOLUME=11 YEAR=2020 URL=https://www.frontiersin.org/journals/psychiatry/articles/10.3389/fpsyt.2020.551299 DOI=10.3389/fpsyt.2020.551299 ISSN=1664-0640 ABSTRACT=

Psychiatric research often confronts complex abstractions and dynamics that are not readily accessible to, or well defined by, human perception and measurement, making data-driven methods an appealing approach. Deep neural networks (DNNs) can automatically learn abstractions from data, including entirely novel ones, and have demonstrated performance superior to classical machine learning models across a range of tasks; they therefore serve as promising tools for making new discoveries in psychiatry. A key concern limiting the wider application of DNNs is their reputation as a “black box” approach, i.e., they are said to lack transparency or interpretability regarding how input data are transformed into model outputs. In fact, several existing and emerging tools provide improved interpretability. However, most reviews of DNN interpretability focus on theoretical and/or engineering perspectives. This article reviews approaches to DNN interpretability that may be relevant to their application in psychiatric research and practice. It describes a framework for understanding these methods, reviews the conceptual basis of specific methods and their potential limitations, and discusses prospects for their implementation and future directions.