AUTHOR=Kanyal Ayush, Mazumder Badhan, Calhoun Vince D., Preda Adrian, Turner Jessica, Ford Judith, Ye Dong Hye TITLE=Multi-modal deep learning from imaging genomic data for schizophrenia classification JOURNAL=Frontiers in Psychiatry VOLUME=15 YEAR=2024 URL=https://www.frontiersin.org/journals/psychiatry/articles/10.3389/fpsyt.2024.1384842 DOI=10.3389/fpsyt.2024.1384842 ISSN=1664-0640 ABSTRACT=Background

Schizophrenia (SZ) is a psychiatric condition that adversely affects an individual’s cognitive, emotional, and behavioral functioning. The etiology of SZ, although extensively studied, remains unclear, as multiple factors come together to contribute toward its development. There is a consistent body of evidence documenting structural and functional deviations in the brains of individuals with SZ. Moreover, the hereditary aspect of SZ is supported by the significant involvement of genomic markers. Therefore, there is a need to investigate SZ from a multi-modal perspective and to develop approaches for improved detection.

Methods

Our proposed method employed a deep learning framework combining features from structural magnetic resonance imaging (sMRI), functional magnetic resonance imaging (fMRI), and genetic markers such as single nucleotide polymorphisms (SNPs). For sMRI, we used a pre-trained DenseNet to extract morphological features. To identify the most relevant functional connections in fMRI and the SNPs linked to SZ, we applied a 1-dimensional convolutional neural network (CNN) followed by layer-wise relevance propagation (LRP). Finally, we concatenated the features obtained across modalities and fed them to an extreme gradient boosting (XGBoost) tree-based classifier to classify SZ from healthy controls (HC).
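To make the fusion-and-classification step concrete, the following is a minimal sketch (not the authors' released code) of concatenating per-modality feature vectors and training an XGBoost classifier. The cohort size, feature dimensions, and hyperparameters are illustrative assumptions, and the random arrays stand in for features already extracted by the DenseNet and the LRP-selected 1D CNN described above.

```python
# Illustrative fusion-and-classification sketch (assumed shapes and settings).
import numpy as np
import xgboost as xgb
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_subjects = 200                                    # hypothetical cohort size
smri_feats = rng.normal(size=(n_subjects, 1024))    # DenseNet sMRI embeddings (assumed dim)
fmri_feats = rng.normal(size=(n_subjects, 100))     # LRP-selected FNC features (assumed dim)
snp_feats = rng.normal(size=(n_subjects, 50))       # LRP-selected SNP features (assumed dim)
labels = rng.integers(0, 2, size=n_subjects)        # 0 = HC, 1 = SZ

# Concatenate modalities into one fused feature vector per subject.
fused = np.concatenate([smri_feats, fmri_feats, snp_feats], axis=1)

X_tr, X_te, y_tr, y_te = train_test_split(
    fused, labels, test_size=0.2, stratify=labels, random_state=0
)

# Tree-based XGBoost classifier on the fused features (hyperparameters are placeholders).
clf = xgb.XGBClassifier(
    n_estimators=300, max_depth=4, learning_rate=0.05, eval_metric="logloss"
)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```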

Results

Experimental evaluation on a clinical dataset demonstrated that, compared to the outcomes obtained from each modality individually, our proposed multi-modal approach classified SZ individuals from HC with an improved accuracy of 79.01%.

Conclusion

We proposed a deep learning-based framework that efficiently selects multi-modal (sMRI, fMRI, and genetic) features and fuses them to obtain improved classification scores. Additionally, by using explainable AI (XAI), we were able to pinpoint and validate the significant functional network connections and SNPs that contributed the most toward SZ classification, providing the necessary interpretation behind our findings.
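As one way to realize the relevance-ranking idea behind the XAI step, the sketch below applies layer-wise relevance propagation via the captum library and keeps the highest-relevance inputs. A small fully connected network with random weights is used here purely as a stand-in for the paper's trained 1D CNN, and the top-10 cutoff is an arbitrary assumption.

```python
# Illustrative LRP-based ranking of input features (stand-in model, not the paper's network).
import torch
import torch.nn as nn
from captum.attr import LRP

n_features = 100  # assumed number of FNC or SNP inputs

# Simplified stand-in for a trained classifier; in practice the trained model is used.
model = nn.Sequential(
    nn.Linear(n_features, 64),
    nn.ReLU(),
    nn.Linear(64, 2),  # outputs: HC vs. SZ logits
).eval()

x = torch.randn(1, n_features)  # one subject's input feature vector

# Propagate relevance for the SZ class (index 1) back to the input features.
relevance = LRP(model).attribute(x, target=1)

# Rank features by absolute relevance and keep the top 10 (arbitrary cutoff).
top_idx = relevance.abs().flatten().topk(10).indices
print("most relevant feature indices:", top_idx.tolist())
```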