AUTHOR=Dhariwal Naman, Sengupta Nidhi, Madiajagan M., Patro Kiran Kumar, Kumari P. Lalitha, Abdel Samee Nagwan, Tadeusiewicz Ryszard, Pławiak Paweł, Prakash Allam Jaya TITLE=A pilot study on AI-driven approaches for classification of mental health disorders JOURNAL=Frontiers in Human Neuroscience VOLUME=18 YEAR=2024 URL=https://www.frontiersin.org/journals/human-neuroscience/articles/10.3389/fnhum.2024.1376338 DOI=10.3389/fnhum.2024.1376338 ISSN=1662-5161 ABSTRACT=
The increasing prevalence of mental disorders among youth worldwide is one of society's most pressing issues. The proposed methodology introduces an artificial intelligence-based approach to understanding and analyzing the prevalence of neurological disorders. This work draws upon an analysis of the Cities Health Initiative dataset. It employs advanced machine learning and deep learning techniques, integrated with data science, statistics, optimization, and mathematical modeling, to correlate various lifestyle and environmental factors with the incidence of these mental disorders. In this work, a variety of machine learning and deep learning models with hyper-parameter tuning are used to forecast trends in the occurrence of mental disorders in relation to lifestyle choices such as smoking and alcohol consumption, as well as environmental factors like air and noise pollution. Among these models, the convolutional neural network (CNN) architecture, termed DNN1 in this paper, predicts mental health occurrences relative to the population mean with a maximum accuracy of 99.79%. Among the machine learning models, the XGBoost technique yields an accuracy of 95.30%, with an area under the ROC curve of 0.9985, indicating robust training. The research also involves extracting feature-importance scores for the XGBoost classifier, with Stroop test performance results attaining the highest importance score of 0.135. Attributes related to addiction, namely smoking and alcohol consumption, hold importance scores of 0.0273 and 0.0212, respectively. Statistical tests on the training models reveal that XGBoost performs best on the mean squared error and
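The feature-importance analysis described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' code: scikit-learn's GradientBoostingClassifier stands in for XGBoost so the example runs without the xgboost package, and the data and feature names are synthetic placeholders, since the Cities Health Initiative dataset is not reproduced here.

```python
# Hypothetical sketch of gradient-boosting feature-importance extraction.
# GradientBoostingClassifier is a stand-in for the paper's XGBoost model;
# the data and feature names below are invented for illustration only.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic placeholder data with six features, loosely mirroring the
# kinds of attributes the abstract mentions (Stroop test performance,
# smoking, alcohol, air and noise pollution).
X, y = make_classification(n_samples=200, n_features=6, random_state=0)
feature_names = ["stroop_test", "smoking", "alcohol",
                 "air_pollution", "noise_pollution", "age"]

clf = GradientBoostingClassifier(random_state=0).fit(X, y)

# Importance scores are normalized to sum to 1.0, the same scale as the
# scores reported in the abstract (e.g. 0.135 for Stroop test results).
for name, score in sorted(zip(feature_names, clf.feature_importances_),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.4f}")
```

With a real XGBoost model the equivalent attribute is also called `feature_importances_`, so the ranking loop above carries over unchanged.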