AUTHOR=Wang Dong, Lian Jian, Cheng Hebin, Zhou Yanan TITLE=Music-evoked emotions classification using vision transformer in EEG signals JOURNAL=Frontiers in Psychology VOLUME=15 YEAR=2024 URL=https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2024.1275142 DOI=10.3389/fpsyg.2024.1275142 ISSN=1664-1078 ABSTRACT=

Introduction

Electroencephalogram (EEG)-based emotion recognition has received significant attention and is widely used in both human-computer interaction and therapeutic settings. Manually analyzing EEG signals, however, is time-consuming and labor-intensive. While machine learning methods have shown promising results in classifying emotions from EEG data, extracting discriminative features from these signals remains a considerable challenge.

Methods

In this study, we propose a novel deep learning model that incorporates an attention mechanism to effectively extract spatial and temporal information from emotional EEG recordings, addressing this gap in the field. Classification of the emotion EEG signals is performed by a global average pooling layer followed by a fully connected layer, which together exploit the learned discriminative features. To assess the effectiveness of the proposed method, we first collected a dataset of EEG recordings of music-evoked emotions.
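
The abstract gives no implementation details, so the following is a rough illustration only: a minimal PyTorch sketch of a vision-transformer-style EEG classifier with an attention encoder, global average pooling, and a fully connected classification layer, matching the structure described above. All layer sizes, the patch length, the channel count, and the segment length are assumptions made for illustration, not the authors' published configuration.

import torch
import torch.nn as nn

class EEGViTClassifier(nn.Module):
    # Hypothetical configuration: 32 EEG channels, 512-sample segments,
    # 64-sample temporal patches. None of these values come from the paper.
    def __init__(self, n_channels=32, seg_len=512, patch_len=64,
                 d_model=128, n_heads=4, n_layers=4, n_classes=3):
        super().__init__()
        assert seg_len % patch_len == 0
        n_patches = seg_len // patch_len
        self.patch_len = patch_len
        # Each temporal patch (all channels x patch_len samples) is flattened
        # and linearly projected into the embedding space.
        self.patch_embed = nn.Linear(n_channels * patch_len, d_model)
        # Learnable positional embeddings preserve the temporal order of patches.
        self.pos_embed = nn.Parameter(torch.zeros(1, n_patches, d_model))
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, dim_feedforward=4 * d_model,
            batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        # Fully connected classification layer applied after global average
        # pooling, mirroring the head described in the abstract.
        self.fc = nn.Linear(d_model, n_classes)

    def forward(self, x):
        # x: (batch, n_channels, seg_len)
        b, c, t = x.shape
        # Split the signal into non-overlapping temporal patches:
        # (batch, n_channels, n_patches, patch_len).
        x = x.unfold(dimension=2, size=self.patch_len, step=self.patch_len)
        # Regroup so each token holds all channels for one time window.
        x = x.permute(0, 2, 1, 3).reshape(b, -1, c * self.patch_len)
        x = self.patch_embed(x) + self.pos_embed
        x = self.encoder(x)          # self-attention over temporal patches
        x = x.mean(dim=1)            # global average pooling over tokens
        return self.fc(x)            # logits for the emotion classes

model = EEGViTClassifier()
logits = model(torch.randn(8, 32, 512))  # 8 EEG segments, 32 channels each
print(logits.shape)                      # torch.Size([8, 3])

Setting n_classes to 2 or 3 would correspond to the binary (positive/negative) and ternary (positive/negative/neutral) scenarios mentioned in the Discussion.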

Experiments

We then conducted comparative experiments between state-of-the-art algorithms and the proposed method on this proprietary dataset. A publicly available dataset was additionally included in subsequent comparative experiments.

Discussion

The experimental results demonstrate that the proposed method outperforms existing approaches in classifying emotional EEG signals, in both the binary (positive and negative) and ternary (positive, negative, and neutral) scenarios.