ORIGINAL RESEARCH article
Front. Neurorobot.
Volume 19 - 2025 |
doi: 10.3389/fnbot.2025.1540033
This article is part of the Research Topic: Multiscale modeling of brain dynamics: development, validation, and applications in artificial systems or clinical populations.
AMEEGNet: Attention-based Multiscale EEGNet for Effective Motor Imagery EEG Decoding
Provisionally accepted
- 1 University of Chinese Academy of Sciences, Beijing, China
- 2 State Key Laboratory of Robotics, Shenyang Institute of Automation (CAS), Shenyang, Liaoning Province, China
- 3 School of Information Science and Engineering, Shenyang University of Technology, Shenyang, Liaoning Province, China
Recently, motor imagery (MI)-based electroencephalogram (EEG) decoding has gained significant traction in brain-computer interface (BCI) technology, particularly for the rehabilitation of paralyzed patients. However, the low signal-to-noise ratio of MI EEG makes effective decoding difficult and hinders the development of BCIs. In this paper, an attention-based multiscale EEGNet (AMEEGNet) is proposed to improve the decoding performance of MI-EEG. First, three parallel EEGNets with a fusion transmission method are employed to extract high-quality temporal-spatial features of EEG data at multiple scales. Then, an Efficient Channel Attention (ECA) module enhances the acquisition of more discriminative spatial features through a lightweight approach that weights critical channels. The experimental results demonstrate that the proposed model achieves decoding accuracies of 81.17%, 89.83%, and 95.49% on the BCI-2a, BCI-2b, and HGD datasets, respectively. These results show that the proposed AMEEGNet effectively decodes temporal-spatial features, providing a novel perspective on MI-EEG decoding and advancing future BCI applications.
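The channel-weighting step described above can be illustrated with a minimal NumPy sketch of the ECA mechanism: global average pooling produces one descriptor per channel, a small 1-D convolution shares information between neighboring channel descriptors, and a sigmoid gate rescales each channel. This is a hedged illustration only, not the authors' implementation; the `kernel_size` and the convolution weights (placeholders here, learned in practice) are assumptions.

```python
import numpy as np

def eca(x, kernel_size=3, conv_weights=None):
    """Efficient Channel Attention over a (channels, time) feature map.

    ECA replaces the fully connected layers of squeeze-and-excitation
    with a single 1-D convolution across channel descriptors, so the
    parameter count equals the kernel size (hence "lightweight").
    Note: conv_weights is a placeholder; in a trained network these
    weights are learned.
    """
    C, _ = x.shape
    if conv_weights is None:
        conv_weights = np.full(kernel_size, 1.0 / kernel_size)  # placeholder kernel
    # Squeeze: global average pooling over time -> one descriptor per channel
    desc = x.mean(axis=1)                              # shape (C,)
    # 1-D convolution across neighboring channel descriptors ("same" padding)
    pad = kernel_size // 2
    padded = np.pad(desc, pad, mode="edge")
    conv = np.array([padded[i:i + kernel_size] @ conv_weights for i in range(C)])
    # Excite: sigmoid gate in (0, 1), then reweight each channel
    gate = 1.0 / (1.0 + np.exp(-conv))                 # shape (C,)
    return x * gate[:, None]

# Toy usage: a 4-channel, 8-sample feature map
features = np.ones((4, 8))
reweighted = eca(features)
```

Because the gate is a sigmoid, each channel is scaled by a value strictly between 0 and 1, so the module can suppress uninformative channels without adding more than a handful of parameters.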
Keywords: Motor imagery (MI) EEG, Brain-computer interface, Signal decoding, Multi-scale decoding, Fusion transmission, Efficient Channel Attention (ECA) mechanism
Received: 05 Dec 2024; Accepted: 07 Jan 2025.
Copyright: © 2025 Wu, Chu, Li, Luo, Zhao and Zhao. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence:
Yaqi Chu, University of Chinese Academy of Sciences, Beijing, China
Xingang Zhao, University of Chinese Academy of Sciences, Beijing, China
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.