AUTHOR=Hooks Kevin, El-Said Refaat, Fu Qiushi TITLE=Decoding reach-to-grasp from EEG using classifiers trained with data from the contralateral limb JOURNAL=Frontiers in Human Neuroscience VOLUME=17 YEAR=2023 URL=https://www.frontiersin.org/journals/human-neuroscience/articles/10.3389/fnhum.2023.1302647 DOI=10.3389/fnhum.2023.1302647 ISSN=1662-5161 ABSTRACT=

Fundamental to human movement is the ability to interact with objects in our environment. How one reaches for an object depends on the object’s shape and on the intended interaction afforded by the object, e.g., grasp and transport. Extensive research has revealed that the motor intention of reach-to-grasp can be decoded from cortical activity using EEG signals. The goal of the present study is to determine the extent to which information encoded in the EEG signals is shared between the two limbs, enabling cross-hand decoding. We performed an experiment in which human subjects (n = 10) were tasked with interacting with a novel object with multiple affordances using either the right or the left hand. The object had two vertical handles attached to a horizontal base. A visual cue instructed which action (lift or touch) to perform and whether the left or right handle should be used on each trial. EEG was recorded and processed from bilateral frontal-central-parietal regions (30 channels). We trained LDA classifiers using data from trials performed by one limb and tested the classification accuracy using data from trials performed by the contralateral limb. We found that the type of hand-object interaction could be decoded with approximately 59% and 69% peak accuracy in the planning and execution stages, respectively. Interestingly, the decoding accuracy for reaching direction depended on how EEG channels in the testing dataset were spatially mirrored, and on whether directions were labeled in extrinsic (object-centered) or intrinsic (body-centered) coordinates.
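The cross-limb decoding scheme described in the abstract — train an LDA classifier on trials from one hand, test on trials from the contralateral hand, with or without spatially mirroring the test channels — can be sketched as follows. This is an illustrative sketch only: the data here are synthetic, the channel layout and mirroring index are made up for the example, and the feature extraction (one feature per channel) is a stand-in for whatever EEG preprocessing the study actually used.

```python
# Hypothetical sketch of cross-hand decoding with LDA, assuming synthetic
# per-channel features in place of real preprocessed EEG. Not the authors'
# actual pipeline.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

n_trials, n_channels = 100, 30            # 30 fronto-central-parietal channels

# Assume channels are ordered so that index i and mirror_idx[i] are
# homologous left/right sites (an invented layout for illustration).
mirror_idx = np.arange(n_channels)[::-1]

# Synthetic features: one scalar feature per channel for each trial.
X_right = rng.normal(size=(n_trials, n_channels))   # right-hand trials
y_right = rng.integers(0, 2, n_trials)              # labels: lift vs. touch
X_left = rng.normal(size=(n_trials, n_channels))    # left-hand trials
y_left = rng.integers(0, 2, n_trials)

# Train on one limb, test on the contralateral limb.
clf = LinearDiscriminantAnalysis().fit(X_right, y_right)
acc_plain = clf.score(X_left, y_left)

# Spatially mirror the test channels (swap homologous left/right sites),
# which the study reports matters for decoding reaching direction.
acc_mirrored = clf.score(X_left[:, mirror_idx], y_left)
```

With real data, comparing `acc_plain` against `acc_mirrored` (and against labels defined in object-centered vs. body-centered coordinates) is the kind of contrast the study's direction-decoding analysis rests on; with the random data above, both accuracies hover near chance.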