Understanding Human Implicit Intention based on EEG and Speech Signals
S. Dong 1, D. Kim 1 and S. Lee 1
1 KAIST, Department of Electrical Engineering and Brain Science Research Center, Republic of Korea
There are many approaches to understanding human intention. The objective of human-computer interface (HCI) research is to build machines that understand human intention with high accuracy; ultimately, a machine should be able to understand a person's intention and communicate with him or her much as humans communicate with each other. Previous studies mainly concentrated on interpreting intentions that are expressed explicitly. We certainly convey our intentions through speech, gestures, facial expressions, and so on, but these explicit expressions may not be enough to reveal what we really intend. Moreover, we sometimes do not show our intentions explicitly; there are many situations in which we prefer not to disclose our minds. A machine trained, as in earlier studies, only to interpret explicit expressions therefore cannot guarantee accurate understanding.
Ten healthy volunteers were studied. The participants read obvious and non-obvious sensitive sentences (Table 1) and indicated their agreement or disagreement by speech while 32-channel electroencephalography (EEG) was recorded over the scalp. The obvious sentences consist of two groups, one eliciting obvious ‘agreement’ and the other obvious ‘disagreement’, whereas ‘agreement’ or ‘disagreement’ with the non-obvious sentences is subject dependent. To create situations in which explicit and implicit intentions diverge, the non-obvious sentences consist of sensitive personal questions that subjects may not want to answer truthfully. We assumed that these sentences relate to the participants’ private lives, and that brain activation may differ when people express something other than their real intention.
Independent component analysis (ICA) in EEGLAB was used to identify and remove eye-movement artifacts, and power spectra were averaged over six standard EEG frequency bands: delta (2-4 Hz), theta (4-8 Hz), alpha (8-13 Hz), beta1 (13-20 Hz), beta2 (20-35 Hz), and gamma (35-46 Hz). First, the obvious situation was examined: classification of prefrontal EEGs with a nonlinear support vector machine (SVM) showed significant differences between the agreement and disagreement conditions. Second, the prefrontal EEGs from the non-obvious situation were tested with the SVM classifier trained on the EEGs of the obvious situation.
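As an illustration only (not the authors' code), the band-power extraction and nonlinear SVM step described above could be sketched in Python, with SciPy and scikit-learn standing in for the EEGLAB/SVM toolchain; the epoch layout, sampling rate, and SVM parameters below are assumptions.

import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Six standard EEG bands used as features (Hz).
BANDS = {"delta": (2, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta1": (13, 20), "beta2": (20, 35), "gamma": (35, 46)}

def band_powers(epochs, fs=256):
    # epochs: (n_trials, n_channels, n_samples) artifact-cleaned prefrontal EEG.
    freqs, psd = welch(epochs, fs=fs, nperseg=fs, axis=-1)
    feats = [psd[..., (freqs >= lo) & (freqs < hi)].mean(axis=-1)
             for lo, hi in BANDS.values()]
    # One averaged power value per channel and band -> (n_trials, n_channels * 6).
    return np.stack(feats, axis=-1).reshape(len(epochs), -1)

def train_obvious_classifier(X_obvious, y_obvious, fs=256):
    # y_obvious: 1 = agreement, 0 = disagreement on the obvious sentences.
    feats = band_powers(X_obvious, fs)
    clf = SVC(kernel="rbf", C=1.0, gamma="scale")  # nonlinear (RBF) SVM
    print("cross-validated accuracy:", cross_val_score(clf, feats, y_obvious, cv=5).mean())
    return clf.fit(feats, y_obvious)

The fitted classifier can then be applied unchanged to band-power features from the non-obvious trials, as in the second step above.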
We also trained an SVM on speech signals from the obvious sentences and tested it on speech signals from the non-obvious sentences. Since no ground-truth labels were available for the non-obvious trials, the SVM outputs for the corresponding EEG were taken as labels of implicit intention. The results showed that the classifier outputs for the speech and EEG signals are highly correlated, suggesting that this approach may be used to infer human implicit intention when it differs from the explicit intention.
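The cross-modal check can be sketched as follows (again an assumed implementation, not the authors' code): the EEG classifier's predictions on the non-obvious trials serve as pseudo-labels, a second SVM is trained on speech features from the obvious trials, and agreement between the two decision streams is measured by correlation.

import numpy as np
from sklearn.svm import SVC

def cross_modal_agreement(eeg_clf, eeg_feats_non,
                          speech_feats_obv, y_obv, speech_feats_non):
    # Train the speech SVM on obvious trials, where explicit labels exist.
    speech_clf = SVC(kernel="rbf", gamma="scale").fit(speech_feats_obv, y_obv)
    # EEG predictions on non-obvious trials act as implicit-intention labels.
    eeg_labels = eeg_clf.predict(eeg_feats_non)
    speech_labels = speech_clf.predict(speech_feats_non)
    # Pearson correlation between the two binary label streams.
    return np.corrcoef(eeg_labels, speech_labels)[0, 1]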
Keywords:
Human implicit intention; Electroencephalography; Support vector machine; EEGLAB; Brain machine interface; Computational neuroscience
Conference:
4th INCF Congress of Neuroinformatics, Boston, United States, 4 Sep - 6 Sep, 2011.
Presentation Type:
Poster Presentation
Topic:
Computational neuroscience
Citation:
Dong S, Kim D and Lee S (2011). Understanding Human Implicit Intention based on EEG and Speech Signals. Front. Neuroinform. Conference Abstract: 4th INCF Congress of Neuroinformatics. doi: 10.3389/conf.fninf.2011.08.00082
Copyright:
The abstracts in this collection have not been subject to any Frontiers peer review or checks, and are not endorsed by Frontiers.
They are made available through the Frontiers publishing platform as a service to conference organizers and presenters.
The copyright in the individual abstracts is owned by the author of each abstract or his/her employer unless otherwise stated.
Each abstract, as well as the collection of abstracts, are published under a Creative Commons CC-BY 4.0 (attribution) licence (https://creativecommons.org/licenses/by/4.0/) and may thus be reproduced, translated, adapted and be the subject of derivative works provided the authors and Frontiers are attributed.
For Frontiers’ terms and conditions please see https://www.frontiersin.org/legal/terms-and-conditions.
Received:
17 Oct 2011;
Published Online:
19 Oct 2011.
* Correspondence:
Dr. Suh-Yeon Dong, KAIST, Department of Electrical Engineering and Brain Science Research Center, Daejeon, Republic of Korea, sydong@sookmyung.ac.kr