EDITORIAL article

Front. Comput. Sci., 25 August 2023
Sec. Mobile and Ubiquitous Computing
This article is part of the Research Topic Biosignal-based Human–Computer Interfaces

Editorial: Biosignal-based human–computer interfaces

  • 1Institute of Biomedical Engineering, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
  • 2School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing, Jiangsu, China
  • 3Department of Computer Science, Memorial University of Newfoundland, St. John's, NL, Canada

Editorial on the Research Topic
Biosignal-based human–computer interfaces

Human-computer interface applications based on various biomedical signals have grown rapidly in recent years. These technologies provide additional ways, beyond motion and language, for humans to interact with the physical world. However, current biosignal-based human-computer interfaces face several limitations, such as complex system setup, low classification accuracy, and poor system stability, which hinder their widespread adoption in daily scenarios. Most existing applications are concentrated in the medical domain, where they assist, rehabilitate, or diagnose patients with different needs and conditions within well-controlled environments. This Research Topic therefore invited researchers in the field to present their latest findings, propose innovative methods for human-computer interfaces, and expand the scope and impact of human-computer interface applications through biomedical engineering approaches.

The scope of this Research Topic covers novel algorithms and frameworks for human-computer interfaces that use biomedical signals to enable activity assistance and performance augmentation. The original proposal focused on new biomedical signal acquisition and application frameworks, with specific contributions toward optimizing existing and future interface systems between humans and external devices. Five articles have been published in this Research Topic: three original research articles, one review article, and one data report.

Su and Wu address the problems of multiple fundamental-frequency detection and source separation from a single-channel signal containing multiple oscillatory components and nonstationary noise. They demonstrate a novel method to accurately obtain the fetal heart rate and the fetal ECG waveform from a single-lead maternal abdominal ECG. An adaptive non-harmonic model is proposed and validated to capture the time-varying amplitude, frequency, and wave shape of the maternal and fetal cardiac activities, and the de-shape short-time Fourier transform (STFT) and the nonlocal median algorithm are used to separate these components.
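The de-shape STFT itself is an involved construction, but the underlying idea of locating a fundamental frequency in the spectrum can be illustrated with a minimal DFT peak-picking sketch. This is not the authors' method; the signal, sampling rate, and parameters below are purely illustrative.

```python
import math

def dft_magnitudes(x):
    """Naive DFT magnitude spectrum (first half of the bins)."""
    n = len(x)
    mags = []
    for k in range(n // 2):
        re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mags.append(math.hypot(re, im))
    return mags

def estimate_f0(x, fs):
    """Return the frequency (Hz) of the strongest non-DC spectral peak."""
    mags = dft_magnitudes(x)
    k = max(range(1, len(mags)), key=lambda i: mags[i])
    return k * fs / len(x)

# Synthetic "cardiac" oscillation: 2 Hz fundamental (120 bpm) sampled at 64 Hz.
fs, f0, n = 64, 2.0, 128
signal = [math.sin(2 * math.pi * f0 * t / fs) for t in range(n)]
print(estimate_f0(signal, fs))  # 2.0
```

A real fetal ECG signal contains two overlapping quasi-periodic sources plus noise, which is precisely why the paper needs the adaptive non-harmonic model and de-shape STFT rather than a single peak pick.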

Ciliberto et al. present Opportunity++, a publicly available extension of the earlier Opportunity dataset that adds video footage and video-based skeleton tracking of four people performing everyday activities in a kitchen environment. The data collection method, annotation tracks, sensor data, and videos are presented in detail, along with example applications of the dataset, such as multimodal fusion, dynamic modality selection, data imputation, re-annotation, annotation crowd-sourcing, inter-rater reliability, and privacy-preserving annotation. Opportunity++ is a valuable resource for researchers exploring new questions and challenges in human activity recognition using multiple sensor modalities.

Wang et al. investigate how the speech rate of elderly people, the type of voice interaction task, and the word count of feedback affect their expected feedback speech rate from a voice user interface. They use a Wizard of Oz testing method to conduct voice interaction simulation experiments with 30 elderly subjects and a prototype voice robot. The results suggest that elderly people speak to a voice robot at a slower rate than to a person, and that they expect the robot's feedback speech rate to be lower than their own. In addition, a positive correlation is identified between the participants' speech rate and the expected feedback speech rate, and a negative correlation between the feedback word count and the expected feedback speech rate; no significant effect of dialog task type is found. These findings carry important implications for voice user interface design, suggesting that the feedback speech rate should be adjusted according to the user's speech rate and the feedback word count. A linear regression model is also proposed to predict the expected feedback speech rate from the user's speech rate.
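The shape of such a regression model can be sketched with ordinary least squares. The data points and coefficients below are hypothetical and do not come from the paper; they only illustrate fitting and applying a model of the form expected_rate = a * user_rate + b.

```python
def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b (closed form)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    b = my - a * mx
    return a, b

# Hypothetical (user speech rate, expected feedback rate) pairs in words/min.
data = [(110, 95), (120, 100), (135, 112), (150, 120), (160, 130)]
a, b = fit_linear([x for x, _ in data], [y for _, y in data])
predicted = a * 140 + b  # expected feedback rate for a 140 wpm speaker
```

A positive fitted slope mirrors the positive correlation the study reports between user speech rate and expected feedback rate.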

Ferguson et al. review therapeutic music systems that use biofeedback to reduce stress and anxiety in users. The article surveys systems that use different types of biofeedback, such as ECG and EEG, to measure users' physiological responses to music, and provides an in-depth discussion of how machine learning techniques can make such systems more adaptive to the individual needs and preferences of users, for example by customizing playlists, adjusting music tempo, altering the amplitude, and generating binaural beats. The article concludes that biofeedback paired with adaptive software can create effective therapeutic music systems that assist researchers and music therapists in alleviating stress and anxiety.
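The core control idea behind such adaptive systems (measure a physiological signal, then nudge a music parameter toward a target) can be sketched as a simple feedback loop. The update rule, gain, and tempo range below are invented for illustration and are not taken from any system in the review.

```python
def adapt_tempo(current_bpm, heart_rate, resting_hr, gain=0.5,
                lo=60.0, hi=140.0):
    """Nudge music tempo down when heart rate is above resting and up when
    below, clamped to a playable range. Purely illustrative feedback rule."""
    new_bpm = current_bpm - gain * (heart_rate - resting_hr)
    return max(lo, min(hi, new_bpm))

# Simulated heart-rate readings relaxing toward a resting rate of 70 bpm.
tempo = 100.0
for hr in [90, 85, 80, 75, 70]:
    tempo = adapt_tempo(tempo, hr, resting_hr=70)
```

Real systems described in the review replace this fixed rule with learned models of individual preference, but the sensing-then-adaptation loop is the same.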

Park and Lee present a study that classifies electroencephalography (EEG) data of imagined speech using signal decomposition and a multi-receptive-field convolutional neural network. The article builds an EEG dataset of 10 participants imagining five vowels, /a/, /e/, /i/, /o/, and /u/, plus a mute (rest) state. Noise-assisted multivariate empirical mode decomposition and wavelet packet decomposition are used to extract six statistical features from each of the eight decomposed sub-frequency bands of the recorded EEG. The multi-receptive-field convolutional neural network is compared against several other classifiers and achieves an average classification accuracy of 73.09% and a maximum accuracy of 80.41% over the six classes. The proposed classification method will contribute to developing a practical imagined-speech-based brain-computer interface system.
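The per-band feature extraction step can be sketched as follows. The six features chosen here (mean, standard deviation, RMS, skewness, kurtosis, zero-crossing rate) are common statistical descriptors and may differ from the paper's exact feature set; the synthetic sub-band signal is illustrative.

```python
import math

def band_features(x):
    """Six example statistical features for one decomposed sub-band."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    std = math.sqrt(var)
    rms = math.sqrt(sum(v * v for v in x) / n)
    skew = sum((v - mean) ** 3 for v in x) / (n * std ** 3) if std else 0.0
    kurt = sum((v - mean) ** 4 for v in x) / (n * var ** 2) if var else 0.0
    # Fraction of adjacent sample pairs that change sign.
    zcr = sum(1 for a, b in zip(x, x[1:]) if a * b < 0) / (n - 1)
    return [mean, std, rms, skew, kurt, zcr]

# One feature vector per sub-band: 8 bands x 6 features = 48 values per channel.
band = [math.sin(0.3 * t) for t in range(256)]
features = band_features(band)
```

Stacking such vectors across bands and channels yields the fixed-length input a classifier like the multi-receptive-field CNN can consume.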

Author contributions

XZ: Funding acquisition, Project administration, Resources, Writing—original draft, Writing—review and editing. YZ: Funding acquisition, Project administration, Resources, Writing—original draft, Writing—review and editing. XJ: Funding acquisition, Project administration, Resources, Writing—original draft, Writing—review and editing.

Funding

This work was supported in part by the National Key Research and Development Program of China under Grant 2022YFF1202900, the National Natural Science Foundation of China under Grant 82102174, and China Postdoctoral Science Foundation under Grant 2021TQ0243.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

The author(s) declared that they were an editorial board member of Frontiers at the time of submission. This had no impact on the peer review process or the final decision.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Keywords: human-computer interface (HCI), brain-computer interface (BCI), biosignal, biomarker, multimodal

Citation: Zhang X, Zhou Y and Jiang X (2023) Editorial: Biosignal-based human–computer interfaces. Front. Comput. Sci. 5:1275031. doi: 10.3389/fcomp.2023.1275031

Received: 09 August 2023; Accepted: 15 August 2023;
Published: 25 August 2023.

Edited and reviewed by: Kristof Van Laerhoven, University of Siegen, Germany

Copyright © 2023 Zhang, Zhou and Jiang. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Xin Zhang, xin_zhang_bme@163.com