AUTHOR=Arulkumaran Kai, Di Vincenzo Marina, Dossa Rousslan Fernand Julien, Akiyama Shogo, Ogawa Lillrank Dan, Sato Motoshige, Tomeoka Kenichi, Sasai Shuntaro
TITLE=A comparison of visual and auditory EEG interfaces for robot multi-stage task control
JOURNAL=Frontiers in Robotics and AI
VOLUME=11
YEAR=2024
URL=https://www.frontiersin.org/journals/robotics-and-ai/articles/10.3389/frobt.2024.1329270
DOI=10.3389/frobt.2024.1329270
ISSN=2296-9144
ABSTRACT=

Shared autonomy holds promise for assistive robotics, whereby physically impaired people can direct robots to perform various tasks for them. However, a robot capable of many tasks also presents the user with many choices, such as which object or location should be the target of an interaction. In the context of non-invasive brain-computer interfaces for shared autonomy, most commonly based on electroencephalography (EEG), the two most common designs present either auditory or visual stimuli to the user, each with its own pros and cons. Using the oddball paradigm, we designed comparable auditory and visual interfaces that speak or display the available choices, and had users complete a multi-stage robotic manipulation task involving both location and object selection. Users displayed differing competencies and preferences across the two interfaces, highlighting the importance of considering modalities beyond vision when constructing human-robot interfaces.
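
A note for readers unfamiliar with the method: in the oddball paradigm, the stimulus the user attends to must occur rarely within the overall stream so that it elicits a detectable event-related EEG response (typically a P300). The Python sketch below is illustrative only, not the authors' code; the option names, block structure, and parameters are assumptions. It shows one conventional way a stimulus stream could be generated for a single selection stage, presenting each candidate equally often in shuffled blocks so that any one attended option remains rare relative to the full stream.

import random

def oddball_stream(options, n_blocks=10, seed=0):
    """Generate a randomized oddball stimulus stream over `options`.

    Each block presents every option once in a fresh random order, so
    with k options any single attended option appears with probability
    1/k per stimulus, i.e. rarely enough to act as an oddball target.
    """
    rng = random.Random(seed)
    stream = []
    for _ in range(n_blocks):
        block = list(options)
        rng.shuffle(block)
        # Reshuffle if a block would repeat the previous stimulus;
        # back-to-back repeats blunt the oddball response.
        while len(block) > 1 and stream and block[0] == stream[-1]:
            rng.shuffle(block)
        stream.extend(block)
    return stream

if __name__ == "__main__":
    # Hypothetical choices for one stage of a manipulation task.
    locations = ["shelf", "table", "bin", "tray", "user"]
    print(oddball_stream(locations, n_blocks=3))

In such a setup, each stimulus in the stream would be spoken aloud (auditory interface) or flashed on screen (visual interface), with the EEG response time-locked to stimulus onset for classification.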