Predicting head rotation using EEG to enhance streaming of images to a Virtual Reality headset
Netherlands Organisation for Applied Scientific Research (TNO), Netherlands
Introduction
Virtual Reality (VR) enables individuals to be virtually present in another location, for instance at a stadium where their favorite soccer team is playing a match. However, when presenting 360° streaming images to a VR headset, solutions are needed to deal with limited bandwidth. So-called field-of-view approaches stream only the content that is in the current field of view and omit the rest because of bandwidth limitations. When the user rotates the head, other content parts need to be made available soon enough for the transition to go unnoticed. This problem can be partially solved, at the cost of some bandwidth (e.g., a loss of spatial resolution), by streaming not only the current field of view but also the directly surrounding content ('guard bands'), so that this content is already available for display when a head rotation actually takes place. If we could predict upcoming head rotations and their direction, we could optimize choices about spending bandwidth on high spatial resolution or on streaming the content that is expected to be viewed next, leading to an enhanced VR experience.
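The bandwidth trade-off described above can be sketched as a small allocation function. All names, the linear weighting and the guard-band floor below are illustrative assumptions, not part of the abstract's system:

```python
def allocate_bitrate(total_kbps, p_left, p_right, guard_floor=0.1):
    """Split a bitrate budget between the current field of view and the
    left/right guard bands, weighting each guard band by the predicted
    probability of an upcoming head rotation in that direction.

    Hypothetical sketch: the weighting scheme is not from the study.
    """
    p_left = max(p_left, guard_floor)    # always keep a minimal guard band
    p_right = max(p_right, guard_floor)
    guard_share = min(p_left + p_right, 0.5)  # cap total guard-band spending
    guard_kbps = total_kbps * guard_share
    left = guard_kbps * p_left / (p_left + p_right)
    right = guard_kbps - left
    fov = total_kbps - guard_kbps        # remainder buys FoV resolution
    return {"fov": fov, "guard_left": left, "guard_right": right}
```

With a confident rightward prediction, most of the guard-band budget shifts to the right guard band while the field of view keeps the rest.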
Predicting head rotations using EEG may be possible given the literature on EEG correlates of upcoming movements, such as the readiness potential (Leuthold et al., 2004). Also, arm, leg and foot movements have been predicted successfully in a BCI context before (e.g., Haufe et al., 2011; Lew et al., 2012). As far as we know, no such work has been done for head rotations. In an experiment, we tested whether head rotations can be predicted, and we simulated prediction in real time.
Methods and results
We asked eleven participants to generate random left- and rightward head rotations, self-paced but with a few seconds between movements, while wearing a VR headset (showing a black screen) and 32 EEG electrodes. Head movements were tracked using the motion-sensing system embedded in the headset. After defining rotation onset and rotation direction on the basis of the motion-sensing data, we trained personalized multi-layer perceptron models to distinguish EEG epochs preceding rightward, leftward and no rotation, using equally sized classes. Single epochs of unseen test data could be classified as belonging to one of the three rotation categories with accuracies ranging from 32% (chance level) to 79%. For four participants, accuracy was below 40%; for four it was over 70%; the remaining three ended up in between. The main difference in performance was caused by the strength of the bias to label data as preceding 'no rotation'. Simulating a real-time scenario by applying these models to streaming EEG data withheld from training also showed that the probability of 'no rotation' was consistently high but started to decrease at around 400 ms before rotation onset (see figure below). Slightly earlier, the probabilities of an upcoming right- or leftward rotation started to diverge in the correct directions.
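The training setup can be sketched with scikit-learn on synthetic data. The epoch dimensions, the injected lateralized offset and all parameter choices below are stand-ins for illustration; the study's actual features, preprocessing and network configuration are not specified in the abstract:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for pre-movement EEG epochs: 300 epochs of
# 32 channels x 16 time samples, flattened into feature vectors.
# Labels: 0 = no rotation, 1 = leftward, 2 = rightward (balanced classes).
n_per_class, n_features = 100, 32 * 16
X = rng.normal(size=(3 * n_per_class, n_features))
y = np.repeat([0, 1, 2], n_per_class)
# Inject a small class-dependent offset so the classes are separable
# (in real data this role is played by pre-movement EEG components).
X[y == 1, :100] += 0.5
X[y == 2, 100:200] += 0.5

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)          # chance level is 1/3 for balanced classes
proba = clf.predict_proba(X_te[:1])  # per-class probabilities for one epoch
```

In a streaming simulation, `predict_proba` would be applied to successive epochs to obtain the probability time courses described above.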
Discussion
This study showed the feasibility of predicting whether a head rotation is coming up, and in which direction, based on EEG data. Simulating a real-time scenario shows that to use this information, it is advisable not simply to take the most probable class as the prediction but to take the development of the three class probabilities over time into account. Improvements in the classification models are still possible, especially with respect to feature selection and reducing the observed overfitting. While we did not record eye movements, we do not think that our models capitalized on artifacts generated by eye movements. Studies on eye-head coordination show that motor commands for head movements generally precede those for eye movements, and that especially for intentional, large-amplitude movements (as in our case) head movements begin well before saccades (Freedman, 2008).
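A trend-based decision rule of the kind suggested above could look as follows. The specific thresholds and the drop-plus-margin heuristic are illustrative assumptions, not the procedure used in the study:

```python
import numpy as np

def predict_rotation(prob_history, drop=0.2, margin=0.15):
    """Decide from a short history of class probabilities whether a head
    rotation is imminent, and in which direction.

    prob_history: sequence of (no rotation, left, right) probability
    triples over consecutive time steps. Hypothetical heuristic: trigger
    only when 'no rotation' has fallen noticeably AND one direction
    leads the other by a clear margin.
    """
    p = np.asarray(prob_history)
    no_rot_drop = p[0, 0] - p[-1, 0]       # how far 'no rotation' fell
    if no_rot_drop < drop:
        return "no rotation"
    direction_gap = p[-1, 1] - p[-1, 2]    # left minus right at the end
    if direction_gap > margin:
        return "left"
    if direction_gap < -margin:
        return "right"
    return "no rotation"                   # direction not yet resolved
```

Compared with per-epoch argmax, such a rule only commits to a direction once the probability trajectories have diverged, reducing spurious predictions while 'no rotation' is still dominant.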
Depending on the VR context, e.g., whether the user is watching a live tennis game or is engaged in a group meeting, head rotations will be more or less strongly determined by the visual stimuli. This will affect the EEG, and likely also the specific signals associated with left-, right- and no rotation. It may prove helpful to build or adjust models separately for different contexts. In addition, and depending on the context, other features that are predictive of head rotations, derived from the presented visual and auditory stimuli, could be exploited in the model to improve predictions.
The proposed BCI has the potential for large-scale application, given that users would not need to wear additional equipment on the head and that the BCI can collect training data, train and validate itself on the fly, without the user even noticing. If the predictions are not deemed good enough, the VR system simply functions in its default way; if the model becomes better, presentation of imagery can be improved. In addition to this ability to monitor its own errors, the consequences of errors are not grave. All of this makes the cost of using the BCI in real life relatively low. The potential gain is an improved VR experience, and possibly a reduction of VR sickness.
Acknowledgements
This research was funded by the project Long Term Research on Networked Virtual Reality (Top consortium for Knowledge and Innovation), Surcharge Consortium Agreement for PPP projects 0100294530, with KPN as industrial partner. We thank Gert-Jan Schilt (KPN) for his ideas and support.
References
Freedman EG (2008) Coordination of the eyes and head during visual orienting. Exp Brain Res 190:369-387.
Haufe S, Treder MS, Gugler MF, Sagebaum M, Curio G, Blankertz B (2011) EEG potentials predict upcoming emergency brakings during simulated driving. J Neural Eng 8:056001.
Leuthold H, Sommer W, Ulrich R (2004) Preparing for action: inferences from CNV and LRP. J Psychophysiol 18:77-88.
Lew E, Chavarriaga R, Silvoni S, Millán J del R (2012) Detection of self-paced reaching movement intention from EEG signals. Front Neuroeng 5:13.
Keywords: BCI, EEG, virtual reality, head rotation, movement prediction
Conference: 2nd International Neuroergonomics Conference, Philadelphia, PA, United States, 27 Jun - 29 Jun, 2018.
Presentation Type: Oral Presentation
Topic: Neuroergonomics
Citation: Brouwer A, Van Der Waa J and Stokking H (2019). Predicting head rotation using EEG to enhance streaming of images to a Virtual Reality headset. Conference Abstract: 2nd International Neuroergonomics Conference. doi: 10.3389/conf.fnhum.2018.227.00121
Copyright: The abstracts in this collection have not been subject to any Frontiers peer review or checks, and are not endorsed by Frontiers.
They are made available through the Frontiers publishing platform as a service to conference organizers and presenters.
The copyright in the individual abstracts is owned by the author of each abstract or his/her employer unless otherwise stated.
Each abstract, as well as the collection of abstracts, are published under a Creative Commons CC-BY 4.0 (attribution) licence (https://creativecommons.org/licenses/by/4.0/) and may thus be reproduced, translated, adapted and be the subject of derivative works provided the authors and Frontiers are attributed.
For Frontiers’ terms and conditions please see https://www.frontiersin.org/legal/terms-and-conditions.
Received: 30 Mar 2018; Published Online: 27 Sep 2019.
* Correspondence: Dr. Anne-Marie Brouwer, Netherlands Organisation for Applied Scientific Research (TNO), Delft, Netherlands, anne-marie.brouwer@tno.nl