
PERSPECTIVE article

Front. Virtual Real.
Sec. Virtual Reality and Human Behaviour
Volume 5 - 2024 | doi: 10.3389/frvir.2024.1423756
This article is part of the Research Topic: Use of body and gaze in extended reality

Head-Area Sensing in Virtual Reality: Future Visions for Visual Perception and Cognitive State Estimation

Provisionally accepted
  • VTT Technical Research Centre of Finland Ltd, Espoo, Finland

The final, formatted version of the article will be published soon.

    Virtual reality technology has reached a level of maturity at which the focus can shift from creating devices that provide immersive virtual experiences to creating devices that also sense and understand the human user. Engaging audiovisual stimuli, user movement tracking, and biosensing integrated into a headset create an unprecedented framework for studying human visual perception in controlled yet close-to-natural settings. Headsets, near-eye displays, and future smart eyewear enable unobtrusive sensing of a variety of biosignals, such as eye and head movements, pupil size, blinks, heart rate, and brain activity, providing rich information on human physiology and psychophysiology. Additionally, modern AI methods can interpret such multimodal data to produce personalized estimates of the user's oculomotor behavior, visual perception, and cognitive state, and can further be used to control, adapt, and even create virtual audiovisual content in real time. This article proposes a visionary approach to visual perception and cognitive state estimation in virtual reality. We introduce ideas on combining virtual content with head-area sensing to estimate the user's active vision and cognitive state, and on how embedded protocols, adaptive sampling, and artificial intelligence could improve the accuracy of these estimates. The primary sensing focus is on eye and oculomotor parameters, as well as head movements, given their crucial role in visual perception. Finally, we outline potential application domains.
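    To make the ideas of multimodal fusion and adaptive sampling more concrete, the following minimal Python sketch (not from the article; the signal models, feature names, fusion rule, and thresholds are all illustrative assumptions) fuses synthetic head-area features into a cognitive-load score and shortens the sampling interval when the estimate changes rapidly.

    # Illustrative sketch only -- not the authors' method. Synthetic head-area
    # signals (pupil size, blink rate, head-movement energy) are fused by a
    # hypothetical rule into a 0..1 cognitive-load score, and the sampling
    # interval adapts to how fast that score is changing.
    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_window(load):
        """Synthesize one window of head-area features for a given 'true' load (0..1)."""
        pupil = 3.0 + 1.5 * load + rng.normal(0, 0.1)         # pupil diameter (mm) dilates with load
        blink_rate = 20.0 - 10.0 * load + rng.normal(0, 1.0)  # blinks/min drop under load
        head_motion = 0.5 - 0.3 * load + rng.normal(0, 0.05)  # head stabilizes during focused viewing
        return np.array([pupil, blink_rate, head_motion])

    def estimate_load(features):
        """Hypothetical fusion: normalize each feature and average into a 0..1 score."""
        pupil, blink_rate, head_motion = features
        z = np.array([
            (pupil - 3.0) / 1.5,          # larger pupil -> higher load
            (20.0 - blink_rate) / 10.0,   # fewer blinks -> higher load
            (0.5 - head_motion) / 0.3,    # less head motion -> higher load
        ])
        return float(np.clip(z.mean(), 0.0, 1.0))

    # Adaptive sampling: sample more often while the estimate is changing quickly.
    interval, prev = 1.0, None  # seconds between analysis windows
    for step in range(10):
        true_load = min(1.0, step / 10)  # slowly rising "ground truth" for the demo
        est = estimate_load(simulate_window(true_load))
        if prev is not None:
            interval = 0.25 if abs(est - prev) > 0.05 else 1.0
        print(f"t+{step}: estimated load={est:.2f}, next sample in {interval:.2f}s")
        prev = est

    In a real head-area sensing pipeline, the hand-tuned fusion rule would presumably be replaced by a learned, personalized model, and the sampling policy could weigh estimation uncertainty against the headset's power and compute budget.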

    Keywords: visual perception, oculomotor behavior, cognitive state estimation, virtual reality, adaptive sampling, artificial intelligence

    Received: 26 Apr 2024; Accepted: 29 Aug 2024.

    Copyright: © 2024 Pettersson, Tervonen, Heininen and Mäntyjärvi. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

    * Correspondence: Kati Pettersson, VTT Technical Research Centre of Finland Ltd, Espoo, Finland

    Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.