- 1 Centro Dia.Ri., Motor and Cognitive Rehabilitation, Terricciola, Pisa, Italy
- 2 Dipartimento di Economia e Management, University of Pisa, Pisa, Tuscany, Italy
- 3 Virtuleap, Lisbon, Portugal
Virtual Reality (VR) environments have proven useful in memory assessment and have been shown to be more sensitive than pen-and-paper tests in prospective memory assessment. Moreover, these techniques provide the advantage of offering neuropsychological evaluations in a controlled, ecologically valid, and safe manner. In the present study, we used Enhance VR, a cognitive training and assessment tool in virtual reality, and evaluated user performance by means of its in-game scoring system. The primary goal of this study was to compare Enhance VR in-game scores to already existing validated cognitive assessment tests. As a secondary goal, we tested the tolerance and usability of the system. Forty-one older adults took part in the study (mean age = 62.8 years). Each participant was evaluated with a predefined set of traditional pen-and-paper cognitive assessment tools and played four VR games. We failed to find a significant positive effect of the traditional pen-and-paper scores in explaining the variability of the Enhance VR game scores that address the same cognitive ability. This lack of effect may be related to the gamified environment of Enhance VR, where players gain or lose points depending on their game performance, thus deviating from the scoring system used in traditional methodologies. Moreover, while the games were inspired by traditional assessment methodologies, presenting them in a VR environment might modify the processing of the information provided to the participant. The hardware and Enhance VR games were extremely well tolerated, intuitive, and within the reach of even those with no VR experience.
Introduction
Neuropsychological evaluations allow clinicians and researchers to measure specific cognitive abilities, global cognitive functioning, and the anatomical integrity of brain areas. Neuropsychological tasks can be used as diagnostic or staging tools, but also as a way to monitor changes in cognitive performance during and after a pharmacological intervention or rehabilitation. Traditional pen-and-paper or screen-based neuropsychological tests have been shown to provide reliable evaluations of cognitive function. A caveat of these screen- and paper-based tools is that they lack verisimilitude (Spooner and Pachana, 2006) and fail to incorporate motor-related information.
The use of virtual reality (VR) environments, on the other hand, allows for strong sensory immersion and a sense of presence, given the simultaneous integration of the motor, visual, and proprioceptive systems (Sanchez-Vives et al., 2010). Virtual environments thus allow for a naturalistic interaction with the environment and provide an ecologically valid scenario. VR environments have become interesting tools in neuroscience and neuropsychology, as they allow for precise control over each experimental variable (Bohil et al., 2011). VR environments therefore provide the advantage of offering neuropsychological evaluations in a controlled, ecologically valid (Kourtesis et al., 2021), and safe manner (Rizzo et al., 2004). VR training systems have shown promising results in people with dementia (PwD) (Díaz-Pérez and Flórez-Lozano, 2018), allowing adaptive activities in a highly engaging environment (Mrakic-Sposta et al., 2018). VR thus has strong potential for evaluating cognitive performance in PwD and prodromal states, and virtual environments have been shown to be more sensitive than pen-and-paper tests in prospective memory assessment (Nolin et al., 2013). Few studies have assessed the effectiveness of VR scenarios in cognitive ability assessment. For instance, the VR-EAL system has shown positive user experiences and enhanced ecological validity when compared to pen-and-paper tests (Kourtesis et al., 2021), and its tests have shown positive correlations with pen-and-paper tasks. The Virtual Shop was shown to correlate with traditional memory tasks (Ouellet et al., 2018), and a virtual version of the Trail Making Test has shown feasibility and validity when converted into a virtual environment, preserving the task features (Plotnik et al., 2021).
The Enhance VR app (Virtuleap, Portugal, www.virtuleap.com) is a VR-based cognitive training and assessment system. Enhance VR consists of a library of cognitive exercises (hereafter, games) categorized into seven cognitive domains: memory, attention, problem-solving, spatial orientation, cognitive flexibility, information processing, and motor control. Each game session is designed as a gamified experience of already existing validated neuropsychological principles (Brugada-Ramentol et al., 2022). Every game starts with a benchmark session that aims to find the baseline performance of the participant; from there on, every session starts at the level where the user left off. The difficulty of the Enhance VR games increases as the performance of the individual improves. The details of how game difficulty is adjusted are proprietary, but some parameters are inferable, such as the increasing length of the pattern to be memorized in Memory Wall, or the complexity of the recipes in Pizza Builder, which mimic the parameters that increase difficulty in the neuropsychological tests that inspired the games. A generic sketch of such an adaptive rule is given below.
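The exact adjustment rule is proprietary; purely as an illustration of the behaviour described above, a generic one-up/one-down staircase could look like the following R sketch (function and parameter names are ours, not Virtuleap's):

```r
# Illustrative only: the real Enhance VR rule is proprietary. A simple
# 1-up/1-down staircase captures the described behaviour, e.g., the pattern
# length in Memory Wall growing after successes and shrinking after failures.
adjust_difficulty <- function(level, success, min_level = 1, max_level = 20) {
  if (success) {
    min(level + 1, max_level)  # correct trial: raise difficulty
  } else {
    max(level - 1, min_level)  # failed trial: lower difficulty
  }
}

# Example: two successes followed by one failure
level <- 5
level <- adjust_difficulty(level, TRUE)   # -> 6
level <- adjust_difficulty(level, TRUE)   # -> 7
level <- adjust_difficulty(level, FALSE)  # -> 6
```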
The main goal of this study was to evaluate the scores obtained in the benchmark session of 4 Enhance VR games and compare them to the scores obtained in already existing validated cognitive assessment tests. A secondary aim was to evaluate the user experience of the Enhance VR app for its use as a cognitive training and assessment solution.
Methods
Participants
The present study was designed to avoid the well-known ceiling effects that current cognitive screening tools show in healthy young adults. We therefore collected data from older adults, which also allowed us to assess the tolerance and acceptance of a novel, highly technological tool in a population that stands to benefit from early detection of cognitive decline. Volunteers were recruited via social media announcements.
A total of 41 volunteer participants took part in the study (mean age = 62.8 years, range = 55–82 years; male:female = 20:21, Supplementary Table S1). Each participant was evaluated by a physician (DB) to check for any exclusion criteria. The inclusion criteria for the present study were: 1) male or female subjects aged between 50 and 85 years, 2) ability to give informed consent, and 3) ability to speak, read, and write the Italian language. The exclusion criteria were: 1) history of neurodegenerative or psychiatric disorders, or 2) the presence of any issue that might interfere with the trial or pose a risk for the participant during the VR experience, such as muscular, articular, balance, or cardiovascular issues. None of the participants had previous experience with VR.
Experimental design
Each participant underwent two testing sessions, which occurred on two different days. In one session the participants would undergo cognitive testing using validated neuropsychological tests (see Neuropsychological evaluations). In the other session, they would engage with 4 Enhance VR games (see VR Hardware and Enhance VR Games). The order was randomized among the participants: once the schedule of sessions was established based on the availability of the operators, participants were assigned to a specific slot in a random manner (by drawing). Any session could be interrupted on request or if we detected signs of discomfort, disorientation, or intolerance.
Neuropsychological evaluation
Neuropsychological evaluation was performed by three neuropsychologists (CZ, IN, SO). The battery of tests included:
• The Italian version of the Montreal Cognitive Assessment (MoCA; Nasreddine et al., 2005; Pirani et al., 2006; Santangelo et al., 2015; Conti et al., 2015) to detect cognitive impairment. The MoCA includes twelve subtests covering attention, executive functions, memory, language, abstraction, and orientation.
• The Stroop task (Stroop, 1935) to evaluate response inhibition. The participants need to report the color of the ink in which the name of a color is written.
• The Wisconsin Card Sorting Test (WCST; Grant and Berg, 1948) to assess task-switching abilities. The participants need to classify cards according to one of three changing rules.
• The Trail Making Test (TMT; Reitan, 1958) assesses information processing and visual search abilities. The TMT consists of two parts: Part A (TMT-A) assesses information processing by requiring the participant to order numbers in ascending order, while Part B (TMT-B) also assesses task switching by alternating between numbers and letters.
• The Rey–Osterrieth complex figure (Osterrieth, 1944) evaluates memory and visuospatial abilities. The test requires the participant to copy a figure, which will later have to be redrawn from memory.
• The Italian standardization of the Word Pair Test (Novelli et al., 1986) evaluates anterograde memory. The examiner reads ten pairs of words for the subject to memorize.
• The Clock Drawing Test (CDT; Goodglass and Kaplan, 1983) evaluates verbal understanding, spatial knowledge, and visual memory, where the subject is asked to draw a clock indicating “10 minutes past 11 o’clock.”
• The Corsi Block-Tapping test (CBT; Corsi, 1972) evaluates short-term memory. In this test, the experimenter taps a sequence of blocks, and the subject needs to tap, in the same order, the blocks that the experimenter showed.
• The Visual Search Test (Matrix test; Spinnler and Tognoni, 1987) is a test where the subject is asked to search for target stimuli (usually numbers) in a set of three matrices.
VR hardware and Enhance VR games
The present study used 4 of the Enhance VR games (Supplementary Figure S1, see https://youtu.be/aSfKK6TC38I?t=286 for visuals on the games):
• Magic Deck (memory) is inspired by the Paired Associates Learning (PAL) test (Sahakian et al., 1988). The subject has to memorize the locations of a set of cards displaying colorful abstract patterns.
• Memory Wall (memory) is motivated by the Visual Pattern Test (Della Sala et al., 1997), which relates to short-term memory. The subject is shown a grid of cubes of variable size, in which a predetermined number of cubes light up to create a pattern that the user has to memorize and recreate.
• Pizza Builder (attention) is inspired by divided attention assessments. The subject has to simultaneously take incoming orders and assemble and cook different pizzas accordingly.
• React (cognitive flexibility) is inspired by the Wisconsin Card Sorting Test (Grant and Berg, 1948) and the Stroop test (Stroop, 1935). The subject has to sort a stream of incoming stimuli into two categories, represented as portals. The position of the portals, which accept stimuli of a specific shape and color, changes unannounced, and the user has to adapt to the paradigm shift.
The participants underwent the experience using the Meta Quest (Oculus VR, California), a commercially available and widespread standalone headset. The subject interacts with the virtual environment using two controllers. The sessions were administered by a physician (DB) in a large laboratory room: two games were played standing (React and Pizza Builder), while the other two were played sitting down. This study uses the first session of each game (hereafter, benchmark session), which is designed to find the peak performance of each participant and provides a score from 0 to 100; this allows us to compare the results to the neuropsychological evaluations. The benchmark duration varies depending on the game: 9 min for Magic Deck and Pizza Builder and 6 min for Memory Wall and React.
Immersion and presence
To assess the tolerance of the headset and the perception of immersion in the virtual environment, at the end of their session each subject reported their feedback by means of a 9-item questionnaire (Slater-Usoh-Steed questionnaire, based on Usoh et al., 2000, Supplementary Table S2) on a 7-point Likert scale ranging from 1 (I completely disagree) to 7 (I completely agree).
Data handling
All the results (neuropsychological tests, Enhance VR scores for each game, and post-session questionnaires) were pseudo-anonymised following GDPR guidelines and recommendations. We removed all personally identifiable information from the database, including fields irrelevant to the analysis, such as name and date of birth, and replaced them with a pseudonym (an arbitrary numeric sequence). The information needed for de-anonymisation was kept secured and physically separated from the rest of the data during analysis; we avoided any cloud-based storage to guarantee the participants' privacy and data confidentiality. A minimal sketch of this step is shown below.
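As an illustration, the pseudonymisation step could look like the following R sketch; the data frame `raw` and its column names are hypothetical, and the actual procedure used may have differed in its details.

```r
# Hypothetical sketch; `raw` is an assumed data frame with identifying columns
# name and dob plus the test scores.
set.seed(1)                                    # fixed seed for a reproducible shuffle
raw$pseudonym <- sample(seq_len(nrow(raw)))    # arbitrary numeric pseudonyms

key <- raw[, c("name", "dob", "pseudonym")]    # de-anonymisation key,
write.csv(key, "key.csv", row.names = FALSE)   # stored offline and separately

# working dataset: pseudonym and analysis variables only, no identifiers
deidentified <- raw[, setdiff(names(raw), c("name", "dob"))]
```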
We performed an intra-individual comparison of the results from the Enhance VR scores and the neuropsychological evaluations for each participant. We also calculated the Pearson correlation matrix with the corresponding statistical significance (Supplementary Table S8). The software used was R version 4.2.3 (R Core Team, 2021). We refer to each Enhance VR game score as the Virtual Reality index (VRi). For each VRi obtained, we estimated a linear regression model where the VRi is regressed on individual characteristics (i.e., age, gender, education) and the scores obtained on the traditional neuropsychological evaluations (hereafter referred to as Real Life indices, RLi).
The model can be defined as:

$$\mathrm{VRi}_j = \beta_0 + \beta_1\,\mathrm{Age}_j + \beta_2\,\mathrm{Gender}_j + \beta_3\,\mathrm{Education}_j + \sum_{k}\gamma_k\,\mathrm{RLi}_{k,j} + \varepsilon_j$$

where $\mathrm{VRi}_j$ is the Enhance VR game score of participant $j$, $\mathrm{Age}_j$, $\mathrm{Gender}_j$, and $\mathrm{Education}_j$ are the individual characteristics, $\mathrm{RLi}_{k,j}$ is the score obtained by participant $j$ on the $k$-th traditional test, and $\varepsilon_j$ is the error term.
The models are estimated by ordinary least squares (OLS) with robust standard errors that allow for the presence of heteroskedasticity.
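For illustration, the estimation step can be sketched in R as follows; the data frame `df` and its column names are hypothetical, and HC1 is one common robust variance estimator (the specific variant used is not reported here).

```r
# Sketch of the analysis described above, assuming a data frame `df` with one
# row per participant and hypothetical columns vri, age, gender, education,
# and rli (additional RLi scores would enter the formula the same way).
library(sandwich)
library(lmtest)

# Pearson correlation matrix (cf. Supplementary Table S8) and one pairwise test
cor(df[, c("vri", "rli", "age", "education")], use = "pairwise.complete.obs")
cor.test(df$vri, df$rli)

# VRi regressed on individual characteristics and the matching RLi score(s)
fit <- lm(vri ~ age + gender + education + rli, data = df)

# OLS coefficients with heteroskedasticity-robust standard errors
coeftest(fit, vcov = vcovHC(fit, type = "HC1"))
```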
Results
React
38 out of 41 participants completed the React session. Three participants failed to complete it: one had a pre-existing shoulder issue and two reported confusion caused by the continuous spawning of new targets. The mean score of React was 24.48 (std = 14.19). The statistical analysis shows a strong and significant (p < 0.05) effect of gender on the React index, with females obtaining lower scores (estimated coefficient −18.069; Pearson's r = −0.42; male mean = 30.65; female mean = 18.93). None of the other regressors was statistically significant (Supplementary Table S4), and the model explains only about 5.3% of the variability of React in the sample, as measured by the adjusted R².
Magic Deck
All 41 participants completed the Magic Deck session. Overall, the mean score of Magic Deck was 45.26 (std = 13.61). We found statistically significant impacts on the Magic Deck score of female gender (−7.761, p < 0.05; male = 47.57, female = 43.06), TMT-B (0.103, p < 0.05), WCST Global score (−0.115, p < 0.01), and TMT-A (−1.056, p < 0.01), together with a significant positive impact of the Rey–Osterrieth figure score (2.602, p < 0.01) (Supplementary Table S5). Although we only have 41 observations, the model explains about 52% of the variability of the Magic Deck score.
Memory Wall
39 out of the 41 participants completed the Memory Wall session. The mean score of Memory Wall was 45.46 (std = 10.10). The statistical analysis shows a strong and significant negative impact (−8.670, p < 0.01) of female gender on the game score (male = 51.52; female = 43.73), as well as less pronounced but still statistically significant impacts of raw Stroop time (−0.511, p < 0.05) and TMT-A (−0.509, p < 0.01) (Supplementary Table S6). The model explains a large portion of the variability of Memory Wall in the sample (near 64%).
Pizza Builder
In Pizza Builder, one of the 41 participants dropped out of the session. The mean score for Pizza Builder was 17.22 (std = 13.76). We found statistically significant negative impacts on the Pizza Builder scores of age (−0.939, p < 0.01; Pearson's r = −0.34, p < 0.01) and of the CBT (−4.072, p < 0.05) (Supplementary Table S7). In this model, about 23% of the variability of Pizza Builder is explained by the regressors.
Questionnaires
The nine items included in the questionnaire, along with a brief explanation, are listed in Supplementary Table S2.
The highest-scoring items were notHindrance, easyLearning, willingtoRepeat, and VRasDiagnostic. Overall, this suggests high acceptance and engagement, and that most people would like VR cognitive exercises to become part of a normal diagnostic or therapeutic routine. Our results did, however, show slightly lower scores for the realMoment and realExperience items, suggesting that, while enjoyable, the experience was not perceived as comparable to a real one (Figure 1).
FIGURE 1. Responses to the immersion and presence questionnaire. At the end of the experience, the participants rated 9 items on usability and acceptance of the VR experience on a 7-point Likert scale. The items are described in Supplementary Table S2. The red line represents the median value of the responses, while the box represents the interquartile range. The whiskers represent the most extreme values that are not considered outliers.
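For reference, a Figure 1-style summary can be produced in base R; the data frame name `responses` and its layout (one 1–7-rated column per item) are assumptions.

```r
# Hypothetical sketch: `responses` holds one column per questionnaire item,
# each containing the participants' 1-7 Likert ratings.
boxplot(responses,
        ylim = c(1, 7), ylab = "Likert rating (1-7)",
        las = 2,           # vertical item labels
        medcol = "red")    # red median line, as in Figure 1
```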
Discussion
The current study aimed to evaluate the relationship between 4 Enhance VR game scores (VRi) and validated methodologies that target the same cognitive abilities (e.g., memory, attention, executive functioning). Overall, we failed to find a significant positive effect of the traditional pen-and-paper scores on the game scores addressing the same cognitive ability. In the following section, we discuss each game individually.
Neuropsychological evaluation and Enhance VR scores
React is motivated by the mechanics of the Stroop test (it requires the participants to make quick decisions about game objects according to their features) and the WCST (the participants need to quickly adapt to changes in the classification paradigm). We therefore investigated whether these task scores have an impact on the React VRi, which was not the case in the present study. The estimated coefficients of the other variables are weak and not statistically significant, with the exception of the negative impact of the female gender. Qualitative feedback indicated that the users' main difficulty was throwing the objects accurately: they either missed the portal or launched in the wrong direction. This feedback suggests that the physical intensity introduced by the added motor component could be diluting the relationship with the validated tests. Future studies with React should include an evaluation against motor speed and accuracy tests.
Magic Deck is inspired by an episodic memory test, as it requires the participants to memorize the positions of a set of abstract patterns shown on cards. We found a positive impact on the Magic Deck VRi of the Rey–Osterrieth figure score, which evaluates delayed recall. The Magic Deck VRi is also positively impacted by the WCST, and negatively by the female gender, TMT parts A and B, and WCST_Fails. Magic Deck is one of the most relaxed games in the library and lacks intense physical interaction, as the user only has to select a card with the raycaster pointer.
Memory Wall is a slow-paced game that engages short-term memory by requiring the participants to memorize and subsequently reproduce a pattern of cubes on a grid using a raycaster pointer. Unexpectedly, the Matrix test negatively impacted the Memory Wall scores; this phenomenon cannot be explained by the currently available data. Furthermore, TMT-A and TMT-B have a positive effect on the Memory Wall VRi, while gender has a negative effect: on average, female participants performed worse at Memory Wall. Similar to Magic Deck, Memory Wall has a very small motor component.
Pizza Builder is considered a divided attention and planning game, where the participants have to attend to incoming orders and prepare and cook multiple pizzas simultaneously. We found a significant negative impact on the Pizza Builder VR scores of the CBT scores and also of age: younger participants performed better at Pizza Builder than older participants. The significance of the negative impact of the short-term memory test remains unclear and requires further investigation. The negative effect of age could, in part, be explained by an age-related decline in divided attention abilities (Fraser and Bherer, 2013). Pizza Builder is a dynamic and motor-demanding game in a similar fashion to React.
Overall, we failed to find a significant positive impact on the game scores (VRi) of the traditional pen-and-paper methodologies that addressed the same cognitive ability, with the exception of Magic Deck and a delayed recall measurement. It is worth noting that all but one VRi are lower in female participants. Further efforts to understand this relationship are necessary to eliminate confounding effects. For instance, VR systems have been shown to have a gender-dependent effect regarding cybersickness (Stanney et al., 2020).
We can offer several hypotheses for the lack of an effect of the traditional methodologies. First, the added motor component could be impairing the equivalence with paper-based methodologies. The results presented here show an effect of memory tests on Magic Deck scores, a game with a low motor component, but not on the motor-intensive games, Pizza Builder and React. This suggests that the added motor component of the naturalistic interaction presented in VR could be influencing the measurement of the assessed cognitive abilities. This would be the case, for example, in React, where representing the stimuli as constantly appearing shapes that the user needs to throw into portals adds a strong motor component to a task designed for cognitive flexibility and response inhibition. While the mechanics that inspired the games are taken from traditional assessment methodologies, presenting them in an environment that provides a naturalistic interaction might modify the processing of the information provided to the participant, resulting in different scores.
Another explanation could result from the adaptive nature of the Enhance VR level progression, meaning that the participants progress through the levels depending on their performance, increasing or decreasing difficulty accordingly. The difficulty of each test is parametrized according to the corresponding cognitive ability while maintaining parameters irrelevant to the cognitive ability constant. Thus, the scores might not fully correspond to the scoring system used in traditional methodologies.
VR experience
The participants reported that the games and the VR system were enjoyable and that they could be part of a therapeutic or diagnostic routine. Overall, the results obtained from the questionnaire indicate that the hardware and tests were extremely well tolerated, even by participants with lower MoCA scores, without any reported disorientation or confusion. The participants reported a very positive experience when using the Enhance VR app: they enjoyed the experience and were willing to repeat it. Furthermore, there seems to be a generalized opinion that VR experiences designed to assess and train cognitive performance, such as the one provided by Enhance VR, should be part of diagnostic or therapeutic solutions.
Although the Meta Quest provides an immersive experience of the environment, the graphic design of the games may prevent the situation from being perceived as real, as they take place in a futuristic scenario with a robot companion. Even though the realism is reduced, this may be the preferred approach for VR scenarios aimed at cognitive performance evaluation. First, an abstract, distraction-free scenario can improve reliability, making the tests more specific and sensitive and limiting the influence of confounding factors. Moreover, overly realistic scenarios could confuse participants with initial cognitive impairment, requiring extra effort to enter and then leave the virtual environment.
The use of Enhance VR as a reliable cognitive screening device would offer many advantages over traditional testing. First, being self-administered, the user might be more involved and less nervous (avoiding the so-called white-coat effect), given also the ecological setup (such as assembling a pizza or throwing objects into portals). Second, as Enhance VR is a gamified scenario, it can be entertaining for the participants, something that cannot be said of many widely used cognitive tests. Both of these features have proven crucial to keeping users engaged in neurorehabilitation programs (Slater et al., 2010).
Limitations and future perspectives
The Enhance VR system, however, is not without limitations. The presence of a timer discouraged slower participants or those with greater motor impairment, and in the most hectic phases of the games some lost the thread and did not understand what they should do. Other participants struggled to adapt to the continuously changing requests and found it challenging to divide their attention between different elements. Despite this, the participants reported no frustration and consistently described the experience as positive and fun. Future versions of the software should address these concerns and make sure that older users are provided with enough feedback to counteract these negative, although rare, effects.
The present study uses the Enhance VR game scores as a way to evaluate the cognitive performance of the users. We failed to find a correlation between the Enhance VR scores and validated neuropsychological tests that address the same cognitive category. However, the Enhance VR app leverages another advantage of VR systems: extensive and fine-grained behavioral data collection, which allows for a detailed reconstruction of all game-related events. Further work will be necessary to investigate the potential predictive value of traditional methodology scores and to correct for the added motor information. Moreover, novel VR paradigms may need to be established to estimate the global cognitive status of the subject, and these paradigms will likely be far from what we currently know from pen-and-paper testing.
Finally, each participant was compared to themselves, avoiding ethical issues (misdiagnosed patients), the need for population stratification, and other common biases. Future studies require the collection of a large dataset to establish normative data for the Enhance VR scores in the target age group.
Conclusion
The advent of more affordable VR devices, like the Meta standalone headsets, has paved the way for the widespread use of immersive virtual scenarios for diagnostic and cognitive training purposes. VR applications have the advantage of providing an ecological environment with a naturalistic interaction, especially when compared with traditional pen-and-paper setups. The complete control of all the stimuli and environmental variables allows for maximum flexibility, which can promote motivation, adherence, and compliance with the tasks.
The positive attitude towards VR systems as a diagnostic tool reported by our sample of older adults suggests that VR cognitive assessment systems, such as Enhance VR, have the potential to be seamlessly integrated within the healthcare setting and the current diagnostic routine. The Enhance VR app provides a system for the early detection of potential cognitive impairment, even when patients do not suspect or have any subjective sensation of a disorder, thereby saving time and effort for healthcare providers.
With the increase in life expectancy and the continued growth of the senior population, the prevalence of dementia is expected to double by 2030 (World Health Organization Regional Office for Europe, 2023). Dementia and dementia-related disorders are widely considered one of the leading causes of dependency and disability in older adults. The detection of the earliest manifestations of cognitive decline becomes crucial for early intervention and mitigating the effects on the individual’s autonomy.
Data availability statement
The original contributions presented in the study are included in the article/Supplementary Material, further inquiries can be directed to the corresponding author.
Ethics statement
The studies involving humans were approved by the eCampus Online University—Comitato Etico. The studies were conducted in accordance with the local legislation and institutional requirements. The participants provided their written informed consent to participate in this study. Written informed consent was obtained from the individual(s) for the publication of any potentially identifiable images or data included in this article.
Author contributions
DB: coordination and VR trials. CZ, IN, and SO: neuropsychological assessment. AP: statistical analysis. VB-R, HJ, and AB: coordination and technical support.
Conflict of interest
AB and HJ are founders and CEO and CTO, respectively, of Virtuleap and VB-R is employed as a Lead Neuroscientist at Virtuleap.
The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher’s note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Supplementary material
The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/frvir.2023.1153145/full#supplementary-material
References
Bohil, C. J., Alicea, B., and Biocca, F. A. (2011). Virtual reality in neuroscience research and therapy. Nat. Rev. Neurosci. 12 (12), 752–762. doi:10.1038/nrn3122
Brugada-Ramentol, V., Bozorgzadeh, A., and Jalali, H. (2022). Enhance VR: a multisensory approach to cognitive training and monitoring. Front. Digital Health 4, 916052. doi:10.3389/fdgth.2022.916052
Conti, S., Bonazzi, S., Laiacona, M., Masina, M., and Coralli, M. V. (2015). Montreal Cognitive Assessment (MoCA)-Italian version: regression based norms and equivalent scores. Neurol. Sci. 36 (2), 209–214. doi:10.1007/s10072-014-1921-3
Corsi, P. M. (1972). Human memory and the medial temporal region of the brain. Doctoral Thesis. Canada: McGill University.
Díaz-Pérez, E., and Flórez-Lozano, J. A. (2018). Realidad virtual y demencia [Virtual reality and dementia]. Rev. Neurol. 66 (10), 344–352. doi:10.33588/rn.6610.2017438
Fraser, S., and Bherer, L. (2013). Age-related decline in divided-attention: from theoretical lab research to practical real-life situations. Wiley Interdiscip. Rev. Cognitive Sci. 4 (6), 623–640. doi:10.1002/wcs.1252
Goodglass, H., and Kaplan, E. (1983). The assessment of aphasia and related disorders. Philadelphia: Lea and Febiger.
Grant, D. A., and Berg, E. (1948). A behavioral analysis of degree of reinforcement and ease of shifting to new responses in a Weigl-type card-sorting problem. J. Exp. Psychol. 38 (4), 404–411. doi:10.1037/h0059831
Kourtesis, P., Collina, S., Doumas, L. A. A., and MacPherson, S. E. (2021). Validation of the virtual reality everyday assessment lab (VR-EAL): an immersive virtual reality neuropsychological battery with enhanced ecological validity. J. Int. Neuropsychological Soc. 27 (2), 181–196. doi:10.1017/S1355617720000764
McDowd, J. M., and Craik, F. I. M. (1988). Effects of aging and task difficulty on divided attention performance. J. Exp. Psychol. Hum. Percept. Perform. 14 (2), 267–280. doi:10.1037/0096-1523.14.2.267
Mrakic-Sposta, S., Di Santo, S. G., Franchini, F., Arlati, S., Zangiacomi, A., Greci, L., et al. (2018). Effects of combined physical and cognitive virtual reality-based training on cognitive impairment and oxidative stress in mci patients: a pilot study. Front. aging Neurosci. 10, 282. doi:10.3389/fnagi.2018.00282
Nasreddine, Z. S., Phillips, N. A., Bédirian, V., Charbonneau, S., Whitehead, V., Collin, I., et al. (2005). The Montreal Cognitive Assessment, MoCA: a brief screening tool for mild cognitive impairment. J. Am. Geriatrics Soc. 53 (4), 695–699. doi:10.1111/j.1532-5415.2005.53221.x
Nolin, P., Banville, F., Cloutier, J., and Allain, P. (2013). Virtual reality as a new approach to assess cognitive decline in the elderly. Acad. J. Interdiscip. Stud. 2 (8), 612–616. doi:10.5901/ajis.2013.v2n8p612
Novelli, G., Papagno, C., Capitani, E., Laiacona, M., Cappa, S. F., and Vallar, G. (1986). Tre test clinici di memoria a lungo termine [Three clinical tests of long-term memory]. Archivio di Psicologia, Neurologia e Psichiatria 47 (2), 278–296.
Osterrieth, P. A. (1944). Le test de copie d'une figure complexe; contribution à l'étude de la perception et de la mémoire [Test of copying a complex figure; contribution to the study of perception and memory]. Arch. Psychol. 30, 206–356.
Ouellet, É., Boller, B., Corriveau-Lecavalier, N., Cloutier, S., and Belleville, S. (2018). The Virtual Shop: a new immersive virtual reality environment and scenario for the assessment of everyday memory. J. Neurosci. Methods 303, 126–135. doi:10.1016/j.jneumeth.2018.03.010
Parsons, T. D., and Rizzo, A. A. (2008). Initial validation of a virtual environment for assessment of memory functioning: virtual reality cognitive performance assessment test. CyberPsychology Behav. 11 (1), 17–25. doi:10.1089/cpb.2007.9934
Pinto, T., Machado, L., Bulgacov, T., Rodrigues-Júnior, A., Costa, M., Ximenes, R., et al. (2019). Is the Montreal cognitive assessment (MoCA) screening superior to the mini-mental state examination (MMSE) in the detection of mild cognitive impairment (MCI) and alzheimer’s disease (AD) in the elderly? Int. Psychogeriatrics 31 (4), 491–504. doi:10.1017/S1041610218001370
Pirani, A., Tulipani, C., and Neri, M. (2006). Italian translation of MoCA test and of its instructions. Available at: http://www.mocatest.org.
Plotnik, M., Ben-Gal, O., Doniger, G. M., Gottlieb, A., Bahat, Y., Cohen, M., et al. (2021). Multimodal immersive trail making-virtual reality paradigm to study cognitive-motor interactions. J. Neuroeng Rehabil. 18 (1), 82. doi:10.1186/s12984-021-00849-9
R Core Team (2021). R: a language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing.
Reitan, R. M. (1958). Validity of the trail making test as an indicator of organic brain damage. Percept. Mot. Ski. 8 (3), 271–276. doi:10.2466/pms.8.7.271-276
Rizzo, A. A., Schultheis, M., Kerns, K. A., and Mateer, C. (2004). Analysis of assets for virtual reality applications in neuropsychology. Neuropsychol. Rehabil. 14 (1-2), 207–239. doi:10.1080/09602010343000183
Sahakian, B. J., Morris, R. G., Evenden, J. L., Heald, A., Levy, R., Philpot, M., et al. (1988). A comparative study of visuospatial memory and learning in Alzheimer-type dementia and Parkinson’s disease. Brain 111 (3), 695–718. doi:10.1093/brain/111.3.695
Sala, S. D., Gray, C., Baddeley, A., and Wilson, L. (1997). Visual Patterns Test: a test of short-term visual recall. Bury St Edmunds: Thames Valley Test Company. Available at: https://research-information.bris.ac.uk/en/publications/visual-patterns-test-a-test-of-short-term-visual-recall.
Sanchez-Vives, M. V., Spanlang, B., Frisoli, A., Bergamasco, M., and Slater, M. (2010). Virtual hand illusion induced by visuomotor correlations. PLoS ONE 5 (4), e10381. doi:10.1371/journal.pone.0010381
Santangelo, G., Siciliano, M., Pedone, R., Vitale, C., Falco, F., Bisogno, R., et al. (2015). Normative data for the Montreal Cognitive Assessment in an Italian population sample. Neurol. Sci. 36, 585–591. doi:10.1007/s10072-014-1995-y
Slater, M., Spanlang, B., Sanchez-Vives, M. V., and Blanke, O. (2010). First person experience of body transfer in virtual reality. PLoS ONE 5 (5), e10564. doi:10.1371/journal.pone.0010564
Spinnler, H., and Tognoni, G. (1987). Standardizzazione e taratura italiana di test neuropsicologici. Ital. J. Neurol. Sci. 8, 1–120.
Spooner, D. M., and Pachana, N. A. (2006). Ecological validity in neuropsychological assessment: a case for greater consideration in research with neurologically intact populations. Archives Clin. Neuropsychology 21 (4), 327–337. doi:10.1016/j.acn.2006.04.004
Stanney, K., Fidopiastis, C., and Foster, L. (2020). Virtual reality is sexist: but it does not have to Be. Front. Robotics AI 7, 4. doi:10.3389/frobt.2020.00004
Stroop, J. R. (1935). Studies of interference in serial verbal reactions. J. Exp. Psychol. 18 (6), 643–662. doi:10.1037/h0054651
Usoh, M., Catena, E., Arman, S., and Slater, M. (2000). Using presence questionnaires in reality. Presence teleoperators Virtual Environ. 9 (5), 497–503. doi:10.1162/105474600566989
World Health Organization Regional Office for Europe (2023). Dementia. Available at: https://www.euro.who.int/en/health-topics/noncommunicable-diseases/mental-health/areas-of-work/dementia.
Keywords: immersive virtual reality, early detection, cognitive impairment, cognitive assessment, serious games
Citation: Borghetti D, Zanobini C, Natola I, Ottino S, Parenti A, Brugada-Ramentol V, Jalali H and Bozorgzadeh A (2023) Evaluating cognitive performance using virtual reality gamified exercises. Front. Virtual Real. 4:1153145. doi: 10.3389/frvir.2023.1153145
Received: 28 January 2023; Accepted: 23 October 2023;
Published: 07 November 2023.
Edited by: Flavio Bertini, University of Parma, Italy

Reviewed by: Tomasz Kupka, Polish Dental Association, Poland; Fabrizio Stasolla, Giustino Fortunato University, Italy; Sara Giovagnoli, University of Bologna, Italy
Copyright © 2023 Borghetti, Zanobini, Natola, Ottino, Parenti, Brugada-Ramentol, Jalali and Bozorgzadeh. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Davide Borghetti, info@davideborghetti.it