EDITORIAL article
Front. Psychol.
Sec. Quantitative Psychology and Measurement
Volume 16 - 2025
doi: 10.3389/fpsyg.2025.1549236
This article is part of the Research Topic "Neuropsychological Testing: From Psychometrics to Clinical Neuropsychology".
Neuropsychological Testing: From Psychometrics to Clinical Neuropsychology
Provisionally accepted
1 Neuroscience Research Center, Department of Medical and Surgical Sciences, Magna Graecia University, Catanzaro, Italy
2 University of Rome Tor Vergata, Roma, Lazio, Italy
3 University College London Hospitals NHS Foundation Trust, London, United Kingdom
The development of novel neuropsychological tests is crucial to advancing our understanding of brain-behavior relationships in an ever-changing social context. Innovative testing methods that incorporate new technology or advances in cognitive neuroscience allow us to better capture cognitive changes and provide more personalized treatment plans (7). As the fields of neuropsychology and neurorehabilitation move towards a greater dependence on computerized or digitalised tools, it is important to consider the suitability of these tools for the individual. The article "Diagnosing homo digitalis: towards a standardized assessment for digital tool competencies" explores this concept using the "Digital Tools Test" (DIGI), a standardized instrument designed to evaluate digital tool competencies in a sample of young people and older adults. Preliminary results highlight performance differences between age groups, with older adults showing lower proficiency in navigating digital tools. In the future, digital tool competency assessments such as the DIGI may be incorporated into standard neuropsychological assessments.
As technological advances make biometric measurements more accessible, the study "Using behavior and eye-fixations to detect feigned memory impairment" explores the use of both response type/time and eye-fixation measures to detect feigned memory impairment through a computerized version of the well-established TOMM. Results revealed distinct behavioral patterns for genuine and feigned memory impairment. The findings highlight the potential of eye-tracking metrics to enhance standard paper-and-pencil neuropsychological tools.
Finally, the opinion piece "Performance validity testing: the need for digital technology and where to go from here" discusses the use of digital technologies to enhance Performance Validity Assessment (PVA).
Taking an alternative approach, the article "Navigating the "frontal lobe paradox": integrating Real-Life Tasks (RLTs) approach into neuropsychological evaluations" explores the "frontal lobe paradox" and proposes that integrating Real-Life Tasks (RLTs) may also enhance standard paper-and-pencil tasks. The "frontal lobe paradox" is a well-described phenomenon in neuropsychology whereby some patients with frontal lobe compromise report a host of executive difficulties in daily activities but perform reasonably well on standardized neuropsychological tests. A framework for assessing frontal dysfunction using a variety of RLTs is presented.
The evaluation of psychometric properties is essential for selecting reliable and valid instruments, making it a fundamental aspect of clinical practice and research in many areas (8). Unfortunately, many instruments still lack thorough or complete validation, which hinders their practical application (9). In this special issue, particular emphasis has been placed on the psychometric properties of various existing neuropsychological instruments, and notable advancements have also been reported. The study "A new neuropsychological tool for simultaneous reading and executive functions assessment: initial psychometric properties" presents the development and initial validation of a new tool for the Assessment of Reading and Executive Functions (AREF) in children. The findings highlight the interdependence of executive functions, such as inhibitory control, cognitive flexibility and working memory, with reading skills. Once new tests such as the AREF are validated and in use, further validation studies and developments can improve their clinical utility.
Country-specific validation of tests helps overcome inherent cultural, language and educational differences. The study "Psychometrics and validation of the EQ-5D-5L instrument in individuals with ischemic stroke in Lithuania" investigated the psychometric properties of the EQ-5D-5L instrument for assessing health-related quality of life (HRQoL) in Lithuanian individuals who have experienced stroke, while the study "Reliability and validity of a novel attention assessment scale (broken ring enVision search test) in the Chinese population" investigated the reliability and validity of the BReViS test for assessing attention in the Chinese population.
It is also important to understand the test-retest reliability of our tools for monitoring change over time. The study "Reliability and minimal detectable change of the Yoni task for the theory of mind assessment" investigates the test-retest reliability of the Yoni-48 task, a tool for assessing Theory of Mind (ToM) in social cognition, and establishes the minimal detectable change for determining clinical significance. Lastly, shortening established tests can often improve clinical utility, but the same validation rigour must be applied before use. The study "Short Italian Wilkins Rate of Reading Test for repeated-measure designs in optometry and neuropsychology" focuses on the development of the Short Italian Wilkins Rate of Reading Test, enhancing the test's applicability to elderly and neuropsychological patients by reducing reading time.
Meta-analyses and systematic reviews provide a comprehensive understanding of test properties by synthesizing vast amounts of research on a given topic. These studies help ascertain clinical utility with greater power and guide future research.
The article "Meta-analysis of Montreal cognitive assessment diagnostic accuracy in amnestic mild cognitive impairment" presents a systematic review assessing the diagnostic accuracy of the Montreal Cognitive Assessment (MoCA) for detecting amnestic mild cognitive impairment. The findings support the MoCA's utility as a screening tool in clinical settings but emphasize the need for context-specific cutoff adjustments. The article "Assessments scales for the evaluation of health-related quality of life in Parkinson's disease, progressive supranuclear palsy, and multiple system atrophy: a systematic review" provides a critical evaluation of the scales used to assess wellbeing in people with Parkinsonism. Although eight HRQoL tools were identified, questions were raised about the psychometric properties of these measures, which may limit their utility.
Keywords: psychometrics, testing, test development research, reliability, validity
Received: 20 Dec 2024; Accepted: 15 Jan 2025.
Copyright: © 2025 Facchin, Cavicchiolo and Chan. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence:
Alessio Facchin, Neuroscience Research Center, Department of Medical and Surgical Sciences, Magna Graecia University, Catanzaro, Italy
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.