AUTHOR=Arango Tiffany, Yu Deyue, Lu Zhong-Lin, Bex Peter J.
TITLE=Effects of Task on Reading Performance Estimates
JOURNAL=Frontiers in Psychology
VOLUME=11
YEAR=2020
URL=https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2020.02005
DOI=10.3389/fpsyg.2020.02005
ISSN=1664-1078
ABSTRACT=
Reading is a primary problem for low vision patients and a common functional endpoint for eye disease. However, there is limited agreement on reading assessment methods for clinical outcomes. Many clinical reading tests lack standardized materials for repeated testing and cannot be self-administered, which limits their use for vision rehabilitation monitoring and remote assessment. We compared three reading assessment methods to address these limitations. Normally sighted participants (N = 12) completed MNREAD and two forced-choice reading tests at multiple font sizes in counterbalanced order. In a word identification task, participants indicated whether five-letter strings (pentagrams), syntactically matched to English, were words or non-words. In a true/false reading task, participants indicated whether four-word sentences presented in RSVP were logically true or false. The reading speed vs. print size data from each experiment were fit by an exponential function with parameters for reading acuity, critical print size, and maximum reading speed. In all cases, reading speed increased rapidly as an exponential function of text size. Reading speed and critical print size differed significantly across tasks, but reading acuity did not. Reading speeds were faster for the word/non-word and true/false tasks, consistent with the elimination of eye movement load in RSVP, but these tasks required larger text sizes to achieve those faster speeds. These reading tasks quantify distinct aspects of reading behavior, and the preferred assessment method may depend on the goal of the intervention. Reading performance is an important clinical endpoint and a key quality of life indicator; however, differences across methods complicate direct comparisons across studies.
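The abstract states only that reading speed vs. print size curves were fit by an exponential function with three parameters (reading acuity, critical print size, maximum reading speed); the exact functional form is not given. The sketch below is a minimal illustration of that kind of fit, assuming an exponential rise-to-maximum parameterization, made-up example data, and a hypothetical 90% criterion for critical print size; none of these details are taken from the paper.

```python
# Minimal sketch: fit reading speed vs. print size with an exponential
# rise-to-maximum curve. Functional form, parameter names, example data,
# and the 90% criterion are illustrative assumptions, not the authors' method.
import numpy as np
from scipy.optimize import curve_fit

def reading_speed(print_size, max_speed, reading_acuity, rate):
    """Speed is zero at the reading acuity limit and rises toward max_speed
    as print size (e.g., in logMAR) increases."""
    return max_speed * (1.0 - np.exp(-rate * (print_size - reading_acuity)))

# Illustrative data: print size in logMAR, reading speed in words per minute.
print_sizes = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0, 1.2])
speeds = np.array([2.0, 98.0, 145.0, 163.0, 172.0, 176.0, 179.0])

# Fit the three free parameters; p0 gives rough starting guesses.
params, _ = curve_fit(reading_speed, print_sizes, speeds, p0=[180.0, 0.0, 4.0])
max_speed, reading_acuity, rate = params

# Critical print size: smallest print size reaching a criterion fraction
# (here 90%, an assumed value) of the fitted maximum reading speed.
criterion = 0.9
critical_print_size = reading_acuity - np.log(1.0 - criterion) / rate
print(f"max speed ~ {max_speed:.0f} wpm, acuity ~ {reading_acuity:.2f} logMAR, "
      f"CPS ~ {critical_print_size:.2f} logMAR")
```

Under this parameterization, comparing tasks amounts to comparing the three fitted parameters: a faster task raises max_speed, while a task that needs larger text to reach that speed raises critical_print_size even if reading_acuity is unchanged.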