AUTHOR=Cho Changje, Park Sejik, Ma Sunmi, Lee Hyo-Jeong, Lim Eun-Cheon, Hong Sung Kwang
TITLE=Feasibility of video-based real-time nystagmus tracking: a lightweight deep learning model approach using ocular object segmentation
JOURNAL=Frontiers in Neurology
VOLUME=15
YEAR=2024
URL=https://www.frontiersin.org/journals/neurology/articles/10.3389/fneur.2024.1342108
DOI=10.3389/fneur.2024.1342108
ISSN=1664-2295
ABSTRACT=

Background

Eye movement tests remain significantly underutilized in emergency departments and primary healthcare units, despite their superior diagnostic sensitivity compared to neuroimaging modalities for the differential diagnosis of acute vertigo. This underutilization may be attributed to a lack of awareness of these tests and the absence of appropriate tools for detecting nystagmus. This study aimed to develop a nystagmus measurement algorithm using a lightweight deep-learning model that recognizes ocular regions.

Method

The deep learning model was used to segment the eye regions, detect blinking, and determine the pupil center. The model was trained on images extracted from video clips of a clinical battery of eye movement tests, together with synthetic images reproducing real eye movement scenarios in virtual reality. Each eye image was annotated with segmentation masks of the sclera, iris, and pupil, along with gaze vectors of the pupil center for eye tracking. We evaluated model performance and execution speed against several alternative models using metrics appropriate to each task.
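As a minimal illustration of the pipeline step described above, the pupil center can be estimated as the centroid of pupil-labeled pixels in a segmentation mask, with a blink flagged when too few pupil pixels are visible. This sketch is hypothetical (the class labels, threshold, and function names are assumptions, not the authors' implementation):

```python
import numpy as np

# Hypothetical class encoding for the per-pixel mask (an assumption,
# not from the paper): 0=background, 1=sclera, 2=iris, 3=pupil.
PUPIL = 3

def pupil_center(mask: np.ndarray, min_pixels: int = 30):
    """Return (cx, cy), the centroid of pupil-labeled pixels,
    or None when too few pupil pixels are visible (treated as a blink)."""
    ys, xs = np.nonzero(mask == PUPIL)
    if xs.size < min_pixels:   # pupil occluded by the eyelid -> blink
        return None
    return float(xs.mean()), float(ys.mean())

# Toy example: a 10x10 mask with a 2x2 pupil patch at rows 4-5, cols 6-7.
mask = np.zeros((10, 10), dtype=np.uint8)
mask[4:6, 6:8] = PUPIL
print(pupil_center(mask, min_pixels=1))  # -> (6.5, 4.5)
print(pupil_center(np.zeros((10, 10), dtype=np.uint8)))  # -> None (blink)
```

Tracking the centroid frame by frame yields horizontal and vertical eye-position traces from which nystagmus slow-phase velocity could then be derived.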

Results

The mean Intersection over Union values of the segmentation model ranged from 0.90 to 0.97 across classes (sclera, iris, and pupil) and image types (synthetic vs. real-world). The mean absolute error for eye tracking was 0.595 on real-world data, and the F1 score for blink detection was ≥ 0.95, indicating high accuracy. Under identical hardware conditions, our model also achieved the fastest execution speed for ocular object segmentation among the compared models. Predictions of horizontal and vertical nystagmus in real eye movement videos were highly accurate, with strong correlations between observed and predicted values (r = 0.9949 for horizontal and r = 0.9950 for vertical; both p < 0.05).
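The two headline metrics above, Intersection over Union for segmentation quality and Pearson correlation between observed and predicted nystagmus traces, are standard and can be computed as follows. This is an illustrative sketch on toy data, not the authors' evaluation code:

```python
import numpy as np

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection over Union of two boolean segmentation masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter) / float(union) if union else 1.0

def pearson_r(x: np.ndarray, y: np.ndarray) -> float:
    """Pearson correlation between observed and predicted traces."""
    return float(np.corrcoef(x, y)[0, 1])

# Toy masks: a 4x4 ground-truth square and a prediction shifted by one pixel.
gt = np.zeros((8, 8), bool); gt[2:6, 2:6] = True
pred = np.zeros((8, 8), bool); pred[3:7, 3:7] = True
print(round(iou(pred, gt), 3))  # 9 / 23 -> 0.391

# Toy eye-position traces: a sinusoid vs. the same sinusoid with small error.
t = np.linspace(0, 2 * np.pi, 50)
obs = np.sin(t)
est = np.sin(t) + 0.01 * np.cos(3 * t)
print(round(pearson_r(obs, est), 4))
```

Mean IoU is the average of per-class IoU values over the test set; the reported r values were computed between observed and model-predicted nystagmus traces.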

Conclusion

Our model, which automatically segments ocular regions and tracks nystagmus in real time from eye movement videos, holds significant promise for emergency settings and remote intervention in the field of neurotology.