AUTHOR=Garcia-Terraza Abril L., Jimenez-Collado David, Sanchez-Sanoja Francisco, Arteaga-Rivera José Y., Morales Flores Norma, Pérez-Solórzano Sofía, Garfias Yonathan, Graue-Hernández Enrique O., Navas Alejandro TITLE=Reliability, repeatability, and accordance between three different corneal diagnostic imaging devices for evaluating the ocular surface JOURNAL=Frontiers in Medicine VOLUME=9 YEAR=2022 URL=https://www.frontiersin.org/journals/medicine/articles/10.3389/fmed.2022.893688 DOI=10.3389/fmed.2022.893688 ISSN=2296-858X ABSTRACT=Purpose

To evaluate the repeatability, reproducibility, and accordance of ocular surface measurements obtained with three different imaging devices.

Methods

We performed an observational study on 66 healthy eyes. Tear meniscus height, non-invasive tear break-up time (NITBUT), and meibography were measured with three corneal imaging devices: Keratograph 5M (Oculus, Wetzlar, Germany), Antares (Lumenis, Sydney, Australia), and LacryDiag (Quantel Medical, Cournon d’Auvergne, France). One-way ANOVAs with post hoc analyses were used to assess accordance between devices for tear meniscus height and NITBUT. Reproducibility was assessed with coefficients of variation, and repeatability with intraclass correlation coefficients (ICCs). Reliability of meibography classification was analyzed by calculating Fleiss’ Kappa index, and the results were presented in Venn diagrams.
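For illustration only, the sketch below outlines how a statistical workflow of this kind (coefficients of variation, ICCs, one-way ANOVA with a post hoc test, and Fleiss’ Kappa) could be run in Python. It is not the authors’ analysis code; the data layout, column names, placeholder values, and library choices (pandas, pingouin, SciPy, statsmodels) are assumptions.

    # Hypothetical sketch of the analysis workflow; placeholder data, not study data.
    import numpy as np
    import pandas as pd
    import pingouin as pg
    from scipy import stats
    from statsmodels.stats.inter_rater import fleiss_kappa

    # Assumed long-format table: one row per eye per device for one parameter (tear meniscus height).
    df = pd.DataFrame({
        "eye":    [0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3],
        "device": ["K5M", "Antares", "LacryDiag"] * 4,
        "tmh":    [0.29, 0.36, 0.31, 0.28, 0.35, 0.30,
                   0.31, 0.38, 0.29, 0.27, 0.34, 0.32],
    })

    # Reproducibility: coefficient of variation per device (SD / mean).
    cv = df.groupby("device")["tmh"].agg(lambda x: x.std(ddof=1) / x.mean())
    print("CV per device:\n", cv)

    # Repeatability: intraclass correlation coefficients across devices.
    icc = pg.intraclass_corr(data=df, targets="eye", raters="device", ratings="tmh")
    print(icc[["Type", "ICC", "CI95%"]])

    # Accordance between devices: one-way ANOVA on the per-device measurements,
    # followed by a post hoc pairwise comparison (Tukey HSD as one common choice).
    groups = [g["tmh"].values for _, g in df.groupby("device")]
    f_stat, p_val = stats.f_oneway(*groups)
    print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.3f}")
    print(pg.pairwise_tukey(data=df, dv="tmh", between="device"))

    # Meibography grading agreement: Fleiss' kappa on a subjects-by-categories
    # count table (rows = eyes, columns = meiboscore grades, cells = number of
    # devices assigning that grade to that eye).
    counts = np.array([
        [3, 0, 0, 0],
        [1, 2, 0, 0],
        [0, 1, 2, 0],
        [0, 0, 1, 2],
    ])
    print("Fleiss' kappa:", fleiss_kappa(counts, method="fleiss"))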

Results

Coefficients of variation were high and differed greatly depending on the device and measurement. ICCs showed moderate reliability of NITBUT and tear meniscus height measurements. We observed discordance in tear meniscus height between the three devices, F(2, 195) = 15.24, p < 0.01: measurements performed with Antares were higher (0.365 ± 0.0851 mm) than those with Keratograph 5M (0.293 ± 0.0790 mm) and LacryDiag (0.306 ± 0.0731 mm). NITBUT also showed discordance between devices, F(2, 111) = 13.152, p < 0.01: measurements performed with LacryDiag were lower (10.4 ± 1.82 s) than those with Keratograph 5M (12.6 ± 4.01 s) and Antares (12.6 ± 4.21 s). Fleiss’ Kappa was -0.00487 for upper-lid and 0.128 for lower-lid meibography classification, indicating poor to slight agreement between devices.

Conclusion

Measurements varied depending on the device used and the parameter analyzed, reflecting differences in image processing between devices.