EyeLiner
- Author
Yoga Advaith Veturi, MSc, Steve McNamara, OD, Scott Kinder, MS, Christopher William Clark, MS, Upasana Thakuria, MS, Benjamin Bearce, MS, Niranjan Manoharan, MD, Naresh Mandava, MD, Malik Y. Kahook, MD, Praveer Singh, PhD, and Jayashree Kalpathy-Cramer, PhD
- Subjects
Artificial intelligence, Change detection, Deep learning, Flicker chronoscopy, Image registration, Ophthalmology
- Abstract
Objective: Detecting and measuring changes in longitudinal fundus imaging is key to monitoring disease progression in chronic ophthalmic diseases, such as glaucoma and macular degeneration. Clinicians assess changes in disease status by independently reviewing or manually juxtaposing longitudinally acquired color fundus photos (CFPs). Distinguishing variations in image acquisition due to camera orientation, zoom, and exposure from true disease-related changes can be challenging, which makes manual image evaluation variable and subjective and may impact clinical decision-making. We introduce our deep learning (DL) pipeline, "EyeLiner," for registering, or aligning, 2-dimensional CFPs. Improved alignment of longitudinal image pairs may compensate for differences due to camera orientation while preserving pathological changes.
Design: EyeLiner registers a "moving" image to a "fixed" image using a DL-based keypoint matching algorithm.
Participants: We evaluate EyeLiner on 3 longitudinal data sets: Fundus Image REgistration (FIRE), Sequential Images for Glaucoma Forecast (SIGF), and our internal glaucoma data set from the Colorado Ophthalmology Research Information System (CORIS).
Methods: Anatomical keypoints along the retinal blood vessels were detected in the moving and fixed images using a convolutional neural network and subsequently matched using a transformer-based algorithm. Finally, transformation parameters were learned from the corresponding keypoints.
Main Outcome Measures: We computed the mean distance (MD) between manually annotated keypoints in the fixed image and the registered moving image. For comparison with existing state-of-the-art retinal registration approaches, we used the mean area under the curve (AUC) metric introduced in the FIRE data set study.
Results: EyeLiner effectively aligns longitudinal image pairs from FIRE, SIGF, and CORIS, as qualitatively evaluated through registration checkerboards and flicker animations. Quantitatively, alignment reduced the MD from 321.32 to 3.74 pixels for FIRE, 9.86 to 2.03 pixels for CORIS, and 25.23 to 5.94 pixels for SIGF. We also obtained AUCs of 0.85, 0.94, and 0.84 on FIRE, CORIS, and SIGF, respectively, surpassing the current state-of-the-art SuperRetina (AUC = 0.76 on FIRE, 0.83 on CORIS, and 0.74 on SIGF).
Conclusions: Our pipeline demonstrates improved alignment of image pairs compared with current state-of-the-art methods on 3 separate data sets. We envision that this method will enable clinicians to align image pairs and better visualize changes in disease over time.
Financial Disclosure(s): Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
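The Methods describe a three-step pattern: detect keypoints, match them across the image pair, and fit a transformation that warps the moving image onto the fixed one. Below is a minimal, runnable sketch of that pattern. It is not EyeLiner's implementation: the paper uses a learned CNN keypoint detector and a transformer-based matcher, whereas this sketch substitutes OpenCV's classical SIFT detector and a brute-force ratio-test matcher as stand-ins; file paths and parameters are illustrative assumptions.

```python
# Sketch of keypoint-based fundus registration in the spirit of the pipeline
# described above. SIFT + brute-force matching stand in for the paper's
# learned detector and transformer matcher; values are illustrative.
import cv2
import numpy as np

def register_pair(fixed_path: str, moving_path: str):
    fixed = cv2.imread(fixed_path, cv2.IMREAD_GRAYSCALE)
    moving = cv2.imread(moving_path, cv2.IMREAD_GRAYSCALE)

    # 1. Detect keypoints and descriptors in both images.
    detector = cv2.SIFT_create()
    kp_f, des_f = detector.detectAndCompute(fixed, None)
    kp_m, des_m = detector.detectAndCompute(moving, None)

    # 2. Match descriptors (the paper uses a transformer-based matcher here);
    #    keep matches that pass Lowe's ratio test.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des_m, des_f, k=2)
    good = [m[0] for m in matches
            if len(m) == 2 and m[0].distance < 0.75 * m[1].distance]

    # 3. Estimate transformation parameters from the matched keypoints,
    #    with RANSAC to reject outlier correspondences.
    src = np.float32([kp_m[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_f[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # 4. Warp the moving image into the fixed image's coordinate frame.
    h, w = fixed.shape
    registered = cv2.warpPerspective(moving, H, (w, h))
    return registered, H
```

The returned registered image can then be overlaid on the fixed image as a checkerboard or flicker animation, the qualitative evaluations mentioned in the Results.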
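The two outcome measures can likewise be made concrete. The sketch below computes the MD as the mean Euclidean error between fixed-image annotations and the moving-image annotations mapped through the estimated transform, and a FIRE-style AUC as the normalized area under the success-rate-versus-error-threshold curve; the 25-pixel threshold cap and 100-point threshold grid are assumptions, not values from the paper.

```python
# Sketch of the outcome measures described above, assuming manually annotated
# corresponding keypoints for each image pair and a homography H from the
# registration step. Threshold range and grid size are assumptions.
import cv2
import numpy as np

def mean_distance(H: np.ndarray, pts_moving: np.ndarray, pts_fixed: np.ndarray) -> float:
    """Mean Euclidean distance between fixed-image annotations and the
    registered moving-image annotations. Points are (N, 2) arrays."""
    warped = cv2.perspectiveTransform(
        pts_moving.reshape(-1, 1, 2).astype(np.float32), H)
    return float(np.mean(np.linalg.norm(warped.reshape(-1, 2) - pts_fixed, axis=1)))

def registration_auc(errors: list[float], max_threshold: float = 25.0) -> float:
    """Area under the success-rate-vs-threshold curve, normalized to [0, 1].
    `errors` holds one MD value per image pair in the test set."""
    errors = np.asarray(errors)
    thresholds = np.linspace(0.0, max_threshold, 100)
    success_rates = [(errors <= t).mean() for t in thresholds]
    return float(np.trapz(success_rates, thresholds) / max_threshold)
```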
- Published
2025