ORIGINAL RESEARCH article

Front. Med.
Sec. Pathology
Volume 11 - 2024 | doi: 10.3389/fmed.2024.1470941
This article is part of the Research Topic Artificial Intelligence-Assisted Medical Imaging Solutions for Integrating Pathology and Radiology Automated Systems - Volume II.

Redefining Retinal Vessel Segmentation: Empowering Advanced Fundus Image Analysis with the Potential of GANs

Provisionally accepted
Badar Almarri 1, Naveen Kumar 2, Aditya Pai 3, Surbhi Bhatia Khan 4*, Fatima Asiri 5, Mahesh T R 2
  • 1 King Faisal University, Al-Ahsa, Eastern Province, Saudi Arabia
  • 2 Jain University, Bengaluru, Karnataka, India
  • 3 MIT Art Design and Technology University, Pune, Maharashtra, India
  • 4 University of Salford, Salford, United Kingdom
  • 5 King Khalid University, Abha, Saudi Arabia

The final, formatted version of the article will be published soon.

    Retinal vessel segmentation is a critical task in fundus image analysis, providing essential insights for diagnosing various retinal diseases. In recent years, deep learning (DL) techniques, particularly generative adversarial networks (GANs), have attracted significant attention for their potential to improve medical image analysis. This paper presents a novel approach to retinal vessel segmentation that harnesses the capabilities of GANs. Our method, termed GANVesselNet, employs a specialized GAN architecture tailored to the intricacies of retinal vessel structures. GANVesselNet uses a dual-path network architecture comprising an auto encoder-decoder (AED) pathway and a UNet-inspired pathway. This combination enables the network to capture multi-scale contextual information efficiently, improving the accuracy of vessel segmentation. In extensive experiments on the publicly available STARE and DRIVE retinal datasets, GANVesselNet outperforms both traditional methods and state-of-the-art deep learning approaches. It achieves a sensitivity of 0.8174, a specificity of 0.9862, and an accuracy of 0.9827 on the STARE dataset, and a sensitivity of 0.7834, a specificity of 0.9846, and an accuracy of 0.9709 on the DRIVE dataset. Notably, GANVesselNet generalizes well to previously unseen data, underscoring its potential for real-world clinical applications. We also present qualitative visualizations of the generated segmentations, illustrating the network's ability to delineate retinal vessels accurately. In summary, this paper introduces GANVesselNet, a novel and effective approach to retinal vessel segmentation. By capitalizing on the capabilities of GANs and a tailored network architecture, GANVesselNet delivers a substantial improvement in segmentation accuracy, opening new avenues for enhanced fundus image analysis and improved clinical decision-making.
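    The abstract describes the dual-path generator only at a high level, so the following is a minimal PyTorch sketch of the idea, not the authors' implementation: every module name, layer width, and the simple 1x1 fusion layer are illustrative assumptions. It pairs a plain auto encoder-decoder (AED) path with a UNet-style path that keeps a skip connection, and fuses their single-channel outputs into one vessel probability map.

# Minimal PyTorch sketch of the dual-path generator idea described above.
# All names and layer sizes are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

class DualPathGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        # AED pathway: plain encoder-decoder, no skip connections.
        self.aed_enc1, self.aed_enc2 = conv_block(3, 32), conv_block(32, 64)
        self.aed_dec = nn.Sequential(conv_block(64, 32), nn.Conv2d(32, 1, 1))
        # UNet-inspired pathway: encoder-decoder with a skip connection.
        self.u_enc1, self.u_enc2 = conv_block(3, 32), conv_block(32, 64)
        self.u_dec = nn.Sequential(conv_block(64 + 32, 32), nn.Conv2d(32, 1, 1))
        self.pool = nn.MaxPool2d(2)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        # Fuse the two single-channel vessel maps into one prediction.
        self.fuse = nn.Conv2d(2, 1, 1)

    def forward(self, x):
        # AED path: compress then reconstruct, capturing global context.
        a = self.aed_dec(self.up(self.aed_enc2(self.pool(self.aed_enc1(x)))))
        # UNet path: the skip connection preserves thin-vessel detail.
        s1 = self.u_enc1(x)
        u = self.u_dec(torch.cat([self.up(self.u_enc2(self.pool(s1))), s1], dim=1))
        return torch.sigmoid(self.fuse(torch.cat([a, u], dim=1)))

if __name__ == "__main__":
    net = DualPathGenerator()
    print(net(torch.randn(1, 3, 64, 64)).shape)  # torch.Size([1, 1, 64, 64])

    In the full GAN setup this network would play the generator role, trained adversarially against a discriminator alongside a segmentation loss; that training loop is omitted here for brevity.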

    Keywords: diabetic retinopathy, generative adversarial networks (GANs), fundus images, lesion segmentation, deep learning

    Received: 26 Jul 2024; Accepted: 13 Sep 2024.

    Copyright: © 2024 Almarri, Kumar, Pai, Bhatia Khan, Asiri and T R. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

    * Correspondence: Surbhi Bhatia Khan, University of Salford, Salford, United Kingdom

    Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.