ORIGINAL RESEARCH article
Front. Hum. Neurosci.
Sec. Sensory Neuroscience
Volume 19 - 2025 | doi: 10.3389/fnhum.2025.1549698
This article is part of the Research Topic Neuro-Behavioral Insights on Low Vision and Beyond.
The final, formatted version of the article will be published soon.
Prosthetic vision systems aim to restore functional sight for visually impaired individuals by inducing phosphenes through electrical stimulation of the visual cortex, yet challenges remain in visual representation strategies, such as incorporating gaze information and task-dependent optimization. In this paper, we introduce Point-SPV, an end-to-end deep learning model designed to enhance object recognition in simulated prosthetic vision. Point-SPV takes an initial step towards gaze-based optimization by simulating viewing points, representing potential gaze locations, and training the model on patches surrounding these points. Our approach prioritizes task-oriented representation, aligning visual outputs with object recognition needs. A behavioral gaze-contingent object discrimination experiment demonstrated that Point-SPV outperformed a conventional edge detection method, enabling observers to achieve higher recognition accuracy, faster reaction times, and more efficient visual exploration. Our work highlights how task-specific optimization may enhance representations in prosthetic vision, offering a foundation for future exploration and application.
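The abstract describes training on patches surrounding simulated viewing points. As a minimal illustrative sketch (not the authors' implementation; the function name, patch size, and zero-padding choice are assumptions for illustration), the patch-extraction step could look like this:

```python
import numpy as np

def extract_gaze_patch(image: np.ndarray, gaze_xy: tuple, patch_size: int = 64) -> np.ndarray:
    """Crop a square patch centred on a (simulated) gaze location.

    The image is zero-padded so that patches near the border keep a
    fixed size, as required for a fixed-input neural network.
    """
    half = patch_size // 2
    # Pad so any gaze point, even on the image border, yields a full patch.
    padded = np.pad(image, ((half, half), (half, half)), mode="constant")
    x, y = gaze_xy  # column, row in original image coordinates
    # After padding, pixel (x, y) sits at (x + half, y + half).
    return padded[y : y + patch_size, x : x + patch_size]

# Simulate a viewing point near the image border and extract its patch.
rng = np.random.default_rng(0)
img = rng.random((128, 128))
patch = extract_gaze_patch(img, gaze_xy=(10, 120), patch_size=64)
print(patch.shape)  # (64, 64)
```

A set of such patches, sampled at plausible gaze locations, would then serve as training inputs for the end-to-end recognition model.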
Keywords: simulated prosthetic vision, Synthetic Viewing Points, object recognition, End-to-end training, deep learning
Received: 21 Dec 2024; Accepted: 24 Feb 2025.
Copyright: © 2025 Nejad, Küçükoğlu, de Ruyter van Steveninck, Bedrossian, de Haan, Heutink, Cornelissen and van Gerven. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence:
Ashkan Nejad, Royal Dutch Visio, Huizen, Netherlands
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.