
REVIEW article
Front. Comput. Sci.
Sec. Theoretical Computer Science
Volume 7 - 2025 | doi: 10.3389/fcomp.2025.1557977
Deep neural networks (DNNs) are extensively used in current and emerging manufacturing, transportation, and health-care systems. Their widespread deployment in highly safety-critical applications makes it necessary to prevent catastrophic failures during prediction: an autonomous car misreading a traffic sign, or an incorrect analysis of medical records, could put human lives in danger. Accordingly, studies on deep neural network verification have increased dramatically in recent years. In particular, model checking provides formal guarantees about the behavior of a DNN under specified conditions, which is crucial in safety-critical applications where erroneous network outputs could have disastrous consequences. Model checking is an effective approach for confirming that a neural network behaves as intended by verifying it against clearly stated properties. This paper highlights the critical need for, and the present challenges of, applying model-checking verification techniques to deep neural networks before relying on them in real-world applications. It examines state-of-the-art research and outlines the most prominent future directions in model checking of neural networks.
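To make the idea of checking a DNN against a "clearly stated property" concrete, here is a minimal, illustrative sketch (not from the article) of verifying a local-robustness property of a tiny ReLU network using interval bound propagation, one of the sound-but-incomplete techniques surveyed in this literature. The network weights, the input, and the epsilon radius are all hypothetical.

```python
# Hypothetical sketch: verify the property
#   "for all x' with |x' - x|_inf <= eps, the network still ranks
#    class `target` highest"
# by propagating interval bounds through each layer. Sound but
# incomplete: a True answer is a proof, a False answer is inconclusive.

def interval_affine(lo, hi, W, b):
    """Propagate an interval box through y = W x + b (exact for affine maps)."""
    out_lo, out_hi = [], []
    for row, bias in zip(W, b):
        l = bias + sum(w * (lo[j] if w >= 0 else hi[j]) for j, w in enumerate(row))
        h = bias + sum(w * (hi[j] if w >= 0 else lo[j]) for j, w in enumerate(row))
        out_lo.append(l)
        out_hi.append(h)
    return out_lo, out_hi

def relu_interval(lo, hi):
    """ReLU is monotone, so it maps interval endpoints to endpoints."""
    return [max(0.0, v) for v in lo], [max(0.0, v) for v in hi]

def verify_robustness(x, eps, layers, target):
    """Check local robustness of `layers` around `x` within an eps-ball."""
    lo = [v - eps for v in x]
    hi = [v + eps for v in x]
    for i, (W, b) in enumerate(layers):
        lo, hi = interval_affine(lo, hi, W, b)
        if i < len(layers) - 1:          # ReLU on all hidden layers
            lo, hi = relu_interval(lo, hi)
    # Property certified if the target class's lower bound beats
    # every other class's upper bound.
    return all(lo[target] > hi[k] for k in range(len(lo)) if k != target)

# Toy two-layer network with hypothetical weights (2 inputs, 2 classes).
layers = [
    ([[1.0, -1.0], [0.5, 0.5]], [0.0, 0.0]),   # hidden layer + ReLU
    ([[2.0, 0.0], [0.0, 1.0]], [0.0, 0.0]),    # output layer
]
print(verify_robustness([1.0, 0.2], 0.05, layers, target=0))  # small eps: certified
print(verify_robustness([1.0, 0.2], 1.0, layers, target=0))   # large eps: not certified
```

Production verifiers (e.g., SMT- or abstraction-based model checkers discussed in the survey) refine this basic scheme with tighter relaxations and complete search, but the property being checked has the same logical form.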
Keywords: deep neural network, formal models, specification, model checking, robustness, safety, consistency
Received: 09 Jan 2025; Accepted: 27 Mar 2025.
Copyright: © 2025 Sbaï. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence:
Zohra Sbaï, Department of Computer Science, Prince Sattam Bin Abdulaziz University, Al-Kharj, Saudi Arabia
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.