
ORIGINAL RESEARCH article

Front. Digit. Health
Sec. Connected Health
Volume 6 - 2024 | doi: 10.3389/fdgth.2024.1455767
This article is part of the Research Topic "Use of Artificial Intelligence to Improve Maternal and Neonatal Health in Low-Resource Settings".

AI-enabled workflow for automated classification and analysis of fetoplacental Doppler images

Provisionally accepted
  • 1 August Pi i Sunyer Biomedical Research Institute (IDIBAPS), Barcelona, Catalonia, Spain
  • 2 BCN MedTech (UPF), Barcelona, Catalonia, Spain
  • 3 Cardiology Care for Children, Lancaster, United States
  • 4 Departments of Pediatrics and Child Health, Aga Khan University, Karachi, Sindh, Pakistan
  • 5 Sindh Institute of Urology and Transplantation, Karachi, Sindh, Pakistan
  • 6 BCNatal Fetal Medicine Research Center, Sant Joan de Déu Hospital, Barcelona, Catalonia, Spain
  • 7 Catalan Institution for Research and Advanced Studies (ICREA), Barcelona, Catalonia, Spain

The final, formatted version of the article will be published soon.

    Extraction of Doppler-based measurements from feto-placental Doppler images is crucial for prenatally identifying vulnerable newborns. However, this process is time-consuming, operator-dependent, and prone to errors. To address this, our study introduces an artificial intelligence (AI)-enabled workflow for automating feto-placental Doppler measurements from four sites (Umbilical Artery (UA), Middle Cerebral Artery (MCA), Aortic Isthmus (AoI), and Left Ventricular Inflow and Outflow (LVIO)), involving classification and waveform delineation tasks. Developed on data from a low- and middle-income country, the approach's versatility was tested and validated on a dataset from a high-income country, showcasing its potential for standardized and accurate analysis across varied healthcare settings. The classification of Doppler views was approached through three distinct blocks: i) a Doppler velocity amplitude-based model with an accuracy of 94%, ii) two Convolutional Neural Networks (CNNs) with accuracies of 89.2% and 67.3%, and iii) Doppler view- and dataset-dependent confidence models to detect misclassifications with an accuracy higher than 85%. The extraction of Doppler indices used Doppler view-dependent CNNs coupled with post-processing techniques. Results yielded a mean absolute percentage error of 6.1 ± 4.9% (n = 682), 1.8 ± 1.5% (n = 1480), 4.7 ± 4.0% (n = 717), and 3.5 ± 3.1% (n = 1318) for the magnitude and location of the systolic peak in LVIO, UA, AoI, and MCA views, respectively. The developed models proved highly accurate in classifying Doppler views and extracting essential measurements from Doppler images. Integrating this AI-enabled workflow holds significant promise for reducing manual workload and enhancing the efficiency of feto-placental Doppler image analysis, even for non-trained readers.
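    The reported errors concern the systolic peak extracted from the delineated velocity envelope. As an illustration only (the paper's CNN-based delineation is not reproduced here, and all function names and the synthetic envelope are this sketch's own assumptions), a minimal numpy example of naive systolic-peak picking on an envelope and the mean absolute percentage error (MAPE) metric:

    ```python
    import numpy as np

    def detect_systolic_peaks(envelope, min_separation):
        """Naive peak picking on a delineated Doppler velocity envelope:
        a sample counts as a systolic peak if it rises above its left
        neighbour and is the maximum within +/- min_separation samples."""
        peaks = []
        for i in range(1, len(envelope) - 1):
            lo = max(0, i - min_separation)
            hi = min(len(envelope), i + min_separation + 1)
            if envelope[i] == envelope[lo:hi].max() and envelope[i] > envelope[i - 1]:
                peaks.append(i)
        return np.array(peaks)

    def mape(pred, ref):
        """Mean absolute percentage error between predicted and reference values."""
        return 100.0 * np.mean(np.abs((pred - ref) / ref))

    # Synthetic three-beat envelope sampled at 100 Hz: |sin(pi*t)| has
    # systolic-like peaks at t = 0.5, 1.5, 2.5 s (indices 50, 150, 250).
    fs = 100
    t = np.arange(0, 3, 1 / fs)
    envelope = np.abs(np.sin(np.pi * t))

    peaks = detect_systolic_peaks(envelope, min_separation=30)
    error = mape(envelope[peaks], np.ones(len(peaks)))  # vs. known peak magnitude of 1
    ```

    In practice the envelope would come from the view-dependent delineation network rather than a closed-form signal, and the reference magnitudes from expert annotations; the MAPE computed this way is the quantity summarized per view in the abstract.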

    Keywords: artificial intelligence, convolutional neural networks, deep learning, ultrasound view classification, ultrasound waveform delineation, feto-placental Doppler

    Received: 03 Jul 2024; Accepted: 27 Sep 2024.

    Copyright: © 2024 Aguado, Jimenez-Perez, Chowdhury, Prats-Valero, Sanchez-Martinez, Hoodbhoy, Mohsin, Castellani, Testa, Crispi, Bijnens, Hasan and Bernardino. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

    * Correspondence:
    Ainhoa M. Aguado, August Pi i Sunyer Biomedical Research Institute (IDIBAPS), Barcelona, 08036, Catalonia, Spain
    Gabriel Bernardino, BCN MedTech (UPF), Barcelona, Catalonia, Spain

    Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.