EDITORIAL article

Front. Signal Process., 22 November 2024
Sec. Image Processing
This article is part of the Research Topic "Advances in Biomedical Image Segmentation and Analysis Using Deep Learning".

Editorial: Advances in biomedical image segmentation and analysis using deep learning

Michalis A. Savelonas1*, Eleonora Maggioni2 and Stavros A. Karkanis3

  • 1Department of Computer Science and Biomedical Informatics, School of Science, University of Thessaly, Lamia, Greece
  • 2Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
  • 3Department of Mathematics, School of Sciences, University of Thessaly, Lamia, Greece

Over the past decade, advances in deep learning (DL) have significantly transformed the landscape of biomedical image segmentation and analysis. State-of-the-art results in several domains have been achieved using models based on convolutional neural networks (CNNs), such as U-Net and ResNet. This Research Topic covers: 1) medical imaging modalities, including computed tomography (CT), X-ray, and fundus imaging; 2) anatomical regions, including the thorax, skeleton, and ocular fundus; and 3) related practical aspects, including coping with the limited availability of labeled data, benchmarking, reproducibility, and the adoption of automated segmentation methods in clinical workflows.

Niloy et al. propose a bimodal method for COVID-19 recognition that employs both chest X-ray images and chest CT scans and is based on a 42-layer CNN capable of distinguishing between complex clinical scenarios, including COVID-19, viral pneumonia, and bacterial pneumonia. Their experiments indicate competitive recognition performance compared with state-of-the-art methods. Additionally, they assemble a dataset integrating both X-ray and CT scans.
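
The paper's exact 42-layer network is not reproduced here, but the general bimodal idea can be sketched as two modality-specific convolutional branches fused before a shared classifier. The following PyTorch snippet is a minimal illustration only; the layer widths, fusion strategy, and four-class output are assumptions rather than the authors' design.

```python
# Minimal sketch of a bimodal (X-ray + CT) classifier in PyTorch.
# NOT the authors' 42-layer network; sizes and class count are
# illustrative assumptions.
import torch
import torch.nn as nn

def conv_branch(in_ch: int) -> nn.Sequential:
    """A small convolutional feature extractor for one modality."""
    return nn.Sequential(
        nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),   # -> (N, 64)
    )

class BimodalNet(nn.Module):
    """Two modality-specific branches fused before a shared classifier."""
    def __init__(self, n_classes: int = 4):  # e.g., COVID-19, viral/bacterial pneumonia, normal
        super().__init__()
        self.xray_branch = conv_branch(1)     # grayscale chest X-ray
        self.ct_branch = conv_branch(1)       # single CT slice
        self.classifier = nn.Sequential(
            nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, n_classes),
        )

    def forward(self, xray: torch.Tensor, ct: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.xray_branch(xray), self.ct_branch(ct)], dim=1)
        return self.classifier(fused)         # raw class logits

logits = BimodalNet()(torch.randn(2, 1, 224, 224), torch.randn(2, 1, 224, 224))
print(logits.shape)  # torch.Size([2, 4])
```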

Jiang et al. propose a ResNet50-based method for computer-aided diagnosis of Omicron pneumonia, addressing the difficulty of obtaining the large amounts of labeled chest CT images required to sufficiently train DL-based models. Their method centers on a contrastive learning model with token projection (CoTP), which enables few-shot learning: CoTP is pretrained solely on unlabeled data and then fine-tuned with a small number of labeled samples. They also present a new Omicron CT image dataset, employ random Poisson noise perturbation for data augmentation, and use token projection to enhance the quality of global visual representations. Their experiments reveal that CoTP pretraining significantly increases classification performance in terms of accuracy, sensitivity, precision, and the area under the receiver operating characteristic (ROC) curve (AUC).
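
As a rough illustration of this pretraining recipe, the sketch below pairs two Poisson-perturbed views of the same unlabeled image and applies a standard contrastive (NT-Xent) loss. The token-projection component and the actual CoTP backbone are omitted; the toy encoder, temperature, and noise scaling are assumptions, not the paper's code.

```python
# Hedged sketch: contrastive pretraining on unlabeled CT slices with
# random Poisson noise perturbation as the augmentation.
import torch
import torch.nn as nn
import torch.nn.functional as F

def poisson_view(x: torch.Tensor, scale: float = 255.0) -> torch.Tensor:
    """Random Poisson noise perturbation of an image batch in [0, 1]."""
    return (torch.poisson(x.clamp(0, 1) * scale) / scale).clamp(0, 1)

def nt_xent(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """Normalized-temperature cross-entropy over positive pairs."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)          # (2N, D)
    sim = z @ z.t() / tau
    n = z1.size(0)
    sim.fill_diagonal_(float("-inf"))                    # mask self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

encoder = nn.Sequential(                                  # stand-in for the real backbone
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 32),
)

x = torch.rand(8, 1, 64, 64)                              # unlabeled CT slices in [0, 1]
loss = nt_xent(encoder(poisson_view(x)), encoder(poisson_view(x)))
loss.backward()                                           # one pretraining step, no labels needed
```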

Vengalil et al. employ DL to identify landmark regions in fundus images, including the optic disc, blood vessels, macula, and exudates. This serves as a critical step facilitating the pathological analysis and diagnosis of retinal diseases such as diabetic retinopathy. Their method is based on a variant of the U-Net architecture. Experimental comparisons against state-of-the-art methods indicate considerable improvement in segmentation performance.
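
A minimal U-Net-style model for this kind of multi-class landmark segmentation can be sketched as follows. The depth, channel widths, and five-class output (e.g., optic disc, vessels, macula, exudates, background) are illustrative assumptions, not the authors' specific variant.

```python
# Tiny U-Net-style sketch: encoder-decoder with skip connections and
# a per-pixel multi-class head for fundus landmark segmentation.
import torch
import torch.nn as nn

def block(ic: int, oc: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(ic, oc, 3, padding=1), nn.BatchNorm2d(oc), nn.ReLU(),
        nn.Conv2d(oc, oc, 3, padding=1), nn.BatchNorm2d(oc), nn.ReLU(),
    )

class TinyUNet(nn.Module):
    def __init__(self, n_classes: int = 5):
        super().__init__()
        self.enc1, self.enc2 = block(3, 32), block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = block(128, 64)                 # 64 (skip) + 64 (upsampled)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = block(64, 32)
        self.head = nn.Conv2d(32, n_classes, 1)   # per-pixel class logits

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)

logits = TinyUNet()(torch.randn(1, 3, 256, 256))
print(logits.shape)  # torch.Size([1, 5, 256, 256])
```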

Rossi et al. employ a U-Net variant named CEL-Unet for the automatic segmentation of bone CT scans, starting from the consideration that U-Net is able to address anatomies of varying sizes and pathological deformations. CEL-Unet embeds a region-aware branch and two contour-aware branches in the decoding path, aiming to boost segmentation quality for the femur and tibia in the osteoarthritic knee joint and to cope with changes in mineral density, narrowing of joint spaces, and the formation of largely irregular osteophytes. Experimental results on a set of 700 knee CT scans demonstrate that CEL-Unet obtains higher segmentation quality than competing U-Net models. The segmentation remains effective for large pathological deformations and osteophytes, making CEL-Unet potentially usable in surgical planning based on patient-specific instrumentation (PSI), where the reconstruction accuracy of the bony structures is a critical factor for the success of the operation.
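
The region/contour idea can be illustrated with a joint objective that supervises one decoder output with a Dice (region) term and another with a boundary term derived from the ground-truth mask. The sketch below is an approximation of this style of loss under stated assumptions (morphological contour extraction, loss weighting), not CEL-Unet's actual formulation.

```python
# Hedged sketch of a combined region + contour objective, in the spirit
# of region-aware and contour-aware decoder branches.
import torch
import torch.nn.functional as F

def soft_dice_loss(logits: torch.Tensor, target: torch.Tensor, eps: float = 1e-6):
    """Region term: soft Dice between predicted and ground-truth masks."""
    p = torch.sigmoid(logits)
    inter = (p * target).sum(dim=(1, 2, 3))
    denom = p.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    return (1 - (2 * inter + eps) / (denom + eps)).mean()

def contour_map(mask: torch.Tensor) -> torch.Tensor:
    """Rough boundary map: foreground pixels whose 3x3 neighborhood is
    not all foreground (erosion residue via min-pooling)."""
    eroded = -F.max_pool2d(-mask, kernel_size=3, stride=1, padding=1)
    return (mask - eroded).clamp(0, 1)

def region_contour_loss(region_logits, contour_logits, gt_mask, w_contour=0.5):
    """Joint objective: region Dice plus BCE on the boundary branch."""
    region = soft_dice_loss(region_logits, gt_mask)
    contour = F.binary_cross_entropy_with_logits(contour_logits, contour_map(gt_mask))
    return region + w_contour * contour

# Dummy example: a binary femur mask and two decoder-branch outputs.
gt = (torch.rand(2, 1, 64, 64) > 0.5).float()
loss = region_contour_loss(torch.randn(2, 1, 64, 64), torch.randn(2, 1, 64, 64), gt)
```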

A more practical aspect, related to benchmarking and reproducibility, is addressed by Zhang et al., who present a simple containerized pipeline for shadow testing. Their work aims to offer digital tools that are essential for the adoption of medical image segmentation methods in the clinical workflow. The pipeline integrates with picture archiving and communication systems (PACS), allowing visualization of DICOM-compatible segmentation results and volumetric data at the radiology workstation. It has two main components: 1) a router/listener, an anonymizer, and an Open Health Imaging Foundation (OHIF) web viewer, backstopped by a DCM4CHEE archive deployed in the virtual infrastructure of a secure hospital intranet; and 2) an on-premises single-GPU workstation host for DICOM/NIfTI (Neuroimaging Informatics Technology Initiative) conversion and image processing. DICOM images are visualized in OHIF, along with their segmentation masks and associated volumetry measurements, using DICOM segmentation and structured report elements. Considering that nnU-Net has emerged as a widely used method for training segmentation models with state-of-the-art performance, the pipeline is tested by recording wall-clock times for a traumatic pelvic hematoma nnU-Net model. It works seamlessly with an existing PACS, indicating that it can be used to deploy DL models within the radiology workflow.
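
The conversion-and-inference leg of such a pipeline can be sketched as follows: read an anonymized DICOM series, write a NIfTI volume, and shell out to a trained nnU-Net model. The paths, dataset ID, and file-naming details below are hypothetical, and the nnUNetv2_predict flags, while matching nnU-Net v2's documented CLI, should be verified against the installed version.

```python
# Hedged sketch of DICOM -> NIfTI conversion followed by nnU-Net
# inference, assuming SimpleITK and an installed nnU-Net v2.
import subprocess
from pathlib import Path
import SimpleITK as sitk

def dicom_series_to_nifti(dicom_dir: str, out_nifti: str) -> None:
    """Convert one anonymized DICOM series to a NIfTI volume."""
    reader = sitk.ImageSeriesReader()
    reader.SetFileNames(reader.GetGDCMSeriesFileNames(dicom_dir))
    sitk.WriteImage(reader.Execute(), out_nifti)

def run_nnunet(in_dir: str, out_dir: str, dataset_id: str = "001") -> None:
    """Shell out to a trained nnU-Net v2 model for segmentation."""
    subprocess.run(
        ["nnUNetv2_predict", "-i", in_dir, "-o", out_dir,
         "-d", dataset_id, "-c", "3d_fullres"],
        check=True,
    )

if __name__ == "__main__":
    work = Path("/tmp/pipeline")                      # hypothetical staging area
    (work / "in").mkdir(parents=True, exist_ok=True)
    # nnU-Net expects the _0000 channel suffix on input file names.
    dicom_series_to_nifti("/data/anonymized_ct", str(work / "in" / "case_0000.nii.gz"))
    run_nnunet(str(work / "in"), str(work / "out"))
    # The resulting mask would then be wrapped as a DICOM SEG object
    # (via separate tooling) and pushed back to the PACS archive.
```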

In conclusion, this Research Topic highlights the transformative impact of DL on biomedical image segmentation and analysis and provides valuable perspectives and insights, outlining the challenges and opportunities associated with translating DL-based clinical decision support tools into the clinic. The works presented demonstrate considerable advancements in diagnostic accuracy and efficiency, underscoring the potential of adopting DL in clinical workflows.

Author contributions

MS: Conceptualization, Project administration, Writing – original draft, Writing – review and editing. EM: Conceptualization, Writing – review and editing. SK: Conceptualization, Writing – review and editing.

Funding

The author(s) declare that no financial support was received for the research, authorship, and/or publication of this article.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The author(s) declare that no Generative AI was used in the creation of this manuscript.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Keywords: biomedical image segmentation, biomedical image analysis, deep learning, vision transformers, convolutional neural networks

Citation: Savelonas MA, Maggioni E and Karkanis SA (2024) Editorial: Advances in biomedical image segmentation and analysis using deep learning. Front. Sig. Proc. 4:1523312. doi: 10.3389/frsip.2024.1523312

Received: 05 November 2024; Accepted: 12 November 2024;
Published: 22 November 2024.

Edited and reviewed by:

Frederic Dufaux, Université Paris-Saclay, France

Copyright © 2024 Savelonas, Maggioni and Karkanis. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Michalis A. Savelonas, msavelonas@uth.gr