- 1Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
- 2School of Computer Science and Informatics, Cardiff University, Cardiff, United Kingdom
- 3Department of Electronic Engineering, Shanghai Jiao Tong University, Shanghai, China
- 4Next Generation Artificial Intelligence Research Center for Education and Research in Information Science and Technology, The University of Tokyo, Bunkyo, Japan
- 5College of Intelligence and Computing/School of Media and Communication, Tianjin University, Tianjin, China
- 6Faculty of Applied Sciences, Macao Polytechnic University, Macau, Macao SAR, China
Editorial on the Research Topic
Artificial intelligence-based medical image automatic diagnosis and prognosis prediction
Computer-assisted diagnosis and prognosis prediction (especially with medical images) comprise a series of long-standing tasks, including classification, regression, segmentation, and tracking. Deep learning is one of the most important breakthroughs in the field of artificial intelligence over the last decade. It has achieved great success owing to the enormous growth in data and computational resources. Not only has there been a constantly growing flow of related research papers, but substantial progress has also been achieved in real-world applications, including axillary lymph node (ALN) metastasis status prediction, radiotherapy planning, histological image understanding, and retinal image recognition.
This Research Topic seeks to present and highlight the latest developments in applying advanced deep learning techniques to automatic diagnosis and prognosis prediction. It attracted a fair number of submissions from researchers active in automatic diagnosis and prognosis prediction using medical images and/or other sources of information. After careful peer review, nine manuscripts were selected for publication in this Research Topic. They cover advanced transfer learning techniques that transfer knowledge from other tasks or modalities, multi-modal learning techniques that enable multi-modal diagnosis or prognosis prediction, novel multi-task learning frameworks that enable joint diagnosis and prognosis prediction in a single model, and advanced unsupervised/semi-supervised/weakly supervised learning techniques that boost performance with limited annotations.
In the field of cortical surface reconstruction from brain MR images, An et al. propose ResAttn-Recon, a residual self-attention-based encoder-decoder framework with skip connections. They also propose a truncated and weighted L1 loss function that accelerates network convergence compared with simply applying the L1 loss. The average symmetric surface distance (AD) for the inner and outer surfaces is 0.253 ± 0.051 and the average Hausdorff distance (HD) is 0.629 ± 0.186, both lower than those of DeepCSR (AD of 0.283 ± 0.059 and HD of 0.746 ± 0.245).
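To give readers a rough sense of the idea behind such a loss, the sketch below implements a generic truncated and weighted L1 loss in PyTorch. The truncation threshold and weighting scheme are illustrative assumptions and do not reproduce the exact formulation of An et al.

```python
import torch

def truncated_weighted_l1(pred, target, weight=None, trunc=1.0):
    """Illustrative truncated, weighted L1 loss.

    Absolute errors are clamped at `trunc` so that large outliers do not
    dominate the gradient, and an optional per-voxel weight map can emphasize
    regions of interest. The threshold and weighting are placeholders, not
    the exact terms used by An et al.
    """
    err = torch.clamp(torch.abs(pred - target), max=trunc)
    if weight is not None:
        err = err * weight
    return err.mean()

# Example usage with random tensors standing in for predicted and
# ground-truth implicit-surface values.
pred = torch.randn(2, 1, 32, 32, 32, requires_grad=True)
target = torch.randn(2, 1, 32, 32, 32)
loss = truncated_weighted_l1(pred, target, trunc=0.5)
loss.backward()
```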
In the area of object detection, Hu et al. point out that small object detection is one of the most challenging and important problems, especially in medical scenarios. Complementary to research focusing on feature extraction and data augmentation for small objects, they propose a method called pixel level balancing (PLB), which takes the number of pixels contained in the detection box as an impact factor characterizing the size of the inspected object. This factor is then used as a weight in the training loss to improve the accuracy of small object detection. The PLB operation is applied in the RPN stage of a two-stage detector. The experimental results demonstrate that it improves the detection of small objects while maintaining accuracy on medium and large objects. Overall, the PLB method shows promise for small object detection in medical scenarios, particularly in tasks with strict requirements on small objects, such as blood cell detection.
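The general idea, weighting each box's loss by a factor derived from its pixel area so that small boxes contribute more, can be sketched as follows. This is a minimal illustration of size-aware loss weighting, not the authors' exact impact-factor definition, and the helper name and constants are placeholders.

```python
import torch

def pixel_level_balancing_weights(boxes, alpha=1.0, eps=1.0):
    """Illustrative per-box weights inversely related to box pixel area.

    `boxes` is an (N, 4) tensor of (x1, y1, x2, y2) coordinates. Smaller
    boxes receive larger weights, so their localization/classification
    losses count more during training.
    """
    areas = (boxes[:, 2] - boxes[:, 0]).clamp(min=0) * \
            (boxes[:, 3] - boxes[:, 1]).clamp(min=0)
    return alpha / torch.sqrt(areas + eps)

# Weighted aggregation of per-box losses (e.g., RPN regression losses).
boxes = torch.tensor([[0., 0., 8., 8.], [0., 0., 128., 128.]])
per_box_loss = torch.tensor([0.4, 0.4])
w = pixel_level_balancing_weights(boxes)
loss = (w * per_box_loss).sum() / w.sum()
```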
For vital sign estimation, photoplethysmography (PPG) is a non-invasive method that uses light to measure changes in blood volume in the skin. PPG-based devices can estimate blood pressure (BP), heart rate (HR), heart rate variability (HRV), and oxygen saturation (SpO2) from the PPG signal. However, PPG measurements may be influenced by factors such as subcutaneous fat, skin color, and sex. The paper by Nachman et al. presents a study that compares BP measurements between a PPG-based device and a cuff-based device across groups defined by sex, BMI, and skin color. The study found that the PPG-based device had high accuracy and agreement with the cuff-based device across all groups, regardless of personal characteristics. The authors conclude that the PPG-based device can provide valid BP measurements for diverse populations and enable personalized BP management.
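For readers unfamiliar with how agreement between two BP devices is typically quantified, a generic Bland-Altman-style computation on synthetic readings might look like the following. This is an illustration of the analysis concept only, not the statistical pipeline used by Nachman et al.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic systolic BP readings (mmHg) from a cuff device and a PPG device.
cuff = rng.normal(120, 15, size=200)
ppg = cuff + rng.normal(0, 4, size=200)   # small simulated disagreement

diff = ppg - cuff
bias = diff.mean()                        # mean difference (systematic bias)
loa = 1.96 * diff.std(ddof=1)             # 95% limits of agreement

print(f"bias = {bias:.2f} mmHg, limits of agreement = ±{loa:.2f} mmHg")
```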
In fundus imaging, Yao et al. propose a deep learning method called FunSwin to grade diabetic retinopathy and assess the risk of macular edema from fundus images. The method uses Swin Transformer, a hierarchical vision transformer, as its main framework and integrates transfer learning and data augmentation strategies to improve performance. The method is reported to outperform other state-of-the-art methods in both binary and multiclass classification tasks on the MESSIDOR dataset, which contains 1,200 fundus images annotated for diabetic retinopathy grade and risk of macular edema.
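A minimal transfer-learning setup with a pretrained Swin Transformer backbone is sketched below using torchvision; the class count, augmentations, and optimizer settings are assumptions for illustration, and FunSwin's actual architecture and training pipeline are more elaborate.

```python
import torch
from torchvision import models, transforms

# Load an ImageNet-pretrained Swin Transformer and replace the classification
# head for a hypothetical 5-class diabetic retinopathy grading task.
model = models.swin_t(weights=models.Swin_T_Weights.IMAGENET1K_V1)
model.head = torch.nn.Linear(model.head.in_features, 5)

# Typical fundus-image augmentations during fine-tuning (illustrative only).
train_tf = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
])

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
```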
For histopathological image analysis, Xiao et al. propose a deep learning framework called LAD-GCN for automatic estimation of growth patterns in lung adenocarcinoma diagnosis (LAD). The main idea is to jointly utilize a graph convolutional network (GCN) to extract spatial structure features of cells and a convolutional neural network (CNN) to extract global semantic features from histopathological images. To achieve this, cell nuclei are first segmented using an existing instance segmentation model. By exploiting the complementary information from the two branches, the proposed method achieves improved performance, as quantitatively validated on a lung adenocarcinoma dataset.
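The general pattern of fusing graph-based cell features with CNN image features can be sketched as below, using torch_geometric's GCNConv and a torchvision ResNet as stand-ins. This is an assumed, simplified architecture for illustration, not the LAD-GCN implementation.

```python
import torch
from torch import nn
from torchvision import models
from torch_geometric.nn import GCNConv, global_mean_pool

class GraphCnnFusion(nn.Module):
    """Illustrative fusion of a cell-graph branch and an image branch."""

    def __init__(self, node_dim=16, hidden=64, num_classes=2):
        super().__init__()
        # Graph branch: spatial structure of segmented cell nuclei.
        self.gcn1 = GCNConv(node_dim, hidden)
        self.gcn2 = GCNConv(hidden, hidden)
        # Image branch: global semantic features from the histology patch.
        cnn = models.resnet18(weights=None)
        cnn.fc = nn.Identity()          # expose 512-dim features
        self.cnn = cnn
        self.classifier = nn.Linear(hidden + 512, num_classes)

    def forward(self, x_img, x_node, edge_index, batch):
        g = torch.relu(self.gcn1(x_node, edge_index))
        g = torch.relu(self.gcn2(g, edge_index))
        g = global_mean_pool(g, batch)  # one vector per cell graph
        f = self.cnn(x_img)             # one vector per image
        return self.classifier(torch.cat([g, f], dim=1))
```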
For computed tomography (CT) data analysis, the work by Feng et al. analyzes influential factors in the radiomics signature of pericoronary tissue (PCT) from coronary CT angiography (CCTA) for functional ischemia, measured using CT-derived fractional flow reserve (CCT-FFR). The authors segment PCT from CT images and extract 1,691 radiomic features for each vessel. They then perform feature selection using the Boruta algorithm, built on top of a random forest classifier, to identify the features most contributive to functional ischemia. The resulting machine learning-derived radiomics signature shows a significant association with functional ischemia in the study.
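Boruta-style feature selection on a radiomics feature matrix can be outlined with the third-party boruta Python package, as in the sketch below. The data, matrix size, and parameters are placeholders, and the authors' actual implementation may differ.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from boruta import BorutaPy

# Placeholder radiomics matrix: 100 vessels x 200 features (reduced stand-in
# for the 1,691 extracted features), with a binary functional-ischemia label.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 200))
y = rng.integers(0, 2, size=100)

rf = RandomForestClassifier(n_estimators=200, max_depth=5,
                            n_jobs=-1, random_state=0)
selector = BorutaPy(rf, n_estimators='auto', random_state=0)
selector.fit(X, y)

selected = np.where(selector.support_)[0]
print(f"{len(selected)} features confirmed as relevant")
```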
In the area of PET imaging, two experimental studies on fluorine-18 fluorodeoxyglucose positron emission tomography/computed tomography (18F-FDG PET/CT) images for the diagnosis and monitoring of fatal diseases stand out from the selective review process. The study by Shi et al. investigates the role of radiomics analysis of 18F-FDG PET images in predicting microvascular invasion (mVI) in hepatocellular carcinoma (HCC), a common liver malignancy with substantial cancer-related mortality. It further explores hybrid criteria combining PET/CT and multi-parameter MRI for higher prediction performance. Quantitatively, the 18F-FDG PET image radiomics classifier shows good performance in discriminating HCC with and without mVI, with AUCs of 0.917 (95% CI: 0.824–0.970) and 0.771 (95% CI: 0.578–0.905), and the hybrid model, which combines the radiomics classifier with several key indicators based on contrast-enhanced MRI, yields much improved predictive performance with AUCs of 0.996 (95% CI: 0.939–1.000) and 0.953 (95% CI: 0.883–1.000). The study by Wang et al. develops and validates 18F-FDG PET/CT image-based radiomics to determine the Ki-67 status of high-grade serous ovarian cancer (HGSOC), a disease with a very high risk of recurrence and death. They find that habitat-based radiomics can accurately predict Ki-67 expression and that the habitat model better stratifies prognosis (p < 0.05).
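As a reminder of how such AUC estimates with 95% confidence intervals are commonly obtained, a generic bootstrap computation on synthetic scores is shown below; it is not the statistical procedure used by either study.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Synthetic classifier scores and binary labels standing in for real data.
y_true = rng.integers(0, 2, size=120)
y_score = y_true * 0.6 + rng.normal(0, 0.4, size=120)

auc = roc_auc_score(y_true, y_score)
boot = []
for _ in range(2000):
    idx = rng.integers(0, len(y_true), len(y_true))
    if len(np.unique(y_true[idx])) < 2:   # skip degenerate resamples
        continue
    boot.append(roc_auc_score(y_true[idx], y_score[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"AUC = {auc:.3f} (95% CI: {lo:.3f}-{hi:.3f})")
```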
In the field of pathology, the study by Zhang et al. provides reliable machine learning-based (ML-based) models for predicting the probability of lymph node metastasis (LNM) in kidney cancer patients. The data were extracted from the Surveillance, Epidemiology, and End Results (SEER) database from 2010 to 2017, and variables were filtered using the least absolute shrinkage and selection operator (LASSO) as well as univariate and multivariate logistic regression analyses. The independent predictive factors of LNM were identified as pathological grade, liver metastasis, M stage, primary site, T stage, and tumor size. Among six ML algorithms, the XGB model significantly outperformed the other machine learning models, with an AUC of 0.916 in the model validation process, in which M stage, T stage, and pathological grade were the top three most important variables. Based on the probability density function (PDF) and the clinical utility curve (CUC), the study suggests that 54.6% can be used as a threshold probability for the diagnosis of LNM with the XGB model, which would identify about 89% of LNM patients. In conclusion, the machine learning-based predictive tool can accurately predict the probability of LNM in kidney cancer patients and has promising applications in clinical practice.
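A skeleton of this kind of pipeline, LASSO-based variable screening followed by an XGBoost classifier, might look like the following. The design matrix, labels, and hyperparameters are placeholders and do not reflect the SEER-derived variables or the settings reported by Zhang et al.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier

# Placeholder design matrix standing in for clinical variables
# (e.g., T stage, M stage, pathological grade, tumor size, ...).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 12))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(0, 1, 1000) > 0.8).astype(int)

# L1-penalized (LASSO) logistic regression as a variable filter.
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
keep = np.flatnonzero(lasso.coef_.ravel() != 0)

X_tr, X_te, y_tr, y_te = train_test_split(
    X[:, keep], y, test_size=0.3, random_state=0)
xgb = XGBClassifier(n_estimators=300, max_depth=3, learning_rate=0.1,
                    eval_metric="logloss").fit(X_tr, y_tr)
print("validation AUC:", roc_auc_score(y_te, xgb.predict_proba(X_te)[:, 1]))
```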
All nine papers tackle different but highly relevant problems in artificial intelligence-based medical image automatic diagnosis and prognosis prediction. We believe this Research Topic will raise awareness in the scientific and industrial communities that a multidisciplinary research path is needed to meet the emerging demands of healthcare providers in this field.
Author contributions
All authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication.
Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher’s note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Keywords: artificial intelligence, medical image 3D reconstruction, prognosis prediction, diagnosis, CAD
Citation: Yan J, Lai Y, Xu Y, Zheng Y, Niu Z and Tan T (2023) Editorial: Artificial intelligence-based medical image automatic diagnosis and prognosis prediction. Front. Phys. 11:1210010. doi: 10.3389/fphy.2023.1210010
Received: 21 April 2023; Accepted: 10 May 2023;
Published: 25 May 2023.
Edited and reviewed by:
Ewald Moser, Medical University of Vienna, Austria

Copyright © 2023 Yan, Lai, Xu, Zheng, Niu and Tan. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Tao Tan, taotan@mpu.edu.mo