- 1 Department of Radiology & Biomedical Imaging, University of California, San Francisco, San Francisco, CA, United States
- 2 Department of Radiology, Keck School of Medicine, University of Southern California, Los Angeles, CA, United States
- 3 Department of Imaging, Cedars-Sinai Medical Center, Los Angeles, CA, United States
Editorial on the Research Topic
Advances in artificial intelligence and machine learning applications for the imaging of bone and soft tissue tumors
Growing interest in artificial intelligence (AI) applications in the biological and medical sciences has led to a notable surge in related publications in recent years. A PubMed query for “machine learning” yielded 33,855 results published in 2023 alone, more than a fourfold increase compared with 2018 (1). In the realm of musculoskeletal and soft tissue tumor imaging, enthusiasm has grown for a broad range of AI applications, from prognostication and risk stratification to lesion classification and treatment response assessment (Varghese et al., 2–4).
This Research Topic, titled “Advances in Artificial Intelligence and Machine Learning Applications for the Imaging of Bone and Soft Tissue Tumors,” encompasses a selection of manuscripts that primarily focus on recent developments in AI applications targeting the imaging of these neoplasms while also exploring topics related to quantitative image analysis and interpretation. The following representative manuscripts provide insights into current advancements and methodologies within this evolving field:
Jin et al. describe a machine learning approach based on Gray Level Co-occurrence Matrix (GLCM) texture features to detect pelvic bone metastases in patients with colorectal cancer, drawing on a retrospective cohort of 614 patients who underwent pelvic MRI over a 7-year period. Diffusion-weighted images were segmented, and 48 GLCM features were extracted from the segmentations for further analysis. A generalized linear regression model and four machine learning classification models were then constructed from this dataset. Random forest achieved the highest performance, with AUCs of 0.926 and 0.919 in the training and internal validation sets, respectively. These results suggest a promising role for radiomics-based machine learning models in detecting pelvic bone metastases. Integrating such models into clinical workflows may help identify osseous metastatic disease more reliably, thereby improving prognostic assessment and enabling treatment tailored to individual patients.
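For readers less familiar with texture-based radiomics, the sketch below illustrates the general form of such a pipeline: GLCM texture features are extracted from segmented lesion patches and fed into a random forest classifier evaluated with AUC. It is an illustrative toy example built with common open-source tools (scikit-image and scikit-learn), not the authors' implementation; the feature definitions, patch data, and parameters are assumptions chosen for demonstration only.

```python
# Illustrative sketch (not the authors' implementation) of a GLCM-texture +
# random forest pipeline: 2 distances x 4 angles x 6 properties = 48 features,
# mirroring the reported feature count, though the exact definitions may differ.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

GLCM_PROPS = ["contrast", "dissimilarity", "homogeneity", "energy", "correlation", "ASM"]

def glcm_features(patch):
    """Extract a 48-element GLCM feature vector from one 8-bit grayscale lesion patch."""
    glcm = graycomatrix(
        patch,
        distances=[1, 2],
        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
        levels=256,
        symmetric=True,
        normed=True,
    )
    # graycoprops returns one value per (distance, angle) pair for each property
    return np.concatenate([graycoprops(glcm, prop).ravel() for prop in GLCM_PROPS])

# Toy stand-ins for segmented diffusion-weighted lesion patches and labels
rng = np.random.default_rng(0)
patches = [rng.integers(0, 256, size=(32, 32), dtype=np.uint8) for _ in range(100)]
labels = rng.integers(0, 2, size=100)  # 1 = pelvic bone metastasis, 0 = none

X = np.array([glcm_features(p) for p in patches])
X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("Validation AUC:", roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))
```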
Rich et al. present a systematic review and meta-analysis of deep learning image segmentation approaches for the evaluation of primary and secondary malignant bone tumors. Accurate delineation of osseous lesions on imaging is crucial for quantitative analyses, yet is often the rate-limiting step in AI and machine learning studies when performed manually. Moreover, precise delineation of the extent of bony involvement may allow for more holistic clinical assessment and prognostication in widespread metastatic disease. The authors conducted a comprehensive literature search of studies investigating automated deep learning segmentation of malignant bone lesions, identifying 41 studies published between 2010 and 2023 for inclusion in the final analysis. Overall, most studies applied a U-Net convolutional neural network architecture, most often trained on either MRI or CT images. Models trained on PET/MRI and PET/CT had lower median Dice similarity coefficients than models trained on MRI and CT, possibly due to increased image noise and degradation. Models trained on 2D data appeared to have slightly higher median Dice similarity coefficients, although the difference was not statistically significant. Future work could include training models on larger, multi-institutional datasets to improve generalizability.
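The Dice similarity coefficient used to compare models throughout the meta-analysis is a simple overlap measure between a predicted and a reference segmentation. A minimal sketch of its computation on binary masks is shown below (toy data; illustrative only).

```python
# Minimal sketch: Dice similarity coefficient between a predicted segmentation
# mask and a reference (ground-truth) mask. Illustrative toy example only.
import numpy as np

def dice_coefficient(pred, truth, eps=1e-8):
    """DSC = 2 * |A intersect B| / (|A| + |B|) for binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

# Toy example: two partially overlapping 2D masks
pred = np.zeros((10, 10), dtype=bool); pred[2:7, 2:7] = True
truth = np.zeros((10, 10), dtype=bool); truth[3:8, 3:8] = True
print(f"Dice = {dice_coefficient(pred, truth):.3f}")
```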
Debs and Fayad and Sabeghi et al. both describe emerging applications of AI and machine learning in musculoskeletal imaging. Debs and Fayad review use cases ranging from image protocoling, examination scheduling, and hanging protocol optimization to results reporting, lesion detection, and lesion classification. They also detail approaches for determining tumor of origin and assessing treatment response of metastatic spinal lesions. Sabeghi et al. comment on methods for stratifying neoplastic lesions by malignant potential, histologic grade, and response to treatment. They further discuss the challenges of assembling sufficiently large, diverse, and heterogeneous datasets to train robust, generalizable models. As models become increasingly complex, federated learning may offer an elegant solution for leveraging multicentric training sets without the need for centralized aggregation of potentially sensitive and protected patient data (5), though the technical complexities of managing heterogeneous systems and data remain major hurdles.
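The federated learning concept referenced above can be summarized by its simplest form, federated averaging, in which participating sites share only locally trained model weights that a central server combines into a weighted average. The toy sketch below illustrates that aggregation step under simplifying assumptions; it is not drawn from any of the cited works.

```python
# Toy sketch of federated averaging (FedAvg): each site trains locally and shares
# only model weights; the server aggregates a weighted average. Illustrative only.
import numpy as np

def federated_average(site_weights, site_sizes):
    """Weighted average of per-site model weight vectors, weighted by local sample count."""
    site_sizes = np.asarray(site_sizes, dtype=float)
    fractions = site_sizes / site_sizes.sum()
    return sum(f * w for f, w in zip(fractions, site_weights))

# Three hypothetical hospitals with different cohort sizes contribute locally trained weights
weights_a = np.array([0.2, 1.1])
weights_b = np.array([0.4, 0.9])
weights_c = np.array([0.3, 1.0])
global_weights = federated_average([weights_a, weights_b, weights_c], site_sizes=[120, 300, 80])
print("Aggregated global weights:", global_weights)
```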
Gadermayr et al. investigate computer vision and deep learning approaches for automated muscle segmentation on MR images. The authors employ an unpaired image-to-image translation approach to leverage “easy” data in order to improve segmentation performance on “hard” data, using a novel domain-specific loss function alongside four segmentation schemes: Gaussian mixture, graph-cut, shape-priority graph-cut, and convolutional neural network approaches. Their results suggest performance benefits for both unsupervised and supervised methods, with the potential to reduce the amount of training data required in related studies.
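As a point of reference for one of the segmentation schemes named above, the sketch below shows a basic Gaussian mixture approach: a two-component mixture is fit to voxel intensities and each voxel is assigned to its most likely component. This is an illustrative toy example only and does not reproduce the authors' domain-adaptation pipeline or loss function.

```python
# Illustrative toy example of Gaussian-mixture intensity segmentation, one of the
# four schemes named above; it does not reproduce the authors' pipeline.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Toy "MR slice": two tissue classes drawn from different intensity distributions
image = np.where(rng.random((64, 64)) < 0.5,
                 rng.normal(50, 10, size=(64, 64)),
                 rng.normal(150, 20, size=(64, 64)))

# Fit a two-component mixture to voxel intensities and label each voxel
gmm = GaussianMixture(n_components=2, random_state=0).fit(image.reshape(-1, 1))
labels = gmm.predict(image.reshape(-1, 1)).reshape(image.shape)
print("Voxels assigned to component 1:", int(labels.sum()))
```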
Woznicki et al. developed an open-source framework termed “AutoRadiomics,” which aggregates the common steps of typical radiomics workflows into a single standardized software package. The framework provides embedded tools for image segmentation and pre-processing alongside standardized radiomics libraries and machine learning models, with multiple options for hyperparameter tuning, data splitting, and oversampling. AutoRadiomics seeks to lower the barrier to entry for radiomics and machine learning studies through a modular, user-friendly interface and intuitive design, offering accessibility without the need for an extensive coding background. Consistent use of publicly accessible frameworks such as AutoRadiomics in future radiomics studies can enhance transparency by improving workflow standardization and reproducibility.
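To give a sense of the kind of workflow step that AutoRadiomics standardizes, the sketch below shows a typical radiomics feature-extraction call using the widely used pyradiomics library. Note that this is not the AutoRadiomics API; the file names and settings are placeholders chosen for illustration.

```python
# Sketch of a typical radiomics feature-extraction step using pyradiomics,
# shown only to illustrate the kind of workflow AutoRadiomics standardizes
# (this is NOT the AutoRadiomics API). File names are placeholders.
from radiomics import featureextractor

# Default settings extract shape, first-order, and texture (GLCM, GLRLM, ...) features
extractor = featureextractor.RadiomicsFeatureExtractor(binWidth=25)

# image.nrrd / mask.nrrd: co-registered scan and lesion segmentation (placeholders)
features = extractor.execute("image.nrrd", "mask.nrrd")
for name, value in features.items():
    if not name.startswith("diagnostics_"):  # skip bookkeeping entries
        print(name, value)
```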
Finally, Varghese et al. discuss spatial approaches to texture analysis in oncologic imaging. Spatial assessments are particularly effective at capturing intratumoral heterogeneity because they quantify subtle voxel-to-voxel variations in the underlying grayscale intensities. This granular approach allows for a more nuanced understanding of tumor characteristics, which is critical for accurate diagnosis and treatment planning. Spatial assessments can be further classified into neighborhood-based methods, which quantify differences in grayscale intensities relative to neighboring voxels, and spatial filters, which are image processing methods that enhance edges and/or textures at regions of rapid grayscale change. The authors also underscore the importance of sound methodology and best practices when conducting statistical analyses in high-dimensional radiomics studies, offering guidance on accurate reporting of machine learning performance to ensure that findings are both reliable and reproducible.
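The two families of spatial assessments described above can be illustrated with a short example: a neighborhood-based statistic (local standard deviation) and a spatial filter (Laplacian of Gaussian) applied to a toy region of interest. The data and parameter choices below are illustrative assumptions, not those of Varghese et al.

```python
# Minimal sketch of the two spatial-assessment families described above:
# a neighborhood-based statistic (local standard deviation) and a spatial filter
# (Laplacian of Gaussian, which enhances edges and blobs). Illustrative only.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
image = rng.normal(100, 15, size=(64, 64))  # toy grayscale tumor region of interest

# Neighborhood-based: voxel-wise standard deviation within a 3x3 neighborhood
local_std = ndimage.generic_filter(image, np.std, size=3)

# Spatial filter: Laplacian-of-Gaussian response at a coarse scale (sigma in voxels)
log_response = ndimage.gaussian_laplace(image, sigma=2.0)

# Simple heterogeneity summaries that could feed a radiomics model
print("Mean local heterogeneity:", local_std.mean())
print("LoG energy:", np.mean(log_response ** 2))
```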
In conclusion, this Research Topic highlights many innovative developments in AI and machine learning applications for the imaging of bone and soft tissue neoplasms. While radiomics-based studies have for some time shown promise in serving as novel quantitative imaging biomarkers, deep learning approaches have, in more recent years, also gained traction as powerful decision support tools which integrate diverse hierarchical and multimodal inputs. As imaging-based AI algorithms continue to evolve in complexity and rigor, there exists immense potential for creating an armamentarium of tools aimed at augmenting the work of clinical radiologists and enhancing patient care.
Author contributions
BKKF: Conceptualization, Writing – original draft, Writing – review & editing. BAV: Writing – review & editing, Conceptualization. GRM: Writing – review & editing, Conceptualization.
Conflict of interest
BKKF received prior research grants from the RSNA R&E (2019-2020 RMS #1909; 2018-2019 RMS #1810); consulting fees from Mendaera; honorarium payments from Neurodiem (invited author) and Elsevier (book proposal reviews); RSNA and institutional support for attending meetings (RSNA RFC stipend, institutional support stipend); Vice-Chair of the RSNA Resident and Fellow Committee; member of the American Board of Radiology Initial Certification Advisory Committee for Diagnostic Radiology, of the RSNA Education Council, and of the Radiology: Imaging Cancer Trainee Editorial Board. GRM is a consultant for Canon Medical Systems, USA.
The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
The authors BKKF, BAV, and GRM declared that they were editorial board members of Frontiers at the time of submission. This had no impact on the peer review process and the final decision.
Publisher's note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Abbreviations
AI, artificial intelligence; GLCM, Gray Level Co-occurrence Matrix.
References
1. PubMed website. Available online at: https://pubmed.ncbi.nlm.nih.gov/?term=machine+learning&size=200 (accessed November 5, 2024).
2. Fields BKK, Hwang D, Cen S, Desai B, Gulati M, Hu J, et al. Quantitative magnetic resonance imaging (q-MRI) for the assessment of soft-tissue sarcoma treatment response: a narrative case review of technique development. Clin Imaging. (2020) 63:83–93. doi: 10.1016/j.clinimag.2020.02.016
3. Fields BKK, Demirjian NL, Hwang DH, Varghese BA, Cen SY, Lei X, et al. Whole-tumor 3D volumetric MRI-based radiomics approach for distinguishing between benign and malignant soft tissue tumors. Eur Radiol. (2021) 31(11):8522–35. doi: 10.1007/s00330-021-07914-w
4. Fields BKK, Demirjian NL, Cen SY, Varghese BA, Hwang DH, Lei X, et al. Predicting soft tissue sarcoma response to neoadjuvant chemotherapy using an MRI-based delta-radiomics approach. Mol Imaging Biol. (2023) 25(4):776–87. doi: 10.1007/s11307-023-01803-y
Keywords: radiomics, texture analysis, deep learning, machine learning, artificial intelligence, cancer imaging
Citation: Fields BKK, Varghese BA and Matcuk GR Jr (2024) Editorial: Advances in artificial intelligence and machine learning applications for the imaging of bone and soft tissue tumors. Front. Radiol. 4:1523389. doi: 10.3389/fradi.2024.1523389
Received: 6 November 2024; Accepted: 27 November 2024;
Published: 17 December 2024.
Edited and Reviewed by: Matthew David Blackledge, Institute of Cancer Research (ICR), United Kingdom
Copyright: © 2024 Fields, Varghese and Matcuk. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Brandon K. K. Fields, bkkfields@gmail.com