Machine Learning (ML) models have revolutionized many areas of oncology, including computer-aided diagnosis (CADx), medical image segmentation and synthesis, treatment protocol generation, and outcome prediction. While ML models have reported promising performance in research studies, their application in clinical care is limited by insufficient interpretability and explainability. Because of the high complexity of oncology problems and the limited data samples available, ML models rely on implicit descriptions of the data under investigation, which leads to generalizability issues when they are applied to uncurated clinical data. Additionally, many ML model implementations, particularly deep learning, remain a 'black box': model output(s) are generated from the designed input(s) through behaviors that cannot be explained by oncology-specific theories and knowledge. These interacting issues result in a lack of accountability and trust when ML models are used to support clinical decision-making in the oncology clinic. Research on ML interpretability and explainability will improve the clinical utility of ML and offer additional insights for ML model development.
This Research Topic aims to promote ML model explainability and interpretability research as a high-impact research field of both academic and clinical interest. We plan to collect original, high-quality research on advances in explainable and interpretable ML models for oncology problems. Novel methodological developments in explainable algorithms and data-algorithm interpretation will be prioritized. Translational studies that investigate current ML models in novel oncology applications, with an emphasis on model explainability and interpretability, are also encouraged.
We invite researchers to submit their recent work on explainable and interpretable ML models covering all aspects of oncology applications. Potential topics include, but are not limited to:
1. ML model explainability in weakly supervised and unsupervised learning
2. Integration of biochemical/biophysical modelling for improved model explainability
3. Associations of ML model parameters with classic approaches/theories in medicine
4. ML model generalization and robustness analysis via data property exploration
5. Deep learning visualization and attention analysis
6. Cross-domain learning approaches, such as radiopathomic and radiogenomic modelling
7. Reinforcement learning and multi-task learning with an emphasis on model explainability
8. Reference dataset generation for ML model explainability and interpretability research
Please note: manuscripts consisting solely of bioinformatics, computational analysis, or predictions of public databases which are not accompanied by validation (independent cohort or biological validation in vitro or in vivo) will not be accepted in any of the sections of Frontiers in Oncology.