Uncertainty quantification is an emerging area of interest in Artificial Intelligence (AI) research, particularly in radiation oncology. Its value lies in offering a clearer interpretation of AI model outputs: when analyzing new real-world imaging data, models tend to deliver deterministic, binary outcomes regardless of input context, conveying a false sense of certainty. Incorporating uncertainty quantification into the evaluation of AI results, in the absence of a ground-truth comparison, reveals how confident the model is in its predictions for a specific case, giving users a deeper understanding of AI findings and greater trust in the outputs. Moreover, recent AI advances, notably deep neural networks (DNNs), are susceptible to overfitting, a problem exacerbated by the often limited size of the medical image datasets used for training. This underscores the critical need for uncertainty quantification when evaluating AI model robustness in radiation oncology.
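As a concrete illustration, below is a minimal sketch of Monte Carlo dropout (Gal & Ghahramani, 2016), one widely used way to attach a confidence estimate to an existing DNN's predictions. The toy two-layer classifier, its dimensions, and the 30 stochastic forward passes are illustrative assumptions, not a prescribed setup.

```python
# Minimal Monte Carlo dropout sketch: repeated stochastic forward passes
# yield a mean prediction and a per-class spread as an uncertainty estimate.
import torch
import torch.nn as nn

class ToyClassifier(nn.Module):
    def __init__(self, in_dim=16, n_classes=2, p_drop=0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64),
            nn.ReLU(),
            nn.Dropout(p_drop),  # kept active at inference for MC sampling
            nn.Linear(64, n_classes),
        )

    def forward(self, x):
        return self.net(x)

def mc_dropout_predict(model, x, n_samples=30):
    """Return mean class probabilities and their standard deviation
    across n_samples stochastic forward passes."""
    model.train()  # keeps dropout stochastic; use with care if the model has batch norm
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )
    return probs.mean(dim=0), probs.std(dim=0)

model = ToyClassifier()
x = torch.randn(4, 16)  # stand-in for image-derived features
mean_p, std_p = mc_dropout_predict(model, x)
print(mean_p, std_p)
```

A high per-class standard deviation flags predictions the model is unsure about, which is precisely the signal a clinician would want reported alongside, for example, an auto-segmented contour.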
This Research Topic aims to promote AI uncertainty quantification and its application as a high-impact research field of both academic and clinical interest. We plan to collect original, high-quality research on advances in AI uncertainty quantification for radiation oncology problems. Novel methodological developments in AI uncertainty theory and uncertainty estimation will be prioritized. Translational studies investigating current AI methods for novel oncology applications, with an emphasis on AI uncertainty, are also encouraged.
We invite researchers to submit their recent work on AI uncertainty quantification covering all aspects of oncology applications. Potential topics include, but are not limited to:
1. Novel methods of uncertainty estimation in current AI models
2. Novel AI model designs that enable uncertainty quantification
3. Applications of AI uncertainty results in radiation oncology, such as image segmentation, radiotherapy planning, and outcome prediction
4. Quantitative methods for comparing uncertainty results across different AI models (see the calibration sketch after this list)
5. Detection and evaluation of out-of-distribution (OOD) data for AI uncertainty (see the OOD sketch after this list)
6. Interpretability of AI model uncertainty in the context of radiation oncology
7. Reference dataset generation for AI uncertainty quantification research
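On topic 4 above, a minimal sketch of expected calibration error (ECE), a standard metric for comparing how well different models' confidence scores track observed accuracy, is given below. The 10 equal-width bins are the common default, not a requirement, and the example inputs are fabricated for illustration.

```python
# Minimal expected calibration error (ECE) sketch: bin predictions by
# confidence, then average the |accuracy - confidence| gap per bin,
# weighted by the fraction of samples falling in each bin.
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """confidences: predicted top-class probabilities in [0, 1];
    correct: 1 if the prediction matched the label, else 0."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap
    return ece

# Toy example: a model that is overconfident on average.
conf = [0.95, 0.90, 0.85, 0.99]
hits = [1, 0, 1, 0]
print(expected_calibration_error(conf, hits))
```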
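On topic 5, a minimal sketch of the maximum-softmax-probability baseline for OOD detection (Hendrycks & Gimpel, 2017) follows: inputs whose top-class confidence falls below a threshold are flagged as possibly out-of-distribution. The 0.7 threshold is an illustrative assumption that would in practice be tuned on a validation set.

```python
# Minimal OOD-flagging sketch using the maximum softmax probability score.
import numpy as np

def max_softmax_score(logits):
    """Return the top-class softmax probability for each input row."""
    z = logits - logits.max(axis=-1, keepdims=True)  # stabilize exp
    p = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    return p.max(axis=-1)

def flag_ood(logits, threshold=0.7):
    """Mark inputs as possibly out-of-distribution when confidence is low."""
    return max_softmax_score(logits) < threshold

logits = np.array([[4.0, 0.1, 0.2],   # confident, likely in-distribution
                   [0.9, 1.0, 1.1]])  # diffuse, possibly OOD
print(flag_ood(logits))
```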
Please note: manuscripts consisting solely of bioinformatics or computational analysis of public genomic or transcriptomic databases that are not accompanied by validation (independent cohort, or biological validation in vitro or in vivo) are out of scope for this section and will not be accepted as part of this Research Topic.
Keywords:
artificial intelligence, radiation oncology, cancer, oncology
Important Note:
All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.