Artificial intelligence and machine learning (AI/ML) approaches are increasingly being applied to healthcare data for a variety of purposes, from predicting the relative benefits of different antidepressants to optimizing cancer treatment regimens. However, the use of these technologies in healthcare faces significant implementation challenges, specifically with respect to safety, explainability, and effectiveness.
How do we evaluate the clinical effectiveness of these models, or decide which ones should be brought to clinical trials? What threshold of explainability or interpretability is required for these tools, and how does this change based on the use case and end users? What increase in a model's effectiveness justifies the use of a less interpretable AI/ML method? What forms of data should be used to build safe and practical models, and how do we avoid bias when systemic biases are already present in the day-to-day business of healthcare?
This themed Article Collection aims to provide answers to the above questions and beyond. Submissions addressing subjects in the context of safety, explainability, and effectiveness of AI/ML applications in healthcare are welcome. Topics include but are not limited to:
• Public and population health;
• Personalized medicine;
• Evaluation of the quality and readiness of AI/ML models for clinical trials or clinical application;
• Proposals for the evaluation of the safety of AI/ML models before and after clinical implementation;
• AI/ML explainability in different applications and algorithm types;
• Balancing explainability and effectiveness and choosing requisite levels of explainability;
• Ethical/philosophical viewpoints on AI/ML explainability.
Other related topics will be considered via submission of a query to the editors. Contributions can be of different types and reflect a broad range of perspectives, e.g. clinical, AI/ML techniques, government, industry, health administration, philosophical, and bioethical. We are looking for articles that analyze specific issues or case studies and lay out technical or philosophical principles with potential applications, or that present original clinical and/or AI/ML-driven results touching on these points. Reviews and meta-analyses that propose innovative solutions or recommendations are also welcome.