Randomized controlled trials (RCTs) are the hallmark of evidence-based medicine, which aims to inform the translation of research findings into clinical practice. As such, they are widely regarded as the most authoritative source of evidence on the effects of interventions. Clinical artificial intelligence (AI), however, presents a group of innovations for which this hierarchy is not always directly relevant to the local stakeholders who must interpret evidence to inform implementation decisions.
These differing evidence needs stem partly from the evolving policy and regulatory context in which clinical AI is situated, and partly from the technology's sensitivity to a unique local constellation of clinical, operational, and technical factors. The conventional evidence hierarchy also prioritizes answering whether clinical AI should be implemented in a particular setting, but not necessarily how that implementation should take place. For local stakeholders, and the potentially brittle group of innovations they are charged with evaluating and implementing, place-based and actionable evidence often carries more meaning than the revered RCT. Despite this, there are few examples of such evidence in the academic literature to guide implementation decisions or the design of service evaluations that meet local evidence needs. In response, health services researchers have begun to use theories, models, and frameworks to address this evidence gap. These approaches abstract the insights gained in specific clinical AI implementation efforts enough to make them relevant to others, but not so much as to render them unactionable.
This Frontiers in Health Services Research Topic seeks to address the scarcity of place-based clinical AI service evaluations in the academic literature and to illustrate how they can inform the practice of clinical AI implementation. It also aims to highlight the potential of theories, models, and frameworks to widen the relevance of such evidence. For the purposes of this Research Topic, we define place-based service evaluation as evidence generation through the analysis of real-world data from the specific setting in which implementation is being considered. Such evaluation can encompass a wide array of evidence types and draw on a broad range of study designs and analytical methods, both quantitative and qualitative, depending on the research question or use case.
By inviting leading practitioners and researchers from various disciplines and healthcare settings to share their work and findings, this Research Topic aims to provide practical, actionable insights that make place-based evidence generation accessible to a wider community. By convening such peer-to-peer insights in service evaluation, we hope to support the scaling-across of responsible and effective clinical AI implementation for wider patient and service benefit.
We are seeking articles focused on real-world evaluations of clinical AI technologies, taking either an observational or an interventional approach. The use of theories, models, and frameworks to support study design, data collection, data analysis, and dissemination of findings is also encouraged. Authors should share sufficiently detailed descriptions of their methods and settings to enable readers to adopt or adapt part or all of a service evaluation approach in their own setting. Perspectives, reviews, and commentaries are welcome from all nations, clinical specialties, professional disciplines, and sectors. To preserve the coherence and actionability of the Research Topic as a whole, we ask that protocols and original research contributions focus on UK healthcare settings.
Dr. Janak Gunatilleke is the Head of Health Data and Analytics at KPMG in the UK. The Topic Editors otherwise declare no competing interests with regard to the Research Topic subject.
Keywords:
AI, Artificial Intelligence, Implementation Science
Important Note:
All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.