
REVIEW article

Front. Health Serv., 11 June 2024
Sec. Implementation Science

Towards evidence-based practice 2.0: leveraging artificial intelligence in healthcare

Per Nilsen1,2*, David Sundemo3,4, Fredrik Heintz5, Margit Neher1, Jens Nygren1, Petra Svedberg1, Lena Petersson1
  • 1School of Health and Welfare, Halmstad University, Halmstad, Sweden
  • 2Department of Health, Medicine and Caring Sciences, Linköping University, Linköping, Sweden
  • 3School of Public Health and Community Medicine, Institute of Medicine, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
  • 4Lerum Närhälsan Primary Healthcare Center, Lerum, Sweden
  • 5Department of Computer and Information Science, Linköping University, Linköping, Sweden

Background: Evidence-based practice (EBP) involves making clinical decisions based on three sources of information: evidence, clinical experience and patient preferences. Despite the popularization of EBP, research has shown that there are many barriers to achieving the goals of the EBP model. The use of artificial intelligence (AI) in healthcare has been proposed as a means to improve clinical decision-making. The aim of this paper was to pinpoint key challenges pertaining to the three pillars of EBP and to investigate the potential of AI in surmounting these challenges and contributing to a more evidence-based healthcare practice. To this end, we conducted a selective review of the literature on EBP and the integration of AI in healthcare.

Challenges with the three components of EBP: Clinical decision-making in line with the EBP model presents several challenges. The availability and existence of robust evidence sometimes pose limitations due to slow generation and dissemination processes, as well as the scarcity of high-quality evidence. Direct application of evidence is not always viable because studies often involve patient groups distinct from those encountered in routine healthcare. Clinicians need to rely on their clinical experience to interpret the relevance of evidence and contextualize it within the unique needs of their patients. Moreover, clinical decision-making might be influenced by cognitive and implicit biases. Achieving patient involvement and shared decision-making between clinicians and patients remains challenging in routine healthcare practice due to factors such as low levels of health literacy among patients, patients' reluctance to actively participate, clinicians' attitudes and scepticism towards patient knowledge, ineffective communication strategies, busy healthcare environments and limited resources.

AI assistance for the three components of EBP: AI presents a promising solution to several challenges inherent in the research process, from conducting studies and generating evidence, through synthesizing findings and disseminating crucial information to clinicians, to implementing these findings in routine practice. AI systems have a distinct advantage over human clinicians in processing specific types of data and information. The use of AI has shown great promise in areas such as image analysis. AI also presents promising avenues to enhance patient engagement by saving time for clinicians, and it has the potential to increase patient autonomy, although research on this issue is lacking.

Conclusion: This review underscores AI's potential to augment evidence-based healthcare practices, potentially marking the emergence of EBP 2.0. However, there are also uncertainties regarding how AI will contribute to more evidence-based healthcare. Hence, empirical research is essential to validate and substantiate various aspects of AI use in healthcare.

1 Introduction

More than three decades ago, evidence-based medicine (EBM) emerged as a ground-breaking concept introduced by the Evidence-Based Medicine Working Group, marking a pivotal shift in medical practice (1). Originating at McMaster University in Hamilton, Canada, the term was coined to describe a new methodology in learning the practice of medicine (2). EBM was developed in response to the recognition of limitations and shortcomings in traditional medical practices, which often relied on anecdotal evidence, expert opinion and historical practices rather than rigorous scientific evidence. EBM emerged as a response to the need for a more systematic and scientific approach to medical decision-making. It aimed to improve the quality of patient care by ensuring that medical interventions and treatments were based on sound empirical evidence and demonstrated effectiveness through rigorous evaluation methods such as randomized controlled trials (RCTs) (3).

Subsequently, EBM spread swiftly and its influence expanded into evidence-based practice (EBP), extending beyond traditional medical domains to disciplines such as nursing, mental health, physiotherapy and occupational therapy, as well as broader fields such as public health, social work, education and management. At the same time, a vertical spread occurred from an early focus on various forms of interventions to also include the policy process concerning the use of evidence for identifying and prioritizing problem areas and for decision-making (4). This widespread adoption was facilitated by technological advancements, including electronic databases and the Internet (5).

EBP is often described in terms of decisions being made based on three sources of information: evidence, clinical experience and patient preferences (6). However, despite the popularization of EBP, research in implementation science has shown that there are many barriers to achieving the goals of the EBP model, including practitioners having insufficient time to find relevant studies or guidelines and a lack of skills and confidence in assessing the quality of research. There are also organizational barriers such as lack of leadership support and a culture that does not facilitate the use of research in practice (7).

The use of artificial intelligence (AI) in healthcare has been proposed as a means to improve clinical decision-making (8–12). The upsurge in interest in AI, which involves machines or computer systems simulating human intelligence processes, has been propelled by the exponential increase in computing capabilities and the abundance of available healthcare data (8–12). Under the definition approved by the EU Parliament, an AI system means “a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments” (13). The promise of AI has fostered expectations of a paradigm shift towards an “AI-assisted” (also referred to as “data-driven” or “information-driven”) healthcare, with expectations of improving care quality while curbing costs (14, 15). This review pinpoints key challenges pertaining to the three pillars of EBP: evidence, clinical experience and patient preferences. The potential of AI to surmount these challenges and contribute to a more evidence-based healthcare practice is discussed.

A selective review of the literature on EBP and the integration of AI in healthcare was undertaken, focusing on the domains of evidence, clinical experience and patient preferences. Thus, this article draws on secondary sources, encompassing empirical studies, overviews, reviews, assessments of research and opinion papers. Although the primary responsibility for sourcing and evaluating pertinent literature rested with the first author (PN), all authors collaborated in identifying relevant materials for this review.

2 Challenges with the three components of EBP

This section addresses the challenges identified in research associated with the three key components of EBP: evidence, clinical experience and patient preferences.

2.1 Challenges concerning evidence

The EBP model is based on the existence and availability of evidence to inform clinicians' decision-making to achieve evidence-based healthcare. Nevertheless, there are numerous hurdles in the transition from the generation of evidence to its practical use. The journey from research conception and publication to dissemination and implementation in routine healthcare is often dogged by significant time gaps. The prolonged duration from inception to publication may render the findings outdated upon release. Furthermore, the scarcity of high-quality evidence stemming from RCTs, which is the pinnacle in the evidence hierarchy, compels clinicians to rely on lower-tier sources of evidence. In several instances, the absence of any evidence compounds these challenges, leaving practitioners with no foundational support (16, 17).

Systematic reviews constitute a cornerstone of the EBP model. These reviews locate, assess and synthesize evidence on a health topic, making the information accessible and digestible to clinicians and decision-makers. They also serve as the foundation for developing clinical guidelines (18). However, the process of generating evidence via systematic reviews is typically slow and resource-intensive, involving substantial time for preparation and writing. Factors such as the authors’ expertise, methodologies used and the number of studies incorporated contribute to this duration (19). Studies have indicated that the average duration to complete a systematic review exceeds 15 months (20). Consequently, there is a risk that once published, a systematic review may already be outdated (18). Moreover, due to the labour-intensive nature of this process, many systematic reviews are not updated at sufficiently regular intervals (21).

2.2 Challenges concerning clinical experience

Clinicians gain their expertise through many years of training and practice, from undergraduate education to higher degrees and hands-on experience in real-world healthcare settings. This extensive training is essential to ensure clinicians possess the proficiency required for accurate diagnosis, treatment and prognosis for their patients (22). Clinical experience equips clinicians with the ability to draw informed conclusions about patients' health conditions, enabling them to make critical assessments, distinctions and decisions based on their accumulated knowledge. Although the EBP model underscores the significance of evidence in clinical decision-making, its direct application is not always feasible. Clinicians must leverage their clinical experience to interpret the relevance of research findings and contextualize them within the unique needs and conditions of their patients (17).

Clinical experience plays a crucial role in discerning the applicability of research-generated evidence to individual patients, primarily because evidence is often derived from studies involving patient groups that differ from those encountered in routine healthcare. These studies typically involve homogeneous populations, unlike the diverse range of individuals seen in everyday practice. As a result, patients in routine healthcare often exhibit variations from the averaged descriptions found in research (23). Research conclusions hold validity at a population level, but they might not necessarily apply uniformly at the individual patient level (4). Consequently, even if the evidence supports a specific treatment for a particular patient population, clinical experience becomes essential in gauging the extent to which this evidence can be extended and tailored to suit the needs of specific individuals.

Clinical experience can be susceptible to various cognitive biases, i.e., distortions in judgement and decision-making arising from the human brain's inclination to simplify and expedite information processing based on personal experiences and preferences (24). These biases have the potential to significantly affect clinical decision-making, leading to adverse health outcomes (25). Studies have identified that time constraints, a hectic work environment, frequent task switching and the pressure to make swift decisions based on limited information contribute to high cognitive loads and work-related stress, further exacerbating the impact of these biases (26, 27). A review by Gopal et al. (25) identified 12 different types of cognitive biases in healthcare. For example, selection bias might occur when selective observations of favourable outcomes attributed to a certain treatment yield undue confidence in its effectiveness (28). Availability bias might arise from experience with cases or studies that come more easily to mind, which may yield assumptions that the same scenario is being repeated (29). Further, satisfaction bias occurs when a clinician settles on a diagnosis after identifying a single disease as the root cause, even though additional causes may be present (30).

Clinical experience can also be influenced by implicit biases, i.e., the subconscious tendencies for stereotype-affirming thoughts to traverse our minds, potentially leading to discrimination (27). Implicit biases are associated with underlying attitudes, whether favourable or unfavourable, whereas cognitive biases are linked to thought processes (31). These implicit biases might be directed towards various characteristics such as race, gender, age or health conditions, possibly resulting in discriminatory practices. For instance, research indicates that healthcare professionals, including physicians, nurses, nutritionists and dietitians, often harbour biases against individuals with obesity. This prejudice leads to implicit assumptions regarding weight-related issues, associating them with a perceived lack of willpower or personal character (25, 32).

2.3 Challenges concerning patient preferences

The EBP model underscores the significance of patient preferences alongside evidence and clinical expertise. Patient engagement has long been seen as the “last mile” problem of healthcare, the assumption being that the more patients participate in their own care, the better the health outcomes (8). However, patient involvement and shared decision-making where clinicians and patients make decisions together have been found to be difficult to achieve in routine healthcare practice (33, 34). Multiple obstacles hinder this process, ranging from low levels of health literacy among patients and their reluctance to actively participate, to barriers rooted in clinicians' attitudes, scepticism towards patient knowledge and ineffective communication strategies. In addition, high workloads, bustling healthcare environments and limited resources pose further challenges (35, 36). Consequently, living up to the EBP model's aspiration of integrating patients' preferences into clinical decision-making often proves challenging.

3 Enhancing EBP through AI support

This section explores how the use of AI in healthcare can address challenges within the EBP model and enhance the three key components of evidence, clinical experience and patient preferences through AI support.

3.1 Enhancing evidence through AI support

AI presents a promising solution to several challenges inherent in the research process, from conducting studies and generating evidence, through synthesizing findings and disseminating crucial information to clinicians, to implementing these findings in routine practice. Notably, it has been suggested that AI may free up researchers' time to delve into new areas of pioneering research, commonly referred to as “blue skies” research (37). AI is proficient in focused tasks, such as sifting through abstracts to identify pertinent information, and excels in analysing substantial volumes of unstructured data (38). Further, AI may assist in the laborious process of performing systematic reviews by increasing the relevance of the suggested articles, thereby saving valuable time (39). Consequently, AI can enhance researchers' efficiency, expediting the process of generating evidence.
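
To illustrate what such focused assistance can look like, the sketch below shows a minimal abstract-screening workflow in which a classifier trained on a few labelled abstracts ranks unscreened ones by predicted relevance, so reviewers can read the most promising studies first. The abstracts, labels and model choice are illustrative assumptions, not a description of any specific published tool.

```python
# A minimal sketch of AI-assisted abstract screening for a systematic
# review: a classifier trained on a handful of labelled abstracts ranks
# the remaining ones so reviewers can prioritize likely-relevant studies.
# All abstracts and labels below are hypothetical placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

labelled_abstracts = [
    "Randomized trial of AI-supported mammography screening in adults.",
    "Protein folding dynamics in yeast cells under heat stress.",
]
labels = [1, 0]  # 1 = relevant to the review question, 0 = not relevant

unscreened_abstracts = [
    "Deep learning improves fracture detection on wrist radiographs.",
    "Soil microbiome diversity in arid climates.",
]

# Represent abstracts as TF-IDF vectors and fit a simple classifier.
vectorizer = TfidfVectorizer(stop_words="english")
X_train = vectorizer.fit_transform(labelled_abstracts)
model = LogisticRegression().fit(X_train, labels)

# Rank the unscreened abstracts by predicted relevance, highest first.
scores = model.predict_proba(vectorizer.transform(unscreened_abstracts))[:, 1]
for score, abstract in sorted(zip(scores, unscreened_abstracts), reverse=True):
    print(f"{score:.2f}  {abstract[:60]}")
```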

AI presents an opportunity to enhance the efficiency and effectiveness of clinical RCTs, often considered the gold standard for generation of evidence. Its potential covers all phases of these trials, from discovery, pre-trial planning and design to patient recruitment, trial execution and analysis of results (40, 41). In pharmaceutical research, the discovery and design of potential drugs demand significant time and resources, often relying on labour-intensive processes (42). Leveraging AI-driven tools like AlphaFold, researchers can predict the three-dimensional structure of proteins from their amino acid sequences, expediting the identification of promising compound candidates during the preclinical phase (41, 42). Moreover, AI can forecast potential toxicity and side effects in the initial stages, thus enhancing the likelihood of trial success. It is estimated that approximately 30% of potential drugs are discarded due to toxicity concerns, making AI-enabled toxicity prediction a valuable resource-saving tool (43). Additionally, AI can streamline the identification of suitable trial participants, as demonstrated by Hassanzadeh et al. (44), who utilized an algorithm to match individuals with the eligibility criteria of relevant trials using text-based data extracted from patient records. During the active trial phase, AI offers improved patient monitoring, enhancing adherence to the medication or treatment regimen under study (45). AI systems can contribute to the analysis of the results by providing more comprehensive assessments, such as identifying key risk factors, managing missing data and automating data extraction to minimize human error. Moreover, AI's efficiency in supporting clinical trials expands the feasibility of conducting trials in areas where limited profitability might otherwise hinder progress, such as developing medications for rare diseases or targeted therapies (40).
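
A rough illustration of text-based eligibility matching of the kind described above is sketched below: a free-text patient record is checked against hypothetical inclusion and exclusion criteria using simple keyword and negation heuristics. The criteria, record and mentions helper are invented for illustration; the approach of Hassanzadeh et al. (44) uses semantically enriched document representations rather than keyword matching.

```python
# A simplified sketch of matching a patient to trial eligibility criteria
# from free-text records. Criteria and record are hypothetical; a real
# system would rely on semantic document representations, not keywords.
import re

trial_criteria = {
    "inclusion": ["type 2 diabetes"],
    "exclusion": ["pregnancy", "renal failure"],
}

patient_record = (
    "67-year-old patient with type 2 diabetes and hypertension, "
    "no history of renal failure."
)

def mentions(term: str, text: str) -> bool:
    """Crude check for a criterion term that skips plainly negated mentions."""
    for match in re.finditer(re.escape(term), text, re.IGNORECASE):
        preceding = text[max(0, match.start() - 25):match.start()].lower()
        if "no history of" not in preceding and "denies" not in preceding:
            return True
    return False

meets_inclusion = all(mentions(t, patient_record)
                      for t in trial_criteria["inclusion"])
meets_exclusion = any(mentions(t, patient_record)
                      for t in trial_criteria["exclusion"])
print("eligible:", meets_inclusion and not meets_exclusion)  # eligible: True
```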

AI can also play an important role in enhancing the synthesis of evidence within systematic reviews, reducing time and costs while enhancing efficiency (46). Notably, various AI systems can facilitate screening of titles and abstracts in systematic reviews (39). A systematic review of AI-assisted systematic reviews concluded that several AI systems “have taken hold with varying success in evidence synthesis” (47). This trend indicates a growing probability of AI-assisted systematic reviews gaining prevalence, thereby accelerating the synthesis of evidence. Furthermore, AI provides assistance in conducting literature reviews, particularly those that are less structured compared with quantitative systematic reviews (48).

However, AI is not without limitations when used for research purposes. Large language models (LLMs) may tempt researchers to use them for tasks beyond those for which they have been validated, as their results often appear superficially sound (49). This poses challenges in qualitative evidence synthesis, which relies on careful consideration of evidence quality and quantity (50). For instance, summarizing evidence in a systematic review may overlook actual probabilities if LLMs generate text without causal understanding. If the LLM's training data is pertinent to the topic, the generated text may correlate with the underlying evidence, but correlation does not establish a causal relationship. Conversely, if the training data is irrelevant, the resulting text may be eloquent and persuasive but likely lacks accuracy and truthfulness (51). As LLM availability grows, there is a risk of diluting valid evidence with AI-generated content; vigilance within the scientific community is crucial to prevent this.

Although AI offers support to researchers throughout various phases of the research process, integrating this evidence into clinical practice poses challenges for clinicians. They might encounter difficulties in critically reviewing and interpreting research findings or assessing the quality of the evidence, potentially due to limited training in developing academic evidence assessment skills (40, 45). Keeping up with the evolving research landscape requires significant time and expertise. Notably, Hoffmann et al.'s study (52) observed a drastic increase, more than 20-fold, in indexed systematic reviews in the past two decades, averaging 80 publications per day in 2019. Despite suggestions indicating AI's potential to afford clinicians more time to engage with evidence (33), empirical studies supporting this relationship are currently lacking.

3.2 Enhancing clinical experience through AI support

AI systems have a distinct advantage over human clinicians in processing specific types of data and information. Equipped with access to vast amounts of data and the capability to process them in real time, AI systems demonstrate a capacity for rapid learning and adaptation, continuously enhancing their performance (22). The use of AI in clinical decision-making offers clear benefits, notably in minimizing variations among clinicians, thereby ensuring more uniform and precise diagnoses, treatments and prognoses (53).

AI has shown significant promise in revolutionizing image analysis within healthcare. For example, Cohen et al. (54) showed that an AI algorithm specialized in wrist fracture detection outperformed radiographic analysis conducted by non-specialized radiologists. The study highlighted that a combination of AI and physician analysis achieved the highest sensitivity in identifying radiographic fractures. This trend was echoed in a mammography study (55), where the combined approach of AI and physician assessment surpassed the performance of two individual physicians or AI analysis alone. In addition, a separate study (56) revealed that AI-supported mammography screening achieved cancer detection rates comparable with double reading by two physicians, while considerably reducing the screen-reading workload. Thus, clinicians using AI-driven clinical decision support systems are not confined by their own clinical experience but can harness data from thousands of relevant cases to enhance their decision-making process.
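
A simple worked example clarifies why combining readers can raise sensitivity: if a case is recalled whenever either the AI or the physician flags it, a finding is missed only when both readers miss it. The sketch below computes the combined operating point under the simplifying assumption of independent reader errors, using illustrative sensitivities and specificities rather than figures from the cited studies.

```python
# A worked example of combining an AI reader with a physician under an
# OR rule (a case is flagged when EITHER reader flags it). Sensitivities
# and specificities below are illustrative, not figures from the cited
# studies, and the calculation assumes independent errors.
def combined_or_rule(sens_a: float, sens_b: float,
                     spec_a: float, spec_b: float) -> tuple[float, float]:
    """Sensitivity/specificity when a positive call by either reader counts."""
    sensitivity = 1 - (1 - sens_a) * (1 - sens_b)  # missed only if both miss
    specificity = spec_a * spec_b                  # false alarm if either flags
    return sensitivity, specificity

sens, spec = combined_or_rule(0.85, 0.80, 0.95, 0.90)
print(f"combined sensitivity: {sens:.3f}, combined specificity: {spec:.3f}")
# combined sensitivity: 0.970, combined specificity: 0.855
```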

AI holds the promise of augmenting clinicians' capabilities and enriching their clinical experience through innovative simulation training methods. The emergence of LLMs presents an added avenue for creating interactive medical simulation cases. These simulations provide patient responses to every conceivable action or reaction by medical students, offering promising educational possibilities (57). Recent studies using the LLM ChatGPT have demonstrated its efficiency and accuracy in simulating patients for educational purposes (58). Further research is needed to assess the equivalency of AI-simulated medical training to genuine patient consultations and to ascertain the extent to which knowledge acquired from simulated cases translates into practical real-world healthcare scenarios (59).

AI may alleviate some of the challenges associated with clinicians' cognitive and implicit biases, but AI systems still carry a risk of various flaws (60). For example, the input or output of an AI application may necessitate human judgement, such as a clinician determining what data should be used in the application (61). A novel challenge for clinicians is knowing when and how to use the output from a particular algorithm in a clinical situation. The importance of increased knowledge among clinicians has been emphasized to ensure appropriate use of AI (62). Bias can be generated across AI system development, from the preparation and collection of the data to the development and training of the algorithm, and the evaluation and deployment of the system in clinical settings (15, 63). These challenges underscore the importance of not taking the objectivity of AI for granted but rather devoting research to investigating the consequences of AI flaws prior to routine use.

The journey towards expertise for clinicians is multifaceted, encompassing experiential learning and self-reflection, encountering errors, observing peers and receiving feedback from both senior staff and patients. Typically, clinicians start with basic tasks that require fundamental skills, gradually progressing to more complex responsibilities as their expertise advances. This gradual ascent builds a crucial foundational framework for expertise (64). However, the integration of AI in performing tasks traditionally conducted by clinicians may potentially omit the foundational learning stage acquired through years of practice. This transfer of responsibilities to AI could lead to deskilling, characterized by reduced clinician discretion, autonomy, decision-making capabilities and domain knowledge within their roles (65).

3.3 Enhancing patient preferences through AI support

AI presents promising avenues for enhancing patient engagement, aligning with the aspirations of the EBP model. Despite the lack of dedicated research on AI systems explicitly designed for this purpose, its capacity to save time for clinicians has been acknowledged, potentially fostering more meaningful interactions between clinicians and patients (8, 33, 66, 67). Previously, simpler algorithms like the Wells score for diagnosing deep vein thrombosis (68) and the FRAX score for assessing osteoporosis risk (69) have supported clinicians in medical decision-making. These tools help avoid unnecessary diagnostic procedures by identifying patients who are more likely to benefit. More advanced AI algorithms, such as voice recognition and generative AI, have even greater potential to save clinician time by automating the transcription of complete clinician-patient interactions (70). Automating administrative tasks such as appointment scheduling, reminders and the management of no-shows can likewise alleviate clinicians' burden of data entry, freeing up time (71). Research on AI in radiology suggests that radiologists, with a reduced administrative load, can devote more time to patients, enabling them to prioritize personalized care (72).
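
For readers unfamiliar with such scores, the sketch below implements a rule-based decision aid in the spirit of the two-level Wells score for DVT (68): clinical findings add points, a plausible alternative diagnosis subtracts points, and the total stratifies risk. The item wording is abridged and the code is purely illustrative; a clinical implementation would follow the validated instrument exactly.

```python
# An illustrative sketch of a simple rule-based decision aid in the
# spirit of the Wells score for DVT: each finding adds a point, an
# equally likely alternative diagnosis subtracts two, and the total
# stratifies risk. Item wording is abridged for illustration only.
WELLS_ITEMS = {
    "active_cancer": 1,
    "paralysis_or_recent_immobilization": 1,
    "recently_bedridden_or_major_surgery": 1,
    "localized_deep_vein_tenderness": 1,
    "entire_leg_swollen": 1,
    "calf_swelling_over_3cm": 1,
    "pitting_edema_symptomatic_leg": 1,
    "collateral_superficial_veins": 1,
    "previously_documented_dvt": 1,
    "alternative_diagnosis_as_likely": -2,
}

def wells_dvt_score(findings: set[str]) -> tuple[int, str]:
    """Sum the points for the recorded findings and dichotomize the risk."""
    score = sum(pts for item, pts in WELLS_ITEMS.items() if item in findings)
    return score, ("DVT likely" if score >= 2 else "DVT unlikely")

score, category = wells_dvt_score({"active_cancer", "entire_leg_swollen"})
print(score, category)  # 2 DVT likely
```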

AI holds promise in enhancing patient autonomy, presenting the prospect of a more balanced clinician-patient relationship with a more equitable decision-making process (73). For example, an AI-driven self-monitoring device has been developed to anticipate exacerbation risks (severe worsening of lung symptoms) in patients with chronic obstructive pulmonary disease (COPD) (74). This device combines hardware for biometric data capture with AI-driven software to predict exacerbation risks within the patient's home. Its goal is to offer automated recommendations that empower patients to make informed decisions without needing to consult a clinician.
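
A minimal sketch of this kind of home-based risk prediction follows: a classifier trained on daily biometric and symptom data estimates the probability of an impending exacerbation and triggers an automated recommendation. The features, training data and action threshold are hypothetical assumptions, not details of the cited tool (74).

```python
# A minimal sketch of home-based exacerbation risk prediction: a model
# trained on daily biometric and symptom data flags high-risk days.
# Features, data and the 0.5 action threshold are hypothetical; the
# cited tool uses its own validated model.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: resting heart rate, oxygen saturation (%), symptom score 0-10
X_history = np.array([
    [72, 96, 2], [75, 95, 3], [88, 91, 7], [70, 97, 1], [90, 90, 8],
])
y_history = np.array([0, 0, 1, 0, 1])  # 1 = exacerbation followed that day

model = LogisticRegression().fit(X_history, y_history)

today = np.array([[86, 92, 6]])
risk = model.predict_proba(today)[0, 1]
if risk > 0.5:  # hypothetical action threshold
    print(f"Elevated exacerbation risk ({risk:.0%}): follow your action plan.")
else:
    print(f"Risk low ({risk:.0%}): continue usual self-management.")
```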

While there is hope that AI will enhance patient autonomy, there are also uncertainties about how this will happen. Efficient algorithms require vast amounts of data, which in healthcare settings often means sensitive health information. Developing AI systems using non-anonymized data poses privacy risks, as individual patient data could be traced back (75). Additionally, AI-driven profiling may unveil health-related details about individuals without their consent. For instance, companies can predict individuals' health using publicly available social media data, potentially selling this information for profit. The conventional clinician-patient dynamic is often marked by the patient's vulnerability relative to the clinician, which raises questions concerning how AI might reshape this power balance. Empirical questions remain about how the clinician-patient relationship will evolve with AI deployment, necessitating further investigation. In addition, accommodating diverse patient preferences in AI systems poses a challenge. McDougall (76) has raised the concern that the integration of AI might unintentionally promote a new type of paternalism, encouraging a paradigm where “the computer knows best”. Such a transition could contradict the recognized significance of incorporating the patient's viewpoint within the EBP model.

4 Discussion

Striving to achieve evidence-based healthcare practice aligned with the EBP model has encountered substantial challenges, resulting in the rapid growth of implementation science. This discipline aims to identify barriers and formulate strategies that enable the integration of research-based practices into routine healthcare (77). When evidence is beset with quality limitations, restricted applicability or even absence, clinicians' experience is necessary to contextualize and apply it in specific cases. In addition, clinical decision-making may suffer from cognitive and implicit biases, limiting its accuracy. Incorporating patients’ preferences poses another challenge within the EBP model. Recognizing these limitations, several scholars (8–12) advocate for leveraging AI to enhance clinical decision-making.

Our analysis underscores the potential of AI to enhance various aspects of the three pillars of EBP. Nonetheless, the use of AI comes with inherent limitations. There is a risk of perpetuating biases and of deskilling clinicians as the automation of tasks progresses. Furthermore, many effects of AI applications in healthcare remain uncharted, inviting speculation, given the nascent stage of AI development and implementation in practice. Undoubtedly, the ongoing discourse on AI in healthcare will persist, necessitating empirical research to understand and shape its implementation and influence within healthcare.

AI has been lauded as a substantial time-saving tool, potentially affording clinicians more time to enhance their expertise in evidence assessment or deepen patient engagement (33). Nevertheless, sceptics have expressed concerns that economic dynamics may instead increase the number of patients navigating the healthcare system (78). The actual outcome, whether AI will predominantly optimize throughput or pursue alternate objectives, will be contingent upon how healthcare systems and decision-makers prioritize efficiency relative to other crucial values. It is essential to study both the intended and unintended consequences of AI deployment in healthcare to gain a holistic understanding of its impact. A European survey by the European Patients' Forum highlighted the promise of AI in delivering more personalized care to patients (79), but further research is crucial to explore how AI implementation will influence clinician-patient relationships and its broader impact on enhancing patient involvement.

AI has faced criticism for its “black box” nature, making it challenging to decipher or explain the reasoning behind specific predictions or decisions due to the intricate structures and numerous variables within AI systems (80). Conversely, the EBP model was designed to elucidate clinical decision-making processes, ensuring transparency in understanding why certain decisions were made (2). Nevertheless, proponents argue that if AI consistently outperforms clinicians, the necessity for explainability diminishes (33). Thus, the credibility and trustworthiness of AI systems could be assessed based on the reliability of their output rather than the transparency of their processes. Moreover, it could be contended that traditional clinical decision-making also harbours “black box” components, where cognitive and implicit biases influence clinicians' decisions (81).
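
One way to partially open the “black box” is post-hoc explanation. The sketch below uses permutation importance, which measures how much a model's performance drops when each input feature is shuffled, to indicate which variables drive its predictions. The model and data are synthetic, and this is just one of many explainability techniques, not a remedy endorsed by the sources cited here.

```python
# A sketch of post-hoc explanation for a "black box" model: permutation
# importance shuffles one feature at a time and records how much the
# model's accuracy drops, hinting at which inputs drive predictions.
# The model and data below are synthetic and purely illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                  # three synthetic "clinical" features
y = (X[:, 0] + 0.2 * X[:, 1] > 0).astype(int)  # outcome driven mostly by feature 0

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, score in zip(["feature_0", "feature_1", "feature_2"],
                       result.importances_mean):
    print(f"{name}: importance {score:.3f}")   # feature_0 should dominate
```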

Another debate surrounding AI is the potential deskilling of clinicians due to the automation of tasks by AI systems. There are concerns regarding potential negative consequences, such as compromised decision-making, a decline in clinical skills and a possible compromise in patient safety (64). In the short term, the challenge of deskilling may not present a significant issue because the current clinical workforce has substantial clinical experience, making them valuable resources in handling complex cases that AI systems might find challenging. However, in the long term, newly educated clinicians may lack experience and proficiency in tasks that have been automated, potentially affecting the quality of care. Deskilling is not a new phenomenon; across various sectors, technological advancements have reduced the skill requirements for specific jobs over generations (82). The evolution and implications of deskilling in healthcare remain uncertain and warrant thorough investigation.

The ongoing debate on AI-induced deskilling resonates with the historical discourse that surrounded the emergence of the EBP model. Initially, the EBP model faced critique and was labelled as “cookbook medicine” and a “straitjacket” for clinicians. Concerns about potential de-professionalization arose because it was perceived that clinicians might lose their autonomy and critical judgement by adhering to pre-established guidelines and protocols (4, 83). Yet, while past criticisms have waned over time, the current discourse on the potential deskilling of clinicians due to AI automation seems more serious. This elevated gravity likely arises from heightened concerns about the possible displacement of the workforce by AI-enabled automation (64).

This selective literature review has some limitations that require consideration. The primary focus was to explore AI's potential impact on the three components of EBP: evidence, clinical experience and patient preferences. Consequently, the review did not encompass the extensive array of existing or potential applications of AI in healthcare, such as its use as a managerial tool for administrative tasks, resource allocation in healthcare facilities or facilitating continuous quality improvement initiatives through data analysis and feedback mechanisms. This review does not provide a comprehensive overview but rather focuses on AI in relation to the three pillars of EBP.

5 Conclusion

This review of the literature on EBP and AI in healthcare suggests considerable potential for AI to advance evidence-based healthcare practices, potentially heralding the advent of what might be termed EBP 2.0. Nonetheless, empirical research is crucial to substantiate various aspects of the use of AI in healthcare. There is speculation about AI potentially replacing clinicians' roles in healthcare, but we believe that human clinicians will continue to provide critical value for patients through their uniquely human attributes. Consequently, a paradox arises whereby the integration of AI might indirectly emphasize and increase the value of human skills within healthcare.

Author contributions

PN: Conceptualization, Formal Analysis, Investigation, Methodology, Writing – original draft, Writing – review & editing. DS: Formal Analysis, Investigation, Methodology, Validation, Writing – review & editing. FH: Formal Analysis, Investigation, Methodology, Validation, Writing – review & editing. MN: Formal Analysis, Investigation, Methodology, Validation, Writing – review & editing. JN: Formal Analysis, Investigation, Methodology, Validation, Writing – review & editing. PS: Formal Analysis, Investigation, Methodology, Validation, Writing – review & editing. LP: Formal Analysis, Investigation, Methodology, Supervision, Validation, Writing – review & editing.

Funding

The author(s) declare that no financial support was received for the research, authorship, and/or publication of this article.

Acknowledgments

Thanks to Elin Karlsson for comments on the manuscript.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

1. Guyatt G, Cairns J, Churchill D, Cook D, Haynes B, Hirsh J, et al. Evidence-based medicine: a new approach to teaching the practice of medicine. JAMA. (1992) 268(17):2420–5. doi: 10.1001/jama.1992.03490170092032
2. Sackett DL. Evidence-Based Medicine: How to Practice and Teach EBM. New York: Churchill Livingstone (1997).
3. Howick J. The Philosophy of Evidence-Based Medicine. Chichester, West Sussex, UK: Wiley-Blackwell, BMJ Books (2011).
4. Trinder L. Introduction: the context of evidence-based practice. In: Trinder L, Reynolds S, editors. Evidence-Based Practice. (2000). p. 1–16. doi: 10.1002/9780470699003.ch1
5. Avby G. Evidence in Practice: On Knowledge Use and Learning in Social Work. Linköping: Linköping University, Department of Behavioural Sciences and Learning (2015).
6. Sackett DL, Rosenberg WMC, Gray JAM, Haynes RB, Richardson WS. Evidence based medicine: what it is and what it isn't. Br Med J. (1996) 312(7023):71. doi: 10.1136/bmj.312.7023.71
7. Nilsen P. Overview of theories, models and frameworks in implementation science. In: Nilsen P, Birken SA, editors. Handbook on Implementation Science. Cheltenham, UK: Edward Elgar Publishing (2020). p. 8–31.
8. Davenport T, Kalakota R. The potential for artificial intelligence in healthcare. Future Healthc J. (2019) 6(2):94–8. doi: 10.7861/futurehosp.6-2-94
9. Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. Nat Med. (2019) 25(1):44–56. doi: 10.1038/s41591-018-0300-7
10. Sunarti S, Fadzlul Rahman F, Naufal M, Risky M, Febriyanto K, Masnina R. Artificial intelligence in healthcare: opportunities and risk for future. Gac Sanit. (2021) 35(Suppl 1):S67–70. doi: 10.1016/j.gaceta.2020.12.019
11. Dave M, Patel N. Artificial intelligence in healthcare and education. Br Dent J. (2023) 234(10):761–4. doi: 10.1038/s41415-023-5845-2
12. Camaradou JCL, Hogg HDJ. Commentary: patient perspectives on artificial intelligence; what have we learned and how should we move forward? Adv Ther. (2023) 40(6):2563–72. doi: 10.1007/s12325-023-02511-3
13. European Union. The EU Artificial Intelligence Act.
14. Barrett M, Boyne J, Brandts J, Brunner-La Rocca HP, De Maesschalck L, De Wit K, et al. Artificial intelligence supported patient self-care in chronic heart failure: a paradigm shift from reactive to predictive, preventive and personalised care. EPMA J. (2019) 10(4):445–64. doi: 10.1007/s13167-019-00188-9
15. Ferryman K, Mackintosh M, Ghassemi M. Considering biased data as informative artifacts in AI-assisted health care. N Engl J Med. (2023) 389(9):833–8. doi: 10.1056/NEJMra2214964
16. Burns PB, Rohrich RJ, Chung KC. The levels of evidence and their role in evidence-based medicine. Plast Reconstr Surg. (2011) 128(1):305–10. doi: 10.1097/PRS.0b013e318219c171
17. Nilsen P. Implementering av Evidensbaserad Praktik. Malmö: Gleerup (2014).
18. Dones V. Systematic review writing by artificial intelligence: can artificial intelligence replace humans? Musculoskelet Disord Treat. (2022) 8. doi: 10.23937/2572-3243.1510112
19. de la Torre-López J, Ramírez A, Romero JR. Artificial intelligence to automate the systematic review of scientific literature. Computing. (2023) 105(10):2171–94. doi: 10.1007/s00607-023-01181-x
20. Borah R, Brown AW, Capers PL, Kaiser KA. Analysis of the time and workers needed to conduct systematic reviews of medical interventions using data from the PROSPERO registry. BMJ Open. (2017) 7(2):e012545. doi: 10.1136/bmjopen-2016-012545
21. Yaffe J, Montgomery P, Hopewell S, Shepard LD. Empty reviews: a description and consideration of Cochrane systematic reviews with no included studies. PLoS One. (2012) 7(5):e36626. doi: 10.1371/journal.pone.0036626
22. Krishnan G, Singh S, Pathania M, Gosavi S, Abhishek S, Parchani A, et al. Artificial intelligence in clinical medicine: catalyzing a sustainable global healthcare paradigm. Front Artif Intell. (2023) 6:1227091. doi: 10.3389/frai.2023.1227091
23. Nilsen P. Implementation Science: Theory and Application. Abingdon, UK: Routledge (2024).
24. Croskerry P, Singhal G, Mamede S. Cognitive debiasing 1: origins of bias and theory of debiasing. BMJ Qual Saf. (2013) 22(Suppl 2):ii58–64. doi: 10.1136/bmjqs-2012-001712
25. Gopal DP, Chetty U, O'Donnell P, Gajria C, Blackadder-Weinstein J. Implicit bias in healthcare: clinical practice, research and decision making. Future Healthc J. (2021) 8(1):40–8. doi: 10.7861/fhj.2020-0233
26. O'Sullivan ED. Cognitive bias is a crucial factor in nurses' decision making. Evid Based Nurs. (2023) 26(1):37. doi: 10.1136/ebnurs-2022-103585
27. Thirsk LM, Panchuk JT, Stahlke S, Hagtvedt R. Cognitive and implicit biases in nurses' judgment and decision-making: a scoping review. Int J Nurs Stud. (2022) 133:104284. doi: 10.1016/j.ijnurstu.2022.104284
28. Munafò MR, Tilling K, Taylor AE, Evans DM, Davey Smith G. Collider scope: when selection bias can substantially influence observed associations. Int J Epidemiol. (2018) 47(1):226–35. doi: 10.1093/ije/dyx206
29. Mamede S, van Gog T, van den Berge K, Rikers RM, van Saase JL, van Guldener C, et al. Effect of availability bias and reflective reasoning on diagnostic accuracy among internal medicine residents. JAMA. (2010) 304(11):1198–203. doi: 10.1001/jama.2010.1276
30. Esteban-Zubero E, Valdivia-Grandez MA, Alatorre-Jiménez MA, Torre LD, Marín-Medina A, Alonso-Barragán SA, et al. Diagnosis bias and its relevance during the diagnosis process. Fortune J. (2017). doi: 10.26502/acmcr.96550056
31. Hagiwara N, Kron FW, Scerbo MW, Watson GS. A call for grounding implicit bias training in clinical and translational frameworks. Lancet. (2020) 395(10234):1457–60. doi: 10.1016/s0140-6736(20)30846-1
32. Swift JA, Hanlon S, El-Redy L, Puhl RM, Glazebrook C. Weight bias among UK trainee dietitians, doctors, nurses and nutritionists. J Hum Nutr Diet. (2013) 26(4):395–402. doi: 10.1111/jhn.12019
33. Sauerbrei A, Kerasidou A, Lucivero F, Hallowell N. The impact of artificial intelligence on the person-centred, doctor-patient relationship: some problems and solutions. BMC Med Inform Decis Mak. (2023) 23(1):73. doi: 10.1186/s12911-023-02162-y
34. Waddell A, Lennox A, Spassova G, Bragge P. Barriers and facilitators to shared decision-making in hospitals from policy to practice: a systematic review. Implement Sci. (2021) 16(1):74. doi: 10.1186/s13012-021-01142-y
35. Fridberg H. The Complexities of Implementing Person-Centred Care in a Real-World Setting: A Case Study with Seven Embedded Units. Falun: Högskolan Dalarna (2022).
36. Grim K, Näslund H, Allaskog C, Andersson J, Argentzell E, Broström K, et al. Legitimizing user knowledge in mental health services: epistemic (in)justice and barriers to knowledge integration. Front Psychiatry. (2022) 13:981238. doi: 10.3389/fpsyt.2022.981238
37. Chubb J, Cowling P, Reed D. Speeding up to keep up: exploring the use of AI in the research process. AI Soc. (2022) 37(4):1439–57. doi: 10.1007/s00146-021-01259-0
38. van Belkom R. The impact of artificial intelligence on the activities of a futurist. World Futures Rev. (2019) 12(2):156–68. doi: 10.1177/1946756719875720
39. van Dijk SHB, Brusse-Keizer MGJ, Bucsán CC, van der Palen J, Doggen CJM, Lenferink A. Artificial intelligence in systematic reviews: promising when appropriately used. BMJ Open. (2023) 13(7):e072254. doi: 10.1136/bmjopen-2023-072254
40. Askin S, Burkhalter D, Calado G, El Dakrouni S. Artificial intelligence applied to clinical trials: opportunities and challenges. Health Technol (Berl). (2023) 13(2):203–13. doi: 10.1007/s12553-023-00738-2
41. Jumper J, Evans R, Pritzel A, Green T, Figurnov M, Ronneberger O, et al. Highly accurate protein structure prediction with AlphaFold. Nature. (2021) 596(7873):583–9. doi: 10.1038/s41586-021-03819-2
42. Borkakoti N, Thornton JM. AlphaFold2 protein structure prediction: implications for drug discovery. Curr Opin Struct Biol. (2023) 78:102526. doi: 10.1016/j.sbi.2022.102526
43. Tran TTV, Surya Wibowo A, Tayara H, Chong KT. Artificial intelligence in drug toxicity prediction: recent advances, challenges, and future perspectives. J Chem Inf Model. (2023) 63(9):2628–43. doi: 10.1021/acs.jcim.3c00200
44. Hassanzadeh H, Karimi S, Nguyen A. Matching patients to clinical trials using semantically enriched document representation. J Biomed Inform. (2020) 105:103406. doi: 10.1016/j.jbi.2020.103406
45. Harrer S, Shah P, Antony B, Hu J. Artificial intelligence for clinical trial design. Trends Pharmacol Sci. (2019) 40(8):577–91. doi: 10.1016/j.tips.2019.05.005
46. Wang L, Zhang Y, Wang D, Tong X, Liu T, Zhang S, et al. Artificial intelligence for COVID-19: a systematic review. Front Med (Lausanne). (2021) 8:704256. doi: 10.3389/fmed.2021.704256
47. Blaizot A, Veettil SK, Saidoung P, Moreno-Garcia CF, Wiratunga N, Aceves-Martins M, et al. Using artificial intelligence methods for systematic review in health sciences: a systematic review. Res Synth Methods. (2022) 13(3):353–62. doi: 10.1002/jrsm.1553
48. Wagner G, Lukyanenko R, Paré G. Artificial intelligence and the conduct of literature reviews. J Inf Technol. (2021) 37(2):209–26. doi: 10.1177/02683962211048201
49. Bhardwaj A, Kishore S, Pandey DK. Artificial intelligence in biological sciences. Life (Basel). (2022) 12(9). doi: 10.3390/life12091430
50. Wright RW, Brand RA, Dunn W, Spindler KP. How to write a systematic review. Clin Orthop Relat Res. (2007) 455:23–9. doi: 10.1097/BLO.0b013e31802c9098
51. Marcus G, Leivada E, Murphy E. A sentence is worth a thousand pictures: can large language models understand human language? arXiv preprint arXiv:2308.00109 (2023).
52. Hoffmann F, Allers K, Rombey T, Helbach J, Hoffmann A, Mathes T, et al. Nearly 80 systematic reviews were published each day: observational study on trends in epidemiology and reporting over the years 2000–2019. J Clin Epidemiol. (2021) 138:1–11. doi: 10.1016/j.jclinepi.2021.05.022
53. Ramgopal S, Sanchez-Pinto LN, Horvat CM, Carroll MS, Luo Y, Florin TA. Artificial intelligence-based clinical decision support in pediatrics. Pediatr Res. (2023) 93(2):334–41. doi: 10.1038/s41390-022-02226-1
54. Cohen M, Puntonet J, Sanchez J, Kierszbaum E, Crema M, Soyer P, et al. Artificial intelligence vs. radiologist: accuracy of wrist fracture detection on radiographs. Eur Radiol. (2023) 33(6):3974–83. doi: 10.1007/s00330-022-09349-3
55. Dembrower K, Crippa A, Colón E, Eklund M, Strand F. Artificial intelligence for breast cancer detection in screening mammography in Sweden: a prospective, population-based, paired-reader, non-inferiority study. Lancet Digit Health. (2023) 5(10):e703–11. doi: 10.1016/s2589-7500(23)00153-x
56. Lång K, Josefsson V, Larsson AM, Larsson S, Högberg C, Sartor H, et al. Artificial intelligence-supported screen reading versus standard double reading in the Mammography Screening with Artificial Intelligence trial (MASAI): a clinical safety analysis of a randomised, controlled, non-inferiority, single-blinded, screening accuracy study. Lancet Oncol. (2023) 24(8):936–44. doi: 10.1016/s1470-2045(23)00298-x
57. Safranek CW, Sidamon-Eristoff AE, Gilson A, Chartash D. The role of large language models in medical education: applications and implications. JMIR Med Educ. (2023) 9:e50945. doi: 10.2196/50945
58. Liu X, Wu C, Lai R, Lin H, Xu Y, Lin Y, et al. ChatGPT: when the artificial intelligence meets standardized patients in clinical training. J Transl Med. (2023) 21(1):447. doi: 10.1186/s12967-023-04314-0
59. Williams B, Song JJY. Are simulated patients effective in facilitating development of clinical competence for healthcare students? A scoping review. Adv Simul (Lond). (2016) 1:6. doi: 10.1186/s41077-016-0006-1
60. Ibrahim H, Liu X, Zariffa N, Morris AD, Denniston AK. Health data poverty: an assailable barrier to equitable digital health care. Lancet Digit Health. (2021) 3(4):e260–5. doi: 10.1016/s2589-7500(20)30317-4
61. Belenguer L. AI bias: exploring discriminatory algorithmic decision-making models and the application of possible machine-centric solutions adapted from the pharmaceutical industry. AI Ethics. (2022) 2(4):771–87. doi: 10.1007/s43681-022-00138-8
62. Goodman KE, Rodman AM, Morgan DJ. Preparing physicians for the clinical algorithm era. N Engl J Med. (2023) 389(6):483–7. doi: 10.1056/NEJMp2304839
63. Vokinger KN, Feuerriegel S, Kesselheim AS. Mitigating bias in machine learning for medicine. Commun Med. (2021) 1(1):25. doi: 10.1038/s43856-021-00028-w
64. Aquino YSJ, Rogers WA, Braunack-Mayer A, Frazer H, Win KT, Houssami N, et al. Utopia versus dystopia: professional perspectives on the impact of healthcare artificial intelligence on clinical roles and skills. Int J Med Inf. (2023) 169:104903. doi: 10.1016/j.ijmedinf.2022.104903
65. Randhawa GK, Jackson M. The role of artificial intelligence in learning and professional development for healthcare professionals. Healthc Manage Forum. (2020) 33(1):19–24. doi: 10.1177/0840470419869032
66. Chen JH, Asch SM. Machine learning and prediction in medicine—beyond the peak of inflated expectations. N Engl J Med. (2017) 376(26):2507–9. doi: 10.1056/NEJMp1702071
67. Fogel AL, Kvedar JC. Artificial intelligence powers digital medicine. NPJ Digit Med. (2018) 1:5. doi: 10.1038/s41746-017-0012-2
68. Modi S, Deisler R, Gozel K, Reicks P, Irwin E, Brunsvold M, et al. Wells criteria for DVT is a reliable clinical tool to assess the risk of deep venous thrombosis in trauma patients. World J Emerg Surg. (2016) 11:24. doi: 10.1186/s13017-016-0078-1
69. Kanis JA, Johansson H, Harvey NC, McCloskey EV. A brief history of FRAX. Arch Osteoporos. (2018) 13(1):118. doi: 10.1007/s11657-018-0510-0
70. Baker HP, Dwyer E, Kalidoss S, Hynes K, Wolf J, Strelzow JA. ChatGPT's ability to assist with clinical documentation: a randomized controlled trial. J Am Acad Orthop Surg. (2024) 32(3):123–9. doi: 10.5435/jaaos-d-23-00474
71. Glover WJ, Li Z, Pachamanova D. The AI-enhanced future of health care administrative task management. NEJM Catal Innov Care Deliv. (2022) 3(2). doi: 10.1056/CAT.21.0355
72. Aminololama-Shakeri S, López JE. The doctor-patient relationship with artificial intelligence. AJR Am J Roentgenol. (2019) 212(2):308–10. doi: 10.2214/ajr.18.20509
73. Žaliauskaitė M. Role of ruler or intruder? Patient's right to autonomy in the age of innovation and technologies. AI Soc. (2021) 36(2):573–83. doi: 10.1007/s00146-020-01034-7
74. Boer L, Bischoff E, van der Heijden M, Lucas P, Akkermans R, Vercoulen J, et al. A smart mobile health tool versus a paper action plan to support self-management of chronic obstructive pulmonary disease exacerbations: randomized controlled trial. JMIR Mhealth Uhealth. (2019) 7(10):e14408. doi: 10.2196/14408
75. Rickert J. On patient safety: the lure of artificial intelligence—are we jeopardizing our patients' privacy? Clin Orthop Relat Res. (2020) 478(4):712–4. doi: 10.1097/corr.0000000000001189
76. McDougall RJ. Computer knows best? The need for value-flexibility in medical AI. J Med Ethics. (2019) 45(3):156–60. doi: 10.1136/medethics-2018-105118
77. Nilsen P. Making sense of implementation theories, models and frameworks. Implement Sci. (2015) 10(1):53. doi: 10.1186/s13012-015-0242-0
78. Sparrow R, Hatherley JJ. The promise and perils of AI in medicine. Int J Chin Comp Philos Med. (2019) 17(2):79–109. doi: 10.24112/ijccpm.171678
79. European Patients' Forum. Survey for Patient Organisations and Individual Patient Advocates on the Perception of AI in Healthcare. Spring 2023.
80. Gilvary C, Madhukar N, Elkhader J, Elemento O. The missing pieces of artificial intelligence in medicine. Trends Pharmacol Sci. (2019) 40(8):555–64. doi: 10.1016/j.tips.2019.06.001
81. Nalliah RP. Clinical decision making—choosing between intuition, experience and scientific evidence. Br Dent J. (2016) 221(12):752–4. doi: 10.1038/sj.bdj.2016.942
82. Lu J. Will medical technology deskill doctors? Int Educ Stud. (2016) 9:130–4. doi: 10.5539/ies.v9n7p130
83. Bergmark A, Bergmark Å, Lundström T. Evidensbaserat Socialt Arbete: Teori, Kritik, Praktik. Stockholm: Natur & kultur (2011).

Keywords: artificial intelligence, evidence-based practice, clinical decision-making, evidence, clinical experience, patient preferences

Citation: Nilsen P, Sundemo D, Heintz F, Neher M, Nygren J, Svedberg P and Petersson L (2024) Towards evidence-based practice 2.0: leveraging artificial intelligence in healthcare. Front. Health Serv. 4:1368030. doi: 10.3389/frhs.2024.1368030

Received: 9 January 2024; Accepted: 31 May 2024;
Published: 11 June 2024.

Edited by:

Melanie Barwick, University of Toronto, Canada

Reviewed by:

Katrina Maree Long, Monash University, Australia
Manon Ironside, University of California, San Diego, United States

© 2024 Nilsen, Sundemo, Heintz, Neher, Nygren, Svedberg and Petersson. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Per Nilsen, per.nilsen@liu.se
