
ORIGINAL RESEARCH article

Front. Public Health, 26 October 2023
Sec. Disaster and Emergency Medicine
This article is part of the Research Topic Artificial Intelligence Solutions for Global Health and Disaster Response: Challenges and Opportunities

Perceptions and concerns of emergency medicine practitioners about artificial intelligence in emergency triage management during the pandemic: a national survey-based study

Erhan Ahun1, Ahmet Demir2, Yavuz Yiğit3,4*, Yasemin Koçer Tulgar5,6, Meltem Doğan6, David Terence Thomas7,8, Serkan Tulgar9
  • 1Department of Emergency Medicine, Sabuncuoglu Serefeddin Training and Research Hospital, Amasya, Türkiye
  • 2Department of Emergency Medicine, Faculty of Medicine, Mugla Sitki Kocman University, Mugla, Türkiye
  • 3Department of Emergency Medicine, Hamad Medical Corporation, Doha, Qatar
  • 4Blizard Institute, Barts and The London School of Medicine and Dentistry, Queen Mary University of London, London, United Kingdom
  • 5Department of Medical History and Ethics, Samsun University Faculty of Medicine, Samsun, Türkiye
  • 6Department of Medical History and Ethics, Kocaeli University Faculty of Medicine, Kocaeli, Türkiye
  • 7Department of Medical Education, Maltepe University Faculty of Medicine, Istanbul, Türkiye
  • 8Department of Pediatric Surgery, Maltepe University Faculty of Medicine, Istanbul, Türkiye
  • 9Department of Anesthesiology, Samsun University Faculty of Medicine, Samsun Training and Research Hospital, Samsun, Türkiye

Objective: The ethics of using AI in healthcare has been the subject of continuous debate. We sought to identify the ethical concerns and viewpoints of Turkish emergency medicine physicians regarding the use of AI in triage during epidemics.

Materials and methods: Ten emergency specialists were initially enlisted for this project, and their responses to open-ended questions about the ethical issues surrounding AI in the emergency department formed the basis of the instrument. A 15-question survey was created from their input and refined through a pilot test with 15 emergency specialty doctors. The updated survey was then distributed to emergency specialists via email and social media.

Results: 167 emergency medicine specialists participated in the study, with an average age of 38.22 years and 6.79 years of professional experience. The majority agreed that AI could benefit patients (54.50%) and healthcare professionals (70.06%) in emergency department triage during pandemics. Regarding responsibility, 63.47% believed in shared responsibility between emergency medicine specialists and AI manufacturers/programmers for complications. Additionally, 79.04% of participants agreed that the responsibility for complications in AI applications varies depending on the nature of the complication. Concerns about privacy were expressed by 20.36% regarding deep learning-based applications, while 61.68% believed that anonymity protected privacy. Additionally, 70.66% of participants believed that AI systems would be as sensitive as humans in terms of non-discrimination.

Conclusion: The potential advantages of deploying AI programs in emergency department triage during pandemics for patients and healthcare providers were acknowledged by emergency medicine doctors in Turkey. Nevertheless, they expressed notable ethical concerns related to the responsibility and accountability aspects of utilizing AI systems in this context.

Introduction

Recent attention has been drawn to artificial intelligence (AI) due to its potential to enable the creation of computer systems that can replicate human intelligence and decision-making processes (1). AI has already permeated every aspect of our lives, even if we are not consciously aware of it (2). Recently, AI techniques have made waves across healthcare, fueling an active debate over whether AI doctors will eventually replace human physicians (3).

Utilizing sophisticated algorithms, AI can ‘comprehend’ intricate patterns within extensive healthcare data and employ these acquired insights to enhance clinical practices. Furthermore, it can be endowed with the capability to learn and self-correct, thus refining its precision through feedback loops. An AI system aids healthcare practitioners by furnishing them with the most current medical insights from scholarly journals, textbooks, and clinical experiences, thereby ensuring optimal patient care. Additionally, AI systems are pivotal in mitigating the diagnostic and therapeutic errors intrinsic to human clinical practice (3–5). Furthermore, these AI systems extract invaluable information from extensive patient populations, facilitating real-time inferences for health risk alerts and predictions regarding health outcomes (6).

Emergency medicine, like many other medical specialties, has identified a variety of potential AI applications. Diagnosis is one of the most essential applications of AI in emergency care. In order to identify potential diagnoses, AI algorithms can examine patient data, such as symptoms, medical history, and test results, efficiently and swiftly.

AI’s role in healthcare extends significantly to include advanced patient triage capabilities. By leveraging AI algorithms to analyze patient data comprehensively, healthcare systems can effectively prioritize individuals based on the severity of their condition, ensuring that those in critical need receive immediate attention and the appropriate care interventions. This not only optimizes resource allocation but also enhances patient outcomes by minimizing delays in treatment.

In emergency medicine, AI’s influence in triage is particularly transformative. Beyond diagnosis, AI contributes to the triage process by rapidly assessing the acuity of each case. Through the analysis of various clinical indicators, such as vital signs, medical history, and presenting symptoms, AI systems can swiftly categorize patients, enabling healthcare providers to allocate resources efficiently.

By integrating AI-driven triage systems into emergency departments, healthcare facilities can improve the speed and accuracy of decision-making. AI’s ability to analyze vast datasets and adapt to real-time information empowers clinicians to make well-informed decisions, ultimately leading to more precise and timely care delivery. As a result, patients with critical conditions receive immediate attention, while those with less urgent needs are appropriately managed, resulting in enhanced overall healthcare efficiency and patient satisfaction.

The world recently endured a severe COVID-19 pandemic, during which AI was also applied in the health sector. During the pandemic, AI methods were reportedly used for the analysis of radiological and laboratory results, diagnosis, patient triage in the emergency room, and the development of patient-specific treatments.

As with any pervasive invention, the application of AI in health has sparked ethical concerns, and these debates are ongoing in many fields (7). A few of these concerns include data privacy and security concerns, algorithmic bias, a lack of transparency, autonomy, and accountability, the dehumanization of healthcare, and economic repercussions. For AI to have the potential to improve healthcare outcomes, it must be used ethically and responsibly.

In this survey study, drawing on the opinions of emergency medicine specialists practicing in Turkey, we sought to determine the ethical concerns and perspectives surrounding the implementation of AI in emergency department triage management during an epidemic.

Materials and methods

Study design

This survey-based study investigated the ethical perspectives of emergency specialist physicians regarding the use of artificial intelligence (AI) in the emergency department. This research was approved by the Samsun University Clinical Research Ethics Committee (SÜKAEK) (Approval No. 2022-12-12, 23/11/2022) and conducted in accordance with the ethical principles outlined in the Declaration of Helsinki. All participants gave their informed consent. Participants’ confidentiality and anonymity were maintained throughout the duration of the study. Special attention was paid to ensuring that participants did not feel compelled to participate or provide specific responses.

Participant recruitment

Initially, ten emergency specialists were recruited to participate in the study. The research team devised and asked participants open-ended questions to gain insight into their experiences and perspectives regarding the ethical considerations of AI use in the ED.

Survey development

Based on the responses to the open-ended questions, the research team developed a survey for gathering more specific information from the participants. The survey consisted of 15 questions and was designed to assess the emergency specialists’ ethical perspectives on the use of AI in the ED.

Survey pilot testing

Pilot testing was conducted with 15 emergency specialist doctors. These participants were asked to provide feedback on any difficulties they had understanding the questions, issues with the appropriateness of certain questions, and grammar and spelling errors. Some changes were made to the survey to improve its clarity, readability, and comprehensiveness in response to the received feedback.

Data collection

The revised survey was distributed to the emergency specialist doctors via email and social media tools between 05/12/2022 and 15/04/2023. All participants gave their informed consent. Responses were collected anonymously.

Survey content

In the first section of the survey, participants were asked to provide descriptive information such as age, gender, and emergency medicine experience duration. On a 3-point Likert scale, participants were subsequently asked their opinions on a total of 13 questions pertaining to four major ethical topics. Before requesting opinions on each ethical topic, a thorough explanation of the ethical rule was provided. The most important ethical issues were as follows:

Beneficence (A): In this section, participants were asked about the beneficence of AI for triage purposes during the pandemic. The question encompassed two aspects: the usefulness of AI for patients and the usefulness of AI for physicians.

Responsibility and accountability (B): In this section, four questions were posed to gauge the participants’ perspectives on responsibility and accountability in the case of complications or adverse events resulting from the use of AI for triage purposes during the pandemic.

Rights to privacy and confidentiality (C): In this section, five questions on personal data protection and the right to privacy were posed to participants, separately for artificial intelligence and deep learning.

Non-Discrimination (D): In this section, two questions were posed to ascertain the participants’ perspectives on nondiscrimination.

On a 3-point Likert scale, participants assessed a total of 20 evaluations. Through a final open-ended question, participants were also given the opportunity to express any ethical concerns not addressed in the questionnaire. An English translation of the questionnaire is provided in Table 1.

Table 1. Artificial intelligence (AI) triage survey statements.

To collect data for the study, a survey was developed using Google Forms and distributed via multiple social media platforms, including WhatsApp, Twitter, and Facebook. In addition, the survey was sent individually via email to groups of emergency medicine specialists. To ensure a reliable sample size, a minimum of 154 participants was required for an 80% level of confidence and a 5% margin of error, given the total population of about 2,500 specialists. The objective was to collect responses from at least 170 participants to account for the possibility of data loss.
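The 154-participant minimum quoted above is consistent with the standard sample-size formula for estimating a proportion with a finite-population correction. A minimal sketch in Python, assuming maximum variability (p = 0.5) and a z-value of about 1.28 for 80% confidence (these assumptions are ours; the paper does not state its exact inputs):

```python
import math

def sample_size(population, z, margin=0.05, p=0.5):
    """Minimum sample size for estimating a proportion,
    with a finite-population correction."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2        # infinite-population size
    return math.ceil(n0 / (1 + (n0 - 1) / population))

# z ~ 1.28 for 80% confidence, 5% margin of error, ~2,500 specialists
print(sample_size(population=2500, z=1.28))  # -> 154
```

Under these assumptions the formula reproduces the reported minimum of 154 participants.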

Statistical analysis

SPSS 16 was used for data analysis. Descriptive data were presented as mean and standard deviation, and survey responses as frequency and percentage. A t-test was used for the analysis of continuous descriptive data. Categorical data were presented as counts and percentages and compared using the chi-square test or Fisher’s exact test, as appropriate, with post-hoc Bonferroni adjustments to determine which groups accounted for the differences. Statistical significance was accepted at p < 0.05.
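For a 2x2 table, the chi-square comparison described above can be sketched without statistical software. The snippet below is a hypothetical illustration (the counts are invented, not the study's data), using the identity p = erfc(sqrt(x/2)) for the p-value of a chi-square statistic x with one degree of freedom:

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic and p-value (1 degree of freedom)
    for the 2x2 contingency table [[a, b], [c, d]]."""
    n = a + b + c + d
    stat = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    p = math.erfc(math.sqrt(stat / 2))  # survival function of chi2 with 1 df
    return stat, p

# Illustrative counts: agree/disagree split across two groups
stat, p = chi2_2x2(20, 30, 25, 25)

# Bonferroni adjustment when one item is tested against k groupings
k = 3
p_adjusted = min(p * k, 1.0)
```

Note that for small expected cell counts the study falls back to Fisher's exact test, which this sketch does not implement.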

Results

Our survey was completed by 171 individuals within the specified time frame. Although our social media posts made clear at the outset that the survey targeted emergency medicine specialists, 4 participants were found to be emergency medicine residents; as a result, only 167 participants were taken into consideration for evaluation.

An overview of the survey results is shown in the Figure 1.

Figure 1. Demonstration of questionnaire results.

Participants comprised 44 females and 123 males, with an average age of 38.22 ± 6.79 years and an average duration of professional experience of 6.79 ± 5.25 years (Table 2).

Table 2. The analysis of the participants’ descriptive characteristics.

The responses to questions A1 and A2 were analyzed when evaluating the utility of using artificial intelligence for triage purposes in emergency services during COVID-19 and similar pandemics (such as accurate diagnosis time, fewer complications, etc.). 54.50% of participants believed it would be beneficial for patients, and 70.06% believed it would be beneficial for healthcare professionals.

In section B, participants were asked to assess the use of artificial intelligence in pandemics from the standpoint of responsibility and accountability. Only 12.57% of participants believed that all responsibility lies with emergency medicine specialists, while 23.95% said that only artificial intelligence manufacturers and programmers should bear responsibility. The highest rate of agreement was found for the statement “The responsibility for complications occurring in applications varies depending on their nature” (79.04%). The majority, 63.47%, stated that both parties should be responsible. A demonstration of the answers given by the participants according to their agreement regarding responsibility is presented in Figure 2.

Figure 2. Elicitation of participants’ perceptions regarding accountability for adverse outcomes arising from the utilization of artificial intelligence (AI). Percentages have been rounded to the nearest whole number.

In section C, situations pertaining to the right to privacy and confidentiality were evaluated. 20.36% of participants stated that deep learning-based applications violate privacy, while 23.95% stated that AI-based applications violate privacy. 14.37% of respondents believed that these applications constitute a violation of privacy under all circumstances. Conversely, 61.68% of participants believed that there is no violation of privacy so long as the data is recorded anonymously. In addition, nearly half of the respondents (47.31%) agreed that the ethical aspect of storing patient data in artificial intelligence and similar systems could be disregarded during extraordinary events such as the COVID-19 pandemic.

In section D, opinions on the non-discrimination principle were evaluated. 70.66% of participants believe that AI-based systems would be as sensitive to non-discrimination as humans. However, only 13.37% of respondents agreed that artificial intelligence-based triage would violate the non-discrimination principle.

In section E, respondents were questioned about their perspectives on additional ethical issues not covered by the survey. Seven participants were concerned about how cultural differences might affect patient attitudes toward AI-based applications and their potential repercussions. In addition, one participant expressed concern about the elevated risk of false positive/negative diagnoses resulting from the difficulties vulnerable groups face in comprehending and expressing themselves.

With the exception of two questions (p < 0.05), responses to the items did not demonstrate any discernible differences based on age, gender, or years of professional experience (Table 3). There was a significant gender difference in responses to the statement that the practitioner should bear sole responsibility for negative outcomes resulting from the use of artificial intelligence in triage (question B1): the proportion of women who disagreed with this statement was higher than that of men (86.3% vs. 53.6%, p < 0.001) (Table 4).
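The gender difference on question B1 can be roughly reconstructed from the reported percentages. The counts below are back-calculated estimates (86.3% of 44 women ≈ 38; 53.6% of 123 men ≈ 66), not the raw data, shown only to illustrate that such a split indeed yields p < 0.001 on a chi-square test with one degree of freedom:

```python
import math

# Back-calculated (approximate) counts of disagreement with statement B1
women_disagree, women_total = round(0.863 * 44), 44    # ~38 of 44
men_disagree, men_total = round(0.536 * 123), 123      # ~66 of 123

a, b = women_disagree, women_total - women_disagree
c, d = men_disagree, men_total - men_disagree

# Pearson chi-square for the 2x2 table [[a, b], [c, d]]
n = a + b + c + d
stat = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
p = math.erfc(math.sqrt(stat / 2))  # p-value, chi-square with 1 df

print(f"chi2 = {stat:.2f}, p = {p:.2g}")  # p falls well below 0.001
```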

Table 3. Assessing the relationship between participants’ demographic characteristics and their answers to questions and statistical results (p values).

Table 4. Assessment of descriptive items regarding questions/statements (only statistically significant items are presented in this table).

Another significant statistical finding was associated with question C2 of the survey, which addressed the ethical aspect of patient data storage in deep learning-based applications, specifically patient privacy. There was a higher rate of disagreement with this statement among emergency medicine specialists with over 10 years of experience compared to other experience duration groups (p = 0.040) (Table 4).

Discussion

According to our findings, the majority of emergency medicine specialists in Turkey believe that using AI-based systems for triage in emergency rooms during COVID-19 and other pandemics will benefit both patients and emergency physicians. In addition, participants believe that both the artificial intelligence and the emergency medicine professional should be held accountable for any problems caused by this application, with approximately 80% agreeing that the responsibility for complications in AI applications varies depending on their nature. A smaller proportion of participants agreed that AI’s collection of personal information violates users’ privacy, and nearly half said that this issue might be overlooked in extreme circumstances such as a pandemic. In terms of “non-discrimination,” most participants believed that artificial intelligence would be just as sensitive to this as humans, if not more so.

Triage is a critical process used in emergency medicine to effectively prioritize and manage the severity and urgency of patients’ healthcare needs. Triage is critical in quickly assessing and categorizing individuals based on the severity of their conditions and their chances of survival, especially during medical emergencies or disasters when a large influx of patients requires immediate medical attention. Medical resources can be allocated appropriately by efficiently triaging patients, ensuring that those in critical condition receive prompt care while optimizing the overall allocation of healthcare resources (1).

Triage becomes even more critical in the context of a pandemic, such as COVID-19, because the number of patients seeking medical care may exceed the available resources (2). During a pandemic, the triage process is critical for distinguishing between patients infected with the virus who require immediate medical attention and those who have mild symptoms that can be managed at home. This method ensures that resources, such as hospital beds, medical equipment, and healthcare personnel, are used as efficiently as possible (3).

Furthermore, during a pandemic, healthcare facilities may need to change their triage protocols in order to reduce the risk of infection transmission (4). Patients suspected or confirmed to have COVID-19, for example, may be triaged separately from other patients, and healthcare personnel may be required to wear personal protective equipment (PPE) to reduce their risk of exposure (6). These precautions are intended to protect both patients and healthcare workers while also slowing the spread of the virus.

Various technologies, such as telephone systems, digital scoring systems, deep learning, and AI-based systems, have been used for triage during pandemics (7–12). During the COVID-19 pandemic, AI programs have been recommended and implemented to improve patient, healthcare worker, and community safety. These AI systems consider descriptive data like age, gender, BMI, medical history, medications, contact history, and risk factors of patients who present to the emergency department. The AI systems generate an output by analyzing the patients’ current complaints, physical findings, laboratory tests, and radiological images. The analysis process employs technologies such as algorithms, machine learning, and artificial neural networks to determine probable diagnoses, urgency levels, and the severity of the patients’ conditions (13, 14). Subsequently, based on the outputs, patients can be directed to appropriate medical care units, hospitals, or facilities based on their level of urgency. Furthermore, these AI systems can aid in decision-making processes such as categorizing patients for home treatment, emergency department follow-up, or admission to a regular ward or intensive care unit (15–18).

The terms “deep learning” and “artificial intelligence” are frequently used interchangeably, but they are not the same (19). AI systems strive to imitate human learning models and demonstrate human-like intelligence. Deep learning, by contrast, is concerned with discovering patterns and relationships in large datasets and making inferences from them. Deep learning is thus only one technique within the larger field of AI, which also encompasses natural language processing, robotics, and other domains (20). Despite briefly mentioning the distinction between AI and deep learning in our survey, we generally preferred the umbrella term “AI” rather than separating the two.

Although the use of AI in medicine appears promising and beneficial, it is not without ethical concerns. These include, among other things, biases, a lack of transparency, privacy, accountability and responsibility, equity, depersonalization, and autonomy (21, 22). Although this survey could have been designed much more comprehensively, we focused on beneficence, responsibility and accountability, rights to privacy and confidentiality, and non-discrimination. As the scope of a survey study grows and the time required to complete it increases, the participation rate tends to fall. Furthermore, this is the first study to assess emergency medicine specialists’ perspectives on AI applications, and it should be viewed as a pilot ethical study focusing on a specific issue rather than a comprehensive ethical study.

Some expert opinions and survey studies have called into question the beneficence and ethical aspects of AI use in various medical fields (23, 24). However, there is currently no article that discusses the ethical implications of AI in the field of Emergency Medicine. Nonetheless, it is worth noting that studies on the ethical implications of AI use in many other areas of medicine have been published. Cobianchi et al. examined the ethical dimension of AI usage in surgical sciences in their study (using a modified Delphi process) and concluded that “the main ethical issues that arise from applying AI to surgery, relate to human agency, accountability for errors, technical robustness, privacy and data governance, transparency, diversity, non-discrimination, and fairness” (22). There are numerous recent studies discussing the ethical dimensions of AI usage in many areas of medicine, including imaging, differential diagnosis, prediction models, and decision-making, and they generally raise similar ethical concerns (25–30).

Unlike previous studies, our research is not a consensus paper reporting expert opinions or a Delphi consensus paper to address experts’ ethical concerns. Instead, we asked emergency medicine specialists who are currently or may be using AI for triage purposes during COVID-19 and similar pandemics about the ethical implications of its use. We chose to address the topics of beneficence, responsibility and accountability, privacy and confidentiality rights, and non-discrimination in our study, which focused on a single medical condition and a single purpose. We believe that, as a pilot study in the early stages of the AI era, our research will shed light on future applications. Aside from these, numerous ethical issues concerning various AI usage domains can be discussed (26).

Privacy rights and non-discrimination are prominent ethical debates in literature regarding AI. In medical ethics, the right to privacy includes not only bodily privacy but also that regarding health and personal life. As a result, individuals who are adequately informed have the right to decide how much of their information is shared (31). Within predetermined frameworks, a violation of an individual’s privacy rights may be deemed acceptable only when the benefits to the society or third parties outweigh the breach (32). While anonymizing individuals’ data before incorporating it into the system can alleviate some privacy concerns, the lack of transparency in how artificial intelligence processes data creates uncertainty about the extent to which individuals can exercise control over their own data (33). In our study, approximately 23% of participants saw retaining and subjecting data to repeated analysis within AI systems as an ethical issue, and a similar percentage saw deep learning-based systems as an ethical issue. Furthermore, 60% of participants believed that as long as the data was collected anonymously, it would not violate their privacy.

In our study, more than 70% of participants believed that AI would be as sensitive to non-discrimination as humans, while only 13% saw AI usage as an ethical concern regarding discrimination. While there is widespread agreement in the literature that AI would be more fair than humans, there are also reservations about the extent to which AI can be fair. When data generated through discriminatory thinking is fed into the system, it has the potential to perpetuate discrimination. Furthermore, the opaque decision-making mechanism of AI, which is based on established algorithms, makes identifying instances of discrimination caused by AI difficult (34, 35).

Our study has some limitations. Firstly, because it is a content-specific study conducted exclusively within the emergency medicine profession, the generalizability of our findings to a broader spectrum of healthcare practitioners may be limited. Specifically, the age distribution of the participating physicians is relatively young, which could potentially introduce bias into our results. While including older emergency physicians could offer a different perspective, it is worth noting that although emergency medicine is not a new specialty in Turkey, the recent surge in the number of graduates in the field may have contributed to the predominance of younger specialists in our study. This demographic trend, to some extent, reflects the current composition of emergency medicine professionals in the country and is a constraint beyond our control. Additionally, our survey concentrated on four specific ethical concerns chosen by the investigators, offering an in-depth exploration of these issues. A more extensive survey, for instance one using the Delphi technique, might have provided a broader ethical perspective, although it would also have raised practical challenges in recruiting a sufficiently large participant pool.

Conclusion

According to our findings, emergency medicine specialists in Turkey thought that using AI programs for triage in emergency departments during pandemics could be beneficial, safe, and complication-reducing for patients and healthcare providers. Participants, however, expressed serious ethical concerns about the responsibility and accountability associated with using these systems for the stated purpose. Surprisingly, nearly half of participants believed that ethical concerns about data storage and reuse could be overlooked during extraordinary events. The perspectives of both the engineers and developers who create AI systems and the potential users, who are healthcare professionals, should be gathered more thoroughly. To develop guidelines, these perspectives should be combined with those of bioethics leaders.

Data availability statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Ethics statement

The studies involving humans were approved by Samsun University Clinical Research Ethics Committee (SÜKAEK) (Approval No. 2022-12-12, 23/11/2022). The studies were conducted in accordance with the local legislation and institutional requirements. The participants provided their written informed consent to participate in this study.

Author contributions

EA: Conceptualization, Data curation, Investigation, Methodology, Writing – original draft, Writing – review & editing. AD: Conceptualization, Data curation, Investigation, Methodology, Writing – original draft. YY: Conceptualization, Formal analysis, Methodology, Supervision, Validation, Writing – original draft, Writing – review & editing. YT: Conceptualization, Formal analysis, Investigation, Methodology, Resources, Supervision, Validation, Writing – original draft, Writing – review & editing. MD: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Writing – original draft. DT: Conceptualization, Formal analysis, Project administration, Software, Visualization, Writing – original draft. ST: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Software, Supervision, Validation, Writing – original draft, Writing – review & editing.

Funding

The author(s) declare financial support was received for the research, authorship, and/or publication of this article. Open Access funding provided by the Qatar National Library.

Conflict of interest

YY was employed by Department of Emergency Medicine, Hamad Medical Corporation.

The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

1. Hinson, JS, Martinez, DA, Cabral, S, George, K, Whalen, M, Hansoti, B, et al. Triage performance in emergency medicine: a systematic review. Ann Emerg Med. (2019) 74:140–52. doi: 10.1016/j.annemergmed.2018.09.022


2. Flaatten, H, Van Heerden, V, Jung, C, Beil, M, Leaver, S, Rhodes, A, et al. The good, the bad and the ugly: pandemic priority decisions and triage. J Med Ethics. (2020) 47:e75. doi: 10.1136/medethics-2020-106489


3. Jiang, F, Jiang, Y, Zhi, H, Dong, Y, Li, H, Ma, S, et al. Artificial intelligence in healthcare: past, present and future. Stroke Vasc Neurol. (2017) 2:230–43. doi: 10.1136/svn-2017-000101


4. Patel, VL, Shortliffe, EH, Stefanelli, M, Szolovits, P, Berthold, MR, Bellazzi, R, et al. The coming of age of artificial intelligence in medicine. Artif Intell Med. (2009) 46:5–17. doi: 10.1016/j.artmed.2008.07.017

PubMed Abstract | CrossRef Full Text | Google Scholar

5. Alkahlout, BH, and Ahmad, S. Optimizing patient care in the emergency department: insights from automated alert systems and triage strategies. Glob Emerg Crit Care. (2023) 2:71–6. doi: 10.4274/globecc.galenos.2023.25744

CrossRef Full Text | Google Scholar

6. Neill, DB. Using artificial intelligence to improve hospital inpatient care. IEEE Intell Syst. (2013) 28:92–5. doi: 10.1109/MIS.2013.51

CrossRef Full Text | Google Scholar

7. Eppes, CS, Garcia, PM, and Grobman, WA. Telephone triage of influenza-like illness during pandemic 2009 H1N1 in an obstetric population. Am J Obstet Gynecol. (2012) 207:3–8. doi: 10.1016/j.ajog.2012.02.023

PubMed Abstract | CrossRef Full Text | Google Scholar

8. Payne, R, Darton, TC, and Greig, JM. Systematic telephone triage of possible “swine” influenza leads to potentially serious misdiagnosis of infectious diseases. J Infect. (2009) 59:371–2. doi: 10.1016/j.jinf.2009.09.005

PubMed Abstract | CrossRef Full Text | Google Scholar

9. Lai, L, Wittbold, KA, Dadabhoy, FZ, Sato, R, Landman, AB, Schwamm, LH, et al. Digital triage: novel strategies for population health management in response to the COVID-19 pandemic. Healthc (Amst). (2020) 8:100493. doi: 10.1016/j.hjdsi.2020.100493

CrossRef Full Text | Google Scholar

10. Liang, W, Yao, J, Chen, A, Lv, Q, Zanin, M, Liu, J, et al. Early triage of critically ill COVID-19 patients using deep learning. Nat Commun. (2020) 11:3543. doi: 10.1038/s41467-020-17280-8

PubMed Abstract | CrossRef Full Text | Google Scholar

11. Chou, EH, Wang, CH, Hsieh, YL, Namazi, B, Wolfshohl, J, Bhakta, T, et al. Clinical features of emergency department patients from early COVID-19 pandemic that predict SARS-CoV-2 infection: machine-learning approach. West J Emerg Med. (2021) 22:244–51. doi: 10.5811/westjem.2020.12.49370

PubMed Abstract | CrossRef Full Text | Google Scholar

12. Soltan, AAS, Kouchaki, S, Zhu, T, Kiyasseh, D, Taylor, T, Hussain, ZB, et al. Rapid triage for COVID-19 using routine clinical data for patients attending hospital: development and prospective validation of an artificial intelligence screening test. Lancet Digital Health. (2021) 3:e78–87. doi: 10.1016/S2589-7500(20)30274-0

PubMed Abstract | CrossRef Full Text | Google Scholar

13. Nazir, T, Mushhood Ur Rehman, M, Asghar, MR, and Kalia, JS. Artificial intelligence assisted acute patient journey. Front Artif Intell. (2022) 5:962165. doi: 10.3389/frai.2022.962165

PubMed Abstract | CrossRef Full Text | Google Scholar

14. Arnaud, E, Elbattah, M, Ammirati, C, Dequen, G, and Ghazali, DA. Use of artificial intelligence to manage patient flow in emergency department during the COVID-19 pandemic: a prospective, single-center study. Int J Environ Res Public Health. (2022) 19:9667. doi: 10.3390/ijerph19159667

PubMed Abstract | CrossRef Full Text | Google Scholar

15. Becker, J, Decker, JA, Römmele, C, Kahn, M, Messmann, H, Wehler, M, et al. Artificial intelligence-based detection of pneumonia in chest radiographs. Diagnostics (Basel). (2022) 12:1465. doi: 10.3390/diagnostics12061465

PubMed Abstract | CrossRef Full Text | Google Scholar

16. Song, W, Zhang, L, Liu, L, Sainlaire, M, Karvar, M, Kang, MJ, et al. Predicting hospitalization of COVID-19 positive patients using clinician-guided machine learning methods. J Am Med Inform Assoc. (2022) 29:1661–7. doi: 10.1093/jamia/ocac083

PubMed Abstract | CrossRef Full Text | Google Scholar

17. Soltan, AAS, Yang, J, Pattanshetty, R, Novak, A, Yang, Y, Rohanian, O, et al. Real-world evaluation of rapid and laboratory-free COVID-19 triage for emergency care: external validation and pilot deployment of artificial intelligence driven screening. Lancet Digit Health. (2022) 4:e266–78. doi: 10.1016/S2589-7500(21)00272-7

PubMed Abstract | CrossRef Full Text | Google Scholar

18. Kapoor, A, Kapoor, A, and Mahajan, G. Use of artificial intelligence to triage patients with flu-like symptoms using imaging in non-COVID-19 hospitals during COVID-19 pandemic: an ongoing 8-month experience. Indian J Radiol Imaging. (2021) 31:901–9. doi: 10.1055/s-0041-1741103

PubMed Abstract | CrossRef Full Text | Google Scholar

19. Hamet, P, and Tremblay, J. Artificial intelligence in medicine. Metabolism. (2017) 69:S36–40. doi: 10.1016/j.metabol.2017.01.011

CrossRef Full Text | Google Scholar

20. Alsuliman, T, Humaidan, D, and Sliman, L. Machine learning and artificial intelligence in the service of medicine: necessity or potentiality? Curr Res Transl Med. (2020) 68:245–51. doi: 10.1016/j.retram.2020.01.002

CrossRef Full Text | Google Scholar

21. Naik, N, Hameed, BMZ, Shetty, DK, Swain, D, Shah, M, Paul, R, et al. Legal and ethical consideration in artificial intelligence in healthcare: who takes responsibility? Front Surg. (2022) 9:862322. doi: 10.3389/fsurg.2022.862322

PubMed Abstract | CrossRef Full Text | Google Scholar

22. Cobianchi, L, Verde, JM, Loftus, TJ, Piccolo, D, Dal Mas, F, Mascagni, P, et al. Artificial intelligence and surgery: ethical dilemmas and open issues. J Am Coll Surg. (2022) 235:268–75. doi: 10.1097/XCS.0000000000000242

CrossRef Full Text | Google Scholar

23. Koçer Tulgar, Y, Tulgar, S, Güven Köse, S, Köse, HC, Çevik Nasırlıer, G, Doğan, M, et al. Anesthesiologists’ perspective on the use of artificial intelligence in ultrasound-guided regional Anaesthesia in terms of medical ethics and medical education: a survey study. Eurasian J Med. (2023). doi: 10.5152/eurasianjmed.2023.22254

CrossRef Full Text | Google Scholar

24. Bostrom, N, and Yudkowsky, E. (2023). The ethics of artificial intelligence. Available at: http://faculty.smcm.edu/acjamieson/s13/artificialintelligence.pdf

Google Scholar

25. Shen, FX, Wolf, SM, Gonzalez, RG, and Garwood, M. Ethical issues posed by field research using highly portable and cloud-enabled neuroimaging. Neuron. (2020) 105:771–5. doi: 10.1016/j.neuron.2020.01.041

PubMed Abstract | CrossRef Full Text | Google Scholar

26. Couture, V, Roy, MC, Dez, E, Laperle, S, and Bélisle-Pipon, JC. Ethical implications of artificial intelligence in population health and the Public’s role in its governance: perspectives from a citizen and expert panel. J Med Internet Res. (2023) 25:e44357. doi: 10.2196/44357

PubMed Abstract | CrossRef Full Text | Google Scholar

27. Lorenzini, G, Arbelaez Ossa, L, Shaw, DM, and Elger, BS. Artificial intelligence and the doctor-patient relationship expanding the paradigm of shared decision making. Bioethics. (2023) 37:424–9. doi: 10.1111/bioe.13158

CrossRef Full Text | Google Scholar

28. Rolfes, V, Bittner, U, Gerhards, H, Krüssel, JS, Fehm, T, Ranisch, R, et al. Artificial intelligence in reproductive medicine – an ethical perspective. Geburtshilfe Frauenheilkd. (2023) 83:106–15. doi: 10.1055/a-1866-2792

PubMed Abstract | CrossRef Full Text | Google Scholar

29. Sharma, M, Savage, C, Nair, M, Larsson, I, Svedberg, P, and Nygren, JM. Artificial intelligence applications in health care practice: scoping review. J Med Internet Res. (2022) 24:e40238. doi: 10.2196/40238

PubMed Abstract | CrossRef Full Text | Google Scholar

30. Anshari, M, Hamdan, M, Ahmad, N, Ali, E, and Haidi, H. COVID-19, artificial intelligence, ethical challenges and policy implications. AI Soc. (2022) 38:707–20. doi: 10.1007/s00146-022-01471-6

CrossRef Full Text | Google Scholar

31. Humayun, A, Fatima, N, Naqqash, S, Hussain, S, Rasheed, A, Imtiaz, H, et al. Patients’ perception and actual practice of informed consent, privacy and confidentiality in general medical outpatient departments of two tertiary care hospitals of Lahore. BMC Med Ethics. (2008) 9:14. doi: 10.1186/1472-6939-9-14

PubMed Abstract | CrossRef Full Text | Google Scholar

32. Childress, JF, and Beauchamp, TL. Common morality principles in biomedical ethics: responses to critics. Camb Q Healthc Ethics. (2022) 31:164–76. doi: 10.1017/S0963180121000566

PubMed Abstract | CrossRef Full Text | Google Scholar

33. Morley, J, Machado, CCV, Burr, C, Cowls, J, Joshi, I, Taddeo, M, et al. The ethics of AI in health care: a mapping review. Soc Sci Med. (2020) 260:113172. doi: 10.1016/j.socscimed.2020.113172

PubMed Abstract | CrossRef Full Text | Google Scholar

34. Zuiderveen Borgesius, FJ. Strengthening legal protection against discrimination by algorithms and artificial intelligence. Int J Human Rights. (2020) 24:1572–93. doi: 10.1080/13642987.2020.1743976

CrossRef Full Text | Google Scholar

35. Cossette-Lefebvre, H, and Maclure, J. AI’s fairness problem: understanding wrongful discrimination in the context of automated decision-making. AI Ethics. (2022). doi: 10.1007/s43681-022-00233-w

CrossRef Full Text | Google Scholar

Keywords: specialist, AI, triage, emergency, pandemic, ethics, survey, national

Citation: Ahun E, Demir A, Yiğit Y, Tulgar YK, Doğan M, Thomas DT and Tulgar S (2023) Perceptions and concerns of emergency medicine practitioners about artificial intelligence in emergency triage management during the pandemic: a national survey-based study. Front. Public Health. 11:1285390. doi: 10.3389/fpubh.2023.1285390

Received: 29 August 2023; Accepted: 05 October 2023;
Published: 26 October 2023.

Edited by:

Dmytro Chumachenko, National Aerospace University – Kharkiv Aviation Institute, Ukraine

Reviewed by:

Roman Tandlich, Rhodes University, South Africa
Sanjeeb Sudarshan Bhandari, UPMC Western Maryland Medical Center, United States

Copyright © 2023 Ahun, Demir, Yiğit, Tulgar, Doğan, Thomas and Tulgar. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Yavuz Yiğit, yyigit@hamad.qa

ORCID: Yavuz Yiğit, https://orcid.org/0000-0002-7226-983X
