REVIEW article

Front. Polit. Sci., 17 December 2024
Sec. Politics of Technology
This article is part of the Research Topic "Post-pandemic democratic innovation: transparency, citizen behavior and decision-making."

Implementation of smart devices in health crisis scenarios: risks and opportunities

  • Social Sciences Department, Universidad Carlos III de Madrid, Getafe, Madrid, Spain

The scarcity of healthcare resources, particularly during crises, is a reality, and AI can help alleviate this shortage. Tasks such as triage, diagnosis, or assessing a patient’s life-threatening risk are among those we can delegate to algorithms. However, the limited number of real clinical deployments and the lack of research on implementation mean that we only partially understand the risks involved. To contribute to the knowledge of both the opportunities and risks that AI presents as a management solution, we analyze the case of autonomous emergency vehicles. After conducting a detailed literature review, we adopt an innovative perspective: that of the patient. We believe that the relationship established between the patient and this technology, particularly the emotional connection, can determine the success of implementing such autonomous driving devices. We therefore propose a simple solution: endowing this technology with anthropomorphic features.

Introduction

The COVID-19 pandemic has highlighted the transformative potential of artificial intelligence (AI) in crisis situations, providing rapid and effective solutions to mitigate public health impacts. Since the onset of the outbreak, AI applications have been developed and deployed to improve epidemiological surveillance, accelerate research into treatments (Etzioni and Decario, 2020) and vaccines and fight related misconceptions (Sohail et al., 2023), and optimize the distribution of medical resources (Laudanski et al., 2020). Examples such as AI systems used to predict pandemic emergence (Freifeld et al., 2008; Rezaei et al., 2020), infection patterns, or identify the genetic structures of the virus have demonstrated the ability of these technologies to enhance global emergency responses. Beyond pandemic-related applications, the use of AI in medicine is already a reality (Jiang et al., 2017; Kirubarajan et al., 2020). Its applications are numerous (Tang et al., 2021), and there is growing interest in exploring its potential uses. Primarily employed in tasks such as triage and diagnosis (Grant and McParland, 2019; Tang et al., 2021; Mueller et al., 2022) and supporting decision-making (Piliuk and Tomforde, 2023), AI is not without risks. Most challenges concern the limits of its application (Grant et al., 2020; Moulik et al., 2020), the quality of data used to train learning systems (Mueller et al., 2022), potential design biases (Hong et al., 2000; Kim et al., 2024), or the establishment of a regulatory framework that provides sufficient security for both operators and users (Ioannou and Tussyadiah, 2021; Fenwick et al., 2017).

The inherent biases in user perception of this technology can undermine its effectiveness and even endanger public health. Factors such as age, gender, and previous experience with technology influence trust and acceptance of AI. Studies have shown, for example, that women and older adults tend to be more reluctant toward autonomous vehicles, which could translate into lower use of services like autonomous ambulances (AAs) (Rice et al., 2019). These biases, often based on stereotypes and unfounded fears, can lead to distrust and rejection of AI, limiting its ability to contribute to effective pandemic management.

Additionally, there is the issue of biases within AI programming itself. If the algorithms controlling decision-making are trained on biased data, they can perpetuate and even amplify existing inequalities in access to healthcare. For example, an algorithm assigning AAs might prioritize patients from urban areas with better connectivity, disadvantaging those in rural or marginalized areas (Lima et al., 2019). It is crucial to recognize that user perception and biases within AI are interconnected factors that can hinder the successful implementation of AI-based pandemic management strategies. Lack of trust in the technology and the potential for algorithmic discrimination can generate public resistance (Smith, 2018) and undermine the effectiveness of these strategies.

Moreover, the massive collection of data and the implementation of surveillance technologies, while necessary to control the virus’s spread, have raised ethical concerns about balancing public safety with personal freedom. The interaction between users and AI-based applications, such as virtual assistants or Autonomous Vehicles (AVs), can exacerbate issues of trust and dependence. These systems, if not properly regulated, could lead to reckless behavior or misunderstandings about their actual capabilities, increasing the risk of accidents or misuse in critical situations. Most studies focus on the design of AI applications that offer medical assistance or support the management and organization of medical services. These investigations are primarily formulated from the perspective of healthcare professionals or managers. However, few studies have addressed the application of AI from the patient’s point of view (Jiang and Cheng, 2021; Yin et al., 2021). Beyond legal issues, especially regarding data protection and the liability that may arise from the misuse of this technology (Grant et al., 2020), there is little evidence about the challenges and difficulties that may arise in the interaction between this technology and patients. This is likely due to the scarcity of real-world implementation experiences and the traditional approach, which tends to focus more on the data provided by patients as aggregated cases—such as predicting surges in emergency service demand (Kang et al., 2020; Lin et al., 2020)—rather than on the human-computer interaction challenges.

This work focuses on the use of Autonomous Emergency Vehicles (AEVs) in the management of health crises or pandemics. The effective integration of these vehicles in the management of future pandemics crucially depends on mitigating user biases towards them, which represents a fundamental challenge for political practice in the digital age. Public perception and acceptance of these disruptive technologies are shaped by a range of biases, influenced by demographic factors, previous experiences, and trust in automated systems (Kuziemski and Misuraca, 2020; El-Haddadeh et al., 2021; Pickering, 2021). If these biases are not addressed, the implementation of AI-driven pandemic management strategies, such as the use of AAs, will be hindered by public resistance, distrust, and the potential deepening of inequalities in access to essential services (Kuziemski and Misuraca, 2020). It is therefore essential to analyze how these biases impact policy formulation, the legitimacy of AI-based decision-making, and the ability of governments to effectively respond to health crises while ensuring equity and protecting the rights of all citizens (Gans-Combe, 2020).

The goal of this article is to analyze, from the patients’ perspective, the opportunities and risks associated with applying AI in medical services. To offer a less abstract view, we examine the behavior of potential users in an increasingly plausible scenario: the use of AEVs. Driverless ambulances must overcome several human-computer interaction (HCI) challenges, such as establishing trust and assigning appropriate roles, as well as addressing ethical conduct, which is inherent to any task related to medical activities.

The use of AI in pandemic management

In the healthcare sector, resources are always scarce. This scarcity is not limited to medical staff or the number of available hospital beds; the most limited resource is time. When we focus our efforts on providing medical care as quickly as possible or aim to reach an accurate diagnosis as early as possible, we are ultimately talking about time. Efficient management of this scarce resource can be greatly assisted by Artificial Intelligence (AI). An opportunity presented by the Fourth Industrial Revolution, also known as Industry 4.0, is the chance to rethink the capacities of public organizations and to provide new, more efficient solutions grounded in data governance (Salvador and Ramió, 2020; Ramió, 2019).

From an operational standpoint, AI implementation in healthcare can be divided into two major areas: the pre-hospital phase and the hospital phase. The latter, the hospital phase, undoubtedly concentrates the majority of AI applications, particularly in tasks related to triage and diagnosis (Grant and McParland, 2019; Grossmann et al., 2020; Tang et al., 2021; Piliuk and Tomforde, 2023). Firstly, AI technologies are mainly used in emergency services to improve triage, making the classification of patients according to their severity more precise (Kang et al., 2020). An increase in triage precision could positively impact patient survival rates in emergency medical services (Yu et al., 2020). Secondly, AI offers multiple applications in the field of diagnosis, such as radiodiagnostics (Jamaludin et al., 2017; Herweh et al., 2016) or the monitoring of echocardiograms for disease detection (Al-Dury et al., 2020; Grant and McParland, 2019). These tools, with their autonomy and learning capabilities, could reduce the dependence on specialists in certain medical services. This is particularly relevant for small hospitals with limited availability of specialized medical personnel (Tang et al., 2021).
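To make the triage use case concrete, the following minimal sketch shows how a severity classifier of this kind might be trained. The vital-sign features, threshold-derived severity labels, and data are illustrative assumptions for this sketch, not any published triage protocol.

```python
# Illustrative sketch: a severity-triage classifier trained on vital signs.
# Features, labels, and data are hypothetical; real triage models are
# trained on audited clinical records and validated prospectively.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for patient vitals: heart rate, systolic BP, SpO2, age.
X = np.column_stack([
    rng.normal(90, 20, 1000),   # heart rate (bpm)
    rng.normal(120, 25, 1000),  # systolic blood pressure (mmHg)
    rng.normal(95, 4, 1000),    # oxygen saturation (%)
    rng.uniform(18, 90, 1000),  # age (years)
])
# Hypothetical severity labels (0 = low, 1 = urgent, 2 = critical), derived
# from simple vital-sign thresholds purely so the example carries signal.
y = (X[:, 2] < 92).astype(int) + (X[:, 0] > 110).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```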

The pre-hospital phase

Despite the continuous and numerous advances in AI applications within the clinical hospital setting (Ramlakhan et al., 2022), our research focuses on the pre-hospital phase. According to authors like Tang et al. (2021), the implementation of AI in this scenario serves two main purposes: (i) to accurately identify medical conditions for the earliest and most effective intervention; and (ii) to predict critical conditions that require the preparation and mobilization of healthcare resources.

Although healthcare planning is inherently challenging due to its multi-faceted and non-linear dynamics (Lin et al., 2020), the estimation of emergency service demand is one of the most researched areas (Fischer et al., 2020; Grant et al., 2020; Kang et al., 2020; Piliuk and Tomforde, 2023). For example, AI developments have been used to analyze Google Trends and social media behavior to anticipate peaks in demand (Burnap et al., 2017; Ho and MacDorman, 2017). Other studies, such as those by Pineda et al. (2015), attempt to predict flu cases by reviewing medical reports, while Papini et al. (2018) focus on forecasting post-traumatic stress cases in hospitalized patients. Additionally, tools like the one designed by Yousefi et al. (2019) aim to predict emergency service demand several days in advance.
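As an illustration of this forecasting task, the sketch below predicts daily emergency demand a few days ahead from its own recent history. The data are synthetic, and the cited studies additionally draw on external signals such as search trends, social media activity, or medical reports.

```python
# Illustrative sketch: forecasting emergency-department demand a few days
# ahead from lagged values of the series itself. Data are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
days = np.arange(730)
# Synthetic daily arrivals: weekly cycle + slow trend + noise.
arrivals = (200 + 0.05 * days + 25 * np.sin(2 * np.pi * days / 7)
            + rng.normal(0, 10, days.size))

LAGS, HORIZON = 14, 3  # use the last 14 days to predict 3 days ahead
X = np.array([arrivals[t - LAGS:t] for t in range(LAGS, days.size - HORIZON)])
y = arrivals[LAGS + HORIZON:]

split = int(0.8 * len(X))  # train on the past, test on the most recent days
model = GradientBoostingRegressor(random_state=0)
model.fit(X[:split], y[:split])
mae = np.mean(np.abs(model.predict(X[split:]) - y[split:]))
print(f"mean absolute error, {HORIZON}-day-ahead forecast: {mae:.1f} arrivals")
```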

Autonomous emergency vehicles in pandemic management

The implementation of AEVs is becoming a reality in both the tech and automotive industries. This shift towards a new model of mobility is advancing as several key technical challenges are being resolved. AEVs offer numerous social benefits, such as enhanced safety, energy savings, cost reductions, and decreased dependence on healthcare personnel. In the context of healthcare, particularly pre-hospital management, the most critical feature is their autonomous driving capability, which reduces or eliminates the need for human drivers and presents an opportunity to improve ambulance efficiency in emergency situations (Ahmed et al., 2023; Khalid et al., 2021). Additionally, AEVs can address negative externalities such as fuel costs through the more efficient use of ambulances (Bagloee et al., 2016).

Despite the many advantages, especially during pandemics like COVID-19 (Khalid et al., 2021), the implementation of AEVs is not without risks. Challenges include algorithm development for route estimation, legal frameworks regulating their use, and, critically, the relationship that AEVs establish with patients. This relationship will ultimately shape public confidence and perceptions of safety in using these services (Kyriakidis et al., 2015; Winter et al., 2018a).

Many resource planning models, and their adaptation to forecasted demand, also address the transportation of potential patients to hospitals. This includes not only redirecting them to facilities with lower patient concentrations to reduce waiting times (Kang et al., 2020), but also determining whether a patient should be transported by ambulance or asked to reach the hospital by their own means (Mijwil et al., 2023; Yoshida et al., 2023). In this context, the use of AEVs (Karkar, 2019; Tahir and Javaid, 2019) emerges as an optimal solution.

AAs, an innovative application of AEVs, could revolutionize the response to medical emergencies, especially during a pandemic. Their ability to optimize routes using real-time traffic data would allow for faster emergency responses, reducing patient wait times and speeding up their transfer to healthcare facilities (Alam et al., 2021). By minimizing human interaction, AAs would also reduce the risk of contagion for medical personnel, protecting them from infectious diseases. Additionally, the intelligent management of routes and the availability of AAs could improve resource allocation, ensuring equitable and efficient distribution of emergency services, particularly in remote or resource-limited areas (Fontes et al., 2023).

AEV-enhanced information systems offer a powerful tool for disseminating vital information during a crisis. These systems can provide real-time updates on the pandemic, including public health guidelines, infection rates, prevention measures, and personalized recommendations for individuals (Kritikos et al., 2022; Zhu et al., 2022). By collecting and analyzing data on virus spread, AEVs can help health authorities better understand the pandemic’s evolution, predict potential outbreaks, and make informed decisions to optimize the response. Furthermore, AEVs can facilitate clear, accurate, and timely communication between authorities, healthcare professionals, and the public, improving coordination and information management during the crisis (Fontes et al., 2023).

Healthcare support is significantly enhanced by the incorporation of AEVs. Automated patient triage, based on symptom assessment through AI algorithms, can speed up care processes and resource allocation, ensuring patients are directed to the appropriate level of medical care (Jiang et al., 2017; Kritikos et al., 2022; Laudanski et al., 2020). Remote symptom monitoring allows for continuous health tracking at a distance, reducing the need for in-person visits and minimizing exposure to the virus (Lalmuanawma et al., 2020). AEVs can also provide psychological counseling and support to patients in quarantine or isolation, relieving stress and anxiety related to the pandemic. These applications ease the burden on healthcare systems, freeing up medical professionals to focus on the most critical cases and enabling more efficient management of scarce medical resources, such as hospital beds, personal protective equipment, and medications (Ortiz-Barrios et al., 2023).

While AAs leverage the broader development of AEVs, their use in medical emergencies has specific characteristics that set them apart from other types of AEVs. This distinction mainly arises from their operational logic. These vehicles respond to medical emergencies where a person’s life may be at risk. Therefore, one of their primary objectives is to reach their destination as quickly as possible (Murray and Kue, 2017; Peelam et al., 2024; So et al., 2020). In addition, they must provide medical care and be prepared to transport the patient to a healthcare facility. These two basic objectives are often linked to the development of technical solutions that optimize travel times. However, the implementation of AEVs could go beyond transportation, offering a comprehensive solution for healthcare resource management.

The incorporation of AAs in pandemic management has profound political implications, particularly concerning governance, surveillance, and power relations. The possibility of AAs functioning as extensions of state authority, gathering large-scale information, and facilitating automated decision-making introduces new challenges and opportunities in the political sphere.

Delegating government functions to AEVs, such as disseminating information or enforcing health guidelines, can blur the lines between technology and state authority. This may affect public perception of governmental legitimacy and accountability. If citizens view AEVs as mere instruments of state control, trust in government institutions and willingness to cooperate with public health measures could be undermined (Kuziemski and Misuraca, 2020). The increasing mediation of interactions between citizens and the state through AEVs may transform the nature of this relationship. While automating bureaucratic processes may increase efficiency, it could also depersonalize the relationship between citizens and the state. The collection and analysis of personal data by AEVs, if not managed transparently and with proper safeguards, could raise concerns about privacy and individual autonomy, impacting trust in institutions and social cohesion (Smith, 2018).

The ability of AEVs to process complex information and execute sophisticated algorithms may lead to greater delegation of decision-making (Kuziemski and Misuraca, 2020; Fontes et al., 2023). Although this offers benefits in terms of speed and efficiency, it also raises significant ethical and political dilemmas. The lack of transparency in automated decision-making processes, the potential for algorithmic biases (Lima et al., 2019; Ramdani et al., 2021), and the difficulty in assigning responsibility in case of errors or harm are challenges that must be addressed to ensure that AEVs are designed and used responsibly, transparently, and fairly, with adequate human oversight mechanisms to mitigate risks and protect individual rights.

The capacity of AEVs to collect and analyze personal data, track movements, and monitor behaviors raises serious concerns about their potential use for surveillance and social control (Zhu et al., 2019; Groh et al., 2021). The use of AEVs for enforcing quarantine measures, contact tracing, or identifying individuals who violate health guidelines, without proper safeguards for privacy and civil rights, could have significant implications for individual freedoms (Tang et al., 2021). A robust legal and ethical framework is needed to regulate the collection, storage, use, and retention of data by AEVs, ensuring the protection of fundamental rights and preventing abuses (Kong, 2024). The proliferation of AEVs in pandemic management demands ongoing, deep debate on the protection of privacy, individual autonomy, and digital rights (Pickering, 2021; Fontes et al., 2023). The collection and use of sensitive data, such as medical information or movement patterns, must be subject to rigorous scrutiny and strong control mechanisms to avoid discrimination, stigmatization, and the erosion of public trust. Citizens must have control over their data and be informed about its use, with effective recourse mechanisms in place if their rights are violated.

AEVs can also serve as effective tools for population management during pandemics by identifying behavioral patterns, predicting outbreaks, and optimizing resource distribution (Alam et al., 2021; Zhu et al., 2022; Lima et al., 2019). However, it is essential to avoid discrimination or stigmatization of certain groups. Equity in access to technology, the mitigation of algorithmic biases, and the participation of affected communities in the design and implementation of these tools are crucial to ensuring that the benefits are distributed equitably and the risks of exclusion or marginalization are minimized.

A final aspect to consider is that the introduction of AEVs in pandemic management may reshape power relations between governments, tech companies, healthcare professionals, and civil society. The tech companies that develop and control AEVs gain an influential role in managing information and decision-making, which could affect the autonomy of government institutions and the balance of power among different actors. Government reliance on the expertise and infrastructure of tech companies to implement AEV-based solutions may grant these companies considerable power in shaping pandemic management policies and practices. The need for access to data, algorithms, and technological platforms could create a dependency that may impact governments’ ability to act independently and safeguard public interests.

In the following sections, after reviewing recent studies, we outline some of the main challenges in implementing AEVs. We do this by identifying the four key stakeholders: (i) governments and regulatory agents; (ii) manufacturers and developers; (iii) healthcare personnel and managers; and (iv) patients. Identifying these four groups allows us to allocate the specific challenges each faces. Although many studies are based on simulations due to the lack of real-world implementation experiences, they provide sufficient data to project a medium-term outlook.

Governments and regulatory agents: legislation and regulation

The implementation of AEVs is not merely about launching a new transportation system. Its impact, much like that of other Autonomous Vehicles (AVs), goes beyond its primary function and requires additional modifications to the regulatory framework (Almaskati et al., 2024; Grant et al., 2020; So et al., 2020). This task primarily falls to lawmakers and can be summarized into four main areas: (i) infrastructure use; (ii) liability in case of error; (iii) management of patient data; (iv) biases that their use might generate.

First, AEVs require a new technical and communication environment, as well as a reimagined use of existing communication infrastructures. This inevitably leads to legislative reforms to allow, for example, the operation of AVs in countries where such use is currently not permitted (Khalid et al., 2021). However, legislative changes do not only concern the operation of driverless vehicles on the road. The use of AEVs represents a paradigm shift (Karkar, 2019). Until now, emergency vehicles have had right-of-way on public roads through the use of lights and sirens. The implementation of AAs will require the deployment of cooperative traffic management systems that prioritize them over other vehicles (Sumia and Ranga, 2018; Ahmed et al., 2023; Murray and Kue, 2017; Peelam et al., 2024). For example, new regulations will need to establish how the priority use of specific lanes is managed (Dresner and Stone, 2006), or the preferential use of communication systems (So et al., 2020).

The second major issue that legislators must address is liability: Who is responsible if a problem occurs with an AA? The responsibility in the case of AEVs is twofold: they must adhere to the same safety and liability standards as other AVs in the event of an accident, and they must also consider the liability that arises from any medical procedures the intelligent ambulances might carry out.

Regarding safety liability, this topic has been explored from various perspectives. One of the most compelling approaches is found in the work of moral philosophers discussed by Wu (2020), highlighting the unique challenges posed by AEVs. Until now, responsibility for malpractice has generally rested on the healthcare professional performing an incorrect procedure. However, a safety failure in an AEV might stem from a design flaw by a developer. Therefore, it is crucial to adapt and update the regulatory framework to clearly define a new system of liabilities that anticipates these scenarios (Elayan et al., 2021; Grant et al., 2020; Woo et al., 2021).

Additionally, determining liability for medical actions is crucial. AAs could implement health monitoring systems that aid in triage (Cooney et al., 2021; Elayan et al., 2021) and initiate treatments that the patient will later receive in the hospital (Akca et al., 2020). This significant innovation must ensure not only successful patient recovery but also address what happens in the event of a diagnostic error. These are undoubtedly complex issues that are difficult to resolve or even anticipate due to the rapid pace of technological innovation.

One of the main challenges in defining legal responsibility for the use of autonomous vehicles in medical practice is the limited number of real clinical cases (Mueller et al., 2022). Given this limitation, the most recommended approach could involve establishing a regulatory sandbox that allows for multiple simulations to assist in adapting the regulatory framework accordingly (Fenwick et al., 2017). In this context, the European Union’s Regulation 2024/1689 (2024), known as the Artificial Intelligence Act, provides a restrictive regulatory framework for designing and implementing AI in clinical practice. Article 6.2 of the Act categorizes the use of AI in medical applications as High-Risk. This classification stems from its potential use for evaluating and prioritizing emergency calls made by individuals or for dispatching and prioritizing first-response services in emergencies. This classification equates such usage to other high-risk applications, such as those involving critical infrastructures, product safety components, or educational training. The high-risk designation mandates that these technological developments include appropriate systems for risk assessment and mitigation, minimize the production of discriminatory outcomes, and maintain a record of activities to ensure proper traceability of all procedures undertaken.

Third, AAs must ensure a system that guarantees the privacy and security of the medical data they use (Ioannou and Tussyadiah, 2021; Woo et al., 2021). It is crucial to recognize that patients own sensitive information that requires the highest level of protection: their medical records. These records could be accessed by AAs when collecting personal data derived from triage tasks and health monitoring (Elayan et al., 2021). This information can be utilized and subsequently shared with the hospital to which the ambulance is heading, enabling medical staff to be fully prepared to provide the most appropriate treatment (Akca et al., 2020).

How clinical information is collected, with whom it is shared, and how it is stored are some of the legal concerns surrounding data management in AAs (Woo et al., 2021). These concerns are even more pronounced in regions like the EU, where the stringent General Data Protection Regulation (GDPR) might pose challenges to the implementation of such technologies (Khalid et al., 2021). This is why some authors propose establishing some form of informed consent (Mueller et al., 2022). While this legal requirement does not entirely eliminate another common issue in data management—vulnerability to cyberattacks—it is a step in the right direction (Almaskati et al., 2024). As a precaution, authors like Tahir and Javaid (2019) suggest the use of blockchain technology to sign a smart contract before any treatment is administered in an AA. This measure could potentially enhance the security of patient data records.
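The following sketch illustrates only the core idea behind such proposals: a tamper-evident, hash-chained treatment log. It is not Tahir and Javaid's protocol, nor a real blockchain or smart contract; it simply demonstrates why chaining records makes retroactive edits detectable.

```python
# Illustrative sketch of a tamper-evident treatment log: each record commits
# to the previous one via a hash chain. Not a real blockchain; it only shows
# why chaining makes retroactive edits to earlier records detectable.
import hashlib
import json
import time

def append_record(chain: list[dict], payload: dict) -> None:
    """Append a record whose hash covers its payload and the previous hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {"time": time.time(), "payload": payload, "prev": prev_hash}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    chain.append(record)

def verify(chain: list[dict]) -> bool:
    """Recompute every hash and check each link to the preceding record."""
    for i, record in enumerate(chain):
        body = {k: v for k, v in record.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if record["hash"] != expected:
            return False
        if i > 0 and record["prev"] != chain[i - 1]["hash"]:
            return False
    return True

log: list[dict] = []
append_record(log, {"event": "consent_signed", "patient": "anonymized-id"})
append_record(log, {"event": "treatment_started", "drug": "example"})
print(verify(log))  # True; altering any earlier record breaks verification
```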

Finally, policymakers must ensure that this technology provides equal treatment to all patients. Although we will specifically address the main biases affecting patients in their interactions with this technology in the last section, we must also acknowledge those biases that might be inherently embedded in the design of these devices.

It is not only a matter of ensuring that the technology respects the principles of equality and non-discrimination in patient care. We know that developers can infuse their public values into the realm of research and, as a result, integrate these values into the applications they design (Scheufele et al., 2007). For this reason, algorithms can harbor biases or even develop new biases during their learning processes, affecting their interaction with patients (Grant and McParland, 2019; Kang et al., 2020). The ethical dimension of technology must be thoroughly addressed within the regulatory framework that governs these technological developments to prevent the spread of discriminatory scenarios (Woo et al., 2021; Wu, 2020).

Manufacturers and developers: technical solutions

Most research focuses on the technical solutions necessary for implementing AVs in emergency vehicles. The primary concern for researchers is ensuring that AEVs arrive on time and do so safely (Peelam et al., 2024). To achieve this, it is crucial to equip the ambulance with sufficient capability and autonomy to make decisions (Ahmed et al., 2023; Tahir and Javaid, 2019; Khalid et al., 2021). This level of agency, as we will discuss later, could pose challenges in the interaction between AEVs and patients. Even though users understand the artificial nature of these devices, this interaction can have significant psychological effects (Guzman, 2019). Known as the “uncanny valley” effect (Chang et al., 2018; Ho and MacDorman, 2017; Mori, 1970; Mori, 2020; Seyama and Nagayama, 2007), this phenomenon refers to the discomfort that users may feel when technology appears almost, but not entirely, human-like (Chattaraman et al., 2019; Pitardi and Marriott, 2021).

The decision-making capabilities of AEVs primarily focus on choosing the quickest route to the patient (Ahmed et al., 2023; Khalid et al., 2021; Peelam et al., 2024). Efforts are being made to reduce response times using intelligent systems that deploy the nearest vehicle to the patient (Akca et al., 2020) while also ensuring that these trips are conducted safely. Although the accident rate for AEVs is relatively low (Khalid et al., 2021), the specific conditions under which these vehicles operate—such as high speeds—require that safety remains a primary concern for researchers. Therefore, they are developing algorithms and cooperative traffic systems to guarantee timely and secure arrivals. This goal is supported by infrastructure enhancements (Lee et al., 2023; Cooney et al., 2021; So et al., 2020) and cooperative traffic models (Sumia and Ranga, 2018; Alzubaidi et al., 2023; Buckman et al., 2021; Dresner and Stone, 2006). For example, advanced communication systems allow AAs to transmit their route information to other vehicles on the road, prompting them to make way for the ambulance (Karkar, 2019).
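A minimal sketch of the dispatch logic described above: choosing the ambulance with the shortest estimated travel time to the patient via Dijkstra's algorithm over a toy road graph. The network, travel times, and depot names are invented for illustration; real systems work on live traffic data.

```python
# Illustrative sketch: dispatch the ambulance with the shortest estimated
# travel time to the patient, using Dijkstra over a toy road graph.
import heapq

# Road network: node -> [(neighbor, travel_time_minutes), ...] (invented).
GRAPH = {
    "depot_a": [("junction", 4)],
    "depot_b": [("junction", 7), ("patient", 12)],
    "junction": [("depot_a", 4), ("depot_b", 7), ("patient", 5)],
    "patient": [],
}

def travel_time(source: str, target: str) -> float:
    """Shortest travel time from source to target (Dijkstra)."""
    dist = {source: 0.0}
    queue = [(0.0, source)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == target:
            return d
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for neighbor, w in GRAPH[node]:
            nd = d + w
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(queue, (nd, neighbor))
    return float("inf")

ambulances = ["depot_a", "depot_b"]
best = min(ambulances, key=lambda a: travel_time(a, "patient"))
print(best, travel_time(best, "patient"))  # depot_a, 9.0 min via junction
```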

The reliance on autonomous vehicles for these tasks must also address environmental conditions that could impact their safety (Capodieci et al., 2021; Cui et al., 2019; Lee et al., 2023). For instance, pedestrian detection (Rajendar et al., 2022), cyclist identification (Ahmed et al., 2019), recognition of other vehicles (Liu et al., 2022), and safeguarding against hacking of autonomous driving systems to prevent criminal or terrorist activities (De la Torre et al., 2020) are critical considerations. Many of these safety challenges are likely to be resolved with the implementation of new technological advancements. Examples include the transition from object detection using monocular cameras to collision prevention systems based on stereoscopic vision (Rajendar et al., 2022), the application of deep learning approaches such as Fast Region-based Convolutional Neural Networks (Fast R-CNN) and Faster R-CNN for pedestrian detection (Ahmed et al., 2019), and the deployment of Connected Autonomous Vehicle (CAV) systems to prevent vehicle collisions (Liu et al., 2022).
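As a brief illustration of the detection approach just mentioned, the sketch below runs a pretrained Faster R-CNN from torchvision and keeps only "person" detections. The input file name is hypothetical; a deployed system would operate on calibrated camera streams with tracking and strict latency budgets.

```python
# Illustrative sketch: pedestrian detection with a pretrained Faster R-CNN
# from torchvision (>= 0.13). Offline, single-frame; deployed systems add
# tracking, calibration, and hard real-time constraints.
import torch
from torchvision.io import read_image
from torchvision.models.detection import (
    FasterRCNN_ResNet50_FPN_Weights,
    fasterrcnn_resnet50_fpn,
)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
preprocess = weights.transforms()

image = read_image("street_scene.jpg")  # hypothetical input frame
with torch.no_grad():
    prediction = model([preprocess(image)])[0]

PERSON = 1  # "person" class index in the COCO label set
for box, label, score in zip(
    prediction["boxes"], prediction["labels"], prediction["scores"]
):
    if label == PERSON and score > 0.8:
        print(f"pedestrian at {box.tolist()} (confidence {score:.2f})")
```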

Reducing the response times of AEVs is not the only benefit these simulations offer. Optimizing routes, coupled with better traffic management, can increase energy efficiency and, consequently, reduce fuel consumption (Ahmed et al., 2023; Karkar, 2019; Khalid et al., 2021). This advantage ultimately contributes to less environmental pollution and promotes sustainability (Bagloee et al., 2016; Katebi, 2023).

Healthcare personnel and managers: triage and patient care

According to authors like Murray and Kue (2017), in a medical emergency, the journey to reach the patient is more critical to their health than the return trip to the hospital after stabilizing the patient. In this context, AAs offer significant improvements in initial medical care and an optimal solution to one of the primary issues in healthcare resource management. Depending on their level of automation, AEVs could operate without a driver (Rice and Winter, 2019). As a result, given the shortage of healthcare professionals, the individual who would typically be dedicated to driving could redirect their efforts toward patient care (Almaskati et al., 2024; Rice et al., 2019; Wickens et al., 2000). Consequently, this paradigm shift in ambulance services could lead to enhanced medical attention and increased patient survival rates (Becker and Hugelius, 2021; Rice et al., 2019).

The potential of AAs is not limited to increasing the availability of healthcare personnel. Just as studies have examined the integration of medical robots in AEVs to perform first aid in case of accidents (Cooney et al., 2021), these ambulances could also be equipped with various health monitoring devices and diagnostic systems (Khalid et al., 2021). This capability could lead to more accurate diagnoses (Akca et al., 2020), transmitting vital information to the destination hospital and preparing the necessary treatment in advance (Karkar, 2019).

Additionally, AAs could enhance the safety of medical staff. First, as discussed in previous sections, AEVs offer greater safety in their movements—a crucial factor considering that ambulances often travel at high speeds and maneuver through traffic during emergencies. Such driving conditions increase the risk of accidents. Autonomous driving could reduce the number of incidents involving ambulances, thereby protecting both medical personnel and patients (Almaskati et al., 2024; Karkar, 2019; Lasky et al., 2023).

Moreover, the degree of automation in these vehicles can reduce human interaction (Ahmed et al., 2019), which could further enhance the safety of emergency services. Minimizing human contact, or at least reducing it to the essential minimum as AAs could, proves to be an optimal solution for health crises like COVID-19 (Khalid et al., 2021). This approach would lower the risk of contagion and, therefore, reduce the potential spread of the disease, which could otherwise compromise healthcare personnel (Tavakoli et al., 2020).

Despite the various advantages that AAs seem to offer, many healthcare workers still prefer a human driver (Almaskati et al., 2024; Liu et al., 2023). This resistance is likely due to the limited research conducted in this area and the lack of training for healthcare personnel in using new autonomous devices and their capabilities (Goodison et al., 2020). For this new technology to succeed, it is crucial to address these biases among healthcare workers and improve their training. Ignoring this need could mean that the potential improvements that AAs could bring to healthcare resource management may not be fully realized.

Biases in user perception as a barrier to the use of AEVs in crisis management

Despite the potential of AEVs in healthcare management, particularly during health crises like COVID-19 (Khalid et al., 2021), there is a lack of studies that take an integrated approach. A review of recent research reveals that most studies focus on technical solutions for AEV development, emphasizing the perspective of manufacturers and developers. Although these studies occasionally reference other stakeholders, such as regulatory bodies or emergency service managers, they often overlook one key factor in the potential success of AEV implementation: the patients.

Gender, race, culture, and social class biases play a fundamental role in how people perceive and interact with AEVs. These biases, often unconscious, are based on pre-existing stereotypes that are amplified and perpetuated by technology. A clear example of this is found in research on Virtual Assistants (VAs), where it has been shown that the choice of a female voice for these systems can reinforce gender stereotypes by associating women with service and support roles (Anderson et al., 2014). Additionally, the quality of voice recognition in VAs may vary depending on the user’s gender and accent, highlighting the presence of biases in the data used to train these systems (Dou et al., 2021). These biases can negatively affect the trust, credibility, and acceptance of the technology, especially among groups that are disadvantaged or underrepresented.

Biases in technology are not merely technical errors, but reflect existing power structures and inequalities in society (Nass, 1997; Nass and Brave, 2007). Technology, as a product of society, inherits and amplifies the biases present within it. These technological biases can reinforce hierarchies and power dynamics by perpetuating existing inequalities and creating new forms of exclusion. An example of this is seen in the development of facial recognition systems, which have been shown to be less accurate in identifying individuals with darker skin tones, potentially leading to negative consequences in areas such as criminal justice.

As we have seen, the integration of AEVs into healthcare systems represents a significant advancement in managing health crises and delivering services, particularly through applications such as AAs. These technologies offer a wide range of potential benefits, from reducing emergency response times (Mijwil et al., 2023) to delivering services in remote areas (Khalid et al., 2021) and using predictive models to assess clinical risk (Frost et al., 2017) and prioritize medical care for individuals (Paulin et al., 2022). However, despite these advantages, patients frequently exhibit reluctance toward these innovations, particularly in relation to AVs (Almaskati et al., 2024). This hesitance reflects a broader trend of resistance to technological change, often driven by cognitive biases and psychological barriers that undermine effective adoption.

As discussed in earlier sections, there is limited research on the implementation of AI in real clinical environments (Grant and McParland, 2019). This gap also applies to the use of AEVs (Rice and Winter, 2019). Although AAs hold significant promise, particularly in scenarios like a pandemic, there are still few experiences or studies on their deployment (Das and Ghosh, 2021; Khalid et al., 2021). Moreover, many of these studies report limited success in the implementation of such vehicles. For instance, research conducted by Zarkeshev and Csiszár (2020) in Hungary and Kazakhstan showed that users were not inclined to use an AA. This reluctance may stem from various factors, mainly related to biases, which could jeopardize the adoption of this technology.

In addition to the biases related to the appearance of the user interface, we can group the biases affecting the relationship between users and AAs into two broad, non-exclusive categories: those related to perceived trust and competence, and those linked to user characteristics. We analyze all of them in the following sections.

Manifestations of biases in the interface of AEVs

The interface of AEVs, as the point of interaction with the user, is a space where biases can manifest in various ways, impacting perception and interaction with the technology. The choice of male or female voices for VAs can influence perceptions of competence, authority, and trustworthiness, reflecting and perpetuating existing gender stereotypes in society (Damen and Toh, 2019). Several studies have shown that female voices tend to be associated with care and kindness roles, while male voices are perceived as more competent and authoritative (Ernst and Herm-Stapelberg, 2020a; Ernst and Herm-Stapelberg, 2020b). This trend is observed even when there are no differences in the ability or information provided by the VAs, indicating the subconscious influence of stereotypes on user perception. Research suggests that users prefer “gendered” virtual assistants when the assigned task aligns with gender stereotypes. For example, there is a preference for female voices in traditionally female environments, such as the home, and for male voices in environments considered masculine, like the office (Abercrombie et al., 2021; Hoy, 2018).

Cultural biases also manifest in the language, tone, and communication style of AEVs. These biases can have significant implications for accessibility, inclusion, and the representation of cultural diversity in technology. A study conducted in Brazil examined voice recognition in virtual assistants and found that accuracy varied depending on the user’s regional accent (Lima et al., 2019). This research suggests that the data used to train AI systems may not adequately represent the linguistic and cultural diversity of the population, potentially resulting in the exclusion of certain groups. The lack of cultural sensitivity in interface design can lead to misunderstandings, frustration, and a negative user experience, undermining trust in the technology.

The visual appearance of VAs can also perpetuate stereotypes related to gender, race, and other characteristics, influencing trust, perceptions of safety, and interaction with the technology. For example, an avatar’s appearance, including its realism, age, and body shape, influences user perceptions (Chattaraman et al., 2019; van Pinxteren et al., 2019; Tavakoli et al., 2020). The lack of diversity in visual representation can perpetuate exclusion and discrimination, limiting identification with and acceptance of the technology by diverse social groups.

Perceived trust and competence

The reluctance to embrace AAs stems primarily from issues of trust and perceived competence. Trust, as characterized by Mayer et al. (1995), is the willingness to relinquish control over a task to another party in the expectation that it will act in one’s best interest. However, in healthcare, where decisions can have life-or-death consequences, the trust required for autonomous technologies is not easily earned. Users often need to feel that these systems behave predictably (Eckel and Wilson, 2004), an expectation that is complicated by the limited real-world deployment of AAs (Das and Ghosh, 2021). This gap in experience leaves users with little information on which to base their perceptions of reliability, further complicating their willingness to adopt the technology (Geels-Blair et al., 2013).

Moreover, a high degree of automation can trigger feelings of alienation rather than confidence, a phenomenon exemplified by the “uncanny valley” effect (Chang et al., 2018; Ho and MacDorman, 2017; Mori, 1970; Mori, 2020; Seyama and Nagayama, 2007). This concept posits that as technology becomes more human-like but still imperfect, it provokes discomfort.

Moreover, when user agency diminishes, the relationship between people and intelligent devices can become more threatening (Natale and Cooke, 2021; Parasuraman and Riley, 1997; Stein et al., 2019; Zafari and Koeszegi, 2021).

Consequently, AEVs that operate with minimal human input may foster distrust, as their complex functionality creates a perception of inaccessibility and loss of control (Natale and Cooke, 2021; Stein et al., 2019). This psychological distance can be particularly acute in medical emergencies, where human empathy is often expected.

One solution proposed to mitigate these effects is the incorporation of anthropomorphic features into autonomous systems, making them more relatable and fostering greater empathy and trust (Moussawi et al., 2020; Natale and Cooke, 2021; Pitardi and Marriott, 2021). In the case of AAs, integrating virtual assistants with human-like characteristics could serve to bridge the emotional gap and increase user acceptance (van Pinxteren et al., 2019; Watkins and Pak, 2020; Portela and Granell-Canut, 2017). However, even with such modifications, these systems face other barriers, such as the challenge of communicating their complex decision-making processes clearly to users.

Bias linked to user characteristics: cognitive biases and generational gaps

Despite the limited number of available studies, we can identify one of the most influential variables contributing to the aversion to these vehicles: risk. To understand why the implementation of this type of service might fail, we look at variables related to risk management. For example, some studies predicted a higher predisposition among younger individuals to use AVs (Kautonen, 2017). However, the data does not show a significant relationship between age and acceptance of AAs (Howard and Dai, 2014; LaFrance, 2015; Rice and Winter, 2019).

Beyond age, gender also significantly influences attitudes toward autonomous healthcare technologies (Rice and Winter, 2019; Rice et al., 2019; Winter et al., 2018a; Winter et al., 2018b). Studies have shown that women are generally more hesitant to use AAs compared to men (Rice et al., 2019). This hesitancy may be tied to emotional responses triggered by the absence of a human driver, with women reacting more negatively to unfamiliar technological interventions in emergency situations (Winter et al., 2018a). These findings align with broader research on gender and technology, which suggests that societal stereotypes and expectations shape how different groups interact with technological systems (Mehta et al., 2014). For instance, virtual assistants are often gendered as female, reflecting and reinforcing pre-existing social norms regarding caregiving roles (Natale and Cooke, 2021).

Cultural and regional differences also affect the adoption of autonomous healthcare technologies. In countries like India, studies have found greater acceptance of AAs, especially for short trips (Rice et al., 2019). These variations suggest that cultural context plays a significant role in shaping perceptions of risk and trust. Moreover, the language and accent recognition capabilities of these technologies can create further barriers, especially in regions with diverse linguistic backgrounds. As a result, the development of AI systems that are sensitive to these cultural nuances is crucial for promoting equitable access and adoption across different populations.

To address these challenges, greater transparency in the operation of autonomous healthcare technologies is essential. Users require clear and accessible explanations regarding the capabilities and limitations of these systems, as well as the safety protocols in place. Such requirements are detailed in regulations like the European Union’s Artificial Intelligence Act, particularly for High-Risk AI systems applied in clinical settings. These stipulations include providing sufficient information about the identity and contact details of the provider, the capabilities and limitations of the AI system’s functionality, and the human oversight measures in place. These requirements become especially critical in industries like automotive, where balancing the inherent complexity of AI technologies with user-friendly communication poses a significant challenge (Moussawi et al., 2020). Failing to address these informational gaps could exacerbate distrust and hinder adoption, particularly among populations already inclined toward skepticism.

Furthermore, improving technological literacy could help bridge the cognitive gap between developers and end-users. Efforts to educate the public on the functioning of AI and autonomous systems may alleviate some of the fear and resistance associated with their use in healthcare. This is particularly important in ensuring that these technologies do not exacerbate existing inequalities in access to medical services. Those with limited access to education may face additional barriers to understanding and trusting these systems, thus perpetuating healthcare disparities (LaFrance, 2015).

General discussion

One of the key challenges in healthcare resource management is scarcity. Often, the limited number of available healthcare professionals prevents the provision of high-quality services. This difficulty intensifies during demand surges or, as seen during the COVID-19 crisis, when a healthcare emergency arises (Khalid et al., 2021). For these reasons, it seems logical to introduce various AI applications to help improve the healthcare system (Christ et al., 2010; Tang et al., 2021; Piliuk and Tomforde, 2023). AI’s high degree of automation and increasing accuracy enable it to handle tasks such as triage (Kang et al., 2020; Yu et al., 2020), emergency management (Frost et al., 2017; Mijwil et al., 2023), and assist with diagnostics (Al-Dury et al., 2020; Jamaludin et al., 2017; Herweh et al., 2016). However, despite these apparent advantages, as we have seen, several challenges risk hindering the implementation of these tools.

The success of AI applications in the healthcare sector will not solely depend on the technical solutions they offer. While Human-Computer Interaction (HCI) is essential for the implementation of any device, it is even more critical in the healthcare field, an environment that is particularly sensitive due to the type of interaction between technology and users, and the life-or-death stakes for those involved. The implementation of AEVs exemplifies the opportunities and risks that AI can bring to this sector.

The shortage of ambulances is a recurring and significant issue. The time it takes for this aid to reach a patient in an emergency situation is crucial for their survival (Murray and Kue, 2017; Tahir and Javaid, 2019). AI can help solve this problem by efficiently assessing risk levels and prioritizing ambulance dispatch (Mijwil et al., 2023; Yoshida et al., 2023), determining fast and safe routes (Dresner and Stone, 2006; Khalid et al., 2021), or contributing to diagnostics before arriving at the designated hospital (Akca et al., 2020; Karkar, 2019). These opportunities will become more refined as technical capabilities improve, offering more precise and effective responses. Additionally, these technical developments will likely coincide with solutions to the problems identified during the deployment of these technologies, such as the need for a clear regulatory framework (Grant et al., 2020; So et al., 2020), establishing communication network priorities (Peelam et al., 2024; So et al., 2020), addressing liability in cases of malpractice (Elayan et al., 2021; Grant et al., 2020; Woo et al., 2021), or handling and protecting data (Tahir and Javaid, 2019).

However, despite the likely solutions that AI will offer in the near future to resolve some of the major implementation challenges, there remains a difficult-to-resolve issue that, at least for now, has garnered little attention. Interaction with healthcare professionals, and especially with patients, will be key to ensuring the success of this management solution. For instance, we know that healthcare workers prefer human-driven ambulances over autonomous ones (Goodison et al., 2020). Despite the low propensity to use autonomous ambulances, training to improve healthcare workers’ readiness is practically non-existent (Almaskati et al., 2024), making it unlikely that this problem will be resolved in the short term.

Similarly, the relationship between patients and autonomous emergency vehicles has been scarcely studied. While real-world usage experiences are equally limited (Zarkeshev and Csiszár, 2020), all research points to the low level of support for the implementation of such vehicles. Many of these studies also argue that part of this reluctance stems from certain biases or variables, such as gender or nationality (Rice et al., 2019).

It is known that women show less willingness to use AEVs (Winter et al., 2018b; Rice and Winter, 2019). It is also known that nationality, and the cultural patterns associated with it, can negatively influence preferences for this type of vehicle (Rice et al., 2019). Although not many other biases affecting autonomous vehicles are known due to the limited number of existing studies, we do know that this resistance and its intensity are linked to the emotional response these vehicles trigger. For example, when the emotional response to these vehicles is anger, the willingness to use them decreases drastically (Winter et al., 2018a). For this reason, incorporating strategies to improve empathy is essential to avoid a failure in the implementation of this technology.

A device’s high degree of agency or the uncertainty generated by a new technology can lead to what we call the “Uncanny Valley” (Chang et al., 2018; Ho and MacDorman, 2017; Mori, 1970; Mori, 2020), which makes it harder for users to trust AEVs. To mitigate this issue, strategies that have proven effective include giving these devices anthropomorphic traits (Chattaraman et al., 2019; Pitardi and Marriott, 2021). This approach improves empathy through the identification of certain social characteristics in those anthropomorphic traits, making the technology more predictable and, therefore, more trustworthy (van Pinxteren et al., 2019; Watkins and Pak, 2020; Portela and Granell-Canut, 2017). In this regard, incorporating a synthetic voice into AEVs could be particularly useful, as it is an effective tool for building a social connection with all types of technologies. However, despite its advantages, this is not without risks. The identification of social characteristics occurs through the activation of certain stereotypes, including gender stereotypes (Nass, 1997; Nass and Brave, 2007).

The selection of a female voice is the most common choice when it comes to GPS navigators or autonomous vehicles (Abercrombie et al., 2021; Hoy, 2018). This choice may be driven by the default selection in most cases, but it transcends the technological realm. Using female voices can activate negative gender stereotypes, such as those linking virtual assistants with a submissive role for women (Anderson et al., 2014; Dou et al., 2021). Such a bias would severely hinder the development of this technology, as it would fail to meet ethical standards by not preventing the proliferation of discriminatory scenarios (Woo et al., 2021; Wu, 2020).

Moreover, the choice between a male or female voice, due to the activation of gender stereotypes, can affect expectations of what autonomous emergency vehicles should accomplish effectively, whether that be providing medical care or driving to the nearest hospital. Therefore, research should focus on patients and the emotional response these vehicles elicit. Identifying what type of synthetic voice, which gender it should have, and what personality it should be assigned (whether more dominant or more compassionate) could facilitate the implementation of this technology. These devices are poised to become one of the most suitable solutions to ambulance shortages, particularly during health crises like COVID-19.

Author contributions

RL: Conceptualization, Investigation, Writing – original draft, Writing – review & editing. RS: Conceptualization, Investigation, Writing – original draft, Writing – review & editing.

Funding

The author(s) declare that financial support was received for the research, authorship, and/or publication of this article. This article is part of an agreement between the Community of Madrid (Consejería de Educación, Universidades, Ciencia y Portavocía) and Universidad Carlos III de Madrid for the direct granting of aid to finance the implementation of research projects on SARS-CoV-2 and the COVID-19 disease, financed with REACT-EU resources from the European Regional Development Fund “A way to make Europe.”

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The author(s) declare that no Generative AI was used in the creation of this manuscript.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Abercrombie, G., Cercas Curry, A., Pandya, M., and Rieser, V. (2021). Alexa, Google, Siri: what are your pronouns? Gender and anthropomorphism in the design and perception of conversational assistants. Proceedings of the 3rd workshop on gender bias in natural language processing.

Ahmed, S. T., Basha, S. M., Ramachandran, M., Daneshmand, M., and Gandomi, A. H. (2023). An edge-AI-enabled autonomous connected ambulance-route resource recommendation protocol (ACA-R3) for eHealth in smart cities. IEEE Internet Things J. 10, 11497–11506. doi: 10.1109/JIOT.2023.3243235

Ahmed, S., Huda, M. N., Rajbhandari, S., Saha, C., Elshaw, M., and Kanarachos, S. (2019). Pedestrian and cyclist detection and intent estimation for autonomous vehicles: a survey. Appl. Sci. 9:335. doi: 10.3390/app9112335

Akca, T., Sahingoz, O. K., Kocyigit, E., and Tozal, M. (2020). Intelligent ambulance management system in smart cities. 2020 international conference on electrical engineering (ICEE), Istanbul, Turkey.

Alam, F., Almaghthawi, A., Katib, I., Albeshri, A., and Mehmood, R. (2021). iResponse: an AI and IoT-enabled framework for autonomous COVID-19 pandemic management. Sustain. For. 13, 1–52. doi: 10.3390/su13073797

Al-Dury, N., Ravn-Fischer, A., Hollenberg, J., Israelsson, J., Nordberg, P., Strömsöe, A., et al. (2020). Identifying the relative importance of predictors of survival in out of hospital cardiac arrest: a machine learning study. Scand. J. Trauma Resusc. Emerg. Med. 28:60. doi: 10.1186/s13049-020-00742-9

Almaskati, D., Pamidimukkala, A., Kermanshachi, S., Rosenberger, J., and Foss, A. (2024). Investigation of the impacts of the deployment of autonomous vehicles on first responders. Smart Resilient Transport. 6, 150–168. doi: 10.1108/SRT-05-2024-0005

Alzubaidi, A., Sumaiti, A. S. A., Byon, Y.-J., and Hosani, K. A. (2023). Emergency vehicle aware lane change decision model for autonomous vehicles using deep reinforcement learning. IEEE Access 11, 27127–27137. doi: 10.1109/ACCESS.2023.3253503

Anderson, R. C., Klofstad, C. A., Mayew, W. J., and Venkatachalam, M. (2014). Vocal fry may undermine the success of young women in the labor market. PLoS One 9:e97506. doi: 10.1371/journal.pone.0097506

Bagloee, S. A., Tavana, M., Asadi, M., and Oliver, T. (2016). Autonomous vehicles: challenges, opportunities, and future implications for transportation policies. J. Modern Transport. 24, 284–303. doi: 10.1007/s40534-016-0117-3

Becker, J., and Hugelius, K. (2021). Driving the ambulance: an essential component of emergency medical services: an integrative review. BMC Emerg. Med. 21:160. doi: 10.1186/s12873-021-00554-9

Buckman, N., Schwarting, W., Karaman, S., and Rus, D. (2021). Semi-cooperative control for autonomous emergency vehicles. IEEE/RSJ international conference on intelligent robots and systems, Prague, Czech Republic.

Burnap, P., Colombo, G., Amery, R., Hodorog, A., and Scourfield, J. (2017). Multi-class machine classification of suicide-related communication on Twitter. Online Soc. Netw. Media 2, 32–44. doi: 10.1016/j.osnem.2017.08.001

Capodieci, N., Cavicchioli, R., Muzzini, F., and Montagna, L. (2021). Improving emergency response in the era of ADAS vehicles in the Smart City. ICT Express 7, 481–486. doi: 10.1016/j.icte.2021.03.005

Chang, R. C.-S., Lu, H.-P., and Yang, P. (2018). Stereotypes or golden rules?: exploring likable voice traits of social robots as active aging companions for tech-savvy baby boomers in Taiwan. Comput. Hum. Behav. 84, 194–210. doi: 10.1016/j.chb.2018.02.025

Chattaraman, V., Kwon, W.-S., Gilbert, J. E., and Ross, K. (2019). Should AI-based, conversational digital assistants employ social-or task-oriented interaction style? A task-competency and reciprocity perspective for older adults. Comput. Hum. Behav. 90, 315–330. doi: 10.1016/j.chb.2018.08.048

Christ, M., Grossmann, F., Winter, D., Bingisser, R., and Platz, E. (2010). Modern triage in the emergency department. Dtsch. Arztebl. Int. 107, 892–898. doi: 10.3238/arztebl.2010.0892

Cooney, M., Valle, F., and Vinel, A. (2021). Robot first aid: autonomous vehicles could help in emergencies. 33rd annual workshop of the Swedish artificial intelligence society (SAIS), Sweden.

Cui, J., Liew, L. S., Sabaliauskaite, G., and Zhou, F. (2019). A review on safety failures, security attacks, and available countermeasures for autonomous vehicles. Ad Hoc Netw. 90:101823. doi: 10.1016/j.adhoc.2018.12.006

Damen, N., and Toh, C. (2019). Designing for trust: understanding the role of agent gender and location on user perceptions of Trust in Home Automation. J. Mech. Des. 141:061101. doi: 10.1115/1.4042223

Das, M. K., and Ghosh, G. (2021). Self-driving ambulance for emergency application. 5th international conference on electronics, materials engineering & nano-technology (IEMENTech), Kolkata, India.

De la Torre, G., Rad, P., and Choo, K.-K. R. (2020). Driverless vehicle security: challenges and future research opportunities. Futur. Gener. Comput. Syst. 108, 1092–1111. doi: 10.1016/j.future.2017.12.041

Dou, X., Wu, C.-F., Lin, K.-C., Gan, S., and Tseng, T.-M. (2021). Effects of different types of social robot voices on affective evaluations in different application fields. Int. J. Soc. Robot. 13, 615–628. doi: 10.1007/s12369-020-00654-9

Dresner, K., and Stone, P. (2006). Human-usable and emergency vehicle-aware control policies for autonomous intersection management. The fourth workshop on agents in traffic and transportation, Hakodate, Japan. International conference on autonomous agents and multiagent systems (AAMAS).

Eckel, C. C., and Wilson, R. K. (2004). Is trust a risky decision? J. Econ. Behav. Organ. 55, 447–465. doi: 10.1016/j.jebo.2003.11.003

Elayan, H., Aloqaily, M., Salameh, H. B., and Guizani, M. (2021). Intelligent cooperative health emergency response system in autonomous vehicles. IEEE 46th conference on local computer networks (LCN), Edmonton, AB, Canada.

El-Haddadeh, R., Fadlalla, A., and Hindi, N. M. (2021). Is there a place for responsible artificial intelligence in pandemics? A tale of two countries. Inf. Syst. Front. 25, 2221–2237. doi: 10.1007/s10796-021-10140-w

Ernst, C.-P. H., and Herm-Stapelberg, N. (2020a). Gender stereotyping's influence on the perceived competence of Siri and co. Available online at: http://hdl.handle.net/10125/64286

Ernst, C.-P. H., and Herm-Stapelberg, N. (2020b). The impact of gender stereotyping on the perceived likability of virtual assistants. Cognitive research in IS (SIGCORE). University of Hawaiʻi at Mānoa.

Etzioni, O., and DeCario, N. (2020). AI can help scientists find a Covid-19 vaccine. Available online at: https://www.wired.com/story/opinion-ai-can-help-find-scientists-find-a-covid-19-vaccine/

European Union’s Regulation 2024/1689. (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act). Available online at: http://data.europa.eu/eli/reg/2024/1689/oj

Fenwick, M. D., Kaal, W., and Vermeulen, E. P. M. (2017). Regulation tomorrow: what happens when technology is faster than the law? Am. Univ. Bus. Law Rev. 6, 561–594. doi: 10.2139/ssrn.2834531

Fischer, G. S., Righi, R. D. R., Ramos, G. D. O., Costa, C. A. D., and Rodrigues, J. J. P. C. (2020). ElHealth: using internet of things and data prediction for elastic management of human resources in smart hospitals. Eng. Appl. Artif. Intell. 87:103285. doi: 10.1016/j.engappai.2019.103285

Fontes, C., Corrigan, C., and Lütge, C. (2023). Governing AI during a pandemic crisis: initiatives at the EU level. Technol. Soc. 72:102204. doi: 10.1016/j.techsoc.2023.102204

Freifeld, C. C., Mandl, K. D., Reis, B. Y., and Brownstein, J. S. (2008). HealthMap: global infectious disease monitoring through automated classification and visualization of internet media reports. J. Am. Med. Inform. Assoc. 15, 150–157. doi: 10.1197/jamia.M2544

Frost, D. W., Vembu, S., Wang, J., Tu, K., Morris, Q., and Abrams, H. B. (2017). Using the electronic medical record to identify patients at high risk for frequent emergency department visits and high system costs. Am. J. Med. 130, 601.e17–601.e22. doi: 10.1016/j.amjmed.2016.12.008

Gans-Combe, C. (2020). “Automated justice: issues, benefits and risks in the use of artificial intelligence and its algorithms in access to justice and law enforcement” in Ethics, integrity and policymaking. eds. D. O'Mathuna and R. Iphofen (Springer Nature), 175–194.

Geels-Blair, K., Rice, S., and Schwark, J. (2013). Using system-wide trust theory to reveal the contagion effects of automation false alarms and misses on compliance and reliance in a simulated aviation task. Int. J. Aviat. Psychol. 23, 245–266. doi: 10.1080/10508414.2013.799355

Goodison, S. E., Barnum, J. D., Vermeer, M. J. D., Woods, D., Lloyd-Dotta, T., and Jackson, B. A. (2020). “Autonomous road vehicles and law enforcement” in Identifying high-priority needs for law enforcement interactions with autonomous vehicles within the next five years (Santa Monica, CA: RAND Corporation).

Grant, K. M., and McParland, A. (2019). Applications of artificial intelligence in emergency medicine. Univ. Toronto Med. J. 96, 37–39.

Grant, K., McParland, A., Mehta, S., and Ackery, A. D. (2020). Artificial intelligence in emergency medicine: surmountable barriers with revolutionary potential. Ann. Emerg. Med. 75, 721–726. doi: 10.1016/j.annemergmed.2019.12.024

Groh, G., Brand, D., Merwe, J. V. D., Hoffmann, M., Eder, T., Mosca, E., et al. (2021). A scenario-based approach to the design and use of ethical AI models in managing a health pandemic. Munich: Institute for Ethics in Artificial Intelligence.

Guzman, A. L. (2019). Voices in and of the machine: source orientation toward mobile virtual assistants. Comput. Hum. Behav. 90, 343–350. doi: 10.1016/j.chb.2018.08.009

Herweh, C., Ringleb, P. A., Rauch, G., Gerry, S., Behrens, L., Möhlenbruch, M., et al. (2016). Performance of e-ASPECTS software in comparison to that of stroke physicians on assessing CT scans of acute ischemic stroke patients. Int. J. Stroke 11, 438–445. doi: 10.1177/1747493016632244

Ho, C.-C., and MacDorman, K. F. (2017). Measuring the Uncanny Valley effect. Int. J. Soc. Robot. 9, 129–139. doi: 10.1007/s12369-016-0380-9

Hong, Y.-Y., Morris, M. W., Chiu, C.-Y., and Benet-Martínez, V. (2000). Multicultural minds: a dynamic constructivist approach to culture and cognition. Am. Psychol. 55, 709–720. doi: 10.1037/0003-066X.55.7.709

Howard, D., and Dai, D. (2014). Public perceptions of self-driving cars: the case of Berkeley, California. 93rd annual meeting of the Transportation Research Board, California.

Hoy, M. B. (2018). Alexa, Siri, Cortana, and more: an introduction to voice assistants. Med. Ref. Serv. Q. 37, 81–88. doi: 10.1080/02763869.2018.1404391

Ioannou, A., and Tussyadiah, I. (2021). Privacy and surveillance attitudes during health crises: acceptance of surveillance and privacy protection behaviours. Technol. Soc. 67:101774. doi: 10.1016/j.techsoc.2021.101774

Jamaludin, A., Lootus, M., Kadir, T., Zisserman, A., Urban, J., Battié, M. C., et al. (2017). ISSLS PRIZE IN BIOENGINEERING SCIENCE 2017: automation of reading of radiological features from magnetic resonance images (MRIs) of the lumbar spine without human intervention is comparable with an expert radiologist. Eur. Spine J. 26, 1374–1383. doi: 10.1007/s00586-017-4956-3

Jiang, H., and Cheng, L. (2021). Public perception and reception of robotic applications in public health emergencies based on a questionnaire survey conducted during COVID-19. Int. J. Environ. Res. Public Health 18:10908. doi: 10.3390/ijerph182010908

Jiang, F., Jiang, Y., Zhi, H., Dong, Y., Li, H., Ma, S., et al. (2017). Artificial intelligence in healthcare: past, present and future. Stroke Vasc. Neurol. 2, 230–243. doi: 10.1136/svn-2017-000101

Kang, D.-Y., Cho, K.-J., Kwon, O., Jeon, K.-H., Park, H., Lee, Y., et al. (2020). Artificial intelligence algorithm to predict the need for critical care in prehospital emergency medical services. Scand. J. Trauma Resusc. Emerg. Med. 28:17. doi: 10.1186/s13049-020-0713-4

Karkar, A. (2019). Smart ambulance system for highlighting emergency-routes. Third world conference on smart trends in systems security and sustainability (WorldS4), London, UK.

Katebi, H. (2023). Emergent horizons: the convergence of autonomous vehicles and advanced learning in post-pandemic transport resilience. Eng. Technol. J. 8:6. doi: 10.47191/etj/v8i9.06

Kautonen, A. (2017). The younger you are, the more likely you are to trust autonomous cars. Mobile Electronics Association. Available online at: https://me-mag.com/blogs/item/44813-the-younger-you-are,-the-more-likely-you-are-to-trust-autonomous-cars

Khalid, M., Awais, M., Singh, N., Khan, S., Raza, M., and Malik, Q. B. (2021). Autonomous transportation in emergency healthcare services: framework, challenges, and future work. IEEE Internet Things Magazine 4, 28–33. doi: 10.1109/IOTM.0011.2000076

Kim, S., Lee, J., and Oh, P. (2024). Rethinking artificial intelligence: algorithmic bias and ethical issues | Questioning artificial intelligence: how racial identity shapes the perceptions of algorithmic bias. Int. J. Commun. 18, 677–699.

Kirubarajan, A., Taher, A., Khan, S., and Masood, S. (2020). Artificial intelligence in emergency medicine: a scoping review. J. Am. Coll. Emerg. Physicians Open 1, 1691–1702. doi: 10.1002/emp2.12277

Kong, Y. (2024). Preventing and mitigating risks of rumours during major pandemics in the era of artificial intelligence: a perspective on vulnerability. Expert Syst.:e13558. doi: 10.1111/exsy.13558

Kritikos, M., Franceschi, A. M., Vaska, P., Clouston, S. A. P., Huang, C., Salerno, M., et al. (2022). Assessment of Alzheimer's disease imaging biomarkers in world trade center responders with cognitive impairment at midlife. World J. Nucl. Med. 21, 267–275. doi: 10.1055/s-0042-1750013

Kuziemski, M., and Misuraca, G. (2020). AI governance in the public sector: three tales from the frontiers of automated decision-making in democratic settings. Telecommun. Policy 44:101976. doi: 10.1016/j.telpol.2020.101976

Kyriakidis, M., Happee, R., and Winter, J. C. F. D. (2015). Public opinion on automated driving: results of an international questionnaire among 5000 respondents. Transport. Res. F: Traffic Psychol. Behav. 32, 127–140. doi: 10.1016/j.trf.2015.04.014

LaFrance, A. (2015). One thing baby boomers and millennials agree on: Self-driving cars. Washington, DC: The Atlantic.

Lalmuanawma, S., Hussain, J., and Chhakchhuak, L. (2020). Applications of machine learning and artificial intelligence for Covid-19 (SARS-CoV-2) pandemic: a review. Chaos Solitons Fractals 139:110059. doi: 10.1016/j.chaos.2020.110059

Lasky, T. A., Yen, K. S., Donecker, S. M., and Ravani, B. (2023). A connected vehicle system with high-availability and low-bandwidth requirement for first responders. J. Transport. Eng. Part A Syst. 149:7563. doi: 10.1061/JTEPBS.TEENG-7563

Laudanski, K., Shea, G., DiMeglio, M., Restrepo, M., and Solomon, C. (2020). What can COVID-19 teach us about using AI in pandemics? Healthcare (Basel) 8:527. doi: 10.3390/healthcare8040527

Lee, S. H., Patil, V., Britten, N., Block, A., Pandya, A., Jung, M. F., et al. (2023). Safe to approach: Insights on autonomous vehicle interaction protocols with first responders. HRI '23: Companion of the 2023 ACM/IEEE international conference on human-robot interaction.

Lima, L., Furtado, V., Furtado, E., and Almeida, V. (2019). Empirical analysis of Bias in voice-based personal assistants. WWW '19: Companion proceedings of the 2019 World Wide Web conference.

Lin, A. X., Ho, A. F. W., Cheong, K. H., Li, Z., Cai, W., Chee, M. L., et al. (2020). Leveraging machine learning techniques and engineering of multi-nature features for national daily regional ambulance demand prediction. Int. J. Environ. Res. Public Health 17:4179. doi: 10.3390/ijerph17114179

Liu, Y.-H., Albuquerque, O. D. P., Hung, P. C. K., Gabbar, H. A., Fantinato, M., and Iqbal, F. (2022). Towards a real-time emergency response model for connected and autonomous vehicles. Transforms in behavioral and affective computing, Atlanta, Georgia. Available online at: https://ceur-ws.org/Vol-3318/paper3.pdf

Liu, J., Xu, N., Shi, Y., Rahman, M. M., Barnett, T., and Jones, S. (2023). Do first responders trust connected and automated vehicles (CAVs)? A national survey. Transp. Policy 140, 85–99. doi: 10.1016/j.tranpol.2023.06.012

Mayer, R. C., Davis, J. H., and Schoorman, F. D. (1995). An integrative model of organizational trust. Acad. Manag. Rev. 20, 709–734. doi: 10.2307/258792

Mehta, R., Rice, S., Winter, S. R., and Oyman, K. (2014). Consumers’ perceptions about autopilots and remote-controlled commercial aircraft. Proc. Human Factors Ergon. Soc. Ann. Meet. 58, 1834–1838. doi: 10.1177/1541931214581384

Mijwil, M. M., Unogwu, O. J., and Kumar, K. (2023). The role of artificial intelligence in emergency medicine: a comprehensive overview. Mesopotamian J. Artif. Intell. Healthcare 2023, 1–6. doi: 10.58496/MJAIH/2023/001

Mori, M. (1970). The Uncanny Valley. Energy 7, 33–35.

Mori, M. (2020). “The Uncanny Valley” in The monster theory reader. ed. J. A. Weinstock (University of Minnesota Press), 89–94.

Moulik, S. K., Kotter, N., and Fishman, E. K. (2020). Applications of artificial intelligence in the emergency department. Emerg. Radiol. 27, 355–358. doi: 10.1007/s10140-020-01794-1

Moussawi, S., Koufaris, M., and Benbunan-Fich, R. (2020). How perceptions of intelligence and anthropomorphism affect adoption of personal intelligent agents. Electron. Mark. 31, 343–364. doi: 10.1007/s12525-020-00411-w

Mueller, B., Kinoshita, T., Peebles, A., Graber, M. A., and Lee, S. (2022). Artificial intelligence and machine learning in emergency medicine: a narrative review. Acute Med. Surg. 9:e740. doi: 10.1002/ams2.740

Murray, B., and Kue, R. (2017). The use of emergency lights and sirens by ambulances and their effect on patient outcomes and public safety: a comprehensive review of the literature. Prehosp. Disaster Med. 32, 209–216. doi: 10.1017/S1049023X16001503

Nass, C. (1997). Are computers gender-neutral? Gender stereotypic responses to computers. J. Appl. Soc. Psychol. 27, 864–876. doi: 10.1111/j.1559-1816.1997.tb00275.x

Nass, C., and Brave, S. (2007). Wired for speech: How voice activates and advances the human-computer relationship. Cambridge, MA: MIT Press.

Natale, S., and Cooke, H. (2021). Browsing with Alexa: interrogating the impact of voice assistants as web interfaces. Media Cult. Society 43, 1000–1016. doi: 10.1177/0163443720983295

Ortiz-Barrios, M., Arias-Fonseca, S., Ishizaka, A., Barbati, M., Avendaño-Collante, B., and Navarro-Jiménez, E. (2023). Artificial intelligence and discrete-event simulation for capacity management of intensive care units during the Covid-19 pandemic: a case study. J. Bus. Res. 160:113806. doi: 10.1016/j.jbusres.2023.113806

Papini, S., Pisner, D., Shumake, J., Powers, M. B., Beevers, C. G., Rainey, E. E., et al. (2018). Ensemble machine learning prediction of posttraumatic stress disorder screening status after emergency room hospitalization. J. Anxiety Disord. 60, 35–42. doi: 10.1016/j.janxdis.2018.10.004

Parasuraman, R., and Riley, V. (1997). Humans and automation: use, misuse, disuse, abuse. Human Factors 39, 230–253. doi: 10.1518/001872097778543886

Paulin, J., Reunamo, A., Kurola, J., Moen, H., Salanterä, S., Riihimäki, H., et al. (2022). Using machine learning to predict subsequent events after EMS non-conveyance decisions. BMC Med. Inform. Decis. Mak. 22:166. doi: 10.1186/s12911-022-01901-x

Peelam, M. S., Naren Gera, M., Chamola, V., and Zeadally, S. (2024). A review on emergency vehicle management for intelligent transportation systems. IEEE Trans. Intell. Transp. Syst. 25, 15229–15246. doi: 10.1109/tits.2024.3440474

Pickering, B. (2021). Trust, but Verify: informed consent, AI technologies, and public health emergencies. Future Internet 13:132. doi: 10.3390/fi13050132

Piliuk, K., and Tomforde, S. (2023). Artificial intelligence in emergency medicine. A systematic literature review. Int. J. Med. Inform. 180:105274. doi: 10.1016/j.ijmedinf.2023.105274

Pineda, A. L., Ye, Y., Visweswaran, S., Cooper, G. F., Wagner, M. M., and Tsui, F. R. (2015). Comparison of machine learning classifiers for influenza detection from emergency department free-text reports. J. Biomed. Inform. 58, 60–69. doi: 10.1016/j.jbi.2015.08.019

Pitardi, V., and Marriott, H. R. (2021). Alexa, she's not human but… Unveiling the drivers of consumers' trust in voice-based artificial intelligence. Psychol. Mark. 38, 626–642. doi: 10.1002/mar.21457

Portela, M., and Granell-Canut, C. (2017). A new friend in our smartphone? Interacción '17: Proceedings of the XVIII international conference on human computer interaction.

Rajendar, S., Rathinasamy, D., Kaliappan, V. K., and Gnanamurthy, S. (2022). Prediction of stopping distance for autonomous emergency braking using stereo camera pedestrian detection. Materials Today Proceedings 51, 1224–1228. doi: 10.1016/j.matpr.2021.07.211

Ramdani, R., Eko, A., and Purnomo, P. (2021). Big data analysis of COVID-19 mitigation policy in Indonesia: Democratic, elitist, and artificial intelligence. IOP Conf. Series 717:012023. doi: 10.1088/1755-1315/717/1/012023

Ramió, C. (2019). Inteligencia artificial y administración pública: Robots y humanos compartiendo el servicio público. Madrid: Catarata.

Ramlakhan, S., Saatchi, R., Sabir, L., Singh, Y., Hughes, R., Shobayo, O., et al. (2022). Understanding and interpreting artificial intelligence, machine learning and deep learning in emergency medicine. Emerg. Med. J. 39, 380–385. doi: 10.1136/emermed-2021-212068

Rezaei, M., Khalilpour, K. R., and Jahangiri, M. (2020). Multi-criteria location identification for wind/solar based hydrogen generation: the case of capital cities of a developing country. Int. J. Hydrog. Energy 45, 33151–33168. doi: 10.1016/j.ijhydene.2020.09.138

Rice, S., and Winter, S. R. (2019). Do gender and age affect willingness to ride in driverless vehicles: if so, then why? Technol. Soc. 58:101145. doi: 10.1016/j.techsoc.2019.101145

Rice, S., Winter, S. R., Mehta, R., Keebler, J. R., Baugh, B. S., Anania, E. C., et al. (2019). Does length of ride, gender, or nationality affect willingness to ride in a driverless ambulance? J. Unmanned Vehicle Syst. 7, 39–53. doi: 10.1139/juvs-2017-0027

Salvador, M., and Ramió, C. (2020). Capacidades analíticas y gobernanza de datos en la administración pública como paso previo a la introducción de la inteligencia artificial. Revista del CLAD: reforma y democracia 77, 5–36. doi: 10.69733/clad.ryd.n77.a205

Scheufele, D. A., Corley, E. A., Dunwoody, S., Shih, T.-J., Hillback, E., and Guston, D. H. (2007). Scientists worry about some risks more than the public. Nat. Nanotechnol. 2, 732–734. doi: 10.1038/nnano.2007.392

Seyama, J. I., and Nagayama, R. S. (2007). The Uncanny Valley: effect of realism on the impression of artificial human faces. Presence Teleop. Virt. 16, 337–351. doi: 10.1162/pres.16.4.337

Smith, A. (2018). Public attitudes toward computer algorithms. Pew Research Center. Available online at: https://www.pewresearch.org/internet/2018/11/16/public-attitudes-toward-computer-algorithms/

So, J. J., Kang, J., Park, S., Park, I., and Lee, J. (2020). Automated emergency vehicle control strategy based on automated driving controls. J. Adv. Transp. 2020, 1–11. doi: 10.1155/2020/3867921

Sohail, S. S., Madsen, D. Ø., Farhat, F., and Alam, A. (2023). Fighting COVID-19 vaccine misconceptions: how AI-based Chatbots like ChatGPT can help promote vaccine awareness and uptake. SSRN Electron. J. 52, 446–450. doi: 10.2139/ssrn.4399901

Stein, J.-P., Liebold, B., and Ohler, P. (2019). Stay back, clever thing! Linking situational control and human uniqueness concerns to the aversion against autonomous technology. Comput. Hum. Behav. 95, 73–82. doi: 10.1016/j.chb.2019.01.021

Sumia, L., and Ranga, V. (2018). Intelligent traffic management system for prioritizing emergency vehicles in a Smart City. Int. J. Eng. 31:11. doi: 10.5829/ije.2018.31.02b.11

Tang, K. J. W., Ang, C. K. E., Constantinides, T., Rajinikanth, V., Acharya, U. R., and Cheong, K. H. (2021). Artificial intelligence and machine learning in emergency medicine. Biocybernet. Biomed. Eng. 41, 156–172. doi: 10.1016/j.bbe.2020.12.002

Tavakoli, M., Carriere, J., and Torabi, A. (2020). Robotics, smart wearable technologies, and autonomous intelligent Systems for Healthcare during the COVID-19 pandemic: an analysis of the state of the art and future vision. Adv. Intell. Syst. 2:52. doi: 10.1002/aisy.202000071

van Pinxteren, M. M. E., Wetzels, R. W. H., Rüger, J., Pluymaekers, M., and Wetzels, M. (2019). Trust in humanoid robots: implications for services marketing. J. Serv. Mark. 33, 507–518. doi: 10.1108/JSM-01-2018-0045

Watkins, H., and Pak, R. (2020). Investigating user perceptions and stereotypic responses to gender and age of voice assistants. Proc. Human Fact. Ergon. Soc. Ann. Meet. 64, 1800–1804. doi: 10.1177/1071181320641434

Wickens, C. D., Helton, W. S., Hollands, J. G., and Banbury, S. (2000). Engineering psychology and human performance. Old Bridge, NJ: Pearson Prentice Hall.

Winter, S. R., Keebler, J. R., Rice, S., Mehta, R., and Baugh, B. S. (2018a). Driverless ambulances: a possibility, but will patients ride? Proc. Human Fact. Ergon. Soc. Ann. Meet. 62:1176. doi: 10.1177/1541931218621270

Winter, S. R., Keebler, J. R., Rice, S., Mehta, R., and Baugh, B. S. (2018b). Patient perceptions on the use of driverless ambulances: an affective perspective. Transport. Res. F: Traffic Psychol. Behav. 58, 431–441. doi: 10.1016/j.trf.2018.06.033

Woo, S., Youtie, J., Ott, I., and Scheu, F. (2021). Understanding the long-term emergence of autonomous vehicles technologies. Technol. Forecast. Soc. Chang. 170:120852. doi: 10.1016/j.techfore.2021.120852

Wu, S. S. (2020). Autonomous vehicles, trolley problems, and the law. Ethics Inf. Technol. 22, 1–13. doi: 10.1007/s10676-019-09506-1

Yin, J., Ngiam, K. Y., and Teo, H. H. (2021). Role of artificial intelligence applications in real-life clinical practice: systematic review. J. Med. Internet Res. 23:e25759. doi: 10.2196/25759

Yoshida, T., Yoshida, T., Noma, H., Nomura, T., Suzuki, A., and Mihara, T. (2023). Diagnostic accuracy of point-of-care ultrasound for shock: a systematic review and meta-analysis. Crit. Care 27:200. doi: 10.1186/s13054-023-04495-6

Yousefi, M., Yousefi, M., Fathi, M., and Fogliatto, F. S. (2019). Patient visit forecasting in an emergency department using a deep neural network approach. Kybernetes 49, 2335–2348. doi: 10.1108/K-10-2018-0520

Yu, Z., Chen, Q., Zheng, G., and Zhu, Y. (2020). Social work involvement in the COVID-19 response in China: interdisciplinary remote networking. J. Soc. Work. 21, 246–256. doi: 10.1177/146801732098065

Zafari, S., and Koeszegi, S. T. (2021). Attitudes toward attributed agency: role of perceived control. Int. J. Soc. Robot. 13, 2071–2080. doi: 10.1007/s12369-020-00672-7

Zarkeshev, A., and Csiszár, C. (2020). Patients’ willingness to ride on a driverless ambulance: a case study in Hungary. Transport. Res. Procedia 44, 8–14. doi: 10.1016/j.trpro.2020.02.002

Zhu, L., Chen, P., Dong, D., and Wang, Z. (2022). Can artificial intelligence enable the government to respond more effectively to major public health emergencies? Taking the prevention and control of Covid-19 in China as an example. Socio Econ. Plan. Sci. 80:101029. doi: 10.1016/j.seps.2021.101029

Zhu, H., Wu, C. K., Koo, C. H., Tsang, Y. T., Liu, Y., and Chi, H. R. (2019). Smart healthcare in the era of internet-of-things. IEEE Consumer Electron. Magazine 8, 26–30. doi: 10.1109/MCE.2019.2923929

Keywords: AI, autonomous emergency vehicles, smart devices, health crisis, patient, bias

Citation: Losada Maestre R and Sánchez Medero R (2024) Implementation of smart devices in health crisis scenarios: risks and opportunities. Front. Polit. Sci. 6:1518067. doi: 10.3389/fpos.2024.1518067

Received: 13 November 2024; Accepted: 29 November 2024;
Published: 17 December 2024.

Edited by:

Gema Pastor Albaladejo, Complutense University of Madrid, Spain

Reviewed by:

Julio Pérez Hernanz, University of Barcelona, Spain
Gonzalo Pardo-Beneyto, Universitat de València, Spain

Copyright © 2024 Losada Maestre and Sánchez Medero. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Rubén Sánchez Medero, rsmedero@polsoc.uc3m.es

ORCID: Roberto Losada Maestre, https://orcid.org/0000-0001-6584-1888
Rubén Sánchez Medero, https://orcid.org/0000-0001-8799-5685
