- 1Public Health Research Centre, NOVA National School of Public Health, Universidade NOVA de Lisboa, Lisboa, Portugal
- 2Comprehensive Health Research Center, Universidade NOVA de Lisboa, Lisboa, Portugal
- 3Centro Interdisciplinar de Ciências Sociais, Lisboa, Portugal
Digital technologies and data science promise to revolutionize healthcare by transforming the way health and disease are analyzed and managed. Digital health applications include telemedicine, electronic health records, wearable, implantable, injectable and ingestible digital medical devices, mobile health apps, and the application of artificial intelligence and machine learning algorithms to medical and public health prognosis and decision-making. As is often the case with technological advancement, progress in digital health raises compelling ethical, legal, and social implications (ELSI). This article aims to succinctly map the relevant ELSI of the digital health field. The issues of patient autonomy; assessment, value attribution, and validation of health innovation; equity and trustworthiness in healthcare; professional roles and skills; and data protection and security are highlighted against the backdrop of the risks of dehumanization of care, the limitations of machine learning-based decision-making and, ultimately, the future contours of human interaction in medicine and public health. The running theme of this article is the underlying tension between the promises of digital health and its many challenges, heightened by the contrast between the pace of scientific progress and the slower responses of law and ethics. Digital applications can prove to be valuable allies of human skills in medicine and public health. Similarly, ethics and the law can be perceived not merely as obstacles but as promoters of fairness, inclusiveness, creativity, and innovation in health.
Introduction
Innovative solutions to both classic and emergent medical problems have resulted from the impact of the digital revolution on healthcare (1, 2). Prominent examples include telemedicine, electronic health records, wearable, implantable, injectable and ingestible medical devices, mobile health apps, and the application of artificial intelligence (AI) algorithms to health settings (3). In parallel, growing computing power, interconnectivity, and storage capacity have potentiated the collection, analysis, and sharing of health data. These advancements, coupled with the expansion of data generation capabilities, have ushered in an era of big data in healthcare, which promises to facilitate timely and precise healthcare interventions (4, 5). To achieve this aim, the extraction of knowledge from big data through the interdisciplinary work of data science is fundamental (6).
Better healthcare quality as a result of digital health applications and data science methods is an appealing promise, but one that also elicits significant ethical, legal, and social challenges (Table 1). This article aims to outline this tension, focusing on the examples of telemedicine and AI, two specific, interconnected, and rapidly expanding areas.
Table 1. Summary of relevant ethical, legal, and social issues (ELSI) raised by digital technologies and health data processing in healthcare.
In broad terms, telemedicine and telehealth consist of the practice of healthcare through information and telecommunication systems (7). This branch of digital health has grown markedly in recent years, particularly during the COVID-19 pandemic (8–10). Its applications include, but are not restricted to, real-time health consultations at a distance; remote health data collection, analysis, interpretation, and monitoring; and digital interactions with health assistants, including virtual ones (7). Accordingly, these subjects have received particular ethical, legal, and scholarly attention in recent years (7, 11–13).
In parallel, AI applications in healthcare have gathered significant interest (14, 15). These include, but are also not restricted to, analyzing health data to predict health events and outcomes, checking symptoms and improving diagnosis, suggesting preventive strategies, designing and developing new medicines, improving the organization and conduct of clinical trials, enhancing patient experiences, and advancing the structure and intelligibility of electronic health records (5, 14, 16–19). Consequently, the influence of AI and machine learning in the health sector is projected to expand and to affect the work of healthcare professionals, the efficiency of health systems, and the capacity of patients to interpret their own health data (18, 20, 21). Similarly, awareness of the ethical, legal, and social dimensions of AI has broadened (22–25), which will hopefully translate into better regulation (26–28).
Taking a balanced view between promises and challenges, and using telemedicine and AI as examples, this article provides a succinct review of the main ethical, legal, and social implications (ELSI) of the adoption of digital technologies and the processing of health data in medicine and public health.
Methods
The main topics highlighted in this article resulted from an initial literature search using the electronic platforms PubMed and Google Scholar and general search terms (the article keywords in combination with “ethics,” “law,” and “ELSI”). Additional references were identified by backward and forward citation chaining. Finally, articles from scholars known to the author were also analyzed to complement the analysis.
ELSI of the Adoption of Digital Technologies in Healthcare
Trust, Quality, and the Doctor–Patient Relationship
Trust is a fundamental and reciprocal value in healthcare. To obtain guidance and care, patients trust health professionals in a context of asymmetrical information (29). Conversely, health professionals trust patients to describe their individual experience and medical history, as well as to adhere to recommended behaviors and treatments (30). The doctor–patient relationship is therefore based on mutual trust, which is fundamental to ensuring quality of care (31, 32). Admittedly, the uptake of digital technologies in healthcare can affect the doctor–patient relationship by reducing human contact and proximity (33). This effect has been substantially debated in the context of telemedicine (9, 12, 34, 35). In particular, the possible devaluation, for economic or efficiency reasons, of continuous face-to-face interactions between patients and doctors, of non-verbal cues, and of established ways of building empathy and rapport has been highlighted (9). Notably, the impact of digital technologies (and telemedicine in particular) on the doctor–patient relationship may differ greatly depending on the medical specialty. For example, it might be low for some interactions in dermatology, yet completely reshape relations in mental health specialties (35). Likewise, specific functions (image analysis and evaluation of health parameters vs. communication of a diagnosis, for example) might be impacted differently. These considerations reinforce the need to value patient context and preferences in order to improve quality in health, as superficial relationships (even if quantitatively informed) might lead to superficial care.
In a context of increased reliance on digital technologies, the trustworthiness of digital services and goods is fundamental to preserving trust, strengthening the doctor–patient relationship, and increasing healthcare quality (36). To achieve these goals, it is essential to rigorously assess the analytical validity, clinical validity, and clinical utility of digital technologies (37, 38). In particular, validity assessment must consider scientific standards for market clearance and authorization, licensing and periodic evaluation, definition and enforceability of quality control norms, and professional requirements for usage and operation (including training, registration, and authentication rules). However, despite ongoing efforts, audit and certification procedures are often variable, opaque, and incompatible with the rhythm of technological progress (37, 39–41). Furthermore, clinical utility must be critically estimated and subsequently communicated to users, which requires the capacity to properly perceive and transmit a technology's benefits and risks, as well as uncertain notions such as probability and variance. These issues, together with the clarification of legal liability and the definition of malpractice norms across different geographies and jurisdictions, have been clearly identified as fundamental to guaranteeing that quality of care and patient safety are protected, and hopefully improved, during a sharp uptake of telemedicine and telehealth (8, 9, 34, 35).
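To make the challenge of communicating probability concrete, consider a worked example with purely illustrative numbers (an assumption for this sketch, not figures from any cited study): even a reasonably accurate screening test yields mostly false positives when the target condition is rare.

```python
# Worked example with illustrative numbers: the positive predictive value
# (PPV) of a screening test depends heavily on prevalence, which is why
# communicating "a positive result" without context can mislead.
prevalence = 0.01     # assume 1% of the screened population has the condition
sensitivity = 0.90    # P(test positive | condition present)
specificity = 0.95    # P(test negative | condition absent)

true_positives = prevalence * sensitivity
false_positives = (1 - prevalence) * (1 - specificity)
ppv = true_positives / (true_positives + false_positives)
print(f"P(condition | positive test) = {ppv:.1%}")  # ~15.4%, far below 90%
```

Communicated as natural frequencies: of 10,000 people screened, roughly 90 true cases test positive, but so do about 495 healthy people, so fewer than one in six positive results signals disease.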
Different health stakeholders will deliberate on and define relations of trust differently (42). Nonetheless, in broad terms, the recommendation or adoption of subpar digital health services and products risks fostering mistrust in healthcare professionals, institutions, and systems, which might ultimately affect the whole digital health field and the broader scientific endeavor (43, 44). Paradigm cases like Theranos, in which a hyped promise to revolutionize the blood-testing industry turned out to be fraudulent, illustrate the damage that can result from a lack of adequate scrutiny (45). Digital health has lessons to learn from this and similar cases. Ultimately, the implementation of rigorous, up-to-date, and intelligible assessment models is a key component of the digital health promise to promote and advance ethics, evidence, and value-based healthcare (46).
The introduction of AI and machine learning algorithms in healthcare provides a particularly relevant illustration of the growing need for dedicated assessment and validation. Although progress in deep neural networks allows algorithmic capacity to be assessed using synthetic data, including medical images (47), peer-reviewed real-world clinical data must not be abandoned if poor accuracy or undetermined clinical utility are to be avoided (18). Cases like the IBM Watson-mediated promotion of unsafe and incorrect treatment recommendations to hospitals and medical doctors globally illustrate how risky it is to adopt AI-based approaches without proper validation, particularly as such flaws can affect large numbers of patients in a short timespan, severely undermining trust and quality in healthcare (48).
Finally, the extreme case of superficiality in contemporary and future medicine, some argue, is machine-driven healthcare (33). It might be precisely in healthcare that dehumanization proves most costly, as human vulnerability, hope, suffering, dependence and, ultimately, death are at stake (49). Accordingly, healthcare practice extends beyond technical analysis to include ethics and morality. Interestingly, the competence of machines for moral reasoning, judgement, and decision-making is a developing discussion (50–52). Either way, trust, quality, and the doctor–patient relationship would all be reinforced if digital health tools, and AI in particular, provided stronger incentives for healthcare professionals to focus on caring, compassion, and communication: fundamental skills that patients perceive to be in decline (33, 53).
Transparency, Bias, and Exclusion
To achieve its highest aims while preserving trust, the uptake of digital technologies in healthcare must be transparent (40). In a world where data can be artificially created, it is increasingly important that the non-human dimensions of healthcare are disclosed, including the use of models and algorithms, both within and outside the context of telehealth (9, 14, 15, 54). Medical decisions supported by digital technologies should therefore be more transparent and understandable, in order to guarantee accountability while avoiding patient disenfranchisement and exclusion. For example, the inability to understand and scrutinize algorithmic decision-making, a challenge known as the black box problem, is currently the subject of intense debate, including in the healthcare context (55, 56). Notably, a “right to explanation” of algorithmic decisions and requirements for human intervention are legally established in different jurisdictions (57). In contrast, some authors view a degree of uncertainty as inevitable and perceive these conditions as obstacles to progress (18, 58). Nonetheless, common ground can be found in the need for better integration of scientific disciplines to surpass hyper-technical discussions (and limited understanding and explainability) in contemporary medicine, in the context of digital health in general and AI in particular. Consequently, adapting academic curricula to digital health developments should be a priority (59). In parallel, new health skills or professions are likely to emerge that develop common languages, intersect different disciplines, assist in science implementation, and facilitate interactions between different stakeholders (60).
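One practical response to the black box problem, sketched below under simplifying assumptions, is to train an interpretable surrogate model to mimic a black-box classifier's predictions; the data, model choices, and feature names here are synthetic illustrations, not a description of any system discussed in the cited literature.

```python
# Minimal sketch of a global surrogate explanation: a shallow decision tree
# is fitted to the *predictions* of an opaque model, yielding readable rules.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in for tabular clinical data.
X, y = make_classification(n_samples=1000, n_features=6, random_state=0)

# The "black box": an ensemble whose internal logic is hard to inspect.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Surrogate: trained on the black box's outputs, not the original labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box on this data.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"Surrogate fidelity: {fidelity:.1%}")
print(export_text(surrogate, feature_names=[f"feature_{i}" for i in range(6)]))
```

A surrogate with high fidelity offers an approximate, auditable account of the model's behavior; where fidelity is low, the explanation itself becomes unreliable, which is one reason explainability remains contested.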
On a separate yet related note, flawed algorithms can feed into human bias and potentiate discrimination (61, 62). Moreover, as the widespread use of facial recognition technology (FRT) edges closer, questionable studies portraying facial traits as proxies for different characteristics (including economic condition, emotional status, and sexual orientation) multiply, raising justified concerns about bias (63–65). The same is true for extrapolating conclusions from online digital behavior (66, 67). Healthcare is not foreign to this debate, as FRT can be used to diagnose medical and genetic conditions, for example (68). Notably, the issue of AI bias remains open and evolving (69, 70). Nonetheless, it is unreasonable to expect that simply removing humans from the flawed dimensions of healthcare will, per se, produce fairer health outcomes. Expectedly, feeding AI with biased data will lead to biased and unjust decisions (71, 72), as the minimal simulation below illustrates. Hence, AI implementation in healthcare demands great responsibility (73, 74). Additionally, data is context-dependent, and biased context can result in biased conclusions, as studies have shown (75). Conversely, decontextualization of data can also result in algorithmic bias, flawed decisions, and discrimination (75, 76). These are renewed arguments to keep fairness and justice at the center of the healthcare debate.
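The claim that biased data begets biased decisions can be demonstrated in a few lines. The simulation below is entirely synthetic and illustrative: it assumes two groups with identical true need for care but historical records that systematically miss cases in one group.

```python
# Minimal, synthetic illustration of bias propagation: a model trained on
# labels that under-record a condition in one group reproduces the disparity.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)             # two demographic groups, 0 and 1
severity = rng.normal(size=n)             # underlying need for care
true_need = (severity > 0).astype(int)    # identical base rate in both groups

# Biased historical labels: cases in group 1 go unrecorded 40% of the time.
observed = true_need.copy()
observed[(group == 1) & (rng.random(n) < 0.4)] = 0

X = np.column_stack([severity, group])
model = LogisticRegression().fit(X, observed)
pred = model.predict(X)

for g in (0, 1):
    mask = group == g
    print(f"group {g}: true need {true_need[mask].mean():.2f}, "
          f"predicted need {pred[mask].mean():.2f}")
```

Although both groups need care at the same rate, the model systematically predicts less need for group 1, mirroring the flaw in its training labels.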
ELSI of Digital Health Data Processing
Autonomy, Consent, and Patient Participation
Grounded in the principle of autonomy, informed consent is a cornerstone of medical ethics. Valid informed consent requires a clear and precise understanding of the situation, freedom from coercion (physical or psychological), and competence for decision-making (or representation, in the case of minors and incompetent adults) (77). Notably, guaranteeing informed consent for health research or care purposes faces distinct challenges in the digital era, including identity confirmation, remote evaluation of voluntariness, assessment of understanding, and determination of competence (78, 79).
Defining the scope of consent for health data processing is especially difficult. On one hand, single-purpose consent is problematic, as secondary uses are often necessary for research and care purposes and re-consent is impracticable (80). On the other hand, in an increasingly fluid ecosystem with expanding interactions between different stakeholders and infrastructures (hospitals, clinics, biobanks, research institutes, biotechnology, and pharma) and possible cycling of health data between research and healthcare contexts, classical informed consent models face significant challenges. Consequently, as an alternative to otherwise open consent options, dynamic consent models have been proposed and justify continued implementation efforts (81, 82). Additionally, as health data anonymity is increasingly difficult to guarantee in an interconnected digital context, legal compliance, management of expectations, and risk assessment and communication add extra pressure and complexity to the informed consent process (83–85). Furthermore, some types of health data (for example, genetic data) might be shared by more than one person, blurring the limits of individual consent and rights while urging extra care in defining norms and interpreting the law (86).
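A minimal sketch of how a dynamic consent record might be modeled is given below; the class and field names are hypothetical illustrations of the concept (purpose-specific, revocable permissions with an audit trail), not a reference to any existing standard or implementation.

```python
# Illustrative data structure for dynamic consent: permissions are granted
# and revoked per purpose over time, and every change is logged.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DynamicConsent:
    patient_id: str
    permissions: dict = field(default_factory=dict)  # purpose -> granted?
    audit_log: list = field(default_factory=list)    # (timestamp, action)

    def _record(self, action: str) -> None:
        self.audit_log.append((datetime.now(timezone.utc).isoformat(), action))

    def grant(self, purpose: str) -> None:
        self.permissions[purpose] = True
        self._record(f"granted: {purpose}")

    def revoke(self, purpose: str) -> None:
        self.permissions[purpose] = False
        self._record(f"revoked: {purpose}")

    def allows(self, purpose: str) -> bool:
        # Use defaults to "no" unless explicitly granted and not revoked.
        return self.permissions.get(purpose, False)

consent = DynamicConsent(patient_id="anon-0001")
consent.grant("primary_care")
consent.grant("secondary_research")
consent.revoke("secondary_research")         # the patient changes their mind
print(consent.allows("secondary_research"))  # False
```

In contrast with single-purpose or fully open consent, such a structure makes ongoing patient control and accountability explicit, at the cost of requiring infrastructure for continuous interaction.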
Alongside these challenges to informed consent, the issues of patient autonomy, participation, and the doctor–patient relationship converge on other equally challenging digital health data trends. First, health data can now be generated by patients themselves (via apps, wearables, and other digital means) (87). Second, healthcare interactions can be patient-initiated (requests for diagnosis and treatment, for example) (88). Third, direct-to-consumer health services, including telemedicine and AI, are expanding (17, 89). These trends highlight the need to extract meaning and knowledge from large quantities of data while protecting patients from misinformation, misjudgement, and disenfranchisement (90). For example, individual and collective risks such as unjustified anxiety, false reassurance, and overconsumption of scarce health resources are magnified by digital health illiteracy, neglect, or abandonment to excessive technicality (15, 17, 87, 91). Equally, confirming the accuracy of patient-generated health data and streamlining its integration with electronic health records must be addressed if healthcare quality is to be promoted (92).
It is well-established that health illiteracy and the digital divide affect patient participation, possibly compromising access to healthcare (93–96). Therefore, to respect autonomy and promote patient participation, the most vulnerable (due to isolation, disability, age, illiteracy, and other factors) deserve special attention and protection. This implies rejecting a one-size-fits-all approach and tailoring digital healthcare encounters to individual needs and histories, which are told by different types of data. Among the digital health services and products that can promote health data sharing and healthcare access, telemedicine has understandably gathered special recognition due to its proven capacity to extend healthcare access to isolated communities (35, 97). Nonetheless, challenges remain, as some studies have indicated that telemedicine services might medicalize the home context and, in some cases, end up worsening isolation and dependence (35). This further underlines the relevance of context-dependent assessment models and implementation efforts. Furthermore, as telehealth services grew quickly to meet demand in contexts of reduced physical interaction such as the COVID-19 pandemic, the lack of immediate alternatives must not become an excuse for neglecting fundamental aspects of healthcare ethics. In particular, the delimitation of professional responsibilities (clinical, administrative, and other) related to health data accessibility and sharing [including the clarification of End User License Agreements (EULAs)], improved risk communication, and cultural respect in digital interactions must be prioritized in order to attribute real meaning to health data and achieve the highest hopes for this technology (9).
Finally, it is the human pondering of different alternatives that enriches the consent process and furthers patient participation and autonomy in healthcare. As machine learning algorithms gain influence and prove increasingly autonomous, new challenges are posed to these ethical principles in the context of health data processing (98). Progress in healthcare should therefore not consist in promoting machine autonomy at the expense of human autonomy. On the contrary, well-established human values in healthcare, such as integrity, conscientiousness, and compassion, must guide health data processing in a digital context and work as allies of digital health innovation.
Privacy, Confidentiality, and Security
The issues of privacy, confidentiality, and data protection are recognized as fundamental rights in most jurisdictions and are especially challenging for digital health (99–103). The old debate surrounding the erosion of privacy and confidentiality in health settings endures (104, 105). For example, the protection of electronic health records has been widely recognized as insufficient (106, 107). Furthermore, privacy protection, data access, interoperability, and quality of recorded data are recurrently reported as ELSI of digital health, including of telemedicine (9, 12, 34, 35). Undoubtedly, health data processing is essential for medical and scientific progress and should follow transparent, balanced, and fair rules (108–112). In order to strengthen fundamental rights in the digital age and regulate the free movement of personal data, including health data, the EU adopted the General Data Protection Regulation (GDPR) (113). This broad-ranging legal instrument reinforced mechanisms of data protection, including greater transparency and accountability, mandatory impact assessments, pan-European validation of codes of conduct, certification procedures, and more severe sanctions (114). Furthermore, the rights of data subjects were enhanced, including the right of access; the rights to information, explanation, rectification, erasure, restriction of processing, and data portability; and the rights to object and not to be subject to automated individual decision-making (113). Given its broad scope and recent adoption, it is still early to judge the impact of the GDPR on the health area. Additionally, harmonization of health data flows across jurisdictions remains problematic. For example, the successive European Court of Justice Schrems cases (115), which led to the annulment of legal agreements regulating EU–US data flows, have impacted health research and healthcare (116).
In parallel, health data security has become a serious concern, as poor protection measures combined with the high transactional value of health data exacerbate the risk of violations and damages (117, 118). Particular cybersecurity concerns emerge from the expansion of digital health, including telemedicine and the multiplication of interconnected sensors and medical devices (119), which have rightly drawn regulatory attention (120–123). Ultimately, healthcare institutions should strengthen their data security infrastructure, and recent technological progress can provide the tools for risk mitigation (124).
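As one concrete example of a widely available risk-mitigation tool, the sketch below encrypts a health record at rest using symmetric encryption from the `cryptography` Python package; this shows the principle only, and real deployments would additionally require key management, access control, and transport security.

```python
# Minimal sketch: authenticated symmetric encryption of a record at rest.
# Requires the third-party package `cryptography` (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice, kept in a key vault, never in code
cipher = Fernet(key)

record = b'{"patient": "anon-0001", "hba1c": 6.1}'  # illustrative payload
token = cipher.encrypt(record)                      # ciphertext safe to store
assert cipher.decrypt(token) == record              # round-trip succeeds
```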
In complement to data protection measures, responsible use of data is key. Projects with public notoriety demand especially great responsibility if public trust is to be preserved. For example, cases such as the Google DeepMind collaboration with the UK NHS (125, 126), NHS England's care.data programme (127), or “Project Nightingale” in the US, in which patient data was accessed by commercial companies without informed consent, emphasize the need for transparency, communication, and responsibility in order to guarantee the positive impact of data sharing on healthcare. In contrast, opacity extends power imbalances and leaves citizens unprotected (128). Therefore, clear and fair health data ownership rules, beyond traditional property approaches, should continue to be developed and harmonized. These should guarantee patients' access to their data (129) while limiting access and usage by third parties without a compelling interest. Moreover, different strategies can be adopted to encourage the responsible use of health data: investing in healthcare ethics literacy programs, implementing validated codes of conduct (institutional, national, and international), refining deontology rules (for health professionals and data scientists), protecting whistleblowers who expose data mistreatment practices, setting fair procedures, and imposing dissuasive sanctions (disciplinary, legal, and social) for confirmed misconduct. Naturally, such strategies must cover electronic health records and health data shared in telemedicine settings, processed by mobile apps and medical devices, as well as by AI algorithms (130, 131).
Discussion
The digital revolution has impacted different areas of society, including healthcare (19). At the core of this transformation is an increased capacity to process large quantities of health data using digital means (15, 16). Big data in healthcare originates from different sources, including biological and social determinants, health records, environmental signals, habits, and behaviors (19, 132, 133). Against this backdrop, telemedicine and telehealth services have expanded significantly in recent years, particularly during the COVID-19 pandemic (10). Furthermore, AI applications are gaining ground in complementing even the most knowledgeable and skilled professionals (134, 135). Concomitantly, a new era of precision healthcare is promised, in which the right individual and public intervention is available for the right patient or population at the right time (136–141).
In this context, the optimization of health data processing using digital means and the general uptake of digital technologies can rightly be perceived as health enablers. In parallel, the compelling ELSI they raise must be considered and addressed.
Expectedly, healthier activities and wiser health choices should result from better data science. In this sense, digitally mediated health data processing can promote individual autonomy and patient empowerment (142–144). However, the adoption of healthy behaviors does not follow linearly from better health information (which must be extracted from health data), as human decision-making is complex, affected by context and cognitive biases, and combines emotion and reason. Therefore, data-assisted decision-making in healthcare justifies closer collaboration between healthcare professionals, decision and data scientists, and ethicists (15, 33). In fact, the link between health data and statistics literacy and healthcare quality is a classical debate, which is expected to intensify (145). Indeed, the adoption of digital technologies has the potential to improve the volume and quality of health data processing and thereby expand knowledge for professionals and patients alike. However, the lack of common platforms and cross-disciplinary languages to deal with increasing technical complexity is a significant challenge (146). Can technology, data, and analytical models alone capture human vulnerability, suffering, fears, hopes, and potential? Evidently not. Nonetheless, they can elevate the standard of care by providing healthcare professionals with invaluable (and otherwise inaccessible) information and knowledge, while alleviating the burden of repetitive and laborious tasks so that professionals can focus on compassion and emotional connection, which are associated with the highest quality of care (15, 33). To this end, patient stories, particularly those of the most vulnerable, must be heard and understood, and one must be mindful that health data misuse can contribute to misinformation, poorer care, and reinforced exclusion or stigmatization. Notably, such risks are exacerbated if human health and disease are viewed through a purely quantitative lens (147–150). There is, however, cause for optimism, as digital technologies can also be used to promote scientific robustness and tackle the very risks they potentially generate (151–154).
Balanced health data processing and the use of digital technologies to improve healthcare quality are matters of public interest. Presently, there are significant data access asymmetries between citizens, corporations, and governments (128). Therefore, urgent efforts are necessary to reach an inclusive and democratic deliberation leading to the simultaneous advancement of science and human rights. Importantly, digital technologies can advance the fulfillment of the human right to health. Specifically, they can improve the availability of health facilities, services, and goods; increase the acceptability of practices (by incorporating medical ethics and bridging cultures); raise the quality of scientific and medical services, goods, and professional skills; and promote access without discrimination (155, 156). The specific issue of fairness in access to digital technologies has been a key topic in the context of the accelerated uptake of telemedicine. This is particularly relevant because those who are most likely to benefit from this technology and its applications (isolated communities, including in rural areas) are also, predictably, those least able to afford or use them (35). Therefore, public and individual interests must be properly balanced in order to maximize the potential of this technology while respecting human rights. Additionally, big data, telehealth, machine learning algorithms, and the era of individual profiling might be ingredients for deeper discrimination and stigmatization (142). In summary, the positive application of digital technologies and data science in medicine and public health should promote, not defer, progress in social justice.
In conclusion, the ELSI of the digital health field (Table 1) are compelling and proportional to the positive impact of digital technologies on healthcare. Consequently, normative orders such as law and ethics should act as beneficial limit-setters and promoters of just, creative, and innovative realities. Accordingly, digital health ELSI call on ethicists, legal scholars, patients, scientists, health professionals, health providers and payers, regulators, managers, and other decision-makers to play a role in this fascinating field, which promises to decisively shape the way health and disease are perceived, assessed, and managed in the future (157, 158).
Author Contributions
The author confirms being the sole contributor of this work and has approved it for publication.
Funding
This publication was funded by Fundação para a Ciência e a Tecnologia, IP, through national support to CHRC (UIDP/04923/2020).
Conflict of Interest
The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
References
1. Elenko E, Underwood L, Zohar D. Defining digital medicine. Nat Biotechnol. (2015) 33:456–61. doi: 10.1038/nbt.3222
2. Jain SH, Powers BW, Hawkins JB, Brownstein JS. The digital phenotype. Nat Biotechnol. (2015) 33:462–3. doi: 10.1038/nbt.3223
3. Benjamens S, Dhunnoo P, Meskó B. The state of artificial intelligence-based FDA-approved medical devices and algorithms: an online database. NPJ Digit Med. (2020) 3:118. doi: 10.1038/s41746-020-00324-0
4. Murdoch TB, Detsky AS. The inevitable application of big data to health care. JAMA. (2013) 309:1351. doi: 10.1001/jama.2013.393
5. Topol EJ. A decade of digital medicine innovation. Sci Transl Med. (2019) 11:eaaw7610. doi: 10.1126/scitranslmed.aaw7610
7. American Medical Association. Telehealth Implementation Playbook. Digital Health Implementation Playbook Series (2020).
8. Golinelli D, Boetto E, Carullo G, Nuzzolese AG, Landini MP, Fantini MP. Adoption of digital technologies in health care during the COVID-19 pandemic: systematic review of early scientific literature. J Med Internet Res. (2020) 22:e22280. doi: 10.2196/preprints.22280
9. Kaplan B. Revisiting health information technology ethical, legal, and social issues and evaluation: telehealth/telemedicine and COVID-19. Int J Med Inform. (2020) 143:104239. doi: 10.1016/j.ijmedinf.2020.104239
10. Brody JE. A pandemic benefit: the expansion of telemedicine. The New York Times (2020). Available online at: https://www.nytimes.com/2020/05/11/well/live/coronavirus-telemedicine-telehealth.html (accessed December 30, 2020).
12. Botrugno C. Towards an ethics for telehealth. Nurs Ethics. (2019) 26:357–67. doi: 10.1177/0969733017705004
13. Raposo VL. Telemedicine: the legal framework (or the lack of it) in Europe. GMS Health Technol Assess. (2016) 12:Doc03. doi: 10.3205/hta000126
14. Davenport T, Kalakota R. The potential for artificial intelligence in healthcare. Future Healthc J. (2019) 6:94–8. doi: 10.7861/futurehosp.6-2-94
15. Topol EJ. Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again. New York, NY: Basic Books (2019).
16. Obermeyer Z, Emanuel EJ. Predicting the future - big data, machine learning, and clinical medicine. N Engl J Med. (2016) 375:1216–9. doi: 10.1056/NEJMp1606181
17. Babic B, Gerke S, Evgeniou T, Cohen IG. Direct-to-consumer medical machine learning and artificial intelligence applications. Nat Mach Intell. (2021) 3:283–7. doi: 10.1038/s42256-021-00331-0
18. Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. Nat Med. (2019) 25:44–59. doi: 10.1038/s41591-018-0300-7
19. Topol EJ. Individualized medicine from prewomb to tomb. Cell. (2014) 157:241–53. doi: 10.1016/j.cell.2014.02.012
21. Nedelkoska L, Quintini G. Automation, skills use and training. In: OECD Social, Employment and Migration Working Papers. Paris: OECD Publishing (2018).
22. Etzioni A, Etzioni O. Incorporating ethics into artificial intelligence. J Ethics. (2017) 21:403–18. doi: 10.1007/s10892-017-9252-2
23. High-Level Expert Group on AI. Ethics Guidelines for Trustworthy AI. European Commission (2019).
24. High-Level Expert Group on AI. Assessment List for Trustworthy Artificial Intelligence (ALTAI) for Self-Assessment. European Commission (2020).
25. Courtland R. Bias detectives: The researchers striving to make algorithms fair news-feature. Nature. (2018) 558:357–60. doi: 10.1038/d41586-018-05469-3
26. Digital Economy Task Force. Trustworthy AI in health. In: Background Paper for the G20 AI Dialogue. OECD (2020). Available online at: https://www.oecd.org/health/trustworthy-artificial-intelligence-in-health.pdf
27. Ghassemi M, Naumann T, Schulam P, Beam AL, Chen IY, Ranganath R. Practical guidance on artificial intelligence for health-care data. Lancet Digit Health. (2019) 1:e157–e9. doi: 10.1016/S2589-7500(19)30084-6
28. Cohen IG, Evgeniou T, Gerke S, Minssen T. The European artificial intelligence strategy: implications and challenges for digital health. Lancet Digit Health. (2020) 2:e376–e9. doi: 10.1016/S2589-7500(20)30112-6
29. Zaner RM. The phenomenon of trust and the patient-physician relationship. In: Pellegrino ED, Veatch RM, Langan JP, editors. ETHICS, TRUST, AND THE PROFESSIONS Philosophical and Cultural Aspects. Washington, DC: Georgetown University Press (1991). p. 45–68.
30. Sousa-Duarte F, Brown P, Mendes AM. Healthcare professionals' trust in patients: a review of the empirical and theoretical literatures. Sociol Compass. (2020) 14:1–15. doi: 10.1111/soc4.12828
31. Donabedian A. Evaluating the quality of medical care. Milbank Q. (2005) 83:691–725. doi: 10.1111/j.1468-0009.2005.00397.x
32. Rasiah S, Jaafar S, Yusof S, Ponnudurai G, Chung KPY, Amirthalingam SD. A study of the nature and level of trust between patients and healthcare providers, its dimensions and determinants: a scoping review protocol. BMJ Open. (2020) 10:e028061. doi: 10.1136/bmjopen-2018-028061
33. Verghese A. How tech can turn doctors into clerical workers. The New York Times Magazine (2018). Available online at: https://www.nytimes.com/interactive/2018/05/16/magazine/health-issue-what-we-lose-with-data-driven-medicine.html?mtrref=www.google.com&gwh=50127696B49AB16FC5EA028724BB6740&gwt=regi&assetType=REGIWALL (accessed December 30, 2020).
34. Nittari G, Khuman R, Baldoni S, Pallotta G, Battineni G, Sirignano A, et al. Telemedicine practice: review of the current ethical and legal challenges. Telemed eHealth. (2020) 26:1427–37. doi: 10.1089/tmj.2019.0158
35. Keenan AJ, Cert G, Tsourtos G, Tieman J. The value of applying ethical principles in telehealth practices: systematic review. J Med Internet Res. (2021) 23:e25698. doi: 10.2196/25698
36. Adjekum A, Blasimme A, Vayena E. Elements of trust in digital health systems: scoping review. J Med Internet Res. (2018) 20:e11254. doi: 10.2196/11254
37. Garell C, Svedberg P, Nygren JM. A legal framework to support development and assessment of digital health services. JMIR Med Inform. (2016) 4:e17. doi: 10.2196/medinform.5401
38. Perakslis E, Ginsburg GS. Digital health-the need to assess benefits, risks, and value. JAMA. (2021) 325:127–8. doi: 10.1001/jama.2020.22919
39. Schulke DF. The regulatory arms race: mobile health applications and agency posturing. Boston Univ Law Rev. (2013) 93:1699–752.
40. Rodriguez-Villa E, Torous J. Regulating digital health technologies with transparency: the case for dynamic and multi-stakeholder evaluation. BMC Med. (2019) 17:226. doi: 10.1186/s12916-019-1447-x
41. World Health Organisation. Monitoring and Evaluating Digital Health Interventions: A Practical Guide to Conducting Research and Assessment. WHO (2016).
42. Pilgrim D, Tomasini F, Vassilev I. Examining Trust in Healthcare: A Multidisciplinary Perspective. London: Palgrave Macmillan (2011).
43. Kabat GC. Taking distrust of science seriously. EMBO Rep. (2017) 18:1052–5. doi: 10.15252/embr.201744294
44. Baron RJ, Berinsky AJ. Mistrust in science - a threat to the patient-physician relationship. N Engl J Med. (2019) 381:182–5. doi: 10.1056/NEJMms1813043
45. Carreyrou J. Bad Blood: Secrets and Lies in a Silicon Valley Startup. New York, NY: Vintage Books, Penguin Random House (2018).
46. Rigby M, Ammenwerth E. The need for evidence in health informatics. In: Ammenwerth E, Rigby M, editors. Evidence-Based Health Informatics: Promoting Safety and Efficiency through Scientific Methods and Ethical Policy. Amsterdam: IOS Press (2016). p. 3–14.
47. Guibas JT, Virdi TS, Li PS. Synthetic medical images from dual generative adversarial networks. arXiv:1709.01872 [Preprint]. (2018).
48. Ross C, Swetlitz I. IBM's Watson Supercomputer Recommended 'Unsafe and Incorrect' Cancer Treatments, Internal Documents Show. Stat+ (2018).
49. Obermeyer Z, Powers B, Vogeli C, Mullainathan S. Algorithmic bias in health care: a path forward. Health Affairs (2019).
50. Anderson M, Anderson SL. Robot be good. Sci Am. (2010) 303:72–7. doi: 10.1038/scientificamerican1010-72
51. Several authors. Morals and the machine. The Economist. (2012). Available online at: https://www.economist.com/leaders/2012/06/02/morals-and-the-machine (accessed December 30, 2020).
52. Cervantes JA, López S, Rodríguez LF, Cervantes S, Cervantes F, Ramos F. Artificial moral agents: a survey of the current status. Sci Eng Ethics. (2020) 26:501–32. doi: 10.1007/s11948-019-00151-x
53. Singletary B, Patel N, Heslin M. Patient perceptions about their physician in 2 words. JAMA Surg. (2017) 152:1169–70. doi: 10.1001/jamasurg.2017.3851
54. Lima MR, Wairagkar M, Natarajan N, Vaitheswaran S, Vaidyanathan R. Robotic telemedicine for mental health: a multimodal approach to improve human-robot engagement. Front Robotics AI. (2021) 8:618866. doi: 10.3389/frobt.2021.618866
56. Bjerring JC, Busch J. Artificial intelligence and patient-centered decision-making. Philos Technol. (2021) 34:349–71. doi: 10.1007/s13347-019-00391-6
57. Atkinson K, Bench-Capon T, Bollegala D. Explanation in AI and law: past, present and future. Artif Intell. (2020) 289:103387. doi: 10.1016/j.artint.2020.103387
58. Brouillette M. Deep Learning Is a Black Box, But Health Care Won't Mind - MIT Technology Review. MIT Technology Review. (2017).
59. Aungst TD, Patel R. Integrating digital health into the curriculum-considerations on the current landscape and future developments. J Med Educ Curric Dev. (2020) 7:1–7. doi: 10.1177/2382120519901275
60. Wensing M, Grol R. Knowledge translation in health: how implementation science could contribute more. BMC Med. (2019) 17:88. doi: 10.1186/s12916-019-1322-9
62. Simonite T. When it comes to gorillas, google photos remains blind. Wired (2018). Available online at: https://www.wired.com/story/when-it-comes-to-gorillas-google-photos-remains-blind/ (accessed December 30, 2020).
63. Burdick A. The A.I. “Gaydar” study and the real dangers of big data. The New Yorker (2017, September 15). Available online at: https://www.newyorker.com/news/daily-comment/the-ai-gaydar-study-and-the-real-dangers-of-big-data
64. van Noorden R. The ethical questions that haunt facial-recognition research. Nature. (2020) 587:354–8. doi: 10.1038/d41586-020-03187-3
65. Castelvecchi D. Is facial recognition too biased to be let loose? The technology is improving - but the bigger issue is how it's used. Nature. (2020) 587:347–9. doi: 10.1038/d41586-020-03186-4
66. Kramer ADI, Guillory JE, Hancock JT. Experimental evidence of massive-scale emotional contagion through social networks. Proc Natl Acad Sci U S A. (2014) 111:8788–90. doi: 10.1073/pnas.1320040111
67. Kosinski M, Stillwell D, Graepel T. Private traits and attributes are predictable from digital records of human behavior. Proc Natl Acad Sci U S A. (2013) 110:5802–5. doi: 10.1073/pnas.1218772110
68. Martinez-Martin N. What are important ethical implications of using facial recognition technology in health care? AMA J Ethics. (2019) 21:E180–E7. doi: 10.1001/amajethics.2019.180
69. Miller AP. Want Less-Biased Decisions? Use Algorithms. Harvard Business Review Digital Articles (2018, July 26). Available online at: https://hbr.org/2018/07/want-less-biased-decisions-use-algorithms
71. Zou J, Schiebinger L. AI can be sexist and racist - it's time to make it fair. Nature. (2018) 559:324–6. doi: 10.1038/d41586-018-05707-8
72. Adamson AS, Smith A. Machine learning and health care disparities in dermatology. JAMA Dermatol. (2018) 154:1247–8. doi: 10.1001/jamadermatol.2018.2348
73. Castelvecchi D. Prestigious AI meeting takes steps to improve ethics of research. Nature. (2021) 589:12–3. doi: 10.1038/d41586-020-03611-8
74. Gibney E. The battle for ethical AI at the world's biggest machine-learning conference. Nature. (2020) 577:609. doi: 10.1038/d41586-020-00160-y
75. Borgesius F. Discrimination, Artificial Intelligence, and Algorithmic Decision-Making. Council of Europe (2018).
76. Lorentz A. With big data, context is a big issue. Wired (2018). Available online at: https://www.wired.com/insights/2013/04/with-big-data-context-is-a-big-issue/ (accessed December 30, 2020).
77. Beauchamp TL, Childress JF. Principles of Biomedical Ethics, 8th Edn. Oxford: Oxford University Press (2019).
78. Grady C. Enduring and emerging challenges of informed consent. N Engl J Med. (2015) 372:855–62. doi: 10.1056/NEJMra1411250
79. Grady C, Cummings SR, Rowbotham MC, McConnell Mv, Ashley EA, Kang G. Informed consent. N Engl J Med. (2017) 376:856–67. doi: 10.1056/NEJMra1603773
80. Aicardi C, del Savio L, Dove ES, Lucivero F, Tempini N, Prainsack B. Emerging ethical issues regarding digital health data. On the World Medical Association Draft Declaration on Ethical Considerations Regarding Health Databases and Biobanks. Croatian Med J. (2016) 57:207–13. doi: 10.3325/cmj.2016.57.207
81. Schneble CO, Elger BS, Shaw DM. All our data will be health data one day: the need for universal data protection and comprehensive consent. J Med Internet Res. (2020) 22:e16879. doi: 10.2196/16879
82. Vayena E, Blasimme A. Biomedical big data: new models of control over access, use and governance. J Bioethical Inquiry. (2017) 14:501–13. doi: 10.1007/s11673-017-9809-6
83. Rocher L, Hendrickx JM, de Montjoye Y-A. Estimating the success of re-identifications in incomplete datasets using generative models. Nat Commun. (2019) 10:3069. doi: 10.1038/s41467-019-10933-3
84. Editorial. Time to discuss consent in digital-data studies. Nature. (2019) 572:5. doi: 10.1038/d41586-019-02322-z
85. Mann SP, Savulescu J, Sahakian BJ. Facilitating the ethical use of health data for the benefit of society: electronic health records, consent and the duty of easy rescue. Philos Trans A Math Phys Eng Sci. (2016) 374:20160130. doi: 10.1098/rsta.2016.0130
86. Annas GJ. Bioidentifiers. In: Worst Case Bioethics. New York, NY: Oxford University Press (2010). p. 235–50.
87. Lordon RJ, Mikles SP, Kneale L, Evans HL, Munson SA, Backonja U, et al. How patient-generated health data and patient-reported outcomes affect patient-clinician relationships: a systematic review. Health Inform J. (2020) 26:2689–706. doi: 10.1177/1460458220928184
88. Whear R, Thompson-Coon J, Rogers M, Abbott RA, Anderson L, Ukoumunne O, et al. Patient-initiated appointment systems for adults with chronic conditions in secondary care. Cochrane Database Syst Rev. (2020) 4:CD010763. doi: 10.1002/14651858.CD010763.pub2
89. Elliott T, Shih J. Direct to consumer telemedicine. Curr Allergy Asthma Rep. (2019) 19:1. doi: 10.1007/s11882-019-0837-7
90. Ho A, Quick O. Leaving patients to their own devices? Smart technology, safety and therapeutic relationships. BMC Med Ethics. (2018) 19:18. doi: 10.1186/s12910-018-0255-8
91. Bollmeier SG, Stevenson E, Finnegan P, Griggs SK. Direct to consumer telemedicine: is healthcare from home best? Missouri Med. (2020) 117:303–9. doi: 10.1007/978-3-030-53879-8_11
92. Abdolkhani R, Gray K, Borda A, DeSouza R. Patient-generated health data management and quality challenges in remote patient monitoring. JAMIA Open. (2019) 2:417–8. doi: 10.1093/jamiaopen/ooz036
93. Chesser A, Burke A, Reyes J, Rohrberg T. Navigating the digital divide: Literacy in underserved populations in the United States. Inform Health Soc Care. (2015) 41:1–19. doi: 10.3109/17538157.2014.948171
94. van der Vaart R, Drossaert C. Development of the digital health literacy instrument: Measuring a broad spectrum of health 1.0 and health 2.0 skills. J Med Internet Res. (2017) 19:e27. doi: 10.2196/jmir.6709
95. Brall C, Schröder-Bäck P, Maeckelberghe E. Ethical aspects of digital health from a justice point of view. Eur J Public Health. (2019) 29(Suppl. 3):18–22. doi: 10.1093/eurpub/ckz167
96. McAuley A. Digital health interventions: widening access or widening inequalities? Public Health. (2014) 128:1118–20. doi: 10.1016/j.puhe.2014.10.008
97. Rubeis G, Schochow M, Steger F. Patient autonomy and quality of care in telehealthcare. Sci Eng Ethics. (2018) 24:93–107. doi: 10.1007/s11948-017-9885-3
98. Bitterman DS, Aerts HJWL, Mak RH. Approaching autonomy in medical artificial intelligence. Lancet Digit Health. (2020) 2:e447–e9. doi: 10.1016/S2589-7500(20)30187-4
99. Buchner B. Privacy. In: den Exter A, editor. European Health Law. Apeldoorn: Maklu-Uitgevers nv (2017). p. 273–90.
100. Rothstein MA. Privacy and confidentiality. In: Joly Y, Knoppers BM, editors. Routledge Handbook of Medical Law and Ethics. New York, NY: Routledge (2015). p. 52–66.
101. González Fuster G. The Emergence of Personal Data Protection as a Fundamental Right of the EU. Cham: Springer International Publishing (2014).
102. McDermott Y. Conceptualising the right to data protection in an era of Big Data. Big Data Soc. (2017) 4:205395171668699. doi: 10.1177/2053951716686994
104. Siegler M. Confidentiality in medicine - a decrepit concept. N Engl J Med. (1982) 307:1518–21. doi: 10.1056/NEJM198212093072411
105. de Faria PL, Cordeiro JV. Health data privacy and confidentiality rights: Crisis or redemption? Rev Portuguesa Saude Publ. (2014) 32:123–33. doi: 10.1016/j.rpsp.2014.10.001
106. Kruse CS, Kristof C, Jones B, Mitchell E, Martinez A. Barriers to electronic health record adoption: a systematic literature review. J Med Syst. (2016) 40:252. doi: 10.1007/s10916-016-0628-9
107. Rezaeibagha F, Win KT, Susilo W. A systematic literature review on security and privacy of electronic health record systems: technical perspectives. Health Inform Manag J. (2015) 44:23–38. doi: 10.1177/183335831504400304
108. Kosseim P, Dove ES, Baggaley C, Meslin EM, Cate FH, Kaye J, et al. Building a data sharing model for global genomic research. Genome Biol. (2014) 15:430. doi: 10.1186/s13059-014-0430-2
109. Cook-Deegan R, Ankeny RA, Maxson Jones K. Sharing data to build a medical information commons: from bermuda to the global alliance. Annu Rev Genomics Human Genet. (2017) 18:389–415. doi: 10.1146/annurev-genom-083115-022515
110. Baker DB, Kaye J, Terry SF. Privacy, fairness, and respect for individuals. eGEMs. (2016) 4:7. doi: 10.13063/2327-9214.1207
111. Woolley JP, McGowan ML, Teare HJA, Coathup V, Fishman JR, Settersten RA, et al. Citizen science or scientific citizenship? Disentangling the uses of public engagement rhetoric in national research initiatives. BMC Med Ethics. (2016) 17:33. doi: 10.1186/s12910-016-0117-1
112. Atasoy H, Greenwood BN, McCullough JS. The digitization of patient care: a review of the effects of electronic health records on health care quality and utilization. Annu Rev Public Health. (2019) 40:487–500. doi: 10.1146/annurev-publhealth-040218-044206
114. Buttarelli G. The EU GDPR as a clarion call for a new global digital gold standard. Int Data Privacy Law. (2016) 6:77–8. doi: 10.1093/idpl/ipw006
115. Costello RA. Schrems II: everything is illuminated? Eur Pap J Law Integr. (2020) 5:1045–59.
116. Global Alliance for Genomics and Health. GA4GH GDPR Brief: Transferring Genomic and Health-Related Data Following Schrems II (2020).
117. Gillum J, Kao J, Larson J. Millions of Americans' Medical Images and Data Are Available on the Internet. Anyone Can Take a Peek. ProPublica. (2019).
118. CBS News. Hackers are stealing millions of medical records - and selling them on the dark web. CBS News (2019) (accessed December 30, 2020).
119. Thierer AD. The internet of things & wearable technology: addressing privacy & security concerns without derailing innovation. SSRN Electr J. (2015). doi: 10.2139/ssrn.2494382
120. Williams PAH, Woodward AJ. Cybersecurity vulnerabilities in medical devices: a complex environment and multifaceted problem. Med Dev Evid Res. (2015) 8:305–16. doi: 10.2147/MDER.S50048
121. Food and Drug Administration (FDA). Medical Device Cybersecurity: What You Need to Know (2020).
122. European Commission. Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions on enabling the digital transformation of health and care in the Digital Single Market; empowering citizens and building a healthier society. COM/2018/233 final (2018).
123. Schwartz S, Ross A, Carmody S, Chase P, Coley SC, Connolly J, et al. The evolving state of medical device cybersecurity. Biomed Instrum Technol. (2018) 52:103–11. doi: 10.2345/0899-8205-52.2.103
124. Shi S, He D, Li L, Kumar N, Khan MK, Choo K-KR. Applications of blockchain in ensuring the security and privacy of electronic health record systems: a survey. Comput Secur. (2020) 97:101966. doi: 10.1016/j.cose.2020.101966
125. Powles J, Hodson H. Google DeepMind and healthcare in an age of algorithms. Health Technol. (2017) 7:351–67. doi: 10.1007/s12553-017-0179-1
126. Vaughan A. Google gets green light to access five years of NHS patient data. New Scientist (2019). Available online at: https://www.newscientist.com/article/2220344-google-gets-green-light-to-access-five-years-of-nhs-patient-data/ (accessed December 30, 2020).
128. Véliz C. Privacy is Power: Why and How You Should Take Back Control of Your Data. London: Penguin Random House UK (2020).
129. Neves AL, Freise L, Laranjo L, Carter AW, Darzi A, Mayer E. Impact of providing patients access to electronic health records on quality and safety of care: a systematic review and meta-analysis. BMJ Qual Saf. (2020) 29:1019–32. doi: 10.1136/bmjqs-2019-010581
130. Kish LJ, Topol EJ. Unpatients-why patients should own their medical data. Nat Biotechnol. (2015) 33:921–4. doi: 10.1038/nbt.3340
131. Wilbanks JT, Topol EJ. Stop the privatization of health data. Nature. (2016) 535:345–8. doi: 10.1038/535345a
132. Cordeiro JV. Ethical and legal challenges of personalized medicine: paradigmatic examples of research, prevention, diagnosis and treatment. Rev Portuguesa Saude Publ. (2014) 32:164–80. doi: 10.1016/j.rpsp.2014.10.002
133. Ho D, Quake SR, McCabe ERB, Chng WJ, Chow EK, Ding X, et al. Enabling technologies for personalized and precision medicine. Trends Biotechnol. (2020) 38:497–518. doi: 10.1016/j.tibtech.2019.12.021
134. Schmietow B, Marckmann G. Mobile health ethics and the expanding role of autonomy. Med Health Care Philos. (2019) 22:623–30. doi: 10.1007/s11019-019-09900-y
135. Lamanna C, Byrne L. Should artificial intelligence augment medical decision making? The Case for an Autonomy Algorithm. AMA J Ethics. (2018) 20:E902–10. doi: 10.1001/amajethics.2018.902
137. Gornick MC, Zikmund-Fisher BJ. What clinical ethics can learn from decision science. AMA J Ethics. (2019) 21:E906–E12. doi: 10.1001/amajethics.2019.906
138. Gasser U, Ienca M, Scheibner J, Sleigh J, Vayena E. Digital tools against COVID-19: taxonomy, ethical challenges, and navigation aid. Lancet Digit Health. (2020) 2:e425–34. doi: 10.1016/S2589-7500(20)30137-0
139. Khoury MJ, Iademarco MF, Riley WT. Precision public health for the era of precision medicine. Am J Prev Med. (2016) 50:398–401. doi: 10.1016/j.amepre.2015.08.031
140. Rasmussen SA, Khoury MJ, del Rio C. Precision public health as a key tool in the COVID-19 response. JAMA. (2020) 324:933–4. doi: 10.1001/jama.2020.14992
141. Khoury MJ, Bowen MS, Clyne M, Dotson WD, Gwinn ML, Green RF, et al. From public health genomics to precision public health: A 20-year journey. Genet Med. (2018) 20:574–82. doi: 10.1038/gim.2017.211
142. Lupton D. Digital Health: Critical and Cross-Disciplinary Perspectives. Abingdon, Oxfordshire: Routledge (2017).
143. White RW, Horvitz E. Experiences with web search on medical concerns and self diagnosis. AMIA Annu Symp Proc. (2009) 2009:696–700.
144. de Faria PL, Cordeiro JV. Public health: current and emergent legal and ethical issues in a nutshell. In: Joly Y, Knoppers BM, editors. Routledge Handbook of Medical Law and Ethics. New York, NY: Routledge (2014). p. 381–401.
145. Gigerenzer G, Gaissmaier W, Kurz-Milcke E, Schwartz LM, Woloshin S. Helping doctors and patients make sense of health statistics. Psychol Sci Public Interest. (2007) 8:53–96. doi: 10.1111/j.1539-6053.2008.00033.x
146. Land MK, Aronson JD. Human rights and technology: new challenges for justice and accountability. Annu Rev Law Soc Sci. (2020) 16:223–40. doi: 10.1146/annurev-lawsocsci-060220-081955
147. Ajana B. Digital health and the biopolitics of the Quantified Self. Digit Health. (2017) 3:205520761668950. doi: 10.1177/2055207616689509
148. Ruckenstein M, Schüll ND. The datafication of health. Annu Rev Anthropol. (2017) 46:261–78. doi: 10.1146/annurev-anthro-102116-041244
149. Sharon T. Self-tracking for health and the quantified self: re-articulating autonomy, solidarity, and authenticity in an age of personalized healthcare. Philos Technol. (2017) 30:93–121. doi: 10.1007/s13347-016-0215-5
150. Abnousi F, Rumsfeld JS, Krumholz HM. Social determinants of health in the digital age. JAMA. (2019) 321:247–8. doi: 10.1001/jama.2018.19763
151. Nissim K, Wood A. Is privacy privacy? Philos Trans R Soc A Math Phys Eng Sci. (2018) 376:1–17. doi: 10.1098/rsta.2017.0358
152. Papernot N, Abadi M, Erlingsson Ú, Goodfellow I, Talwar K. Semi-supervised knowledge transfer for deep learning from private training data. arXiv:1610.05755 [Preprint]. (2017).
153. Yang Y, Youyou W, Uzzi B. Estimating the deep replicability of scientific findings using human and artificial intelligence. Proc Natl Acad Sci U S A. (2020) 117:10762–8. doi: 10.1073/pnas.1909046117
154. Volpp KG, Mohta NS. Social Networks to Improve Patient Health Advisor Analysis. NEJM Catalyst. (2017).
155. UN. CESCR General Comment no. 14: The Right to the Highest Attainable Standard of Health (Article 12 of the International Covenant on Economic, Social and Cultural Rights). E/C.12/2000/4 (2000).
156. Faria PL, Cordeiro JV. Managing a difficult ethical and legal equilibrium in healthcare: assuring access to the basics while keeping up with innovation. Rev Portuguesa Saude Publ. (2014) 32:121–2. doi: 10.1016/j.rpsp.2014.11.001
157. Topol E. Preparing the Healthcare Workforce to Deliver the Digital Future. The Topol Review. An Independent Report on Behalf of the Secretary of State for Health and Social Care (2019).
158. The Economist Intelligence Unit. The future of healthcare. The Economist (2017). Available online at: https://eiuperspectives.economist.com/healthcare/future-healthcare (accessed December 30, 2020).
Keywords: digital health, ethics, law, artificial intelligence, telemedicine, big data, patient–doctor relationship
Citation: Cordeiro JV (2021) Digital Technologies and Data Science as Health Enablers: An Outline of Appealing Promises and Compelling Ethical, Legal, and Social Challenges. Front. Med. 8:647897. doi: 10.3389/fmed.2021.647897
Received: 30 December 2020; Accepted: 10 June 2021;
Published: 08 July 2021.
Edited by:
Michele Mario Ciulla, University of Milan, Italy
Reviewed by:
Maurice Mars, University of KwaZulu-Natal, South Africa
Bernard Kamsu Foguem, Université de Toulouse, France
Mara Almeida, University of Lisbon, Portugal
Copyright © 2021 Cordeiro. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: João V. Cordeiro, joao.cordeiro@ensp.unl.pt