About this Research Topic
Achieving trustworthy AI requires collaboration between academia and industry; bridging these two worlds is paramount to fostering progress. Industry and academia bring different expertise and perspectives to the table: industry often has practical, real-world experience implementing and deploying AI systems, while academia often focuses on research and theory. By collaborating, the two groups can combine their strengths and insights to create trustworthy AI systems and mitigate concerns about the potential negative impacts of AI.
AI systems will continue to impact and revolutionize the healthcare system in ways we cannot yet imagine. In this context, it is crucial that the AI systems we develop and implement are worthy of our trust and that risks and possible adverse effects are duly considered. However, despite a broad consensus that AI should be trustworthy, it is less clear what trust and trustworthiness entail in the field of AI in health. To trust AI technologies, we need to know that they are fair, reliable, not harmful, and accountable. The High-Level Expert Group on AI (AI HLEG) has identified three components of trustworthy AI, which should be met throughout the AI system's entire life cycle:
• it should be lawful, complying with all applicable laws and regulations;
• it should be ethical, ensuring adherence to ethical principles and values; and
• it should be robust, both from a technical and social perspective, since, even with good intentions, AI systems can cause unintentional harm.
In this Research Topic, we seek to explore different facets of trustworthy AI in health. We welcome the submission of Original Research, Review, Methods, and Perspective articles related to the trustworthy adoption of AI in healthcare, and we encourage collaboration between academia and industry in addressing these topics. In line with the requirements for Trustworthy Artificial Intelligence specified by the AI HLEG, submissions must address at least one of the following dimensions:
1. Human Agency and Oversight: Empowerment of human beings, human autonomy, and decision-making, oversight mechanisms for AI
2. Technical Robustness and Safety: Security, safety, accuracy, reliability, fall-back plans, and reproducibility
3. Privacy and Data Governance: The AI system's impact on privacy and data protection
4. Transparency: Traceability, explainability, and open communication about possible limitations of the AI system
5. Diversity, Non-discrimination, and Fairness: Avoidance of unfair bias, equality, and justice in AI systems
6. Societal and Environmental Well-being: Sustainability, social and environmental impact
7. Accountability: Auditability, responsibility, and accountability for AI systems and their outcomes
---
Dr. Oleg Agafonov, Dr. Aleksandar Babic, and Dr. Sharmini Alagaratnam are employed by DNV. All other Topic Editors declare no conflicts of interest.
Keywords: AI, machine learning, clinical decision support, digital health, electronic health records
Important Note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.