Artificial Intelligence and Robotics are predicted to change nearly every aspect of our everyday lives; however, there is still no unifying strategy for how potentially false decisions of AI algorithms should be treated and how user trust can be established. Current and future AI systems use deep networks involving neuro-inspired mechanisms and are required to interact with humans (and therefore with the human brain), which requires mechanisms for explaining intrinsic properties “to each other”. In addition, key issues for AI systems working together with humans are the prediction of one's own and others' behaviors and their consequences; the formation and re-use of reliable knowledge; and the ability to make decisions transparent, providing, for example, the possibility to debate actions.
If robots and AI are indeed to be widely accepted in our everyday lives, trustworthiness is a key argument for developing AI and robots with human-centered mechanisms and behavior, whether their functional goal is application-directed or mechanism-directed. It is important to address this question with respect to the specific application of each algorithm.
Both technical and human factors influence the use of robots and AI in the real world. To achieve wide acceptance, legal regulations and societal concerns need to be considered, for example regarding drones or technology that directly interfaces with our nervous system.
Trustworthiness will be demanded by non-engineer users who are not expertly aware of the technology, including its decision-making algorithms and processes. Depending on the system and application, such systems should be able to explain their decisions in a way that non-experts in AI can understand. This requirement depends on the application: it does not mean that everything needs to be explainable to everyone. For example, an AI that supports our decision-making by managing our personal data will handle complex information and therefore requires a high level of trust from the human user.
This Research Topic aims to collect knowledge on strategies for making AI reliable, as transparent as needed, and trustworthy. Key strategies include the development of self-aware and self-verifying systems, robust and/or certified robotics and AI, and traceable decision paths.
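To make the last of these strategies concrete, the minimal sketch below shows one possible way to realize a traceable decision path: every decision is recorded together with its inputs, confidence, and a human-readable rationale, so that it can later be explained, audited, and debated. The names used here (`TraceableDecider`, `DecisionRecord`, `stop_or_go`) and the toy rule-based policy are purely illustrative assumptions of ours, not a method drawn from any contributed article.

```python
# Hypothetical sketch of a "traceable decision path":
# each decision is logged with its inputs, confidence, and rationale.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Any, Callable

@dataclass
class DecisionRecord:
    timestamp: str
    inputs: dict[str, Any]
    decision: Any
    confidence: float
    rationale: str  # human-readable justification for the decision

@dataclass
class TraceableDecider:
    # decide() must return a (decision, confidence, rationale) triple
    decide: Callable[[dict[str, Any]], tuple[Any, float, str]]
    trace: list[DecisionRecord] = field(default_factory=list)

    def __call__(self, inputs: dict[str, Any]) -> Any:
        decision, confidence, rationale = self.decide(inputs)
        self.trace.append(DecisionRecord(
            timestamp=datetime.now(timezone.utc).isoformat(),
            inputs=inputs,
            decision=decision,
            confidence=confidence,
            rationale=rationale,
        ))
        return decision

    def explain_last(self) -> str:
        r = self.trace[-1]
        return (f"Decided '{r.decision}' (confidence {r.confidence:.2f}) "
                f"because {r.rationale}")

# Toy rule-based policy standing in for a learned model.
def stop_or_go(inputs: dict[str, Any]) -> tuple[str, float, str]:
    if inputs["obstacle_distance_m"] < 1.0:
        return "stop", 0.95, "an obstacle is closer than the 1 m safety margin"
    return "go", 0.80, "no obstacle is within the 1 m safety margin"

robot = TraceableDecider(decide=stop_or_go)
robot({"obstacle_distance_m": 0.4})
print(robot.explain_last())
# Decided 'stop' (confidence 0.95) because an obstacle is closer than the 1 m safety margin
```

Keeping the trace outside the decision logic itself is a deliberate design choice in this sketch: the same wrapper could, in principle, surround a learned model, provided the model can report a confidence and a rationale for each decision.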
Concerning trustworthy AI and robots, including humanoids, the focus will be on 1) the design or functionality of systems, 2) the process of interaction between humans and AI/robots, and 3) self-understanding with respect to human nature and morality.
We welcome articles addressing the following topics concerning trust and reliability in human-AI/robot interactions:
• acting in a real-world environment
- sensing with trust
- context-based sensing (attention) and action
- predictions of consequences of own actions and actions of others
- internal models of others' behavior
- reliable and safe interfacing with biological neural networks
• building knowledge representations and experience
- linkage with semantics of human knowledge
- using learning to define new experiences that can be re-used in similar situations
• uncertainty in models and reality
- incomplete models, probability of success, overdetermined goals
• sociality
- transparent decision making: ability to explain and debate decisions
- applications of AI and robots in the environment
- evolution of human capacities through AI and robotic technologies
- effects, sustainability, and risk management
- cultural diversity of human-AI/robot interaction
• ethical and philosophical perspectives
- essential conditions for trustworthy and moral agents
- social norms for human-AI/robot interaction
- types and meanings of alterity, sociality, and trust
- conceptual frameworks for human-AI/robot relationships
We would like to acknowledge Dr. Marco Nørskov's valued and integral contribution to the creation of this Research Topic.