The rapid progress and ubiquitous use of artificial intelligence (AI) technologies and intelligent robots have created the potential for a series of moral dilemmas and instances of algorithmic discrimination. The most paradigmatic moral dilemmas are those involving life-or-death decisions, such as AI applications in transportation, health care, judicial trials, and military strikes. While algorithms are supposed to be both more accurate and less emotional than humans, numerous real-world incidents and empirical studies have demonstrated how algorithms may yield discriminatory judgments or decisions on the grounds of socio-demographic characteristics such as gender, race, or age. In the past decade, a number of local and national governments, international organizations, industry associations, research institutions, and leading AI companies around the world have issued policies, principles, and guidelines for ethical AI, incorporating values such as transparency, fairness, non-maleficence, responsibility, privacy, humanity, collaboration, and sharing. A crucial question that needs to be taken seriously is how to build morally behaving AI that coincides with people’s perceptions of right and wrong and fits morally into our society. Specifically, current debates have focused on whether and how humans should or could ascribe moral status to AI, which ethical norms and frameworks AI should acquire and how, and, very importantly, how AI can deepen our understanding of human nature, human relationships, and diverse moral understandings.
This Research Topic aims to address these questions both theoretically and empirically. Authors are invited to submit theoretical or empirical work (quantitative, qualitative, or case studies) as well as review papers that contribute to recent advances in the understanding and design of ethical AI from the perspective of moral psychology and related approaches. Intercultural, cross-tradition, and multidisciplinary studies are highly welcome.
Topics for this collection include, but are not limited to:
1. the ascription of moral status/standing to AI/robots
2. the acquisition of ethical norms/standards for AI/robots
3. possible solutions to moral dilemmas or conflicts in the application of AI/robots
4. the moral responsibility attribution for anthropomorphic AI/robots
5. robopsychology (the psychology of, for, and by robots, robotics, and AI)
6. the influence of AI/robots on human relationships (particularly the moral dimensions)
7. the moral interactions between humans and social robots (especially health care robots)
8. AI/robot acceptance in various moral domains
9. algorithmic profiling and algorithmic discrimination
10. boundary conditions and possible motivations of algorithmic aversion
11. models of approach toward and avoidance of algorithmic decision-making