Businesses increasingly rely on AI to make decisions for humans. YouTube, Amazon, Google, and Facebook customize what users see. Uber and Lyft match passengers with drivers and set prices. Tesla’s advanced driver-assistance systems assist with steering and braking. These examples share a common core: a data-trained set of rules (“machine learning”) that implements a decision with little or no human intermediation. Such systems raise a range of ethical issues and managerial responsibilities. Amazon used AI to screen job applicants, shutting the system down after it showed bias against women. Microsoft had to withdraw its first AI-based Twitter bot, Tay, after the chatbot posted racist and misogynistic remarks. Tesla’s autonomous systems have been involved in fatal incidents, yet the underlying black-box technology is not open to public scrutiny.
This call seeks papers that examine ethical issues in the use of AI techniques, broadly defined, in business. Themes to be covered include, but are not limited to, the following:
Value alignment: AI attempts to imitate human intelligence, especially calculative and strategic intelligence. As AI is adopted to automate decisions, concerns grow about its compatibility with human values. Researchers examine how to imitate moral, as well as calculative and strategic, intelligence through “value alignment.” The capability to make ethical decisions would bring AI to another level.
Moral status of AI: Traditionally, moral agency is attributed to humans, who are endowed with rationality and freedom. How applicable is this to AI? To what extent would the recognition of AI agency warrant a reframing of ethics? Are there parallels between AI agency and corporate agency from legal, moral, and psychological perspectives?
Autonomy: As autonomous systems such as self-driving vehicles, robotic caregivers, and autonomous weapons develop, we worry about losing control. We imagine autonomous agents making their own decisions, free of external ethical constraints and oblivious to our interests. How much autonomy and of what sort is appropriate for machines?
Algorithmic fairness: Interest in AI fairness has led to a proliferation of statistical fairness metrics and remedies for discrimination and bias. Statistical measures are useful, but many notions of statistical fairness are contradictory and lack normative justification. Ethicists can contribute to this field by developing normatively rigorous frameworks.
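The tension among statistical fairness notions can be made concrete with a small sketch. The toy records below are invented for illustration (group label, true outcome, model prediction): on these data the classifier satisfies demographic parity (equal positive-prediction rates across groups) while violating equal opportunity (unequal true-positive rates), showing why choosing among metrics requires normative argument, not just measurement.

```python
# Hypothetical toy data: (group, true_label, predicted_label).
# Invented purely to illustrate that two common statistical fairness
# criteria can disagree about the same classifier.
records = [
    ("A", 1, 1), ("A", 1, 1), ("A", 0, 0), ("A", 0, 0),
    ("B", 1, 1), ("B", 1, 0), ("B", 1, 1), ("B", 0, 0),
]

def positive_rate(group):
    """P(pred = 1 | group) -- the quantity demographic parity compares."""
    preds = [p for g, _, p in records if g == group]
    return sum(preds) / len(preds)

def true_positive_rate(group):
    """P(pred = 1 | true = 1, group) -- the quantity equal opportunity compares."""
    preds = [p for g, y, p in records if g == group and y == 1]
    return sum(preds) / len(preds)

dp_gap = positive_rate("A") - positive_rate("B")            # 0.5 - 0.5 = 0
eo_gap = true_positive_rate("A") - true_positive_rate("B")  # 1.0 - 2/3 = 1/3

print(f"demographic parity gap: {dp_gap:.3f}")  # satisfied
print(f"equal opportunity gap:  {eo_gap:.3f}")  # violated
```

Because the groups have different base rates of positive outcomes, equalizing one statistic necessarily unbalances the other; this is the kind of impossibility result a normatively rigorous framework must adjudicate.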
Explainable AI: Classification and recommendation AIs provide predictions to bankers, doctors, and lawyers. Yet these systems rarely give explanations. This has led to calls for algorithmic transparency, requiring ex post “meaningful” explanations to users. Researchers are developing explainable AI (XAI). But what makes a “good” XAI model?
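What an ex post explanation might look like can be sketched for the simplest possible case. Assuming a toy linear credit-scoring model (the weights and applicant values below are invented for illustration), the score decomposes exactly into additive per-feature contributions; ranking those contributions yields the kind of per-decision explanation that additive XAI methods aim to approximate for more complex models.

```python
# Hypothetical linear credit-scoring model; weights, intercept, and the
# applicant's feature values are all invented for illustration.
weights = {"income": 0.6, "debt": -0.8, "years_employed": 0.3}
baseline = 0.1  # model intercept

applicant = {"income": 1.2, "debt": 0.5, "years_employed": 2.0}

# For a linear model the prediction splits exactly into one additive
# contribution per feature: weight * feature value.
contributions = {f: weights[f] * applicant[f] for f in weights}
score = baseline + sum(contributions.values())

# An "explanation": features ranked by how strongly they moved the score.
explanation = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
for feature, contrib in explanation:
    print(f"{feature}: {contrib:+.2f}")
print(f"score: {score:.2f}")
```

For nonlinear models no such exact decomposition exists, which is precisely where the question of what counts as a “good” or faithful explanation becomes contested.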
Technological unemployment: Robots are taking over human jobs. While technological innovation has historically created more jobs by replacing manual work with cognitive work, what will be the situation now, given that machines can learn new cognitive skills more cheaply than humans? Discussions about the future of work focus on the economic sustenance of displaced workers. Can a universal basic income compensate those who lose work opportunities?
We welcome a variety of article types: original research, systematic reviews, policy and practice reviews or briefs, perspectives, and conceptual analysis.