Artificial Intelligence (AI), including Machine Learning with Deep Neural Networks, is making and supporting decisions in ways that increasingly affect humans in many aspects of their lives. Both autonomous and decision-support systems applying AI algorithms and data-driven models are used for decisions about justice, education, and physical and psychological health, and to provide or deny access to credit, healthcare, and other essential resources, in all aspects of daily life, in increasingly ubiquitous and sometimes ambiguous ways. Too often these systems are built without considering the human factors associated with their use, the need for clarity about how to use them correctly, and their possible biases. Models and systems produce results that are difficult to interpret and are labeled good or bad, when what is actually good or bad is the design of such tools and the training required for them to be properly integrated with human values.
We invite submissions that explore the impact on humans of AI automation and decision-support algorithms, with a focus on how to integrate human-centered principles into the algorithms and their surrounding systems, and on clearly stating in each publication the possible limitations and biases of the proposed real-world application.
We are pleased to launch this Research Topic, originating from the ACER (Affective Computing and Emotion Recognition) workshop at IEEE/ACM/WIC WI2021, and welcome contributions presented at the 2021 event in the form of extended papers. We also welcome extended contributions from the EMORE workshop at BI2021, and from the IWCES and AAILT workshops at ICCSA2021 (International Conference on Computational Science and Its Applications), as well as original submissions from all researchers interested in or engaged with the topic.