Meaningful human control (MHC) is a focal construct intended to facilitate the responsible deployment of autonomous systems. Initially emerging from international discourse about autonomous weapons, the application space for MHC has since expanded to more general domains such as surgical robotics, self-driving vehicles, and decision support systems. Within this space, the central challenge arises when systems with an increasing ability to learn and self-select courses of action face trade-offs whose differing consequences implicate competing human values (e.g. freedom versus security). Yet the role of the human has largely been limited to considering how much authority to delegate, to which agent, and over what types of system functions. In this Research Topic, we take the perspective that such limited considerations have inherently directed discourse away from the true benefit that humans offer: their deep connection with humanity and all that comes with it.
Humanity in the broad sense is much more than cognitive capability, yet cognition is the single human resource most commonly incorporated into human-machine teams. More completely, humanity is the connection with collective human society, across geography and history. Humanity references being humane and invokes compassion, integrity, empathy, self-awareness, and benevolence. Such social and emotional aspects of intelligence contribute to decision-making, especially for complex dilemmas with ethical or moral considerations. While MHC is ultimately about preserving human dignity, the aspects of humanity considered here are rarely included in serious discourse within the domain. We contend that this is because of a perceived difficulty, and perhaps an assumed lower relevance, of incorporating socio-emotional state information into human-machine teams. Accessing humanity in this way implies observing, measuring, inferring, predicting, and influencing affective states alongside cognitive and physical ones. This conflicts with the predominant approach to human-machine teaming, whose designs tacitly assume that applying uncertainty metrics will sufficiently account for any variability due to socio-emotional influences. Yet socio-emotional states matter. Unlike more present-focused, deliberative cognitive processes, affective processes provide an immediate connection with the history of a person's lived experience and culture, and can therefore rapidly and directly modulate decision likelihoods in ways that fit personal circumstances. In this Research Topic, we aim to elucidate the role of socio-emotional processes in value-based, moral, and ethical decision-making. We seek to explore how to measure and draw inferences about such processes, and specifically how to use the information gleaned to improve human relationships with advanced, intelligent technologies.
This topic explores the potentially transformative role of integrating socio-emotional state information for the responsible and ethical deployment of human-machine teams and thus establishing MHC. As such, we seek submissions regarding:
• Implications of MHC within socio-technical ecosystems, including projected ethical, legal, and societal consequences
• Empirical or theoretical demonstrations of the importance of explicitly addressing issues of diversity, equity, and inclusion in development of human-intelligent technology ecosystems
• Metaphors and methods for integrating humanity into socio-technical ecosystems (e.g. adaptive and dynamic delegation, models of multi-scale social dynamics)
• Theoretical and empirical methods for examining and modeling complex ecosystems that include human socio-emotional states
• Moral psychology and the role of human emotion in complex, value-based, ethical decision-making
• Rapport, camaraderie, and social dynamics in human-machine ecosystems
• Multi-modal methods for observing, measuring, and predicting affective-cognitive dynamics in real-world settings
• Lifecycle design and evaluation for hybrid human-intelligent agent systems (e.g. value elicitation, value-sensitive and human-centered design)
• Developmental approaches for manifesting humane, moral, or ethical agents
• Testing, evaluating, and verifying the quality of complex, value-based moral and ethical decisions in human-machine teams
• Real-time or near real-time affective computing with wearable and IoT-type sensing