Interest is rising rapidly in the development of autonomous teams and systems. Until now, autonomy has been an intractable problem. In this interdisciplinary call, our primary interest is in the theory, models, structure, performance and management of interdependent teams (e.g., control, machine learning, AI, the mix of humans and machines); but solutions may span disciplines, motivating our interest in the systemic and human issues associated with autonomy (e.g., safety, ethics, trust, confidence, philosophy, law). What does the science of autonomy mean for individual humans or machines? For swarms of machines? How might the science of autonomy differ for interdependent human-machine teams and systems? For other biological systems in which interdependent agents share tasks by performing orthogonal roles (e.g., ants, “mother trees”)? Computational swarms increase the value of a common goal or objective, while interdependence increases the likelihood of autonomy; nevertheless, we suspect an overlap between the control of swarms and the governance of interdependent teams (e.g., leadership).
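The suspected overlap between swarm control and team governance can be made concrete with a standard illustration (ours, not part of the call): linear consensus dynamics, in which each agent adjusts its state toward its neighbors' states. Purely local interdependence, with no central controller, drives the whole group to a shared value. All names below (`consensus_step`, `run_consensus`, the ring topology) are illustrative assumptions.

```python
# Minimal sketch: linear consensus on a fixed ring network.
# Each agent repeatedly moves toward the mean of its neighbors;
# local coupling alone produces global agreement.

def consensus_step(states, neighbors, eps=0.2):
    """One synchronous update: each agent steps toward its neighbors."""
    return [
        x + eps * sum(states[j] - x for j in neighbors[i])
        for i, x in enumerate(states)
    ]

def run_consensus(states, neighbors, steps=200, eps=0.2):
    """Iterate the update; states converge to the average of the initial states."""
    for _ in range(steps):
        states = consensus_step(states, neighbors, eps)
    return states

if __name__ == "__main__":
    n = 8
    ring = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}  # each agent sees two neighbors
    init = [float(i) for i in range(n)]                        # initial "opinions" 0..7
    final = run_consensus(init, ring)
    mean = sum(init) / n
    print(all(abs(x - mean) < 1e-3 for x in final))            # agents agree on the average
```

A governance question in the spirit of this call is whether such leaderless averaging, or a designated leader node that other agents track, better serves an interdependent team under uncertainty.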
Interdependence has bewildered scientists in the laboratory. It is, however, a state-dependent phenomenon linked with similar phenomena (e.g., quantum effects). This linkage opens a path to theory and models that may advance the science of interaction among intelligent agents striving to form a productive social unit (a team, business, or system). Factors intrinsic to the social unit include control, boundaries, deception, trust (risk mitigation), and cooperation; extrinsic factors include goals, competition, vulnerability, and management.
We are especially interested in metrics for the interdependence between the structure and performance of teams. What happens to an autonomous human-machine team or system (A-HMT-S), such as when determining its context, when it faces uncertainty, conflict or competition? How much authority should a machine in an interdependent team be given to override its human counterparts if an operator, human or machine, becomes incapacitated or dysfunctional? How might the governance of an A-HMT-S be conducted? How might human society interpret, appreciate, reject or advance the science of autonomy?
This Research Topic welcomes a range of article types, such as Original Research, Review, Mini-Review, and Perspective. We welcome manuscripts on the following topics of interest and related themes:
- Interdependent shared tasks of human-machine teams and systems; the bistable (two-sided) effects of tasks.
- Explainable AI.
- Interdisciplinary theory and modeling of A-HMT-S.
- State-dependent phenomena.
- The systems engineering of an A-HMT-S.
- Determinants of structure, performance, resilience or vulnerability of an A-HMT-S.
- Comparing swarms and interdependent teams and systems; seeking conceptual overlap.
- Are biological models of interdependence and collective intelligence generalizable to an A-HMT-S?
- Theories and models of interdependent human-machine teams faced with external sources of uncertainty that affect the context confronting a team and its decisions.
- Models of team fitness that include mergers or spin-offs for markets under threat (e.g., AT&T shedding its media empire), and disruptions or divorces (in marriages, businesses, sciences, etc.).
- Despite the belief that 'many hands make light work,' team size remains an open problem. Is the search for a fit team or firm size the cause of mergers or spin-offs? Should team size be matched to the problem being addressed?
- Safety, philosophy, ethics, trust, intelligence, leadership and other considerations for autonomous systems, e.g., should authority be given to a machine to take operational control from a human operator? Can governance of an A-HMT-S be designed to promote democracy and confidence?
- Other (please propose).