With the continuous advancement of robotics and machine learning, more and more autonomous robot solutions are finding their way into our daily lives. These technologies not only have the potential to take over monotonous tasks or tasks too dangerous for humans to execute, but can also help humans regain or enhance their autonomy. It is desirable that such robots (or agents) possess at least some degree of autonomy, so that we are freed from controlling every detail of their operation.
However, fully automated behavior is still a long way off, and we therefore rely on some level of human supervision, creating shared autonomy systems that leverage the strengths of both humans and robots. In shared autonomy, agents interact, cooperate, and communicate — or, at a minimum, must find actions that do not interfere with one another. Shared autonomy as a concept describes how all agents can remain autonomous, each pursuing their own intentions and goals, while at the same time coordinating their activities and resolving possible conflicts.
Determining and implementing the autonomy spaces called for by such missions is an ongoing research challenge, and it has triggered a large body of work on making robots more skillful, versatile, and easier to deploy — including successful ways to connect human and robot autonomy in restricted scenarios. This has opened up new perspectives on tasks and scenarios in which robots can no longer act in isolation, but must share part of their autonomy space with others. As encounters between autonomous agents cease to be exclusively among humans and increasingly involve robots, it becomes correspondingly important to endow robots, at least to some degree, with similar capabilities to share their autonomy spaces with others — be they humans or robots. As a consequence, we have to address the following questions:
- What is required to enrich the autonomous behavior of an agent with capabilities for context-specific adjustments that enhance its compatibility with other agents?
- What are suitable functional prerequisites (e.g. in perception) to allow an implementation of shared autonomy in the presence of uncertainty, limited information, and limited processing power?
- What representations and processes help to organize the sharing of autonomy spaces?
- How can suitable coordination patterns be learned or refined through practice?
- How can we measure the success of autonomy sharing versus not sharing?