Social robots are designed and developed to interact with humans. To engage humans in social interaction, a robot must be capable of showing social behaviors that are well-timed, contextually appropriate, intuitive, and appealing to its human interactants. Though the behaviors of a robot may depend on its embodiment and specific role, several expectations qualify such behaviors as "social". These include the capability for two-way interaction and the need to be socially aware: to perceive events within an interaction and produce socially relevant responses. Moreover, to be convincing social agents, robots must be able to generate behaviors that are not limited or simplistic but sophisticated, meaningful, and diverse.
Extensive research has been conducted to develop multimodal robot behaviors that support social interactions with and between humans. Nevertheless, the effective design of robot behaviors and their autonomous generation in social interactions remain open challenges. Early social robotics research relied heavily on hand-crafted robot behaviors in Wizard-of-Oz settings to give the illusion of interactivity and intelligence. Manually designing a robot's behavior repertoire, however, is a tedious and time-intensive task that typically yields only a limited set of actions, usually insufficient for an intelligent social robot. Later research automated the design of some robot behaviors with rule-based selection methods that enabled robots to assume some level of autonomy. Supported by advances in sensing technologies and affective computing, as well as the availability of rich human behavior datasets, recent trends show a move towards machine learning and deep learning methods to generate multimodal robot behaviors for various social applications.
This Research Topic on robot behavior generation methods focuses on the latest advancements in the design and generation of multimodal expressive behaviors for robots that can engage humans in social interactions. Submissions aligned with this special issue could include methodologies for autonomous behavior generation, design approaches grounded in user-centered interaction design principles, and application domains such as healthcare, education, mediation, and assistive robotics, where social interactions play a crucial role. In particular, proposed social interaction designs may be integrated with physical design elements through advanced sensing technologies and smart materials. For instance, smart materials embedded in a robot's exterior can enable dynamic changes in appearance that convey emotions or intentions during social interactions. Advanced sensing technologies can enable robots to perceive and interpret human gestures, facial expressions, vocal cues, and other signals, enhancing their ability to engage meaningfully in social interactions. Additionally, research showcasing the integration of multimodal sensing technologies to enhance the social capabilities of robots would be particularly relevant to this special issue.
Topics that fit the scope of this collection include, but are not limited to:
(1) Multimodal robot behavior design (movements, gestures, speech, facial expressions, haptic behavior, etc. based on real-time feedback from human interaction partners)
(2) Generative models for behavior generation (use of generative techniques such as GANs to synthesize diverse and contextually appropriate robot behaviors based on input stimuli and environmental cues)
(3) Multimodal datasets and validation (collection and utilization of datasets containing diverse human behaviors and interactions to train and validate the effectiveness of robot behavior generation models)
(4) Planning methods for interactive robot behaviors (algorithms for real-time decision-making and action planning in social contexts, considering factors such as social norms, user preferences, task objectives, etc.)
(5) Cognitive architectures for interactive robots (architectures inspired by human cognition, such as hierarchical planning systems or memory-based models to enable robots to adaptively generate behaviors in complex social scenarios)
(6) Human-robot multimodal interaction (development of interaction frameworks that facilitate seamless communication between humans and robots through a combination of modalities)
(7) Automatic adaptation and personalization of robot behavior (techniques for dynamically adjusting robot behaviors based on ongoing feedback from human users, environmental changes, and individual user preferences to improve the personalization of human-robot social interactions; for example, using reinforcement learning to acquire socially appropriate behaviors through trial-and-error interactions with humans)
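To make the reinforcement learning idea in topic (7) concrete, the following is a minimal illustrative sketch, not a method proposed by this Research Topic: an epsilon-greedy bandit that adapts a robot's choice among candidate social behaviors from scalar user feedback. The behavior names and the simulated feedback model are hypothetical assumptions for demonstration only.

```python
import random

class BehaviorAdapter:
    """Illustrative sketch: epsilon-greedy selection over candidate social
    behaviors, updated from scalar user feedback (a reward signal).
    Behavior labels and the feedback model are assumptions, not a real API."""

    def __init__(self, behaviors, epsilon=0.1, seed=0):
        self.behaviors = list(behaviors)
        self.epsilon = epsilon
        self.rng = random.Random(seed)
        self.counts = {b: 0 for b in self.behaviors}
        self.values = {b: 0.0 for b in self.behaviors}  # running mean reward

    def select(self):
        # Explore a random behavior with probability epsilon,
        # otherwise exploit the current best estimate.
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.behaviors)
        return max(self.behaviors, key=lambda b: self.values[b])

    def update(self, behavior, reward):
        # Incremental mean update from the observed feedback.
        self.counts[behavior] += 1
        n = self.counts[behavior]
        self.values[behavior] += (reward - self.values[behavior]) / n

# Simulated interactions: hypothetical average user feedback per behavior.
adapter = BehaviorAdapter(["wave", "nod", "speak"])
true_reward = {"wave": 0.9, "nod": 0.5, "speak": 0.3}
for _ in range(500):
    chosen = adapter.select()
    noisy_feedback = true_reward[chosen] + adapter.rng.uniform(-0.1, 0.1)
    adapter.update(chosen, noisy_feedback)
```

After enough interactions, the adapter's value estimates favor the behavior that users respond to best; in a real system, the reward would come from measured interaction signals (engagement, explicit ratings) rather than a fixed table.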
Keywords:
social robot, behavior generation, multimodal behavior, deep learning, generative model, interactive behaviors
Important Note:
All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.