Social robots and virtual avatars are emerging technologies that promise effective interaction with humans. This ability emerges from the coherent exploitation of social skills that explicitly take the human presence in the perception-action loop into account. Identifying users, understanding their actions, inferring their mental and emotional states, processing non-verbal gestures, and planning their actions in a more “human-aware” fashion are just some of the social skills that are pushing interactive agents towards efficient and practical collaboration with humans in different scenarios: as partners in industry, as companions for children and the elderly, as educational tools, as assistants in public or personal spaces, and so on. However, despite progress in the field of Human-Agent Interaction (HAI), today’s state-of-the-art agents still lack the skills to accomplish complex social tasks and remain unable to function autonomously in real-world social scenes.
This Research Topic aims to present rigorous scientific and technical advances in methods, models, techniques, and algorithms that enrich the mutual interplay between humans and artificial agents and improve the agents’ social skills and awareness. We welcome contributions on theoretical aspects as well as practical applications of embodied, autonomous, and semi-autonomous social cognition in virtual agents and robots.
Particular attention will be given to the integration of electrophysiological signals (brain activity, heart rate, skin responses, etc.) and behavioural signals (verbal, non-verbal, and vocal communication, gestures, posture, facial expressivity, etc.) into the agent’s decision-making process and into the production of coherent social behaviour based on the user’s emotional state, attention, stress level, etc. A purely illustrative sketch of how such signals might feed behaviour selection is given below.
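As a minimal sketch only, the following Python snippet illustrates one possible way a fused estimate of physiological arousal and behavioural engagement could drive an agent’s choice of social behaviour. All names, weights, and thresholds are hypothetical placeholders; a real system would rely on calibrated sensing pipelines and learned policies rather than hand-set rules.

```python
# Illustrative only: fusing hypothetical physiological and behavioural
# estimates into a coarse social-behaviour decision for an agent.
from dataclasses import dataclass


@dataclass
class UserState:
    arousal: float      # e.g. normalized estimate from heart rate / skin responses, 0..1
    engagement: float   # e.g. normalized estimate from gaze and facial expressivity, 0..1


def select_behaviour(state: UserState) -> str:
    """Map a fused user-state estimate to a coarse social behaviour."""
    # Weighted fusion of the two channels (weights are arbitrary here).
    stress_index = 0.7 * state.arousal + 0.3 * (1.0 - state.engagement)

    if stress_index > 0.6:
        return "slow_down_and_reassure"   # lower pace, calmer voice, simpler prompts
    if state.engagement < 0.3:
        return "re_engage"                # attention-grabbing gesture or question
    return "continue_task"                # keep the current interaction strategy


if __name__ == "__main__":
    # Example: a highly aroused, disengaged user triggers a reassuring behaviour.
    print(select_behaviour(UserState(arousal=0.8, engagement=0.2)))
```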
Topics of interest include, but are not limited to:
- Human-aware perception-action loop
- Social signal processing for HAI
- Physiological monitoring and biofeedback systems
- Brain-computer interface (BCI)-enabled adaptive interaction with artificial agents
- Emotion recognition and real-time detection of mental states
- Long-term, personalized HAI
- Neural outcomes of interaction with artificial agents
- Relationship between neural, verbal, and non-verbal signals in HAI
- Objective metrics and scales for evaluation of HAI