Nowadays, people spend a significant amount of time interacting with devices and computing systems. These devices and systems, however, play a passive role during interaction, unlike human partners, who can observe each other to know when and how to provide assistance. Seamless blending of humans and technology for intelligent interaction is therefore becoming more important than ever. One key aspect is to let machines understand users’ states of emotion, cognition, and action (herein termed user state for simplicity). Building this ability into machines is of critical use in a wide variety of human-machine collaboration contexts, spanning from safe driving to assistance for people with disabilities. For example, an autonomous car can ‘observe’ the user’s state of emotion (e.g., negative), cognition (e.g., overloaded), and action (e.g., glancing down) to determine when to give safety reminders or take control. The aim of this research is to empower machines to understand user state so that humans and machines can collaborate in the best form to augment human ability.
Recognition of users’ states of emotion, cognition, and action is a fundamentally multidisciplinary field. The key research questions that need to be addressed include: (i) what physiological and behavioural cues are critically important for understanding user state? (ii) how can better representations be developed for each individual state (e.g., distress), given the complex psychological processes in which our physiological and behavioural signals are often affected by multiple states? (iii) how can computational methods applicable to real-life tasks be developed to automatically recognize the state of interest, whether cognition, emotion, action, or all of them?
As we often express emotion while performing cognitive tasks and responding with actions, multiple states co-occur in users, which makes recognizing them in real life complicated and requires substantial research. Studies that examine the measurable differences and recognition performance in less controlled tasks, and that propose effective and robust computing methods for user state recognition, will help shed light on these problems.
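As a purely illustrative sketch of the kind of computational problem described above (not a method proposed by this Research Topic), the example below treats emotion, cognitive load, and an action-related label as co-occurring targets predicted jointly from fused physiological and behavioural features. The feature set, labels, and data are synthetic assumptions made only for the illustration.

```python
# Minimal sketch: multi-label recognition of co-occurring user states
# from fused (hypothetical) physiological and behavioural features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.multioutput import MultiOutputClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical fused feature vectors, e.g. heart-rate variability, skin
# conductance, gaze dispersion, and head-pose statistics (8 features/sample).
X = rng.normal(size=(500, 8))

# Co-occurring binary labels: [negative emotion, high cognitive load, glancing away].
# Synthetic labels derived from the synthetic features for demonstration only.
Y = (X[:, :3] + 0.5 * rng.normal(size=(500, 3)) > 0).astype(int)

X_train, X_test, Y_train, Y_test = train_test_split(X, Y, random_state=0)

# One classifier per state dimension, all trained on the same fused features,
# so the co-occurring states are recognized jointly rather than in isolation.
model = MultiOutputClassifier(RandomForestClassifier(n_estimators=100, random_state=0))
model.fit(X_train, Y_train)
print("per-state accuracy:", (model.predict(X_test) == Y_test).mean(axis=0))
```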
The main objective of this Research Topic is to bring together current advances in the field. Topics of interest include, but are not limited to:
• Theoretical frameworks, experimental studies, or reviews on the relationship between user state and physiological/behavioural cues
• Methods for processing physiological and behavioural signals and recognizing indicators of emotion, cognition, and action states
• Machine learning techniques focused specifically on user state recognition
• Methods for multimodal user-state recognition
• Robustness issues in recognizing user state in less controlled contexts and everyday life
• Wearable technologies for user state analysis
• Emotion/cognition/action recognition-powered assistive technologies and applications
• Novel user-state recognition systems and their applications