Event Abstract

Acting is not the same as feeling: Emotion expression in gait is different for posed and induced emotions

B. A. Schuster, S. L. Sowden, D. Abdlkarim, A. M. Wing and J. L. Cook

  • 1 University of Birmingham, School of Psychology, United Kingdom

INTRODUCTION The past decade has witnessed unprecedented growth in human–computer interaction. With this progress comes growing demand for computers to sense and recognize users’ affective states (Cowie et al., 2001; Pantic & Rothkrantz, 2003; Hudlicka, 2003). Indeed, the development of emotion-sensitive computer systems may have important implications for a variety of areas, from automatic customer services (Fragopanagos & Taylor, 2005) to the early recognition and diagnosis of clinical conditions (Michalak et al., 2009). Automated emotion detection has largely focused on facial expressions (e.g., Kenji, 1991); however, whole-body movement carries numerous emotion-related cues, which humans can rapidly detect (e.g., Clarke et al., 2005; de Meijer, 1989; Dittrich et al., 1996; Montepare et al., 1987, 1999; Walk & Homan, 1984; Wallbott & Scherer, 1986). Velocity, for instance, is an important cue to a person’s underlying emotional state: faster (high-velocity) body movements tend to indicate anger and happiness, whilst low-velocity movements are indicative of sadness (Chouchourelou et al., 2006; Edey et al., 2017; Gross et al., 2012; Halovic & Kroos, 2018; Michalak et al., 2009; Roether et al., 2009). Consequently, whole-body cues are increasingly being incorporated into computerized emotion recognition technologies (Janssen et al., 2008; Pantic & Rothkrantz, 2003). At present, however, this field lags behind the advances made in the detection of emotion from facial cues. One issue which has received interest in the context of emotion recognition from faces, but which has been overlooked with respect to whole-body emotion recognition, is the question of differences between posed and induced (spontaneous) expressions. Although emotion-tracking software typically aims to detect naturally occurring, spontaneous expressions, much of our knowledge of movement kinematics comes from the posed expressions of professional actors (e.g., Janssen et al., 2008; Roether et al., 2009; Venture et al., 2014). Even when posing is aided by induction methods such as autobiographical recall, the actor remains aware of the effects they are expected to produce, and is likely to exaggerate particular movement patterns. Consequently, kinematic measures derived from studies using posed expressions alone may not correspond to naturally occurring emotional expressions. Indeed, with respect to facial expressions, preliminary evidence suggests that induced and posed expressions differ with regard to timing and amplitude (Schmidt et al., 2006; Valstar & Pantic, 2006). The current study recorded happy, angry and sad walks executed by student volunteers, and compared the velocity of these walks when the emotion was ‘posed’ and when it was naturally ‘induced’ by watching emotional film clips.

METHODS Kinematic data were obtained from 31 healthy participants (24 females) with self-reported unimpaired motor function. All participants gave informed consent and received course credit or a monetary incentive as reimbursement. The study was approved by the University of Birmingham Ethics Committee. Walking data were recorded using the Zeno™ Walkway (ProtoKinetics LLC, Havertown, USA) gait mat. All participants first carried out a ‘baseline’ walk lasting 120 seconds. Following this, participants watched 3 film clips (average length: 2.5 minutes) which had been selected, on the basis of a pilot study, for their propensity to induce happy, angry and sad emotional states.
Film-clip order was pseudo-randomized between participants. Between films, participants viewed a 1-minute filler clip to reset their mood to neutral. Immediately after each clip, participants walked continuously back and forth across the gait mat, stepping off the end to turn around each time. Walks were recorded for 30 seconds, resulting in an average of 7 passes across the full length of the gait mat. Subsequently, participants rated their current mood (positive–negative), arousal (calm–excited), the intensity of the target emotion and of 4 other basic emotions (from the set: anger, happiness, sadness, disgust, surprise), and the extent to which they felt emotionally neutral, each on a 10-point scale. After watching all the film clips, participants executed posed walks, simulating happy, angry and sad emotional states according to the instruction: “Imagine you were angry (happy/sad). Walk across the mat how you think you would walk if you were angry (happy/sad)”. PKMAS software (ProtoKinetics LLC, Havertown, USA) was used to process each walk and to calculate the average velocity (distance travelled/ambulation time, in centimetres per second (cm/s)) across the walk period (120 seconds for baseline walks, 30 seconds for all other walks).

RESULTS Emotion induction was successful: emotion-rating discreteness scores (the target-emotion rating minus the average rating of all non-target emotions) for each video were significantly greater than zero (ps < .001). A repeated-measures ANOVA with within-subjects factors of condition (posed, induced) and emotion (happy, angry, sad) revealed a significant main effect of emotion (Figure 1; F(2,60) = 60.09, p < .001, partial eta squared = .67). There was no main effect of condition (F(1,30) = 0.041, p = .841, partial eta squared = .00). Collapsing across posed and induced walks, angry and happy walks were the fastest and sad walks the slowest (angry: mean [M] = 118.90, standard error of the mean [SEM] = 3.29; happy: M = 114.96, SEM = 2.34; sad: M = 101.34, SEM = 2.95). Bonferroni-corrected post-hoc t-tests revealed that, while there was no significant difference in velocity between happy and angry walks (t(30) = 2.49, p = .019), velocities differed significantly between sad and angry walks (t(30) = 9.56, p < .001) and between happy and sad walks (t(30) = 8.46, p < .001). However, the ANOVA also revealed a significant condition × emotion interaction (F(2,60) = 38.16, p < .001, partial eta squared = .56). Separate ANOVAs for each condition revealed that, whereas velocity differed as a function of emotion for posed walks (F(2,60) = 57.39, p < .001, partial eta squared = .66), this was not the case for the induced condition (F(2,60) = 2.34, p = .105, partial eta squared = .07). Post-hoc tests further showed that, for the posed condition alone, there was no significant difference in velocity between happy and angry walks (t(30) = 1.98, p = .058). However, velocities for posed sad walks were significantly lower than those for posed angry walks (sad: M = 92.35, SEM = 3.82; angry: M = 124.50, SEM = 4.33; t(30) = 9.77, p < .001) and posed happy walks (happy: M = 117.85, SEM = 2.40; t(30) = 9.04, p < .001). The equivalent tests for the induced condition showed no difference in velocity between sad and angry walks (sad: M = 110.32, SEM = 2.60; angry: M = 113.31, SEM = 2.74; t(30) = 2.40, p = .024), sad and happy walks (happy: M = 112.10, SEM = 2.63; t(30) = 1.10, p = .281) or happy and angry walks (t(30) = .96, p = .343).
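For illustration, the measures and analyses reported above can be sketched in code. The block below is a minimal, hypothetical reconstruction rather than the authors’ analysis pipeline: it assumes a long-format table with made-up column names (‘participant’, ‘condition’, ‘emotion’, ‘velocity’) and a per-participant table of post-film ratings with one column per emotion scale, and it uses standard Python tooling (pandas, SciPy, statsmodels) in place of whatever statistical software was actually used alongside PKMAS.

```python
# Illustrative sketch only (assumed data layout and column names; not the authors' code).
# Shows: (1) average walk velocity as distance travelled / ambulation time,
#        (2) the emotion-rating discreteness score tested against zero, and
#        (3) a 2 (condition) x 3 (emotion) repeated-measures ANOVA on velocity
#            with Bonferroni-corrected post-hoc paired t-tests.
from itertools import combinations

import pandas as pd
from scipy.stats import ttest_1samp, ttest_rel
from statsmodels.stats.anova import AnovaRM

EMOTIONS = ["happy", "angry", "sad"]
RATING_SCALES = ["anger", "happiness", "sadness", "disgust", "surprise"]


def average_velocity(distance_cm: float, ambulation_time_s: float) -> float:
    """Average velocity in cm/s: distance travelled divided by ambulation time."""
    return distance_cm / ambulation_time_s


def induction_check(ratings: pd.DataFrame, target: str):
    """Discreteness score (target rating minus mean of non-target ratings),
    tested against zero; one participant per row, one column per rating scale."""
    non_target = [scale for scale in RATING_SCALES if scale != target]
    scores = ratings[target] - ratings[non_target].mean(axis=1)
    return ttest_1samp(scores, 0.0, alternative="greater")


def velocity_anova(walks: pd.DataFrame) -> None:
    """walks: long-format frame with columns 'participant',
    'condition' ('posed'/'induced'), 'emotion', and 'velocity' (cm/s)."""
    # 2 x 3 repeated-measures ANOVA on mean walk velocity
    print(AnovaRM(walks, depvar="velocity", subject="participant",
                  within=["condition", "emotion"]).fit().anova_table)

    # Bonferroni-corrected post-hoc paired t-tests within each condition
    alpha = 0.05 / 3  # three pairwise emotion comparisons
    for condition, sub in walks.groupby("condition"):
        wide = sub.pivot(index="participant", columns="emotion", values="velocity")
        for e1, e2 in combinations(EMOTIONS, 2):
            t, p = ttest_rel(wide[e1], wide[e2])
            print(f"{condition}: {e1} vs {e2}: t({len(wide) - 1}) = {t:.2f}, "
                  f"p = {p:.3f}, significant = {p < alpha}")
```

With three pairwise emotion comparisons, the Bonferroni-corrected threshold is .05/3 ≈ .017, consistent with the reported pattern in which p = .019 and p = .024 are treated as non-significant.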
DISCUSSION The current study investigated whether the velocity of happy, angry and sad walks differed between walks that comprised posed simulations of emotion and those that followed emotion induction and thus comprised natural expressions of emotion. Velocity differed as a function of emotion for posed simulations: in line with previous literature, we observed faster velocities for angry walks and slower velocities for sad walks. Velocities for posed happy walks were also, as expected, faster than for sad walks and slower than for angry walks, although the difference between happy and angry was not statistically significant. This pattern was not observed for walks that followed emotion induction. Although our emotion induction methods were successful, as evidenced by higher post-film-clip intensity ratings for the target emotion compared with non-target emotions (i.e., if a participant watched a happy video they gave high ratings on the happy scale and low ratings on the sad, angry, disgust and surprise scales), we saw no velocity differences between happy, angry and sad walks in the induced condition. These data highlight important differences between posed and naturally occurring whole-body expressions of emotion, demonstrating in particular that, for induced emotions, gait velocity should not be relied upon to discriminate emotional state. Further work is required to identify the gait characteristics (e.g., cadence, step width, force, stride length) that best predict emotional state for induced, naturally occurring emotions. With respect to the further development of computerized emotion recognition methods, our data clearly show that algorithms aiming to detect spontaneous, naturally occurring emotions should not rely on posed-expression datasets for training.

Figure 1

Acknowledgements

The work in this paper was funded by a European Research Council Starting Grant held by J.C. We thank Connor Keating, Georgina Toms and Laura Guile for help with data collection.

References

Chouchourelou, A., Matsuka, T., Harber, K., & Shiffrar, M. (2006). The visual analysis of emotional actions. Social Neuroscience, 1, 63–74. https://doi.org/10.1080/17470910600630599

Clarke, T. J., Bradshaw, M. F., Field, D. T., Hampson, S. E., & Rose, D. (2005). The Perception of Emotion from Body Movement in Point-Light Displays of Interpersonal Dialogue. Perception, 34(10), 1171–1180. https://doi.org/10.1068/p5203

Cowie, R., Douglas-Cowie, E., Tsapatsoulis, N., Votsis, G., Kollias, S., Fellenz, W., & Taylor, J. G. (2001). Emotion recognition in human-computer interaction. IEEE Signal Processing Magazine, 18(1), 32–80.

de Meijer, M. (1989). The contribution of general features of body movement to the attribution of emotions. Journal of Nonverbal Behavior, 13(4), 247–268. https://doi.org/10.1007/BF00990296

Dittrich, W. H., Troscianko, T., Lea, S. E. G., & Morgan, D. (1996). Perception of Emotion from Dynamic Point-Light Displays Represented in Dance. Perception, 25(6), 727–738. https://doi.org/10.1068/p250727

Edey, R., Yon, D., Cook, J., Dumontheil, I., & Press, C. (2017). Our own action kinematics predict the perceived affective states of others. Journal of Experimental Psychology: Human Perception and Performance, 43(7), 1263–1268. https://doi.org/10.1037/xhp0000423

Fragopanagos, N., & Taylor, J. G. (2005). Emotion recognition in human–computer interaction. Neural Networks, 18(4), 389–405. https://doi.org/10.1016/j.neunet.2005.03.006

Gross, M. M., Crane, E. A., & Fredrickson, B. L. (2012). Effort-Shape and kinematic assessment of bodily expression of emotion during gait. Human Movement Science, 31(1), 202–221. https://doi.org/10.1016/j.humov.2011.05.001

Halovic, S., & Kroos, C. (2018). Not all is noticed: Kinematic cues of emotion-specific gait. Human Movement Science, 57, 478–488. https://doi.org/10.1016/j.humov.2017.11.008

Hudlicka, E. (2003). To feel or not to feel: The role of affect in human–computer interaction. International Journal of Human–Computer Studies, 59(1), 1–32. https://doi.org/10.1016/S1071-5819(03)00047-8

Janssen, D., Schöllhorn, W. I., Lubienetzki, J., Fölling, K., Kokenge, H., & Davids, K. (2008). Recognition of Emotions in Gait Patterns by Means of Artificial Neural Nets. Journal of Nonverbal Behavior, 32(2), 79–92. https://doi.org/10.1007/s10919-007-0045-3

Kenji, M. (1991). Recognition of facial expression from optical flow. IEICE Transactions on Information and Systems, 74(10), 3474–3483.

Michalak, J., Troje, N. F., Fischer, J., Vollmar, P., Heidenreich, T., & Schulte, D. (2009). Embodiment of Sadness and Depression—Gait Patterns Associated With Dysphoric Mood. Psychosomatic Medicine, 71(5), 580–587.

Montepare, J. M., Goldstein, S. B., & Clausen, A. (1987). The identification of emotions from gait information. Journal of Nonverbal Behavior, 11(1), 33–42. https://doi.org/10.1007/BF00999605

Montepare, J., Koff, E., Zaitchik, D., & Albert, M. (1999). The use of body movements and gestures as cues to emotions in younger and older adults. Journal of Nonverbal Behavior, 23(2), 133–152. https://doi.org/10.1023/A:1021435526134

Pantic, M., & Rothkrantz, L. J. (2003). Toward an affect-sensitive multimodal human–computer interaction. Proceedings of the IEEE, 91(9), 1370–1390. https://doi.org/10.1109/JPROC.2003.817122

Roether, C. L., Omlor, L., Christensen, A., & Giese, M. A. (2009). Critical features for the perception of emotion from gait. Journal of Vision, 9(6), 15. https://doi.org/10.1167/9.6.15

Schmidt, K. L., Ambadar, Z., Cohn, J. F., & Reed, L. I. (2006). Movement Differences between Deliberate and Spontaneous Facial Expressions: Zygomaticus Major Action in Smiling. Journal of Nonverbal Behavior, 30(1), 37–52. https://doi.org/10.1007/s10919-005-0003-x

Valstar, M., & Pantic, M. (2006). Fully Automatic Facial Action Unit Detection and Temporal Analysis. In 2006 Conference on Computer Vision and Pattern Recognition Workshop (CVPRW’06) (p. 149). https://doi.org/10.1109/CVPRW.2006.85

Venture, G., Kadone, H., Zhang, T., Grèzes, J., Berthoz, A., & Hicheur, H. (2014). Recognizing Emotions Conveyed by Human Gait. International Journal of Social Robotics, 6(4), 621–632. https://doi.org/10.1007/s12369-014-0243-1

Walk, R. D., & Homan, C. P. (1984). Emotion and dance in dynamic light displays. Bulletin of the Psychonomic Society, 22(5), 437–440. https://doi.org/10.3758/BF03333870

Wallbott, H. G., & Scherer, K. R. (1986). How universal and specific is emotional experience? Evidence from 27 countries on five continents. Information (International Social Science Council), 25(4), 763–795. https://doi.org/10.1177/053901886025004001

Keywords: emotion, biological motion, kinematics, velocity, gait, happy, angry, sad

Conference: 4th International Conference on Educational Neuroscience, Abu Dhabi, United Arab Emirates, 10 Mar - 11 Mar, 2019.

Presentation Type: Oral Presentation (invited speakers only)

Topic: Educational Neuroscience

Citation: Schuster BA, Sowden SL, Abdlkarim D, Wing AM and Cook JL (2019). Acting is not the same as feeling: Emotion expression in gait is different for posed and induced emotions. Conference Abstract: 4th International Conference on Educational Neuroscience. doi: 10.3389/conf.fnhum.2019.229.00010

Copyright: The abstracts in this collection have not been subject to any Frontiers peer review or checks, and are not endorsed by Frontiers. They are made available through the Frontiers publishing platform as a service to conference organizers and presenters.

The copyright in the individual abstracts is owned by the author of each abstract or his/her employer unless otherwise stated.

Each abstract, as well as the collection of abstracts, are published under a Creative Commons CC-BY 4.0 (attribution) licence (https://creativecommons.org/licenses/by/4.0/) and may thus be reproduced, translated, adapted and be the subject of derivative works provided the authors and Frontiers are attributed.

For Frontiers’ terms and conditions please see https://www.frontiersin.org/legal/terms-and-conditions.

Received: 10 Feb 2019; Published Online: 27 Sep 2019.

* Correspondence: Dr. Jennifer L Cook, University of Birmingham, School of Psychology, Birmingham, B15 2TT, United Kingdom, j.l.cook@bham.ac.uk