PERSPECTIVE article

Front. Neurosci., 28 April 2020
Sec. Decision Neuroscience
This article is part of the Research Topic "Consumer Neuroscience - Foundation, Validation, and Relevance"

Opportunities and Challenges for Using Automatic Human Affect Analysis in Consumer Research

  • 1Cognitive Systems Lab, Department of Mathematics and Computer Science, University of Bremen, Bremen, Germany
  • 2Department of Psychology and Methods, Jacobs University Bremen, Bremen, Germany
  • 3Department of Experimental Psychology, University College London, London, United Kingdom
  • 4Maharaja Surajmal Institute of Technology, Guru Gobind Singh Indraprastha University, New Delhi, India
  • 5Centre for Situated Action and Communication, Department of Psychology, University of Portsmouth, Portsmouth, United Kingdom

The ability to automatically assess emotional responses via contact-free video recording taps into a rapidly growing market aimed at predicting consumer choices. If consumer attention and engagement are measurable in a reliable and accessible manner, relevant marketing decisions could be informed by objective data. Although significant advances have been made in automatic affect recognition, several practical and theoretical issues remain largely unresolved. These concern the lack of cross-system validation, a historical emphasis on posed over spontaneous expressions, as well as more fundamental issues regarding the weak association between subjective experience and facial expressions. To address these limitations, the present paper argues that extant commercial and free facial expression classifiers should be rigorously validated in cross-system research. Furthermore, academics and practitioners must better leverage fine-grained emotional response dynamics, with a stronger emphasis on understanding naturally occurring spontaneous expressions in naturalistic choice settings. We posit that applied consumer research might be better situated to examine facial behavior in socio-emotional contexts rather than in decontextualized laboratory studies, and highlight how AHAA can be successfully employed in this context. Facial activity should also be considered less as a single outcome variable and more as a starting point for further analyses. Implications of this approach and potential obstacles that need to be overcome are discussed within the context of consumer research.

Introduction

Emotions matter profoundly for understanding consumers’ behavior in the fast-changing economic markets of modern life (McStay, 2016). While there are various ways to assess emotions in the laboratory, most approaches that target bodily signals require sensors to be attached to the participant, which makes them either less accurate or less practicable in the field (Küster and Kappas, 2013). Hence, automated methods of measuring facial emotional responses via contact-free video recording tap into a rapidly growing market that presents opportunities but also risks (e.g., Gupta, 2018; Schwartz, 2019), as well as debate about false expectations (Vincent, 2019).

If consumer attention, social engagement, and emotional responses can be measured reliably and non-invasively, a broad spectrum of marketing decisions could be readily informed by objective data. As such, we need to examine how well new computational methods can predict consumer behavior, thereby moving away from questions that simply ask whether or not they can predict choice (Smidts et al., 2014). Further, it will be critical to measure neurocognitive choice processes in more naturalistic settings to facilitate the study of a broad spectrum of human behavior, including disorders such as addiction and obesity (Hsu and Yoon, 2015). For example, such settings could include the elicitation of complex yet distinct mixed emotional states such as the feeling of being moved, which is often described as pleasurable but can also involve crying and tears (Zickfeld et al., 2019). Viewing another’s tears has been shown to elicit empathy and a wish to help (e.g., Küster, 2018). In turn, this might result in increased donations in response to advertisements that build on the feeling of being moved. It might even be possible to simulate human-like empathy through affective computing (Picard, 1997), thereby creating an “empathic artificial intelligence” (McStay, 2018) that fundamentally transforms the future of consumer research and related fields. On the flip side, certain real-world applications of automatic human affect analysis (AHAA), such as the detection of unhappy emotional states of customers in retail stores (e.g., Anderson, 2017), appear to be vastly premature, if not downright unethical.

The current paper aims to critically discuss the growing role of AHAA in consumer research. It also highlights some of the most pressing barriers the field currently faces. We argue that automatic classification may provide substantial new leverage to the study of emotion and cognition in consumer neuroscience through both primary and subsequent machine analysis. While the tools available to date are not yet fully versatile, reliable, and validated across domains, they nevertheless represent an important advance in the area of AHAA with substantial potential for further development.

Abundant Choices: Classifiers Lack Cross-System Validation

In the past decades, early automated systems for facial affect recognition (Tian et al., 2001) were not readily available for use by the wider research community. In the wake of recent technical advances in video-based affect sensing, this has changed (Valstar et al., 2012). Today, researchers face a plethora of choices when selecting the best machine classifier. Besides covering a wide range of price tags, commercial systems differ in their technical features for facial analysis, as well as in the ways in which users engage with the system (e.g., through APIs or SDKs). Among the “grizzled veterans” in the field of AHAA are the two software packages FACET (iMotions) and FaceReader (Noldus). Originally built upon another software package called CERT (Littlewort et al., 2011), FACET was distributed by Emotient, whereas FaceReader was developed and first presented by VicarVision in 2005 (Den Uyl and Van Kuilenburg, 2005). Both systems have been used in a large number of scientific studies (e.g., Skiendziel et al., 2019; for a review see Lewinski et al., 2014a), as well as in consumer behavior (Garcia-Burgos and Zamora, 2013; Danner et al., 2014; Yu and Ko, 2017) and marketing research (Lewinski et al., 2014b; McDuff et al., 2015). Nonetheless, there are several other promising off-the-shelf classifiers available today that could be employed for the same purposes. These include Affdex (Affectiva), FaceVideo (CrowdEmotion), Cognitive Services: Face (Microsoft), EmotionalTracking (MorphCast), EmotionRecognition (Neurodata Lab), and FaceAnalysis (Visage Technologies). Moreover, free tools such as OpenFace (Baltrusaitis et al., 2016) or OpenCV (Bradski, 2000) can be used to extract facial feature sets from video recordings.
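
As an illustration of how such free tools can serve as a first processing step, the minimal sketch below uses OpenCV's bundled Haar cascade face detector to crop one face per video frame; the input file name is a hypothetical placeholder, and a dedicated toolkit such as OpenFace would typically be used afterward (or instead) to extract action unit features from such frames.

```python
# A minimal sketch (not part of the original study): per-frame face detection
# with OpenCV as a pre-processing step before feature extraction or affect
# classification. "consumer_response.mp4" is a hypothetical placeholder file.
import cv2

cap = cv2.VideoCapture("consumer_response.mp4")
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

face_crops = []  # one cropped face per frame in which a face was found
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) > 0:
        x, y, w, h = faces[0]  # keep the first detected face
        face_crops.append(gray[y:y + h, x:x + w])

cap.release()
print(f"Extracted {len(face_crops)} face crops")
```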

Given the large and growing number of choices for academics and practitioners in consumer research, there still exists little “cross-system” (i.e., between competing products) validation research that could independently inform users about the relative performance of AHAA systems (Krumhuber et al., 2019). Of the studies available to date, only a few have directly compared different commercial classifiers (Stöckli et al., 2018). Likewise, only a small number of studies have tested AHAA against human performance benchmarks on a larger number of databases (Yitzhak et al., 2017; Krumhuber et al., 2019), thereby calling into question the generalizability of findings derived from single stimulus sets.
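
In its simplest form, such a cross-system check scores each classifier's frame-level labels against a common human-annotated reference and against one another. The sketch below illustrates the basic bookkeeping with standard scikit-learn metrics; all labels are synthetic stand-ins rather than output from any real system.

```python
# A minimal sketch (synthetic labels): agreement of two hypothetical
# classifiers with human annotation and with each other.
import numpy as np
from sklearn.metrics import accuracy_score, cohen_kappa_score

rng = np.random.default_rng(7)
emotions = ["happy", "sad", "angry", "neutral"]
human = rng.choice(emotions, size=1000)                      # reference annotation
sys_a = np.where(rng.random(1000) < 0.8, human, rng.choice(emotions, 1000))
sys_b = np.where(rng.random(1000) < 0.7, human, rng.choice(emotions, 1000))

print("System A vs human accuracy:", accuracy_score(human, sys_a))
print("System B vs human accuracy:", accuracy_score(human, sys_b))
print("A vs B agreement (Cohen's kappa):", cohen_kappa_score(sys_a, sys_b))
```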

Ultimately, not only the accuracy of AHAA needs to be evaluated, but also its validity and reliability in a broader sense (cf. Meyer, 2010; Ramsøy, 2019). First, certain concepts may require re-interpretation: for example, classic test–retest reliability of a machine classifier on identical stimuli tends to be perfect because the underlying algorithms remain fixed. Likewise, the issue of inter-rater reliability, i.e., different experimenters applying the same AHAA, may be irrelevant if all parameters are shared between experimenters. More critical, however, are questions of convergent and external validity. So far, most validation efforts have focused on the convergence between AHAA and human ratings, although initial evidence suggests that AHAA may correlate highly with facial electromyography (EMG; Kappas et al., 2016; Beringer et al., 2019; Kulke et al., 2020). However, much more work is needed to compare AHAA against both facial EMG and expert annotations in order to determine its convergent and discriminant validity. The generalizability of AHAA study findings may be further limited in other ways. For example, classifier performance may be substantially lower for spontaneous affective behavior (Dupré et al., 2019; Krumhuber et al., 2019). This issue is often compounded by the lack of information about the stimulus materials originally used to develop or “train” a given AHAA system, which would be needed to fully evaluate generalizability toward similar novel stimuli.
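
As a concrete example of such a convergent-validity check, the sketch below correlates a classifier's per-frame "joy" output with zygomaticus major EMG amplitude after resampling both signals onto a common time base. The signals here are synthetic, and the sampling rates and variable names are assumptions standing in for whatever a given AHAA system and EMG amplifier export.

```python
# A minimal sketch (synthetic data, hypothetical signal names): convergent
# validity as the correlation between a classifier's per-frame "joy" output
# (30 Hz) and zygomaticus major EMG amplitude (1 kHz).
import numpy as np
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(0)
t_video = np.arange(0, 30, 1 / 30)        # 30 s of video at 30 fps
t_emg = np.arange(0, 30, 1 / 1000)        # EMG sampled at 1 kHz

smile = np.exp(-((t_emg - 15) ** 2) / 8)                      # one smile episode
emg_amp = smile + 0.1 * rng.standard_normal(t_emg.size)       # noisy EMG amplitude
joy_prob = np.interp(t_video, t_emg, smile) + 0.05 * rng.standard_normal(t_video.size)

# Bring the EMG signal onto the (slower) video time base, then correlate.
emg_on_video = np.interp(t_video, t_emg, emg_amp)
r_lin, _ = pearsonr(joy_prob, emg_on_video)
r_rank, _ = spearmanr(joy_prob, emg_on_video)
print(f"Pearson r = {r_lin:.2f}, Spearman rho = {r_rank:.2f}")
```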

Finally, AHAA needs to demonstrate an added value in predicting consumer behavior. A few studies have begun to examine this question by predicting purchase intent from automatically detected facial expressions. For example, the FACET classifier has recently been employed to examine purchase intent toward vegetable juices, showing that automatically detected facial expressions modulated consumer intent in concert with a number of other factors (Samant et al., 2017; Samant and Seo, 2020). Nevertheless, it remains an empirical question to what extent expressions, as tracked by AHAA, translate to purchase intent and tangible real-world behavior.

Overall, we still know too little about the various contenders to make an informed choice between classifiers for different purposes. Once a commercial software package has been purchased, users typically have few options to reconsider their choice, as the cost of even a single system is often in the (higher) four- or (lower) five-digit range. Furthermore, available open-source solutions such as OpenFace still need to be tested with regard to their potential for supplementary behavioral analysis.

Missing the Beat of Fine-Grained Expression Dynamics

In the real world, faces are constantly in motion. As demonstrated by a growing body of research in cognitive science, the dynamics of facial movement convey communicative intent and emotion (for reviews, see Krumhuber et al., 2013; Krumhuber and Skora, 2016; Sato et al., 2019). While the role of fine-grained dynamics has been explored best in the context of smiling (e.g., Krumhuber et al., 2007, 2009), dynamics are believed to impact emotion judgments and behavioral responses more generally (Sato and Yoshikawa, 2007; Recio et al., 2013). This makes expression dynamics crucially important for large areas of consumer research. Since online and TV advertisements frequently feature dynamic material involving human faces, their affective credibility depends on whether the content is perceived as genuine-looking and sincere. However, relatively little is known about the precise characteristics of expression dynamics in product evaluation beyond simple analyses of means (Peace et al., 2006). Teixeira and Stipp (2013) showed an inverted-U relationship between smile intensity and the purchase intent of people who viewed advertisements: both very high and very low levels of humorous entertainment predicted lower purchase intent. Similarly, joy velocity, i.e., the speed of change in facial expressions of happiness, has been suggested to affect consumers’ decisions to continue to watch or “zap” advertisements (Teixeira et al., 2012). Finally, humorous entertainment, as measured by smile intensity, may increase purchase intent when placed after, rather than before, brand presentation (Teixeira et al., 2014).
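
For researchers who want to compute such dynamic descriptors themselves, a quantity like joy velocity can be approximated directly from per-frame classifier output, for instance as the smoothed frame-to-frame rate of change of a happiness probability. The sketch below uses random numbers as a stand-in for real classifier output; the frame rate and smoothing window are illustrative assumptions.

```python
# A minimal sketch (random numbers stand in for real classifier output):
# "joy velocity" as the smoothed frame-to-frame rate of change of a
# per-frame happiness probability.
import numpy as np

rng = np.random.default_rng(1)
fps = 30.0
joy_prob = rng.random(900)                     # 30 s of per-frame output in [0, 1]

# Light moving-average smoothing (~0.5 s window) before differentiating,
# so that single-frame jitter does not dominate the derivative.
kernel = np.ones(15) / 15
joy_smooth = np.convolve(joy_prob, kernel, mode="same")

joy_velocity = np.gradient(joy_smooth) * fps   # change in probability per second
print(f"fastest smile onset: {joy_velocity.max():.2f} units/s")
```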

One reason for this comparative neglect of the dynamics of facial movement lies in their complexity. In traditional laboratory research, only a limited number of factors can be manipulated simultaneously. Higher ecological validity of evoked facial activity, and more natural recording situations, typically make it more difficult to adequately control for possible confounds and to ensure sufficient statistical power. As shown in prior research (Ambadar et al., 2005), the impact of dynamic expressions is likely to be more than the sum of their constituent still images. While temporal information improves emotion recognition (Krumhuber et al., 2013), it is less clear how multi-peaked dynamic expression trajectories are weighted in the mind of the human perceiver. Also, it remains largely unknown how rich socio-emotional knowledge about the context of dynamic expressions shapes their perception (Maringer et al., 2011). Such applied questions are of immediate relevance for consumer research, given that AHAA can provide per-frame classifications of large amounts of video data of human observers. For example, based on an analysis of more than 120,000 frames, Lewinski et al. (2014b) found context-specific features of facial expressions of happiness to be major indicators of happiness. Unfortunately, however, no well-established standards yet exist for how best to pre-process and aggregate the raw per-frame probabilities produced by machine classification.
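
To make the aggregation problem concrete, the sketch below computes several plausible per-clip summaries of a per-frame happiness probability. None of these is an established standard, which is precisely the point; the random array merely stands in for real classifier output.

```python
# A minimal sketch (random numbers stand in for real classifier output) of
# common but non-standardized ways to aggregate per-frame "happiness"
# probabilities into a single per-clip summary.
import numpy as np

rng = np.random.default_rng(2)
fps = 30.0
happy = rng.random(900)                               # 30 s of per-frame output in [0, 1]

summary = {
    "peak": float(happy.max()),                       # single most intense frame
    "mean": float(happy.mean()),                      # overall response envelope
    "auc_seconds": float(happy.sum() / fps),          # area under the curve, in seconds
    "prop_above_0.5": float((happy > 0.5).mean()),    # share of clearly "happy" frames
}
print(summary)
```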

To date, many validation approaches have considered only the peak response intensities or overall mean response envelopes of a perceiver’s facial activity. From our perspective, this calls for more advanced and systematic ways of generating and testing hypotheses about short- to medium-term expression dynamics. Such challenges may require metrics that do not simply reduce complex facial movements to a single image, i.e., one that is representative of the prototypical peak expression. Instead, temporal segments of facial activity need to be weighted relative to other simultaneously present channels, without discarding nuanced expressions (Pantic and Patras, 2006; Valstar and Pantic, 2006; Dente et al., 2017).
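
One simple way to move beyond a single peak frame is to segment the response trajectory into episodes and describe each by its apex time, height, and width. The sketch below does this with SciPy's peak detection on a synthetic two-episode trajectory; the thresholds are illustrative assumptions and would need to be tuned for real data.

```python
# A minimal sketch (synthetic trajectory, illustrative thresholds): segmenting
# a multi-peaked response into episodes described by apex time, height, and
# width, instead of reducing the whole clip to one peak frame.
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(3)
fps = 30.0
t = np.arange(0, 30, 1 / fps)
# Two smile episodes of different intensity and duration, plus frame noise.
signal = 0.6 * np.exp(-((t - 8) ** 2) / 2) + 0.9 * np.exp(-((t - 21) ** 2) / 4)
signal += 0.05 * rng.standard_normal(t.size)

peaks, props = find_peaks(signal, height=0.3, distance=int(2 * fps), width=1)
for i, p in enumerate(peaks):
    print(f"episode {i}: apex at {t[p]:.1f} s, "
          f"height {props['peak_heights'][i]:.2f}, "
          f"width {props['widths'][i] / fps:.1f} s")
```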

While AHAA provides new avenues for more fine-grained and subtle expression analysis, certain use cases might fail to translate to future research. For example, it is unlikely that micro-expressions (Ekman, 2009; Matsumoto and Hwang, 2011) offer a promising theoretical approach toward a better understanding of expression dynamics in consumer research. Micro-expressions refer to brief displays (20–500 ms) argued to “leak” an individual’s true emotional state before the expression can be actively controlled (Ekman and Friesen, 1969). While micro-expression analysis still enjoys substantial attention (see Shen et al., 2019), the concept is questionable and lacks empirical support as a validated theory, partly because micro-expressions are extremely rare (Porter et al., 2012) and of little practical relevance for understanding the multiple functions of emotions. Instead, they may simply represent briefer and weaker versions of normal emotional expressions (Durán et al., 2017). In consequence, it seems worthwhile to focus future research efforts on other aspects, such as those that concern dynamic and spontaneous emotional behavior beyond the level of the individual frame.

From Posed Stereotypes to Spontaneous Expressions “In the Wild”

Video-based affect classification can only be a useful tool for consumer research if patterns of naturally occurring responses can be reliably detected. Historically, AHAA has primarily been designed to achieve high accuracy in recognizing intense and stereotypical expressions produced by carefully instructed actors (Pantic and Bartlett, 2007). However, the narrow focus on individual posed emotions throughout psychology has been increasingly criticized because it has not been very helpful for understanding the evolutionary functions of emotional expressions themselves (Shariff and Tracy, 2011). While promising methods for analyzing spontaneous behavior have been proposed, fewer efforts have targeted the automatic analysis of spontaneous displays (Masip et al., 2014). This could be due to the rather limited number of available databases with naturalistic and spontaneous expressions that can be used to train and test machine classifiers. Often, these databases are also of lower recording quality, which hinders objective measurement and analysis (Krumhuber et al., 2017).

Recent findings regarding the classification performance of posed expressions have been encouraging. For example, Stöckli et al. (2018) demonstrated acceptable accuracy in classifying basic emotions using the software packages FACET and Affdex. The authors calculated recognition accuracy for maximum-intensity expressions extracted from two posed databases. However, when participants responded spontaneously to emotionally evocative pictures, accuracy for emotional valence (see Yik et al., 2011) was barely above chance level. Similar results have been reported by Yitzhak et al. (2017) using videos. Depending on the emotion in question, recognition performance for prototypical posed expressions typically ranged between 70 and 90%, with happiness being recognized most accurately. By contrast, the same classifier performed “very poorly” (Yitzhak et al., 2017, p. 1) on subtle and non-prototypical expressions. Overall, machine classification of spontaneous expressions is a difficult task, with performance rates varying as a function of classifier, emotion, and database (Dupré et al., 2019). Furthermore, the notion of what constitutes spontaneous facial behavior varies between databases.

To make significant progress in the future, more work is needed to create and validate large and diverse datasets of spontaneous expressions (Zeng et al., 2009). For example, efforts such as AffectNet (Mollahosseini et al., 2019) or Aff-Wild (Kollias et al., 2019) might help to close the gap toward predicting affective responses in the wild. Ideally, new databases should be publicly accessible to allow for independent verification of results or modification of the computer models. Dedicated large-scale efforts to obtain high-quality “in-the-wild” dynamic facial response data would also allow researchers to consistently address ethical challenges that require substantial consideration. For example, the partial deception required to ensure unbiased responses can be ameliorated through standardized debriefing procedures. Further, spontaneous databases can be (re-)used for multiple cross-system validation studies, as well as for more specific consumer response analyses. By doing so, AHAA of spontaneous expressions may contribute to increasingly better predictions of real-world consumer responses while minimizing the ethical burden of data collection in the field. Finally, such an approach would also provide a benchmark for comparisons between different algorithms. For example, although the large amount of online video data used for the training of Affdex has been one of its major selling points (Zijderveld, 2017), this and similar systems still function like a “black box” that cannot be directly validated by other parties.

Theoretical Issues: A Lack of Coherence

While some of the most pressing issues of AHAA concern practical limitations, theoretical issues equally need to be addressed. Importantly, the notion of a direct, hard-wired, or “universal” link between facial expressions and subjective experience has been challenged in recent years. As argued by multiple researchers (Reisenzein et al., 2013; Hollenstein and Lanteigne, 2014; Durán et al., 2017), coherence between emotions and facial expressions may at best be moderate in strength, and sometimes even non-existent. Further, similar configurations of facial actions [i.e., Action Units (AUs)] may express more than one emotion or communicative intent (Barrett et al., 2019). This contrasts with existing views such as those proposed by Basic Emotion Theory (Ekman, 1992, 1999). In consequence, any facial activity, whether measured manually or automatically, cannot be assumed to directly reflect a person’s emotional experience. Facial expressions are not a simple readout of underlying emotional states (Kappas, 2003; Crivelli and Fridlund, 2018). As a result, AHAA is essentially about the recognition of patterns and regularities in the data (Mehta et al., 2018).

Nevertheless, there are reasons to be optimistic about interpreting facial expressions. First, spontaneous consumer responses might be more predictive of affective behavior than the abstract and decontextualized situations typically examined in the laboratory (Küster and Kappas, 2014). Such applied contexts could be more informative about the emotional experience of respondents, thereby increasing the magnitude and coherence of the response. Second, recent improvements in efficiency rendered by AHAA allow the processing of larger amounts of data than was previously possible. This should increase the overall robustness of study findings across domains, including larger-scale studies (Garcia et al., 2016). Third, results obtained via frame-based classification could be used as a starting point for further machine learning analyses despite low overall levels of emotion-expression coherence. For example, for the prediction of consumption choices between several products, it might not matter whether a given smile or frown reflects a full-blown emotion or something else (e.g., concentration, politeness), provided the consumer’s decision is predicted correctly.

Overall, we therefore propose to consider the wider context of emotional expressions rather than limiting investigations to a blind use of the emotion labels provided by commercial machine classifiers. Instead, it is advisable to think of these technologies as a means to “pre-process” large amounts of facial activity data at low cost. The pre-processed facial activity data can then themselves be used as input features for machine learning methods that learn and predict human emotional behavior in context.
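
The sketch below illustrates this two-step idea under simplifying assumptions: per-viewer AU/emotion summaries, treated here as the output of an AHAA "pre-processing" step, become input features for a downstream model predicting a consumer outcome. All data are synthetic, and the feature count and outcome variable are hypothetical placeholders.

```python
# A minimal sketch (all data synthetic) of the proposed two-step approach:
# aggregated AU/emotion scores per viewer serve as features for a downstream
# classifier predicting a consumer outcome such as stated purchase intent.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n_viewers, n_features = 200, 12              # e.g., 12 aggregated AU/emotion scores
X = rng.random((n_viewers, n_features))      # stand-in for per-viewer AHAA summaries
# Hypothetical binary outcome (e.g., purchase intent), weakly tied to two features.
y = (X[:, 0] + 0.5 * X[:, 3] + 0.2 * rng.standard_normal(n_viewers) > 0.8).astype(int)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"cross-validated AUC: {scores.mean():.2f} +/- {scores.std():.2f}")
```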

Conclusion

AHAA promises to revolutionize research in consumer neuroscience. However, even apart from general theoretical limitations, its validity and usefulness are likely to vary between different types of studies. Testing hypotheses about specific consumer responses may often depend on relatively small datasets of facial responses, rendering the decision of which software to use even more difficult. In many cases, freely available tools such as OpenFace may be a good entry point. However, there presently appears to be no single software tool on the market that clearly outperforms all other machine classifiers. Hence, additional research is still needed to examine the reliability and predictive value of AHAA. Although the future of automatic affect sensing in consumer research looks promising, it is important to remain aware of its potential limitations. Social scientists can play an active role in contributing to the further development of this technology.

Author Contributions

DK and EK developed the theoretical ideas. All authors contributed to the discussion and refinement of the presented perspective with regards to AHAA. DK and EK wrote the manuscript. MB and TS provided critical feedback. DK, EK, MB, and LS contributed to manuscript revision. All authors read and approved the final version.

Funding

We acknowledge support by the Open Access Initiative of the University of Bremen. This work was partially funded by Klaus-Tschira Stiftung.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

Ambadar, Z., Schooler, J. W., and Cohn, J. F. (2005). Deciphering the enigmatic face: The importance of facial dynamics in interpreting subtle facial expressions. Psychol. Sci. 16, 403–410. doi: 10.1111/j.0956-7976.2005.01548.x

Anderson, G. (2017). Walmart’s Facial Recognition Tech Would Overstep Boundaries. Forbes. Available online at: https://www.forbes.com/sites/retailwire/2017/07/27/walmarts-facial-recognition-tech-would-overstep-boundaries/ (Accessed March 24, 2020).

Baltrusaitis, T., Robinson, P., and Morency, L.-P. (2016). “OpenFace: An open source facial behavior analysis toolkit,” in Proceedings of the 2016 IEEE Winter Conference on Applications of Computer Vision (WACV), (Lake Placid, NY: IEEE), 1–10. doi: 10.1109/WACV.2016.7477553

Barrett, L. F., Adolphs, R., Marsella, S., Martinez, A. M., and Pollak, S. D. (2019). Emotional expressions reconsidered: Challenges to inferring emotion from human facial movements. Psychol. Sci. Public Interest 20, 1–68. doi: 10.1177/1529100619832930

Beringer, M., Spohn, F., Hildebrandt, A., Wacker, J., and Recio, G. (2019). Reliability and validity of machine vision for the assessment of facial expressions. Cogn. Syst. Res. 56, 119–132. doi: 10.1016/j.cogsys.2019.03.009

Bradski, G. (2000). The opencv library. Dr Dobbs J Softw. Tools 25, 120–125.

Crivelli, C., and Fridlund, A. J. (2018). Facial displays are tools for social influence. Trends Cogn. Sci. 22, 388–399. doi: 10.1016/j.tics.2018.02.006

Danner, L., Sidorkina, L., Joechl, M., and Duerrschmid, K. (2014). Make a face! Implicit and explicit measurement of facial expressions elicited by orange juices using face reading technology. Food Qual. Prefer. 32, 167–172. doi: 10.1016/j.foodqual.2013.01.004

Den Uyl, M. J., and Van Kuilenburg, H. (2005). “The FaceReader: Online facial expression recognition,” in Proceedings of measuring behavior (Citeseer), Utrecht, 589–590.

Dente, P., Küster, D., Skora, L., and Krumhuber, E. (2017). “Measures and metrics for automatic emotion classification via FACET,” in Proceedings of the Conference on the Study of Artificial Intelligence and Simulation of Behaviour (AISB), Bath, 160–163.

Dupré, D., Krumhuber, E., Küster, D., and McKeown, G. J. (2019). Emotion recognition in humans and machine using posed and spontaneous facial expression. PsyArXiv [Preprint] doi: 10.31234/osf.io/kzhds

Durán, J., Reisenzein, R., and Fernández-Dols, J.-M. (2017). “Coherence between emotions and facial expressions,” in The Science of Facial Expression, eds J.-M. Fernández-Dols and J. A. Russell (Oxford: Oxford University Press), 107–129.

Ekman, P. (1992). An argument for basic emotions. Cogn. Emot. 6, 169–200. doi: 10.1080/02699939208411068

Ekman, P. (1999). “Basic Emotions,” in Handbook of Cognition and Emotion, Ed. M. Power (Hoboken, NJ: John Wiley & Sons, Ltd), 45–60. doi: 10.1002/0470013494.ch3

Ekman, P. (2009). “Lie catching and microexpressions,” in The Philosophy of Deception, Ed. C. Martin (New York, NY: Oxford University Press), 118–142.

Ekman, P., and Friesen, W. V. (1969). Nonverbal leakage and clues to deception. Psychiatry 32, 88–106. doi: 10.1080/00332747.1969.11023575

Garcia, D., Kappas, A., Küster, D., and Schweitzer, F. (2016). The dynamics of emotions in online interaction. R. Soc. Open Sci. 3:160059. doi: 10.1098/rsos.160059

Garcia-Burgos, D., and Zamora, M. C. (2013). Facial affective reactions to bitter-tasting foods and body mass index in adults. Appetite 71, 178–186. doi: 10.1016/j.appet.2013.08.013

Gupta, S. (2018). Facial Emotion Detection using AI: Use-Cases. Medium. Available online at: https://itnext.io/facial-emotion-detection-using-ai-use-cases-3507e38da598 (Accessed November 29, 2019).

Hollenstein, T., and Lanteigne, D. (2014). Models and methods of emotional concordance. Biol. Psychol. 98, 1–5. doi: 10.1016/j.biopsycho.2013.12.012

Hsu, M., and Yoon, C. (2015). The neuroscience of consumer choice. Curr. Opin. Behav. Sci. 5, 116–121. doi: 10.1016/j.cobeha.2015.09.005

Kappas, A. (2003). “What facial activity can and cannot tell us about emotions,” in The Human Face: Measurement and Meaning, ed. M. Katsikitis (Boston, MA: Springer), 215–234. doi: 10.1007/978-1-4615-1063-5_11

Kappas, A., Küster, D., Dente, P., and Basedow, C. (2016). “Shape of things to come: Facial electromyography vs automatic facial coding via FACET,” in Proceedings of the Annual Meeting of the Society for Psychophysiological Research (SPR), (Minneapolis, MN: Jacobs University Bremen), S78–S78.

Kollias, D., Tzirakis, P., Nicolaou, M. A., Papaioannou, A., Zhao, G., Schuller, B., et al. (2019). Deep affect prediction in-the-wild: Aff-Wild database and challenge, deep architectures, and beyond. Int. J. Comput. Vis. 127, 907–929. doi: 10.1007/s11263-019-01158-4

Krumhuber, E., Küster, D., Namba, S., Shah, D., and Calvo, M. (2019). Emotion recognition from posed and spontaneous dynamic expressions: Human observers vs. machine analysis. Emotion. doi: 10.1037/emo0000712 [Epub ahead of print]

Krumhuber, E., Manstead, A. S., Cosker, D., Marshall, D., and Rosin, P. L. (2009). Effects of dynamic attributes of smiles in human and synthetic faces: a simulated job interview setting. J. Nonverbal Behav. 33, 1–15. doi: 10.1007/s10919-008-0056-8

Krumhuber, E., Manstead, A. S. R., and Kappas, A. (2007). Temporal aspects of facial displays in person and expression perception: the effects of smile dynamics, head-tilt, and gender. J. Nonverbal Behav. 31, 39–56. doi: 10.1007/s10919-006-0019-x

Krumhuber, E. G., Kappas, A., and Manstead, A. S. R. (2013). Effects of dynamic aspects of facial expressions: a review. Emot. Rev. 5, 41–46. doi: 10.1177/1754073912451349

Krumhuber, E. G., and Skora, L. (2016). “Perceptual Study on Facial Expressions,” in Handbook of Human Motion, eds B. Müller, S. I. Wolf, G.-P. Brueggemann, Z. Deng, A. McIntosh, F. Miller, et al. (Cham: Springer International Publishing), 1–15. doi: 10.1007/978-3-319-30808-1_18-1

Krumhuber, E. G., Skora, L., Küster, D., and Fou, L. (2017). A review of dynamic datasets for facial expression research. Emot. Rev. 9, 280–292. doi: 10.1177/1754073916670022

Kulke, L., Feyerabend, D., and Schacht, A. (2020). A Comparison of the affectiva imotions facial expression analysis software with EMG for identifying facial expressions of emotion. Front. Psychol. 11:329. doi: 10.3389/fpsyg.2020.00329

Küster, D. (2018). Social effects of tears and small pupils are mediated by felt sadness: an evolutionary view. Evol. Psychol. 16:147470491876110. doi: 10.1177/1474704918761104

Küster, D., and Kappas, A. (2013). “Measuring emotions in individuals and internet communities,” in Internet and Emotions, eds T. Benski and E. Fisher (Abingdon: Routledge), 62–76.

Küster, D., and Kappas, A. (2014). “What could a body tell a social robot that it does not know?,” in Proceedings of the International Conference on Physiological Computing Systems, Lisbon, 358–367. doi: 10.5220/0004892503580367

Lewinski, P., den Uyl, T. M., and Butler, C. (2014a). Automated facial coding: validation of basic emotions and FACS AUs in faceReader. J. Neurosci. Psychol. Econ. 7, 227–236. doi: 10.1037/npe0000028

Lewinski, P., Fransen, M. L., and Tan, E. S. (2014b). Predicting advertising effectiveness by facial expressions in response to amusing persuasive stimuli. J. Neurosci. Psychol. Econ. 7, 1–14. doi: 10.1037/npe0000012

Littlewort, G., Whitehill, J., Wu, T., Fasel, I., Frank, M., Movellan, J., et al. (2011). “The computer expression recognition toolbox (CERT),” in Proceedings of the Ninth IEEE International Conference on Automatic Face and Gesture Recognition (FG 2011), (Santa Barbara, CA: IEEE), 298–305. doi: 10.1109/FG.2011.5771414

Maringer, M., Krumhuber, E. G., Fischer, A. H., and Niedenthal, P. M. (2011). Beyond smile dynamics: mimicry and beliefs in judgments of smiles. Emotion 11, 181–187. doi: 10.1037/a0022596

Masip, D., North, M. S., Todorov, A., and Osherson, D. N. (2014). Automated prediction of preferences using facial expressions. PLoS One 9:e87434. doi: 10.1371/journal.pone.0087434

Matsumoto, D., and Hwang, H. S. (2011). Evidence for training the ability to read microexpressions of emotion. Motiv. Emot. 35, 181–191. doi: 10.1007/s11031-011-9212-9212

McDuff, D., El Kaliouby, R., Cohn, J. F., and Picard, R. W. (2015). Predicting ad liking and purchase intent: Large-scale analysis of facial responses to ads. IEEE Trans. Affect. Comput. 6, 223–235. doi: 10.1109/TAFFC.2014.2384198

McStay, A. (2016). Empathic media and advertising: Industry, policy, legal and citizen perspectives (the case for intimacy). Big Data Soc. 3:205395171666686. doi: 10.1177/2053951716666868

McStay, A. (2018). Emotional AI: The Rise of Empathic Media. Los Angeles, CA: SAGE Publications Ltd.

Mehta, D., Siddiqui, M., and Javaid, A. (2018). Facial emotion recognition: A survey and real-world user experiences in mixed reality. Sensors 18:416. doi: 10.3390/s18020416

Meyer, P. (2010). Understanding Measurement: Reliability. Oxford: Oxford University Press. doi: 10.1093/acprof:oso/9780195380361.001.0001

Mollahosseini, A., Hasani, B., and Mahoor, M. H. (2019). AffectNet: a database for facial expression, valence, and arousal computing in the wild. IEEE Trans. Affect. Comput. 10, 18–31. doi: 10.1109/TAFFC.2017.2740923

Pantic, M., and Bartlett, M. S. (2007). “Machine analysis of facial expressions,” in Face Recognition, eds K. Delac and M. Grgic (London: IntechOpen).

Pantic, M., and Patras, I. (2006). Dynamics of facial expression: recognition of facial actions and their temporal segments from face profile image sequences. IEEE Trans. Syst. Man Cybern. Part B Cybern. 36, 433–449. doi: 10.1109/tsmcb.2005.859075

Peace, V., Miles, L., and Johnston, L. (2006). It doesn’t matter what you wear: The impact of posed and genuine expressions of happiness on product evaluation. Soc. Cogn. 24, 137–168. doi: 10.1521/soco.2006.24.2.137

Picard, R. W. (1997). Affective Computing. Cambridge, MA: MIT Press.

Porter, S., ten Brinke, L., and Wallace, B. (2012). Secrets and lies: involuntary leakage in deceptive facial expressions as a function of emotional intensity. J. Nonverbal Behav. 36, 23–37. doi: 10.1007/s10919-011-0120-7

Ramsøy, T. (2019). A Foundation for Consumer Neuroscience and Neuromarketing. J. Advert. Res. Work 1–32. doi: 10.13140/RG.2.2.12244.45446

Recio, G., Schacht, A., and Sommer, W. (2013). Classification of dynamic facial expressions of emotion presented briefly. Cogn. Emot. 27, 1486–1494. doi: 10.1080/02699931.2013.794128

Reisenzein, R., Studtmann, M., and Horstmann, G. (2013). Coherence between emotion and facial expression: Evidence from laboratory experiments. Emot. Rev. 5, 16–23. doi: 10.1177/1754073912457228

Samant, S. S., Chapko, M. J., and Seo, H.-S. (2017). Predicting consumer liking and preference based on emotional responses and sensory perception: A study with basic taste solutions. Food Res. Int. 100, 325–334. doi: 10.1016/j.foodres.2017.07.021

Samant, S. S., and Seo, H.-S. (2020). Influences of sensory attribute intensity, emotional responses, and non-sensory factors on purchase intent toward mixed-vegetable juice products under informed tasting condition. Food Res. Int. 132:109095. doi: 10.1016/j.foodres.2020.109095

Sato, W., Krumhuber, E. G., Jellema, T., and Williams, J. H. G. (2019). Editorial: dynamic emotional communication. Front. Psychol. 10:2836. doi: 10.3389/fpsyg.2019.02836

Sato, W., and Yoshikawa, S. (2007). Spontaneous facial mimicry in response to dynamic facial expressions. Cognition 104, 1–18. doi: 10.1016/j.cognition.2006.05.001

Schwartz, O. (2019). Don’t Look Now: Why You Should be Worried About Machines Reading Your Emotions. The Guardian. Available online at: https://www.theguardian.com/technology/2019/mar/06/facial-recognition-software-emotional-science (Accessed November 29, 2019).

Shariff, A. F., and Tracy, J. L. (2011). What are emotion expressions for? Curr. Dir. Psychol. Sci. 20, 395–399. doi: 10.1177/0963721411424739

Shen, X., Chen, W., Zhao, G., and Hu, P. (2019). Editorial: recognizing microexpression: an interdisciplinary perspective. Front. Psychol. 10:1318. doi: 10.3389/fpsyg.2019.01318

Skiendziel, T., Rösch, A. G., and Schultheiss, O. C. (2019). Assessing the convergent validity between the automated emotion recognition software Noldus FaceReader 7 and facial action coding system scoring. PLoS One 14:e0223905. doi: 10.1371/journal.pone.0223905

Smidts, A., Hsu, M., Sanfey, A. G., Boksem, M. A. S., Ebstein, R. B., Huettel, S. A., et al. (2014). Advancing consumer neuroscience. Mark. Lett. 25, 257–267. doi: 10.1007/s11002-014-9306-1

Stöckli, S., Schulte-Mecklenbeck, M., Borer, S., and Samson, A. C. (2018). Facial expression analysis with AFFDEX and FACET: a validation study. Behav. Res. Methods 50, 1446–1460. doi: 10.3758/s13428-017-0996-1

Teixeira, T., Picard, R., and Kaliouby, R. (2014). Why, when, and how much to entertain consumers in advertisements? A web-based facial tracking field study. Mark. Sci. 33, 809–827. doi: 10.1287/mksc.2014.0854

Teixeira, T., Wedel, M., and Pieters, R. (2012). Emotion-induced engagement in internet video advertisements. J. Mark. Res. 49, 144–159. doi: 10.1509/jmr.10.0207

Teixeira, T. S., and Stipp, H. (2013). Optimizing the amount of entertainment in advertising: what’s so funny about tracking reactions to humor? J. Advert. Res. 53, 286–296. doi: 10.2501/JAR-53-3-286-296

Tian, Y.-I., Kanade, T., and Cohn, J. F. (2001). Recognizing action units for facial expression analysis. IEEE Trans. Pattern Anal. Mach. Intell. 23, 97–115. doi: 10.1109/34.908962

Valstar, M., and Pantic, M. (2006). “Fully automatic facial action unit detection and temporal analysis,” in Proceedings of the 2006 Conference on Computer Vision and Pattern Recognition Workshop (CVPRW’06), (Piscataway, NJ: IEEE), 149–149.

Valstar, M. F., Mehu, M., Jiang, B., Pantic, M., and Scherer, K. (2012). Meta-analysis of the first facial expression recognition challenge. IEEE Trans. Syst. Man Cybern. Part B Cybern. 42, 966–979. doi: 10.1109/tsmcb.2012.2200675

Vincent, J. (2019). AI “Emotion Recognition” Can’t Be Trusted. The Verge. Available online at: https://www.theverge.com/2019/7/25/8929793/emotion-recognition-analysis-ai-machine-learning-facial-expression-review (Accessed November 29, 2019).

Yik, M., Russell, J. A., and Steiger, J. H. (2011). A 12-point circumplex structure of core affect. Emotion 11, 705–731. doi: 10.1037/a0023980

Yitzhak, N., Giladi, N., Gurevich, T., Messinger, D. S., Prince, E. B., Martin, K., et al. (2017). Gently does it: Humans outperform a software classifier in recognizing subtle, nonstereotypical facial expressions. Emotion 17, 1187–1198. doi: 10.1037/emo0000287

Yu, C.-Y., and Ko, C.-H. (2017). Applying facereader to recognize consumer emotions in graphic styles. Proc. CIRP 60, 104–109. doi: 10.1016/j.procir.2017.01.014

Zeng, Z., Pantic, M., Roisman, G. I., and Huang, T. S. (2009). A survey of affect recognition methods: audio, visual, and spontaneous expressions. IEEE Trans. Pattern Anal. Mach. Intell. 31, 39–58. doi: 10.1109/TPAMI.2008.52

Zickfeld, J. H., Schubert, T. W., Seibt, B., and Fiske, A. P. (2019). Moving through the literature: what is the emotion often denoted being moved? Emot. Rev. 11, 123–139. doi: 10.1177/1754073918820126

Zijderveld, G. (2017). The World’s Largest Emotion Database: 5.3 Million Faces and Counting. Available online at: https://blog.affectiva.com/the-worlds-largest-emotion-database-5.3-million-faces-and-counting (Accessed December 4, 2019).

Keywords: automatic human affect analysis (AHAA), machine learning, facial expression, spontaneous expressions, dynamic responses, consumer research

Citation: Küster D, Krumhuber EG, Steinert L, Ahuja A, Baker M and Schultz T (2020) Opportunities and Challenges for Using Automatic Human Affect Analysis in Consumer Research. Front. Neurosci. 14:400. doi: 10.3389/fnins.2020.00400

Received: 13 December 2019; Accepted: 31 March 2020;
Published: 28 April 2020.

Edited by:

Thomas Zoëga Ramsøy, Neurons Inc., Denmark

Reviewed by:

Thomas Zoëga Ramsøy, Neurons Inc., Denmark
William Hedgcock, University of Minnesota Twin Cities, United States
Russell Sarwar Kabir, Hiroshima University, Japan

Copyright © 2020 Küster, Krumhuber, Steinert, Ahuja, Baker and Schultz. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Dennis Küster, kuester@uni-bremen.de; dkuester@uni-bremen.de

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.