- 1Research and Development, Swiss Federal University for Vocational Education and Training, Renens, Switzerland
- 2Department of Individual Differences and Psychological Assessment, Ulm University, Ulm, Germany
Emotion information processing (EIP) has recently been introduced as a new component of emotional intelligence. We present a task designed to measure a type of emotion information processing related to fine-grained discrimination of emotional expressions. We modified an existing task presenting morphed faces created from a blend of two prototypical emotional expressions. We evaluated participants’ (N = 154) ability-EI (in particular emotion recognition, understanding, and management) as well as their fluid intelligence. Results show that all facets of EI independently predicted accuracy in the discrimination task and that emotion recognition was the strongest predictor. When controlling for emotion recognition level, we found that emotion understanding still predicted accuracy for less difficult stimuli. Results support the idea that individuals high in EI have higher emotion processing skills at the emotion perception stage of information processing and suggest that the task employed in the current study might measure more spontaneous processing of emotional expressions. Implications regarding the use of the current task as a new measure of the EIP component are discussed.
1. Introduction
Emotional intelligence (EI) corresponds to the skills related to the perception, understanding and management of emotion. Two major conceptualizations of EI are present in the scientific literature. The first one, trait-EI, defines EI as dispositions or personality characteristics that explain how individuals behave in emotional situations (Petrides and Furnham, 2001). The second one, ability-EI, views EI as an ability related to the processing of emotional information (Mayer et al., 2016). Whereas trait-EI is measured with self-report questionnaires, ability-EI is assessed with performance tests designed to evaluate each EI facet (emotion recognition, understanding and management). For instance, the Situational Test of Emotion Understanding (STEU; MacCann and Roberts, 2008) presents descriptions of short emotional scenarios, and respondents have to select the appropriate emotion or indicate which event led to a specific emotion.
Recently, it has been proposed that ability-EI is not a monolithic construct, but that it is likely based on two components: (1) the emotion knowledge component (EIK) and (2) the emotion processing component (EIP) (Fiori et al., 2022). EIK is related to higher-order reasoning or top-down processing about emotions and corresponds to what is habitually measured with performance-based ability-EI tests, i.e., knowledge about emotions. EIP is related to bottom-up processing of emotion and can be assessed with emotion processing tasks evaluating more spontaneous and fast processing of emotion information. Drawing a parallel with intelligence (Cattell, 1963), EIK is conceptualized as a crystallized component of EI, related to culture-bound knowledge about emotion, and EIP as a fluid component of EI, related to how people feel and experience emotion (Fiori and Vesely-Maillefer, 2018; Fiori et al., 2022). EIK and EIP, while being different constructs, are nonetheless related: individuals high on EIK should also be high on EIP. In other words, individuals with high EI should not only demonstrate more emotional knowledge and perform better at ability-EI tests, but also process emotional stimuli more efficiently and spontaneously.
The inclusion of the EIP component in the conceptualization of EI offers an explanation of how EI functions, that is, of the emotional and cognitive processes taking place in high- vs. low-EI individuals. Previous research has indeed suggested that individuals high in EI are more efficient in tasks with emotionally laden stimuli (Gutiérrez-Cobo et al., 2016, 2017). In addition, the hypersensitivity hypothesis (Fiori and Ortony, 2021) states that EI works as a magnifier of emotional experience. In this view, high-EI individuals are hypersensitive to emotion information, which can be observed at different stages of emotion processing. High-EIP individuals are then expected to better perceive and encode emotion, to experience more intense emotional reactions, and to show greater attention to emotional stimuli. This was demonstrated in a recent study (Nicolet-dit-Félix et al., 2023) in which individuals high on the emotion understanding facet of EI showed an attentional bias toward emotional faces in a dot-probe task, in which they had to identify a letter appearing at the location of an emotional vs. a neutral face. The difference in response times between the conditions was apparent only for individuals scoring more than 1 standard deviation above the mean, supporting the ideas that EIP increases with EIK and that hypersensitivity toward emotional information appears at high levels of EI.
Importantly, as noted above, the fluid component of EI, EIP, is not captured by current ability-EI tests, which tap into general knowledge about emotions. These tests, by their very instructions and design, measure the respondent’s maximum-ability performance, which may not correspond to their actual emotional behavior (Fiori, 2009). For example, it is possible to know how to manage one’s own emotions in different situations, and hence obtain a high score on an emotion management ability test, without being capable of doing so in a real situation. In addition, whereas current ability-EI tests measure broad facets, namely perceiving, understanding and managing emotions, EIP is concerned with the underlying processes accounting for such facets, such as attentional processes. Finally, ability-EI tests rely on conscious processing of emotion information, whereas EIP is meant to capture more spontaneous and automatic reactions to emotion information (Fiori, 2009). There is therefore a need to develop measures of EIP in order to consider this component when investigating the role of EI in life outcomes. For instance, it has been shown that high levels of EI, particularly the emotion perception facet, can lead to higher levels of stress during stressful situations (Matthews et al., 2006; Bechtoldt and Schneider, 2016). This kind of finding is difficult to explain based solely on the EIK component. However, if we consider individual differences in how people process emotion information and include EIP in the equation, then these findings could be interpreted as reflecting hypersensitivity to emotions (i.e., individuals high in EI pay more attention to, or better discriminate, emotions in their surroundings, which can lead to higher stress).
Previous research examining EIP has focused on the attentional processes related to emotion processing and has employed experimental tasks tapping into such processes. For example, Fiori et al. (2022) used an emotional Stroop task and a go/no-go task to operationalize EIP. They showed that scores in these tasks predicted additional variability in emotionally intelligent behavior (i.e., beyond that predicted by ability-EI tests). EIP is nonetheless not only related to attentional processes, but also concerns other types of processes related to the three broad facets of EI. In this article, we aim to offer a way to measure EIP mainly related to the facet of emotion perception and to investigate how hypersensitivity at the level of fine-grained discrimination of emotions is related to EIK.
Emotion perception is considered the basis of EI (Mayer et al., 2008). For example, the cascading model of EI considers emotion perception the building block of EI (Joseph and Newman, 2010). Being able to correctly identify emotions based on the cues expressed through the face, voice or body is indeed an important prerequisite for understanding, and then managing, emotions in oneself and others. Emotion recognition ability (ERA) has notably been associated with better interpersonal skills (Hall et al., 2009), empathy, and good functioning in work and private relationships (Schlegel et al., 2019).
Most tests designed to assess ERA rely on affect labeling, i.e., choosing the appropriate emotional label for an emotional expression. In general, unimpaired individuals are very good at this kind of task and perform at ceiling when there is no time limit (Wilhelm et al., 2014). In order to investigate individual differences in ERA, and thus be able to rank individuals, ceiling effects can be avoided in different ways: introducing a time limit or making the task more difficult. For instance, in tasks such as the Brief Affect Recognition Test (BART, Ekman and Friesen, 1974) or the Japanese and Caucasian Brief Affective Recognition Test (JACBART, Matsumoto et al., 2000), the presentation time of the stimuli (i.e., prototypical expressions of basic emotions) is limited to 2 s. In the Diagnostic Analysis of Nonverbal Accuracy (DANVA, Nowicki and Carton, 1993), not only is the presentation time limited, but the stimuli also vary in intensity and thus in difficulty. Some tests use multimodal and dynamic stimuli (MERT, Bänziger et al., 2009; GERT, Schlegel et al., 2014). They also propose more emotional categories to select from (10 in the MERT and 14 in the GERT), which increases difficulty and helps avoid ceiling effects. Finally, it is possible to increase the difficulty of the task by using stimuli that are composites of emotion expressions, such as in the Facial Expression Megamix (Young et al., 1997). In the latter case, the participants have to identify one or both emotional expressions, and the presentation time is generally unlimited because the focus is on accuracy.
In the current study, we present a task aiming at measuring EIP mainly related to the emotion perception facet of EI. Our aims were to shed light on spontaneous processes related to fine-grained recognition of emotion and to allow us to test hypersensitivity related to emotion information. For this purpose, we needed a task that presents complex emotional stimuli (i.e., blended emotional faces) and does not have a ceiling effect. As presented before, in this type of task, emotional stimuli are usually presented for an unlimited time until the participants select a response. In order to make emotion information processing more spontaneous and less thoughtful, we decided to present the emotional stimuli for a limited duration.
We turned to the Test Battery for Measuring the Perception and Recognition of Facial Expressions of Emotion provided in Wilhelm et al. (2014) and selected two tasks that assess emotion categorization (i.e., tasks 4 and 5). These tasks are based on morphed images from two different emotional expressions adjacent on the emotion hexagon and with maximal confusion rates (Calder et al., 1996), such as disgust-anger. Importantly, these morphed images are blends of two emotional expressions displayed on the same face, not composite faces that display one emotion in the upper half of the face and another emotion in the lower half of the face. Contrary to the latter, the former have the advantage of being ecologically valid, reflecting possible emotional expressions that one can encounter in real life, since individuals often feel several emotions at the same time. For instance, surprise and happiness can occur simultaneously when opening a nice gift, or surprise and fear when witnessing a sudden accident on the road. Hence, this type of morphed images was particularly interesting to evaluate hypersensitivity to emotional stimuli.
In tasks 4 and 5 from Wilhelm and colleagues’ battery, the morphed images were presented alongside the prototypical expressions of the corresponding emotions, and the participants had to estimate the mixture ratio of the morphed image on a visual analog scale (task 4) or indicate the prototypical expression to which the morphed image was more similar (task 5). Because we wanted to assess more spontaneous processes related to fine-grained emotion discrimination, in our task the morphed images were presented by themselves on the screen and for only 1,000 ms. The participants were instructed to determine the correct combination among six possibilities corresponding to the different morphed image categories (i.e., fear-sadness, sadness-disgust, disgust-anger, anger-happiness, happiness-surprise, and surprise-fear). All three facets of participants’ ability-EI (i.e., understanding, management and recognition) were evaluated.
If EI is related to hypersensitivity to emotion information, this should be reflected in higher accuracy in this task for high- compared to low-ability-EI individuals. According to the hypersensitivity hypothesis, individuals high in EI should in principle also be more responsive to emotional signals, which should lead to a stronger capacity to rapidly discriminate complex emotional expressions. Because the task employed involves perception and recognition of emotions, we expected the emotion perception facet of EI in particular to be related to it. At the same time, considering that the type of fine-grained discrimination required for this task would provide a fundamental input for in-depth emotion understanding and more effective emotion management, we did not exclude positive associations with the other ability-EI facets.
We included a proxy measure for fluid intelligence, to check the extent to which performance in the task was accounted for by individual differences in general reasoning, and a measure of participants’ mood at the time in which they completed the task, to control for potential mood effects on fine-grained emotion discrimination.
2. Method
2.1. Procedure
The study was conducted in two sessions. The participants took part in a first session in which they completed a battery of questionnaires described below. One week later, they were asked to take part in the second session, which consisted of an evaluation of their mood followed by the facial emotion blends discrimination task, along with other tasks not reported here.
2.2. Participants
Participants (individuals with an approval rate of 95% or above) were recruited from the general population on the online platform Prolific. Two hundred and thirty-nine participants took part in the first session, and 203 participants completed the second session of the study. Because the study was run online and lasted an hour and a half (both sessions), we followed a strict exclusion procedure. Participants who did not give correct answers to the attention checks were removed. We also excluded participants who scored more than 3 SD below the mean on the Raven and the GERT (less than 4 correct answers in both cases). One hundred and fifty-seven participants (52 male, 103 female and 2 who indicated “other”) were retained. The participants were aged between 18 and 63 (M = 28.9, SD = 9.8). All participants were informed about the course of the study and gave their consent to participate in accordance with procedures and protocols approved by the ethical committee of the University of Geneva. They were remunerated for their participation.
2.3. Questionnaires and tests
2.3.1. The shortened Raven’s standard progressive matrices
Participants had to complete 36 items selected from the original Raven SPM (Sets B, C, D; Raven et al., 1998). In this task, each item presents a matrix of black and white patterns. Respondents are required to select the correct missing pattern among 6 or 8 possible choices. Responses are scored as correct (1) or incorrect (0). Participants had a 5-min time limit to answer as many items as possible. Cronbach’s alpha was 0.92 in our sample.
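For transparency about the reliability estimates reported throughout this section, Cronbach’s alpha can be computed directly from the item-response matrix. The following is an illustrative Python sketch (the original analyses were run in R; the data shown here are hypothetical, not taken from our sample):

```python
def cronbach_alpha(items):
    """Cronbach's alpha from an item-by-participant score matrix.

    items: list of equal-length lists; items[i][p] is participant p's
    score on item i (e.g., 0/1 for incorrect/correct).
    """
    k = len(items)                       # number of items
    n = len(items[0])                    # number of participants

    def var(xs):                         # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    sum_item_vars = sum(var(item) for item in items)
    totals = [sum(items[i][p] for i in range(k)) for p in range(n)]
    return k / (k - 1) * (1 - sum_item_vars / var(totals))

# Hypothetical 0/1 responses: 3 items, 4 participants
scores = [[1, 1, 0, 1],
          [1, 1, 0, 0],
          [1, 0, 0, 1]]
alpha = cronbach_alpha(scores)           # about 0.63 for these toy data
```

McDonald’s omega, also reported below, requires fitting a factor model and is therefore not sketched here.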
2.3.2. The situational test of emotional understanding-brief
The situational test of emotional understanding-brief (STEU-B, Allen et al., 2014) measures respondents’ knowledge of emotions with 19 items that correspond to short scenarios describing situations in which a character experiences an emotion. Respondents are asked to select the appropriate emotion or to answer a question about an aspect of the scenario. For example, for the item “Xavier completes a difficult task on time and under budget. Xavier is most likely to feel?,” the correct response is “Pride.” Responses are scored as correct (1) or incorrect (0). The test–retest reliability of the full version of the test is 0.72 (Libbrecht and Lievens, 2012). Cronbach’s alpha was 0.47 and McDonald’s omega was 0.63 in our sample.
2.3.3. The situational test of emotional management-brief
The situational test of emotional management-brief (STEM-B, Allen et al., 2015) measures the respondents’ knowledge of the strategies to adopt to manage emotions in various situations. In 18 items, respondents are asked to select the most effective way to manage the protagonist’s emotions or the issues they must handle. Responses are scored according to weights derived from expert ratings. For instance, for the item “Juno is fairly sure his company is going down and his job is under threat. It is a large company and nothing official has been said. What action would be the most effective for Juno?,” the most appropriate response is “Find out what is happening and discuss his concerns with his family.” The test–retest reliability of the full version of the test is 0.85 (Libbrecht and Lievens, 2012). In our sample, Cronbach’s alpha was 0.62 and McDonald’s omega was 0.65.
2.3.4. The Geneva emotion recognition test short version
The Geneva emotion recognition test short version (GERT-S, Schlegel and Scherer, 2016) measures emotion recognition ability. Respondents see short video clips with sound (duration 1–3 s), in which 10 professional actors express 14 different emotions. After each clip, respondents are asked to choose which of the 14 emotions was expressed by the actor. Responses are scored as correct or incorrect. Cronbach’s alpha was 0.78 and McDonald’s omega was 0.81 in our sample.
2.3.5. Brief mood introspection scale
We assessed the participants’ emotional state before the task with the item “Overall, your mood right now is,” from the Brief mood introspection scale (BMIS, Mayer and Gaschke, 1988). Participants answered using a scale ranging from 0 = Very unpleasant to 10 = Very pleasant.
2.4. Facial expressions blends task
The task used in this study (hereafter called the FEB task, for Facial Expressions Blends) was based on the materials provided in the test battery for measuring the perception and recognition of facial expressions of emotion by Wilhelm et al. (2014). Tasks 4 and 5 from the battery are based on morphed images created from two emotional expressions adjacent on the emotion hexagon, resulting in six emotion continua (happiness-surprise, surprise-fear, fear-sadness, sadness-disgust, disgust-anger, and anger-happiness). Morphs were created separately for each face of five female and five male models. We selected 19 grayscale morphed faces for each emotion continuum, with mixture ratios in several steps ranging from 95:5 to 5:95 (more precisely: 95:5, 85:15, 75:25, 70:30, 66:34, 65:35, 62:38, 58:42, 55:45, 54:46, 46:54, 45:55, 42:58, 38:62, 35:65, 34:66, 30:70, 25:75, 15:85, 5:95).
The FEB task was programmed and run online using Gorilla. On each trial, a fixation cross appeared for 1,000 ms, followed by an emotional morphed face presented for 1,000 ms. After the presentation of the morphed face, the six possible emotion combinations were displayed on the screen, and the participants had to indicate which one corresponded to the image previously seen (Figure 1). For instance, if they saw a morphed image of surprise and happiness, they had to select the “SURPRISE – HAPPINESS” combination. The task was composed of 114 trials divided into 3 blocks of 38 trials. Due to the task difficulty, participants had unlimited time to answer, but they were encouraged to answer as fast and as accurately as possible. They were also informed that they would receive feedback at the end of the task. Participants had the opportunity to take a break between blocks to ensure that they stayed fully concentrated during the trials. The task started with 6 practice trials.
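For clarity, the trial structure can be summarized in a schematic Python outline (this illustrates the timeline only and is not the actual Gorilla implementation; the function and constant names are ours):

```python
# Schematic timeline of one FEB trial (durations from the task description)
FIXATION_MS = 1000      # fixation cross
STIMULUS_MS = 1000      # morphed face presentation
N_BLOCKS, TRIALS_PER_BLOCK = 3, 38
COMBINATIONS = ["fear-sadness", "sadness-disgust", "disgust-anger",
                "anger-happiness", "happiness-surprise", "surprise-fear"]

def run_trial(show, get_choice):
    """show(event, duration_ms) displays a screen for a fixed duration;
    get_choice(options) waits (untimed) for the participant's response."""
    show("fixation_cross", FIXATION_MS)
    show("morphed_face", STIMULUS_MS)
    return get_choice(COMBINATIONS)     # response time is unlimited

assert N_BLOCKS * TRIALS_PER_BLOCK == 114   # total trials in the task
```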
Figure 1. Example of a trial in the FEB task. Morphed face reproduced with permission from Wilhelm et al. (2014).
2.5. Data analysis
The relationship between accuracy in the FEB task and EI was analyzed with generalized mixed logistic models in R (R Core Team, 2021). This type of model makes it possible to analyze binary variables (such as our dependent variable, the correct or incorrect response to each trial) and to account for both within-person (as in a repeated-measures design) and between-person variability. It also allows us to consider all responses, and not only means by condition or by participant. When constructing our models, we followed Baayen et al.’s (2008) procedure and used a forward approach. In other words, we started with the simplest model, added fixed effects of control variables, and then added fixed effects of explanatory variables (e.g., the EI facets) one at a time. We compared the models with likelihood-ratio tests. All continuous independent variables were standardized around the grand mean.
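To make the model-comparison procedure concrete: for two nested models, the likelihood-ratio statistic is χ2 = 2(LLfull − LLreduced), compared against a chi-square distribution with degrees of freedom equal to the number of added parameters, and a fixed effect b on the log-odds scale corresponds to an odds ratio of exp(b). A minimal Python sketch (the actual models were fitted in R; the log-likelihood values below are hypothetical):

```python
import math

def lr_test_df1(ll_reduced, ll_full):
    """Likelihood-ratio test for nested models differing by one parameter.

    For df = 1, the chi-square survival function reduces to
    P(X > x) = erfc(sqrt(x / 2)).
    """
    chi2 = 2 * (ll_full - ll_reduced)
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

def odds_ratio(b):
    """Convert a logistic coefficient (log-odds) to an odds ratio."""
    return math.exp(b)

# Hypothetical log-likelihoods for a reduced vs. an extended model
chi2, p = lr_test_df1(ll_reduced=-5200.0, ll_full=-5187.95)
# chi2 is about 24.1; an improvement of this size is significant at p < 0.001
```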
3. Results
Hereafter we first present descriptive statistics and correlations between the variables in the study. We then describe how accuracy in the FEB task was influenced by the stimulus characteristics (i.e., emotion combinations and percentages of blends) before analyzing the influence of EI on accuracy.
One participant who scored below chance (less than 16.6% of correct responses) in the FEB task was eliminated prior to the analyses. As ERA is generally associated with gender, we also removed the two participants who indicated “other” for this question. The analyses were consequently run on 154 participants.
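As a point of reference, with six response options chance-level accuracy is 1/6 ≈ 16.7%, and a simple normal approximation to the binomial indicates roughly what accuracy a participant would need over 114 trials to be credibly above chance (an illustrative computation; the exclusion rule we applied was simply scoring below 16.6%):

```python
import math

N_TRIALS, N_OPTIONS = 114, 6
chance = 1 / N_OPTIONS                    # about 0.167, i.e., 16.7%

# One-tailed normal approximation to the binomial (alpha = .05):
# accuracy credibly above chance requires roughly chance + 1.645 * SE
se = math.sqrt(chance * (1 - chance) / N_TRIALS)
threshold = chance + 1.645 * se           # about 0.224, i.e., ~22% correct
```

All retained participants (whose accuracy ranged from 25 to 63%, see below) exceed this illustrative threshold.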
3.1. Descriptive statistics
Descriptive statistics and correlations for the variables investigated in the study are shown in Table 1. Accuracy in the task was negatively correlated with age (−0.19) and mood (−0.18) and positively associated with all ability-EI measures (STEU: 0.39, STEM: 0.27, GERT: 0.54) and with fluid intelligence (0.20). Correlations among ability EI measures ranged from 0.29 to 0.56.
Figure 2 shows the participants’ accuracy. The distribution reveals that the task was quite difficult: the percentage of correct responses varied between 25 and 63%. The distribution was normal (W = 0.99, p = 0.28) and reliability estimates were good (α = 0.74, ω = 0.81).
When looking at accuracy as a function of emotion combination (Figure 3), combinations of surprise-fear (M = 62.2, SD = 18.2) and happiness-surprise (M = 62.2, SD = 15.5) were better recognized than disgust-anger (M = 49.7, SD = 17.3), which was in turn better recognized than sadness-disgust (M = 38.6, SD = 14.5) and fear-sadness (M = 37.8, SD = 13.4). Anger-happiness (M = 22.8, SD = 12.6) was the least recognized combination [F(5, 765) = 180.93, p < 0.001]. Generally, accuracy for the different emotion blends was correlated with the EI facets, except for the anger-happiness combination, which was not associated with any facet.
Regarding the percentage of blends (Figure 4), accuracy was highest when the prevailing emotion corresponded to 54–55% and decreased with increasing contribution of the main emotion [F(9, 1377) = 27.24, p < 0.001]. Hence, expressions in which one emotion was strong and the other very subtle were the most difficult to evaluate.
Figure 4. Percentage of correct responses as a function of percentage of the dominant emotion in emotion blends.
3.2. Relationship with EI
As described above, our data were analyzed with generalized mixed logistic models. In the first model, we included fixed effects of age, gender, fluid intelligence and mood at testing time, and a random intercept by participant. This model returned significant effects of age (OR = 0.93, 95%CI [0.88–0.98], p = 0.005), mood (OR = 0.94, 95%CI [0.90–0.99], p = 0.02), and gender (OR = 1.15, 95%CI [1.03–1.27], p = 0.01) but no effect of fluid intelligence. In order to verify that performance was not influenced by motivation or fatigue effects, we added block to the model, which did not improve it (χ2 = 1.06, df = 2, p = 0.59).
When adding the STEU score to the first model, the model improved significantly (χ2 = 24.11, df = 1, p < 0.001) and showed that individuals high on emotion understanding were more likely to give correct responses in the task (OR = 1.13, 95%CI [1.08–1.19], p < 0.001). We then added the STEM score and the model improved again (χ2 = 5.62, df = 1, p = 0.018), showing that emotion management also predicted accuracy (OR = 1.06, 95%CI [1.01–1.11], p = 0.017). We finally added the GERT score and the model improved further (χ2 = 25.03, df = 1, p < 0.001). In this last model, only age and emotion recognition were significant predictors of accuracy in the task. Increasing age was associated with a decrease in the likelihood to choose a correct answer (OR = 0.94, 95%CI [0.90–0.98], p = 0.003) whereas increasing emotion recognition ability increased this likelihood (OR = 1.16, 95%CI [1.09–1.22], p < 0.001). Neither gender nor fluid intelligence played a role in the models when the different facets of EI were added (see Table 2 for outputs of models).
We then investigated whether ability-EI interacted with the percentage of emotion blends when predicting the participants’ accuracy. In other words, we tested whether the influence of the different EI facets depended on the percentage of emotion blends of the stimuli.
In order to do so, we added the main effect of percentage of emotion blends to model 4, which improved the model (χ2 = 40.72, df = 1, p < 0.001). In the next model, we included the interaction between STEU and percentage of emotion blends and the model improved further (χ2 = 10.70, df = 1, p = 0.001) (Table 3). In this model, in addition to the effects of age and emotion recognition, there was also an effect of percentage of emotion blends (OR = 0.91, 95%CI [0.88–0.94], p < 0.001) which showed that with increasing percentage of the main emotion, participants were less likely to choose the correct emotion combination. There was also an interaction between STEU and percentage of emotion blends (OR = 0.95, 95%CI [0.92–0.98], p = 0.001). Simple slopes analysis with the Johnson-Neyman procedure revealed that the effect of STEU was only significant for combinations with the dominant emotion below 65%. In other words, for less difficult items, individuals high on emotion understanding had higher accuracy than those low on this facet. For more difficult items however, there was no difference between individuals depending on their level of emotion understanding (Figure 5).
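The simple-slopes logic underlying the Johnson-Neyman procedure can be sketched as follows: in a model including b1·STEU + b3·(STEU × blend), the simple slope of STEU at a moderator value m is b1 + b3·m, with standard error √(var(b1) + m²·var(b3) + 2m·cov(b1, b3)), and the Johnson-Neyman region is where |slope/SE| exceeds the critical value. A Python sketch with hypothetical coefficient variances (the model’s variance–covariance matrix is not reported here):

```python
import math

def simple_slope(b1, b3, m, var_b1, var_b3, cov_b13, z_crit=1.96):
    """Simple slope of the focal predictor at moderator value m,
    its standard error, and whether it differs from zero at z_crit."""
    slope = b1 + b3 * m
    se = math.sqrt(var_b1 + m ** 2 * var_b3 + 2 * m * cov_b13)
    return slope, se, abs(slope / se) > z_crit

# Hypothetical values: the STEU effect weakens as the standardized
# dominant-emotion percentage increases, mirroring the reported pattern
slope, se, sig = simple_slope(b1=0.10, b3=-0.05, m=0.5,
                              var_b1=0.0004, var_b3=0.0002, cov_b13=0.0)
```

Scanning m over the observed range of the moderator and recording where significance flips recovers the Johnson-Neyman boundary.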
Table 3. Mixed logistics models testing the influence of EI and mixture ratio on accuracy in the FEB task.
Figure 5. Accuracy in the task as a function of emotion understanding and percentage of the dominant emotion in emotion blends.
Adding the interaction between STEM and ratio or between GERT and ratio did not improve the model (Table 3). Finally, we also tested whether emotion combination interacted with the different EI facets when predicting accuracy, which was not the case.
4. Discussion
In this paper, we aimed to create an EIP task that would measure fine-grained discrimination of emotional expressions. We also wanted to assess whether individuals who are high on EI show stronger reactivity to emotion information, and hence better performance than low-EI individuals, in line with the hypersensitivity hypothesis (Fiori and Ortony, 2021). For this purpose, we created a task based on blends of emotional expressions in which participants were presented with the pictures for 1,000 ms before having to choose the correct combination of emotions among six possibilities. These design choices aimed to make the task more spontaneous than usual emotion discrimination tasks while maintaining a high level of difficulty. Hypersensitivity in fine-grained discrimination of emotions was operationalized as high accuracy in the task.
Accuracy in the task was influenced by the characteristics of the stimuli. First, some emotion combinations were generally more recognizable than others, suggesting that certain emotion combinations are easier to categorize. Interestingly, emotion combinations displaying surprise seemed easier to recognize, as shown by the higher number of correct responses associated with them, probably because of the distinctiveness of the surprise expression, which was combined with emotions that have quite opposite characteristics (i.e., happiness and fear).
Second, the percentage of emotion blends influenced accuracy. Blends of emotional expressions were more difficult to categorize as the percentage of the dominant emotion increased. This is not surprising, as the participants had in total six possibilities to choose from, with two possibilities pertaining to each emotion. With balanced percentages of emotions, it might be easier to find cues pertaining to both emotions displayed and then select the correct combination. A higher percentage of the dominant emotion implies fewer cues for detecting the second emotion, which might lead to a choice based on chance between the emotion combinations containing the dominant emotion.
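This guessing argument can be made concrete: each of the six emotions appears in exactly two of the six response combinations, so a participant who perceives only the dominant emotion and guesses between the two candidate combinations would be correct half of the time, versus one in six for blind guessing. A small Python check:

```python
# The six response combinations used in the FEB task
combos = [("fear", "sadness"), ("sadness", "disgust"), ("disgust", "anger"),
          ("anger", "happiness"), ("happiness", "surprise"),
          ("surprise", "fear")]

def candidates(dominant):
    """Combinations a participant hesitates between after perceiving
    only the dominant emotion."""
    return [c for c in combos if dominant in c]

# Every emotion appears in exactly two combinations
assert all(len(candidates(e)) == 2 for pair in combos for e in pair)

p_blind_guess = 1 / len(combos)           # 1/6 if nothing is perceived
p_dominant_only = 1 / 2                   # guess between two candidates
```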
Interestingly, in task 4 of Wilhelm et al. (2014), in which participants had to evaluate the percentage of emotion blends, accuracy increased with increasing percentage of the dominant emotion, which is the opposite of what was found in this study. This can be explained by differences between the tasks. Perhaps we would have found a similar effect if we had asked our participants to identify both emotions independently (i.e., select the first emotion among six possibilities and then the second emotion among the same possibilities). Of note, our task was initially designed in this way, but it was too difficult and did not seem to fully capture individual differences in fine-grained emotion discrimination; for this reason, we chose to present the possible emotion combinations as responses in the task. The distribution of the scores did not show any floor or ceiling effects and allowed us to observe individual differences in EIP related to emotion discrimination.
Regarding the participants’ characteristics, age was associated with a decrease in accuracy, which is in line with previous findings showing a decline in ERA with age (e.g., Ruffman et al., 2008; Schlegel et al., 2019). Supporting previous findings that females have a small advantage in ERA (Schlegel et al., 2014; Thompson and Voyer, 2014; Schlegel, 2019), gender was associated with accuracy in the task, but only when other variables (i.e., age, EI level) were not controlled for.
Turning to the main interest of this study, we found that performance in the FEB task was associated with all facets of EI, which suggests that this task indeed measures a component of EI. The fact that emotion understanding, emotion management and emotion recognition predicted accuracy in the FEB task beyond control variables further suggests that individuals high on EI also have higher emotion information processing skills (i.e., EIP) related to fine-grained discrimination of emotional expressions. Of note, when emotion recognition was included in the model, the influence of emotion understanding and emotion management disappeared, and only emotion recognition predicted performance. This supports previous findings showing that, in a go/no-go task, participants with higher emotion perception ability were better at discriminating emotions (Gutiérrez-Cobo et al., 2017). It could also be explained by the fact that the three facets of EI are correlated, so that emotion understanding and emotion management did not predict performance beyond the influence of emotion recognition. Still, the fact that each facet was individually associated with accuracy in the discrimination task supports the idea that hypersensitivity related to perceptual processing of emotional expressions is associated with all facets of EI. In addition, when controlling for emotion recognition level, we found that emotion understanding still predicted accuracy for less difficult stimuli.
All in all, this study supports the idea that individuals high in EI have higher emotion processing skills at the emotion perception stage of information processing. It also shows that a form of sensitivity (or emotional hypersensitivity) in fine-grained discrimination of emotional expressions is associated with all facets of EI. This is consistent with the hypersensitivity hypothesis and the idea that perceptual processes, such as those captured by the EI emotion perception facet, underlie all EI abilities (Mayer et al., 2008).
Hence, we believe the task employed in the current study might serve as a new measure of the EIP component. The FEB task involves complex stimuli presented for a limited amount of time, which we think evaluates fast information processing and more effectively captures the spontaneous processes involved in emotion perception. As such, accuracy in the FEB task might predict different outcomes (e.g., less thoughtful behaviors) than those related to other emotion recognition tasks, such as the GERT (Schlegel et al., 2014). Yet, further research is needed to fully test the validity of the task, especially its incremental validity.
Despite the encouraging results obtained in the current study, we think that the FEB task presented here could be improved in several ways. First, the task was rather difficult, as demonstrated by the participants’ performance, which might have dampened their motivation throughout the experiment. However, as there was no block effect, we are confident that this was not the case. Second, the difficulty of the task could have diminished the role of the emotion understanding and emotion management facets in performance. There was indeed an effect of emotion understanding for less difficult items that was not observable for more difficult ones. It is possible that with a less difficult task (with a longer presentation time, for instance), or with more items in the task (which would increase statistical power), the relationship between emotion understanding and accuracy would be stronger beyond the effect of emotion recognition.
Another limitation concerns the fact that we did not control for crystallized intelligence in our study. Recent findings (Davis et al., 2021) have indeed shown that the emotion understanding and emotion management facets of EI did not predict performance in an emotion recognition task beyond this type of intelligence. Even though we acknowledge that crystallized intelligence might play a lesser role than fluid intelligence in the FEB task (notably because of the fast presentation of the stimuli), it would be important to include it in further research.
Finally, further research is also needed to determine whether EIP related to emotion perception, as measured in this study, explains additional variability in emotionally intelligent behavior. As proposed by Fiori et al. (2022), it would be necessary to investigate whether performance in such a task adds to classic ability-EI measures when explaining real-life outcomes. One could imagine, for instance, a study measuring personality traits, trait-EI, and fluid and crystallized intelligence as controls, in addition to classical ability-EI (EIK) and the FEB task (EIP) presented here, and assessing their respective influence on an outcome such as performance in a negotiation task (i.e., a situation in which fine-grained discrimination of emotions is crucial). Another interesting line of research would be to investigate whether better emotion information processing skills are related to more emotional activation during the task. It is indeed possible that individuals who perform better at emotion processing tasks also respond more strongly to emotions.
In sum, in this study we aimed to test a type of emotion information processing task related to fine-grained discrimination of emotional expressions that could be employed as a measure of the EIP component of emotional intelligence. This task differs from previous ability-EI tasks because it taps into more spontaneous processing of emotion information, it includes complex emotional expressions made of morphed blends of emotions, and it is quite difficult, although in a way that allows measuring individual differences in EIP. Hence, the FEB task presented here could be a valuable alternative to existing EI tests for researchers interested in capturing more spontaneous emotional behavior, such as when people react to stressful situations without having much time or cognitive resources to think about what to do, or when interacting with other individuals with little time for processing others’ emotional reactions. Although results were generally encouraging, further research is needed to ascertain the validity of such a task as a measure of individual differences in EI and as a new test that may predict additional variance on top of existing ability-EI tests.
Data availability statement
The raw data supporting the conclusions of this article are available on OSF, https://osf.io/7gdxz/.
Ethics statement
The studies involving human participants were reviewed and approved by the ethical committee of the University of Geneva. The patients/participants provided their written informed consent to participate in this study.
Author contributions
CG: conceptualization, methodology, formal analysis, writing: original draft, review and editing. MN-d-F: conceptualization, methodology, formal analysis, writing: review and editing. OW: resources, writing: review and editing. MF: funding acquisition, conceptualization, writing: review and editing. All authors contributed to the article and approved the submitted version.
Funding
The research presented in this manuscript was supported by a grant from the Swiss National Science Foundation (10001C_192443) awarded to MF.
Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher’s note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
References
Allen, V., Rahman, N., Weissman, A., MacCann, C., Lewis, C., and Roberts, R. D. (2015). The situational test of emotional management–brief (STEM-B): development and validation using item response theory and latent class analysis. Pers. Individ. Differ. 81, 195–200. doi: 10.1016/j.paid.2015.01.053
Allen, V. D., Weissman, A., Hellwig, S., MacCann, C., and Roberts, R. D. (2014). Development of the situational test of emotional understanding – Brief (STEU-B) using item response theory. Pers. Individ. Differ. 65, 3–7. doi: 10.1016/j.paid.2014.01.051
Baayen, R. H., Davidson, D. J., and Bates, D. M. (2008). Mixed-effects modeling with crossed random effects for subjects and items. J. Mem. Lang. 59, 390–412. doi: 10.1016/j.jml.2007.12.005
Bänziger, T., Grandjean, D., and Scherer, K. R. (2009). Emotion recognition from expressions in face, voice, and body: the multimodal emotion recognition test (MERT). Emotion 9, 691–704. doi: 10.1037/a0017088
Bechtoldt, M. N., and Schneider, V. K. (2016). Predicting stress from the ability to eavesdrop on feelings: emotional intelligence and testosterone jointly predict cortisol reactivity. Emotion 16, 815–825. doi: 10.1037/emo0000134
Calder, A. J., Young, A. W., Perrett, D. I., Etcoff, N. L., and Rowland, D. (1996). Categorical perception of morphed facial expressions. Vis. Cogn. 3, 81–118. doi: 10.1080/713756735
Cattell, R. B. (1963). Theory of fluid and crystallized intelligence: a critical experiment. J. Educ. Psychol. 54, 1–22. doi: 10.1037/h0046743
Davis, S. K., Morningstar, M., and Qualter, P. (2021). Ability EI predicts recognition of dynamic facial emotions, but not beyond the effects of crystallized IQ. Pers. Individ. Differ. 169:109968. doi: 10.1016/j.paid.2020.109968
Ekman, P., and Friesen, W. V. (1974). Detecting deception from the body or face. J. Pers. Soc. Psychol. 29, 288–298. doi: 10.1037/h0036006
Fiori, M. (2009). A new look at emotional intelligence: a dual process framework. Pers. Soc. Psychol. Rev. 13, 21–44. doi: 10.1177/1088868308326909
Fiori, M., and Ortony, A. (2021). Initial evidence for the hypersensitivity hypothesis: emotional intelligence as a magnifier of emotional experience. J. Intell. 9:24. doi: 10.3390/jintelligence9020024
Fiori, M., Udayar, S., and Vesely Maillefer, A. (2022). Emotion information processing as a new component of emotional intelligence: theoretical framework and empirical evidence. Eur. J. Pers. 36, 245–264. doi: 10.1177/08902070211007672
Fiori, M., and Vesely-Maillefer, A. K. (2018). “Emotional intelligence as an ability: theory, challenges, and new directions” in Emotional Intelligence in Education: Integrating Research with Practice. eds. K. Keefer, J. Parker, and D. Saklofske (Cham: Springer), 23–47.
Gutiérrez-Cobo, M. J., Cabello, R., and Fernández-Berrocal, P. (2016). The relationship between emotional intelligence and cool and hot cognitive processes: a systematic review. Front. Behav. Neurosci. 10:101. doi: 10.3389/fnbeh.2016.00101
Gutiérrez-Cobo, M. J., Cabello, R., and Fernández-Berrocal, P. (2017). The three models of emotional intelligence and performance in a hot and cool go/no-go task in undergraduate students. Front. Behav. Neurosci. 11:33. doi: 10.3389/fnbeh.2017.00033
Hall, J. A., Andrzejewski, S. A., and Yopchick, J. E. (2009). Psychosocial correlates of interpersonal sensitivity: a meta-analysis. J. Nonverbal Behav. 33, 149–180. doi: 10.1007/s10919-009-0070-5
Joseph, D. L., and Newman, D. A. (2010). Emotional intelligence: an integrative meta-analysis and cascading model. J. Appl. Psychol. 95, 54–78. doi: 10.1037/a0017286
Libbrecht, N., and Lievens, F. (2012). Validity evidence for the situational judgment test paradigm in emotional intelligence measurement. Int. J. Psychol. 47, 438–447. doi: 10.1080/00207594.2012.682063
MacCann, C., and Roberts, R. D. (2008). New paradigms for assessing emotional intelligence: theory and data. Emotion 8, 540–551. doi: 10.1037/a0012746
Matsumoto, D., LeRoux, J., Wilson-Cohn, C., Raroque, J., Kooken, K., Ekman, P., et al. (2000). A new test to measure emotion recognition ability: Matsumoto and Ekman’s Japanese and Caucasian brief affect recognition test (JACBART). J. Nonverbal Behav. 24, 179–209. doi: 10.1023/A:1006668120583
Matthews, G., Emo, A. K., Funke, G., Zeidner, M., Roberts, R. D., Costa, P. T. Jr., et al. (2006). Emotional intelligence, personality, and task-induced stress. J. Exp. Psychol. Appl. 12, 96–107. doi: 10.1037/1076-898X.12.2.96
Mayer, J. D., Caruso, D. R., and Salovey, P. (2016). The ability model of emotional intelligence: principles and updates. Emot. Rev. 8, 290–300. doi: 10.1177/1754073916639667
Mayer, J. D., and Gaschke, Y. N. (1988). The experience and meta-experience of mood. J. Pers. Soc. Psychol. 55, 102–111. doi: 10.1037//0022-3514.55.1.102
Mayer, J. D., Roberts, R. D., and Barsade, S. G. (2008). Human abilities: emotional intelligence. Annu. Rev. Psychol. 59, 507–536. doi: 10.1146/annurev.psych.59.103006.093646
Nicolet-dit-Félix, M., Gillioz, C., Mortillaro, M., Sander, D., and Fiori, M. (2023). Emotional intelligence and attentional bias to emotional faces: evidence of hypersensitivity towards emotion information. Pers. Individ. Differ. 201:111917. doi: 10.1016/j.paid.2022.111917
Nowicki, S., and Carton, J. (1993). The measurement of emotional intensity from facial expressions. J. Soc. Psychol. 133, 749–750. doi: 10.1080/00224545.1993.9713934
Petrides, K. V., and Furnham, A. (2001). Trait emotional intelligence: psychometric investigation with reference to established trait taxonomies. Eur. J. Pers. 15, 425–448. doi: 10.1002/per.416
R Core Team (2021). R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing. Available at: https://www.R-project.org/
Raven, J. C., Raven, J. E., and Court, J. H. (1998). Progressive Matrices. Oxford: Oxford Psychologists Press.
Ruffman, T., Henry, J. D., Livingstone, V., and Phillips, L. H. (2008). A meta-analytic review of emotion recognition and aging: implications for neuropsychological models of aging. Neurosci. Biobehav. Rev. 32, 863–881. doi: 10.1016/j.neubiorev.2008.01.001
Schlegel, K., Fontaine, J. R. J., and Scherer, K. R. (2019). The nomological network of emotion recognition ability: evidence from the Geneva emotion recognition test. Eur. J. Psychol. Assess. 35, 352–363. doi: 10.1027/1015-5759/a000396
Schlegel, K., Grandjean, D., and Scherer, K. R. (2014). Introducing the Geneva emotion recognition test: an example of Rasch-based test development. Psychol. Assess. 26, 666–672. doi: 10.1037/a0035246
Schlegel, K., and Scherer, K. R. (2016). Introducing a short version of the Geneva emotion recognition test (GERT-S): psychometric properties and construct validation. Behav. Res. Methods 48, 1383–1392. doi: 10.3758/s13428-015-0646-4
Thompson, A. E., and Voyer, D. (2014). Sex differences in the ability to recognise non-verbal displays of emotion: a meta-analysis. Cognit. Emot. 28, 1164–1195. doi: 10.1080/02699931.2013.875889
Wilhelm, O., Hildebrandt, A., Manske, K., Schacht, A., and Sommer, W. (2014). Test battery for measuring the perception and recognition of facial expressions of emotion. Front. Psychol. 5:404. doi: 10.3389/fpsyg.2014.00404
Keywords: ability-EI, emotion blends, emotional intelligence, hypersensitivity, emotion information processing, emotion discrimination, emotion recognition
Citation: Gillioz C, Nicolet-dit-Félix M, Wilhelm O and Fiori M (2023) Emotional intelligence and emotion information processing: Proof of concept of a test measuring accuracy in discriminating emotions. Front. Psychol. 14:1085971. doi: 10.3389/fpsyg.2023.1085971
Edited by:
Pablo Fernández-Berrocal, University of Malaga, Spain
Reviewed by:
Elkin O. Luis, University of Navarra, Spain
Joana Vieira Dos Santos, University of Algarve, Portugal
Gemma Filella, Universitat de Lleida, Spain
Copyright © 2023 Gillioz, Nicolet-dit-Félix, Wilhelm and Fiori. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Christelle Gillioz, ✉ christelle.gillioz@hefp.swiss; Marina Fiori, ✉ marina.fiori@hefp.swiss