
ORIGINAL RESEARCH article

Front. Psychol., 07 November 2022
Sec. Perception Science
This article is part of the Research Topic Psychological and Cognitive Evaluation and Intervention Based on Physiological and Behavioral Computing.

Virtual reality stimulation and organizational neuroscience for the assessment of empathy

  • 1Institute for Research and Innovation in Bioengineering, Polytechnic University of Valencia, Valencia, Spain
  • 2Fundació Institut Universitari per a la recerca a l'Atenció Primària de Salut Jordi Gol i Gurina (IDIAPJGol), Cornellà de Llobregat, Spain

This study aimed to evaluate the viability of a new procedure based on machine learning (ML), virtual reality (VR), and implicit measures to discriminate levels of empathy. Specifically, eye-tracking and decision-making patterns were used to classify individuals according to their level in each of the empathy dimensions, while they were immersed in virtual environments that represented social workplace situations. The virtual environments were designed using an evidence-centered design approach. Interaction and gaze patterns were recorded for 82 participants, who were classified as having high or low empathy on each of the following empathy dimensions: perspective-taking, emotional understanding, empathetic stress, and empathetic joy. The dimensions were assessed using the Cognitive and Affective Empathy Test. An ML-based model that combined behavioral outputs and eye-gaze patterns was developed to predict the empathy dimension level of the participants (high or low). The analysis indicated that the different dimensions could be differentiated by eye-gaze patterns and behaviors during immersive VR. The eye-tracking measures contributed more significantly to this differentiation than did the behavioral metrics. In summary, this study illustrates the potential of a novel VR organizational environment coupled with ML to discriminate the empathy dimensions. However, the results should be interpreted with caution, as the small sample does not allow general conclusions to be drawn. Further studies with a larger sample are required to support the results obtained in this study.

Introduction

Empathy is a multidimensional construct associated with understanding and connecting with the emotional states of other individuals (Christov-Moore et al., 2014). In general, two main dimensions of empathy have been considered: affective and cognitive (e.g., Cuff et al., 2016). Affective empathy supposes a form of emotional experience congruent with what another individual is feeling (e.g., Preckel et al., 2018), while cognitive empathy refers to understanding others’ emotions. The ability to adopt another individual’s perspective (i.e., perspective-taking) has been considered as the hallmark of cognitive empathy (e.g., Kanske et al., 2015).

The operationalization of empathy as a “dual system” helps explain subsequent empathic behaviors toward others’ emotional states (Goleman, 2006; Hoffman, 2008; Shamay-Tsoory et al., 2009; Heyes, 2018; Decety and Holvoet, 2021). For example, empathy grounds prosocial behaviors, such as helping, caring, sharing, and defending another individual or group (Silke et al., 2018). Empathy can also be at the core of many interpersonal processes, such as cooperation, sociability, and social competence (Eisenberg and Miller, 1987), which apply not only to clinical (e.g., Decety, 2020; Vinson and Underman, 2020) or psychosocial levels (e.g., Davis, 2018; Liu and Shange, 2018) but also to organizational levels (e.g., Gentry et al., 2007; Zivkovic, 2022).

In this regard, the current study focuses on empathy processes linked to organizational dynamics (i.e., organizational empathy; Clark et al., 2019). Within organizations, it has been specifically found that empathy is tightly linked to adaptive management in terms of recognizing the emotional states of other organizational members (Rubin et al., 2005; Burch et al., 2016) and facilitating assessments of their interests and motivations (Avolio and Bass, 1995; Somogyi et al., 2013).

Some studies suggest that empathic managers may reinforce employees’ motivation, optimism, and commitment by understanding their needs and emotions (Dubinsky et al., 1995; Barling et al., 1996). Moreover, this empathic management has been suggested to be associated with better outcomes, communication, and decision-making (Rahman and Castelli, 2013).

Therefore, organizational empathy may be understood as the balance between organizational skills, such as decision-making, and empathic abilities (cf. Mittal and Sindhu, 2012). From this point of view, being able to estimate high and low empathic profiles accurately can be of great advantage for organizational research, particularly if considering that individuals with similar managerial skills can broadly differ in their empathic abilities (Leiberg and Anders, 2006; Gerdes et al., 2010; Reniers et al., 2011).

The assessments of organizational empathy have mainly relied on self-reports and questionnaires, which present limitations and biases (e.g., Nederhof, 1985; Furnham, 1986; Grimm, 2010). For example, response biases in terms of social desirability or variability among different empathy instruments have been indicated because they do not address the same dimensions of empathy or present poor construct validity (see de Lima and de Lima Osório, 2021). Against this limitation, empathy assessments based on physiological and behavioral data are emerging to complement standard empathy psychometrics (e.g., eye-tracking; Cowan et al., 2014). However, methodological approaches capable of integrating this type of assessment within more realistic organizational settings are still necessary (cf. Clark et al., 2019). In this regard, the current study presents an innovative approach to assessing organizational empathy based on a virtual reality organizational environment (VROE) and machine learning (ML) techniques (Alcañiz Raya et al., 2020).

In the next sections, we first overview some issues related to the assessments of organizational empathy. Thereafter, we define our VROE to explore organizational empathy. Finally, we present an exploratory study using ML to estimate organizational empathy from modeling behavioral data in terms of decision-making and attentional data in terms of eye-tracking.

Organizational empathy assessment

Research on organizational empathy is a relatively novel field focusing on how empathy relates to workplace behaviors and management (e.g., Gerdes et al., 2010; Cropanzano et al., 2017). In this regard, a recent review on organizational empathy by Clark et al. (2019) highlights several key issues related to the measurement of empathy in organizational research, which are detailed below.

First, most studies on organizational empathy rely on a canonical operationalization of empathy that equates empathy with sympathy (cf. Hershcovis and Bhatnagar, 2017). For example, the Interpersonal Reactivity Index by Davis (1980) is a widely used instrument within organizational research that follows that rationale. Sympathy can be understood as a form of empathic response (e.g., Gonzalez-Liencres et al., 2013) that involves the desire to alleviate another individual’s suffering (e.g., Davis, 1983). However, sympathy does not necessarily require feeling an emotional state congruent with that of others, as affective empathy does. Moreover, sympathy only addresses suffering, which excludes emotional states with positive valence (cf. Bloom, 2016); for this reason, instruments capturing both positive and negative affectivity may be more adequate for measuring the affective empathy dimension (e.g., López-Pérez et al., 2008).

Second, studies evaluating cognitive empathy rely mainly on perspective-taking. However, cognitive empathy aspects, including emotional understanding or interpretation of facial expressions (e.g., Drimalla et al., 2019), which are linked to cognitive empathy, are less commonly investigated within organizational research (e.g., Besel and Yuille, 2010). This gap also leads to the following important point.

Third, most studies focus on empathy at the trait level (i.e., the tendency to be empathic across different situations). However, empathy can also operate at the state level as responsive to situational cues (e.g., Toomey and Rudolph, 2018). Clark et al. (2019) recommend deepening the study of behavioral empathy. In this regard, behavioral and implicit processes can play a significant role. Explicit behaviors occur through conscious executive control as the outcome of prior processing of relevant information, such as when making a decision within organizational contexts (cf. Jackson et al., 2005; Becker et al., 2011). Unlike explicit processes, implicit processes are relatively automatic and outside of conscious control and awareness. Implicit measures of empathy have included both brain and physiological measures, such as electroencephalogram (EEG; Balthazard et al., 2012; Alimardani et al., 2020; D’Errico et al., 2020), galvanic skin response (GSR; Nikula, 1991; Marci et al., 2007; Sequeira et al., 2009), and electromyogram (Brower et al., 2015).

Herein, we draw particular attention to eye-tracking and decision-making. First, paying attention to socially relevant cues, such as body postures and facial expressions (e.g., Jolliffe and Farrington, 2006; Besel and Yuille, 2010), provides essential information for decoding other people’s emotional states (Cowan et al., 2014; Hedger et al., 2018) and facilitates their understanding (Frischen et al., 2007). People with attentional impairments can also show deficits in empathy-related processes (Gu et al., 2013). Therefore, eye-tracking enables an in-depth analysis of an individual’s visual attention in social situations and complex simulations. Second, decision-making patterns offer valuable information on the level of empathy. According to the decision-making theory of Rowe and Boulgarides (1983), decision-making can be understood as a continuum linked to how an individual understands and perceives a social situation. In line with this reasoning, individuals oriented toward people and the team present a cooperative decision-making style. In contrast, individuals who focus primarily on achieving their own or organizational goals without considering others present a competitive decision-making style (e.g., Mukherjee and Upadhyay, 2018). The two decision-making styles differ according to the emotional understanding of the situation, which leads to different behavioral patterns: Individuals tending toward cooperative decision-making would be concerned with maintaining good relationships, offering support and encouragement to team members, promoting collaboration, and achieving consensus. Meanwhile, individuals tending toward competitive decision-making may show marked authoritarian behaviors and make unilateral managerial decisions (Weinberger, 2009).
According to Scott and Bruce (1995), people can show behaviors of both styles; however, one of these decision-making styles is usually predominant (e.g., Bruine de Bruin et al., 2020).

Incorporating and combining measures of eye-tracking and decision-making with appropriate psychometrics could provide greater accuracy and validity when evaluating empathy in general and particularly in contexts addressing organizational behaviors.

Finally, Clark et al. (2019) reported that ML could be a methodological approach critical for assessing empathy within organizational contexts because it handles a substantial amount of data. To the best of our knowledge, this aspect has scarcely been investigated within organizational empathy research and virtual reality (VR).

Against this background, the present study considers the above-mentioned recommendations to explore whether and how eye-tracking and decision-making can be modeled to estimate high and low empathic profiles. We approach this question based on a VR framework.

Virtual reality and behavioral stealth assessment

Virtual reality can be conceptualized as a synthetic three-dimensional environment that simulates real-life experiences where participants can interact with the environment as if they were in the real world (Pratt et al., 1995; Lumsden et al., 2016). Combining different sensory modalities (e.g., visual, auditory, and haptic) with tracking systems that accurately reproduce the stimuli allows a great sense of presence (e.g., Diemer et al., 2015) and engagement. This sense of presence or “being there” causes the user to be less aware of the unreality of the situation and experience it as if it were real life (both mentally and physically). For example, VR can facilitate decision-making responses as if they were natural analogs (cf. Burdea and Coiffet, 1994; Slater, 2009).

Nonetheless, VR applied to empathy has focused more on the training and development of empathy than on its evaluation (Rueda and Lara, 2020). Specifically, the term “empathy machine” has been coined to refer to VR potential to improve the emotional understanding and perspective-taking of others (Bujić et al., 2020; Hassan, 2020). For example, virtual environments have been designed to investigate perspective-taking by enabling “being” in the body of another person (Botvinick and Cohen, 1998; Maselli and Slater, 2013). Within organizational research, a similar approach has been adopted to explore perspective-taking from the viewpoint of managers within a work meeting (Chirino-Klevans, 2017).

Recent meta-analyses (Ventura et al., 2020; Martingano et al., 2021) aimed to investigate and clarify existing research on VR as a means of generating empathy. Ventura et al. (2020) revealed significant positive changes in perspective-taking outcomes after exposure to VR. In contrast, Martingano et al. (2021) revealed that VR improved emotional empathy but not cognitive empathy. Therefore, previous results on cognitive and affective empathy elicited in VR simulations remain mixed.

Importantly, VR allows the integration of stealth assessments (Shute et al., 2009, 2016), which refer to the possibility of capturing behavioral data related to specific skills and attributes, providing indirect evaluations in real time (Mislevy et al., 2003; Shute, 2009). This approach specifically aims to measure performance by unobtrusively logging user behaviors (e.g., time to complete tasks, number of attempts to complete the entire experience, and paths taken to solve a problem) rather than by explicitly asking users to self-report their thoughts and behaviors in an evaluative assessment. Accordingly, this approach is suitable for assessing real-time decisions (e.g., timing and type) and eye-tracking within a virtual organizational context. Although stealth assessment is valid in many contexts and is not exclusive to VR, this technology is particularly well suited to it. Moreover, because transfer to real life should be as large as possible, the current study used VR, which allows metrics such as eye-tracking to be collected in a much more ecological manner; these data were covertly recorded by the HTC system used to present the environment (Alcañiz et al., 2018; Parra et al., 2021, 2022).

The stealth method relies on evidence-centered design (ECD), a conceptual framework that can be used to develop assessment models (Shute, 2011). According to the ECD, three conceptual models must be developed before the design of a VR experience:

a. Competency model: The identification of competencies aims to describe the set of skills and competencies that must be evaluated. It involves identifying the latent components and their relationship with other constructs to be studied. This study mainly focuses on the relationship between empathy measured as a trait (psychometrically) and empathy measured in concrete situations (real-time stealth assessments).

b. Evidence model: The actual behaviors that can elicit the competencies to be assessed are identified. For each theoretical competency, various behaviors interacting with responses to a specific problem are individualized and described. The present study proposes the investigation of how decision-making behaviors and eye-tracking (e.g., gaze patterns and eye-fixations) within a work context relate to affective and cognitive empathies.

c. Task model: Tasks or situations capable of eliciting behaviors related to the competencies that must be evaluated are created. These tasks are addressed in the following sections.

Therefore, the current study explores the capabilities of a VROE based on the ECD to evaluate empathy through the collection of real-time data related to eye-tracking and decision-making. Furthermore, ML is used as a methodological approach to model the data.

Machine learning

Machine learning is a scientific discipline within artificial intelligence that designs and develops algorithms that allow computers to unravel cognitive and behavioral patterns from large amounts of empirical data (Mikalef et al., 2018). In particular, ML algorithms can identify and estimate data trends and patterns by building on existing information and highlighting unexpected relationships between variables (Vieira et al., 2017; Alcañiz Raya et al., 2020; Graham et al., 2020). This “learning-by-processing” approach has a great potential to produce accurate predictive models. Recently, an increasing number of studies within the organizational field have implemented ML techniques applied to large amounts of data (George et al., 2014; Leavitt et al., 2021). For example, ML has been used for the assessment of candidates (Faliagka et al., 2012) and identification of leadership roles (Doornenbal et al., 2021) as well as personality traits of managers (Hrazdil et al., 2020). Machine learning has also been used to study communication skills in the workplace (Suen et al., 2020) and evaluate gaze patterns and facial expressions (Muralidhar et al., 2018).

As previously introduced, there is an increasing urge to advance this type of methodology within organizational empathy research (e.g., Clark et al., 2019). Our study thereby presents a novel VR framework to investigate this issue.

New integrated approach to organizational empathy assessment

This study explores the feasibility of a VROE to investigate organizational empathy from both explicit and implicit data. At the theoretical level, the study aimed to link trait and situational empathy assessments by using a psychometric instrument covering different dimensions of empathy (cognitive and affective). Decision-making and eye-tracking data measured in real-time within the virtual environment are used to assess situational empathy. At the methodological level, an ML approach is implemented to model the data.

Accordingly, the study raises the following two research questions:

RQ1: How can decision-making and eye-tracking data be integrated within an organizational virtual environment to assess situational empathy?

RQ2: Can ML techniques discriminate high or low empathy from decision-making and eye-tracking data?

Materials and methods

Participants

The study sample consisted of 82 Spanish participants (67% men, 33% women; mean age = 42 years, standard deviation = 3.44). The inclusion criteria were the absence of mental disorders and of psychiatric medication. All participants signed written informed consent prior to participation in the study.

Trait empathy assessment

The Cognitive and Affective Empathy Test (TECA; López-Pérez et al., 2008) was used to measure trait empathy. It consists of 33 items rated on a 5-point Likert scale (1 = “I totally disagree” to 5 = “I totally agree”) representing four subscales. Two of these subscales evaluate cognitive empathy: perspective-taking (eight items; e.g., “I try to understand my friends by looking at situations from their perspective”) and emotional understanding (nine items; e.g., “I notice when someone tries to hide their true feelings”). The other two subscales evaluate affective empathy, both positive and negative: empathetic joy (eight items; e.g., “When something good happens to someone, I feel happy”) and empathetic stress (eight items; e.g., “I cannot help but cry with the testimonials of unknown people”). The Cronbach’s alpha values were 0.70, 0.74, 0.78, and 0.75 for the TECA perspective-taking (TECA PT), emotional understanding (TECA EU), empathetic stress (TECA ES), and empathetic joy (TECA EJ) subscales, respectively. Table 1 shows a detailed description of the subscales.


Table 1. Description of the dimensions of the TECA questionnaire.

Situational empathy assessment

Situational empathy was assessed based on decision-making behaviors and attentional patterns via eye-tracking features. The features derived from these two sources are described in Tables 2 and 3, respectively.


Table 2. Decision-making variables.


Table 3. Description of the eye-tracking variables.

Experimental procedure

The participants completed the TECA online with a short demographic questionnaire before the experimental testing at the laboratory. The experimental testing consisted of a 1.5-h session in which the participants experienced a workplace dynamic in an immersive VROE. The participants were seated and wore a head-mounted display (HMD) equipped with an eye-tracking feature, calibrated at the beginning of the session. Thereafter, the VROE experience began. The first 2 min showed a brief tutorial explaining how to use the virtual environment. The participants then went through activities taking place either at an office or a meeting room.

Visual attention was measured using the HTC VIVE Pro Eye HMD, with a combined resolution of 2,880 × 1,600 pixels (1,440 × 1,600 per eye), a field of view of 110°, and a refresh rate of 90 Hz. The VROE was developed in Unity 5.5 using the C# programming language with Visual Studio. The VR application ran on an MSI GE75 Raider 9SF-1204XES laptop (17.3″, i7-9750H, 32 GB RAM, 1 TB NVMe PCIe Gen3x4 SSD, GeForce RTX 2070 8 GB GDDR6).

Virtual reality organizational environment description

Virtual scenarios and decision-making tasks potentially related to empathy and suitable workplace settings were designed following the ECD guidelines. The virtual environment consisted of four situations, each with the same structure. Specifically, the design focused on two main scenarios: (1) an office and (2) a meeting room (Figure 1). The main difference between the two scenarios was their social character. In the office, the participants were alone, performing a series of tasks individually. In the meeting room, they shared a table with four other co-workers. These co-workers were actors pre-recorded using the chroma key technique. This technique allows cutting out the part of a video that is to be integrated into the virtual environment; in this case, different actors were recorded delivering the designed dialog.


Figure 1. Scenarios of the virtual reality environment.

Tutorial

At the beginning of the experience, a tutorial was provided. This tutorial showed the participant how to move around the environment and how to interact with or respond to the different types of interfaces proposed throughout the experience. The user was expected to become familiar with the virtual environment during this learning period, with no metrics collected.

Virtual experience at the office

The participants started the virtual experience within the office. They adopted the role of a new worker within the organization. After a brief period to familiarize themselves with the environment, the participants were kindly asked (by a pop-up message) to sit in a chair in front of a table with a computer inside the virtual office. This message was presented directly in the virtual environment; indeed, all instructions throughout the experience were delivered within the virtual environment. Here, it was explained that the computer offered two interactive tasks: chatting with co-workers and answering email messages.

• Chatting with co-workers: The goal of this activity was to evaluate decisions framed within a chat group with other colleagues. Concretely, the participants were free to decide whether to chat. The users interacted with the chat through a virtual keyboard. Examples of chat topics were internet jokes, comments on images with humorous content, and comments on others’ personal problems. Decision-making was evaluated in terms of the number of times the participants opened the chat, answered, and sent messages.

• Answering emails: Some emails were framed in a narrative attempting to capture the empathic skills of the participants. More precisely, the main objective of this activity was recognizing and understanding emotions by paying attention to verbal and non-verbal cues. For example, one email addressed a recruitment task in which candidates’ images were blurred so that their faces were not clearly visible. The participants were to answer questions referring to emotion recognition. Another email contained a video showing a man talking and gesturing; however, the narrative indicated that the audio was faulty. Accordingly, the participants’ attention toward body language was tracked as a potential indicator of emotional understanding.

Virtual experience at the meeting room

Once the above-described tasks at the office were finished, the participants entered the group meeting scenario. This scenario involved four virtual agents (two women and two men) with different attitudes and behavioral profiles. Specifically, one of the characters was presented as the organizer, another as communicative, another as logical, and the remaining as passive (Figure 2):

a. Organizer agent representing the role of the primary manager: A woman showed planned, sequential, and structured thinking. Her role focused on deciding what steps to take after a problem-solving debate.

b. Communicative agent representing the role of the human resource manager: A woman showed interpersonal warmth, fluid communication, and holistic thinking. This agent showed self-confidence and a predisposition to understand others’ points of view. Moreover, she encouraged everyone to yield consensus regarding the discussed topic. This character was sensitive to both personal and organizational problems.

c. Passive agent representing the role of the production department head: A man showed a non-interventional attitude. He avoided providing any feedback regarding the discussed topic and left the decision to the discretion of the other members. However, he became agitated when the topic referred to his department.

d. Logical agent representing the role of the sales department head: A man showed technical and analytical reasoning and a pessimistic tendency. He did not show empathetic attitudes toward the rest of the characters. On the contrary, he showed a distant, critical, and competitive attitude. Additionally, this agent set clear standards to follow and punished any mistakes.


Figure 2. Virtual agent characters: (A) organizer agent, (B) communicative agent, (C) passive agent, and (D) logical agent.

At the beginning of the meeting room scenario, the virtual agents were sitting at a table and talking. The organizer agent invited the participants to join them at the table. Herein, the participants were to decide where to sit among three free chairs and move in the scenario through a teleportation paradigm. A problem was then presented by one of the virtual agents. The participants listened first to the opinions of the other members and were then asked to explain their opinion about the problem. They were prompted to talk and explain their arguments aloud, to make the experience more realistic and engaging, and to choose a solution to the problem by clicking on a list of four alternatives. Finally, the organizer noted the participants’ decision and closed the debate. The meeting topics had different emotional connotations. For example, one topic addressed the organization of a meeting, whereas another involved a heated discussion concerning issues associated with the participants’ role.

The participants’ decisions addressed the following different behavioral styles:

Decision style 1: Cooperation was sought, reaching agreements among all members. Interest in the welfare of others was appreciated, and emotional responses were provided to demands. The following is an example: “We could focus on deciding who will be responsible. What do you think? Do you think we can distribute the tasks as I propose?”

Decision style 2: This style involved not making decisions unless the opinion of others was known, which was explained by an excessive concern for rejection. Decisions showed high sensitivity to the emotions of others. However, extreme rejection concerns may leave few cognitive resources to understand such emotions. The following is an example: “Perhaps I have been here too little time to be able to divide the tasks. I think it would be advisable for you to make the decision this time.”

Decision style 3: This style was characterized by rapid and rigid responses. Decisions reflected minimal trust in others and dislike when the rest of the team disagreed. Decisions did not reflect a willingness to understand or respond appropriately to the emotions of others. However, this style also involved a high fear of rejection and a need for approval. The following is an example: “From my point of view, the best distribution is this.”

Decision style 4: This style was defined by a complete lack of interest in others and a lack of cooperation and support. No interest or concern for others was reflected, and neither a willingness to take another person’s perspective nor an ability to share another person’s emotions was apparent. The users distanced themselves abruptly or stopped collaborating because they felt pressured. The following is an example: “Maybe we should go to the next point of the meeting and decide later.”

During the different meetings, chat messages (as previously described) were also implemented. At the end of the meetings, a series of mini-games were implemented as filler tasks. Depending on the decision, there were games of logic, creativity, and cognitive load.

After the decision-making about each meeting, another decision had to be made individually regarding other situations. The purpose of these tasks was to collect decision-making styles in situations that could have emotional and labor repercussions for the rest of the co-workers, without consulting their opinion. The following is an example: “You have pending tasks, but your workload is very intense this week, and you have decided to put some of your work aside. How would you do it?” The participants had to select from four options the decision that best fit their decision style (e.g., “You tell the employees to do the job and leave them free to decide how to do it” vs. “You select two different employees to each do the task, and finally, you select the best job”).

Virtual reality at the office

After resolving the last point of the meeting, the participants returned to the office, where they were asked to rate their behavior and involvement in the group chat and their performance in the problem-solving tasks. In addition, the participants were shown a variety of mini-games. They were encouraged to select the games they wanted to play (e.g., logic, creativity, or mental speed). Once they had played, the participants had to indicate the reasons for the selected option, rate their performance level, and evaluate the mini-games as positive or negative.

In brief, a virtual environment that aimed to stimulate behaviors linked to different levels of empathy was designed using avatars with different personality traits and personal and work decision-making tasks.

Figure 3 displays the structure of the virtual environment as well as the information collected from both sources (decision-making and eye-tracking).

Figure 3. Structure of the virtual environment.

Data processing

Data were obtained from three different sources: the participants’ answers to the TECA questionnaire, behavioral data (i.e., decisions made by the participants during the VR experience), and eye-tracking data (i.e., gaze fixations). Raw data were used to obtain a set of variables. Specifically, a total of 63 variables represented the decision-making data (Table 2), whereas a total of 110 variables represented the eye-tracking data (Table 3).

Statistical analysis

Statistical analyses were performed using R (version 3.6.1). Eight participants did not answer the TECA questionnaire; thus, their data were excluded from the analysis. A multivariate outlier analysis (Filzmoser, 2004) considering the four dimensions of the questionnaire was performed. In this outlier detection method, a multivariate distance was calculated for each participant across all subscales of the questionnaire, and the probability of this distance under a chi-square distribution was estimated. When the probability was below 0.01, the participant’s scores were defined as outliers. Accordingly, two participants were excluded from further analysis. Finally, 72 participants were considered.
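
The screening rule can be illustrated with a classical Mahalanobis-distance version of this idea; Filzmoser’s (2004) method uses robust estimates of location and scatter, which are omitted here for brevity. A minimal Python sketch (the study itself used R), with the function name being ours:

```python
import numpy as np
from scipy.stats import chi2

def mahalanobis_outliers(scores, alpha=0.01):
    """Flag multivariate outliers: under multivariate normality, squared
    Mahalanobis distances of p-dimensional observations approximately
    follow a chi-square distribution with p degrees of freedom."""
    scores = np.asarray(scores, dtype=float)
    mean = scores.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(scores, rowvar=False))
    diff = scores - mean
    # Squared Mahalanobis distance of each row from the sample mean.
    d2 = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)
    # A participant is an outlier when the upper-tail probability of the
    # distance is below alpha (0.01, as in the study).
    return chi2.sf(d2, df=scores.shape[1]) < alpha
```

A robust variant would replace the sample mean and covariance with, for example, minimum covariance determinant estimates.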

Mean, median, minimum, maximum, and standard deviation values and interquartile ranges were used to describe the TECA scores. The normality of the scores was inspected using the Shapiro–Wilk test. Statistical significance was defined at p < 0.05.
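
This reporting convention (parametric summaries when the Shapiro–Wilk test does not reject normality, median and interquartile range otherwise) can be sketched as follows. A Python illustration with SciPy, with the `describe` helper being ours:

```python
import numpy as np
from scipy.stats import shapiro

def describe(scores, alpha=0.05):
    """Summarize a subscale: mean and SD when Shapiro-Wilk does not
    reject normality (p >= alpha), median and IQR otherwise."""
    scores = np.asarray(scores, dtype=float)
    p = shapiro(scores).pvalue
    if p >= alpha:
        return ("mean±SD", scores.mean(), scores.std(ddof=1))
    q1, q3 = np.percentile(scores, [25, 75])
    return ("median±IQR", float(np.median(scores)), float(q3 - q1))
```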

Machine learning

Machine learning was used to explore the potential to discriminate between high and low empathic profiles from the decision-making and eye-tracking assessments during the VR experience. Accordingly, the TECA empathy scores were categorized as high or low according to the median value in each subscale (i.e., TECA PT, EU, EJ, and ES). ML-based models were then trained for each subscale to select the best set of features characterizing it.
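
The median split described above amounts to a one-liner; a Python sketch (the analysis itself was done in R). How ties at the median were assigned is not stated in the text, so sending them to “low” here is an assumption:

```python
import numpy as np

def median_split(scores):
    """Binarize a TECA subscale at its median: 'high' for scores above
    the median, 'low' otherwise (ties at the median go to 'low')."""
    scores = np.asarray(scores, dtype=float)
    return np.where(scores > np.median(scores), "high", "low")
```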

Initially, feature selection using a backward sequential wrapper (Doak, 1992) was performed to reduce the number of features. The method started by building a model based on a particular ML algorithm with all available features and measuring its performance (i.e., Cohen’s Kappa). Thereafter, at each step, one feature was removed, the model was re-trained, and its performance was measured; the feature whose removal increased the performance measure the most was dropped from the set of features used in the next step. The process stopped after several consecutive steps in which the performance metric varied by less than 0.01.
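
The wrapper above can be sketched in a few lines. This is an illustrative Python version (the study used R with the mlr package); `fit_score` stands for any routine that trains a model on the given feature subset and returns its cross-validated Cohen’s Kappa, and the simplified stopping rule halts at the first step with no gain above the tolerance rather than after several near-constant steps:

```python
import numpy as np

def cohen_kappa(y_true, y_pred):
    """Chance-corrected agreement between two label vectors."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    po = np.mean(y_true == y_pred)  # observed agreement
    pe = sum(np.mean(y_true == c) * np.mean(y_pred == c)
             for c in np.unique(np.concatenate([y_true, y_pred])))
    return (po - pe) / (1 - pe) if pe < 1 else 0.0

def backward_selection(X, y, fit_score, tol=0.01):
    """Backward sequential wrapper (Doak, 1992): start from all features;
    at each step drop the feature whose removal raises the score the most;
    stop when no removal improves the score by more than `tol`."""
    features = list(range(X.shape[1]))
    best = fit_score(X[:, features], y)
    while len(features) > 1:
        scores = {f: fit_score(X[:, [g for g in features if g != f]], y)
                  for f in features}
        f_best = max(scores, key=scores.get)
        if scores[f_best] - best <= tol:  # no meaningful gain: stop
            break
        features.remove(f_best)
        best = scores[f_best]
    return features, best
```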

Different ML algorithms were used to obtain the best set of features: random forest, support vector machine (SVM), Naïve Bayes, XGBoost (gradient boosting tree), and k-nearest neighbors (kNN). These algorithms used the default hyperparameters defined in the mlr package version 2.14.0 (Bischl et al., 2016). After obtaining the best set of features for each ML algorithm, we trained the model. The accuracy (Cohen’s Kappa), sensitivity (true positive rate), and specificity (true negative rate) of the model were calculated.
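
The two rate definitions given in parentheses can be made explicit. A small Python sketch, with labels coded 1 = high empathy and 0 = low empathy (the coding and the helper name are our assumptions):

```python
import numpy as np

def validation_metrics(y_true, y_pred):
    """Sensitivity (true positive rate) and specificity (true negative
    rate) for binary labels coded 1 = high empathy, 0 = low empathy."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))  # highs correctly detected
    tn = np.sum((y_true == 0) & (y_pred == 0))  # lows correctly detected
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return {"sensitivity": tp / (tp + fn), "specificity": tn / (tn + fp)}
```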

Both steps used repeated cross-validation (five folds, four times); thus, the validation metrics corresponded to the mean value across the 20 resulting folds. The same folds were used to validate all algorithms. The information from 10 randomly selected participants was excluded from this model-training process and was used only as a test set.
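
The resampling layout can be sketched as follows: a fixed held-out test set of 10 participants, plus four repeated partitions of the remaining participants into five folds, yielding the 20 validation folds that are reused across algorithms. This pure-NumPy illustration is ours (the study used mlr’s resampling in R), and the seed is arbitrary:

```python
import numpy as np

def repeated_cv_folds(n, k=5, repeats=4, n_test=10, seed=0):
    """Split n participants into a fixed held-out test set of size n_test,
    then build `repeats` random partitions of the remainder into k folds.
    Validation metrics are averaged over the k * repeats (here 20) folds."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(n)
    test_idx, train_idx = order[:n_test], order[n_test:]
    folds = []
    for _ in range(repeats):
        shuffled = rng.permutation(train_idx)
        folds.extend(np.array_split(shuffled, k))  # k near-equal folds
    return train_idx, test_idx, folds
```

Returning the fold list makes it easy to evaluate every algorithm on exactly the same folds, as the text describes.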

Results

TECA scores

Table 4 shows the scores in the TECA subscales. The participants scored (mean ± standard deviation) 34.17 ± 4.31 in the TECA EU and 34.89 ± 3.28 in the TECA EJ (Shapiro–Wilk test, p > 0.05). The median ± interquartile range of the scores in the TECA ES and TECA PT was 21.5 ± 8 and 32 ± 6, respectively (Shapiro–Wilk test, p < 0.05). Once categorized, 55.55%, 63.89%, 50%, and 52.78% of the participants scored high in the TECA EU, TECA EJ, TECA ES, and TECA PT, respectively.

Table 4. Scores in each TECA subscale.

TECA recognition models

The metrics of the best ML-based models for each TECA subscale and their characteristics are shown in Table 5. The best model for the TECA EU was based on the kNN, which selected 19 features (63.16% from the eye-tracking data and 36.84% from the behavioral data) and achieved an accuracy of 69% in the validation set and 58% in the test set. The model for the TECA EJ was built using random forest, which selected 14 features (28.57% from the eye-tracking data and 71.43% from the behavioral data) and achieved a similar accuracy between the validation and test sets (76% and 75%, respectively). The model for the TECA ES was built using random forest, which selected 17 features (94.11% from the eye-tracking data and 5.88% from the behavioral data) and achieved an accuracy of 81% in the validation set and 67% in the test set. The model for the TECA PT was built using random forest, which selected 19 features (73.68% from the eye-tracking data and 26.32% from the behavioral data) and achieved a similar accuracy between the validation and test sets (82% and 83%, respectively).

Table 5. Metrics of the best model achieved for each TECA subscale both in the validation and test sets.

Discussion

This study explored the links between trait and situational organizational empathy through a novel VROE. Behavioral (i.e., decision-making) and attentional (i.e., eye-tracking) data were measured in real time during the VR experience and subsequently modeled through ML techniques to estimate the cognitive and affective dimensions of trait empathy. Concretely, the TECA questionnaire was used to assess cognitive empathy, encompassing perspective-taking and emotional understanding, and affective empathy, encompassing empathetic stress and empathetic joy.

The VR experience was designed following the ECD, which enabled the collection of behavioral decision-making and eye-tracking data associated with each empathy dimension. Machine learning was used to build different models based on these two sources of information recorded during the VR experience, making it possible to identify the behavioral decision-making and eye-tracking variables that best define and predict each of the dimensions and to perform an analysis of the frequency distribution (high vs. low) of the different empathy dimensions.

This multi-method approach combining VR and behavioral and attentional measures offers a deeper understanding of the psychological construct to be evaluated and its manifestations. In addition, it improves the ecological validity of the use of self-report measures alone, since it enables behavioral decision-making data to be captured in scenarios that simulate real management situations.

High and low perspective-taking, emotional understanding, empathetic stress, and empathetic joy

The first objective was to identify the differences between the different empathy dimensions both in the traditional questionnaire measure and in the VR experience. The analysis of the traditional measures indicated that, in the self-report form, 56% of the participants had a high score for emotional understanding; 64%, for empathetic joy; 50%, for empathetic stress; and 53%, for perspective-taking. These results indicate that the traditional evaluation measures can define and classify each of the empathy dimensions. Furthermore, more participants scored high than low for emotional understanding and empathetic joy; similar numbers scored high and low for empathetic stress; and more participants scored low than high for perspective-taking.

Regarding the VR experience, the results supported both research questions: the different dimensions of empathy could be differentiated by the gaze patterns and behaviors elicited during the immersive VR experience. However, most models were based mainly on the eye-tracking data rather than on the behavioral data, with the exception of the empathetic joy dimension. In fact, with this dimension excluded, eye-tracking features accounted for between 63% and 94% of the selected variables in the models, whereas behavioral features accounted for between 6% and 71% of the selected variables across all models.

Therefore, eye-tracking played a more relevant and distinctive role than decision-making in predicting the different empathy dimensions during the VR immersion. This could be explained by the idea that accurately attending to the surrounding context is necessary to initiate empathic processes (Frischen et al., 2007). This information is acquired by observing and paying attention to behaviors and facial expressions, which allow the detection of complex mental states, such as the intentions, thoughts, beliefs, emotions, and desires of those around them (Balconi and Canavesio, 2016). Therefore, through gaze, individuals attempt to accurately assess motivations, intentions, and emotions to anticipate the behavior of another and to amend their own decisions and actions accordingly (Singer, 2006).

Nevertheless, the decision-making variables also added value to the ML-based models, supporting the idea that empathy is a precursor to prosocial behavior, including any action performed to alleviate negative emotions or share positive ones, which has repercussions in all fields of work. Empathy is particularly important in management environments, in which decisions must be made under uncertainty, risk, and stress, since it favors the maintenance of social relationships and encourages people to serve the needs of others (Vyatkin et al., 2019).

Based on the ML-based model metrics, tracking behaviors in the virtual environment (eye-gaze patterns and behavioral decision-making) makes it feasible to classify participants according to their level on each empathy dimension. However, our results suggest that empathy can be better identified from eye-tracking than from decision-making. Eye-tracking-related variables were systematically selected more frequently for all empathy subscales except the TECA EJ, for which more behavioral variables were selected. This could be attributed to people being more extrinsically motivated to respond behaviorally to positive affect than to negative affect. According to Telle and Pfister (2016), empathy for positive events implies low costs but high benefits: the experience of a pleasant emotional state is acquired. In contrast, empathy for negative events generates unpleasant feelings, and activating prosocial behaviors can entail costs for the individual. The TECA PT and EJ models were the best models, both in absolute accuracy and in the similarity between the validation and test results. The TECA ES model also performed well in the validation set but showed poor results in the test set. Finally, the TECA EU model was unable to generalize to the test set.

Theoretical and practical implications

The main added value of this study lies in the detection and assessment of empathy by integrating (a) behavioral information (eye-tracking and decision-making); (b) a highly ecological and standardized setting, such as VR; and (c) ML, a powerful method capable of analyzing an extensive amount of data and building predictive models. Therefore, this study yielded a better understanding of the practical implications and benefits of using implicit measures in general (e.g., EEG, GSR, and heart rate variability) and eye-tracking and decision-making data in particular. Based on the findings, both implicit measures allowed empathy to be measured in a more ecological manner, offering valid data during the VR experience. This study also confirmed the relevance of ML in identifying and predicting data trends and patterns from information on VR performance.

This multi-method approach can increase knowledge about the attentional and behavioral patterns and decision-making processes of workers with different empathy levels in complex work situations. In addition, unlike most evaluations, which use subjective self-report measures, this method combines neuroscience with VR, which attributes greater objectivity and ecological validity to the results. Regarding implicit measures, the findings highlight the importance of non-verbal cues, especially eye-tracking measures that cannot be captured through explicit measures, in identifying empathy characteristics.

This study also advances research on the evaluation of empathy in VR environments, since most studies focus on improving and training this competence rather than on its evaluation (e.g., Bertrand et al., 2018; Dyer et al., 2018). This explains why most studies focusing on empathy assessment through implicit measures use visual stimuli, such as images or videos, to evaluate gaze patterns instead of experiential stimuli (e.g., Zhi-Jiang and Pan-Cha, 2016; Nebi et al., 2022). The present study contributes to the use of experiential stimuli by offering a novel multi-method approach that assesses empathy, eliciting its manifestation through exposure to VR situations strategically designed for this purpose and collecting data from eye-tracking and decision-making patterns.

The current study also provides a broad overview of the benefits of using ML as a dataset analysis methodology. This methodology allows the identification of hidden patterns among apparently unrelated variables, which may be of interest for the unobtrusive evaluation of empathy in a virtual environment whose apparent objective is not such an evaluation. Thus, it can serve as an alternative to other methods of assessing empathy and may, in the future, act as an initial tool for the assessment of empathy in work environments prior to a training phase. After the training phase, this assessment tool could be used again to check the efficacy of the training. The methodology could also be applied to the assessment of other psychological constructs in clinical and organizational areas.

Limitations and future directions

In this study, we identified some limitations that could be helpful for future research on empathy and the organizational field. First, our ability to generalize the results was restricted by the small number of participants included (n = 71); thus, the size of the test set for the ML-based models was also small. Second, we built the high and low target variables based on the median values of the responses from this sample; thus, they may not be extrapolated to the rest of the population. Both conditions could compromise the generalizability of the findings. However, the main objective of the study was not to design an evaluation tool that would replace traditional selection tools, such as questionnaires or interviews, but to explore the feasibility of designing a more ecological empathy evaluation strategy, replicating the results of the TECA by capturing behavioral measures. This goal was achieved by using ML, which allowed the creation of a predictive classification model. To our knowledge, our study is the first to integrate VR, implicit measures, and ML to explore and assess empathy dimensions in a specific population.

Regarding future directions, this work can serve as a basis for the study of psychological constructs, including empathy, using a novel technology (e.g., VR with implicit measures and ML) to make predictions. Increasing the number of participants in future research and including an expert judgment on the levels of empathy presented by candidates to improve data validity are recommended.

Conclusion

Based on the current results, we conclude that behavioral measures captured during VR experiences constitute valid parameters for detecting and assessing different empathy dimension levels. Eye-tracking measures provide the core information in the classification models. Therefore, this multi-method approach, consisting of an immersive VR system, eye-tracking, and ML, offers a novel perspective on the study of empathy and the ability to replicate the results of the TECA questionnaire.

Data availability statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Ethics statement

This study was reviewed and approved by the Ethics Committee of the Polytechnic University of Valencia (protocol code: P01_08_07_20) and conducted according to the guidelines of the Declaration of Helsinki (1964). The participants provided written informed consent for participation in this study and for publication of any identifiable images or data included in this article.

Author contributions

EP and MA: conceptualization, methodology, and resources. EP: validation and investigation. LC-R and JM-M: formal analysis and data curation. EP, AG, ST, and LC-R: writing—original draft preparation. AG, ST, LC-R, EP, and JM-M: writing—review and editing. MA: supervision. All authors contributed to the article and approved the submitted version.

Funding

This work was supported by the European Commission (project RHUMBO: H2020-MSCA-ITN-2018-813234) and by the Generalitat Valenciana funded project “REBRAND” (grant number: PROMETEU/2019/105).

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Alcañiz, M., Parra, E., and Chicchi Giglioli, I. A. (2018). Virtual reality as an emerging methodology for leadership assessment and training. Front. Psychol. 9:1658. doi: 10.3389/fpsyg.2018.01658

Alcañiz Raya, M., Chicchi Giglioli, I. A., Marín-Morales, J., Higuera-Trujillo, J. L., Olmos, E., Minissi, M. E., et al. (2020). Application of supervised machine learning for behavioral biomarkers of autism spectrum disorder based on electrodermal activity and virtual reality. Front. Hum. Neurosci. 14:90. doi: 10.3389/fnhum.2020.00090

Alimardani, M., Hermans, A., and Tinga, A. M. (2020). Assessment of empathy in an affective VR environment using EEG signals. arXiv preprint arXiv:2003.10886.

Avolio, B. J., and Bass, B. M. (1995). Individual consideration viewed at multiple levels of analysis: a multi-level framework for examining the diffusion of transformational leadership. Leadersh. Q. 6, 199–218. doi: 10.1016/1048-9843(95)90035-7

Balconi, M., and Canavesio, Y. (2016). Is empathy necessary to comprehend the emotional faces? The empathic effect on attentional mechanisms (eye movements), cortical correlates (N200 event-related potentials) and facial behaviour (electromyography) in face processing. Cognit. Emot. 30, 210–224. doi: 10.1080/02699931.2014.993306

Balthazard, P. A., Waldman, D. A., Thatcher, R. W., and Hannah, S. T. (2012). Differentiating transformational and non-transformational leaders on the basis of neurological imaging. Leadersh. Q. 23, 244–258. doi: 10.1016/j.leaqua.2011.08.002

Barling, J., Weber, T., and Kelloway, E. K. (1996). Effects of transformational leadership training on attitudinal and financial outcomes: a field experiment. J. Appl. Psychol. 81:827. doi: 10.1037/0021-9010.81.6.827

Becker, W. J., Cropanzano, R., and Sanfey, A. G. (2011). Organizational neuroscience: taking organizational theory inside the neural black box. J. Manag. 37, 933–961. doi: 10.1177/0149206311398955

Bertrand, P., Guegan, J., Robieux, L., McCall, C. A., and Zenasni, F. (2018). Learning empathy through virtual reality: multiple strategies for training empathy-related abilities using body ownership illusions in embodied virtual reality. Front. Robotics AI :26. doi: 10.3389/frobt.2018.00026

Besel, L. D., and Yuille, J. C. (2010). Individual differences in empathy: the role of facial expression recognition. Personal. Individ. Differ. 49, 107–112. doi: 10.1016/j.paid.2010.03.013

Bischl, B., Lang, M., Kotthoff, L., Schiffner, J., Richter, J., Studerus, E., et al. (2016). Mlr: machine learning in R. J. Mach. Learn. Res. 17, 1–5.

Bloom, P. (2016). Against Empathy. New York, NY: HarperCollins Publishers.

Botvinick, M., and Cohen, J. (1998). Rubber hands ‘feel’ touch that eyes see. Nature 391:756. doi: 10.1038/35784

Brower, C. T., Hardman, D., George, M. M., and Beutler, T. (2015). Physiological Techniques for Measuring Empathetic Responses in Pre-healthcare Students vs. Non-pre-healthcare Students. Proceedings of The National Conference On Undergraduate Research (NCUR). Eastern Washington University, Cheney, WA.

Bruine de Bruin, W., Parker, A. M., and Fischhoff, B. (2020). Decision-making competence: more than intelligence? Curr. Dir. Psychol. Sci. 29, 186–192. doi: 10.1177/0963721420901592

Bujić, M., Salminen, M., Macey, J., and Hamari, J. (2020). “Empathy machine”: how virtual reality affects human rights attitudes. Internet Res. 30, 1407–1425. doi: 10.1108/INTR-07-2019-0306

Burch, G. F., Bennett, A. A., Humphrey, R. H., Batchelor, J. H., and Cairo, A. H. (2016). “Unraveling the complexities of empathy research: a multi-level model of empathy in organizations,” in Emotions and Organizational Governance eds. Neal M. Ashkanasy, Charmine E. J. Härtel, Wilfred J. Zerbe (Emerald Group Publishing Limited).

Burdea, G., and Coiffet, P. (1994). Virtual Reality Technology. New York, NY: Wiley

Chirino-Klevans, I. (2017). Virtual reality techniques for eliciting empathy and cultural awareness: affective human-virtual world interaction. Int. SERIES Inf. Syst. Manage. Creative eMedia (CreMedia) 2, 1–5.

Christov-Moore, L., Simpson, E. A., Coudé, G., Grigaityte, K., Iacoboni, M., and Ferrari, P. F. (2014). Empathy: gender effects in brain and behavior. Neurosci. Biobehav. Rev. 46, 604–627. doi: 10.1016/j.neubiorev.2014.09.001

Clark, M. A., Robertson, M. M., and Young, S. (2019). “I feel your pain”: a critical review of organizational research on empathy. J. Organ. Behav. 40, 166–192. doi: 10.1002/job.2348

Cowan, D. G., Vanman, E. J., and Nielsen, M. (2014). Motivated empathy: the mechanics of the empathic gaze. Cognit. Emot. 28, 1522–1530. doi: 10.1080/02699931.2014.890563

Cropanzano, R., Dasborough, M. T., and Weiss, H. M. (2017). Affective events and the development of leader-member exchange. Acad. Manag. Rev. 42, 233–258. doi: 10.5465/amr.2014.0384

Cuff, B. M., Brown, S. J., Taylor, L., and Howat, D. J. (2016). Empathy: a review of the concept. Emot. Rev. 8, 144–153. doi: 10.1177/1754073914558466

D’Errico, F., Leone, G., Schmid, M., and D’Anna, C. (2020). Prosocial virtual reality, empathy, and EEG measures: a pilot study aimed at monitoring emotional processes in intergroup helping behaviors. Appl. Sci. 10:1196. doi: 10.3390/app10041196

Davis, M. H. (1980). Interpersonal Reactivity Index (IRI). doi: 10.1037/t01093-000

Davis, M. H. (1983). The effects of dispositional empathy on emotional reactions and helping: a multidimensional approach. J. Pers. 51, 167–184. doi: 10.1111/j.1467-6494.1983.tb00860.x

Davis, M. H. (2018). Empathy: A Social Psychological Approach, New York: Routledge.

de Lima, F. F., and de Lima Osório, F. (2021). Empathy: assessment instruments and psychometric quality–a systematic literature review with a meta-analysis of the past ten years. Front. Psychol. 12. doi: 10.3389/fpsyg.2021.781346

Decety, J. (2020). Empathy in medicine: what it is, and how much we really need it. Am. J. Med. 133, 561–566. doi: 10.1016/j.amjmed.2019.12.012

Decety, J., and Holvoet, C. (2021). The emergence of empathy: a developmental neuroscience perspective. Dev. Rev. 62:100999. doi: 10.1016/j.dr.2021.100999

Diemer, J., Alpers, G. W., Peperkorn, H. M., Shiban, Y., and Mühlberger, A. (2015). The impact of perception and presence on emotional reactions: a review of research in virtual reality. Front. Psychol. 6:26. doi: 10.3389/fpsyg.2015.00026

Doak, J. (1992). An evaluation of feature selection methods and their application to computer security. Technical Report CSE-92-18.

Doornenbal, B. M., Spisak, B. R., and van der Laken, P. A. (2021). Opening the black box: uncovering the leader trait paradigm through machine learning. Leadersh. Q. 101515. doi: 10.1016/j.leaqua.2021.101515

Drimalla, H., Landwehr, N., Hess, U., and Dziobek, I. (2019). From face to face: the contribution of facial mimicry to cognitive and emotional empathy. Cognit. Emot. doi: 10.1080/02699931.2019.1596068

Dubinsky, A. J., Yammarino, F. J., and Jolson, M. A. (1995). An examination of linkages between personal characteristics and dimensions of transformational leadership. J. Bus. Psychol. 9, 315–335. doi: 10.1007/BF02230972

Dyer, E., Swartzlander, B. J., and Gugliucci, M. R. (2018). Using virtual reality in medical education to teach empathy. J. Med. Library Assoc.: JMLA 106:498. doi: 10.5195/jmla.2018.518

Eisenberg, N., and Miller, P. A. (1987). The relation of empathy to prosocial and related behaviors. Psychol. Bull. 101:91. doi: 10.1037/0033-2909.101.1.91

Faliagka, E., Tsakalidis, A., and Tzimas, G. (2012). An integrated e-recruitment system for automated personality mining and applicant ranking. Internet Res. 22, 551–568. doi: 10.1108/10662241211271545

Filzmoser, P. (2004). A Multivariate Outlier Detection Method, 18–22. Computer science.

Frischen, A., Bayliss, A. P., and Tipper, S. P. (2007). Gaze cueing of attention: visual attention, social cognition, and individual differences. Psychol. Bull. 133:694. doi: 10.1037/0033-2909.133.4.694

Furnham, A. (1986). Response bias, social desirability and dissimulation. Personal. Individ. Differ. 7, 385–400. doi: 10.1016/0191-8869(86)90014-0

Gentry, W. A., Weber, T. J., and Sadri, G. (2007). “Empathy in the workplace: a tool for effective leadership,” in Annual Conference of the Society of Industrial Organizational Psychology, New York, NY, April.

George, G., Haas, M. R., and Pentland, A. (2014). Big Data and Management. Acad. Manag. J. 57, 321–326. doi: 10.5465/amj.2014.4002

Gerdes, K. E., Segal, E. A., and Lietz, C. A. (2010). Conceptualising and measuring empathy. Br. J. Soc. Work. 40, 2326–2343. doi: 10.1093/bjsw/bcq048

Goleman, D. (2006). The socially intelligent. Educ. Leadersh. 64, 76–81.

Gonzalez-Liencres, C., Shamay-Tsoory, S. G., and Brüne, M. (2013). Towards a neuroscience of empathy: ontogeny, phylogeny, brain mechanisms, context and psychopathology. Neurosci. Biobehav. Rev. 37, 1537–1548. doi: 10.1016/j.neubiorev.2013.05.001

Graham, S. A., Lee, E. E., Jeste, D. V., Van Patten, R., Twamley, E. W., Nebeker, C., et al. (2020). Artificial intelligence approaches to predicting and detecting cognitive decline in older adults: a conceptual review. Psychiatry Res. 284:112732. doi: 10.1016/j.psychres.2019.112732

Grimm, P. (2010). “Social desirability bias,” in Wiley International Encyclopedia of Marketing. eds. Jagdish N. Sheth and Naresh K. Malhotra, Wiley International Encyclopedia of Marketing.

Gu, S. L. H., Gau, S. S. F., Tzang, S. W., and Hsu, W. Y. (2013). The ex-Gaussian distribution of reaction times in adolescents with attention-deficit/hyperactivity disorder. Res. Dev. Disabil. 34, 3709–3719. doi: 10.1016/j.ridd.2013.07.025

Hassan, R. (2020). Digitality, virtual reality and the ‘empathy machine’. Digit. J. 8, 195–212. doi: 10.1080/21670811.2018.1517604

Hedger, N., Haffey, A., McSorley, E., and Chakrabarti, B. (2018). Empathy modulates the temporal structure of social attention. Proc. R. Soc. B 285:20181716. doi: 10.1098/rspb.2018.1716

Helsinki, D. (1964). Declaración de Helsinki de la Asociación Médica Mundial. Recomendaciones Para Guiar a los Médicos en la Investigación Biomédica en Personas. Santiago: Universidad De Chile

Hershcovis, M. S., and Bhatnagar, N. (2017). When fellow customers behave badly: witness reactions to employee mistreatment by customers. J. Appl. Psychol. 102:1528. doi: 10.1037/apl0000249

Heyes, C. (2018). Empathy is not in our genes. Neurosci. Biobehav. Rev. 95, 499–507. doi: 10.1016/j.neubiorev.2018.11.001

Hoffman, M. L. (2008). “Empathy and prosocial behavior,” in Handbook of Emotions. Vol. 3. New York, London: THE GUILFORD PRESS. 440–455.

Hrazdil, K., Novak, J., Rogo, R., Wiedman, C., and Zhang, R. (2020). Measuring executive personality using machine-learning algorithms: a new approach and audit fee-based validation tests. J. Bus. Financ. Acc. 47, 519–544. doi: 10.1111/jbfa.12406

Jackson, P. L., Meltzoff, A. N., and Decety, J. (2005). How do we perceive the pain of others? A window into the neural processes involved in empathy. NeuroImage 24, 771–779. doi: 10.1016/j.neuroimage.2004.09.006

Jolliffe, D., and Farrington, D. P. (2006). Development and validation of the basic empathy scale. J. Adolesc. 29, 589–611. doi: 10.1016/j.adolescence.2005.08.010

Kanske, P., Böckler, A., Trautwein, F. M., and Singer, T. (2015). Dissecting the social brain: introducing the EmpaToM to reveal distinct neural networks and brain–behavior relations for empathy and theory of mind. NeuroImage 122, 6–19. doi: 10.1016/j.neuroimage.2015.07.082

Leavitt, K., Schabram, K., Hariharan, P., and Barnes, C. M. (2021). Ghost in the machine: on organizational theory in the age of machine learning. Acad. Manag. Rev. 46, 750–777. doi: 10.5465/amr.2019.0247

Leiberg, S., and Anders, S. (2006). The multiple facets of empathy: a survey of theory and evidence. Prog. Brain Res. 156, 419–440. doi: 10.1016/S0079-6123(06)56023-6

Liu, R., and Shange, S. (2018). Toward thick solidarity: theorizing empathy in social justice movements. Radic. Hist. Rev. 2018, 189–198. doi: 10.1215/01636545-4355341

López-Pérez, B., Fernández-Pinto, I., and García, F. J. A. (2008). TECA: Test de empatía cognitiva y afectiva [TECA: Cognitive and affective empathy test]. Madrid: TEA Ediciones.

Lumsden, J., Skinner, A., Woods, A. T., Lawrence, N. S., and Munafò, M. (2016). The effects of gamelike features and test location on cognitive test performance and participant enjoyment. PeerJ 4:e2184. doi: 10.7717/peerj.2184

Marci, C. D., Ham, J., Moran, E., and Orr, S. P. (2007). Physiologic correlates of perceived therapist empathy and social-emotional process during psychotherapy. J. Nerv. Ment. Dis. 195, 103–111. doi: 10.1097/01.nmd.0000253731.71025.fc

Martingano, A. J., Herrera, F., and Konrath, S. (2021). Virtual reality improves emotional but not cognitive empathy: a meta-analysis. Technol. Mind Behav. 2. doi: 10.1037/tmb0000034

Maselli, A., and Slater, M. (2013). The building blocks of the full body ownership illusion. Front. Hum. Neurosci. 7:83. doi: 10.3389/fnhum.2013.00083

Mikalef, P., Pappas, I. O., Krogstie, J., and Giannakos, M. (2018). Big data analytics capabilities: a systematic literature review and research agenda. Inf. Syst. E-Bus. Manag. 16, 547–578. doi: 10.1007/s10257-017-0362-y

Mislevy, R. J., Almond, R. G., and Lukas, J. F. (2003). A brief introduction to evidence-centered design. ETS Res. Rep. Ser. 2003, i–29. doi: 10.1002/j.2333-8504.2003.tb01908.x

Mittal, E. V., and Sindhu, E. (2012). Emotional intelligence and leadership. Global J. Manage. Bus. Res. 12.

Mukherjee, K., and Upadhyay, D. (2018). Effect of mental construals on cooperative and competitive conflict management styles. Int. J. Confl. Manag. 30, 202–226. doi: 10.1108/IJCMA-11-2017-0136

Muralidhar, S., Siegfried, R., Odobez, J. M., and Gatica-Perez, D. (2018). “Facing employers and customers: what do gaze and expressions tell about soft skills?,” in Proceedings of the 17th International Conference on Mobile and Ubiquitous Multimedia, 121–126.

Nebi, E., Altmann, T., and Roth, M. (2022). The influence of emotional salience on gaze behavior in low and high trait empathy: an exploratory eye-tracking study. J. Soc. Psychol. 162, 109–127. doi: 10.1080/00224545.2021.2001410

Nederhof, A. J. (1985). Methods of coping with social desirability bias: a review. Eur. J. Soc. Psychol. 15, 263–280. doi: 10.1002/ejsp.2420150303

Nikula, R. (1991). Psychological correlates of nonspecific skin conductance responses. Psychophysiology 28, 86–90. doi: 10.1111/j.1469-8986.1991.tb03392.x

Parra, E., Chicchi Giglioli, I. A., Philip, J., Carrasco-Ribelles, L. A., Marín-Morales, J., and Alcañiz Raya, M. (2021). Combining virtual reality and organizational neuroscience for leadership assessment. Appl. Sci. 11:5956. doi: 10.3390/app11135956

Parra, E., García Delgado, A., Carrasco-Ribelles, L. A., Chicchi Giglioli, I. A., Marín-Morales, J., Giglio, C., et al. (2022). Combining virtual reality and machine learning for leadership styles recognition. Front. Psychol. 13:864266. doi: 10.3389/fpsyg.2022.864266

Pratt, D. R., Zyda, M., and Kelleher, K. (1995). Virtual reality: in the mind of the beholder. Computer 28, 17–19.

Preckel, K., Kanske, P., and Singer, T. (2018). On the interaction of social affect and cognition: empathy, compassion and theory of mind. Curr. Opin. Behav. Sci. 19, 1–6. doi: 10.1016/j.cobeha.2017.07.010

Rahman, W. A., and Castelli, P. A. (2013). The impact of empathy on leadership effectiveness among business leaders in the United States and Malaysia.

Reniers, R. L., Corcoran, R., Drake, R., Shryane, N. M., and Völlm, B. A. (2011). The QCAE: a questionnaire of cognitive and affective empathy. J. Pers. Assess. 93, 84–95. doi: 10.1080/00223891.2010.528484

Rowe, A. J., and Boulgarides, J. D. (1983). Decision styles—a perspective. Leadersh. Organ. Dev. J. 4, 3–9. doi: 10.1108/eb053534

Rubin, R. S., Munz, D. C., and Bommer, W. H. (2005). Leading from within: the effects of emotion recognition and personality on transformational leadership behavior. Acad. Manag. J. 48, 845–858. doi: 10.5465/amj.2005.18803926

Rueda, J., and Lara, F. (2020). Virtual reality and empathy enhancement: ethical aspects. Front. Rob. AI 7:160. doi: 10.3389/frobt.2020.506984

Scott, S. G., and Bruce, R. A. (1995). Decision-making style: the development and assessment of a new measure. Educ. Psychol. Meas. 55, 818–831. doi: 10.1177/0013164495055005017

Sequeira, H., Hot, P., Silvert, L., and Delplanque, S. (2009). Electrical autonomic correlates of emotion. Int. J. Psychophysiol. 71, 50–56. doi: 10.1016/j.ijpsycho.2008.07.009

Shamay-Tsoory, S. G., Aharon-Peretz, J., and Perry, D. (2009). Two systems for empathy: a double dissociation between emotional and cognitive empathy in inferior frontal gyrus versus ventromedial prefrontal lesions. Brain 132, 617–627. doi: 10.1093/brain/awn279

Shute, V. J. (2009). Simply assessment. Int. J. Learning Media 1, 1–11. doi: 10.1162/ijlm.2009.0014

Shute, V. J. (2011). Stealth assessment in computer-based games to support learning. Comput. Games Instruct. 55, 503–524.

Shute, V. J., Ventura, M., Bauer, M. I., and Zapata-Rivera, D. (2009). Melding the power of serious games and embedded assessment to monitor and foster learning. 110.

Shute, V. J., Wang, L., Greiff, S., Zhao, W., and Moore, G. (2016). Measuring problem solving skills via stealth assessment in an engaging video game. Comput. Hum. Behav. 63, 106–117. doi: 10.1016/j.chb.2016.05.047

Silke, C., Brady, B., Boylan, C., and Dolan, P. (2018). Factors influencing the development of empathy and pro-social behaviour among adolescents: a systematic review. Child Youth Serv. Rev. 94, 421–436. doi: 10.1016/j.childyouth.2018.07.027

Singer, T. (2006). The neuronal basis and ontogeny of empathy and mind reading: review of literature and implications for future research. Neurosci. Biobehav. Rev. 30, 855–863. doi: 10.1016/j.neubiorev.2006.06.011

Slater, M. (2009). Place illusion and plausibility can lead to realistic behaviour in immersive virtual environments. Philos. Trans. R. Soc. B: Biol. Sci. 364, 3549–3557. doi: 10.1098/rstb.2009.0138

Somogyi, R. L., Buchko, A. A., and Buchko, K. J. (2013). Managing with empathy: can you feel what I feel? J. Organizational Psychol. 13, 32–42.

Suen, H. Y., Hung, K. E., and Lin, C. L. (2020). Intelligent video interview agent used to predict communication skill and perceived personality traits. HCIS 10, 1–12. doi: 10.1186/s13673-020-0208-3

Telle, N. T., and Pfister, H. R. (2016). Positive empathy and prosocial behavior: a neglected link. Emot. Rev. 8, 154–163. doi: 10.1177/1754073915586817

Toomey, E. C., and Rudolph, C. W. (2018). Age-conditional effects in the affective arousal, empathy, and emotional labor linkage: within-person evidence from an experience sampling study. Work Aging Retire. 4, 145–160. doi: 10.1093/workar/wax018

Ventura, S., Badenes-Ribera, L., Herrero, R., Cebolla, A., Galiana, L., and Baños, R. (2020). Virtual reality as a medium to elicit empathy: a meta analysis. Cyberpsychol. Behav. Soc. Netw. 23, 667–676. doi: 10.1089/cyber.2019.0681

Vieira, S., Pinaya, W. H., and Mechelli, A. (2017). Using deep learning to investigate the neuroimaging correlates of psychiatric and neurological disorders: methods and applications. Neurosci. Biobehav. Rev. 74, 58–75. doi: 10.1016/j.neubiorev.2017.01.002

Vinson, A. H., and Underman, K. (2020). Clinical empathy as emotional labor in medical work. Soc. Sci. Med. 251:112904. doi: 10.1016/j.socscimed.2020.112904

Vyatkin, A. V., Fomina, L. V., and Shmeleva, Z. N. (2019). “Empathy, emotional intelligence and decision-making among managers of agro-industrial complex. The role of tolerance for uncertainty in decision-making,” in IOP Conference Series: Earth and Environmental Science, Vol. 315 (IOP Publishing), 022081.

Weinberger, L. A. (2009). Emotional intelligence, leadership style, and perceived leadership effectiveness. Adv. Dev. Hum. Resour. 11, 747–772. doi: 10.1177/1523422309360811

Yan, Z. J., and Su, P. C. (2016). The influence of empathy on the attention process of facial pain expression: evidence from eye tracking. J. Psychol. Sci. 39:573.

Zivkovic, S. (2022). “Empathy in leadership: how it enhances effectiveness,” in Economic and Social Development (Book of Proceedings), 80th International Scientific Conference on Economic and Social Development, Vol. 1, 454.

Keywords: organizational neuroscience, empathy, virtual reality, behavioral data, eye-tracking, machine learning, decision-making

Citation: Parra Vargas E, García Delgado A, Torres SC, Carrasco-Ribelles LA, Marín-Morales J and Alcañiz Raya M (2022) Virtual reality stimulation and organizational neuroscience for the assessment of empathy. Front. Psychol. 13:993162. doi: 10.3389/fpsyg.2022.993162

Received: 13 July 2022; Accepted: 05 September 2022;
Published: 07 November 2022.

Edited by:

Alain Morin, Mount Royal University, Canada

Reviewed by:

Pradeep Raj Krishnappa Babu, Duke University, United States
Konstantin Ryabinin, Perm State University, Russia

Copyright © 2022 Parra Vargas, García Delgado, Torres, Carrasco-Ribelles, Marín-Morales and Alcañiz Raya. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Elena Parra Vargas, elparvar@i3b.upv.es

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.