
ORIGINAL RESEARCH article

Front. Psychol., 27 April 2023
Sec. Personality and Social Psychology
This article is part of the Research Topic Bullying and Cyberbullying: Their Nature and Impact on Psychological Wellbeing

Measuring empathy online and moral disengagement in cyberbullying

  • CICPSI, Faculdade de Psicologia, Universidade de Lisboa, Alameda da Universidade, Lisbon, Portugal

This investigation explores how adolescents report empathy in online contexts and moral disengagement in cyberbullying incidents, and how these two constructs are related. To accomplish this goal, three studies were conducted, considering the need to develop new instruments for this approach to measuring empathy and moral disengagement. In the first study, we adapted the Portuguese version of the Empathy Quotient-short form to online contexts, which resulted in the Empathy Quotient in Virtual Contexts (EQVC). We also developed the Process Moral Disengagement in Cyberbullying Inventory (PMDCI) to assess moral disengagement in these specific situations. In the second study, we conducted exploratory factor analyses (N = 234) of these instruments. Finally, in the third study, we conducted confirmatory factor analyses (N = 345) of both instruments. These results showed how adolescents reported empathy in online contexts and moral disengagement in cyberbullying incidents. Specifically, empathy revealed a bi-dimensional structure including difficulty and self-efficacy in empathizing (Cronbach’s α = 0.44 and 0.83, respectively), whereas process moral disengagement revealed four unidimensional questionnaires including locus of behavior, agency, outcome, and recipient (Cronbach’s α = 0.76, 0.65, 0.77, and 0.69, respectively). Furthermore, a correlational analysis of both constructs was performed, which also considered the variable sex. Results showed that difficulty in empathizing was negatively associated with sex (with girls revealing more difficulty than boys) and with all moral disengagement mechanisms except for behavior. Moral disengagement was positively correlated with sex, suggesting that boys morally disengaged more from cyberbullying. The instruments provide new insights into how empathy and moral disengagement can be specific to online contexts and cyberbullying situations, and how they can be used in educational programs to promote empathy and gain insight into moral disengagement within this phenomenon.

1. Introduction

People are not only autonomous agents, but also function as the product of a reciprocal interplay of intrapersonal, behavioral, and environmental events (Bandura, 1986). Therefore, this investigation is based on the Social Cognitive Theory, which adopts an agentic perspective. Specifically, in this investigation we explore the relation between two intrapersonal factors that are proven to play an important role in cyberbullying involvement, which are empathy and moral disengagement.

Cyberbullying is a pervasive problem in our society, as it increases and causes harmful consequences in the lives of children and adolescents (Kowalski et al., 2014). Considering this, it is of utmost importance to be familiar with factors that play a role in preventing or reinforcing this type of behavior (Lo Cricchio et al., 2020). Many factors have been studied in relation to cyberbullying, such as empathy and Moral Disengagement (MD) (Marín-López et al., 2020; Ferreira et al., 2021).

When individuals are involved in conflicts, empathy allows them to understand and empathize with others, and also helps them predict the type of response others may give (e.g., an aggressive one). Thus, it is assumed that empathy can serve as a control mechanism in conflict dynamics (Klimecki, 2019), which may include aggressive behavior (Tampke et al., 2020), such as in bullying and cyberbullying.

Empathy therefore plays an important role in cyberbullying; however, it does not by itself explain or predict it (Pfetsch, 2017). In fact, empathy has been found to be negatively related to cyberbullying perpetration (Garaigordobil, 2015). With respect to bystander behavior, empathy has been found to be an important factor for increasing prosocial behavior (Barlińska et al., 2018), and it can therefore be considered a protective factor (Zhu et al., 2021).

Considering that cyberbullying may be seen as intentional and repeated acts of aggression toward peers (Hinduja and Patchin, 2009), involving moral aspects (Romera et al., 2021), it is also crucial to understand moral (dis)engagement within this phenomenon, which is an important risk factor in the cyberbullying cycle (Gao et al., 2020; Romera et al., 2021). With respect to bullying, Wang and Goldberg (2017) suggested that MD predicted and increased bullying perpetration in adolescence, and Thornberg et al. (2019) found that bullying perpetration could in turn lead to MD. That is, MD impacted aggressive conduct, and aggressive conduct also impacted MD progressively over time (Bandura, 1999). For example, Falla et al. (2020) found that moral disengagement also had an impact on bullying victims, since cognitive restructuring (i.e., moral justification, euphemistic language, and advantageous comparison) influenced the association between victimization and later bullying behavior. Moreover, that same set of MD mechanisms was the single strongest predictor of both offline and online bullying (Romera et al., 2021). In general, MD mechanisms prevent individuals from feeling unpleasant emotions when perpetrating transgressions (Mazzone et al., 2019). Falla et al. (2021) argued that MD mechanisms may lead to a decrease in empathy, considering that the former seem to promote aggressive behavior, whereas the latter is related to prosocial behavior. Thus, considering that empathy seems to play an important role in moral development (Cameron et al., 2019), assessing both constructs with regard to online contexts and understanding the possible relation between them may provide an important contribution to the field. For example, Francisco (2022) argued that empathy can be viewed as a shield against the impulsive use of MD mechanisms, since adolescents who did not spontaneously use MD mechanisms to justify aggressors’ and/or bystanders’ cyberbullying behavior tended to show empathic responses instead. Moreover, Haddock and Jimerson (2017) studied the correlation between MD and empathy and found that it was statistically significant and negative. Specifically, these authors found that affective empathy and cognitive empathy both significantly predicted MD. Accordingly, as MD increased, affective and cognitive empathy decreased. In general, students who had higher scores in MD tended to have lower scores in empathy. Despite the differences that can occur in feeling empathy online and in the activation of MD mechanisms with respect to cyberbullying incidents, we believe that a similar relationship might occur between these constructs, since it occurs within bullying (Haddock and Jimerson, 2017). Therefore, this study aims to assess adolescents’ perceived empathy with regard to online contexts and their MD in cyberbullying situations with two new instruments. We also propose to examine the relationship between the two constructs, considering adolescents’ perspectives, because the MD instrument was developed according to adolescents’ point of view regarding cyberbullying scenarios.

1.1. Measuring adolescents’ perceived empathy regarding online contexts

1.1.1. The importance of the online context

This study is positioned within the perspective of empathy online, namely that it is possible to express “traditional empathic characteristics such as concern and caring for others … through computer-mediated communications” (Terry and Cain, 2016, p. 1). In fact, this study focuses specifically on empathy in virtual contexts, because empathy itself is not online, but rather occurs within individuals as they establish interpersonal relations in virtual contexts. To date, few studies have considered this specificity and assessed empathy with instruments adapted to it. That is, few studies have considered the online characteristics of empathy when studying cyberbullying. Nonetheless, some studies have already taken empathy in virtual contexts into account. For example, Carrier et al. (2015) and Manasia and Chicioreanu (2017) found that virtual empathy was positively related to empathy in face-to-face interactions; however, virtual empathy was lower for both sexes. Complementarily, Marín-López et al. (2020) found no differences between the different cyberbullying roles with respect to online empathy. Considering the scarce literature on empathy in virtual contexts and cyberbullying (Marín-López et al., 2019, 2020), it is crucial to develop further research in this area of knowledge.

Assessing empathy is important to explain bystanders’ role in cyberbullying situations. For instance, Macaulay and Boulton (2017) found that, when comparing positive bystander responses in bullying and cyberbullying, the rate of such responses tended to be higher in cyberbullying. Moreover, this type of response in both bullying and cyberbullying was positively and moderately correlated with empathy. Also, positive bystander responses tended to increase with cyberbullying severity. Another study (Schultze-Krumbholz et al., 2018) found that higher levels of both cognitive and affective empathy were associated with prosocial defending, when compared to passive bystander behavior. Notwithstanding, the research presented above used measures of empathy that did not account for the online context.

From a phenomenological perspective, Fuchs (2014) proposed that it is not possible for empathy to occur in online contexts, since we lose our perceptual access to other individuals’ physical presence and, thus, our direct empathic access to others. Accordingly, for empathy to occur, we need to perceive other individuals’ “lived body” (see Osler, 2021), and this is not possible in online “disembodied communication” (Fuchs, 2014, p. 167). Moreover, the temporal delay and the loss of perceptual cues (i.e., the perception we have is not apprehended by all our sensory capabilities) that occur in technologically mediated communication prevent us from perceiving someone’s physical and emotional experience. These issues are not a concern in face-to-face interactions, but they do come into play in online interactions (Osler, 2021). Despite these perspectives, we believe it is possible to feel empathy in online contexts, even if individuals do not see others in person. We consider this to be true because empathic skills can be developed through the use of virtual reality (e.g., Bertrand et al., 2018), which is also different from face-to-face interaction. Moreover, although there are differences between online and offline communication, individuals tend to use the other cue systems at their disposal, with the objective of conveying and detecting these cues, as well as developing relationships (Walther, 1995). Therefore, if relationships can be developed, empathy is also possible in online interactions. In fact, through interpersonal communication online, individuals are able to infer what others might be thinking or feeling in a certain situation (Carrier et al., 2015). Nonetheless, the specificities of online contexts may hinder empathic reactions (Terry and Cain, 2016). Although few studies have investigated empathy in virtual contexts and its specificities, it has already been shown that empathy can be experienced online. For example, Preece (1999) found that empathy online was quite common in support groups, which corroborates our position. This author discussed how the difference between synchronous and asynchronous systems impacts communication. Firstly, the pace of interaction is very different between these systems: in one it is almost immediate, whereas in the other it can take much more time (i.e., hours, days, or weeks, depending on the platform). Another important difference concerns the mode of expression: synchronous systems offered features that allowed nonverbal expression, whereas in asynchronous systems the primary mode is written text. It is important to highlight that this investigation is from the 1990s, and several features of online communication have changed since then. However, more recent studies have found that text-type emoticons and graphic emojis are processed in a similar way to in-person facial expressions (Gantiva et al., 2019), and participants who viewed text-type emoticons exhibited facial mimicry (O’Neil, 2013). Therefore, we can argue that it is possible to feel empathy when interacting in virtual contexts.

1.1.2. Gaps in existing scale development

Considering the importance of accounting for online features in measuring empathy, we sought new instruments on empathy that were developed according to the online context. To date, we found three instruments directly adapted from the Basic Empathy Scale (Jolliffe and Farrington, 2006), namely the Virtual Empathy Scale (Carrier et al., 2015), the Online Empathy Questionnaire (Marín-López et al., 2019), and the Virtual Basic Empathy Scale (Manasia and Chicioreanu, 2017). Another instrument was adapted by García-Pérez et al. (2016) based on the Basque version (Gorostiaga et al., 2014) of the Test de Empatía Cognitiva y Afectiva (TECA) by López-Pérez et al. (2008). Additionally, Happ and Pfetsch (2015) developed the Media-Based Empathy (MBE) Scale (original name Skala zu medienbasierter Empathie) based on a pool of items derived from the Interpersonal Reactivity Index (Davis, 1980) and an instrument to assess media empathy by Früh and Wünsch (2009); it included media concern, affective media empathy, cognitive media empathy, and immersion in video games, with items related to different types of media, as well as to fictional and real people. Of all these instruments, only the Online Empathy Questionnaire (Marín-López et al., 2019) has been used in relation to cyberbullying behavior.

Despite the valuable contributions of the aforementioned instrument development and validity studies, and after a detailed analysis of the respective items, we found that the Empathy Quotient (EQ) by Baron-Cohen and Wheelwright (2004) would be appropriate to reach our objectives. Specifically, these authors defined empathy as “The drive or ability to attribute mental states to another person/animal and entails an appropriate affective response in the observer to the other person’s mental state” (Baron-Cohen and Wheelwright, 2004, p. 168). The term “quotient” derives from the Latin word “quotiens,” which means “how much” or “how many” (Baron-Cohen and Wheelwright, 2004, p. 166). According to this perspective (Baron-Cohen, 2011), if individuals focus only on their own problems or interests, they are likely to feel less empathy. In fact, when individuals feel empathy, they are able to identify what others are thinking or feeling and to provide an adaptive emotional response. Thus, this view of empathy entails two fundamental stages: recognition and response. Accordingly, empathy occurs when there is recognition and an adaptive response, which helps avoid hurting others and fosters prosociality.

Some studies have provided evidence that the Empathy Quotient was the third most used instrument (e.g., Ilgunaite et al., 2017), and a recent meta-analysis by Hall and Schwartz (2019) determined that it was the second most used instrument in research. For this investigation, the aim was to choose an instrument that had been widely used and already validated in several countries (e.g., Redondo and Herrero-Fernández, 2018), but that also included items assessing accurate interpersonal perception (Hall and Schwartz, 2019), since this is an important feature when assessing empathy, specifically in virtual contexts, as is the case in this study. Moreover, we preferred to adapt the short form of this questionnaire, which had already been developed by Wakabayashi et al. (2006) and adapted for the Portuguese population (Rodrigues et al., 2011). Our study provides an important contribution, since it proposes to adapt this last version of the instrument for a younger population and for online contexts.

1.1.3. Goals of the present work

Considering the literature reviewed, one of the main purposes of this study is to present and evaluate a new version of the Portuguese short form of the EQ for adolescents communicating online, entitled Empathy Quotient in Virtual Contexts (EQVC).

According to some of the literature, empathy can be developed over time (Gerdes et al., 2010) and may be considered a capacity (or ability), suggesting that individuals have the potential to empathize or not (Hall and Schwartz, 2019). In fact, in some circumstances, feeling empathy requires effort and cognitive costs, and therefore, individuals may avoid feeling empathy (Cameron et al., 2019). Thus, considering the specificities of the online environment and its consequences in interpersonal relationships, we felt the need to assess empathy that occurs specifically in virtual contexts. Moreover, empathy can be situation and context specific (Cameron et al., 2019) such as in cyberbullying situations. Nonetheless, despite the widespread consensus that empathy is predetermined by circumstances (Barlińska et al., 2013), none of the empathy definitions clearly state that empathy can decrease in some situations. That is, for example, in a bullying situation, an individual might feel empathy, however, if a similar situation occurs online, the same individual might not feel the same degree of empathy. This is one of the reasons we opted to adapt an empathy instrument for online contexts, as it may be more difficult for individuals to feel empathy toward others in these digital environments (Pfetsch, 2017).

1.2. Assessing moral disengagement in cyberbullying situations

According to the Social Information Processing theory (Walther, 2015), the lack of nonverbal cues in many forms of computer-mediated communication (CMC) causes relational information to be exchanged more slowly. As a result, relationships develop more slowly via CMC than in face-to-face interactions, but eventually reach equivalent levels of development (Walther, 1992). Moreover, the scarcity of social–emotional cues and the ease of sharing media content may facilitate the use of certain MD mechanisms (Runions and Bak, 2015).

Before cyberbullying had been linked to MD (for a meta-analytic review see Zhao and Yu, 2021), Suler (2004) had already investigated some characteristics of the online world that impact individuals’ online actions. For instance, Suler (2004) argued that in cyberspace people tended to say and do things that they normally would not in face-to-face interactions. Suler explained how dissociative anonymity, invisibility, and asynchronicity facilitated online disinhibition. He also discussed other factors; however, considering cyberbullying situations, those three seemed more important. Specifically, Suler argued that dissociative anonymity allowed people to distance themselves from their online behavior, which is one of the main principles that helps explain online disinhibition. Furthermore, the fact that it was possible to be invisible in online interactions also amplified the disinhibition effect, because people did not worry about how they looked when communicating online (Suler, 2004). Thus, considering that the virtual online world seems to be characterized by a degree of disinhibition (Suler, 2004), creating a social environment that is conducive to MD (Bandura et al., 1996), cyberbullying behavior will be more frequent for individuals with higher MD (Zhao and Yu, 2021). That is, the lack of emotional cues in online settings may result in dehumanization (i.e., depriving another person of human qualities; Bandura, 2002), whereas the ease with which young people share information online may facilitate the diffusion and displacement of responsibility (distributing the responsibility across several individuals or attributing it to an authority; Bandura, 2002). Accordingly, ambiguous communication, which is common online, may provoke cyber aggression that is justified by the perceived blame of the other (Runions and Bak, 2015). Moreover, the same authors argued that young people are increasingly immersed in technology, and media attention to extreme cases of cyberbullying is growing. Hence, the relationship between online contexts and the use of MD mechanisms stresses the importance of assessing the construct in terms of specific behavior that occurs online, which in the case of this study is cyberbullying behavior.

To our knowledge, few studies have accounted for MD in online settings. For instance, Paciello et al. (2020) found that online MD and offline MD were correlated, even though they were distinct constructs. Moreover, they found that depending on the degree of externalizing behavior, the importance of online and offline MD was different. Specifically, cyberbullying was only significantly related to online MD for low externalizing adolescents, whereas for medium externalizing behaviors, both online and offline MD were significant. For high externalizing participants, only offline MD was significant. Complementarily, Marín-López et al. (2020) found that online MD was generally higher for children who were involved in cyberbullying (specifically cyberbullies and cybervictims), when compared to those who were not.

Some instruments have already been developed to assess MD in the cyberbullying context. One of the first measures of MD in cyberbullying situations was from Bussey et al. (2015), who reworded 8 items from the MD scale by Bandura et al. (1996). Later, Day and Lazuras (2016) developed the Cyberbullying-specific Moral Disengagement Questionnaire (CBMDQ-15), a 15-item scale based on a thematic analysis of focus group interviews with undergraduate students, from which eight themes reflecting the MD mechanisms (Bandura, 1991) emerged. In recent years, two more questionnaires were developed. Marín-López et al. (2019) developed the Moral Disengagement through Technology Questionnaire, also based on Bandura et al. (1996) and adapted to online interactions. Additionally, Cuadrado-Gordillo and Fernández-Antelo (2019) combined two different questionnaires (Day and Lazuras, 2016; Meter and Bauman, 2018) and adapted the different types of aggression to online contexts. More recently, Paciello et al. (2020) developed the Online Moral Disengagement scale, referring to “online social settings and misbehavior” (Paciello et al., 2020, p. 191).

Despite the aforementioned instruments for assessing MD in online interactions (e.g., Paciello et al., 2020) and cyberbullying situations (e.g., Bussey et al., 2015), we consider that a new instrument would be beneficial to assess the construct as a process for the Portuguese population, rather than just an adaptation to the Portuguese language. The main objective was to develop an instrument that could capture adolescents’ view of the cyberbullying phenomenon, and of MD as a process. That is, we intended to follow Bandura’s (2002) Social Cognitive Theory, but we also aimed to complement this perspective with new information that participants might report regarding MD in cyberbullying situations. We consider this important because most of the instruments presented were only adaptations to online contexts, without considering adolescents’ view of the phenomenon. Thus, this study also aims to present the newly developed instrument to assess MD in cyberbullying situations (Process Moral Disengagement in Cyberbullying Inventory [PMDCI]), as well as to evaluate its psychometric properties.

1.3. Adolescents’ perceived empathy online and moral disengagement in cyberbullying

Empathy is central to moral development (Cameron et al., 2019), as it can be an antecedent of moral attitudes (Hyde et al., 2010). Additionally, as empathy can be considered the basis for more abstract moral concepts, as well as for attitudes toward society, it is probably an antecedent of subsequent moral attitudes, such as MD. For example, Hyde et al. (2010) postulated that both MD and empathy share an element of disengagement; that is, MD is directed at society and its values, whereas empathy can be considered more person-specific. For instance, moral self-censure derives from how aggressors regard the individuals they harm; therefore, if they perceive another person as human, this can activate empathic reactions through perceived similarity (Bandura, 1992). Moreover, Francisco (2022) found that, when spontaneously talking about fictitious cyberbullying scenarios, participants who tended to use fewer MD mechanisms to justify aggressors’ and bystanders’ cyberbullying behavior showed more empathic responses. Thus, empathy and MD seem to be related, as they can be seen as opposite sides of the same coin, highlighting the importance of concerted work including empathy and MD with the aim of increasing prosocial behavior online (Francisco, 2022). Moreover, MD and empathy are two relevant personal factors in cyberbullying bystanders’ behavior; however, the relationship between the two constructs is not fully understood (Marín-López et al., 2020). Thus, taking this into account, and considering the virtual world and cyberbullying involvement, we propose that adolescents’ perceived empathy regarding online contexts may be related to MD in cyberbullying situations.

It is known that gender can have an impact on several individual factors, such as empathy and MD. For example, Falla et al. (2021) found gender differences with respect to empathy and MD in relation to bullying. Specifically, the authors found that girls had higher scores on both cognitive and affective empathy, and that boys had higher scores on several MD mechanisms, such as cognitive restructuring, minimizing responsibility, distorting consequences, and dehumanizing. Considering these gender differences, we ask whether gender also has an impact on the variables of this study. Therefore, we question: (1) Is there a relationship between empathy in virtual contexts and MD related to cyberbullying situations? If so, how are these constructs related?; and (2) What is the role of gender in empathy in virtual contexts and MD in cyberbullying situations?

In order to reach our objectives and answer our research questions, we present three distinct studies. A first study explores the initial adaptation of the EQVC and the preliminary development of the PMDCI. A second study presents the exploratory psychometric evidence of the EQVC and the PMDCI, whereas a third study shows the confirmatory analyses of the instruments and a correlational study of the two constructs.

2. Study 1 – Adaptation of the EQVC and preliminary development of the PMDCI

2.1. Method

2.1.1. Ethical aspects

For all the studies presented, authorization to administer the questionnaires in the online context was granted by the Ministry of Education of Portugal, the Portuguese National Commission of Data Protection, the Deontology Committee of the researchers’ institution, the schools’ boards of directors, the teachers, the parents, and the adolescents themselves. Before completing the questionnaires, students were informed that psychological assistance was available if needed, considering the sensitivity of the subject under study. Additionally, students were informed that all information collected was anonymous and confidential and that they could quit at any time if they were not comfortable. This study was not preregistered. Further information regarding the initial adaptation and construction of the instruments, all items (Portuguese version), and additional information are available in the Supplementary material.

2.2. Initial adaptation of the EQVC

All 22 items from the Portuguese version of the EQ short form were adapted to the online context, considering its specificities. These items were then compared to the original English version by a bilingual Portuguese-English teacher. Considering the difference between the population of the original version (i.e., adults) and ours, some modifications were made to simplify the items and make them more comprehensible for the adolescent population. Lastly, small changes were made based on students’ feedback in the face validity session (see Supplementary Appendix A.1 and Supplementary Appendix Table A.1).

2.3. Initial construction of the PMDCI

2.3.1. Participants

Thirty-four 9th grade students (Mage = 14.29, SD = 0.72, 53% female) participated in an in-depth semi-structured interview with fictitious scenarios.

2.3.2. Procedure

A qualitative study was conducted to explore adolescents’ MD in cyberbullying situations. In-depth semi-structured interviews with scenarios were conducted and transcribed verbatim. We then performed a content analysis with a mixed (deductive/inductive) approach, based on the Social Cognitive Theory (Bandura, 2002). The coding units we established were adolescents’ verbalizations with meaning (Amado et al., 2014), totaling 396 verbalizations, which were analyzed. We performed an initial phase, in which categories were created, and a re-checking phase, in which a set of verbalizations was analyzed by two other researchers and adjustments were made to the operational definitions of the categories. Finally, two independent coders rated the data. Inter-rater reliability was excellent according to the literature (McGraw and Wong, 1996), with an ICC = 0.99, 95% confidence interval [0.99, 0.99]. From this analysis, the categorization process went beyond the Social Cognitive Theory. That is, several categories of MD mechanisms emerged from the analysis, as well as other attributions (Figure 1), regarding both aggressors’ and bystanders’ behavior in the scenarios (see Francisco et al., 2022 for a detailed description).
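For readers who wish to reproduce this kind of agreement check, a minimal sketch in R is given below, using the irr package. The data frame, the number of coding units, and the two-way/agreement ICC settings are assumptions for illustration; the study itself reports only the ICC value and its confidence interval, following McGraw and Wong (1996).

```r
# Minimal sketch of an inter-rater agreement check with the irr package.
# 'ratings' is a hypothetical data frame with one row per coding unit
# (verbalization) and one column per coder; the two-way/agreement settings
# follow the McGraw and Wong (1996) taxonomy but are assumptions here.
library(irr)

set.seed(1)
ratings <- data.frame(
  coder1 = sample(1:5, 100, replace = TRUE),
  coder2 = sample(1:5, 100, replace = TRUE)
)

# Two-way model, absolute agreement, single-rater unit;
# the output includes the estimate and its 95% confidence interval.
icc(ratings, model = "twoway", type = "agreement", unit = "single")
```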


Figure 1. Procedural model of cyberbullying in the perspective of participants, as bystanders of the scenarios. Ag., aggressors’ behavior; Bys., bystanders’ behavior; Part., participants’ bystanders behavior in the scenarios. From Francisco et al. (2022).

It is important to highlight that we considered MD as a process, since several mechanisms tend to be used before the aggression, during the behavior, and afterwards as consequents of the behavior, as presented in Figure 1. Thus, considering this novel approach, the qualitative data were the starting point for the development of the PMDCI, because we sought to develop an instrument that could capture adolescents’ beliefs and perspectives on this phenomenon as accurately as possible. Hence, from the categories that emerged from the content analysis, we created the items for the PMDCI. All the procedures regarding scale development can be found in Supplementary Appendix A.2.

2.4. Results

Study 1 allowed us to develop the EQVC and the PMDCI. The EQVC is composed of 22 items in Portuguese, for the adolescent population. The final items were translated into English for the purpose of presenting this investigation (Supplementary Appendix Table A.8). The PMDCI is an instrument about the psychological mechanisms adolescents use to justify cyberbullying-related actions, from the perspective of possible aggressors and bystanders (Supplementary Appendix Tables A.2–A.6). The inventory begins with a brief introduction about adolescents’ daily use of ICT. The PMDCI (Supplementary Appendix A.3) is composed of two scales (the aggressor’s and the bystander’s perspective), because when speaking freely about the cyberbullying scenarios, adolescents tended to use MD mechanisms not only to legitimize cyberbullies’ actions, but also to approve of cyber bystanders’ aggressive behavior. The PMDCI also includes a Non-Intervention scale. However, for the purpose of this work, only the bystander scale was used, since this work is part of a larger investigation that aims to improve bystanders’ prosocial behavior online. The Bystander Scale of the PMDCI is composed of 36 items (24 regarding MD mechanisms, 3 regarding the devaluation of behavioral intention, and 9 regarding attributions). All items were answered on a 4-point Likert scale from 1 (totally disagree) to 4 (totally agree).

3. Study 2 – Preliminary testing and exploratory psychometric evidence of the EQVC and the PMDCI

3.1. Method

3.1.1. Participants

A total of 234 students participated in the exploratory factor analysis (EFA) study (Mage = 13.24; SD = 1.18; 51.7% girls), 35.9% of whom were in the 7th grade, 25.6% were in the 8th grade and 38.5% were in the 9th grade (Supplementary Appendix A.4). All 234 participated in the EFA of the EQVC and 230 participated in the EFA of the PMDCI.

3.1.2. Procedures

The newly created version of the EQ (EQVC) and the newly developed instrument (PMDCI) were administered online in a classroom context, individually, with the guidance of an educational psychologist. Students took approximately 40 minutes to complete both questionnaires. After data gathering, EFA was conducted with FACTOR 10.10.02 (Ferrando and Lorenzo-Seva, 2017) to understand the factorial structure of both instruments. Specifically, we intended to explore whether the EQVC yielded the same structure as the EQ short form (Portuguese version), or whether, considering the new context and different population, the structure of the instrument would change. Regarding the PMDCI, since it was developed considering the four loci (i.e., Behavior, Agency, Outcome, and Recipient) and the respective MD mechanisms, we intended to evaluate the best way to validate the instrument. That is, we were interested in understanding whether the instrument should be considered a single scale, or whether it should be regarded as a questionnaire with different scales (i.e., one scale for each locus) involving the distinct loci of MD.

3.2. Results

3.2.1. Exploratory evidence of the EQVC

In order to uncover the underlying structure of the EQVC, we performed an EFA (see Supplementary Appendix A.5 for more details). We present the correlations and descriptive statistics of all items, including skewness and kurtosis (Supplementary Appendix Table A.7). Regarding univariate normality, all variables were approximately normally distributed according to the literature, with skewness and kurtosis absolute values less than 2 (George and Mallery, 2016). We also analyzed multivariate normality according to Bollen and Long (1993), whereby multivariate normality is accepted if Mardia’s coefficient is lower than P(P + 2), with P being the number of observed variables. Considering that the EQVC comprised 22 observed variables, Mardia’s coefficient for skewness was 78.41 < 22(22 + 2) = 528, whereas for kurtosis it was 605.06 > 22(22 + 2) = 528. Moreover, for the correlation matrix, we used polychoric correlations (Muthén and Kaplan, 1985; Brown, 2006) (Supplementary Appendix Table A.7). Furthermore, before proceeding to the EFA results, the Kaiser–Meyer–Olkin (KMO) measure and Bartlett’s test of sphericity were assessed. The KMO was 0.89, revealing sampling adequacy, and Bartlett’s test was χ2(231) = 2543.4 (p < 0.001), indicating that we could proceed with factor analysis. In order to retain the appropriate number of factors, we used Horn’s parallel analysis (O’Connor, 2000). In the FACTOR program (Ferrando and Lorenzo-Seva, 2017), the Optimal Implementation of Parallel Analysis (Timmerman and Lorenzo-Seva, 2011) suggested that two factors should be extracted. We used the Unweighted Least Squares (ULS) method for factor extraction. Specifically, Robust Factor Analysis based on Robust Unweighted Least Squares (RULS) was used to fit the factor solution, and Robust Promin rotation was used to achieve factor simplicity (Lorenzo-Seva and Ferrando, 2019). Following the literature (Bandalos and Finney, 2010), we took into account all items with structure coefficients greater than 0.30, and no item showed loadings greater than 0.40 on both factors (Supplementary Appendix Table A.2). According to the literature (McDonald, 1999), the goodness-of-fit values (GFI = 0.98; AGFI = 0.98) and residual statistics (RMSR = 0.06) were good. The EQVC explained 48% of the variance. We then compared the bi-factorial model to the unifactorial model (Supplementary Appendix A.6 and Supplementary Appendix Table A.9). Considering the results, we decided to keep the bi-factorial model, since its percentage of explained variance was higher. Regarding reliability, McDonald’s omega (Hayes and Coutts, 2020) was assessed for both factors: factor 1 presented ω = 0.68, 95% CI [0.58, 0.74], showing acceptable reliability, and factor 2 presented ω = 0.91, 95% CI [0.88, 0.93], showing excellent reliability (Supplementary Appendix A.6).
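The EFA pipeline just described (polychoric correlations, KMO and Bartlett tests, parallel analysis, ULS extraction with an oblique rotation, and McDonald’s omega) was run in the FACTOR program; an approximate analogue can be sketched in R with the psych package, as below. This is a sketch under assumptions: eqvc is a hypothetical data frame holding the 22 ordinal EQVC items, and oblimin rotation stands in for FACTOR’s Robust Promin, so the output would not match the reported values exactly.

```r
# Approximate R analogue (psych package) of the FACTOR workflow reported above.
# 'eqvc' is a hypothetical data frame with the 22 ordinal EQVC items.
library(psych)

# Factorability checks on the polychoric correlation matrix.
pc <- polychoric(eqvc)$rho
KMO(pc)                                  # sampling adequacy (reported KMO = 0.89)
cortest.bartlett(pc, n = nrow(eqvc))     # Bartlett's test of sphericity

# Multivariate normality (Mardia's skewness and kurtosis).
mardia(eqvc)

# Parallel analysis to decide how many factors to retain.
fa.parallel(eqvc, cor = "poly", fm = "uls", fa = "fa")

# Two-factor ULS extraction with an oblique rotation; oblimin stands in for
# FACTOR's Robust Promin, which psych does not implement.
efa <- fa(eqvc, nfactors = 2, fm = "uls", rotate = "oblimin", cor = "poly")
print(efa$loadings, cutoff = 0.30)

# McDonald's omega for the items assumed to form one factor (psych reports
# omega total for a single-factor solution).
omega(eqvc[, 1:6], nfactors = 1)
```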

Later, we fitted a Multidimensional Normal-ogive Graded Response Model (Reckase, 1985), whose parameters, as well as the item loadings, can be seen in Supplementary Appendix Table A.8. This model provides a discrimination parameter (a), which is important in the preliminary adjustment of questionnaires and in item selection (Matteucci and Stracqualursi, 2006). Item discrimination reveals how well an item differentiates between individuals scoring high and low on the latent ability being measured (Depaoli et al., 2018). Most items revealed moderate item discrimination; however, items 1, 4, and 5 revealed low item discrimination, presenting values between 0.424 and 0.586, as indicated in the literature (Baker, 2001). We therefore performed the analysis again without items 1, 4, and 5 to see how the model changed. Lastly, some participants had a Weighted Mean-Squared Index larger than 2.0 (Ferrando et al., 2016); these participants were removed and the analysis was performed again. Table 1 shows a comparison between the 4 proposed EFA models: (1) with all participants and all items, (2) with all participants and without items 1, 4, and 5, (3) without the misfitting participants and with all items, and (4) without the misfitting participants and without items 1, 4, and 5.
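A graded response model of this kind can be fitted in R with the mirt package; the sketch below, again assuming the hypothetical eqvc data frame, shows how discrimination (a) parameters are extracted, how low-discrimination items could be flagged, and how person-fit statistics analogous to the screening above could be obtained. The 0.65 cutoff reflects Baker’s (2001) “low” range and is illustrative rather than the exact rule used here.

```r
# Sketch of a graded response model and item-discrimination screening with
# the mirt package; 'eqvc' is the same hypothetical data frame of ordinal items.
library(mirt)

# Exploratory two-dimensional graded response model.
grm_fit <- mirt(eqvc, model = 2, itemtype = "graded")

# Slope (a1, a2) and intercept parameters per item.
pars <- coef(grm_fit, simplify = TRUE)$items
pars

# Flag items whose largest slope falls in the "low" range (below ~0.65 in
# Baker's, 2001, taxonomy); the exact rule used in the paper may differ.
low_items <- rownames(pars)[apply(abs(pars[, c("a1", "a2")]), 1, max) < 0.65]
low_items

# Person-fit statistics; respondents with extreme infit/outfit values can be
# inspected or removed, analogous to the screening described above.
head(personfit(grm_fit))
```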


Table 1. Proposed bi-factorial model parameters of the EQVC.

The elimination of participants improved the percentage of explained variance (from 48 to 51%), and the RMSEA and the RMSR were the fit indices that improved the most. Moreover, the elimination of the 3 items improved the model essentially in terms of explained variance (from 48 to 55%), as well as in the same indices described above. Considering these improvements, we conducted Confirmatory Factor Analysis (CFA) with this structure.

3.2.2. Exploratory factor analysis of the PMDCI

With the aim of assessing the structure of the PMDCI, we performed an EFA with data from 230 participants on the 5 scales included in the questionnaire (4 scales regarding the Loci of MD and 1 scale regarding Attributions for cyberbullying behavior), considering the Bystanders’ perspective (i.e., the Bystander scale). We present the correlations and descriptive statistics of all items, including skewness and kurtosis (Supplementary Appendix Table A.10).

Regarding univariate normality, most of the variables were normally distributed, with skewness absolute values less than 2 (Bollen and Long, 1993), with the exception of the items from the Attribution scale. Regarding kurtosis, all variables had absolute values less than 5. With respect to multivariate normality, according to Bollen and Long (1993), it is accepted if Mardia’s coefficient is lower than P(P + 2), with P being the number of observed variables. Moreover, for the correlation matrix, we used polychoric correlations (Muthén and Kaplan, 1985; Brown, 2006). Furthermore, before proceeding to the EFA, the Kaiser–Meyer–Olkin (KMO) measure and Bartlett’s test of sphericity were assessed (Supplementary Appendix Table A.11). All scales had a high KMO, revealing sampling adequacy, as well as a significant Bartlett’s test, indicating that we could proceed with factor analysis.

In order to retain the appropriate number of factors, we followed the same procedures used for the EQVC. Our EFA suggested that a single factor should be extracted for each scale of the PMDCI. As for the factor structure (Supplementary Appendix Table A.12), we took into account all items with structure coefficients greater than 0.30 (Bandalos and Finney, 2010). Regarding reliability, all scales revealed good internal consistency values (Supplementary Appendix Table A.11).

Regarding explained variance, all scales were above the minimum threshold recommended in the literature (Hair et al., 2014). As for the model fit indices, all scales presented satisfactory goodness-of-fit values and residual statistics (Supplementary Appendix Table A.11), according to the literature (McDonald, 1999).

Later, we fitted a Normal-ogive Graded Response Model (Samejima, 1969) to each unifactorial scale; its parameters, as well as the item loadings for all 5 scales, can be seen in Supplementary Appendix Table A.12. Considering the discrimination parameter values, all items from all scales revealed good discrimination (Baker, 2001), indicating that there was no need to remove items. Thus, we conducted CFA with the original structure of all 5 scales.

4. Study 3 – The confirmatory analyses of the instruments and a correlational study of the studied constructs

4.1. Method

4.1.1. Participants

For the CFA, our sample consisted of 345 students (Mage = 13.13; SD = 1.27; 51% boys), 40.5% of whom were in the 7th grade, 27.1% in the 8th grade and 32.4% in the 9th grade. Most students were Portuguese (85.8%). All 345 participated in the CFA of the EQVC and 342 participated in the CFA of the PMDCI, as well as in the correlational study.

4.1.2. Procedures

Before proceeding to the CFA, the univariate and multivariate normality of all scales was evaluated, and the distributions were considered non-normal. This is consistent with the literature (Yuan and Bentler, 1998), since non-normality is prevalent in real data (Blanca et al., 2013). This constrained the options for data analysis, because structural equation modeling assumes the normality of the latent variables (Bollen, 1989). Thus, several estimation methods were investigated and analyzed considering the nature of our data (for a detailed description see Supplementary Appendix A.7).

With this in mind, we analyzed several estimation methods that could be applied to our data. As a way of summarizing our results, we only report the ULS parameters in the text, as advised by Bollen (1989), because this estimator does not make distributional assumptions regarding the observed variables. The other estimation procedures are presented in the Supplementary material and referred to when relevant.

For the CFA of the EQVC and the PMDCI, we used IBM SPSS AMOS 24.0 (Arbuckle, 2019) and the lavaan package (Rosseel, 2012) in R (R Core Team, 2020). ULS and ML with Bollen-Stine bootstrapping were conducted in AMOS, and ML with the Satorra-Bentler correction and WLSMV were conducted using the lavaan package in R. Several fit indices are presented according to the different estimation methods (Supplementary Appendix A.7), organized by their main classification. Considering that the test statistic is not asymptotically chi-square distributed under the ULS method (Bollen, 1989), several statistics are not reported, such as the chi-square test and other fit indices based on this statistic. Instead, we used the following fit indices to evaluate the tested models: GFI, AGFI, and PGFI (more information regarding fit indices is in Supplementary Appendix A.9).
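For the lavaan side of this strategy, a sketch is given below. The factor specification and item names are placeholders rather than the final EQVC measurement model, and the calls simply illustrate the estimators named above under the assumption of the same hypothetical eqvc data frame.

```r
# Sketch of the lavaan side of the estimation strategy described above.
# The factor specification and item names are placeholders, not the final
# EQVC measurement model; 'eqvc' is the hypothetical item data frame.
library(lavaan)

model <- '
  Difficulties =~ i2 + i3 + i6 + i7
  SelfEfficacy =~ i8 + i9 + i10 + i11 + i12
'

# ULS on the ordinal items (no distributional assumptions on the indicators).
fit_uls <- cfa(model, data = eqvc, estimator = "ULS", ordered = TRUE)

# ML with a Satorra-Bentler scaled test statistic (items treated as continuous).
fit_mlm <- cfa(model, data = eqvc, estimator = "MLM")

# WLSMV (diagonally weighted least squares with robust corrections).
fit_wlsmv <- cfa(model, data = eqvc, estimator = "WLSMV", ordered = TRUE)

# ML with a Bollen-Stine bootstrapped test.
fit_bs <- cfa(model, data = eqvc, estimator = "ML",
              test = "Bollen.Stine", se = "bootstrap", bootstrap = 1000)

# Fit indices of the kind reported in the text.
fitMeasures(fit_uls, c("gfi", "agfi", "pgfi", "srmr", "nfi", "pnfi"))
```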

As for the correlational study, Spearman correlation coefficients were used to examine the relationship between the variables.
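A minimal sketch of this correlational analysis in R is shown below, using psych::corr.test. The object and variable names, and the coding of sex, are assumptions for illustration only.

```r
# Minimal sketch of the correlational study: Spearman correlations between
# the EQVC factors, the PMDCI bystander scales, and sex. The object and
# variable names, and the coding of sex (e.g., girls = 0, boys = 1), are
# assumptions for illustration.
library(psych)

vars <- scores[, c("sex", "difficulties", "self_efficacy", "attributions",
                   "behavior", "agency", "outcome", "recipient")]

ct <- corr.test(vars, method = "spearman", adjust = "none")
round(ct$r, 3)   # Spearman correlation matrix (as in Table 4)
round(ct$p, 3)   # corresponding p-values
```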

4.2. Results

4.2.1. Confirmatory factor analysis of the EQVC

We examined multivariate normality and, considering that the critical ratios for both skewness and kurtosis were outside the interval of [−1.96, +1.96] (Byrne, 2010), several procedures were undertaken to account for the non-normal distribution of the data. First, several multivariate outliers were removed and multivariate normality was assessed again; however, the distribution was still non-normal.

We tested various possible models with confirmatory factor analysis, so as to confirm the initial structure of the EQVC suggested by the EFA. We tested a model with all participants and no covariances (model 1), a model without outliers and no covariances (model 2), and a model without outliers and with covariances between the error terms (model 3) (Supplementary Appendix Table A.13 and Supplementary Appendix Figure A.1). From the results presented, we chose model 3, which, according to the literature (Jöreskog and Sörbom, 1984; Cole, 1987; Blunch, 2008), presented good reference values [χ2(149) = 151.626, χ2/df = 0.793, GFI = 0.969, AGFI = 0.961, SRMR = 0.054, NFI = 0.930, PGFI = 0.759, PNFI = 0.810].
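In lavaan syntax, the error covariances that distinguish a model such as model 3 are specified with the ~~ operator; the fragment below is a hypothetical illustration (the item pairs shown are not the ones actually freed in the reported model), with modification indices used only to suggest candidate pairs.

```r
# Hypothetical illustration of a "model 3"-type specification: the same
# two-factor structure with covariances added between selected error terms.
# The item pairs below are illustrative, not those freed in the reported model.
model3 <- '
  Difficulties =~ i2 + i3 + i6 + i7
  SelfEfficacy =~ i8 + i9 + i10 + i11 + i12

  # correlated residuals suggested by modification indices
  i8  ~~ i9
  i10 ~~ i12
'
fit3 <- lavaan::cfa(model3, data = eqvc, estimator = "ULS", ordered = TRUE)

# Inspect the largest modification indices to decide which residual
# covariances (if any) are worth freeing.
lavaan::modificationindices(fit3, sort. = TRUE, maximum.number = 10)
```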

Despite the good fit of the model, several relationships between each factor and its corresponding items were lower than the cut-off value of 0.5 suggested in the literature (e.g., Bandalos and Finney, 2010). All unstandardized path coefficients were significant at p < 0.05, with the exception of item 3, for which p was equal to 0.05 (Supplementary Appendix Figure A.1). Moreover, the construct reliability scores were low for Difficulties in Empathizing and higher than 0.80 (Hair et al., 2014) for Self-efficacy regarding Empathy (Table 2). Thus, the second factor presented good construct reliability; however, the first, which only has 4 items, revealed low reliability. Convergent validity was low for both factors, since the Average Variance Extracted (AVE) scores were lower than 0.50 (Henseler et al., 2009). Nonetheless, the Average Shared Variance scores being below the AVE scores (Hair et al., 2014) indicated good discriminant validity for both factors. Additionally, the simplified model also presented a lower Modified Expected Cross-Validation Index (MECVI), indicating better validity in the population under study (Marôco, 2014).
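Composite reliability, AVE, and the discriminant-validity comparison of this kind can be obtained, for instance, with semTools on a fitted lavaan model, as sketched below for the hypothetical fit3 object from the previous sketch (newer semTools versions expose the same quantities through compRelSEM() and AVE()). The comparison of the squared latent correlation against each factor’s AVE is computed by hand.

```r
# Sketch: construct reliability, convergent and discriminant validity indices
# for a fitted lavaan model ('fit3' is the hypothetical model from the
# previous sketch).
library(semTools)

rel <- reliability(fit3)
rel["omega", ]    # composite reliability (omega) per factor
rel["avevar", ]   # average variance extracted (AVE) per factor

# Discriminant validity check: the shared variance between factors
# (squared latent correlation) should be lower than each factor's AVE.
phi <- lavaan::lavInspect(fit3, "cor.lv")
shared_variance <- phi["Difficulties", "SelfEfficacy"]^2
shared_variance < rel["avevar", ]
```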


Table 2. Validity measures of Model 3 from the EQVC.

The bi-factorial structure that we found could be the result of reverse coding (Woods, 2006). Even though the factor Difficulties in Empathizing revealed low construct reliability, we decided to keep the bi-factorial structure, since this is a pilot study of an instrument adapted to online contexts, which are quite different from the offline environment. Nonetheless, further studies are required to better assess the EQVC, and to better understand whether the bi-factorial structure results from reverse coding or from the characteristics of online contexts.

4.2.2. Confirmatory factor analysis of the PMDCI

In order to confirm the initial structure suggested by the EFA of the scales from the PMDCI, various possible models were tested for the 5 scales (Supplementary Appendix Tables A.14–A.18). Hence, we tested a model with all participants and no covariances (model 1), a model without outliers and no covariances (model 2), and a model without outliers and with covariances between the error terms (model 3).

Considering the Locus Behavior scale, the best model (model 3) presents several covariances between item error terms (Supplementary Appendix Table A.14 and Supplementary Appendix Figure A.2). According to the literature (Jöreskog and Sörbom, 1984; Cole, 1987; Blunch, 2008), the factor model we opted for presented good reference values [χ2(25) = 9.638, χ2/df = 0.386, GFI = 0.991, AGFI = 0.983, SRMR = 0.051, NFI = 0.975, PGFI = 0.550, PNFI = 0.677].

As for the Locus Agency scale, model 3, which includes covariances between the error terms of two items (Supplementary Appendix Table A.15 and Supplementary Appendix Figure A.3), presented good reference values [χ2(8) = 1.233, χ2/df = 0.154, GFI = 0.997, AGFI = 0.992, SRMR = 0.032, NFI = 0.987, PGFI = 0.380, PNFI = 0.526], according to the literature (Jöreskog and Sörbom, 1984; Cole, 1987; Blunch, 2008).

As for the Locus Outcome scale, we only assessed 2 models, since the modification indices did not indicate the need to covary item error terms (Supplementary Appendix Table A.16 and Supplementary Appendix Figure A.4); thus, we only had model 1 with all participants and model 2 without outliers. Model 2 presented good values [χ2(9) = 0.904, χ2/df = 0.100, GFI = 0.997, AGFI = 0.993, SRMR = 0.028, NFI = 0.993, PGFI = 0.427, PNFI = 0.596], according to the literature (Jöreskog and Sörbom, 1984; Cole, 1987; Blunch, 2008). Nonetheless, model 1 presented better validity in the population under study, since it had a lower MECVI (Marôco, 2014).

Considering the Locus Recipient scale, model 3 included covariances between four error terms (Supplementary Appendix Table A.17 and Supplementary Appendix Figure A.5). According to the literature (Jöreskog and Sörbom, 1984; Cole, 1987; Blunch, 2008), the factor model we opted for presented good reference values [χ2(7) = 6.366, χ2/df = 0.909, GFI = 0.993, AGFI = 0.979, SRMR = 0.042, NFI = 0.979, PGFI = 0.331, PNFI = 0.457].

Finally, for the Attribution scale, model 3, which included covariances between two error terms (Supplementary Appendix Table A.18 and Supplementary Appendix Figure A.6), revealed good reference values [χ2(26) = 1.198, χ2/df = 0.046, GFI = 0.992, AGFI = 0.987, SRMR = 0.05, NFI = 0.987, PGFI = 0.573, PNFI = 0.713], according to the literature (Jöreskog and Sörbom, 1984; Cole, 1987; Blunch, 2008).

Despite the good fit of the selected models, the PGFI did not present good values for all scales: it was below the cutoff of 0.6 (Blunch, 2008) for the Locus Agency, Outcome, and Recipient scales, and near the cutoff for the Locus Behavior and Attribution scales. Nonetheless, the other estimation procedures revealed good fit indices, supporting our model choice, as can be seen by comparing the RMSEA and AIC. Also, all chosen models presented a lower MECVI (Marôco, 2014), indicating better validity in the population under study, except for the Locus Outcome scale.

As can be seen in Supplementary Appendix Figures A.2–A.6, several relationships between each factor and its corresponding items were lower than the cut-off value of 0.5 (Bandalos and Finney, 2010). Nevertheless, all unstandardized path coefficients were significant at p < 0.05. Moreover, the composite reliability scores ranged from 0.62 to 0.88, revealing medium to high construct reliability (Hair et al., 2014), as can be seen in Table 3. However, the AVE was low for Locus Behavior, Agency, and Recipient, and approximately 0.50, as indicated in the literature (Henseler et al., 2009), for Locus Outcome and Attributions. Thus, for the former scales, convergent validity was low, and for the latter, it was close to adequate. Nonetheless, the Average Shared Variance (ASV) scores being below the AVE scores (Hair et al., 2014) indicated good discriminant validity for all scales, except for Locus Outcome, for which the ASV could not be calculated, since this scale did not include correlated error terms.


Table 3. Validity measures of Model 3 for all scales from the PMDCI.

4.2.3. Correlational study

In this investigation, we found that empathy in online contexts appeared to be divided into two factors (i.e., Difficulties in Empathizing and Self-efficacy regarding Empathy), and that moral disengagement with respect to cyberbullying situations was composed of 4 different loci (i.e., Behavior, Agency, Outcome, and Recipient) and Attributions (for the definition of each scale/variable see Supplementary Appendix A.10). Thus, regarding the first research question, Difficulties in Empathizing was negatively and significantly correlated with Attributions (r = −0.135, p < 0.05) and with 3 loci of MD [Agency (r = −0.169, p < 0.01), Outcome (r = −0.218, p < 0.01), and Recipient (r = −0.142, p < 0.01)]. That is, the more difficulty participants had in empathizing, the less they endorsed attributions and these three loci. Self-efficacy in empathizing, however, was not statistically significantly correlated with any variable. Considering the second research question, Difficulties in Empathizing was negatively and significantly correlated with gender (r = −0.114, p < 0.05), meaning that girls tended to have more difficulties in empathizing and boys tended to have fewer. Additionally, gender was positively and significantly correlated with Attributions (r = 0.223, p < 0.01), Locus of Behavior (r = 0.174, p < 0.01), Locus of Agency (r = 0.226, p < 0.01), Locus of Outcome (r = 0.136, p < 0.05), and Locus of Recipient (r = 0.196, p < 0.01). This means that boys tended to use more attributions and MD loci with regard to cyberbullying. The correlations can be found in Table 4.


Table 4. Correlations between EQVC and PMDCI.

5. Discussion

Although investigating cyberbullying is crucial, it is difficult to assess adolescents’ view of this phenomenon, since students tend to underrate their involvement (Francisco et al., 2015), which further demonstrates the importance of studying other related constructs, such as empathy and MD. That is, the better we understand how these variables operate within the cyberbullying cycle, the better we are able to understand cyberbullying and its relationship with them. Thus, this investigation proposed a different perspective on these constructs, considering the specificities of the online world. Accordingly, we presented a preliminary study of two new instruments with respect to empathy and MD, considering that the characteristics of cyberspace can make right and wrong more difficult to distinguish (Marín-López et al., 2019) and have an impact on online interactions (Marín-López et al., 2020).

5.1. Empathy quotient in virtual contexts

Our proposed model of empathy in virtual contexts was highly distinct from the one initially proposed by Baron-Cohen and Wheelwright (2004) for face-to-face interactions. This was expected, since online contexts have some features that make feeling empathy difficult (Terry and Cain, 2016). Thus, instead of having three factors (i.e., cognitive empathy, emotional reactivity, and social skills) (Suler, 2004), the EFA and CFA showed a bi-factorial structure. The first factor refers to difficulties in empathizing specifically in online contexts (with most of its items referring to the term “difficulty”) or to not being able to understand something online. The second factor refers to self-efficacy beliefs regarding empathy, which, according to Bandura (1997), refer to individuals’ beliefs regarding their capacity to control their own behavior and the environment that surrounds them, in this case specifically with respect to empathy.

This structure shares some similarities with the Portuguese short form of the EQ, since the factor Difficulties in Empathizing has the same 6 items as the Empathic Difficulties factor. Even though two items had to be eliminated because of low discrimination, the fact that another study (Rodrigues et al., 2011) found a factor with the same structure gives some support to our two-dimensional structure. The bi-factorial structure of the EQVC could be a direct consequence of the reverse-worded items, as well as of careless respondents (Woods, 2006); however, given that all the items of the first factor had already been aggregated together in another study (Rodrigues et al., 2011), we may suppose that they do, in fact, form a factor. Nonetheless, further investigation should be conducted, adding more (positively worded) items to this factor, to reassess the bi-factorial structure and understand whether it is specific to the online context.

As for the second factor, all items of Self-efficacy beliefs regarding Empathy refer to a capacity perceived by the participant (e.g., “I find it easy to put myself in someone else’s shoes online”). According to Bandura (2001, p. 10), efficacy beliefs are the foundation of human agency; therefore, the perceived self-efficacy to accomplish goals is more important than the actual capacity. These beliefs are the driving force to act, despite the difficulties that may arise in the course of action (Bandura, 2001). Thus, in the context of online empathy, it is of major importance that adolescents feel that they can deal with such situations, specifically considering the online features that hamper empathy. Moreover, this structure informed us that in online contexts the different components of empathy (e.g., cognitive empathy) are not as relevant as the ease or difficulty of feeling empathy and the self-efficacy beliefs related to it.

Considering the results from this investigation with respect to the factorial structure and reliability values, it seems important to continue improving this instrument for empathy in virtual contexts, in order to understand, for example, whether the structure holds when more items are included or when the instrument is administered to a different population. Moreover, it would be interesting to test measurement invariance, in order to understand whether the instrument behaves differently for boys and girls. This would be important to test, since empathy is usually higher among girls (Jolliffe and Farrington, 2006). It would also be interesting to evaluate convergent validity with other measures of empathy in online interactions, as well as to assess discriminant validity with measures of MD in virtual contexts.
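As a minimal sketch of how such a sex-based invariance test could be specified, assuming the EQVC responses are stored in a data frame with a sex grouping variable (the data frame and item names below, eqvc_data and eqvc1–eqvc5, are hypothetical placeholders rather than the published item labels), a multi-group CFA in lavaan (Rosseel, 2012) might look as follows:

library(lavaan)

# Hypothetical one-factor model for the Self-efficacy Beliefs regarding Empathy items
model <- 'selfeff =~ eqvc1 + eqvc2 + eqvc3 + eqvc4 + eqvc5'

# Configural model: the same structure estimated freely for boys and girls
fit_configural <- cfa(model, data = eqvc_data, group = "sex")

# Metric invariance: factor loadings constrained to be equal across groups
fit_metric <- cfa(model, data = eqvc_data, group = "sex", group.equal = "loadings")

# Scalar invariance: loadings and intercepts constrained to be equal
fit_scalar <- cfa(model, data = eqvc_data, group = "sex",
                  group.equal = c("loadings", "intercepts"))

# Chi-square difference tests between nested models; changes in CFI/RMSEA can also be inspected
lavTestLRT(fit_configural, fit_metric, fit_scalar)

If the constrained models do not fit substantially worse than the configural model, comparisons of scores between boys and girls become more defensible.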

5.2. Process moral disengagement in cyberbullying situations questionnaire

As for MD, instruments addressing it in relation to cyberbullying situations have begun to appear (e.g., Bussey et al., 2015), but research on this topic remains a current concern (e.g., Paciello et al., 2020). For example, Bussey et al. (2015) addressed the issue in a general sense (i.e., “Cyberbullying annoying classmates is just teaching them a lesson”) or without specifying who the aggressor is (i.e., “If people give out their passwords to others, they deserve to be cyberbullied”). Items with this mixed approach made us question whether the level of MD would be the same if participants put themselves in the place of aggressors or of bystanders. The qualitative research that led to the development of the instrument also supported this idea, since adolescents used MD mechanisms not only to legitimize cyberbullies’ actions, but also to condone cyberbystanders’ aggressive behavior (Francisco et al., 2022). Therefore, we decided to develop an instrument that could assess MD from both the aggressors’ and the bystanders’ perspectives. This distinctive feature allows us to understand the role of MD in aggressors’ and bystanders’ cyberbullying behavior; however, for the purposes of this study, only the bystander scale was analyzed.

From a different perspective, Marín-López et al. (2019) focused on Moral Justification, Diffusion of Responsibility, Distortion of Consequences, and Attribution of Blame. We, however, wanted to capture the impact of MD mechanisms as a process. Thus, we chose to develop a measure that included all mechanisms, separated by locus, since the qualitative study showed that not all mechanisms have the same impact in explaining cyberbullying behavior (Francisco et al., 2022) and not all of them were mentioned (Figure 1). Moreover, for research purposes, some scales may prove more useful than others. Furthermore, we consider MD as a process, since this view provides a better understanding of how cyberbullying starts and how adolescents perpetuate this type of behavior, considering that some mechanisms may occur at specific points in the cyberbullying cycle (Tillman et al., 2018).

Confirmatory factor analysis verified the unidimensionality of the five scales (i.e., the four loci and Attributions) of the Bystander perspective of the PMDCI. Future studies should evaluate the psychometric properties of the Aggressor perspective and compare it with the Bystander perspective. It would also be important to evaluate convergent validity with other measures of MD in online interactions, as well as to assess discriminant validity with measures of empathy in virtual contexts. Furthermore, it would be very important, especially in terms of intervention, to understand whether the role of the distinct loci differs across grade levels and participants’ ages, because MD is known to increase over the high school years (Smith and Slonje, 2010) and severe cyberbullying incidents peak during middle adolescence (Festl et al., 2017).
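As an illustration of how the unidimensionality of a single PMDCI scale could be re-examined in future samples, the sketch below fits a one-factor CFA in lavaan with ML estimation and the Bollen-Stine bootstrapped chi-square mentioned in the footnote; the data frame and item names (pmdci_data, pmdci1–pmdci5) are hypothetical placeholders, not the published PMDCI items.

library(lavaan)

# Hypothetical one-factor model for one locus scale of the Bystander version of the PMDCI
model <- 'locus_behavior =~ pmdci1 + pmdci2 + pmdci3 + pmdci4 + pmdci5'

# ML estimation with the Bollen-Stine bootstrap (cf. Bollen and Stine, 1992)
fit <- cfa(model, data = pmdci_data, estimator = "ML",
           test = "bollen.stine", se = "bootstrap", bootstrap = 1000)

# Fit indices commonly reported for such models
fitMeasures(fit, c("chisq", "df", "pvalue", "cfi", "tli", "rmsea", "srmr"))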

5.3. Empathy online and moral disengagement in cyberbullying

With respect to the relationship between both constructs, we believe that when students felt more difficulties in empathizing, the need to resort to MD mechanisms to decrease moral self-sanctions lessened (Bandura, 2002). This does not mean, however, that they would not get involved in cyberbullying situations. Rather, if they did enter the cyberbullying cycle, having difficulties in empathizing, they would not need MD mechanisms, because they would not feel that the situation could transgress their moral standards. Regarding sex differences, girls may have felt more difficulties in empathizing because they need more social cues to do so (Suler, 2004; Runions and Bak, 2015). Even though girls generally score higher on empathy (Baron-Cohen and Wheelwright, 2004; Carrier et al., 2015), ICT may pose additional challenges for them; especially considering that empathy can be effortful (Cameron et al., 2019), they may perceive more difficulties in empathizing online. With respect to MD, we expected significant positive correlations with sex, since boys tend to express significantly higher levels of moral justification, euphemistic labeling, diffusion of responsibility, distortion of consequences, and blaming the victim than girls (Thornberg and Jungert, 2014).

5.4. Limitations and future directions

This study has some limitations, among them the convenience sample (Marín-López et al., 2020), the sample size (Gerdes et al., 2010), and the age range of participants (Barlett et al., 2016); therefore, the findings cannot be generalized. Additionally, self-report instruments can lead to false reporting and social desirability (Thornberg and Jungert, 2014), so it would be interesting to compare adolescents’ results with peer reports (Garaigordobil, 2015). Also, the data collection procedures may not establish the validity of the data (Gerdes et al., 2010); thus, comparison with objective data collected in ecologically valid contexts would be important. Moreover, assessing test–retest reliability would be important to better evaluate the instruments (Redondo and Herrero-Fernández, 2018).
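As a minimal sketch of how test–retest reliability could be examined in a follow-up administration, assuming total scale scores are available for the same participants at two time points (the object names scores_time1 and scores_time2 below are hypothetical), intraclass correlation coefficients in the sense of McGraw and Wong (1996) can be obtained with the psych package:

library(psych)

# Hypothetical test-retest data: one row per participant, total scores at two occasions
retest <- data.frame(time1 = scores_time1, time2 = scores_time2)

# Two-way intraclass correlation coefficients (single- and average-measure estimates)
ICC(retest)

A high two-way, absolute-agreement ICC across the two occasions would indicate that scores on the instruments are stable over time.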

5.5. Implications for practice

In terms of implications for practice, we believe the EQVC may provide some clues for interventions aimed at promoting empathy in online contexts. Specifically, it can help identify which areas are more prone to evoke difficulties in feeling empathy when interacting virtually. Moreover, considering the importance of self-efficacy for goals and expectations (Bandura, 2001), it seems extremely important to stimulate and develop self-efficacy specific to online interactions, as well as to empower children and adolescents so that they are able to persevere when they decide to act against cyberbullying. Regarding MD, as Bandura et al. (1996) argued, the different mechanisms seem to differ in their contribution to detrimental conduct; hence, the PMDCI allowed us to understand which MD mechanisms could interfere more with justifying cyberbullying behavior and can therefore be an in-depth resource for interventions. That is, by providing information about the most commonly used mechanisms, this inventory can inform researchers and practitioners about what type of intervention should be developed for a specific population. Consequently, future interventions could be more accurate in terms of psychological needs, as well as more focused and shorter. These features may be important considering the difficulties often encountered with respect to the time available to work with children and adolescents beyond the school schedule. We believe that these versions of the EQVC and the PMDCI are promising instruments that can be further improved and can also be used with other Portuguese speakers (e.g., from Brazil and Angola); however, cultural differences may emerge. Moreover, these instruments can also be translated and adapted for other countries. Finally, the two instruments that resulted from this investigation can make an important contribution to understanding the complex nature of cyberbullying and to improving prosocial behavior online.

Data availability statement

The datasets presented in this article are not readily available because the Portuguese National Commission of Data Protection and the Deontology Committee of the researchers’ institution do not allow the availability of the datasets. The data that support the findings of this study are available in the Supplementary material of this article. Requests to access the datasets should be directed to sofifrancisco@gmail.com.

Ethics statement

The studies involving human participants were reviewed and approved by the Deontology Committee of the Faculty of Psychology, University of Lisbon. Written informed consent to participate in this study was provided by the participants’ legal guardian/next of kin.

Author contributions

SF designed and executed the study, analyzed the data, and wrote the manuscript. PC assisted with the design, collaborated in the data analyses, and contributed to the writing of the study. AV assisted with the design, execution, and writing of the study, and collaborated in the editing of the final manuscript. NP assisted with the writing and editing of the final manuscript. All authors approved the final version of the manuscript for submission.

Funding

This work was supported by the Foundation for Science and Technology of the Science and Education Ministry of Portugal through a PhD grant (SFRH/BD/130982/2017), a Project grant (PTDC/PSI-GER/1918/2020) and through the Research Center for Psychological Science of the Faculty of Psychology, University of Lisbon (CICPSI; UIDB/04527/2020 and UIDP/04527/2020).

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Supplementary material

The Supplementary material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fpsyg.2023.1061482/full#supplementary-material

Footnotes

1. ^Unstandardized path coefficients and corresponding significance statistics were not available for ULS; thus, we present values from ML estimation with the Bollen-Stine bootstrap.

References

Amado, J., Costa, A. P., and Crusoé, N. (2014). “A Técnica de Análise de Conteúdo [The Content Analysis Technique]” in Manual de investigação qualitativa em educação. 2nd edn. ed. J. Amado. (Coimbra: Coimbra University Press), 301–348.

Arbuckle, J. L. (2019). Amos (version 24.0) [computer program]. Chicago: IBM SPSS.

Baker, F. (2001). The basics of item response theory. ERIC Clearinghouse on Assessment and Evaluation, University of Maryland, College Park, MD. Available at: http://echo.edres.org:8080/irt/baker/

Bandalos, D. L., and Finney, S. J. (2010). “Factor analysis: Exploratory and confirmatory” in The Reviewer’s Guide to Quantitative Methods in the Social Sciences. eds. G. R. Hancock and R. O. Mueller (New York, NY: Routledge), 93–114.

Bandura, A. (1986). Social foundations of thought and action: A social cognitive theory. Englewood Cliffs, NJ: Prentice-Hall.

Bandura, A. (1991). “Social cognitive theory of moral thought and action” in Handbook of moral behavior and development. Vol. 1. eds. M. K. William and L. G. Jacob (Mahwah: Lawrence Erlbaum Associates, Inc.), 45–103.

Bandura, A. (1992). “Social cognitive theory of social referencing” in Social Referencing and the Social Construction of Reality in Infancy. ed. S. Feinman (New York, NY: Plenum), 175–208.

Bandura, A. (1997). Self-efficacy: the exercise of control. Freeman.

Bandura, A. (1999). Moral disengagement in the perpetration of inhumanities. Personal. Soc. Psychol. Rev. 3, 193–209. doi: 10.1207/s15327957pspr0303_3

Bandura, A. (2001). Social cognitive theory: an agentic perspective. Annu. Rev. Psychol. 52, 1–26. doi: 10.1146/annurev.psych.52.1.1

Bandura, A. (2002). Selective moral disengagement in the exercise of moral agency. J. Moral Educ. 31, 101–119. doi: 10.1080/0305724022014322

Bandura, A., Barbaranelli, C., Caprara, G. V., and Pastorelli, C. (1996). Mechanisms of moral disengagement in the exercise of moral agency. J. Pers. Soc. Psychol. 71, 364–374. doi: 10.1037/0022-3514.71.2.364

Barlett, C. P., Helmstetter, K., and Gentile, D. A. (2016). The development of a new cyberbullying attitude measure. Comput. Hum. Behav. 64, 906–913. doi: 10.1016/j.chb.2016.08.013

Barlińska, J., Szuster, A., and Winiewski, M. (2013). Cyberbullying among adolescent bystanders: role of the communication medium, form of violence, and empathy. J. Community Appl. Soc. Psychol. 23, 37–51. doi: 10.1002/casp.2137

Barlińska, J., Szuster, A., and Winiewski, M. (2018). Cyberbullying among adolescent bystanders: Role of affective versus cognitive empathy in increasing prosocial cyberbystander behavior. Front. Psychol. 9:799. doi: 10.3389/fpsyg.2018.00799

Baron-Cohen, S. (2011). The science of evil: On empathy and the origins of cruelty. New York, NY: Basic Books.

Baron-Cohen, S., and Wheelwright, S. (2004). The empathy quotient: an investigation of adults with Asperger syndrome or high functioning autism, and normal sex differences. J. Autism Dev. Disord. 34, 163–175. doi: 10.1023/B:JADD.0000022607.19833.00

Bentler, P. M. (1990). Comparative fit indexes in structural models. Psychol. Bull. 107, 238–246. doi: 10.1037/0033-2909.107.2.238

Bentler, P. M., and Bonett, D. G. (1980). Significance tests and goodness of fit in the analysis of covariance structures. Psychol. Bull. 88, 588–606. doi: 10.1037/0033-2909.88.3.588

Bertrand, P., Guegan, J., Robieux, L., McCall, C. A., and Zenasni, F. (2018). Learning empathy through virtual reality: multiple strategies for training empathy-related abilities using body ownership illusions in embodied virtual reality. Front. Robot AI 5. doi: 10.3389/frobt.2018.00026

Blanca, M. J., Arnau, J., López-Montiel, D., Bono, R., and Bendayan, R. (2013). Skewness and kurtosis in real data samples. Methodology: Eur. J. Res. Methods Behav. Soc. Sci. 9, 78–84. doi: 10.1027/1614-2241/a000057

Blunch, N. J. (2008). Introduction to structural equation modelling using SPSS and AMOS. California: SAGE Publications.

Bollen, K. A. (1989). Structural equations with latent variables. New York: Wiley

Bollen, K. A., and Long, J. S. (1993). Testing structural equations models. Newbury Park: Sage.

Bollen, K. A., and Stine, R. A. (1992). Bootstrapping goodness-of-fit measures in structural equation models. Sociol. Methods Res. 21, 205–229. doi: 10.1177/0049124192021002004

Brown, T. A. (2006). Confirmatory factor analysis for applied research. New York, NY: The Guilford Press.

Browne, M. W., and Cudeck, R. (1993). “Alternative ways of assessing model fit” in Testing structural equation models. eds. K. A. Bollen and J. S. Long (New York, NY: Sage), 136–162.

Bussey, K., Fitzpatrick, S., and Raman, A. (2015). The role of moral disengagement and self-efficacy in cyberbullying. J. Sch. Violence 14, 30–46. doi: 10.1080/15388220.2014.954045

Byrne, B. M. (2010). Structural equation modeling with AMOS: Basic concepts, applications, and programming. New York, NY: Routledge.

Cameron, C. D., Hutcherson, C. A., Ferguson, A. M., Scheffer, J. A., Hadjiandreou, E., and Inzlicht, M. (2019). Empathy is hard work: people choose to avoid empathy because of its cognitive costs. J. Exp. Psychol. Gen. 148, 962–976. doi: 10.1037/xge0000595

Carrier, L. M., Spradlin, A., Bunce, J. P., and Rosen, L. D. (2015). Virtual empathy: positive and negative impacts of going online upon empathy in young adults. Comput. Hum. Behav. 52, 39–48. doi: 10.1016/j.chb.2015.05.026

Cole, D. A. (1987). Utility of confirmatory factor analysis in test validation research. J. Consult. Clin. Psychol. 55, 584–594. doi: 10.1037/0022-006X.55.4.584

Cuadrado-Gordillo, I., and Fernández-Antelo, I. (2019). Analysis of moral disengagement as a modulating factor in adolescents’ perception of cyberbullying. Front. Psychol. 10:1222. doi: 10.3389/fpsyg.2019.01222

Davis, M. H. (1980). A multidimensional approach to individual differences in empathy. JSAS Catal. Sel. Doc. Psychol. 10:85.

Day, S., and Lazuras, L. (2016). The cyberbullying-specific moral disengagement questionnaire (CBMDQ-15). Available at: https://shura.shu.ac.uk/12890

Depaoli, S., Tiemensma, J., and Felt, J. M. (2018). Assessment of health surveys: fitting a multidimensional graded response model. Psychol. Health Med. 23, 13–31. doi: 10.1080/13548506.2018.1447136

Falla, D., Ortega-Ruiz, R., Runions, K., and Romera, E. M. (2020). Why do victims become perpetrators of peer bullying? Moral disengagement in the cycle of violence. Youth Society 54, 397–418. doi: 10.1177/0044118X20973702

Falla, D., Romera, E., and Ortega-Ruiz, R. (2021). Aggression, moral disengagement and empathy: a longitudinal study within the interpersonal dynamics of bullying. Front. Psychol. 12:703468. doi: 10.3389/fpsyg.2021.703468

Ferrando, P. J., and Lorenzo-Seva, U. (2017). Program FACTOR at 10: origins, development and future directions. Psicothema 29, 236–240. doi: 10.7334/psicothema2016.304

Ferrando, P. J., Vigil-Colet, A., and Lorenzo-Seva, U. (2016). Practical person-fit assessment with the linear FA model: new developments and a comparative study. Front. Psychol. 7:1973. doi: 10.3389/fpsyg.2016.01973

Ferreira, P. C., Veiga Simão, A. M., Paiva, A., Martinho, C., Prada, R., Ferreira, A., et al. (2021). Exploring empathy in cyberbullying with serious games. Computers and Education 166:104155. doi: 10.1016/j.compedu.2021.104155

Festl, R., Vogelgesang, J., Scharkow, M., and Quandt, T. (2017). Longitudinal patterns of involvement in cyberbullying: results from a latent transition analysis. Comput. Hum. Behav. 66, 7–15. doi: 10.1016/j.chb.2016.09.027

Francisco, S. M. (2022). The Role of Moral Disengagement in Cyberbullying. [Doctoral Dissertation]. Faculty of Psychology, University of Lisbon.

Francisco, S. M., Ferreira, P. C., and Veiga Simão, A. M. (2022). Behind the scenes of cyberbullying: personal and normative beliefs across profiles and moral disengagement mechanisms. Int. J. Adolescence and Youth 27, 337–361. doi: 10.1080/02673843.2022.2095215

Francisco, S. M., Veiga Simão, A. M., Ferreira, P. C., and Martins, M. J. D. (2015). Cyberbullying: the hidden side of college students. Comput. Hum. Behav. 43, 167–182. doi: 10.1016/j.chb.2014.10.045

Früh, W., and Wünsch, C. (2009). Empathie und Medienempathie [Empathy and media empathy]. Publizistik 54, 191–215. doi: 10.1007/s11616-009-0038-9

Fuchs, T. (2014). The virtual other: empathy in the age of virtuality. J. Conscious. Stud. 21, 152–173.

Gantiva, C., Sotaquirá, M., Araujo, A., and Cuervo, P. (2019). Cortical processing of human and emoji faces: an ERP analysis. Behav Inform Technol 39, 935–943. doi: 10.1080/0144929X.2019.1632933

Gao, L., Liu, J., Wang, W., Yang, J., Wang, P., and Wang, X. (2020). Moral disengagement and adolescents’ cyberbullying perpetration: Student relationship and gender as moderators. Child Youth Serv. Rev. 116:105119. doi: 10.1016/j.childyouth.2020.105119

Garaigordobil, M. (2015). Psychometric properties of the cyberbullying test: a screening instrument to measure cybervictimization, cyberaggression and cyberobservation. J. Interpers. Violence 32, 3556–3576. doi: 10.1177/0886260515600165

García-Pérez, R., Santos-Delgado, J. M., and Buzón-García, O. (2016). Virtual empathy as digital competence in education 3.0. International journal of educational technology. High. Educ. 13, 1–10. doi: 10.1186/s41239-016-0029-7

George, D., and Mallery, P. (2016). IBM SPSS statistics 23 step by step: A simple guide and reference (13th ed.). New York, NY: Routledge.

Gerdes, K. E., Segal, E. A., and Lietz, C. A. (2010). Conceptualising and measuring empathy. Br. J. Soc. Work. 40, 2326–2343. doi: 10.1093/bjsw/bcq048

Gorostiaga, A., Balluerka, N., and Soroa, G. (2014). Assessment of empathy in educational field and its relationship with emotional intelligence. Revist. Educ. 364, 12–38.

Haddock, A. D., and Jimerson, S. R. (2017). An examination of differences in moral disengagement and empathy among bullying participant groups. J. Relat. Res. 8, 1–15. doi: 10.1017/jrr.2017.15

Hair, J. F., Black, W. C., Babin, B. J., and Anderson, R. E. (2014). Multivariate data analysis: Pearson new international edition. Essex: Pearson Education Limited.

Hall, J. A., and Schwartz, R. (2019). Empathy present and future. J. Soc. Psychol. 159, 225–243. doi: 10.1080/00224545.2018.1477442

Happ, C., and Pfetsch, J. (2015). Medienbasierte Empathie (MBE) [Media-based empathy (MBE)]. Diagnostica 62, 1–16. doi: 10.1026/0012-1924/a000152

Hayes, A. F., and Coutts, J. J. (2020). Use omega rather than Cronbach's alpha for estimating reliability. But. Commun. Methods Measur. 14, 1–24. doi: 10.1080/19312458.2020.1718629

Hayton, J. C., Allen, D. G., and Scarpello, V. (2004). Factor retention decisions in exploratory factor analysis: a tutorial on parallel analysis. Organ. Res. Methods 7, 191–205. doi: 10.1177/1094428104263675

Henseler, J., Ringle, C. M., and Sinkovics, R. R. (2009). The use of partial least squares path modeling in international marketing. Adv. Int. Mark. 20, 277–319. doi: 10.1108/S1474-7979(2009)0000020014

Hinduja, S., and Patchin, J. W. (2009). Bullying beyond the schoolyard: Preventing and responding to cyberbullying. Thousand Oaks, CA: Sage Publications.

Hyde, L. W., Shaw, D. S., and Moilanen, K. L. (2010). Developmental precursors of moral disengagement and the role of moral disengagement in the development of antisocial behavior. J. Abnorm. Child Psychol. 38, 197–209. doi: 10.1007/s10802-009-9358-5

Ilgunaite, G., Giromini, L., and Di Girolamo, M. (2017). Measuring empathy: a literature review of available tools. Appl. Psychol. Bull. 65, 2–28.

Jolliffe, D., and Farrington, D. P. (2006). Development and validation of the basic empathy scale. J. Adolesc. 29, 589–611. doi: 10.1016/j.adolescence.2005.08.010

Jöreskog, K. G., and Sörbom, D. (1982). Recent developments in structural equation modeling. J. Mark. Res. 19, 404–416. doi: 10.1177/002224378201900402

Jöreskog, K. G., and Sörbom, D. (1984). Advances in factor analysis and structural equation models. Lanham: Rowman & Littlefield Publishers.

Kanyongo, G. Y. (2005). Determining the correct number of components to extract from a principal components analysis: a Monte Carlo study of the accuracy of the scree plot. J. Mod. Appl. Stat. Methods 4, 120–133. doi: 10.22237/jmasm/1114906380

Klimecki, O. M. (2019). The role of empathy and compassion in conflict resolution. Emot. Rev. 11, 310–325. doi: 10.1177/1754073919838609

Kowalski, R. M., Giumetti, G. W., Schroeder, A. N., and Lattanner, M. R. (2014). Bullying in the digital age: a critical review and meta-analysis of cyberbullying research among youth. Psychol. Bull. 140, 1073–1137. doi: 10.1037/a0035618

Lo Cricchio, M. G., García-Poole, C., Te Brinke, L. W., Bianchi, D., and Menesini, E. (2020). Moral disengagement and cyberbullying involvement: a systematic review. Eur. J. Dev. Psychol. 18, 271–311. doi: 10.1080/17405629.2020.1782186

López-Pérez, B., Fernández-Pinto, I., and Abad, F. J. (2008). TECA. Test de Empatía Cognitiva y Afectiva [TECA. Cognitive and Affective Empathy Test]. Madrid: TEA Ediciones, S.A.

Lorenzo-Seva, U., and Ferrando, P. J. (2019). Robust promin: a method for diagonally weighted factor rotation. Rev. Peruana Psicol. 25, 99–106. doi: 10.24265/liberabit.2019.v25n1.08

Macaulay, P., and Boulton, M. J. (2017). Adolescent bystander responses to offline and online bullying: the role of bullying severity and empathy. In Proceedings of the 22nd Annual CyberPsychology, Cyber Therapy & Social Networking Conference, University of Wolverhampton, Wolverhampton.

Manasia, L., and Chicioreanu, T. D. (2017). Does the internet shape our mind? The case of virtual empathy in future teachers. eLearn. Softw. Educ. 2, 397–404. doi: 10.12753/2066-026X-17-141

Marín-López, I., Zych, I., Ortega-Ruiz, R., and Monks, C. (2019). “Validación y propiedades psicométricas del Cuestionario de Empatía Online y el Cuestionario de Desconexión Moral a través de las Tecnologías [Validation and psychometric properties of the Online Empathy Questionnaire and the Moral Disengagement through Technologies Questionnaire]” in Creando Redes Doctorales Vol. VII “Investiga y Comunica”. eds. A. F. Chica Pérez and J. Mérida García (Córdoba, Spain: UCOPress), 525–528.

Marín-López, I., Zych, I., Ortega-Ruiz, R., Monks, C. P., and Llorent, V. J. (2020). Empathy online and moral disengagement through technology as longitudinal predictors of cyberbullying victimization and perpetration. Child Youth Serv. Rev. 116, 1–8.

Marôco, J. (2014). Análise de equações estruturais: Fundamentos teóricos, software e aplicações [Structural equation analysis: Theoretical foundations, software and applications]. 2nd Edn. Pero Pinheiro.

Matteucci, M., and Stracqualursi, L. (2006). Student assessment via graded response model. Statistica 66, 435–447.

Mazzone, A., Yanagida, T., Caravita, S. C. S., and Strohmeier, D. (2019). Moral emotions and moral disengagement: concurrent and longitudinal associations with aggressive behavior among early adolescents. J. Early Adolesc. 39, 839–863. doi: 10.1177/0272431618791276

McDonald, R. P. (1999). Test theory: A unified treatment. Mahwah, NJ: Lawrence Erlbaum Associates.

McGraw, K. O., and Wong, S. P. (1996). Forming inferences about some intraclass correlation coefficients. Psychol. Meth. 1, 30–46. doi: 10.1037/1082-989X.1.1.30

Meter, D. J., and Bauman, S. (2018). Moral disengagement about cyberbullying and parental monitoring: effects on traditional bullying and victimization via cyberbullying involvement. J. Early Adoles. 38, 303–326. doi: 10.1177/0272431616670752

Muthén, B., du Toit, S.H.C., and Spisic, D. (1997). Robust inference using weighted least squares and quadratic estimating equations in latent variable modeling with categorical and continuous outcomes. Unpublished technical report.

Muthén, B., and Kaplan, D. (1985). A comparison of some methodologies for the factor analysis of non-normal Likert variables. Br. J. Math. Stat. Psychol. 38, 171–189. doi: 10.1111/j.2044-8317.1985.tb00832.x

O’Connor, B. P. (2000). SPSS and SAS programs for determining the number of components using parallel analysis and Velicer’s MAP test. Behav. Res. Methods Instrum. Comput. 32, 396–402.

O’Neil, B. (2013). Mirror, mirror on the screen, what does all this ASCII mean?: a pilot study of spontaneous facial mirroring of emotions. Arbutus Rev 4, 19–44. doi: 10.18357/tar41201312681

Osler, L. (2021). Taking empathy online. Inquiry, 1–28. doi: 10.1080/0020174X.2021.1899045

Paciello, M., Tramontano, C., Nocentini, A., Fida, R., and Menesini, E. (2020). The role of traditional and online moral disengagement on cyberbullying: do externalising problems make any difference? Comput. Hum. Behav. 103, 190–198. doi: 10.1016/j.chb.2019.09.024

Pfetsch, J. S. (2017). Empathic skills and cyberbullying: relationship of different measures of empathy to cyberbullying in comparison to offline bullying among young adults. J. Genet. Psychol. 178, 58–72. doi: 10.1080/00221325.2016.1256155

Preece, J. (1999). Empathy online. Virtual Reality 4, 74–84. doi: 10.1007/BF01434996

R Core Team. (2020). R: A language and environment for statistical computing. Vienna: R Foundation for Statistical Computing

Reckase, M. D. (1985). The difficulty of test items that measure more than one ability. Appl. Psychol. Meas. 9, 401–412. doi: 10.1177/014662168500900409

Redondo, I., and Herrero-Fernández, D. (2018). Adaptación del Empathy Quotient (EQ) en una muestra española [Adaptation of the Empathy Quotient (EQ) in a Spanish sample]. Ter. Psicol. 36, 81–89. doi: 10.4067/S0718-48082018000200081

Rodrigues, J., Lopes, A., Giger, J.-C., Gomes, A., Santos, J., and Gonçalves, G. (2011). Escalas de medição do Quociente de Empatia/Sistematização: Um ensaio de validação para a população portuguesa [Empathy/Systemizing Quotient measurement scales: A validation study for the Portuguese population]. Psicologia 25, 73–89. doi: 10.17575/rpsicol.v25i1.280

Romera, E. M., Ortega-Ruiz, R., Runions, K., and Falla, D. (2021). Moral disengagement strategies in online and offline bullying. Psychosoc. Interv. 30, 85–93. doi: 10.5093/pi2020a21

Rosseel, Y. (2012). Lavaan: an R package for structural equation modeling. J. Stat. Softw. 48, 1–36. doi: 10.18637/jss.v048.i02

Runions, K. C., and Bak, M. (2015). Online moral disengagement, cyberbullying, and cyber-aggression. Cyberpsychol. Behav. Soc. Netw. 18, 400–405. doi: 10.1089/cyber.2014.0670

Samejima, F. (1969). Estimation of Latent Ability Using a Response Pattern of Graded Scores. Psychometrika 34, 1–97. doi: 10.1007/BF03372160

Schultze-Krumbholz, A., Hess, M., Pfetsch, J., and Scheithauer, H. (2018). Who is involved in cyberbullying? Latent class analysis of cyberbullying roles and their associations with aggression, self-esteem, and empathy. J. Psychosocial Res. Cyberspace 12. doi: 10.5817/CP2018-4-2

Smith, P. K., and Slonje, R. (2010). “Cyberbullying: the nature and extent of a new kind of bullying, in and out of school” in In handbook of bullying in schools: an international perspective. eds. S. R. Jimerson, S. M. Swearer, and D. L. Espelage (Routledge)

Suler, J. (2004). The online disinhibition effect. Cyberpsychol. Behav. 7, 321–326. doi: 10.1089/1094931041291295

Tampke, E. C., Fite, P. J., and Cooley, J. L. (2020). Bidirectional associations between affective empathy and proactive and reactive aggression. Aggress. Behav. 46, 317–326. doi: 10.1002/ab.21891

Terry, C., and Cain, J. (2016). The emerging issue of digital empathy. Am. J. Pharm. Educ. 80:58. doi: 10.5688/ajpe80458

Thornberg, R., and Jungert, T. (2014). School bullying and the mechanisms of moral disengagement. Aggress. Behav. 40, 99–108. doi: 10.1002/ab.21509

Thornberg, R., Wänström, L., Pozzoli, T., and Hong, J. S. (2019). Moral disengagement and school bullying perpetration in middle childhood: A short-term longitudinal study in Sweden. J. School Viol. 18, 585–596. doi: 10.1080/15388220.2019.1636383

Tillman, C., Gonzalez, K., Whitman, M. V., Crawford, W. S., and Hood, A. C. (2018). A multi-functional view of moral disengagement: exploring the effects of learning the consequences. Front. Psychol. 8, 1–14. doi: 10.3389/fpsyg.2017.02286

Timmerman, M. E., and Lorenzo-Seva, U. (2011). Dimensionality assessment of ordered polytomous items with parallel analysis. Psychol. Methods 16, 209–220. doi: 10.1037/a0023353

Wakabayashi, A., Baron-Cohen, S., Wheelwright, S., Goldenfeld, N., Delaney, J., Fine, D., et al. (2006). Development of short forms of the empathy quotient (EQ-short) and the systemizing quotient (SQ-short). Personal. Individ. Differ. 41, 929–940. doi: 10.1016/j.paid.2006.03.017

Walther, J. B. (1992). Interpersonal effects in computer-mediated interaction: a relational perspective. Commun. Res. 19, 52–90. doi: 10.1177/009365092019001003

Walther, J. (1995). Relational aspects of computer-mediated communication. Organ. Sci. 6, 186–203. doi: 10.1287/orsc.6.2.186

Walther, J. B. (2015). “Social information processing theory (CMC)” in The International Encyclopedia of Interpersonal Communication. eds. C. R. Berger, M. E. Roloff, S. R. Wilson, J. P. Dillard, J. Caughlin and D. Solomon. 1–13.

Wang, C., and Goldberg, T. S. (2017). Using children’s literature to decrease moral disengagement and victimization among elementary school students. Psychol. Schools 54, 918–931. doi: 10.1002/pits.22042

Woods, C. M. (2006). Careless responding to reverse-worded items: implications for confirmatory factor analysis. J. Psychopathol. Behav. Assess. 28, 186–191. doi: 10.1007/s10862-005-9004-7

Yuan, K. H., and Bentler, P. M. (1998). Structural equation modeling with robust Covariances. Sociol. Methodol. 28, 363–396. doi: 10.1111/0081-1750.00052

Zhao, L., and Yu, J. (2021). A meta-analytic review of moral disengagement and cyberbullying. Front. Psychol. 12:681299. doi: 10.3389/fpsyg.2021.681299

Zhu, C., Huang, S., Evans, R., and Zhang, W. (2021). Cyberbullying among adolescents and children: A comprehensive review of the global situation, risk factors, and preventive measures. Front. Public Health. 9:634909. doi: 10.3389/fpubh.2021.634909

Zwick, R. W., and Velicer, F. V. (1986). Comparison of five rules for determining the number of components to retain. Psychol. Bull. 99, 432–442. doi: 10.1037/0033-2909.99.3.432

Keywords: assessing empathy online, measuring moral disengagement in cyberbullying, instruments, cyberbullying, adolescents

Citation: Francisco SM, da Costa Ferreira P, Veiga Simão AM and Pereira NS (2023) Measuring empathy online and moral disengagement in cyberbullying. Front. Psychol. 14:1061482. doi: 10.3389/fpsyg.2023.1061482

Received: 04 October 2022; Accepted: 29 March 2023;
Published: 27 April 2023.

Edited by:

Carla Canestrari, University of Macerata, Italy

Reviewed by:

Christine Linda Cook, National Chengchi University, Taiwan
Maria Grazia Lo Cricchio, University of Basilicata, Italy
Lijun Zhao, Liaocheng University, China

Copyright © 2023 Francisco, da Costa Ferreira, Veiga Simão and Pereira. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Sofia Mateus Francisco, sofifrancisco@gmail.com
