
ORIGINAL RESEARCH article

Front. Virtual Real., 21 December 2022
Sec. Virtual Reality and Human Behaviour
This article is part of the Research Topic "Human Spatial Perception, Cognition, and Behaviour in Extended Reality".

Does distance matter? Embodiment and perception of personalized avatars in relation to the self-observation distance in virtual reality

  • 1Human-Computer Interaction Group, University of Würzburg, Würzburg, Germany
  • 2Psychology of Intelligent Interactive Systems Group, University of Würzburg, Würzburg, Germany
  • 3Computer Graphics Group, TU Dortmund University, Dortmund, Germany

Virtual reality applications employing avatar embodiment typically use virtual mirrors to allow users to perceive their digital selves not only from a first-person but also from a holistic third-person perspective. However, due to distance-related biases such as the distance compression effect or a reduced relative rendering resolution, the self-observation distance (SOD) between the user and the virtual mirror might influence how users perceive their embodied avatar. Our article systematically investigates the effects of a short (1 m), middle (2.5 m), and far (4 m) SOD between users and the mirror on the perception of their personalized and self-embodied avatars. The avatars were photorealistically reconstructed using state-of-the-art photogrammetric methods. Thirty participants repeatedly faced their real-time animated self-embodied avatars in each of the three SOD conditions, during which the avatars' body weight was repeatedly altered, and rated the 1) sense of embodiment, 2) body weight perception, and 3) affective appraisal towards their avatar. We found that the different SODs are unlikely to influence any of our measures except the perceived body weight estimation difficulty, which participants rated significantly higher for the farthest SOD. We further found that the participants' self-esteem significantly impacted their ability to modify their avatar's body weight to their current body weight and that it positively correlated with the perceived attractiveness of the avatar. Additionally, the participants' concerns about their body shape affected how eerie they perceived their avatars to be. The participants' self-esteem and concerns about their body shape also influenced the perceived body weight estimation difficulty. We conclude that the virtual mirror in embodiment scenarios can be freely placed and varied within a distance of one to four meters from the user without expecting major effects on the perception of the avatar.

1 Introduction

Avatars are digital self-representations controlled by their users within a virtual environment (Bailenson and Blascovich, 2004). In virtual reality (VR), users can not only control their avatar but also embody it from a first-person perspective, seeing their avatar’s virtual body moving where their physical body normally would be located (Slater et al., 2010; Debarba et al., 2015). In consequence, users can develop the feeling of owning and controlling their virtual body as their own body, called the sense of embodiment (SoE, Kilteni et al., 2012). Unlike the physical body, the virtual body is easily adjustable in various ways (e.g., body shape, body size, skin color). Virtual mirrors are used to make users aware of their altered appearance by providing a holistic third-person perspective on their virtual body (Inoue and Kitazaki, 2021). The observed modified self-appearance can induce human perceptual or behavioral changes based on the Proteus Effect (Ratan et al., 2020), which originally describes the phenomenon that users adapt their behavior according to the behavior they expect from the appearance of their embodied avatar (Yee and Bailenson, 2007).

In mental health, the serious application of avatar embodiment can be particularly valuable (Aymerich-Franch, 2020; Matamala-Gomez et al., 2021). A good example is the treatment of misperceptions of body dimensions (i.e., body weight or size) in body image distortions (World Health Organization, 2019), where exposure to one's own body in a mirror is an elementary part of the treatment strategy (Delinsky and Wilson, 2006; Griffen et al., 2018). To improve such mirror exposure, the embodiment of avatars in VR offers novel opportunities for working on body perception (Turbyne et al., 2021). Affected individuals may face their photorealistic and highly personalized avatar in a virtual mirror, which can then be realistically modified in its body weight or size (Mölbert et al., 2018; Döllinger et al., 2022). Hence, scenarios helping to uncover and visualize individuals' mental body image or to deal intensively with their current or desired body weight become conceivable (Döllinger et al., 2019). A recent review by Turbyne et al. (2021) even showed that the user's mental body image could correspond to the avatar's body after exposure, suggesting great potential for further research and application.

When using embodied avatars as a predefined stimulus for inducing perceptual and behavioral changes in serious applications, it is vital to ensure that users perceive their avatars as intended. However, prior research has shown that system- and application-related factors, such as the display type used (Wolf et al., 2020a; Wolf et al., 2022b), the observation perspective (Thaler et al., 2019; Neyret et al., 2020), or the application of embodiment itself (Wolf et al., 2021), can inadvertently impact how users perceive their embodied avatar. Another influencing factor could be the self-observation distance (SOD) on the embodied avatar when using a virtual mirror. Imagine a thought experiment in which a mirror moves further and further away from an observer until it reaches a distance at which the observer can no longer recognize the reflection. In the context of virtual mirror exposure, this means that the provided third-person perspective on the predefined stimulus, the embodied avatar, will diminish over distance until the stimulus can no longer be recognized and its potential effect is gone. Prior work points in a similar direction, as Wolf et al. (2022a) and Wolf et al. (2022b) recently raised the question of whether different SODs might be the reason for heterogeneous results between mirror exposure studies. Indeed, although it is known that distance-related biases significantly influence a user's perception of a virtual environment (Renner et al., 2013; Kelly, 2022), no research seems to have yet investigated the effects of the distance between a virtual mirror and the user on the perception of their embodied avatar. Performing a meta-analysis of previous work also seems difficult, as most works using virtual mirrors do not report details about the placement of the virtual mirror in relation to the user's position (e.g., of 17 studies from a review on the Proteus Effect (Ratan et al., 2020), only four reported details on the mirror placement). To address the identified research gap, we derive the following research question for our work:

RQ: How does the self-observation distance between the user-embodied avatar and its presentation in a virtual mirror affect the user’s avatar perception?

To investigate our posed research question, we systematically manipulated the distance between user-embodied avatars and a virtual mirror in a controlled user study. Participants repeatedly observed their embodied avatar at different distances in the virtual mirror and judged it concerning the induced SoE, the perceived body weight, and the affective appraisal. In the following, we present further related work on distance-related factors potentially influencing avatar perception and the different captured measures.

2 Related work

2.1 Distance-related factors influencing avatar perception

When thinking back to the above-introduced thought experiment and imagining it conducted in VR, some further limitations become apparent. Increasing the distance between an observer and the virtual mirror leads to a decrease in the relative size of the mirror in the presented field of view (FoV) of the observer (Sedgwick, 1986). This ultimately results in a decreased relative size of the mirror compared to the whole rendered scene and, therefore, in a reduced rendering resolution of the avatar. As commercially available HMDs still have a limited display resolution compared to the human eye (Angelov et al., 2020), the resolution of the avatar presented in the virtual mirror decreases much faster with increasing SOD than it would in reality. Consequently, the observer receives less detailed visual information about the embodied avatar, which might be reflected in an altered perception of the avatar.
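To make the resolution argument concrete, the following Python sketch (our own illustration, not part of the original study) approximates the vertical angular size and pixel coverage of the avatar's mirror image at the three SODs used later in this article. It assumes an avatar height of 1.7 m, that the mirror image appears at twice the SOD (the virtual image lies as far behind the mirror as the observer stands in front of it), and the roughly 14.6 vertical pixels per degree reported for the HMD in Section 3.4.1.

```python
# Illustrative sketch (not from the study): approximate vertical angular size and pixel
# coverage of the avatar's mirror reflection at different self-observation distances.
import math

AVATAR_HEIGHT_M = 1.7  # assumed body height
VERTICAL_PPD = 14.6    # vertical pixels per degree reported for the Valve Index (Section 3.4.1)

def mirror_image_size(sod_m: float) -> tuple[float, float]:
    """Return (angular height in degrees, approx. vertical pixels) of the reflection."""
    viewing_distance = 2.0 * sod_m  # observer-to-mirror plus mirror-to-virtual-image distance
    angle_deg = math.degrees(2.0 * math.atan(AVATAR_HEIGHT_M / (2.0 * viewing_distance)))
    return angle_deg, angle_deg * VERTICAL_PPD

for sod in (1.0, 2.5, 4.0):
    angle, pixels = mirror_image_size(sod)
    print(f"SOD {sod} m: ~{angle:.1f} deg, ~{pixels:.0f} px vertically")
```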

Another influencing factor could be the distance compression effect, which states that individuals underestimate egocentric distances (i.e., the distance between an object and its observer) in VR compared to reality (Renner et al., 2013; Kelly, 2022). Although research shows that absolute underestimates are more pronounced above 10 m (Loomis and Knapp, 2003), the effect also occurs at shorter egocentric distances, which are more common in avatar embodiment scenarios (Willemsen and Gooch, 2002). Importantly, the ability to precisely estimate distances is essential for creating accurate mental maps of perceived space. These maps are then used to judge the dimensions and positions of objects in space in relation to each other (Epstein et al., 2017; Wienrich et al., 2019). When a distance is misperceived in a virtual environment, the size-distance relations learned in reality are no longer applicable and require cognitive adaptation (Loomis and Knapp, 2003). This reasoning is supported by the size-distance invariance hypothesis, which states that a perceived distance directly relates to an object's perceived size (Gilinsky, 1951; Kelly et al., 2018). A misperceived distance might therefore lead to a subconscious misperception of the size of a presented object, such as, in our case, an avatar. This would be particularly troubling in the context of the introduced body image-related application (Turbyne et al., 2021). Here, the avatar's dimensions, as a well-defined stimulus, could be misperceived as a consequence of a misjudged distance between the user and the virtual mirror, ultimately compromising the intended perceptual adaptation effect. However, as Renner et al. (2013) summarized, there are potential compensational cues improving spatial perception. The most important in the context of our work is the application of avatar embodiment itself, since displaying a virtual body in the first-person perspective can serve as a well-known visual reference partially compensating distance compression effects (Ries et al., 2008; Mohler et al., 2010; Leyrer et al., 2011). Hence, the question arises whether the known spatial distortions lead to a distorted perception of the embodied avatar in the virtual mirror or whether a possible effect is compensated.

To investigate the effects of an altered SOD on avatar perception in our study, we manipulated the distance between the avatar-embodied user and the mirror based on the few distances reported in previous studies (Ratan et al., 2020; Turbyne et al., 2021). We ensured that our tested range covered the extracted distances (i.e., 1 m–2.5 m) and extended it further to 4 m to cover potentially larger distances not reflected in prior work. Due to the drastically reduced relative size of the avatar reflection in the virtual mirror, we considered distances greater than 4 m irrelevant for practical use in mirror exposure. To check whether participants recognized the manipulation of the SOD and whether a distance compression effect occurred, we formulate the following hypotheses:

H1.1:  A variation of the SOD has a significant influence on the estimation of the perceived egocentric distance to the virtual mirror.

H1.2:  The perceived egocentric distance to the virtual mirror will be underestimated compared to the actual SOD.

2.2 Assessing avatar perception

Various measures are suitable to capture the effects of SOD variations on avatar perception. In the following, we present measures that we consider relevant in the context of mirror exposure for behavioral and perceptual adaption. We further classify the expected effects of an altered SOD on the measures in relation to the potentially influencing factors described above.

2.2.1 Sense of embodiment, self-similarity, and self-attribution

A user’s subjective reaction to embodying an avatar can be captured by the aforementioned SoE, which is also considered a strong moderator of activating behavioral and perceptual adaptation in mirror exposure (Kilteni et al., 2013; Mal et al., 2023). The SoE originates from coherence between corresponding body-related sensory impressions perceived simultaneously in reality and virtuality, leading to the feeling of really owning and controlling the avatar’s virtual body (Ijsselsteijn et al., 2006; Slater et al., 2009; Slater et al., 2010). It comprises three components: 1) Virtual body ownership (VBO) is the feeling of really owning the virtual body, 2) agency is the feeling of controlling the virtual body, and 3) self-location is the feeling of being in the virtual body (Kilteni et al., 2012). For assessing the sub-components, well-established standardized questionnaires can be used (Roth and Latoschik, 2020; Peck and Gonzalez-Franco, 2021).

When investigating the influence of SOD on avatar perception in mirror exposure scenarios, it plays an important role on which observation perspective an impression is based (e.g., first-person, third-person, or a combination of both). We expect that an altered SOD to the mirror can only influence the results of a measure when the third-person perspective is crucial for assessing the corresponding impression. For example, the feeling of VBO is expected to rise when a user receives a third-person perspective on the embodied avatar, as the holistic view of the body, including the face, provides more potential cues for self-recognition (Spanlang et al., 2014; Inoue and Kitazaki, 2021). However, these cues might be less recognizable when the mirror image is rendered at a lower resolution in the FoV due to an increased SOD. This is particularly relevant in the case of personalized avatars since areas of the face are considered to be important for self-recognition (Tsakiris, 2008). Waltemate et al. (2018) further showed that avatar personalization significantly increases VBO. Hence, when important cues for self-recognition (e.g., self-similarity and self-attribution) are no longer recognizable at an increased SOD, a reduced feeling of VBO can be expected. Furthermore, a potential distance compression effect could impact VBO by breaking the plausibility of the body representation (Latoschik and Wienrich, 2022; Fiedler et al., 2023), as the perceived size of the avatar in the mirror might not match the user's expectations acquired from the real world.

In contrast, the feeling of agency refers to controlling the avatar's movements, which are perceivable from both the first-person and third-person perspectives. Here, we expect a potential distance compression effect to have no impact on agency (Gonzalez-Franco et al., 2019). Although the distance to the third-person representation could be subjectively misjudged, the movements are still clearly visible in the first-person perspective. Moreover, Gorisse et al. (2017) showed that the sense of agency is similarly pronounced when receiving visuomotor feedback from first-person and third-person perspectives, suggesting that the first-person perspective is generally sufficient to evoke a sense of agency. Consequently, a reduced resolution in the simultaneously presented third-person perspective should have a negligible effect on agency. Similarly, Debarba et al. (2017) showed that agency is insensitive to a change in the observation perspective as long as there are no multisensory inconsistencies in the embodiment. Hence, we do not expect an altered SOD to influence the sense of agency as long as a first-person perspective is simultaneously presented.

Similarly, the feeling of self-location seems to be mainly driven by having a first-person perspective on the avatar's virtual body. It might only be affected by the third-person perspective when a strong incongruence between the spatial localization in both perspectives occurs (Kilteni et al., 2012; Gorisse et al., 2017). An example could be a curved or distorted mirror, which breaks the plausibility of the mirror reflection by showing the avatar in a different location than the user would expect, thus questioning the fact that the user is really embodying the avatar (Higashiyama and Shimono, 2004; Inoue and Kitazaki, 2021). We expect that a potential distance compression effect would not lead to such an incongruence between the provided perspectives. Moreover, we do not expect a reduced resolution of the mirror image to affect self-location, as it rather removes than distorts the third-person perspective. Hence, we expect that modifying only the SOD on the third-person perspective will not significantly impact self-location in usual embodiment scenarios. Based on the reasoning regarding the influence of SOD on the different dimensions of SoE, self-similarity, and self-attribution presented in this section, we hypothesize the following:

H2.1:  An increase in the SOD results in a lower VBO, self-similarity, and self-attribution towards the embodied avatar.

H2.2:  A variation of the SOD does not affect agency and self-location towards the embodied avatar.

2.2.2 Body weight estimation

In the proposed serious application of avatar embodiment in virtual mirror exposure for treating body image-related misperceptions, the estimation of body weight can be a valuable tool for investigating and working on a user’s body image (Thaler, 2019; Döllinger et al., 2022). However, when using the avatar as a body image-related stimulus, care should be taken to ensure that the avatar’s virtual body is perceived accurately without being distorted by individual-, system- or application-related factors. To explore such factors, prior works modified the body weight of static photorealistic virtual humans and used body weight estimations to discover influences of the avatar personalization or the estimators’ gender or body weight (Piryankova et al., 2014; Thaler et al., 2018a; Thaler et al., 2018b; Mölbert et al., 2018). Wolf et al. (2020), Wolf et al. (2022a) and Wolf et al. (2022b) further highlighted vast differences in avatar perception between different kinds of VR and augmented reality (AR) displays using a similar approach but with embodied avatars.

Concerning the role of the SOD in body weight perception, Thaler et al. (2019) showed significant differences in body weight estimations between first- and third-person perspectives, highlighting the third-person perspective as the more accurate and important one. Neyret et al. (2020) further compared the influence of perspective on general avatar perception and highlighted the importance of a third-person presentation for providing a less self-biased view of the avatar. As the impression of an embodied avatar's body weight seems to be driven by the third-person perspective, our introduced distance-related factors potentially impact the perception of the avatar's body weight. However, it is likely that the learned proportions of a familiar body obscure any potential effects that could be attributed to a misperception of egocentric distances (Mohler et al., 2010; Renner et al., 2013; Gonzalez-Franco et al., 2019). Especially when using personalized avatars, we consider body-related cues to be particularly strong, as we usually know exactly the proportion between, for example, our arm's length and other body dimensions (Stefanucci and Geuss, 2009). This has also been noticed by Higashiyama and Shimono (2004), who already suspected a person's familiarity as a potential reason for violations of the size-distance invariance hypothesis when estimating a person's size at different distances. However, empirical evidence in the context of our work is still pending. Another open question is at what SOD the resolution of the displayed avatar is reduced to such an extent that body weight can no longer be reliably estimated and whether this distance is within the relevant range for practical application. Although it can be deduced that an avatar's body weight becomes more difficult and uncertain to estimate with increasing distance, no work seems to have addressed this issue before. Based on the reasoning regarding the influence of SOD on body weight perception presented in this section, we hypothesize the following:

H3.1:  A variation of the SOD does not affect the overall estimations of the embodied avatar’s body weight.

H3.2:  An increase in the SOD results in a higher uncertainty in estimating the embodied avatar’s body weight.

H3.3:  An increase in the SOD results in a higher perceived difficulty in estimating the embodied avatar’s body weight.

2.2.3 Affective appraisal

When working on body perception in VR, different work suggests using photorealistically personalized avatars (Mölbert et al., 2018; Turbyne et al., 2021; Döllinger et al., 2022). However, highly realistic human-like avatars are prone to fall into the Uncanny Valley (Mori, 1970), which could affect the plausibility and credibility of the whole experience (Latoschik and Wienrich, 2022; Fiedler et al., 2023) and possibly prevent behavioral and perceptual adaptation effects (Wienrich et al., 2021). In general, the Uncanny Valley defines a perceptual range in which the affective appraisal of an avatar paradoxically changes from pleasant to uncanny as soon as it approaches but has not yet fully reached a human-like representation (Mori et al., 2012). To measure the affective appraisal of an avatar regarding the Uncanny Valley effect, Ho and MacDorman (2017) introduced a revised version of their questionnaire, often denoted as the “Uncanny Valley Index”, for capturing the perceived humanness, attractiveness, and eeriness of an avatar.

There has been prior work on the affective appraisal of virtual humans in dependence on different factors like their stylistics (Hepperle et al., 2020; Hepperle et al., 2022), reconstruction method (Bartl et al., 2021), anthropomorphism (Chaminade et al., 2007; Lugrin et al., 2015), or the display type they are perceived with (Wolf et al., 2022b; Hepperle et al., 2022). However, no work seems to have explored the effects of altering the SOD. In the context of our study, we assume that the avatar's relative size in the user's FoV, and hence its rendering resolution, impacts the avatar's affective appraisal. Especially when using photorealistically personalized avatars with an almost reality-like appearance, as in our and similar works, users may be more attentive to their avatar's appearance. For example, Döllinger et al. (2022) noticed in their qualitative evaluation that minor defects in the reconstruction of personalized avatars might lead to a strong feeling of uncanniness, especially in the facial area. This is in line with Bartl et al. (2021), who asked participants to increase the distance between themselves and two differently reconstructed avatars until they could no longer tell which reconstruction was superior. Interestingly, participants chose a larger distance for personalized avatars than for generic avatars, indicating that small reconstruction errors are more noticeable on personalized avatars. As a presented avatar's resolution decreases with increasing SOD, the resulting blurred avatar rendering potentially hides reconstruction inaccuracies and self-recognition cues (similar to VBO). While we assume that the avatar will always be similarly recognizable as a human-like being, we expect that an altered SOD will impact eeriness and attractiveness. Based on the reasoning regarding the influence of SOD on affective appraisal presented in this section, we hypothesize the following:

H4.1:  A variation of the SOD does not affect the perceived humanness of the embodied avatar.

H4.2:  An increase in the SOD results in a lower perceived eeriness and a higher perceived attractiveness of the embodied avatar.

3 Materials and methods

In our user study, we systematically manipulated the distance between the embodied avatar and the virtual mirror between a short (1 m), middle (2.5 m), and far (4 m) distance. A total of 30 participants repeatedly embodied a photorealistically personalized avatar using a state-of-the-art consumer VR setup including body tracking. During the VR exposure, participants performed various body movement and body weight estimation tasks in front of a virtual mirror. The body weight estimation consisted of an active modification task (AMT) and a passive estimation task (PET). In the AMT, participants had to actively modify their avatar's body weight to their current, their ideal, and the population's average body weight. In the PET, the system repeatedly modified the avatar's body weight while participants had to estimate it. After exposure, participants answered questions regarding their SoE and affective appraisal of the avatar. Before conducting the study, we obtained ethical approval from the ethics committee of the Institute Human-Computer-Media (MCM) of the University of Würzburg without further obligations.

Concerning our manipulation, we expected that the variation of the SOD would be reflected in participants' distance estimations (H1.1) and that there would be a distance compression effect across all SODs (H1.2). For our results, we expected that an increased SOD would cause a declined feeling of VBO, self-similarity, and self-attribution (H2.1), while there would be no differences in agency and self-location (H2.2). We further predicted that there would be no differences in the estimation of the avatars' body weight (H3.1) but that the body weight estimation uncertainty and difficulty would rise with increasing SOD (H3.2 and H3.3). Lastly, we assumed that the participants would not perceive any differences in the avatar's humanness (H4.1) but that the eeriness of the avatar would decrease and the attractiveness would rise with increasing SOD (H4.2). In summary, our study systematically investigates the influence of the distance between the avatar and the virtual mirror in mirror exposure scenarios by capturing the user's perception and appraisal of the embodied avatar for the first time. It thus contributes to uncovering unintended influences that could arise from an uncontrolled SOD.

3.1 Participants

The study took place in a quiet laboratory at the University of Würzburg. We recruited a total of 30 bachelor students (19 female, 11 male) who received course credit in return. Prior to the experiment, we determined the following inclusion criteria: Participants should 1) dress appropriately for the body scan according to previously given instructions (e.g., tight clothes, no jewelry, hair tied together), 2) have normal or corrected-to-normal vision, 3) have no known sensitivity to motion and simulator sickness, and 4) not suffer from any mental or psychosomatic disease or body weight disorder. No participants had to be excluded. All participants were German native speakers. Six participants had no VR experience, 16 had used it between two and ten times before the experiment, five between 11 and 20 times, and three had used it more than 20 times. Therefore, most participants can be considered rather inexperienced with VR. Further demographic information and descriptive values for the sample-related control measures (explained in Section 3.3.4) can be found in Table 1.

TABLE 1. Age, body measurements, and the scores of the control variables of our sample.

3.2 Design

In a counterbalanced within-subjects design, the independent variable was the self-observation distance (SOD), i.e., the distance between the avatar and mirror, with three different levels: short (1 m), middle (2.5 m), and far (4 m). As dependent variables, we captured 1) SoE, 2) body weight perception, and 3) affective appraisal. SoE consists of the feeling of VBO, agency, and self-location. Additionally, we extended the SoE exploratively by the self-recognition-related feelings of self-similarity and self-attribution. Body weight perception consists of body weight estimations performed in the AMT and PET and the respective estimation difficulty. The affective appraisal consists of humanness, eeriness, and attractiveness. We further controlled for the participant’s 1) self-esteem, 2) body shape concerns, and 3) perceived distance to the mirror. All measures are explained in detail below.

3.3 Measures

Participants gave their answers on our measures either verbally during the VR experience or on a separate PC using LimeSurvey 4 (LimeSurvey GmbH, 2020) before and after the VR exposure. The exact measurement time for each measure can be found in Section 3.6. To conduct the questionnaires with German participants, we either used existing validated German versions of the questionnaires or translated the items to the best of our knowledge using back-and-forth translation.

3.3.1 Sense of embodiment, self-similarity, and self-attribution

The feeling of VBO and agency was measured using the Virtual Embodiment Questionnaire (VEQ, Roth and Latoschik, 2020). Participants answered four items for each dimension on a scale from 1 to 7 (7 = highest VBO, agency). To capture the participants’ feelings of self-location, self-similarity, and self-attribution towards their avatars, we exploratively extended the VEQ by four items for each of the added dimensions. The items were either created by ourselves or adapted from different prior work and rephrased to match the usual VEQ item phrasing. We call these new items VEQ+ in the course of our work. Following the VEQ, the items were captured on a scale from 1 to 7 (7 = highest self-location, self-similarity, self-attribution) and presented with the same instructions. The following list contains the items for each dimension, including the source when not self-created.

Self-Location

1) I felt as if I was located within the virtual body (Gonzalez-Franco and Peck, 2018).

2) I felt like I was located out of my body (Gonzalez-Franco and Peck, 2018).

3) I felt like my body was drifting towards the virtual body (Gonzalez-Franco and Peck, 2018).

4) I felt like my body was located where I saw the virtual body (Debarba et al., 2015).

Self-Similarity

1) The appearance of the virtual human’s face was similar to my face (Thaler et al., 2019).

2) The overall appearance of the virtual person was similar to me (Thaler et al., 2019).

3) I felt like the virtual human resembled me.

4) The appearance of the virtual human reminded me of myself.

Self-Attribution

1) I felt like the virtual human was me (Romano et al., 2014).

2) I could identify myself with the virtual human.

3) I had the feeling that the virtual human was behaving the way I would behave.

4) I felt like the virtual human had the same attributes as I have.

3.3.2 Body weight perception

In our study, body weight perception is composed of the two measurements explained below. The first captures the body weight estimations of the embodied avatar, while the second one captures the difficulty of the estimations. All measurements were taken during the VR exposure.

3.3.2.1 Body weight estimation

We captured the participants’ body weight estimations in kg during the AMT and PET as explained in Section 3.5.2. For estimations of the current body weight, we calculated the misestimation M based on the modified body weight m and the participant’s real body weight r as M = (m − r)/r. For the participants’ estimations from the PET, we calculated the misestimation M for each performed body weight modification as M = (e − p)/p, where e is the estimated body weight and p is the presented body weight of the avatar. A negative value of M always constitutes an underestimation of the avatar’s body weight, and a positive value constitutes an overestimation. Additionally, we calculated for all estimations 1) the average misestimation M̄ = (1/n) ∑_{k=1}^{n} M_k and 2) the absolute average percentage of misestimation Ā = (1/n) ∑_{k=1}^{n} |M_k|. Since M̄ considers under- and overestimations that may cancel each other out between different trials and participants, the results demonstrate the general ability to estimate the absolute body weight of the avatar across multiple trials. Therefore, it can be used to highlight systematic biases in the avatar perception between conditions. Since Ā accumulates only the absolute amount of misestimations and does not take their direction into account, it operationalizes the magnitude of the individual estimates. Therefore, it can be used as a good indicator of the difficulty of estimations between conditions: the higher the value, the more difficult the estimate. We refer to Döllinger et al. (2022) and Wolf et al. (2022b) for an advanced analysis of the proposed measures.
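The following minimal Python sketch summarizes how the above measures can be computed; it assumes estimates and reference weights given as plain values in kg, and the function names are ours rather than the authors’.

```python
# Minimal sketch of the misestimation measures defined above (illustrative, not the
# authors' implementation). Each trial is an (estimated_kg, reference_kg) pair.
def misestimation(estimated_kg: float, reference_kg: float) -> float:
    """Signed relative misestimation M: negative = underestimation, positive = overestimation."""
    return (estimated_kg - reference_kg) / reference_kg

def mean_misestimation(trials) -> float:
    """Average misestimation M-bar; opposite errors can cancel out."""
    values = [misestimation(e, r) for e, r in trials]
    return sum(values) / len(values)

def mean_absolute_misestimation(trials) -> float:
    """Absolute average misestimation A-bar; ignores the direction of each error."""
    values = [abs(misestimation(e, r)) for e, r in trials]
    return sum(values) / len(values)

# Example: three PET trials (estimated kg, presented kg)
trials = [(68.0, 70.0), (75.0, 72.0), (66.0, 70.0)]
print(mean_misestimation(trials), mean_absolute_misestimation(trials))
```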

3.3.2.2 Body weight estimation difficulty

We measured the participants’ perceived difficulty in estimating the avatars’ body weight during the PET. For this reason, we used a single-item scale ranging from 0 to 220 (220 = highest difficulty). The scale was inspired by the work of Eilers et al. (1986), a German version of the Rating Scale Mental Effort (Zijlstra, 1993; Arnold, 1999), and rephrased to capture difficulty instead of effort. Its range is defined by non-linearly placed text anchors that serve as reference points for the estimation and enhance the psychometric properties of the captured data.

3.3.3 Affective appraisal

We measured the participants’ affective appraisal of their avatars using the revised version of the Uncanny Valley Index (UVI, Ho and MacDorman, 2017). It includes three sub-dimensions: humanness, eeriness, and attractiveness. Each dimension is captured by four or five items ranging from 1 to 7 (7 = highest humanness, eeriness, attractiveness).

3.3.4 Controls

3.3.4.1 Distance estimation

To control whether distortions in distance perception occurred already at our relatively small distances between the avatar and the mirror, we asked the participants to estimate the distance between themselves and the virtual mirror in meters. As found by Philbeck and Loomis (1997), the verbal estimation of distance serves as a reliable measure of the perceived distance. The distance misestimation M is calculated as M = (e − t)/t, where e is the estimated distance and t is the true distance.

3.3.4.2 Self-esteem

Since low self-esteem is considered to be linked to a disturbed body image (O’Dea, 2012), we captured the participants’ self-esteem as a potential factor explaining deviations in body weight perception. For this purpose, we used the well-established Rosenberg Self-Esteem Scale (RSES, Rosenberg, 2015; Ferring and Filipp, 1996; Roth et al., 2008). The score of the questionnaire ranges from 0 to 30. Scores below 15 indicate low self-esteem, scores between 15 and 25 can be considered normal, and scores above 25 indicate high self-esteem.

3.3.4.3 Body shape concerns

To control for body shape concerns as a potential confounding factor of our body weight perception measurements (Kamaria et al., 2016), we measured the participants’ tendencies towards body shape concerns using the validated shortened form of the Body Shape Questionnaire (BSQ, Cooper et al., 1987; Evans and Dolan, 1993; Pook et al., 2002). The score is captured with 16 different items and ranges from 0 to 204 (204 = highest body shape concerns).

3.3.4.4 Simulator sickness

To control for possible influences of simulator sickness caused by latency jitter or other sources (Stauffert et al., 2018; Stauffert et al., 2020), we captured the presence and intensity of 16 different typical symptoms associated with simulator sickness on 4-point Likert scales using the Simulator Sickness Questionnaire (SSQ, Kennedy et al., 1993; Bimberg et al., 2020). The total score of the questionnaire ranges from 0 to 235.62 (235.62 = strongest simulator sickness). An increase in the score by 20 between a pre- and post-measurement indicates the occurrence of simulator sickness (Stanney et al., 1997).

3.4 Apparatus

The technical system used in our study closely followed the technical system developed and evaluated in previous work by Döllinger et al. (2022). A video showing the system is provided in the supplementary material of this work. All relevant technical components will be explained in the following.

3.4.1 Soft- and hardware

To create and operate our interactive, real-time 3D VR environment, we used the game engine Unity 2019.4.20f1 LTS (Unity Technologies, 2019). Our hardware configuration consisted of a Valve Index VR HMD (Valve Corporation, 2020a), two handheld Valve Index controllers, one HTC Vive Tracker 3.0 positioned on a belt at the lower spine, and one further tracker on each foot fixed by a velcro strap. The hardware components were rapidly (22 ms) and accurately (within a sub-millimeter range) tracked by four SteamVR Base Stations 2.0 (Niehorster et al., 2017). According to the manufacturer, the HMD provides a resolution of 1440 × 1600 px per eye with a total horizontal field of view of 130° running at a refresh rate of 90 Hz. However, the perspective calculated using the perspective projection parameters of the HMD, which can be retrieved from the OpenVR API using an open-source tool, results in a maximum monocular FoV of 103.6 × 109.4° and an overlap in the stereo rendering of 93.1°, providing a total FoV of 114.1 × 109.4°. Thus, the actual pixel density in the FoV was 13.9 × 14.6 PPD. Figure 1 shows the user’s FoV for our three conditions. All VR hardware was integrated using SteamVR (Valve Corporation, 2020b) in version 1.16.10 and the corresponding Unity plugin in version 2.7.3. The whole system ran on a high-end VR PC composed of an Intel Core i7-9700K, an Nvidia RTX 2080 Super, and 32 GB RAM running Windows 10.
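As a quick plausibility check, the reported pixel density follows directly from the per-eye resolution and the rendered monocular FoV; the short sketch below (our own calculation, using only the numbers given above) reproduces the stated values.

```python
# Back-of-the-envelope check of the pixel density from the values reported above.
per_eye_px = (1440, 1600)           # horizontal x vertical pixels per eye
monocular_fov_deg = (103.6, 109.4)  # horizontal x vertical monocular FoV from OpenVR

ppd_horizontal = per_eye_px[0] / monocular_fov_deg[0]
ppd_vertical = per_eye_px[1] / monocular_fov_deg[1]
print(f"{ppd_horizontal:.1f} x {ppd_vertical:.1f} PPD")  # ~13.9 x 14.6 PPD, as reported
```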

FIGURE 1. The user’s first-person perspective monocularly rendered according to the Valve Index rendering parameters for the short (left), middle (center), and far (right) SOD. The yellow outlined areas within the virtual mirror highlight the decreasing size of the third-person perspective on the avatar in the user’s field of view with increasing SOD.

To ensure participants received a sufficient frame rate and to preclude a possible cause of simulator sickness, we measured motion-to-photon latency by frame-counting (He et al., 2000; Stauffert et al., 2020; Stauffert et al., 2021). To perform the measurement, we split the video output of our VR PC into two signals using an Aten VanCryst VS192 display port splitter. One signal still led to the HMD, the other to an ASUS ROG SWIFT PG43UQ low-latency gaming monitor. A high-speed camera of an iPhone 8 recorded the user’s motions and the corresponding reactions on the monitor screen at 240 fps. By analyzing 20 different movements each, the motion-to-photon latency was determined to be 14.4 ms (SD = 2.8 ms) for the HMD and 40.9 ms (SD = 5.4 ms) for the further tracked hardware devices. Both measurements were considered sufficiently low to provide a fluent VR experience and a high feeling of agency towards the avatar (Waltemate et al., 2016).
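The frame-counting approach boils down to converting the number of 240 fps camera frames between a physical motion and its on-screen reaction into milliseconds. The sketch below illustrates this conversion with made-up frame counts; the counts from the 20 analyzed movements are not reported in the text.

```python
# Sketch of the frame-counting conversion used for the latency measurement.
CAMERA_FPS = 240
frame_counts = [3, 4, 3, 4, 3]  # hypothetical per-movement frame counts, not the study's data

def frames_to_ms(frames: int, fps: int = CAMERA_FPS) -> float:
    """One camera frame at 240 fps corresponds to ~4.17 ms."""
    return frames * 1000.0 / fps

latencies = [frames_to_ms(f) for f in frame_counts]
print(sum(latencies) / len(latencies))  # mean motion-to-photon latency in ms
```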

3.4.2 Virtual environment

The virtual environment used in our study was based on an asset obtained from the Unity Asset Store that we modified for our purposes. To create a suitable area for self-observation at different SODs, we removed some of the original objects and added a custom-written virtual mirror based on a planar reflection shader to the wall. Figure 1 shows the different SODs within the environment from the user’s first-person perspective. When changing the SODs, the real-time lighting of the room was also shifted to keep lighting and shadow casting consistent across conditions. We further added a curtain in front of the room’s window to limit the visual depth in the mirror background. Depending on the current condition and SOD, a marker on the floor of the virtual environment indicated where participants had to stand during the exposure. Automatic realignment of the environment ensured that participants did not have to change their position in the real world. To this end, we used a customized implementation of the Kabsch algorithm (Müller et al., 2016), which uses the positions of the SteamVR base stations as physical references. Additionally, the virtual ground height was calibrated by briefly placing the controllers onto the physical ground.
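For illustration, the following Python sketch shows a generic Kabsch-style rigid alignment in the spirit of the realignment described above: corresponding reference points (here, placeholder base station positions) in the tracking and virtual coordinate frames yield a rotation and translation that can be applied to the environment. This is our own simplified version, not the customized implementation used in the study.

```python
# Hedged sketch of a Kabsch rigid alignment between two corresponding point sets.
import numpy as np

def kabsch_transform(source: np.ndarray, target: np.ndarray):
    """Return rotation R and translation t such that target ≈ source @ R.T + t."""
    centroid_src = source.mean(axis=0)
    centroid_tgt = target.mean(axis=0)
    H = (source - centroid_src).T @ (target - centroid_tgt)  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))                   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = centroid_tgt - R @ centroid_src
    return R, t

# Placeholder base station positions (m) in the tracking frame and the virtual frame
real_refs = np.array([[0.0, 2.0, 0.0], [3.0, 2.1, 0.0], [3.0, 2.0, 4.0], [0.0, 1.9, 4.0]])
true_rotation = np.array([[0.0, 0.0, 1.0], [0.0, 1.0, 0.0], [-1.0, 0.0, 0.0]])
virtual_refs = real_refs @ true_rotation.T + np.array([1.0, 0.0, 2.0])

R, t = kabsch_transform(real_refs, virtual_refs)
aligned = real_refs @ R.T + t  # coincides with virtual_refs up to numerical precision
```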

3.4.3 Avatar generation

We created the participants’ personalized avatars using a photogrammetry rig and the method for avatar generation described by Achenbach et al. (2017). The rig consists of 106 Canon EOS 1300D DSLR cameras arranged in circles around the participant (see Bartl et al. (2021) for a detailed description). The cameras trigger simultaneously, creating a holistic recording of the person in a series of individual images. This set of images is then fed into the avatar generation pipeline of Achenbach et al. (2017), which creates an animatable, personalized avatar from the individual images in less than 10 min. The pipeline uses Agisoft Metashape (Agisoft, 2021) for photogrammetric reconstruction to process the set of images into a dense point cloud. It then fits a template model to the point cloud and calculates the avatar’s texture. The template model is fully rigged so that the resulting avatar is ready for use in embodied VR without manual post-processing. Figure 2 shows an avatar generated by the described method in the different distance-dependent resolutions of our experiment. In our experiment, the avatar generation pipeline ran on a PC containing an Intel Core i9-9900KF, an Nvidia RTX 2080 Ti, and 32 GB RAM running Ubuntu 20.

FIGURE 2. Enlargements of the yellow outlined areas in Figure 1 following the same order. They illustrate the effects of reducing the relative size of the avatar in the user’s entire field of view by increasing the SOD. The resulting resolutions are plotted next to the images.

To enable quick integration of the avatars during the experiment, we used a custom-written FBX-based importer to load the avatars into our Unity application at runtime. The importer is realized through a native Unity plugin that automatically generates a fully rigged, humanoid avatar object that is immediately ready for animation. This approach avoids error-prone manual configuration for each user, as would be required when using Unity’s built-in FBX import system.

3.4.4 Avatar animation

We animated the generated avatars in real-time during the study according to the users’ body movements. Since recent investigations indicate that VR equipment-based full-body tracking solutions combined with Inverse Kinematics (IK, Aristidou et al., 2018) can achieve results in motion tracking quality (Spitzley and Karduna, 2019; Vox et al., 2021) and embodiment-related measurements (Wolf et al., 2020; Wolf et al., 2022a) similar to professional full-body tracking solutions, we decided not to use a dedicated motion tracking system in our work. Therefore, the participants’ movements were continuously captured using the introduced HMD, controllers, and trackers. After a short user calibration in T-pose using a custom-written calibration script, the received poses of the tracking devices were combined with the Unity plugin FinalIK version 2.0 to continuously calculate the user’s body pose. The calculated body pose was then retargeted to the imported personalized avatar. To avoid potential inaccuracies in the alignment of the pose or the end-effectors (e.g., sliding feet, end-effector mismatch), we applied a post-retargeting, IK-supported custom pose optimization step to increase the avatar animation quality after retargeting.

3.4.5 Avatar body weight modification

To dynamically alter the generated avatars’ body weight, we used the method described by Döllinger et al. (2022). They build a statistical model of human body shapes by performing a Principal Component Analysis on a set of registered meshes, which are generated by fitting a template mesh to scans from the European subset of the CAESAR database (Robinette et al., 2002). A mapping between the parameters of the shape space and the anthropometric measurements provided by the CAESAR database is learned through linear regression (Allen et al., 2003). This provides a way to map a desired change in body weight to a change in body shape at an interactive rate during runtime. Figure 3 shows an example of the method’s results. The body weight modification is integrated into Unity using a native plugin that receives the initial vertex positions of the avatar and applies the above-described statistical model of weight gain/loss.
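The following sketch outlines the general idea in simplified form on random placeholder data; it is our own illustration of a PCA shape space with a measurement-to-coefficient regression, not the model of Döllinger et al. (2022), which is trained on registered CAESAR meshes.

```python
# Simplified, illustrative sketch of a statistical body shape model for weight modification.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_meshes, n_vertices = 200, 5000
meshes = rng.normal(size=(n_meshes, n_vertices * 3))  # flattened registered meshes (placeholder)
measurements = rng.normal(size=(n_meshes, 2))         # e.g., body weight and height (placeholder)

shape_space = PCA(n_components=20).fit(meshes)        # PCA shape space over the meshes
coefficients = shape_space.transform(meshes)

# Linear mapping from anthropometric measurements to shape coefficients (cf. Allen et al., 2003)
measurement_to_shape = LinearRegression().fit(measurements, coefficients)

def modify_weight(vertices_flat, current_kg, target_kg, height_cm):
    """Shift a mesh along the shape-space direction implied by the desired weight change."""
    c_target = measurement_to_shape.predict([[target_kg, height_cm]])
    c_current = measurement_to_shape.predict([[current_kg, height_cm]])
    delta_vertices = (c_target - c_current) @ shape_space.components_
    return vertices_flat + delta_vertices[0]

new_mesh = modify_weight(meshes[0], current_kg=70.0, target_kg=65.0, height_cm=175.0)
```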

FIGURE 3. Visualization of the body weight modification in a BMI range from 16 to 32 in two-point increments using an exemplary reconstructed avatar of a female person with an original BMI of 19.8. The image is taken from Döllinger et al. (2022).

When performing the AMT in VR, users could modify their avatar’s body weight by altering the body shape using gesture interaction, as introduced by Döllinger et al. (2022). To modify the avatar’s body weight, users had to press the trigger button of each of the two controllers while either moving the controllers away from each other or towards each other (see Figure 4). When moving the controllers, the body weight changed according to the relative distance change between the controllers r in m/s, following the equation v = 3.5r² + 15r, where v is the resulting body weight change velocity in kg/s. Moving the controllers away from each other increased the body weight, while moving them together decreased it. The faster the controllers were moved, the faster the body weight changed. As suggested by Döllinger et al. (2022), body weight modification was restricted to a range of ±35% of the user’s body weight to avoid unrealistic or uncomfortable body shape deformations.
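Read literally, the mapping and the ±35% restriction can be expressed as in the short sketch below; whether the equation is applied to the signed rate or, as assumed here, to its magnitude with the sign handled separately is our interpretation, not stated in the text.

```python
# Sketch of the gesture-to-weight mapping (our reading of the equation above).
def weight_change_velocity(rate_m_per_s: float) -> float:
    """v = 3.5*r^2 + 15*r applied to |r|; the sign encodes apart (+, gain) vs. together (-, loss)."""
    r = abs(rate_m_per_s)
    v = 3.5 * r ** 2 + 15.0 * r
    return v if rate_m_per_s >= 0 else -v

def clamp_weight(new_kg: float, original_kg: float) -> float:
    """Restrict the modified weight to ±35% of the user's original body weight."""
    return min(max(new_kg, 0.65 * original_kg), 1.35 * original_kg)

# Example: controllers moving apart at 0.2 m/s, integrated over one 90 Hz frame
dt = 1.0 / 90.0
weight = clamp_weight(70.0 + weight_change_velocity(0.2) * dt, original_kg=70.0)
```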

FIGURE 4. Sketch of the body weight modification interaction through gestures. Participants had to press the trigger buttons on each controller and move the controllers either apart (increase body weight) or together (decrease body weight).

3.5 Experimental tasks

We chose the following experimental tasks to induce SoE in participants and to encourage them to focus their attention on their avatar, allowing us to capture their avatar perception as accurately and in as controlled a manner as possible.

3.5.1 Body movement task

Participants had to perform five body movements (i.e., waving with each arm, walking in place, circling arms, circling hip) in front of a virtual mirror to accomplish synchronous visuomotor stimulation suggested by Slater et al. (2010) and to get the feeling that they were really embodying their avatar. They were asked to observe the body movements of their avatar alternately in the first- and third-person perspective. The movement tasks have been adopted from Wolf et al. (2020).

3.5.2 Active modification task

Participants had to modify their avatar’s body weight by interactively altering its body shape multiple times. We followed Thaler (2019) and Neyret et al. (2020) and asked participants to modify their avatar’s body weight to 1) their current body weight, 2) their ideal body weight, and 3) their guess of the average body weight in the population (as they defined it). Before each task, the avatar’s body weight was set to a random value between ±5% and ±10% of the original body weight of the avatar. To avoid providing any hints on the modification direction, the HMD was blacked out during this pre-modification. For modifying the body weight, participants used gesture interaction, as explained in Section 3.4.5. The AMT had to be performed twice per condition to compensate for possible outliers in a single estimation (see Figure 5).
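The exact sampling scheme for the pre-modification is not specified; a straightforward reading, shown below as a hypothetical sketch, draws a uniform offset magnitude between 5% and 10% of the original weight and applies a random sign.

```python
# Hypothetical sketch of the random pre-modification before each AMT trial.
import random

def pre_modified_weight(original_kg: float) -> float:
    offset = random.uniform(0.05, 0.10) * original_kg  # 5-10% of the original body weight
    return original_kg + offset * random.choice([-1, 1])
```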

FIGURE 5. Overview of the experimental procedure (left) and a detailed overview of the repeated part of the exposure phase (right). The icons on the right side of each step show in which environment the step was conducted. The icons on the left side indicate the repetition of steps.

3.5.3 Passive estimation task

The PET followed prior work (Wolf et al., 2020; Wolf et al., 2021; Wolf et al., 2022a; Wolf et al., 2022b; Döllinger et al., 2022) and was used to measure the participants’ ability to estimate the repeatedly modified body weight of their avatar numerically in kg. The original body weight of the avatar was modified within a range of ±20% in 5% increments in a counterbalanced manner, resulting in n = 9 modifications, including the original body weight. As in the AMT, the HMD was blacked-out during the modification to avoid any hints. In both AMT and PET, participants were asked to move and turn in front of the virtual mirror to provide a holistic picture of their avatar, as suggested by prior work (Cornelissen et al., 2018; Thaler et al., 2019).
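The nine presented weight levels follow directly from the ±20% range in 5% steps; the sketch below generates them, using a plain shuffle only as a stand-in for the counterbalanced ordering used in the study.

```python
# Sketch of the nine PET weight levels (±20% in 5% steps, including the original weight).
import random

def pet_levels(original_kg: float) -> list[float]:
    levels = [original_kg * (1.0 + step / 100.0) for step in range(-20, 25, 5)]
    random.shuffle(levels)  # the study counterbalanced the order; shuffling is only a stand-in
    return levels
```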

3.6 Procedure

The experimental procedure was divided into three major phases depicted in Figure 5. In the opening phase, the experimenter first welcomed the participants and provided information about the current COVID-19 regulations, which they had to comply with and sign for. Afterwards, they received information about the body scan and experiment, generated pseudonymization codes for avatar and experimental data, gave consent for participation, and had the opportunity to ask questions about the whole procedure.

The body scan phase followed, in which the participants first received instructions on the precise procedure for the body scan (e.g., no jewelry and shoes, required pose). Subsequently, the body height and weight of the participants were recorded, and two body scans of the person were taken. After a brief visual inspection of the taken images, the avatar reconstruction pipeline was started, and the participants were guided to the laboratory where the actual experiment took place.

In the experiment phase, participants first answered the pre-questionnaires (demographics, RSES, BSQ, and SSQ) and got a quick introduction to the VR equipment. Three VR exposures followed, one per condition, each proceeding as follows. First, the participants put on the VR equipment while the experimenter ensured they wore it correctly. After the fitting, a pre-programmed experimental procedure started, and participants entered the preparation environment. All instructions were displayed on an instruction panel in the virtual environment and played as pre-recorded voice instructions. The display was blackened for a short moment during all virtual transitions of the VR exposure. Immediately after entering the virtual environment, participants performed a short eye test to validate the HMD settings and confirm appropriate vision. The embodiment calibration followed, and participants could briefly practice modifying their avatars’ body weight through gestures before the experimental tasks started (see Section 3.5). After the experimental tasks, the control question regarding the perceived distance to the mirror followed before participants left VR to answer the VEQ, VEQ+, and UVI. The distance to the mirror changed for each exposure in a counterbalanced manner. After the final exposure, participants filled in the post-SSQ. The duration of one VR exposure averaged 12.35 min. The whole experimental procedure averaged 93 min.

4 Results

We used SPSS version 27.0.0.0 (IBM, 2022) to analyze our results statistically. Before running the statistical tests, we checked whether all variables met the assumptions of normality and sphericity for parametric testing. For variables meeting the requirements, we performed a repeated-measures ANOVA (i.e., self-location, self-attribution, AMT current M̄, PET M̄, humanness, eeriness, attractiveness). Otherwise, we performed a Friedman test (i.e., VBO, agency, self-similarity, AMT current Ā, AMT ideal, AMT average, PET Ā, PET estimation difficulty, distance estimation, distance misestimation). We calculated all tests against an α of .05. Table 2 summarizes the descriptive values of all variables we compared between our conditions. Additionally, to strengthen the results of non-significant differences, we performed a sensitivity analysis using G*Power version 3.1.9.7 (Faul et al., 2009) with a group size of n = 30, a pre-specified α-level of .05, and a power of .80. The results showed that a Friedman test would have revealed effects of Kendall’s W = 0.1 or greater, and a repeated-measures ANOVA would have revealed effects of Cohen’s f = 0.238 or greater (Cohen, 2013). Hence, differences with smaller effect sizes could have remained undetected. In order to exclude sequence effects due to our within-subject design, we also performed our statistical analysis comparing only the first condition completed by each participant in a between-subjects design. Apart from minor descriptive differences, the results did not differ from the presented results. Since self-esteem (RSES) and body shape concerns (BSQ) were captured on an individual basis and not per condition (see Table 1 for descriptive results), we calculated Pearson correlations between both variables and each dependent variable for control purposes. In case of a significant correlation, we further calculated a simple linear regression to determine the predictive effect of the control variable. The test results are reported in the corresponding sections below.
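As an illustration of the non-parametric branch of this decision logic, the sketch below recreates it with SciPy instead of SPSS on placeholder data; it is not meant to reproduce the study’s values.

```python
# Hedged sketch of the analysis logic (SciPy instead of SPSS; placeholder data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
short, middle, far = (rng.exponential(scale=1.0, size=30) for _ in range(3))

# Normality check per condition; if violated, fall back to the Friedman test.
normal = all(stats.shapiro(x)[1] > .05 for x in (short, middle, far))

if not normal:
    chi2, p = stats.friedmanchisquare(short, middle, far)
    print(f"Friedman: chi2(2) = {chi2:.2f}, p = {p:.3f}")
    if p < .05:
        # Two-tailed Wilcoxon signed-rank post hoc tests between condition pairs
        for a, b, label in [(far, short, "far vs. short"), (far, middle, "far vs. middle"),
                            (middle, short, "middle vs. short")]:
            print(label, stats.wilcoxon(a, b))
```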

TABLE 2. Descriptive values of our measures that were compared between the different SODs.

4.1 Distance estimation

To verify that our manipulation of the conditions was successful, we assumed that participants’ estimates of the distance to the virtual mirror would increase significantly with increasing SOD. We found significant differences in the distance estimations between our SODs, χ2 (2) = 59.51, p < .001, W = 0.992. Two-tailed Wilcoxon signed-rank post hoc tests revealed a significantly higher estimated distance for far compared to short, Z = 4.790, p < .001, r = 0.875, for far compared to middle, Z = 4.732, p < .001, r = 0.864, and for middle compared to short SOD, Z = 4.810, p < .001, r = 0.878. Hence, we accept H1.1 and consider our experimental manipulation successful.

We further used the distance estimations to test whether a distance compression effect occurred for our SODs. A comparison of the calculated distance misestimations between the SODs revealed no significant differences, χ2 (2) = 3.717, p = .156, W = 0.062. We therefore compared participants’ distance estimates to the ground truth across all SODs and found a significant distance compression (M = 11.8%, SD = 20.69%) for our sample, Z = 4.122, p < .001, r = 0.753. Hence, we accept H1.2. The results are depicted in Figure 6.

FIGURE 6. Distance estimations for our short (1 m), middle (2.5 m), and far (4 m) SODs in comparison to the ground truth. Error bars represent 95% confidence intervals.

4.2 Sense of embodiment, self-similarity, and self-attribution

Concerning the SoE, we compared VBO, agency, self-location, self-similarity, and self-attribution between our three SODs. The results revealed no significant differences between the conditions for VBO, χ2 (2) = 0.585, p = .746, W = 0.010, agency, χ2 (2) = 3.089, p = .213, W = 0.051, self-location, F (2, 58) = 0.905, p = .410, f = 0.176, self-similarity, χ2 (2) = 2.523, p = .283, W = 0.042, and self-attribution, χ2 (2) = 0.925, p = .630, W = 0.015. Hence, we reject our hypothesis H2.1 but do not discard H2.2 for now. We further found no significant correlations between the controls RSES and BSQ on any of the SoE factors.

4.3 Body weight perception

We compared the participant’s current body weight misestimations M̄ and Ā, their ideal body weight estimations, and their estimations of the average body weight in the population between our three SODs using the AMT. The results revealed no significant difference between the conditions for the current body weight misestimations M̄, F (2, 58) = 0.802, p = .453, f = 0.167, and for the body weight estimations of the ideal body weight, χ2 (2) = 2.4, p = .301, W = 0.040, and the average body weight in the population, χ2 (2) = 4.2, p = .122, W = 0.070. Based on the results for AMT, we do not reject H3.1 for now but reject H3.2. In addition, we explored whether the body weight estimations M̄ accumulated across all conditions (M = −0.48, SD = 3.6) differed significantly from the actual body weight of the avatars. However, the calculated one-sample t-test showed no significant difference, t (29) = 0.728, p = .472, d = 0.13.

We further compared the participants’ body weight misestimations M̄ and Ā between our three SODs using the PET. We found significant differences neither for the body weight misestimations M̄, F (2, 58) = 0.72, p = .931, f = 0.002, nor for the absolute body weight misestimations Ā, χ2 (2) = 2.467, p = .291, W = 0.041. Based on the results for the PET, we also do not reject H3.1 for now but reject H3.2. In addition, we explored whether the body weight misestimations M̄ accumulated across all conditions (M = −0.22, SD = 6.5) deviated significantly from zero. However, the calculated one-sample t-test showed no significant difference, t (29) = 0.182, p = .857, d = 0.03.
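To make the two body weight measures more concrete: assuming that M̄ denotes a participant’s mean signed misestimation and Ā the mean absolute misestimation across the presented body weight modifications (the precise definitions are given in the method section of the article), the per-participant values and the one-sample test against the avatars’ true weight could be computed roughly as follows. The numbers below are placeholders, not study data.

```python
import numpy as np
from scipy import stats

def misestimations(estimates_kg: np.ndarray, actual_kg: float) -> tuple[float, float]:
    """Return (signed mean M, absolute mean A) misestimation in percent for one participant."""
    err = (estimates_kg - actual_kg) / actual_kg * 100
    return float(err.mean()), float(np.abs(err).mean())

# One (placeholder) participant repeatedly estimating a 70 kg avatar:
m_bar, a_bar = misestimations(np.array([68.0, 71.5, 69.0]), actual_kg=70.0)

# Across participants, test whether the signed misestimations deviate from zero,
# i.e., from the avatars' actual body weight (placeholder sample of 30 values):
m_bar_sample = np.random.default_rng(0).normal(-0.5, 3.6, size=30)
t, p = stats.ttest_1samp(m_bar_sample, popmean=0.0)
print(f"t(29) = {t:.2f}, p = {p:.3f}")
```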

For the perceived body weight estimation difficulty, we found significant differences between the SODs, χ2 (2) = 14.625, p = .001, W = 0.244. Two-tailed Wilcoxon signed-rank post hoc tests revealed a significantly higher difficulty for the far compared to the short SOD, Z = 3.133, p = .002, r = 0.572, and for the far compared to the middle SOD, Z = 3.38, p = .001, r = 0.617, but not between the short and middle SODs. Since we did not find significant differences between all pairs of conditions, we reject H3.3.

The calculated correlations between the RSES and BSQ scores and the body weight misestimations M̄ and Ā in the AMT and PET, as well as the estimation difficulty in the PET, are shown in Table 3. Since there was no significant difference in body weight estimates between the SODs, we aggregated the dependent variables across conditions. We used the absolute values of M̄, denoted |M̄|, to capture the magnitude of the misestimations regardless of their sign, since under- and overestimations could otherwise compensate for each other in a correlation. By calculating simple linear regressions for the significant correlations, we found that AMT Current |M̄| is significantly predicted by the RSES, F (1, 28) = 5.40, p = .023, with an adjusted R2 of .14 following the equation AMT Current |M̄| = 6.81 − 0.19 ⋅ (RSES Score). We further found that the RSES significantly predicts the PET Estimation Difficulty, F (1, 28) = 9.40, p = .005, with an adjusted R2 of .23 following the equation PET Estimation Difficulty = 201.7 − 4.18 ⋅ (RSES Score), and so does the BSQ, F (1, 28) = 4.62, p = .040, with an adjusted R2 of .11 following the equation PET Estimation Difficulty = 69.17 + 0.57 ⋅ (BSQ Score).
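The control analysis described above (a Pearson correlation followed by a simple linear regression for significant correlations) can be sketched as follows. The arrays are hypothetical per-participant values, and note that SciPy’s r² is the unadjusted coefficient of determination, whereas the article reports adjusted R².

```python
import numpy as np
from scipy import stats

def control_regression(control: np.ndarray, dependent: np.ndarray, label: str) -> None:
    """Correlate a control variable (e.g., RSES) with a dependent variable (e.g., |M|)
    and, if significant, fit a simple linear regression as reported in the article."""
    r, p = stats.pearsonr(control, dependent)
    print(f"{label}: r = {r:.2f}, p = {p:.3f}")
    if p < .05:
        reg = stats.linregress(control, dependent)
        # The article, e.g., reports AMT Current |M| = 6.81 - 0.19 * RSES (adjusted R2 = .14).
        print(f"{label} = {reg.intercept:.2f} + {reg.slope:.2f} * control, R2 = {reg.rvalue ** 2:.2f}")

# Hypothetical data for 30 participants (not the study data):
rng = np.random.default_rng(1)
rses = rng.uniform(10, 40, size=30)
amt_abs_m = 6.8 - 0.2 * rses + rng.normal(0, 1.5, size=30)
control_regression(rses, amt_abs_m, "AMT Current |M|")
```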


TABLE 3. Correlations between RSES and BSQ scores and the body weight misestimations M̄ and Ā in AMT and PET as well as the estimation difficulty in PET. Single-asterisks indicate significant and double-asterisks highly significant p-values.

4.4 Affective appraisal

For the participants’ affective appraisal of the avatar, we compared humanness, eeriness, and attractiveness between our three SODs. The results revealed no significant differences between the conditions for humanness, F (2, 58) = 0.829, p = .441, f = 0.170, eeriness, F (2, 58) = 0.488, p = .617, f = 0.017, and attractiveness, F (2, 58) = 0.938, p = .397, f = 0.179. Hence, we do not discard H4.1 for now but reject our hypothesis H4.2.

The calculated correlations between the RSES and BSQ scores and the affective appraisal of the avatars, computed for control purposes, can be found in Table 4. Since there were no significant differences in the affective appraisal, we aggregated the dependent variables across conditions. By calculating simple linear regressions for the significant correlations, we found that the BSQ scores significantly predict the perceived eeriness of the avatar, F (1, 28) = 4.83, p = .036, with an adjusted R2 of .12 following the equation UVI Eeriness = 4.79 − 0.013 ⋅ (BSQ Score). We further found that the RSES significantly predicts the perceived attractiveness of the avatar, F (1, 28) = 5.28, p = .029, with an adjusted R2 of .13 following the equation UVI Attractiveness = 2.63 + 0.85 ⋅ (RSES Score).


TABLE 4. Correlations between RSES and BSQ and the different affective appraisal scores. Single-asterisks indicate significant p-values.

4.5 Simulator sickness

We checked whether simulator sickness-related symptoms increased significantly during the VR exposure. Since the assumptions for parametric testing were not met for the SSQ scores, we compared pre- and post-measurements using a two-tailed Wilcoxon signed-rank test. The results showed that the scores did not differ significantly between the pre- (M = 13.34, SD = 15.7) and post-measurements (M = 16.34, SD = 22.34), Z = 0.543, p = .587, r = 0.100. Two participants showed an increase in the SSQ score of more than 20 points between the pre- and post-measurement but neither complained about the occurrence of symptoms nor appeared as outliers in other measurements. Therefore, we decided to keep them in our sample.

5 Discussion

Prior work has raised the question of whether distance-related biases in virtual mirror exposure scenarios influence the perception of an embodied avatar within a virtual environment (Wolf et al., 2022a; Wolf et al., 2022b). Since the analysis of existing work allowed only limited conclusions, we systematically investigated the role of the SOD on the embodiment and perception of avatars in a user study. Participants observed, manipulated, and rated their avatars at a short (1 m), middle (2.5 m), and far (4 m) distance between themselves and the virtual mirror. Our manipulation check (H1.1) showed a successful manipulation of the SOD, as participants’ distance estimates to the mirror differed significantly between the conditions. We could further confirm a significant distance compression effect in the distance estimations (H1.2). However, compared with the distance compression observed for other state-of-the-art consumer HMDs (Buck et al., 2018; Kelly, 2022; Kelly et al., 2022), the obtained distance compression of about 12% in our study using the Valve Index was relatively small. A potential reason could be the compensating effect of the first-person perspective on the embodied avatar (Mohler et al., 2010; Leyrer et al., 2011; Gonzalez-Franco et al., 2019).

5.1 Sense of embodiment, self-similarity, self-attribution

Our results for the SoE, self-similarity, and self-attribution showed no significant differences between the SODs. This result is in line with H2.2 for agency and self-location, as previous work suggested that a change in the third-person perspective has no significant influence on these measures as long as a first-person perspective is provided simultaneously (Kilteni et al., 2012; Debarba et al., 2017; Gorisse et al., 2017; Inoue and Kitazaki, 2021). With our study design and statistical power, we would have detected medium to large effects but cannot rule out small effects. In addition, future research should examine how the SOD impacts agency and self-location when the presumably dominant first-person perspective is not presented.

We further could not confirm our H2.1, which assumed that the participants’ feeling of VBO, self-similarity, and self-attribution would decrease with increasing SOD from the virtual mirror. We initially expected that the increasing blurriness of the avatar’s presentation in the virtual mirror (cf. Figure 2) would reduce the number of recognizable personal features (Tsakiris, 2008; Waltemate et al., 2018), which would ultimately affect participants’ judgments. We have several potential explanations for the contrary results. First, a learning effect could have occurred since we used a within-subjects design in which each participant performed the tasks three times. After the exposure closer to the mirror, participants might have rated their avatar with the memorized details in mind. However, this is rather unlikely since our additional analysis of only the first run did not reveal any differences from the presented analysis. Second, participants knew they were facing their personalized avatar since they had signed the informed consent form and performed a body scan before the study. Hence, their perceived similarity to their avatar could result from a possible “placebo effect”, i.e., the belief that they are facing their personalized avatar. However, to our knowledge, no systematic research on the influence of the user’s expectations towards an avatar on the perceived SoE exists, and future work, including a control condition with non-personalized avatars, seems required. Third, recent work suggests that in some cases, experiencing an avatar exclusively from the first-person perspective may be sufficient to develop a similarly high feeling of VBO compared to having a third-person perspective, which would make the latter negligible (Bartl et al., 2022; Döllinger et al., 2023). Other works consider the first-person perspective at least to be more dominant (Debarba et al., 2017; Gorisse et al., 2017). However, there is also work that assumes that providing the third-person perspective enhances the VBO significantly (Kilteni et al., 2012; Inoue and Kitazaki, 2021), especially when using personalized avatars (Waltemate et al., 2018). Future work on the general role of perspective on the SoE in avatar embodiment seems necessary to resolve this ambiguity. The final and, in our opinion, most likely explanation is that the reduced resolution (e.g., at four meters, about a quarter of the resolution at one meter), despite the obvious blurring (cf. Figure 2), was still high enough for participants to recognize themselves without any limitations. For future work, this suggests validating our results with non-personalized, generic avatars and with SODs even larger than four meters.

5.2 Body weight perception

Prior to our study, we expected no differences between the SODs in the participants’ body weight misestimations M̄ or in their estimations of their ideal body weight and of the average body weight in the population (H3.1). In line with this expectation, we found no significant differences between the conditions. Considering that the distance compression did not differ significantly between the SODs, this result is not surprising. Furthermore, since the body weight misestimations M̄ did not differ significantly from the avatars’ actual body weight, we assume that the observed distance compression of around 12% had no impact on the body weight estimations. Hence, we assume that the size-distance invariance hypothesis (Gilinsky, 1951) was violated, as already observed in other works (Brenner and van Damme, 1999; Kelly et al., 2018). The most likely explanation is the provided first-person perspective on the avatar, which could have served as a reference cue to correct body size estimations. However, our non-significant results do not mean that no effect exists; based on our sensitivity analysis, we would only have detected effects of at least medium size.

In contrast to M̄, we expected that the absolute body weight misestimations Ā, which served as an indicator of the uncertainty of the estimates (Wolf et al., 2022b), and the estimation difficulty would increase with increasing SOD (H3.2 and H3.3). We could not confirm our assumption for H3.2 since we did not find any significant differences in Ā. This is partly contrary to the results for the perceived estimation difficulty (H3.3), for which we found a significant increase from the middle to the far SOD. Participants rated body weight estimation as more difficult for the farthest distance, but the greater perceived difficulty did not result in higher absolute misestimations.

We further explored the influence of our control measures of self-esteem and body shape concerns on our body weight perception measures. We found that the participants’ self-esteem significantly predicts their misestimation M̄ of their current body weight in the AMT. The higher the self-esteem, the more accurate the body weight estimates. We further found a significant prediction of the participants’ perceived body weight estimation difficulty by their self-esteem and body shape concerns. Estimations were perceived as more difficult when body shape concerns were higher and self-esteem was lower. The results support assumptions of prior work that self-esteem and body shape concerns are linked to body image distortions (O’Dea, 2012; Kamaria et al., 2016; Irvine et al., 2019).

5.3 Affective appraisal

As expected, we found no significant differences between the SODs in participants’ affective appraisal of the avatar in terms of perceived humanness (H4.1). However, against our expectations, we also found no significant differences regarding eeriness and attractiveness (H4.2). This is surprising since, based on the work of Döllinger et al. (2022), we assumed that participants would perceive minor defects in the reconstruction of their personalized avatar to a lesser extent with increasing distance from the mirror and would thus judge their avatar to be less eerie and more attractive. However, we observed a minor descriptive trend for both measures in the expected direction. The effect may only become apparent at even larger SODs, as already suspected for VBO, self-similarity, and self-attribution.

The investigation of our control variables concerning possible subjective predictors of the affective appraisal of the avatars revealed interesting insights. We found that the participants’ perceived attractiveness of their avatar is significantly predicted by their self-esteem. The higher the self-esteem, the higher the perceived attractiveness of the avatar. Since it is well documented that the self-rating of one’s physical attractiveness correlates with self-esteem (Kenealy et al., 1991; Patzer, 1995), we attribute this finding primarily to the avatars’ personalization. However, other work has also suggested that personal characteristics can be attributed to non-personalized avatars via embodiment (Wolf et al., 2021), leaving space for further investigation. Furthermore, we found a significant prediction of the participants’ perceived eeriness of the avatar by their body shape concerns. The higher the body shape concerns, the lower the perceived eeriness. A possible reason for this could be that participants who are concerned about their body shape tend to focus their attention on the parts of their personalized avatar that bother them on their real body rather than on areas that are irregularly reconstructed (Tuschen-Caffier et al., 2015; Bauer et al., 2017). Future work investigating the perception of avatars, especially when using personalized avatars or between-subject designs, should consider self-esteem and body shape as covariates. In addition, further research is required to clarify the role of self-esteem and body concerns in the perception of avatars.

5.4 Practical implications

Our study provides interesting insights that also have implications for the practical application of avatar embodiment in research and beyond. Our results show that it is rather unlikely that a systematic distance-related bias affects the sense of embodiment, the perception of body weight, or the affective appraisal of a personalized embodied avatar in VR mirror exposure within an SOD range of one to four meters. What may seem rather uninteresting from a scientific point of view, namely that the differences we initially expected could not be shown at the selected distances, is of great use for practical applications. For studies that neglected the SOD, it can be retrospectively assumed that uncontrolled distances within our tested range are unlikely to have had a confounding effect. Based on our results, we formulate the following practice guideline:

The distance between a personalized embodied avatar and a virtual mirror can be freely chosen in a range between one and four meters without expecting a major influence on the avatar’s perception.

Nevertheless, with our statistical power, we cannot entirely rule out small effects. The question remains how relevant these would be in a practical application compared to other individual-, system-, or application-related influences. Given the limitations of our work stated below, we recommend that avatar perception should always be carefully evaluated before application in a practical context. This applies especially to serious applications in mental health or related areas, where it is vital to rule out or control unwanted distortions in the user’s perception of the provided stimuli due to system- and application-related factors.

5.5 Limitations and future work

Throughout our discussion, we already identified some limitations of our work, which we summarize below and from which we derive directions for future work. First, our work solely used photorealistic, personalized avatars, limiting our findings to their use. Participants were fully aware that they had been scanned, which might have biased their answers on self-recognition-related measures because one is usually aware of one’s own physical appearance. This could also have played a role in the body weight estimations, as proportions of one’s own body learned in reality could have been applied to the avatar as well. Therefore, the influence of the SOD should also be evaluated with non-personalized avatars.

Second, we can state that our experimental manipulation of the SOD between one and four meters was unlikely to cause differences in our measures. However, we cannot rule out differences at larger distances, where distance compression might be more pronounced and the avatar’s rendering resolution declines even further. This limitation might only be relevant from a theoretical point of view, though, since a distance of 4 m between the observer and the mirror is unlikely to be exceeded in practical applications of virtual mirrors. Since resolution and distance compression are bound to particular HMDs (Angelov et al., 2020; Kelly, 2022), it appears necessary either to repeat our investigation with different HMDs or to extend it by investigating each property on its own. For example, the avatar’s rendering resolution could be artificially reduced while the distance to the mirror is kept constant. For this purpose, the interactively controllable mirror suggested by Bartl et al. (2021), extended by a dynamically adjustable mirror resolution, could be used. However, as long as the causes of device-specific differences, e.g., in body weight perception (Wolf et al., 2022a; Wolf et al., 2022b), are not precisely clarified, a device-specific assessment still seems necessary.

Third, we provided the user with a first-person and a third-person perspective simultaneously in our virtual environment. However, the first-person perspective could have corrected potentially stronger distance-related biases, as described in prior work (Ries et al., 2008; Mohler et al., 2010; Leyrer et al., 2011; Renner et al., 2013). For use cases where the avatar is presented only from a third-person perspective, without a mirror or embodiment (Thaler, 2019; Wolf et al., 2022b), the observation distance could have a different impact on the perception of the avatar. Future work should therefore investigate the role of the observation distance without embodiment or a first-person perspective.

6 Conclusion and contribution

The use of avatar embodiment in VR has steadily increased in recent years and is likely to grow further due to technological advancements. Especially in the field of serious applications, the question arises of how avatar embodiment affects the user’s perception and by which factors this effect is influenced. Our study contributes to answering this question by investigating the influence of the self-observation distance (SOD) when looking at one’s own embodied avatar in a virtual mirror. We found that the SOD in avatar embodiment scenarios using a virtual mirror does not influence the sense of embodiment, body weight perception, or affective appraisal towards the avatar at distances of one to four meters when a first-person perspective is presented simultaneously. Therefore, we conclude that distance compression and the distance-related reduction in the rendering resolution of the third-person representation do not affect the perception of personalized avatars in the tested range. Although our results need to be verified and confirmed for different use cases, we assume that they apply to most current applications employing avatar embodiment and virtual mirror exposure. Hence, we assume that recent avatar embodiment or perception research is unlikely to be subject to an uncontrolled, distance-related systematic bias within our tested SOD range.

Data availability statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Ethics statement

The studies involving human participants were reviewed and approved by Ethics Committee of the Institute Human-Computer-Media (MCM) of the University of Würzburg. The participants provided their written informed consent to participate in this study.

Author contributions

EW conceptualized large parts of the experimental design, collected the data, and took the lead in writing the manuscript. EW and DM developed the Unity application, including the experimental environment and the avatar animation system. MB and SW provided the avatar reconstruction and body weight modification framework. CW and ML conceived the original project idea, discussed the study design, and supervised the project. ND supported the data analysis. AB supported the generation of avatars and the data collection. All authors continuously provided constructive feedback and helped to shape the study and the corresponding manuscript.

Funding

This research has been funded by the German Federal Ministry of Education and Research in the project ViTraS (project numbers 16SV8219 and 16SV8225). It was further supported by the Open Access Publication Fund of the University of Würzburg.

Acknowledgments

We thank Marie Fiedler, Nico Erdmannsdörfer, and Viktor Frohnapfel for their help in developing the Unity application and data collection, and Sara Wolf for her support with our illustrations.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Supplementary material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/frvir.2022.1031093/full#supplementary-material

Footnotes

1https://github.com/PeterTh/ovr_rawprojection

2https://assetstore.unity.com/packages/tools/integration/steamvr-plugin-32647

3https://assetstore.unity.com/packages/3d/props/interior/manager-office-interior-107709

4https://github.com/zalo/MathUtilities/#kabsch

5https://assetstore.unity.com/packages/tools/animation/final-ik-14290

References

Achenbach, J., Waltemate, T., Latoschik, M. E., and Botsch, M. (2017). “Fast generation of realistic virtual humans,” in Proceedings of the 23rd ACM symposium on virtual reality software and technology (New York, NY, United States: Association for Computing Machinery), 1–10. doi:10.1145/3139131.3139154


Agisoft (2021). Metashape pro. Available at: http://www.agisoft.com (Accessed August 26, 2022) [online source].


Allen, B., Curless, B., and Popović, Z. (2003). The space of human body shapes: Reconstruction and parameterization from range scans. ACM Trans. Graph. 22, 587–594. doi:10.1145/882262.882311


Angelov, V., Petkov, E., Shipkovenski, G., and Kalushkov, T. (2020). “Modern virtual reality headsets,” in 2020 International Congress on Human-Computer Interaction, Optimization and Robotic Applications (HORA), Ankara, Turkey, 26-28 June 2020, 1–5. doi:10.1109/HORA49412.2020.9152604


Aristidou, A., Lasenby, J., Chrysanthou, Y., and Shamir, A. (2018). “Inverse kinematics techniques in computer graphics: A survey,” in Computer graphics forum (New York, NY, United States: Wiley Online Library), 37, 35–58. doi:10.1111/cgf.13310


Arnold, A. G. (1999). “Mental effort and evaluation of user-interfaces: A questionnaire approach,” in Proceedings of HCI international (the 8th international conference on human-computer interaction) on human-computer interaction: Ergonomics and user interfaces-volume I - volume I (USA: L. Erlbaum Associates Inc.), 1003–1007.


Aymerich-Franch, L. (2020). “Chapter 3 – avatar embodiment experiences to enhance mental health,” in Technology and health. Editors J. Kim, and H. Song (Amsterdam, Netherlands: Academic Press), 49–66. doi:10.1016/B978-0-12-816958-2.00003-4


Bailenson, J. N., and Blascovich, J. (2004). “Avatars,” in Encyclopedia of human-computer interaction (Great Barrington, Massachusetts, United States: Berkshire Publishing Group), 64–68.


Bartl, A., Merz, C., Roth, D., and Latoschik, M. E. (2022). “The effects of avatar and environment design on embodiment, presence, activation, and task load in a virtual reality exercise application,” in 2022 IEEE international symposium on mixed and augmented reality (ISMAR), 1–10.


Bartl, A., Wenninger, S., Wolf, E., Botsch, M., and Latoschik, M. E. (2021). Affordable but not cheap: A case study of the effects of two 3D-reconstruction methods of virtual humans. Front. Virtual Real. 2. doi:10.3389/frvir.2021.694617


Bauer, A., Schneider, S., Waldorf, M., Braks, K., Huber, T. J., Adolph, D., et al. (2017). Selective visual attention towards oneself and associated state body satisfaction: An eye-tracking study in adolescents with different types of eating disorders. J. Abnorm. Child. Psychol. 45, 1647–1661. doi:10.1007/s10802-017-0263-z


Bimberg, P., Weissker, T., and Kulik, A. (2020). “On the usage of the simulator sickness questionnaire for virtual reality research,” in 2020 IEEE conference on virtual reality and 3D user interfaces abstracts and workshops (VRW) (New York, NY, United States: IEEE), 464–467. doi:10.1109/VRW50115.2020.00098


Brenner, E., and van Damme, W. J. (1999). Perceived distance, shape and size. Vis. Res. 39, 975–986. doi:10.1016/S0042-6989(98)00162-X


Buck, L. E., Young, M. K., and Bodenheimer, B. (2018). A comparison of distance estimation in HMD-based virtual environments with different HMD-based conditions. ACM Trans. Appl. Percept. 15, 1–15. doi:10.1145/3196885


Chaminade, T., Hodgins, J., and Kawato, M. (2007). Anthropomorphism influences perception of computer-animated characters’ actions. Soc. cognitive Affect. Neurosci. 2, 206–216. doi:10.1093/scan/nsm017


Cohen, J. (2013). Statistical power analysis for the behavioral sciences. New York, NY, United States: Academic Press.


Cooper, P. J., Taylor, M. J., Cooper, Z., and Fairbum, C. G. (1987). The development and validation of the body shape questionnaire. Int. J. Eat. Disord. 6, 485–494. doi:10.1002/1098-108X(198707)6:4(485:AID-EAT2260060405)3.0.CO;2-O


Cornelissen, P. L., Cornelissen, K. K., Groves, V., McCarty, K., and Tovée, M. J. (2018). View-dependent accuracy in body mass judgements of female bodies. Body Image 24, 116–123. doi:10.1016/j.bodyim.2017.12.007


Debarba, H. G., Bovet, S., Salomon, R., Blanke, O., Herbelin, B., and Boulic, R. (2017). Characterizing first and third person viewpoints and their alternation for embodied interaction in virtual reality. PLOS ONE 12, e0190109–e0190119. doi:10.1371/journal.pone.0190109


Debarba, H. G., Molla, E., Herbelin, B., and Boulic, R. (2015). “Characterizing embodied interaction in first and third person perspective viewpoints,” in 2015 IEEE symposium on 3D user interfaces (3DUI) (New York, NY, United States: IEEE), 67–72. doi:10.1109/3DUI.2015.7131728


Delinsky, S. S., and Wilson, G. T. (2006). Mirror exposure for the treatment of body image disturbance. Int. J. Eat. Disord. 39, 108–116. doi:10.1002/eat.20207


Döllinger, N., Wienrich, C., Wolf, E., and Latoschik, M. E. (2019). “ViTraS – Virtual reality therapy by stimulation of modulated body image – Project outline,” in Mensch und Computer 2019 – Workshopband (Bonn: Gesellschaft für Informatik e.V.), 1–6. doi:10.18420/muc2019-ws-633


Döllinger, N., Wolf, E., Botsch, M., Latoschik, M. E., and Wienrich, C. (2023). Are embodied avatars harmful to our self-experience? The impact of virtual embodiment on body awareness. Manuscript submitted for publication.


Döllinger, N., Wolf, E., Mal, D., Wenninger, S., Botsch, M., Latoschik, M. E., et al. (2022). Resize me! Exploring the user experience of embodied realistic modulatable avatars for body image intervention in virtual reality. Front. Virtual Real. 3. doi:10.3389/frvir.2022.935449


Eilers, K., Nachreiner, F., and Hänecke, K. (1986). “Entwicklung und überprüfung einer skala zur erfassung subjektiv erlebter anstrengung,” in Zeitschrift für Arbeitswissenschaft, 214–224.


Epstein, R. A., Patai, E. Z., Julian, J. B., and Spiers, H. J. (2017). The cognitive map in humans: Spatial navigation and beyond. Nat. Neurosci. 20, 1504–1513. doi:10.1038/nn.4656


Evans, C., and Dolan, B. (1993). Body shape questionnaire: Derivation of shortened “alternate forms”. Int. J. Eat. Disord. 13, 315–321. doi:10.1002/1098-108X(199304)13:3(315:AID-EAT2260130310)3.0.CO;2-3


Faul, F., Erdfelder, E., Buchner, A., and Lang, A.-G. (2009). Statistical power analyses using G*Power 3.1: Tests for correlation and regression analyses. Behav. Res. methods 41, 1149–1160. https://www.gpower.hhu.de/. doi:10.3758/brm.41.4.1149


Ferring, D., and Filipp, S.-H. (1996). Messung des Selbstwertgefühls: Befunde zu Reliabilität, Validität und Stabilität der Rosenberg-Skala. Diagnostica-Goettingen 42, 284–292.


Fiedler, M. L., Wolf, E., Döllinger, N., Botsch, M., Latoschik, M. E., and Wienrich, C. (2023). Appearing or behaving like me? Embodiment and personalization of virtual humans in virtual reality. Manuscript submitted for publication.


Gilinsky, A. S. (1951). Perceived size and distance in visual space. Psychol. Rev. 58, 460–482. doi:10.1037/h0061505


Gonzalez-Franco, M., Abtahi, P., and Steed, A. (2019). “Individual differences in embodied distance estimation in virtual reality,” in 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Osaka, Japan, 23-27 March 2019, 941–943. doi:10.1109/VR.2019.8798348


Gonzalez-Franco, M., and Peck, T. C. (2018). Avatar embodiment. Towards a standardized questionnaire. Front. Robot. AI 5, 74. doi:10.3389/frobt.2018.00074


Gorisse, G., Christmann, O., Amato, E. A., and Richir, S. (2017). First- and third-person perspectives in immersive virtual environments: Presence and performance analysis of embodied users. Front. Robot. AI 4. doi:10.3389/frobt.2017.00033


Griffen, T. C., Naumann, E., and Hildebrandt, T. (2018). Mirror exposure therapy for body image disturbances and eating disorders: A review. Clin. Psychol. Rev. 65, 163–174. doi:10.1016/j.cpr.2018.08.006


He, D., Liu, F., Pape, D., Dawe, G., and Sandin, D. (2000). “Video-based measurement of system latency,” in International immersive projection technology workshop. Ames, Iowa: Iowa State University, 1–6.


Hepperle, D., Ödell, H., and Wölfel, M. (2020). “Differences in the uncanny valley between head-mounted displays and monitors,” in 2020 International Conference on Cyberworlds (CW), Caen, France, 29 September 2020 - 01 October 2020 (New York, NY, United States: IEEE), 41–48. doi:10.1109/CW49994.2020.00014


Hepperle, D., Purps, C. F., Deuchler, J., and Wölfel, M. (2022). Aspects of visual avatar appearance: Self-representation, display type, and uncanny valley. Vis. Comput. 38, 1227–1244. doi:10.1007/s00371-021-02151-0


Higashiyama, A., and Shimono, K. (2004). Mirror vision: Perceived size and perceived distance of virtual images. Percept. Psychophys. 66, 679–691. doi:10.3758/BF03194911


Ho, C.-C., and MacDorman, K. F. (2017). Measuring the uncanny valley effect. Int. J. Soc. Robot. 9, 129–139. doi:10.1007/s12369-016-0380-9


IBM (2022). SPSS statistics. Available at: https://www.ibm.com/products/spss-statistics (Accessed August 24, 2022) [online source].


Ijsselsteijn, W. A., de Kort, Y. A. W., and Haans, A. (2006). Is this my hand I see before me? The rubber hand illusion in reality, virtual reality, and mixed reality. Presence. (Camb). 15, 455–464. doi:10.1162/pres.15.4.455


Inoue, Y., and Kitazaki, M. (2021). Virtual mirror and beyond: The psychological basis for avatar embodiment via a mirror. J. Robot. Mechatron. 33, 1004–1012. doi:10.20965/jrm.2021.p1004


Irvine, K. R., McCarty, K., McKenzie, K. J., Pollet, T. V., Cornelissen, K. K., Tovée, M. J., et al. (2019). Distorted body image influences body schema in individuals with negative bodily attitudes. Neuropsychologia 122, 38–50. doi:10.1016/j.neuropsychologia.2018.11.015


Kamaria, K., Vikram, M., and Ayiesah, R. (2016). Body image perception, body shape concern and body shape dissatisfaction among undergraduates students. J. Teknol. 78. doi:10.11113/jt.v78.9050


Kelly, J. W., Cherep, L. A., Klesel, B., Siegel, Z. D., and George, S. (2018). Comparison of two methods for improving distance perception in virtual reality. ACM Trans. Appl. Percept. 15, 1–11. doi:10.1145/3165285


Kelly, J. W. (2022). Distance perception in virtual reality: A meta-analysis of the effect of head-mounted display characteristics. IEEE Trans. Vis. Comput. Graph. 1, 1–13. doi:10.1109/TVCG.2022.3196606


Kelly, J. W., Doty, T. A., Ambourn, M., and Cherep, L. A. (2022). Distance perception in the oculus quest and oculus quest 2. Front. Virtual Real. 3. doi:10.3389/frvir.2022.850471


Kenealy, P., Gleeson, K., Frude, N., and Shaw, W. (1991). The importance of the individual in the “causal” relationship between attractiveness and self-esteem. J. Community Appl. Soc. Psychol. 1, 45–56. doi:10.1002/casp.2450010108


Kennedy, R. S., Lane, N. E., Berbaum, K. S., and Lilienthal, M. G. (1993). Simulator sickness questionnaire: An enhanced method for quantifying simulator sickness. Int. J. Aviat. Psychol. 3, 203–220. doi:10.1207/s15327108ijap0303_3


Kilteni, K., Bergstrom, I., and Slater, M. (2013). Drumming in immersive virtual reality: The body shapes the way we play. IEEE Trans. Vis. Comput. Graph. 19, 597–605. doi:10.1109/TVCG.2013.29


Kilteni, K., Groten, R., and Slater, M. (2012). The sense of embodiment in virtual reality. Presence. (Camb). 21, 373–387. doi:10.1162/PRES_a_00124


Latoschik, M. E., and Wienrich, C. (2022). Congruence and plausibility, not presence: Pivotal conditions for XR experiences and effects, a novel approach. Front. Virtual Real. 3. doi:10.3389/frvir.2022.694433


Leyrer, M., Linkenauger, S. A., Bülthoff, H. H., Kloos, U., and Mohler, B. (2011). “The influence of eye height and avatars on egocentric distance estimates in immersive virtual environments,” in Proceedings of the ACM SIGGRAPH symposium on applied perception in graphics and visualization (New York, NY, USA: Association for Computing Machinery), 67–74. doi:10.1145/2077451.2077464


LimeSurvey GmbH (2020). Limesurvey 4. Available at: https://www.limesurvey.org (Accessed August 24, 2022) [online source].


Loomis, J., and Knapp, J. (2003). “Visual perception of egocentric distance in real and virtual environments,” in Virtual and adaptive environments. Editors L. J. Hettinger, and M. W. Haas (Hillsdale, New Jersey: Lawrence Erlbaum Associates Publishers), 21–46. chap. 2. doi:10.1201/9781410608888


Lugrin, J.-L., Latt, J., and Latoschik, M. E. (2015). “Anthropomorphism and illusion of virtual body ownership,” in Proceedings of the 25th international conference on artificial reality and telexistence and 20th eurographics symposium on virtual environments (Eindhoven, Netherlands: Eurographics Association), 1–8. doi:10.2312/egve.20151303


Mal, D., Wolf, E., Döllinger, N., Wienrich, C., and Latoschik, M. E. (2023). The impact of avatar and environment congruence on plausibility, embodiment, presence, and the proteus effect. Manuscript submitted for publication.


Matamala-Gomez, M., Maselli, A., Malighetti, C., Realdon, O., Mantovani, F., and Riva, G. (2021). Virtual body ownership illusions for mental health: A narrative review. J. Clin. Med. 10, 139. doi:10.3390/jcm10010139


Mohler, B. J., Creem-Regehr, S. H., Thompson, W. B., and Bülthoff, H. H. (2010). The effect of viewing a self-avatar on distance judgments in an hmd-based virtual environment. Presence. (Camb). 19, 230–242. doi:10.1162/pres.19.3.230


Mölbert, S. C., Thaler, A., Mohler, B. J., Streuber, S., Romero, J., Black, M. J., et al. (2018). Assessing body image in anorexia nervosa using biometric self-avatars in virtual reality: Attitudinal components rather than visual body size estimation are distorted. Psychol. Med. 48, 642–653. doi:10.1017/S0033291717002008


Mori, M. (1970). Bukimi no tani [the uncanny valley]. Energy 7, 33–35.


Mori, M., MacDorman, K. F., and Kageki, N. (2012). The uncanny valley [from the field]. IEEE Robot. Autom. Mag. 19, 98–100. doi:10.1109/MRA.2012.2192811


Müller, M., Bender, J., Chentanez, N., and Macklin, M. (2016). “A robust method to extract the rotational part of deformations,” in Proceedings of the 9th international conference on motion in games (New York, NY, USA: Association for Computing Machinery), 55–60. doi:10.1145/2994258.2994269


Neyret, S., Bellido Rivas, A. I., Navarro, X., and Slater, M. (2020). Which body would you like to have? The impact of embodied perspective on body perception and body evaluation in immersive virtual reality. Front. Robot. AI 7, 31. doi:10.3389/frobt.2020.00031


Niehorster, D. C., Li, L., and Lappe, M. (2017). The accuracy and precision of position and orientation tracking in the HTC Vive virtual reality system for scientific research. i-Perception 8, 204166951770820. doi:10.1177/2041669517708205


O’Dea, J. (2012). “Body image and self-esteem,” in Encyclopedia of body image and human appearance. Editor T. Cash (Oxford: Academic Press), 141–147. doi:10.1016/B978-0-12-384925-0.00021-3


Patzer, G. L. (1995). Self-esteem and physical attractiveness. J. Esthet. Restor. Dent. 7, 274–276. doi:10.1111/j.1708-8240.1995.tb00591.x


Peck, T. C., and Gonzalez-Franco, M. (2021). Avatar embodiment. A standardized questionnaire. Front. Virtual Real. 1. doi:10.3389/frvir.2020.575943


Philbeck, J. W., and Loomis, J. M. (1997). Comparison of two indicators of perceived egocentric distance under full-cue and reduced-cue conditions. J. Exp. Psychol. Hum. Percept. Perform. 23, 72–85. doi:10.1037/0096-1523.23.1.72


Piryankova, I. V., Stefanucci, J. K., Romero, J., De La Rosa, S., Black, M. J., and Mohler, B. J. (2014). Can I recognize my body’s weight? The influence of shape and texture on the perception of self. ACM Trans. Appl. Percept. 11, 1–18. doi:10.1145/2641568


Pook, M., Tuschen-Caffier, B., and Stich, N. (2002). Evaluation des Fragebogens zum Figurbewusstsein (FFB, Deutsche Version des Body Shape Questionnaire). Verhaltenstherapie 12, 116–124. doi:10.1159/000064375


Ratan, R., Beyea, D., Li, B. J., and Graciano, L. (2020). Avatar characteristics induce users’ behavioral conformity with small-to-medium effect sizes: A meta-analysis of the proteus effect. Media Psychol. 23, 651–675. doi:10.1080/15213269.2019.1623698


Renner, R. S., Velichkovsky, B. M., and Helmert, J. R. (2013). The perception of egocentric distances in virtual environments – a review. ACM Comput. Surv. 46, 1–40. doi:10.1145/2543581.2543590


Ries, B., Interrante, V., Kaeding, M., and Anderson, L. (2008). “The effect of self-embodiment on distance perception in immersive virtual environments,” in Proceedings of the 2008 ACM symposium on virtual reality software and technology (New York, NY, USA: Association for Computing Machinery), 167–170. doi:10.1145/1450579.1450614


Robinette, K. M., Blackwell, S., Daanen, H., Boehmer, M., and Fleming, S. (2002). Civilian American and European surface anthropometry resource (CEASAR). Tech. Rep. Sytronics Inc. doi:10.21236/ada406704


Romano, D., Pfeiffer, C., Maravita, A., and Blanke, O. (2014). Illusory self-identification with an avatar reduces arousal responses to painful stimuli. Behav. Brain Res. 261, 275–281. doi:10.1016/j.bbr.2013.12.049


Rosenberg, M. (2015). Society and the adolescent self-image. Princeton, New Jersey, United States: Princeton University Press.


Roth, D., and Latoschik, M. E. (2020). Construction of the virtual embodiment questionnaire (VEQ). IEEE Trans. Vis. Comput. Graph. 26, 3546–3556. doi:10.1109/TVCG.2020.3023603


Roth, M., Decker, O., Herzberg, P. Y., and Brähler, E. (2008). Dimensionality and norms of the Rosenberg self-esteem scale in a German general population sample. Eur. J. Psychol. Assess. 24, 190–197. doi:10.1027/1015-5759.24.3.190


Sedgwick, H. A. (1986). “Space perception,” in Handbook of perception and human performance. Editors K. R. Boff, L. Kaufman, and J. P. Thomas (New York: Wiley), 1, 21. 1–21.57.


Slater, M., Pérez Marcos, D., Ehrsson, H., and Sanchez-Vives, M. V. (2009). Inducing illusory ownership of a virtual body. Front. Neurosci. 3, 214–220. doi:10.3389/neuro.01.029.2009


Slater, M., Spanlang, B., Sanchez-Vives, M. V., and Blanke, O. (2010). First person experience of body transfer in virtual reality. PLOS ONE 5, e10564. doi:10.1371/journal.pone.0010564


Spanlang, B., Normand, J.-M., Borland, D., Kilteni, K., Giannopoulos, E., Pomés, A., et al. (2014). How to build an embodiment lab: Achieving body representation illusions in virtual reality. Front. Robot. AI 1, 9. doi:10.3389/frobt.2014.00009


Spitzley, K. A., and Karduna, A. R. (2019). Feasibility of using a fully immersive virtual reality system for kinematic data collection. J. Biomechanics 87, 172–176. doi:10.1016/j.jbiomech.2019.02.015


Stanney, K. M., Kennedy, R. S., and Drexler, J. M. (1997). “Cybersickness is not simulator sickness,” in Proceedings of the human factors and ergonomics society annual meeting (Los Angeles, CA: SAGE Publications Sage CA), 41, 1138–1142. doi:10.1177/107118139704100292


Stauffert, J.-P., Korwisi, K., Niebling, F., and Latoschik, M. E. (2021). “Ka-Boom!!! Visually exploring latency measurements for XR,” in Extended abstracts of the 2021 CHI conference on human factors in computing systems (New York, NY, United States: Association for Computing Machinery), 1–9.


Stauffert, J.-P., Niebling, F., and Latoschik, M. E. (2018). “Effects of latency jitter on simulator sickness in a search task,” in 2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Tuebingen/Reutlingen, Germany, 18-22 March 2018, 121–127. doi:10.1109/VR.2018.8446195


Stauffert, J.-P., Niebling, F., and Latoschik, M. E. (2020). Latency and cybersickness: Impact, causes, and measures. A review. Front. Virtual Real. 1. doi:10.3389/frvir.2020.582204


Stefanucci, J. K., and Geuss, M. N. (2009). Big people, little world: The body influences size perception. Perception 38, 1782–1795. doi:10.1068/p6437


Thaler, A., Geuss, M. N., Mölbert, S. C., Giel, K. E., Streuber, S., Romero, J., et al. (2018a). Body size estimation of self and others in females varying in BMI. PLOS ONE 13, e0192152. doi:10.1371/journal.pone.0192152


Thaler, A., Piryankova, I. V., Stefanucci, J. K., Pujades, S., de la Rosa, S., Streuber, S., et al. (2018b). Visual perception and evaluation of photo-realistic self-avatars from 3D body scans in males and females. Front. ICT 5, 18. doi:10.3389/fict.2018.00018


Thaler, A., Pujades, S., Stefanucci, J. K., Creem-Regehr, S. H., Tesch, J., Black, M. J., et al. (2019). “The influence of visual perspective on body size estimation in immersive virtual reality,” in ACM symposium on applied perception 2019 (New York, NY, USA: Association for Computing Machinery), 1–12. doi:10.1145/3343036.3343134


Thaler, A. (2019). The role of visual cues in body size estimation. Berlin: Logos Verlag Berlin GmbH. vol. 56 of MPI Series in Biological Cybernetics.


Tsakiris, M. (2008). Looking for myself: Current multisensory input alters self-face recognition. PLOS ONE 3, e4040–e4046. doi:10.1371/journal.pone.0004040


Turbyne, C., Goedhart, A., de Koning, P., Schirmbeck, F., and Denys, D. (2021). Systematic review and meta-analysis of virtual reality in mental healthcare: Effects of full body illusions on body image disturbance. Front. Virtual Real. 2, 39. doi:10.3389/frvir.2021.657638


Tuschen-Caffier, B., Bender, C., Caffier, D., Klenner, K., Braks, K., and Svaldi, J. (2015). Selective visual attention during mirror exposure in anorexia and bulimia nervosa. PLOS ONE 10, e0145886. doi:10.1371/journal.pone.0145886


Unity Technologies (2019). Unity. Available at: https://unity3d.com (Accessed January 20, 2022) [online source].


Valve Corporation (2020a). Index. Available at: https://store.steampowered.com/valveindex (Accessed August 26, 2022) [online source].


Valve Corporation (2020b). SteamVR. Available at: https://store.steampowered.com/app/250820/SteamVR (Accessed 05 August, 2022) [online source].


Vox, J. P., Weber, A., Wolf, K. I., Izdebski, K., Schüler, T., König, P., et al. (2021). An evaluation of motion trackers with virtual reality sensor technology in comparison to a marker-based motion capture system based on joint angles for ergonomic risk assessment. Sensors 21, 3145. doi:10.3390/s21093145


Waltemate, T., Gall, D., Roth, D., Botsch, M., and Latoschik, M. E. (2018). The impact of avatar personalization and immersion on virtual body ownership, presence, and emotional response. IEEE Trans. Vis. Comput. Graph. 24, 1643–1652. doi:10.1109/TVCG.2018.2794629


Waltemate, T., Senna, I., Hülsmann, F., Rohde, M., Kopp, S., Ernst, M., et al. (2016). “The impact of latency on perceptual judgments and motor performance in closed-loop interaction in virtual reality,” in Proceedings of the 22nd ACM symposium on virtual reality software and technology (New York, NY, United States: Association for Computing Machinery), 27–35. doi:10.1145/2993369.2993381


Wienrich, C., Döllinger, N., and Hein, R. (2021). Behavioral framework of immersive technologies (BehaveFIT): How and why virtual reality can support behavioral change processes. Front. Virtual Real. 2, 84. doi:10.3389/frvir.2021.627194


Wienrich, C., Döllinger, N., Kock, S., and Gramann, K. (2019). “User-centered extension of a locomotion typology: Movement-related sensory feedback and spatial learning,” in 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Osaka, Japan, 23-27 March 2019, 690–698. doi:10.1109/VR.2019.8798070


Willemsen, P., and Gooch, A. (2002). “Perceived egocentric distances in real, image-based, and traditional virtual environments,” in Proceedings IEEE Virtual Reality 2002, Orlando, FL, USA, 24-28 March 2002, 275–276. doi:10.1109/VR.2002.996536


Wolf, E., Döllinger, N., Mal, D., Wienrich, C., Botsch, M., and Latoschik, M. E. (2020). “Body weight perception of females using photorealistic avatars in virtual and augmented reality,” in 2020 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Porto de Galinhas, Brazil, 09-13 November 2020 (New York, NY, United States: IEEE), 583–594. doi:10.1109/ISMAR50242.2020.00071


Wolf, E., Fiedler, M. L., Döllinger, N., Wienrich, C., and Latoschik, M. E. (2022a). “Exploring presence, avatar embodiment, and body perception with a holographic augmented reality mirror,” in 2022 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Christchurch, New Zealand, 12-16 March 2022 (New York, NY, United States: IEEE), 350–359. doi:10.1109/VR51125.2022.00054


Wolf, E., Mal, D., Frohnapfel, V., Döllinger, N., Wenninger, S., Botsch, M., et al. (2022b). “Plausibility and perception of personalized virtual humans between virtual and augmented reality,” in 2022 IEEE international symposium on mixed and augmented reality (ISMAR). New York, NY, United States: IEEE, 489–498. doi:10.1109/ISMAR55827.2022.00065


Wolf, E., Merdan, N., Döllinger, N., Mal, D., Wienrich, C., Botsch, M., et al. (2021). “The embodiment of photorealistic avatars influences female body weight perception in virtual reality,” in 2021 IEEE Virtual Reality and 3D User Interfaces (VR), Lisboa, Portugal, 27 March 2021 - 01 April 2021, 65–74. doi:10.1109/VR50410.2021.00027


World Health Organization (2019). International statistical classification of diseases and related health problems. 11th ed.. Lyon, France: World Health Organization.


Yee, N., and Bailenson, J. (2007). The Proteus effect: The effect of transformed self-representation on behavior. Hum. Commun. Res. 33, 271–290. doi:10.1111/j.1468-2958.2007.00299.x


Zijlstra, F. R. H. (1993). Efficiency in work behaviour: A design approach for modern tools. Ph.D. thesis (Delft, Netherlands: Delft University).

Keywords: virtual human, virtual body ownership, agency, body image distortion, body weight perception, body weight modification, affective appraisal, distance compression

Citation: Wolf E, Döllinger N, Mal D, Wenninger S, Bartl A, Botsch M, Latoschik ME and Wienrich C (2022) Does distance matter? Embodiment and perception of personalized avatars in relation to the self-observation distance in virtual reality. Front. Virtual Real. 3:1031093. doi: 10.3389/frvir.2022.1031093

Received: 29 August 2022; Accepted: 25 November 2022;
Published: 21 December 2022.

Edited by:

Jonathan W. Kelly, Iowa State University, United States

Reviewed by:

You Cheng, Massachusetts General Hospital, Harvard Medical School, United States
Bing Liu, Technical University of Munich, Germany

Copyright © 2022 Wolf, Döllinger, Mal, Wenninger, Bartl, Botsch, Latoschik and Wienrich. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Erik Wolf, erik.wolf@uni-wuerzburg.de
