ORIGINAL RESEARCH article

Front. Robot. AI, 27 September 2022
Sec. Human-Robot Interaction
This article is part of the Research Topic Human-robot collaboration in industry 5.0: a human-centric AI-based approach

Collaborating eye to eye: Effects of workplace design on the perception of dominance of collaboration robots

Alexander Arntz1*, Carolin Straßmann1, Stefanie Völker2 and Sabrina C. Eimler1
  • 1Institute of Computer Science, University of Applied Sciences Ruhr West, Bottrop, Germany
  • 2Institute of Mechanical Engineering, University of Applied Sciences Ruhr West, Mülheim an der Ruhr, Germany

The concept of Human-Robot Collaboration (HRC) describes innovative industrial work procedures in which human staff works in close vicinity with robots on a shared task. Current HRC scenarios often deploy hand-guided robots or remote controls operated by the human collaboration partner. As HRC envisions active collaboration between both parties, ongoing research efforts aim to enhance the capabilities of industrial robots not only in the technical dimension but also in the robot’s socio-interactive features. Apart from enabling the robot to autonomously complete the respective shared task in conjunction with a human partner, one essential aspect lifted from group collaboration among humans is the communication between both entities. State-of-the-art research has identified communication as a significant contributor to successful collaboration between humans and industrial robots. Non-verbal gestures have been shown to be a contributing aspect in conveying the respective state of the robot during the collaboration procedure. Research indicates that, depending on the viewing perspective, the usage of non-verbal gestures in humans can impact the interpersonal attribution of certain characteristics. Applied to collaborative robots such as the Yumi IRB 14000, which is equipped with two arms specifically to mimic human actions, the perception of the robot’s non-verbal behavior can affect the collaboration. Most important in this context are dominance-emitting gestures by the robot that can reinforce negative attitudes towards robots, thus hampering the users’ willingness to collaborate with the robot and the effectiveness of the collaboration. Using a 3 × 3 within-subjects online study, we investigated the effect of three dominance gestures (Akimbo, crossing arms, and large arm spread) and three viewing perspectives (standing at average male height, standing at average female height, and seated) on the perceived dominance of the robot. Overall, 115 participants (58 female and 57 male) with an average age of 24 years evaluated nine videos of the robot. Results indicated that all presented gestures affected a person’s perception of the robot with regard to its perceived characteristics and the willingness to cooperate with it. The data also showed an increased attribution of dominance depending on the presented viewing perspective.

1 Introduction

For a long time in automated production processes, industrial robots and humans have performed their work strictly separated. Isolated by cages, industrial robots operated at safe distances from the personnel to prevent potentially hazardous situations (Hentout et al., 2019). A new paradigm in the industry has spawned a new category of industrial robots explicitly designed to collaborate with humans in close proximity. This approach offers an enormous labor multiplier for industrial production cycles, as the human worker and the industrial robot can complement each other in their respective skill sets. Industrial robots are valued for their capability to lift heavy objects and to repeat precise tasks, whereas the human worker excels at intuition, experience-based decision making, and reactivity towards circumstances deviating from the procedure (Ajoudani et al., 2018). In concept, both parties can benefit from each other through mutual assistance, which forms the basis for the subject and research discipline of Human-Robot Collaboration (HRC).

While dedicated collaboration robots can come in many forms depending on their respective specializations (Buxbaum et al., 2020), the configuration of a dual-arm setup promises to mimic the actions and capabilities of the human collaboration partner best (Kirschner et al., 2016). This is based on the assumption that dual-arm robots can mirror the capability of humans to operate as bilaterally capable manipulators. Apart from projected advantages for the collaboration procedure itself, such as enhanced coordination capabilities, the physical representation of a dual-arm robot allows for extensive gesture-based communication. Although incapable of mirroring the body language capabilities of modern androids, industrial dual-arm robots can express some gestures that resemble human-like postures. Therefore, dual-arm collaboration robots could be outfitted to signal a variety of different gestures in accordance with the current collaboration context. Prior studies regarding the interaction with robots revealed a significant benefit of collaboration robots equipped with gesture-based communication combined with other information interfaces during shared task scenarios with a human partner (Arntz et al., 2021a; Bremner and Leonards, 2016). Ranging from subjective benefits such as reduced stress to objective benefits such as increased production quantity, it can be assumed that dual-arm robots can be enhanced in their collaboration effectiveness through communication as well. However, while prior studies used established gestures for industrial robots represented by a single-arm setup (Ende et al., 2011), the evaluation of dual-arm gestures for collaboration settings with humans still needs further investigation.

Since research in the domain of Human-Robot Interaction indicates the multi-layered complexity of the information conveyed by robots using human-like gestures and body language (Riek et al., 2010; McColl and Nejat, 2014), it is paramount to explore the perceptions that users gain from dual-arm collaboration robots equipped with gesture-based communication. The goal is to evaluate, incrementally over a series of studies, a library of different gestures and to investigate their respective impression on users in order to sort out unfit gestures for dual-arm robots that might compromise the collaboration experience. To this end, the study presented in this work tested three distinct dominance gestures derived from the work of Straßmann et al., who investigated the effects of different nonverbal gestures of virtual agents on the perception of dominance (Straßmann et al., 2016). Since a substantial number of people in Western societies uphold several misconceptions and fears about robots in a working context, predominantly the fear of being replaced (Union, 2014), it is important to investigate gesture-based communication for robots that does not reinforce the notion of being dominated. Another aspect that is essential for an individual’s perception of the threat and dominance of other humans and robots is the individual’s spatial perspective and position (Re et al., 2014). With regard to the design of workplace ergonomics for HRC setups, it is important to investigate potential multipliers of an unwanted perception of dominance by the collaborative robot. To account for this, the study presented the robot from multiple perspectives based on the height of the human operator exposed to the gestures of the dual-arm robot, resulting in three conditions: the average height of females, the average height of males, and a sitting position. With regard to the ergonomics of HRC workplace arrangements, it is anticipated that the perspective from which the user views the robot can affect the way the robot is perceived (Stanton and Stevens, 2017).

The subsequent sections outline the theoretical foundations of the hypothesis and research questions guiding this work. After that, the methodological details of the empirical study are presented. Finally, the results are reported, followed by a discussion of the results and limitations.

2 Related work

Akin to group collaboration among humans, research indicates that effective collaboration between humans and robots requires the exchange of information to facilitate coordination of the current state of each entity and the handled tasks (Arntz et al., 2021b; Hentout et al., 2019). The necessary information can be conveyed through a wide array of possible channels, i.e., speech, text, light signals, or gestures (Arntz et al., 2020a). Since the collaboration of humans and robots follows an embodied form of interaction between the two parties, it is expected that the usage of gestures is naturally embedded in people’s social communication while collaborating with the robot (Hentout et al., 2019). Research has shown that robots specifically tasked with collaboration procedures induce higher expectations regarding the robots’ capability to respect social norms such as proxemics and gestures that suit the current context (Mumm and Mutlu, 2011). However, prior studies on the usage of gestures for industrial robots did not replicate gestures directly adapted from human posture (Ende et al., 2011), the reason being the difficulty of directly translating human-like gestures onto the various non-humanoid representations of industrial robots. Nevertheless, the application of human-like characteristics to industrial robots can be found in some attempts by robot manufacturers to provide human personnel with an anchor to facilitate people’s willingness for interaction. A common example of this is the implementation of human-like facial expressions on the Baxter robot (Si and McDaniel, 2016). Since the goal of the application of human characteristics is to elevate people’s willingness to collaborate with the robot and reduce unfavorable prejudices, it is paramount for the robot to emit a non-threatening and non-dominating presence (Buxbaum et al., 2020).

Research regarding communication among humans has shown that, across cultures, gestures and body language can be reinforced based on the respective hierarchy level an individual is perceived to have (Chase and Lindquist, 2009). The authors argue on the basis of the prior-attributes hypothesis, which postulates that the attribution of dominance can be affected by behavioral characteristics such as aggressiveness but also by the physical representation, such as height (Chase and Lindquist, 2009). The posture of an individual and the relational perspective of the observer can affect the observer’s perception regarding the dominance and other characteristics of the respective person (Marshall et al., 2020). Human-Robot Interaction studies have shown that some of the associations gained from human body language can be applied to social robots with humanoid representations as well (Beck et al., 2012; McColl and Nejat, 2014). According to the research of Chung-En, humanoid robots outfitted with a smiling face and accompanied by adequate body language evoked similar perceptions of interpersonal warmth across all ages and genders (Chung-En, 2018). While collaborative robots deployed in industrial settings do not follow a human-like representation as closely as robots deployed in social contexts, it can be argued that the Yumi IRB 14000, with its dual-arm setup designed to mimic human action, can evoke a more human-like association (Kirschner et al., 2016). This is based on the work of Lee et al., who designed a dual-arm robot with a structure closely resembling the Yumi IRB 14000, with the goal of a biologically inspired anthropomorphic representation (Lee et al., 2017). It can be argued that the attribution of human characteristics applies to dual-arm collaboration robots with their anthropomorphic resemblance; therefore, the perspective-based reinforcement of characteristics such as dominance should be applicable as well.

Another important aspect is the effect the perspective has on the respective gestures. Since the interpretation of body language does not follow the same precision as direct messaging, gestures designed to convey a certain state can be interpreted differently by industrial staff based on their respective position, thus jeopardizing the intent of the gesture. A further aspect that is crucial for upcoming HRC scenarios is the ergonomics of the workplace setup: industrial staff can collaborate with the robotic partner in a seated or standing position. Therefore, it is of interest how the shift in perspective affects the impression gained from the gestures made by the robot. Since perceptions regarding the dominance of an entity vary across demographics such as age groups (Rosenthal-von der Pütten et al., 2019), it is also of interest whether there are gender-specific differences in the dominance-related perception of the robot. This is grounded in the work of Sokolov et al., which indicates that women tend to read body language, especially hostile gestures, more effectively than men (Sokolov et al., 2011).

2.1 Hypothesis and research questions

Based on the theoretical work outlined before, one hypothesis (H1) and two research questions (RQ1 and RQ2) were deduced, focusing on the attributions made to the robot’s dominance gestures and considering the users’ workplace configuration (average female and male height and a sitting setup) and gender.

  • H1: The viewing perspective has an effect on participants’ perceived dominance of the dual-arm robot.

• RQ1: How does the human’s viewing perspective affect the perception of the robot’s gestures?

• RQ2: What are the differences in perception between both genders?

3 Methods

The study used an HRC workplace arrangement containing the Yumi IRB 14000 dual-arm robot (ABB, 2015) within an industrial background scene for immersion purposes. The online experiment followed a within-subjects design comprising a series of nine first-person perspective videos.

3.1 Sample

The sample consisted of 115 participants (female = 58, male = 57) with an average age of M = 24.47 years (SD = 6.26). Only N = 5 participants indicated that they had worked with the robot before, while N = 10 indicated that they had seen the dual-arm robot in a real environment.

3.2 Measures and procedure

Self-reported data collected through an online questionnaire were used to investigate the postulated hypothesis and research questions. Presented through the online platform SoSci Survey (Survey, 2022), participants were asked to fill out a questionnaire that was formulated in German. Items derived from English language sources were translated independently by two researchers to guarantee a proper translation. The landing page introduced participants to all information required to provide informed consent. After agreeing to take part, participants were informed about data protection handling and asked to generate a code allowing the anonymous deletion of their data after the study if they wished so. After that, their age and gender were collected, which was necessary to explore RQ2. This was followed by a short briefing regarding the stimulus material, explaining that participants would see nine short videos of an industrial robot and asking them to answer the questions in an undisturbed environment where they could follow the videos with full attention. Also, they were informed that they would not need audio and could also use their smartphone but would need to adjust the display size.

After that, participants were exposed to the videos. Each of the videos was accompanied by a set of questions consisting of a) a list of 17 items of a semantic differential (5-point scale) and b) one item with a Kunin scale (7-point scale). To measure anthropomorphism (conscious—unconscious (inverse) and artificial—lifelike), animacy (stagnant—lively and artificial—lifelike), and likability (unpleasant—pleasant and dislike—like), a selection of 2 items each from the Godspeed subscales (Bartneck et al., 2008) was used. The Cronbach’s alpha values of the subscales anthropomorphism (α = .402) and animacy (α = .558) are rather low, and the internal consistency is therefore threatened. However, this can be explained by the reduced number of items (2 items) that was necessary due to the repeated-measures approach and the overall length of the questionnaire. Since these variables are of high interest for the research aim of this study, the measures were used despite the low internal consistency. For the subscale likability, the internal consistency was acceptable (α = .781).
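
As a transparent reference for how such reliability values are obtained, the following minimal Python sketch computes Cronbach’s alpha for a two-item subscale; the simulated ratings are purely illustrative stand-ins for the raw data, which are not reproduced here.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) rating matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the sum score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Simulate 115 respondents rating two correlated 5-point items
# (a latent trait plus item-specific noise, clipped to the scale).
rng = np.random.default_rng(0)
latent = rng.normal(3.0, 1.0, size=115)
ratings = np.clip(np.round(latent[:, None] + rng.normal(0, 0.7, (115, 2))), 1, 5)
print(round(cronbach_alpha(ratings), 3))
```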

Along the interpersonal circumplex (Orford, 1994), the perceived dominance (dominant—submissive) and hostility (hostile—friendly) of the robot were measured. Additionally, two self-generated single items were used to measure the perceived cooperativeness (uncooperative—cooperative) and threat (threatening—harmless) of the robot. Moreover, the feelings during an imagined collaboration with the robot were measured with a single item (“If the robot behaved as it does in the video shown: How comfortable are you with the thought of working together with the robot?”) rated on a Kunin scale (7-point scale). The order of the videos was shuffled at random for each of the participants to prevent them from answering the items along an emerging pattern, as sketched below. In the end, participants were asked whether they were familiar with the presented robot and whether they had ever collaborated with the Yumi IRB 14000 robot. Also, they were asked whether there had been technical problems and were given the chance to give feedback on positive or negative aspects they noticed about the study before being fully debriefed.
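
A minimal sketch of this per-participant shuffling could look as follows; the condition labels for the nine videos are hypothetical placeholders, not the study’s actual file names.

```python
import random

# The nine videos of the 3 x 3 design (gesture x viewing perspective).
VIDEOS = [f"{g}_{p}.mp4"
          for g in ("akimbo", "crossed_arms", "arm_spread")
          for p in ("sitting", "female_standing", "male_standing")]

def playlist_for(participant_id: int) -> list[str]:
    """Return a reproducible random video order for one participant."""
    rng = random.Random(participant_id)  # seeded per participant
    return rng.sample(VIDEOS, k=len(VIDEOS))

print(playlist_for(42))
```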

3.3 Stimulus material

The stimulus material consisted of nine videos in a 3 × 3 setup, combining the three gestures dubbed Akimbo, crossing arms, and large arm spread with three height positions derived from the global average female height of 159 cm (Roser et al., 2022), the global average male height of 177 cm, and a sitting position (133 cm above the ground) in front of the Yumi IRB 14000 dual-arm robot manufactured by ABB (ABB, 2015). The dual-arm robot was presented within an industrial background with an appropriate soundscape to facilitate the HRC context of the study. On average, the video stimulus material presented the respective gesture within a fifteen-second time frame.

3.3.1 Akimbo

The placement of the arms on the hips, which is referred to as Akimbo, is a readiness stance that is regarded as a confident posture among humans (Ball and Breese, 2000). This was difficult to recreate with the robot, since the Yumi IRB 14000 does not have humanoid characteristics and thus no representation of hips was present (Figure 1). As a substitute, the extension of the robot’s supporting surface was used as a reference point for the placement of the Akimbo gesture to imitate the human posture as closely as possible. The dual-arm configuration started from the robot’s initial position and maintained a steady upwards trajectory before shifting position midway towards the designated hips of the robot while widening the elbows of the arms outwards to emphasize the dominant position.

FIGURE 1. The Akimbo gesture where both arms of the robot are placed on the “hips” of the dual-arm robot.

3.3.2 Crossing arms

In Human-Human Interaction, the crossing of arms is usually interpreted as a defiant or defensive posture that can indicate that the respective person is denying or disagreeing with the current circumstance or situation (Danbom, 2008). This gesture was chosen because it can indicate stubbornness, uncooperativeness, and dominance, all characteristics that can be detrimental to the collaboration effectiveness between two parties. The crossing arms gesture was designed to imitate the crossing arm posture of a human being based on the work of Straßmann et al. (Straßmann et al., 2016). While the human expression of this body language commonly involves direct contact of both arms while they cross, a direct translation of this gesture to the Yumi IRB 14000 is not possible (cf. Section 5.1 Limitations). To approximate this gesture as closely as possible to its human counterpart, the robot moved from the initial neutral position to a posture in which the arms were aligned vertically along the central body; then the joints that can be seen as analogs of shoulders and elbows rotated inwards so that each arm followed a trajectory ending parallel to the robot’s body, pointing towards the respective opposite side. Although the robot does not actually cross its arms in this stance, from the perspective of the human operator the impression of a crossing arm gesture arises (Figure 2).

FIGURE 2. The crossing arm gesture as illustrated in the stimulus material.

3.3.3 Large arm spread

Robotic arms that spread themselves in the direction of the human operator and violate the proxemics of the respective individual are often considered threatening (Arntz et al., 2020b). Based on the prior work by Straßmann et al. (Straßmann et al., 2016), the large arm spread was conceptualized as a threatening gesture, in which the posture of the robot indicates that the robot claims the available space for itself. To realize this gesture, both arms of the Yumi IRB 14000 started out in their respective neutral position, retracted upright and aligned with the body of the robot. At first, both arms moved simultaneously downwards and forwards in the direction of the observer. After reaching the middle of the body of the robot, the trajectories of both arms diverged outwards, resulting in the final posture of the large arm spread (Figure 3).

FIGURE 3. The large arm spread presented by the Yumi IRB 14000 dual-arm robot.
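
The three gesture descriptions above follow a common waypoint logic: each motion is a sequence of joint-space postures connected by smooth transitions. The sketch below illustrates this with simple linear interpolation; all joint values are hypothetical placeholders, not the angles actually used on the YuMi in the stimulus videos.

```python
import numpy as np

# Hypothetical joint-space waypoints (radians) for one 7-DOF YuMi arm.
GESTURES = {
    # neutral -> arms raised -> "hands" on hips, elbows widened outwards
    "akimbo": [(0.0,) * 7,
               (0.4, -0.6, 0.0, 0.9, 0.0, 0.3, 0.0),
               (0.2, -0.3, 0.5, 1.2, 0.0, 0.6, 0.0)],
    # neutral -> arm rotated inwards, parallel to the body
    "crossed_arms": [(0.0,) * 7,
                     (0.6, -0.2, 0.9, 1.1, 0.3, 0.0, 0.0)],
    # neutral -> arm down and forwards -> spread outwards
    "arm_spread": [(0.0,) * 7,
                   (0.3, 0.5, 0.0, 0.4, 0.0, -0.2, 0.0),
                   (-0.7, 0.8, 0.0, 0.1, 0.0, -0.4, 0.0)],
}

def trajectory(waypoints, steps_per_segment=50):
    """Linearly interpolate between successive joint-space waypoints."""
    path = []
    for a, b in zip(waypoints, waypoints[1:]):
        for t in np.linspace(0.0, 1.0, steps_per_segment):
            path.append((1 - t) * np.asarray(a) + t * np.asarray(b))
    return np.stack(path)

traj = trajectory(GESTURES["akimbo"])
print(traj.shape)  # (100, 7): a dense path a joint controller could execute
```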

4 Results

To investigate the above-mentioned hypothesis and research questions, data collected via the online study were analyzed using multiple mixed-measures ANOVAs with the repeated-measures variables gesture (large arm spread, Akimbo pose, and crossing arms) and viewing perspective (sitting, standing male perspective, and standing female perspective), and participants’ gender (binary: male and female) as between-subjects factor. For the repeated measures, the assumption of sphericity was checked using Mauchly’s test (Mauchly, 1940). If this assumption was violated, corrected results are reported; since the Greenhouse-Geisser epsilon was above 0.75 in all cases (Geisser and Greenhouse, 1958), the Huynh-Feldt correction was used. Significant effects were further investigated with post-hoc tests using Bonferroni correction. As participants were not forced to answer all items in the questionnaire, the sample sizes of the analyses vary between the dependent variables. However, in each of the cases, the full data set comprised at least 50 male and 50 female subjects. Subsequently, the results of these analyses are reported for all dependent variables.
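
As an illustration of this analysis pipeline, the following sketch uses the Python package pingouin (assumed ≥ 0.5); column and file names are hypothetical. For brevity it tests only the gesture factor against the gender between-subjects factor, since pingouin does not fit two within-subjects factors plus a between-subjects factor in a single call; the full design would require, e.g., afex in R.

```python
import pandas as pd
import pingouin as pg

# Long-format ratings with hypothetical columns:
# id, gender, gesture, perspective, dominance
df = pd.read_csv("ratings_long.csv")

# Average over perspectives so gesture is the single within factor here.
gest = df.groupby(["id", "gender", "gesture"], as_index=False)["dominance"].mean()

# Mauchly's test of sphericity for the gesture factor
print(pg.sphericity(gest, dv="dominance", within="gesture", subject="id"))

# Mixed ANOVA: gesture (within) x gender (between); effsize="np2" reports
# partial eta squared, correction="auto" applies a sphericity correction
# when Mauchly's test is violated.
aov = pg.mixed_anova(gest, dv="dominance", within="gesture", subject="id",
                     between="gender", correction="auto", effsize="np2")
print(aov[["Source", "F", "p-unc", "np2"]])

# Bonferroni-corrected post-hoc comparisons between the three gestures
post = pg.pairwise_tests(gest, dv="dominance", within="gesture",
                         subject="id", padjust="bonf")
print(post[["A", "B", "p-corr"]])
```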

4.1 Anthropomorphism

A main effect of the gesture (F(2,206) = 11.85, p < .001, ηp² = .10) and of the viewing perspective (F(1.93, 199.09) = 3.30, p = .041, ηp² = .03) on the perceived anthropomorphism of the robot occurred. To investigate the differences between the robot’s gestures, post-hoc analyses with Bonferroni correction were used. Results indicate that the Akimbo pose (M = 2.27, SE = 0.08) was perceived as less anthropomorphic than the large arm spread gesture (M = 2.50, SE = 0.09, p = .016) and a robot that crosses its arms (M = 2.64, SE = 0.18, p < .001). The effect of the viewing perspective disappeared in the post-hoc analyses: the perceived anthropomorphism did not differ significantly between the three viewing perspectives. No significant interaction between gesture and viewing perspective was found. Participants’ gender had no effect on the perceived anthropomorphism of the robot (F(1,103) = 1.89, p = .171, ηp² = .02), nor were there any significant interaction effects between gender and the repeated-measures variables. Detailed results of the mixed-measures ANOVA are reported in Table 1.
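
As a consistency check, the reported effect sizes can be recovered from the F statistics and their degrees of freedom; for the gesture effect above:

```latex
\eta_p^2 = \frac{SS_\text{effect}}{SS_\text{effect} + SS_\text{error}}
         = \frac{F \cdot df_1}{F \cdot df_1 + df_2}
         = \frac{11.85 \cdot 2}{11.85 \cdot 2 + 206} \approx .10
```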

TABLE 1. Results of the mixed-measures ANOVA for the perceived anthropomorphism of the robot.

4.2 Animacy

The robot’s perceived animacy was only affected by the displayed gesture of the robot (F(1.94, 211.68) = 23.31, p < .001, ηp² = .18); no effect of the viewing perspective and no interaction effect between both variables occurred. The post-hoc analyses revealed that a robot showing the Akimbo pose (M = 2.46, SE = 0.07) was rated with lower animacy than one presenting the large arm spread gesture (M = 2.95, SE = 0.07, p < .001) or the crossing arm gesture (M = 2.85, SE = 0.07, p < .001). Additionally, the participants’ gender had no significant effect on the perception of the robot’s animacy (F(1,109) = 3.17, p = .078, ηp² = .03), and no interaction effects with participants’ gender were found. See Table 2 for the results of the mixed-measures ANOVA.

TABLE 2. Results of the mixed-measures ANOVA for the perceived animacy of the robot.

4.3 Likability

The mixed-measures ANOVA revealed a significant main effect of the gesture on the perceived likability of the robot (F(2,210) = 7.98, p < .001, ηp² = .07), but no significant main effect of the viewing perspective and no interaction effect. According to the post-hoc results, the robot is perceived as more likable when it presents the large arm spread gesture (M = 3.51, SE = 0.07) compared to the Akimbo pose (M = 3.19, SE = 0.08, p = .004) and the crossing arm gesture (M = 3.16, SE = 0.09, p = .003). Again, there were no interaction effects with participants’ gender, and the perceived likability was in general not affected by participants’ gender, F(1,105) = 0.00, p = .982, ηp² = .00. Please consult Table 3 for the values of the mixed-measures ANOVA.

TABLE 3. Results of the mixed-measures ANOVA for the perceived likability of the robot.

4.4 Dominance

The perceived dominance of the robot is significantly affected by the expressed gesture (F(2,216) = 4.47, p = .013, ηp² = .04) and the viewing perspective (F(2,216) = 4.96, p = .008, ηp² = .04), but no significant interaction effect occurred. Post-hoc results show that the crossing arm gesture (M = 2.93, SE = 0.08) is perceived as more dominant than the Akimbo pose (M = 3.18, SE = 0.07, p = .023); note that higher values indicate lower dominance and higher submissiveness. All descriptive values can be found in Table 4. Moreover, a significant difference between the sitting and the male standing perspective was revealed by the post-hoc analyses: the robot is perceived as more dominant from the sitting perspective (M = 2.98, SE = 0.06) compared to the male standing perspective (M = 3.18, SE = 0.06, p = .008). The female perspective (M = 3.12, SE = 0.07) did not differ in perceived dominance from the male or sitting viewpoint. The participants’ gender did not affect the dominance perception of the robot (F(1,108) = 2.74, p = .101, ηp² = .03), and there were also no significant interaction effects of gender with the other two independent variables. See Table 5 for details.

TABLE 4. Descriptive values of the three different gestures for all dependent variables.

TABLE 5. Results of the mixed-measures ANOVA for the perceived dominance of the robot.

4.5 Hostility

The perceived hostility of the robot is significantly affected by the expressed gesture (F(2,220) = 10.04, p < .001, ηp² = .08) and the viewing perspective (F(2,220) = 3.95, p = .021, ηp² = .04), but no significant interaction effect occurred. According to the post-hoc results, the robot is perceived as more friendly and less hostile when it presents the large arm spread gesture (M = 3.60, SE = 0.07) compared to the Akimbo pose (M = 3.26, SE = 0.07, p = .001) and the crossing arm gesture (M = 3.21, SE = 0.09, p = .001). Here, higher values indicate a higher ascription of friendliness, while lower values indicate attributions towards more hostility. Moreover, a significant difference between the sitting and the male standing perspective was revealed by the post-hoc analyses: the robot is perceived as less friendly from the sitting perspective (M = 3.29, SE = 0.06) compared to the male standing perspective (M = 3.44, SE = 0.06, p = .034). The participants’ gender had no effect on the perceived hostility/friendliness of the robot, F(1,110) = 0.12, p = .725, ηp² = .00, and there was no significant interaction between the participants’ gender and the gesture or viewing perspective. Consult Table 6 for all values of the mixed-measures ANOVA.

TABLE 6. Results of the mixed-measures ANOVA for the perceived hostility of the robot.

4.6 Cooperativeness

The perceived cooperativeness of the robot is significantly affected by the expressed gesture (F(2,220) = 14.48, p < .001, ηp² = .12). No significant main effect of the viewing perspective and no interaction effect occurred. According to the post-hoc results, the robot is perceived as more cooperative when it presents the large arm spread gesture (M = 3.63, SE = 0.08) compared to the Akimbo pose (M = 3.33, SE = 0.08, p = .008) and the crossing arm gesture (M = 3.07, SE = 0.09, p < .001). Also, the Akimbo pose is associated with significantly higher values of cooperativeness than the crossing arm gesture (p = .039). Here, higher values indicate higher attributed levels of cooperativeness. Again, no interaction effects of participants’ gender with the other two independent variables occurred (see Table 7), and gender had no effect on the general perception of the robot’s cooperativeness, F(1,110) = 0.30, p = .583, ηp² = .00.

TABLE 7. Results of the mixed-measures ANOVA for the perceived cooperativity of the robot.

4.7 Threat

The perceived threat of the robot is significantly affected by the expressed gesture (F(2,220) = 7.62, p = .001, ηp² = .07). No significant main effect of the viewing perspective and no interaction effect occurred. Post-hoc tests show that the robot is perceived as more harmless when it presents the large arm spread gesture (M = 3.79, SE = 0.07) compared to the crossing arm gesture (M = 3.41, SE = 0.10, p = .001) and the Akimbo pose (M = 3.66, SE = 0.09, p = .030). Higher values indicate lower perceived threat and higher harmlessness. The perceived threat of the robot was not affected by participants’ gender, F(1,110) = 0.22, p = .641, ηp² = .00. In addition, the effects of the gesture and viewing perspective on the perceived threat were not moderated by gender (see Table 8).

TABLE 8. Results of the mixed-measures ANOVA for the perceived threat of the robot.

4.8 Imagined collaboration with the robot

The imagined collaboration with the robot is significantly affected by the gesture (F(2,220) = 5.96, p = .003, ηp² = .05). No significant main effect of the viewing perspective and no interaction effect occurred. Post-hoc tests show that participants feel more positive about collaborating with the robot when the robot shows the large arm spread gesture (M = 5.24, SE = 0.10) compared to the Akimbo pose (M = 4.95, SE = 0.13, p = .038) and the crossing arm gesture (M = 4.88, SE = 0.12, p = .006). Again, no main effect of participants’ gender as between-subjects factor (F(1,110) = 0.00, p = .984, ηp² = .00) and no interaction effects with the other two variables on the imagined collaboration with the robot were found (consult Table 9).

TABLE 9. Results of the mixed-measures ANOVA for the imagined collaboration with the robot.

5 Summary of the results

To illustrate the pattern underlying the results presented above, Table 10 summarizes the significant effects of all independent variables and their interaction effects on the measured dependent variables. Overall, results indicate that the gestures conducted by the robot and the respective viewing perspective affected the perceived dominance of the robot. The robot is perceived as most dominant from the sitting perspective, whereas the male standing perspective resulted in the lowest attribution of dominance of the three perspectives. This coincides with the theoretical outline presented in Section 2 and renders H1 supported.

TABLE 10. Overview of the significant differences for all dependent variables. High values indicate a high degree of the respective attribute category.

For RQ1, results indicate that the attribution of threat is lowest for the large arm spread gesture. Although it could be expected that the robot’s size and the reach of its widely spread arms would be seen as a potential hazard provoking accidents through unintended collisions, this gesture was instead rated as the most harmless, most likable, and most cooperative of the three gestures. The crossing arm gesture, in contrast, received the highest attributions of dominance, hostility, and threat.

Regarding RQ2, which addressed potential gender differences, also in light of the diverging average heights underlying the viewing perspectives, results indicated that the gender of the participants did not affect the perception of the robot. Moreover, no significant gender differences emerged across the different viewing perspectives or gestures. This implies that, in terms of workplace ergonomics, gestures can be utilized independently of the staff’s gender and viewing perspective without creating detrimental effects for a specific condition that hamper the collaboration procedure.

5.1 Limitations

To contextualize the results, it is essential to address the limitations of this study. A major limitation is that the questionnaire did not ask participants about their actual body height but rather assumed the assigned perspective based on the stated gender of the participants. It is advised that a future study be conceptualized as a lab study in which the actual body height of the participants is considered. Since virtual reality has meanwhile become a valuable methodological approach for studies mimicking future workplace scenarios, see e.g., (Hernoux et al., 2015; Arntz et al., 2021a), participants can be exposed to various human-like gestures portrayed by the robot from different perspectives independent of their actual height. Apart from the presentation of the stimulus material, it is necessary to discuss the execution of the gestures. Restrictions in the kinematics of the robot made slight alterations necessary, combined with the absence of some anatomical characteristics such as the “hips” of the robot. While not completely accurate, the gestures followed the same trajectories as their human counterparts, aiming for an authentic representation of these gestures. In addition to the mere observation of these human-like gestures, participants should execute a shared-task collaboration scenario in conjunction with the robot to further emphasize the context of these gestures. This points to another major limitation of the presented study: considering that participants merely observed the gestures of the robot through the video-based stimulus material, it can be argued that the stimulus material might not induce the same reaction as a study setup in which people are confronted with the real robot. However, apart from the COVID-19-related restrictions on the execution of lab studies, it can be argued that the stimulus material ensured the comparability of the self-reported answers, because every participant was exposed to exactly the same gestures presented from the same perspectives. Additional artifacts such as technical difficulties that might occur when exposing participants to the real robot were thereby avoided. Nonetheless, a future study should refine the approach in a lab study as mentioned before. Another limitation is the neglect of demographic variables beyond age and gender, which prevents further contextualization of the data based on the participants’ background and makes it impossible to rate the sample composition for its applicability to a general population; the participants of this study must therefore be considered a convenience sample. An additional limitation regarding the questionnaire is the omission of the full Godspeed scale. While this resulted in low reliability (cf. Section 3.2), the reduction of the sub-scales was done to prevent the questionnaire from becoming too extensive, since the presented material was already lengthy and a further elongation might have discouraged participants from completing the questionnaire.

6 Conclusion

The usage of gestures in collaboration robots, especially in representations capable of mimicking human-like gestures such as the Yumi IRB 14000 dual-arm robot, is a promising channel to convey situational information and elevate collaboration effectiveness. However, body language is also open to interpretation, as it does not contain a direct message from the sending entity to the receiving entity. Especially for industrial robots, where body language found in humans cannot be recreated as exactly as on distinctly designed social robots, it is important to explore the individual perception of the gestures to evaluate their capability to elicit certain impressions of the robot in the operator. The research presented here marks a first foray into the vast library of human body language expressions that can be translated onto collaborative robots. Upcoming studies should incorporate more gestures that are associated with attributes apart from dominance to explore their effect on people’s perception of the robot. Furthermore, future studies are recommended to embed these gestures into a collaboration procedure with the robot, to investigate the direct ramifications of the usage of these gestures on the collaborative relationship between both parties.

Data availability statement

The raw data supporting the conclusion of this article will be made available by the authors, without undue reservation.

Ethics statement

Ethical review and approval was not required for the study on human participants in accordance with the local legislation and institutional requirements. The patients/participants provided their written informed consent to participate in this study.

Author contributions

AA, CS, and SE contributed equally to the manuscript. Selection of measures, set up of the online study and data analysis were mainly done by CS and SE; technical implementation and stimulus material generation were done by AA and SV with feedback provided by SE and CS.

Funding

This project was supported by the Institute of Positive Computing and partly funded by the Ministry of Culture and Science of the state of North Rhine-Westphalia.

Acknowledgments

The authors would like to express their gratitude to Lara Oldach and Noémi Tschiesche for their support in data collection and analysis. Furthermore, we thank Prof. Dr. Uwe Lesch for providing access to his laboratory as well as all participants of the study.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

[Dataset] ABB (2015). Yumi-irb 14000 | collaborative robot. Available at: https://www.kuka.com/en-gb/products/robotics-systems/industrial-robots/lbr-iiwa.

Ajoudani, A., Zanchettin, A. M., Ivaldi, S., Albu-Schäffer, A., Kosuge, K., and Khatib, O. (2018). Progress and prospects of the human–robot collaboration. Auton. Robots 42, 957–975. doi:10.1007/s10514-017-9677-2

Arntz, A., Eimler, S. C., and Hoppe, H. U. (2021a). A virtual sandbox approach to studying the effect of augmented communication on human-robot collaboration. Front. Robot. AI 8, 728961. doi:10.3389/frobt.2021.728961

Arntz, A., Eimler, S. C., and Hoppe, H. U. (2021b). A virtual sandbox approach to studying the effect of augmented communication on human-robot collaboration. Front. Robot. AI 8, 728961. doi:10.3389/frobt.2021.728961

Arntz, A., Eimler, S. C., and Hoppe, H. U. (2020a). “Augmenting the human-robot communication channel in shared task environments,” in Collaboration technologies and social computing. Vol. 12324 of lecture notes in computer science. Editors A. Nolte, C. Alvarez, R. Hishiyama, I.-A. Chounta, M. J. Rodríguez-Triana, and T. Inoue (Cham: Springer International Publishing), 20–34. doi:10.1007/978-3-030-58157-2_2

Arntz, A., Eimler, S. C., and Hoppe, H. U. (2020b). ““The robot-arm talks back to me” - human perception of augmented human-robot collaboration in virtual reality,” in 2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR) (Utrecht, Netherlands: IEEE), 307–312. doi:10.1109/AIVR50618.2020.00062

Ball, G., and Breese, J. (2000). “Relating personality and behavior: Posture and gestures,” in Affective interactions. Vol. 1814 of lecture notes in computer science. Editors G. Goos, J. Hartmanis, J. van Leeuwen, and A. Paiva (Berlin, Heidelberg: Springer Berlin Heidelberg), 196–203. doi:10.1007/10720296_14

Bartneck, C., Croft, E., and Kulic, D. (2008). Measuring the anthropomorphism, animacy, likeability, perceived intelligence and perceived safety of robots.

Beck, A., Stevens, B., Bard, K. A., and Cañamero, L. (2012). Emotional body language displayed by artificial agents. ACM Trans. Interact. Intell. Syst. 2, 1–29. doi:10.1145/2133366.2133368

Bremner, P., and Leonards, U. (2016). Iconic gestures for robot avatars, recognition and integration with speech. Front. Psychol. 7, 183. doi:10.3389/fpsyg.2016.00183

Buxbaum, H.-J., Sen, S., and Häusler, R. (2020). “Theses on the future design of human-robot collaboration,” in Human-computer interaction. Multimodal and natural interaction. Vol. 12182 of lecture notes in computer science. Editor M. Kurosu (Cham: Springer International Publishing), 560–579. doi:10.1007/978-3-030-49062-1_38

Chase, I., and Lindquist, W. (2009). Dominance hierarchies. Oxf. Handb. Anal. Sociol. 2009, 566–591.

Chung-En, Y. (2018). “Humanlike robot and human staff in service: Age and gender differences in perceiving smiling behaviors,” in 2018 7th International Conference on Industrial Technology and Management (ICITM) (Oxford, UK: IEEE), 99–103. doi:10.1109/ICITM.2018.8333927

Danbom, D. (2008). Perfect body language 21.

Ende, T., Haddadin, S., Parusel, S., Wusthoff, T., Hassenzahl, M., and Albu-Schaffer, A. (2011). “A human-centered approach to robot gesture based communication within collaborative working processes,” in 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems (San Francisco, CA, USA: IEEE), 3367–3374. doi:10.1109/IROS.2011.6094592

Geisser, S., and Greenhouse, S. W. (1958). An extension of Box’s results on the use of the F distribution in multivariate analysis. Ann. Math. Stat. 29, 885–891. doi:10.1214/aoms/1177706545

Hentout, A., Aouache, M., Maoudj, A., and Akli, I. (2019). Human–robot interaction in industrial collaborative robotics: A literature review of the decade 2008–2017. Adv. Robot. 33, 764–799. doi:10.1080/01691864.2019.1636714

Hernoux, F., Nyiri, E., and Gibaru, O. (2015). “Virtual reality for improving safety and collaborative control of industrial robots,” in Proceedings of the 2015 Virtual Reality International Conference (New York, NY, USA: ACM), 1–6. doi:10.1145/2806173.2806197

Kirschner, D., Velik, R., Yahyanejad, S., Brandstötter, M., and Hofbaur, M. (2016). “YuMi, come and play with me! A collaborative robot for piecing together a tangram puzzle,” in Interactive collaborative robotics. Vol. 9812 of lecture notes in computer science. Editors A. Ronzhin, G. Rigoll, and R. Meshcheryakov (Cham: Springer International Publishing), 243–251. doi:10.1007/978-3-319-43955-6_29

Lee, D.-H., Park, H., Park, J.-H., Baeg, M.-H., and Bae, J.-H. (2017). Design of an anthropomorphic dual-arm robot with biologically inspired 8-dof arms. Intell. Serv. Robot. 10, 137–148. doi:10.1007/s11370-017-0215-z

Marshall, P., Bartolacci, A., and Burke, D. (2020). Human face tilt is a dynamic social signal that affects perceptions of dimorphism, attractiveness, and dominance. Evol. Psychol. 18, 147470492091040. doi:10.1177/1474704920910403

Mauchly, J. W. (1940). Significance test for sphericity of a normal n-variate distribution. Ann. Math. Stat. 11, 204–209. doi:10.1214/aoms/1177731915

McColl, D., and Nejat, G. (2014). Recognizing emotional body language displayed by a human-like social robot. Int. J. Soc. Robot. 6, 261–280. doi:10.1007/s12369-013-0226-7

Mumm, J., and Mutlu, B. (2011). “Human-robot proxemics,” in Proceedings of the 6th international conference on Human-robot interaction - HRI ’11. Editors A. Billard, P. Kahn, J. A. Adams, and G. Trafton (New York, New York, USA: ACM Press), 331. doi:10.1145/1957656.1957786

Orford, J. (1994). The interpersonal circumplex: A theory and method for applied psychology. Hum. Relat. 47, 1347–1375. doi:10.1177/001872679404701103

Re, D. E., Lefevre, C. E., DeBruine, L. M., Jones, B. C., and Perrett, D. I. (2014). Impressions of dominance are made relative to others in the visual environment. Evol. Psychol. 12, 147470491401200. doi:10.1177/147470491401200118

Riek, L. D., Rabinowitch, T.-C., Bremner, P., Pipe, A. G., Fraser, M., and Robinson, P. (2010). “Cooperative gestures: Effective signaling for humanoid robots,” in 2010 5th ACM/IEEE International Conference on Human-Robot Interaction (HRI) (Osaka, Japan: IEEE), 61–68. doi:10.1109/HRI.2010.5453266

Rosenthal-von der Pütten, A. M., Straßmann, C., Yaghoubzadeh, R., Kopp, S., and Krämer, N. C. (2019). Dominant and submissive nonverbal behavior of virtual agents and its effects on evaluation and negotiation outcome in different age groups. Comput. Hum. Behav. 90, 397–409. doi:10.1016/j.chb.2018.08.047

[Dataset] Roser, M., Appel, C., and Ritchie, H. (2022). Human height. Our World in Data. Available at: https://ourworldindata.org/human-height.

Si, M., and McDaniel, J. D. (2016). “Using facial expression and body language to express attitude for non-humanoid robot: (extended abstract),” in AAMAS ’16: Proceedings of the 2016 International Conference on Autonomous Agents and Multiagent Systems (Richland, SC: International Foundation for Autonomous Agents and Multiagent Systems), 1457–1458.

Sokolov, A., Krüger, S., Enck, P., Krägeloh-Mann, I., and Pavlova, M. (2011). Gender affects body language reading. Front. Psychol. 2, 16. doi:10.3389/fpsyg.2011.00016

Stanton, C. J., and Stevens, C. J. (2017). Don’t stare at me: The impact of a humanoid robot’s gaze upon trust during a cooperative human–robot visual task. Int. J. Soc. Robot. 9, 745–753. doi:10.1007/s12369-017-0422-y

Straßmann, C., von der Pütten, A. R., Yaghoubzadeh, R., Kaminski, R., and Krämer, N. (2016). “The effect of an intelligent virtual agent’s nonverbal behavior with regard to dominance and cooperativity,” in Intelligent virtual agents. Vol. 10011 of lecture notes in computer science. Editors D. Traum, W. Swartout, P. Khooshabeh, S. Kopp, S. Scherer, and A. Leuski (Cham: Springer), 15–28. doi:10.1007/978-3-319-47665-0_2

[Dataset] Survey, S. (2022). SoSci Survey – the solution for professional online surveys.

[Dataset] Union, E. (2014). Special eurobarometer 382: Public attitudes towards robots. Available at: https://data.europa.eu/euodp/de/data/dataset/S1044_77_1_EBS382.

Keywords: human-robot collaboration, non-verbal communication, dominance, dual-arm robots, industrial robots, online study, workplace ergonomics, human factors

Citation: Arntz A, Straßmann C, Völker S and Eimler SC (2022) Collaborating eye to eye: Effects of workplace design on the perception of dominance of collaboration robots. Front. Robot. AI 9:999308. doi: 10.3389/frobt.2022.999308

Received: 20 July 2022; Accepted: 17 August 2022;
Published: 27 September 2022.

Edited by:

Loris Roveda, Dalle Molle Institute for Artificial Intelligence Research, Switzerland

Reviewed by:

Asad Shahid, Dalle Molle Institute for Artificial Intelligence Research, Switzerland
Marco Maccarini, Dalle Molle Institute for Artificial Intelligence Research, Switzerland

Copyright © 2022 Arntz, Straßmann, Völker and Eimler. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Alexander Arntz, alexander.arntz@hs-ruhrwest.de
