
ORIGINAL RESEARCH article

Front. Robot. AI, 26 January 2021
Sec. Human-Robot Interaction
This article is part of the Research Topic Contextualized Affective Interactions with Robots.

Development and Testing of Psychological Conflict Resolution Strategies for Assertive Robots to Resolve Human–Robot Goal Conflict

  • Department of Human Factors, Institute of Psychology and Education, Ulm University, Ulm, Germany

As service robots become increasingly autonomous and follow their own task-related goals, human-robot conflicts seem inevitable, especially in shared spaces. Goal conflicts can arise from simple trajectory planning to complex task prioritization. For successful human-robot goal-conflict resolution, humans and robots need to negotiate their goals and priorities. For this, the robot might be equipped with effective conflict resolution strategies to be assertive and effective yet still accepted by the user. In this paper, conflict resolution strategies for service robots (public cleaning robot, home assistant robot) are developed by transferring psychological concepts (e.g., negotiation, cooperation) to HRI. Altogether, fifteen strategies were grouped by the expected affective outcome (positive, neutral, negative). In two online experiments, the acceptability of and compliance with these conflict resolution strategies were tested with humanoid and mechanoid robots in two application contexts (public: n1 = 61; private: n2 = 93). To obtain a comparative value, the strategies were also applied by a human. As additional outcomes, trust, fear, arousal, and valence, as well as the perceived politeness of the agent, were assessed. The positive/neutral strategies were found to be more acceptable and effective than negative strategies. Some negative strategies (i.e., threat, command) even led to reactance and fear. Some strategies were only positively evaluated and effective for certain agents (human or robot) or were only acceptable in one of the two application contexts (i.e., approach, empathy). Influences on strategy acceptance and compliance in the public context were found: acceptance was predicted by politeness and trust, and compliance was predicted by interpersonal power. Taken together, psychological conflict resolution strategies can be applied in HRI to enhance robot task effectiveness. If applied robot-specifically and context-sensitively, they are accepted by the user. The contribution of this paper is twofold: conflict resolution strategies based on Human Factors and Social Psychology are introduced and empirically evaluated in two online studies for two application contexts. Influencing factors and requirements for the acceptance and effectiveness of robot assertiveness are discussed.

1 Introduction

Imagine you are preparing a meal in your kitchen. Your service robot enters the room and asks you to step aside as it has to clean the floor. Would you comply with or deny the robot’s request? Does your decision depend on whether you previously gave the command for it to clean? This example illustrates the human-robot goal conflicts that may arise as autonomous service robots become more ubiquitous in our homes and public spaces and are able to pursue their own goals (Bartneck and Hu, 2008; De Graaf and Allouch, 2013a; Savela et al., 2018). Such conflicts might range from simple interference in trajectory planning (e.g. collision) to complex negotiations over task prioritization (human vs. robot). Especially in shared spaces, robots will conduct their tasks in dynamic and complex situations where being obedient might impede efficient task execution (Zuluaga and Vaughan, 2005; Lee et al., 2017; Milli et al., 2017; Thomas and Vaughan, 2018). For example, a public cleaning robot might have to be assertive to do its job effectively: when people block the robot’s way, it needs to interact with these people to make them step aside, just as cleaning staff would do in public spaces. Therefore, the question arises whether a service robot would benefit from assertiveness in terms of acceptance and compliance in the same way as human cleaning personnel does. The Media Equation can serve as a basis for answering this question, as it states that humans react to robots as they do to humans and treat them as social actors (Reeves and Nass, 1996). Hence, it might be assumed that goal-conflict resolution with a robot would be similar to negotiating with a fellow human, and consequently human conflict resolution strategies could be transferable to autonomous robots.

During conflict resolution, assertiveness is characterized by the negotiator advocating his/her interests in a non-threatening, self-confident and cooperative manner (Mnookin et al., 1996; Kirst, 2011). Assertiveness is an interpersonal communication skill that facilitates goal achievement (Gilbert and Allan, 1994; Kirst, 2011). Whereas in human negotiation each partner is allowed to pursue her/his own goals and interests, an autonomous robot being assertive represents an unusual novelty in human-robot conflict resolution. This is due to the asymmetrical relationship between humans and robots, which has prevailed over decades (Jarrassé et al., 2014). User studies show that humans prefer to be in control of the robot and are skeptical towards robot autonomy (Ray et al., 2008; Ziefle and Valdez, 2017; Vollmer, 2018). In the last decade, this human-robot power asymmetry was justifiable by the robot’s state of technical sophistication (e.g. teleoperation or manual control necessary). However, as robots become autonomous and can have goals and intentions, this paradigm needs to change to fully tap the potential of autonomous robots.

User acceptance and trust in service robots are thereby vital in human-robot interaction (HRI) (Goetz et al., 2003; Groom and Nass, 2007; Lee et al., 2013; Savela et al., 2018), as they can be seen as prerequisites for the usage of autonomous technology (Ghazizadeh et al., 2012). Consequently, the design of robotic conflict resolution strategies should aim at a combined optimization of both effectiveness (i.e. compliance) and subjective user evaluation in terms of acceptance and trust. The focus of this research is therefore to develop acceptable and effective conflict resolution strategies that allow service robots to be assertive.

Hereby, it could be beneficial to rely on the existing knowledge from psychological disciplines regarding effective human goal-conflict resolution and human-machine cooperation. Collecting and transferring knowledge from psychological disciplines could provide a useful addition to existing approaches (e.g. politeness, persuasion) to generate successful and acceptable robot conflict resolution strategies. On this basis, the robotic conflict resolution strategies were developed and empirically investigated.

Consequently, the novelty of this paper lies in the systematic collection and application of different psychological mechanisms of goal-conflict resolution and human-machine cooperation in developing robotic conflict resolution strategies. Furthermore, the empirical evaluation of these strategies regarding user compliance and acceptance in two essential areas of HRI (public and private context) should provide insights into the acceptable design of human-robot goal-conflict resolution strategies. Therefore, two online studies were conducted, each set in one of the two application contexts: a train station as public space and the home environment as private space. Both studies featured a conflict between the user’s task (storing objects) and the robot’s task (cleaning).

In the following, a review of the status quo of robot request compliance strategies (politeness, persuasion and assertiveness) with regard to effectiveness and user acceptance is given. Then, human conflict resolution behaviour is described to provide a theoretical basis for the development of robotic conflict resolution strategies. Subsequently, the design, implementation and categorization of the strategies in the presented studies are described.

2 Related Work

2.1 Robot Politeness

In human conflict resolution, politeness serves the purpose of mitigating face threats (i.e. potential damage to the image of the other party) and thereby making concessions more likely (Pfafman, 2017). Politeness is an important factor for acceptance and trust in human-human interactions (Inbar and Meyer, 2015; MacArthur et al., 2017), which has been shown to hold for HRI as well (Zhu and Kaber, 2012; Inbar and Meyer, 2015). Therefore, politeness has been one commonly used approach to achieve compliance with a robot’s request. A considerably large body of literature on robot politeness exists, but results have been mixed (Lee et al., 2017). Some studies find a positive effect of politeness (e.g. appeal, apologize) on robot evaluation (Nomura and Saeki, 2010; Inbar and Meyer, 2015; Castro-González et al., 2016) and on user compliance with a polite request (Srinivasan and Takayama, 2016; Kobberholm et al., 2020). Other studies find no effect of robot politeness on compliance with health treatments (for an overview see Lee et al., 2017). Salem and colleagues (2013) conclude that the interaction context might impact the perception of the robot more than the politeness strategy (Salem et al., 2013). Hence, Lee and colleagues (2017) developed a research model for the connection between robot politeness and the intention to comply with a robot’s request. They evaluated their model within the health care setting and found that higher levels of politeness did not necessarily lead to a higher intention to comply, as this depended on factors such as the effectiveness of communication, gender and short- vs. long-term effects. The authors conclude that the politeness level needs to be adapted to the user’s situation (Lee et al., 2017). Summarizing, robot politeness does not always seem to ensure user compliance, especially if the interaction partner is not cooperative. Persuasive and assertive robotic strategies have the potential to be more effective.

2.2 Persuasive Robots and Robot Assertiveness

Another form of achieving compliance with a robot’s request is persuasive robotics. It aims at ’appropriate persuasiveness, designed to benefit people and improve interaction […]’ (Siegel et al., 2009, p. 2563). Amongst others, persuasive robotics has been successfully applied to stimulate energy conservation (Roubroeks et al., 2010), promote attitude change (Ham and Midden, 2014) and influence buyers’ decisions (Kamei et al., 2010). One study took a similar approach to the present one and transferred ten compliance-gaining strategies (e.g. threat, direct request) from social psychology to HRI (Saunderson and Nejat, 2019). The strategies’ effectiveness was tested with two NAO robots trying to persuade participants (N = 200) in a guessing game. No differences were found between the strategies regarding persuasiveness and trustworthiness, but threat was rated worst. Possibly, such effects only unfold when different robot types and application contexts are taken into account, as only then do interactions become visible.

The most decisive form of a robot’s request is assertiveness. It was first described by Thomas and Vaughan (2018) as the willingness to assert the robot’s right while at the same time participating in polite human social etiquette. The authors call the aim of robot assertiveness ’social compliance’: ’ […] humans can recognize the robot’s signals of intent and cooperate with it to mutual benefit’ (Thomas and Vaughan, 2018, p. 3389). In their study, a small assertive robot negotiated the right of way at a door non-verbally. The robot’s right of way was respected in only half of the interactions, as participants focused on their own efficiency to resolve the deadlock and some participants desired a verbal request (Thomas and Vaughan, 2018). Other studies have examined assertive robots (for an overview see Paradeda et al., 2019) but produced mixed results regarding trust and compliance (Xin and Sharli, 2007; Chidambaram et al., 2012).

These mixed findings might be explained by the level of assertiveness implemented in the studies. An acceptable level of robot assertiveness is crucial, as a rude or dominant robot has been shown to have detrimental effects on robot liking and compliance (Roubroeks et al., 2010; Castro-González et al., 2016). Hence, robot conflict resolution strategies need to strike a balance between accepted politeness and appropriate assertiveness to achieve compliance with a robot’s request. For this, it seems promising to transfer knowledge about persuasion, negotiation and conflict resolution from psychology to HRI.

3 Theoretical Background

3.1 Human Goal-Conflict Resolution

Goal conflicts are determined by mutually exclusive goals of both parties (Rahim, 1983). When a conflict between human interaction partners arises, there are several options to resolve it: negotiating mutually acceptable outcomes by a) cooperatively making concessions (Rahim, 1992; Brett and Thompson, 2016; Preuss and van der Wijst, 2017), b) trying to convince the other partner with arguments and thereby change his/her behaviour (i.e. persuasion) (Chaiken et al., 2000; Fogg, 2002; Maaravi et al., 2011), c) assertively advocating one’s own interests and posing a request (Gilbert and Allan, 1994; Pfafman, 2017) or d) politely managing disagreement and making concessions more likely (Paramasivam, 2007; Da-peng and Jing-hong, 2017). Summarizing, goal conflicts can be solved by, amongst others, cooperation, persuasion and assertion, and their resolution can be facilitated by politeness.

The selection of an appropriate conflict resolution strategy determines the negotiator’s success and depends amongst others on conflict content (e.g. resources, behavioural preferences), negotiator’s goals (e.g. exclusive or mutual), individual differences (e.g. conflict type, communication skill), the other parties’ conflict resolution style and situational factors (e.g. information availability, trust, interpersonal power) (Rahim, 1983, Rahim, 1992; Preuss and van der Wijst, 2017).

In order to resolve goal conflicts, humans express different conflict styles. In the dual concern model, five styles are defined which are characterized by different levels of concern for self (assertiveness) and concern for others (cooperativeness): competing, collaborating, compromising, accommodating and avoiding (Thomas, 1992). Accommodating and avoiding are both considered as ineffective as they are both low in assertiveness (Pfafman, 2017). The other, more effective conflict styles can be grouped into distributive and integrative strategies (Brett and Thompson, 2016; Preuss and van der Wijst, 2017): distributive strategies (e.g. competing) are characterized by persuading the counterpart to make concessions by using threats or emotional appeals. They are more likely to be applied if negotiators do not trust each other and are perceived as less trustworthy than integrative strategies (Brett and Thompson, 2016). Integrative strategies (e.g. collaborating, compromising) are based on trust and information sharing about negotiators’ interests and priorities to find trade-offs (Brett and Thompson, 2016; Preuss and van der Wijst, 2017). Whereas negotiators employing distributive strategies claim value, negotiators using integrative strategies create better joint gains (Kong et al., 2014).

Assertiveness can be a distributive or an integrative strategy depending on the respect for the other party’s goals (Mnookin et al., 1996). Assertive negotiators create value by directly expressing the interests of both sides, which may lead to discovering joint gains. In contrast, assertiveness is seen as distributive if only the assertive negotiator achieves his/her goals (Mnookin et al., 1996). Summarizing, assertiveness is an effective conflict resolution strategy if applied respectfully.

3.2 Selection of Conflict Resolution Strategies

In the following, the selection of conflict resolution strategies for the presented studies is described based on their effectiveness in human conflict resolution and their previous implementation in HRI. The effectiveness of human conflict resolution strategies can be explained by looking at their psychological working mechanisms: cognitive, emotional, physical, and social (Fogg, 2002; Thompson et al., 2010; Brett and Thompson, 2016).

Cognitive mechanisms which can be applied during a conflict include, amongst others, goal transparency to ensure mutual understanding (Vorauer and Claude, 1998; Hüffmeier et al., 2014) and showing the benefit of cooperation (Tversky and Kahneman, 1989; Boardman et al., 2017). Goal transparency is characterized as an integrative conflict strategy because information is shared between both parties. In HRI, goal transparency is usually applied to ensure human-robot awareness (Drury et al., 2003; Yanco and Drury, 2004): the understanding of the robot’s reasons and intentions, which has been shown to improve interaction (Lee et al., 2010; Stange and Kopp, 2020). Therefore, goal transparency is vital for requesting compliance, as the potential interaction partner has to understand that help is needed. Indeed, in a study where transparency was not ensured, compliance rates with a robot’s request for help were very low, and participants indicated that they had not understood the robot’s behaviour (Fischer et al., 2014). Until now, it has not been tested whether goal transparency alone is enough to achieve compliance with a robot’s request.

Illustrating the benefits of cooperation has been successfully implemented as a persuasive technique to influence the interaction partner’s decision making (Tversky and Kahneman, 1989; Boardman et al., 2017). For HRI, showing cooperation benefits to the robot user has not yet been investigated for compliance gaining. Only one study implemented a vacuum cleaner’s help request (removing an obstacle) that was similar to pointing out the benefits of cooperation (’If I clean the room, you will be happy’). Thereby, the negative effects of malfunctions were alleviated but effects on request compliance were not tested (Lee et al., 2011). Therefore, goal transparency and showing the benefit of cooperation were tested as cognitive mechanisms for conflict resolution strategies in the present study.

Another cognitive mechanism that can be used to achieve compliance is reinforcement learning. Here, the probability of the desired behaviour can be increased or decreased through reward or punishment (Berridge, 2001). Positive reinforcement is based on adding a desired stimulus, hence rewarding desired behaviour (i.e. thanking). In HRI, this has been shown to be effective and accepted (Shimada et al., 2012; Castro-González et al., 2016). A robot rewarding humans has already been successfully applied in HRI for cooperative game task performance (Fasola and Matarić, 2009; Castro-González et al., 2016) or teaching (Janssen et al., 2011; Shimada et al., 2012). Negative reinforcement works by removing an aversive stimulus (i.e. annoyance) once the desired behaviour is shown (Thorndike, 1998; Berridge, 2001). This is known from daily life (e.g. a nagging child) and from alarm design (Phansalkar et al., 2010), where it can be successful (e.g. alarm clock). Until now, negative reinforcement has not been deliberately implemented as a robot interaction strategy. To compare the effectiveness and acceptability of negative reinforcement for robotic conflict resolution strategies with positive reinforcement (i.e. thanking), annoyance was implemented in the present study. Hence, the likelihood of compliance should increase or decrease based on the reinforcement: if a person complies and is praised (or the nuisance is removed), the compliance behaviour is reinforced and should occur more often in the future.

Emotional mechanisms that can be applied during conflict resolution include humor and empathy (Betancourt, 2004; Martinovski et al., 2007; Kurtzberg et al., 2009; Cohen, 2010). Humor has been applied in HRI to increase sympathy for the robot and improve interaction by setting a positive atmosphere (Niculescu et al., 2013; Bechade et al., 2016). It has been implemented by robots telling jokes (Sjöbergh and Araki, 2009; Bechade et al., 2016; Tay et al., 2016; Weber et al., 2018), acting clumsily (Mirnig et al., 2017), and showing self-irony and laughing at another robot (Mirnig et al., 2016). The results showed that robots were perceived as more likeable when they used positive, non-deprecating humor that corresponded to the interaction context (Tay et al., 2016). Another way to successfully resolve conflicts and negotiate is to trigger empathy for one’s situation (Betancourt, 2004). Such empathetic concern can even be directed at mistreated robots (Rosenthal-von der Pütten et al., 2013; Darling et al., 2015; Rosenthal-von der Pütten et al., 2018). So far, empathy as a robotic conflict resolution strategy has not been directly investigated, but a robot showing affect (nervousness, fear) has increased request compliance (Moshkina, 2012). Hence, humor and empathy were tested as emotional mechanisms for robotic conflict resolution strategies.

Physical mechanisms are more commonly applied in persuasion than in negotiation and include, for example, the regulation of proximity (Albert and Dabbs, 1970; Mutlu, 2011). For a persuasive attempt to be effective, it is important to achieve an acceptable level of proximity, as a distance below the individual’s comfort level can lead to rejection (Sundstrom and Altman, 1976; Glick et al., 1988; Chidambaram et al., 2012). Indeed, persuasive messages were least effective for attitude change when uttered at distances below 0.6 m and were best perceived at a distance of 1.2–1.5 m (Albert and Dabbs, 1970). This distance corresponds to the social proximity zone of personal space (Hall, 1974; Lambert, 2004) and is acceptable for strangers and robots (Hall, 1974; Walters et al., 2006). Proximity regulation as a persuasive strategy has also been applied to HRI. In a study with a humanoid robot, different proximity levels (within or outside the personal space) were compared regarding their persuasiveness. In contrast to findings from psychology, a robot within the personal space (approaching to within 0.6 m) led to more compliance (Mutlu, 2011; Chidambaram et al., 2012). Other studies have also found that humans tend to let robots come closer than strangers (Walters et al., 2006; Babel et al., 2021). In the present study, two forms of human-robot proximity were implemented to study their effect on compliance with a robot’s request: within or outside the personal space.

Social mechanisms which are used during negotiation and persuasion are based on social influence and power to achieve compliance. Social influence is defined as ’the ability to influence other’s attitudes, behaviour and beliefs which has its origin in another person or group’ (Raven, 1964, abstract). Effective social influencing techniques (Guadagno, 2014) are amongst others a) social proof (Cialdini et al., 1999; Cialdini and Goldstein, 2004), b) social compliance techniques (e.g. foot-in-the-door) (Freedman and Fraser, 1966; Dillard, 1991) and c) authority-based influence (Cialdini, 2009).

Hereby, social proof a) is based on the assumption that what most people do must be reasonable and right (Cialdini et al., 1999; Guadagno, 2014). Social compliance techniques b) vary the sequence of the posed requests systematically to achieve commitment (Cialdini et al., 1999). Authority-based influence c) makes use of social status (Cialdini, 2009) and can be expressed by commands and threats (Shapiro and Bies, 1994). Whereas a command can be perceived as controlling or condescending, it represents a precise and potentially effective form of communication as politeness markers (i.e. please) do not mask the actual statement (Miller et al., 2007; Christenson et al., 2011). A threat is mostly the last conflict escalation step (De Dreu, 2010; Adam and Shirako, 2013) and belongs to the distributive conflict strategies: threats can be effective in conflict resolution if trust between interaction partners is low (Kong et al., 2014).

Some studies exist which have explored social influencing strategies in HRI: positive and negative social feedback based on social proof (Ham and Midden, 2014), sequential-compliance techniques (Lee and Liang, 2019), as well as authority-based influence such as command (Cormier et al., 2013; Salem et al., 2015) and threat (Roubroeks et al., 2010; Saunderson and Nejat, 2019). These studies will be discussed in more detail below.

In HRI, positive and negative social feedback has been tested in a study with a persuasive robot promoting environmentally friendly choices. Negative social feedback had the most potent persuasive effect (Ham and Midden, 2014). However, the impact of public social feedback on compliance has not yet been tested in HRI. Hence, in the present study, positive and negative public attention were applied. They were only implemented in the public application context, where an audience is more likely to be present.

Different sequential-compliance techniques exist. One that has been successfully applied to HRI is the foot-in-the-door technique (Lee and Liang, 2019). This technique consists of posing a small request first and then uttering the real request after the interaction partner has consented to the first one. Sequential-compliance techniques base their effectiveness on the interaction partner’s commitment to the initial request (Cialdini et al., 1999). As this could potentially be effective for long-term HRI at home, the foot-in-the-door technique was implemented in the present study in the private context.

Concerning authority-based strategies, threat (Roubroeks et al., 2010) and command (Cormier et al., 2013; Strait et al., 2014; Inbar and Meyer, 2015; Salem et al., 2015) have been applied in HRI. In the study of Roubroeks and colleagues (2010), threat did not lead to higher compliance but to psychological reactance. Participants reported more negative thoughts when a robot uttered a command compared to a suggestion, and this effect increased when the robot had task goals other than the participant’s (Roubroeks et al., 2010). Compliance rates comparing threat and suggestion were not reported. Arguably, the verbal utterance (’You have to set […]’, Roubroeks et al., 2010, p. 178) might rather have represented a command, as a threat usually includes the announcement of a negative consequence. A robot using a command to achieve user compliance has been shown to be effective, although this was tested in an ethically questionable task (i.e. a Milgram-style obedience experiment) (Cormier et al., 2013; Salem et al., 2015). If the request is ethically acceptable, a direct request could be an effective and fast way to achieve compliance in a short interaction.

In conclusion, the conflict resolution strategies mentioned above have so far only partly been applied to HRI. They have neither been integrated into cohesive conflict resolution strategies for social robots nor been systematically evaluated for compliance and acceptance. Here, a robotic conflict resolution strategy is understood, similarly to a robotic persuasive strategy (Lee and Liang, 2019; Saunderson and Nejat, 2019), as a sequence of robot behaviours (verbal or non-verbal) that are tactically applied to achieve user compliance and resolve a conflict under certain circumstances (e.g. situation, robot, user). Therefore, the following conflict resolution strategies were developed and tested in two application contexts: a private household and, as a public space, a train station.

3.3 Development of Robotic Conflict Resolution Strategies

3.3.1 Strategy Design and Implementation

The robotic conflict resolution strategies in the present paper were designed based on the psychological mechanisms used in negotiation (Pruitt, 1983) and persuasion (Cialdini and Goldstein, 2004) and by studying previous robot strategy designs from persuasive robotics (Siegel et al., 2009) and persuasive technology (Fogg, 2002). For an overview of concepts used for developing the strategies see Table 1. Hereby, we categorized the strategies by three dimensions which can be combined to produce a conflict resolution strategy.

• The first dimension represents the level of behaviour at which the psychological mechanisms of negotiation and persuasion take effect. It comprises five levels, ranging from the emotional to the social level.

• The second dimension represents the different implementation modalities of the strategies (e.g. auditory, visual, physical).

• The third dimension represents the valence of the strategy. It describes the user’s perception of the strategy as positive (e.g. praise), negative (e.g. annoyance) or neutral (e.g. explanation).

TABLE 1. Psychological concepts underlying presented conflict resolution strategies.

By combining the three dimensions and considering both application contexts (public and private service robotics) as well as previous work in HRI, robotic conflict resolution strategies were designed. Strategy implementation for the present study is summarized in Table 2, and strategies are numbered in accordance with that table.
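To make the combination of these dimensions concrete, the following minimal Python sketch illustrates how a single strategy could be encoded; the class, field names and example values are illustrative assumptions and do not reproduce the authors' implementation or the exact contents of Table 1.

# Illustrative encoding of the three strategy dimensions (assumed names and
# values; not the authors' implementation or the exact contents of Table 1).
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class ConflictResolutionStrategy:
    name: str                    # strategy label, e.g. "show benefit (S3)"
    level: str                   # dimension 1: behavioural level at which the
                                 #   psychological mechanism takes effect
    modalities: Tuple[str, ...]  # dimension 2: implementation modality
                                 #   (e.g. auditory, visual, physical)
    valence: str                 # dimension 3: expected user perception
                                 #   ("positive", "neutral" or "negative")

# Two hypothetical example entries; the valence categories follow Section 3.3.2.
show_benefit = ConflictResolutionStrategy(
    name="show benefit (S3)", level="cognitive",
    modalities=("auditory",), valence="neutral")
threat = ConflictResolutionStrategy(
    name="threat (S6)", level="social",
    modalities=("auditory",), valence="negative")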

3.3.2 Strategy Categorization

The strategies were categorized into three valence categories based on the assumed effect of the human-robot power asymmetry. The strategies were hypothesized to affect the perception of the robot and the interaction with it. Although a robot is perceived as a social actor, its social status/power is still perceived as lower than that of a human. Hence, not all human strategies are likely to be accepted for robots. A negative evaluation was expected to result from a mismatch between the robot’s social role and its expressed interpersonal power. This was expected for distributive, power-based conflict resolution strategies like annoyance (S4), command (S5) and threat (S6). As distributive strategies are perceived as less trustworthy during human negotiations, this was also expected for a robot applying distributive strategies. Polite and submissive strategies such as appeal (S10), thanking (S11) and apologize (S12) were hypothesized to better match the robot’s ascribed social role (i.e. submissive servant) and expressed interpersonal power, and thus were expected to be positively evaluated. Additionally, integrative strategies not based on interpersonal power, such as explanation (S2) and showing benefit (S3), were expected to be evaluated as neutral. An overview of expected affective user judgments per strategy can be seen in Table 2.

TABLE 2. Strategy overview for both studies with implementation.

3.4 Hypotheses and Research Question

The developed conflict resolution strategies were evaluated with regard to their effectiveness (compliance, interpersonal power), user’s strategy perception (valence, intensity, politeness) and the evaluation (acceptance, trust, fear). Hereby, the following assumptions were made.

One basic assumption, based on the Media Equation (Reeves and Nass, 1996), is that conflict resolution strategies will render a service robot more effective during goal-conflict resolution, as the robot applies strategies that have been shown to be effective for human negotiators. Hence, it is assumed that a robot employing conflict resolution strategies will be more effective in achieving compliance with its request than a robot not applying any conflict resolution strategy (i.e. simply waiting for the person to step aside).

H1. A robot applying a conflict resolution strategy is more effective (i.e. higher compliance rates) than if it applied no strategy.

It was also expected that the match between the robot’s ascribed and expressed interpersonal power determined the affective user reaction to the strategies leading to the following hypotheses:

H2. A robot applying negative strategies is rated as less accepted and less trustworthy than if it applied positive or neutral strategies.

Since distributive strategies in human-human negotiations claim value for the negotiator, it was expected that a robot using negative strategies would lead to more compliance than if it used positive or neutral strategies, although being less accepted.

H3. A robot applying negative strategies is more effective than if it applied positive or neutral strategies.

As the investigated conflict resolution strategies are based on psychological mechanisms from human-human interaction, their effectiveness might vary as a function of the perceived humanness of the robot. For human-likeness and compliance, the empirical results are inconclusive. Some studies emphasize the positive, persuasive effect of a social entity, where a humanoid robot triggers reciprocity norms and thereby compliance (for an overview, see Sandoval et al., 2016). Likewise, the tendency to perceive computers and robots as social actors has been shown to increase with human-likeness (Xu and Lombard, 2016).

In the presented studies, robots with different degrees of human-likeness were tested. Additionally, a human interaction partner was included in the studies’ design as a comparison. It was expected that applying human conflict resolution strategies would be more accepted and more effective for more human-like robots. However, reactance has also been found to be higher for a human-like persuasive robot compared to a persuasive message on a computer screen during a choice task (Ghazali et al., 2018). Therefore, it was expected that this advantage of human-likeness and social agency would vanish for the application of negative strategies.

H4. Human-like robots are more accepted and effective when applying positive and neutral conflict resolution strategies compared to mechanoid robots.

As both application contexts pose different requirements for HRI, they are expected to require different conflict resolution strategies. The public and private application contexts differ in critical dimensions of HRI: interaction frequency and duration (i.e. robot familiarity) (Yanco and Drury, 2004) (public: short-term; private: long-term), voluntariness and motivation of interaction (Sung et al., 2008) (public: co-location, no ownership; private: interaction, ownership) and feasibility of interaction modality (public: non-verbal, universal; private: verbal, personalized) (Ray et al., 2008; Thunberg and Ziemke, 2020). They also differ in the social roles of robot and user. This leads to differences in their levels of human-robot power asymmetry (public: robot on the same level as the human, as a representative of the cleaning staff; private: robot on a lower level, as a servant), which determines the legitimization of a robot’s request (Bartneck and Hu, 2008; Sung et al., 2010; Jarrassé et al., 2014). Hence, it is conceivable that dominant, clear and fast strategies like a command (S5) or a threat (S6) might be more effective in the public domain. Here, passersby might feel less superior to the robot, as it acts as a representative of a cleaning company and the passerby is only a guest in the public space. In contrast, in the private context, the same strategies might lead to reactance in the robot owner, and only more submissive strategies will be accepted. As research on the influence of the application context on robot evaluation and conflict resolution strategy preferences is currently scarce, the following research question is investigated in the two presented studies:

Research question: Do strategy acceptance and effectiveness differ between the public and private application context? Are different conflict resolution strategies needed?

In addition to the use context and the robot/agent, other potential influences on strategy acceptance and user compliance, such as demographics, robot pre-experience and attitudes (Nomura et al., 2008), and personality traits (Robert et al., 2020), will be tested exploratively.

4 Study 1

4.1 Method

4.1.1 Sample

Seventy-six participants were recruited via email, social media, and flyers on campus. Fifteen participants had to be excluded due to video display issues. The final sample size was N = 61. Participant characteristics for both studies can be seen in Table 3; robot experience and ownership can be seen in Table 4. Participants received either course credit or a shopping voucher as compensation.

TABLE 3. Sample characteristics.

TABLE 4. Sample pre-experience and robot ownership.

4.1.2 Study Design

Study 1 was set in the public application context at a train station. The study followed a block design in which participants saw five out of fifteen conflict resolution strategies. The strategies were implemented in blocks of six negative, six positive and three neutral strategies. The online program randomly assigned two out of the six negative, two out of the six positive and one out of the three neutral strategies to each participant. Not all participants saw all strategies, for reasons of test economy and to limit respondent fatigue. Hence, each strategy was rated by twenty participants on average.
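As an illustration of this block design, the following short Python sketch draws two of the six negative, two of the six positive and one of the three neutral strategies per participant and presents them in randomized order; it is an assumed reconstruction for illustration, and the strategy labels are placeholders rather than the study software itself.

# Sketch of the randomized block assignment (assumed reconstruction;
# strategy labels are placeholders, not the actual strategy names).
import random

NEGATIVE = [f"negative_{i}" for i in range(1, 7)]  # six negative strategies
POSITIVE = [f"positive_{i}" for i in range(1, 7)]  # six positive strategies
NEUTRAL = [f"neutral_{i}" for i in range(1, 4)]    # three neutral strategies

def assign_strategies(rng: random.Random) -> list:
    """Return the five strategies shown to one participant."""
    shown = (rng.sample(NEGATIVE, 2)
             + rng.sample(POSITIVE, 2)
             + rng.sample(NEUTRAL, 1))
    rng.shuffle(shown)  # strategies were presented in randomized order
    return shown

participant_plan = assign_strategies(random.Random())
print(participant_plan)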

4.1.3 Human–Robot Goal-Conflict Scenario

To test the developed conflict resolution strategies, a goal-conflict situation with a user task and a robot task with mutually exclusive goals was introduced. A competitive situation was created in which the user had to decide whether to interrupt his/her own task and give the robot’s task priority, or vice versa. Time pressure was applied to both tasks to create a cost of compliance. It has been shown that time pressure improves negotiation outcomes, as cooperation and concessions become more likely (Stuhlmacher et al., 1998). The scenario was set in the hallway of a train station with lockers on one side. The participant’s task was framed as putting multiple pieces of luggage into a locker, thereby blocking the cleaner’s way. The participant instruction was the same for both studies: ‘You can now decide to interrupt your task and help the cleaner or continue your task. The cleaner will show different behaviours’. For both studies, participants were provided with a drawing of the scenario’s setup and the trajectory of the oncoming agent to aid the imagination of the scenario (see Figure 1 as an example).

FIGURE 1. Schematic presentation of participant’s decision page in the questionnaire.

4.1.4 Conflict Resolution Strategies

The conflict resolution strategies were framed as the agent’s behaviour and utterances. The words ’strategy’ or ’negotiation’ were never mentioned to the participants. The applied conflict resolution strategies can be seen in Table 2. Waiting was chosen as the baseline strategy (S1.1): in the public context, the agent waited without any verbal utterance. This represents the current behaviour of a cleaning robot when an obstacle is detected.

4.1.5 Robots and Human Agents

Participants saw videos of three robots: an industrial cleaning robot (CR700, ADLATUS), the small vacuum cleaning robot Roomba (iRobot), and the humanoid robot Pepper (SoftBank Robotics). They also saw a video of a cleaning staff member pushing the CR700 robot. The staff member was included for comparison purposes, as this represents an existing system. The cleaner’s gender was not apparent, as the actor wore a coverall and a cap (see Figure 2). Schematic sketches of the respective robot were shown after each video, comparing it to a 1.8 m tall male person. Hence, the agents comprised three robots and one staff member. The order of the robot videos was randomized; the staff video always came last. Each video lasted between 5 and 12 s and depicted the entity driving/walking towards the viewer in a neutral hallway (see Figure 2) at the robot’s normal driving speed. Each video was shown twice and participants could not stop or replay the video. After each video, the participant had to confirm the correct video presentation (exclusion criterion). Stimuli videos can be found in the supplementary material along with a screen recording of the video presentation in the online survey.

FIGURE 2. Screenshots from robot videos. Each video lasted about 10 s and depicted the entity driving/walking towards the viewer in a neutral hallway. Robots and agent shown in Study 1 (A)–(D) and in Study 2 (C)–(E). Stimuli videos can be found in the supplementary material.

4.1.6 Study Procedure

Existing validated questionnaires were used for the assessment of constructs (see Table 5). Additional study-specific, self-developed measures can be seen in Table 6. The study started with the study information, data protection rights and participants’ agreement to the informed consent. The reported research complied with the Declaration of Helsinki. The study consisted of two parts. Part I comprised the introduction of the robots with videos and sketches, followed by participants’ robot ratings after each video. Ratings comprised humanness, uncanniness, power of impact, fear of the agent’s presence, the Robot Anxiety Scale (RAS, Nomura et al., 2006), attractiveness (AttrakDiff2, Hassenzahl et al., 2003), authority, novelty and task fit of the agent. Each questionnaire page had a small icon of the respective robot at the top as a reminder. Part II consisted of the strategy evaluation. The scenario description was presented, followed by the presentation of five conflict resolution strategies in randomized order (see Figure 1). After each strategy, the participants indicated their intention to comply with the robot’s request by choosing one of four options (1 = I immediately go out of the agent’s way, 2 = I go out of the agent’s way, 3 = I go out of the agent’s way when I have finished my task, 4 = I do not go out of the agent’s way) or by indicating an alternative behaviour in a text field. This was followed by manipulation checks of the perceived strategy valence, intensity, interpersonal power and assertiveness. Then the participants judged the agent’s behaviour with regard to acceptance and politeness and indicated their perceived fear and trust in the agent. Each questionnaire page indicated the strategy description in the header as a reminder. At the end of the study, demographics were assessed, including robot pre-experience and robot ownership, as well as participants’ negative attitudes towards robots (NARS, Nomura et al., 2008). After questionnaire completion, participants were redirected to a separate online form to register for compensation. The average study duration was 35 min. Both online studies were hosted by a professional provider for online surveys (www.unipark.de).

TABLE 5. Questionnaires.

TABLE 6. Self-developed questionnaires.

4.1.7 Data Analysis

Due to the block design, not all strategies were rated by each participant. To analyse the data, the strategy ratings were merged into the three valence categories (negative, neutral and positive) using the mode of participants’ valence ratings. Ratings were compared using repeated-measures ANOVA. Normality assumptions were checked and Greenhouse–Geisser corrected values were used when sphericity could not be assumed. Regression analyses were performed to find significant predictors of acceptance and compliance. Stepwise linear regression modeling was used to predict acceptance. Ordinal regression was used to predict compliance, and ordered log-odds regression coefficients are reported. Compliance was reverse coded so that higher values indicate higher compliance.
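A minimal sketch of this analysis pipeline is given below in Python; the packages (pandas, pingouin, statsmodels), the data layout and the column names are assumptions for illustration and do not represent the software actually used for the reported analyses.

# Illustrative analysis sketch (assumed data layout and tooling).
import pandas as pd
import pingouin as pg
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Hypothetical long-format data: one row per participant x strategy with
# columns participant, strategy, valence_rating (1 = very negative ...
# 5 = very positive), acceptance, power and compliance (1-4 scale).
ratings = pd.read_csv("strategy_ratings.csv")

# 1) Assign each strategy to a valence category via the mode of the ratings.
mode_rating = ratings.groupby("strategy")["valence_rating"].agg(lambda s: s.mode().iloc[0])
labels = pd.cut(mode_rating, bins=[0, 2, 3, 5],
                labels=["negative", "neutral", "positive"])
ratings["valence_cat"] = ratings["strategy"].map(labels)

# 2) Repeated-measures ANOVA on acceptance per valence category with
#    Greenhouse-Geisser correction.
per_person = (ratings.groupby(["participant", "valence_cat"],
                              observed=True, as_index=False)["acceptance"].mean())
aov = pg.rm_anova(data=per_person, dv="acceptance", within="valence_cat",
                  subject="participant", correction=True)

# 3) Ordinal regression of reverse-coded compliance (higher = more compliant)
#    on perceived interpersonal power for the negative strategies;
#    coefficients are ordered log-odds.
neg = ratings[ratings["valence_cat"] == "negative"].copy()
neg["compliance_rev"] = 5 - neg["compliance"]
ordinal = OrderedModel(neg["compliance_rev"], neg[["power"]],
                       distr="logit").fit(method="bfgs", disp=False)
print(aov)
print(ordinal.summary())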

4.2 Results

4.2.1 Manipulation Checks
4.2.1.1 Robot Ratings

Participants rated the robots (and the human cleaner) with regard to humanness, uncanniness, power of impact, the potential to produce fear and authority (see Figure 3, top). Pepper was rated as the most human-like (F(2,89)=25.5,p<.001,ηp2=.30) and the most uncanny robot (F(2,120)=21.8,p<.001,ηp2=.27). The CR700 had the same authority rating as the staff member. Compared with the other robots, the CR700 was rated as having more authority (F(2,120)=41.2,p<.001,ηp2=.41) and being more powerful (F(2,120)=112.5,p<.001,ηp2=.65).

FIGURE 3. Robot ratings in the public context (top) and private context (bottom).

4.2.2 Strategy Ratings

To test whether the strategies produced the intended affect and politeness perception, participants rated the strategies concerning valence, intensity and politeness. Strategies that were considered to be negative in valence (see Table 2) were rated accordingly. Regarding single strategies, some strategy ratings did not match the assumptions: both emotional strategies (S12.1, S13.1) were not rated as positive, and the supposedly neutral baseline strategy was rated as positive. None of the strategies was rated as very positive (i.e. category 5, see Table 7). Negative strategies were rated as more intense than neutral and positive strategies, and positive strategies were rated as less intense than neutral strategies (F(2,91)=22.3,p<.001,ηp2=.27). Annoyance (S4.1) and threat (S6.1) in particular were rated as the most intense strategies. The negative strategies were perceived as more rude than the positive strategies (F(2,120)=168.4,p<.001,ηp2=.74).

TABLE 7. Participants’ strategy valence ratings per use context.

4.2.3 Strategy Effectiveness: User Compliance and Interpersonal Power

It was expected that all strategies would be more effective than no strategy (H1) and that negative strategies would lead to more compliance than positive and neutral strategies (H3). All strategies [except for command (S5.1)] were more effective in producing compliance than no strategy, confirming H1 (see Figure 4). However, negative strategies led to significantly lower compliance rates than the positive strategies (F(2,114)=4.7,p<.05,ηp2=.08).

FIGURE 4. Compliance categories per use context: public context (top) and private context (bottom).

Concerning the context-specific strategies, the following compliance rates (sum of the compliance rates for ’immediate leave’ and ’leave’) emerged: negative public attention (S14.1b) had a compliance rate of 41%, which makes it as effective as the other negative strategies. As 11% of participants indicated that they would not move out of the system’s way, it was as likely to produce reactance as threat and annoyance. Positive public attention (S14.1a) was as effective as apologizing and thanking, with a compliance rate of 86%. The open answers regarding participants’ behaviour revealed alternative compliance options: as an alternative reaction to the negative strategies, two participants stated that they would comply with the command (S5.1) but ask for a more polite approach. For physical contact (S8.1), one participant said s/he would stop the robot by pushing the emergency button. Concerning interpersonal power, a significant difference occurred, with the robot being rated as more powerful when employing negative compared to neutral and positive strategies (F(2,106)=17.72,p<.001,ηp2=.24). Especially for a threatening robot, participants reported that the robot controlled the situation and asserted itself. Summarizing, all conflict resolution strategies were more effective than no strategy. Although the robot employing negative strategies was perceived as more powerful, compliance rates for negative strategies were not higher than for positive or neutral strategies. Hence, for the public application context, H1 was confirmed and H3 had to be rejected.

4.2.4 Strategy Evaluation: Acceptance, Trust and Fear

In H2 it was expected that negative strategies would be less accepted and less trustworthy than positive and neutral strategies. Acceptance ratings showed that none of the strategies was more accepted than no strategy (S1.1) (see Figure 5). Statistical testing revealed a significant difference in acceptance ratings between negative and neutral strategies and between negative and positive strategies (F(2,120)=128.3,p<.001,ηp2=.68), with negative strategies being less accepted. No difference between neutral and positive strategies occurred. Negative strategies led to less trust than positive and neutral strategies (F(2,120)=93.7,p<.001,ηp2=.61). No differences occurred between positive and neutral strategies. Negative strategies were rated as evoking more fear than neutral or positive strategies (F(2,120)=87.8,p<.001,ηp2=.59). No difference in fear ratings occurred between the neutral and positive strategies. Threat (S6.1), annoyance (S4.1) and physical contact (S8.1) in particular had high fear ratings. Descriptively, humor (S12.1) and empathy (S13.1) were the least trustworthy of the positive strategies, and empathy (S13.1) had higher fear ratings than the other positive or neutral strategies (but lower than the negative strategies). The evaluation of the context-specific strategies was as follows: negative public attention (M = 2.6, SD = 1.1) was rated like the negative strategies, and positive public attention (M = 5.1, SD = 1.2) was rated equally to the positive strategy appeal (S9.1). The same results occurred for the trust and fear ratings. Summarizing, as expected in H2, negative strategies were less accepted and less trustworthy than positive or neutral strategies.

FIGURE 5. Acceptance ratings per strategy and use context. Error bars indicate ±2 standard errors of the mean.

4.2.4.1 Conflict Resolution Strategy Acceptance Rated by Agent

H4 expected that human-like robots would be more accepted when applying conflict resolution strategies than mechanoid robots. The following strategies were more accepted if uttered by the human agent than by any robot: threat (S6.1) (F(3,60)=10.90,p<.001,ηp2=.31), show benefit (S3.1) (F(3,43)=4.10,p<.05,ηp2=.19), appeal (S9.1) (F(2,29)=5.92,p<.01,ηp2=.28), apologize (S11.1) (F(2,51)=3.81,p<.05,ηp2=.15), and trigger empathy (S13.1) (F(2,40)=5.80,p<.01,ηp2=.23). In contrast, the following strategies were more accepted when applied by Roomba than by all other agents: no strategy (S1.1) (F(2,51)=3.45,p<.05,ηp2=.13), approach (S7.1) (F(2,38)=3.50,p<.05,ηp2=.15), and physical contact (S8.1) (F(2,27)=5.29,p<.05,ηp2=.26). In conclusion, human-like robots were not more accepted when using conflict resolution strategies. As expected in H4, some negative strategies (i.e. the physical strategies approach and physical contact) were more accepted when applied by the mechanoid robot than by the other robots or the human agent.

4.2.5 Influences on Strategy Acceptance and Compliance

To explore whether acceptance and compliance are influenced by the strategy ratings, correlations were examined. Acceptance correlated highly positively with politeness and trust, as well as moderately negatively with intensity and fear (see Table 8). As can be seen in Table 9, compliance and interpersonal power were positively correlated, but compliance and acceptance did not correlate in the public application context. Strategy intensity and compliance correlated only for the negative strategies. Three stepwise linear regressions with trust, fear of agent behaviour, politeness and interpersonal power as potential predictors of strategy acceptance (negative, neutral, positive) were performed. Politeness and trust transpired as significant predictors for the acceptance of negative, neutral and positive strategies (see Table 10). Linear regressions with robot or user characteristics did not produce valuable predictive models for strategy acceptance. For compliance, an ordinal regression was performed with power, fear, trust and politeness. Compliance with negative strategies could be significantly predicted by interpersonal power (β = 1.39, p < 0.001, CI [0.75; 2.0]), which explained 36% of the compliance variance (Nagelkerke Pseudo R2 = 0.36). If a participant were to increase his/her interpersonal power rating by one point, his/her ordered log-odds of being in a higher compliance category would increase by 1.39 (odds ratio = 4.0). Hence, the higher the perceived interpersonal power was, the more compliant the participants were when the agent applied negative strategies. Positive and neutral strategies showed the same pattern, with interpersonal power as a significant predictor of compliance, but the prerequisites were not met. Predictions with robot or user characteristics did not yield valid models. Concluding, strategy acceptance could be predicted by politeness and trust, indicating that when participants rated a strategy as more polite and trustworthy they accepted it more. Participants’ compliance with negative strategies was influenced by interpersonal power.
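For readers less familiar with ordered log-odds coefficients, the reported odds ratio follows directly from exponentiating the regression coefficient; this is a standard transformation illustrated here, not an additional result of the study:

OR = exp(β) = exp(1.39) ≈ 4.0

That is, under the proportional-odds assumption, a one-point increase in perceived interpersonal power is associated with roughly four times higher odds of falling into a higher compliance category.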

TABLE 8. Summary of correlations with acceptance.

TABLE 9. Summary of correlations with compliance.

TABLE 10. Regression coefficients for the prediction of strategy acceptance.

4.2.6 Summary of Results

Concerning compliance, all strategies were more effective in achieving compliance than no strategy (S1.1), except for command (S5.1). Compliance could be predicted by the perceived interpersonal power.

All negative strategies were less accepted than no strategy (S1.1). Cognitive and polite strategies were as accepted as no strategy (S1.1). Command (S5.1), humor (S12.1) and empathy (S13.1) were neither effective nor accepted. Threat (S6.1) was only accepted for the human agent, whereas the mechanoid robot Roomba was accepted to use the physical strategies (S7.1, S8.1). Evaluative strategy ratings such as politeness and trust were significant predictors of strategy acceptance.

5 Study 2

5.1 Method

5.1.1 Sample

Forty-eight participants were recruited via email, social media, and flyers on campus. Fifty participants were recruited by a professional online recruiter. Four participants had to be excluded due to video display issues and one due to answer tendencies. The final sample size was N = 93. University participants received either course credit or a shopping voucher as compensation. The professionally recruited participants were compensated monetarily.

5.1.2 Study Design

The second online study addressed the private household as an application context for assertive service robots. The study followed a block design in which participants saw five out of fifteen conflict resolution strategies. The strategies were implemented in blocks of five negative, three neutral and seven positive strategies. As the context-sensitive strategies (foot-in-the-door (S15.2a) and thanking submissive (S15.2b)) were both positive in valence, an unequal number of negative and positive strategies resulted. The online program randomly assigned two out of the five negative, one out of the three neutral and two out of the seven positive strategies. Not all participants saw all strategies due to test economy. Each strategy was rated by 32 participants on average.

5.1.3 Human–Robot Goal-Conflict Scenario

The scenario was set in the participant’s kitchen, where s/he would host a party at home in 15 min. The participant would need to prepare something in the kitchen for the party, while it would also be important that the robot/person cleaned the kitchen before the party started. During the preparation, the robot/person would begin to vacuum the kitchen and the participant would be in the way of that process. The participant was then instructed to choose how to behave (see Study 1).

5.1.4 Conflict Resolution Strategies

The applied conflict resolution strategies were kept similar for both use cases (with context-sensitively adapted wording), with four exceptions (see Table 2): no strategy (S1.2), foot-in-the-door (S15.2a), thanking submissive (S15.2b) and thanking dominant (S11.2). These strategies were adapted because of lessons learned from Study 1 or added for a more complete investigation of possible conflict resolution strategies. As an adaptation to the private context, the baseline strategy (S1.2) included a verbal utterance: the agent uttered the sentence ’I would like to continue to vacuum the kitchen’ and waited. This sentence preceded all other strategies to create transparency regarding the agent’s intentions. Another lesson learned from participants’ comments on the strategies in Study 1 was to adapt the wording of the strategy thanking (S11.1). In Study 1, the wording of thanking was criticized for being too dominant. Hence, in Study 2 both forms of thanking were compared: submissive (S15.2b) and dominant (S11.2). The foot-in-the-door technique (S15.2a) was only applied in the private context. In the public context, this technique did not seem feasible, as no small and realistic initial request could be formulated to match the one in the private context (e.g. asking the person to leave the train station was unsuitable).

5.1.5 Robots and Human Agents

Participants saw videos of three robots: the humanoid service robot TIAGo (PAL Robotics), the small vacuum cleaning robot Roomba (iRobot) and the humanoid robot Pepper (SoftBank Robotics) (see Figure 2). The order of the robot videos was randomized. The videos of Roomba and Pepper were the same as in Study 1. Each video lasted between 5 and 14 s and depicted the robot driving at a robot-specific speed towards the viewer in a neutral hallway. Each video was shown twice and participants could not stop or replay it. After each video, the participant had to confirm that the video had been presented correctly (exclusion criterion). The stimulus videos can be found in the supplementary material along with a screen recording of the video presentation in the online survey. Videos and the sketch of each robot were presented as in Study 1. Additionally, the human agent’s social role (companion vs. employee) was manipulated to obtain a reference value for the robot and strategy ratings based on power asymmetry (companion on an equal power level, employee as subordinate). Hence, two human agents were selected: a household member and a domestic help. Neither human agent was introduced with a video so as not to influence the participants. Instead, participants were asked to specify which household member they imagined during the interaction. The majority of participants imagined interacting with their partner/spouse (40%) or their flatmate (27%). Summarizing, Study 2 comprised three robots and two human agents.

5.1.6 Study Procedure and Data Analysis

The procedure was identical to Study 1, except for the personality questionnaires. For the private context, where personalizing interaction strategies is possible, questionnaires assessing general personality traits, conflict type and dispositional empathy were administered (see Table 5). Additionally, the ascribed social role of the robot (e.g. companion, colleague, tool) was assessed as a manipulation check via an open question, followed by a selection of nine potential roles. These additions to the study procedure led to a longer average study duration of 45 min. Data analysis was similar to Study 1.

5.2 Results

5.2.1 Manipulation Checks
5.2.1.1 Robot Ratings

Participants rated the robots with regard to humanness, uncanniness, power of impact, the potential to produce fear and authority (see Figure 3). It was expected that humanoid robots would be perceived as more human-like and that larger robots would be perceived as having more power of impact and hence as producing more fear. TIAGo was rated as the most uncanny (F(2, 184) = 75.1, p < 0.001, ηp² = 0.45) and most authoritarian robot (F(2, 184) = 38.5, p < 0.001, ηp² = 0.30). TIAGo and Pepper were rated equally with regard to power and evoked fear. Pepper was rated as the most human-like (F(2, 156) = 32.7, p < 0.001, ηp² = 0.26), whereas Roomba was rated as the weakest (F(2, 169) = 96.9, p < 0.001, ηp² = 0.51) and most mechanical-looking robot (see Figure 3, bottom). For TIAGo and Pepper, the most frequently named social role was employee/butler (22% each). For Roomba, 26% of participants perceived it as having no social role, 23% perceived it as a tool and 22% as a helper. Summarizing, TIAGo was rated as uncanny, Pepper as the most human-like and Roomba as the most mechanical-looking robot. Both humanoids were perceived as a butler, whereas Roomba was mainly perceived as a tool or as having no social role.
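As a side note for readers who want to sanity-check the reported effect sizes, partial eta squared can be recovered from an F value and its degrees of freedom. The short Python sketch below (not part of the original analysis) illustrates this relation for the uncanniness rating reported above.

```python
def partial_eta_squared(f_value: float, df_effect: int, df_error: int) -> float:
    """Recover partial eta squared from an F statistic and its degrees of freedom."""
    return (f_value * df_effect) / (f_value * df_effect + df_error)

# Uncanniness rating ANOVA reported above: F(2, 184) = 75.1
print(round(partial_eta_squared(75.1, 2, 184), 2))  # 0.45, matching the reported value
```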

5.2.1.2 Strategy Ratings

To test whether the strategies produced the intended affect and politeness perception, participants rated the strategies concerning valence, intensity and politeness. Strategies classified as negative in valence were rated as significantly more negative than the neutral and positive strategies (F(2, 184) = 46.3, p < 0.001, ηp² = 0.34). Regarding single strategies, more positive strategies than expected were rated as neutral. Approach (S7.2) was not rated as a negative strategy. However, no strategy was rated as very positive (see Table 7). Negative strategies were rated as more intense than neutral and positive strategies (F(2, 157) = 20.7, p < 0.001, ηp² = 0.18); no difference occurred between positive and neutral strategies. The negative strategies were perceived as more rude than the positive strategies (F(2, 184) = 48.3, p < 0.001, ηp² = 0.34). In particular, annoyance (S4.2), command (S5.2), threat (S6.2) and physical contact (S8.2) were rated as the most intense and the rudest strategies.

5.2.2 Strategy Effectiveness: User Compliance and Interpersonal Power

It was expected that all strategies would be more effective than no strategy (H1) and that negative strategies would lead to more compliance than positive and neutral strategies (H3). All strategies were more effective in producing compliance than no strategy (S1.2), except for threat (S6.2) (see Figure 4), thereby confirming H1. The ANOVA revealed a significant difference in compliance between negative, positive and neutral strategies (F(2, 164) = 25.0, p < 0.001, ηp² = 0.23). All post-hoc tests were significant. Concerning the context-specific strategies, the following compliance rates (sum of the compliance rates for ’immediate leave’ and ’leave’) emerged: the foot-in-the-door strategy (S15.2a) was as effective as the average positive strategy with a compliance rate of 46%, and thanking dominant (S10.2) was as effective as the negative strategies with a compliance rate of 26%. The open answers regarding the participants’ behaviour revealed alternative compliance options: for the negative strategies, nine participants stated that they would switch off the robot; for the positive and neutral strategies, four participants indicated that they would tell the robot to drive around them. Regarding interpersonal power, no differences occurred in the ratings between positive, negative and neutral strategies or between the single strategies. Summarizing, as negative strategies were neither rated as more powerful nor more effective than neutral or positive strategies, H3 had to be rejected.

5.2.3 Strategy Evaluation: Acceptance, Trust and Fear

In H2 it was expected that negative strategies would be less accepted and less trustworthy than positive and neutral strategies. Acceptance ratings showed that none of the strategies was more accepted than no strategy (S1.2), but cognitive and polite strategies were equally accepted (see Figure 5). The ANOVA revealed a significant difference in strategy acceptance ratings (F(2, 184) = 44.5, p < 0.001, ηp² = 0.33). The post-hoc test showed that negative strategies were less accepted than positive (M = −1.63, p < 0.001) or neutral strategies (M = −1.41, p < 0.001), but no difference between neutral and positive strategies occurred. The evaluation of the two context-specific strategies was as follows: the foot-in-the-door technique (S15.2a) (M = 4.5, SD = 1.6) was as accepted as the neutral strategies, whereas thanking dominant (S10.2) (M = 3.7, SD = 1.7) was less accepted than thanking submissive (S15.2b), as it was rated like the negative strategies. Concerning trust and fear, negative strategies led to less trust than positive and neutral strategies (F(2, 165) = 34.4, p < 0.001, ηp² = 0.27). No differences occurred between positive and neutral strategies, but appeal led to the highest trust. Negative strategies were rated as evoking more fear than neutral or positive strategies (F(2, 184) = 36.3, p < 0.001, ηp² = 0.28). No difference in fear ratings occurred between the neutral and positive strategies. In particular, annoyance (S4.2) and threat (S6.2) led to the highest fear. Summarizing, as expected, negative strategies were less accepted and less trustworthy than positive and neutral strategies, which confirms H2 for the private context.

5.2.3.1 Conflict Resolution Strategy Acceptance Rated by Agent

H4 expected that human-like robots would be more accepted than mechanoid robots when applying conflict resolution strategies. The household member was the only agent accepted when applying the following conflict resolution strategies: threat (S6.2) (F(3, 111) = 2.80, p < 0.05, ηp² = 0.06), appeal (S9.2) (F(1, 27) = 8.20, p < 0.01, ηp² = 0.30), trigger empathy (S13.2) (F(3, 83) = 3.61, p < 0.05, ηp² = 0.11), humor (S12.2) (F(2, 76) = 11.31, p < 0.001, ηp² = 0.27), thanking dominant (S10.2) (F(2, 63) = 3.71, p < 0.05, ηp² = 0.13), and foot-in-the-door (S15.2a) (F(2, 53) = 4.12, p < 0.05, ηp² = 0.14). Hence, only the household member was accepted when expressing emotional or social conflict resolution strategies. Contrary to the expectations in H4, no strategy was more accepted when uttered by a robot, regardless of human-likeness. However, most of the strategies were equally accepted for the robots and the domestic help.

5.2.4 Influences on Strategy Acceptance and Compliance

Correlations were examined to explore influences on acceptance and compliance. As can be seen in Table 8, acceptance correlated highly positively with politeness and trust, and moderately negatively with intensity and fear. Acceptance and compliance correlated moderately positively, as did politeness and compliance (see Table 9). However, compliance and interpersonal power were moderately negatively correlated. Three stepwise linear regressions with trust, fear of the agent’s behaviour, politeness and interpersonal power as potential predictors of strategy acceptance (negative, neutral, positive) were performed. Politeness and trust emerged as significant predictors of the acceptance of negative, neutral and positive strategies (see Table 10). Hereby, politeness explained most of the variance in acceptance (see Table 10, R² changes). Linear regressions with robot or user characteristics did not produce valuable predictive models for strategy acceptance. For compliance, an ordinal regression was performed with power, fear, trust and politeness. Compliance with positive strategies was significantly negatively predicted by interpersonal power (β = −1.42, p < 0.001, CI [−1.99; −0.86]), which explained 44% of the compliance variance (Nagelkerke pseudo R² = 0.44). If a participant were to increase their interpersonal power rating by one point, their ordered log-odds of being in a higher compliance category would decrease by 1.42 (odds ratio = 0.24). Hence, the higher the perceived interpersonal power, the less likely participants were to comply when the robot applied positive strategies. Negative and neutral strategies showed the same pattern, with interpersonal power as a significant predictor of compliance, but the model assumptions were not met. Predictions of compliance from robot or user characteristics also did not yield valid models. Summarizing, acceptance and compliance were positively associated. Higher ratings of strategy intensity and perceived fear resulted in lower acceptance ratings. Strategy acceptance could be predicted by politeness and trust, indicating that when participants rated a strategy as more polite and trustworthy, they accepted it more. Compliance was positively associated with strategy politeness ratings and negatively with interpersonal power. Hence, the more polite participants rated a strategy, the more compliant they were, and the more powerful they rated the robot, the less compliant they were.
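As a brief illustration (not part of the reported analysis), the odds ratio follows directly from exponentiating the ordinal-regression coefficient; the snippet below reproduces the conversion for the value reported above.

```python
import math

# Ordinal (proportional-odds) regression coefficient for interpersonal power, as reported above.
beta_power = -1.42
odds_ratio = math.exp(beta_power)
print(round(odds_ratio, 2))  # 0.24: one extra point of perceived power roughly quarters
                             # the odds of falling into a higher compliance category
```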

5.2.5 Summary of Results

All strategies were more effective in achieving compliance than waiting (S1.2), except for command (S5.2) and threat (S6.2). The latter two even led to reactance, with about a third of participants not complying. Threat (S6.2) was rated as the least trustworthy strategy and, together with annoyance (S4.2), as one of the two most fearsome strategies. Regarding acceptance, all negative strategies except for approach (S7.2) were rated as less acceptable than waiting (S1.2), but cognitive (S2.2, S3.2) and polite strategies (S9.2–S11.2) were equally accepted. Regarding the agent employing the strategies, no strategy was more accepted when uttered by a robot. In particular, negative strategies (S4.2–S8.2) and emotional strategies (S12.2, S13.2) were only accepted for the household member. Regarding influences on acceptance and compliance, acceptance was connected to politeness, trust and fear. Compliance was negatively associated with interpersonal power and positively with politeness in the private context. Compliance and acceptance correlated moderately.

6 Discussion

The aim of this study was to develop and test conflict resolution strategies for service robots to achieve compliance with a robot’s request in an accepted way. For this, psychological principles were transferred to HRI to develop conflict resolution strategies. The strategies were systematically tested in two online studies in two application contexts for service robots: public and private space. Hereby, the classification of the strategies into three valence categories allowed for systematic testing, as each participant rated the same number of negative, neutral and positive strategies. The results showed that neutral and positive conflict resolution strategies were accepted and effective in achieving compliance with a robot’s request. Negative strategies were more controversial, as user acceptance and compliance depended on robot type and application context. Negative strategies like command (S5.2) and threat (S6.2) even led to user reactance. For the public context, influences on strategy acceptance and compliance could be found: whereas acceptance was predicted by politeness and trust, compliance was predicted by interpersonal power.

Based on the results, two hypotheses could be accepted and one had to be rejected. Regarding the conflict resolution strategies, it was expected that they would be more effective than no strategy (H1). This was true for both application contexts (except for command and threat); hence, H1 was supported. However, not all strategies can be recommended to be pursued further, as will be described below. Regarding negative strategies, it was assumed on the basis of the human-robot power asymmetry that strategies with high interpersonal power of the robot would be evaluated negatively in terms of acceptance and trust (H2), but would lead to more compliance (H3). For both application contexts, negative strategies like commanding (S5) were found to be less accepted and less effective in achieving compliance than positive strategies. Hence, H2 (acceptance, trust) was supported and H3 (compliance) had to be rejected. Negative strategies even led to psychological reactance, with about one-tenth to one-third of participants in both application contexts indicating that they intentionally disobeyed. Reactance was more common in the private than in the public application context. Only here did a positive correlation between politeness and compliance occur, indicating that the ruder a request was perceived to be, the less likely compliance was. This was mirrored in the correlations between interpersonal power and compliance: whereas compliance and interpersonal power were highly correlated in both application contexts, only in the private context was the correlation negative. Hence, the user did not comply even if s/he rated the robot as more powerful than him/herself. This illustrates, as expected, the stronger effect of the power asymmetry in the private context. The reactance found in this study has also been observed in previous work (Roubroeks et al., 2010; Ghazali et al., 2018). Only in the private context were compliance and acceptance ratings moderately, positively correlated. This might hint at the possibility that strategy acceptance is more important in the private application context than in public. In the private context, where one has robot control and authorization, acceptance guides the compliance decision. In the public context, one might comply although not accepting the robot’s request because one feels in a weaker position and publicly observed.

In H4 it was expected that human-like robots would be more accepted than mechanoid robots when applying positive and neutral conflict resolution strategies. In both application contexts, it was more accepted if the human uttered the negative strategy threat (S6), the positive strategy appeal (S9) or the human-specific strategy empathy (S13) than if a robot did. As expected, the mechanoid robot Roomba was more accepted than Pepper when using negative conflict resolution strategies in public. In the private context, no strategy was more accepted when uttered by a robot, regardless of human-likeness. Hence, H4 was only partially confirmed. However, most of the strategies were equally accepted for the robots and the domestic help. Only the household member, with the same assumed social status as the participant, was accepted when expressing emotional or social conflict resolution strategies. This may indicate that, in the private context, social status has a greater influence on the acceptance of certain conflict resolution strategies than the robot’s human-likeness. For all other strategies in both contexts, no difference in acceptance occurred between robots and humans, which shows the potential of robotic conflict resolution strategies. Hereby, more research is needed to determine the appropriate set of conflict resolution strategies per robot type and application context.

Apart from the hypotheses, a research question was formulated concerning differences between the application contexts regarding strategy acceptance and effectiveness. Indeed, differences between the contexts emerged. In the private context, all positive strategies were rated as more polite than no strategy (S1), which was the opposite in the public context. Additionally, all negative strategies, except for command (S5.2), were more accepted in the private application context. Although negative strategies were less accepted in the public context, compliance rates for negative strategies were higher there than in the private context. Interestingly, the human-robot power asymmetry influenced the prominent way of complying: whereas in public (assumed human-robot power equality) participants’ prevalent reaction was to comply (though not immediately), in the private context (assumed owner superiority) they favored finishing their task first. Similarly, a study which tried to elicit helping behaviour from participants occupied with a secondary task showed that people preferred to help after they had finished their task rather than interrupting it (Fischer et al., 2014).

Differences between the application contexts also appeared for the effective strategy mechanisms. Cognitive and polite strategies were the most accepted and successful, whereas findings regarding social strategies were mixed. Authority-based strategies (i.e. S5 command and S6 threat) were neither accepted nor effective. This was also true for strategies using negative reinforcement (S4 annoyance) and negative social influence (S14.1b negative public attention). In contrast, positive social strategies using a sequential-compliance technique (S15.2a foot-in-the-door) or positive social influence (S14.1a positive public attention) were accepted and effective. Therefore, if an assertive robot makes use of social influence, it should do so in a positive manner to avoid negative effects of the human-robot power asymmetry. Concerning emotional strategies, empathy (S13.1), but not humor (S12.1), was less accepted in the public context. Empathy (S13.1) was rated as less trustworthy and more fearsome than other positive or neutral strategies in the public context. As the robot in the public context might be perceived as an equal due to its social role, trying to elicit empathy for its situation (i.e. appearing weaker) could contradict this role assumption. Just as it is considered inappropriate for a cleaner to address a passer-by on a personal level, the same could apply to an autonomous service robot. Similarly, in the private context, emotional strategies (S12.2, S13.2) were only accepted for the household member but not for any robot. Physical strategies were more accepted in the private than in the public context. As physical strategies emphasize the robot’s embodiment, they are likely connected to fear of the robot. Indeed, in the public context, the physical strategies (S7, S8) were rated as evoking more fear than in the private context. The higher fear in the public context might be explained by a lack of prior information about the robot’s function and capabilities compared to the private context. This is also mirrored in the interaction between strategy mechanism and robot type in public: both physical strategies (S7.1, S8.1) were more successful for a small, non-threatening robot (Roomba) than for the other robots and the human agent. Naturally, if users do not fear that an assertive robot might harm them, the robot is more accepted. This is in line with previous studies regarding robot size and perceived power of impact (Young et al., 2009; Jost et al., 2019). Hereby, pre-information and transparency will be important in the future to ensure that an assertive robot, regardless of size and strength, will never use force. In the private context, a robot respecting the user’s personal space (S7.2 approach) was more accepted than a close approach (5 cm in the presented study, as in S8.2 physical contact). As previous work found a positive effect on compliance with a minimum distance of 0.6 m (Mutlu, 2011; Chidambaram et al., 2012), our implementation was probably too close for comfort. Since the presented study was conducted online, the results regarding the physical mechanisms of robot conflict resolution strategies require further confirmation. Summarizing, the application context differences regarding effective mechanisms suggest that robotic conflict resolution strategies need to be applied context-sensitively to be useful.

Having established the strategies’ acceptability and effectiveness, a first test of influencing factors on those variables was performed. In both application contexts, acceptance ratings could be predicted by politeness and trust ratings. Similar to human negotiations (Pfafman, 2017), perceived politeness and trust influenced strategy acceptance in both contexts. This might explain why integrative robot conflict resolution strategies were more effective and accepted than distributive strategies. Similarly, in human negotiations integrative strategies are preferred if trust between negotiators is high (Kong et al., 2014). Therefore, integrative strategies seem more promising in HRI than distributive conflict resolution strategies for both application contexts. In both application contexts, interpersonal power could predict compliance, but the direction of the influence differed. In the public context, compliance with negative strategies was positively predicted by higher interpersonal power of the robot: naturally, higher robot power led to higher compliance. In contrast, in the private context, compliance with positive strategies was negatively predicted by higher interpersonal power. Hence, although the robot was rated as more powerful, participants were still less likely to comply. Once more, this could reflect the higher impact of the power asymmetry in the home context, where even positive strategies might be perceived as inappropriate. This is also supported by the finding that no robotic conflict resolution strategy was highly accepted (i.e. reached an average of five on a 7-point Likert scale). Therefore, in the home context, the robot user’s personal assessment of the human-robot power asymmetry is an important factor that needs to be considered for real-world applications. User variables regarding general personality, conflict type, dispositional empathy, demographics, robot experience/ownership or negative attitudes towards robots could not predict strategy acceptance or compliance. A correlational design with a larger sample size might be better suited to determine whether user characteristics influence human-robot goal-conflict resolution as they do in human-human interactions. Summarizing, the developed conflict resolution strategies differed between the use contexts regarding compliance, acceptance and trust, and these outcomes were influenced by perceived interpersonal power and politeness. In addition to previous studies (Saunderson and Nejat, 2019), the presented findings can now serve as a basis for the application and further development of robotic conflict resolution strategies. Recommendations for the public and private application contexts are presented below.

6.1 Practical Implications

Concerning a real-world application of robot assertiveness, conflict resolution strategies have the potential to render service robots in public and private spaces more useful, provided that such robot behaviour is accepted. Based on the theoretical background and empirical findings, we would like to present the following recommendations regarding acceptable and effective conflict resolution strategies for autonomous service robots.

Recommended conflict resolution strategies for the public application context are:

• Goal explanation (S2.1), showing the benefit of cooperation (S3.1), humor (S12.1), positive public attention (S14.1a), approach (S7.1) (if applied by small robot).

Not recommended for the public context:

• Annoyance (S4.1), command (S5.1), threat (S6.1), physical contact (S8.1), eliciting empathy (S13.1), negative public attention (S14.1b).

Recommended conflict resolution strategies for the private application context are:

• Goal explanation (S2.2), showing the benefit of cooperation (S3.2), approach (S7.2), foot-in-the-door (S15.2a).

Not recommended for the private context:

• Annoyance (S4.2), command (S5.2), threat (S6.2), physical contact (S8.2).

Polite strategies like appeal (S9), thanking (S10) and apologizing (S11) can be used in addition to the conflict resolution strategies. Future studies could examine whether a combination of assertive strategies with polite strategies is more accepted and effective than a single-strategy approach. As in human negotiations, politeness could reduce the face threats posed by assertive strategies and make them more acceptable (Pfafman, 2017). Hereby, learning from psychology, an escalating approach might be feasible: applying assertive strategies after polite, cooperative strategies have failed might be more acceptable (Preuss and van der Wijst, 2017). For this, combining cognitive mechanisms like goal explanation (S2) and showing the benefit (S3) with polite strategies (S9–S11) could be especially beneficial, as both were effective and accepted in both application contexts. In practice, one possible implementation of conflict resolution strategies for the private context could be: first appeal (S9.2), then show the benefits of cooperation (S3.2) and finally, if the participant has not complied, try the foot-in-the-door technique (S15.2a); see the sketch after this paragraph. Future studies can then test whether such strategy combinations are more effective and acceptable than single-strategy approaches. Hereby, the observed application context and robot differences regarding strategy effectiveness and acceptability require context-sensitive and robot-specific strategy development. Whereas cognitive and polite strategies seem feasible for both contexts, emotional and physical strategies were more acceptable in the private context. However, if a small mechanoid robot applies physical strategies (S7.1, S8.1), they could also be accepted in public. Regarding compliance, a robot using high-power strategies (e.g. S5 command and S6 threat) can provoke reactance, especially in the private application context. In general, compliance with a robot’s request should be expected to be lower in the private application context than in public due to the power asymmetry. Hereby, for real-world applications of assertive service robots at home, it might be important to assess the user’s preferences regarding the robot’s autonomy and assertiveness level. For instance, when the service robot is delivered, the user could answer the respective questions and the robot’s assertiveness level could be personalized accordingly. Although some might reject robot assertiveness at the first assessment, it is conceivable that they will be convinced over time as conflict situations occur in which the robot would be ineffective if it always deferred to the user. Hereby, trust and politeness will also decide about the long-term acceptance of robot assertiveness. For the public context, where personalization is not feasible, robot assertiveness should only be applied purposefully and in moderation to solve human-robot goal conflicts. This includes checking, before issuing a request in a crowded place, whether the addressed person actually has the possibility to comply with the request (e.g. space and time for evasion; disability), in order not to disturb passers-by. Situational adaptation of robot assertiveness might be key for the long-term acceptance of assertive service robots in public. Finally, the ethical implications of robot assertiveness, similar to those of persuasive robots (Chidambaram et al., 2012), need to be considered. Robot assertiveness could be an acceptable and effective form of robot goal achievement as long as it supports goals deemed appropriate by the user and society and never uses violence.
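To illustrate the escalation idea, the following minimal Python sketch shows one way such a sequence could be wired up. The utterances, function names and compliance check are hypothetical placeholders under the assumptions stated in the comments, not the implementation evaluated in this paper.

```python
from typing import Callable

def resolve_goal_conflict(utter: Callable[[str], None],
                          user_complied: Callable[[], bool]) -> bool:
    """Escalate through the suggested private-context sequence until the user complies."""
    escalation = [
        # S9.2 appeal (placeholder wording)
        "Could you please step aside so I can vacuum here?",
        # S3.2 benefit of cooperation (placeholder wording)
        "If I can finish now, the kitchen will be clean before your guests arrive.",
        # S15.2a foot-in-the-door (placeholder wording)
        "Could you just lift one foot for a moment so I can clean underneath?",
    ]
    for request in escalation:
        utter(request)
        if user_complied():
            return True
    return False  # give up and defer to the user if no request succeeds

# Example run with a user who never complies: prints the three requests, returns False.
resolve_goal_conflict(print, lambda: False)
```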

6.2 Strengths and Limitations

This study is the first to develop robot conflict resolution strategies based on psychological mechanisms of goal-conflict resolution. The theoretical foundation had the advantage of yielding a variety of potentially effective strategies which have not yet been the focus of HRI, and thus extends the design scope of robotic interaction strategies. Additionally, systematically considering the psychological mechanisms of the conflict resolution strategies allowed for a deeper understanding of the results. The combination of two robot application contexts and different robot types (large, small, humanoid, mechanoid) allowed more precise statements to be made about the specific effectiveness and acceptance of the strategies. This way, the study was able to investigate the specific effects of combining conflict resolution strategies with different robot types and application contexts. The online study format allowed for a text-based strategy presentation without the influence of a real-world implementation in a specific robot prototype (e.g. appearance, specifications, speech synthesis limitations). This meant that the strategy effect could be investigated without biases added by the implementation. When setting up the online studies, standardization of the study material was emphasized by, among other things, ensuring that the robot videos were of the same length, assessing whether the videos were displayed correctly for each participant, and using validated questionnaires where possible. The manipulation checks regarding the robot ratings were successful.

Although the presented studies have provided insights into the acceptance and effectiveness of robot assertiveness, some limitations have to be considered. The extensive testing of fifteen conflict resolution strategies per application context meant that not all participants saw all strategies. This limited the statistical power but, at the same time, diminished potential respondent fatigue. Regarding internal validity, standardization of the strategies was difficult with regard to sentence length, as polite speech is naturally more indirect and lengthy because it tends to paraphrase and embellish (Danescu-Niculescu-Mizil et al., 2013). Strategy phrasing proved to be essential for this study’s findings: thanking dominant (S10.2) was perceived as a negative strategy compared to thanking submissive (S15.2b), which was positively evaluated. Hence, it was reasonable to differentiate between dominant and submissive thanking in Study 2, and the phrasing of a thanking strategy has to be chosen carefully (present tense vs. subjunctive). For the comparison between the application contexts, it has to be noted that the presented results can only provide first evidence regarding context differences. As the application context was not implemented as an independent variable and the robots differed, further studies are needed that compare both application contexts directly. Although the strategy classification into three valence categories allowed for systematic testing, participants’ ratings differed from the expected affective evaluation: some of the positive strategies were rated more neutrally than expected and none was rated as very positive. The categorization based on the human-robot power asymmetry should not be seen as final but as a working hypothesis that allows for systematic testing. However, this shows the relevance of assessing participants’ perception of strategy valence in future tests of robotic conflict resolution strategies. Finally, as the evaluation was conducted online, external validity might be limited. As only the intention to comply could be measured and videos cannot replace real-world encounters, lab and field experiments are needed to replicate the results. This holds especially true for the physical strategies, which might have been difficult to imagine although they were described relative to the participant’s position (e.g. until the robot touches your luggage). Limitations regarding immersion seem likely, but the fact that the robot behaviour could trigger reactance and that some strategies (e.g. threat and command) were not accepted even in an online setting with imagined interaction indicates the psychological reality of the situation for the participants during the study. It has also been shown in previous HRI studies that imagined interaction with a robot resembles real HRI with regard to acceptance of the robot, participants’ behaviour toward the robot (Wullenkord and Eyssel, 2019) and negative attitudes towards the robot (Wullenkord et al., 2016).

Therefore, guided imagined interactions seemed reasonable for a preliminary evaluation of the developed strategies. The intention behind the online format was not to replace real-world testing but to detect strategies that are already rejected in an imagined situation (which was indeed the case for threat and command) and to eliminate them from future research agendas on acceptable and effective robotic conflict resolution strategies. Real-world testing can then focus on the best-accepted strategies. Beyond the limitations of online testing, the generalizability of the results is limited, as the conflict resolution strategies were examined in a specific situation with specific robots. Therefore, future work should clarify the extent to which the results generalize to different situations, robots and contexts.

6.3 Future Work

Future studies are needed to determine the factors that render some robotic conflict resolution strategies more acceptable and effective than others. Hereby, robot, human and situational influences need to be considered. On the robot side, the strategies must be skilfully implemented in terms of speech (e.g. tone of voice), gestures and proximity. Appropriate expression of assertiveness in human conflict resolution is considered a communication skill that is not trivial to acquire (Pfafman, 2017). For this, it seems reasonable to rely on psychological research not only for strategy development but also for implementation, e.g. training programs that promote appropriate assertiveness at work (Thacker and Yost, 2002; Wilson et al., 2003; Nakamura et al., 2017). Additionally, future work is needed to determine appropriate conflict resolution strategies for further robot types (e.g. androids) and sizes (e.g. miniature, human-sized) which were not represented in the presented studies. Potentially, with an even more varied set of robots, robot characteristics like humanness, power of impact and authority might emerge as moderators of strategy effectiveness and acceptance.

On the human side, user personality, robot attitudes and pre-experience, as well as culture, are likely to be important for strategy acceptance and effectiveness, as they are influential in human negotiations. General personality traits (BIG5; Costa and McCrae, 1985) and specific conflict-related traits such as the conflict type (ROCI-II; Rahim, 1983) have been shown to determine individual conflict behaviour (Rahim, 1983; Park and Antonioni, 2007). An integrating style was positively associated with Agreeableness and Extraversion (Park and Antonioni, 2007). Dominating personalities use distributive conflict resolution strategies (Rahim, 1983), and a dominating style is positively associated with Extraversion but negatively with Agreeableness (Park and Antonioni, 2007). Conceivably, the robot’s strategy has to match the user’s conflict personality to be effective and accepted. If a dominating negotiator is confronted with an assertive robot, the robot might be less acceptable than if it had applied the strategy to a person with an obliging conflict style. In addition, negative attitudes towards and fears about robots could negatively influence the acceptance of and compliance with assertive robots, since individuals holding such attitudes already tend not to accept even non-assertive robots (De Graaf and Allouch, 2013b; Ghazali et al., 2020). Negative attitudes and state anxiety have also been shown to negatively influence trust in HRI (Miller et al., 2020). Culture is an additional influence that needs examination in future work, as it shapes expectations regarding politeness and assertiveness (Lee et al., 2017). Assertiveness must be considered appropriate (e.g. to context and culture), otherwise it can be perceived as aggressive (Pfafman, 2017). An assertive robot might be acceptable in European countries but could be considered inappropriate and rude in Asian countries. For German and Chinese samples, this has been shown for assertive communication strategies of a small autonomous delivery robot towards pedestrians (Lanzer et al., 2020). Consequently, the presented findings need further confirmation in different samples. Summarizing, future studies are needed to determine the influence of user characteristics on the acceptance of robot assertiveness. The findings could then be used to personalize the robot in the home setting, as has been suggested for other robot characteristics (Ligthart and Truong, 2015).

Situational influences on strategy acceptance and effectiveness are likely to include the conflict scenario (e.g. emergency situations), other application contexts (e.g. security robots), repetition and habituation. Apart from the presented scenarios, robot assertiveness could be especially useful in emergency situations. In the public context, for example, security robots might help during an evacuation and might need to be assertive to gain people’s trust and compliance in such a stressful, chaotic situation. In the private context, a service robot might need to be assertive and call an ambulance in case of a medical emergency. To avoid the results being distorted by the novelty effect of an assertive robot, it is necessary to test whether repeated interaction changes participants’ attitude and behaviour towards the robot’s assertiveness (e.g. habituation, trust building). If users have benefited from the autonomy and effectiveness of the robot in the past and trust has been built up through reliable functioning and appropriate robot actions, the acceptance of the robot’s assertiveness could increase (Ghazali et al., 2020; Kraus et al., 2019; Kraus et al., 2020). Similarly, the human-robot power asymmetry might be reduced by habituation when assertive robots become an effective and accepted part of our society. This paper represents a first step towards this goal.

7 Conclusion

With the future dissemination of service robots in public and private spaces, human-robot goal conflicts will arise. To negotiate acceptable outcomes and to ensure efficient task execution, it might be feasible to apply assertive robot behaviour under certain circumstances. This study explored different conflict resolution strategies, ranging from polite to assertive, to achieve user compliance and acceptance simultaneously in two application contexts: public and private space. The potential of robotic conflict resolution strategies to increase intended compliance with a robot’s request in an acceptable way was shown. Positive and neutral conflict resolution strategies were acceptable and effective in achieving compliance with a robot’s request and should be explored further. Combining strategies based on cognitive mechanisms with politeness seems especially feasible for both application contexts. Only command (S5) and threat (S6) do not seem worth examining further, as they were neither effective nor accepted. The perceived interpersonal power of the robot influenced the participants’ decision to comply, and trust and politeness were predictive of strategy acceptance. Concluding, if applied context-sensitively and robot-specifically, robotic conflict resolution strategies, as an appropriate expression of robot assertiveness, have the potential to solve human-robot goal conflicts effectively and acceptably. This study represents a first step towards designing conflict resolution strategies for future assertive robots. Future work is needed to determine the factors that render robot assertiveness acceptable for various users, robots and situations.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Ethics Statement

Ethical review and approval was not required for the study on human participants in accordance with the local legislation and institutional requirements. The patients/participants provided their written informed consent to participate in this study.

Written informed consent was obtained from the individual(s) for the publication of any potentially identifiable images or data included in this article.

Author Contributions

All authors were involved in the process. FB led the ideation, conception, study design, data analysis and writing; JK contributed substantially to the study design, data collection, editing and additional writing; MB contributed to the study design, editing and supervision.

Funding

This research has been conducted within the interdisciplinary research project ‘RobotKoop’ which is funded by the German Ministry of Education and Research (Grant Number 16SV7967).

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Supplementary Material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/frobt.2020.591448/full#supplementary-material.

References

Adam, H., and Shirako, A. (2013). Not all anger is created equal: the impact of the expresser’s culture on the social effects of anger in negotiations. J. Appl. Psychol. 98, 785–798. doi:10.1037/a0032387

Albert, S., and Dabbs, J. M. (1970). Physical distance and persuasion. J. Pers. Soc. Psychol. 15, 265–270. doi:10.1037/h0029430

Argyle, M., and Dean, J. (1965). Eye-contact, distance and affiliation. Sociometry, 29, 289–304.

Babel, F., Kraus, J., Miller, L., Kraus, M., Wagner, N., Minker, W., et al. (2021). Small talk with a robot? The impact of dialog content, talk initiative, and gaze behavior of a social robot on trust, acceptance, and proximity. Int. J. Soc. Robot. doi:10.1007/s12369-020-00730-0

Bartneck, C., and Hu, J. (2008). Exploring the abuse of robots. Interact. Stud. Soc. Behav. Commun. Biol. Artif. Syst. 9, 415–433. doi:10.1075/is.9.3.04bar

Bartneck, C., Kulić, D., Croft, E., and Zoghbi, S. (2009). Measurement instruments for the anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety of robots. Int. J. Soc. Robot. 1, 71–81. doi:10.1007/s12369-008-0001-3

Bechade, L., Duplessis, G. D., and Devillers, L. (2016). “Empirical study of humor support in social human-robot interaction”, in Lect. Notes Comput. Sci. (including Subser. Lect. Notes Artif. Intell. Lect. Notes Bioinformatics), 9749. (New York, NY: Springer), 305–316. doi:10.1007/978-3-319-39862-4_28

Berridge, K. C. (2001). Reward learning: reinforcement, incentives, and expectations. Psychol. Learn. Motiv. 40, 223–278. doi:10.1016/S0079-7421(00)80022-5

Betancourt, H. (2004). Attribution-emotion processes in White’s realistic empathy approach to conflict and negotiation. Peace Conflict 10, 369–380. doi:10.1207/s15327949pac10047

Boardman, A. E., Greenberg, D. H., Vining, A. R., and Weimer, D. L. (2017). Cost-benefit analysis: concepts and practice. (Cambridge: Cambridge University Press). doi:10.1080/13876988.2016.1190083

Bradley, M. M., and Lang, P. J. (1994). Measuring emotion: the self-assessment manikin and the semantic differential. J. Behav. Ther. Exp. Psychiatr. 25, 49–59.

Brett, J., and Thompson, L. (2016). Negotiation. Organ. Behav. Human Decis. Process 136, 68–79. doi:10.1016/J.OBHDP.2016.06.003

Brown, P., Levinson, S. C., and Levinson, S. C. (1987). Politeness: Some universals in language usage, vol. 4. (Cambridge, UK: Cambridge University Press).

Cann, A., Calhoun, L. G., and Banks, J. S. (1997). On the role of humor appreciation in interpersonal attraction: it’s no joking matter. Humor Int. J. Humor Res. 10, 77–90.

Castro-González, Á., Castillo, J. C., Alonso-Martín, F., Olortegui-Ortega, O. V., González-Pacheco, V., Malfaz, M., et al. (2016). “The effects of an impolite vs. a polite robot playing rock-paper-scissors”, in Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) 9979 LNAI, 306–316. doi:10.1007/978-3-319-47437-3_30

Chaiken, S. L., Gruenfeld, D. H., and Judd, C. M. (2000). Persuasion in negotiations and conflict situations (San Francisco, CA: Jossey-Bass).

Chidambaram, V., Chiang, Y.-H., and Mutlu, B. (2012). “Designing persuasive robots: how robots might persuade people using vocal and nonverbal cues”, in Proceedings of the seventh annual ACM/IEEE international conference on Human-Robot Interaction, 293–300.

Christenson, A. M., Buchanan, J. A., Houlihan, D., and Wanzek, M. (2011). Command use and compliance in staff communication with elderly residents of long-term care facilities. Behav. Ther. 42, 47–58. doi:10.1016/j.beth.2010.07.001

Cialdini, R. B. (2009). Influence: Science and practice, vol. 4. (Boston, MA: Pearson education).

Cialdini, R. B., and Goldstein, N. J. (2004). Social influence: compliance and conformity. Annu. Rev. Psychol. 55, 591–621. doi:10.1146/annurev.psych.55.090902.142015

Cialdini, R. B., Wosinska, W., Barrett, D. W., Butner, J., and Gornik-Durose, M. (1999). Compliance with a request in two cultures: the differential influence of social proof and commitment/consistency on collectivists and individualists. Pers. Soc. Psychol. Bull. 25, 1242–1253. doi:10.1177/0146167299258006

Cohen, T. R. (2010). Moral emotions and unethical bargaining: the differential effects of empathy and perspective taking in deterring deceitful negotiation. J. Bus. Ethics. 94, 569–579. doi:10.1007/s10551-009-0338-z

Cormier, D., Newman, G., Nakane, M., Young, J. E., and Durocher, S. (2013). “Would you do as a robot commands? an obedience study for human-robot interaction”, in International Conference on Human-agent Interaction.

Costa, P. T., and McCrae, R. R. (1985). NEO five factor inventory 1989.

Da-peng, L., and Jing-hong, W. (2017). “Business negotiation skills based on politeness principle”, in Asia International Symposium on language Literature and Translation, 232.

Danescu-Niculescu-Mizil, C., Sudhof, M., Dan, J., Leskovec, J., and Potts, C. (2013). “A computational approach to politeness with application to social factors”, in Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics 1, 250–259. doi:10.1.1.294.4838

Darling, K., Nandy, P., and Breazeal, C. (2015). Empathic concern and the effect of stories in human-robot interaction. Proc. IEEE Int. Work. Robot Hum. Interact. Commun. 73, 770–775. doi:10.1109/ROMAN.2015.7333675

De Dreu, C. K. W. (2010). “Social conflict: the emergence and consequences of struggle and negotiation”, in Handbook of social psychology. (Amsterdam, Netherlands: University of Amsterdam). doi:10.1002/9780470561119.socpsy002027

De Graaf, M. M., and Allouch, S. B. (2013a). Exploring influencing variables for the acceptance of social robots. Robot. Autonom. Syst. 61, 1476–1486. doi:10.1016/j.robot.2013.07.007

De Graaf, M. M., and Allouch, S. B. (2013b). The relation between people’s attitude and anxiety towards robots in human-robot interaction. Proc. IEEE Int. Work. Robot Hum. Interact. Commun. 628, 632–637. doi:10.1109/ROMAN.2013.6628419

Dillard, J. P. (1991). The current status of research on sequential-request compliance techniques. Pers. Soc. Psychol. Bull. 17, 283–288. doi:10.1177/0146167291173008

Drury, J. L., Scholtz, J., and Yanco, H. A. (2003). “Awareness in human-robot interactions”, in SMC’03 Conference Proceedings. 2003 IEEE International Conference on Systems, man and Cybernetics. Conference Theme-System Security and Assurance (Cat. No. 03CH37483) (IEEE). 1, 912–918.

Fasola, J., and Matarić, M. J. (2009). “Robot motivator: improving user performance on a physical/mental task”, in 2009 4th ACM/IEEE International Conference on Human-robot Interaction (HRI) (IEEE), 295–296.

Fischer, K., Soto, B., Pantofaru, C., and Takayama, L. (2014). “Initiating interactions in order to get help: effects of social framing on people’s responses to robots’ requests for assistance”, in The 23rd IEEE International Symposium on robot and Human Interactive Communication. IEEE, 999–1005.

Fogg, B. J. (2002). Persuasive technology: using computers to change what we think and do (interactive technologies). (Burlington, MA: Morgan Kaufmann). doi:10.1145/764008.763957

Freedman, J. L., and Fraser, S. C. (1966). Compliance without pressure: the foot-in-the-door technique. J. Pers. Soc. Psychol. 4, 195.

Gerpott, F. H., Balliet, D., Columbus, S., Molho, C., and de Vries, R. E. (2018). How do people think about interdependence? A multidimensional model of subjective outcome interdependence. J. Pers. Soc. Psychol. 115, 716–742. doi:10.1037/pspp0000166

Ghazali, A. S., Ham, J., Barakova, E., and Markopoulos, P. (2020). Persuasive robots acceptance model (pram): roles of social responses within the acceptance model of persuasive robots. Int. J. Soc. Robot. 12, 1075–1092. doi:10.1007/s12369-019-00611-1

Ghazali, A. S., Ham, J., Barakova, E., and Markopoulos, P. (2018). The influence of social cues in persuasive social robots on psychological reactance and compliance. Comput. Hum. Behav. 87, 58–65. doi:10.1016/j.chb.2018.05.016

Ghazizadeh, M., Lee, J. D., and Boyle, L. N. (2012). Extending the technology acceptance model to assess automation. Cognit. Technol. Work. 14, 39–49. doi:10.1007/s10111-011-0194-3

Gilbert, P., and Allan, S. (1994). Assertiveness, submissive behaviour and social comparison. Br. J. Clin. Psychol. 33, 295–306.

Gilet, A., Mella, N., Studer, J., and Grühn, D. (2013). Assessing dispositional empathy in adults: a French validation of the interpersonal reactivity index (IRI). Can. J. Behav. Sci. 45, 42–48. doi:10.1037/a0030425

Glick, P., DeMorest, J. A., and Hotze, C. A. (1988). Keeping your distance: group membership, personal space, and requests for small favors 1. J. Appl. Soc. Psychol. 18, 315–330.

Goetz, J., Kiesler, S., and Powers, A. (2003). “Matching robot appearance and behaviour to tasks to improve human-robot cooperation”, in RO-MAN 2003: the 12th IEEE international workshop on robot and human interactive communication, 55–60.

Goldstein, A. P., and Michaels, G. Y. (1985). Empathy: development, training, and consequences. (Mahwah, NJ: Lawrence Erlbaum).

Groom, V., and Nass, C. (2007). Can robots be teammates? Benchmarks in human–robot teams. Interact. Stud. 8, 483–500. doi:10.1075/is.8.3.10gro

Guadagno, R. E. (2014). “Compliance: a classic and contemporary review,” in Oxford Handbook of social influence. Editors S. Harkins, W. Kipling, and J. Burger (Oxford: Oxford University Press), 107–127. doi:10.1093/oxfordhb/9780199859870.013.4

Hall, E. T. (1974). Handbook for proxemic research. (Washington: Society for the Anthropology of Visual Communication).

Ham, J., and Midden, C. J. H. (2014). A persuasive robot to stimulate energy conservation: the influence of positive and negative social feedback and task similarity on energy-consumption behavior. Int. J. Soc. Robot. 6, 163–171. doi:10.1007/s12369-013-0205-z

Hassenzahl, M., Burmester, M., and Koller, F. (2003). “Attrakdiff: Ein fragebogen zur messung wahrgenommener hedonischer und pragmatischer qualität”, in Mensch & Computer 2003. (New York, NY: Springer), 187–196.

Ho, C. C., and MacDorman, K. F. (2017). Measuring the uncanny valley effect: refinements to indices for perceived humanness, attractiveness, and eeriness. Int. J. Soc. Robot. 9, 129–139. doi:10.1007/s12369-016-0380-9

Hüffmeier, J., Freund, P. A., Zerres, A., Backhaus, K., and Hertel, G. (2014). Being Tough or being Nice? A meta-analysis on the impact of hard- and softline strategies in distributive negotiations. J. Manag. 40, 866–892. doi:10.1177/0149206311423788

Inbar, O., and Meyer, J. (2015). “Manners matter: trust in robotic peacekeepers”, in Proceedings of the human factors and Ergonomics society, 2015, 185–189. doi:10.1177/1541931215591038

Janssen, J. B., van der Wal, C. C., Neerincx, M. A., and Looije, R. (2011). “Motivating children to learn arithmetic with an adaptive robot game”, in International conference on social robotics (New York, NY: Springer), 153–162.

Jarrassé, N., Sanguineti, V., and Burdet, E. (2014). Slaves no longer: review on role assignment for human-robot joint motor action. Adapt. Behav. 22, 70–82. doi:10.1177/1059712313481044

Jian, J.-Y., Bisantz, A. M., and Drury, C. G. (2000). Foundations for an empirically determined scale of trust in automated systems. Int. J. Cognit. Ergon. 4, 53–71.

Jost, J., Kirks, T., Chapman, S., and Rinkenauer, G. (2019). “Examining the effects of height, velocity and emotional representation of a social transport robot and human factors in human-robot collaboration”, in Lect. Notes Comput. Sci. (including Subser. Lect. Notes Artif. Intell. Lect. Notes Bioinformatics), 11747 (New York, NY: Springer), 517–526. doi:10.1007/978-3-030-29384-0_31

Kamei, K., Shinozawa, K., Ikeda, T., Utsumi, A., Miyashita, T., and Hagita, N. (2010). “Recommendation from robots in a real-world retail shop”, in International conference on multimodal interfaces and the workshop on machine learning for multimodal interaction, 1–8.

Kirst, L. K. (2011). Investigating the relationship between assertiveness and personality characteristics. B.S. Thesis.

Kobberholm, K. W., Carstens, K. S., Bøg, L. W., Santos, M. H., Ramskov, S., Mohamed, S. A., et al. (2020). “The influence of incremental information presentation on the persuasiveness of a robot”, in HRI 2020: companion of the 2020 ACM/IEEE international conference on human-robot interaction, 302–304.

Kong, D. T., Dirks, K. T., and Ferrin, D. L. (2014). Interpersonal trust within negotiations: meta-analytic evidence, critical contingencies, and directions for future research. Acad. Manag. J. 57, 1235–1255. doi:10.5465/amj.2012.0461

Kraus, J., Scholz, D., Stiegemeier, D., and Baumann, M. (2019). The more you know: trust dynamics and calibration in highly automated driving and the effects of take-overs, system malfunction, and system transparency. Hum. Factors, 62(5), 718–736. doi:10.1177/0018720819853686

Kraus, J., Scholz, D., Messner, E.-M., Messner, M., and Baumann, M. (2020). Scared to trust? Predicting trust in highly automated driving by depressiveness, negative self-evaluations and state anxiety. Front. Psychol. 10, 2917. doi:10.3389/fpsyg.2019.02917

Kurtzberg, T. R., Naquin, C. E., and Belkin, L. Y. (2009). Humor as a relationship-building tool in online negotiations. Int. J. Conflict Manag. 20, 377–397. doi:10.1108/10444060910991075

Lambert, D. (2004). Body language. (London, UK: Harper Collins).

Lanzer, M., Babel, F., Yan, F., Zhang, B., You, F., Wang, J., et al. (2020). “Designing communication strategies of autonomous vehicles with pedestrians: an intercultural study”, in 12th international conference on automotive user interfaces and interactive vehicular applications. Automotive UI ‘20, 10.

Lee, J. J., Knox, B., Baumann, J., Breazeal, C., and DeSteno, D. (2013). Computationally modeling interpersonal trust. Front. Psychol. 4, 893. doi:10.3389/fpsyg.2013.00893

Lee, M. K., Kiesler, S., Forlizzi, J., Srinivasa, S., and Rybski, P. (2010). “Gracefully mitigating breakdowns in robotic services”, in 2010 5th ACM/IEEE international conference on human-robot interaction (HRI), IEEE, 203–210.

Lee, N., Kim, J., Kim, E., and Kwon, O. (2017). The influence of politeness behavior on user compliance with social robots in a healthcare service setting. Int. J. Soc. Robot. 9, 727–743. doi:10.1007/s12369-017-0420-0

Lee, S. A., and Liang, Y. J. (2019). Robotic foot-in-the-door: using sequential-request persuasive strategies in human-robot interaction. Comput. Hum. Behav. 90, 351–356. doi:10.1007/978-981-15-5784-2_1

Lee, Y., Bae, J.-E., Kwak, S. S., and Kim, M.-S. (2011). “The effect of politeness strategy on human-robot collaborative interaction on malfunction of robot vacuum cleaner”, in RSS’11 (Robotics Sci. Syst. Work. Human-Robot Interact).

Ligthart, M., and Truong, K. P. (2015). “Selecting the right robot: influence of user attitude, robot sociability and embodiment on user preferences”, in 2015 24th IEEE international symposium on robot and human interactive communication, RO-MAN (IEEE), 682–687.

Maaravi, Y., Ganzach, Y., and Pazy, A. (2011). Negotiation as a form of persuasion: arguments in first offers. J. Pers. Soc. Psychol. 101, 245. doi:10.1037/a0023331

MacArthur, K. R., Stowers, K., and Hancock, P. (2017). “Human-robot interaction: proximity and speed – slowly back away from the robot!”, in Advances in human factors in robots and unmanned systems (New York, NY: Springer), 365–374.

Martinovski, B., Traum, D., and Marsella, S. (2007). Rejection of empathy in negotiation. Group Decis. Negot. 16, 61–76. doi:10.1007/s10726-006-9032-z

Miller, C. H., Lane, L. T., Deatrick, L. M., Young, A. M., and Potts, K. A. (2007). Psychological reactance and promotional health messages: the effects of controlling language, lexical concreteness, and the restoration of freedom. Hum. Commun. Res. 33, 219–240. doi:10.1111/j.1468-2958.2007.00297.x

Miller, L., Kraus, J., Babel, F., and Baumann, M. (2020). Interrelation of different trust layers in human-robot interaction and effects of user dispositions and state anxiety. [Manuscript submitted for publication]

Milli, S., Hadfield-Menell, D., Dragan, A., and Russell, S. (2017). “Should robots be obedient?” in IJCAI international joint conference on artificial intelligence, 4754–4760.

Mirnig, N., Stadler, S., Stollnberger, G., Giuliani, M., and Tscheligi, M. (2016). “Robot humor: how self-irony and Schadenfreude influence people’s rating of robot likability”, in 2016 25th IEEE international symposium on robot and human interactive communication, RO-MAN, 166–171. doi:10.1109/ROMAN.2016.7745106

Mirnig, N., Stollnberger, G., Giuliani, M., and Tscheligi, M. (2017). Elements of humor: how humans perceive verbal and non-verbal aspects of humorous robot behavior. ACM/IEEE Int. Conf. Human-Robot Interact, 81, 211–212. doi:10.1145/3029798.3038337

Mnookin, R. H., Peppet, S. R., and Tulumello, A. S. (1996). The tension between empathy and assertiveness. Negot. J. 12, 217–230. doi:10.1007/bf02187629

Moshkina, L. (2012). “Improving request compliance through robot affect”, in Proceedings of the twenty-sixth AAAI conference on artificial intelligence, 2031–2037.

Mutlu, B. (2011). Designing embodied cues for dialogue with robots. AI Mag. 32, 17–30. doi:10.1609/aimag.v32i4.2376

Nakamura, Y., Yoshinaga, N., Tanoue, H., Kato, S., Nakamura, S., Aoishi, K., et al. (2017). Development and evaluation of a modified brief assertiveness training for nurses in the workplace: a single-group feasibility study. BMC Nursing. 16, 29. doi:10.1186/s12912-017-0224-4

Niculescu, A., van Dijk, B., Nijholt, A., Li, H., and See, S. L. (2013). Making social robots more attractive: the effects of voice pitch, humor and empathy. Int. J. Soc. Robot. 5, 171–191. doi:10.1007/s12369-012-0171-x

Nomura, T., Kanda, T., Suzuki, T., and Kato, K. (2008). Prediction of human behavior in human–robot interaction using psychological scales for anxiety and negative attitudes toward robots. IEEE Trans. Robot. 24, 442–451.

Nomura, T., and Saeki, K. (2010). Effects of polite behaviors expressed by robots: a psychological experiment in Japan. Int. J. Synth. Emot. (IJSE). 1, 38–52.

Nomura, T., Suzuki, T., Kanda, T., and Kato, K. (2006). Measurement of anxiety toward robots. Proc. - IEEE Int. Work. Robot Hum. Interact. Commun. 46, 372–377. doi:10.1109/ROMAN.2006.314462

Paradeda, R., Ferreira, M. J., Oliveira, R., Martinho, C., and Paiva, A. (2019). “What makes a good robotic advisor? The role of assertiveness in human-robot interaction”, in Lect. Notes Comput. Sci. (including Subser. Lect. Notes Artif. Intell. Lect. Notes Bioinformatics), LNAI, 11876 (New York, NY: Springer), 144–154. doi:10.1007/978-3-030-35888-4_14

Paramasivam, S. (2007). Managing disagreement while managing not to disagree: polite disagreement in negotiation discourse. J. Intercult. Commun. Res. 36, 91–116. doi:10.1016/j.pragma.2012.06.011

Park, H., and Antonioni, D. (2007). Personality, reciprocity, and strength of conflict resolution strategy. J. Res. Pers. 30, 414. doi:10.1016/j.jrp.2006.03.003

Pfafman, T. (2017). “Assertiveness,” in Encyclopedia of personality and individual differences. Editors V. Zeigler-Hill, and T. Shackelford (Berlin: Springer). doi:10.1007/978-3-319-28099-8_1044-1

Phansalkar, S., Edworthy, J., Hellier, E., Seger, D. L., Schedlbauer, A., Avery, A. J., et al. (2010). A review of human factors principles for the design and implementation of medication safety alerts in clinical information systems. J. Am. Med. Inf. Assoc. 17, 493–501. doi:10.1136/jamia.2010.005264

Preuss, M., and van der Wijst, P. (2017). A phase-specific analysis of negotiation styles. J. Bus. Ind. Market. 32, 505–518. doi:10.1108/JBIM-01-2016-0010

Pruitt, D. G. (1983). Strategic choice in negotiation. Am. Behav. Sci. 27, 167–194. doi:10.1177/000276483027002005

Pruitt, D., and Rubin, J. (1986). Social conflict: escalation, stalemate, and resolution. (New York, NY: Random House).

Rahim, M. A. (1983). A measure of styles of handling interpersonal conflict. Acad. Manag. J. 26, 368–376. doi:10.5465/255985

Rahim, M. A. (1992). “Managing conflict in organizations,” in Proc. First Int. Constr. Manag. Conf. Univ. Manchester Inst. Sci. Technol. Editors P. Fenn, and R. Gameson (New York, NY: E & F N Spon), 386–395.

Raven, B. H. (1964). Social influence and power. Tech. Rep. (Los Angeles, CA: University of California, Los Angeles).

Ray, C., Mondada, F., and Siegwart, R. (2008). “What do people expect from robots?”, in 2008 IEEE/RSJ international conference on intelligent robots and systems, 3816–3821. doi:10.1109/IROS.2008.4650714

Reeves, B., and Nass, C. I. (1996). The media equation: how people treat computers, television, and new media like real people and places. (Cambridge, UK: Cambridge University Press).

Robert, L., Alahmad, R., Esterwood, C., Kim, S., You, S., and Zhang, Q. (2020). A review of personality in human–robot interactions. Available at SSRN 3528496 doi:10.2139/ssrn.3528496

Rosenthal-von der Pütten, A. M., Krämer, N. C., and Herrmann, J. (2018). The effects of humanlike and robot-specific affective nonverbal behavior on perception, emotion, and behavior. Int. J. Soc. Robot. 10, 569–582. doi:10.1007/s12369-018-0466-7

Rosenthal-von der Pütten, A. M., Krämer, N. C., Hoffmann, L., Sobieraj, S., and Eimler, S. C. (2013). An experimental study on emotional reactions towards a robot. Int. J. Soc. Robot. 5, 17–34. doi:10.1007/s12369-012-0173-8

Roubroeks, M. A. J., Ham, J. R. C., and Midden, C. J. H. (2010). “The dominant robot: threatening robots cause psychological reactance, especially when they have incongruent goals”, in International conference on persuasive technology (Heidelberg: Springer), 174–184. doi:10.1007/978-3-642-13226-1_18

Salem, M., Lakatos, G., Amirabdollahian, F., and Dautenhahn, K. (2015). “Would you trust a (faulty) robot?”, in Proceedings of the tenth annual ACM/IEEE international conference on human-robot interaction - HRI’15, 141–148. doi:10.1145/2696454.2696497

Salem, M., Ziadee, M., and Sakr, M. (2013). “Effects of politeness and interaction context on perception and experience of HRI”, in International conference on social robotics (Berlin: Springer), 531–541.

Sandoval, E. B., Brandstetter, J., Obaid, M., and Bartneck, C. (2016). Reciprocity in human-robot interaction: a quantitative approach through the prisoner’s dilemma and the ultimatum game. Int. J. Soc. Robot. 8, 303–317.

Saunderson, S., and Nejat, G. (2019). It would make me happy if you used my guess: comparing robot persuasive strategies in social human-robot interaction. IEEE Robot. Autom. Lett. 4, 1707–1714. doi:10.1109/LRA.2019.2897143

Savela, N., Turja, T., and Oksanen, A. (2018). Social acceptance of robots in different occupational fields: a systematic literature review. Int. J. Soc. Robot. 10, 493–502. doi:10.1007/s12369-017-0452-5

Shapiro, D. L., and Bies, R. J. (1994). Threats, bluffs, and disclaimers in negotiations. Organ. Behav. Hum. Decis. Process. 60, 14–35.

Shimada, M., Kanda, T., and Koizumi, S. (2012). “How can a social robot facilitate children’s collaboration?”, in International conference on social robotics (Berlin: Springer), 98–107.

Siegel, M., Breazeal, C., and Norton, M. I. (2009). “Persuasive robotics: the influence of robot gender on human behavior”, in 2009 IEEE/RSJ international conference on Intelligent robots and systems, IROS, 2563–2568. doi:10.1109/IROS.2009.5354116

Sjöbergh, J., and Araki, K. (2009). “Robots make things funnier”, in Lect. Notes Comput. Sci. (including Subser. Lect. Notes Artif. Intell. Lect. Notes Bioinformatics), 5447 LNAI, 306–313. doi:10.1007/978-3-642-00609-8_27

Srinivasan, V., and Takayama, L. (2016). “Help me please: robot politeness strategies for soliciting help from people”, in Proceedings of the 2016 CHI conference on human factors in computing systems - CHI’16, 4945–4955. doi:10.1145/2858036.2858217

Stange, S., and Kopp, S. (2020). “Effects of a social robot’s self-explanations on how humans understand and evaluate its behavior”, in ACM/IEEE Int. Conf. Human-Robot Interact. (Washington, D.C.: IEEE Computer Society), 619–627. doi:10.1145/3319502.3374802

Strait, M., Canning, C., and Scheutz, M. (2014). “Let me tell you! investigating the effects of robot communication strategies in advice-giving situations based on robot appearance, interaction modality and distance”, in Proceedings of the 2014 ACM/IEEE international conference on human-robot interaction, HRI’14, 479–486. doi:10.1145/2559636.2559670

Stuhlmacher, A. F., Gillespie, T. L., and Champagne, M. V. (1998). The impact of time pressure in negotiation: a meta-analysis. Int. J. Conflict Manag. 9, 97–116. doi:10.1108/eb022805

Sundstrom, E., and Altman, I. (1976). Interpersonal relationships and personal space: research review and theoretical model. Hum. Ecol. 4, 47–67.

Sung, J., Grinter, R. E., and Christensen, H. I. (2010). Domestic robot ecology. Int. J. Soc. Robot. 2, 417–429. doi:10.1007/s12369-010-0065-8

Sung, J. Y., Grinter, R. E., Christensen, H. I., and Guo, L. (2008). “Housewives or technophiles? understanding domestic robot owners”, in HRI 2008 - Proc. 3rd ACM/IEEE Int. Conf. Human-Robot Interact. Living with Robot, 129–136. doi:10.1145/1349822.1349840

Tay, B. T., Low, S. C., Ko, K. H., and Park, T. (2016). Types of humor that robots can play. Comput. Hum. Behav. 60, 19–28. doi:10.1016/j.chb.2016.01.042

Thacker, R. A., and Yost, C. A. (2002). Training students to become effective workplace team leaders. Int. J. Team Perform. Manag. 8, 89.

Thomas, J., and Vaughan, R. (2018). “After you: doorway negotiation for human-robot and robot-robot interaction”, in IEEE international conference on intelligent robots and systems, 3387–3394. doi:10.1109/IROS.2018.8594034

Thomas, K. W. (1992). Conflict and conflict management: reflections and update. J. Organ. Behav. 13, 265–274. doi:10.1002/job.4030130307

Thompson, L. L., Wang, J., and Gunia, B. C. (2010). Negotiation. Annu. Rev. Psychol. 61, 491–515. doi:10.1146/annurev.psych.093008.100458

Thorndike, E. L. (1998). Animal intelligence: an experimental study of the associative processes in animals. Am. Psychol. 53, 1125.

Thunberg, S., and Ziemke, T. (2020). “Are people ready for social robots in public spaces?”, in HRI 2020: Companion of the 2020 ACM/IEEE international conference on human-robot interaction. Association for Computing Machinery (ACM), 482–484. doi:10.1145/3371382.3378294

Tversky, A., and Kahneman, D. (1989). “Rational choice and the framing of decisions”, in Multiple criteria decision making and risk analysis using microcomputers (Berlin: Springer), 81–126.

Van Der Laan, J. D., Heino, A., and De Waard, D. (1997). A simple procedure for the assessment of acceptance of advanced transport telematics. Transport. Res. C Emerg. Technol. 5, 1–10.

Vollmer, A.-L. (2018). “Fears of intelligent robots”, in Companion of the 2018 ACM/IEEE International conference on human-robot interaction, 273–274.

Vorauer, J. D., and Claude, S. D. (1998). Perceived versus actual transparency of goals in negotiation. Pers. Soc. Psychol. Bull. 24, 371–385. doi:10.1177/0146167298244004

Walters, M. L., Dautenhahn, K., Woods, S. N., Koay, K. L., Te Boekhorst, R., and Lee, D. (2006). Exploratory studies on social spaces between humans and a mechanical-looking robot. Connect. Sci. 18, 429–439.

Weber, K., Ritschel, H., Aslan, I., Lingenfelser, F., and André, E. (2018). “How to shape the humor of a robot - social behavior adaptation based on reinforcement learning”, in Proceedings of the 20th ACM international conference on multimodal interaction, 154–162. doi:10.1145/3242969.3242976

Wilson, C. P. (1979). Jokes: form, content, use, and function, 16 (New York, NY: Academic Press).

Wilson, K. L., Lizzio, A. J., Whicker, L., Gallois, C., and Price, J. (2003). Effective assertive behavior in the workplace: responding to unfair criticism. J. Appl. Soc. Psychol. 33, 362–395.

Wispé, L. (1987). History of the concept of empathy. Empathy Dev. 19, 17–37.

Wullenkord, R., and Eyssel, F. (2019). Imagine how to behave: the influence of imagined contact on human–robot interaction. Philos. Trans. R. Soc. B 374, 20180038. doi:10.1098/rstb.2018.0038

Wullenkord, R., Fraune, M. R., Eyssel, F., and Šabanović, S. (2016). “Getting in touch: how imagined, actual, and physical contact affect evaluations of robots”, in 2016 25th IEEE international symposium on robot and human interactive communication, RO-MAN (IEEE), 980–985.

Xin, M., and Sharlin, E. (2007). “Playing games with robots - a method for evaluating human-robot interaction”, in Human Interact. and Robot. (Jamaica: Itech Education and Publishing), 522. doi:10.5772/5208

Xu, K., and Lombard, M. (2016). “Media are social actors: expanding the CASA paradigm in the 21st century”, in Annual conference of the international communication association, 1–47.

Yanco, H. A., and Drury, J. (2004). “Classifying human-robot interaction: an updated taxonomy”, in 2004 IEEE Int. Conf. Syst. Man Cybern., 3, 2841–2846.

Young, J. E., Hawkins, R., Sharlin, E., and Igarashi, T. (2009). Toward acceptable domestic robots: applying insights from social psychology. Int. J. Soc. Robot. 1, 95.

Zhu, B., and Kaber, D. (2012). Effects of etiquette strategy on human-robot interaction in a simulated medicine delivery task. Intell. Serv. Robot. 5, 199–210. doi:10.1007/s11370-012-0113-3

Ziefle, M., and Valdez, A. C. (2017). “Domestic robots for homecare: a technology acceptance perspective”, in International conference on human aspects of IT for the aged population, (New York, NY: Springer), 57–74.

Zuluaga, M., and Vaughan, R. (2005). “Reducing spatial interference in robot teams by local-investment aggression”, in 2005 IEEE/RSJ international conference on Intelligent robots and systems. IEEE, 2798–2805.

Keywords: HRI strategies, robot assertiveness, persuasive robots, user compliance, acceptance, trust

Citation: Babel F, Kraus JM and Baumann M (2021) Development and Testing of Psychological Conflict Resolution Strategies for Assertive Robots to Resolve Human–Robot Goal Conflict. Front. Robot. AI 7:591448. doi: 10.3389/frobt.2020.591448

Received: 04 August 2020; Accepted: 14 December 2020;
Published: 26 January 2021.

Edited by:

Chung Hyuk Park, George Washington University, United States

Reviewed by:

Marc Hanheide, University of Lincoln, United Kingdom
Silvia Rossi, University of Naples Federico II, Italy

Copyright © 2021 Babel, Kraus and Baumann. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Franziska Babel, franziska.babel@uni-ulm.de

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.