
ORIGINAL RESEARCH article

Front. Phys., 09 September 2022
Sec. Interdisciplinary Physics
This article is part of the Research Topic Interdisciplinary Approaches to the Structure and Performance of Interdependent Autonomous Human Machine Teams and Systems (A-HMT-S)

Grounding Human Machine Interdependence Through Dependence and Trust Networks: Basic Elements for Extended Sociality

  • Institute of Cognitive Sciences and Technologies, National Research Council of Italy, Rome, Italy

In this paper, we investigate the primitives of collaboration, useful also for conflicting and neutral interactions, in a world populated by both artificial and human agents. In particular, we analyze the dependence network of a set of agents, and we enrich the connections of this network with the beliefs that agents hold regarding the trustworthiness of their interlocutors. Thanks to a structural theory of the kinds of beliefs involved, it is possible not only to answer important questions about the power of agents in a network, but also to understand the dynamical aspects of relational capital. In practice, we are able to define the basic elements of an extended sociality (including human and artificial agents). In future research, we will address autonomy.

1 Introduction

In this paper we develop an analysis that aims to identify the basic elements of social interaction. In particular, we are interested in investigating the primitives of collaboration in a world populated by both artificial and human agents.

Social networks are studied extensively in the social sciences, both from a theoretical and from an empirical point of view [1–3], and investigated in their various facets and uses. These studies have shown how relevant the structure of these networks is to the phenomena that use them, actively or passively (from the transmission of information to that of diseases, etc.). These networks can provide us with interesting characteristics of the collective and social phenomena they represent. For example, [4] shows how the collaboration networks of scientists in biology and medicine "seem to constitute a 'small world' in which the average distance between scientists via a line of intermediate collaborators varies logarithmically with the size of the relevant community" and "it is conjectured that this smallness is a crucial feature of a functional scientific community". Other studies on social networks have tried to characterize subsets by properties and criteria for their definition: for example, the concept of "community" [5].

The primitives of these networks in which we are interested, which are essential both for collaborative behaviors and for neutral or conflicting interactions, serve to determine what we call an "extended sociality", i.e., a sociality extended to artificial agents as well as human agents. For this to be possible, artificial agents must be endowed, like humans, with a capacity akin to a "theory of mind" [6], so as to reason not only about the objective data of reality but also about predictions of the cognitive processing of other agents (in simpler words: the ability to acquire knowledge about other agents' beliefs and desires is also relevant).

In this sense, a criticism must be raised against organization theory, which has not sufficiently reflected on the relevance of beliefs in relational and social capital [7–11]: what transforms a relationship into capital is not simply the structure of the network objectively considered (who is connected with whom and how directly, with the consequent potential benefits for the interlocutors) but also the level of trust [12, 13] that characterizes the links in the network (who trusts whom and how much). Since trust is based on beliefs - including the believed dependence (who needs whom) - it should be clear that relational capital is a form of capital that can be manipulated by manipulating beliefs.

Thanks to a structural theory of the kinds of beliefs involved, it is possible not only to answer important questions about agents' power in the network but also to understand the dynamical aspects of relational capital. In particular, it is possible to evaluate how differences in the beliefs (between trustor and trustee) concerning the dependence between agents allow them to pursue behaviors, both strategic and reactive, with respect to the goals that the different interlocutors want to achieve.

2 Agents and Powers

2.1 Agent’s Definition

Let us consider the theory of intelligent agents and multi-agent systems as the reference field of our analysis, in particular the BDI model of the rational agent [14–17]. In the following we will present our theory in a semi-formal way. The goal is to develop a conceptual and relational apparatus capable of providing, beyond the strictly formal aspects, a rational, convincing and well-defined perspective that can be understood and appropriately translated into a computational model.

We define an agent through its characteristics: a repertoire of actions, a set of mental attitudes (goals, beliefs, intentions, etc.), and an architecture (i.e., the way its characteristics are related to its operation). In particular, let us consider a set of agents:

AGT =def {Ag1, Ag2, …, Agn} (1)

We can associate to each agent Agi ∈ AGT:

BELAgi =def {B1Agi, B2Agi, …, BmAgi} (2)

(a set of beliefs representing what the agent believes to be true in the world);

GOALAgi =def {g1Agi, g2Agi, …, gkAgi} (3)

(a set of goals representing states of the world that the agent wishes to obtain; that is, states of the world that the agent wants to be true);

AZAgi =def {α1Agi, α2Agi, …, αvAgi} (4)

(a set of actions representing the elementary actions that Agi is able to perform and that affect the real world; in general, each action is associated with preconditions - states of the world that guarantee its feasibility - and results, that is, states of the world resulting from its performance);

ΠAgi =def {p1Agi, p2Agi, …, puAgi} (5)

(the Agi's plan library: a set of rules/prescriptions for aggregating agent actions); and

RAgi =def {r1Agi, r2Agi, …, rwAgi} (6)

(a set of resources representing the tools or capacities available to the agent, i.e., its material reserve).

Of course, the same belief, goal, action, plan or resource can belong to different agents (i.e., be shared), unless we introduce intrinsic limits to these notions. For example, for goals we can say that gk could be owned by Agi or by Agj, and we would write gkAgi or gkAgj.

We can say that an agent is able to obtain its own goal gxAgi on its own behalf (at a certain time t, in a certain environmental context c) if it possesses the mental and practical attitudes needed to achieve that goal. In this case we can say that it has the power to achieve the goal gxAgi by applying the plan pxAgi (which can also coincide with a single elementary action).

In general, as usual [12, 13], we define a task τ as a couple

τ =def (α, g) (7)

In practice, we combine the goal g with the action α necessary to obtain g, which may or may not be defined (in fact, indicating the achievement of a state of the world always also implies the application of some action).
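To make the above semi-formal definitions concrete, here is a minimal Python sketch (the class names Agent and Task and the example entries are our own illustrative choices, not part of the paper's formalism): each agent is characterized by its sets BEL, GOAL, AZ, Π and R, and a task is the couple (action, goal), with the action possibly left undefined.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass(frozen=True)
class Task:
    """A task tau = (alpha, g): an action (possibly undefined) and a goal (formula 7)."""
    action: Optional[str]
    goal: str

@dataclass
class Agent:
    """An agent Ag_i characterized by beliefs, goals, actions, plans and resources."""
    name: str
    beliefs: set = field(default_factory=set)    # BEL_Agi
    goals: set = field(default_factory=set)      # GOAL_Agi
    actions: set = field(default_factory=set)    # AZ_Agi
    plans: set = field(default_factory=set)      # PI_Agi
    resources: set = field(default_factory=set)  # R_Agi

# Example: Ag_i wants a report but cannot analyze data; Ag_j can.
ag_i = Agent("Ag_i", goals={"report_done"}, actions={"write_text"})
ag_j = Agent("Ag_j", goals={"data_ready"}, actions={"analyze_data"})
tau = Task(action="analyze_data", goal="report_done")
```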

2.2 Agent’s Powers

Given the above agent's definition, we introduce the operator Pow(Agx, τ, c, t) to indicate the power of Agx to achieve goal g through action α, in a certain context c at a certain time t. This power may or may not exist. In the positive case, we will have:

Pow(Agx, τ, c, t) = true (8)

which means that Agx has the ability (physical and cognitive) and the internal and/or external resources to achieve (or maintain) the state of the world corresponding to the goal g through the (elementary or complex) action (α or p) in the context c at the time t. We can similarly define an operator (lack of power: LoPow) for the case in which it does not have this power:

LoPow(Agx, τ, c, t) =def ¬Pow(Agx, τ, c, t) (9)

As we have just seen, we define the power of an agent with respect to a task τ, that is, with respect to the couple (action, state of the world). In this way we take into account, on the one hand, the fact that in many cases this couple is inseparable, i.e., the achievement of a certain state of the world is expected to be bound to the execution of a certain specific action (α) and to the possession of the resources (r1, …, rn) necessary for its execution. On the other hand, we also take into consideration the case in which it is possible to achieve that state of the world with an action not necessarily defined a priori (in this case the action α in the couple τ would be undefined a priori). In this second case it is possible to attribute that power to the agent if it is able to obtain the indicated state of the world (g) regardless of the foreseeable (or expected) action to be applied (for example, it may be able to take different alternative actions to do this).

In any case, Pow(Agx,τ,c,t) implies that the goal (g) is potentially active for the Agx. It is always in relation to a goal (g) that an Agx has some “Power of/on”.

It is important to emphasize that arguing that Agx has the power to perform a certain task τ means attributing to that agent the possession of certain characteristics and the consequent possibility of exercising certain specific actions. This indicates a high probability of success, but not necessarily the certainty of the desired result. In this regard we introduce a Degree of Ability (DoA), i.e., a number (between 0 and 1) which expresses - given the characteristics possessed by the agent, the state of the world to be achieved and the context in which this takes place - the probability of successfully realizing the task.

So, we can generally say that if Agx has the power Pow(Agx, τ, c, t), then its degree of ability (DoA) exceeds a certain threshold (for example σ) considered adequate to ensure (on a theoretical rather than an experimental basis) the success of the task: in practice, if DoA > σ then the probability of success is high; so:

(Pow(Agx, τ, c, t) = true) ⇒ DoA(Agx, τ, c, t) > σ (10)

where A ⇒ B means A implies B, and σ has a high value in the range (0,1).

In words: if Agx has the power to achieve the goal g then the agent's degree of ability (DoA) is above a defined threshold.

Similarly, we can define the absence of power in the realization of the task τ by introducing a lower threshold (ζ), for which:

(LoPow(Agx, τ, c, t) = true) ⇒ DoA(Agx, τ, c, t) < ζ (11)

In the cases in which ζ < DoA(Agx, τ, c, t) < σ we are uncertain about Agx's power to accomplish the task τ.

We will see later the need to introduce probability thresholds.
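As a minimal sketch of formulas 8–11 (the threshold values and the DoA estimates below are arbitrary illustrations, not values given in the paper), the power operators can be read as a three-valued test on the degree of ability:

```python
SIGMA = 0.8   # upper threshold: DoA above this -> the agent has the power (Pow)
ZETA = 0.2    # lower threshold: DoA below this -> lack of power (LoPow)

def power_status(doa: float) -> str:
    """Map a degree of ability DoA(Ag_x, tau, c, t) to Pow / LoPow / uncertain."""
    if doa > SIGMA:
        return "Pow"        # formula (10)
    if doa < ZETA:
        return "LoPow"      # formula (11)
    return "uncertain"      # zeta <= DoA <= sigma

print(power_status(0.9))  # Pow
print(power_status(0.5))  # uncertain
```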

3 Social Dependence

3.1 From Personal Powers to Social Dependence

Sociality presupposes a “common world”, hence “interference”: the action of one agent can favor (positive interference) or hamper/compromise the goals of another agent (negative interference). Since agents have limited personal powers, and compete for achieving their goals, they need social powers (that is, to have the availability of some of the powers collected from other agents). They also compete for resources (both material and social) and for having the power necessary for their goals.

3.2 Objective Dependence

Let us introduce the relevant concept of objective dependence [20–22]. Given Agi, Agj ∈ AGT; a set of tasks Τ =def {τ1, τ2, …, τl}; a set of contexts Γ =def {c1, c2, …, cn}; and tx the specific time interval x, we can define:

ObjDep(Agi, Agj, τk, ck, tk) =def LoPow(Agi, τk, ck, tk) ∧ Pow(Agj, τk, ck, tk) (12)

where τk ∈ Τ, ck ∈ Γ, and the time interval is tk.

It is the combination of a lack of power (LoPow) of one agent (Agi), relative to one of its own tasks/goals (τk), and the corresponding power (Pow) of another agent (Agj), under certain specific contextual (ck) and temporal (tk) conditions. It is the result of some interference between the two agents. It is "objective" in the sense that it holds independently of the involved agents' awareness/beliefs and wants.

In words: an agent Agi has an Objective Dependence Relationship for a task τk with agent Agj if realizing τk, regardless of Agi's awareness, requires actions, plans and/or resources that are owned by Agj and not owned (or not available, or less convenient to use) by Agi.

More generally, Agi has an Objective Dependence Relationship with Agj if achieving at least one of its tasks τk, with gk ∈ GOALAgi, requires actions, plans and/or resources that are owned by Agj and not owned (or not available, or less convenient to use) by Agi.
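Continuing the illustrative Agent/Task classes sketched in Section 2.1, objective dependence (formula 12) can be checked directly from the agents' repertoires; here, as a simplification of the paper's DoA-based notion, Pow is approximated by the required action belonging to the agent's action set:

```python
def lo_pow(agent: Agent, task: Task) -> bool:
    """LoPow: the agent lacks the power for the task (here: action not in its repertoire)."""
    return task.action not in agent.actions

def pow_of(agent: Agent, task: Task) -> bool:
    """Pow: the agent has the power for the task (here: action in its repertoire)."""
    return task.action in agent.actions

def obj_dep(ag_a: Agent, ag_b: Agent, task: Task) -> bool:
    """ObjDep(Ag_a, Ag_b, tau): Ag_a lacks the power and Ag_b has it (formula 12)."""
    return lo_pow(ag_a, task) and pow_of(ag_b, task)

# Ag_i cannot analyze the data itself, while Ag_j can: Ag_i objectively depends on Ag_j.
print(obj_dep(ag_i, ag_j, tau))  # True
```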

3.3 Awareness as Acquisition or Loss of Powers

Given that, to decide to pursue a goal, a cognitive agent must believe/assume (at least with some degree of certainty) that it has the relevant power (sense of competence, self-confidence, know-how and expertise/skills), the agent does not really have that power if it does not know it has it (Figure 1). Thus, the meta-cognition of agents' internal powers and the awareness of their external resources empower them (enable them to make their "power" usable).

FIGURE 1

FIGURE 1. For Agi to really have the power to accomplish the task τ, it must believe that it possesses that power. This belief is what actually enables its real power to act.

This awareness allows an agent to use this power also for other agents in the networks of dependence: social power (who could depend on it: power relations over others, relational capital, exchanges, collaborations, etc.).

Acquiring power and therefore autonomy (on that dimension) and power over other agents can therefore simply be due to the awareness of this power and not necessarily to the acquisition of external resources or skills and competences (learning): in fact, it is a cognitive power.

3.4 Types of Objective Dependence

A very relevant distinction is the case of a two-way dependence between agents (bilateral dependence). There are two possible kinds of bilateral dependence (to simplify, we make the task coincide with the goal: τk = gk):

- Reciprocal Dependence, in which Agi depends on Agj for its goal g1Agi, and Agj depends on Agi for its own goal g2Agj (with g1 ≠ g2). They need each other's action, but for two different personal goals. This is the basis of a pervasive and fundamental form of human (and possibly artificial) interaction: Social Exchange. In this kind of interaction Agi performs an action useful for/required by Agj for g2Agj, in order to obtain an action by Agj useful for its personal goal g1Agi. Agi and Agj are not co-interested in the fulfillment of the goal of the other.

- Mutual Dependence, in which Agi depends on Agj for its goal gk, and Agj depends on Agi for the same goal gk (both have the goal gk). They have a common goal, and they depend on each other for this shared goal. When this situation is known by Agi and Agj, it becomes the basis of true cooperation. Agi and Agj are co-interested in the success of the goal of the other (instrumental to gk). Agi helps Agj to pursue its own goal, and vice versa. In this condition defecting is not rational; it is self-defeating.

In the case in which an agent Agi depends on more than one other agent, it is possible to identify several typical objective dependence patterns. Just to name a few relevant examples, very interesting are the OR-Dependence, a disjunctive composition of dependence relations, and the AND-dependence, a conjunction of dependence relations.

In the first pattern (OR-Dependence) the agent Agi can potentially achieve its goal through the action of just one of the agents with which it is in that relationship. In the second pattern (AND-dependence) the agent Agi can potentially achieve its goal through the action of all the agents with which it is in that relationship (Agi needs all the other agents in that relationship).
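A small sketch of the two patterns (reusing the illustrative obj_dep predicate above; the extra agent and tasks are invented): OR-dependence holds if at least one of the other agents can supply the missing power, AND-dependence only if every agent in the relation supplies its needed contribution.

```python
def or_dependence(ag_a: Agent, others: list, task: Task) -> bool:
    """OR-dependence: the goal is reachable through the action of at least one other agent."""
    return any(obj_dep(ag_a, other, task) for other in others)

def and_dependence(ag_a: Agent, needed: list) -> bool:
    """AND-dependence: Ag_a needs every agent in the relation (one (agent, task) pair each)."""
    return all(obj_dep(ag_a, other, task) for other, task in needed)

ag_k = Agent("Ag_k", actions={"collect_data"})
print(or_dependence(ag_i, [ag_j, ag_k], tau))                                      # True
print(and_dependence(ag_i, [(ag_j, tau), (ag_k, Task("collect_data", "data"))]))   # True
```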

The Dependence Network determines and predicts partnerships and coalitions formation, competition, cooperation, exchange, functional structure in organizations, rational and effective communication, and negotiation power. Dependence networks are very dynamic and unpredictable. In fact, they change by changing an individual goal; by changing individual resources or skills; by the exit or entrance of a new agent (open world); by acquaintance and awareness (see later); by indirect power acquisition.

3.5 Objective and Subjective Dependence

Objective dependence constitutes the basis of all social interaction, the reason for society; it motivates cooperation in its different kinds. But the objective dependence relationships that are the basis of adaptive social interactions are not enough for predicting them. Subjective dependence is also needed (that is, the dependence relationships that the agents know, or at least believe).

We introduce SubjDepAgi(Agi, Agj, τk, c, t), which represents Agi's point of view with respect to its dependence relationships (for simplicity we neglect time and context). Formally:

SubjDepAgi(Agi, Agj, τk) =def BelAgi(ObjDep(Agi, Agj, τk))
BelAgi(ObjDep(Agi, Agj, τk)) =def BelAgi(LoPow(Agi, τk)) ∧ BelAgi(Pow(Agj, τk)) (13)

where Agi, Agj ∈ AGT, with BelAgi(τk = (αk, gk)) and BelAgi((αk ∈ AZAgj) ∧ (αk ∉ AZAgi) ∧ (gk ∈ GOALAgi)). That is, the dependence relationship, as we introduced it in objective terms, becomes known to the individual agent when it becomes one of its own beliefs.

When we introduce the concept of a subjective view of dependence relationships, as we have just done with SubjDep, we are considering what our agent believes and represents about its own dependence on others. Conversely, we should also analyze what our agent believes about the dependence of the other agents in the network (how it represents the dependencies of other agents). We can therefore formally introduce the formula for each Agi in potential relationship with other agents of the AGT set:

BelAgi(SubjDepAgj(Agj, Agi, τk)) =def BelAgi(BelAgj(LoPow(Agj, τk)) ∧ BelAgj(Pow(Agi, τk))) (14)

where Agi, Agj ∈ AGT, with BelAgi(BelAgj(τk = (αk, gk))) and BelAgi(BelAgj((αk ∈ AZAgi) ∧ (αk ∉ AZAgj) ∧ (gk ∈ GOALAgj))). Summing up, we can say:

1) The objective dependence says who needs whom for what in a given society (even if the agents may ignore it). This dependence already has the power of establishing certain asymmetric relationships in a potential market, and it determines the actual success or failure of reliance and transactions.

2) The subjective (believed) dependence says who is believed to be needed by whom. This dependence is what potentially determines relationships in a real market and settles the negotiation power (see Section 4); but it might be illusory and wrong: one might rely upon unable agents while actually being able to do what is needed autonomously.

If world knowledge were perfect for all the agents, the objective dependence described above would be a common belief (a belief possessed by all agents) about the real state of the world: there would be no distinction between objective and subjective dependence.

In fact, however, the important relationship is the network of dependence believed by each agent. In other words, we cannot simply associate to each agent a set of goals, actions, plans and resources; we must evaluate these sets as believed by each agent (the subjective point of view), also considering that such beliefs may be partial, different from each other, sometimes wrong, held with different degrees and values, and so on. In more practical terms, each agent will have a different (subjective) representation of the dependence network and of its own position in it: it is this subjective view of the world that guides the actions and decisions of the agents.

So, we introduce BelAgi(GOALAgz), which denotes the goal set of Agz as believed by Agi. The same holds for BelAgi(AZAgz), BelAgi(ΠAgz), BelAgi(RAgz), and also for BelAgi(BELAgz). In practice, the dependence relationships should be re-modulated on the basis of the agents' subjective interpretation.

As a first approximation, each agent correctly believes the sets it owns, while it could misrepresent the sets of other agents. In formulas:

BelAgi(GOALAgi) = GOALAgi (15)
BelAgi(AZAgi) = AZAgi (16)
BelAgi(ΠAgi) = ΠAgi (17)
BelAgi(RAgi) = RAgi (18)
BelAgi(BELAgi) = BELAgi (19)
(∀ Agi ∈ AGT).

We define DependenceNetwork(AGT, t, c) as the set of dependence relationships (both subjective and objective) among the agents included in the AGT set (also in this case we neglect time and context):

DependenceNetwork(AGT) =def
ObjDep(Agi, Agj, τk) ∪
⋃i=1..n SubjDepAgi(Agi, Agj, τk) ∪
⋃i=1..n ⋃j=1..m BelAgi(SubjDepAgj(Agj, Agi, τk)) (20)
∀ (Agi, Agj) ∈ AGT

For each couple (Agi, Agj) in ObjDep(Agi, Agj, τk), with τk =def (αk, gk), we have: (gk ∈ GOALAgi) ∧ (αk ∈ AZAgj).

For each couple (Agi, Agj) in SubjDepAgi(Agi, Agj, τk), with BelAgi(τk =def (αk, gk)), we have: BelAgi(gk ∈ GOALAgi) ∧ BelAgi(αk ∈ AZAgj).

For each couple (Agi, Agj) in BelAgi(SubjDepAgj(Agj, Agi, τk)), with BelAgi(BelAgj(τk =def (αk, gk))), we have: BelAgi(BelAgj(gk ∈ GOALAgj) ∧ BelAgj(αk ∈ AZAgi)).

The three relational levels indicated above (objective dependence, subjective dependence, and subjective dependence as believed by others) in the dependence network defined above determine the basic relationships needed to initiate even minimally informed negotiation processes. The only level always present is the objective one (even if whether the agents are aware of it is decisive). The others may or may not be present (and their presence or absence determines different behaviors in the achievement of the goals by the various agents, and consequent successes or failures).
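As a rough illustration of formula 20 (the data structures and example values are our own simplification, not the paper's notation), the three levels of the dependence network can be stored as separate relations: the objective one derived from the agents' actual repertoires, and the two subjective ones recorded as beliefs that may or may not match it.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DepEdge:
    depender: str   # the agent that needs
    dependee: str   # the agent that is needed
    task: str

# Objective dependence relations, computed from the real repertoires (formula 12).
objective = {DepEdge("Ag_i", "Ag_j", "tau_k")}

# Subjective dependence: what each agent believes about its own dependence (formula 13).
subjective = {"Ag_i": {DepEdge("Ag_i", "Ag_j", "tau_k")}}

# Believed subjective dependence: what Ag_i believes Ag_j believes (formula 14).
believed_subjective = {"Ag_i": {"Ag_j": set()}}  # Ag_i thinks Ag_j sees no dependence

def matches_reality(agent: str) -> bool:
    """Formulas 21/22: does the agent's subjective view coincide with ObjDep?"""
    own_objective = {e for e in objective if e.depender == agent}
    return subjective.get(agent, set()) == own_objective

print(matches_reality("Ag_i"))  # True: here Ag_i's beliefs coincide with reality
```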

3.6 Relevant Relationships within a Dependence Network

The dependence network (Formula 20) collecting all the indicated relationships represents a complex articulation of objective situations and subjective points of view of the various agents that are part of it, with respect to the reciprocal powers to obtain tasks. However, it is interesting to investigate the situations of greatest interest within the defined network. Let’s see some of them below.

3.6.1 Comparison Between Agent’s Point of View and Reality

A first consideration concerns the coincidence or otherwise of the subjective points of view of the agents with respect to reality (objective dependence).

That is, given two agents (Agi, Agj) ∈ AGT, the subjective dependence of Agi with respect to Agj for the task τ may or may not coincide with the objective dependence. So, remembering that:

SubjDepAgi(Agi, Agj, τ) =def BelAgi(ObjDep(Agi, Agj, τ)) and calling ObjDepi,j,τ =def ObjDep(Agi, Agj, τ), we can have:

BelAgi(ObjDepi,j,τ) = ObjDepi,j,τ (21)

the subjective dependence believed by Agi with respect to Agj coincides with reality, that is, it is objective; or

BelAgi(ObjDepi,j,τ) ≠ ObjDepi,j,τ (22)

the subjective dependence believed by Agi with respect to Agj does not coincide with reality, that is, it is not objective.

By defining A ↔ B as the comparison between the expressions A and B, the two cases described above (formulas 21, 22) are the result of the following comparison (see Figure 2):

BelAgi(ObjDep(Agi, Agj, τ)) ↔ ObjDep(Agi, Agj, τ) (23)
∀ (Agi, Agj) ∈ AGT

FIGURE 2

FIGURE 2. Dependence of Agi on Agj for the task τ: comparison between how it is believed by Agi and the objective reality.

3.6.2 Comparison Among Points of View of Different Agents

What Agi believes about Agj's potential subjective dependencies (on the various agents in the network, including third-party agents Agk, and on the various tasks in Τ) may or may not coincide with the subjective dependencies actually believed by Agj, where (Agi, Agj, Agk) ∈ AGT.

And vice versa, what Agj believes about Agi's subjective dependence (on the various agents in the network, including third-party agents Agk, and on the various tasks in Τ) may or may not coincide with the subjective dependence of Agi (and of the various third-party agents Agk); furthermore, one can compare these subjective beliefs and dependencies with the objective dependence and verify whether or not they coincide. This gives rise to the following interesting combinations.

Comparison between what Agi believes about the dependence of Agi on Agj (BelAgi(ObjDepi,j,τ)) and what Agj believes about the same dependence (BelAgj(ObjDepi,j,τ)): Agi and Agj can believe the same thing (BelAgi(ObjDepi,j,τ) = BelAgj(ObjDepi,j,τ)), or not (BelAgi(ObjDepi,j,τ) ≠ BelAgj(ObjDepi,j,τ)).

In the first case (BelAgi(ObjDepi,j,τ) = BelAgj(ObjDepi,j,τ)), this shared view may coincide with reality (BelAgi(ObjDepi,j,τ) = BelAgj(ObjDepi,j,τ) = ObjDepi,j,τ), or not (BelAgi(ObjDepi,j,τ) = BelAgj(ObjDepi,j,τ) ≠ ObjDepi,j,τ).

In the second case (BelAgi(ObjDepi,j,τ) ≠ BelAgj(ObjDepi,j,τ)), the point of view of Agi may coincide with reality (BelAgi(ObjDepi,j,τ) = ObjDepi,j,τ), and therefore Agj's point of view does not correspond to reality; or Agj's point of view coincides with reality (BelAgj(ObjDepi,j,τ) = ObjDepi,j,τ), and therefore Agi's point of view does not correspond to reality; or, finally, neither of the two points of view (of Agi and Agj) coincides with reality: BelAgi(ObjDepi,j,τ) ≠ ObjDepi,j,τ and at the same time BelAgj(ObjDepi,j,τ) ≠ ObjDepi,j,τ.

That is, the comparisons are in this case expressed by (see Figure 3):

(BelAgi(ObjDep(Agi, Agj, τ)) ↔ BelAgj(ObjDep(Agi, Agj, τ)))
(BelAgi(ObjDep(Agi, Agj, τ)) ↔ ObjDep(Agi, Agj, τ))
(BelAgj(ObjDep(Agi, Agj, τ)) ↔ ObjDep(Agi, Agj, τ)) (24)
∀ (Agi, Agj) ∈ AGT

FIGURE 3

FIGURE 3. Dependence of Agi on Agj for the task τ. (A) comparison between how it is believed by Agi and by Agj; (B) comparison between how it is believed by Agi and the objective reality; (C) comparison between how it is believed by Agj and the objective reality.

Another case is the comparison between Agj's subjective dependence on Agi for a task τ′ ∈ Τ (BelAgj(ObjDepj,i,τ′)) and what Agi believes about this dependence (BelAgi(ObjDepj,i,τ′)): in this case it is Agj who thinks it depends on Agi. We therefore want to compare this subjective dependence with what the agent to whom it is addressed (i.e., the agent Agi) believes about its content: BelAgi(ObjDepj,i,τ′). Also in this case there can be coincidence (BelAgj(ObjDepj,i,τ′) = BelAgi(ObjDepj,i,τ′)) or not (BelAgj(ObjDepj,i,τ′) ≠ BelAgi(ObjDepj,i,τ′)).

For both of these situations we can further compare the two cases with the objective reality.

In the first case (BelAgj(ObjDepj,i,τ′) = BelAgi(ObjDepj,i,τ′)), we can have coincidence with ObjDepj,i,τ′ (BelAgj(ObjDepj,i,τ′) = BelAgi(ObjDepj,i,τ′) = ObjDepj,i,τ′), or not (BelAgj(ObjDepj,i,τ′) = BelAgi(ObjDepj,i,τ′) ≠ ObjDepj,i,τ′).

In the second case (BelAgj(ObjDepj,i,τ′) ≠ BelAgi(ObjDepj,i,τ′)), the point of view of Agi may coincide with reality (BelAgi(ObjDepj,i,τ′) = ObjDepj,i,τ′), and therefore Agj's point of view does not correspond to reality (BelAgj(ObjDepj,i,τ′) ≠ ObjDepj,i,τ′); or the point of view of Agj coincides with reality (BelAgj(ObjDepj,i,τ′) = ObjDepj,i,τ′), and therefore Agi's point of view does not correspond to reality (BelAgi(ObjDepj,i,τ′) ≠ ObjDepj,i,τ′); or, finally, neither of the two points of view (of Agi and Agj) coincides with reality: BelAgi(ObjDepj,i,τ′) ≠ ObjDepj,i,τ′ and at the same time BelAgj(ObjDepj,i,τ′) ≠ ObjDepj,i,τ′.

That is, the comparisons are in this case expressed by (see Figure 4):

(BelAgi(ObjDep(Agj, Agi, τ′)) ↔ BelAgj(ObjDep(Agj, Agi, τ′)))
(BelAgi(ObjDep(Agj, Agi, τ′)) ↔ ObjDep(Agj, Agi, τ′))
(BelAgj(ObjDep(Agj, Agi, τ′)) ↔ ObjDep(Agj, Agi, τ′)) (25)
∀ (Agi, Agj) ∈ AGT

FIGURE 4

FIGURE 4. Dependence of Agj on Agi for the task τ′. (A) comparison between how it is believed by Agi and by Agj; (B) comparison between how it is believed by Agi and the objective reality; (C) comparison between how it is believed by Agj and the objective reality.

3.6.3 Comparison Among Agents’ Points of View on Others’ Points of View and Reality

Another interesting situation is the comparison between what Agi believes about Agj's subjective dependence on Agi itself, BelAgi(BelAgj(ObjDepj,i,τ′)), and Agj's belief about this dependence, BelAgj(ObjDepj,i,τ′). Also in this case: Agj can believe that it depends on Agi and at the same time Agi can believe the same thing (BelAgj(ObjDepj,i,τ′) = BelAgi(BelAgj(ObjDepj,i,τ′)), i.e., Agi believes that Agj believes that it depends on Agi), or not (BelAgj(ObjDepj,i,τ′) ≠ BelAgi(BelAgj(ObjDepj,i,τ′))).

In the first case (BelAgj(ObjDepj,i,τ′) = BelAgi(BelAgj(ObjDepj,i,τ′))), the situation can coincide with reality (BelAgj(ObjDepj,i,τ′) = BelAgi(BelAgj(ObjDepj,i,τ′)) = ObjDepj,i,τ′), or not (BelAgj(ObjDepj,i,τ′) = BelAgi(BelAgj(ObjDepj,i,τ′)) ≠ ObjDepj,i,τ′).

In the second case (BelAgj(ObjDepj,i,τ′) ≠ BelAgi(BelAgj(ObjDepj,i,τ′))), the point of view of Agj can coincide with reality (BelAgj(ObjDepj,i,τ′) = ObjDepj,i,τ′), and therefore Agi's point of view does not correspond to reality; or Agi's point of view coincides with reality (BelAgi(BelAgj(ObjDepj,i,τ′)) = ObjDepj,i,τ′), and therefore Agj's point of view does not correspond to reality; or, finally, neither of the two points of view (of Agi and Agj) coincides with reality: BelAgj(ObjDepj,i,τ′) ≠ ObjDepj,i,τ′ and at the same time BelAgi(BelAgj(ObjDepj,i,τ′)) ≠ ObjDepj,i,τ′.

That is, the comparisons are in this case expressed by (see Figure 5):

(BelAgi(BelAgj(ObjDep(Agj, Agi, τ′))) ↔ BelAgj(ObjDep(Agj, Agi, τ′)))
(BelAgi(BelAgj(ObjDep(Agj, Agi, τ′))) ↔ ObjDep(Agj, Agi, τ′))
(BelAgj(ObjDep(Agj, Agi, τ′)) ↔ ObjDep(Agj, Agi, τ′)) (26)
∀ (Agi, Agj) ∈ AGT

FIGURE 5

FIGURE 5. Dependence of Agj on Agi for the task τ′. (A) comparison between how it is believed by Agj and how Agi believes it is believed by Agj; (B) comparison between how it is believed by Agj and the objective reality; (C) comparison between how Agi believes it is believed by Agj and the objective reality.

In the reverse situation, the interesting comparison is between Agi's subjective dependence on Agj (BelAgi(ObjDepi,j,τ)) and what Agj believes about this subjective belief of Agi (BelAgj(BelAgi(ObjDepi,j,τ))): Agi may believe that it depends on Agj for the task τ and at the same time Agj may believe that Agi believes this (BelAgi(ObjDepi,j,τ) = BelAgj(BelAgi(ObjDepi,j,τ))), or not (BelAgi(ObjDepi,j,τ) ≠ BelAgj(BelAgi(ObjDepi,j,τ))).

In the first case, this situation may coincide with reality (BelAgi(ObjDepi,j,τ) = BelAgj(BelAgi(ObjDepi,j,τ)) = ObjDepi,j,τ), or not (BelAgi(ObjDepi,j,τ) = BelAgj(BelAgi(ObjDepi,j,τ)) ≠ ObjDepi,j,τ).

In the second case, the point of view of Agi may coincide with reality (BelAgi(ObjDepi,j,τ) = ObjDepi,j,τ), and therefore Agj's point of view does not correspond to reality; or Agj's point of view coincides with reality (BelAgj(BelAgi(ObjDepi,j,τ)) = ObjDepi,j,τ), and therefore Agi's point of view does not correspond to reality; or, finally, neither of the two points of view (of Agi and Agj) coincides with reality: BelAgi(ObjDepi,j,τ) ≠ ObjDepi,j,τ and at the same time BelAgj(BelAgi(ObjDepi,j,τ)) ≠ ObjDepi,j,τ.

That is, the comparisons are in this case expressed by (see Figure 6):

(BelAgj(BelAgi(ObjDep(Agi, Agj, τ))) ↔ BelAgi(ObjDep(Agi, Agj, τ)))
(BelAgj(BelAgi(ObjDep(Agi, Agj, τ))) ↔ ObjDep(Agi, Agj, τ))
(BelAgi(ObjDep(Agi, Agj, τ)) ↔ ObjDep(Agi, Agj, τ)) (27)
∀ (Agi, Agj) ∈ AGT

FIGURE 6

FIGURE 6. Dependence of Agi on Agj for the task τ. (A) comparison between how it is believed by Agi and how Agj believes it is believed by Agi; (B) comparison between how it is believed by Agi and the objective reality; (C) comparison between how Agj believes it is believed by Agi and the objective reality.

3.6.4 More Complex Comparisons

In this case we consider the comparison between the subjective dependence of Agi on Agj (BelAgi(ObjDepi,j,τ)) and what Agj believes about this subjective belief of Agi (BelAgj(BelAgi(ObjDepi,j,τ))), also in relation to what Agj directly believes about this dependence (BelAgj(ObjDepi,j,τ)): Agj may believe that its own belief about ObjDepi,j,τ coincides, or not, with Agi's belief about the same dependence, that is: BelAgj(ObjDepi,j,τ) = BelAgj(BelAgi(ObjDepi,j,τ)) or BelAgj(ObjDepi,j,τ) ≠ BelAgj(BelAgi(ObjDepi,j,τ)).

In both cases the comparison with the real situation is also of interest (see Figure 7):

(BelAgj(BelAgi(ObjDep(Agi, Agj, τ))) ↔ BelAgi(ObjDep(Agi, Agj, τ)))
(BelAgj(BelAgi(ObjDep(Agi, Agj, τ))) ↔ BelAgj(ObjDep(Agi, Agj, τ)))
(BelAgj(BelAgi(ObjDep(Agi, Agj, τ))) ↔ ObjDep(Agi, Agj, τ))
(BelAgj(ObjDep(Agi, Agj, τ)) ↔ ObjDep(Agi, Agj, τ)) (28)
∀ (Agi, Agj) ∈ AGT

FIGURE 7

FIGURE 7. Dependence of Agi on Agj for the task τ. (A) comparison between how it is believed by Agi and how Agj believes it is believed by Agi; (B) comparison between how it is believed by Agj and the objective reality; (C) comparison between how Agj believes it is believed by Agi and how it is believed by Agj; (D) comparison between how Agj believes it is believed by Agi and the objective reality.

This relational schema can also be analyzed from Agj's point of view: Agj can compare what Agi believes and what Agj itself believes about the dependence relationship (ObjDepi,j,τ). Linking these two beliefs (of Agi and Agj about ObjDepi,j,τ) to the dependence that actually holds allows us to highlight many interesting specific cases.

We will see later how the use of the various relationships in the dependency network produces accumulations of “dependency capital” (truthful and/or false) and the phenomena that can result from them.

Finally, we consider the comparison between the subjective dependence of Agj on Agi (BelAgj(ObjDepj,i,τ′)) and what Agi believes about this subjective belief of Agj (BelAgi(BelAgj(ObjDepj,i,τ′))), also in relation to what Agi directly believes about this dependence (BelAgi(ObjDepj,i,τ′)): Agi may believe that its own belief about ObjDepj,i,τ′ coincides, or not, with Agj's belief about the same dependence, namely: BelAgi(ObjDepj,i,τ′) = BelAgi(BelAgj(ObjDepj,i,τ′)) or BelAgi(ObjDepj,i,τ′) ≠ BelAgi(BelAgj(ObjDepj,i,τ′)).

In both cases, the comparisons with the reality are also of interest (see Figure 8):

(BelAgi(BelAgj(ObjDep(Agj, Agi, τ′))) ↔ BelAgj(ObjDep(Agj, Agi, τ′)))
(BelAgi(BelAgj(ObjDep(Agj, Agi, τ′))) ↔ BelAgi(ObjDep(Agj, Agi, τ′)))
(BelAgi(BelAgj(ObjDep(Agj, Agi, τ′))) ↔ ObjDep(Agj, Agi, τ′))
(BelAgi(ObjDep(Agj, Agi, τ′)) ↔ ObjDep(Agj, Agi, τ′)) (29)
∀ (Agi, Agj) ∈ AGT

FIGURE 8

FIGURE 8. Dependence of Agj on Agi for the task τ′. (A) comparison between how it is believed by Agj and how Agi believes it is believed by Agj; (B) comparison between how it is believed by Agi and the objective reality; (C) comparison between how Agi believes it is believed by Agj and how it is believed by Agi; (D) comparison between how Agi believes it is believed by Agj and the objective reality.

3.6.5 Reasoning on the Dependence Network

As can be understood from the very general analyses just shown, the cross-comparison of dependence relationships can involve different ratios, degrees and dimensions. In this sense we must consider that what we have defined as the "power to accomplish a certain task" can refer to different actions (AZ), resources (R) and contexts (Γ), producing complex and interesting situations.

Not only that: we also associated the "power of" (Pow(Agx, τ)) with a degree of ability (DoA(Agx, τ)) above a certain threshold (σ). Precisely for this reason, an interlocutor who is considered to have the "power of" may be believed to have different degrees of skill. Let us see the cases of greatest interest.

Agents may have beliefs about their dependence on other agents in the network, whether or not they match objective reality. This can happen in two main ways:

- In the first, looking at formula 24, we can say that there is some task τ for which Agi does not believe it is dependent on some agent Agj, while at the same time there is (precisely for that task and that agent) an objective dependence relationship. In formulas:

(BelAgi(ObjDep(Agi, Agj, τ)) = false) ∧ (ObjDep(Agi, Agj, τ) = true) (30)

Evaluating how that belief can be denied: given that ObjDep(Agi, Agj, τ) = LoPow(Agi, τ) ∧ Pow(Agj, τ), not believing in that dependence can mean denying one or both of the components that define it, namely:

i) Thinking of having a power that it does not have (BelAgi(Pow(Agi,τ))) while objectively it is LoPow(Agi,τ);

ii) Thinking that Agj does not have that required power (BelAgi(LoPow(Agj,τ))) while objectively (Pow(Agj,τ));

iii) Believing both above as opposed to objective reality.

- In the second case, we can say that there is some task τ for which Agi believes it is dependent on some agent Agj, while at the same time there is no objective dependence relationship (precisely for that task and that agent). In formulas:

(BelAgi(ObjDep(Agi, Agj, τ)) = true) ∧ (ObjDep(Agi, Agj, τ) = false) (31)

Believing in this dependence may mean affirming one or both hypotheses that are denied in reality, namely:

i) Thinking (on the part of Agi) that it does not have a power (BelAgi(LoPow(Agi,τ))) while objectively and potentially it does (Pow(Agi,τ)), that is, it has that power;

ii) Thinking (on Agi's part) that Agj has the required power (BelAgi(Pow(Agj,τ))) while objectively it is (LoPow(Agj,τ));

iii) Believing both above as opposed to objective reality.

Going deeper, the meaning of believing to have, or not to have, the "power" to carry out a certain task τ = (α, g) must be carefully analyzed. In fact, given the definition of τ, we can say that the agent Agi believes it has the power to realize τ if:

BelAgi(τ = (α, g)) (32)

that is, Agi believes that the application of the action α (and the possession of the resources for its execution) produces the state of the world g (with a high probability of success, let’s say above a rather high threshold).

BelAgi(α ∈ AZAgi) (33)

that is, Agi believes it has the action α in its repertoire. And:

BelAgi(g ∈ GOALAgi) (34)

that is, in addition to having the power to obtain the task τ, the Agi agent should also have the state of the world g among the active goals it wants to achieve (we said previously that having the power implies the presence of the goal in potential form). We established (for simplicity) that an agent knows the goals/needs/duties that it possesses, while it may not know the goals of the other agents.

Given the conditions indicated above, there are cases of ignorance with respect to actually existing dependencies or of evaluations of false dependencies. As we have seen above, the beliefs of the agent Agi must also be compared with those of the agent with whom the interaction is being analyzed (Agj). So back to the belief:

(BelAgi(ObjDep(Agi, Agj, τ)) = true) ∨ (BelAgi(ObjDep(Agi, Agj, τ)) = false) (35)

putting it from the point of view of Agj we analogously have:

(BelAgj(ObjDep(Agi, Agj, τ)) = true) ∨ (BelAgj(ObjDep(Agi, Agj, τ)) = false) (36)

The divergence or convergence of the beliefs of the two agents (Agi, Agj) on the dependence of Agi with respect to Agj can be completely insignificant. What matters for the pursuit of the task and for its eventual success is what Agi believes and whether what it believes is also true in reality ObjDep(Agi,Agj,τ).

Another interesting analysis concerns agents' possibly fallacious beliefs about other agents' dependence on them, compared with the objective reality.

That is, Agi may believe that Agj is dependent on it, or not. And this may or may not coincide with reality. There are four possible combinations:

(BelAgi(ObjDep(Agj, Agi, τ)) = true) ∧ (ObjDep(Agj, Agi, τ) = true) (37)
(BelAgi(ObjDep(Agj, Agi, τ)) = true) ∧ (ObjDep(Agj, Agi, τ) = false) (38)
(BelAgi(ObjDep(Agj, Agi, τ)) = false) ∧ (ObjDep(Agj, Agi, τ) = true) (39)
(BelAgi(ObjDep(Agj, Agi, τ)) = false) ∧ (ObjDep(Agj, Agi, τ) = false) (40)

As we have seen, the belief of dependence implies the attribution of powers and lacks of power (and the denial of a dependence belief in turn determines similar, inverted attributions). Compared to the previous case, here it is possible not to know the goals of the interlocutor, and therefore also to be mistaken about these goals: for example, considering that g ∈ GOALAgj (or g ∈ GOALAgi) while instead the opposite is true, thereby introducing an attribution error.

An interesting point is that there are cases where one can believe that another agent has no power to achieve a task not because of its inability to perform an action (or its lack of resources for that execution) but because the task's goal is not included among its goals.

4 Dependence and Negotiation Power

Given a Dependence Network (DN, see formula 20) and an agent in this network (Agi ∈ AGT), if Agi has to achieve the task τsAgi (from here on τs), we can consider as its interlocutors the m agents included in the set of Potential Solvers (PS), in practice the ones that have the power to achieve τs:

PS(Agi, τs) =def ⋃v=1..m Agv ∈ AGT | (Pow(Agv, τs) = true) (41)

The same Agi (if it has the appropriate skills) could be included among these agents.

We define the Objective Potential for Negotiation of Agi ∈ AGT about one of its own tasks τs - and call it OPN(Agi, τs) - as the following function:

OPN(Agi, τs) =def ΣAgl∈PS(Agi,τs) [ObjDep(Agl, Agi, τk) * DoA(Agl, τs) * DoA(Agi, τk)] / (1 + psl) (42)

So the agents Agl are all included in PS(Agi, τs), and they depend on Agi for one of their own tasks (τkAgl, from here on τk). Recall that if ObjDep(Agl, Agi, τk) is true, then Pow(Agi, τk) is also true; Agi and Agl can therefore balance their negotiating potential. We establish by convention that ObjDep(Agl, Agi, τk) is equal to 1 if it is true and 0 if it is false. In addition, the negotiation potential OPN is weighted by the respective abilities of Agl and Agi to realize their respective tasks: DoA(Agl, τs) and DoA(Agi, τk).

In words, m represents the number of agents (Agl) who can carry out the task τs and at the same time have tasks to perform that are potentially achievable by the agent Agi. This dependence relation should be either reciprocal (the tasks under negotiation are τkAgl and τsAgi) or mutual (the tasks under negotiation are τsAgl and τsAgi): more specifically, there should be an action, plan, or resource owned by Agi that is necessary for Agl to obtain τkAgl (possibly coincident with τsAgl) and at the same time there should be an action, plan, or resource owned by Agl that is necessary for Agi to obtain τsAgi (possibly coincident with τsAgl).

psl is the number of agents in AGT who need Agl for a different task (τq) in competition with Agi's request (in the same context and at the same time, while being able to offer Agl help on one of its own tasks in return). We consider that these parallel requests reduce availability, since our agent Agl has to respond to multiple requests (psl + 1) at the same time.

We can therefore say that every other agent in Agi’s network of dependence (either reciprocal or mutual) contributes to OPN(Agi,τs) with a value between (DoA(Agl,τs)*DoA(Agi,τk)) and (DoA(Agl,τs)*DoA(Agi,τk))/(1 + psl). We have therefore, to simplify, considered that the contribution to the negotiation potential is the same for each agent in reciprocal or mutual dependence with our agent Agi (with the same number of other psl contenders).
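A minimal numerical sketch of formula 42 (the DoA values and the competitor counts psl below are invented for illustration): each partner in reciprocal or mutual dependence contributes the product of the two abilities, discounted by its parallel requests.

```python
def opn(partners: list) -> float:
    """
    Objective Potential for Negotiation (formula 42). Each partner Ag_l is described by:
      obj_dep : 1 if ObjDep(Ag_l, Ag_i, tau_k) holds, else 0
      doa_l   : DoA(Ag_l, tau_s), Ag_l's ability on Ag_i's task
      doa_i   : DoA(Ag_i, tau_k), Ag_i's ability on Ag_l's task
      ps      : number of competing parallel requests addressed to Ag_l
    """
    return sum(p["obj_dep"] * p["doa_l"] * p["doa_i"] / (1 + p["ps"]) for p in partners)

partners = [
    {"obj_dep": 1, "doa_l": 0.9, "doa_i": 0.8, "ps": 0},  # contributes 0.72
    {"obj_dep": 1, "doa_l": 0.7, "doa_i": 0.9, "ps": 2},  # contributes 0.21
    {"obj_dep": 0, "doa_l": 0.9, "doa_i": 0.9, "ps": 0},  # no dependence: contributes 0
]
print(round(opn(partners), 2))  # 0.93
```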

If we indicate with PSD all the agents included in PS with objective dependence equal to 1, that is:

PSD(Agi, τs) =def ⋃v=1..m Agv ∈ AGT | (Pow(Agv, τs) = true) ∧ (ObjDep(Agv, Agi, τk) = 1) (43)

we can say that:

0 < OPN(Agi, τs) ≤ Card(PSD) (44)

In Figure 9 we represent the objective dependence of Agi. Considering the areas of the spaces A, B and C as proportional to the number of agents they represent, we can say that: A represents the set of agents (Agv) who depend on Agi for some task of theirs (τkAgv, from here on τk); B represents the set of agents on which Agi depends for achieving the task τs (B = PS(Agi, τs)), i.e., all the agents Agv who are able to achieve the goal gs through some action αs. The intersection of A and B (dashed part C) is the subset of PS(Agi, τs) with whom Agi could potentially negotiate to achieve τs (C = PSD(Agi, τs)). The greater the overlap, the greater the negotiation power of Agi in that context.

FIGURE 9

FIGURE 9. Area A is proportional to the number of agents that depend on Agi for τk; area B is proportional to the number of agents on which Agi depends for τs; area C is the intersection of A and B.

However, as we have seen above, the negotiation power of Agi also depends on the possible alternatives (psl) that its potential partners (Agv) have: the fewer alternatives to Agi they have, the greater its negotiation power (see below). Not only that: the power of negotiation should also take into account the abilities of the agents in carrying out their respective tasks (DoA(Agl, τs) * DoA(Agi, τk)).

What we have just described is the objective potential for negotiation of the agents. But, as we have seen in the previous paragraphs, the operational role of dependence is established by the agents being aware of (or at least believing in) such dependence.

We now want to consider the set of agents with whom Agi can negotiate to get its own task (τs) achieved. This set, called the Real set of Agents for Negotiation (RAN), includes all the agents that believe they are able to achieve that task (τs) and at the same time believe they are dependent on Agi for one of their own tasks (τk). At the same time, Agi must also be aware of Agv's potential:

RAN(Agi, τs) =def ⋃v=1..m (Agv ∈ AGT) | BelAgv(Pow(Agv, τs) = true) ∧ BelAgv(ObjDep(Agv, Agi, τk) = 1) ∧ BelAgi(Pow(Agv, τs) = true) ∧ BelAgi(ObjDep(Agv, Agi, τk) = 1) (45)

We also define the Real Objective Potential for Negotiation (ROPN) of Agi ∈ AGT about one of its own tasks τs as the following function:

ROPN(Agi, τs) =def ΣAgl∈RAN(Agi,τs) [ObjDep(Agl, Agi, τk) * DoA(Agl, τs) * DoA(Agi, τk)] / (1 + psl) (46)

As can be seen, ROPN, like OPN, depends on the objective dependence of the selected agents. In this case, however, the selection is based on the beliefs of the two interacting agents. We have:

0 < ROPN(Agi, τs) ≤ Card(RAN) (47)

We have referred above to the dependence relations believed by Agi and Agv (not necessarily true in the world). This is sufficient to define RAN(Agi, τs) and, therefore, ROPN(Agi, τs), which determine the actions of Agi and Agv in the negotiation.

Analogously, we can interpret Figure 9 as the set of believed relationships by the agents.

In case Agi has to carry out the task τs and does not have the power to do it by itself, it can be useful to evaluate the list of agents in the set RAN(Agi, τs) who have negotiating power with Agi, ordered by the amount of commitment available: that is, on the basis of its beliefs, Agi will be able to rank the potential negotiation partners in direct order of the ability values it attributes to Agl for the accomplishment of the task (DoA(Agl, τs)), and in inverse order of the number of parallel competitors (see ROPN(Agi, τs); a small ranking sketch is given at the end of this section). Obviously, other criteria can be added for selecting the agent to choose. For example:

- based on the reciprocal task to be performed: the most relevant, the most pleasing, the cheapest, the simplest, and so on.

- based on the agent with whom it is preferred to enter into a relationship: usefulness, friendship, etc.

- based on the trustworthiness of the other agent with respect to the task delegated to it.

This last point leads us to the next paragraph.
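Before moving to the role of trust, here is a small sketch of the ranking criterion described above (candidate names, abilities and competitor counts are invented): candidates in RAN(Agi, τs) are ordered by decreasing believed ability and, for equal ability, by increasing number of parallel competitors.

```python
def rank_candidates(candidates: list) -> list:
    """
    Rank potential negotiation partners in RAN(Ag_i, tau_s):
    descending believed ability DoA(Ag_l, tau_s), then ascending number of
    parallel competitors ps_l.
    """
    return [c["name"] for c in sorted(candidates, key=lambda c: (-c["doa"], c["ps"]))]

candidates = [
    {"name": "Ag_1", "doa": 0.9, "ps": 3},
    {"name": "Ag_2", "doa": 0.9, "ps": 0},
    {"name": "Ag_3", "doa": 0.6, "ps": 0},
]
print(rank_candidates(candidates))  # ['Ag_2', 'Ag_1', 'Ag_3']
```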

5 The Trust Role in Dependence Networks

Let us introduce trust relationships into the dependence network. In fact, although it is important to consider the dependence relationships between agents in a society, there will be no exchange in the market if there is no trust to reinforce these connections. Considering the analogy with Figure 9, we now have the representation given in Figure 10 (where we have introduced the rectangle that represents the trustworthy agents with respect to the task τs).

FIGURE 10

FIGURE 10. The rectangle introduced with respect to Figure 9 represents the trustworthy agents with respect to τs.

The potential agents for negotiation are the ones in the dashed part D: they are trustworthy on the task τs for which Agi depends on them, and they themselves depend on Agi for another task of theirs.

Part E includes agents who are considered trustworthy by Agi on the task τs for which Agi depends on them, but who do not depend on Agi for their own tasks. For parts B and C the definitions given for Figure 9 still hold.

Therefore, not only does the decision to trust presuppose a belief of being dependent, but a dependence belief in turn implies a piece of trust. In fact, to believe oneself dependent means: BelAgi(LoPow(Agi, τs) = true) and BelAgi(Pow(Agv, τs) = true), with τs = (αs, gs). In terms of basic beliefs:

- (B1Agi) believing (by Agi) not to be able to perform action αs and, therefore, not to be able to achieve goal gs; and

- (B2Agi) believing (by Agi) that Agv is able and in a condition to achieve gs through the performance of the action αs.

Notice that B2Agi is precisely one component of the trust concept in our analysis [12, 13]: the positive evaluation of Agv as competent, able, skilled, and so on. However, the other fundamental component of trust as evaluation is still missing, namely reliability/trustworthiness: that Agv really intends to do it, is persistent, loyal, benevolent, etc., and thus will really do what Agi needs.

So, starting from the objective dependence of the agents, we must include the motivational aspects. In particular, we have a new set of interesting agents, called Potential Trustworthy Solvers (PTS):

PTS(Agi, τs) =def ⋃v=1..m Agv ∈ AGT | (Pow(Agv, τs) = true) ∧ (Mot(Agv, τs) = true) (48)

where Mot(Agv, τsAgi) means that the agent Agv is motivated to carry out the task τsAgi. Recall that in the case of skills (evaluated through the Pow function) reference was made to the degree of ability (DoA). Analogously, in the case of motivations (Mot) we consider that an agent has sufficient motivation if its degree of motivation/willingness (DoW) is above a given threshold (η).

(Mot(Agv, τs) = true) ⇒ DoW(Agv, τs) > η (49)

where η has a high value in the range (0,1).

For Agv to be successful in the task τsAgi, it is therefore necessary that both conditions are met:

(DoA(Agv, τs) > σ) ∧ (DoW(Agv, τs) > η) (50)

We must now move from the objective value of PTS to what Agi believes about it (Potential Trustworthy Solvers (PTS) believed by Agi):

BelAgi(PTS(Agi, τs)) =def ⋃v=1..m Agv ∈ AGT | BelAgi(Pow(Agv, τs) = true) ∧ BelAgi(Mot(Agv, τs) = true) (51)

In fact, BelAgi(PTS(Agi, τs)) returns the list of agents who are believed by Agi to be trustworthy for the specified task (i.e., both capable and willing).

One of the main reasons why Agv may be motivated (i.e., DoW(Agv, τs) > η) is its dependence on Agi with respect to a task of Agv itself (τkAgv), and thus the possibility of a successful negotiation between the agents.

So, an interesting case is when:

Mot(Agv, τs) =def BelAgv(ObjDep(Agv, Agi, τk) = true) ∧ BelAgv(Mot(Agi, τk) = true) (52)

That is, Agv's motivation to carry out the task τs for Agi (DoW(Agv, τs) > η) is linked to the fact that Agv believes it depends on Agi with respect to the task τk and similarly believes that Agi is capable of and motivated to accomplish that task.

We have therefore defined the belief conditions of the two agents (Agi, Agv) in interaction so that they can negotiate and start a collaboration in which each one can achieve its own goal. These conditions show the need to be in the presence not only of bilateral dependence of Agi and Agv but also of their bilateral trust.
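A compact sketch of formulas 48–51 (thresholds and degrees are invented for illustration): an agent is a potential trustworthy solver if both its degree of ability and its degree of willingness exceed the respective thresholds; the believed version simply replaces the objective degrees with those believed by Agi, and the two sets can diverge.

```python
SIGMA = 0.8  # ability threshold (sigma)
ETA = 0.7    # willingness/motivation threshold (eta)

def trustworthy_solver(doa: float, dow: float) -> bool:
    """Formula 50: success requires DoA > sigma and DoW > eta."""
    return doa > SIGMA and dow > ETA

# Objective degrees (DoA, DoW) vs. the degrees Ag_i believes the candidates to have.
objective_degrees = {"Ag_v": (0.9, 0.8), "Ag_w": (0.9, 0.3)}
believed_by_ag_i = {"Ag_v": (0.6, 0.8), "Ag_w": (0.9, 0.9)}

pts = {a for a, (doa, dow) in objective_degrees.items() if trustworthy_solver(doa, dow)}
believed_pts = {a for a, (doa, dow) in believed_by_ag_i.items() if trustworthy_solver(doa, dow)}

print(pts)           # {'Ag_v'} : objectively trustworthy solvers (formula 48)
print(believed_pts)  # {'Ag_w'} : solvers Ag_i believes to be trustworthy (formula 51)
```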

5.1 The Point of View of the Trustee: Towards Trust Capital

Let us now explicitly recall the cognitive ingredients of trust and reformulate them from the point of view of the trusted agent [23]. In order to do this, it is necessary to limit the set of trusted entities. It has in fact been argued that trust is a mental attitude, a decision and a behavior that only a cognitive agent endowed with both goals and beliefs can have, make and perform. But it has also been underlined that the entity that is trusted is not necessarily a cognitive agent. When a cognitive agent trusts another cognitive agent, we talk about social trust. As we have seen, the set of actions, plans and resources owned by/available to an agent can be useful for achieving a set of tasks (τ1, …, τr).

We take now the point of view of the trustee agent in the dependence network: so, we present a cognitive theory of trust as a capital, which is, in our view, a good starting point to include this concept in the issue of negotiation power. That is to say what really matters are not the skills and intentions declared by the owner, but those actually believed by the other agents. In other words, it is on the trustworthiness perceived by other agents that our agent’s real negotiating power is based.

We call Objective Trust Capital (OTC) of Agi ∈ AGT about a generic task τs the function:

OTC(Agi, τs) =def ΣAgv∈AGT BelAgv(DoA(Agi, τs) * DoW(Agi, τs)) (53)

With

0 ≤ OTC(Agi, τs) ≤ Card(AGT) (54)

We can therefore determine, on the basis of OTC, the set of agents in Agi's DN that potentially consider Agi reliable for the task τs. If we call this set Potential Objective Trustors (POT), we can write:

POT(Agi, τs) =def ⋃v=1..m Agv ∈ AGT | BelAgv(DoA(Agi, τs) > σ) ∧ BelAgv(DoW(Agi, τs) > η) (55)

We speak of a "generic task" because the goal gs is not necessarily included in GOALAgi; it indicates a task for which Agi could be considered trustworthy in its implementation. In other words, Agi would be able to carry out that task, having the possibility of mobilizing (i.e., possessing) the skills, competences and intentionality suitable for the task itself.

As shown in [13], we call the Degree of Trust of the agent Agv in the agent Agi about the task τs:

DoT(Agv, Agi, τs) =def BelAgv(DoA(Agi, τs) * DoW(Agi, τs)) (56)

We call the Subjective Trust Capital (STC) of Agi ∈ AGT about a generic task τs the function:

STC(Agi, τs) =def ΣAgv∈AGT BelAgi(BelAgv(DoA(Agi, τs) * DoW(Agi, τs))) (57)

In words, the cumulated trust capital of an agent Agi with respect to a task τs is the sum (over all the agents in Agi's dependence network) of the potential abilities and willingness attributed to Agi on the task τs by each dependent agent. The subjectivity consists in the fact that both the dependence network and the attributed potential abilities and willingness are believed by (seen from the point of view of) the agent Agi.

We can therefore determine, on the basis of STC, the set of agents in Agi's DN which Agi believes may be potential trustors of Agi itself for the task τs. If we call this set Potential Believed Trustors (PBT), we can write:

PBT(Agi, τs) =def ⋃v=1..m Agv ∈ AGT | BelAgi(BelAgv(DoA(Agi, τs) > σ)) ∧ BelAgi(BelAgv(DoW(Agi, τs) > η)) (58)

We call the Believed Degree of Trust (BDoT) of the agent Agv in the agent Agi, as believed by the agent Agi, about the task τs:

BDoT(Agv, Agi, τs) =def BelAgi(BelAgv(DoA(Agi, τs) * DoW(Agi, τs))) (59)

In the same way, we can also define the Self-Trust (ST) of the agent Agi about the task τs. We can write:

ST(Agi, τs) =def BelAgi(DoA(Agi, τs) * DoW(Agi, τs)) (60)

From the comparison between OTC(Agi, τs), STC(Agi, τs), DoT(Agv, Agi, τs) and ST(Agi, τs), a set of interesting actions and decisions could be taken by the agents (as we will see in the next paragraphs).
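A numerical sketch of formulas 53–60 (all degrees are invented): the objective trust capital sums, over the other agents, the ability and willingness they actually attribute to Agi, while the subjective trust capital uses the attributions Agi believes they make.

```python
# What each other agent actually believes about Ag_i on task tau_s: (DoA, DoW).
actual_attributions = {"Ag_1": (0.9, 0.8), "Ag_2": (0.5, 0.9)}
# What Ag_i believes the others believe about it.
believed_attributions = {"Ag_1": (0.9, 0.9), "Ag_2": (0.2, 0.9)}

def trust_capital(attributions: dict) -> float:
    """Sum over the other agents of DoA * DoW attributed to Ag_i (formulas 53 and 57)."""
    return sum(doa * dow for doa, dow in attributions.values())

otc = trust_capital(actual_attributions)      # Objective Trust Capital, formula 53
stc = trust_capital(believed_attributions)    # Subjective Trust Capital, formula 57
dot_ag1 = 0.9 * 0.8                           # DoT(Ag_1, Ag_i, tau_s), formula 56
self_trust = 0.8 * 0.9                        # ST(Ag_i, tau_s), formula 60 (own estimate)

print(round(otc, 2), round(stc, 2))  # 1.17 0.99 -> here Ag_i underestimates its capital
```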

6 Dynamics of Relational Capital

An important consideration is that a dependence network is mainly based on the sets of actions, plans and resources owned by the agents and necessary for achieving the agents' goals (we considered a set of tasks each agent is able to achieve). The dependence network is therefore closely tied to the dynamics of these sets (actions, plans, resources, goals), i.e., to their modification over time; in particular, to the dynamics of the agents' goals and their variations (the emergence of new ones, the disappearance of old ones, the increasing request for a subset of them, and so on). On this basis the role and relevance of each agent in the dependence network changes, and in fact the trust capital of the agents changes.

Concerning the dynamical aspects of this kind of capital, it is possible to make hypotheses on how it can increase or be wasted, depending on how each of the basic beliefs involved in trust is manipulated. In the following, let us consider what kinds of strategies Agi can perform to strengthen the other agents' dependence beliefs and their beliefs about Agi's competence/motivation.

6.1 Reducing Agl’s Power

Agi can make the other agent (Agl) dependent on it by depriving the other of some resource or skill (or at least inducing the other to believe so).

We can say that there is at least one action (αAgi) in Agi's action library which, if carried out by Agi, leads Agl to believe that it is no longer able to obtain τs on its own (whether this belief is true or false is not important). In practice:

Do(Agi, αAgi) ⇒ (BelAgl(LoPow(Agl, τs) = true)) (61)

where A ⇒ B means that A implies B. And at the same time:

BelAgl(Pow(Agi, τs) = true) ∧ BelAgl(Mot(Agi, τs) = true) (62)

So:

Do(Agi, αAgi) ∧ BelAgl(Pow(Agi, τs) = true) ∧ BelAgl(Mot(Agi, τs) = true) ⇒ BelAgl(ObjDep(Agl, Agi, τs) = true) (63)

6.2 Inducing Goals in Agl

Agi can make Agl dependent on it by activating or inducing in Agl a new goal (need, desire) on which Agl is not autonomous (or believes so): effectively introducing a new bond of dependence.

We can say that there is at least one action (αAgi) in Agi's action library which, if carried out by Agi, generates (directly or indirectly) a goal gk (up to that moment not present) of Agl, for which Agl believes itself to be dependent on Agi (whether this belief is true or false is not important). In practice:

Do(Agi,αAgi)(gkGoalAgl)(64)

And at the same time is true:

BelAgl(ObjDep(Agl,Agi,τk)=true)(65)
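A similarly hedged sketch of formulas (64)-(65): Agi's action adds a new goal to Agl's goal set together with a believed dependence on Agi for the related task. The dictionary-based representation and the function induce_goal are hypothetical.

```python
# Hedged sketch (not from the paper) of formulas (64)-(65).
agl = {"goals": {"g1"}, "believed_dependence": {}}   # Agl's state before Agi's action

def induce_goal(agl_state: dict, new_goal: str, task: str, on_agent: str) -> dict:
    """Formula (64): g_k enters Agl's goal set; formula (65): Agl believes it depends
    on Agi for the task tau_k that serves g_k (the belief is what counts, true or not)."""
    agl_state["goals"].add(new_goal)
    agl_state["believed_dependence"][task] = on_agent
    return agl_state

print(induce_goal(agl, new_goal="g_k", task="tau_k", on_agent="Agi"))
```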

6.3 Reducing Other Agents’ Competition

Agi could work to reduce the value of ability/motivation that Agl attributes to each of Agi's possible competitors (pkl in number) on that specific task τk.

We can say that there are actions (αAgi) of Agi that make Agl believe itself to be less dependent on Agi's competitors (for the task τs), as they (Agz) are believed to be less capable or less motivated:

$Do(Ag_i,\alpha_{Ag_i}) \Rightarrow Bel_{Ag_l}(LoPow(Ag_z,\tau_s)=true) \vee Bel_{Ag_l}(Mot(Ag_z,\tau_s)=false)$ (66)

In practice, the application of the action αAgi reduces the number of agents potentially able to negotiate with Agl (RAN, formula 45) and therefore its ROPN(Agl, τk) value (formula 46). Similarly, by influencing the motivations of the other agents (Agz), the action αAgi can affect the number of trustees with whom Agl negotiates (PTS(Agl, τk), formula 48) and therefore PBT(Agl, τk) (formula 58).
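The effect can be sketched as a simple filtering step: lowering the ability/motivation that Agl attributes to the competitors shrinks the set of trustees Agl would consider, in the spirit of formula (66) and of the RAN/PBT counts mentioned above. The following Python fragment is only an illustration; thresholds and names are hypothetical.

```python
# Hedged sketch (not from the paper): discrediting competitors shrinks the set of
# trustees Agl would consider for tau_s.

def credible_trustees(believed: dict[str, tuple[float, float]],
                      sigma: float, eta: float) -> set[str]:
    """Agents whose believed ability and motivation both exceed Agl's thresholds."""
    return {ag for ag, (doa, dow) in believed.items() if doa > sigma and dow > eta}

# Agl's beliefs about candidate trustees (DoA, DoW) before Agi's action alpha.
before = {"Agi": (0.8, 0.8), "Agz1": (0.7, 0.9), "Agz2": (0.6, 0.7)}
# After alpha, Agl believes the competitors Agz1 and Agz2 are less capable/motivated.
after = {"Agi": (0.8, 0.8), "Agz1": (0.4, 0.9), "Agz2": (0.6, 0.3)}

print(credible_trustees(before, 0.5, 0.5))  # all three candidates (set order may vary)
print(credible_trustees(after, 0.5, 0.5))   # only {'Agi'} survives
```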

In the two cases indicated above (§6.1 and §6.2) the effects on Agl's beliefs could derive not from an action of Agi but from other causes produced in the world (by third-party agents, by Agl itself, or by environmental changes).

6.4 Increasing its Own Features

Competition with other agents can also be reduced by inducing Agl to believe that Agi is more capable and motivated. We can say that there are actions (αAgi) of Agi that make Agl believe that Agi's degrees of ability and motivation have increased:

$Do(Ag_i,\alpha_{Ag_i}) \Rightarrow DoT(Ag_l,Ag_i,\tau_s,t_1) > DoT(Ag_l,Ag_i,\tau_s,t_0)$ (67)

where t1 is the time interval in which the action was carried out, while t0 is the time interval prior to its realization. Remember that:

$DoT(Ag_l,Ag_i,\tau_s,t) =_{def} Bel_{Ag_l}(DoA(Ag_i,\tau_s,t) \cdot DoW(Ag_i,\tau_s,t))$ (68)
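A toy illustration of formulas (67)-(68), assuming again that DoT is the product of the believed degrees of ability and willingness; the numeric values are invented for the example.

```python
# Hedged sketch (not from the paper): Agi's action succeeds if the degree of trust
# Agl places in it increases from t0 to t1 (formula 67), with DoT as in formula (68).

def degree_of_trust(believed_doa: float, believed_dow: float) -> float:
    """DoT: Agl's believed ability times believed willingness of Agi."""
    return believed_doa * believed_dow

dot_t0 = degree_of_trust(0.5, 0.6)   # before Agi's self-promoting action: 0.30
dot_t1 = degree_of_trust(0.7, 0.8)   # after the action: 0.56
assert dot_t1 > dot_t0               # formula (67) holds in this toy case
```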

6.5 Signaling its Own Presence and Qualities

Since dependence beliefs are strictly related to the possibility for the others to see the agent in the network and to know its ability in performing useful tasks, the goal of an agent who wants to improve its own relational capital will be to signal its presence, its skills, and its trustworthiness on those tasks [24–26]. While to show its presence it might have to shift its position (either physically or figuratively, for instance by changing its field), to communicate its skills and trustworthiness it might have to hold and show something that can be used as a signal (such as a certificate, a social status, etc.). This implies, in its plan of actions, several necessary sub-goals for producing a signal. These sub-goals are costly to reach, and the cost the agent has to pay to reach them can be taken as evidence that the signals are credible (of course, without considering cheating in the building of signals). It is important to underline that using these signals often implies the participation of a third subject in the process of building trust as a capital: a third party which must itself be trusted. We would say that the more this third party is trusted in the society, the more expensive it will be for the agent to acquire the signals to show, and the more these signals will work in increasing the agent's relational capital.

Obviously, Agi's previous performances are also 'signals' of trustworthiness; this information is likewise conveyed by the circulating reputation of Agi [27].

6.6 Strategic Behavior of the Trustee

As we have seen previously, there are different points of view for assessing the trustworthiness and the trust capital of a specific agent (Agi) with respect to a specific task (τs). In particular:

- its Real Trustworthiness (RT), i.e. what is actually and objectively assessable, regardless of what is believed by Agi itself and by the other agents in its world:

$RT(Ag_i,\tau_s) =_{def} DoA(Ag_i,\tau_s) \cdot DoW(Ag_i,\tau_s)$ (69)

- its own perceived trustworthiness, that is what we have called the Self-Trust (ST):

$ST(Ag_i,\tau_s) =_{def} Bel_{Ag_i}(DoA(Ag_i,\tau_s) \cdot DoW(Ag_i,\tau_s))$ (70)

- there is, therefore, the Objective Trust Capital (OTC) of Agi, i.e. the accumulation of trust that Agi can boast on the basis of what the other agents in its world actually believe:

$OTC(Ag_i,\tau_s) =_{def} \sum_{Ag_v \in AGT} Bel_{Ag_v}(DoA(Ag_i,\tau_s) \cdot DoW(Ag_i,\tau_s))$ (71)

to which corresponds the set of agents (POT) who are potential trustors of Agi:

$POT(Ag_i,\tau_s) =_{def} \bigcup_{v=1}^{m}\{Ag_v \in AGT \mid Bel_{Ag_v}(DoA(Ag_i,\tau_s)>\sigma) \wedge Bel_{Ag_v}(DoW(Ag_i,\tau_s)>\eta)\}$ (72)

- And finally, there is the Subjective Trust Capital (STC) of Agi, i.e. the accumulation of trust that Agi believes it can boast with respect to other agents in its world, that is, based on its own beliefs with respect to how other agents deem it trustworthy:

$STC(Ag_i,\tau_s) =_{def} \sum_{Ag_v \in AGT} Bel_{Ag_i}(Bel_{Ag_v}(DoA(Ag_i,\tau_s) \cdot DoW(Ag_i,\tau_s)))$ (73)

to which corresponds the set of agents (PBT) who are believed by Agi to be potential trustors of Agi:

$PBT(Ag_i,\tau_s) =_{def} \bigcup_{v=1}^{m}\{Ag_v \in AGT \mid Bel_{Ag_i}(Bel_{Ag_v}(DoA(Ag_i,\tau_s)>\sigma)) \wedge Bel_{Ag_i}(Bel_{Ag_v}(DoW(Ag_i,\tau_s)>\eta))\}$ (74)

In fact, there is often a difference between how much the others actually trust an agent and what the agent believes about it (the difference between OTC/POT and STC/PBT), but also between these and the level of trustworthiness the agent perceives in itself (the difference between OTC/POT and ST, or between STC/PBT and ST).

The subjective aspects of trust are fundamental in the process of managing this capital, since it is possible that the capital is there but the agent does not know it has it (or vice versa).

At the base of the possible discrepancy in the subjective valuation of trustworthiness there is the perception of how trustworthy an agent feels for a given task (ST) and the valuation the agent makes of how much the others trust it for that task (STC/PBT). In addition, this perception can change and become closer to the objective level while the task is performed (the relationship of ST with both RT and OTC/POT): the agent can either find out that it is more or less trustworthy than it believed, or realize that the others' perception was wrong (either positively or negatively). All these factors must be taken into account and studied together with the different components of trust, in order to build hypotheses on the strategic actions the agent will perform to manage its own relational capital. We must then consider what these discrepancies imply in terms of strategic actions: how can they be identified and valued? How will the trusted agent react when aware of them? It can either try to acquire competences to reduce the gap between the others' valuation and its own, or exploit the existence of this discrepancy, taking economic advantage of the reputation about its capability and counting on the others' scarce ability to monitor and test its real skills and/or motivations. In practice, it is on this basis of comparison between reality and subjective beliefs that the most varied behavioral strategies of agents develop, in the attempt to make the best use of the dependence network in which they are immersed: the dependence network that represents the most effective way to realize the goals they want to achieve.
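As a purely illustrative sketch of how these comparisons could drive strategic choices, the following fragment contrasts RT, ST and the (per-agent averaged) OTC and STC and flags the discrepancies discussed above; the threshold eps, the report format and the suggested reactions are hypothetical, not prescriptions from the model.

```python
# Hedged sketch (not from the paper): comparing the trustworthiness viewpoints of
# Section 6.6 (RT, ST, OTC, STC) to flag the discrepancies discussed in the text.

def trust_report(rt: float, st: float, otc: float, stc: float,
                 n_agents: int, eps: float = 0.05) -> list[str]:
    """Compare objective and subjective trust measures (capitals averaged per agent)."""
    findings = []
    avg_otc, avg_stc = otc / n_agents, stc / n_agents
    if abs(avg_otc - avg_stc) > eps:
        findings.append("Agi misestimates how much the others trust it (OTC vs STC).")
    if abs(st - rt) > eps:
        findings.append("Agi misperceives its own trustworthiness (ST vs RT).")
    if avg_otc - rt > eps:
        findings.append("Others overestimate Agi: it may exploit its reputation or "
                        "acquire competences to close the gap.")
    return findings

print(trust_report(rt=0.5, st=0.7, otc=3.2, stc=2.0, n_agents=4))
```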

7 Conclusion

With the expansion of the capabilities of intelligent autonomous systems and their pervasiveness in the real world, there is a growing need to equip these systems with autonomy and collaborative properties of an adequate level for intelligent interaction with humans. In fact, the complexity of the levels of interaction and the risks of inappropriate or even harmful interference are growing. A theoretical approach on the basic primitives of social interaction and the articulated outcomes that can derive from it is therefore fundamental.

This paper tries to define some basic elements of dependence relationships, enriched through attitudes of trust, in a network of cognitive agents (regardless of their human or artificial nature).

We have shown how, on the basis of the powers attributable to the various agents, objective relationships of dependence emerge between them. At the same time, we have seen how what really matters is the dependence believed by the social agents, thus highlighting the need to consider cognition as a decisive element for systems that must be highly adaptive to social interactions.

The articulation of the possible comparisons within the dependence network, and of the different interpretations that can arise from them, in a spirit of collaboration or at least of conflict avoidance, highlights the need for a clear ontology of social interaction.

By introducing, in the spirit of emulating truly operational autonomy [28], also the dimension of intentionality and of priority choice on this basis, the attitude of trust becomes particularly relevant, both from the point of view of those who must choose a partner to entrust with a task, and from the point of view of those who offer their availability to solve the task. In this sense we have introduced concepts such as relational capital and trust capital.

The future developments of this work will proceed, on the one hand, in the direction of further theoretical investigation: on the basis of the model introduced, we will define precisely the various and articulated forms of autonomy that derive from it; we will tackle the problem of the "degree of dependence", which derives from many and varied dimensions, such as the value of the goal to be achieved, the number of available and reliable alternative agents that can be contacted, the degree of ability/reliability required for the task to be delegated, and so on.

In parallel, we will try to develop a simulative computational model of the trusted dependence networks we have introduced, with the ambition of obtaining feedback on the basic conceptual scheme and, at the same time, of verifying its operability in a concrete way.

Data Availability Statement

The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author.

Author Contributions

RF and CC have equally contributed to the theoretical model; RF developed most of the formalization.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s Note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Footnotes

1We introduce the symbol $A =_{def} B$ to indicate that the symbol A is by definition associated with the expression B.

2For a more complete and detailed discussion of actions and plans (on their preconditions and results; on how the contexts may affect their effects; on their explicit or implicit conflicts, etc.), please refer to [18, 19].

3The context c defines the boundary conditions that can influence the other parameters of the indicated relationship. Different contexts can determine different outcomes of the actions, affect the agent’s beliefs and even the agent’s goals (for example, determine new ones or change their order of priority). To give a trivial example: being in different meteorological conditions or with a different force of gravity, so to speak, could strongly affect the results of the agent’s actions, and/or have an effect on the agent’s beliefs and/or on its own goals (changing their mutual priority or eliminating some and introducing new ones). In general, standard conditions are considered, i.e. default conditions that represent the usual situation in which agents operate: and the parameters (actions, beliefs, goals, etc.) to which we refer are generally referred to these standard values.

4Our beliefs can be considered with true/false values or included in a range (0,1). In this second case it will be relevant to consider a threshold value beyond which the belief will be considered valid even if not completely certain.

5Of course it can also happen that an agent does not have a good perception of its own characteristics/beliefs/goals/etc..

6The fact of being aware of one’s own goals is of absolute importance for an agent as it determines its subjective dependence which, as we will see, is the basis of its behavior.

7As we have defined the dependence, this non-coincidence may depend on different factors: wrong attribution of one’s own powers or the powers of the other agent.

8The comparison operator allows us to relate the two compared expressions (A and B in this case) to check whether or not they are equal and, if they are not, which factors determine the difference.

9If it were aware of it.

10Obviously, this is a possible hypothesis, linked to a particular model of agent and of interaction between agents. We could also foresee different agency hypotheses.

11Of course, the success or failure of these negotiations will also depend on how true the beliefs of the various agents are.

12We assume, for simplicity, that if Agi holds the beliefs $Bel_{Ag_i}(Pow(Ag_v,\tau_s^{Ag_i})=true) \wedge Bel_{Ag_i}(ObjDep(Ag_v,Ag_i,\tau_k^{Ag_v})=1)$ then it believes that those same beliefs are also held by Agv.

13Both $DoA(Ag_i,\tau_s)$ and $DoW(Ag_i,\tau_s)$ are included in the interval (0,1).

References

1. Wasserman S, Faust K. Social Network Analysis. Cambridge: Cambridge University Press (1994).

2. Watts DJ. Small Worlds. Princeton, NJ: Princeton University Press (1999).

3. Guare J. Six Degrees of Separation. New York: Vintage (1990).

4. Newman MEJ. The Structure of Scientific Collaboration Networks. Proc Natl Acad Sci U.S.A (2001) 98:404–9. doi:10.1073/pnas.98.2.404

5. Radicchi F, Castellano C, Cecconi F, Loreto V, Parisi D. Defining and Identifying Communities in Networks. Proc Natl Acad Sci (2004). doi:10.1073/pnas.0400054101

6. Nichols S, Stich S. Mindreading. Oxford: Oxford University Press (2003).

7. Granovetter MS. The Strength of Weak Ties. Am J Sociol (1973) 78:1360–80. doi:10.1086/225469

8. Putnam RD. Making Democracy Work. Civic Traditions in Modern Italy. Princeton, NJ: Princeton University Press (1993).

9. Putnam RD. Bowling Alone. The Collapse and Revival of American Community. New York: Simon & Schuster (2000).

10. Coleman JS. Social Capital in the Creation of Human Capital. Am J Sociol (1988) 94:S95–S120. doi:10.1086/228943

11. Bourdieu P. Forms of Capital. In: JC Richards, editor. Handbook of Theory and Research for the Sociology of Education. New York: Greenwood Press (1983).

12. Castelfranchi C, Falcone R. Principles of Trust for MAS: Cognitive Anatomy, Social Importance, and Quantification. In: Proceedings of the International Conference on Multi-Agent Systems (ICMAS'98), Paris, July 1998. p. 72–9.

13. Castelfranchi C, Falcone R. Trust Theory: A Socio-Cognitive and Computational Model. John Wiley & Sons (2010).

14. Bratman M. Intentions, Plans and Practical Reason. Cambridge, MA: Harvard University Press (1987).

15. Cohen P, Levesque H. Intention Is Choice with Commitment. Artif Intelligence (1990) 42. doi:10.1016/0004-3702(90)90055-5

16. Rao A, Georgeff M. Modelling Rational Agents within a BDI-Architecture (1991). Available at: http://citeseer.ist.psu.edu/122564.html.

17. Wooldridge M. An Introduction to Multi-Agent Systems. Wiley and Sons (2002).

18. Pollack ME. Plans as Complex Mental Attitudes. In: PR Cohen, J Morgan, and ME Pollack, editors. Intentions in Communication. USA: MIT Press (1990). p. 77–103.

19. Bratman ME, Israel DJ, Pollack ME. Plans and Resource-Bounded Practical Reasoning. Computational Intelligence (1988).

20. Sichman J, Conte R, Castelfranchi C, Demazeau Y. A Social Reasoning Mechanism Based on Dependence Networks. In: Proceedings of the 11th ECAI (1994).

21. Castelfranchi C, Conte R. The Dynamics of Dependence Networks and Power Relations in Open Multi-Agent Systems. In: Proc. COOP'96 – Second International Conference on the Design of Cooperative Systems, Juan-les-Pins, France, June 12–14. Valbonne, France: INRIA Sophia-Antipolis (1996). p. 125–37.

22. Falcone R, Pezzulo G, Castelfranchi C, Calvi G. Contract Nets for Evaluating Agent Trustworthiness. In: Special Issue "Trusting Agents for Trusting Electronic Societies", Lecture Notes in Artificial Intelligence (2005) 3577:43–58. doi:10.1007/11532095_3

23. Castelfranchi C, Falcone R, Marzo F. Trust as Relational Capital: Its Importance, Evaluation, and Dynamics. In: Proceedings of the Ninth International Workshop on "Trust in Agent Societies", AAMAS 2006 Conference, Hokkaido, Japan (2006).

24. Spence M. Job Market Signaling. Q J Econ (1973) 87:296–332.

25. Bliege Bird R, Smith EA. Signaling Theory, Strategic Interaction, and Symbolic Capital. Curr Anthropology (2005) 46(2). doi:10.1086/427115

26. Schelling T. The Strategy of Conflict. Cambridge, MA: Harvard University Press (1960).

27. Conte R, Paolucci M. Reputation in Artificial Societies: Social Beliefs for Social Order. Amsterdam: Kluwer (2002). doi:10.1007/978-1-4615-1159-5

28. Castelfranchi C, Falcone R. From Automaticity to Autonomy: The Frontier of Artificial Agents. In: H Hexmoor, C Castelfranchi, and R Falcone, editors. Agent Autonomy. Amsterdam: Kluwer (2003). p. 103–36. doi:10.1007/978-1-4419-9198-0_6

Keywords: dependence network, trust, autonomy, agent architecture, power

Citation: Falcone R and Castelfranchi C (2022) Grounding Human Machine Interdependence Through Dependence and Trust Networks: Basic Elements for Extended Sociality. Front. Phys. 10:946095. doi: 10.3389/fphy.2022.946095

Received: 17 May 2022; Accepted: 23 June 2022;
Published: 09 September 2022.

Edited by:

William Frere Lawless, Paine College, United States

Reviewed by:

Giancarlo Fortino, University of Calabria, Italy
Luis Antunes, University of Lisbon, Portugal

Copyright © 2022 Falcone and Castelfranchi. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Rino Falcone, rino.falcone@istc.cnr.it
