
ORIGINAL RESEARCH article

Front. Appl. Math. Stat., 19 October 2020
Sec. Mathematics of Computation and Data Science
This article is part of the Research Topic Mathematical Fundamentals of Machine Learning.

Statistical Analysis of Multi-Relational Network Recovery

Zhi Wang1, Xueying Tang2* and Jingchen Liu1
  • 1Department of Statistics, Columbia University, New York, NY, United States
  • 2Department of Mathematics, University of Arizona, Tucson, AZ, United States

In this paper, we develop asymptotic theories for a class of latent variable models for large-scale multi-relational networks. In particular, we establish consistency results and asymptotic error bounds for the (penalized) maximum likelihood estimators when the size of the network tends to infinity. The basic technique is to develop a non-asymptotic error bound for the maximum likelihood estimators through large deviations analysis of random fields. We also show that these estimators are nearly optimal in terms of minimax risk.

1. Introduction

A multi-relational network (MRN) describes multiple relations among a set of entities simultaneously. Our work on MRNs is mainly motivated by its applications to knowledge bases, which are repositories of information. Examples of knowledge bases include WordNet [1], the Unified Medical Language System [2], and the Google Knowledge Graph (https://developers.google.com/knowledge-graph). They have been used as the information source in many natural language processing tasks, such as word-sense disambiguation and machine translation [3–5]. A knowledge base often includes knowledge on a large number of real-world objects or concepts. When a knowledge base is characterized by an MRN, the objects and concepts correspond to nodes, and knowledge types are relations. Figure 1 provides an excerpt from an MRN in which “Earth,” “Sun,” and “solar system” are three nodes. The knowledge about the orbiting patterns of celestial objects forms a relation “orbit,” and the knowledge on the classification of the objects forms another relation “belong to” in the MRN.


Figure 1. An example of the MRN representation of a knowledge base.

An important task of network analysis is to recover the unobserved network based on data. In this paper, we consider a latent variable model for MRNs. The presence of an edge from node i to node j of relation type k is a Bernoulli random variable Yijk with success probability Mijk. Each node i is associated with a vector θi, called the embedding of the node. The probability Mijk is modeled as a function f of the embeddings, θi and θj, and a relation-specific parameter vector wk. This is a natural generalization of the latent space model for single-relational networks [6]. Recently, it has been successfully applied to knowledge base analysis [7–14]. Various forms of f have been proposed, such as distance models [7], bilinear models [12–14], and neural networks [15]. Computational algorithms have been proposed to improve link prediction for knowledge bases [16, 17]. The statistical properties of embedding-based MRN models, however, have not been rigorously studied. It remains unknown whether and to what extent the underlying distribution of an MRN can be recovered, especially when there are a large number of nodes and relations.

The results in this paper fill this void by studying error bounds and asymptotic behaviors of the estimators of the Mijk's for a general class of models. This is a challenging problem for the following reasons. Traditional statistical inference of latent variable models often requires a (proper or improper) prior distribution for θi. In such settings, one works with the marginalized likelihood with θi integrated out. For the analysis of MRNs, the sample size and the latent dimensions are often so large that the above-mentioned inference approaches are computationally infeasible. For instance, a small-scale MRN could have a sample size as large as a few million, and the dimension of the embeddings can be as large as several hundred. Therefore, in practice, the prior distribution is often dropped, and the latent variables θi's are treated as additional parameters and estimated via maximizing the likelihood or penalized likelihood functions. The parameter space is thus substantially enlarged by the addition of the θi's, whose dimension is proportional to the number of entities. As a result, in the asymptotic analysis, we face a double-asymptotic regime in both the sample size and the parameter dimension.

In this paper, we develop results for the (penalized) maximum likelihood estimator of such models and show that under regularity conditions the estimator is consistent. In particular, we overcome the difficulty induced by the double-asymptotic regime via non-asymptotic bounds for the error probabilities. Then, we show that the distribution of MRN can be consistently estimated in terms of average Kullback-Leibler (KL) divergence even when the latent dimension increases slowly as the sample size tends to infinity. A probability error bound is also provided together with the upper bound for the risk (expected KL divergence). We further study the lower bound and show the near-optimality of the estimator in terms of minimax risk. Besides the average KL divergence, similar results can be established for other criteria, such as link prediction accuracy.

The outline of the remaining sections is as follows. In section 2, we provide the model specification and formulate the problem. Our main results are presented in section 3. Finite sample performance is examined in section 4 through simulated and real data examples. Concluding remarks are included in section 5.

2. Problem Setup

2.1. Notation

Let | · | be the cardinality of a set and × be the Cartesian product. The set {1, …, N} is denoted by [N]. The sign function sgn(x) is defined to be 1 for x ≥ 0 and 0 otherwise. The logistic function is denoted by σ(x) = e^x/(1 + e^x). Let 1A be the indicator function of event A. We use U[a, b] to denote the uniform distribution on [a, b] and Ber(p) to denote the Bernoulli distribution with success probability p. The KL divergence between Ber(p) and Ber(q) is written as D(p||q) = p log(p/q) + (1 − p) log((1 − p)/(1 − q)). We use || · || to denote the Euclidean norm for vectors and the Frobenius norm for matrices.

For two real positive sequences {an} and {bn}, we write an = O(bn) if lim supn→∞ an/bn < ∞. Similarly, we write an = Ω(bn) if lim supn→∞ bn/an < ∞ and an = o(bn) if limn→∞ an/bn = 0. We denote an ≾ bn if lim supn→∞ an/bn ≤ 1. When {an} and {bn} are negative sequences, an ≾ bn means lim infn→∞ an/bn ≥ 1. In some places, we use bn ≿ an as an interchangeable notation for an ≾ bn. Finally, if limn→∞ an/bn = 1, we write an ~ bn.

2.2. Model

Consider an MRN with N entities and K relations. Given i, j ∈ [N] and k ∈ [K], the triple λ = (i, j, k) corresponds to the edge from entity i to entity j of relation k. Let Λ = [N] × [N] × [K] denote the set of all edges. We assume in this paper that an edge can be either present or absent in a network and use Yλ ∈ {0, 1} to indicate the presence of edge λ. In some scenarios, the status of an edge may have more than two types. Our analysis can be generalized to accommodate these cases.

We associate each entity i with a vector θi of dimension dE and each relation k with a vector wk of dimension dR. Let E ⊂ ℝ^dE be a compact domain where the embeddings θ1, …, θN live. We call E the entity space. Similarly, we define a compact relation space R ⊂ ℝ^dR for the relation-specific parameters w1, …, wK. Let x = (θ1, …, θN, w1, …, wK) be a vector in the product space Θ = E^N × R^K. The parameters associated with edge λ = (i, j, k) are then xλ = (θi, θj, wk). We assume that, given x, the elements of {Yλ ∣ λ ∈ Λ} are independent of each other and that the log odds of Yλ = 1 are

\log\frac{P(Y_\lambda = 1 \mid x)}{P(Y_\lambda = 0 \mid x)} = \phi(x_\lambda), \quad \text{for } \lambda \in \Lambda.    (1)

Here ϕ is defined on E² × R, and ϕ(xλ) is often called the score of edge λ.

We will use Y to represent the N × N × K tensor formed by {Yλ ∣ λ ∈ Λ} and M(x) to represent the corresponding probability tensor {P(Yλ = 1 ∣ x) ∣ λ ∈ Λ}. Our model is given by

Y_\lambda \sim \mathrm{Ber}\big(M_\lambda(x^*)\big),    (2)
M_\lambda(x) = \sigma\big(\phi(x_\lambda)\big), \quad \lambda \in \Lambda,    (3)

where x* stands for the true value of x and Yλ's are independent. In the above model, the probability of the presence of an edge is entirely determined by the embeddings of the corresponding entities and the relation-specific parameters. This imposes a low-dimensional latent structure on the probability tensor M* = M(x*).

We specify our model using a generic function ϕ. It includes various existing models as special cases. Below are two examples of ϕ.

1. Distance model [7].

\phi(\theta_i, \theta_j, w_k) = b_k - \|\theta_i + a_k - \theta_j\|^2,    (4)

where θi, θj, ak ∈ ℝ^d, bk ∈ ℝ, and wk = (ak, bk). In the distance model, relation k from node i to node j is more likely to exist if θi shifted by ak is closer to θj under the Euclidean norm.

2. Bilinear model [9].

\phi(\theta_i, \theta_j, w_k) = \theta_i^{T}\,\mathrm{diag}(w_k)\,\theta_j,    (5)

where θi, θj, wk ∈ ℝ^d and diag(wk) is a diagonal matrix with wk as the diagonal elements. Model (5) is a special case of the more general model ϕ(θi, θj, wk) = θi^T Wk θj, where Wk ∈ ℝ^{d×d} is a matrix parameterized by wk ∈ ℝ^{dR}. Trouillon et al. [12], Nickel et al. [13], and Liu et al. [14] explored different ways of constructing Wk.
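To make the two score functions concrete, the following is a minimal NumPy sketch (our own illustration; the function names are not from the cited papers) that evaluates ϕ for both models and converts a score into an edge probability through the logistic function, as in (3).

```python
import numpy as np

def sigmoid(s):
    """Logistic function sigma(s) = e^s / (1 + e^s)."""
    return 1.0 / (1.0 + np.exp(-s))

def score_distance(theta_i, theta_j, a_k, b_k):
    """Distance model (4): phi = b_k - ||theta_i + a_k - theta_j||^2."""
    return b_k - np.sum((theta_i + a_k - theta_j) ** 2)

def score_bilinear(theta_i, theta_j, w_k):
    """Bilinear model (5): phi = theta_i^T diag(w_k) theta_j."""
    return np.sum(theta_i * w_k * theta_j)

# Edge probability M_lambda = sigma(phi(x_lambda)) under the bilinear model.
d = 20
rng = np.random.default_rng(0)
theta_i, theta_j, w_k = rng.normal(size=(3, d))
prob_edge = sigmoid(score_bilinear(theta_i, theta_j, w_k))
```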

Very often, only a small portion of the network is observed [18]. We assume that each edge in the MRN is observed independently with probability γ and that the observation of an edge is independent of Y. Let S ⊂ Λ be the set of observed edges. Then the elements of S are independent draws from Λ. For convenience, we use n to represent the expected number of observed edges, namely, n = E[|S|] = γ|Λ| = γN²K. Our goal is to recover the underlying probability tensor M* based on the observed edges {Yλ ∣ λ ∈ S}.

REMARK 1. Ideally, if there exists x* such that Yλ = sgn(Mλ(x*) − 1/2) for all λ ∈ Λ, then Y can be recovered with no error under x*. This is, however, a rare case in practice, especially for large-scale MRNs. A relaxed assumption is that Y can be recovered with some low-dimensional x* and noise {ϵλ} such that

Y_\lambda = \mathrm{sgn}\Big(M_\lambda(x^*) + \epsilon_\lambda - \tfrac{1}{2}\Big), \quad \epsilon_\lambda \overset{\text{i.i.d.}}{\sim} U\big[-\tfrac{1}{2}, \tfrac{1}{2}\big], \quad \lambda \in \Lambda.    (6)

By introducing the noise term, we formulate the deterministic MRN as a random graph. The model described in (2) is an equivalent but simpler form of (6).

2.3. Estimation

According to (2), the log-likelihood function of our model is

l(x; Y_S) = \sum_{\lambda \in S} Y_\lambda \log M_\lambda(x) + (1 - Y_\lambda)\log\big(1 - M_\lambda(x)\big).    (7)

We omit the terms Σλ∈S log γ + Σλ∉S log(1 − γ) in (7) since γ is not the parameter of interest. To obtain an estimator of M*, we take the following steps.

1. Obtain the maximum likelihood estimator (MLE) of x*,

\hat{x} = \arg\max_{x \in \Theta} l(x; Y_S).    (8)

2. Use the plug-in estimator

\hat{M} = M(\hat{x})    (9)

as an estimator of M*.
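To illustrate the two steps above, here is a small sketch (our own code, assuming the bilinear score (5); nothing here is from the authors' implementation) of the objective in (7). Maximizing it over the entity embeddings and relation parameters, for example with a gradient-based optimizer, yields the MLE (8) and the plug-in estimator (9).

```python
import numpy as np

def log_likelihood(theta, w, observed):
    """Observed-data log-likelihood (7) under the bilinear score (5).

    theta    : (N, d) array of entity embeddings
    w        : (K, d) array of relation parameters
    observed : iterable of (i, j, k, y) with y in {0, 1}, one entry per edge in S
    """
    ll = 0.0
    for i, j, k, y in observed:
        phi = np.sum(theta[i] * w[k] * theta[j])      # score of edge (i, j, k)
        m = 1.0 / (1.0 + np.exp(-phi))                # M_lambda(x) = sigma(phi)
        m = np.clip(m, 1e-12, 1.0 - 1e-12)            # avoid log(0) in this sketch
        ll += y * np.log(m) + (1 - y) * np.log(1.0 - m)
    return ll
```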

In (8), the estimator x̂ is a maximizer over the compact parameter space Θ = E^N × R^K. The dimension of Θ is

m = N d_E + K d_R,

which grows linearly in the number of entities N and the number of relations K.

2.4. Evaluation Criteria

We consider the following criteria to measure the error of the above-mentioned estimator. They will be used in both the main results and numerical studies.

1. Average KL divergence of the predictive distribution from the true distribution

L(\hat{M}, M^*) = \frac{1}{|\Lambda|}\sum_{\lambda \in \Lambda} D\big(M^*_\lambda \,\|\, \hat{M}_\lambda\big).    (10)

2. Mean squared error of the predicted scores

\mathrm{MSE}_\phi = \frac{1}{|\Lambda|}\sum_{\lambda \in \Lambda} \big(\phi(\hat{x}_\lambda) - \phi(x^*_\lambda)\big)^2.    (11)

3. Link prediction error

\widehat{\mathrm{err}} = \frac{1}{|\Lambda|}\sum_{\lambda \in \Lambda} 1_{\{\hat{Y}_\lambda \ne Y^*_\lambda\}},    (12)

where Ŷλ = sgn(M̂λ − 1/2) and Y*λ = sgn(M*λ − 1/2).
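For concreteness, the three criteria above can be computed as in the following sketch (our own code; M_hat and M_star are N × N × K arrays of estimated and true probabilities, and phi_hat, phi_star are the corresponding score arrays).

```python
import numpy as np

def avg_kl_and_link_error(M_hat, M_star):
    """Average KL divergence (10) and link prediction error (12)."""
    p = np.clip(M_star, 1e-12, 1 - 1e-12)
    q = np.clip(M_hat, 1e-12, 1 - 1e-12)
    kl = p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))   # D(M*_lambda || M_hat_lambda)
    link_err = np.mean((M_hat >= 0.5) != (M_star >= 0.5))          # sgn(. - 1/2) disagreement
    return kl.mean(), link_err

def mse_scores(phi_hat, phi_star):
    """Mean squared error of the predicted scores (11)."""
    return np.mean((phi_hat - phi_star) ** 2)
```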

REMARK 2. The latent attributes of entities and relations are often not identifiable, so the MLE x̂ is not unique. For instance, in (4), the values of ϕ and M(x) remain the same if we replace θi and ak by Γθi + t and Γak, respectively, where t is an arbitrary vector in ℝ^dE and Γ is an orthonormal matrix. Therefore, we consider the mean squared error of the scores, which are identifiable.

3. Main Results

We first present results for the MLE in terms of the KL divergence between the estimated and the true model. Specifically, we investigate the tail probability P(L(M̂, M*) > t) and the expected loss E[L(M̂, M*)]. In section 3.1, we discuss upper bounds for the two quantities. The lower bounds are provided in section 3.2. In section 3.3, we extend the results to penalized maximum likelihood estimators (pMLE) and other loss functions. All proofs are included in the Supplementary Material.

3.1. Upper Bounds

We first present an upper bound for the tail probability P(L(M̂, M*) > t) in Lemma 1. The result depends on the tensor size, the number of observed edges, the functional form of ϕ, and the geometry of the parameter space Θ. The lemma explicitly quantifies the impact of these elements on the error probability and is key to the subsequent analyses. Lemma 2 gives a non-asymptotic upper bound for the expected loss (risk). We then establish the consistency of M̂ and the asymptotic error bounds in Theorem 1.

We will make the following assumptions throughout this section.

ASSUMPTION 1. x* ∈ Θ = E^N × R^K, where E and R are Euclidean balls of radius U.

ASSUMPTION 2. The function ϕ is Lipschitz continuous under the Euclidean norm,

|\phi(u) - \phi(v)| \le \alpha \|u - v\|, \quad u, v \in E^2 \times R,    (13)

where α is a Lipschitz constant.

Assumption 1 is imposed for technical convenience. The results can be easily extended to general compact parameter spaces. Let C = sup_{u ∈ E²×R} |ϕ(u)|. Without loss of generality, we assume that C ≥ 2.

LEMMA 1. Consider M̂ defined in (9) and the average KL divergence L in (10). Under Assumptions 1 and 2, for every t > 0, β > 0, and 0 < s < nt,

P\big(L(\hat{M}, M^*) \ge t\big) \le \exp\Big\{-\frac{nt - s}{C}\, h\Big(\frac{1}{2} - \frac{s}{2nt}\Big)\Big\}\Big(1 + \frac{2\sqrt{3}\,\alpha U n(1+\beta)}{s}\Big)^{m} + \exp\{-n\beta\, h(\beta)\},    (14)

where m = NdE + KdR is the dimension of Θ, n = γN²K is the expected number of observations, and h(u) = (1 + 1/u) log(1 + u) − 1.

In the proof of Lemma 1, we use Bennett's inequality to develop a uniform bound that does not depend on the true parameters, which suffices for the current analysis. Readers who need sharper bounds can replace Bennett's bound in the proof with the usual large deviation rate function, which yields a sharp exponential bound depending on the true parameters. We do not pursue this direction in this paper.

Lemma 2 below gives an upper bound on the risk E[L(M̂, M*)]; it follows from Lemma 1.

LEMMA 2. Consider M̂ defined in (9) and the loss function L in (10). Let C1 = 18C, C2 = 8√3 αU, and C3 = 2 max{C1, C2}. If Assumptions 1 and 2 hold and n/m ≥ C2 + e, then

E\big[L(\hat{M}, M^*)\big] \le C_3\, \frac{m}{n}\log\frac{n}{m} + \frac{C_1}{n}\exp\Big\{-m\log\frac{n}{m}\Big\} + \frac{3}{n}\exp\Big\{-\frac{1}{3}\Big(n + C_3\, m\log\frac{n}{m}\Big)\Big\}.    (15)

We are interested in the asymptotic behavior of the tail probability in two scenarios: (i) t is a fixed constant and (ii) t decays to zero as the number of entities N tends to infinity. The following theorem gives an asymptotic upper bound for the tail probability and the risk.

THEOREM 1. Consider M̂ defined in (9) and the loss function L in (10). Let the number of entities N → ∞ and let C, K, U, dE, dR, α, and γ be fixed constants. If Assumptions 1 and 2 hold, we have the following asymptotic inequalities.

When t is a fixed constant,

\log P\big(L(\hat{M}, M^*) \ge t\big) \precsim -\frac{t}{5C}\, n.    (16)

When t = 10C (m/n) log(n/m),

\log P\big(L(\hat{M}, M^*) \ge t\big) \precsim -m\log\frac{n}{m}.    (17)

Furthermore,

E\big[L(\hat{M}, M^*)\big] \precsim 10C\, \frac{m}{n}\log\frac{n}{m}.    (18)

The consistency of M̂ is implied by (16), and the rate of convergence is |log P(L(M̂, M*) ≥ t)| = Ω(N²) if t is a fixed constant. The rate decreases to Ω(N log N) for the choice of t producing (17). It is also implied by (17) that L(M̂, M*) = O((log N)/N) with high probability, since m = NdE + KdR = Θ(N) and n = γN²K = Θ(N²), so (m/n) log(n/m) = Θ((log N)/N). We show in the next section that this upper bound is reasonably sharp.

The condition that K, U, dE, dR, and α are fixed constants can be relaxed. For instance, we can let U, dE, dR, and α go to infinity slowly at the rate O(log N) and K at the rate O(N). We can let γ go to zero provided that (m/n) log(n/m) = o(1).

3.2. Lower Bounds

We show in Theorem 2 that the order of the minimax risk is Ω(m/n), which implies the near optimality of M̂ in (9) and of the upper bound O((m/n) log(n/m)) in Theorem 1. To begin with, we introduce the following definition and assumption.

DEFINITION 1. For u = (θ, θ′, w) ∈ E² × R, the r-neighborhood of u is

N_r(u) = \big\{(\eta, \eta', \zeta) \in E^2 \times R : \|\eta - \theta\| \le r,\ \|\eta' - \theta'\| \le r,\ \|\zeta - w\| \le r\big\}.

Similarly, for x = (θ1, …, θN, w1, …, wK) ∈ E^N × R^K, the r-neighborhood of x is

N_r(x) = \big\{(\eta_1, \ldots, \eta_N, \zeta_1, \ldots, \zeta_K) \in E^N \times R^K : \|\eta_i - \theta_i\| \le r,\ \|\zeta_k - w_k\| \le r,\ i \in [N],\ k \in [K]\big\}.

ASSUMPTION 3. There exist u0 ∈ E² × R and r, κ > 0 such that N_r(u0) ⊂ E² × R and

|\sigma(\phi(u)) - \sigma(\phi(v))| \ge \kappa \|u - v\|, \quad u, v \in N_r(u_0).    (19)

THEOREM 2. Let b = sup_{u ∈ N_r(u0)} σ(ϕ(u)). Under Assumptions 2 and 3, if r² ≥ (m/16 − 1) b(1 − b) / (12α²n), then for any estimator M̂, there exists x* ∈ Θ such that

P\Big(L(\hat{M}, M^*) > \tilde{C}\, \frac{m/16 - 1}{n}\Big) \ge \frac{1}{2},    (20)

where C̃ = κ²b(1 − b)/(108α²). Consequently, the minimax risk

\min_{\hat{M}} \max_{M^*} E\big[L(\hat{M}, M^*)\big] \ge \tilde{C}\, \frac{m/16 - 1}{2n}.    (21)

3.3. Extensions

3.3.1. Regularization

In this section, we extend the asymptotic results in Theorem 1 to regularized estimators. In practice, regularization is often used to prevent overfitting. We consider a regularization similar to the elastic net [19],

l_\rho(x; Y_S) = l(x; Y_S) - \rho_1 \|x\|_1 - \rho_2 \|x\|^2,    (22)

where || · ||1 stands for the L1 norm and ρ1, ρ2 ≥ 0 are regularization parameters. The pMLE is

\hat{x} = \arg\max_{x \in \Theta} l_\rho(x; Y_S).    (23)
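As a sketch of the penalized objective (22) (our own code, reusing the log_likelihood sketch from section 2.3; the default penalty weights are placeholders, not values from the paper):

```python
import numpy as np

def penalized_log_likelihood(theta, w, observed, rho1=1e-4, rho2=1e-4):
    """Elastic-net-style objective (22); rho1 = rho2 = 0 recovers the plain likelihood (7)."""
    x = np.concatenate([theta.ravel(), w.ravel()])     # stack all parameters into one vector
    penalty = rho1 * np.sum(np.abs(x)) + rho2 * np.sum(x ** 2)
    return log_likelihood(theta, w, observed) - penalty
```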

Note that the MLE in (8) is a special case of the pMLE above with ρ1 = ρ2 = 0. Since x̂ is shrunk toward 0, without loss of generality, we assume that E and R are centered at 0. We generalize Theorem 1 to the pMLE in the following theorem.

THEOREM 3. Consider the estimator M̂ given by (23) and (9) and the loss function L in (10). Let the number of entities N → ∞ and let C, K, U, dE, dR, α, γ be absolute constants. If Assumptions 1 and 2 hold and ρ1 + ρ2 = o(log N), then the asymptotic inequalities (16), (17), and (18) in Theorem 1 hold.

3.3.2. Other Loss Functions

We present some results for the mean squared error loss MSEϕ defined in (11) and the link prediction error err^ defined in (12). Corollaries 1 and 2 give upper and lower bounds for MSEϕ, and Corollary 3 gives an upper bound for err^ under an additional assumption.

COROLLARY 1. Under the setting of Theorem 3 with the loss function replaced by MSEϕ, we have the following asymptotic results.

If t is a fixed constant,

\log P\big(\mathrm{MSE}_\phi \ge t\big) \precsim -\frac{\sigma(C)(1 - \sigma(C))}{2}\cdot\frac{t}{5C}\, n.    (24)

If t = 20C/[σ(C)(1 − σ(C))] · (m/n) log(n/m),

\log P\big(\mathrm{MSE}_\phi \ge t\big) \precsim -m\log\frac{n}{m}.    (25)

Furthermore,

E\big[\mathrm{MSE}_\phi\big] \precsim \frac{20C}{\sigma(C)(1 - \sigma(C))}\, \frac{m}{n}\log\frac{n}{m}.    (26)

COROLLARY 2. Under the setting of Theorem 2 with the loss function replaced by MSEϕ, we have

P\Big(\mathrm{MSE}_\phi > \tilde{C}\, \frac{m/16 - 1}{8n}\Big) \ge \frac{1}{2},    (27)

and

\min_{\hat{M}} \max_{M^*} E\big[\mathrm{MSE}_\phi\big] \ge \tilde{C}\, \frac{m/16 - 1}{16n}.    (28)

ASSUMPTION 4. There exists ε > 0 such that |M*λ − 1/2| ≥ ε for every λ ∈ Λ.

COROLLARY 3. Under the setting of Theorem 3 with the loss function replaced by err^ and Assumption 4 added, we have the following asymptotic results.

If t is a fixed constant,

\log P\big(\widehat{\mathrm{err}} \ge t\big) \precsim -\frac{2\varepsilon^2 t}{5C}\, n.    (29)

If t = 5C/ε² · (m/n) log(n/m),

\log P\big(\widehat{\mathrm{err}} \ge t\big) \precsim -m\log\frac{n}{m}.    (30)

Furthermore,

E\big[\widehat{\mathrm{err}}\big] \precsim \frac{5C}{\varepsilon^2}\, \frac{m}{n}\log\frac{n}{m}.    (31)

3.3.3. Sparse Representations

We are interested in sparse entity embeddings and relation parameters. Let || · ||0 be the number of non-zero elements of a vector and τ be a pre-specified sparsity level of x (i.e., the proportion of non-zero elements). Let mτ = τm be the upper bound on the number of non-zero parameters, that is, ||x*||0 ≤ mτ. Consider the following estimator

\hat{x} = \arg\max_{x \in \Theta} l(x; Y_S) \quad \text{subject to } \|x\|_0 \le m_\tau.    (32)

The estimator defined above maximizes the log-likelihood subject to an L0 constraint on the parameters.
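The paper does not prescribe an algorithm for the constrained problem (32); one generic heuristic (our own sketch, not the authors' method) is projected gradient ascent, in which each gradient step is followed by hard thresholding that keeps only the mτ largest-magnitude parameters:

```python
import numpy as np

def hard_threshold(x, m_tau):
    """Project x onto {||x||_0 <= m_tau} by keeping its m_tau largest-magnitude entries."""
    out = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-m_tau:]   # indices of the m_tau largest |x_i|
    out[keep] = x[keep]
    return out
```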

THEOREM 4. Consider M̂ defined in (32) and (9) and the loss function L in (10). Let the number of entities N → ∞ and let τ, C, K, U, dE, dR, α be absolute constants. Under Assumptions 1 and 2, the following asymptotic inequalities hold.

If t is a fixed constant,

\log P\big(L(\hat{M}, M^*) \ge t\big) \precsim -\frac{t}{5C}\, n.    (33)

If t = 10C (mτ/n) log(n/mτ),

\log P\big(L(\hat{M}, M^*) \ge t\big) \precsim -m_\tau \log\frac{n}{m_\tau}.    (34)

Furthermore,

E\big[L(\hat{M}, M^*)\big] \precsim 10C\, \frac{m_\tau}{n}\log\frac{n}{m_\tau}.    (35)

We omit the results for other loss functions as well as the lower bounds since they can be analogously obtained.

4. Numerical Examples

In this section, we demonstrate the finite sample performance of M̂ through simulated and real data examples. Throughout the numerical experiments, the AdaGrad algorithm [20] is used to compute x̂ in (8) or (23). It is a first-order optimization method that combines stochastic gradient descent (SGD) [21] with adaptive step sizes for finding local optima. Since the objective function in (8) is non-convex, a global maximizer is not guaranteed. Our objective function usually has many global maximizers, but, empirically, we found that the algorithm works well for MRN recovery and that the recovery performance is insensitive to the choice of the starting point of SGD. Computationally, SGD is also more appealing than more expensive global optimization methods for handling large-scale MRNs.
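For illustration, the sketch below shows one AdaGrad pass over the observed edges for the bilinear score (5); this is our own minimal version (learning rate and initialization are arbitrary), not the authors' implementation.

```python
import numpy as np

def adagrad_epoch(theta, w, observed, state=None, lr=0.1, eps=1e-8):
    """One stochastic pass minimizing the negative log-likelihood (7) for the bilinear model (5).

    theta, w : (N, d) and (K, d) parameter arrays, updated in place
    observed : list of (i, j, k, y) tuples for the edges in S
    state    : per-coordinate sums of squared gradients (AdaGrad accumulators)
    """
    if state is None:
        state = {"theta": np.zeros_like(theta), "w": np.zeros_like(w)}
    order = np.random.permutation(len(observed))
    for idx in order:
        i, j, k, y = observed[idx]
        phi = np.sum(theta[i] * w[k] * theta[j])
        resid = 1.0 / (1.0 + np.exp(-phi)) - y            # d(-loglik)/d(phi) = sigma(phi) - y
        grads = [("theta", i, resid * w[k] * theta[j]),   # gradient w.r.t. theta_i
                 ("theta", j, resid * w[k] * theta[i]),   # gradient w.r.t. theta_j
                 ("w", k, resid * theta[i] * theta[j])]   # gradient w.r.t. w_k
        for key, row, g in grads:
            acc = state[key]
            acc[row] += g ** 2                            # accumulate squared gradients
            target = theta if key == "theta" else w
            target[row] -= lr * g / (np.sqrt(acc[row]) + eps)
    return state
```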

4.1. Simulated Examples

In the simulated examples, we fix K = 20, d = 20 and consider various choices of N ranging from 100 to 10,000 to investigate the estimation performance as N grows. The function ϕ we consider is a combination of the distance model (4) and the bilinear model (5),

\phi(\theta_i, \theta_j, w_k) = (\theta_i + a_k - \theta_j)^{T}\, \mathrm{diag}(b_k)\, (\theta_i + a_k - \theta_j),    (36)

where θi, θj, ak, bk ∈ ℝ^d and wk = (ak, bk). We independently generate the elements of θi*, ak*, and bk* from the normal distributions N(0, 1), N(0, 1), and N(0, 0.25), respectively, where N(μ, σ²) denotes the normal distribution with mean μ and variance σ². To guarantee that the parameters are from a compact set, the normal distributions are truncated to the interval [−20, 20]. Given the latent attributes, each Yijk is generated from the Bernoulli distribution with success probability Mijk* = σ(ϕ(θi*, θj*, wk*)). The observation probability γ takes a value from {0.005, 0.01, 0.02}. For each combination of γ and N, 100 independent datasets are generated. For each dataset, we compute x̂ and M̂ in (8) and (9) with the AdaGrad algorithm and then calculate L(M̂, M*) defined in (10) as well as the link prediction error defined in (12). The two types of losses are averaged over the 100 datasets for each combination of N and γ to approximate the theoretical risks. These quantities are plotted against N in log scale in Figure 2. As the figure shows, both risks generally decrease as N increases. When N is small, n/m is not large enough to satisfy the condition n/m ≥ C2 + e in Lemma 2, and the expected KL risk increases at the beginning. After N gets sufficiently large, the trend agrees with our asymptotic analysis.
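The data-generating process of this subsection can be written compactly as follows (our own sketch; variable names are not from the paper). Because values outside [−20, 20] essentially never occur for these normal distributions, clipping is used in place of exact truncated sampling.

```python
import numpy as np

def simulate_mrn(N=100, K=20, d=20, gamma=0.01, seed=0):
    """Generate one simulated dataset following section 4.1."""
    rng = np.random.default_rng(seed)
    clip = lambda z: np.clip(z, -20, 20)                      # truncation to [-20, 20]
    theta = clip(rng.normal(0.0, 1.0, size=(N, d)))           # entity embeddings theta*
    a = clip(rng.normal(0.0, 1.0, size=(K, d)))               # relation shifts a*
    b = clip(rng.normal(0.0, 0.5, size=(K, d)))               # relation weights b*, N(0, 0.25)
    # Score (36): phi_ijk = (theta_i + a_k - theta_j)^T diag(b_k) (theta_i + a_k - theta_j)
    diff = theta[:, None, None, :] + a[None, None, :, :] - theta[None, :, None, :]
    phi = np.sum(diff ** 2 * b[None, None, :, :], axis=-1)    # shape (N, N, K)
    M = 1.0 / (1.0 + np.exp(-phi))                            # true probabilities M*
    Y = rng.binomial(1, M)                                    # edge indicators
    observed = rng.random(M.shape) < gamma                    # mask of observed edges S
    return theta, a, b, M, Y, observed
```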


Figure 2. Average Kullback-Leibler divergence (left) and average link prediction error (right) of M^ for different choices of N and γ.

4.2. Real Data Example: Knowledge Base Completion

WordNet [1] is a large lexical knowledge base for English. It has been used in word sense disambiguation, text classification, question answering, and many other tasks in natural language processing [3, 5]. The basic components of WordNet are groups of words. Each group, called a synset, describes a distinct concept. In WordNet, synsets are linked by conceptual-semantic and lexical relations, such as super-subordinate relation and antonym. We model WordNet as an MRN with the synsets as entities and the links between synsets as relations.

Following Bordes et al. [7], we use a subset of WordNet for analysis. The dataset contains 40,943 synsets and 18 types of relations. A triple (i, j, k) is called valid if relation k from entity i to entity j exists, i.e., Yijk = 1. All other triples are called invalid triples. Among more than 3.0 × 10^10 possible triples in WordNet, only 151,442 triples are valid. We assume that 141,442 valid triples and the same proportion of invalid triples are observed. The goal of our analysis is to recover the unobserved part of the knowledge base. We adopt the ranking procedure commonly used in the knowledge graph embedding literature to evaluate link predictions. Given a valid triple λ = (i, j, k), we rank the estimated scores of all the invalid triples inside Λ·jk = {(i′, j, k) : i′ ∈ [N]} in descending order and call the rank of ϕ(x̂λ) the head rank of λ, denoted by Hλ. Similarly, we define the tail rank Tλ and the relation rank Rλ by ranking ϕ(x̂λ) among the estimated scores of invalid triples in Λij· and Λi·k, respectively. For a set V of valid triples, the prediction performance can be evaluated by rank-based criteria, mean rank (MR), mean reciprocal rank (MRR), and hits at q (Hits@q), which are defined as

\mathrm{MR}_E = \frac{1}{2|V|}\sum_{\lambda \in V} (H_\lambda + T_\lambda), \quad \mathrm{MR}_R = \frac{1}{|V|}\sum_{\lambda \in V} R_\lambda,
\mathrm{MRR}_E = \frac{1}{2|V|}\sum_{\lambda \in V} \Big(\frac{1}{H_\lambda} + \frac{1}{T_\lambda}\Big), \quad \mathrm{MRR}_R = \frac{1}{|V|}\sum_{\lambda \in V} \frac{1}{R_\lambda},

and

\mathrm{Hits}_E@q = \frac{1}{2|V|}\sum_{\lambda \in V} \big(1_{\{H_\lambda \le q\}} + 1_{\{T_\lambda \le q\}}\big), \quad \mathrm{Hits}_R@q = \frac{1}{|V|}\sum_{\lambda \in V} 1_{\{R_\lambda \le q\}}.

The subscripts E and R represent the criteria for predicting entities and relations, respectively. Models with higher MRRs and Hits@q's or lower MRs are preferable. In addition, MRR is more robust to outliers than MR.
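A minimal sketch of the head-rank computation and the rank-based criteria (our own code; for simplicity it ranks a triple against all candidate heads, i.e., the "raw" setting, whereas the text ranks only against invalid triples):

```python
import numpy as np

def head_rank(scores, i, j, k):
    """Descending-order rank of triple (i, j, k) among candidate heads (i', j, k).

    scores : (N, N, K) array of estimated scores phi(x_hat)
    """
    column = scores[:, j, k]                    # scores of all candidate head entities
    return 1 + int(np.sum(column > column[i]))  # rank 1 = highest score

def rank_metrics(ranks, q=10):
    """Mean rank (MR), mean reciprocal rank (MRR), and Hits@q from a list of ranks."""
    r = np.asarray(ranks, dtype=float)
    return r.mean(), (1.0 / r).mean(), float(np.mean(r <= q))
```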

The three models described in (4), (5), and (36) are considered in our data analysis, and we refer to them as Models 1, 2, and 3, respectively. For each model, the latent dimension d takes a value from {50, 100, 150, 200, 250}. Due to the high dimensionality of the parameter space, the L2-penalized MLE is used to obtain the estimated latent attributes x̂, with tuning parameters ρ1 = 0 and ρ2 chosen from {0, 10^−2, 10^−3, 10^−4, 10^−5} in (22). Since information-criterion-based selection of the latent dimension and the tuning parameter is computationally intensive for a dataset of this scale, we set aside 5,000 of the unobserved valid triples as a validation set and select the d and ρ2 that produce the largest MRRE on this validation set. The model with the selected d and ρ2 is then evaluated on the test set consisting of the remaining 5,000 unobserved valid triples.

The computed evaluation criteria on the test set are listed in Table 1. The table also includes the selected d and ρ2 for each of the three score models. Models 2 and 3 yield similar performance. The MRRs for the two models are very close to 1, and the Hits@q's are higher than 90%, suggesting that the two models identify valid triples very well. Although Model 1 is inferior to the other two models in terms of most criteria, it outperforms them in MRE. The results imply that Models 2 and 3 could perform extremely poorly on a few triples.


Table 1. Results for WordNet data analysis.

In addition to Models 1–3, we also display the performance of the Canonical Polyadic (CP) decomposition [22] and a tensor factorization approach, RESCAL [23]. Their MRRE and HitsE@10 results on the WordNet dataset are extracted from Trouillon et al. [12] and Nickel et al. [13], respectively. Both methods, especially CP, are outperformed by Model 3.

5. Concluding Remarks

In this article, we focused on the recovery of large-scale MRNs from a small portion of observed edges. We studied a generalized latent space model in which entities and relations are associated with latent attribute vectors and conducted a statistical analysis of the recovery error. MLEs and pMLEs over a compact space are considered to estimate the latent attributes and the edge probabilities. We established non-asymptotic upper bounds for the estimation error in terms of the tail probability and the risk, based on which we studied the asymptotic properties when the size of the MRN and the latent dimension grow simultaneously. A matching lower bound, up to a log factor, is also provided.

We kept ϕ generic for theoretical development. The choice of ϕ is usually problem-specific in practice. How to develop a data-driven method for selecting an appropriate ϕ is an interesting problem to investigate in future works.

Besides the latent space models, sparsity [24] or clustering assumptions [25] have been used to impose low-dimensional structures in single-relational networks. An MRN can be seen as a combination of several heterogeneous single-relational networks. The distribution of edges may vary dramatically across relations. Therefore, it is challenging to impose appropriate sparsity or cluster structures on MRNs. More empirical and theoretical studies are needed to quantify the impact of heterogeneous relations and to incorporate the information for recovering MRNs.

Data Availability Statement

The datasets generated for this study are available on request to the corresponding author.

Author Contributions

ZW provided the theoretical proofs. ZW and XT conducted the simulation and real data analysis. All authors contributed to manuscript writing and revision.

Funding

This work was supported by National Science Foundation grants SES-1826540 and IIS-1633360.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Supplementary Material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fams.2020.540225/full#supplementary-material

References

1. Miller GA. WordNet: a lexical database for English. Commun ACM. (1995) 38:39–41. doi: 10.1145/219717.219748

2. McCray AT. An upper-level ontology for the biomedical domain. Compar Funct Genomics. (2003) 4:80–4. doi: 10.1002/cfg.255

3. Gabrilovich E, Markovitch S. Wikipedia-based semantic interpretation for natural language processing. J Artif Intell Res. (2009) 34:443–98. doi: 10.1613/jair.2669

4. Scott S, Matwin S. Feature engineering for text classification. In: ICML Vol. 99 Bled (1999). p. 379–88.

5. Ferrucci D, Brown E, Chu-Carroll J, Fan J, Gondek D, Kalyanpur AA, et al. Building Watson: an overview of the DeepQA project. AI Mag. (2010) 31:59–79. doi: 10.1609/aimag.v31i3.2303

6. Hoff PD, Raftery AE, Handcock MS. Latent space approaches to social network analysis. J Am Stat Assoc. (2002) 97:1090–8. doi: 10.1198/016214502388618906

7. Bordes A, Usunier N, Garcia-Duran A, Weston J, Yakhnenko O. Translating embeddings for modeling multi-relational data. In: Burges CJC, Bottou L, Welling M, Ghahramani Z, Weinberger KQ, editors. Advances in Neural Information Processing Systems 26. Curran Associates, Inc. (2013). p. 2787–95. Available online at: http://papers.nips.cc/paper/5071-translating-embeddings-for-modeling-multi-relational-data.pdf

8. Wang Z, Zhang J, Feng J, Chen Z. Knowledge graph embedding by translating on hyperplanes. In: AAAI. Québec City, QC (2014). p. 1112–9.

9. Yang B, Yih SWt, He X, Gao J, Deng L. Embedding entities and relations for learning and inference in knowledge bases. In: Proceedings of the International Conference on Learning Representations. San Diego, CA (2015).

10. Lin Y, Liu Z, Sun M, Liu Y, Zhu X. Learning entity and relation embeddings for knowledge graph completion. In: AAAI. Austin, TX (2015). p. 2181–7.

11. Garcia-Duran A, Bordes A, Usunier N, Grandvalet Y. Combining two and three-way embedding models for link prediction in knowledge bases. J Artif Intell Res. (2016) 55:715–42. doi: 10.1613/jair.5013

12. Trouillon T, Welbl J, Riedel S, Gaussier É, Bouchard G. Complex embeddings for simple link prediction. In: International Conference on Machine Learning. (2016). p. 2071–80.

13. Nickel M, Rosasco L, Poggio TA. Holographic embeddings of knowledge graphs. In: AAAI. Phoenix, AZ (2016). p. 1955–61.

14. Liu H, Wu Y, Yang Y. Analogical inference for multi-relational embeddings. In: Proceedings of the 34th International Conference on Machine Learning. Vol. 70. Sydney, NSW (2017). p. 2168–78.

15. Socher R, Chen D, Manning CD, Ng A. Reasoning with neural tensor networks for knowledge base completion. In: Burges CJC, Bottou L, Welling M, Ghahramani Z, Weinberger KQ, editors. Advances in Neural Information Processing Systems. Lake Tahoe, NV: Curran Associates, Inc. (2013) p. 926–34.

16. Kotnis B, Nastase V. Analysis of the impact of negative sampling on link prediction in knowledge graphs. arXiv. (2017) 170806816.

17. Kanojia V, Maeda H, Togashi R, Fujita S. Enhancing knowledge graph embedding with probabilistic negative sampling. In: Proceedings of the 26th International Conference on World Wide Web Companion. Perth, WA (2017) p. 801–2. doi: 10.1145/3041021.3054238

18. Min B, Grishman R, Wan L, Wang C, Gondek D. Distant supervision for relation extraction with an incomplete knowledge base. In: HLT-NAACL. Atlanta, GA (2013). p. 777–82.

19. Zou H, Hastie T. Regularization and variable selection via the elastic net. J R Stat Soc B. (2005) 67:301–20. doi: 10.1111/j.1467-9868.2005.00503.x

20. Duchi J, Hazan E, Singer Y. Adaptive subgradient methods for online learning and stochastic optimization. J Mach Learn Res. (2011) 12:2121–59. doi: 10.5555/1953048.2021068

21. Robbins H, Monro S. A stochastic approximation method. Ann Math Stat. (1951) 22:400–7. doi: 10.1214/aoms/1177729586

22. Hitchcock FL. The expression of a tensor or a polyadic as a sum of products. J Math Phys. (1927) 6:164–89. doi: 10.1002/sapm192761164

23. Nickel M, Tresp V, Kriegel HP. A three-way model for collective learning on multi-relational data. In: Proceedings of the 28th International Conference on International Conference on Machine Learning. ICML'11. Madison, WI: Omnipress (2011). p. 809–16.

24. Tran N, Abramenko O, Jung A. On the sample complexity of graphical model selection from non-stationary samples. IEEE Trans Signal Process. (2020) 68:17–32. doi: 10.1109/TSP.2019.2956687

25. Jung A, Hero AO III, Mara AC, Jahromi S, Heimowitz A, Eldar YC, et al. Semi-supervised learning in network-structured data via total variation minimization. IEEE Trans Signal Process. (2019) 67:6256–69. doi: 10.1109/TSP.2019.2953593

Keywords: multi-relational network, knowledge graph completion, tail probability, risk, asymptotic analysis, non-asymptotic analysis, maximum likelihood estimation

Citation: Wang Z, Tang X and Liu J (2020) Statistical Analysis of Multi-Relational Network Recovery. Front. Appl. Math. Stat. 6:540225. doi: 10.3389/fams.2020.540225

Received: 04 March 2020; Accepted: 31 August 2020;
Published: 19 October 2020.

Edited by:

Yajun Mei, Georgia Institute of Technology, United States

Reviewed by:

Xiangyu Chang, Xi'an Jiaotong University, China
Alex Jung, Aalto University, Finland

Copyright © 2020 Wang, Tang and Liu. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Xueying Tang, xytang@math.arizona.edu

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.