
ORIGINAL RESEARCH article

Front. Appl. Math. Stat., 09 June 2020
Sec. Statistical and Computational Physics

Central Limit Theorem for Linear Eigenvalue Statistics for Submatrices of Wigner Random Matrices

Lingyun Li1, Matthew Reed2 and Alexander Soshnikov2*
  • 1Department of Mathematics, Beijing Technology and Business University, Beijing, China
  • 2Department of Mathematics, University of California, Davis, Davis, CA, United States

We prove the Central Limit Theorem for finite-dimensional vectors of linear eigenvalue statistics of submatrices of Wigner random matrices under the assumption that test functions are sufficiently smooth. We connect the asymptotic covariance to a family of correlated Gaussian Free Fields.

1. Introduction

Wigner random matrices were introduced by Wigner in the 1950's (see e.g., [1–3]) to study energy levels of heavy nuclei. Let $\{W_{jj}\}_{j=1}^{n}$ and $\{W_{jk}\}_{1\le j<k\le n}$ be two independent families of independent and identically distributed real-valued random variables satisfying:

\[
\mathbb{E}\,W_{jk}=0,\qquad \mathbb{E}\,|W_{jk}|^{2}=1 \ \text{ for } j<k,\qquad \text{and}\qquad \mathbb{E}\,[W_{jj}^{2}]=\sigma^{2}. \tag{1.1}
\]

Set $W=(W_{jk})_{j,k=1}^{n}$ with $W_{jk}=W_{kj}$. The Wigner Ensemble of normalized real symmetric $n\times n$ matrices consists of matrices $M$ of the form

\[
M=\frac{1}{\sqrt{n}}\,W. \tag{1.2}
\]

The archetypal example of a Wigner real symmetric random matrix is the Gaussian Orthogonal Ensemble (GOE) defined as [3]

\[
A=\frac{1}{\sqrt{2}}\,(B+B^{t}), \tag{1.3}
\]

where the entries of B are i.i.d. real Gaussian random variables with zero mean and variance 1/2.

Wigner Hermitian random matrices are defined in a similar fashion. Specifically, we assume that $\{W_{jj}\}_{j=1}^{n}$ and $\{W_{jk}\}_{1\le j<k\le n}$ are two independent families of independent and identically distributed real (respectively, complex) random variables satisfying (1.1). The archetypal example of a Wigner Hermitian random matrix is the Gaussian Unitary Ensemble (GUE)

\[
A=\frac{1}{\sqrt{2}}\,(B+B^{*}), \tag{1.4}
\]

where the entries of B are i.i.d. complex standard Gaussian random variables [3].

Over the last sixty years, Random Matrix Theory has developed many exciting connections to Quantum Chaos [4], Quantum Gravity [5], Mesoscopic Physics [6], Numerical Analysis [7], Theoretical Neuroscience [8], Optimal Control [9], Number Theory [10], Integrable Systems [11], Combinatorics [12], Random Growth Models [13], Multivariate Statistics [14], and many other fields of Science and Engineering.

For a real symmetric (Hermitian) matrix $M$ of order $n$, its empirical distribution of the eigenvalues is defined as $\mu_M=\frac{1}{n}\sum_{i=1}^{n}\delta_{\lambda_i}$, where $\lambda_1\le\dots\le\lambda_n$ are the (ordered) eigenvalues of $M$. The Wigner semicircle law states that for any bounded continuous test function $\varphi:\mathbb{R}\to\mathbb{R}$, the linear statistic

\[
\frac{1}{n}\sum_{i=1}^{n}\varphi(\lambda_i)=\frac{1}{n}\operatorname{Tr}\bigl(\varphi(M)\bigr)=:\operatorname{tr}_n\bigl(\varphi(M)\bigr) \tag{1.5}
\]

converges to $\int\varphi(x)\,\mu_{sc}(dx)$ in probability, where $\mu_{sc}$ is determined by its density

\[
\frac{d\mu_{sc}}{dx}(x)=\frac{1}{2\pi}\sqrt{4-x^{2}}\;\mathbf{1}_{[-2,2]}(x), \tag{1.6}
\]

see e.g., Wigner [2], Ben Arous and Guionnet [15], and Anderson et al. [16].
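The semicircle law (1.5)–(1.6) is easy to check numerically. The following minimal sketch (ours, not part of the original article; the test function and all names are illustrative) samples a real Wigner matrix and compares the linear statistic (1.5) with $\int\varphi\,d\mu_{sc}$.

```python
import numpy as np

rng = np.random.default_rng(0)

def wigner(n, sigma2=1.0):
    """Real symmetric Wigner matrix M = W / sqrt(n), entries as in (1.1)-(1.2)."""
    W = rng.standard_normal((n, n))
    W = (W + W.T) / np.sqrt(2)                  # off-diagonal variance 1
    W[np.diag_indices(n)] = rng.standard_normal(n) * np.sqrt(sigma2)
    return W / np.sqrt(n)

phi = lambda x: x**2 + np.cos(x)                # smooth test function (illustrative choice)

n = 2000
lam = np.linalg.eigvalsh(wigner(n))
lin_stat = phi(lam).mean()                      # (1/n) * sum phi(lambda_i), cf. (1.5)

# Reference value: integral of phi against the semicircle density (1.6)
x = np.linspace(-2, 2, 200001)
ref = np.trapz(phi(x) * np.sqrt(4 - x**2) / (2 * np.pi), x)

print(lin_stat, ref)                            # the two numbers agree to a few decimals
```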

The Gaussian fluctuation for linear statistics $\sum_{i=1}^{n}\varphi(\lambda_i)$ has been extensively studied since the pioneering paper by Jonsson [17]. We refer the reader to Johansson [18], Sinai and Soshnikov [19], Bai et al. [20], Lytova and Pastur [21], Shcherbina [22], Anderson and Zeitouni [23], Li and Soshnikov [24], Lodhia and Simm [25], and references therein. The goal of this paper is to prove the central limit theorem for the joint distribution of linear eigenvalue statistics for submatrices of Wigner random matrices.

The rest of the paper is organized as follows. We formulate our results in section 2. Theorem 2.1 is proved in section 3. Theorem 2.2 is proved in section 4. Auxiliary results are discussed in the Appendices.

Research of the last author has been partially supported by the Simons Foundation Collaboration Grant for Mathematicians # 312391.

2. Statement of Main Results

This section is devoted to formulation of the main results of the paper.

For a generic random variable $\xi$, in what follows we denote $\xi^{\circ}:=\xi-\mathbb{E}[\xi]$. For a finite set $B\subset\{1,2,\dots,n\}$ denote by $M(B)$ the submatrix of $M$ formed by the entries corresponding to intersections of rows and columns of $M$ marked by the indices in $B$, which inherits the ordering. For example,

\[
M(\{1,3\})=\begin{pmatrix} M_{11} & M_{13}\\ M_{31} & M_{33}\end{pmatrix}. \tag{2.1}
\]

Let $B_1,\dots,B_d$ be infinite subsets of ℕ such that the $B_i$, $1\le i\le d$, and their pairwise intersections have positive densities. Denote

\[
B_i^{n}=B_i\cap\{1,2,\dots,n\},\quad 1\le i\le d, \tag{2.2}
\]
\[
n_i=|B_i^{n}|,\quad 1\le i\le d, \tag{2.3}
\]
\[
n_{lm}=|B_l^{n}\cap B_m^{n}|,\quad 1\le l\le m\le d. \tag{2.4}
\]

We assume that the following limits exist:

\[
\gamma_l:=\lim_{n\to\infty}\frac{n_l}{n}>0,\qquad \gamma_{lm}:=\lim_{n\to\infty}\frac{n_{lm}}{n},\qquad 1\le l\le m\le d. \tag{2.5}
\]

If it does not lead to ambiguity, we will omit the superindex $n$ in the notation for $B_i^{n}$, $1\le i\le d$. For an $n\times n$ matrix $M$ and $B\subset\{1,2,\dots,n\}$, consider a spectral linear statistic $\sum_{l=1}^{|B|}\varphi(\lambda_l)$, where $\{\lambda_l\}_{l=1}^{|B|}$ are the eigenvalues of the submatrix $M(B)$. We are going to study the joint fluctuations of linear statistics of the eigenvalues. It will be beneficial later to view the submatrices from a different perspective. Consider the matrix $P^{B}=\operatorname{diag}(P^{B}_{jj})$, which projects onto the subspace corresponding to indices in $B$, i.e.,

\[
P^{B}_{jj}=\mathbf{1}_{\{j\in B\}},\quad 1\le j\le n. \tag{2.6}
\]

Define

\[
M^{B}:=P^{B}MP^{B}, \tag{2.7}
\]
\[
N^{B}[\varphi]:=\sum_{l=1}^{n}\varphi(\lambda_l^{B})=\operatorname{Tr}\bigl(\varphi(M^{B})\bigr), \tag{2.8}
\]

where $\{\lambda_l^{B}\}_{l=1}^{n}$ are the eigenvalues of $M^{B}$. Note that the spectra of $M^{B}$ and $M(B)$ differ only by a zero eigenvalue of multiplicity $n-|B|$. As a result, when we consider the linear statistics of their eigenvalues the extra terms $(n-|B|)\varphi(0)$ cancel once we center these random variables. In general, when considering multiple sequences $B_l$, in order to simplify the notation we will write

\[
M^{(l)}:=M^{B_l},\qquad P^{(l)}:=P^{B_l},\qquad N_n^{(l)}[\varphi]:=N^{B_l}[\varphi],\qquad N_n^{(l)\circ}[\varphi]=N_n^{(l)}[\varphi]-\mathbb{E}\{N_n^{(l)}[\varphi]\}. \tag{2.9}
\]

Also, denote by $P^{(l,r)}$ the matrix which projects onto the subspace corresponding to the indices in the intersection $B_l\cap B_r$, i.e.,

\[
P^{(l,r)}=P^{(l)}P^{(r)}=P^{(r)}P^{(l)}. \tag{2.10}
\]
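As an illustration of the relation between $M^{B}$ and $M(B)$ described above, the following sketch (ours, not from the paper) builds the projector (2.6) for a small example and checks that the two spectra differ only by zero eigenvalues.

```python
import numpy as np

def projector(B, n):
    """Diagonal projector P^B of (2.6): (P^B)_{jj} = 1 iff j is in B (1-based indices)."""
    P = np.zeros((n, n))
    idx = [j - 1 for j in B]
    P[idx, idx] = 1.0
    return P

n = 6
rng = np.random.default_rng(1)
W = rng.standard_normal((n, n)); W = (W + W.T) / np.sqrt(2)
M = W / np.sqrt(n)

B = [1, 3, 4]
P = projector(B, n)
MB = P @ M @ P                                               # M^B of (2.7)
M_of_B = M[np.ix_([j - 1 for j in B], [j - 1 for j in B])]   # M(B) as in (2.1)

# The spectrum of M^B equals the spectrum of M(B) plus a zero eigenvalue
# of multiplicity n - |B|, so centered linear statistics of the two coincide.
print(np.sort(np.linalg.eigvalsh(MB)))
print(np.sort(np.linalg.eigvalsh(M_of_B)))
```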

Recall that a test function $\varphi:\mathbb{R}\to\mathbb{R}$ belongs to the Sobolev space $H_s$ if

\[
\|\varphi\|_s^{2}:=\int_{-\infty}^{\infty}(1+|t|)^{2s}\,|\widehat{\varphi}(t)|^{2}\,dt<\infty, \tag{2.11}
\]

where $\widehat{\varphi}$ is its Fourier transform. First we consider Gaussian Wigner matrices.

Theorem 2.1. Let $W=\{W_{jk}: W_{jk}=W_{kj}\}_{j,k=1}^{n}$ be an $n\times n$ real symmetric random matrix with Gaussian entries satisfying (1.1) and $M=n^{-1/2}W$. Let $B_1,\dots,B_d$ be infinite subsets of ℕ satisfying (2.2)–(2.5). Let $\varphi_1,\dots,\varphi_d:\mathbb{R}\to\mathbb{R}$ be test functions that satisfy the regularity condition $\|\varphi_l\|_s<\infty$, for some $s>\frac{5}{2}$. Then the random vector

\[
\bigl(N_n^{(1)\circ}[\varphi_1],\dots,N_n^{(d)\circ}[\varphi_d]\bigr), \tag{2.12}
\]

converges in distribution to the zero mean Gaussian vector $(G_1,\dots,G_d)\in\mathbb{R}^{d}$ with the covariance given by

\[
\begin{aligned}
\operatorname{Cov}(G_l,G_p)
&=\frac{\sigma^{2}}{4}\,(\varphi_l)_1(\varphi_p)_1\,\frac{\gamma_{lp}}{\sqrt{\gamma_l\gamma_p}}
+\frac{1}{2}\sum_{k=2}^{\infty}k\,(\varphi_l)_k(\varphi_p)_k\Bigl(\frac{\gamma_{lp}}{\sqrt{\gamma_l\gamma_p}}\Bigr)^{k}\\
&=\frac{2}{\pi}\oint_{\substack{|z|^{2}=\gamma_l\\ \Im z>0}}\;\oint_{\substack{|w|^{2}=\gamma_p\\ \Im w>0}}
\varphi_l'\Bigl(z+\frac{\gamma_l}{z}\Bigr)\,\varphi_p'\Bigl(w+\frac{\gamma_p}{w}\Bigr)\,
\frac{1}{2\pi}\ln\Bigl|\frac{\gamma_{lp}-zw}{\gamma_{lp}-z\bar w}\Bigr|\,
\Bigl(1-\frac{\gamma_l}{z^{2}}\Bigr)\Bigl(1-\frac{\gamma_p}{w^{2}}\Bigr)\,dz\,dw\\
&\quad+\frac{\gamma_{lp}(\sigma^{2}-2)}{4\pi^{2}\gamma_l\gamma_p}
\int_{-2\sqrt{\gamma_l}}^{2\sqrt{\gamma_l}}\frac{\lambda\,\varphi_l(\lambda)}{\sqrt{4\gamma_l-\lambda^{2}}}\,d\lambda
\int_{-2\sqrt{\gamma_p}}^{2\sqrt{\gamma_p}}\frac{\mu\,\varphi_p(\mu)}{\sqrt{4\gamma_p-\mu^{2}}}\,d\mu.
\end{aligned}\tag{2.13}
\]

In the expression for the covariance, $(\varphi_l)_k$ denotes the coefficients in the expansion of $\varphi_l$ in the (rescaled) Chebyshev basis, i.e.,

\[
\varphi_l(x)=\sum_{k=0}^{\infty}(\varphi_l)_k\,T_k^{\gamma_l}(x),\qquad
(\varphi_l)_k=\frac{2}{\pi}\int_{-2\sqrt{\gamma_l}}^{2\sqrt{\gamma_l}}\varphi_l(t)\,T_k^{\gamma_l}(t)\,\frac{dt}{\sqrt{4\gamma_l-t^{2}}} \tag{2.14}
\]

and

\[
T_k^{\gamma}(x)=\cos\Bigl(k\arccos\Bigl(\frac{x}{2\sqrt{\gamma}}\Bigr)\Bigr). \tag{2.15}
\]
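Since the covariance (2.13) is expressed through the coefficients (2.14)–(2.15), the following minimal sketch (ours; the test function $e^{x}$ and all names are illustrative) computes these coefficients numerically and reconstructs the test function on $[-2\sqrt{\gamma},2\sqrt{\gamma}]$. Note that, as printed, (2.14) uses the weight $2/\pi$ for every $k$, so in the reconstruction the $k=0$ coefficient enters with weight $1/2$.

```python
import numpy as np

def chebyshev_coeffs(phi, gamma, kmax, m=4000):
    """Coefficients (phi)_k of (2.14), via the substitution t = 2*sqrt(gamma)*cos(theta),
    which turns (2.14) into (2/pi) * int_0^pi phi(2*sqrt(gamma)*cos(theta)) cos(k*theta) dtheta."""
    theta = np.linspace(0.0, np.pi, m)
    vals = phi(2 * np.sqrt(gamma) * np.cos(theta))
    return np.array([(2 / np.pi) * np.trapz(vals * np.cos(k * theta), theta)
                     for k in range(kmax + 1)])

def T(k, gamma, x):
    """Rescaled Chebyshev polynomial (2.15) on [-2*sqrt(gamma), 2*sqrt(gamma)]."""
    return np.cos(k * np.arccos(x / (2 * np.sqrt(gamma))))

gamma = 0.7
phi = lambda x: np.exp(x)
c = chebyshev_coeffs(phi, gamma, kmax=20)

x = np.linspace(-2 * np.sqrt(gamma) + 1e-6, 2 * np.sqrt(gamma) - 1e-6, 5)
# k = 0 enters with weight 1/2 because of the uniform 2/pi normalization in (2.14)
recon = 0.5 * c[0] + sum(c[k] * T(k, gamma, x) for k in range(1, 21))
print(np.max(np.abs(recon - phi(x))))          # small reconstruction error
```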

Note the form of the kernel in the above contour integral expression for the covariance. Since it is the Green's function for the Laplacian on ℍ with Dirichlet boundary conditions (appropriately scaled), we note that the limiting distributions form a family of correlated Gaussian free fields. This is consistent with the previous work of Borodin [26, 27] for the covariance of linear eigenvalue statistics corresponding to polynomial test functions. Now we formulate our result for non-Gaussian Wigner matrices.

Theorem 2.2. Let $W=(W_{jk})_{j,k=1}^{n}$ be an $n\times n$ random matrix and $M=n^{-1/2}W$. Let $B_1,\dots,B_d$ be infinite subsets of ℕ satisfying (2.2)–(2.4) and (2.5). Assume the following conditions:

(1) All the entries of W are independent random variables.

(2) The fourth moment of the non-zero off-diagonal entries does not depend on n:

\[
\mu_4=\mathbb{E}\{W_{jk}^{4}\}.
\]

(3) There exists a constant $\sigma_6$ such that for any $j,k$, $\mathbb{E}\{|W_{jk}|^{6}\}<\sigma_6$.

Let $\varphi_1,\dots,\varphi_d:\mathbb{R}\to\mathbb{R}$ be test functions that satisfy the regularity condition $\|\varphi_l\|_s<\infty$, for some $s>5.5$. Then the random vector (2.12) converges in distribution to the zero mean Gaussian vector $(\tilde G_1,\dots,\tilde G_d)\in\mathbb{R}^{d}$ with the covariance given by

\[
\operatorname{Cov}(\tilde G_l,\tilde G_p)=\operatorname{Cov}(G_l,G_p)+\frac{\kappa_4\,\gamma_{lp}^{2}}{2\pi^{2}\gamma_l^{2}\gamma_p^{2}}
\int_{-2\sqrt{\gamma_l}}^{2\sqrt{\gamma_l}}\varphi_l(\lambda)\,\frac{2\gamma_l-\lambda^{2}}{\sqrt{4\gamma_l-\lambda^{2}}}\,d\lambda
\int_{-2\sqrt{\gamma_p}}^{2\sqrt{\gamma_p}}\varphi_p(\mu)\,\frac{2\gamma_p-\mu^{2}}{\sqrt{4\gamma_p-\mu^{2}}}\,d\mu, \tag{2.16}
\]

where Cov(Gl, Gp) is given by (2.13).

In the course of the proof of Theorem 2.1, it has been necessary to understand the following bilinear form.

Definition 2.3. Let $M$ be a Wigner matrix satisfying (1.1), and let $P^{(l)}$, $P^{(l,r)}$ be the projection matrices defined in (2.6) and (2.10). For functions $f,g\in H_s$, $s>\frac{3}{2}$, define

\[
\langle f,g\rangle_{lr}:=\lim_{n\to\infty}\frac{1}{n}\sum_{j,k\in B_l\cap B_r}\mathbb{E}\bigl[f(M^{(l)})_{jk}\cdot g(M^{(r)})_{kj}\bigr]
=\lim_{n\to\infty}\frac{1}{n}\,\mathbb{E}\bigl[\operatorname{Tr}\{P^{(l)}f(M^{(l)})\cdot P^{(l,r)}\cdot g(M^{(r)})P^{(r)}\}\bigr]. \tag{2.17}
\]

Remark 2.4. The bilinear form $\langle\cdot,\cdot\rangle_{lr}$ is well defined on $H_s\times H_s$ as a consequence of Proposition 3.9. The bilinear form is also well defined for polynomial $f$ and $g$; see section 3.2 and also Lemma 2.5 below.

The following diagonalization lemma is an important technical tool for the proof of Theorem 2.1.

Lemma 2.5. The two families $\{U_k^{\gamma_l}\}_{k=0}^{\infty}$ and $\{U_q^{\gamma_r}\}_{q=0}^{\infty}$ of rescaled Chebyshev polynomials of the second kind diagonalize the bilinear form (2.17). More precisely,

\[
\frac{1}{\sqrt{\gamma_l\gamma_r}}\,\bigl\langle U_k^{\gamma_l},U_q^{\gamma_r}\bigr\rangle_{lr}=\delta_{kq}\Bigl(\frac{\gamma_{lr}}{\sqrt{\gamma_l\gamma_r}}\Bigr)^{k+1}. \tag{2.18}
\]

Let $f,g\in H_s$, for some $s>\frac{3}{2}$. A consequence of (2.18) is that

\[
\langle f,g\rangle_{lr}=\frac{1}{4\pi^{2}\gamma_l\gamma_r}\int_{-2\sqrt{\gamma_l}}^{2\sqrt{\gamma_l}}\int_{-2\sqrt{\gamma_r}}^{2\sqrt{\gamma_r}} f(x)\,g(y)\Bigl[\sum_{k=0}^{\infty}U_k^{\gamma_l}(x)\,U_k^{\gamma_r}(y)\,\frac{\gamma_{lr}^{k+1}}{\gamma_l^{k/2}\gamma_r^{k/2}}\Bigr]\sqrt{4\gamma_l-x^{2}}\,\sqrt{4\gamma_r-y^{2}}\;dy\,dx. \tag{2.19}
\]

In section 3.2, it will also be proved that, with f, g given as above, almost surely

\[
\lim_{n\to\infty}\frac{1}{n}\operatorname{Tr}\{P^{(l)}f(M^{(l)})\cdot P^{(l,r)}\cdot g(M^{(r)})P^{(r)}\}
=\frac{1}{4\pi^{2}\gamma_l\gamma_r}\int_{-2\sqrt{\gamma_l}}^{2\sqrt{\gamma_l}}\int_{-2\sqrt{\gamma_r}}^{2\sqrt{\gamma_r}} f(x)\,g(y)\Bigl[\sum_{k=0}^{\infty}U_k^{\gamma_l}(x)\,U_k^{\gamma_r}(y)\,\frac{\gamma_{lr}^{k+1}}{\gamma_l^{k/2}\gamma_r^{k/2}}\Bigr]\sqrt{4\gamma_l-x^{2}}\,\sqrt{4\gamma_r-y^{2}}\;dy\,dx. \tag{3.139} \quad\text{(cf. (2.20))}
\]

Remark 2.6. Recall that the rescaled Chebyshev polynomials of the second kind are orthonormal with respect to the Wigner semicircle law, i.e.,

\[
\frac{1}{2\pi\gamma}\int_{-2\sqrt{\gamma}}^{2\sqrt{\gamma}}U_k^{\gamma}(x)\,U_q^{\gamma}(x)\,\sqrt{4\gamma-x^{2}}\,dx=\delta_{kq}. \tag{2.21}
\]

Also,

\[
U_k^{\gamma}\bigl(2\sqrt{\gamma}\cos\theta\bigr)=\frac{\sin\bigl((k+1)\theta\bigr)}{\sin\theta}. \tag{2.22}
\]
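A quick numerical sanity check (ours, not part of the paper) of the orthonormality relation (2.21), using the trigonometric form (2.22):

```python
import numpy as np

def U(k, gamma, x):
    """Rescaled Chebyshev polynomial of the second kind via the trigonometric identity (2.22)."""
    theta = np.arccos(x / (2 * np.sqrt(gamma)))
    return np.sin((k + 1) * theta) / np.sin(theta)

gamma = 0.4
x = np.linspace(-2 * np.sqrt(gamma) + 1e-9, 2 * np.sqrt(gamma) - 1e-9, 200001)
w = np.sqrt(4 * gamma - x**2) / (2 * np.pi * gamma)   # semicircle density of variance gamma

# Numerical check of (2.21) for a few indices
for k in range(3):
    for q in range(3):
        val = np.trapz(U(k, gamma, x) * U(q, gamma, x) * w, x)
        print(k, q, round(val, 3))        # approximately 1 if k == q, else approximately 0
```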

The proof of Theorem 2.1 appears in section 3 and the proof of Theorem 2.2 appears in section 4.

Remark 2.7. Theorems 2.1 and 2.2 prove convergence of finite-dimensional distributions. This paper does not address the functional convergence which would require a tightness result.

3. Proof of Theorem 2.1

3.1. Stein-Tikhomirov Method

We follow the approach used by Lytova and Pastur [21] for the full Wigner matrix case (see also [28–30]). Essentially, it is a modification of the Stein-Tikhomirov method (see e.g., [31]). This approach was also used to prove the CLT for linear eigenvalue statistics of band random matrices in Li and Soshnikov [24], which is connected to our work through the Chu-Vandermonde identity (see section 3.2). While several steps of our proof are similar to the ones in Lytova and Pastur [21], the fact that we are dealing with submatrices introduces new technical difficulties.

We will prove Theorem 2.1 in the present section and extend the technique to non-Gaussian Wigner matrices later. The following inequalities will be used often. As a consequence of the Poincaré inequality, one can bound from above the variance of $\operatorname{Tr}\varphi(M)$ for a differentiable test function $\varphi$ as

\[
\operatorname{Var}\{\operatorname{Tr}\varphi(M)\}\le\frac{4(\sigma^{2}+1)}{n}\,\mathbb{E}\bigl[\operatorname{Tr}\{\varphi'(M)\,(\varphi'(M))^{*}\}\bigr] \tag{3.1}
\]
\[
\le 4(\sigma^{2}+1)\bigl(\sup_x|\varphi'(x)|\bigr)^{2}. \tag{3.2}
\]

We refer the reader to Lytova and Pastur [21] for the details. The next inequality is due to Shcherbina (see [22]). Let $s>3/2$ and $\varphi\in H_s$. Then there is a constant $C_s>0$, so that

\[
\operatorname{Var}\{\operatorname{Tr}\varphi(M)\}\le C_s\,\|\varphi\|_s^{2}. \tag{3.3}
\]

Let $\epsilon>0$ and set $s=\frac{5}{2}+\epsilon$. Recall that the regularity assumption on the test functions is that $\|\varphi_l\|_{5/2+\epsilon}<\infty$, for $1\le l\le d$. There exists a $C_\epsilon>0$ so that

\[
\operatorname{Var}\{N^{(l)}[\varphi_l]\}=\operatorname{Var}\{\operatorname{Tr}\varphi_l(M(B_l))\}\le C_\epsilon\,\|\varphi_l\|_{5/2+\epsilon}^{2}. \tag{3.4}
\]

The inequality holds because of (3.3), since M(Bl) is an ordinary |Bl| × |Bl| Gaussian Wigner matrix. We note that the bound is n-independent.

It is sufficient to prove the CLT for all linear combinations of the components of the random vector (2.12). Consider a linear combination $\xi:=\sum_{l=1}^{d}\alpha_l\,N^{(l)\circ}[\varphi_l]$, and denote the characteristic function by

\[
Z_n(x)=\mathbb{E}\bigl[e^{ix\xi}\bigr]. \tag{3.5}
\]

It is a basic fact that the characteristic function of the Gaussian distribution with variance V is given by

\[
Z(x):=e^{-x^{2}V/2}. \tag{3.6}
\]

As a consequence of the Lévy continuity theorem, to prove Theorem 2.1 it will be sufficient to demonstrate that for each $x\in\mathbb{R}$,

\[
\lim_{n\to\infty}Z_n(x)=Z(x), \tag{3.7}
\]

where $Z(x)$ is given as above with

\[
V:=\lim_{n\to\infty}\Bigl[\sum_{l=1}^{d}\alpha_l^{2}\operatorname{Var}\bigl(N_n^{(l)\circ}[\varphi_l]\bigr)+2\sum_{1\le l<r\le d}\alpha_l\alpha_r\operatorname{Cov}\bigl(N_n^{(l)\circ}[\varphi_l],\,N_n^{(r)\circ}[\varphi_r]\bigr)\Bigr]. \tag{3.8}
\]

So $V$ is the limiting variance of $\xi$. It will be demonstrated that $Z_n(x)$ converges uniformly to the solution of the following equation

\[
Z(x)=1-V\int_{0}^{x}y\,Z(y)\,dy. \tag{3.9}
\]

Note that (3.6) is the unique solution of (3.9) within the class of bounded and continuous functions. Therefore, to prove the theorem, it is sufficient to demonstrate that the pointwise limit of Zn(x) is a continuous and bounded function which satisfies Equation (3.9), with V given by (3.8).
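For completeness, here is a one-line verification (ours) that (3.6) indeed solves (3.9):

\[
1-V\int_{0}^{x}y\,e^{-y^{2}V/2}\,dy=1+\Bigl[e^{-y^{2}V/2}\Bigr]_{y=0}^{y=x}=e^{-x^{2}V/2}=Z(x),
\]

and uniqueness within the class of bounded continuous functions follows, e.g., from Gronwall's inequality applied to the difference of two solutions of (3.9).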

Observe that

\[
Z_n'(x)=i\,\mathbb{E}\bigl[\xi\,e^{ix\xi}\bigr]=i\sum_{l=1}^{d}\alpha_l\,\mathbb{E}\bigl\{N_n^{(l)\circ}[\varphi_l]\,e^{ix\xi}\bigr\}. \tag{3.10}
\]

Now it follows by the Cauchy-Schwarz inequality and (3.4) that

\[
|Z_n'(x)|\le\sum_{l=1}^{d}|\alpha_l|\,\operatorname{Var}^{1/2}\{N^{(l)}[\varphi_l]\}\le \mathrm{Const}\sum_{l=1}^{d}|\alpha_l|\,\|\varphi_l\|_{5/2+\epsilon}. \tag{3.11}
\]

Since $Z_n(0)=1$, we have by the fundamental theorem of calculus that

\[
Z_n(x)=1+\int_{0}^{x}Z_n'(y)\,dy. \tag{3.12}
\]

Then to prove the CLT it is sufficient to show that any uniformly converging subsequences $\{Z_{n_m}\}$ and $\{Z_{n_m}'\}$ satisfy

\[
\lim_{n_m\to\infty}Z_{n_m}(x)=Z(x), \tag{3.13}
\]

and

\[
\lim_{n_m\to\infty}Z_{n_m}'(x)=-xV\,Z(x). \tag{3.14}
\]

A pre-compactness argument based on the Arzelà–Ascoli theorem will be developed below, which ensures that the subsequences converge uniformly, implying that the limit is a continuous function. The estimate $|Z_n(x)|\le 1$, for all $n$, shows that the sequence is uniformly bounded. Generally we will abuse the subsequence notation by writing $\{n\}$ for a uniformly converging subsequence. Since (3.11) combined with $\|\varphi_l\|_{5/2+\epsilon}<\infty$ justifies an application of the dominated convergence theorem in (3.12), it follows from (3.13) and (3.14) that the limit of $Z_n(x)$ satisfies equation (3.9). Therefore the pointwise limit (3.7) holds. We turn our attention to the pre-compactness argument, and will argue later that (3.13) and (3.14) hold. We follow the notations used in Lytova and Pastur [21]. Denote by

\[
D_{jk}:=\partial/\partial M_{jk}; \tag{3.15}
\]
\[
U^{(l)}(t):=e^{itM^{(l)}},\qquad U^{(l)}_{jk}(t):=\bigl(U^{(l)}(t)\bigr)_{jk}; \tag{3.16}
\]
\[
u_n^{(l)}(t):=\operatorname{Tr}\{P^{(l)}U^{(l)}(t)P^{(l)}\},\qquad u_n^{(l)\circ}(t):=u_n^{(l)}(t)-\mathbb{E}\{u_n^{(l)}(t)\}. \tag{3.17}
\]

Recall that $U^{(l)}(t)$ is a unitary matrix, and writing $\beta_{jk}:=(1+\delta_{jk})^{-1}$, we have

\[
|U^{(l)}_{jk}|\le 1,\qquad \sum_{k=1}^{n}|U^{(l)}_{jk}|^{2}=1,\qquad \|U^{(l)}\|=1. \tag{3.18}
\]

Moreover,

\[
D_{jk}U^{(l)}_{ab}(t)=i\,\beta_{jk}\,\mathbf{1}_{\{j,k\in B_l\}}\bigl(U^{(l)}_{aj}*U^{(l)}_{bk}(t)+U^{(l)}_{ak}*U^{(l)}_{bj}(t)\bigr), \tag{3.19}
\]

where

\[
f*g\,(t):=\int_{0}^{t}f(y)\,g(t-y)\,dy. \tag{3.20}
\]

Applying the Fourier inversion formula

\[
\varphi_l(\lambda)=\int_{-\infty}^{\infty}e^{it\lambda}\,\widehat{\varphi}_l(t)\,dt, \tag{3.21}
\]

it follows that

\[
N^{(l)}[\varphi_l]=\int\widehat{\varphi}_l(t)\,u_n^{(l)}(t)\,dt. \tag{3.22}
\]

Now define

\[
e_n(x):=e^{ix\xi}. \tag{3.23}
\]

Using the Fourier representation of the linear eigenvalue statistics in (3.10), it follows that

\[
Z_n'(x)=i\sum_{l=1}^{d}\alpha_l\int\widehat{\varphi}_l(t)\,Y_n^{(l)}(x,t)\,dt, \tag{3.24}
\]

where

\[
Y_n^{(l)}(x,t):=\mathbb{E}\bigl[u_n^{(l)\circ}(t)\,e_n(x)\bigr]. \tag{3.25}
\]

The limit of Yn(l)(x,t) is determined later in the proof. Since

\[
\overline{Y_n^{(l)}(x,t)}=Y_n^{(l)}(-x,-t), \tag{3.26}
\]

we need only consider t ≥ 0. It will now be demonstrated that each sequence {Yn(l)} is bounded and equicontinuous on compact subsets of {x ∈ ℝ, t ≥ 0}, and that every uniformly converging subsequence has the same limit Y(l), implying (3.13) and (3.14). See proposition 3.1.

Let $\varphi(x)=e^{itx}$, and note that $\sup_x|\varphi'(x)|=|t|$. Applying the inequality (3.2) to the linear eigenvalue statistic $N^{(l)}[\varphi]$, we obtain

\[
\operatorname{Var}\{u_n^{(l)}(t)\}=\operatorname{Var}\{N^{(l)}[\varphi]\}\le 4(\sigma^{2}+1)\,t^{2}. \tag{3.27}
\]

Now set $\varphi(x)=ixe^{itx}$, and notice that

\[
\frac{d}{dt}u_n^{(l)}(t)=i\operatorname{Tr}\{M^{(l)}e^{itM^{(l)}}\}.
\]

Using the inequality (3.1) and the fact that $n^{-1}\mathbb{E}\operatorname{Tr}(M^{(l)})^{2}\le\sigma^{2}+1$, it follows that

\[
\operatorname{Var}\Bigl\{\frac{d}{dt}u_n^{(l)}(t)\Bigr\}\le\frac{4(\sigma^{2}+1)}{n}\,\mathbb{E}\bigl[\operatorname{Tr}\{\varphi'(M^{(l)})(\varphi'(M^{(l)}))^{*}\}\bigr]
\le\frac{4(\sigma^{2}+1)}{n}\,\mathbb{E}\bigl[\operatorname{Tr}\{1+t^{2}(M^{(l)})^{2}\}\bigr]
\le 4(\sigma^{2}+1)\bigl[1+(\sigma^{2}+1)t^{2}\bigr]. \tag{3.28}
\]

Using the Cauchy-Schwarz inequality, the bound |en(x)| ≤ 1, (3.27) and (3.28), we obtain

\[
|Y_n^{(l)}(x,t)|\le\operatorname{Var}^{1/2}\{u_n^{(l)}(t)\}\le 2(\sigma^{2}+1)^{1/2}\,|t|, \tag{3.29}
\]

and also

\[
\Bigl|\frac{\partial}{\partial t}Y_n^{(l)}(x,t)\Bigr|\le\operatorname{Var}^{1/2}\Bigl\{\frac{d}{dt}u_n^{(l)}(t)\Bigr\}\le 2\bigl(\sigma^{2}+1+(\sigma^{2}+1)^{2}t^{2}\bigr)^{1/2}. \tag{3.30}
\]

Observe that

\[
\frac{d}{dx}e_n(x)=i\,e_n(x)\sum_{r=1}^{d}\alpha_r\,N^{(r)\circ}[\varphi_r].
\]

Using the above derivative with the Cauchy-Schwarz inequality, (3.4) and (3.27), we have that

\[
\Bigl|\frac{\partial}{\partial x}Y_n^{(l)}(x,t)\Bigr|=\Bigl|i\sum_{r=1}^{d}\alpha_r\,\mathbb{E}\bigl[u_n^{(l)\circ}(t)\,N_n^{(r)\circ}[\varphi_r]\,e_n(x)\bigr]\Bigr|
\le\operatorname{Var}^{1/2}\{u_n^{(l)}(t)\}\sum_{r=1}^{d}|\alpha_r|\,\operatorname{Var}^{1/2}\{N^{(r)}[\varphi_r]\}
\le \mathrm{Const}\cdot|t|\sum_{r=1}^{d}|\alpha_r|\,\|\varphi_r\|_{5/2+\epsilon}. \tag{3.31}
\]

It follows from (3.29), the mean value theorem combined with (3.30) and (3.31), and ||φr||5/2+ϵ < ∞, that each sequence Yn(l)(x,t) is bounded and equicontinuous on compact subsets of ℝ2. The following proposition justifies this restriction.

Proposition 3.1. In order to prove the functions Yn(l)(x,t) converge uniformly to appropriate limits so that (3.24) implies (3.14), it is sufficient to prove the convergence of Yn(l)(x,t) on arbitrary compact subsets of {x ∈ ℝ, t ≥ 0}.

Proof: Let $\delta>0$. Recall that the regularity assumption on the test functions $\varphi_l$ is

\[
\int(1+|h|)^{5+\epsilon}\,|\widehat{\varphi}_l(h)|^{2}\,dh<\infty,
\]

i.e., that $\varphi_l\in H_s$, with $s=5/2+\epsilon$. Using the Cauchy-Schwarz inequality, it follows that

\[
\int(1+|h|)\,|\widehat{\varphi}_l(h)|\,dh\le\Bigl(\int\frac{dh}{(1+|h|)^{3+\epsilon}}\Bigr)^{1/2}\Bigl(\int(1+|h|)^{5+\epsilon}\,|\widehat{\varphi}_l(h)|^{2}\,dh\Bigr)^{1/2}, \tag{3.32}
\]

which implies that

\[
\int|h|\cdot|\widehat{\varphi}_l(h)|\,dh<\infty. \tag{3.33}
\]

A consequence of the finiteness of the integral in (3.33), for each $1\le l\le d$, is that there exists a $T>0$ so that

\[
2(\sigma^{2}+1)^{1/2}\sum_{l=1}^{d}|\alpha_l|\int_{|t|\ge T}|t|\cdot|\widehat{\varphi}_l(t)|\,dt<\delta. \tag{3.34}
\]

Using (3.24), we can write

\[
Z_n'(x)=i\sum_{l=1}^{d}\alpha_l\int_{-T}^{T}\widehat{\varphi}_l(t)\,Y_n^{(l)}(x,t)\,dt+i\sum_{l=1}^{d}\alpha_l\int_{|t|\ge T}\widehat{\varphi}_l(t)\,Y_n^{(l)}(x,t)\,dt. \tag{3.35}
\]

Then (3.35), (3.29), (3.34) imply that

\[
\Bigl|Z_n'(x)-i\sum_{l=1}^{d}\alpha_l\int_{-T}^{T}\widehat{\varphi}_l(t)\,Y_n^{(l)}(x,t)\,dt\Bigr|
\le\sum_{l=1}^{d}|\alpha_l|\int_{|t|\ge T}|\widehat{\varphi}_l(t)|\cdot|Y_n^{(l)}(x,t)|\,dt
\le 2(\sigma^{2}+1)^{1/2}\sum_{l=1}^{d}|\alpha_l|\int_{|t|\ge T}|t|\cdot|\widehat{\varphi}_l(t)|\,dt<\delta. \tag{3.36}
\]

Notice that the estimate (3.36) is n-independent, so that in particular the estimate holds in the limit n → ∞. Since δ was arbitrary, this completes the proof of the proposition.

This completes the pre-compactness argument, which allows us to pass to the limit in (3.24) and in (3.12), and conclude that Zn(x) converges pointwise to the unique solution of equation (3.9) belonging to Cb(ℝ), implying (3.7), and hence the conclusion of the theorem. Now we show the limiting behavior of the sequences Yn(l)(x,t) imply (3.13) and (3.14). Consider the identity

\[
e^{itM^{(l)}}=I+i\int_{0}^{t}M^{(l)}e^{ihM^{(l)}}\,dh.
\]

Apply this identity, noting that $M^{(l)}_{jk}=0$ unless $j,k\in B_l$, to obtain that

\[
u_n^{(l)\circ}(t)=\operatorname{Tr}\{P^{(l)}U^{(l)}(t)P^{(l)}\}-\mathbb{E}\bigl[\operatorname{Tr}\{P^{(l)}U^{(l)}(t)P^{(l)}\}\bigr]
=i\int_{0}^{t}\sum_{j,k=1}^{n}\Bigl[M^{(l)}_{jk}U^{(l)}_{jk}(t_1)-\mathbb{E}\bigl[M^{(l)}_{jk}U^{(l)}_{jk}(t_1)\bigr]\Bigr]\,dt_1. \tag{3.37}
\]

Recalling that $Y_n^{(l)}(x,t)=\mathbb{E}[u_n^{(l)\circ}(t)\,e_n(x)]$, and applying the decoupling formula (see Appendix 1) for Gaussian random variables, it follows from (3.37) that

\[
Y_n^{(l)}(x,t)=i\int_{0}^{t}\sum_{j,k=1}^{n}\mathbb{E}\bigl[M^{(l)}_{jk}U^{(l)}_{jk}(t_1)\,e_n^{\circ}(x)\bigr]\,dt_1
=\frac{2i}{n}\int_{0}^{t}\sum_{1\le j<k\le n}\mathbf{1}_{\{j,k\in B_l\}}\mathbb{E}\bigl[D_{jk}\bigl(U^{(l)}_{jk}(t_1)\,e_n^{\circ}(x)\bigr)\bigr]\,dt_1
+\frac{i\sigma^{2}}{n}\int_{0}^{t}\sum_{j=1}^{n}\mathbf{1}_{\{j\in B_l\}}\mathbb{E}\bigl[D_{jj}\bigl(U^{(l)}_{jj}(t_1)\,e_n^{\circ}(x)\bigr)\bigr]\,dt_1. \tag{3.38}
\]

It will be useful to rewrite (3.38) as

\[
Y_n^{(l)}(x,t)=\underbrace{\frac{i}{n}\int_{0}^{t}\sum_{j,k=1}^{n}\mathbf{1}_{\{j,k\in B_l\}}(1+\delta_{jk})\,\mathbb{E}\bigl[D_{jk}\bigl(U^{(l)}_{jk}(t_1)\,e_n^{\circ}(x)\bigr)\bigr]\,dt_1}_{=:T_1}
+\underbrace{\frac{i(\sigma^{2}-2)}{n}\int_{0}^{t}\sum_{j=1}^{n}\mathbf{1}_{\{j\in B_l\}}\mathbb{E}\bigl[D_{jj}\bigl(U^{(l)}_{jj}(t_1)\,e_n^{\circ}(x)\bigr)\bigr]\,dt_1}_{=:T_2}. \tag{3.39}
\]

The reason for the rewrite is that it splits the functions Yn(l)(x,t) into a part that depends on the distribution of the diagonal entries and a part that corresponds to the same term as for the Gaussian Orthogonal Ensemble, for which σ2 = 2. Recalling that en(x) is given by (3.23), again writing βjk=(1+δjk)-1 and using the identity

DjkTrf(M)=2βjkf(M)jk,

it follows by a direct calculation that

Djken(x)=2iβjkxen(x)r=1dαr(P(r)φr(M(r))P(r))jk.    (3.40)

Then for 1 ≤ ld, using (3.40) and (3.19), it follows that

T1=1n0t0t1𝔼[j,k=1n1{j,kBl}Ujj(l)(t2)Ukk(l)(t1t2)en°(x)]dt2dt1          1n0t0t1𝔼[j,k=1n1{j,kBl}Ujk(l)(t2)Ujk(l)(t1t2)en°(x)]dt2dt1          2xn0t𝔼[j,k=1n1{j,kBl}Ujk(l)(t1)en(x)r=1dαr(P(r)φr(M(r))P(r))jk]dt1,    (3.41)

and also that

\[
T_2=\underbrace{-\frac{(\sigma^{2}-2)}{n}\int_{0}^{t}\int_{0}^{t_1}\mathbb{E}\Bigl[\sum_{j=1}^{n}\mathbf{1}_{\{j\in B_l\}}U^{(l)}_{jj}(t_2)\,U^{(l)}_{jj}(t_1-t_2)\,e_n^{\circ}(x)\Bigr]\,dt_2\,dt_1}_{=:T_{21}}
\underbrace{-\frac{(\sigma^{2}-2)\,x}{n}\int_{0}^{t}\mathbb{E}\Bigl[\sum_{j=1}^{n}\mathbf{1}_{\{j\in B_l\}}U^{(l)}_{jj}(t_1)\,e_n(x)\sum_{r=1}^{d}\alpha_r\bigl(P^{(r)}\varphi_r'(M^{(r)})P^{(r)}\bigr)_{jj}\Bigr]\,dt_1}_{=:T_{22}}. \tag{3.42}
\]

Using the semigroup property

\[
U^{(l)}(t)\,U^{(l)}(h)=U^{(l)}(t+h),
\]

it follows from (3.41) that $T_1$ can be written

\[
T_1=\underbrace{-\frac{1}{n}\int_{0}^{t}\int_{0}^{t_1}\mathbb{E}\bigl[u_n^{(l)}(t_1-t_2)\,u_n^{(l)}(t_2)\,e_n^{\circ}(x)\bigr]\,dt_2\,dt_1}_{=:T_{11}}
\underbrace{-\frac{1}{n}\int_{0}^{t}t_1\,\mathbb{E}\bigl[u_n^{(l)}(t_1)\,e_n^{\circ}(x)\bigr]\,dt_1}_{=:T_{12}}
\underbrace{-\frac{2x}{n}\sum_{r=1}^{d}\alpha_r\int_{0}^{t}\mathbb{E}\bigl[\operatorname{Tr}\{P^{(l)}U^{(l)}(t_1)P^{(l,r)}\varphi_r'(M^{(r)})P^{(r)}\}\,e_n(x)\bigr]\,dt_1}_{=:T_{13}}. \tag{3.43}
\]

Define

\[
\bar v_n^{(l)}(t):=\frac{1}{n}\,\mathbb{E}\bigl[u_n^{(l)}(t)\bigr]. \tag{3.44}
\]

The following proposition presents the functions Yn(l)(x,t) in a form that is amenable to asymptotic analysis.

Proposition 3.2. The equation Yn(l)(x,t)=T1+T2, can be written as

\[
Y_n^{(l)}(x,t)+2\int_{0}^{t}\int_{0}^{t_1}\bar v_n^{(l)}(t_1-t_2)\,Y_n^{(l)}(x,t_2)\,dt_2\,dt_1=xZ_n(x)\bigl[A_n^{(l)}(t)+Q_n^{(l)}(t)\bigr]+r_n^{(l)}(x,t), \tag{3.45}
\]

where

\[
A_n^{(l)}(t):=-2\sum_{r=1}^{d}\alpha_r\int_{0}^{t}\frac{1}{n}\,\mathbb{E}\bigl[\operatorname{Tr}\{P^{(l)}U^{(l)}(t_1)P^{(l,r)}\varphi_r'(M^{(r)})P^{(r)}\}\bigr]\,dt_1, \tag{3.46}
\]
\[
Q_n^{(l)}(t):=-\frac{(\sigma^{2}-2)}{n}\sum_{r=1}^{d}\alpha_r\int_{0}^{t}\sum_{j=1}^{n}\mathbf{1}_{\{j\in B_l\cap B_r\}}\mathbb{E}\bigl[U^{(l)}_{jj}(t_1)\,\varphi_r'(M^{(r)})_{jj}\bigr]\,dt_1, \tag{3.47}
\]

and

\[
\begin{aligned}
r_n^{(l)}(x,t)=\;&-\frac{1}{n}\int_{0}^{t}t_1\,Y_n^{(l)}(x,t_1)\,dt_1 &\text{(3.48)}\\
&-\frac{1}{n}\int_{0}^{t}\int_{0}^{t_1}\mathbb{E}\bigl[u_n^{(l)\circ}(t_1-t_2)\,u_n^{(l)\circ}(t_2)\,e_n^{\circ}(x)\bigr]\,dt_2\,dt_1 &\text{(3.49)}\\
&-\frac{2x}{n}\sum_{r=1}^{d}\alpha_r\int_{0}^{t}\mathbb{E}\bigl[\operatorname{Tr}\{P^{(l)}U^{(l)}(t_1)P^{(l,r)}\varphi_r'(M^{(r)})P^{(r)}\}\,e_n^{\circ}(x)\bigr]\,dt_1 &\text{(3.50)}\\
&-\frac{(\sigma^{2}-2)}{n}\int_{0}^{t}\int_{0}^{t_1}\mathbb{E}\Bigl[\sum_{j=1}^{n}\mathbf{1}_{\{j\in B_l\}}U^{(l)}_{jj}(t_2)\,U^{(l)}_{jj}(t_1-t_2)\,e_n^{\circ}(x)\Bigr]\,dt_2\,dt_1 &\text{(3.51)}\\
&-\frac{x(\sigma^{2}-2)}{n}\sum_{r=1}^{d}\alpha_r\int_{0}^{t}\sum_{j=1}^{n}\mathbf{1}_{\{j\in B_l\cap B_r\}}\mathbb{E}\bigl[U^{(l)}_{jj}(t_1)\,\varphi_r'(M^{(r)})_{jj}\,e_n^{\circ}(x)\bigr]\,dt_1. &\text{(3.52)}
\end{aligned}
\]

Proof: Begin with the term T11, defined in (3.43). Write

T11=-1n0t0t1𝔼[(un(l)°(t1-t2)+nv̄n(t1-t2))                ·(un(l)°(t2)+nv̄n(t2))en°(x)]dt2dt1,    (3.53)

so that

T11=             -1n0t0t1𝔼[un(l)°(t1-t2)un(l)°(t2)en°(x)]dt2dt1             -0t0t1v̄n(t1-t2)𝔼[un(l)°(t2)en°(x)]dt2dt1             -0t0t1v̄n(t2)𝔼[un(l)°(t1-t2)en°(x)]dt2dt1             -n0t0t1v̄n(t1-t2)·v̄n(t2)𝔼[en°(x)]=0dt2dt1.    (3.54)

Noting that

𝔼[un(l)°(t2)en°(x)]=Yn(l)(x,t2), 𝔼[un(l)°(t1-t2)en°(x)]                                      =Yn(l)(x,t1-t2),

and also that

0t0t1v̄n(t2)Yn(l)(x,t1-t2)dt2dt1=0t0t1v̄n(t1-t2)Yn(l)(x,t2)dt2dt1,

it follows that

T11=           -1n0t0t1𝔼[un(l)°(t1-t2)un(l)°(t2)en°(x)]dt2dt1    (3.55)
-20t0t1v̄n(t1-t2)Yn(l)(x,t2)dt2dt1.    (3.56)

The term (3.55) goes into the remainder, which becomes (3.49). Also, (3.56) is added to the left-hand side of (3.45). Now consider the term T12, defined in (3.43). We have that

T12=-1n0tt1Yn(l)(x,t1)dt1,    (3.57)

which becomes (3.48) in the remainder. Consider the term T13, also defined in (3.43). Writing

T13=-2xnr=1dαr0t𝔼[Tr{P(l)U(l)(t1)P(l,r)φr(M(r))P(r)}               ·(en°(x)+Zn(x))]dt1,    (3.58)

it follows, with An(l)(t) given by (3.46), that

T13=             -2xnr=1dαr0t𝔼[Tr{P(l)U(l)(t1)P(l,r)φr(M(r))P(r)}en°(x)]dt1    (3.59)
+ xZn(x)An(l)(t).    (3.60)

Then (3.59) becomes (3.50) in the remainder, while (3.60) remains on the right-hand side of (3.45). Now consider the term T21, defined in (3.42). This term becomes (3.51) in the remainder. Finally, consider the term T22, also defined in (3.42). Write

T22=-(σ2-2)xn0t𝔼[j=1n1{jBl}Ujj(l)(t1)               ·(en°(x)+Zn(x))r=1dαr(P(r)φr(M(r))P(r))jj]dt1,    (3.61)

so that, with Qn(l)(t) given by (3.47),

T22=            -(σ2-2)xn0t𝔼[j=1n1{jBl}Ujj(l)(t1)en°(x)r=1dαr(P(r)φr(M(r))P(r))jj]dt1    (3.62)
+ xZn(x)·Qn(l)(t).    (3.63)

The term (3.62) becomes (3.52) in the remainder. Also, the term (3.63) remains on the right-hand side of (3.45). This completes the argument for proposition 3.2.

We now turn our attention to the remainder term, rn(l)(x,t), of proposition 3.2. The content of the following proposition is that the remainder is negligible in the limit.

Proposition 3.3. Each term of rn(l)(x,t) converges to 0 uniformly on compact subsets of {x ∈ ℝ, t ≥ 0}, for 1 ≤ ld. In other words, we have the uniform limit

limnrn(l)(x,t)=0.    (3.64)

Proof: Begin with the term (3.48). Applying the estimate (3.29), we obtain

|1n0tt1Yn(l)(x,t1)dt1|1nt2|Yn(l)(x,t)|                                                 2(σ2+1)1/2n|t|3                                                  =O(1n).    (3.65)

Now consider the term (3.49). Using the bound |en°(x)|2, the Cauchy-Schwarz inequality, and (3.27) twice, it follows that

|1n0t0t1𝔼[un(l)(t1-t2)un(l)(t2)en(x)]dt2dt1|       2nt2Var1/2{un(l)(t)}Var1/2{un(l)(t)}       8(σ2+1)1/2nt4       =O(1n).    (3.66)

Consider the term (3.50) next. Applying (2.20) of lemma 2.5 to the exponential function and φr, and noting that φrH32+ϵ, it follows that

limn1n𝔼[Tr{P(l)U(l)(t1)P(l,r)φr(M(r))P(r)}]     =14π2γlγr-2γl2γl-2γr2γreit1xφr(y)     [k=0Ukγl(x)Ukγr(y)γlrk+1γlk/2γrk/2]4γl-x24γr-y2dydx.    (3.67)

While the exponential function does not belong to H32+ϵ, we can truncate the exponential function in a smooth fashion outside the support of the semicircle law, so that the truncated exponential function belongs to H32+ϵ. We may replace the exponential function by its truncated version because the eigenvalues of the submatrices concentrate in the support of the semicircle law with overwhelming probability. Then

limn1nTr{P(l)U(l)(t1)P(l,r)φr(M(r))P(r)}      =14π2γlγr-2γl2γl-2γr2γreit1xφr(y)      [k=0Ukγl(x)Ukγr(y)γlrk+1γlk/2γrk/2]4γl-x24γr-y2dydx.    (3.68)

Here it is not so important to know the exact value of the limit, but we will use the fact that we have convergence in the mean and almost surely to the same limit. Note the convergence in (3.67) implies that the sequence of numbers

1n𝔼[Tr{P(l)U(l)(t1)P(l,r)φr(M(r))P(r)}],

is bounded. Also the convergence in (3.68) implies that the random variables

1nTr{P(l)U(l)(t1)P(l,r)φr(M(r))P(r)},

are bounded with probability 1. Using (3.67) and (3.68) with the dominated convergence theorem, it now follows that

limn𝔼|1nTrP(l)U(l)(t1)P(l,r)φr(M(r))P(r)-1n𝔼{TrP(l)U(l)(t1)P(l,r)φr(M(r))P(r)}|=0.    (3.69)

Combining the bound |en(x)| ≤ 1 with (3.69), it follows that

|1n𝔼[Tr{P(l)U(l)(t1)P(l,r)φr(M(r))P(r)}en°(x)]|      =|𝔼[(1nTrP(l)U(l)(t1)P(l,r)φr(M(r))P(r)        1n𝔼{TrP(l)U(l)(t1)P(l,r)φr(M(r))P(r)})en(x)]|        |1nTrP(l)U(l)(t1)P(l,r)φr(M(r))P(r)      1n𝔼{  TrP(l)U(l)(t1)P(l,r)φr(M(r))P(r)}|0.             (3.70)

Then, using (3.70) in the remainder term (3.50), it follows that

|2xnr=1dαr0t𝔼[Tr{P(l)U(l)(t1)P(l,r)φr(M(r))P(r)}en°(x)]dt1|0 as n.    (3.71)

Consider (3.51), which is the next term in the remainder. Observe that, again using the Cauchy-Schwarz inequality and the fact that |en(x)| ≤ 1,

𝔼[1nj=1n1{jBl}Ujj(l)(t2)Ujj(l)(t1t2)en°(x)]  =𝔼[1njBlnUjj(l)(t2)Ujj(l)(t1t2)en°(x)]  𝔼|1njBlUjj(l)(t2)Ujj(l)(t1t2)1n𝔼{jBlUjj(l)(t2)Ujj(l)(t1t2)}|  Var1/2{1njBlUjj(l)(t2)Ujj(l)(t1t2)}.    (3.72)

For fixed j, p, qBl, using (3.19),

DpqUjj(l)(t)=iβpq[Ujp(l)*Ujq(l)(t)+Ujp(l)*Ujq(l)(t)]                  =2iβpq0tUjp(l)(t-h)Ujq(l)(h)dh.    (3.73)

Using (3.73), recalling that βpq=(1+δpq)-11, and the Cauchy-Schwarz inequality, it follows that

|DpqUjj(l)(t)|24|t|0t|Ujp(l)(th)Ujq(l)(h)|2dh.    (3.74)

Using (3.74), the fact that |Ujk(l)(t)|1, and the inequality 2aba2 + b2, it follows that

|Dpq{Ujj(l)(t2)Ujj(l)(t1t2)}|2      2|DpqUjj(l)(t2)|2+2|DpqUjj(l)(t1t2)|2      8|t|(0t2|Ujp(l)(t2h)Ujq(l)(h)|2dh     +0t1t2|Ujp(l)(t1t2h)Ujq(l)(h)|2dh)​​.    (3.75)

Using the Poincaré inequality, (3.75), adding more nonnegative terms, and using the property of the unitary matrices that

k=1n|Ujk(l)(t)|2=1,    (3.76)

it follows that

Var{Ujj(l)(t2)Ujj(l)(t1t2)}            pqp,qBl𝔼[(Mpq(l))2]𝔼[|Dpq{Ujj(l)(t2)Ujj(l)(t1t2)}|2]              8(σ2+1)|t|np=1nq=1n𝔼[0t2|Ujp(l)(t2h)Ujq(l)(h)|2dh          +0t1t2|Ujp(l)(t1t2h)Ujq(l)(h)|2dh]          8(σ2+1)|t|np=1n𝔼[0t2|Ujp(l)(t2h)|2dh          +0t1t2|Ujp(l)(t1t2h)|2dh]           16(σ2+1)|t|nt1          =O(1n).    (3.77)

Now, combining (3.72) with (3.77), we have that

𝔼[1nj=1n1{jBl}Ujj(l)(t2)Ujj(l)(t1-t2)en(x)]=O(1n),    (3.78)

and it follows that

|(σ22)n0t0t1𝔼[j=1n1{jBl}Ujj(l)(t2)Ujj(l)(t1t2)en°(x)]dt2dt1|      =O(1n)​​.    (3.79)

Now consider the final term of the remainder, given by (3.52). We apply the identity

\[
\varphi_r'(M^{(r)})=i\int_{-\infty}^{\infty}h\,\widehat{\varphi}_r(h)\,U^{(r)}(h)\,dh, \tag{3.80}
\]

which is a consequence of the matrix version of the Fourier inversion formula (3.21). Using (3.80), the finiteness of the integral (3.33), the above estimate (3.78), and the dominated convergence theorem, we have that

|x(σ22)nr=1dαr0tj=1n1{jBlBr}𝔼[Ujj(l)(t1)(φr(M(r)))jjen°(x)]dt1|           r=1d|x(σ22)αr||0thφ^r(h)1n         jBlBrn𝔼[Ujj(l)(t1)Ujj(r)(h)jjen°(x)]dhdt1|0.    (3.81)

Combining (3.65), (3.66), (3.71), (3.79), (3.81), and comparing to the remainder term (3.48), the proposition is proved.

The goal now is to pass to the limit in (3.45). In what follows let $\{U_k^{\gamma}(x)\}$ denote the (rescaled) Chebyshev polynomials of the second kind on $[-2\sqrt{\gamma},2\sqrt{\gamma}]$,

\[
U_k^{\gamma}(x)=\sum_{j=0}^{\lfloor k/2\rfloor}(-1)^{j}\binom{k-j}{j}\Bigl(\frac{x}{\sqrt{\gamma}}\Bigr)^{k-2j}. \tag{3.82}
\]
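As a quick sanity check (ours, purely illustrative), the explicit form (3.82) agrees numerically with the trigonometric form (2.22):

```python
import numpy as np
from math import comb

def U_poly(k, gamma, x):
    """Explicit form (3.82) of the rescaled Chebyshev polynomial of the second kind."""
    return sum((-1)**j * comb(k - j, j) * (x / np.sqrt(gamma))**(k - 2 * j)
               for j in range(k // 2 + 1))

def U_trig(k, gamma, x):
    """Trigonometric form (2.22)."""
    theta = np.arccos(x / (2 * np.sqrt(gamma)))
    return np.sin((k + 1) * theta) / np.sin(theta)

gamma = 1.3
x = np.linspace(-2 * np.sqrt(gamma) + 0.01, 2 * np.sqrt(gamma) - 0.01, 7)
for k in range(0, 6):
    print(k, np.max(np.abs(U_poly(k, gamma, x) - U_trig(k, gamma, x))))   # ~1e-12
```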

Proposition 3.4. Let $A_n^{(l)}(t)$ be given by (3.46), $Q_n^{(l)}(t)$ by (3.47), and $\bar v_n^{(l)}(t)$ by (3.44). Then the limits of $A_n^{(l)}(t)$, $Q_n^{(l)}(t)$, and $\bar v_n^{(l)}(t)$ as $n\to\infty$ exist and

\[
A^{(l)}(t):=\lim_{n\to\infty}A_n^{(l)}(t)=-\frac{1}{2\pi^{2}\gamma_l}\sum_{r=1}^{d}\frac{\alpha_r}{\gamma_r}\int_{0}^{t}\int_{-2\sqrt{\gamma_l}}^{2\sqrt{\gamma_l}}\int_{-2\sqrt{\gamma_r}}^{2\sqrt{\gamma_r}}e^{it_1x}\,\varphi_r'(y)\,\sqrt{4\gamma_l-x^{2}}\,\sqrt{4\gamma_r-y^{2}}\,F_{lr}(x,y)\,dy\,dx\,dt_1, \tag{3.83}
\]

where

\[
F_{lr}(x,y)=\sum_{k=0}^{\infty}U_k^{\gamma_l}(x)\,U_k^{\gamma_r}(y)\,\frac{\gamma_{lr}^{k+1}}{\gamma_l^{k/2}\,\gamma_r^{k/2}}, \tag{3.84}
\]

the limit of $Q_n^{(l)}(t)$ is given by

\[
Q^{(l)}(t):=\lim_{n\to\infty}Q_n^{(l)}(t)=-\frac{(\sigma^{2}-2)}{4\pi^{2}\gamma_l}\sum_{r=1}^{d}\gamma_{lr}\,\frac{\alpha_r}{\gamma_r}\int_{0}^{t}\int_{-2\sqrt{\gamma_l}}^{2\sqrt{\gamma_l}}e^{it_1\lambda}\sqrt{4\gamma_l-\lambda^{2}}\,d\lambda\,dt_1\cdot\int_{-2\sqrt{\gamma_r}}^{2\sqrt{\gamma_r}}\varphi_r'(\mu)\,\sqrt{4\gamma_r-\mu^{2}}\,d\mu, \tag{3.85}
\]

and the limit of $\bar v_n^{(l)}(t)$, after rescaling by $\gamma_l$, is given by

\[
v^{(l)}(t):=\frac{1}{\gamma_l}\lim_{n\to\infty}\bar v_n^{(l)}(t)=\frac{1}{2\pi\gamma_l}\int_{-2\sqrt{\gamma_l}}^{2\sqrt{\gamma_l}}e^{itx}\sqrt{4\gamma_l-x^{2}}\,dx. \tag{3.86}
\]

Proof: Recall that An(l)(t)=-2r=1dαr0t1n𝔼[Tr{P(l)U(l)(t1)P(l,r)φr(M(r))P(r)}]dt1. In the full Wigner matrix case one has An(t)=-20t1n𝔼Tr{eitMφ(M)}dt1, and the limiting behavior follows immediately from the Wigner semicircle law. In the case of submatrices with asymptotically regular intersections there are additional technical difficulties due to the fact that for the n × n submatrices M(l) = P(l)MP(l), we have

Tr{P(l,r)U(l)(t)φr(M(r))P(l,r)} =j,kBlBrUjk(l)(t)φr(M(r))jk,    (3.87)

so that the summation is restricted to entries common to both submatrices, i.e., to j, kBlBr. It follows from lemma 2.5 that the limit of An(l)(t) exists and equals

A(l)(t)=-2r=1dαr0teit1x,φr lr dt1,    (3.88)

where

eit1x, φrlr =14π2γlγr-2γl2γl-2γr2γreit1xφr(y)Flr(x,y)    4γl-x24γr-y2dydx.    (3.89)

This establishes (3.83). The proof of lemma 2.5 will be given in section 3.2.

We turn our attention to Qn(l)(t). First it will be argued that the variance of the matrix entries converge to zero. Using the Poincaré inequality, (3.74), (3.76), and proposition 3.1, it follows that

Var{Ujj(l)(t1)}pq,p,qBl𝔼[(Mpq(l))2]𝔼[|DpqUjj(l)(t)|2]4(σ2+1)|t1|np=1nq=1n𝔼0t1|Ujp(l)(t1-t2)Ujq(l)(t2)|2dt24(σ2+1)|t1|np=1n𝔼0t1|Ujp(l)(t1-t2)|2dt24(σ2+1)t12n=O(n-1).    (3.90)

Note that in the course of the calculation (3.90), we showed that

pq𝔼[|DpqUjj(l)(t1)|2]4t12.    (3.91)

The Cauchy-Schwarz inequality implies

(1+t12)|φ^r(t1)|dt1dt1(1+t12)1/2+ϵ    ·(1+t12)5/2+ϵ|φ^r(t1)|2dt1.    (3.92)

Since ||φr||5/2+ϵ < ∞, we have the estimate

-t12|φ^r(t1)|dt1<.    (3.93)

Using the Cauchy-Schwarz inequality and (3.80), it follows that

|Dpqφr(M(r))jj|2=|-t1φ^r(t1)DpqUjj(l)(t1)dt1|2                                     -t12|φ^r(t1)|dt1·-|φ^r(t1)|·|DpqUjj(l)(t1)|2dt1.    (3.94)

Using the Poincaré inequality, (3.91), (3.94), we obtain

Var{φr(M(r))jj}pq𝔼[(Mpq(l))2]𝔼[|Dpqφr(M(r))jj|2](σ2+1)n·-t12|φ^r(t1)|dt1·pq-|φ^r(t1)|𝔼[|DpqUjj(l)(t1)|2]dt14(σ2+1)n·(-t12|φ^r(t1)|dt1)2.    (3.95)

Using (3.93), (3.95), (3.90), and the Cauchy-Schwarz inequality, we obtain

Cov{Ujj(l)(t1),φr(M(r))jj}Var{Ujj(l)(t1)}·Var{φr(M(r))jj}                                                          =O(n-1).    (3.96)

Using (3.96) it is justified to replace the expectation 𝔼[Ujj(l)(t)φr(M(r))jj] by the product 𝔼[Ujj(l)(t)]·𝔼[φr(M(r))jj], when passing to the limit. We use proposition 2.1 of Pizzo et al. [32], which guarantees that for fCc7(),

limn𝔼[f(M)jj]=f(x)dμsc(x).    (3.97)

In order to apply this asymptotic to the exponential function, which is smooth enough, we truncate the function in a smooth fashion outside the support of μsc. We are justified in replacing the exponential function by its truncated version because the eigenvalues of the submatrices concentrate in the support of the semicircle law, with overwhelming probability. It is for this same reason that we may assume φr is compactly supported. This function is not sufficiently smooth, but we can avoid this problem by a density argument using standard convolution, and then apply the bound (3.3) on the variance of linear eigenvalue statistics.

Let ηCc() satisfy η(x)dx=1, and consider the mollifiers ηy(x):=y-1η(xy-1). Then

φr*ηyCc(), and using standard Fourier theory it can be shown that

limy0φr-φr*ηy3/2+ϵ2=0.    (3.98)

It follows from (3.96) and (3.97) that

limn1nj=1n1{jBlBr}𝔼[Ujj(l)(t)φr(M(r))jj]=     γlr(12πγl-2γl2γleitλ4γl-λ2dλ)     ·(12πγr-2γr2γrφr(μ)4γr-μ2dμ).    (3.99)

Using (3.99), we pass to the limit in (3.47), and obtain (3.85). The limit of

v̄n(l)(t)=1n𝔼[un(l)(t)]γl|Bl|𝔼[Tr{P(l)U(l)(t)P(l)}],

is given by (rescaled) Wigner semicircle law, as a consequence of the zero eigenvalues. Alternatively, it can be computed using the bilinear form in lemma 2.5, with f(x) = eitx and g(x) = 1. To facilitate solving the integral equation (3.101), below, it will be useful to rescale by γl. We obtain

v(l)(t)=1γleitx,1ll            =12πγl-2γl2γleitx4γl-x2dx,    (3.100)

which establishes (3.86). The proposition is proved.

Now using propositions 3.2, 3.3, 3.4, we pass to the limit nm → ∞ in (3.45), and determine that the limit Y(l) of every uniformly converging subsequence {Ynm(l)} satisfies the equation

\[
Y^{(l)}(x,t)+2\gamma_l\int_{0}^{t}\int_{0}^{t_1}v^{(l)}(t_1-t_2)\,Y^{(l)}(x,t_2)\,dt_2\,dt_1=xZ(x)\bigl[A^{(l)}(t)+Q^{(l)}(t)\bigr], \tag{3.101}
\]

where $A^{(l)}(t)$ is given by (3.83), $Q^{(l)}(t)$ is given by (3.85), and $v^{(l)}(t)$ is given by (3.86).

Now the argument will proceed by solving the integral equation (3.101). We use a version of the technique used by Lytova and Pastur [21] to solve this equation. Define

\[
f(z):=\bigl(\sqrt{z^{2}-4\gamma_l}-z\bigr)/(2\gamma_l), \tag{3.102}
\]

which is the Stieltjes transform of the rescaled semicircle law, where $\sqrt{z^{2}-4\gamma_l}=z+O(1/z)$ as $z\to\infty$. A direct calculation shows that $\tilde v^{(l)}=f$, where $\tilde v^{(l)}$ denotes the generalized Fourier transform of $v^{(l)}$. We obtain

\[
\tilde v^{(l)}(z):=\frac{1}{2\pi i\gamma_l}\int_{0}^{\infty}\int_{-2\sqrt{\gamma_l}}^{2\sqrt{\gamma_l}}e^{it(x-z)}\sqrt{4\gamma_l-x^{2}}\,dx\,dt
=\frac{1}{2\pi\gamma_l}\int_{-2\sqrt{\gamma_l}}^{2\sqrt{\gamma_l}}\frac{\sqrt{4\gamma_l-x^{2}}}{x-z}\,dx=f(z). \tag{3.103}
\]

We check that

\[
z+2\gamma_l f(z)=\sqrt{z^{2}-4\gamma_l}\ne 0,\qquad \Im z\ne 0. \tag{3.104}
\]

Set

\[
T(t):=\frac{i}{2\pi}\int_{L}\frac{e^{izt}\,dz}{z+2\gamma_l f(z)}=-\frac{1}{\pi}\int_{-2\sqrt{\gamma_l}}^{2\sqrt{\gamma_l}}\frac{e^{i\lambda t}\,d\lambda}{\sqrt{4\gamma_l-\lambda^{2}}}, \tag{3.105}
\]

after replacing the integral over $L$ by the integral over $[-2\sqrt{\gamma_l},2\sqrt{\gamma_l}]$, and taking into account that $\sqrt{z^{2}-4\gamma_l}$ equals $\pm i\sqrt{4\gamma_l-\lambda^{2}}$ on the upper and lower edges of the cut. Then the solution of (3.101) is

\[
Y^{(l)}(x,t)=-xZ(x)\int_{0}^{t}T(t-t_1)\,\frac{d}{dt_1}\bigl[A^{(l)}(t_1)+Q^{(l)}(t_1)\bigr]\,dt_1. \tag{3.106}
\]

Then, with Flr given by (3.84),

0tT(t-t1)ddt1A(l)(t1)dt1    =12π3γlr=1dαrγr0t-2γl2γl-2γl2γl-2γr2γrei(t-t1)λeit1xφr(y)    4γl-x24γr-y24γl-λ2×Flr(x,y)dydxdλdt1    =12iπ3γlr=1dαrγr-2γl2γl-2γl2γl-2γr2γr[eitx-eitλ]φr(y)(x-λ)    4γl-x24γr-y24γl-λ2Flr(x,y)dydxdλ,    (3.107)

and

0tT(t-t1)ddt1Q(l)(t1)dt1    =-γlr(σ2-2)4π3γlr=1dαrγr0t-2γl2γl-2γl2γlei(t-t1)λ4γl-λ2eit1η    4γl-η2dηdλ×-2γr2γrφr(μ)4γr-μ2dμdt1    =-γlr(σ2-2)4π3γlir=1dαrγr-2γl2γl-2γl2γl[eitη-eitλη-λ]    4γl-η24γl-λ2dηdλ·-2γr2γrφr(μ)4γr-μ2dμ.    (3.108)

Using the regularity condition ||φl||5/2+ϵ < ∞ for 1 ≤ ld, (3.107), (3.108), and the dominated convergence theorem to pass to limit in (3.24) yields

Z(x)   =i l=1dαl-φ^l(t)Y(l)(x,t)dt   =-xZ(x)2π3 l=1dr=1dαlαrγlγr--2γl2γl-2γl2γl-2γr2γrφ^l(t)   [eitx-eitλ]φr(y)(x-λ)   ×4γl-x24γr-y24γl-λ2Flr(x,y)dydxdλdt   -γlr(σ2-2)xZ(x)4π3l=1dr=1dαlαrγlγr--2γl2γl-2γl2γl   [φl^(t)eitη-φl^(t)eitλη-λ]4γl-η24γl-λ2dηdλ   ×-2γr2γrφr(μ)4γr-μ2dμdt.    (3.109)

Applying the Fourier inversion formula (3.21), it follows that

Z(x)=   -xZ(x)2π3 l=1dr=1dαlαrγlγr-2γl2γl-2γl2γl-2γr2γr   [φl(x)-φl(λ)]φr(y)(x-λ)4γl-x24γr-y24γl-λ2   ×Flr(x,y)dydxdλ   -xZ(x)γlr(σ2-2)4π3l=1dr=1dαlαrγlγr-2γl2γl-2γl2γl   [φl(η)-φl(λ)η-λ]4γl-η24γl-λ2dηdλ   ×-2γr2γrφr(μ)4γr-μ2dμ.    (3.110)

We will use the fact that

\[
\int_{-2\sqrt{\gamma}}^{2\sqrt{\gamma}}\frac{T_k^{\gamma}(x)-T_k^{\gamma}(\lambda)}{x-\lambda}\,\frac{d\lambda}{\sqrt{4\gamma-\lambda^{2}}}=\frac{\pi}{2\sqrt{\gamma}}\,U_{k-1}^{\gamma}(x),\qquad k\ge 1. \tag{3.111}
\]

Expand the test function $\varphi_l$ in the Chebyshev basis to obtain

\[
\varphi_l(x)=\sum_{k=0}^{\infty}(\varphi_l)_k\,T_k^{\gamma_l}(x),\qquad
(\varphi_l)_k=\frac{2}{\pi}\int_{-2\sqrt{\gamma_l}}^{2\sqrt{\gamma_l}}\varphi_l(t)\,T_k^{\gamma_l}(t)\,\frac{dt}{\sqrt{4\gamma_l-t^{2}}}. \tag{3.112}
\]

Returning to the computation of Z′(x), using (3.110), (3.111), and (3.112), it follows that

Z(x)=-xZ(x)4π2 l=1dr=1dk=1αlαrγl3/2γr(φl)k-2γl2γl-2γr2γr          Uk-1γl(x)φr(y)4γl-x24γr-y2×Flr(x,y)dydx          -xZ(x)γlr(σ2-2)8π2l=1dr=1dαlαrγl3/2γrk=1(φl)k-2γl2γl          Uk-1γl(η)4γl-η2dη×-2γr2γrφr(μ)4γr-μ2dμ.    (3.113)

Using the orthogonality of the Chebyshev polynomials (2.21),

k=1(φl)k-2γl2γlUk-1γl(η)4γl-η2dη     =2γl-2γl2γlλφl(λ)4γl-λ2dλ.    (3.114)

Integrating by parts yields

-2γr2γrφr(μ)4γr-μ2dμ=-2γr2γrμφr(μ)4γr-μ2dμ,    (3.115)

so that

γlr(σ2-2)4π2γlγr-2γl2γlλφl(λ)4γl-λ2dλ·-2γr2γrμφr(μ)4γr-μ2dμ=(σ2-2)4γlrγlγr(φl)1(φr)1.    (3.116)

Since

\[
\frac{d}{dy}T_k^{\gamma}(y)=\frac{k}{2\sqrt{\gamma}}\,U_{k-1}^{\gamma}(y), \tag{3.117}
\]

we expand $\varphi_r'(y)$ in the Chebyshev basis to obtain

\[
\varphi_r'(y)=\frac{1}{2\sqrt{\gamma_r}}\sum_{m=1}^{\infty}m\,(\varphi_r)_m\,U_{m-1}^{\gamma_r}(y). \tag{3.118}
\]

Recalling that Flr is given by (3.84), it follows that

k=1m=1m(φl)k(φr)m[2γl2γl2γr2γrUk1γl(x)     Um1γr(y)4γlx24γry2Flr(x,y)dydx]=k=1m=1j=0m(φl)k(φr)m2γl2γl2γr2γrUk1γl(x)Um1γr(y)     Ujγl(x)Ujγr(y)×4γlx24γry2dydxγlrj+1γlj/2γrj/2.=4π2γlγrk=1k(φl)k(φr)k(γlrkγl(k1)/2γr(k1)/2)).    (3.119)

Using (3.119), (3.114), (3.115), and (3.116), in (3.113), it follows that

Z(x)=xZ(x)2l=1dr=1dαlαr[(σ22)2γlrγlγr(φl)1(φr)1           +k=1k(φl)k(φr)k(γlrγlγr)k]        =xZ(x)l=1dαl2[σ24(φl)12+12k=2k(φl)k2]           xZ(x)1l<rd2αlαr[σ24(φl)1(φr)1(γlrγlγr)           +12k=2k(φl)k(φr)k(γlrγlγr)k] .    (3.120)

We have obtained the expression for the asymptotic covariance (2.14) in terms of Chebyshev polynomials. Now we write this expression as a contour integral. Let

β:=γlrγlγr,

make the change of coordinates x=2γlcos(θ), y=2γrcos(ω), and use (2.14) to obtain that

12k=1kβk(φl)k(φr)k=2π2k=1kβk-2γl2γl-2γr2γrφl(x)φr(y)Tk(x2γl)Tk(y2γr)dxdy4γl-x24γr-y2=2π20π0πk=1kβkcos(kθ)cos(kω)φl(2γlcosθ)φr(2γrcosω)dθdω.    (3.121)

Integrating by parts in θ, ω it follows that

12k=1kβk(φl)k(φr)k     =2π20π0πφl(2γlcosθ)φr(2γrcosω)     [k=1βkksin(kθ)sin(kω)]×(2γlsinθ)(2γrsinω)dθdω.    (3.122)

To evaluate the infinite sum above, recall that for $z\in\mathbb{C}$ with $|z|<1$, we have

\[
\ln(1-z)=-\sum_{k=1}^{\infty}\frac{z^{k}}{k}. \tag{3.123}
\]

Noting that β < 1, using (3.123), it follows that

k=1βkksin(kθ)sin(kω)=14k=1βkk[eikθeikθ][eikωeikω]=14[ln(1βei(θ+ω))+ln(1βei(θω))      ln(1βei(θ+ω))+ln(1βei(θω))]=14[ln[(1βei(θω))(1βei(θω))¯]      ln[(1βei(θ+ω))(1βei(θ+ω))¯]].    (3.124)

Making the change of coordinates z=γleiθ, w=γreiω, and recalling that β=γlrγlγr, this can be written as

k=1βkksin(kθ)sin(kω)=-14ln[(1-βei(θ-ω))(1-βei(θ-ω))¯(1-βei(θ+ω))(1-βei(θ+ω))¯]                                                       =-14ln[(1-γlrγlγrzw¯)(1-γlrγlγrzw¯)¯(1-γlrγlγrzw)(1-γlrγlγrzw)¯]                                                       =-14ln[|γlr-zw¯|2|γlr-zw|2]                                                       =12ln|γlr-zwγlr-zw¯|.    (3.125)

Combining (3.122), (3.125), and noting that

(2γlsinθ)(2γrsinω)dθdω=(1-γlz2)(1-γrw2)dzdw,

it follows that

\[
\frac{1}{2}\sum_{k=1}^{\infty}k\,\beta^{k}(\varphi_l)_k(\varphi_r)_k
=\frac{2}{\pi}\oint_{\substack{|z|^{2}=\gamma_l\\ \Im z>0}}\;\oint_{\substack{|w|^{2}=\gamma_r\\ \Im w>0}}
\varphi_l'\Bigl(z+\frac{\gamma_l}{z}\Bigr)\,\varphi_r'\Bigl(w+\frac{\gamma_r}{w}\Bigr)\,
\frac{1}{2\pi}\ln\Bigl|\frac{\gamma_{lr}-zw}{\gamma_{lr}-z\bar w}\Bigr|\,
\Bigl(1-\frac{\gamma_l}{z^{2}}\Bigr)\Bigl(1-\frac{\gamma_r}{w^{2}}\Bigr)\,dz\,dw. \tag{3.126}
\]

Compare (3.120) to (3.8). Using (3.126), (3.13), (3.14), and (3.9), it follows that the covariance can be written as

\[
\begin{aligned}
\lim_{n\to\infty}\operatorname{Cov}\{N^{(l)}[\varphi_l],\,N^{(r)}[\varphi_r]\}
&=\frac{\sigma^{2}}{4}(\varphi_l)_1(\varphi_r)_1\,\frac{\gamma_{lr}}{\sqrt{\gamma_l\gamma_r}}
+\frac{1}{2}\sum_{k=2}^{\infty}k\,(\varphi_l)_k(\varphi_r)_k\Bigl(\frac{\gamma_{lr}}{\sqrt{\gamma_l\gamma_r}}\Bigr)^{k}\\
&=\frac{2}{\pi}\oint_{\substack{|z|^{2}=\gamma_l\\ \Im z>0}}\;\oint_{\substack{|w|^{2}=\gamma_r\\ \Im w>0}}
\varphi_l'\Bigl(z+\frac{\gamma_l}{z}\Bigr)\,\varphi_r'\Bigl(w+\frac{\gamma_r}{w}\Bigr)\,
\frac{1}{2\pi}\ln\Bigl|\frac{\gamma_{lr}-zw}{\gamma_{lr}-z\bar w}\Bigr|\,
\Bigl(1-\frac{\gamma_l}{z^{2}}\Bigr)\Bigl(1-\frac{\gamma_r}{w^{2}}\Bigr)\,dw\,dz\\
&\quad+\frac{\gamma_{lr}(\sigma^{2}-2)}{4\pi^{2}\gamma_l\gamma_r}
\int_{-2\sqrt{\gamma_l}}^{2\sqrt{\gamma_l}}\frac{\lambda\,\varphi_l(\lambda)}{\sqrt{4\gamma_l-\lambda^{2}}}\,d\lambda
\int_{-2\sqrt{\gamma_r}}^{2\sqrt{\gamma_r}}\frac{\mu\,\varphi_r(\mu)}{\sqrt{4\gamma_r-\mu^{2}}}\,d\mu.
\end{aligned}
\]

3.2. The Bilinear Form

The main goal of this section is to prove Lemma 2.5, to which we now turn our attention. Begin with the following definition.

Definition 3.5. Let $M$ be a Wigner matrix satisfying (1.1), and let $P^{(l)}$, $P^{(l,r)}$ be the projection matrices defined in (2.6) and (2.10). For polynomial functions $f,g:\mathbb{R}\to\mathbb{R}$, define

\[
\langle f,g\rangle_{lr,n}:=\frac{1}{n}\sum_{j,k\in B_l\cap B_r}\mathbb{E}\bigl[f(M^{(l)})_{jk}\cdot g(M^{(r)})_{kj}\bigr]
=\frac{1}{n}\,\mathbb{E}\bigl[\operatorname{Tr}\{P^{(l)}f(M^{(l)})\cdot P^{(l,r)}\cdot g(M^{(r)})P^{(r)}\}\bigr]. \tag{3.127}
\]

The large $n$ limit of $\langle f,g\rangle_{lr,n}$ exists for polynomial functions because all moments of the matrix entries of $M$ are finite. Then $\lim_{n\to\infty}\langle f,g\rangle_{lr,n}=\langle f,g\rangle_{lr}$, where $\langle\cdot,\cdot\rangle_{lr}$ is the bilinear form defined in Definition 2.3.

We will compute the bilinear form 〈f, glr for monomial functions f(x) = xk, g(x) = xq. We will also consider the random variables n−1Tr{P(l)f(M(l))P(l, r)g(M(r))P(r)}, and prove their convergence almost surely to the non-random limit described in lemma 2.5. To this end, we will use some results and techniques from Free Probability. We refer the reader to Anderson et al. [16] for the relevant background concerning noncommutative probability spaces, asymptotic freeness of Wigner matrices, as well as the definition and the properties of the multilinear free cumulant functionals κp, for p ≥ 1.

Consider the matrices M, P(l), P(r) as noncommutative random variables in the noncommutative probability spaces (Matn(),E[1nTr]) and also (Matn(),1nTr{·}). Since M is a Wigner random matrix and {P(l), P(r)} are deterministic Hermitian matrices, it follows from part (i) of Theorem 5.4.5 in Anderson et al. [16] that M is asymptotically free from {P(l), P(r)} with respect to the functional n−1𝔼Tr(·). In addition, it follows from part (ii) of Theorem 5.4.5 in Anderson et al. [16] that M is almost surely asymptotically free from {P(l), P(r)} with respect to the functional n−1Tr(·). The collection of all non-crossing partitions over a set with p letters is denoted below by NC(p). An important consequence of the asymptotic freeness of these matrices is that mixed free cumulants of M and {P(l), P(r)} vanish in the limit, with respect to both functionals, see Theorem 5.3.15 of Anderson et al. [16]. Therefore, letting κπ denote a product of free cumulant functionals corresponding to the block structure of the partition π, it follows that

xk, xqlr=limn1n𝔼[Tr{P(l)(P(l)MP(l))k(P(r)MP(r))qP(r)}]                          =limn1n𝔼[Tr{P(l)MP(l)P(l)MP(l)P(r)MP(r)P(r)MP(r)]                          =πNC(2(k+q)+1)κπ(P(l),M,P(l),,M,P(l),P(r),M,,M,P(r))                          =π1NC(odd),π2NC(even)π1π2NC(2(k+q)+1)κπ2(M)κπ1(P(l),,P(l,r),,P(r)),    (3.128)

and also that almost surely

limn1nTr{P(l)(P(l)MP(l))k(P(r)MP(r))qP(r)}=πNC(2(k+q)+1)κπ(P(l),M,P(l),,M,P(l),P(r),M,,M,P(r))=π1NC(odd),π2NC(even)π1π2NC(2(k+q)+1)κπ2(M) κπ1(P(l),,P(l,r),,P(r)).    (3.129)

Above, $NC(\mathrm{odd})$, for example, denotes the set of non-crossing partitions on the odd integers in the indicated set. Since the calculation of the joint moments in each non-commutative probability space $(\mathrm{Mat}_n,\,n^{-1}\mathbb{E}\operatorname{Tr})$ and $(\mathrm{Mat}_n,\,n^{-1}\operatorname{Tr})$ is identical, we make no distinction between their free cumulants. Let us denote by $NCP(p)$ the set of all non-crossing partitions over $p$ letters which are also pair partitions. Recall that $NC(p)$ is a poset: the notion of partition refinement induces a partial order on $NC(p)$, which will be denoted by $\pi\le\sigma$ if, with $\pi,\sigma\in NC(p)$, each block of $\pi$ is contained within a block of $\sigma$. Now a notion of the complement of a partition will be developed.

Definition 3.6. With $\pi\in NC(p_1)$, define the non-crossing complement $\pi^{c}\in NC(p_2)$ to be the unique non-crossing partition on $p_2$ letters so that $\pi\cup\pi^{c}\in NC(p_1+p_2)$, and $\sigma\le\pi^{c}$ for all other $\sigma\in NC(p_2)$ satisfying $\pi\cup\sigma\in NC(p_1+p_2)$.

Since the limiting spectral distribution of $M$ is the Wigner semicircle law with respect to the functional $n^{-1}\mathbb{E}\operatorname{Tr}$, and almost surely the Wigner semicircle law with respect to the functional $n^{-1}\operatorname{Tr}$, we have that $\kappa_2(M)=1$ and $\kappa_p(M)=0$ for $p\ne 2$. It follows now that

\[
\langle x^{k},x^{q}\rangle_{lr}=0,\quad\text{if } k+q \text{ is odd}, \tag{3.130}
\]

and also that almost surely

\[
\lim_{n\to\infty}\frac{1}{n}\operatorname{Tr}\bigl\{(P^{(l)}MP^{(l)})^{k}\,(P^{(r)}MP^{(r)})^{q}\bigr\}=0,\quad\text{if } k+q \text{ is odd}. \tag{3.131}
\]

Supposing then that k + q is even, and continuing the calculation,

xk, xqlr=π2NCP(even) π1NC(odd)π1π2NC(2(k+q)+1)κπ1(P(l),,P(r))                          =π2NCP(k+q) π1NC(k+q+1)π1π1cκπ1(P(l),,P(r))                          =π2NCP(k+q)i=1|π1c|limn1n𝔼Tr{P(j)SiP(j)},    (3.132)

where  π1c={S1,,S|π1c|} are the blocks of the non-crossing complement of a given partition. We have used the complement partitions to write the sum of the free cumulants over the partitions of the projection matrices into a product of joint moments of the projection matrices.

Similarly, with respect to the functional n−1Tr, we have that almost surely

limn1nTr{(P(l)MP(l))k(P(r)MP(r))q}=π2NCP(even) π1NC(odd)π1π2NC(2(k+q)+1)κπ1(P(l),,P(r))=π2NCP(k+q) π1NC(k+q+1)π1π1cκπ1(P(l),,P(r))=π2NCP(k+q)i=1|π1c|limn1nTr{P(j)SiP(j)}.    (3.133)

Recall that the non-crossing pair partitions are in bijection with Dyck paths, $NCP(k+q)\to D(k+q)$. Thus the computation for each functional reduces to counting Dyck paths. The number of Dyck paths $(h(0),\dots,h(k+q))$ with $h(k)=j$ is

\[
\Bigl[\binom{k}{\frac{k+j}{2}}-\binom{k}{\frac{k+j+2}{2}}\Bigr]\Bigl[\binom{q}{\frac{q+j}{2}}-\binom{q}{\frac{q+j+2}{2}}\Bigr]
=\frac{(j+1)^{2}}{(k+1)(q+1)}\binom{k+1}{\frac{k+j+2}{2}}\binom{q+1}{\frac{q+j+2}{2}}.
\]
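The bracketed product above is the classical ballot-problem count; here is a brute-force check (ours, purely illustrative) of that count against direct enumeration of Dyck paths.

```python
from math import comb
from itertools import product

def dyck_paths(m):
    """All Dyck paths of length m: nonnegative +-1 walks from 0 back to 0."""
    paths = []
    for steps in product((1, -1), repeat=m):
        h, heights, ok = 0, [0], True
        for s in steps:
            h += s
            if h < 0:
                ok = False
                break
            heights.append(h)
        if ok and h == 0:
            paths.append(heights)
    return paths

def count_by_height(k, q, j):
    """Number of Dyck paths (h(0),...,h(k+q)) with h(k) = j, by enumeration."""
    return sum(1 for h in dyck_paths(k + q) if h[k] == j)

def closed_form(k, q, j):
    """The bracketed product stated above (k + j and q + j assumed even)."""
    return (comb(k, (k + j) // 2) - comb(k, (k + j + 2) // 2)) * \
           (comb(q, (q + j) // 2) - comb(q, (q + j + 2) // 2))

k, q = 4, 6
for j in range(0, min(k, q) + 1, 2):
    print(j, count_by_height(k, q, j), closed_form(k, q, j))   # the two counts agree
```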

Note that $\lim_{n\to\infty}n^{-1}\operatorname{Tr}\bigl((P^{(l)})^{a}(P^{(r)})^{b}\bigr)=\gamma_{lr}$, for any $a,b\ge 1$. Also note that below the partition $\pi_1^{c}$ depends on the Dyck path $d\in D(k+q)$ (which corresponds to some non-crossing pair partition). Also note that by $|\pi_1^{c}|$ we denote the number of blocks of $\pi_1^{c}$. Suppose for now that both $k,q$ are even integers.

The height of the path at $h(k)$ must be even, say $h(k)=2j$. Those blocks which consist only of the matrices $P^{(l)}$ will contribute a factor of $\gamma_l$ to the product of joint moments. The number of blocks which contain only the matrices $P^{(l)}$ corresponds to the number of down edges of the path in the first $k$ steps. Denote by $u$ the number of up edges and $d$ the number of down edges of the path up to step $k$. Then $u+d=k$ and $u-d=2j$, which implies that $d=k/2-j$. The number of blocks which contain only the matrices $P^{(r)}$ is equal to the number of up edges of the path in the final $q$ steps. This number corresponds to the exponent on the factor $\gamma_r$ in the product of joint moments. Denote now by $u$ the number of up edges and $d$ the number of down edges of the path in the final $q$ steps. Then $u+d=q$ and $d-u=2j$, which implies that $u=q/2-j$. The remaining blocks of the partition contain projection matrices of mixed type and will contribute a factor $\gamma_{lr}$ to the product of joint moments. Since the total number of blocks in the partition is $\frac{k+q}{2}+1$, the number of factors of $\gamma_{lr}$ in the product of joint moments is $2j+1$. Partitioning the Dyck paths into equivalence classes based on the height $h(k)$, we get that

xk,xqlr=dD(k+q)i=1|π1c|limn𝔼1nTr{P(j)SiP(j)}                        =j=0k2 dD(k+q)h(k)=2jγlk2jγrq2jγlr2j+1                        =j=0k2 (2j+1)2(k+1)(q+1)(k+1k+2j+22)(q+1q+2j+22)γlk2jγrq2jγlr2j+1,

and also, almost surely,

limn1nTr{(P(l)MP(l))k(P(r)MP(r))q}=dD(k+q)i=1|π1c|limn1nTr{P(j)SiP(j)}=j=0k2 dD(k+q)h(k)=2jγlk2jγrq2jγlr2j+1=j=0k2 (2j+1)2(k+1)(q+1)(k+1k+2j+22)(q+1q+2j+22)γlk2jγrq2jγlr2j+1.

Now suppose that both k, q are odd. The height of the path at h(k) must be odd, say h(k) = 2j + 1. Similar to the even case, the number of blocks which consist only of the matrices P(l) equals the exponent of γl in the product of joint moments. The number of blocks which contain only the matrices P(l) corresponds to the number of down edges of the path in the first k steps. Denote by u the number of up edges and d the number of down edges of the path up to step k. Then u + d = k and ud = 2j + 1, which implies that d = (k − 1)/2 − j. The number of blocks which contain only the matrices P(r) is equal to the number of up edges of the path in the final q steps. This number corresponds to the exponent on the factor γr in the product of joint moments. Denote now by u the number of up edges and d the number of down edges of the path in the final q steps. The u + d = q and du = 2j + 1, which implies that u = (q − 1)/2 − j. The remaining blocks of the partition contain projection matrices of mixed type and will contribute a factor of γlr to the product of joint moments. Since the total number of blocks in the partition is k+q2+1, the number of factors of γlr in the product of joint moments is 2j + 2. Partitioning the Dyck paths into equivalence classes based on the height h(k), we get that

xk, xqlr=dD(k+q)i=1|π1c|limn𝔼1nTr{P(j)SiP(j)}                         =j=0k12 dD(k+q)h(k)=2j+1γlk12jγrq12jγlr2j+2                         =j=0k12 (2j+2)2(k+1)(q+1)(k+1k+2j+32)(q+1q+2j+32)                                   γlk12jγrq12jγlr2j+2,

and also, almost surely,

limn1nTr{(P(l)MP(l))k(P(r)MP(r))q}=dD(k+q)i=1|π1c|limn1nTr{P(j)SiP(j)}=j=0k12 dD(k+q)h(k)=2j+1γlk12jγrq12jγlr2j+2=j=0k12 (2j+2)2(k+1)(q+1)(k+1k+2j+32)(q+1q+2j+32)γlk12jγrq12jγlr2j+2.

Now for polynomials $f(x)=\sum_{i=0}^{p}a_i x^{i}$ and $g(x)=\sum_{j=0}^{m}b_j x^{j}$, we have by linearity that

\[
\langle f,g\rangle_{lr}=\sum_{i=0}^{p}\sum_{j=0}^{m}a_i\,b_j\,\langle x^{i},x^{j}\rangle_{lr}. \tag{3.134}
\]

The intersection of countably many events, each with probability 1, occurs with probability 1. There are only countably many polynomials with rational coefficients, so we have proved that the random variables

1nTr{P(l)f(M(l))P(l,r)g(M(r))P(r)},

converge almost surely to the same, non-random limit given by the right hand side of (3.134), whenever f, g are polynomials with rational coefficients.

The bilinear form 〈f, glr is diagonalized in the next proposition.

Proposition 3.7. The two families $\{U_k^{\gamma_l}\}_{k=0}^{\infty}$ and $\{U_q^{\gamma_r}\}_{q=0}^{\infty}$ of rescaled Chebyshev polynomials of the second kind are biorthogonal with respect to the bilinear form (3.128). More precisely,

\[
\frac{1}{\sqrt{\gamma_l\gamma_r}}\,\bigl\langle U_k^{\gamma_l},U_q^{\gamma_r}\bigr\rangle_{lr}=\delta_{kq}\Bigl(\frac{\gamma_{lr}}{\sqrt{\gamma_l\gamma_r}}\Bigr)^{k+1}. \tag{3.135}
\]

Proposition 3.7 is proven in Appendix 2.

Remark 3.8. Previously we have shown that whenever $f,g$ are polynomials with rational coefficients, almost surely (a.s.)

\[
\lim_{n\to\infty}\frac{1}{n}\operatorname{Tr}\{P^{(l)}f(M^{(l)})\cdot P^{(l,r)}\cdot g(M^{(r)})P^{(r)}\}=\langle f,g\rangle_{lr}.
\]

The Chebyshev polynomials have rational coefficients, so it follows from the above argument that a.s.

\[
\frac{1}{\sqrt{\gamma_l\gamma_r}}\lim_{n\to\infty}\frac{1}{n}\operatorname{Tr}\{P^{(l)}U_k^{\gamma_l}(M^{(l)})\,P^{(l,r)}\,U_q^{\gamma_r}(M^{(r)})\,P^{(r)}\}=\delta_{kq}\Bigl(\frac{\gamma_{lr}}{\sqrt{\gamma_l\gamma_r}}\Bigr)^{k+1}. \tag{3.136}
\]
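The almost sure convergence (3.136) is easy to probe numerically. The sketch below (ours; the index sets, sizes, and all names are arbitrary illustrative choices) evaluates the normalized trace for one sample of a GOE-type matrix and compares it with the limiting value.

```python
import numpy as np
from math import comb

def U_mat(k, gamma, A):
    """U_k^gamma(A) for a symmetric matrix A, using the explicit form (3.82)."""
    X = A / np.sqrt(gamma)
    out = np.zeros_like(A)
    for j in range(k // 2 + 1):
        out += (-1)**j * comb(k - j, j) * np.linalg.matrix_power(X, k - 2 * j)
    return out

n = 1500
rng = np.random.default_rng(2)
W = rng.standard_normal((n, n)); W = (W + W.T) / np.sqrt(2)
M = W / np.sqrt(n)

# Two index sets with densities gamma_l, gamma_r and overlap density gamma_lr
Bl = np.arange(n) < int(0.6 * n)
Br = np.arange(n) >= int(0.2 * n)
gl, gr, glr = Bl.mean(), Br.mean(), (Bl & Br).mean()

Pl, Pr = np.diag(Bl.astype(float)), np.diag(Br.astype(float))
Ml, Mr = Pl @ M @ Pl, Pr @ M @ Pr

k = 2
lhs = np.trace(Pl @ U_mat(k, gl, Ml) @ (Pl @ Pr) @ U_mat(k, gr, Mr) @ Pr) / n
lhs /= np.sqrt(gl * gr)
rhs = (glr / np.sqrt(gl * gr)) ** (k + 1)
print(lhs, rhs)   # close for large n, cf. the a.s. limit (3.136)
```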

Now the bilinear form $\langle\cdot,\cdot\rangle_{lr}$ will be extended to functions other than polynomials. For this part of the argument, the bound (3.3) on the variance of linear eigenvalue statistics is essential.

Proposition 3.9. Let $f,g\in H_s$ for some $s>\frac{3}{2}$, i.e., for some $\epsilon>0$,

\[
\int|\widehat f(t)|^{2}(1+|t|)^{3+\epsilon}\,dt<\infty,\qquad \int|\widehat g(t)|^{2}(1+|t|)^{3+\epsilon}\,dt<\infty. \tag{3.137}
\]

Then the limit of $\langle f,g\rangle_{lr,n}$ (see Definition 3.5) as $n\to\infty$ exists and

\[
\langle f,g\rangle_{lr}=\frac{1}{4\pi^{2}\gamma_l\gamma_r}\int_{-2\sqrt{\gamma_l}}^{2\sqrt{\gamma_l}}\int_{-2\sqrt{\gamma_r}}^{2\sqrt{\gamma_r}}f(x)\,g(y)\,F_{lr}(x,y)\,\sqrt{4\gamma_l-x^{2}}\,\sqrt{4\gamma_r-y^{2}}\;dy\,dx, \tag{3.138}
\]

and also, almost surely,

\[
\lim_{n\to\infty}\frac{1}{n}\operatorname{Tr}\{P^{(l)}f(M^{(l)})\cdot P^{(l,r)}\cdot g(M^{(r)})P^{(r)}\}
=\frac{1}{4\pi^{2}\gamma_l\gamma_r}\int_{-2\sqrt{\gamma_l}}^{2\sqrt{\gamma_l}}\int_{-2\sqrt{\gamma_r}}^{2\sqrt{\gamma_r}}f(x)\,g(y)\,F_{lr}(x,y)\,\sqrt{4\gamma_l-x^{2}}\,\sqrt{4\gamma_r-y^{2}}\;dy\,dx, \tag{3.139}
\]

where the kernel Flr(x, y) is given by (3.84).

Proposition 3.9 is proven in Appendix 3. Lemma 2.5 now follows from Propositions 3.7 and 3.9. This also completes the proof of Theorem 2.1.

4. Proof of Theorem 2.2

It is enough to prove the case $d=2$, i.e., the limiting covariance of $N_n^{(1)\circ}[\varphi_1]$ and $N_n^{(2)\circ}[\varphi_2]$. Let $U(t),\tilde U(t),u_n(t),\tilde u_n(t)$ be $U^{(1)}(t),U^{(2)}(t),u_n^{(1)}(t),u_n^{(2)}(t)$ defined in (3.16)–(3.17), respectively. $U(t)$ and $\tilde U(t)$ are unitary matrices and

\[
U(t)U^{*}(t)=\tilde U(t)\tilde U^{*}(t)=I,\qquad |U_{jk}|\le 1,\qquad \sum_{k=1}^{n}|U_{jk}|^{2}=1.
\]

By Remark 3.3 in Lytova and Pastur [21], we have the following bounds

\[
\operatorname{Var}\{u_n(t)\}\le C(\sigma_6)\,(1+|t|^{3})^{2}, \tag{4.1}
\]
\[
\operatorname{Var}\{\tilde u_n(t)\}\le C(\sigma_6)\,(1+|t|^{3})^{2}, \tag{4.2}
\]
\[
\operatorname{Var}\{N_n^{(1)}[\varphi_1]\}\le C(\sigma_6)\Bigl(\int(1+|t|^{3})\,|\widehat{\varphi}_1(t)|\,dt\Bigr)^{2}, \tag{4.3}
\]
\[
\operatorname{Var}\{N_n^{(2)}[\varphi_2]\}\le C(\sigma_6)\Bigl(\int(1+|t|^{3})\,|\widehat{\varphi}_2(t)|\,dt\Bigr)^{2}. \tag{4.4}
\]

Let $w$ be a linear combination of the random variables $N_n^{(1)\circ}[\varphi_1]$ and $N_n^{(2)\circ}[\varphi_2]$, and let $Z_n(x)$ be the characteristic function of $w$, i.e.,

\[
w=\alpha N_n^{(1)\circ}[\varphi_1]+\beta N_n^{(2)\circ}[\varphi_2],\qquad Z_n(x)=\mathbb{E}\{e^{ixw}\}. \tag{4.5}
\]

We note that

\[
Z_n(x)=1+\int_{0}^{x}Z_n'(t)\,dt;\qquad Z_n'(x)=i\,\mathbb{E}\{w\,e^{ixw}\}. \tag{4.6}
\]

By the Cauchy-Schwarz inequality and (4.3–4.4) we get

\[
|Z_n'(x)|\le(|\alpha|+|\beta|)\,C^{1/2}(\sigma_6)\int(1+|t|^{3})\bigl(|\widehat{\varphi}_1(t)|+|\widehat{\varphi}_2(t)|\bigr)\,dt. \tag{4.7}
\]

Using the Fourier inversion formula $f(\lambda)=\int e^{it\lambda}\widehat f(t)\,dt$ we obtain

\[
N_n^{(1)\circ}[\varphi_1]=\int\widehat{\varphi}_1(t)\,u_n^{\circ}(t)\,dt,\qquad N_n^{(2)\circ}[\varphi_2]=\int\widehat{\varphi}_2(t)\,\tilde u_n^{\circ}(t)\,dt. \tag{4.8}
\]

Therefore,

\[
w=\int\bigl(\alpha\,\widehat{\varphi}_1(t)\,u_n^{\circ}(t)+\beta\,\widehat{\varphi}_2(t)\,\tilde u_n^{\circ}(t)\bigr)\,dt, \tag{4.9}
\]
\[
Z_n'(x)=i\alpha\int_{-\infty}^{\infty}\widehat{\varphi}_1(t)\,Y_n(x,t)\,dt+i\beta\int_{-\infty}^{\infty}\widehat{\varphi}_2(t)\,\tilde Y_n(x,t)\,dt, \tag{4.10}
\]

where

\[
Y_n(x,t)=\mathbb{E}\{u_n^{\circ}(t)\,e_n(x)\},\qquad \tilde Y_n(x,t)=\mathbb{E}\{\tilde u_n^{\circ}(t)\,e_n(x)\},\qquad e_n(x)=e^{ixw}. \tag{4.11}
\]

By the Cauchy-Schwarz inequality,

|Yn(x,t)|𝔼{|un(t)|}C1/2(σ6)(1+|t|3),    (4.12)
|Y~n(x,t)|𝔼{|u~n(t)|}C1/2(σ6)(1+|t|3),    (4.13)

and

|xYn(x,t)|=|𝔼{αunNn(1)[φ1]en(x)+βunNn(2)[φ2]en(x)}|                            C(σ6)(1+|t|3)-(1+|t|3)(|αφ^1(t)|                                      +|βφ^2(t)|)dt.    (4.14)

Also

tYn(x,t)=𝔼{un(t)en(x)}=inj,kB1𝔼{WjkΦn},    (4.15)

where

Φn=Ujk(t)en(x).

Recall that for Djk=/Mjk,  βjk=(1+δjk)-1,

DjkUab(t)=1j,kB1iβjk[Uaj*Ubk(t)+Ubj*Uak(t)],    (4.16)
DjkU~ab(t)=1j,kB2iβjk[U~aj*U~bk(t)+U~bj*U~ak(t)],    (4.17)

and

Djken(x)=2iβjkxen(x)(1j,kB1α(φ1)jk(M1)+1j,kB2β(φ2)jk(M2))    (4.18)
=-2βjkxen(x)(1j,kB1-tUjk(t)αφ1^(t)dt+1j,kB2          -tU~jkβφ2^(t)dt).    (4.19)

Lemma 4.1. Let $\varphi_1,\varphi_2$ have bounded fourth derivatives. Then

\[
\bigl|D_{jk}^{\,l}\bigl(U_{jk}(t)\,e_n(x)\bigr)\bigr|\le C_l(x,t),\qquad 0\le l\le 5, \tag{4.20}
\]

where $C_l(x,t)$ is a polynomial of degree $l$ in $|x|,|t|$ with positive coefficients.

Proof: From (4.16) and (4.17), we have

\[
|D_{jk}^{\,l}U_{ab}(t)|,\ |D_{jk}^{\,l}\tilde U_{ab}(t)|\le \mathrm{Const}_l\,|t|^{l},\qquad 0\le l\le 5. \tag{4.21}
\]

(4.19) implies

\[
|D_{jk}^{\,l}e_n(x)|\le \mathrm{Const}_l\,(1+|x|^{l}),\qquad 0\le l\le 5. \tag{4.22}
\]

These two inequalities complete the proof of Lemma 4.1.

We now apply the Decoupling Formula (5.1) with p = 2 to obtain

tYn(x,t)=inj,kB1(1+(σ2-1)δjk)𝔼{DjkΦn}+O(1)                       =inj,kB1(1+δjk)𝔼{DjkΦn}                            +i(σ2-2)njB1𝔼{DjjΦn}+O(1).    (4.23)

where the error term is bounded by C3(x, t) as n → ∞. The first term in (4.23) is

-tnYn(t,x)-1n0t𝔼{un(t-t1)}Yn(x,t1)dt1-1n𝔼{0tun(t1)un(t-t1)dt1en(x)}-2in𝔼{xen(x)(-t1un(t+t1)αφ1^(t1)dt1+-t1TrP(1,2)Un(t)P(1,2)U~n(t1)βφ2^(t1)dt1).

The first term and the second term are bounded because of (4.12). The last term is bounded by

2|x|-|t|(|α||φ^1(t1)|+|β||φ^2(t1)|)dt1,

and the third term is bounded by 2|t|C1/2(σ6)(1+|t|3).

The second term in (4.23) is

2-σ2njB1𝔼{0tUjj(t1)Ujj(t-t1)dt1en(x)}+ix(2-σ2)njB1𝔼{en(x)-t1Ujj(t)Ujj(t1)αφ1^(t1)dt1}+ix(2-σ2)njB1B2𝔼{en(x)-t1Ujj(t)U~jj(t1)βφ2^(t1)dt1}

The first term is bounded by 2|2−σ2||t|, and the second term is bounded by

2|x||2-σ2|-|t|(|α||φ^1(t1)|+|β||φ^2(t1)|)dt1.

So

|tYn(x,t)|C5(x,t).

By symmetry, $\tilde Y_n(x,t)$ satisfies similar bounds. Therefore, we conclude that the sequences $\{Y_n\},\{\tilde Y_n\}$ are bounded and equicontinuous on any compact subset of ℝ². We will now prove that any uniformly converging subsequence of $\{Y_n\}$ (resp. $\{\tilde Y_n\}$) has the same limit $Y$ (resp. $\tilde Y$).

We deal with $Y_n$ first; by symmetry, the same argument applies to $\tilde Y_n$. We use the identity

\[
u_n(t)=n_1+i\int_{0}^{t}\sum_{j,k\in B_1}M_{jk}\,U_{jk}(t_1)\,dt_1, \tag{4.24}
\]

to write

\[
Y_n(x,t)=\frac{i}{\sqrt n}\int_{0}^{t}\sum_{j,k\in B_1}\mathbb{E}\{W_{jk}\,U_{jk}(t_1)\,e_n^{\circ}(x)\}\,dt_1. \tag{4.25}
\]

By applying decoupling formula (5.1) with p = 3 to (4.25), we have

Yn(x,t)=in0tj,kB1[l=03κl+1,jknl/2l!𝔼{Djkl(Ujk(t1)en(x))}+ε3,jk]dt1,    (4.26)

where

\[
\kappa_{1,jk}=0,\qquad \kappa_{2,jk}=1+\delta_{jk}(\sigma^{2}-1), \tag{4.27}
\]
\[
\kappa_{3,jk}=\mu_3,\qquad \kappa_{4,jk}=\kappa_4\quad\text{for } j\ne k, \tag{4.28}
\]

and $\kappa_{3,jj},\kappa_{4,jj}$ are uniformly bounded, i.e., there exist constants $\sigma_3,\sigma_4$ such that

\[
|\kappa_{3,jj}|\le\sigma_3,\qquad |\kappa_{4,jj}|\le\sigma_4, \tag{4.29}
\]

and

|ε3,jk|n-2C3𝔼{|Wjk|5}supt|Djk4Φn(x)|n-2C4(x,t).    (4.30)

Let

Tl=in(l+1)/20tj,kB1κl+1,jkl!𝔼{Djkl(Ujk(t1)en(x))}dt1,l=1,2,3,    (4.31)
En=in0tj,kB1ε3,jkdt.    (4.32)

Then

Yn(x,t)=T1+T2+T3+En,

and

|En|n12n5/2C5(x,t)0,   as n.

We note that if the $W_{jk}$'s are Gaussian, then $Y_n(x,t)=T_1$. Thus, $T_1$ coincides with the $Y_n$ in Theorem 2.1.

Let

v¯n(t)=n-1𝔼{un(t)},   v~¯n(t)=n-1𝔼{u~n(t)}.

Then

Yn(x,t)+20tdt10t1v¯n(t1-t2)Yn(x,t2)dt2=xZn(x)An(t)+rn(x,t)+T2+T3+En,    (4.33)

where

An(t)=-2αn0t𝔼{TrU(t1)P1φ1(M1)P1}dt1      -2βn0t𝔼{TrU(t1)P2φ2(M2)P2}dt1,    (4.34)

and rn(x, t) → 0 on any bounded subset of {(x, t):x ∈ ℝ, t > 0}.

Let A(t)=limnAn(t). It follows from the proof of Theorem 2.1 that A(t) coincides with the one established in the Gaussian case.

Proposition 4.2. T2 → 0 on any bounded subset of {(x, t):x ∈ ℝ, t > 0}.

Proof: The second derivative (l=2) is

Djk2(Ujk(t1)en(x))=βjk2×{-(6Ujj*Ujk*Ukk+2Ujk*Ujk*Ujk)(t1)en(x)-4i(Ujj*Ukk+Ujk*Ujk)(t1)xen(x)  [-tUjk(t)αφ1^(t)dt+1j,kB2-tU~jkβφ2^(t)dt]+4Ujk(t1)x2en(x)[-tUjk(t)αφ1^(t)dt+1j,kB2-tU~jkβφ2^(t)dt]2-2iUjk(t1)xen(x)[-t(Ujj*Ukk+Ujk*Ujk)(t)αφ1^(t)dt+1j,kB2-t(U~jj*U~kk+U~jk*U~jk)(t)βφ2^(t)dt]}.

Let

T21=iκ32n3/20t𝔼{j,kB1-βjk2(6Ujj*Ujk*Ukk   +2Ujk*Ujk*Ujk)(t1)en(x)   -4iβjk2(Ujj*Ukk+Ujk*Ujk)(t1)xen(x)t2Ujk(t2)αφ1^(t2)dt2   +4βjk2Ujk(t1)x2en(x)(t2Ujk(t2)αφ1^(t2)dt2)2   -2iβjk2Ujk(t1)xen(x)t2(Ujj*Ukk+Ujk*Ujk)(t2)βφ1^(t2)dt2}dt1,T22=iκ32n3/20t𝔼{j,kB1B24βjk2Ujk(t1)x2en(x)(t2U~jk(t2)βφ2^(t2)dt2)2   +8βjk2Ujk(t1)x2en(x)t2Ujk(t2)αφ1^(t2)dt2t3U~jk(t3)βφ2^(t3)dt3   -2iβjk2Ujk(t1)xen(x)t2[U~jj*U~kk+U~jk*U~jk](t2)βφ2^(t2)dt2}dt1,T23=i2n3/20tjB1κ3,jj𝔼{Djj2(Ujj(t1)en(x))}dt1.

Then $T_2=T_{21}+T_{22}+T_{23}$. It has been shown in Lytova and Pastur [21] that $|T_{21}|\le |t|\,C_2(x,t)\,n_1/n^{3/2}$ on any bounded subset of $\{(x,t):x\in\mathbb{R},\,t>0\}$. Also, by Lemma 4.1 and (4.29), one has $|T_{23}|\le |t|\,C_2(x,t)\,n_1/n^{3/2}$.

In $T_{22}$, there are three types of sums,

\[
S_1=n^{-3/2}\sum_{j,k\in B_1\cap B_2}U_{jk}(t_1)\,\tilde U_{jk}(t_2)\,\tilde U_{jk}(t_3),\qquad
S_2=n^{-3/2}\sum_{j,k\in B_1\cap B_2}U_{jk}(t_1)\,U_{jk}(t_2)\,\tilde U_{jk}(t_3),\qquad
S_3=n^{-3/2}\sum_{j,k\in B_1\cap B_2}U_{jk}(t_1)\,\tilde U_{jj}(t_2)\,\tilde U_{kk}(t_3).
\]

Applying the Cauchy-Schwarz inequality we obtain

\[
|S_1|\le n^{-3/2}\sum_{j,k\in B_2}|\tilde U_{jk}(t_2)\,\tilde U_{jk}(t_3)|\le\frac{n_2}{n^{3/2}},\qquad
|S_2|\le n^{-3/2}\sum_{j,k\in B_1}|U_{jk}(t_1)\,U_{jk}(t_2)|\le\frac{n_1}{n^{3/2}}.
\]

Writing

S3=n12n3/2(P12U(t1)P12V(t2),V(t3)),

where

V(t)=n12-1/2(U~jj(t))jB1B2T.

||V(t)|| ≤ 1, ||P12U(t)P12|| ≤ 1, we conclude that S3n12n3/2, hence T22|t|/n3/2. This completes the proof of Proposition 4.2.

Proposition 4.3.

T3=T31+T32+R3(x,t),

where

T31=iκ4n20tj,kB1𝔼{Ujj*Ukk(t1)xen(x)   t2Ujj*Ukk(t2)αφ1^(t2)dt2}dt1,T32=iκ4n20tj,kB1B2𝔼{Ujj*Ukk(t1)xen(x)   t2U~jj*U~kk(t2)βφ2^(t2)dt2}dt1.

and R3(x, t) → 0 on any bounded subset of {(x, t):x ∈ ℝ, t > 0}.

Proof:

T3=iκ46n20tj,kB1𝔼{Djk3(Ujk(t1)en(x))}dt1+T~3,

where

T~3=i6n20tjB1(κ4,jj-κ4)𝔼{Djj3(Ujj(t1)en(x))}dt1.

By Lemma 4.1 and (4.29), we have $|\tilde T_3|\le |t|\,C_3(x,t)\,n_1/n^{2}$.

The third derivative (l=3)

Djk3(Ujk(t1)en(x))=βjk3×{-i(36Ujj*Ujk*Ujk*Ukk+6Ujj*Ujj*Ukk*Ukk+6Ujk*Ujk*Ujk*Ujk)(t1)en(x)+6(6Ujj*Ujk*Ukk+2UjkUjk*Ujk)(t1)xen(x)(tUjk(t)αφ1^(t)dt+1j,kB2tU~jkβφ2^(t)dt)+12i(Ujj*Ukk+Ujk*Ujk)(t1)x2en(x)(tUjk(t)αφ1^(t)dt+1j,kB2tU~jkβφ2^(t)dt)2+6(Ujj*Ukk+Ujk*Ujk)(t1)xen(x)(t(Ujj*Ukk+Ujk*Ujk)(t)αφ1^(t)dt+1j,kB2t(U~jj*U~kk+U~jk*U~jk)βφ2^(t)dt)-8Ujk(t1)x3en(x)(tUjk(t)αφ1^(t)dt+1j,kB2tU~jkβφ2^(t)dt)3+12iUjk(t1)x2en(x)(tUjk(t)αφ1^(t)dt+1j,kB2tU~jkβφ2^(t)dt)×(t(Ujj*Ukk+Ujk*Ujk)(t)αφ1^(t)dt+1j,kB2t(U~jj*U~kk+U~jk*U~jk)βφ2^(t)dt)+2Ujk(t1)xen(x)[t(6Ujj*Ujk*Ukk+2Ujk*Ujk*Ujk)(t)αφ1^(t)dt+1j,kB2t(6U~jj*U~jk*U~kk+2U~jk*U~jk*U~jk)(t)βφ2^(t)dt]}.    (4.35)

So any term of

\frac{i\kappa_4}{6 n^{2}} \int_0^t \sum_{j,k\in B_1} \mathbb{E}\big\{D_{jk}^{3}\big(U_{jk}(t_1)\, e_n(x)\big)\big\}\, dt_1

containing at least one off-diagonal entry $U_{jk}$ or $\widetilde{U}_{jk}$ is bounded by $C_3(x,t)\, n_1/n^{2}$. Let $R_3(x,t)$ be the sum of $\widetilde{T}_3$ and these terms. Then $|R_3(x,t)| \le C_3(x,t)\, n_1/n^{2} + |t|\, C_3(x,t)\, n_1/n^{2}$. Thus only the two terms in (4.35) that contain exclusively diagonal entries of $U$ and $\widetilde{U}$ contribute to $T_3$; they are $T_{31}$ and $T_{32}$.

Let

v(t) = \frac{1}{2\pi\gamma_1} \int_{-2\sqrt{\gamma_1}}^{2\sqrt{\gamma_1}} e^{it\lambda}\, \sqrt{4\gamma_1 - \lambda^2}\, d\lambda, \qquad
\tilde{v}(t) = \frac{1}{2\pi\gamma_2} \int_{-2\sqrt{\gamma_2}}^{2\sqrt{\gamma_2}} e^{it\lambda}\, \sqrt{4\gamma_2 - \lambda^2}\, d\lambda.

By the Wigner semicircle law, one has

\lim_{n\to\infty} \bar{v}_n(t) = \gamma_1\, v(t), \qquad \lim_{n\to\infty} \bar{\tilde{v}}_n(t) = \gamma_2\, \tilde{v}(t).
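
Although it is not needed below, both limits admit a closed form: substituting $\lambda = \sqrt{\gamma_l}\, \mu$ and using the classical identity $\frac{1}{2\pi} \int_{-2}^{2} e^{is\mu} \sqrt{4 - \mu^2}\, d\mu = J_1(2s)/s$ for the Fourier transform of the semicircle density, one finds

v(t) = \frac{J_1(2\sqrt{\gamma_1}\, t)}{\sqrt{\gamma_1}\, t}, \qquad \tilde{v}(t) = \frac{J_1(2\sqrt{\gamma_2}\, t)}{\sqrt{\gamma_2}\, t},

where $J_1$ denotes the Bessel function of the first kind.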

Then

(v*v)(t) = -\frac{i}{2\pi\gamma_1^{2}} \int_{-2\sqrt{\gamma_1}}^{2\sqrt{\gamma_1}} e^{it\mu}\, \mu\, \sqrt{4\gamma_1 - \mu^2}\, d\mu = \frac{1}{\pi t \gamma_1^{2}} \int_{-2\sqrt{\gamma_1}}^{2\sqrt{\gamma_1}} e^{it\mu}\, \frac{2\gamma_1 - \mu^2}{\sqrt{4\gamma_1 - \mu^2}}\, d\mu,    (4.36)
(\tilde{v}*\tilde{v})(t) = -\frac{i}{2\pi\gamma_2^{2}} \int_{-2\sqrt{\gamma_2}}^{2\sqrt{\gamma_2}} e^{it\mu}\, \mu\, \sqrt{4\gamma_2 - \mu^2}\, d\mu = \frac{1}{\pi t \gamma_2^{2}} \int_{-2\sqrt{\gamma_2}}^{2\sqrt{\gamma_2}} e^{it\mu}\, \frac{2\gamma_2 - \mu^2}{\sqrt{4\gamma_2 - \mu^2}}\, d\mu.    (4.37)

Let

I(t) = \int_0^t (v*v)(t_1)\, dt_1, \qquad \tilde{I}(t) = \int_0^t (\tilde{v}*\tilde{v})(t_1)\, dt_1.    (4.38)

Denote

B_{\varphi_l} = \frac{1}{\pi\gamma_l^{2}} \int_{-2\sqrt{\gamma_l}}^{2\sqrt{\gamma_l}} \varphi_l(\mu)\, \frac{2\gamma_l - \mu^2}{\sqrt{4\gamma_l - \mu^2}}\, d\mu, \qquad l = 1,2.    (4.39)
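
As a concrete illustration, for the quadratic test function $\varphi_l(\mu) = \mu^2$ the substitution $\mu = 2\sqrt{\gamma_l} \sin\theta$ gives

\int_{-2\sqrt{\gamma_l}}^{2\sqrt{\gamma_l}} \mu^2\, \frac{2\gamma_l - \mu^2}{\sqrt{4\gamma_l - \mu^2}}\, d\mu = 8\gamma_l^{2} \int_{-\pi/2}^{\pi/2} \sin^{2}\theta\, \cos 2\theta\, d\theta = -2\pi\gamma_l^{2},

so that $B_{\mu^2} = -2$, regardless of the value of $\gamma_l$.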

Proposition 4.4.

T_{31} \to i\kappa_4\, x\, Z(x)\, I(t)\, \alpha\, \gamma_1^{2}\, B_{\varphi_1},    (4.40)
T_{32} \to i\kappa_4\, x\, Z(x)\, I(t)\, \beta\, \gamma_{12}^{2}\, B_{\varphi_2},    (4.41)

uniformly on any bounded subset of {(x, t):x ∈ ℝ, t > 0}.

Proof: The proof of (4.40) can be found in Lytova and Pastur [21]. To study the asymptotic behavior of the l.h.s. of (4.41), we write:

T_{32} = \frac{i x \kappa_4}{n^{2}} \int_0^t \sum_{j,k\in B_1\cap B_2} \int_0^{t_1} \int \int_0^{t_2} t_2\, \mathbb{E}\big\{U_{jj}(t_3)\, U_{kk}(t_1 - t_3)\, \widetilde{U}_{jj}(t_4)\, \widetilde{U}_{kk}(t_2 - t_4)\, e_n(x)\big\}\, \beta\widehat{\varphi_2}(t_2)\, dt_4\, dt_2\, dt_3\, dt_1
 = i x \kappa_4 \int_0^t \int_0^{t_1} \int \int_0^{t_2} t_2\, \mathbb{E}\big\{v_n(t_3,t_4)\, v_n(t_1 - t_3,\, t_2 - t_4)\, e_n^{\circ}(x)\big\}\, \beta\widehat{\varphi_2}(t_2)\, dt_4\, dt_2\, dt_3\, dt_1
 + i x \kappa_4\, Z_n(x) \int_0^t \int_0^{t_1} \int \int_0^{t_2} t_2\, \mathbb{E}\big\{v_n(t_3,t_4)\, v_n(t_1 - t_3,\, t_2 - t_4)\big\}\, \beta\widehat{\varphi_2}(t_2)\, dt_4\, dt_2\, dt_3\, dt_1,    (4.42)

where

v_n(t_1,t_2) = n^{-1} \sum_{j\in B_1\cap B_2} U_{jj}(t_1)\, \widetilde{U}_{jj}(t_2).    (4.43)
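
The second equality in (4.42) uses only the centering convention of the Gaussian case, assuming, as in Lytova and Pastur [21], that $Z_n(x) = \mathbb{E}\{e_n(x)\}$ and $e_n^{\circ}(x) = e_n(x) - \mathbb{E}\{e_n(x)\}$, applied inside the expectation:

\mathbb{E}\big\{v_n(t_3,t_4)\, v_n(t_1-t_3,\, t_2-t_4)\, e_n(x)\big\} = \mathbb{E}\big\{v_n(t_3,t_4)\, v_n(t_1-t_3,\, t_2-t_4)\, e_n^{\circ}(x)\big\} + Z_n(x)\, \mathbb{E}\big\{v_n(t_3,t_4)\, v_n(t_1-t_3,\, t_2-t_4)\big\}.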

Then

\big|\mathbb{E}\big\{v_n(t_1,t_2)\, v_n(t_3,t_4)\, e_n^{\circ}(x)\big\}\big| \le 4\, \mathbb{E}\big\{|v_n^{\circ}(t_1,t_2)|\big\} + 4\, \mathbb{E}\big\{|v_n^{\circ}(t_3,t_4)|\big\},    (4.44)

and

\mathbb{E}\big\{v_n(t_1,t_2)\, v_n(t_3,t_4)\big\} = \bar{v}_n(t_1,t_2)\, \bar{v}_n(t_3,t_4) + \mathbb{E}\big\{v_n(t_1,t_2)\, v_n^{\circ}(t_3,t_4)\big\},    (4.45)

where

\bar{v}_n(t_1,t_2) = \mathbb{E}\big\{v_n(t_1,t_2)\big\}.    (4.46)

Proposition 4.5.

\bar{v}_n(t_1,t_2) = \gamma_{12}\, v(t_1)\, \tilde{v}(t_2) + o(1),

uniformly on any compact subset of $\mathbb{R}^2$.

Proof: Indeed, $\mathbb{E}\{U_{jj}(t_1)\, \widetilde{U}_{jj}(t_2)\} = v(t_1)\, \tilde{v}(t_2) + o(1)$ uniformly in $1 \le j \le n$ and $(t_1, t_2)$ from a compact subset of $\mathbb{R}^2$, which follows from

\mathbb{E}\, U_{jj}(t) = v(t) + o(1), \quad \mathrm{Var}\{U_{jj}(t)\} = o(1), \qquad \mathbb{E}\, \widetilde{U}_{jj}(t) = \tilde{v}(t) + o(1), \quad \mathrm{Var}\{\widetilde{U}_{jj}(t)\} = o(1)

(see e.g., [33]).

So the limit of T32 is

i x \kappa_4\, Z(x)\, \gamma_{12}^{2} \int_0^t (v*v)(t_1)\, dt_1 \int t_2\, \beta\widehat{\varphi_2}(t_2)\, (\tilde{v}*\tilde{v})(t_2)\, dt_2 = i x \kappa_4\, Z(x)\, \gamma_{12}^{2}\, I(t)\, \beta\, B_{\varphi_2}.
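
Here the factor $\beta B_{\varphi_2}$ comes from (4.37) and (4.39), assuming the Fourier convention $\varphi_2(\mu) = \int \widehat{\varphi_2}(t)\, e^{it\mu}\, dt$ for the test functions:

\int t_2\, \widehat{\varphi_2}(t_2)\, (\tilde{v}*\tilde{v})(t_2)\, dt_2 = \frac{1}{\pi\gamma_2^{2}} \int_{-2\sqrt{\gamma_2}}^{2\sqrt{\gamma_2}} \Big(\int \widehat{\varphi_2}(t_2)\, e^{it_2\mu}\, dt_2\Big)\, \frac{2\gamma_2 - \mu^2}{\sqrt{4\gamma_2 - \mu^2}}\, d\mu = B_{\varphi_2}.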

So if $Y(x,t) = \lim_{n\to\infty} Y_n(x,t)$, then $Y(x,t)$ satisfies

Y(x,t) + 2\gamma_1 \int_0^t dt_1 \int_0^{t_1} v(t_1 - t_2)\, Y(x,t_2)\, dt_2 = x\, Z(x)\, \Big[A(t) + i\kappa_4\, I(t)\, \big(\alpha\, \gamma_1^{2}\, B_{\varphi_1} + \beta\, \gamma_{12}^{2}\, B_{\varphi_2}\big)\Big].

Therefore, if we let $Y^*(x,t)$ be the solution of

Y^*(x,t) + 2\gamma_1 \int_0^t dt_1 \int_0^{t_1} v(t_1 - t_2)\, Y^*(x,t_2)\, dt_2 = x\, Z(x)\, A(t),

then

Y(x,t) = Y^*(x,t) + \frac{i\kappa_4\, x\, Z(x)}{2\pi\gamma_1^{2}} \big[\alpha\, \gamma_1^{2}\, B_{\varphi_1} + \beta\, \gamma_{12}^{2}\, B_{\varphi_2}\big] \int_{-2\sqrt{\gamma_1}}^{2\sqrt{\gamma_1}} e^{it\lambda}\, \frac{2\gamma_1 - \lambda^2}{\sqrt{4\gamma_1 - \lambda^2}}\, d\lambda.    (4.47)

Symmetrically,

\widetilde{Y}(x,t) = \widetilde{Y}^*(x,t) + \frac{i\kappa_4\, x\, Z(x)}{2\pi\gamma_2^{2}} \big[\alpha\, \gamma_{12}^{2}\, B_{\varphi_1} + \beta\, \gamma_2^{2}\, B_{\varphi_2}\big] \int_{-2\sqrt{\gamma_2}}^{2\sqrt{\gamma_2}} e^{it\lambda}\, \frac{2\gamma_2 - \lambda^2}{\sqrt{4\gamma_2 - \lambda^2}}\, d\lambda.    (4.48)

Therefore,

Z'(x) = i \int \alpha\widehat{\varphi_1}(t)\, Y(x,t)\, dt + i \int \beta\widehat{\varphi_2}(t)\, \widetilde{Y}(x,t)\, dt
 = -x\, V\, Z(x) - \frac{\alpha\kappa_4\, x\, Z(x)}{2\pi\gamma_1^{2}} \int \widehat{\varphi_1}(t)\, \big[\alpha\, \gamma_1^{2}\, B_{\varphi_1} + \beta\, \gamma_{12}^{2}\, B_{\varphi_2}\big] \int_{-2\sqrt{\gamma_1}}^{2\sqrt{\gamma_1}} e^{it\lambda}\, \frac{2\gamma_1 - \lambda^2}{\sqrt{4\gamma_1 - \lambda^2}}\, d\lambda\, dt
 \quad - \frac{\beta\kappa_4\, x\, Z(x)}{2\pi\gamma_2^{2}} \int \widehat{\varphi_2}(t)\, \big[\alpha\, \gamma_{12}^{2}\, B_{\varphi_1} + \beta\, \gamma_2^{2}\, B_{\varphi_2}\big] \int_{-2\sqrt{\gamma_2}}^{2\sqrt{\gamma_2}} e^{it\lambda}\, \frac{2\gamma_2 - \lambda^2}{\sqrt{4\gamma_2 - \lambda^2}}\, d\lambda\, dt
 = -x\, V\, Z(x) - \frac{\alpha^{2}\, x\, \kappa_4\, Z(x)}{2}\, \gamma_1^{2}\, B_{\varphi_1}^{2} - \alpha\beta\, x\, \kappa_4\, Z(x)\, \gamma_{12}^{2}\, B_{\varphi_1} B_{\varphi_2} - \frac{\beta^{2}\, x\, \kappa_4\, Z(x)}{2}\, \gamma_2^{2}\, B_{\varphi_2}^{2}
 = -x\, V\, Z(x) - x\, \kappa_4\, Z(x)\, \Big[\frac{\alpha^{2}\, \gamma_1^{2}\, B_{\varphi_1}^{2}}{2} + \alpha\beta\, \gamma_{12}^{2}\, B_{\varphi_1} B_{\varphi_2} + \frac{\beta^{2}\, \gamma_2^{2}\, B_{\varphi_2}^{2}}{2}\Big],    (4.49)

where

V = \alpha^{2}\, \mathrm{Var}(G_1) + 2\alpha\beta\, \mathrm{Cov}(G_1, G_2) + \beta^{2}\, \mathrm{Var}(G_2),

and G1, G2 are the random variables in Theorem 2.1 with d = 2.
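
One way to pass from (4.49) to the covariance statement below is to read (4.49) as the differential equation $Z'(x) = -x\big(V + \kappa_4\big[\tfrac{\alpha^2\gamma_1^2 B_{\varphi_1}^2}{2} + \alpha\beta\, \gamma_{12}^2 B_{\varphi_1} B_{\varphi_2} + \tfrac{\beta^2\gamma_2^2 B_{\varphi_2}^2}{2}\big]\big) Z(x)$ with $Z(0) = 1$, so that

Z(x) = \exp\Big\{-\frac{x^{2}}{2} \Big(V + \kappa_4 \Big[\frac{\alpha^{2}\gamma_1^{2} B_{\varphi_1}^{2}}{2} + \alpha\beta\, \gamma_{12}^{2} B_{\varphi_1} B_{\varphi_2} + \frac{\beta^{2}\gamma_2^{2} B_{\varphi_2}^{2}}{2}\Big]\Big)\Big\},

i.e., $\alpha\widetilde{G}_1 + \beta\widetilde{G}_2$ is a centered Gaussian random variable whose variance is the expression in parentheses. Comparing the coefficients of $\alpha\beta$ with those in $V$ identifies the covariance of $\widetilde{G}_1$ and $\widetilde{G}_2$ as $\mathrm{Cov}(G_1, G_2) + \kappa_4\, \gamma_{12}^{2}\, B_{\varphi_1} B_{\varphi_2}/2$, which is the right-hand side of (4.50) once (4.39) is substituted.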

Therefore,

\lim_{n\to\infty} \mathrm{Cov}\big(N_n^{(1)\circ}[\varphi_1],\, N_n^{(2)\circ}[\varphi_2]\big) = \mathrm{Cov}(G_1, G_2) + \frac{\gamma_{12}^{2}\, \kappa_4}{2\pi^{2}\, \gamma_1^{2}\, \gamma_2^{2}} \int_{-2\sqrt{\gamma_1}}^{2\sqrt{\gamma_1}} \varphi_1(\mu)\, \frac{2\gamma_1 - \mu^2}{\sqrt{4\gamma_1 - \mu^2}}\, d\mu \int_{-2\sqrt{\gamma_2}}^{2\sqrt{\gamma_2}} \varphi_2(\mu)\, \frac{2\gamma_2 - \mu^2}{\sqrt{4\gamma_2 - \mu^2}}\, d\mu.    (4.50)

By symmetry, for any $1 \le l \le p \le d$,

\mathrm{Cov}\big(\widetilde{G}_l, \widetilde{G}_p\big) = \mathrm{Cov}(G_l, G_p) + \frac{\gamma_{lp}^{2}\, \kappa_4}{2\pi^{2}\, \gamma_l^{2}\, \gamma_p^{2}} \int_{-2\sqrt{\gamma_l}}^{2\sqrt{\gamma_l}} \varphi_l(\lambda)\, \frac{2\gamma_l - \lambda^2}{\sqrt{4\gamma_l - \lambda^2}}\, d\lambda \int_{-2\sqrt{\gamma_p}}^{2\sqrt{\gamma_p}} \varphi_p(\mu)\, \frac{2\gamma_p - \mu^2}{\sqrt{4\gamma_p - \mu^2}}\, d\mu.    (4.51)
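
As a numerical illustration of (4.51), one can compare the empirical covariance of the two linear statistics for an ensemble with $\kappa_4 \neq 0$ against a Gaussian ensemble with the same variance profile; by the computation following (4.39), for $\varphi_1 = \varphi_2 = \mu^2$ the predicted shift is $2\gamma_{12}^{2}\kappa_4$. The following minimal Python sketch is only an illustration: the choices of $n$, $B_1$, $B_2$, the sample size, and the uniform entry distribution (variance one, fourth cumulant $-6/5$) are ours, and the identity $\sum_i \lambda_i(M_l)^2 = \sum_{j,k\in B_l} M_{jk}^2$ is used to avoid diagonalization.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 200
    B1, B2 = np.arange(0, 150), np.arange(50, 200)    # n1 = n2 = 150, overlap n12 = 100
    gamma12 = 100 / n

    def wigner(sampler):
        # symmetric matrix with i.i.d. upper-triangular entries, normalized by sqrt(n)
        W = np.triu(sampler((n, n)), 1)
        W = W + W.T
        np.fill_diagonal(W, sampler(n))
        return W / np.sqrt(n)

    def cov_of_quadratic_statistics(sampler, reps=5000):
        # linear eigenvalue statistic for phi(x) = x^2 equals the sum of squared block entries
        s1, s2 = np.empty(reps), np.empty(reps)
        for r in range(reps):
            M = wigner(sampler)
            s1[r] = np.sum(M[np.ix_(B1, B1)] ** 2)
            s2[r] = np.sum(M[np.ix_(B2, B2)] ** 2)
        return np.cov(s1, s2)[0, 1]

    a = np.sqrt(3.0)                                  # uniform(-a, a): variance 1, kappa_4 = -6/5
    unif = lambda size: rng.uniform(-a, a, size=size)
    gauss = lambda size: rng.standard_normal(size)

    kappa4 = 9.0 / 5.0 - 3.0
    predicted_shift = 2 * gamma12 ** 2 * kappa4       # from (4.51) with phi_1 = phi_2 = x^2
    observed_shift = cov_of_quadratic_statistics(unif) - cov_of_quadratic_statistics(gauss)
    print(observed_shift, predicted_shift)            # both should be close to -0.6

The diagonal entries are drawn from the same law in both ensembles, so only the fourth cumulant of the off-diagonal entries affects the shift.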

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Author Contributions

All authors listed have made a substantial, direct and intellectual contribution to the work, and approved it for publication.

Funding

This research has been supported in part by the Simons Foundation award #312391.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Supplementary Material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fams.2020.00017/full#supplementary-material

References

1. Wigner EP. On the statistical distribution of the widths and the spacings of nuclear resonance levels. Proc Cambridge Philos Soc. (1951) 47:790–8. doi: 10.1017/S0305004100027237

2. Wigner EP. On the distribution of the roots of certain symmetric matrices. Ann Math. (1958) 67:325–7. doi: 10.2307/1970008

3. Mehta ML. Random Matrices. New York, NY: Elsevier Publishing (2004).

4. Bohigas O, Giannoni MJ, Schmit C. Characterization of chaotic quantum spectra and universality of level fluctuation laws. Phys Rev Lett. (1984) 52:1–4. doi: 10.1103/PhysRevLett.52.1

5. Franchini F, Kravtsov V. Horizon in random matrix theory, the Hawking radiation, and flow of cold atoms. Phys Rev Lett. (2009) 103:166401. doi: 10.1103/PhysRevLett.103.166401

6. Sánchez D, Büttiker M. Magnetic-field asymmetry of nonlinear mesoscopic transport. Phys Rev Lett. (2004) 93:106802. doi: 10.1103/PhysRevLett.93.106802

7. Edelman A, Rao NR. Random matrix theory. Acta Numer. (2005) 14:233–97. doi: 10.1017/S0962492904000236

8. Sompolinsky H, Crisanti A, Sommers HJ. Chaos in random neural networks. Phys Rev Lett. (1988) 61:259–62. doi: 10.1103/PhysRevLett.61.259

9. Chow GP. Analysis and Control of Dynamic Economic Systems. New York, NY: Wiley (1976).

10. Keating J. The Riemann zeta-function and quantum chaology. Proc Int School Phys Enrico Fermi CXIX. (1993) 145–85.

11. Harnad J. Editor. Random Matrices, Random Processes and Integrable Systems. CRM Series in Mathematical Physics. New York, NY: Springer (2011).

12. Romik D. The Surprising Mathematics of Longest Increasing Subsequences. New York, NY: Cambridge University Press (2015).

13. Johansson K. Random growth and random matrices. In: Proceedings of the 2000 European Congress of Mathematics, Progress in Mathematics (2001). Basel: Birkhäuser. p. 445–56.

14. Johnstone IM. High dimensional statistical inference and random matrices. In: Proceedings of the International Congress of Mathematicians. Madrid (2006).

15. Ben Arous G, Guionnet A. Wigner matrices. In: Akemann G, Baik J, Di Francesco P, editors. Oxford Handbook on Random Matrix Theory. New York, NY: Oxford University Press (2011).

16. Anderson GW, Guionnet A, Zeitouni O. An Introduction to Random Matrices. New York, NY: Cambridge University Press (2010).

17. Jonsson D. Some limit theorems for the eigenvalues of a sample covariance matrix. J Mult Anal. (1982) 12:1–38. doi: 10.1016/0047-259X(82)90080-X

18. Johansson K. On fluctuations of eigenvalues of random Hermitian matrices. Duke Math J. (1998) 91:151–204. doi: 10.1215/S0012-7094-98-09108-6

19. Sinai Y, Soshnikov A. Central limit theorem for traces of large random symmetric matrices with independent matrix elements. Bol Soc Brasil Mat. (1998) 29:1–24. doi: 10.1007/BF01245866

20. Bai ZD, Wang X, Zhou W. CLT for linear spectral statistics of Wigner matrices. Electron J Probabil. (2009) 14:2391–417. doi: 10.1214/EJP.v14-705

21. Lytova A, Pastur L. Central limit theorem for linear eigenvalue statistics of random matrices with independent entries. Ann Probabil. (2009) 37:1778–840. doi: 10.1214/09-AOP452

22. Shcherbina M. Central Limit Theorem for linear eigenvalue statistics of the Wigner and sample covariance random matrices. J Math Phys Anal Geometry. (2011) 7:176–92.

23. Anderson GW, Zeitouni O. A CLT for a band matrix model. Probab Theory Relat Fields. (2006) 134:283–338. doi: 10.1007/s00440-004-0422-3

24. Li L, Soshnikov A. Central limit theorem for linear statistics of eigenvalues of band random matrices. Random Matrices Theor Appl. (2013) 2:1350009. doi: 10.1142/S2010326313500093

25. Lodhia A, Simm NJ. Mesoscopic linear statistics of Wigner matrices. arXiv:1503.03533 [math.PR].

26. Borodin A. CLT for spectra of submatrices of Wigner random matrices. Mosc Math J. (2014) 14:29–38. doi: 10.17323/1609-4514-2014-14-1-29-38

27. Borodin A. CLT for spectra of submatrices of Wigner random matrices II. Stochastic evolution. Random Matrix Theory Interact Part Integr Syst. (2014) 65:57–69.

28. Lytova A, Pastur L. Fluctuations of matrix elements of regular functions of Gaussian random matrices. J Stat Phys. (2009) 134:147–59. doi: 10.1007/s10955-008-9665-1

29. Shcherbina M, Tirozzi B. Central limit theorem for fluctuations of linear eigenvalue statistics of large random graphs. J Math Phys. (2010) 51:023523. doi: 10.1063/1.3299297

30. Shcherbina M. On fluctuations of eigenvalues of random band matrices. J Stat Phys. (2015) 161:73–90. doi: 10.1007/s10955-015-1324-8

31. Cekanavicius V. Approximation Methods in Probability Theory. Basel: Springer (2016).

32. Pizzo A, Renfrew D, Soshnikov A. Fluctuations of matrix entries of regular functions of Wigner matrices. Ann l'Institut Henri Poincare (B) Probab Stat. (2013) 49:64–94. doi: 10.1214/11-AIHP459

33. O'Rourke S, Renfrew D, Soshnikov A. On fluctuations of matrix entries of regular functions of Wigner matrices with non-identically distributed entries. J Theor Probab. (2013) 26:750–80. doi: 10.1007/s10959-011-0396-x

Keywords: Wigner matrices, linear statistics, eigenvalues, central limit theorem, submatrices

Citation: Li L, Reed M and Soshnikov A (2020) Central Limit Theorem for Linear Eigenvalue Statistics for Submatrices of Wigner Random Matrices. Front. Appl. Math. Stat. 6:17. doi: 10.3389/fams.2020.00017

Received: 17 March 2020; Accepted: 04 May 2020;
Published: 09 June 2020.

Edited by:

Oleg N. Kirillov, Northumbria University, United Kingdom

Reviewed by:

Rajat Subhra Hazra, Indian Statistical Institute, India
Pragya Shukla, Indian Institute of Technology Kharagpur, India

Copyright © 2020 Li, Reed and Soshnikov. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Alexander Soshnikov, soshniko@math.ucdavis.edu
