Skip to main content

ORIGINAL RESEARCH article

Front. Appl. Math. Stat., 20 October 2020
Sec. Mathematics of Computation and Data Science
This article is part of the Research Topic Fundamental Mathematical Topics in Data Science View all 7 articles

Kernel-Based Analysis of Massive Data

  • Institute of Mathematical Sciences, Claremont Graduate University, Claremont, CA, United States

Dealing with massive data is a challenging task for machine learning. An important aspect of machine learning is function approximation. In the context of massive data, some of the commonly used tools for this purpose are sparsity, divide-and-conquer, and distributed learning. In this paper, we develop a very general theory of approximation by networks, which we have called eignets, to achieve local, stratified approximation. The very massive nature of the data allows us to use these eignets to solve inverse problems, such as finding a good approximation to the probability law that governs the data and finding the local smoothness of the target function near different points in the domain. In fact, we develop a wavelet-like representation using our eignets. Our theory is applicable to approximation on a general locally compact metric measure space. Special examples include approximation by periodic basis functions on the torus, zonal function networks on a Euclidean sphere (including smooth ReLU networks), Gaussian networks, and approximation on manifolds. We construct pre-fabricated networks so that no data-based training is required for the approximation.

1. Introduction

Rapid advances in technology have led to the availability and need to analyze a massive data. The problem arises in almost every area of life from medical science to homeland security to finance. An immediate problem in dealing with a massive data set is that it is not possible to store it in a computer memory; we therefore have to deal with the data piecemeal to keep access to an external memory to a minimum. The other challenge is to devise efficient numerical algorithms to overcome difficulties, for example, in using the customary optimization problems in machine learning. On the other hand, the very availability of a massive data set should lead also to opportunities to solve some problems heretofore considered unmanageable. For example, deep learning often requires a large amount of training data, which, in turn, helps us to figure out the granularity in the data. Apart from deep learning, distributed learning is also a popular way of dealing with big data. A good survey with the taxonomy for dealing with massive data was recently conducted by Zhou et al. [1].

As pointed out in Cucker and Smale [2], Cucker and Zhou [3], and Girosi and Poggio [4], the main task in machine learning can be viewed as one of approximation of functions based on noisy values of the target function, sampled at points that are themselves sampled from an unknown distribution. It is therefore natural to seek approximation theory techniques to solve the problem. However, most of the classical approximation theory results are either not constructive or study function approximation only on known domains. In this century, there is a new paradigm to consider function approximation on data-defined manifolds; a good introduction to the subject is in the special issue [5] of Applied and Computational Harmonic Analysis, edited by Chui and Donoho. In this theory, one assumes the manifold hypothesis, i.e., that the data is sampled from a probability distribution μ* supported on a smooth, compact, and connected Riemannian manifold; for simplicity, even that μ* is the Riemannian volume measure for the manifold, normalized to be a probability measure. Following (e.g., [610]), one constructs first a “graph Laplacian” from the data and finds its eigen decomposition. It is proved in the abovementioned papers that as the size of the data tends to infinity, the graph Laplacian converges to the Laplace-Beltrami operator on the manifold, and the eigenvalues (eigenvectors) converge to the corresponding quantities on the manifold. A great deal of work is devoted to studying the geometry of this unknown manifold (e.g., [11, 12]) based on the so-called heat kernel. The theory of function approximation on such manifolds is also well-developed (e.g., [1317]).

A bottleneck in this theory is the computation of the eigendecomposition of a matrix, which is necessarily huge in the case of big data. Kernel-based methods have been used also in connection with approximation on manifolds (e.g., [1822]). The kernels used in this method are constructed typically as a radial basis function (RBF) in the ambient space, and the methods are traditional machine learning methods involving optimization. As mentioned earlier, massive data poses a big challenge for the solution of these optimization problems. The theoretical results in this connection assume a Mercer's expansion in terms of the Laplacian eigenfunctions for the kernel, satisfying certain conditions. In this paper, we develop a general theory including several RBF kernels in use in different contexts (examples are discussed in section 2). Rather than using optimization-based techniques, we will provide a direct construction of the approximation based on what we have called eignets. An eignet is defined directly using the eigendecomposition on the manifold. We thus focus directly on the properties of Mercer expansion in an abstract and unified manner that enables us to construct local approximations suitable for working with massive data without using optimization.

It is also possible that the manifold hypothesis does not hold, and there is a recent work [23] by Fefferman et al. proposing an algorithm to test this hypothesis. On the other hand, our theory for function approximation does not necessarily use the full strength of Riemannian geometry. In this paper, we have therefore decided to work with a general locally compact metric measure space, isolating those properties which are needed for our analysis and substituting some that are not applicable in the current setting.

Our motivation comes from some recent works on distributed learning by Zhou et al. [2426] as well as our own work on deep learning [27, 28]. For example, in Lin et al. [26], the approximation is done on the Euclidean sphere using a localized kernel introduced in Mhaskar [29], where the massive data is divided into smaller parts, each dense on the sphere, and the resulting polynomial approximations are added to get the final result. In Chui et al. [24], the approximation takes place on a cube, and exploits any known sparsity in the representation of the target function in terms of spline functions. In Mhaskar and Poggio [28] and Mhaskar [27], we have argued that from a function approximation point of view, the observed superiority of deep networks over shallow ones results from the ability of deep networks to exploit any compositional structure in the target function. For example, in image analysis, one may divide the image into smaller patches, which are then combined in a hierarchical manner, resulting in a tree structure [30]. By putting a shallow network at each node to learn those aspects of the target function that depend upon the pixels seen up to that level, one can avoid the curse of dimensionality. In some sense, this is a divide-and-conquer strategy, not so much on the data set itself but on the dimension of the input space.

The highlights of this paper are the following.

• In order to avoid an explicit, data-dependent eigendecomposition, we introduce the notion of an eignet, which generalizes several radial basis function and zonal function networks. We construct pre-fabricated eignets, whose linear combinations can be constructed just by using the noisy values of the target function as the coefficients, to yield the desired approximation.

• Our theory generalizes the results in a number of examples used commonly in machine learning, some of which we will describe in section 2.

• The use of optimization methods, such as empirical risk minimization has an intrinsic difficulty, namely, the minimizer of this risk may have no connection with the approximation error. There are also other problems, such as local minima, saddle points, speed of convergence, etc. that need to be taken into account, and the massive nature of the data makes this an even more challenging task. Our results do not depend upon any kind of optimization in order to determine the necessary approximation.

• We developed a theory for local approximation using eignets so that only a relatively small amount of data is used in order to approximate the target function in any ball of the space, the data being sub-sampled using a distribution supported on a neighborhood of that ball. The accuracy of approximation adjusts itself automatically depending upon the local smoothness of the target function on the ball.

• In normal machine learning algorithms, it is customary to assume a prior on the target function called smoothness class in approximation theory parlance. Our theory demonstrates clearly how a massive data can actually help to solve the inverse problem to determine the local smoothness of the target function using a wavelet-like representation based solely on the data.

• Our results allow one to solve the inverse problem of estimating the probability density from which the data is chosen. In contrast to the statistical approaches that we are aware of, there is no limitation on how accurate the approximation can be asymptotically in terms of the number of samples; the accuracy is determined entirely by the smoothness of the density function.

• All our estimates are given in terms of probability of the error being small rather than the expected value of some loss function being small.

This paper is abstract, theoretical, and technical. In section 2, we present a number of examples that are generalized by our set-up. The abstract set-up, together with the necessary definitions and assumptions, are discussed in section 3. The main results are stated in section 4 and proved in section 8. The proofs require a great deal of preparation, which is presented in sections 5–7. The results in these sections are not all new. Many of them are new only in some nuance. For example, we have proven in section 7 the quadrature formulas required in the construction of our pre-fabricated networks in a probabilistic setting, and we have also substituted an estimate on the gradients by certain Lipschitz condition, which makes sense without the differentiability structure on the manifold as we had done in our previous works. Our Theorem 7.1 generalizes most of our previous results in this direction with the exception of [31, Theorem 2.3]. We have striven to give as many proofs as possible, partly for the sake of completion and partly because the results were not stated earlier in exactly the same form as needed here. In Appendix A, we give a short proof of the fact that the Gaussian upper bound for the heat kernel holds for arbitrary smooth, compact, connected manifolds. We could not find a reference for this fact. In Appendix B, we state the main probability theory estimates that are used ubiquitously in the paper.

2. Motivating Examples

In this paper, we aim to develop a unifying theory applicable to a variety of kernels and domains. In this section, we describe some examples which have motivated the abstract theory to be presented in the rest of the paper. In the following examples, q ≥ 1 is a fixed integer.

Example 2.1. Let 𝕋q = ℝq/(2πℤq) be the q-dimensional torus. The distance between points x = (x1, ⋯, xq) and y = (y1, ⋯, yq) is defined by max1kq|(xk-yk) mod 2π|. The trigonometric monomial system {exp(ik · ○) : k ∈ ℤq} is orthonormal with respect to the Lebesgue measure normalized to be a probability measure on 𝕋q. We recall that the periodization of a function f :ℝq → ℝ is defined formally by f(x)=kqf(x+2kπ). When f is integrable then the Fourier transform of f at k ∈ ℤq is the same as the k-th Fourier coefficient of f. This Fourier coefficient will be denoted by f^(k)=f^(k). A periodic basis function network has the form xk=1nakG(x-xk), where G is a periodic function called the activation function. The examples of the activation functions in which we are interested in this paper include:

1. Periodization of the Gaussian.

G(x)=kqexp(-|x-2πk|22/2) ,G^(k)=(2π)q/2exp(-|k|22/2).

2. Periodization of the Hardy multiquadric1.

G(x)=kq(α2+|x-2πk|22)-1 ,G^(k)=π(q+1)/2Γ(q+12)αexp(-α|k|2),      α>0.                 

Example 2.2. If x=(x1,,xq)[-1,1]q, there exists a unique θ=(θ1,,θq)[0,π]q such that x = cos(θ). Therefore, [−1, 1]q can be thought of as a quotient space of 𝕋q where all points of the form εθ = {(ε1θ1, ⋯, εqθq)}, ε=(ε1,,εq){-1,1}q, are identified. Any function on [−1, 1]q can then by lifted to 𝕋q, and this lifting preserves all the smoothness properties of the function. Our set-up below includes [−1, 1]q, where the distance and the measure are defined via the mapping to the torus, and suitably weighted Jacobi polynomials are considered to be the orthonormalized family of functions. In particular, if G is a periodic activation function, x = cos(θ), y = cos(ϕ), then the function G(x,y)=ε{-1,1}qG(ε(θ-ϕ)) is an activation function on [−1, 1]q with an expansion k+qbkTk(x)Tk(y), where Tk's are tensor product, orthonormalized, Chebyshev polynomials. Furthermore, bk's have the same asymptotic behavior as G^(k)'s.      □

Example 2.3. Let 𝕊q={xq+1:|x|2=1} be the unit sphere in ℝq+1. The dimension of 𝕊q as a manifold is q. We assume the geodesic distance ρ on 𝕊q and the volume measure μ* are normalized to be a probability measure. We refer the reader to Müller [33] for details, describing here only the essentials to get a “what-it-is-all-about” introduction. The set of (equivalence classes) of restrictions of polynomials in q + 1 variables with total degree < n to 𝕊q are called spherical polynomials of degree < n. The set of restrictions of homogeneous harmonic polynomials of degree ℓ to 𝕊q is denoted by ℍ with dimension d. There is an orthonormal basis {Y,k}k=1d for each ℍ that satisfies an addition formula

k =1dY,k(x)Y,k(y)=ωq-1-1p(1)p(x·y),

where ωq−1 is the volume of 𝕊q−1, and p is the degree ℓ ultraspherical polynomial so that the family {p} is orthonormalized with respect to the weight (1 − x2)(q−2)/2 on (−1, 1). A zonal function on the sphere has the form xG(x · y), where the activation function G:[−1, 1] → ℝ has a formal expansion of the form

G(t)=ωq-1-1=0G^()p(1)p(t).

In particular, formally, G(x·y)==0G^()k=1dY,k(x)Y,k(y). The examples of the activation functions in which we are interested in this paper include

1.

Gr(x):=(1-2rx+r2)-(q-1)/2,      x[-1,1], 0<r<1.

It is shown in Müller [33, Lemma 18] that

Gr^()=(q-1)ωq2+q-1r,      =1,2,.

2.

GrE(x):=exp(rx),      x[-1,1], r>0.

It is shown in Mhaskar et al. [34, Lemma 5.1] that

GrE^()=ωqr2Γ(+q+12)(1+O(1/)).

3. The smooth ReLU function G(t)=log(1+et)=t++O(e-|t|). The function G has an analytic extension to the strip ℝ + (−π, π)i of the complex plane. So, Bernstein approximation theorem [35, Theorem 5.4.2] can be used to show that

lim sup|G^()|1/=1/π.                 

Example 2.4. Let 𝕏 be a smooth, compact, connected Riemannian manifold (without boundary), ρ be the geodesic distance on 𝕏, μ* be the Riemannian volume measure normalized to be a probability measure, {λk} be the sequence of eigenvalues of the (negative) Laplace-Beltrami operator on 𝕏, and ϕk be the eigenfunction corresponding to the eigenvalue λk; in particular, ϕ0 ≡ 1. This example, of course, includes Examples 2.1–2.3. An eignet in this context has the form xk=1nakG(x,xk), where the activation function G has a formal expansion of the form G(x,y)=kb(λk)ϕk(x)ϕk(y). One interesting example is the heat kernel:

k=0exp(-λk2t)ϕk(x)ϕk(y).

     □

Example 2.5. Let 𝕏 = ℝq, ρ be the ℓ norm on 𝕏, μ* be the Lebesgue measure. For any multi-integer k+q, the (multivariate) Hermite function ϕk is defined via the generating function

k+qϕk(x)2|k|1k!wk=π-1/4exp(-12|x-w|22+|w|22/4),wq.    (2.1)

The system {ϕk} is orthonormal with respect to μ*, and satisfies

Δϕk(x)-|x|22ϕk(x)=-(2|k|1+1)ϕk(x),      xq,

where Δ is the Laplacian operator. As a consequence of the so called Mehler identity, one obtains [36] that

exp(-|x-32y|22)exp(-|y|22/4)=(32π)-q/2k+dϕk(x)ϕk(y)3-|k|1/2.    (2.2)

A Gaussian network is a network of the form xk=1nak(-|x-zk|22), where it is convenient to think of zk=32yk.      □

3. The Set-Up and Definitions

3.1. Data Spaces

Let 𝕏 be a connected, locally compact metric space with metric ρ. For r > 0, x ∈ 𝕏, we denote

𝔹(x,r)={y𝕏:ρ(x,y)r}, Δ(x,r)=closure(𝕏\𝔹(x,r)).

If K ⊆ 𝕏 and x ∈ 𝕏, we write as usual ρ(K,x)=infyKρ(y,x). It is convenient to denote the set

{x ∈ 𝕏; ρ(K, x) ≤ r} by 𝔹(K, r). The diameter of K is defined by diam(K)=supx,yKρ(x,y).

For a Borel measure ν on 𝕏 (signed or positive), we denote by |ν| its total variation measure defined for Borel subsets K ⊂ 𝕏 by

|ν|(K)=supUUU|ν(U)|,

where the supremum is over all countable measurable partitions U of K. In the sequel, the term measure will mean a signed or positive, complete, sigma-finite, Borel measure. Terms, such as measurable will mean Borel measurable. If f:𝕏 → ℝ is measurable, K ⊂ 𝕏 is measurable, and ν is a measure, we define2

fp,ν,K={{K|f(x)|pd|ν|(x)}1/p,if 1p<,|ν|-ess supxK|f(x)|,if p=.

The symbol Lp(ν, K) denotes the set of all measurable functions f for which ‖fp, ν, K < ∞, with the usual convention that two functions are considered equal if they are equal |ν|-almost everywhere on K. The set C0(K) denotes the set of all uniformly continuous functions on K vanishing at ∞. In the case when K = 𝕏, we will omit the mention of K, unless it is necessary to mention it to avoid confusion.

We fix a non-decreasing sequence {λk}k=0, with λ0 = 0 and λk ↑ ∞ as k → ∞. We also fix a positive sigma-finite Borel measure μ* on 𝕏, and a system of orthonormal functions {ϕk}k=0L1(μ*,𝕏)C0(𝕏), such that ϕ0(x) > 0 for all x ∈ 𝕏. We define

Πn=span {ϕk:λk<n},      n>0.    (3.1)

It is convenient to write Πn = {0} if n ≤ 0 and Π = ⋃n>0Πn. It will be assumed in the sequel that Π is dense in C0 (and, thus, in every Lp, 1 ≤ p < ∞). We will often refer to the elements of Π as diffusion polynomials in keeping with [13].

Definition 3.1. We will say that a sequence {an} (or a function F :[0, ∞) → ℝ) is fast decreasing if limnnSan=0 (respectively, limxxSf(x)=0) for every S > 0. A sequence {an} has polynomial growth if there exist c1, c2 > 0 such that |an|c1nc2 for all n ≥ 1, and similarly for functions.

Definition 3.2. The space 𝕏 (more precisely, the tuple Ξ=(𝕏,ρ,μ*,{λk}k=0,{ϕk}k=0)) is called a data space if each of the following conditions is satisfied.

1. For each x ∈ 𝕏, r > 0, 𝔹(x, r) is compact.

2. (Ball measure condition) There exist q ≥ 1 and κ > 0 with the following property: for each x ∈ 𝕏, r > 0,

μ*(𝔹(x,r))=μ*({y𝕏:ρ(x,y)<r})κrq.    (3.2)

(In particular, μ*({y ∈ 𝕏: ρ(x, y) = r}) = 0.)

3. (Gaussian upper bound) There exist κ1, κ2 > 0 such that for all x, y ∈ 𝕏, 0 < t ≤ 1,

|k=0exp(-λk2t)ϕk(x)ϕk(y)|κ1t-q/2exp(-κ2ρ(x,y)2t).    (3.3)

4. (Essential compactness) For every n ≥ 1, there exists a compact set 𝕂n ⊂ 𝕏 such that the function ndiam(𝕂n) has polynomial growth, while the functions

nsupx𝕏\𝕂nλk<nϕk(x)2

and

n𝕏\𝕂n(λk<nϕk(x)2)1/2dμ*(x)

are both fast decreasing. (Necessarily, nμ*(𝕂n) has polynomial growth as well.)

Remark 3.1. We assume without loss of generality that 𝕂n ⊆ 𝕂m for all n < m and that μ*(𝕂1)>0.      □

Remark 3.2. If 𝕏 is compact, then the first condition as well as the essential compactness condition are automatically satisfied. We may take 𝕂n = 𝕏 for all n. In this case, we will assume tacitly that μ* is a probability measure, and ϕ0 ≡ 1.      □

Example 3.1. (Manifold case) This example points out that our notion of data space generalizes the set-ups in Examples 2.1–2.4. Let 𝕏 be a smooth, compact, connected Riemannian manifold (without boundary), ρ be the geodesic distance on 𝕏, μ* be the Riemannian volume measure normalized to be a probability measure, {λk} be the sequence of eigenvalues of the (negative) Laplace-Beltrami operator on 𝕏, and ϕk be the eigenfunction corresponding to the eigenvalue λk; in particular, ϕ0 ≡ 1. If the condition (3.2) is satisfied, then (𝕏,ρ,μ*,{λk}k=0,{ϕk}k=0) is a data space. Of course, the assumption of essential compactness is satisfied trivially (see Appendix B for the Gaussian upper bound).      □

Example 3.2. (Hermite case) We illustrate how Example 2.5 is included in our definition of a data space. Accordingly, we assume the set-up as in that example. For a > 0, let ϕk,a(x)=a-q/2ϕk(ax). With λk=|k|1, the system Ξa=(q,ρ,μ*,{λk},{ϕk,a}) is a data space. When a = 1, we will omit its mention from the notation in this context. The first two conditions are obvious. The Gaussian upper bound follows by the multivariate Mehler identity [37, Equation 4.27]. The assumption of essential compactness is satisfied with 𝕂n = 𝔹(0, cn) for a suitable constant c (cf. [38, Chapter 6]).      □

In the rest of this paper, we assume 𝕏 to be a data space. Different theorems will require some additional assumptions, two of which we now enumerate. Not every theorem will need all of these; we will state explicitly which theorem uses which assumptions, apart from 𝕏 being a data space.

The first of these deals with the product of two diffusion polynomials. We do not know of any situation where it is not satisfied but are not able to prove it in general.

Definition 3.3. (Product assumption) There exists A* ≥ 1 and a family {Rj,k,nΠA*n}such that for every S > 0,

limnnS(maxλk,λj<n, p=1,ϕkϕj-Rj,k,nϕ0p)=0.    (3.4)

We say that an strong product assumption is satisfied if, instead of (3.4), we have for every n > 0 and P, Q ∈ Πn, PQΠA*n.

Example 3.3. In the setting of Example 3.2, if P, Q ∈ Πn, then PQ = 0 for some R ∈ Π2n. So, the product assumption holds trivially. The strong product assumption does not hold. However, if P, Q ∈ Πn, then PQspan{ϕk,2:λk<n2}. The manifold case is discussed below in Remark 3.3.      □

Remark 3.3. One of the referees of our paper has pointed out three recent references [3941], on the subject of the product assumption. The first two of these deal with the manifold case (Example 3.1). The paper [41] extends the results in Lu et al. [40] to the case when the functions ϕk are eigenfunctions of a more general elliptic operator. Since the results in these two papers are similar qualitatively, we will comment on Lu et al. [40] and Steinerberger [39].

In this remark only, let Kt(x,y)=kexp(-λk2t)ϕk(x)ϕk(y). Let λk, λj < n. In Steinerberger [39], Steinerberger relates EAn(2, ϕkϕj) [see (3.6) below for definition] with

𝕏Kt(,y)(ϕk(y)-ϕk())(ϕj(y)-ϕj())dμ*(y)2,μ*.

While this gives some insight into the product assumption, the results are inconclusive about the product assumption as stated. Also, it is hard to verify whether the conditions mentioned in the paper are satisfied for a given manifold.

In Lu et al. [40], it is shown that for any ϵ, δ > 0, there exists a subspace V of dimension Oδ(ϵ-δn1+δ) such that for all ϕk, ϕj ∈ Πn, infPVϕkϕj-P2,μ*ϵ. The subspace V does not have to be ΠAn for any A. Since the dimension of spankϕj} is O(n2), the result is meaningful only if 0 < δ < 1 and ϵ ≥ n1−1/δ.

In Geller and Pesenson [42, Theorem 6.1], it is shown that the strong product assumption (and, thus, also the product assumption) holds in the manifold case when the manifold is a compact homogeneous manifold. We have extended this theorem in Filbir and Mhaskar [17, Theorem A.1] for the case of eigenfunctions of general elliptic partial differential operators on arbitrary compact, smooth manifolds provided that the coefficient functions in the operator satisfy some technical conditions.      □

In our results in section 4, we will need the following condition, which serves the purpose of gradient in many of our earlier theorems on manifolds.

Definition 3.4. We say that the system Ξ satisfies Bernstein-Lipschitz condition if for every n > 0, there exists Bn > 0 such that

|P(x)-P(y)|Bnρ(x,y)P,      x,y𝕏, PΠn.    (3.5)

Remark 3.4. Both in the manifold case and the Hermite case, Bn = cn for some constant c > 0. A proof in the Hermite case can be found in Mhaskar [43] and in the manifold case in Filbir and Mhaskar [44].      □

3.2. Smoothness Classes

We define next the smoothness classes of interest here.

Definition 3.5. A function w:𝕏 → ℝ will be called a weight function if wϕkC0(𝕏)L1(𝕏) for all k. If w is a weight function, we define

En(w;p,f)=minPΠnf-Pwp,μ*,     n>0,1p, fLp(𝕏).    (3.6)

We will omit the mention of w if w ≡ 1 on 𝕏.

We find it convenient to denote by Xp the space {fLp(𝕏):limnEn(p,f)=0}; i.e., Xp = Lp(𝕏) if 1 ≤ p < ∞ and X=C0(𝕏).

Definition 3.6. Let 1 ≤ p ≤ ∞, γ > 0, and w be a weight function.

(a) For fLp(𝕏), we define

fWγ,p,w=fp,μ*+sup n>0nγEn(w;p,f),    (3.7)

and note that

fWγ,p,w~fp,μ*+supn+2nγE2n(w;p,f).    (3.8)

The space Wγ,p,w comprises all f for whichfWγ,p,w < ∞.

(b) We write Cw=γ>0Wγ,,w. If B is a ball in 𝕏, Cw(B) comprises functions in fCw, which are supported on B.

(c) If x0 ∈ 𝕏, the space Wγ,p,w(x0) comprises functions f such that there exists r > 0 with the property that, for every ϕCw(𝔹(x0,r)), ϕfWγ,p,w.

Remark 3.5. In both the manifold case and the Hermite case, characterizations of the smoothness classes Wγ,p are available in terms of constructive properties of the functions, such as the number of derivatives, estimates on certain moduli of smoothness or K-functionals, etc. In particular, the class C coincides with the class of infinitely differentiable functions vanishing at infinity.      □

We can now state another assumption that will be needed in studying local approximation.

Definition 3.7. (Partition of unity) For every r > 0, there exists a countable family Fr={ψk,r}k=0 of functions in C with the following properties:

1. Each ψk,rFr is supported on 𝔹(xk, r) for some xk ∈ 𝕏.

2. For every ψk,rFr and x ∈ 𝕏, 0 ≤ ψk, r(x) ≤ 1.

3. For every x ∈ 𝕏, there exists a finite subset Fr(x)Fr such that

ψk,rFr(x)ψk,r(y)=1,      y𝔹(x,r).    (3.9)

We note some obvious observations about the partition of unity without the simple proof.

Proposition 3.1. Let r > 0, Fr be a partition of unity.

(a) Necessarily, ψk,rFr(x)ψk,r is supported on 𝔹(x, 3r).

(b) For x ∈ 𝕏, ψk,rFrψk,r(x)=1.

The constant convention In the sequel, c, c1, ⋯ will denote generic positive constants depending only on the fixed quantities under discussion, such as Ξ, q, κ, κ1, κ2, the various smoothness parameters, and the filters to be introduced. Their value may be different at different occurrences, even within a single formula. The notation A ~ B means c1ABc2A.      □

We end this section by defining a kernel that plays a central role in this theory.

Let H :[0, ∞) → ℝ be a compactly supported function. In the sequel, we define

ΦN(H;x,y)=k=0H(λk/N)ϕk(x)ϕk(y),      N>0, x,y𝕏.    (3.10)

If S ≥ 1 is an integer, and H is S times continuously differentiable, we introduce the notation

|H|S:=max0kSmaxx|H(k)(x)|.

The following proposition recalls an important property of these kernels. Proposition 3.2 is proven in Maggioni and Mhaskar [13] and more recently in much greater generality in Mhaskar [45, Theorem 4.3].

Proposition 3.2. Let S > q be an integer, H :ℝ → ℝ be an even, S times continuously differentiable, compactly supported function. Then, for every x, y ∈ 𝕏, N > 0,

|ΦN(H;x,y)|cNq|H|Smax(1,(Nρ(x,y))S).    (3.11)

In the sequel, let h :ℝ → [0, 1] be a fixed, infinitely differentiable, even function, non-increasing on [0, ∞), with h(t) = 1 if |t| ≤ 1/2 and h(t) = 0 if t ≥ 1. If ν is any measure with a bounded total variation on 𝕏, we define

σn(ν,h;f)(x)=𝕏Φn(h;x,y)f(y)dν(y).    (3.12)

We will omit the mention of h in the notations; e.g., write Φn(x, y) = Φn(h; x, y), and the mention of ν if ν = μ*. In particular,

σn(f)(x)=k=0h(λkn)f^(k)ϕk(x),n>0, x𝕏,fL1(𝕏)+C0(𝕏),    (3.13)

where for fL1+C0, we write

f^(k)=𝕏f(y)ϕk(y)dμ*(y)    (3.14)

.

3.3. Measures

In this section, we describe the terminology involving measures.

Definition 3.8. Let d ≥ 0. A measure νM will be called dregular if

|ν|(𝔹(x,r))c(r+d)q,      x𝕏.    (3.15)

The infimum of all constants c that work in (3.15) will be denoted by |||ν|||R, d, and the class of all d-regular measures will be denoted by Rd.

For example, μ* itself is in R0 with |μ*|R,0κ [cf. (3.2)]. More generally, if wC0(𝕏) then the measure wdμ* is R0 with |μ*|R,0κw,μ*.

Definition 3.9. (a) A sequencen} of measures on 𝕏 is called an admissible quadrature measure sequence if the sequence {|νn|(𝕏)}has polynomial growth and

𝕏Pdνn=𝕏Pdμ*,      PΠn, n1.    (3.16)

(b) A sequencen} of measures on 𝕏 is called an admissible product quadrature measure sequence if the sequence {|νn|(𝕏)}has polynomial growth and

𝕏P1P2dνn=𝕏P1P2dμ*,      P1,P2Πn, n1.    (3.17)

(c) By abuse of terminology, we will say that a measure νn is an admissible quadrature measure (respectively, an admissible product quadrature measure) of order n if |νn|c1nc (with constants independent of n) and (3.16) [respectively, (3.17)] holds.

In the case when 𝕏 is compact, a well-known theorem called Tchakaloff's theorem [46, Exercise 2.5.8, p. 100] shows the existence of admissible product quadrature measures (even finitely supported probability measures). However, in order to construct such measures, it is much easier to prove the existence of admissible quadrature measures, as we will do in Theorem 7.1, and then use one of the product assumptions to derive admissible product quadrature measures.

Example 3.4. In the manifold case, let the strong product assumption hold as in Remark 3.3. If n ≥ 1 and C𝕏 is a finite subset satisfying the assumptions of Theorem 7.1, then the theorem asserts the existence of an admissible quadrature measure supported on C. If {νn} is an admissible quadrature measure sequence, then {νA*n} is an admissible product quadrature measure sequence. In particular, there exist finitely supported admissible product quadrature measures of order n for every n ≥ 1.      □

Example 3.5. We consider the Hermite case as in Example 3.2. For every a > 0 and n ≥ 1, Theorem 7.1 applied with the system Ξa yields admissible quadrature measures of order n supported on finite subsets of ℝq (in fact, of [−cn, cn]q for an appropriate c). In particular, an admissible quadrature measure of order n2 for Ξ2 is an admissible product quadrature measure of order n for Ξ = Ξ1.      □

3.4. Eignets

The notion of an eignet defined below is a generalization of the various kernels described in the examples in section 2.

Definition 3.10. A function b:[0, ∞) → (0, ∞) is called a smooth mask if b is non-increasing, and there exists B* = B*(b) ≥ 1 such that the mapping tb(B*t)/b(t) is fast decreasing. A function G:𝕏 × 𝕏 → ℝ is called a smooth kernel if there exists a measurable function W = W(G) :𝕏 → ℝ such that we have a formal expansion (with a smooth mask b)

W(y)G(x,y)=kb(λk)ϕk(x)ϕk(y),      x,y𝕏.    (3.18)

If m ≥ 1 is an integer, an eignet with m neurons is a function of the form xk=1makG(x,yk) for yk ∈ 𝕏.

Example 3.6. In the manifold case, the notion of eignet includes all the examples stated in section 2 with W ≡ 1, except for the example of smooth ReLU function described in Example 2.3. In the Hermite case, (2.2) shows that the kernel G(x,y)=exp(-|x-32y|22) defined on ℝq × ℝq is a smooth kernel, with λk = |k|1, ϕk as in Example 2.5, and b(t)=(32π)-q/23-t/2. The function W here is W(y)=exp(-|y|22/4).      □

Remark 3.6. It is possible to relax the conditions on the mask in Definition 3.10. Firstly, the condition that b should be non-increasing is made only to simplify our proofs. It is not difficult to modify them without this assumption. Secondly, let b0 :[0, ∞) → ℝ satisfy |b0(t)| ≤ b1(t) for a smooth mask b1 as stipulated in that definition. The function b2 = b + 2b1 is then a smooth mask and so is b1. Let Gj(x,y)=k=0bj(λk)ϕk(x)ϕk(y), j = 0, 1, 2. Then G0(x, y) = G2(x, y) − 2G1(x, y). Therefore, all of the results in sections 4 and 8 can be applied once with G2 and once with G1 to obtain a corresponding result for G0 with different constants. For this reason, we will simplify our presentation by assuming the apparently restrictive conditions stipulated in Definition 3.10. In particular, this includes the example of the smooth ReLU network described in Example 2.3.      □

Definition 3.11. Let ν be a measure on 𝕏 (signed or having bounded variation), and GC0(𝕏 × 𝕏). We define

DG,n(x,y)=k=0h(λk/n)b(λk)-1ϕk(x)ϕk(y),n1, x,y𝕏,    (3.19)

and

𝔾n(ν;x,y)=𝕏G(x,z)W(z)DG,n(z,y)dν(z).    (3.20)

Remark 3.7. Typically, we will use an approximate product quadrature measure sequence in place of the measure ν, where each of the measures in the sequence is finitely supported, to construct a sequence of networks. In the case when 𝕏 is compact, Tchakaloff's theorem shows that there exists an approximate product quadrature measure of order m supported on (dim(Πm)+1)2 points. Using this measure in place of ν, one obtains a pre-fabricated eignet 𝔾n(ν) with (dim(Πm)+1)2 neurons. However, this is not an actual construction. In the presence of the product assumption, Theorem 7.1 leads to the pre-fabricated networks 𝔾n in a constructive manner with the number of neurons as stipulated in that theorem.      □

4. Main Results

In this section, we assume the Bernstein-Lipschitz condition (Definition 3.4) in all the theorems. We note that the measure μ* may not be a probability measure. Therefore, we take the help of an auxiliary function f0 to define a probability measure as follows. Let f0C0(𝕏), f0 ≥ 0 for all x ∈ 𝕏, and dν*=f0dμ* be a probability measure. Necessarily, ν* is 0-regular, and k:|ν*|R,0kf0,μ*. We assume noisy data of the form (y, ϵ), with a joint probability distribution τ defined for Borel subsets of 𝕏 × Ω for some measure space Ω, and with ν* being the marginal distribution of y with respect to τ. Let F(y,ϵ) be a random variable following the law τ, and denote

f(y)=𝔼τ(F(y,ϵ)|y).    (4.1)

It is easy to verify using Fubini's theorem that if F is integrable with respect to τ, then, for any x ∈ 𝕏,

𝔼τ(F(y,ϵ)Φn(x,y))=σn(ν*;f)(x):=𝕏f(y)Φn(x,y)dν*(y).    (4.2)

Let Y be a random sample from τ, and {νn} be an admissible product quadrature sequence in the sense of Definition 3.9. We define [cf. (3.20)]

Gn(Y;F)(x)=Gn(νB*n,Y;F)(x)=1|Y|(y,ϵ)YF(y,ϵ)𝔾n(νB*n;x,y),     x𝕏, n=1,2,,    (4.3)

where B* is as in Definition 3.10.

Remark 4.1. We note that the networks 𝔾n are prefabricated independently of the data. The network Gn therefore has only |Y| terms depending upon the data.      □

Our first theorem describes local function recovery using local sampling. We may interpret it in the spirit of distributed learning as in Chui et al. [24] and Lin et al. [26], where we are taking a linear combination of pre-fabricated networks 𝔾n using the function values themselves as the coefficients. The networks 𝔾n have essentially the same localization property as the kernels Φn (cf. Theorem 8.2).

Theorem 4.1. Let x0 ∈ 𝕏 and r > 0. We assume the partition of unity and find a function ψ ∈ C supported on 𝔹(x0, 3r), which is equal to 1 on 𝔹(x0, r), 𝔪=𝕏ψdμ*, and let f0 = ψ/𝔪, dν*=f0dμ*. We assume the rest of the set-up as described. If f0fWγ, ∞, then for 0 < δ < 1, and |Y|cnq+2γrqlog(nBn/δ),

Probτ({m|Y|(y,ϵ)YF(y,ϵ)𝔾n(νB*n;°,y)                       f,μ*,𝔹(x0,r)c3nγ})                        δ.    (4.4)

Remark 4.2. If {y1, ⋯, yM} is a random sample from some probability measure supported on 𝕏, s==1Mf0(y), and we construct a sub-sample using the distribution that associates the mass f0(yj)/s with each yj, then the probability of selecting points outside of the support of f0 is 0. This leads to a sub-sample Y. If Mcnq+2γlog(nBn/δ), then the Chernoff bound, Proposition B.1(b), can be used to show that |Y| is large, as stipulated in Theorem 4.1.               □

Next, we state two inverse theorems. Our first theorem obtains accuracy on the estimation of the density f0 using eignets instead of positive kernels.

Theorem 4.2. With the set-up as in Theorem 8.3, let γ > 0, f0Wγ, ∞, and

|Y|f0,μ*nq+2γlog(nBnδ).

Then, with F1,

Probτ({1|Y|(y,ϵ)Y𝔾n(νB*n;,y)-f0c3n-γ})δ.    (4.5)

Remark 4.3. Unlike density estimation using positive kernels, there is no inherent limit on the accuracy predicted by (4.5) on the estimation of f0.      □

The following theorem gives a complete characterization of the local smoothness classes using eignets. In particular, Part (b) of the following theorem gives a solution to the inverse problem of determining what smoothness class the target function belongs to near each point of 𝕏. In theory, this leads to a data-based detection of singularities and sparsity analogous to what is assumed in Chui et al. [24] but in a much more general setting.

Theorem 4.3. Let f0C0(𝕏), f0(x) ≥ 0 for all x ∈ 𝕏, and dν*=f0dμ* be a probability measure, τ, F, and let f be as described above. We assume the partition of unity and the product assumption. Let Sq + 2, 0 < γ ≤ S, x0 ∈ 𝕏, 0 < δ < 1. For each j ≥ 0, suppose that Yj is a random sample from τ with |Yj|2c12j(q+2S)|ν*|R,0log(c22jB2j/δ). Then with τ-probability ≥ 1 − δ,

(a) If f0fWγ,∞(x0) then there exists a ball 𝔹 centered at x0 such that

sup j12jγG2j(Yj;F)-G2j-1(Yj;F),μ*,𝔹<.    (4.6)

(b) If there exists a ball 𝔹 centered at x0 for which (4.6) holds, then f0fWγ, ∞,ϕ0(x0).

5. Preparatory Results

We prove a lower bound on μ*(𝔹(x, r)) for x ∈ 𝕏 and 0 < r ≤ 1 (cf. [47]).

Proposition 5.1. We have

μ*(𝔹(x,r))crq,      0<r1, x𝕏.    (5.1)

In order to prove the proposition, we recall a lemma, proved in Mhaskar [14, Proposition 5.1].

Lemma 5.1. Let νRd, N > 0. If g1:[0, ∞) → [0, ∞) is a non-increasing function, then, for any N > 0, r > 0, x ∈ 𝕏,

                    NqΔ(x,r)g1(Nρ(x,y))d|ν|(y)c2q(1+(d/r)q)q1-2-q|ν|R,drN/2g1(u)uq-1du.    (5.2)

PROOF OF PROPOSITION 5.1.

Let x ∈ 𝕏, r > 0 be fixed in this proof, although the constants will not depend upon these. In this proof, we write

Kt(x,y)=k=0exp(-λk2t)ϕk(x)ϕk(y).

The Gaussian upper bound (3.3) shows that for t > 0,

Δ(x,r)|Kt(x,y)|dμ*(y)κ1t-q/2Δ(x,r)exp(-κ2ρ(x,y)2/t)dμ*(y).    (5.3)

Using Lemma 5.1 with d = 0, = *, g1(u)=exp(-u2), N=κ2/t, we obtain for r2/t(q-2)/κ2:

Δ(x,r)|Kt(x,y)|dμ*(y)cNr/2uq-1exp(-u2)du=c1(Nr/2)2uq/2-1e-uduc2(r2/t)(q-2)/2exp(-κ2r2/(4t)).    (5.4)

Therefore, denoting in this proof only that κ0 = ‖ϕ0, we obtain that

1=𝕏Kt(x,y)ϕ0(y)dμ*(y)κ0𝕏|Kt(x,y)|dμ*(y)κ0κ2tq/2μ*(𝔹(x,r))+c3(r2/t)(q2)/2exp(κ2r2/(4t).    (5.5)

We now choose t ~ r2 so that c3(r2/t)(q-2)/2exp(-κ3r2/(4t))1/2 to obtain (5.1) for rc4. The estimate is clear for c4 < r ≤ 1.      □

Next, we prove some results about the system {ϕk}.

Lemma 5.2. For n ≥ 1, we have

λk<nϕk(x)2cnq,      x𝕏.    (5.6)

and

dim(Πn)cnqμ*(𝕂n).    (5.7)

In particular, the function ndimn) has polynomial growth.

PROOF. The Gaussian upper bound with x = y implies that

k=0exp(-λk2t)ϕk(x)2ct-q/2,      0<t1, x𝕏.

The estimate (5.6) follows from a Tauberian theorem [44, Proposition 4.1]. The essential compactness now shows that for any R > 0,

𝕏\Knλk<nϕk(x)2dμ*(x){supx𝕏\Knλk<nϕk(x)2}1/2𝕏\Kn(λk<nϕk(x)2)1/2dμ*(x)cn-R.

In particular,

dim(Πn)=𝕏λk<nϕk(x)2dμ*(x)𝕂nλk<nϕk(x)2dμ*(x)+cn-Rcnqμ*(𝕂n).

            □

Next, we prove some properties of the operators σn and diffusion polynomials. The following proposition follows easily from Lemma 5.1 and Proposition 3.2. (cf. [14, 48]).

Proposition 5.2. Let S, H be as in Proposition 3.2, d > 0, νRd, and x ∈ 𝕏.

(a) If r ≥ 1/N, then

Δ(x,r)|ΦN(H;x,y)|d|ν|(y)c(1+(dN)q)(rN)-S+q|ν|R,d|H|S.    (5.8)

(b) We have

𝕏|ΦN(H;x,y)|d|ν|(y)c(1+(dN)q)|ν|R,d|H|S,    (5.9)
ΦN(H;x,)ν;𝕏,pcNq/p(1+(dN)q)1/p|ν|R,d1/p|H|S,    (5.10)

and

𝕏|ΦN(H;,y)|d|ν|(y)pc(1+(dN)q)1/p|ν|R,d1/p(|ν|(𝕏))1/p|H|S.    (5.11)

The following lemma is well-known; a proof is given in Mhaskar [15, Lemma 5.3].

Lemma 5.3. Let1, ν), (Ω2, τ) be sigma–finite measure spaces, Ψ : Ω1 × Ω2 → ℝ be ν × τ–integrable,

M:=ν-ess supxΩ1Ω2|Ψ(x,y)|dτ(y)<,M1:=τ-ess supyΩ2Ω1|Ψ(x,y)|dν(x)<,    (5.12)

and formally, for τ–measurable functions f : Ω2 → ℝ,

T(f,x):=Ω2f(y)Ψ(x,y)dτ(y),      xΩ1.

Let 1 ≤ p ≤ ∞. If fLp(τ;Ω2) then T(f, x) is defined for ν–almost all x ∈ Ω1, and

Tfν;Ω1,pM11/pM1/pfτ;Ω2,p,      fLp(Ω2,τ).    (5.13)

Theorem 5.1. Let n > 0. If P ∈ Πn/2, then σn(P) = P. Also, for any p with 1 ≤ p ≤ ∞,

σn(f)pcfp,      fLp.    (5.14)

If 1 ≤ p ≤ ∞, and fLp (𝕏), then

En(p,f)f-σn(f)p,μ*cEn/2(p,f).    (5.15)

PROOF. The fact that σn(P) = P for all P ∈ Πn/2 is verified easily using the fact that h(t) = 1 for 0 ≤ t ≤ 1/2. Using (5.9) with μ* in place of |ν| and 0 in place of d, we see that

supx𝕏𝕏|Φn(x,y)|dμ*(y)c.

The estimate (5.14) follows using Lemma 5.3. The estimate (5.15) is now routine to prove.      □

Proposition 5.3. For n ≥ 1, P ∈ Πn, 1 ≤ p ≤ ∞, and S > 0, we have

Pp,μ*,𝕏\𝕂2nc(S)n-SPp,μ*,𝕏.    (5.16)

PROOF. In this proof, all constants will depend upon S. Using Schwarz inequality and essential compactness, it is easy to deduce that

supx𝕏\𝕂2n𝕏|Φ2n(x,y)|dμ*(y)c1n-S,supy𝕏𝕏\𝕂2n|Φ2n(x,y)|dμ*(x)c1n-S.    (5.17)

Therefore, a use of Lemma 5.3 shows that

σ2n(f)p,μ*,𝕏\𝕂2ncn-Sfp.

We use P in place of f to obtain (5.16).      □

Proposition 5.4. Let n ≥ 1, P ∈ Πn, 0 < p < r ≤ ∞. Then

Prcnq(1/p-1/r)Pp,      Ppcμ*(𝕂2n)1/p-1/rPr.    (5.18)

PROOF. The first part of (5.18) is proved in Mhaskar [15, Lemma 5.4]. In that paper, the measure μ* is assumed to be a probability measure, but this assumption was not used in this proof. The second estimate follows easily from Proposition 5.3.           □

Lemma 5.4. Let R, n > 0, P1, P2 ∈ Πn, 1 ≤ p, r, s ≤ ∞. If the product assumption holds, then

EA*n(ϕ0;p,P1P2)cn-RP1rP2s.    (5.19)

PROOF. In view of essential compactness, Proposition 5.4 implies that for any P ∈ Πn, 1 ≤ r ≤ ∞, P2c1ncPr. Therefore, using Schwarz inequality, Parseval identity, and Lemma 5.2, we conclude that

k|P^(k)| (dim(Πn))1/2P2c1ncPr.    (5.20)

Now, the product assumption implies that for p = 1, ∞, and λk, λj < n, there exists Rj,k,nΠA*n such that for any R > 0,

ϕkϕj-Rj,k,nϕ0pcn-R-2c,    (5.21)

where c is the constant appearing in (5.20). The convexity inequality

fpf1/pf11/p

shows that (5.21) is valid for all p, 1 ≤ p ≤ ∞. So, using (5.20), we conclude that

P1P2-k,jP1^(k)P2^(k)Rj,k,nϕ0pcn-R-2c(k|P1^(k)|)(k|P2^(k)|)cn-RP1rP2s.

     □

6. Local Approximation by Diffusion Polynomials

In the sequel, we write g(t) = h(t) − h(2t), and

τj(f)={σ1(f),if j=0,σ2j(f)σ2j1(f),if j=1,2,.    (6.1)

We note that

τj(f)(x)=σ2j(μ*,g;f)(x)=𝕏f(y)Φ2j(g;x,y)dμ*(y),j=1,2,.    (6.2)

It is clear from Theorem 5.1 that for any p, 1 ≤ p ≤ ∞,

f=j=0τj(f),      fXp,    (6.3)

with convergence in the sense of Lp.

Theorem 6.1. Let 1 ≤ p ≤ ∞, γ > 0, fXp, x0 ∈ 𝕏. We assume the partition of unity and the product assumption.

(a) If 𝔹 is a ball centered at x0, then

sup n02nγf-σ2n(f)p,μ*,𝔹~sup j02jγτj(f)p,μ*,𝔹.    (6.4)

(b) If there exists a ball B centered at x0 such that

sup n02nγf-σ2n(f)p,μ*,𝔹~sup j02jγτj(f)p,μ*,𝔹<,    (6.5)

then fWγ, p,ϕ0(x0).

(c) If fWγ, p(x0), then there exists a ball 𝔹 centered at x0 such that (6.5) holds.

Remark 6.1. In the manifold case (Example 3.1), ϕ0 ≡ 1. So, the statements (b) and (c) in Theorem 6.1 provide necessary and sufficient conditions for fWγ, p(x0) in terms of the local rate of convergence of the globally defined operator σn(f) and the growth of the local norms of the operators τj, respectively In the Hermite case (Example 3.2), it is shown in Mhaskar [49] that fWγ, p,ϕ0 if and only if fWγ, p. Therefore, the statements (b) and (c) in Theorem 6.1 provide similar necessary and sufficient conditions for fWγ, p(x0) in this case as well.      □

The proof of Theorem 6.1 is routine, but we sketch a proof for the sake of completeness.

PROOF OF THEOREM 6.1.

Part (a) is easy to prove using the definitions.

In the rest of this proof, we fix S > γ + q + 2. To prove part (b), let ϕ ∈ C be supported on 𝔹. Then there exists {RnΠ2n}n=0 such that

ϕ-Rnc(ϕ)2-nS.    (6.6)

Further, Lemma 5.4 yields a sequence {QnΠA*2n} such that

Rnσ2n(f)-ϕ0Qnpc2-nSRnσ2n(f)pc(ϕ)2-nSfp.    (6.7)

Hence,

EA*2n(ϕ0;p,fϕ)fϕ-ϕ0Qnpc(ϕ)2-nSfp+fϕ-σ2n(f)Rnpc(ϕ)2-nSfp+(f-σ2n(f))ϕp+σ2n(f)(ϕ-Rn)pc(ϕ){2-nSfp+f-σ2n(f)p,μ*,𝔹+σ2n(f)pϕ-Rn}c(ϕ)2-nSfp+c(ϕ,f)(A*2-n)γ.

Thus, Wγ, p,ϕ0 for every ϕ ∈ C supported on 𝔹, and part (b) is proved.

To prove part (c), we observe that there exists r > 0 such that for any ϕC(𝔹(x0,6r)), Wγ, p. Using partition of unity [cf. Proposition 3.1(a)], we find ψC(𝔹(x0,6r)) such that ψ(x) = 1 for all x ∈ 𝔹(x0, 2r), and we let 𝔹 = 𝔹(x0, r). In view of Proposition 3.2, |Φ2n(x,y)|c(r)2-n(s-q) for all xB and y ∈ 𝕏\𝔹(x0, 2r). Hence,

σ2n((1-ψ)f)p|𝕏|(1-ψ(y))f(y)Φ2n(,y)|dμ*(y)p                                    =|𝕏\𝔹(x0,2r)|(1-ψ(y))f(y)Φ2n(,y)|dμ*(y)p                                    c(ψ,r)2-n(S-q)fp.    (6.8)

Recalling that ψ(x) = 1 for xB and 𝕊 − q ≥ γ + 2, we deduce that

f-σ2n(f)p,μ*,𝔹=ψf-σ2n(f)p,μ*,𝔹     ψf-σ2n(ψf)p,μ*,𝔹+σ2n((1-ψ)f)p     cE2n(ψf)+c(ψ,r)2-n(S-q)fp     c(r,ψ,f)2-nγ.

This proves part (c).      □

Let {Ψn: 𝕏 × 𝕏 → 𝕏} be a family of kernels (not necessarily symmetric). With a slight abuse of notation, we define when possible, for any measure ν with bounded total variation on 𝕏,

σ(ν,Ψn;f)(x)=𝕏f(y)Ψn(x,y)dν(y),x𝕏, fL1(𝕏)+C0(𝕏),    (6.9)

and

τj(ν,{Ψn};f)={σ(ν,Ψ1;f),if j=0,σ(ν,Ψ2j;f)-σ(ν,Ψ2j-1;f),if j=1,2,.    (6.10)

As usual, we will omit the mention of ν when ν = μ*.

Corollary 6.1. Let the assumptions of Theorem 6.1 hold, andn:𝕏 × 𝕏 → 𝕏} be a sequence of kernels (not necessarily symmetric) with the property that both of the following functions of n are decreasing rapidly.

supx𝕏𝕏|Ψn(x,y)-Φn(x,y)|dμ*(y),supy𝕏𝕏|Ψn(x,y)-Φn(x,y)|dμ*(x).    (6.11)

(a) If B is a ball centered at x0, then

sup n02nγf-σ(Ψ2n;f)p,μ*,𝔹~sup j02jγτj({Ψn};f)p,μ*,𝔹.    (6.12)

(b) If there exists a ball B centered at x0 such that

sup n02nγf-σ(Ψ2n;f)p,μ*,𝔹~sup j02jγτj({Ψn};f)p,μ*,𝔹<,    (6.13)

then fWγ, p,ϕ0(x0).

(c) If fWγ, p(x0), then there exists a ball B centered at x0 such that (6.13) holds.

PROOF. In view of Lemma 5.3, the assumption about the functions in (6.11) implies that ‖σ(Ψn; f) − σn(f)‖p is decreasing rapidly.      □

7. Quadrature Formula

The purpose of this section is to prove the existence of admissible quadrature measures in the general set-up as in this paper. The ideas are mostly developed already in our earlier works [17, 36, 43, 44, 50, 51] but always require an estimate on the gradient of diffusion polynomials. Here, we use the Bernstein-Lipschitz condition (Definition 3.4) instead.

If C𝕂𝕏, we denote

δ(K,C)=supxKinfyCρ(x,y),      η(C)=infx,yC,xyρ(x,y).    (7.1)

If K is compact, ϵ > 0, a subset CK is ϵ-distinguishable if ρ(x, y) ≥ ϵ for every x,yC, xy. The cardinality the maximal ϵ-distinguishable subset of K will be denoted by Hϵ(K).

Remark 7.1. If C1C is a maximal δ(K,C)-distinguishable subset of C, xy, then it is easy to deduce that

δ(K,C)η(C1)2δ(K,C),      δ(K,C)δ(K,C1)2δ(K,C).

In particular, by replacing C by C1, we can always assume that

(1/2)δ(K,C)η(C)2δ(K,C).    (7.2)

Theorem 7.1. We assume the Bernstein-Lipschitz condition. Let n > 0, C1={z1,,zM}𝕂2n be a finite subset, ϵ > 0.

(a) There exists a constant c(ϵ) with the following property: if δ(𝕂2n,C1)c(ϵ)min(1/n,1/𝔹2n), then there exist non-negative numbers Wk satisfying

0Wkcδ(𝕂2n,C1)q,     k=1MWkcμ*(𝔹(𝕂2n,4δ(𝕂2n,C1))),    (7.3)

such that for every P ∈ Πn,

|k=1MWk|P(zk)|-𝕏|P(x)|dμ*(x)|ϵ𝕏|P(x)|dμ*(x).    (7.4)

(b) Let the assumptions of part (a) be satisfied with ϵ = 1/2. There exist real numbers w1, ⋯, wM such that |wk| ≤ 2Wk, k = 1, ⋯, M, in particular,

k=1M|wk|cμ*(𝔹(𝕂2n,4δ(𝕂2n,C1))),    (7.5)

and

k=1MwkP(zk)=𝕏P(x)dμ*(x),      PΠn.    (7.6)

(c) Let δ > 0, C1 be a random sample from the probability law μ𝕂2n* given by

μ𝕂2n*(B)=μ*(B𝕂2n)μ*(𝕂2n),

and ϵn = min(1/n, 1/B2n). If

|C1|cϵn-qμ*(𝕂2n)log(μ*(𝔹(𝕂2n,ϵn))δϵnq),

then the statements (a) and (b) hold with μ𝕂2n*-probability exceeding 1−δ.

In order to prove Theorem 7.1, we first recall the following theorem [52, Theorem 5.1], applied to our context. The statement of Mhaskar [52, Theorem 5.1] seems to require that μ* is a probability measure, but this fact is not required in the proof. It is required only that μ*(𝔹(x, r)) ≥ crq for 0 < r ≤ 1.

Theorem 7.2. Let τ be a positive measure supported on a compact subset of 𝕏, ϵ > 0, A be a maximal ϵ-distinguishable subset of supp(τ), and 𝕂=𝔹(A,2ϵ). There then exists a subset CAsupp(τ) and a partition {Yy}yC of 𝕂 with each of the following properties.

1. (volume property) For yC, Yy ⊆ 𝔹(y, 18ϵ), (κ1/κ2)7-qϵqμ*(Yy)κ2(18ϵ)q, andτ(Yy)(κ1/κ2)19-qminyAτ(𝔹(y,ϵ))>0.

2. (density property) η(C)ϵ, δ(K,C)18ϵ.

3. (intersection property) Let K1 ⊆ K be a compact subset. Then

|{yC:YyK1}|(κ22/κ1)(133)qHϵ(K1).

PROOF OF THEOREM 7.1 (a), (b).

We observe first that it is enough to prove this theorem for sufficiently large values of n. In view of Proposition 5.3, we may choose n large enough so that for any P ∈ Πn,

P1,μ*,𝕏\𝕂2nn-SP1(ϵ/3)P1.    (7.7)

In this proof, we will write δ=δ(𝕂2n,C1) so that 𝕂2n𝔹(C1,δ). We use Theorem 7.2 with τ to be the measure associating the mass 1 with each element of C1, and δ in place of ϵ. If A is a maximal δ-distinguished subset of C1, then we denote in this proof, 𝕂=𝔹(A,2δ) and observe that 𝕂2n𝔹(C1,δ)𝕂𝔹(𝕂2n,4δ). We obtain a partition {Yy} of 𝕂 as in Theorem 7.2. The volume property implies that each Yy contains at least one element of C1. We construct a subset C of C1 by choosing exactly one element of YyC1 for each y. We may then re-index C1 so that, without loss of generality, C={z1,,zN} for some NM, and re-index {Yy} as {Yk}, so that zkYk, k = 1, ⋯, N. To summarize, we have a subset {z1,,zN}C1, and a partition {Yk}k=1N of 𝕂 ⊃ 𝕂2n such that each Yk ⊂ 𝔹(zk, 36δ) and μ*(Yk)~δq. In particular (cf. (7.7)), for any P ∈ Πn,

P1-P1,μ*,K(ϵ/3)P1.    (7.8)

We now let Wk=μ*(Yk), k = 1, ⋯, N, and Wk = 0, k = N + 1, ⋯, M.

The next step is to prove that if δ ≤ c(ϵ) min(1/n, 1/B2n), then

supy𝕏k=1NYk|Φ2n(zk,y)-Φ2n(x,y)|dμ*(x)2ϵ/3.    (7.9)

In this part of the proof, the constants denoted by c1, c2, ⋯ will retain their value until (7.9) is proved. Let y ∈ 𝕏. We let r ≥ δ to be chosen later, and write in this proof, N={k:dist(y,Yk)<r}, L={k:dist(y,Yk)r} and for j = 0, 1, ⋯, Lj={k:2jrdist(y,Yk)<2j+1r}. Since r≥δ, and each Yk ⊂ 𝔹(zk, 36δ), there are at most c1(r/δ)q elements in N. Using the Bernstein-Lipschitz condition and the fact that Φ2n(,y)c2nq, we deduce that

kNYk|Φ2n(zk,y)-Φ2n(x,y)|dμ*(x)c3μ*(Yk)nqB2nδ(r/δ)qc3μ*(B(zk,36δ))nqB2nδ(r/δ)qc4(nr)qB2nδ.    (7.10)

Next, since μ*(Yk)~δq, we see that the number of elements in each Lj is ~ (2jr/δ)q. Using Proposition 3.2 and the fact that S > q, we deduce that if r ≥ 1/n, then

kLYk|Φ2n(zk,y)-Φ2n(x,y)|dμ*(x)=j=0kLjYk|Φ2n(zk,y)-Φ2n(x,y)|dμ*(x)c5nq(nr)-Sj=02-jS{kLjμ*(Yk)}c6(nr)q-S.    (7.11)

Since S > q, we may choose r ~ ϵn such that c6(nr)q-𝕊ϵ/3, and we then require δ ≤ min(r, c7(ϵ)/B2n) so that, in (7.10), c4(nr)q𝔹2nδϵ/3. Then (7.10) and (7.11) lead to (7.9). The proof of (7.9) being completed, we resume the constant convention as usual.

Next, we observe that for any P ∈ Πn,

P(x)=𝕏P(y)Φ2n(x,y)dμ*(y),      x𝕏.

We therefore conclude, using (7.9), that

|k=1Nμ*(Yk)|P(zk)|K|P(x)|dμ*(x)|=|k=1NYk(|P(zk)||P(x)|)dμ*(x)|k=1NYk|P(zk)P(x)|dμ*(x)k=1NYk|𝕏P(y){Φ2n(zk,y)Φ2n(x,y)}dμ*(y)|dμ*(x)𝕏|P(y)|{k=1NYk|Φ2n(zk,y)Φ2n(x,y)|dμ*(x)}dμ*(y)(2ϵ/3)𝕏|P(y)|dμ*(y).

Together with (7.8), this leads to (7.4). From the definition of Wk=μ*(Yk), k = 1, ⋯, N, Wkcδq, and k=1NWk=μ*(𝕂)=μ*(𝔹(𝕂2n,4δ)). Since Wk = 0 if kN + 1, we have now proven (7.3), and we have thus completed the proof of part (a).

Having proved part (a), the proof of part (b) is by now a routine application of the Hahn-Banach theorem [cf. [17, 44, 50, 51]]. We apply part (a) with ϵ = 1/2. Continuing the notation in the proof of part (a), we then have

(1/2)P1k=1NWk|P(zk)|(3/2)P1,      PΠn.    (7.12)

We now equip ℝN with the norm |(a1,,aN)|=k=1NWk|ak| and consider the sampling operator S:ΠnN given by S(P)=(P(z1),,P(zN)), let V be the range of this operator, and define a linear functional x* on V by x*(S(P))=𝕏Pdμ*. The estimate (7.12) shows that the norm of this functional is ≤ 2. The Hahn-Banach theorem yields a norm-preserving extension 𝕏* of x* to ℝN, which, in turn, can be identified with a vector (w1,,wN)N. We set wk = 0 if kN + 1. Formula (7.6) then expresses the fact that X* is an extension of x*. The preservation of norms shows that |wk| ≤ 2Wk if k = 1, ⋯, N, and it is clear that for k = N + 1, ⋯, M, |wk| = 0 = Wk. This completes the proof of part (b).      □

Part (c) of Theorem 7.1 follows immediately from the first two parts and the following lemma.

Lemma 7.1. Let ν* be a probability measure on 𝕏, 𝕂 ⊂ supp*) be a compact set. Let ϵ, δ ∈ (0, 1], C be a maximal ϵ/2-distinguished subset of K, and νϵ=minxCν*(𝔹(x,ϵ/2)). If

Mcνϵ-1log(c1μ*(𝔹(K,ϵ))/(δϵq)),

and {z1, ⋯, zM} be random samples from the probability law ν* then

Probν*({δ(K,{z1,,zM})>ϵ})δ.    (7.13)

PROOF. If δ(K, {z1, ⋯, zM}) > ϵ, then there exists at least one xC such that 𝔹(x, ϵ/2)∩{z1, ⋯, zM} = ∅. For every xC, px=ν*(𝔹(x,ϵ/2))νϵ. We consider the random variable zj to be equal to 1 if zj ∈ 𝔹(x, ϵ/2) and 0 otherwise. Using (B.2) with t = 1, we see that

Prob(𝔹(x,ϵ/2){z1,,zM}=)exp(Mpx/2)exp(cMνϵ).

Since |C|c1μ*(𝔹(K,ϵ))/ϵq,

Prob({δ(K,{z1,,zM})>ϵ})c1μ*(𝔹(K,ϵ))ϵqexp(-cMνϵ).

We set the right-hand side above to δ and solve for M to prove the lemma.      □

8. Proofs of the Results in Section 4

We assume the set-up as in section 4. Our first goal is to prove the following theorem.

Theorem 8.1. Let τ, ν*, F, f be as described section 4. We assume the Bernstein-Lipschitz condition. Let 0 < δ <1. We assume further that |F(y,ϵ)|1 for all y ∈ 𝕏, ϵ ∈ Ω. There exist constants c1, c2, such that if Mc1nq|ν*|R,0log(cnBn/δ), and {(y1, ϵ1), ⋯, (yM, ϵM)} is a random sample from τ, then

Probν*({1Mj=1MF(yj,ϵj)Φn(,yj)σn(ν*;f)c3nq|ν*|R,0log(cnBn|ν*|R,0/δ)M})δ|ν*|R,0.    (8.1)

In order to prove this theorem, we record an observation. The following lemma is an immediate corollary of the Bernstein-Lipschitz condition and Proposition 5.3.

Lemma 8.1. Let the Bernstein-Lipschitz condition be satisfied. Then for every n > 0 and ϵ > 0, there exists a finite set Cn,ϵ𝕂2n such that |Cn,ϵ|cBnqϵ-qμ*(𝔹(𝕂2n,ϵ)) and for any P ∈ Πn,

|maxxCn,ϵ|P(x)|-P|ϵP.    (8.2)

PROOF OF THEOREM 8.1.

Let x ∈ 𝕏. We consider the random variables

Zj=F(yj,ϵj)Φn(x,yj),      j=1,,M.

Then in view of (4.2), 𝔼τ(Zj)=σn(ν*;f)(x) for every j. Further, Proposition 3.2 shows that for each j, |Zj|cnq. Using (5.10) with ν* in place of ν, N = n, d = 0, we see that for each j,

𝕏×Ω|Zj|2dτ𝕏|Φn(x,y)|2dν*(y)cnq|ν*|R,0.

Therefore, Bernstein concentration inequality (B.1) implies that for any t ∈ (0, 1),

Prob({|1Mj=1MF(yj,ϵj)Φn(x,yj)-σn(ν*;f)(x)|t/2})2 exp(-ct2Mnq|ν*|R,0);    (8.3)

We now note that Zj, σn(ν*;f) are all in Πn. Taking a finite set Cn,1/2 as in Lemma 8.1, so that |Cn,1/2|cBnqμ*(𝔹(𝕂2n,1/2))c1ncBnq, we deduce that

maxxCn,1/2|1Mj=1MF(yj,ϵj)Φn(x,yj)-σn(ν*;f)(x)|(1/2)1Mj=1MF(yj,ϵj)Φn(,yj)-σn(ν*;f).

Then (8.3) leads to

Prob({1Mj=1MF(yj,ϵj)Φn(x,yj)-σn(ν*;f)(x)t})c1Bnqnc exp(-c2t2Mnq|ν*|R,0).    (8.4)

We set the right-hand side above equal to δ/|ν*|R,0 and solve for t to obtain (8.1) (with different values of c, c1, c2).      □

Before starting to prove results regarding eignets, we first record the continuity and smoothness of a “smooth kernel” G as defined in Definition 3.10.

Proposition 8.1. If G is a smooth kernel, then (x, y) ↦ W(y)G(x, y) is in C0(𝕏×𝕏)L1(μ*×μ*;𝕏×𝕏). Further, for any p, 1 ≤ p ≤ ∞, and Λ ≥ 1,

supx𝕏W()G(x,)-k:λk<Λb(λk)ϕk(x)ϕk()pc1Λcb(Λ).    (8.5)

In particular, for every x, y ∈ 𝕏, W(○)G(x, ○) and W(y)G(○, y) are in C.

PROOF. Let b be the smooth mask corresponding to G. For any S ≥ 1, b(n) ≤ cnsb(n/B*) ≤ cnsb(0). Thus, b itself is decreasing rapidly. Next, let r > 0. Then remembering that B* ≥ 1 and b is non-increasing, we obtain that for S > 0, b(B*Λu) ≤ cu)sr−1bu), and

Λtrb(t)dt=(B*Λ)r+11/B*urb(B*Λu)ducΛS1/B*uS1b(Λu)ducΛS1uS1b(Λu)ducΛSb(Λ).    (8.6)

In this proof, let s(t)=k:λk<tϕk(x)2, so that s(t) ≤ ctq, t ≥ 1. If Λ ≥ 1, then, integrating by parts, we deduce (remembering that b is non-increasing) that for any x ∈ 𝕏,

k:λkΛb(λk)ϕk(x)2=Λb(t)ds(t)=b(Λ)s(Λ)Λs(t)db(t)c1{Λqb(Λ)Λtqdb(t)}c2{Λqb(Λ)+Λtq1b(t)dt}c3Λqb(Λ).    (8.7)

Using Schwarz inequality, we conclude that

supx,y𝕏k:λkΛb(λk)|ϕk(x)ϕk(y)|c3Λqb(Λ).    (8.8)

In particular, since b is fast decreasing, W(○)G(x, ○) ∈ C0(𝕏) (and in fact, W(y)G(x, y) ∈ C0(𝕏 × 𝕏)) and (8.5) holds with p = ∞. Next, for any j ≥ 0, essential compactness implies that

\int_{\mathbb{X}\setminus\mathbb{K}_{2^{j+1}\Lambda}}\left(\sum_{k:\lambda_k\in[2^j\Lambda,2^{j+1}\Lambda)}b(\lambda_k)\phi_k(y)^2\right)^{1/2}d\mu^*(y)\le c\Lambda^{-S-q}b(2^j\Lambda)^{1/2}.

So, there exists r ≥ q such that

\int_{\mathbb{X}}\left(\sum_{k:\lambda_k\in[2^j\Lambda,2^{j+1}\Lambda)}b(\lambda_k)\phi_k(y)^2\right)^{1/2}d\mu^*(y)\le\int_{\mathbb{K}_{2^{j+1}\Lambda}}\left(\sum_{k:\lambda_k\in[2^j\Lambda,2^{j+1}\Lambda)}b(\lambda_k)\phi_k(y)^2\right)^{1/2}d\mu^*(y)+c\Lambda^{-S-q}b(2^j\Lambda)^{1/2}\le c\left((2^j\Lambda)^qb(2^j\Lambda)\right)^{1/2}\mu^*(\mathbb{K}_{2^{j+1}\Lambda})\le c\left((2^j\Lambda)^rb(2^j\Lambda)\right)^{1/2}.

Hence, for any x ∈ 𝕏,

\int_{\mathbb{X}}\sum_{k:\lambda_k\ge\Lambda}b(\lambda_k)|\phi_k(x)\phi_k(y)|\,d\mu^*(y)=\sum_{j=0}^{\infty}\int_{\mathbb{X}}\sum_{k:\lambda_k\in[2^j\Lambda,2^{j+1}\Lambda)}b(\lambda_k)|\phi_k(x)\phi_k(y)|\,d\mu^*(y)\le\sum_{j=0}^{\infty}\left\{\sum_{k:\lambda_k\in[2^j\Lambda,2^{j+1}\Lambda)}b(\lambda_k)\phi_k(x)^2\right\}^{1/2}\int_{\mathbb{X}}\left(\sum_{k:\lambda_k\in[2^j\Lambda,2^{j+1}\Lambda)}b(\lambda_k)\phi_k(y)^2\right)^{1/2}d\mu^*(y)\le c\sum_{j=0}^{\infty}(2^j\Lambda)^rb(2^j\Lambda)\le c\sum_{j=0}^{\infty}\int_{2^{j-1}\Lambda}^{2^j\Lambda}t^{r-1}b(t)\,dt=c\int_{\Lambda/2}^{\infty}t^{r-1}b(t)\,dt\le c\Lambda^{-S}b(\Lambda).    (8.9)

This shows that

\sup_{x\in\mathbb{X}}\left\|\sum_{k:\lambda_k\ge\Lambda}b(\lambda_k)|\phi_k(x)\phi_k(\circ)|\right\|_1\le c\Lambda^{-S}b(\Lambda).    (8.10)

In view of the convexity inequality,

\|f\|_p\le\|f\|_\infty^{1-1/p}\|f\|_1^{1/p},\qquad 1<p<\infty,

(8.8) and (8.10) lead to

\sup_{x\in\mathbb{X}}\left\|\sum_{k:\lambda_k\ge\Lambda}b(\lambda_k)|\phi_k(x)\phi_k(\circ)|\right\|_p\le c_1\Lambda^cb(\Lambda),\qquad 1\le p\le\infty.

In turn, this implies that W(○)G(x, ○) ∈ Lp for all x ∈ 𝕏, and (8.5) holds.      □
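
The content of Proposition 8.1 is that cutting the eigen-expansion of the kernel off at frequency Λ costs only about Λcb(Λ), which is negligible when the mask b decays fast. A minimal numerical sketch on the circle follows; the choices ϕk(x) = eikx, λk = |k|, W ≡ 1, and the heat-kernel-like mask b(λ) = e−tλ² are ours and serve only as an illustration.

import numpy as np

t = 0.05
b = lambda lam: np.exp(-t * lam ** 2)              # a smooth, fast-decreasing mask

x = np.linspace(0.0, 2 * np.pi, 512, endpoint=False)

def kernel(cutoff):
    # sum_{|k| < cutoff} b(|k|) e^{ikx}: truncated expansion of G(x, 0)
    k = np.arange(-cutoff + 1, cutoff)
    return np.real(np.exp(1j * np.outer(x, k)) @ b(np.abs(k)))

reference = kernel(200)                            # effectively the full kernel G(x, 0)
for Lam in (5, 10, 15, 20):
    err = np.abs(kernel(Lam) - reference).max()    # sup-norm truncation error
    print(Lam, err, b(Lam))                        # err is of the order Lambda^c b(Lambda), cf. (8.5)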

A fundamental fact that relates the kernels Φn and the pre-fabricated eignets 𝔾n's is the following theorem.

Theorem 8.2. Let G be a smooth kernel and {νn} be an admissible product quadrature measure sequence. Then, for 1 ≤ p ≤ ∞,

\left\{\sup_{x\in\mathbb{X}}\left\|\mathbb{G}_n(\nu_{B^*n};x,\circ)-\Phi_n(x,\circ)\right\|_p\right\}

is fast decreasing. In particular, for every S > 0

\left|\mathbb{G}_n(\nu_{B^*n};x,y)\right|\le c(S)\left\{\frac{n^q}{\max(1,(n\rho(x,y))^S)}+n^{-2S}\right\}.    (8.11)

PROOF. Let x ∈ 𝕏. In this proof, we define Pn = Pn,x by Pn(z) = Σk:λk<B*n b(λk)ϕk(x)ϕk(z), z ∈ 𝕏, and note that Pn ∈ ΠB*n. In view of Proposition 8.1, the expansion in (3.18) converges in C0(𝕏 × 𝕏) ∩ L1(μ* × μ*; 𝕏 × 𝕏), so that term-by-term integration can be carried out to deduce that for y ∈ 𝕏,

\int_{\mathbb{X}}G(x,z)W(z)D_{G,n}(z,y)\,d\mu^*(z)=\int_{\mathbb{X}}P_n(z)D_{G,n}(z,y)\,d\mu^*(z)+\sum_{k:\lambda_k\ge B^*n}b(\lambda_k)\phi_k(x)\int_{\mathbb{X}}\phi_k(z)D_{G,n}(z,y)\,d\mu^*(z).

By definition, DG,n(○, y) ∈ Πn, and, hence, each of the summands in the last expression above is equal to 0. Therefore, recalling that h(λk/n) = 0 if λk > n, we obtain

\int_{\mathbb{X}}G(x,z)W(z)D_{G,n}(z,y)\,d\mu^*(z)=\int_{\mathbb{X}}P_n(z)D_{G,n}(z,y)\,d\mu^*(z)=\sum_{k:\lambda_k<B^*n}b(\lambda_k)\phi_k(x)\int_{\mathbb{X}}\phi_k(z)D_{G,n}(z,y)\,d\mu^*(z)=\sum_{k:\lambda_k<B^*n}b(\lambda_k)\phi_k(x)h(\lambda_k/n)b(\lambda_k)^{-1}\phi_k(y)=\sum_{k}h(\lambda_k/n)\phi_k(x)\phi_k(y)=\Phi_n(x,y).    (8.12)

Since DG,n(z, ○) ∈ Πn ⊂ ΠB*n, and νB*n is an admissible product quadrature measure of order B*n, this implies that

\Phi_n(x,y)=\int_{\mathbb{X}}P_n(z)D_{G,n}(z,y)\,d\nu_{B^*n}(z),\qquad y\in\mathbb{X}.    (8.13)

Therefore, for y ∈ 𝕏,

\mathbb{G}_n(\nu_{B^*n};x,y)-\Phi_n(x,y)=\int_{\mathbb{X}}\{W(z)G(x,z)-P_n(z)\}D_{G,n}(z,y)\,d\nu_{B^*n}(z).

Using Proposition 8.1 (applied with Λ = B*n) and the fact that {|νB*n|(𝕏)} has polynomial growth, we deduce that

\left\|\mathbb{G}_n(\nu_{B^*n};x,\circ)-\Phi_n(x,\circ)\right\|_p\le|\nu_{B^*n}|(\mathbb{X})\left\|W(\circ)G(x,\circ)-P_n\right\|_\infty\sup_{z\in\mathbb{X}}\left\|D_{G,n}(z,\circ)\right\|_p\le c_1n^cb(B^*n)\sup_{z\in\mathbb{X}}\left\|D_{G,n}(z,\circ)\right\|_p.    (8.14)

In view of Proposition 5.4 and Proposition 5.2, we see that for any z ∈ 𝕏,

\left\|D_{G,n}(z,\circ)\right\|_p^2\le c_1n^{2c}\left\|D_{G,n}(z,\circ)\right\|_2^2=c_1n^{2c}\sum_{k:\lambda_k<n}\left(h(\lambda_k/n)b(\lambda_k)^{-1}\phi_k(z)\right)^2\le c_1n^{2c}b(n)^{-2}\left\|\Phi_n(z,\circ)\right\|_2^2\le c_1n^{c}b(n)^{-2}\left\|\Phi_n(z,\circ)\right\|_1^2\le c_1n^{c}b(n)^{-2}.

We now conclude from (8.14) that

\left\|\mathbb{G}_n(\nu_{B^*n};x,\circ)-\Phi_n(x,\circ)\right\|_p\le c_1n^c\frac{b(B^*n)}{b(n)}.

Since {b(B*n)/b(n)} is fast decreasing, this completes the proof.        □
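
The proof of Theorem 8.2 is constructive, and the construction can be carried out verbatim in the circle case. The sketch below is our own simplified illustration, with ϕk(x) = eikx, λk = |k|, W ≡ 1, the mask b(λ) = e−λ/2, h the cosine-taper filter used in the earlier sketches, and the quadrature measure taken to be the N-point equispaced rule (which integrates trigonometric polynomials of degree below N exactly); it builds the eignet 𝔾n from samples of the smooth kernel G and checks that it reproduces Φn up to aliasing error governed by the decay of b.

import numpy as np

n = 12                                             # target degree for Phi_n
b = lambda lam: np.exp(-0.5 * lam)                 # fast-decreasing mask defining G

def h(t):
    # low-pass filter: 1 on [0, 1/2], 0 beyond 1, smooth taper in between
    return np.where(t <= 0.5, 1.0, np.where(t >= 1.0, 0.0, np.cos(np.pi * (t - 0.5)) ** 2))

def kernel_matrix(coeff, k, u, v):
    # the matrix [ sum_k coeff(k) e^{ik(u_i - v_j)} ]_{i, j}
    Eu = np.exp(1j * np.outer(u, k))
    Ev = np.exp(1j * np.outer(v, k))
    return np.real(Eu @ (coeff(k) * np.conj(Ev)).T)

x = np.linspace(0.0, 2 * np.pi, 256, endpoint=False)
k_all = np.arange(-200, 201)                       # enough frequencies to evaluate G accurately
k_n = np.arange(-n + 1, n)                         # frequencies entering Phi_n and D_{G,n}

Phi_n = kernel_matrix(lambda k: h(np.abs(k) / n), k_n, x, x)

N = 8 * n                                          # ample equispaced quadrature
z = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
G_xz = kernel_matrix(lambda k: b(np.abs(k)), k_all, x, z)
D_zy = kernel_matrix(lambda k: h(np.abs(k) / n) / b(np.abs(k)), k_n, z, x)
G_n = G_xz @ D_zy / N                              # eignet: (1/N) sum_m G(x, z_m) D_{G,n}(z_m, y)

print(np.abs(G_n - Phi_n).max())                   # essentially zero (up to rounding and aliasing)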

The theorems in section 4 all follow from the following basic theorem.

Theorem 8.3. We assume the strong product assumption and the Bernstein-Lipschitz condition. With the set-up just described, we have

\mathrm{Prob}_{\nu^*}\left(\left\{\left\|\mathbb{G}_n(Y;F)-\sigma_n(f_0f)\right\|_\infty\ge c_3\sqrt{\frac{n^q|\nu^*|_{R,0}\log\left(cnB_n|\nu^*|_{R,0}/\delta\right)}{|Y|}}\right\}\right)\le\frac{\delta}{|\nu^*|_{R,0}}.    (8.15)

In particular, if f is continuous, then

\mathrm{Prob}_{\nu^*}\left(\left\{\left\|\mathbb{G}_n(Y;F)-f_0f\right\|_\infty\ge c_3\left(\sqrt{\frac{n^q|\nu^*|_{R,0}\log\left(cnB_n|\nu^*|_{R,0}/\delta\right)}{|Y|}}+E_{n/2}(\infty,f_0f)\right)\right\}\right)\le\frac{\delta}{|\nu^*|_{R,0}}.    (8.16)

PROOF. Theorems 8.1 and 8.2 together lead to (8.15). Since σn(ν*;f) = σn(f0f), the estimate (8.16) follows from Theorem 5.1 used with p = ∞.      □

PROOF OF THEOREM 4.1.

We observe that, with the choice of f0 as in this theorem, |ν*|R,0 ≤ ‖f0‖∞ ≤ 1/𝔪. Using 𝔪δ in place of δ, we obtain Theorem 4.1 directly from Theorem 8.3 by some simple calculations.      □

PROOF OF THEOREM 4.2.

This follows directly from Theorem 8.3 by choosing F ≡ 1.           □
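
In the same simplified circle set-up as the sketch following the proof of Theorem 8.1, the choice F ≡ 1 turns the randomized sum into an estimator of the unknown density f0, which is the content of Theorem 4.2. The sketch below is again only our own illustration, with a density f0 that happens to be a low-degree trigonometric polynomial so that σn(f0) = f0; it draws samples from f0 by rejection sampling and then recovers f0 using nothing but those samples.

import numpy as np

rng = np.random.default_rng(4)

def h(t):
    # low-pass filter: 1 on [0, 1/2], 0 beyond 1, smooth taper in between
    return np.where(t <= 0.5, 1.0, np.where(t >= 1.0, 0.0, np.cos(np.pi * (t - 0.5)) ** 2))

def Phi(n, x, y):
    k = np.arange(-n, n + 1)
    Ex = np.exp(1j * np.outer(x, k))
    Ey = np.exp(1j * np.outer(y, k))
    return np.real(Ex @ (h(np.abs(k) / n) * np.conj(Ey)).T)

f0 = lambda x: 1.0 + 0.8 * np.cos(x)               # unknown density w.r.t. the normalized Lebesgue measure

# draw M samples from f0 by rejection sampling (max of f0 is 1.8)
M = 20000
y = rng.uniform(0.0, 2 * np.pi, 4 * M)
y = y[rng.uniform(0.0, 1.8, 4 * M) < f0(y)][:M]

n = 8
x = np.linspace(0.0, 2 * np.pi, 200, endpoint=False)
estimate = Phi(n, x, y).mean(axis=1)               # (1/M) sum_j Phi_n(x, y_j): the estimator with F = 1
print(np.abs(estimate - f0(x)).max())              # small, and decreases like M^{-1/2} for fixed n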

PROOF OF THEOREM 4.3.

In view of Theorem 8.3, our assumptions imply that for each j ≥ 0,

\mathrm{Prob}_{\nu^*}\left(\left\{\left\|\mathbb{G}_{2^j}(Y;F)-\sigma_{2^j}(f_0f)\right\|_\infty\ge c2^{-jS}\right\}\right)\le\delta/2^{j+1}.

Consequently, taking a union bound over j (note that Σj≥0 δ/2j+1 ≤ δ), we have with probability ≥ 1 − δ that, for each j ≥ 1,

\left\|\mathbb{G}_{2^j}(Y;F)-\mathbb{G}_{2^{j-1}}(Y_j;F)-\tau_j(f_0f)\right\|_\infty\le c2^{-jS}.

Hence, the theorem follows from Theorem 6.1.      □

Data Availability Statement

The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding author/s.

Author Contributions

The author confirms being the sole contributor of this work and has approved it for publication.

Conflict of Interest

The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Footnotes

1. A Hardy multiquadric is a function of the form x ↦ (α2 + |x|22)−1, x ∈ ℝq. It is one of the oft-used functions in the theory and applications of radial basis function networks. For a survey, see the paper [32] of Hardy.

2. |ν|−ess supx ∈ 𝕂|f(x)| = inf{t : |ν|({x ∈ 𝕂:|f(x)| > t}) = 0}.

References

1. Zhou L, Pan S, Wang J, Vasilakos AV. Machine learning on big data: opportunities and challenges. Neurocomputing. (2017) 237:350–61. doi: 10.1016/j.neucom.2017.01.026

2. Cucker F, Smale S. On the mathematical foundations of learning. Bull Am Math Soc. (2002) 39:1–49. doi: 10.1090/S0273-0979-01-00923-5

3. Cucker F, Zhou DX. Learning Theory: An Approximation Theory Viewpoint, Vol. 24. Cambridge: Cambridge University Press (2007).

4. Girosi F, Poggio T. Networks and the best approximation property. Biol Cybernet. (1990) 63:169–76. doi: 10.1007/BF00195855

5. Chui CK, Donoho DL. Special issue: diffusion maps and wavelets. Appl Comput Harm Anal. (2006) 21:1–2. doi: 10.1016/j.acha.2006.05.005

6. Belkin M, Niyogi P. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Comput. (2003) 15:1373–96. doi: 10.1162/089976603321780317

7. Belkin M, Niyogi P. Towards a theoretical foundation for Laplacian-based manifold methods. J Comput Syst Sci. (2008) 74:1289–308. doi: 10.1016/j.jcss.2007.08.006

8. Belkin M, Niyogi P. Semi-supervised learning on Riemannian manifolds. Mach Learn. (2004) 56:209–39. doi: 10.1023/B:MACH.0000033120.25363.1e

9. Lafon SS. Diffusion maps and geometric harmonics (Ph.D. thesis), Yale University, New Haven, CT, United States (2004).

10. Singer A. From graph to manifold Laplacian: the convergence rate. Appl Comput Harm Anal. (2006) 21:128–34. doi: 10.1016/j.acha.2006.03.004

11. Jones PW, Maggioni M, Schul R. Universal local parametrizations via heat kernels and eigenfunctions of the Laplacian. Ann Acad Sci Fenn Math. (2010) 35:131–74. doi: 10.5186/aasfm.2010.3508

12. Liao W, Maggioni M. Adaptive geometric multiscale approximations for intrinsically low-dimensional data. arXiv. (2016) 1611.01179.

13. Maggioni M, Mhaskar HN. Diffusion polynomial frames on metric measure spaces. Appl Comput Harm Anal. (2008) 24:329–53. doi: 10.1016/j.acha.2007.07.001

14. Mhaskar HN. Eignets for function approximation on manifolds. Appl Comput Harm Anal. (2010) 29:63–87. doi: 10.1016/j.acha.2009.08.006

15. Mhaskar HN. A generalized diffusion frame for parsimonious representation of functions on data defined manifolds. Neural Netw. (2011) 24:345–59. doi: 10.1016/j.neunet.2010.12.007

16. Ehler M, Filbir F, Mhaskar HN. Locally learning biomedical data using diffusion frames. J Comput Biol. (2012) 19:1251–64. doi: 10.1089/cmb.2012.0187

17. Filbir F, Mhaskar HN. Marcinkiewicz-Zygmund measures on manifolds. J Complexity. (2011) 27:568–96. doi: 10.1016/j.jco.2011.03.002

18. Rosasco L, Belkin M, Vito ED. On learning with integral operators. J Mach Learn Res. (2010) 11:905–34.

19. Rudi A, Carratino L, Rosasco L. Falkon: an optimal large scale kernel method. arXiv. (2017) 1705.10958. Available online at: http://jmlr.org/papers/v11/rosasco10a.html.

20. Lu S, Pereverzev SV. Regularization Theory for Ill-Posed Problems. Berlin: de Gruyter (2013).

21. Mhaskar H, Pereverzyev SV, Semenov VY, Semenova EV. Data based construction of kernels for semi-supervised learning with less labels. Front Appl Math Stat. (2019) 5:21. doi: 10.3389/fams.2019.00021

22. Pereverzyev SV, Tkachenko P. Regularization by the linear functional strategy with multiple kernels. Front Appl Math Stat. (2017) 3:1. doi: 10.3389/fams.2017.00001

23. Fefferman C, Mitter S, Narayanan H. Testing the manifold hypothesis. J Am Math Soc. (2016) 29:983–1049. doi: 10.1090/jams/852

24. Chui CK, Lin S-B, Zhang B, Zhou DX. Realization of spatial sparseness by deep relu nets with massive data. arXiv. (2019) 1912.07464.

25. Guo ZC, Lin SB, Zhou DX. Learning theory of distributed spectral algorithms. Inverse Probl. (2017) 33:074009. doi: 10.1088/1361-6420/aa72b2

26. Lin SB, Wang YG, Zhou DX. Distributed filtered hyperinterpolation for noisy data on the sphere. arXiv. (2019) 1910.02434.

27. Mhaskar HN, Poggio T. Deep vs. shallow networks: an approximation theory perspective. Anal Appl. (2016) 14:829–48. doi: 10.1142/S0219530516400042

28. Mhaskar H, Poggio T. Function approximation by deep networks. arXiv. (2019) 1905.12882.

29. Mhaskar HN. On the representation of smooth functions on the sphere using finitely many bits. Appl Comput Harm Anal. (2005) 18:215–33. doi: 10.1016/j.acha.2004.11.004

30. Smale S, Rosasco L, Bouvrie J, Caponnetto A, Poggio T. Mathematics of the neural response. Foundat Comput Math. (2010) 10:67–91. doi: 10.1007/s10208-009-9049-1

31. Mhaskar HN. On the representation of band limited functions using finitely many bits. J Complexity. (2002) 18:449–78. doi: 10.1006/jcom.2001.0637

32. Hardy RL. Theory and applications of the multiquadric-biharmonic method 20 years of discovery 1968–1988. Comput Math Appl. (1990) 19:163–208. doi: 10.1016/0898-1221(90)90272-L

33. Müller A. Spherical Harmonics, Vol. 17. Berlin: Springer (2006).

34. Mhaskar HN, Narcowich FJ, Ward JD. Approximation properties of zonal function networks using scattered data on the sphere. Adv Comput Math. (1999) 11:121–37. doi: 10.1023/A:1018967708053

35. Timan AF. Theory of Approximation of Functions of a Real Variable: International Series of Monographs on Pure and Applied Mathematics, Vol. 34. New York, NY: Dover Publications (2014).

36. Chui CK, Mhaskar HN. A unified method for super-resolution recovery and real exponential-sum separation. Appl Comput Harmon Anal. (2019) 46:431–51. doi: 10.1016/j.acha.2017.12.007

37. Chui CK, Mhaskar HN. A Fourier-invariant method for locating point-masses and computing their attributes. Appl Comput Harmon Anal. (2018) 45:436–52. doi: 10.1016/j.acha.2017.08.010

38. Mhaskar HN. Introduction to the Theory of Weighted Polynomial Approximation, Vol. 56. Singapore: World Scientific Singapore (1996).

39. Steinerberger S. On the spectral resolution of products of laplacian eigenfunctions. arXiv. (2017) 1711.09826.

40. Lu J, Sogge CD, Steinerberger S. Approximating pointwise products of laplacian eigenfunctions. J Funct Anal. (2019) 277:3271–82. doi: 10.1016/j.jfa.2019.05.025

41. Lu J, Steinerberger S. On pointwise products of elliptic eigenfunctions. arXiv. (2018) 1810.01024.

42. Geller D, Pesenson IZ. Band-limited localized Parseval frames and Besov spaces on compact homogeneous manifolds. J Geometr Anal. (2011) 21:334–71. doi: 10.1007/s12220-010-9150-3

43. Mhaskar HN. Local approximation using Hermite functions. In: N. K. Govil, R. Mohapatra, M. A. Qazi, G. Schmeisser eds. Progress in Approximation Theory and Applicable Complex Analysis. Cham: Springer (2017). p. 341–62. doi: 10.1007/978-3-319-49242-1_16

44. Filbir F, Mhaskar HN. A quadrature formula for diffusion polynomials corresponding to a generalized heat kernel. J Fourier Anal Appl. (2010) 16:629–57. doi: 10.1007/s00041-010-9119-4

45. Mhaskar HN. A unified framework for harmonic analysis of functions on directed graphs and changing data. Appl Comput Harm Anal. (2018) 44:611–44. doi: 10.1016/j.acha.2016.06.007

46. Rivlin TJ. The Chebyshev Polynomials. New York, NY: John Wiley and Sons (1974).

47. Grigor'yan A. Heat kernels on metric measure spaces with regular volume growth. Handb Geometr Anal. (2010) 2. Available online at: https://www.math.uni-bielefeld.de/~grigor/hga.pdf.

48. Mhaskar HN. Approximate quadrature measures on data-defined spaces. In: Dick J, Kuo FY, Wozniakowski H, editors. Festschrift for the 80th Birthday of Ian Sloan. Berlin: Springer (2017). p. 931–62. doi: 10.1007/978-3-319-72456-0_41

49. Mhaskar HN. On the degree of approximation in multivariate weighted approximation. In: M. D. Buhman, and D. H. Mache, eds. Advanced Problems in Constructive Approximation. Basel: Birkhäuser (2003). p. 129–41. doi: 10.1007/978-3-0348-7600-1_10

50. Mhaskar HN. Approximation theory and neural networks. In: Proceedings of the International Workshop in Wavelet Analysis and Applications. Delhi (1999). p. 247–89.

51. Mhaskar HN, Narcowich FJ, Ward JD. Spherical Marcinkiewicz-Zygmund inequalities and positive quadrature. Math Comput. (2001) 70:1113–30. doi: 10.1090/S0025-5718-00-01240-0

52. Mhaskar HN. Dimension independent bounds for general shallow networks. Neural Netw. (2020) 123:142–52. doi: 10.1016/j.neunet.2019.11.006

53. Hörmander L. The spectral function of an elliptic operator. Acta Math. (1968) 121:193–218. doi: 10.1007/BF02391913

54. Shubin MA. Pseudodifferential Operators and Spectral Theory. Berlin: Springer (1987).

55. Grigor'yan A. Gaussian upper bounds for the heat kernel on arbitrary manifolds. J Diff Geom. (1997) 45:33–52. doi: 10.4310/jdg/1214459753

56. Boucheron S, Lugosi G, Massart P. Concentration Inequalities: A Nonasymptotic Theory of Independence. Oxford: Oxford University Press (2013).

57. Hagerup T, Rüb C. A guided tour of Chernoff bounds. Inform Process Lett. (1990) 33:305–8. doi: 10.1016/0020-0190(90)90214-I

Appendix

A. Gaussian Upper Bound on Manifolds

Let 𝕏 be a compact and connected smooth q-dimensional manifold, g(x) = (g_{i,j}(x)) be its metric tensor, and (g^{i,j}(x)) be the inverse of g(x). The Laplace-Beltrami operator on 𝕏 is defined by

\Delta(f)(x)=\frac{1}{\sqrt{|g(x)|}}\sum_{i=1}^{q}\sum_{j=1}^{q}\partial_i\left(\sqrt{|g(x)|}\,g^{i,j}(x)\,\partial_jf\right),

where |g| = det(g). The symbol of Δ is given by

a(x,\xi)=\frac{1}{\sqrt{|g(x)|}}\sum_{i=1}^{q}\sum_{j=1}^{q}\left(\sqrt{|g(x)|}\,g^{i,j}(x)\right)\xi_i\xi_j.

Then a(x, ξ) ≥ c|ξ|2. Therefore, Hörmander's theorem [53, Theorem 4.4], [54, Theorem 16.1] shows that for x ∈ 𝕏,

\sum_{k:\lambda_k<\lambda}\phi_k(x)^2\le c\lambda^q,\qquad\lambda\ge 1.    (A.1)

In turn, [44, Proposition 4.1] implies that

\sum_{k=0}^{\infty}\exp(-\lambda_k^2t)\phi_k(x)^2\le ct^{-q/2},\qquad t\in(0,1],\ x\in\mathbb{X}.

Then [55, Theorem 1.1] shows that (3.3) is satisfied.
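
As a sanity check on (A.1) in the simplest setting, one may consider the flat torus 𝕋q, where one may take ϕk(x) = eik·x, k ∈ ℤq, and λk = |k|. This example is ours and is not needed for the argument above; since |ϕk(x)| = 1, the bound (A.1) reduces to a lattice-point count (with cq the volume of the Euclidean unit ball):

\sum_{k:\lambda_k<\lambda}|\phi_k(x)|^2=\#\{k\in\mathbb{Z}^q:|k|<\lambda\}\le\mathrm{vol}\left(\mathbb{B}(0,\lambda+\sqrt{q})\right)=c_q(\lambda+\sqrt{q})^q\le c\lambda^q,\qquad\lambda\ge 1,

since each lattice point k with |k| < λ is the corner of a unit cube k + [0, 1)q contained in the ball of radius λ + √q.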

B. Probabilistic Estimates

We need the following basic facts from probability theory. Proposition B.1(a) below is a reformulation of Boucheron et al. [56, section 2.1, 2.7]. A proof of Proposition B.1(b) below is given in Hagerup and Rüb [57, Equation (7)].

Proposition B.1. (a) (Bernstein concentration inequality) Let Z1, ⋯, ZM be independent real-valued random variables such that for each j = 1, ⋯, M, |Zj| ≤ R, and 𝔼(Zj2) ≤ V. Then, for any t > 0,

\mathrm{Prob}\left(\left|\frac{1}{M}\sum_{j=1}^{M}(Z_j-\mathbb{E}(Z_j))\right|\ge t\right)\le 2\exp\left(-\frac{Mt^2}{2(V+Rt)}\right).    (B.1)

(b) (Chernoff bound) Let M ≥ 1, 0 ≤ p ≤ 1, and Z1, ⋯, ZM be independent random variables taking values in {0, 1}, with Prob(Zk = 1) = p. Then for t ∈ (0, 1],

\mathrm{Prob}\left(\sum_{k=1}^{M}Z_k\le(1-t)Mp\right)\le\exp(-t^2Mp/2),\qquad \mathrm{Prob}\left(\left|\sum_{k=1}^{M}Z_k-Mp\right|\ge tMp\right)\le 2\exp(-t^2Mp/2).    (B.2)
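
The following minimal simulation (ours, in Python) checks the Chernoff bound (B.2) numerically for one choice of M, p, and t; the empirical tail probabilities sit comfortably below the stated bounds.

import numpy as np

rng = np.random.default_rng(3)
M, p, t, trials = 400, 0.1, 0.5, 200000

S = rng.binomial(M, p, size=trials)                  # each entry is sum_{k=1}^M Z_k, Z_k ~ Bernoulli(p) i.i.d.
lower_tail = np.mean(S <= (1 - t) * M * p)           # empirical Prob(sum <= (1 - t) M p)
two_sided = np.mean(np.abs(S - M * p) >= t * M * p)  # empirical Prob(|sum - M p| >= t M p)

print(lower_tail, np.exp(-t ** 2 * M * p / 2))       # compare with exp(-t^2 M p / 2)
print(two_sided, 2 * np.exp(-t ** 2 * M * p / 2))    # compare with 2 exp(-t^2 M p / 2)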

Keywords: Kernel based approximation, distributed learning, machine learning, inverse problems, probability estimation

Citation: Mhaskar HN (2020) Kernel-Based Analysis of Massive Data. Front. Appl. Math. Stat. 6:30. doi: 10.3389/fams.2020.00030

Received: 29 March 2020; Accepted: 03 July 2020;
Published: 20 October 2020.

Edited by:

Ke Shi, Old Dominion University, United States

Reviewed by:

Jianjun Wang, Southwest University, China
Alex Cloninger, University of California, San Diego, United States

Copyright © 2020 Mhaskar. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Hrushikesh N. Mhaskar, hrushikesh.mhaskar@cgu.edu

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.