- Institute of Mathematical Sciences, Claremont Graduate University, Claremont, CA, United States
Dealing with massive data is a challenging task for machine learning. An important aspect of machine learning is function approximation. In the context of massive data, some of the commonly used tools for this purpose are sparsity, divide-and-conquer, and distributed learning. In this paper, we develop a very general theory of approximation by networks, which we have called eignets, to achieve local, stratified approximation. The very massive nature of the data allows us to use these eignets to solve inverse problems, such as finding a good approximation to the probability law that governs the data and finding the local smoothness of the target function near different points in the domain. In fact, we develop a wavelet-like representation using our eignets. Our theory is applicable to approximation on a general locally compact metric measure space. Special examples include approximation by periodic basis functions on the torus, zonal function networks on a Euclidean sphere (including smooth ReLU networks), Gaussian networks, and approximation on manifolds. We construct pre-fabricated networks so that no data-based training is required for the approximation.
1. Introduction
Rapid advances in technology have led to the availability of massive data and to the need to analyze it. The problem arises in almost every area of life, from medical science to homeland security to finance. An immediate problem in dealing with a massive data set is that it cannot be stored in computer memory; we therefore have to deal with the data piecemeal, keeping access to external memory to a minimum. Another challenge is to devise efficient numerical algorithms to overcome the difficulties, for example, in solving the customary optimization problems in machine learning. On the other hand, the very availability of a massive data set should also lead to opportunities to solve some problems heretofore considered unmanageable. For example, deep learning often requires a large amount of training data, which, in turn, helps us to figure out the granularity in the data. Apart from deep learning, distributed learning is also a popular way of dealing with big data. A good survey, with a taxonomy of methods for dealing with massive data, is given by Zhou et al. [1].
As pointed out in Cucker and Smale [2], Cucker and Zhou [3], and Girosi and Poggio [4], the main task in machine learning can be viewed as one of approximation of functions based on noisy values of the target function, sampled at points that are themselves sampled from an unknown distribution. It is therefore natural to seek approximation theory techniques to solve the problem. However, most of the classical approximation theory results are either not constructive or study function approximation only on known domains. In this century, a new paradigm has emerged, in which function approximation is considered on data-defined manifolds; a good introduction to the subject is the special issue [5] of Applied and Computational Harmonic Analysis, edited by Chui and Donoho. In this theory, one assumes the manifold hypothesis, i.e., that the data is sampled from a probability distribution μ* supported on a smooth, compact, and connected Riemannian manifold; for simplicity, one even assumes that μ* is the Riemannian volume measure for the manifold, normalized to be a probability measure. Following, e.g., [6–10], one first constructs a “graph Laplacian” from the data and finds its eigendecomposition. It is proved in the abovementioned papers that, as the size of the data tends to infinity, the graph Laplacian converges to the Laplace-Beltrami operator on the manifold, and its eigenvalues (respectively, eigenvectors) converge to the corresponding quantities on the manifold. A great deal of work is devoted to studying the geometry of this unknown manifold (e.g., [11, 12]) based on the so-called heat kernel. The theory of function approximation on such manifolds is also well-developed (e.g., [13–17]).
A bottleneck in this theory is the computation of the eigendecomposition of a matrix, which is necessarily huge in the case of big data. Kernel-based methods have also been used in connection with approximation on manifolds (e.g., [18–22]). The kernels used in these methods are typically constructed as radial basis functions (RBFs) in the ambient space, and the methods are traditional machine learning methods involving optimization. As mentioned earlier, massive data poses a big challenge for the solution of these optimization problems. The theoretical results in this connection assume a Mercer expansion of the kernel in terms of the Laplacian eigenfunctions, satisfying certain conditions. In this paper, we develop a general theory that includes several RBF kernels used in different contexts (examples are discussed in section 2). Rather than using optimization-based techniques, we provide a direct construction of the approximation based on what we have called eignets. An eignet is defined directly using the eigendecomposition on the manifold. We thus focus directly on the properties of the Mercer expansion in an abstract and unified manner that enables us to construct local approximations suitable for working with massive data without using optimization.
It is also possible that the manifold hypothesis does not hold; recent work [23] by Fefferman et al. proposes an algorithm to test this hypothesis. On the other hand, our theory for function approximation does not necessarily use the full strength of Riemannian geometry. In this paper, we have therefore decided to work with a general locally compact metric measure space, isolating those properties that are needed for our analysis and substituting for those that are not applicable in the current setting.
Our motivation comes from some recent works on distributed learning by Zhou et al. [24–26] as well as our own work on deep learning [27, 28]. For example, in Lin et al. [26], the approximation is done on the Euclidean sphere using a localized kernel introduced in Mhaskar [29], where the massive data is divided into smaller parts, each dense on the sphere, and the resulting polynomial approximations are added to get the final result. In Chui et al. [24], the approximation takes place on a cube, and exploits any known sparsity in the representation of the target function in terms of spline functions. In Mhaskar and Poggio [28] and Mhaskar [27], we have argued that from a function approximation point of view, the observed superiority of deep networks over shallow ones results from the ability of deep networks to exploit any compositional structure in the target function. For example, in image analysis, one may divide the image into smaller patches, which are then combined in a hierarchical manner, resulting in a tree structure [30]. By putting a shallow network at each node to learn those aspects of the target function that depend upon the pixels seen up to that level, one can avoid the curse of dimensionality. In some sense, this is a divide-and-conquer strategy, not so much on the data set itself but on the dimension of the input space.
The highlights of this paper are the following.
• In order to avoid an explicit, data-dependent eigendecomposition, we introduce the notion of an eignet, which generalizes several radial basis function and zonal function networks. We construct pre-fabricated eignets, whose linear combinations can be constructed just by using the noisy values of the target function as the coefficients, to yield the desired approximation.
• Our theory generalizes the results in a number of examples used commonly in machine learning, some of which we will describe in section 2.
• The use of optimization methods, such as empirical risk minimization, has an intrinsic difficulty, namely that the minimizer of this risk may have no connection with the approximation error. There are also other problems, such as local minima, saddle points, and speed of convergence, that need to be taken into account, and the massive nature of the data makes this an even more challenging task. Our results do not depend upon any kind of optimization to determine the necessary approximation.
• We develop a theory for local approximation using eignets, so that only a relatively small amount of data is used to approximate the target function on any ball of the space, the data being sub-sampled using a distribution supported on a neighborhood of that ball. The accuracy of approximation adjusts itself automatically depending upon the local smoothness of the target function on the ball.
• In typical machine learning algorithms, it is customary to assume a prior on the target function, called a smoothness class in approximation theory parlance. Our theory demonstrates clearly how massive data can actually help solve the inverse problem of determining the local smoothness of the target function, using a wavelet-like representation based solely on the data.
• Our results allow one to solve the inverse problem of estimating the probability density from which the data is chosen. In contrast to the statistical approaches that we are aware of, there is no limitation on how accurate the approximation can be asymptotically in terms of the number of samples; the accuracy is determined entirely by the smoothness of the density function.
• All our estimates are given in terms of probability of the error being small rather than the expected value of some loss function being small.
This paper is abstract, theoretical, and technical. In section 2, we present a number of examples that are generalized by our set-up. The abstract set-up, together with the necessary definitions and assumptions, is discussed in section 3. The main results are stated in section 4 and proved in section 8. The proofs require a great deal of preparation, which is presented in sections 5–7. The results in these sections are not all new; many of them are new only in some nuance. For example, in section 7 we have proven the quadrature formulas required in the construction of our pre-fabricated networks in a probabilistic setting, and we have replaced the estimate on gradients used in our previous works by a certain Lipschitz condition, which makes sense without a differentiability structure on the manifold. Our Theorem 7.1 generalizes most of our previous results in this direction, with the exception of [31, Theorem 2.3]. We have striven to give as many proofs as possible, partly for the sake of completeness and partly because the results were not stated earlier in exactly the same form as needed here. In Appendix A, we give a short proof of the fact that the Gaussian upper bound for the heat kernel holds for arbitrary smooth, compact, connected manifolds; we could not find a reference for this fact. In Appendix B, we state the main probability theory estimates that are used ubiquitously in the paper.
2. Motivating Examples
In this paper, we aim to develop a unifying theory applicable to a variety of kernels and domains. In this section, we describe some examples which have motivated the abstract theory to be presented in the rest of the paper. In the following examples, q ≥ 1 is a fixed integer.
Example 2.1. Let 𝕋q = ℝq/(2πℤq) be the q-dimensional torus. The distance between points x = (x1, ⋯, xq) and y = (y1, ⋯, yq) is the usual quotient metric, ρ(x, y) = max1≤j≤q mink∈ℤ |xj − yj + 2πk|. The trigonometric monomial system {exp(ik · ○) : k ∈ ℤq} is orthonormal with respect to the Lebesgue measure normalized to be a probability measure on 𝕋q. We recall that the periodization of a function f :ℝq → ℝ is defined formally by f○(x) = Σk∈ℤq f(x + 2πk). When f is integrable, the Fourier transform of f at k ∈ ℤq is the same as the k-th Fourier coefficient of f○. This Fourier coefficient will be denoted by f̂(k). A periodic basis function network has the form x ↦ Σj ajG(x − xj), where G is a periodic function called the activation function. The examples of the activation functions in which we are interested in this paper include:
1. Periodization of the Gaussian.
2. Periodization of the Hardy multiquadric1.
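To fix ideas, the following is a minimal numerical sketch (not from the paper) of a periodic basis function network on 𝕋1 whose activation is the periodization of the Gaussian; the truncation range of the lattice sum, the bandwidth sigma, and the centers and coefficients are all illustrative choices.

```python
import numpy as np

def periodized_gaussian(x, sigma=0.5, K=10):
    """Periodization of the Gaussian on the circle (q = 1):
    G(x) = sum_{|k| <= K} exp(-(x - 2*pi*k)^2 / (2*sigma^2)); the neglected tail of
    the lattice sum is negligible for moderate K."""
    k = np.arange(-K, K + 1)
    return np.exp(-(x[..., None] - 2.0 * np.pi * k) ** 2 / (2.0 * sigma ** 2)).sum(axis=-1)

def periodic_basis_network(x, centers, coeffs, sigma=0.5):
    """Network x -> sum_j a_j G(x - y_j) with the periodized Gaussian as activation."""
    return sum(a * periodized_gaussian(x - y, sigma) for a, y in zip(coeffs, centers))

# Illustrative usage: 6 equally spaced centers on the circle, random coefficients.
rng = np.random.default_rng(0)
centers = np.linspace(0.0, 2.0 * np.pi, 6, endpoint=False)
coeffs = rng.standard_normal(6)
x = np.linspace(0.0, 2.0 * np.pi, 200)
values = periodic_basis_network(x, centers, coeffs)   # shape (200,)
```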
Example 2.2. If x ∈ [−1, 1], there exists a unique θ ∈ [0, π] such that x = cos(θ). Therefore, [−1, 1]q can be thought of as a quotient space of 𝕋q where all points of the form ε ⊙ θ = {(ε1θ1, ⋯, εqθq)}, ε ∈ {−1, 1}q, are identified. Any function on [−1, 1]q can then be lifted to 𝕋q, and this lifting preserves all the smoothness properties of the function. Our set-up below includes [−1, 1]q, where the distance and the measure are defined via the mapping to the torus, and suitably weighted Jacobi polynomials are considered to be the orthonormalized family of functions. In particular, if G is a periodic activation function, x = cos(θ), y = cos(ϕ), then the resulting function of (x, y) is an activation function on [−1, 1]q with an expansion in terms of the Tk's, the tensor product, orthonormalized Chebyshev polynomials. Furthermore, the bk's in this expansion have the same asymptotic behavior as the Ĝ(k)'s. □
Example 2.3. Let be the unit sphere in ℝq+1. The dimension of 𝕊q as a manifold is q. We assume the geodesic distance ρ on 𝕊q and the volume measure μ* are normalized to be a probability measure. We refer the reader to Müller [33] for details, describing here only the essentials to get a “what-it-is-all-about” introduction. The set of (equivalence classes) of restrictions of polynomials in q + 1 variables with total degree < n to 𝕊q are called spherical polynomials of degree < n. The set of restrictions of homogeneous harmonic polynomials of degree ℓ to 𝕊q is denoted by ℍℓ with dimension dℓ. There is an orthonormal basis for each ℍℓ that satisfies an addition formula
where ωq−1 is the volume of 𝕊q−1, and pℓ is the degree ℓ ultraspherical polynomial so that the family {pℓ} is orthonormalized with respect to the weight (1 − x2)(q−2)/2 on (−1, 1). A zonal function on the sphere has the form x ↦ G(x · y), where the activation function G:[−1, 1] → ℝ has a formal expansion of the form
In particular, formally, . The examples of the activation functions in which we are interested in this paper include
1.
It is shown in Müller [33, Lemma 18] that
2.
It is shown in Mhaskar et al. [34, Lemma 5.1] that
3. The smooth ReLU function G(t) = log(1 + et). The function G has an analytic extension to the strip ℝ + (−π, π)i of the complex plane. So, the Bernstein approximation theorem [35, Theorem 5.4.2] can be used to show that
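Before moving on, here is a minimal sketch (an illustration, not the paper's construction) of a zonal function network on 𝕊2 with the smooth ReLU (softplus) activation G(t) = log(1 + et); the centers and coefficients are placeholders.

```python
import numpy as np

def softplus(t):
    """Smooth ReLU G(t) = log(1 + e^t), evaluated stably."""
    return np.logaddexp(0.0, t)

def zonal_network(x, centers, coeffs):
    """Zonal function network x -> sum_j a_j G(x . y_j) on the unit sphere.
    x: (N, 3) unit vectors; centers: (m, 3) unit vectors; coeffs: (m,)."""
    return softplus(x @ centers.T) @ coeffs

# Illustrative usage with random centers and evaluation points on S^2.
rng = np.random.default_rng(0)
centers = rng.standard_normal((8, 3))
centers /= np.linalg.norm(centers, axis=1, keepdims=True)
coeffs = rng.standard_normal(8)
x = rng.standard_normal((100, 3))
x /= np.linalg.norm(x, axis=1, keepdims=True)
values = zonal_network(x, centers, coeffs)   # shape (100,)
```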
Example 2.4. Let 𝕏 be a smooth, compact, connected Riemannian manifold (without boundary), ρ be the geodesic distance on 𝕏, μ* be the Riemannian volume measure normalized to be a probability measure, {λk} be the sequence of eigenvalues of the (negative) Laplace-Beltrami operator on 𝕏, and ϕk be the eigenfunction corresponding to the eigenvalue λk; in particular, ϕ0 ≡ 1. This example, of course, includes Examples 2.1–2.3. An eignet in this context has the form , where the activation function G has a formal expansion of the form . One interesting example is the heat kernel:
□
Example 2.5. Let 𝕏 = ℝq, ρ be the ℓ∞ norm on 𝕏, μ* be the Lebesgue measure. For any multi-integer , the (multivariate) Hermite function ϕk is defined via the generating function
The system {ϕk} is orthonormal with respect to μ*, and satisfies
where Δ is the Laplacian operator. As a consequence of the so called Mehler identity, one obtains [36] that
A Gaussian network is a network of the form , where it is convenient to think of . □
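The elided formula aside, a Gaussian network in this example can be sketched as follows (assuming the standard form x ↦ Σk ak exp(−|x − xk|2); the centers and coefficients are placeholders).

```python
import numpy as np

def gaussian_network(x, centers, coeffs):
    """Gaussian network x -> sum_k a_k exp(-|x - x_k|^2), the standard form assumed here.
    x: (N, q); centers: (m, q); coeffs: (m,)."""
    sq_dists = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)   # (N, m)
    return np.exp(-sq_dists) @ coeffs

# Illustrative usage with q = 2.
rng = np.random.default_rng(1)
centers = rng.standard_normal((10, 2))
coeffs = rng.standard_normal(10)
x = rng.standard_normal((50, 2))
values = gaussian_network(x, centers, coeffs)   # shape (50,)
```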
3. The Set-Up and Definitions
3.1. Data Spaces
Let 𝕏 be a connected, locally compact metric space with metric ρ. For r > 0, x ∈ 𝕏, we denote
𝔹(x, r) = {y ∈ 𝕏 : ρ(x, y) ≤ r}.
If K ⊆ 𝕏 and x ∈ 𝕏, we write as usual ρ(K, x) = infy∈K ρ(x, y). It is convenient to denote the set
{x ∈ 𝕏 : ρ(K, x) ≤ r} by 𝔹(K, r). The diameter of K is defined by diam(K) = supx,y∈K ρ(x, y).
For a Borel measure ν on 𝕏 (signed or positive), we denote by |ν| its total variation measure defined for Borel subsets K ⊂ 𝕏 by
where the supremum is over all countable measurable partitions of K. In the sequel, the term measure will mean a signed or positive, complete, sigma-finite, Borel measure. Terms, such as measurable will mean Borel measurable. If f:𝕏 → ℝ is measurable, K ⊂ 𝕏 is measurable, and ν is a measure, we define2
The symbol Lp(ν, K) denotes the set of all measurable functions f for which ‖f‖p, ν, K < ∞, with the usual convention that two functions are considered equal if they are equal |ν|-almost everywhere on K. The set C0(K) denotes the set of all uniformly continuous functions on K vanishing at ∞. In the case when K = 𝕏, we will omit the mention of K, unless it is necessary to mention it to avoid confusion.
We fix a non-decreasing sequence {λk}k=0∞ of nonnegative numbers, with λ0 = 0 and λk ↑ ∞ as k → ∞. We also fix a positive sigma-finite Borel measure μ* on 𝕏, and a system of functions {ϕk}k=0∞, orthonormal with respect to μ*, such that ϕ0(x) > 0 for all x ∈ 𝕏. We define, for n > 0,
Πn = span{ϕk : λk < n}.
It is convenient to write Πn = {0} if n ≤ 0 and Π∞ = ⋃n>0Πn. It will be assumed in the sequel that Π∞ is dense in C0 (and, thus, in every Lp, 1 ≤ p < ∞). We will often refer to the elements of Π∞ as diffusion polynomials in keeping with [13].
Definition 3.1. We will say that a sequence {an} (or a function F :[0, ∞) → ℝ) is fast decreasing if supn≥1 nS|an| < ∞ (respectively, supt≥1 tS|F(t)| < ∞) for every S > 0. A sequence {an} has polynomial growth if there exist c1, c2 > 0 such that |an| ≤ c1nc2 for all n ≥ 1, and similarly for functions.
Definition 3.2. The space 𝕏 (more precisely, the tuple ) is called a data space if each of the following conditions is satisfied.
1. For each x ∈ 𝕏, r > 0, 𝔹(x, r) is compact.
2. (Ball measure condition) There exist q ≥ 1 and κ > 0 with the following property: for each x ∈ 𝕏, r > 0,
(In particular, μ*({y ∈ 𝕏: ρ(x, y) = r}) = 0.)
3. (Gaussian upper bound) There exist κ1, κ2 > 0 such that for all x, y ∈ 𝕏, 0 < t ≤ 1,
4. (Essential compactness) For every n ≥ 1, there exists a compact set 𝕂n ⊂ 𝕏 such that the function n ↦ diam(𝕂n) has polynomial growth, while the functions
and
are both fast decreasing. (Necessarily, has polynomial growth as well.)
Remark 3.1. We assume without loss of generality that 𝕂n ⊆ 𝕂m for all n < m and that . □
Remark 3.2. If 𝕏 is compact, then the first condition as well as the essential compactness condition are automatically satisfied. We may take 𝕂n = 𝕏 for all n. In this case, we will assume tacitly that μ* is a probability measure, and ϕ0 ≡ 1. □
Example 3.1. (Manifold case) This example points out that our notion of data space generalizes the set-ups in Examples 2.1–2.4. Let 𝕏 be a smooth, compact, connected Riemannian manifold (without boundary), ρ be the geodesic distance on 𝕏, μ* be the Riemannian volume measure normalized to be a probability measure, {λk} be the sequence of eigenvalues of the (negative) Laplace-Beltrami operator on 𝕏, and ϕk be the eigenfunction corresponding to the eigenvalue λk; in particular, ϕ0 ≡ 1. If the condition (3.2) is satisfied, then this system is a data space. Of course, the assumption of essential compactness is satisfied trivially (see Appendix A for the Gaussian upper bound). □
Example 3.2. (Hermite case) We illustrate how Example 2.5 is included in our definition of a data space. Accordingly, we assume the set-up as in that example. For a > 0, let . With , the system is a data space. When a = 1, we will omit its mention from the notation in this context. The first two conditions are obvious. The Gaussian upper bound follows by the multivariate Mehler identity [37, Equation 4.27]. The assumption of essential compactness is satisfied with 𝕂n = 𝔹(0, cn) for a suitable constant c (cf. [38, Chapter 6]). □
In the rest of this paper, we assume 𝕏 to be a data space. Different theorems will require some additional assumptions, two of which we now enumerate. Not every theorem will need all of these; we will state explicitly which theorem uses which assumptions, apart from 𝕏 being a data space.
The first of these deals with the product of two diffusion polynomials. We do not know of any situation where it is not satisfied but are not able to prove it in general.
Definition 3.3. (Product assumption) There exists A* ≥ 1 and a family such that for every S > 0,
We say that a strong product assumption is satisfied if, instead of (3.4), we have PQ ∈ ΠA*n for every n > 0 and P, Q ∈ Πn.
Example 3.3. In the setting of Example 3.2, if P, Q ∈ Πn, then PQ = Rϕ0 for some R ∈ Π2n. So, the product assumption holds trivially. The strong product assumption does not hold. However, if P, Q ∈ Πn, then . The manifold case is discussed below in Remark 3.3. □
Remark 3.3. One of the referees of our paper has pointed out three recent references [39–41], on the subject of the product assumption. The first two of these deal with the manifold case (Example 3.1). The paper [41] extends the results in Lu et al. [40] to the case when the functions ϕk are eigenfunctions of a more general elliptic operator. Since the results in these two papers are similar qualitatively, we will comment on Lu et al. [40] and Steinerberger [39].
In this remark only, let . Let λk, λj < n. In Steinerberger [39], Steinerberger relates EAn(2, ϕkϕj) [see (3.6) below for definition] with
While this gives some insight into the product assumption, the results are inconclusive about the product assumption as stated. Also, it is hard to verify whether the conditions mentioned in the paper are satisfied for a given manifold.
In Lu et al. [40], it is shown that for any ϵ, δ > 0, there exists a subspace V of dimension such that for all ϕk, ϕj ∈ Πn, . The subspace V does not have to be ΠAn for any A. Since the dimension of span{ϕkϕj} is O(n2), the result is meaningful only if 0 < δ < 1 and ϵ ≥ n1−1/δ.
In Geller and Pesenson [42, Theorem 6.1], it is shown that the strong product assumption (and, thus, also the product assumption) holds in the manifold case when the manifold is a compact homogeneous manifold. We have extended this theorem in Filbir and Mhaskar [17, Theorem A.1] for the case of eigenfunctions of general elliptic partial differential operators on arbitrary compact, smooth manifolds provided that the coefficient functions in the operator satisfy some technical conditions. □
In our results in section 4, we will need the following condition, which serves the purpose of gradient in many of our earlier theorems on manifolds.
Definition 3.4. We say that the system Ξ satisfies Bernstein-Lipschitz condition if for every n > 0, there exists Bn > 0 such that
Remark 3.4. Both in the manifold case and the Hermite case, Bn = cn for some constant c > 0. A proof in the Hermite case can be found in Mhaskar [43] and in the manifold case in Filbir and Mhaskar [44]. □
3.2. Smoothness Classes
We define next the smoothness classes of interest here.
Definition 3.5. A function w:𝕏 → ℝ will be called a weight function if for all k. If w is a weight function, we define
We will omit the mention of w if w ≡ 1 on 𝕏.
We find it convenient to denote by Xp the space Lp(𝕏) if 1 ≤ p < ∞, and the space C0(𝕏) if p = ∞.
Definition 3.6. Let 1 ≤ p ≤ ∞, γ > 0, and w be a weight function.
(a) For f ∈ Lp(𝕏), we define
and note that
The space Wγ,p,w comprises all f for which ‖f‖Wγ,p,w < ∞.
(b) We write . If B is a ball in 𝕏, comprises functions in , which are supported on B.
(c) If x0 ∈ 𝕏, the space Wγ,p,w(x0) comprises functions f such that there exists r > 0 with the property that, for every , ϕf ∈ Wγ,p,w.
Remark 3.5. In both the manifold case and the Hermite case, characterizations of the smoothness classes Wγ,p are available in terms of constructive properties of the functions, such as the number of derivatives, estimates on certain moduli of smoothness or K-functionals, etc. In particular, the class C∞ coincides with the class of infinitely differentiable functions vanishing at infinity. □
We can now state another assumption that will be needed in studying local approximation.
Definition 3.7. (Partition of unity) For every r > 0, there exists a countable family of functions in C∞ with the following properties:
1. Each is supported on 𝔹(xk, r) for some xk ∈ 𝕏.
2. For every and x ∈ 𝕏, 0 ≤ ψk, r(x) ≤ 1.
3. For every x ∈ 𝕏, there exists a finite set Γ(x) of indices such that
Σk∈Γ(x) ψk, r(x) = 1.
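For orientation, the following is a one-dimensional sketch of such a partition of unity on ℝ (the Hermite case with q = 1), built from normalized C∞ bumps; the centers and radius are arbitrary choices, and the construction is only meant to illustrate Definition 3.7, not to reproduce any construction from the paper.

```python
import numpy as np

def partition_of_unity(centers, r):
    """C-infinity bumps psi_k supported on [c_k - r, c_k + r], normalized so that
    they take values in [0, 1] and sum to 1 on the region covered by the intervals."""
    centers = np.asarray(centers, dtype=float)

    def bump(x, c):
        # C-infinity bump supported on (c - r, c + r).
        u = (np.asarray(x, dtype=float) - c) / r
        out = np.zeros_like(u)
        inside = np.abs(u) < 1.0
        out[inside] = np.exp(-1.0 / (1.0 - u[inside] ** 2))
        return out

    def make_psi(c):
        # Well-defined wherever at least one bump is positive (the covered region).
        return lambda x: bump(x, c) / sum(bump(x, cc) for cc in centers)

    return [make_psi(c) for c in centers]

# Centers r apart, so consecutive bumps overlap and the normalization stays positive.
centers = np.arange(-2.0, 2.01, 0.5)
psis = partition_of_unity(centers, 0.5)
x = np.linspace(-1.5, 1.5, 9)
total = sum(psi(x) for psi in psis)    # equals 1 on the well-covered interval
```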
We note some obvious observations about the partition of unity without the simple proof.
Proposition 3.1. Let r > 0 and {ψk, r} be a partition of unity.
(a) Necessarily, is supported on 𝔹(x, 3r).
(b) For x ∈ 𝕏, .
The constant convention. In the sequel, c, c1, ⋯ will denote generic positive constants depending only on the fixed quantities under discussion, such as Ξ, q, κ, κ1, κ2, the various smoothness parameters, and the filters to be introduced. Their value may be different at different occurrences, even within a single formula. The notation A ~ B means c1A ≤ B ≤ c2A. □
We end this section by defining a kernel that plays a central role in this theory.
Let H :[0, ∞) → ℝ be a compactly supported function. In the sequel, we define
ΦN(H; x, y) = Σk H(λk/N)ϕk(x)ϕk(y),  x, y ∈ 𝕏, N > 0.
If S ≥ 1 is an integer, and H is S times continuously differentiable, we introduce the notation
The following proposition recalls an important property of these kernels. Proposition 3.2 is proven in Maggioni and Mhaskar [13] and more recently in much greater generality in Mhaskar [45, Theorem 4.3].
Proposition 3.2. Let S > q be an integer, H :ℝ → ℝ be an even, S times continuously differentiable, compactly supported function. Then, for every x, y ∈ 𝕏, N > 0,
In the sequel, let h :ℝ → [0, 1] be a fixed, infinitely differentiable, even function, non-increasing on [0, ∞), with h(t) = 1 if |t| ≤ 1/2 and h(t) = 0 if t ≥ 1. If ν is any measure with a bounded total variation on 𝕏, we define
σn(h, ν; f)(x) = ∫𝕏 f(y)Φn(h; x, y)dν(y),  x ∈ 𝕏, n > 0.
We will omit the mention of h in the notations; e.g., write Φn(x, y) = Φn(h; x, y), and the mention of ν if ν = μ*. In particular,
σn(f)(x) = Σk h(λk/n) f̂(k)ϕk(x),
where, whenever the integral is defined, we write
f̂(k) = ∫𝕏 f(y)ϕk(y)dμ*(y).
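To illustrate these definitions concretely, here is a sketch (not the paper's code) in the setting of Example 2.1 with q = 1: the orthonormal system is taken in its real form {1, √2 cos(k·), √2 sin(k·)} with λk = k, so that Φn(x, y) = 1 + 2Σk≥1 h(k/n) cos(k(x − y)), and σn(f) is computed by an equispaced quadrature rule for the normalized Lebesgue measure; the particular cutoff h below is one of many admissible choices.

```python
import numpy as np

def _psi(s):
    """Auxiliary C-infinity function: exp(-1/s) for s > 0 and 0 otherwise."""
    out = np.zeros_like(s, dtype=float)
    pos = s > 0
    out[pos] = np.exp(-1.0 / s[pos])
    return out

def h(t):
    """A smooth, even cutoff with h(t) = 1 for |t| <= 1/2, h(t) = 0 for |t| >= 1,
    non-increasing on [0, infinity)."""
    t = np.abs(np.asarray(t, dtype=float))
    s = 2.0 * t - 1.0
    return 1.0 - _psi(s) / (_psi(s) + _psi(1.0 - s))

def Phi_n(n, x, y):
    """Localized kernel on the circle: Phi_n(x, y) = 1 + 2 sum_{k>=1} h(k/n) cos(k(x - y))."""
    ks = np.arange(1, n + 1)                      # h(k/n) = 0 for k >= n anyway
    coeff = h(ks / n)
    d = np.subtract.outer(np.atleast_1d(x), np.atleast_1d(y))
    return 1.0 + 2.0 * np.cos(np.multiply.outer(d, ks)) @ coeff

def sigma_n(n, f, x, N=512):
    """sigma_n(f)(x) = integral of Phi_n(x, y) f(y) dmu*(y), with mu* the normalized
    Lebesgue measure on the circle, approximated by an equispaced quadrature rule."""
    t = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
    return Phi_n(n, x, t) @ f(t) / N

# Illustrative usage: sigma_n reproduces trigonometric polynomials of degree < n/2.
x = np.linspace(0.0, 2.0 * np.pi, 7)
f = lambda t: np.cos(3.0 * t) + 0.5 * np.sin(t)
approx = sigma_n(16, f, x)                        # close to f(x), since 3 < 16/2
```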
3.3. Measures
In this section, we describe the terminology involving measures.
Definition 3.8. Let d ≥ 0. A measure will be called d–regular if
The infimum of all constants c that work in (3.15) will be denoted by |||ν|||R, d, and the class of all d-regular measures will be denoted by .
For example, μ* itself is in R0 with [cf. (3.2)]. More generally, if w ∈ C0(𝕏) then the measure wdμ* is R0 with .
Definition 3.9. (a) A sequence {νn} of measures on 𝕏 is called an admissible quadrature measure sequence if the sequence {|νn|(𝕏)} has polynomial growth and
(b) A sequence {νn} of measures on 𝕏 is called an admissible product quadrature measure sequence if the sequence {|νn|(𝕏)} has polynomial growth and
(c) By abuse of terminology, we will say that a measure νn is an admissible quadrature measure (respectively, an admissible product quadrature measure) of order n if (with constants independent of n) and (3.16) [respectively, (3.17)] holds.
In the case when 𝕏 is compact, a well-known theorem called Tchakaloff's theorem [46, Exercise 2.5.8, p. 100] shows the existence of admissible product quadrature measures (even finitely supported probability measures). However, in order to construct such measures, it is much easier to prove the existence of admissible quadrature measures, as we will do in Theorem 7.1, and then use one of the product assumptions to derive admissible product quadrature measures.
Example 3.4. In the manifold case, let the strong product assumption hold as in Remark 3.3. If n ≥ 1 and is a finite subset satisfying the assumptions of Theorem 7.1, then the theorem asserts the existence of an admissible quadrature measure supported on . If {νn} is an admissible quadrature measure sequence, then is an admissible product quadrature measure sequence. In particular, there exist finitely supported admissible product quadrature measures of order n for every n ≥ 1. □
Example 3.5. We consider the Hermite case as in Example 3.2. For every a > 0 and n ≥ 1, Theorem 7.1 applied with the system Ξa yields admissible quadrature measures of order n supported on finite subsets of ℝq (in fact, of [−cn, cn]q for an appropriate c). In particular, an admissible quadrature measure of order for is an admissible product quadrature measure of order n for Ξ = Ξ1. □
3.4. Eignets
The notion of an eignet defined below is a generalization of the various kernels described in the examples in section 2.
Definition 3.10. A function b:[0, ∞) → (0, ∞) is called a smooth mask if b is non-increasing, and there exists B* = B*(b) ≥ 1 such that the mapping t ↦ b(B*t)/b(t) is fast decreasing. A function G:𝕏 × 𝕏 → ℝ is called a smooth kernel if there exists a measurable function W = W(G) :𝕏 → ℝ such that we have a formal expansion (with a smooth mask b)
If m ≥ 1 is an integer, an eignet with m neurons is a function of the form x ↦ Σk=1m akG(x, yk) for real numbers ak and yk ∈ 𝕏.
Example 3.6. In the manifold case, the notion of eignet includes all the examples stated in section 2 with W ≡ 1, except for the example of smooth ReLU function described in Example 2.3. In the Hermite case, (2.2) shows that the kernel defined on ℝq × ℝq is a smooth kernel, with λk = |k|1, ϕk as in Example 2.5, and . The function W here is . □
Remark 3.6. It is possible to relax the conditions on the mask in Definition 3.10. Firstly, the condition that b should be non-increasing is made only to simplify our proofs. It is not difficult to modify them without this assumption. Secondly, let b0 :[0, ∞) → ℝ satisfy |b0(t)| ≤ b1(t) for a smooth mask b1 as stipulated in that definition. The function b2 = b + 2b1 is then a smooth mask and so is b1. Let , j = 0, 1, 2. Then G0(x, y) = G2(x, y) − 2G1(x, y). Therefore, all of the results in sections 4 and 8 can be applied once with G2 and once with G1 to obtain a corresponding result for G0 with different constants. For this reason, we will simplify our presentation by assuming the apparently restrictive conditions stipulated in Definition 3.10. In particular, this includes the example of the smooth ReLU network described in Example 2.3. □
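As a concrete instance of Definition 3.10 (a sketch under explicit assumptions, not taken from the paper): on the circle of Example 2.1 with q = 1 and W ≡ 1, the mask b(t) = exp(−t) is non-increasing with b(B*t)/b(t) fast decreasing for any B* > 1, so G(x, y) = 1 + 2Σk≥1 e−k cos(k(x − y)) is a smooth kernel, and an eignet is a finite linear combination of its translates.

```python
import numpy as np

def smooth_kernel(x, y, mask=lambda t: np.exp(-t), kmax=60):
    """G(x, y) = sum_k b(lambda_k) phi_k(x) phi_k(y) on the circle, truncated at kmax;
    b(t) = exp(-t) is a smooth mask in the sense of Definition 3.10."""
    ks = np.arange(1, kmax + 1)
    d = np.subtract.outer(np.atleast_1d(x), np.atleast_1d(y))
    return 1.0 + 2.0 * np.cos(np.multiply.outer(d, ks)) @ mask(ks.astype(float))

def eignet(x, centers, coeffs):
    """Eignet with m = len(centers) neurons: x -> sum_k a_k G(x, y_k)."""
    return smooth_kernel(x, centers) @ coeffs

# Illustrative usage: an eignet with 12 neurons at random centers.
rng = np.random.default_rng(2)
centers = rng.uniform(0.0, 2.0 * np.pi, size=12)
coeffs = rng.standard_normal(12)
x = np.linspace(0.0, 2.0 * np.pi, 100)
values = eignet(x, centers, coeffs)   # shape (100,)
```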
Definition 3.11. Let ν be a measure on 𝕏 (signed or having bounded variation), and G ∈ C0(𝕏 × 𝕏). We define
and
Remark 3.7. Typically, we will use an approximate product quadrature measure sequence in place of the measure ν, where each of the measures in the sequence is finitely supported, to construct a sequence of networks. In the case when 𝕏 is compact, Tchakaloff's theorem shows that there exists an approximate product quadrature measure of order m supported on points. Using this measure in place of ν, one obtains a pre-fabricated eignet 𝔾n(ν) with neurons. However, this is not an actual construction. In the presence of the product assumption, Theorem 7.1 leads to the pre-fabricated networks 𝔾n in a constructive manner with the number of neurons as stipulated in that theorem. □
4. Main Results
In this section, we assume the Bernstein-Lipschitz condition (Definition 3.4) in all the theorems. We note that the measure μ* may not be a probability measure. Therefore, we take the help of an auxiliary function f0 to define a probability measure as follows. Let f0 ∈ C0(𝕏) with f0(x) ≥ 0 for all x ∈ 𝕏 be such that the measure ν* defined by dν* = f0dμ* is a probability measure. Necessarily, ν* is 0-regular. We assume noisy data of the form (y, ϵ), with a joint probability distribution τ defined for Borel subsets of 𝕏 × Ω for some measure space Ω, and with ν* being the marginal distribution of y with respect to τ. Let (y, ϵ) be a random variable following the law τ, and denote
It is easy to verify using Fubini's theorem that if is integrable with respect to τ, then, for any x ∈ 𝕏,
Let Y be a random sample from τ, and {νn} be an admissible product quadrature sequence in the sense of Definition 3.9. We define [cf. (3.20)]
where B* is as in Definition 3.10.
Remark 4.1. We note that the networks 𝔾n are prefabricated independently of the data. The network therefore has only |Y| terms depending upon the data. □
Our first theorem describes local function recovery using local sampling. We may interpret it in the spirit of distributed learning as in Chui et al. [24] and Lin et al. [26], where we are taking a linear combination of pre-fabricated networks 𝔾n using the function values themselves as the coefficients. The networks 𝔾n have essentially the same localization property as the kernels Φn (cf. Theorem 8.2).
Theorem 4.1. Let x0 ∈ 𝕏 and r > 0. We assume the partition of unity and find a function ψ ∈ C∞ supported on 𝔹(x0, 3r), which is equal to 1 on 𝔹(x0, r), set 𝔪 = ∫𝕏 ψdμ*, and let f0 = ψ/𝔪, so that f0dμ* is a probability measure. We assume the rest of the set-up as described. If f0f ∈ Wγ, ∞, then for 0 < δ < 1, and ,
Remark 4.2. If {y1, ⋯, yM} is a random sample from some probability measure supported on 𝕏, , and we construct a sub-sample using the distribution that associates the mass f0(yj)/s with each yj, then the probability of selecting points outside of the support of f0 is 0. This leads to a sub-sample Y. If , then the Chernoff bound, Proposition B.1(b), can be used to show that |Y| is large, as stipulated in Theorem 4.1. □
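A sketch of the sub-sampling scheme described in Remark 4.2 (with f0, the original sample, and the sub-sample size as placeholders; the normalization s is taken here to be the sum of the f0(yj), which is an assumption about the elided quantity):

```python
import numpy as np

def subsample(points, f0, size, seed=None):
    """Draw a sub-sample Y from {y_1, ..., y_M}, selecting y_j with probability
    f0(y_j) / s, where s = sum_j f0(y_j); points with f0(y_j) = 0 are never selected."""
    rng = np.random.default_rng(seed)
    weights = f0(points)
    probs = weights / weights.sum()
    idx = rng.choice(len(points), size=size, replace=True, p=probs)
    return points[idx]

# Illustrative usage on the circle, with f0 a nonnegative bump (placeholder) that
# vanishes outside an interval around x0 = pi.
rng = np.random.default_rng(3)
sample = rng.uniform(0.0, 2.0 * np.pi, size=5000)
f0 = lambda t: np.maximum(0.0, np.cos(t - np.pi)) ** 2
Y = subsample(sample, f0, size=500, seed=0)       # all points of Y lie in (pi/2, 3pi/2)
```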
Next, we state two inverse theorems. Our first theorem obtains accuracy on the estimation of the density f0 using eignets instead of positive kernels.
Theorem 4.2. With the set-up as in Theorem 8.3, let γ > 0, f0 ∈ Wγ, ∞, and
Then, with ,
Remark 4.3. Unlike density estimation using positive kernels, there is no inherent limit on the accuracy predicted by (4.5) on the estimation of f0. □
The following theorem gives a complete characterization of the local smoothness classes using eignets. In particular, Part (b) of the following theorem gives a solution to the inverse problem of determining what smoothness class the target function belongs to near each point of 𝕏. In theory, this leads to a data-based detection of singularities and sparsity analogous to what is assumed in Chui et al. [24] but in a much more general setting.
Theorem 4.3. Let f0 ∈ C0(𝕏) with f0(x) ≥ 0 for all x ∈ 𝕏 be such that dν* = f0dμ* is a probability measure, and let τ and f be as described above. We assume the partition of unity and the product assumption. Let S ≥ q + 2, 0 < γ ≤ S, x0 ∈ 𝕏, 0 < δ < 1. For each j ≥ 0, suppose that Yj is a random sample from τ with . Then with τ-probability ≥ 1 − δ,
(a) If f0f ∈ Wγ,∞(x0) then there exists a ball 𝔹 centered at x0 such that
(b) If there exists a ball 𝔹 centered at x0 for which (4.6) holds, then f0f ∈ Wγ, ∞,ϕ0(x0).
5. Preparatory Results
We prove a lower bound on μ*(𝔹(x, r)) for x ∈ 𝕏 and 0 < r ≤ 1 (cf. [47]).
Proposition 5.1. We have
In order to prove the proposition, we recall a lemma, proved in Mhaskar [14, Proposition 5.1].
Lemma 5.1. Let ν ∈ Rd. If g1:[0, ∞) → [0, ∞) is a non-increasing function, then, for any N > 0, r > 0, x ∈ 𝕏,
PROOF OF PROPOSITION 5.1.
Let x ∈ 𝕏, r > 0 be fixed in this proof, although the constants will not depend upon these. In this proof, we write
The Gaussian upper bound (3.3) shows that for t > 0,
Using Lemma 5.1 with d = 0, dν = dμ*, , , we obtain for :
Therefore, denoting in this proof only that κ0 = ‖ϕ0‖∞, we obtain that
We now choose t ~ r2 so that to obtain (5.1) for r ≤ c4. The estimate is clear for c4 < r ≤ 1. □
Next, we prove some results about the system {ϕk}.
Lemma 5.2. For n ≥ 1, we have
and
In particular, the function n ↦ dim(Πn) has polynomial growth.
PROOF. The Gaussian upper bound with x = y implies that
The estimate (5.6) follows from a Tauberian theorem [44, Proposition 4.1]. The essential compactness now shows that for any R > 0,
In particular,
□
Next, we prove some properties of the operators σn and diffusion polynomials. The following proposition follows easily from Lemma 5.1 and Proposition 3.2. (cf. [14, 48]).
Proposition 5.2. Let S, H be as in Proposition 3.2, d > 0, , and x ∈ 𝕏.
(a) If r ≥ 1/N, then
(b) We have
and
The following lemma is well-known; a proof is given in Mhaskar [15, Lemma 5.3].
Lemma 5.3. Let (Ω1, ν), (Ω2, τ) be sigma–finite measure spaces, Ψ : Ω1 × Ω2 → ℝ be ν × τ–integrable,
and formally, for τ–measurable functions f : Ω2 → ℝ,
Let 1 ≤ p ≤ ∞. If then T(f, x) is defined for ν–almost all x ∈ Ω1, and
Theorem 5.1. Let n > 0. If P ∈ Πn/2, then σn(P) = P. Also, for any p with 1 ≤ p ≤ ∞,
If 1 ≤ p ≤ ∞, and f ∈ Lp (𝕏), then
PROOF. The fact that σn(P) = P for all P ∈ Πn/2 is verified easily using the fact that h(t) = 1 for 0 ≤ t ≤ 1/2. Using (5.9) with μ* in place of |ν| and 0 in place of d, we see that
The estimate (5.14) follows using Lemma 5.3. The estimate (5.15) is now routine to prove. □
Proposition 5.3. For n ≥ 1, P ∈ Πn, 1 ≤ p ≤ ∞, and S > 0, we have
PROOF. In this proof, all constants will depend upon S. Using Schwarz inequality and essential compactness, it is easy to deduce that
Therefore, a use of Lemma 5.3 shows that
We use P in place of f to obtain (5.16). □
Proposition 5.4. Let n ≥ 1, P ∈ Πn, 0 < p < r ≤ ∞. Then
PROOF. The first part of (5.18) is proved in Mhaskar [15, Lemma 5.4]. In that paper, the measure μ* is assumed to be a probability measure, but this assumption is not used in that proof. The second estimate follows easily from Proposition 5.3. □
Lemma 5.4. Let R, n > 0, P1, P2 ∈ Πn, 1 ≤ p, r, s ≤ ∞. If the product assumption holds, then
PROOF. In view of essential compactness, Proposition 5.4 implies that for any P ∈ Πn, 1 ≤ r ≤ ∞, . Therefore, using Schwarz inequality, Parseval identity, and Lemma 5.2, we conclude that
Now, the product assumption implies that for p = 1, ∞, and λk, λj < n, there exists such that for any R > 0,
where c is the constant appearing in (5.20). The convexity inequality
shows that (5.21) is valid for all p, 1 ≤ p ≤ ∞. So, using (5.20), we conclude that
□
6. Local Approximation by Diffusion Polynomials
In the sequel, we write g(t) = h(t) − h(2t), and define τ0(f) = σ1(f) and, for j ≥ 1, τj(f) = σn(f) − σn/2(f) with n = 2j. We note that, with n = 2j,
τj(f)(x) = Σk g(λk/n) f̂(k)ϕk(x),  j ≥ 1.
It is clear from Theorem 5.1 that for any p, 1 ≤ p ≤ ∞,
f = Σj≥0 τj(f),
with convergence in the sense of Lp.
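Continuing the circle sketch given after the definition of Φn and σn in section 3 (so sigma_n and h are as defined there; the target f and the balls below are arbitrary choices), the following illustrates the decomposition and the way in which the decay of τj(f) on a ball reflects the local smoothness there, in the spirit of Theorem 6.1.

```python
import numpy as np

def tau_j(j, f, x, N=4096):
    """tau_j(f) = sigma_{2^j}(f) - sigma_{2^(j-1)}(f) for j >= 1, tau_0(f) = sigma_1(f),
    with sigma_n as in the earlier circle sketch."""
    if j == 0:
        return sigma_n(1, f, x, N)
    return sigma_n(2 ** j, f, x, N) - sigma_n(2 ** (j - 1), f, x, N)

# f is 2*pi-periodic, smooth except for a kink (Lipschitz, not C^1) at t = pi.
f = lambda t: np.abs(np.sin((t - np.pi) / 2.0))
ball_away = np.linspace(0.3, 0.8, 50)                    # well away from the kink
ball_near = np.linspace(np.pi - 0.25, np.pi + 0.25, 50)  # contains the kink
for j in range(2, 7):
    print(j,
          np.max(np.abs(tau_j(j, f, ball_away))),   # decays fast in j
          np.max(np.abs(tau_j(j, f, ball_near))))   # decays only like a fixed power of 2^(-j)
```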
Theorem 6.1. Let 1 ≤ p ≤ ∞, γ > 0, f ∈ Xp, x0 ∈ 𝕏. We assume the partition of unity and the product assumption.
(a) If 𝔹 is a ball centered at x0, then
(b) If there exists a ball B centered at x0 such that
then f ∈ Wγ, p,ϕ0(x0).
(c) If f ∈ Wγ, p(x0), then there exists a ball 𝔹 centered at x0 such that (6.5) holds.
Remark 6.1. In the manifold case (Example 3.1), ϕ0 ≡ 1. So, the statements (b) and (c) in Theorem 6.1 provide necessary and sufficient conditions for f ∈ Wγ, p(x0) in terms of the local rate of convergence of the globally defined operator σn(f) and the growth of the local norms of the operators τj, respectively. In the Hermite case (Example 3.2), it is shown in Mhaskar [49] that f ∈ Wγ, p,ϕ0 if and only if f ∈ Wγ, p. Therefore, the statements (b) and (c) in Theorem 6.1 provide similar necessary and sufficient conditions for f ∈ Wγ, p(x0) in this case as well. □
The proof of Theorem 6.1 is routine, but we sketch a proof for the sake of completeness.
PROOF OF THEOREM 6.1.
Part (a) is easy to prove using the definitions.
In the rest of this proof, we fix S > γ + q + 2. To prove part (b), let ϕ ∈ C∞ be supported on 𝔹. Then there exists such that
Further, Lemma 5.4 yields a sequence such that
Hence,
Thus, fϕ ∈ Wγ, p,ϕ0 for every ϕ ∈ C∞ supported on 𝔹, and part (b) is proved.
To prove part (c), we observe that there exists r > 0 such that for any , fϕ ∈ Wγ, p. Using partition of unity [cf. Proposition 3.1(a)], we find such that ψ(x) = 1 for all x ∈ 𝔹(x0, 2r), and we let 𝔹 = 𝔹(x0, r). In view of Proposition 3.2, for all x ∈ B and y ∈ 𝕏\𝔹(x0, 2r). Hence,
Recalling that ψ(x) = 1 for x ∈ B and S − q ≥ γ + 2, we deduce that
This proves part (c). □
Let {Ψn: 𝕏 × 𝕏 → ℝ} be a family of kernels (not necessarily symmetric). With a slight abuse of notation, we define when possible, for any measure ν with bounded total variation on 𝕏,
and
As usual, we will omit the mention of ν when ν = μ*.
Corollary 6.1. Let the assumptions of Theorem 6.1 hold, and {Ψn:𝕏 × 𝕏 → ℝ} be a sequence of kernels (not necessarily symmetric) with the property that both of the following functions of n are fast decreasing.
(a) If B is a ball centered at x0, then
(b) If there exists a ball B centered at x0 such that
then f ∈ Wγ, p,ϕ0(x0).
(c) If f ∈ Wγ, p(x0), then there exists a ball B centered at x0 such that (6.13) holds.
PROOF. In view of Lemma 5.3, the assumption about the functions in (6.11) implies that ‖σ(Ψn; f) − σn(f)‖p is fast decreasing. □
7. Quadrature Formula
The purpose of this section is to prove the existence of admissible quadrature measures in the general set-up as in this paper. The ideas are mostly developed already in our earlier works [17, 36, 43, 44, 50, 51] but always require an estimate on the gradient of diffusion polynomials. Here, we use the Bernstein-Lipschitz condition (Definition 3.4) instead.
If , we denote
If K is compact and ϵ > 0, a subset of K is called ϵ-distinguishable if ρ(x, y) ≥ ϵ for every pair of distinct points x, y in the subset. The cardinality of a maximal ϵ-distinguishable subset of K will be denoted by Hϵ(K).
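A maximal ϵ-distinguishable subset of a finite point cloud can be produced by a simple greedy pass; the following sketch (an illustration only, with the Euclidean distance standing in for ρ) returns a subset whose points are pairwise at least ϵ apart and to which no further point of the cloud can be added.

```python
import numpy as np

def maximal_eps_distinguishable(points, eps, dist=None):
    """Greedy selection of a maximal eps-distinguishable subset of a finite set."""
    if dist is None:
        dist = lambda a, b: np.linalg.norm(a - b)
    selected = []
    for p in points:
        if all(dist(p, q) >= eps for q in selected):
            selected.append(p)
    return np.array(selected)

# Illustrative usage: a 0.2-distinguishable subset of random points in the unit square.
rng = np.random.default_rng(4)
pts = rng.uniform(size=(2000, 2))
net = maximal_eps_distinguishable(pts, 0.2)
# By maximality, every point of pts is within 0.2 of some selected point.
```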
Remark 7.1. If is a maximal -distinguishable subset of , x ≠ y, then it is easy to deduce that
In particular, by replacing by , we can always assume that
Theorem 7.1. We assume the Bernstein-Lipschitz condition. Let n > 0, be a finite subset, ϵ > 0.
(a) There exists a constant c(ϵ) with the following property: if , then there exist non-negative numbers Wk satisfying
such that for every P ∈ Πn,
(b) Let the assumptions of part (a) be satisfied with ϵ = 1/2. There exist real numbers w1, ⋯, wM such that |wk| ≤ 2Wk, k = 1, ⋯, M, in particular,
and
(c) Let δ > 0, be a random sample from the probability law given by
and ϵn = min(1/n, 1/B2n). If
then the statements (a) and (b) hold with -probability exceeding 1−δ.
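The proof below constructs the weights via Theorem 7.2 and the Hahn-Banach theorem. For orientation only, a common practical route (not the construction used in this paper) is to match the moments of the orthonormal system by least squares; the sketch below does this in the setting of Example 2.1 with q = 1, where the system is {1, √2 cos(k·), √2 sin(k·)} and λk = k.

```python
import numpy as np

def quadrature_weights(nodes, n):
    """Weights w with sum_j w_j P(z_j) = integral of P dmu* for every P in Pi_n on the
    circle, obtained by matching the moments of the orthonormal system (least squares)."""
    ks = np.arange(1, n)
    A = np.vstack([np.ones((1, len(nodes))),
                   np.sqrt(2.0) * np.cos(np.outer(ks, nodes)),
                   np.sqrt(2.0) * np.sin(np.outer(ks, nodes))])
    b = np.zeros(A.shape[0])
    b[0] = 1.0                        # integral of phi_0 is 1; all other moments vanish
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w

# Illustrative usage: 40 scattered nodes, exactness checked on an element of Pi_8.
rng = np.random.default_rng(5)
nodes = np.sort(rng.uniform(0.0, 2.0 * np.pi, size=40))
w = quadrature_weights(nodes, 8)
P = lambda t: 0.3 + np.cos(2.0 * t) - 0.7 * np.sin(5.0 * t)
print(w @ P(nodes), 0.3)              # both equal the integral of P with respect to dmu*
```

The weights obtained this way may be negative and need not satisfy the bounds in (7.3) and (7.5); it is the covering argument in the proof below that provides those controls.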
In order to prove Theorem 7.1, we first recall the following theorem [52, Theorem 5.1], applied to our context. The statement of Mhaskar [52, Theorem 5.1] seems to require that μ* is a probability measure, but this fact is not required in the proof. It is required only that μ*(𝔹(x, r)) ≥ crq for 0 < r ≤ 1.
Theorem 7.2. Let τ be a positive measure supported on a compact subset of 𝕏, ϵ > 0, be a maximal ϵ-distinguishable subset of supp(τ), and . There then exists a subset and a partition of 𝕂 with each of the following properties.
1. (volume property) For , Yy ⊆ 𝔹(y, 18ϵ), , and.
2. (density property) , .
3. (intersection property) Let K1 ⊆ K be a compact subset. Then
PROOF OF THEOREM 7.1 (a), (b).
We observe first that it is enough to prove this theorem for sufficiently large values of n. In view of Proposition 5.3, we may choose n large enough so that for any P ∈ Πn,
In this proof, we will write so that . We use Theorem 7.2 with τ to be the measure associating the mass 1 with each element of , and δ in place of ϵ. If is a maximal δ-distinguished subset of , then we denote in this proof, and observe that . We obtain a partition {Yy} of 𝕂 as in Theorem 7.2. The volume property implies that each Yy contains at least one element of . We construct a subset of by choosing exactly one element of for each y. We may then re-index so that, without loss of generality, for some N ≤ M, and re-index {Yy} as {Yk}, so that zk ∈ Yk, k = 1, ⋯, N. To summarize, we have a subset , and a partition of 𝕂 ⊃ 𝕂2n such that each Yk ⊂ 𝔹(zk, 36δ) and . In particular (cf. (7.7)), for any P ∈ Πn,
We now let , k = 1, ⋯, N, and Wk = 0, k = N + 1, ⋯, M.
The next step is to prove that if δ ≤ c(ϵ) min(1/n, 1/B2n), then
In this part of the proof, the constants denoted by c1, c2, ⋯ will retain their value until (7.9) is proved. Let y ∈ 𝕏. We let r ≥ δ to be chosen later, and write in this proof, , and for j = 0, 1, ⋯, . Since r≥δ, and each Yk ⊂ 𝔹(zk, 36δ), there are at most elements in . Using the Bernstein-Lipschitz condition and the fact that , we deduce that
Next, since , we see that the number of elements in each is ~ (2jr/δ)q. Using Proposition 3.2 and the fact that S > q, we deduce that if r ≥ 1/n, then
Since S > q, we may choose r ~ ϵn such that , and we then require δ ≤ min(r, c7(ϵ)/B2n) so that, in (7.10), . Then (7.10) and (7.11) lead to (7.9). The proof of (7.9) being completed, we resume the constant convention as usual.
Next, we observe that for any P ∈ Πn,
We therefore conclude, using (7.9), that
Together with (7.8), this leads to (7.4). From the definition of , k = 1, ⋯, N, , and . Since Wk = 0 if k ≥ N + 1, we have now proven (7.3), and we have thus completed the proof of part (a).
Having proved part (a), the proof of part (b) is by now a routine application of the Hahn-Banach theorem [cf. [17, 44, 50, 51]]. We apply part (a) with ϵ = 1/2. Continuing the notation in the proof of part (a), we then have
We now equip ℝN with the norm and consider the sampling operator given by , let V be the range of this operator, and define a linear functional x* on V by . The estimate (7.12) shows that the norm of this functional is ≤ 2. The Hahn-Banach theorem yields a norm-preserving extension X* of x* to ℝN, which, in turn, can be identified with a vector (w1, ⋯, wN). We set wk = 0 if k ≥ N + 1. Formula (7.6) then expresses the fact that X* is an extension of x*. The preservation of norms shows that |wk| ≤ 2Wk if k = 1, ⋯, N, and it is clear that for k = N + 1, ⋯, M, |wk| = 0 = Wk. This completes the proof of part (b). □
Part (c) of Theorem 7.1 follows immediately from the first two parts and the following lemma.
Lemma 7.1. Let ν* be a probability measure on 𝕏, 𝕂 ⊂ supp(ν*) be a compact set. Let ϵ, δ ∈ (0, 1], be a maximal ϵ/2-distinguishable subset of K, and . If
and {z1, ⋯, zM} is a random sample from the probability law ν*, then
PROOF. If δ(K, {z1, ⋯, zM}) > ϵ, then there exists at least one such that 𝔹(x, ϵ/2)∩{z1, ⋯, zM} = ∅. For every , . We consider the random variable zj to be equal to 1 if zj ∈ 𝔹(x, ϵ/2) and 0 otherwise. Using (B.2) with t = 1, we see that
Since ,
We set the right-hand side above to δ and solve for M to prove the lemma. □
8. Proofs of the Results in Section 4
We assume the set-up as in section 4. Our first goal is to prove the following theorem.
Theorem 8.1. Let τ, ν*, and f be as described in section 4. We assume the Bernstein-Lipschitz condition. Let 0 < δ < 1. We assume further that for all y ∈ 𝕏, ϵ ∈ Ω. There exist constants c1, c2 such that if , and {(y1, ϵ1), ⋯, (yM, ϵM)} is a random sample from τ, then
In order to prove this theorem, we record an observation. The following lemma is an immediate corollary of the Bernstein-Lipschitz condition and Proposition 5.3.
Lemma 8.1. Let the Bernstein-Lipschitz condition be satisfied. Then for every n > 0 and ϵ > 0, there exists a finite set such that and for any P ∈ Πn,
PROOF OF THEOREM 8.1.
Let x ∈ 𝕏. We consider the random variables
Then in view of (4.2), for every j. Further, Proposition 3.2 shows that for each j, . Using (5.10) with ν* in place of ν, N = n, d = 0, we see that for each j,
Therefore, Bernstein concentration inequality (B.1) implies that for any t ∈ (0, 1),
We now note that Zj, are all in Πn. Taking a finite set as in Lemma 8.1, so that , we deduce that
Then (8.3) leads to
We set the right-hand side above equal to and solve for t to obtain (8.1) (with different values of c, c1, c2). □
Before starting to prove results regarding eignets, we first record the continuity and smoothness of a “smooth kernel” G as defined in Definition 3.10.
Proposition 8.1. If G is a smooth kernel, then (x, y) ↦ W(y)G(x, y) is in . Further, for any p, 1 ≤ p ≤ ∞, and Λ ≥ 1,
In particular, for every x, y ∈ 𝕏, W(○)G(x, ○) and W(y)G(○, y) are in C∞.
PROOF. Let b be the smooth mask corresponding to G. For any S ≥ 1, b(n) ≤ cn−Sb(n/B*) ≤ cn−Sb(0). Thus, b itself is fast decreasing. Next, let r > 0. Then, remembering that B* ≥ 1 and b is non-increasing, we obtain that for S > 0, b(B*Λu) ≤ c(Λu)−S−r−1b(Λu), and
In this proof, let , so that s(t) ≤ ctq, t ≥ 1. If Λ ≥ 1, then, integrating by parts, we deduce (remembering that b is non-increasing) that for any x ∈ 𝕏,
Using Schwarz inequality, we conclude that
In particular, since b is fast decreasing, W(○)G(x, ○) ∈ C0(𝕏) (and in fact, W(y)G(x, y) ∈ C0(𝕏 × 𝕏)) and (8.5) holds with p = ∞. Next, for any j ≥ 0, essential compactness implies that
So, there exists r ≥ q such that
Hence, for any x ∈ 𝕏,
This shows that
In view of the convexity inequality,
(8.8) and (8.10) lead to
In turn, this implies that WG(x, ○) ∈ Lp for all x ∈ 𝕏, and (8.5) holds. □
A fundamental fact that relates the kernels Φn and the pre-fabricated eignets 𝔾n's is the following theorem.
Theorem 8.2. Let G be a smooth kernel and {νn} be an admissible product quadrature measure sequence. Then, for 1 ≤ p ≤ ∞,
is fast decreasing. In particular, for every S > 0
PROOF. Let x ∈ 𝕏. In this proof, we define Pn = Pn, x by , z ∈ 𝕏, and note that . In view of Proposition 8.1, the expansion in (3.18) converges in , so that term-by-term integration can be made to deduce that for y ∈ 𝕏,
By definition, , and, hence, each of the summands in the last expression above is equal to 0. Therefore, recalling that h(λk/n) = 0 if λk > n, we obtain
Since , and is an admissible product quadrature measure of order B*n, this implies that
Therefore, for y ∈ 𝕏,
Using Proposition 8.1 (used with Λ = B*n) and the fact that has polynomial growth, we deduce that
In view of Proposition 5.4 and Proposition 5.2, we see that for any z ∈ 𝕏,
We now conclude from (8.14) that
Since {b(B*n)/b(n)} is fast decreasing, this completes the proof. □
The theorems in section 4 all follow from the following basic theorem.
Theorem 8.3. We assume the strong product assumption and the Bernstein-Lipschitz condition. With the set-up just described, we have
In particular, for f ∈ X∞,
PROOF. Theorems 8.1 and 8.2 together lead to (8.15). Since , the second estimate follows from Theorem 5.1 used with p = ∞. □
PROOF OF THEOREM 4.1.
We observe that with the choice of f0 as in this theorem, . Using 𝔪δ in place of δ, we obtain Theorem 4.1 directly from Theorem 8.3 by some simple calculations. □
PROOF OF THEOREM 4.2.
This follows directly from Theorem 8.3 by choosing . □
PROOF OF THEOREM 4.3.
In view of Theorem 8.3, our assumptions imply that for each j ≥ 0,
Consequently, with probability ≥ 1 − δ, we have for each j ≥ 1,
Hence, the theorem follows from Theorem 6.1. □
Data Availability Statement
The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding author/s.
Author Contributions
The author confirms being the sole contributor of this work and has approved it for publication.
Conflict of Interest
The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Footnotes
1. ^A Hardy multiquadric is a function of the form x ↦ (|x|2 + c2)1/2, c > 0, x ∈ ℝq. It is one of the oft-used functions in the theory and applications of radial basis function networks. For a survey, see the paper [32] of Hardy.
2. ^|ν|−ess supx ∈ 𝕂|f(x)| = inf{t : |ν|({x ∈ 𝕂:|f(x)| > t}) = 0}
References
1. Zhou L, Pan S, Wang J, Vasilakos AV. Machine learning on big data: opportunities and challenges. Neurocomputing. (2017) 237:350–61. doi: 10.1016/j.neucom.2017.01.026
2. Cucker F, Smale S. On the mathematical foundations of learning. Bull Am Math Soc. (2002) 39:1–49. doi: 10.1090/S0273-0979-01-00923-5
3. Cucker F, Zhou DX. Learning Theory: An Approximation Theory Viewpoint, Vol. 24. Cambridge: Cambridge University Press (2007).
4. Girosi F, Poggio T. Networks and the best approximation property. Biol Cybernet. (1990) 63:169–76. doi: 10.1007/BF00195855
5. Chui CK, Donoho DL. Special issue: diffusion maps and wavelets. Appl Comput Harm Anal. (2006) 21:1–2. doi: 10.1016/j.acha.2006.05.005
6. Belkin M, Niyogi P. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Comput. (2003) 15:1373–96. doi: 10.1162/089976603321780317
7. Belkin M, Niyogi P. Towards a theoretical foundation for Laplacian-based manifold methods. J Comput Syst Sci. (2008) 74:1289–308. doi: 10.1016/j.jcss.2007.08.006
8. Belkin M, Niyogi P. Semi-supervised learning on Riemannian manifolds. Mach Learn. (2004) 56:209–39. doi: 10.1023/B:MACH.0000033120.25363.1e
9. Lafon SS. Diffusion maps and geometric harmonics (Ph.D. thesis), Yale University, New Haven, CT, United States (2004).
10. Singer A. From graph to manifold Laplacian: the convergence rate. Appl Comput Harm Anal. (2006) 21:128–34. doi: 10.1016/j.acha.2006.03.004
11. Jones PW, Maggioni M, Schul R. Universal local parametrizations via heat kernels and eigenfunctions of the Laplacian. Ann Acad Sci Fenn Math. (2010) 35:131–74. doi: 10.5186/aasfm.2010.3508
12. Liao W, Maggioni M. Adaptive geometric multiscale approximations for intrinsically low-dimensional data. arXiv. (2016) 1611.01179.
13. Maggioni M, Mhaskar HN. Diffusion polynomial frames on metric measure spaces. Appl Comput Harm Anal. (2008) 24:329–53. doi: 10.1016/j.acha.2007.07.001
14. Mhaskar HN. Eignets for function approximation on manifolds. Appl Comput Harm Anal. (2010) 29:63–87. doi: 10.1016/j.acha.2009.08.006
15. Mhaskar HN. A generalized diffusion frame for parsimonious representation of functions on data defined manifolds. Neural Netw. (2011) 24:345–59. doi: 10.1016/j.neunet.2010.12.007
16. Ehler M, Filbir F, Mhaskar HN. Locally learning biomedical data using diffusion frames. J Comput Biol. (2012) 19:1251–64. doi: 10.1089/cmb.2012.0187
17. Filbir F, Mhaskar HN. Marcinkiewicz-Zygmund measures on manifolds. J Complexity. (2011) 27:568–96. doi: 10.1016/j.jco.2011.03.002
18. Rosasco L, Belkin M, Vito ED. On learning with integral operators. J Mach Learn Res. (2010) 11:905–34.
19. Rudi A, Carratino L, Rosasco L. Falkon: an optimal large scale kernel method. arXiv. (2017) 1705.10958.
21. Mhaskar H, Pereverzyev SV, Semenov VY, Semenova EV. Data based construction of kernels for semi-supervised learning with less labels. Front Appl Math Stat. (2019) 5:21. doi: 10.3389/fams.2019.00021
22. Pereverzyev SV, Tkachenko P. Regularization by the linear functional strategy with multiple kernels. Front Appl Math Stat. (2017) 3:1. doi: 10.3389/fams.2017.00001
23. Fefferman C, Mitter S, Narayanan H. Testing the manifold hypothesis. J Am Math Soc. (2016) 29:983–1049. doi: 10.1090/jams/852
24. Chui CK, Lin S-B, Zhang B, Zhou DX. Realization of spatial sparseness by deep relu nets with massive data. arXiv. (2019) 1912.07464.
25. Guo ZC, Lin SB, Zhou DX. Learning theory of distributed spectral algorithms. Inverse Probl. (2017) 33:074009. doi: 10.1088/1361-6420/aa72b2
26. Lin SB, Wang YG, Zhou DX. Distributed filtered hyperinterpolation for noisy data on the sphere. arXiv. (2019) 1910.02434.
27. Mhaskar HN, Poggio T. Deep vs. shallow networks: an approximation theory perspective. Anal Appl. (2016) 14:829–48. doi: 10.1142/S0219530516400042
29. Mhaskar HN. On the representation of smooth functions on the sphere using finitely many bits. Appl Comput Harm Anal. (2005) 18:215–33. doi: 10.1016/j.acha.2004.11.004
30. Smale S, Rosasco L, Bouvrie J, Caponnetto A, Poggio T. Mathematics of the neural response. Foundat Comput Math. (2010) 10:67–91. doi: 10.1007/s10208-009-9049-1
31. Mhaskar HN. On the representation of band limited functions using finitely many bits. J Complexity. (2002) 18:449–78. doi: 10.1006/jcom.2001.0637
32. Hardy RL. Theory and applications of the multiquadric-biharmonic method 20 years of discovery 1968–1988. Comput Math Appl. (1990) 19:163–208. doi: 10.1016/0898-1221(90)90272-L
34. Mhaskar HN, Narcowich FJ, Ward JD. Approximation properties of zonal function networks using scattered data on the sphere. Adv Comput Math. (1999) 11:121–37. doi: 10.1023/A:1018967708053
35. Timan AF. Theory of Approximation of Functions of a Real Variable: International Series of Monographs on Pure and Applied Mathematics, Vol. 34. New York, NY: Dover Publications (2014).
36. Chui CK, Mhaskar HN. A unified method for super-resolution recovery and real exponential-sum separation. Appl Comput Harmon Anal. (2019) 46:431–51. doi: 10.1016/j.acha.2017.12.007
37. Chui CK, Mhaskar HN. A Fourier-invariant method for locating point-masses and computing their attributes. Appl Comput Harmon Anal. (2018) 45:436–52. doi: 10.1016/j.acha.2017.08.010
38. Mhaskar HN. Introduction to the Theory of Weighted Polynomial Approximation, Vol. 56. Singapore: World Scientific Singapore (1996).
39. Steinerberger S. On the spectral resolution of products of laplacian eigenfunctions. arXiv. (2017) 1711.09826.
40. Lu J, Sogge CD, Steinerberger S. Approximating pointwise products of laplacian eigenfunctions. J Funct Anal. (2019) 277:3271–82. doi: 10.1016/j.jfa.2019.05.025
41. Lu J, Steinerberger S. On pointwise products of elliptic eigenfunctions. arXiv. (2018) 1810.01024.
42. Geller D, Pesenson IZ. Band-limited localized Parseval frames and Besov spaces on compact homogeneous manifolds. J Geometr Anal. (2011) 21:334–71. doi: 10.1007/s12220-010-9150-3
43. Mhaskar HN. Local approximation using Hermite functions. In: N. K. Govil, R. Mohapatra, M. A. Qazi, G. Schmeisser eds. Progress in Approximation Theory and Applicable Complex Analysis. Cham: Springer (2017). p. 341–62. doi: 10.1007/978-3-319-49242-1_16
44. Filbir F, Mhaskar HN. A quadrature formula for diffusion polynomials corresponding to a generalized heat kernel. J Fourier Anal Appl. (2010) 16:629–57. doi: 10.1007/s00041-010-9119-4
45. Mhaskar HN. A unified framework for harmonic analysis of functions on directed graphs and changing data. Appl Comput Harm Anal. (2018) 44:611–44. doi: 10.1016/j.acha.2016.06.007
47. Grigor'yan A. Heat kernels on metric measure spaces with regular volume growth. Handb Geometr Anal. (2010) 2. Available online at: https://www.math.uni-bielefeld.de/~grigor/hga.pdf.
48. Mhaskar HN. Approximate quadrature measures on data-defined spaces. In: Dick J, Kuo FY, Wozniakowski H, editors. Festschrift for the 80th Birthday of Ian Sloan. Berlin: Springer (2017). p. 931–62. doi: 10.1007/978-3-319-72456-0_41
49. Mhaskar HN. On the degree of approximation in multivariate weighted approximation. In: M. D. Buhman, and D. H. Mache, eds. Advanced Problems in Constructive Approximation. Basel: Birkhäuser (2003). p. 129–41. doi: 10.1007/978-3-0348-7600-1_10
50. Mhaskar HN. Approximation theory and neural networks. In: Proceedings of the International Workshop in Wavelet Analysis and Applications. Delhi (1999). p. 247–89.
51. Mhaskar HN, Narcowich FJ, Ward JD. Spherical Marcinkiewicz-Zygmund inequalities and positive quadrature. Math Comput. (2001) 70:1113–30. doi: 10.1090/S0025-5718-00-01240-0
52. Mhaskar HN. Dimension independent bounds for general shallow networks. Neural Netw. (2020) 123:142–52. doi: 10.1016/j.neunet.2019.11.006
53. Hörmander L. The spectral function of an elliptic operator. Acta Math. (1968) 121:193–218. doi: 10.1007/BF02391913
55. Grigor'yan A. Gaussian upper bounds for the heat kernel on arbitrary manifolds. J Diff Geom. (1997) 45:33–52. doi: 10.4310/jdg/1214459753
56. Boucheron S, Lugosi G, Massart P. Concentration Inequalities: A Nonasymptotic Theory of Independence. Oxford: Oxford University Press (2013).
57. Hagerup T, Rüb C. A guided tour of Chernoff bounds. Inform Process Lett. (1990) 33:305–8. doi: 10.1016/0020-0190(90)90214-I
Appendix
A. Gaussian Upper Bound on Manifolds
Let 𝕏 be a compact and connected smooth q-dimensional manifold, g(x) = (gi, j(x)) be its metric tensor, and (gi, j(x)) be the inverse of g(x). The Laplace-Beltrami operator on 𝕏 is defined by
where |g| = det(g). The symbol of Δ is given by
Then a(x, ξ) ≥ c|ξ|2. Therefore, Hörmander's theorem [53, Theorem 4.4], [54, Theorem 16.1] shows that for x ∈ 𝕏,
In turn, [44, Proposition 4.1] implies that
Then [55, Theorem 1.1] shows that (3.3) is satisfied.
B. Probabilistic Estimates
We need the following basic facts from probability theory. Proposition B.1(a) below is a reformulation of Boucheron et al. [56, section 2.1, 2.7]. A proof of Proposition B.1(b) below is given in Hagerup and Rüb [57, Equation (7)].
Proposition B.1. (a) (Bernstein concentration inequality) Let Z1, ⋯, ZM be independent real valued random variables such that for each j = 1, ⋯, M, |Zj| ≤ R, and . Then, for any t > 0,
(b) (Chernoff bound) Let M ≥ 1, 0 ≤ p ≤ 1, and Z1, ⋯, ZM be random variables taking values in {0, 1}, with Prob(Zk = 1) = p. Then for t ∈ (0, 1],
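For orientation, here is a small Monte Carlo check of the Chernoff lower-tail bound in its standard multiplicative form, Prob(Z1 + ⋯ + ZM ≤ (1 − t)pM) ≤ exp(−t2pM/2) for t ∈ (0, 1]; this explicit form is our reading of the elided display, based on the standard statement rather than on [57] directly.

```python
import numpy as np

# Empirical check of the lower-tail Chernoff bound for sums of Bernoulli(p) variables.
rng = np.random.default_rng(6)
M, p, t, trials = 400, 0.3, 0.25, 100_000
sums = rng.binomial(M, p, size=trials)             # each draw is Z_1 + ... + Z_M
empirical = np.mean(sums <= (1.0 - t) * p * M)     # Prob(sum <= (1 - t) p M)
bound = np.exp(-t * t * p * M / 2.0)               # exp(-t^2 p M / 2)
print(empirical, bound)                            # the empirical tail stays below the bound
```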
Keywords: Kernel based approximation, distributed learning, machine learning, inverse problems, probability estimation
Citation: Mhaskar HN (2020) Kernel-Based Analysis of Massive Data. Front. Appl. Math. Stat. 6:30. doi: 10.3389/fams.2020.00030
Received: 29 March 2020; Accepted: 03 July 2020;
Published: 20 October 2020.
Edited by:
Ke Shi, Old Dominion University, United States
Reviewed by:
Jianjun Wang, Southwest University, China
Alex Cloninger, University of California, San Diego, United States
Copyright © 2020 Mhaskar. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Hrushikesh N. Mhaskar, hrushikesh.mhaskar@cgu.edu