
ORIGINAL RESEARCH article

Front. Appl. Math. Stat., 17 May 2018
Sec. Mathematics of Computation and Data Science

Construction of Neural Networks for Realization of Localized Deep Learning

Charles K. Chui1,2, Shao-Bo Lin3* and Ding-Xuan Zhou4

  • 1Department of Mathematics, Hong Kong Baptist University, Kowloon, Hong Kong
  • 2Department of Statistics, Stanford University, Stanford, CA, United States
  • 3Department of Mathematics, Wenzhou University, Wenzhou, China
  • 4Department of Mathematics, City University of Hong Kong, Kowloon, Hong Kong

The subject of deep learning has recently attracted users of machine learning from various disciplines, including medical diagnosis and bioinformatics, financial market analysis and online advertisement, speech and handwriting recognition, computer vision and natural language processing, time series forecasting, and search engines. However, the theoretical development of deep learning is still in its infancy. The objective of this paper is to introduce a deep neural network (also called deep-net) approach to localized manifold learning, with each hidden layer endowed with a specific learning task. For the purpose of illustration, we focus on deep-nets with three hidden layers, with the first layer for dimensionality reduction, the second layer for bias reduction, and the third layer for variance reduction. A feedback component is also designed to deal with outliers. The main theoretical result in this paper is the order $O(m^{-2s/(2s+d)})$ of approximation of the regression function with regularity s, in terms of the number m of sample points, where the (unknown) manifold dimension d replaces the dimension D of the sampling (Euclidean) space required for shallow nets.

1. Introduction

The continuing rapid growth in data acquisition and data updating has posed crucial challenges to the machine learning community in developing learning schemes that match or outperform human learning capability. Fortunately, the introduction of deep learning (see for example [1]) has made it feasible to get around the bottleneck of classical learning strategies, such as the support vector machine and boosting algorithms, based on classical neural networks (see for example [2–5]), by demonstrating remarkable successes in many applications, particularly computer vision [6] and speech recognition [7], and more recently in other areas, including natural language processing, medical diagnosis and bioinformatics, financial market analysis and online advertisement, time series forecasting, and search engines. Furthermore, the exciting recent advances of deep learning schemes in such applications have motivated the current interest in revisiting the development of classical neural networks (called "shallow nets" in later discussions) by allowing multiple hidden layers between the input and output layers. Such neural networks are called "deep" neural nets, or simply deep nets. Indeed, the advantages of deep nets over shallow nets, at least in applications, have led to various popular research directions in the communities of Approximation Theory and Learning Theory. Explicit results on the existence of functions that are expressible by deep nets but cannot be approximated by shallow nets with a comparable number of parameters are generally regarded as strong evidence of the advantage of deep nets in Approximation Theory. The first theoretical understanding of such results dates back to our early work [8], where, by using the Heaviside activation function, it was shown that deep nets with two hidden layers already provide localized approximation, while shallow nets fail to do so. Explicit results on neural network approximation derived in Eldan and Shamir [9], Mhaskar and Poggio [10], Poggio et al. [11], Raghu et al. [12], Shaham et al. [13], and Telgarsky [14] further reveal various advantages of deep nets over shallow nets. For example, the power of depth in approximating hierarchical functions was shown in Mhaskar and Poggio [10] and Poggio et al. [11], and it was demonstrated in Shaham et al. [13] that deep nets can improve the approximation capability of shallow nets when the data are located on a manifold.

From approximation to learning, the tug of war between bias and variance [15] indicates that explicit approximation results for deep nets are insufficient to explain their success in machine learning: besides small bias, the capacity of deep nets must also be controlled so that the variance can be kept in check. In this direction, the capacity of deep nets, as measured by the Betti number, the number of linear regions, and the number of neuron transitions, was studied in Bianchini and Scarselli [16], Montúfar et al. [17], and Raghu et al. [12], respectively, showing that deep nets allow for many more functionalities than shallow nets. Although these results certainly show the benefits of deep nets, they also pose more difficulties in analyzing deep learning performance, since a large capacity usually implies a large variance and requires more elaborate learning algorithms. One of the main difficulties is the development of a satisfactory learning rate analysis for deep nets, a topic that has been well studied for shallow nets (see for example [18]). In this paper, we present an analysis of the advantages of deep nets in the framework of learning theory [15], taking into account the trade-off between bias and variance.

Our starting point is to assume that the samples are located approximately on some unknown manifold in the sample (D-dimensional Euclidean) space. For simplicity, consider a set of sample inputs $x_1, \ldots, x_m \in X \subseteq [-1,1]^D$, with a corresponding set of outputs $y_1, \ldots, y_m \in Y \subseteq [-M, M]$ for some positive number M, where X is an unknown d-dimensional connected $C^\infty$ Riemannian manifold (without boundary). We will call $S_m = \{(x_i, y_i)\}_{i=1}^m$ the sample set, and construct a deep net with three hidden layers, the first for dimensionality reduction, the second for bias reduction, and the third for variance reduction. The main tools for our construction are the "local manifold learning" for deep nets in Chui and Mhaskar [19], the "localized approximation" for deep nets in Chui et al. [8], and the "local average" in Györfy et al. [20]. We will also introduce a feedback procedure to eliminate outliers during the learning process. Our constructions justify the common consensus that deep nets are capable of capturing data features via their architectural structures [21]. In addition, we will prove that the constructed deep net approximates the so-called regression function [15] within an accuracy of $O(m^{-2s/(2s+d)})$ in expectation, where s denotes the order of smoothness (or regularity) of the regression function. Noting that the best existing learning rates for shallow nets are $O(m^{-2s/(2s+D)}\log^2 m)$ in Maiorov [18] and $O(m^{-s/(8s+4d)}(\log m)^{s/(4s+2d)})$ in Ye and Zhou [22], we observe the power of deep nets over shallow nets, at least theoretically, in the framework of Learning Theory.

The organization of this paper is as follows. In the next section, we present a detailed construction of the proposed deep net. The main results of the paper will be stated in section 3, where tight learning rates of the constructed deep net are also deduced. Discussions of our contributions along with comparison with some related work and proofs of the main results will be presented in sections 4 and 5, respectively.

2. Construction of Deep Nets

In this section, we present a construction of deep neural networks with three hidden layers to realize certain deep learning algorithms, by applying the mathematical tools of localized approximation in Chui et al. [8], local manifold learning in Chui and Mhaskar [19], and local average arguments in Györfy et al. [20]. Throughout this paper, we will consider only two activation functions: the Heaviside function $\sigma_0$ and the square rectifier $\sigma_2$, where the standard notation $t_+ = \max\{0, t\}$ is used to define $\sigma_n(t) = t_+^n = (t_+)^n$ for any non-negative integer n.
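For concreteness, here is a minimal Python sketch of these two activations (the helper names are ours, not part of the paper):

```python
import numpy as np

def sigma0(t):
    # Heaviside activation: 1 for t >= 0, 0 for t < 0
    return np.heaviside(t, 1.0)

def sigma2(t):
    # Square rectifier: sigma_2(t) = (max{0, t})^2
    return np.maximum(t, 0.0) ** 2
```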

2.1. Localized Approximation and Localized Manifold Learning

Performance comparison between deep nets and shallow nets is a classical topic in Approximation Theory. It is well known from numerous publications (see for example [8, 9, 12, 14]) that various functions can be well approximated by deep nets but not by any shallow net with the same order of magnitude in the number of neurons. In particular, it was proved in Chui et al. [8] that deep nets can provide localized approximation, while shallow nets fail to do so.

For $r, q \in \mathbb{N}$ and an arbitrary $j = (j^{(\ell)})_{\ell=1}^r \in \mathbb{N}_{2q}^r$, where $\mathbb{N}_{2q}^r := \{1, 2, \ldots, 2q\}^r$, let

$\zeta_j = \zeta_{j,q} = (\zeta_j^{(\ell)})_{\ell=1}^r$  with  $\zeta_j^{(\ell)} = -1 + \frac{2j^{(\ell)} - 1}{2q} \in (-1, 1).$

For a > 0 and $\zeta \in \mathbb{R}^r$, let us denote by $A_{r,a,\zeta} = \zeta + [-\frac{a}{2}, \frac{a}{2}]^r$ the cube in $\mathbb{R}^r$ with center $\zeta$ and width a. Furthermore, we define $N_{1,r,q,\zeta_j} : \mathbb{R}^r \to \mathbb{R}$ by

$N_{1,r,q,\zeta_j}(\xi) = \sigma_0\Big\{\sum_{\ell=1}^{r}\sigma_0\Big[\frac{1}{2q} + \xi^{(\ell)} - \zeta_j^{(\ell)}\Big] + \sum_{\ell=1}^{r}\sigma_0\Big[\frac{1}{2q} - \xi^{(\ell)} + \zeta_j^{(\ell)}\Big] - 2r + \frac{1}{2}\Big\}.$    (1)

In what follows, the standard notation $I_A$ for the indicator function of a set (or an event) A will be used. For $x \in \mathbb{R}$, since

$\sigma_0\Big[\frac{1}{2q} + x\Big] + \sigma_0\Big[\frac{1}{2q} - x\Big] - 2 = I_{[-1/(2q),\, \infty)}(x) + I_{(-\infty,\, 1/(2q)]}(x) - 2 = \begin{cases} 0, & \text{if } x \in [-1/(2q),\, 1/(2q)], \\ -1, & \text{otherwise}, \end{cases}$

we observe that

$\sum_{\ell=1}^{r}\sigma_0\Big[\frac{1}{2q} + \xi^{(\ell)}\Big] + \sum_{\ell=1}^{r}\sigma_0\Big[\frac{1}{2q} - \xi^{(\ell)}\Big] - 2r + \frac{1}{2}\ \begin{cases} = \frac{1}{2}, & \text{for } \xi \in [-1/(2q),\, 1/(2q)]^r, \\ \le -\frac{1}{2}, & \text{otherwise}. \end{cases}$

This implies that $N_{1,r,q,\zeta_j}$ as introduced in (1) is the indicator function of the cube $\zeta_j + [-1/(2q),\, 1/(2q)]^r = A_{r,1/q,\zeta_j}$. Thus, the following proposition, which describes the localized approximation property of $N_{1,r,q,\zeta_j}$, can be easily deduced by applying Theorem 2.3 in Chui et al. [8].

Proposition 1. Let $r, q \in \mathbb{N}$ be arbitrarily given. Then $N_{1,r,q,\zeta_j} = I_{A_{r,1/q,\zeta_j}}$ for all $j \in \mathbb{N}_{2q}^r$.
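Proposition 1 can also be checked numerically. The following Python sketch (our own illustration; all helper names are ours) assembles $N_{1,r,q,\zeta_j}$ as in (1) for one cube and compares it with the corresponding indicator function at random points:

```python
import numpy as np

def sigma0(t):
    # Heaviside activation
    return np.heaviside(t, 1.0)

def N1(xi, zeta, q):
    # Two-layer Heaviside net of (1) with center zeta and half-width 1/(2q)
    r = len(zeta)
    s = np.sum(sigma0(1.0/(2*q) + xi - zeta)) + np.sum(sigma0(1.0/(2*q) - xi + zeta))
    return float(sigma0(s - 2*r + 0.5))

r, q = 2, 4
j = np.array([3, 1])                        # a multi-index in {1, ..., 2q}^r
zeta = -1.0 + (2.0*j - 1.0)/(2*q)           # the cube center zeta_j
rng = np.random.default_rng(0)
for xi in rng.uniform(-1.0, 1.0, size=(1000, r)):
    inside = np.all(np.abs(xi - zeta) <= 1.0/(2*q))
    assert N1(xi, zeta, q) == float(inside)  # N1 agrees with the cube indicator
```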

On the other hand, it was proposed in Basri and Jacobs [23] and DiCarlo and Cox [24], with practical arguments, that deep nets can tackle data on highly-curved manifolds, while shallow nets fail. These arguments were theoretically verified in Chui and Mhaskar [19] and Shaham et al. [13], with the implication that adding hidden layers to shallow nets enables neural networks to process massive data in a high-dimensional space from samples lying on lower dimensional manifolds. More precisely, it follows from do Carmo [25] and Shaham et al. [13] that for a lower d-dimensional connected and compact $C^\infty$ Riemannian submanifold $X \subseteq [-1,1]^D$ (without boundary), isometrically embedded in $\mathbb{R}^D$ and endowed with the geodesic distance $d_G$, there exists some δ > 0 such that for any $x, x' \in X$ with $d_G(x, x') < \delta$,

$\frac{1}{2}\, d_G(x, x') \le \|x - x'\|_D \le 2\, d_G(x, x'),$    (2)

where, for any $r \in \mathbb{N}$, $\|\cdot\|_r$ denotes, as usual, the Euclidean norm of $\mathbb{R}^r$. In the following, let $B_G(\xi_0, \tau)$, $B_D(\xi_0, \tau)$, and $B_d(\xi_0, \tau)$ denote the closed geodesic ball, the D-dimensional Euclidean ball, and the d-dimensional Euclidean ball, with center $\xi_0$ and radius τ > 0, respectively. Noting that $t^2 = \sigma_2(t) + \sigma_2(-t)$, the following proposition is a brief summary of Theorem 2.2 and Remark 2.1 in Chui and Mhaskar [19], with the implication that neural networks can be used as a dimensionality-reduction tool.

Proposition 2. For each $\xi \in X$, there exist a positive number $\delta_\xi$ and a neural network

$\Phi_\xi = (\Phi_\xi^{(\ell)})_{\ell=1}^d : X \to \mathbb{R}^d$

with

$\Phi_\xi^{(\ell)}(x) = \sum_{k=1}^{(D+2)(D+1)} a_{k,\xi,\ell}\, \sigma_2(w_{k,\xi,\ell} \cdot x + b_{k,\xi,\ell}), \qquad w_{k,\xi,\ell} \in \mathbb{R}^D,\ a_{k,\xi,\ell}, b_{k,\xi,\ell} \in \mathbb{R},$    (3)

that maps $B_G(\xi, \delta_\xi)$ diffeomorphically onto $[-1, 1]^d$ and satisfies

$\alpha_\xi\, d_G(x, x') \le \|\Phi_\xi(x) - \Phi_\xi(x')\|_d \le \beta_\xi\, d_G(x, x'), \qquad x, x' \in B_G(\xi, \delta_\xi),$    (4)

for some $\alpha_\xi, \beta_\xi > 0$.

2.2. Learning via Deep Nets

Our construction of deep nets depends on the localized approximation and dimensionality-reduction techniques presented in Propositions 1 and 2. To describe the learning process, first select a suitable $q^*$ so that for every $j \in \mathbb{N}_{2q^*}^D$, there exists some point $\xi_j^*$ in a finite set satisfying

$A_{D,1/q^*,\zeta_{j,q^*}} \cap X \subseteq B_G(\xi_j^*, \delta_{\xi_j^*}).$    (5)

To this end, we need a constant C0 ≥ 1, such that

$d_G(x, x') \le C_0 \|x - x'\|_D, \qquad x, x' \in X.$    (6)

The existence of such a constant is proved in the literature (see for example [22]). Also, in view of the compactness of X, since the collection of geodesic balls $\{x \in X : d_G(x, \xi) < \delta_\xi/2\}$, $\xi \in X$, forms an open covering of X, there exists a finite set of points $\{\xi_i^*\}_{i=1}^{F_X} \subseteq X$ such that $X \subseteq \bigcup_{i=1}^{F_X} B_G(\xi_i^*, \delta_{\xi_i^*}/2)$. Hence, $q^* \in \mathbb{N}$ may be chosen to satisfy

$q^* \ge \frac{2 C_0 \sqrt{D}}{\min_{1 \le i \le F_X} \delta_{\xi_i^*}}.$    (7)

With this choice, we claim that (5) holds. Indeed, if $A_{D,1/q^*,\zeta_{j,q^*}} \cap X = \emptyset$, then (5) obviously holds for any choice of $\xi \in X$. On the other hand, if $A_{D,1/q^*,\zeta_{j,q^*}} \cap X \ne \emptyset$, then from the inclusion $X \subseteq \bigcup_{i=1}^{F_X} B_G(\xi_i^*, \delta_{\xi_i^*}/2)$ it follows that there is some $i^* \in \{1, \ldots, F_X\}$, depending on $j \in \mathbb{N}_{2q^*}^D$, such that

$A_{D,1/q^*,\zeta_{j,q^*}} \cap B_G(\xi_{i^*}^*, \delta_{\xi_{i^*}^*}/2) \ne \emptyset.$    (8)

Next, let $\eta^* \in A_{D,1/q^*,\zeta_{j,q^*}} \cap B_G(\xi_{i^*}^*, \delta_{\xi_{i^*}^*}/2)$. By (6), we have, for any $x \in A_{D,1/q^*,\zeta_{j,q^*}} \cap X$,

$d_G(x, \eta^*) \le C_0 \|x - \eta^*\|_D \le C_0 \sqrt{D}\, \frac{1}{q^*}.$

Therefore, it follows from (7) that

$d_G(x, \xi_{i^*}^*) \le d_G(x, \eta^*) + d_G(\eta^*, \xi_{i^*}^*) \le C_0 \sqrt{D}\, \frac{1}{q^*} + \frac{\delta_{\xi_{i^*}^*}}{2} \le \delta_{\xi_{i^*}^*}.$

This implies that $A_{D,1/q^*,\zeta_{j,q^*}} \cap X \subseteq B_G(\xi_{i^*}^*, \delta_{\xi_{i^*}^*})$ and verifies our claim (5) with the choice $\xi_j^* = \xi_{i^*}^*$.
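As a purely illustrative instance of (7) (the numbers are hypothetical and not from the paper): if $C_0 = 2$, $D = 4$ and $\min_{1 \le i \le F_X}\delta_{\xi_i^*} = 1/10$, then (7) requires $q^* \ge 2 \cdot 2 \cdot \sqrt{4} \cdot 10 = 80$, so the ambient cube $[-1,1]^4$ is partitioned into $(2q^*)^4 = 160^4$ cells $A_{D,1/q^*,\zeta_{j,q^*}}$ of side length $1/q^*$, and each cell's intersection with X fits inside one geodesic ball $B_G(\xi_j^*, \delta_{\xi_j^*})$, as required by (5).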

Observe that for every $j \in \mathbb{N}_{2q^*}^D$ we may choose the point $\xi_j^* \in X$ as above to define $N_{2,j} = (N_{2,j}^{(\ell)})_{\ell=1}^d : X \to \mathbb{R}^d$ by setting

$N_{2,j}^{(\ell)}(x) := \Phi_{\xi_j^*}^{(\ell)}(x) = \sum_{k=1}^{(D+2)(D+1)} a_{k,\xi_j^*,\ell}\, \sigma_2(w_{k,\xi_j^*,\ell} \cdot x + b_{k,\xi_j^*,\ell}), \qquad \ell = 1, \ldots, d,$    (9)

and apply (5) and (3) to obtain the following.

Proposition 3. For each $j \in \mathbb{N}_{2q^*}^D$, $N_{2,j}$ maps $A_{D,1/q^*,\zeta_{j,q^*}} \cap X$ diffeomorphically into $[-1, 1]^d$ and

$\alpha\, d_G(x, x') \le \|N_{2,j}(x) - N_{2,j}(x')\|_d \le \beta\, d_G(x, x'), \qquad x, x' \in A_{D,1/q^*,\zeta_{j,q^*}} \cap X,$    (10)

where $\alpha := \min_{1 \le i \le F_X} \alpha_{\xi_i^*}$ and $\beta := \max_{1 \le i \le F_X} \beta_{\xi_i^*}$.

As a result of Propositions 1 and 3, we now present the construction of the deep nets for the proposed learning purpose. Start with selecting $(2n)^d$ points $t_k = t_{k,n} \in (-1,1)^d$, $k \in \mathbb{N}_{2n}^d$, $n \in \mathbb{N}$, with $t_k = (t_k^{(1)}, \ldots, t_k^{(d)})$ and $t_k^{(\ell)} = -1 + \frac{2k^{(\ell)} - 1}{2n} \in (-1, 1)$. Denote $C_k = A_{d,1/n,t_k}$ and $H_{k,j} = \{x \in X \cap A_{D,1/q^*,\zeta_{j,q^*}} : N_{2,j}(x) \in C_k\}$. In view of Proposition 3, $H_{k,j}$ is well defined, $X \subseteq \bigcup_{j \in \mathbb{N}_{2q^*}^D} A_{D,1/q^*,\zeta_{j,q^*}}$, and $\bigcup_{k \in \mathbb{N}_{2n}^d} H_{k,j} = X \cap A_{D,1/q^*,\zeta_{j,q^*}}$. We also define $N_{3,k,j} : X \to \mathbb{R}$ by

$N_{3,k,j}(x) = N_{1,d,n,t_k} \circ N_{2,j}(x) = \sigma_0\Big\{\sum_{\ell=1}^{d}\sigma_0\Big[\frac{1}{2n} + N_{2,j}^{(\ell)}(x) - t_k^{(\ell)}\Big] + \sum_{\ell=1}^{d}\sigma_0\Big[\frac{1}{2n} - N_{2,j}^{(\ell)}(x) + t_k^{(\ell)}\Big] - 2d + \frac{1}{2}\Big\}.$    (11)

Then the desired deep net estimator with three hidden layers may be defined by

$N_3(x) = \frac{\sum_{j \in \mathbb{N}_{2q^*}^D}\sum_{k \in \mathbb{N}_{2n}^d}\sum_{i=1}^m N_{1,D,q^*,\zeta_j}(x_i)\, N_{3,k,j}(x_i)\, y_i\, N_{3,k,j}(x)}{\sum_{j \in \mathbb{N}_{2q^*}^D}\sum_{k \in \mathbb{N}_{2n}^d}\sum_{i=1}^m N_{1,D,q^*,\zeta_j}(x_i)\, N_{3,k,j}(x_i)},$    (12)

where we set N3(x) = 0 if the denominator is zero.

For a d-dimensional submanifold X and an $x \in A_{D,1/q^*,\zeta_{j,q^*}}$, it is clear from (9) that the task of the first hidden layer, $N_{2,j}(x)$, is to map X into $[-1, 1]^d$. The second hidden layer then locates $N_{2,j}(x)$ within $[-1, 1]^d$: it follows from (11) that a large value of the parameter n narrows down the small region containing x, thereby reducing the bias. Furthermore, observe that $N_3(x)$ in (12) is a local average based on $N_{3,k,j}(x)$ and the small region containing x; this is a standard local averaging strategy for reducing the variance in statistics [20]. In summary, the above construction involves three hidden layers performing three separate tasks: the first hidden layer reduces the dimension of the input space, while, by applying local averaging [20], the second and third hidden layers reduce the bias and the variance, respectively.
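To make the structure of (12) concrete, here is a minimal Python sketch (our own illustration, not the authors' implementation). It assumes the chart networks $N_{2,j}$ of (9) are available as callables `charts[j]` mapping inputs into $\mathbb{R}^d$; Proposition 2 guarantees their existence but does not supply their coefficients, so in practice they would have to be constructed separately. All function and variable names are ours.

```python
import numpy as np
from itertools import product

def sigma0(t):
    # Heaviside activation
    return np.heaviside(t, 1.0)

def heaviside_cube(u, center, half_width):
    # The net of (1): equals 1 iff u lies in center + [-half_width, half_width]^r
    s = np.sum(sigma0(half_width + u - center)) + np.sum(sigma0(half_width - u + center))
    return float(sigma0(s - 2 * len(center) + 0.5))

def N3_estimator(x, X_samples, y_samples, charts, zetas, q_star, n, d):
    """Deep net estimator of (12); zetas[j] is the center of A_{D,1/q*,zeta_j}."""
    # Explicit enumeration over all cube centers and grid points mirrors (12)
    # literally; it is exponential in D and d and intended for illustration only.
    grid_1d = [-1.0 + (2 * k - 1.0) / (2 * n) for k in range(1, 2 * n + 1)]
    t_centers = [np.array(t) for t in product(grid_1d, repeat=d)]       # the points t_k
    y_samples = np.asarray(y_samples, dtype=float)
    num, den = 0.0, 0.0
    for j, zeta in enumerate(zetas):
        in_cube = np.array([heaviside_cube(xi, zeta, 1.0 / (2 * q_star)) for xi in X_samples])
        proj_samples = [charts[j](xi) for xi in X_samples]              # N_{2,j}(x_i)
        proj_x = charts[j](x)                                           # N_{2,j}(x)
        for t in t_centers:
            w_samples = np.array([heaviside_cube(p, t, 1.0 / (2 * n)) for p in proj_samples])
            w_x = heaviside_cube(proj_x, t, 1.0 / (2 * n))              # N_{3,k,j}(x)
            num += np.sum(in_cube * w_samples * y_samples) * w_x        # numerator of (12)
            den += np.sum(in_cube * w_samples)                          # denominator of (12): no w_x factor
    return num / den if den > 0 else 0.0                                # convention below (12)
```

Note that, exactly as in (12), the denominator does not carry the factor $N_{3,k,j}(x)$; the fine-tuning step of section 2.3 corrects the resulting mismatch.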

2.3. Fine-Tuning

For each $x \in X$, it follows from $X \subseteq \bigcup_{j \in \mathbb{N}_{2q^*}^D} A_{D,1/q^*,\zeta_{j,q^*}}$ that there is some $j \in \mathbb{N}_{2q^*}^D$ such that $x \in A_{D,1/q^*,\zeta_{j,q^*}}$, which implies that $N_{2,j}(x) \in [-1, 1]^d$. Since each $A_{D,1/q^*,\zeta_{j,q^*}}$ is a cube in $\mathbb{R}^D$, the cardinality of the set $\{j : x \in A_{D,1/q^*,\zeta_{j,q^*}}\}$ is at most $2^D$. Also, because $[-1, 1]^d = \bigcup_{k \in \mathbb{N}_{2n}^d} A_{d,1/n,t_k}$, for each such j there exists some $k \in \mathbb{N}_{2n}^d$ such that $N_{2,j}(x) \in A_{d,1/n,t_k}$, implying that $N_{3,k,j}(x) = N_{1,d,n,t_k} \circ N_{2,j}(x) = 1$, and the number of such integers k is bounded by $2^d$. For each $x \in X$, we therefore consider the non-empty subset

$\Lambda_x = \{(j,k) \in \mathbb{N}_{2q^*}^D \times \mathbb{N}_{2n}^d : x \in A_{D,1/q^*,\zeta_{j,q^*}},\ N_{3,k,j}(x) = 1\}$    (13)

of $\mathbb{N}_{2q^*}^D \times \mathbb{N}_{2n}^d$, with cardinality

$|\Lambda_x| \le 2^{D+d}, \qquad x \in X.$    (14)

Also, for each $x \in X$, we further define $S_{\Lambda_x} = \bigcup_{(j,k) \in \Lambda_x} H_{k,j} \cap \{x_i\}_{i=1}^m$, as well as

$\Lambda_{x,S} = \{(j,k) \in \mathbb{N}_{2q^*}^D \times \mathbb{N}_{2n}^d : N_{1,D,q^*,\zeta_j}(x_i)\, N_{3,k,j}(x_i) = 1,\ x_i \in S_{\Lambda_x}\},$    (15)

and

$\Lambda'_{x,S} = \{(j,k) \in \mathbb{N}_{2q^*}^D \times \mathbb{N}_{2n}^d : N_{1,D,q^*,\zeta_j}(x_i)\, N_{3,k,j}(x_i)\, N_{3,k,j}(x) = 1,\ x_i \in S_{\Lambda_x}\}.$    (16)

Then it follows from (15) and (16) that $|\Lambda'_{x,S}| \le |\Lambda_{x,S}|$, and it is easy to see that if each $x_i \in S_{\Lambda_x}$ is an interior point of some $H_{k,j}$, then $|\Lambda'_{x,S}| = |\Lambda_{x,S}|$. In this way, $N_3$ is a local average estimator. However, if $|\Lambda'_{x,S}| < |\Lambda_{x,S}|$ (and this is possible when some $x_i$ lies on the boundary of $H_{k,j}$ for some $(j,k) \in \mathbb{N}_{2q^*}^D \times \mathbb{N}_{2n}^d$), then the estimator $N_3$ in (12) might perform badly, and this happens even for training data. Note that to predict at some $x_j \in S_m$ which is an interior point of $H_{k_0,j_0}$, we have

$N_3(x_j) = \frac{\sum_{i=1}^m N_{1,D,q^*,\zeta_{j_0}}(x_i)\, N_{3,k_0,j_0}(x_i)\, y_i}{|\Lambda_{x_j,S}|},$

which might be far away from $y_j$ when $|\Lambda'_{x_j,S}| < |\Lambda_{x_j,S}|$, since there are only $|\Lambda'_{x_j,S}|$ summands in the numerator. Noting that the Riemannian measure of the boundary of $\bigcup_{(j,k) \in \mathbb{N}_{2q^*}^D \times \mathbb{N}_{2n}^d} H_{k,j}$ is zero, we regard the above phenomenon as an effect of outliers.

Fine-tuning, often referred to as feedback in the deep learning literature [21], can essentially improve the learning performance of deep nets [26]. We observe that fine-tuning can also be applied to handle outliers for our constructed deep net in (12), by counting the cardinalities of $\Lambda_{x,S}$ and $\Lambda'_{x,S}$. In the training process, besides computing $N_3(x)$ for a query point x, we may also record $|\Lambda_{x,S}|$ and $|\Lambda'_{x,S}|$. If the estimator is not large enough, we propose to multiply $N_3(x)$ by the factor $|\Lambda_{x,S}| / |\Lambda'_{x,S}|$. In this way, the deep net estimator with feedback can be represented mathematically by

$N_3^F(x) = \frac{|\Lambda_{x,S}|}{|\Lambda'_{x,S}|}\, N_3(x) = \frac{\sum_{j \in \mathbb{N}_{2q^*}^D}\sum_{k \in \mathbb{N}_{2n}^d}\sum_{i=1}^m y_i\, \Phi_{k,j}(x, x_i)}{\sum_{j \in \mathbb{N}_{2q^*}^D}\sum_{k \in \mathbb{N}_{2n}^d}\sum_{i=1}^m \Phi_{k,j}(x, x_i)},$    (17)

where $\Phi_{k,j} = \Phi_{k,j,D,q^*,n} : X \times X \to \mathbb{R}$ is defined by

$\Phi_{k,j}(x, u) = N_{1,D,q^*,\zeta_j}(u)\, N_{3,k,j}(u)\, N_{3,k,j}(x);$

and, as before, we set $N_3^F(x) = 0$ if the denominator $\sum_{j \in \mathbb{N}_{2q^*}^D}\sum_{k \in \mathbb{N}_{2n}^d}\sum_{i=1}^m \Phi_{k,j}(x, x_i)$ vanishes.
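In code, the feedback of (17) amounts to a single change relative to the sketch of $N_3$ given at the end of section 2.2: the factor $N_{3,k,j}(x)$ now appears in the denominator as well, so that both sums run over $\Phi_{k,j}(x, x_i)$. The following sketch is again our own illustration under the same assumptions (given chart callables, hypothetical names):

```python
import numpy as np
from itertools import product

def sigma0(t):
    return np.heaviside(t, 1.0)

def heaviside_cube(u, center, half_width):
    s = np.sum(sigma0(half_width + u - center)) + np.sum(sigma0(half_width - u + center))
    return float(sigma0(s - 2 * len(center) + 0.5))

def N3F_estimator(x, X_samples, y_samples, charts, zetas, q_star, n, d):
    """Fine-tuned deep net estimator of (17)."""
    grid_1d = [-1.0 + (2 * k - 1.0) / (2 * n) for k in range(1, 2 * n + 1)]
    t_centers = [np.array(t) for t in product(grid_1d, repeat=d)]
    y_samples = np.asarray(y_samples, dtype=float)
    num, den = 0.0, 0.0
    for j, zeta in enumerate(zetas):
        in_cube = np.array([heaviside_cube(xi, zeta, 1.0 / (2 * q_star)) for xi in X_samples])
        proj_samples = [charts[j](xi) for xi in X_samples]
        proj_x = charts[j](x)
        for t in t_centers:
            w_samples = np.array([heaviside_cube(p, t, 1.0 / (2 * n)) for p in proj_samples])
            w_x = heaviside_cube(proj_x, t, 1.0 / (2 * n))
            phi = in_cube * w_samples * w_x          # Phi_{k,j}(x, x_i)
            num += np.sum(phi * y_samples)
            den += np.sum(phi)
    return num / den if den > 0 else 0.0             # N_3^F(x) = 0 when the denominator vanishes
```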

3. Learning Rate Analysis

We consider a standard least squares regression setting in learning theory [15] and assume that the sample set $S = S_m = \{(x_i, y_i)\}_{i=1}^m$ of size m is drawn independently according to some Borel probability measure ρ on $Z = X \times Y$. The regression function is then defined by

$f_\rho(x) = \int_Y y\, d\rho(y|x), \qquad x \in X,$

where ρ(y|x) denotes the conditional distribution at x induced by ρ. Let $\rho_X$ be the marginal distribution of ρ on X and $(L^2_{\rho_X}, \|\cdot\|_\rho)$ the Hilbert space of square-integrable functions with respect to $\rho_X$ on X. Our goal is to estimate the distance between the output function $N_3$ and the regression function $f_\rho$, measured by $\|N_3 - f_\rho\|_\rho$, as well as the distance between $N_3^F$ and $f_\rho$.

We say that a function f on X is (s, c0)-Lipschitz (continuous) with positive exponent s ≤ 1 and constant c0 > 0, if

$|f(x) - f(x')| \le c_0 (d_G(x, x'))^s, \qquad x, x' \in X;$    (18)

and denote by $\mathrm{Lip}^{(s,c_0)} = \mathrm{Lip}^{(s,c_0)}(X)$ the family of all (s, c0)-Lipschitz functions that satisfy (18). Our error analysis of $N_3$ will be carried out based on the following two assumptions.

Assumption 1. There exist an $s \in (0, 1]$ and a constant $c_0 \in \mathbb{R}_+$ such that $f_\rho \in \mathrm{Lip}^{(s,c_0)}$.

This smoothness assumption is standard in learning theory for regression functions (see for example [15, 18, 20, 27–35]).

Assumption 2. ρX is continuous with respect to the geodesic distance dG of the Riemannian manifold.

Note that Assumption 2, which concerns the geometric structure of $\rho_X$, is slightly weaker than the distortion assumption in Shi [36] and Zhou and Jetter [37], but similar to the assumption considered in Meister and Steinwart [38]. Its role here is to delineate the functionality of fine-tuning.

We are now ready to state the main results of this paper. In the first theorem below, we obtain a learning rate for the constructed deep nets N3.

Theorem 1. Let m be the number of samples and set $n = \lceil m^{1/(2s+d)} \rceil$, where $1/n$ is the uniform spacing of the points $t_k = t_{k,n} \in (-1, 1)^d$ in the definition (11) entering $N_3$ in (12). Then, under Assumptions 1 and 2,

$E[\|N_3 - f_\rho\|_\rho^2] \le C_1 m^{-\frac{2s}{2s+d}}$    (19)

for some positive constant C1 independent of m.

Observe that Theorem 1 provides a fast learning rate for the constructed deep net which depends on the manifold dimension d instead of the sample space dimension D. In the second theorem below, we show the necessity of the fine-tuning process as presented in (17), when Assumption 2 is removed.
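As a purely illustrative comparison (the numbers are hypothetical): for a Lipschitz regression function (s = 1) on a d = 3 dimensional manifold embedded in a D = 100 dimensional sample space, the bound (19) decays like $m^{-2/5}$, whereas the shallow-net rate $O(m^{-2s/(2s+D)}\log^2 m)$ of Maiorov [18] quoted in the introduction decays only like $m^{-1/51}\log^2 m$; the constructed deep net thus pays for the intrinsic dimension d rather than the ambient dimension D.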

Theorem 2. Let m be the number of samples and set $n = \lceil m^{1/(2s+d)} \rceil$, where $1/n$ is the uniform spacing of the points $t_k = t_{k,n} \in (-1, 1)^d$ in the definition (11), which is used to define $N_3^F$ in (17). Then, under Assumption 1,

$E[\|N_3^F - f_\rho\|_\rho^2] \le C_2 m^{-\frac{2s}{2s+d}}$    (20)

for some positive constant C2 independent of m.

Observe that while Assumption 2 is needed in Theorem 1, it is not necessary for the validity of Theorem 2, which theoretically shows the significance of fine-tuning in our construction. The proofs of these two theorems will be presented in the final section of this paper.

4. Related Work and Discussions

The success in practical applications, especially in the fields of computer vision [6] and speech recognition [7], has triggered enormous research activities on deep learning. Several other encouraging results, such as object recognition [24], unsupervised training [39], and artificial intelligence architecture [21], have been obtained to demonstrate further the significance of deep learning. We refer the interested readers to the 2016 MIT monograph, "Deep Learning" [40], by Goodfellow, Bengio and Courville, for further study of this exciting subject, which is still in the infancy of its development.

Indeed, deep learning has already created several challenges for the machine learning community. Among the main challenges are to show the necessity of using deep nets and to justify theoretically the advantages of deep nets over shallow nets. This is essentially a classical topic in Approximation Theory. In particular, dating back to the early 1990's, it was already proved that deep nets can provide localized approximation whereas shallow nets fail to do so (see for example [8]). Furthermore, it was also shown that deep nets provide high approximation orders that are not restricted by the lower error bounds for shallow nets (see [41, 42]). More recently, stimulated by the enthusiasm for deep learning, numerous advantages of deep nets were revealed from the point of view of function approximation. In particular, certain functions discussed in Eldan and Shamir [9] can be represented by deep nets but cannot be approximated by shallow nets with a polynomially increasing number of neurons; it was shown in Mhaskar and Poggio [10] that deep nets, but not shallow nets, can efficiently approximate functions composed of bivariate ones; it was exhibited in Poggio et al. [11] that deep nets can avoid the curse of dimensionality suffered by shallow nets; a probability argument was given in Lin [43] to show that deep nets have better approximation performance than shallow nets with high confidence; and it was demonstrated in Chui and Mhaskar [19] and Shaham et al. [13] that deep nets can improve the approximation capability of shallow nets when the data are located on data-dependent manifolds. All of these results give theoretical explanations of the significance of deep nets from the Approximation Theory point of view.

As a departure from the work mentioned above, our present paper is devoted to exploring the better performance of deep nets over shallow nets in the framework of Learning Theory. In particular, we are concerned not only with the approximation accuracy but also with the cost of attaining such accuracy. In this regard, learning rates of certain deep nets have been analyzed in Kohler and Krzyżak [32], where near-optimal learning rates are provided for a fairly complex regularization scheme, with the hypothesis space being the family of deep nets with two hidden layers proposed in Mhaskar [44]. More precisely, they derived a learning rate of order $O(m^{-2s/(2s+D)}(\log m)^{4s/(2s+D)})$ for functions $f_\rho \in \mathrm{Lip}^{(s,c_0)}$. This is close to the optimal learning rate for shallow nets in Maiorov [18], differing only by a logarithmic factor. Hence, the study in Kohler and Krzyżak [32] theoretically shows that deep nets at least do not degrade the learning performance of shallow nets. In comparison with Kohler and Krzyżak [32], our study is focused on answering the question: "What is to be gained by deep learning?" The deep net constructed in our paper possesses a learning rate of order $O(m^{-2s/(2s+d)})$ when X is an unknown d-dimensional connected $C^\infty$ Riemannian manifold (without boundary). This rate is the same as the optimal learning rate [20, Chapter 3] for the special case of the cube $X = [-1, 1]^d$ under a similar condition, and it is better than the optimal learning rates for shallow nets [18]. Another line of related work is Ye and Zhou [22, 45], where learning rates were deduced for regularized least squares over shallow nets in the same setting as our paper. They derived a learning rate of $O(m^{-s/(8s+4d)}(\log m)^{s/(4s+2d)})$, which is worse than the rate established in our paper. It should be mentioned that in the more recent work of Kohler and Krzyżak [46], some advantages of deep nets are revealed from the learning theory viewpoint. However, the results in Kohler and Krzyżak [46] require a hierarchical interaction structure, which is quite different from the setting of our present paper.

Due to the high degrees of freedom of deep nets, the number and variety of parameters in deep nets are much larger than those of shallow nets. Thus, it would be of great interest to develop scalable algorithms to reduce the computational burden of deep learning. Distributed learning based on a divide-and-conquer strategy [47, 48] could be a fruitful approach for this purpose. It would also be of interest to establish results similar to Theorems 1 and 2 for deep nets with rectifier neurons, using the rectifier (or ramp) function $\sigma_1(t) = t_+$ as activation, since the rectifier is one of the most widely used activations in the deep learning literature. Our research in these directions is postponed to a later work.

5. Proofs of the Main Results

To facilitate our proofs of the theorems stated in section 3, we first establish the following two lemmas.

Observe from Proposition 1 and the definition (11) of the function $N_{3,k,j}$ that

$N_{1,D,q^*,\zeta_j}(x)\, N_{3,k,j}(x) = I_{A_{D,1/q^*,\zeta_j}}(x)\, I_{A_{d,1/n,t_k}}(N_{2,j}(x)) = I_{H_{k,j}}(x).$    (21)

For $j \in \mathbb{N}_{2q^*}^D$ and $k \in \mathbb{N}_{2n}^d$, define a random function $T_{k,j} : Z^m \to \mathbb{R}$ in terms of the random sample $S = \{(x_i, y_i)\}_{i=1}^m$ by

$T_{k,j}(S) = \sum_{i=1}^m N_{1,D,q^*,\zeta_j}(x_i)\, N_{3,k,j}(x_i),$    (22)

so that

$T_{k,j}(S) = \sum_{i=1}^m I_{H_{k,j}}(x_i).$    (23)

Lemma 1. Let $\Lambda^* \subseteq \mathbb{N}_{2q^*}^D \times \mathbb{N}_{2n}^d$ be a non-empty subset and let $T_{k,j}(S)$, $(j,k) \in \Lambda^*$, be defined as in (22). Then

$E_S\Bigg[\frac{I_{\{z \in Z^m :\, \sum_{(j,k) \in \Lambda^*} T_{k,j}(z) > 0\}}(S)}{\sum_{(j,k) \in \Lambda^*} T_{k,j}(S)}\Bigg] \le \frac{2}{(m+1)\, \rho_X\big(\bigcup_{(j,k) \in \Lambda^*} H_{k,j}\big)},$    (24)

where, if $\sum_{(j,k) \in \Lambda^*} T_{k,j}(S) = 0$, we set

$\frac{I_{\{z \in Z^m :\, \sum_{(j,k) \in \Lambda^*} T_{k,j}(z) > 0\}}(S)}{\sum_{(j,k) \in \Lambda^*} T_{k,j}(S)} = 0.$

Proof. Observe from (23) that $T_{k,j}(S) \in \{0, 1, \ldots, m\}$ and

$E_S\Bigg[\frac{I_{\{z \in Z^m :\, \sum_{(j,k) \in \Lambda^*} T_{k,j}(z) > 0\}}(S)}{\sum_{(j,k) \in \Lambda^*} T_{k,j}(S)}\Bigg] = \sum_{\ell=0}^m E_S\Bigg[\frac{I_{\{z \in Z^m :\, \sum_{(j,k) \in \Lambda^*} T_{k,j}(z) > 0\}}(S)}{\sum_{(j,k) \in \Lambda^*} T_{k,j}(S)} \,\Bigg|\, \sum_{(j,k) \in \Lambda^*} T_{k,j}(S) = \ell\Bigg] \Pr\Big[\sum_{(j,k) \in \Lambda^*} T_{k,j}(S) = \ell\Big].$

By the definition of the fraction $I_{\{z \in Z^m :\, \sum_{(j,k) \in \Lambda^*} T_{k,j}(z) > 0\}}(S)\big/\sum_{(j,k) \in \Lambda^*} T_{k,j}(S)$, the term with $\ell = 0$ above vanishes, so

$E_S\Bigg[\frac{I_{\{z \in Z^m :\, \sum_{(j,k) \in \Lambda^*} T_{k,j}(z) > 0\}}(S)}{\sum_{(j,k) \in \Lambda^*} T_{k,j}(S)}\Bigg] = \sum_{\ell=1}^m E\Big[\frac{1}{\ell} \,\Big|\, \sum_{(j,k) \in \Lambda^*} T_{k,j}(S) = \ell\Big] \Pr\Big[\sum_{(j,k) \in \Lambda^*} T_{k,j}(S) = \ell\Big] = \sum_{\ell=1}^m \frac{1}{\ell}\, \Pr\Big[\sum_{(j,k) \in \Lambda^*} T_{k,j}(S) = \ell\Big].$

On the other hand, note from (23) that $\sum_{(j,k) \in \Lambda^*} T_{k,j}(S) = \ell$ is equivalent to $x_i \in \bigcup_{(j,k) \in \Lambda^*} H_{k,j}$ for exactly $\ell$ indices i from $\{1, \ldots, m\}$, which in turn implies that

$\Pr\Big[\sum_{(j,k) \in \Lambda^*} T_{k,j}(S) = \ell\Big] = \binom{m}{\ell}\Big[\rho_X\Big(\bigcup_{(j,k) \in \Lambda^*} H_{k,j}\Big)\Big]^{\ell}\Big[1 - \rho_X\Big(\bigcup_{(j,k) \in \Lambda^*} H_{k,j}\Big)\Big]^{m-\ell}.$

Thus, we obtain

$E_S\Bigg[\frac{I_{\{z \in Z^m :\, \sum_{(j,k) \in \Lambda^*} T_{k,j}(z) > 0\}}(S)}{\sum_{(j,k) \in \Lambda^*} T_{k,j}(S)}\Bigg] = \sum_{\ell=1}^m \frac{1}{\ell}\binom{m}{\ell}\Big[\rho_X\Big(\bigcup_{(j,k) \in \Lambda^*} H_{k,j}\Big)\Big]^{\ell}\Big[1 - \rho_X\Big(\bigcup_{(j,k) \in \Lambda^*} H_{k,j}\Big)\Big]^{m-\ell} \le \sum_{\ell=1}^m \frac{2}{\ell+1}\binom{m}{\ell}\Big[\rho_X\Big(\bigcup_{(j,k) \in \Lambda^*} H_{k,j}\Big)\Big]^{\ell}\Big[1 - \rho_X\Big(\bigcup_{(j,k) \in \Lambda^*} H_{k,j}\Big)\Big]^{m-\ell} = \frac{2}{(m+1)\,\rho_X\big(\bigcup_{(j,k) \in \Lambda^*} H_{k,j}\big)} \sum_{\ell=1}^m \binom{m+1}{\ell+1}\Big[\rho_X\Big(\bigcup_{(j,k) \in \Lambda^*} H_{k,j}\Big)\Big]^{\ell+1}\Big[1 - \rho_X\Big(\bigcup_{(j,k) \in \Lambda^*} H_{k,j}\Big)\Big]^{m-\ell}.$

Since the last sum is bounded by the binomial expansion $\sum_{\ell'=0}^{m+1}\binom{m+1}{\ell'}\big[\rho_X\big(\bigcup_{(j,k)\in\Lambda^*} H_{k,j}\big)\big]^{\ell'}\big[1 - \rho_X\big(\bigcup_{(j,k)\in\Lambda^*} H_{k,j}\big)\big]^{m+1-\ell'} = 1$, the desired inequality (24) follows. This completes the proof of Lemma 1. □

Lemma 2. Let $S = \{(x_i, y_i)\}_{i=1}^m$ be a sample set drawn independently according to ρ. If $f_S(x) = \sum_{i=1}^m y_i\, h_{\mathbf{x}}(x, x_i)$ with a measurable function $h_{\mathbf{x}} : X \times X \to \mathbb{R}$ that depends on $\mathbf{x} := \{x_i\}_{i=1}^m$, then

$E[\|f_S - f_\rho\|_\mu^2 \,|\, \mathbf{x}] = E\Big[\big\|f_S - \sum_{i=1}^m f_\rho(x_i)\, h_{\mathbf{x}}(\cdot, x_i)\big\|_\mu^2 \,\Big|\, \mathbf{x}\Big] + \Big\|\sum_{i=1}^m f_\rho(x_i)\, h_{\mathbf{x}}(\cdot, x_i) - f_\rho\Big\|_\mu^2$    (25)

for any Borel probability measure μ on X.

Proof. Since $f_\rho(x)$ is the conditional mean of y given $x \in X$, we have from $f_S(x) = \sum_{i=1}^m y_i\, h_{\mathbf{x}}(x, x_i)$ that $E[f_S \,|\, \mathbf{x}] = \sum_{i=1}^m f_\rho(x_i)\, h_{\mathbf{x}}(\cdot, x_i)$. Hence,

$E\Big[\Big\langle f_S - \sum_{i=1}^m f_\rho(x_i)\, h_{\mathbf{x}}(\cdot, x_i),\ \sum_{i=1}^m f_\rho(x_i)\, h_{\mathbf{x}}(\cdot, x_i) - f_\rho\Big\rangle_\mu \,\Big|\, \mathbf{x}\Big] = \Big\langle E[f_S \,|\, \mathbf{x}] - \sum_{i=1}^m f_\rho(x_i)\, h_{\mathbf{x}}(\cdot, x_i),\ \sum_{i=1}^m f_\rho(x_i)\, h_{\mathbf{x}}(\cdot, x_i) - f_\rho\Big\rangle_\mu = 0.$

Thus, along with the inner-product expression

$\|f_S - f_\rho\|_\mu^2 = \Big\|f_S - \sum_{i=1}^m f_\rho(x_i)\, h_{\mathbf{x}}(\cdot, x_i)\Big\|_\mu^2 + \Big\|\sum_{i=1}^m f_\rho(x_i)\, h_{\mathbf{x}}(\cdot, x_i) - f_\rho\Big\|_\mu^2 + 2\Big\langle f_S - \sum_{i=1}^m f_\rho(x_i)\, h_{\mathbf{x}}(\cdot, x_i),\ \sum_{i=1}^m f_\rho(x_i)\, h_{\mathbf{x}}(\cdot, x_i) - f_\rho\Big\rangle_\mu,$

the above equality yields the desired result (25). This completes the proof of Lemma 2. □

We are now ready to prove the two main results of the paper.

Proof of Theorem 1. We divide the proof into four steps, namely: error decomposition, sampling error estimation, approximation error estimation, and learning rate deduction.

Step 1: Error decomposition. Let $\dot{H}_{k,j}$ denote the set of interior points of $H_{k,j}$. For arbitrarily fixed $k', j'$ and $x \in \dot{H}_{k',j'}$, it follows from (21) that

$\sum_{j \in \mathbb{N}_{2q^*}^D}\sum_{k \in \mathbb{N}_{2n}^d}\sum_{i=1}^m N_{1,D,q^*,\zeta_j}(x_i)\, N_{3,k,j}(x_i)\, y_i\, N_{3,k,j}(x) = \sum_{i=1}^m y_i\, N_{1,D,q^*,\zeta_{j'}}(x_i)\, N_{3,k',j'}(x_i) = \sum_{i=1}^m y_i\, I_{H_{k',j'}}(x_i).$

If, in addition, for each $i \in \{1, \ldots, m\}$, $x_i \in \dot{H}_{k,j}$ for some $(j,k) \in \mathbb{N}_{2q^*}^D \times \mathbb{N}_{2n}^d$, then from (12) we have

$N_3(x) = \frac{\sum_{i=1}^m y_i\, I_{H_{k',j'}}(x_i)}{\sum_{i=1}^m I_{H_{k',j'}}(x_i)} = \frac{\sum_{i=1}^m y_i\, I_{H_{k',j'}}(x_i)}{T_{k',j'}(S)}.$    (26)

In view of Assumption 2, for an arbitrary subset $A \subseteq \mathbb{R}^D$, $\lambda_G(A) = 0$ implies $\rho_X(A) = 0$, where $\lambda_G(A)$ denotes the Riemannian measure of A. In particular, for $A = H_{k,j} \setminus \dot{H}_{k,j}$ in the above analysis, we have $\rho_X(H_{k,j} \setminus \dot{H}_{k,j}) = 0$, which implies that (26) holds almost surely. Next, set

$\widetilde{N_3} = E[N_3 \,|\, \mathbf{x}].$    (27)

Then it follows from Lemma 2, with μ = ρX, that

$E[\|N_3 - f_\rho\|_\rho^2] = E[\|N_3 - \widetilde{N_3}\|_\rho^2] + E[\|\widetilde{N_3} - f_\rho\|_\rho^2].$    (28)

In what follows, the two terms on the right-hand side of (28) will be called sampling error and approximation error, respectively.

Step 2: Sampling error estimation. Due to Assumption 2, we have

$E[\|N_3 - \widetilde{N_3}\|_\rho^2] = \sum_{(j,k) \in \mathbb{N}_{2q^*}^D \times \mathbb{N}_{2n}^d} \int_{\dot{H}_{k,j}} E[(N_3(x) - \widetilde{N_3}(x))^2]\, d\rho_X.$    (29)

On the other hand, (26) and (27) together imply that

$N_3(x) - \widetilde{N_3}(x) = \frac{\sum_{i=1}^m (y_i - f_\rho(x_i))\, I_{H_{k,j}}(x_i)}{T_{k,j}(S)}$

almost surely for x ∈ Ḣk,j, and that

$E[(N_3(x) - \widetilde{N_3}(x))^2 \,|\, \mathbf{x}] = E\Bigg[\bigg(\frac{\sum_{i=1}^m (y_i - f_\rho(x_i))\, I_{H_{k,j}}(x_i)}{T_{k,j}(S)}\bigg)^2 \,\Bigg|\, \mathbf{x}\Bigg] = \frac{\sum_{i=1}^m \int_Y (y - f_\rho(x_i))^2\, d\rho(y|x_i)\, I^2_{H_{k,j}}(x_i)}{[T_{k,j}(S)]^2} \le \frac{4M^2\, I_{\{z :\, T_{k,j}(z) > 0\}}(S)}{T_{k,j}(S)},$

where $\mathbb{E}[y_i | x_i] = f_\rho(x_i)$ is used in the second equality, while $I^2_{H_{k,j}}(x_i) = I_{H_{k,j}}(x_i)$ and $|y_i| \le M$ (almost surely) are used in the inequality. It then follows from Lemma 1 and Assumption 2 that

$E[(N_3(x) - \widetilde{N_3}(x))^2] \le \frac{8M^2}{(m+1)\, \rho_X(H_{k,j})}.$

This, together with (29), implies that

$E[\|N_3 - \widetilde{N_3}\|_\rho^2] \le \sum_{(j,k) \in \mathbb{N}_{2q^*}^D \times \mathbb{N}_{2n}^d} \int_{\dot{H}_{k,j}} \frac{8M^2}{(m+1)\, \rho_X(H_{k,j})}\, d\rho_X \le \frac{8 (2q^*)^D (2n)^d M^2}{m+1}.$    (30)

Step 3: Approximation error estimation. According to Assumption 2, we have

$E[\|f_\rho - \widetilde{N_3}\|_\rho^2] = \sum_{(j,k) \in \mathbb{N}_{2q^*}^D \times \mathbb{N}_{2n}^d} \int_{\dot{H}_{k,j}} E[(f_\rho(x) - \widetilde{N_3}(x))^2]\, d\rho_X.$    (31)

For x ∈ Ḣk,j, it follows from Assumption 1, (26) and (27) that

$|f_\rho(x) - \widetilde{N_3}(x)| \le \frac{\sum_{i=1}^m |f_\rho(x) - f_\rho(x_i)|\, I_{H_{k,j}}(x_i)}{T_{k,j}(S)} \le c_0 \Big(\max_{x,x' \in H_{k,j}} d_G(x, x')\Big)^s$

almost surely. We then have, from (10) and $N_{2,j}(x), N_{2,j}(x') \in A_{d,1/n,t_k}$, that

$\max_{x,x' \in H_{k,j}} d_G(x, x') \le \max_{x,x' \in H_{k,j}} \alpha^{-1} \|N_{2,j}(x) - N_{2,j}(x')\|_d.$

Now, since $\max_{t,t' \in A_{d,1/n,t_k}} \|t - t'\|_d \le \frac{2\sqrt{d}}{n}$, we obtain

$\max_{x,x' \in H_{k,j}} d_G(x, x') \le 2\, d^{1/2} \alpha^{-1} n^{-1},$

so that

$|f_\rho(x) - \widetilde{N_3}(x)| \le c_0\, 2^s d^{s/2} \alpha^{-s} n^{-s}$

holds almost surely. Inserting the above estimate into (31), we obtain

$E[\|f_\rho - \widetilde{N_3}\|_\rho^2] \le \sum_{(j,k) \in \mathbb{N}_{2q^*}^D \times \mathbb{N}_{2n}^d} \rho_X(\dot{H}_{k,j})\, c_0^2\, 4^s d^s \alpha^{-2s} n^{-2s} \le c_0^2\, 4^s d^s \alpha^{-2s} n^{-2s}.$    (32)

Step 4: Learning rate deduction. Inserting (32) and (30) into (28), we obtain

$E[\|N_3 - f_\rho\|_\rho^2] \le \frac{8 (2q^*)^D (2n)^d M^2}{m+1} + c_0^2\, 4^s d^s \alpha^{-2s} n^{-2s}.$

Since $n = \lceil m^{1/(2s+d)} \rceil$, we have

$E[\|N_3 - f_\rho\|_\rho^2] \le C_1 m^{-\frac{2s}{2s+d}}$

with

$C_1 := 8 (2q^*)^D 2^d M^2 + c_0^2\, 4^s d^s \alpha^{-2s}.$

As q* depends only on X, C1 is independent of m or n. This completes the proof of Theorem 1. □

Proof of Theorem 2. As in the proof of Theorem 1, we divide this proof into four steps.

Step 1: Error decomposition. From (17), we have

$N_3^F(x) = \sum_{i=1}^m y_i\, h_{\mathbf{x}}(x, x_i),$    (33)

where $h_{\mathbf{x}} : X \times X \to \mathbb{R}$ is defined for $x, u \in X$ by

$h_{\mathbf{x}}(x, u) = \frac{\sum_{j \in \mathbb{N}_{2q^*}^D}\sum_{k \in \mathbb{N}_{2n}^d} \Phi_{k,j}(x, u)}{\sum_{j \in \mathbb{N}_{2q^*}^D}\sum_{k \in \mathbb{N}_{2n}^d}\sum_{i=1}^m \Phi_{k,j}(x, x_i)},$    (34)

and $h_{\mathbf{x}}(x, u) = 0$ when the denominator vanishes. Define $\widetilde{N_3^F} : X \to \mathbb{R}$ by

$\widetilde{N_3^F}(x) = E[N_3^F(x) \,|\, \mathbf{x}] = \sum_{i=1}^m f_\rho(x_i)\, h_{\mathbf{x}}(x, x_i).$    (35)

Then it follows from Lemma 2 with μ = ρX, that

$E[\|N_3^F - f_\rho\|_\rho^2] = E[\|N_3^F - \widetilde{N_3^F}\|_\rho^2] + E[\|\widetilde{N_3^F} - f_\rho\|_\rho^2].$    (36)

In what follows, the terms on the right-hand side of (36) will be called the sampling error and the approximation error, respectively. By (21), for each $x \in X$ and $i \in \{1, \ldots, m\}$, we have $\Phi_{k,j}(x, x_i) = I_{H_{k,j}}(x_i)\, N_{3,k,j}(x) = I_{H_{k,j}}(x_i)$ for $(j,k) \in \Lambda_x$ and $\Phi_{k,j}(x, x_i) = 0$ for $(j,k) \notin \Lambda_x$, where $\Lambda_x$ is defined by (13). This, together with (35), (33), and (34), yields

$N_3^F(x) - \widetilde{N_3^F}(x) = \frac{\sum_{i=1}^m (y_i - f_\rho(x_i)) \sum_{(j,k) \in \Lambda_x} I_{H_{k,j}}(x_i)}{\sum_{(j,k) \in \Lambda_x} T_{k,j}(S)}, \qquad x \in X,$    (37)

and

$\widetilde{N_3^F}(x) - f_\rho(x) = \frac{\sum_{i=1}^m [f_\rho(x_i) - f_\rho(x)] \sum_{(j,k) \in \Lambda_x} I_{H_{k,j}}(x_i)}{\sum_{(j,k) \in \Lambda_x} T_{k,j}(S)}, \qquad x \in X,$    (38)

where $T_{k,j}(S) = \sum_{i=1}^m I_{H_{k,j}}(x_i)$.

Step 2: Sampling error estimation. First consider

$E[\|N_3^F - \widetilde{N_3^F}\|_\rho^2] \le \sum_{(j,k) \in \mathbb{N}_{2q^*}^D \times \mathbb{N}_{2n}^d} \int_{H_{k,j}} E[(N_3^F(x) - \widetilde{N_3^F}(x))^2]\, d\rho_X.$    (39)

For each xHk,j, since 𝔼[y|x] = fρ(x), it follows from (37) and |y| ≤ M that

$E[(N_3^F(x) - \widetilde{N_3^F}(x))^2 \,|\, \mathbf{x}] = E\Bigg[\bigg(\frac{\sum_{i=1}^m (y_i - f_\rho(x_i)) \sum_{(j,k) \in \Lambda_x} I_{H_{k,j}}(x_i)}{\sum_{(j,k) \in \Lambda_x} T_{k,j}(S)}\bigg)^2 \,\Bigg|\, \mathbf{x}\Bigg] = E\Bigg[\frac{\sum_{i=1}^m (y_i - f_\rho(x_i))^2 \big(\sum_{(j,k) \in \Lambda_x} I_{H_{k,j}}(x_i)\big)^2}{\big(\sum_{(j,k) \in \Lambda_x} T_{k,j}(S)\big)^2} \,\Bigg|\, \mathbf{x}\Bigg] \le 4M^2 \sum_{i=1}^m \frac{\big(\sum_{(j,k) \in \Lambda_x} I_{H_{k,j}}(x_i)\big)^2}{\big(\sum_{(j,k) \in \Lambda_x} T_{k,j}(S)\big)^2}$

holds almost surely. Since $\sum_{i=1}^m I_{H_{k,j}}(x_i) = T_{k,j}(S)$, we apply the Schwarz inequality to $\sum_{(j,k) \in \Lambda_x} I_{H_{k,j}}(x_i)$ to obtain

$E[(N_3^F(x) - \widetilde{N_3^F}(x))^2 \,|\, \mathbf{x}] \le 4M^2\, \frac{|\Lambda_x| \sum_{(j,k) \in \Lambda_x} \sum_{i=1}^m I^2_{H_{k,j}}(x_i)}{\big(\sum_{(j,k) \in \Lambda_x} T_{k,j}(S)\big)^2} = 4M^2 |\Lambda_x|\, \frac{I_{\{z \in Z^m :\, \sum_{(j,k) \in \Lambda_x} T_{k,j}(z) > 0\}}(S)}{\sum_{(j,k) \in \Lambda_x} T_{k,j}(S)}.$

Thus, from Lemma 1 and (14) we have

$E[(N_3^F(x) - \widetilde{N_3^F}(x))^2] = E\big[E[(N_3^F(x) - \widetilde{N_3^F}(x))^2 \,|\, \mathbf{x}]\big] \le \frac{8 M^2\, 2^{D+d}}{(m+1)\, \rho_X\big(\bigcup_{(j,k) \in \Lambda_x} H_{k,j}\big)}.$

This, along with (39), implies that

$E[\|N_3^F - \widetilde{N_3^F}\|_\rho^2] \le \frac{2^{D+d+3} M^2}{m+1} \sum_{(j,k) \in \mathbb{N}_{2q^*}^D \times \mathbb{N}_{2n}^d} \int_{H_{k,j}} \frac{1}{\rho_X\big(\bigcup_{(j,k) \in \Lambda_x} H_{k,j}\big)}\, d\rho_X \le \frac{2^{D+d+3} M^2}{m+1} \sum_{(j,k) \in \mathbb{N}_{2q^*}^D \times \mathbb{N}_{2n}^d} \int_{H_{k,j}} \frac{1}{\rho_X(H_{k,j})}\, d\rho_X \le \frac{2^{D+d+3} (2q^*)^D M^2 (2n)^d}{m+1}.$    (40)

Step 3: Approximation error estimation. For each $x \in X$, set

$A_1(x) = E\Big[(\widetilde{N_3^F}(x) - f_\rho(x))^2 \,\Big|\, \sum_{(j,k) \in \Lambda_x} T_{k,j}(S) = 0\Big]\, \Pr\Big[\sum_{(j,k) \in \Lambda_x} T_{k,j}(S) = 0\Big]$

and

$A_2(x) = E\Big[(\widetilde{N_3^F}(x) - f_\rho(x))^2 \,\Big|\, \sum_{(j,k) \in \Lambda_x} T_{k,j}(S) \ge 1\Big]\, \Pr\Big[\sum_{(j,k) \in \Lambda_x} T_{k,j}(S) \ge 1\Big];$

and observe that

$E[\|\widetilde{N_3^F} - f_\rho\|_\rho^2] = \int_X E[(\widetilde{N_3^F}(x) - f_\rho(x))^2]\, d\rho_X = \int_X A_1(x)\, d\rho_X + \int_X A_2(x)\, d\rho_X.$    (41)

Let us first consider $\int_X A_1(x)\, d\rho_X$. Since $\widetilde{N_3^F}(x) = 0$ when $\sum_{(j,k) \in \Lambda_x} T_{k,j}(S) = 0$, we have, from $|f_\rho(x)| \le M$, that

$E\Big[(\widetilde{N_3^F}(x) - f_\rho(x))^2 \,\Big|\, \sum_{(j,k) \in \Lambda_x} T_{k,j}(S) = 0\Big] \le M^2.$

On the other hand, since

$\Pr\Big[\sum_{(j,k) \in \Lambda_x} T_{k,j}(S) = 0\Big] = \Big[1 - \rho_X\Big(\bigcup_{(j,k) \in \Lambda_x} H_{k,j}\Big)\Big]^m,$

it follows from the elementary inequality

$v(1-v)^m \le v e^{-mv} \le \frac{1}{em}, \qquad 0 \le v \le 1,$

that

$\int_X A_1(x)\, d\rho_X \le \int_X M^2 \Big[1 - \rho_X\Big(\bigcup_{(j,k) \in \Lambda_x} H_{k,j}\Big)\Big]^m d\rho_X \le M^2 \sum_{(j,k) \in \mathbb{N}_{2q^*}^D \times \mathbb{N}_{2n}^d} \int_{H_{k,j}} \Big[1 - \rho_X\Big(\bigcup_{(j,k) \in \Lambda_x} H_{k,j}\Big)\Big]^m d\rho_X \le M^2 \sum_{(j,k) \in \mathbb{N}_{2q^*}^D \times \mathbb{N}_{2n}^d} \int_{H_{k,j}} [1 - \rho_X(H_{k,j})]^m d\rho_X \le M^2 \sum_{(j,k) \in \mathbb{N}_{2q^*}^D \times \mathbb{N}_{2n}^d} [1 - \rho_X(H_{k,j})]^m \rho_X(H_{k,j}) \le \frac{(2n)^d (2q^*)^D M^2}{em}.$    (42)

We next consider $\int_X A_2(x)\, d\rho_X$. Let $x \in X$ satisfy $\sum_{(j,k) \in \Lambda_x} T_{k,j}(S) \ge 1$. Then $x_i \in H_x := \bigcup_{(j,k) \in \Lambda_x} H_{k,j}$ for at least one $i \in \{1, 2, \ldots, m\}$. For those $x_i \notin H_x$, we have $\sum_{(j,k) \in \Lambda_x} I_{H_{k,j}}(x_i) = 0$, so that

$|\widetilde{N_3^F}(x) - f_\rho(x)| \le \frac{\sum_{i:\, x_i \in H_x} |f_\rho(x_i) - f_\rho(x)| \sum_{(j,k) \in \Lambda_x} I_{H_{k,j}}(x_i)}{\sum_{(j,k) \in \Lambda_x} T_{k,j}(S)}.$

For $x_i \in H_x$, we have $x_i \in H_{k,j}$ for some $(j,k) \in \Lambda_x$. But $x \in H_{k,j}$ as well, so that

$|\widetilde{N_3^F}(x) - f_\rho(x)| \le \max_{u,u' \in H_{k,j}} |f_\rho(u) - f_\rho(u')| \le c_0 \max_{u,u' \in H_{k,j}} [d_G(u, u')]^s, \qquad x \in X.$

But (10) implies that

$\max_{u,u' \in H_{k,j}} [d_G(u, u')]^s \le \max_{u,u' \in H_{k,j}} \alpha^{-s} \|N_{2,j}(u) - N_{2,j}(u')\|_d^s \le \alpha^{-s} \max_{t,t' \in A_{d,1/n,t_k}} \|t - t'\|_d^s \le 2^s d^{s/2} \alpha^{-s} n^{-s}.$

Hence, for $x \in X$ with $\sum_{(j,k) \in \Lambda_x} T_{k,j}(S) \ge 1$, we have

$|\widetilde{N_3^F}(x) - f_\rho(x)| \le c_0\, 2^s d^{s/2} \alpha^{-s} n^{-s}\, \frac{\sum_{i:\, x_i \in H_x} \sum_{(j,k) \in \Lambda_x} I_{H_{k,j}}(x_i)}{\sum_{(j,k) \in \Lambda_x} T_{k,j}(S)} \le c_0\, 2^s d^{s/2} \alpha^{-s} n^{-s},$

and thereby

$\int_X A_2(x)\, d\rho_X \le \int_X E\Big[(\widetilde{N_3^F}(x) - f_\rho(x))^2 \,\Big|\, \sum_{(j,k) \in \Lambda_x} T_{k,j}(S) \ge 1\Big] d\rho_X \le c_0^2\, 4^s d^s \alpha^{-2s} n^{-2s}.$    (43)

Therefore, putting (42) and (43) into (41), we have

$E[\|\widetilde{N_3^F} - f_\rho\|_\rho^2] \le c_0^2\, 4^s d^s \alpha^{-2s} n^{-2s} + \frac{M^2 (2n)^d (2q^*)^D}{em}.$    (44)

Step 4: Learning rate deduction. By inserting (40) and (44) into (36), we obtain

$E[\|N_3^F - f_\rho\|_\rho^2] \le \frac{2^{D+d+3} (2q^*)^D M^2 (2n)^d}{m+1} + c_0^2\, 4^s d^s \alpha^{-2s} n^{-2s} + \frac{M^2 (2n)^d (2q^*)^D}{em}.$

Hence, in view of $n = \lceil m^{1/(2s+d)} \rceil$, we have

$E[\|N_3^F - f_\rho\|_\rho^2] \le C_2 m^{-\frac{2s}{2s+d}}$

with

$C_2 := 2^{D+3d+4} (2q^*)^D M^2 + c_0^2\, 4^s d^s \alpha^{-2s}.$

This completes the proof of Theorem 2, since q* depends only on X, so that C2 is independent of m or n. □

Author Contributions

All authors listed have made a substantial, direct and intellectual contribution to the work, and approved it for publication.

Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

The research of CC is partially supported by U.S. ARO Grant W911NF-15-1-0385, Hong Kong Research Council (Grant No. 12300917), and Hong Kong Baptist University (Grant No. HKBU-RC-ICRS/16-17/03). The research of S-BL is partially supported by the National Natural Science Foundation of China (Grant No. 61502342). The work of D-XZ is supported partially by the Research Grants Council of Hong Kong [Project No. CityU 11303915] and by National Natural Science Foundation of China under Grant 11461161006. Part of the work was done during the third author's visit to Shanghai Jiaotong University (SJTU), for which the support from SJTU and the Ministry of Education is greatly appreciated.

References

1. Hinton GE, Osindero S, Teh YW. A fast learning algorithm for deep belief nets. Neural Comput. (2006) 18:1527–54. doi: 10.1162/neco.2006.18.7.1527
2. Chui CK, Li X. Approximation by ridge functions and neural networks with one hidden layer. J Approx Theory (1992) 70:131–41. doi: 10.1016/0021-9045(92)90081-X
3. Cybenko G. Approximation by superpositions of a sigmoidal function. Math Control Signals Syst. (1989) 2:303–14. doi: 10.1007/BF02551274
4. Funahashi KI. On the approximate realization of continuous mappings by neural networks. Neural Netw. (1989) 2:183–92. doi: 10.1016/0893-6080(89)90003-8
5. Lippmann RP. An introduction to computing with neural nets. IEEE ASSP Mag. (1987) 4:4–22. doi: 10.1109/MASSP.1987.1165576
6. Krizhevsky A, Sutskever I, Hinton GE. ImageNet classification with deep convolutional neural networks. In: Neural Information Processing Systems. Lake Tahoe, NV (2012). p. 1097–1105.
7. Lee H, Pham P, Largman Y, Ng AY. Unsupervised feature learning for audio classification using convolutional deep belief networks. In: Neural Information Processing Systems. Vancouver, BC (2010). p. 469–477.
8. Chui CK, Li X, Mhaskar HN. Neural networks for localized approximation. Math Comput. (1994) 63:607–23. doi: 10.1090/S0025-5718-1994-1240656-2
9. Eldan R, Shamir O. The power of depth for feedforward neural networks. In: Conference on Learning Theory. New York, NY (2016). p. 907–940.
10. Mhaskar H, Poggio T. Deep vs. shallow networks: an approximation theory perspective. Anal Appl. (2016) 14:829–48. doi: 10.1142/S0219530516400042
11. Poggio T, Mhaskar H, Rosasco L, Miranda B, Liao Q. Why and when can deep-but not shallow-networks avoid the curse of dimensionality: a review. Int J Autom Comput. (2017) 14:503–19. doi: 10.1007/s11633-017-1054-2
12. Raghu M, Poole B, Kleinberg J, Ganguli S, Sohl-Dickstein J. On the expressive power of deep neural networks. In: Proceedings of the 34th International Conference on Machine Learning, PMLR, Vol. 70 (2017). p. 2847–54.
13. Shaham U, Cloninger A, Coifman RR. Provable approximation properties for deep neural networks. Appl Comput Harmon Anal. (2018) 44:537–57. doi: 10.1016/j.acha.2016.04.003
14. Telgarsky M. Benefits of depth in neural networks. In: 29th Annual Conference on Learning Theory, PMLR, Vol. 49 (2016). p. 1517–39.
15. Cucker F, Zhou DX. Learning Theory: An Approximation Theory Viewpoint. Cambridge: Cambridge University Press (2007).
16. Bianchini M, Scarselli F. On the complexity of neural network classifiers: a comparison between shallow and deep architectures. IEEE Trans Neural Netw Learn Syst. (2014) 25:1553–65. doi: 10.1109/TNNLS.2013.2293637
17. Montúfar G, Pascanu R, Cho K, Bengio Y. On the number of linear regions of deep neural networks. In: Neural Information Processing Systems. Lake Tahoe, CA (2014). p. 2924–2932.
18. Maiorov V. Approximation by neural networks and learning theory. J Complex. (2006) 22:102–17. doi: 10.1016/j.jco.2005.09.001
19. Chui CK, Mhaskar HN. Deep nets for local manifold learning. Front Appl Math Stat. (2016) arXiv:1607.07110.
20. Györfy L, Kohler M, Krzyzak A, Walk H. A Distribution-Free Theory of Nonparametric Regression. Berlin: Springer (2002).
21. Bengio Y. Learning deep architectures for AI. Found Trends Mach Learn. (2009) 2:1–127. doi: 10.1561/2200000006
22. Ye GB, Zhou DX. Learning and approximation by Gaussians on Riemannian manifolds. Adv Comput Math. (2008) 29:291–310. doi: 10.1007/s10444-007-9049-0
23. Basri R, Jacobs D. Efficient representation of low-dimensional manifolds using deep networks. (2016) arXiv:1602.04723.
24. DiCarlo J, Cox D. Untangling invariant object recognition. Trends Cogn Sci. (2007) 11:333–41. doi: 10.1016/j.tics.2007.06.010
25. do Carmo M. Riemannian Geometry. Boston, MA: Birkhäuser (1992).
26. Larochelle H, Bengio Y, Louradour J, Lamblin P. Exploring strategies for training deep neural networks. J Mach Learn Res. (2009) 10:1–40.
27. Chang X, Lin SB, Wang Y. Divide and conquer local average regression. Electron J Stat. (2017) 11:1326–50. doi: 10.1214/17-EJS1265
28. Christmann A, Zhou DX. On the robustness of regularized pairwise learning methods based on kernels. J Complex. (2017) 37:1–33. doi: 10.1016/j.jco.2016.07.001
29. Fan J, Hu T, Wu Q, Zhou DX. Consistency analysis of an empirical minimum error entropy algorithm. Appl Comput Harmon Anal. (2016) 41:164–89. doi: 10.1016/j.acha.2014.12.005
30. Guo ZC, Xiang DH, Guo X, Zhou DX. Thresholded spectral algorithms for sparse approximations. Anal Appl. (2017) 15:433–55. doi: 10.1142/S0219530517500026
31. Hu T, Fan J, Wu Q, Zhou DX. Regularization schemes for minimum error entropy principle. Anal Appl. (2015) 13:437–55. doi: 10.1142/S0219530514500110
32. Kohler M, Krzyżak A. Adaptive regression estimation with multilayer feedforward neural networks. J Nonparametr Stat. (2005) 17:891–913. doi: 10.1080/10485250500309608
33. Lin SB, Zhou DX. Distributed kernel-based gradient descent algorithms. Constr Approx. (2018) 47:249–76. doi: 10.1007/s00365-017-9379-1
34. Shi L, Feng YL, Zhou DX. Concentration estimates for learning with l1-regularizer and data dependent hypothesis spaces. Appl Comput Harmon Anal. (2011) 31:286–302. doi: 10.1016/j.acha.2011.01.001
35. Wu Q, Zhou DX. Learning with sample dependent hypothesis space. Comput Math Appl. (2008) 56:2896–907. doi: 10.1016/j.camwa.2008.09.014
36. Shi L. Learning theory estimates for coefficient-based regularized regression. Appl Comput Harmon Anal. (2013) 34:252–65. doi: 10.1016/j.acha.2012.05.001
37. Zhou DX, Jetter K. Approximation with polynomial kernels and SVM classifiers. Adv Comput Math. (2006) 25:323–44. doi: 10.1007/s10444-004-7206-2
38. Meister M, Steinwart I. Optimal learning rates for localized SVMs. J Mach Learn Res. (2016) 17:1–44.
39. Erhan D, Bengio Y, Courville A, Manzagol P, Vincent P, Bengio S. Why does unsupervised pre-training help deep learning? J Mach Learn Res. (2010) 11:625–60.
40. Goodfellow I, Bengio Y, Courville A. Deep Learning. Cambridge: MIT Press (2016).
41. Chui CK, Li X, Mhaskar HN. Limitations of the approximation capabilities of neural networks with one hidden layer. Adv Comput Math. (1996) 5:233–43. doi: 10.1007/BF02124745
42. Maiorov V, Pinkus A. Lower bounds for approximation by MLP neural networks. Neurocomputing (1999) 25:81–91. doi: 10.1016/S0925-2312(98)00111-8
43. Lin SB. Limitations of shallow nets approximation. Neural Netw. (2017) 94:96–102. doi: 10.1016/j.neunet.2017.06.016
44. Mhaskar H. Approximation properties of a multilayered feedforward artificial neural network. Adv Comput Math. (1993) 1:61–80. doi: 10.1007/BF02070821
45. Ye GB, Zhou DX. SVM learning and Lp approximation by Gaussians on Riemannian manifolds. Anal Appl. (2009) 7:309–39. doi: 10.1142/S0219530509001384
46. Kohler M, Krzyżak A. Nonparametric regression based on hierarchical interaction models. IEEE Trans Inform Theory (2017) 63:1620–30. doi: 10.1109/TIT.2016.2634401
47. Lin SB, Guo X, Zhou DX. Distributed learning with least square regularization. J Mach Learn Res. (2017) 18:1–31.
48. Zhang YC, Duchi J, Wainwright M. Divide and conquer kernel ridge regression: a distributed algorithm with minimax optimal rates. J Mach Learn Res. (2015) 16:3299–340.

Keywords: deep nets, learning theory, deep learning, manifold learning, feedback

Citation: Chui CK, Lin S-B and Zhou D-X (2018) Construction of Neural Networks for Realization of Localized Deep Learning. Front. Appl. Math. Stat. 4:14. doi: 10.3389/fams.2018.00014

Received: 30 January 2018; Accepted: 26 April 2018;
Published: 17 May 2018.

Edited by:

Lixin Shen, Syracuse University, United States

Reviewed by:

Sivananthan Sampath, Indian Institutes of Technology, India
Ashley Prater, United States Air Force Research Laboratory, United States

Copyright © 2018 Chui, Lin and Zhou. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Shao-Bo Lin, sblin1983@gmail.com
