- 1Department of Mathematics, Hong Kong Baptist University, Kowloon, Hong Kong
- 2Department of Statistics, Stanford University, Stanford, CA, United States
- 3Department of Mathematics, Wenzhou University, Wenzhou, China
- 4Department of Mathematics, City University of Hong Kong, Kowloon, Hong Kong
The subject of deep learning has recently attracted users of machine learning from various disciplines, including: medical diagnosis and bioinformatics, financial market analysis and online advertisement, speech and handwriting recognition, computer vision and natural language processing, time series forecasting, and search engines. However, theoretical development of deep learning is still at its infancy. The objective of this paper is to introduce a deep neural network (also called deep-net) approach to localized manifold learning, with each hidden layer endowed with a specific learning task. For the purpose of illustrations, we only focus on deep-nets with three hidden layers, with the first layer for dimensionality reduction, the second layer for bias reduction, and the third layer for variance reduction. A feedback component is also designed to deal with outliers. The main theoretical result in this paper is the order of approximation of the regression function with regularity s, in terms of the number m of sample points, where the (unknown) manifold dimension d replaces the dimension D of the sampling (Euclidean) space for shallow nets.
1. Introduction
The continued rapid growth in data acquisition and data updating has recently posed crucial challenges to the machine learning community in developing learning schemes that match or outperform human learning capability. Fortunately, the introduction of deep learning (see for example [1]) has made it feasible to get around the bottleneck of classical learning strategies, such as the support vector machine and boosting algorithms, based on classical neural networks (see for example [2–5]), by demonstrating remarkable successes in many applications, particularly computer vision [6] and speech recognition [7], and more recently in other areas, including natural language processing, medical diagnosis and bioinformatics, financial market analysis and online advertisement, time series forecasting, and search engines. Furthermore, the exciting recent advances of deep learning schemes for such applications have motivated the current interest in re-visiting the development of classical neural networks (to be called “shallow nets” in later discussions), by allowing multiple hidden layers between the input and output layers. Such neural networks are called “deep” neural nets, or simply, deep nets. Indeed, the advantages of deep nets over shallow nets, at least in applications, have led to various popular research directions in the academic communities of Approximation Theory and Learning Theory. Explicit results on the existence of functions that are expressible by deep nets but cannot be approximated by shallow nets with a comparable number of parameters are generally regarded as strong evidence of the advantage of deep nets in Approximation Theory. The first theoretical understanding of such results dates back to our early work [8], where, by using the Heaviside activation function, it was shown that deep nets with two hidden layers already provide localized approximation, while shallow nets fail. Explicit results on neural network approximation derived in Eldan and Shamir [9], Mhaskar and Poggio [10], Poggio et al. [11], Raghu et al. [12], Shaham et al. [13], and Telgarsky [14] further reveal various advantages of deep nets over shallow nets. For example, the power of depth of neural networks in approximating hierarchical functions was shown in Mhaskar and Poggio [10] and Poggio et al. [11], and it was demonstrated in Shaham et al. [13] that deep nets can improve the approximation capability of shallow nets when the data are located on a manifold.
From approximation to learning, the tug of war between bias and variance [15] indicates that explicit approximation-error estimates for deep nets are insufficient to explain their success in machine learning, in that besides the bias, the capacity of deep nets, which governs the variance, must also be taken into account. In this direction, the capacity of deep nets, as measured by the Betti numbers, the number of linear regions, and the number of neuron transitions, was studied in Bianchini and Scarselli [16], Montúfar et al. [17], and Raghu et al. [12], respectively, showing that deep nets allow for many more functionalities than shallow nets. Although these results certainly show the benefits of deep nets, they pose more difficulties in analyzing the deep learning performance, since large capacity usually implies large variance and requires more elaborate learning algorithms. One of the main difficulties is the development of a satisfactory learning-rate analysis for deep nets, analogous to what has been well studied for shallow nets (see for example [18]). In this paper, we present an analysis of the advantages of deep nets in the framework of learning theory [15], taking into account the trade-off between bias and variance.
Our starting point is to assume that the samples are located approximately on some unknown manifold in the sample (D-dimensional Euclidean) space. For simplicity, consider the set of sample inputs: , with a corresponding set of outputs: for some positive number M, where is an unknown d-dimensional connected C∞ Riemannian manifold (without boundary). We will call the sample set, and construct a deep net with three hidden layers, with the first for dimensionality reduction, the second for bias reduction, and the third for variance reduction. The main tools for our construction are the “local manifold learning” for deep nets in Chui and Mhaskar [19], the “localized approximation” for deep nets in Chui et al. [8], and the “local average” in Györfy et al. [20]. We will also introduce a feedback procedure to eliminate outliers during the learning process. Our construction supports the common consensus that deep nets are capable of capturing data features via their architectural structures [21]. In addition, we will prove that the constructed deep net can well approximate the so-called regression function [15] within the accuracy of in expectation, where s denotes the order of smoothness (or regularity) of the regression function. Noting that the best existing learning rates for shallow nets are in Maiorov [18] and in Ye and Zhou [22], we observe the power of deep nets over shallow nets, at least theoretically, in the framework of Learning Theory.
The organization of this paper is as follows. In the next section, we present a detailed construction of the proposed deep net. The main results of the paper will be stated in section 3, where tight learning rates of the constructed deep net are also deduced. Discussions of our contributions along with comparison with some related work and proofs of the main results will be presented in sections 4 and 5, respectively.
2. Construction of Deep Nets
In this section, we present a construction of deep neural networks with three hidden layers to realize certain deep learning algorithms, by applying the mathematical tools of localized approximation in Chui et al. [8], local manifold learning in Chui and Mhaskar [19], and local average arguments in Györfy et al. [20]. Throughout this paper, we will consider only two activation functions: the Heaviside function σ0 and the square rectifier σ2, where the standard notation t+ = max{0, t} is used to define σn(t) = (t+)^n for any non-negative integer n.
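For readers who prefer to experiment, the following minimal Python sketch implements the two activation functions just introduced via the convention t+ = max{0, t}; the function name and the sample inputs are ours, chosen only for illustration.

```python
import numpy as np

def sigma_n(t, n):
    """sigma_n(t) = (t_+)^n with t_+ = max{0, t}.
    n = 0 gives the Heaviside function sigma_0 (here taken to be 1 at t = 0),
    and n = 2 gives the square rectifier sigma_2."""
    t = np.asarray(t, dtype=float)
    if n == 0:
        return (t >= 0).astype(float)
    return np.maximum(t, 0.0) ** n

ts = np.array([-1.0, -0.1, 0.0, 0.5, 2.0])
print(sigma_n(ts, 0))   # [0. 0. 1. 1. 1.]
print(sigma_n(ts, 2))   # [0.   0.   0.   0.25 4.  ]
```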
2.1. Localized Approximation and Localized Manifold Learning
Performance comparison between deep nets and shallow nets is a classical topic in Approximation Theory. It is well-known from numerous publications (see for example [8, 9, 12, 14]) that various functions can be well approximated by deep nets but not by any shallow net with the same order of magnitude in the numbers of neurons. In particular, it was proved in Chui et al. [8] that deep nets can provide localized approximation, while shallow nets fail.
For r, q ∈ ℕ and an arbitrary , where , let
For a > 0 and ζ ∈ ℝr, let us denote by , the cube in ℝr with center ζ and width a. Furthermore, we define by
In what follows, the standard notation IA of the indicator function of a set (or an event) A will be used. For x ∈ ℝ, since
we observe that
This implies that N1, r, q,ζj as introduced in (1) is the indicator function of the cube . Thus, the following proposition, which describes the localized approximation property of N1, r, q,ζj, can be easily deduced by applying Theorem 2.3 in Chui et al. [8].
Proposition 1. Let r, q ∈ ℕ be arbitrarily given. Then N1, r, q,ζj = IAr, 1/q,ζj for all .
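Since the displayed definition of N1, r, q,ζj did not survive extraction, the short Python sketch below uses one standard two-hidden-layer Heaviside construction of a cube indicator (first layer: coordinate-wise half-space detectors; second layer: a thresholded sum) merely to illustrate the localized approximation property asserted in Proposition 1. It should not be read as the exact network of Chui et al. [8]; all names are ours.

```python
import numpy as np

def heaviside(t):
    return (np.asarray(t, dtype=float) >= 0).astype(float)

def cube_indicator_net(x, center, width):
    """A two-hidden-layer Heaviside net equal to the indicator of the closed
    cube with the given center and width (a standard construction; the exact
    weights of N_{1,r,q,zeta_j} in the paper may differ).
    Layer 1: for each coordinate i, two units detect x_i >= c_i - w/2 and
             x_i <= c_i + w/2.
    Layer 2: one Heaviside unit fires iff all 2r first-layer units fire."""
    x = np.atleast_2d(x)                          # shape (num_points, r)
    c = np.asarray(center, dtype=float)
    w = float(width)
    r = c.size
    lower = heaviside(x - (c - w / 2.0))          # 1 iff x_i >= c_i - w/2
    upper = heaviside((c + w / 2.0) - x)          # 1 iff x_i <= c_i + w/2
    layer1_sum = lower.sum(axis=1) + upper.sum(axis=1)
    return heaviside(layer1_sum - (2 * r - 0.5))  # 1 iff all 2r units fire

# Example in r = 2: the cube centered at the origin with width 1.
pts = np.array([[0.0, 0.0], [0.49, -0.49], [0.6, 0.0]])
print(cube_indicator_net(pts, center=[0.0, 0.0], width=1.0))  # [1. 1. 0.]
```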
On the other hand, it was argued on practical grounds in Basri and Jacobs [23] and DiCarlo and Cox [24] that deep nets can tackle data on highly curved manifolds, while shallow nets fail. These arguments were theoretically verified in Chui and Mhaskar [19] and Shaham et al. [13], with the implication that adding hidden layers to shallow nets should enable neural networks to process massive data in a high-dimensional space whose samples lie on lower-dimensional manifolds. More precisely, it follows from do Carmo [25] and Shaham et al. [13] that for a d-dimensional connected and compact C∞ Riemannian submanifold (without boundary), isometrically embedded in ℝD and endowed with the geodesic distance dG, there exists some δ > 0, such that for any , with ,
where for any r > 0, ||·||r denotes, as usual, the Euclidean norm of ℝr. In the following, let BG(ξ0, τ), BD(ξ0, τ), and Bd(ξ0, τ) denote the closed geodesic ball, the D-dimensional Euclidean ball, and the d-dimensional Euclidean ball, respectively, with center ξ0 and radius τ > 0. Noting that , the following proposition is a brief summary of Theorem 2.2 and Remark 2.1 in Chui and Mhaskar [19], with the implication that neural networks can be used as a dimensionality-reduction tool.
Proposition 2. For each , there exist a positive number δξ and a neural network
with
that maps BG(ξ, δξ) diffeomorphically onto [−1, 1]d and satisfies
for some αξ, βξ > 0.
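The network of Proposition 2 is constructed in Chui and Mhaskar [19] and its explicit form is not reproduced above. Purely to illustrate the underlying idea, the hypothetical Python sketch below plays the role of such a chart map: it projects a small neighborhood of a base point onto an orthonormal basis of a d-dimensional tangent plane and rescales the result into [−1, 1]d. The projection-based construction and all names are our own simplifying assumptions, not the network of [19].

```python
import numpy as np

def tangent_chart(points, base_point, tangent_basis, radius):
    """A hypothetical chart map illustrating dimensionality reduction:
    project points near base_point (in R^D) onto an orthonormal basis of a
    d-dimensional tangent plane and rescale into [-1, 1]^d.  This is only a
    conceptual stand-in for the network of Proposition 2 ([19])."""
    P = np.asarray(points, dtype=float) - np.asarray(base_point, dtype=float)
    B = np.asarray(tangent_basis, dtype=float)   # shape (d, D), orthonormal rows
    return (P @ B.T) / float(radius)             # local coordinates in [-1, 1]^d

# Example: a curve (d = 1 manifold) embedded in R^3, chart around (1, 0, 0).
theta = np.linspace(-0.1, 0.1, 5)
curve = np.stack([np.cos(theta), np.sin(theta), 0 * theta], axis=1)  # D = 3
coords = tangent_chart(curve, base_point=[1.0, 0.0, 0.0],
                       tangent_basis=[[0.0, 1.0, 0.0]], radius=0.1)
print(coords.ravel())   # approximately [-1, -0.5, 0, 0.5, 1]
```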
2.2. Learning via Deep Nets
Our construction of deep nets depends on the localized approximation and dimensionality-reduction techniques presented in Propositions 1 and 2. To describe the learning process, first select a suitable q*, so that for every , there exists some point in a finite set that satisfies
To this end, we need a constant C0 ≥ 1, such that
The existence of such a constant is proved in the literature (see for example [22]). Also, in view of the compactness of , since is an open covering of , there exists a finite set of points , such that . Hence, q* ∈ ℕ may be chosen to satisfy
With this choice, we claim that (5) holds. Indeed, if , then (5) obviously holds for any choice of . On the other hand, if , then from the inclusion property , it follows that there is some , depending on , such that
Next, let . By (6), we have, for any ,
Therefore, it follows from (7) that
This implies that and verifies our claim (5) with the choice of .
Observe that for every we may choose the point to define by setting
and apply (5) and (3) to obtain the following.
Proposition 3. For each , N2,j maps diffeomorphically into [−1, 1]d and
where and .
As a result of Propositions 1 and 3, we now present the construction of the deep nets for the proposed learning purpose. Start by selecting (2n)^d points , and n ∈ ℕ, with , where in (−1, 1)^d. Denote Ck = Ad, 1/n,tk and . In view of Proposition 3, it follows that Hk,j is well defined, , and We also define by
Then the desired deep net estimator with three hidden layers may be defined by
where we set N3(x) = 0 if the denominator is zero.
For a d-dimensional submanifold and an x in , it is clear from (9) that the task of the first hidden layer N2,j(x) is to map into [−1, 1]d. On the other hand, the second hidden layer is intended to search for the location of N2,j(x) in [−1, 1]d. Indeed, it follows from (11) that large values of the parameter n narrow down a small region that contains x, thereby reducing the bias. Furthermore, observe that N3(x) in (12) is a form of local average, based on N3,k,j(x) and the small region that contains x. This is a standard local averaging strategy for reducing variance in statistics [20]. In summary, there is a totality of three hidden layers in the above construction for performing three separate tasks, namely: the first hidden layer is for reducing the dimension of the input space, while by applying local averaging [20], the second and third hidden layers are for reducing bias and data variance, respectively.
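Because several of the displayed formulas (9)–(12) did not survive extraction, the following self-contained Python sketch only illustrates the mechanism just described in a simplified, hypothetical form: it assumes the first hidden layer has already mapped the inputs into chart coordinates in [−1, 1]d, locates the query point in one of the (2n)^d cubes of width 1/n (second layer), and returns the local average of the responses falling in the same cube (third layer). It is a conceptual stand-in for the estimator N3, not the exact construction of (12).

```python
import numpy as np

def local_average_estimate(x_query, X_chart, y, n):
    """A simplified stand-in for the estimator N_3 of (12), assuming the
    first hidden layer has already mapped all inputs into chart coordinates
    in [-1, 1]^d.  The second layer locates the query point in one of the
    (2n)^d cubes of width 1/n; the third layer averages the responses y_i of
    the samples falling in the same cube (0 if the cube is empty)."""
    X = np.atleast_2d(X_chart)
    xq = np.asarray(x_query, dtype=float)
    # Cell index of a point: which width-1/n cube (per coordinate) it lies in.
    cell = lambda z: np.minimum(np.floor((z + 1.0) * n), 2 * n - 1).astype(int)
    same_cell = np.all(cell(X) == cell(xq), axis=1)     # indicator of the cube
    if not same_cell.any():
        return 0.0                                      # empty denominator
    return float(np.asarray(y)[same_cell].mean())       # local average

# Example with d = 1: noisy samples of f(t) = t^2 on [-1, 1].
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
y = X[:, 0] ** 2 + 0.05 * rng.standard_normal(200)
print(local_average_estimate([0.3], X, y, n=5))   # roughly 0.09 = 0.3^2
```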
2.3. Fine-Tuning
For each , it follows from that there is some , such that , which implies that . For each , since is a cube in ℝD, the cardinality of the set is at most 2^D. Also, because for each , there exists some , such that N2,j(x) ∈ Ad, 1/n,tk, implying that N3, k,j(x) = N1, d, n,tk◦N2, j(x) = 1 and that the number of such integers k is bounded by 2^d. For each , we consider a non-empty subset
of , with cardinality
Also, for each , we further define , as well as
and
Then it follows from (15) and (16) that and it is easy to see that if each xi ∈ SΛx is an interior point of some Hk, j, then . In this way, N3 is some local average estimator. However, if , (and this is possible when some xi lies on the boundary of Hk,j for some ), then the estimator N3 in (12) might perform badly, and this happens even for training data. Note that to predict for some xj ∈ Sm, which is an interior point of Hk0, j0, we have
which might be far away from yj when . The reason is that there are |Λx,S| summands in the numerator. Noting that the Riemannian measure of the boundary of is zero, we regard the above phenomenon as the effect of outliers.
Fine-tuning, often referred to as feedback in the literature of deep learning [21], can substantially improve the learning performance of deep nets [26]. We observe that fine-tuning can also be applied to handle outliers for our constructed deep net in (12), by counting the cardinalities of Λx,S and . In the training process, besides computing N3(x) for some query point x, we may also record |Λx,S| and . If the estimator is not big enough, we propose to add the factor to N3(x). In this way, the deep net estimator with feedback can be mathematically represented by
where is defined by
and as before, we set if the denominator vanishes.
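The displayed definitions (13)–(17) are not reproduced above, so the following small Python helper only illustrates the bookkeeping described in this subsection: during training one records, for a query point x, the number of cells Hk,j that contain x (the cardinality |Λx|) and the number of those cells that also contain at least one sample (our reading of |Λx,S|, which is an assumption since the definition is elided). The correction factor actually applied in (17) is likewise elided, so it is deliberately not hard-coded here.

```python
def feedback_counts(cells_containing_x, cells_containing_each_sample):
    """Record the two cardinalities used by the fine-tuning (feedback) step.

    cells_containing_x: the cells H_{k,j} (as hashable labels) containing the
        query point x; their number is our stand-in for |Lambda_x|.
    cells_containing_each_sample: one set of cell labels per sample x_i.

    Returns (|Lambda_x|, |Lambda_{x,S}|), where the second count is the number
    of cells containing x that also contain at least one sample.  The feedback
    estimator rescales N_3(x) when the two counts differ; the exact factor is
    given in the (elided) display (17) and is not reproduced here."""
    lambda_x = set(cells_containing_x)
    lambda_x_S = {c for c in lambda_x
                  if any(c in cells for cells in cells_containing_each_sample)}
    return len(lambda_x), len(lambda_x_S)

# Example: x lies in two overlapping cells, but the samples fall in only one
# of them, so |Lambda_x| = 2 while |Lambda_{x,S}| = 1.
print(feedback_counts([("j1", "k3"), ("j2", "k7")],
                      [{("j1", "k3")}, {("j1", "k3")}]))   # (2, 1)
```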
3. Learning Rate Analysis
We consider a standard least squares regression setting in learning theory [15] and assume that the sample set of size m is drawn independently according to some Borel probability measure ρ on . The regression function is then defined by
where ρ(y|x) denotes the conditional distribution at x induced by ρ. Let ρX be the marginal distribution of ρ on and be the Hilbert space of square-integrable functions with respect to ρX on . Our goal is to estimate the distance between the output function N3 and the regression function fρ measured by ||N3 − fρ||ρ, as well as the distance between and fρ.
We say that a function f on is (s, c0)-Lipschitz (continuous) with positive exponent s ≤ 1 and constant c0 > 0, if
and denote by , the family of all (s, c0)-Lipschitz functions that satisfy (18). Our error analysis of N3 will be carried out based on the following two assumptions.
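The display (18) did not survive extraction. For the reader's convenience, the following LaTeX snippet records the standard form of an (s, c0)-Lipschitz condition on a Riemannian manifold; stating it with the geodesic distance dG (rather than the Euclidean distance of the ambient space) is our assumption.

```latex
% One standard formulation of the (s, c_0)-Lipschitz condition (cf. (18)),
% stated for all points x, x' on the manifold:
\[
  |f(x) - f(x')| \;\le\; c_0 \,\bigl(d_G(x, x')\bigr)^{s},
  \qquad 0 < s \le 1,\quad c_0 > 0 .
\]
```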
Assumption 1. There exist an s ∈ (0, 1] and a constant c0 ∈ ℝ+ such that .
This smoothness assumption is standard in learning theory for regression functions (see for example [15, 18, 20, 27–35]).
Assumption 2. ρX is continuous with respect to the geodesic distance dG of the Riemannian manifold.
Note that Assumption 2, which concerns the geometrical structure of ρX, is slightly weaker than the distortion assumption in Shi [36] and Zhou and Jetter [37], but similar to the assumption considered in Meister and Steinwart [38]. The purpose of this assumption is to describe the functionality of fine-tuning.
We are now ready to state the main results of this paper. In the first theorem below, we obtain a learning rate for the constructed deep nets N3.
Theorem 1. Let m be the number of samples and set n = ⌈m^{1/(2s+d)}⌉, where 1/(2n) is the uniform spacing of the points in the definition of N3 in (11). Then under Assumptions 1 and 2,
for some positive constant C1 independent of m.
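As a quick numerical illustration of the parameter choice in Theorem 1, the Python snippet below evaluates n = ⌈m^{1/(2s+d)}⌉, together with the resulting number (2n)^d of cells, for a few sample sizes; the particular values of s and d are of course only examples.

```python
import math

def choose_n(m, s, d):
    """The prescription of Theorem 1: n = ceil(m^(1/(2s+d)))."""
    return math.ceil(m ** (1.0 / (2 * s + d)))

s, d = 1.0, 2                     # example smoothness and manifold dimension
for m in (500, 5000, 50000):      # example sample sizes
    n = choose_n(m, s, d)
    print(m, n, (2 * n) ** d)     # sample size, n, number of cells (2n)^d
```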
Observe that Theorem 1 provides a fast learning rate for the constructed deep net which depends on the manifold dimension d instead of the sample space dimension D. In the second theorem below, we show the necessity of the fine-tuning process as presented in (17), when Assumption 2 is removed.
Theorem 2. Let m be the number of samples and set n = ⌈m^{1/(2s+d)}⌉, where 1/(2n) is the uniform spacing of the points in the definition of N3 in (11), which is used to define in (17). Then under Assumption 1,
for some positive constant C2 independent of m.
Observe that while Assumption 2 is needed in Theorem 1, it is not necessary for the validity of Theorem 2, which theoretically shows the significance of fine-tuning in our construction. The proofs of these two theorems will be presented in the final section of this paper.
4. Related Work and Discussions
The success in practical applications, especially in the fields of computer vision [6] and speech recognition [7], has triggered enormous research activities on deep learning. Several other encouraging results, such as object recognition [24], unsupervised training [39], and artificial intelligence architecture [21], have been obtained to demonstrate further the significance of deep learning. We refer the interested readers to the 2016 MIT monograph, “Deep Learning” [40], by Goodfellow, Bengio and Courville, for further study of this exciting subject, which is only in the infancy of its development.
Indeed, deep learning has already created several challenges to the machine learning community. Among the main challenges are to show the necessity of the usage of deep nets and to theoretically justify the advantages of deep nets over shallow nets. This is essentially a classical topic in Approximation Theory. In particular, dating back to the early 1990's, it was already proved that deep nets can provide localized approximation while shallow nets fail (see for example [8]). Furthermore, it was also shown that deep nets provide high approximation orders that are not restricted by the lower error bounds for shallow nets (see [41, 42]). More recently, stimulated by the avid enthusiasm for deep learning, numerous advantages of deep nets were also revealed from the point of view of function approximation. In particular, certain functions discussed in Eldan and Shamir [9] can be represented by deep nets but cannot be approximated by shallow nets with a polynomially increasing number of neurons; it was shown in Mhaskar and Poggio [10] that deep nets, but not shallow nets, can efficiently approximate functions composed of bivariate ones; it was exhibited in Poggio et al. [11] that deep nets can avoid the curse of dimensionality suffered by shallow nets; a probability argument was given in Lin [43] to show that deep nets have better approximation performance than shallow nets with high confidence; it was demonstrated in Chui and Mhaskar [19] and Shaham et al. [13] that deep nets can improve the approximation capability of shallow nets when the data are located on data-dependent manifolds; and so on. All of these results give theoretical explanations of the significance of deep nets from the Approximation Theory point of view.
As a departure from the work mentioned above, our present paper is devoted to exploring the better performance of deep nets over shallow nets in the framework of Learning Theory. In particular, we are concerned not only with the approximation accuracy but also with the cost to attain such accuracy. In this regard, learning rates of certain deep nets have been analyzed in Kohler and Krzyżak [32], where near-optimal learning rates are provided for a fairly complex regularization scheme, with the hypothesis space being the family of deep nets with two hidden layers proposed in Mhaskar [44]. More precisely, they derived a learning rate of order for functions . This is close to the optimal learning rate of shallow nets in Maiorov [18], different only by a logarithmic factor. Hence, the study in Kohler and Krzyżak [32] theoretically shows that deep nets at least do not downgrade the learning performance of shallow nets. In comparison with Kohler and Krzyżak [32], our study is focused on answering the question: “What is to be gained by deep learning?” The deep net constructed in our paper possesses a learning rate of order , when is an unknown d-dimensional connected C∞ Riemannian manifold (without boundary). This rate is the same as the optimal learning rate [20, Chapter 3] for the special case of the cube under a similar condition, and it is better than the optimal learning rates for shallow nets [18]. Another line of related work is Ye and Zhou [22, 45], where learning rates were deduced for regularized least squares over shallow nets in the same setting as our paper. They derived a learning rate of , which is worse than the rate established in our paper. It should also be mentioned that in a more recent work, Kohler and Krzyżak [46] revealed some advantages of deep nets from the learning theory viewpoint. However, the results in Kohler and Krzyżak [46] require a hierarchical interaction structure, which is totally different from what is presented in our present paper.
Due to the high degree of freedom of deep nets, the numbers and types of parameters of deep nets far exceed those of shallow nets. Thus, it should be of great interest to develop scalable algorithms to reduce the computational burden of deep learning. Distributed learning based on a divide-and-conquer strategy [47, 48] could be a fruitful approach for this purpose. It is also of interest to establish results similar to Theorems 1 and 2 for deep nets with rectifier neurons, by using the rectifier (or ramp) function σ1(t) = t+ as the activation, since the rectifier is one of the most widely used activations in the literature on deep learning. Our research in these directions is postponed to a later work.
5. Proofs of the Main Results
To facilitate our proofs of the theorems stated in section 3, we first establish the following two lemmas.
Observe from Proposition 1 and the definition (11) of the function N3, k,j that
For , define a random function in terms of the random sample by
so that
Lemma 1. Let be a non-empty subset, (j, k) ∈ Λ* and Tk,j(S) be defined as in (22). Then
where if , we set
Proof. Observe from (23) that Tk,j(S)∈{0, 1, …, m} and
By the definition of the fraction , the term with ℓ = 0 above vanishes, so
On the other hand, note from (23) that is equivalent to for ℓ indices i from {1, ⋯, m}, which in turn implies that
Thus, we obtain
Therefore, the desired inequality (24) follows. This completes the proof of Lemma 1. □
Lemma 2. Let be a sample set drawn independently according to ρ. If with a measurable function that depends on , then
for any Borel probability measure μ on .
Proof. Since fρ(x) is the conditional mean of y given , we have from that . Hence,
Thus, along with the inner-product expression
the above equality yields the desired result (25). This completes the proof of Lemma 2. □
We are now ready to prove the two main results of the paper.
Proof of Theorem 1. We divide the proof into four steps, namely: error decomposition, sampling error estimation, approximation error estimation, and learning rate deduction.
Step 1: Error decomposition. Let Ḣk,j be the set of interior points of Hk,j. For arbitrarily fixed k′, j′ and , it follows from (21) that
If, in addition, for each i ∈ {1, …, m}, xi ∈ Ḣk,j for some , then from (12) we have
In view of Assumption 2, for an arbitrary subset A ⊂ ℝD, λG(A) = 0 implies ρX(A) = 0, where λG(A) denotes the Riemannian measure of A. In particular, for A = Hk,j\Ḣk,j in the above analysis, we have ρX(Hk,j\Ḣk,j) = 0, which implies that (26) almost surely holds. Next, set
Then it follows from Lemma 2, with μ = ρX, that
In what follows, the two terms on the right-hand side of (28) will be called sampling error and approximation error, respectively.
Step 2: Sampling error estimation. Due to Assumption 2, we have
On the other hand, (26) and (27) together imply that
almost surely for x ∈ Ḣk,j, and that
where 𝔼[yi|xi] = fρ(xi) in the second equality, and |yi| ≤ M holds almost surely in the inequality. It then follows from Lemma 1 and Assumption 2 that
This, together with (29), implies that
Step 3: Approximation error estimation. According to Assumption 2, we have
For x ∈ Ḣk,j, it follows from Assumption 1, (26) and (27) that
almost surely holds. We then have, from (10) and , that
Now, since , we obtain
so that
holds almost surely. Inserting the above estimate into (31), we obtain
Step 4: Learning rate deduction. Inserting (32) and (30) into (28), we obtain
Since n = ⌈m^{1/(2s+d)}⌉, we have
with
As q* depends only on , C1 is independent of both m and n. This completes the proof of Theorem 1. □
Proof of Theorem 2. As in the proof of Theorem 1, we divide this proof into four steps.
Step 1: Error decomposition. From (17), we have
where is a function defined for by
and hx(x, u) = 0 when the denominator vanishes. Define by
Then it follows from Lemma 2 with μ = ρX, that
In what follows, the terms on the right-hand side of (36) will be called sampling error and approximation error, respectively. By (21), for each and i ∈ {1, ⋯ , m}, we have Φk,j(x, xi) = IHk,j(xi)N3, k, j(x) = IHk,j(xi) for (j, k) ∈ Λx and Φk,j(x, xi) = 0 for (j, k) ∉ Λx, where Λx is defined by (13). This, together with (35), (33), and (34), yields
and
where
Step 2: Sampling error estimation. First consider
For each x ∈ Hk,j, since 𝔼[y|x] = fρ(x), it follows from (37) and |y| ≤ M that
holds almost surely. Since , we apply the Schwarz inequality to to obtain
Thus, from Lemma 1 and (14) we have
This, along with (39), implies that
Step 3: Approximation error estimation. For each , set
and
and observe that
Let us first consider as follows. Since for , we have, from |fρ(x)| ≤ M, that
On the other hand, since
it follows from the elementary inequality
that
We next consider . Let satisfy . Then xi ∈ Hx: = ∪(j, k)∈ΛxHk,j for at least one i ∈ {1, 2, …, m}. For those xi ∉ Hx, we have , so that
For xi ∈ Hx, we have xi ∈ Hk,j for some (j, k) ∈ Λx. But x ∈ Hk,j, so that
But (10) implies that
Hence, for with , we have
and thereby
Therefore, putting (42) and (43) into (41), we have
Step 4: Learning rate deduction. By inserting (40) and (44) into (36), we obtain
Hence, in view of n = ⌈m^{1/(2s+d)}⌉, we have
with
This completes the proof of Theorem 2, since q* depends only on , so that C2 is independent of both m and n. □
Author Contributions
All authors listed have made a substantial, direct and intellectual contribution to the work, and approved it for publication.
Conflict of Interest Statement
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Acknowledgments
The research of CC is partially supported by U.S. ARO Grant W911NF-15-1-0385, Hong Kong Research Council (Grant No. 12300917), and Hong Kong Baptist University (Grant No. HKBU-RC-ICRS/16-17/03). The research of S-BL is partially supported by the National Natural Science Foundation of China (Grant No. 61502342). The work of D-XZ is supported partially by the Research Grants Council of Hong Kong [Project No. CityU 11303915] and by National Natural Science Foundation of China under Grant 11461161006. Part of the work was done during the third author's visit to Shanghai Jiaotong University (SJTU), for which the support from SJTU and the Ministry of Education is greatly appreciated.
References
1. Hinton GE, Osindero S, Teh YW. A fast learning algorithm for deep belief nets. Neural Comput. (2006) 18:1527–54. doi: 10.1162/neco.2006.18.7.1527
2. Chui CK, Li X. Approximation by ridge functions and neural networks with one hidden layer. J Approx Theory (1992) 70:131–41. doi: 10.1016/0021-9045(92)90081-X
3. Cybenko G. Approximation by superpositions of a sigmoidal function. Math Control Signals Syst. (1989) 2:303–14. doi: 10.1007/BF02551274
4. Funahashi KI. On the approximate realization of continuous mappings by neural networks. Neural Netw. (1989) 2:183–92. doi: 10.1016/0893-6080(89)90003-8
5. Lippmann RP. An introduction to computing with neural nets. IEEE ASSP Mag. (1987) 4:4–22. doi: 10.1109/MASSP.1987.1165576
6. Krizhevsky A, Sutskever I, Hinton GE. Imagenet classification with deep convolutional neural networks. In: Neural Information Processing Systems. Granada (2012). p. 1097–1105.
7. Lee H, Pham P, Largman Y, Ng AY. Unsupervised feature learning for audio classification using convolutional deep belief networks. In: Neural Information Processing Systems. Vancouver, BC (2010). p. 469–477.
8. Chui CK, Li X, Mhaskar HN. Neural networks for localized approximation. Math Comput. (1994) 63:607–23. doi: 10.1090/S0025-5718-1994-1240656-2
9. Eldan R, Shamir O. The power of depth for feedforward neural networks. In: Conference on Learning Theory. New York, NY (2016). p. 907–940.
10. Mhaskar H, Poggio T. Deep vs shallow networks: an approximation theory perspective. Anal Appl. (2016) 14:829–48. doi: 10.1142/S0219530516400042
11. Poggio T, Mhaskar H, Rosasco L, Miranda B, Liao Q. Why and when can deep-but not shallow-networks avoid the curse of dimensionality: a review. Int J Auto Comput. (2017) 14:503–19. doi: 10.1007/s11633-017-1054-2
12. Raghu M, Poole B, Kleinberg J, Ganguli S, Sohl-Dickstein J. On the expressive power of deep neural networks. In: Proceedings of the 34th International Conference on Machine Learning, PMLR, Vol. 70 (2017), p. 2847-54.
13. Shaham U, Cloninger A, Coifman RR. Provable approximation properties for deep neural networks. Appl Comput Harmon Anal. (2018) 44:537–57. doi: 10.1016/j.acha.2016.04.003
14. Telgarsky M. Benefits of depth in neural networks. In: 29th Annual Conference on Learning Theory, PMLR Vol. 49 (2016), p. 1517–39.
15. Cucker F, Zhou DX. Learning Theory: An Approximation Theory Viewpoint. Cambridge: Cambridge University Press (2007).
16. Bianchini M, Scarselli F. On the complexity of neural network classifiers: a comparison between shallow and deep architectures, IEEE Trans Neural Netw Learn Syst. (2014) 25:1553–65. doi: 10.1109/TNNLS.2013.2293637
17. Montúfar G, Pascanu R, Cho K, Bengio Y. On the number of linear regions of deep neural networks. In: Neural Information Processing Systems. Lake Tahoe, CA (2014). p. 2924–2932.
18. Maiorov V. Approximation by neural networks and learning theory. J Complex. (2006) 22:102–17. doi: 10.1016/j.jco.2005.09.001
19. Chui CK, Mhaskar HN. Deep nets for local manifold learning. Front Appl Math Stat. (2016) arXiv: 1607.07110.
20. Györfy L, Kohler M, Krzyzak A, Walk H. A Distribution-Free Theory of Nonparametric Regression. Berlin: Springer (2002).
21. Bengio Y. Learning deep architectures for AI, Found. Trends Mach Learn. (2009) 2:1–127. doi: 10.1561/2200000006
22. Ye GB, Zhou DX. Learning and approximation by Gaussians on Riemannian manifolds. Adv Comput Math. (2008) 29:291–310. doi: 10.1007/s10444-007-9049-0
23. Basri R, Jacobs D. Efficient representation of low-dimensional manifolds using deep networks. (2016) arXiv:1602.04723.
24. DiCarlo J, Cox D. Untangling invariant object recognition. Trends Cogn Sci. (2007) 11:333–41. doi: 10.1016/j.tics.2007.06.010
26. Larochelle H, Bengio Y, Louradour J, Lamblin R. Exploring strategies for training deep neural networks. J Mach Learn Res. (2009) 10:1–40.
27. Chang X, Lin SB, Wang Y. Divide and conquer local average regression. Electron J Stat. (2017) 11:1326–50. doi: 10.1214/17-EJS1265
28. Christmann A, Zhou DX. On the robustness of regularized pairwise learning methods based on kernels. J Complex. (2017) 37:1–33. doi: 10.1016/j.jco.2016.07.001
29. Fan J, Hu T, Wu Q, Zhou DX. Consistency analysis of an empirical minimum error entropy algorithm. Appl Comput Harmon Anal. (2016) 41:164–89. doi: 10.1016/j.acha.2014.12.005
30. Guo ZC, Xiang DH, Guo X, Zhou DX. Thresholded spectral algorithms for sparse approximations Anal Appl. (2017) 15:433–55. doi: 10.1142/S0219530517500026
31. Hu T, Fan J, Wu Q, Zhou DX. Regularization schemes for minimum error entropy principle. Anal Appl. (2015) 13:437–55. doi: 10.1142/S0219530514500110
32. Kohler M, Krzyżak A. Adaptive regression estimation with multilayer feedforward neural networks. J Nonparametr Stat. (2005) 17:891–913. doi: 10.1080/10485250500309608
33. Lin SB, Zhou DX. Distributed kernel-based gradient descent algorithms. Constr Approx. (2018) 47:249–76. doi: 10.1007/s00365-017-9379-1
34. Shi L, Feng YL, Zhou DX. Concentration estimates for learning with l1-regularizer and data dependent hypothesis spaces. Appl Comput Harmon Anal. (2011) 31:286–302. doi: 10.1016/j.acha.2011.01.001
35. Wu Q, Zhou DX. Learning with sample dependent hypothesis space. Comput Math Appl. (2008) 56:2896–907. doi: 10.1016/j.camwa.2008.09.014
36. Shi L. Learning theory estimates for coefficient-based regularized regression. Appl Comput Harmon Anal. (2013) 34:252–65. doi: 10.1016/j.acha.2012.05.001
37. Zhou DX, Jetter K. Approximation with polynomial kernels and SVM classifiers. Adv Comput Math. (2006) 25:323–44. doi: 10.1007/s10444-004-7206-2
38. Meister M, Steinwart I. Optimal learning rates for localized SVMs. J Mach Learn Res. (2016) 17:1–44.
39. Erhan D, Bengio Y, Courville A, Manzagol P, Vincent P, Bengio S. Why does unsupervised pre-training help deep learning? J Mach Learn Res. (2010) 11:625–60.
41. Chui CK, Li X, Mhaskar HN. Limitations of the approximation capabilities of neural networks with one hidden layer. Adv Comput Math. (1996) 5:233–43. doi: 10.1007/BF02124745
42. Maiorov V, Pinkus A. Lower bounds for approximation by MLP neural networks. Neurocomputing (1999) 25:81–91. doi: 10.1016/S0925-2312(98)00111-8
43. Lin SB. Limitations of shallow nets approximation. Neural Netw. (2017) 94:96–102. doi: 10.1016/j.neunet.2017.06.016
44. Mhaskar H. Approximation properties of a multilayered feedforward artificial neural network. Adv Comput Math. (1993) 1:61–80. doi: 10.1007/BF02070821
45. Ye GB, Zhou DX. SVM learning and Lp approximation by Gaussians on Riemannian manifolds. Anal Appl. (2009) 7:309–39. doi: 10.1142/S0219530509001384
46. Kohler M, Krzyzak A. Nonparametric regression based on hierarchical interaction models. IEEE Trans Inform. Theory (2017) 63:1620–30. doi: 10.1109/TIT.2016.2634401
Keywords: deep nets, learning theory, deep learning, manifold learning, feedback
Citation: Chui CK, Lin S-B and Zhou D-X (2018) Construction of Neural Networks for Realization of Localized Deep Learning. Front. Appl. Math. Stat. 4:14. doi: 10.3389/fams.2018.00014
Received: 30 January 2018; Accepted: 26 April 2018;
Published: 17 May 2018.
Edited by:
Lixin Shen, Syracuse University, United States
Reviewed by:
Sivananthan Sampath, Indian Institutes of Technology, India
Ashley Prater, United States Air Force Research Laboratory, United States
Copyright © 2018 Chui, Lin and Zhou. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Shao-Bo Lin, sblin1983@gmail.com