Spline representation and redundancies of one-dimensional ReLU neural network models
Abstract
We analyze the structure of a one-dimensional deep ReLU neural network (ReLU DNN) in comparison to the model of continuous piecewise linear (CPL) spline functions with arbitrary knots. In particular, we give a recursive algorithm to transform the parameter set determining the ReLU DNN into the parameter set of a CPL spline function. Using this representation, we show that after removing the well-known parameter redundancies of the ReLU DNN, which are caused by the positive scaling property, all remaining parameters are independent. Moreover, we show that a ReLU DNN with one, two or three hidden layers can represent CPL spline functions with K arbitrarily prescribed knots (breakpoints), where K is the number of real parameters determining the normalized ReLU DNN (up to the output layer parameters). Our findings are useful for fixing a priori conditions on the ReLU DNN to achieve an output with prescribed breakpoints and function values.
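The paper's recursive algorithm for deep networks is not reproduced here; as a minimal illustrative sketch (all parameter values and helper names below are hypothetical, not from the paper), a one-hidden-layer ReLU network f(x) = Σ_k c_k max(w_k x + b_k, 0) + d is a CPL spline whose candidate knots lie at x = -b_k / w_k for each neuron with w_k ≠ 0, and the positive scaling property means that rescaling (w_k, b_k) by a > 0 and c_k by 1/a leaves the realized function unchanged:

```python
import numpy as np

def relu_net(x, w, b, c, d):
    """One-hidden-layer ReLU network: f(x) = sum_k c_k * max(w_k*x + b_k, 0) + d."""
    return c @ np.maximum(np.outer(w, x) + b[:, None], 0.0) + d

def knots(w, b):
    """Candidate breakpoints of the CPL spline realized by the network:
    neuron k with w_k != 0 switches activation at x = -b_k / w_k."""
    mask = w != 0
    return np.sort(-b[mask] / w[mask])

def rescale(w, b, c, a):
    """Positive scaling redundancy: for a > 0, the parameters
    (a*w, a*b, c/a) realize exactly the same function."""
    return a * w, a * b, c / a

# Illustrative parameters: 3 hidden neurons
w = np.array([1.0, -2.0, 0.5])
b = np.array([0.0, 1.0, -1.0])
c = np.array([1.0, 0.5, -2.0])
d = 0.3

xs = knots(w, b)  # breakpoints of the spline: [0.0, 0.5, 2.0]

# Between consecutive knots the function is affine, so its second
# differences on any knot-free interval vanish.
seg = np.linspace(0.6, 1.9, 50)
vals = relu_net(seg, w, b, c, d)

# Scaling the hidden layer by a > 0 and the output weights by 1/a
# changes the parameters but not the function.
w2, b2, c2 = rescale(w, b, c, 3.0)
vals2 = relu_net(seg, w2, b2, c2, d)
```

Normalizing this scaling freedom (e.g. fixing |w_k| = 1 per neuron) is what the abstract refers to as removing the known parameter redundancies before counting the remaining independent parameters.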