Spline representation and redundancies of one-dimensional ReLU neural network models
    We analyze the structure of a one-dimensional deep ReLU neural network (ReLU DNN) in comparison to the model of continuous piecewise linear (CPL) spline functions with arbitrary knots. In particular, we give a recursive algorithm that converts the parameter set determining the ReLU DNN into the parameter set of a CPL spline function. Using this representation, we show that after removing the well-known parameter redundancies of the ReLU DNN, which are caused by the positive scaling property, all remaining parameters are independent. Moreover, we show that a ReLU DNN with one, two or three hidden layers can represent CPL spline functions with K arbitrarily prescribed knots (breakpoints), where K is the number of real parameters determining the normalized ReLU DNN (up to the output layer parameters). Our findings are useful for fixing a priori conditions on the ReLU DNN that achieve an output with prescribed breakpoints and function values.
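    The correspondence described above can be illustrated in the simplest case. The following sketch (not the paper's recursive algorithm, and with arbitrary illustrative parameter values) shows that a one-hidden-layer ReLU network on the real line is a CPL spline whose knots lie at x_i = -b_i / w_i for each hidden unit with nonzero weight w_i; between consecutive knots the network is affine.

```python
def relu(t):
    return max(t, 0.0)

def relu_net(x, w, b, v, c):
    """Evaluate y(x) = sum_i v_i * relu(w_i * x + b_i) + c."""
    return sum(vi * relu(wi * x + bi) for wi, bi, vi in zip(w, b, v)) + c

# Example one-hidden-layer network with three hidden units.
w = [1.0, -2.0, 0.5]
b = [0.0, 1.0, -1.5]
v = [1.0, 0.5, -2.0]
c = 0.25

# Knots (breakpoints) of the induced CPL spline: -b_i / w_i.
knots = sorted(-bi / wi for wi, bi in zip(w, b) if wi != 0.0)

def slope(x0, x1):
    """Finite-difference slope of the network on [x0, x1]."""
    return (relu_net(x1, w, b, v, c) - relu_net(x0, w, b, v, c)) / (x1 - x0)

# Between consecutive knots the function is affine: the slope measured
# on the two halves of a segment agrees.
for k0, k1 in zip(knots, knots[1:]):
    mid = 0.5 * (k0 + k1)
    assert abs(slope(k0, mid) - slope(mid, k1)) < 1e-9
```

    For deeper networks the breakpoint locations depend on the parameters of all layers, which is what the recursive parameter transfer in the paper makes explicit.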

    AMSC: 41A15, 65D05, 68T07
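    The positive scaling redundancy mentioned in the abstract can also be made concrete. Since relu(λt) = λ·relu(t) for λ > 0, rescaling a hidden unit's ingoing weight and bias by λ while dividing its outgoing weight by λ leaves the network function unchanged. A minimal sketch with arbitrary illustrative values:

```python
def relu(t):
    return max(t, 0.0)

def unit(x, w, b, v):
    """Contribution of one hidden unit: v * relu(w * x + b)."""
    return v * relu(w * x + b)

lam = 3.0
w, b, v = 1.5, -0.7, 2.0

# The rescaled unit (lam*w, lam*b, v/lam) computes the same function.
for x in (-2.0, -0.5, 0.0, 0.4, 1.0, 5.0):
    assert abs(unit(x, w, b, v) - unit(x, lam * w, lam * b, v / lam)) < 1e-9
```

    Normalizing each hidden unit, e.g. to |w_i| = 1, removes exactly this redundancy; the independence of the remaining parameters is the nontrivial claim of the paper.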

