1. Memory-optimal neural network approximation
- Author
- Philipp Grohs, Philipp Petersen, Gitta Kutyniok, and Helmut Bölcskei
- Subjects
- Recurrent neural network, Radial basis function network, Stochastic gradient descent, Artificial neural network, Computer science, Time delay neural network, Deep learning, Feedforward neural network, Artificial intelligence, Stochastic neural network, Algorithm
- Abstract
We summarize the main results of a recent theory, developed by the authors, establishing fundamental lower bounds on the connectivity and memory requirements of deep neural networks as a function of the complexity of the function class to be approximated by the network. These bounds are shown to be achievable. Specifically, all function classes that are optimally approximated by a general class of representation systems, so-called affine systems, can be approximated by deep neural networks with minimal connectivity and memory requirements. Affine systems encompass a wealth of representation systems from applied harmonic analysis such as wavelets, shearlets, ridgelets, α-shearlets, and, more generally, α-molecules. This result elucidates a remarkable universality property of deep neural networks and shows that they achieve the optimal approximation properties of all affine systems combined. Finally, we present numerical experiments demonstrating that the standard stochastic gradient descent algorithm generates deep neural networks that provide close-to-optimal approximation rates at minimal connectivity. Moreover, stochastic gradient descent is found to learn approximations that are sparse in the representation system optimally sparsifying the function class the network is trained on.
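As a loose paraphrase of the kind of lower bound described in the abstract (the notation below is ours, not taken from the paper, and it omits technical conditions such as weight quantization and logarithmic factors): if $\mathcal{C}$ denotes the function class, $\gamma^{*}(\mathcal{C})$ its optimal approximation exponent, and $M$ the connectivity of a network $\Phi_M$ (its number of nonzero edge weights), the statement is roughly

$$
\sup_{f \in \mathcal{C}} \, \inf_{\Phi_M} \, \| f - \Phi_M \|_{L^2} \;\gtrsim\; M^{-\gamma^{*}(\mathcal{C})},
$$

with the achievability part saying that function classes optimally sparsified by an affine system admit networks attaining essentially this rate.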
- Published
- 2017
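The numerical experiments summarized in the abstract above train sparsely connected networks with stochastic gradient descent. The following toy sketch is not the authors' code: the target function, network size, learning rate, and pruning threshold are invented for illustration. It trains a one-hidden-layer ReLU network with plain SGD in NumPy and counts above-threshold weights as a crude proxy for connectivity.

```python
# Toy sketch (illustrative only, not the paper's experiments): fit a
# piecewise-smooth target with a one-hidden-layer ReLU network via plain SGD,
# then report how many edge weights remain above a small threshold.
import numpy as np

rng = np.random.default_rng(0)

def target(x):
    # Illustrative piecewise-smooth target: a sine segment followed by a jump.
    return np.where(x < 0.5, np.sin(4 * np.pi * x), 1.0 + 0.2 * x)

# Network: y = W2 @ relu(W1 @ x + b1) + b2
width = 64
W1 = rng.normal(0.0, 1.0, size=(width, 1))
b1 = rng.normal(0.0, 1.0, size=(width, 1))
W2 = rng.normal(0.0, 1.0 / np.sqrt(width), size=(1, width))
b2 = np.zeros((1, 1))

lr, batch, steps = 1e-2, 128, 5000
for step in range(steps):
    x = rng.uniform(0.0, 1.0, size=(1, batch))
    y = target(x)

    # Forward pass.
    pre = W1 @ x + b1           # (width, batch)
    h = np.maximum(pre, 0.0)    # ReLU
    pred = W2 @ h + b2          # (1, batch)

    # Mean-squared error gradients (manual backpropagation).
    err = pred - y
    dpred = 2.0 * err / batch
    gW2 = dpred @ h.T
    gb2 = dpred.sum(axis=1, keepdims=True)
    dh = W2.T @ dpred
    dpre = dh * (pre > 0)
    gW1 = dpre @ x.T
    gb1 = dpre.sum(axis=1, keepdims=True)

    # Plain SGD update.
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

# Crude connectivity proxy: edge weights whose magnitude exceeds a threshold.
threshold = 1e-3
nonzero = int((np.abs(W1) > threshold).sum() + (np.abs(W2) > threshold).sum())
print(f"approx. nonzero edge weights: {nonzero} / {W1.size + W2.size}")
```

Checking whether the learned approximation is sparse in, say, a wavelet system, as the abstract reports for the trained networks, would additionally require expanding the network's output in that system; this step is omitted from the sketch.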