
Nearly-tight VC-dimension and pseudodimension bounds for piecewise linear neural networks

Authors :
Bartlett, Peter L.
Harvey, Nick
Liaw, Chris
Mehrabian, Abbas
Source :
Journal of Machine Learning Research 20 (2019) 1-17
Publication Year :
2017

Abstract

We prove new upper and lower bounds on the VC-dimension of deep neural networks with the ReLU activation function. These bounds are tight for almost the entire range of parameters. Letting $W$ be the number of weights and $L$ be the number of layers, we prove that the VC-dimension is $O(W L \log(W))$, and provide examples with VC-dimension $\Omega( W L \log(W/L) )$. This improves both the previously known upper bounds and lower bounds. In terms of the number $U$ of non-linear units, we prove a tight bound $\Theta(W U)$ on the VC-dimension. All of these bounds generalize to arbitrary piecewise linear activation functions, and also hold for the pseudodimensions of these function classes. Combined with previous results, this gives an intriguing range of dependencies of the VC-dimension on depth for networks with different non-linearities: there is no dependence for piecewise-constant, linear dependence for piecewise-linear, and no more than quadratic dependence for general piecewise-polynomial.

Comment: Extended abstract appeared in COLT 2017; the upper bound was presented at the 2016 ACM Conference on Data Science. This version includes all the proofs and a refinement of the upper bound, Theorem 6. 16 pages, 2 figures
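As a purely illustrative aside (not part of the paper or this record), the sketch below evaluates the quantities appearing in these bounds for a hypothetical fully connected ReLU network: the weight count $W$, the depth $L$, the number of non-linear units $U$, and the expressions $W L \log W$ and $W U$ that set the scale of the stated upper bounds. The layer widths and the convention of counting biases among the weights are assumptions made only for this example.

    # Illustrative sketch (not from the paper): count parameters of a hypothetical
    # fully connected ReLU network and evaluate the bound expressions from the abstract.
    import math

    def bound_quantities(layer_widths):
        """layer_widths = [input_dim, hidden_1, ..., hidden_k, output_dim] (hypothetical)."""
        # W: number of weights (biases included, by assumption)
        W = sum((n_in + 1) * n_out
                for n_in, n_out in zip(layer_widths[:-1], layer_widths[1:]))
        # L: number of layers; U: number of non-linear (hidden ReLU) units
        L = len(layer_widths) - 1
        U = sum(layer_widths[1:-1])
        return {
            "W": W,
            "L": L,
            "U": U,
            "W*L*log(W)": W * L * math.log(W),  # scale of the O(W L log W) bound
            "W*U": W * U,                       # scale of the Theta(W U) bound
        }

    if __name__ == "__main__":
        # Example: 10 inputs, two hidden layers of width 32, one output
        print(bound_quantities([10, 32, 32, 1]))

These are only the raw expressions inside the asymptotic notation; the theorems bound the VC-dimension and pseudodimension up to constant factors.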

Details

Database :
arXiv
Journal :
Journal of Machine Learning Research 20 (2019) 1-17
Publication Type :
Report
Accession number :
edsarx.1703.02930
Document Type :
Working Paper