Learning ReLU Networks on Linearly Separable Data: Algorithm, Optimality, and Generalization
- Source :
- IEEE Transactions on Signal Processing. 67:2357-2370
- Publication Year :
- 2019
- Publisher :
- Institute of Electrical and Electronics Engineers (IEEE), 2019.
Abstract
- Neural networks with Rectified Linear Unit (ReLU) activation functions (a.k.a. ReLU networks) have achieved great empirical success in various domains. Nonetheless, existing results for learning ReLU networks either pose assumptions on the underlying data distribution (e.g., being Gaussian), or require the network size and/or the training set size to be sufficiently large. In this context, the problem of learning a two-layer ReLU network is approached in a binary classification setting, where the data are linearly separable and a hinge loss criterion is adopted. Leveraging the power of random noise perturbation, this paper presents a novel stochastic gradient descent (SGD) algorithm that can provably train any single-hidden-layer ReLU network to attain global optimality, despite the presence of infinitely many bad local minima, maxima, and saddle points in general. This result is the first of its kind, requiring no assumptions on the data distribution, training/network size, or initialization. Convergence of the resultant iterative algorithm to a global minimum is analyzed by establishing both an upper bound and a lower bound on the number of non-zero updates to be performed. Moreover, generalization guarantees are developed for ReLU networks trained with the novel SGD by leveraging classic compression bounds. These guarantees highlight a key difference (at least in the worst case) between reliably learning a ReLU network and a leaky ReLU network in terms of sample complexity. Numerical tests using both synthetic data and real images validate the effectiveness of the algorithm and the practical merits of the theory.
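As a rough illustration of the setting described in the abstract (a two-layer ReLU network trained with SGD on a hinge loss for binary classification), the following minimal Python sketch may be helpful. It is not the authors' perturbed-SGD algorithm: the network parameterization, step size, and the simple "perturb when all hidden units are inactive" rule below are assumptions chosen for illustration only.

    # Minimal sketch: SGD on the hinge loss of a two-layer ReLU network,
    # with a small random perturbation used to escape degenerate updates.
    # All names and hyperparameters are illustrative assumptions.
    import numpy as np

    def relu(z):
        return np.maximum(z, 0.0)

    def train_two_layer_relu(X, y, k=10, eta=0.1, epochs=50, noise_std=1e-3, seed=0):
        """X: (n, d) features; y: (n,) labels in {-1, +1}; k: hidden width."""
        rng = np.random.default_rng(seed)
        n, d = X.shape
        W = rng.normal(scale=0.1, size=(k, d))   # hidden-layer weights (trained)
        v = rng.choice([-1.0, 1.0], size=k)      # fixed second-layer signs (common simplification)
        for _ in range(epochs):
            for i in rng.permutation(n):
                x, yi = X[i], y[i]
                pre = W @ x                       # pre-activations
                f = v @ relu(pre)                 # network output
                if yi * f < 1.0:                  # hinge loss max(0, 1 - y f) is active
                    if not np.any(pre > 0):       # all units inactive: add noise to escape
                        W += rng.normal(scale=noise_std, size=W.shape)
                    else:                         # (sub)gradient step w.r.t. W
                        W -= eta * (-yi * np.outer(v * (pre > 0), x))
        return W, v

On linearly separable data, such a run typically drives the hinge loss toward zero; the paper's contribution is a specific perturbation scheme with a provable guarantee of reaching a global minimum, which this sketch does not reproduce.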
- Subjects :
- FOS: Computer and information sciences
Computer Science - Machine Learning
Computer science
Iterative method
Computer Science - Information Theory
Gaussian
Initialization
Machine Learning (stat.ML)
02 engineering and technology
Machine Learning (cs.LG)
Statistics - Machine Learning
Saddle point
Hinge loss
FOS: Mathematics
0202 electrical engineering, electronic engineering, information engineering
Electrical and Electronic Engineering
Mathematics - Optimization and Control
Linear separability
Artificial neural network
Information Theory (cs.IT)
Deep learning
020206 networking & telecommunications
Rectifier (neural networks)
Maxima and minima
Stochastic gradient descent
Binary classification
Optimization and Control (math.OC)
Signal Processing
Artificial intelligence
Algorithm
Details
- ISSN :
- 1941-0476 and 1053-587X
- Volume :
- 67
- Database :
- OpenAIRE
- Journal :
- IEEE Transactions on Signal Processing
- Accession number :
- edsair.doi.dedup.....00a6abc58d8ad8cd8c7ac499eca0c609