Large Deviations of Gaussian Neural Networks with ReLU activation
- Publication Year :
- 2024
Abstract
- We prove a large deviation principle for deep neural networks with Gaussian weights and (at most linearly growing) activation functions. This generalises earlier work, in which bounded and continuous activation functions were considered. In practice, linearly growing activation functions such as ReLU are most commonly used. We furthermore simplify previous expressions for the rate function and give a power-series expansion for the ReLU case.
- Comment: 13 pages, 2 figures
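
As a rough illustration of the setting described in the abstract, the sketch below runs one forward pass of a deep network with i.i.d. Gaussian weights and ReLU activation. The 1/sqrt(fan-in) variance scaling, the layer widths, and the function name are illustrative assumptions, not the paper's exact normalisation.

```python
import numpy as np

def gaussian_relu_network(x, widths, seed=0):
    """Forward pass of a deep network with i.i.d. Gaussian weights and ReLU
    activation -- a minimal sketch of the setting studied in the paper.
    The 1/sqrt(n_in) weight scaling is an assumption made for illustration."""
    rng = np.random.default_rng(seed)
    h = np.asarray(x, dtype=float)
    for n_out in widths:
        n_in = h.shape[-1]
        # Weights drawn i.i.d. N(0, 1/n_in); ReLU grows at most linearly.
        W = rng.normal(0.0, 1.0 / np.sqrt(n_in), size=(n_in, n_out))
        h = np.maximum(h @ W, 0.0)  # ReLU activation
    return h

# Example: a depth-3 network mapping a 4-dimensional input to a scalar output.
print(gaussian_relu_network(np.ones(4), widths=[16, 16, 1]))
```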
Details
- Database :
- arXiv
- Publication Type :
- Report
- Accession number :
- edsarx.2405.16958
- Document Type :
- Working Paper