
Unifying Low Dimensional Observations in Deep Learning Through the Deep Linear Unconstrained Feature Model

Authors: Garrod, Connall; Keating, Jonathan P.
Publication Year: 2024

Abstract

Modern deep neural networks have achieved high performance across various tasks. Recently, researchers have noted occurrences of low-dimensional structure in the weights, Hessians, gradients, and feature vectors of these networks, spanning different datasets and architectures when trained to convergence. In this analysis, we theoretically demonstrate how these observations arise, and show how they can be unified within a generalized unconstrained feature model that is analytically tractable. Specifically, we consider a previously described structure called Neural Collapse, and its multi-layer counterpart, Deep Neural Collapse, which emerges when the network approaches global optima. This phenomenon explains the other observed low-dimensional behaviours at a layer-wise level, such as the bulk and outlier structure seen in Hessian spectra, and the alignment of gradient descent with the outlier eigenspace of the Hessian. Empirical results in both the deep linear unconstrained feature model and its non-linear equivalent support these predicted observations.

Comment: 35 pages, 14 figures
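To make the unconstrained feature model concrete, the following is a minimal numerical sketch (not the authors' code) of its single-layer version: the last-layer features H are treated as free parameters and optimized jointly with the classifier W under MSE loss with weight decay. At convergence the within-class variability metric (often called NC1) approaches zero, the hallmark of Neural Collapse. All dimensions, hyperparameters, and variable names below are illustrative assumptions, not values from the paper.

# Minimal sketch of the single-layer unconstrained feature model:
# features H are free variables optimized alongside the classifier W.
import numpy as np

rng = np.random.default_rng(0)
C, d, n = 4, 16, 50                        # classes, feature dim, samples per class
N = C * n
Y = np.kron(np.eye(C), np.ones((1, n)))    # one-hot labels, shape (C, N)
labels = np.repeat(np.arange(C), n)

W = rng.normal(scale=0.1, size=(C, d))     # classifier weights
H = rng.normal(scale=0.1, size=(d, N))     # unconstrained "features"
lam, lr = 5e-3, 0.05                       # weight decay, learning rate

for step in range(20000):
    R = W @ H - Y                          # residual of the MSE loss
    gW = R @ H.T / N + lam * W             # gradient w.r.t. W (with decay)
    gH = W.T @ R / N + lam * H             # gradient w.r.t. H (with decay)
    W -= lr * gW
    H -= lr * gH

# NC1 metric: within-class scatter relative to between-class scatter;
# Neural Collapse predicts this tends to zero at the global optimum.
mu = H.mean(axis=1, keepdims=True)
means = np.stack([H[:, labels == c].mean(axis=1) for c in range(C)], axis=1)
Sw = sum(np.cov(H[:, labels == c], bias=True) for c in range(C)) / C
Sb = (means - mu) @ (means - mu).T / C
nc1 = np.trace(Sw @ np.linalg.pinv(Sb)) / C
print(f"NC1 (within/between variability) = {nc1:.2e}")

The deep linear model analyzed in the paper inserts additional linear layers between H and the output; this sketch keeps one layer only to show the mechanism by which free features collapse onto their class means.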

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2404.06106
Document Type: Working Paper