1. Training Invertible Linear Layers through Rank-One Perturbations
- Author
Krämer, Andreas; Köhler, Jonas; Noé, Frank
- Subjects
Chemical Physics (physics.chem-ph); Machine Learning (cs.LG); Machine Learning (stat.ML); Optimization and Control (math.OC); Data Analysis, Statistics and Probability (physics.data-an); MSC classes: 68T07, 82-10
- Abstract
Many types of neural network layers rely on matrix properties such as invertibility or orthogonality. Retaining such properties during optimization with gradient-based stochastic optimizers is a challenging task, which is usually addressed either by reparameterizing the affected parameters or by optimizing directly on the manifold. This work presents a novel approach for training invertible linear layers. Instead of optimizing the network parameters directly, we train rank-one perturbations and add them to the actual weight matrices infrequently. This P$^{4}$Inv update allows keeping track of inverses and determinants without ever explicitly computing them. We show how such invertible blocks improve the mixing, and thus the mode separation, of the resulting normalizing flows. Furthermore, we outline how the P$^{4}$ concept can be utilized to retain properties other than invertibility.
- Comment
17 pages, 10 figures
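The abstract's claim that inverses and determinants can be tracked "without ever explicitly computing them" rests on two classical rank-one identities: the Sherman-Morrison formula for the inverse and the matrix determinant lemma for the determinant. Below is a minimal NumPy sketch of such a tracked update; the function name `rank_one_update` and all variable names are chosen here for illustration and are not taken from the authors' code.

```python
import numpy as np

def rank_one_update(W, W_inv, log_det, u, v):
    """Merge the rank-one perturbation u v^T into W while keeping
    its inverse and log|det| up to date in O(n^2) instead of O(n^3).

    Sherman-Morrison: (W + u v^T)^{-1} = W^{-1} - (W^{-1} u)(v^T W^{-1}) / (1 + v^T W^{-1} u)
    Determinant lemma: det(W + u v^T)  = det(W) * (1 + v^T W^{-1} u)
    """
    Winv_u = W_inv @ u                # W^{-1} u, shape (n,)
    vT_Winv = v @ W_inv               # v^T W^{-1}, shape (n,)
    gamma = 1.0 + v @ Winv_u          # update is invertible iff gamma != 0
    if abs(gamma) < 1e-8:             # reject a perturbation that would (nearly) destroy invertibility
        raise ValueError("perturbation would make W (nearly) singular")
    W_new = W + np.outer(u, v)
    W_inv_new = W_inv - np.outer(Winv_u, vT_Winv) / gamma
    log_det_new = log_det + np.log(abs(gamma))
    return W_new, W_inv_new, log_det_new

# Quick consistency check against direct O(n^3) computation.
rng = np.random.default_rng(0)
n = 4
W = np.eye(n) + 0.1 * rng.standard_normal((n, n))
u, v = rng.standard_normal(n), rng.standard_normal(n)
W2, W2_inv, log_det2 = rank_one_update(
    W, np.linalg.inv(W), np.linalg.slogdet(W)[1], u, v
)
assert np.allclose(W2_inv, np.linalg.inv(W2))
assert np.isclose(log_det2, np.linalg.slogdet(W2)[1])
```

The guard on `gamma` reflects why such updates are applied "infrequently": each merge must be checked (and can be rejected) to keep the weight matrix safely invertible between updates.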
- Published
2020