
Pivotal Auto-Encoder via Self-Normalizing ReLU

Authors :
Goldenstein, Nelson
Sulam, Jeremias
Romano, Yaniv
Publication Year :
2024

Abstract

Sparse auto-encoders are useful for extracting low-dimensional representations from high-dimensional data. However, their performance degrades sharply when the input noise at test time differs from the noise employed during training. This limitation hinders the applicability of auto-encoders in real-world scenarios where the level of noise in the input is unpredictable. In this paper, we formalize single hidden layer sparse auto-encoders as a transform learning problem. Leveraging the transform modeling interpretation, we propose an optimization problem that leads to a predictive model invariant to the noise level at test time. In other words, the same pre-trained model is able to generalize to different noise levels. The proposed optimization algorithm, derived from the square root lasso, is translated into a new, computationally efficient auto-encoding architecture. After proving that our new method is invariant to the noise level, we evaluate our approach by training networks using the proposed architecture for denoising tasks. Our experimental results demonstrate that the trained models yield a significant improvement in stability against varying types of noise compared to commonly used architectures.
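The square-root-lasso connection mentioned in the abstract can be illustrated with a small sketch. In the classical lasso, the soft-thresholding level must be tuned to the noise standard deviation, whereas the square root lasso is pivotal: its penalty weight does not depend on the noise level because the data-fidelity term is an un-squared residual norm. The snippet below is a simplified, hypothetical illustration of this idea as a single-layer sparse encoder — the function names and the specific normalization (threshold scaling with the input norm) are assumptions for exposition, not the authors' exact architecture.

```python
import numpy as np

def soft_threshold(z, tau):
    """Elementwise soft-thresholding (proximal operator of the l1 norm)."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def pivotal_encode(x, W, lam=0.1):
    """Sketch of a noise-level-adaptive sparse encoding (illustrative only).

    The shrinkage threshold scales with the norm of the input, in the
    spirit of the square root lasso, so the same `lam` can be reused
    across noise levels -- unlike a fixed-threshold shrinkage step,
    which must be retuned whenever the noise level changes.
    """
    z = W @ x                                        # linear analysis transform
    tau = lam * np.linalg.norm(x) / np.sqrt(x.size)  # self-normalizing threshold
    return soft_threshold(z, tau)

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 16)) / 4.0
x_clean = rng.standard_normal(16)

# Because the threshold grows with the input's norm, the encoder adapts
# automatically as the noise level sigma changes at test time.
for sigma in (0.1, 1.0):
    x_noisy = x_clean + sigma * rng.standard_normal(16)
    code = pivotal_encode(x_noisy, W)
    print(f"sigma={sigma}: {np.count_nonzero(code)} active atoms")
```

In this toy setup the hyperparameter `lam` is fixed once and the threshold adapts to the input's scale, which is the stability-under-varying-noise property the paper targets; the paper's actual architecture and proofs are given in the full text.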

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2406.16052
Document Type :
Working Paper
Full Text :
https://doi.org/10.1109/TSP.2024.3418971