
Neural Network Memorization Dissection

Authors:
Gu, Jindong
Tresp, Volker
Publication Year:
2019

Abstract

Deep neural networks (DNNs) can easily fit a random labeling of the training data with zero training error. What is the difference between DNNs trained with random labels and those trained with true labels? Our paper answers this question with two contributions. First, we study the memorization properties of DNNs. Our empirical experiments shed light on how DNNs prioritize the learning of simple input patterns. In the second part, we propose to measure the similarity between what different DNNs have learned and memorized. With the proposed approach, we analyze and compare DNNs trained on data with true labels and random labels. The analysis shows that DNNs have "One way to Learn" and "N ways to Memorize". We also use gradient information to gain an understanding of the analysis results.

Comment: Workshop on Machine Learning with Guarantees, NeurIPS 2019
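The premise of the abstract, that a DNN can drive training error to zero even when labels carry no signal, is easy to reproduce. The following is a minimal sketch of that experiment, not the authors' code: it assumes synthetic inputs, a small two-layer MLP, and uniformly random class labels, and shows training accuracy approaching 1.0 through pure memorization.

```python
# Hypothetical sketch (not the paper's implementation): fit a small MLP
# to randomly assigned labels to illustrate that DNNs can memorize data
# with no input-label relationship.
import torch
import torch.nn as nn

torch.manual_seed(0)
n, d, k = 512, 784, 10                      # samples, input dim, classes (assumed sizes)
x = torch.randn(n, d)                       # stand-in inputs
y_random = torch.randint(0, k, (n,))        # labels drawn uniformly at random

model = nn.Sequential(nn.Linear(d, 256), nn.ReLU(), nn.Linear(256, k))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(500):                    # enough steps for this toy setup to memorize
    opt.zero_grad()
    loss = loss_fn(model(x), y_random)
    loss.backward()
    opt.step()

with torch.no_grad():
    train_acc = (model(x).argmax(1) == y_random).float().mean()
print(f"training accuracy on random labels: {train_acc:.3f}")  # approaches 1.0
```

Because the labels are random, any accuracy above chance (0.1 here) on the training set reflects memorization rather than generalization; this is the behavior the paper dissects by comparing such networks with ones trained on true labels.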

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.1911.09537
Document Type:
Working Paper