1. Quantized Proximal Averaging Network for Analysis Sparse Coding
- Authors
-
Nareddy, Kartheek Kumar Reddy; Bulusu, Mani Madhoolika; Pokala, Praveen Kumar; Seelamantula, Chandra Sekhar
- Subjects
FOS: Computer and information sciences; Computer Science - Machine Learning (cs.LG); Computer Science - Computer Vision and Pattern Recognition (cs.CV); Computing Methodologies - Image Processing and Computer Vision; 68T07; I.4.5
- Abstract
We solve the analysis sparse coding problem with a combination of convex and non-convex sparsity-promoting penalties. The multi-penalty formulation results in an iterative algorithm involving proximal averaging. We then unfold the iterative algorithm into a trainable network that facilitates learning the sparsity prior. We also consider quantization of the network weights. Quantization makes neural networks efficient in both memory and computation during inference, and also renders them suitable for deployment on low-precision hardware. Our learning algorithm is based on a variant of the ADAM optimizer in which the quantizer is part of the forward pass, the gradients of the loss function are evaluated at the quantized weights, and a high-precision copy of the weights is maintained for book-keeping. We demonstrate applications to compressed image recovery and magnetic resonance image reconstruction. The proposed approach offers superior reconstruction accuracy and quality compared with state-of-the-art unfolding techniques, and the performance degradation is minimal even when the weights are subjected to extreme quantization.
- Comments
- 8 pages + references, 7 figures and 4 tables
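The core ideas in the abstract can be sketched in a few lines of NumPy. The snippet below is an illustrative toy, not the paper's method: it shows one proximal-averaging iteration combining a convex penalty (l1, via soft thresholding) with a non-convex one (here assumed to be the minimax-concave penalty, via firm thresholding), plus a hypothetical uniform quantizer of the kind that would sit in the forward pass while the high-precision weights are book-kept. The penalty pair, the averaging weight `alpha`, and all parameter values are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def prox_l1(v, lam):
    """Soft thresholding: proximal operator of the convex l1 penalty."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def prox_mcp(v, lam, gamma=2.0):
    """Firm thresholding: proximal operator of the non-convex
    minimax-concave penalty (an assumed example; gamma > 1)."""
    shrunk = (gamma / (gamma - 1.0)) * prox_l1(v, lam)
    return np.where(np.abs(v) <= gamma * lam, shrunk, v)

def prox_avg_step(x, A, y, lam, eta, alpha=0.5):
    """One ISTA-like iteration with proximal averaging: a gradient step
    on 0.5*||Ax - y||^2, then a convex combination (weight alpha) of the
    two proximal maps instead of the prox of the combined penalty."""
    v = x - eta * A.T @ (A @ x - y)
    return alpha * prox_l1(v, eta * lam) + (1.0 - alpha) * prox_mcp(v, eta * lam)

def quantize(w, num_bits=2):
    """Hypothetical symmetric uniform quantizer applied in the forward
    pass; gradients would be taken at quantize(w) while the update is
    applied to the full-precision copy of w."""
    levels = 2 ** (num_bits - 1) - 1
    scale = np.max(np.abs(w)) + 1e-12
    return np.round(w / scale * levels) / levels * scale

# Toy compressed-recovery problem: a 5-sparse vector from 32 measurements.
n, m = 64, 32
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = rng.normal(size=5)
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x_true

x = np.zeros(n)
eta = 1.0 / np.linalg.norm(A, 2) ** 2  # step size from the spectral norm
for _ in range(300):
    x = prox_avg_step(x, A, y, lam=0.05, eta=eta)
```

In the unfolded network of the paper, quantities such as the threshold, the step size, and the averaging weights become learnable per-layer parameters rather than the fixed constants used above.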
- Published
- 2021