
Learning Maximally Monotone Operators for Image Recovery

Authors :
Matthieu Terris
Audrey Repetti
Yves Wiaux
Jean-Christophe Pesquet
OPtimisation Imagerie et Santé (OPIS), Inria Saclay - Île-de-France, Centre de vision numérique (CVN), CentraleSupélec, Université Paris-Saclay
School of Mathematical and Computer Sciences, Heriot-Watt University, Edinburgh (HWU)
School of Engineering and Physical Sciences, Heriot-Watt University, Edinburgh (EPS-HWU)
M. Terris would like to thank Heriot-Watt University for PhD funding. This work was supported by the UK Engineering and Physical Sciences Research Council (EPSRC) under grant EP/T028270/1. The work of J.-C. Pesquet was supported by the Institut Universitaire de France and the ANR Chair in AI BRIDGEABLE.
Source :
SIAM Journal on Imaging Sciences, Society for Industrial and Applied Mathematics, 2021, 14 (3), pp. 1206-1237
Publication Year :
2020
Publisher :
arXiv, 2020.

Abstract

We introduce a new paradigm for solving regularized variational problems. These are typically formulated to address ill-posed inverse problems encountered in signal and image processing. The objective function is traditionally defined by adding a regularization function to a data fit term, which is subsequently minimized by using iterative optimization algorithms. Recently, several works have proposed to replace the operator related to the regularization by a more sophisticated denoiser. These approaches, known as plug-and-play (PnP) methods, have shown excellent performance. Although it has been noticed that, under nonexpansiveness assumptions on the denoisers, the convergence of the resulting algorithm is guaranteed, little is known about characterizing the asymptotically delivered solution. In the current article, we propose to address this limitation. More specifically, instead of employing a functional regularization, we perform an operator regularization, where a maximally monotone operator (MMO) is learned in a supervised manner. This formulation is flexible as it allows the solution to be characterized through a broad range of variational inequalities, and it includes convex regularizations as special cases. From an algorithmic standpoint, the proposed approach consists in replacing the resolvent of the MMO by a neural network (NN). We provide a universal approximation theorem showing that nonexpansive NNs are suitable models for the resolvent of a wide class of MMOs. The proposed approach thus provides a sound theoretical framework for analyzing the asymptotic behavior of first-order PnP algorithms. In addition, we propose a numerical strategy to train NNs corresponding to resolvents of MMOs. We apply our approach to image restoration problems and demonstrate its validity in terms of both convergence and quality.
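To make the algorithmic idea concrete, below is a minimal sketch of a plug-and-play forward-backward iteration in which the resolvent step is carried out by a learned network. This is an illustrative reconstruction of the general PnP scheme described in the abstract, not the authors' exact algorithm or training procedure; the function names, the least-squares data-fit term, and the step-size bound are assumptions chosen for the example, and the denoiser is assumed to be a nonexpansive network standing in for the resolvent of a maximally monotone operator.

    import numpy as np

    def pnp_forward_backward(y, H, Ht, denoiser, gamma, n_iter=100):
        """Plug-and-play forward-backward iteration (illustrative sketch).

        Solves an inverse problem with data-fit term 0.5 * ||H x - y||^2,
        where the proximity/resolvent step of the regularizer is replaced
        by a learned operator `denoiser`, assumed nonexpansive so that it
        can model the resolvent of a maximally monotone operator.

        y        : observed data (numpy array)
        H, Ht    : forward operator and its adjoint (callables)
        denoiser : learned resolvent surrogate, e.g. a 1-Lipschitz NN
        gamma    : step size; 0 < gamma < 2 / ||H||^2 is a standard condition
        """
        x = Ht(y)  # simple initialization by back-projection (assumption)
        for _ in range(n_iter):
            grad = Ht(H(x) - y)             # gradient of the data-fit term
            x = denoiser(x - gamma * grad)  # resolvent step replaced by the NN
        return x

When `denoiser` is the proximity operator of a convex regularizer, this reduces to the classical proximal gradient method; in the more general setting considered in the paper, the iterates can be analyzed as a search for a zero of the sum of the data-fit gradient and the learned maximally monotone operator.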

Details

ISSN :
1936-4954
Database :
OpenAIRE
Journal :
SIAM Journal on Imaging Sciences, Society for Industrial and Applied Mathematics, 2021, 14 (3), pp. 1206-1237
Accession number :
edsair.doi.dedup.....74c6cdef59f22b6adac14fec7c0849bb
Full Text :
https://doi.org/10.48550/arxiv.2012.13247