A relaxed proximal gradient descent algorithm for convergent plug-and-play with proximal denoiser
Jun 12, 2024
Samuel Hurault
Antonin Chambolle
Arthur Leclaire
Nicolas Papadakis
Abstract
In this work, we present new proofs of convergence for plug-and-play (PnP) algorithms. PnP methods are efficient iterative algorithms for solving image inverse problems in which regularization is performed by plugging a pre-trained denoiser into a proximal algorithm, such as Proximal Gradient Descent (PGD) or Douglas–Rachford splitting (DRS). Recent research has explored convergence by incorporating a denoiser that can be written exactly as a proximal operator. However, in these works, the corresponding PnP algorithm has the drawback that it must be run with a stepsize equal to 1. The stepsize condition for nonconvex convergence of the proximal algorithm in use then translates into restrictive conditions on the regularization parameter of the inverse problem, which can severely degrade the restoration capacity of the algorithm. In this paper, we present two remedies for this limitation. First, we provide a novel convergence proof for PnP-DRS that does not impose any restriction on the regularization parameter. Second, we examine a relaxed version of the PGD algorithm that converges for a broader range of regularization parameters. Our experimental study, conducted on deblurring and super-resolution problems, demonstrates that both solutions enhance the accuracy of image restoration.
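To make the generic scheme concrete, below is a minimal sketch of a relaxed PnP-PGD iteration of the form x_{k+1} = (1−α) x_k + α D_σ(x_k − τ ∇f(x_k)) on a toy deblurring problem. The Gaussian blur model, the Gaussian-smoothing stand-in for the learned proximal denoiser D_σ, and the parameter choices are illustrative assumptions; the paper's actual relaxed scheme and trained denoiser may differ from this simplified form.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def denoiser(x, sigma=1.0):
    # Hypothetical stand-in for the pre-trained proximal denoiser D_sigma;
    # in the paper this is a learned network that is exactly a proximal operator.
    return gaussian_filter(x, sigma)

def grad_f(x, y, blur_sigma=2.0):
    # Gradient of the data fidelity f(x) = 0.5 * ||A x - y||^2 for a
    # Gaussian blur A (A is self-adjoint here, so A^T = A).
    Ax = gaussian_filter(x, blur_sigma)
    return gaussian_filter(Ax - y, blur_sigma)

def relaxed_pnp_pgd(y, n_iter=100, tau=1.0, alpha=0.5):
    """Relaxed PnP-PGD sketch: x <- (1-alpha)*x + alpha*D(x - tau*grad_f(x))."""
    x = y.copy()
    for _ in range(n_iter):
        z = x - tau * grad_f(x, y)                  # gradient step on the data term
        x = (1 - alpha) * x + alpha * denoiser(z)   # relaxed denoising (proximal) step
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = rng.random((64, 64))
    y = gaussian_filter(clean, 2.0) + 0.01 * rng.standard_normal((64, 64))
    x_hat = relaxed_pnp_pgd(y)
    print("restoration residual:", np.linalg.norm(x_hat - clean))
```

With α = 1 this reduces to standard PnP-PGD; the relaxation parameter α is what allows convergence over a wider range of regularization strengths in the paper's analysis.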
Type
Publication
In *Journal of Mathematical Imaging and Vision*