A Proximal-Gradient Method for Constrained Optimization
- Publication Year: 2024
Abstract
- We present a new algorithm for solving optimization problems whose objective function is the sum of a smooth function and a (potentially) nonsmooth regularization function, subject to nonlinear equality constraints. The algorithm may be viewed as an extension of the well-known proximal-gradient method, which applies when constraints are not present. To account for the nonlinear equality constraints, we combine a decomposition procedure for computing trial steps with an exact merit function for determining trial step acceptance. Under common assumptions, we show that both the proximal parameter and the merit function parameter eventually remain fixed, and we then prove a worst-case bound on the number of iterations required to compute an iterate satisfying approximate first-order optimality conditions for a given tolerance. Our preliminary numerical results indicate that the approach is promising, especially in its ability to return structured approximate solutions (e.g., sparse solutions when a one-norm regularizer is used).
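For orientation, the sketch below shows the standard (unconstrained) proximal-gradient iteration that the paper extends, applied to a sparse least-squares problem with a one-norm regularizer. It does not implement the paper's constrained algorithm (the step decomposition and exact merit function for nonlinear equality constraints); the function names, step-size choice, and stopping rule are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1 (componentwise soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def proximal_gradient(grad_f, prox_g, x0, alpha, max_iter=1000, tol=1e-8):
    # Unconstrained proximal-gradient iteration:
    #   x_{k+1} = prox_{alpha g}( x_k - alpha * grad f(x_k) ).
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_new = prox_g(x - alpha * grad_f(x), alpha)
        if np.linalg.norm(x_new - x) <= tol * max(1.0, np.linalg.norm(x)):
            return x_new
        x = x_new
    return x

# Example: min_x 0.5*||A x - b||^2 + lam*||x||_1  (sparse least squares).
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
b = rng.standard_normal(30)
lam = 0.5
grad_f = lambda x: A.T @ (A @ x - b)
prox_g = lambda v, a: soft_threshold(v, a * lam)
alpha = 1.0 / np.linalg.norm(A, 2) ** 2   # step size 1/L with L = ||A||_2^2
x_star = proximal_gradient(grad_f, prox_g, np.zeros(10), alpha)
print("nonzero entries in solution:", np.count_nonzero(np.round(x_star, 6)))
```

As in the abstract, the appeal of the proximal step is that it preserves the structure induced by the regularizer (here, exact zeros from the one-norm), which the paper's constrained extension aims to retain in the presence of nonlinear equality constraints.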
Details
- Database: arXiv
- Publication Type: Report
- Accession number: edsarx.2404.07460
- Document Type: Working Paper