1. Benchmarking ADMM in nonconvex NLPs.
- Author
- Rodriguez, Jose S.; Nicholson, Bethany; Laird, Carl; Zavala, Victor M.
- Subjects
- *NONCONVEX programming; *MULTIPLIERS (Mathematical analysis); *LYAPUNOV functions; *MATHEMATICAL models; *NUMERICAL analysis
- Abstract
Highlights
• Exploit connections between decomposition schemes to study convergence in NLP.
• Establish more formalism in benchmarks of different schemes.
• Demonstrate performance on challenging nonconvex problems.

Abstract
We study connections between the alternating direction method of multipliers (ADMM), the classical method of multipliers (MM), and progressive hedging (PH). These connections are used to derive benchmark metrics and strategies to monitor and accelerate convergence, and to help explain why ADMM and PH are capable of solving complex nonconvex NLPs. Specifically, we observe that ADMM is an inexact version of MM and approaches its performance when multiple coordination steps are performed. In addition, we use the observation that PH is a specialization of ADMM and borrow the Lyapunov function and primal-dual feasibility metrics used in ADMM to explain why PH is capable of solving nonconvex NLPs. This analysis also highlights that specialized PH schemes can be derived to tackle a wider range of stochastic programs and even other problem classes. Our exposition is tutorial in nature and seeks to motivate algorithmic improvements and new decomposition strategies. [ABSTRACT FROM AUTHOR]
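For readers unfamiliar with the method the abstract discusses, the sketch below shows a generic scaled-form consensus ADMM iteration on a toy quadratic problem, including the primal and dual residuals used as feasibility metrics of the kind the abstract mentions. This is an illustrative sketch only, not the paper's benchmark code; the toy objective and all names (`admm_quadratic`, `rho`, `iters`) are assumptions for the example.

```python
# Illustrative sketch (not the paper's code): scaled-form consensus ADMM on
#   min (x - 1)^2 + (z - 3)^2   s.t.  x = z     (optimum: x = z = 2)
# monitoring the primal residual (constraint violation) and dual residual
# (scaled change in z), the standard ADMM convergence metrics.

def admm_quadratic(rho=1.0, iters=200):
    """Run ADMM; return final (x, z, primal_residual, dual_residual)."""
    x = z = u = 0.0  # u is the scaled dual variable
    primal_res = dual_res = float("inf")
    for _ in range(iters):
        # x-update: argmin_x (x-1)^2 + (rho/2)(x - z + u)^2  (closed form)
        x = (2.0 + rho * (z - u)) / (2.0 + rho)
        z_old = z
        # z-update: argmin_z (z-3)^2 + (rho/2)(x - z + u)^2  (closed form)
        z = (6.0 + rho * (x + u)) / (2.0 + rho)
        # scaled dual update (one coordination step per iteration)
        u += x - z
        primal_res = abs(x - z)          # feasibility of the consensus constraint
        dual_res = rho * abs(z - z_old)  # dual feasibility proxy
    return x, z, primal_res, dual_res
```

Driving both residuals to zero certifies convergence; performing several such coordination steps before re-solving the subproblems is one way to make ADMM behave more like the exact method of multipliers, as the abstract observes.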
- Published
- 2018