
Inexact Online Proximal-gradient Method for Time-varying Convex Optimization

Authors:
Ajalloeian, Amirhossein
Simonetto, Andrea
Dall'Anese, Emiliano
Publication Year:
2019

Abstract

This paper considers an online proximal-gradient method to track the minimizers of a composite convex function that may continuously evolve over time. The online proximal-gradient method is inexact, in the sense that: (i) it relies on approximate first-order information of the smooth component of the cost; and (ii) the proximal operator (with respect to the non-smooth term) may be computed only up to a certain precision. Under suitable assumptions, convergence of the error iterates is established for strongly convex cost functions. When the cost is not strongly convex, the dynamic regret is investigated instead, under the additional assumption that the problem includes compact feasibility sets. Bounds are expressed in terms of the cumulative error and the path length of the optimal solutions. These results suggest how to allocate resources to strike a balance between performance and precision in the gradient computation and in the proximal operator.

Comment: In Proceedings of ACC-2020
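The abstract's two inexactness sources can be illustrated with a minimal sketch. The following is not the authors' implementation; it assumes a hypothetical time-varying LASSO-type cost f_t(x) = ½‖A_t x − b_t‖² + λ‖x‖₁, models (i) the approximate gradient and (ii) the inexact proximal evaluation as additive perturbations, and takes one proximal-gradient step per time instant:

```python
import numpy as np

rng = np.random.default_rng(0)

def soft_threshold(v, tau):
    # Exact proximal operator of tau * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def inexact_online_prox_grad(A_seq, b_seq, lam, alpha,
                             grad_err=1e-3, prox_err=1e-3):
    """Track minimizers of the (hypothetical) time-varying cost
    f_t(x) = 0.5*||A_t x - b_t||^2 + lam*||x||_1
    with one inexact proximal-gradient step per time t."""
    n = A_seq[0].shape[1]
    x = np.zeros(n)
    trajectory = []
    for A, b in zip(A_seq, b_seq):
        g = A.T @ (A @ x - b)                          # exact gradient of smooth part
        g_hat = g + grad_err * rng.standard_normal(n)  # (i) approximate first-order info
        v = soft_threshold(x - alpha * g_hat, alpha * lam)
        x = v + prox_err * rng.standard_normal(n)      # (ii) prox computed only approximately
        trajectory.append(x.copy())
    return trajectory
```

Shrinking `grad_err` and `prox_err` tightens tracking at higher computational cost, which is the performance/precision trade-off the bounds in the paper quantify.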

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.1910.02018
Document Type:
Working Paper