Online Learning with Inexact Proximal Online Gradient Descent Algorithms
- Publication Year :
- 2018
Abstract
- We consider non-differentiable dynamic optimization problems such as those arising in robotics and subspace tracking. Given the computational constraints and the time-varying nature of the problem, a low-complexity algorithm is desirable, while the accuracy of the solution may only increase slowly over time. We put forth the proximal online gradient descent (OGD) algorithm for tracking the optimum of a composite objective function comprising a differentiable loss function and a non-differentiable regularizer. An online learning framework is considered, and the gradient of the loss function is allowed to be erroneous. Both the gradient error and the dynamics of the function optimum, or target, are adversarial, and the performance of the inexact proximal OGD is characterized via its dynamic regret, which is bounded in terms of the cumulative gradient error and the path length of the target. The proposed inexact proximal OGD is generalized for application to large-scale problems where the loss function has a finite-sum structure. In such cases, evaluating the full gradient may not be viable, and a variance-reduced version is proposed that allows the component functions to be sub-sampled. The efficacy of the proposed algorithms is tested on the problem of formation control in robotics and on the dynamic foreground-background separation problem in video.
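- The abstract describes the method only in prose. The following is a minimal sketch of the inexact proximal OGD update it refers to: at each round, take a gradient step on the smooth loss using a possibly erroneous gradient, then apply the proximal operator of the regularizer. The function names (`inexact_proximal_ogd`, `soft_threshold`), the choice of an L1 regularizer, the Gaussian gradient-error model, and the quadratic tracking loss in the usage snippet are all illustrative assumptions and are not taken from the paper, which treats the gradient error and target dynamics as adversarial rather than random.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def inexact_proximal_ogd(grad_fn, prox_fn, x0, num_rounds, step,
                         grad_error=0.0, rng=None):
    """Track the optimum of a time-varying composite objective
    f_t(x) + r(x) with proximal OGD using inexact gradients.

    grad_fn(t, x): gradient of the smooth loss f_t at x
    prox_fn(v, step): proximal operator of step * r evaluated at v
    grad_error: std. dev. of additive noise modeling gradient inexactness
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    iterates = [x.copy()]
    for t in range(num_rounds):
        g = grad_fn(t, x)
        if grad_error > 0:
            # Inexact gradient: the true gradient is observed with error.
            g = g + grad_error * rng.standard_normal(g.shape)
        # Forward (gradient) step on the loss, backward (proximal) step
        # on the non-differentiable regularizer.
        x = prox_fn(x - step * g, step)
        iterates.append(x.copy())
    return iterates

# Illustrative use: track a slowly drifting target theta_t under the
# quadratic loss f_t(x) = 0.5 * ||x - theta_t||^2, whose gradient is
# x - theta_t, with an L1 regularizer handled by soft-thresholding.
d, T = 5, 200
theta0 = np.zeros(d)
def grad(t, x):
    return x - (theta0 + 0.01 * t)  # drifting target induces a path length

xs = inexact_proximal_ogd(grad, lambda v, s: soft_threshold(v, 0.1 * s),
                          np.zeros(d), T, step=0.5, grad_error=0.05)
```

The variance-reduced variant mentioned in the abstract would replace `grad_fn` with an estimate built from a sub-sample of the component functions when the loss has a finite-sum structure; that mechanism is not sketched here.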
- Subjects :
- Mathematics - Optimization and Control
Details
- Database :
- arXiv
- Publication Type :
- Report
- Accession Number :
- edsarx.1806.00202
- Document Type :
- Working Paper
- Full Text :
- https://doi.org/10.1109/TSP.2018.2890368