
Stochastic Variance Reduction for DR-Submodular Maximization.

Authors :
Lian, Yuefang
Du, Donglei
Wang, Xiao
Xu, Dachuan
Zhou, Yang
Source :
Algorithmica; May 2024, Vol. 86 Issue 5, p1335-1364, 30p
Publication Year :
2024

Abstract

Stochastic optimization has experienced significant growth in recent decades, with variance reduction techniques becoming increasingly prevalent in stochastic optimization algorithms as a means of enhancing computational efficiency. In this paper, we introduce two projection-free stochastic approximation algorithms for maximizing diminishing return (DR) submodular functions over convex constraints, building upon the Stochastic Path Integrated Differential EstimatoR (SPIDER) and its variants. First, we present a SPIDER Continuous Greedy (SPIDER-CG) algorithm for the monotone case that guarantees a (1 - e^{-1})OPT - ε approximation after O(ε^{-1}) iterations and O(ε^{-2}) stochastic gradient computations under the mean-squared smoothness assumption. For the non-monotone case, we develop a SPIDER Frank–Wolfe (SPIDER-FW) algorithm that guarantees a (1/4)(1 - min_{x ∈ C} ‖x‖_∞)OPT - ε approximation with O(ε^{-1}) iterations and O(ε^{-2}) stochastic gradient estimates. To address the practical challenge of requiring a large number of samples per iteration, we introduce a modified gradient estimator based on SPIDER, leading to a Hybrid SPIDER-FW (Hybrid SPIDER-CG) algorithm that achieves the same approximation guarantee as the SPIDER-FW (SPIDER-CG) algorithm with only O(1) samples per iteration. Numerical experiments on both simulated and real data demonstrate the efficiency of the proposed methods. [ABSTRACT FROM AUTHOR]
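To illustrate the structure the abstract describes, the following is a minimal sketch (not the authors' implementation) of a SPIDER-style continuous greedy loop for the monotone case: the gradient estimator is refreshed with a large mini-batch at the start of each epoch and corrected recursively with small mini-batches in between, while a linear maximization oracle over the constraint set supplies the Frank–Wolfe direction. The callables grad_on_sample, sample_batch, and lmo, as well as the parameter defaults, are assumptions supplied here for illustration only.

```python
import numpy as np

def spider_cg(grad_on_sample, sample_batch, lmo, dim, T=100, q=10,
              big_batch=256, small_batch=8):
    """Illustrative sketch of a SPIDER-style continuous greedy loop.

    grad_on_sample(x, batch) -- stochastic gradient of F at x on the given batch (assumed)
    sample_batch(n)          -- draws n i.i.d. samples from the data source (assumed)
    lmo(g)                   -- linear maximization oracle: argmax_{v in C} <v, g> (assumed)
    """
    x = np.zeros(dim)              # continuous greedy starts at the origin
    x_prev = x.copy()
    v = np.zeros(dim)              # SPIDER gradient estimator
    for t in range(T):
        if t % q == 0:
            # Epoch start: refresh the estimator with a large mini-batch.
            v = grad_on_sample(x, sample_batch(big_batch))
        else:
            # SPIDER recursion: the same small batch is evaluated at x_t and x_{t-1},
            # and the difference corrects the running estimate.
            batch = sample_batch(small_batch)
            v = v + grad_on_sample(x, batch) - grad_on_sample(x_prev, batch)
        d = lmo(v)                 # Frank-Wolfe direction inside the constraint set C
        x_prev = x.copy()
        x = x + d / T              # step size 1/T, so the final iterate lies in C
    return x
```

The hybrid variant mentioned in the abstract replaces the large refresh batch with an O(1)-sample estimator; the sketch above only captures the basic SPIDER epoch structure.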

Details

Language :
English
ISSN :
0178-4617
Volume :
86
Issue :
5
Database :
Complementary Index
Journal :
Algorithmica
Publication Type :
Academic Journal
Accession number :
177776405
Full Text :
https://doi.org/10.1007/s00453-023-01195-z