Poster
Stochastic Continuous Greedy ++: When Upper and Lower Bounds Match
Amin Karbasi · Hamed Hassani · Aryan Mokhtari · Zebang Shen
East Exhibition Hall B, C #169
Keywords: [ Submodular Optimization ] [ Optimization ]
Abstract:
In this paper, we develop Stochastic Continuous Greedy++ (SCG++), the first efficient variant of a conditional gradient method for maximizing a continuous submodular function subject to a convex constraint. Concretely, for a monotone and continuous DR-submodular function, SCG++ achieves a tight $[(1-1/e)\text{OPT} - \epsilon]$ solution while using $O(1/\epsilon^2)$ stochastic gradients and $O(1/\epsilon)$ calls to the linear optimization oracle. The best previously known algorithms either achieve a suboptimal $[(1/2)\text{OPT} - \epsilon]$ solution with $O(1/\epsilon^2)$ stochastic gradients or the tight $[(1-1/e)\text{OPT} - \epsilon]$ solution with a suboptimal $O(1/\epsilon^3)$ stochastic gradients. We further provide an information-theoretic lower bound showing that $\Omega(1/\epsilon^2)$ stochastic oracle queries are necessary to achieve $[(1-1/e)\text{OPT} - \epsilon]$ for monotone DR-submodular functions. This result shows that our proposed SCG++ is optimal in terms of both its approximation guarantee, i.e., the $(1-1/e)$ approximation factor, and its stochastic gradient complexity, i.e., $O(1/\epsilon^2)$ calls to the stochastic oracle. By using stochastic continuous optimization as an interface, we also show that the tight $[(1-1/e)\text{OPT} - \epsilon]$ approximation guarantee can be obtained for maximizing a monotone but stochastic submodular set function subject to a general matroid constraint after at most $\mathcal{O}(n^2/\epsilon^2)$ queries to the stochastic function-value oracle, where $n$ is the number of elements in the ground set.
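For intuition, the sketch below illustrates the general stochastic continuous greedy (Frank-Wolfe style) template that SCG++ builds on: take $T$ steps of size $1/T$, each driven by one linear-oracle call on an estimated gradient. This is a minimal, hedged illustration, not the paper's algorithm: the objective, constraint set, noise model, and all names (`lmo_cardinality`, `stochastic_continuous_greedy`) are assumptions, and the simple gradient averaging stands in for the variance-reduction technique that gives SCG++ its $O(1/\epsilon^2)$ rate.

```python
import numpy as np

# Illustrative sketch (assumed setup, not the paper's SCG++): maximize a
# monotone DR-submodular function over {x in [0,1]^n : sum(x) <= k} using
# Frank-Wolfe steps with averaged stochastic gradients.

def lmo_cardinality(grad, k):
    """Linear maximization oracle over {v in [0,1]^n : sum(v) <= k}:
    put mass 1 on the k coordinates with the largest positive gradient."""
    v = np.zeros_like(grad)
    top = np.argsort(grad)[-k:]
    v[top] = (grad[top] > 0).astype(float)
    return v

def stochastic_continuous_greedy(stoch_grad, n, k, T):
    """T conditional-gradient steps of size 1/T; the running average of
    stochastic gradients is a simple stand-in for variance reduction."""
    x = np.zeros(n)
    g_avg = np.zeros(n)
    for t in range(1, T + 1):
        rho = 2.0 / (t + 2)                 # averaging weight (illustrative schedule)
        g_avg = (1 - rho) * g_avg + rho * stoch_grad(x)
        v = lmo_cardinality(g_avg, k)       # one linear-oracle call per step
        x = x + v / T                       # x is an average of feasible points
    return x

# Toy monotone DR-submodular objective: separable concave f(x) = sum_i w_i
# log(1 + x_i), whose gradient w / (1 + x) we corrupt with Gaussian noise
# to mimic a stochastic first-order oracle.
rng = np.random.default_rng(0)
n, k, T = 20, 5, 200
w = rng.uniform(0.5, 1.5, size=n)
noisy_grad = lambda x: w / (1.0 + x) + 0.1 * rng.standard_normal(n)

x_hat = stochastic_continuous_greedy(noisy_grad, n, k, T)
print("top-k support:", np.argsort(x_hat)[-k:])
print("objective value:", np.sum(w * np.log1p(x_hat)))
```

The feasibility of the output follows from the step rule alone: the final point is a convex combination of $T$ feasible vertices, so no projection is ever needed, which is the usual reason conditional-gradient methods suit this setting.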