Practical Budgeted Submodular Maximization

Bibliographic Details
Published in: arXiv.org 2021-02
Main Authors: Feldman, Moran; Nutov, Zeev; Shoham, Elad
Format: Article
Language: English
Description
Summary: We consider the problem of maximizing a non-negative monotone submodular function subject to a knapsack constraint, which is also known as the Budgeted Submodular Maximization (BSM) problem. Sviridenko (2004) showed that by guessing 3 appropriate elements of an optimal solution, and then executing a greedy algorithm, one can obtain the optimal approximation ratio of \(\alpha =1-1/e\approx 0.632\) for BSM. However, the need to guess (by enumeration) 3 elements makes the algorithm of Sviridenko impractical as it leads to a time complexity of \(O(n^5)\) (which can be slightly improved using the thresholding technique of Badanidiyuru & Vondrak (2014), but only to roughly \(O(n^4)\)). Our main results in this paper show that fewer guesses suffice. Specifically, by making only 2 guesses, we get the same optimal approximation ratio of \(\alpha\) with an improved time complexity of roughly \(O(n^3)\). Furthermore, by making only a single guess, we get an almost as good approximation ratio of \(0.6174>0.9767\alpha\) in roughly \(O(n^2)\) time. Prior to our work, the only algorithms that were known to obtain an approximation ratio close to \(\alpha\) for BSM were the algorithm of Sviridenko and an algorithm of Ene & Nguyen (2019) that achieves \((\alpha-\epsilon)\)-approximation. However, the algorithm of Ene & Nguyen requires \({(1/\epsilon)}^{O(1/\epsilon^4)}n\log^2 n\) time, and hence, is of theoretical interest only as \({(1/\epsilon)}^{O(1/\epsilon^4)}\) is huge even for moderate values of \(\epsilon\). In contrast, all the algorithms we analyze are simple and parallelizable, which makes them good candidates for practical use. Recently, Tang et al. (2020) studied a simple greedy algorithm that already has a long research history, and proved that its approximation ratio is at least 0.405. We improve over this result, and show that the approximation ratio of this algorithm is within the range [0.427, 0.462].
ISSN: 2331-8422
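
For intuition, the "simple greedy algorithm with a long research history" mentioned in the summary can be sketched as follows. This is a minimal illustrative Python sketch, not the paper's pseudocode: it assumes the common variant that greedily adds the element with the largest marginal-gain-to-cost ratio that still fits the remaining budget, and then returns the better of that greedy set and the best single feasible element. All names (greedy_bsm, cost, budget, f) and the toy coverage instance are ours, chosen only for illustration.

```python
# Sketch (assumed variant) of the simple greedy algorithm for Budgeted
# Submodular Maximization: density greedy plus a best-single-element check.

def greedy_bsm(elements, cost, budget, f):
    """elements: iterable of items; cost: dict item -> positive cost;
    budget: knapsack capacity; f: monotone submodular set function
    evaluated on a frozenset. Returns an approximate solution set."""
    solution, spent = set(), 0.0
    remaining = set(elements)
    while True:
        base = f(frozenset(solution))
        best, best_ratio = None, 0.0
        for e in remaining:
            if spent + cost[e] > budget:
                continue  # element does not fit the remaining budget
            gain = f(frozenset(solution | {e})) - base
            ratio = gain / cost[e]
            if ratio > best_ratio:
                best, best_ratio = e, ratio
        if best is None:
            break  # no feasible element with positive marginal gain
        solution.add(best)
        spent += cost[best]
        remaining.discard(best)

    # Output the better of the greedy set and the best single feasible element.
    singles = [e for e in elements if cost[e] <= budget]
    best_single = max(singles, key=lambda e: f(frozenset({e})), default=None)
    if best_single is not None and f(frozenset({best_single})) > f(frozenset(solution)):
        return {best_single}
    return solution


if __name__ == "__main__":
    # Toy instance: weighted coverage, a monotone submodular function.
    universe_weights = {"a": 3.0, "b": 2.0, "c": 1.0, "d": 4.0}
    covers = {1: {"a", "b"}, 2: {"c"}, 3: {"d"}, 4: {"a", "d"}}
    cost = {1: 2.0, 2: 1.0, 3: 2.5, 4: 3.0}

    def coverage(S):
        covered = set().union(*(covers[e] for e in S)) if S else set()
        return sum(universe_weights[u] for u in covered)

    print(greedy_bsm(covers.keys(), cost, budget=4.0, f=coverage))
```

On this toy instance the density greedy picks items 1 and 2 (value 6), while the single item 4 alone has value 7, so the final comparison step returns {4}; this illustrates why the comparison with the best single element is part of the algorithm. Sviridenko's \(O(n^5)\) algorithm, and the 2-guess and 1-guess algorithms described in the summary, can be viewed as running such a greedy procedure after enumerating ("guessing") a few elements of an optimal solution.
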