Robust Algorithms for Online Convex Problems via Primal-Dual
Primal-dual methods in online optimization give several of the state-of-the-art results in both of the most common models: adversarial and stochastic/random order. Here we aim to provide a more unified analysis of primal-dual algorithms to better understand the mechanisms behind this important method. With this we are able to recover and extend, in one go, several results from the literature. In particular, we obtain robust online algorithms for fairly general online convex problems: we consider the MIXED model, where in some of the time steps the data is stochastic and in the others it is adversarial. Both the number and the location of the adversarial time steps are unknown to the algorithm. The guarantees of our algorithms interpolate between the (close to) best guarantees for each of the pure models. In particular, the presence of adversarial times does not degrade the guarantee relative to the stochastic part of the instance. Concretely, we first consider Online Convex Programming: at each time a feasible set $V_t$ is revealed, and the algorithm needs to select $v_t \in V_t$ to minimize the total cost $\psi(\sum_t v_t)$, for a convex function $\psi$. Our robust primal-dual algorithm for this problem in the MIXED model recovers and extends, for example, a result of Gupta et al. and recent work on $\ell_p$-norm load balancing by the author. We also consider the problem of Welfare Maximization with Convex Production Costs: at each time a customer presents a value $c_t$ and a resource consumption vector $a_t$, and the goal is to fractionally select customers to maximize the profit $\sum_t c_t x_t - \psi(\sum_t a_t x_t)$. Our robust primal-dual algorithm in the MIXED model recovers and extends the result of Azar et al. Given the ubiquity of primal-dual algorithms, we hope the ideas presented here will be useful in obtaining other robust algorithms in the MIXED or related models.
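To make the Online Convex Programming setting concrete, the following Python sketch mirrors the interface described above: a feasible set $V_t$ (here assumed to be a finite list of candidate vectors) arrives at each time, and a vector $v_t \in V_t$ must be chosen irrevocably, with cost $\psi(\sum_t v_t)$ paid at the end. The selection rule shown, picking the option that is cheapest against the gradient of $\psi$ at the current accumulated load, is only an illustrative first-order greedy heuristic and not the robust primal-dual algorithm of the paper; the function names and the choice $\psi(x) = \|x\|_2^2$ are hypothetical.

```python
import numpy as np

def online_convex_programming(feasible_sets, grad_psi):
    """Toy online loop for the Online Convex Programming setting: at each time t
    a finite feasible set V_t of candidate vectors arrives, and we must pick
    v_t in V_t irrevocably, paying psi(sum_t v_t) at the end.

    The rule below picks the v in V_t that is cheapest against the gradient of
    psi at the current load (a first-order greedy heuristic used only to
    illustrate the interface; it is NOT the paper's primal-dual algorithm).
    """
    load = None
    choices = []
    for V_t in feasible_sets:
        if load is None:
            load = np.zeros_like(V_t[0], dtype=float)
        g = grad_psi(load)                              # current "price" vector
        v_t = min(V_t, key=lambda v: float(np.dot(g, v)))  # cheapest option at these prices
        load = load + v_t
        choices.append(v_t)
    return choices, load

# Example with psi(x) = ||x||_2^2, so grad psi(x) = 2x (a toy load-balancing cost).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sets = [[rng.random(3) for _ in range(4)] for _ in range(10)]
    choices, load = online_convex_programming(sets, grad_psi=lambda x: 2 * x)
    print("final cost psi(load) =", float(np.dot(load, load)))
```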
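Similarly, the sketch below illustrates the Welfare Maximization with Convex Production Costs setting: customer $t$ arrives with value $c_t$ and consumption vector $a_t$, and a fraction $x_t \in [0,1]$ is chosen online, with objective $\sum_t c_t x_t - \psi(\sum_t a_t x_t)$. The accept-if-profitable rule against the current marginal production cost is again just a pricing-style illustration of the model under an assumed differentiable $\psi$, not the robust primal-dual algorithm that recovers the result of Azar et al.; all names are hypothetical.

```python
import numpy as np

def welfare_max_threshold(customers, grad_psi):
    """Toy online loop for Welfare Maximization with Convex Production Costs:
    customer t arrives as a pair (c_t, a_t) of value and consumption vector,
    and we choose x_t in [0, 1], aiming to maximize
    sum_t c_t * x_t - psi(sum_t a_t * x_t).

    The rule below accepts a customer fully iff its value exceeds the current
    marginal production cost <grad psi(load), a_t>; this is only an
    illustration of the model, not the paper's algorithm.
    """
    load = None
    profit_linear = 0.0
    fractions = []
    for c_t, a_t in customers:
        if load is None:
            load = np.zeros_like(a_t, dtype=float)
        marginal_cost = float(np.dot(grad_psi(load), a_t))
        x_t = 1.0 if c_t > marginal_cost else 0.0   # accept iff profitable at current prices
        load = load + x_t * a_t
        profit_linear += c_t * x_t
        fractions.append(x_t)
    return fractions, profit_linear, load

# Example with psi(x) = ||x||_2^2 (grad psi(x) = 2x) and two hand-picked customers.
if __name__ == "__main__":
    customers = [(1.5, np.array([1.0, 0.0])), (0.1, np.array([0.0, 1.0]))]
    x, lin, load = welfare_max_threshold(customers, grad_psi=lambda z: 2 * z)
    print("fractions:", x, " linear profit term:", lin)
```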