A Novel Individually Rational Objective In Multi-Agent Multi-Armed Bandit: Algorithms and Regret Bounds
- Aristide Tossou
- Christos Dimitrakakis
- Jaroslaw Rzepecki
- Katja Hofmann
International Conference on Autonomous Agents and Multiagent Systems (AAMAS)
We study 2-player stochastic multi-armed bandits (MABs) with different expected rewards for each player, a generalization of 2-player general-sum repeated games to settings with stochastic rewards. Our aim is to find the egalitarian bargaining solution (EBS) for this repeated game, which can lead to much higher rewards than the maximin value of both players. Our most important contribution is the derivation of an algorithm, UCRG, that achieves simultaneously, for both players, a high-probability regret bound of order O(³√(ln T) · T²⁄³) after any T rounds of play. We demonstrate that our upper bound is nearly optimal by proving a lower bound of Ω(T²⁄³) for any algorithm. Experiments confirm our theoretical results and the superiority of UCRG compared to the well-known explore-then-commit heuristic.
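To make the objective concrete, here is a minimal sketch of the EBS computation for a small two-player game with *known* mean rewards, compared against each player's maximin value. This is not the paper's UCRG algorithm, which must learn the rewards from bandit feedback; the reward matrices are invented for illustration, the disagreement point is simplified to the pure-strategy maximin, and the LP formulation via scipy is one possible encoding.

```python
# Sketch: egalitarian bargaining solution (EBS) vs. maximin in a known
# two-player game. Illustrative only; not the paper's UCRG algorithm.
import numpy as np
from scipy.optimize import linprog

# Mean rewards r1[a, b], r2[a, b] when player 1 plays arm a and
# player 2 plays arm b (hypothetical values).
r1 = np.array([[0.9, 0.1],
               [0.2, 0.5]])
r2 = np.array([[0.1, 0.8],
               [0.6, 0.4]])

# Disagreement point: each player's pure-strategy maximin value
# (a simplification; the paper's maximin allows mixed strategies).
d1 = r1.min(axis=1).max()   # best worst-case row for player 1
d2 = r2.min(axis=0).max()   # best worst-case column for player 2

# EBS over correlated joint play: find a distribution p over the K joint
# arms maximizing t subject to  p·u_i - d_i >= t  for both players i.
u1, u2 = r1.ravel(), r2.ravel()
K = u1.size
c = np.r_[np.zeros(K), -1.0]            # variables (p_1..p_K, t); maximize t
A_ub = np.array([np.r_[-u1, 1.0],       # t - p·u1 <= -d1
                 np.r_[-u2, 1.0]])      # t - p·u2 <= -d2
b_ub = np.array([-d1, -d2])
A_eq = np.array([np.r_[np.ones(K), 0.0]])  # p sums to one
b_eq = np.array([1.0])
bounds = [(0, None)] * K + [(None, None)]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)

p = res.x[:K]
print(f"maximin (d1, d2) = ({d1:.2f}, {d2:.2f})")
print(f"EBS rewards      = ({p @ u1:.2f}, {p @ u2:.2f})")
```

In this toy game the EBS point gives both players strictly more than their maximin values, which is the gap the paper's regret is measured against.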