Adapting to a Changing Environment: the Brownian Restless Bandits
- Aleksandrs Slivkins,
- Eli Upfal
21st Conference on Learning Theory (COLT)
In the multi-armed bandit (MAB) problem there are k distributions associated with the rewards of playing each of k strategies (slot machine arms). The reward distributions are initially unknown to the player. The player iteratively plays one strategy per round, observes the associated reward, and decides on the strategy for the next round. The goal is to maximize the total reward by balancing exploitation (the use of acquired information) with exploration (learning new information).
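As a generic illustration of this explore/exploit loop (not the algorithm studied in this paper, which concerns restless bandits whose rewards drift over time), here is a minimal sketch of the standard stochastic MAB setting played with an epsilon-greedy strategy; the Bernoulli reward distributions and the `epsilon` parameter are assumptions made purely for the example.

```python
import random

def epsilon_greedy_bandit(arms, rounds, epsilon=0.1):
    """Play a k-armed bandit for `rounds` rounds with epsilon-greedy.

    `arms` is a list of zero-argument callables, each returning a
    stochastic reward when played (stand-ins for the unknown
    reward distributions).
    """
    k = len(arms)
    counts = [0] * k      # times each arm has been played
    means = [0.0] * k     # empirical mean reward of each arm
    total_reward = 0.0

    for t in range(rounds):
        if t < k:
            i = t                                   # play each arm once first
        elif random.random() < epsilon:
            i = random.randrange(k)                 # explore: a uniformly random arm
        else:
            i = max(range(k), key=lambda j: means[j])  # exploit: best empirical arm

        reward = arms[i]()                          # observe the reward this round
        counts[i] += 1
        means[i] += (reward - means[i]) / counts[i] # incremental mean update
        total_reward += reward

    return total_reward

# Example: three Bernoulli arms with (unknown to the player) success
# probabilities 0.2, 0.5, and 0.7 -- values chosen only for illustration.
arms = [lambda p=p: 1.0 if random.random() < p else 0.0 for p in (0.2, 0.5, 0.7)]
print(epsilon_greedy_bandit(arms, rounds=10_000))
```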