A general class of surrogate functions for stable and efficient reinforcement learning
- Sharan Vaswani,
- Olivier Bachem,
- Simone Totaro,
- Robert Müller,
- Shivam Garg,
- Matthieu Geist,
- Marlos C. Machado,
- Pablo Samuel Castro,
- Nicolas Le Roux
2022 International Conference on Artificial Intelligence and Statistics
Common policy gradient methods rely on maximizing a sequence of surrogate functions. In recent years, many such surrogate functions have been proposed, most without strong theoretical guarantees, leading to algorithms such as TRPO, PPO, and MPO. Rather than design yet another surrogate function, we propose a general framework (FMA-PG) based on functional mirror ascent that gives rise to an entire family of surrogate functions. We construct surrogate functions that enable policy improvement guarantees, a property not shared by most existing surrogate functions. Crucially, these guarantees hold regardless of the choice of policy parameterization. Moreover, a particular instantiation of FMA-PG recovers important implementation heuristics (e.g., using the forward vs. reverse KL divergence), resulting in a variant of TRPO with additional desirable properties. Via experiments on simple reinforcement learning problems, we evaluate the algorithms instantiated by FMA-PG. The proposed framework also suggests an improved variant of PPO, whose robustness and efficiency we empirically demonstrate on the MuJoCo suite.
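To make the core idea concrete, the following is a minimal, illustrative sketch (not the paper's exact algorithm) of a KL-regularized surrogate for a single-state, discrete-action softmax policy: the surrogate adds the expected advantage to a reverse-KL penalty anchored at the previous policy, and each outer step maximizes it by gradient ascent in parameter space. All names (`surrogate`, `eta`, the toy advantages) are my own assumptions for illustration.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def surrogate(theta, theta_old, advantages, eta):
    """Illustrative KL-regularized surrogate (not the paper's exact form).

    Expected advantage under pi_theta, minus (1/eta) times the reverse
    KL divergence KL(pi_theta || pi_theta_old) anchoring the update at
    the previous policy. At theta = theta_old the surrogate equals the
    expected advantage under pi_old, so improving the surrogate implies
    improving on the anchor.
    """
    pi = softmax(theta)
    pi_old = softmax(theta_old)
    gain = np.sum(pi * advantages)            # E_{a ~ pi}[A(a)]
    kl = np.sum(pi * np.log(pi / pi_old))     # KL(pi || pi_old)
    return gain - kl / eta

def numeric_grad(f, theta, eps=1e-6):
    """Central-difference gradient; fine for a 3-parameter toy example."""
    g = np.zeros_like(theta)
    for i in range(theta.size):
        t1, t2 = theta.copy(), theta.copy()
        t1[i] += eps
        t2[i] -= eps
        g[i] = (f(t1) - f(t2)) / (2 * eps)
    return g

# One outer mirror-ascent step: fix the anchor theta_old, then run a few
# inner gradient-ascent steps on the surrogate in parameter space.
theta_old = np.zeros(3)                       # uniform initial policy
advantages = np.array([1.0, 0.0, -1.0])       # toy advantage estimates
eta, lr = 1.0, 0.5
theta = theta_old.copy()
for _ in range(50):
    theta += lr * numeric_grad(
        lambda t: surrogate(t, theta_old, advantages, eta), theta
    )
```

With this reverse-KL penalty the inner maximizer is the exponentiated-advantage reweighting `pi* ∝ pi_old * exp(eta * A)`, so the updated policy shifts probability toward the high-advantage action while the 1/eta penalty keeps it close to the anchor; swapping in the forward KL changes the geometry of the update, which is one of the design choices the FMA-PG framework makes explicit.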