A Generalized Framework for Self-Play Training
- Daniel Hernandez,
- Kevin Denamganai,
- Yuan Gao,
- Peter York,
- Sam Devlin,
- Spyridon Samothrakis,
- James Walker
IEEE Conference on Games
Throughout scientific history, overarching theoretical frameworks have allowed researchers to grow beyond personal intuitions and culturally biased theories. Such frameworks allow researchers to verify and replicate existing findings and to link otherwise disconnected results. The notion of self-play, although often invoked in multiagent Reinforcement Learning, has never been grounded in a formal model. We present a formalized framework, with clearly stated assumptions, that captures the meaning of self-play as abstracted from various existing self-play algorithms. This framework is framed as an approximation to a theoretical solution concept for multiagent training. On a simple environment, we qualitatively measure how well a subset of the captured self-play methods approximates this solution concept when paired with the popular PPO algorithm. The results indicate that the trained policies exhibit cyclic evolutions throughout training, showing that self-play research is still at an early stage.
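The idea of abstracting self-play away from any single algorithm can be illustrated with a minimal sketch. The loop below is not the paper's formal model; it is an assumed simplification in which self-play is parameterized by a growing population of past policy checkpoints, an opponent-sampling rule over that population, and an otherwise arbitrary training step. All names (`self_play_training`, `naive`, `uniform`, `menagerie`) are illustrative, and the two sampling rules shown are common variants (always train against the latest policy, or against a uniformly sampled historical one).

```python
import random


def self_play_training(initial_policy, train_step, opponent_sampler, num_iterations):
    """Generic self-play loop (a sketch, not the paper's formal definition).

    - menagerie: the population of past policy checkpoints
    - opponent_sampler: distribution over the menagerie choosing who to train against
    - train_step: any policy-update rule (e.g. one PPO update against the opponent)
    """
    menagerie = [initial_policy]
    policy = initial_policy
    for _ in range(num_iterations):
        opponent = opponent_sampler(menagerie)   # pick a training opponent
        policy = train_step(policy, opponent)    # improve against that opponent
        menagerie.append(policy)                 # checkpoint the new policy
    return policy, menagerie


# Two common instantiations of the opponent-sampling rule:
def naive(menagerie):
    return menagerie[-1]           # always play the most recent policy


def uniform(menagerie):
    return random.choice(menagerie)  # sample uniformly over all past policies
```

Concrete self-play algorithms then differ mainly in the sampling rule and in how (or whether) new policies are admitted into the population.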