A general approach to progressive learning
- Joshua T. Vogelstein
- Hayden S. Helm
- Ronak D. Mehta
- Jayanta Dey
- Will LeVine
- Weiwei Yang
- Bryan Tower
- Jonathan Larson
- Carey E. Priebe
- Chris White
In biological learning, data are used to improve performance simultaneously on the current task and on previously encountered and as-yet-unencountered tasks. In contrast, classical machine learning starts from a blank slate, or tabula rasa, using data only for the single task at hand. While typical transfer learning algorithms can improve performance on future tasks, their performance on prior tasks degrades upon learning new tasks (called catastrophic forgetting). Many recent approaches have attempted to maintain performance on prior tasks when given new tasks. But striving merely to avoid forgetting sets the goal unnecessarily low: the goal of progressive learning, whether biological or artificial, is to improve performance on all tasks, past and future, with any new data. We propose representation ensembling, as opposed to learner ensembling (e.g., bagging), to address progressive learning. We show that representation ensembling, whether the representations are learned by decision forests or by deep networks, uniquely demonstrates improved performance on both past and future tasks in a variety of simulated and real data scenarios, including vision, language, and adversarial tasks, with or without resource constraints. Beyond progressive learning, this work has immediate implications for mitigating batch effects and for federated learning applications. We expect a deeper understanding of the mechanisms underlying biological progressive learning to enable further improvements in machine progressive learning.
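To make the distinction between representation ensembling and learner ensembling concrete, the sketch below illustrates the general idea in Python. It is a deliberately simplified illustration, not the authors' exact algorithm: the `RepresentationEnsemble` class, the use of scikit-learn random forests as representation learners, the leaf-index representation, and the synthetic usage data are all assumptions made for this example. The essential point it captures is that each task contributes a learned representation, and each task's prediction averages voters trained on every available representation, so representations learned on one task can improve predictions on others.

```python
# Illustrative sketch of representation ensembling (not the authors' exact method).
# Each task trains its own representation (a random forest whose leaf assignments
# serve as the representation); each task also keeps one voter per representation,
# and prediction averages the voters' class posteriors across representations.
import numpy as np
from sklearn.ensemble import RandomForestClassifier


class RepresentationEnsemble:
    def __init__(self):
        self.transformers = []   # one representation learner per task
        self.voters = {}         # (task_id, transformer_index) -> voter

    def _represent(self, transformer, X):
        # Leaf indices of every tree act as the learned representation of X.
        return transformer.apply(X)

    def add_task(self, task_id, X, y):
        # Learn a new representation from this task's data ...
        transformer = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
        self.transformers.append(transformer)
        # ... then train a voter for this task on every representation learned so far,
        # so earlier tasks' representations transfer forward to this task. (Updating
        # earlier tasks' voters with the new representation, i.e. backward transfer,
        # would require held-out data per task and is omitted for brevity.)
        for idx, t in enumerate(self.transformers):
            Z = self._represent(t, X)
            self.voters[(task_id, idx)] = RandomForestClassifier(
                n_estimators=10, random_state=0
            ).fit(Z, y)

    def predict(self, task_id, X):
        # Average class posteriors from all voters associated with this task.
        posteriors = [
            self.voters[(task_id, idx)].predict_proba(self._represent(t, X))
            for idx, t in enumerate(self.transformers)
            if (task_id, idx) in self.voters
        ]
        mean_posterior = np.mean(posteriors, axis=0)
        return self.voters[(task_id, 0)].classes_[mean_posterior.argmax(axis=1)]


if __name__ == "__main__":
    # Hypothetical usage on synthetic per-task data.
    rng = np.random.default_rng(0)
    ens = RepresentationEnsemble()
    for task in range(3):
        X = rng.normal(size=(200, 5)) + task
        y = (X[:, 0] > task).astype(int)
        ens.add_task(task, X, y)
    print(ens.predict(0, rng.normal(size=(10, 5))))
```

In contrast, a learner ensemble such as bagging would average the outputs of independently trained learners for a single task; here the ensemble is over representations, with task-specific voters deciding how each representation contributes to each task.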