PipeDream: Generalized Pipeline Parallelism for DNN Training
- Deepak Narayanan,
- Aaron Harlap,
- Amar Phanishayee,
- Vivek Seshadri,
- Nikhil Devanur,
- Greg Ganger,
- Phil Gibbons,
- Matei Zaharia
ACM Symposium on Operating Systems Principles (SOSP 2019)
DNN training is extremely time-consuming, necessitating efficient multi-accelerator parallelization. Current approaches to parallelizing training primarily use intra-batch parallelization, where a single iteration of training is split over the available workers, but suffer from diminishing returns at higher worker counts. We present PipeDream, a system that adds *inter-batch pipelining* to intra-batch parallelism to further improve parallel training throughput, helping to better overlap computation with communication and reduce the amount of communication when possible. Unlike traditional pipelining, DNN training is bi-directional, where a forward pass through the computation graph is followed by a backward pass that uses state and intermediate data computed during the forward pass. Naïve pipelining can thus result in mismatches in state versions used in the forward and backward passes, or excessive pipeline flushes and lower hardware efficiency. To address these challenges, PipeDream versions model parameters for numerically correct gradient computations, and schedules forward and backward passes of different minibatches concurrently on different workers with minimal pipeline stalls. PipeDream also automatically partitions DNN layers among workers to balance work and minimize communication. Extensive experimentation with a range of DNN tasks, models, and hardware configurations shows that PipeDream trains models to high accuracy up to 5.3x faster than commonly used intra-batch parallelism techniques.
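To make the parameter-versioning idea from the abstract concrete, here is a minimal, hypothetical sketch (not PipeDream's actual implementation) of how a pipeline stage might keep the parameter version used in a minibatch's forward pass and reuse it in that minibatch's backward pass, even though newer versions exist by then. All class and method names (`VersionedStage`, `forward`, `backward`, `apply_update`) are illustrative assumptions, and the numeric computations are placeholders.

```python
# Illustrative sketch of parameter versioning for pipelined training.
# Assumption: one stage of the pipeline, parameters stored as a plain dict.

class VersionedStage:
    """Hypothetical pipeline stage that stashes the parameter version per minibatch."""

    def __init__(self, initial_params):
        self.params = dict(initial_params)   # latest parameter version
        self.stash = {}                      # minibatch id -> params used in its forward pass

    def forward(self, minibatch_id, activations):
        # Stash a copy of the parameters used for this minibatch's forward pass,
        # so the later backward pass sees the same version.
        self.stash[minibatch_id] = dict(self.params)
        # ... run the stage's forward computation with self.params ...
        return activations  # placeholder for the stage's output activations

    def backward(self, minibatch_id, grad_output):
        # Retrieve the stashed version so forward and backward passes match.
        params_used = self.stash.pop(minibatch_id)
        # ... compute gradients w.r.t. params_used and grad_output ...
        grads = {name: 0.0 for name in params_used}  # placeholder gradients
        self.apply_update(grads)
        return grad_output  # placeholder for gradient w.r.t. the stage's input

    def apply_update(self, grads, lr=0.1):
        # Updates are applied to the latest parameters, producing a new version.
        for name, g in grads.items():
            self.params[name] -= lr * g


if __name__ == "__main__":
    stage = VersionedStage({"w": 1.0})
    # With pipelining, several minibatches are in flight before any backward pass:
    stage.forward(0, activations=None)
    stage.forward(1, activations=None)
    stage.backward(0, grad_output=None)   # uses the version stashed at forward(0)
    stage.backward(1, grad_output=None)
```

In a real pipelined system the stashed copies are what allow forward and backward work from different minibatches to be interleaved on a stage without flushing the pipeline after every minibatch; the memory cost is bounded by the number of minibatches in flight on that stage.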