Singularity: Planet-Scale, Preemptive and Elastic Scheduling of AI Workloads
- Dharma Shukla,
- Muthian Sivathanu,
- Srinidhi Viswanatha,
- Bhargav Gulavani,
- Rimma Nehme,
- Amey Agrawal,
- Chen Chen,
- Nipun Kwatra,
- Ramachandran Ramjee,
- Pankaj Sharma,
- Atul Katiyar,
- Vipul Modi,
- Vaibhav Sharma,
- Abhishek Singh,
- Shreshth Singhal,
- Kaustubh Welankar,
- Lu Xun,
- Ravi Anupindi,
- Karthik Elangovan,
- Hasibur Rehman,
- Zhou Lin,
- Rahul Seetharaman,
- Cheng Xu,
- Eddie Ailijiang,
- Suresh Krishnappa,
- Mark Russinovich
arXiv preprint
Lowering costs by driving high utilization across deep learning workloads is a crucial lever for cloud providers. We present Singularity, Microsoft’s globally distributed scheduling service for highly efficient and reliable execution of deep learning training and inference workloads. At the heart of Singularity is a novel, workload-aware scheduler that can transparently preempt and elastically scale deep learning workloads to drive high utilization without impacting their correctness or performance, across a global fleet of AI accelerators (e.g., GPUs, FPGAs).
All jobs in Singularity are preemptible, migratable, and dynamically resizable (elastic) by default: a live job can be dynamically and transparently (a) preempted and migrated to a different set of nodes, cluster, data center, or region, and resumed exactly from the point where execution was preempted, and (b) resized (i.e., elastically scaled up or down) across a varying set of accelerators of a given type. Our mechanisms are transparent in that they require neither changes to user code nor the use of custom libraries that might limit flexibility. Additionally, our approach significantly improves the reliability of deep learning workloads. We show that the resulting efficiency and reliability gains with Singularity are achieved with negligible impact on steady-state performance. Finally, our design is agnostic to DNN architecture and handles a variety of parallelism strategies (e.g., data/pipeline/model parallelism).
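Singularity implements preemption and elastic resizing below the framework layer, so user code is unchanged; the paper's abstract does not expose its internals. Purely as a conceptual illustration of the checkpoint/resume/resize lifecycle it describes, the following toy sketch (all names hypothetical, not Singularity's API) shows a job that is preempted mid-run, then resumed from the exact step where it stopped on a different number of workers:

```python
# Conceptual sketch only: a hypothetical stand-in for transparent
# preempt/resume and elastic resize. Singularity does this below the
# framework layer with no user-visible checkpointing code.

import copy

class CheckpointStore:
    """In-memory stand-in for a durable, location-independent checkpoint store."""
    def __init__(self):
        self._snap = None

    def save(self, step, state):
        self._snap = (step, copy.deepcopy(state))

    def load(self):
        return copy.deepcopy(self._snap) if self._snap else None

def train(store, total_steps, world_size, preempt_at=None):
    """Run (or resume) a toy data-parallel job on `world_size` workers.

    A checkpoint is taken after every step, so preemption loses no
    progress; a resumed run may use a different `world_size` (elasticity).
    Returns the final state dict, or None if preempted.
    """
    snap = store.load()
    step, state = snap if snap else (0, {"sum": 0})
    while step < total_steps:
        if preempt_at is not None and step == preempt_at:
            return None  # job preempted; progress survives in the store
        # toy "all-reduce": each of `world_size` workers contributes its shard
        state["sum"] += sum(range(world_size))
        step += 1
        store.save(step, state)
    return state
```

A run preempted at step 5 on 4 workers can be resumed on 8 workers and completes from step 5, not from scratch; real systems must additionally reshard optimizer and model state consistently across the new worker set.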