Pushing the Limits of Narrow Precision Inferencing at Cloud Scale with Microsoft Floating Point

  • Bita Darvish Rouhani,
  • Daniel Lo,
  • Ritchie Zhao,
  • Ming Liu,
  • Jeremy Fowers,
  • Kalin Ovtcharov,
  • Anna Vinogradsky,
  • Sarah Massengill,
  • Lita Yang,
  • Haishan Zhu,
  • Taesik Na,
  • Prerak Patel,
  • Shuai Che,
  • Lok Chand Koppaka,
  • Subhojit Som,
  • Kaustav Das,
  • Saurabh Tiwary,
  • Steve Reinhardt,
  • Eric Chung

NeurIPS 2020


In this paper, we explore the limits of Microsoft Floating Point (MSFP), a new class of datatypes developed for production cloud-scale inferencing on custom hardware. Through the co-evolution of hardware design and algorithms, MSFP achieves accuracy comparable to or better than the industry-standard Bfloat16 and INT8 datatypes at 3x and 4x lower cost, respectively. MSFP incurs negligible impact on accuracy (<1%), requires no changes to the model topology, and is integrated with a mature cloud production pipeline. MSFP supports various classes of deep learning models, including CNNs, RNNs, and Transformers, without modification. Finally, we characterize the accuracy and implementation of MSFP and demonstrate its efficacy on a number of production models, including those that power major online scenarios such as web search, question-answering, and image classification.
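As a rough intuition for the kind of datatype the abstract describes, MSFP is a block floating-point format: a group of values shares a single exponent while each element keeps only a sign and a few mantissa bits, which is what lets the hardware cost drop well below Bfloat16 and INT8. The sketch below is an illustrative simulation of that shared-exponent idea, not the paper's actual implementation; the function name, block size, and bit widths are assumptions chosen for clarity.

```python
import numpy as np

def block_fp_quantize(block, mantissa_bits=4):
    """Simulate shared-exponent (block floating point) quantization.

    Illustrative sketch only: one exponent is shared across the whole
    block, and each element keeps a sign plus `mantissa_bits` of
    magnitude. Not the paper's hardware implementation.
    """
    block = np.asarray(block, dtype=np.float64)
    max_abs = np.max(np.abs(block))
    if max_abs == 0.0:
        return np.zeros_like(block)
    # Shared exponent: taken from the largest-magnitude element.
    shared_exp = np.floor(np.log2(max_abs))
    # Scale so the largest element uses the full mantissa range.
    scale = 2.0 ** (shared_exp - mantissa_bits + 1)
    # Each element is rounded to a signed integer mantissa and clipped.
    max_mant = 2 ** mantissa_bits - 1
    mantissas = np.clip(np.round(block / scale), -max_mant, max_mant)
    # Dequantize back to floats to expose the quantization error.
    return mantissas * scale
```

Small-magnitude elements in a block dominated by a large value lose precision (they may round to zero), which is why the paper's co-design of block size and bit width matters for keeping the accuracy loss under 1%.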