Hidden Progress in Deep Learning: SGD Learns Parities Near the Computational Limit

  • Boaz Barak,
  • Benjamin L. Edelman,
  • Surbhi Goel,
  • Sham M. Kakade,
  • Eran Malach,
  • Cyril Zhang

NeurIPS 2022

There is mounting empirical evidence of emergent phenomena in the capabilities of deep learning methods as we scale up datasets, model sizes, and training times. While there are some accounts of how these resources modulate statistical capacity, far less is known about their effect on the computational problem of model training. This work conducts such an exploration through the lens of learning \(k\)-sparse parities of \(n\) bits, a canonical family of problems that poses theoretical computational barriers. In this setting, we find that neural networks exhibit surprising phase transitions when scaling up dataset size and running time. In particular, we demonstrate empirically that with standard training, a variety of architectures learn sparse parities with \(n^{O(k)}\) examples, with loss (and error) curves abruptly dropping after \(n^{O(k)}\) iterations. These positive results nearly match known statistical query (SQ) lower bounds, even without an explicit sparsity-promoting prior. We elucidate the mechanisms of these phenomena with a theoretical analysis: we find that the phase transition in performance is not due to SGD "stumbling in the dark" until it finds the hidden set of features (a natural algorithm that also runs in \(n^{O(k)}\) time); instead, we show that SGD gradually amplifies a Fourier gap in the population gradient.
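For readers unfamiliar with the task, the following is a minimal sketch (not the authors' code) of the \((n, k)\)-sparse parity setup: inputs are uniform \(\pm 1\) strings, the label is the product of the bits on a hidden size-\(k\) subset, and a small MLP is trained with plain SGD. The architecture, width, learning rate, loss, and batch size here are illustrative assumptions; under suitable settings, the training curve typically shows a long plateau followed by an abrupt drop, matching the phase transitions described above.

```python
# Sketch of the (n, k)-sparse parity learning problem with a small MLP and plain SGD.
# Hyperparameters and architecture are illustrative, not the paper's exact setup.
import torch
import torch.nn as nn

n, k = 30, 3                        # number of input bits and sparsity of the hidden parity
support = torch.randperm(n)[:k]     # hidden subset S of coordinates (unknown to the learner)

def sample_batch(batch_size):
    """Uniform +/-1 inputs; label is the parity (product) of the bits in S."""
    x = torch.randint(0, 2, (batch_size, n), dtype=torch.float32) * 2 - 1
    y = x[:, support].prod(dim=1)
    return x, y

model = nn.Sequential(nn.Linear(n, 128), nn.ReLU(), nn.Linear(128, 1))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()              # square loss on +/-1 labels; other losses would also work

for step in range(200_000):         # expect a long plateau, then an abrupt drop in loss/error
    x, y = sample_batch(256)
    loss = loss_fn(model(x).squeeze(-1), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    if step % 5_000 == 0:
        with torch.no_grad():
            xv, yv = sample_batch(4096)
            err = (model(xv).squeeze(-1).sign() != yv).float().mean()
        print(f"step {step:7d}  loss {loss.item():.4f}  error {err.item():.3f}")
```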