- “No Oops, You Won’t Do It Again: Mechanisms for Self-Correction in Crowdsourcing” by Nihar Shah, UC Berkeley; Dengyong Zhou, Microsoft Research
- “Dropout Distillation” by Samuel Rota Bulò, FBK; Lorenzo Porzi, FBK; Peter Kontschieder, Microsoft Research Cambridge
- “CryptoNets: Applying Neural Networks to Encrypted Data with High Throughput and Accuracy” by Nathan Dowlin, Princeton; Ran Gilad-Bachrach, Microsoft Research; Kim Laine, Microsoft Research; Kristin Lauter, Microsoft Research; Michael Naehrig, Microsoft Research; John Wernsing, Microsoft Research
- “Parameter Estimation for Generalized Thurstone Choice Models (opens in new tab)” by Milan Vojnovic, Microsoft; Seyoung Yun, Microsoft
- “Network Morphism” by Tao Wei, University at Buffalo; Changhu Wang, Microsoft Research; Yong Rui, Microsoft Research; Chang Wen Chen
- “Exact Exponent in Optimal Rates for Crowdsourcing” by Chao Gao, Yale University; Yu Lu, Yale University; Dengyong Zhou, Microsoft Research
- “Doubly Robust Off-Policy Value Evaluation for Reinforcement Learning” by Nan Jiang, University of Michigan; Lihong Li, Microsoft
- “Analysis of Deep Neural Networks with Extended Data Jacobian Matrix” by Shengjie Wang, University of Washington; Abdel-rahman Mohamed, Rich Caruana, Microsoft; Jeff Bilmes, University of Washington; Matthai Philipose, Matthew Richardson, Krzysztof Geras, Gregor Urban, UC Irvine; Ozlem Aslan
- “Analysis of Variational Bayesian Factorizations for Sparse and Low-Rank Estimation” by David Wipf, Microsoft Research
- “Non-Negative Matrix Factorization Under Heavy Noise” by Jagdeep Pani, Indian Institute of Science; Ravindran Kannan, Microsoft Research India; Chiranjib Bhattacharyya; Navin Goyal, Microsoft Research India
- “Optimal Classification with Multivariate Losses” by Nagarajan Natarajan, Microsoft Research India; Oluwasanmi Koyejo, Stanford University and University of Illinois at Urbana-Champaign; Pradeep Ravikumar, UT Austin; Inderjit Dhillon, UT Austin
- “Efficient Algorithms for Adversarial Contextual Learning” by Vasilis Syrgkanis, Microsoft Research; Akshay Krishnamurthy, Microsoft Research; Robert Schapire, Microsoft Research
- “Principal Component Projection Without Principal Component Analysis” by Roy Frostig, Stanford University; Cameron Musco, Massachusetts Institute of Technology; Christopher Musco, Massachusetts Institute of Technology; Aaron Sidford, Microsoft Research
- “Faster Eigenvector Computation via Shift-and-Invert Preconditioning” by Dan Garber, TTI Chicago; Elad Hazan, Princeton University; Chi Jin, UC Berkeley; Sham Kakade; Cameron Musco, Massachusetts Institute of Technology; Praneeth Netrapalli, Microsoft Research; Aaron Sidford, Microsoft Research
- “Efficient Algorithms for Large-Scale Generalized Eigenvector Computation and CCA” by Rong Ge, Chi Jin, UC Berkeley; Sham Kakade; Praneeth Netrapalli, Microsoft Research; Aaron Sidford, Microsoft Research
- “The Label Complexity of Mixed-Initiative Classifier Training” by Jina Suh, Microsoft; Xiaojin Zhu, University of Wisconsin; Saleema Amershi, Microsoft
- “Bayesian Poisson Tucker Decomposition for Learning the Structure of International Relations” by Aaron Schein, Mingyuan Zhou, David Blei, Columbia; Hanna Wallach, Microsoft