Research talk: Differentially private fine-tuning of large language models
Protecting privacy when training ML models has come a long way, particularly for large language models. We recently demonstrated that fine-tuning very large language models, such as GPT-3, with differentially private stochastic gradient descent (DP-SGD) is not only feasible but achieves a very promising privacy-utility tradeoff. In this talk, we highlight the challenges we overcame over the past year and the opportunities this research opens for a range of product applications.
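For context, DP-SGD differs from ordinary SGD in two ways: each example's gradient is clipped to a fixed norm, and calibrated Gaussian noise is added to the summed gradients before the update. The following is a minimal illustrative sketch of one such step on a toy linear-regression model; the function name, model, and hyperparameter values are assumptions for illustration, not the talk's actual implementation.

```python
import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """One DP-SGD step for linear regression with squared loss (toy sketch).

    Per-example gradients are clipped to clip_norm, summed, and Gaussian
    noise with standard deviation noise_multiplier * clip_norm is added
    before averaging -- the core mechanism of DP-SGD.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    clipped_sum = np.zeros_like(w)
    for xi, yi in zip(X, y):
        g = 2 * (xi @ w - yi) * xi              # per-example gradient
        norm = np.linalg.norm(g)
        g = g / max(1.0, norm / clip_norm)      # clip to norm <= clip_norm
        clipped_sum += g
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=w.shape)
    return w - lr * (clipped_sum + noise) / len(X)

# Toy usage on two examples in 2D
w = np.zeros(2)
X = np.array([[1.0, 0.0], [0.0, 1.0]])
y = np.array([1.0, -1.0])
w_new = dp_sgd_step(w, X, y)
```

The clipping bound limits any single example's influence on the update, which is what makes the added noise sufficient for a formal differential-privacy guarantee.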
- Event: Research Summit 2022
- Track: Building User Trust Through Privacy, Identity, and Responsible AI
- Date:
Speakers:
- Huishuai Zhang, Principal Researcher
- Melissa Chase, Principal Researcher