Direct Nash Optimization: Teaching language models to self-improve with general preferences
Corby Rosset, Senior Researcher at Microsoft Research AI Frontiers, discusses teaching language models to self-improve using a preference oracle such as GPT-4. The approach frames post-training as a two-player game whose optimal policy is a Nash equilibrium, and it achieves state-of-the-art win rates against GPT-4 Turbo on benchmarks such as AlpacaEval and MT-Bench.
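As a rough sketch of the framing described above (the notation here is assumed, not taken from the talk): if P(y ≻ y' | x) denotes the probability that the preference oracle prefers response y over y' for prompt x, the two-player game seeks a policy that no competing policy can beat, i.e., a Nash equilibrium of the symmetric preference game:

```latex
% Sketch of the two-player preference game (notation assumed, not from the talk).
% \mathcal{P}(y \succ y' \mid x): probability that the preference oracle
% (e.g., GPT-4) prefers response y over y' for prompt x.
\pi^{\star}
  \;=\; \arg\max_{\pi}\; \min_{\pi'}\;
  \mathbb{E}_{x \sim \rho,\; y \sim \pi(\cdot \mid x),\; y' \sim \pi'(\cdot \mid x)}
  \bigl[\, \mathcal{P}(y \succ y' \mid x) \,\bigr]
```

Assuming the oracle's preferences are complementary, i.e., P(y ≻ y' | x) + P(y' ≻ y | x) = 1, the game is symmetric and its value is 1/2: the equilibrium policy plays against a copy of itself, and no deviating policy can push its expected win rate against it above 1/2.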
Corby Rosset
Senior Researcher
Series: Microsoft Research Forum