Microsoft Research Blog
| Tom Hartvigsen and Hamid Palangi
Lifelong model editing fixes mistakes discovered after a model is deployed. This work could extend sequential editing to model properties such as fairness and privacy, enabling a new class of solutions for adapting LLMs over long deployment lifetimes.
| Boxin Wang, Bo Li, and Zinan Lin
This paper received the outstanding benchmarks track paper award at NeurIPS 2023. How trustworthy are generative pre-trained transformer (GPT) models? To answer this question, the University of Illinois Urbana-Champaign, together with Stanford University, University of California, Berkeley,…