Using platform models responsibly: Developer tools with human-AI partnership at the center
How we develop AI systems is changing. We can do it faster and more easily than before thanks to the emergence of platform models—that is, large-scale models trained on vast and diverse data that can be readily adapted to new applications. Spanning modalities, these models can drive a broad range of applications. Platform models present opportunities we’re only beginning to grasp and risks that require greater understanding and long-term solutions. At Microsoft, researchers are exploring how to harness the capabilities of platform models in ways that maximize their benefit to society.
Learn about the methods and tools Microsoft researchers are working on to help build reliable AI applications with platform models:
- (De)ToxiGen: Leveraging large language models to build more robust hate speech detection tools (blog post)
- ToxiGen: A Large-Scale Machine-Generated Dataset for Adversarial and Implicit Hate Speech Detection (download)
- Partnering people with large language models to find and fix bugs in NLP systems (blog post)
- Adaptive Testing and Debugging of NLP models (video)
- AdaTest (download)
Ece Kamar
VP and Lab Director of AI Frontiers