Panel: Large-scale neural platform models: Opportunities, concerns, and directions
Large-scale, pretrained neural models are driving significant research and development across multiple AI areas. They have played a major role in research efforts and have been at the root of leaps forward in capabilities in natural language processing, computer vision, and multimodal reasoning. Over the last five years, large-scale neural models have evolved into platforms, where fixed large-scale “platform models” are adapted via fine-tuning to develop capabilities on specific tasks. Research continues, and we have much to learn. While there is excitement about demonstrated capabilities, the “models as platforms” paradigm is concurrently raising questions and framing discussions about a constellation of concerns. These include challenges with safety and responsibility regarding the understandability of emergent behaviors, the potential for systems to generate offensive output, and malevolent uses of new capabilities. Other discussion focuses on the cost of building platform models and on the rise of haves and have-nots, where only a few industry organizations can construct platform models. Microsoft Chief Scientific Officer Eric Horvitz will lead an expert panel on neural platform models, discussing research directions, responsible practices, and directions forward on key concerns.
Learn more about the 2021 Microsoft Research Summit: https://Aka.ms/researchsummit
- Event:
- Microsoft Research Summit 2021
- Track:
- Deep Learning & Large-Scale AI
- Date:
- Speakers:
- Eric Horvitz, Miles Brundage, Yejin Choi, Percy Liang
- Affiliation:
- Microsoft, OpenAI, University of Washington / AI2, Stanford University
Eric Horvitz
Chief Scientific Officer

Percy Liang
Researcher
Deep Learning & Large-Scale AI
Research talk: Resource-efficient learning for large pretrained models
Speakers: Subhabrata (Subho) Mukherjee
Research talk: Prompt tuning: What works and what's next
Speakers: Danqi Chen
Research talk: NUWA: Neural visual world creation with multimodal pretraining
Speakers: Lei Ji, Chenfei Wu
Research talk: Towards self-learning end-to-end dialog systems
Speakers: Baolin Peng
Research talk: WebQA: Multihop and multimodal
Speakers: Yonatan Bisk
Research talk: Closing the loop in natural language interfaces to relational databases
Speakers: Dragomir Radev
Roundtable discussion: Beyond language models: Knowledge, multiple modalities, and more
Speakers: Yonatan Bisk, Daniel McDuff, Dragomir Radev