AI Explainer: Foundation models and the next era of AI


The release of OpenAI’s GPT-4 is a significant advance that builds on several years of rapid innovation in foundation models. GPT-4, which was trained on the Microsoft Azure AI supercomputer, has exhibited significantly improved abilities across many dimensions, from summarizing lengthy documents, to answering complex questions about a wide range of topics and explaining the reasoning behind those answers, to telling jokes and writing code and poetry.

Microsoft Senior Principal Research Manager Ahmed H. Awadallah was among a group of researchers across the company who have worked in partnership with OpenAI over several months to evaluate this new model’s capabilities. In this video, recapped below, he tells the story of the technical innovations in recent years that have brought us to this moment: the surprising progress of GPT-4’s predecessor models, leading up to the capabilities demonstrated in ChatGPT, and the integration of the latest models into Bing.

While watching this video, you can hover to see video chapter titles and jump directly to those you’re interested in.

Introduction to foundation models [00:00-11:01]

Over the last decade, AI has made significant progress on perception tasks like image recognition and language processing. More recently, the field has been witnessing new advances in the form of generative AI, underpinned by a class of large-scale models known as foundation models. Foundation models are trained on massive amounts of data and are capable of performing a wide range of tasks. With a simple natural language prompt like “describe a scene of the sun rising over the beach,” generative AI models can output a detailed description or produce an image based on the generated description, which can then be animated or even turned into video. Many recent language models are not only good at generating text but also at generating, explaining, and debugging code.


Three components have been driving these advances:

  • The transformer architecture: A popular choice across modalities, the transformer architecture is efficient, easy to scale and parallelize, and can model interdependence between different components in input and output data.
  • Scale: Growing model size and the use of increasingly large amounts of training data have given rise to what are termed “emergent capabilities.” When models reach a critical size, they begin displaying abilities not present in smaller models.
  • In-context learning: Showing potential on a range of applications, from text classification to translation and summarization, this new training paradigm provides pre-trained models with instructions for new tasks or just a few examples instead of training or fine-tuning models on labeled data. Because no additional data or training is needed and prompts are provided in natural language, models can be applied right out of the box and aren’t limited to those with developer experience.
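To make the in-context learning paradigm concrete, here is a minimal sketch of what zero-shot and few-shot prompts can look like for sentiment classification. The prompts are plain strings; the call to a model is left as a commented placeholder because no specific API is assumed here.

```python
# A minimal sketch of zero-shot vs. few-shot prompts for sentiment classification.
# Sending a prompt to a model is left as a placeholder; no specific API is assumed.

zero_shot = (
    "Classify the sentiment of the following review as positive or negative.\n"
    "Review: The battery died after two days.\n"
    "Sentiment:"
)

few_shot = (
    "Review: I loved every minute of it.\nSentiment: positive\n\n"
    "Review: The seats were cramped and the food was cold.\nSentiment: negative\n\n"
    "Review: It was fine, nothing special.\nSentiment: neutral\n\n"
    "Review: The battery died after two days.\nSentiment:"
)

# response = your_model.complete(few_shot)   # hypothetical call to a completion endpoint
print(few_shot)
```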

From GPT-3 to ChatGPT – a jump in generative capabilities [11:02-19:07]

With the November 2022 release of ChatGPT, a language model optimized for dialogue, we saw exciting developments in text generation. Compared with GPT-3, an earlier language model in the GPT family, ChatGPT not only provides longer, more thorough, and more structured responses to questions and instructions but can also produce answers in different styles, or tones, and tailor explanations to different audiences, like a child, a first-year college student, or someone with a PhD.

Earlier language models such as GPT-3 were trained to predict the next word in a sentence using large amounts of text from the web with no direct human supervision. Several additional training approaches have helped fuel the improved performance of later models such as ChatGPT. These models are trained on code in addition to text, which seems to help them identify relationships between different parts of a text. The result is models that are better at following instructions and reasoning than models trained on text alone. Human-generated data is also contributing to better outputs. Instruction tuning adds the step of training models on prompts and responses created by a human, while human rankings of model-generated responses are used to train a reward model, which in turn is used to train the main model with reinforcement learning.
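To illustrate the two kinds of human-generated data described above, here is a schematic sketch of what instruction-tuning examples and preference rankings can look like. The field names and values are illustrative only and do not reflect any specific dataset format.

```python
# Schematic sketch of the two kinds of human-generated data described above.
# Field names and values are illustrative, not the format of any specific dataset.

# 1) Instruction tuning: humans write a prompt and a demonstration response;
#    the model is fine-tuned to reproduce the response given the prompt.
instruction_example = {
    "prompt": "Explain photosynthesis to a ten-year-old.",
    "response": "Plants use sunlight, water, and air to make their own food...",
}

# 2) Preference data: humans rank several model-generated responses to the same
#    prompt; the ranking trains a reward model used for reinforcement learning.
preference_example = {
    "prompt": "Explain photosynthesis to a ten-year-old.",
    "responses": ["Plants eat sunlight...", "Photosynthesis is a biochemical..."],
    "ranking": [0, 1],  # index 0 preferred over index 1
}
```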

The fast-paced advancements demonstrated by these models have challenged one of the traditional methods used to measure progress: benchmarks. Improvements are happening so fast that benchmarks are becoming obsolete, with many solved or saturated as quickly as they come out.

Everyday impact: Integrating foundation models and products [19:09-27:20]

Foundation models are already appearing in products available today. For example, GitHub Copilot leverages OpenAI Codex to assist in writing code. The AI pair programmer has been shown not only to make developers feel more productive but also to help them actually get more done. A GitHub study found participants using Copilot were 55 percent more productive than participants without access to Copilot.

Combining language models optimized for dialogue with external knowledge sources and tools is another avenue for improved experiences. The new Bing, for instance, brings together these models and search. Years of research have yielded insight into the web search experience; much of it involves reviewing and synthesizing information across a variety of resources identified via multiple queries, which is time-consuming. The new Bing can do the heavy lifting for the searcher, working behind the scenes to make the necessary queries, collect results, synthesize the information, and present a single complete answer.
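The “generate queries, collect results, synthesize” pattern can be sketched roughly as below. This is not how Bing is actually implemented; `generate_queries`, `web_search`, and `synthesize_answer` are hypothetical stand-ins for calls to a language model and a search backend.

```python
# Simplified sketch of the "query fan-out and synthesize" pattern described above.
# All three helpers are hypothetical stand-ins, not a real model or search API.

from typing import List

def generate_queries(question: str) -> List[str]:
    """Ask the language model to break a complex question into focused search queries."""
    ...

def web_search(query: str) -> List[str]:
    """Return text snippets from a search backend for one query."""
    ...

def synthesize_answer(question: str, snippets: List[str]) -> str:
    """Ask the language model to write one answer grounded in the collected snippets."""
    ...

def answer(question: str) -> str:
    queries = generate_queries(question)                  # e.g. a handful of focused queries
    snippets = [s for q in queries for s in web_search(q)]
    return synthesize_answer(question, snippets)
```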

Large language models and foundation models more broadly are not without their limitations, however. There are issues such as reliability, accuracy, staleness, and provenance that need to be explored. Additionally, each specific application of one of these models comes with its own challenges and opportunities. For example, in applying foundation models to web search, we need to rethink the overall user experience, including how people interact with search and how we improve, measure, and personalize the experience over time.


Transcript

Introduction to foundation models [00:00–11:01]

Hello, everyone. My name is Ahmed Awadallah. I am a researcher here at Microsoft Research. Today, I am going to be talking about foundation models and the impact they are having on the current era of AI.

If we look back at the last five to 10 years, AI has been making a significant impact on many perception tasks like image and object recognition, speech recognition, and most recently on language understanding tasks, where we have seen different AI models achieving superior performance and in many cases reaching performance equal to what a human annotator would do on the same task. Over the last couple of years, though, the frontier of AI has shifted toward generative AI.

We have had quite good text generation models for some time. You could prompt a model by asking it to describe an imaginary scene, and it would produce a very good description of what you asked for. And then we started making a lot of progress on image generation as well. With models like DALL-E 2 and Imagen, and even models coming out from startups like Midjourney and Stability AI, we have been getting to a level of quality of image generation that we have never seen before. Inspired by that, there has also been a lot of work on animating the generated images or even generating videos from scratch. Another frontier for generative models has been code: not only generating code based on a text prompt but also explaining the code or in some cases even debugging it. I was listening to an episode of Morning Edition on NPR when it aired at the beginning of February where they were attempting to use a bunch of AI models to produce a schematic design of a rocket and also to come up with some equations for the rocket design. And, of course, the hypothetical design would have crashed and burned, but I couldn’t help but think how exciting it is that AI has become so good that we are even attempting to measure its proficiency in a field as complex as rocket science.

[2:11] If we look back, we will find that there are three main components that led to the current performance we are seeing from AI models: the transformer architecture, scale, and in-context learning. The transformer in particular has been dominating the field of AI for the previous years. It started in natural language processing, and the architecture was so effective that it took over the field within a very short amount of time. The transformer is a very efficient architecture that’s easy to scale, easy to parallelize, and relies at its heart on the attention mechanism, a technique that allows us to model interdependence between different components, or different tokens, in our input and output data. Transformers started off mostly in natural language processing, but slowly but surely, they made their way to pretty much every modality. So now we are seeing that models operating on images, on videos, on audio, and many other modalities are also using transformers. Five years after their inception, transformers have changed surprisingly little, despite so many attempts at producing better and more efficient variants, perhaps because the gains were limited to certain use cases, or perhaps because the gains did not persist at scale. Another potential reason is that maybe they made the architecture less universal, which has been one of its biggest advantages.
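As a concrete reference for the attention mechanism mentioned here, below is a minimal NumPy sketch of single-head scaled dot-product attention. Real transformer implementations add learned projections, multiple heads, and masking; this only shows the core weighted-averaging step.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Minimal single-head attention: each output position is a weighted average
    of the value vectors V, with weights given by how well the query at that
    position matches each key."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                    # (seq_q, seq_k) pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over the keys
    return weights @ V                                 # (seq_q, d_v) attended values

# Toy example: 4 tokens with 8-dimensional queries, keys, and values.
rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
print(scaled_dot_product_attention(Q, K, V).shape)     # (4, 8)
```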

[03:53] The next point is scale, and when we talk about scale, we really mean the amount of compute that’s being used to train the model. That can translate into training bigger and bigger models with a larger and larger number of parameters (and we have been seeing a steady increase in that over the previous years), but scale can also mean training the model on larger and larger amounts of data. We have seen different models over the previous few years taking different approaches in deciding how much data to use and how large the model should be, but the consistent trend is that we have been scaling larger and larger and using more and more compute. Scale has also led to what are being called “emergent capabilities,” one of the most interesting properties of scale described over the previous year or so. By emergent capability, we mean that the model starts to show a certain ability only when it reaches a critical size; before that, the model does not demonstrate the ability at all. For example, let’s look at the figures here, and on the left-hand side, we see arithmetic. If we try to use language models to solve arithmetic word problems, up until a certain scale, they absolutely cannot solve the problem in any way, and they do not perform any better than random. But then at a certain critical point, we start seeing improved performance, and that performance just keeps getting better and better. And we have seen that on so many other tasks as well, ranging from arithmetic to transliteration to multitask learning.

[05:38] And perhaps one of the most exciting emergent capabilities of language models recently is their ability to learn in context, which has been introducing a new paradigm for using these models. If we take a look back at how we have been practicing machine learning in general with deep learning, you would start by choosing an architecture, a transformer or before that an RNN or CNN, and then train your model with full supervision: you have a lot of labeled data, and you train your model based on that data. When we started getting into pre-trained models, instead of training models from scratch, we would start off with a pre-trained model and then fine-tune it, still on a lot of fully supervised labeled data for the task at hand. But with in-context learning, suddenly we can use the models out of the box. We can just take a pre-trained model and use a prompt in order to perform a new task without actually doing any learning. We can do that in a zero-shot setting, meaning we do not provide any examples at all, just instructions or a description of what the task is, or in a few-shot setting, where we provide just a small handful of examples to the model. For example, if we are interested in text classification, in this case sentiment analysis, we can just provide the text to the model and ask it to classify the text as either positive or negative. If the task is a little bit harder, we can provide few-shot samples, just a few examples of how we want the model to classify things into, say, positive, negative, or neutral, and then ask the model to reason about a new piece of text, and it actually does pretty well at it. And it’s not only simple tasks like text classification. We can do translation or summarization and much more complex tasks with that paradigm. We can even try to do things like arithmetic, where we give the model a word problem and ask it to come up with the answer. In the example we are showing right now, we gave the model just one sample to show it how we would solve a problem and then asked it to solve another problem. But in that particular case, the model actually failed. It did produce an answer, but it was not the correct answer. But then came the idea of chain-of-thought prompting, where instead of just showing the model the input and the output, we also show it the steps it can take in order to get to that output from that particular input. In that case, we are solving the arithmetic word problem step by step and showing an example of that to the model. When we do that, the models are not only able to produce the correct answer, but they are also able to walk us step by step through how they produced that answer. That mechanism is referred to as chain-of-thought prompting, and it has been used very prominently, showing superior performance on multiple tasks. It has also been used in many different ways, including in fine-tuning and training some of the models. The “pre-train and then fine-tune” paradigm has been the established paradigm for years, since maybe the inception of BERT and similar pre-trained language models. But now we are seeing an increasing shift toward using the models by prompting them instead of having to fine-tune them.
That’s evident in a lot of practical usage of the models, and even in machine learning publications working on natural language processing tasks, which have been switching to prompting instead of fine-tuning. In-context learning and prompting matter a lot because they are changing the way we apply the models to new tasks. The ability to apply the models to new tasks out of the box, without collecting additional data and without doing any additional training, is an amazing ability that increases the number of tasks the models can be applied to and reduces the amount of effort needed to build models for these tasks.
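To make the difference concrete, here is a sketch of a standard few-shot prompt versus a chain-of-thought prompt for an arithmetic word problem, using the widely cited tennis-ball example from the chain-of-thought literature. The prompts are just strings; the behavior noted in the final comment is typical of sufficiently large models, not a guarantee.

```python
# Sketch of a chain-of-thought prompt: the worked example includes the reasoning
# steps, not just the final answer, nudging the model to reason step by step.

standard_prompt = (
    "Q: Roger has 5 tennis balls. He buys 2 cans of 3 balls each. "
    "How many balls does he have now?\nA: 11\n\n"
    "Q: The cafeteria had 23 apples. It used 20 and bought 6 more. "
    "How many apples are there now?\nA:"
)

cot_prompt = (
    "Q: Roger has 5 tennis balls. He buys 2 cans of 3 balls each. "
    "How many balls does he have now?\n"
    "A: Roger starts with 5 balls. 2 cans of 3 balls is 6 balls. 5 + 6 = 11. "
    "The answer is 11.\n\n"
    "Q: The cafeteria had 23 apples. It used 20 and bought 6 more. "
    "How many apples are there now?\nA:"
)

# With the chain-of-thought version, a sufficiently large model typically answers
# along the lines of "23 - 20 = 3, 3 + 6 = 9. The answer is 9." instead of guessing.
print(cot_prompt)
```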

[09:57] The performance has also been amazing with only a few examples provided, and the tasks in this setting are being adapted to the model rather than the model being adapted to the tasks. If you think about the fine-tuning paradigm, we already had the pre-trained model and we were fine-tuning it to adapt it to the task. Now we are trying to frame the task in a way that’s more friendly to how the model was trained so that the model can perform well on the task even without any fine-tuning. Finally, this allows humans to interact with the models in their normal form of communication, natural language. We can just give instructions describing the task that we want, and the model will perform the task. And that blurs the line between who is an ML user and who is an ML developer, because now anyone can prompt and describe different tasks to the language model and get it to do a large number of tasks without any training or development involved.

From GPT-3 to ChatGPT—a jump in generative capabilities [11:02–19:07]

[11:02] Now looking back at the last three months or so, we have been seeing the field changing quite a bit and a tremendous amount of excitement happening around the release of the ChatGPT model. And if we think about ChatGPT as a generative model, we would see that there have been other generative models out there, from the GPT family and others as well, that have been doing a decent job at text generation. So you can take one of these models, in this case GPT-3, and prompt it with a question asking it to explain what a foundational language model means, and it will give you a pretty decent answer. You can ask the same question to ChatGPT and you’ll find that it’s able to provide a much better answer. It’s longer; it’s more thorough; it’s more structured. You can ask it to style it in different ways. You can ask it to simplify it in different ways. And all of these are capabilities that the previous generation of models could not really do. If we look at how ChatGPT is described, the description lists different things, but it’s mostly optimized for dialogue, allowing humans to interact in natural language, and it’s much better at following instructions, and so on and so forth. If we look step by step at how this was manifested in the training, we will see from the description of the base models that ChatGPT was built on, and of other models before ChatGPT, that language model training was following a self-supervised pre-training approach, where we have a lot of unsupervised, web-scale language that we are training the models on, and the models in this particular case are trained with an autoregressive next-word prediction approach. So we are looking at an input context, which is a sentence or a part of a sentence, and trying to predict the next word. But then over the last year or so, we have been seeing a shift where models are being trained not just on text but also on code. For example, GPT-3.5 models are trained on both text and code, and surprisingly, training the models on both text and code improves their performance on many tasks that have nothing to do with code. In the figure we see right now, models that were trained with code are compared with models that were not, and we are seeing that the models trained on both text and code show better performance at following task instructions and at reasoning compared to similar models trained on text only. So the training on code seems to be grounding the models in different ways, allowing them to learn a little bit more about how to reason and how to look at structured relations between different parts of the text.
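As a minimal sketch of the autoregressive next-word prediction objective described here, the snippet below computes the shifted cross-entropy loss in PyTorch. A single embedding plus linear layer stands in for a real transformer; the point is only the shape of the objective, not the model.

```python
import torch
import torch.nn.functional as F

# Minimal sketch of the next-token prediction objective: given a batch of token ids,
# predict each token from the positions before it. A tiny embedding + linear layer
# stands in here for a real transformer.

vocab_size, d_model = 100, 32
embed = torch.nn.Embedding(vocab_size, d_model)
head = torch.nn.Linear(d_model, vocab_size)

tokens = torch.randint(0, vocab_size, (4, 16))     # (batch=4, seq_len=16)
logits = head(embed(tokens))                       # (4, 16, vocab_size)

# Shift by one: positions 0..n-2 predict tokens 1..n-1.
loss = F.cross_entropy(
    logits[:, :-1].reshape(-1, vocab_size),        # predictions
    tokens[:, 1:].reshape(-1),                     # next-token targets
)
print(loss.item())
```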

[13:59] The second main difference is the idea of instruction tuning, which we have seen becoming more and more popular across different models over the last year, maybe starting with InstructGPT, which introduced the idea of training the models on human-generated data. And this is a departure from the traditional self-supervised approach, where we had been training the model only on unsupervised, free, unstructured text. Now there’s an additional step in the training process that trains the models on human-generated data. The human-generated data takes the format of a prompt and a response, and it’s trying to teach the model to respond in a particular way given a prompt. This step of instruction tuning has been helping the models get a lot better, especially in zero-shot performance, and we see here that the instruction-tuned models tend to perform a lot better than their non-instruction-tuned counterparts, especially in zero-shot settings. The last step of the training process introduces yet another kind of human-generated data. In this case, we have different responses generated by the model and a human providing preferences over these responses, in a sense ranking them and choosing which response is better than the others. This data is used to train a reward model that can then be used to train the main model with reinforcement learning. This approach further aligns the model toward responding in ways that correspond to how the human provided the feedback data. This notion of training the model with human feedback data is very interesting, and it’s creating a lot of traction, with many people thinking about the best technique for training on human feedback data and the best form of human feedback to collect, and it will probably help us improve the models even further in the near future.
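The preference step described here is often trained with a pairwise loss of the form sketched below: the reward assigned to the human-preferred response should exceed the reward of the rejected one. This is a commonly used formulation, not a description of any specific model’s training code, and the scores here are toy values rather than outputs of a real reward model.

```python
import torch
import torch.nn.functional as F

# Sketch of the pairwise loss commonly used to train a reward model from human
# preference rankings: push the score of the preferred response above the score
# of the rejected one. The scores below are toy values, not real model outputs.

def pairwise_preference_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    """-log sigmoid(r_chosen - r_rejected), averaged over the batch."""
    return -F.logsigmoid(r_chosen - r_rejected).mean()

# Toy scores for a batch of 3 preference pairs.
r_chosen = torch.tensor([1.2, 0.3, -0.5], requires_grad=True)
r_rejected = torch.tensor([0.4, 0.9, -1.0], requires_grad=True)

loss = pairwise_preference_loss(r_chosen, r_rejected)
loss.backward()   # gradients push chosen scores up and rejected scores down
print(loss.item())
```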

[16:02] Now with all these advances we have been seeing, the pace of innovation has been moving so fast that it has been challenging in many ways, but perhaps one of the most profound is the notion of benchmarking. Traditionally, research in machine learning has been very dependent on solid benchmarks for measuring the progress of different approaches, but the pace of innovation has really been challenging that recently. To understand how fast the progress has been, let’s look at this data coming from Hypermind, a forecasting company that uses crowd forecasting and has been tracking some of the AI benchmarks recently. The first benchmark is the Massive Multitask Language Understanding (MMLU) benchmark, a large collection of language understanding tasks. In June of 2021, a forecast was made that in a year, by June 2022, we would get to around 57 percent performance on this task. But in reality, what happened is that by June 2022, we were at around 67 percent, and a couple of months later, we were at 75 percent, and we keep seeing faster and faster improvements after that. A second task is the MATH task, which is a collection of middle and high school math problems, and here the prediction was that in a year, we would get to around 13 percent. But in reality, we ended up going much further than that within one year, and we still see more and more advances happening at a faster-than-expected pace. That rate of improvement is actually resulting in a lot of the benchmarks being saturated really fast.

[17:51] If we look back at benchmarks like MNIST and Switchboard, it took the community 20-plus years to fully saturate those benchmarks. That has been accelerating to the point where we now see benchmarks being saturated in a year or less. In fact, many of the benchmarks are becoming obsolete, to the point that only 66 percent of machine learning benchmarks have received more than three results at different time points, and many of them are solved or saturated soon after they are released. That actually motivated the community to come together with very large efforts to design benchmarks specifically meant to challenge large language models. In that particular case, with BIG-bench, more than 400 authors from over 100 institutions came together to create it. But even with such an elaborate effort, we are seeing very fast progress: with large language models and the chain-of-thought prompting we discussed earlier, we are making very fast progress on the hardest tasks in BIG-bench, and on many of them, models are already performing better than humans right now.

Everyday impact: Integrating foundation models and products [19:09–27:20]

[19:09] Foundation models are not only getting better and better at benchmarks; they are actually changing many products that we use every day. We mentioned code generation earlier, so let’s talk a little bit about Copilot. GitHub Copilot is a new experience that helps developers write code, and Copilot is very interesting from many perspectives. One is how fast it went from the model being created in research to being generally available as a product in GitHub Copilot, but another is how much user value it has been generating. A study done by the GitHub Copilot team looked at quantifying the value these models were providing to developers. In the first part of the study, they asked the developers different questions to assess how useful the models are, and we see that 88 percent of the participants reported that they feel much more productive when using Copilot than before, and they reported many other positive implications for their productivity as well. But perhaps even more interesting, the team also ran a controlled study where two groups of developers tried to solve the same set of tasks. One group had access to Copilot, and the other group did not, and interestingly, the group that had access to Copilot not only finished the tasks at a higher success rate but also much more efficiently. Overall, they were 55 percent more productive. Fifty-five percent more productivity in a coding scenario is amazing progress, and a lot of people would have been very surprised to see a model like Copilot delivering so much value so fast.

[21:10] Now beyond code generation and text generation, another frontier where these models are starting to shine is when we start connecting them with external knowledge sources and external tools. Language models that have been optimized for dialogue have amazing language capabilities; they do really well at understanding language and following instructions. They also do really well at synthesizing and generating answers. They are also conversational in nature and do store knowledge from the training data they were trained on. But they do have a lot of limitations around reliability, factuality, staleness, access to more recent information that was not part of the training data, provenance, and so on. And that’s why connecting these models to external knowledge sources and tools could be super exciting. Let’s talk about, for example, connecting language models to search, as we have seen recently with the new Bing.

[22:14] If we take a look back years ago, there were many, many studies of web search, studying the tasks that people try to complete in web search scenarios. And many of these tasks were deemed complex search tasks: tasks that are not navigational, as in trying to go to a particular website, and that are not simple informational tasks where you are trying to look up a fact you can quickly get with one query, but more complex tasks that involve multiple queries. Maybe you are planning a trip, maybe you are trying to buy a product, and as part of your research process, there are multifaceted queries that you would like to look at. There has been a lot of research on understanding user behavior with such tasks, how prevalent they are, and how much time and effort people spend to perform them. They typically involve spending a significant amount of time with the search engine, reading and synthesizing information from different sources with different queries. But with a new experience like the one Bing is providing, we can actually take one of these tasks and provide a much more complex, long query to the search engine. And the search engine uses both search and the power of the language model to generate multiple queries, get the results of all of these queries, and synthesize a detailed answer back to the searcher. Not only that, but it can recommend additional searches and additional ways you could interact with the search engine in order to learn more. That has the potential of saving a lot of time and effort for many searchers and supporting these complex search tasks in a much better way. Beyond that, some of these complex search tasks are multistep in nature, where I would start with one query and then follow up with another query based on the information I get from the first one. Imagine that I am doing a search before the Super Bowl where I am trying to understand some comparisons, stats, between the two quarterbacks that are going to face each other, and I start with that query. What the search engine did in that particular case is that it started with a query trying to identify who the two quarterbacks playing in the Super Bowl are. If I had done that as a human, I would have done the same: I would have identified the teams and the two quarterbacks, and then maybe I would follow up with another query where I search for the stats of the two quarterbacks I am asking about, synthesize the information, maybe from different results, and then get to the answer I am looking for. But with the new Bing experience, I can just issue the query and all of that happens in the background. Different search queries are generated and submitted to the search engine, recent results are collected, and a single answer is synthesized and displayed, making me as a searcher much more productive and much more efficient.
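The multistep behavior described in this example can be sketched as a simple loop in which an intermediate search result feeds the next query. The helper functions are hypothetical stand-ins, not a real model or search API, and the actual orchestration in the new Bing is surely more sophisticated than this sketch.

```python
# Sketch of the multistep pattern described above: resolve an unknown first
# (e.g., who the starting quarterbacks are), then use that intermediate answer
# to form follow-up queries before synthesizing a final answer.
# Both helpers are hypothetical stand-ins, not a real model or search API.

from typing import List

def ask_model(prompt: str) -> str:
    """Placeholder for a language model call."""
    ...

def web_search(query: str) -> List[str]:
    """Placeholder for a search backend returning text snippets."""
    ...

def answer_multistep(question: str) -> str:
    # Step 1: resolve the entities the question depends on.
    first_query = ask_model(f"What should we look up first to answer: {question}")
    intermediate = web_search(first_query)

    # Step 2: use the intermediate result to form a follow-up query.
    follow_up = ask_model(
        f"Given these results: {intermediate}\nWrite the next search query for: {question}"
    )
    evidence = web_search(follow_up)

    # Step 3: synthesize one grounded answer from everything collected.
    return ask_model(f"Answer '{question}' using only these snippets: {intermediate} {evidence}")
```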

[25:21] The potential of large language models integrated with search and other tools is huge and can add much value to many scenarios. But there are also a lot of challenges, opportunities, and limitations that need to be addressed. Reliability and safety are among them: making the models more accurate and thinking about trust, provenance, and bias. User experience and behavior, and how the new experience will affect how users interact with the search engine, is another, with new and different tasks, different user interfaces, or even different behavior models. Search has been a very well-studied experience, and we have a very good understanding of how users interact with the search engine and very reliable behavior models to predict that. Changing this experience will require a lot of additional study there. Personalization and managing user preferences, search history, and so on has also been a very well-studied field in web search, and with new experiences like this, we have so many opportunities to think about things like personalization and user experience again, but also about evaluation and what the metrics mean. How do we measure user satisfaction? How do we understand good and bad abandonment? Good abandonment, as in when people are satisfied with the result but don’t have to click on anything on the search result page, and bad abandonment being the opposite of that. We also need to think about feedback loops, which have played a large part in improving search engines, and how we can apply them to new experiences and new scenarios. So while integrating language models with an experience like search and other tools and experiences is very exciting, it’s also creating so many opportunities for new research problems and for revisiting previous search problems that we had a very good understanding of.

Conclusion [27:21–28:37]

[27:21] To conclude, we have been seeing incredible advances in AI over the past couple of years. The progress has been accelerating and outpacing expectations in so many ways, and the advances are not only in terms of academic benchmarks and publications; we are also seeing an explosion of applications that are changing the products we use every day. However, we are really much closer to the beginning of a new era of AI than we are to the end state of AI capabilities. There are so many opportunities, and we will probably see a lot more advances and even more accelerated progress over the coming months and years. And there are so many challenges that remain and many new opportunities that are arising because of where these models are today. It’s a very exciting time for AI, and we are really looking forward to seeing the advances that will happen moving forward, to the applications that will result from these advances, and to how they will affect every one of us through the products we use every day. Thank you so much.

[END]
