Textbooks Are All You Need II: phi-1.5 technical report


We continue the investigation into the power of smaller Transformer-based language models as initiated by TinyStories, a 10 million parameter model that can produce coherent English, and the follow-up work on phi-1, a 1.3 billion parameter model with Python coding performance close to the state-of-the-art. The latter work proposed to use existing Large Language Models (LLMs) to generate "textbook quality" data as a way to enhance the learning process compared to traditional web data. We follow the "Textbooks Are All You Need" approach, focusing this time on common sense reasoning in natural language, and create a new 1.3 billion parameter model named phi-1.5, with performance on natural language tasks comparable to models 5x larger, and surpassing most non-frontier LLMs on more complex reasoning tasks such as grade-school mathematics and basic coding. More generally, phi-1.5 exhibits many of the traits of much larger LLMs, both good, such as the ability to "think step by step" or perform some rudimentary in-context learning, and bad, including hallucinations and the potential for toxic and biased generations; encouragingly, though, we are seeing improvement on that front thanks to the absence of web data. We open-source phi-1.5 to promote further research on these urgent topics.
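
To make the "think step by step" behavior mentioned above concrete, the sketch below shows the kind of chain-of-thought prompt one might feed to phi-1.5 through the Hugging Face transformers text-generation pipeline. The model identifier microsoft/phi-1_5, the example word problem, and the generation settings are illustrative assumptions, not details taken from the report.

# Illustrative sketch (not from the report): prompting phi-1.5 to reason step by step.
# Assumes the checkpoint is available on the Hugging Face Hub as "microsoft/phi-1_5";
# older transformers releases may additionally require trust_remote_code=True.
from transformers import pipeline

generator = pipeline("text-generation", model="microsoft/phi-1_5")

# A grade-school style word problem with an explicit "think step by step" cue.
prompt = (
    "Question: A bakery sells muffins in boxes of 6. A school orders 7 boxes, "
    "and 5 muffins are left over after lunch. How many muffins were eaten?\n"
    "Answer: Let's think step by step.\n"
)

result = generator(prompt, max_new_tokens=128, do_sample=False)
print(result[0]["generated_text"])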

Phi-1.5

December 11, 2023

The language model phi-1.5 is a Transformer with 1.3 billion parameters. It was trained using the same data sources as phi-1, augmented with a new data source consisting of various synthetic NLP texts. When assessed against benchmarks testing common sense, language understanding, and logical reasoning, phi-1.5 demonstrates nearly state-of-the-art performance among models with fewer than 10 billion parameters. We did not fine-tune phi-1.5 for instruction following or through reinforcement learning from human feedback (RLHF). The intention behind crafting this open-source model is to provide the research community with an unrestricted small model for exploring vital safety challenges, such as reducing toxicity, understanding societal biases, enhancing controllability, and more. For a safer model release, we exclude generic web-crawl data sources such as Common Crawl from training. This strategy prevents direct exposure to potentially harmful online content, enhancing the model's safety without RLHF. However, the model is still vulnerable to generating harmful content. We hope the model can help the research community further study the safety of language models. phi-1.5 can write poems, draft emails, create stories, summarize texts, write Python code (such as downloading a Hugging Face transformer model), etc.
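
As a rough illustration of how the open-sourced checkpoint might be loaded and used for one of the tasks listed above (writing Python code), here is a minimal sketch using the Hugging Face transformers library. The model identifier microsoft/phi-1_5, the prompt, and the decoding settings are assumptions for illustration rather than details from this page.

# Minimal sketch (assumptions noted above): loading phi-1.5 and asking it for code.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/phi-1_5"  # assumed Hub identifier
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float32)

# phi-1.5 is a base model (no instruction tuning or RLHF), so plain-text
# completion prompts tend to work better than chat-style instructions.
prompt = (
    "Write a Python function that downloads a Hugging Face transformer model "
    "given its repository name.\n\ndef download_model(repo_name):\n"
)
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=128, do_sample=False)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))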
