Projects
Using AI to improve AI. Orca is a research team at Microsoft Research. Orca focuses on building automated pipelines for creating high-quality synthetic data at scale, and on training models for specialization and self-improvement. Orca's research areas involve self-improvement strategies, feedback-driven…
An Open-Source Programming Framework for Agentic AI. AutoGen provides a multi-agent conversation framework as a high-level abstraction. It is an open-source library for enabling next-generation LLM applications with multi-agent collaboration, teachability, and personalization. With this framework, users can build LLM…
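As a rough illustration of the kind of application the framework enables, here is a minimal two-agent sketch assuming the pyautogen 0.2-style Python API; the agent class names, the llm_config format, and the code_execution_config options can vary between AutoGen releases, and the model name, API key, and task prompt below are placeholders, not an official example from the project.

from autogen import AssistantAgent, UserProxyAgent

# LLM configuration; model name and API key are placeholders.
llm_config = {"config_list": [{"model": "gpt-4", "api_key": "YOUR_API_KEY"}]}

# The assistant agent uses the LLM to draft answers and code suggestions.
assistant = AssistantAgent("assistant", llm_config=llm_config)

# The user proxy relays the task and can execute code returned by the assistant.
user_proxy = UserProxyAgent(
    "user_proxy",
    human_input_mode="NEVER",  # fully automated for this sketch
    code_execution_config={"work_dir": "coding", "use_docker": False},
)

# Start a multi-agent conversation around a concrete task.
user_proxy.initiate_chat(
    assistant,
    message="Write and run a Python script that prints the first 10 Fibonacci numbers.",
)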
This project develops techniques that enable AI to use computing infrastructure more efficiently. The goals are to maintain predictive accuracy while reducing carbon emissions, whether embodied in manufactured hardware or produced by electricity consumption when green energy is not available.
In recent times, the explosion of information from a variety of sources and the rise of cutting-edge techniques such as deepfakes have made it increasingly important to check the credibility and reliability of data. Large volumes of data generated from diverse…
The need for labeled data is one of the largest bottlenecks in training supervised learning models like deep neural networks. This is especially the case for many real-world tasks where large-scale annotated examples are either too expensive to acquire…
Deep neural networks, including pre-trained language models like BERT, Turing-NLG, and GPT-3, require thousands of labeled training examples to obtain state-of-the-art performance on downstream tasks and applications. Such large numbers of labeled examples are difficult and expensive to acquire in…
Modern machine learning applications have enjoyed a great boost from deep, large neural network models, allowing them to achieve state-of-the-art results on a wide range of tasks such as question answering, conversational AI, search, and recommendation. A significant challenge facing…