News and In-Depth Articles
Yasuyuki Matsushita rejoins Microsoft to lead the new Microsoft Research Asia – Tokyo lab. Learn more about his journey and his perspective on the Tokyo lab's role in the evolution of AI.
As cloud AI workloads grow in complexity and scale, maintaining high system reliability has become crucial. Traditional methods of ensuring system reliability, such as using redundant components, inadvertently introduce a new problem: subtle performance degradation, also known as “gray failures”.…
Author: Machine Learning Group
Time-series forecasting is crucial across industries including health, energy, commerce, and climate. Accurate forecasts over different prediction horizons are essential for both short-term and long-term planning across these domains. For instance, during a public health…
Author: Youshan Miao
Today, deep learning has permeated our daily lives. As the size of models continues to grow, training these models on massive GPU accelerators has become increasingly time-consuming and costly. To effectively harness the power of massive GPUs…
Welcome to Research Focus, a series of blog posts that highlights notable publications, events, code/datasets, new hires and other milestones from across the research community at Microsoft. Time-series forecasting is a technique used to predict future values based on previously…
Author: Shujie Liu
In recent years, the rapid advancement of AI has continually expanded the capabilities of Text-to-Speech (TTS) technology. Ongoing optimizations and innovations in TTS have enriched and simplified voice interaction experiences. These research developments hold significant potential across…
Dongsheng Li, Dongqi Han, and Yansen Wang
Researchers and their collaborators are drawing inspiration from the brain to develop more sustainable AI models. Projects like CircuitNet and CPG-PE improve performance and energy efficiency by mimicking the brain’s neural patterns.
Learn what’s next for AI at Research Forum on Sept. 3; WizardArena simulates human-annotated chatbot games; MInference speeds pre-filling for long-context LLMs via dynamic sparse attention; Reef: Fast succinct non-interactive zero-knowledge regex proofs.