News & Features
Microsoft Research Asia StarTrack Scholars Program has recently embarked on an exciting new chapter, inviting exceptional young minds from around the world to partake in a transformative three-month research visit. In this second installment of our exclusive series on the…
We’re proud to have more than 100 accepted papers at NeurIPS 2023, plus 18 workshops. Several submissions were chosen as oral presentations and spotlight posters, reflecting groundbreaking concepts, methods, or applications. Here’s an overview of those submissions.
| Huiqiang Jiang, Qianhui Wu, Chin-Yew Lin, Yuqing Yang, and Lili Qiu
Advanced prompting techniques for LLMs can result in excessively long prompts, driving up cost and latency. Learn how LLMLingua compresses prompts by up to 20x while maintaining quality, reducing latency, and supporting an improved user experience.
Abstracts: December 6, 2023
| Gretchen Huizinga and Xing Xie
“Abstracts”—your source for world-class research in brief—welcomes Senior Principal Research Manager Xing Xie to the podcast series to discuss his paper on evaluating general-purpose AI with psychometrics.
In a groundbreaking move, Microsoft Research Asia’s prestigious StarTrack Scholars Program has officially taken flight, extending a global invitation to brilliant young minds for an immersive three-month research visit. Picture this: collaboration with elite researchers, a deep dive into the…
A new deep-learning compiler for dynamic sparsity; Tongue Tap could make tongue gestures viable for VR/AR headsets; Ranking LLM-Generated Loop Invariants for Program Verification; Assessing the limits of zero-shot foundation models in single-cell biology.
In this issue: Kosmos-2.5: A Multimodal Literate Model; Can vine copulas explain complex relationships of weather variables; New system accelerates the adaptive training process; Structural inequalities and relational labor in the influencer industry.
| Li Lyna Zhang, Jiahang Xu, Quanlu Zhang, Yuqing Yang, Ting Cao, and Mao Yang
A persistent challenge in deep learning is optimizing neural network models to run efficiently across diverse hardware configurations, balancing model quality with low latency. Learn how SpaceEvo automates hardware-aware neural architecture search to tailor DNN models for swift execution on a wide range of devices.
Chunked prefills & decode-maximal batching boost LLM inference; DragNUWA combines text, image, and trajectory for fine-grained video content control; reconstructing images from human brain signals; structural inequalities in creator-audience relationships.