News and features
Innovations in AI: Brain-inspired design for more capable and sustainable technology
| Dongsheng Li, Dongqi Han, and Yansen Wang
Researchers and their collaborators are drawing inspiration from the brain to develop more sustainable AI models. Projects like CircuitNet and CPG-PE improve performance and energy efficiency by mimicking the brain’s neural patterns.
Research Focus: Week of August 26, 2024
Learn what’s next for AI at Research Forum on Sept. 3; WizardArena simulates human-annotated chatbot games; MInference speeds pre-filling for long-context LLMs via dynamic sparse attention; and Reef enables fast, succinct, non-interactive zero-knowledge regex proofs.
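MInference’s speedup comes from computing attention over only a small, input-dependent subset of keys during pre-filling. The toy NumPy sketch below illustrates that core idea only; the top-k selection rule and all names here are illustrative, not MInference’s actual sparse patterns or kernels.

```python
import numpy as np

def dynamic_sparse_attention(Q, K, V, top_k=4):
    """Toy dynamic sparse attention: each query attends only to its
    top_k highest-scoring keys. Illustrative only, not MInference's
    implementation (which estimates sparse patterns without computing
    the full score matrix)."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)  # (n_q, n_k); a real system estimates these cheaply
    # Keep only the top_k keys per query; mask out the rest with -inf.
    idx = np.argpartition(scores, -top_k, axis=-1)[:, -top_k:]
    mask = np.full_like(scores, -np.inf)
    np.put_along_axis(mask, idx, np.take_along_axis(scores, idx, axis=-1), axis=-1)
    # Softmax over the surviving entries only (masked entries exp to 0).
    weights = np.exp(mask - mask.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(0)
n, d = 16, 8
Q, K, V = rng.normal(size=(n, d)), rng.normal(size=(n, d)), rng.normal(size=(n, d))
print(dynamic_sparse_attention(Q, K, V, top_k=4).shape)  # (16, 8)
```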
Synergizing habits and goals with variational Bayes: A new framework for biological and artificial embodied agents
| Dongqi Han
The Bayesian behavior framework synergizes habits and goals through variational Bayesian methods, offering new insights into sensorimotor behavior and the understanding of actions.
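As a rough illustration of how habits and goals can be combined variationally (not necessarily the paper’s exact objective), a KL-regularized control formulation trades off goal value against divergence from a learned habitual prior:

```latex
% Illustrative KL-control reading of a habit-goal synergy; the paper's
% actual variational objective may differ.
\pi^{*} = \arg\max_{\pi}\;
  \mathbb{E}_{\pi}\!\left[ Q_{\mathrm{goal}}(s,a) \right]
  - \beta\, D_{\mathrm{KL}}\!\left( \pi(a \mid s)\,\Vert\, \pi_{\mathrm{habit}}(a \mid s) \right)
\quad\Longrightarrow\quad
\pi^{*}(a \mid s) \propto \pi_{\mathrm{habit}}(a \mid s)\,
  \exp\!\big( Q_{\mathrm{goal}}(s,a)/\beta \big)
```

Under this reading, the habitual prior dominates when goal values are flat, while strong goal signals pull behavior away from habit.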
“If life is a marathon, then health is the key to its duration.” Health is not only the foundation of happiness and societal progress but also a pivotal aspect of the intelligent era. AI’s integration into healthcare represents a transformative…
Deciding between fundamental and applied research is a dilemma that confronts many in the scientific community. Dongqi Han, on the cusp of graduation, aspired to bridge this divide by pursuing both avenues in his future research. After…
Research Focus: Week of April 15, 2024
In this issue: New research on appropriate reliance on generative AI; Power management opportunities for LLMs in the cloud; LLMLingua-2 improves task-agnostic prompt compression; Enhancing COMET to embrace under-resourced African languages.
Structured knowledge from LLMs improves prompt learning for visual language models
| Xinyang Jiang, Yubin Wang, Dongsheng Li, and Cairong Zhao
Using LLMs to create structured graphs of image descriptors can enhance the prompts used by visual language models. Learn how structured knowledge can improve prompt tuning for both visual and language comprehension.
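As a loose sketch of the idea, the snippet below composes text prompts from a structured descriptor graph. Everything here (the classes, descriptors, and template) is hypothetical; in the actual approach, an LLM would generate the structured descriptors for each category.

```python
# Illustrative sketch: turn structured descriptors into text prompts for a
# visual language model. Categories, descriptors, and the template are
# invented; in the real pipeline an LLM generates the descriptor graph.
descriptor_graph = {
    "sparrow": {"attributes": ["small body", "brown plumage"],
                "parts": {"beak": "short and conical", "tail": "notched"}},
    "heron":   {"attributes": ["long neck", "tall stature"],
                "parts": {"beak": "long and pointed", "legs": "thin and long"}},
}

def build_prompt(category: str) -> str:
    """Compose a structured prompt from a class's descriptor graph."""
    node = descriptor_graph[category]
    attrs = ", ".join(node["attributes"])
    parts = "; ".join(f"a {p} that is {desc}" for p, desc in node["parts"].items())
    return f"a photo of a {category}, which has {attrs}, with {parts}"

for cls in descriptor_graph:
    print(build_prompt(cls))
# These strings would then be fed to the text encoder of a model such as
# CLIP and matched against image embeddings.
```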
LLMLingua: Innovating LLM efficiency with prompt compression
| Huiqiang Jiang, Qianhui Wu, Chin-Yew Lin, Yuqing Yang, and Lili Qiu
Advanced prompting technologies for LLMs can lead to excessively long prompts, driving up cost and latency. Learn how LLMLingua compresses prompts by up to 20x while maintaining quality, reducing latency, and improving the user experience.
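As a rough usage sketch, assuming the open-source llmlingua Python package (defaults and the exact interface may have changed since the post):

```python
# Minimal prompt-compression sketch with the llmlingua package.
# Assumes `pip install llmlingua`; the placeholder context below is
# illustrative.
from llmlingua import PromptCompressor

compressor = PromptCompressor()  # loads a small LM used to score tokens

long_context = ["...many retrieved documents or few-shot demonstrations..."]
result = compressor.compress_prompt(
    long_context,
    instruction="Answer the question using the context.",
    question="What does LLMLingua do?",
    target_token=200,  # token budget for the compressed prompt
)
print(result["compressed_prompt"])  # send this shorter prompt to the LLM
```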
Research Focus: Week of August 14, 2023
In this issue: HyWay enables hybrid mingling; Auto-Tables transforms non-relational tables into standard relational forms; training dense retrievers to identify high-quality in-context examples for LLMs; and improving pronunciation assessment in CAPT.