Eureka: Evaluating and understanding progress in AI



In the fast-paced progress of AI, the question of how to evaluate and understand the capabilities of state-of-the-art models is timelier than ever. New and capable models are released frequently, and each release promises the next big leap in the frontiers of intelligence. Yet, as researchers and developers, we often ask ourselves: Are these models all comparable, if not the same, in terms of capabilities? There are, of course, strong reasons to believe they are, given that many score similarly on standard benchmarks. In addition, rankings on the numerous leaderboards do not offer a consistent and detailed explanation of why one model ranks slightly higher than another. However, if some models are fundamentally different, what are their strengths and weaknesses? More importantly, are there capabilities that are essential for making AI useful in the real world but still universally challenging for most models? Answering such questions helps us understand where we are on the frontier of AI, and what capability improvements are needed to meet the expectations that humanity and science have for the safe and responsible deployment of AI models.

The prevalence of these models depends on our ability to mature the science of in-depth AI evaluation and measurement. In our latest open-source release and technical report, EUREKA: Evaluating and Understanding Large Foundation Models, we start answering these questions by running an in-depth measurement analysis across 12 state-of-the-art proprietary and open-weights models. Behind this analysis stands Eureka, an open-source framework for standardizing evaluations of large foundation models beyond single-score reporting and rankings. The framework currently supports both language and multimodal (text and image) data and enables developers to define custom pipelines for data processing, inference, and evaluation, with the option to inherit from existing pipelines and minimize development work. Eureka and all our evaluation pipelines are available as open source to foster transparent and reproducible evaluation practices. We hope to collaborate with the open-source community to share and expand current measurements for new capabilities and models.
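To make the pipeline idea concrete, here is a minimal sketch of what a custom evaluation pipeline with data-processing, inference, and evaluation stages could look like. The class and function names below (Pipeline, ExactMatchPipeline, Example) are illustrative placeholders, not the actual Eureka API.

```python
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class Example:
    prompt: str
    reference: str

class Pipeline:
    """Generic three-stage pipeline: data processing -> inference -> evaluation."""
    def __init__(self,
                 load: Callable[[], Iterable[Example]],
                 infer: Callable[[str], str],
                 score: Callable[[str, str], float]):
        self.load, self.infer, self.score = load, infer, score

    def run(self) -> float:
        examples = list(self.load())                           # data processing stage
        outputs = [self.infer(ex.prompt) for ex in examples]   # inference stage
        scores = [self.score(out, ex.reference)                # evaluation stage
                  for out, ex in zip(outputs, examples)]
        return sum(scores) / len(scores)

class ExactMatchPipeline(Pipeline):
    """Inherits the generic pipeline and overrides only the evaluation stage."""
    def __init__(self, load, infer):
        super().__init__(load, infer,
                         score=lambda out, ref: float(out.strip() == ref.strip()))
```

Inheriting from an existing pipeline, as in the second class, is how shared data-loading and inference logic can be reused while swapping only the metric.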

Focus on challenging and non-saturated capabilities

Eureka tests models across a rich collection of fundamental language and multimodal capabilities that are challenging even for the most advanced models, but are often overlooked by the standard benchmarks commonly reported in model releases. In practice, this also means that our analysis intentionally does not pivot on oversaturated benchmarks. As unconventional as this may sound, it is motivated by two reasons. First, measurement on saturated benchmarks, on which most models score above 95%, leaves very little room for failure analysis and model comparison. Second, even though saturation may be rooted in genuine model improvements, concerns about memorization and overfitting to labeling errors lower the credibility of measurements, especially in the very high accuracy regime.


Beyond single-score measurements and universal rankings

Even though rankings and leaderboards remain the quickest way to compare models, they rarely uncover important conditions of failure. Because of an overreliance on single-score aggregations of performance, more nuanced comparative findings are hidden behind small differences between model scores aggregated across many capabilities and experimental conditions.

As we show in our study, the chase after these rankings has created surprising dynamics that do not necessarily lead to identical models, but to models that use different complementary skills to achieve comparable overall scores in important leaderboards. Imagine you are a triathlon athlete aiming to achieve an elite performance, which historically takes around two hours. Despite your ambition to hit this top-tier mark, you face constraints with limited time and resources for training and preparation. In practice, athletes often focus their best resources on excelling in certain disciplines while aiming for a satisfactory performance in others. They prioritize based on what they believe is most achievable given their time and experience.

We observe similar phenomena in the set of 12 models we study. Even if two models score very closely on the same capability, disaggregating that performance across disciplines and input conditions shows that each model has its own complementary strengths. Identifying, measuring, and understanding these strengths for a single model is needed for planning targeted improvements. Repeating this process for a large set of models, as we do in Eureka, is needed for identifying the hypothetical frontier, guiding research and development, and creating a model that combines and delivers capabilities building on the strengths observed in existing models.
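As a toy illustration of this point, the snippet below shows how two hypothetical models with nearly identical aggregate scores can still have complementary strengths that only appear once performance is disaggregated by subcategory; all numbers are made up for illustration.

```python
# Toy numbers: two hypothetical models with near-identical aggregate scores
# but complementary strengths across subcategories.
scores = {
    "model_a": {"depth_ordering": 0.80, "height_ordering": 0.50, "counting": 0.64},
    "model_b": {"depth_ordering": 0.55, "height_ordering": 0.78, "counting": 0.62},
}

for model, by_task in scores.items():
    aggregate = sum(by_task.values()) / len(by_task)
    print(f"{model}: aggregate={aggregate:.2f}, breakdown={by_task}")

# Both aggregates land around 0.65, yet each model leads on a different
# subcategory; only the disaggregated view reveals the complementary strengths.
```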

Measuring consistency: non-determinism and backward compatibility

When people work with collaborators or choose tools to assist them in everyday tasks, predictability and consistency are key to a successful collaboration. Similarly, humans and application developers expect their AI assistants and models to be consistent over time for similar inputs and interactions. In our analysis, we study this under-explored angle of model performance by focusing on two key aspects: the determinism of answer outcomes for identical examples and prompts, and the backward compatibility of model answers at the example level after a model has been updated with a new version. A lack of consistency in either of these aspects would break trust with users and application developers.

The analysis shows surprising results and opens new considerations for improvement. For example, we observe that very few large foundation models are fully deterministic: for most of them, there are visible variations in the output, and most importantly in accuracy, when the same question is asked several times with the generation temperature set to zero, a setting that instructs the model to minimize randomness in its generations. In addition, when comparing new model releases with earlier models from the same family, a significant amount of regression at the example level can be observed after the update, even though overall accuracy may increase. In practice, this type of inconsistency can be frustrating for application developers who rely on prewritten examples and prompts passed to a foundation model.
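Below is a minimal sketch of the kind of determinism probe described above. It assumes a hypothetical query_model callable wrapping whatever inference API is in use; the probe repeats an identical prompt at temperature zero and reports both output agreement and the spread in accuracy.

```python
from collections import Counter

def determinism_probe(query_model, prompt: str, reference: str, n_runs: int = 5) -> dict:
    """Ask the same question n_runs times at temperature 0 and summarize variability.
    `query_model(prompt, temperature)` is a hypothetical wrapper around the inference API."""
    outputs = [query_model(prompt, temperature=0.0) for _ in range(n_runs)]
    _, top_count = Counter(outputs).most_common(1)[0]
    agreement = top_count / n_runs                        # 1.0 means fully deterministic output
    accuracies = [float(out.strip() == reference) for out in outputs]
    accuracy_spread = max(accuracies) - min(accuracies)   # > 0 means accuracy itself fluctuates
    return {"agreement": agreement, "accuracy_spread": accuracy_spread}
```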

Eureka Insights

Figure 1 is a high-level illustration of the current state of AI for Eureka-Bench, highlighting the best and the worst performances across various capabilities. These results reveal a nuanced picture of different models’ strengths, showing that no single model excels in all tasks. However, Claude 3.5 Sonnet, GPT-4o 2024-05-13, and Llama 3.1 405B consistently outperform others in several key areas.

A summary of insights extracted by using the Eureka framework, shown via two radar charts for multimodal (left) and language (right) capabilities respectively. The radar charts show the best and worst performance observed for each capability.
Figure 1 – Performance of the best and worst models for the multimodal (left) and language (right) datasets in Eureka-Bench. The red frontier shows the performance of the worst model, indicating the area that is already solved for the set of capabilities. The green frontier shows the performance of the best model, indicating the best-known result with current technology. The blue horizon between the best model and the maximum performance shows the room for improvement for mastering the capability. The best performance sets indicated on the green frontier include all models that perform within 2% of the best observed result.
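For readers who want to reproduce this kind of summary on their own results, the following sketch shows one way the best and worst frontiers and the within-2% best-performance set could be computed from per-capability scores. The scores shown are placeholders, not Eureka-Bench numbers, and the 2% threshold is read here as two absolute percentage points.

```python
# Placeholder per-capability scores for three hypothetical models.
capability_scores = {
    "object_detection": {"model_a": 0.42, "model_b": 0.55, "model_c": 0.54},
    "long_context_qa":  {"model_a": 0.61, "model_b": 0.58, "model_c": 0.70},
}

for capability, by_model in capability_scores.items():
    worst = min(by_model.values())        # red frontier: solved by every model
    best = max(by_model.values())         # green frontier: best-known result
    best_set = sorted(m for m, s in by_model.items() if s >= best - 0.02)  # within 2% of best
    headroom = 1.0 - best                 # blue horizon: room for improvement
    print(f"{capability}: worst={worst:.2f}, best={best:.2f}, "
          f"best_set={best_set}, headroom={headroom:.2f}")
```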

Multimodal capabilities

Evaluation in Eureka reveals that state-of-the-art models are still fairly limited in their multimodal abilities, specifically when it comes to detailed image understanding (for example, localization of objects, geometric and spatial reasoning, and navigation), which is most needed in truly multimodal scenarios that require physical awareness, visual grounding, and localization. 

  1. State-of-the-art multimodal models struggle with geometric reasoning. 
    Models perform worse in reasoning about height than about depth. Claude 3.5 Sonnet and Gemini 1.5 Pro are the best performing models for this task, with Claude 3.5 Sonnet being the most accurate model for depth ordering, and Gemini 1.5 Pro the most accurate for height ordering. 
  2. Multimodal capabilities lag language capabilities. 
    On tasks that can be described either as multimodal or as language-only, the performance of most tested models is higher for the language-only condition. GPT-4o 2024-05-13 is the only model that consistently achieves better results when presented with both vision and language information, showing that it can better fuse the two modalities.
  3. Complementary performance across models for fundamental multimodal skills.
    Claude 3.5 Sonnet, GPT-4o 2024-05-13, and GPT-4 Turbo 2024-04-09 have comparable performance in multimodal question answering (MMMU). In tasks like object recognition and visual prompting, the performance of Claude 3.5 Sonnet is better than or comparable to that of GPT-4o 2024-05-13, but Gemini 1.5 Pro outperforms them both. Finally, in tasks like object detection and spatial reasoning, GPT-4o 2024-05-13 is the most accurate model.

Language

The evaluation through Eureka shows that there have been important advances from state-of-the-art models in the language capabilities of instruction following, long context question answering, information retrieval, and safety. The analysis also reveals major differences and gaps among models in robustness to context length, factuality and grounding for information retrieval, and refusal behavior.

  1. Faster improvements in instruction following across all model families. 
    Instruction following is the ability to follow guidance expressed in user prompts regarding specifications related to the format, style, and structure of the generated content. Among the studied language capabilities, instruction following is where most models are improving the fastest, potentially due to strong investments in instruction tuning processes, with most models now achieving an instruction following rate higher than 75%.
  2. All models’ performance in question answering drops with longer context. 
    In contrast to “needle-in-a-haystack” experiments, testing state-of-the-art models on tasks that involve reasoning over long context shows a significant decline in performance as context size grows. Among all models, GPT-4o 2024-05-13 and Llama 3.1 405B show the smallest drop in performance for longer context.
  3. Major gaps in factuality and grounding for information retrieval from parametric knowledge or input context. 
    Models exhibit query fact precision rates below 55%, fact recall rates below 25%, and rates of irrelevant and fabricated information above 20%. Llama 3.1 405B, GPT-4o 2024-05-13, and Claude 3.5 Sonnet are the top performers in this area across different conditions (a sketch of this kind of fact-level metric follows this list).
  4. High refusal rates. Lower accuracy in detecting toxic content vs. neutral content for most models. 
    While several models have high accuracy rates for toxicity detection, others (Gemini 1.5 Pro, Claude 3.5 Sonnet, Claude 3 Opus, and Llama 3.1 405B) exhibit low accuracy in classifying toxic content and a high rate of refusal to classify toxic or neutral content, both of which make toxic content difficult to detect. During the safe language generation evaluation, models like GPT-4 1106 Preview and Mistral Large 2407 have the highest toxicity rates. GPT-4o 2024-05-13 is the only model that has both high toxicity detection accuracy and a low toxicity score for safe language generation.
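As referenced in item 3 above, here is a hedged sketch of how fact-level precision and recall might be computed once model answers and references have been decomposed into sets of atomic facts. The decomposition step (typically done with an LLM judge) is omitted, and this is not necessarily Eureka's exact metric definition.

```python
def fact_precision_recall(predicted_facts: set, reference_facts: set) -> tuple:
    """Precision: share of generated facts supported by the reference.
    Recall: share of reference facts that the answer actually covers."""
    matched = predicted_facts & reference_facts
    precision = len(matched) / len(predicted_facts) if predicted_facts else 0.0
    recall = len(matched) / len(reference_facts) if reference_facts else 0.0
    return precision, recall

# Example: two of three generated facts are supported, but only two of five
# reference facts are covered, i.e. decent precision with low recall.
precision, recall = fact_precision_recall(
    {"fact_1", "fact_2", "fabricated_fact"},
    {"fact_1", "fact_2", "fact_3", "fact_4", "fact_5"},
)
print(round(precision, 2), round(recall, 2))  # 0.67 0.4
```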

Non-determinism

Several models produce highly non-deterministic output for identical runs. Gemini 1.5 Pro, GPT-4 1106 Preview, GPT-4 Vision Preview, and GPT-4 Turbo 2024-04-09 show high non-determinism of outcomes. These results raise important questions about the stability of user and developer experiences when inference is run repeatedly on identical queries with the same prompt templates. Llama 3 70B, Llama 3.1 70B, and Mistral Large 2407 are almost perfectly deterministic.

Backward compatibility

Backward incompatibility for shifts within the same model family is prevalent across all state-of-the-art models. This is reflected in high regression rates for individual examples and at the subcategory level. This type of regression can break trust with users and application developers during model updates. Regression varies by task and metric, but we observe several cases where it exceeds 10% across three model families (Claude, GPT, Llama), and regressions can sometimes dominate progress rates for whole subcategories of data.
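The sketch below illustrates how example-level regression and progress rates between two versions of a model could be computed from per-example correctness; it is a simplified illustration rather than Eureka's exact implementation.

```python
def compatibility_rates(old_correct: list, new_correct: list) -> tuple:
    """Example-level comparison of an old and a new model version on the same examples."""
    pairs = list(zip(old_correct, new_correct))
    n = len(pairs)
    regression = sum(1 for old, new in pairs if old and not new) / n  # broke after the update
    progress = sum(1 for old, new in pairs if not old and new) / n    # fixed by the update
    return regression, progress

# Overall accuracy can increase (here from 0.625 to 0.75) even while a notable
# fraction of previously correct examples regress, which is the trust-breaking
# case discussed above.
old = [True, True, True, False, False, False, True, True]
new = [True, False, True, True, True, False, True, True]
print(compatibility_rates(old, new))  # (0.125, 0.25): 12.5% regression, 25% progress
```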

Conclusion

The complementary results extracted from this study highlight opportunities for improving current models across various areas, aiming to match the performance of the best model for each individual capability in this challenge set. However, several tasks in the challenge set remain difficult even for the most capable models. It is crucial to discuss and explore whether these gaps can be addressed with current technologies, architectures, and data synthesis protocols.

Finally, Eureka and the set of associated benchmarks are only an initial snapshot of an effort that aims to reliably measure progress in AI. Our team is excited about further collaborations with the open-source and research communities, with the goal of sharing and extending current measurements for new capabilities and models.
