A New AI Evaluation Cosmos: Ready to Play the Game?
- José Hernández-Orallo
- Marco Baroni
- Jordi Bieger
- Nader Chmait
- David L. Dowe
- Katja Hofmann
- Fernando Martínez-Plumed
- Claes Strannegård
- Kristinn R. Thórisson
AI Magazine, Vol. 38
We report on a series of new platforms and events dealing with AI evaluation that may change the way in which AI systems are compared and their progress is measured. The introduction of a more diverse and challenging set of tasks on these platforms can fuel AI research in the years to come, shaping the notion of success and the directions of the field. However, this playground of tasks and challenges may misdirect the field unless some meaningful structure and systematic guidelines govern its organization and use. Anticipating this issue, we also report on several initiatives and workshops that focus on analyzing the similarity and dependencies between tasks, their difficulty, what capabilities they really measure, and, ultimately, on elaborating new concepts and tools that can arrange tasks and benchmarks into a meaningful taxonomy.