Visual question answering and reasoning over vision and language: Beyond the limits of statistical learning?

Advances in deep learning keep producing impressive results at the junction of computer vision and natural language processing. The task of visual question answering (VQA), once considered incredibly ambitious, is now commonly used to benchmark multimodal models. Despite this apparent progress, however, I will argue that some capabilities required for a general solution to VQA, such as strong out-of-distribution generalization, are beyond the reach of prevailing practices in machine learning. I will discuss how causal reasoning helps formalize the limits of classical, correlation-based learning. This new understanding of existing techniques will help us identify what information is missing from typical datasets, where else to find it, and how to test our models for the behaviors we really care about.

Speaker details

Damien Teney is a research scientist heading the machine learning group at the Idiap Research Institute in Switzerland. He is best known for his work at the intersection of computer vision, machine learning, and natural language processing. In particular, he was part of the team that won the Visual Question Answering (VQA) Challenge at CVPR 2017. His work introduced the bottom-up/top-down visual features that are now ubiquitous in architectures for vision and language. More broadly, he is interested in building robust machine learning models that truly understand the world around us.

He has previously been affiliated with the University of Adelaide in Australia, Carnegie Mellon University, the University of Bath, the University of Innsbruck, and the University of Liege in Belgium, where he is originally from. For this virtual visit, he will be joining us remotely from the Idiap Research Institute in the beautiful Swiss Alps.

Date:
Speaker: Damien Teney
Affiliation: Machine learning group at the Idiap Research Institute in Switzerland

Series: Microsoft Vision+Language Summer Talk Series