First-person Perception and Interaction

MSR Distinguished Lecture Series: First-person Perception and Interaction

Computer vision has seen major success in learning to recognize objects from massive “disembodied” Web photo collections labeled by human annotators. Yet cognitive science tells us that perception develops in the context of acting in the world, and without intensive supervision. Meanwhile, many realistic vision tasks require not only categorizing a well-composed, human-taken photo, but also actively deciding where to look in the first place. In the context of these challenges, we are exploring how machine perception benefits from anticipating the sights and sounds an agent will experience as a function of its own actions. Based on this premise, we introduce methods for learning to look around intelligently in novel environments, learning from video how to interact with objects, and perceiving audio-visual streams for both semantic and spatial context. Together, these are steps toward first-person perception, where interaction with the world is itself a supervisory signal.

[Slides]

Date:
Speakers:
Eric Horvitz, Kristen Grauman
Affiliation:
Microsoft Research, University of Texas at Austin

Series: MSR AI Distinguished Lectures and Fireside Chats