About
I work on developing multimodal, integrative-AI systems for physically situated interaction and collaboration between people and machines. The long-term question that shapes my research agenda is how we can enable computer systems to reason more deeply about their surroundings, understand how people behave and interact with each other in physical space, and seamlessly participate in these interactions.
Physically situated interaction hinges critically on the ability to reason about and model a number of processes, such as conversational engagement, turn-taking, grounding, interaction planning, and action coordination. Creating robust solutions that operate in the real world brings to the fore broader AI challenges. Example questions include representation (e.g., what formalisms yield actionable, robust models of multiparty interaction), machine learning methods for multimodal inference from streaming sensory data, predictive modeling, and decision making and planning under uncertainty and temporal constraints.
Over the last few years, I’ve also been involved in developing Platform for Situated Intelligence, an open-source framework that supports and accelerates development and research for multimodal, integrative AI systems. You can read more about it in this blog post.
You can find more information about my work on my personal homepage.