AvatarPilot: Decoupling One-to-One Motions from their Semantics with Weighted Interpolations

  • Cheng Yao Wang,
  • Eyal Ofek,
  • Hyunju Kim,
  • Andrea Stevenson Won,
  • Mar Gonzalez Franco

2024 International Symposium on Mixed and Augmented Reality (ISMAR)

Published by IEEE

The physical constraints of the real spaces where users are situated present challenges for remote XR and spatial computing interactions using avatars. Users may not have enough space in their physical environment to duplicate the physical setup of their collaborators, but if avatars are relocated, one-to-one motions may no longer preserve meaning. We propose a solution: using weighted interpolations, we can guarantee that everyone is looking or pointing at the same objects, both locally and remotely, while preserving the meaning of gestures and postures that are not object-directed (i.e., those close to the body). We extend this approach to locomotion and to direct near-space interactions such as grabbing objects, exploring the limits of social and scene understanding and finding a new use for Inverse Kinematics (IK). We discuss limitations and applications and open-source AvatarPilot for general use.
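To make the weighted-interpolation idea concrete, the sketch below illustrates one plausible reading of it; this is not the authors' implementation, and the function names (`blend_weight`, `retarget_direction`), the reach thresholds, and the simple direction-vector blend are all illustrative assumptions. The weight goes to zero near the body, keeping gestures one-to-one, and to one at full reach, redirecting the point toward where the shared object sits in each user's own room.

```python
# Minimal sketch of weighted interpolation for pointing retargeting.
# Hypothetical names and thresholds; the real system would drive an IK
# solver with the blended direction.
import numpy as np

def blend_weight(hand_pos, chest_pos, near=0.3, far=0.8):
    """0 near the body (keep one-to-one motion), 1 at full reach
    (fully retarget toward the shared object). Distances in meters
    are illustrative assumptions."""
    d = np.linalg.norm(hand_pos - chest_pos)
    return np.clip((d - near) / (far - near), 0.0, 1.0)

def retarget_direction(one_to_one_dir, shoulder_pos, local_target_pos, w):
    """Weighted interpolation between the user's raw pointing direction
    and the direction toward the object's position in *this* user's room."""
    target_dir = local_target_pos - shoulder_pos
    target_dir /= np.linalg.norm(target_dir)
    blended = (1.0 - w) * one_to_one_dir + w * target_dir
    return blended / np.linalg.norm(blended)  # feed this to an IK solver

# Example: a mid-reach point is partially redirected at the relocated object.
chest = np.array([0.0, 1.4, 0.0])
shoulder = np.array([0.2, 1.5, 0.0])
hand = np.array([0.3, 1.2, 0.6])        # hand at mid-reach
raw_dir = np.array([0.0, 0.0, 1.0])     # user points straight ahead
obj = np.array([1.0, 1.0, 1.5])         # object's position in this room

w = blend_weight(hand, chest)           # ~0.8 for this pose
print(retarget_direction(raw_dir, shoulder, obj, w))
```

Because the weight depends only on how extended the arm is, near-body gestures (e.g., a hand on the chest) pass through unchanged, while extended points converge on the locally relocated target, matching the decoupling of motion from semantics described above.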