MRTransformer: Transforming Avatar Non-verbal Behavior for Remote MR Collaboration in Incongruent Spaces

  • Cheng Yao Wang,
  • Hyunju Kim,
  • Eyal Ofek,
  • Mar Gonzalez-Franco,
  • Andrea Stevenson Won

2024 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)

Published by IEEE

Demonstrating MRTransformer’s dynamic management of collaboration spaces in real-time. (Left) Alice and Bob collaborate within a designated mapped area, (Middle) Bob independently adjusts the collaboration area during the activity, and (Right) The movements of both Alice’s and Bob’s avatars are seamlessly preserved as the collaboration area is adjusted.


Avatar-mediated remote MR collaboration allows users in different physical spaces to interact as if they were co-located. However, directly mapping a user’s motion onto an avatar in incongruent spaces leads to ambiguous and error-prone communication. This paper introduces MRTransformer, a technique that enables dynamic MR collaboration across dissimilar spaces. By adaptively transforming user movements, MRTransformer preserves non-verbal cues and spatial context. It also supports flexible management of collaboration areas and visualization of remote objects, enhancing remote collaboration. A user study evaluated MRTransformer’s effectiveness in preserving non-verbal cues and spatial awareness, and examined its effects on social presence and privacy concerns. The findings offer implications for future remote MR collaboration research and design.