Sensor Fusion for Learning-based Motion Estimation in VR

Tracking the 3D position of hand-held controllers is an important problem for AR and VR devices. The current state of the art in Windows Mixed Reality (MXR) uses a constellation of LEDs on the controllers to track their pose. The performance of this vision-based system degrades in sunlight and when a controller moves out of the camera's field of view (Out-of-FOV). In this work, we employ sensor fusion within a learning-based framework to track controller position. Specifically, we use ultrasound sensors on the hand-held controllers and the head-mounted display to obtain ranging information. We then incorporate this information into the feedback loop of an auto-regressive forecasting model built with recurrent neural networks (RNNs). Finally, we fuse the RNN output with the default MXR tracking result via a Kalman filter across different positional states (including Out-of-FOV). With the proposed approach, we demonstrate near-isotropic accuracy in estimating controller position, which the default MXR tracking system alone could not achieve.
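The abstract does not give implementation details, but the final fusion step can be illustrated with a minimal sketch: a constant-velocity Kalman filter that combines an RNN-based position forecast with the default vision (MXR) estimate, switching measurement noise depending on whether the controller is in the camera's field of view. The class name, state layout, update rate, and all noise values below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

class PositionFuser:
    """Sketch of Kalman-filter fusion of vision and RNN position estimates.

    State vector: [x, y, z, vx, vy, vz] under a constant-velocity motion
    model. All parameter values are assumptions for illustration only.
    """

    def __init__(self, dt=1.0 / 90.0):  # assumed 90 Hz tracking loop
        self.x = np.zeros(6)
        self.P = np.eye(6)
        self.F = np.eye(6)
        self.F[:3, 3:] = dt * np.eye(3)                     # position += velocity * dt
        self.H = np.hstack([np.eye(3), np.zeros((3, 3))])   # we observe position only
        self.Q = 1e-3 * np.eye(6)                           # process noise (assumed)
        self.R_vision = 1e-4 * np.eye(3)                    # vision trusted in-FOV (assumed)
        self.R_rnn = 1e-2 * np.eye(3)                       # RNN forecast noisier (assumed)

    def step(self, z_vision, z_rnn, in_fov):
        # Predict with the constant-velocity motion model.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q

        # Select the measurement and its noise based on the positional state.
        z, R = (z_vision, self.R_vision) if in_fov else (z_rnn, self.R_rnn)

        # Standard Kalman update.
        y = z - self.H @ self.x
        S = self.H @ self.P @ self.H.T + R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(6) - K @ self.H) @ self.P
        return self.x[:3]  # fused controller position estimate


# Example use (synthetic values): fall back to the RNN forecast when Out-of-FOV.
fuser = PositionFuser()
pos = fuser.step(z_vision=np.array([0.10, 0.20, 0.50]),
                 z_rnn=np.array([0.11, 0.19, 0.52]),
                 in_fov=False)
```

In the talk's pipeline, the Out-of-FOV branch would be driven by the ultrasound-ranging-informed RNN forecast rather than a raw measurement; the switch on `in_fov` is only a simplified stand-in for handling the different positional states.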

Speaker details
Chen Song is a fourth-year PhD student in the ESC (Embedded Sensing and Computing) lab at the University at Buffalo, SUNY, advised by Prof. Wenyao Xu. His PhD research focuses on emerging cyber-physical systems such as smart manufacturing, mobile healthcare, and human biometrics. Specifically, he is interested in using sensor fusion and machine learning technologies to address system-level challenges in security, efficiency, and reliability.
Date:
Speaker:
Chen Song
Affiliation:
University at Buffalo SUNY