Machine Learning for Placement-insensitive Inertial Motion Capture

  • Xuesu Xiao,
  • Shuayb Zarar

IEEE Int. Conf. Robotics and Automation (ICRA), 2018


Although existing inertial motion-capture systems work reasonably well (errors below 10 degrees in Euler angles), their accuracy suffers when sensor positions change relative to the associated body segments (up to ±60 degrees mean error and a 120-degree standard deviation). We attribute this performance degradation to undermined calibration values, sensor movement latency, and displacement offsets. The latter specifically leads to incongruent rotation matrices in kinematic algorithms that rely on homogeneous transformations. To overcome these limitations, we propose to employ machine-learning techniques. In particular, we use multi-layer perceptrons to learn sensor-displacement patterns based on 3 hours of motion data collected from 12 test subjects in the lab over 215 trials. Furthermore, to compensate for calibration and latency errors, we directly process sensor data with deep neural networks and estimate the joint angles. Based on these approaches, we demonstrate up to a 69% reduction in tracking errors.
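
The abstract describes two learning components: multi-layer perceptrons that model sensor-displacement patterns, and deep networks that regress joint angles directly from raw sensor data. Below is a minimal, hypothetical sketch of the second idea in PyTorch; the feature layout, layer sizes, joint count, and training loop are placeholder assumptions for illustration, not the authors' published architecture.

```python
# Minimal sketch of direct joint-angle regression from raw IMU frames.
# All dimensions and hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn

N_SEGMENTS = 24          # instrumented body segments (as in the MIMC17 dataset)
FEATS_PER_IMU = 9        # assumed: 3-axis accel + 3-axis gyro + 3-axis magnetometer
N_JOINT_ANGLES = 3 * 20  # assumed: 3 Euler angles for 20 joints (placeholder)

model = nn.Sequential(
    nn.Linear(N_SEGMENTS * FEATS_PER_IMU, 512),
    nn.ReLU(),
    nn.Linear(512, 256),
    nn.ReLU(),
    nn.Linear(256, N_JOINT_ANGLES),  # regress joint angles directly from sensor data
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Placeholder batch standing in for 30 Hz sensor frames and reference angles.
imu_frames = torch.randn(64, N_SEGMENTS * FEATS_PER_IMU)
target_angles = torch.randn(64, N_JOINT_ANGLES)

for _ in range(10):  # a few illustrative gradient steps
    pred = model(imu_frames)
    loss = loss_fn(pred, target_angles)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In practice, the regression targets would come from a reference capture system (e.g., the IR sensors in the dataset below) rather than random tensors.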

Paper and publication downloads

Microsoft Inertial Motion Capture Dataset (MIMC17)

October 9, 2017

This dataset provides an unprecedented number of sensor recordings (405 in total), including multiple IMUs and infrared (IR) sensors deployed on 24 individual body segments. Together there are over 3 hours of data (sampled at 30 Hz) from 215 trials conducted with 12 subjects (8 male, 4 female) performing 5 different motions.
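
To make these dimensions concrete, here is a small, hypothetical bookkeeping sketch in Python. The field names and motion identifiers are illustrative placeholders and do not reflect the published MIMC17 file schema.

```python
# Hypothetical bookkeeping for the dataset dimensions described above.
from dataclasses import dataclass

SAMPLE_RATE_HZ = 30      # sampling rate reported for the dataset
N_SUBJECTS = 12          # 8 male, 4 female
N_TRIALS = 215
N_BODY_SEGMENTS = 24     # segments instrumented with IMU and IR sensors
N_RECORDINGS = 405       # total sensor recordings in the dataset

@dataclass
class Trial:
    subject_id: int      # 1..N_SUBJECTS
    motion_id: int       # 0..4, one of the 5 recorded motion types
    duration_s: float    # trial length in seconds

    @property
    def n_frames(self) -> int:
        """Number of 30 Hz frames captured in this trial."""
        return int(self.duration_s * SAMPLE_RATE_HZ)

# Rough sanity check: ~3 hours of data spread across 215 trials.
total_seconds = 3 * 60 * 60
avg = Trial(subject_id=1, motion_id=0, duration_s=total_seconds / N_TRIALS)
print(f"average trial ~{avg.duration_s:.0f} s, ~{avg.n_frames} frames at {SAMPLE_RATE_HZ} Hz")
```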

Machine Learning for Placement-insensitive Inertial Motion Capture (ICRA 2018)
