Machine Learning for Placement-insensitive Inertial Motion Capture
Although existing inertial motion-capture systems work reasonably well (less than 10 degrees of error in Euler angles), their accuracy suffers when sensor positions change relative to the associated body segments (±60 degrees mean error and 120 degrees standard deviation). We attribute this performance degradation to undermined calibration values, sensor movement latency, and displacement offsets. The latter specifically leads to incongruent rotation matrices in kinematic algorithms that rely on homogeneous transformations. To overcome these limitations, we propose to employ machine-learning techniques. In particular, we use multi-layer perceptrons to learn sensor-displacement patterns from 3 hours of motion data collected in the lab from 12 test subjects over 215 trials. Furthermore, to compensate for calibration and latency errors, we process sensor data directly with deep neural networks to estimate the joint angles. Based on these approaches, we demonstrate up to a 69% reduction in tracking errors.
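To illustrate why a displacement offset produces an incongruent rotation matrix in a calibration-based kinematic pipeline, here is a minimal sketch (not the paper's code; all rotations, angles, and variable names are illustrative assumptions). A stale sensor-to-segment calibration leaves an unmodeled slip rotation in the estimated segment orientation:

```python
import numpy as np

def rot_z(deg):
    """Rotation about the z-axis by `deg` degrees (3x3 matrix)."""
    r = np.radians(deg)
    c, s = np.cos(r), np.sin(r)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Calibration stores the sensor-to-segment rotation measured at setup.
R_calib = rot_z(10.0)

# True segment orientation during motion (ground truth for this demo).
R_segment_true = rot_z(45.0)

# Suppose the sensor later slips by an extra 60 degrees on the segment:
# the raw sensor reading now reflects the slipped mounting, ...
R_slip = rot_z(60.0)
R_sensor = R_segment_true @ R_calib @ R_slip

# ... but the pipeline still removes only the stale calibration.
R_segment_est = R_sensor @ R_calib.T

# The residual error equals the unmodeled slip angle.
R_err = R_segment_est @ R_segment_true.T
angle_err = np.degrees(np.arccos((np.trace(R_err) - 1.0) / 2.0))
print(round(angle_err, 1))  # 60.0
```

Because all the demo rotations share one axis, the error works out to exactly the slip angle; with general 3-D rotations the corruption additionally mixes axes, which is consistent with the large mean errors the abstract reports for displaced sensors.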
© IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works.
Publication Downloads
Microsoft Inertial Motion Capture Dataset (MIMC17)
October 9, 2017
This dataset provides an unprecedented number of sensor recordings (405 in total), including multiple IMUs and infrared (IR) sensors deployed on 24 individual body segments. Together there are over 3 hours of data (sampled at 30 Hz) from 215 trials conducted with 12 subjects (8 male, 4 female) performing 5 different motions.
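As a rough back-of-the-envelope check (an illustration, not part of the dataset documentation), "over 3 hours" sampled at 30 Hz implies at least roughly the following number of samples per recorded channel:

```python
# Samples implied by "over 3 hours at 30 Hz" (lower bound, illustrative).
hours = 3
sample_rate_hz = 30
samples_per_channel = hours * 3600 * sample_rate_hz
print(samples_per_channel)  # 324000
```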
Machine Learning for Placement-insensitive Inertial Motion Capture (ICRA 2018)