Object affordance as a guide for grasp-type recognition
- Naoki Wake,
- Daichi Saito,
- Kazuhiro Sasabuchi,
- Hideki Koike,
- Katsushi Ikeuchi
Recognizing human grasping strategies is an important factor in robot teaching, as these strategies contain the implicit knowledge needed to perform a series of manipulations smoothly. This study analyzed the effect of object affordance, defined here as a prior distribution of grasp types for each object, on convolutional neural network (CNN)-based grasp-type recognition. To this end, we created datasets of first-person grasping-hand images labeled with grasp types and object names, and tested a recognition pipeline that leverages object affordance. We evaluated scenarios in which the grasped object was either real or illusory, the latter reflecting a mixed-reality teaching condition in which missing visual object information can make CNN recognition challenging. The results show that object affordance guided the CNN in both scenarios, increasing accuracy by 1) excluding unlikely grasp types from the candidates and 2) enhancing likely grasp types. In addition, the "enhancing effect" was more pronounced with higher degrees of grasp-type heterogeneity. These results indicate the effectiveness of object affordance for guiding grasp-type recognition in robot teaching applications.
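To make the guiding mechanism concrete, the sketch below shows one plausible way an affordance prior can both exclude and enhance grasp types: multiply the CNN's softmax scores by the object-specific prior and renormalize. This is a minimal illustration under our own assumptions; the grasp-type vocabulary, the `fuse_with_affordance` function, and the fallback behavior are hypothetical and not taken from the paper.

```python
import numpy as np

# Hypothetical grasp-type vocabulary (illustrative only; the paper's
# actual grasp taxonomy is not listed in the abstract).
GRASP_TYPES = ["power", "precision", "lateral", "tripod", "hook"]

def fuse_with_affordance(cnn_probs, affordance_prior, eps=1e-12):
    """Combine CNN softmax scores with an object-specific affordance prior.

    cnn_probs:        per-grasp-type probabilities from the CNN.
    affordance_prior: prior distribution of grasp types for the object;
                      a zero entry excludes that grasp type outright.
    Returns a renormalized distribution over grasp types.
    """
    fused = np.asarray(cnn_probs, dtype=float) * np.asarray(affordance_prior, dtype=float)
    total = fused.sum()
    if total < eps:
        # CNN and prior disagree entirely; fall back to the prior alone
        # (an assumed policy, chosen here only to keep the sketch total).
        return np.asarray(affordance_prior, dtype=float)
    return fused / total

# Example: the CNN slightly favors "precision", but the object's
# affordance rules out "hook" (exclusion) and boosts "power" (enhancement).
cnn_probs = [0.30, 0.35, 0.10, 0.15, 0.10]
prior     = [0.50, 0.25, 0.10, 0.15, 0.00]
posterior = fuse_with_affordance(cnn_probs, prior)
print(dict(zip(GRASP_TYPES, posterior.round(3))))
```

Running this example, "hook" drops to zero probability while "power" overtakes "precision", mirroring the two effects described in the abstract: pruning unlikely candidates and amplifying likely ones.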