Feeling the Force: Integrating Force and Pose for Fluent Discovery through Imitation Learning to Open Medicine Bottles

IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2017

Mark Edmonds1*, Feng Gao1*, Xu Xie1, Hangxin Liu1, Siyuan Qi1, Yixin Zhu1, Brandon Rothrock2,
Song-Chun Zhu1; * Equal Contributors

1 UCLA Center for Vision, Cognition, Learning, and Autonomy, Department of Statistics
2 Jet Propulsion Laboratory, California Institute of Technology


Learning complex robot manipulation policies for real-world objects is extremely challenging, often requiring significant tuning within controlled environments. In this paper, we learn a manipulation model to execute tasks with multiple stages and highly variable structure, which most robot manipulation approaches handle poorly. The model is learned from human demonstration using a tactile glove that measures both hand pose and contact forces. The tactile glove enables observation of visually latent changes in the scene, specifically the forces applied to unlock the child-safety mechanisms of medicine bottles. From these observations, we learn an action planner that combines a top-down stochastic grammar model (And-Or graph), representing the compositional nature of the task sequence, with a bottom-up discriminative model learned from the observed poses and forces. These two terms are combined during planning to select the next optimal action. We present a method for transferring this human-specific knowledge onto a robot platform and demonstrate that the robot can successfully manipulate unseen objects with similar task structure. [Full Paper][Github]
