101. Vision-Based One-Shot Imitation Learning Supplemented with Target Recognition via Meta Learning
- Author
Yueyan Peng, Decheng Zhou, Xuyun Yang, Wei Li, and James Zhiqing Wen
- Subjects
One-shot learning, Meta learning (computer science), Cloning (programming), Computer science, Human–computer interaction, Mechatronics, Imitation learning, Object (computer science), Imitation, Robotic arm
- Abstract
In this paper, an end-to-end meta imitation learning method supplemented with target recognition (TaR-MIL) is proposed for one-shot learning. This approach divides the procedure of imitating from demonstrations into two parts: distinguishing the target object from distractors and executing the correct actions. Accordingly, the objective of imitation is defined as the combination of target recognition and behavior cloning. Specifically, a target recognition module is adopted in the model architecture, which helps to extract useful information about tasks from observations during training. After training on demonstrations of various tasks via meta learning, a policy capable of solving new tasks from a single demonstration is obtained. Real-world experiments on a UR10e robot arm demonstrate that the derived policy manages to perform placing tasks in new scenarios or with new objects after a single video demonstration, verifying the effectiveness of the proposed method.
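The combined objective described in the abstract can be sketched as a weighted sum of a target-recognition term and a behavior-cloning term. The following is a minimal illustrative sketch, not the authors' implementation; the function names, squared-error losses, and weighting factor `lam` are all assumptions for illustration.

```python
import numpy as np

def recognition_loss(pred_target, true_target):
    # Hypothetical target-recognition term: squared error between the
    # predicted and actual target-object representation.
    return float(np.sum((pred_target - true_target) ** 2))

def behavior_cloning_loss(pred_actions, demo_actions):
    # Hypothetical behavior-cloning term: mean squared error between
    # policy actions and the demonstrated actions.
    return float(np.mean((pred_actions - demo_actions) ** 2))

def combined_loss(pred_target, true_target, pred_actions, demo_actions, lam=0.5):
    # Combined imitation objective: behavior cloning plus a weighted
    # target-recognition term (weighting scheme assumed, not from the paper).
    return behavior_cloning_loss(pred_actions, demo_actions) + \
           lam * recognition_loss(pred_target, true_target)

# Toy usage with made-up tensors standing in for network outputs.
loss = combined_loss(np.array([0.1, 0.2]), np.array([0.0, 0.2]),
                     np.zeros(4), np.ones(4))
```

In a meta-learning setup such as this, the combined loss would be minimized across many tasks so that one demonstration of a new task suffices at test time.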
- Published
- 2021