
Graph-Based Visual Manipulation Relationship Reasoning Network for Robotic Grasping

Authors :
Guoyu Zuo
Jiayuan Tong
Hongxing Liu
Wenbai Chen
Jianfeng Li
Source :
Frontiers in Neurorobotics, Vol 15 (2021)
Publication Year :
2021
Publisher :
Frontiers Media S.A., 2021.

Abstract

To grasp a target object stably and in the correct order in object-stacking scenes, a robot must reason about the relationships between objects and derive an intelligent manipulation order, enabling more advanced interaction between the robot and its environment. This paper proposes a novel graph-based visual manipulation relationship reasoning network (GVMRN) that directly outputs object relationships and manipulation order. The GVMRN model first extracts features and detects objects from RGB images, and then adopts a graph convolutional network (GCN) to aggregate contextual information between objects. To improve the efficiency of relation reasoning, a relationship filtering network is built to reduce the number of object pairs before reasoning. Experiments on the Visual Manipulation Relationship Dataset (VMRD) show that our model significantly outperforms previous methods at reasoning about object relationships in object-stacking scenes. The GVMRN model is also tested on images we collected and applied on a robot grasping platform. The results demonstrate the generalization and applicability of our method in real environments.
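As a rough illustration only (not the authors' implementation), the GCN step the abstract describes can be sketched as one graph-convolution layer over detected-object features: a normalized adjacency matrix over candidate object pairs mixes each object's visual features with those of its neighbors. All shapes, the adjacency, and the weight matrix below are hypothetical.

```python
import numpy as np

def gcn_layer(adj, feats, weight):
    """One graph-convolution step: normalized adjacency @ features @ weight.

    adj:    (n, n) binary adjacency over detected objects (candidate pairs,
            e.g. those kept by a relationship filtering stage).
    feats:  (n, d_in) per-object visual features (hypothetical dimensions).
    weight: (d_in, d_out) projection matrix; random here for illustration.
    """
    a_hat = adj + np.eye(adj.shape[0])           # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    norm = d_inv_sqrt @ a_hat @ d_inv_sqrt       # symmetric normalization
    return np.maximum(norm @ feats @ weight, 0)  # ReLU activation

# Toy stacking scene: 3 detected objects, object 0 resting on objects 1 and 2.
rng = np.random.default_rng(0)
adj = np.array([[0.0, 1.0, 1.0],
                [1.0, 0.0, 0.0],
                [1.0, 0.0, 0.0]])
feats = rng.standard_normal((3, 8))
weight = rng.standard_normal((8, 4))
out = gcn_layer(adj, feats, weight)
print(out.shape)  # one context-aggregated feature vector per object
```

In the full model, such context-aggregated features would feed a classifier that labels each object pair (e.g. "must be grasped before"), from which the manipulation order follows.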

Details

Language :
English
ISSN :
1662-5218
Volume :
15
Database :
Directory of Open Access Journals
Journal :
Frontiers in Neurorobotics
Publication Type :
Academic Journal
Accession number :
edsdoj.99d7935094943fea85ba6785c4d9819
Document Type :
article
Full Text :
https://doi.org/10.3389/fnbot.2021.719731