
OCID-Ref: A 3D Robotic Dataset with Embodied Language for Clutter Scene Grounding

Authors :
Wang, Ke-Jyun
Liu, Yun-Hsuan
Su, Hung-Ting
Wang, Jen-Wei
Wang, Yu-Siang
Hsu, Winston H.
Chen, Wen-Chin
Publication Year :
2021

Abstract

To effectively apply robots in working environments and assist humans, it is essential to develop and evaluate how visual grounding (VG) affects machine performance on occluded objects. However, current VG works are limited in working environments such as offices and warehouses, where objects are often occluded due to space utilization. In our work, we propose the novel OCID-Ref dataset, featuring a referring expression segmentation task with referring expressions for occluded objects. OCID-Ref consists of 305,694 referring expressions from 2,300 scenes and provides both RGB image and point cloud inputs. To resolve challenging occlusion issues, we argue that it is crucial to take advantage of both 2D and 3D signals. Our experimental results demonstrate the effectiveness of aggregating 2D and 3D signals, but referring to occluded objects remains challenging for modern visual grounding systems. OCID-Ref is publicly available at https://github.com/lluma/OCID-Ref

Comment: NAACL 2021
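The aggregation of 2D and 3D signals mentioned in the abstract can be illustrated with a minimal sketch. The fusion scheme, tensor shapes, and module names below are assumptions for illustration only and are not the paper's model; the abstract only states that each OCID-Ref scene provides an RGB image and a point cloud, and that combining both signals helps.

    # Minimal sketch (assumed architecture): late fusion of 2D (RGB) and 3D
    # (point cloud) features into a grounding score for a referring expression.
    import torch
    import torch.nn as nn

    class SimpleFusionGrounder(nn.Module):
        def __init__(self, vis_dim=256, pts_dim=256, lang_dim=256):
            super().__init__()
            # 2D branch: tiny CNN over the RGB crop of a candidate object
            self.cnn = nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(64, vis_dim),
            )
            # 3D branch: PointNet-style shared MLP followed by max pooling over points
            self.point_mlp = nn.Sequential(
                nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, pts_dim),
            )
            # Language branch: GRU over word embeddings of the referring expression
            self.embed = nn.Embedding(10000, 128)
            self.gru = nn.GRU(128, lang_dim, batch_first=True)
            # Fusion: concatenate 2D + 3D features and project to the language space
            self.fuse = nn.Linear(vis_dim + pts_dim, lang_dim)

        def forward(self, rgb_crop, points, tokens):
            f2d = self.cnn(rgb_crop)                         # (B, vis_dim)
            f3d = self.point_mlp(points).max(dim=1).values   # (B, pts_dim)
            _, h = self.gru(self.embed(tokens))              # h: (1, B, lang_dim)
            visual = self.fuse(torch.cat([f2d, f3d], dim=-1))
            # Matching score between the fused visual feature and the expression
            return (visual * h.squeeze(0)).sum(dim=-1)

    # Usage with random tensors standing in for one candidate object per scene
    model = SimpleFusionGrounder()
    rgb = torch.randn(4, 3, 64, 64)          # cropped RGB regions
    pts = torch.randn(4, 1024, 3)            # sampled point cloud (x, y, z)
    expr = torch.randint(0, 10000, (4, 12))  # tokenized referring expressions
    scores = model(rgb, pts, expr)           # one grounding score per candidate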

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2103.07679
Document Type :
Working Paper