Learning accurate and efficient three-finger grasp generation in clutters with an auto-annotated large-scale dataset
- Publication Year :
- 2025
Abstract
- With the development of intelligent manufacturing and robotic technologies, the ability to grasp unknown objects in unstructured environments is becoming increasingly important for robots across a wide range of applications. However, current robotic three-finger grasping studies focus only on grasp generation for single objects or scattered scenes, and suffer from the high time cost of labeling grasp ground truth, making them incapable of predicting grasp poses for cluttered objects or generating large-scale datasets. To address these limitations, we first introduce a novel three-finger grasp representation with fewer prediction dimensions, which balances training difficulty and representation accuracy to achieve efficient grasp performance. Based on this representation, we develop an auto-annotation pipeline and contribute a large-scale three-finger grasp dataset (TF-Grasp Dataset). Our dataset contains 222,720 RGB-D images with over 2 billion grasp annotations in cluttered scenes. In addition, we also propose a three-finger grasp pose detection network (TF-GPD), which detects globally while fine-tuning locally to predict high-quality collision-free grasps from a single-view point cloud. In sum, our work addresses the issue of high-quality collision-free three-finger grasp generation in cluttered scenes based on the proposed pipeline. Extensive comparative experiments show that our methodology outperforms previous methods and improves grasp quality and efficiency in cluttered scenes. The superior results in real-world robot grasping experiments not only demonstrate the reliability of our grasp model but also pave the way for practical applications of three-finger grasping. Our dataset and source code will be released.
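The abstract refers to a low-dimensional three-finger grasp representation but does not spell out its parameterization. The Python sketch below is only a hypothetical illustration of what such a reduced grasp encoding might look like (grasp center, approach direction, in-plane rotation, opening width, and a quality score); the field names and dimensionality are assumptions for illustration, not the paper's actual definition.

    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class ThreeFingerGrasp:
        """Illustrative low-dimensional three-finger grasp pose (assumed fields)."""
        center: np.ndarray    # (3,) grasp center in the camera frame, metres
        approach: np.ndarray  # (3,) unit approach direction of the gripper
        angle: float          # in-plane rotation about the approach axis, radians
        width: float          # finger opening width, metres
        score: float          # predicted grasp quality / confidence in [0, 1]

        def to_vector(self) -> np.ndarray:
            """Flatten to a 9-D vector, e.g. as a regression target for a detection network."""
            return np.concatenate(
                [self.center, self.approach, [self.angle, self.width, self.score]]
            )

Such a flattened vector form is one common way cluttered-scene grasp detectors regress poses per point or per candidate region; the actual TF-GPD output format may differ.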
Details
- Database :
- OAIster
- Notes :
- English
- Publication Type :
- Electronic Resource
- Accession number :
- edsoai.on1457579116
- Document Type :
- Electronic Resource
- Full Text :
- https://doi.org/10.1016/j.rcim.2024.102822