Contour-based next-best view planning from point cloud segmentation of unknown objects.
- Source :
- Autonomous Robots; Feb 2018, Vol. 42, Issue 2, p443-458, 16p
- Publication Year :
- 2018
Abstract
- A novel strategy is presented to determine the next-best view for a robot arm equipped with a depth camera in an eye-in-hand configuration, aimed at the autonomous exploration of unknown objects. Instead of maximizing the total expected unknown volume that becomes visible, the next-best view is chosen to observe the border of incomplete objects. Salient regions of space that belong to the objects are detected, without any prior knowledge, by applying a point cloud segmentation algorithm. The system uses a Kinect V2 sensor, which has not been considered in previous works on next-best view planning, and exploits KinectFusion to maintain a volumetric representation of the environment. A low-level procedure for reducing invalid Kinect V2 points is also presented. The viability of the approach has been demonstrated in a real setup where the robot is fully autonomous. Experiments indicate that the proposed method enables the robot to actively explore the objects faster than a standard next-best view algorithm. [ABSTRACT FROM AUTHOR]
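- To illustrate the general idea described in the abstract (not the authors' implementation), the sketch below shows a minimal contour-driven next-best-view scoring scheme on a labelled voxel grid. It assumes a grid whose cells are FREE, OCCUPIED (part of a segmented object), or UNKNOWN, plus a list of hypothetical candidate camera poses; all names, parameters, and the simple field-of-view test are illustrative assumptions, and a real system would ray-cast through the volumetric model (e.g. a KinectFusion TSDF) to account for occlusions.

```python
# Hedged sketch: contour-based next-best-view selection on a labelled voxel grid.
# Not the paper's algorithm; an illustrative approximation only.
import numpy as np

FREE, OCCUPIED, UNKNOWN = 0, 1, 2

def object_contour_voxels(grid):
    """Return indices of occupied voxels adjacent to unknown space.

    These approximate the 'border of incomplete objects' that a
    contour-based strategy tries to observe next.
    """
    offsets = [(dx, dy, dz) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
               for dz in (-1, 0, 1) if (dx, dy, dz) != (0, 0, 0)]
    contour = []
    for idx in np.argwhere(grid == OCCUPIED):
        for off in offsets:
            n = idx + off
            if np.all(n >= 0) and np.all(n < grid.shape) and grid[tuple(n)] == UNKNOWN:
                contour.append(idx)
                break
    return np.array(contour)

def score_view(cam_pos, cam_dir, contour_pts, voxel_size, fov_deg=60.0, max_range=1.5):
    """Count contour voxels inside a simple conical field of view (no occlusion test)."""
    if contour_pts.size == 0:
        return 0
    pts = contour_pts * voxel_size                  # voxel indices -> metric coordinates
    vecs = pts - cam_pos
    dists = np.linalg.norm(vecs, axis=1)
    cosines = (vecs @ cam_dir) / np.maximum(dists, 1e-9)
    in_fov = cosines > np.cos(np.radians(fov_deg / 2.0))
    return int(np.count_nonzero(in_fov & (dists < max_range)))

def next_best_view(grid, candidates, voxel_size=0.01):
    """Pick the (position, direction) candidate that sees the most contour voxels."""
    contour = object_contour_voxels(grid)
    scores = [score_view(p, d / np.linalg.norm(d), contour, voxel_size)
              for p, d in candidates]
    return candidates[int(np.argmax(scores))], max(scores)
```

- In this sketch the score counts visible contour voxels rather than expected unknown volume, mirroring the abstract's contrast between observing object borders and maximizing newly visible unknown space.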
Details
- Language :
- English
- ISSN :
- 0929-5593
- Volume :
- 42
- Issue :
- 2
- Database :
- Complementary Index
- Journal :
- Autonomous Robots
- Publication Type :
- Academic Journal
- Accession number :
- 127877153
- Full Text :
- https://doi.org/10.1007/s10514-017-9618-0