33 results for "3-D point cloud"
Search Results
2. DE-Net: A Dual-Encoder Network for Local and Long-Distance Context Information Extraction in Semantic Segmentation of Large-Scale Scene Point Clouds
- Author
- Zhipeng He, Jing Liu, and Shuai Yang
- Subjects
- Deep learning, dual-encoder, semantic segmentation, 3-D point cloud, Ocean engineering, TC1501-1800, Geophysics. Cosmic physics, QC801-809
- Abstract
Semantic segmentation of large-scale point clouds is essential for applications such as autonomous driving and high-definition mapping. However, this task remains challenging due to the imbalanced distribution of categories in large-scale point cloud data and the similarity of local geometric structures. Most current deep learning–based methods concentrate on designing local feature extraction modules while neglecting the significance of long-distance contextual information. Nevertheless, this contextual information is crucial for accurate object segmentation in large-scale scenes. To address this limitation, we propose a dual-encoder segmentation network called DE-Net. DE-Net effectively learns both local and long-distance contextual information for each point to achieve accurate point segmentation. DE-Net consists of two main components: dual-encoder modules (DEMs) and gradient-aware pooling modules (GAPMs). DEMs extract local geometry and long-distance contextual information for each point using positional and trigonometric encoding to distinguish complex geometric features. GAPMs aggregate global information effectively using dual-distance and xy gradient information. In addition, a prediction jitter module is introduced during training to address class imbalance and improve the network's prediction results. The experimental results on three public benchmarks demonstrate that DE-Net outperforms existing state-of-the-art methods, achieving mean intersection over union scores of 83.5%, 61.8%, and 63.9% on the Toronto-3D, WHU-MLS, and S3DIS datasets, respectively.
- Published
- 2024
- Full Text
- View/download PDF
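As an editorial illustration of the positional and trigonometric encoding the abstract mentions, the sketch below applies the generic sinusoidal scheme to a 3-D coordinate; `trig_encode` and its parameters are hypothetical names, not the paper's actual formulation.

```python
import math

def trig_encode(p, num_freqs=4):
    """Encode a 3-D point with sin/cos features at geometrically spaced
    frequencies (generic scheme; DE-Net's exact encoding may differ)."""
    feats = []
    for x in p:                       # each coordinate independently
        for k in range(num_freqs):
            f = 2.0 ** k              # frequencies 1, 2, 4, 8, ...
            feats.append(math.sin(f * x))
            feats.append(math.cos(f * x))
    return feats

vec = trig_encode((0.5, -1.2, 3.0))
# 3 coords x 4 frequencies x 2 (sin, cos) = 24 features per point
```

Such encodings let a network distinguish nearby points whose raw coordinates are almost identical.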
3. A Handheld Laser-Scanning-Based Methodology for Monitoring Tree Growth in Chestnut Orchards.
- Author
- Pereira-Obaya, Dimas, Cabo, Carlos, Ordóñez, Celestino, and Rodríguez-Pérez, José Ramón
- Subjects
- CHESTNUT, POINT cloud, ORCHARDS, PRECISION farming, SPRING, POINT processes, TREE growth
- Abstract
Chestnut and chestnut byproducts are of worldwide interest, so there is a constant need to develop faster and more accurate monitoring techniques. Recent advances in simultaneous localization and mapping (SLAM) algorithms and user accessibility have led to increased use of handheld mobile laser scanning (HHLS) in precision agriculture. We propose a tree growth monitoring methodology, based on HHLS point cloud processing, that calculates the length of branches through spatial discretization of the point cloud for each tree. The methodology was tested by comparing two point clouds collected almost simultaneously for each of a set of sweet chestnut trees. The results obtained indicated that our HHLS method was reliable and accurate in efficiently monitoring sweet chestnut tree growth. The same methodology was used to calculate the growth of the same set of trees over 37 weeks (from spring to winter). Differences in week 0 and week 37 scans showed an approximate mean growth of 0.22 m, with a standard deviation of around 0.16 m reflecting heterogeneous tree growth. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
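The spatial discretization step described in the abstract can be sketched as simple voxel binning of two scans; the functions and the growth proxy below are illustrative assumptions, not the authors' branch-length algorithm.

```python
def voxelize(points, cell=0.05):
    """Map each (x, y, z) point to an occupied voxel index."""
    return {(int(x // cell), int(y // cell), int(z // cell))
            for x, y, z in points}

def growth_proxy(scan0, scan1, cell=0.05):
    """Count voxels occupied in the later scan but not the earlier one --
    a crude proxy for newly grown material."""
    return len(voxelize(scan1, cell) - voxelize(scan0, cell))

# toy scans of the same tree at week 0 and week 37
week0 = [(0.0, 0.0, 0.0), (0.0, 0.0, 0.04)]
week37 = [(0.0, 0.0, 0.0), (0.0, 0.0, 0.04), (0.0, 0.0, 0.12)]
```

Comparing occupied-voxel sets is what makes two almost-simultaneous scans a useful reliability check.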
4. Multi-Resolution Point Cloud Completion Incorporating Graph Attention.
- Author
- 潘李琳 and 邵剑飞
- Abstract
Copyright of Laser Technology is the property of Gai Kan Bian Wei Hui and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2023
- Full Text
- View/download PDF
5. PointNest: Learning Deep Multiscale Nested Feature Propagation for Semantic Segmentation of 3-D Point Clouds
- Author
- Jie Wan, Ziyin Zeng, Qinjun Qiu, Zhong Xie, and Yongyang Xu
- Subjects
- 3-D point cloud, deep supervision (DS), multiscale feature propagation, semantic segmentation, Ocean engineering, TC1501-1800, Geophysics. Cosmic physics, QC801-809
- Abstract
3-D point cloud semantic segmentation is a fundamental task for scene understanding, but it remains challenging due to diverse scene classes, data defects, and occlusions. Most existing deep learning-based methods focus on new designs of feature extraction operators but neglect the importance of exploiting multiscale point information in the network, which is crucial for identifying objects in complex scenes. To tackle this limitation, we propose an innovative network called PointNest that efficiently learns multiscale point feature propagation for accurate point segmentation. PointNest employs a deep nested U-shape encoder–decoder architecture, where the encoder learns multiscale point features through nested feature aggregation units at different network depths and propagates local geometric contextual information with skip connections along horizontal and vertical directions. The decoder then receives multiscale nested features from the encoder to progressively recover geometric details of the abstracted decoding point features for pointwise semantic prediction. In addition, we introduce a deep supervision strategy to further promote multiscale information propagation in the network for efficient training and performance improvement. Experiments on three public benchmarks demonstrate that PointNest outperforms existing mainstream methods, with mean intersection over union scores of 68.8%, 74.7%, and 62.7% on the S3DIS, Toronto-3D, and WHU-MLS datasets, respectively.
- Published
- 2023
- Full Text
- View/download PDF
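Mean intersection over union, the metric reported above, can be computed from per-point predictions as follows; `mean_iou` is a generic implementation, not code from the paper.

```python
def mean_iou(pred, gt, num_classes):
    """Mean intersection-over-union across classes, from per-point
    predicted and ground-truth labels."""
    ious = []
    for c in range(num_classes):
        inter = sum(1 for p, g in zip(pred, gt) if p == c and g == c)
        union = sum(1 for p, g in zip(pred, gt) if p == c or g == c)
        if union:                    # skip classes absent from both
            ious.append(inter / union)
    return sum(ious) / len(ious)

pred = [0, 0, 1, 1, 2, 2]
gt   = [0, 1, 1, 1, 2, 0]
# per-class IoUs: 1/3, 2/3, 1/2 -> mean 0.5
```

Averaging over classes rather than points is what makes the metric sensitive to rare categories.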
6. Deep Learning-Based Plant Organ Segmentation and Phenotyping of Sorghum Plants Using LiDAR Point Cloud
- Author
- Ajay Kumar Patel, Eun-Sung Park, Hongseok Lee, G. G. Lakshmi Priya, Hangi Kim, Rahul Joshi, Muhammad Akbar Andi Arief, Moon S. Kim, Insuck Baek, and Byoung-Kwan Cho
- Subjects
- 3-D point cloud, deep learning, lidar technique, phenotyping, sorghum, Ocean engineering, TC1501-1800, Geophysics. Cosmic physics, QC801-809
- Abstract
Increasing food demands, global climatic variations, and population growth have spurred efforts to raise crop yield through plant phenotyping in the age of Big Data. High-throughput phenotyping of sorghum at the plant and organ level is vital in molecular plant breeding to increase crop yield. LiDAR (light detection and ranging) sensors provide 3-D point clouds of plants with the advantages of high precision, high resolution, and rapid measurement. However, robust algorithms still need to be developed for extracting the phenotypic traits of sorghum plants from LiDAR 3-D point clouds. This study utilized four 3-D point cloud-based deep learning models, namely PointNet, PointNet++, PointCNN, and dynamic graph CNN, for the segmentation of sorghum plants. Subsequently, phenotypic traits were extracted from the segmentation results. The sample plants were grown under controlled conditions and measured at various developmental stages. The extracted phenotypic traits were validated against manually measured traits of the sorghum plants. PointNet++ outperformed the other three deep learning models and provided the best segmentation result, with a mean accuracy of 91.5%. Six phenotypic traits, namely plant height, plant crown diameter, plant compactness, stem diameter, panicle length, and panicle width, were calculated from the segmentation results of the PointNet++ model, and the coefficients of determination (R2) with the manual measurements were 0.97, 0.96, 0.94, 0.90, 0.95, and 0.88, respectively. The results showed that LiDAR 3-D point clouds have good potential for measuring sorghum phenotypic traits rapidly and accurately using deep learning techniques.
- Published
- 2023
- Full Text
- View/download PDF
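The coefficient of determination used above to validate the extracted traits can be reproduced in a few lines; the trait values below are hypothetical, not the study's data.

```python
def r_squared(measured, estimated):
    """Coefficient of determination (R^2) between manually measured and
    point-cloud-derived trait values."""
    mean_m = sum(measured) / len(measured)
    ss_res = sum((m - e) ** 2 for m, e in zip(measured, estimated))
    ss_tot = sum((m - mean_m) ** 2 for m in measured)
    return 1.0 - ss_res / ss_tot

# hypothetical plant heights (m): manual tape measure vs. LiDAR estimate
heights_manual = [1.10, 1.35, 1.62, 1.48, 1.90]
heights_lidar  = [1.12, 1.30, 1.65, 1.50, 1.85]
```

An R^2 near 1 means the LiDAR-derived values track the manual measurements closely.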
7. A Handheld Laser-Scanning-Based Methodology for Monitoring Tree Growth in Chestnut Orchards
- Author
- Dimas Pereira-Obaya, Carlos Cabo, Celestino Ordóñez, and José Ramón Rodríguez-Pérez
- Subjects
- sweet chestnut, MLS, SLAM, 3-D point cloud, tree growth monitoring, Chemical technology, TP1-1185
- Abstract
Chestnut and chestnut byproducts are of worldwide interest, so there is a constant need to develop faster and more accurate monitoring techniques. Recent advances in simultaneous localization and mapping (SLAM) algorithms and user accessibility have led to increased use of handheld mobile laser scanning (HHLS) in precision agriculture. We propose a tree growth monitoring methodology, based on HHLS point cloud processing, that calculates the length of branches through spatial discretization of the point cloud for each tree. The methodology was tested by comparing two point clouds collected almost simultaneously for each of a set of sweet chestnut trees. The results obtained indicated that our HHLS method was reliable and accurate in efficiently monitoring sweet chestnut tree growth. The same methodology was used to calculate the growth of the same set of trees over 37 weeks (from spring to winter). Differences in week 0 and week 37 scans showed an approximate mean growth of 0.22 m, with a standard deviation of around 0.16 m reflecting heterogeneous tree growth.
- Published
- 2024
- Full Text
- View/download PDF
8. Camper’s Plane Localization and Head Pose Estimation Based on Multi-View RGBD Sensors
- Author
- Huaqiang Wang, Lu Huang, Kang Yu, Tingting Song, Fengen Yuan, Hao Yang, and Haiying Zhang
- Subjects
- Head pose estimation, multi-view, camper’s plane, depth sensor, 3-D point cloud, Electrical engineering. Electronics. Nuclear engineering, TK1-9971
- Abstract
Head pose estimation (HPE) is a key step in the computation and quantification of 3-D facial features and has a significant impact on the precision and accuracy of measurements. High-precision HPE is the basis for standardized facial data collection and analysis. The Camper’s plane is the standard (baseline) plane commonly used by anthropologists for head and face research, but there is no research on automatic positioning of the Camper’s plane using color and depth cameras. This paper presents a high-accuracy method for Camper’s plane localization and HPE based on multi-view RGBD depth sensors. The 3-D facial point clouds acquired by the multi-view RGBD depth sensors are aligned to obtain a complete 3-D face. Keypoint RCNN is used for facial keypoint detection to obtain facial landmarks. A method is proposed to build a general face datum model based on a self-built dataset. The head pose is estimated by applying a rigid body transformation between an individual 3-D face and the general 3-D face model. To verify the accuracy of Camper’s plane localization and HPE, 102 cases of 3-D facial data were collected and experiments were conducted. The tragus and nasal alar points are localized to within 7 pixels (about 0.83 cm), and the average accuracies of the three dimensions of the identified Camper’s plane are 0.87°, 0.64°, and 0.47°, respectively. The average accuracies of the three dimensions of HPE were 1.17°, 0.90°, and 0.97°, respectively. The experimental results demonstrate the effectiveness of the method for Camper’s plane localization and HPE.
- Published
- 2022
- Full Text
- View/download PDF
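A plane through three facial landmarks, such as the tragus and nasal alar points named above, can be located with a cross product; the landmark coordinates below are invented for illustration, and this is only a fragment, not the paper's pipeline.

```python
import math

def plane_normal(p1, p2, p3):
    """Unit normal of the plane through three landmark points."""
    u = [b - a for a, b in zip(p1, p2)]
    v = [b - a for a, b in zip(p1, p3)]
    n = [u[1]*v[2] - u[2]*v[1],
         u[2]*v[0] - u[0]*v[2],
         u[0]*v[1] - u[1]*v[0]]
    norm = math.sqrt(sum(c * c for c in n))
    return [c / norm for c in n]

def pitch_angle(normal):
    """Angle (degrees) between the plane normal and the vertical z-axis."""
    return math.degrees(math.acos(abs(normal[2])))

# hypothetical landmark coordinates (cm): left/right tragus and nasal alar
tragus_l, tragus_r, alar = (-7.0, 0.0, 0.0), (7.0, 0.0, 0.0), (0.0, 9.0, -1.0)
```

Comparing such plane angles against a reference pose is one way the reported per-axis accuracies could be evaluated.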
9. Rotation-Invariant Point Cloud Representation for 3-D Model Recognition.
- Author
- Wang, Yan, Zhao, Yining, Ying, Shihui, Du, Shaoyi, and Gao, Yue
- Abstract
Three-dimensional (3-D) data have many applications in the field of computer vision, and the point cloud is one of the most popular modalities. Therefore, how to establish a good representation for a point cloud is a core issue in computer vision, especially for 3-D object recognition tasks. Existing approaches mainly focus on the invariance of the representation under the group of permutations. However, for point cloud data, the representation should also be rotation invariant. To address such invariance, in this article, we introduce a relation of equivalence under the action of the rotation group, through which the representation of a point cloud is located in a homogeneous space: two point clouds are regarded as equivalent when they differ only by a rotation. Our network is flexibly incorporated into existing frameworks for point clouds, which guarantees the proposed approach to be rotation invariant. In addition, we provide a detailed analysis of how to parameterize the group SO(3) in a convolutional network so as to capture the relation with all rotations in 3-D Euclidean space $\mathbb {R}^{3}$. We select the optimal rotation as the best representation of the point cloud and propose a solution to the minimization problem on the rotation group SO(3) by using its geometric structure. To validate the rotation invariance, we combine our module with two existing deep models and evaluate them on the ModelNet40 dataset and its subset ModelNet10. Experimental results indicate that the proposed strategy improves the performance of those existing deep models when the data involve arbitrary rotations. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
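A quick way to see what rotation invariance under SO(3) demands is to rotate a cloud and compare a representation that provably does not change, such as the multiset of pairwise distances; this sketch uses a single z-axis rotation and is not the paper's network.

```python
import math

def rotate_z(points, theta):
    """Rotate a point cloud about the z-axis (one element of SO(3))."""
    c, s = math.cos(theta), math.sin(theta)
    return [(c*x - s*y, s*x + c*y, z) for x, y, z in points]

def pairwise_dists(points):
    """Sorted pairwise distances: unchanged by any rigid rotation."""
    return sorted(math.dist(p, q)
                  for i, p in enumerate(points)
                  for q in points[i+1:])

cloud = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 2.0, 1.0)]
rotated = rotate_z(cloud, 1.23)
# pairwise_dists(cloud) and pairwise_dists(rotated) agree to rounding error
```

A learned representation must achieve the same equivalence while retaining far more shape information than distances alone.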
10. Efficient MSPSO Sampling for Object Detection and 6-D Pose Estimation in 3-D Scenes.
- Author
- Xing, Xuejun, Guo, Jianwei, Nan, Liangliang, Gu, Qingyi, Zhang, Xiaopeng, and Yan, Dong-Ming
- Subjects
- PARTICLE swarm optimization, 3-D printers, POINT cloud
- Abstract
The point pair feature (PPF) is widely used in manufacturing for estimating 6-D poses. The key to the success of PPF matching is to establish correct 3-D correspondences between the object and the scene, i.e., finding as many valid similar point pairs as possible. However, efficient sampling of point pairs has been overlooked in existing frameworks. In this article, we propose a revised PPF matching pipeline to improve the efficiency of 6-D pose estimation. Our basic idea is that valid scene reference points lie on the object’s surface and that previously sampled reference points can provide prior information for locating new reference points. The novelty of our approach is a new sampling algorithm for selecting scene reference points based on multisubpopulation particle swarm optimization guided by a probability map. We also introduce an effective pose clustering and hypothesis verification method to obtain the optimal pose. Moreover, we optimize the progressive sampling for multiframe point clouds to improve processing efficiency. The experimental results show that our method outperforms previous methods by 6.6% and 3.9% in accuracy on the public DTU and LineMOD datasets, respectively. We further validate our approach by applying it in a real robot grasping task. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
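The classic four-dimensional point pair feature underlying PPF matching (a distance plus three angles, following Drost et al.'s widely used formulation) can be sketched directly; `ppf` below is a generic implementation, not the authors' revised pipeline.

```python
import math

def angle(u, v):
    """Angle between two 3-D vectors, clamped for numerical safety."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(a * a for a in v))
    return math.acos(max(-1.0, min(1.0, dot / (nu * nv))))

def ppf(p1, n1, p2, n2):
    """Point pair feature: (||d||, angle(n1, d), angle(n2, d), angle(n1, n2))
    for points p1, p2 with surface normals n1, n2."""
    d = [b - a for a, b in zip(p1, p2)]
    return (math.sqrt(sum(c * c for c in d)),
            angle(n1, d), angle(n2, d), angle(n1, n2))

f = ppf((0, 0, 0), (0, 0, 1), (1, 0, 0), (0, 1, 0))
```

Quantizing these four values gives the hash key used to match model and scene point pairs.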
11. Evaluation of Convolution Operation Based on the Interpretation of Deep Learning on 3-D Point Cloud
- Author
- Bufan Zhao, Xianghong Hua, Kegen Yu, Wuyong Tao, Xiaoxing He, Shaoquan Feng, and Pengju Tian
- Subjects
- 3-D point cloud, convolution function evaluation, deep learning interpretation, external consistency, internal consistency, Ocean engineering, TC1501-1800, Geophysics. Cosmic physics, QC801-809
- Abstract
The interpretation of deep learning networks is an important part of understanding convolutional neural networks (CNNs). As exploratory research, this article investigates interpretation methods for 3-D point cloud deep learning networks, with the purpose of evaluating the performance of convolution functions in 3-D point cloud CNNs. Specifically, a 3-D point cloud classification network with two branches is used as the interpretation network in two respects: 1) information entropy is introduced to diagnose the internal representation in the middle layers of the CNN; and 2) the external consistency of a convolution function is measured by per-point classification accuracy with the class activation mapping technique. Four typical convolution functions are tested by the interpretation network on the ModelNet40 dataset, and the experimental results demonstrate that the proposed evaluation method is reliable. The feature transformation ability and feature recognition ability of the convolution functions are assessed through visualization and the proposed measurable metrics.
- Published
- 2020
- Full Text
- View/download PDF
12. Fine-Grained Patch Segmentation and Rasterization for 3-D Point Cloud Attribute Compression.
- Author
- Zhao, Baoquan, Lin, Weisi, and Lv, Chenlei
- Subjects
- POINT cloud, VIDEO compression, VIDEO codecs, GEOMETRIC analysis, SOURCE code, IMAGE segmentation
- Abstract
Due to the high dimensionality of point cloud data and the irregularity and complexity of its geometric structure, effective attribute compression remains a very challenging task. Many recent efforts have focused on transforming point clouds into images and leveraging existing sophisticated image/video codecs to improve attribute coding efficiency. However, how to synthesize coherent and correlation-preserving attribute images is still inadequately addressed by existing studies, which hinders exploiting the merits of well-developed compression infrastructure. In this paper, we present a novel image synthesis method for effective point cloud attribute compression. First, the proposed scheme segments a given point cloud into a collection of fine-grained patches by performing geometric structure analysis using the heat kernel signature feature descriptor and complex points. Second, we transform the obtained patches from 3-D to 2-D using a low-dimensional embedding algorithm and then convert them into patch attribute images with the proposed patch rasterization and rectification method. Finally, we compactly assemble all the patch attribute images together by formulating the task as a bin nesting problem and obtain an attribute image of the whole point cloud for image/video-based compression. Experimental results demonstrate the effectiveness of the proposed method in point cloud attribute compression and its superiority over state-of-the-art codecs. The source code of this work is publicly available at https://github.com/pccompession/UPCAC. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
13. ESTIMATION OF SHOULDER WIDTH AND NECK GIRTH BASED ON 3-D POINT CLOUD DATA.
- Author
- Xin LU, Panpan GUO, and Guolian LIU
- Subjects
- POINT cloud, NECK
- Abstract
The 3-D point cloud map in anthropometry has attracted intensive attention due to the availability of fast and accurate laser scanning devices. Inevitably, there is a deviation between 3-D measurements and manual tests. To address this problem, shoulder width and neck girth are accurately determined from the 3-D point cloud, a two-scale fractal is used for point cloud simplification, and young female samples are used in our experiment to demonstrate the accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
14. Extraction of Street Pole-Like Objects Based on Plane Filtering From Mobile LiDAR Data.
- Author
- Tu, Jingmin, Yao, Jian, Li, Li, Zhao, Wenjie, and Xiang, Binbin
- Subjects
- OPTICAL radar, LIDAR, ROAD maps, BRIDGE bearings, STREETS
- Abstract
Pole-like objects provide important street infrastructure for road inventory and road mapping. In this article, we propose a novel pole-like object extraction algorithm based on plane filtering from mobile Light Detection and Ranging (LiDAR) data. The proposed approach is composed of two parts. In the first part, a novel octree-based split scheme is proposed to fit initial planes from off-ground points. The results of the plane fitting contribute to the extraction of pole-like objects. In the second part, we propose a novel method of pole-like object extraction by plane filtering based on local geometric feature restriction and isolation detection. The proposed approach is a new solution for detecting pole-like objects from mobile LiDAR data. The innovation in this article is the assumption that each pole-like object can be represented by a plane, so that extracting pole-like objects reduces to a plane selection problem. The proposed method has been tested on three datasets captured from different scenes. The average completeness, correctness, and quality of our approach reach up to 87.66%, 88.81%, and 79.03%, respectively, which is superior to state-of-the-art approaches. The experimental results indicate that our approach can extract pole-like objects robustly and efficiently. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
15. Indoor Point Cloud Segmentation Using Iterative Gaussian Mapping and Improved Model Fitting.
- Author
- Zhao, Bufan, Hua, Xianghong, Yu, Kegen, Xuan, Wei, Chen, Xijiang, and Tao, Wuyong
- Subjects
- POINT cloud, MAXIMUM likelihood statistics, ALGORITHMS, CLASSIFICATION
- Abstract
Indoor scene segmentation based on 3-D laser point clouds is important for reconstruction and classification, especially of permanent building structures. However, existing segmentation methods mainly focus on large-scale planar structures but ignore sharp structures and details, which causes accuracy degradation in scene reconstruction. To handle this issue, an iterative Gaussian-mapping-based segmentation strategy is proposed in this article, which proceeds iteratively from rough to refined segmentation to decompose the indoor scene into detectable point cloud clusters layer by layer. An improved model fitting algorithm based on the maximum likelihood estimation sampling consensus (MLESAC) algorithm, called the Prior-MLESAC algorithm, is proposed for refined segmentation to extract both vertical and nonvertical planar and cylindrical structures. The experimental results demonstrate that planar and cylindrical structures are segmented more completely by the proposed strategy, and more details of the indoor structure are restored than with other existing methods. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
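Since Prior-MLESAC itself is not reproduced here, a minimal plain-RANSAC plane fit illustrates the sample-consensus family it improves on; the thresholds and names below are assumptions.

```python
import random

def plane_from_points(p1, p2, p3):
    """Plane coefficients (a, b, c, d) with ax + by + cz + d = 0."""
    u = [b - a for a, b in zip(p1, p2)]
    v = [b - a for a, b in zip(p1, p3)]
    n = (u[1]*v[2] - u[2]*v[1],
         u[2]*v[0] - u[0]*v[2],
         u[0]*v[1] - u[1]*v[0])
    d = -sum(c * x for c, x in zip(n, p1))
    return (*n, d)

def ransac_plane(points, iters=200, tol=0.02, seed=0):
    """Return the largest inlier set found by random plane sampling."""
    rng = random.Random(seed)
    best = []
    for _ in range(iters):
        a, b, c, d = plane_from_points(*rng.sample(points, 3))
        norm = (a*a + b*b + c*c) ** 0.5
        if norm < 1e-12:              # degenerate (collinear) sample
            continue
        inliers = [p for p in points
                   if abs(a*p[0] + b*p[1] + c*p[2] + d) / norm <= tol]
        if len(inliers) > len(best):
            best = inliers
    return best

# eight points on the plane z = 0 plus one outlier
pts = [(x * 0.1, y * 0.1, 0.0) for x in range(4) for y in range(2)]
pts.append((0.2, 0.1, 1.0))
```

MLESAC-style variants replace the hard inlier count with a likelihood score, which is what the Prior-MLESAC refinement builds on.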
16. AUV-Based Multi-View Scanning Method for 3-D Reconstruction of Underwater Object Using Forward Scan Sonar.
- Author
- Kim, Byeongjin, Kim, Jason, Cho, Hyeonwoo, Kim, Jinwhan, and Yu, Son-Cheol
- Abstract
In this study, we propose an autonomous underwater vehicle (AUV)-based multi-directional scanning method for underwater objects using forward scan sonar (FSS). Recently, a method was developed that can generate a 3-D point cloud of an underwater object using FSS. However, the data were sparse and noisy, which made 3-D recognition difficult. Another limitation was the absence of back and side surface information for an object. These limitations degraded the results of the 3-D reconstruction. We propose a multi-directional scanning strategy to improve the 3-D point cloud and reconstruction results, in which the AUV determines the direction of the next scan by analyzing the 3-D data of the object until scanning is complete. This enables adaptive scanning based on the shape of the target object while reducing the scanning time. Based on the scanning strategy, a polygonal approximation method for real-time 3-D reconstruction is developed to process scanned data groups of the 3-D point cloud. This process can efficiently handle multiple 3-D point cloud data for real-time operation and reduce their uncertainty. To verify the performance of our proposed method, simulations were performed with various objects and conditions. In addition, experiments were conducted in an indoor water tank, and the results were compared with the simulation results. Field experiments were conducted to verify the proposed method in more diverse environments and with more diverse objects. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
17. Simulation of Maize Photosynthesis and Light Energy Utilization Based on a 3-D Canopy Model.
- Author
- 顾生浩, 王勇健, 温维亮, 卢宪菊, 于泽涛, and 郭新宇
- Subjects
- PHOTOSYNTHETIC rates, LIGHT curves, WEATHER, PHOTOSYNTHESIS, CULTIVARS
- Abstract
Modelling canopy photosynthesis production on the basis of leaf photosynthesis has rarely been reported for maize (Zea mays). In this study, we built a maize photosynthetic production model, 3DMaizeCaP, by coupling a canopy 3-D architecture model, a radiative flux distribution model, a leaf photosynthesis model, and a radiation utilization model. Three cultivars with different plant architectures, i.e., AD268, JK968, and ZD958, and two typical weather conditions, i.e., a sunny day and an overcast day, were used. In order to unravel the responses of canopy photosynthesis rate and radiation use efficiency to cultivar and environment, a simulation study combined with a field experiment was performed. The results showed that the maximum photosynthesis rate and dark respiration rate decreased linearly with decreasing leaf rank for AD268, JK968, and ZD958. The distributions of both the maximum photosynthesis rate and the dark respiration rate of individual leaves showed a vertical profile from the top to the bottom of the maize canopy. AD268 had the highest maximum photosynthesis rate and the lowest dark respiration rate among the three cultivars. For all cultivars, the canopy photosynthesis rate increased in the morning, reached its maximum at 12:00 noon on an overcast day and at 11:00 on a sunny day, and then decreased in the afternoon. The maximum canopy photosynthesis rate of AD268 was 21.6 μmol CO2/(m²·s) on an overcast day and 26.2 μmol CO2/(m²·s) on a sunny day, significantly higher than that of JK968 (20.8 and 24.9 μmol CO2/(m²·s)) and ZD958 (19.6 and 24.4 μmol CO2/(m²·s)). The daily net assimilated CO2 of AD268 was significantly (P<0.05) higher than that of ZD958, increasing by 14.8% and 12.4% on a sunny and an overcast day, respectively.
The plant architecture of AD268 was significantly different from that of the other cultivars (P<0.05). However, there was no significant difference in the daily intercepted photosynthetically absorbed radiation between cultivars (P>0.05). At the leaf level, the leaf at the 16th main stem phytomer rank produced the highest daily net assimilated CO2 among individual leaves. The radiation use efficiency of AD268 was 3.22 and 3.03 g/MJ under sunny and overcast conditions, respectively, indicating increases of 4.5% and 5.6% compared to JK968 and of 7.7% and 7.8% compared to ZD958. The canopy radiation use efficiency of maize was more sensitive to the initial slope of the light response curve than to the maximum photosynthesis rate (P<0.05). From the viewpoint of improving canopy radiation use efficiency for maize, designing a maize ideotype with a more compact plant architecture and higher leaf photosynthetic capacity is suggested for future breeding. This study provides not only an approach for quantitatively estimating the canopy photosynthesis rate of maize but also an evaluation basis and a phenotyping technique for breeding cultivars with high photosynthetic efficiency. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
18. Flood-fill-based object segmentation and tracking for intelligent vehicles.
- Author
- Phuong Minh Chu, Seoungjae Cho, Kaisi Huang, and Kyungeun Cho
- Subjects
- OBJECT tracking (Computer vision), POINT cloud, VEHICLES
- Abstract
In this article, an application for object segmentation and tracking for intelligent vehicles is presented. The proposed object segmentation and tracking method is implemented by combining three stages in each frame. First, based on our previous research on a fast ground segmentation method, the present approach segments three-dimensional point clouds into ground and non-ground points. The ground segmentation is important for clustering each object in subsequent steps. From the non-ground parts, we continue to segment objects using a flood-fill algorithm in the second stage. Finally, object tracking is implemented to determine the same objects over time in the final stage. This stage is performed based on likelihood probability calculated using features of each object. Experimental results demonstrate that the proposed system shows effective, real-time performance. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
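The flood-fill clustering of non-ground points described above can be sketched as a breadth-first search over occupied voxels; this generic version is an assumption, not the authors' implementation.

```python
from collections import deque

def flood_fill_segments(voxels):
    """Group occupied voxels into 6-connected clusters (one per object)."""
    voxels = set(voxels)
    segments = []
    while voxels:
        seed = voxels.pop()
        queue, segment = deque([seed]), [seed]
        while queue:
            x, y, z = queue.popleft()
            for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                               (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                nb = (x + dx, y + dy, z + dz)
                if nb in voxels:
                    voxels.remove(nb)
                    queue.append(nb)
                    segment.append(nb)
        segments.append(segment)
    return segments

# two separated blobs of non-ground voxels -> two object clusters
blobs = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (5, 5, 0), (5, 6, 0)]
```

Removing ground points first, as the paper does, is what keeps separate objects from merging into one flooded region.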
19. Precise and Automatic 3-D Absolute Geolocation of Targets Using Only Two Long-Aperture SAR Acquisitions.
- Author
- Duque, Sergi, Parizzi, Alessandro, and De Zan, Francesco
- Subjects
- SYNTHETIC aperture radar, THREE-dimensional display systems, RADARSAT satellites, STRUCTURE-activity relationships, POINT cloud, SPACE-based radar, SATELLITE geodesy
- Abstract
This paper deals with precise absolute geolocation of point targets by means of a pair of high-resolution synthetic aperture radar (SAR) acquisitions, acquired from a satellite. Even though a single SAR image is a 2-D projection of the backscatter, some 3-D information can be extracted from a defocussing analysis, depending on the resolution, thanks to orbital curvature. A second acquisition, observing the same scene under a different look angle, adds stereogrammetric capability and can achieve geolocation accuracy at decimeter level. However, for the stereogrammetric analysis to work, it is necessary to match targets correctly in the two images. This task is particularly difficult if it has to be automatic and targets are dense. Unfortunately, the defocussing-based geolocation is not sufficient for reliable target matching: the limiting factor is the unknown tropospheric delay that can cause geolocation errors of several meters in the elevation direction. However, observing that the tropospheric phase screen displays a low-pass character, this paper shows how to identify statistically the local atmospheric disturbances, therefore dramatically improving the score of successful matching. All steps involved exploit peculiar radar image characteristics and, thanks to this, avoid generic point cloud matching algorithms. The proposed algorithm is shown at work on a pair of TerraSAR-X staring spotlight images. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
20. Background Point Filtering of Low-Channel Infrastructure-Based LiDAR Data Using a Slice-Based Projection Filtering Algorithm
- Author
- Ciyun Lin, Hui Liu, Dayong Wu, and Bowen Gong
- Subjects
- background points filtering, infrastructure-based LiDAR, slice-based projection, 3-D point cloud, Chemical technology, TP1-1185
- Abstract
A light detection and ranging (LiDAR) sensor can obtain richer and more detailed traffic flow information than traditional traffic detectors, which makes it a valuable data source for various novel intelligent transportation applications. However, the point cloud generated by LiDAR scanning includes not only road user points but also points from other surrounding objects. It is necessary to remove the worthless points from the point cloud with a suitable background filtering algorithm to accelerate micro-level traffic data extraction. This paper presents a background point filtering algorithm using a slice-based projection filtering (SPF) method. First, the 3-D point cloud is projected to 2-D polar coordinates to reduce the point data dimensions and improve processing efficiency. Then, the point cloud is classified into four categories per slice unit: valuable object points (VOPs), worthless object points (WOPs), abnormal ground points (AGPs), and normal ground points (NGPs). Based on the point cloud classification results, the traffic objects (pedestrians and vehicles) and their surrounding information can be easily identified from an individual frame of the point cloud. We propose an artificial neural network (ANN)-based model to improve the adaptability of the algorithm in dealing with the road gradient and the LiDAR mounting inclination. The experimental results showed that the algorithm successfully extracted valuable points, such as road users and curbstones. Compared to the random sample consensus (RANSAC) algorithm and the 3-D density-statistic-filtering (3-D-DSF) algorithm, the proposed algorithm demonstrated better performance in terms of run-time and background filtering accuracy.
- Published
- 2020
- Full Text
- View/download PDF
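The core projection step of the SPF method above can be illustrated with a minimal pure-Python sketch; the function names, slice count, and data layout are illustrative assumptions, not taken from the paper:

```python
import math

def project_to_polar(points):
    """Project 3-D points (x, y, z) onto 2-D polar coordinates (r, theta)
    in the horizontal plane, dropping the height dimension."""
    return [(math.hypot(x, y), math.atan2(y, x)) for x, y, z in points]

def assign_slices(polar_points, n_slices=360):
    """Group the projected points into equal-width angular slice units,
    the unit in which SPF later classifies points."""
    width = 2 * math.pi / n_slices
    slices = {}
    for r, theta in polar_points:
        idx = int((theta + math.pi) / width) % n_slices
        slices.setdefault(idx, []).append(r)
    return slices
```

Reducing each point to an (r, θ) pair is what lets the per-slice classification into VOPs, WOPs, AGPs, and NGPs run quickly on every frame.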
21. Adaptive Nonrigid Inpainting of Three-Dimensional Point Cloud Geometry.
- Author
-
Dinesh, Chinthaka, Bajic, Ivan V., and Cheung, Gene
- Subjects
INPAINTING ,TELEPRESENCE ,CLOUD computing ,THREE-dimensional imaging ,FROBENIUS algebras - Abstract
In this letter, we introduce several algorithms for geometry inpainting of three-dimensional (3-D) point clouds with large holes. The algorithms are exemplar based. Hole filling is performed iteratively using templates near the hole boundary to find the best matching regions elsewhere in the cloud, from where existing points are transferred to the hole. We propose two improvements over the previous work on exemplar-based hole filling. The first one is adaptive template size selection in each iteration, which simultaneously leads to higher accuracy and lower execution time. The second improvement is a nonrigid transformation to better align the candidate set of points with the template before the point transfer, which leads to even higher accuracy. We demonstrate the algorithm's ability to fill holes that are difficult or impossible to fill by existing methods. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
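The exemplar search at the heart of such hole filling can be sketched in a few lines; the brute-force search, candidate offsets, and scoring below are illustrative simplifications (rigid translation only, with no adaptive template sizing or nonrigid alignment):

```python
def match_score(template, cloud, offset):
    """Sum of squared distances from each translated template point to
    its nearest cloud point (smaller = better match)."""
    total = 0.0
    for tx, ty, tz in template:
        px, py, pz = tx + offset[0], ty + offset[1], tz + offset[2]
        total += min((px - cx) ** 2 + (py - cy) ** 2 + (pz - cz) ** 2
                     for cx, cy, cz in cloud)
    return total

def best_exemplar(template, cloud, candidate_offsets):
    """Exemplar search: pick the offset whose neighborhood elsewhere in
    the cloud best matches the template taken near the hole boundary."""
    return min(candidate_offsets,
               key=lambda o: match_score(template, cloud, o))
```

Points from the winning region would then be transferred into the hole; the letter's contributions refine exactly the two knobs this sketch fixes, by adapting the template size per iteration and aligning the candidate nonrigidly before transfer.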
22. A Novel Surface Descriptor for Automated 3-D Object Recognition and Localization
- Author
-
Liang-Chia Chen and Thanh-Hung Nguyen
- Subjects
Machine vision ,3-D point cloud ,object segmentation ,object recognition ,object localization ,3-D descriptor ,Chemical technology ,TP1-1185 - Abstract
This paper presents a novel approach to the automated recognition and localization of 3-D objects. The proposed approach uses 3-D object segmentation to segment randomly stacked objects in an unstructured point cloud. Each segmented object is then represented by a regional area-based descriptor, which measures the distribution of surface area in the oriented bounding box (OBB) of the segmented object. By comparing the estimated descriptor with the template descriptors stored in the database, the object can be recognized. With this approach, the detected object can be matched with the model using the iterative closest point (ICP) algorithm to detect its 3-D location and orientation. Experiments were performed to verify the feasibility and effectiveness of the approach. With the measured point clouds having a spatial resolution of 1.05 mm, the proposed method can achieve both a mean deviation and standard deviation below half of the spatial resolution.
- Published
- 2019
- Full Text
- View/download PDF
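The idea of a regional area-based descriptor can be approximated with a short sketch; for simplicity this version uses an axis-aligned bounding box instead of the paper's OBB, and per-cell point counts as a crude proxy for surface area — both are illustrative assumptions:

```python
def area_distribution_descriptor(points, bins=4):
    """Axis-aligned stand-in for the OBB: split the bounding box into
    bins**3 cells and report the fraction of points per cell as a crude
    proxy for the surface-area distribution."""
    xs, ys, zs = zip(*points)
    mins = (min(xs), min(ys), min(zs))
    maxs = (max(xs), max(ys), max(zs))
    counts = [0] * bins ** 3
    for p in points:
        idx = 0
        for d in range(3):
            span = (maxs[d] - mins[d]) or 1.0  # guard flat dimensions
            cell = min(int((p[d] - mins[d]) / span * bins), bins - 1)
            idx = idx * bins + cell
        counts[idx] += 1
    total = len(points)
    return [c / total for c in counts]
```

A segmented object would then be recognized by comparing its descriptor against the stored template descriptors, e.g. by Euclidean distance, before ICP refines the 3-D pose.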
23. Evaluation of Convolution Operation Based on the Interpretation of Deep Learning on 3-D Point Cloud
- Author
-
Xianghong Hua, Wuyong Tao, Bufan Zhao, Pengju Tian, Shaoquan Feng, Xiaoxing He, and Kegen Yu
- Subjects
Atmospheric Science ,Computer science ,external consistency ,Geophysics. Cosmic physics ,Feature extraction ,Point cloud ,02 engineering and technology ,010501 environmental sciences ,01 natural sciences ,Convolutional neural network ,Interpretation (model theory) ,Convolution ,internal consistency ,0202 electrical engineering, electronic engineering, information engineering ,Computers in Earth Sciences ,TC1501-1800 ,0105 earth and related environmental sciences ,deep learning interpretation ,QC801-809 ,business.industry ,Deep learning ,Feature recognition ,Pattern recognition ,Visualization ,Ocean engineering ,3-D point cloud ,020201 artificial intelligence & image processing ,Artificial intelligence ,convolution function evaluation ,business - Abstract
The interpretation of deep learning networks is an important part of understanding convolutional neural networks (CNNs). As exploratory research, this article explores interpretation methods for 3-D point cloud deep learning networks, with the goal of evaluating the performance of convolution functions in 3-D point cloud CNNs. Specifically, a 3-D point cloud classification network with two branches is used as the interpretation network in two respects: 1) information entropy is introduced to diagnose the internal representation in the middle layers of the CNN; and 2) the external consistency of a convolution function is measured by per-point classification accuracy with the class activation mapping technique. Four typical convolution functions are tested by the interpretation network on the ModelNet40 dataset, and the experimental results demonstrate that the proposed evaluation method is reliable. The feature transformation and feature recognition abilities of the convolution functions are assessed through visualization and the proposed measurable metrics.
- Published
- 2020
- Full Text
- View/download PDF
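The information-entropy diagnosis of a middle layer can be illustrated with a tiny sketch; binning activations into a histogram is one common way to estimate entropy, and the bin count here is an arbitrary illustrative choice rather than the paper's setting:

```python
import math

def activation_entropy(activations, bins=16):
    """Shannon entropy (in bits) of a histogram over a layer's
    activation values; a rough diagnostic of how spread out the
    internal representation is."""
    lo, hi = min(activations), max(activations)
    span = (hi - lo) or 1.0  # guard constant activations
    hist = [0] * bins
    for a in activations:
        hist[min(int((a - lo) / span * bins), bins - 1)] += 1
    n = len(activations)
    return -sum((c / n) * math.log2(c / n) for c in hist if c)
```

Under this reading, near-zero entropy would indicate a collapsed middle-layer representation, while higher entropy suggests the convolution function preserves more varied detail.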
24. RECOGNITION OF THE STACKED OBJECTS FOR BIN PICKING.
- Author
-
Hikizu, M., Mikami, S., and Seki, H.
- Subjects
INFORMATION theory ,THREE-dimensional imaging ,CLOUD computing ,PROBLEM solving ,IMAGE segmentation ,PATTERN recognition systems - Abstract
In this paper, we propose a recognition method for stacked objects for pick-and-place motion. We assume a home setting in which objects are stacked miscellaneously and no equipment exists to arrange them, so the stacked objects must be recognized individually. Information on the objects is measured by a laser range finder (LRF), the measurements are used as a 3-D point cloud, and the objects are recognized by model-based matching. A local minimum problem arises in recognizing the objects. We propose a method to recognize the stacked objects statistically using multiple recognition results; this statistical recognition avoids the local minimum problem and segments the individual objects. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
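The statistical use of multiple recognition results can be sketched as a simple voting scheme; the pose quantization grid and the voting rule below are illustrative assumptions, not the paper's exact procedure:

```python
from collections import Counter

def vote_pose(hypotheses, grid=0.05):
    """Run model-based recognition many times, quantize each pose
    hypothesis onto a coarse grid, and return the most frequent bin.
    Runs trapped in local minima scatter; correct runs agree."""
    def key(pose):
        return tuple(round(v / grid) for v in pose)
    counts = Counter(key(h) for h in hypotheses)
    best, _ = counts.most_common(1)[0]
    return tuple(v * grid for v in best)
```

Because outlier hypotheses rarely land in the same bin twice, the most populated bin tends to correspond to the true pose, which is the intuition behind recognizing statistically.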
25. RECONSTRUCTION OF BUILDING FOOTPRINTS USING SPACEBORNE TOMOSAR POINT CLOUDS.
- Author
-
Shahzad, M. and Zhu, X. X.
- Subjects
CLOUD computing ,WEB services - Abstract
This paper presents an approach that automatically (but parametrically) reconstructs 2-D/3-D building footprints using 3-D synthetic aperture radar (SAR) tomography (TomoSAR) point clouds. These point clouds are generated by processing SAR image stacks via SAR tomographic inversion. The proposed approach reconstructs the building outline by exploiting both the roof and façade points. Initial building footprints are derived by applying the alpha shapes method to pre-segmented point clusters of individual buildings. A recursive angular-deviation-based refinement is then carried out to obtain refined/smoothed 2-D polygonal boundaries. A robust fusion framework then fuses the information pertaining to building façades into the smoothed polygons. Afterwards, a rectilinear building identification procedure is adopted and constraints are added to yield geometrically correct and visually aesthetic building shapes. The proposed approach is illustrated and validated using TomoSAR point clouds generated from a stack of TerraSAR-X high-resolution spotlight images from an ascending orbit covering an area of approximately 1.5 km² in the city of Berlin, Germany. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
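One pass of an angular-deviation-based boundary refinement can be sketched as follows; the threshold value and single-pass structure are illustrative simplifications (the paper applies the refinement recursively to alpha-shape boundaries):

```python
import math

def angular_refine(polygon, max_dev_deg=10.0):
    """Drop polygon vertices whose turn angle deviates from a straight
    line by less than max_dev_deg, smoothing the 2-D boundary while
    keeping genuine corners."""
    kept = []
    n = len(polygon)
    for i in range(n):
        p0, p1, p2 = polygon[i - 1], polygon[i], polygon[(i + 1) % n]
        a1 = math.atan2(p1[1] - p0[1], p1[0] - p0[0])
        a2 = math.atan2(p2[1] - p1[1], p2[0] - p1[0])
        # signed deviation wrapped to (-180, 180] degrees
        dev = abs(math.degrees((a2 - a1 + math.pi) % (2 * math.pi) - math.pi))
        if dev >= max_dev_deg:
            kept.append(p1)
    return kept
```

Running such a pass repeatedly until no vertex is removed yields the refined/smoothed polygonal boundary that the façade information is subsequently fused into.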
26. Registration for 3-D point cloud using angular-invariant feature
- Author
-
Jiang, Jun, Cheng, Jun, and Chen, Xinglin
- Subjects
- *
IMAGE registration , *INVARIANTS (Mathematics) , *ALGORITHMS , *CURVATURE , *THREE-dimensional imaging , *MATHEMATICAL transformations , *ITERATIVE methods (Mathematics) , *VECTOR analysis - Abstract
Abstract: This paper proposes an angular-invariant feature for the 3-D registration procedure to perform reliable selection of point correspondences. The feature is a vector whose dimension equals the number of nearest neighbors considered; each element of the vector is the angle between the normal vector and the direction to one of the point's nearest neighbors. The angular feature is invariant to scale and rotation transformations and is applicable to surfaces with small curvature. The feature improves convergence and registration error without any assumptions about the initial transformation; moreover, no strict sampling strategy is required. Experiments illustrate that the proposed angular-based algorithm is more effective than iterative closest point (ICP) and the curvature-based algorithm.
- Published
- 2009
- Full Text
- View/download PDF
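The angular feature itself is straightforward to compute once normals are available; a minimal sketch (neighbor search omitted, normals assumed given, names illustrative):

```python
import math

def angular_feature(p, normal, neighbors):
    """Per-point feature: one angle per nearest neighbor, between the
    surface normal at p and the direction from p to that neighbor."""
    feats = []
    nx, ny, nz = normal
    nlen = math.sqrt(nx * nx + ny * ny + nz * nz)
    for q in neighbors:
        vx, vy, vz = q[0] - p[0], q[1] - p[1], q[2] - p[2]
        vlen = math.sqrt(vx * vx + vy * vy + vz * vz)
        cos = (nx * vx + ny * vy + nz * vz) / (nlen * vlen)
        feats.append(math.acos(max(-1.0, min(1.0, cos))))  # clamp for safety
    return feats
```

Because each element depends only on directions, not magnitudes, the feature is unchanged by rigid rotation and uniform scaling of the cloud, which is the invariance the abstract relies on for correspondence selection.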
27. A Novel Surface Descriptor for Automated 3-D Object Recognition and Localization
- Author
-
Thanh-Hung Nguyen and Liang-Chia Chen
- Subjects
0209 industrial biotechnology ,Computer science ,Machine vision ,Point cloud ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,02 engineering and technology ,object segmentation ,lcsh:Chemical technology ,01 natural sciences ,Biochemistry ,Article ,object recognition ,Analytical Chemistry ,010309 optics ,020901 industrial engineering & automation ,Minimum bounding box ,0103 physical sciences ,Computer vision ,Segmentation ,lcsh:TP1-1185 ,3-D descriptor ,Electrical and Electronic Engineering ,Instrumentation ,Image resolution ,business.industry ,Orientation (computer vision) ,Cognitive neuroscience of visual object recognition ,Iterative closest point ,Object (computer science) ,Atomic and Molecular Physics, and Optics ,3-D point cloud ,Artificial intelligence ,business ,object localization - Abstract
This paper presents a novel approach to the automated recognition and localization of 3-D objects. The proposed approach uses 3-D object segmentation to segment randomly stacked objects in an unstructured point cloud. Each segmented object is then represented by a regional area-based descriptor, which measures the distribution of surface area in the oriented bounding box (OBB) of the segmented object. By comparing the estimated descriptor with the template descriptors stored in the database, the object can be recognized. With this approach, the detected object can be matched with the model using the iterative closest point (ICP) algorithm to detect its 3-D location and orientation. Experiments were performed to verify the feasibility and effectiveness of the approach. With the measured point clouds having a spatial resolution of 1.05 mm, the proposed method can achieve both a mean deviation and standard deviation below half of the spatial resolution.
- Published
- 2019
- Full Text
- View/download PDF
28. Recognition of the Stacked Objects for Bin Picking
- Author
-
S. Mikami, Hiroaki Seki, and Masatoshi Hikizu
- Subjects
laser range finder (LRF) ,Thesaurus (information retrieval) ,Information retrieval ,010504 meteorology & atmospheric sciences ,bin picking ,lcsh:T ,Computer science ,stacked objects ,02 engineering and technology ,lcsh:Technology ,01 natural sciences ,Recognition ,3-D point cloud ,Control and Systems Engineering ,lcsh:Technology (General) ,0202 electrical engineering, electronic engineering, information engineering ,lcsh:T1-995 ,020201 artificial intelligence & image processing ,Electrical and Electronic Engineering ,Bin picking ,0105 earth and related environmental sciences - Abstract
In this paper, we propose a recognition method for stacked objects for pick-and-place motion. We assume a home setting in which objects are stacked miscellaneously and no equipment exists to arrange them, so the stacked objects must be recognized individually. Information on the objects is measured by a laser range finder (LRF), the measurements are used as a 3-D point cloud, and the objects are recognized by model-based matching. A local minimum problem arises in recognizing the objects. We propose a method to recognize the stacked objects statistically using multiple recognition results; this statistical recognition avoids the local minimum problem and segments the individual objects.
- Published
- 2016
- Full Text
- View/download PDF
29. Background Point Filtering of Low-Channel Infrastructure-Based LiDAR Data Using a Slice-Based Projection Filtering Algorithm
- Author
-
Hui Liu, Ciyun Lin, Dayong Wu, and Bowen Gong
- Subjects
Computer science ,Point cloud ,RANSAC ,lcsh:Chemical technology ,01 natural sciences ,Biochemistry ,Article ,Analytical Chemistry ,infrastructure-based LiDAR ,0502 economics and business ,lcsh:TP1-1185 ,Point (geometry) ,Electrical and Electronic Engineering ,Projection (set theory) ,Instrumentation ,Intelligent transportation system ,background points filtering ,050210 logistics & transportation ,010401 analytical chemistry ,05 social sciences ,Frame (networking) ,Traffic flow ,Atomic and Molecular Physics, and Optics ,0104 chemical sciences ,Lidar ,3-D point cloud ,slice-based projection ,Algorithm - Abstract
A light detection and ranging (LiDAR) sensor can obtain richer and more detailed traffic flow information than traditional traffic detectors, which could be valuable data input for various novel intelligent transportation applications. However, the point cloud generated by LiDAR scanning includes not only road user points but also points from other surrounding objects. It is necessary to remove the worthless points from the point cloud with a suitable background filtering algorithm to accelerate micro-level traffic data extraction. This paper presents a background point filtering algorithm using a slice-based projection filtering (SPF) method. First, the 3-D point cloud is projected to 2-D polar coordinates to reduce the point data dimensions and improve processing efficiency. Then, the point cloud is classified into four categories within each slice unit: valuable object points (VOPs), worthless object points (WOPs), abnormal ground points (AGPs), and normal ground points (NGPs). Based on the point cloud classification results, the traffic objects (pedestrians and vehicles) and their surrounding information can be easily identified from an individual frame of the point cloud. We propose an artificial neural network (ANN)-based model to improve the adaptability of the algorithm in dealing with road gradients and LiDAR mounting inclination. The experimental results showed that the proposed algorithm successfully extracted the valuable points, such as road users and curbstones. Compared to the random sample consensus (RANSAC) algorithm and the 3-D density-statistic-filtering (3-D-DSF) algorithm, the proposed algorithm demonstrated better performance in terms of run-time and background filtering accuracy.
- Published
- 2020
- Full Text
- View/download PDF
30. Autonomous surface modeling of unknown 3D objects with an industrial robot manipulator
- Author
-
Söyleyici, Cansu, Özkan, Metin, ESOGÜ, Mühendislik Mimarlık Fakültesi, Elektrik Elektronik Mühendisliği, and Elektrik-Elektronik Mühendisliği Anabilim Dalı
- Subjects
Elektrik ve Elektronik Mühendisliği ,Automatic Scanning ,Görüş Planlama ,Next Best View ,3B Sayısallaştırma ,Sonraki En İyi Bakış ,3-D Digitization ,Industrial Robot Manipulator ,Endüstriyel Robot Kolu ,View Planning ,Electrical and Electronics Engineering ,Otonom Tarama ,3B Nokta Bulutu ,3-D Point Cloud - Abstract
Today, 3-D modeling of objects has become possible thanks to developing technology and has spread to extensive application areas such as industry, medicine, entertainment, and the preservation of cultural heritage. These applications create a need to obtain 3-D models with high quality and precision. Depth and/or color information obtained from the surface of an object is used for its 3-D modeling. This information is gathered by collecting data from the entire surface of the object with sensors; moving the sensor along the object surface for this purpose can be called surface scanning. The 3-D model of an object can be obtained in two ways: manual scanning and autonomous scanning. Manual scanning requires an experienced and skilled person, because it is not an easy task to determine from which views an object should be scanned and what the next scan point should be. In that case, human error leads to incomplete or inaccurate 3-D models, and scanning again and again causes loss of time. Autonomous scanning is generally performed by calculating the next best view (NBV) using a system consisting of at least a robot manipulator and a sensor. Within the scope of this thesis, the 3-D model of an unknown object is obtained fully autonomously using an industrial robot manipulator, a laser profile sensor, and a rotary table. With the applied method, NBV candidates are calculated after data are acquired by a first scan of an object whose approximate dimensions are known in advance. Then, the most suitable candidate is selected according to predefined criteria, and the sensor is moved to this position. The selection of scan points and the scanning process continue until the surface model of the 3-D object is complete.
Keywords: 3-D digitization, autonomous scanning, next best view, view planning, industrial robot manipulator, 3-D point cloud
- Published
- 2017
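The NBV selection step described above can be sketched as a greedy information-gain choice; the voxel-set representation and the gain criterion are illustrative assumptions rather than the thesis's exact predefined criteria:

```python
def next_best_view(candidates, seen_voxels, visible_map):
    """Greedy NBV: choose the candidate viewpoint expected to observe
    the most surface voxels not yet covered by earlier scans.
    visible_map maps each candidate view to the set of voxels it sees."""
    def gain(view):
        return len(visible_map[view] - seen_voxels)
    return max(candidates, key=gain)
```

Scanning would terminate once no candidate adds coverage, i.e. when the surface model of the object is complete.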
31. Background Point Filtering of Low-Channel Infrastructure-Based LiDAR Data Using a Slice-Based Projection Filtering Algorithm.
- Author
-
Lin, Ciyun, Liu, Hui, Wu, Dayong, and Gong, Bowen
- Subjects
- *
LIDAR , *VEHICLE detectors , *PEDESTRIANS , *POINT cloud , *ROAD users , *TRAFFIC flow - Abstract
A light detection and ranging (LiDAR) sensor can obtain richer and more detailed traffic flow information than traditional traffic detectors, which could be valuable data input for various novel intelligent transportation applications. However, the point cloud generated by LiDAR scanning not only includes road user points but also other surrounding object points. It is necessary to remove the worthless points from the point cloud by using a suitable background filtering algorithm to accelerate the micro-level traffic data extraction. This paper presents a background point filtering algorithm using a slice-based projection filtering (SPF) method. First, a 3-D point cloud is projected to 2-D polar coordinates to reduce the point data dimensions and improve the processing efficiency. Then, the point cloud is classified into four categories in a slice unit: Valuable object points (VOPs), worthless object points (WOPs), abnormal ground points (AGPs), and normal ground points (NGPs). Based on the point cloud classification results, the traffic objects (pedestrians and vehicles) and their surrounding information can be easily identified from an individual frame of the point cloud. We proposed an artificial neuron network (ANN)-based model to improve the adaptability of the algorithm in dealing with the road gradient and LiDAR-employing inclination. The experimental results showed that the algorithm of this paper successfully extracted the valuable points, such as road users and curbstones. Compared to the random sample consensus (RANSAC) algorithm and 3-D density-statistic-filtering (3-D-DSF) algorithm, the proposed algorithm in this paper demonstrated better performance in terms of the run-time and background filtering accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
32. A Higher-Fidelity Approach to Bridging the Simulation-Reality Gap for 3-D Object Classification
- Author
-
Feydt, Austin Pack
- Subjects
- Computer Science, Robotics, Machine Learning, Computer Vision, Simulation, Simulation reality gap, point cloud, 3-D point cloud, Deep Learning, Object Recognition
- Abstract
Computer vision tasks require collecting large volumes of data, which can be a time-consuming effort. Automating the collection process with simulations speeds it up, at the cost of the virtual data not closely matching the physical data. Building upon a previous attempt to bridge this gap, this thesis proposes three nuances to improve the correspondence between simulated and physical 3-D point clouds and depth images. First, the same CAD files used for simulated data acquisition are also used to 3-D print the physical models used for physical data acquisition. Second, a new projection method is developed to make better use of all information provided by the depth camera. Finally, all projection parameters are unified to prevent the deep learning model from developing a dependence on intensity scaling. A convolutional neural network is trained on the simulated data and evaluated on the physical data to determine the model’s generalization ability.
- Published
- 2019
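The unification of projection parameters against intensity-scaling dependence can be illustrated with a small sketch; the clamp range below is a hypothetical choice, not a value from the thesis:

```python
def unify_depth_scale(depth, d_min=0.2, d_max=3.0):
    """Clamp raw depth readings to a fixed working range and rescale to
    [0, 1], so simulated and physical depth images share one intensity
    scale instead of each sensor's native units."""
    out = []
    for d in depth:
        d = max(d_min, min(d_max, d))
        out.append((d - d_min) / (d_max - d_min))
    return out
```

With both domains normalized identically, a classifier trained on simulated depth images cannot exploit intensity scale as a shortcut feature when evaluated on physical data.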
33. A Novel Surface Descriptor for Automated 3-D Object Recognition and Localization.
- Author
-
Chen, Liang-Chia and Nguyen, Thanh-Hung
- Subjects
- *
OBJECT recognition (Computer vision) , *SURFACE area , *IMAGE segmentation , *THREE-dimensional imaging , *INDOOR positioning systems - Abstract
This paper presents a novel approach to the automated recognition and localization of 3-D objects. The proposed approach uses 3-D object segmentation to segment randomly stacked objects in an unstructured point cloud. Each segmented object is then represented by a regional area-based descriptor, which measures the distribution of surface area in the oriented bounding box (OBB) of the segmented object. By comparing the estimated descriptor with the template descriptors stored in the database, the object can be recognized. With this approach, the detected object can be matched with the model using the iterative closest point (ICP) algorithm to detect its 3-D location and orientation. Experiments were performed to verify the feasibility and effectiveness of the approach. With the measured point clouds having a spatial resolution of 1.05 mm, the proposed method can achieve both a mean deviation and standard deviation below half of the spatial resolution. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF