46 results for '"sparse point cloud"'
Search Results
2. Accurate visual localization of the crane boom end
- Author
-
Fu, Ling, Liu, Yanbing, Fan, Qing, and Yin, Yufeng
- Published
- 2024
- Full Text
- View/download PDF
3. YOLOv8-LiDAR Fusion: Increasing Range Resolution Based on Image Guided-Sparse Depth Fusion in Self-Driving Vehicles
- Author
-
Serhat Yildiz, Ahmet, Meng, Hongying, Rafiq Swash, Mohammad, Huda, M. Nazmul, editor, Wang, Mingfeng, editor, and Kalganova, Tatiana, editor
- Published
- 2025
- Full Text
- View/download PDF
4. A Clustering Algorithm for Sparse Point Clouds of Human Targets Detected by Millimeter-Wave Radar.
- Author
-
Yang Dong (杨冬), Zeng Chunyan (曾春艳), Hao Danni (郝丹妮), and Wan Xiangkui (万相奎)
- Published
- 2025
- Full Text
- View/download PDF
5. Sparse Point Cloud Upsampling Based on Neural Implicit Functions
- Author
-
Wang, Wenjun, Kong, Xiangyu, Wang, Daole, Zhao, Xiuyang, Huang, De-Shuang, editor, Zhang, Chuanlei, editor, and Zhang, Qinhu, editor
- Published
- 2024
- Full Text
- View/download PDF
6. A Sparse Point Cloud-Guided Method for Generating Digital Surface Models from Aerial Images.
- Author
-
Zhang Yongjun (张永军), Zou Siyuan (邹思远), and Liu Xinyi (刘欣怡)
- Subjects
DIGITAL elevation models, POINT cloud, MAPS
- Abstract
Objectives: Digital surface models are of great significance in real-scene 3D modeling, smart city construction, natural resources management, geoscience research, and hydrology and water resources management. However, dense matching, the core step in generating digital surface models, is prone to failures in regions with weak texture, disparity discontinuities, and inconsistent illumination. The sparse point cloud produced by aerial triangulation has high accuracy and extensive coverage, and can serve as a priori information to improve the accuracy of dense matching. Methods: This paper proposes a sparse point cloud guidance (SPCG) method for generating digital surface models from aerial images. The method constrains the dense matching of images using the sparse point cloud densified by aerial triangulation. It first selects stereo image pairs with good geometric configurations, high overlap, and extensive coverage. Then, the number of sparse points is extended using nearest-neighbor clustering and pyramid propagation. Additionally, the matching cost at the extended points is optimized with an improved Gaussian function to enhance the accuracy of the dense matching results. Finally, the sparse point cloud is fused with the dense matching point cloud to generate the digital surface model. Results: Experiments on simulated and real aerial stereo images show that semi-global matching optimized by the SPCG method significantly improves the matching accuracy of the original semi-global matching algorithm and outperforms both Gaussian-optimized semi-global matching and the deep learning method, pyramid stereo matching network. Numerically, the percentages of disparities generated by semi-global matching that differ from the true disparities by more than 1, 2, or 3 pixels are 46.72%, 32.83%, and 27.32%, respectively, whereas the SPCG method decreases these by 7.67%, 9.75%, and 10.28%, respectively. Experiments on multiview aerial images show that the SPCG method accurately generates the digital surface model of the whole survey area and outperforms the digital surface model generated by the well-known SURE software both qualitatively and quantitatively. Conclusions: Compared to the original dense matching, sparse point cloud-guided dense matching improves the matching accuracy in difficult regions such as weak textures, repetitive textures, and depth discontinuities, and in turn generates high-precision, high-density point clouds. A complete digital surface model is generated by fusing the densely matched point clouds. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
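As a concrete illustration of the guidance step described in entry 6, the sketch below reweights a semi-global-matching cost volume with a Gaussian centered on prior disparities from sparse points. This is a minimal reconstruction of the idea only; the function name and the sigma/strength parameters are assumptions, not taken from the paper.

```python
import numpy as np

def guide_cost_volume(cost, prior_disp, sigma=2.0, strength=0.5):
    """Gaussian guidance of an SGM-style cost volume (illustrative only).

    cost:       (H, W, D) matching cost volume.
    prior_disp: (H, W) disparity prior from sparse points, NaN where absent.
    Costs at disparities far from the prior are inflated, pulling the
    winning disparity toward the sparse-point evidence.
    """
    D = cost.shape[2]
    d = np.arange(D, dtype=np.float32)
    guided = cost.astype(np.float32).copy()
    ys, xs = np.nonzero(~np.isnan(prior_disp))
    for y, x in zip(ys, xs):
        # Gaussian weight: ~1 at the prior disparity, falls off with distance.
        w = np.exp(-((d - prior_disp[y, x]) ** 2) / (2.0 * sigma ** 2))
        guided[y, x] *= 1.0 + strength * (1.0 - w)
    return guided
```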
7. Viewpoint-tolerant Scene Recognition Based on Segmentation of Sparse Point Cloud
- Author
-
HE Xionghui, TAN Jiefu, LIU Zhe, XUE Chao, YANG Shaowu, ZHANG Yongjun
- Subjects
visual scene recognition, segmentation, sparse point cloud, simultaneous localization and mapping
- Abstract
In autonomous robot navigation, simultaneous localization and mapping is responsible for perceiving the surrounding environment and positioning the robot, providing perceptual support for subsequent advanced tasks. Scene recognition, as a key module, helps the robot perceive the surrounding environment more accurately. It can correct the accumulated error caused by sensor drift by identifying whether the current observation and a previous observation belong to the same scene. Existing methods mainly focus on scene recognition under a stable viewpoint and judge whether two observations belong to the same scene based on the visual similarity between them. However, when the observation angle changes, there may be large visual differences between observations of the same scene, making the observations only partially similar, and this leads to the failure of traditional methods. Therefore, a scene recognition method based on sparse point cloud segmentation is proposed. It divides the scene to handle partial-similarity problems and combines visual and geometric information to achieve accurate scene description and matching, so that the robot can recognize observations of the same scene from different perspectives, supporting loop detection for a single robot and map fusion for multiple robots. The method divides each observation into several parts based on sparse point cloud segmentation. The segmentation result is invariant to the perspective, and for each segment a local bag-of-words vector and a β angle histogram are extracted to accurately describe its scene content: the former captures the visual semantic information of the scene, the latter its geometric structure. Then, based on the segments, the same parts between observations are matched and the differing parts are discarded, achieving accurate scene content matching and improving the success rate of place recognition. Finally, results on a public dataset show that this method outperforms the mainstream bag-of-words method under both stable and changing perspectives.
- Published
- 2023
- Full Text
- View/download PDF
8. An Image-Aided Sparse Point Cloud Registration Strategy for Managing Stockpiles in Dome Storage Facilities.
- Author
-
Liu, Jidong, Hasheminasab, Seyyed Meghdad, Zhou, Tian, Manish, Raja, and Habib, Ayman
- Subjects
POINT cloud, GLOBAL Positioning System, STORAGE facilities, LASER based sensors
- Abstract
Stockpile volume estimation plays a critical role in several industrial/commercial bulk material management applications. LiDAR systems are commonly used for this task. Thanks to Global Navigation Satellite System (GNSS) signal availability in outdoor environments, Uncrewed Aerial Vehicles (UAV) equipped with LiDAR are frequently adopted for the derivation of dense point clouds, which can be used for stockpile volume estimation. For indoor facilities, static LiDAR scanners are usually used for the acquisition of point clouds from multiple locations. Acquired point clouds are then registered to a common reference frame. Registration of such point clouds can be established through the deployment of registration targets, which is not practical for scalable implementation. For scans in facilities bounded by planar walls/roofs, features can be automatically extracted/matched and used for the registration process. However, monitoring stockpiles stored in dome facilities remains a challenging task. This study introduces an image-aided fine registration strategy for sparse point clouds acquired in dome facilities, in which the roof and roof stringers are extracted, matched, and modeled as quadratic surfaces and curves. These features are then used in a Least Squares Adjustment (LSA) procedure to derive well-aligned LiDAR point clouds. Planar features, if available, can also be used in the registration process. Registered point clouds can then be used for accurate volume estimation of stockpiles. The proposed approach is evaluated using datasets acquired by a recently developed camera-assisted LiDAR mapping platform—Stockpile Monitoring and Reporting Technology (SMART). Experimental results from three datasets indicate the capability of the proposed approach in producing well-aligned point clouds acquired inside dome facilities, with a feature fitting error in the 0.03–0.08 m range. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
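The roof modeling step in entry 8 relies on fitting quadratic surfaces by least squares. A minimal sketch of such a fit (not the paper's full LSA over surfaces and curves) might look as follows; all names and the coefficient parameterization are assumptions.

```python
import numpy as np

def fit_quadratic_surface(pts):
    """Least-squares fit of z = a*x^2 + b*y^2 + c*x*y + d*x + e*y + f.

    pts: (N, 3) array of LiDAR points on the dome roof.
    Returns the 6 coefficients and the RMS fitting error (same units as z).
    """
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    # Design matrix: one column per quadratic-surface basis function.
    A = np.column_stack([x * x, y * y, x * y, x, y, np.ones_like(x)])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    rms = np.sqrt(np.mean((A @ coeffs - z) ** 2))
    return coeffs, rms
```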
9. A 3D Scene Information Enhancement Method Applied in Augmented Reality.
- Author
-
Li, Bo, Wang, Xiangfeng, Gao, Qiang, Song, Zhimei, Zou, Cunyu, and Liu, Siyuan
- Subjects
AUGMENTED reality, POINT cloud, FEATURE extraction, IMAGE registration, PROBLEM solving, STATISTICAL sampling, GEOSTATIONARY satellites
- Abstract
Small planes with indistinct texture are easily missed in augmented reality scenes. To address this problem, a 3D scene information enhancement method that extracts such planes for augmented reality is proposed, based on a series of images of a real scene taken by a monocular camera. First, we extract feature points from the images. Second, we match the feature points across images and build three-dimensional sparse point cloud data of the scene from the matched features and the camera intrinsic parameters. Third, we estimate the position and size of the planes from the sparse point cloud. The planes can then provide extra structural information for augmented reality. In this paper, an optimized feature point extraction and matching algorithm based on the Scale Invariant Feature Transform (SIFT) is proposed, and a fast spatial plane recognition method based on RANdom SAmple Consensus (RANSAC) is established. Experiments show that the method achieves higher accuracy than Oriented FAST and Rotated BRIEF (ORB), Binary Robust Invariant Scalable Keypoints (BRISK), and SuperPoint. The proposed method effectively mitigates the missed plane detections of ARCore and improves the integration between virtual objects and real scenes. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
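Entry 9 (and its duplicate, entry 23) estimates planes from a sparse point cloud with RANSAC. Below is a self-contained sketch of a basic RANSAC plane fit with illustrative iteration count and inlier tolerance; the paper's optimized variant is not reproduced here.

```python
import numpy as np

def ransac_plane(pts, n_iter=500, tol=0.02, seed=0):
    """Fit a dominant plane to a sparse point cloud with RANSAC.

    pts: (N, 3) point cloud. Returns (normal, d, inlier_mask) for the
    plane n.x + d = 0 with the most inliers within distance tol.
    """
    rng = np.random.default_rng(seed)
    best_inliers, best_plane = None, None
    for _ in range(n_iter):
        sample = pts[rng.choice(len(pts), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:            # degenerate (collinear) sample, retry
            continue
        n = n / norm
        d = -n @ sample[0]
        inliers = np.abs(pts @ n + d) < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (n, d)
    return best_plane[0], best_plane[1], best_inliers
```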
10. Low Illumination Soybean Plant Reconstruction and Trait Perception.
- Author
-
Huang, Yourui, Liu, Yuwen, Han, Tao, Xu, Shanyong, and Fu, Jiahao
- Subjects
IMAGE intensifiers, POINT cloud, CROP growth, AGRICULTURAL equipment, LIGHTING
- Abstract
Agricultural equipment performs poorly under low illumination such as nighttime: images of soybean plants collected under light constraints contain more noise, and the reconstructed soybean plant model cannot fully and accurately represent the plant's growth condition. In this paper, we propose a low-illumination soybean plant reconstruction and trait perception method. Our method first applies the image enhancement algorithm EnlightenGAN to adjust soybean plant images captured in low-illumination environments, improving the performance of scale-invariant feature transform (SIFT) feature detection and matching. It then uses the structure from motion (SfM) algorithm to generate a sparse point cloud of the soybean plants, which is densified with the patch-based multi-view stereo (PMVS) algorithm. We demonstrate that, through image enhancement in challenging low-illumination environments, the reconstructed soybean plants closely match the growth conditions of real plants, extending the application of three-dimensional reconstruction techniques to soybean plant trait perception. Our approach aims at accurate perception of crop growth conditions by agricultural equipment under low illumination. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
11. Moving Objects Tracking Based on Geometric Model-Free Approach With Particle Filter Using Automotive LiDAR.
- Author
-
Lee, Hojoon, Lee, Hyunsung, Shin, Donghoon, and Yi, Kyongsu
- Abstract
In this paper, we propose a Geometric Model-Free Approach with a Particle Filter (GMFA-PF) through the use of automotive LiDAR for real-time tracking of moving objects within an urban driving environment. GMFA-PF proved to be lightweight, capable of finishing the process within the sensing period of the LiDAR on a single CPU. The proposed GMFA-PF tracks and estimates moving objects without any assumptions on the geometry of the target. This approach enables efficient tracking of multiple object classes, with robustness to a sparse point cloud. Points on moving objects are classified via the predicted Static Obstacle Map (STOM). A likelihood field is generated from the classified points and is used in particle filtering to estimate each moving object's pose, shape, and speed. Quantitative and qualitative comparisons of GMFA-PF with Geometric Model-Based Tracking (MBT), a Deep Neural Network (DNN), and GMFA are performed using urban driving and scenario driving data gathered on an autonomous vehicle fitted with close-to-market sensors. The proposed approach shows robust tracking and accurate estimation performance in both sparse and dense point clouds; GMFA-PF achieves improved tracking performance in dense traffic and reduces yaw estimation delay compared to the others. Autonomous vehicles with GMFA-PF demonstrated autonomous driving on urban roads. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
12. DenseSphere: Multimodal 3D object detection under a sparse point cloud based on spherical coordinate.
- Author
-
Jung, Jong Won, Yoon, Jae Hyun, and Yoo, Seok Bong
- Subjects
OBJECT recognition (Computer vision), POINT cloud, SPHERICAL coordinates, OPTICAL radar, LIDAR
- Abstract
Multimodal 3D object detection has gained significant attention due to the fusion of light detection and ranging (LiDAR) and RGB data. Existing 3D detection models in autonomous driving are typically trained on dense point cloud data from high-specification LiDAR sensors. However, budgetary constraints often lead to adopting low point-per-second (PPS) LiDAR sensors in real-world scenarios. A low PPS specification yields a sparse point cloud, and models trained on dense, high-PPS data cannot achieve optimal performance on such input. To address this problem, we propose DenseSphere for robust multimodal 3D object detection under a sparse point cloud. Reflecting the data acquisition process of LiDAR sensors, DenseSphere includes a spherical coordinate-based point upsampler: points are interpolated in the horizontal or vertical direction using bilateral interpolation, and the interpolated points are refined using dilated pyramid blocks with various receptive fields. For efficient fusion with the generated dense point cloud, we use a graph-based detector and hierarchical layers. We then demonstrate the performance of DenseSphere by comparing it with other multimodal 3D object detection models through experiments. The visual results and source code with the pretrained models are available at https://github.com/Jung-jongwon/DenseSphere. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
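Entry 12's upsampler works in the spherical coordinates natural to a spinning LiDAR. The sketch below shows the coordinate conversion and a naive midpoint interpolation between two vertically adjacent returns; DenseSphere itself uses bilateral interpolation plus learned refinement blocks, so this is only the geometric skeleton, with all names assumed.

```python
import numpy as np

def to_spherical(pts):
    """Convert (N, 3) Cartesian LiDAR points to (range, azimuth, elevation)."""
    r = np.linalg.norm(pts, axis=1)
    az = np.arctan2(pts[:, 1], pts[:, 0])
    el = np.arcsin(pts[:, 2] / np.clip(r, 1e-9, None))
    return np.column_stack([r, az, el])

def interpolate_vertical(p0, p1):
    """Insert a midpoint between two vertically adjacent returns, working in
    spherical coordinates so the new point mimics a beam between two real
    scan rings. Naive averaging; ignores azimuth wrap-around at +/-pi."""
    s0 = to_spherical(p0[None])[0]
    s1 = to_spherical(p1[None])[0]
    r, az, el = (s0 + s1) / 2.0
    return np.array([r * np.cos(el) * np.cos(az),
                     r * np.cos(el) * np.sin(az),
                     r * np.sin(el)])
```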
13. Sparse point cloud registration and aggregation with mesh‐based generalized iterative closest point.
- Author
-
Young, Matthew, Pretty, Chris, McCulloch, Josh, and Green, Richard
- Subjects
POINT cloud, VISUAL odometry, SLAM (Robotics), RECORDING & registration
- Abstract
Accurate registration is critical for robotic mapping and simultaneous localization and mapping (SLAM). Sparse or non-uniform point clouds can be very challenging to register, even in ideal environments. Previous research by Holz et al. developed a mesh-based extension to the popular generalized iterative closest point (GICP) algorithm, which can accurately register sparse clouds where unmodified GICP would fail. This paper builds on that work by expanding the comparison between the two algorithms across multiple data sets at a greater range of distances. The results confirm that Mesh-GICP is more accurate, more precise, and faster. They also show that it can successfully register scans 4 to 17 times further apart than GICP. In two different experiments this paper uses Mesh-GICP to compare three registration methods (pairwise, metascan, keyscan) in two different situations, one in a visual odometry (VO) style and another in a mapping style. The results of these experiments show that the keyscan method is the most accurate of the three as long as there is sufficient overlap between the target and source clouds. Where there is insufficient overlap, pairwise matching is more accurate. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
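Mesh-GICP from entry 13 has no off-the-shelf implementation to point to here, but the GICP baseline it extends is available in recent Open3D releases. A usage sketch, assuming Open3D's pipelines.registration API, is shown below; it illustrates the baseline being improved on, not the paper's method.

```python
import numpy as np
import open3d as o3d

def register_gicp(src_pts, tgt_pts, max_dist=1.0):
    """Align two sparse clouds with plain generalized ICP.

    src_pts, tgt_pts: (N, 3) numpy arrays. Returns the 4x4 rigid
    transform mapping the source cloud onto the target cloud.
    """
    src = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(src_pts))
    tgt = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(tgt_pts))
    # GICP models each point by a local covariance; estimate them explicitly.
    for pc in (src, tgt):
        pc.estimate_covariances(o3d.geometry.KDTreeSearchParamKNN(20))
    result = o3d.pipelines.registration.registration_generalized_icp(
        src, tgt, max_dist, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationForGeneralizedICP())
    return result.transformation
```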
14. Low Illumination Soybean Plant Reconstruction and Trait Perception
- Author
-
Yourui Huang, Yuwen Liu, Tao Han, Shanyong Xu, and Jiahao Fu
- Subjects
three-dimensional reconstruction, image enhancement, feature detection and matching, sparse point cloud, dense point cloud
Agricultural equipment performs poorly under low illumination such as nighttime: images of soybean plants collected under light constraints contain more noise, and the reconstructed soybean plant model cannot fully and accurately represent the plant's growth condition. In this paper, we propose a low-illumination soybean plant reconstruction and trait perception method. Our method first applies the image enhancement algorithm EnlightenGAN to adjust soybean plant images captured in low-illumination environments, improving the performance of scale-invariant feature transform (SIFT) feature detection and matching. It then uses the structure from motion (SfM) algorithm to generate a sparse point cloud of the soybean plants, which is densified with the patch-based multi-view stereo (PMVS) algorithm. We demonstrate that, through image enhancement in challenging low-illumination environments, the reconstructed soybean plants closely match the growth conditions of real plants, extending the application of three-dimensional reconstruction techniques to soybean plant trait perception. Our approach aims at accurate perception of crop growth conditions by agricultural equipment under low illumination.
- Published
- 2022
- Full Text
- View/download PDF
15. Moving Object Detection and Tracking Based on Interaction of Static Obstacle Map and Geometric Model-Free Approach for Urban Autonomous Driving.
- Author
-
Lee, Hojoon, Yoon, Jeongsik, Jeong, Yonghwan, and Yi, Kyongsu
- Abstract
Detection and tracking of moving objects (DATMO) in an urban environment using Light Detection and Ranging (LiDAR) is a major challenge for autonomous vehicles due to sparse point clouds, multiple moving directions, various traffic participants, and computational load. To address the complexity of this issue, this study presents a novel model-free approach for DATMO using 2D LiDAR implemented on autonomous vehicles. The approach classifies moving points in the point cloud using the predicted Static Obstacle Map (SOM) generated via interaction between the Geometric Model-Free Approach (GMFA) and the SOM, and estimates the state of each moving object via GMFA. The motion of each point, represented by the state of the moving objects, updates the SOM. The interaction between GMFA and SOM estimates the correspondence between consecutive point clouds in real time. The proposed approach has been evaluated with an RT-Range device and a labeled dataset. The accuracy of the estimated yaw angle and velocity of a moving vehicle was quantitatively evaluated using the RT-Range; the performance is significantly improved compared with geometric model-based tracking (MBT). The estimation of the yaw angle, which strongly affects the inferred cut-in/cut-out intention of the target vehicle, is shown to be remarkably improved. Based on the evaluation on the labeled dataset, false positives and false negatives are suppressed more effectively than with MBT. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
16. Sparse and Low-Overlapping Point Cloud Registration Network for Indoor Building Environments.
- Author
-
Zhang, Zhenghua, Chen, Guoliang, Wang, Xuan, and Wu, Han
- Subjects
POINT cloud, RECORDING & registration, INTELLIGENT buildings
- Abstract
Registration aims at merging multiple scans to cover all scenes of a large environment and is thus crucial to many civil infrastructure applications based on three-dimensional (3D) models. However, in many real-world scenarios it is necessary to align point clouds with low density or small overlaps. Under these conditions it is difficult to extract stable and sufficiently many features for registration, whether keypoint features or overall pose features, and existing methods cannot solve this problem well. This work proposed an end-to-end registration network that can adaptively focus on the overlap. The network learns to encode pose information directly from the overlapping area instead of using sparse keypoint correspondences, which makes it more generalizable and efficient. This work also proposed a self-supervised overlap detector as an extension module to expand the use of the network to aligning large-scale point clouds of indoor building environments. The proposed detector is compatible with any registration approach and can further promote its accuracy and efficiency. The proposed network was experimentally demonstrated to outperform state-of-the-art methods in registering sparse and low-overlapping point clouds, with higher robustness to changes in point density and overlap ratio. The proposed detector reliably detects the overlapping area and empowers the network to accurately align sparse, low-overlapping point clouds of large-scale indoor scenes, thus simplifying and promoting laser scanning practices in civil infrastructure applications. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
17. Raw fusion of camera and sparse LiDAR for detecting distant objects.
- Author
-
Rövid, András, Remeli, Viktor, and Szalay, Zsolt
- Subjects
LASER based sensors, LIDAR, POINT cloud, DRIVERLESS cars, TRAFFIC safety, CAMERAS, INFORMATION networks
- Published
- 2020
- Full Text
- View/download PDF
18. SLAM-OR: Simultaneous Localization, Mapping and Object Recognition Using Video Sensors Data in Open Environments from the Sparse Points Cloud
- Author
-
Patryk Mazurek and Tomasz Hachaj
- Subjects
simultaneous localization and mapping, objects recognition, video sensors, deep learning, sparse point cloud, principal component analysis
In this paper, we propose a novel approach that enables simultaneous localization and mapping (SLAM) and object recognition (OR) using visual sensor data in open environments, and that is capable of working on sparse point clouds. In the proposed algorithm, ORB-SLAM uses the current and previous monocular video frames to determine the observer position and a cloud of points representing objects in the environment, while a deep neural network uses the current frame to detect and recognize objects. Next, the sparse point cloud returned by the SLAM algorithm is compared with the regions recognized by the OR network. Because each point in the 3D map has a counterpart in the current frame, the points matching the recognized regions can be filtered out. A clustering algorithm then finds areas where points are densely distributed in order to locate the objects detected by OR, and a principal component analysis (PCA)-based heuristic estimates bounding boxes for the detected objects. The image processing pipeline that uses sparse point clouds generated by SLAM to position the objects recognized by the deep neural network, together with the PCA heuristic, are the main novelties of our solution. In contrast to state-of-the-art approaches, our algorithm does not require additional computation such as generating dense point clouds for object positioning, which greatly simplifies the task. We evaluated our approach on a large benchmark dataset using various state-of-the-art OR architectures (YOLO, MobileNet, RetinaNet) and clustering algorithms (DBSCAN and OPTICS), obtaining promising results. Both our source code and evaluation data sets are available for download, so our results can be easily reproduced.
- Published
- 2021
- Full Text
- View/download PDF
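The clustering-plus-PCA step of SLAM-OR (entry 18) is easy to sketch with scikit-learn. In the sketch below, the eps/min_samples values are illustrative, and the 2D detection-mask filtering that precedes this step is assumed to have happened already.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.decomposition import PCA

def object_boxes_from_sparse_points(pts, eps=0.5, min_samples=8):
    """Cluster sparse SLAM map points that fall inside a detection region,
    then estimate a PCA-aligned bounding box per cluster.

    pts: (N, 3) map points already filtered by the 2D detection region.
    Returns a list of (center, axes, extents) tuples, one per cluster.
    """
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(pts)
    boxes = []
    for k in set(labels) - {-1}:            # label -1 marks noise points
        cluster = pts[labels == k]
        pca = PCA(n_components=3).fit(cluster)
        local = pca.transform(cluster)       # cluster points in the PCA frame
        extents = local.max(axis=0) - local.min(axis=0)
        center_local = (local.max(axis=0) + local.min(axis=0)) / 2.0
        center = pca.inverse_transform(center_local[None])[0]
        boxes.append((center, pca.components_, extents))
    return boxes
```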
19. Parallel Structure from Motion for Sparse Point Cloud Generation in Large-Scale Scenes
- Author
-
Yongtang Bao, Pengfei Lin, Yao Li, Yue Qi, Zhihui Wang, Wenxiang Du, and Qing Fan
- Subjects
structure from motion, graph segmentation, sparse point cloud, large-scale scene, camera clustering, UAV image
Scene reconstruction uses images or videos as input to reconstruct a 3D model of a real scene and has important applications in smart cities, surveying and mapping, military, and other fields. Structure from motion (SFM) is a key step in scene reconstruction, which recovers sparse point clouds from image sequences. However, large-scale scenes cannot be reconstructed using a single compute node. Image matching and geometric filtering take up a lot of time in the traditional SFM problem. In this paper, we propose a novel divide-and-conquer framework to solve the distributed SFM problem. First, we use the global navigation satellite system (GNSS) information from images to calculate the GNSS neighborhood. The number of images matched is greatly reduced by matching each image to only valid GNSS neighbors. This way, a robust matching relationship can be obtained. Second, the calculated matching relationship is used as the initial camera graph, which is divided into multiple subgraphs by the clustering algorithm. The local SFM is executed on several computing nodes to register the local cameras. Finally, all of the local camera poses are integrated and optimized to complete the global camera registration. Experiments show that our system can accurately and efficiently solve the structure from motion problem in large-scale scenes.
- Published
- 2021
- Full Text
- View/download PDF
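The GNSS-neighborhood idea in entry 19 (duplicated as entry 33) can be sketched as a pairwise haversine test; the 150 m radius and function name below are assumed values for illustration, not from the paper.

```python
import numpy as np

def gnss_neighbors(latlon, radius_m=150.0):
    """Restrict image matching to GNSS neighbors.

    latlon: (N, 2) per-image latitude/longitude in degrees. Returns, for
    each image, the indices of other images within radius_m, computed
    with the haversine great-circle distance.
    """
    R = 6_371_000.0                       # mean Earth radius in meters
    lat = np.radians(latlon[:, 0])[:, None]
    lon = np.radians(latlon[:, 1])[:, None]
    dlat, dlon = lat - lat.T, lon - lon.T
    a = np.sin(dlat / 2) ** 2 + np.cos(lat) * np.cos(lat.T) * np.sin(dlon / 2) ** 2
    dist = 2.0 * R * np.arcsin(np.sqrt(np.clip(a, 0.0, 1.0)))
    return [np.nonzero((dist[i] < radius_m) & (dist[i] > 0))[0]
            for i in range(len(latlon))]
```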
20. ECPC-ICP: A 6D Vehicle Pose Estimation Method by Fusing the Roadside Lidar Point Cloud and Road Feature
- Author
-
Bo Gu, Jianxun Liu, Huiyuan Xiong, Tongtong Li, and Yuelong Pan
- Subjects
cooperative perception, intelligent vehicles, precise 6D pose estimation, sparse point cloud, roadside Lidars, point cloud registration
In the vehicle pose estimation task based on roadside Lidar in cooperative perception, the measurement distance, angle, and laser resolution directly affect the quality of the target point cloud. For incomplete and sparse point clouds, current methods are either less accurate in correspondences solved by local descriptors or not robust enough due to the reduction of effective boundary points. In response to the above weakness, this paper proposed a registration algorithm Environment Constraint Principal Component-Iterative Closest Point (ECPC-ICP), which integrated road information constraints. The road normal feature was extracted, and the principal component of the vehicle point cloud matrix under the road normal constraint was calculated as the initial pose result. Then, an accurate 6D pose was obtained through point-to-point ICP registration. According to the measurement characteristics of the roadside Lidars, this paper defined the point cloud sparseness description. The existing algorithms were tested on point cloud data with different sparseness. The simulated experimental results showed that the positioning MAE of ECPC-ICP was about 0.5% of the vehicle scale, the orientation MAE was about 0.26°, and the average registration success rate was 95.5%, which demonstrated an improvement in accuracy and robustness compared with current methods. In the real test environment, the positioning MAE was about 2.6% of the vehicle scale, and the average time cost was 53.19 ms, proving the accuracy and effectiveness of ECPC-ICP in practical applications.
- Published
- 2021
- Full Text
- View/download PDF
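A minimal sketch of ECPC-ICP's initialization (entries 20 and 30): PCA of the vehicle points constrained to the road plane yields an initial position and yaw, which a standard point-to-point ICP would then refine. All names are assumptions, and the ICP refinement step is omitted.

```python
import numpy as np

def initial_pose_with_road_constraint(vehicle_pts, road_normal):
    """Initial pose from PCA under a road-normal constraint.

    vehicle_pts: (N, 3) points segmented as the target vehicle.
    road_normal: (3,) normal of the road surface near the vehicle.
    Returns (centroid, yaw); the heading sign is ambiguous from PCA alone.
    """
    n = road_normal / np.linalg.norm(road_normal)
    centroid = vehicle_pts.mean(axis=0)
    centered = vehicle_pts - centroid
    # Remove the component along the road normal: scatter in the road plane.
    planar = centered - np.outer(centered @ n, n)
    # Principal component of the planar scatter = vehicle heading axis.
    _, _, vt = np.linalg.svd(planar, full_matrices=False)
    heading = vt[0] / np.linalg.norm(vt[0])
    yaw = np.arctan2(heading[1], heading[0])
    return centroid, yaw
```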
21. Refining the Joint 3D Processing of Terrestrial and UAV Images Using Quality Measures
- Author
-
Elisa Mariarosaria Farella, Alessandro Torresani, and Fabio Remondino
- Subjects
data fusion, sparse point cloud, filtering, image orientation, dense point cloud generation
The paper presents an efficient photogrammetric workflow to improve the 3D reconstruction of scenes surveyed by integrating terrestrial and Unmanned Aerial Vehicle (UAV) images. In recent years, the integration of these kinds of images has shown clear advantages for the complete and detailed 3D representation of large and complex scenarios. Nevertheless, their photogrammetric integration often raises several issues in the image orientation and dense 3D reconstruction processes: noisy and erroneous 3D reconstructions are the typical result of inaccurate orientation. In this work, we propose an automatic filtering procedure which works at the sparse point cloud level and takes advantage of photogrammetric quality features. The filtering step removes low-quality 3D tie points before refining the image orientation in a new adjustment and generating the final dense point cloud. Our method generalizes to many datasets, as it employs statistical analyses of quality feature distributions to identify suitable filtering thresholds. Reported results show the effectiveness and reliability of the method, verified using both internal and external quality checks, as well as visual qualitative comparisons. We made the filtering tool publicly available on GitHub.
- Published
- 2020
- Full Text
- View/download PDF
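Entry 21 filters low-quality tie points using thresholds derived from the statistics of quality-feature distributions. The sketch below applies a percentile cut on reprojection error plus a minimum multiplicity; the specific features and thresholds in the paper differ, so treat this as the shape of the computation only.

```python
import numpy as np

def filter_tie_points(reproj_err, multiplicity, max_err_pct=90, min_mult=3):
    """Filter sparse tie points by photogrammetric quality features.

    reproj_err:   (N,) mean reprojection error per 3D tie point (pixels).
    multiplicity: (N,) number of images observing each tie point.
    The error threshold is taken from the error distribution itself
    (a percentile), echoing the statistical threshold selection in the
    paper; the percentile and multiplicity values here are illustrative.
    Returns a boolean keep-mask over the sparse cloud.
    """
    err_thresh = np.percentile(reproj_err, max_err_pct)
    return (reproj_err <= err_thresh) & (multiplicity >= min_mult)
```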
22. Classification and Segmentation of Mining Area Objects in Large-Scale Sparse Lidar Point Cloud Using a Novel Rotated Density Network
- Author
-
Yueguan Yan, Haixu Yan, Junting Guo, and Huayang Dai
- Subjects
lidar, sparse point cloud, deep learning, classification, segmentation
The classification and segmentation of large-scale, sparse LiDAR point clouds with deep learning are widely used in engineering survey and geoscience. The loose structure and the non-uniform point density are the two major constraints on utilizing sparse point clouds. This paper proposes a lightweight auxiliary network, called the rotated density-based network (RD-Net), and a novel point cloud preprocessing method, Grid Trajectory Box (GT-Box), to solve these problems. The combination of RD-Net and PointNet was used to achieve high-precision 3D classification and segmentation of the sparse point cloud, emphasizing the importance of the density feature of LiDAR points for 3D object recognition in sparse point clouds. Furthermore, RD-Net plus PointCNN, PointNet, PointCNN, and RD-Net alone were introduced as comparisons. Public datasets were used to evaluate the performance of the proposed method. The results showed that RD-Net can significantly improve the performance of sparse point cloud recognition for coordinate-based networks, raising the classification accuracy to 94% and the segmentation accuracy to 70%. Additionally, the results indicate that point-density information has an independent spatial-local correlation and plays an essential role in sparse point cloud recognition.
- Published
- 2020
- Full Text
- View/download PDF
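RD-Net (entry 22) argues that per-point density carries information that coordinate-only networks miss. A plain KD-tree density feature of the kind such a network could consume can be computed as below; the radius is an illustrative choice, not the paper's.

```python
import numpy as np
from scipy.spatial import cKDTree

def local_density(pts, radius=2.0):
    """Per-point local density for a sparse LiDAR cloud.

    Counts neighbors within `radius` (same units as pts, e.g. meters)
    using a KD-tree and normalizes by the search sphere's volume,
    yielding points per unit volume.
    """
    tree = cKDTree(pts)
    neighbor_lists = tree.query_ball_point(pts, radius)
    counts = np.array([len(ix) - 1 for ix in neighbor_lists])  # exclude self
    volume = (4.0 / 3.0) * np.pi * radius ** 3
    return counts / volume
```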
23. A 3D Scene Information Enhancement Method Applied in Augmented Reality
- Author
-
Bo Li, Xiangfeng Wang, Qiang Gao, Zhimei Song, Cunyu Zou, and Siyuan Liu
- Subjects
augmented reality, sparse point cloud, information enhancement
Small planes with indistinct texture are easily missed in augmented reality scenes. To address this problem, a 3D scene information enhancement method that extracts such planes for augmented reality is proposed, based on a series of images of a real scene taken by a monocular camera. First, we extract feature points from the images. Second, we match the feature points across images and build three-dimensional sparse point cloud data of the scene from the matched features and the camera intrinsic parameters. Third, we estimate the position and size of the planes from the sparse point cloud. The planes can then provide extra structural information for augmented reality. In this paper, an optimized feature point extraction and matching algorithm based on the Scale Invariant Feature Transform (SIFT) is proposed, and a fast spatial plane recognition method based on RANdom SAmple Consensus (RANSAC) is established. Experiments show that the method achieves higher accuracy than Oriented FAST and Rotated BRIEF (ORB), Binary Robust Invariant Scalable Keypoints (BRISK), and SuperPoint. The proposed method effectively mitigates the missed plane detections of ARCore and improves the integration between virtual objects and real scenes.
- Published
- 2022
- Full Text
- View/download PDF
24. Efficient edge-aware surface mesh reconstruction for urban scenes.
- Author
-
Bódis-Szomorú, András, Riemenschneider, Hayko, and Van Gool, Luc
- Subjects
IMAGE reconstruction, DIGITAL elevation models, POLYHEDRA, MATHEMATICAL optimization, CITY maps
- Abstract
We propose an efficient approach for building compact, edge-preserving, view-centric triangle meshes from either dense or sparse depth data, with a focus on modeling architecture in large-scale urban scenes. Our method constructs a 2D base mesh from a preliminary view partitioning, then lifts the base mesh into 3D in a fast vertex depth optimization. Different view partitioning schemes are proposed for imagery and dense depth maps. They guarantee that mesh edges are aligned with crease edges and discontinuities. In particular, we introduce an effective plane merging procedure with a global error guarantee in order to maximally compact the resulting models. Moreover, different strategies for detecting and handling discontinuities are presented. We demonstrate that our approach provides an excellent trade-off between quality and compactness, and is suitable for fast production of polyhedral building models from large-scale urban height maps, as well as for direct meshing of sparse street-side Structure-from-Motion (SfM) data. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
25. Comparison of forest canopy gap fraction measurements from drone-based video frames, below-canopy hemispherical photography, and airborne laser scanning
- Author
-
Mait Lang, Mikk Antsov, Andres Mumma, Indrek Suitso, Andres Kuusk, and Kaarel Piip
- Subjects
View zenith angle, visibility of ground, forest, sparse point cloud, polar transformation, drone
The amount of gaps in a forest canopy is related to the radiation interception for photosynthesis and to visibility through the canopy. The dependence of forest canopy gap fraction determination on view zenith angle was calculated from polar-transformed sparse ([Formula: see text]) airborne laser scanning (ALS) point clouds for a Scots pine (Pinus sylvestris L.) stand growing on Kiriku Bog, Estonia. Visibility of ground targets was estimated from video image frames taken during a drone (UAV) overpass at low altitude (40 m). Below-canopy digital hemispherical images (DHP) were taken in the zenith direction as reference measurements. Angular grids of 3° and 5° were used to match the three data sources so as to decrease uncertainties in measurement geometries. The linear relationship between DHP and UAV data had R² = 0.67, with most of the deviations occurring at gap boundaries. Relationships over individual targets between DHP and polar-transformed ALS data had [Formula: see text]. However, the simulation overestimated gap fraction at smaller zenith angles because of uncertainties in constructing lidar pulse footprints from point data. We conclude that regional coverage by means of sparse ALS point clouds shows potential for the assessment of forest canopy gaps at off-nadir angles.
- Published
- 2025
- Full Text
- View/download PDF
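The gap-fraction-versus-zenith-angle computation in entry 25 reduces to binning rays by zenith angle and averaging a gap indicator per bin. A sketch under that reading, with assumed array inputs and the 5° bin width mirroring the study's angular grids, follows.

```python
import numpy as np

def gap_fraction_by_zenith(zenith, is_gap, bin_deg=5.0):
    """Gap fraction as a function of view zenith angle.

    zenith: (N,) zenith angles in degrees of polar-transformed rays/pixels.
    is_gap: (N,) boolean, True where the ray/pixel sees through the canopy.
    Returns the bin edges and the mean gap fraction per bin (NaN if empty).
    """
    edges = np.arange(0.0, 90.0 + bin_deg, bin_deg)
    idx = np.digitize(zenith, edges) - 1
    frac = np.full(len(edges) - 1, np.nan)
    for b in range(len(edges) - 1):
        sel = idx == b
        if sel.any():
            frac[b] = is_gap[sel].mean()
    return edges, frac
```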
26. Structure-from-Motion 3D Reconstruction of the Historical Overpass Ponte della Cerra: A Comparison between MicMac® Open Source Software and Metashape®
- Author
-
Matteo Cutugno, Umberto Robustelli, and Giovanni Pugliano
- Subjects
3D model, accuracy, MicMac, photogrammetry, sparse point cloud, Metashape, unmanned aerial vehicle (UAV), free-and-open-source software (FOSS)
- Abstract
In recent years, the performance of free-and-open-source software (FOSS) for image processing has significantly increased. This trend, as well as technological advancements in the unmanned aerial vehicle (UAV) industry, has opened blue skies for both researchers and surveyors. In this study, we aimed to assess the quality of the sparse point cloud obtained with a consumer UAV and FOSS. To achieve this goal, we also processed the same image dataset with a commercial software package, using its results as a term of comparison. Various analyses were conducted, such as image residual analysis, statistical analysis of GCP and CP errors, relative accuracy assessment, and Cloud-to-Cloud distance comparison. A support survey was conducted to measure 16 markers identified on the object: 12 were used as ground control points to scale the 3D model, while the remaining 4 were used as check points to assess the quality of the scaling procedure by examining the residuals. Results indicate that the sparse clouds obtained are comparable. MicMac® has mean image residuals equal to 0.770 pixels, while Metashape®'s are 0.735 pixels. In addition, the 3D errors on control points are similar: the mean 3D error for MicMac® is 0.037 m with a standard deviation of 0.017 m, whereas for Metashape® it is 0.031 m with a standard deviation of 0.015 m. The present work represents a preliminary study: a comparison between software packages is hard to achieve, given the secrecy of commercial software and the theoretical differences between the approaches. The case study analyzes an object with extremely complex geometry, placed in an urban canyon where GNSS support cannot be exploited, and a scene that changes continuously due to vehicular traffic.
- Published
- 2022
27. DEVELOPMENT OF A NEW LOW-COST INDOOR MAPPING SYSTEM - SYSTEM DESIGN, SYSTEM CALIBRATION AND FIRST RESULTS.
- Author
-
Kersten, T. P., Stallmann, D., and Tschirschwitz, F.
- Subjects
INTERIOR architecture, SURVEYING (Engineering), THREE-dimensional modeling
- Abstract
For mapping of building interiors, various 2D and 3D indoor surveying systems are available today. These systems essentially differ from each other in price and accuracy, as well as in the effort required for fieldwork and post-processing. The Laboratory for Photogrammetry & Laser Scanning of HafenCity University (HCU) Hamburg has developed, as part of an industrial project, a low-cost indoor mapping system which enables systematic inventory mapping of interior facilities with low staffing requirements and a reduced, measurable expenditure of time and effort. The modelling and evaluation of the recorded data take place later in the office. The indoor mapping system of HCU Hamburg consists of the following components: laser range finder, panorama head (pan-tilt unit), single-board computer (Raspberry Pi) with digital camera, and battery power supply. The camera is pre-calibrated in a photogrammetric test field under laboratory conditions; remaining systematic image errors are corrected simultaneously during generation of the panorama image. For cost reasons the camera and laser range finder are not coaxially arranged on the panorama head, so the eccentricity and alignment of the laser range finder relative to the camera must be determined in a system calibration. To verify the system accuracy and the system calibration, the laser points were determined from measurements with total stations; the differences from the reference were 4-5 mm for individual coordinates. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
28. IMPROVING PERFORMANCE OF FEATURE EXTRACTION IN SFM ALGORITHMS FOR 3D SPARSE POINT CLOUD
- Author
-
R. Higuchi, Fulvio Rinaudo, H. Sugawara, S. Nasu, and F. Condorelli
- Subjects
Photogrammetry, Feature Points Extraction, Open Source Algorithms, Sparse Point Cloud, Metric Quality Assessment, Historical Photographs
- Abstract
The use of Structure-from-Motion algorithms is common practice for rapid photogrammetric reconstruction. However, the performance of these algorithms is limited by the fact that in some conditions the resulting point clouds have low density. This is the case when processing materials from historical archives, such as photographs and videos, which generate only sparse point clouds due to the lack of information needed for photogrammetric reconstruction. This paper explores ways to improve the performance of open source SfM algorithms in order to guarantee the presence of strategic feature points in the resulting point cloud, even if it is sparse. To reach this objective, a photogrammetric workflow is proposed to process historical images. The first part of the workflow presents a method that allows the manual selection of feature points during the photogrammetric process. The second part evaluates the metric quality of the reconstruction by comparison with a point cloud of different density. The workflow was applied to two case studies. Transformations of the wall paintings of the Karanlık church in Cappadocia were analysed by comparing a 3D model from archive photographs with a recent survey. Then a comparison was performed between the state of the Komise building in Japan before and after restoration. The findings show that the method allows metric scaling and evaluation of the model even in poor conditions and when only low-density point clouds are available. Moreover, this tool should be of great use for both art and architecture historians and geomatics experts to study the evolution of Cultural Heritage.
- Published
- 2019
- Full Text
- View/download PDF
29. Ground Segmentation Algorithm for Sloped Terrain and Sparse LiDAR Point Cloud
- Author
-
Jiménez, Víctor, Godoy, Jorge, Artuñedo, Antonio, and Villagrá, Jorge
- Abstract
Distinguishing obstacles from ground is an essential step for common perception tasks such as object detection and tracking or occupancy grid mapping. Typical approaches rely on plane fitting or local geometric features, but their performance is reduced in situations with sloped terrain or sparse data. Some works address these issues using Markov Random Fields and Belief Propagation, but they still rely solely on local geometric features. This article presents a strategy for ground segmentation in LiDAR point clouds composed of two main steps: (i) an initial classification is performed by dividing the points into small groups and analyzing geometric features between them; (ii) this initial classification is then used to model the surrounding ground height as a Markov Random Field, which is solved using the Loopy Belief Propagation algorithm. Points are finally classified by comparing their height with the estimated ground height map. On the one hand, using an initial estimation to model the Markov Random Field provides a better description of the scene than the local geometric features commonly used alone. On the other hand, a graph-based approach with message passing achieves better results than simpler filtering or enhancement techniques, since data propagation compensates for the sparse distribution of LiDAR point clouds. Experiments are conducted with two different sources of information: the public nuScenes dataset and an autonomous vehicle prototype. The estimation results are compared with other methods, showing good performance in a variety of situations.
- Published
- 2021
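Step (i) of entry 29, the initial ground labeling of small point groups by inter-group height step and slope, can be sketched as below. The thresholds are illustrative, and the MRF/loopy-belief-propagation refinement that follows in the article is not reproduced.

```python
import numpy as np

def initial_ground_classification(groups, height_thresh=0.3, slope_thresh=0.25):
    """Label small point groups as ground-like or obstacle-like.

    groups: list of (center_xy, mean_z) tuples for small point groups,
    ordered by increasing range from the sensor. A group is ground-like
    if its height step and slope relative to the previous ground-like
    group are both small. Returns a boolean label per group.
    """
    labels = []
    last = None
    for xy, z in groups:
        if last is None:
            labels.append(True)        # nearest group assumed ground
            last = (np.asarray(xy), z)
            continue
        dxy = np.linalg.norm(np.asarray(xy) - last[0])
        dz = abs(z - last[1])
        ground = dz < height_thresh and (dz / max(dxy, 1e-6)) < slope_thresh
        labels.append(ground)
        if ground:
            last = (np.asarray(xy), z)  # only propagate from ground groups
    return labels
```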
30. ECPC-ICP: A 6D Vehicle Pose Estimation Method by Fusing the Roadside Lidar Point Cloud and Road Feature
- Author
-
Pan Yuelong, Li Tongtong, Liu Jianxun, Huiyuan Xiong, and Bo Gu
- Subjects
cooperative perception, intelligent vehicles, precise 6D pose estimation, sparse point cloud, roadside Lidars, point cloud registration, point cloud sparseness description
In the vehicle pose estimation task based on roadside Lidar in cooperative perception, the measurement distance, angle, and laser resolution directly affect the quality of the target point cloud. For incomplete and sparse point clouds, current methods are either less accurate in correspondences solved by local descriptors or not robust enough due to the reduction of effective boundary points. In response to the above weakness, this paper proposed a registration algorithm Environment Constraint Principal Component-Iterative Closest Point (ECPC-ICP), which integrated road information constraints. The road normal feature was extracted, and the principal component of the vehicle point cloud matrix under the road normal constraint was calculated as the initial pose result. Then, an accurate 6D pose was obtained through point-to-point ICP registration. According to the measurement characteristics of the roadside Lidars, this paper defined the point cloud sparseness description. The existing algorithms were tested on point cloud data with different sparseness. The simulated experimental results showed that the positioning MAE of ECPC-ICP was about 0.5% of the vehicle scale, the orientation MAE was about 0.26°, and the average registration success rate was 95.5%, which demonstrated an improvement in accuracy and robustness compared with current methods. In the real test environment, the positioning MAE was about 2.6% of the vehicle scale, and the average time cost was 53.19 ms, proving the accuracy and effectiveness of ECPC-ICP in practical applications.
- Published
- 2021
31. Experimental results for image-based geometrical reconstruction for spacecraft Rendezvous navigation with unknown and uncooperative target spacecraft.
- Author
-
Schnitzer, Frank, Janschek, Klaus, and Willich, Georg
- Abstract
For both manned and autonomous space on-orbit servicing missions, the collision avoidance and motion prediction of the target objects are essential for a safe mission operation. Our approach assumes unknown and uncooperative target objects (spacecraft, space debris) and a camera only vision system. The target's 3D structure is reconstructed from a sparse point cloud extracted from a rendezvous-SLAM algorithm and processed by a RANSAC algorithm. Afterwards the 3D model can be used in a feedback manner for enhancing visual navigation processing tasks. The paper details the general concept and presents experimental results with camera image data from a spacecraft rendezvous simulator testbed. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
- Full Text
- View/download PDF
32. Incremental Reconstruction of Manifold Surface from Sparse Visual Mapping.
- Author
-
Yu, Shuda and Lhuillier, Maxime
- Abstract
Automatic image-based modeling usually has two steps: Structure from Motion (SfM) and the estimation of a triangulated surface. The former provides camera poses and a sparse point cloud; the latter usually involves dense stereo. From the computational standpoint, it would be preferable to avoid dense stereo and estimate the surface from the sparse cloud directly. Furthermore, it would be useful for online applications to update the surface while the camera is moving in the scene. This paper addresses both requirements: it introduces an incremental method which reconstructs a surface from a sparse cloud estimated by incremental SfM. The context is new and difficult since we ensure the resulting surface is manifold at all times. The manifold property is important since it is needed by the differential operators involved in surface refinement. We have experimented with a hand-held omnidirectional camera moving in a city. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
- Full Text
- View/download PDF
33. Parallel Structure from Motion for Sparse Point Cloud Generation in Large-Scale Scenes
- Author
-
Pengfei Lin, Yongtang Bao, Qing Fan, Yue Qi, Du Wenxiang, Yao Li, and Zhihui Wang
- Subjects
structure from motion, graph segmentation, sparse point cloud, large-scale scene, camera clustering, UAV image
Scene reconstruction uses images or videos as input to reconstruct a 3D model of a real scene and has important applications in smart cities, surveying and mapping, military, and other fields. Structure from motion (SFM) is a key step in scene reconstruction, which recovers sparse point clouds from image sequences. However, large-scale scenes cannot be reconstructed using a single compute node. Image matching and geometric filtering take up a lot of time in the traditional SFM problem. In this paper, we propose a novel divide-and-conquer framework to solve the distributed SFM problem. First, we use the global navigation satellite system (GNSS) information from images to calculate the GNSS neighborhood. The number of images matched is greatly reduced by matching each image to only valid GNSS neighbors. This way, a robust matching relationship can be obtained. Second, the calculated matching relationship is used as the initial camera graph, which is divided into multiple subgraphs by the clustering algorithm. The local SFM is executed on several computing nodes to register the local cameras. Finally, all of the local camera poses are integrated and optimized to complete the global camera registration. Experiments show that our system can accurately and efficiently solve the structure from motion problem in large-scale scenes.
- Published
- 2021
34. Remote monitoring of a municipal solid waste landfill using unmanned aerial vehicles
- Subjects
landfill, Global Positioning System, structure from motion, sparse point cloud, payload, aerial photograph, solid waste, unmanned aerial vehicle, aerial platform, control
The article discusses the possibility of using unmanned aerial vehicles for the management of solid waste disposal facilities. It highlights the advantages of measures such as monitoring the operation of the landfill using photo and video cameras installed on unmanned aerial vehicles, identifying sources of illegal dumping, and preventing emergencies at landfills. Imaging an object from a certain distance is effective for detecting illegal dumps, monitoring the integrity of the landfill structure, and searching for sources of ignition. The main advantage of using unmanned aerial vehicles is the ability to receive timely information about the situation on the site and conduct comprehensive observations that are impossible with ground surveys. (Gornyi Zhurnal Kazakhstana / Mining Journal of Kazakhstan, Issue 5 (193) 2021, Pages 48-53)
- Published
- 2021
- Full Text
- View/download PDF
35. Manifold surface reconstruction of an environment from sparse Structure-from-Motion data.
- Author
-
Lhuillier, Maxime and Yu, Shuda
- Subjects
SURFACE reconstruction ,DATA analysis ,ESTIMATION theory ,CONSTRAINT satisfaction ,CAMERAS ,COMPUTATIONAL complexity - Abstract
Highlights: (1) The surface is directly estimated from the sparse Structure-from-Motion data. (2) Both visibility and manifold constraints are enforced. (3) We experiment with hand-held and helmet-held low-cost omnidirectional cameras. (4) Compact models of complete environments are obtained with low time complexity. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
36. Automated sparse 3D point cloud generation of infrastructure using its distinctive visual features
- Author
-
Fathi, Habib and Brilakis, Ioannis
- Subjects
- *
AUTOMATION , *SPATIAL data infrastructures , *REMOTE sensing , *ALGORITHMS , *VIDEOS , *QUATERNIONS , *ESTIMATION theory - Abstract
Abstract: The commercial far-range (>10 m) spatial data collection methods for acquiring infrastructure's geometric data are not completely automated because of the necessary manual pre- and/or post-processing work. The required amount of human intervention and, in some cases, the high equipment costs associated with these methods impede their adoption by the majority of infrastructure mapping activities. This paper presents an automated stereo vision-based method, as an alternative and inexpensive solution, for producing a sparse Euclidean 3D point cloud of an infrastructure scene from two video streams captured by a pair of calibrated cameras. In this process, SURF features are automatically detected and matched between each pair of stereo video frames, and 3D coordinates of the matched feature points are calculated via triangulation. The detected SURF features in two successive video frames are then matched, with the RANSAC algorithm used to discard mismatches. The quaternion motion estimation method is used along with bundle adjustment optimization to register successive point clouds. The method was tested on a database of infrastructure stereo video streams. The validity and statistical significance of the results were evaluated by comparing the spatial distance of randomly selected feature points with their corresponding tape measurements. [Copyright © Elsevier]
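A rough sketch of this stereo pipeline using OpenCV follows; the paper uses SURF, but ORB is substituted here since SURF sits in the patented "nonfree" contrib module, and the 3x4 projection matrices P1/P2 are assumed to come from prior calibration.

# Minimal sketch of the stereo step described above, not the authors' code.
import cv2
import numpy as np

def sparse_stereo_points(img_left, img_right, P1, P2):
    orb = cv2.ORB_create(4000)
    k1, d1 = orb.detectAndCompute(img_left, None)
    k2, d2 = orb.detectAndCompute(img_right, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    pts1 = np.float32([k1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([k2[m.trainIdx].pt for m in matches])
    # RANSAC on the epipolar geometry discards mismatches, as in the abstract.
    _, inliers = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0)
    pts1, pts2 = pts1[inliers.ravel() == 1], pts2[inliers.ravel() == 1]
    hom = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)  # 4xN homogeneous
    return (hom[:3] / hom[3]).T  # Nx3 Euclidean points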
- Published
- 2011
- Full Text
- View/download PDF
37. Rapid, on-site spatial information acquisition and its use for infrastructure operation and maintenance
- Author
-
Kim, Changwan, Haas, Carl T., and Liapi, Katherine A.
- Subjects
- *
CONSTRUCTION industry , *CASE studies , *COMPUTER-aided design , *SIMULATION methods & models - Abstract
Abstract: Site modeling can be useful in various safety-enhancement applications and for as-built data acquisition. In this article, a rapid, on-site spatial-modeling method using a "sparse point cloud" approach that represents construction sites in an efficient manner is proposed. The various procedures used in the modeling process are explained. The results of experiments performed on actual construction sites are described, as are case studies of the modeling method itself. An example of the application of the proposed site modeling method to the simulation of obstacle avoidance in the operation of equipment on an industrial construction project is also presented. [Copyright © Elsevier]
- Published
- 2005
- Full Text
- View/download PDF
38. SLAM-OR: Simultaneous Localization, Mapping and Object Recognition Using Video Sensors Data in Open Environments from the Sparse Points Cloud.
- Author
-
Mazurek, Patryk and Hachaj, Tomasz
- Subjects
POINT cloud ,PRINCIPAL components analysis ,ALGORITHMS ,DETECTORS ,OPERATIONS research ,IMAGE processing - Abstract
In this paper, we propose a novel approach that enables simultaneous localization and mapping (SLAM) and object recognition using visual sensor data in open environments, and that is capable of working on sparse point clouds. In the proposed algorithm, ORB-SLAM uses the current and previous monocular video frames to determine the observer's position and a cloud of points representing objects in the environment, while a deep neural network uses the current frame to detect and recognize objects (OR). In the next step, the sparse point cloud returned by the SLAM algorithm is compared with the area recognized by the OR network. Because each point in the 3D map has a counterpart in the current frame, points matching the area recognized by the OR algorithm can be filtered out. A clustering algorithm then determines areas in which points are densely distributed in order to detect the spatial positions of objects found by OR, and a principal component analysis (PCA)-based heuristic estimates the bounding boxes of the detected objects. The image-processing pipeline that uses sparse point clouds generated by SLAM to position the objects recognized by the deep neural network, together with the PCA heuristic, are the main novelties of our solution. In contrast to state-of-the-art approaches, our algorithm does not require any additional calculations, such as the generation of dense point clouds, for object positioning, which greatly simplifies the task. We evaluated our approach on a large benchmark dataset using various state-of-the-art OR architectures (YOLO, MobileNet, RetinaNet) and clustering algorithms (DBSCAN and OPTICS), obtaining promising results. Both our source code and evaluation datasets are available for download, so our results can be easily reproduced. [ABSTRACT FROM AUTHOR]
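The positioning step described above can be sketched roughly as follows, assuming the SLAM map points, their 2D projections in the current frame, and one detector box are given; the eps/min_samples values and the cluster-selection rule are illustrative, not the authors' settings.

# Loose sketch: keep map points projecting into a detection box, cluster
# them, and fit a PCA-aligned box to the densest cluster.
import numpy as np
from sklearn.cluster import DBSCAN

def object_box_from_sparse_map(pts3d, pts2d, det_box):
    # pts3d: Nx3 SLAM points; pts2d: Nx2 projections in the current frame;
    # det_box: (x1, y1, x2, y2) from the object-recognition network.
    x1, y1, x2, y2 = det_box
    m = (pts2d[:, 0] >= x1) & (pts2d[:, 0] <= x2) & \
        (pts2d[:, 1] >= y1) & (pts2d[:, 1] <= y2)
    pts = pts3d[m]
    labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(pts)
    # Assumes at least one cluster exists inside the box.
    core = pts[labels == np.bincount(labels[labels >= 0]).argmax()]
    c = core.mean(axis=0)
    _, _, vt = np.linalg.svd(core - c, full_matrices=False)  # PCA axes
    local = (core - c) @ vt.T
    return c, vt, local.min(axis=0), local.max(axis=0)  # oriented box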
- Published
- 2021
- Full Text
- View/download PDF
39. Parallel Structure from Motion for Sparse Point Cloud Generation in Large-Scale Scenes.
- Author
-
Bao, Yongtang, Lin, Pengfei, Li, Yao, Qi, Yue, Wang, Zhihui, Du, Wenxiang, and Fan, Qing
- Subjects
POINT cloud ,GLOBAL Positioning System ,SMART cities ,IMAGE registration ,IMAGE reconstruction - Abstract
Scene reconstruction uses images or videos as input to reconstruct a 3D model of a real scene and has important applications in smart cities, surveying and mapping, the military, and other fields. Structure from motion (SFM) is a key step in scene reconstruction, recovering sparse point clouds from image sequences. However, large-scale scenes cannot be reconstructed on a single compute node, and image matching and geometric filtering account for much of the runtime in the traditional SFM pipeline. In this paper, we propose a novel divide-and-conquer framework for distributed SFM. First, we use the global navigation satellite system (GNSS) information attached to the images to compute each image's GNSS neighborhood; matching each image only against its valid GNSS neighbors greatly reduces the number of image pairs to match and yields a robust matching relationship. Second, the computed matching relationship is used as the initial camera graph, which is divided into multiple subgraphs by a clustering algorithm. Local SFM is executed on several computing nodes to register the local cameras. Finally, all local camera poses are integrated and optimized to complete the global camera registration. Experiments show that our system can accurately and efficiently solve the structure-from-motion problem in large-scale scenes. [ABSTRACT FROM AUTHOR]
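Complementing the GNSS sketch under entry 33, the final merging step can be illustrated with a generic similarity (Umeyama) alignment of camera centres shared between two subgraph reconstructions; this is a standard technique, not necessarily the paper's exact formulation.

# Sketch: align two local reconstructions via their shared cameras.
import numpy as np

def umeyama_sim3(src, dst):
    # src, dst: Nx3 centres of the shared cameras in the two local frames.
    # Returns scale s, rotation R, translation t with dst ~ s * R @ src + t.
    mu_s, mu_d = src.mean(0), dst.mean(0)
    xs, xd = src - mu_s, dst - mu_d
    cov = xd.T @ xs / len(src)
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1                      # keep a proper rotation
    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) / xs.var(axis=0).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t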
- Published
- 2021
- Full Text
- View/download PDF
40. ECPC-ICP: A 6D Vehicle Pose Estimation Method by Fusing the Roadside Lidar Point Cloud and Road Feature.
- Author
-
Gu, Bo, Liu, Jianxun, Xiong, Huiyuan, Li, Tongtong, Pan, Yuelong, Sepulcre, Miguel, Rondinone, Michele, Leich, Andreas, and Lu, Meng
- Subjects
POINT cloud ,ROADSIDE improvement ,LIDAR ,CONSTRAINT algorithms - Abstract
In the vehicle pose estimation task based on roadside Lidar in cooperative perception, the measurement distance, angle, and laser resolution directly affect the quality of the target point cloud. For incomplete and sparse point clouds, current methods are either less accurate, when correspondences are solved by local descriptors, or not robust enough, due to the reduction of effective boundary points. In response to these weaknesses, this paper proposed a registration algorithm, Environment Constraint Principal Component-Iterative Closest Point (ECPC-ICP), which integrates road information constraints. The road normal feature is extracted, and the principal component of the vehicle point cloud matrix under the road normal constraint is calculated as the initial pose. An accurate 6D pose is then obtained through point-to-point ICP registration. According to the measurement characteristics of roadside Lidars, the paper defines a description of point cloud sparseness, and existing algorithms were tested on point cloud data with different degrees of sparseness. Simulated experimental results showed that the positioning MAE of ECPC-ICP was about 0.5% of the vehicle scale, the orientation MAE was about 0.26°, and the average registration success rate was 95.5%, demonstrating an improvement in accuracy and robustness over current methods. In the real test environment, the positioning MAE was about 2.6% of the vehicle scale, and the average time cost was 53.19 ms, proving the accuracy and effectiveness of ECPC-ICP in practical applications. [ABSTRACT FROM AUTHOR]
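A simplified reading of ECPC-ICP can be sketched as below: fit the road plane normal, take the in-plane principal component of the vehicle points as an initial heading, then refine with plain point-to-point ICP. Everything here, including the function names, is our reconstruction rather than the published code.

# Simplified ECPC-ICP-style pipeline, NumPy/SciPy only.
import numpy as np
from scipy.spatial import cKDTree

def plane_normal(road_pts):
    c = road_pts.mean(0)
    n = np.linalg.svd(road_pts - c)[2][-1]       # smallest singular vector
    return n / np.linalg.norm(n)

def initial_heading(vehicle_pts, n):
    p = vehicle_pts - vehicle_pts.mean(0)
    p = p - np.outer(p @ n, n)                   # project onto the road plane
    return np.linalg.svd(p)[2][0]                # principal in-plane direction

def icp_point_to_point(src, dst, iters=30):
    tree, cur = cKDTree(dst), src.copy()
    for _ in range(iters):
        nn = dst[tree.query(cur)[1]]             # nearest-neighbour matches
        mu_s, mu_d = cur.mean(0), nn.mean(0)
        U, _, Vt = np.linalg.svd((nn - mu_d).T @ (cur - mu_s))
        R = U @ Vt
        if np.linalg.det(R) < 0:                 # keep a proper rotation
            U[:, -1] *= -1
            R = U @ Vt
        cur = cur @ R.T + (mu_d - R @ mu_s)
    return cur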
- Published
- 2021
- Full Text
- View/download PDF
41. Refining the Joint 3D Processing of Terrestrial and UAV Images Using Quality Measures.
- Author
-
Farella, Elisa Mariarosaria, Torresani, Alessandro, and Remondino, Fabio
- Subjects
POINT cloud ,IMAGE ,MULTISENSOR data fusion ,STATISTICS - Abstract
The paper presents an efficient photogrammetric workflow to improve the 3D reconstruction of scenes surveyed by integrating terrestrial and Unmanned Aerial Vehicle (UAV) images. In recent years, the integration of these two kinds of images has shown clear advantages for the complete and detailed 3D representation of large and complex scenarios, yet their photogrammetric integration often raises several issues in the image orientation and dense 3D reconstruction processes. Noisy and erroneous 3D reconstructions are the typical result of inaccurate orientation. In this work, we propose an automatic filtering procedure that works at the sparse point cloud level and takes advantage of photogrammetric quality features. The filtering step removes low-quality 3D tie points before refining the image orientation in a new adjustment and generating the final dense point cloud. Our method generalizes to many datasets, as it employs statistical analyses of quality-feature distributions to identify suitable filtering thresholds. Reported results show the effectiveness and reliability of the method, verified using both internal and external quality checks as well as visual qualitative comparisons. We made the filtering tool publicly available on GitHub. [ABSTRACT FROM AUTHOR]
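As a hedged illustration of percentile-based tie-point filtering of this kind, consider the sketch below; the quality features (reprojection error, intersection angle, multiplicity) are ones commonly used in photogrammetry, and the exact features and thresholds of the paper may differ.

# Sketch of statistics-driven tie-point filtering: thresholds come from
# the feature distributions rather than fixed values.
import numpy as np

def filter_tie_points(reproj_err, intersect_angle_deg, multiplicity,
                      err_pct=90, angle_pct=10, min_views=3):
    # Keep points with low reprojection error, a sufficiently wide
    # intersection angle, and enough observing images.
    keep = (reproj_err <= np.percentile(reproj_err, err_pct)) \
         & (intersect_angle_deg >= np.percentile(intersect_angle_deg, angle_pct)) \
         & (multiplicity >= min_views)
    return keep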
- Published
- 2020
- Full Text
- View/download PDF
42. Classification and Segmentation of Mining Area Objects in Large-Scale Sparse Lidar Point Cloud Using a Novel Rotated Density Network.
- Author
-
Yan, Yueguan, Yan, Haixu, Guo, Junting, and Dai, Huayang
- Subjects
POINT cloud ,LIDAR ,POINT processes ,VEGETATION classification ,CLASSIFICATION ,DENSITY ,DEEP learning ,OBJECT recognition (Computer vision) - Abstract
The classification and segmentation of large-scale, sparse LiDAR point clouds with deep learning are widely used in engineering survey and geoscience. The loose structure and the non-uniform point density are the two major constraints on utilizing sparse point clouds. This paper proposes a lightweight auxiliary network, called the rotated density-based network (RD-Net), and a novel point cloud preprocessing method, the Grid Trajectory Box (GT-Box), to solve these problems. The combination of RD-Net and PointNet was used to achieve high-precision 3D classification and segmentation of sparse point clouds, emphasizing the importance of the density feature of LiDAR points for 3D object recognition in sparse data. Furthermore, RD-Net combined with PointCNN, as well as PointNet, PointCNN, and RD-Net alone, were introduced as comparisons. Public datasets were used to evaluate the performance of the proposed method. The results showed that RD-Net can significantly improve the performance of sparse point cloud recognition for coordinate-based networks, raising the classification accuracy to 94% and the segmentation accuracy to 70%. The results also indicate that point-density information has an independent spatial-local correlation and plays an essential role in sparse point cloud recognition. [ABSTRACT FROM AUTHOR]
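A per-point density feature of the kind RD-Net exploits might be computed as in the following sketch, using the radius enclosing the k nearest neighbours; this is an assumption about the feature, not code from the paper.

# Sketch: local point density from the k-nearest-neighbour radius.
import numpy as np
from scipy.spatial import cKDTree

def knn_density(points, k=16):
    # points: Nx3. Returns an N-vector; larger means locally denser.
    tree = cKDTree(points)
    d, _ = tree.query(points, k=k + 1)    # first neighbour is the point itself
    r = d[:, -1]                          # radius enclosing k neighbours
    return k / (4.0 / 3.0 * np.pi * r ** 3 + 1e-12)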
- Published
- 2020
- Full Text
- View/download PDF
43. Manifold surface reconstruction of an environment from sparse Structure-from-Motion data
- Author
-
Shuda Yu, Maxime Lhuillier, Institut Pascal (IP), SIGMA Clermont (SIGMA Clermont)-Université Clermont Auvergne [2017-2020] (UCA [2017-2020])-Centre National de la Recherche Scientifique (CNRS), Laboratoire des sciences et matériaux pour l'électronique et d'automatique (LASMEA), and Université Blaise Pascal - Clermont-Ferrand 2 (UBP)-Centre National de la Recherche Scientifique (CNRS)
- Subjects
Sparse Point Cloud, Surface (mathematics), 3D Delaunay Triangulation, Catadioptric system, Genus (mathematics), Steiner Vertices, Structure-from-Motion, Computer vision, Visibility (geometry), [INFO.INFO-CV]Computer Science [cs]/Computer Vision and Pattern Recognition [cs.CV], 2-Manifold Reconstruction, Complexity Analysis, Signal Processing, Trajectory, Computer Vision and Pattern Recognition, Artificial intelligence, Surface reconstruction, Software
The majority of methods for the automatic surface reconstruction of an environment from an image sequence have two steps: Structure-from-Motion and dense stereo. From the computational standpoint, it would be interesting to avoid dense stereo and to generate a surface directly from the sparse cloud of 3D points and their visibility information provided by Structure-from-Motion. Previous attempts to solve this problem are very limited: the surface is non-manifold or has zero genus, and the experiments are done on small scenes or objects using a few dozen images. Our solution does not have these limitations. Furthermore, we experiment with hand-held or helmet-held catadioptric cameras moving in a city and generate 3D models for which the camera trajectory can be longer than one kilometer.
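A toy version of the visibility-based sculpting step might look like the sketch below: triangulate the sparse points with a 3D Delaunay triangulation and mark every tetrahedron crossed by a camera-to-point ray as free space. The manifold surface extraction with Steiner vertices is considerably more involved and is omitted.

# Sketch: carve free-space tetrahedra using visibility rays.
import numpy as np
from scipy.spatial import Delaunay

def carve_free_space(points, cameras, visibility, samples=32):
    # points: Nx3; cameras: Mx3 centres; visibility: list of (cam_idx, pt_idx).
    tri = Delaunay(points)
    free = np.zeros(len(tri.simplices), dtype=bool)
    for ci, pi in visibility:
        t = np.linspace(0.0, 1.0, samples)[:, None]
        ray = cameras[ci] * (1 - t) + points[pi] * t   # samples along the ray
        s = tri.find_simplex(ray)
        free[s[s >= 0]] = True                         # crossed tetrahedra
    return tri, free                                   # matter = ~free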
- Published
- 2013
- Full Text
- View/download PDF
44. Modélisation 3D automatique d'environnements : une approche éparse à partir d'images prises par une caméra catadioptrique
- Author
-
Yu, Shuda, Institut Pascal (IP), Université Blaise Pascal - Clermont-Ferrand 2 (UBP)-SIGMA Clermont (SIGMA Clermont)-Centre National de la Recherche Scientifique (CNRS), SIGMA Clermont (SIGMA Clermont)-Centre National de la Recherche Scientifique (CNRS)-Université Clermont Auvergne [2017-2020] (UCA [2017-2020]), Université Blaise Pascal - Clermont-Ferrand II, Maxime Lhuillier, and SIGMA Clermont (SIGMA Clermont)-Université Clermont Auvergne [2017-2020] (UCA [2017-2020])-Centre National de la Recherche Scientifique (CNRS)
- Subjects
Sparse Point Cloud, [SPI.OTHER]Engineering Sciences [physics]/Other, Complexity Analysis, 3D Delaunay Triangulation, Steiner Vertices, Structure-from-Motion, Manifold Reconstruction, 2-Manifold Reconstruction
The automatic 3d modeling of an environment from images is still an active topic in computer vision. Standard methods have three steps: moving a camera in the environment to take an image sequence, reconstructing the geometry of the environment, and applying a dense stereo method to obtain a surface model of the environment. In the second step, interest points are detected and matched in the images, then camera poses and a sparse cloud of 3d points corresponding to the interest points are simultaneously estimated. In the third step, all pixels of the images are used to reconstruct a surface of the environment, e.g. by estimating a dense cloud of 3d points. Here we propose to generate a surface directly from the sparse point cloud and its visibility information provided by the geometry reconstruction step. The advantages are low time and space complexities; this is useful, e.g., for obtaining compact models of large and complete environments like a city. To do so, a surface reconstruction method is proposed that sculpts the 3d Delaunay triangulation of the reconstructed points. The visibility information is used to classify the tetrahedra as free space or matter, and a surface is then extracted using a greedy method and a minority of Steiner points. The 2-manifold constraint is enforced on the surface to allow standard surface post-processing such as denoising and refinement by photo-consistency optimization. This method is also extended to the incremental case: each time a new key-frame is selected in the input video, new 3d points and a new camera pose are estimated, then the reconstructed surface is updated. We study the time complexity in both cases (incremental or not). In experiments, a low-cost catadioptric camera is used to generate textured 3d models of complete environments including buildings, ground, and vegetation. A drawback of our methods is that thin scene components, e.g. tree branches and electric posts, cannot be correctly reconstructed.
- Published
- 2013
45. Grasping unknown novel objects from single view using octant analysis
- Author
-
Chleborad, Aaron A.
- Abstract
Octant analysis, when combined with properties of the multivariate central limit theorem and the multivariate normal distribution, makes it possible to find a reasonable grasping point on an unknown novel object. This thesis's original contribution is the ability to find progressively improving grasp points in a poor and/or sparse point cloud. It is shown how octant analysis was implemented using common consumer-grade electronics to demonstrate its applicability to home and office robotics. Tests were carried out on three novel objects in multiple poses to determine the algorithm's consistency and effectiveness at finding a grasp point on those objects. Results from the experiments bolster the idea that applying octant analysis to the grasping-point problem is promising and deserving of further investigation. Other applications of the technique are also briefly considered.
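Reading only the abstract, octant analysis might be sketched as below: partition the cloud into eight octants about the centroid and treat each octant's sample mean, stabilized per the central limit theorem, as a grasp candidate. The selection rule here is our guess and almost certainly simpler than the thesis's method.

# Speculative sketch of octant analysis for grasp-point candidates.
import numpy as np

def octant_grasp_candidates(points):
    # points: Nx3 (possibly poor/sparse) point cloud of the object.
    c = points.mean(axis=0)
    signs = points >= c                        # boolean octant code per point
    codes = signs @ np.array([1, 2, 4])        # octant index 0..7
    candidates = []
    for o in range(8):
        cloud = points[codes == o]
        if len(cloud) >= 3:                    # need a few samples per octant
            candidates.append((len(cloud), cloud.mean(axis=0)))
    # Illustrative rule: prefer the best-supported octant mean.
    return max(candidates, key=lambda t: t[0])[1] if candidates else c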
- Published
- 2010
46. Ground Segmentation Algorithm for Sloped Terrain and Sparse LiDAR Point Cloud
- Author
-
Jiménez, Víctor, Godoy, Jorge, Artuñedo, Antonio, and Villagrá, Jorge
- Subjects
Occupancy grid mapping, LiDAR, General Computer Science, Computer science, Point cloud, Channel-based, Terrain, Belief Propagation, Sloped terrain, Obstacle-ground segmentation, General Materials Science, Random field, Markov Random Field, Markov chain, General Engineering, Sparse point cloud, Geometric features, Statistical classification, Electrical engineering. Electronics. Nuclear engineering, Algorithm
Distinguishing obstacles from ground is an essential step for common perception tasks such as object detection-and-tracking or occupancy grid maps. Typical approaches rely on plane fitting or local geometric features, but their performance is reduced in situations with sloped terrain or sparse data. Some works address these issues using Markov Random Fields and Belief Propagation, but they rely on local geometric features alone. This article presents a strategy for ground segmentation in LiDAR point clouds composed of two main steps: (i) first, an initial classification is performed by dividing the points into small groups and analyzing geometric features between them; (ii) then, this initial classification is used to model the surrounding ground height as a Markov Random Field, which is solved using the Loopy Belief Propagation algorithm. Points are finally classified by comparing their height with the estimated ground height map. On one hand, using an initial estimation to model the Markov Random Field provides a better description of the scene than the local geometric features commonly used alone. On the other hand, using a graph-based approach with message passing achieves better results than simpler filtering or enhancement techniques, since data propagation compensates for the sparse distribution of LiDAR point clouds. Experiments are conducted with two different sources of information: the public nuScenes dataset and an autonomous vehicle prototype. The estimation results are analyzed with respect to other methods, showing good performance in a variety of situations. This work was supported in part by the Spanish Ministry of Science and Innovation under National Projects PRYSTINE and SECREDAS (Grants PCI2018-092928 and PCI2018-093144), in part by the Community of Madrid through the SEGVAUTO 4.0-CM Programme under Grant S2018-EMT-4362, and in part by the European Commission and the Electronic Components and Systems for European Leadership (ECSEL) Joint Undertaking through the PRYSTINE project under Grant 783190 and the SECREDAS project under Grant 783119.
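The two-step structure can be illustrated with the simplified sketch below, where a coarse ground-height grid is seeded from the lowest points and unknown cells are filled by iterative neighbour averaging, a cheap stand-in for the paper's Loopy Belief Propagation on a Markov Random Field; all parameters are illustrative.

# Simplified ground segmentation: seed a height grid, propagate, classify.
import numpy as np

def segment_ground(points, cell=1.0, grid=100, iters=50, tol=0.3):
    # points: Nx3 LiDAR points roughly centred on the sensor.
    ij = np.floor(points[:, :2] / cell).astype(int) + grid // 2
    ok = (ij >= 0).all(1) & (ij < grid).all(1)
    h = np.full((grid, grid), np.nan)
    for (i, j), z in zip(ij[ok], points[ok, 2]):
        h[i, j] = z if np.isnan(h[i, j]) else min(h[i, j], z)  # lowest hit
    # Message-passing-like smoothing: fill unknown cells from neighbours.
    for _ in range(iters):
        pad = np.pad(h, 1, constant_values=np.nan)
        stack = np.stack([pad[1:-1, :-2], pad[1:-1, 2:],
                          pad[:-2, 1:-1], pad[2:, 1:-1]])
        mean = np.nanmean(np.vstack([stack, h[None]]), axis=0)
        h = np.where(np.isnan(h), mean, h)
    ground = np.zeros(len(points), dtype=bool)
    ground[ok] = points[ok, 2] <= h[ij[ok, 0], ij[ok, 1]] + tol
    return ground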
- Full Text
- View/download PDF