7,802 results on '"Camera resectioning"'
Search Results
2. Camera Pose Estimation Using First-Order Curve Differential Geometry.
- Author
- Fabbri, Ricardo, Giblin, Peter, and Kimia, Benjamin
- Subjects
- *CAMERAS, *PROBLEM solving, *CURVES, *DIFFERENTIAL geometry, *IMAGE reconstruction
- Abstract
This paper considers and solves the problem of estimating camera pose given a pair of point-tangent correspondences between a 3D scene and a projected image. The problem arises when considering curve geometry as the basis for forming correspondences and computing structure and calibration; in its simplest form, a curve feature is a point augmented with the curve tangent. We show that while the resectioning problem requires a minimum of three points given the intrinsic parameters, when points are augmented with tangent information only two points are required, leading to substantial robustness and computational savings, e.g., as a minimal engine within RANSAC. In addition, algorithms are developed to find a practical solution shown to effectively recover camera pose using synthetic and real datasets. This technology is intended as a building block of curve-based structure from motion systems, allowing new views to be incrementally registered to a core set of views for which relative pose has been computed. [ABSTRACT FROM AUTHOR]
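As background for the abstract's claim that resectioning normally needs at least three points given the intrinsics (or six for an uncalibrated projection matrix), here is a minimal NumPy sketch of the classical DLT resectioning baseline. It is a generic illustration, not the authors' two point-tangent method, and all names in it are ours:

```python
import numpy as np

def resection_dlt(X, x):
    """Direct Linear Transform: estimate the 3x4 projection matrix P
    (up to scale) from n >= 6 world points X (n,3) and pixels x (n,2)."""
    A = []
    for (Xw, Yw, Zw), (u, v) in zip(X, x):
        Xh = [Xw, Yw, Zw, 1.0]
        # two rows of the cross-product constraint x × (P X) = 0
        A.append([0.0] * 4 + [-c for c in Xh] + [v * c for c in Xh])
        A.append(Xh + [0.0] * 4 + [-u * c for c in Xh])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 4)   # null-space vector = flattened P

def project(P, X):
    """Project world points with P and dehomogenize to pixels."""
    Xh = np.hstack([X, np.ones((len(X), 1))])
    x = (P @ Xh.T).T
    return x[:, :2] / x[:, 2:3]
```

With noise-free correspondences the matrix is recovered exactly (up to scale); with noisy data the same null-space solve serves as the linear initialization for nonlinear refinement.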
- Published
- 2021
3. Detection and maintenance of cracks by novel algorithm using IoT
- Author
- Vikas Kumar Sharma
- Subjects
- Task (computing), Basis (linear algebra), Computer science, Robot, Mobile robot, General Medicine, Motion planning, Blob detection, Algorithm, Bridge (nautical), Camera resectioning
- Abstract
The most significant task in bridge maintenance is bridge deck crack inspection. Conventionally, an inspector identifies cracks by eye and records crack locations manually; however, the accuracy of the inspection outcome is limited by the subjective nature of human judgment. This paper suggests a crack inspection system that uses a mobile robot equipped with a camera to gather bridge deck images. In this model, a Laplacian of Gaussian (LoG) algorithm is used to identify cracks, and a global crack map is obtained through robot localization and camera calibration. To ensure that the robot covers the entire bridge deck, a path planning algorithm based on a genetic algorithm is developed; it finds a solution that reduces the number of turns and the moving distance. The suggested method is evaluated through both experiments and simulations.
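The LoG step the abstract mentions can be sketched with plain NumPy (a simplified, hypothetical stand-in for the paper's pipeline): Gaussian smoothing followed by a discrete Laplacian, whose response peaks on dark crack-like ridges:

```python
import numpy as np

def _gauss1d(sigma):
    """Normalized 1-D Gaussian kernel with a 3-sigma radius."""
    r = int(3 * sigma)
    t = np.arange(-r, r + 1)
    k = np.exp(-t**2 / (2.0 * sigma**2))
    return k / k.sum()

def log_response(img, sigma=2.0):
    """Laplacian-of-Gaussian response: separable Gaussian smoothing
    followed by a discrete 4-neighbour Laplacian. A dark crack on a
    bright deck yields a positive response along the crack."""
    k = _gauss1d(sigma)
    g = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1,
                            img.astype(float))
    g = np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, g)
    return (np.roll(g, 1, 0) + np.roll(g, -1, 0) +
            np.roll(g, 1, 1) + np.roll(g, -1, 1) - 4.0 * g)
```

Thresholding this response gives candidate crack pixels; mapping them into the global crack map would then use the robot pose and camera calibration, as the abstract describes.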
- Published
- 2023
4. Lane detector for driver assistance systems
- Author
- R. Agilesh Saravanan, S.A. Sivasankari, K.T.P.S. Kumar, and J. Bennilo Fernandes
- Subjects
- Computer science, Detector, Advanced driver assistance systems, General Medicine, Python (programming language), Task (computing), Shadow, Code (cryptography), Computer vision, Artificial intelligence, Representation (mathematics), Camera resectioning
- Abstract
Lane detection is a challenging problem in traffic systems and has attracted the attention of the computer vision community. For computer vision and machine learning techniques, lane detection acts as a multi-feature detection problem, and many machine learning techniques have been applied to it. Driver support systems are among the most important features of modern vehicles, ensuring driver safety and reducing road accidents. Road lane detection remains one of the most challenging and complex tasks, since it also involves road localization and estimating the relative position between vehicle and road. In this paper, we propose a new method based on an outward-looking on-board camera. The proposed technique can be used on all types of roads (painted, unpainted, curved, straight) in different weather conditions, under changing illumination and shadow effects, and requires neither camera calibration nor a coordinate transform; speed limits are not represented. The system acquires the front view using a camera mounted on the vehicle and detects the lane by applying code written in Python. The proposed system requires no further information about the road and demonstrates robust lane detection performance.
- Published
- 2023
5. Experimental evaluation of a camera rig extrinsic calibration method based on retro-reflective markers detection.
- Author
- Chiodini, Sebastiano, Pertile, Marco, Giubilato, Riccardo, Salvioli, Federico, Barrera, Marco, Franceschetti, Paola, and Debei, Stefano
- Subjects
- *CAMERA calibration, *MONTE Carlo method, *MOTION capture (Human mechanics), *CALIBRATION, *CAMERAS, *DIGITAL image correlation, *RELATIVE motion
- Abstract
• Extrinsic camera calibration method relative to a motion capture system is presented. • Three different spherical-marker segmentation methods are analyzed. • Two approaches to fuse the retro-reflective marker measurements are carried out. • Calibration uncertainty propagation is evaluated via Monte Carlo simulation. • Camera extrinsics are retrieved with an uncertainty of the order of a millimeter. Nowadays, Motion Capture (MC) systems are used more and more to evaluate the accuracy of Visual Odometry and visual Simultaneous Localization and Mapping (SLAM) algorithms. However, the misalignment between the camera optical frame and the camera body frame, as tracked by an MC system, leads to a drift between the reconstructed trajectory and the reference one. In this work, we present a calibration method which estimates the relative orientation and position between these two reference frames. The proposed method is highly efficient because it uses a calibration target composed of a set of retro-reflective markers which are tracked by the MC system itself. Three segmentation methods and two different optimization approaches have been tested. The uncertainty propagation analysis, performed by means of a Monte Carlo simulation, shows that it is possible to calibrate the extrinsic parameters of a stereo camera with a position accuracy of one millimeter and an orientation accuracy better than 1°. [ABSTRACT FROM AUTHOR]
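The Monte Carlo uncertainty propagation named in the highlights can be illustrated with a toy estimator. Our simplification, purely for illustration: the extrinsic is reduced to a translation between the camera-frame and MC-frame marker sets (the real method also estimates orientation):

```python
import numpy as np

def estimate_translation(markers_cam, markers_mc):
    """Toy stand-in for the extrinsic solver: best-fit translation
    between two corresponding marker sets (rotation assumed identity)."""
    return (markers_mc - markers_cam).mean(axis=0)

def mc_uncertainty(markers_cam, markers_mc, sigma, trials=2000, seed=0):
    """Monte Carlo propagation: perturb every marker measurement with
    Gaussian noise of std `sigma`, re-run the estimator, and report the
    per-axis spread of the resulting translations."""
    rng = np.random.default_rng(seed)
    est = []
    for _ in range(trials):
        nc = markers_cam + rng.normal(0.0, sigma, markers_cam.shape)
        nm = markers_mc + rng.normal(0.0, sigma, markers_mc.shape)
        est.append(estimate_translation(nc, nm))
    return np.std(est, axis=0)
```

With 1 mm marker noise on 10 markers, the propagated translation uncertainty comes out near sigma·√2/√n per axis, matching the millimeter-order figures quoted in the highlights.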
- Published
- 2019
6. Uncalibrated Visual Servoing for a Planar Two Link Rigid-Flexible Manipulator Without Joint-Space-Velocity Measurement
- Author
- Tian Hao, Fan Xu, Yanzi Miao, Jingchuan Wang, and Hesheng Wang
- Subjects
- Lyapunov function, Singular perturbation, Observer (quantum physics), Adaptive algorithm, Computer science, Linear-quadratic regulator, Visual servoing, Computer Science Applications, Human-Computer Interaction, Vibration, Control and Systems Engineering, Control theory, Trajectory, Electrical and Electronic Engineering, Manipulator, Software, Beam (structure), Camera resectioning
- Abstract
In this article, to solve the trajectory tracking problem and suppress vibration for a planar two-link rigid-flexible manipulator subject to joint-velocity measurement noise, a novel uncalibrated visual servoing control is proposed. To begin with, the manipulator's dynamic model is established by the assumed mode method (AMM). On this basis, two subsystem controllers are designed using singular perturbation theory: a slow subsystem controller and a fast subsystem controller. In the slow subsystem, to cope with the complication of camera calibration, an adaptive algorithm is formulated to estimate the parameters of a fixed camera online. To overcome the challenge that exact joint-velocity measurements may be disturbed by external noise, a nonlinear sliding observer is developed to estimate the joint-velocity state accurately. The asymptotic convergence of the image tracking error is proved by means of Lyapunov analysis. Additionally, to restrain the flexible beam's elastic vibration, a linear quadratic regulator (LQR) approach is adopted in the fast subsystem control design. Comparative simulation experiments demonstrate the performance of the proposed controller.
- Published
- 2022
7. Camera Pose Estimation Using First-Order Curve Differential Geometry
- Author
- Fabbri, Ricardo, Kimia, Benjamin B., Giblin, Peter J., Hutchison, David, editor, Kanade, Takeo, editor, Kittler, Josef, editor, Kleinberg, Jon M., editor, Mattern, Friedemann, editor, Mitchell, John C., editor, Naor, Moni, editor, Nierstrasz, Oscar, editor, Pandu Rangan, C., editor, Steffen, Bernhard, editor, Sudan, Madhu, editor, Terzopoulos, Demetri, editor, Tygar, Doug, editor, Vardi, Moshe Y., editor, Weikum, Gerhard, editor, Fitzgibbon, Andrew, editor, Lazebnik, Svetlana, editor, Perona, Pietro, editor, Sato, Yoichi, editor, and Schmid, Cordelia, editor
- Published
- 2012
8. Using Multiple Images
- Author
- Peter Corke
- Subjects
- Stereopsis, Pixel, Computer science, Detector, Point cloud, Object model, Computer vision, Visual Word, Artificial intelligence, Fundamental matrix (computer vision), Camera resectioning
- Abstract
In the previous chapter we learnt about corner detectors which find particularly distinctive points in a scene. These points can be reliably detected in different views of the same scene irrespective of viewpoint or lighting conditions. Such points are characterised by high image gradients in orthogonal directions and typically occur on the corners of objects. However the 3-dimensional coordinate of the corresponding world point was lost in the perspective projection process which we discussed in Chap. 11 - we mapped a 3-dimensional world point to a 2-dimensional image coordinate. All we know is that the world point lies along some ray in space corresponding to the pixel coordinate, as shown in Fig. 11.1. To recover the missing third dimension we need additional information. In Sect. 11.2.3 the additional information was camera calibration parameters plus a geometric object model, and this allowed us to estimate the object’s 3-dimensional pose from the 2-dimensional image data.
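The chapter's point that depth can be recovered once a second view supplies another ray can be sketched as a midpoint triangulation of two viewing rays (a generic construction, not necessarily the book's own code):

```python
import numpy as np

def triangulate_midpoint(c1, d1, c2, d2):
    """Midpoint of the closest approach of two viewing rays
    x = c + t*d, one from each camera centre. Minimizes
    |c1 + t1*d1 - (c2 + t2*d2)|^2 over (t1, t2)."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # normal equations of the least-squares problem above
    A = np.array([[d1 @ d1, -(d1 @ d2)],
                  [d1 @ d2, -(d2 @ d2)]])
    b = np.array([(c2 - c1) @ d1, (c2 - c1) @ d2])
    t1, t2 = np.linalg.solve(A, b)
    return 0.5 * ((c1 + t1 * d1) + (c2 + t2 * d2))
```

For perfectly intersecting rays the midpoint is the world point itself; with noisy correspondences the two rays are skew and the midpoint is a sensible compromise.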
- Published
- 2023
9. HARD-PnP: PnP Optimization Using a Hybrid Approximate Representation.
- Author
- Hadfield, Simon, Lebeda, Karel, and Bowden, Richard
- Subjects
- *IMAGE processing, *APPROXIMATION error, *COMPUTER simulation, *MATHEMATICAL optimization, *ERROR analysis in mathematics
- Abstract
This paper proposes a Hybrid Approximate Representation (HAR) based on unifying several efficient approximations of the generalized reprojection error (which is known as the gold standard for multiview geometry). The HAR is an over-parameterization scheme where the approximation is applied simultaneously in multiple parameter spaces. A joint minimization scheme “HAR-Descent” can then solve the PnP problem efficiently, while remaining robust to approximation errors and local minima. The technique is evaluated extensively, including numerous synthetic benchmark protocols and the real-world data evaluations used in previous works. The proposed technique was found to have runtime complexity comparable to the fastest $O(n)$ techniques, and up to 10 times faster than current state of the art minimization approaches. In addition, the accuracy exceeds that of all 9 previous techniques tested, providing definitive state of the art performance on the benchmarks, across all 90 of the experiments in the paper and supplementary material, which can be found on the Computer Society Digital Library at http://doi.ieeecomputersociety.org/10.1109/TPAMI.2018.2806446. [ABSTRACT FROM AUTHOR]
- Published
- 2019
10. A Vision-Based Pipeline for Vehicle Counting, Speed Estimation, and Classification
- Author
- Mark Reynolds, Steve Atkinson, Chenghuan Liu, Du Q. Huynh, and Yuchao Sun
- Subjects
- Computer science, Mechanical Engineering, Traffic flow, Object detection, Computer Science Applications, Video tracking, Automotive Engineering, Computer vision, Smart camera, Artificial intelligence, Monocular vision, Intelligent transportation system, Camera resectioning, Homography (computer vision)
- Abstract
Cameras have been widely used in traffic operations. While many technologically smart camera solutions in the market can be integrated into Intelligent Transport Systems (ITS) for automated detection, monitoring and data generation, many Network Operations (a.k.a Traffic Control) Centres still use legacy camera systems as manual surveillance devices. In this paper, we demonstrate effective use of these older assets by applying computer vision techniques to extract traffic data from videos captured by legacy cameras. In our proposed vision-based pipeline, we adopt recent state-of-the-art object detectors and transfer-learning to detect vehicles, pedestrians, and cyclists from monocular videos. By weakly calibrating the camera, we demonstrate a novel application of the image-to-world homography which gives our monocular vision system the efficacy of counting vehicles by lane and estimating vehicle length and speed in real-world units. Our pipeline also includes a module which combines a convolutional neural network (CNN) classifier with projective geometry information to classify vehicles. We have tested it on videos captured at several sites with different traffic flow conditions and compared the results with the data collected by piezoelectric sensors. Our experimental results show that the proposed pipeline can process 60 frames per second for pre-recorded videos and yield high-quality metadata for further traffic analysis.
- Published
- 2021
11. Automatic Roadblock Identification Algorithm for Unmanned Vehicles Based on Binocular Vision
- Author
- Liang Fang, Zhiwei Guan, and Jinghua Li
- Subjects
- Technology, Article Subject, Matching (graph theory), Computer Networks and Communications, Computer science, Coordinate system, Process (computing), Position (vector), Obstacle, Telecommunication, Feature (machine learning), Electrical and Electronic Engineering, Algorithm, Binocular vision, Information Systems, Camera resectioning
- Abstract
To improve the accuracy of automatic obstacle recognition for driverless vehicles, an automatic obstacle recognition algorithm based on binocular vision is constructed. First, the relevant camera parameters are calibrated with respect to a new vehicle coordinate system to determine the position of obstacles relative to the vehicle, and the three-dimensional coordinates of obstacle points are obtained by binocular matching. Then, the left and right cameras capture the feature points of obstacles in the image to recognize them. Finally, the experimental results show that the recognition errors of the algorithm are 0.03 m, 0.02 m, and 0.01 m for obstacles 1, 2, and 3, respectively, indicating a small recognition error. Adding the vehicle coordinate system to the camera calibration process enables accurate measurement of the relative position between the vehicle and the obstacle.
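For a rectified pair, the binocular matching geometry the abstract relies on reduces to depth from disparity; a minimal sketch (our own parameter names, not the paper's):

```python
import numpy as np

def triangulate(uL, vL, uR, f, cx, cy, baseline):
    """Recover (X, Y, Z) in the left-camera frame from a rectified
    stereo pair: disparity d = uL - uR, depth Z = f * baseline / d.
    f is the focal length in pixels, (cx, cy) the principal point,
    baseline the camera separation in metres."""
    d = uL - uR
    Z = f * baseline / d
    X = (uL - cx) * Z / f
    Y = (vL - cy) * Z / f
    return np.array([X, Y, Z])
```

Because Z is inversely proportional to disparity, centimetre-level errors like those reported are plausible for nearby obstacles, where disparities are large and well resolved.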
- Published
- 2021
12. Evaluation of Test Field-based Calibration and Self-calibration Models of UAV Integrated Compact Cameras
- Author
- Hakan Karabörk, Gülüstan Kılınç Kazar, and Hasan Bilgehan Makineci
- Subjects
- Computer science, Orientation (computer vision), Geography, Planning and Development, Process (computing), Imaging phantom, Software, Photogrammetry, Earth and Planetary Sciences (miscellaneous), Calibration, Computer vision, Artificial intelligence, Camera resectioning, Block (data storage)
- Abstract
Unmanned aerial vehicles (UAVs), which have made a name for themselves in photogrammetry studies in recent years, provide users with integrated camera systems. Identifying interior orientation parameters, such as focal coordinates, focal length and distortions, is an essential requirement for camera systems used for photogrammetric purposes. This process, called camera calibration, is often performed automatically by software libraries. Another important calibration method is self-calibration. Calibrating cameras by creating 2D or 3D test areas is a laborious option, but it is the most widely accepted in terms of accuracy. In this study, images were taken in different test areas (2D and 3D) to calibrate the cameras integrated on two different UAVs, the DJI Phantom 4 Pro and the Parrot Anafi. The calibration parameters determined from these images were compared with those obtained by the self-calibration method, and block adjustment was performed with ground control points marked in the study area. For performance analysis, the root-mean-square error (RMSE) was determined from the control points. In conclusion, both the calibrations obtained with the test fields and those obtained with self-calibration produced acceptable results.
- Published
- 2021
13. Assessment of computer vision methods for motion tracking of planar mechanisms
- Author
- Juan Carlos Arellano-González, Hugo I. Medellín-Castillo, Mario A. García-Murillo, and J. Jesús Cervantes-Sánchez
- Subjects
- Mechanism (engineering), Planar, Match moving, Computer science, Mechanical Engineering, Computer vision, Artificial intelligence, Performance index, Camera resectioning
- Abstract
One of the main challenges in the use of planar mechanisms is verifying and monitoring that the trajectories described by the mechanism correspond to those originally required. However, very few studies have focused on tracking and monitoring the motion of target points located on mechanisms under operating conditions. In this paper, a comparative study is presented to evaluate the performance of several computer vision methods (CVMs) when used for motion tracking of planar mechanisms. The aim is to compare and identify the best CVM, in terms of precision, speed, low cost, and computational performance, for tracking the movement of planar mechanisms. For this purpose, a case study corresponding to a planar four-bar mechanism is selected and analysed. The results show that the vision methods based on the homogeneous and non-homogeneous solutions of the camera calibration matrix are a technological alternative for monitoring the motion trajectories of planar mechanisms.
- Published
- 2021
14. Modelling extreme wide‐angle lens cameras
- Author
- Robert Radovanovic, Derek D. Lichti, Wynand Tredoux, Petra Helmholz, and Reza Maalek
- Subjects
- Optics, Computer science, Earth and Planetary Sciences (miscellaneous), Computers in Earth Sciences, Engineering (miscellaneous), Wide-angle lens, Computer Science Applications, Camera resectioning
- Published
- 2021
15. A survey on horizon detection algorithms for maritime video surveillance: advances and future techniques
- Author
- Yassir Zardoua, Mohammed Boulaala, and Abdelali Astito
- Subjects
- Computer science, Computation, Horizon, Variation (game tree), Computer Graphics and Computer-Aided Design, Convolutional neural network, Computer graphics, Robustness (computer science), Sky, Computer Vision and Pattern Recognition, Algorithm, Software, Camera resectioning
- Abstract
On maritime images, the horizon is a linear shape separating the sea and non-sea regions. This visual cue is essential in several sea video surveillance applications, including camera calibration, digital video stabilization, target detection and tracking, and distance estimation of detected targets. Given the nature of these applications, the horizon detection algorithm must satisfy robustness and real-time constraints. Our first aim in this paper is to provide a comprehensive review of horizon detection algorithms. After analyzing assumptions and test results reported in the horizon detection literature, we found a high trade-off between robustness and real-time performance. Thus, our second aim is to propose and describe three workable techniques to reduce this trade-off. The first technique aims to increase the robustness against contrast-degraded horizons. The non-sea region right above the horizon mainly depicts the sky, coast, ship, or a combination of these classes; the second technique suggests a way to handle such class variation. While we believe that the last two techniques require relatively little computation, the third technique concerns using an alternative convolutional neural network (CNN) architecture to avoid a significant quantity of redundant computations in a previous CNN-based algorithm.
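A minimal illustration of the horizon-as-line idea the survey builds on: per-column edge localization followed by a line fit. This is a deliberately naive baseline, not one of the surveyed algorithms:

```python
import numpy as np

def detect_horizon(img):
    """Locate the strongest vertical intensity transition in each
    column, then fit a line row = a * col + b through the candidates
    by least squares. Returns the line parameters (a, b)."""
    grad = np.abs(np.diff(img.astype(float), axis=0))
    rows = np.argmax(grad, axis=0)       # candidate horizon row per column
    cols = np.arange(img.shape[1])
    a, b = np.polyfit(cols, rows, 1)
    return a, b
```

Real maritime scenes break this baseline quickly (wakes, coastlines, glare), which is exactly the robustness/real-time trade-off the survey examines; robust fits such as RANSAC replace the plain least-squares step in practice.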
- Published
- 2021
16. Semi‐automatic reconstruction of object lines using a smartphone’s dual camera
- Author
- Isam Abu-Qasmieh and Mohammed Aldelgawy
- Subjects
- Computer science, Earth and Planetary Sciences (miscellaneous), Computer vision, Artificial intelligence, Semi automatic, Computers in Earth Sciences, DUAL (cognitive architecture), Object (computer science), Engineering (miscellaneous), Computer Science Applications, Camera resectioning
- Published
- 2021
17. Inferring Bias and Uncertainty in Camera Calibration
- Author
- Moritz Michael Knorr, Annika Hagemann, Christoph Stiller, and Holger Janssen
- Subjects
- Estimation theory, Computer science, Model selection, Estimator, Artificial Intelligence, Resampling, Metric (mathematics), Benchmark (computing), Calibration, Computer Vision and Pattern Recognition, Algorithm, Software, Camera resectioning
- Abstract
Accurate camera calibration is a precondition for many computer vision applications. Calibration errors, such as wrong model assumptions or imprecise parameter estimation, can deteriorate a system’s overall performance, making the reliable detection and quantification of these errors critical. In this work, we introduce an evaluation scheme to capture the fundamental error sources in camera calibration: systematic errors (biases) and uncertainty (variance). The proposed bias detection method uncovers smallest systematic errors and thereby reveals imperfections of the calibration setup and provides the basis for camera model selection. A novel resampling-based uncertainty estimator enables uncertainty estimation under non-ideal conditions and thereby extends the classical covariance estimator. Furthermore, we derive a simple uncertainty metric that is independent of the camera model. In combination, the proposed methods can be used to assess the accuracy of individual calibrations, but also to benchmark new calibration algorithms, camera models, or calibration setups. We evaluate the proposed methods with simulations and real cameras.
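The resampling-based uncertainty estimation the abstract describes can be illustrated on a toy one-parameter fit (a bootstrap over observations; the paper's estimator is more elaborate, this sketch only shows the underlying idea):

```python
import numpy as np

def bootstrap_std(x, y, trials=500, seed=0):
    """Resampling-based uncertainty of a fitted slope: refit the model
    on datasets resampled with replacement and report the spread of
    the resulting estimates."""
    rng = np.random.default_rng(seed)
    n = len(x)
    slopes = []
    for _ in range(trials):
        idx = rng.integers(0, n, n)          # resample observations
        slopes.append(np.polyfit(x[idx], y[idx], 1)[0])
    return np.std(slopes)
```

Unlike the classical covariance estimator, which assumes a correct noise model, the resampled spread reflects whatever structure is actually in the residuals, which is why resampling extends covariance-based uncertainty to non-ideal conditions.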
- Published
- 2021
18. Robotic Camera Calibration to Maintain Consistent Precision of 3D Trackers
- Author
- Jun-Min Baek, Joonho Seo, and Gunwoo Noh
- Subjects
- BitTorrent tracker, Computer science, Mechanical Engineering, Tracking (particle physics), Industrial and Manufacturing Engineering, Standard deviation, Position (vector), Checkerboard, Calibration, Computer vision, Artificial intelligence, Electrical and Electronic Engineering, Robotic arm, Camera resectioning
- Abstract
Camera calibration is essential for calculating camera parameters, and the parameters should remain constant whenever the position tracker needs to be recalibrated. Manual camera calibration obtains checkerboard images by tilting or positioning the checkerboard by hand. However, it is difficult to obtain the same checkerboard images in every calibration, resulting in different camera parameters. Therefore, this study proposes robotic camera calibration to produce constant camera parameters whenever the tracker needs to be calibrated. The robot arm moves the checkerboard according to programmed positions and orientations so that the same checkerboard images can be obtained in every calibration. Experiments were conducted to compare the results of manual and robotic camera calibration. First, we compared the standard deviations of the intrinsic and extrinsic parameters; the results show that robotic camera calibration produced parameters with smaller standard deviations. We also compared the tracking precision of a cubic marker under manual and robotic camera calibration. The standard deviations of the tracking precision were 1.65 mm with manual calibration and only 0.35 mm with robotic calibration. This reveals that robotic camera calibration enables more precise and consistent tracking than manual calibration.
- Published
- 2021
19. Human height estimation from highly distorted surveillance image
- Author
- Francesco Tosti, Carla Nardinocchi, Samuele Giuliani, Maria Marsella, Wissam Wahbeh, Pierpaolo Lopes, and Claudio Ciampini
- Subjects
- Computer science, Field (computer science), Pathology and Forensic Medicine, Image (mathematics), Software, Photography, Genetics, Calibration, Humans, Crime scene, Computer vision, Forensic Sciences, Body Height, Lens (optics), Photogrammetry, camera calibration, height measurements, image distortion, photogrammetry, space resection, terrestrial laser scanner, vanishing line and point, Artificial intelligence, Camera resectioning
- Abstract
Video surveillance cameras (VSCs) are an important source of information during investigations, especially when used as a tool for the extraction of verified and reliable forensic measurements. In this study, some aspects of human height extraction from VSC video frames are analyzed with the aim of identifying and mitigating error sources that can strongly affect the measurement, more specifically those introduced by the lens distortion present in wide-field-of-view lenses such as VSCs. A weak model that cannot properly describe and correct the lens distortion could introduce systematic errors. This study focuses on camera calibration to verify human height extraction with the Amped FIVE software, which is adopted by the forensic science laboratories of the Carabinieri Force (RaCIS), Italy. A stable and reliable camera calibration approach is needed, since investigators have to deal with different cameras while inspecting a crime scene. The performance of the software in correcting distorted images is compared with a single-view self-calibration technique. Both approaches were applied to several frames acquired by a fish-eye camera, measuring the height of five different people. Moreover, two actual cases, both characterized by common low-resolution and distorted images, were also analyzed; the height of four known persons was measured and used as a reference value for validation. Results show no significant difference between the two calibration approaches for the fish-eye camera in the test field, while differences were found in the measurements on the actual cases.
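The lens-distortion correction at issue can be sketched with a one-coefficient radial model (a common textbook form; Amped FIVE's actual model is not reproduced here):

```python
import numpy as np

def distort(r_u, k1):
    """Polynomial radial model: distorted radius from undistorted,
    r_d = r_u * (1 + k1 * r_u^2). Negative k1 gives barrel distortion."""
    return r_u * (1.0 + k1 * r_u**2)

def undistort(r_d, k1, iters=40):
    """Invert the model by fixed-point iteration
    r_u <- r_d / (1 + k1 * r_u^2), starting from r_u = r_d."""
    r_u = r_d
    for _ in range(iters):
        r_u = r_d / (1.0 + k1 * r_u**2)
    return r_u
```

A height measurement taken on an uncorrected frame silently inherits the distortion error, which grows toward the image border, exactly where surveillance cameras often place people; hence the study's focus on validating the correction model first.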
- Published
- 2021
20. Camera Orientation Estimation Using Motion-Based Vanishing Point Detection for Advanced Driver-Assistance Systems
- Author
- Joonki Paik, Minwoo Shin, Youngran Jo, and Jinbeum Jang
- Subjects
- Lane departure warning system, Orientation (computer vision), Computer science, Mechanical Engineering, Advanced driver assistance systems, RANSAC, Computer Science Applications, Extended Kalman filter, Intersection, Automotive Engineering, Computer vision, Artificial intelligence, Vanishing point, Camera resectioning
- Abstract
Advanced driver-assistance systems need a camera calibration algorithm for various vision applications including surround-view monitoring (SVM) and lane departure warning (LDW). Although cameras mounted on a vehicle are calibrated in the manufacturing process, their orientation angles are subject to tilting because of continuing vibration and external impact. To solve the problem, this paper presents an online calibration algorithm for camera orientation estimation using motion vectors and three-dimensional geometry. The proposed algorithm consists of three steps: i) driving direction estimation by calculating an intersection of motion vectors, ii) camera orientation estimation based on 3-line random sample consensus (RANSAC) using the estimated intersection, and iii) final orientation decision using extended Kalman filter from the result of each frame. Experimental results demonstrate that the proposed algorithm stably estimates camera orientation angles from motion vectors and lines under the parallelism and orthogonality assumptions.
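Step i) above, estimating the driving direction as the intersection of motion vectors, can be sketched as a least-squares intersection of 2D lines (our own generic formulation, not the paper's exact solver):

```python
import numpy as np

def vanishing_point(points, directions):
    """Least-squares intersection of 2D lines, each given by a point
    and a direction: minimizes the sum of squared perpendicular
    distances from the solution to every line."""
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for p, d in zip(points, directions):
        d = d / np.linalg.norm(d)
        M = np.eye(2) - np.outer(d, d)   # projector onto the line normal
        A += M
        b += M @ p
    return np.linalg.solve(A, b)
```

In the paper's setting each motion vector contributes one line; a RANSAC wrapper around this solve would discard motion vectors from independently moving objects before the final fit.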
- Published
- 2021
21. Motion estimation for fisheye video sequences combining perspective projection with camera calibration information
- Author
- Andre Kaup, Andrea Eichenseer, and Michel Batz
- Subjects
- Computer science, Luminance, Computer graphics (images), Motion estimation, Computer vision, Projection (set theory), Block (data storage), Motion compensation, Video processing, Image plane, Quarter-pixel motion, Lens (optics), Artificial intelligence, Camera resectioning
- Abstract
Fisheye cameras prove a convenient means in surveillance and automotive applications as they provide a very wide field of view for capturing their surroundings. Contrary to typical rectilinear imagery, however, fisheye video sequences follow a different mapping from the world coordinates to the image plane which is not considered in standard video processing techniques. In this paper, we present a motion estimation method for real-world fisheye videos by combining perspective projection with knowledge about the underlying fisheye projection. The latter is obtained by camera calibration since actual lenses rarely follow exact models. Furthermore, we introduce a re-mapping for ultra-wide angles which would otherwise lead to wrong motion compensation results for the fisheye boundary. Both concepts extend an existing hybrid motion estimation method for equisolid fisheye video sequences that decides between traditional and fisheye block matching in a block-based manner. Compared to that method, the proposed calibration and re-mapping extensions yield gains of up to 0.58 dB in luminance PSNR for real-world fisheye video sequences. Overall gains amount to up to 3.32 dB compared to traditional block matching.
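The projection models being combined can be written down directly: the equisolid fisheye mapping r = 2f·sin(θ/2) versus the perspective mapping r = f·tan(θ). These are the ideal models; the paper's point is that real lenses deviate from them, hence the calibration step:

```python
import numpy as np

def equisolid_to_angle(r, f):
    """Invert the equisolid fisheye model r = 2 f sin(theta/2) to get
    the incidence angle theta from the fisheye image radius r."""
    return 2.0 * np.arcsin(r / (2.0 * f))

def angle_to_perspective(theta, f):
    """Pinhole (rectilinear) image radius for the same incidence
    angle: r = f tan(theta)."""
    return f * np.tan(theta)
```

Chaining the two functions re-maps a fisheye radius to its perspective equivalent, which is the basis for applying perspective motion models to fisheye content; note tan(θ) diverges as θ approaches 90°, which is why the paper needs a separate re-mapping for ultra-wide angles.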
- Published
- 2022
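The equisolid fisheye mapping discussed in the abstract above contrasts with the standard pinhole (rectilinear) model. A minimal sketch of the two projection functions and a re-mapping between them, assuming an ideal equisolid lens r = 2f·sin(θ/2) rather than the calibrated lens model the authors actually use:

```python
import math

def perspective_radius(theta, f):
    """Radial image distance under the pinhole (rectilinear) model: r = f * tan(theta)."""
    return f * math.tan(theta)

def equisolid_radius(theta, f):
    """Radial image distance under the ideal equisolid fisheye model: r = 2 f * sin(theta / 2)."""
    return 2.0 * f * math.sin(theta / 2.0)

def reproject_equisolid_to_perspective(r_fisheye, f):
    """Map a fisheye radius back to the perspective image plane.
    Inverting the equisolid model recovers the incidence angle theta,
    which the pinhole model then re-projects."""
    theta = 2.0 * math.asin(r_fisheye / (2.0 * f))
    return f * math.tan(theta)
```

For the same incidence angle, the equisolid radius is always smaller than the perspective one, which is why fisheye lenses can squeeze an ultra-wide field of view onto a finite sensor.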
22. Urban dual mode video detection system based on fisheye and PTZ cameras
- Author
-
Damián Oliva, Lilian Garcia, Sebastián I. Arroyo, and Felix Safar
- Subjects
0303 health sciences ,General Computer Science ,business.industry ,Computer science ,Interface (computing) ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,02 engineering and technology ,Visualization ,03 medical and health sciences ,Omnidirectional camera ,ONVIF ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Computer vision ,Artificial intelligence ,Electrical and Electronic Engineering ,Zoom ,business ,Image resolution ,030304 developmental biology ,Camera resectioning ,Ground plane - Abstract
This work presents an artificial vision-based monitoring system for urban environments. It comprises a fisheye camera monitoring the scene's 180°×360° hemisphere and a Pan-Tilt-Zoom camera capturing narrower regions of interest in high resolution. The ONVIF protocol standard is used to interface both IP cameras, allowing for the integration of camera control (image acquisition and movement) and geometric calculations on a single device. The events of interest (motion of vehicles and pedestrians) are assumed to happen on the ground plane. This assumption is required to solve the back-projection, the function that maps coordinates in the highly distorted images of the fisheye camera to the ground plane. A calibration strategy estimates the poses of the cameras without placing restrictions on their orientations or relative distance. It optimizes the back-projection error in the ground plane instead of the re-projection error in the image. Finally, a simple pointing and zoom adjustment strategy controls the Pan-Tilt-Zoom camera. The system is tested in controlled laboratory conditions and shows accurate outdoor performance for pedestrian observation.
- Published
- 2021
- Full Text
- View/download PDF
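The ground-plane assumption in the entry above is what makes the back-projection well-posed: for points with Z = 0 the projection becomes an invertible 3×3 homography. A hedged sketch for a plain pinhole camera (not the fisheye model of the paper), with made-up intrinsics and an overhead camera pose chosen purely for illustration:

```python
import numpy as np

def ground_homography(K, R, t):
    """For points on the ground plane Z = 0, the projection
    s*[u, v, 1]^T = K [R | t] [X, Y, 0, 1]^T collapses to the
    3x3 homography H = K [r1 r2 t]."""
    return K @ np.column_stack((R[:, 0], R[:, 1], t))

def back_project(H, u, v):
    """Map a pixel back to ground-plane coordinates (X, Y) by inverting H."""
    p = np.linalg.inv(H) @ np.array([u, v, 1.0])
    return p[:2] / p[2]

# Hypothetical setup: camera 2 m above the origin, looking straight down.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
R = np.diag([1.0, -1.0, -1.0])          # world-to-camera rotation
t = -R @ np.array([0.0, 0.0, 2.0])      # camera centre at (0, 0, 2)
H = ground_homography(K, R, t)
```

Optimizing calibration against the back-projection error in the ground plane, as the authors do, directly penalizes errors in these recovered (X, Y) coordinates rather than in pixels.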
23. Geometrically Driven Underground Camera Modeling and Calibration With Coplanarity Constraints for a Boom-Type Roadheader
- Author
-
Yang Wenjuan, Hongwei Ma, Guang-Ming Zhang, and Xuhui Zhang
- Subjects
TR ,T1 ,Computer science ,business.industry ,Calibration (statistics) ,TN ,020208 electrical & electronic engineering ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Astrophysics::Instrumentation and Methods for Astrophysics ,02 engineering and technology ,Coplanarity ,GeneralLiterature_MISCELLANEOUS ,Collinearity equation ,Control and Systems Engineering ,0202 electrical engineering, electronic engineering, information engineering ,Roadheader ,Computer vision ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,Pose ,Camera resectioning - Abstract
The conventional calibration methods based on a perspective camera model are not suitable for the underground camera with two-layer glasses, which is specially designed for explosion proofing and dust removal in a coal mine. Underground camera modeling and calibration algorithms are urgently needed to improve the precision and reliability of underground visual measurement systems. This article presents a novel geometrically driven underground camera calibration algorithm for a boom-type roadheader. The underground camera model is established under coplanarity constraints, explicitly considering the impact of refraction triggered by the two-layer glasses and deriving the geometrical relationship of the equivalent collinearity equations. On this basis, we perform parameter calibration based on a geometrically driven calibration model, using 2D–2D correspondences between the image points and the object coordinates of the planar target. A hybrid Levenberg–Marquardt (LM) and particle swarm optimization (PSO) algorithm is further proposed based on a dynamic combination of LM and PSO, which optimizes the underground camera calibration results by minimizing the error of the nonlinear underground camera model. The experimental results demonstrate that the pose errors caused by the two-layer glass refraction are well corrected by the proposed method. The accuracy of the cutting-head pose estimation has increased by 55.73%, meeting the requirements of underground excavations.
- Published
- 2021
- Full Text
- View/download PDF
24. Moving Camera-Based Object Tracking Using Adaptive Ground Plane Estimation and Constrained Multiple Kernels
- Author
-
Yong Liu and Tao Liu
- Subjects
Economics and Econometrics ,Article Subject ,Computer science ,Strategy and Management ,02 engineering and technology ,Tracking (particle physics) ,0502 economics and business ,0202 electrical engineering, electronic engineering, information engineering ,Structure from motion ,Computer vision ,HE1-9990 ,Ground plane ,050210 logistics & transportation ,TA1001-1280 ,business.industry ,Mechanical Engineering ,05 social sciences ,Tracking system ,Kalman filter ,Computer Science Applications ,Transportation engineering ,Video tracking ,Kernel (statistics) ,Automotive Engineering ,020201 artificial intelligence & image processing ,Artificial intelligence ,business ,Transportation and communications ,Camera resectioning - Abstract
Moving camera-based object tracking for the intelligent transportation system (ITS) has drawn increasing attention. The unpredictability of driving environments and noise from the camera calibration, however, make conventional ground plane estimation unreliable and adversely affect the tracking result. In this paper, we propose an object tracking system using an adaptive ground plane estimation algorithm, facilitated with constrained multiple kernel (CMK) tracking and Kalman filtering, to continuously update the location of moving objects. The proposed algorithm takes advantage of structure from motion (SfM) to estimate the pose of the moving camera, and then the estimated camera yaw angle is used as feedback to improve the accuracy of the ground plane estimation. To track objects robustly and efficiently under occlusion, the constrained multiple kernel tracking technique is adopted in the proposed system to track moving objects in 3D space (depth). The proposed system is evaluated on several challenging datasets, and the experimental results show favorable performance: the system not only efficiently tracks on-road objects from a dashcam mounted on a free-moving vehicle but also handles occlusion well during tracking.
- Published
- 2021
- Full Text
- View/download PDF
25. PatchMatch Filter‐Census: A slanted‐plane stereo matching method for slope modelling application
- Author
-
Weiyong Eng, Voon Chet Koo, and Tien-Sze Lim
- Subjects
Ground truth ,Matching (statistics) ,Computer Networks and Communications ,Computer science ,business.industry ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Word error rate ,QA75.5-76.95 ,Filter (signal processing) ,Human-Computer Interaction ,Transformation matrix ,Computational Theory and Mathematics ,Artificial Intelligence ,Hardware and Architecture ,Robustness (computer science) ,Electronic computers. Computer science ,Q300-390 ,Computer vision ,Image rectification ,Artificial intelligence ,business ,Cybernetics ,Information Systems ,Camera resectioning - Abstract
Image matching is a well-studied problem in computer vision. Conventional image matching is solved using image feature matching algorithms, and more recently deep learning techniques have also been applied to tackle the problem. Here, a slope-modelling framework is proposed by adopting image matching techniques. First, image pairs of a slope scene are captured, and camera calibration as well as image rectification are performed. Then, PatchMatch Filter (PMF-S) and PWC-Net techniques are adapted to solve the matching of image pairs. In the proposed PatchMatch Filter-Census (PMF-Census), slanted-plane modelling, the image census transform and gradient difference are employed in the matching cost formulation. Later, nine matching points are manually selected from an image pair. The matching point pairs are further used to fit a transformation matrix relating the matching between the image pair. Then, the transformation matrix is applied to obtain a ground truth matching image for algorithm evaluation. The challenges in this matching problem are that the slope is a homogeneous region and has a slanted-surface geometric structure. In this work, it is found that the error rate of the proposed PMF-Census is significantly lower than that of the PWC-Net method, making it more suitable for this slope-modelling task. In addition, to show the robustness of the proposed PMF-Census against the original PMF-S, further experiments on image pairs from the Middlebury Stereo 2006 dataset are conducted. It is demonstrated that the error percentage of the proposed PMF-Census is reduced significantly, especially in low-texture and photometrically distorted regions, in comparison to the original PMF-S algorithm. This further verifies the suitability of PMF-Census for modelling outdoor low-texture slope scenes.
- Published
- 2021
- Full Text
- View/download PDF
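The census transform used in the PMF-Census matching cost above encodes each pixel's local intensity ordering as a bit string, which makes the cost robust to photometric distortion. A toy NumPy sketch (my own minimal version, not the authors' implementation; image borders are handled by wrap-around for brevity):

```python
import numpy as np

def census_transform(img, window=3):
    """Census transform: each pixel becomes a bit string recording whether
    each neighbour in the window is darker than the centre pixel."""
    r = window // 2
    out = np.zeros(img.shape, dtype=np.uint64)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            # shifted[y, x] holds the neighbour img[y - dy, x - dx] (wrap-around at borders)
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            out = (out << np.uint64(1)) | (shifted < img).astype(np.uint64)
    return out

def hamming_cost(c1, c2):
    """Matching cost between two census codes: their Hamming distance."""
    return bin(int(c1) ^ int(c2)).count("1")
```

Stereo matching then compares census codes along the epipolar line with the Hamming distance instead of raw intensity differences.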
26. Accuracy comparison of interior orientation parameters from different photogrammetric software and direct linear transformation method
- Author
-
Muhammed Enes Atik and Zaide Duran
- Subjects
Computer science ,business.industry ,Orientation (computer vision) ,020209 energy ,Engineering, Multidisciplinary ,Mühendislik, Ortak Disiplinler ,04 agricultural and veterinary sciences ,02 engineering and technology ,General Medicine ,Photogrammetry ,Software ,040103 agronomy & agriculture ,0202 electrical engineering, electronic engineering, information engineering ,0401 agriculture, forestry, and fisheries ,Computer vision ,Artificial intelligence ,Direct linear transformation ,business ,Camera Calibration,Accuracy Assessment,Three Dimensional Model,Photogrammetry,Interior Orientation ,Camera resectioning ,Three dimensional model - Abstract
The integration of computer vision algorithms and photogrammetric methods leads to procedures that increasingly automate the image-based 3D modeling process. The main objective of photogrammetry is to obtain a three-dimensional model using terrestrial or aerial images. Calibration of the camera and determination of the orientation parameters are important for obtaining accurate and reliable 3D models. For this purpose, many methods have been developed in the literature. However, since each method has a different mathematical background, calibration results may differ. In this study, the effect of camera interior orientation parameters obtained from different methods on the accuracy of the three-dimensional model is examined. In this context, a test area consisting of 21 points was used. The test network was coordinated in a local coordinate system using geodetic methods. Some points of the test area were selected as check points and an accuracy analysis was performed. The Direct Linear Transformation (DLT) method and the MATLAB, Agisoft Lens, Photomodeler, and 3D Flow Zephyr software were analysed. The lowest error value of 7.7 cm was achieved by modelling with Agisoft Lens.
- Published
- 2021
- Full Text
- View/download PDF
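The Direct Linear Transformation method compared in the entry above estimates the full 3×4 projection matrix from 3D-2D point correspondences by solving a homogeneous linear system. A minimal sketch with synthetic points (an illustrative toy setup, not the authors' 21-point test field):

```python
import numpy as np

def dlt_projection_matrix(world_pts, image_pts):
    """Estimate the 3x4 projection matrix P from >= 6 non-coplanar 3D-2D
    correspondences: each pair contributes two rows to A in A p = 0,
    solved via the SVD null vector (the classic 11-parameter DLT)."""
    A = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 4)

def project(P, X):
    """Project a 3D point with P and dehomogenize to pixel coordinates."""
    x = P @ np.append(np.asarray(X, dtype=float), 1.0)
    return x[:2] / x[2]
```

Interior orientation (focal length, principal point) can then be extracted from the estimated P by RQ decomposition, which is where the method-to-method differences the study measures come from.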
27. A Fast and Flexible Projector-Camera Calibration System
- Author
-
Haibin Ling, Samed Ozdemir, Bingyao Huang, and Ying Tang
- Subjects
0209 industrial biotechnology ,Computer science ,business.industry ,Calibration (statistics) ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Bundle adjustment ,02 engineering and technology ,Iterative reconstruction ,law.invention ,020901 industrial engineering & automation ,Software ,Projector ,Control and Systems Engineering ,law ,Robustness (computer science) ,Computer vision ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,Structured light ,Camera resectioning - Abstract
Existing projector-camera calibration methods typically warp keypoints from a camera image to a projector image using estimated homographies and often suffer from errors in camera parameters and noises due to imperfect planarity of the calibration target. This article proposes a practical and robust projector-camera calibration system that explicitly deals with these challenges. First, a graph-theory-based correspondence algorithm is built on top of a color-coded spatial structured light (SL) pattern. Such SL correspondences are then used for a coarse projector-camera calibration. To gain more robustness against noises from an imperfect planar calibration board, we develop a bundle adjustment algorithm to jointly optimize the estimated projector-camera parameters and the correspondences’ coordinates. Moreover, our system requires only one shot of an SL pattern for each calibration board pose, which is much more practical than multishot solutions. Comprehensive experimental validation is conducted on both synthetic and real data sets, and our method clearly outperforms the existing methods in all experiments. For the benefit of the community, a practical open-source implementation of the developed system with a graphical user interface (GUI) is publicly available at https://github.com/bingyaohuang/single-shot-pro-cam-calib . Note to Practitioners —The proposed method is motivated by two challenges in industrial structured light (SL) system calibration: 1) robustness against imperfect planarity of the calibration target and 2) the number of SL projections per pose. In many industrial SL-based 3-D reconstruction systems, the calibration accuracy greatly affects the reconstruction reliability. Our SL calibration system explicitly deals with the calibration target’s imperfect planarity and thus outperforms the existing methods in terms of system accuracy.
Another advantage of our SL calibration system is single-shot-per-pose, allowing fast recalibration and reducing the decoding error due to slight pattern misalignment in multishot methods [37]. In addition, we release the open-source calibration software with a graphical user interface (GUI), with which calibration and sparse 3-D reconstruction can be easily performed without any further instructions. Moreover, considering the complex calibration environment and setup, we make the camera and projector imaging parameters, such as exposure, brightness, and contrast, adjustable through widgets and preview. Finally, a limitation of our color-coded SL system is its sensitivity to environment lighting and target texture. This problem may be solved by projector photometric compensation [16], [18], [19], [39].
- Published
- 2021
- Full Text
- View/download PDF
28. Monocular Vision Ranging and Camera Focal Length Calibration
- Author
-
Aixia Sun, Lixia Xue, Meian Li, Tian Gao, and Liang Fan
- Subjects
Polynomial regression ,0209 industrial biotechnology ,Pixel ,Computer science ,business.industry ,Calibration (statistics) ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Astrophysics::Instrumentation and Methods for Astrophysics ,Field of view ,02 engineering and technology ,Computer Science Applications ,QA76.75-76.765 ,020901 industrial engineering & automation ,Position (vector) ,Computer Science::Computer Vision and Pattern Recognition ,0202 electrical engineering, electronic engineering, information engineering ,Focal length ,020201 artificial intelligence & image processing ,Computer vision ,Computer software ,Artificial intelligence ,business ,Monocular vision ,Software ,Camera resectioning - Abstract
The camera calibration in monocular vision establishes the relationship between the pixel coordinates obtained from a camera and objects in the real world. As an essential procedure, camera calibration recovers three-dimensional geometric information from the captured two-dimensional images. Therefore, a modified camera calibration method based on polynomial regression is proposed to simplify this procedure. In this method, a parameter vector is obtained from the pixel coordinates of obstacles and the corresponding distance values using polynomial regression. The resulting set of parameter vectors can be used to measure the distance between the camera and ground objects in the field of view for a given camera posture and position. The experimental results show that the lowest accuracy of this focal length calibration method for measurement is 97.09%, and the average accuracy is 99.02%.
- Published
- 2021
- Full Text
- View/download PDF
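The regression idea in the entry above can be sketched directly with NumPy's polynomial fitting: pixel coordinates of obstacles paired with measured distances yield a parameter vector that later converts new pixel observations into distances. The sample data, the polynomial degree, and the single-coordinate (image-row) simplification below are all invented for illustration; the abstract does not specify them:

```python
import numpy as np

# Hypothetical training samples: image-row coordinate of an obstacle's ground
# contact point (pixels) and the measured distance to it (metres).
pixel_rows = np.array([460.0, 430.0, 400.0, 370.0, 340.0, 310.0])
distances  = np.array([  2.0,   3.0,   4.5,   6.5,   9.5,  14.0])

# Fit a low-order polynomial mapping pixel row -> distance
# (the "parameter vector" of the method).
coeffs = np.polyfit(pixel_rows, distances, deg=2)

def estimate_distance(v):
    """Estimate the camera-to-object distance from the obstacle's pixel row."""
    return np.polyval(coeffs, v)
```

One such parameter vector is valid only for the camera pose it was trained under, which is why the abstract speaks of a set of vectors for different postures and positions.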
29. DLT-Lines Based Camera Calibration with Lens Radial and Tangential Distortion
- Author
-
Gang Wang, Zhongchen Shi, Yang Shang, and X. F. Zhang
- Subjects
business.industry ,Computer science ,Mechanical Engineering ,Distortion (optics) ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Astrophysics::Instrumentation and Methods for Astrophysics ,Aerospace Engineering ,Intersection (Euclidean geometry) ,law.invention ,Lens (optics) ,Matrix (mathematics) ,Mechanics of Materials ,law ,Computer Science::Computer Vision and Pattern Recognition ,Computer vision ,Artificial intelligence ,Direct linear transformation ,Focus (optics) ,business ,Plumb bob ,Camera resectioning - Abstract
Camera calibration is an essential step for the optical measurement methods used in experimental mechanics. Most plumb line methods focus on solving lens distortions without considering the camera's intrinsic and extrinsic parameters. In this paper, we propose a full camera calibration method to estimate the camera parameters, including the intrinsic parameters, extrinsic parameters and lens distortion parameters, from a single image with six or more non-coplanar lines. We parameterize the 3D lines as the intersection of two planes, which allows a direct linear transformation of the lines (DLT-Lines). Based on the DLT-Lines, the projection matrix is estimated linearly, and the camera intrinsic and extrinsic parameters are then extracted from the matrix. The relationship between the distorted 2D lines and the distortion coefficients is derived, based on which the distortion coefficients can be solved linearly. In the last step, a non-linear optimization algorithm is used to jointly refine all the camera parameters, including the distortion coefficients. Both synthetic and real data are used to evaluate the performance of our method, which demonstrates that the proposed method can accurately calibrate cameras with radial and tangential distortions. We propose a DLT-Lines based camera calibration method for experimental mechanics that can calibrate all the camera parameters from a single image.
- Published
- 2021
- Full Text
- View/download PDF
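The radial and tangential distortion terms solved for in the entry above commonly follow the Brown-Conrady model. A hedged sketch of the forward distortion applied to normalized image coordinates (the two-term radial truncation and the coefficient values in the test are illustrative, not taken from the paper):

```python
def distort(x, y, k1, k2, p1, p2):
    """Apply radial (k1, k2) and tangential (p1, p2) distortion to a point
    (x, y) in normalized image coordinates, following the common
    Brown-Conrady model truncated at two radial terms."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    xd = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    yd = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return xd, yd
```

Because straight 3D lines must image as straight lines under a distortion-free pinhole model, observed curvature of line images constrains exactly these coefficients, which is what lets the paper solve them linearly.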
30. Camera calibration using projection properties of conics of equal eccentricity
- Author
-
Yang Fengli, Yue Zhao, and Wang Xuechun
- Subjects
Projection (mathematics) ,Conic section ,The Intersect ,Mathematics::History and Overview ,Line (geometry) ,Geometry ,Computer Science::Computational Geometry ,Eccentricity (mathematics) ,Ellipse ,Atomic and Molecular Physics, and Optics ,Hyperbola ,Mathematics ,Camera resectioning - Abstract
Conics typically include ellipses, hyperbolas, and parabolas; conics with equal eccentricities intersect the line at infinity at the absolute points. According to the imaging characteristics of conics under a pi...
- Published
- 2021
- Full Text
- View/download PDF
31. EVALUATION OF INTERIOR ORIENTATION MODELLING FOR CAMERAS WITH ASPHERIC LENSES AND IMAGE PRE-PROCESSING WITH SPECIAL EMPHASIS TO SFM RECONSTRUCTION
- Author
-
T. Luhmann, H.-J. Przybilla, H. Hastedt, and Robin Rofallski
- Subjects
Technology ,Orientation (computer vision) ,Computer science ,business.industry ,Distortion (optics) ,3D reconstruction ,Bundle adjustment ,Engineering (General). Civil engineering (General) ,law.invention ,TA1501-1820 ,Lens (optics) ,law ,Structure from motion ,Computer vision ,Applied optics. Photonics ,Artificial intelligence ,TA1-2040 ,Focus (optics) ,business ,Camera resectioning - Abstract
For optical 3D measurements in close-range and UAV applications, the modelling of interior orientation is of superior importance in order to subsequently allow for high precision and accuracy in geometric 3D reconstruction. Nowadays, modern camera systems are often used for optical 3D measurements due to UAV payloads and economic purposes. They are constructed of aspheric and spherical lens combinations and include image pre-processing like low-pass filtering or internal distortion corrections that may lead to effects in image space not being considered with the standard interior orientation models. With a variety of structure-from-motion (SfM) data sets, four typical systematic patterns of residuals could be observed. These investigations focus on the evaluation of interior orientation modelling with respect to minimising systematics given in image space after bundle adjustment. The influences are evaluated with respect to interior and exterior orientation parameter changes and their correlations as well as the impact in object space. With the variety of data sets, camera/lens/platform configurations and pre-processing influences, these investigations indicate a number of different behaviours. Some specific advices in the usage of extended interior orientation models, like Fourier series, could be derived for a selection of the data sets. Significant reductions of image space systematics are achieved. Even though increasing standard deviations and correlations for the interior orientation parameters are a consequence, improvements in object space precision and image space reliability could be reached.
- Published
- 2021
32. Novel Approach to Inspections of As-Built Reinforcement in Incrementally Launched Bridges by Means of Computer Vision-Based Point Cloud Data
- Author
-
Piotr Owerko and Tomasz Owerko
- Subjects
Engineering drawing ,Laser scanning ,Computer science ,business.industry ,010401 analytical chemistry ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Point cloud ,01 natural sciences ,0104 chemical sciences ,Photogrammetry ,Software ,Data acquisition ,Data analysis ,Electrical and Electronic Engineering ,business ,Instrumentation ,Camera resectioning ,Computer technology - Abstract
The paper presents the inspection and assessment of the as-built reinforcement of a selected segment of an incrementally launched (IL) concrete bridge under construction. Two novel techniques were used for data acquisition: modified photogrammetry and High Definition Surveying - a combination of terrestrial laser scanning, computer technology and precision control networks. The main goal of this in-situ experiment was to develop a practical, effective, yet affordable methodology for inspecting the reinforcement of the above-mentioned structures. In order to meet the requirements resulting from the specifics of IL technology (seven-day cycles, 24/7 operation, two concrete pours per week), the authors limited the maximum allowed time for data acquisition, minimized the complexity of measurement data processing and the related requirements for software and computers, and replaced professional photogrammetric equipment with a commonly available SLR camera. Data obtained using this method was then referred to the point cloud model obtained with a precise, state-of-the-art 3D laser scanner. The adopted mathematical model for data post-processing turned out to be effective and correct both for the analysis of data from laser scanners and from photogrammetry. The presented solution adopts a practical workflow with camera calibration to simplify the in-situ measurement procedure while achieving the high accuracy standards required for reinforcement inspection. The point cloud data model obtained from photogrammetry turned out to be insufficient to assess the web reinforcement thoroughly. In turn, it was possible to inspect and evaluate the reinforcement of the bottom slab with accuracy matching that of laser scanning.
- Published
- 2021
- Full Text
- View/download PDF
33. Three-Dimensional Reconstruction of Welding Pool Surface by Binocular Vision
- Author
-
Ji Chen, Chuansong Wu, and Gu Zunan
- Subjects
0209 industrial biotechnology ,Computer science ,Feature vector ,Coordinate system ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,ComputerApplications_COMPUTERSINOTHERSYSTEMS ,02 engineering and technology ,Welding ,Industrial and Manufacturing Engineering ,law.invention ,Gas metal arc welding ,Perspective distortion ,020901 industrial engineering & automation ,Binocular imaging ,law ,Feature points matching ,0202 electrical engineering, electronic engineering, information engineering ,TJ1-1570 ,Computer vision ,Mechanical engineering and machinery ,TC1501-1800 ,ComputingMethodologies_COMPUTERGRAPHICS ,business.industry ,Mechanical Engineering ,Welding pool ,Ocean engineering ,Feature (computer vision) ,020201 artificial intelligence & image processing ,Artificial intelligence ,business ,Surface reconstruction ,Binocular vision ,Camera resectioning - Abstract
Current research on binocular vision systems mainly needs to resolve the cameras' intrinsic parameters before the reconstruction of three-dimensional (3D) objects. The classical Zhang's calibration can hardly account for all errors caused by perspective distortion and lens distortion. Also, the image-matching algorithm of the binocular vision system still needs to be improved to accelerate the reconstruction speed of welding pool surfaces. In this paper, a preset coordinate system was utilized for camera calibration instead of Zhang's calibration. The binocular vision system was modified to capture images of welding pool surfaces by suppressing the strong arc interference during gas metal arc welding. Combining and improving the algorithms of speeded up robust features, binary robust invariant scalable keypoints, and KAZE, the feature information of points (i.e., RGB values, pixel coordinates) was extracted as the feature vector of the welding pool surface. Based on the characteristics of the welding images, a mismatch-elimination algorithm was developed to increase the accuracy of the image-matching algorithms. The world coordinates of the matched feature points were calculated to reconstruct the 3D shape of the welding pool surface. The effectiveness and accuracy of the reconstruction of welding pool surfaces were verified by experimental results. This research advances binocular vision algorithms that can reconstruct the surface of welding pools accurately, towards intelligent welding control systems in the future.
- Published
- 2021
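Computing the world coordinates of matched feature points from two calibrated views, as in the entry above, amounts to triangulation. A minimal linear (DLT) triangulation sketch with synthetic cameras in the test (an illustrative stereo rig, not the authors' preset-coordinate-system calibration):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: intersect the two viewing rays of a matched
    feature point by stacking the constraints x ~ P X from both views and
    taking the SVD null vector of the resulting 4x4 system."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

def project_point(P, X):
    """Project a 3D point and dehomogenize to pixel coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]
```

Applied to every matched feature on the welding pool, this yields the 3D point set from which the pool surface is reconstructed.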
34. Development of 3D environmental laser scanner using pinhole projection
- Author
-
Lateef Abd Zaid Qudr
- Subjects
Laser scanning ,Computer science ,020209 energy ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,0211 other engineering and technologies ,Energy Engineering and Power Technology ,02 engineering and technology ,Industrial and Manufacturing Engineering ,three-dimensional laser scanners ,Management of Technology and Innovation ,021105 building & construction ,0202 electrical engineering, electronic engineering, information engineering ,Calibration ,T1-995 ,Industry ,Focal length ,Computer vision ,3D reconstruction ,Electrical and Electronic Engineering ,Projection (set theory) ,Technology (General) ,visualization ,Observational error ,business.industry ,Applied Mathematics ,Mechanical Engineering ,HD2321-4730.9 ,Computer Science Applications ,Control and Systems Engineering ,Pinhole (optics) ,Artificial intelligence ,camera calibration ,business ,pinhole projection ,Camera resectioning - Abstract
Capturing and reconstructing three-dimensional (3D) information about an object in its environment is a major challenge. In this work, we discuss 3D laser scanning techniques, which can obtain a high density of data points accurately and quickly. This work builds on previous developments in this area to propose a cost-effective system based on the pinhole projection concept and commercial hardware components, while taking the currently achievable accuracy into account. A laser line auto-scanning system was designed to perform close-range 3D reconstructions of home/office objects with high accuracy and resolution. The system changes the laser plane direction with a microcontroller to perform automatic scanning and obtain continuous laser strips for objects' 3D reconstruction. The system parameters were calibrated with Matlab's built-in camera calibration toolbox to find the camera focal length and optical center constraints. The pinhole projection equation was defined to optimize the prototype's rotating axis equation. The developed 3D environmental laser scanner with pinhole projection proved its effectiveness on close-range stationary objects with high resolution and accuracy, with a measurement error in the range (0.05–0.25) mm. The 3D point cloud processing of the Matlab computer vision toolbox was employed to show the 3D object reconstruction and to perform the camera calibration, which improves efficiency and greatly simplifies the calibration method. The calibration error is the main error source in the measurements, and the errors of the actual measurement are found to be influenced by several environmental parameters. The presented platform can be upgraded with lower-power, more compact components.
- Published
- 2021
- Full Text
- View/download PDF
35. A Novel Camera Calibration Pattern Robust to Incomplete Pattern Projection
- Author
-
Mingzhu Zhu, Junzhi Yu, and Zhang Gao
- Subjects
Computer science ,business.industry ,010401 analytical chemistry ,Feature extraction ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Process (computing) ,01 natural sciences ,0104 chemical sciences ,Visualization ,Identification (information) ,Digital image processing ,Calibration ,Computer vision ,Artificial intelligence ,Electrical and Electronic Engineering ,Projection (set theory) ,business ,Instrumentation ,Camera resectioning - Abstract
When calibrating multi-camera systems with existing camera calibration toolboxes, it is oftentimes required that calibration boards are fully captured by all cameras to establish the correspondence of X-junctions automatically. This becomes impractical when different cameras share limited common fields of view, resulting in incomplete samples of the calibration board. This article proposes a practical solution comprising a modified checkerboard pattern and image processing algorithms. Corresponding world and image points for calibration algorithms are provided by the proposed solution even when calibration boards are only partially captured. X-junctions are categorized into two types and recognized in groups as identification units. The proposed pattern is arranged to ensure the uniqueness of each identification unit; thus X-junctions can be positioned by the supporting algorithms and correspondences are found. No additional burden is introduced in sampling compared to traditional calibration methods. Experimental results of calibrating a multi-camera system verify that the use of partially captured images benefits calibration accuracy. Furthermore, the proposed method can substantially facilitate the calibration process.
- Published
- 2021
- Full Text
- View/download PDF
36. Color Interactive Contents System using Kinect Camera Calibration
- Author
-
Eunchong Ha et.al
- Subjects
Color calibration ,business.industry ,Computer science ,Color image ,General Mathematics ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Point cloud ,Education ,Computational Mathematics ,Pixel color ,Computational Theory and Mathematics ,Computer vision ,Color data ,Artificial intelligence ,business ,Media arts ,Interactive media ,ComputingMethodologies_COMPUTERGRAPHICS ,Camera resectioning - Abstract
Recently, media content that interacts in real time has been increasing. In this paper, we introduce a real-time color extraction content system that utilizes the Kinect camera used in the ‘COLOR’ media art installation. The Kinect camera detects and tracks the joints of visitors entering the exhibition space. The Kinect data are mapped through color calibration in a Unity environment to generate a point cloud video. The pixel color at the visitor's spine-shoulder joint coordinates is sampled from the point cloud image. The color data are rendered on screen as a color circle that moves along with the visitor. The color circle shrinks as the distance between the visitor and the Kinect increases. When visitors enter and their color circles overlap, the color of the overlapping region takes an intermediate value between the two circles. This work expresses the form of people's social movement through the color each person carries and the mixture of those colors. The technology used in this work differs from other media art in that it extracts the calibrated image colors separately, advancing interactive media art. Future work will improve the registration accuracy of the point cloud that aligns the color image and the depth image, and improve the color extraction accuracy for visitors.
- Published
- 2021
- Full Text
- View/download PDF
37. DEEP LEARNING FOR CODED TARGET DETECTION
- Author
-
Vladimir A. Knyaz, V. V. Kniaz, and L. Grodzitskiy
- Subjects
lcsh:Applied optics. Photonics ,business.industry ,Computer science ,lcsh:T ,Deep learning ,Process (computing) ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,lcsh:TA1501-1820 ,lcsh:Technology ,Object detection ,Range (mathematics) ,Photogrammetry ,Robustness (computer science) ,lcsh:TA1-2040 ,Computer vision ,Artificial intelligence ,business ,lcsh:Engineering (General). Civil engineering (General) ,Decoding methods ,Camera resectioning - Abstract
Coded targets are physical optical markers that can be easily identified in an image. Their detection is a critical step in the process of camera calibration. A wide range of coded targets has been developed to date, differing in their decoding algorithms. The main limitation of existing methods is low robustness to new backgrounds and illumination conditions. Modern deep-learning recognition algorithms demonstrate exciting progress in object-detection performance in low-light conditions and new environments. This paper focuses on the development of a new deep convolutional network for automatic detection and recognition of coded targets and sub-pixel estimation of their centers.
- Published
- 2021
38. Road Lane Detection Using Advanced Image Processing Techniques
- Author
-
Varun Goel and Prateek Sawhney
- Subjects
Computer science ,business.industry ,3D projection ,Image processing ,Computer vision ,Artificial intelligence ,Lane detection ,Image warping ,business ,Camera resectioning - Published
- 2021
- Full Text
- View/download PDF
39. The Impact of Photo Overlap, the Number of Control Points and the Method of Camera Calibration on the Accuracy of 3D Model Reconstruction
- Author
-
Antoni Rzonca and Marta Róg
- Subjects
Measure (data warehouse) ,Data processing ,Environmental Engineering ,business.product_category ,010504 meteorology & atmospheric sciences ,business.industry ,Computer science ,Geography, Planning and Development ,3D reconstruction ,0211 other engineering and technologies ,Total station ,02 engineering and technology ,01 natural sciences ,Photogrammetry ,Software ,Computer Science (miscellaneous) ,Computer vision ,Artificial intelligence ,Computers in Earth Sciences ,business ,021101 geological & geomatics engineering ,0105 earth and related environmental sciences ,Earth-Surface Processes ,Digital camera ,Camera resectioning - Abstract
This research attempted to determine the optimal photo overlap, number of control points, and method of camera calibration for photogrammetric 3D model reconstruction of an object of cultural-heritage value. Terrestrial images of the object were taken with a hand-held digital camera and processed in the ContextCapture software using the Structure-from-Motion (SfM) algorithm. A total station was used to measure ground control points (GCPs) and check points. Here, the research workflow, methodology, and analyses of different configurations of the aforementioned factors are described. We attempted to identify the parameter settings that yield a highly accurate model while reducing time consumption during both fieldwork and data processing. The manuscript discusses the results of the analyses, compares them with studies by other authors, and indicates potential directions for further study within this scope. Based on the authors' experience with this research, general conclusions and remarks are formulated concerning the planning of terrestrial photo acquisition for 3D model reconstruction.
- Published
- 2021
- Full Text
- View/download PDF
40. Method for automating fish-size measurement and camera calibration using a three-dimensional structure and an optical character recognition technique
- Author
-
Kazuyoshi Komeyama, Tatsuya Tanaka, and Naoya Furuta
- Subjects
business.industry ,Computer science ,Fish ,Computer vision ,Artificial intelligence ,Optical character recognition ,Aquatic Science ,Size measurement ,business ,computer.software_genre ,computer ,Camera resectioning
- 2021
- Full Text
- View/download PDF
41. A 3D Machine Vision-Enabled Intelligent Robot Architecture
- Author
-
Jianxin Zhao, Heyong Han, and Yanjun Zhang
- Subjects
0209 industrial biotechnology ,Article Subject ,Computer Networks and Communications ,Calibration (statistics) ,Computer science ,Machine vision ,Interface (computing) ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,TK5101-6720 ,02 engineering and technology ,01 natural sciences ,GeneralLiterature_MISCELLANEOUS ,020901 industrial engineering & automation ,Robustness (computer science) ,Preprocessor ,Computer vision ,ComputingMethodologies_COMPUTERGRAPHICS ,Monocular ,business.industry ,010401 analytical chemistry ,0104 chemical sciences ,Computer Science Applications ,Telecommunication ,Artificial intelligence ,business ,Binocular vision ,Camera resectioning - Abstract
In this paper, the principle of camera imaging is studied, and the transformation model of camera calibration is analyzed. Based on Zhang Zhengyou's camera calibration method, an automatic calibration method for monocular and binocular cameras is developed on a multichannel vision platform, and automatic calibration of camera parameters through the human-machine interface of the host computer is realized. Based on the principle of binocular vision, a feasible three-dimensional positioning method for binocular target points is proposed and evaluated, providing binocular three-dimensional positioning of targets in simple environments. On the designed multichannel vision platform, experiments on image acquisition, preprocessing, image display, monocular and binocular automatic calibration, and binocular three-dimensional positioning are conducted. Moreover, the positioning error is analyzed, and the effectiveness of the binocular vision module is verified, confirming the robustness of our approach.
- Published
- 2021
- Full Text
- View/download PDF
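The binocular three-dimensional positioning described above rests on triangulating a target point seen by two calibrated cameras. As a hedged sketch of that core step (the intrinsics, baseline, and test point below are invented for illustration, not taken from the paper), linear triangulation with numpy:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear triangulation: recover a 3-D point from its pixel
    coordinates x1, x2 in two views with projection matrices P1, P2."""
    A = np.array([x1[0] * P1[2] - P1[0],
                  x1[1] * P1[2] - P1[1],
                  x2[0] * P2[2] - P2[0],
                  x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)       # null vector = homogeneous 3-D point
    X = Vt[-1]
    return X[:3] / X[3]

# Illustrative stereo rig: identical intrinsics K, 20 cm horizontal baseline.
K = np.array([[700., 0., 320.], [0., 700., 240.], [0., 0., 1.]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.2], [0.], [0.]])])

X_true = np.array([0.1, -0.05, 2.0])  # target point, in meters
h1 = P1 @ np.append(X_true, 1)
h2 = P2 @ np.append(X_true, 1)
x1, x2 = h1[:2] / h1[2], h2[:2] / h2[2]
X_est = triangulate(P1, P2, x1, x2)
```

With real (noisy) detections the same linear system is solved in a least-squares sense, and the result is often refined by minimizing reprojection error.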
42. Robust control point estimation with an out-of-focus camera calibration pattern
- Author
-
Hyunki Lee, Ho-Gun Ha, Jaesung Hong, and Hyunseok Choi
- Subjects
Zoom lens ,business.industry ,Calibration (statistics) ,Computer science ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,02 engineering and technology ,01 natural sciences ,Artificial Intelligence ,Computer Science::Computer Vision and Pattern Recognition ,0103 physical sciences ,Signal Processing ,0202 electrical engineering, electronic engineering, information engineering ,Focal length ,020201 artificial intelligence & image processing ,Computer vision ,Computer Vision and Pattern Recognition ,Artificial intelligence ,Zoom ,Robust control ,010306 general physics ,Focus (optics) ,business ,Software ,ComputingMethodologies_COMPUTERGRAPHICS ,Camera resectioning - Abstract
The calibration of a zoom-lens camera depends on the precision with which control points are localized. At a long focal length, the narrow depth of field (DOF) causes defocus blurring and an inevitable loss of accuracy in control-point localization. In particular, camera calibration requires multiple control points defined on calibration patterns acquired at various camera angles, but clear pattern images are difficult to obtain owing to the narrow DOF. We propose a robust and intuitive method to accurately estimate control points in blurred images. To obtain control points that are less affected by blurring, we dynamically varied the circle size in the patterns and identified the local maximum point using the intensity gradient of accumulated concentric circles. This approach is robust to blurring and can be employed at all zoom levels. In our experiments, the error of the control-point estimation was measured while varying the angles of the calibration patterns and the degree of blurring. Compared with the conventional checker-pattern method, the proposed method estimated the control points more accurately, and the related camera parameters converged even for severely defocused images.
- Published
- 2021
- Full Text
- View/download PDF
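The concentric-circle accumulation above is the authors' contribution; as a much simpler illustration of why sub-pixel localization can survive defocus (a sketch under the assumption of a symmetric blur kernel, not the paper's algorithm), the intensity-weighted centroid of a blurred circular dot stays at the true center. All image dimensions and parameters below are invented:

```python
import numpy as np

def weighted_centroid(img):
    """Intensity-weighted centroid of an image patch, at sub-pixel precision."""
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    m = img.sum()
    return (xs * img).sum() / m, (ys * img).sum() / m

# Synthetic defocused control point: a Gaussian spot stands in for a
# circular calibration dot blurred by a narrow depth of field.
cx, cy, sigma = 24.3, 17.8, 5.0            # sub-pixel true center, blur width
ys, xs = np.mgrid[0:40, 0:50]
spot = np.exp(-((xs - cx)**2 + (ys - cy)**2) / (2 * sigma**2))

ex, ey = weighted_centroid(spot)           # recovered center
```

Because symmetric blur redistributes intensity evenly around the dot, the centroid is blur-invariant; the paper's gradient-based scheme additionally handles the asymmetries and background clutter that break this simple estimator.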
43. Automatic far‐field camera calibration for construction scene analysis
- Author
-
Mehrdad Arashpour, Tuan Ngo, Alireza Bab-Hadiashar, Heng Li, and Amin Assadzadeh
- Subjects
050210 logistics & transportation ,Scene analysis ,Computer science ,business.industry ,05 social sciences ,020101 civil engineering ,Near and far field ,02 engineering and technology ,Tracking (particle physics) ,Computer Graphics and Computer-Aided Design ,0201 civil engineering ,Computer Science Applications ,Computational Theory and Mathematics ,0502 economics and business ,Computer vision ,Artificial intelligence ,business ,Civil infrastructure ,Civil and Structural Engineering ,Safety monitoring ,Camera resectioning - Abstract
The use of cameras for safety monitoring, progress tracking, and site security has grown significantly on construction and civil infrastructure sites over the past decade. Localization of ...
- Published
- 2021
- Full Text
- View/download PDF
44. Applications, databases and open computer vision research from drone videos and images: a survey
- Author
-
Younes Akbari, Somaya Al-Maadeed, Noor Almaadeed, and Omar Elharrouss
- Subjects
Linguistics and Language ,Database ,Computer science ,business.industry ,Image matching ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Triangulation (computer vision) ,02 engineering and technology ,Visual localization ,computer.software_genre ,Facial recognition system ,Language and Linguistics ,Drone ,Artificial Intelligence ,020204 information systems ,Obstacle ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Computer vision ,Artificial intelligence ,business ,computer ,Camera resectioning - Abstract
Analyzing videos and images captured by unmanned aerial vehicles, or aerial drones, is an emerging application attracting significant attention from researchers in various areas of computer vision. Currently, the major challenge is developing autonomous operations that complete missions and replace human operators. In this paper, we review these applications by categorizing them into three groups according to how the videos and images captured by drones are analyzed. The first group concerns remote sensing, with challenges such as camera calibration, image matching, and aerial triangulation. The second group concerns drone-autonomous navigation, in which computer vision methods address challenges such as flight control, visual localization and mapping, and target tracking and obstacle detection. The third group is dedicated to using images and videos captured by drones in various applications, such as surveillance, agriculture and forestry, animal detection, disaster detection, and face recognition. Because most computer vision methods in these three categories are designed for real-world conditions, which are difficult to reproduce without actual drone flights, we pay particular attention to papers that provide databases for these purposes. For the first two groups, some recent survey papers exist; however, those surveys have not aimed to catalogue databases. This paper presents a complete review of databases in the first two groups, along with the works that used those databases to apply their methods. Vision-based intelligent applications and their databases are explored in the third group, and we discuss open problems and avenues for future research.
- Published
- 2021
- Full Text
- View/download PDF
45. Multi‐camera traffic scene mosaic based on camera calibration
- Author
-
Wei Wang, Huansheng Song, Dai Zhe, Li Junyan, and Feifan Wu
- Subjects
Computer science ,business.industry ,Computer applications to medicine. Medical informatics ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,R858-859.7 ,Mosaic (geodemography) ,Multi camera ,QA76.75-76.765 ,Traffic scene ,Computer vision ,Computer software ,Computer Vision and Pattern Recognition ,Artificial intelligence ,business ,Software ,Camera resectioning - Abstract
Recently, vision-based traffic applications in single traffic-monitoring scenes have been widely studied and developed; however, cross-regional research is still in its infancy. To help address cross-regional traffic-surveillance applications, this paper proposes a more reliable and accurate method for mosaicking road scenes under multi-camera surveillance. The mosaicked road panorama contains physical information, which can be used to make cross-regional measurements; it also lays the foundation for cross-regional vehicle spatial location, vehicle speed estimation, and traffic-incident detection. First, the mapping between the three-dimensional sub-world coordinates and their corresponding two-dimensional image coordinates is established by camera calibration. Second, the projection transformation between two cameras is established from the two sub-world coordinate systems and their common information. Finally, the proposed inverse-projection idea and translation-vector relationship are used to complete the mosaic of two traffic-monitoring road scenes. The experimental results show that the camera-calibration accuracy exceeds 97% in a single scene, and the measurement accuracy of the mosaicked block exceeds 95%. These results show that the proposed method has higher accuracy, which is of great value in related theoretical research and practical applications.
- Published
- 2021
- Full Text
- View/download PDF
46. Identification of geometric parameters of a parallel robot by using a camera calibration technique
- Author
-
J. Jesús Cervantes-Sánchez, Hector A. Moreno-Avalos, Mauricio Arredondo-Soto, Mario A. García-Murillo, and Felipe J. Torres
- Subjects
0209 industrial biotechnology ,Computer science ,business.industry ,Mechanical Engineering ,System of measurement ,Parallel manipulator ,02 engineering and technology ,Kinematics ,Identification (information) ,020303 mechanical engineering & transports ,020901 industrial engineering & automation ,Planar ,0203 mechanical engineering ,Mechanics of Materials ,Electronic instrumentation ,Computer vision ,Artificial intelligence ,business ,Edge space ,Camera resectioning - Abstract
This work reports a novel method to estimate the geometrical parameters of a 2-(3-RRPS) parallel robot intended for manufacturing tasks. The method uses camera calibration techniques, and it is based on the concept of vertex space. The advantage of this technique is that the system does not require complex electronic instrumentation, and only uses a CCD camera as a main sensor and planar patterns, which makes it portable, accurate and low cost. To ensure the quality of the measurements, a methodology for characterization of the measurement system is included. The applicability and the advantages of using the proposed method are shown by means of the estimation of the geometrical dimensions of a spatial parallel manipulator with a relatively complex kinematic architecture. Experiments are conducted and show a significant improvement in manipulator accuracy when the parameters estimated with this technique are used.
- Published
- 2021
- Full Text
- View/download PDF
47. Mobile Robot Control Based on 2D Visual Servoing: A New Approach Combining Neural Network With Variable Structure and Flatness Theory
- Author
-
Imed Miraoui, Osama I. El-Hamrawy, Khaled Kaaniche, Nasr Rashid, and Hassen Mekki
- Subjects
visual servoing ,Robot kinematics ,General Computer Science ,Artificial neural network ,Computer science ,neural network with variable structure ,Flatness (systems theory) ,020208 electrical & electronic engineering ,General Engineering ,Mobile robot ,02 engineering and technology ,Flatness control ,Visual servoing ,TK1-9971 ,Reduction (complexity) ,0202 electrical engineering, electronic engineering, information engineering ,Robot ,020201 artificial intelligence & image processing ,General Materials Science ,Electrical engineering. Electronics. Nuclear engineering ,Algorithm ,robot control ,Camera resectioning - Abstract
This paper focuses on the 2D visual servo control of a mobile robot using a neural network (NN) with variable structure. The interaction matrix relating camera movement to changes in visual features requires both an estimation phase to determine its parameters and a camera-calibration phase. In mobile-robotics applications, the robot model commonly contains uncertainties generated by the sliding phenomenon. To avoid this problem, we suggest online identification using an NN. An RBF NN is used to estimate the block formed by the interaction matrix and the inverse robot model. Since the number of variables to be estimated is large, this can lead to an excessive number of RBFs. We propose to use a single scene point, which is sufficient to solve the problem. This reduction is possible thanks to flatness theory, which allows the number of NN inputs to be reduced from the 8 inputs (4 image points) generally used in the literature to only 2 (one image point). To further reduce the complexity of the proposed algorithm, the number of neurons in each layer is optimized at each iteration; a neural network with variable structure is used to reach this objective. The very encouraging results obtained validate the proposed approach.
- Published
- 2021
- Full Text
- View/download PDF
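The paper's RBF NN approximates the interaction-matrix/inverse-robot block online with a variable structure. As a heavily simplified sketch of the underlying idea only (fixed structure, offline least-squares output weights, and an invented toy target mapping, none of which match the paper's online variable-structure scheme), an RBF network fitting a 2-input, 2-output mapping:

```python
import numpy as np

def rbf_features(X, centers, width):
    """Gaussian RBF activations for inputs X (n x d) against fixed centers (m x d)."""
    d2 = ((X[:, None, :] - centers[None, :, :])**2).sum(-1)
    return np.exp(-d2 / (2 * width**2))

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))      # 2 inputs: one image point (u, v)
# Invented smooth target mapping standing in for the block to be identified.
Y = np.stack([np.sin(X[:, 0]), X[:, 0] * X[:, 1]], axis=1)

centers = rng.uniform(-1, 1, size=(25, 2)) # fixed RBF centers (assumption)
Phi = rbf_features(X, centers, width=0.5)
W, *_ = np.linalg.lstsq(Phi, Y, rcond=None)  # output weights by least squares

pred = Phi @ W                             # network output on the training set
```

The paper's variable-structure network instead grows or prunes neurons at each iteration and adapts the weights online as the robot moves.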
48. Deformable Model-Based Vehicle Tracking and Recognition Using 3-D Constrained Multiple-Kernels and Kalman Filter
- Author
-
Tao Liu and Yong Liu
- Subjects
Vehicle tracking system ,constrained multiple kernels ,General Computer Science ,Computer science ,business.industry ,vehicle tracking and recognition ,Feature extraction ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,General Engineering ,Kalman filter ,Solid modeling ,Display resolution ,TK1-9971 ,3-D vehicle modeling ,Kernel (linear algebra) ,General Materials Science ,Computer vision ,Electrical engineering. Electronics. Nuclear engineering ,Artificial intelligence ,Electrical and Electronic Engineering ,camera calibration ,business ,Intelligent transportation system ,Camera resectioning
Video-based vehicle tracking and recognition is an important application in intelligent transportation systems. High similarity among vehicle types, frequent occlusion, and low video resolution in traffic surveillance are the major problems in this research area. In this paper, we propose a vehicle-tracking system that uses 3-D constrained multiple kernels, assisted by Kalman filtering, to continuously update the locations of moving vehicles. To robustly and efficiently track vehicles that are partially or even fully occluded, evolutionary optimization is applied to camera calibration for systematically building a 3-D vehicle model, from which features such as vehicle type, color, and license plate can be extracted. A self-similarity descriptor is then introduced for vehicle re-identification. The proposed system is evaluated on the NVIDIA AI City datasets and one self-recorded high-resolution video. The experimental results show favorable performance: the system not only tracks vehicles successfully under occlusion but also maintains knowledge of the 3-D vehicle geometry.
- Published
- 2021
- Full Text
- View/download PDF
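The Kalman filtering that keeps the track alive through occlusion can be sketched with a constant-velocity model: predict every frame, and update only when a detection exists. A minimal numpy sketch (the motion model, noise covariances, and trajectory below are assumed for illustration, not taken from the paper):

```python
import numpy as np

# Constant-velocity model for a vehicle's image position: state [x, y, vx, vy].
dt = 1.0
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
              [0, 0, 1, 0], [0, 0, 0, 1]], float)
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)  # we only measure position
Q = 0.01 * np.eye(4)                               # process noise (assumed)
R = 1.0 * np.eye(2)                                # measurement noise (assumed)

def kf_step(x, P, z=None):
    """One predict(+update) cycle; pass z=None during occlusion (predict only)."""
    x, P = F @ x, F @ P @ F.T + Q                  # predict
    if z is not None:                              # update when a detection exists
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(4) - K @ H) @ P
    return x, P

def truth(t):
    """Ground-truth position of a vehicle moving at constant velocity (2, 1)."""
    return np.array([2.0 * t, 1.0 * t])

# Track the vehicle; detections are missing for a 3-frame occlusion.
x, P = np.zeros(4), 10.0 * np.eye(4)
for t in range(1, 20):
    z = None if 8 <= t <= 10 else truth(t)
    x, P = kf_step(x, P, z)
```

During the occluded frames the covariance P grows and the state coasts on the motion model, which is what lets the tracker re-associate the vehicle when detections resume.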
49. [Paper] Quality Improvement for Real-time Free Viewpoint Video Using View-dependent Shape Refinement
- Author
-
Masaru Sugano, Keisuke Nonaka, Tatsuya Kobayashi, Ryosuke Watanabe, Kato Haruhisa, and Tomoaki Konno
- Subjects
business.industry ,Computer science ,Signal Processing ,3D reconstruction ,Media Technology ,View dependent ,Image processing ,Computer vision ,Paper quality ,Artificial intelligence ,business ,Computer Graphics and Computer-Aided Design ,Camera resectioning - Published
- 2021
- Full Text
- View/download PDF
50. [Paper] Sports Camera Calibration using Flexible Intersection Selection and Refinement
- Author
-
Ryosuke Watanabe, Tomoaki Konno, Sei Naito, Keisuke Nonaka, and Hiroki Tsurusaki
- Subjects
Intersection ,Computer science ,business.industry ,Signal Processing ,Media Technology ,Computer vision ,Artificial intelligence ,business ,Computer Graphics and Computer-Aided Design ,Selection (genetic algorithm) ,Camera resectioning - Published
- 2021
- Full Text
- View/download PDF