49 results for "Christoph Strecha"
Search Results
2. Dynamic and scalable large scale image reconstruction.
- Author
- Christoph Strecha, Timo Pylvänäinen, and Pascal Fua
- Published
- 2010
- Full Text
- View/download PDF
3. BRIEF: Binary Robust Independent Elementary Features.
- Author
- Michael Calonder, Vincent Lepetit, Christoph Strecha, and Pascal Fua
- Published
- 2010
- Full Text
- View/download PDF
4. Classification of Aerial Photogrammetric 3D Point Clouds.
- Author
- Carlos Becker, Nicolai Häni, Elena Rosinskaya, Emmanuel d'Angelo, and Christoph Strecha
- Published
- 2017
5. Training for Task Specific Keypoint Detection.
- Author
- Christoph Strecha, Albrecht J. Lindner, Karim Ali, and Pascal Fua
- Published
- 2009
- Full Text
- View/download PDF
6. Making Background Subtraction Robust to Sudden Illumination Changes.
- Author
- Julien Pilet, Christoph Strecha, and Pascal Fua
- Published
- 2008
- Full Text
- View/download PDF
7. A Mean Field EM-algorithm for Coherent Occlusion Handling in MAP-Estimation Problems.
- Author
- Rik Fransens, Christoph Strecha, and Luc Van Gool
- Published
- 2006
- Full Text
- View/download PDF
8. Combined Depth and Outlier Estimation in Multi-View Stereo.
- Author
- Christoph Strecha, Rik Fransens, and Luc Van Gool
- Published
- 2006
- Full Text
- View/download PDF
9. Parametric Stereo for Multi-pose Face Recognition and 3D-Face Modeling.
- Author
- Rik Fransens, Christoph Strecha, and Luc Van Gool
- Published
- 2005
- Full Text
- View/download PDF
10. Wide-Baseline Stereo from Multiple Views: A Probabilistic Account.
- Author
- Christoph Strecha, Rik Fransens, and Luc Van Gool
- Published
- 2004
- Full Text
- View/download PDF
11. A Probabilistic Formulation of Image Registration.
- Author
- Christoph Strecha, Rik Fransens, and Luc Van Gool
- Published
- 2004
- Full Text
- View/download PDF
12. A Probabilistic Approach to Large Displacement Optical Flow and Occlusion Detection.
- Author
- Christoph Strecha, Rik Fransens, and Luc Van Gool
- Published
- 2004
- Full Text
- View/download PDF
13. Dense Matching of Multiple Wide-baseline Views.
- Author
- Christoph Strecha, Tinne Tuytelaars, and Luc Van Gool
- Published
- 2003
- Full Text
- View/download PDF
14. Reconstruction of Subjective Surfaces from Occlusion Cues.
- Author
- Naoki Kogo, Christoph Strecha, Rik Fransens, Geert Caenen, Johan Wagemans, and Luc Van Gool
- Published
- 2002
- Full Text
- View/download PDF
15. Motion-Stereo Integration for Depth Estimation.
- Author
- Christoph Strecha and Luc Van Gool
- Published
- 2002
- Full Text
- View/download PDF
16. PDE-based Multi-view Depth Estimation.
- Author
- Christoph Strecha and Luc Van Gool
- Published
- 2002
- Full Text
- View/download PDF
17. Efficient large-scale multi-view stereo for ultra high-resolution image sets.
- Author
- Engin Tola, Christoph Strecha, and Pascal Fua
- Published
- 2012
- Full Text
- View/download PDF
18. LDAHash: Improved Matching with Smaller Descriptors.
- Author
- Christoph Strecha, Alexander M. Bronstein, Michael M. Bronstein, and Pascal Fua
- Published
- 2012
- Full Text
- View/download PDF
19. BRIEF: Computing a Local Binary Descriptor Very Fast.
- Author
- Michael Calonder, Vincent Lepetit, Mustafa Özuysal, Tomasz Trzcinski, Christoph Strecha, and Pascal Fua
- Published
- 2012
- Full Text
- View/download PDF
20. Optical flow based super-resolution: A probabilistic approach.
- Author
- Rik Fransens, Christoph Strecha, and Luc Van Gool
- Published
- 2007
- Full Text
- View/download PDF
21. On benchmarking camera calibration and multi-view stereo for high resolution imagery.
- Author
- Christoph Strecha, Wolfgang von Hansen, Luc Van Gool, Pascal Fua, and Ulrich Thoennessen
- Published
- 2008
- Full Text
- View/download PDF
22. A new method to determine multi-angular reflectance factor from lightweight multispectral cameras with sky sensor in a target-less workflow applicable to UAV
- Author
- Manuel Cubero-Castan, Christoph Strecha, Klaus Schneider-Zapp, and Dai Shi
- Subjects
- reflectance, multi-angular, hemispheric-directional reflectance factor (HDRF), multispectral camera, downwelling irradiance sensor, calibration, Spectralon, radiometry, photogrammetry, remote sensing, crop surface models, overcast sky
- Abstract
A new physically based method to estimate the hemispheric-directional reflectance factor (HDRF) from lightweight multispectral cameras that have a downwelling irradiance sensor is presented. It combines radiometry with photogrammetric computer vision to derive geometrically and radiometrically accurate data purely from the images, without requiring reflectance targets or any other additional information apart from the imagery. The sky sensor orientation is initially computed using photogrammetric computer vision and revised with a non-linear regression comprising radiometric and photogrammetry-derived information. It works for both clear-sky and overcast conditions. A ground-based test acquisition of a Spectralon target, observed from different viewing directions and with different sun positions using a typical multispectral sensor configuration for clear sky and overcast, showed that both the overall value and the directionality of the reflectance factor as reported in the literature were well retrieved. An RMSE of 3% for clear sky and of up to 5% for overcast sky was observed.
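The radiometric relation underlying such a target-less workflow can be illustrated with a minimal sketch. This is not the paper's method; the function name and units are illustrative. For a Lambertian surface, the reflectance factor is R = π·L/E, where L is the measured radiance and E the downwelling irradiance reported by the sky sensor.

```python
import numpy as np

def reflectance_factor(radiance, irradiance):
    """Reflectance factor of a Lambertian surface: R = pi * L / E,
    where L is the at-sensor radiance (W m^-2 sr^-1) and E is the
    downwelling irradiance (W m^-2) from the sky sensor."""
    return np.pi * np.asarray(radiance, dtype=float) / irradiance

# A perfectly white Lambertian panel reflects all irradiance:
# its radiance is L = E / pi, so the reflectance factor is 1.
E = 500.0            # downwelling irradiance (illustrative value)
L = E / np.pi        # radiance of an ideal white panel
print(round(float(reflectance_factor(L, E)), 6))  # 1.0
```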
- Published
- 2019
- Full Text
- View/download PDF
23. Robust Estimation in the Presence of Spatially Coherent Outliers.
- Author
- Rik Fransens, Christoph Strecha, and Luc Van Gool
- Published
- 2006
- Full Text
- View/download PDF
24. A Probabilistic Approach to Optical Flow based Super-Resolution.
- Author
- Rik Fransens, Christoph Strecha, and Luc Van Gool
- Published
- 2004
- Full Text
- View/download PDF
25. PHOTOGRAMMETRIC ACCURACY AND MODELING OF ROLLING SHUTTER CAMERAS
- Author
- Simon Rutishauser, Jonas Vautherin, Alexis Glass, Christoph Strecha, Hon Fai Choi, Klaus Schneider-Zapp, and Venera Chovancova
- Subjects
- rolling shutter, shutter, 3D reconstruction, motion estimation, flight plan, photogrammetry, robustness, computer vision, software
- Abstract
Unmanned aerial vehicles (UAVs) are becoming increasingly popular in professional mapping for stockpile analysis, construction site monitoring, and many other applications. Due to their robustness and competitive pricing, consumer UAVs are used more and more for these applications, but they are usually equipped with rolling shutter cameras. This is a significant obstacle when it comes to extracting high-accuracy measurements using available photogrammetry software packages. In this paper, we evaluate the impact of the rolling shutter cameras of typical consumer UAVs on the accuracy of a 3D reconstruction. To this end, we use a beta version of the Pix4Dmapper 2.1 software to compare traditional (non-rolling-shutter) camera models against a newly implemented rolling shutter model with respect to both the accuracy of geo-referenced validation points and the quality of the motion estimation. Multiple datasets have been acquired using popular quadrocopters (DJI Phantom 2 Vision+, DJI Inspire 1 and 3DR Solo) following a grid flight plan. For comparison, we acquired a dataset using a professional mapping drone (senseFly eBee) equipped with a global shutter camera. The bundle block adjustment of each dataset shows a significant accuracy improvement on validation ground control points when applying the new rolling shutter camera model for flights at higher speed (8 m/s). Competitive accuracies can be obtained by using the rolling shutter model, although global shutter cameras are still superior. Furthermore, we are able to show that the speed of the drone (and its direction) can be estimated solely from the rolling shutter effect of the camera.
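The geometry behind the rolling shutter effect can be sketched as follows. This is a generic illustration, not the Pix4Dmapper model; the function names and the constant-speed, linear-readout assumptions are ours.

```python
import numpy as np

def row_capture_times(n_rows, t_frame_start, readout_time):
    """Capture time of each image row for a rolling-shutter sensor
    that reads rows top to bottom over `readout_time` seconds."""
    return t_frame_start + readout_time * np.arange(n_rows) / (n_rows - 1)

def row_displacement(n_rows, readout_time, speed):
    """Per-row displacement (in metres) of a camera moving at a
    constant `speed` during readout: rows lower in the image are
    captured later, hence shifted further along the flight path."""
    t = row_capture_times(n_rows, 0.0, readout_time)
    return speed * t

# A drone at 8 m/s with a 30 ms readout: the last row is exposed
# 0.24 m further along the flight path than the first.
d = row_displacement(480, 0.030, 8.0)
print(round(float(d[-1] - d[0]), 3))  # 0.24
```

This also shows why the effect encodes the drone's velocity: dividing the observed row shift by the readout time recovers the speed.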
- Published
- 2016
- Full Text
- View/download PDF
26. Assessment Of The Radiometric Accuracy In A Target Less Work Flow Using Pix4D Software
- Author
- M. Cubero-Castan, Klaus Schneider-Zapp, M. Bellomo, Christoph Strecha, D. Shi, and Martin Rehak
- Subjects
- multispectral image, irradiance, hyperspectral imaging, radiometric calibration, sky, software, remote sensing
- Abstract
To compute reflectance from images taken by multispectral sensors aboard UAVs, most users perform radiometric calibration using a target with known reflectance. This workflow is error-prone and not practical for large data acquisitions. With recent advances in multispectral cameras, sensors which measure the sky down-welling irradiance have become available. This enables radiometric calibration without using a target. In this paper, we assess the radiometric accuracy of target-less acquisition using a Sequoia+ camera [1] for both at-ground and in-flight measurements. Most of the measured control points exhibit a high correlation of 0.98 in the computed reflectance factor with respect to the expected values. [1] https://www.parrot.com/business-solutions-us/parrotprofessional/parrot-sequoia
- Published
- 2018
- Full Text
- View/download PDF
27. THE ACCURACY OF AUTOMATIC PHOTOGRAMMETRIC TECHNIQUES ON ULTRA-LIGHT UAV IMAGERY
- Author
- François Gervaix, Jean-Christophe Zufferey, Antoine Beyeler, Dario Floreano, Christoph Strecha, Olivier Küng, and Pascal Fua
- Subjects
- ultra-light UAV, mini-drone, aerial robotics, automated georeferencing, aerial triangulation, orthomosaic, DEM, geotagging, differential GPS, inertial measurement unit, image processing, photogrammetry, computer vision, NCCR-MICS, NCCR-MICS/EMSP
- Abstract
This paper presents an affordable, fully automated and accurate mapping solution based on ultra-light UAV imagery. Several datasets are analysed and their accuracy is estimated. We show that the accuracy depends strongly on the ground resolution (flying height) of the input imagery. When chosen appropriately, this mapping solution can compete with traditional mapping solutions that capture fewer high-resolution images from airplanes and that rely on highly accurate orientation and positioning sensors on board. Due to the careful integration of recent computer vision techniques, the post-processing is robust and fully automatic and can deal with the inaccurate position and orientation information that is typically problematic for traditional techniques. Fully autonomous, ultra-light Unmanned Aerial Vehicles (UAVs) have recently become commercially available at very reasonable cost for civil applications. The advantage linked to their small mass (typically around 500 grams) is that they do not represent a real threat to third parties in case of malfunction. In addition, they are very easy and quick to deploy and retrieve. The drawback of these autonomous platforms lies in the relatively low accuracy of their orientation estimates. In this paper we show, however, that such ultra-light UAVs can take reasonably good images with a large amount of overlap while covering areas on the order of a few square kilometres per flight. Since their miniature on-board autopilots cannot deliver extremely precise positioning and orientation of the recorded images, post-processing is key to the generation of geo-referenced orthomosaics and digital elevation models (DEMs). In this paper we evaluate an automatic image processing pipeline with respect to its accuracy on various datasets.
Our study shows that ultra-light UAV imagery provides a convenient and affordable solution for measuring geographic information with a similar accuracy to larger airborne systems equipped with high-end imaging sensors, IMU and differential GPS devices. In the frame of this paper, we present results from a flight campaign carried out with the swinglet CAM, a 500-gram autonomous flying wing initially developed at EPFL-LIS and now produced by senseFly. The swinglet CAM records 12 MP images and can cover areas of up to 10 square kilometres. These images can easily be geotagged after the flight using the senseFly PostFlight Suite, which processes the flight trajectory to find where the images were taken. The images and their geotags form the input to the processing developed at EPFL-CVLab. In this paper, we compare two variants
- Published
- 2018
28. QUALITY ASSESSMENT OF 3D RECONSTRUCTION USING FISHEYE AND PERSPECTIVE SENSORS
- Author
- V. Chovancova, L. Glassey, M. Krull, R. Zoller, B. Brot, Klaus Schneider-Zapp, Simon Rutishauser, and Christoph Strecha
- Subjects
- fisheye lens, perspective (graphical), 3D reconstruction, point cloud, 3D modeling, laser scanning, usability, photogrammetry, computer vision
- Abstract
Recent mathematical advances, growing alongside the use of unmanned aerial vehicles, have not only overcome the restriction of roll and pitch angles during flight but have also enabled us to apply non-metric cameras in photogrammetric methods, providing more flexibility for sensor selection. Fisheye cameras, for example, advantageously provide images with wide coverage; however, these images are extremely distorted and their non-uniform resolution makes them more difficult to use for mapping or terrestrial 3D modelling. In this paper, we compare the usability of different camera-lens combinations, using the complete workflow implemented in Pix4Dmapper to achieve the final terrestrial reconstruction of a well-known historical site in Switzerland: the Chillon Castle. We assess the accuracy of the outcome acquired by consumer cameras with perspective and fisheye lenses, comparing the results to a laser scanner point cloud.
- Published
- 2015
29. Classification of Aerial Photogrammetric 3D Point Clouds
- Author
- Elena Rosinskaya, Nicolai Häni, Emmanuel d'Angelo, Carlos Becker, and Christoph Strecha
- Subjects
- point cloud, classification methods, lidar, photogrammetry, environmental modelling, remote sensing, computer vision
- Abstract
We present a powerful method to extract per-point semantic class labels from aerial photogrammetry data. Labeling this kind of data is important for tasks such as environmental modelling, object classification and scene understanding. Unlike previous point cloud classification methods that rely exclusively on geometric features, we show that incorporating color information yields a significant increase in accuracy in detecting semantic classes. We test our classification method on three real-world photogrammetry datasets that were generated with Pix4Dmapper Pro, and with varying point densities. We show that off-the-shelf machine learning techniques coupled with our new features allow us to train highly accurate classifiers that generalize well to unseen data, processing point clouds containing 10 million points in less than 3 minutes on a desktop computer. (Comment: ISPRS 2017)
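The claim that color complements geometry can be illustrated with a toy sketch. The data are synthetic and the nearest-centroid rule is a trivial stand-in for the paper's off-the-shelf classifiers; all feature names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic point cloud: bare ground and low vegetation share the
# same height distribution, so geometry alone cannot separate them;
# a greenness channel (g - r) can.
n = 200
ground = np.c_[rng.normal(0.0, 0.05, n),    # height above terrain (m)
               rng.normal(0.15, 0.03, n)]   # greenness
grass  = np.c_[rng.normal(0.0, 0.05, n),
               rng.normal(0.45, 0.03, n)]
X = np.vstack([ground, grass])
y = np.r_[np.zeros(n), np.ones(n)]

def nearest_centroid_accuracy(X, y):
    """Assign each point to the nearer class centroid and report
    training accuracy (a deliberately simple classifier)."""
    c0, c1 = X[y == 0].mean(0), X[y == 1].mean(0)
    pred = (np.linalg.norm(X - c1, axis=1)
            < np.linalg.norm(X - c0, axis=1)).astype(float)
    return (pred == y).mean()

acc_geom = nearest_centroid_accuracy(X[:, :1], y)  # height only
acc_both = nearest_centroid_accuracy(X, y)         # height + colour
print(f"height only: {acc_geom:.2f}  height+colour: {acc_both:.2f}")
```

With geometry alone the accuracy stays near chance, while adding the color feature separates the two classes almost perfectly.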
- Published
- 2017
- Full Text
- View/download PDF
30. PHOTOGRAMMETRIC PERFORMANCE OF AN ULTRA LIGHT WEIGHT SWINGLET 'UAV'
- Author
- Julien Vallet, Flory Panissod, M. Tracol, and Christoph Strecha
- Subjects
- orthophoto, orientation (computer vision), camera resectioning, assisted GPS, laser scanning, photogrammetry, software, computer vision
- Abstract
Low-cost mapping using UAV technology is becoming a trendy topic. Many systems exist in which a simple camera can be deployed to take images, generally georeferenced with a GPS chip and MEMS attitude sensors. The step from using those images as simple pictures to deriving geo-referenced photogrammetric products, such as digital terrain models (DTMs) or orthophotos, is not so big. New developments in the field of image correlation allow images to be matched rapidly and accurately, a relative orientation of an image block to be built, a DTM to be extracted, and an orthoimage to be produced through a web server. The following paper focuses on the photogrammetric performance of an ultra-light UAV equipped with a compact 12 Mpix camera, combined with online data processing provided by Pix4D. First, the image orientation step is studied together with camera calibration; the extracted DTM is then compared with results from conventional photogrammetric software, with a new-generation pixel correlation technique, and with reference data from high-density laser scanning. The orthoimage is assessed in terms of visual quality and geometric accuracy.
- Published
- 2012
- Full Text
- View/download PDF
31. Efficient large-scale multi-view stereo for ultra high-resolution image sets
- Author
- Pascal Fua, Christoph Strecha, and Engin Tola
- Subjects
- multi-view stereo matching, computational complexity, 3D reconstruction, point cloud, outlier, pattern recognition, computer vision, software
- Abstract
We present a new approach for large-scale multi-view stereo matching, which is designed to operate on ultra high-resolution image sets and efficiently compute dense 3D point clouds. We show that, using a robust descriptor for matching purposes and high-resolution images, we can skip the computationally expensive steps that other algorithms require. As a result, our method has low memory requirements and low computational complexity while producing 3D point clouds containing virtually no outliers. This makes it exceedingly suitable for large-scale reconstruction. The core of our algorithm is the dense matching of image pairs using DAISY descriptors, implemented so as to eliminate redundancies and optimize memory access. We use a variety of challenging data sets to validate and compare our results against other algorithms.
- Published
- 2011
- Full Text
- View/download PDF
32. BRIEF: Computing a Local Binary Descriptor Very Fast
- Author
- Pascal Fua, Christoph Strecha, Mustafa Özuysal, Vincent Lepetit, Michael Calonder, and Tomasz Trzcinski
- Subjects
- binary descriptors, point matching, feature extraction, SIFT, SURF, Hamming distance, point set registration, quantization (image processing), pattern recognition
- Abstract
Binary descriptors are becoming increasingly popular as a means to compare feature points very fast while requiring comparatively small amounts of memory. The typical approach to creating them is to first compute floating-point ones, using an algorithm such as SIFT, and then to binarize them. In this paper, we show that we can directly compute a binary descriptor, which we call BRIEF, on the basis of simple intensity difference tests. As a result, BRIEF is very fast both to build and to match. We compare it against SURF and SIFT on standard benchmarks and show that it yields comparable recognition accuracy, while running in an almost vanishing fraction of the time required by either.
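The matching step the abstract refers to reduces to Hamming distances between packed bit strings, which can be sketched as follows. This is a generic illustration, not the authors' implementation.

```python
import numpy as np

def hamming(a, b):
    """Hamming distance between two binary descriptors packed as
    uint8 arrays: XOR the bytes, then count the set bits. On modern
    CPUs this maps to a popcount instruction, which is why binary
    descriptors such as BRIEF are so cheap to match."""
    return int(np.unpackbits(np.bitwise_xor(a, b)).sum())

a = np.array([0b10110010, 0b00001111], dtype=np.uint8)
b = np.array([0b10110011, 0b00000000], dtype=np.uint8)
# 1 differing bit in the first byte + 4 in the second = 5
print(hamming(a, b))  # 5
```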
- Published
- 2011
33. Dynamic and scalable large scale image reconstruction
- Author
- Timo Pylvänäinen, Christoph Strecha, and Pascal Fua
- Subjects
- large scale reconstruction, bundle adjustment, iterative reconstruction, metadata, scalability, computer vision
- Abstract
Recent approaches to reconstructing city-sized areas from large image collections usually process them all at once and only produce disconnected descriptions of image subsets, which typically correspond to major landmarks. In contrast, we propose a framework that lets us take advantage of the available meta-data to build a single, consistent description from these potentially disconnected descriptions. Furthermore, this description can be incrementally updated and enriched as new images become available. We demonstrate the power of our approach by building large-scale reconstructions using images of Lausanne and Prague.
- Published
- 2010
- Full Text
- View/download PDF
34. Surface construction by a 2-D differentiation-integration process: A neurocomputational model for perceived border ownership, depth, and lightness in Kanizsa figures
- Author
- Christoph Strecha, Naoki Kogo, Johan Wagemans, and Luc Van Gool
- Subjects
- visual perception, lightness/brightness perception, depth perception, perceptual completion, surface completion, filling-in, illusory contours, subjective contours, ground separation, neural dynamics, neural computation, area V2, monkey visual cortex, psychophysical evidence, spatial arrangement
- Abstract
Human visual perception is a fundamentally relational process: Lightness perception depends on luminance ratios, and depth perception depends on occlusion (difference of depth) cues. Neurons in low-level visual cortex are sensitive to the difference (but not the value itself) of signals, and these differences have to be used to reconstruct the input. This process can be regarded as a 2-dimensional differentiation and integration process: First, differentiated signals for depth and lightness are created at an earlier stage of visual processing and then 2-dimensionally integrated at a later stage to construct surfaces. The subjective filling in of physically missing parts of input images (completion) can be explained as a property that emerges from this surface construction process. This approach is implemented in a computational model, called DISC (Differentiation-Integration for Surface Completion). In the DISC model, border ownership (the depth order at borderlines) is computed based on local occlusion cues (L- and T-junctions) and the distribution of borderlines. Two-dimensional integration is then applied to construct surfaces in the depth domain, and lightness values are in turn modified by these depth measurements. Illusory percepts emerge through the surface-construction process with the development of illusory border ownership and through the interaction between depth and lightness perception. The DISC model not only produces a central surface with the correctly modified lightness values of the original Kanizsa figure but also responds to variations of this figure such that it can distinguish between illusory and nonillusory configurations in a manner that is consistent with human perception. Kogo N., Strecha C., Van Gool L., Wagemans J., ''Surface construction by a 2-D differentiation-integration process : a neurocomputational model for perceived border ownership, depth, and lightness in Kanizsa figures'', Psychological review, vol. 117, no. 2, pp. 
406-439, April 2010.
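The differentiation-integration idea can be sketched generically: recover a surface (up to a constant) from its x/y differences by a least-squares solve. This is a standard FFT-based gradient-field integration, not the DISC model itself.

```python
import numpy as np

def integrate_gradients(gx, gy):
    """Least-squares 2-D integration: given circular forward
    differences gx, gy of an (unknown) surface, solve the discrete
    Poisson equation in the Fourier domain and return the surface
    up to an additive constant (zero-mean)."""
    h, w = gx.shape
    dx = np.exp(2j * np.pi * np.fft.fftfreq(w))[None, :] - 1.0
    dy = np.exp(2j * np.pi * np.fft.fftfreq(h))[:, None] - 1.0
    num = np.conj(dx) * np.fft.fft2(gx) + np.conj(dy) * np.fft.fft2(gy)
    den = np.abs(dx) ** 2 + np.abs(dy) ** 2
    den[0, 0] = 1.0                      # DC term is unconstrained
    z = np.fft.ifft2(num / den).real
    return z - z.mean()

# Round-trip check on a smooth periodic surface: differentiate,
# then integrate back.
h = w = 32
yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
z = np.sin(2 * np.pi * xx / w) + np.cos(2 * np.pi * yy / h)
gx = np.roll(z, -1, axis=1) - z          # circular forward differences
gy = np.roll(z, -1, axis=0) - z
z_rec = integrate_gradients(gx, gy)
print(np.allclose(z_rec, z - z.mean(), atol=1e-8))  # True
```

In the model described above, the differentiated signals additionally carry border-ownership and lightness information before the integration stage; the sketch only shows the integration machinery.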
- Published
- 2010
- Full Text
- View/download PDF
35. BRIEF: Binary Robust Independent Elementary Features
- Author
- Vincent Lepetit, Christoph Strecha, Pascal Fua, and Michael Calonder
- Subjects
- feature (machine learning), binary number, Hamming distance, similarity (geometry), discriminative model, algorithm
- Abstract
We propose to use binary strings as an efficient feature point descriptor, which we call BRIEF. We show that it is highly discriminative even when using relatively few bits and can be computed using simple intensity difference tests. Furthermore, the descriptor similarity can be evaluated using the Hamming distance, which is very efficient to compute, instead of the L2 norm as is usually done. As a result, BRIEF is very fast both to build and to match. We compare it against SURF and U-SURF on standard benchmarks and show that it yields a similar or better recognition performance, while running in a fraction of the time required by either.
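A minimal sketch of the descriptor construction described above. It is illustrative only: the test-pair sampling and patch size are simplified relative to the paper, and the pre-smoothing step is omitted.

```python
import numpy as np

def brief_descriptor(patch, pairs):
    """Minimal BRIEF-style descriptor: each bit is one intensity
    comparison between a fixed pair of pixels in the patch.
    `pairs` holds ((y1, x1), (y2, x2)) test locations; in the paper
    they are drawn randomly once and reused for every keypoint."""
    bits = [1 if patch[y1, x1] < patch[y2, x2] else 0
            for (y1, x1), (y2, x2) in pairs]
    return np.packbits(np.array(bits, dtype=np.uint8))

rng = np.random.default_rng(42)
# 256 random test pairs inside a 16x16 patch -> a 256-bit descriptor.
pairs = [((int(a), int(b)), (int(c), int(d)))
         for a, b, c, d in rng.integers(0, 16, size=(256, 4))]

patch = rng.integers(0, 256, size=(16, 16)).astype(float)
desc = brief_descriptor(patch, pairs)
print(desc.shape)  # (32,) -- 256 bits packed into 32 bytes
```

Because each bit depends only on the sign of an intensity difference, the descriptor is unchanged by an additive brightness shift of the whole patch.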
- Published
- 2010
- Full Text
- View/download PDF
36. Training for Task Specific Keypoint Detection
- Author
- Albrecht J. Lindner, Karim Ali, Christoph Strecha, and Pascal Fua
- Subjects
- keypoint detector, training, task-specific features, computer vision
- Abstract
In this paper, we show that better performance can be achieved by training a keypoint detector to find only those points that are suitable to the needs of the given task. We demonstrate our approach in an urban environment, where the keypoint detector should focus on stable man-made structures and ignore objects that undergo natural changes such as vegetation and clouds. We use WaldBoost learning with task-specific training samples in order to train a keypoint detector with this capability. We show that our approach generalizes to a broad class of problems where the task is known beforehand.
- Published
- 2009
- Full Text
- View/download PDF
37. On benchmarking camera calibration and multi-view stereo for high resolution imagery
- Author
- W. von Hansen, L. Van Gool, U. Thoennessen, Pascal Fua, and Christoph Strecha
- Subjects
- camera calibration, camera auto-calibration, camera resectioning, multi-view stereo, benchmarking, LIDAR, pose estimation, image resolution, computer stereo vision
- Abstract
In this paper we want to start the discussion on whether image based 3-D modelling techniques can possibly be used to replace LIDAR systems for outdoor 3D data acquisition. Two main issues have to be addressed in this context: (i) camera calibration (internal and external) and (ii) dense multi-view stereo. To investigate both, we have acquired test data from outdoor scenes both with LIDAR and cameras. Using the LIDAR data as reference we estimated the ground-truth for several scenes. Evaluation sets are prepared to evaluate different aspects of 3D model building. These are: (i) pose estimation and multi-view stereo with known internal camera parameters; (ii) camera calibration and multi-view stereo with the raw images as the only input and (iii) multi-view stereo.
- Published
- 2008
- Full Text
- View/download PDF
38. Making Background Subtraction Robust to Sudden Illumination Changes
- Author
- Pascal Fua, Christoph Strecha, and Julien Pilet
- Subjects
- background subtraction, statistical model, mixture model, density estimation, pixel, augmented reality, computer vision
- Abstract
Modern background subtraction techniques can handle gradual illumination changes but can easily be confused by rapid ones. We propose a technique that overcomes this limitation by relying on a statistical model, not of the pixel intensities, but of the illumination effects. Because they tend to affect whole areas of the image as opposed to individual pixels, low-dimensional models are appropriate for this purpose and make our method extremely robust to illumination changes, whether slow or fast.
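A deliberately simplified sketch of the idea of modelling illumination rather than pixels. Here the "low-dimensional model" is reduced to a single global gain, which is far cruder than the paper's statistical model, but shows why a sudden global change does not trigger false detections.

```python
import numpy as np

def foreground_mask(frame, background, thresh=25.0):
    """Toy illumination-compensated background subtraction: fit one
    global gain (a one-parameter illumination model) as the median
    frame/background ratio, then threshold the residual."""
    gain = np.median(frame / np.maximum(background, 1.0))
    return np.abs(frame - gain * background) > thresh

rng = np.random.default_rng(1)
bg = rng.uniform(60, 200, size=(40, 40))   # learned background image
frame = 0.5 * bg                           # sudden global dimming
frame[10:20, 10:20] = 230.0                # plus a real foreground object
mask = foreground_mask(frame, bg)
print(int(mask.sum()))                     # only the 10x10 object fires: 100
```

A per-pixel intensity model would flag the entire dimmed frame as foreground; compensating the illumination first leaves only the genuine object.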
- Published
- 2008
- Full Text
- View/download PDF
39. A Mean Field EM-algorithm for Coherent Occlusion Handling in MAP-Estimation Problems
- Author
- Christoph Strecha, Rik Fransens, and L. Van Gool
- Subjects
- Markov random field, generative model, visibility (geometry), image registration, depth map, facial recognition, face detection, pattern recognition, computer vision
- Abstract
This paper presents a generative model based approach to deal with occlusions in vision problems which can be formulated as MAP-estimation problems. The approach is generic and targets applications in diverse domains like model-based object recognition, depth-from-stereo and image registration. It relies on a probabilistic imaging model, in which visible regions and occlusions are generated by two separate processes. The partitioning into visible and occluded regions is made explicit by the introduction of a hidden binary visibility map, which, to account for the coherent nature of occlusions, is modelled as a Markov Random Field. Inference is made tractable by a mean field EM-algorithm, which alternates between estimation of visibility and optimisation of model parameters. We demonstrate the effectiveness of the approach with two examples. First, in an N-view stereo experiment, we compute a dense depth map of a scene which is contaminated by multiple occluding objects. Second, in a 2D-face recognition experiment, we try to identify people from partially occluded facial images.
- Published
- 2006
- Full Text
- View/download PDF
40. Robust Estimation in the Presence of Spatially Coherent Outliers
- Author
-
L. Van Gool, Christoph Strecha, and Rik Fransens
- Subjects
Markov random field ,Pixel ,Estimation theory ,business.industry ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Inference ,Pattern recognition ,Facial recognition system ,Generative model ,Robustness (computer science) ,Outlier ,Artificial intelligence ,business ,Mathematics - Abstract
We present a generative model based approach to deal with spatially coherent outliers. The model assumes that image pixels are generated by either one of two distinct processes: an inlier process which is responsible for the generation of the majority of the data, and an outlier process which generates pixels not adhering to the inlier model. The partitioning into inlier and outlier regions is made explicit by the introduction of a hidden binary map. To account for the coherent nature of outliers this map is modelled as a Markov Random Field, and inference is made tractable by a mean field EM-algorithm. We make a connection with classical robust estimation theory, and derive the analytic expressions of the equivalent M-estimator for two limiting cases of our model. The effectiveness of the proposed method is demonstrated with two examples. First, in a synthetic linear regression problem, we compare our approach with different M-estimators. Next, in a 2D-face recognition experiment, we try to identify people from partially occluded facial images.
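A hedged sketch of the idea for the linear regression example: an EM loop under an inlier Gaussian / outlier uniform mixture, whose E-step weight function is what connects the model to classical M-estimators. The function `robust_line_fit` and all constants are illustrative assumptions, not the paper's exact formulation (in particular, this toy ignores the Markov Random Field coherence prior):

```python
import numpy as np

def robust_line_fit(x, y, sigma=2.0, outlier_density=0.05, iters=30):
    """EM line fit under an inlier Gaussian / outlier uniform mixture.
    The E-step weight w_i = P(inlier | residual_i) plays the role of the
    weight function of an equivalent M-estimator; the M-step is a
    weighted least-squares solve."""
    X = np.stack([x, np.ones_like(x)], axis=1)
    a, b = np.linalg.lstsq(X, y, rcond=None)[0]      # ordinary LS init
    for _ in range(iters):
        r = y - (a * x + b)
        p_in = np.exp(-0.5 * (r / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
        w = p_in / (p_in + outlier_density)          # E-step: inlier posterior
        Xw = X * w[:, None]
        a, b = np.linalg.solve(X.T @ Xw, Xw.T @ y)   # M-step: weighted LS
    return a, b, w

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 * x + 1.0 + 0.1 * rng.normal(size=50)
y[:10] += 15.0                                       # coherent block of outliers
a, b, w = robust_line_fit(x, y)
```

The recovered slope and intercept are close to the true values despite the contaminated block, and the final weights separate inliers from outliers.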
- Published
- 2006
- Full Text
- View/download PDF
41. Wide-baseline stereo from multiple views: a probabilistic account
- Author
-
L. Van Gool, Rik Fransens, and Christoph Strecha
- Subjects
Discretization ,Pixel ,business.industry ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Probabilistic logic ,Pattern recognition ,Iterative reconstruction ,Small set ,Image (mathematics) ,novel view generation ,Image texture ,Computer Science::Computer Vision and Pattern Recognition ,Convergence (routing) ,multi-view stereo ,Computer vision ,Artificial intelligence ,business ,Mathematics - Abstract
This paper describes a method for dense depth reconstruction from a small set of wide-baseline images. In a wide-baseline setting an inherent difficulty which complicates the stereo-correspondence problem is self-occlusion. Also, we have to consider the possibility that image pixels in different images, which are projections of the same point in the scene, will have different color values due to non-Lambertian effects or discretization errors. We propose a Bayesian approach to tackle these problems. In this framework, the images are regarded as noisy measurements of an underlying 'true' image-function. Also, the image data is considered incomplete, in the sense that we do not know which pixels from a particular image are occluded in the other images. We describe an EM-algorithm, which iterates between estimating values for all hidden quantities, and optimizing the current depth estimates. The algorithm has few free parameters, displays a stable convergence behavior and generates accurate depth estimates. The approach is illustrated with several challenging real-world examples. We also show how the algorithm can generate realistic view interpolations and how it merges the information of all images into a new, synthetic view.
- Published
- 2004
- Full Text
- View/download PDF
42. A Probabilistic Approach to Large Displacement Optical Flow and Occlusion Detection
- Author
-
Luc Van Gool, Rik Fransens, and Christoph Strecha
- Subjects
Ground truth ,Pixel ,business.industry ,Computation ,Bayesian probability ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Optical flow ,Probabilistic logic ,Displacement (vector) ,Feature (computer vision) ,Computer Science::Computer Vision and Pattern Recognition ,Computer vision ,Artificial intelligence ,business ,Algorithm ,Mathematics - Abstract
This paper deals with the computation of optical flow and occlusion detection in the case of large displacements. We propose a Bayesian approach to the optical flow problem and solve it by means of differential techniques. The images are regarded as noisy measurements of an underlying 'true' image-function. Additionally, the image data is considered incomplete, in the sense that we do not know which pixels from a particular image are occluded in the other images. We describe an EM-algorithm, which iterates between estimating values for all hidden quantities and optimizing the current optical flow estimates by differential techniques. The Bayesian way of describing the problem leads to more insight into existing differential approaches and offers some natural extensions to them. The resulting system involves fewer parameters and gives an interpretation to the remaining ones. An important new feature is the photometric detection of occluded pixels. We compare the algorithm with existing optical flow methods on ground truth data. The comparison shows that our algorithm generates the most accurate optical flow estimates. We further illustrate the approach with some challenging real-world examples.
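The photometric occlusion test mentioned above can be caricatured as follows, assuming the current flow estimate has already been used to warp the second image onto the first; the uniform occlusion model over 256 grey levels, the priors, and the function name are our own simplifications:

```python
import numpy as np

def photometric_occlusion_mask(img1, img2_warped, sigma=5.0, prior_occ=0.1):
    """Per-pixel posterior that a pixel is occluded, based purely on the
    photometric residual between img1 and img2 warped by the current
    flow estimate. Visible pixels are modelled by Gaussian image noise
    with std sigma; occluded pixels by a uniform process over 256 levels."""
    r = img1.astype(float) - img2_warped.astype(float)
    p_vis = np.exp(-0.5 * (r / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    p_occ = 1.0 / 256.0
    return prior_occ * p_occ / (prior_occ * p_occ + (1 - prior_occ) * p_vis)

img1 = np.full((6, 6), 100.0)
warped = np.full((6, 6), 100.0)
warped[2:4, 2:4] = 180.0          # a patch the flow cannot explain
occ = photometric_occlusion_mask(img1, warped)
```

Pixels whose residual is incompatible with the image-noise model receive a posterior close to one and are treated as occluded in the next E-step.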
- Published
- 2004
- Full Text
- View/download PDF
43. PDE-based multi-view depth estimation
- Author
-
L. Van Gool and Christoph Strecha
- Subjects
Matching (statistics) ,Pixel ,business.industry ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Process (computing) ,Classification of discontinuities ,Variable (computer science) ,Data visualization ,Stereopsis ,Calibration ,Computer vision ,Artificial intelligence ,business ,Mathematics - Abstract
The paper describes a method for depth extraction from multiple, calibrated images. Emphasis lies on the integration of multiple views during the matching process. This process is guided by the relative confidence that the system has in the data coming from the different views. This weighting is fine-grained in that it is determined for every pixel at every iteration. Reliable information spreads fast at the expense of less reliable data, both in terms of spatial communication and in terms of exchange between views. The resulting system can handle large disparities, depth discontinuities and occlusions. Moreover, provisions are made to deal with intensity changes between corresponding pixels. Experimental results corroborate the viability of the approach and the improved results that can be expected from the system's ability to deal with variable intensities.
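The confidence-guided spreading of reliable information can be sketched, in strongly simplified form, as an iterative scheme in which every pixel is pulled towards the confidence-weighted average of its neighbours, with the pull strongest where the pixel's own confidence is low. This is our toy stand-in for the paper's PDE, with invented function names and constants:

```python
import numpy as np

def neighbour_views(a):
    """4-neighbour views of a 2D array, edges replicated."""
    p = np.pad(a, 1, mode='edge')
    return p[:-2, 1:-1], p[2:, 1:-1], p[1:-1, :-2], p[1:-1, 2:]

def confidence_guided_smoothing(depth, conf, iters=200):
    """Each pixel moves towards the confidence-weighted mean of its
    4-neighbours; the move is large where its own confidence is low,
    so reliable values spread at the expense of unreliable ones."""
    d = depth.astype(float).copy()
    for _ in range(iters):
        num = np.zeros_like(d)
        den = np.full_like(d, 1e-12)
        for dn, cn in zip(neighbour_views(d), neighbour_views(conf)):
            num += cn * dn
            den += cn
        d = conf * d + (1.0 - conf) * (num / den)
    return d

# Constant true depth 5: unreliable pixels carry a +/-2 checkerboard error,
# while a sparse grid of reliable seed pixels carries the exact value.
ii, jj = np.indices((10, 10))
depth = np.where((ii + jj) % 2 == 0, 7.0, 3.0)
depth[::3, ::3] = 5.0
conf = np.full((10, 10), 0.1)
conf[::3, ::3] = 0.9
out = confidence_guided_smoothing(depth, conf)
```

The unreliable checkerboard noise is smoothed away while the reliable seed values propagate across the field.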
- Published
- 2003
- Full Text
- View/download PDF
44. Motion — Stereo Integration for Depth Estimation
- Author
-
Luc Van Gool and Christoph Strecha
- Subjects
Pixel ,Computer science ,business.industry ,Distortion (optics) ,Emphasis (telecommunications) ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Epipolar line ,Motion (geometry) ,Computer vision ,Artificial intelligence ,business ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
Depth extraction with a mobile stereo system is described. The stereo setup is precalibrated, but the system extracts its own motion. Emphasis lies on the integration of the motion and stereo cues. It is guided by the relative confidence that the system has in these cues. This weighting is fine-grained in that it is determined for every pixel at every iteration. Reliable information spreads fast at the expense of less reliable data, both in terms of spatial communication and in terms of exchange between cues. The resulting system can handle large displacements, depth discontinuities and occlusions. Experimental results corroborate the viability of the approach.
- Published
- 2002
- Full Text
- View/download PDF
45. Reconstruction of Subjective Surfaces from Occlusion Cues
- Author
-
Luc Van Gool, Naoki Kogo, Geert Caenen, Rik Fransens, Johan Wagemans, and Christoph Strecha
- Subjects
Lightness ,Surface (mathematics) ,Diffusion equation ,business.industry ,media_common.quotation_subject ,Gaussian ,Illusion ,Feedback loop ,Convolution ,symbols.namesake ,Depth map ,symbols ,Computer vision ,Artificial intelligence ,business ,media_common ,Mathematics - Abstract
In the Kanizsa figure, an illusory central area and its contours are perceived. Replacing the pacman inducers with other shapes can significantly influence this effect. Psychophysical studies indicate that the determination of depth is a task that our visual system constantly conducts. We hypothesized that the illusion is due to the modification of the image according to the higher level depth interpretation. This idea was implemented in a feedback model based on a surface completion scheme. The relative depths, with their signs reflecting the polarity of the image, were determined from junctions by convolution of Gaussian derivative based filters, while a diffusion equation reconstructed the surfaces. The feedback loop was established by converting this depth map to modify the lightness of the image. This model created a central surface and extended the contours from the inducers. Results on a variety of figures were consistent with psychophysical experiments.
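The surface-reconstruction-by-diffusion step can be sketched with a plain discrete heat equation that clamps pixels carrying depth cues and relaxes the rest; this toy omits the Gaussian-derivative junction filters and the lightness feedback loop, and all names and sizes are our own:

```python
import numpy as np

def diffuse_surface(cues, mask, iters=500):
    """Reconstruct a dense surface from sparse depth cues by iterating a
    discrete diffusion (heat) equation: pixels in `mask` are clamped to
    their cue values, the rest relax towards the 4-neighbour average."""
    s = np.where(mask, cues, 0.0).astype(float)
    for _ in range(iters):
        p = np.pad(s, 1, mode='edge')
        avg = 0.25 * (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:])
        s = np.where(mask, cues, avg)
    return s

# A square outline of depth cues at height 1, with the image border pinned
# to 0: diffusion fills the interior, producing a raised central surface
# reminiscent of the illusory Kanizsa area.
mask = np.zeros((16, 16), bool)
cues = np.zeros((16, 16))
mask[0, :] = mask[-1, :] = mask[:, 0] = mask[:, -1] = True
ring = np.zeros((16, 16), bool)
ring[4, 4:12] = ring[11, 4:12] = ring[4:12, 4] = ring[4:12, 11] = True
mask |= ring
cues[ring] = 1.0
surf = diffuse_surface(cues, mask)
```

The enclosed region relaxes to the cue height while the surround stays closer to the background level.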
- Published
- 2002
- Full Text
- View/download PDF
46. Combined Depth and Outlier Estimation in Multi-View Stereo
- Author
-
Christoph Strecha, Rik Fransens, and L. Van Gool
- Subjects
Random field ,Pixel ,business.industry ,Computer science ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Generative model ,Stereopsis ,Kernel (image processing) ,Outlier ,Computer vision ,Artificial intelligence ,business ,Hidden Markov model ,Algorithm - Abstract
In this paper, we present a generative model based approach to solve the multi-view stereo problem. The input images are considered to be generated by either one of two processes: (i) an inlier process, which generates the pixels which are visible from the reference camera and which obey the constant brightness assumption, and (ii) an outlier process which generates all other pixels. Depth and visibility are jointly modelled as a hidden Markov Random Field, and the spatial correlations of both are explicitly accounted for. Inference is made tractable by an EM-algorithm, which alternates between estimation of visibility and depth, and optimisation of model parameters. We describe and compare two implementations of the E-step of the algorithm, which correspond to the Mean Field and Bethe approximations of the free energy. The approach is validated by experiments on challenging real-world scenes, of which two are contaminated by independently moving objects.
47. Pose estimation of landscape images using DEM and orthophotos
- Author
-
François Golay, Christoph Strecha, Timothée Produit, and Devis Tuia
- Subjects
RANSAC approach ,Pixel ,Computer science ,business.industry ,Feature extraction ,Orthophoto ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Pattern recognition ,RANSAC ,3D pose estimation ,pose estimation ,Edge detection ,orthoimage ,Ortho-image ,landscape image ,Analysis-by-synthesis ,Computer vision ,Artificial intelligence ,Projection (set theory) ,business ,Pose ,pose of oblique landscape images - Abstract
In this paper, we propose a methodology for the estimation of the pose of oblique landscape images. Knowledge about the pose is needed for using such images in augmented reality applications or to allow projection of pixels into a GIS for spatial analysis. We propose to estimate the pose using a 3D digital elevation model (DEM) rendered with an ortho-image as reference. Starting from a rough estimate, the pose is refined by exploiting correspondences detected with a local normalized cross-correlation method. Matches are searched between edge features extracted both in the query image and in a synthetic image generated from the DEM and the ortho-image. A RANSAC approach based on the camera model extracts the best matches. A few iterations of the algorithm provide a precise estimate of the pose, leading to a precise georeferencing of the query image. We tested the proposed methodology on images of a popular glacier in the south of Switzerland downloaded from Panoramio.
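The RANSAC matching step can be sketched for a toy 2D-translation model instead of the paper's full camera model; `ransac_translation` and its parameters are illustrative assumptions:

```python
import numpy as np

def ransac_translation(pts_a, pts_b, n_iters=200, thresh=1.0, seed=0):
    """Minimal RANSAC: repeatedly fit the model (here a translation,
    determined by one random correspondence) and keep the hypothesis
    with the most inliers, then refit on those inliers."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(pts_a), bool)
    for _ in range(n_iters):
        i = rng.integers(len(pts_a))
        t = pts_b[i] - pts_a[i]                      # 1-point hypothesis
        inliers = np.linalg.norm(pts_a + t - pts_b, axis=1) < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    t = (pts_b[best_inliers] - pts_a[best_inliers]).mean(axis=0)
    return t, best_inliers

rng = np.random.default_rng(1)
a = rng.uniform(0, 100, size=(40, 2))
b = a + np.array([5.0, -3.0]) + rng.normal(0, 0.2, size=(40, 2))
b[:10] = rng.uniform(0, 100, size=(10, 2))           # gross mismatches
t, inl = ransac_translation(a, b)
```

The gross mismatches are rejected and the translation is recovered from the consensus set alone.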
48. Simplified building models extraction from ultra-light UAV imagery
- Author
-
Jan Stumpf, Daniel Gurdan, Klaus-Michael Doth, Olivier Küng, Pascal Fua, Christoph Strecha, and Mickael Achtelik
- Subjects
Visual reconstruction ,lcsh:Applied optics. Photonics ,Engineering drawing ,Engineering ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,UAVs ,computer.software_genre ,lcsh:Technology ,Task (project management) ,Computer Aided Design ,Computer vision ,Extraction (military) ,business.industry ,lcsh:T ,3D reconstruction ,lcsh:TA1501-1820 ,Pipeline (software) ,Simplified Building Model ,Photogrammetry ,lcsh:TA1-2040 ,Key (cryptography) ,Artificial intelligence ,3D Reconstruction ,business ,lcsh:Engineering (General). Civil engineering (General) ,computer - Abstract
Generating detailed simplified building models such as the ones present on Google Earth is often a difficult and lengthy manual task, requiring advanced CAD software and a combination of ground imagery, LIDAR data and blueprints. Nowadays, UAVs such as the AscTec Falcon 8 are mature enough to offer an affordable, fast and easy way to capture large amounts of oblique images covering all parts of a building. In this paper we present a state-of-the-art photogrammetry and visual reconstruction pipeline provided by Pix4D, applied to medium resolution imagery acquired by such UAVs. The key element of simplified building model extraction is the seamless integration of the outputs of such a pipeline into a final manual refinement step, in order to minimize the amount of manual work.
49. LDAHash: Improved matching with smaller descriptors
- Author
-
Pascal Fua, Alexander M. Bronstein, Michael M. Bronstein, and Christoph Strecha
- Subjects
Performance Evaluation ,Feature extraction ,InformationSystems_INFORMATIONSTORAGEANDRETRIEVAL ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Scale-invariant feature transform ,Poison control ,metric learning ,Artificial Intelligence ,SIFT ,Computer Science::Multimedia ,Computer vision ,3D reconstruction ,Local Descriptors ,Hamming space ,Image retrieval ,Transformation geometry ,Mathematics ,DAISY ,business.industry ,similarity-sensitive hashing ,Applied Mathematics ,matching ,Hamming distance ,Pattern recognition ,Local features ,Computational Theory and Mathematics ,Computer Science::Computer Vision and Pattern Recognition ,Computer Vision and Pattern Recognition ,Artificial intelligence ,Affine transformation ,binarization ,business ,Software - Abstract
SIFT-like local feature descriptors are ubiquitously employed in computer vision applications such as content-based retrieval, video analysis, copy detection, object recognition, photo tourism and 3D reconstruction. Feature descriptors can be designed to be invariant to certain classes of photometric and geometric transformations, in particular affine and intensity scale transformations. However, the real transformations that an image can undergo can only be approximately modeled in this way, and thus most descriptors are only approximately invariant in practice. Furthermore, descriptors are usually high-dimensional (e.g., SIFT is represented as a 128-dimensional vector). In large-scale retrieval and matching problems, this can pose challenges in storing and retrieving descriptor data. We map the descriptor vectors into the Hamming space, in which the Hamming metric is used to compare the resulting representations. This way, we reduce the size of the descriptors by representing them as short binary strings, and we learn descriptor invariance from examples. We show extensive experimental validation, demonstrating the advantage of the proposed approach.
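The Hamming-space mapping can be sketched as project-and-threshold binarization followed by Hamming-distance matching. LDAHash learns the projection and thresholds from labelled match/non-match pairs; the fixed random projection below is only a stand-in for that learned mapping, and all names are ours:

```python
import numpy as np

def binarize(desc, P, t):
    """Project descriptors and threshold: bit k of x is (P x + t)_k > 0.
    The result is a boolean matrix, one short binary string per row."""
    return (desc @ P.T + t) > 0

def hamming(b1, b2):
    """Pairwise Hamming distances between two sets of binary strings."""
    return (b1[:, None, :] != b2[None, :, :]).sum(axis=2)

rng = np.random.default_rng(0)
P = rng.normal(size=(64, 128))                  # 128-D descriptor -> 64 bits
t = np.zeros(64)
x = rng.normal(size=(5, 128))                   # reference descriptors
x_noisy = x + 0.05 * rng.normal(size=(5, 128))  # perturbed matching descriptors
D = hamming(binarize(x, P, t), binarize(x_noisy, P, t))
```

True matches (the diagonal of `D`) end up at much smaller Hamming distance than non-matches, while each descriptor shrinks from 128 floats to 64 bits.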