226 results for "Oliver Cossairt"
Search Results
202. Robust 3D Acquisition Using Motion Contrast 3D Scanning
- Author
-
Mohit Gupta, Nathan Matsuda, and Oliver Cossairt
- Subjects
Materials science, Light source, Optics, Laser scanning, Bandwidth (signal processing), Image processing and computer vision, 3D scanning, Reflective surfaces, Image sensor, Structured light
- Abstract
We present a novel structured light technique called Motion Contrast 3D scanning that maximizes bandwidth and light source power to avoid performance trade-offs. Our technique allows laser scanning resolution with single-shot speed, even in the presence of strong ambient illumination, significant inter-reflections, and highly reflective surfaces.
- Published
- 2015
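At its core, any scanning structured-light system, including the MC3D approach in entry 202, recovers depth by intersecting the known laser plane with the camera ray of the pixel that detects the stripe. A minimal triangulation sketch; the geometry, variable names, and numbers below are illustrative assumptions, not the authors' code:

```python
import numpy as np

def triangulate_depth(pixel_x, laser_angle, baseline, focal_px):
    """Laser-plane / camera-ray triangulation for a scanning line laser.

    pixel_x     : horizontal pixel coordinate of the detected stripe,
                  measured from the principal point
    laser_angle : laser projection angle (radians from the baseline normal),
                  known from the scan mirror position at event time
    baseline    : projector-to-camera distance (meters)
    focal_px    : camera focal length in pixels
    """
    cam_angle = np.arctan2(pixel_x, focal_px)   # ray angle for this column
    # Standard active-triangulation intersection of laser plane and camera ray.
    return baseline / (np.tan(laser_angle) + np.tan(cam_angle))

# Example: stripe detected 120 px from center, laser at 30 degrees.
z = triangulate_depth(pixel_x=120.0, laser_angle=np.radians(30.0),
                      baseline=0.2, focal_px=1800.0)
print(f"depth = {z:.3f} m")
```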
203. A Compressed Sensing Approach to Solving the Dynamic Range Problem in Fourier Transform Holography
- Author
-
Oliver Cossairt and Kuan He
- Subjects
Dynamic range, Holography, Reconstruction method, Image (mathematics), Compressed sensing, Fourier transform, Computer vision, Artificial intelligence, Mathematics
- Abstract
We propose a reconstruction method using compressed sensing to accurately recover an image from a single hologram despite a large amount of unmeasured intensities, which allows us to overcome dynamic range limitations in the sensor.
- Published
- 2015
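The recovery in entry 203 is a compressed-sensing inversion: the sensor leaves many Fourier-domain values unmeasured, and a sparsity prior fills them in. The sketch below solves a simplified linear surrogate of that problem, masked orthonormal Fourier samples with an L1 prior via ISTA; the paper's actual formulation and solver may well differ:

```python
import numpy as np

def ista_fourier_inpainting(y, mask, lam=0.01, iters=200):
    """L1-regularized recovery of an image from incomplete Fourier samples.

    y    : 2D array of measured (complex) Fourier coefficients, zero-filled
    mask : boolean array, True where a coefficient was actually measured
    Solves min_x 0.5*||mask*(F x - y)||^2 + lam*||x||_1 with unit-step ISTA
    (F is the orthonormal 2D DFT, so a gradient step size of 1 is safe).
    """
    x = np.zeros(y.shape, dtype=complex)
    for _ in range(iters):
        resid = mask * (np.fft.fft2(x, norm="ortho") - y)
        x = x - np.fft.ifft2(resid, norm="ortho")   # gradient step
        mag = np.abs(x)
        shrink = np.maximum(1.0 - lam / np.maximum(mag, 1e-12), 0.0)
        x = shrink * x                              # complex soft-threshold
    return x
```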
204. Dictionary Learning Based Color Demosaicing for Plenoptic Cameras
- Author
-
Xiang Huang and Oliver Cossairt
- Subjects
Color histogram, Demosaicing, Bayer filter, Pixel, Computer science, Color image, Image processing and computer vision, Image-based modeling and rendering, Primary color, RGB color model, Color filter array, Computer vision, Artificial intelligence, Image sensor, Light field, Computer graphics
- Abstract
Recently, plenoptic cameras have gained much attention, as they capture the 4D light field of a scene, which is useful for numerous computer vision and graphics applications. Similar to traditional digital cameras, plenoptic cameras use a color filter array placed on the image sensor so that each pixel samples only one of three primary color values. A color demosaicing algorithm is then used to generate a full-color plenoptic image, which often introduces color aliasing artifacts. In this paper, we propose a dictionary learning based demosaicing algorithm that recovers a full-color light field from a captured plenoptic image using sparse optimization. Traditional methods consider only spatial correlations between neighboring pixels in a captured plenoptic image. Our method takes advantage of both the spatial and angular correlations inherent in naturally occurring light fields. We demonstrate that our method outperforms traditional color demosaicing methods in experiments on a wide variety of scenes.
- Published
- 2014
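The reconstruction step in entry 204 can be illustrated with patch-wise sparse coding: given a dictionary learned over full-color spatio-angular patches and a mask selecting the color-filter-array samples, each patch is sparse-coded against the masked dictionary and then synthesized in full. A hand-rolled Orthogonal Matching Pursuit sketch under those assumptions (unit-norm dictionary columns assumed; this stands in for, and is not, the paper's trained dictionary and solver):

```python
import numpy as np

def omp(A, y, sparsity):
    """Orthogonal Matching Pursuit: y ~= A c with at most `sparsity` atoms.
    Assumes roughly unit-norm columns of A."""
    resid = y.copy()
    idx = []
    coef = np.zeros(0)
    for _ in range(sparsity):
        idx.append(int(np.argmax(np.abs(A.T @ resid))))  # most correlated atom
        sub = A[:, idx]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)   # refit on chosen atoms
        resid = y - sub @ coef
    c = np.zeros(A.shape[1])
    c[idx] = coef
    return c

def demosaic_patch(D, mask, observed, sparsity=8):
    """D        : (patch_dim, n_atoms) dictionary over full-color patches
    mask     : boolean (patch_dim,) CFA sampling pattern for this patch
    observed : the CFA-sampled values, shape (mask.sum(),)"""
    # Masking changes column norms slightly; acceptable for a sketch.
    code = omp(D[mask], observed, sparsity)  # sparse-code against masked rows
    return D @ code                          # synthesize the full-color patch
```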
205. Performance limits for motion deblurring cameras
- Author
-
Oliver Cossairt and Mohit Gupta
- Subjects
Blind deconvolution, Point spread function, Deblurring, Machine vision, Optical transfer function, Linear motion, Motion blur, Computer vision, Coded aperture, Artificial intelligence, Mathematics
- Published
- 2014
206. Can we beat Hadamard multiplexing? Data driven design and analysis for computational imaging systems
- Author
-
Ashok Veeraraghavan, Kaushik Mitra, and Oliver Cossairt
- Subjects
Deblurring, Theoretical computer science, Computer science, Hadamard transform, Prior probability, Linear system, Depth of field, Error detection and correction, Mixture model, Multiplexing, Algorithm
- Abstract
Computational Imaging (CI) systems that exploit optical multiplexing and algorithmic demultiplexing have been shown to improve imaging performance in tasks such as motion deblurring, extended depth of field, light field, and hyper-spectral imaging. Design and performance analysis of many of these approaches tend to ignore the role of image priors. It is well known that utilizing statistical image priors significantly improves demultiplexing performance. In this paper, we extend the Gaussian Mixture Model as a data-driven image prior (proposed by Mitra et al. [21]) to under-determined linear systems and study compressive CI methods such as light-field and hyper-spectral imaging. Further, we derive a novel algorithm for optimizing multiplexing matrices that simultaneously accounts for (a) sensor noise, (b) image priors, and (c) CI design constraints. We use our algorithm to design data-optimal multiplexing matrices for a variety of existing CI designs, and we use these matrices to analyze the performance of CI systems as a function of noise level. Our analysis gives new insight into the optimal performance of CI systems, and how this relates to the performance of classical multiplexing designs such as Hadamard matrices.
- Published
- 2014
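The analysis in entry 206 builds on a convenient fact: for a linear measurement y = Hx + n with Gaussian noise and a GMM prior on x, the MMSE estimator has a closed form, a responsibility-weighted sum of per-component Wiener estimates. A compact sketch of that standard estimator (notation and shapes are mine, not the paper's code):

```python
import numpy as np
from scipy.stats import multivariate_normal

def gmm_mmse(y, H, sigma2, weights, means, covs):
    """MMSE estimate of x from y = H x + n, n ~ N(0, sigma2*I),
    with a GMM prior: x ~ sum_k weights[k] * N(means[k], covs[k])."""
    m = H.shape[0]
    resp, ests = [], []
    for w, mu, C in zip(weights, means, covs):
        Sy = H @ C @ H.T + sigma2 * np.eye(m)   # covariance of y | component k
        resp.append(w * multivariate_normal.pdf(y, mean=H @ mu, cov=Sy))
        # Per-component linear (Wiener) estimate of x given y.
        ests.append(mu + C @ H.T @ np.linalg.solve(Sy, y - H @ mu))
    resp = np.array(resp) / sum(resp)           # posterior component weights
    return sum(r * e for r, e in zip(resp, ests))
```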
207. Digital refocusing with incoherent holography
- Author
-
Nathan Matsuda, Mohit Gupta, and Oliver Cossairt
- Subjects
Physics, Image processing and computer vision, Holography, Laser, Optics, Signal-to-noise ratio, Broadband, Computer vision, Artificial intelligence, Optical resolution, Image resolution, Light field, Single point source
- Abstract
Light field cameras allow us to digitally refocus a photograph after the time of capture. However, recording a light field requires either a significant loss in spatial resolution [11, 21, 10] or a large number of captured images [12]. In this paper, we propose incoherent holography for digital refocusing without loss of spatial resolution from only 3 captured images. The main idea is to capture 2D coherent holograms of the scene instead of the 4D light fields. The key properties of coherent light propagation are that the coherent spread function (hologram of a single point source) encodes scene depths and has a broadband spatial frequency response. These properties enable digital refocusing with 2D coherent holograms, which can be captured on sensors without loss of spatial resolution. Incoherent holography does not require illuminating the scene with a high-power coherent laser, making it possible to acquire holograms even for passively illuminated scenes. We provide an in-depth performance comparison between light field and incoherent holographic cameras in terms of the signal-to-noise ratio (SNR). We show that, given the same sensing resources, an incoherent holography camera outperforms light field cameras in most real-world settings. We demonstrate a prototype incoherent holography camera capable of performing digital refocusing from only 3 acquired images. We show results on a variety of scenes that verify the accuracy of our theoretical analysis.
- Published
- 2014
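Digital refocusing of a recorded complex hologram, as in entry 207, amounts to numerically propagating the field to a new plane. A standard angular-spectrum propagator is sketched below; this is the generic textbook method, not necessarily the authors' exact pipeline, and the example numbers are hypothetical:

```python
import numpy as np

def angular_spectrum_refocus(field, dz, wavelength, dx):
    """Propagate a 2D complex field by distance dz (angular spectrum method).

    field      : complex hologram samples on an N x N grid
    dz         : refocus distance (meters); sign selects direction
    wavelength : optical wavelength (meters)
    dx         : sensor pixel pitch (meters)
    """
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                    # spatial frequencies (1/m)
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    Hprop = np.exp(1j * kz * dz) * (arg > 0)        # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * Hprop)

# Example: refocus a hologram 5 mm forward at 532 nm with 4.6 um pixels.
# holo = ...  (complex field recovered from the 3 phase-shifted captures)
# refocused = angular_spectrum_refocus(holo, 5e-3, 532e-9, 4.6e-6)
```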
208. Performance Limits for Computational Photography
- Author
-
Kaushik Mitra, Ashok Veeraraghavan, and Oliver Cossairt
- Subjects
Computational photography, Deblurring, Noise (signal processing), Computer science, Computer vision and pattern recognition, Motion blur, Multispectral image, Reconstruction algorithm, Mixture model, Multiplexing, Algorithm
- Abstract
Over the last decade, a number of Computational Imaging (CI) systems have been proposed for tasks such as motion deblurring, defocus deblurring and multispectral imaging. These techniques increase the amount of light reaching the sensor via multiplexing and then undo the deleterious effects of multiplexing by appropriate reconstruction algorithms. However, a detailed analysis of CI has proven to be a challenging problem because performance depends equally on three components: (1) the optical multiplexing, (2) the noise characteristics of the sensor, and (3) the reconstruction algorithm, which typically uses signal priors. In this paper, we utilize a recently proposed framework incorporating all three components [13]. We model signal priors using a Gaussian Mixture Model (GMM), which allows us to analytically compute Minimum Mean-Squared Error (MMSE). We analyze the specific problem of motion and defocus deblurring, showing how to find the optimal exposure time and aperture setting for defocus and motion deblurring cameras, respectively. This framework gives us the machinery to answer an open question in computational imaging: “To deblur or denoise?”
- Published
- 2014
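The "deblur or denoise" question in entry 208 can be made concrete with a toy 1D model: longer exposures gather more light but lengthen the motion blur, and the per-frequency Wiener MMSE trades these off. The sketch below uses a plain Gaussian prior with an assumed 1/f² power spectrum rather than the paper's GMM machinery, so it only illustrates the shape of the trade-off; all constants are invented:

```python
import numpy as np

def expected_mse(exposure, speed=50.0, photons_per_s=1000.0, read_var=25.0, n=256):
    """Per-frequency Wiener MMSE for a 1D motion-deblur toy, vs exposure time.

    Box motion blur of length speed*exposure pixels; photon count grows with
    exposure; per-frequency MMSE = S / (1 + S * a^2 |H|^2 / noise_var).
    """
    f = np.fft.fftfreq(n)                        # cycles per pixel
    L = max(speed * exposure, 1e-3)
    H = np.sinc(f * L)                           # approx. spectrum of box blur
    S = 1.0 / (np.abs(f) + 1.0 / n) ** 2         # assumed 1/f^2 image spectrum
    a = photons_per_s * exposure                 # signal amplitude gain
    noise_var = a + read_var                     # photon + read noise
    return float(np.mean(S / (1.0 + S * (a * np.abs(H)) ** 2 / noise_var)))

for t in (0.001, 0.01, 0.1, 1.0):                # the optimum sits in between
    print(f"exposure {t:6.3f} s -> mse {expected_mse(t):.4f}")
```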
209. Analyzing computational imaging systems
- Author
-
Ashok Veeraraghavan, Oliver Cossairt, and Kaushik Mitra
- Subjects
Computer science, Computational science
- Published
- 2013
210. Focal sweep videography with deformable optics
- Author
-
Shree K. Nayar, Daniel Miau, and Oliver Cossairt
- Subjects
Depth of focus, Video capture, Computer science, Image processing and computer vision, Lens (optics), Optics, Cardinal point, Focal length, Computer vision, Artificial intelligence, Focus (optics), Videography, Image resolution
- Abstract
A number of cameras have been introduced that sweep the focal plane using mechanical motion. However, mechanical motion makes video capture impractical and is unsuitable for long focal length cameras. In this paper, we present a focal sweep telephoto camera that uses a variable focus lens to sweep the focal plane. Our camera requires no mechanical motion and is capable of sweeping the focal plane periodically at high speeds. We use our prototype camera to capture extended depth of field (EDOF) videos at 20 fps, and demonstrate space-time refocusing for scenes with a wide depth range. In addition, we capture periodic focal stacks, and show how they can be used for several interesting applications such as video refocusing and trajectory estimation of moving objects.
- Published
- 2013
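Recovering an all-in-focus frame from a focal sweep, as in entry 210, is typically a single deconvolution with the near depth-invariant PSF obtained by integrating the defocus kernel over the sweep. A sketch with an assumed Gaussian defocus model; the real integrated PSF depends on the lens, so treat the constants as placeholders:

```python
import numpy as np

def sweep_psf(size, max_sigma, steps=32):
    """Integrated PSF of a focal sweep: average the (assumed Gaussian)
    defocus kernels encountered as the focal plane sweeps past the object."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax, indexing="ij")
    psf = np.zeros((size, size))
    for s in np.linspace(0.3, max_sigma, steps):
        k = np.exp(-(xx**2 + yy**2) / (2 * s**2))
        psf += k / k.sum()
    return psf / psf.sum()

def deconv_edof(img, psf, nsr=1e-3):
    """Wiener deconvolution of the captured sweep image with the sweep PSF."""
    pad = np.zeros(img.shape)
    r, c = psf.shape
    pad[:r, :c] = psf
    pad = np.roll(pad, (-(r // 2), -(c // 2)), axis=(0, 1))  # center -> origin
    H = np.fft.fft2(pad)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(np.fft.fft2(img) * W))
```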
211. When does computational imaging improve performance?
- Author
-
Oliver Cossairt, Mohit Gupta, and Shree K. Nayar
- Subjects
Computer science, Image quality, Noise (signal processing), Noise reduction, Image processing and computer vision, Digital photography, Computer graphics and computer-aided design, Computational photography, Computer engineering, Daylight, Computer vision, Artificial intelligence, Deconvolution, Throughput, Software, Image restoration
- Abstract
A number of computational imaging techniques have been introduced to improve image quality by increasing light throughput. These techniques use optical coding to measure a stronger signal level. However, the performance of these techniques is limited by the decoding step, which amplifies noise. Although it is well understood that optical coding can increase performance at low light levels, little is known about the quantitative performance advantage of computational imaging in general settings. In this paper, we derive performance bounds for various computational imaging techniques. We then discuss the implications of these bounds for several real-world scenarios (e.g., illumination conditions, scene properties, and sensor noise characteristics). Our results show that computational imaging techniques do not provide a significant performance advantage when imaging with illumination that is brighter than typical daylight. These results can readily be used by practitioners to design the most suitable imaging systems given the application at hand.
- Published
- 2012
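The kind of bound derived in entry 211 can be previewed with a toy calculation: Hadamard-style multiplexing sums roughly n/2 sources per measurement, demultiplexing shrinks independent noise by about 4/n, but photon noise grows with the summed signal. Under that affine noise model (my constants, not the paper's), the gain evaporates for bright scenes, consistent with the abstract's daylight conclusion:

```python
import numpy as np

def mux_snr_gain(n, signal, read_noise):
    """SNR gain of n-channel Hadamard-style multiplexing over direct capture.

    Each coded measurement sums ~n/2 sources (photon variance ~(n/2)*signal);
    demultiplexing scales the remaining noise variance by ~4/n.
    """
    var_single = signal + read_noise**2                       # direct capture
    var_mux = (4.0 / n) * ((n / 2.0) * signal + read_noise**2)
    return np.sqrt(var_single / var_mux)

for photons in (10, 100, 10000):        # dim scene -> daylight-like scene
    print(photons, round(float(mux_snr_gain(64, photons, 5.0)), 2))
```

For the dim scene the gain exceeds 1; for the bright scene it drops below 1, so multiplexing actively hurts.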
212. Scaling law for computational imaging using spherical optics
- Author
-
Shree K. Nayar, Oliver Cossairt, and Daniel Miau
- Subjects
Deblurring, Computer science, Image quality, Resolution, Image processing and computer vision, Image processing, Tracking, Object detection, Optics, Digital image processing, Computer vision and pattern recognition, Deconvolution
- Abstract
The resolution of a camera system determines the fidelity of visual features in captured images. Higher resolution implies greater fidelity and, thus, greater accuracy when performing automated vision tasks, such as object detection, recognition, and tracking. However, the resolution of any camera is fundamentally limited by geometric aberrations. In the past, it has generally been accepted that the resolution of lenses with geometric aberrations cannot be increased beyond a certain threshold. We derive an analytic scaling law showing that, for lenses with spherical aberrations, resolution can be increased beyond the aberration limit by applying a postcapture deblurring step. We then show that resolution can be further increased when image priors are introduced. Based on our analysis, we advocate for computational camera designs consisting of a spherical lens shared by several small planar sensors. We show example images captured with a proof-of-concept gigapixel camera, demonstrating that high resolution can be achieved with a compact form factor and low complexity. We conclude with an analysis on the trade-off between performance and complexity for computational imaging systems with spherical lenses.
- Published
- 2011
213. Gigapixel Computational Imaging
- Author
-
Daniel Miau, Oliver Cossairt, and Shree K. Nayar
- Subjects
Pixel, Computation, Image processing and computer vision, Field of view, Image processing, Computational geometry, Lens (optics), Computer graphics (images), Computer vision, Artificial intelligence, Image sensor, Image resolution, Mathematics
- Abstract
Today, consumer cameras produce photographs with tens of millions of pixels. The recent trend in image sensor resolution seems to suggest that we will soon have cameras with billions of pixels. However, the resolution of any camera is fundamentally limited by geometric aberrations. We derive a scaling law that shows that, by using computations to correct for aberrations, we can create cameras with unprecedented resolution that have low lens complexity and compact form factor. In this paper, we present an architecture for gigapixel imaging that is compact and utilizes a simple optical design. The architecture consists of a ball lens shared by several small planar sensors, and a post-capture image processing stage. Several variants of this architecture are shown for capturing a contiguous hemispherical field of view as well as a complete spherical field of view. We demonstrate the effectiveness of our architecture by showing example images captured with two proof-of-concept gigapixel cameras.
- Published
- 2011
214. Spectral Focal Sweep: Extended depth of field from chromatic aberrations
- Author
-
Oliver Cossairt and Shree K. Nayar
- Subjects
Point spread function, Depth of focus, Computer science, Image processing and computer vision, Camera lens, Lens (optics), Optics, Cardinal point, Chromatic aberration, Focal length, Computer vision, Depth of field, Artificial intelligence
- Abstract
In recent years, many new camera designs have been proposed which preserve image detail over a larger depth range than conventional cameras. These methods rely on either mechanical motion or a custom optical element placed in the pupil plane of a camera lens to create the desired point spread function (PSF). This work introduces a new Spectral Focal Sweep (SFS) camera which can be used to extend depth of field (DOF) when some information about the reflectance spectra of objects being imaged is known. Our core idea is to exploit the principle that for a lens without chromatic correction, the focal length varies with wavelength. We use a SFS camera to capture an image that effectively “sweeps” the focal plane continuously through a scene without the need for either mechanical motion or custom optical elements. We demonstrate that this approach simplifies lens design constraints, enabling an inexpensive implementation to be constructed with off-the-shelf components. We verify the effectiveness of our implementation and show several example images illustrating a significant increase in DOF over conventional cameras.
- Published
- 2010
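The Spectral Focal Sweep idea in entry 214 can be illustrated by integrating defocus PSFs over wavelength: because the focal length of an uncorrected lens varies with wavelength, the spectrum-weighted PSF becomes approximately depth-invariant. A thin-lens toy sketch with an assumed linear dispersion model and Gaussian blur kernels; every constant here is hypothetical:

```python
import numpy as np

def sfs_psf(depth, wavelengths, spectrum, f0=50e-3, slope=2e-6,
            aperture=0.01, sensor_dist=51e-3, pix=5e-6, size=65):
    """Wavelength-integrated PSF of a lens without chromatic correction.

    Thin-lens toy: focal length f(wl) = f0 + slope*(wl - 550e-9)/1e-9 shifts
    with wavelength wl, so each wavelength focuses at a different distance.
    Each wavelength contributes a Gaussian blur whose radius follows the
    geometric defocus circle; contributions sum weighted by the spectrum.
    """
    ax = (np.arange(size) - size // 2) * pix
    xx, yy = np.meshgrid(ax, ax, indexing="ij")
    psf = np.zeros((size, size))
    for wl, w in zip(wavelengths, spectrum):
        f = f0 + slope * (wl - 550e-9) / 1e-9       # hypothetical dispersion
        img_dist = 1.0 / (1.0 / f - 1.0 / depth)    # thin-lens image plane
        blur_r = aperture * abs(sensor_dist - img_dist) / img_dist
        sigma = max(blur_r / 2.0, pix / 2.0)        # crude Gaussian stand-in
        k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
        psf += w * k / k.sum()
    return psf / psf.sum()
```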
215. Diffusion coded photography for extended depth of field
- Author
-
Shree K. Nayar, Oliver Cossairt, and Changyin Zhou
- Subjects
Point spread function, Deblurring, Photography, Image processing and computer vision, Computer graphics and computer-aided design, Amplitude, Kernel (image processing), Computer vision, Depth of field, Artificial intelligence, Light field, Mathematics, Coding
- Abstract
In recent years, several cameras have been introduced which extend depth of field (DOF) by producing a depth-invariant point spread function (PSF). These cameras extend DOF by deblurring a captured image with a single spatially-invariant PSF. For these cameras, the quality of recovered images depends both on the magnitude of the PSF spectrum (MTF) of the camera, and the similarity between PSFs at different depths. While researchers have compared the MTFs of different extended DOF cameras, relatively little attention has been paid to evaluating their depth invariances. In this paper, we compare the depth invariance of several cameras, and introduce a new camera that improves in this regard over existing designs, while still maintaining a good MTF. Our technique utilizes a novel optical element placed in the pupil plane of an imaging system. Whereas previous approaches use optical elements characterized by their amplitude or phase profile, our approach utilizes one whose behavior is characterized by its scattering properties. Such an element is commonly referred to as an optical diffuser, and thus we refer to our new approach as diffusion coding. We show that diffusion coding can be analyzed in a simple and intuitive way by modeling the effect of a diffuser as a kernel in light field space. We provide detailed analysis of diffusion coded cameras and show results from an implementation using a custom designed diffuser.
- Published
- 2010
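The light-field-space view in entry 215 can be sketched in flatland: a defocused point traces a sheared line in the (x, u) light field, the diffuser acts as a scatter kernel, and the sensor PSF is the projection over the aperture coordinate u. The following is a simplifying reading of that model (Gaussian scatter along x per aperture sample), not the paper's actual radially symmetric kernel, which is what yields the near depth-invariance:

```python
import numpy as np

def flatland_psf(defocus, diffuser_sigma, n_x=257, n_u=64):
    """Sensor PSF of a (2D flatland) diffusion-coded camera.

    A point source at some depth traces the sheared line x = defocus * u in
    the (x, u) light field. The diffuser scatters along x for each aperture
    sample u (Gaussian kernel here); the PSF is the projection over u.
    """
    x = np.linspace(-1.0, 1.0, n_x)
    u = np.linspace(-1.0, 1.0, n_u)
    lf = np.zeros((n_u, n_x))
    for i, ui in enumerate(u):
        j = int(np.argmin(np.abs(x - defocus * ui)))   # sheared delta line
        lf[i, j] = 1.0
    kern = np.exp(-x**2 / (2 * diffuser_sigma**2))
    kern /= kern.sum()
    for i in range(n_u):
        lf[i] = np.convolve(lf[i], kern, mode="same")  # scatter along x
    return lf.sum(axis=0) / n_u                        # project over aperture
```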
216. Light field transfer
- Author
-
Oliver Cossairt, Ravi Ramamoorthi, and Shree K. Nayar
- Subjects
Computer science, Global illumination, Interface (computing), Image processing and computer vision, Video camera, Computer graphics and computer-aided design, Projector, Computer graphics (images), Compositing, Radiance, Computer vision, Augmented reality, Specular reflection, Artificial intelligence, Light field, Computer graphics
- Abstract
We present a novel image-based method for compositing real and synthetic objects in the same scene with a high degree of visual realism. Ours is the first technique to allow global illumination and near-field lighting effects between both real and synthetic objects at interactive rates, without needing a geometric and material model of the real scene. We achieve this by using a light field interface between real and synthetic components---thus, indirect illumination can be simulated using only two 4D light fields, one captured from and one projected onto the real scene. Multiple bounces of interreflections are obtained simply by iterating this approach. The interactivity of our technique enables its use with time-varying scenes, including dynamic objects. This is in sharp contrast to the alternative approach of using 6D or 8D light transport functions of real objects, which are very expensive in terms of acquisition and storage and hence not suitable for real-time applications. In our method, 4D radiance fields are simultaneously captured and projected by using a lens array, video camera, and digital projector. The method supports full global illumination with restricted object placement, and accommodates moderately specular materials. We implement a complete system and show several example scene compositions that demonstrate global illumination effects between dynamic real and synthetic objects. Our implementation requires a single point light source and dark background.
- Published
- 2008
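The bounce iteration described in entry 216 has a simple linear-algebra skeleton: with transport operators mapping the light field leaving one half of the scene to the extra light leaving the other half, multiple interreflection bounces are accumulated by repeated application. A sketch with flattened light fields and dense matrices standing in for the capture/projection hardware; all names are hypothetical:

```python
import numpy as np

def composite_interreflections(direct_real, direct_synth, T_rs, T_sr, bounces=3):
    """Accumulate interreflection bounces across the light-field interface.

    direct_real, direct_synth : flattened 4D light fields initially leaving
                                the real and synthetic halves of the scene
    T_rs : maps light leaving the synthetic half to the extra light it causes
           to leave the real half; T_sr is the reverse direction
    """
    out_r, out_s = direct_real.copy(), direct_synth.copy()
    in_r, in_s = direct_synth, direct_real       # first-bounce incoming light
    for _ in range(bounces):
        add_r, add_s = T_rs @ in_r, T_sr @ in_s  # one interreflection bounce
        out_r += add_r
        out_s += add_s
        in_r, in_s = add_s, add_r                # feed the next bounce
    return out_r, out_s
```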
217. Imaging artifact precompensation for spatially multiplexed 3-D displays
- Author
-
Thomas J. Purtell, Oliver Cossairt, Sourav R. Dey, Samuel L. Hill, Gregg E. Favalora, Joshua Napoli, and Sandy Stutsman
- Subjects
Optics, Pixel, Computer science, Image quality, Autostereoscopy, Image processing and computer vision, Computer vision, Artificial intelligence, Parallax, Multiplexing, Superresolution
- Abstract
We describe a projection system that presents a 20 megapixel image using a single XGA SLM and time-division multiplexing. The system can be configured as a high-resolution 2-D display or a highly multi-view horizontal parallax display. In this paper, we present a technique for characterizing the light transport function of the display and for precompensating the image for the measured transport function. The techniques can improve the effective quality of the display without modifying its optics. Precompensation is achieved by approximately solving a quadratic optimization problem. Compared to a linear filter, this technique is not limited by a fixed kernel size and can propagate image detail to all related pixels. Large pixel-count images are supported through dividing the problem into blocks. A remedy for blocking artifacts is given. Results of the algorithm are presented based on simulations of a display design. The display characterization method is suitable for experimental designs that may be dim and imperfectly aligned. Simulated results of the characterization and precompensation process are presented. RMS and qualitative improvement of display image quality are demonstrated.
- Published
- 2008
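The precompensation in entry 217 solves a constrained quadratic problem: choose input pixel values x so the display's measured light transport A maps them as close as possible to the target image while staying in the valid pixel range. A minimal projected-gradient sketch; this is a generic solver for that problem shape, while the paper's block-wise method adds more machinery:

```python
import numpy as np

def precompensate(A, target, iters=500):
    """Solve min_x ||A x - target||^2  s.t. 0 <= x <= 1  by projected gradient.

    A      : (m, n) measured light-transport matrix of the display
    target : (m,) desired output image, flattened
    """
    # Step size 1/L with L = ||A||^2 estimated by power iteration on A^T A.
    v = np.random.default_rng(0).standard_normal(A.shape[1])
    for _ in range(50):
        v = A.T @ (A @ v)
        v /= np.linalg.norm(v)
    L = v @ (A.T @ (A @ v))
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - target)
        x = np.clip(x - grad / L, 0.0, 1.0)   # project onto valid pixel range
    return x
```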
218. High spatio-temporal resolution video with compressed sensing
- Author
-
Lukas Schmid, Aggelos K. Katsaggelos, Guido M. Schuster, Oliver Cossairt, Nathan Matsuda, Leonidas Spinoulas, Thomas Niederberger, and Roman Koller
- Subjects
Computer science, Image quality, Image processing and computer vision, Reconstruction algorithm, Video camera, Optical path, Optics, Compressed sensing, Temporal resolution, Computer vision, Artificial intelligence, Image sensor, Image resolution
- Abstract
We present a prototype compressive video camera that encodes scene movement using a translated binary photomask in the optical path. The encoded recording can then be used to reconstruct multiple output frames from each captured image, effectively synthesizing high speed video. The use of a printed binary mask allows reconstruction at higher spatial resolutions than has been previously demonstrated. In addition, we improve upon previous work by investigating tradeoffs in mask design and reconstruction algorithm selection. We identify a mask design that consistently provides the best performance across multiple reconstruction strategies in simulation, and verify it with our prototype hardware. Finally, we compare reconstruction algorithms and identify the best choice in terms of balancing reconstruction quality and speed.
- Published
- 2015
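The forward model in entry 218 is a coded snapshot: one exposure records the mask-modulated sum of T latent sub-frames, y = Σ_t M_t · x_t, and reconstruction inverts this underdetermined map with a prior. A bare-bones gradient-descent sketch of the data term; priors are omitted, and the shapes and step size are my assumptions:

```python
import numpy as np

def reconstruct_frames(y, masks, iters=300, step=None):
    """Recover T frames from one coded snapshot y = sum_t masks[t] * x[t].

    y     : (H, W) captured coded image
    masks : (T, H, W) binary mask pattern for each sub-frame time slot
    Plain gradient descent on 0.5*||sum_t M_t x_t - y||^2; real systems add
    a sparsity/TV prior, as the paper's algorithm comparison discusses.
    """
    T = masks.shape[0]
    if step is None:
        step = 1.0 / T                          # safe: operator norm <= T
    x = np.repeat((y / T)[None], T, axis=0)     # crude initialization
    for _ in range(iters):
        resid = (masks * x).sum(axis=0) - y     # forward-model mismatch
        x -= step * masks * resid[None]         # adjoint scatters per frame
    return x
```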
219. Spatial 3D infrastructure: display-independent software framework, high-speed rendering electronics, and several new displays
- Author
-
Thomas J. Purtell, Joshua Napoli, Deirdre M. Hall, Won Chun, Oliver Cossairt, Yigal Banker, Gregg E. Favalora, James F. Schooler, and Rick K. Dorval
- Subjects
Workstation, Computer science, OpenGL, Rendering (computer graphics), Visualization, Software framework, Software, Computer graphics (images), Autostereoscopy, Graphics, Computer graphics
- Abstract
We present a software and hardware foundation to enable the rapid adoption of 3-D displays. Different 3-D displays - such as multiplanar, multiview, and electroholographic displays - naturally require different rendering methods. The adoption of these displays in the marketplace will be accelerated by a common software framework. The authors designed the SpatialGL API, a new rendering framework that unifies these display methods under one interface. SpatialGL enables complementary visualization assets to coexist through a uniform infrastructure. Also, SpatialGL supports legacy interfaces such as the OpenGL API. The authors’ first implementation of SpatialGL uses multiview and multislice rendering algorithms to exploit the performance of modern graphics processing units (GPUs) to enable real-time visualization of 3-D graphics from medical imaging, oil & gas exploration, and homeland security. At the time of writing, SpatialGL runs on COTS workstations (both Windows and Linux) and on Actuality’s high-performance embedded computational engine that couples an NVIDIA GeForce 6800 Ultra GPU, an AMD Athlon 64 processor, and a proprietary, high-speed, programmable volumetric frame buffer that interfaces to a 1024 x 768 x 3 digital projector. Progress is illustrated using an off-the-shelf multiview display, Actuality’s multiplanar Perspecta Spatial 3D System, and an experimental multiview display. The experimental display is a quasi-holographic view-sequential system that generates aerial imagery measuring 30 mm x 25 mm x 25 mm, providing 198 horizontal views.
- Published
- 2005
220. Novel view sequential display based on DMD technology
- Author
-
Oliver Cossairt, Christian Moller, Adrian R. L. Travis, and Stephen A. Benton
- Subjects
Computer science, Computer graphics (images), Stereo display, Projection
- Abstract
The authors present work that was conducted as a collaboration between Cambridge University and MIT. The work is a continuation of previous research at Cambridge University, where several view-sequential 3D displays were built. The authors discuss a new display which they built and compare performance to previous versions. The new display utilizes a DMD projection engine, whereas previous versions used high frame rate CRTs to generate imagery. The benefits of this technique are discussed, and suggestions for future improvements are made.
- Published
- 2004
221. Investigation into screenless 3D TV
- Author
-
Oliver Cossairt, Adrian Travis, Christian Moller, Lucy Stockbridge, and Stephen A. Benton
- Subjects
Temperature gradient, Optics, Materials science, Screenless, Three-dimensional television, Refractive index, Pressure gradient, Sound wave, Light scattering, Radius of curvature (optics)
- Abstract
If a three-dimensional image is to be projected into mid-air in a room with bare walls, then light must follow a curving path. Since this does not happen in a vacuum, a gradient must be introduced into the refractive index of the air itself, which can be done by varying either the temperature or the pressure of the air. A reduction from 300°C to room temperature across the front of a 1 mm wide ray will bend it with a radius of curvature of 3 m. However, the temperature gradient cannot be sustained without an unacceptably aggressive mechanism for cooling. The pressure gradients delivered by sound waves are dynamically sustainable, but even powers as extreme as 175 dBm at 25 kHz deliver a radius of curvature of only 63 m. It appears that something will have to be added to the air if such displays are to be possible.
- Published
- 2003
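The curvature figures in entry 221 follow from graded-index ray optics: a ray bends toward higher index with radius R ≈ n / |dn/dy|. A back-of-envelope check using the density scaling of air's refractivity; the constants are mine, and the result lands within a factor of a few of the abstract's 3 m figure, depending on the index model used:

```python
def n_air(T_kelvin, refractivity_at_T0=2.8e-4, T0=293.0):
    """Refractive index of air: (n - 1) scales with density ~ 1/T at fixed pressure."""
    return 1.0 + refractivity_at_T0 * (T0 / T_kelvin)

dn = n_air(293.0) - n_air(573.0)   # index step from room temperature to 300 C
grad = dn / 1e-3                    # gradient across a 1 mm wide ray (per meter)
R = 1.0 / grad                      # bend radius R ~ n / |dn/dy| with n ~ 1
print(f"dn = {dn:.2e}, bend radius = {R:.1f} m")
```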
222. Occlusion-capable multiview volumetric three-dimensional display
- Author
-
Gregg E. Favalora, Joshua Napoli, Rick K. Dorval, Samuel L. Hill, and Oliver Cossairt
- Subjects
Pixel, Geometrical optics, Computer science, Observer, Volumetric display, Stereo display, Rendering (computer graphics), Optics, Projector, Occlusion, Medical imaging, Holographic display
- Abstract
Volumetric 3D displays are frequently purported to lack the ability to reconstruct scenes with viewer-position-dependent effects such as occlusion. To counter these claims, a swept-screen 198-view horizontal-parallax-only 3D display is reported here that is capable of viewer-position-dependent effects. A digital projector illuminates a rotating vertical diffuser with a series of multiperspective 768 x 768 pixel renderings of a 3D scene. Evidence of near-far object occlusion is reported. The aggregate virtual screen surface for a stationary observer is described, as are guidelines to construct a full-parallax system and the theoretical ability of the present system to project imagery outside of the volume swept by the screen.
- Published
- 2007
223. A low-cost solution for 3D reconstruction of large-scale specular objects
- Author
-
Aggelos K. Katsaggelos, Yunhao Li, Bingjie Xu, Florian Schiffers, Florian Willomitzer, Chia-Kai Yeh, Oliver Cossairt, Marc Walton, and Jack Tumblin
- Subjects
Cultural heritage, Optics, Liquid-crystal display, Scale, Computer science, 3D reconstruction, Calibration, Specular reflection, Three-dimensional measurement
- Abstract
In this paper, we present a low-cost 3D reconstruction method for large-scale specular objects based on deflectometry. Experiments show that our system reaches high accuracy and meets the requirements of the target applications in cultural heritage preservation.
224. Fast simulations in computer-generated holograms for binary data storage
- Author
-
Oliver Cossairt, Hamid Hasani, Florian Schiffers, Zihao W. Wang, Jack Tumblin, Prasan Shedligeri, Manuel Ballester, Aggelos K. Katsaggelos, Lionel Fiske, and Florian Willomitzer
- Subjects
Wave propagation, Computer science, Binary data, Reflection (physics), Holography, Volume hologram, Function (mathematics), Born approximation, Algorithm, Ptychography
- Abstract
We present an efficient simulation of the recording and playback phases of a 2D image in a reflection volume hologram. The proposed algorithm uses free-space Green’s function propagation and assumes the Born approximation.
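The two ingredients named in entry 224, Green's-function free-space propagation and the Born approximation, combine into a slice-wise single-scattering forward model: each slice of the stored index modulation scatters the undisturbed probe once, and the contributions are propagated to the exit plane. A sketch of that discretization, written in transmission geometry for simplicity with invented constants and shapes; the paper treats a reflection hologram, so this is not the authors' algorithm:

```python
import numpy as np

def propagate(field, dz, wavelength, dx):
    """Angular-spectrum free-space propagation by distance dz."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    arg = np.maximum(1.0 / wavelength**2 - FX**2 - FY**2, 0.0)
    H = np.exp(2j * np.pi * np.sqrt(arg) * dz) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

def born_playback(probe, delta_n, dz, wavelength, dx):
    """Single-scattering (first Born) playback of a stored volume grating.

    probe   : (N, N) complex incident field at the first slice
    delta_n : (S, N, N) stored refractive-index modulation, slice by slice
    """
    k2 = (2 * np.pi / wavelength) ** 2
    out = np.zeros_like(probe)
    field = probe
    for s in range(delta_n.shape[0]):
        scat = k2 * delta_n[s] * field * dz           # Born source, one slice
        out = propagate(out + scat, dz, wavelength, dx)
        field = propagate(field, dz, wavelength, dx)  # probe undisturbed (Born)
    return out
```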
225. Performance bounds for computational imaging
- Author
-
Ashok Veeraraghavan, Kaushik Mitra, Mohit Gupta, and Oliver Cossairt
- Subjects
Mathematical optimization, Noise (signal processing), Gaussian noise, Computer science, Gaussian, Image processing, Deconvolution, Mixture model, Algorithm, Multiplexing, Signal
- Abstract
We analyze the effects of multiplexing under a noise model incorporating both signal-dependent and signal-independent noise, with scene priors modeled both as a Gaussian and as a mixture of Gaussians.
226. Regularization for undersampled ptychography
- Author
-
Aggelos K. Katsaggelos, Oliver Cossairt, Florian Schiffers, Semih Barutcu, Prasan Shedligeri, and Pablo Ruiz
- Subjects
Computer science, Prior probability, Pattern recognition, Image processing, Iterative reconstruction, Artificial intelligence, Object, Phase retrieval, Regularization (mathematics), Ptychography
- Abstract
Ptychography becomes increasingly ill-posed when the overlap between neighboring scan points is reduced, inhibiting object reconstruction. Here, we discuss and show reconstructions with low overlap ratios obtained by regularizing with priors such as Total Variation and the Structure Tensor prior.
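Regularized ptychographic reconstruction, as in entry 226, typically alternates a data-consistency update at each scan position with a gradient step on the prior. Below is a minimal ePIE-style sweep with a smoothed Total Variation step bolted on; the probe is assumed known and the TV treatment of complex images is simplified, so this sketches the structure rather than the authors' algorithm:

```python
import numpy as np

def tv_grad(x, eps=1e-8):
    """Gradient of smoothed (isotropic) total variation of a complex image."""
    gx = np.diff(x, axis=0, append=x[-1:, :])
    gy = np.diff(x, axis=1, append=x[:, -1:])
    mag = np.sqrt(np.abs(gx)**2 + np.abs(gy)**2 + eps)
    div = (np.diff(gx / mag, axis=0, prepend=(gx / mag)[:1, :]) +
           np.diff(gy / mag, axis=1, prepend=(gy / mag)[:1, :]))
    return -div

def ptycho_step(obj, probe, positions, intensities, step=0.1, tv_weight=0.01):
    """One sweep of amplitude-projection updates plus a TV prior step.

    obj         : (H, W) complex object estimate
    probe       : (h, w) complex illumination, assumed known
    positions   : list of (row, col) top-left scan coordinates
    intensities : measured far-field intensities, one (h, w) array per position
    """
    h, w = probe.shape
    for (r, c), I in zip(positions, intensities):
        patch = obj[r:r+h, c:c+w]
        exit_wave = probe * patch
        F = np.fft.fft2(exit_wave)
        F = np.sqrt(I) * np.exp(1j * np.angle(F))   # enforce measured amplitude
        new_exit = np.fft.ifft2(F)
        upd = np.conj(probe) * (new_exit - exit_wave) / (np.abs(probe)**2 + 1e-8).max()
        obj[r:r+h, c:c+w] = patch + step * upd      # ePIE-style object update
    obj -= step * tv_weight * tv_grad(obj)          # prior: total variation
    return obj
```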