59 results on '"Per-pixel lighting"'
Search Results
2. Lighting Design for Globally Illuminated Volume Rendering
- Author
-
Yubo Zhang and Kwan-Liu Ma
- Subjects
Per-pixel lighting ,Light ,Computer science ,Global illumination ,Graphics hardware ,Volumetric lighting ,Tone mapping ,Backlight ,Sensitivity and Specificity ,Computer graphics ,User-Computer Interface ,Imaging, Three-Dimensional ,Data visualization ,Computer graphics (images) ,Image Interpretation, Computer-Assisted ,Shadow ,Computer Graphics ,Scattering, Radiation ,Computer vision ,Lighting ,ComputingMethodologies_COMPUTERGRAPHICS ,business.industry ,Reproducibility of Results ,Volume rendering ,Image Enhancement ,Computer Graphics and Computer-Aided Design ,Image-based lighting ,Signal Processing ,Computer-Aided Design ,Computer Vision and Pattern Recognition ,Shading ,Artificial intelligence ,business ,Algorithms ,Software - Abstract
With the evolution of graphics hardware, high quality global illumination has become available for real-time volume rendering. Compared to local illumination, global illumination can produce realistic shading effects that are closer to real-world scenes, and it has proven useful for enhancing volume data visualization to enable better depth and shape perception. However, setting up optimal lighting can be a nontrivial task for average users. Previous lighting design work for volume visualization did not consider global light transport. In this paper, we present a lighting design method for volume visualization employing global illumination. The resulting system takes into account the view- and transfer-function-dependent content of the volume data to automatically generate an optimized three-point lighting environment. Our method fully exploits the back light, which is not used by previous volume visualization systems. By also including global shadows and multiple scattering, our lighting system can effectively enhance the depth and shape perception of volumetric features of interest. In addition, we propose an automatic tone mapping operator which recovers visual details from overexposed areas while maintaining sufficient contrast in dark areas. We show that our method is effective for visualizing volume datasets with complex structures. The structural information is presented more clearly and correctly under the automatically generated light sources.
- Published
- 2013
- Full Text
- View/download PDF
3. Interactive Lighting Design with Hierarchical Light Representation
- Author
-
Jung-Hong Chuang, Tan-Chi Ho, Yueh-Tse Chen, Tsung-Shian Huang, and Wen-Chieh Lin
- Subjects
Per-pixel lighting ,Image-based lighting ,Computer science ,business.industry ,Computer graphics (images) ,Computer vision ,Artificial intelligence ,business ,Computer Graphics and Computer-Aided Design ,Rendering (computer graphics) - Abstract
Lighting design plays a crucial role in indoor lighting design, computer cinematography, and many other applications. Computer-assisted lighting design aims to find a lighting configuration that best approximates the illumination effect specified by designers. In this paper, we present an automatic approach for lighting design in which discrete and continuous optimization of the lighting configuration, including the number, intensity, and position of lights, is performed. Our lighting design algorithm consists of two major steps. The first step estimates an initial lighting configuration by light sampling and clustering. The initial light clusters are then recursively merged to form a light hierarchy. The second step optimizes the lighting configuration by alternately selecting a light cut on the light hierarchy to determine the number of representative lights and optimizing the lighting parameters using the simplex method. To speed up the optimization, only illumination at scene vertices that are important to rendering is sampled and taken into account. Using the proposed approach, we develop a lighting design system that can compute appropriate lighting configurations to reproduce the illumination effects a designer paints and modifies interactively.
- Published
- 2013
- Full Text
- View/download PDF
4. Deferred voxel shading for real-time global illumination
- Author
-
José Jesús Villegas and Esmitt Ramírez
- Subjects
Per-pixel lighting ,Global illumination ,business.industry ,Mipmap ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,computer.software_genre ,GeneralLiterature_MISCELLANEOUS ,Real-time rendering ,Rendering (computer graphics) ,Geography ,Voxel ,Computer graphics (images) ,Ambient occlusion ,Computer vision ,Cone tracing ,Artificial intelligence ,business ,computer ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
Computing indirect illumination is a challenging and complex problem for real-time rendering in 3D applications. We present a global illumination approach that computes indirect lighting in real time using a simplified version of the outgoing radiance and a voxel representation of the scene. This approach handles two-bounce indirect lighting for diffuse, specular, and emissive materials. Our voxel structure is a directional hierarchy stored in mipmapped 3D textures; it is updated in real time on the GPU, which enables us to approximate indirect lighting for dynamic scenes. Our algorithm employs a voxel-light pass which calculates voxel direct and global illumination for the simplified outgoing radiance. We perform voxel cone tracing within this voxel structure to approximate different lighting phenomena such as ambient occlusion, soft shadows, and indirect lighting. We demonstrate with different tests that our approach is capable of computing global illumination for complex scenes at interactive rates.
- Published
- 2016
- Full Text
- View/download PDF
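Entry 4 above approximates indirect lighting by cone tracing through a mipmapped voxel structure. The sketch below is a minimal CPU-side illustration of a single front-to-back cone trace in that general spirit; the voxel grid, its sampleVoxelMip lookup (stubbed here), and all constants are hypothetical stand-ins rather than the authors' implementation.

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };
struct Vec4 { float r, g, b, a; };  // pre-filtered radiance + occlusion

// Stub so the sketch is self-contained; a real implementation would do a
// trilinear, mip-interpolated fetch from the pre-filtered voxel texture.
Vec4 sampleVoxelMip(const Vec3& /*worldPos*/, float /*mipLevel*/) {
    return {0.1f, 0.1f, 0.1f, 0.05f};
}

// Accumulate radiance along one cone (e.g. a diffuse or specular lobe).
// apertureTan = tan(half cone angle); voxelSize = edge length of a level-0 voxel.
Vec4 traceCone(Vec3 origin, Vec3 dir, float apertureTan,
               float voxelSize, float maxDist) {
    Vec4 acc{0.0f, 0.0f, 0.0f, 0.0f};
    float t = voxelSize;                       // start one voxel away to avoid self-lighting
    while (t < maxDist && acc.a < 0.99f) {
        float diameter = std::max(voxelSize, 2.0f * apertureTan * t);
        float mip = std::log2(diameter / voxelSize);
        Vec3 p{origin.x + dir.x * t, origin.y + dir.y * t, origin.z + dir.z * t};
        Vec4 s = sampleVoxelMip(p, mip);
        float w = (1.0f - acc.a) * s.a;        // front-to-back compositing
        acc.r += w * s.r; acc.g += w * s.g; acc.b += w * s.b;
        acc.a += w;
        t += diameter * 0.5f;                  // step proportional to cone width
    }
    return acc;                                // rgb = indirect radiance, a = occlusion
}
```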
5. Instant Mixed Reality Lighting from Casual Scanning
- Author
-
Denis Kalkofen, Dieter Schmalstieg, Thomas Richter-Trummer, and Jinwoo Park
- Subjects
Per-pixel lighting ,Computer science ,business.industry ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,020207 software engineering ,02 engineering and technology ,Volumetric lighting ,Iterative reconstruction ,Spherical harmonic lighting ,Mixed reality ,Rendering (computer graphics) ,Image-based lighting ,Robustness (computer science) ,020204 information systems ,Computer graphics (images) ,0202 electrical engineering, electronic engineering, information engineering ,Computer vision ,Artificial intelligence ,business ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
We present a method for recovering both incident lighting and surface materials from casually scanned geometry. By casual, we mean a rapid and potentially noisy scanning procedure of unmodified and uninstrumented scenes with a commodity RGB-D sensor. In other words, unlike reconstruction procedures which require careful preparations in a laboratory environment, our method works with input that can be obtained by consumer users. To ensure a robust procedure, we segment the reconstructed geometry into surfaces with homogeneous material properties and compute the radiance transfer on these segments. With this input, we solve the inverse rendering problem of factorization into lighting and material properties using an iterative optimization in spherical harmonics form. This allows us to account for self-shadowing and recover specular properties. The resulting data can be used to generate a wide range of mixed reality applications, including the rendering of synthetic objects with matching lighting into a given scene, but also re-rendering the scene (or a part of it) with new lighting. We show the robustness of our approach with real and synthetic examples under a variety of lighting conditions and compare them with ground truth data.
- Published
- 2016
- Full Text
- View/download PDF
6. Automatic light compositing using rendered images
- Author
-
Kadi Bouatouch, Matis Hudon, and Rémi Cozot
- Subjects
Per-pixel lighting ,business.industry ,Computer science ,Photography ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Volumetric lighting ,Solid modeling ,Set (abstract data type) ,Image-based lighting ,Computer graphics (images) ,Histogram ,Compositing ,Computer vision ,Artificial intelligence ,business ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
Lighting is a key element in photography. Professional photographers often work with complex lighting setups to directly capture an image close to the targeted one. Some photographers reverse this traditional workflow: they capture the scene under several lighting conditions, then combine the captured images to obtain the intended one. Acquiring such a set of images is a tedious task, and combining them requires some skill in photography. We propose a fully automatic method that renders, based on a 3D reconstructed model (shape and albedo), a set of images corresponding to several lighting conditions. The resulting images are combined using a genetic optimization algorithm to match the desired lighting provided by the user as an image.
- Published
- 2016
- Full Text
- View/download PDF
7. Chromaticity improvement in images with poor lighting using the Multiscale Retinex (MSR) algorithm
- Author
-
Volodymyr Ponomaryov, J Rosales Silva Alberto, J Gallegos Funes Francisco, Victor F. Kravchenko, and Dehesa Gonzalez Mario
- Subjects
Per-pixel lighting ,Color constancy ,Computer science ,Machine vision ,Visibility (geometry) ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Video camera ,Standard illuminant ,Luminance ,law.invention ,Image-based lighting ,law ,Algorithm ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
An important factor when capturing images by electronic means (a photographic or video camera) is the type of lighting source present at the moment of capture. If the lighting is deficient, information in the image is hidden from the Human Visual System (HVS): color and detail characteristics appear at low luminance values. As a solution to this lighting problem, the Multiscale Retinex (MSR) algorithm is proposed, based on an understanding of how the HVS interprets and adapts its perception of color. This solution can be used as a preprocessing stage to compensate for the lack of an adequate lighting source such as the standard illuminant D65 (a light source with characteristics similar to midday sunlight). The MSR algorithm increases the chromatic content of the image, which improves the visibility of objects in the scene. The effects of poor lighting are minimized, and the magnitude of the resulting chromaticity vector is evaluated to characterize the MSR.
- Published
- 2016
- Full Text
- View/download PDF
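Entry 7 above relies on the Multiscale Retinex, which for each scale compares the log of the image with the log of a Gaussian-blurred surround and averages the differences. The sketch below shows that per-pixel combination for one channel; it assumes the blurred surrounds have already been computed (e.g. with separable Gaussian filters), and the scale weights are illustrative, not the paper's.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Single-channel MSR: R(x) = sum_i w_i * ( log(I(x)+eps) - log(S_i(x)+eps) ),
// where S_i is the image blurred with a Gaussian of scale sigma_i.
std::vector<float> multiscaleRetinex(const std::vector<float>& image,
                                     const std::vector<std::vector<float>>& surrounds,
                                     const std::vector<float>& weights) {
    const float eps = 1e-6f;                       // avoids log(0)
    std::vector<float> out(image.size(), 0.0f);
    for (std::size_t s = 0; s < surrounds.size(); ++s)
        for (std::size_t p = 0; p < image.size(); ++p)
            out[p] += weights[s] * (std::log(image[p] + eps) -
                                    std::log(surrounds[s][p] + eps));
    return out;                                    // typically rescaled/clipped for display
}
```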
8. Research of Lighting Technology Based on the OpenGL
- Author
-
Zhi Jie Shen and Jin Xu
- Subjects
Engineering ,Per-pixel lighting ,business.industry ,OpenGL ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,General Engineering ,Volumetric lighting ,Rendering (computer graphics) ,Light intensity ,Image-based lighting ,Computer graphics (images) ,Computer graphics lighting ,Shading ,business ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
The handling of lighting is a very important component of realistic image rendering, encompassing both the lighting model and shading. This paper first introduces the basic concepts, principles, and general programming approach of lighting with OpenGL; it then describes how to compute light intensity and shading interpolation by examining the reflection factors of object surfaces; finally, it provides a set of cases showing the effects achieved with different rendering techniques.
- Published
- 2012
- Full Text
- View/download PDF
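Entry 8 above discusses OpenGL's classic fixed-function lighting. A minimal sketch of that setup in legacy OpenGL 1.x/2.x (one positional light plus a simple material) is shown below; the specific colors, positions, and shininess are placeholders.

```cpp
#include <GL/gl.h>

// Legacy fixed-function lighting: one positional light and a simple material.
// Call once the GL context is current; vertices must supply normals (glNormal*).
void setupFixedFunctionLighting() {
    glEnable(GL_LIGHTING);
    glEnable(GL_LIGHT0);
    glEnable(GL_NORMALIZE);                       // renormalize scaled normals

    const GLfloat lightPos[]     = { 1.0f, 2.0f, 3.0f, 1.0f };  // w=1 -> positional light
    const GLfloat lightDiffuse[] = { 0.9f, 0.9f, 0.9f, 1.0f };
    const GLfloat lightAmbient[] = { 0.1f, 0.1f, 0.1f, 1.0f };
    glLightfv(GL_LIGHT0, GL_POSITION, lightPos);
    glLightfv(GL_LIGHT0, GL_DIFFUSE,  lightDiffuse);
    glLightfv(GL_LIGHT0, GL_AMBIENT,  lightAmbient);

    const GLfloat matDiffuse[]  = { 0.8f, 0.2f, 0.2f, 1.0f };
    const GLfloat matSpecular[] = { 1.0f, 1.0f, 1.0f, 1.0f };
    glMaterialfv(GL_FRONT, GL_DIFFUSE,  matDiffuse);
    glMaterialfv(GL_FRONT, GL_SPECULAR, matSpecular);
    glMaterialf (GL_FRONT, GL_SHININESS, 32.0f);

    glShadeModel(GL_SMOOTH);                      // Gouraud interpolation of vertex lighting
}
```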
9. Automatic Light Scene Setting Through Image-Based Sparse Light Effect Approximation
- Author
-
T. Gritti and Gianluca Monaci
- Subjects
Per-pixel lighting ,Computer science ,business.industry ,Volumetric lighting ,Light effect ,Computer Science Applications ,Rendering (computer graphics) ,Image-based lighting ,Signal Processing ,Media Technology ,Computer vision ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,Image based - Abstract
Recently, a series of key factors has been deeply transforming lighting systems. Single lights can be controlled individually and with advanced rendering capabilities. Furthermore, a shift is underway from independent light sources to integrated lighting installations. These factors fuel the need for intuitive, flexible control techniques that can fully exploit the rendering capabilities of a lighting infrastructure. In this paper, we present a novel framework to automatically create lighting atmospheres in any type of environment. The desired light settings are derived by approximation of user-defined input images or videos. This input modality gives an immediate and intuitive representation of a lighting atmosphere and allows the user to translate any visual content into a light setting. The effectiveness and versatility of the proposed solution are demonstrated and discussed by deploying the system in several challenging, real-life application scenarios.
- Published
- 2012
- Full Text
- View/download PDF
10. Memory efficient light baking
- Author
-
Jochen Süßmuth, Marc Stamminger, Henry Schäfer, and Cornelia Denk
- Subjects
Vertex (computer graphics) ,Per-pixel lighting ,Computer science ,business.industry ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,General Engineering ,Hybrid approach ,Computer Graphics and Computer-Aided Design ,Rendering (computer graphics) ,Human-Computer Interaction ,Computer graphics (images) ,Fully automatic ,Radiance ,Ambient occlusion ,Computer vision ,Artificial intelligence ,business ,Shader ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
In real-time rendering, global lighting information that is too expensive to be computed on the fly is typically pre-computed and baked as vertex attributes or into textures. Prominent examples are view independent effects, such as ambient occlusion, shadows, indirect lighting, or radiance transfer coefficients. Vertex baking usually requires less memory, but exhibits artifacts on large triangles. These artifacts are avoided by baking lighting information into textures, but at the expense of significant memory consumption and additional work to obtain a parameterization. In this paper, we propose a memory efficient and performant hybrid approach that combines texture- and vertex-based baking. Cheap vertex baking is applied by default and textures are used only where vertex baking is insufficient to represent the signal. Seams at transitions between both representations are hidden using a simple shader which smoothly blends between vertex- and texture-based shading. With our fully automatic approach, we can significantly reduce memory requirements without negative impact on rendering quality or performance.
- Published
- 2012
- Full Text
- View/download PDF
11. Irradiance Rigs
- Author
-
Peter-Pike Sloan, Derek Nowrouzezahrai, and Hong Yuan
- Subjects
Per-pixel lighting ,Image-based lighting ,Computer science ,business.industry ,Irradiance ,Computer vision ,Artificial intelligence ,Volumetric lighting ,business ,Reflectivity ,Rendering (computer graphics) - Abstract
When precomputed lighting is generated for static scene elements, the incident illumination on dynamic objects must be computed in a manner that is efficient and that faithfully captures the near- and far-field variation of the environment's illumination. Depending on the relative size of dynamic objects, as well as the number of lights in the scene, previous approaches fail to adequately sample the incident lighting and/or fail to scale. We present a principled, error-driven approach for dynamically transitioning between near- and far-field lighting. A more accurate model for sampling near-field lighting for disk sources is introduced, as well as far-field sampling and interpolation schemes tailored to each dynamic object. Lastly, we apply a flexible reflectance model to the computed illumination.
- Published
- 2012
- Full Text
- View/download PDF
12. A Fast Calculation Method for View-Dependent per-Pixel Lighting
- Author
-
Hua Rui Wu, Chun Jiang Zhao, and Rong Hua Gao
- Subjects
Brightness ,Per-pixel lighting ,Pixel ,Physics::Instrumentation and Detectors ,Computer science ,business.industry ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,General Engineering ,View dependent ,Spherical harmonic lighting ,Image-based lighting ,Computer Science::Computer Vision and Pattern Recognition ,Computer Science::Multimedia ,Computer vision ,Specular reflection ,Artificial intelligence ,Visibility ,business ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
Per-pixel lighting can greatly improve computational efficiency because hidden-surface and visibility processing has already been completed for the model. However, each pixel's brightness value must be recalculated when the viewpoint changes, so per-pixel lighting is still constrained by computational cost when the number of pixels is large. In this paper, a fast calculation method for view-dependent per-pixel lighting under a fixed light source is proposed, based on a symmetric relationship between pixel brightness values and specular reflection. With this relationship, the brightness under the current viewpoint can be calculated quickly. Experimental results show that this method computes lighting more efficiently than standard per-pixel lighting and hardware-accelerated lighting calculation.
- Published
- 2011
- Full Text
- View/download PDF
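For reference, the kind of view-dependent per-pixel (Blinn-Phong) evaluation that entry 12 above aims to accelerate is sketched below in plain C++; the material constants are placeholders, and this is not the paper's symmetry-based shortcut.

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  normalize(Vec3 v)   { float l = std::sqrt(dot(v, v)); return {v.x / l, v.y / l, v.z / l}; }

// Per-pixel Blinn-Phong brightness for one light; re-evaluated whenever the
// viewpoint (eyePos) changes, which is exactly the cost entry 12 targets.
float shadePixel(Vec3 p, Vec3 n, Vec3 lightPos, Vec3 eyePos) {
    const float kd = 0.7f, ks = 0.3f, shininess = 64.0f;     // placeholder material
    Vec3 N = normalize(n);
    Vec3 L = normalize(sub(lightPos, p));
    Vec3 V = normalize(sub(eyePos, p));
    Vec3 H = normalize({L.x + V.x, L.y + V.y, L.z + V.z});   // half vector
    float diff = std::max(dot(N, L), 0.0f);
    float spec = std::pow(std::max(dot(N, H), 0.0f), shininess);
    return kd * diff + ks * spec;
}
```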
13. BendyLights: Artistic Control of Direct Illumination by Curving Light Rays
- Author
-
William B. Kerr, Jonathan D. Denning, and Fabio Pellacini
- Subjects
Per-pixel lighting ,Computer science ,business.industry ,Global illumination ,Volumetric lighting ,Computer Graphics and Computer-Aided Design ,Ray ,Rendering (computer graphics) ,Cinematography ,Image-based lighting ,Direct illumination ,Computer graphics (images) ,Shadow ,Computer vision ,Shading ,Artificial intelligence ,business - Abstract
In computer cinematography, artists routinely use non-physical lighting models to achieve desired appearances. This paper presents BendyLights, a non-physical lighting model where light travels nonlinearly along splines, allowing artists to control light direction and shadow position at different points in the scene independently. Since the light deformation is smoothly defined at all world-space positions, the resulting non-physical lighting effects remain spatially consistent, avoiding the frequent incongruences of many non-physical models. BendyLights are controlled simply by reshaping splines, using familiar interfaces, and require very few parameters. BendyLight control points can be keyframed to support animated lighting effects. We demonstrate BendyLights both in a realtime rendering system for editing and a production renderer for final rendering, where we show that BendyLights can also be used with global illumination.
- Published
- 2010
- Full Text
- View/download PDF
14. envyLight
- Author
-
Fabio Pellacini
- Subjects
Per-pixel lighting ,Computer science ,Interface (Java) ,business.industry ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Animation ,Volumetric lighting ,Computer Graphics and Computer-Aided Design ,Image-based lighting ,Feature (computer vision) ,Computer graphics (images) ,Computer vision ,Artificial intelligence ,business ,Reflection mapping ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
Scenes lit with high dynamic range environment maps of real-world environments exhibit all the complex nuances of natural illumination. For applications that need lighting adjustments to the rendered images, editing environment maps directly is still cumbersome. First, designers have to determine which region in the environment map is responsible for the specific lighting feature (e.g. diffuse gradients, highlights and shadows) they desire to edit. Second, determining the parameters of image-editing operations needed to achieve specific changes to the selected lighting feature requires extensive trial-and-error. This paper presents envyLight , an interactive interface for editing natural illumination that combines an algorithm to select environment map regions, by sketching strokes on lighting features in the rendered image, with a small set of editing operations to quickly adjust the selected feature. The envyLight selection algorithm works well for indoor and outdoor lighting corresponding to rendered images where lighting features vary widely in number, size, contrast and edge blur. Furthermore, envyLight selection is general with respect to material type, from matte to sharp glossy, and the complexity of scenes' shapes. envyLight editing operations allow designers to quickly alter the position, contrast and edge blur of the selected lighting feature and can be keyframed to support animation.
- Published
- 2010
- Full Text
- View/download PDF
15. Information-theoretic analysis of Blinn-Phong lighting with application to mobile cloud gaming
- Author
-
Chau Yuen, Ngai-Man Cheung, and Seong-Ping Chuah
- Subjects
Phong shading ,Per-pixel lighting ,Image-based lighting ,Computer science ,Cloud gaming ,Computer graphics (images) ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Graphics pipeline ,Mobile device ,ComputingMethodologies_COMPUTERGRAPHICS ,Rendering (computer graphics) - Abstract
In mobile cloud gaming, images are rendered, compressed, and delivered to mobile devices from cloud servers. Joint optimization of rendering and coding aims to reduce the data rate of the delivery. Among the tasks in a rendering pipeline, Blinn-Phong lighting is the most widely adopted lighting model and also has the most impact on the visual information of the rendered images. In this paper, we analyze and model the information content of an image rendered using the Blinn-Phong lighting model. Our analytic model estimates the entropy generated by the Blinn-Phong reflections on the rendered images. In the context of cloud gaming, we derive the solution for the joint rendering and coding optimization, and demonstrate how the entropy estimators can facilitate a fast and on-the-fly solution to the optimization. Experimental results validate our information-theoretic analyses, and show substantial bitrate reduction by the joint optimization of rendering and coding.
- Published
- 2015
- Full Text
- View/download PDF
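For context, the Blinn-Phong model analyzed in entry 15 above combines an ambient term, a diffuse term, and a half-vector specular term per pixel; a common textbook form (notation mine, not the paper's) is:

```latex
I = k_a\, i_a \;+\; k_d\, \max(\mathbf{N}\cdot\mathbf{L},\,0)\, i_d
  \;+\; k_s\, \max(\mathbf{N}\cdot\mathbf{H},\,0)^{\alpha}\, i_s,
\qquad
\mathbf{H} = \frac{\mathbf{L}+\mathbf{V}}{\lVert \mathbf{L}+\mathbf{V} \rVert}
```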
16. Image based relighting using room lighting basis
- Author
-
Abhijeet Ghosh and Antoine Toisoul
- Subjects
Ground truth ,Per-pixel lighting ,Image-based lighting ,Computer science ,business.industry ,Computer graphics (images) ,Computer vision ,Artificial intelligence ,business ,Reflectivity ,Image based - Abstract
We present a novel approach for image based relighting using the lighting controls available in a regular room. We employ individual light sources available in the room such as windows and house lights as basis lighting conditions. We further optimize the projection of a desired lighting environment into the sparse room lighting basis in order to closely approximate the target lighting environment with the given lighting basis. We achieve plausible relit results that compare favourably with ground truth relighting with dense sampling of the reflectance field.
- Published
- 2015
- Full Text
- View/download PDF
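At render time, the relighting in entry 16 above reduces to a weighted sum of basis images, one per room light, with non-negative weights fitted to the target environment. A minimal sketch of that final combination is below; how the weights are obtained (the paper's optimization) is not shown, and the flat RGB data layout is an assumption.

```cpp
#include <cstddef>
#include <vector>

// Relit image = sum_i w_i * basisImage_i, pixels stored as flat RGB float arrays.
// The non-negative weights would come from projecting the target lighting
// environment onto the room-light basis, as described in entry 16.
std::vector<float> relight(const std::vector<std::vector<float>>& basisImages,
                           const std::vector<float>& weights) {
    std::vector<float> out(basisImages.front().size(), 0.0f);
    for (std::size_t i = 0; i < basisImages.size(); ++i)
        for (std::size_t p = 0; p < out.size(); ++p)
            out[p] += weights[i] * basisImages[i][p];
    return out;
}
```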
17. Realistic inverse lighting from a single 2D image of a face, taken under unknown and complex lighting
- Author
-
Davoud Shahlaei and Volker Blanz
- Subjects
Per-pixel lighting ,business.industry ,Global illumination ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Inverse ,Volumetric lighting ,Rendering (computer graphics) ,Superposition principle ,Image-based lighting ,Computer vision ,Artificial intelligence ,business ,Linear combination ,ComputingMethodologies_COMPUTERGRAPHICS ,Mathematics - Abstract
In this paper, we address a difficult inverse rendering problem with many unknowns: a single 2D input image of an unknown face in an unknown environment, taken under unknown conditions. First, the geometry and texture of the face are estimated from the input image using a 3D Morphable Model. In a second step, exploiting the superposition principle for light, we estimate the light source intensities as optimized non-negative weights for a linear combination of a synthetic illumination cone for that face. Each image of the illumination cone is lit by one directional light, considering non-Lambertian reflectance and non-convex geometry. Modeling the lighting separately from the face model enhances face modeling and analysis, provides information about the environment of the face, and facilitates realistic rendering of the face in novel pose and lighting.
- Published
- 2015
- Full Text
- View/download PDF
18. Hardware Lighting and Shading: a Survey
- Author
-
Jan Kautz
- Subjects
Phong shading ,Per-pixel lighting ,Computer science ,business.industry ,Graphics hardware ,Computer Graphics and Computer-Aided Design ,Real-time rendering ,Image-based lighting ,Computer graphics (images) ,Computer vision ,Shading ,Artificial intelligence ,Smart lighting ,business ,Computer hardware ,ComputingMethodologies_COMPUTERGRAPHICS ,Gouraud shading - Abstract
Traditionally, hardware rasterizers only support the Phong lighting model in combination with Gouraud shading using point light sources. However, the Phong lighting model is strictly empirical and physically implausible, and Gouraud shading tends to undersample the highlight unless a highly tessellated surface is used. Hence, higher-quality hardware-accelerated lighting and shading has gained much interest in the past five years. The research on hardware lighting and shading is two-fold. On the one hand, better lighting models for local illumination (assuming point light sources but evaluated per pixel) were demonstrated to be amenable to hardware implementation. On the other hand, recent research has demonstrated that even area lights, represented as environment maps, can be combined with complex lighting models. In both areas, many articles have been published, making it hard to decide which algorithm is well-suited for which application. This state-of-the-art report reviews the relevant articles in both areas and lists the advantages and disadvantages of each algorithm.
- Published
- 2004
- Full Text
- View/download PDF
19. Image-based lighting
- Author
-
Paul Debevec
- Subjects
Per-pixel lighting ,Global illumination ,Computer science ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Volumetric lighting ,High-dynamic-range rendering ,3D rendering ,Rendering (computer graphics) ,Computer graphics ,Computer graphics (images) ,Computer vision ,Computer graphics lighting ,Tiled rendering ,ComputingMethodologies_COMPUTERGRAPHICS ,business.industry ,Software rendering ,Scientific visualization ,Image-based modeling and rendering ,Computer Graphics and Computer-Aided Design ,Real-time rendering ,Real-time computer graphics ,Image-based lighting ,Artificial intelligence ,Alternate frame rendering ,business ,2D computer graphics ,Software ,3D computer graphics - Abstract
This tutorial shows how image-based lighting can illuminate synthetic objects with measurements of real light, making objects appear as if they're actually in a real-world scene.
- Published
- 2002
- Full Text
- View/download PDF
20. Toward accurate recovery of shape from shading under diffuse lighting
- Author
-
A.J. Stewart and Michael S. Langer
- Subjects
Surface (mathematics) ,Per-pixel lighting ,Computer science ,business.industry ,Orientation (computer vision) ,Applied Mathematics ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Real image ,Photometric stereo ,Computational Theory and Mathematics ,Artificial Intelligence ,Computer graphics (images) ,Radiance ,Computer vision ,Computer Vision and Pattern Recognition ,Artificial intelligence ,Diffuse reflection ,business ,Software ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
A new surface radiance model for diffuse lighting is presented which incorporates shadows, interreflections, and surface orientation. An algorithm is presented that uses this model to compute shape-from-shading under diffuse lighting. The algorithm is tested on both synthetic and real images, and is found to perform more accurately than the only previous algorithm for this problem.
- Published
- 1997
- Full Text
- View/download PDF
21. High Quality Shape from a Single RGB-D Image under Uncalibrated Natural Illumination
- Author
-
Yudeog Han, In So Kweon, and Joon-Young Lee
- Subjects
Per-pixel lighting ,Computer science ,business.industry ,Global illumination ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Process (computing) ,Albedo ,Spherical harmonic lighting ,Photometric stereo ,Image-based lighting ,RGB color model ,Computer vision ,Diffuse reflection ,Artificial intelligence ,business ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
We present a novel framework to estimate detailed shape of diffuse objects with uniform albedo from a single RGB-D image. To estimate accurate lighting in natural illumination environment, we introduce a general lighting model consisting of two components: global and local models. The global lighting model is estimated from the RGB-D input using the low-dimensional characteristic of a diffuse reflectance model. The local lighting model represents spatially varying illumination and it is estimated by using the smoothly-varying characteristic of illumination. With both the global and local lighting model, we can estimate complex lighting variations in uncontrolled natural illumination conditions accurately. For high quality shape capture, a shape-from-shading approach is applied with the estimated lighting model. Since the entire process is done with a single RGB-D input, our method is capable of capturing the high quality shape details of a dynamic object under natural illumination. Experimental results demonstrate the feasibility and effectiveness of our method that dramatically improves shape details of the rough depth input.
- Published
- 2013
- Full Text
- View/download PDF
22. Perceptually based radiance map for realistic composition
- Author
-
Taehyun Rhee, Jong Jin Choi, and Andrew Chalmers
- Subjects
Per-pixel lighting ,Global illumination ,business.industry ,Computer science ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Image processing ,Volumetric lighting ,Object detection ,Image-based lighting ,Computer vision ,Artificial intelligence ,business ,Set (psychology) ,Level of detail ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
The seamless composition of synthetic objects into real-world scenes requires a high level of detail to convince viewers that the rendered object belongs in the scene. These details include illumination properties that perceptually match the real-world environment. To achieve such effects, image based lighting is often used to emulate real-world lighting. This paper explores the role of lighting and measures the effects of local illumination changes. A psychophysical study is presented that captures the thresholds of the human visual system's ability to perceive inconsistencies in illumination in image composites. Further, a set of optimised parameters for image based lighting is proposed. The results show a significant reduction in memory while maintaining the perceived visual quality of a composite render.
- Published
- 2013
- Full Text
- View/download PDF
23. EnvyDepth: an interface for recovering local natural illumination from environment maps
- Author
-
Massimiliano Corsini, Matteo Dellepiane, Fabio Pellacini, Marco Callieri, Roberto Scopigno, and Francesco Banterle
- Subjects
Per-pixel lighting ,Computer science ,Global illumination ,Interface (Java) ,High dynamic range imaging ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,02 engineering and technology ,Volumetric lighting ,Rendering (computer graphics) ,Illumination Estimation ,Environment map ,Computer graphics (images) ,Spatial image based lighting ,0202 electrical engineering, electronic engineering, information engineering ,Computer vision ,Geometric primitive ,ComputingMethodologies_COMPUTERGRAPHICS ,Three-Dimensional Graphics and Realism ,business.industry ,Picture/Image Generation ,020207 software engineering ,Image-Based Lighting ,Computer Graphics and Computer-Aided Design ,Image-based lighting ,Reflectance and Shading ,020201 artificial intelligence & image processing ,Artificial intelligence ,business ,Reflection mapping ,3D computer graphics ,Image based lighting - Abstract
In this paper, we present EnvyDepth, an interface for recovering local illumination from a single HDR environment map. In EnvyDepth, the user quickly sketches strokes to mark regions of the environment map that should be grouped together into a single geometric primitive. From these annotated strokes, EnvyDepth uses edit propagation to create a detailed collection of virtual point lights that reproduce both the local and the distant lighting effects in the original scene. Compared to using only distant illumination, the added spatial information better reproduces a variety of local effects such as shadows, highlights, and caustics. Without the effort needed to create precise scene reconstructions, EnvyDepth annotations take only tens of seconds to produce plausible lighting without visible artifacts. This is easy to obtain even in the case of complex scenes, both indoors and outdoors. The generated lighting environments work well in a production pipeline since they are efficient to use and able to produce accurate renderings.
- Published
- 2013
- Full Text
- View/download PDF
24. Real-Time Rendering Framework in the Virtual Home Design System
- Author
-
Pengyu Zhu, Zhigeng Pan, and Mingmin Zhang
- Subjects
Deferred shading ,Per-pixel lighting ,business.industry ,Unified lighting and shadowing ,Shadow volume ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Volumetric lighting ,Real-time rendering ,Rendering (computer graphics) ,Geography ,Image-based lighting ,Computer graphics (images) ,Computer vision ,Artificial intelligence ,business ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
This paper introduces a virtual home design system and its real-time rendering framework, including scene management based on a cell-and-portal system, improved variance shadow mapping, and the recently popular real-time rendering framework known as deferred lighting. In the implementation, we add several useful improvements: compressing the geometry buffer and lighting buffer to reduce video memory and bandwidth usage and remove the multiple-render-target limitation; using light-volume stencil culling, similar to the shadow volume algorithm, to identify lit pixels; and modifying a physically based shading model with a Fresnel term to fit the deferred lighting framework.
- Published
- 2013
- Full Text
- View/download PDF
25. Estimating all frequency lighting using a color/depth image
- Author
-
Hyunjung Shim
- Subjects
Per-pixel lighting ,Color image ,Global illumination ,Computer science ,business.industry ,Volumetric lighting ,Spherical harmonic lighting ,Rendering (computer graphics) ,Image-based lighting ,Computer vision ,Diffuse reflection ,Specular reflection ,Artificial intelligence ,business - Abstract
This paper presents a novel approach to estimating lighting from a pair of one color and one depth image. To effectively model all-frequency lighting, we introduce a hybrid representation: a combination of spherical harmonic basis functions and point lights. Building on the existing framework of spherical-harmonics-based diffuse reflection, we divide the color image into diffuse and non-diffuse reflections. We then use the diffuse reflection to estimate the low frequency lighting. For high frequency lighting, we obtain the specular reflections by analyzing the non-diffuse reflection. Knowing the specular reflections and the scene geometry, we can compute the directions of point lights by inverting the reflected ray about the surface normal. We then optimize the intensities of the point lights using an analysis-by-synthesis paradigm. By superimposing the low and high frequency lighting, we recover the lighting present in the scene. While existing methods use the low frequency lighting to infer the high frequency lighting, we propose to use the non-diffuse reflection to estimate the high frequency lighting directly. In this way, we make good use of the non-diffuse reflections in scene analysis and understanding. Experimental results show that the proposed approach is an effective solution for lighting estimation in real-world environments.
- Published
- 2012
- Full Text
- View/download PDF
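The low-frequency component in entry 25 above follows the usual order-2 spherical-harmonic treatment of diffuse lighting. Below is a sketch of the standard nine-coefficient irradiance evaluation in the polynomial form of Ramamoorthi and Hanrahan (2001), which such methods typically build on; the coefficient ordering L[0..8] = L00, L1-1, L10, L11, L2-2, L2-1, L20, L21, L22 is an assumption of this sketch.

```cpp
// Irradiance E(n) from 9 SH lighting coefficients (one color channel),
// using the quadratic-polynomial form of Ramamoorthi & Hanrahan (2001).
// (x, y, z) is the unit surface normal.
float shIrradiance(const float L[9], float x, float y, float z) {
    const float c1 = 0.429043f, c2 = 0.511664f, c3 = 0.743125f,
                c4 = 0.886227f, c5 = 0.247708f;
    return c1 * L[8] * (x * x - y * y)
         + c3 * L[6] * z * z
         + c4 * L[0]
         - c5 * L[6]
         + 2.0f * c1 * (L[4] * x * y + L[7] * x * z + L[5] * y * z)
         + 2.0f * c2 * (L[3] * x + L[1] * y + L[2] * z);
}
```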
26. Non-uniform Illumination Representation based on HDR Light Probe Sequences
- Author
-
Lin Wang, Tao Yu, Zhong Zhou, Jian Hu, and Wei Wu
- Subjects
Per-pixel lighting ,Image texture ,Image-based lighting ,Global illumination ,Computer science ,business.industry ,Computer vision ,Volumetric lighting ,Artificial intelligence ,Image-based modeling and rendering ,business ,3D rendering ,Rendering (computer graphics) - Published
- 2012
- Full Text
- View/download PDF
27. Real-time rendering with complex natural illumination
- Author
-
Jingui Pan and Jie Guo
- Subjects
Per-pixel lighting ,business.industry ,Computer science ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Volumetric lighting ,Virtual reality ,Spherical harmonic lighting ,Real-time rendering ,Rendering (computer graphics) ,Image-based lighting ,Computer graphics (images) ,Computer vision ,Artificial intelligence ,Shadow mapping ,business ,Reflection mapping ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
Complex natural lighting provides better realism than traditional artificial lights and is increasingly used in real-time graphics applications such as video games and virtual reality. However, rendering with complex natural illumination is quite costly owing to an integral over the sphere of incoming directions. This paper addresses the problem of rendering scenes in real time under complex environment lighting while capturing soft shadows and glossy reflections. The environment lighting is separated into a low-frequency lighting component and a highlight component based on a rejection-based sampling method. Scenes lit by the low-frequency lighting are approximated via spherical harmonic lighting, while highlight illumination effects such as soft shadows and glossy reflections are simulated by a small set of sampled virtual point lights (VPLs). Experimental results demonstrate that our method runs in real time under complex natural lighting without precomputation.
- Published
- 2011
- Full Text
- View/download PDF
28. Interactive Lighting and Material Design System for Cyber Worlds
- Author
-
Tomoyuki Nishita, Kei Iwasaki, and Yoshinori Dobashi
- Subjects
Per-pixel lighting ,Computer science ,business.industry ,Visibility function ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Material Design ,Volumetric lighting ,Viewpoints ,Real-time rendering ,Rendering (computer graphics) ,Image-based lighting ,Computer graphics (images) ,Computer vision ,Artificial intelligence ,business ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
Interactive rendering under complex real world illumination is essential for many applications such as material design, lighting design, and virtual realities. For such applications, interactive manipulations of viewpoints, lighting, BRDFs, and positions of objects are beneficial to designers and users. This paper proposes a system that acquires complex, all-frequency lighting environments and renders dynamic scenes under captured illumination, for lighting and material design applications in cyber worlds. To capture real world lighting environments easily, our method uses a camera equipped with a cellular phone. To handle dynamic scenes of rigid objects and dynamic BRDFs, our method decomposes the visibility function at each vertex of each object into the occlusion due to the object itself and occlusions due to other objects, which are represented by a nonlinear piecewise constant approximation, called cuts. Our method proposes a compact cut representation and efficient algorithm for cut operations. By using our system, interactive manipulation of positions of objects and real time rendering with dynamic viewpoints, lighting, and BRDFs can be achieved.
- Published
- 2010
- Full Text
- View/download PDF
29. Screen space classification for efficient deferred shading
- Author
-
Jeremy Moore, Neil Hutchinson, Matthew Ritchie, George Parrish, and Balor Knight
- Subjects
Deferred shading ,Per-pixel lighting ,Computer science ,business.industry ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Screen space ,GeneralLiterature_MISCELLANEOUS ,Rendering (computer graphics) ,Computer graphics (images) ,Shadow ,Computer vision ,Shading ,Artificial intelligence ,business ,Shader ,Video game ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
Deferred shading is an increasingly popular technique for video game rendering. In the standard implementation a geometry pass writes depths, normals and other properties to a geometry buffer (G-buffer) before a lighting pass is applied as a screen space operation. Deferred shading is often combined with deferred shadowing where the occlusion values due to one or more lights are gathered in a screen space shadow buffer. The universal application of complex shaders to the entire screen during the shadow and light passes of these techniques can contribute to poor performance. A more optimal approach would take different shading paths for different parts of the scene. For example we would prefer to only apply expensive shadow filtering to known shadow edges. However this typically involves the use of dynamic branches within shaders which can lead to poor performance on current video game shading hardware.
- Published
- 2010
- Full Text
- View/download PDF
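The idea in entry 29 above, taking cheaper shader paths where the screen-space buffers are trivial, can be illustrated with a simple tile classifier: a tile whose shadow values are uniformly lit or uniformly shadowed needs no expensive edge filtering. The CPU-side sketch below shows only that classification; the tile size and buffer layout are assumptions, not the paper's scheme.

```cpp
#include <vector>

enum class TilePath { FullyLit, FullyShadowed, NeedsFiltering };

// Classify one tileSize x tileSize block of a screen-space shadow buffer
// (values in [0,1]); uniform tiles can take a cheap shading path.
TilePath classifyTile(const std::vector<float>& shadowBuffer,
                      int bufferWidth, int tileX, int tileY, int tileSize) {
    bool anyLit = false, anyShadowed = false;
    for (int y = 0; y < tileSize; ++y)
        for (int x = 0; x < tileSize; ++x) {
            float s = shadowBuffer[(tileY * tileSize + y) * bufferWidth +
                                   (tileX * tileSize + x)];
            if (s > 0.999f) anyLit = true;
            else if (s < 0.001f) anyShadowed = true;
            else return TilePath::NeedsFiltering;   // partial occlusion: shadow edge
        }
    if (anyLit && anyShadowed) return TilePath::NeedsFiltering;
    return anyLit ? TilePath::FullyLit : TilePath::FullyShadowed;
}
```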
30. Inferred lighting
- Author
-
Scott Kircher and Alan Lawrance
- Subjects
Per-pixel lighting ,Deferred shading ,business.industry ,Computer science ,Volumetric lighting ,Real-time rendering ,Rendering (computer graphics) ,Image-based lighting ,Computer graphics (images) ,Polygon ,Computer vision ,Artificial intelligence ,business ,Shader ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
This paper presents a three phase pipeline for real-time rendering that provides fast dynamic light calculations while enabling greater material flexibility than deferred shading. This method, called inferred lighting, allows lighting calculations to occur at a significantly lower resolution than the final output and is compatible with hardware multisample antialiasing (MSAA). In addition, inferred lighting introduces a novel method of computing lighting and shadows for translucent objects (alpha polygons) that unifies the pipeline for processing lit alpha polygons with that of opaque polygons. The key to our method is a discontinuity sensitive filtering algorithm that enables material shaders to "infer" their lighting values from a light buffer sampled at a different resolution. This paper also discusses specific implementation issues of inferred lighting on DirectX 10, Xbox 360, and PlayStation 3 hardware.
- Published
- 2009
- Full Text
- View/download PDF
31. Interactive lighting manipulation application on GPU
- Author
-
Borom Tunwattanapong and Paul Debevec
- Subjects
Per-pixel lighting ,Pixel ,Computer science ,business.industry ,Global illumination ,Image (mathematics) ,Reflection (mathematics) ,Image-based lighting ,Computer graphics (images) ,Face (geometry) ,Key (cryptography) ,Computer vision ,Artificial intelligence ,business - Abstract
We present a technique for relighting an image such that different areas of the image are illuminated with different combinations of lighting directions. The key idea is to capture illumination data using a lighting apparatus such as that of Hawkins et al. [2004], compute a radial basis function interpolation of the light constraints specified by users, and render the calculated illumination result in real time on the GPU. The application can simulate the result of unnatural lighting conditions, for example, an image of a whole face lit from per-pixel view-dependent reflection angles or from gaze angles (see Fig. 1, a). The application can also render a high-resolution result at 1920 x 1080 in three to four minutes.
- Published
- 2009
- Full Text
- View/download PDF
32. Image-Based Fitting Diffuse and Specular Reflectance of Object
- Author
-
Caixian Chen, Jiaye Wang, and Huijian Han
- Subjects
Per-pixel lighting ,Pixel ,business.industry ,Computer science ,Global illumination ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Rendering (computer graphics) ,Image texture ,Virtual image ,Computer graphics (images) ,Computer vision ,Artificial intelligence ,Specular reflection ,Diffuse reflection ,business ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
Traditional image texture mapping can add abundant detail to virtual objects. However, because the source pictures are captured under a particular lighting condition, the textures cannot vary realistically as the virtual lighting changes. In computer graphics, it is often necessary to produce continuously changing illumination effects for a scene viewed from a fixed viewpoint. To speed up rendering, this paper proposes a projection-based method that interpolates the scene from several images. The method uses two quadratic polynomials to fit, for each pixel, the diffuse reflection component and the specular reflection component under different light sources; the coefficients of the diffuse and specular polynomials for each pixel are determined by the least-squares method. With the coefficients stored per pixel, illumination effects for an arbitrary light-source position can be interpolated efficiently.
- Published
- 2008
- Full Text
- View/download PDF
33. Rendering synthetic objects into real scenes
- Author
-
Paul Debevec
- Subjects
Per-pixel lighting ,Global illumination ,business.industry ,Computer science ,Photography ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Scene statistics ,Volumetric lighting ,Image-based modeling and rendering ,Reflectivity ,Rendering (computer graphics) ,Visualization ,Image-based lighting ,Computer graphics (images) ,Inverse rendering ,Radiance ,Reflection (physics) ,Computer vision ,Artificial intelligence ,Graphics ,business ,Geometric modeling ,High dynamic range ,ComputingMethodologies_COMPUTERGRAPHICS ,Structured light - Abstract
We present a method that uses measured scene radiance and global illumination in order to add new objects to light-based models with correct lighting. The method uses a high dynamic range image-based model of the scene, rather than synthetic light sources, to illuminate the new objects. To compute the illumination, the scene is considered as three components: the distant scene, the local scene, and the synthetic objects. The distant scene is assumed to be photometrically unaffected by the objects, obviating the need for reflectance model information. The local scene is endowed with estimated reflectance model information so that it can catch shadows and receive reflected light from the new objects. Renderings are created with a standard global illumination method by simulating the interaction of light amongst the three components. A differential rendering technique allows for good results to be obtained when only an estimate of the local scene reflectance properties is known. We apply the general method to the problem of rendering synthetic objects into real scenes. The light-based model is constructed from an approximate geometric model of the scene and by using a light probe to measure the incident illumination at the location of the synthetic objects. The global illumination solution is then composited into a photograph of the scene using the differential rendering technique. We conclude by discussing the relevance of the technique to recovering surface reflectance properties in uncontrolled lighting situations. Applications of the method include visual effects, interior design, and architectural visualization.
- Published
- 2008
- Full Text
- View/download PDF
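The differential rendering step in entry 33 above composites synthetic objects by adding, to the background photograph, the change the objects cause in a rendering of the local scene. A per-pixel sketch of that compositing rule is below; the buffer layout and mask are assumed inputs, and this is a paraphrase of the published technique rather than its exact implementation.

```cpp
#include <cstddef>
#include <vector>

// Differential rendering: final = photo + (renderWithObjects - renderWithoutObjects)
// on local-scene pixels, and the object rendering itself where objects are visible.
// All arrays are assumed to share the same length and layout (e.g. one value per
// pixel per channel, with the mask replicated accordingly).
std::vector<float> compositeDifferential(const std::vector<float>& photo,
                                         const std::vector<float>& renderWithObjects,
                                         const std::vector<float>& renderWithoutObjects,
                                         const std::vector<bool>& objectMask) {
    std::vector<float> out(photo.size());
    for (std::size_t p = 0; p < photo.size(); ++p) {
        if (objectMask[p])          // synthetic object covers this pixel
            out[p] = renderWithObjects[p];
        else                        // local scene: add shadows/interreflections as a delta
            out[p] = photo[p] + (renderWithObjects[p] - renderWithoutObjects[p]);
    }
    return out;
}
```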
34. Recovering high dynamic range radiance maps from photographs
- Author
-
Jitendra Malik and Paul Debevec
- Subjects
Per-pixel lighting ,Pixel ,Computer science ,business.industry ,Motion blur ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Digital imaging ,Image processing ,Volumetric lighting ,Tone mapping ,Exposure fusion ,HDRi ,High-dynamic-range imaging ,Computer graphics (images) ,Human visual system model ,Compositing ,Radiance ,Computer vision ,Artificial intelligence ,business ,High dynamic range ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
We present a method of recovering high dynamic range radiance maps from photographs taken with conventional imaging equipment. In our method, multiple photographs of the scene are taken with different amounts of exposure. Our algorithm uses these differently exposed photographs to recover the response function of the imaging process, up to factor of scale, using the assumption of reciprocity. With the known response function, the algorithm can fuse the multiple photographs into a single, high dynamic range radiance map whose pixel values are proportional to the true radiance values in the scene. We demonstrate our method on images acquired with both photochemical and digital imaging processes. We discuss how this work is applicable in many areas of computer graphics involving digitized photographs, including image-based modeling, image compositing, and image processing. Lastly, we demonstrate a few applications of having high dynamic range radiance maps, such as synthesizing realistic motion blur and simulating the response of the human visual system.
- Published
- 2008
- Full Text
- View/download PDF
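Once the response curve g has been recovered, the method in entry 34 above fuses the exposures per pixel as a weighted average of g(Z) - ln(dt), trusting mid-range pixel values most. A sketch of that fusion for one pixel is below; the 8-bit range and the simple hat weight are assumptions consistent with the paper's formulation.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Hat weighting: trust mid-range values, distrust near-saturated/underexposed ones.
static float weight(int z) { return static_cast<float>(z <= 127 ? z : 255 - z); }

// Fuse one pixel's exposure stack into log radiance:
//   ln E = sum_j w(Z_j) * (g[Z_j] - ln dt_j) / sum_j w(Z_j)
// g is the recovered log response curve for 8-bit values (256 entries).
float fusePixel(const std::vector<int>& Z,          // pixel value in each exposure
                const std::vector<float>& dt,       // exposure times (seconds)
                const std::vector<float>& g) {
    float num = 0.0f, den = 0.0f;
    for (std::size_t j = 0; j < Z.size(); ++j) {
        float w = weight(Z[j]);
        num += w * (g[Z[j]] - std::log(dt[j]));
        den += w;
    }
    return den > 0.0f ? num / den : 0.0f;            // ln E for this pixel
}
```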
35. Rendering Trees with Indirect Lighting in Real Time
- Author
-
Sumanta Pattanaik, Kadi Bouatouch, Kévin Boulanger, Perception, decision and action of real and virtual humans in virtual environments and impact on real environments (BUNRAKU), Institut de Recherche en Informatique et Systèmes Aléatoires (IRISA), Université de Rennes (UR)-Institut National des Sciences Appliquées - Rennes (INSA Rennes), Institut National des Sciences Appliquées (INSA)-Institut National des Sciences Appliquées (INSA)-Institut National de Recherche en Informatique et en Automatique (Inria)-Centre National de la Recherche Scientifique (CNRS)-Université de Rennes (UR)-Institut National des Sciences Appliquées - Rennes (INSA Rennes), Institut National des Sciences Appliquées (INSA)-Institut National des Sciences Appliquées (INSA)-Institut National de Recherche en Informatique et en Automatique (Inria)-Centre National de la Recherche Scientifique (CNRS)-École normale supérieure - Cachan (ENS Cachan)-Inria Rennes – Bretagne Atlantique, Institut National de Recherche en Informatique et en Automatique (Inria), School of Electrical Engineering and Computer Science [Orlando], University of Central Florida [Orlando] (UCF), Université de Rennes 1 (UR1), Université de Rennes (UNIV-RENNES)-Université de Rennes (UNIV-RENNES)-Institut National des Sciences Appliquées - Rennes (INSA Rennes), Institut National des Sciences Appliquées (INSA)-Université de Rennes (UNIV-RENNES)-Institut National des Sciences Appliquées (INSA)-Institut National de Recherche en Informatique et en Automatique (Inria)-Centre National de la Recherche Scientifique (CNRS)-Université de Rennes 1 (UR1), and Institut National des Sciences Appliquées (INSA)-Université de Rennes (UNIV-RENNES)-Institut National des Sciences Appliquées (INSA)-Institut National de Recherche en Informatique et en Automatique (Inria)-Centre National de la Recherche Scientifique (CNRS)-École normale supérieure - Cachan (ENS Cachan)-Inria Rennes – Bretagne Atlantique
- Subjects
Per-pixel lighting ,Computer science ,business.industry ,020207 software engineering ,02 engineering and technology ,Volumetric lighting ,Computer Graphics and Computer-Aided Design ,ACM: I.: Computing Methodologies/I.3: COMPUTER GRAPHICS ,[INFO.INFO-GR]Computer Science [cs]/Graphics [cs.GR] ,Rendering (computer graphics) ,Image-based lighting ,Computer graphics (images) ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Computer vision ,Shading ,Artificial intelligence ,business ,ComputingMilieux_MISCELLANEOUS - Abstract
High quality lighting is one of the challenges for interactive tree rendering. To this end, this paper presents a lighting model allowing real-time rendering of trees with convincing indirect lighting. Rather than defining an empirical model to mimic lighting of real trees, we work at a lower level by modeling the spatial distribution of leaves and by assigning them probabilistic properties. We focus mainly on precise low-frequency lighting that our eyes are more sensitive to and we add high-frequency details afterwards. The resulting model is efficient and simple to implement on a GPU.
- Published
- 2008
36. Crayon lighting
- Author
-
Baoquan Chen and Amit Shesh
- Subjects
Per-pixel lighting ,Image-based lighting ,Computer science ,business.industry ,Global illumination ,Graphics hardware ,Computer graphics (images) ,Computer vision ,Ray tracing (graphics) ,Artificial intelligence ,Volumetric lighting ,business ,Sketch - Abstract
An interactive and intuitive way of designing lighting around a model is desirable in many applications. In this paper, we present a tool for interactive inverse lighting in which a model is rendered based on sketched lighting effects. To specify target lighting, the user freely sketches bright and dark regions on the model as if coloring it with crayons. Using these hints and the geometry of the model, the system efficiently derives light positions, directions, intensities and spot angles, assuming a local point-light based illumination model. As the system also minimizes changes from the previous specifications, lighting can be designed incrementally. We formulate the inverse lighting problem as that of an optimization and solve it using a judicious mix of greedy and minimization methods. We also map expensive calculations of the optimization to graphics hardware to make the process fast and interactive. Our tool can be used to augment larger systems that use point-light based illumination models but lack intuitive interfaces for lighting design, and also in conjunction with applications like ray tracing where interactive lighting design is difficult to achieve.
- Published
- 2007
- Full Text
- View/download PDF
37. Real-time spectral scene lighting on a fragment pipeline
- Author
-
Bernardt Duvenhage
- Subjects
Per-pixel lighting ,Computer science ,business.industry ,Volumetric lighting ,Graphics pipeline ,Rendering (computer graphics) ,Computer graphics ,Image-based lighting ,Computer graphics (images) ,Computer graphics lighting ,business ,3D computer graphics ,Computer hardware ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
Real-time desktop computer graphics systems have historically been based on empirical lighting models, such as the Phong lighting model, designed to be perceptually appropriate but computationally efficient. Hardware developments since early 2003 have resulted in affordable fourth-generation graphics processing unit technology. This technology is allowing desktop computer graphics systems, like NVIDIA's GeForce® series, to implement increasingly complex computer graphics algorithms and lighting models for real-time applications. This paper describes an innovative, physically based spectral lighting, material, and camera model that is grounded in radiometry theory and extends the historical fixed-pipeline graphics system. There are two render-target modes: direct mode is aimed at high-spectral-resolution rendering of solids, and buffered mode at including transparencies.
- Published
- 2006
- Full Text
- View/download PDF
38. Illumination brush
- Author
-
Takeo Igarashi, Heung-Yeung Shum, Yasuyuki Matsushita, and Makoto Okabe
- Subjects
Painting ,Per-pixel lighting ,Computer science ,Orientation (computer vision) ,business.industry ,Interface (computing) ,Interactive design ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Volumetric lighting ,GeneralLiterature_MISCELLANEOUS ,Image-based lighting ,Computer graphics (images) ,Radiance ,Computer vision ,Specular reflection ,Artificial intelligence ,Diffuse reflection ,Shading ,User interface ,business ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
To help artists design customized lighting environments, we present a simple user interface for designing image-based lighting. Our system allows the artist to directly and interactively specify the appearance of the resulting image by painting and dragging operations. Our system constructs an image-based lighting model that produces the desired (and painted) appearance by solving the inverse shading problem [Ramamoorthi and Hanrahan 2001]. To obtain realistic lighting effects, we design diffuse and specular lighting effects separately. We represent diffuse lighting using low frequency spherical harmonics and pre-computed radiance transfer [Sloan et al. 2002]. For specular lighting, we simply project the painted highlights onto a cube map texture. We also provide an interface with which the artist can drag highlights and shadows on the surface to control the orientation of the entire lighting environment. We demonstrate that our technique is useful and practical for adding lighting effects to synthetic objects. We also show an application of seamlessly merging synthetic objects into photographs.
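A small illustration of the low-frequency spherical-harmonic diffuse term mentioned above (a sketch of the standard order-2 construction, assuming the lighting environment has already been projected into nine coefficients per colour channel with the Lambertian cosine convolution folded in; it is not the paper's painting interface or PRT pipeline):

    import numpy as np

    def sh_basis(n):
        # Real spherical-harmonic basis up to order 2, evaluated at unit normal n.
        x, y, z = n
        return np.array([
            0.282095,                        # Y_00
            0.488603 * y,                    # Y_1-1
            0.488603 * z,                    # Y_10
            0.488603 * x,                    # Y_11
            1.092548 * x * y,                # Y_2-2
            1.092548 * y * z,                # Y_2-1
            0.315392 * (3.0 * z * z - 1.0),  # Y_20
            1.092548 * x * z,                # Y_21
            0.546274 * (x * x - y * y),      # Y_22
        ])

    def diffuse_sh(normal, light_coeffs):
        # Low-frequency diffuse shading: dot the nine lighting coefficients
        # (one set per colour channel, shape (3, 9)) with the SH basis.
        return light_coeffs @ sh_basis(normal)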
- Published
- 2006
- Full Text
- View/download PDF
39. Per-Pixel Lighting Data Analysis
- Author
-
Mehlika Inanici
- Subjects
Engineering ,Per-pixel lighting ,business.industry ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Glare (vision) ,Adaptation (eye) ,Luminance ,Data availability ,Luminance distribution ,Toolbox ,Computer graphics (images) ,Computer vision ,Artificial intelligence ,business ,High dynamic range ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
This report presents a framework for per-pixel analysis of the qualitative and quantitative aspects of luminous environments. Recognizing the need for better lighting analysis capabilities and building on the new measurement abilities developed within the LBNL Lighting Measurement and Simulation Toolbox, the "Per-pixel Lighting Data Analysis" project demonstrates several techniques for analyzing luminance distribution patterns, luminance ratios, adaptation luminance and glare assessment. The techniques combine current practice in lighting design with analyses that only become possible when per-pixel data are available. The demonstrated analysis techniques are applicable to both computer-generated and digitally captured images (physically based renderings and High Dynamic Range photographs).
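A brief illustration of per-pixel luminance analysis (a sketch, not the toolbox's implementation): given a linear HDR radiance image, per-pixel luminance and simple luminance ratios between regions can be computed directly. The RGB weights and the 179 lm/W factor below follow the convention used by Radiance and are stated here as an assumption.

    import numpy as np

    def luminance_cd_m2(hdr_rgb):
        # Per-pixel luminance (cd/m2) from a linear HDR radiance image, using
        # Radiance-style RGB weights and the 179 lm/W luminous-efficacy factor.
        r, g, b = hdr_rgb[..., 0], hdr_rgb[..., 1], hdr_rgb[..., 2]
        return 179.0 * (0.265 * r + 0.670 * g + 0.065 * b)

    def luminance_ratio(lum, task_mask, surround_mask):
        # Simple task-to-surround luminance ratio over two boolean pixel masks.
        return lum[task_mask].mean() / lum[surround_mask].mean()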
- Published
- 2005
- Full Text
- View/download PDF
40. Applying Light Mapping Techniques to Vis-Sim Databases
- Author
-
Michael M. Morrison
- Subjects
Per-pixel lighting ,Database ,Quake (series) ,business.industry ,Shadow volume ,OpenFlight ,computer.software_genre ,Stencil ,Visualization ,Geography ,Computer graphics (images) ,Shadow ,Computer vision ,Artificial intelligence ,business ,Shader ,computer ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
The gaming and visualization/simulation industries have recently focused on shaders and stencil shadow volumes to add realistic lighting and hard-shadow effects to the scene. While these techniques produce great-looking results, they are still computationally expensive at run time, especially when simulating many light sources. For most databases and simulations, the lighting and shadow effects for the terrain geometry are diffuse, static in nature, and not greatly affected by dynamic entities. Under this assumption, the diffuse lighting, shading, and shadowing effects of all static light sources can be pre-rendered into the database. This technique is generally known as "light mapping," and it has been in wide use in the gaming industry since id Software's Quake. This paper presents the tool set and methods that Real-Time Technologies developed to apply this technique to OpenFlight models.
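A minimal sketch of the light-mapping idea, with assumed data layouts and function names and no occlusion or shadow-ray tracing (which a real baker would include): diffuse contributions of the static lights are accumulated per lightmap texel offline, and at run time the base texture is simply modulated by the baked lightmap.

    import numpy as np

    def bake_lightmap_texel(position, normal, lights):
        # Accumulate diffuse contributions of all static lights for one texel.
        # 'lights' is assumed to be a list of dicts with 'pos' and 'color'.
        total = np.zeros(3)
        for light in lights:
            to_light = light["pos"] - position
            dist2 = float(np.dot(to_light, to_light))
            ndotl = max(float(np.dot(normal, to_light)) / np.sqrt(dist2), 0.0)
            total += light["color"] * ndotl / dist2
        return total

    def apply_lightmap(albedo_texture, lightmap):
        # Run-time step: modulate the base texture by the pre-baked lightmap.
        return np.clip(albedo_texture * lightmap, 0.0, 1.0)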
- Published
- 2005
- Full Text
- View/download PDF
41. Interactive Rembrandt Lighting Design
- Author
-
Kyoung Chin Seo, Sang Wook Lee, and Hongmi Joe
- Subjects
Per-pixel lighting ,Computer science ,business.industry ,Gaussian surface ,3d model ,Trial and error ,User input ,Rendering (computer graphics) ,symbols.namesake ,Image-based lighting ,Face model ,Computer graphics (images) ,symbols ,Computer vision ,Artificial intelligence ,business ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
The paper presents an efficient way of designing a lighting setup for rendering a 3D face model. Specifically, we focus on obtaining the lighting direction for Rembrandt lighting. A Rembrandt patch is a triangle defined as the bright region surrounded by self and cast shadows on the cheek area, and we use the self- and cast-shadow curves for computing the direction of the main light. A user graphically specifies a Rembrandt patch on a 3D model. From this input, candidate lighting directions are estimated from the cast- and self-shadow geometry on the 3D face model, and the final lighting direction is selected from the candidates predicted by the self and cast shadows. The presented method lets a user interactively design and achieve Rembrandt lighting without the repetitive manual trial-and-error search for the light direction. Experimental results show the effectiveness of the presented method: it suggests appropriate Rembrandt lighting directions quickly and easily.
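Two shadow constraints of the kind the abstract alludes to can be written down compactly (a hedged sketch with invented inputs, not the paper's estimation procedure): a cast shadow places the light along the ray from the shadow's end point through the occluding feature, while normals on a self-shadow boundary must be nearly perpendicular to the light direction.

    import numpy as np

    def light_from_cast_shadow(occluder_point, shadow_end):
        # Cast-shadow constraint: the light lies along the ray from the observed
        # shadow endpoint through the occluding feature (e.g. the nose tip).
        d = occluder_point - shadow_end
        return d / np.linalg.norm(d)

    def consistent_with_self_shadow(light_dir, terminator_normals, tol=0.1):
        # Self-shadow constraint: on the shadow boundary, N . L should be ~0.
        return bool(np.all(np.abs(terminator_normals @ light_dir) < tol))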
- Published
- 2005
- Full Text
- View/download PDF
42. Generation of virtual images under different lighting conditions*
- Author
-
Zunce Wei, Jiawan Zhang, Jizhou Sun, and Zhanwei Li
- Subjects
Per-pixel lighting ,Computer science ,business.industry ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Volumetric lighting ,Iterative reconstruction ,Image (mathematics) ,Image-based lighting ,Virtual image ,Computer graphics (images) ,Computer vision ,Artificial intelligence ,business ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
In this paper, a new method of generating virtual images of the same scene under different lighting conditions is presented; the inherent relations between pixel color values and the external lighting conditions are learned with general regression neural networks. As a result, novel images of the same scene under different lighting conditions can be deduced from a known scene image. The method can infer the change of lighting in the scene without any spatial geometry information. Dynamic reconstruction of the scene can be achieved by combining image-based modeling and rendering (IBMR) techniques.
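A general regression neural network is essentially a Gaussian-kernel weighted average of training outputs; a minimal sketch of that predictor follows (the input encoding is illustrative only — here train_x is assumed to encode lighting conditions and train_y the corresponding pixel colours; this is not the paper's exact setup).

    import numpy as np

    def grnn_predict(x, train_x, train_y, sigma=0.1):
        # General regression neural network (Nadaraya-Watson form): predict the
        # output for x as a Gaussian-weighted average of the training outputs.
        d2 = np.sum((train_x - x) ** 2, axis=1)
        w = np.exp(-d2 / (2.0 * sigma ** 2))
        return (w[:, None] * train_y).sum(axis=0) / (w.sum() + 1e-12)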
- Published
- 2004
- Full Text
- View/download PDF
43. Synthesizing pose and lighting variation from object motion
- Author
-
Akiko Nakashima and Atsuto Maki
- Subjects
Surface (mathematics) ,Per-pixel lighting ,Basis (linear algebra) ,Pixel ,business.industry ,Computer science ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Volumetric lighting ,Object (computer science) ,Image (mathematics) ,Image-based lighting ,Computer Science::Computer Vision and Pattern Recognition ,Computer vision ,Artificial intelligence ,business - Abstract
We present a novel method to synthesize images of a 3D object in arbitrary poses illuminated from arbitrary directions, given a few images of the object in unknown motion under static lighting. Our scheme is underpinned by the notion of an illumination image basis, which spans the image space of the object under arbitrary lighting, and we propose to generate it by recursively searching for correspondences between the input images and subsequently realigning their pixels. Using the 3D surface of the object, which also becomes available in this procedure, we synthesize images of the object in arbitrary poses while arbitrarily varying the lighting direction by combining the illumination basis images. The effectiveness of the entire algorithm is shown through experiments.
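As a sketch of the basis idea (illustrative only; the paper derives its basis through correspondence and pixel realignment, whereas the SVD below is just a common stand-in): a low-dimensional illumination basis can be extracted from pixel-aligned images, and a new lighting condition is synthesized as a linear combination of the basis images.

    import numpy as np

    def illumination_basis(aligned_images, k=3):
        # Estimate a k-dimensional illumination basis from pixel-aligned images
        # of the object under different (unknown) lighting, via SVD.
        stack = aligned_images.reshape(len(aligned_images), -1)   # (M, H*W)
        _, _, vt = np.linalg.svd(stack, full_matrices=False)
        return vt[:k].reshape(k, *aligned_images.shape[1:])

    def relight(basis_images, coefficients):
        # Synthesize an image under new lighting as a linear combination of the
        # illumination basis images (basis_images has shape (K, H, W)).
        return np.tensordot(coefficients, basis_images, axes=1)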
- Published
- 2004
- Full Text
- View/download PDF
44. Multi-channel train visual simulation system based on PC cluster
- Author
-
Tang Bing, Zhou Meiyu, Pan Zhi-geng, and Su Hu
- Subjects
Large field of view ,Per-pixel lighting ,Projection screen ,Workstation ,Computer science ,law ,Computer graphics (images) ,Virtual reality ,Simulation system ,Multi channel ,law.invention ,Rendering (computer graphics) - Abstract
A train visual simulation system is an important component of a train simulator. Advanced train visual simulation systems are usually equipped with a large projection screen or multiple projection screens for display, and a high-end graphics workstation to perform the multi-channel real-time rendering task. They can provide the user with a large field of view and, together with interactive devices, a strong feeling of immersion. This paper presents a multi-channel train visual simulation system based on a PC cluster. The new system has performance close to, or even better than, an SGI high-performance-workstation-based visual simulation system, while substantially reducing system cost. The techniques of photorealistic rendering, emergency simulation and natural-phenomena simulation, such as rain and snow, are also discussed in detail.
- Published
- 2004
- Full Text
- View/download PDF
45. Real-time bump mapped texture shading based-on hardware acceleration
- Author
-
Jizhou Sun and Jiening Wang
- Subjects
Per-pixel lighting ,Computer science ,Graphics hardware ,Computer graphics (images) ,OpenGL ,Normal mapping ,Bump mapping ,Hardware acceleration ,Rasterisation ,Shader ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
A VR system is more convincing when it provides realistic visual effects. Bump mapping can simulate a bumpy surface appearance without the overhead of additional polygons. Programmable per-vertex and per-pixel shader techniques have been introduced in recent years on newly developed graphics hardware with powerful GPUs, making it practical to realize real-time bump mapping and complex lighting computation. We first discuss the mathematics of bump mapping, then describe a real-time bump mapping implementation using per-pixel shading supported by programmable hardware. Finally we present our experimental results, which show that the method performs as desired.
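A minimal sketch of the per-pixel diffuse term in tangent-space bump mapping (the standard construction with assumed numpy-array inputs, not the paper's hardware shader): the normal fetched from a normal map is unpacked from [0,1] to [-1,1] and dotted with the light direction expressed in the same tangent space.

    import numpy as np

    def bump_shade(normal_map_texel, light_dir_tangent, albedo):
        # Unpack the tangent-space normal stored in the normal map texel and
        # evaluate the diffuse term N . L against the tangent-space light vector.
        n = normal_map_texel * 2.0 - 1.0
        n = n / np.linalg.norm(n)
        ndotl = max(float(np.dot(n, light_dir_tangent)), 0.0)
        return albedo * ndotl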
- Published
- 2004
- Full Text
- View/download PDF
46. Real-time image based lighting in software using HDR panoramas
- Author
-
Magnus Wrenninge, Jonas Unger, and Mark Ollila
- Subjects
Per-pixel lighting ,Image-based lighting ,Computer science ,Computer graphics (images) ,OpenGL ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Software rendering ,Tiled rendering ,Computer graphics lighting ,Volumetric lighting ,Diffuse reflection ,Graphics ,Rendering (computer graphics) - Abstract
We present a system allowing real-time image-based lighting based on HDR panoramic images [Debevec 1998]. The system performs the time-consuming diffuse light calculations in a pre-processing step, which is key to attaining interactivity. The real-time subsystem processes an image-based lighting model in software, and would be simple to implement in hardware. Rendering is handled by OpenGL, but another graphics API could be substituted should the need arise. Applications of the presented technique are discussed, including methods for realistic outdoor lighting. The system architecture is outlined, describing the algorithms used. Lastly, the ideas for future work that arose during the project are discussed.
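A brute-force sketch of the expensive pre-processing step (assumed lat-long panorama layout and function names; not the paper's implementation): the diffuse irradiance for a surface normal is the cosine-weighted integral of the HDR panorama over the hemisphere around that normal, which can be tabulated offline and merely looked up at run time.

    import numpy as np

    def latlong_directions(h, w):
        # Unit direction and solid-angle weight for each texel of a lat-long map.
        theta = (np.arange(h) + 0.5) / h * np.pi                  # polar angle
        phi = (np.arange(w) + 0.5) / w * 2.0 * np.pi
        t, p = np.meshgrid(theta, phi, indexing="ij")
        dirs = np.stack(
            [np.sin(t) * np.cos(p), np.cos(t), np.sin(t) * np.sin(p)], axis=-1)
        weights = np.sin(t) * (np.pi / h) * (2.0 * np.pi / w)     # d_omega per texel
        return dirs, weights

    def irradiance_for_normal(panorama, dirs, weights, normal):
        # Cosine-weighted hemispherical integral of the panorama's radiance;
        # run-time diffuse shading would then be albedo * E / pi.
        cos_term = np.clip(dirs @ normal, 0.0, None)
        return np.tensordot(cos_term * weights, panorama, axes=([0, 1], [0, 1]))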
- Published
- 2003
- Full Text
- View/download PDF
47. A new lighting model for computer graphics
- Author
-
Wenxian Li, Wentao Wang, and Zhenshou Sun
- Subjects
Sunlight ,Per-pixel lighting ,Computer science ,business.industry ,Unified lighting and shadowing ,media_common.quotation_subject ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Astrophysics::Instrumentation and Methods for Astrophysics ,Cloud computing ,Atmospheric model ,Function (mathematics) ,Computer graphics ,Sky ,Computer graphics (images) ,business ,Astrophysics::Galaxy Astrophysics ,media_common - Abstract
A new lighting model, called the natural sun lighting model, is presented in this paper. In the model, a new function, the cloud density distribution function, is used to simulate clouds in the sky, and the sky dome is treated as the source of sunlight.
- Published
- 2002
- Full Text
- View/download PDF
48. Displaying shiny objects under virtual lighting
- Author
-
Y. Sakata and K. Sato
- Subjects
Per-pixel lighting ,Computer science ,business.industry ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Volumetric lighting ,Virtual reality ,Image-based modeling and rendering ,Rendering (computer graphics) ,Image-based lighting ,Specularity ,Computer graphics (images) ,Computer vision ,Artificial intelligence ,business ,Image retrieval ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
This paper describes an extension of image-based rendering (IBR) to represent the specularity of a real object in a virtual environment. A captured image of an object contains several kinds of physical information: its shape, color, texture, specularity and the lighting condition. This paper investigates how to preserve the specularity of an object, which is highly sensitive to the lighting condition. As in IBR, an image set is captured by changing a lighting pattern electronically with a PC projector. When displaying the object, the system recalls the image set and retrieves the most suitable image according to a user-assigned lighting condition. As the user is able to change the lighting condition virtually, the appearance with highlights and specularity under the virtual lighting condition is produced on a monitor. By also using a camera that observes the real global lighting, the display can import it into the virtual space and show the object under pseudo-real lighting. The proposed imaging techniques are applicable to virtual museums or digital archives.
- Published
- 2002
- Full Text
- View/download PDF
49. Deferred lighting: a computation-efficient approach for real-time 3-D graphics
- Author
-
Bor-Sung Liang, Chein-Wei Jen, Wen-Chang Yeh, and Yuan-Chung Lee
- Subjects
Phong shading ,Deferred shading ,Per-pixel lighting ,Computer science ,Software rendering ,Volumetric lighting ,Rendering (computer graphics) ,Computer graphics ,Image-based lighting ,Computer graphics (images) ,Polygon ,Rasterisation ,Computer graphics lighting ,Shading ,Graphics ,Shader ,3D computer graphics ,ComputingMethodologies_COMPUTERGRAPHICS ,Gouraud shading - Abstract
Real-time 3-D graphics is spreading rapidly in multimedia applications, but suffers from huge computation requirements. In 3-D graphics, lighting is an essential operation that requires high computation power, yet many lighting calculations are redundant because they are spent on invisible polygons. To improve computation efficiency, we propose a novel deferred lighting approach, which defers the lighting calculation until after visibility comparison and hence eliminates lighting calculations on invisible polygons. Simulation results show that the deferred lighting approach can save 22%-58% of the lighting calculation in flat shading and Gouraud shading, and the initial cost of the Taylor series in fast Phong shading.
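The abstract describes deferring per-polygon lighting until after hidden-surface removal; the sketch below illustrates the same principle at per-pixel granularity (the modern G-buffer form, used here only as an analogy under assumed data structures, not the paper's polygon-level scheme): visibility is resolved first, then each visible pixel is shaded exactly once.

    import numpy as np

    def geometry_pass(fragments, h, w):
        # Visibility pass: keep only the nearest fragment per pixel, storing its
        # normal and albedo in a G-buffer; no lighting is evaluated here.
        depth = np.full((h, w), np.inf)
        gbuf_normal = np.zeros((h, w, 3))
        gbuf_albedo = np.zeros((h, w, 3))
        for f in fragments:   # each f: dict with 'x', 'y', 'z', 'normal', 'albedo'
            x, y, z = f["x"], f["y"], f["z"]
            if z < depth[y, x]:
                depth[y, x] = z
                gbuf_normal[y, x] = f["normal"]
                gbuf_albedo[y, x] = f["albedo"]
        return depth, gbuf_normal, gbuf_albedo

    def lighting_pass(depth, gbuf_normal, gbuf_albedo, light_dir, light_color):
        # Deferred lighting pass: shade each pixel exactly once, only where a
        # visible surface was recorded -- occluded fragments cost nothing.
        visible = np.isfinite(depth)[..., None]
        ndotl = np.clip(np.einsum("hwc,c->hw", gbuf_normal, light_dir), 0.0, None)
        return gbuf_albedo * ndotl[..., None] * visible * light_color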
- Published
- 2002
- Full Text
- View/download PDF
50. How a CSG-based raytracer saves time
- Author
-
David Esneault, Mitch Kopelman, and Jodi Whitsel
- Subjects
Per-pixel lighting ,Image-based lighting ,Computer science ,Scripting language ,Computer graphics (images) ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Radiosity (computer graphics) ,Volumetric lighting ,computer.software_genre ,computer ,GeneralLiterature_MISCELLANEOUS ,ComputingMethodologies_COMPUTERGRAPHICS ,Rendering (computer graphics) - Abstract
At Blue Sky Studios, two overriding principles have guided the development of the renderer and production lighting tools: 1) the lighting model should be as physically accurate as possible, and 2) the tools should be straightforward and easy to use, so that the computers take care of the technical work, leaving the artists free to concentrate on the creative aspects of lighting a scene. Blue Sky's proprietary renderer, CGIStudio™, has one of the most robust lighting models in the industry. The renderer's realistic approach to how light actually behaves in the real world lets the artist add details like soft shadows, reflections, and radiosity at the flip of a switch. Artists are able to achieve complex and subtle lighting with relatively simple lighting rigs, allowing them to get the image as 'right' as possible in the original render.
- Published
- 2002
- Full Text
- View/download PDF