26 results for "Petrik Clarberg"
Search Results
2. Coarse Pixel Shading.
- Author
- Karthik Vaidyanathan, Marco Salvi, Robert Toth, Theresa Foley, Tomas Akenine-Möller, Jim Nilsson, Jacob Munkberg, Jon Hasselgren, Masamichi Sugihara, Petrik Clarberg, Tomasz Janczak, and Aaron E. Lefohn
- Published
- 2014
- Full Text
- View/download PDF
3. Design and Novel Uses of Higher-Dimensional Rasterization.
- Author
- Jim Nilsson, Petrik Clarberg, Björn Johnsson, Jacob Munkberg, Jon Hasselgren, Robert Toth, Marco Salvi, and Tomas Akenine-Möller
- Published
- 2012
- Full Text
- View/download PDF
4. Hierarchical Stochastic Motion Blur Rasterization.
- Author
- Jacob Munkberg, Petrik Clarberg, Jon Hasselgren, Robert Toth, Masamichi Sugihara, and Tomas Akenine-Möller
- Published
- 2011
- Full Text
- View/download PDF
5. Floating-Point Buffer Compression in a Unified Codec Architecture.
- Author
- Jacob Ström, Per Wennersten, Jim Rasmusson, Jon Hasselgren, Jacob Munkberg, Petrik Clarberg, and Tomas Akenine-Möller
- Published
- 2008
- Full Text
- View/download PDF
6. Adaptive enhancement and noise reduction in very low light-level video.
- Author
- Henrik Malm, Magnus Oskarsson, Eric Warrant, Petrik Clarberg, Jon Hasselgren, and Calle Lejdfors
- Published
- 2007
- Full Text
- View/download PDF
7. Interactive Path Tracing and Reconstruction of Sparse Volumes
- Author
- Jacob Munkberg, Jon Hasselgren, Nikolai Hofmann, and Petrik Clarberg
- Subjects
- Adaptive sampling, Image quality, Computer science, Volume rendering, Iterative reconstruction, Computer Graphics and Computer-Aided Design, Computer Science Applications, Rendering (computer graphics), Voxel, Path tracing, Computer vision, Artificial intelligence
- Abstract
We combine state-of-the-art techniques into a system for high-quality, interactive rendering of participating media. We leverage unbiased volume path tracing with multiple scattering, temporally stable neural denoising and NanoVDB [Museth 2021], a fast, sparse voxel tree data structure for the GPU, to explore what performance and image quality can be obtained for rendering volumetric data. Additionally, we integrate neural adaptive sampling to significantly improve image quality at a fixed sample budget. Our system runs at interactive rates at 1920 × 1080 on a single GPU and produces high quality results for complex dynamic volumes.
- Published
- 2021
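The record above gives no implementation details. As a rough illustration of the kind of unbiased distance sampling an interactive volume path tracer relies on, here is a minimal sketch of Woodcock (delta) tracking; the function names, the `sigma_t_at` callback, and the single global majorant are assumptions of this sketch, not taken from the paper.

```python
import math
import random

def delta_tracking_distance(sigma_t_at, sigma_t_max, t_max, rng=random.random):
    """Sample a free-flight distance along a ray through a heterogeneous
    medium with Woodcock (delta) tracking. sigma_t_at(t) returns the
    extinction coefficient at distance t; sigma_t_max is a majorant that
    bounds it everywhere on [0, t_max]. Returns the distance of a real
    collision, or None if the ray leaves the medium without interacting."""
    t = 0.0
    while True:
        # Tentative collision against the homogenized (majorant) medium.
        t -= math.log(1.0 - rng()) / sigma_t_max
        if t >= t_max:
            return None
        # Accept as a real collision with probability sigma_t(t) / sigma_t_max.
        if rng() < sigma_t_at(t) / sigma_t_max:
            return t
```

In a production renderer the majorant would typically be stored per brick of the sparse voxel tree rather than globally, which keeps the rejection rate low in mostly empty volumes.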
8. Sampling Transformations Zoo
- Author
- Peter Shirley, David Cline, David Hart, Eric Haines, Matthias Raab, Petrik Clarberg, Samuli Laine, and Matt Pharr
- Subjects
- Computer science, Preprocessor, Probability density function, Ray tracing (graphics), Algorithm, Rendering (computer graphics)
- Abstract
We present several formulas and methods for generating samples distributed according to a desired probability density function on a specific domain. Sampling is a fundamental operation in modern rendering, both at runtime and in preprocessing. It is becoming ever more prevalent with the introduction of ray tracing in standard APIs, as many ray tracing algorithms are based on sampling by nature. This chapter provides a concise list of some useful tricks and methods.
- Published
- 2019
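The chapter itself is a catalog of such transforms. As a self-contained example of the flavor of recipe it collects, here is one classic mapping (cosine-weighted hemisphere sampling via Malley's method); this particular sketch is written for this listing and is not copied from the chapter.

```python
import math

def sample_cosine_hemisphere(u1, u2):
    """Map two uniform numbers in [0, 1) to a direction on the +z hemisphere
    with pdf proportional to cos(theta): sample a unit disk uniformly in
    area, then project the point up onto the hemisphere."""
    r = math.sqrt(u1)
    phi = 2.0 * math.pi * u2
    return (r * math.cos(phi), r * math.sin(phi), math.sqrt(max(0.0, 1.0 - u1)))

def pdf_cosine_hemisphere(cos_theta):
    """Solid-angle pdf of the mapping above."""
    return cos_theta / math.pi
```

Dividing the sampled radiance by `pdf_cosine_hemisphere(z)` keeps the Monte Carlo estimator unbiased, which is the pattern every recipe in the chapter follows.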
9. Importance Sampling of Many Lights on the GPU
- Author
- Petrik Clarberg and Pierre Moreau
- Subjects
- Computer science, Global illumination, Computer graphics (images), Sampling (statistics), Hardware acceleration, Ray tracing (graphics), DirectX, Bounding volume hierarchy, Importance sampling, Rendering (computer graphics)
- Abstract
The introduction of standardized APIs for ray tracing, together with hardware acceleration, opens up possibilities for physically based lighting in real-time rendering. Light importance sampling is one of the fundamental operations in light transport simulations, applicable to both direct and indirect illumination. This chapter describes a bounding volume hierarchy data structure and associated sampling methods to accelerate importance sampling of local light sources. The work is based on recently published methods for light sampling in production rendering, but it is evaluated in a real-time implementation using Microsoft DirectX Raytracing.
- Published
- 2019
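The central operation in this chapter is picking one light by walking the BVH top-down with probabilities proportional to per-node importance. Below is a minimal CPU sketch of that traversal; the `LightNode` layout and the fixed scalar `importance` field are simplifications introduced here (the chapter computes importance per shading point from distance, orientation, and flux), so treat it as an outline rather than the chapter's code.

```python
class LightNode:
    """Hypothetical light BVH node: leaves store a light index, interior
    nodes store two children plus a scalar importance estimate."""
    def __init__(self, importance, light=None, left=None, right=None):
        self.importance = importance
        self.light = light
        self.left = left
        self.right = right

def sample_light(node, u):
    """Walk down the tree, choosing a child at each level with probability
    proportional to its importance. Returns (light index, pdf)."""
    pdf = 1.0
    while node.light is None:
        w_l, w_r = node.left.importance, node.right.importance
        p_l = w_l / (w_l + w_r)
        if u < p_l:
            u /= p_l                      # rescale u to stay stratified
            pdf *= p_l
            node = node.left
        else:
            u = (u - p_l) / (1.0 - p_l)
            pdf *= 1.0 - p_l
            node = node.right
    return node.light, pdf
```

The returned pdf is what the caller divides by when estimating direct lighting from the chosen source.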
10. Layered Light Field Reconstruction for Defocus Blur
- Author
- Petrik Clarberg, Jacob Munkberg, Marco Salvi, and Karthik Vaidyanathan
- Subjects
- Computer science, Visibility (geometry), Filter (signal processing), Computer Graphics and Computer-Aided Design, Fourier analysis, Computer vision, Artificial intelligence, Depth of field, Light field
- Abstract
We present a novel algorithm for reconstructing high-quality defocus blur from a sparsely sampled light field. Our algorithm builds upon recent developments in the area of sheared reconstruction filters and significantly improves reconstruction quality and performance. While previous filtering techniques can be ineffective in regions with complex occlusion, our algorithm handles such scenarios well by partitioning the input samples into depth layers. These depth layers are filtered independently and then combined together, taking into account inter-layer visibility. We also introduce a new separable formulation of sheared reconstruction filters that achieves real-time performance on a modern GPU and is more than two orders of magnitude faster than previously published techniques.
- Published
- 2015
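The key steps are filtering each depth layer separately and then combining the layers with inter-layer visibility. The snippet below sketches only the final combination step as a plain front-to-back "over" composite; the per-layer colors and coverages are assumed to come from the (unshown) sheared filtering, and this formulation is a simplification rather than the paper's exact operator.

```python
def composite_layers(layers):
    """Combine filtered per-layer results front to back: each layer is
    weighted by its coverage (alpha) and by the transmittance of all
    layers in front of it."""
    color, transmittance = 0.0, 1.0
    for layer_color, layer_alpha in layers:   # ordered near to far
        color += transmittance * layer_alpha * layer_color
        transmittance *= 1.0 - layer_alpha
    return color
```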
11. Deep shading buffers on commodity GPUs
- Author
- Petrik Clarberg and Jacob Munkberg
- Subjects
- Deferred shading, Computer science, Computer Graphics and Computer-Aided Design, Rendering (computer graphics), Computer graphics, Fragment processing, Computer graphics (images), Computer vision, Rasterisation, Ray tracing (graphics), Shading, Artificial intelligence, Graphics, Shader, Gouraud shading
- Abstract
Real-time rendering with true motion and defocus blur remains an elusive goal for application developers. In recent years, substantial progress has been made in the areas of rasterization, shading, and reconstruction for stochastic rendering. However, we have yet to see an efficient method for decoupled sampling that can be implemented on current or near-future graphics processors. In this paper, we propose one such algorithm that leverages the capability of modern GPUs to perform unordered memory accesses from within shaders. Our algorithm builds per-pixel primitive lists in canonical shading space. All shading then takes place in a single, non-multisampled forward rendering pass using conservative rasterization. This pass exploits the rasterization and shading hardware to perform shading very efficiently, and only samples that are visible in the final image are shaded. Last, the shading samples are gathered and filtered to create the final image. The input to our algorithm can be generated using a variety of methods, of which we show examples of interactive stochastic and interleaved rasterization, as well as ray tracing.
- Published
- 2014
12. AMFS
- Author
- Petrik Clarberg, Tomas Akenine-Möller, Robert M. Toth, Jon Hasselgren, and Jim K. Nilsson
- Subjects
- Tessellation, Deferred shading, Pixel, Computer science, Graphics hardware, Computer Graphics and Computer-Aided Design, Rendering (computer graphics), Computer graphics (images), Rasterisation, Shading, Graphics, Shader, Gouraud shading
- Abstract
We propose a powerful hardware architecture for pixel shading, which enables flexible control of shading rates and automatic shading reuse between triangles in tessellated primitives. The main goal is efficient pixel shading for moderately to finely tessellated geometry, which is not handled well by current GPUs. Our method effectively decouples the cost of pixel shading from the geometric complexity. It thereby enables a wider use of tessellation and fine geometry, even at very limited power budgets. The core idea is to shade over small local grids in parametric patch space, and reuse shading for nearby samples. We also support the decomposition of shaders into multiple parts, which are shaded at different frequencies. Shading rates can be locally and adaptively controlled, in order to direct the computations to visually important areas and to provide performance scaling with a graceful degradation of quality. Another important benefit of shading in patch space is that it allows efficient rendering of distribution effects, which further closes the gap between real-time and offline rendering.
- Published
- 2014
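AMFS is a hardware architecture, but the core reuse idea (evaluate shading on a coarse grid in parametric patch space and let nearby samples share the result) can be caricatured as software memoization. The grid resolution, request format, and `shade` callback below are placeholders chosen for this sketch; the actual proposal schedules this in fixed-function hardware with multi-rate shader stages.

```python
def shade_with_reuse(requests, shade, grid_res):
    """Evaluate 'shade' at most once per occupied cell of a grid_res x grid_res
    grid in patch parametric (u, v) space, and reuse the cached result for all
    samples landing in the same cell. 'requests' is a list of (patch_id, u, v)."""
    cache, results = {}, []
    for patch_id, u, v in requests:
        i = min(int(u * grid_res), grid_res - 1)
        j = min(int(v * grid_res), grid_res - 1)
        key = (patch_id, i, j)
        if key not in cache:
            # Shade at the cell center; a real system would interpolate instead.
            cache[key] = shade(patch_id, (i + 0.5) / grid_res, (j + 0.5) / grid_res)
        results.append(cache[key])
    return results
```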
13. Layered Reconstruction for Defocus and Motion Blur
- Author
- Petrik Clarberg, Tomas Akenine-Möller, Jacob Munkberg, Jon Hasselgren, and Karthik Vaidyanathan
- Subjects
- Computer science, Motion blur, Reconstruction filter, Computer Graphics and Computer-Aided Design, Rendering (computer graphics), Computer vision, Artificial intelligence, Light field
- Abstract
Light field reconstruction algorithms can substantially decrease the noise in stochastically rendered images. Recent algorithms for defocus blur alone are both fast and accurate. However, motion blur is a considerably more complex type of camera effect, and as a consequence, current algorithms are either slow or too imprecise to use in high quality rendering. We extend previous work on real-time light field reconstruction for defocus blur to handle the case of simultaneous defocus and motion blur. By carefully introducing a few approximations, we derive a very efficient sheared reconstruction filter, which produces high quality images even for a low number of input samples. Our algorithm is temporally robust, and is about two orders of magnitude faster than previous work, making it suitable for both real-time rendering and as a post-processing pass for offline rendering.
- Published
- 2014
14. A sort-based deferred shading architecture for decoupled sampling
- Author
- Robert M. Toth, Jacob Munkberg, and Petrik Clarberg
- Subjects
- Hardware architecture, Deferred shading, Computer science, Computer Graphics and Computer-Aided Design, Rendering (computer graphics), Computer engineering, Rasterisation, Shading, Graphics, Shader, Computer hardware
- Abstract
Stochastic sampling in time and over the lens is essential to produce photo-realistic images, and it has the potential to revolutionize real-time graphics. In this paper, we take an architectural view of the problem and propose a novel hardware architecture for efficient shading in the context of stochastic rendering. We replace previous caching mechanisms by a sorting step to extract coherence, thereby ensuring that only non-occluded samples are shaded. The memory bandwidth is kept at a minimum by operating on tiles and using new buffer compression methods. Our architecture has several unique benefits not traditionally associated with deferred shading. First, shading is performed in primitive order, which enables late shading of vertex attributes and avoids the need to generate a G-buffer of pre-interpolated vertex attributes. Second, we support state changes, e.g., change of shaders and resources in the deferred shading pass, avoiding the need for a single über-shader. We perform an extensive architectural simulation to quantify the benefits of our algorithm on real workloads.
- Published
- 2013
15. An Optimizing Compiler for Automatic Shader Bounding
- Author
- Petrik Clarberg, Robert M. Toth, Tomas Akenine-Möller, and Jon Hasselgren
- Subjects
- Multiple Render Targets, Computer science, Optimizing compiler, Computer Graphics and Computer-Aided Design, HLSL2GLSL, Rendering (computer graphics), Unified shader model, Computer graphics (images), Shading, Compiler, Shader
- Abstract
Programmable shading provides artistic control over materials and geometry, but the black box nature of shaders makes some rendering optimizations difficult to apply. In many cases, it is desirable to compute bounds of shaders in order to speed up rendering. A bounding shader can be automatically derived from the original shader by a compiler using interval analysis, but creating optimized interval arithmetic code is non-trivial. A key insight in this paper is that shaders contain metadata that can be automatically extracted by the compiler using data flow analysis. We present a number of domain-specific optimizations that make the generated code faster, while computing the same bounds as before. This enables a wider use and opens up possibilities for more efficient rendering. Our results show that on average 42–44% of the shader instructions can be eliminated for a common use case: single-sided bounding shaders used in lightcuts and importance sampling.
- Published
- 2010
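A bounding shader replaces every scalar operation with its interval counterpart, so the whole program maps an input box to a conservative output range. Here is a tiny interval-arithmetic sketch of that baseline idea, written for this listing; the paper works on compiled shader code and adds domain-specific optimizations on top of it.

```python
class Interval:
    """Minimal interval type: each value is replaced by a conservative [lo, hi]."""
    def __init__(self, lo, hi=None):
        self.lo = lo
        self.hi = lo if hi is None else hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        p = (self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi)
        return Interval(min(p), max(p))

# A scalar shader expression f(x) = x * x + x evaluated over x in [-1, 2].
x = Interval(-1.0, 2.0)
bound = x * x + x
print(bound.lo, bound.hi)   # conservative bounds: [-3.0, 6.0]
```

Note how `x * x` over [-1, 2] is bounded as [-2, 4] rather than the tight [0, 4]; conservative slack of this kind is inherent to naive interval code, which is one reason generating efficient bounding shaders is non-trivial.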
16. Practical HDR Texture Compression
- Author
- Tomas Akenine-Möller, Jon Hasselgren, Petrik Clarberg, and Jacob Munkberg
- Subjects
- Lossless compression, Texture compression, Computer science, Adaptive Scalable Texture Compression, Computer Graphics and Computer-Aided Design, Uncompressed video, S3 Texture Compression, Chrominance, Computer vision, Artificial intelligence, High dynamic range, Data compression
- Abstract
The use of high dynamic range (HDR) textures in real-time graphics applications can increase realism and provide a more vivid experience. However, the increased bandwidth and storage requirements for uncompressed HDR data can become a major bottleneck. Hence, several recent algorithms for HDR texture compression have been proposed. In this paper, we discuss several practical issues one has to confront in order to develop and implement HDR texture compression schemes. These include improved texture filtering and efficient offline compression. For compression, we describe how Procrustes analysis can be used to quickly match a predefined template shape against chrominance data. To reduce the cost of HDR texture filtering, we perform filtering prior to the color transformation, and use a simple trick to reduce the incurred errors. We also introduce a number of novel compression modes, which can be combined with existing compression schemes, or used on their own.
- Published
- 2008
17. Exploiting Visibility Correlation in Direct Illumination
- Author
- Tomas Akenine-Möller and Petrik Clarberg
- Subjects
- Computer science, Control variates, Computer Graphics and Computer-Aided Design, Computer graphics, Ambient occlusion, Computer vision, Ray tracing (graphics), Artificial intelligence, Visibility, Reflection mapping, Importance sampling
- Abstract
The visibility function in direct illumination describes the binary visibility over a light source, e.g., an environment map. Intuitively, the visibility is often strongly correlated between nearby locations in time and space, but exploiting this correlation without introducing noticeable errors is a hard problem. In this paper, we first study the statistical characteristics of the visibility function. Then, we propose a robust and unbiased method for using estimated visibility information to improve the quality of Monte Carlo evaluation of direct illumination. Our method is based on the theory of control variates, and it can be used on top of existing state-of-the-art schemes for importance sampling. The visibility estimation is obtained by sparsely sampling and caching the 4D visibility field in a compact bitwise representation. In addition to Monte Carlo rendering, the stored visibility information can be used in a number of other applications, for example, ambient occlusion and lighting design.
- Published
- 2008
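The estimator follows the standard control-variates pattern: integrate the difference between the true integrand and a correlated approximation, then add back the approximation's known integral. A generic sketch is below; in the paper the control `g` is built from the cached, bitwise 4D visibility field, which is not modeled here.

```python
import random

def estimate_with_control_variate(f, g, G, sample, n):
    """Monte Carlo estimate of the integral of f, using g as a control
    variate: average the (low-variance) difference f - g over n samples
    and add back G, the known integral of g."""
    acc = 0.0
    for _ in range(n):
        x, pdf = sample()
        acc += (f(x) - g(x)) / pdf
    return acc / n + G

# Toy check on [0, 1]: f(x) = x^2 with control g(x) = x, whose integral G = 1/2.
sample_uniform = lambda: (random.random(), 1.0)
print(estimate_with_control_variate(lambda x: x * x, lambda x: x, 0.5,
                                    sample_uniform, 10000))  # ~ 1/3
```

The estimator stays unbiased for any choice of g as long as G is exact; variance drops in proportion to how well g tracks f, which is why correlated visibility is an attractive control.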
18. Practical Product Importance Sampling for Direct Illumination
- Author
- Tomas Akenine-Möller and Petrik Clarberg
- Subjects
- Computer science, Computation, Sampling (statistics), Computer Graphics and Computer-Aided Design, Uncompressed video, Computer graphics, Tree traversal, Wavelet, Quadtree, Ray tracing (graphics), Computer vision, Artificial intelligence, Bidirectional reflectance distribution function, Algorithm, Shader, Importance sampling, Reflection mapping
- Abstract
We present a practical algorithm for sampling the product of environment map lighting and surface reflectance. Our method builds on wavelet-based importance sampling, but has a number of important advantages over previous methods. Most importantly, we avoid using precomputed reflectance functions by sampling the BRDF on-the-fly. Hence, all types of materials can be handled, including anisotropic and spatially varying BRDFs, as well as procedural shaders. This also opens up the possibility of using very high resolution, uncompressed environment maps. Our results show that this gives a significant reduction of variance compared to using lower resolution approximations. In addition, we study the wavelet product, and present a faster algorithm geared for sampling purposes. For our application, the computations are reduced to a simple quadtree-based multiplication. We build the BRDF approximation and evaluate the product in a single tree traversal, which makes the algorithm both faster and more flexible than previous methods.
- Published
- 2008
19. Fast Equal-Area Mapping of the (Hemi)Sphere using SIMD
- Author
- Petrik Clarberg
- Subjects
- Source code, Low distortion, Polynomial approximations, Scalar (mathematics), Inverse, Geometry, SIMD, Trigonometry, Unit square, Algorithm, Mathematics
- Abstract
We present a fast vectorized implementation of a transform that maps points in the unit square to the surface of the sphere, while preserving fractional area. The mapping uses the octahedral map combined with an equal-area parameterization and has many desirable features such as low distortion, straightforward interpolation, and fast inverse and forward transforms. Our SIMD implementation completely avoids branching and uses polynomial approximations for the trigonometric operations, as well as other tricks. This results in up to 9 times speed-up over a traditional scalar implementation. Source code is available online.
- Published
- 2008
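For reference, here is a scalar version of an equal-area octahedral square-to-sphere mapping of the kind the paper vectorizes; the SIMD implementation replaces the trigonometry with polynomial approximations and removes all branches, so this Python transcription is only meant to show the math and is not the paper's code.

```python
import math

def square_to_sphere(u, v):
    """Map (u, v) in [0,1]^2 to a unit vector such that equal areas in the
    square map to equal solid angles on the sphere (octahedral layout)."""
    # To [-1,1]^2, then work with absolute values (first quadrant).
    u, v = 2.0 * u - 1.0, 2.0 * v - 1.0
    up, vp = abs(u), abs(v)
    # Signed distance to the diagonal |u| + |v| = 1 selects the hemisphere.
    sd = 1.0 - (up + vp)
    d = abs(sd)
    r = 1.0 - d
    phi = (math.pi / 4.0) * (((vp - up) / r) + 1.0) if r > 0.0 else math.pi / 4.0
    z = math.copysign(1.0 - r * r, sd)
    s = r * math.sqrt(2.0 - r * r)
    x = math.copysign(s * math.cos(phi), u)
    y = math.copysign(s * math.sin(phi), v)
    return (x, y, z)
```

Because the map is equal-area, uniform points in the unit square become uniformly distributed directions, which is what makes it useful both for sampling and for octahedral environment map storage.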
20. Wavelet importance sampling
- Author
- Tomas Akenine-Möller, Wojciech Jarosz, Petrik Clarberg, and Henrik Wann Jensen
- Subjects
- Wavelet, Computer graphics (images), Image warping, Computer Graphics and Computer-Aided Design, Algorithm, Importance sampling, Order of magnitude, Rendering (computer graphics), Mathematics
- Abstract
We present a new technique for importance sampling products of complex functions using wavelets. First, we generalize previous work on wavelet products to higher dimensional spaces and show how this product can be sampled on-the-fly without the need of evaluating the full product. This makes it possible to sample products of high-dimensional functions even if the product of the two functions in itself is too memory consuming. Then, we present a novel hierarchical sample warping algorithm that generates high-quality point distributions, which match the wavelet representation exactly. One application of the new sampling technique is rendering of objects with measured BRDFs illuminated by complex distant lighting --- our results demonstrate how the new sampling technique is more than an order of magnitude more efficient than the best previous techniques.
- Published
- 2005
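The hierarchical sample warping can be pictured on a plain quadtree of importance values: at every level a uniform sample is pushed into one of the four children with probability proportional to the child's weight and rescaled so stratification is preserved. The sketch below shows that simplified picture (strictly positive weights assumed, wavelet details omitted); it is not the paper's wavelet-domain formulation.

```python
class QuadNode:
    """Hypothetical quadtree node: 'weight' is the cell's total importance,
    'children' is None for a leaf or a tuple (c00, c10, c01, c11) laid out
    as (bottom-left, bottom-right, top-left, top-right)."""
    def __init__(self, weight, children=None):
        self.weight = weight
        self.children = children

def warp_sample(u, v, node):
    """Warp a uniform sample (u, v) in [0,1)^2 through the quadtree so the
    result is distributed proportionally to the leaf weights.
    Returns (x, y, pdf); all weights must be strictly positive."""
    x0, y0, size, pdf = 0.0, 0.0, 1.0, 1.0
    while node.children is not None:
        c00, c10, c01, c11 = node.children
        total = c00.weight + c10.weight + c01.weight + c11.weight
        # Choose left or right column proportionally to the summed weights.
        p_left = (c00.weight + c01.weight) / total
        if u < p_left:
            u, px, col = u / p_left, p_left, (c00, c01)
        else:
            u, px, col = (u - p_left) / (1.0 - p_left), 1.0 - p_left, (c10, c11)
            x0 += 0.5 * size
        # Then bottom or top cell within that column.
        p_bottom = col[0].weight / (col[0].weight + col[1].weight)
        if v < p_bottom:
            v, py, node = v / p_bottom, p_bottom, col[0]
        else:
            v, py, node = (v - p_bottom) / (1.0 - p_bottom), 1.0 - p_bottom, col[1]
            y0 += 0.5 * size
        size *= 0.5
        pdf *= (px * py) / 0.25   # ratio to the uniform 1/4 per quadrant
    return x0 + u * size, y0 + v * size, pdf
```

With `u, v` drawn from a low-discrepancy sequence, the warped points inherit its stratification, which is the property the paper exploits to get high-quality distributions.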
21. Hierarchical stochastic motion blur rasterization
- Author
- Tomas Akenine-Möller, Robert M. Toth, Jacob Munkberg, Jon Hasselgren, Masamichi Sugihara, and Petrik Clarberg
- Subjects
- Homogeneous coordinates, Computer science, Bandwidth (signal processing), Visibility (geometry), Motion blur, Sampling (statistics), Tree traversal, CUDA, Computer vision, Artificial intelligence
- Abstract
We present a hierarchical traversal algorithm for stochastic rasterization of motion blur, which efficiently reduces the number of inside tests needed to resolve spatio-temporal visibility. Our method is based on novel tile against moving primitive tests that also provide temporal bounds for the overlap. The algorithm works entirely in homogeneous coordinates, supports MSAA, facilitates efficient hierarchical spatio-temporal occlusion culling, and handles typical game workloads with widely varying triangle sizes. Furthermore, we use high-quality sampling patterns based on digital nets, and present a novel reordering that allows efficient procedural generation with good anti-aliasing properties. Finally, we evaluate a set of hierarchical motion blur rasterization algorithms in terms of depth buffer bandwidth, shading efficiency, and arithmetic complexity.
- Published
- 2011
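The enabling observation is that a tile test against a moving primitive can also return a time interval, so only samples whose times fall inside that interval need per-sample inside tests. The sketch below shows that idea for the much simpler case of a linearly moving screen-space bounding box; the paper's actual tests operate on the primitive itself in homogeneous coordinates, and the names and box layout here are illustrative only.

```python
def _interval(a, b, c, ge):
    """Times t in [0, 1] where the linear value a + (b - a) * t is >= c
    (ge=True) or <= c (ge=False). Returns (lo, hi); empty if lo > hi."""
    slope = b - a
    if slope == 0.0:
        ok = a >= c if ge else a <= c
        return (0.0, 1.0) if ok else (1.0, 0.0)
    t = (c - a) / slope
    if (slope > 0.0) == ge:
        return (max(0.0, t), 1.0)    # condition holds from t onward
    return (0.0, min(1.0, t))        # condition holds up to t

def tile_time_bounds(box0, box1, tile):
    """Conservative [t_min, t_max] during which a 2D bound moving linearly
    from box0 (t=0) to box1 (t=1) can overlap a fixed screen tile, or None
    if it never does. Boxes and tile are (xmin, ymin, xmax, ymax)."""
    t_lo, t_hi = 0.0, 1.0
    for axis in (0, 1):
        # The moving max edge must reach past the tile's min edge ...
        lo, hi = _interval(box0[axis + 2], box1[axis + 2], tile[axis], True)
        t_lo, t_hi = max(t_lo, lo), min(t_hi, hi)
        # ... and the moving min edge must not pass the tile's max edge.
        lo, hi = _interval(box0[axis], box1[axis], tile[axis + 2], False)
        t_lo, t_hi = max(t_lo, lo), min(t_hi, hi)
    return (t_lo, t_hi) if t_lo <= t_hi else None
```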
22. Efficient multi-view ray tracing using edge detection and shader reuse
- Author
- Petrik Clarberg, Magnus Andersson, Jacob Munkberg, Tomas Akenine-Möller, Björn Johnsson, and Jon Hasselgren
- Subjects
- Edge detection, Adaptive sampling, Ray tracing (graphics), Computer science, Computer Graphics and Computer-Aided Design, Silhouette, Computer graphics, Autostereoscopy, Computer graphics (images), Multi-view, Computer Vision and Pattern Recognition, Artificial intelligence, Parallax, Shader, Software
- Abstract
Stereoscopic rendering and 3D stereo displays are quickly becoming mainstream. The natural extension is autostereoscopic multi-view displays, which, by the use of parallax barriers or lenticular lenses, can accommodate many simultaneous viewers without the need for active or passive glasses. As these displays, for the foreseeable future, will support only a rather limited number of views, there is a need for high-quality interperspective antialiasing. We present a specialized algorithm for efficient multi-view image generation from a camera line using ray tracing, which builds on previous methods for multi-dimensional adaptive sampling and reconstruction of light fields. We introduce multi-view silhouette edges to detect sharp geometrical discontinuities in the radiance function. These are used to significantly improve the quality of the reconstruction. In addition, we exploit shader coherence by computing analytical visibility between shading points and the camera line, and by sharing shading computations over the camera line.
- Published
- 2011
23. High dynamic range texture compression for graphics hardware
- Author
- Tomas Akenine-Möller, Petrik Clarberg, Jon Hasselgren, and Jacob Munkberg
- Subjects
- Texture compression, Computer science, Dynamic range, Image quality, Graphics hardware, Computer Graphics and Computer-Aided Design, Rendering (computer graphics), Uncompressed video, Computer graphics (images), S3 Texture Compression, Color depth, Computer vision, Artificial intelligence, High dynamic range
- Abstract
In this paper, we break new ground by presenting algorithms for fixed-rate compression of high dynamic range textures at low bit rates. First, the S3TC low dynamic range texture compression scheme is extended in order to enable compression of HDR data. Second, we introduce a novel robust algorithm that offers superior image quality. Our algorithm can be efficiently implemented in hardware, and supports textures with a dynamic range of over 10^9:1. At a fixed rate of 8 bits per pixel, we obtain results virtually indistinguishable from uncompressed HDR textures at 48 bits per pixel. Our research can have a big impact on graphics hardware and real-time rendering, since HDR texturing suddenly becomes affordable.
- Published
- 2006
24. Efficient product sampling using hierarchical thresholding.
- Author
- Petrik Clarberg, Luc Leblanc, Victor Ostromoukhov, and Pierre Poulin
- Subjects
- Threshold (perception), Pixels, Lighting, Minimal surfaces
- Abstract
We present an efficient method for importance sampling the product of multiple functions. Our algorithm computes a quick approximation of the product on the fly, based on hierarchical representations of the local maxima and averages of the individual terms. Samples are generated by exploiting the hierarchical properties of many low-discrepancy sequences, and thresholded against the estimated product. We evaluate direct illumination by sampling the triple product of environment map lighting, surface reflectance, and a visibility function estimated per pixel. Our results show considerable noise reduction compared to existing state-of-the-art methods using only the product of lighting and BRDF.
- Published
- 2008
25. Efficient product sampling using hierarchical thresholding
- Author
- Fabrice Rousselle, Pierre Poulin, Luc Leblanc, Petrik Clarberg, and Victor Ostromoukhov
- Subjects
- Pixel, Noise reduction, Rejection sampling, Pattern recognition, Computer Graphics and Computer-Aided Design, Thresholding, Maxima and minima, Triple product, Computer vision, Computer Vision and Pattern Recognition, Artificial intelligence, Software, Reflection mapping, Importance sampling, Mathematics
- Abstract
We present an efficient method for importance sampling the product of multiple functions. Our algorithm computes a quick approximation of the product on the fly, based on hierarchical representations of the local maxima and averages of the individual terms. Samples are generated by exploiting the hierarchical properties of many low-discrepancy sequences, and thresholded against the estimated product. We evaluate direct illumination by sampling the triple product of environment map lighting, surface reflectance, and a visibility function estimated per pixel. Our results show considerable noise reduction compared to existing state-of-the-art methods using only the product of lighting and BRDF.
26. Proceedings of the 7th Conference on High-Performance Graphics, HPG 2015, Los Angeles, California, USA, August 7-9, 2015
- Author
- Michael C. Doggett, Steven E. Molnar, Kayvon Fatahalian, Jacob Munkberg, Elmar Eisemann, Petrik Clarberg, and Stephen N. Spencer
- Published
- 2015
- Full Text
- View/download PDF