1,463 results on '"Shader"'
Search Results
2. Application of Unity Shader in Character Rendering: A Case Study on Dota Ogre Mag
- Author
-
Wang, Yunzhe, Luo, Xun, Editor-in-Chief, Almohammedi, Akram A., Series Editor, Chen, Chi-Hua, Series Editor, Guan, Steven, Series Editor, Pamucar, Dragan, Series Editor, and Ahmad, Badrul Hisham, editor
- Published
- 2024
- Full Text
- View/download PDF
3. A Digital Twin Model of Three-Dimensional Shading for Simulation of the Ironmaking Process.
- Author
-
Lei, Yongxiang and Karimi, Hamid Reza
- Subjects
DIGITAL twins ,THREE-dimensional modeling ,DIRECT-fired heaters ,BLAST furnaces ,TECHNOLOGICAL innovations ,LOGIC design ,SMELTING furnaces - Abstract
Advanced manufacturing is a new trend in sustainable industrial development, and the digital twin is a new technology that has attracted attention. Blast furnace smelting is an effective method for manufacturing iron and steel. Comprehensive and dependable surveillance of the blast furnace smelting process is essential for ensuring smooth operation and improving the quality of iron and steel output. Current technology makes it difficult to monitor the entire blast furnace ironmaking process. Based on Unity 3D, this study presents a digital-twin virtual reality simulation system for blast furnace ironmaking. First, shading modeling creates a three-dimensional dynamic geometric model for the different ironmaking system scenarios. Then, animations are scripted and the particle system is called according to the motion mode of each geometric object to give it a dynamic effect. Shaders are the focus of the design and contributions. In addition, shader optimization reduces hardware resource consumption and increases system fluency. Vertex shaders are used for all coordinate-space transformations and vertex output; fragment shaders are used for texture sampling, lighting-model calculation, normal calculation, noise superposition, and color output. The shader rendering technique allows for more realistic lighting effects, and the presented dynamic digital twin system implements this more realistic lighting in the ironmaking process. The design and deployment of the virtual interaction logic is based on the HTC VIVE hardware and the VRTK toolkit. In the actual simulation, the typical animation frame rate is stable at about 75 FPS (frames per second). The simulation system runs smoothly, and a cutting-edge method for observing the blast furnace ironmaking process is suggested. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
4. A Dynamic Simulation Method for Wheat Rust Based on Texture Features.
- Author
-
杨猛, 丁曙, 马云涛, 谢佳翊, and 段瑞枫
- Subjects
STRIPE rust ,WHEAT rusts ,PLANT surfaces ,AGRICULTURAL processing ,ORDER picking systems ,WHEAT - Abstract
Copyright of Journal of Zhejiang University (Science Edition) is the property of Journal of Zhejiang University (Science Edition) Editorial Office and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2022
- Full Text
- View/download PDF
5. A Digital Twin Model of Three-Dimensional Shading for Simulation of the Ironmaking Process
- Author
-
Yongxiang Lei and Hamid Reza Karimi
- Subjects
virtual reality ,shader ,virtual interaction ,digital twin ,ironmaking process ,Mechanical engineering and machinery ,TJ1-1570 - Abstract
Advanced manufacturing is a new trend in sustainable industrial development, and the digital twin is a new technology that has attracted attention. Blast furnace smelting is an effective method for manufacturing iron and steel. Comprehensive and dependable surveillance of the blast furnace smelting process is essential for ensuring smooth operation and improving the quality of iron and steel output. Current technology makes it difficult to monitor the entire blast furnace ironmaking process. Based on Unity 3D, this study presents a digital-twin virtual reality simulation system for blast furnace ironmaking. First, shading modeling creates a three-dimensional dynamic geometric model for the different ironmaking system scenarios. Then, animations are scripted and the particle system is called according to the motion mode of each geometric object to give it a dynamic effect. Shaders are the focus of the design and contributions. In addition, shader optimization reduces hardware resource consumption and increases system fluency. Vertex shaders are used for all coordinate-space transformations and vertex output; fragment shaders are used for texture sampling, lighting-model calculation, normal calculation, noise superposition, and color output. The shader rendering technique allows for more realistic lighting effects, and the presented dynamic digital twin system implements this more realistic lighting in the ironmaking process. The design and deployment of the virtual interaction logic is based on the HTC VIVE hardware and the VRTK toolkit. In the actual simulation, the typical animation frame rate is stable at about 75 FPS (frames per second). The simulation system runs smoothly, and a cutting-edge method for observing the blast furnace ironmaking process is suggested.
- Published
- 2022
- Full Text
- View/download PDF
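The vertex/fragment division of labor described in the abstract above (vertex shaders for coordinate-space transforms, fragment shaders for lighting-model calculation and color output) can be sketched on the CPU in Python. This is a hypothetical software illustration, not the authors' Unity/HLSL code; all names are invented for the sketch.

```python
# Minimal CPU-side sketch of the vertex/fragment shader split.
# Real Unity shaders are written in HLSL/ShaderLab, not Python.

def mat_vec4(m, v):
    """Multiply a 4x4 matrix (row-major nested lists) by a 4-vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def vertex_shader(position, mvp):
    """Coordinate-space transformation: object space -> clip space."""
    x, y, z = position
    return mat_vec4(mvp, [x, y, z, 1.0])

def fragment_shader(normal, light_dir, albedo):
    """Lambert lighting model: diffuse = albedo * max(0, N.L)."""
    ndotl = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
    return tuple(c * ndotl for c in albedo)

# An identity "MVP" leaves the vertex unchanged apart from the w component.
identity = [[1.0 if r == c else 0.0 for c in range(4)] for r in range(4)]
clip = vertex_shader((1.0, 2.0, 3.0), identity)    # [1.0, 2.0, 3.0, 1.0]
color = fragment_shader((0.0, 0.0, 1.0), (0.0, 0.0, 1.0), (0.8, 0.4, 0.2))
```

With the light aligned to the normal, N.L is 1 and the fragment returns the unattenuated albedo, mirroring how the real pipeline composes the two stages.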
6. Virtual reality rendering methods for training deep learning, analysing landscapes, and preventing virtual reality sickness.
- Author
-
Fukuda, Tomohiro, Novak, Marcos, Fujii, Hiroyuki, and Pencreach, Yoann
- Subjects
VIRTUAL reality ,DEEP learning ,LINEAR velocity ,ANGULAR velocity ,URBAN planning ,VIRTUAL reality software ,LANDSCAPES - Abstract
Virtual reality (VR) has been proposed for various purposes such as design studies, presentation, simulation and communication in the field of computer-aided architectural design. This paper explores new roles for VR; in particular, we propose rendering methods that consist of post-processing rendering, segmentation rendering and shadow-casting rendering for more-versatile approaches in the use of data. We focus on the creation of a dataset of annotated images, composed of paired foreground-background and semantic-relevant images, in addition to traditional immersive rendering for training deep learning neural networks and analysing landscapes. We also develop a camera velocity rendering method using a customised segmentation rendering technique that calculates the linear and angular velocities of the virtual camera within the VR space at each frame and overlays a colour on the screen according to the velocity value. Using this velocity information, developers of VR applications can improve the animation path within the VR space and prevent VR sickness. We successfully applied the developed methods to urban design and a design project for a building complex. In conclusion, the proposed method was evaluated to be both feasible and effective. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
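The camera-velocity rendering method above computes the virtual camera's linear and angular velocity each frame and overlays a color according to the velocity value. A minimal sketch of that per-frame calculation, with illustrative thresholds and a single-channel color ramp (the paper does not specify these values):

```python
import math

# Per-frame camera velocity -> warning-overlay color. The 5 m/s and
# 90 deg/s thresholds are assumptions for the sketch, not the paper's.

def camera_velocity(pos_prev, pos_curr, yaw_prev, yaw_curr, dt):
    linear = math.dist(pos_prev, pos_curr) / dt   # m/s
    angular = abs(yaw_curr - yaw_prev) / dt       # deg/s
    return linear, angular

def overlay_color(linear, angular, lin_max=5.0, ang_max=90.0):
    """Red channel encodes how close the motion is to the threshold."""
    severity = max(linear / lin_max, angular / ang_max)
    return (min(1.0, severity), 0.0, 0.0)  # RGB overlay

# Camera moved 0.5 m and turned 9 degrees in a 0.1 s frame:
lin, ang = camera_velocity((0, 0, 0), (0.5, 0, 0), 0.0, 9.0, dt=0.1)
# ~5.0 m/s and ~90 deg/s, i.e. a full-intensity warning overlay
```

A developer inspecting such overlays can then flatten the animation path wherever the red channel saturates, which is the sickness-prevention workflow the abstract describes.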
7. Three dimensional visualization of models and physical characteristics of oil and gas reservoir for virtual reality systems
- Author
-
D. Zh. Akhmed-Zaki, O. N. Turar, and A. R. Rakhymova
- Subjects
computer graphics ,computer animation ,machine graphics ,virtual reality ,opengl ,openvr ,shader ,visualization ,grid model visualization ,.grdecl. ,Mechanical engineering and machinery ,TJ1-1570 ,Electronic computers. Computer science ,QA75.5-76.95 - Abstract
The paper describes three-dimensional visualization of grid models of oil and gas reservoirs for virtual reality systems. It was implemented in the C++ programming language, using the OpenGL library for visualization of the model and the OpenVR library for the virtual environment, which requires the SteamVR utility. The created visualization module requires special equipment for operating in the virtual environment, such as a headset with its own display, base stations, and controllers. Geometrical data and physical characteristics of the oil field in the .GRDECL format produced by Schlumberger Eclipse are used as input data for drawing the model. Files of this format store data describing three-dimensional models consisting of cells along the Ox, Oy, and Oz axes, each of which represents a distorted parallelepiped. The advantage of using virtual reality in visualization is that visual perception for the observer improves considerably, and immersion in the virtual environment is accompanied by the effect of presence. On a VR display, the quality of the drawn object differs significantly from what can be seen on a flat-screen monitor.
- Published
- 2018
- Full Text
- View/download PDF
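Each .GRDECL cell described above is a distorted parallelepiped defined by eight corners. A minimal sketch of how such a cell becomes drawable geometry; the corner layout and face indexing here are illustrative (real GRDECL files encode corners via COORD/ZCORN keywords, which this sketch does not parse):

```python
# One corner-point cell -> triangles for an OpenGL-style renderer.

def cell_center(corners):
    """corners: eight (x, y, z) tuples, one per cell corner."""
    n = len(corners)
    return tuple(sum(c[i] for c in corners) / n for i in range(3))

# Twelve triangles (two per face) for one hexahedral cell, expressed as
# indices into the 8-corner list; the winding is an assumption.
CELL_FACES = [
    (0, 1, 3), (0, 3, 2),  # bottom
    (4, 6, 7), (4, 7, 5),  # top
    (0, 4, 5), (0, 5, 1),  # front
    (2, 3, 7), (2, 7, 6),  # back
    (0, 2, 6), (0, 6, 4),  # left
    (1, 5, 7), (1, 7, 3),  # right
]

unit = [(x, y, z) for z in (0, 1) for y in (0, 1) for x in (0, 1)]
center = cell_center(unit)   # (0.5, 0.5, 0.5)
```

A distorted cell simply supplies eight non-axis-aligned corners; the same index list still produces a closed hexahedral surface.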
8. Semantic Composition of Language-Integrated Shaders
- Author
-
Haaser, Georg, Steinlechner, Harald, May, Michael, Schwärzler, Michael, Maierhofer, Stefan, Tobler, Robert, Barbosa, Simone Diniz Junqueira, Series Editor, Filipe, Joaquim, Series Editor, Kotenko, Igor, Series Editor, Sivalingam, Krishna M., Series Editor, Washio, Takashi, Series Editor, Yuan, Junsong, Series Editor, Zhou, Lizhu, Series Editor, Battiato, Sebastiano, editor, Coquillart, Sabine, editor, Pettré, Julien, editor, Laramee, Robert S., editor, Kerren, Andreas, editor, and Braz, José, editor
- Published
- 2015
- Full Text
- View/download PDF
9. RADAR SIMULATION USING GPU-BASED TEXTURE MAPPING
- Author
-
Nguyễn Trung Kiên, Trương Khánh Nghĩa, and Nguyễn Thị Lan
- Subjects
GPU programming ,simulation ,image overlay ,radar ,shader ,texture mapping. ,General Works - Abstract
Radar is widely used and integrated in many different kinds of weapons and equipment. During the design and development of a radar-based training system, simulating the operation of the radar screen is essential. However, this task involves heavy computation. This paper therefore proposes a GPU-based method to simulate the contents and effects of a working radar screen. Most of the computation is performed on the GPU, so the radar simulation can run in real time and be integrated into a larger simulation system.
- Published
- 2017
- Full Text
- View/download PDF
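The GPU texture-mapping approach above boils down to mapping each screen pixel to polar coordinates (range, bearing) and looking up the echo intensity in a texture. A CPU-side Python sketch of that mapping; the texture dimensions and nearest-neighbour lookup are assumptions, not details from the paper:

```python
import math

# Screen pixel -> polar coordinates -> echo texture lookup.

def pixel_to_polar(x, y, cx, cy):
    """Pixel -> (range from screen center, bearing in degrees 0..360)."""
    dx, dy = x - cx, y - cy
    rng = math.hypot(dx, dy)
    bearing = math.degrees(math.atan2(dy, dx)) % 360.0
    return rng, bearing

def sample_echo(texture, rng, bearing, max_range):
    """Nearest-neighbour lookup in a (range_bins x 360) echo texture."""
    bins = len(texture)
    r = min(bins - 1, int(rng / max_range * bins))
    b = int(round(bearing)) % 360
    return texture[r][b]

texture = [[0.0] * 360 for _ in range(64)]
texture[32][90] = 1.0                              # one synthetic echo
rng, brg = pixel_to_polar(100, 150, 100, 100)      # 50 px at bearing 90
```

On the GPU this lookup runs per fragment, which is why the sweep and afterglow effects come essentially for free once the echo data is uploaded as a texture.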
10. Evolution of the Graphics Processing Unit (GPU)
- Author
-
Stephen W. Keckler, David B. Kirk, and William J. Dally
- Subjects
Vertex (computer graphics) ,Fragment (computer graphics) ,Computer science ,Graphics processing unit ,Frame rate ,High memory ,Hardware and Architecture ,Computer graphics (images) ,Smart camera ,Electrical and Electronic Engineering ,Graphics ,Shader ,Software ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
Graphics processing units (GPUs) power today’s fastest supercomputers, are the dominant platform for deep learning, and provide the intelligence for devices ranging from self-driving cars to robots and smart cameras. They also generate compelling photorealistic images at real-time frame rates. GPUs have evolved by adding features to support new use cases. NVIDIA’s GeForce 256, the first GPU, was a dedicated processor for real-time graphics, an application that demands large amounts of floating-point arithmetic for vertex and fragment shading computations and high memory bandwidth. As real-time graphics advanced, GPUs became programmable. The combination of programmability and floating-point performance made GPUs attractive for running scientific applications. Scientists found ways to use early programmable GPUs by casting their calculations as vertex and fragment shaders. GPUs evolved to meet the needs of scientific users by adding hardware for simpler programming, double-precision floating-point arithmetic, and resilience.
- Published
- 2021
11. CPU–GPU buffer communication using compute shader to fill volumes with spheres
- Author
-
F. Moo-Mena, J. Gomez-Montalvo, F. A. Madera-Ramirez, and J. L. López-Martínez
- Subjects
Hardware and Architecture ,Computer science ,SPHERES ,Throughput (business) ,Shader ,Software ,Buffer (optical fiber) ,ComputingMethodologies_COMPUTERGRAPHICS ,Information Systems ,Theoretical Computer Science ,Rendering (computer graphics) ,Computational science - Abstract
This paper describes the use of shaders to perform parallel operations by improving CPU–GPU communication, using both rendering and compute shaders. When the number of spheres is large, execution becomes slow and storing the particles requires a lot of space. We parallelized the frozen method using efficient inter-process GPU communication to reduce the operations required. We propose handling two buffers, one for operations and the other for rendering. While the rendering buffer grows as more spheres are required, the operational buffer keeps its size, so the number of operations is held constant. Experimental results demonstrate that the proposed method achieves up to a 100× throughput improvement over the sequential version. We define a configuration by a 4-tuple as input to the algorithm, and we found a pattern for choosing better configurations.
- Published
- 2021
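The two-buffer scheme above (a fixed-size operational buffer bounding the per-pass work, plus a rendering buffer that grows with the sphere count) can be sketched in Python. The class, buffer sizes, and batching policy are illustrative assumptions, not the paper's GPU implementation:

```python
# Fixed operational buffer + growing rendering buffer, as described
# in the abstract; here both live on the CPU for illustration.

class SphereBuffers:
    def __init__(self, op_size):
        self.operational = [None] * op_size  # fixed: bounds per-pass work
        self.rendering = []                  # grows with the sphere count

    def add_spheres(self, spheres):
        # Process in fixed-size batches so the operation count per pass
        # stays bounded, then append results to the render buffer.
        step = len(self.operational)
        for i in range(0, len(spheres), step):
            batch = spheres[i:i + step]
            self.operational[:len(batch)] = batch  # "upload" to op buffer
            self.rendering.extend(batch)           # accumulate for drawing

bufs = SphereBuffers(op_size=4)
bufs.add_spheres([(x, 0, 0, 1.0) for x in range(10)])
# rendering now holds all 10 spheres; operational never exceeded 4 slots
```

The point of the split is that the expensive compute pass always touches at most `op_size` elements, regardless of how many spheres the scene accumulates.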
12. OpenGL 4.3, Shaders and the Programmable Pipeline: Liftoff
- Author
-
Sumanta Guha
- Subjects
Computer science ,Computer graphics (images) ,OpenGL ,Pipeline (software) ,Shader - Published
- 2022
13. Research of Battlefield Visualization Shadow Effects on Ogre
- Author
-
Ye, Fang, Wang, Jingxuan, Fu, Tianshuang, Jin, David, editor, and Lin, Sally, editor
- Published
- 2012
- Full Text
- View/download PDF
14. Production of a CGI spot for the automotive sector: production of backplates for the integration of CGI elements and audiovisual editing of an automotive spot
- Abstract
This project aims to create a CGI video for an automotive advertising spot, which serves the student as a showcase of his technical and creative skills in idea generation, conceptualization and 3D design, compositing, editing, and the whole set of disciplines involved in the audiovisual production of a piece of this kind. The ultimate goal of this creation is to increase the student's visibility on social networks, to reach more artists and people interested in the sector, and consequently to attract the interest of companies in the audiovisual world and of potential future clients inside and outside the freelance platform Fiverr. The development of this project includes the integration of material provided by Pere Gras, who in his own final-degree project (TFG) will model the vehicle selected in the proposal. Subsequently, the part covered by this work will generate all the digital content needed for rendering, compositing, and editing a video. The proposal consists of producing different sequences in which it is essential to maintain a visual coherence and aesthetic directly related to personal works and projects previously produced by the student. In this way, the aim is to tie the TFG to the artistic trajectory and the digital content gallery of the alguero3d brand. The results generated in this project are the fruit of years of inspiration and of collecting visual, musical, aesthetic, and artistic references, which have led the student to see this work as a great opportunity to capture many ideas, to take it beyond an academic work, and to profit from it professionally.
- Published
- 2022
15. Aetherius: Real-Time Volumetric Cloud Generation Tool for Unity
- Abstract
This thesis describes the development of Aetherius, a Unity tool that can dynamically generate and visualize virtually endless, unique cloudscapes in real time. The resulting tool can be used in videogames to easily and quickly create immersive and dynamic skies without spending resources on the development of a dedicated system. Developing a volumetric cloud system is complicated, and small studios in particular do not have the resources to create such systems for their skies. The objective of this project is to provide an accessible and easy-to-use alternative for small studios and indie developers to turn static, boring, featureless skies into high-quality ones. This document describes the problems encountered during the development of the tool and the techniques used to generate, render, and optimize cloudscapes; to test the tool's usefulness, the project includes the creation of a small demo application.
- Published
- 2022
16. A hybrid graphics/video rate control method based on graphical assets for cloud gaming
- Author
-
Mahmoud Reza Hashemi, Mohammad Ghanbari, and Iman Soltani Mohammadi
- Subjects
Computer graphics ,Computer science ,Cloud gaming ,Real-time computing ,Frame (networking) ,Graphics ,Client-side ,Frame rate ,Shader ,Information Systems ,Rendering (computer graphics) - Abstract
In Hybrid Cloud Gaming (HCG), as long as the graphical assets are available at the client side, rendering can be performed locally. However, if the client is not able to render all the frames in time, the frame rate may drop below the target value. This paper presents an asset-based, frame-level hybrid graphics/video rate control method for HCG, referred to as AHCG, which aims to improve the players' Quality of Experience (QoE) by tapping into the available processing power at the client side, while keeping a steady frame rate for thin clients and taking full advantage of the user's tolerable delay. In the proposed client/server model, graphics data are intercepted and streamed in an asset-based approach. This approach handles rate control per asset, where an asset is defined as 3D object model data, textures, and shader programs. Rendering a frame on the client side not only maintains the original quality of that frame, but also reduces the bandwidth requirements of the entire service by reusing the same assets across frames. In the proposed method, if the accumulated rendering delay at the client side violates the tolerable delay set by the user, video streaming is used to compensate for the client device's lack of processing power. Furthermore, quality fluctuation between graphics and video frames is addressed to provide a seamless experience when switching between graphics and video streaming. Several objective and subjective tests were conducted; the experimental results show a 20-fps increase in frame rate while maintaining a minimum of 58 fps, with at least a 0.25-unit improvement in the pooled standard deviation of SSIM values compared to existing HCGs. The subjective tests also suggest an average 5.62 percent improvement in MOS compared to the best HCG methods.
- Published
- 2021
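The switching rule described above (render locally until the accumulated rendering delay violates the user's tolerable delay, then fall back to video streaming) can be sketched per frame. The frame budget and tolerance values below are illustrative assumptions, not numbers from the paper:

```python
# Delay-driven graphics-vs-video decision, one entry per frame.

def schedule_frames(render_times_ms, frame_budget_ms, tolerable_delay_ms):
    decisions, backlog = [], 0.0
    for t in render_times_ms:
        backlog += max(0.0, t - frame_budget_ms)  # delay from slow frames
        if backlog > tolerable_delay_ms:
            decisions.append("video")    # client can't keep up: stream video
            backlog = 0.0                # the video frame arrives on schedule
        else:
            decisions.append("graphics")
    return decisions

# 16.7 ms budget (~60 FPS): the 40 ms frames accumulate delay until the
# backlog exceeds the 30 ms tolerance and the service switches to video.
plan = schedule_frames([10, 40, 40, 40, 10], 16.7, 30.0)
```

Resetting the backlog on a video frame is the simplifying assumption here; the real system also has to manage the graphics/video quality fluctuation the abstract mentions.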
17. Systematically differentiating parametric discontinuities
- Author
-
Jesse Michel, Jonathan Ragan-Kelley, Kevin Mu, Tzu-Mao Li, Sai Praveen Bangaru, and Gilbert Bernstein
- Subjects
Computer graphics ,Language primitive ,Automatic differentiation ,Computer science ,Semantics (computer science) ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Classification of discontinuities ,Shader ,Algorithm ,Computer Graphics and Computer-Aided Design ,ComputingMethodologies_COMPUTERGRAPHICS ,Parametric statistics ,Rendering (computer graphics) - Abstract
Emerging research in computer graphics, inverse problems, and machine learning requires us to differentiate and optimize parametric discontinuities. These discontinuities appear in object boundaries, occlusion, contact, and sudden change over time. In many domains, such as rendering and physics simulation, we differentiate the parameters of models that are expressed as integrals over discontinuous functions. Ignoring the discontinuities during differentiation often has a significant impact on the optimization process. Previous approaches either apply specialized hand-derived solutions, smooth out the discontinuities, or rely on incorrect automatic differentiation. We propose a systematic approach to differentiating integrals with discontinuous integrands, by developing a new differentiable programming language. We introduce integration as a language primitive and account for the Dirac delta contribution from differentiating parametric discontinuities in the integrand. We formally define the language semantics and prove the correctness and closure under the differentiation, allowing the generation of gradients and higher-order derivatives. We also build a system, Teg, implementing these semantics. Our approach is widely applicable to a variety of tasks, including image stylization, fitting shader parameters, trajectory optimization, and optimizing physical designs.
- Published
- 2021
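The core problem above can be seen numerically. For F(t) = integral over [0, 1] of the step integrand [x < t], we have F(t) = t on (0, 1), so the true derivative is 1; but the integrand's derivative in t is 0 almost everywhere, so naive differentiation under the integral sign misses the Dirac delta contribution entirely. This Python check is an illustration of that failure mode, not code from the Teg system:

```python
# True derivative of an integral with a discontinuous integrand vs. the
# naive "differentiate the integrand pointwise" answer.

def integrand(x, t):
    return 1.0 if x < t else 0.0

def F(t, n=10000):
    # Midpoint-rule integral of the discontinuous integrand over [0, 1].
    return sum(integrand((i + 0.5) / n, t) for i in range(n)) / n

def finite_diff(f, t, h=1e-3):
    return (f(t + h) - f(t - h)) / (2 * h)

true_grad = finite_diff(F, 0.5)   # ~1.0: carries the delta contribution
# Pointwise derivative of the integrand w.r.t. t is 0 wherever x != t:
naive_grad = finite_diff(lambda t: integrand(0.25, t), 0.5)   # 0.0
```

The gap between `true_grad` and `naive_grad` is exactly the boundary term the paper's language accounts for by treating integration as a primitive.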
18. Rendering Point Clouds with Compute Shaders and Vertex Order Optimization
- Author
-
Michael Wimmer, Bernhard Kerbl, and Markus Schütz
- Subjects
FOS: Computer and information sciences ,020203 distributed computing ,Vertex (computer graphics) ,Computer science ,Pipeline (computing) ,Sorting ,Point cloud ,020207 software engineering ,02 engineering and technology ,Frame rate ,Computer Graphics and Computer-Aided Design ,Graphics (cs.GR) ,Computational science ,Rendering (computer graphics) ,Computer Science - Graphics ,0202 electrical engineering, electronic engineering, information engineering ,Aliasing (computing) ,Shader - Abstract
While commodity GPUs provide a continuously growing range of features and sophisticated methods for accelerating compute jobs, many state-of-the-art solutions for point cloud rendering still rely on the provided point primitives (GL_POINTS, POINTLIST, ...) of graphics APIs for image synthesis. In this paper, we present several compute-based point cloud rendering approaches that outperform the hardware pipeline by up to an order of magnitude and achieve significantly better frame times than previous compute-based methods. Beyond basic closest-point rendering, we also introduce a fast, high-quality variant to reduce aliasing. We present and evaluate several variants of our proposed methods with different flavors of optimization, in order to ensure their applicability and achieve optimal performance on a range of platforms and architectures with varying support for novel GPU hardware features. During our experiments, the observed peak performance was reached rendering 796 million points (12.7GB) at rates of 62 to 64 frames per second (50 billion points per second, 802GB/s) on an RTX 3090 without the use of level-of-detail structures. We further introduce an optimized vertex order for point clouds to boost the efficiency of GL_POINTS by a factor of 5x in cases where hardware rendering is compulsory. We compare different orderings and show that Morton sorted buffers are faster for some viewpoints, while shuffled vertex buffers are faster in others. In contrast, combining both approaches by first sorting according to Morton-code and shuffling the resulting sequence in batches of 128 points leads to a vertex buffer layout with high rendering performance and low sensitivity to viewpoint changes., Comment: 13 pages content, 5 pages appendix
- Published
- 2021
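The vertex ordering described above (sort by Morton code, then shuffle the sorted sequence in batches of 128 points) is easy to sketch on the CPU. The 10-bit-per-axis Morton encoding below is a common construction; the paper's exact coordinate quantization is not reproduced here:

```python
import random

# Morton-sort points, then shuffle each fixed-size batch.

def part1by2(n):
    """Spread the low 10 bits of n with two zero bits between each."""
    n &= 0x3FF
    n = (n ^ (n << 16)) & 0xFF0000FF
    n = (n ^ (n << 8)) & 0x0300F00F
    n = (n ^ (n << 4)) & 0x030C30C3
    n = (n ^ (n << 2)) & 0x09249249
    return n

def morton3(x, y, z):
    """Interleave x, y, z bits: x in bit 0, y in bit 1, z in bit 2, ..."""
    return part1by2(x) | (part1by2(y) << 1) | (part1by2(z) << 2)

def order_points(points, batch=128, seed=0):
    pts = sorted(points, key=lambda p: morton3(*p))  # Morton order
    rng = random.Random(seed)
    out = []
    for i in range(0, len(pts), batch):              # shuffle per batch
        chunk = pts[i:i + batch]
        rng.shuffle(chunk)
        out.extend(chunk)
    return out
```

The Morton sort gives spatial locality (good for some viewpoints), while the per-batch shuffle restores the viewpoint insensitivity of a fully shuffled buffer, which is the combination the abstract reports as the best of both.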
19. How to Analyze, Preserve, and Communicate Leonardo's Drawing? A Solution to Visualize in RTR Fine Art Graphics Established from 'the Best Sense'
- Author
-
Marco Gaiani, Fabrizio Ivan Apollonio, Simone Garagnani, and Riccardo Foschi
- Subjects
3D digital artifact capture, analytic tools for scholars, Leonardo da Vinci, Renaissancedrawings, Color reproduction, Real-Time Rendering, Shaders, Material classification, and reproduction ,Computer science ,business.industry ,media_common.quotation_subject ,010401 analytical chemistry ,Fidelity ,020207 software engineering ,02 engineering and technology ,Conservation ,01 natural sciences ,Computer Graphics and Computer-Aided Design ,Real-time rendering ,0104 chemical sciences ,Computer Science Applications ,Rendering (computer graphics) ,Fine art ,Visualization ,Workflow ,Human–computer interaction ,0202 electrical engineering, electronic engineering, information engineering ,Graphics ,business ,Shader ,Information Systems ,media_common - Abstract
Original hand drawings by Leonardo are astonishing collections of knowledge and superb records of the artist's way of working, testifying to the technical and cultural peak of the Renaissance era. However, due to their delicate and fragile nature, they are hard to manipulate and must be preserved. To overcome this problem we developed, in a 10-year-long research program, a complete workflow to produce a system able to replace, investigate, describe, and communicate ancient fine drawings through what Leonardo calls "the best sense" (i.e., sight): the so-called ISLe (InSightLeonardo). The resulting visualization app is targeted at a wide audience of museum visitors and, most importantly, art historians, scholars, conservators, and restorers. This article describes a specific feature of the workflow: appearance modeling aimed at accurate Real-Time Rendering (RTR) visualization. The development is based on direct observation of five of Leonardo da Vinci's best-known drawings, spanning his entire activity as a draftsman, and results from a careful analysis of the drawing materials Leonardo used, in which the peculiarities of the materials are digitally reproduced at various scales, exploiting solutions that favor the accuracy of perceived reproduction over fidelity to the physical model and that can be efficiently implemented on a standard GPU-accelerated RTR pipeline. The results are exemplified on five of Leonardo's drawings, and multiple subjective and objective evaluations are illustrated, aiming to assess the application's potential and critical issues.
- Published
- 2021
20. Production of a CGI spot for the automotive sector: production of backplates for the integration of CGI elements and audiovisual editing of an automotive spot
- Author
-
Muñoz Algueró, Joan and Virgili Torrent, Marc
- Subjects
3D animation ,Visual aesthetics ,Automobile industry and trade ,Advertising ,Blender ,Shader ,Cycles ,Three-dimensional imaging ,CGI ,Frame ,Keyframe ,Sound, image and multimedia::Multimedia creation::Audiovisual production [UPC subject areas] - Abstract
This project aims to create a CGI video for an automotive advertising spot, which serves the student as a showcase of his technical and creative skills in idea generation, conceptualization and 3D design, compositing, editing, and the whole set of disciplines involved in the audiovisual production of a piece of this kind. The ultimate goal of this creation is to increase the student's visibility on social networks, to reach more artists and people interested in the sector, and consequently to attract the interest of companies in the audiovisual world and of potential future clients inside and outside the freelance platform Fiverr. The development of this project includes the integration of material provided by Pere Gras, who in his own final-degree project (TFG) will model the vehicle selected in the proposal. Subsequently, the part covered by this work will generate all the digital content needed for rendering, compositing, and editing a video. The proposal consists of producing different sequences in which it is essential to maintain a visual coherence and aesthetic directly related to personal works and projects previously produced by the student. In this way, the aim is to tie the TFG to the artistic trajectory and the digital content gallery of the alguero3d brand. The results generated in this project are the fruit of years of inspiration and of collecting visual, musical, aesthetic, and artistic references, which have led the student to see this work as a great opportunity to capture many ideas, to take it beyond an academic work, and to profit from it professionally.
- Published
- 2022
21. Real-Time, Curvature-Sensitive Surface Simplification Using Depth Images.
- Author
-
Bahirat, Kanchan, Raghuraman, Suraj, and Prabhakaran, Balakrishnan
- Abstract
With the rising popularity of handheld virtual reality (VR) devices and depth sensing RGB-D cameras, a variety of VR applications merging these two technologies has been suggested. However, immersive quality of experience in such VR applications is constrained mainly by the large data size and the hardware limitations to handle it. The depth data captured by RGB-D cameras provide a dense sampling of the surface, resulting in a high-poly mesh, which is difficult to be rendered on handheld VR devices due to their limited processing power. To improve the immersive VR experience, a sparse approximation of the depth data is needed. Traditional mesh and point cloud simplification methods are iterative and so are unsuitable for real-time applications. In this paper, we introduce a depth-image-based approach that is capable of generating a good quality sparse mesh for visualization in real time. We propose a curvature-sensitive surface simplification (CS³) operator that assigns an importance measure to each point in the depth image, based on the local curvature. Further, it applies an importance-order-based restrictive sampling to generate a sparse representation that retains the overall shape as well as the finer features of the object. We also modify the 2-D sweep-line-based constrained Delaunay triangulation to generate 3-D meshes from the sparse point sampling obtained using CS³. In addition, the proposed approach preserves key surface properties, such as texture coordinates and materials. We used three different datasets containing dense 3-D models with and without texture, which are scanned using various sensors to validate and compare the robustness, real-time performance, and accuracy of the proposed method over existing approaches.
Based on the experimental results, we show that the proposed CS³ operator and modified 2-D sweep-line-based triangulation generate sparse meshes from depth image in real time, performing significantly faster than current state-of-the-art methods while maintaining similar visual quality. [ABSTRACT FROM PUBLISHER]
- Published
- 2018
- Full Text
- View/download PDF
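A curvature-sensitive importance measure in the spirit of the CS³ operator above can be sketched with a discrete depth-image Laplacian: planar regions score zero, while creases and bumps score high, so keeping only the top-scoring points yields a feature-preserving sparse sampling. The exact CS³ weighting and restrictive-sampling rule are the paper's; this is a simplified stand-in:

```python
# Laplacian-based importance measure on a depth image + top-k sampling.

def laplacian(depth, x, y):
    """|4*d(x,y) - sum of 4-neighbours|: zero on planar regions."""
    return abs(4 * depth[y][x]
               - depth[y - 1][x] - depth[y + 1][x]
               - depth[y][x - 1] - depth[y][x + 1])

def sparse_sample(depth, keep):
    h, w = len(depth), len(depth[0])
    scored = [(laplacian(depth, x, y), x, y)
              for y in range(1, h - 1) for x in range(1, w - 1)]
    scored.sort(reverse=True)                 # importance order
    return [(x, y) for _, x, y in scored[:keep]]

# A flat plane with one bump: the bump wins the restrictive sampling.
depth = [[1.0] * 5 for _ in range(5)]
depth[2][2] = 2.0
points = sparse_sample(depth, keep=1)   # [(2, 2)]
```

A constrained Delaunay triangulation over the retained points (as in the paper) would then rebuild a sparse mesh that keeps the bump while collapsing the plane.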
22. Thickness-aware voxelization.
- Author
-
Zhang, Zhuopeng, Morishima, Shigeo, and Wang, Changbo
- Subjects
COMPUTER graphics ,GLOBAL illumination algorithms ,GRAPHICS processing units ,MESH networks ,COLLISION detection (Computer animation) - Abstract
Voxelization is a crucial process for many computer graphics applications such as collision detection, rendering of translucent objects, and global illumination. However, in some situations, although the mesh looks good, the voxelization result may be undesirable. In this paper, we describe a novel voxelization method that uses the graphics processing unit for surface voxelization. Our improvements on the voxelization algorithm can address a problem of state-of-the-art voxelization, which cannot deal with thin parts of the mesh object. We improve the quality of voxelization on both normal mediation and surface correction. Furthermore, we investigate our voxelization methods on indirect illumination, showing the improvement on the quality of real-time rendering. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
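For context on the operation the paper improves, surface voxelization can be sketched naively by sampling points on each triangle and marking the voxels they fall into. This naive sampling is exactly the kind of scheme that can miss thin parts of a mesh, the failure mode the thickness-aware method addresses; the sample density and grid resolution below are illustrative assumptions:

```python
# Naive surface voxelization of one triangle by barycentric sampling.

def voxelize_triangle(a, b, c, voxel_size, samples=32):
    voxels = set()
    for i in range(samples + 1):
        for j in range(samples + 1 - i):
            u, v = i / samples, j / samples
            w = 1.0 - u - v
            # Barycentric combination of the three triangle vertices.
            p = [u * a[k] + v * b[k] + w * c[k] for k in range(3)]
            voxels.add(tuple(int(p[k] // voxel_size) for k in range(3)))
    return voxels

tri = ((0, 0, 0), (1, 0, 0), (0, 1, 0))
vox = voxelize_triangle(*tri, voxel_size=0.5)
```

If the sample density is too low relative to the voxel size, thin features between sample points simply never touch a voxel, which is why robust methods rasterize triangles against the grid instead of sampling them.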
23. Multimodal visualization of complementary color-coded FA map and tensor glyphs for interactive tractography ROI seeding
- Author
-
Raphael Voltoline and Shin-Ting Wu
- Subjects
Computer science ,business.industry ,General Engineering ,020207 software engineering ,Complementary colors ,02 engineering and technology ,Computer Graphics and Computer-Aided Design ,Visualization ,Rendering (computer graphics) ,Human-Computer Interaction ,Region of interest ,Fractional anisotropy ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Computer vision ,Artificial intelligence ,business ,Shader ,Tractography ,Diffusion MRI - Abstract
Fiber tractography is still unique in providing detailed imaging of white matter fiber bundles and connectivity between different brain regions. For finding specific fiber bundles, the most widely applied technique is tracking fibers from seeds in a region of interest (ROI) within a diffusion tensor imaging (DTI) volume, or limiting tracking results to the ROI. Color-encoded fractional anisotropy (FA) maps derived from DTI data, neuroanatomical atlases, and anatomical T1-weighted magnetic resonance imaging (MRI) data have been proposed as complementary data to improve the placement of an ROI. Mentally mapping the colors of a color-encoded FA map to directions requires a cognitive process. This paper addresses the fusion of shape with color to make ROI drawing a perceptual rather than a cognitive task. We propose rendering diffusion tensors as superquadric glyphs (shape) superimposed over the standard practice of a color-encoded FA map (color) co-registered to a T1-weighted MRI image (anatomical constraint). A novel object-space algorithm that can efficiently render diffusion tensor glyphs is presented. A strategy for distributing the GPU hardware workload was devised to maximize its occupancy and reduce stalls. Implementations with a compute shader and a geometry shader are compared in detail. We show that our proposal outperforms other rendering solutions. Preliminary quantitative comparisons of the nerve fibers reconstructed by interactive seeding strategies with and without the glyphs suggest that the glyph-based approach conveys directional information more accurately.
- Published
- 2021
24. A Shader Technique that applies Noise Texture to Vertex Movement and Surface Texture Mapping of Polygon Mesh
- Author
-
Minseok Hong and Jinho Park
- Subjects
Noise ,Vertex (computer graphics) ,business.industry ,Movement (music) ,Computer science ,Computer vision ,Polygon mesh ,Artificial intelligence ,Surface finish ,business ,Shader ,Texture (geology) - Published
- 2021
25. Temporally Adaptive Shading Reuse for Real-Time Rendering and Virtual Reality
- Author
-
Dieter Schmalstieg, Philip Voglreiter, Thomas Neff, Joerg H. Mueller, and Markus Steinberger
- Subjects
business.industry ,Computer science ,Frame (networking) ,Visibility (geometry) ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,020207 software engineering ,02 engineering and technology ,Virtual reality ,Reuse ,Computer Graphics and Computer-Aided Design ,Real-time rendering ,Rendering (computer graphics) ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Computer vision ,Shading ,Artificial intelligence ,business ,Shader ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
Temporal coherence has the potential to enable a huge reduction of shading costs in rendering. Existing techniques focus only on spatial shading reuse or cannot adaptively choose temporal shading frequencies. We find that temporal shading reuse is possible for extended periods of time for a majority of samples, and we show under which circumstances users perceive temporal artifacts. Our analysis implies that we can approximate shading gradients to efficiently determine when and for how long shading can be reused. Whereas visibility usually stays temporally coherent from frame to frame for more than 90% of samples, we find that even in heavily animated game scenes with advanced shading, typically more than 50% of shading is also temporally coherent. To exploit this potential, we introduce a temporally adaptive shading framework and apply it to two real-time methods. Its application saves more than 57% of the shader invocations, reducing overall rendering times in virtual reality applications without a noticeable loss in visual quality. Overall, our work shows that there is significantly more potential for shading reuse than is currently exploited.
- Published
- 2021
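The core idea of the abstract above, reusing a sample's last shaded result while an approximated shading gradient predicts little change, can be sketched as follows. This is a hypothetical illustration (the `shade` function, cache layout, and tolerance are invented for the example, not taken from the paper):

```python
# Sketch of temporally adaptive shading reuse (illustrative only).
# A sample's color is re-shaded only when the drift predicted from an
# estimated per-frame shading gradient exceeds a tolerance.

def shade(sample_id, frame):
    """Stand-in for an expensive shader invocation."""
    return 0.5 + 0.001 * frame  # slowly varying shading result

class ReuseCache:
    def __init__(self, tolerance=0.01):
        self.tolerance = tolerance
        self.entries = {}  # sample_id -> (color, frame, gradient or None)
        self.invocations = 0

    def resolve(self, sample_id, frame):
        entry = self.entries.get(sample_id)
        if entry is not None:
            color, shaded_frame, gradient = entry
            age = frame - shaded_frame
            if gradient is not None and abs(gradient) * age < self.tolerance:
                return color  # reuse: shader invocation skipped
        new_color = shade(sample_id, frame)
        self.invocations += 1
        if entry is not None and frame > entry[1]:
            gradient = (new_color - entry[0]) / (frame - entry[1])
        else:
            gradient = None  # need a second shade to estimate the gradient
        self.entries[sample_id] = (new_color, frame, gradient)
        return new_color

cache = ReuseCache(tolerance=0.01)
for frame in range(100):
    cache.resolve("pixel_0", frame)
print("invocations:", cache.invocations)  # far fewer than 100
```

With this slowly varying shading function, only about one frame in ten triggers a real shader invocation, mirroring the paper's observation that a large fraction of shading is temporally coherent.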
26. Optimizing the Oriental Painting Shader for 3D Online-Game
- Author
-
Kim, Sung-Soo, Jang, Hyuna, Lee, Won-Hyung, Hutchison, David, editor, Kanade, Takeo, editor, Kittler, Josef, editor, Kleinberg, Jon M., editor, Mattern, Friedemann, editor, Mitchell, John C., editor, Naor, Moni, editor, Nierstrasz, Oscar, editor, Rangan, C. Pandu, editor, Steffen, Bernhard, editor, Sudan, Madhu, editor, Terzopoulos, Demetri, editor, Tygar, Doug, editor, Vardi, Moshe Y., editor, Weikum, Gerhard, editor, Hui, Kin-chuen, editor, Pan, Zhigeng, editor, Chung, Ronald Chi-kit, editor, Wang, Charlie C. L., editor, Jin, Xiaogang, editor, Göbel, Stefan, editor, and Li, Eric C.-L., editor
- Published
- 2007
- Full Text
- View/download PDF
27. A Training Oriented Driving Simulator
- Author
-
Sun, Chao, Xie, Feng, Feng, Xiaocao, Zhang, Mingmin, Pan, Zhigeng, Hutchison, David, Series editor, Kanade, Takeo, Series editor, Kittler, Josef, Series editor, Kleinberg, Jon M., Series editor, Mattern, Friedemann, Series editor, Mitchell, John C., Series editor, Naor, Moni, Series editor, Nierstrasz, Oscar, Series editor, Pandu Rangan, C., Series editor, Steffen, Bernhard, Series editor, Sudan, Madhu, Series editor, Terzopoulos, Demetri, Series editor, Tygar, Doug, Series editor, Vardi, Moshe Y., Series editor, Weikum, Gerhard, Series editor, Ma, Lizhuang, editor, Rauterberg, Matthias, editor, and Nakatsu, Ryohei, editor
- Published
- 2007
- Full Text
- View/download PDF
28. Shader-Like Computations in WebGL for Advanced Graphics and General Purposes
- Author
-
Aaron R. Watters
- Subjects
Context model ,HTML5 ,General Computer Science ,Computer science ,Computation ,General Engineering ,Parallel algorithm ,Context (language use) ,Python (programming language) ,Computer graphics (images) ,Graphics ,Shader ,computer ,computer.programming_language - Abstract
The feedWebGL2 software package enables parallel computations using a feature of HTML5/WebGL2 called Transform/Feedback. The Transform/Feedback mechanism can be used within Jupyter widgets or in other web components and applications for a special class of parallel computations called shader-like calculations, which can solve many problems well, but are unsuitable for some classes of parallel algorithms. This article describes the characteristics of shader-like calculations and how they can be created using the feedWebGL2 software package. The example programs use shader-like calculations for special purpose graphical stages in combination with standard WebGL graphics pipelines, and also use them for general purposes, such as matrix computations.
- Published
- 2021
29. Parametric Shape Estimation of Human Body Under Wide Clothing
- Author
-
Yucheng Lu, Seung-Won Jung, Sekyoung Youm, and Jin-Hyuck Cha
- Subjects
Body shape ,Artificial neural network ,Computer science ,business.industry ,Deep learning ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Human body ,Clothing ,Computer Science Applications ,Silhouette ,Signal Processing ,Media Technology ,Computer vision ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,Shader ,Pose ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
The shape of the human body plays an important role in many applications, such as those involving personal healthcare and virtual clothing try-ons. However, accurate body shape measurements typically require the user to be wearing a minimal amount of clothing, which is not practical in many situations. To resolve this issue using deep learning techniques, we need a paired dataset of ground-truth naked human body shapes and their corresponding color images with clothes. As it is practically impossible to collect enough of this kind of data from real-world environments to train a deep neural network, in this paper, we present the Synthetic dataset of Human Avatars under wiDE gaRment (SHADER). The SHADER dataset consists of 300,000 paired ground-truth naked and dressed images of 1,500 synthetic humans with different body shapes, poses, garments, skin tones, and backgrounds. To take full advantage of SHADER, we propose a novel silhouette confidence measure and show that our silhouette confidence prediction network can help improve the performance of state-of-the-art shape estimation networks for human bodies under clothing. The experimental results demonstrate the effectiveness of the proposed approach. The code and dataset are available at https://github.com/YCL92/SHADER .
- Published
- 2021
30. Color Rendering in Medical Extended-Reality Applications
- Author
-
Wei-Chung Cheng, Andrea S. Kim, Ryan Beams, and Aldo Badano
- Subjects
Discrete mathematics ,Original Paper ,Radiological and Ultrasound Technology ,Pixel ,Color difference ,Color image ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Color ,Image processing ,030218 nuclear medicine & medical imaging ,Computer Science Applications ,Rendering (computer graphics) ,Color rendering index ,03 medical and health sciences ,0302 clinical medicine ,Image Processing, Computer-Assisted ,Humans ,RGB color model ,Radiology, Nuclear Medicine and imaging ,Shader ,030217 neurology & neurosurgery ,ComputingMethodologies_COMPUTERGRAPHICS ,Mathematics - Abstract
Cross-platform development of medical applications in extended-reality (XR) head-mounted displays (HMDs) often relies on game engines with rendering capabilities currently not standardized in the context of medical visualizations. Many aspects of the visualization pipeline, including the characterization of color, have yet to be consistently defined across rendering models and platforms. We examined the transfer of color properties from digital objects, through the rendering and image processing steps, to the RGB values sent to the display device. Five rendering pipeline configurations within the Unity engine were evaluated using 24 digital color patches. In the second experiment, the same configurations were evaluated with a tissue slide sample image. Measurements of the change in color associated with each configuration were characterized using the CIE 1976 color difference (ΔE). We found that the distribution of ΔE for the first experiment ranges from zero, as in the case using an Unlit Shader, to 25.97, as in the case using default configurations. The default Unity configuration consistently returned the highest ΔE across all 24 colors and also the largest range of color differences. In the second experiment, ΔE ranged from 7.49 to 34.18. The Unlit configuration resulted in the highest ΔE in three of four selected pixels in the tissue sample image. Changes in color image properties associated with texture import settings were then evaluated in a third experiment using the TG18-QC test pattern. Differences in pixel values were found in all nine of the investigated texture import settings. The findings provide an initial characterization of color transfer and a basis for future work on standardization, consistency, and optimization of color in medical XR applications.
- Published
- 2020
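The CIE 1976 color difference used in the study above is simply the Euclidean distance between two colors in CIELAB space. A minimal reference implementation (the study's Unity measurement pipeline itself is not shown in the abstract):

```python
# CIE 1976 color difference (Delta E) between two CIELAB colors.
import math

def delta_e_76(lab1, lab2):
    """Euclidean distance between two (L*, a*, b*) triples."""
    return math.sqrt(sum((c1 - c2) ** 2 for c1, c2 in zip(lab1, lab2)))

# Identical colors differ by zero; a 3-unit L* shift plus a 4-unit a*
# shift gives Delta E = 5.
print(delta_e_76((50.0, 10.0, -5.0), (50.0, 10.0, -5.0)))  # 0.0
print(delta_e_76((50.0, 0.0, 0.0), (53.0, 4.0, 0.0)))      # 5.0
```

A just-noticeable difference is commonly taken as roughly 2.3 ΔE units, so values such as 25.97 for the default configuration represent a clearly visible color shift.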
31. Geometry types for graphics programming
- Author
-
Adrian Sampson, Aditi Kabra, Yinnon Sanders, Irene Yoon, Dietrich Geisler, and Horace He
- Subjects
Computer science ,Coordinate system ,OpenGL ,Spherical coordinate system ,Geometry ,Object (computer science) ,law.invention ,law ,Cartesian coordinate system ,Shading language ,Graphics ,Safety, Risk, Reliability and Quality ,Shader ,Software ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
In domains that deal with physical space and geometry, programmers need to track the coordinate systems that underpin a computation. We identify a class of geometry bugs that arise from confusing which coordinate system a vector belongs to. These bugs are not ruled out by current languages for vector-oriented computing, are difficult to check for at run time, and can generate subtly incorrect output that can be hard to test for. We introduce a type system and language that prevents geometry bugs by reflecting the coordinate system for each geometric object. A value's geometry type encodes its reference frame, the kind of geometric object (such as a point or a direction), and the coordinate representation (such as Cartesian or spherical coordinates). We show how these types can rule out geometrically incorrect operations, and we show how to use them to automatically generate correct-by-construction code to transform vectors between coordinate systems. We implement a language for graphics programming, Gator, that checks geometry types and compiles to OpenGL's shading language, GLSL. Using case studies, we demonstrate that Gator can raise the level of abstraction for shader programming and prevent common errors without inducing significant annotation overhead or performance cost.
- Published
- 2020
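The class of geometry bug the abstract above describes can be made concrete with a small sketch (written in Python rather than Gator/GLSL; the names and frame tags are invented for illustration): each vector carries its reference frame, and frame-mismatched operations are rejected instead of silently producing garbage.

```python
# Frame-tagged vectors: a toy version of geometry types.

class FrameVec:
    def __init__(self, frame, x, y, z):
        self.frame, self.v = frame, (x, y, z)

    def __add__(self, other):
        if self.frame != other.frame:
            raise TypeError(
                f"cannot add {self.frame} vector to {other.frame} vector")
        return FrameVec(self.frame,
                        *(a + b for a, b in zip(self.v, other.v)))

def to_world(model_vec, model_to_world):
    """Explicit frame transform: a 3x3 matrix maps model -> world."""
    x, y, z = model_vec.v
    coords = tuple(r[0] * x + r[1] * y + r[2] * z for r in model_to_world)
    return FrameVec("world", *coords)

light_world = FrameVec("world", 0.0, 1.0, 0.0)
normal_model = FrameVec("model", 1.0, 0.0, 0.0)

try:
    _ = light_world + normal_model  # the classic geometry bug
except TypeError as e:
    print("rejected:", e)

identity = ((1, 0, 0), (0, 1, 0), (0, 0, 1))
ok = light_world + to_world(normal_model, identity)  # fine after transform
print(ok.frame, ok.v)
```

Gator performs this checking statically at compile time and can generate the `to_world`-style conversion code automatically; the runtime check here only conveys the idea.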
32. Virtual reality rendering methods for training deep learning, analysing landscapes, and preventing virtual reality sickness
- Author
-
Marcos Novak, Yoann Pencreach, Hiroyuki Fujii, and Tomohiro Fukuda
- Subjects
Computer-aided architectural design ,business.industry ,Computer science ,Deep learning ,Architectural design ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Building and Construction ,Virtual reality ,Computer Graphics and Computer-Aided Design ,Computer Science Applications ,Rendering (computer graphics) ,Design studies ,Human–computer interaction ,Artificial intelligence ,Virtual reality sickness ,business ,Shader ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
Virtual reality (VR) has been proposed for various purposes such as design studies, presentation, simulation and communication in the field of computer-aided architectural design. This paper explores new roles for VR; in particular, we propose rendering methods that consist of post-processing rendering, segmentation rendering and shadow-casting rendering for more-versatile approaches in the use of data. We focus on the creation of a dataset of annotated images, composed of paired foreground-background and semantic-relevant images, in addition to traditional immersive rendering for training deep learning neural networks and analysing landscapes. We also develop a camera velocity rendering method using a customised segmentation rendering technique that calculates the linear and angular velocities of the virtual camera within the VR space at each frame and overlays a colour on the screen according to the velocity value. Using this velocity information, developers of VR applications can improve the animation path within the VR space and prevent VR sickness. We successfully applied the developed methods to urban design and a design project for a building complex. In conclusion, the proposed method was evaluated to be both feasible and effective.
- Published
- 2020
33. An OpenGL Compliant Hardware Implementation of a Graphic Processing Unit Using Field Programmable Gate Array–System on Chip Technology
- Author
-
Robert J. Watson, Alexander E. Beasley, and Christopher Clarke
- Subjects
Multi-core processor ,General Computer Science ,Computer science ,business.industry ,OpenGL ,Control reconfiguration ,02 engineering and technology ,020202 computer hardware & architecture ,020204 information systems ,0202 electrical engineering, electronic engineering, information engineering ,System on a chip ,Graphics ,business ,Field-programmable gate array ,Shader ,Throughput (business) ,Computer hardware - Abstract
FPGA-SoC technology provides a heterogeneous platform for advanced, high-performance systems. The System on Chip (SoC) architecture combines traditional single- and multiple-core processor topologies with flexible FPGA fabric. Dynamic reconfiguration allows the hardware accelerators to be changed at run-time. This article presents a novel OpenGL compliant GPU design implemented on an FPGA. The design uses an FPGA-SoC environment allowing the embedded processor to offload graphics operations onto a more suitable architecture. To the authors' knowledge, this is a first. The graphics processor consists of GLSL compliant shaders, an efficient Barycentric Rasterizer, and a draw mode manager. Performance analysis shows the throughput of the shaders to be hundreds of millions of vertices per second. The design uses both pipelining and resource reuse to optimise throughput and resource use, allowing implementation on a low-cost FPGA device. Pixel processing rates from this implementation are almost 80% higher than other FPGA implementations. Power consumption compared with comparable embedded devices shows the FPGA consuming as little as 2% of the power of a Mali device, and up to an 11.9-fold increase in efficiency compared to an Nvidia RTX 2060 (Turing architecture) device.
- Published
- 2020
34. Optimal Path Maps on the GPU
- Author
-
Marcelo Kallmann and Renato Farias
- Subjects
Computer science ,OpenGL ,020207 software engineering ,02 engineering and technology ,Animation ,computer.software_genre ,Computer Graphics and Computer-Aided Design ,Vertex (geometry) ,Tree traversal ,Line segment ,Virtual machine ,Signal Processing ,Shortest path problem ,Polygon ,0202 electrical engineering, electronic engineering, information engineering ,Computer Vision and Pattern Recognition ,Motion planning ,computer ,Algorithm ,Shader ,Software - Abstract
We introduce a new method for computing optimal path maps on the GPU using OpenGL shaders. Our method explores GPU rasterization as a way to propagate optimal costs on a polygonal 2D environment, producing optimal path maps which can efficiently be queried at run-time. Our method is implemented entirely with GPU shaders, does not require pre-computation, addresses optimal path maps with multiple points and line segments as sources, and introduces a new optimal path map concept not addressed before: maps with weights at vertices representing possible changes in traversal speed. The produced maps offer new capabilities not explored by previous navigation representations and at the same time address paths with global optimality, a characteristic which has been mostly neglected in animated virtual environments. The proposed path maps partition the input environment into the regions sharing a same parent point along the shortest path to the closest source, taking into account possible speed changes at vertices. The proposed approach is particularly suitable for the animation of multiple agents moving toward the entrances or exits of a virtual environment, a situation which is efficiently represented with the proposed path maps.
- Published
- 2020
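The optimal-path-map idea in the abstract above can be sketched on the CPU (the paper's method runs on the GPU via rasterization and OpenGL shaders; this grid-based multi-source Dijkstra is only an illustration of the data structure being produced): propagate optimal costs from multiple sources and record, for every cell, which source its shortest path reaches, partitioning the environment into regions.

```python
# CPU sketch of an optimal path map: multi-source shortest-path costs
# plus, per cell, the source that "owns" it (illustrative only).
import heapq

def optimal_path_map(width, height, sources, blocked=frozenset()):
    """Returns {cell: (cost, owning_source)} over a 4-connected grid."""
    pq = [(0.0, src, src) for src in sources]
    best = {}
    while pq:
        cost, cell, owner = heapq.heappop(pq)
        if cell in best:
            continue  # already settled with an equal-or-lower cost
        best[cell] = (cost, owner)
        x, y = cell
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < width and 0 <= ny < height \
                    and (nx, ny) not in blocked and (nx, ny) not in best:
                heapq.heappush(pq, (cost + 1.0, (nx, ny), owner))
    return best

pmap = optimal_path_map(5, 5, sources=[(0, 0), (4, 4)])
print(pmap[(1, 0)])  # (1.0, (0, 0)) -- region of the first source
print(pmap[(4, 3)])  # (1.0, (4, 4)) -- region of the second source
```

At run-time, a query for any cell is a constant-time lookup, which is what makes such maps attractive for steering many agents toward shared goals.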
35. High-Performance Image Filters via Sparse Approximations
- Author
-
Philip Trettner, Kersten Schuster, and Leif Kobbelt
- Subjects
Sparse image ,Bokeh ,Computer science ,Constrained optimization ,020207 software engineering ,Image processing ,02 engineering and technology ,Filter (signal processing) ,Computer Graphics and Computer-Aided Design ,Computer Science Applications ,Rendering (computer graphics) ,Aliasing ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Shader ,Algorithm - Abstract
We present a numerical optimization method to find highly efficient (sparse) approximations for convolutional image filters. Using a modified parallel tempering approach, we solve a constrained optimization that maximizes approximation quality while strictly staying within a user-prescribed performance budget. The results are multi-pass filters where each pass computes a weighted sum of bilinearly interpolated sparse image samples, exploiting hardware acceleration on the GPU. We systematically decompose the target filter into a series of sparse convolutions, trying to find good trade-offs between approximation quality and performance. Since our sparse filters are linear and translation-invariant, they do not exhibit the aliasing and temporal coherence issues that often appear in filters working on image pyramids. We show several applications, ranging from simple Gaussian or box blurs to the emulation of sophisticated Bokeh effects with user-provided masks. Our filters achieve high performance as well as high quality, often providing significant speed-up at acceptable quality even for separable filters. The optimized filters can be baked into shaders and used as a drop-in replacement for filtering tasks in image processing or rendering pipelines.
- Published
- 2020
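The core primitive from the abstract above, a filter pass that computes a weighted sum of a few bilinearly interpolated sparse samples instead of a dense convolution window, can be sketched as follows (the tap positions and weights here are invented for illustration, not the paper's optimized solutions):

```python
# Sparse-tap filtering with bilinear fetches (illustrative sketch).

def bilinear(img, x, y):
    """Bilinearly interpolated sample of a 2-D list-of-lists image."""
    x0, y0 = int(x), int(y)
    fx, fy = x - x0, y - y0
    x1 = min(x0 + 1, len(img[0]) - 1)
    y1 = min(y0 + 1, len(img) - 1)
    top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
    bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
    return top * (1 - fy) + bot * fy

def sparse_filter(img, x, y, taps):
    """taps: list of (dx, dy, weight); weights should sum to 1."""
    h, w = len(img), len(img[0])
    total = 0.0
    for dx, dy, weight in taps:
        sx = min(max(x + dx, 0.0), w - 1.0)  # clamp to the image
        sy = min(max(y + dy, 0.0), h - 1.0)
        total += weight * bilinear(img, sx, sy)
    return total

# Four half-pixel-offset taps emulate a small blur with few fetches;
# each bilinear fetch already averages four texels in hardware.
taps = [(-0.5, -0.5, 0.25), (0.5, -0.5, 0.25),
        (-0.5, 0.5, 0.25), (0.5, 0.5, 0.25)]
img = [[0.0] * 5 for _ in range(5)]
img[2][2] = 1.0  # single bright texel
print(sparse_filter(img, 2.0, 2.0, taps))  # 0.25
```

On a GPU, each tap is one texture fetch with hardware bilinear interpolation, which is why a handful of well-placed taps can approximate a much larger dense kernel cheaply.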
36. Efficient Adaptive Deferred Shading with Hardware Scatter Tiles
- Author
-
Cem Yuksel, Ian Mallett, and Larry Seiler
- Subjects
Deferred shading ,Pixel ,business.industry ,Computer science ,Graphics hardware ,020207 software engineering ,Memory bandwidth ,02 engineering and technology ,Computer Graphics and Computer-Aided Design ,020202 computer hardware & architecture ,Computer Science Applications ,Scheduling (computing) ,Rendering (computer graphics) ,0202 electrical engineering, electronic engineering, information engineering ,business ,Shader ,Image resolution ,Computer hardware ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
Adaptive shading is an effective mechanism for reducing the number of shaded pixels to a subset of the image resolution with minimal impact on final rendering quality. We present a new scheduling method based on on-chip tiles that, along with relatively minor modifications to the GPU architecture, provides efficient hardware support. As compared to software implementations on current hardware using compute shaders, our approach dramatically reduces memory bandwidth requirements, thereby significantly improving performance and energy use. We also introduce the concept of a fragment pre-shader for programmatically controlling when a fragment shader is invoked, and describe advanced techniques for utilizing our approach to further reduce the number of shaded pixels via temporal filtering, or to adjust rendering quality to maintain stable framerates.
- Published
- 2020
37. Zeroploit
- Author
-
Virat Agarwal, Aditya Ukarande, Marc Blackstein, Mark Stephenson, Shyam Murthy, and Ram Rangan
- Subjects
Speedup ,Computer science ,02 engineering and technology ,Parallel computing ,Fast path ,Program optimization ,Operand ,020202 computer hardware & architecture ,Hardware and Architecture ,Path (graph theory) ,0202 electrical engineering, electronic engineering, information engineering ,Code (cryptography) ,Profile-guided optimization ,020201 artificial intelligence & image processing ,Shader ,Software ,Information Systems - Abstract
In this article, we first characterize register operand value locality in shader programs of modern gaming applications and observe that there is a high likelihood of one of the register operands of several multiply, logical-and, and similar operations being zero, dynamically. We provide intuition, examples, and a quantitative characterization for how zeros originate dynamically in these programs. Next, we show that this dynamic behavior can be gainfully exploited with a profile-guided code optimization called Zeroploit that transforms targeted code regions into a zero-(value-)specialized fast path and a default slow path. The fast path benefits from zero-specialization in two ways, namely: (a) the backward slice of the other operand of a given multiply or logical-and can be skipped dynamically, provided the only use of that other operand is in the given instruction, and (b) the forward slice of instructions originating at the given instruction can be zero-specialized, potentially triggering further backward slice specializations from operations of that forward slice as well. Such specialization helps the fast path avoid redundant dynamic computations as well as memory fetches, while the fast-slow versioning transform helps preserve functional correctness. With an offline value profiler and manually optimized shader programs, we demonstrate that Zeroploit is able to achieve an average speedup of 35.8% for targeted shader programs, amounting to an average frame-rate speedup of 2.8% across a collection of modern gaming applications on an NVIDIA® GeForce RTX™ 2080 GPU.
- Published
- 2020
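Zeroploit's fast/slow-path transform can be sketched in Python (the shader logic and names below are hypothetical, not code from the paper): when a multiply operand is dynamically zero, a specialized fast path skips the backward slice that computes the other operand.

```python
# Zero-value specialization: fast path skips a dead backward slice.

def expensive_lighting(surface):
    """Stand-in for the backward slice feeding a multiply operand."""
    expensive_lighting.calls += 1
    return 0.8 * surface["albedo"] + 0.2
expensive_lighting.calls = 0

def shade_original(surface):
    return surface["shadow_mask"] * expensive_lighting(surface)

def shade_zeroploit(surface):
    mask = surface["shadow_mask"]
    if mask == 0.0:
        return 0.0  # fast path: the other operand's slice is skipped
    return mask * expensive_lighting(surface)  # default slow path

fully_shadowed = {"shadow_mask": 0.0, "albedo": 0.6}
lit = {"shadow_mask": 1.0, "albedo": 0.6}

# The transform preserves results while avoiding work on the fast path.
assert shade_zeroploit(fully_shadowed) == shade_original(fully_shadowed)
assert shade_zeroploit(lit) == shade_original(lit)
print("lighting evaluations:", expensive_lighting.calls)  # 3, not 4
```

The versioning is safe because the slow path is the unmodified code; the profile only decides whether the zero check is likely to pay off.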
38. Tile Pair-Based Adaptive Multi-Rate Stereo Shading
- Author
-
Yuan Yazhen, Rui Wang, and Hujun Bao
- Subjects
Pixel ,Computer science ,business.industry ,Pipeline (computing) ,Epipolar geometry ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,020207 software engineering ,02 engineering and technology ,Computer Graphics and Computer-Aided Design ,GeneralLiterature_MISCELLANEOUS ,Signal Processing ,0202 electrical engineering, electronic engineering, information engineering ,Computer vision ,Computer Vision and Pattern Recognition ,Shading ,Artificial intelligence ,Hardware_ARITHMETICANDLOGICSTRUCTURES ,business ,Shader ,Software ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
This work proposes a new stereo shading architecture that enables adaptive shading rates and automatic shading reuse among triangles and between two views. The proposed pipeline presents several novel features. First, the present sort-middle/bin shading is extended to tile pair-based shading to rasterize and shade pixels at two views simultaneously. A new rasterization algorithm utilizing epipolar geometry is then proposed to schedule tile pairs and perform rasterization at stereo views efficiently. Second, this work presents an adaptive multi-rate shading framework to compute shading on pixels at different rates. A novel tile-based screen space cache and a new cache reuse shader are proposed to perform such multi-rate shading across triangles and views. The results show that the newly proposed method outperforms the standard sort-middle shading and the state-of-the-art multi-rate shading by achieving considerably lower shading cost and memory bandwidth.
- Published
- 2020
39. Graphics Pipeline Evolution Based on Object Shaders
- Author
-
D. Mazouka and V. V. Krasnoproshin
- Subjects
Computer science ,02 engineering and technology ,Object (computer science) ,01 natural sciences ,Computer Graphics and Computer-Aided Design ,Pipeline (software) ,Graphics pipeline ,Visualization ,010309 optics ,Set (abstract data type) ,Computer graphics ,Computer graphics (images) ,0103 physical sciences ,Pattern recognition (psychology) ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Computer Vision and Pattern Recognition ,Shader ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
This paper addresses some current problems associated with the development of computer graphics technologies. We propose a possible solution based on the use of object shaders for the further evolution of the graphics pipeline technology. The visualization problem is formalized to generate a set of procedures (object shaders) that implement a programmable pipeline.
- Published
- 2020
40. Using mobile augmented reality to enhancing students’ conceptual understanding of physically-based rendering in 3D animation
- Author
-
Tiantada Hiranyachattada and Kampanat Kusirirat
- Subjects
business.industry ,Computer science ,020207 software engineering ,02 engineering and technology ,Animation ,3D rendering ,Education ,Rendering (computer graphics) ,Digital media ,Human–computer interaction ,Concept learning ,0202 electrical engineering, electronic engineering, information engineering ,Mathematics education ,020201 artificial intelligence & image processing ,Augmented reality ,business ,Shader ,Computer animation ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
Physically-based rendering (PBR) is now widely used in 3D rendering; the concept models how rays of light interact with materials. Understanding the principles of PBR makes it easier to adjust shader parameters to be realistic, react correctly to changes in lighting conditions, and obtain the same results even in different 3D rendering software. In the Shading, Lighting and Rendering (SLR) subject, where PBR concepts have become important in place of the 'old style' rendering, it was found that students were unable to clearly understand the concepts of PBR. To address this problem, this research uses mobile augmented reality (AR) as a medium to demonstrate PBR concepts to students. The sample was 35 students from the Department of Animation and Digital Media, Bansomdejchaopraya Rajabhat University. The results, assessed from students' pre-test and post-test scores and homework, show that students understood the PBR concepts and could adjust the PBR shader parameters to be realistic. Based on the students' responses, the mobile AR application was usable and suitable as a learning medium in 3D animation.
- Published
- 2020
41. GPGPU-Based ATPG System: Myth or Reality?
- Author
-
Huawei Li, Kun-Han Tsai, and Liyang Lai
- Subjects
Speedup ,Computer science ,Task parallelism ,02 engineering and technology ,Parallel computing ,Automatic test pattern generation ,Computer Graphics and Computer-Aided Design ,020202 computer hardware & architecture ,CUDA ,0202 electrical engineering, electronic engineering, information engineering ,Programming paradigm ,Electrical and Electronic Engineering ,Graphics ,General-purpose computing on graphics processing units ,Shader ,Software - Abstract
General-purpose computing on graphics processing units (GPGPU) is a programming model that uses graphics cards to perform computations traditionally done by the CPU. It began to become practical with the advent of programmable shaders and floating-point support on GPUs around 2001. The spread of GPGPU accelerated with the introduction of CUDA by NVIDIA in 2006 and later OpenCL in 2009. Nowadays GPGPU is widely deployed in various applications, such as data mining, artificial intelligence, and many scientific computations. GPGPU seemingly promises immense parallelism with massive numbers of concurrent cores, and thus much shorter run times. This is true for algorithms that bear intrinsic data and task parallelism, such as image and video processing. For an ATPG system, where some algorithms are sequential in nature, the speedup is not easy to achieve in the real world. Flaws in setting up a speedup evaluation can lead to false promises. Will a GPGPU-based ATPG system become a reality, or is it just a myth? In this paper, we try to provide an answer by surveying state-of-the-art works and by analyzing practical aspects of today's industrial designs.
- Published
- 2020
42. DeepAO: Efficient Screen Space Ambient Occlusion Generation via Deep Network
- Author
-
Chu Han, Chuhua Xian, Guoliang Luo, Dongjiu Zhang, and Yunhui Xiong
- Subjects
Brightness ,Deferred shading ,General Computer Science ,Computer science ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,02 engineering and technology ,Ambient occlusion ,Rendering (computer graphics) ,Screen space ambient occlusion ,0202 electrical engineering, electronic engineering, information engineering ,General Materials Science ,Computer vision ,Shader ,business.industry ,deep neural network ,General Engineering ,020207 software engineering ,rendering ,020201 artificial intelligence & image processing ,lcsh:Electrical engineering. Electronics. Nuclear engineering ,Shading ,Artificial intelligence ,business ,shading ,lcsh:TK1-9971 - Abstract
Ambient occlusion (AO) plays an important role in realistic rendering applications because AO produces more realistic ambient lighting, achieved by calculating the brightness of parts of the screen based on the objects' geometry. However, the baseline AO algorithm is time-consuming, which limits its application to real-time rendering. Currently, most AO algorithms operate in screen space to reduce computational cost, which leads to unrealistic results due to the use of hand-crafted features. To overcome these challenges, in this paper, we first create a well-crafted dataset pairing deferred shading buffer data with ground-truth AO-shaded images. Then, we design an efficient deep neural network for screen-space AO image generation, based on which we further design a Compute Shader Library to compute the shaded AO images. Our extensive experimental results show that our method achieves better performance than existing screen-space or volumetric ambient occlusion methods in both visual quality and efficiency.
- Published
- 2020
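The screen-space idea underlying such methods can be illustrated with a deliberately naive depth-buffer AO estimate: for each pixel, count how many neighbors are closer to the camera (likely occluders). A hypothetical NumPy sketch, not the paper's network:

```python
import numpy as np

def ssao(depth, radius=1, bias=0.01):
    """Naive screen-space AO: occlusion = fraction of neighboring pixels
    whose depth is closer to the camera than this pixel (minus a bias)."""
    occlusion = np.zeros_like(depth)
    offsets = [(dy, dx) for dy in (-radius, 0, radius)
                        for dx in (-radius, 0, radius) if (dy, dx) != (0, 0)]
    for dy, dx in offsets:
        # Shift the depth buffer so neigh[y, x] == depth[y - dy, x - dx].
        neigh = np.roll(np.roll(depth, dy, axis=0), dx, axis=1)
        occlusion += (neigh < depth - bias).astype(float)
    return 1.0 - occlusion / len(offsets)  # 1 = fully lit, 0 = fully occluded
```

A learned approach replaces this hand-crafted neighborhood heuristic with a network trained on ground-truth AO images.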
43. CLASSIFICATION AND INTEGRATION OF MASSIVE 3D POINTS CLOUDS IN A VIRTUAL REALITY (VR) ENVIRONMENT
- Author
-
Rafika Hajji, Roland Billen, Abderrazzaq Kharroubi, and Florent Poux
- Subjects
lcsh:Applied optics. Photonics ,010504 meteorology & atmospheric sciences ,lcsh:T ,Computer science ,0211 other engineering and technologies ,Point cloud ,lcsh:TA1501-1820 ,02 engineering and technology ,Virtual reality ,lcsh:Technology ,01 natural sciences ,Mixed reality ,Visualization ,Immersive technology ,lcsh:TA1-2040 ,Human–computer interaction ,Immersion (virtual reality) ,User interface ,lcsh:Engineering (General). Civil engineering (General) ,Shader ,021101 geological & geomatics engineering ,0105 earth and related environmental sciences - Abstract
With the increasing volume of 3D applications using immersive technologies such as virtual, augmented and mixed reality, it is very interesting to create better ways to integrate unstructured 3D data such as point clouds as a source of data. Indeed, this can lead to an efficient workflow from 3D capture to 3D immersive environment creation without the need to derive a 3D model or run lengthy optimization pipelines. In this paper, the main focus is on the direct classification and integration of massive 3D point clouds in a virtual reality (VR) environment. The emphasis is put on leveraging open-source frameworks for an easy replication of the findings. First, we develop a semi-automatic segmentation approach to provide semantic descriptors (mainly classes) to groups of points. We then build an octree data structure leveraged through out-of-core algorithms to load in real time and continuously only the points that are in the VR user's field of view. Then, we provide an open-source solution using Unity with a user interface for VR point cloud interaction and visualisation. Finally, we provide a full semantic VR data integration enhanced through developed shaders for future spatio-semantic queries. We tested our approach on several datasets, including a point cloud composed of 2.3 billion points representing the heritage site of the castle of Jehay (Belgium). The results underline the efficiency and performance of the solution for visualizing classified massive point clouds in virtual environments at more than 100 frames per second.
- Published
- 2019
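The octree-with-culling idea described above — return only the points whose cells intersect the viewer's region, pruning whole subtrees otherwise — can be sketched in miniature (a generic illustration with hypothetical names, not the paper's out-of-core implementation):

```python
class Octree:
    """Minimal point octree: a box query returns only points inside the box,
    skipping entire subtrees whose cube lies outside it."""

    def __init__(self, points, center, half, cap=4):
        self.center, self.half = center, half
        self.points, self.children = [], []
        if len(points) <= cap or half < 1e-6:
            self.points = list(points)          # leaf node
            return
        buckets = [[] for _ in range(8)]
        for p in points:
            idx = ((p[0] > center[0]) +
                   ((p[1] > center[1]) << 1) +
                   ((p[2] > center[2]) << 2))   # which octant
            buckets[idx].append(p)
        for idx, bucket in enumerate(buckets):
            if bucket:
                child_center = tuple(
                    center[k] + (half / 2 if (idx >> k) & 1 else -half / 2)
                    for k in range(3))
                self.children.append(Octree(bucket, child_center, half / 2, cap))

    def query(self, lo, hi):
        # Prune: this node's cube is entirely outside the query box.
        for k in range(3):
            if self.center[k] + self.half < lo[k] or self.center[k] - self.half > hi[k]:
                return []
        hits = [p for p in self.points
                if all(lo[k] <= p[k] <= hi[k] for k in range(3))]
        for child in self.children:
            hits.extend(child.query(lo, hi))
        return hits
```

In an out-of-core system the pruned subtrees would simply never be loaded from disk, which is what keeps billion-point clouds interactive.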
44. Active Asteroid-SLAM
- Author
-
Carsten Rachuy, David Nakath, and Joachim Clemens
- Subjects
0209 industrial biotechnology ,Offset (computer science) ,Computer science ,Mechanical Engineering ,Point cloud ,02 engineering and technology ,Simultaneous localization and mapping ,Industrial and Manufacturing Engineering ,Extended Kalman filter ,020901 industrial engineering & automation ,Lidar ,Artificial Intelligence ,Control and Systems Engineering ,Maximum a posteriori estimation ,Graph (abstract data type) ,Electrical and Electronic Engineering ,Algorithm ,Shader ,Software - Abstract
In this paper, we propose an active, real-time-capable 3D graph-based simultaneous localization and mapping (Graph SLAM) approach, which actively estimates the state of an autonomous spacecraft relative to a simultaneously established map estimate. The graph is constructed in a tightly coupled fashion, where an Extended Kalman Filter estimates the relative offset between two of its vertices. An additional relative measurement is derived by matching point clouds obtained by a light detection and ranging (LiDAR) system. In order to yield a significant speed-up, scan matching is implemented on the GPU. To reduce the uncertainty of either the state or the map estimate, we present an approach to actively control the system based on an extended representation of uncertainty in the map. Furthermore, it adapts its behavior depending on the current uncertainty distribution in order to find a dynamic trade-off between exploitation (improving localization performance) and exploration (improving knowledge about the environment). Finally, we present a post-processing approach to discover landing sites in the map estimate without prior knowledge. The evaluation is conducted in a numerical simulation, where the spacecraft explores the real 3D model of Itokawa in its actual dynamic environment. Within that simulation, we use a shader-based GPU implementation for simulating LiDAR measurements. We evaluate the performance of the active SLAM approach and demonstrate that the adaptive approach improves navigation and exploration performance at the same time.
- Published
- 2019
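The Kalman-filter fusion of relative measurements mentioned above reduces, in the scalar case, to a two-line update. A minimal sketch (a generic 1D Kalman update, not the paper's full EKF/Graph SLAM machinery):

```python
def kf_update(x, P, z, R):
    """Scalar Kalman measurement update for a relative-offset estimate.

    x, P : prior mean and variance of the offset
    z, R : measurement (e.g. from LiDAR scan matching) and its variance
    Returns the fused mean and the reduced variance.
    """
    K = P / (P + R)                   # Kalman gain: trust in the measurement
    return x + K * (z - x), (1.0 - K) * P
```

With equal prior and measurement variance the fused estimate lands halfway between the two, and the variance halves — the mechanism by which each scan-match constraint tightens the graph.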
45. Staged metaprogramming for shader system development
- Author
-
Serban D. Porumbescu, John D. Owens, Tim Foley, and Kerry A. Seitz
- Subjects
Source code ,Computer science ,Programming language ,Design space exploration ,media_common.quotation_subject ,020207 software engineering ,02 engineering and technology ,computer.file_format ,computer.software_genre ,Computer Graphics and Computer-Aided Design ,Metaprogramming ,Rendering (computer graphics) ,Shaders ,Computer graphics ,Computer Graphics ,0202 electrical engineering, electronic engineering, information engineering ,Multi-Stage Languages ,Executable ,Compiler ,Shading Languages ,computer ,Shader ,Compile time ,media_common - Abstract
The shader system for a modern game engine comprises much more than just compilation of source code to executable kernels. Shaders must also be exposed to art tools, interfaced with engine code, and specialized for performance. Engines typically address each of these tasks in an ad hoc fashion, without a unifying abstraction. The alternative of developing a more powerful compiler framework is prohibitive for most engines. In this paper, we identify staged metaprogramming as a unifying abstraction and implementation strategy to develop a powerful shader system with modest effort. By using a multi-stage language to perform metaprogramming at compile time, engine-specific code can consume, analyze, transform, and generate shader code that will execute at runtime. Staged metaprogramming reduces the effort required to implement a shader system that provides earlier error detection, avoids repeat declarations of shader parameters, and explores opportunities to improve performance. To demonstrate the value of this approach, we design and implement a shader system, called Selos, built using staged metaprogramming. In our system, shader and application code are written in the same language and can share types and functions. We implement a design space exploration framework for Selos that investigates static versus dynamic composition of shader features, exploring the impact of shader specialization in a deferred renderer. Staged metaprogramming allows Selos to provide compelling features with a simple implementation.
- Published
- 2019
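The core move in such systems — engine code generating specialized shader source at compile time instead of branching at runtime — can be sketched in a few lines. A hypothetical illustration (the feature names and GLSL-like output are invented, not Selos code):

```python
def specialize_shader(features):
    """Emit fragment-shader source with the requested features statically
    compiled in, rather than guarded by runtime uniforms."""
    body = ["vec3 color = albedo;"]
    if "diffuse" in features:
        body.append("color *= max(dot(normal, lightDir), 0.0);")
    if "fog" in features:
        body.append("color = mix(color, fogColor, fogFactor);")
    body.append("fragColor = vec4(color, 1.0);")
    return "void main() {\n" + "\n".join("    " + line for line in body) + "\n}\n"
```

Because the generator is ordinary host-language code, it can also validate parameters early and share types with the engine — the benefits the abstract attributes to staged metaprogramming.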
46. Projection Distortion-based Object Tracking in Shader Lamp Scenarios
- Author
-
Peter Eisert, Anna Hilsmann, Niklas Gard, and Publica
- Subjects
Computer science ,business.industry ,Distortion (optics) ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Optical flow ,020207 software engineering ,02 engineering and technology ,Tracking (particle physics) ,Computer Graphics and Computer-Aided Design ,law.invention ,Projector ,law ,Video tracking ,Signal Processing ,0202 electrical engineering, electronic engineering, information engineering ,Computer vision ,Computer Vision and Pattern Recognition ,Artificial intelligence ,business ,Projection (set theory) ,Pose ,Shader ,Software ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
Shader lamp systems augment the real environment by projecting new textures on known target geometries. In dynamic scenes, object tracking maintains the illusion if the physical and virtual objects are well aligned. However, traditional trackers based on texture or contour information are often distracted by the projected content and tend to fail. In this paper, we present a model-based tracking strategy, which directly takes advantage from the projected content for pose estimation in a projector-camera system. An iterative pose estimation algorithm captures and exploits visible distortions caused by object movements. In a closed-loop, the corrected pose allows the update of the projection for the subsequent frame. Synthetic frames simulating the projection on the model are rendered and an optical flow-based method minimizes the difference between edges of the rendered and the camera image. Since the thresholds automatically adapt to the synthetic image, a complicated radiometric calibration can be avoided. The pixel-wise linear optimization is designed to be easily implemented on the GPU. Our approach can be combined with a regular contour-based tracker and is transferable to other problems, like the estimation of the extrinsic pose between projector and camera. We evaluate our procedure with real and synthetic images and obtain very precise registration results.
- Published
- 2019
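The pixel-wise linear optimization described above has, in its simplest translation-only form, a closed-form solution: the best shift is the mean displacement between corresponding edge points. A toy sketch under that simplification (not the paper's full pose estimation):

```python
def estimate_translation(rendered, observed):
    """Least-squares 2D translation aligning rendered edge points to the
    corresponding detected camera-image edge points.

    Minimizing sum_i |r_i + t - o_i|^2 gives t = mean(o_i - r_i)."""
    n = len(rendered)
    tx = sum(o[0] - r[0] for r, o in zip(rendered, observed)) / n
    ty = sum(o[1] - r[1] for r, o in zip(rendered, observed)) / n
    return tx, ty
```

The full 6-DoF problem linearizes the pose in the same least-squares fashion, then re-renders and iterates in a closed loop.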
47. An ubiquitous 3D visual analysis platform of seabed
- Author
-
Tianyun Su and Zhihan Lv
- Subjects
Feature (archaeology) ,Computer Networks and Communications ,Computer science ,Delaunay triangulation ,Drilling ,020207 software engineering ,02 engineering and technology ,Grid ,Visualization ,Set (abstract data type) ,Hardware and Architecture ,Computer graphics (images) ,0202 electrical engineering, electronic engineering, information engineering ,Media Technology ,Layer (object-oriented design) ,Shader ,Software ,Seabed - Abstract
We built a ‘virtual world’ of the real seabed for visual analysis. Sub-bottom profiles are imported into the 3D environment. A ‘section-drilling’ three-dimensional model is designed according to the characteristics of the multi-source comprehensive data under the seabed. In this model, the seabed stratigraphic profile obtained by seismic reflection is digitized into discrete points and interpolated with an approved Kriging algorithm to produce a uniform grid in every stratum layer. A Delaunay triangular model is then constructed in every layer and calibrated using the drilling data to rectify the depth values of the dataset within the buffer. Finally, the constructed 3D seabed stratigraphic model is rendered layer by layer by a GPU shader engine. Based on this model, two state-of-the-art applications, on a web browser and on a smartphone, prove its ubiquitous nature. The resulting ‘3D Seabed’ is used for simulation, visualization, and analysis through a set of interlinked, real-time layers of information about the 3D seabed and its analysis results.
- Published
- 2019
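The interpolation step — turning scattered digitized profile points into a uniform grid per stratum — can be illustrated with inverse-distance weighting, a simpler stand-in for the Kriging method the paper actually uses:

```python
def idw_grid(samples, xs, ys, power=2.0):
    """Inverse-distance-weighted depth on a regular grid.

    samples: iterable of (x, y, depth) points digitized from a profile.
    Returns a row-major grid of interpolated depths over xs x ys.
    """
    grid = []
    for y in ys:
        row = []
        for x in xs:
            num = den = 0.0
            exact = None
            for sx, sy, sz in samples:
                d2 = (x - sx) ** 2 + (y - sy) ** 2
                if d2 == 0.0:
                    exact = sz          # grid node coincides with a sample
                    break
                w = d2 ** (-power / 2)  # weight falls off with distance
                num += w * sz
                den += w
            row.append(exact if exact is not None else num / den)
        grid.append(row)
    return grid
```

Kriging replaces these fixed distance weights with weights derived from a fitted variogram, but the grid-filling workflow is the same.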
48. NEW POTREE SHADER CAPABILITIES FOR 3D VISUALIZATION OF BEHAVIORS NEAR COVID-19 RICH HEALTHCARE FACILITIES
- Author
-
C. Carey, J. Romero, and D. F. Laefer
- Subjects
Spatial contextual awareness ,Technology ,Computer science ,Event (computing) ,Point cloud ,Engineering (General). Civil engineering (General) ,Visualization ,TA1501-1820 ,Human–computer interaction ,Doors ,Use case ,Applied optics. Photonics ,TA1-2040 ,Shader ,Built environment - Abstract
While data on human behavior in COVID-19-rich environments have been captured and publicly released, the spatial components of such data are recorded in two dimensions. Thus, the complete roles of the built and natural environment cannot be readily ascertained. This paper introduces a mechanism for the three-dimensional (3D) visualization of egress behaviors of individuals leaving a COVID-19-exposed healthcare facility in Spring 2020 in New York City. Behavioral data were extracted and projected onto a 3D aerial laser scanning point cloud of the surrounding area rendered with Potree, a readily available open-source Web Graphics Library (WebGL) point cloud viewer. The outcomes were 3D heatmap visualizations of the built environment that indicated the event locations of individuals exhibiting specific characteristics (e.g., men vs. women; public transit users vs. private vehicle users). These visualizations enabled interactive navigation through the space, accessible through any modern web browser supporting WebGL. Visualizing egress behavior in this manner may highlight patterns indicative of correlations between the environment, human behavior, and transmissible diseases. Findings using such tools have the potential to identify high-exposure areas and surfaces such as doors, railings, and other physical features. Providing flexible visualization capabilities with 3D spatial context can enable analysts to quickly advise and communicate vital information across a broad range of use cases. This paper presents such an application to extract the public health information necessary to form localized responses to reduce COVID-19 infection and transmission rates in urban areas.
- Published
- 2021
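A heatmap of event locations, as described above, starts from a simple binning of 2D points into grid cells; the per-cell counts then drive the colors projected onto the point cloud. A hypothetical sketch of that binning step (not the paper's pipeline):

```python
from collections import Counter

def egress_heatmap(events, cell=1.0):
    """Bin 2D egress-event locations (x, y) into square grid cells.

    Returns a Counter keyed by (col, row) cell index; higher counts
    correspond to hotter cells in the rendered heatmap.
    """
    return Counter((int(x // cell), int(y // cell)) for x, y in events)
```

In the 3D viewer, each cell's count would be mapped to a color ramp in the point cloud shader.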
49. Texture Mapping on NURBS Surface
- Author
-
Sergio Vázquez and Margarita Amor
- Subjects
NURBS ,texture ,shader ,GPU ,General Works - Abstract
Texture mapping allows high-resolution detail over 3D surfaces. Nevertheless, texture mapping has a number of unresolved problems, such as distortion, boundaries between textures, and filtering. On the other hand, NURBS surfaces are usually decomposed into a set of Bézier surfaces, since NURBS surfaces cannot be directly rendered by the GPU. In this work, we propose texture mapping directly on NURBS surfaces using the RPNS (Rendering Pipeline for NURBS Surfaces) method, which allows the rendering of NURBS surfaces directly on the GPU. Our proposal facilitates implementation while minimizing storage cost and mitigating distortion and stitching between textures.
- Published
- 2018
- Full Text
- View/download PDF
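The Bézier-patch evaluation that underlies the decomposition mentioned in the abstract is the de Casteljau algorithm applied in two parameter directions. A minimal generic sketch (not the RPNS implementation):

```python
def de_casteljau(ctrl, t):
    """Evaluate a Bezier curve at t by repeated linear interpolation
    of its control points."""
    pts = list(ctrl)
    while len(pts) > 1:
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

def bezier_patch(grid, u, v):
    """Evaluate a tensor-product Bezier patch: de Casteljau along each
    row of control points at u, then along the resulting column at v."""
    return de_casteljau([de_casteljau(row, u) for row in grid], v)
```

A NURBS surface adds rational weights and knot vectors on top of this, which is why it is typically split into such Bézier patches before GPU evaluation.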
50. Blender: shader Principled BSDF
- Abstract
This Polimedia video describes the Principled BSDF shader, its use in Blender, and some of its most important parameters.
- Published
- 2021