3,265 results on '"rendering"'
Search Results
2. On Confucian Mediality and Rendering Xiang-Thought
- Author
-
Fleming, David H.
- Published
- 2025
- Full Text
- View/download PDF
3. Innovative Design of Thermal Insulating Green Rendering Mortar for Energy Efficient Buildings
- Author
-
Shoukry, Hamada, di Prisco, Marco, Series Editor, Chen, Sheng-Hong, Series Editor, Vayas, Ioannis, Series Editor, Kumar Shukla, Sanjay, Series Editor, Sharma, Anuj, Series Editor, Kumar, Nagesh, Series Editor, Wang, Chien Ming, Series Editor, Cui, Zhen-Dong, Series Editor, Lu, Xinzheng, Series Editor, Mansour, Yasser, editor, Subramaniam, Umashankar, editor, Mustaffa, Zahiraniza, editor, Abdelhadi, Abdelhakim, editor, Al-Atroush, Mohamed, editor, and Abowardah, Eman, editor
- Published
- 2025
- Full Text
- View/download PDF
4. ADVANCEMENTS IN SPECTRAL VISUALIZATION: APPLICATION OF COLOR ACCURACY AND REPRESENTATION IN COMPUTER GRAPHICS.
- Author
-
Iana, Vlasova
- Subjects
RENDERING (Computer graphics), GRAPHICS processing units, COMPUTER simulation, MACHINE learning, ELECTRONIC data processing, COMPUTER graphics - Abstract
This paper discusses the advancements in spectral visualization (SV) and its application to improve color accuracy and material modeling in computer graphics. The main advantages of SV over traditional methods, such as RGB, are described, as well as its potential applications in various fields, including the film industry, video games, medicine, and scientific research. Special attention is given to spectral data processing algorithms and the use of modern computational technologies, such as machine learning (ML) and graphics processing units (GPUs), to create photorealistic images. Further development of SV is predicted to enhance real-time visualization quality. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
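The core operation this abstract alludes to, integrating a spectral power distribution against colour matching functions to obtain displayable tristimulus values, can be sketched as follows. This is an illustrative toy, not the paper's pipeline: the Gaussian lobes below are rough stand-ins for the CIE x̄, ȳ, z̄ curves, not the official tabulated data.

```python
import numpy as np

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def spectrum_to_tristimulus(wavelengths, power):
    # Rough single-lobe stand-ins for the x̄, ȳ, z̄ matching functions
    # (assumed shapes for illustration only).
    x_bar = gaussian(wavelengths, 600.0, 40.0)
    y_bar = gaussian(wavelengths, 550.0, 40.0)
    z_bar = gaussian(wavelengths, 450.0, 30.0)
    dl = wavelengths[1] - wavelengths[0]
    X = np.sum(power * x_bar) * dl
    Y = np.sum(power * y_bar) * dl
    Z = np.sum(power * z_bar) * dl
    return X, Y, Z

wl = np.arange(400.0, 701.0, 5.0)
green_spectrum = gaussian(wl, 550.0, 20.0)   # narrow-band greenish light
X, Y, Z = spectrum_to_tristimulus(wl, green_spectrum)
print(Y > X and Y > Z)  # the green band excites ȳ the most
```

A spectral renderer carries the full `power` array through light transport and only collapses to tristimulus values at display time, which is what avoids the metamerism errors of working in RGB throughout.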
5. MoNeRF: Deformable Neural Rendering for Talking Heads via Latent Motion Navigation.
- Author
-
Li, X., Ding, Y., Li, R., Tang, Z., and Li, K.
- Subjects
VIDEO processing, ORTHOGONAL codes, HUMAN body, SOURCE code, RADIANCE - Abstract
Novel view synthesis for talking heads presents significant challenges due to the complex and diverse motion transformations involved. Conventional methods often rely on structure priors, like facial templates, to warp observed images into a canonical space conducive to rendering. However, the incorporation of such priors introduces a trade‐off: while aiding in synthesis, they concurrently amplify model complexity, limiting generalizability to other deformable scenes. Departing from this paradigm, we introduce a pioneering solution: the motion‐conditioned neural radiance field, MoNeRF, designed to model talking heads through latent motion navigation. At the core of MoNeRF lies a novel approach utilizing a compact set of latent codes to represent orthogonal motion directions. This innovative strategy empowers MoNeRF to efficiently capture and depict intricate scene motion by linearly combining these latent codes. In an extended capability, MoNeRF facilitates motion control through latent code adjustments, supports view transfer based on reference videos, and seamlessly extends its applicability to model human bodies without necessitating structural modifications. Rigorous quantitative and qualitative experiments unequivocally demonstrate MoNeRF's superior performance compared to state‐of‐the‐art methods in talking head synthesis. We will release the source code upon publication. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
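The abstract's key mechanism, representing scene motion as a linear combination of a compact set of orthogonal latent codes, can be illustrated with a small sketch. This is not MoNeRF's implementation; the dimensions and coefficients are made up, and only the linear-algebra idea is shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a small orthonormal basis of latent "motion direction" codes via QR.
latent_dim, n_codes = 16, 4
basis, _ = np.linalg.qr(rng.standard_normal((latent_dim, n_codes)))

# A motion state is a linear combination of the basis codes; the combination
# weights are what a model like MoNeRF would navigate or edit.
coeffs = np.array([0.8, -0.3, 0.5, 0.1])
motion_code = basis @ coeffs

# Orthogonality makes the combination invertible: projecting back recovers
# each motion component, enabling per-direction motion control.
recovered = basis.T @ motion_code
print(np.allclose(recovered, coeffs))  # True
```

The practical appeal is that editing one coefficient changes one motion direction without disturbing the others, which is what makes latent-code adjustment a usable control handle.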
6. Creating a 3D Mesh in A‐pose from a Single Image for Character Rigging.
- Author
-
Lee, Seunghwan and Liu, C. Karen
- Subjects
COMPUTER vision, QUANTITATIVE research, SKELETON, GEOMETRY - Abstract
Learning‐based methods for 3D content generation have shown great potential to create 3D characters from text prompts, videos, and images. However, current methods primarily focus on generating static 3D meshes, overlooking the crucial aspect of creating animatable 3D meshes. Directly using 3D meshes generated by existing methods to create underlying skeletons for animation presents many challenges because the generated mesh might exhibit geometry artifacts or assume arbitrary poses that complicate the subsequent rigging process. This work proposes a new framework for generating a 3D animatable mesh from a single 2D image depicting the character. We do so by enforcing the generated 3D mesh to assume an A‐pose, which can mitigate the geometry artifacts and facilitate the use of existing automatic rigging methods. Our approach aims to leverage the generative power of existing models across modalities without the need for new data or large‐scale training. We evaluate the effectiveness of our framework with qualitative results, as well as ablation studies and quantitative comparisons with existing 3D mesh generation models. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
7. NFPLight: Deep SVBRDF Estimation via the Combination of Near and Far Field Point Lighting.
- Author
-
Wang, Li, Zhang, Lianghao, Gao, Fangzhou, Kang, Yuzhen, and Zhang, Jiawan
- Subjects
REQUIREMENTS engineering, COMPUTER graphics, DEEP learning, SOURCE code, REFLECTANCE - Abstract
Recovering the spatially varying bi-directional reflectance distribution function (SVBRDF) from a few hand-held captured images has been a challenging task in computer graphics. Benefiting from the learned priors from data, single-image methods can obtain plausible SVBRDF estimation results. However, the extremely limited appearance information in a single image does not suffice for high-quality SVBRDF reconstruction. Although increasing the number of inputs can improve the reconstruction quality, it also affects the efficiency of real data capture and adds significant computational burdens. Therefore, the key challenge is to minimize the required number of inputs, while keeping high-quality results. To address this, we propose maximizing the effective information in each input through a novel co-located capture strategy that combines near-field and far-field point lighting. To further enhance effectiveness, we theoretically investigate the inherent relation between two images. The extracted relation is strongly correlated with the slope of specular reflectance, substantially enhancing the precision of roughness map estimation. Additionally, we designed the registration and denoising modules to meet the practical requirements of hand-held capture. Quantitative assessments and qualitative analysis have demonstrated that our method achieves superior SVBRDF estimations compared to previous approaches. All source codes will be publicly released. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
8. GPU Coroutines for Flexible Splitting and Scheduling of Rendering Tasks.
- Author
-
Zheng, Shaokun, Chen, Xin, Shi, Zhong, Yan, Ling-Qi, and Xu, Kun
- Subjects
PROGRAM transformation, PROGRAMMING languages, SCHEDULING, THREAD (Textiles) - Abstract
We introduce coroutines into GPU kernel programming, providing an automated solution for flexible splitting and scheduling of rendering tasks. This approach addresses a prevalent challenge in harnessing the power of modern GPUs for complex, imbalanced graphics workloads like path tracing. Usually, to accommodate the SIMT execution model and latency-hiding architecture, developers have to decompose a monolithic mega-kernel into smaller sub-tasks for improved thread coherence and reduced register pressure. However, involving the handling of intricate nested control flows and numerous interdependent program states, this process can be exceedingly tedious and error-prone when performed manually. Coroutines, a building block for asynchronous programming in many high-level CPU languages, exhibit untapped potential for restructuring GPU kernels due to their versatility in control representation. By extending Luisa [Zheng et al. 2022], we implement an asymmetric, stackless coroutine model with programming language support and multiple built-in schedulers for modern GPUs. To showcase the effectiveness of our model and implementation, we examine them in different application scenarios, including path tracing, SDF rendering, and incorporation with custom passes. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
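The splitting-and-scheduling idea can be sketched on the CPU with Python generators standing in for the paper's stackless GPU coroutines: a monolithic task suspends at natural split points, and a scheduler interleaves many instances. Everything below is illustrative; the real system compiles coroutines into GPU kernels via the Luisa framework, and the task body here is a made-up stand-in for path tracing.

```python
def trace_path(pixel, depth=3):
    # A monolithic per-pixel task written as a coroutine. Each yield is a
    # suspension point where a real scheduler could regroup coherent work.
    radiance = 0.0
    for bounce in range(depth):
        # ... intersect scene, shade hit point (omitted) ...
        radiance += 1.0 / (bounce + 1)
        yield
    return radiance

def run_round_robin(tasks):
    """Drive all coroutines to completion, interleaving their stages."""
    results, queue = {}, list(tasks.items())
    while queue:
        key, task = queue.pop(0)
        try:
            next(task)
            queue.append((key, task))   # not finished: requeue
        except StopIteration as stop:
            results[key] = stop.value   # coroutine returned its radiance
    return results

out = run_round_robin({p: trace_path(p) for p in range(4)})
print(out[0])  # 1 + 1/2 + 1/3 ≈ 1.8333
```

The point of the transformation is that the author writes one sequential function while the scheduler decides how its stages are batched, which is exactly the tedious manual mega-kernel decomposition the paper automates.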
9. Lipschitz-agnostic, efficient and accurate rendering of implicit surfaces.
- Author
-
Winchenbach, Rene, Möller, Michael, and Kolb, Andreas
- Subjects
CHEBYSHEV approximation, APPROXIMATION error - Abstract
In this paper, we propose an accurate and controllable rendering process for implicit surfaces with no or unknown analytic Lipschitz constants. Our process is built upon a ray-casting approach where we construct an adaptive Chebyshev proxy along each ray to perform an accurate intersection test via a robust and multi-stage searching method. By taking into account approximation errors and numerical conditions, our method comprises several pre-conditioning and post-processing stages to improve the numerical accuracy, which are potentially applied recursively. The intersection search is performed by evaluating a QR decomposition on the Chebyshev proxy function, which can be done in a numerically accurate way. Our process achieves comparable accuracy to other techniques that impose more constraints on the surface, e.g., knowledge of Lipschitz constants, and higher accuracy compared to approaches that impose similar constraints as our approach. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
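The core step, building a polynomial proxy of the implicit function along a ray and intersecting against the proxy, can be sketched with NumPy's Chebyshev tools. This toy uses NumPy's companion-matrix root finder rather than the paper's multi-stage, error-aware search, and the ray function is a made-up example.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def f(t):
    # Implicit function sampled along one ray: zero crossings at t = 2 and
    # t = 4 (a made-up stand-in for a black-box surface query).
    return (t - 3.0) ** 2 - 1.0

t_min, t_max = 0.0, 5.0
ts = np.linspace(t_min, t_max, 64)

# Fit a degree-8 Chebyshev proxy on [t_min, t_max], then drop numerically
# negligible trailing coefficients so the root finder stays well conditioned.
proxy = C.Chebyshev.fit(ts, f(ts), deg=8, domain=[t_min, t_max]).trim(1e-10)

roots = proxy.roots()  # companion-matrix eigenvalues under the hood
real_hits = sorted(r.real for r in roots
                   if abs(r.imag) < 1e-8 and t_min <= r.real <= t_max)
print(real_hits[0])  # first intersection along the ray, ≈ 2.0
```

The nearest real root inside the ray interval is the intersection distance; the paper's contribution is making this proxy construction and root isolation robust when the function's smoothness and conditioning are unknown.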
10. An interactive visualization tool for the exploration and analysis of multivariate ocean data.
- Author
-
K. G., Preetha, S., Saritha, Jeevan, Jishnu, Sachidanandan, Chinnu, and Maheswaran, P. A.
- Subjects
DEBYE temperatures, MARINE biology, DRIVERLESS cars, MULTIVARIATE analysis, OCEAN - Abstract
Ocean data exhibits great heterogeneity from variances in measuring methods, formats, and quality, making it extremely complicated and diverse due to a variety of data kinds, sources, and study elements. A few examples of data sources are satellites, buoys, ships, self-driving cars, and distant systems. The processing of data is made more challenging by the significant regional and temporal variations in oceanic characteristics including temperature, salinity, and currents. This work presents an interactive tool for multivariate ocean parameter visualisation, specifically overlays, based on Python. In ocean data visualisation, overlays are extra visual layers or data points that are layered to improve comprehension over a basic map. Based on the available data and the visualisation goals, these overlays are chosen and blended. Users can customise overlays with this tool, which also supports formatting, 2D and 3D visualisation, and data preparation. In order to reduce artefacts, it uses kriging interpolation for 3D visualisation and a modified version of the ray casting algorithm for representing octree data. By integrating overlays such as bathymetry, currents, temperature, and marine life, users can produce visually appealing and comprehensive depictions of ocean data. This method provides a thorough grasp of intricate marine processes by making it easier to see patterns, trends, and abnormalities in the data. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
11. Efficient Environment Map Rendering Based on Decomposition.
- Author
-
Wu, Yu‐Ting
- Subjects
PIXELS, ALGORITHMS, LIGHTING, NOISE - Abstract
This paper presents an efficient environment map sampling algorithm designed to render high‐quality, low‐noise images with only a few light samples, making it ideal for real‐time applications. We observe that bright pixels in the environment map produce high‐frequency shading effects, such as sharp shadows and shading, while the rest influence the overall tone of the scene. Building on this insight, our approach differs from existing techniques by categorizing the pixels in an environment map into emissive and non‐emissive regions and developing specialized algorithms tailored to the distinct properties of each region. By decomposing the environment lighting, we ensure that light sources are deposited on bright pixels, leading to more accurate shadows and specular highlights. Additionally, this strategy allows us to exploit the smoothness in the low‐frequency component by rendering a smaller image with more lights, thereby enhancing shading accuracy. Extensive experiments demonstrate that our method significantly reduces shadow artefacts and image noise compared to previous techniques, while also achieving lower numerical errors across a range of illumination types, particularly under limited sample conditions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
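A minimal sketch of the decomposition idea, assuming a simple brightness threshold (the paper's emissive/non-emissive classification is more sophisticated): bright pixels receive the light samples, while the remaining pixels form a smooth residual that sets the overall tone.

```python
import numpy as np

rng = np.random.default_rng(1)
env = rng.random((8, 16)) * 0.2          # dim background illumination
env[2, 5] = 50.0                         # a bright "sun" pixel
env[6, 11] = 30.0                        # a second bright source

threshold = 10.0 * env.mean()            # assumed classification rule
emissive = env > threshold               # high-frequency light sources
ambient = np.where(emissive, 0.0, env)   # low-frequency residual tone

# Importance-sample light positions proportional to emissive intensity.
probs = np.where(emissive, env, 0.0).ravel()
probs /= probs.sum()
samples = rng.choice(env.size, size=4, p=probs)

print(emissive.sum())                    # 2 emissive pixels found
print(all(emissive.ravel()[s] for s in samples))  # samples land on them
```

Depositing samples only on the emissive set is what sharpens shadows and highlights; the smooth `ambient` remainder can then be rendered cheaply (per the abstract, at lower resolution with more lights).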
12. Dimensionality Reduction for the Real-Time Light-Field View Synthesis of Kernel-Based Models.
- Author
-
Courteaux, Martijn, Mareen, Hannes, Ramlot, Bert, Lambert, Peter, and Van Wallendael, Glenn
- Subjects
SIGNAL-to-noise ratio, RADIANCE, PIXELS, DEFAULT (Finance), ALGORITHMS - Abstract
Several frameworks have been proposed for delivering interactive, panoramic, camera-captured, six-degrees-of-freedom video content. However, it remains unclear which framework will meet all requirements the best. In this work, we focus on a Steered Mixture of Experts (SMoE) for 4D planar light fields, which is a kernel-based representation. For SMoE to be viable in interactive light-field experiences, real-time view synthesis is crucial yet unsolved. This paper presents two key contributions: a mathematical derivation of a view-specific, intrinsically 2D model from the original 4D light field model and a GPU graphics pipeline that synthesizes these viewpoints in real time. Configuring the proposed GPU implementation for high accuracy, a frequency of 180 to 290 Hz at a resolution of 2048 × 2048 pixels on an NVIDIA RTX 2080Ti is achieved. Compared to NVIDIA's instant-ngp Neural Radiance Fields (NeRFs) with the default configuration, our light field rendering technique is 42 to 597 times faster. Additionally, allowing near-imperceptible artifacts in the reconstruction process can further increase speed by 40%. A first-order Taylor approximation causes imperfect views with peak signal-to-noise ratio (PSNR) scores between 45 dB and 63 dB compared to the reference implementation. In conclusion, we present an efficient algorithm for synthesizing 2D views at arbitrary viewpoints from 4D planar light-field SMoE models, enabling real-time, interactive, and high-quality light-field rendering within the SMoE framework. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
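The mathematical derivation the abstract mentions, obtaining a view-specific 2D model from a 4D light-field kernel, is for Gaussian kernels an instance of Gaussian conditioning: fix the camera-plane coordinates (s, t) and condition the image-plane coordinates (u, v) on them. The sketch below shows this for a single made-up kernel; the paper derives and GPU-implements the per-expert version.

```python
import numpy as np

mu = np.array([0.5, 0.5, 0.0, 0.0])            # kernel mean over (u, v, s, t)
A = np.array([[1.0, 0.2, 0.1, 0.0],
              [0.0, 1.0, 0.0, 0.1],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])
cov = A @ A.T * 0.01                           # a valid 4x4 covariance

view_st = np.array([0.3, -0.2])                # chosen viewpoint on (s, t)

# Partition x = (uv, st) and condition uv | st = view_st.
S_uu, S_us = cov[:2, :2], cov[:2, 2:]
S_su, S_ss = cov[2:, :2], cov[2:, 2:]
gain = S_us @ np.linalg.inv(S_ss)
mu_2d = mu[:2] + gain @ (view_st - mu[2:])     # view-specific 2D mean
cov_2d = S_uu - gain @ S_su                    # Schur complement

print(mu_2d.shape, cov_2d.shape)               # (2,) (2, 2)
print(np.all(np.linalg.eigvalsh(cov_2d) > 0))  # still a valid 2D Gaussian
```

Because the conditional of a Gaussian is again Gaussian, each 4D expert collapses to an intrinsically 2D kernel per viewpoint, which is what makes a rasterization-style real-time GPU pipeline feasible.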
13. Automatic Inbetweening for Stroke‐Based Painterly Animation.
- Author
-
Barroso, Nicolas, Fondevilla, Amélie, and Vanderhaeghe, David
- Subjects
ANIMATION (Cinematography), ARTISTS - Abstract
Painterly 2D animation, like the paint‐on‐glass technique, is a tedious task performed by skilled artists, primarily using traditional manual methods. Although CG tools can simplify the creation process, previous works often focus on temporal coherence, which typically results in the loss of the handmade look and feel. In contrast to cartoon animation, where regions are typically filled with smooth gradients, stroke‐based stylized 2D animation requires careful consideration of how shapes are filled, as each stroke may be perceived individually. We propose a method to generate intermediate frames using example keyframes and a motion description. This method allows artists to create only one image for every five to 10 output images in the animation, while the automatically generated intermediate frames provide plausible inbetween frames. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
14. Dynamic Voxel‐Based Global Illumination.
- Author
-
Cosin Ayerbe, Alejandro, Poulin, Pierre, and Patow, Gustavo
- Subjects
LIGHT sources, RAY tracing, POLYGONS, LIGHTING, SCALABILITY - Abstract
Global illumination computation in real time has been an objective for Computer Graphics since its inception. Unfortunately, its implementation has challenged up to now the most advanced hardware and software solutions. We propose a real‐time voxel‐based global illumination solution for a single light bounce that handles static and dynamic objects with diffuse materials under a dynamic light source. The combination of ray tracing and voxelization on the GPU offers scalability and performance. Our divide‐and‐win approach, which ray traces separately static and dynamic objects, reduces the re‐computation load with updates of any number of dynamic objects. Our results demonstrate the effectiveness of our approach, allowing the real‐time display of global illumination effects, including colour bleeding and indirect shadows, for complex scenes containing millions of polygons. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
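The inject-then-gather structure of one-bounce voxel GI can be caricatured in a few lines. This is a deliberately tiny 1D stand-in with an assumed 1/d² falloff and a made-up albedo, not the paper's GPU ray-traced voxelization.

```python
import numpy as np

# Stage 1 (inject): direct lighting is stored per voxel.
n = 8
direct = np.zeros(n)
direct[0] = 10.0                     # a lit voxel near the light source
albedo = 0.5

# Stage 2 (gather): indirect light at a receiver voxel is accumulated from
# all lit voxels, attenuated by distance. Real implementations trace rays or
# cones through a 3D grid; this only shows the two-stage structure.
def gather_indirect(receiver):
    total = 0.0
    for v in range(n):
        if v == receiver or direct[v] == 0.0:
            continue
        d = abs(v - receiver)
        total += albedo * direct[v] / (d * d)   # one diffuse bounce
    return total

print(gather_indirect(1))  # 0.5 * 10 / 1  = 5.0
print(gather_indirect(4))  # 0.5 * 10 / 16 = 0.3125
```

Separating static and dynamic objects, as the abstract describes, means only the injected radiance of moving geometry has to be recomputed per frame while the gather structure is reused.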
15. CoupNeRF: Property‐aware Neural Radiance Fields for Multi‐Material Coupled Scenario Reconstruction.
- Author
-
Li, Jin, Gao, Yang, Song, Wenfeng, Li, Yacong, Li, Shuai, Hao, Aimin, and Qin, Hong
- Subjects
CONTINUUM mechanics, SYSTEM identification, RADIANCE, PHYSICS - Abstract
Neural Radiance Fields (NeRFs) have achieved significant recognition for their proficiency in scene reconstruction and rendering by utilizing neural networks to depict intricate volumetric environments. Despite considerable research dedicated to reconstructing physical scenes, few works succeed in challenging scenarios involving dynamic, multi‐material objects. To alleviate this, we introduce CoupNeRF, an efficient neural network architecture that is aware of multiple material properties. This architecture combines physically grounded continuum mechanics with NeRF, facilitating the identification of motion systems across a wide range of physical coupling scenarios. We first reconstruct the specific materials of objects within 3D physical fields to learn material parameters. Then, we develop a method to model the neighbouring particles, enhancing the learning process specifically in regions where material transitions occur. The effectiveness of CoupNeRF is demonstrated through extensive experiments, showcasing its proficiency in accurately coupling and identifying the behavior of complex physical scenes that span multiple physics domains. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
16. GS‐Octree: Octree‐based 3D Gaussian Splatting for Robust Object‐level 3D Reconstruction Under Strong Lighting.
- Author
-
Li, J., Wen, Z., Zhang, L., Hu, J., Hou, F., Zhang, Z., and He, Y.
- Subjects
SOURCE code, DEGREES of freedom, RADIANCE, LIGHTING, GEOMETRY - Abstract
The 3D Gaussian Splatting technique has significantly advanced the construction of radiance fields from multi‐view images, enabling real‐time rendering. While point‐based rasterization effectively reduces computational demands for rendering, it often struggles to accurately reconstruct the geometry of the target object, especially under strong lighting conditions. Strong lighting can cause significant color variations on the object's surface when viewed from different directions, complicating the reconstruction process. To address this challenge, we introduce an approach that combines octree‐based implicit surface representations with Gaussian Splatting. Initially, it reconstructs a signed distance field (SDF) and a radiance field through volume rendering, encoding them in a low‐resolution octree. This initial SDF represents the coarse geometry of the target object. Subsequently, it introduces 3D Gaussians as additional degrees of freedom, which are guided by the initial SDF. In the third stage, the optimized Gaussians enhance the accuracy of the SDF, enabling the recovery of finer geometric details compared to the initial SDF. Finally, the refined SDF is used to further optimize the 3D Gaussians via splatting, eliminating those that contribute little to the visual appearance. Experimental results show that our method, which leverages the distribution of 3D Gaussians with SDFs, reconstructs more accurate geometry, particularly in images with specular highlights caused by strong lighting. The source code can be downloaded from https://github.com/LaoChui999/GS-Octree. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
17. Generative early architectural visualizations: incorporating architect's style-trained models.
- Author
-
Lee, Jin-Kook, Yoo, Youngjin, and Cha, Seung Hyun
- Subjects
GENERATIVE artificial intelligence, ARTIFICIAL intelligence, FASHION, ARCHITECTURAL style, ARCHITECTS - Abstract
This study introduces a novel approach to architectural visualization using generative artificial intelligence (AI), particularly emphasizing text-to-image technology, to remarkably improve the visualization process right from the initial design phase within the architecture, engineering, and construction industry. By creating more than 10 000 images incorporating an architect's personal style and characteristics into a residential house model, the effectiveness of base AI models was examined. Furthermore, various architectural styles were integrated to enhance the visualization process. This method involved additional training for styles with low similarity rates, which required extensive data preparation and their integration into the base AI model. Demonstrated to be effective across multiple scenarios, this technique markedly enhances the efficiency and speed of production of architectural visualization images. Highlighting the vast potential of AI in design visualization, our study emphasizes the technology's shift toward facilitating more user-centered and personalized design applications. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
18. Fast and Incremental 3D Model Renewal for Urban Scenes With Appearance Changes.
- Author
-
Xiong, Yuan and Zhou, Zhong
- Subjects
URBAN renewal, GEOGRAPHIC information systems, CAMERA calibration, AERIAL photographs, MIXED reality - Abstract
Urban 3D models with high‐resolution details are the basis of various mixed reality and geographic information systems. Fast and accurate urban reconstruction from aerial photographs has attracted intense attention. Existing methods exploit multi‐view geometry information from landscape patterns with similar illumination conditions and terrain appearance. In practice, urban models become obsolete over time due to human activities. Mainstream reconstruction pipelines rebuild the whole scene even if the main part of them remains unchanged. This paper proposes a novel wrapping‐based incremental modeling framework to reuse existing models and renew them with new meshes efficiently. The paper illustrates a pose optimization method with illumination‐based augmentation and virtual bundle adjustment. Besides, a high‐performance wrapping‐based meshing method is proposed for fast reconstruction. Experimental results show that the proposed method can achieve higher performance and quality than state‐of‐the‐art methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
19. Accelerating ray tracing engine of BLENDER on the new Sunway architecture
- Author
-
Zhaoqi Sun, Zhen Wang, Mengyuan Hua, Puyu Xiong, Wubing Wan, Ping Gao, Wenlai Zhao, Zhenchun Huang, and Lin Han
- Subjects
Blender, CYCLES, many‐core architecture, rendering, Sunway supercomputer, Engineering (General). Civil engineering (General), TA1-2040, Electronic computers. Computer science, QA75.5-76.95 - Abstract
With the increasing popularity of high‐resolution displays, there is a growing demand for more realistic rendered images. Ray tracing has become the most effective algorithm for image rendering, but its complexity and large amount of computing data require sophisticated HPC solutions. In this article, we present our efforts to port the ray tracing engine CYCLES of Blender to the new generation of Sunway supercomputers. We propose optimizations that are tailored to the new hardware architecture, including a multi‐level parallel scheme that efficiently maps and scales Blender onto the novel Sunway architecture, strategies to address memory bottlenecks, a revised task dispatching method that achieves excellent load balancing, and a pipeline approach that maximizes computation and communication overlap. By combining all these optimizations, we achieve a significant reduction in rendering time for a single‐frame image, from 2260 s using the single‐core serial version to 71 s using 48 processes, which is a speedup of about 128×.
- Published
- 2025
- Full Text
- View/download PDF
20. Exploring the effects of synthetic data generation: a case study on autonomous driving for semantic segmentation
- Author
-
Silva, Manuel, Seoane, Antonio, Mures, Omar A., López, Antonio M., and Iglesias-Guitian, Jose A.
- Published
- 2025
- Full Text
- View/download PDF
21. Generalized Lipschitz Tracing of Implicit Surfaces.
- Author
-
Bán, Róbert and Valasek, Gábor
- Subjects
RAY tracing, SPATIAL resolution, POLYNOMIALS, PRIOR learning, CONSERVATIVES, POLYNOMIAL approximation - Abstract
We present a versatile and robust framework to render implicit surfaces defined by black‐box functions that only provide function value queries. We assume that the input function is locally Lipschitz continuous; however, we presume no prior knowledge of its Lipschitz constants. Our pre‐processing step generates a discrete acceleration structure, a Lipschitz field, that provides data to infer local and directional Lipschitz upper bounds. These bounds are used to compute safe step sizes along rays during rendering. The Lipschitz field is constructed by generating local polynomial approximations to the input function, then bounding the derivatives of the approximating polynomials. The accuracy of the approximation is controlled by the polynomial degree and the granularity of the spatial resolution used during fitting, which is independent from the resolution of the Lipschitz field. We demonstrate that our process can be implemented in a massively parallel way, enabling straightforward integration into interactive and real‐time modelling workflows. Since the construction only requires function value evaluations, the input surface may be represented either procedurally or as an arbitrarily filtered grid of function samples. We query the original implicit representation upon ray trace, as such, we preserve the geometric and topological details of the input as long as the Lipschitz field supplies conservative estimates. We demonstrate our method on both procedural and discrete implicit surfaces and compare its exact and approximate variants. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
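For context, this is the classic safe-step rule the paper generalizes: if L bounds the Lipschitz constant of f along a ray, then f(p)/L is a step that cannot skip over the zero set (sphere tracing). The sketch assumes a known global L and a made-up signed distance function; the paper's contribution is inferring local, directional bounds from a precomputed Lipschitz field instead.

```python
import numpy as np

def sphere_trace(f, origin, direction, L, t_max=100.0, eps=1e-6):
    t = 0.0
    while t < t_max:
        d = f(origin + t * direction)
        if d < eps:
            return t          # hit: surface reached within tolerance
        t += d / L            # safe step: cannot cross the zero set
    return None               # miss

def sdf(p):
    return np.linalg.norm(p) - 1.0   # unit sphere, Lipschitz constant 1

hit = sphere_trace(sdf, np.array([0.0, 0.0, -3.0]),
                   np.array([0.0, 0.0, 1.0]), L=1.0)
print(hit)  # the ray starts 2 units from the sphere, so hit ≈ 2.0
```

An overestimated L still converges (just with smaller steps), while an underestimated L can tunnel through the surface; this is why conservative per-region bounds, as the Lipschitz field provides, are the crux of correctness.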
22. A Generative Adversarial Network for Upsampling of Direct Volume Rendering Images.
- Author
-
Jin, Ge, Jung, Younhyun, Fulham, Michael, Feng, Dagan, and Kim, Jinman
- Subjects
GENERATIVE adversarial networks, COMPUTED tomography, DEEP learning, DIAGNOSTIC imaging, ANGIOGRAPHY - Abstract
Direct volume rendering (DVR) is an important tool for scientific and medical imaging visualization. Modern GPU acceleration has made DVR more accessible; however, the production of high‐quality rendered images with high frame rates is computationally expensive. We propose a deep learning method with a reduced computational demand. We leveraged a conditional generative adversarial network (cGAN) to upsample DVR images (a rendered scene), with a reduced sampling rate to obtain similar visual quality to that of a fully sampled method. Our dvrGAN is combined with a colour‐based loss function that is optimized for DVR images where different structures such as skin, bone, etc. are distinguished by assigning them distinct colours. The loss function highlights the structural differences between images, by examining pixel‐level colour, and thus helps identify, for instance, small bones in the limbs that may not be evident with reduced sampling rates. We evaluated our method in DVR of human computed tomography (CT) and CT angiography (CTA) volumes. Our method retained image quality and reduced computation time when compared to fully sampled methods and outperformed existing state‐of‐the‐art upsampling methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
23. Deep SVBRDF Acquisition and Modelling: A Survey.
- Author
-
Kavoosighafi, Behnaz, Hajisharif, Saghi, Miandji, Ehsan, Baravdish, Gabriel, Cao, Wen, and Unger, Jonas
- Subjects
GENERATIVE artificial intelligence, MACHINE learning, REFLECTANCE measurement, RESEARCH & development, REFLECTANCE, DEEP learning - Abstract
Hand in hand with the rapid development of machine learning, deep learning and generative AI algorithms and architectures, the graphics community has seen a remarkable evolution of novel techniques for material and appearance capture. Typically, these machine‐learning‐driven methods and technologies, in contrast to traditional techniques, rely on only a single or very few input images, while enabling the recovery of detailed, high‐quality measurements of bi‐directional reflectance distribution functions, as well as the corresponding spatially varying material properties, also known as Spatially Varying Bi‐directional Reflectance Distribution Functions (SVBRDFs). Learning‐based approaches for appearance capture will play a key role in the development of new technologies that will exhibit a significant impact on virtually all domains of graphics. Therefore, to facilitate future research, this State‐of‐the‐Art Report (STAR) presents an in‐depth overview of the state‐of‐the‐art in machine‐learning‐driven material capture in general, and focuses on SVBRDF acquisition in particular, due to its importance in accurately modelling complex light interaction properties of real‐world materials. The overview includes a categorization of current methods along with a summary of each technique, an evaluation of their functionalities, their complexity in terms of acquisition requirements, computational aspects and usability constraints. The STAR is concluded by looking forward and summarizing open challenges in research and development toward predictive and general appearance capture in this field. A complete list of the methods and papers reviewed in this survey is available at computergraphics.on.liu.se/star_svbrdf_dl/. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
24. Directional Texture Editing for 3D Models.
- Author
-
Liu, Shengqi, Chen, Zhuo, Gao, Jingnan, Yan, Yichao, Zhu, Wenhan, Lyu, Jiangjing, and Yang, Xiaokang
- Subjects
VIDEO editing, VIDEO processing, TEXTURE mapping, SURFACES (Technology), PROBLEM solving - Abstract
Texture editing is a crucial task in 3D modelling that allows users to automatically manipulate the surface materials of 3D models. However, the inherent complexity of 3D models and the ambiguous text description lead to the challenge of this task. To tackle this challenge, we propose ITEM3D, a Texture Editing Model designed for automatic 3D object editing according to the text Instructions. Leveraging the diffusion models and the differentiable rendering, ITEM3D takes the rendered images as the bridge between text and 3D representation and further optimizes the disentangled texture and environment map. Previous methods adopted the absolute editing direction, namely score distillation sampling (SDS) as the optimization objective, which unfortunately results in noisy appearances and text inconsistencies. To solve the problem caused by the ambiguous text, we introduce a relative editing direction, an optimization objective defined by the noise difference between the source and target texts, to release the semantic ambiguity between the texts and images. Additionally, we gradually adjust the direction during optimization to further address the unexpected deviation in the texture domain. Qualitative and quantitative experiments show that our ITEM3D outperforms the state‐of‐the‐art methods on various 3D objects. We also perform text‐guided relighting to show explicit control over lighting. Our project page: https://shengqiliu1.github.io/ITEM3D/. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
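The relative editing direction that ITEM3D optimizes can be illustrated with a toy sketch: the update follows the difference between the noise predicted under the target prompt and under the source prompt, rather than the absolute SDS direction. Everything here is an assumption for illustration: `predict_noise` is a hypothetical linear surrogate for a text-conditioned diffusion model, and the prompt embeddings are plain vectors.

```python
import numpy as np

# Toy stand-in for a diffusion model's noise predictor eps(x, text):
# in ITEM3D this would be a pretrained text-conditioned UNet; here the
# "prompt embedding" is just a vector the predictor pulls x toward.
def predict_noise(x, prompt_embedding):
    return x - prompt_embedding

def relative_direction_step(x, src_emb, tgt_emb, lr=0.1):
    """One optimisation step along the *relative* editing direction:
    the difference of noise predictions under the target and source
    prompts, instead of the absolute SDS direction."""
    delta = predict_noise(x, tgt_emb) - predict_noise(x, src_emb)
    return x - lr * delta

x = np.zeros(4)        # stand-in for the rendered image
src = np.zeros(4)      # source-prompt embedding
tgt = np.ones(4)       # target-prompt embedding
for _ in range(10):
    x = relative_direction_step(x, src, tgt)
```

With this linear surrogate the relative direction reduces to a constant pull toward the target embedding; a real diffusion prior would produce image-dependent directions.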
25. TraM‐NeRF: Tracing Mirror and Near‐Perfect Specular Reflections Through Neural Radiance Fields.
- Author
-
Holland, Leif Van, Bliersbach, Ruben, Müller, Jan U., Stotko, Patrick, and Klein, Reinhard
- Subjects
- *
RAY tracing , *RADIANCE , *MIRRORS , *REFLECTANCE , *AMBIGUITY - Abstract
Implicit representations like neural radiance fields (NeRF) have shown impressive results for photorealistic rendering of complex scenes with fine details. However, ideal or near‐perfectly specular reflecting objects such as mirrors, which are often encountered in various indoor scenes, impose ambiguities and inconsistencies in the representation of the reconstructed scene, leading to severe artifacts in the synthesized renderings. In this paper, we present a novel reflection tracing method tailored for the volume rendering within NeRF that takes these mirror‐like objects into account while avoiding the cost of straightforward but expensive extensions through standard path tracing. By explicitly modelling the reflection behaviour using physically plausible materials and estimating the reflected radiance with Monte‐Carlo methods within the volume rendering formulation, we derive efficient strategies for importance sampling and the transmittance computation along rays from only a few samples. We show that our novel method enables the training of consistent representations of such challenging scenes and achieves superior results in comparison to previous state‐of‐the‐art approaches. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
26. Deep and Fast Approximate Order Independent Transparency.
- Author
-
Tsopouridis, Grigoris, Vasilakis, Andreas A., and Fudos, Ioannis
- Subjects
- *
DEEP learning , *MACHINE learning , *SOURCE code , *TRIANGLES , *PIXELS - Abstract
We present a machine learning approach for efficiently computing order independent transparency (OIT) by deploying a lightweight neural network implemented fully in shaders. Our method is fast; requires a small constant amount of memory (depending only on the screen resolution, not on the number of triangles or transparent layers); is more accurate than previous approximate methods; works for every scene without setup; and is portable to all platforms, even those running commodity GPUs. Our method requires a rendering pass to extract all features that are subsequently used to predict the overall OIT pixel colour with a pre‐trained neural network. We provide a comparative experimental evaluation and shader source code of all methods for reproduction of the experiments. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
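For context on what such a network must approximate: exact OIT composites each pixel's transparent fragments in depth order with the "over" operator. The sorted reference below is the standard ground truth that approximate and neural OIT methods try to match without storing and sorting per-pixel fragment lists; it is a minimal sketch, not the paper's shader network.

```python
import numpy as np

def exact_oit(fragments):
    """Exact order-independent-transparency reference: sort a pixel's
    transparent fragments by depth and composite front-to-back with the
    'over' operator. Approximate/neural OIT methods aim to reproduce
    this result without per-pixel sorting."""
    color = np.zeros(3)
    transmittance = 1.0
    for depth, rgb, alpha in sorted(fragments, key=lambda f: f[0]):
        color += transmittance * alpha * np.asarray(rgb, dtype=float)
        transmittance *= 1.0 - alpha
    return color, transmittance

# Two overlapping 50%-opaque layers: red nearer than blue.
frags = [(2.0, (0.0, 0.0, 1.0), 0.5),   # blue, farther
         (1.0, (1.0, 0.0, 0.0), 0.5)]   # red, nearer
color, T = exact_oit(frags)
```

The nearer red layer contributes at full weight while the blue layer is attenuated by the red layer's transmittance, which is exactly the order dependence that makes unsorted transparency hard.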
27. Search Me Knot, Render Me Knot: Embedding Search and Differentiable Rendering of Knots in 3D.
- Author
-
Gangopadhyay, Aalok, Gupta, Paras, Sharma, Tarun, Singh, Prajwal, and Raman, Shanmuganathan
- Subjects
- *
RENDERING (Computer graphics) , *TUBE bending , *HOMEOMORPHISMS , *BUDGET , *INVERSE problems - Abstract
We introduce the problem of knot‐based inverse perceptual art. Given multiple target images and their corresponding viewing configurations, the objective is to find a 3D knot‐based tubular structure whose appearance resembles the target images when viewed from the specified viewing configurations. To solve this problem, we first design a differentiable rendering algorithm for rendering tubular knots embedded in 3D for arbitrary perspective camera configurations. Utilizing this differentiable rendering algorithm, we search over the space of knot configurations to find the ideal knot embedding. We represent the knot embeddings via homeomorphisms of the desired template knot, where the weights of an invertible neural network parametrize the homeomorphisms. Our approach is fully differentiable, making it possible to find the ideal 3D tubular structure for the desired perceptual art using gradient‐based optimization. We propose several loss functions that impose additional physical constraints, enforcing that the tube is free of self‐intersection, lies within a predefined region in space, satisfies the physical bending limits of the tube material, and the material cost is within a specified budget. We demonstrate through results that our knot representation is highly expressive and gives impressive results even for challenging target images under both single‐view and multiple‐view constraints. Through an extensive ablation study, we show that each proposed loss function effectively ensures physical realizability. We construct a real‐world 3D‐printed object to demonstrate the practical utility of our approach. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
28. Physics Based Differentiable Rendering for Inverse Problems and Beyond.
- Author
-
Kakkar, Preetish, Mukherjee, Srijani, Ragothaman, Hariharan, and Mehta, Vishal
- Subjects
- *
LIGHT propagation , *COMPUTER vision , *INVERSE problems , *MACHINE learning , *PHYSICS - Abstract
Physics-based differentiable rendering (PBDR) has become an efficient method in computer vision, graphics, and machine learning for addressing an array of inverse problems. By incorporating physical models of light propagation and material interaction, PBDR allows gradients to be propagated from observations back to object attributes such as geometry, materials, and lighting. Due to these capabilities, differentiable rendering has been employed in a wide range of sectors such as autonomous navigation, scene reconstruction, and material design. We provide an extensive overview of PBDR techniques in this study, emphasizing their creation, effectiveness, and limitations in handling inverse problems. We demonstrate modern techniques and examine their value in everyday situations. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
29. Refined tri-directional path tracing with generated light portal.
- Author
-
Wei, Xuchen, Pu, GuiYang, Huo, Yuchi, Bao, Hujun, and Wang, Rui
- Subjects
- *
MONTE Carlo method , *PATH analysis (Statistics) - Abstract
The rendering efficiency of Monte Carlo path tracing often depends on the ease of path construction. For scenes with particularly complex visibility, e.g. where the camera and light sources are placed in separate rooms connected by narrow doorways or windows, it is difficult to construct valid paths using traditional path tracing algorithms such as unidirectional or bidirectional path tracing. Light portals are a class of methods that assist in sampling direct light paths based on prior knowledge of the scene; they usually require additional manual editing and labelling by the artist or renderer user. Tri-directional path tracing is a sophisticated path tracing algorithm that combines bidirectional path tracing and light portal sampling, but the original work lacks sufficient analysis to demonstrate its effectiveness. In this paper, we propose an automatic light portal generation algorithm based on spatial radiosity analysis that mitigates the cost of manual operations for complex scenes. We also further analyse and improve the light portal-based tri-directional path tracing algorithm, giving a detailed analysis of path construction strategies, algorithm complexity, and the unbiasedness of the Monte Carlo estimation. The experimental results show that our algorithm can accurately locate the light portals with low computational cost and effectively improve the rendering performance of complex scenes. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
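The core idea behind portal sampling can be sketched numerically: instead of sampling the whole hemisphere, sample points on the rectangular opening and integrate the geometric term only over the portal. The sketch below estimates the solid angle a portal subtends from a shading point; it illustrates portal-based sampling in general, not the paper's automatic generation algorithm.

```python
import numpy as np

def portal_solid_angle(portal_corner, edge_u, edge_v, shading_point, n=64):
    """Estimate the solid angle a rectangular light portal subtends from
    a shading point by integrating cos(theta) / r^2 over the portal area
    on a deterministic stratified grid. Sampling only the portal (instead
    of the whole hemisphere) is the essence of portal-based direct-light
    sampling: rays are generated exclusively toward the opening."""
    cross = np.cross(edge_u, edge_v)
    area = np.linalg.norm(cross)
    normal = cross / area
    ticks = (np.arange(n) + 0.5) / n      # stratified sample positions
    total = 0.0
    for s in ticks:
        for t in ticks:
            p = portal_corner + s * edge_u + t * edge_v
            d = p - shading_point
            r = np.linalg.norm(d)
            cos_theta = abs(np.dot(d / r, normal))
            total += cos_theta / r**2
    return area * total / (n * n)

# A 1x1 portal centred 10 units away along +z, facing the shading point;
# the result should be close to area / distance^2 = 0.01.
omega = portal_solid_angle(np.array([-0.5, -0.5, 10.0]),
                           np.array([1.0, 0.0, 0.0]),
                           np.array([0.0, 1.0, 0.0]),
                           np.array([0.0, 0.0, 0.0]))
```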
30. Exploring 'Modelo Alberto Spencer' Stadium: 3D Virtualization for an Immersive Experience
- Author
-
Aguas, Luis, Toasa, Renato-M., Pabón, Juan, Salazar, Elizabeth, Kacprzyk, Janusz, Series Editor, Gomide, Fernando, Advisory Editor, Kaynak, Okyay, Advisory Editor, Liu, Derong, Advisory Editor, Pedrycz, Witold, Advisory Editor, Polycarpou, Marios M., Advisory Editor, Rudas, Imre J., Advisory Editor, Wang, Jun, Advisory Editor, Garcia, Marcelo V., editor, Gordón-Gallegos, Carlos, editor, Salazar-Ramírez, Asier, editor, and Nuñez, Carlos, editor
- Published
- 2024
- Full Text
- View/download PDF
31. Artificial Intelligence for Hair Color Rendering
- Author
-
Balladares, Johanna, Manzano, Santiago, Jaime, Ruiz, Granizo, Cesar, Kacprzyk, Janusz, Series Editor, Gomide, Fernando, Advisory Editor, Kaynak, Okyay, Advisory Editor, Liu, Derong, Advisory Editor, Pedrycz, Witold, Advisory Editor, Polycarpou, Marios M., Advisory Editor, Rudas, Imre J., Advisory Editor, Wang, Jun, Advisory Editor, Garcia, Marcelo V., editor, Gordón-Gallegos, Carlos, editor, Salazar-Ramírez, Asier, editor, and Nuñez, Carlos, editor
- Published
- 2024
- Full Text
- View/download PDF
32. Science and Technology of Fats and Lipids
- Author
-
Athira, K. R., Sifana, P. I., Menon, Sajith, Thomas, Sabu, editor, Hosur, Mahesh, editor, Pasquini, Daniel, editor, and Jose Chirayil, Cintil, editor
- Published
- 2024
- Full Text
- View/download PDF
33. Applications and Limitations of Machine Learning in Computer Graphics
- Author
-
Fu, Chengyu, Luo, Xun, Editor-in-Chief, Almohammedi, Akram A., Series Editor, Chen, Chi-Hua, Series Editor, Guan, Steven, Series Editor, Pamucar, Dragan, Series Editor, and Ahmad, Badrul Hisham, editor
- Published
- 2024
- Full Text
- View/download PDF
34. ANALYSIS OF STRATEGIES FOR MOBILE OPTIMIZATION IN FRONTEND DEVELOPMENT.
- Author
-
Denys, Sidorov
- Subjects
MODULAR construction ,IMPACT loads ,MOBILE apps ,USER experience ,SPEED - Abstract
The article analyzes mobile optimization strategies in frontend development aimed at enhancing the performance and usability of mobile applications. It examines architectural approaches such as modular and microservices structures, as well as rendering methods (server-side and client-side) and their impact on load speed and interface responsiveness. Special attention is given to application state management methods, responsive design, and adherence to accessibility standards, which together improve the user experience for a wide audience. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
35. Critical Archaeology in the Digital Age: Proceedings of the 12th IEMA Visiting Scholar’s Conference
- Subjects
Archaeology ,Garstki ,Digital ,Virtual ,Data ,Rendering ,Education - Abstract
Every part of archaeological practice is intimately tied to digital technologies, but how deeply do we really understand the ways these technologies impact the theoretical trends in archaeology, how these trends affect the adoption of these technologies, or how the use of technology alters our interactions with the human past? This volume suggests a critical approach to archaeology in a digital world, a purposeful and systematic application of digital tools in archaeology. This is a call to pay attention to your digital tools, to be explicit about how you are using them, and to understand how they work and impact your own practice. The chapters in this volume demonstrate how this critical, reflexive approach to archaeology in the digital age can be accomplished, touching on topics that include 3D data, predictive and procedural modelling, digital publishing, digital archiving, public and community engagement, ethics, and global sustainability. The scale and scope of this research demonstrates how necessary it is for all archaeological practitioners to approach this digital age with a critical perspective and to be purposeful in our use of digital technologies.
- Published
- 2022
36. Biodiesel production from agricultural biomass wastes: Duroc breed fat oil, Citrillus lanatus rind, and Sorghum Bagasse
- Author
-
Akwenuke, O.M., Okwelum, C.O., Balogun, T.A., Nwadiolu, R., Okolotu, G.I., Chukwuma, I.E., Adepoju, T.F., Essaghah, A.E., Ibimilua, A.F., and Taiga, A.
- Published
- 2024
- Full Text
- View/download PDF
37. Neural Radiance Field Watermarking Based on Invertible Neural Networks.
- Author
-
孙文权, 刘佳, 董炜娜, 陈立峰, and 钮可
- Abstract
Aiming at the copyright problem surrounding 3D models represented as neural radiance fields, this paper treats the embedding and extraction of neural radiance field watermarks as inverse problems of image transformation and proposes a scheme for protecting the copyright of neural radiance fields using invertible neural network watermarking. The scheme utilizes 2D image watermarking technology to safeguard 3D scenes. In the forward process of the invertible network, the watermark is embedded in the training images of the neural radiance field; in the reverse process, the watermark is extracted from images rendered by the neural radiance field. This ensures copyright protection for both the neural radiance field and the 3D scene. However, the rendering process of the neural radiance field may cause loss of watermark information. To address this, the paper introduces an image quality enhancement module, which uses a neural network to restore the rendered image before extracting the watermark. Simultaneously, the watermark is embedded in each training image used to train the neural radiance field, enabling watermark extraction from multiple viewpoints. Experimental results demonstrate that the watermarking scheme effectively achieves copyright protection and confirms the feasibility of the proposed approach. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
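The property the scheme relies on — that an invertible network's reverse pass extracts exactly what the forward pass embedded — can be shown with a single additive coupling layer, the standard building block of invertible networks. This is a toy sketch: the fixed random function `m` stands in for a learned sub-network, and the vectors stand in for image halves; it is not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed random function standing in for the learned sub-network that
# conditions the coupling transform (an assumption for this sketch).
W = rng.standard_normal((4, 4))
def m(h):
    return np.tanh(h @ W)

def couple_forward(cover, watermark):
    """Additive coupling layer: invertible by construction. The cover
    half passes through unchanged; the watermark half is hidden as an
    offset conditioned on the cover."""
    return cover, watermark + m(cover)

def couple_inverse(y1, y2):
    # Reverse pass: subtract the identical conditioned offset.
    return y1, y2 - m(y1)

cover = rng.standard_normal(4)
wm = rng.standard_normal(4)
y1, y2 = couple_forward(cover, wm)
rec_cover, rec_wm = couple_inverse(y1, y2)
```

In the paper's setting the rendered image degrades the stego signal, which is why an additional quality-enhancement module is needed before extraction; the coupling layer alone is exactly invertible.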
38. Bioactive peptides extracted from hydrolyzed animal byproducts for dogs and cats.
- Author
-
Vasconcellos, Ricardo S, Volpato, Josiane A, and Silva, Ingrid C
- Subjects
DIETARY bioactive peptides ,SCIENTIFIC literature ,FEATHERS ,PEPTIDE antibiotics ,ANGIOTENSIN converting enzyme ,MILK proteins ,DOGS ,LIVER proteins - Abstract
This article explores the use of bioactive peptides and hydrolyzed proteins from animal byproducts in pet food for dogs and cats. These peptides are produced through enzymatic hydrolysis and offer various health benefits, including prebiotic, antioxidant, anti-inflammatory, immunological, and antihypertensive effects. Animal byproducts like skin, blood, bones, and feathers are commonly used to create these ingredients. The article also discusses different methods of protein hydrolysis, such as chemical, enzymatic, and microbial methods. Overall, the inclusion of bioactive peptides and hydrolyzed proteins in pet food enhances its nutritional value and taste. The article acknowledges the potential benefits of bioactive peptides in areas like gut health, immune function, joint health, antioxidant activity, antimicrobial activity, blood pressure control, and glycemic control, but emphasizes the need for further research. It also mentions the growing popularity of enzymatically hydrolyzed ingredients in the pet food industry and highlights the authors' expertise in animal nutrition. [Extracted from the article]
- Published
- 2024
- Full Text
- View/download PDF
39. Importance Sampling BRDF Derivatives.
- Author
-
BELHE, YASH, XU, BING, BANGARU, SAI PRAVEEN, RAMAMOORTHI, RAVI, and LI, TZU-MAO
- Subjects
PARTITION functions ,REFLECTANCE - Abstract
We propose a set of techniques to efficiently importance sample the derivatives of a wide range of Bidirectional Reflectance Distribution Function (BRDF) models. In differentiable rendering, BRDFs are replaced by their differential BRDF counterparts, which are real-valued and can have negative values. This leads to a new source of variance arising from their change in sign. Real-valued functions cannot be perfectly importance sampled by a positive-valued PDF, and the direct application of BRDF sampling leads to high variance. Previous attempts at antithetic sampling only addressed the derivative with the roughness parameter of isotropic microfacet BRDFs. Our work generalizes BRDF derivative sampling to anisotropic microfacet models, mixture BRDFs, Oren-Nayar, Hanrahan-Krueger, among other analytic BRDFs. Our method first decomposes the real-valued differential BRDF into a sum of single-signed functions, eliminating variance from a change in sign. Next, we importance sample each of the resulting single-signed functions separately. The first decomposition, positivization, partitions the real-valued function based on its sign, and is effective at variance reduction when applicable. However, it requires analytic knowledge of the roots of the differential BRDF, and for it to be analytically integrable too. Our key insight is that the single-signed functions can have overlapping support, which significantly broadens the ways we can decompose a real-valued function. Our product and mixture decompositions exploit this property, and they allow us to support several BRDF derivatives that positivization could not handle. For a wide variety of BRDF derivatives, our method significantly reduces the variance (up to 58× in some cases) at equal computation cost and enables better recovery of spatially varying textures through gradient-descent-based inverse rendering. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
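Positivization, the paper's first decomposition, can be demonstrated on a one-dimensional toy integrand. Assuming f(x) = cos(x) on [0, π] as a stand-in for a sign-changing BRDF derivative, we split it at its root and importance sample each single-signed piece with a PDF proportional to that piece:

```python
import numpy as np

rng = np.random.default_rng(1)

# A real-valued integrand with a sign change, standing in for a BRDF
# derivative: f(x) = cos(x) on [0, pi]; its true integral is 0.
f = np.cos

def positivized_estimate(n=1024):
    """Positivization sketch: split f into its positive part on [0, pi/2]
    and its negative part on [pi/2, pi], then importance sample each
    single-signed piece with a PDF exactly proportional to it
    (inverse-CDF sampling). No variance arises from the sign change."""
    u = rng.random(n)
    # Positive piece: pdf p(x) = cos(x) on [0, pi/2]; CDF sin(x) => x = asin(u).
    xp = np.arcsin(u)
    est_pos = np.mean(f(xp) / np.cos(xp))        # ratio is constant: +1
    # Negative piece: |f|(x) = -cos(x) on [pi/2, pi]; sample by symmetry.
    xn = np.pi - np.arcsin(u)
    est_neg = np.mean(f(xn) / (-np.cos(xn)))     # ratio is constant: -1
    return est_pos + est_neg

estimate = positivized_estimate()
```

Because each sampling PDF matches its single-signed piece exactly, both halves are estimated with zero variance here; uniformly sampling the signed integrand directly would leave residual variance from the sign change.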
40. HaLo‐NeRF: Learning Geometry‐Guided Semantics for Exploring Unconstrained Photo Collections.
- Author
-
Dudai, Chen, Alper, Morris, Bezalel, Hana, Hanocka, Rana, Lang, Itai, and Averbuch‐Elor, Hadar
- Subjects
- *
LANGUAGE models , *SEMANTICS , *METADATA - Abstract
Internet image collections containing photos captured by crowds of photographers show promise for enabling digital exploration of large‐scale tourist landmarks. However, prior works focus primarily on geometric reconstruction and visualization, neglecting the key role of language in providing a semantic interface for navigation and fine‐grained understanding. In more constrained 3D domains, recent methods have leveraged modern vision‐and‐language models as a strong prior of 2D visual semantics. While these models display an excellent understanding of broad visual semantics, they struggle with unconstrained photo collections depicting such tourist landmarks, as they lack expert knowledge of the architectural domain and fail to exploit the geometric consistency of images capturing multiple views of such scenes. In this work, we present a localization system that connects neural representations of scenes depicting large‐scale landmarks with text describing a semantic region within the scene, by harnessing the power of SOTA vision‐and‐language models with adaptations for understanding landmark scene semantics. To bolster such models with fine‐grained knowledge, we leverage large‐scale Internet data containing images of similar landmarks along with weakly‐related textual information. Our approach is built upon the premise that images physically grounded in space can provide a powerful supervision signal for localizing new concepts, whose semantics may be unlocked from Internet textual metadata with large language models. We use correspondences between views of scenes to bootstrap spatial understanding of these semantics, providing guidance for 3D‐compatible segmentation that ultimately lifts to a volumetric scene representation. To evaluate our method, we present a new benchmark dataset containing large‐scale scenes with ground‐truth segmentations for multiple semantic concepts. 
Our results show that HaLo‐NeRF can accurately localize a variety of semantic concepts related to architectural landmarks, surpassing the results of other 3D models as well as strong 2D segmentation baselines. Our code and data are publicly available at https://tau‐vailab.github.io/HaLo‐NeRF/. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
41. Multimodal perception of digital protective materials.
- Author
-
BOCANCEA, VICTORIA, MARIN, IRINA ELENA, and LOGHIN, CARMEN MARIA
- Subjects
VIRTUAL prototypes ,PADS & protectors (Textiles) ,LIKERT scale ,DIGITAL images ,CLOTHING industry ,3-D animation ,TECHNICAL textiles - Abstract
Copyright of Industria Textila is the property of Institutul National de Cercetare-Dezvoltare pentru Textile si Pielarie and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2024
- Full Text
- View/download PDF
42. DEVELOPMENT OF A PLUGIN FOR VISUALIZING STRUCTURAL DIAGRAMS OF CALCULATORS BASED ON A TEXTUAL DESCRIPTION OF HARMONIC TRANSFORMATION ALGORITHMS.
- Author
-
І., Процько and В., Теслюк
- Subjects
COMPUTER engineering ,HARTLEY transforms ,REAL numbers ,DATA visualization ,HARMONIC functions - Abstract
Context. In many areas of science and technology, the numerical solution of problems alone is not enough for further development and implementation of the obtained results. Among existing information visualization approaches, the one chosen is that which effectively reveals unstructured actionable ideas and generalizes or simplifies the analysis of the received data. Visualizing generalized structural diagrams based on a textual description of the algorithm clearly reflects the interaction of its parts, which is important at the system engineering stage of calculator design. The objective of the study is the analysis and software implementation of structure visualization, using the example of discrete harmonic transformation calculators obtained by synthesizing an algorithm based on cyclic convolutions, with the possibility of extending the visualization to other computational algorithms. Method. A generalized scheme for synthesizing fast harmonic transformation algorithms as a set of cyclic convolution operations on combined sequences of input data and coefficients of the harmonic transformation function, visualized as a generalized structural diagram of the calculator. Results. The result of the work is a software implementation of the visualization of generalized structural diagrams for the synthesized cosine and Hartley transformation algorithms, which visually reflect the interaction of the main blocks of the calculator. The visualization is implemented in TypeScript using the Phaser 3 framework. Conclusions. The work considers and analyzes the developed software implementation of visualizing the general structure of the calculator for fast discrete harmonic transformation algorithms over real numbers, obtained by synthesizing the algorithm based on cyclic convolutions. The visualized variants of the structural diagrams clearly reflect the interaction of the calculator's parts and allow one or another variant of the computing algorithm to be evaluated during the design process. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
43. Real-Time Monte Carlo Denoising With Adaptive Fusion Network
- Author
-
Junmin Lee, Seunghyun Lee, Min Yoon, and Byung Cheol Song
- Subjects
Image processing ,rendering ,real-time de-noising ,Electrical engineering. Electronics. Nuclear engineering ,TK1-9971 - Abstract
Real-time Monte Carlo denoising aims to denoise a 1spp-rendered image within a limited time budget. Many recent techniques for real-time Monte Carlo denoising utilize temporal accumulation (TA) as a pre-processing step to improve the temporal stability of successive frames and increase the effective spp. However, existing techniques using TA suffer significant performance degradation when TA does not work well. In addition, their performance deteriorates in dynamic scenes because pixel information of the current frame cannot be fully utilized due to the pixel-averaging effect between temporally adjacent frames. To solve this problem, this paper proposes a framework that utilizes both 1spp images and temporally accumulated 1spp (TA-1spp) images. First, the multi-scale kernel prediction module estimates kernel maps for filtering 1spp images and TA-1spp images, respectively. Then, the filtered images are fused so that the advantages of 1spp and TA-1spp images create synergy. The remaining noise is removed and fine details are reconstructed by the refinement module, improving model flexibility beyond using only the kernel prediction module. As a result, we achieve better quantitative and qualitative performance while running 39% faster than state-of-the-art (SOTA) real-time Monte Carlo denoisers.
- Published
- 2024
- Full Text
- View/download PDF
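The temporal accumulation (TA) pre-processing the abstract refers to is, at its simplest, an exponential moving average of successive frames. The sketch below assumes a static scene (a real renderer would first reproject the history buffer with motion vectors) and shows the variance reduction TA buys before any denoising network runs; the blend factor is an illustrative choice.

```python
import numpy as np

def temporal_accumulate(noisy_frames, alpha=0.2):
    """Temporal accumulation (TA): blend each incoming 1spp frame with
    the running history buffer,
        c_ta = alpha * c_new + (1 - alpha) * c_history,
    which raises the effective samples-per-pixel over time. This sketch
    assumes a static scene, so no motion-vector reprojection is needed."""
    history = noisy_frames[0].astype(float)
    for frame in noisy_frames[1:]:
        history = alpha * frame + (1.0 - alpha) * history
    return history

rng = np.random.default_rng(2)
clean = np.full((8, 8), 0.5)                       # ground-truth radiance
frames = [clean + rng.normal(0.0, 0.2, clean.shape) for _ in range(64)]
accumulated = temporal_accumulate(frames)
```

The steady-state variance of the EMA is reduced by roughly a factor alpha / (2 - alpha) relative to a single frame, which is exactly why TA-1spp inputs are so much more stable than raw 1spp frames — and why the blend fails when the history cannot be reprojected, the case the paper's fusion network targets.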
44. Modeling Practical Multi-Center-of-Projection Using Ellipsoid
- Author
-
Soohyun Lee, Junyoung Yoon, and Joo Ho Lee
- Subjects
Projection geometry ,rendering ,scene contraction ,Electrical engineering. Electronics. Nuclear engineering ,TK1-9971 - Abstract
Traditional 3D projection models, such as perspective and orthographic projection, are limited to two types of projective ray fields: rays passing through a single point and parallel rays. In this paper, we introduce an ellipsoid-based 3D projection model to overcome the sparsity of 3D projections. Our ellipsoidal 3D projection model comprises an ellipsoid and an axis-aligned geometry such as a line or a plane. By linearly mapping these two geometries along their principal axes, our model enables us to explore the continuous domain of projective ray fields while taking advantage of the anisotropy in ellipsoids. We introduce the intrinsic characteristic of our projection field, called the ellipse property, which enables testing isomorphism with other projection models. We prove that our model differs from the catadioptric projection model employing an elliptic mirror. In addition, we propose a perspectivity metric for intuitive control over the parameter space. We present both forward and backward projections of our model, demonstrating its applicability across several visual applications, ranging from image synthesis to scene reconstruction.
- Published
- 2024
- Full Text
- View/download PDF
45. Making JavaScript Render Decisions to Optimize Security-Oriented Crawler Process
- Author
-
Onur Aktas and Ahmet Burak Can
- Subjects
Crawler ,cyber security ,JavaScript ,machine learning ,rendering ,web application security ,Electrical engineering. Electronics. Nuclear engineering ,TK1-9971 - Abstract
The widespread use of web applications requires important changes in cybersecurity to protect online services and data. In the process of identifying security vulnerabilities in web applications, a systematic approach is employed to detect and mitigate cybersecurity risks. This approach utilizes web crawlers to identify attack vectors. Traditional web crawling methods are resource-intensive and often inefficient at handling dynamic, JavaScript-rich content. Addressing this gap, our study introduces an approach to predict the necessity of JavaScript rendering, thereby enhancing the effectiveness and efficiency of security-oriented web crawlers. This approach uses machine learning algorithms to reduce computational requirements and accelerate the security evaluation process. By utilizing a dataset containing the source code from the main pages of 17,160 websites, our experimental results demonstrate a 20% reduction in execution time compared to full JavaScript rendering, indicating an improvement in resource usage without any significant reduction in coverage. Our methodology significantly improves the efficiency of security-focused web crawlers and helps security scanners detect security risks of web applications with fewer resources.
- Published
- 2024
- Full Text
- View/download PDF
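A render-or-not decision of this kind can be sketched as a cheap static-feature classifier over raw HTML. The feature set and weights below are illustrative assumptions for the sake of the sketch — not the paper's trained model, dataset, or features:

```python
import re

def features(html):
    """Cheap static features from a page's raw HTML, computed without
    rendering. All four features are hypothetical examples."""
    return [
        len(re.findall(r"<script\b", html, re.I)),        # script tags
        1 if re.search(r"\b(react|vue|angular)\b", html, re.I) else 0,
        len(re.findall(r"<a\s+[^>]*href=", html, re.I)),  # static links
        1 if "<noscript" in html.lower() else 0,
    ]

def needs_js_rendering(html, threshold=0.5):
    """Hand-tuned linear score standing in for a trained classifier:
    many scripts / SPA-framework markers push toward a full JavaScript
    rendering pass; plentiful static links push toward skipping it."""
    n_scripts, has_framework, n_links, has_noscript = features(html)
    score = (0.15 * n_scripts + 2.0 * has_framework
             + 0.5 * has_noscript - 0.1 * n_links)
    return score > threshold

spa = '<html><script src="react.bundle.js"></script><div id="root"></div></html>'
static_page = '<html>' + '<a href="/p">x</a>' * 20 + '</html>'
```

In the paper's setting, a model trained on labelled pages replaces the hand-tuned score, and skipping the headless-browser pass on pages classified as static is where the reported execution-time savings come from.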
46. Path from photorealism to perceptual realism
- Author
-
Zhong, Fangcheng and Mantiuk, Rafal
- Subjects
3D display ,computer graphics ,mixed reality ,perceptual realism ,perceptually realistic graphics ,rendering ,virtual reality - Abstract
Photorealism in computer graphics - rendering images that appear as realistic as photographs - has matured to the point that it is now widely used in industry. With emerging 3D display technologies, the next big challenge in graphics is to achieve Perceptual Realism - producing virtual imagery that is perceptually indistinguishable from real-world 3D scenes. Such a significant upgrade in the level of realism offers highly immersive and engaging experiences that have the potential to revolutionise numerous aspects of life and society, including entertainment, social networks, education, business, research, engineering, and design. While perceptual realism puts strict requirements on the quality of reproduction, the virtual scene does not have to be identical in light distributions to its physical counterpart to be perceptually realistic, provided that it is visually indistinguishable to human eyes. Due to the limitations of human vision, a significant improvement in perceptual realism can, in principle, be achieved by fulfilling the essential visual requirements with sufficient quality, without having to reconstruct the physically accurate distribution of lights. In this dissertation, we start by discussing the capabilities and limits of the human visual system, which serves as a basis for the analysis of the essential visual requirements for perceptual realism. Next, we introduce a Perceptually Realistic Graphics (PRG) pipeline consisting of the acquisition, representation, and reproduction of the plenoptic function of a 3D scene. Finally, we demonstrate that taking advantage of the limits and mechanisms of the human visual system can significantly improve this pipeline. Specifically, we present three approaches to push the quality of virtual imagery towards perceptual realism. 
First, we introduce DiCE, a real-time rendering algorithm that exploits the binocular fusion mechanism of the human visual system to boost the perceived local contrast of stereoscopic displays. The method was inspired by an established model of binocular contrast fusion. To optimise the experience of binocular fusion, we proposed and empirically validated a rivalry-prediction model that better controls rivalry. Next, we introduce Dark Stereo, another real-time rendering algorithm that facilitates depth perception from binocular depth cues for stereoscopic displays, especially those under low luminance. The algorithm was designed based on a proposed model of stereo constancy that predicts the precision of binocular depth cues for a given contrast and luminance. Both DiCE and Dark Stereo have been experimentally demonstrated to be effective in improving realism. Their real-time performance also makes them readily integrable into any existing VR rendering pipeline. Nonetheless, only improving rendering is not sufficient to meet all the visual requirements for perceptual realism. The overall fidelity of a typical stereoscopic VR display is still confined by its limited dynamic range, low spatial resolution, optical aberrations, and vergence-accommodation conflicts. To push the limits of the overall fidelity, we present a High-Dynamic-Range Multi-Focal Stereo display (HDR-MF-S display) with an end-to-end imaging and rendering system. The system can visually reproduce real-world 3D objects with high resolution, accurate colour, a wide dynamic range and contrast, and most depth cues, including binocular disparity and focal depth cues, and permits a direct comparison between real and virtual scenes. It is the first work that achieves a close perceptual match between a physical 3D object and its virtual counterpart. 
The fidelity of reproduction has been confirmed by a Visual Turing Test (VTT) where naive participants failed to discern any difference between the real and virtual objects in more than half of the trials. The test provides insights to better understand the conditions necessary to achieve perceptual realism. In the long term, we foresee this system as a crucial step in the development of perceptually realistic graphics, for not only a quality unprecedentedly achieved but also a fundamental approach that can effectively identify bottlenecks and direct future studies for perceptually realistic graphics.
- Published
- 2022
- Full Text
- View/download PDF
47. Motion quality models for real-time adaptive rendering
- Author
-
Jindal, Akshay and Mantiuk, Rafał
- Subjects
Computer Graphics ,Perception ,Rendering ,Displays - Abstract
The demand for compute power and transmission bandwidth is growing rapidly as display technologies progress towards higher spatial resolutions and frame rates, more bits per pixel (HDR), and the multiple views required for 3D displays. Advances in real-time rendering have also made shading increasingly complex. However, GPUs are still limited in processing capability and often have to work at a fraction of their available bandwidth due to hardware constraints. In this dissertation, I build upon the observation that the human visual system has a limited capability to perceive images of high spatial and temporal frequency, and hence it is unnecessary to strive to meet these computational demands in full. I propose to model the spatio-temporal limitations of the visual system, specifically the perception of image artefacts under motion, and exploit them to improve the quality of rendering. I present four main contributions. First, I demonstrate the potential of existing motion quality models to improve rendering quality under restricted bandwidths; this validation is performed through psychophysical experiments with an eye tracker, involving complex motion on a G-Sync display. Second, I note that current models of motion quality ignore the effect of the displayed content and cannot take advantage of recent shading technologies such as variable-rate shading, which allows more flexible control of local shading resolution. To this end, I develop a new content-dependent model of motion quality and calibrate it through psychophysical experiments on a wide range of content, display configurations, and velocities. Third, I propose a new rendering algorithm that uses such models to calculate the optimal refresh rate and local shading resolution given the allowed bandwidth.
Finally, I present a novel high dynamic range multi-focal stereo display that will serve as an experimental apparatus for the next generation of perceptual experiments, enabling us to study the interplay of these factors in achieving perceptual realism.
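The third contribution, choosing a refresh rate and shading resolution under a bandwidth budget, can be sketched as a small constrained search. This is a hypothetical illustration, not the dissertation's algorithm: `quality_model` stands in for the calibrated content-dependent motion quality model, and bandwidth cost is crudely approximated as shaded pixels per second.

```python
from itertools import product

def best_config(refresh_rates, shading_rates, bandwidth_budget,
                quality_model, pixels=1920 * 1080):
    """Pick the (refresh rate, shading rate) pair that maximises a motion
    quality model subject to a bandwidth budget.  `quality_model` is a
    hypothetical callable standing in for a calibrated, content-dependent
    model of perceived motion quality."""
    best, best_q = None, float("-inf")
    for hz, sr in product(refresh_rates, shading_rates):
        cost = hz * pixels * sr          # shaded pixels per second
        if cost > bandwidth_budget:
            continue                      # infeasible under the budget
        q = quality_model(hz, sr)
        if q > best_q:
            best, best_q = (hz, sr), q
    return best
```

For example, with a toy quality model that simply rewards both high refresh rate and high shading rate, a budget sufficient for 120 Hz at half-rate shading also admits 60 Hz at full-rate shading; a content-dependent model is what breaks such ties, preferring high refresh rates for fast-moving content and high shading resolution for near-static content.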
- Published
- 2022
- Full Text
- View/download PDF
48. Development of utility pet soap utilizing rendered fat from deserted poultry sleeves
- Author
-
Gangwar, Mukesh, Kumar, Rajiv Ranjan, Mendiratta, S.K., Biswas, Ashim Kumar, and Chand, Sagar
- Published
- 2023
- Full Text
- View/download PDF
49. Wandering Architecture Through the Looking Glass of Digital Representation: An Expeditious Teaching Experience in Understanding and Modelling Modern Architecture
- Author
-
Anna Sanseverino, Victoria Ferraris, and Carla Ferreyra
- Subjects
le corbusier ,coromandel estate ,curutchet house ,jooste house ,rendering ,Psychology ,BF1-990 ,Visual arts ,N1-9211 - Abstract
The present work focuses on an expeditious teaching experience in Architecture and 3D modelling aimed at the ‘Architectural Drawing II’ and ‘Computer Graphics’ students of the Building Engineering-Architecture Degree Programme of the University of Salerno, Italy. The students, involved in the ‘Italy-South Africa Joint Research Programme, ISARP 2018-2020 – A Social and Spatial Investigation at the Moxomatsi Village, Mpumalanga’ (SSIMM), were supported in the digital reconstruction of three iconic examples of modern architecture located in South America and South Africa, i.e., the Curutchet House (La Plata, Argentina), the Coromandel Estate Manor House (Mpumalanga, South Africa) and the Jooste House (Pretoria, South Africa). Through an Alice-in-Wonderland-style voyage, they had the chance to first analyse the complex inner space of these architectural assets, which both emerge from and fade into the landscape, and then propose their own interpretation through rendered and post-processed imagery.
- Published
- 2023
- Full Text
- View/download PDF
50. The Hybridization of graphic survey techniques in funerary architecture
- Author
-
María José Muñoz-Mora, David Navarro-Moreno, Pedro Jiménez-Vicario, Jose Gabriel Gómez-Carrasco, and Manuel Alejandro Ródenas-López
- Subjects
graphic restitution ,pantheon ,photogrammetry ,rendering ,cemetery ,Architecture ,NA1-9428 ,Architectural drawing and design ,NA2695-2793 - Abstract
Funerary architecture often presents a series of specificities that make it necessary to combine different techniques for its adequate graphic restitution. These conditioning factors are usually present both on the exterior, such as nearby trees and metalwork elements, and in the interior, due to the arrangement of small objects and furniture as well as poor lighting in the rooms. This paper focuses on the methodology followed for the graphic restitution of the Pedreño y Deu family pantheon in the main cemetery of Cartagena (Spain). The pantheon, built in 1875, consists of a circular chapel on the ground floor and a crypt below ground level. At the start of the survey, the building, protected by municipal planning, was in a state of advanced deterioration. The techniques used for the survey of each part are described, as well as the procedure followed to assemble them into a single model. In this work, we have been able to verify that the hybridization of survey techniques is one of the best options for representing architectural heritage in case studies presenting situations of very different natures.
DOI: https://doi.org/10.20365/disegnarecon.30.2023.15
- Published
- 2023