3,115 results for "rendering"
Search Results
2. An interactive visualization tool for the exploration and analysis of multivariate ocean data.
- Author
-
K. G., Preetha, S., Saritha, Jeevan, Jishnu, Sachidanandan, Chinnu, and Maheswaran, P. A.
- Subjects
DEBYE temperatures, MARINE biology, DRIVERLESS cars, MULTIVARIATE analysis, OCEAN - Abstract
Ocean data exhibits great heterogeneity from variances in measuring methods, formats, and quality, making it extremely complicated and diverse due to a variety of data kinds, sources, and study elements. A few examples of data sources are satellites, buoys, ships, self-driving vehicles, and remote systems. The processing of data is made more challenging by the significant regional and temporal variations in oceanic characteristics including temperature, salinity, and currents. This work presents an interactive tool for multivariate ocean parameter visualisation, specifically overlays, based on Python. In ocean data visualisation, overlays are extra visual layers or data points that are layered to improve comprehension over a basic map. Based on the available data and the visualisation goals, these overlays are chosen and blended. Users can customise overlays with this tool, which also supports formatting, 2D and 3D visualisation, and data preparation. In order to reduce artefacts, it uses kriging interpolation for 3D visualisation and a modified version of the ray casting algorithm for representing octree data. By integrating overlays such as bathymetry, currents, temperature, and marine life, users can produce visually appealing and comprehensive depictions of ocean data. This method provides a thorough grasp of intricate marine processes by making it easier to see patterns, trends, and abnormalities in the data. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
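As an illustrative aside (not part of the indexed record): the abstract above names kriging interpolation as the tool's 3D gridding step. The sketch below is a minimal ordinary-kriging interpolator in Python with an assumed spherical variogram; the function name and parameters are inventions for this sketch, not the authors' implementation.

```python
import numpy as np

def ordinary_kriging(xy, values, query, variogram_range=1.0, sill=1.0):
    """Toy ordinary-kriging interpolator with a spherical variogram."""
    def gamma(h):
        # Spherical variogram, clamped to the sill beyond the range.
        h = np.minimum(h / variogram_range, 1.0)
        return sill * (1.5 * h - 0.5 * h**3)

    n = len(xy)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    # Kriging system, bordered with a Lagrange multiplier that enforces
    # the unbiasedness constraint (weights sum to one).
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma(d)
    A[-1, -1] = 0.0
    b = np.ones(n + 1)
    b[:n] = gamma(np.linalg.norm(xy - query, axis=-1))
    w = np.linalg.solve(A, b)[:n]
    return float(w @ values)
```

Because kriging is an exact interpolator, querying at a data point returns that point's value; symmetric configurations recover the mean of the data.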
3. GS‐Octree: Octree‐based 3D Gaussian Splatting for Robust Object‐level 3D Reconstruction Under Strong Lighting.
- Author
-
Li, J., Wen, Z., Zhang, L., Hu, J., Hou, F., Zhang, Z., and He, Y.
- Subjects
- *
SOURCE code , *DEGREES of freedom , *RADIANCE , *LIGHTING , *GEOMETRY - Abstract
The 3D Gaussian Splatting technique has significantly advanced the construction of radiance fields from multi‐view images, enabling real‐time rendering. While point‐based rasterization effectively reduces computational demands for rendering, it often struggles to accurately reconstruct the geometry of the target object, especially under strong lighting conditions. Strong lighting can cause significant color variations on the object's surface when viewed from different directions, complicating the reconstruction process. To address this challenge, we introduce an approach that combines octree‐based implicit surface representations with Gaussian Splatting. Initially, it reconstructs a signed distance field (SDF) and a radiance field through volume rendering, encoding them in a low‐resolution octree. This initial SDF represents the coarse geometry of the target object. Subsequently, it introduces 3D Gaussians as additional degrees of freedom, which are guided by the initial SDF. In the third stage, the optimized Gaussians enhance the accuracy of the SDF, enabling the recovery of finer geometric details compared to the initial SDF. Finally, the refined SDF is used to further optimize the 3D Gaussians via splatting, eliminating those that contribute little to the visual appearance. Experimental results show that our method, which leverages the distribution of 3D Gaussians with SDFs, reconstructs more accurate geometry, particularly in images with specular highlights caused by strong lighting. The source code can be downloaded from https://github.com/LaoChui999/GS-Octree. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
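For readers unfamiliar with the data structure named in the abstract: an octree encoding of an SDF refines only where the surface can pass. The toy class below is an assumption-laden illustration (the class name, refinement test, and fixed depth are all invented here), not the GS-Octree method itself.

```python
import itertools
import numpy as np

class SDFOctree:
    """Toy octree caching a signed distance function (illustrative sketch)."""

    def __init__(self, sdf, center, half, depth):
        self.center, self.half = np.asarray(center, float), float(half)
        self.value = sdf(self.center)
        self.children = None
        # Refine only cells the zero level set can intersect: the surface is
        # reachable from the cell centre only if |sdf| <= half * sqrt(3).
        if depth > 0 and abs(self.value) <= half * np.sqrt(3):
            h = half / 2.0
            self.children = [
                SDFOctree(sdf, self.center + h * np.array(o), h, depth - 1)
                for o in itertools.product((-1.0, 1.0), repeat=3)
            ]

    def query(self, p):
        """Return the cached SDF value of the leaf cell containing p."""
        node = self
        while node.children is not None:
            # Child order follows itertools.product: x is the slowest axis.
            idx = sum((1 << (2 - i)) for i in range(3) if p[i] >= node.center[i])
            node = node.children[idx]
        return node.value
```

A sphere of radius 0.5 stored at depth 4 over [-1, 1]³ keeps the correct sign inside and outside the surface, even though only near-surface cells are subdivided.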
4. CoupNeRF: Property‐aware Neural Radiance Fields for Multi‐Material Coupled Scenario Reconstruction.
- Author
-
Li, Jin, Gao, Yang, Song, Wenfeng, Li, Yacong, Li, Shuai, Hao, Aimin, and Qin, Hong
- Subjects
- *
CONTINUUM mechanics , *SYSTEM identification , *RADIANCE , *PHYSICS - Abstract
Neural Radiance Fields (NeRFs) have achieved significant recognition for their proficiency in scene reconstruction and rendering by utilizing neural networks to depict intricate volumetric environments. Despite considerable research dedicated to reconstructing physical scenes, rare works succeed in challenging scenarios involving dynamic, multi‐material objects. To alleviate this, we introduce CoupNeRF, an efficient neural network architecture that is aware of multiple material properties. This architecture combines physically grounded continuum mechanics with NeRF, facilitating the identification of motion systems across a wide range of physical coupling scenarios. We first reconstruct the specific materials of objects within 3D physical fields to learn material parameters. Then, we develop a method to model the neighbouring particles, enhancing the learning process specifically in regions where material transitions occur. The effectiveness of CoupNeRF is demonstrated through extensive experiments, showcasing its proficiency in accurately coupling and identifying the behavior of complex physical scenes that span multiple physics domains. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
5. Efficient Environment Map Rendering Based on Decomposition.
- Author
-
Wu, Yu‐Ting
- Subjects
- *
PIXELS , *ALGORITHMS , *LIGHTING , *NOISE - Abstract
This paper presents an efficient environment map sampling algorithm designed to render high‐quality, low‐noise images with only a few light samples, making it ideal for real‐time applications. We observe that bright pixels in the environment map produce high‐frequency shading effects, such as sharp shadows and specular highlights, while the rest influence the overall tone of the scene. Building on this insight, our approach differs from existing techniques by categorizing the pixels in an environment map into emissive and non‐emissive regions and developing specialized algorithms tailored to the distinct properties of each region. By decomposing the environment lighting, we ensure that light sources are deposited on bright pixels, leading to more accurate shadows and specular highlights. Additionally, this strategy allows us to exploit the smoothness in the low‐frequency component by rendering a smaller image with more lights, thereby enhancing shading accuracy. Extensive experiments demonstrate that our method significantly reduces shadow artefacts and image noise compared to previous techniques, while also achieving lower numerical errors across a range of illumination types, particularly under limited sample conditions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
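As an illustrative aside: the decomposition the abstract describes — emissive pixels get explicit light samples, the remainder forms a smooth low-frequency component — can be sketched in a few lines. The threshold choice, function name, and luminance-only map are assumptions of this sketch, not the paper's algorithm.

```python
import numpy as np

def decompose_and_sample(lum, n_lights, quantile=0.95, seed=0):
    """Split a (luminance-only) environment map into emissive and
    non-emissive pixels, then importance-sample light positions from
    the emissive part in proportion to luminance (toy sketch)."""
    cutoff = np.quantile(lum, quantile)
    emissive = lum >= cutoff
    low_freq = np.where(emissive, 0.0, lum)          # smooth remainder
    probs = np.where(emissive, lum, 0.0).ravel()
    probs = probs / probs.sum()
    rng = np.random.default_rng(seed)
    idx = rng.choice(probs.size, size=n_lights, p=probs)
    rows, cols = np.unravel_index(idx, lum.shape)
    return list(zip(rows.tolist(), cols.tolist())), low_freq
```

With a single very bright pixel, every light sample lands on it, while the low-frequency remainder keeps only the dim pixels.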
6. Dimensionality Reduction for the Real-Time Light-Field View Synthesis of Kernel-Based Models.
- Author
-
Courteaux, Martijn, Mareen, Hannes, Ramlot, Bert, Lambert, Peter, and Van Wallendael, Glenn
- Abstract
Several frameworks have been proposed for delivering interactive, panoramic, camera-captured, six-degrees-of-freedom video content. However, it remains unclear which framework will meet all requirements the best. In this work, we focus on a Steered Mixture of Experts (SMoE) for 4D planar light fields, which is a kernel-based representation. For SMoE to be viable in interactive light-field experiences, real-time view synthesis is crucial yet unsolved. This paper presents two key contributions: a mathematical derivation of a view-specific, intrinsically 2D model from the original 4D light field model and a GPU graphics pipeline that synthesizes these viewpoints in real time. Configuring the proposed GPU implementation for high accuracy, a frequency of 180 to 290 Hz at a resolution of 2048 × 2048 pixels on an NVIDIA RTX 2080Ti is achieved. Compared to NVIDIA's instant-ngp Neural Radiance Fields (NeRFs) with the default configuration, our light field rendering technique is 42 to 597 times faster. Additionally, allowing near-imperceptible artifacts in the reconstruction process can further increase speed by 40%. A first-order Taylor approximation causes imperfect views with peak signal-to-noise ratio (PSNR) scores between 45 dB and 63 dB compared to the reference implementation. In conclusion, we present an efficient algorithm for synthesizing 2D views at arbitrary viewpoints from 4D planar light-field SMoE models, enabling real-time, interactive, and high-quality light-field rendering within the SMoE framework. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
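As a hedged aside on the representation named above: a Steered Mixture of Experts reconstructs colour as a gate-weighted sum of expert outputs, where the gates are normalized Gaussian kernels. The toy evaluator below uses constant per-kernel colours and invented names; the paper's view-specific 2D derivation and GPU pipeline are far more involved.

```python
import numpy as np

def smoe_eval(x, mus, sigmas, colours):
    """Evaluate a tiny Steered-Mixture-of-Experts model at 2D points x:
    gates are normalized Gaussian kernels, output is the gate-weighted
    sum of constant per-kernel expert colours (toy sketch)."""
    diff = x[:, None, :] - mus[None, :, :]               # (P, K, 2)
    inv = np.linalg.inv(sigmas)                          # (K, 2, 2)
    mahal = np.einsum('pki,kij,pkj->pk', diff, inv, diff)
    w = np.exp(-0.5 * mahal)
    w = w / w.sum(axis=1, keepdims=True)                 # soft gating
    return w @ colours                                   # (P, 3)
```

Querying at a kernel centre far from all other kernels returns (numerically) that kernel's colour, since its gate dominates.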
7. Creating a 3D Mesh in A‐pose from a Single Image for Character Rigging.
- Author
-
Lee, Seunghwan and Liu, C. Karen
- Subjects
- *
COMPUTER vision , *QUANTITATIVE research , *SKELETON , *GEOMETRY - Abstract
Learning‐based methods for 3D content generation have shown great potential to create 3D characters from text prompts, videos, and images. However, current methods primarily focus on generating static 3D meshes, overlooking the crucial aspect of creating animatable 3D meshes. Directly using 3D meshes generated by existing methods to create underlying skeletons for animation presents many challenges because the generated mesh might exhibit geometry artifacts or assume arbitrary poses that complicate the subsequent rigging process. This work proposes a new framework for generating a 3D animatable mesh from a single 2D image depicting the character. We do so by enforcing the generated 3D mesh to assume an A‐pose, which can mitigate the geometry artifacts and facilitate the use of existing automatic rigging methods. Our approach aims to leverage the generative power of existing models across modalities without the need for new data or large‐scale training. We evaluate the effectiveness of our framework with qualitative results, as well as ablation studies and quantitative comparisons with existing 3D mesh generation models. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
8. Automatic Inbetweening for Stroke‐Based Painterly Animation.
- Author
-
Barroso, Nicolas, Fondevilla, Amélie, and Vanderhaeghe, David
- Subjects
- *
ANIMATION (Cinematography) , *ARTISTS - Abstract
Painterly 2D animation, like the paint‐on‐glass technique, is a tedious task performed by skilled artists, primarily using traditional manual methods. Although CG tools can simplify the creation process, previous works often focus on temporal coherence, which typically results in the loss of the handmade look and feel. In contrast to cartoon animation, where regions are typically filled with smooth gradients, stroke‐based stylized 2D animation requires careful consideration of how shapes are filled, as each stroke may be perceived individually. We propose a method to generate intermediate frames using example keyframes and a motion description. This method allows artists to create only one image for every five to ten output images in the animation, while the automatically generated intermediate frames provide plausible inbetween frames. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
9. Dynamic Voxel‐Based Global Illumination.
- Author
-
Cosin Ayerbe, Alejandro, Poulin, Pierre, and Patow, Gustavo
- Subjects
- *
LIGHT sources , *RAY tracing , *POLYGONS , *LIGHTING , *SCALABILITY - Abstract
Global illumination computation in real time has been an objective for Computer Graphics since its inception. Unfortunately, its implementation has challenged up to now the most advanced hardware and software solutions. We propose a real‐time voxel‐based global illumination solution for a single light bounce that handles static and dynamic objects with diffuse materials under a dynamic light source. The combination of ray tracing and voxelization on the GPU offers scalability and performance. Our divide‐and‐win approach, which ray traces static and dynamic objects separately, reduces the re‐computation load with updates of any number of dynamic objects. Our results demonstrate the effectiveness of our approach, allowing the real‐time display of global illumination effects, including colour bleeding and indirect shadows, for complex scenes containing millions of polygons. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
10. Generalized Lipschitz Tracing of Implicit Surfaces.
- Author
-
Bán, Róbert and Valasek, Gábor
- Subjects
- *
RAY tracing , *SPATIAL resolution , *POLYNOMIALS , *PRIOR learning , *CONSERVATIVES , *POLYNOMIAL approximation - Abstract
We present a versatile and robust framework to render implicit surfaces defined by black‐box functions that only provide function value queries. We assume that the input function is locally Lipschitz continuous; however, we presume no prior knowledge of its Lipschitz constants. Our pre‐processing step generates a discrete acceleration structure, a Lipschitz field, that provides data to infer local and directional Lipschitz upper bounds. These bounds are used to compute safe step sizes along rays during rendering. The Lipschitz field is constructed by generating local polynomial approximations to the input function, then bounding the derivatives of the approximating polynomials. The accuracy of the approximation is controlled by the polynomial degree and the granularity of the spatial resolution used during fitting, which is independent from the resolution of the Lipschitz field. We demonstrate that our process can be implemented in a massively parallel way, enabling straightforward integration into interactive and real‐time modelling workflows. Since the construction only requires function value evaluations, the input surface may be represented either procedurally or as an arbitrarily filtered grid of function samples. We query the original implicit representation upon ray trace, as such, we preserve the geometric and topological details of the input as long as the Lipschitz field supplies conservative estimates. We demonstrate our method on both procedural and discrete implicit surfaces and compare its exact and approximate variants. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
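As an illustrative aside: the "safe step sizes along rays" idea above is the classic Lipschitz-bounded sphere-tracing step — if |f| / L bounds the distance to the surface, the ray can safely advance that far. The sketch below uses a single global Lipschitz constant; the paper's contribution is precisely to infer local and directional bounds instead. Names and parameters are assumptions.

```python
import numpy as np

def lipschitz_trace(f, origin, direction, lipschitz,
                    t_max=10.0, eps=1e-4, max_steps=256):
    """Sphere-trace the implicit surface f(x) = 0 using a Lipschitz bound L:
    since |f(p)| / L lower-bounds the distance from p to the surface,
    advancing by that amount can never step through it."""
    o, d = np.asarray(origin, float), np.asarray(direction, float)
    d = d / np.linalg.norm(d)
    t = 0.0
    for _ in range(max_steps):
        val = f(o + t * d)
        if abs(val) < eps:
            return t            # hit: converged onto the surface
        t += abs(val) / lipschitz
        if t > t_max:
            break
    return None                 # miss (or ran out of steps)
```

For the unit sphere (an exact SDF, so L = 1), a ray from (-3, 0, 0) along +x hits at t = 2, and a ray offset far enough never hits.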
11. A Generative Adversarial Network for Upsampling of Direct Volume Rendering Images.
- Author
-
Jin, Ge, Jung, Younhyun, Fulham, Michael, Feng, Dagan, and Kim, Jinman
- Subjects
- *
GENERATIVE adversarial networks , *COMPUTED tomography , *DEEP learning , *DIAGNOSTIC imaging , *ANGIOGRAPHY - Abstract
Direct volume rendering (DVR) is an important tool for scientific and medical imaging visualization. Modern GPU acceleration has made DVR more accessible; however, the production of high‐quality rendered images with high frame rates is computationally expensive. We propose a deep learning method with a reduced computational demand. We leveraged a conditional generative adversarial network (cGAN) to upsample DVR images (a rendered scene), with a reduced sampling rate to obtain similar visual quality to that of a fully sampled method. Our dvrGAN is combined with a colour‐based loss function that is optimized for DVR images where different structures such as skin, bone, etc. are distinguished by assigning them distinct colours. The loss function highlights the structural differences between images, by examining pixel‐level colour, and thus helps identify, for instance, small bones in the limbs that may not be evident with reduced sampling rates. We evaluated our method in DVR of human computed tomography (CT) and CT angiography (CTA) volumes. Our method retained image quality and reduced computation time when compared to fully sampled methods and outperformed existing state‐of‐the‐art upsampling methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
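As a purely hypothetical illustration of the kind of colour-based loss the abstract describes (the paper's actual formulation is not given here): one could combine an L1 colour term with a penalty wherever the dominant colour channel — a proxy for the transfer-function-assigned structure — disagrees between prediction and target. Everything below is invented for this sketch.

```python
import numpy as np

def colour_structure_loss(pred, target, weight=0.5):
    """Hypothetical colour-based loss: mean L1 over RGB plus a penalty on
    pixels whose dominant colour channel differs, so structures tagged
    with distinct colours (skin, bone, ...) are weighted explicitly."""
    l1 = np.abs(pred - target).mean()
    mismatch = (pred.argmax(axis=-1) != target.argmax(axis=-1)).mean()
    return l1 + weight * mismatch
```

Identical images score zero, and swapping a structure's colour channel is penalized far more heavily than a uniform brightness change of the same pixels.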
12. Deep SVBRDF Acquisition and Modelling: A Survey.
- Author
-
Kavoosighafi, Behnaz, Hajisharif, Saghi, Miandji, Ehsan, Baravdish, Gabriel, Cao, Wen, and Unger, Jonas
- Subjects
- *
GENERATIVE artificial intelligence , *MACHINE learning , *REFLECTANCE measurement , *RESEARCH & development , *REFLECTANCE , *DEEP learning - Abstract
Hand in hand with the rapid development of machine learning, deep learning and generative AI algorithms and architectures, the graphics community has seen a remarkable evolution of novel techniques for material and appearance capture. Typically, these machine‐learning‐driven methods and technologies, in contrast to traditional techniques, rely on only a single or very few input images, while enabling the recovery of detailed, high‐quality measurements of bi‐directional reflectance distribution functions, as well as the corresponding spatially varying material properties, also known as Spatially Varying Bi‐directional Reflectance Distribution Functions (SVBRDFs). Learning‐based approaches for appearance capture will play a key role in the development of new technologies that will exhibit a significant impact on virtually all domains of graphics. Therefore, to facilitate future research, this State‐of‐the‐Art Report (STAR) presents an in‐depth overview of the state‐of‐the‐art in machine‐learning‐driven material capture in general, and focuses on SVBRDF acquisition in particular, due to its importance in accurately modelling complex light interaction properties of real‐world materials. The overview includes a categorization of current methods along with a summary of each technique, an evaluation of their functionalities, their complexity in terms of acquisition requirements, computational aspects and usability constraints. The STAR is concluded by looking forward and summarizing open challenges in research and development toward predictive and general appearance capture in this field. A complete list of the methods and papers reviewed in this survey is available at computergraphics.on.liu.se/star_svbrdf_dl/. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
13. Directional Texture Editing for 3D Models.
- Author
-
Liu, Shengqi, Chen, Zhuo, Gao, Jingnan, Yan, Yichao, Zhu, Wenhan, Lyu, Jiangjing, and Yang, Xiaokang
- Subjects
- *
VIDEO editing , *VIDEO processing , *TEXTURE mapping , *SURFACES (Technology) , *PROBLEM solving - Abstract
Texture editing is a crucial task in 3D modelling that allows users to automatically manipulate the surface materials of 3D models. However, the inherent complexity of 3D models and the ambiguous text description lead to the challenge of this task. To tackle this challenge, we propose ITEM3D, a Texture Editing Model designed for automatic 3D object editing according to the text Instructions. Leveraging the diffusion models and the differentiable rendering, ITEM3D takes the rendered images as the bridge between text and 3D representation and further optimizes the disentangled texture and environment map. Previous methods adopted the absolute editing direction, namely score distillation sampling (SDS) as the optimization objective, which unfortunately results in noisy appearances and text inconsistencies. To solve the problem caused by the ambiguous text, we introduce a relative editing direction, an optimization objective defined by the noise difference between the source and target texts, to release the semantic ambiguity between the texts and images. Additionally, we gradually adjust the direction during optimization to further address the unexpected deviation in the texture domain. Qualitative and quantitative experiments show that our ITEM3D outperforms the state‐of‐the‐art methods on various 3D objects. We also perform text‐guided relighting to show explicit control over lighting. Our project page: https://shengqiliu1.github.io/ITEM3D/. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
14. TraM‐NeRF: Tracing Mirror and Near‐Perfect Specular Reflections Through Neural Radiance Fields.
- Author
-
Holland, Leif Van, Bliersbach, Ruben, Müller, Jan U., Stotko, Patrick, and Klein, Reinhard
- Subjects
- *
RAY tracing , *RADIANCE , *MIRRORS , *REFLECTANCE , *AMBIGUITY - Abstract
Implicit representations like neural radiance fields (NeRF) showed impressive results for photorealistic rendering of complex scenes with fine details. However, ideal or near‐perfectly specular reflecting objects such as mirrors, which are often encountered in various indoor scenes, impose ambiguities and inconsistencies in the representation of the reconstructed scene leading to severe artifacts in the synthesized renderings. In this paper, we present a novel reflection tracing method tailored for the involved volume rendering within NeRF that takes these mirror‐like objects into account while avoiding the cost of straightforward but expensive extensions through standard path tracing. By explicitly modelling the reflection behaviour using physically plausible materials and estimating the reflected radiance with Monte‐Carlo methods within the volume rendering formulation, we derive efficient strategies for importance sampling and the transmittance computation along rays from only few samples. We show that our novel method enables the training of consistent representations of such challenging scenes and achieves superior results in comparison to previous state‐of‐the‐art approaches. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
15. Deep and Fast Approximate Order Independent Transparency.
- Author
-
Tsopouridis, Grigoris, Vasilakis, Andreas A., and Fudos, Ioannis
- Subjects
- *
DEEP learning , *MACHINE learning , *SOURCE code , *TRIANGLES , *PIXELS - Abstract
We present a machine learning approach for efficiently computing order independent transparency (OIT) by deploying a lightweight neural network implemented fully on shaders. Our method is fast, requires a small constant amount of memory (depends only on the screen resolution and not on the number of triangles or transparent layers), is more accurate than previous approximate methods, works for every scene without setup and is portable to all platforms running even on commodity GPUs. Our method requires a rendering pass to extract all features that are subsequently used to predict the overall OIT pixel colour with a pre‐trained neural network. We provide a comparative experimental evaluation and shader source code of all methods for reproduction of the experiments. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
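For context on why OIT is hard: exact transparency must sort fragments by depth before "over" compositing, which needs unbounded per-pixel memory — exactly what approximate fixed-memory methods (including the neural predictor above) sidestep. A minimal exact reference compositor, with an assumed (rgb, alpha, depth) fragment format:

```python
import numpy as np

def composite_exact(fragments):
    """Exact transparency reference: sort per-pixel fragments by depth,
    then apply front-to-back 'over' compositing.
    Each fragment is (rgb, alpha, depth)."""
    colour, transmittance = np.zeros(3), 1.0
    for rgb, alpha, _ in sorted(fragments, key=lambda f: f[2]):
        colour = colour + transmittance * alpha * np.asarray(rgb, float)
        transmittance *= 1.0 - alpha
    return colour, transmittance
```

The result is independent of the input order precisely because of the sort — a red fragment at depth 0 over a blue one at depth 1 (both alpha 0.5) always yields (0.5, 0, 0.25) with remaining transmittance 0.25.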
16. Search Me Knot, Render Me Knot: Embedding Search and Differentiable Rendering of Knots in 3D.
- Author
-
Gangopadhyay, Aalok, Gupta, Paras, Sharma, Tarun, Singh, Prajwal, and Raman, Shanmuganathan
- Subjects
- *
RENDERING (Computer graphics) , *TUBE bending , *HOMEOMORPHISMS , *BUDGET , *INVERSE problems - Abstract
We introduce the problem of knot‐based inverse perceptual art. Given multiple target images and their corresponding viewing configurations, the objective is to find a 3D knot‐based tubular structure whose appearance resembles the target images when viewed from the specified viewing configurations. To solve this problem, we first design a differentiable rendering algorithm for rendering tubular knots embedded in 3D for arbitrary perspective camera configurations. Utilizing this differentiable rendering algorithm, we search over the space of knot configurations to find the ideal knot embedding. We represent the knot embeddings via homeomorphisms of the desired template knot, where the weights of an invertible neural network parametrize the homeomorphisms. Our approach is fully differentiable, making it possible to find the ideal 3D tubular structure for the desired perceptual art using gradient‐based optimization. We propose several loss functions that impose additional physical constraints, enforcing that the tube is free of self‐intersection, lies within a predefined region in space, satisfies the physical bending limits of the tube material, and the material cost is within a specified budget. We demonstrate through results that our knot representation is highly expressive and gives impressive results even for challenging target images in both single‐view and multiple‐view constraints. Through extensive ablation study, we show that each proposed loss function effectively ensures physical realizability. We construct a real‐world 3D‐printed object to demonstrate the practical utility of our approach. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
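As an aside on one of the physical constraints listed above: a self-intersection-free tube can be encouraged with a hinge penalty over sampled centreline points — zero when all non-neighbouring samples stay at least one tube diameter apart, positive otherwise. The function below is an invented sketch (the paper's losses operate on the differentiable knot embedding, not a fixed point list).

```python
import numpy as np

def self_intersection_penalty(points, radius, skip=3):
    """Hinge penalty on a sampled tube centreline: positive whenever two
    samples more than `skip` steps apart come closer than one tube
    diameter (2 * radius); zero for an intersection-free embedding."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    i, j = np.triu_indices(len(points), k=skip + 1)
    return float(np.maximum(0.0, 2.0 * radius - d[i, j]).sum())
```

A straight centreline incurs no penalty; folding the last sample back onto the first immediately produces a positive value proportional to the overlap.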
17. Refined tri-directional path tracing with generated light portal.
- Author
-
Wei, Xuchen, Pu, GuiYang, Huo, Yuchi, Bao, Hujun, and Wang, Rui
- Subjects
- *
MONTE Carlo method , *PATH analysis (Statistics) - Abstract
The rendering efficiency of Monte Carlo path tracing often depends on the ease of path construction. For scenes with particularly complex visibility, e.g. where the camera and light sources are placed in separate rooms connected by narrow doorways or windows, it is difficult to construct valid paths using traditional path tracing algorithms such as unidirectional path tracing or bidirectional path tracing. Light portals are a class of methods that assist in sampling direct light paths based on prior knowledge of the scene. They usually require additional manual editing and labelling by the artist or renderer user. Tri-directional path tracing is a sophisticated path tracing algorithm that combines bidirectional path tracing and light portal sampling, but the original work lacks sufficient analysis to demonstrate its effectiveness. In this paper, we propose an automatic light portal generation algorithm based on spatial radiosity analysis that mitigates the cost of manual operations for complex scenes. We also further analyse and improve the light portal-based tri-directional path tracing rendering algorithm, giving a detailed analysis of path construction strategies, algorithm complexity, and the unbiasedness of the Monte Carlo estimation. The experimental results show that our algorithm can accurately locate the light portals with low computational cost and effectively improve the rendering performance of complex scenes. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
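As an aside on the building block behind portal sampling: one samples a point uniformly on the portal rectangle, forms the direction from the shading point through it, and converts the area density into a solid-angle PDF for unbiased Monte Carlo estimation. The sketch below illustrates that standard conversion, not the paper's automatic portal generator; names and parameters are assumptions.

```python
import numpy as np

def sample_portal(shade_point, portal_corner, edge_u, edge_v, rng):
    """Sample a direction through a rectangular light portal (uniform by
    area) and return (direction, solid-angle pdf). The area pdf 1/A is
    converted via pdf_omega = dist^2 / (A * cos_theta)."""
    u, v = rng.random(2)
    p = portal_corner + u * edge_u + v * edge_v
    normal = np.cross(edge_u, edge_v)
    area = np.linalg.norm(normal)
    normal = normal / area
    w = p - shade_point
    dist2 = w @ w
    w = w / np.sqrt(dist2)
    cos_theta = abs(normal @ w)
    pdf = dist2 / (area * cos_theta)
    return w, pdf
```

The returned direction is unit length and the PDF is strictly positive whenever the portal is not viewed edge-on.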
18. Neural Radiance Field Watermarking Based on Invertible Neural Networks.
- Author
-
孙文权, 刘佳, 董炜娜, 陈立峰, and 钮可
- Abstract
Aiming at the copyright problem surrounding 3D models represented implicitly by neural radiance fields, this paper treats the embedding and extraction of neural radiance field watermarks as inverse problems of image transformation, and proposes a scheme for protecting the copyright of neural radiance fields using invertible neural network watermarking. The scheme uses 2D image watermarking technology to safeguard 3D scenes. In the forward pass of the invertible network, the watermark is embedded in the training images of the neural radiance field; in the reverse pass, the watermark is extracted from images rendered by the neural radiance field. This ensures copyright protection for both the neural radiance field and the 3D scene. However, the rendering process of the neural radiance field may lose watermark information. To address this, the paper introduces an image quality enhancement module, which uses a neural network to restore the rendered image before extracting the watermark. At the same time, the watermark is embedded in every training image used to train the neural radiance field, enabling watermark extraction from multiple viewpoints. Experimental results demonstrate that the watermarking scheme effectively achieves copyright protection and confirm the feasibility of the proposed approach. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
19. Bioactive peptides extracted from hydrolyzed animal byproducts for dogs and cats.
- Author
-
Vasconcellos, Ricardo S, Volpato, Josiane A, and Silva, Ingrid C
- Subjects
DIETARY bioactive peptides, SCIENTIFIC literature, FEATHERS, PEPTIDE antibiotics, ANGIOTENSIN converting enzyme, MILK proteins, DOGS, LIVER proteins - Abstract
This article explores the use of bioactive peptides and hydrolyzed proteins from animal byproducts in pet food for dogs and cats. These peptides are produced through enzymatic hydrolysis and offer various health benefits, including prebiotic, antioxidant, anti-inflammatory, immunological, and antihypertensive effects. Animal byproducts like skin, blood, bones, and feathers are commonly used to create these ingredients. The article also discusses different methods of protein hydrolysis, such as chemical, enzymatic, and microbial methods. Overall, the inclusion of bioactive peptides and hydrolyzed proteins in pet food enhances its nutritional value and taste. The article acknowledges the potential benefits of bioactive peptides in areas like gut health, immune function, joint health, antioxidant activity, antimicrobial activity, blood pressure control, and glycemic control, but emphasizes the need for further research. It also mentions the growing popularity of enzymatically hydrolyzed ingredients in the pet food industry and highlights the authors' expertise in animal nutrition. [Extracted from the article]
- Published
- 2024
- Full Text
- View/download PDF
20. Importance Sampling BRDF Derivatives.
- Author
-
Belhe, Yash, Xu, Bing, Bangaru, Sai Praveen, Ramamoorthi, Ravi, and Li, Tzu-Mao
- Subjects
PARTITION functions, REFLECTANCE - Abstract
We propose a set of techniques to efficiently importance sample the derivatives of a wide range of Bidirectional Reflectance Distribution Function (BRDF) models. In differentiable rendering, BRDFs are replaced by their differential BRDF counterparts, which are real-valued and can have negative values. This leads to a new source of variance arising from their change in sign. Real-valued functions cannot be perfectly importance sampled by a positive-valued PDF, and the direct application of BRDF sampling leads to high variance. Previous attempts at antithetic sampling only addressed the derivative with the roughness parameter of isotropic microfacet BRDFs. Our work generalizes BRDF derivative sampling to anisotropic microfacet models, mixture BRDFs, Oren-Nayar, Hanrahan-Krueger, among other analytic BRDFs. Our method first decomposes the real-valued differential BRDF into a sum of single-signed functions, eliminating variance from a change in sign. Next, we importance sample each of the resulting single-signed functions separately. The first decomposition, positivization, partitions the real-valued function based on its sign, and is effective at variance reduction when applicable. However, it requires analytic knowledge of the roots of the differential BRDF, and for it to be analytically integrable too. Our key insight is that the single-signed functions can have overlapping support, which significantly broadens the ways we can decompose a real-valued function. Our product and mixture decompositions exploit this property, and they allow us to support several BRDF derivatives that positivization could not handle. For a wide variety of BRDF derivatives, our method significantly reduces the variance (up to 58× in some cases) at equal computation cost and enables better recovery of spatially varying textures through gradient-descent-based inverse rendering. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
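As an illustrative aside: positivization, as described in the abstract, splits a real-valued integrand at its known root and samples each single-signed part separately, eliminating the variance caused by sign changes. The 1D toy estimator below uses uniform sampling within each part as a stand-in for proper importance sampling proportional to |f|; the function name and interface are inventions of this sketch.

```python
import numpy as np

def positivize_and_sample(f, root, lo, hi, n, seed=0):
    """Positivization sketch: f changes sign once at `root`, so split the
    domain into the negative part [lo, root] and positive part [root, hi]
    and Monte-Carlo estimate the integral of each single-signed piece."""
    rng = np.random.default_rng(seed)
    xs_neg = rng.uniform(lo, root, n)
    xs_pos = rng.uniform(root, hi, n)
    est_neg = f(xs_neg).mean() * (root - lo)
    est_pos = f(xs_pos).mean() * (hi - root)
    return est_neg + est_pos
```

For f(x) = x on [-1, 2] (root at 0, exact integral 1.5), neither sub-estimator ever mixes signs, so no variance comes from cancellation between positive and negative samples.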
21. Science and Technology of Fats and Lipids
- Author
-
Athira, K. R., Sifana, P. I., Menon, Sajith, Thomas, Sabu, editor, Hosur, Mahesh, editor, Pasquini, Daniel, editor, and Jose Chirayil, Cintil, editor
- Published
- 2024
- Full Text
- View/download PDF
22. Applications and Limitations of Machine Learning in Computer Graphics
- Author
-
Fu, Chengyu, Luo, Xun, Editor-in-Chief, Almohammedi, Akram A., Series Editor, Chen, Chi-Hua, Series Editor, Guan, Steven, Series Editor, Pamucar, Dragan, Series Editor, and Ahmad, Badrul Hisham, editor
- Published
- 2024
- Full Text
- View/download PDF
23. HaLo‐NeRF: Learning Geometry‐Guided Semantics for Exploring Unconstrained Photo Collections.
- Author
-
Dudai, Chen, Alper, Morris, Bezalel, Hana, Hanocka, Rana, Lang, Itai, and Averbuch‐Elor, Hadar
- Subjects
- *
LANGUAGE models , *SEMANTICS , *METADATA - Abstract
Internet image collections containing photos captured by crowds of photographers show promise for enabling digital exploration of large‐scale tourist landmarks. However, prior works focus primarily on geometric reconstruction and visualization, neglecting the key role of language in providing a semantic interface for navigation and fine‐grained understanding. In more constrained 3D domains, recent methods have leveraged modern vision‐and‐language models as a strong prior of 2D visual semantics. While these models display an excellent understanding of broad visual semantics, they struggle with unconstrained photo collections depicting such tourist landmarks, as they lack expert knowledge of the architectural domain and fail to exploit the geometric consistency of images capturing multiple views of such scenes. In this work, we present a localization system that connects neural representations of scenes depicting large‐scale landmarks with text describing a semantic region within the scene, by harnessing the power of SOTA vision‐and‐language models with adaptations for understanding landmark scene semantics. To bolster such models with fine‐grained knowledge, we leverage large‐scale Internet data containing images of similar landmarks along with weakly‐related textual information. Our approach is built upon the premise that images physically grounded in space can provide a powerful supervision signal for localizing new concepts, whose semantics may be unlocked from Internet textual metadata with large language models. We use correspondences between views of scenes to bootstrap spatial understanding of these semantics, providing guidance for 3D‐compatible segmentation that ultimately lifts to a volumetric scene representation. To evaluate our method, we present a new benchmark dataset containing large‐scale scenes with ground‐truth segmentations for multiple semantic concepts. 
Our results show that HaLo‐NeRF can accurately localize a variety of semantic concepts related to architectural landmarks, surpassing the results of other 3D models as well as strong 2D segmentation baselines. Our code and data are publicly available at https://tau-vailab.github.io/HaLo-NeRF/. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
24. Multimodal perception of digital protective materials.
- Author
-
BOCANCEA, VICTORIA, MARIN, IRINA ELENA, and LOGHIN, CARMEN MARIA
- Subjects
VIRTUAL prototypes ,PADS & protectors (Textiles) ,LIKERT scale ,DIGITAL images ,CLOTHING industry ,3-D animation ,TECHNICAL textiles - Abstract
Copyright of Industria Textila is the property of Institutul National de Cercetare-Dezvoltare pentru Textile si Pielarie and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2024
- Full Text
- View/download PDF
25. Development of a Plugin for Visualizing Structural Diagrams of Calculators Based on a Textual Description of Harmonic Transformation Algorithms.
- Author
-
І., Процько and В., Теслюк
- Subjects
COMPUTER engineering ,HARTLEY transforms ,REAL numbers ,DATA visualization ,HARMONIC functions - Abstract
Context. In many areas of science and technology, the numerical solution of problems is not enough for the further development of the implementation of the obtained results. Among the existing information visualization approaches, the one that allows you to effectively reveal unstructured actionable ideas, generalize or simplify the analysis of the received data is chosen. The results of visualization of generalized structural diagrams based on the textual description of the algorithm clearly reflect the interaction of its parts, which is important at the system engineering stage of computer design. Objective of the study is the analysis and software implementation of structure visualization using the example of discrete harmonic transformation calculators obtained as a result of the synthesis of an algorithm based on cyclic convolutions with the possibility of extending the structure visualization to other computational algorithms. Method. The generalized scheme of the synthesis of algorithms of fast harmonic transformations in the form of a set of cyclic convolution operations on the combined sequences of input data and the coefficients of the harmonic transformation function with their visualization in the form of a generalized structural diagram of the calculator. The results. The result of the work is a software implementation of the visualization of generalized structural diagrams for the synthesized algorithms of cosine and Hartley transformations, which visually reflect the interaction of the main blocks of the computer. The software implementation of computer structure visualization is made in TypeScript using the Phaser 3 framework. Conclusions. The work considers and analyzes the developed software implementation of visualization of the general structure of the calculator for fast algorithms of discrete harmonic transformations in the domain of real numbers, obtained as a result of the synthesis of the algorithm based on cyclic convolutions. 
The visualized variants of the calculators' structural diagrams clearly reflect the interaction of their parts and make it possible to evaluate each variant of the computing algorithm during the design process. [ABSTRACT FROM AUTHOR]
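Since the synthesized algorithms are built from cyclic convolution operations, a minimal sketch of that core primitive may help (in Python with NumPy rather than the authors' TypeScript/Phaser implementation); the FFT route follows from the circular convolution theorem.

```python
import numpy as np

def cyclic_convolution(a, b):
    """Circular convolution of two equal-length real sequences via the FFT."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    assert a.shape == b.shape
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def cyclic_convolution_direct(a, b):
    """Direct O(n^2) definition, for cross-checking the FFT version."""
    n = len(a)
    return np.array([sum(a[j] * b[(k - j) % n] for j in range(n))
                     for k in range(n)])

a = [1.0, 2.0, 3.0, 4.0]
b = [0.5, -1.0, 0.25, 2.0]
print(cyclic_convolution(a, b))   # matches the direct definition
```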
- Published
- 2024
- Full Text
- View/download PDF
26. High performance GPU graphics API abstraction layer in C# for real-time graphics.
- Author
-
Szabó, Dávid and Illés, Dr. Zoltán
- Subjects
GRAPHICAL user interfaces ,COMPILERS (Computer programs) ,GRAPHICS processing units ,RENDERING (Computer graphics) - Abstract
Real-time rendering is the technique which allows us to have graphical applications in our everyday life, whether it is a 3D game or a tool with graphical user interface. Nowadays graphics rendering is handled by the GPU (Graphics Processing Unit) in our device. There are many layers of abstraction above the programming of GPUs through libraries and graphics engines, though the most low-level way of accessing a GPU in user-mode applications is using a Graphics API. Due to the need for high performance and low-level capabilities usually these APIs are used from C or C++, but we realized the need to utilize these APIs in higher-level languages as well. In our approach we're using the .NET C# language for developing multi-platform real-time graphical applications instead of the C or C++ languages. Using the modern .NET environment, we're able to use Graphics APIs for rendering onto common .NET UI Frameworks while consuming all our previously implemented C# libraries and .NET technologies in the same application. To maintain compatibility with multiple platforms we're developing a library system allowing the use of different Graphics APIs from the same C# source-code. The library system contains a Graphics API abstraction layer with multiple Graphics API implementations of this layer in C# and a C# to shader language compiler for cross-API shader development in C#. In this paper, we're proposing our considerations for implementing a library to be able to use the Vulkan and OpenGL APIs through a single C# codebase. We provide solutions for multi-platform rendering and dealing with the low-level challenges of using the two deeply different APIs, while maintaining performance capable of real-time rendering. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
27. VirtualLoc: Large-scale Visual Localization Using Virtual Images.
- Author
-
YUAN XIONG, JINGRU WANG, and ZHONG ZHOU
- Abstract
Robust and accurate camera pose estimation is fundamental in computer vision. Learning-based regression approaches acquire six-degree-of-freedom camera parameters accurately from visual cues of an input image. However, most are trained on street-view and landmark datasets. These approaches generalize poorly to downward-looking use cases, such as the calibration of surveillance cameras and unmanned aerial vehicles. Moreover, reference images captured from the real world are rare and expensive, and their diversity is not guaranteed. In this article, we address the problem of using alternative virtual images for visual localization training. This work has the following principal contributions: First, we present a new challenging localization dataset containing six reconstructed large-scale three-dimensional scenes, 10,594 calibrated photographs with condition changes, and 300k virtual images with pixelwise labeled depth, relative surface normal, and semantic segmentation. Second, we present a flexible multi-feature fusion network trained on virtual image datasets for robust image retrieval. Third, we propose an end-to-end confidence map prediction network for feature filtering and pose estimation. We demonstrate that large-scale rendered virtual images are beneficial to visual localization. Using virtual images can solve the diversity problem of real images and leverage labeled multi-feature data for deep learning. Experimental results show that our method achieves remarkable performance surpassing state-of-the-art approaches. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
28. Critical Archaeology in the Digital Age: Proceedings of the 12th IEMA Visiting Scholar’s Conference
- Subjects
Archaeology ,Garstki ,Digital ,Virtual ,Data ,Rendering ,Education - Abstract
Every part of archaeological practice is intimately tied to digital technologies, but how deeply do we really understand the ways these technologies impact the theoretical trends in archaeology, how these trends affect the adoption of these technologies, or how the use of technology alters our interactions with the human past? This volume suggests a critical approach to archaeology in a digital world, a purposeful and systematic application of digital tools in archaeology. This is a call to pay attention to your digital tools, to be explicit about how you are using them, and to understand how they work and impact your own practice. The chapters in this volume demonstrate how this critical, reflexive approach to archaeology in the digital age can be accomplished, touching on topics that include 3D data, predictive and procedural modelling, digital publishing, digital archiving, public and community engagement, ethics, and global sustainability. The scale and scope of this research demonstrates how necessary it is for all archaeological practitioners to approach this digital age with a critical perspective and to be purposeful in our use of digital technologies.
- Published
- 2022
29. Modeling Practical Multi-Center-of-Projection Using Ellipsoid
- Author
-
Soohyun Lee, Junyoung Yoon, and Joo Ho Lee
- Subjects
Projection geometry ,rendering ,scene contraction ,Electrical engineering. Electronics. Nuclear engineering ,TK1-9971 - Abstract
Traditional 3D projection models, such as perspective and orthographic projection, are limited to two types of projective ray fields: rays passing through a single point and parallel rays. In this paper, we introduce an ellipsoid-based 3D projection model to overcome the sparsity of 3D projections. Our ellipsoidal 3D projection model comprises an ellipsoid and an axis-aligned geometry such as a line or a plane. By linearly mapping these two geometries along their principal axes, our model enables us to explore the continuous domain of projective ray fields while taking advantage of the anisotropy in ellipsoids. We introduce the intrinsic characteristic of our projection field, called the ellipse property, that enables testing isomorphism with other projection models. We prove the difference between ours and the catadioptric projection model employing an elliptic mirror. In addition, we propose a perspectivity metric for intuitive control over the parameter space. We present both forward and backward projections of our model, demonstrating its applicability across several visual applications, ranging from image synthesis to scene reconstruction.
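As a loose illustration of a continuous family of projective ray fields (a toy linear blend, not the ellipsoid construction the paper actually proposes), one can interpolate ray origins and directions between the perspective and orthographic extremes; all names and shapes here are assumptions:

```python
import numpy as np

def ray_field(pixels, cam_center, view_dir, s):
    """Toy continuum of projective ray fields.

    pixels:  (N, 3) sample points on the image plane
    s = 0.0 -> all rays share one origin (perspective, a single point)
    s = 1.0 -> origins spread to the pixels and directions become parallel
               (orthographic)
    Intermediate s gives a dense family of ray fields in between.
    """
    origins = (1.0 - s) * cam_center + s * pixels
    dirs = (1.0 - s) * (pixels - cam_center) + s * np.broadcast_to(view_dir, pixels.shape)
    dirs = dirs / np.linalg.norm(dirs, axis=-1, keepdims=True)
    return origins, dirs

pixels = np.array([[x, y, 1.0] for x in (-1.0, 0.0, 1.0) for y in (-1.0, 1.0)])
C = np.array([0.0, 0.0, -2.0])   # hypothetical camera center
v = np.array([0.0, 0.0, 1.0])    # hypothetical viewing direction

o0, d0 = ray_field(pixels, C, v, 0.0)   # perspective: shared origin
o1, d1 = ray_field(pixels, C, v, 1.0)   # orthographic: parallel rays
```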
- Published
- 2024
- Full Text
- View/download PDF
30. Real-Time Monte Carlo Denoising With Adaptive Fusion Network
- Author
-
Junmin Lee, Seunghyun Lee, Min Yoon, and Byung Cheol Song
- Subjects
Image processing ,rendering ,real-time de-noising ,Electrical engineering. Electronics. Nuclear engineering ,TK1-9971 - Abstract
Real-time Monte Carlo denoising aims to denoise a 1spp-rendered image with a limited time budget. Many recent techniques for real-time Monte Carlo denoising utilize temporal accumulation (TA) as a pre-processing step to improve the temporal stability of successive frames and increase the effective spp. However, existing techniques using TA suffer significant performance degradation when TA does not work well. In addition, their performance deteriorates in dynamic scenes because pixel information of the current frame cannot be sufficiently utilized due to the pixel averaging effect between temporally adjacent frames. To solve this problem, this paper proposes a framework that utilizes both 1spp images and temporally accumulated 1spp (TA-1spp) images. First, the multi-scale kernel prediction module estimates kernel maps for filtering 1spp images and TA-1spp images, respectively. Then, the filtered images are fused so that the advantages of 1spp and TA-1spp images create synergy. Also, the remaining noise is removed through the refinement module and fine details are reconstructed, improving model flexibility beyond using only the kernel prediction module. As a result, we achieve better quantitative and qualitative performance while running 39% faster than state-of-the-art (SOTA) real-time Monte Carlo denoisers.
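Temporal accumulation as used by such denoisers is, at its core, an exponential moving average of successive frames. The sketch below (hypothetical, omitting the motion-vector reprojection and disocclusion rejection a real pipeline needs) shows why TA increases the effective sample count on a static scene:

```python
import numpy as np

def temporal_accumulate(history, current, alpha=0.2):
    """Blend the current 1spp frame into the accumulated history (EMA).

    A real denoiser first reprojects `history` with motion vectors and
    rejects disoccluded pixels; this sketch skips both steps.
    """
    return (1.0 - alpha) * history + alpha * current

# Averaging noisy frames of a static scene shrinks the per-pixel noise.
rng = np.random.default_rng(1)
truth = 0.5                                   # true radiance of every pixel
hist = truth + 0.3 * rng.standard_normal(1000)
for _ in range(50):
    frame = truth + 0.3 * rng.standard_normal(1000)  # fresh 1spp frame
    hist = temporal_accumulate(hist, frame)

print(hist.std())  # well below the per-frame noise of 0.3
```

On a dynamic scene this same averaging blurs across frames, which is exactly the failure mode the paper's 1spp/TA-1spp fusion targets.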
- Published
- 2024
- Full Text
- View/download PDF
31. Wandering Architecture Through the Looking Glass of Digital Representation: An Expeditious Teaching Experience in Understanding and Modelling Modern Architecture
- Author
-
Anna Sanseverino, Victoria Ferraris, and Carla Ferreyra
- Subjects
le corbusier ,coromandel estate ,curutchet house ,jooste house ,rendering ,Psychology ,BF1-990 ,Visual arts ,N1-9211 - Abstract
The present work focuses on an expeditious teaching experience of Architecture and 3D modelling aimed at the ‘Architectural Drawing II’ and ‘Computer Graphics’ students of the Building Engineering-Architecture Degree Programme of the University of Salerno, Italy. The students, involved in the ‘Italy-South Africa Joint Research Programme, ISARP 2018-2020 – A Social and Spatial Investigation at the Moxomatsi Village, Mpumalanga’ (SSIMM), were supported in the digital reconstruction of three iconic examples of modern architecture located in South America and South Africa, i.e., the Curutchet House (La Plata, Argentina), the Coromandel Estate Manor House (Mpumalanga, South Africa) and the Jooste House (Pretoria, South Africa). Through an Alice-in-Wonderland-style voyage, they had the chance to first analyse the complex inner space of these architectural assets that both emerge from and fade into the landscape and then propose their own interpretation through rendered and post-processed imagery.
- Published
- 2023
- Full Text
- View/download PDF
32. Development of utility pet soap utilizing rendered fat from deserted poultry sleeves
- Author
-
Gangwar, Mukesh, Kumar, Rajiv Ranjan, Mendiratta, S.K., Biswas, Ashim Kumar, and Chand, Sagar
- Published
- 2023
- Full Text
- View/download PDF
33. Res-NeuS: Deep Residuals and Neural Implicit Surface Learning for Multi-View Reconstruction.
- Author
-
Wang, Wei, Gao, Fengjiao, and Shen, Yongliang
- Subjects
- *
IMPLICIT learning , *SURFACE reconstruction , *GEOMETRIC surfaces - Abstract
Surface reconstruction using neural networks has proven effective in reconstructing dense 3D surfaces through image-based neural rendering. Nevertheless, current methods are challenging when dealing with the intricate details of large-scale scenes. The high-fidelity reconstruction performance of neural rendering is constrained by the view sparsity and structural complexity of such scenes. In this paper, we present Res-NeuS, a method combining ResNet-50 and neural surface rendering for dense 3D reconstruction. Specifically, we present appearance embeddings: ResNet-50 is used to extract the appearance depth features of an image to further capture more scene details. We interpolate points near the surface and optimize their weights for the accurate localization of 3D surfaces. We introduce photometric consistency and geometric constraints to optimize 3D surfaces and eliminate geometric ambiguity existing in current methods. Finally, we design a 3D geometry automatic sampling to filter out uninteresting areas and reconstruct complex surface details in a coarse-to-fine manner. Comprehensive experiments demonstrate Res-NeuS's superior capability in the reconstruction of 3D surfaces in complex, large-scale scenes, and the Chamfer distance of the reconstructed 3D model is 0.4 times that of general neural rendering 3D reconstruction methods and 0.6 times that of traditional 3D reconstruction methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
34. Neural Path Sampling for Rendering Pure Specular Light Transport.
- Author
-
Yu, Rui, Dong, Yue, Kong, Youkang, and Tong, Xin
- Subjects
- *
RAY tracing - Abstract
Multi‐bounce, pure specular light paths produce complex lighting effects, such as caustics and sparkle highlights, which are challenging to render due to their sparse and diverse nature. We introduce a learning‐based method for the efficient rendering of pure specular light transport. The key idea is training a neural network to model the distribution of all specular light paths between pairs of endpoints for one specular object. To achieve this, for each object, our method models the distribution of sparse and diverse specular light paths between two endpoints using smooth 2D maps of ray directions from one endpoint and represents these maps with a 2D convolutional network. We design a training scheme to efficiently sample specular light paths from the scene and train the network. Once trained, our method predicts specular light paths for a given pair of endpoints using the network and employs root‐finding‐based algorithms for rendering the specular light transport. Experimental results demonstrate that our method generates high‐quality results, supports dynamic lighting and moving objects within the scene, and significantly enhances the rendering speed of existing techniques. [ABSTRACT FROM AUTHOR]
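As a stand-in for the root-finding step, consider the simplest specular problem: finding the single reflection point on a planar mirror by driving the derivative of the path length to zero (Fermat's principle) with Newton's method. This toy setup is our illustration, not the paper's algorithm, and the answer can be verified against the mirror-image construction:

```python
import numpy as np

# Endpoints on the same side of a planar mirror (the plane y = 0).
A = np.array([0.0, 1.0])
B = np.array([3.0, 2.0])

def path_len_deriv(x):
    """d/dx of |A - P| + |P - B| with P = (x, 0): zero at the specular point."""
    la = np.hypot(x - A[0], A[1])
    lb = np.hypot(B[0] - x, B[1])
    return (x - A[0]) / la - (B[0] - x) / lb

# Newton iteration with a numerical second derivative.
x = 1.5  # initial guess (a learned sampler would supply this)
for _ in range(20):
    h = 1e-6
    d2 = (path_len_deriv(x + h) - path_len_deriv(x - h)) / (2 * h)
    x -= path_len_deriv(x) / d2

# Check against the mirror-image construction: reflecting B to (3, -2) and
# intersecting the line A -> B' with the mirror gives x = 1.
print(x)  # ≈ 1.0
```

Multi-bounce paths on curved geometry turn this scalar solve into a coupled system with many sparse, diverse roots, which is why a learned distribution over seed paths pays off.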
- Published
- 2024
- Full Text
- View/download PDF
35. State of the Art in Efficient Translucent Material Rendering with BSSRDF.
- Author
-
Liang, Shiyu, Gao, Yang, Hu, Chonghao, Zhou, Peng, Hao, Aimin, Wang, Lili, and Qin, Hong
- Subjects
- *
REFLECTANCE - Abstract
Sub‐surface scattering is always an important feature in translucent material rendering. When light travels through optically thick media, its transport within the medium can be approximated using diffusion theory, and is appropriately described by the bidirectional scattering‐surface reflectance distribution function (BSSRDF). BSSRDF methods rely on assumptions about object geometry and light distribution in the medium, which limits their applicability to general participating media problems. However, because path tracing incurs a high computational cost, BSSRDF methods are often favoured for their suitability for real‐time applications. We review these methods and discuss the most recent breakthroughs in this field. We begin by summarizing various BSSRDF models and then implement most of them in a 2D searchlight problem to demonstrate their differences. We focus on acceleration methods using BSSRDF, which we categorize into two primary groups: pre‐computation and texture methods. Then we go through some related topics, including applications and advanced areas where BSSRDF is used, as well as problems that are sometimes important yet are ignored in sub‐surface scattering estimation. In the end of this survey, we point out remaining constraints and challenges, which may motivate future work to facilitate sub‐surface scattering. [ABSTRACT FROM AUTHOR]
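One model such surveys typically start from is the classical dipole of Jensen et al. [2001]; the sketch below evaluates its diffuse reflectance profile R_d(r), with the standard constants written from memory and worth double-checking against the survey:

```python
import numpy as np

def dipole_Rd(r, sigma_a, sigma_s_prime, eta=1.3):
    """Classical dipole diffuse reflectance profile R_d(r) (Jensen et al. 2001)."""
    sigma_t_prime = sigma_a + sigma_s_prime       # reduced extinction
    alpha_prime = sigma_s_prime / sigma_t_prime   # reduced albedo
    sigma_tr = np.sqrt(3.0 * sigma_a * sigma_t_prime)
    # Diffuse Fresnel reflectance fit and the dipole boundary constant A.
    F_dr = -1.440 / eta**2 + 0.710 / eta + 0.668 + 0.0636 * eta
    A = (1.0 + F_dr) / (1.0 - F_dr)
    z_r = 1.0 / sigma_t_prime                # real source depth
    z_v = z_r * (1.0 + 4.0 * A / 3.0)        # virtual (mirrored) source height
    d_r = np.sqrt(r**2 + z_r**2)
    d_v = np.sqrt(r**2 + z_v**2)
    return alpha_prime / (4.0 * np.pi) * (
        z_r * (sigma_tr * d_r + 1.0) * np.exp(-sigma_tr * d_r) / d_r**3
        + z_v * (sigma_tr * d_v + 1.0) * np.exp(-sigma_tr * d_v) / d_v**3
    )

r = np.linspace(0.01, 5.0, 100)
Rd = dipole_Rd(r, sigma_a=0.01, sigma_s_prime=1.0)
# The profile is positive and falls off monotonically with radius, which is
# exactly the smooth blur that gives translucent materials their soft look.
```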
- Published
- 2024
- Full Text
- View/download PDF
36. Real‐time Terrain Enhancement with Controlled Procedural Patterns.
- Author
-
Grenier, C., Guérin, É., Galin, É., and Sauvage, B.
- Subjects
- *
ARTIST-model relationships , *RELIEF models , *LANDFORMS , *GEOMETRIC modeling , *EROSION - Abstract
Assisting the authoring of virtual terrains is a perennial challenge in the creation of convincing synthetic landscapes. Particularly, there is a need for augmenting artist‐controlled low‐resolution models with consistent relief details. We present a structured noise that procedurally enhances terrains in real time by adding spatially varying erosion patterns. The patterns can be cascaded, i.e. narrow ones are nested into large ones. Our model builds upon the Phasor noise, which we adapt to the specific characteristics of terrains (water flow, slope orientation). Relief details correspond to the underlying terrain characteristics and align with the slope to preserve the coherence of generated landforms. Moreover, our model allows for artist control, providing a palette of control maps, and can be efficiently implemented in graphics hardware, thus allowing for real‐time synthesis and rendering, therefore permitting effective and intuitive authoring. [ABSTRACT FROM AUTHOR]
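The idea of slope-aligned procedural detail can be sketched crudely (a plain sinusoid steered by the terrain gradient, standing in for the paper's adapted Phasor noise; the base terrain, frequency, and amplitude below are arbitrary assumptions):

```python
import numpy as np

n = 128
y, x = np.mgrid[0:n, 0:n] / n
height = 0.5 * np.sin(2 * np.pi * x) + 0.3 * y   # toy base terrain

# Downhill (aspect) angle from the heightfield gradient.
gy, gx = np.gradient(height)
theta = np.arctan2(gy, gx)

# A stripe pattern whose phase advances along the downhill direction, so
# crests run across the slope like erosion gullies.
freq = 24.0
phase = 2 * np.pi * freq * (x * np.cos(theta) + y * np.sin(theta))
detail = 0.01 * np.sin(phase)                    # small relief amplitude

enhanced = height + detail
```

The real model replaces the raw sinusoid with a Phasor-noise field, which keeps the oscillation coherent where the orientation varies and is what makes artifact-free real-time synthesis possible.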
- Published
- 2024
- Full Text
- View/download PDF
37. WebGL vs. WebGPU: A Performance Analysis for Web 3.0.
- Author
-
Chickerur, Satyadhyan, Balannavar, Sankalp, Hongekar, Pranali, Prerna, Aditi, and Jituri, Soumya
- Subjects
HETEROGENEOUS computing ,WEB-based user interfaces ,USER experience ,INTERNETWORKING - Abstract
This study investigates web 3.0 heterogeneous computing with WebGL, WebGPU, and IPFS. The primary focus is on the benefits of utilising these technologies to enhance the functionality and performance of web 3.0 applications. The study investigates web 3.0 as it currently exists and the constraints that developers face due to limited graphics, computational, and storage capabilities. According to the findings, incorporating WebGL and WebGPU can considerably improve user experience, speed, efficacy, and decentralization. Finally, this study summarizes the importance of continuing research in this subject, particularly in relation to platform interoperability and the future prospects of heterogeneous computing on web 3.0 via graphical APIs. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
38. Big Data Analysis in Computer Graphics
- Author
-
РОМАНЮК, О. Н., ПАВЛОВ, С. В., БОБКО, О. Л., ЗАВАЛЬНЮК, Є. К., and РЕШЕТНІК, О. О.
- Abstract
In this article, an overview of the aspects of big data analysis and representation in computer graphics is presented, creating new prospects for the development and improvement of applications for processing graphic information, visualization, and simulation. Thanks to advancements in data processing and analysis technologies, computer graphics can become even more realistic, interactive, and efficient. Data can come from various sources, including 3D scanning, modeling, sensors, video cameras, games, and simulations. Storing large volumes of graphic data requires effective solutions such as distributed file systems, databases, and cloud services. The review analysis covers the processing of big data, including machine learning, image recognition algorithms, parallel computing, and resource optimization. Special attention is paid to the challenges and prospects of using big data in computer graphics, which includes improving the quality of graphic data analysis, optimizing the rendering of extremely large images, and integration with third-party systems. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
39. Methane Production from a Rendering Waste Covered Anaerobic Digester: Greenhouse Gas Reduction and Energy Production.
- Author
-
Lovanh, Nanh, Loughrin, John, Ruiz-Aguilar, Graciela, and Sistani, Karamat
- Subjects
- *
DIGESTER gas , *POULTRY processing plants , *GREENHOUSE gases , *SEWAGE lagoons , *ANIMAL waste , *UPFLOW anaerobic sludge blanket reactors , *ANAEROBIC digestion - Abstract
Livestock wastes can serve as the feedstock for biogas production (mainly methane) that could be used as an alternative energy source. The green energy derived from animal wastes is considered to be carbon neutral and offsets the emissions generated from fossil fuels. In this study, an evaluation of methane production from anaerobic digesters utilizing different livestock residues (e.g., poultry rendering wastewater and dairy manure) was carried out. An anaerobic continuous flow system (15 million gallons, polyethylene-covered) subjected to natural conditions (i.e., high flow rate, seasonal temperatures, etc.) containing poultry rendering wastewater was set up to evaluate methane potential and energy production. A parallel pilot-scale plug-flow anaerobic digestion system (9 m3) was also set up to test different feedstocks and operating parameters. Biogas production was sampled and monitored by gas chromatography over several months of operation. The results showed that methane production increased with temperature and varied with the type of feedstock utilized. The covered rendering wastewater lagoon achieved upwards of 80% (v/v) methane content. The rates of methane production were 0.0478 g per g of COD for the poultry rendering wastewater and 0.0141 g per g of COD for dairy manure as feedstock. Hence, a poultry processing plant with a rendering wastewater flow rate of about 4.5 million liters per day has the potential to capture about two million kilograms of methane for energy production per year from a waste retention pond, potentially reducing global warming potential by about 50,000 tons of CO2 equivalent annually. [ABSTRACT FROM AUTHOR]
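The closing figure is easy to sanity-check: roughly 2 million kg of captured CH4 per year, at a 100-year global warming potential of about 25 (the abstract does not state which factor it uses; 25 is a commonly used value), gives the quoted ~50,000 t CO2 equivalent:

```python
methane_kg_per_year = 2_000_000   # captured CH4, from the abstract
gwp_methane = 25                  # assumed 100-year global warming potential

co2e_tonnes = methane_kg_per_year * gwp_methane / 1000.0  # kg -> metric tons
print(co2e_tonnes)  # 50000.0 t CO2e, matching the abstract's figure
```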
- Published
- 2023
- Full Text
- View/download PDF
40. Path from photorealism to perceptual realism
- Author
-
Zhong, Fangcheng and Mantiuk, Rafal
- Subjects
3D display ,computer graphics ,mixed reality ,perceptual realism ,perceptually realistic graphics ,rendering ,virtual reality - Abstract
Photorealism in computer graphics - rendering images that appear as realistic as photographs - has matured to the point that it is now widely used in industry. With emerging 3D display technologies, the next big challenge in graphics is to achieve Perceptual Realism - producing virtual imagery that is perceptually indistinguishable from real-world 3D scenes. Such a significant upgrade in the level of realism offers highly immersive and engaging experiences that have the potential to revolutionise numerous aspects of life and society, including entertainment, social networks, education, business, research, engineering, and design. While perceptual realism puts strict requirements on the quality of reproduction, the virtual scene does not have to be identical in light distributions to its physical counterpart to be perceptually realistic, providing that it is visually indistinguishable to human eyes. Due to the limitations of human vision, a significant improvement in perceptual realism can, in principle, be achieved by fulfilling the essential visual requirements with sufficient qualities and without having to reconstruct the physically accurate distribution of lights. In this dissertation, we start by discussing the capabilities and limits of the human visual system, which serves as a basis for the analysis of the essential visual requirements for perceptual realism. Next, we introduce a Perceptually Realistic Graphics (PRG) pipeline consisting of the acquisition, representation, and reproduction of the plenoptic function of a 3D scene. Finally, we demonstrate that taking advantage of the limits and mechanisms of the human visual system can significantly improve this pipeline. Specifically, we present three approaches to push the quality of virtual imagery towards perceptual realism. 
First, we introduce DiCE, a real-time rendering algorithm that exploits the binocular fusion mechanism of the human visual system to boost the perceived local contrast of stereoscopic displays. The method was inspired by an established model of binocular contrast fusion. To optimise the experience of binocular fusion, we proposed and empirically validated a rivalry-prediction model that better controls rivalry. Next, we introduce Dark Stereo, another real-time rendering algorithm that facilitates depth perception from binocular depth cues for stereoscopic displays, especially those under low luminance. The algorithm was designed based on a proposed model of stereo constancy that predicts the precision of binocular depth cues for a given contrast and luminance. Both DiCE and Dark Stereo have been experimentally demonstrated to be effective in improving realism. Their real-time performance also makes them readily integrable into any existing VR rendering pipeline. Nonetheless, only improving rendering is not sufficient to meet all the visual requirements for perceptual realism. The overall fidelity of a typical stereoscopic VR display is still confined by its limited dynamic range, low spatial resolution, optical aberrations, and vergence-accommodation conflicts. To push the limits of the overall fidelity, we present a High-Dynamic-Range Multi-Focal Stereo display (HDR-MF-S display) with an end-to-end imaging and rendering system. The system can visually reproduce real-world 3D objects with high resolution, accurate colour, a wide dynamic range and contrast, and most depth cues, including binocular disparity and focal depth cues, and permits a direct comparison between real and virtual scenes. It is the first work that achieves a close perceptual match between a physical 3D object and its virtual counterpart. 
The fidelity of reproduction has been confirmed by a Visual Turing Test (VTT) where naive participants failed to discern any difference between the real and virtual objects in more than half of the trials. The test provides insights to better understand the conditions necessary to achieve perceptual realism. In the long term, we foresee this system as a crucial step in the development of perceptually realistic graphics, for not only a quality unprecedentedly achieved but also a fundamental approach that can effectively identify bottlenecks and direct future studies for perceptually realistic graphics.
- Published
- 2022
- Full Text
- View/download PDF
41. Motion quality models for real-time adaptive rendering
- Author
-
Jindal, Akshay and Mantiuk, Rafał
- Subjects
Computer Graphics ,Perception ,Rendering ,Displays - Abstract
The demand for compute power and transmission bandwidth is growing rapidly as the display technologies progress towards higher spatial resolutions and frame rates, more bits per pixel (HDR), and multiple views required for 3D displays. Advancement in real-time rendering has also made shading incredibly complex. However, GPUs are still limited in processing capabilities and often have to work at a fraction of their available bandwidth due to hardware constraints. In this dissertation, I build upon the observation that the human visual system has a limited capability to perceive images of high spatial and temporal frequency, and hence it is unnecessary to strive to meet these computational demands. I propose to model the spatio-temporal limitations of the visual system, specifically the perception of image artefacts under motion, and exploit them to improve the quality of rendering. I present four main contributions: First, I demonstrate the potential of existing motion quality models in improving rendering quality under restricted bandwidths. This validation is done using an eye tracker through psychophysical experiments involving complex motion on a G-Sync display. Second, I note that the current models of motion quality ignore the effect of displayed content and cannot take advantage of recent shading technologies such as variable-rate shading which allows for more flexible control of local shading resolution. To this end, I develop a new content-dependent model of motion quality and calibrate it through psychophysical experiments on a wide range of content, display configurations, and velocities. Third, I propose a new rendering algorithm that utilises such models to calculate the optimal refresh rate and local shading resolution given the allowed bandwidth. 
Finally, I present a novel high dynamic range multi-focal stereo display that will serve as an experimental apparatus for the next generation of perceptual experiments by enabling us to study the interplay of these factors in achieving perceptual realism.
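The third contribution in the abstract above describes selecting an optimal refresh rate and local shading resolution under a bandwidth constraint. A minimal sketch of that kind of selection problem follows; the mode list, bandwidth cost, and quality score below are all hypothetical stand-ins for illustration, not the dissertation's calibrated model:

```python
# Illustrative sketch (not the author's algorithm): among candidate
# (refresh_rate_hz, shading_rate) modes, pick the one with the best
# quality score whose bandwidth cost fits a given budget.

def select_mode(modes, quality, bandwidth, budget):
    """Return the feasible mode maximising the quality score."""
    feasible = [m for m in modes if bandwidth(m) <= budget]
    if not feasible:
        raise ValueError("no mode fits the bandwidth budget")
    return max(feasible, key=quality)

# Hypothetical cost model: shaded work per second, in arbitrary units.
def bandwidth(mode):
    refresh_hz, shading_rate = mode  # shading_rate: fraction of full-res shading
    return refresh_hz * shading_rate

# Toy quality trade-off between temporal and spatial fidelity
# (a real model would be calibrated psychophysically, per the abstract).
def quality(mode):
    refresh_hz, shading_rate = mode
    return refresh_hz * 0.4 + shading_rate * 100

modes = [(60, 1.0), (120, 0.5), (120, 1.0), (144, 0.25)]
best = select_mode(modes, quality, bandwidth, budget=60.0)
# (120, 1.0) is excluded: it costs 120 units against a budget of 60.
```

With these toy numbers, full-resolution shading at 60 Hz scores higher than half-resolution shading at 120 Hz; a content-dependent model, as the abstract argues, could shift that balance per scene.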
- Published
- 2022
- Full Text
- View/download PDF
42. The Hybridization of graphic survey techniques in funerary architecture
- Author
-
María José Muñoz-Mora, David Navarro-Moreno, Pedro Jiménez-Vicario, Jose Gabriel Gómez-Carrasco, and Manuel Alejandro Ródenas-López
- Subjects
graphic restitution ,pantheon ,photogrammetry ,rendering ,cemetery ,Architecture ,NA1-9428 ,Architectural drawing and design ,NA2695-2793 - Abstract
Funerary architecture often presents a series of specificities that makes it necessary to combine different techniques for its adequate graphic restitution. These conditioning factors are usually present both in the exterior, such as nearby trees and metalwork elements, and in the interior, due to the arrangement of small objects and furniture as well as poor lighting in the rooms. This paper focuses on the methodology followed for the graphic restitution of the Pedreño y Deu family pantheon in the main cemetery of Cartagena (Spain). The pantheon, built in 1875, consists of a circular chapel on the ground floor and a crypt below ground level. At the start of the survey, the building, protected by municipal planning, was in a state of advanced deterioration. The techniques used for the survey of each part will be described, as well as the procedure followed to assemble them into a single model. In this work, we have been able to verify that hybridization in survey techniques is one of the best options to represent architectural heritage in case studies where there are situations of very different natures. DOI: https://doi.org/10.20365/disegnarecon.30.2023.15
- Published
- 2023
43. Fluids and Deep Learning: A Brief Review
- Author
-
Antonio Giraldi, Gilson, Almeida, Liliane Rodrigues de, Lopes Apolinário Jr., Antonio, Silva, Leandro Tavares da, Bellomo, Nicola, Series Editor, Benzi, Michele, Series Editor, Jorgensen, Palle, Series Editor, Li, Tatsien, Series Editor, Melnik, Roderick, Series Editor, Scherzer, Otmar, Series Editor, Steinberg, Benjamin, Series Editor, Reichel, Lothar, Series Editor, Tschinkel, Yuri, Series Editor, Yin, George, Series Editor, Zhang, Ping, Series Editor, Giraldi, Gilson Antonio, Almeida, Liliane Rodrigues de, Apolinário Jr., Antonio Lopes, and Silva, Leandro Tavares da
- Published
- 2023
- Full Text
- View/download PDF
44. Introductory Material to Animation and Learning
- Author
-
Antonio Giraldi, Gilson, Almeida, Liliane Rodrigues de, Lopes Apolinário Jr., Antonio, Silva, Leandro Tavares da, Bellomo, Nicola, Series Editor, Benzi, Michele, Series Editor, Jorgensen, Palle, Series Editor, Li, Tatsien, Series Editor, Melnik, Roderick, Series Editor, Scherzer, Otmar, Series Editor, Steinberg, Benjamin, Series Editor, Reichel, Lothar, Series Editor, Tschinkel, Yuri, Series Editor, Yin, George, Series Editor, Zhang, Ping, Series Editor, Giraldi, Gilson Antonio, Almeida, Liliane Rodrigues de, Apolinário Jr., Antonio Lopes, and Silva, Leandro Tavares da
- Published
- 2023
- Full Text
- View/download PDF
45. Design of a Video Game Applying a Layered Architecture Based on the Unity Framework
- Author
-
Luis, Aguas, Henry, Recalde, Renato, Toasa, Elizabeth, Salazar, Kacprzyk, Janusz, Series Editor, Gomide, Fernando, Advisory Editor, Kaynak, Okyay, Advisory Editor, Liu, Derong, Advisory Editor, Pedrycz, Witold, Advisory Editor, Polycarpou, Marios M., Advisory Editor, Rudas, Imre J., Advisory Editor, Wang, Jun, Advisory Editor, Rocha, Álvaro, editor, Ferrás, Carlos, editor, and Ibarra, Waldo, editor
- Published
- 2023
- Full Text
- View/download PDF
46. Application Potential of Stable Diffusion in Different Stages of Industrial Design
- Author
-
Liu, Miao, Hu, Yifei, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Degen, Helmut, editor, and Ntoa, Stavroula, editor
- Published
- 2023
- Full Text
- View/download PDF
47. Architectural Visualization Using Virtual Reality
- Author
-
Malla, Kranthi, Petluri, Kalpana, Mekala, Bhanu Prasad, Sandeep, Y., Kacprzyk, Janusz, Series Editor, Gomide, Fernando, Advisory Editor, Kaynak, Okyay, Advisory Editor, Liu, Derong, Advisory Editor, Pedrycz, Witold, Advisory Editor, Polycarpou, Marios M., Advisory Editor, Rudas, Imre J., Advisory Editor, Wang, Jun, Advisory Editor, Senjyu, Tomonobu, editor, So–In, Chakchai, editor, and Joshi, Amit, editor
- Published
- 2023
- Full Text
- View/download PDF
48. Enhancing the Tourism Experience Using Mobile Augmented Reality: Geo-Visualization Techniques
- Author
-
Solanki, Karishma, Abbasi, Danish Faraz, Hossain, Munir, Salahuddin, Emran, Ali, Shaymaa Ismail, Ahmed, Shahad, Kacprzyk, Janusz, Series Editor, Gomide, Fernando, Advisory Editor, Kaynak, Okyay, Advisory Editor, Liu, Derong, Advisory Editor, Pedrycz, Witold, Advisory Editor, Polycarpou, Marios M., Advisory Editor, Rudas, Imre J., Advisory Editor, Wang, Jun, Advisory Editor, Daimi, Kevin, editor, and Al Sadoon, Abeer, editor
- Published
- 2023
- Full Text
- View/download PDF
49. Sustainable Solutions by the Use of Immersive Technologies for Repurposing Buildings
- Author
-
Rosilius, Maximilian, Wilhelm, Markus, von Eitzen, Ingo, Decker, Steffen, Damek, Sebastian, Braeutigam, Volker, Chaari, Fakher, Series Editor, Gherardini, Francesco, Series Editor, Ivanov, Vitalii, Series Editor, Cavas-Martínez, Francisco, Editorial Board Member, di Mare, Francesca, Editorial Board Member, Haddar, Mohamed, Editorial Board Member, Kwon, Young W., Editorial Board Member, Trojanowska, Justyna, Editorial Board Member, Xu, Jinyang, Editorial Board Member, Kohl, Holger, editor, Seliger, Günther, editor, and Dietrich, Franz, editor
- Published
- 2023
- Full Text
- View/download PDF
50. 3D Modelling of Freedom Summit for Virtual Environments
- Author
-
Luis, Aguas, Lizbeth, Suárez, Rosario, Coral, Byron, Machay, Kacprzyk, Janusz, Series Editor, Gomide, Fernando, Advisory Editor, Kaynak, Okyay, Advisory Editor, Liu, Derong, Advisory Editor, Pedrycz, Witold, Advisory Editor, Polycarpou, Marios M., Advisory Editor, Rudas, Imre J., Advisory Editor, Wang, Jun, Advisory Editor, Botto-Tobar, Miguel, editor, Gómez, Omar S., editor, Rosero Miranda, Raul, editor, Díaz Cadena, Angela, editor, and Luna-Encalada, Washington, editor
- Published
- 2023
- Full Text
- View/download PDF