2,587 results for "Texture Synthesis"
Search Results
2. TexDreamer: Towards Zero-Shot High-Fidelity 3D Human Texture Generation
- Author
-
Liu, Yufei, Zhu, Junwei, Tang, Junshu, Zhang, Shijie, Zhang, Jiangning, Cao, Weijian, Wang, Chengjie, Wu, Yunsheng, Huang, Dongjin, Goos, Gerhard, Series Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Leonardis, Aleš, editor, Ricci, Elisa, editor, Roth, Stefan, editor, Russakovsky, Olga, editor, Sattler, Torsten, editor, and Varol, Gül, editor
- Published
- 2025
- Full Text
- View/download PDF
3. RoomTex: Texturing Compositional Indoor Scenes via Iterative Inpainting
- Author
-
Wang, Qi, Lu, Ruijie, Xu, Xudong, Wang, Jingbo, Wang, Michael Yu, Dai, Bo, Zeng, Gang, Xu, Dan, Leonardis, Aleš, editor, Ricci, Elisa, editor, Roth, Stefan, editor, Russakovsky, Olga, editor, Sattler, Torsten, editor, and Varol, Gül, editor
- Published
- 2025
- Full Text
- View/download PDF
4. Learning Pseudo 3D Guidance for View-Consistent Texturing with 2D Diffusion
- Author
-
Li, Kehan, Fan, Yanbo, Wu, Yang, Sun, Zhongqian, Yang, Wei, Ji, Xiangyang, Yuan, Li, Chen, Jie, Leonardis, Aleš, editor, Ricci, Elisa, editor, Roth, Stefan, editor, Russakovsky, Olga, editor, Sattler, Torsten, editor, and Varol, Gül, editor
- Published
- 2025
- Full Text
- View/download PDF
5. WordRobe: Text-Guided Generation of Textured 3D Garments
- Author
-
Srivastava, Astitva, Manu, Pranav, Raj, Amit, Jampani, Varun, Sharma, Avinash, Leonardis, Aleš, editor, Ricci, Elisa, editor, Roth, Stefan, editor, Russakovsky, Olga, editor, Sattler, Torsten, editor, and Varol, Gül, editor
- Published
- 2025
- Full Text
- View/download PDF
6. Stochastic geometry models for texture synthesis of machined metallic surfaces: sandblasting and milling
- Author
-
Natascha Jeziorski and Claudia Redenbach
- Subjects
Stochastic geometry modeling, Texture synthesis, Machined surfaces, Visual surface inspection, Synthetic training data, Mathematics, Industry
- Abstract
Training defect detection algorithms for visual surface inspection systems requires a large and representative set of training data. Often, not enough real data is available, and the available data cannot cover the variety of possible defects. Synthetic data generated by a synthetic visual surface inspection environment can overcome this problem. This requires a digital twin of the object, whose micro-scale surface topography is modeled by texture synthesis models. We develop stochastic texture models for sandblasted and milled surfaces based on topography measurements of such surfaces. As the surface patterns differ significantly, we use separate modeling approaches for the two cases. Sandblasted surfaces are modeled by a combination of data-based texture synthesis methods that rely entirely on the measurements. In contrast, the model for milled surfaces is procedural and includes all process-related parameters known from the machine settings.
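The milled-surface model above is procedural, driven by parameters known from the machine settings. As a purely illustrative sketch (not the authors' actual model), a toy procedural height map with periodic tool marks plus fine random roughness might look like this; all parameter names and values are hypothetical:

```python
import numpy as np

def milled_surface(nx, ny, feed=8.0, depth=0.5, noise=0.02, seed=0):
    """Toy procedural height map for a milled surface: periodic grooves
    along the feed direction plus small Gaussian roughness.
    Parameters are illustrative, not taken from the paper."""
    rng = np.random.default_rng(seed)
    x = np.arange(nx)
    # Periodic tool marks: one groove every `feed` pixels.
    grooves = depth * np.cos(2 * np.pi * x / feed)
    height = np.tile(grooves, (ny, 1))
    height += noise * rng.standard_normal((ny, nx))
    return height

h = milled_surface(64, 32)
```

With `noise=0` the map is exactly periodic in the feed direction, which is the defining property of such a procedural model.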
- Published
- 2024
- Full Text
- View/download PDF
7. Directional Texture Editing for 3D Models.
- Author
-
Liu, Shengqi, Chen, Zhuo, Gao, Jingnan, Yan, Yichao, Zhu, Wenhan, Lyu, Jiangjing, and Yang, Xiaokang
- Subjects
VIDEO editing, VIDEO processing, TEXTURE mapping, SURFACES (Technology), PROBLEM solving
- Abstract
Texture editing is a crucial task in 3D modelling that allows users to automatically manipulate the surface materials of 3D models. However, the inherent complexity of 3D models and the ambiguity of text descriptions make this task challenging. To tackle this challenge, we propose ITEM3D, a Texture Editing Model designed for automatic 3D object editing according to text instructions. Leveraging diffusion models and differentiable rendering, ITEM3D takes rendered images as the bridge between text and the 3D representation and further optimizes the disentangled texture and environment map. Previous methods adopted an absolute editing direction, namely score distillation sampling (SDS), as the optimization objective, which unfortunately results in noisy appearances and text inconsistencies. To solve the problem caused by ambiguous text, we introduce a relative editing direction, an optimization objective defined by the noise difference between the source and target texts, to resolve the semantic ambiguity between texts and images. Additionally, we gradually adjust the direction during optimization to further address unexpected deviation in the texture domain. Qualitative and quantitative experiments show that ITEM3D outperforms state-of-the-art methods on various 3D objects. We also perform text-guided relighting to show explicit control over lighting. Our project page: https://shengqiliu1.github.io/ITEM3D/.
- Published
- 2024
- Full Text
- View/download PDF
8. Mix‐Max: A Content‐Aware Operator for Real‐Time Texture Transitions.
- Author
-
Fournier, Romain and Sauvage, Basile
- Subjects
DISTRIBUTION (Probability theory), VIDEO processing, ALGORITHMS
- Abstract
Mixing textures is a basic and ubiquitous operation in data‐driven algorithms for real‐time texture generation and rendering. It is usually performed either by linear blending, or by cutting. We propose a new mixing operator which encompasses and extends both, creating more complex transitions that adapt to the texture's contents. Our mixing operator takes as input two or more textures along with two or more priority maps, which encode how the texture patterns should interact. The resulting mixed texture is defined pixel‐wise by selecting the maximum of both priorities. We show that it integrates smoothly into two widespread applications: transition between two different textures, and texture synthesis that mixes pieces of the same texture. We provide constant‐time and parallel evaluation of the resulting mix over square footprints of MIP‐maps, making our operator suitable for real‐time rendering. We also develop a micro‐priority model, inspired by micro‐geometry models in rendering, which represents sub‐pixel priorities by a statistical distribution, and which allows for tuning between sharp cuts and smooth blends.
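The operator's core selection rule — at each pixel, output the texel of the texture whose priority map is maximal there — can be sketched as follows. This is an illustrative NumPy version only; the paper's MIP-map filtering and statistical micro-priority model are not reproduced:

```python
import numpy as np

def mix_max(textures, priorities):
    """Pixel-wise mix of several textures: at each pixel, keep the texel
    of the texture whose priority is largest there (core selection rule
    only; an illustrative sketch, not the paper's full operator)."""
    pr = np.stack(priorities)               # (n, H, W) priority maps
    tex = np.stack(textures)                # (n, H, W, C) textures
    winner = np.argmax(pr, axis=0)          # (H, W): index of max priority
    h, w = winner.shape
    rows = np.arange(h)[:, None]
    cols = np.arange(w)[None, :]
    return tex[winner, rows, cols]          # (H, W, C) mixed texture
```

Replacing the hard `argmax` with a soft, distribution-based choice is essentially where the micro-priority model comes in, trading sharp cuts for smooth blends.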
- Published
- 2024
- Full Text
- View/download PDF
9. Stochastic geometry models for texture synthesis of machined metallic surfaces: sandblasting and milling.
- Author
-
Jeziorski, Natascha and Redenbach, Claudia
- Subjects
STOCHASTIC geometry, SURFACE topography measurement, INSPECTION & review, DIGITAL twins, GEOMETRIC modeling
- Abstract
Training defect detection algorithms for visual surface inspection systems requires a large and representative set of training data. Often, not enough real data is available, and the available data cannot cover the variety of possible defects. Synthetic data generated by a synthetic visual surface inspection environment can overcome this problem. This requires a digital twin of the object, whose micro-scale surface topography is modeled by texture synthesis models. We develop stochastic texture models for sandblasted and milled surfaces based on topography measurements of such surfaces. As the surface patterns differ significantly, we use separate modeling approaches for the two cases. Sandblasted surfaces are modeled by a combination of data-based texture synthesis methods that rely entirely on the measurements. In contrast, the model for milled surfaces is procedural and includes all process-related parameters known from the machine settings.
- Published
- 2024
- Full Text
- View/download PDF
10. Image inpainting via modified exemplar‐based inpainting with two‐stage structure tensor and image sparse representation.
- Author
-
Yodjai, Petcharaporn, Kumam, Poom, Martínez‐Moreno, Juan, and Jirakitpuwapat, Wachirapong
- Subjects
IMAGE representation, INPAINTING, PIXELS
- Abstract
The approach described in this research is an exemplar‐based inpainting method that combines a two‐stage structure tensor and image sparse representation to fill in missing pixels. Important steps are selecting the filling order and enforcing local intensity smoothness, while ensuring that the structure is not destroyed. We employ a two‐stage structure‐tensor‐based priority for the filling order: finding the candidate patches and determining the appropriate weight of each candidate patch under the constraint of local patch consistency, then applying a sparse linear combination of candidate patches to fill in the missing region of the image. In addition, this technique may also be used for object removal. The proposed method yields results that are visually natural and of high quality.
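The patch-filling step — match candidates against the known pixels of a target patch, then blend the best matches — can be sketched as below. This is a simplified stand-in (inverse-error weights instead of the paper's sparse linear combination, and no two-stage structure-tensor priority); all names are hypothetical:

```python
import numpy as np

def fill_patch(target, mask, candidates, k=3):
    """Fill the masked pixels of `target` with a weighted blend of the k
    candidate patches that best match its known pixels. Illustrative
    sketch only: weights here are inverse SSD, not the paper's sparse
    linear combination under local patch consistency."""
    known = ~mask
    # Sum-of-squared-differences against the known pixels only.
    errs = np.array([((c - target)[known] ** 2).sum() for c in candidates])
    best = np.argsort(errs)[:k]
    w = 1.0 / (errs[best] + 1e-8)           # inverse-error weights
    w /= w.sum()
    blend = np.tensordot(w, np.stack([candidates[i] for i in best]), axes=1)
    out = target.copy()
    out[mask] = blend[mask]                 # only missing pixels change
    return out
```

A full implementation would loop this over boundary patches in priority order until the hole is filled.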
- Published
- 2024
- Full Text
- View/download PDF
11. Seasonal terrain texture synthesis via Köppen periodic conditioning.
- Author
-
Kanai, Toshiki, Endo, Yuki, and Kanamori, Yoshihiro
- Subjects
CONVOLUTIONAL neural networks, KÖPPEN climate classification, SEASONS
- Abstract
This paper presents the first method for synthesizing seasonal transition of terrain textures for an input heightfield. Our method reproduces a seamless transition of terrain textures according to the seasons by learning measured data on the earth using a convolutional neural network. We attribute the main seasonal texture transition to vegetation and snow, and control the texture synthesis not only with the input heightfield but also with the annual temperature and precipitation based on Köppen's climate classification as well as insolation at the location. We found that month-by-month synthesis yields incoherent transitions, while a naïve conditioning with explicit temporal information (e.g., month) degrades generalizability due to the north–south hemisphere difference. To address these issues, we introduce a simple solution—periodic conditioning on the annual data without explicit temporal information. Our experiments reveal that our method can synthesize plausible seasonal transitions of terrain textures. We also demonstrate large-scale texture synthesis by tiling the texture output.
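One common way to make conditioning periodic without an explicit month index is to place the time of year on the unit circle, so that December and January are neighbors. This is a generic illustration of the idea of periodic conditioning, not necessarily the paper's exact scheme (which conditions on annual climate data):

```python
import numpy as np

def periodic_encoding(month):
    """Map a month (1-12) to a point on the unit circle, so that the
    encoding wraps around the year with no explicit temporal index.
    Generic sketch of periodic conditioning, not the paper's scheme."""
    phase = 2 * np.pi * (month - 1) / 12
    return np.array([np.sin(phase), np.cos(phase)])
```

Under this encoding the December–January distance is small while January–July is maximal, which is exactly the seasonal adjacency a naïve month index fails to capture.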
- Published
- 2024
- Full Text
- View/download PDF
12. Suitable and Style-Consistent Multi-Texture Recommendation for Cartoon Illustrations.
- Author
-
Wu, Huisi, Wang, Zhaoze, Li, Yifan, Liu, Xueting, and Lee, Tong-Yee
- Subjects
RECOMMENDER systems, SEARCH engines
- Abstract
Texture plays an important role in cartoon illustrations, displaying object materials and enriching the visual experience. Unfortunately, manually designing and drawing an appropriate texture is not easy even for proficient artists, let alone novices or amateurs. While countless textures exist on the Internet, it is not easy to pick an appropriate one using traditional text-based search engines. Although several texture pickers have been proposed, they still require users to browse the textures themselves, which is labor-intensive and time-consuming. In this article, an automatic texture recommendation system is proposed for recommending multiple textures to replace a set of user-specified regions in a cartoon illustration with a visually pleasing look. Two measurements, the suitability measurement and the style-consistency measurement, are proposed to ensure that the recommended textures are suitable for cartoon illustration and, at the same time, mutually consistent in style. Suitability is measured based on the synthesizability, cartoonity, and region fitness of textures. Style-consistency is predicted using a learning-based solution, since judging whether two textures are consistent in style is subjective. An optimization problem is formulated and solved via a genetic algorithm. Our method is validated on various cartoon illustrations, and convincing results are obtained.
- Published
- 2024
- Full Text
- View/download PDF
13. End-to-End Framework for the Automatic Matching of Omnidirectional Street Images and Building Data and the Creation of 3D Building Models.
- Author
-
Ogawa, Yoshiki, Nakamura, Ryoto, Sato, Go, Maeda, Hiroya, and Sekimoto, Yoshihide
- Subjects
AERIAL photographs, URBAN planning
- Abstract
For accurate urban planning, three-dimensional (3D) building models with a high level of detail (LOD) must be developed. However, most large-scale 3D building models are limited to a low LOD of 1–2, as the creation of higher LOD models requires the modeling of detailed building elements such as walls, windows, doors, and roof shapes. This process is currently not automated and is performed manually. In this study, an end-to-end framework for the creation of 3D building models was proposed by integrating multi-source data such as omnidirectional images, building footprints, and aerial photographs. These different data sources were matched with the building ID considering their spatial location. The building element information related to the exterior of the building was extracted, and detailed LOD3 3D building models were created. Experiments were conducted using data from Kobe, Japan, yielding a high accuracy for the intermediate processes, such as an 86.9% accuracy in building matching, an 88.3% pixel-based accuracy in the building element extraction, and an 89.7% accuracy in the roof type classification. Eighty-one LOD3 3D building models were created in 8 h, demonstrating that our method can create 3D building models that adequately represent the exterior information of actual buildings.
- Published
- 2024
- Full Text
- View/download PDF
14. Reinforcement learning-based approach for plastic texture surface defects inspection.
- Author
-
Ho, Chao-Ching, Chiao, Yuan-Cheng, and Su, Eugene
- Subjects
SURFACE texture, SURFACE defects, GENERATIVE adversarial networks, REINFORCEMENT (Psychology), MATERIALS texture, LIGHT sources
- Abstract
This paper proposes a novel data-enhanced virtual texture generation network for use in deep learning detection systems. Current methods of data enhancement, such as image flipping, scaling, or Generative Adversarial Networks, have limitations: they cannot produce characteristics beyond the training data. The proposed system uses the texture characteristics of a learned surface to generate surface textures through the Open Graphics Library, which can simulate material textures, light sources, and shadow effects. This enables the generation of the texture parameters required for the reinforcement learning network to conduct its parameter search. Each generated image is authenticated by a discriminator, and the reward score is fed back into the critic network to update the value network. The proposed system can compensate for the imbalance of defective data types, generate large quantities of random, non-defective data, and automatically classify and label during the generation process, reducing labor and improving labeling accuracy. The study found that the proposed data enhancement method can increase the diversity of data characteristics, and the generated data can increase the recall rate of test and verification data sets. Specifically, the proposed system increased the recall rate of test data sets with different distributions from 78.21% to 82.40% and the recall rate of verification data sets with the same distribution from 81.64% to 91.94%.
- Published
- 2024
- Full Text
- View/download PDF
15. Creation mechanism of new media art combining artificial intelligence and internet of things technology in a metaverse environment.
- Author
-
Wang, Xiaolong, Cai, Ling, and Xu, Yunhao
- Subjects
MEDIA art, ARTIFICIAL intelligence, SHARED virtual environments, INTERNET of things, COMPUTER art
- Abstract
The Metaverse is regarded as a brand-new virtual society constructed by deep media, and the new media art produced by new media technology will gradually replace traditional art forms and play an important role in the Metaverse of the future. The maturity of the new media art creation mechanism also depends on artificial intelligence (AI) and Internet of Things (IoT) technology. The purpose of this study is to explore image style transfer for digital painting in new media art, that is, to reshape an image's style with neural network techniques in AI while retaining the semantic information of the original image. Based on neural style transfer, an image style conversion method based on feature synthesis is proposed. Using the feature maps of the content image and the style image, and combining the advantages of traditional texture synthesis, a richer multi-style target feature map is synthesized. The inverse transformation of the target feature map is then restored to an image to realize the style transformation. Against the background of integrating AI and IoT, the creation mechanism of new media art is optimized. For digital art style transformation, the TensorFlow framework is used for simulation verification and performance evaluation. The experimental results show that the proposed feature-synthesis-based style transfer method distributes image texture more reasonably and can change the style texture while retaining more of the semantic structure of the original image, generating richer artistic effects with better interactivity and local controllability. It can provide theoretical help and reference for developing new media art creation mechanisms.
- Published
- 2024
- Full Text
- View/download PDF
16. MeshSegmenter: Zero-Shot Mesh Semantic Segmentation via Texture Synthesis
- Author
-
Zhong, Ziming, Xu, Yanyu, Li, Jing, Xu, Jiale, Li, Zhengxin, Yu, Chaohui, Gao, Shenghua, Leonardis, Aleš, editor, Ricci, Elisa, editor, Roth, Stefan, editor, Russakovsky, Olga, editor, Sattler, Torsten, editor, and Varol, Gül, editor
- Published
- 2024
- Full Text
- View/download PDF
17. Color-Correlated Texture Synthesis for Hybrid Indoor Scenes
- Author
-
He, Yu, Jin, Yi-Han, Liu, Ying-Tian, Lu, Bao-Li, Yu, Ge, Hu, Shi-Min, editor, Cai, Yiyu, editor, and Rosin, Paul, editor
- Published
- 2024
- Full Text
- View/download PDF
18. NeRF Synthesis with Shading Guidance
- Author
-
Li, Chenbin, Xin, Yu, Liu, Gaoyi, Zeng, Xiang, Liu, Ligang, Hu, Shi-Min, editor, Cai, Yiyu, editor, and Rosin, Paul, editor
- Published
- 2024
- Full Text
- View/download PDF
19. Semantic segmentation of textured mosaics
- Author
-
Melissa Cote, Amanda Dash, and Alexandra Branzan Albu
- Subjects
Texture segmentation, Semantic segmentation network, Textures in the wild, Visual attributes, Deep learning, Texture synthesis, Electronics
- Abstract
This paper investigates deep learning (DL)-based semantic segmentation of textured mosaics. Existing popular datasets for mosaic texture segmentation, designed prior to the DL era, have several limitations: (1) training images are single-textured and thus differ from the multi-textured test images; (2) training and test textures are typically cut out from the same raw images, which may hinder model generalization; (3) each test image has its own limited set of training images, thus forcing an inefficient training of one model per test image from few data. We propose two texture segmentation datasets, based on the existing Outex and DTD datasets, that are suitable for training semantic segmentation networks and that address the above limitations: SemSegOutex focuses on materials acquired under controlled conditions, and SemSegDTD focuses on visual attributes of textures acquired in the wild. We also generate a synthetic version of SemSegOutex via texture synthesis that can be used in the same way as standard random data augmentation. Finally, we study the performance of the state-of-the-art DeepLabv3+ for textured mosaic segmentation, which is excellent for SemSegOutex and variable for SemSegDTD. Our datasets allow us to analyze results according to the type of material, visual attributes, various image acquisition artifacts, and natural versus synthetic aspects, yielding new insights into the possible usage of recent DL technologies for texture analysis.
- Published
- 2023
- Full Text
- View/download PDF
20. Artistic image synthesis from unsupervised segmentation maps.
- Author
-
Liu, Dilin, Yao, Hongxun, and Lu, Xiusheng
- Abstract
We present a framework for artwork image synthesis from unsupervised segmentation map inputs and style images. The output has style consistency with the style images and the semantic structure of the corresponding segmentation label. Existing methods of transferring semantic labels to painting images require large amounts of manually segmented pairs for training. To address this issue, we use unsupervised segmentation maps to build training pairs and learn the generator with the proposed spatially adaptive instance normalization block. Our method exploits style-consistency and semantic-consistency loss functions to reduce artifacts in synthetic images. Extensive experiments on several image translation tasks show the effectiveness of our method in generating images with both the structure of the segmentation and the style of the exemplar image.
- Published
- 2024
- Full Text
- View/download PDF
21. Texture synthesis for generating realistic-looking bronchoscopic videos.
- Author
-
Guo, Lu and Nahm, Werner
- Abstract
Purpose: Synthetic realistic-looking bronchoscopic videos are needed to develop and evaluate depth estimation methods as part of investigating a vision-based bronchoscopic navigation system. To generate these synthetic videos when access to real bronchoscopic images and image sequences is limited, we need to create varied, realistic-looking, large-size image textures of the airway inner surface from a small number of real bronchoscopic image texture patches. Methods: A generative adversarial networks-based method is applied to create realistic-looking textures of the airway inner surface by learning from a limited number of small texture patches from real bronchoscopic images. By applying a purely convolutional architecture without any fully connected layers, this method allows the production of textures with arbitrary size. Results: Authentic image textures of the airway inner surface are created. An example of the synthesized textures and two frames of the thereby generated bronchoscopic video are shown. The necessity and sufficiency of the generated textures as image features for further depth estimation methods are demonstrated. Conclusions: The method can generate textures of the airway inner surface that meet the requirements for the texture itself and for the thereby generated bronchoscopic videos, including "realistic-looking," "long-term temporal consistency," "sufficient image features for depth estimation," and "large size and variety of synthesized textures." It also shows advantages with respect to the easy accessibility of the required data source. A further validation of this approach is planned by utilizing the realistic-looking bronchoscopic videos with textures generated by this method as training and test data for depth estimation networks.
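The "purely convolutional architecture without any fully connected layers" is what permits arbitrary output sizes: a convolution's output dimensions scale with its input, with no fixed-size weight matrix in the way. A minimal single-channel illustration of that property (not the paper's GAN) follows:

```python
import numpy as np

def conv2d_valid(x, k):
    """Minimal valid-mode 2D convolution, single channel (illustrative)."""
    kh, kw = k.shape
    H, W = x.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (x[i:i + kh, j:j + kw] * k).sum()
    return out

# A "generator" built only from convolutions has no fixed input size:
# feeding a larger noise field yields a proportionally larger texture.
rng = np.random.default_rng(0)
k = rng.standard_normal((3, 3))
small = conv2d_valid(rng.standard_normal((16, 16)), k)
large = conv2d_valid(rng.standard_normal((64, 64)), k)
```

A fully connected layer, by contrast, would fix the input (and hence output) resolution at training time.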
- Published
- 2023
- Full Text
- View/download PDF
22. Neural style transfer based on deep feature synthesis.
- Author
-
Li, Dajin and Gao, Wenran
- Subjects
ARTIFICIAL neural networks
- Abstract
Neural Style Transfer makes full use of the high-level features of deep neural networks, so stylized images can represent content and style features on high-level semantics. But neural networks are end-to-end black box systems. Previous style transfer models are based on the overall features of the image when constructing the target image, so they cannot effectively intervene in the content and style representations. This paper presents a locally controllable nonparametric neural style transfer model. We treat style transfer as a feature matching process independent of neural networks and propose a deep-to-shallow feature synthesis algorithm. The target feature map is synthesized layer by layer in the deep feature space and then transformed into the target image. Because the feature synthesis is a local manipulation on feature maps, it is easy to control the local texture structure, content details and texture distribution. Based on our synthesis algorithm, we propose a multi-exemplar synthesis method that can make local stroke directions better match content semantics or combine multiple styles into a single image. Our experiments show that our model can produce more impressive results than previous methods.
- Published
- 2023
- Full Text
- View/download PDF
23. TextureAda: Deep 3D Texture Transfer for Ideation in Product Design Conceptualization
- Author
-
Gallega, Rgee Wharlo, Azcarraga, Arnulfo, Sumi, Yasuyuki, Degen, Helmut, editor, and Ntoa, Stavroula, editor
- Published
- 2023
- Full Text
- View/download PDF
24. A Finite Differences-Based Metric for Magnetic Resonance Image Inpainting
- Author
-
Seracini, Marco, Testa, Claudia, Brown, Stephen R., Gervasi, Osvaldo, editor, Murgante, Beniamino, editor, Rocha, Ana Maria A. C., editor, Garau, Chiara, editor, Scorza, Francesco, editor, Karaca, Yeliz, editor, and Torre, Carmelo M., editor
- Published
- 2023
- Full Text
- View/download PDF
25. Optimal Transport Between GMM for Multiscale Texture Synthesis
- Author
-
Delon, Julie, Desolneux, Agnès, Facq, Laurent, Leclaire, Arthur, Calatroni, Luca, editor, Donatelli, Marco, editor, Morigi, Serena, editor, Prato, Marco, editor, and Santacesaria, Matteo, editor
- Published
- 2023
- Full Text
- View/download PDF
26. A Geometrically Aware Auto-Encoder for Multi-texture Synthesis
- Author
-
Chatillon, Pierrick, Gousseau, Yann, Lefebvre, Sidonie, Calatroni, Luca, editor, Donatelli, Marco, editor, Morigi, Serena, editor, Prato, Marco, editor, and Santacesaria, Matteo, editor
- Published
- 2023
- Full Text
- View/download PDF
27. End-to-End Framework for the Automatic Matching of Omnidirectional Street Images and Building Data and the Creation of 3D Building Models
- Author
-
Yoshiki Ogawa, Ryoto Nakamura, Go Sato, Hiroya Maeda, and Yoshihide Sekimoto
- Subjects
3D building model, building matching, omnidirectional image, building footprint, building element extraction, texture synthesis, Science
- Abstract
For accurate urban planning, three-dimensional (3D) building models with a high level of detail (LOD) must be developed. However, most large-scale 3D building models are limited to a low LOD of 1–2, as the creation of higher LOD models requires the modeling of detailed building elements such as walls, windows, doors, and roof shapes. This process is currently not automated and is performed manually. In this study, an end-to-end framework for the creation of 3D building models was proposed by integrating multi-source data such as omnidirectional images, building footprints, and aerial photographs. These different data sources were matched with the building ID considering their spatial location. The building element information related to the exterior of the building was extracted, and detailed LOD3 3D building models were created. Experiments were conducted using data from Kobe, Japan, yielding a high accuracy for the intermediate processes, such as an 86.9% accuracy in building matching, an 88.3% pixel-based accuracy in the building element extraction, and an 89.7% accuracy in the roof type classification. Eighty-one LOD3 3D building models were created in 8 h, demonstrating that our method can create 3D building models that adequately represent the exterior information of actual buildings.
- Published
- 2024
- Full Text
- View/download PDF
28. Synthesising 3D solid models of natural heterogeneous materials from single sample image, using encoding deep convolutional generative adversarial networks
- Author
-
Seda Zirek
- Subjects
Material synthesis, DCGAN, Texture synthesis, 3D Solid textures, Natural heterogeneous materials, Information technology, Electronic computers. Computer science
- Abstract
Three-dimensional solid computational representations of natural heterogeneous materials are challenging to generate due to their high degree of randomness and varying scales of patterns, such as veins and cracks, in different sizes and directions. In this regard, this paper introduces a new architecture to synthesise 3D solid material models by using encoding deep convolutional generative adversarial networks (EDCGANs). DCGANs have been useful in generative image-processing tasks, successfully recreating similar results given adequate training. Concentrating on natural heterogeneous materials, this paper uses an encoding and a decoding DCGAN, combined in a similar way to auto-encoders, to convert a given image into marble based on patches. Additionally, the method creates an input dataset from a single 2D high-resolution exemplar. Further, it translates 2D data, used as a seed, into 3D data to create material blocks. While the results on the Z-axis do not have size restrictions, the X- and Y-axes are constrained by the given image. Using the method, the paper explores possible ways to present 3D solid textures. The modelling potential of the developed approach as a design tool is explored by synthesising a 3D solid texture of leaf-like material from an exemplar of a leaf image.
- Published
- 2023
- Full Text
- View/download PDF
29. Texture Inpainting for Photogrammetric Models.
- Author
-
Maggiordomo, A., Cignoni, P., and Tarini, M.
- Abstract
We devise a technique designed to remove the texturing artefacts that are typical of 3D models representing real‐world objects, acquired by photogrammetric techniques. Our technique leverages the recent advancements in inpainting of natural colour images, adapting them to the specific context. A neural network, modified and trained for our purposes, replaces the texture areas containing the defects, substituting them with new plausible patches of texels, reconstructed from the surrounding surface texture. We train and apply the network model on locally reparametrized texture patches, so as to provide an input that simplifies the learning process, because it avoids any texture seams, unused texture areas, background, depth jumps and so on. We automatically extract appropriate training data from real‐world datasets. We show two applications of the resulting method: one, as a fully automatic tool, addressing all problems that can be detected by analysing the UV‐map of the input model; and another, as an interactive semi‐automatic tool, presented to the user as a 3D 'fixing' brush that has the effect of removing artefacts from any zone the user paints on. We demonstrate our method on a variety of real‐world inputs and provide a reference usable implementation. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
30. Vector solid texture synthesis using unified RBF-based representation and optimization.
- Author
-
Qian, Yinling, Shi, Jian, Sun, Hanqiu, Chen, Yanyun, and Wang, Qiong
- Subjects
- *
RADIAL basis functions , *EXPECTATION-maximization algorithms , *ENERGY function - Abstract
Solid textures are essential for modeling virtual internal materials. Existing approaches either generate raster solid textures or only focus on vector representation. To facilitate efficient synthesis and intuitive editing of vector solid textures, we propose a novel solid texture representation, the radial basis function (RBF) solid texture. An RBF solid texture consists of a set of spatially distributed RBF instances. Each RBF instance encapsulates a 3D position, an RGB color and a signed distance field (SDF) value. Such a representation is resolution independent, compact in storage and capable of supporting efficient random access with an indexing uniform grid. We directly synthesize RBF solid textures from raster exemplars by minimizing an energy function, which encodes the position, color and SDF differences between output volumetric RBF instances and input planar RBF instances. The minimization process iteratively updates output RBF instances with an EM algorithm. Our experiments show that our algorithm can produce RBF solid textures with high efficiency and compact storage for a variety of exemplars, including stochastic patterns and more structured patterns. Furthermore, the proposed RBF solid textures support intuitive editing of both region-based and RBF-based effects. [ABSTRACT FROM AUTHOR]
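The core of the representation can be sketched in a few lines (assuming Gaussian kernels and my own variable names; the paper's exact kernel and indexing grid are not reproduced here): the color at any 3D query point is a kernel-weighted blend of nearby RBF instances' colors, which is what makes the texture resolution independent.

```python
import numpy as np

def eval_rbf_texture(query, centers, colors, sigma=0.1):
    """Evaluate an RBF solid texture at a 3D point.

    query:   (3,)  position to sample
    centers: (N, 3) RBF instance positions
    colors:  (N, 3) RGB color per instance
    Returns the kernel-weighted RGB color at `query`.
    """
    d2 = np.sum((centers - query) ** 2, axis=1)  # squared distances to instances
    w = np.exp(-d2 / (2.0 * sigma ** 2))         # Gaussian kernel weights
    w /= w.sum() + 1e-12                         # normalize to a convex blend
    return w @ colors                            # blended color

rng = np.random.default_rng(0)
centers = rng.random((100, 3))
colors = rng.random((100, 3))
c = eval_rbf_texture(np.array([0.5, 0.5, 0.5]), centers, colors)
print(c.shape)  # (3,)
```

Because evaluation works at arbitrary points, no voxel resolution is ever fixed; a real implementation would restrict the sum to instances found via the uniform grid rather than all N.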
- Published
- 2023
- Full Text
- View/download PDF
31. Autocompletion of repetitive stroking with image guidance.
- Author
-
Chen, Yilan, Kwan, Kin Chung, and Fu, Hongbo
- Subjects
FLUID control ,BUSINESS airplanes ,BOEING airplanes - Abstract
Image-guided drawing can compensate for a lack of skill but often requires a significant number of repetitive strokes to create textures. Existing automatic stroke synthesis methods are usually limited to predefined styles or require indirect manipulation that may break the spontaneous flow of drawing. We present an assisted drawing system to autocomplete repetitive short strokes during a user's normal drawing process. Users draw over a reference image as usual; at the same time, our system silently analyzes the input strokes and the reference to infer strokes that follow the user's input style when certain repetition is detected. Users can accept, modify, or ignore the system's predictions and continue drawing, thus maintaining fluid control over drawing. Our key idea is to jointly analyze image regions and user input history to detect and predict repetition. The proposed system can effectively reduce the user's workload when drawing repetitive short strokes, helping users to create results with rich patterns. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
32. Global suppression heuristic: fast GraphCut in GPU for image stitching.
- Author
-
Bui, Minh, Nguyen, Tai, Ninh, Huong, Nguyen, Tu, and Tran, Tien Hai
- Abstract
The GraphCut algorithm has shown its effectiveness in solving many computer vision tasks. However, its heavy computational cost makes it hard to apply in real-world applications. Many attempts have been made to accelerate the GraphCut algorithm, most successfully in methods that utilize parallel computing platforms like CUDA. In this paper, we introduce a parallel implementation of the push-relabel algorithm for GraphCut on CUDA, designed for the image stitching problem. Furthermore, we propose a global suppression heuristic to accelerate the convergence of the algorithm. Experiment results on sets of thermal infrared and RGB images show that our method can be up to 3 times faster than the fastest sequential algorithm while obtaining satisfactory stitched images. Our source code will soon be available for further research. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
33. Semantic segmentation of textured mosaics.
- Author
-
Cote, Melissa, Dash, Amanda, and Branzan Albu, Alexandra
- Subjects
- *
TEXTURE analysis (Image processing) , *DATA augmentation , *DEEP learning - Abstract
This paper investigates deep learning (DL)-based semantic segmentation of textured mosaics. Existing popular datasets for mosaic texture segmentation, designed prior to the DL era, have several limitations: (1) training images are single-textured and thus differ from the multi-textured test images; (2) training and test textures are typically cut out from the same raw images, which may hinder model generalization; (3) each test image has its own limited set of training images, thus forcing the inefficient training of one model per test image from little data. We propose two texture segmentation datasets, based on the existing Outex and DTD datasets, that are suitable for training semantic segmentation networks and that address the above limitations: SemSegOutex focuses on materials acquired under controlled conditions, and SemSegDTD focuses on visual attributes of textures acquired in the wild. We also generate a synthetic version of SemSegOutex via texture synthesis that can be used in the same way as standard random data augmentation. Finally, we study the performance of the state-of-the-art DeepLabv3+ for textured mosaic segmentation, which is excellent for SemSegOutex and variable for SemSegDTD. Our datasets allow us to analyze results according to the type of material, visual attributes, various image acquisition artifacts, and natural versus synthetic aspects, yielding new insights into the possible usage of recent DL technologies for texture analysis. Article highlights: We propose two texture segmentation datasets that address the limitations of existing texture segmentation datasets. Experiments with materials and attributes shed new light on recent deep learning technologies for texture analysis. Our results also suggest that synthetic textures can be used for data augmentation to improve segmentation results. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
34. Inpainting in Discrete Sobolev Spaces: Structural Information for Uncertainty Reduction.
- Author
-
Seracini, Marco and Brown, Stephen R.
- Subjects
SOBOLEV spaces ,INPAINTING ,FINITE differences - Abstract
In this article, we introduce a new mathematical functional whose minimization determines the quality of the solution for the exemplar-based inpainting-by-patch problem. The new functional expression includes finite difference terms in a similar fashion to what happens in the theoretical Sobolev spaces: its use reduces the uncertainty in the choice of the most suitable values for each point to inpaint. Moreover, we introduce a probabilistic model by which we prove that the usual principal directions, generally employed for continuous problems, are not enough to achieve consistent reconstructions in the discrete inpainting setting. Finally, we formalize a new priority index and new rules for its dynamic update. The quality of the reconstructions, achieved using a neighborhood size reduced by more than 95% with respect to the current state-of-the-art algorithms based on the same inpainting approach, provides further experimental validation of the method. [ABSTRACT FROM AUTHOR]
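The article's central idea, augmenting the patch-matching cost with finite-difference (gradient) terms in the spirit of Sobolev norms, can be written schematically (this is my notation, not the paper's exact functional): for a candidate patch $q$ compared against a target neighborhood $p$ over the known pixels $\Omega$,

```latex
E(p, q) \;=\; \sum_{i \in \Omega} \lvert p_i - q_i \rvert^2
\;+\; \lambda \sum_{i \in \Omega} \lvert \nabla_h p_i - \nabla_h q_i \rvert^2 ,
```

where $\nabla_h$ denotes forward finite differences and $\lambda > 0$ weights the Sobolev-like term. Minimizing $E$ over candidate patches prefers matches that agree in both value and local variation, which is what reduces the ambiguity among candidates that a pure value-based SSD cannot distinguish.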
- Published
- 2023
- Full Text
- View/download PDF
35. The infinite doodler: expanding textures within tightly constrained manifolds.
- Author
-
Baluja, Shumeet
- Subjects
- *
DOODLES , *SCHOOL environment - Abstract
Hand-drawn doodles present a difficult set of textures to model and synthesize. Unlike the typical natural images that are most often used in texture synthesis studies, the doodles examined here are characterized by the use of sharp, irregular, and imperfectly scribbled patterns, frequent imprecise strokes, haphazardly connected edges, and randomly or spatially shifting themes. The almost binary nature of the doodles examined makes it difficult to hide common mistakes such as discontinuities. Further, there is no color or shading to mask flaws and repetition; any process that relies on, even stochastic, region copying is readily discernible. To tackle the problem of synthesizing these textures, we model the underlying generation process of the doodle taking into account potential unseen, but related, expansion contexts. We demonstrate how to generate infinitely long textures, such that the texture can be extended far beyond a single image's source material. This is accomplished by creating a novel learning mechanism that is taught to condition the generation process on its own generated context—what was generated in previous steps—not just upon the original. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
36. Multi-exemplar-guided image weathering via texture synthesis.
- Author
-
Du, Shiyin and Song, Ying
- Subjects
- *
METEOROLOGICAL charts , *WEATHERING - Abstract
We propose a novel method for generating gradually varying weathering effects from a single image. Time-variant weathering effects tend to appear simultaneously on one object. Compared to previous methods, our method is able to obtain gradually changing weathering effects through simple interactions while keeping texture variations and shading details. We first classify the weathering regions into several stages based on a weathering degree map extracted from the image. For each weathering stage, we automatically extract the corresponding weathering sample, from which a texture image is synthesized subsequently. Then, we generate weathering effects by fusing different textures according to the weathering degree of the image pixels. Finally, in order to maintain the intrinsic shape details of the object during the fusing process, we utilize a new shading-preserving method taking account of the weathering degrees. Experiments show that our method is able to produce visually realistic and time-variant weathering effects interactively. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
37. Exemplar-Based Texture Synthesis Using Two Random Coefficients Autoregressive Models
- Author
-
Ayoub Abderrazak Maarouf, Fella Hachouf, and Soumia Kharfouchi
- Subjects
exemplar based method ,gmm ,local approximated images ,texture synthesis ,2d-rca models ,Medicine (General) ,R5-920 ,Mathematics ,QA1-939 - Abstract
Example-based texture synthesis is a fundamental topic of many image analysis and computer vision applications. Consequently, its representation is one of the most critical and challenging topics in computer vision and pattern recognition, attracting much academic interest throughout the years. In this paper, a new statistical method to synthesize textures is proposed. It consists of using two indexed random coefficients autoregressive (2D-RCA) models to deal with this problem. These models are well suited to capturing neighborhood information. Simulations have demonstrated that the 2D-RCA models are very suitable for representing textures. So, in this work, to generate textures from an example, each original image is split into blocks which are modeled by the 2D-RCA. The proposed algorithm produces approximations of the obtained block images using the generalized method of moments (GMM). Different sizes of windows have been used. This study offers some important insights into the newly generated image. The satisfying results obtained have been compared to those given by well-established methods. The proposed algorithm outperforms the state-of-the-art approaches.
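To make the autoregressive idea concrete, here is a minimal causal 2D AR synthesis sketch with constant coefficients. Note the simplification: the paper uses *random-coefficient* AR models (2D-RCA) fitted per block via GMM, which this sketch deliberately does not attempt; all names and parameter values are illustrative.

```python
import numpy as np

def synth_ar2d(h, w, a_left=0.5, a_up=0.45, noise=0.05, seed=0):
    """Generate a texture with a causal 2D autoregressive model.

    Each pixel is a weighted sum of its left and upper neighbors plus
    Gaussian noise; a_left + a_up < 1 keeps the recursion stable.
    """
    rng = np.random.default_rng(seed)
    img = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            left = img[y, x - 1] if x > 0 else 0.0
            up = img[y - 1, x] if y > 0 else 0.0
            img[y, x] = a_left * left + a_up * up + noise * rng.standard_normal()
    return img

tex = synth_ar2d(64, 64)
print(tex.shape)  # (64, 64)
```

Varying `a_left` and `a_up` changes the directional correlation of the output; the RCA extension lets these coefficients themselves be random variables, which is what gives the model its flexibility for natural textures.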
- Published
- 2023
- Full Text
- View/download PDF
38. Autocompletion of repetitive stroking with image guidance
- Author
-
Yilan Chen, Kin Chung Kwan, and Hongbo Fu
- Subjects
interaction ,autocompletion ,digital drawing ,prediction ,texture synthesis ,Electronic computers. Computer science ,QA75.5-76.95 - Abstract
Image-guided drawing can compensate for a lack of skill but often requires a significant number of repetitive strokes to create textures. Existing automatic stroke synthesis methods are usually limited to predefined styles or require indirect manipulation that may break the spontaneous flow of drawing. We present an assisted drawing system to autocomplete repetitive short strokes during a user’s normal drawing process. Users draw over a reference image as usual; at the same time, our system silently analyzes the input strokes and the reference to infer strokes that follow the user’s input style when certain repetition is detected. Users can accept, modify, or ignore the system’s predictions and continue drawing, thus maintaining fluid control over drawing. Our key idea is to jointly analyze image regions and user input history to detect and predict repetition. The proposed system can effectively reduce the user’s workload when drawing repetitive short strokes, helping users to create results with rich patterns.
- Published
- 2023
- Full Text
- View/download PDF
39. Scraping Textures from Natural Images for Synthesis and Editing
- Author
-
Li, Xueting, Wang, Xiaolong, Yang, Ming-Hsuan, Efros, Alexei A., Liu, Sifei, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Avidan, Shai, editor, Brostow, Gabriel, editor, Cissé, Moustapha, editor, Farinella, Giovanni Maria, editor, and Hassner, Tal, editor
- Published
- 2022
- Full Text
- View/download PDF
40. Comparison of Algorithms for Style Transfer of Images Using Texture Synthesis
- Author
-
Rao, Vishakh, Hemanth, K. U., Rudra, Bhawana, Kacprzyk, Janusz, Series Editor, Pal, Nikhil R., Advisory Editor, Bello Perez, Rafael, Advisory Editor, Corchado, Emilio S., Advisory Editor, Hagras, Hani, Advisory Editor, Kóczy, László T., Advisory Editor, Kreinovich, Vladik, Advisory Editor, Lin, Chin-Teng, Advisory Editor, Lu, Jie, Advisory Editor, Melin, Patricia, Advisory Editor, Nedjah, Nadia, Advisory Editor, Nguyen, Ngoc Thanh, Advisory Editor, Wang, Jun, Advisory Editor, Giri, Debasis, editor, Raymond Choo, Kim-Kwang, editor, Ponnusamy, Saminathan, editor, Meng, Weizhi, editor, Akleylek, Sedat, editor, and Prasad Maity, Santi, editor
- Published
- 2022
- Full Text
- View/download PDF
41. A Fast Cement Microstructure Texture Image Synthesis Method Based on PixelCNN
- Author
-
Huang, Xiaosheng, Duan, Runtao, Zhao, Yuxiao, Angrisani, Leopoldo, Series Editor, Arteaga, Marco, Series Editor, Panigrahi, Bijaya Ketan, Series Editor, Chakraborty, Samarjit, Series Editor, Chen, Jiming, Series Editor, Chen, Shanben, Series Editor, Chen, Tan Kay, Series Editor, Dillmann, Rüdiger, Series Editor, Duan, Haibin, Series Editor, Ferrari, Gianluigi, Series Editor, Ferre, Manuel, Series Editor, Hirche, Sandra, Series Editor, Jabbari, Faryar, Series Editor, Jia, Limin, Series Editor, Kacprzyk, Janusz, Series Editor, Khamis, Alaa, Series Editor, Kroeger, Torsten, Series Editor, Li, Yong, Series Editor, Liang, Qilian, Series Editor, Martín, Ferran, Series Editor, Ming, Tan Cher, Series Editor, Minker, Wolfgang, Series Editor, Misra, Pradeep, Series Editor, Möller, Sebastian, Series Editor, Mukhopadhyay, Subhas, Series Editor, Ning, Cun-Zheng, Series Editor, Nishida, Toyoaki, Series Editor, Pascucci, Federica, Series Editor, Qin, Yong, Series Editor, Seng, Gan Woon, Series Editor, Speidel, Joachim, Series Editor, Veiga, Germano, Series Editor, Wu, Haitao, Series Editor, Zamboni, Walter, Series Editor, Zhang, Junjie James, Series Editor, Liang, Jianying, editor, Liu, Zhigang, editor, Diao, Lijun, editor, and An, Min, editor
- Published
- 2022
- Full Text
- View/download PDF
42. Novel View Synthesis Of Transparent Object From a Single Image.
- Author
-
Zhou, Shizhe, Wang, Zezu, and Ye, Dongwei
- Subjects
- *
LIGHT transmission , *TEST methods , *REFRACTION (Optics) , *VIDEO processing - Abstract
We propose a method for converting a single image of a transparent object into multi‐view photos, enabling users to observe the object from multiple new angles without inputting any 3D shape. The complex light paths formed by refraction and reflection make it challenging to compute the lighting effects of transparent objects from a new angle. We construct an encoder–decoder network for normal reconstruction and texture extraction, which enables synthesizing novel views of a transparent object for a set of new views and new environment maps using only one RGB image. By simultaneously considering optical transmission and perspective variation, our network learns the characteristics of optical transmission and the change of perspective as guidance for the conversion from RGB colours to surface normals. A texture extraction subnetwork is proposed to alleviate the contour loss phenomenon during normal map generation. We test our method using 3D objects within and outside our training data, including real 3D objects from our lab and completely new environment maps taken with our phones. The results show that our method performs better on view synthesis of transparent objects in complex scenes using only a single‐view image. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
43. A Non-Uniform Texture Synthesis Method Based on Relative Coordinate Control (基于相对坐标控制的非均匀纹理合成方法).
- Author
-
陈凯健, 李二强, and 周漾
- Abstract
Copyright of Journal of Computer-Aided Design & Computer Graphics / Jisuanji Fuzhu Sheji Yu Tuxingxue Xuebao is the property of Gai Kan Bian Wei Hui and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2023
- Full Text
- View/download PDF
44. Development of Texture Mapping Approaches for Additively Manufacturable Surfaces
- Author
-
Bhupesh Verma, Omid Zarei, Song Zhang, and Johannes Henrich Schleifenbaum
- Subjects
Texture synthesis ,Design for additive manufacturing ,Image processing ,Textured surface generation ,Texture mapping ,Ocean engineering ,TC1501-1800 ,Mechanical engineering and machinery ,TJ1-1570 - Abstract
Additive manufacturing (AM) technologies have long been recognized for their capability to build complex components and hence offer designers greater freedom. The ability to directly use a computer-aided design (CAD) model makes it possible to fabricate complicated components, realize monolithic designs, reduce the number of components in an assembly, decrease time to market, and add performance- or comfort-enhancing functionalities. One feature that can boost a component's functionality using AM is the inclusion of surface texture on a given component. This inclusion is usually difficult, as creating a CAD model that resolves the fine details of a given texture is hard even with commercial software packages. This paper develops a methodology to include texture directly on the CAD model of a target surface using a patch-based sampling texture synthesis algorithm, such that the result can be manufactured using AM. Input for the texture generation algorithm can be either a physical sample or an image with heightmap information. The heightmap information of a physical sample can be obtained by 3D scanning the sample and using the information from the acquired point cloud. After obtaining the required inputs, patches are sampled for texture generation according to a non-parametric estimation of the local conditional Markov random field (MRF) density function, which helps avoid mismatched features across patch boundaries. While generating the texture, a design constraint ensuring AM producibility is enforced, which is essential when manufacturing a component using, e.g., Fused Deposition Modeling (FDM) or Laser Powder Bed Fusion (LPBF). The generated texture is then mapped onto the surface using the developed distance- and angle-preserving mapping algorithms, which can map the generated texture onto any mathematically defined surface.
This paper maps the textures onto flat, curved, and sinusoidal surfaces for illustration. After texture mapping, a stereolithography (STL) model is generated with the desired texture on the target surface. The generated STL model is printed using FDM technology as a final step.
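The patch-based sampling step can be sketched generically (this follows the spirit of MRF-guided patch sampling; the heightmap handling, AM producibility constraint, and surface mapping from the paper are omitted, and all names and sizes are illustrative): each new patch is drawn from candidates in the exemplar, scored by how well its overlap region matches what has already been placed.

```python
import numpy as np

def pick_patch(exemplar, left_edge, patch_size=32, overlap=4, n_cand=200, seed=0):
    """Pick the candidate patch whose left overlap best matches `left_edge`.

    left_edge: (patch_size, overlap) strip already placed to the left.
    Returns the best-matching (patch_size, patch_size) patch under an SSD
    criterion, which approximates sampling from the local MRF density.
    """
    rng = np.random.default_rng(seed)
    h, w = exemplar.shape[:2]
    best, best_cost = None, np.inf
    for _ in range(n_cand):
        y = rng.integers(0, h - patch_size + 1)
        x = rng.integers(0, w - patch_size + 1)
        cand = exemplar[y:y + patch_size, x:x + patch_size]
        cost = np.sum((cand[:, :overlap] - left_edge) ** 2)  # overlap mismatch
        if cost < best_cost:
            best, best_cost = cand, cost
    return best

exemplar = np.random.rand(128, 128)          # stand-in for a heightmap exemplar
prev = exemplar[:32, :32]                    # previously placed patch
nxt = pick_patch(exemplar, prev[:, -4:])     # choose the next patch to its right
print(nxt.shape)  # (32, 32)
```

Repeating this left-to-right and top-to-bottom (with both left and top overlaps) tiles an arbitrarily large texture whose seams stay consistent with the exemplar's local statistics.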
- Published
- 2022
- Full Text
- View/download PDF
45. Fast Structural Texture Image Synthesis Algorithm Based on Seam Consistency Criterion
- Author
-
JIN Li-zhen, LI Qing-zhong
- Subjects
texture synthesis ,seam line consistency ,non-overlapping splicing ,hsi color space ,structure information ,Computer software ,QA76.75-76.765 ,Technology (General) ,T1-995 - Abstract
Aiming at the problems of patch-based synthesis algorithms for structured texture images, such as discontinuity of structure, distortion of boundaries, seam misalignment, and low synthesis speed, a new fast non-overlapping synthesis algorithm for texture images is proposed based on a double-seam-line consistency criterion, effectively improving the synthesis quality and speed of structured texture images. First, a seam-line consistency criterion that simultaneously considers hue, saturation, intensity, and edge characteristics is established in the HSI color space, which is more consistent with human visual characteristics. Then, a sub-block search strategy and a new non-overlapping splicing algorithm based on the double-seam-line consistency criterion are proposed and implemented. The experimental results show that the proposed algorithm significantly improves the synthesis quality and speed of structured texture images in comparison with traditional algorithms.
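Since the criterion compares hue, saturation, and intensity, the underlying RGB-to-HSI conversion is worth spelling out. This is the standard textbook formulation (not necessarily the paper's exact variant):

```python
import numpy as np

def rgb_to_hsi(rgb):
    """Convert RGB in [0, 1] with shape (..., 3) to HSI (hue in radians)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    i = (r + g + b) / 3.0                                     # intensity
    s = 1.0 - 3.0 * np.minimum(np.minimum(r, g), b) / (r + g + b + 1e-12)
    num = 0.5 * ((r - g) + (r - b))                           # hue numerator
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-12   # hue denominator
    h = np.arccos(np.clip(num / den, -1.0, 1.0))
    h = np.where(b > g, 2.0 * np.pi - h, h)                   # resolve quadrant
    return np.stack([h, s, i], axis=-1)

# Pure red: hue ~0 rad, full saturation, intensity 1/3.
hsi = rgb_to_hsi(np.array([1.0, 0.0, 0.0]))
print(np.round(hsi, 3))  # [0.    1.    0.333]
```

A seam cost in this space can then weight hue, saturation, intensity, and edge differences separately, matching perceptual sensitivity better than raw RGB distances.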
- Published
- 2022
- Full Text
- View/download PDF
46. Diverse non-homogeneous texture synthesis from a single exemplar.
- Author
-
Phillips, A., Lang, J., and Mould, D.
- Subjects
- *
GENERATIVE adversarial networks , *AUTOENCODER , *INFORMATION networks , *SAMPLE size (Statistics) , *INFORMATION sharing - Abstract
Capturing non-local, long range features present in non-homogeneous textures is difficult to achieve with existing techniques. We introduce a new training method and architecture for single-exemplar texture synthesis that combines a Generative Adversarial Network (GAN) and a Variational Autoencoder (VAE). In the proposed architecture, the combined networks share information during training via structurally identical, independent blocks, facilitating highly diverse texture variations from a single image exemplar. Supporting this training method, we also include a similarity loss term that further encourages diverse output while also improving the overall quality. Using our approach, it is possible to produce diverse results over the entire sample size taken from a single model that can be trained in approximately 15 min. We show that our approach obtains superior performance when compared to SOTA texture synthesis methods and single image GAN methods using standard diversity and quality metrics. • Synthesizing long-range features in varied, non-homogeneous textures is challenging • Our method combines two models to create high quality, diverse texture variations • Combining VAE and GAN, we leverage probabilistic sampling and fine-detail generation • We estimate feature diversity during training with a sampling loss for regularization [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
47. Voxel-wise UV parameterization and view-dependent texture synthesis for immersive rendering of truncated signed distance field scene model
- Author
-
Soowoong Kim and Jungwon Kang
- Subjects
immersive rendering ,multiview image processing ,texture synthesis ,view-dependent texture mapping ,volumetric video representation ,Telecommunication ,TK5101-6720 ,Electronics ,TK7800-8360 - Abstract
In this paper, we introduce a novel voxel-wise UV parameterization and view-dependent texture synthesis for the immersive rendering of a truncated signed distance field (TSDF) scene model. The proposed UV parameterization assigns a precomputed UV map to each voxel via a UV-map lookup table, consequently enabling efficient and high-quality texture mapping without a complex process. Leveraging this convenient UV parameterization, our view-dependent texture synthesis method extracts a set of local texture maps for each voxel from the multiview color images and separates them into a single view-independent diffuse map and a set of weight coefficients for an orthogonal specular map basis. Furthermore, the view-dependent specular maps for an arbitrary view are estimated by combining the specular weights of each source view according to the locations of the arbitrary and source viewpoints, generating view-dependent textures for arbitrary views. The experimental results demonstrate that the proposed method effectively synthesizes texture for an arbitrary view, thereby enabling the visualization of view-dependent effects such as specularity and mirror reflection.
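The diffuse-plus-weighted-specular-basis decomposition can be sketched per texel (the cosine-based interpolation of source-view weights here is my own illustrative choice, not the paper's scheme, and all names are assumptions):

```python
import numpy as np

def view_dependent_color(diffuse, spec_basis, src_weights, src_dirs, view_dir):
    """Blend per-view specular coefficients by viewpoint proximity, then combine.

    diffuse:     (3,)   view-independent color
    spec_basis:  (K, 3) orthogonal specular basis colors
    src_weights: (V, K) specular coefficients per source view
    src_dirs:    (V, 3) unit view directions of the source views
    view_dir:    (3,)   unit direction of the novel view
    """
    sim = src_dirs @ view_dir            # cosine similarity to each source view
    w = np.maximum(sim, 0.0)             # ignore back-facing source views
    w /= w.sum() + 1e-12                 # normalized blending weights
    coeffs = w @ src_weights             # (K,) interpolated specular coefficients
    return diffuse + coeffs @ spec_basis # final view-dependent color

rng = np.random.default_rng(0)
c = view_dependent_color(
    diffuse=np.array([0.4, 0.3, 0.2]),
    spec_basis=rng.random((4, 3)) * 0.1,
    src_weights=rng.random((6, 4)),
    src_dirs=np.eye(3)[[0, 1, 2, 0, 1, 2]] * 1.0,
    view_dir=np.array([1.0, 0.0, 0.0]),
)
print(c.shape)  # (3,)
```

Storing only a diffuse map plus K coefficient sets per voxel is far cheaper than storing a full texture per source view, which is the point of the basis decomposition.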
- Published
- 2022
- Full Text
- View/download PDF
48. EXEMPLAR-BASED TEXTURE SYNTHESIS USING TWO RANDOM COEFFICIENTS AUTOREGRESSIVE MODELS.
- Author
-
MAAROUF, AYOUB ABDERRAZAK, HACHOUF, FELLA, and KHARFOUCHI, SOUMIA
- Subjects
- *
AUTOREGRESSIVE models , *PATTERN recognition systems , *GENERALIZED method of moments , *TEXTURE analysis (Image processing) , *COMPUTER vision , *TWO-dimensional models , *IMAGE analysis , *AUTOREGRESSION (Statistics) - Abstract
Example-based texture synthesis is a fundamental topic of many image analysis and computer vision applications. Consequently, its representation is one of the most critical and challenging topics in computer vision and pattern recognition, attracting much academic interest throughout the years. In this paper, a new statistical method to synthesize textures is proposed. It consists of using two indexed random coefficients autoregressive (2D-RCA) models to deal with this problem. These models are well suited to capturing neighborhood information. Simulations have demonstrated that the 2D-RCA models are very suitable for representing textures. So, in this work, to generate textures from an example, each original image is split into blocks which are modeled by the 2D-RCA. The proposed algorithm produces approximations of the obtained block images using the generalized method of moments (GMM). Different sizes of windows have been used. This study offers some important insights into the newly generated image. The satisfying results obtained have been compared to those given by well-established methods. The proposed algorithm outperforms the state-of-the-art approaches. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
49. A Generative Model for Texture Synthesis based on Optimal Transport Between Feature Distributions.
- Author
-
Houdard, Antoine, Leclaire, Arthur, Papadakis, Nicolas, and Rabin, Julien
- Abstract
We propose GOTEX, a general framework for texture synthesis by optimization that constrains the statistical distribution of local features. While our model encompasses several existing texture models, we focus on the case where the comparison between feature distributions relies on optimal transport distances. We show that the semi-dual formulation of optimal transport allows control of the distribution of various possible features, even if these features live in a high-dimensional space. We then study the resulting minimax optimization problem, which corresponds to a Wasserstein generative model, for which the inner concave maximization problem can be solved with standard stochastic gradient methods. The alternating optimization algorithm is shown to be versatile in terms of applications, features and architecture; in particular, it can produce high-quality synthesized textures with different sets of features. We analyze the results obtained by constraining the distribution of patches or the distribution of responses to a pre-learned VGG neural network. We show that the patch representation can retrieve the desired textural aspect in a more precise manner. We also provide a detailed comparison with state-of-the-art texture synthesis methods. The GOTEX model based on patch features is also adapted to texture inpainting and texture interpolation. Finally, we show how to use our framework to learn a feed-forward neural network that can synthesize new textures of arbitrary size on the fly, very quickly. Experimental results and comparisons with the mainstream methods from the literature illustrate the relevance of the generative models learned with GOTEX. [ABSTRACT FROM AUTHOR]
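The semi-dual formulation the authors exploit can be written schematically in standard optimal-transport notation (my simplification of the setup, not the paper's exact statement): for the feature distribution $\mu$ of the synthesized texture and the exemplar feature distribution $\nu$, with ground cost $c$,

```latex
\min_{\mu} \, W_c(\mu, \nu)
\;=\;
\min_{\mu} \, \max_{\psi} \;
\mathbb{E}_{x \sim \mu}\!\left[\psi^{c}(x)\right]
+ \mathbb{E}_{y \sim \nu}\!\left[\psi(y)\right],
\qquad
\psi^{c}(x) = \min_{y} \, c(x, y) - \psi(y),
```

where $\psi^{c}$ is the $c$-transform of the dual potential $\psi$. The inner maximization over $\psi$ is concave and amenable to stochastic gradient ascent, while the outer minimization updates the synthesized texture (or a generator network), which is the alternating scheme described in the abstract.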
- Published
- 2023
- Full Text
- View/download PDF
50. Texture Synthesis
- Author
-
Chen, Dongdong, Yuan, Lu, Hua, Gang, and Ikeuchi, Katsushi, editor
- Published
- 2021
- Full Text
- View/download PDF