17 results for "Stefan Guthe"
Search Results
2. Depth-of-Field Segmentation for Near-lossless Image Compression and 3D Reconstruction
- Author
-
Max von Buelow, Reimar Tausch, Martin Schurig, Volker Knauthe, Tristan Wirth, Stefan Guthe, Pedro Santos, and Dieter W. Fellner
- Subjects
Research Line: Computer vision (CV), Image segmentation, Image compression, Lead Topic: Digitized Work, Cultural heritage, Conservation, 3D Reconstruction, Computer Graphics and Computer-Aided Design, Computer Science Applications, Information Systems
- Abstract
Over the years, photometric three-dimensional (3D) reconstruction has gained increasing importance in several disciplines, especially in cultural heritage preservation. While growing image and dataset sizes have enhanced the overall reconstruction results, the storage requirements have grown immensely. Additionally, unsharp areas in the background have a negative influence on 3D reconstruction algorithms. Handling the sharp foreground differently from the background simultaneously reduces storage requirements and improves 3D reconstruction results. In this article, we examine regions outside the Depth of Field (DoF) and eliminate their inaccurate contribution to the 3D reconstruction. We extract DoF maps from the images and use them to handle the foreground and background with different compression backends, ensuring that the actual object is compressed losslessly. Our algorithm achieves compression rates between 1:8 and 1:30, depending on the artifact and DoF size, and improves the 3D reconstruction.
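The split-backend idea from the abstract can be illustrated with a minimal Python sketch. It separates a (here 1-D) pixel stream by a hypothetical binary DoF mask, compresses the in-focus part losslessly, and subsamples the out-of-focus part before compression. zlib and plain subsampling are stand-ins for illustration only, not the backends used in the paper.

```python
import zlib

def compress_with_dof_mask(pixels, in_focus, keep_every=4):
    """Split pixels by a binary depth-of-field mask and compress the
    sharp foreground losslessly while subsampling the blurry background.

    pixels     -- bytes, one luminance value per pixel
    in_focus   -- list of bools, True where the pixel lies inside the DoF
    keep_every -- background subsampling factor (the lossy step)
    """
    foreground = bytes(p for p, f in zip(pixels, in_focus) if f)
    background = bytes(p for p, f in zip(pixels, in_focus) if not f)
    # Lossless backend for the actual object ...
    fg_stream = zlib.compress(foreground, 9)
    # ... and a lossy backend (subsample, then entropy-code) for the rest.
    bg_stream = zlib.compress(background[::keep_every], 9)
    compressed = len(fg_stream) + len(bg_stream)
    return fg_stream, bg_stream, len(pixels) / max(compressed, 1)
```

The achievable ratio depends directly on how much of the image the DoF mask marks as background, mirroring the paper's observation that compression rates vary with artifact and DoF size.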
- Published
- 2022
- Full Text
- View/download PDF
3. Algorithm 1015
- Author
-
Daniel Thuerck and Stefan Guthe
- Subjects
Computer science, Applied Mathematics, Solver, Auction algorithm, Parallel processing (DSP implementation), Scalability, Limit (mathematics), Assignment problem, Scaling, Dijkstra's algorithm, Algorithm, Software
- Abstract
We present a new algorithm for solving the dense linear (sum) assignment problem and an efficient, parallel implementation that is based on the successive shortest path algorithm. More specifically, we introduce the well-known epsilon scaling approach used in the Auction algorithm to approximate the dual variables of the successive shortest path algorithm prior to solving the assignment problem to limit the complexity of the path search. This improves the runtime by several orders of magnitude for hard-to-solve real-world problems, making the runtime virtually independent of how hard the assignment is to find. In addition, our approach allows for using accelerators and/or external compute resources to calculate individual rows of the cost matrix. This enables us to solve problems that are larger than what has been reported in the past, including the ability to efficiently solve problems whose cost matrix exceeds the available system memory. To our knowledge, this is the first implementation that is able to solve problems with more than one trillion arcs in less than 100 hours on a single machine.
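The epsilon-scaling idea mentioned in the abstract can be sketched with a textbook Bertsekas-style auction solver for the dense assignment problem. This is a toy analogue of the dual approximation used to warm-start the paper's successive-shortest-path solver, not its parallel implementation; the scaling schedule below is an illustrative choice.

```python
def auction_assignment(cost):
    """Solve the dense min-cost linear assignment problem with a
    Bertsekas-style auction and epsilon scaling.  For integer costs,
    a final eps below 1/n yields an optimal assignment."""
    n = len(cost)
    benefit = [[-c for c in row] for row in cost]   # auction maximizes
    price = [0.0] * n                               # dual variables
    # Epsilon scaling: start coarse, finish with eps * n < 1.
    eps_list, e = [], (max(abs(c) for row in cost for c in row) or 1) / 2.0
    while e * n >= 1.0:
        eps_list.append(e)
        e /= 4.0
    eps_list.append(e)
    assignment = [None] * n
    for eps in eps_list:                            # prices carry over scales
        owner = [None] * n
        assignment = [None] * n
        unassigned = list(range(n))
        while unassigned:
            i = unassigned.pop()
            values = [benefit[i][j] - price[j] for j in range(n)]
            j_best = max(range(n), key=values.__getitem__)
            v_best = values[j_best]
            v_second = max((values[j] for j in range(n) if j != j_best),
                           default=v_best)
            price[j_best] += v_best - v_second + eps   # bidding increment
            if owner[j_best] is not None:              # outbid previous owner
                unassigned.append(owner[j_best])
            owner[j_best] = i
            assignment[i] = j_best
    return assignment
```

Reusing the prices (dual variables) from one scale as the starting point of the next is the same warm-starting principle the abstract describes.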
- Published
- 2021
- Full Text
- View/download PDF
4. CySecAlert: An Alert Generation System for Cyber Security Events Using Open Source Intelligence Data
- Author
-
Marc-André Kaufhold, Christian Reuter, Tristan Wirth, Philipp Kuehn, Volker Knauthe, Thea Riebe, Markus Bayer, and Stefan Guthe
- Subjects
Open-source intelligence, Computer science, Active learning (machine learning), Social media, Context (language use), Relevance (information retrieval), Data breach, Cluster analysis, Computer security, Classifier (UML)
- Abstract
Receiving relevant information on possible cyber threats, attacks, and data breaches in a timely manner is crucial for early response. The social media platform Twitter hosts an active cyber security community. Their activities are often monitored manually by security experts, such as Computer Emergency Response Teams (CERTs). We thus propose a Twitter-based alert generation system that issues alerts to a system operator as soon as new relevant cyber security related topics emerge. Thereby, our system allows us to monitor user accounts with significantly less workload. Our system applies a supervised classifier, based on active learning, that detects tweets containing relevant information. The results indicate that uncertainty sampling can reduce the amount of manual relevance classification effort and enhance the classifier performance substantially compared to random sampling. Our approach reduces the number of accounts and tweets that are needed for the classifier training, thus making the tool easily and rapidly adaptable to the specific context while also supporting data minimization for Open Source Intelligence (OSINT). Relevant tweets are clustered by a greedy stream clustering algorithm in order to identify significant events. The proposed system is able to work near real-time within the required 15-minute time frame and detects up to 93.8% of relevant events with a false alert rate of 14.81%.
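The uncertainty-sampling step described in the abstract can be sketched in a few lines: from an unlabeled pool, the active learner requests labels for the items whose predicted relevance probability is closest to 0.5. The keyword-based scorer below is a hypothetical stand-in for the paper's trained classifier, used only to make the sketch self-contained.

```python
def uncertainty_sample(pool, predict_proba, batch_size=2):
    """Return the unlabeled items the classifier is least certain about,
    i.e. those with predicted probability closest to 0.5."""
    ranked = sorted(pool, key=lambda x: abs(predict_proba(x) - 0.5))
    return ranked[:batch_size]

def toy_relevance_proba(tweet, cues=("exploit", "breach", "CVE")):
    """Hypothetical stand-in classifier: the probability of relevance
    grows with the number of security cue words present."""
    hits = sum(cue.lower() in tweet.lower() for cue in cues)
    return min(0.25 + 0.25 * hits, 1.0)
```

Labeling these maximally ambiguous tweets first is what lets the approach reach good classifier performance with far fewer manually labeled examples than random sampling.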
- Published
- 2021
- Full Text
- View/download PDF
5. Rapid, detail-preserving image downscaling
- Author
-
Michael Goesele, Michael Waechter, Sandra C. Amend, Stefan Guthe, and Nicolas Weber
- Subjects
Pixel, Computer science, Image processing, Computer Graphics and Computer-Aided Design, Image (mathematics), Computer vision, Artificial intelligence, Downscaling
- Abstract
Image downscaling is arguably the most frequently used image processing tool. We present an algorithm based on convolutional filters where input pixels contribute more to the output image the more their color deviates from their local neighborhood, which preserves visually important details. In a user study we verify that users prefer our results over related work. Our efficient GPU implementation works in real-time when downscaling images from 24 M to 70 k pixels. Further, we demonstrate empirically that our method can be successfully applied to videos.
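The core weighting idea from the abstract can be illustrated with a toy 1-D analogue: pixels that deviate strongly from their local neighborhood receive a larger weight in the output average, so salient detail survives downscaling. The block-based neighborhood and the gain factor `k` are simplifying assumptions, not the paper's convolutional-filter formulation.

```python
def detail_preserving_downscale(row, factor, k=8.0):
    """Downscale a 1-D pixel row so that pixels deviating from their
    local neighborhood (visually important detail) contribute more."""
    out = []
    for o in range(len(row) // factor):
        block = row[o * factor:(o + 1) * factor]
        mean = sum(block) / len(block)
        # Weight each pixel by 1 + k * its deviation from the block mean.
        weights = [1.0 + k * abs(p - mean) for p in block]
        out.append(sum(w * p for w, p in zip(weights, block)) / sum(weights))
    return out
```

Compared with a plain box filter, an isolated bright pixel pulls the output value noticeably toward itself instead of being averaged away.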
- Published
- 2016
- Full Text
- View/download PDF
6. Decoupled Space and Time Sampling of Motion and Defocus Blur for Unified Rendering of Transparent and Opaque Objects
- Author
-
Michael Goesele, Sven Widmer, D. Thul, Dominik Wodniok, and Stefan Guthe
- Subjects
Computer science, Software rendering, Bounding volume hierarchy, Frame rate, Computer Graphics and Computer-Aided Design, 3D rendering, Rendering (computer graphics), Real-time computer graphics, Computer graphics (images), Ray tracing (graphics), Computer vision, Shading, Artificial intelligence, Alternate frame rendering, 3D computer graphics
- Abstract
We propose a unified rendering approach that jointly handles motion and defocus blur for transparent and opaque objects at interactive frame rates. Our key idea is to create a sampled representation of all parts of the scene geometry that are potentially visible at any point in time for the duration of a frame in an initial rasterization step. We store the resulting temporally-varying fragments (t-fragments) in a bounding volume hierarchy which is rebuilt every frame using a fast spatial median construction algorithm. This makes our approach suitable for interactive applications with dynamic scenes and animations. Next, we perform spatial sampling to determine all t-fragments that intersect with a specific viewing ray at any point in time. Viewing rays are sampled according to the lens uv-sampling for depth-of-field effects. In a final temporal sampling step, we evaluate the pre-determined viewing ray/t-fragment intersections for one or multiple points in time. This allows us to incorporate all standard shading effects including transparency. We describe the overall framework, present our GPU implementation, and evaluate our rendering approach with respect to scalability, quality, and performance.
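The per-frame spatial-median hierarchy construction mentioned in the abstract can be sketched in one dimension: split each node at the spatial midpoint of its bounds and recurse. Fragments are reduced here to 1-D intervals for brevity; this is a simplified sketch, not the paper's GPU builder.

```python
def build_bvh(fragments, leaf_size=2):
    """Build a BVH with a fast spatial-median split.  Each fragment is
    an (xmin, xmax) interval on one axis."""
    lo = min(f[0] for f in fragments)
    hi = max(f[1] for f in fragments)
    if len(fragments) <= leaf_size:
        return {"bounds": (lo, hi), "fragments": fragments}
    mid = 0.5 * (lo + hi)                    # spatial median, not object median
    left = [f for f in fragments if 0.5 * (f[0] + f[1]) <= mid]
    right = [f for f in fragments if 0.5 * (f[0] + f[1]) > mid]
    if not left or not right:                # degenerate split: make a leaf
        return {"bounds": (lo, hi), "fragments": fragments}
    return {"bounds": (lo, hi),
            "children": [build_bvh(left, leaf_size),
                         build_bvh(right, leaf_size)]}
```

A spatial-median split needs no sorting, which is why it is cheap enough to rebuild the hierarchy every frame for dynamic scenes.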
- Published
- 2016
- Full Text
- View/download PDF
7. A visual model for quality driven refinement of global illumination
- Author
-
Stefan Guthe, Robert Günther, and Michael Guthe
- Subjects
Pixel, Computer science, Global illumination, Observer (special relativity), Rendering (computer graphics), Standard error, Wavelet, Human visual system model, Computer vision, Artificial intelligence, Image based
- Abstract
When rendering complex scenes using path-tracing methods, long processing times are required to calculate a sufficient number of samples for high quality results. In this paper, we propose a new method for priority sampling in path-tracing that exploits restrictions of the human visual system by recognizing whether an error is perceivable or not. We use the stationary wavelet transformation to efficiently calculate noise-contrasts in the image based on the standard error of the mean. We then use the Contrast Sensitivity Function and Contrast Masking of the Human Visual System to detect if an error is perceivable for any given pixel in the output image. Errors that cannot be detected by a human observer are then ignored in further sampling steps, reducing the number of samples calculated while producing the same perceived quality. This approach leads to a drastic reduction in the total number of samples required and therefore in total rendering time.
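The convergence test at the heart of this refinement scheme can be sketched per pixel: estimate the standard error of the mean from the samples gathered so far and stop sampling pixels whose remaining error falls below a perceptual threshold. The paper derives that threshold per pixel from the Contrast Sensitivity Function and contrast masking; a fixed constant is used here purely for brevity.

```python
import math

def needs_more_samples(samples_per_pixel, threshold=0.01):
    """Flag pixels whose standard error of the mean still exceeds a
    (here fixed) perceptual threshold; converged pixels are skipped
    in later sampling passes."""
    flags = []
    for samples in samples_per_pixel:
        n = len(samples)
        mean = sum(samples) / n
        var = sum((s - mean) ** 2 for s in samples) / (n - 1)
        sem = math.sqrt(var / n)   # standard error of the mean
        flags.append(sem > threshold)
    return flags
```

Because the standard error shrinks with the square root of the sample count, smooth pixels converge quickly and the remaining budget concentrates on noisy regions.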
- Published
- 2017
- Full Text
- View/download PDF
8. How Human Am I?
- Author
-
Michael Goesele, Stefan Guthe, Marcus Magnor, Maryam Mustafa, and Jan-Philipp Tauscher
- Subjects
Neural correlates of consciousness, Brain activity and meditation, Computer science, Uncanny valley, Animation, Cognitive neuroscience, Electroencephalography, Character (mathematics), Human–computer interaction, Uncanny, Cognitive psychology
- Abstract
There is a continuous effort by animation experts to create increasingly realistic and more human-like digital characters. However, as virtual characters become more human they risk evoking a sense of unease in their audience. This sensation, called the Uncanny Valley effect, is widely acknowledged both in the popular media and scientific research but empirical evidence for the hypothesis has remained inconsistent. In this paper, we investigate the neural responses to computer-generated faces in a cognitive neuroscience study. We record brain activity from participants (N = 40) using electroencephalography (EEG) while they watch videos of real humans and computer-generated virtual characters. Our results show distinct differences in neural responses for highly realistic computer-generated faces such as Digital Emily compared with real humans. These differences are unique only to agents that are highly photorealistic, i.e. the 'uncanny' response. Based on these specific neural correlates we train a support vector machine (SVM) to measure the probability of an uncanny response for any given computer-generated character from EEG data. This allows the ordering of animated characters based on their level of 'uncanniness'.
- Published
- 2017
- Full Text
- View/download PDF
9. Ghosting and popping detection for image-based rendering
- Author
-
Michael Goesele, Douglas W. Cunningham, P. Schardt, and Stefan Guthe
- Subjects
Computer science, Image-based modeling and rendering, 3D rendering, Real-time rendering, Rendering (computer graphics), Image-based lighting, Computer vision, Tiled rendering, Artificial intelligence, Alternate frame rendering, Ghosting
- Abstract
Film sequences generated using image-based rendering techniques are commonly used in broadcasting, especially for sporting events. In many cases, however, image-based rendering sequences contain artifacts, and these must be manually located. Here, we propose an algorithm to automatically detect not only the presence of the two most disturbing classes of artifact (popping and ghosting), but also the strength of each instance of an artifact. A simple perceptual evaluation of the technique shows that it performs well.
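The popping half of such a detector can be sketched as a temporal discontinuity test: score each frame transition by its mean absolute pixel difference and report transitions whose score spikes above a threshold, with the score doubling as the artifact strength. This is a toy sketch under those assumptions; the paper additionally detects ghosting and validates both scores perceptually.

```python
def detect_popping(frames, threshold=30.0):
    """Score frame transitions by mean absolute pixel difference and
    report (frame index, strength) for spikes above the threshold.
    Each frame is a flat list of luminance values."""
    events = []
    for t in range(1, len(frames)):
        prev, cur = frames[t - 1], frames[t]
        diff = sum(abs(a - b) for a, b in zip(prev, cur)) / len(cur)
        if diff > threshold:
            events.append((t, diff))   # sudden change = popping candidate
    return events
```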
- Published
- 2016
- Full Text
- View/download PDF
10. GPU-based lossless volume data compression
- Author
-
Stefan Guthe and Michael Goesele
- Subjects
Lossless compression, Computer science, Software rendering, Volume rendering, Real-time rendering, Computational science, Rendering (computer graphics), Computer graphics (images), Tiled rendering, Texture memory, Data compression
- Abstract
In rendering, textures usually consume more graphics memory than geometry. This is especially true when rendering regularly sampled volume data, as the geometry is a single box. In addition, volume rendering suffers from the curse of dimensionality: every time the resolution doubles, the number of projected pixels is multiplied by four but the amount of data is multiplied by eight. Data compression is thus mandatory even with the increasing amount of memory available on today's GPUs. Existing compression schemes are either lossy or do not allow on-the-fly random access to the volume data while rendering. Both of these properties are, however, important for high quality direct volume rendering. In this paper, we propose a lossless compression and caching strategy that allows random access and decompression on the GPU using a compressed volume object.
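The combination of lossless compression, random access, and caching can be sketched on the CPU: store the volume as independently deflated bricks, decompress a brick only when one of its voxels is addressed, and keep recently used bricks in a small cache. zlib and `lru_cache` are stand-ins for the paper's GPU codec and caching strategy.

```python
import zlib
from functools import lru_cache

class CompressedVolume:
    """Volume stored as independently compressed bricks with on-demand
    decompression through a small cache -- a CPU-side sketch of
    random-access lossless volume compression."""

    def __init__(self, voxels, brick_size=16):
        self.brick_size = brick_size
        self.bricks = [zlib.compress(bytes(voxels[i:i + brick_size]), 9)
                       for i in range(0, len(voxels), brick_size)]
        # Cache the most recently touched bricks in decompressed form.
        self._decode = lru_cache(maxsize=8)(self._decompress)

    def _decompress(self, brick_index):
        return zlib.decompress(self.bricks[brick_index])

    def voxel(self, index):
        """Random access into the compressed volume."""
        brick, offset = divmod(index, self.brick_size)
        return self._decode(brick)[offset]
```

Because each brick is compressed independently, any voxel can be reached by decompressing a single brick rather than the whole volume, which is what makes random access during rendering feasible.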
- Published
- 2016
- Full Text
- View/download PDF
11. Single-trial EEG classification of artifacts in videos
- Author
-
Marcus Magnor, Stefan Guthe, and Maryam Mustafa
- Subjects
General Computer Science, Image quality, Computer science, Experimental and Cognitive Psychology, Electroencephalography, Theoretical Computer Science, Rendering (computer graphics), Support vector machine, Wavelet, Perception, Human visual system model, Computer vision, Artificial intelligence, Single trial
- Abstract
In this article we use an ElectroEncephaloGraph (EEG) to explore the perception of artifacts that typically appear during rendering and determine the perceptual quality of a sequence of images. Although there is an emerging interest in using an EEG for image quality assessment, one of the main impediments to the use of an EEG is the very low Signal-to-Noise Ratio (SNR) which makes it exceedingly difficult to distinguish neural responses from noise. Traditionally, event-related potentials (ERPs) have been used for analysis of EEG data. However, they rely on averaging and so require a large number of participants and trials to get meaningful data. Also, due to the low SNR, ERPs are not suited for single-trial classification. We propose a novel wavelet-based approach for evaluating EEG signals which allows us to predict the perceived image quality from only a single trial. Our wavelet-based algorithm is able to filter the EEG data and remove noise, eliminating the need for many participants or many trials. With this approach it is possible to use data from only 10 electrode channels for single-trial classification and predict the presence of an artifact with an accuracy of 85%. We also show that it is possible to differentiate and classify a trial based on the exact type of artifact viewed. Our work is particularly useful for understanding how the human visual system responds to different types of degradations in images and videos. An understanding of the perception of typical image-based rendering artifacts forms the basis for the optimization of rendering and masking algorithms.
- Published
- 2012
- Full Text
- View/download PDF
12. Geometry Presorting for Implicit Object Space Partitioning
- Author
-
Pablo Bauszat, Stefan Guthe, Marcus Magnor, and Martin Eisemann
- Subjects
Theoretical computer science, Computer science, Software rendering, Geometry, Bounding volume hierarchy, Computer Graphics and Computer-Aided Design, Real-time computer graphics, Computer graphics, Vector graphics, Tree structure, Texture mapping unit, Node (circuits), Local feature size, Space partitioning, Representation (mathematics), 2D computer graphics, Algorithm, 3D computer graphics
- Abstract
We present a new data structure for object space partitioning that can be represented completely implicitly. The bounds of each node in the tree structure are recreated at run-time from the scene objects contained therein. By applying a presorting procedure to the geometry, only a known fraction of the geometry is needed to locate the bounding planes of any node. We evaluate the impact of the implicit bounding plane representation and compare our algorithm to a classic bounding volume hierarchy. Though the representation is completely implicit, we still achieve interactive frame rates on commodity hardware. © 2012 Wiley Periodicals, Inc.
- Published
- 2012
- Full Text
- View/download PDF
13. Visualization of Astronomical Nebulae via Distributed Multi-GPU Compressed Sensing Tomography
- Author
-
Dirk A. Lorenz, Marcus Magnor, Marco Ament, Andreas M. Tillmann, Daniel Weiskopf, Stephan Wenger, and Stefan Guthe
- Subjects
Computer science, Graphics processing unit, Reconstruction algorithm, Volume rendering, Iterative reconstruction, GPU cluster, Computer Graphics and Computer-Aided Design, Rendering (computer graphics), Visualization, Data visualization, Compressed sensing, Computer graphics (images), Signal Processing, Computer vision, Computer Vision and Pattern Recognition, Artificial intelligence, Software
- Abstract
The 3D visualization of astronomical nebulae is a challenging problem since only a single 2D projection is observable from our fixed vantage point on Earth. We attempt to generate plausible and realistic looking volumetric visualizations via a tomographic approach that exploits the spherical or axial symmetry prevalent in some relevant types of nebulae. Different types of symmetry can be implemented by using different randomized distributions of virtual cameras. Our approach is based on an iterative compressed sensing reconstruction algorithm that we extend with support for position-dependent volumetric regularization and linear equality constraints. We present a distributed multi-GPU implementation that is capable of reconstructing high-resolution datasets from arbitrary projections. Its robustness and scalability are demonstrated for astronomical imagery from the Hubble Space Telescope. The resulting volumetric data is visualized using direct volume rendering. Compared to previous approaches, our method preserves a much higher amount of detail and visual variety in the 3D visualization, especially for objects with only approximate symmetry.
- Published
- 2015
14. Large volume visualization of compressed time-dependent datasets on GPU clusters
- Author
-
Daniel Weiskopf, Magnus Strengert, Thomas Ertl, Marcelo Magallón, and Stefan Guthe
- Subjects
Framebuffer, Viewport, Computer Networks and Communications, Computer science, InfiniBand, Wavelet transform, Frame rate, Computer Graphics and Computer-Aided Design, Theoretical Computer Science, Rendering (computer graphics), Artificial Intelligence, Hardware and Architecture, Computer graphics (images), Compositing, Cluster (physics), Software
- Abstract
We describe a system for the texture-based direct volume visualization of large data sets on a PC cluster equipped with GPUs. The data is partitioned into volume bricks in object space, and the intermediate images are combined to a final picture in a sort-last approach. Hierarchical wavelet compression is applied to increase the effective size of volumes that can be handled. An adaptive rendering mechanism takes into account the viewing parameters and the properties of the data set to adjust the texture resolution and number of slices. We discuss the specific issues of this adaptive and hierarchical approach in the context of a distributed memory architecture and present corresponding solutions. Furthermore, our compositing scheme takes into account the footprints of volume bricks to minimize the costs for reading from the framebuffer, network communication, and blending. A detailed performance analysis is provided for several network, CPU, and GPU architectures, and scaling characteristics of the parallel system are discussed. For example, our tests on an eight-node AMD64 cluster with InfiniBand show a rendering speed of 6 frames per second for a 2048x1024x1878 data set on a 1024^2 viewport.
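The blending step of such a sort-last pipeline can be sketched per pixel: each node renders its volume brick to an intermediate (color, alpha) layer, and the compositor accumulates the layers in depth order with the 'over' operator. This is a single-pixel sketch of the compositing math only, not the footprint-aware scheme the abstract describes.

```python
def composite_over(layers):
    """Accumulate per-node brick layers front-to-back with the 'over'
    operator.  Each layer is a (color, alpha) pair for one pixel,
    ordered nearest-to-camera first."""
    color, alpha = 0.0, 0.0
    for c, a in layers:
        color += (1.0 - alpha) * a * c   # light passing the layers in front
        alpha += (1.0 - alpha) * a       # accumulated opacity
    return color, alpha
```

Front-to-back accumulation also enables early termination: once the accumulated alpha reaches 1, layers behind can be skipped, which reduces blending cost.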
- Published
- 2005
- Full Text
- View/download PDF
15. Advanced techniques for high-quality multi-resolution volume rendering
- Author
-
Wolfgang Strasser and Stefan Guthe
- Subjects
Parallel rendering, Computer science, General Engineering, Software rendering, Computer Graphics and Computer-Aided Design, 3D rendering, Real-time rendering, Rendering (computer graphics), Human-Computer Interaction, Computer graphics (images), Computer vision, Tiled rendering, Artificial intelligence, Alternate frame rendering, Texture memory
- Abstract
We present several improvements for compression-based multi-resolution rendering of very large volume data sets at interactive to real-time frame rates on standard PC hardware. The algorithm accepts scalar or multi-variant data sampled on a regular grid as input. The input data are converted into a compressed hierarchical wavelet representation in a pre-processing step. During rendering, the wavelet representation is decompressed on-the-fly and rendered using hardware texture mapping. The level-of-detail used for rendering is adapted to the estimated screen-space error. To increase the rendering performance additional visibility tests, such as empty space skipping and occlusion culling, are applied. Furthermore, we discuss how to render the remaining multi-resolution blocks efficiently using modern graphics hardware. Using a prototype implementation of this algorithm we are able to perform a high-quality interactive rendering of large data sets on a single off-the-shelf PC.
- Published
- 2004
- Full Text
- View/download PDF
16. Using Sparse Optical Flow for Two-Phase Gas Flow Capturing with Multiple Kinect
- Author
-
Yannic Schroeder, Marc A. Kastner, Kai Berger, and Stefan Guthe
- Subjects
Computer science, Interface (computing), Optical flow, Phase (waves), Visual hull, Flow (mathematics), Particle image velocimetry, Computer graphics (images), Computer vision, Artificial intelligence
- Abstract
The use of multiple Microsoft Kinects has become prominent in the last two years and has enjoyed widespread acceptance. While several works have been published to mitigate quality degradations in the precomputed depth image, this work focuses on employing an optical flow method suited to the dot pattern emitted by the Kinect to retrieve subtle scene alterations for reconstruction. The method is employed in a multiple-Kinect vision architecture to detect the interface of propane flow around occluding objects in air.
- Published
- 2014
- Full Text
- View/download PDF
17. The capturing of turbulent gas flows using multiple Kinects
- Author
-
Stefan Guthe, Yannic Schroeder, Kai Berger, Al. Scholz, Kai Ruhl, J. Kokemuller, Marcus Magnor, and M. Albers
- Subjects
Flow visualization, Computer science, Turbulence, Computer vision, Noise (video), Iterative reconstruction, Aerodynamics, Artificial intelligence, Computational fluid dynamics, Visualization
- Abstract
We introduce the Kinect as a tool for capturing gas flows around occluders using objects of different aerodynamic properties. Previous approaches have been invasive or require elaborate setups including large printed sheets of complex noise patterns and neat lighting. Our method is easier to set up while still producing good results. We show that three Kinects are sufficient to qualitatively reconstruct nonstationary time varying gas flows in the presence of occluders.
- Published
- 2011
- Full Text
- View/download PDF