35 results for "Jitsev, Jenia"
Search Results
2. In-situ UNet-based heliostat beam characterization method for precise flux calculation using the camera-target method
- Author
-
Kuhl, Mathias, Pargmann, Max, Cherti, Mehdi, Jitsev, Jenia, Maldonado Quinto, Daniel, and Pitz-Paal, Robert
- Published
- 2024
- Full Text
- View/download PDF
3. Using physics-informed enhanced super-resolution generative adversarial networks for subfilter modeling in turbulent reactive flows
- Author
-
Bode, Mathis, Gauding, Michael, Lian, Zeyu, Denker, Dominik, Davidovic, Marco, Kleinheinz, Konstantin, Jitsev, Jenia, and Pitsch, Heinz
- Published
- 2021
- Full Text
- View/download PDF
4. Corticostriatal circuit mechanisms of value-based action selection: Implementation of reinforcement learning algorithms and beyond
- Author
-
Morita, Kenji, Jitsev, Jenia, and Morrison, Abigail
- Published
- 2016
- Full Text
- View/download PDF
5. A Comparative Study on Generative Models for High Resolution Solar Observation Imaging
- Author
-
Cherti, Mehdi, Czernik, Alexander, Kesselheim, Stefan, Effenberger, Frederic, and Jitsev, Jenia
- Subjects
FOS: Computer and information sciences, Computer Science - Machine Learning, Computer Vision and Pattern Recognition (cs.CV), Computer Science - Computer Vision and Pattern Recognition, Machine Learning (cs.LG)
- Abstract
Solar activity is one of the main drivers of variability in our solar system and the key source of space weather phenomena that affect Earth and near-Earth space. The extensive record of high resolution extreme ultraviolet (EUV) observations from the Solar Dynamics Observatory (SDO) offers an unprecedented, very large dataset of solar images. In this work, we make use of this comprehensive dataset to investigate the capabilities of current state-of-the-art generative models to accurately capture the data distribution behind the observed solar activity states. Starting from StyleGAN-based methods, we uncover severe deficits of this model family in handling fine-scale details of solar images when training on high resolution samples, contrary to training on natural face images. When switching to the diffusion-based generative model family, we observe strong improvements in fine-scale detail generation. For the GAN family, we are able to achieve similar improvements in fine-scale generation when turning to ProjectedGANs, which use multi-scale discriminators with a pre-trained frozen feature extractor. We conduct ablation studies to clarify the mechanisms responsible for proper fine-scale handling. Using distributed training on supercomputers, we are able to train generative models at up to 1024x1024 resolution that produce high quality samples that human experts cannot distinguish from real observations, as suggested by the evaluation we conduct. We make all code, models and workflows used in this study publicly available at \url{https://github.com/SLAMPAI/generative-models-for-highres-solar-images}.
- Published
- 2023
6. LAION-5B: An open large-scale dataset for training next generation image-text models
- Author
-
Schuhmann, Christoph, Beaumont, Romain, Vencu, Richard, Gordon, Cade, Wightman, Ross, Cherti, Mehdi, Coombes, Theo, Katta, Aarush, Mullis, Clayton, Wortsman, Mitchell, Schramowski, Patrick, Kundurthy, Srivatsa, Crowson, Katherine, Schmidt, Ludwig, Kaczmarczyk, Robert, and Jitsev, Jenia
- Subjects
FOS: Computer and information sciences, Computer Science - Machine Learning, Artificial Intelligence (cs.AI), Computer Science - Artificial Intelligence, Computer Vision and Pattern Recognition (cs.CV), Computer Science - Computer Vision and Pattern Recognition, Machine Learning (cs.LG)
- Abstract
Groundbreaking language-vision architectures like CLIP and DALL-E proved the utility of training on large amounts of noisy image-text data, without relying on the expensive accurate labels used in standard unimodal supervised vision learning. The resulting models showed capabilities of strong text-guided image generation and transfer to downstream tasks, while performing remarkably at zero-shot classification with noteworthy out-of-distribution robustness. Since then, large-scale language-vision models like ALIGN, BASIC, GLIDE, Flamingo and Imagen have made further improvements. Studying the training and capabilities of such models requires datasets containing billions of image-text pairs. Until now, no datasets of this size have been made openly available for the broader research community. To address this problem and democratize research on large-scale multi-modal models, we present LAION-5B, a dataset consisting of 5.85 billion CLIP-filtered image-text pairs, of which 2.32B contain English language. We show successful replication and fine-tuning of foundational models like CLIP, GLIDE and Stable Diffusion using the dataset, and discuss further experiments enabled with an openly available dataset of this scale. Additionally, we provide several nearest neighbor indices, an improved web interface for dataset exploration and subset generation, and detection scores for watermark, NSFW, and toxic content. Announcement page: https://laion.ai/laion-5b-a-new-era-of-open-large-scale-multi-modal-datasets/. 36th Conference on Neural Information Processing Systems (NeurIPS 2022), Track on Datasets and Benchmarks. OpenReview: https://openreview.net/forum?id=M3Y74vmsMcY
- Published
- 2022
7. Adversarial domain adaptation to reduce sample bias of a high energy physics event classifier
- Author
-
Clavijo, J. M., Glaysher, P., Jitsev, Jenia, and Katzy, J. M.
- Subjects
ddc:621.3
- Abstract
We apply adversarial domain adaptation in an unsupervised setting to reduce sample bias in the training of a supervised high energy physics event classifier. We use a neural network containing an event classifier and a domain classifier with a gradient reversal layer to simultaneously enable signal versus background event classification on the one hand, while on the other hand minimizing the difference in the network's response to background samples originating from different Monte Carlo models via an adversarial domain classification loss. We demonstrate successful bias removal on the example of simulated events at the Large Hadron Collider with $t\bar{t}H$ signal versus $t\bar{t}b\bar{b}$ background classification and discuss implications and limitations of the method.
- Published
- 2022
- Full Text
- View/download PDF
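The gradient reversal layer described in the abstract above acts as an identity in the forward pass and flips the sign of the gradient in the backward pass, so the shared features are pushed to confuse the domain classifier. A minimal sketch with illustrative names (not the paper's code):

```python
import numpy as np

class GradientReversal:
    """Identity in the forward pass; scales gradients by -lambda on the way
    back, so the feature extractor learns to confuse the domain classifier.
    Minimal sketch with illustrative names, not the paper's implementation."""

    def __init__(self, lam=1.0):
        self.lam = lam

    def forward(self, x):
        # pass features through unchanged
        return x

    def backward(self, grad_out):
        # reverse (and scale) the gradient flowing to the feature extractor
        return -self.lam * grad_out

grl = GradientReversal(lam=0.5)
x = np.array([1.0, 2.0])
y = grl.forward(x)            # identical to x
g = grl.backward(np.ones(2))  # gradients flipped and scaled
```

In a full training setup, this layer would sit between the shared feature extractor and the domain classifier, while the event classifier receives ordinary gradients.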
8. LAION-400M: Open Dataset of CLIP-Filtered 400 Million Image-Text Pairs
- Author
-
Schuhmann, Christoph, Vencu, Richard, Beaumont, Romain, Kaczmarczyk, Robert, Mullis, Clayton, Katta, Aarush, Coombes, Theo, Jitsev, Jenia, and Komatsuzaki, Aran
- Subjects
FOS: Computer and information sciences, Computer Science - Machine Learning, Computer Science - Computation and Language, Computer Vision and Pattern Recognition (cs.CV), Computer Science - Computer Vision and Pattern Recognition, ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION, Computation and Language (cs.CL), Machine Learning (cs.LG)
- Abstract
Multi-modal language-vision models trained on hundreds of millions of image-text pairs (e.g. CLIP, DALL-E) have recently surged in popularity, showing a remarkable capability to perform zero- or few-shot learning and transfer even in the absence of per-sample labels on target image data. Despite this trend, to date there have been no publicly available datasets of sufficient scale for training such models from scratch. To address this issue, in a community effort we build and release to the public LAION-400M, a dataset with CLIP-filtered 400 million image-text pairs, their CLIP embeddings and kNN indices that allow efficient similarity search. Comment: Short version. Accepted at Data Centric AI NeurIPS Workshop 2021
- Published
- 2021
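The CLIP filtering described in this entry keeps image-text pairs whose embeddings are sufficiently similar. A toy sketch of the idea (the threshold and pipeline details here are illustrative assumptions, not LAION's actual values):

```python
import numpy as np

def clip_filter(img_emb, txt_emb, threshold=0.3):
    """Keep indices of image-text pairs whose embeddings have cosine
    similarity above a threshold. Sketch of the filtering idea only;
    the actual LAION pipeline and threshold differ in detail."""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    sims = np.sum(img * txt, axis=1)  # per-pair cosine similarity
    return np.where(sims >= threshold)[0]

# toy embeddings: pair 0 is aligned (kept), pair 1 is orthogonal (dropped)
img = np.array([[1.0, 0.0], [1.0, 0.0]])
txt = np.array([[1.0, 0.0], [0.0, 1.0]])
kept = clip_filter(img, txt)
```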
9. Adversarial domain adaptation to reduce sample bias of a high energy physics event classifier
- Author
-
Clavijo Columbie, Jose Manuel, Glaysher, Paul, Jitsev, Jenia, and Katzy, Judith M.
- Subjects
data analysis method, associated production [Higgs particle], ddc:621.3, background, neural network, domain adaptation, adversarial neural network, Monte Carlo [numerical calculations], Higgs particle: associated production, pair production [top], 621.3, ComputingMethodologies_PATTERNRECOGNITION, CERN LHC Coll, statistical analysis, adversarial training, LHC, ttH, top: pair production, numerical calculations: Monte Carlo
- Abstract
Machine learning: science and technology 3(1), 015014 (2022). doi:10.1088/2632-2153/ac3dde. We apply adversarial domain adaptation in an unsupervised setting to reduce sample bias in the training of a supervised high energy physics event classifier. We use a neural network containing an event classifier and a domain classifier with a gradient reversal layer to simultaneously enable signal versus background event classification on the one hand, while on the other hand minimizing the difference in the network's response to background samples originating from different Monte Carlo models via an adversarial domain classification loss. We demonstrate successful bias removal on the example of simulated events at the Large Hadron Collider with signal versus background classification and discuss implications and limitations of the method. Published by IOP Publishing, Bristol.
- Published
- 2021
- Full Text
- View/download PDF
10. Self-generated off-line memory reprocessing on different layers of a hierarchical recurrent neuronal network
- Author
-
Jitsev, Jenia
- Subjects
Neurosciences. Biological psychiatry. Neuropsychiatry, RC321-571, Neurophysiology and neuropsychology, QP351-495
- Published
- 2011
- Full Text
- View/download PDF
11. A global decision-making model via synchronization in macrocolumn units
- Author
-
Burwick, Thomas, Jitsev, Jenia, Sato, Yasuomi D., and von der Malsburg, Christoph
- Subjects
Neurosciences. Biological psychiatry. Neuropsychiatry, RC321-571, Neurophysiology and neuropsychology, QP351-495
- Published
- 2009
- Full Text
- View/download PDF
12. Adversarial domain adaptation to reduce sample bias of a high energy physics classifier
- Author
-
Clavijo, Jose M., Glaysher, Paul, Katzy, Judith M., and Jitsev, Jenia
- Subjects
FOS: Computer and information sciences, Computer Science - Machine Learning, data analysis method, associated production [Higgs particle], background, neural network, FOS: Physical sciences, Machine Learning (stat.ML), Monte Carlo [numerical calculations], pair production [top], Machine Learning (cs.LG), High Energy Physics - Experiment, High Energy Physics - Experiment (hep-ex), High Energy Physics - Phenomenology, High Energy Physics - Phenomenology (hep-ph), CERN LHC Coll, statistical analysis, Statistics - Machine Learning
- Abstract
We apply adversarial domain adaptation in an unsupervised setting to reduce sample bias in the training of a supervised high energy physics event classifier. We use a neural network containing an event classifier and a domain classifier with a gradient reversal layer to simultaneously enable signal versus background event classification on the one hand, while on the other hand minimizing the difference in the network's response to background samples originating from different MC models via an adversarial domain classification loss. We demonstrate successful bias removal on the example of simulated events at the LHC with $t\bar{t}H$ signal versus $t\bar{t}b\bar{b}$ background classification and discuss implications and limitations of the method. 17 pages, 8 figures, to be submitted to MLST
- Published
- 2020
- Full Text
- View/download PDF
13. Sub-Grid Scale Modelling at Scale with Deep Learning and up to 60 Billion Degrees of Freedom
- Author
-
Bode, Mathis, Denker, Dominik, Jitsev, Jenia, and Pitsch, Heinz
- Subjects
Physics::Fluid Dynamics
- Abstract
This work presents fully resolved direct numerical simulations (DNSs) of a turbulent reactive planar temporally non-premixed jet configuration with up to 60 billion degrees of freedom. As scalar mixing is of utmost importance for this kind of configuration, a novel deep learning (DL) approach in the context of large-eddy simulation is presented which yields predictive mixing statistics on underresolved grids. The usability of the mixing model is demonstrated by applying it to the DNS data. Furthermore, node performance measurements for the training of the DL networks are shown for different computing clusters.
- Published
- 2020
14. Scaling Up a Multispectral Resnet-50 to 128 GPUs
- Author
-
Sedona, Rocco, Cavallaro, Gabriele, Jitsev, Jenia, Riedel, Morris, and Book, Matthias
- Abstract
Similarly to other scientific domains, Deep Learning (DL) holds great promise for fulfilling the challenging needs of Remote Sensing (RS) applications. However, the increase in the volume, variety and complexity of acquisitions carried out daily by Earth Observation (EO) missions generates new processing and storage challenges within operational processing pipelines. The aim of this work is to show that High-Performance Computing (HPC) systems can speed up the training time of Convolutional Neural Networks (CNNs). Particular attention is paid to monitoring the classification accuracy, which usually degrades when using large batch sizes. The experimental results of this work show that the training of the model scales up to a batch size of 8,000, obtaining classification performance in terms of accuracy in line with that obtained using smaller batch sizes.
- Published
- 2020
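Large-batch training across many GPUs, as in the entry above, is commonly paired with the linear learning-rate scaling heuristic. A sketch of that rule, with assumed base values for illustration (the paper's exact recipe may differ):

```python
def scaled_lr(base_lr, batch_size, base_batch=256):
    """Linear learning-rate scaling heuristic often used for large-batch
    training (Goyal et al.): scale the learning rate in proportion to the
    batch size. Base values here are assumptions for illustration."""
    return base_lr * batch_size / base_batch

# batch size 8,000 comes from the abstract; base_lr=0.1 is assumed
lr = scaled_lr(0.1, 8000)
```

Such scaling is typically combined with a warmup phase to avoid instability early in training.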
15. Developing Exascale Computing at JSC
- Author
-
Suarez, Estela, Frings, Wolfgang, Krause, Dorian, Di Napoli, Edoardo, Meinke, Jan, Michielsen, Kristel, Mohr, Bernd, Pleiter, Dirk, Strube, Alexandre, Lippert, Thomas, Attig, Norbert, Achilles, Sebastian, De Amicis, Jacopo, Eickermann, Thomas, Gregory, Eric, Hagemeier, Björn, Herten, Andreas, and Jitsev, Jenia
- Abstract
The first Exaflop-capable systems will be installed in the USA and China beginning in 2020. Europe intends to have its own machines starting in 2023. It is therefore very timely for computer centres, software providers, and application developers to prepare for the challenge of operating and efficiently using such Exascale systems. This paper summarises the activities that have been going on for years in the Jülich Supercomputing Centre (JSC) to prepare the scientists and users for the arrival of Exascale computing. The Jülich activities revolve around the concept of modular supercomputing. They include both computational and data management aspects, ranging from the deployment and operation of large-scale computing platforms (e. g. the JUWELS Booster at JSC) and the federation of storage infrastructures (as for example the European data and compute platform Fenix), up to the education, training and support of application developers to exploit these future technologies.
- Published
- 2020
16. Using Physics-Informed Super-Resolution Generative Adversarial Networks for Subgrid Modeling in Turbulent Reactive Flows
- Author
-
Bode, Mathis, Gauding, Michael, Lian, Zeyu, Denker, Dominik, Davidovic, Marco, Kleinheinz, Konstantin, Jitsev, Jenia, and Pitsch, Heinz
- Subjects
FOS: Computer and information sciences, Physics::Fluid Dynamics, Computer Science - Machine Learning, Computer Science - Graphics, Statistics - Machine Learning, Fluid Dynamics (physics.flu-dyn), FOS: Physical sciences, Machine Learning (stat.ML), Physics - Fluid Dynamics, Computational Physics (physics.comp-ph), Physics - Computational Physics, Graphics (cs.GR), Machine Learning (cs.LG)
- Abstract
Turbulence is still one of the main challenges for accurately predicting reactive flows. Therefore, the development of new turbulence closures which can be applied to combustion problems is essential. Data-driven modeling has become very popular in many fields over the last few years, as large, often extensively labeled datasets became available and training of large neural networks became possible on GPUs, speeding up the learning process tremendously. However, the successful application of deep neural networks in fluid dynamics, for example for subgrid modeling in the context of large-eddy simulations (LESs), is still challenging. Reasons for this are the large number of degrees of freedom in realistic flows, the high requirements with respect to accuracy and error robustness, as well as open questions, such as the generalization capability of trained neural networks in such high-dimensional, physics-constrained scenarios. This work presents a novel subgrid modeling approach based on a generative adversarial network (GAN), which is trained with unsupervised deep learning (DL) using adversarial and physics-informed losses. A two-step training method is used to improve the generalization capability, especially extrapolation, of the network. The novel approach gives good results in a priori as well as a posteriori tests with decaying turbulence including turbulent mixing. The applicability of the network in complex combustion scenarios is furthermore discussed by applying it to a reactive LES of the Spray A case defined by the Engine Combustion Network (ECN). Submitted to Combustion Symposium 2020
- Published
- 2019
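The "physics-informed losses" mentioned in the abstract above can include, for example, a continuity (divergence) penalty on generated velocity fields. A minimal sketch of such a term using finite differences (illustrative only; the paper's actual loss formulation is not reproduced here):

```python
import numpy as np

def continuity_loss(u, v, dx=1.0, dy=1.0):
    """Mean-squared divergence of a 2D velocity field via finite differences;
    an incompressible (divergence-free) field drives this toward zero.
    Illustrative sketch, not the paper's loss formulation."""
    du_dx = np.gradient(u, dx, axis=1)  # d(u)/dx
    dv_dy = np.gradient(v, dy, axis=0)  # d(v)/dy
    return np.mean((du_dx + dv_dy) ** 2)

# u = y, v = x is divergence-free, so the penalty vanishes
yy, xx = np.mgrid[0:8, 0:8].astype(float)
loss = continuity_loss(yy, xx)
```

In a GAN training loop, a term like this would be added to the adversarial loss with a weighting coefficient.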
17. Convolutional neural networks for high throughput screening of catalyst layer inks for polymer electrolyte fuel cells.
- Author
-
Eslamibidgoli, Mohammad J., Tipp, Fabian P., Jitsev, Jenia, Jankovic, Jasna, Eikerling, Michael H., and Malek, Kourosh
- Published
- 2021
- Full Text
- View/download PDF
18. Machine Learning @ Deep Learning
- Author
-
Jitsev, Jenia
- Published
- 2018
19. Using physics-informed enhanced super-resolution generative adversarial networks for subfilter modeling in turbulent reactive flows.
- Author
-
Bode, Mathis, Gauding, Michael, Lian, Zeyu, Denker, Dominik, Davidovic, Marco, Kleinheinz, Konstantin, Jitsev, Jenia, and Pitsch, Heinz
- Abstract
Turbulence is still one of the main challenges in accurate prediction of reactive flows. Therefore, the development of new turbulence closures that can be applied to combustion problems is essential. Over the last few years, data-driven modeling has become popular in many fields as large, often extensively labeled datasets are now available and training of large neural networks has become possible on graphics processing units (GPUs) that speed up the learning process tremendously. However, the successful application of deep neural networks in fluid dynamics, such as in subfilter modeling in the context of large-eddy simulations (LESs), is still challenging. Reasons for this are the large number of degrees of freedom in natural flows, high requirements of accuracy and error robustness, and open questions, for example, regarding the generalization capability of trained neural networks in such high-dimensional, physics-constrained scenarios. This work presents a novel subfilter modeling approach based on a generative adversarial network (GAN), which is trained with unsupervised deep learning (DL) using adversarial and physics-informed losses. A two-step training method is employed to improve the generalization capability, especially extrapolation, of the network. The novel approach gives good results in a priori and a posteriori tests with decaying turbulence including turbulent mixing, and the importance of the physics-informed continuity loss term is demonstrated. The applicability of the network in complex combustion scenarios is furthermore discussed by employing it in reactive and inert LESs of the Spray A case defined by the Engine Combustion Network (ECN). [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
20. Off-line memory reprocessing in a recurrent neuronal network formed by unsupervised learning
- Author
-
Jitsev, Jenia and von der Malsburg, Christoph
- Published
- 2011
- Full Text
- View/download PDF
21. Functional role of opponent, dopamine modulated D1/D2 plasticity in prediction error-driven reinforcement learning in the basal ganglia
- Author
-
Jitsev, Jenia, Abraham, Nobi, Tittgemeyer, Marc, and Morrison, Abigail
- Subjects
Computational Neuroscience, Bernstein Conference
- Published
- 2013
- Full Text
- View/download PDF
22. Learning from Delayed Reward und Punishment in a Spiking Neural Network Model of Basal Ganglia with Opposing D1/D2 Plasticity.
- Author
-
Jitsev, Jenia, Abraham, Nobi, Morrison, Abigail, and Tittgemeyer, Marc
- Published
- 2012
- Full Text
- View/download PDF
23. Information-Theoretic Connectivity-Based Cortex Parcellation.
- Author
-
Gorbach, Nico S., Siep, Silvan, Jitsev, Jenia, Melzer, Corina, and Tittgemeyer, Marc
- Published
- 2012
- Full Text
- View/download PDF
24. Learning from positive and negative rewards in a spiking neural network model of basal ganglia.
- Author
-
Jitsev, Jenia, Morrison, Abigail, and Tittgemeyer, Marc
- Abstract
Despite the vast amount of experimental findings on the role of the basal ganglia in reinforcement learning, there is still a general lack of network models that use spiking neurons and plausible plasticity mechanisms to demonstrate network-level reward-based learning. In this work we extend a recent spiking actor-critic network model of the basal ganglia, aiming to create a minimal realistic model of learning from both positive and negative rewards. We hypothesize and implement in the model segregation of not only the dorsal striatum, but also of the ventral striatum into populations of medium spiny neurons (MSNs) that carry either the D1 or D2 dopamine (DA) receptor type. This segregation allows explicit representation of both positive and negative expected reward within the respective population. In line with recent experiments, we further assume that D1 and D2 MSN populations have distinct, opposing DA-modulated bidirectional synaptic plasticity. We implement the spiking network model in the simulator NEST and conduct experiments involving application of delayed rewards in a grid world setting, where a moving agent has to reach a goal state while maximizing the total obtained reward. We demonstrate that the network can learn not only to approach positive rewards, but also, in contrast to the original model, to consequently avoid punishments. The spiking network model thus highlights the functional role of D1-D2 MSN segregation within the striatum and explains the necessity of the reversed direction of DA-dependent plasticity found at synapses converging on different types of striatal MSNs. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
- Full Text
- View/download PDF
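The opposing, dopamine-modulated D1/D2 plasticity hypothesized in the abstract above can be caricatured as sign-flipped weight updates on the two pathways. A toy rate-based sketch of the idea (not the spiking NEST implementation):

```python
def d1_d2_update(w_d1, w_d2, dopamine, eligibility, eta=0.1):
    """Opposing dopamine-modulated plasticity: for a positive reward-
    prediction error, D1 synapses potentiate while D2 synapses depress
    (and vice versa for a negative error). Toy caricature of the model's
    idea; parameter names and the rule itself are illustrative."""
    w_d1 = w_d1 + eta * dopamine * eligibility  # D1: same sign as DA
    w_d2 = w_d2 - eta * dopamine * eligibility  # D2: opposite sign
    return w_d1, w_d2

# positive dopamine signal: the approach pathway (D1) strengthens
w1, w2 = d1_d2_update(0.5, 0.5, dopamine=1.0, eligibility=1.0)
```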
25. Visual Object Detection by Specifying the Scale and Rotation Transformations.
- Author
-
Sato, Yasuomi D., Jitsev, Jenia, and von der Malsburg, Christoph
- Abstract
We here propose a simple but promising algorithm to detect a model object's position on an input image by determining the initially unknown transformational states of the model object, in particular its size and 2D rotation. In this algorithm, a single feature is extracted around or at the center of the input image through 2D Gabor wavelet transformation, in order to find not only the most likely relative size and rotation with respect to the model object, but also the most appropriate positional region on the input image for detecting the correct relative transformational states. We also demonstrate reliable performance on face images of different persons, and on images of different appearances of the same person. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
26. A Visual Object Recognition System Invariant to Scale and Rotation.
- Author
-
Sato, Yasuomi D., Jitsev, Jenia, and von der Malsburg, Christoph
- Abstract
We here address the problem of scale and orientation invariant object recognition, making use of a correspondence-based mechanism, in which the identity of an object represented by sensory signals is determined by matching it to a representation stored in memory. The sensory representation is in general affected by various transformations, notably scale and rotation, thus giving rise to the fundamental problem of invariant object recognition. We focus here on a neurally plausible mechanism that deals simultaneously with identification of the object and detection of the transformation, both types of information being important for visual processing. Our mechanism is based on macrocolumnar units. These evaluate identity- and transformation-specific feature similarities, performing competitive computation on the alternatives of their own subtask, and cooperate to make a coherent global decision for the identity, scale and rotation of the object. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
27. P9 Self-generated off-line memory reprocessing in a hierarchical recurrent neuronal network and its impact on learning
- Author
-
Jitsev, Jenia and Tittgemeyer, Marc
- Published
- 2011
- Full Text
- View/download PDF
28. Remote Sensing Big Data Classification with High Performance Distributed Deep Learning.
- Author
-
Sedona, Rocco, Cavallaro, Gabriele, Jitsev, Jenia, Strube, Alexandre, Riedel, Morris, and Benediktsson, Jón Atli
- Subjects
CONVOLUTIONAL neural networks, REMOTE sensing, BIG data, GRAPHICS processing units, DEEP learning, HIGH performance computing
- Abstract
High-Performance Computing (HPC) has recently been attracting more attention in remote sensing applications due to the challenges posed by the increased amount of open data that are produced daily by Earth Observation (EO) programs. The unique parallel computing environments and programming techniques that are integrated in HPC systems are able to solve large-scale problems such as the training of classification algorithms with large amounts of Remote Sensing (RS) data. This paper shows that the training of state-of-the-art deep Convolutional Neural Networks (CNNs) can be efficiently performed in distributed fashion using parallel implementation techniques on HPC machines containing a large number of Graphics Processing Units (GPUs). The experimental results confirm that distributed training can drastically reduce the amount of time needed to perform full training, resulting in near linear scaling without loss of test accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
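The "near linear scaling" reported in the abstract above is usually quantified via speedup and parallel efficiency. A sketch of those definitions, with hypothetical timings (not measured data from the paper):

```python
def scaling_efficiency(t_single, t_multi, n_workers):
    """Speedup and parallel efficiency of distributed training; near-linear
    scaling corresponds to efficiency close to 1.0. Illustrative definitions
    with assumed timings, not the paper's measurement protocol."""
    speedup = t_single / t_multi
    return speedup, speedup / n_workers

# hypothetical timings: 100 h on one GPU, 13 h on 8 GPUs
speedup, eff = scaling_efficiency(100.0, 13.0, 8)
```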
29. Functional role of opponent, dopamine modulated D1/D2 plasticity in reinforcement learning.
- Author
-
Jitsev, Jenia, Abraham, Nobi, Morrison, Abigail, and Tittgemeyer, Marc
- Subjects
PHENOTYPIC plasticity, NEUROSCIENCES
- Abstract
An abstract of the article "Functional role of opponent, dopamine modulated D1/D2 plasticity in reinforcement learning" by Jenia Jitsev, Nobi Abraham, Abigail Morrison, and Marc Tittgemeyer is presented.
- Published
- 2013
- Full Text
- View/download PDF
30. A global decision-making model via synchronization in macrocolumn units.
- Author
-
Sato, Yasuomi D., Jitsev, Jenia, Burwick, Thomas, and Von der Malsburg, Christoph
- Subjects
DECISION making, SYNCHRONIZATION, EYE, COMPUTER vision, VISUAL perception, AFFERENT pathways, SYNAPSES
- Abstract
Introduction We here address the problem of integrating information about multiple objects and their positions in the visual scene. A primate visual system has little difficulty in rapidly achieving integration, given only a few objects. Unfortunately, computer vision still has great difficulty achieving comparable performance. It has been hypothesized that temporal binding or temporal separation could serve as a crucial mechanism to deal with information about objects and their positions in parallel to each other. Elaborating on this idea, we propose a neurally plausible mechanism for integrating local decision-making about "what" and "where" information into global multi-object recognition. Mechanism The model we propose here is inspired by binding-by-synchrony as well as the dynamic link architecture. The decision-making is done by so-called control (C) macrocolumn units, which are responsible not only for the synchronization or de-synchronization of selected feature macrocolumns, but also for signaling the position of the object in the scene. The feature macrocolumns are placed on two distinct domains. The input (I) domain contains the sensory data from the scene while the gallery (G) domain stores the reference objects to be recognized. Each macrocolumn consists of subunits called minicolumns, which are bound together by common afferents and lateral inhibition modulated by an autonomous oscillator of the integrate-and-fire (IF) type, a further development of a previous modeling approach to a macrocolumnar cortex. The binding-by-synchrony, establishing the related dynamic links, is achieved via similarity computation between the feature columns and similarity-based modulation of a time constant and weight of the IF synaptic couplings, influenced by the C column subunits. Results Figure 1 demonstrates that binding-by-synchrony in our system is achieved rapidly, within a few hundred milliseconds.
More precisely, the IF neural oscillators in the feature macrocolumns of I and G with the higher similarity become synchronized with zero lag, while the IF oscillators of the feature macrocolumns with lower similarity show asynchronous behavior. The transition from synchrony to asynchrony occurs by modulating a time constant and weight of the IF synaptic couplings, under the influence of subunit activities in the C. The zero-lag synchronization between the IF oscillators constitutes the global object recognition, assigning each object the corresponding position in the scene, which is signaled by the activities in the C column units. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
31. Activity-dependent bidirectional plasticity and homeostasis regulation governing structure formation in a model of layered visual memory.
- Author
-
Jitsev, Jenia and von der Malsburg, Christoph
- Subjects
BIOLOGICAL models, MEMORY, VISUAL learning, NEUROPLASTICITY, HOMEOSTASIS, PHYSIOLOGICAL aspects of learning
- Abstract
Our work deals with the self-organization of a memory structure that includes multiple hierarchical levels with massive recurrent communication within and between them. Such a structure has to provide a representational basis for the relevant objects to be stored and recalled in a rapid and efficient way. Assuming that the object patterns consist of many spatially distributed local features, a problem of parts-based learning is posed. We speculate on the neural mechanisms governing the process of structure formation and demonstrate their functionality on the task of human face recognition. The model we propose is based on two consecutive layers of distributed cortical modules, which in turn contain subunits receiving common afferents and bound by common lateral inhibition (Figure 1). In the initial state, the connectivity between and within the layers is homogeneous, all types of synapses -- bottom-up, lateral and top-down -- being plastic. During the iterative learning, the lower layer of the system is exposed to Gabor filter banks extracted from local points on the face images. Facing an unsupervised learning problem, the system is able to develop synaptic structure capturing local features and their relations at the lower level, as well as the global identity of the person at the higher level of processing, gradually improving its recognition performance with learning time. The structure formation relies on activity-dependent bidirectional plasticity and homeostatic regulation of unit activity. While these occur on the slow time scale, the fast acting neural dynamics with its strongly competitive character ensures that only a small subset of units may update their synapses during a decision cycle spanned by oscillatory inhibition and excitation in the gamma range. This repetitive selection triggered by certain face images leads to amplification of the memory trace for the respective person.
Acting together, the homeostatic constraint and bidirectional plasticity reduce the overlap between different memory traces, segregating them in the memory structure. The ongoing modification of the memory's structure conditions the system for more and more coherent communication between the bottom-up and top-down signals. The binding of the local features via lateral and top-down connections into a global identity enhances the generalization capability of the memory, and enables the system to reliably recognize novel face images from views not presented before. The proposed mechanisms of learning thus reveal basic principles behind the self-organization of successful subsystem coordination. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
32. UTILE-Gen: Automated Image Analysis in Nanoscience Using Synthetic Dataset Generator and Deep Learning.
- Author
-
Colliard-Granero A, Jitsev J, Eikerling MH, Malek K, and Eslamibidgoli MJ
- Abstract
This work presents the development and implementation of a deep learning-based workflow for autonomous image analysis in nanoscience. A versatile, agnostic, and configurable tool was developed to generate instance-segmented imaging datasets of nanoparticles. The synthetic generator tool employs domain randomization to expand the dataset of image/mask pairs for training supervised deep learning models. The approach eliminates tedious manual annotation and allows training of high-performance models for microscopy image analysis based on convolutional neural networks. We demonstrate how the expanded training set can significantly improve the performance of classification and instance segmentation models for a variety of nanoparticle shapes, ranging from spherical to cubic to rod-shaped nanoparticles. Finally, the trained models were deployed in a cloud-based analytics platform for autonomous particle analysis of microscopy images. Competing Interests: The authors declare no competing financial interest. (© 2023 The Authors. Published by American Chemical Society.)
- Published
- 2023
- Full Text
- View/download PDF
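The domain-randomization idea behind the synthetic generator above can be illustrated with a minimal numpy sketch that produces image/instance-mask pairs of circular "nanoparticles" with randomized count, position, radius, and contrast. This is a hypothetical toy illustration of the general technique, not the UTILE-Gen tool itself:

```python
import numpy as np

def synth_particle_image(size=128, n_max=8, rng=None):
    """Generate one synthetic image/instance-mask pair of circular
    'nanoparticles' via simple domain randomization: random particle
    count, position, radius, contrast, and background noise.
    Toy sketch of the idea, not the UTILE-Gen tool."""
    rng = np.random.default_rng() if rng is None else rng
    yy, xx = np.mgrid[:size, :size]
    img = rng.normal(0.2, 0.05, (size, size))       # noisy background
    mask = np.zeros((size, size), dtype=np.int32)   # 0 = background
    n_particles = int(rng.integers(1, n_max + 1))
    for label in range(1, n_particles + 1):
        cx, cy = rng.uniform(0, size, 2)            # random position
        r = rng.uniform(4, size / 8)                # random radius
        disk = (xx - cx) ** 2 + (yy - cy) ** 2 < r ** 2
        img[disk] = rng.uniform(0.6, 1.0)           # random contrast
        mask[disk] = label                          # per-particle id
    return img, mask

img, mask = synth_particle_image(rng=np.random.default_rng(0))
```

Pairs produced this way can be fed directly to a supervised instance-segmentation model, sidestepping manual annotation as the abstract describes.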
33. Deep learning for the automation of particle analysis in catalyst layers for polymer electrolyte fuel cells.
- Author
-
Colliard-Granero A, Batool M, Jankovic J, Jitsev J, Eikerling MH, Malek K, and Eslamibidgoli MJ
- Abstract
The rapidly growing use of imaging infrastructure in the energy materials domain drives significant data accumulation in terms of amount and complexity. Routine image-processing techniques in materials research are often applied in an ad hoc, indiscriminate, and empirical manner, which obscures the crucial task of obtaining reliable metrics for quantification. Moreover, these techniques are expensive, slow, and often involve several preprocessing steps. This paper presents a novel deep learning-based approach for the high-throughput analysis of particle size distributions from transmission electron microscopy (TEM) images of carbon-supported catalysts for polymer electrolyte fuel cells. A dataset of 40 high-resolution TEM images at different magnification levels, from 10 to 100 nm scales, was annotated manually. This dataset was used to train a U-Net model, with the StarDist formulation for the loss function, for the nanoparticle segmentation task. StarDist reached a precision of 86%, a recall of 85%, and an F1-score of 85% when trained on datasets as small as thirty images. The segmentation maps outperform models reported in the literature for a similar problem, and the results of the particle size analyses agree well with manual particle size measurements, albeit at a significantly lower cost.
- Published
- 2021
- Full Text
- View/download PDF
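Downstream of a segmentation model of this kind, the particle size distribution is typically recovered from the predicted instance masks. A minimal numpy sketch of that post-processing step follows; the mask layout and `nm_per_pixel` scale factor are illustrative assumptions, not values or code from the paper:

```python
import numpy as np

def particle_diameters(instance_mask, nm_per_pixel=1.0):
    """Equivalent-circle diameters (in nm) from an instance-segmentation
    mask where 0 is background and each particle carries a unique
    integer id. Illustrative sketch, not the paper's pipeline."""
    _, areas = np.unique(instance_mask[instance_mask > 0],
                         return_counts=True)
    # equivalent-circle diameter from pixel area: d = 2 * sqrt(A / pi)
    return 2.0 * np.sqrt(areas / np.pi) * nm_per_pixel

# toy mask: particle 1 covers 4 pixels, particle 2 covers 16 pixels
demo = np.zeros((16, 16), dtype=np.int32)
demo[1:3, 1:3] = 1
demo[8:12, 8:12] = 2
diam = particle_diameters(demo, nm_per_pixel=0.5)
```

Collecting such diameters over many images yields the size histogram that the paper compares against manual measurements.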
34. Hierarchical information-based clustering for connectivity-based cortex parcellation.
- Author
-
Gorbach NS, Schütte C, Melzer C, Goldau M, Sujazow O, Jitsev J, Douglas T, and Tittgemeyer M
- Abstract
One of the most promising avenues for compiling connectivity data originates from the notion that individual brain regions maintain individual connectivity profiles; the functional repertoire of a cortical area ("the functional fingerprint") is closely related to its anatomical connections ("the connectional fingerprint") and, hence, a segregated cortical area may be characterized by a highly coherent connectivity pattern. Diffusion tractography can be used to identify borders between such cortical areas. Each cortical area is defined by a unique probabilistic tractogram that is representative of a group of tractograms, which together form the cortical area. The underlying methodology is called connectivity-based cortex parcellation and requires clustering or grouping of similar diffusion tractograms. Despite the relative success of this technique in producing anatomically sensible results, existing clustering techniques in the context of connectivity-based parcellation typically depend on several non-trivial assumptions. In this paper, we present an unsupervised hierarchical information-based framework for clustering probabilistic tractograms that avoids many of the drawbacks of previous methods. Cortex parcellation of the inferior frontal gyrus together with the precentral gyrus provides a proof of concept of the proposed method: the automatic parcellation reveals cortical subunits consistent with cytoarchitectonic maps and previous studies, including connectivity-based parcellation. Further insight into the hierarchically modular architecture of cortical subunits is given by revealing coarser cortical structures that differentiate between primary as well as premotor areas and those associated with prefrontal areas.
- Published
- 2011
- Full Text
- View/download PDF
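The core step of grouping similar probabilistic tractograms can be illustrated with a toy agglomerative procedure over synthetic connectivity profiles, using Jensen-Shannon divergence as an information-based dissimilarity. This is a minimal sketch of the general idea under made-up data, not the paper's exact framework:

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two probability vectors."""
    p, q = p + eps, q + eps
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log(a / b))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def agglomerate(profiles, n_clusters):
    """Average-linkage agglomerative clustering of tractogram-like
    connectivity profiles under JS divergence. Toy sketch of
    information-based clustering, not the published method."""
    clusters = [[i] for i in range(len(profiles))]
    while len(clusters) > n_clusters:
        best, pair = np.inf, None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = np.mean([js_divergence(profiles[i], profiles[j])
                             for i in clusters[a] for j in clusters[b]])
                if d < best:
                    best, pair = d, (a, b)
        a, b = pair
        clusters[a] += clusters.pop(b)   # merge the closest pair
    return clusters

# two synthetic "seed voxels" per target region: rows are
# connection-probability profiles over three target sites
profiles = np.array([[0.9, 0.1, 0.0],
                     [0.8, 0.2, 0.0],
                     [0.0, 0.1, 0.9],
                     [0.0, 0.2, 0.8]])
clusters = agglomerate(profiles, n_clusters=2)
```

Cutting the merge sequence at different depths gives the coarser-to-finer hierarchy of cortical subunits the abstract refers to.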
35. Experience-driven formation of parts-based representations in a model of layered visual memory.
- Author
-
Jitsev J and von der Malsburg C
- Abstract
Growing neuropsychological and neurophysiological evidence suggests that the visual cortex uses parts-based representations to encode, store and retrieve relevant objects. In such a scheme, objects are represented as a set of spatially distributed local features, or parts, arranged in stereotypical fashion. To encode the local appearance and to represent the relations between the constituent parts, there has to be an appropriate memory structure formed by previous experience with visual objects. Here, we propose a model of how a hierarchical memory structure supporting efficient storage and rapid recall of parts-based representations can be established by an experience-driven process of self-organization. The process is based on the collaboration of slow bidirectional synaptic plasticity and homeostatic unit activity regulation, both running on top of fast activity dynamics with winner-take-all character, modulated by an oscillatory rhythm. These neural mechanisms lay down the basis for cooperation and competition between the distributed units and their synaptic connections. Choosing human face recognition as a test task, we show that, under the condition of open-ended, unsupervised incremental learning, the system is able to form memory traces for individual faces in a parts-based fashion. On a lower memory layer the synaptic structure is developed to represent local facial features and their interrelations, while the identities of different persons are captured explicitly on a higher layer. An additional property of the resulting representations is the sparseness of both the activity during recall and the synaptic patterns comprising the memory traces.
- Published
- 2009
- Full Text
- View/download PDF
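The interplay of winner-take-all dynamics, Hebbian plasticity, and homeostatic activity regulation described in the abstract can be sketched in a few lines of numpy. The learning and homeostasis rates, target rate, and weight normalization below are illustrative assumptions standing in for the model's bidirectional plasticity; this is a toy, not the published model:

```python
import numpy as np

def wta_hebbian_step(W, x, thresh, lr=0.05, target=0.1, h_rate=0.01):
    """One decision cycle, minimally sketched: winner-take-all
    competition selects a single unit, only the winner's afferent
    synapses receive a Hebbian update (normalization stands in for
    bidirectional plasticity), and adaptive thresholds homeostatically
    steer each unit toward a target activity rate."""
    act = W @ x - thresh                     # excitation minus adaptive bias
    winner = int(np.argmax(act))             # competitive (WTA) selection
    W[winner] += lr * x                      # Hebbian update, winner only
    W[winner] /= np.linalg.norm(W[winner])   # keep weights bounded
    fired = np.zeros(W.shape[0])
    fired[winner] = 1.0
    thresh += h_rate * (fired - target)      # homeostatic regulation
    return W, thresh, winner

rng = np.random.default_rng(0)
W = rng.random((4, 16))                      # 4 units, 16 afferents
W /= np.linalg.norm(W, axis=1, keepdims=True)
thresh = np.zeros(4)
patterns = rng.random((4, 16))               # stand-ins for feature inputs
for t in range(200):                         # iterated decision cycles
    W, thresh, _ = wta_hebbian_step(W, patterns[t % 4], thresh)
```

Repeated wins by the same unit for the same input amplify its memory trace, while the rising threshold of an overactive unit hands subsequent inputs to its competitors, segregating the traces as the abstract describes.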