431 results for "Recurrent networks"
Search Results
2. Optimizing Portfolio in the Evolutional Portfolio Optimization System (EPOS) †.
- Author
-
Loukeris, Nikolaos, Boutalis, Yiannis, Eleftheriadis, Iordanis, and Gikas, Gregorios
- Subjects
- *
PORTFOLIO management (Investments) , *FREE will & determinism , *GENETIC algorithms , *MATHEMATICAL optimization , *UTILITY functions - Abstract
A novel method of portfolio selection is provided, incorporating higher moments and filtering on fundamentals with intelligent computing resources. The Evolutional Portfolio Optimization System (EPOS) evaluates unobtrusive relations within a vast amount of accounting and financial data, excluding hoaxes and noise, to select the optimal portfolio. The fundamental question of Free Will, as limited in investment selection, is answered through a new philosophical approach. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
3. NARX Model for Potato Price Prediction Utilising Multimarket Information
- Author
-
Jaiswal, Ronit, Jha, Girish Kumar, Kumar, Rajeev Ranjan, and Choudhary, Kapil
- Published
- 2025
- Full Text
- View/download PDF
4. Synapse-type-specific competitive Hebbian learning forms functional recurrent networks.
- Author
-
Eckmann, Samuel, Young, Edward James, and Gjorgjieva, Julijana
- Subjects
- *
STIMULUS & response (Psychology) , *MODEL theory , *NEURONS , *NEUROPLASTICITY , *PERCEPTUAL learning - Abstract
Cortical networks exhibit complex stimulus-response patterns that are based on specific recurrent interactions between neurons. For example, the balance between excitatory and inhibitory currents has been identified as a central component of cortical computations. However, it remains unclear how the required synaptic connectivity can emerge in developing circuits where synapses between excitatory and inhibitory neurons are simultaneously plastic. Using theory and modeling, we propose that a wide range of cortical response properties can arise from a single plasticity paradigm that acts simultaneously at all excitatory and inhibitory connections: Hebbian learning that is stabilized by the synapse-type-specific competition for a limited supply of synaptic resources. In plastic recurrent circuits, this competition enables the formation and decorrelation of inhibition-balanced receptive fields. Networks develop an assembly structure with stronger synaptic connections between similarly tuned excitatory and inhibitory neurons and exhibit response normalization and orientation-specific center-surround suppression, reflecting the stimulus statistics during training. These results demonstrate how neurons can self-organize into functional networks and suggest an essential role for synapse-type-specific competitive learning in the development of cortical circuits. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
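The plasticity rule described in the abstract above, Hebbian potentiation stabilized by competition for a limited supply of synaptic resources, can be illustrated in a few lines. This is a minimal toy sketch, not the paper's model: a single linear unit with Hebbian updates and divisive renormalization to a fixed resource budget, and all variable names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in = 8
W = rng.uniform(0.1, 1.0, size=n_in)  # excitatory weights onto one unit
budget = W.sum()                      # fixed total synaptic resource
eta = 0.05

for _ in range(500):
    x = rng.random(n_in)
    x[0] += 1.0            # input 0 carries consistently stronger drive
    y = W @ x              # linear postsynaptic response
    W += eta * y * x       # Hebbian potentiation (unstable on its own)
    W *= budget / W.sum()  # competition: renormalize to the resource budget

# Competition lets the strongly driven synapse win without runaway growth:
# W.sum() stays at `budget`, and W[0] ends up as the largest weight.
```

The renormalization step is what makes the learning competitive: a synapse can only grow at the expense of the others, so correlated drive is rewarded while total synaptic strength stays bounded.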
5. Hyperparameter Optimization Using Budget-Constrained BOHB for Traffic Forecasting
- Author
-
Swaminatha Rao, Lakshmi Priya, Jaganathan, Suresh, Kacprzyk, Janusz, Series Editor, Gomide, Fernando, Advisory Editor, Kaynak, Okyay, Advisory Editor, Liu, Derong, Advisory Editor, Pedrycz, Witold, Advisory Editor, Polycarpou, Marios M., Advisory Editor, Rudas, Imre J., Advisory Editor, Wang, Jun, Advisory Editor, Das, Swagatam, editor, Saha, Snehanshu, editor, Coello, Carlos A. Coello, editor, Rathore, Hemant, editor, and Bansal, Jagdish Chand, editor
- Published
- 2024
- Full Text
- View/download PDF
6. Exploring Emergent Properties of Recurrent Neural Networks Using a Novel Energy Function Formalism
- Author
-
Sengupta, Rakesh, Bapiraju, Surampudi, Pattanayak, Anindya, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Nicosia, Giuseppe, editor, Ojha, Varun, editor, La Malfa, Emanuele, editor, La Malfa, Gabriele, editor, Pardalos, Panos M., editor, and Umeton, Renato, editor
- Published
- 2024
- Full Text
- View/download PDF
7. A Comparative Study of Loss Functions for Deep Neural Networks in Time Series Analysis
- Author
-
Jaiswal, Rashi, Singh, Brijendra, Angrisani, Leopoldo, Series Editor, Arteaga, Marco, Series Editor, Chakraborty, Samarjit, Series Editor, Chen, Jiming, Series Editor, Chen, Shanben, Series Editor, Chen, Tan Kay, Series Editor, Dillmann, Rüdiger, Series Editor, Duan, Haibin, Series Editor, Ferrari, Gianluigi, Series Editor, Ferre, Manuel, Series Editor, Jabbari, Faryar, Series Editor, Jia, Limin, Series Editor, Kacprzyk, Janusz, Series Editor, Khamis, Alaa, Series Editor, Kroeger, Torsten, Series Editor, Li, Yong, Series Editor, Liang, Qilian, Series Editor, Martín, Ferran, Series Editor, Ming, Tan Cher, Series Editor, Minker, Wolfgang, Series Editor, Misra, Pradeep, Series Editor, Mukhopadhyay, Subhas, Series Editor, Ning, Cun-Zheng, Series Editor, Nishida, Toyoaki, Series Editor, Oneto, Luca, Series Editor, Panigrahi, Bijaya Ketan, Series Editor, Pascucci, Federica, Series Editor, Qin, Yong, Series Editor, Seng, Gan Woon, Series Editor, Speidel, Joachim, Series Editor, Veiga, Germano, Series Editor, Wu, Haitao, Series Editor, Zamboni, Walter, Series Editor, Zhang, Junjie James, Series Editor, Tan, Kay Chen, Series Editor, Borah, Malaya Dutta, editor, Laiphrakpam, Dolendro Singh, editor, Auluck, Nitin, editor, and Balas, Valentina Emilia, editor
- Published
- 2024
- Full Text
- View/download PDF
8. Hierarchical multi-head attention LSTM for polyphonic symbolic melody generation.
- Author
-
Kasif, Ahmet, Sevgen, Selcuk, Ozcan, Alper, and Catal, Cagatay
- Abstract
Creating symbolic melodies with machine learning is challenging because it requires an understanding of musical structure and the handling of inter-dependencies and long-term dependencies. Learning the relationship between events that occur far apart in time in music poses a considerable challenge for machine learning models. Another notable feature of music is that notes must account for several inter-dependencies, including melodic, harmonic, and rhythmic aspects. Baseline methods, such as RNNs, LSTMs, and GRUs, often struggle to capture these dependencies, resulting in the generation of musically incoherent or repetitive melodies. As such, in this study, a hierarchical multi-head attention LSTM model is proposed for creating polyphonic symbolic melodies. This enables our model to generate more complex and expressive melodies than previous methods, while still being musically coherent. The model allows learning of long-term dependencies at different levels of abstraction, while retaining the ability to form inter-dependencies. The study was conducted on two major symbolic music datasets, MAESTRO and Classical-Music MIDI, which feature musical content encoded as MIDI. The artistic nature of music makes the generated content hard to evaluate, and quantitative analysis alone is often not enough; thus, human listening tests were conducted to strengthen the evaluation. Quantitative analysis of the generated melodies shows significantly improved MSE loss scores over baseline methods, and the model generates melodies that are both musically coherent and expressive. The listening tests, conducted using a Likert scale, support the quantitative results and show better statistical scores than baseline methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
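The multi-head attention mechanism the abstract relies on can be illustrated independently of the full hierarchical LSTM. Below is a minimal numpy sketch of scaled dot-product attention with several heads; the projection matrices are random stand-ins rather than trained weights, and the sequence length and embedding size are arbitrary choices of ours.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(X, n_heads, rng):
    """Scaled dot-product attention over a note sequence X, shape (seq_len, d_model)."""
    seq_len, d_model = X.shape
    d_head = d_model // n_heads
    heads = []
    for _ in range(n_heads):
        # Random query/key/value projections (stand-ins for learned weights).
        Wq, Wk, Wv = (rng.standard_normal((d_model, d_head)) / np.sqrt(d_model)
                      for _ in range(3))
        Q, K, V = X @ Wq, X @ Wk, X @ Wv
        weights = softmax(Q @ K.T / np.sqrt(d_head))  # (seq_len, seq_len), rows sum to 1
        heads.append(weights @ V)                     # attention-weighted values
    return np.concatenate(heads, axis=-1)             # back to (seq_len, d_model)

rng = np.random.default_rng(1)
X = rng.standard_normal((16, 32))   # 16 timesteps, 32-dim note embeddings
Y = multi_head_attention(X, n_heads=4, rng=rng)
```

Each row of `weights` tells the model which earlier or later notes a given timestep attends to, which is how such architectures capture the long-range dependencies the abstract highlights.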
9. Optimized deep network based spoof detection in automatic speaker verification system.
- Author
-
Neelima, Medikonda and Prabha, I. Santi
- Abstract
Speaker verification systems (SVS) are increasingly automated to improve the authenticity of digital applications. However, spoofs in the audio signal reduce its integrity, which in turn lowers authentication accuracy. Spoof recognition therefore aims to identify the different types of spoofs with high accuracy, which is difficult because spoof features are harmful and heterogeneous. The present study builds a novel Dove-based Recurrent Spoof Recognition System (DbRSRS) to identify spoofing behaviour and its types from trained audio data. Noise features are filtered in a primary stage to mitigate the complexity of spoof recognition, and the filtered data then passes to a classification phase for feature selection and spoof recognition. Spoof types are classified based on their class features; once a spoof is identified, it is assigned to one of the spoof classes. The optimal dove features are used to tune the DbRSRS classification parameters, a process that yields better spoof recognition scores than recently published comparable models. The highest recorded spoof-forecasting accuracy was 99.2%, with a reported error value of 0.05%. Attaining high spoof prediction accuracy with low error should improve SVS performance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
10. Deep learning-based forecasting of sea surface temperature in the interim future: application over the Aegean, Ionian, and Cretan Seas (NE Mediterranean Sea).
- Author
-
Krestenitis, Marios, Androulidakis, Yannis, and Krestenitis, Yannis
- Subjects
- *
DEEP learning , *OCEAN temperature , *MARINE heatwaves , *FORECASTING , *MODULAR design , *MARINE ecology - Abstract
Sea surface temperature (SST) is a key indicator of the global climate system and is directly related to marine and coastal ecosystems, weather conditions, and atmospheric events. Marine heat waves (MHWs), characterized by prolonged periods of high SST, significantly affect oceanic water quality and thus the local ecosystem and marine and coastal activities. Given the anticipated increase in MHW occurrences due to climate change, targeted strategies are needed to mitigate their impact. Accurate SST forecasting can significantly contribute to this cause, and thus it is a crucial, yet challenging, task for the scientific community. Despite the wide variety of existing methods in the literature, the majority focus either on near-future SST forecasts (a few days to 1 month) or on long-term predictions (decades to a century) at climate scales based on hypothetical scenarios. In this work, we introduce a robust deep learning-based method for efficient SST forecasting of the interim future (1 year ahead) using high-resolution satellite-derived SST data. Our approach processes daily SST sequences lasting 1 year, along with five other relevant atmospheric variables, to predict the corresponding daily SST timeseries for the subsequent year. The novel method was deployed to forecast SST over the northeastern Mediterranean Seas (Aegean, Ionian, Cretan Seas: AICS). Utilizing well-established deep learning architectures, our method can provide accurate spatiotemporal predictions for multiple areas at once, without needing to be deployed separately in each sub-region. The modular design of the framework allows customization for different spatial and temporal resolutions according to use-case requirements. The proposed model was trained and evaluated using available data from the AICS region over a 15-year period (2008–2022). The results demonstrate the efficiency of our method in predicting SST variability, even for previously unseen data more than 2 years beyond the training set. The proposed methodology is a valuable tool that can also contribute to MHW prediction. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
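The input/output scheme described in the abstract above, a daily sequence lasting 1 year used to predict the subsequent year, amounts to a sliding-window construction over the time series. A minimal sketch with a synthetic daily series, purely illustrative (the real method also stacks five atmospheric variables and spatial dimensions):

```python
import numpy as np

def make_windows(series, in_len, out_len):
    """Split a daily series into (input year, target year) training pairs."""
    X, Y = [], []
    for t in range(len(series) - in_len - out_len + 1):
        X.append(series[t : t + in_len])                  # one year of inputs
        Y.append(series[t + in_len : t + in_len + out_len])  # the following year
    return np.array(X), np.array(Y)

# 15 years of toy daily "SST" with a seasonal cycle, standing in for real data.
sst = np.sin(2 * np.pi * np.arange(15 * 365) / 365)
X, Y = make_windows(sst, in_len=365, out_len=365)
# Each X[i] covers days i..i+364; each Y[i] covers the 365 days after that.
```

With 15 years of data and 730 days consumed per pair, this yields 15·365 − 730 + 1 = 4746 overlapping training pairs.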
11. Towards a Bidirectional Mexican Sign Language–Spanish Translation System: A Deep Learning Approach.
- Author
-
González-Rodríguez, Jaime-Rodrigo, Córdova-Esparza, Diana-Margarita, Terven, Juan, and Romero-González, Julio-Alejandro
- Subjects
DEEP learning ,RECURRENT neural networks ,SIGN language ,TRANSLATING & interpreting ,TRANSFORMER models ,COMMUNICATION barriers - Abstract
People with hearing disabilities often face communication barriers when interacting with hearing individuals. To address this issue, this paper proposes a bidirectional Sign Language Translation System that aims to bridge the communication gap. Deep learning models such as recurrent neural networks (RNN), bidirectional RNN (BRNN), LSTM, GRU, and Transformers are compared to find the most accurate model for sign language recognition and translation. Keypoint detection using MediaPipe is employed to track and understand sign language gestures. The system features a user-friendly graphical interface with modes for translating between Mexican Sign Language (MSL) and Spanish in both directions. Users can input signs or text and obtain corresponding translations. Performance evaluation demonstrates high accuracy, with the BRNN model achieving 98.8% accuracy. The research emphasizes the importance of hand features in sign language recognition. Future developments could focus on enhancing accessibility and expanding the system to support other sign languages. This Sign Language Translation System offers a promising solution to improve communication accessibility and foster inclusivity for individuals with hearing disabilities. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
12. Recurrent Neural Networks (RNN) to Predict the Curve of COVID-19 in Ecuador During the El Niño Phenomenon
- Author
-
Pérez-Espinoza, Charles M., Chon Long, Darwin Pow, Lopez, Jorge, Chalén, Genesis Rodriguez, Filipe, Joaquim, Editorial Board Member, Ghosh, Ashish, Editorial Board Member, Prates, Raquel Oliveira, Editorial Board Member, Zhou, Lizhu, Editorial Board Member, Valencia-García, Rafael, editor, Bucaram-Leverone, Martha, editor, Del Cioppo-Morstadt, Javier, editor, Vera-Lucio, Néstor, editor, and Centanaro-Quiroz, Pablo Humberto, editor
- Published
- 2023
- Full Text
- View/download PDF
13. Introduction
- Author
-
Paaß, Gerhard, Giesselbach, Sven, O'Sullivan, Barry, Series Editor, Wooldridge, Michael, Series Editor, Paaß, Gerhard, and Giesselbach, Sven
- Published
- 2023
- Full Text
- View/download PDF
14. Brain-like Combination of Feedforward and Recurrent Network Components Achieves Prototype Extraction and Robust Pattern Recognition
- Author
-
Ravichandran, Naresh Balaji, Lansner, Anders, Herman, Pawel, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Nicosia, Giuseppe, editor, Ojha, Varun, editor, La Malfa, Emanuele, editor, La Malfa, Gabriele, editor, Pardalos, Panos, editor, Di Fatta, Giuseppe, editor, Giuffrida, Giovanni, editor, and Umeton, Renato, editor
- Published
- 2023
- Full Text
- View/download PDF
15. Daily Trading of the FTSE Index Using LSTM with Principal Component Analysis
- Author
-
Edelman, David, Mannion, David, Corazza, Marco, editor, Perna, Cira, editor, Pizzi, Claudio, editor, and Sibillo, Marilena, editor
- Published
- 2022
- Full Text
- View/download PDF
16. Time Series in Sensor Data Using State-of-the-Art Deep Learning Approaches: A Systematic Literature Review
- Author
-
Jácome-Galarza, Luis-Roberto, Realpe-Robalino, Miguel-Andrés, Paillacho-Corredores, Jonathan, Benavides-Maldonado, José-Leonardo, Howlett, Robert J., Series Editor, Jain, Lakhmi C., Series Editor, Rocha, Álvaro, editor, López-López, Paulo Carlos, editor, and Salgado-Guerrero, Juan Pablo, editor
- Published
- 2022
- Full Text
- View/download PDF
17. An Efficient Recurrent Adversarial Framework for Unsupervised Real-Time Video Enhancement.
- Author
-
Fuoli, Dario, Huang, Zhiwu, Paudel, Danda Pani, Van Gool, Luc, and Timofte, Radu
- Subjects
- *
SUPERVISED learning , *GENERATIVE adversarial networks , *TEMPORAL integration , *LEARNING strategies , *VIDEO surveillance - Abstract
Video enhancement is a challenging problem, more so than still-image enhancement, mainly due to high computational cost, larger data volumes, and the difficulty of achieving consistency in the spatio-temporal domain. In practice, these challenges are often coupled with the lack of example pairs, which inhibits the application of supervised learning strategies. To address these challenges, we propose an efficient adversarial video enhancement framework that learns directly from unpaired video examples. In particular, our framework introduces new recurrent cells that consist of interleaved local and global modules for implicit integration of spatial and temporal information. The proposed design allows our recurrent cells to efficiently propagate spatio-temporal information across frames and reduces the need for high-complexity networks. Our setting enables learning from unpaired videos in a cyclic adversarial manner, where the proposed recurrent units are employed in all architectures. Efficient training is accomplished by introducing a single discriminator that learns the joint distribution of the source and target domains simultaneously. The enhancement results demonstrate clear superiority of the proposed video enhancer over state-of-the-art methods in terms of visual quality, quantitative metrics, and inference speed. Notably, our video enhancer is capable of enhancing over 35 frames per second of Full HD video (1080×1920). [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
18. Recurrent neural network optimization for wind turbine condition prognosis
- Author
-
Kerboua Adlen and Kelaiaia Ridha
- Subjects
optimization ,forecasting ,loss ,recurrent networks ,hyperparameters ,Technology - Abstract
This research focuses on employing Recurrent Neural Networks (RNN) to predict the operational health of a wind turbine from collected vibration time series data, using several memory cell variations, including Long Short-Term Memory (LSTM), Bidirectional LSTM (BiLSTM), and Gated Recurrent Unit (GRU) cells, integrated into various architectures. We tune the training hyperparameters as well as the depth and recurrent cell count of the proposed networks to obtain the most accurate predictions. Tuning these parameters is a hard task and depends heavily on the designer's experience. This can be resolved by integrating the training process into a Bayesian optimization loop where the loss is treated as the objective function to minimize. The obtained results show the effectiveness of the proposed method, which generates more accurate recurrent models, with a more accurate prognosis of the wind turbine's operating state, than those generated using trivial training parameters.
- Published
- 2022
- Full Text
- View/download PDF
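The Bayesian optimization loop the abstract describes, with the training loss as the objective, can be sketched with a small Gaussian-process surrogate and a lower-confidence-bound acquisition. The `surrogate_loss` function below is a hypothetical stand-in for "train the RNN and return its validation loss"; everything else is a generic BO skeleton of ours, not the authors' implementation.

```python
import numpy as np

def surrogate_loss(lr_exp):
    # Hypothetical stand-in for "train the RNN, return the validation loss":
    # a smooth bowl whose minimum sits near lr_exp = -3 (i.e. lr = 1e-3).
    return (lr_exp + 3.0) ** 2 + 0.1 * np.sin(5 * lr_exp)

def gp_posterior(X, y, Xs, length=1.0, noise=1e-4):
    """Gaussian-process posterior mean/std on grid Xs, RBF kernel."""
    k = lambda a, b: np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)
    K_inv = np.linalg.inv(k(X, X) + noise * np.eye(len(X)))
    Ks, Kss = k(X, Xs), k(Xs, Xs)
    mu = Ks.T @ K_inv @ y
    var = np.diag(Kss - Ks.T @ K_inv @ Ks).clip(min=1e-12)
    return mu, np.sqrt(var)

rng = np.random.default_rng(0)
X = list(rng.uniform(-6.0, 0.0, 3))        # initial hyperparameter samples
y = [surrogate_loss(v) for v in X]
grid = np.linspace(-6.0, 0.0, 200)

for _ in range(15):                        # the Bayesian optimization loop
    mu, sd = gp_posterior(np.array(X), np.array(y), grid)
    acq = mu - 2.0 * sd                    # lower-confidence-bound acquisition
    x_next = float(grid[acq.argmin()])     # most promising point to try next
    X.append(x_next)
    y.append(surrogate_loss(x_next))

best = X[int(np.argmin(y))]                # best learning-rate exponent found
```

Because each "evaluation" here stands in for a full network training run, the surrogate's job is to spend those expensive runs only where the model predicts a low loss or high uncertainty.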
19. Towards a Bidirectional Mexican Sign Language–Spanish Translation System: A Deep Learning Approach
- Author
-
Jaime-Rodrigo González-Rodríguez, Diana-Margarita Córdova-Esparza, Juan Terven, and Julio-Alejandro Romero-González
- Subjects
Mexican sign language ,translation ,machine learning ,recurrent networks ,assistive technologies ,Technology - Abstract
People with hearing disabilities often face communication barriers when interacting with hearing individuals. To address this issue, this paper proposes a bidirectional Sign Language Translation System that aims to bridge the communication gap. Deep learning models such as recurrent neural networks (RNN), bidirectional RNN (BRNN), LSTM, GRU, and Transformers are compared to find the most accurate model for sign language recognition and translation. Keypoint detection using MediaPipe is employed to track and understand sign language gestures. The system features a user-friendly graphical interface with modes for translating between Mexican Sign Language (MSL) and Spanish in both directions. Users can input signs or text and obtain corresponding translations. Performance evaluation demonstrates high accuracy, with the BRNN model achieving 98.8% accuracy. The research emphasizes the importance of hand features in sign language recognition. Future developments could focus on enhancing accessibility and expanding the system to support other sign languages. This Sign Language Translation System offers a promising solution to improve communication accessibility and foster inclusivity for individuals with hearing disabilities.
- Published
- 2024
- Full Text
- View/download PDF
20. LiMNet: Early-Stage Detection of IoT Botnets with Lightweight Memory Networks
- Author
-
Giaretta, Lodovico, Lekssays, Ahmed, Carminati, Barbara, Ferrari, Elena, Girdzijauskas, Šarūnas, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Woeginger, Gerhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Shulman, Haya, editor, and Waidner, Michael, editor
- Published
- 2021
- Full Text
- View/download PDF
21. The Cerebral Cortex: A Delay-Coupled Recurrent Oscillator Network?
- Author
-
Singer, Wolf, Bäck, Thomas, Series Editor, Kari, Lila, Series Editor, Nakajima, Kohei, editor, and Fischer, Ingo, editor
- Published
- 2021
- Full Text
- View/download PDF
22. Electrocardiogram Classification Using Long Short-Term Memory Networks
- Author
-
Tang, Shijun, Tang, Jenny, Arabnia, Hamid, Series Editor, Arabnia, Hamid R., editor, Deligiannidis, Leonidas, editor, Shouno, Hayaru, editor, Tinetti, Fernando G., editor, and Tran, Quoc-Nam, editor
- Published
- 2021
- Full Text
- View/download PDF
23. Continuous Quasi-Attractors dissolve with too much - or too little - variability.
- Author
-
Schönsberg F, Monasson R, and Treves A
- Abstract
Recent research involving bats flying in long tunnels has confirmed that hippocampal place cells can be active at multiple locations, with considerable variability in place field size and peak rate. With self-organizing recurrent networks, variability implies inhomogeneity in the synaptic weights, impeding the establishment of a continuous manifold of fixed points. Are continuous attractor neural networks still valid models for understanding spatial memory in the hippocampus, given such variability? Here, we ask what are the noise limits, in terms of an experimentally inspired parametrization of the irregularity of a single map, beyond which the notion of continuous attractor is no longer relevant. Through numerical simulations we show that (i) a continuous attractor can be approximated even when neural dynamics ultimately converge onto very few fixed points, since a quasi-attractive continuous manifold supports dynamically localized activity; (ii) excess irregularity in field size, however, disrupts the continuity of the manifold, while too little irregularity, with multiple fields, surprisingly prevents localized activity; and (iii) the boundaries in parameter space among these three regimes, extracted from simulations, are well matched by analytical estimates. These results lead us to predict that there will be a maximum size of 1D environment that can be retained in memory, and that the replay of spatial activity during sleep or quiet wakefulness will cover short segments of the environment. (© The Author(s) 2024. Published by Oxford University Press on behalf of National Academy of Sciences.)
- Published
- 2024
- Full Text
- View/download PDF
24. Cellular-resolution optogenetics reveals attenuation-by-suppression in visual cortical neurons.
- Author
-
LaFosse PK, Zhou Z, O'Rawe JF, Friedman NG, Scott VM, Deng Y, and Histed MH
- Subjects
- Animals, Mice, Photic Stimulation methods, Action Potentials physiology, Optogenetics methods, Visual Cortex physiology, Visual Cortex cytology, Neurons physiology
- Abstract
The relationship between neurons' input and spiking output is central to brain computation. Studies in vitro and in anesthetized animals suggest that nonlinearities emerge in cells' input-output (IO; activation) functions as network activity increases, yet how neurons transform inputs in vivo has been unclear. Here, we characterize cortical principal neurons' activation functions in awake mice using two-photon optogenetics. We deliver fixed inputs at the soma while neurons' activity varies with sensory stimuli. We find that responses to fixed optogenetic input are nearly unchanged as neurons are excited, reflecting a linear response regime above neurons' resting point. In contrast, responses are dramatically attenuated by suppression. This attenuation is a powerful means to filter inputs arriving to suppressed cells, privileging other inputs arriving to excited neurons. These results have two major implications. First, somatic neural activation functions in vivo accord with the activation functions used in recent machine learning systems. Second, neurons' IO functions can filter sensory inputs: not only do sensory stimuli change neurons' spiking outputs, but these changes also affect responses to input, attenuating responses to some inputs while leaving others unchanged. Competing interests statement: The authors declare no competing interest.
- Published
- 2024
- Full Text
- View/download PDF
25. Deep Learning Networks and Visual Perception
- Author
-
Lindsay, Grace W. and Serre, Thomas
- Published
- 2021
- Full Text
- View/download PDF
26. Learning sequence attractors in recurrent networks with hidden neurons.
- Author
-
Lu, Yao and Wu, Si
- Subjects
- *
MACHINE learning , *INFORMATION processing , *NEURONS , *MEMORY , *ALGORITHMS - Abstract
The brain is specialized for processing temporal sequence information. It remains largely unclear how the brain learns to store and retrieve sequence memories. Here, we study how recurrent networks of binary neurons learn sequence attractors to store predefined pattern sequences and retrieve them robustly. We show that to store arbitrary pattern sequences, it is necessary for the network to include hidden neurons, even though their role in displaying sequence memories is indirect. We develop a local learning algorithm to learn sequence attractors in networks with hidden neurons. The algorithm is proven to converge and lead to sequence attractors. We demonstrate that the network model can store and retrieve sequences robustly on synthetic and real-world datasets. We hope that this study provides new insights into sequence memory and temporal information processing in the brain. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
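The classical special case of sequence storage, an asymmetric Hebbian rule in a binary network without hidden neurons, can be written in a few lines. It retrieves exactly only when the stored patterns are orthogonal, which is precisely the limitation that motivates the paper's hidden-neuron algorithm; this sketch is ours, not the paper's method.

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n ±1 Hadamard matrix (n a power of 2)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

# Four mutually orthogonal ±1 patterns form the sequence to store.
P = hadamard(8)[:4]                      # shape (4, 8): 4 patterns, 8 neurons
N = P.shape[1]

# Asymmetric Hebbian rule: W maps each pattern onto its successor (cyclically).
W = sum(np.outer(P[(m + 1) % 4], P[m]) for m in range(4)) / N

# Retrieval: start at the first pattern and update all neurons synchronously.
s = P[0].copy()
retrieved = []
for _ in range(4):
    s = np.sign(W @ s)                   # binary neuron update
    retrieved.append(s.copy())

# The network steps through the stored sequence: P[1], P[2], P[3], then P[0].
```

With orthogonal patterns, `W @ P[m]` equals `P[m+1]` exactly, so the dynamics walk the cycle deterministically; for arbitrary (correlated) sequences this rule fails, which is where the paper's hidden neurons come in.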
27. Uncertainty quantification of graph convolution neural network models of evolving processes.
- Author
-
Hauth, Jeremiah, Safta, Cosmin, Huan, Xun, Patel, Ravi G., and Jones, Reese E.
- Subjects
- *
CONVOLUTIONAL neural networks , *ARTIFICIAL neural networks , *MACHINE learning , *MONTE Carlo method , *RECURRENT equations , *RECURRENT neural networks - Abstract
The application of neural network models to scientific machine learning tasks has proliferated in recent years. In particular, neural networks have proved to be adept at modeling processes with spatial–temporal complexity. Nevertheless, these highly parameterized models have garnered skepticism in their ability to produce outputs with quantified error bounds over the regimes of interest. Hence there is a need to find uncertainty quantification methods that are suitable for neural networks. In this work we present comparisons of the parametric uncertainty quantification of neural networks modeling complex spatial–temporal processes with Hamiltonian Monte Carlo and Stein variational gradient descent and its projected variant. Specifically we apply these methods to graph convolutional neural network models of evolving systems modeled with recurrent neural network and neural ordinary differential equation architectures. We show that Stein variational inference is a viable alternative to Monte Carlo methods, with some clear advantages for complex neural network models. For our exemplars, Stein variational inference gave uncertainty profiles pushed forward through time similar to those of Hamiltonian Monte Carlo, albeit with generally more generous variance. Projected Stein variational gradient descent also produced uncertainty profiles similar to those of the non-projected counterpart, but large reductions in the active weight space were confounded by the stability of the neural network predictions and the convoluted likelihood landscape. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
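The Stein variational gradient descent update compared in the abstract has a compact closed form: each particle moves along a kernel-weighted average of the other particles' log-density gradients, plus a kernel-gradient term that keeps the particles spread out. A minimal one-dimensional sketch of ours, with an RBF kernel and the median bandwidth heuristic, targeting a standard normal rather than a neural-network posterior:

```python
import numpy as np

def svgd_step(x, grad_logp, eps=0.05):
    """One SVGD update on a set of 1-D particles x, shape (n,)."""
    n = len(x)
    d = x[:, None] - x[None, :]                             # pairwise differences
    h = np.median(np.abs(d)) ** 2 / np.log(n + 1) + 1e-12   # bandwidth heuristic
    K = np.exp(-d ** 2 / h)                                 # RBF kernel matrix
    # phi_i = (1/n) sum_j [ k(x_j, x_i) grad_logp(x_j) + d/dx_j k(x_j, x_i) ]
    repulsion = (-2.0 * d / h * K).sum(axis=0)   # kernel-gradient (spreading) term
    phi = (K @ grad_logp(x) + repulsion) / n
    return x + eps * phi

rng = np.random.default_rng(0)
particles = rng.uniform(-6.0, 6.0, 100)      # arbitrary initialization
grad_logp = lambda x: -x                     # target: standard normal N(0, 1)
for _ in range(2000):
    particles = svgd_step(particles, grad_logp)
# The particle cloud now approximates a sample from N(0, 1).
```

In the paper's setting `grad_logp` would be the gradient of a network's log-posterior over weights; the deterministic particle transport is what makes SVGD an alternative to Monte Carlo sampling.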
28. Deep Learning Architectures
- Author
-
Hosseini, Mohammad-Parsa, Lu, Senbao, Kamaraj, Kavin, Slowikowski, Alexander, Venkatesh, Haygreev C., Kacprzyk, Janusz, Series Editor, Pedrycz, Witold, editor, and Chen, Shyi-Ming, editor
- Published
- 2020
- Full Text
- View/download PDF
29. Implementation of Myanmar Handwritten Recognition
- Author
-
Win, Hsu Yadanar, Wai, Thinn Thinn, Kacprzyk, Janusz, Series Editor, Pal, Nikhil R., Advisory Editor, Bello Perez, Rafael, Advisory Editor, Corchado, Emilio S., Advisory Editor, Hagras, Hani, Advisory Editor, Kóczy, László T., Advisory Editor, Kreinovich, Vladik, Advisory Editor, Lin, Chin-Teng, Advisory Editor, Lu, Jie, Advisory Editor, Melin, Patricia, Advisory Editor, Nedjah, Nadia, Advisory Editor, Nguyen, Ngoc Thanh, Advisory Editor, Wang, Jun, Advisory Editor, Vasant, Pandian, editor, Zelinka, Ivan, editor, and Weber, Gerhard-Wilhelm, editor
- Published
- 2020
- Full Text
- View/download PDF
30. Long-Term Prediction of Physical Interactions: A Challenge for Deep Generative Models
- Author
-
Cenzato, Alberto, Testolin, Alberto, Zorzi, Marco, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Woeginger, Gerhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Nicosia, Giuseppe, editor, Ojha, Varun, editor, La Malfa, Emanuele, editor, Jansen, Giorgio, editor, Sciacca, Vincenzo, editor, Pardalos, Panos, editor, Giuffrida, Giovanni, editor, and Umeton, Renato, editor
- Published
- 2020
- Full Text
- View/download PDF
31. Inhibitory control of frontal metastability sets the temporal signature of cognition
- Author
-
Vincent Fontanier, Matthieu Sarazin, Frederic M Stoll, Bruno Delord, and Emmanuel Procyk
- Subjects
inhibition ,timescale ,prefrontal cortex ,recurrent networks ,cingulate ,metastable states ,Medicine ,Science ,Biology (General) ,QH301-705.5 - Abstract
Cortical dynamics are organized over multiple anatomical and temporal scales. The mechanistic origin of the temporal organization and its contribution to cognition remain unknown. Here, we demonstrate the cause of this organization by studying a specific temporal signature (time constant and latency) of neural activity. In monkey frontal areas, recorded during flexible decisions, temporal signatures display specific area-dependent ranges, as well as anatomical and cell-type distributions. Moreover, temporal signatures are functionally adapted to behaviourally relevant timescales. Fine-grained biophysical network models, constrained to account for experimentally observed temporal signatures, reveal that after-hyperpolarization potassium and inhibitory GABA-B conductances critically determine areas’ specificity. They mechanistically account for temporal signatures by organizing activity into metastable states, with inhibition controlling state stability and transitions. As predicted by models, state durations non-linearly scale with temporal signatures in monkey, matching behavioural timescales. Thus, local inhibitory-controlled metastability constitutes the dynamical core specifying the temporal organization of cognitive functions in frontal areas.
- Published
- 2022
- Full Text
- View/download PDF
32. Novel deep learning architectures for haemodialysis time series classification.
- Author
-
Leonardi, Giorgio, Montani, Stefania, and Striani, Manuel
- Subjects
- *
DEEP learning, *TIME series analysis, *CONVOLUTIONAL neural networks, *HEMODIALYSIS, *RECURRENT neural networks - Abstract
Classifying haemodialysis sessions, on the basis of the evolution of specific clinical variables over time, allows the physician to identify patients that are being treated inefficiently, and that may need additional monitoring or corrective interventions. In this paper, we propose a deep learning approach to clinical time series classification in the haemodialysis domain. In particular, we have defined two novel architectures, able to take advantage of the strengths of Convolutional Neural Networks and of Recurrent Networks. The novel architectures we introduced and tested outperformed classical mathematical classification techniques, as well as simpler deep learning approaches. In particular, combining Recurrent Networks with convolutional structures in different ways allowed us to obtain accuracies above 81%, coupled with high values of the Matthews Correlation Coefficient (MCC), a parameter particularly suitable for assessing the quality of classification when dealing with unbalanced classes, as in our case. In the future we will test an extension of the approach to additional monitoring time series, aiming at an overall optimization of patient care. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
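The convolutional-plus-recurrent combination described in the abstract above can be sketched in miniature. Everything below (function names, kernel, weights, the sign-based decision) is an illustrative toy of the general idea, not the paper's architecture:

```python
import math

def conv1d(signal, kernel):
    """Valid-mode 1-D convolution: the local feature extraction ('CNN') part."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def elman_step(h, x, w_in=1.0, w_rec=0.5):
    """One step of a minimal recurrent ('RNN') unit with a scalar state."""
    return math.tanh(w_in * x + w_rec * h)

def classify_session(signal, kernel=(0.25, 0.5, 0.25)):
    """Convolve the time series, feed the feature sequence through the
    recurrent unit, and use the sign of the final state as a two-class
    decision (e.g. efficient vs. inefficient treatment)."""
    h = 0.0
    for x in conv1d(list(signal), list(kernel)):
        h = elman_step(h, x)
    return 1 if h > 0 else 0
```

The point of the combination is division of labour: the convolution summarizes local temporal patterns, while the recurrent state integrates them over the whole session before the final decision.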
33. Input addition and deletion in reinforcement: towards protean learning.
- Author
-
Bonnici, Iago, Gouaïch, Abdelkader, and Michel, Fabien
- Subjects
REINFORCEMENT learning, FUNCTIONAL analysis - Abstract
Reinforcement Learning (RL) agents are commonly thought of as adaptive decision procedures. They work on input/output data streams called "states", "actions" and "rewards". Most current research about RL adaptiveness to changes works under the assumption that the stream signatures (i.e. the arity and types of inputs and outputs) remain the same throughout the agent's lifetime. As a consequence, natural situations where the signatures vary (e.g. when new data streams become available, or when others become obsolete) are not studied. In this paper, we relax this assumption and consider that signature changes define a new learning situation called Protean Learning (PL). When they occur, traditional RL agents become undefined, so they need to restart learning. Can better methods be developed under the PL view? To investigate this, we first construct a stream-oriented formalism to properly define PL and signature changes. Then, we run experiments in an idealized PL situation where input addition and deletion occur during the learning process. Results show that a simple PL-oriented method enables graceful adaptation to these arity changes, and is more efficient than restarting the process. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
34. Fine-Tuning BERT Models for Intent Recognition Using a Frequency Cut-Off Strategy for Domain-Specific Vocabulary Extension.
- Author
-
Fernández-Martínez, Fernando, Luna-Jiménez, Cristina, Kleinlein, Ricardo, Griol, David, Callejas, Zoraida, and Montero, Juan Manuel
- Subjects
RECURRENT neural networks, VOCABULARY, WORD frequency, NATURAL languages, DEEP learning, PHYSIOLOGICAL adaptation - Abstract
Intent recognition is a key component of any task-oriented conversational system. The intent recognizer can be used first to classify the user's utterance into one of several predefined classes (intents) that help to understand the user's current goal. Then, the most adequate response can be provided accordingly. Intent recognizers also often appear as a form of joint models for performing the natural language understanding and dialog management tasks together as a single process, thus simplifying the set of problems that a conversational system must solve. This happens to be especially true for frequently asked question (FAQ) conversational systems. In this work, we first present an exploratory analysis in which different deep learning (DL) models for intent detection and classification were evaluated. In particular, we experimentally compare and analyze conventional recurrent neural networks (RNN) and state-of-the-art transformer models. Our experiments confirmed that best performance is achieved by using transformers. Specifically, best performance was achieved by fine-tuning the so-called BETO model (a Spanish pretrained bidirectional encoder representations from transformers (BERT) model from the Universidad de Chile) in our intent detection task. Then, as the main contribution of the paper, we analyze the effect of inserting unseen domain words to extend the vocabulary of the model as part of the fine-tuning or domain-adaptation process. Particularly, a very simple word frequency cut-off strategy is experimentally shown to be a suitable method for driving the vocabulary learning decisions over unseen words. The results of our analysis show that the proposed method helps to effectively extend the original vocabulary of the pretrained models. We validated our approach with a selection of the corpus acquired with the Hispabot-Covid19 system obtaining satisfactory results. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
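The word-frequency cut-off idea in the abstract above reduces to a small filtering step. This sketch is a guess at the general shape only; the function name, threshold value, and tokenization are hypothetical and not taken from the paper:

```python
from collections import Counter

def select_new_vocab(domain_corpus, known_vocab, min_freq=5):
    """Return unseen domain words whose corpus frequency reaches the cut-off;
    these are the candidates appended to the pretrained model's vocabulary
    before fine-tuning on the domain data."""
    counts = Counter(word
                     for doc in domain_corpus
                     for word in doc.lower().split())
    return sorted(word for word, c in counts.items()
                  if c >= min_freq and word not in known_vocab)
```

The cut-off keeps the vocabulary extension small: only unseen words frequent enough to earn a useful embedding during fine-tuning are added, while rare words continue to be handled by the pretrained subword pieces.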
35. Intrinsic dynamic shapes responses to external stimulation in the human brain.
- Author
-
Nentwich M, Leszczynski M, Schroeder CE, Bickel S, and Parra LC
- Abstract
Sensory stimulation of the brain reverberates in its recurrent neuronal networks. However, current computational models of brain activity do not separate immediate sensory responses from intrinsic recurrent dynamics. We apply a vector-autoregressive model with external input (VARX), combining the concepts of "functional connectivity" and "encoding models", to intracranial recordings in humans. We find that the recurrent connectivity during rest is largely unaltered during movie watching. The intrinsic recurrent dynamic enhances and prolongs the neural responses to scene cuts, eye movements, and sounds. Failing to account for these exogenous inputs leads to spurious connections in the intrinsic "connectivity". The model shows that an external stimulus can reduce intrinsic noise. It also shows that sensory areas have mostly outward connections, whereas higher-order brain areas have mostly incoming ones. We conclude that the response to an external audiovisual stimulus can largely be attributed to the intrinsic dynamic of the brain, already observed during rest. Competing interests: the authors declare no competing interests.
- Published
- 2024
- Full Text
- View/download PDF
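To make the VARX idea in the abstract above concrete, here is a minimal scalar analogue: a series driven by its own past (the "connectivity" term) plus an exogenous input (the "encoding" term), fitted by least squares. The first-order scalar form and all names are illustrative simplifications of the model class, not the authors' implementation:

```python
def fit_varx1(x, u):
    """Least-squares fit of a first-order scalar VARX model
        x[t] = a * x[t-1] + b * u[t]
    by solving the 2x2 normal equations directly.
    Returns the estimated coefficients (a, b)."""
    S11 = sum(x[t - 1] ** 2 for t in range(1, len(x)))
    S12 = sum(x[t - 1] * u[t] for t in range(1, len(x)))
    S22 = sum(u[t] ** 2 for t in range(1, len(x)))
    r1 = sum(x[t - 1] * x[t] for t in range(1, len(x)))
    r2 = sum(u[t] * x[t] for t in range(1, len(x)))
    det = S11 * S22 - S12 ** 2
    a = (S22 * r1 - S12 * r2) / det
    b = (S11 * r2 - S12 * r1) / det
    return a, b

def simulate(a, b, u, x0=0.0):
    """Generate a noise-free trajectory driven by exogenous input u."""
    x = [x0]
    for t in range(1, len(u)):
        x.append(a * x[-1] + b * u[t])
    return x
```

In the full model, `a` becomes a matrix of recurrent (connectivity) coefficients and `b` a matrix of input (encoding) filters; dropping the `u` term is what the abstract warns against, since the input's influence would then be absorbed into spurious recurrent coefficients.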
36. Machine-Health Application Based on Machine Learning Techniques for Prediction of Valve Wear in a Manufacturing Plant
- Author
-
Fernández-García, María-Elena, Larrey-Ruiz, Jorge, Ros-Ros, Antonio, Figueiras-Vidal, Aníbal R., Sancho-Gómez, José-Luis, Hutchison, David, Editorial Board Member, Kanade, Takeo, Editorial Board Member, Kittler, Josef, Editorial Board Member, Kleinberg, Jon M., Editorial Board Member, Mattern, Friedemann, Editorial Board Member, Mitchell, John C., Editorial Board Member, Naor, Moni, Editorial Board Member, Pandu Rangan, C., Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Terzopoulos, Demetri, Editorial Board Member, Tygar, Doug, Editorial Board Member, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Ferrández Vicente, José Manuel, editor, Álvarez-Sánchez, José Ramón, editor, de la Paz López, Félix, editor, Toledo Moreo, Javier, editor, and Adeli, Hojjat, editor
- Published
- 2019
- Full Text
- View/download PDF
37. Effects of Input Addition in Learning for Adaptive Games: Towards Learning with Structural Changes
- Author
-
Bonnici, Iago, Gouaïch, Abdelkader, Michel, Fabien, Hutchison, David, Editorial Board Member, Kanade, Takeo, Editorial Board Member, Kittler, Josef, Editorial Board Member, Kleinberg, Jon M., Editorial Board Member, Mattern, Friedemann, Editorial Board Member, Mitchell, John C., Editorial Board Member, Naor, Moni, Editorial Board Member, Pandu Rangan, C., Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Terzopoulos, Demetri, Editorial Board Member, Tygar, Doug, Editorial Board Member, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Kaufmann, Paul, editor, and Castillo, Pedro A., editor
- Published
- 2019
- Full Text
- View/download PDF
38. Temporally Consistent Depth Estimation in Videos with Recurrent Architectures
- Author
-
Tananaev, Denis, Zhou, Huizhong, Ummenhofer, Benjamin, Brox, Thomas, Hutchison, David, Series Editor, Kanade, Takeo, Series Editor, Kittler, Josef, Series Editor, Kleinberg, Jon M., Series Editor, Mattern, Friedemann, Series Editor, Mitchell, John C., Series Editor, Naor, Moni, Series Editor, Pandu Rangan, C., Series Editor, Steffen, Bernhard, Series Editor, Terzopoulos, Demetri, Series Editor, Tygar, Doug, Series Editor, Leal-Taixé, Laura, editor, and Roth, Stefan, editor
- Published
- 2019
- Full Text
- View/download PDF
39. Deep Learning: An Introduction
- Author
-
Liermann, Volker, Li, Sangmeng, Schaudinnus, Norbert, Liermann, Volker, editor, and Stegmann, Claus, editor
- Published
- 2019
- Full Text
- View/download PDF
40. Single image rain removal using recurrent scale-guide networks.
- Author
-
Wang, Cong, Zhu, Honghe, Fan, Wanshu, Wu, Xiao-Ming, and Chen, Junyang
- Subjects
- *
OBJECT recognition (Computer vision), *SURVEILLANCE detection, *COMPUTER vision, *VIDEO surveillance, *SOURCE code - Abstract
Recently, removing rain streaks from a single image has attracted a lot of attention because rain streaks can severely degrade the perceptual quality of the image and cause many practical vision systems to fail. Single image deraining can serve as a pre-processing step to improve the performance of high-level vision tasks such as object detection and video surveillance. In this paper, we propose recurrent scale-guide networks for single image deraining. Although the multi-scale strategy has been successfully applied to many computer vision problems, the correlation between different scales has not been explored in most existing methods. To overcome this deficiency, we propose two types of scale-guide blocks and develop two combinations between the blocks: in one type the small scale guides the large, and in the other the large scale guides the small. Moreover, we extend the single-stage deraining model to a multi-stage recurrent framework and introduce the Long Short-Term Memory (LSTM) to link the stages. Extensive experiments verify that the scale-guide manner boosts the deraining performance and the recurrent style improves the deraining results. Experimental results demonstrate that the proposed method outperforms other state-of-the-art deraining methods on three widely used datasets: Rain100H, Rain100L, and Rain1200. The source codes can be found at https://supercong94.wixsite.com/supercong94. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
41. Open Set Semantic Segmentation for Multitemporal Crop Recognition.
- Author
-
Chamorro Martinez, Jorge A., Oliveira, Hugo, Santos, Jefersson A. dos, and Feitosa, Raul Queiroz
- Abstract
Multitemporal remote-sensing images play a key role as a source of information for automated crop mapping and monitoring. The spatial/spectral pattern evolution along time provides information about the dynamics of the crops and is very useful for productivity estimation. Although the multitemporal mapping of crops has progressed considerably with the advent of deep learning in recent years, the classification models obtained still have limitations when exposed to unknown classes in the prediction phase, reducing their usefulness. In other words, these models are trained to identify a closed set of crops (e.g., soy and sugar cane) and are therefore unable to recognize other types of crops (e.g., maize). In this letter, we deal with the challenges of multitemporal crop recognition by proposing a new approach called OpenPCS++ that is not only able to learn known classes but is also capable of identifying new crops in the prediction phase. The proposed approach was evaluated on two challenging public datasets located in tropical climates in Brazil. Results showed that OpenPCS++ achieved increases of up to 0.19 in terms of area under the receiver-operating characteristic (ROC) curve in comparison with baselines. Code is available at https://github.com/DiMorten/osss-mcr. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
42. Network-centered homeostasis through inhibition maintains hippocampal spatial map and cortical circuit function
- Author
-
Klara Kaleb, Victor Pedrosa, and Claudia Clopath
- Subjects
network homeostasis, inhibitory plasticity, hippocampus, place cells, remapping, recurrent networks, Biology (General), QH301-705.5 - Abstract
Summary: Despite ongoing experiential change, neural activity maintains remarkable stability. Although this is thought to be mediated by homeostatic plasticity, what aspect of neural activity is conserved and how the flexibility necessary for learning and memory is maintained is not fully understood. Experimental studies suggest that there exists network-centered, in addition to the well-studied neuron-centered, control. Here we computationally study such a potential mechanism: input-dependent inhibitory plasticity (IDIP). In a hippocampal model, we show that IDIP can explain the emergence of active and silent place cells as well as remapping following silencing of active place cells. Furthermore, we show that IDIP can also stabilize recurrent dynamics while preserving firing rate heterogeneity and stimulus representation, as well as persistent activity after memory encoding. Hence, the establishment of global network balance with IDIP has diverse functional implications and may be able to explain experimental phenomena across different brain areas.
- Published
- 2021
- Full Text
- View/download PDF
43. Recurrent dynamics in the cerebral cortex: Integration of sensory evidence with stored knowledge.
- Author
-
Singer, Wolf
- Subjects
- *
CEREBRAL cortex, *SENSORIMOTOR integration, *DEEP learning, *FEATURE extraction, *EVIDENCE, *PREDICTIVE tests, *PATIENT discharge instructions - Abstract
Current concepts of sensory processing in the cerebral cortex emphasize serial extraction and recombination of features in hierarchically structured feed-forward networks in order to capture the relations among the components of perceptual objects. These concepts are implemented in convolutional deep learning networks and have been validated by the astounding similarities between the functional properties of artificial systems and their natural counterparts. However, cortical architectures also display an abundance of recurrent coupling within and between the layers of the processing hierarchy. This massive recurrence gives rise to highly complex dynamics whose putative function is poorly understood. Here a concept is proposed that assigns specific functions to the dynamics of cortical networks and combines, in a unifying approach, the respective advantages of recurrent and feed-forward processing. It is proposed that the priors about regularities of the world are stored in the weight distributions of feed-forward and recurrent connections and that the high-dimensional, dynamic space provided by recurrent interactions is exploited for computations. These comprise the ultrafast matching of sensory evidence with the priors covertly represented in the correlation structure of spontaneous activity and the context-dependent grouping of feature constellations characterizing natural objects. The concept posits that information is encoded not only in the discharge frequency of neurons but also in the precise timing relations among the discharges. Results of experiments designed to test the predictions derived from this concept support the hypothesis that cerebral cortex exploits the high-dimensional recurrent dynamics for computations serving predictive coding. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
44. TConvRec: temporal convolutional-recurrent fusion model with additional pattern learning
- Author
-
Singh, Brijendra and Jaiswal, Rashi
- Published
- 2023
- Full Text
- View/download PDF
45. The Research on Distributed Fusion Estimation Based on Machine Learning
- Author
-
Zhengxiao Peng, Yun Li, and Gang Hao
- Subjects
Distributed fusion, machine learning, recurrent networks, BP network, Electrical engineering. Electronics. Nuclear engineering, TK1-9971 - Abstract
Multi-sensor distributed fusion estimation algorithms based on machine learning are proposed in this paper. Firstly, using local estimations as inputs and the estimations of three classic distributed fusion schemes (weighted by matrices, by diagonal matrices and by scalars) as the training sets, three distributed fusion algorithms based on BP networks (BP net-based fusion weighted by matrices, by diagonal matrices and by scalars) are proposed, and the selection basis for the number of nodes in the hidden layer is given. Furthermore, by using local estimations as inputs and the centralized fusion estimation as the training set, another recurrent net-based distributed fusion algorithm is proposed for the case in which neither true states nor cross-covariance matrices are available. This method is not limited to the linear minimum variance (LMV) criterion, so its accuracy is higher than that of the three classic distributed fusion algorithms. A radar tracking simulation verifies the effectiveness of the proposed fusion networks.
- Published
- 2020
- Full Text
- View/download PDF
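The classic "weighted by scalars" fusion rule that the abstract above uses as a training target can be sketched directly. This assumes unbiased, uncorrelated local estimates; the function name is illustrative:

```python
def fuse_by_scalars(estimates, variances):
    """Fuse unbiased, uncorrelated local estimates with scalar weights
    inversely proportional to each sensor's error variance.
    Returns the fused estimate and its variance."""
    inv = [1.0 / p for p in variances]          # information of each sensor
    total = sum(inv)
    weights = [w / total for w in inv]          # weights sum to one
    fused = sum(w * x for w, x in zip(weights, estimates))
    fused_var = 1.0 / total                     # never worse than best sensor
    return fused, fused_var
```

This inverse-variance weighting is the LMV-optimal scalar rule under the stated assumptions; the paper's BP networks learn a mapping of this kind from examples, and its recurrent variant drops the need for the cross-covariances that the exact matrix-weighted rules require.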
46. A Variational Latent Variable Model with Recurrent Temporal Dependencies for Session-Based Recommendation (VLaReT)
- Author
-
Christodoulou, Panayiotis, Chatzis, Sotirios P., Andreou, Andreas S., Spagnoletti, Paolo, Series Editor, De Marco, Marco, Series Editor, Pouloudi, Nancy, Series Editor, Te'eni, Dov, Series Editor, vom Brocke, Jan, Series Editor, Winter, Robert, Series Editor, Baskerville, Richard, Series Editor, Paspallis, Nearchos, editor, Raspopoulos, Marios, editor, Barry, Chris, editor, Lang, Michael, editor, Linger, Henry, editor, and Schneider, Christoph, editor
- Published
- 2018
- Full Text
- View/download PDF
47. A Non-spiking Neuron Model With Dynamic Leak to Avoid Instability in Recurrent Networks
- Author
-
Udaya B. Rongala, Jonas M. D. Enander, Matthias Kohler, Gerald E. Loeb, and Henrik Jörntell
- Subjects
neuron model, recurrent networks, dynamic leak, spurious high frequency signals, non-spiking, excitation, Neurosciences. Biological psychiatry. Neuropsychiatry, RC321-571 - Abstract
Recurrent circuitry components are distributed widely within the brain, including both excitatory and inhibitory synaptic connections. Recurrent neuronal networks have potential stability problems, perhaps a predisposition to epilepsy. More generally, instability risks making internal representations of information unreliable. To assess the inherent stability properties of such recurrent networks, we tested a linear summation, non-spiking neuron model with and without a “dynamic leak”, corresponding to the low-pass filtering of synaptic input current by the RC circuit of the biological membrane. We first show that the output of this neuron model, in either of its two forms, follows its input at a higher fidelity than a wide range of spiking neuron models across a range of input frequencies. Then we constructed fully connected recurrent networks with equal numbers of excitatory and inhibitory neurons and randomly distributed weights across all synapses. When the networks were driven by pseudorandom sensory inputs with varying frequency, the recurrent network activity tended to induce high frequency self-amplifying components, sometimes evident as distinct transients, which were not present in the input data. The addition of a dynamic leak based on known membrane properties consistently removed such spurious high frequency noise across all networks. Furthermore, we found that the neuron model with dynamic leak imparts a network stability that seamlessly scales with the size of the network, conduction delays, the input density of the sensory signal and a wide range of synaptic weight distributions. Our findings suggest that neuronal dynamic leak serves the beneficial function of protecting recurrent neuronal circuitry from the self-induction of spurious high frequency signals, thereby permitting the brain to utilize this architectural circuitry component regardless of network size or recurrency.
- Published
- 2021
- Full Text
- View/download PDF
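The core update of the model described above, linear summation of weighted inputs low-pass filtered by an RC-style leak, fits in a few lines. Parameter values and names below are illustrative assumptions, not the authors' settings:

```python
def dynamic_leak_step(a, inputs, weights, dt=1.0, tau=10.0):
    """Linear summation of weighted inputs, low-pass filtered with time
    constant tau: da/dt = (drive - a) / tau, discretized with step dt.
    This is the 'dynamic leak' analogous to the membrane's RC circuit."""
    drive = sum(w * x for w, x in zip(weights, inputs))
    return a + (dt / tau) * (drive - a)

def run_unit(signal, weight=1.0):
    """Drive a single-input unit with a time series; return final activity."""
    a = 0.0
    for x in signal:
        a = dynamic_leak_step(a, (x,), (weight,))
    return a
```

Running a constant input through the unit passes it essentially unchanged, while a fast alternating input is strongly attenuated; that frequency-selective attenuation is the mechanism the abstract credits with removing spurious high-frequency components from recurrent network activity.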
48. A Non-spiking Neuron Model With Dynamic Leak to Avoid Instability in Recurrent Networks.
- Author
-
Rongala, Udaya B., Enander, Jonas M. D., Kohler, Matthias, Loeb, Gerald E., and Jörntell, Henrik
- Subjects
DYNAMIC models, NEURONS, NEURAL circuitry, RC circuits, BIOLOGICAL membranes, SYNAPSES, INTERNEURONS - Abstract
Recurrent circuitry components are distributed widely within the brain, including both excitatory and inhibitory synaptic connections. Recurrent neuronal networks have potential stability problems, perhaps a predisposition to epilepsy. More generally, instability risks making internal representations of information unreliable. To assess the inherent stability properties of such recurrent networks, we tested a linear summation, non-spiking neuron model with and without a "dynamic leak", corresponding to the low-pass filtering of synaptic input current by the RC circuit of the biological membrane. We first show that the output of this neuron model, in either of its two forms, follows its input at a higher fidelity than a wide range of spiking neuron models across a range of input frequencies. Then we constructed fully connected recurrent networks with equal numbers of excitatory and inhibitory neurons and randomly distributed weights across all synapses. When the networks were driven by pseudorandom sensory inputs with varying frequency, the recurrent network activity tended to induce high frequency self-amplifying components, sometimes evident as distinct transients, which were not present in the input data. The addition of a dynamic leak based on known membrane properties consistently removed such spurious high frequency noise across all networks. Furthermore, we found that the neuron model with dynamic leak imparts a network stability that seamlessly scales with the size of the network, conduction delays, the input density of the sensory signal and a wide range of synaptic weight distributions. Our findings suggest that neuronal dynamic leak serves the beneficial function of protecting recurrent neuronal circuitry from the self-induction of spurious high frequency signals, thereby permitting the brain to utilize this architectural circuitry component regardless of network size or recurrency. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
49. Local Homeostatic Regulation of the Spectral Radius of Echo-State Networks
- Author
-
Fabian Schubert and Claudius Gros
- Subjects
recurrent networks, homeostasis, synaptic scaling, echo-state networks, reservoir computing, spectral radius, Neurosciences. Biological psychiatry. Neuropsychiatry, RC321-571 - Abstract
Recurrent cortical networks provide reservoirs of states that are thought to play a crucial role for sequential information processing in the brain. However, classical reservoir computing requires manual adjustments of global network parameters, particularly of the spectral radius of the recurrent synaptic weight matrix. It is hence not clear if the spectral radius is accessible to biological neural networks. Using random matrix theory, we show that the spectral radius is related to local properties of the neuronal dynamics whenever the overall dynamical state is only weakly correlated. This result allows us to introduce two local homeostatic synaptic scaling mechanisms, termed flow control and variance control, that implicitly drive the spectral radius toward the desired value. For both mechanisms the spectral radius is autonomously adapted while the network receives and processes inputs under working conditions. We demonstrate the effectiveness of the two adaptation mechanisms under different external input protocols. Moreover, we evaluated the network performance after adaptation by training the network to perform a time-delayed XOR operation on binary sequences. As our main result, we found that flow control reliably regulates the spectral radius for different types of input statistics. Precise tuning is, however, negatively affected when interneural correlations are substantial. Furthermore, we found a consistent task performance over a wide range of input strengths/variances. Variance control, however, did not yield the desired spectral radii with the same precision, being less consistent across different input strengths. Given the effectiveness and remarkably simple mathematical form of flow control, we conclude that self-consistent local control of the spectral radius via an implicit adaptation scheme is an interesting and biologically plausible alternative to conventional methods using set-point homeostatic feedback controls of neural firing.
- Published
- 2021
- Full Text
- View/download PDF
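For contrast with the local flow-control and variance-control rules the abstract above introduces, this sketch shows the conventional global procedure they are meant to replace: measure the spectral radius (here by power iteration, assuming a dominant eigenvalue) and rescale the whole weight matrix toward a target value. Names and the target value are illustrative:

```python
def spectral_radius(W, iters=200):
    """Estimate the spectral radius (largest |eigenvalue|) of a square
    matrix by power iteration with max-norm normalization."""
    n = len(W)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(W[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(c) for c in w)
        v = [c / lam for c in w]
    return lam

def rescale_reservoir(W, rho_target=0.9):
    """Conventional global tuning: divide every weight by the measured
    spectral radius and multiply by the target value."""
    rho = spectral_radius(W)
    return [[rho_target * w / rho for w in row] for row in W]
```

The biological implausibility of this step is exactly the paper's motivation: it requires global knowledge of the whole weight matrix, whereas flow control and variance control reach the same target through purely local synaptic scaling.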
50. Local Homeostatic Regulation of the Spectral Radius of Echo-State Networks.
- Author
-
Schubert, Fabian and Gros, Claudius
- Subjects
MATHEMATICAL forms, BIOLOGICAL neural networks, RANDOM matrices, BINARY sequences, BINARY operations, PSYCHOLOGICAL feedback - Abstract
Recurrent cortical networks provide reservoirs of states that are thought to play a crucial role for sequential information processing in the brain. However, classical reservoir computing requires manual adjustments of global network parameters, particularly of the spectral radius of the recurrent synaptic weight matrix. It is hence not clear if the spectral radius is accessible to biological neural networks. Using random matrix theory, we show that the spectral radius is related to local properties of the neuronal dynamics whenever the overall dynamical state is only weakly correlated. This result allows us to introduce two local homeostatic synaptic scaling mechanisms, termed flow control and variance control, that implicitly drive the spectral radius toward the desired value. For both mechanisms the spectral radius is autonomously adapted while the network receives and processes inputs under working conditions. We demonstrate the effectiveness of the two adaptation mechanisms under different external input protocols. Moreover, we evaluated the network performance after adaptation by training the network to perform a time-delayed XOR operation on binary sequences. As our main result, we found that flow control reliably regulates the spectral radius for different types of input statistics. Precise tuning is, however, negatively affected when interneural correlations are substantial. Furthermore, we found a consistent task performance over a wide range of input strengths/variances. Variance control, however, did not yield the desired spectral radii with the same precision, being less consistent across different input strengths. Given the effectiveness and remarkably simple mathematical form of flow control, we conclude that self-consistent local control of the spectral radius via an implicit adaptation scheme is an interesting and biologically plausible alternative to conventional methods using set-point homeostatic feedback controls of neural firing. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF