3,294 results for "ARTIFICIAL neural networks"
Search Results
2. Riemannian-Gradient-Based Learning on the Complex Matrix-Hypersphere.
- Author
- Fiori, Simone
- Subjects
- *MATHEMATICAL optimization, *MIMO systems, *RIEMANNIAN manifolds, *ARTIFICIAL neural networks, *GEODESICS, *NUMERICAL analysis
- Abstract
This brief tackles the problem of learning over the complex-valued matrix-hypersphere \mathbb{S}_{n,p}^{\alpha}(\mathbb{C}). The developed learning theory is formulated in terms of Riemannian-gradient-based optimization of a regular criterion function and is implemented by a geodesic-stepping method. The stepping method is equipped with a geodesic-search sub-algorithm to compute the optimal learning stepsize at any step. Numerical results show the effectiveness of the developed learning method and of its implementation. [ABSTRACT FROM PUBLISHER]
- Published
- 2011
- Full Text
- View/download PDF
3. A One-Layer Recurrent Neural Network for Pseudoconvex Optimization Subject to Linear Equality Constraints.
- Author
- Guo, Zhishan, Liu, Qingshan, and Wang, Jun
- Subjects
- *ARTIFICIAL neural networks, *CONVEX functions, *STOCHASTIC convergence, *MATHEMATICAL optimization, *PSEUDOCONVEX domains, *SIMULATION methods & models, *RECURSIVE sequences (Mathematics), *CHEMICAL processes
- Abstract
In this paper, a one-layer recurrent neural network is presented for solving pseudoconvex optimization problems subject to linear equality constraints. The global convergence of the neural network can be guaranteed even though the objective function is only pseudoconvex. The finite-time state convergence to the feasible region defined by the equality constraints is also proved. In addition, global exponential convergence is proved when the objective function is strongly pseudoconvex on the feasible region. Simulation results on illustrative examples and an application to chemical process data reconciliation are provided to demonstrate the effectiveness and characteristics of the neural network. [ABSTRACT FROM PUBLISHER] (An illustrative sketch follows this record.)
- Published
- 2011
- Full Text
- View/download PDF
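A minimal sketch of the general idea behind entry 3: recurrent networks of this kind realize a gradient flow restricted to the feasible set of the equality constraints. The sketch below uses a convex quadratic objective (every convex function is pseudoconvex) and a made-up constraint; it is not the authors' exact network model.

```python
import numpy as np

# Hypothetical example: minimize f(x) = ||x - c||^2 subject to A x = b,
# using a projected gradient flow dx/dt = -P grad f(x), where P projects
# onto the null space of A.  This mimics the general structure of one-layer
# recurrent networks for equality-constrained optimization, not the paper's
# exact model.

A = np.array([[1.0, 1.0, 1.0]])          # single equality constraint: sum(x) = 1
b = np.array([1.0])
c = np.array([3.0, -1.0, 0.5])           # f(x) = ||x - c||^2 (convex, hence pseudoconvex)

P = np.eye(3) - A.T @ np.linalg.solve(A @ A.T, A)   # projector onto null(A)

x = np.array([1.0, 0.0, 0.0])            # feasible initial state (A x = b)
dt = 0.01
for _ in range(2000):                     # forward-Euler integration of the network dynamics
    grad = 2.0 * (x - c)
    x = x + dt * (-P @ grad)

print("equilibrium state:", x)
print("constraint residual:", A @ x - b)
```

The projector P keeps the state on the affine set A x = b, so the equilibrium reached by the flow is the constrained minimizer.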
4. Parallel Programmable Asynchronous Neighborhood Mechanism for Kohonen SOM Implemented in CMOS Technology.
- Author
- Dlugosz, Rafał, Kolasa, Marta, Pedrycz, Witold, and Szulc, Michał
- Subjects
- *SELF-organizing maps, *COMPLEMENTARY metal oxide semiconductors, *PARALLEL programming, *ARTIFICIAL neural networks, *TEMPERATURE effect, *ALGORITHMS, *ENERGY consumption, *MATHEMATICAL optimization
- Abstract
We present a new programmable neighborhood mechanism for hardware-implemented Kohonen self-organizing maps (SOMs) with three different map topologies realized on a single chip. The proposed circuit comes as a fully parallel and asynchronous architecture. The mechanism is very fast: in a medium-sized map with several hundred neurons implemented in 0.18 μm complementary metal-oxide semiconductor (CMOS) technology, all neurons start adapting their weights after no more than 11 ns. The adaptation is then carried out in parallel. This is an evident advantage in comparison with the commonly used software-realized SOMs. The circuit is robust against process, supply voltage, and environmental temperature variations. Due to its simple structure, it features low energy consumption of a few pJ per neuron per learning pattern. In this paper, we discuss different aspects of hardware realization, such as a suitable selection of the map topology and the initial neighborhood range, as optimizing these parameters is essential from the circuit-complexity point of view. For the optimal values of these parameters, the chip area and the power dissipation can be reduced by as much as 60% and 80%, respectively, without affecting the quality of learning. [ABSTRACT FROM PUBLISHER]
- Published
- 2011
- Full Text
- View/download PDF
5. Bioinspired Neural Network for Real-Time Cooperative Hunting by Multirobots in Unknown Environments.
- Author
- Ni, Jianjun and Yang, Simon X.
- Subjects
- *ARTIFICIAL neural networks, *ROBOT motion, *ALGORITHMS, *REAL-time computing, *ROBOTICS, *COMPUTER simulation
- Abstract
Multiple-robot cooperation is a challenging and critical issue in robotics. To conduct cooperative hunting by multiple robots in unknown and dynamic environments, the robots not only need to handle basic problems (such as searching, path planning, and collision avoidance), but also need to cooperate in order to pursue and catch the evaders efficiently. In this paper, a novel approach based on a bioinspired neural network is proposed for real-time cooperative hunting by multiple robots, where the locations of the evaders and the environment are unknown and changing. The bioinspired neural network is used for cooperative pursuing by the multirobot team. Other algorithms, such as the dynamic alliance and formation construction algorithms, are used to enable the robots to catch the evaders efficiently. In the proposed approach, the pursuing alliances can change dynamically and the robot motion can be adjusted in real time to pursue the evaders cooperatively, so that all the evaders are guaranteed to be caught efficiently. The proposed approach can deal with various situations, such as when some robots break down, the environment has different boundary shapes, or the obstacles are connected in different shapes. The simulation results show that the proposed approach is capable of guiding the robots to hunt multiple evaders efficiently in real time. [ABSTRACT FROM PUBLISHER] (An illustrative sketch follows this record.)
- Published
- 2011
- Full Text
- View/download PDF
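A rough sketch of the kind of grid-based "shunting" neural activity landscape that bioinspired path-planning networks of the sort used in entry 5 typically rely on: the evader injects positive activity, obstacles inject negative activity, activity propagates to neighboring cells, and a robot simply climbs the activity gradient. The grid size, shunting parameters (A, B, D, mu), and evader/obstacle positions are illustrative assumptions, not the authors' model.

```python
import numpy as np

# Shunting-equation activity landscape on a small grid (illustrative only).
H, W = 10, 10
A_decay, B_up, D_low, mu = 10.0, 1.0, 1.0, 1.0
x = np.zeros((H, W))                         # neural activity of each grid cell
evader, obstacles = (8, 8), [(4, j) for j in range(2, 8)]

def external_input(i, j):
    if (i, j) == evader:
        return 100.0                         # strong attraction at the evader
    if (i, j) in obstacles:
        return -100.0                        # strong repulsion at obstacles
    return 0.0

def neighbors(i, j):
    return [(i + di, j + dj) for di in (-1, 0, 1) for dj in (-1, 0, 1)
            if (di, dj) != (0, 0) and 0 <= i + di < H and 0 <= j + dj < W]

dt = 0.01
for _ in range(1000):                        # integrate the shunting dynamics
    new_x = x.copy()
    for i in range(H):
        for j in range(W):
            I = external_input(i, j)
            excite = max(I, 0.0) + mu * sum(max(x[p], 0.0) for p in neighbors(i, j))
            inhibit = max(-I, 0.0)
            dx = -A_decay * x[i, j] + (B_up - x[i, j]) * excite - (D_low + x[i, j]) * inhibit
            new_x[i, j] = x[i, j] + dt * dx
    x = new_x

# A pursuing robot at (0, 0) would repeatedly move to the neighbor with the
# highest activity, which steers it around the obstacle row toward the evader.
robot = (0, 0)
print("first move:", max(neighbors(*robot), key=lambda p: x[p]))
```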
6. Nonlinear System Identification by Gustafson–Kessel Fuzzy Clustering and Supervised Local Model Network Learning for the Drug Absorption Spectra Process.
- Author
- Teslic, Luka, Hartmann, Benjamin, Nelles, Oliver, and Skrjanc, Igor
- Subjects
- *NONLINEAR systems, *PARALLEL algorithms, *ARTIFICIAL neural networks, *FUZZY clustering technique, *ANALYSIS of covariance, *DRUG absorption, *ABSORPTION spectra
- Abstract
This paper deals with the problem of fuzzy nonlinear model identification in the framework of a local model network (LMN). A new iterative identification approach is proposed, where supervised and unsupervised learning are combined to optimize the structure of the LMN. For the purpose of fitting the cluster centers to the process nonlinearity, Gustafson–Kessel (GK) fuzzy clustering, i.e., unsupervised learning, is applied. In combination with the LMN learning procedure, a new incremental method to define the number and the initial locations of the cluster centers for the GK clustering algorithm is proposed. Each data cluster corresponds to a local region of the process and is modeled with a local linear model. Since the validity functions are calculated from the fuzzy covariance matrices of the clusters, they are highly adaptable and thus the process can be described with a very small number of local models, i.e., with a parsimonious LMN model. The proposed method for constructing the LMN is finally tested on a drug absorption spectral process and compared to two other methods, namely, Lolimot and Hilomot. The comparison of the experimental results obtained with each method shows the usefulness of the proposed identification algorithm. [ABSTRACT FROM PUBLISHER]
- Published
- 2011
- Full Text
- View/download PDF
7. Modeling Activity-Dependent Plasticity in BCM Spiking Neural Networks With Application to Human Behavior Recognition.
- Author
- Meng, Yan, Jin, Yaochu, and Yin, Jun
- Subjects
- *NEUROPLASTICITY, *BIOLOGICAL systems, *ARTIFICIAL neural networks, *HIDDEN Markov models, *COMPUTER simulation, *MATHEMATICAL models, *FEATURE extraction
- Abstract
Spiking neural networks (SNNs) are considered to be computationally more powerful than conventional NNs. However, the capability of SNNs in solving complex real-world problems remains to be demonstrated. In this paper, we propose a substantial extension of the Bienenstock, Cooper, and Munro (BCM) SNN model, in which the plasticity parameters are regulated by a gene regulatory network (GRN). Meanwhile, the dynamics of the GRN is dependent on the activation levels of the BCM neurons. We term the whole model “GRN-BCM.” To demonstrate its computational power, we first compare the GRN-BCM with a standard BCM, a hidden Markov model, and a reservoir computing model on a complex time series classification problem. Simulation results indicate that the GRN-BCM significantly outperforms the compared models. The GRN-BCM is then applied to two widely used datasets for human behavior recognition. Comparative results on the two datasets suggest that the GRN-BCM is very promising for human behavior recognition, although the current experiments are still limited to the scenarios in which only one object is moving in the considered video sequences. [ABSTRACT FROM PUBLISHER]
- Published
- 2011
- Full Text
- View/download PDF
8. SaFIN: A Self-Adaptive Fuzzy Inference Network.
- Author
- Tung, Sau Wai, Quek, Chai, and Guan, Cuntai
- Subjects
- *ADAPTIVE control systems, *FUZZY systems, *ARTIFICIAL neural networks, *NUMERICAL analysis, *KNOWLEDGE acquisition (Expert systems), *CLUSTER analysis (Statistics), *SELF-organizing systems
- Abstract
There are generally two approaches to the design of a neural fuzzy system: 1) design by human experts, and 2) design through a self-organization of the numerical training data. While the former approach is highly subjective, the latter is commonly plagued by one or more of the following major problems: 1) an inconsistent rulebase; 2) the need for prior knowledge such as the number of clusters to be computed; 3) heuristically designed knowledge acquisition methodologies; and 4) the stability–plasticity tradeoff of the system. This paper presents a novel self-organizing neural fuzzy system, named Self-Adaptive Fuzzy Inference Network (SaFIN), to address the aforementioned deficiencies. The proposed SaFIN model employs a new clustering technique referred to as categorical learning-induced partitioning (CLIP), which draws inspiration from the behavioral category learning process demonstrated by humans. By employing the one-pass CLIP, SaFIN is able to incorporate new clusters in each input–output dimension when the existing clusters are not able to give a satisfactory representation of the incoming training data. This not only avoids the need for prior knowledge regarding the number of clusters needed for each input–output dimension, but also allows SaFIN the flexibility to incorporate new knowledge with old knowledge in the system. In addition, the self-automated rule formation mechanism proposed within SaFIN ensures that it obtains a consistent resultant rulebase. Subsequently, the proposed SaFIN model is employed in a series of benchmark simulations to demonstrate its efficiency as a self-organizing neural fuzzy system, and excellent performances have been achieved. [ABSTRACT FROM PUBLISHER]
- Published
- 2011
- Full Text
- View/download PDF
9. Passivity and Stability Analysis of Reaction-Diffusion Neural Networks With Dirichlet Boundary Conditions.
- Author
- Wang, Jin-Liang, Wu, Huai-Ning, and Guo, Lei
- Subjects
- *DIRICHLET problem, *ARTIFICIAL neural networks, *LYAPUNOV functions, *PASSIVITY-based control, *UNCERTAINTY (Information theory), *REACTION-diffusion equations, *BOUNDARY value problems, *NUMERICAL analysis
- Abstract
This paper is concerned with the passivity and stability problems of reaction-diffusion neural networks (RDNNs) in which the input and output variables vary with both the time and space variables. By utilizing the Lyapunov functional method combined with inequality techniques, some sufficient conditions ensuring passivity and global exponential stability are derived. Furthermore, when parameter uncertainties appear in RDNNs, several criteria for robust passivity and robust global exponential stability are also presented. Finally, a numerical example is provided to illustrate the effectiveness of the proposed criteria. [ABSTRACT FROM PUBLISHER]
- Published
- 2011
- Full Text
- View/download PDF
10. Exponential Synchronization of Complex Networks With Finite Distributed Delays Coupling.
- Author
- Hu, Cheng, Yu, Juan, Jiang, Haijun, and Teng, Zhidong
- Subjects
- *COMPUTATIONAL complexity, *ARTIFICIAL neural networks, *COMPUTER simulation, *CONTROL theory (Engineering), *SYNCHRONIZATION, *COMPARATIVE studies
- Abstract
In this paper, the exponential synchronization for a class of complex networks with finite distributed delays coupling is studied via periodically intermittent control. Some novel and useful criteria are derived by utilizing a technique different from those used in corresponding previous results. As a special case, some sufficient conditions ensuring the exponential synchronization for a class of coupled neural networks with distributed delays are obtained. Furthermore, a feasible region of the control parameters is derived for the realization of exponential synchronization. It is worth noting that the synchronized state in this paper is not an isolated node but a non-decoupled state, in which the inner coupling matrix and the degrees of the nodes play a central role. Additionally, the traditional assumptions on control width, non-control width, and discrete delays are removed in our results. Finally, some numerical simulations are given to demonstrate the effectiveness of the proposed control method. [ABSTRACT FROM PUBLISHER]
- Published
- 2011
- Full Text
- View/download PDF
11. Auto-Regressive Processes Explained by Self-Organized Maps. Application to the Detection of Abnormal Behavior in Industrial Processes.
- Author
- Brighenti, Chiara and Sanz-Bobi, Miguel Á.
- Subjects
- *SELF-organizing maps, *MANUFACTURING processes, *PARAMETER estimation, *TIME series analysis, *CLUSTER analysis (Statistics), *ALGORITHMS, *DENSITY functionals, *ARTIFICIAL neural networks
- Abstract
This paper analyzes the expected time evolution of an auto-regressive (AR) process using self-organized maps (SOM). It investigates how a SOM captures the time information given by the AR input process and how the transitions from one neuron to another one can be understood under a probabilistic perspective. In particular, regions of the map into which the AR process is expected to move are identified. This characterization allows detecting anomalous changes in the AR process structure or parameters. On the basis of the theoretical results, an anomaly detection method is proposed and applied to a real industrial process. [ABSTRACT FROM PUBLISHER]
- Published
- 2011
- Full Text
- View/download PDF
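To make the idea in entry 11 concrete, the sketch below trains a small one-dimensional SOM on sliding windows of an AR(1) series and flags test windows whose distance to the best-matching unit (quantization error) is unusually large. The SOM size, window length, thresholding rule, and AR models are illustrative assumptions; the paper additionally analyzes the transition probabilities between neurons, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def ar1(n, phi, noise=0.1):
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + noise * rng.standard_normal()
    return x

def windows(series, w=5):
    return np.array([series[t:t + w] for t in range(len(series) - w)])

# Train a 1-D SOM (a line of units) on windows of the "normal" process.
train = windows(ar1(2000, phi=0.8))
n_units, dim = 20, train.shape[1]
W = rng.standard_normal((n_units, dim)) * 0.1
for epoch in range(10):
    lr, sigma = 0.5 * (1 - epoch / 10), 3.0 * (1 - epoch / 10) + 0.5
    for v in rng.permutation(train):
        bmu = np.argmin(np.linalg.norm(W - v, axis=1))              # best-matching unit
        h = np.exp(-((np.arange(n_units) - bmu) ** 2) / (2 * sigma ** 2))
        W += lr * h[:, None] * (v - W)

def qerror(V):
    return np.array([np.min(np.linalg.norm(W - v, axis=1)) for v in V])

# Threshold from the normal data, then test on a structurally changed AR process.
threshold = np.quantile(qerror(train), 0.99)
abnormal = windows(ar1(500, phi=-0.6))
print("fraction flagged as anomalous:", np.mean(qerror(abnormal) > threshold))
```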
12. Delay-Slope-Dependent Stability Results of Recurrent Neural Networks.
- Author
- Li, Tao, Zheng, Wei Xing, and Lin, Chong
- Subjects
- *ARTIFICIAL neural networks, *RECURSIVE sequences (Mathematics), *COMPUTATIONAL complexity, *LYAPUNOV functions, *NUMERICAL analysis, *TIME-varying systems
- Abstract
By using the fact that the neuron activation functions are sector bounded and nondecreasing, this brief presents a new method, named the delay-slope-dependent method, for stability analysis of a class of recurrent neural networks with time-varying delays. This method includes more information on the slope of neuron activation functions and fewer matrix variables in the constructed Lyapunov–Krasovskii functional. Then some improved delay-dependent stability criteria with less computational burden and conservatism are obtained. Numerical examples are given to illustrate the effectiveness and the benefits of the proposed method. [ABSTRACT FROM PUBLISHER]
- Published
- 2011
- Full Text
- View/download PDF
13. Decentralized Optimal Control of a Class of Interconnected Nonlinear Discrete-Time Systems by Using Online Hamilton-Jacobi-Bellman Formulation.
- Author
- Mehraeen, Shahab and Jagannathan, Sarangapani
- Subjects
- *INTEGRATED circuit interconnections, *DISCRETE-time systems, *NONLINEAR systems, *HAMILTON-Jacobi equations, *DYNAMIC programming, *CONTROL theory (Engineering), *ARTIFICIAL neural networks, *LYAPUNOV functions
- Abstract
In this paper, the direct neural dynamic programming technique is utilized to solve the Hamilton-Jacobi-Bellman equation forward-in-time for the decentralized near-optimal regulation of a class of nonlinear interconnected discrete-time systems with unknown internal subsystem and interconnection dynamics, while the input gain matrix is considered known. Even though the unknown interconnection terms are considered weak and functions of the entire state vector, the decentralized control is attempted under the assumption that only the local state vector is measurable. The decentralized nearly optimal controller design for each subsystem consists of two neural networks (NNs): an action NN that is aimed to provide a nearly optimal control signal, and a critic NN which evaluates the performance of the overall system. The parameters of both NNs are tuned online. By using Lyapunov techniques it is shown that all subsystem signals are uniformly ultimately bounded and that the synthesized subsystem inputs approach their corresponding nearly optimal control inputs with bounded error. Simulation results are included to show the effectiveness of the approach. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
14. Stability and Convergence Analysis for a Class of Neural Networks.
- Author
- Gao, Xingbao and Liao, Li-Zhi
- Subjects
- *STOCHASTIC convergence, *STABILITY (Mechanics), *ARTIFICIAL neural networks, *VARIATIONAL inequalities (Mathematics), *MATRICES (Mathematics), *MATHEMATICAL models, *MATHEMATICAL mappings
- Abstract
In this paper, we analyze and establish the stability and convergence of the dynamical system proposed by Xia and Feng, whose equilibria solve variational inequality and related problems. Under the pseudo-monotonicity and other conditions, this system is proved to be stable in the sense of Lyapunov and converges to one of its equilibrium points for any starting point. Meanwhile, the global exponential stability of this system is also shown under some mild conditions without the strong monotonicity of the mapping. The obtained results improve and correct some existing ones. The validity and performance of this system are demonstrated by some numerical examples. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
15. Adaptive Evolutionary Artificial Neural Networks for Pattern Classification.
- Author
- Oong, Tatt Hee and Isa, Nor Ashidi Mat
- Subjects
- *ARTIFICIAL neural networks, *PATTERN recognition systems, *ALGORITHMS, *PERTURBATION theory, *MACHINE learning, *ELECTRIC network topology
- Abstract
This paper presents a new evolutionary approach called the hybrid evolutionary artificial neural network (HEANN) for simultaneously evolving the topology and weights of artificial neural networks (ANNs). Evolutionary algorithms (EAs) with strong global search capabilities are likely to locate the most promising region, but they are less efficient in fine-tuning the search space locally. HEANN emphasizes balancing the global search and the local search in the evolutionary process by adapting the mutation probability and the step size of the weight perturbation. This distinguishes it from most previous studies, which incorporate an EA to search for the network topology and gradient learning for weight updating. Four benchmark functions were used to test the evolutionary framework of HEANN. In addition, HEANN was tested on seven classification benchmark problems from the UCI machine learning repository. Experimental results show the superior performance of HEANN in fine-tuning the network complexity within a small number of generations while preserving the generalization capability compared with other algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
16. Estimating the Ultimate Bound and Positively Invariant Set for a Class of Hopfield Networks.
- Author
- Zhang, Jianxiong, Tang, Wansheng, and Zheng, Pengsheng
- Subjects
- *INVARIANT sets, *ARTIFICIAL neural networks, *STABILITY (Mechanics), *LYAPUNOV functions, *MATRIX inequalities, *MATHEMATICAL optimization, *CHAOS theory
- Abstract
In this paper, we investigate the ultimate bound and positively invariant set for a class of Hopfield neural networks (HNNs) based on the Lyapunov stability criterion and Lagrange multiplier method. It is shown that a hyperelliptic estimate of the ultimate bound and positively invariant set for the HNNs can be calculated by solving a linear matrix inequality (LMI). Furthermore, the global stability of the unique equilibrium and the instability region of the HNNs are analyzed, respectively. Finally, the most accurate estimate of the ultimate bound and positively invariant set can be derived by solving the corresponding optimization problems involving the LMI constraints. Some numerical examples are given to illustrate the effectiveness of the proposed results. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
17. Decentralized Dynamic Surface Control of Large-Scale Interconnected Systems in Strict-Feedback Form Using Neural Networks With Asymptotic Stabilization.
- Author
- Mehraeen, Shahab, Jagannathan, Sarangapani, and Crow, Mariesa L.
- Subjects
- *ARTIFICIAL neural networks, *ADAPTIVE control systems, *NONLINEAR systems, *APPROXIMATION theory, *ERROR analysis in mathematics, *LYAPUNOV stability, *SIMULATION methods & models
- Abstract
A novel neural network (NN)-based nonlinear decentralized adaptive controller is proposed for a class of large-scale, uncertain, interconnected nonlinear systems in strict-feedback form by using the dynamic surface control (DSC) principle; thus, the “explosion of complexity” problem observed in the conventional backstepping approach is relaxed in both state and output feedback control designs. The matching condition is not assumed when considering the interconnection terms. NNs are utilized to approximate the uncertainties in both the subsystem and interconnection terms. By using novel NN weight update laws with quadratic error terms as well as the proposed control inputs, it is demonstrated using Lyapunov stability that the system state errors converge to zero asymptotically with both state and output feedback controllers, even in the presence of NN approximation errors, in contrast with the uniform ultimate boundedness result that is common in the literature on NN-based DSC and backstepping schemes. Simulation results show the effectiveness of the approach. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
18. Multistability of Second-Order Competitive Neural Networks With Nondecreasing Saturated Activation Functions.
- Author
- Nie, Xiaobing and Cao, Jinde
- Subjects
- *ARTIFICIAL neural networks, *STABILITY (Mechanics), *DECOMPOSITION method, *STOCHASTIC convergence, *SILICON, *MATHEMATICAL functions, *SIMULATION methods & models
- Abstract
In this paper, second-order interactions are introduced into competitive neural networks (NNs) and the multistability is discussed for second-order competitive NNs (SOCNNs) with nondecreasing saturated activation functions. Firstly, based on decomposition of state space, the Cauchy convergence principle, and an inequality technique, some sufficient conditions ensuring the local exponential stability of 2^N equilibrium points are derived. Secondly, some conditions are obtained for ascertaining equilibrium points to be locally exponentially stable and to be located in any designated region. Thirdly, the theory is extended to more general saturated activation functions with 2r corner points, and a sufficient criterion is given under which the SOCNNs can have (r+1)^N locally exponentially stable equilibrium points. Even if there are no second-order interactions, the obtained results are less restrictive than those in some recent works. Finally, three examples with their simulations are presented to verify the theoretical analysis. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
19. Delay-Independent Stability of Genetic Regulatory Networks.
- Author
- Wu, Fang-Xiang
- Subjects
- *TIME delay systems, *STABILITY (Mechanics), *ARTIFICIAL neural networks, *NONLINEAR differential equations, *RNA splicing, *GENETIC regulation, *EIGENVALUES
- Abstract
Genetic regulatory networks can be described by nonlinear differential equations with time delays. In this paper, we study both locally and globally delay-independent stability of genetic regulatory networks, taking messenger ribonucleic acid alternative splicing into consideration. Based on nonnegative matrix theory, we first develop necessary and sufficient conditions for locally delay-independent stability of genetic regulatory networks with multiple time delays. Compared to the previous results, these conditions are easy to verify. Then we develop sufficient conditions for global delay-independent stability for genetic regulatory networks. Compared to the previous results, this sufficient condition is less conservative. To illustrate theorems developed in this paper, we analyze delay-independent stability of two genetic regulatory networks: a real-life repressilatory network with three genes and three proteins, and a synthetic gene regulatory network with five genes and seven proteins. The simulation results show that the theorems developed in this paper can effectively determine the delay-independent stability of genetic regulatory networks. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
20. Stability and L2 Performance Analysis of Stochastic Delayed Neural Networks.
- Author
- Chen, Yun and Zheng, Wei Xing
- Subjects
- *STABILITY (Mechanics), *PERFORMANCE evaluation, *STOCHASTIC processes, *ARTIFICIAL neural networks, *ROBUST control, *RANDOM noise theory, *STOCHASTIC systems, *TIME delay systems, *NUMERICAL analysis
- Abstract
This brief focuses on the robust mean-square exponential stability and L2 performance analysis for a class of uncertain time-delay neural networks perturbed by both additive and multiplicative stochastic noises. New mean-square exponential stability and L2 performance criteria are developed based on the delay partition Lyapunov–Krasovskii functional method and generalized Finsler lemma which is applicable to stochastic systems. The analytical results are established without involving any model transformation, estimation for cross terms, additional free-weighting matrices, or tuning parameters. Numerical examples are presented to verify that the proposed approach is both less conservative and less computationally complex than the existing ones. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
21. Observer Design for Switched Recurrent Neural Networks: An Average Dwell Time Approach.
- Author
- Lian, Jie, Feng, Zhi, and Shi, Peng
- Subjects
- *OBSERVABILITY (Control theory), *SWITCHING circuits, *RECURSIVE sequences (Mathematics), *ARTIFICIAL neural networks, *TIME delay systems, *EXPONENTIAL functions, *MATRIX inequalities
- Abstract
This paper is concerned with the problem of observer design for switched recurrent neural networks with time-varying delay. The attention is focused on designing the full-order observers that guarantee the global exponential stability of the error dynamic system. Based on the average dwell time approach and the free-weighting matrix technique, delay-dependent sufficient conditions are developed for the solvability of such problem and formulated as linear matrix inequalities. The error-state decay estimate is also given. Then, the stability analysis problem for the switched recurrent neural networks can be covered as a special case of our results. Finally, four illustrative examples are provided to demonstrate the effectiveness and the superiority of the proposed methods. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
22. Embedding Prior Knowledge Within Compressed Sensing by Neural Networks.
- Author
- Merhej, Dany, Diab, Chaouki, Khalil, Mohamad, and Prost, Rémy
- Subjects
- *EMBEDDINGS (Mathematics), *ARTIFICIAL neural networks, *ALGORITHMS, *LINEAR systems, *MATCHING theory, *SPARSE matrices, *SIGNAL processing, *COMPUTATIONAL complexity, *DISTRIBUTION (Probability theory)
- Abstract
In the compressed sensing framework, different algorithms have been proposed for sparse signal recovery from an incomplete set of linear measurements. The best known can be classified into two categories: \ell_1-norm minimization-based algorithms, and \ell_0 pseudo-norm minimization with greedy matching pursuit algorithms. In this paper, we propose a modified matching pursuit algorithm based on the orthogonal matching pursuit (OMP). The idea is to replace the correlation step of the OMP with a neural network. Simulation results show that in the case of random sparse signal reconstruction, the proposed method performs as well as the OMP. The complexity overhead of training and then integrating the network into the sparse signal recovery is thus not justified in this case. However, if the signal has an added structure, it is learned and incorporated in the proposed new OMP. We consider three structures: first, the sparse signal is positive; second, the positions of the nonzero coefficients of the sparse signal follow a certain spatial probability density function; the third case is a combination of both. Simulation results show that, for these signals of interest, the probability of exact recovery with our modified OMP increases significantly. Comparisons with \ell_1-based reconstructions are also performed. We thus present a framework to reconstruct sparse signals with added structure by embedding, through neural network training, additional knowledge into the decoding process in order to have better performance in the recovery of sparse signals of interest. [ABSTRACT FROM AUTHOR] (A sketch of the baseline OMP follows this record.)
- Published
- 2011
- Full Text
- View/download PDF
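For reference, the sketch below implements the plain orthogonal matching pursuit baseline that entry 22 modifies; the correlation step marked in the comments is the one the authors propose to replace with a trained neural network. The measurement matrix and signal sizes are arbitrary choices for the demonstration.

```python
import numpy as np

def omp(Phi, y, k):
    """Recover a k-sparse signal x from measurements y = Phi @ x (standard OMP)."""
    residual, support = y.copy(), []
    for _ in range(k):
        # Correlation step: pick the column most correlated with the residual
        # (this is the step the paper replaces with a neural network).
        scores = np.abs(Phi.T @ residual)
        scores[support] = -np.inf
        support.append(int(np.argmax(scores)))
        # Least-squares fit on the current support, then update the residual.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(1)
n, m, k = 256, 80, 8
Phi = rng.standard_normal((m, n)) / np.sqrt(m)       # random Gaussian sensing matrix
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x_hat = omp(Phi, Phi @ x, k)
print("relative recovery error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```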
23. Neural Networks-Based Adaptive Control for Nonlinear Time-Varying Delays Systems With Unknown Control Direction.
- Author
- Wen, Yuntong and Ren, Xuemei
- Subjects
- *ARTIFICIAL neural networks, *ADAPTIVE control systems, *NONLINEAR systems, *TIME delay systems, *LYAPUNOV functions, *CONTINUOUS functions, *SIMULATION methods & models, *APPROXIMATION theory
- Abstract
This paper investigates a neural network (NN) state-observer-based adaptive control for a class of nonlinear systems with time-varying delays and unknown control direction. An adaptive neural memoryless observer, which does not use knowledge of the time delay, is designed to estimate the system states. Furthermore, by applying the property of the function \tanh^2(\vartheta/\epsilon)/\vartheta (which can be defined at \vartheta=0) and introducing a novel appropriate Lyapunov–Krasovskii functional, an adaptive output feedback controller is constructed via the backstepping method, which efficiently avoids the problem of controller singularity and compensates for the time delay. It is proven that the closed-loop controller, designed using the NN basis function property, a new parameter adaptive law, and a Nussbaum function for detecting the control direction, guarantees the semi-global uniform ultimate boundedness of all signals, and the tracking error converges to a small neighborhood of zero. A notable feature of the proposed approach is that it relaxes the restrictive Lipschitz assumption on the unknown nonlinear continuous functions. The proposed scheme is also suitable for systems with mismatching conditions and unmeasurable states. Finally, two simulation examples are given to illustrate the effectiveness and applicability of the proposed approach. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
24. A New Formulation for Feedforward Neural Networks.
- Author
- Razavi, Saman and Tolson, Bryan A.
- Subjects
- *FEEDFORWARD control systems, *ARTIFICIAL neural networks, *APPROXIMATION theory, *MACHINE learning, *RANDOM variables, *RESPONSE surfaces (Statistics)
- Abstract
The feedforward neural network is one of the most commonly used function approximation techniques and has been applied to a wide variety of problems arising from various disciplines. However, neural networks are black-box models with multiple challenges associated with training and generalization. This paper initially looks into the internal behavior of neural networks and develops a detailed interpretation of the neural network functional geometry. Based on this geometrical interpretation, a new set of variables describing neural networks is proposed as a more effective and geometrically interpretable alternative to the traditional set of network weights and biases. This paper then develops a new formulation for neural networks with respect to the newly defined variables; this reformulated neural network (ReNN) is equivalent to the common feedforward neural network but has a less complex error response surface. To demonstrate the learning ability of ReNN, two training methods are employed in this paper, one involving a derivative-based optimization algorithm (a variation of backpropagation) and the other a derivative-free algorithm. Moreover, a new measure of regularization based on the developed geometrical interpretation is proposed to evaluate and improve the generalization ability of neural networks. The value of the proposed geometrical interpretation, the ReNN approach, and the new regularization measure are demonstrated across multiple test problems. Results show that ReNN can be trained more effectively and efficiently than common neural networks, and the proposed regularization measure is an effective indicator of how a network will perform in terms of generalization. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
25. Zhang Neural Network Versus Gradient Neural Network for Solving Time-Varying Linear Inequalities.
- Author
- Xiao, Lin and Zhang, Yunong
- Subjects
- *ARTIFICIAL neural networks, *RECURSIVE sequences (Mathematics), *STOCHASTIC convergence, *COMPUTER simulation, *MATRIX inequalities, *VECTOR analysis, *MATHEMATICAL models
- Abstract
By following the Zhang design method, a new type of recurrent neural network [i.e., the Zhang neural network (ZNN)] is presented, investigated, and analyzed for the online solution of time-varying linear inequalities. Theoretical analysis is given on the convergence properties of the proposed ZNN model. For comparative purposes, the conventional gradient neural network is developed and exploited for solving online time-varying linear inequalities as well. Computer simulation results further verify and demonstrate the efficacy, novelty, and superiority of such a ZNN model and its method for solving time-varying linear inequalities. [ABSTRACT FROM AUTHOR] (An illustrative sketch follows this record.)
- Published
- 2011
- Full Text
- View/download PDF
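A sketch of the conventional gradient-neural-network baseline that entry 25 compares against, applied to a made-up time-varying linear inequality A(t)x <= b(t). The ZNN design additionally exploits the time derivatives of A(t) and b(t) to avoid lagging error; that ingredient is deliberately omitted here, so this is only the comparison model, not the paper's ZNN.

```python
import numpy as np

# Gradient-neural-network (GNN) style dynamics for A(t) x <= b(t):
# define the violation e = max(A x - b, 0) and follow dx/dt = -gamma * A^T e.
# A(t) and b(t) below are illustrative choices.

def A(t):
    return np.array([[2 + np.sin(t), 1.0],
                     [0.5, 2 + np.cos(t)]])

def b(t):
    return np.array([np.sin(2 * t), np.cos(2 * t)])

gamma, dt = 50.0, 1e-3
x = np.array([1.0, -1.0])
worst_violation = 0.0
for step in range(int(10.0 / dt)):           # simulate 10 seconds
    t = step * dt
    e = np.maximum(A(t) @ x - b(t), 0.0)     # componentwise inequality violation
    x = x + dt * (-gamma * A(t).T @ e)       # forward-Euler integration of the GNN flow
    if t > 1.0:                              # ignore the initial transient
        worst_violation = max(worst_violation, float(e.max()))

print("worst violation after transient:", worst_violation)
```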
26. Passivity Analysis for Discrete-Time Stochastic Markovian Jump Neural Networks With Mixed Time Delays.
- Author
- Wu, Zheng-Guang, Shi, Peng, Su, Hongye, and Chu, Jian
- Subjects
- *PASSIVITY-based control, *DISCRETE-time systems, *STOCHASTIC processes, *MARKOV processes, *JUMP processes, *ARTIFICIAL neural networks, *TIME delay systems, *LYAPUNOV functions, *NUMERICAL analysis
- Abstract
In this paper, passivity analysis is conducted for discrete-time stochastic neural networks with both Markovian jumping parameters and mixed time delays. The mixed time delays consist of both discrete and distributed delays. The Markov chain in the underlying neural networks is finite piecewise homogeneous. By introducing a Lyapunov functional that accounts for the mixed time delays, a delay-dependent passivity condition is derived in terms of the linear matrix inequality approach. The case of Markov chain with partially unknown transition probabilities is also considered. All the results presented depend upon not only discrete delay but also distributed delay. A numerical example is included to demonstrate the effectiveness of the proposed methods. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
27. Chaotic Simulated Annealing by a Neural Network With a Variable Delay: Design and Application.
- Author
- Chen, Shyan-Shiou
- Subjects
- *CHAOS theory, *SIMULATED annealing, *ARTIFICIAL neural networks, *LYAPUNOV functions, *MATRIX inequalities, *STOCHASTIC convergence, *TRAVELING salesman problem
- Abstract
In this paper, we have three goals: the first is to delineate the advantages of a variably delayed system, the second is to find a more intuitive Lyapunov function for a delayed neural network, and the third is to design a delayed neural network for a quadratic cost function. For delayed neural networks, most researchers construct a Lyapunov function based on the linear matrix inequality (LMI) approach. However, that approach is not intuitive. We provide an alternative candidate Lyapunov function for a delayed neural network. On the other hand, if we are first given a quadratic cost function, we can construct a delayed neural network by suitably dividing the second-order term into two parts: a self-feedback connection weight and a delayed connection weight. To demonstrate the advantage of a variably delayed neural network, we propose a transiently chaotic neural network with variable delay and show numerically that the model should possess a better searching ability than Chen-Aihara's model, Wang's model, and Zhao's model. We discuss both the chaotic and the convergent phases. During the chaotic phase, we present bifurcation diagrams for a single neuron with a constant delay and with a variable delay, and we show that the variably delayed model possesses stochastic properties and chaotic wandering. During the convergent phase, we not only provide a novel Lyapunov function for neural networks with a delay (the Lyapunov function is independent of the LMI approach) but also establish a correlation between the Lyapunov function for a delayed neural network and an objective function for the traveling salesman problem. [ABSTRACT FROM AUTHOR] (An illustrative sketch follows this record.)
- Published
- 2011
- Full Text
- View/download PDF
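A sketch of a single transiently chaotic neuron of the Chen–Aihara type, the kind of baseline entry 27 compares against: the self-feedback strength z decays over time, so the neuron wanders irregularly at first and then settles down (the "annealing"). The parameter values are typical choices rather than the paper's, and the paper's contribution (a variable delay in the feedback) is not reproduced here.

```python
import numpy as np

k, eps, I0 = 0.9, 1.0 / 250.0, 0.65
beta = 0.002          # decay rate of the self-feedback (annealing schedule)
z, y = 0.08, 0.283
outputs = []
for t in range(3000):
    x = 1.0 / (1.0 + np.exp(-y / eps))     # neuron output
    y = k * y - z * (x - I0)               # internal state with chaotic self-feedback
    z = (1.0 - beta) * z                   # gradually remove the chaotic term
    outputs.append(x)

# Early iterates wander; late iterates have converged.
print("spread of first 500 outputs :", np.ptp(outputs[:500]))
print("spread of last 500 outputs  :", np.ptp(outputs[-500:]))
```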
28. Textual and Visual Content-Based Anti-Phishing: A Bayesian Approach.
- Author
- Zhang, Haijun, Liu, Gang, Chow, Tommy W. S., and Liu, Wenyin
- Subjects
- *BAYESIAN analysis, *PHISHING, *WEBSITES, *CLASSIFICATION, *ARTIFICIAL neural networks, *ALGORITHMS, *FEATURE extraction, *MULTISENSOR data fusion
- Abstract
A novel framework using a Bayesian approach for content-based phishing web page detection is presented. Our model takes into account textual and visual contents to measure the similarity between the protected web page and suspicious web pages. A text classifier, an image classifier, and an algorithm fusing the results from classifiers are introduced. An outstanding feature of this paper is the exploration of a Bayesian model to estimate the matching threshold. This is required in the classifier for determining the class of the web page and identifying whether the web page is phishing or not. In the text classifier, the naive Bayes rule is used to calculate the probability that a web page is phishing. In the image classifier, the earth mover's distance is employed to measure the visual similarity, and our Bayesian model is designed to determine the threshold. In the data fusion algorithm, the Bayes theory is used to synthesize the classification results from textual and visual content. The effectiveness of our proposed approach was examined in a large-scale dataset collected from real phishing cases. Experimental results demonstrated that the text classifier and the image classifier we designed deliver promising results, the fusion algorithm outperforms either of the individual classifiers, and our model can be adapted to different phishing cases. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
29. Low-Complexity Nonlinear Adaptive Filter Based on a Pipelined Bilinear Recurrent Neural Network.
- Author
- Zhao, Haiquan, Zeng, Xiangping, and He, Zhengyou
- Subjects
- *DATA pipelining, *ADAPTIVE filters, *ARTIFICIAL neural networks, *COMPUTER simulation, *NONLINEAR systems, *STOCHASTIC convergence, *MATHEMATICAL models, *COMPUTER architecture
- Abstract
To reduce the computational complexity of the bilinear recurrent neural network (BLRNN), a novel low-complexity nonlinear adaptive filter with a pipelined bilinear recurrent neural network (PBLRNN) is presented in this paper. The PBLRNN, inheriting the modular architecture of the pipelined RNN proposed by Haykin and Li, comprises a number of BLRNN modules that are cascaded in a chained form. Each module is implemented by a small-scale BLRNN with internal dynamics. Since the modules of the PBLRNN can be executed simultaneously in a pipelined parallel fashion, a significant improvement in computational efficiency results. Moreover, due to the nesting of modules, the performance of the PBLRNN can be further improved. To suit the modular architecture, a modified adaptive amplitude real-time recurrent learning algorithm is derived based on the gradient descent approach. Extensive simulations are carried out to evaluate the performance of the PBLRNN on nonlinear system identification, nonlinear channel equalization, and chaotic time series prediction. Experimental results show that the PBLRNN provides considerably better performance than the single BLRNN and RNN models. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
30. Echo State Gaussian Process.
- Author
- Chatzis, Sotirios P. and Demiris, Yiannis
- Subjects
- *GAUSSIAN processes, *BAYESIAN analysis, *DATA modeling, *ARTIFICIAL neural networks, *PREDICTION models, *NEURONS, *ROBUST control
- Abstract
Echo state networks (ESNs) constitute a novel approach to recurrent neural network (RNN) training, with an RNN (the reservoir) being generated randomly, and only a readout being trained using a simple computationally efficient algorithm. ESNs have greatly facilitated the practical application of RNNs, outperforming classical approaches on a number of benchmark tasks. In this paper, we introduce a novel Bayesian approach toward ESNs, the echo state Gaussian process (ESGP). The ESGP combines the merits of ESNs and Gaussian processes to provide a more robust alternative to conventional reservoir computing networks while also offering a measure of confidence on the generated predictions (in the form of a predictive distribution). We exhibit the merits of our approach in a number of applications, considering both benchmark datasets and real-world applications, where we show that our method offers a significant enhancement in the dynamical data modeling capabilities of ESNs. Additionally, we also show that our method is orders of magnitude more computationally efficient compared to existing Gaussian process-based methods for dynamical data modeling, without compromises in the obtained predictive performance. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
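A compact echo state network with a ridge-regression readout, i.e., the conventional reservoir-computing setup that the echo state Gaussian process of entry 30 builds on (the ESGP replaces this linear readout with a Gaussian process to obtain predictive distributions). The reservoir size, spectral radius, and the toy one-step-ahead prediction task below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n_res, n_in = 200, 1
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))        # scale spectral radius below 1

def run_reservoir(u):
    states, x = [], np.zeros(n_res)
    for u_t in u:
        x = np.tanh(W_in @ np.atleast_1d(u_t) + W @ x)   # reservoir state update
        states.append(x.copy())
    return np.array(states)

# Toy task: one-step-ahead prediction of a noisy sine wave.
t = np.arange(2000) * 0.1
series = np.sin(t) + 0.05 * rng.standard_normal(len(t))
u, y = series[:-1], series[1:]
X = run_reservoir(u)[200:]                        # drop the initial transient
y = y[200:]

ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)   # ridge readout
pred = X @ W_out
print("training NRMSE:", np.sqrt(np.mean((pred - y) ** 2)) / np.std(y))
```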
31. A Dynamic Feedforward Neural Network Based on Gaussian Particle Swarm Optimization and its Application for Predictive Control.
- Author
- Han, Min, Fan, Jianchao, and Wang, Jun
- Subjects
- *ARTIFICIAL neural networks, *FEEDFORWARD control systems, *PARTICLE swarm optimization, *PREDICTION models, *HEURISTIC algorithms, *MATHEMATICAL optimization, *STABILITY (Mechanics), *SYSTEM identification, *ROBUST control
- Abstract
A dynamic feedforward neural network (DFNN) is proposed for predictive control, whose adaptive parameters are adjusted by using Gaussian particle swarm optimization (GPSO) in the training process. Adaptive time-delay operators are added in the DFNN to improve its generalization for poorly known nonlinear dynamic systems with long time delays. Furthermore, GPSO adopts a chaotic map with Gaussian function to balance the exploration and exploitation capabilities of particles, which improves the computational efficiency without compromising the performance of the DFNN. The stability of the particle dynamics is analyzed, based on the robust stability theory, without any restrictive assumption. A stability condition for the GPSO+DFNN model is derived, which ensures a satisfactory global search and quick convergence, without the need for gradients. The particle velocity ranges could change adaptively during the optimization process. The results of a comparative study show that the performance of the proposed algorithm can compete with selected algorithms on benchmark problems. Additional simulation results demonstrate the effectiveness and accuracy of the proposed combination algorithm in identifying and controlling nonlinear systems with long time delays. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
32. Parallel Reservoir Computing Using Optical Amplifiers.
- Author
- Vandoorne, Kristof, Dambre, Joni, Verstraeten, David, Schrauwen, Benjamin, and Bienstman, Peter
- Subjects
- *OPTICAL amplifiers, *ARTIFICIAL neural networks, *ELECTRIC network topology, *PHOTONICS, *PHASE shift (Nuclear physics), *SPEECH perception, *SEMICONDUCTORS, *INTEGRATED optics
- Abstract
Reservoir computing (RC), a computational paradigm inspired by neural systems, has become increasingly popular in recent years for solving a variety of complex recognition and classification problems. Thus far, most implementations have been software-based, limiting their speed and power efficiency. Integrated photonics offers the potential for a fast, power-efficient, and massively parallel hardware implementation. We have previously proposed a network of coupled semiconductor optical amplifiers as an interesting test case for such a hardware implementation. In this paper, we investigate the important design parameters and the consequences of process variations through simulations. We use an isolated word recognition task with babble noise to evaluate the performance of the photonic reservoirs with respect to traditional software reservoir implementations, which are based on leaky hyperbolic tangent functions. Our results show that the use of coherent light in a well-tuned reservoir architecture offers significant performance benefits. The most important design parameters are the delay and the phase shift in the system's physical connections. With optimized values for these parameters, coherent semiconductor optical amplifier (SOA) reservoirs can achieve better results than traditional simulated reservoirs. We also show that process variations hardly degrade the performance, but amplifier noise can be detrimental. This effect must therefore be taken into account when designing SOA-based RC implementations. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
33. Comprehensive Review of Neural Network-Based Prediction Intervals and New Advances.
- Author
- Khosravi, Abbas, Nahavandi, Saeid, Creighton, Doug, and Atiya, Amir F.
- Subjects
- *ARTIFICIAL neural networks, *PREDICTION models, *PERFORMANCE evaluation, *COMPARATIVE studies, *BAYESIAN analysis, *GENETIC algorithms, *STATISTICAL bootstrapping
- Abstract
This paper evaluates the four leading techniques proposed in the literature for the construction of prediction intervals (PIs) for neural network point forecasts. The delta, Bayesian, bootstrap, and mean-variance estimation (MVE) methods are reviewed and their performance for generating high-quality PIs is compared. PI-based measures are proposed and applied for the objective and quantitative assessment of each method's performance. A selection of 12 synthetic and real-world case studies is used to examine each method's performance for PI construction. The comparison is performed on the basis of the quality of the generated PIs, the repeatability of the results, the computational requirements, and the variability of the PIs with regard to the data uncertainty. The results obtained in this paper indicate that: 1) the delta and Bayesian methods are the best in terms of quality and repeatability, and 2) the MVE and bootstrap methods are the best in terms of low computational load and the width variability of PIs. This paper also introduces the concept of combinations of PIs, and proposes a new method for generating combined PIs using the traditional PIs. A genetic algorithm is applied for adjusting the combiner parameters through minimization of a PI-based cost function subject to two sets of restrictions. It is shown that the quality of PIs produced by the combiners is dramatically better than the quality of PIs obtained from each individual method. [ABSTRACT FROM AUTHOR] (An illustrative sketch of the bootstrap method follows this record.)
- Published
- 2011
- Full Text
- View/download PDF
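A sketch of the bootstrap route to prediction intervals, one of the four methods reviewed in entry 33: refit the point model on resampled training sets, use the spread of the ensemble as the model-uncertainty estimate, add a residual noise-variance estimate, and take plus/minus z times the combined standard deviation around the ensemble mean. A polynomial fit stands in for the neural network point forecaster purely to keep the example self-contained; the interval-construction logic is the part being illustrated.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(-3, 3, 200))
y = np.sin(x) + 0.2 * rng.standard_normal(len(x))        # noisy training data

B, degree = 200, 5
x_grid = np.linspace(-3, 3, 50)
ensemble = np.empty((B, len(x_grid)))
for b in range(B):
    idx = rng.integers(0, len(x), len(x))                 # bootstrap resample
    coeffs = np.polyfit(x[idx], y[idx], degree)
    ensemble[b] = np.polyval(coeffs, x_grid)

mean = ensemble.mean(axis=0)
model_var = ensemble.var(axis=0)                          # model (parameter) uncertainty
noise_var = np.var(y - np.polyval(np.polyfit(x, y, degree), x))   # residual noise estimate
half_width = 1.96 * np.sqrt(model_var + noise_var)        # ~95% prediction interval

# Crude sanity check: coverage of fresh noisy observations at the grid points.
y_new = np.sin(x_grid) + 0.2 * rng.standard_normal(len(x_grid))
print("fraction of new observations inside the PI:",
      np.mean(np.abs(y_new - mean) <= half_width))
```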
34. Nonlinear Identification With Local Model Networks Using GTLS Techniques and Equality Constraints.
- Author
- Hametner, Christoph and Jakubek, Stefan
- Subjects
- *NONLINEAR systems, *SYSTEM identification, *ARTIFICIAL neural networks, *LEAST squares, *PARAMETER estimation, *NOISE measurement, *MATHEMATICAL optimization, *EIGENVALUES, *EIGENFUNCTIONS, *IMAGE reconstruction
- Abstract
Local model networks approximate a nonlinear system through multiple local models fitted within a partition space. The main advantage of this approach is that the identification of complex nonlinear processes is alleviated by the integration of structured knowledge about the process. This paper extends these concepts by the integration of quantitative process knowledge into the identification procedure. Quantitative knowledge describes explicit dependences between inputs and outputs and is integrated in the parameter estimation process by means of equality constraints. For this purpose, a constrained generalized total least squares algorithm for local parameter estimation is presented. Furthermore, the problem of proper integration of constraints in the partitioning process is treated where an expectation-maximization procedure is combined with constrained parameter estimation. The benefits and the applicability of the proposed concepts are demonstrated by means of two illustrative examples and a practical application using real measurement data. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
35. SortNet: Learning to Rank by a Neural Preference Function.
- Author
- Rigutini, Leonardo, Papini, Tiziano, Maggini, Marco, and Scarselli, Franco
- Subjects
- *ARTIFICIAL neural networks, *APPROXIMATION algorithms, *MACHINE learning, *NEURONS, *SORTING (Electronic computers), *INFORMATION storage & retrieval systems, *COMPARATIVE studies
- Abstract
Relevance ranking consists in sorting a set of objects with respect to a given criterion. However, in personalized retrieval systems, the relevance criteria may vary among different users and may not be predefined. In this case, ranking algorithms that adapt their behavior from users' feedback must be devised. Two main approaches are proposed in the literature for learning to rank: the use of a scoring function, learned from examples, that evaluates a feature-based representation of each object and yields an absolute relevance score; and a pairwise approach, where a preference function is learned to determine the object that has to be ranked first in a given pair. In this paper, we present a preference learning method for learning to rank. A neural network, the comparative neural network (CmpNN), is trained from examples to approximate the comparison function for a pair of objects. The CmpNN adopts a particular architecture designed to implement the symmetries naturally present in a preference function. The learned preference function can be embedded as the comparator into a classical sorting algorithm to provide a global ranking of a set of objects. To improve the ranking performance, an active-learning procedure is devised, which aims at selecting the most informative patterns in the training set. The proposed algorithm is evaluated on the LETOR dataset, showing promising performance in comparison with other state-of-the-art algorithms. [ABSTRACT FROM AUTHOR] (An illustrative sketch follows this record.)
- Published
- 2011
- Full Text
- View/download PDF
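The core mechanism of entry 35 is a learned pairwise preference function plugged into an ordinary comparison sort. In the sketch below, a simple linear scorer stands in for the trained CmpNN (the weights are made up); everything else, the comparator and the sort, follows the described scheme.

```python
from functools import cmp_to_key
import numpy as np

rng = np.random.default_rng(0)
w = np.array([0.7, -0.2, 0.5])                        # stand-in "learned" weights
items = [rng.standard_normal(3) for _ in range(8)]    # feature vectors of the objects

def pref(a, b):
    """Probability-like score that a should precede b (stand-in for CmpNN(a, b))."""
    return 1.0 / (1.0 + np.exp(-w @ (a - b)))

def cmp(a, b):
    p = pref(a, b)
    return -1 if p > 0.5 else (1 if p < 0.5 else 0)

ranked = sorted(items, key=cmp_to_key(cmp))           # comparator embedded in a standard sort
for rank, v in enumerate(ranked, 1):
    print(rank, np.round(v, 2))
```

Because the stand-in preference depends only on w @ (a - b), it automatically satisfies pref(a, b) + pref(b, a) = 1, mirroring the symmetry the CmpNN architecture is designed to enforce.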
36. Generalized Halanay Inequalities and Their Applications to Neural Networks With Unbounded Time-Varying Delays.
- Author
- Liu, Bo, Lu, Wenlian, and Chen, Tianping
- Subjects
- *ARTIFICIAL neural networks, *STABILITY (Mechanics), *SYNCHRONIZATION, *DIFFERENTIAL equations, *ELECTRONIC indexes, *MATHEMATICAL inequalities, *GENERALIZATION
- Abstract
In this brief, we discuss some variants of generalized Halanay inequalities that are useful in the discussion of dissipativity and stability of delayed neural networks, integro-differential systems, and Volterra functional differential equations. We provide some generalizations of the Halanay inequality, which are more accurate than the existing results. As applications, we discuss invariant sets, dissipative synchronization, and global asymptotic stability for Hopfield neural networks with infinite delays. We also prove that the dynamical systems with unbounded time-varying delays are globally asymptotically stable. [ABSTRACT FROM AUTHOR] (The classical inequality is recalled after this record.)
- Published
- 2011
- Full Text
- View/download PDF
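For context, the classical Halanay inequality that entry 36 generalizes (to time-varying coefficients and unbounded time-varying delays) can be stated as follows.

```latex
% Classical Halanay inequality (the baseline statement that the brief generalizes).
If $V(t) \ge 0$ satisfies, for $t \ge t_0$,
\[
  \dot V(t) \;\le\; -a\,V(t) \;+\; b \sup_{t-\tau \le s \le t} V(s),
  \qquad a > b \ge 0,\ \tau > 0,
\]
then
\[
  V(t) \;\le\; \Big( \sup_{t_0 - \tau \le s \le t_0} V(s) \Big)\, e^{-\lambda (t - t_0)},
  \qquad t \ge t_0,
\]
where $\lambda > 0$ is the unique positive root of $\lambda = a - b\, e^{\lambda \tau}$.
```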
37. New Accurate and Flexible Design Procedure for a Stable KWTA Continuous Time Network.
- Author
- Costea, Ruxandra L. and Marinov, Corneliu A.
- Subjects
- *ARTIFICIAL neural networks, *NEURONS, *NEURAL circuitry, *METAL oxide semiconductor field-effect transistors, *ANALOG computer circuits, *SIGNAL processing, *STABILITY (Mechanics), *ELECTRIC conductivity
- Abstract
The classical continuous-time recurrent (Hopfield) network is considered and adapted to K-winner-take-all (KWTA) operation. The neurons are of sigmoidal type with a controllable gain G, an amplitude m, and are interconnected by the conductance p. The network is intended to process, one by one, a sequence of lists, each with N distinct elements, each squeezed into the admission interval [0, I], and each with an imposed minimum separation z_{\min} between elements. The network carries out: 1) a matching dynamic process between the order of the list elements and the order of the outputs, and 2) a binary-type steady-state separation between the K-th and (K+1)-th outputs, the former surpassing a +\xi threshold and the latter falling under the -\xi threshold. As a result, the machine signals the ranks of the K largest elements of the list. To achieve 1), the initial condition of the processing phase has to be placed in a computable \theta-vicinity of the zero state. This requires a resetting procedure after each list. To achieve 2), the bias current M has to be within a certain interval computable from the circuit parameters. In addition, the steady state should be asymptotically stable. To these ends, we work with high gain and exploit the sigmoid properties and network symmetry. The various inequality-type constraints between parameters are shown to be compatible, and a neat synthesis procedure, simple and flexible, is given for the tanh sigmoid. It starts from the given parameters N, K, I, z_{\min}, and m, and computes simple bounds on p, G, \xi, \theta, and M. Numerical tests and comments reveal the qualities and shortcomings of the method. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
38. Convergence of Cyclic and Almost-Cyclic Learning With Momentum for Feedforward Neural Networks.
- Author
- Wang, Jian, Yang, Jie, and Wu, Wei
- Subjects
- *STOCHASTIC convergence, *MACHINE learning, *ARTIFICIAL neural networks, *FEEDFORWARD control systems, *CONJUGATE gradient methods, *BACK propagation, *ERROR analysis in mathematics, *ALGORITHMS
- Abstract
Two backpropagation algorithms with momentum for feedforward neural networks with a single hidden layer are considered. It is assumed that the training samples are supplied to the network in a cyclic or an almost-cyclic fashion in the learning procedure, i.e., in each training cycle, each sample of the training set is supplied to the network exactly once, in a fixed or a stochastic order, respectively. A restart strategy for the momentum is adopted such that the momentum coefficient is set to zero at the beginning of each training cycle. Corresponding weak and strong convergence results are then proved, indicating that the gradient of the error function goes to zero and the weight sequence goes to a fixed point, respectively. The convergence conditions on the learning rate, the momentum coefficient, and the activation functions are much more relaxed than those of the existing results. [ABSTRACT FROM AUTHOR] (An illustrative sketch follows this record.)
- Published
- 2011
- Full Text
- View/download PDF
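To make the cyclic/almost-cyclic sampling and the momentum restart strategy concrete, here is a minimal Python sketch of such a training loop for a one-hidden-layer network on a toy regression set. The architecture, learning rate, and momentum coefficient are illustrative assumptions, not values prescribed by the brief.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, (64, 2)); T = np.sin(X[:, :1] + X[:, 1:])   # toy regression set

    W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)                  # single hidden layer
    W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
    params = [W1, b1, W2, b2]
    eta, mu = 0.05, 0.9                                                # learning rate, momentum

    def grads(x, t):
        # backpropagation for squared error (1/2)*e^2 on a single sample
        h = np.tanh(x @ params[0] + params[1]); y = h @ params[2] + params[3]
        e = y - t
        gW2 = np.outer(h, e); gb2 = e
        dh = (e @ params[2].T) * (1 - h**2)
        gW1 = np.outer(x, dh); gb1 = dh
        return [gW1, gb1, gW2, gb2]

    for cycle in range(200):
        v = [np.zeros_like(p) for p in params]     # restart strategy: momentum reset each cycle
        order = rng.permutation(len(X))            # almost-cyclic: stochastic order, each sample once
        # order = np.arange(len(X))                # cyclic variant: fixed order instead
        for i in order:
            g = grads(X[i], T[i])
            for k in range(len(params)):
                v[k] = mu * v[k] - eta * g[k]      # momentum term
                params[k] = params[k] + v[k]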
39. Rapid Detection of Small Oscillation Faults via Deterministic Learning.
- Author
-
Wang, Cong and Chen, Tianrui
- Subjects
- *
MACHINE learning , *UNCERTAINTY (Information theory) , *APPROXIMATION theory , *NONLINEAR theories , *FAULT tolerance (Engineering) , *OSCILLATIONS , *RADIAL basis functions , *SIMULATION methods & models - Abstract
Detection of small faults is one of the most important and challenging tasks in the area of fault diagnosis. In this paper, we present an approach for the rapid detection of small oscillation faults based on a recently proposed deterministic learning (DL) theory. The approach consists of two phases: the training phase and the detection phase. In the training phase, the system dynamics underlying normal and fault oscillations are locally accurately approximated through DL. The obtained knowledge of system dynamics is stored in constant radial basis function (RBF) networks. In the detection phase, rapid detection is implemented. Specifically, a bank of estimators is constructed using the constant RBF neural networks to represent the trained normal and fault modes. By comparing the set of estimators with the monitored test system, a set of residuals is generated, and the average L1 norms of the residuals are taken as the measure of the differences between the dynamics of the monitored system and the dynamics of the trained normal mode and oscillation faults. The occurrence of a test oscillation fault can be rapidly detected according to the smallest-residual principle. A rigorous analysis of the performance of the detection scheme is also given. The novelty of the paper lies in the fact that the modeling uncertainty and nonlinear fault functions are accurately approximated, and the obtained knowledge is then utilized to achieve rapid detection of small oscillation faults. Simulation studies are included to demonstrate the effectiveness of the approach. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
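The smallest-residual comparison step can be sketched as below. In the paper the per-mode estimators are constant-weight RBF networks obtained in the training phase; here they are simple stand-in functions, and the observer gain and toy dynamics are assumptions for illustration only.

    import numpy as np

    # Schematic smallest-residual detection: each trained mode k supplies an estimator f_hat[k]
    # of the system dynamics; residuals between predicted and measured states are averaged in
    # the L1 norm and the mode with the smallest average residual is declared.
    def detect_mode(x_traj, t, f_hat_bank, lam=5.0):
        dt = t[1] - t[0]
        avg_l1 = []
        for f_hat in f_hat_bank:
            xhat, res = np.array(x_traj[0], dtype=float), []
            for i in range(len(t) - 1):
                # estimator driven by the monitored state, with output-error injection lam*(x - xhat)
                xhat = xhat + dt * (f_hat(x_traj[i]) + lam * (x_traj[i] - xhat))
                res.append(np.abs(x_traj[i + 1] - xhat).sum())   # L1 norm of the residual
            avg_l1.append(float(np.mean(res)))                    # average L1 norm for this mode
        return int(np.argmin(avg_l1)), avg_l1                     # smallest-residual principle

    # Toy usage: a trajectory generated by the "fault" dynamics should be attributed to mode 1.
    t = np.linspace(0.0, 10.0, 2001)
    normal = lambda x: -x
    fault = lambda x: -x + 0.8 * np.sin(3.0 * x)
    x = np.zeros(len(t))
    x[0] = 1.0
    for i in range(len(t) - 1):
        x[i + 1] = x[i] + (t[1] - t[0]) * fault(x[i])
    print(detect_mode(x, t, [normal, fault]))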
40. Adaptive Neural Output Feedback Controller Design With Reduced-Order Observer for a Class of Uncertain Nonlinear SISO Systems.
- Author
-
Liu, Yan-Jun, Tong, Shao-Cheng, Wang, Dan, Li, Tie-Shan, and Chen, C. L. Philip
- Subjects
- *
ADAPTIVE control systems , *FEEDBACK control systems , *OBSERVABILITY (Control theory) , *UNCERTAINTY (Information theory) , *NONLINEAR systems , *SIMULATION methods & models , *ARTIFICIAL neural networks , *LYAPUNOV functions - Abstract
An adaptive output feedback control scheme is studied for uncertain nonlinear single-input–single-output systems with partially unmeasured states. In the scheme, a reduced-order observer (ROO) is designed to estimate the unmeasured states. By employing radial basis function neural networks and incorporating the ROO into a new backstepping design, an adaptive output feedback controller is constructively developed. A prominent advantage is its ability to balance the control action between state feedback and output feedback. In addition, the scheme can still be implemented when not all the states are available. The stability of the closed-loop system is guaranteed in the sense that all the signals are semiglobally uniformly ultimately bounded and the system output tracks the reference signal to within a bounded compact set. A simulation example is given to validate the effectiveness of the proposed scheme. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
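As background for the reduced-order observer idea used in this entry, the following Python sketch implements the classical Luenberger-type reduced-order observer for a linear plant. The paper's observer is nonlinear and neural-network-based, so the plant, matrices, and gain below are illustrative assumptions only.

    import numpy as np

    # Plant: x = [position (measured), velocity (unmeasured)],  xdot = A x + B u,  y = x[0].
    # Reduced-order observer: internal state w, estimate of the unmeasured state = w + L*y.
    A = np.array([[0.0, 1.0], [0.0, -0.5]]); B = np.array([0.0, 1.0])
    A11, A12, A21, A22 = A[0, 0], A[0, 1], A[1, 0], A[1, 1]
    B1, B2 = B[0], B[1]
    L = 5.0                                  # observer gain; A22 - L*A12 = -5.5 is Hurwitz

    dt, T = 1e-3, 5.0
    x = np.array([0.0, 1.0]); w = 0.0        # true state and observer internal state
    for k in range(int(T / dt)):
        u = np.sin(0.5 * k * dt)             # arbitrary test input
        y = x[0]
        wdot = (A22 - L * A12) * w + ((A22 - L * A12) * L + A21 - L * A11) * y + (B2 - L * B1) * u
        w += dt * wdot
        x = x + dt * (A @ x + B * u)
    print("true velocity: %.3f   estimate: %.3f" % (x[1], w + L * x[0]))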
41. Global Asymptotic Stability for a Class of Generalized Neural Networks With Interval Time-Varying Delays.
- Author
-
Zhang, Xian-Ming and Han, Qing-Long
- Subjects
- *
GLOBAL analysis (Mathematics) , *ASYMPTOTIC expansions , *INTERVAL analysis , *TIME delay systems , *NUMERICAL analysis , *BIOLOGICAL neural networks - Abstract
This paper is concerned with global asymptotic stability for a class of generalized neural networks (NNs) with interval time-varying delays, which include two classes of fundamental NNs, i.e., static neural networks (SNNs) and local field neural networks (LFNNs), as special cases. Some novel delay-independent and delay-dependent stability criteria are derived. These stability criteria are applicable not only to SNNs but also to LFNNs. It is theoretically proven that these stability criteria are more effective than some existing ones for either SNNs or LFNNs, which is confirmed by some numerical examples. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
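To illustrate the kind of question such criteria answer, here is a very conservative, textbook delay-independent check for a delayed local-field network, coded in Python. This norm condition is an assumption of the sketch and is far less sharp than the LMI criteria of the paper; it only shows the shape of a stability test.

    import numpy as np

    # Delayed local-field network:  xdot = -C x + A f(x) + B f(x(t - tau(t))) + u,
    # with f Lipschitz with constant ell and C a positive diagonal matrix.
    # A crude delay-independent sufficient condition for global asymptotic stability is
    #   min_i c_i > ell * (||A||_2 + ||B||_2),
    # obtained from a quadratic Lyapunov function plus a Halanay-type argument.
    def crude_stability_check(C, A, B, ell):
        c_min = np.min(np.diag(C))
        return c_min > ell * (np.linalg.norm(A, 2) + np.linalg.norm(B, 2))

    C = np.diag([2.0, 2.5])
    A = np.array([[0.3, -0.2], [0.1, 0.4]])
    B = np.array([[0.2, 0.1], [-0.3, 0.2]])
    print(crude_stability_check(C, A, B, ell=1.0))   # True: stable for any bounded time-varying delay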
42. Identification of Extended Hammerstein Systems Using Dynamic Self-Optimizing Neural Networks.
- Author
-
Ren, Xuemei and Lv, Xiaohua
- Subjects
- *
SELF-organizing systems , *ARTIFICIAL neural networks , *SYSTEM identification , *RANDOM noise theory , *VECTOR analysis , *STOCHASTIC convergence , *PERFORMANCE evaluation , *ENTROPY (Information theory) - Abstract
In this paper, a new dynamic self-optimizing neural network (DSONN) with an online-adjusted hidden layer and weights is proposed for a class of extended Hammerstein systems with non-Gaussian noises. The input vector to the network is first determined by means of system order estimation using a designated input signal. The hidden layer is then generated online; it consists of a growing step driven by the plant dynamics and a revised pruning step used to refine the hidden structure, such that the generated model can be a minimal realization with satisfactory performance. The algorithm is capable of adjusting both the network structure and the weights simultaneously by using weight variations as the conditions for structure optimization. An integrated performance index, including the identification error and an additional entropy penalty term, is employed so that the model can attenuate the non-Gaussian noises as well as match the unknown plant automatically with a suitable structure. Convergence of the weights is guaranteed by suitably choosing the learning rates. The proposed DSONN can be established without a priori knowledge of the unknown nonlinearity. The efficiency of the method is illustrated through applications to three different Hammerstein systems. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
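For readers unfamiliar with the plant class, the Python sketch below generates data from a toy Hammerstein system (static nonlinearity followed by linear dynamics) corrupted by non-Gaussian noise. The nonlinearity, coefficients, and Laplace noise are illustrative assumptions; the DSONN identifier itself (growing/pruning hidden layer, entropy-penalized criterion) is not reproduced here.

    import numpy as np

    rng = np.random.default_rng(1)

    # Toy Hammerstein data generator: static nonlinearity f(.), then a stable ARX-type linear block,
    # then heavy-tailed (Laplace) measurement noise as a stand-in for non-Gaussian disturbances.
    def hammerstein(u, a=(0.6, -0.1), b=(0.5, 0.3), noise=0.05):
        f = lambda v: np.tanh(2.0 * v) + 0.2 * v**2          # assumed static input nonlinearity
        x = f(np.asarray(u))
        y = np.zeros(len(u))
        for k in range(2, len(u)):
            y[k] = a[0] * y[k - 1] + a[1] * y[k - 2] + b[0] * x[k - 1] + b[1] * x[k - 2]
        return y + rng.laplace(0.0, noise, len(u))

    u = rng.uniform(-1, 1, 500)
    y = hammerstein(u)                                        # identification data: (u, y) pairs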
43. Adaptive Neural Network Decentralized Backstepping Output-Feedback Control for Nonlinear Large-Scale Systems With Time Delays.
- Author
-
Tong, Shao Cheng, Li, Yong Ming, and Zhang, Hua-Guang
- Subjects
- *
ARTIFICIAL neural networks , *FEEDBACK control systems , *NONLINEAR systems , *TIME delay systems , *SIMULATION methods & models , *APPROXIMATION theory , *ERROR analysis in mathematics - Abstract
In this paper, two adaptive neural network (NN) decentralized output feedback control approaches are proposed for a class of uncertain nonlinear large-scale systems with unmeasured states and unknown time delays. Using NNs to approximate the unknown nonlinear functions, an NN state observer is designed to estimate the unmeasured states. By combining the adaptive backstepping technique with the decentralized control design principle, an adaptive NN decentralized output feedback control approach is developed. In order to overcome the problem of “explosion of complexity” inherent in the proposed control approach, the dynamic surface control (DSC) technique is introduced into the first adaptive NN decentralized control scheme, and a simplified adaptive NN decentralized output feedback DSC approach is developed. It is proved that the two proposed control approaches can guarantee that all the signals of the closed-loop system are semi-globally uniformly ultimately bounded, and that the observer errors and the tracking errors converge to a small neighborhood of the origin. Simulation results are provided to show the effectiveness of the proposed approaches. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
44. Phase Synchronization Motion and Neural Coding in Dynamic Transmission of Neural Information.
- Author
-
Wang, Rubin, Zhang, Zhikang, Qu, Jingyi, and Cao, Jianting
- Subjects
- *
SYNCHRONIZATION , *NEURONS , *BIOLOGICAL neural networks , *STOCHASTIC processes , *NEURAL stimulation , *NUMERICAL analysis , *ARTIFICIAL neural networks - Abstract
In order to explore the dynamic characteristics of neural coding in the transmission of neural information in the brain, a neural network model consisting of three neuronal populations is proposed in this paper using the theory of stochastic phase dynamics. Based on the established model, the neural phase synchronization motion and neural coding under spontaneous activity and under stimulation are examined for the case of a varying network structure. Our analysis shows that, under the condition of spontaneous activity, the characteristics of phase neural coding are unrelated to the number of neurons participating in neural firing within the neuronal populations. The numerical simulation results support the existence of sparse coding within the brain, and verify the crucial importance of the magnitudes of the coupling coefficients in neural information processing, as well as the completely different information processing capability of neural information transmission in serial and parallel couplings. The results also testify that, under external stimulation, the larger the number of neurons in a neuronal population, the more the stimulation influences the phase synchronization motion and neural coding evolution in the other neuronal populations. We verify numerically the neurobiological experimental result that reducing the coupling coefficient between neuronal populations enhances the lateral inhibition function in neural networks, with the enhancement equivalent to lowering the neuronal excitability threshold. Thus, the neuronal populations tend to react more strongly under the same stimulation, and more neurons become excited, leading to more neurons participating in neural coding and in the phase synchronization motion. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
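The notion of phase synchronization used above can be illustrated with a generic stochastic phase-oscillator (Kuramoto-type) simulation. This is only a sketch of the phenomenon under assumed parameters; it is not the three-population stochastic phase model developed in the paper.

    import numpy as np

    # Mean-field Kuramoto model with additive phase noise: above a critical coupling, the
    # order parameter r = |mean(exp(i*theta))| rises toward 1, indicating phase synchronization.
    rng = np.random.default_rng(2)
    N, dt, steps = 100, 1e-3, 20000
    K, sigma = 4.0, 0.5                                   # coupling strength, noise intensity (assumed)
    omega = rng.normal(10.0, 1.0, N)                      # natural frequencies (rad/s)
    theta = rng.uniform(0, 2 * np.pi, N)
    for _ in range(steps):
        z = np.mean(np.exp(1j * theta))
        r, psi = np.abs(z), np.angle(z)                   # order parameter and mean phase
        dtheta = omega + K * r * np.sin(psi - theta)      # mean-field coupling
        theta += dt * dtheta + sigma * np.sqrt(dt) * rng.normal(0, 1, N)
    print("final order parameter r = %.2f" % np.abs(np.mean(np.exp(1j * theta))))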
45. Adaptive Learning and Control for MIMO System Based on Adaptive Dynamic Programming.
- Author
-
Fu, Jian, He, Haibo, and Zhou, Xinmin
- Subjects
- *
ARTIFICIAL neural networks , *DYNAMIC programming , *MIMO systems , *ALGORITHMS , *MATHEMATICAL models , *INTELLIGENT control systems , *INDUSTRIAL applications - Abstract
Adaptive dynamic programming (ADP) is a promising research field for the design of intelligent controllers, which can both learn on the fly and exhibit optimal behavior. Over the past decades, several generations of ADP designs have been proposed in the literature and have demonstrated many successful applications in various benchmarks and industrial settings. While much of the existing research focuses on multiple-input–single-output systems with steepest-descent search, in this paper we investigate a generalized multiple-input–multiple-output (GMIMO) ADP design for online learning and control, which is applicable to a wider range of practical real-world applications. Furthermore, an improved weight-updating algorithm based on recursive Levenberg–Marquardt methods is presented and embodied in the GMIMO approach to improve its performance. Finally, we test the performance of this approach on a practical complex system, namely, the learning and control of the tension and height of the looper system in a hot strip mill. Experimental results demonstrate that the proposed approach can achieve effective and robust performance. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
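The weight update at the core of this entry builds on the Levenberg–Marquardt step. The Python sketch below shows the standard batch form of that step on a tiny fitting problem; the paper embeds a recursive variant inside the ADP critic/action training, which is not reproduced here, and the toy model is an assumption.

    import numpy as np

    # Standard Levenberg-Marquardt update for residuals e(w) with Jacobian J = de/dw:
    #   dw = -(J^T J + mu*I)^{-1} J^T e
    def lm_step(J, e, mu):
        n = J.shape[1]
        return -np.linalg.solve(J.T @ J + mu * np.eye(n), J.T @ e)

    # Tiny usage example: fit y = w0 + w1*x by repeated LM steps.
    x = np.linspace(0, 1, 20); y = 1.0 + 2.0 * x
    w, mu = np.zeros(2), 1e-2
    for _ in range(20):
        e = (w[0] + w[1] * x) - y                        # residuals
        J = np.stack([np.ones_like(x), x], axis=1)       # Jacobian of the residuals w.r.t. w
        w = w + lm_step(J, e, mu)
    print(w)                                             # approximately [1.0, 2.0]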
46. Sparse Neural Networks With Large Learning Diversity.
- Author
-
Gripon, Vincent and Berrou, Claude
- Subjects
- *
ARTIFICIAL neural networks , *MACHINE learning , *NEURONS , *ASSOCIATIVE storage , *ARTIFICIAL intelligence , *CLASSIFICATION , *ERROR analysis in mathematics - Abstract
Coded recurrent neural networks with three levels of sparsity are introduced. The first level is related to the size of the messages, which are much smaller than the number of available neurons. The second one is provided by a particular coding rule, acting as a local constraint on the neural activity. The third one is a characteristic of the low final connection density of the network after the learning phase. Though the proposed network is very simple, being based on binary neurons and binary connections, it is able to learn a large number of messages and recall them, even in the presence of strong erasures. The performance of the network is assessed as a classifier and as an associative memory. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
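A compact Python sketch of a clique-based binary associative memory in the spirit of this entry is given below: neurons are grouped into clusters, a message is stored as a clique of binary connections (one neuron per cluster), and retrieval from a partially erased message uses per-cluster winner-take-all on connection counts. The cluster sizes and the single-pass retrieval rule are simplifying assumptions, not the exact decoding schedule of the paper.

    import numpy as np

    C, L = 4, 16                                          # clusters and neurons per cluster (illustrative)
    W = np.zeros((C * L, C * L), dtype=bool)              # binary connections

    def idx(cluster, symbol): return cluster * L + symbol

    def store(msg):                                       # msg: one symbol in range(L) per cluster
        nodes = [idx(c, s) for c, s in enumerate(msg)]
        for a in nodes:                                   # store the message as a clique
            for b in nodes:
                if a != b:
                    W[a, b] = True

    def recall(partial):                                  # partial: symbol or None (erased) per cluster
        known = [idx(c, s) for c, s in enumerate(partial) if s is not None]
        out = []
        for c, s in enumerate(partial):
            if s is not None:
                out.append(s)
            else:
                scores = [W[idx(c, v), known].sum() for v in range(L)]
                out.append(int(np.argmax(scores)))        # winner-take-all within the cluster
        return out

    store([3, 7, 1, 12])
    print(recall([3, None, 1, None]))                     # -> [3, 7, 1, 12] when cliques do not overlap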
47. Adaptive Neural Output Feedback Tracking Control for a Class of Uncertain Discrete-Time Nonlinear Systems.
- Author
-
Liu, Yan-Jun, Chen, C. L. Philip, Wen, Guo-Xing, and Tong, Shaocheng
- Subjects
- *
ARTIFICIAL neural networks , *NONLINEAR systems , *APPROXIMATION theory , *FEEDBACK control systems , *SIMULATION methods & models , *MIMO systems , *LYAPUNOV functions - Abstract
This brief studies adaptive neural output feedback tracking control of uncertain nonlinear multi-input–multi-output (MIMO) systems in discrete-time form. The considered MIMO systems are composed of n subsystems with couplings of inputs and states among the subsystems. In order to solve the noncausal problem and decouple the couplings, the systems need to be transformed into a predictor form. Higher order neural networks are utilized to approximate the desired controllers. By using Lyapunov analysis, it is proven that all the signals in the closed-loop system are semi-globally uniformly ultimately bounded and the output errors converge to a compact set. In contrast to the existing results, the advantage of the scheme is that the number of adjustable parameters is greatly reduced. The effectiveness of the scheme is verified by a simulation example. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
48. Analysis and Compensation of the Effects of Analog VLSI Arithmetic on the LMS Algorithm.
- Author
-
Carvajal, Gonzalo, Figueroa, Miguel, Sbarbaro, Daniel, and Valenzuela, Waldo
- Subjects
- *
ARTIFICIAL neural networks , *LEAST squares , *MATHEMATICAL models , *APPROXIMATION theory , *STOCHASTIC convergence , *ALGORITHMS , *METAL oxide semiconductors - Abstract
Analog very-large-scale integration (VLSI) implementations of neural networks can compute using a fraction of the size and power required by their digital counterparts. However, intrinsic limitations of analog hardware, such as device mismatch, charge leakage, and noise, reduce the accuracy of analog arithmetic circuits, degrading the performance of large-scale adaptive systems. In this paper, we present a detailed mathematical analysis that relates different parameters of the hardware limitations to specific effects on the convergence properties of linear perceptrons trained with the least-mean-square (LMS) algorithm. Using this analysis, we derive design guidelines and introduce simple on-chip calibration techniques to improve the accuracy of analog neural networks with a small cost in die area and power dissipation. We validate our analysis by evaluating the performance of a mixed-signal complementary metal-oxide-semiconductor implementation of a 32-input perceptron trained with LMS. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
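To give a flavor of the effect being analyzed, the Python sketch below trains a linear perceptron with LMS while injecting a crude model of analog non-idealities (per-synapse gain mismatch and offset) into the update path. The mismatch model and magnitudes are illustrative assumptions, not the device-level analysis of the paper.

    import numpy as np

    rng = np.random.default_rng(3)
    n, eta, steps = 8, 0.02, 5000
    w_true = rng.normal(0, 1, n)
    gain = 1 + 0.1 * rng.normal(0, 1, n)                 # multiplicative update-gain mismatch (assumed)
    offset = 1e-3 * rng.normal(0, 1, n)                  # additive offset in the update path (assumed)

    w = np.zeros(n)
    for _ in range(steps):
        x = rng.normal(0, 1, n)
        d = w_true @ x                                    # desired response
        e = d - w @ x
        w = w + gain * (eta * e * x) + offset             # ideal LMS would use w + eta*e*x
    print("residual weight error:", np.linalg.norm(w - w_true))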
49. LMI-Based Approach for Global Asymptotic Stability Analysis of Recurrent Neural Networks with Various Delays and Structures.
- Author
-
Wang, Zhanshan, Zhang, Huaguang, and Jiang, Bin
- Subjects
- *
LYAPUNOV stability , *ARTIFICIAL neural networks , *MATHEMATICAL inequalities , *NUMERICAL analysis , *LEBESGUE integral , *MEASURE theory , *ARTIFICIAL intelligence - Abstract
The global asymptotic stability problem is studied for a class of recurrent neural networks with distributed delays satisfying Lebesgue–Stieltjes measures, on the basis of linear matrix inequalities. The concerned network model includes many neural network models with various delays and structures as special cases, with the delays covering discrete delays and distributed delays, and the network structures encompassing neutral-type networks and high-order networks. Therefore, many new stability criteria for the above neural network models are also derived from the present stability analysis method. All the obtained stability results have similar matrix inequality structures and can be easily checked. Three numerical examples are used to show the effectiveness of the obtained results. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
50. Selectable and Unselectable Sets of Neurons in Recurrent Neural Networks With Saturated Piecewise Linear Transfer Function.
- Author
-
Zhang, Lei and Yi, Zhang
- Subjects
- *
NEURONS , *ARTIFICIAL neural networks , *TRANSFER functions , *LAGRANGIAN points , *SIMULATION methods & models , *EIGENVALUES , *MATHEMATICAL models , *EIGENFUNCTIONS - Abstract
The concepts of selectable and unselectable sets are proposed to describe some interesting dynamical properties of a class of recurrent neural networks (RNNs) with a saturated piecewise linear transfer function. A set of neurons is said to be selectable if it can be co-unsaturated at a stable equilibrium point by some external input. A set of neurons is said to be unselectable if it is not selectable, i.e., if such a set of neurons can never be co-unsaturated at any stable equilibrium point regardless of the input. The importance of these concepts is that they enable a new perspective on memory in RNNs. Necessary and sufficient conditions for the existence of selectable and unselectable sets of neurons are obtained. As an application, the problem of group selection is discussed using these concepts. It is shown that, under some conditions, each group is a selectable set, and each selectable set is contained in some group. Thus, groups are indicated by selectable sets of the RNNs and can be selected by external inputs. Simulations are carried out to further illustrate the theory. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
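The Python sketch below simulates a tiny RNN of this class to a stable equilibrium and reports which neurons end up unsaturated for a particular input. The saturation choice clip(x, 0, 1), the weight matrix, and the input are assumptions; the selectable/unselectable characterization in the paper is analytic, not obtained by simulation.

    import numpy as np

    # RNN with a saturated piecewise linear transfer function (one common choice):
    #   xdot = -x + W*sigma(x) + h,   sigma(x) = clip(x, 0, 1).
    # At a stable equilibrium, the set of co-unsaturated neurons depends on the external input h.
    sigma = lambda x: np.clip(x, 0.0, 1.0)
    W = np.array([[0.2, -0.5, 0.1],
                  [-0.4, 0.3, -0.2],
                  [0.1, -0.1, 0.2]])
    h = np.array([0.6, 1.5, -0.3])                        # illustrative external input

    x, dt = np.zeros(3), 1e-2
    for _ in range(20000):
        x += dt * (-x + W @ sigma(x) + h)
    unsaturated = [i for i in range(3) if 0.0 < x[i] < 1.0]
    print("equilibrium:", np.round(x, 3), "unsaturated neurons:", unsaturated)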