
Convergence analysis of sliding mode trajectories in multi-objective neural networks learning

Authors :
Costa, Marcelo Azevedo
Braga, Antonio Padua
de Menezes, Benjamin Rodrigues
Source :
Neural Networks. Sep 2012, Vol. 33, p. 21-31. 11p.
Publication Year :
2012

Abstract

The Pareto-optimality concept is used in this paper to represent a constrained set of solutions that trade off the two main objective functions involved in supervised neural network learning: data-set error and network complexity. The neural network is described as a dynamic system having error and complexity as its state variables, and learning is presented as a process of controlling a learning trajectory in the resulting state space. In order to control the trajectories, sliding mode dynamics is imposed on the network. It is shown that arbitrary learning trajectories can be achieved by maintaining the sliding mode gains within their convergence intervals, and formal proofs of the convergence conditions are presented. The concept of trajectory learning presented in this paper goes beyond the selection of a final state in the Pareto set, since that state can be reached through different trajectories, and states along a trajectory can be assessed individually against an additional objective function. [Copyright © Elsevier]
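
To make the trajectory-control idea concrete, the sketch below is an illustrative, simplified interpretation of the abstract (not the authors' exact algorithm): a small network is trained with gradient descent on the data-set error while a sliding-mode-style switching term, defined on a surface over the complexity state variable, steers the weight norm toward a prescribed level. The names c_ref (target complexity), rho (switching gain), and eta (learning rate) are assumptions introduced for illustration only.

# Minimal sketch, assuming the (error, complexity) state-space view described
# in the abstract; the switching gain `rho` plays the role of a sliding mode
# gain that must remain small enough for the trajectory to converge.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data
X = rng.uniform(-1.0, 1.0, size=(100, 1))
y = np.sin(3.0 * X) + 0.1 * rng.normal(size=X.shape)

# Tiny one-hidden-layer network
W1 = rng.normal(scale=0.5, size=(1, 10))
b1 = np.zeros((1, 10))
W2 = rng.normal(scale=0.5, size=(10, 1))
b2 = np.zeros((1, 1))

def forward(X):
    H = np.tanh(X @ W1 + b1)
    return H, H @ W2 + b2

eta, rho, c_ref = 0.05, 0.01, 2.0  # learning rate, switching gain, target complexity (assumed values)

for epoch in range(2000):
    H, out = forward(X)
    err = out - y
    E = float(np.mean(err ** 2))                       # data-set error (state variable 1)
    C = sum(float(np.sum(w ** 2)) for w in (W1, W2))   # complexity = squared weight norm (state variable 2)
    s = C - c_ref                                      # sliding surface over the complexity state

    # Back-propagate the data-set error objective
    g_out = 2.0 * err / len(X)
    gW2 = H.T @ g_out
    gb2 = g_out.sum(axis=0, keepdims=True)
    g_hidden = (g_out @ W2.T) * (1.0 - H ** 2)
    gW1 = X.T @ g_hidden
    gb1 = g_hidden.sum(axis=0, keepdims=True)

    # Sliding-mode-style update: the sign(s) term switches the complexity
    # gradient on or off around the surface s = 0, holding the trajectory
    # near the prescribed complexity level while E is minimized.
    W1 -= eta * gW1 + rho * np.sign(s) * 2.0 * W1
    W2 -= eta * gW2 + rho * np.sign(s) * 2.0 * W2
    b1 -= eta * gb1
    b2 -= eta * gb2

print(f"final error E={E:.4f}, complexity C={C:.3f}, surface s={s:+.3f}")

Varying c_ref traces out different final (error, complexity) states, which is the spirit of selecting among Pareto trade-offs via controlled trajectories rather than a single fixed regularization weight.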

Details

Language :
English
ISSN :
0893-6080
Volume :
33
Database :
Academic Search Index
Journal :
Neural Networks
Publication Type :
Academic Journal
Accession number :
77766872
Full Text :
https://doi.org/10.1016/j.neunet.2012.04.006