4 results
Search Results
2. Fast Modular Network Implementation for Support Vector Machines.
- Author
- Guang-Bin Huang, Mao, K. Z., Chee-Kheong Siew, and De-Shuang Huang
- Subjects
COMPUTER architecture, ALGORITHMS, ARTIFICIAL neural networks, MACHINE learning, PROBLEM solving, NUMERICAL analysis
- Abstract
Support vector machines (SVMs) have been used extensively. However, SVMs are known to face difficulty in solving large, complex problems because of the intensive computation involved in their training algorithms, whose cost is at least quadratic in the number of training examples. This paper proposes a new, simple, and efficient network architecture consisting of several SVMs, each trained on a small subregion of the whole data sampling space, together with the same number of simple neural quantizer modules, which inhibit the outputs of all the remote SVMs and allow only a single local SVM to fire (produce actual output) at any time. In principle, this region-computing-based modular network method can significantly reduce the learning time of SVM algorithms without sacrificing much generalization performance. Experiments on several real, large, complex benchmark problems demonstrate that our method can be significantly faster than single SVMs without losing much generalization performance. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
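The region-based modular idea in the abstract above can be sketched roughly as follows. This is a minimal illustration, not the paper's exact architecture: a k-means clustering step stands in for the neural quantizer modules, and all names, data, and parameter choices are assumptions for demonstration.

```python
# Sketch: partition the input space, train one small SVM per region,
# and route each query to its local SVM only, so that just one
# expert "fires" per input (the quantizer role in the abstract).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_moons
from sklearn.svm import SVC

X, y = make_moons(n_samples=1000, noise=0.25, random_state=0)

k = 4  # number of regions / local SVMs (illustrative choice)
router = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)

experts = {}
for r in range(k):
    mask = router.labels_ == r
    if np.unique(y[mask]).size < 2:
        # Degenerate region containing one class: constant predictor.
        c = y[mask][0]
        experts[r] = lambda Z, c=c: np.full(len(Z), c)
    else:
        experts[r] = SVC(kernel="rbf").fit(X[mask], y[mask]).predict

def predict(X_new):
    regions = router.predict(X_new)  # pick the single local expert
    out = np.empty(len(X_new), dtype=y.dtype)
    for r in np.unique(regions):
        sel = regions == r
        out[sel] = experts[r](X_new[sel])
    return out

train_acc = float((predict(X) == y).mean())
```

Because each SVM sees only its region's points, each of the k training problems is much smaller than the full one, which is where the claimed speedup over a single SVM comes from.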
3. Bagging and Boosting Negatively Correlated Neural Networks.
- Author
- Islam, Md. Monirul, Xin Yao, Nirjon, S. M. Shahriar, Islam, Muhammad Asiful, and Murase, Kazuyuki
- Subjects
ARTIFICIAL neural networks, STATISTICAL correlation, ALGORITHMS, NEURONS, MACHINE learning, PROBLEM solving
- Abstract
In this paper, we propose two cooperative ensemble learning algorithms, i.e., NegBagg and NegBoost, for designing neural network (NN) ensembles. The proposed algorithms incrementally train different individual NNs in an ensemble using the negative correlation learning algorithm. Bagging and boosting algorithms are used in NegBagg and NegBoost, respectively, to create different training sets for different NNs in the ensemble. The idea behind using negative correlation learning in conjunction with the bagging/boosting algorithm is to facilitate interaction and cooperation among NNs during their training. Both NegBagg and NegBoost use a constructive approach to automatically determine the number of hidden neurons for NNs. NegBoost also uses the constructive approach to automatically determine the number of NNs for the ensemble. The two algorithms have been tested on a number of benchmark problems in machine learning and NNs, including Australian credit card assessment, breast cancer, diabetes, glass, heart disease, letter recognition, satellite, soybean, and waveform problems. The experimental results show that NegBagg and NegBoost require a small number of training epochs to produce compact NN ensembles with good generalization. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
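The combination of bagging with negative correlation learning described above can be sketched in miniature. This is a toy illustration under stated assumptions: simple linear regressors replace the paper's constructively grown neural networks, and the standard negative-correlation gradient term $-\lambda(f_i - \bar{f})$ is applied on each member's bootstrap sample; all values are invented for demonstration.

```python
# Sketch: an ensemble of M linear models, each trained on its own
# bootstrap sample (bagging) with the negative correlation learning
# penalty, which pushes each member's output away from the ensemble
# mean to encourage diverse, cooperating members.
import numpy as np

rng = np.random.default_rng(0)
n, d, M = 200, 5, 5
lam, lr, epochs = 0.5, 0.05, 200

w_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = X @ w_true + 0.1 * rng.normal(size=n)

W = 0.01 * rng.normal(size=(M, d))  # one weight vector per member
boots = [rng.integers(0, n, size=n) for _ in range(M)]  # bagging

for _ in range(epochs):
    F = X @ W.T            # every member's output on every point
    f_bar = F.mean(axis=1)  # ensemble output
    for i in range(M):
        idx = boots[i]
        # NCL error signal: (f_i - y) - lam * (f_i - f_bar),
        # evaluated only on member i's bootstrap sample.
        err = (F[idx, i] - y[idx]) - lam * (F[idx, i] - f_bar[idx])
        W[i] -= lr * X[idx].T @ err / len(idx)

pred = (X @ W.T).mean(axis=1)
mse = float(np.mean((pred - y) ** 2))
```

With lam = 0 this reduces to plain bagging of independently trained members; lam > 0 couples the members' updates through the ensemble mean, which is the "interaction and cooperation during training" the abstract refers to.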
4. Distributed Min–Max Learning Scheme for Neural Networks With Applications to High-Dimensional Classification.
- Author
- Raghavan, Krishnan, Garg, Shweta, Jagannathan, Sarangapani, and Samaranayake, V. A.
- Subjects
PROBLEM solving, COST functions, ALGORITHMS, CLASSIFICATION, ARTIFICIAL neural networks, DISTRIBUTED algorithms
- Abstract
In this article, a novel learning methodology is introduced for the problem of classification in the context of high-dimensional data. In particular, the challenges introduced by high-dimensional data sets are addressed by formulating an $L_{1}$-regularized zero-sum game in which optimal sparsity is estimated through a two-player game between the penalty coefficients/sparsity parameters and the deep neural network weights. To solve this game, a distributed learning methodology is proposed in which additional variables are utilized to derive layerwise cost functions. Finally, an alternating minimization approach is developed to solve the problem, where the Nash solution provides optimal sparsity and compensation through the classifier. The proposed learning approach is implemented in a parallel and distributed environment through a novel computational algorithm. The efficiency of the approach is demonstrated both theoretically and empirically on nine data sets. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
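The min-max flavor of the abstract above can be illustrated on a toy problem. This sketch replaces the paper's layerwise deep-network formulation with a single sparse linear model: the weights take a proximal descent step on an $L_{1}$-penalized loss while per-coordinate penalty coefficients take a bounded ascent step on the same objective, alternating between the two players. All names, bounds, and step sizes are assumptions for demonstration.

```python
# Sketch: alternating minimization for a two-player game where
# weights w minimize an L1-penalized least-squares loss and the
# penalty coefficients lam (clipped to [0, lam_max]) ascend on it.
import numpy as np

rng = np.random.default_rng(1)
n, d = 100, 20
w_true = np.zeros(d)
w_true[:3] = [2.0, -1.5, 1.0]  # sparse ground truth
X = rng.normal(size=(n, d))
y = X @ w_true + 0.05 * rng.normal(size=n)

w = np.zeros(d)
lam = np.full(d, 0.05)
lam_max, lr_w, lr_l = 0.5, 0.1, 0.01

def soft(v, t):
    # Proximal operator of t * |.| (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

for _ in range(500):
    grad = X.T @ (X @ w - y) / n
    w = soft(w - lr_w * grad, lr_w * lam)  # min player: proximal step
    lam = np.clip(lam + lr_l * np.abs(w), 0.0, lam_max)  # max player

support = np.flatnonzero(np.abs(w) > 1e-3)
```

The ascent step raises the penalty only where weights are active, so the equilibrium balances fit against sparsity per coordinate, a scalar stand-in for the sparsity/weight game the abstract describes.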
Discovery Service for Jio Institute Digital Library