41 results for "Kernel perceptron"
Search Results
2. Hadamard powers and kernel perceptrons.
- Author: Damm, Tobias and Dietrich, Nicolas
- Subjects: Perceptrons; Boolean matrices; Polynomials
- Abstract
We study a relation between Hadamard powers and polynomial kernel perceptrons. The rank of Hadamard powers for the special case of a Boolean matrix and for the generic case of a real matrix is computed explicitly. These results are interpreted in terms of the classification capacities of perceptrons.
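As a quick, hedged illustration of the objects involved (a numerical sketch, not the paper's theorems; the matrices and powers are arbitrary choices):

```python
import numpy as np

# Hadamard (entrywise) power: (A ** k)[i, j] = A[i, j] ** k.
# For a Boolean (0/1) matrix, entrywise powers leave every entry unchanged,
# while for a generic real matrix the rank typically grows with k, which is
# what ties Hadamard powers to polynomial-kernel feature maps.
rng = np.random.default_rng(0)
B = rng.integers(0, 2, size=(6, 6)).astype(float)  # Boolean matrix
A = rng.standard_normal((6, 6))                    # generic real matrix

for k in range(1, 5):
    print(k,
          np.linalg.matrix_rank(B ** k),   # stays rank(B): 0/1 entries are fixed points
          np.linalg.matrix_rank(A ** k))   # generically grows with k
```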
- Published
- 2023
- Full Text
- View/download PDF
3. Kernel Perceptron Feature Selection Based on Sparse Bayesian Probabilistic Relevance Vector Machine Classification for Disease Diagnosis with Healthcare Data
- Author: Marimuthu C N and Arun G
- Subjects: Kernel perceptron; Bayesian probability; Probabilistic logic; Feature selection; Relevance vector machine; Machine learning; Healthcare data
- Published
- 2020
4. Arm Motion Capture and Recognition Algorithm Based on MEMS Sensor Networks and KPA
- Author: ZeYu Wang, Guoxing Yi, Lei Hu, and Zhihui Cao
- Subjects: Kernel perceptron; Perceptron; Motion capture; Support vector machine; Inertial measurement unit; Wireless sensor network; Computer vision
- Abstract
With the development of artificial intelligence, human-computer interaction technology has become increasingly popular. Motion capture and recognition technology based on sensor networks and machine learning algorithms has gradually drawn wide attention, and extensive work has been done. This article introduces an arm motion capture system based on MEMS sensor networks that takes a micro inertial measurement unit as its core and uses the Kernel Perceptron Algorithm (KPA) for motion classification and recognition. The algorithm combines the advantages of the Support Vector Machine (SVM) and the perceptron algorithm to achieve faster model training while maintaining high recognition accuracy. Extensive experiments show that arm motions can be accurately captured by the MEMS sensor networks, and that the KPA offers good motion recognition with faster model training than SVM.
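Because most entries on this page build on the same primitive, here is a minimal, hedged sketch of a generic binary kernel perceptron in dual form (the textbook algorithm, not the specific KPA variant above; the RBF kernel and its bandwidth are placeholder choices):

```python
import numpy as np

def rbf(X, Z, gamma=0.5):
    """RBF kernel matrix: k(x, z) = exp(-gamma * ||x - z||^2)."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_perceptron_fit(X, y, epochs=10):
    """y in {-1, +1}. Returns one dual coefficient per training example."""
    K = rbf(X, X)
    alpha = np.zeros(len(X))
    for _ in range(epochs):
        for i in range(len(X)):
            # f(x_i) = sum_j alpha_j * y_j * k(x_j, x_i); update on mistakes only
            if y[i] * ((alpha * y) @ K[:, i]) <= 0:
                alpha[i] += 1.0
    return alpha

def kernel_perceptron_predict(X_train, y, alpha, X_new):
    return np.sign((alpha * y) @ rbf(X_train, X_new))
```

Examples with nonzero alpha play the role of support vectors; the "budget" entries later on this page are all about keeping that set small.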
- Published
- 2021
5. Machine learning based smart steering for wireless mesh networks
- Author: Ozgur Gurbuz, Bulut Kuskonmaz, and Huseyin Ozkan
- Subjects: Kernel perceptron; Wireless mesh network; Online machine learning; Support vector machine; Online algorithm; Computer Networks and Communications
- Abstract
Steering actions in wireless mesh networks refer to requesting clients to change their access points (APs) to better exploit the mesh network and achieve higher-quality connections. However, steering actions, especially for sticky clients, do not always produce the intended outcome. In this work, we address this issue from a machine learning perspective, formulating a classification problem in both the batch (SVM) and online (kernel perceptron) settings based on various network features. We train classifiers to learn the nonlinear regions of correct decisions so as to maximize the overall success probability of steering actions. In particular, the presented online kernel perceptron classifier (1) learns sequentially at the cloud from the entire data of multiple mesh networks and (2) operates at APs for steering; both run in real time. The presented algorithm is completely data-driven, adaptive, optimal in its steering, and real-time, hence the name Online Machine Learning for Smart Steering. In our experiments, the batch algorithm achieves at least 95% classification accuracy in identifying the conditions for successful steering. The online algorithm, in turn, approximates the baseline accuracy to within a small margin with relatively negligible space and computational complexity, allowing real-time steering.
- Published
- 2019
6. Smart Steering with Machine Learning for Wireless Mesh Networks
- Author: Ozgur Gurbuz, Huseyin Ozkan, and Bulut Kuskonmaz
- Subjects: Support vector machine; Kernel perceptron; Wireless mesh network; Wireless network; Margin (machine learning); Network service; Online algorithm
- Abstract
In wireless networks, clients can be steered from one access point (AP) to another for a better internet connection. Although client steering has great potential to improve overall network service and user experience, steering actions may not always yield the desired result, and a client may remain persistently connected to its current AP. This is referred to as the sticky client problem, which prevents the intended improvement in the network. In this work, to address the sticky client problem, a Support Vector Machine (SVM) as a batch method and a kernel perceptron as an online method are examined based on various network features. Nonlinear classifiers of correct steering actions are trained to maximize the accuracy of steering actions. In particular, the online kernel perceptron performs sequential learning at APs using cloud data to decide about steering actions in real time. This algorithm is data-driven and able to provide optimal steering in real time. In our experiments, we observed that our batch approach identifies successful steering actions with 95% accuracy. Our online algorithm, on the other hand, approximates the batch performance to within a small margin while allowing real-time steering with significantly lower computational complexity.
- Published
- 2020
7. Autonomous Navigation in Unknown Environments using Sparse Kernel-based Occupancy Mapping
- Author: Nikolay Atanasov, Michael Yip, Thai P. Duong, and Nikhil Das
- Subjects: Kernel perceptron; Occupancy; Autonomous robot; Obstacle; Decision boundary; Configuration space; Robotics; Computer vision; Machine learning
- Abstract
This paper focuses on real-time occupancy mapping and collision checking onboard an autonomous robot navigating in an unknown environment. We propose a new map representation, in which occupied and free space are separated by the decision boundary of a kernel perceptron classifier. We develop an online training algorithm that maintains a very sparse set of support vectors to represent obstacle boundaries in configuration space. We also derive conditions that allow complete (without sampling) collision-checking for piecewise-linear and piecewise-polynomial robot trajectories. We demonstrate the effectiveness of our mapping and collision checking algorithms for autonomous navigation of an Ackermann-drive robot in unknown environments. (Accepted to ICRA 2020.)
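A hedged illustration of the map representation described above: occupancy as the sign of a kernel perceptron score over a sparse support set, with a naive sampled check along a segment. The paper itself derives sampling-free checks; this stand-in, including the support points, weights, and bandwidth, is invented for illustration.

```python
import numpy as np

def rbf(X, Z, gamma=2.0):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def occupied(points, sv, sv_coef):
    """Score > 0 is read as 'occupied'; sv are sparse support points."""
    return rbf(points, sv) @ sv_coef > 0

def segment_collides(p, q, sv, sv_coef, n_samples=50):
    """Naive sampled check along segment p -> q (the paper replaces this
    sampling with exact conditions for piecewise-polynomial trajectories)."""
    t = np.linspace(0.0, 1.0, n_samples)[:, None]
    return occupied((1 - t) * p + t * q, sv, sv_coef).any()

sv = np.array([[1.0, 1.0], [3.0, 0.5]])   # hypothetical obstacle/free exemplars
sv_coef = np.array([1.0, -1.0])           # signed dual weights
print(segment_collides(np.zeros(2), np.array([2.0, 2.0]), sv, sv_coef))
```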
- Published
- 2020
- Full Text
- View/download PDF
8. Learning intersections of halfspaces with a margin
- Author: Klivans, Adam R. and Servedio, Rocco A.
- Subjects: Algorithms; Machine learning; Machine theory; Data mining
- Abstract
We give a new algorithm for learning intersections of halfspaces with a margin, i.e. under the assumption that no example lies too close to any separating hyperplane. Our algorithm combines random projection techniques for dimensionality reduction, polynomial threshold function constructions, and kernel methods. The algorithm is fast and simple. It learns a broader class of functions and achieves an exponential runtime improvement compared with previous work on learning intersections of halfspaces with a margin.
- Published
- 2008
- Full Text
- View/download PDF
9. Kernel perceptron algorithm for sinusitis classification
- Author: Z. Rustam, J. Pandelaki, and Sri Hartini
- Subjects: Kernel perceptron; Pattern recognition; Sinusitis; Computer Science Applications
- Abstract
Sinusitis is one of the most commonly diagnosed diseases in the world. Its diagnosis is usually based on clinical signs and symptoms, which has led to the development and use of many machine learning methods to provide a better diagnosis. This research therefore applied a kernel perceptron method to a sinusitis dataset, consisting of 102 acute and 98 chronic samples, obtained from Cipto Mangunkusumo Hospital in Indonesia. It used RBF and polynomial kernel functions for several values of k in k-fold cross-validation and compared the results in accuracy, sensitivity, precision, specificity, and F1-score. The experiments showed that the RBF kernel parameter σ = 0.0001 achieved excellent performance in every k-fold setting, with the best performance obtained using 10-fold cross-validation. The polynomial degree, meanwhile, did not affect kernel perceptron performance, although 7-fold cross-validation can be considered to obtain better performance from the polynomial-kernel perceptron.
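A sketch of the evaluation protocol described, i.e. k-fold cross-validation around an RBF kernel perceptron; the synthetic data, bandwidth, and epoch count are placeholders (the paper's σ = 0.0001 is specific to its data scaling), and the tiny trainer is a stand-in, not the authors' code.

```python
import numpy as np
from sklearn.model_selection import KFold

def rbf(X, Z, sigma=1.0):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def fit(X, y, epochs=20):
    K, alpha = rbf(X, X), np.zeros(len(X))
    for _ in range(epochs):
        for i in range(len(X)):
            if y[i] * ((alpha * y) @ K[:, i]) <= 0:
                alpha[i] += 1.0
    return alpha

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 8))
y = np.sign(X[:, 0] + 0.1 * rng.standard_normal(200))   # toy labels

for k in (7, 10):                      # fold counts compared in the paper
    accs = []
    for tr, te in KFold(n_splits=k, shuffle=True, random_state=0).split(X):
        alpha = fit(X[tr], y[tr])
        pred = np.sign((alpha * y[tr]) @ rbf(X[tr], X[te]))
        accs.append((pred == y[te]).mean())
    print(k, "folds: mean accuracy", round(float(np.mean(accs)), 3))
```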
- Published
- 2020
10. Experiments with Adabag in Biology Classification Tasks
- Author: María Pérez-Ortiz, E. Cernadas, and Manuel Fernández-Delgado
- Subjects: Kernel perceptron; Merluccius merluccius; Fecundity; Pattern recognition; Kernel (statistics); Classifier; Software
- Abstract
The assessment of fecundity is fundamental in the study of biology and in defining the management of sustainable fisheries. Stereometry is an accurate method for estimating fecundity from histological images. This chapter shows some histological images of the fish species Merluccius merluccius. The direct kernel perceptron (DKP) is a very simple and fast kernel-based classifier whose trainable parameters are calculated directly, without any iterative training, using an analytical closed-form expression that involves only the training patterns and the classes to which they belong. An accurate fish fecundity estimation must consider only mature oocytes, which must be reliably classified, according to their stage of development, by experienced personnel using histological images. The fish oocytes were manually drawn and labelled with the development stage by expert technicians of the Institute of Marine Research CSIC using the Govocitos software. Adaboost.M1 in Weka (ABW) performs much worse than the Adabag version in all the species and experiments.
- Published
- 2018
11. Extending instance-based and linear models
- Author: Eibe Frank, Christopher J. Pal, Mark Hall, and Ian H. Witten
- Subjects: Support vector machine; Kernel method; Kernel perceptron; Polynomial kernel; Linear model; Principal component regression; Perceptron
- Abstract
We begin by revisiting the basic instance-based learning method of nearest-neighbor classification and considering how it can be made more robust and storage efficient by generalizing both exemplars and distance functions. We then discuss two well-known approaches for generalizing linear models that go beyond modeling linear relationships between the inputs and the outputs. The first is based on the so-called kernel trick, which implicitly creates a high-dimensional feature space and models linear relationships in this extended space. We discuss support vector machines for classification and regression, kernel ridge regression, and kernel perceptrons. The second approach is based on applying simple linear models in a network structure that includes nonlinear transformations. This yields neural networks, and we discuss the classical multilayer perceptron. The final part of the chapter discusses an alternative method for tackling learning problems with complex relationships: building linear models that are local in the sense that they only apply to a small part of the input space. We consider model trees, which are decision trees with linear regression models at the leaf nodes, and locally weighted linear regression, which combines instance-based learning and linear regression.
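Of the kernelized linear models this chapter covers, kernel ridge regression has the most compact closed form; a minimal sketch on synthetic data (λ and the bandwidth are arbitrary choices):

```python
import numpy as np

def rbf(X, Z, gamma=0.5):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Kernel ridge regression: alpha = (K + lambda * I)^{-1} y,
# prediction f(x) = sum_i alpha_i * k(x_i, x).
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(80, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(80)

lam = 0.1
alpha = np.linalg.solve(rbf(X, X) + lam * np.eye(len(X)), y)

X_test = np.linspace(-3, 3, 5)[:, None]
print(rbf(X_test, X) @ alpha)   # smoothed estimates of sin at the test points
```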
- Published
- 2017
12. A study of visual behavior of multidimensional scaling for kernel perceptron algorithm
- Author: Kuo-Shong Wang, Shih-Hsing Chang, Che-Chang Hsu, and Hung-Yuan Chung
- Subjects: Kernel perceptron; Feature vector; Hilbert space; Multimodal distribution; Pattern recognition; Multidimensional scaling; Visualization; Classifier
- Abstract
The class imbalance problem occurs when a classifier must detect a rare but important class. The purpose of this paper is to study whether the sources of error are not only the imbalance but also other factors acting in combination to produce these misclassifications. The theoretical difficulties in purely predictive settings arise from the lack of visualization. Therefore, for kernel classifiers we propose a link with a kernel version of multidimensional scaling in high-dimensional feature space. The transformed version of the features specifically discloses the intrinsic structure of the Hilbert space and is then used as input to a learning system; in the example, this prediction method is based on an SVM-rebalance methodology. The graphical representations indicate the effects of masking, skewed, and multimodal distributions, which are also responsible for poor performance. By studying the properties of the misclassifications, we can further develop ways to improve on them.
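A hedged sketch of the visualization machinery referred to above: classical multidimensional scaling driven by a kernel matrix, obtained by double-centering the kernel (Gram) matrix and reading coordinates off its top eigenvectors. This is the standard kernel-MDS/kernel-PCA construction, not the authors' exact pipeline.

```python
import numpy as np

def kernel_mds(K, dims=2):
    """Embed n points in `dims` coordinates from an n x n kernel matrix:
    double-center K, then scale the top eigenvectors by sqrt(eigenvalue)."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = H @ K @ H                            # centered Gram matrix
    w, V = np.linalg.eigh(B)                 # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:dims]
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

X = np.random.default_rng(0).standard_normal((30, 5))
coords = kernel_mds(X @ X.T)                 # linear kernel as an example
print(coords.shape)                          # (30, 2) coordinates for plotting
```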
- Published
- 2014
13. Formalized Generalization Bounds for Perceptron-Like Algorithms
- Author: Kelby, Robin J.
- Subjects: Computer Science; Artificial Intelligence; Kernel Perceptron; Budget Kernel Perceptron; Software Verification; Coq; Machine learning; Generalization error
- Abstract
Machine learning algorithms are integrated into many aspects of daily life. However, research into the correctness and security of these important algorithms has lagged behind experimental results and improvements. My research seeks to add to our theoretical understanding of the Perceptron family of algorithms, which includes the Kernel Perceptron, Budget Kernel Perceptron, and Description Kernel Perceptron algorithms. In this thesis, I describe three variants of the Kernel Perceptron algorithm and provide both proof and performance results for verified implementations of these algorithms written in the Coq Proof Assistant. This research employs generalization error, which bounds how poorly a model may perform on unseen testing data, as a guarantee of performance, with proofs verified in Coq. The implementations are also extracted to the functional language Haskell to evaluate their generalization error and performance on real and synthetic data sets.
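For orientation, the flavor of bound being verified (a standard Hoeffding-style statement, not the thesis's specific theorem): for a fixed hypothesis $h$ evaluated on $n$ i.i.d. samples, with probability at least $1 - \delta$,

```latex
\varepsilon(h) \;\le\; \hat{\varepsilon}(h) \;+\; \sqrt{\frac{1}{2n}\,\ln\frac{1}{\delta}}
```

where $\varepsilon$ is the true error and $\hat{\varepsilon}$ the empirical error; the thesis machine-checks guarantees of this kind for the kernel perceptron variants in Coq.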
- Published
- 2020
14. Learning intersections of halfspaces with a margin
- Author: Rocco A. Servedio and Adam R. Klivans
- Subjects: Polynomial; Random projection; Intersections of halfspaces; Polynomial threshold function; Margin (machine learning); Kernel perceptron; Dimensionality reduction; Computational learning theory; Kernel method; Hyperplane
- Abstract
We give a new algorithm for learning intersections of halfspaces with a margin, i.e. under the assumption that no example lies too close to any separating hyperplane. Our algorithm combines random projection techniques for dimensionality reduction, polynomial threshold function constructions, and kernel methods. The algorithm is fast and simple. It learns a broader class of functions and achieves an exponential runtime improvement compared with previous work on learning intersections of halfspaces with a margin.
- Published
- 2008
15. Scalable classification for large dynamic networks
- Author: Yibo Yao and Lawrence B. Holder
- Subjects: Graph kernel; Kernel perceptron; Feature extraction; Pattern recognition; Support vector machine; Kernel method; Discriminative model; Entropy (information theory)
- Abstract
We examine the problem of node classification in large-scale and dynamically changing graphs. An entropy-based subgraph extraction method has been developed for extracting the subgraphs surrounding the nodes to be classified. We introduce an online version of an existing graph kernel to incrementally compute the kernel matrix for an unbounded stream of these extracted subgraphs. After obtaining the kernel values, we adopt a kernel perceptron to learn a discriminative classifier and predict the class labels of the target nodes from their corresponding subgraphs. We demonstrate the advantages of our learning techniques through empirical evaluations on two real-world graph datasets.
- Published
- 2015
16. Kernel Affine Projection Algorithm
- Author: Kazuhiko Ozeki
- Subjects: Adaptive filter; Kernel method; Kernel perceptron; Kernel (statistics); Perceptron; Kernel principal component analysis
- Abstract
The unknown system to be identified by an adaptive filter is usually assumed to be linear, and on that assumption the unknown system is modeled by a linear filter. In reality, however, there are cases where a linear filter is inadequate. Despite this, using a very general nonlinear filter is not a good idea: constructing an adaptation algorithm for a general nonlinear filter is not simple, and an adaptive model with too many free parameters is undesirable from a machine learning point of view, because such a model generalizes poorly. In this chapter, we review work that extends the APA with the kernel trick so that it applies to the identification of nonlinear systems. We start with the kernel perceptron as a simple example of how the kernel trick extends the perceptron to learn a nonlinear discriminant function without losing the simplicity of the original linear structure. Then, noting that the kernel trick replaces the inner product with the kernel function, we extend the APA to the kernel APA, which turns out to have a structure similar to the resource-allocating network. In the perceptron, the training data set is finite and fixed; in the APA, by contrast, the set of training data, i.e., the set of regressors, is infinite. To keep the set of regressors actually used in adaptation finite, we sieve the regressors with a novelty criterion that checks whether a newly arrived regressor is informative enough for adaptation.
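A hedged sketch of the dictionary-sparsification step mentioned at the end: a simple novelty criterion that admits a new regressor only if its feature-space distance to every stored regressor exceeds a threshold (one common form of such a criterion; the chapter's exact test may differ, and the threshold here is arbitrary).

```python
import numpy as np

def rbf(x, z, gamma=0.5):
    return np.exp(-gamma * np.sum((x - z) ** 2))

def is_novel(x, dictionary, threshold=0.3):
    """||phi(x) - phi(d)||^2 = k(x,x) - 2 k(x,d) + k(d,d),
    which equals 2 - 2 k(x,d) for a normalized RBF kernel."""
    return all(2.0 - 2.0 * rbf(x, d) > threshold for d in dictionary)

dictionary = [np.array([0.0, 0.0])]
for x in (np.array([0.1, 0.0]), np.array([2.0, 2.0])):
    if is_novel(x, dictionary):
        dictionary.append(x)    # keep only informative regressors

print(len(dictionary))          # the nearby point is rejected, the far one kept
```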
- Published
- 2015
17. Fuzzy kernel perceptron
- Author: Chu-Song Chen and Jiun-Hung Chen
- Subjects: Kernel perceptron; Feature vector; Pattern recognition; Perceptron; Fuzzy logic; Support vector machine; Kernel method; Hyperplane
- Abstract
A new learning method, the fuzzy kernel perceptron (FKP), in which the fuzzy perceptron (FP) and the Mercer kernels are incorporated, is proposed in this paper. The proposed method first maps the input data into a high-dimensional feature space using some implicit mapping functions. Then, the FP is adopted to find a linear separating hyperplane in the high-dimensional feature space. Compared with the FP, the FKP is more suitable for solving the linearly nonseparable problems. In addition, it is also more efficient than the kernel perceptron (KP). Experimental results show that the FKP has better classification performance than FP, KP, and the support vector machine.
- Published
- 2002
18. Kernel ridge regression for supervised classification
- Author: S. Y. Kung
- Subjects: Multi-label classification; Kernel perceptron; Multiclass classification; Kernel method; Lasso (statistics); Polynomial kernel; Radial basis function kernel; Kernel Fisher discriminant analysis
- Published
- 2014
19. Offline Signature Verification Using Support Vector Machine
- Author: Deepika C. Shet and C. Kruthi
- Subjects: Support vector machine; Kernel perceptron; Digital signature; Histogram; Feature extraction; Centroid; Pattern recognition; Grayscale; Edge detection
- Abstract
This paper aims at developing a support vector machine for identity verification of offline signatures based on feature values in a database. A set of signature samples is collected from individuals and scanned with a grayscale scanner. The scanned signature images are then subjected to a number of image enhancement operations, such as binarization, complementation, filtering, thinning, and edge detection. From these pre-processed signatures, features such as the centroid, centre of gravity, number of loops, horizontal and vertical profiles, and normalized area are extracted and stored in a database. The values from the database are fed to the support vector machine, which draws a hyperplane and classifies each signature as original or forged based on a particular feature value. The developed SVM was successfully tested against 336 signature samples, with a classification error rate below 7.16%, which is found to be convincing.
- Published
- 2014
20. An Empirical Evaluation of the Fuzzy Kernel Perceptron
- Author: Gavin C. Cawley
- Subjects: Kernel perceptron; Artificial neural network; Fuzzy set; Pattern recognition; Perceptron; Fuzzy logic; Support vector machine; Kernel method
- Abstract
J.-H. Chen and C.-S. Chen have recently proposed a nonlinear variant of Keller and Hunt's fuzzy perceptron algorithm, based on the now familiar "kernel trick." In this letter, we demonstrate experimentally that J.-H. Chen and C.-S. Chen's assertion that the fuzzy kernel perceptron (FKP) outperforms the support vector machine (SVM) cannot be sustained. A more thorough model comparison exercise, based on a much wider range of benchmark data sets, shows that the FKP algorithm is not competitive with the SVM.
- Published
- 2007
21. Direct Kernel Perceptron (DKP): ultra-fast kernel ELM-based classification with non-iterative closed-form weight calculation
- Author: E. Cernadas, José Neves, Jorge Ribeiro, Manuel Fernández-Delgado, and Senén Barro
- Subjects: Kernel perceptron; Support vector machine; Discriminant analysis; Pattern recognition; Linear classifier; Linear discriminant analysis; Perceptron; Classification; AdaBoost; Extreme learning machine
- Abstract
The Direct Kernel Perceptron (DKP) (Fernandez-Delgado et al., 2010) is a very simple and fast kernel-based classifier, related to the Support Vector Machine (SVM) and to the Extreme Learning Machine (ELM) (Huang, Wang, & Lan, 2011), whose α-coefficients are calculated directly, without any iterative training, using an analytical closed-form expression which involves only the training patterns. The DKP, which is inspired by the Direct Parallel Perceptron (Auer et al., 2008), uses a Gaussian kernel and a linear classifier (perceptron). The weight vector of this classifier in the feature space minimizes an error measure combining the training error and the hyperplane margin, without any tunable regularization parameter. This weight vector can be translated, using a variable change, to the α-coefficients, and both are determined without iterative calculations. We calculate solutions using several error functions, achieving the best trade-off between accuracy and efficiency with the linear function. These solutions for the α-coefficients can be considered alternatives to the ELM with a new physical meaning in terms of error and margin: in fact, the linear and quadratic DKP are special cases of the two-class ELM when the regularization parameter C takes the values C = 0 and C = ∞. The linear DKP is extremely efficient and much faster (over a vast collection of 42 benchmark and real-life data sets) than 12 very popular and accurate classifiers, including SVM, Multi-Layer Perceptron, Adaboost, Random Forest and Bagging of RPART decision trees, Linear Discriminant Analysis, K-Nearest Neighbors, ELM, Probabilistic Neural Networks, Radial Basis Function neural networks, and Generalized ART. Besides, despite its simplicity and extreme efficiency, DKP achieves higher accuracies than 7 of the 12 classifiers, exhibiting small differences with respect to the best ones (SVM, ELM, Adaboost and Random Forest), which are much slower. Thus, the DKP provides an easy and fast way to achieve classification accuracies not far from the best one for a given problem. The C and Matlab code of DKP is freely available.
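The ELM connection gives a compact closed form; as a hedged sketch, here is the standard regularized kernel ELM solution that the abstract relates to DKP (this is not DKP's own error-margin expression, which the abstract does not reproduce):

```python
import numpy as np

def rbf(X, Z, gamma=0.5):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_elm_fit(X, y, C=1.0):
    """Two-class kernel ELM: alpha = (I / C + K)^{-1} y, computed in one
    shot with no iterative training. The abstract identifies the linear
    and quadratic DKP with the C = 0 and C = infinity limits."""
    return np.linalg.solve(np.eye(len(X)) / C + rbf(X, X), y)

def kernel_elm_predict(X_train, alpha, X_new):
    return np.sign(rbf(X_new, X_train) @ alpha)
```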
- Published
- 2013
22. Incremental Learning on a Budget and Its Application to Power Electronics
- Author: Akinari Maeda, Kiyotaka Nakano, Yusuke Kondo, Koichiro Yamauchi, and Akihisa Kato
- Subjects: Kernel perceptron; Incremental learning; Kernel method; Kernel (statistics); Bounded function; Upper and lower bounds; Power electronics; Machine learning
- Abstract
In this paper, we present an incremental learning method on a budget for embedded systems and discuss its application to two power systems: a micro-converter for photovoltaics and a step-down DC-DC converter. The learning method is a variation of the general regression neural network that is able to continue incremental learning on a bounded support set. The method basically learns new instances by adding new kernels; however, when the number of kernels reaches a predefined upper bound, it selects the most effective learning option from several choices: replacing the most ineffective kernel with the new kernel, modifying the parameters of existing kernels, or ignoring the new instance. The proposed method is compared with other similar budget learning methods based on the kernel perceptron. Two example applications of the proposed method in power electronics are demonstrated, showing that the proposed system learns the properties of the controlled objects during service and realizes quick control.
- Published
- 2013
23. A kernel fused perceptron for the online classification of large-scale data
- Author: Huijun He, Wenqiang Zhang, and Mingmin Chi
- Subjects: Kernel perceptron; Kernel method; Kernel (statistics); Perceptron; Data mining
- Abstract
To solve online nonlinear problems, a set of misclassified observed examples (the support set) usually has to be stored in memory to compute kernel values. As the amount of training data grows large, computing all the kernel values becomes expensive and can lead to out-of-memory problems. In this paper, a fusion strategy is proposed to compress the size of the support set for online learning, where the fused kernel best represents the current instance and its nearest neighbor in the previous support set. The proposed algorithm is based on a Perceptron-like method and is thus called Fuseptron. Unlike the most recently proposed nonlinear online algorithms, Fuseptron's internal memory can be bounded, and its mistake bound is also derived. Experiments on one synthetic and four real large-scale datasets validate the effectiveness and efficiency of Fuseptron compared to state-of-the-art algorithms.
- Published
- 2012
24. Fast weight calculation for kernel-based perceptron in two-class classification problems
- Author: Jorge Ribeiro, Senén Barro, Manuel Fernández-Delgado, and E. Cernadas
- Subjects: Kernel perceptron; Artificial neural network; Pattern recognition; Perceptron; Linear discriminant analysis; Support vector machine; Kernel (statistics); Gaussian process
- Abstract
We propose a method, called the Direct Kernel Perceptron (DKP), to directly calculate the weights of a single perceptron using a closed-form expression that requires no training stage. The weights minimize a performance measure that simultaneously takes into account the training error and the classification margin of the perceptron. The ability to learn non-linearly separable problems is provided by a kernel mapping between the input and the hidden space. Using Gaussian kernels, DKP achieves better results than the standard Support Vector Machine (SVM) and Linear Discriminant Analysis (LDA) for a wide variety of benchmark two-class data sets. The computational cost of DKP increases linearly with the dimension of the input space and is much lower than that of SVM.
- Published
- 2010
25. Learning to rank with a novel kernel perceptron method
- Author: Xiaotong Lin, Haixun Wang, and Xue-wen Chen
- Subjects: Learning to rank; Kernel perceptron; Online machine learning; Ranking SVM; Perceptron; Generalization error; Decision boundary; Outlier
- Abstract
While conventional ranking algorithms, such as PageRank, rely on the web structure to decide the relevancy of a web page, learning to rank seeks a function capable of ordering a set of instances using a supervised learning approach. Learning to rank has gained increasing popularity in the information retrieval and machine learning communities. In this paper, we propose a novel nonlinear perceptron method for rank learning. The proposed method is an online algorithm and simple to implement. It introduces a kernel function to map the original feature space into a nonlinear space and employs a perceptron method to minimize the ranking error, avoiding convergence to a solution near the decision boundary and alleviating the effect of outliers in the training dataset. Furthermore, unlike existing approaches such as RankSVM and RankBoost, the proposed method scales to large datasets for online learning. Experimental results on benchmark corpora show that our approach is more efficient than state-of-the-art methods such as FRank, RankSVM, and RankBoost, while achieving higher or comparable accuracies in instance ranking.
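A hedged sketch of a pairwise kernel ranking perceptron in the spirit described (updates on misordered pairs; the paper's exact update rule, outlier handling, and boundary-avoidance mechanism are not reproduced):

```python
import numpy as np

def rbf(X, Z, gamma=0.5):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def rank_perceptron_fit(X, relevance, epochs=10):
    """Learn a scoring function f(x) = sum_i beta_i * k(x_i, x) such that
    items with higher relevance labels receive higher scores."""
    K = rbf(X, X)
    beta = np.zeros(len(X))
    for _ in range(epochs):
        for i in range(len(X)):
            for j in range(len(X)):
                if relevance[i] > relevance[j] and beta @ K[:, i] <= beta @ K[:, j]:
                    beta[i] += 1.0   # push the preferred item up
                    beta[j] -= 1.0   # push the dispreferred item down
    return beta

def scores(X_train, beta, X_new):
    return beta @ rbf(X_train, X_new)
```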
- Published
- 2009
26. A simpler unified analysis of budget perceptrons
- Author: Ilya Sutskever
- Subjects: Kernel perceptron; Online learning; Convex optimization; Regret; Mathematical proofs; Perceptron; Mathematical optimization
- Abstract
The kernel Perceptron is an appealing online learning algorithm with a drawback: whenever it makes an error it must increase its support set, which slows training and testing if the number of errors is large. The Forgetron and the Randomized Budget Perceptron algorithms overcome this problem by restricting the number of support vectors the Perceptron is allowed to have. These algorithms have regret bounds whose proofs are dissimilar. In this paper we propose a unified analysis of both algorithms by observing that the way in which they remove support vectors can be seen as a form of L2-regularization. By casting these algorithms as instances of online convex optimization problems and applying a variant of Zinkevich's theorem for noisy and incorrect gradients, we can bound their regret more easily than before. Our bounds are similar to the existing ones, but the proofs are less technical.
- Published
- 2009
27. Tighter Perceptron with improved dual use of cached data for model representation and validation
- Author: Zhuang Wang and Slobodan Vucetic
- Subjects: Kernel perceptron; Support vector machine; Statistical classification; Kernel method; Training set; Kernel (statistics); Perceptron; Time complexity
- Abstract
Kernel Perceptrons are represented by a subset of training points, called the support vectors, and their associated weights. To address the issue of unlimited growth in model size during training, budget kernel perceptrons maintain a fixed number of support vectors and thus achieve constant update time and space complexity. In this paper, a new kernel perceptron algorithm for online learning on a budget is proposed. Following the idea of the Tighter Perceptron, upon exceeding the budget the algorithm removes the support vector with the minimal impact on classification accuracy. To optimize memory use, instead of maintaining a separate validation data set for accuracy estimation, the proposed algorithm uses only the support vectors for both model representation and validation. This is achieved by estimating the posterior class probability of each support vector and using this information in validation. Experimental results on 11 benchmark data sets indicate that the proposed algorithm is significantly more accurate than competing budget kernel perceptrons and has accuracy comparable to resource-unbounded perceptrons, including the original kernel perceptron and the Tighter Perceptron that uses the whole training data set for validation.
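A hedged sketch of the budget-maintenance idea (on exceeding the budget, drop the support vector whose removal least hurts accuracy estimated on the cached support set itself). Plain 0-1 accuracy is used here in place of the paper's posterior-probability weighting.

```python
import numpy as np

def rbf(X, Z, gamma=0.5):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def prune_to_budget(sv, sv_y, alpha, budget):
    """Greedily remove the support vector whose removal least degrades
    accuracy measured on the remaining support vectors."""
    while len(sv) > budget:
        K = rbf(sv, sv)
        best_i, best_acc = None, -1.0
        for i in range(len(sv)):
            keep = np.arange(len(sv)) != i
            pred = np.sign((alpha[keep] * sv_y[keep]) @ K[np.ix_(keep, keep)])
            acc = (pred == sv_y[keep]).mean()
            if acc > best_acc:
                best_i, best_acc = i, acc
        keep = np.arange(len(sv)) != best_i
        sv, sv_y, alpha = sv[keep], sv_y[keep], alpha[keep]
    return sv, sv_y, alpha
```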
- Published
- 2009
28. Compressed Kernel Perceptrons
- Author: Vladimir Coric, Slobodan Vucetic, and Zhuang Wang
- Subjects: Kernel perceptron; Kernel method; Kernel (statistics); Machine learning
- Abstract
Kernel machines are a popular class of machine learning algorithms that achieve state-of-the-art accuracies on many real-life classification problems. Kernel perceptrons are among the most popular online kernel machines and are known to achieve high-quality classification despite their simplicity. They are represented by a set of B prototype examples, called support vectors, and their associated weights; to obtain a classification, a new example is compared to the support vectors. Both the space to store a prediction model and the time to provide a single classification scale as O(B). A problem with kernel perceptrons is that on noisy data the number of support vectors tends to grow without bound with the number of training examples. To reduce the strain on computational resources, budget kernel perceptrons have been developed that upper-bound the number of support vectors. In this work, we propose a new budget algorithm that upper-bounds the number of bits needed to store a kernel perceptron. Setting a bit-length constraint could facilitate hardware and software implementations of kernel perceptrons on resource-limited devices such as microcontrollers. The proposed compressed kernel perceptron algorithm decides on the optimal tradeoff between the number of support vectors and their bit precision. The algorithm was evaluated on several benchmark data sets, and the results indicate that it can train highly accurate classifiers even when the available memory budget is below 1 Kbit. This promising result points to the possibility of implementing powerful learning algorithms even on the most resource-constrained computational devices.
- Published
- 2009
29. SIRMs connected fuzzy inference method using kernel method
- Author: F. Mizuguchi, Hiroaki Ishii, Masaharu Mizumoto, Soichiro Watanabe, and Hirosato Seki
- Subjects: Kernel perceptron; Kernel method; Fuzzy set; Method of steepest descent; Exclusive or; Fuzzy logic
- Abstract
The single input rule modules connected fuzzy inference method (SIRMs method, for short) by Yubazaki can drastically decrease the number of fuzzy rules in comparison with conventional fuzzy inference methods. Seki et al. proposed the functional-type SIRMs method, which generalizes the consequent part of the SIRMs method to a function. However, these SIRMs methods cannot be applied to XOR (exclusive OR). In this paper, we propose a "kernel-type single input rule modules connected fuzzy inference method," which applies the kernel trick to the SIRMs method, and show that this method can treat XOR. Further, a learning algorithm for the proposed SIRMs method is derived using the steepest descent method and is shown to be superior to the conventional SIRMs method and the kernel perceptron when applied to the identification of nonlinear functions.
- Published
- 2008
30. Accelerating Kernel Perceptron Learning
- Author: José R. Dorronsoro, Ana González, and Daniel Jaque García
- Subjects: Kernel perceptron; Perceptron; Support vector machine; Hyperplane; Sample size determination; Margin (machine learning)
- Abstract
Recently it has been shown that appropriate perceptron training methods, such as the Schlesinger-Kozinec (SK) algorithm, can provide maximal margin hyperplanes with training cost O(N × T), with N denoting the sample size and T the number of training iterations. In this work we relate SK training to the classical Rosenblatt rule and show that, when the hyperplane vector is written in dual form, the support vector (SV) coefficients determine their appearance frequency in training; in particular, large-coefficient SVs penalize training costs. In this light we explore a training acceleration procedure in which large-coefficient and, hence, large-cost SVs are removed from training, which also allows a further stable shrinking of large samples. As we shall see, this results in much faster training while not penalizing test classification.
- Published
- 2007
31. A Multiclass Kernel Perceptron Algorithm
- Author: Jianhua Xu and Xuegong Zhang
- Subjects: Kernel perceptron; Structured support vector machine; Perceptron; Multiclass classification; Support vector machine; Kernel method; Radial basis function kernel; Kernel Fisher discriminant analysis
- Abstract
Original kernel machines (e.g., the support vector machine, least squares support vector machine, kernel Fisher discriminant analysis, and kernel perceptron algorithm) were mainly designed for binary classification, and how to extend them effectively to multiclass classification is still an ongoing research issue. Rosenblatt's linear perceptron algorithm for binary classification and its corresponding multiclass linear version are the simplest learning machines in terms of their algorithmic routines. The kernel perceptron algorithm for binary classification was constructed by extending the linear perceptron algorithm with Mercer kernels. In this paper, a multiclass kernel perceptron algorithm is proposed by combining the multiclass linear perceptron algorithm with the binary kernel perceptron algorithm; it handles multiclass classification directly and nonlinearly in a simple iterative procedure. Two artificial examples and four benchmark datasets are used to evaluate the performance of our multiclass method. The experimental results show that our algorithm achieves good classification performance.
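A hedged sketch of a direct multiclass kernel perceptron of the kind described (argmax over per-class kernel scores with winner/loser dual updates; the authors' learning-rate schedule and exact routine are not reproduced):

```python
import numpy as np

def rbf(X, Z, gamma=0.5):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def multiclass_kernel_perceptron(X, y, n_classes, epochs=10):
    """A[c, i] is the dual weight of example i for class c;
    the score of class c on x is sum_i A[c, i] * k(x_i, x)."""
    K = rbf(X, X)
    A = np.zeros((n_classes, len(X)))
    for _ in range(epochs):
        for i in range(len(X)):
            pred = int(np.argmax(A @ K[:, i]))
            if pred != y[i]:
                A[y[i], i] += 1.0   # promote the true class
                A[pred, i] -= 1.0   # demote the offending class
    return A

def predict(X_train, A, X_new):
    return np.argmax(A @ rbf(X_train, X_new), axis=0)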
- Published
- 2006
32. Selecting the kernel type for a web-based adaptive image retrieval system (AIRS)
- Author: Anca Doloc-Mihu and Vijay V. Raghavan
- Subjects: Kernel perceptron; Polynomial kernel; Radial basis function kernel; Kernel principal component analysis; Pattern recognition; Machine learning
- Abstract
The goal of this paper is to investigate kernel selection for a Web-based AIRS. Using the kernel perceptron learning method, several kernels having polynomial and Gaussian radial basis function (RBF) forms (6 polynomials and 6 RBFs) are applied to general images represented by color histograms in the RGB and HSV color spaces. Experimental results on these collections show that performance varies significantly between different kernel types and that choosing an appropriate kernel is important.
- Published
- 2006
33. A soft Bayes perceptron
- Author: M. Bruckner and W. Dilger
- Subjects: Kernel perceptron; Perceptron; Kernel method; Kernel (statistics); Bayes' theorem; Benchmark (computing); Pattern recognition
- Abstract
The kernel perceptron is one of the simplest and fastest kernel machines; its performance, however, is inferior to other well-known kernel machines. We introduce an algorithm that combines several approaches, mainly Herbrich's large-scale Bayes point machine and the soft perceptron, in order to improve the kernel perceptron. Our experiments, based on standard benchmark datasets, show that the performance of the perceptron can be improved significantly with similar computational effort.
- Published
- 2006
34. A Fast and Sparse Implementation of Multiclass Kernel Perceptron Algorithm
- Author: Jianhua Xu
- Subjects: Kernel method; Kernel perceptron; Artificial neural network; Discriminant function analysis; Iterative method; Kernel (statistics); Perceptron; Cross-validation
- Abstract
The original multiclass kernel perceptron algorithm is time-consuming in its training and discrimination procedures. In this paper, the reduced kernel-based discriminant function for each class is defined only by training samples from that class plus a bias term, which means that, apart from the bias terms, the number of variables to be solved for always equals the number of training samples regardless of the number of classes, and the final discriminant functions are sparse. This strategy speeds up the training and discrimination procedures effectively. Further, an additional iterative procedure with a decreasing learning rate is designed to improve classification accuracy in the nonlinearly separable case. Experimental results on five benchmark datasets using ten-fold cross-validation show that our modified training methods run between two and five times as fast as the original algorithm.
- Published
- 2006
35. Convergence theorem for kernel perceptron
- Author: Kazushi Ikeda
- Subjects: Kernel perceptron; Perceptron; Kernel method; Polynomial kernel; Multilayer perceptron; Winnow; Applied mathematics
- Abstract
The convergence of the kernel perceptron algorithm is examined. We first introduce the kernel perceptron algorithm, which is an application of kernel methods to perceptron learning and also an extension of the algebraic perceptron algorithm to a general kernel function and a general learning coefficient. Although the naive perceptron is known to converge, it is not obvious whether the kernel perceptron algorithm converges. We prove that it converges when the learning coefficient is unity and derive the condition on the learning coefficient for convergence on given examples.
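For context, the classical result that kernel convergence analyses build on (Novikoff's bound, stated here for the RKHS setting; this is background, not the paper's theorem): if some unit-norm $f^*$ in the RKHS separates the data with margin $\gamma > 0$ and $k(x_i, x_i) \le R^2$ for all $i$, then the number of perceptron mistakes $M$ satisfies

```latex
M \;\le\; \left(\frac{R}{\gamma}\right)^{2}
```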
- Published
- 2004
36. Concurrent Support Vector Machine Processor for Disease Diagnosis
- Author: Jae Woo Wee and Chong Ho Lee
- Subjects: Support vector machine; Kernel perceptron; Kernel method; Artificial neural network; Kernel (statistics); Parallel computing; Throughput
- Abstract
A Concurrent Support Vector Machine processor (CSVM) that performs all phases of the recognition process, including kernel computation, learning, and recall, on a single chip is proposed. Classification problems on high-dimensional bio data are solved quickly and easily using the CSVM. Hardware-friendly support vector machine learning algorithms, the kernel adatron and the kernel perceptron, are embedded on the chip. Concurrent operation through a parallel architecture of elements yields high speed and throughput. Experiments on the fixed-point algorithm, which incurs quantization error, are performed and the results compared with the floating-point algorithm. The CSVM, implemented on an FPGA chip, generates fast and accurate results on high-dimensional cancer data.
- Published
- 2004
37. Learning on Graphs in the Game of Go
- Author: Marco Krüger, Thore Graepel, Ralf Herbrich, and Mike Goutrié
- Subjects: Support vector machine; Kernel perceptron; Artificial neural network; Feature vector; Graph theory; Game tree; Perceptron
- Abstract
We consider the game of Go from the point of view of machine learning and as a well-defined domain for learning on graph representations. We discuss the representation of both board positions and candidate moves and introduce the common fate graph (CFG) as an adequate representation of board positions for learning. Single candidate moves are represented as feature vectors with features given by subgraphs relative to the given move in the CFG. Using this representation we train a support vector machine (SVM) and a kernel perceptron to discriminate good moves from bad moves on a collection of life-and-death problems and on 9 × 9 game records. We thus obtain kernel machines that solve Go problems and play 9 × 9 Go.
- Published
- 2001
38. Learning Kernel Classifiers
- Author: Ralf Herbrich
- Subjects: Graph kernel; Kernel perceptron; Kernel method; Computational learning theory; Polynomial kernel; Radial basis function kernel; Machine learning
- Abstract
From the Publisher: Linear classifiers in kernel spaces have emerged as a major topic within the field of machine learning. The kernel technique takes the linear classifier--a limited, but well-established and comprehensively studied model--and extends its applicability to a wide range of nonlinear pattern-recognition tasks such as natural language processing, machine vision, and biological sequence analysis. This book provides the first comprehensive overview of both the theory and algorithms of kernel classifiers, including the most recent developments. It begins by describing the major algorithmic advances: kernel perceptron learning, kernel Fisher discriminants, support vector machines, relevance vector machines, Gaussian processes, and Bayes point machines. Then follows a detailed introduction to learning theory, including VC and PAC-Bayesian theory, data-dependent structural risk minimization, and compression bounds. Throughout, the book emphasizes the interaction between theory and algorithms: how learning algorithms work and why. The book includes many examples, complete pseudo code of the algorithms presented, and an extensive source code library.
- Published
- 2001
39. The perceptron: A probabilistic model for information storage and organization in the brain
- Author: Frank Rosenblatt
- Subjects: Perceptron; Kernel perceptron; Artificial neural network; Brain; Information storage and retrieval; Cognition; Statistical model; Perception; Neural networks; General Psychology
- Published
- 1958
40. On the generalization ability of on-line learning algorithms
- Author: Nicolò Cesa-Bianchi, Alex Conconi, and Claudio Gentile
- Subjects: Kernel perceptron; Margin (machine learning); Generalization; Statistical learning theory; Hinge loss; Perceptron; Independent and identically distributed random variables
- Abstract
In this paper, it is shown how to extract a hypothesis with small risk from the ensemble of hypotheses generated by an arbitrary on-line learning algorithm run on an independent and identically distributed (i.i.d.) sample of data. Using a simple large deviation argument, we prove tight data-dependent bounds for the risk of this hypothesis in terms of an easily computable statistic M_n associated with the on-line performance of the ensemble. Via sharp pointwise bounds on M_n, we then obtain risk tail bounds for kernel perceptron algorithms in terms of the spectrum of the empirical kernel matrix. These bounds reveal that the linear hypotheses found via our approach achieve optimal tradeoffs between hinge loss and margin size over the class of all linear functions, an issue that was left open by previous results. A distinctive feature of our approach is that the key tools for our analysis come from the model of prediction of individual sequences; i.e., a model making no probabilistic assumptions on the source generating the data. In fact, these tools turn out to be so powerful that we only need very elementary statistical facts to obtain our final risk bounds.
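Schematically, the online-to-batch flavor of the result (the form below is a standard statement consistent with the abstract; the paper's exact constants and conditions are not reproduced): if the online algorithm suffers cumulative loss $M_n$ on $n$ i.i.d. examples, then a hypothesis extracted from the ensemble satisfies, with probability at least $1 - \delta$,

```latex
\operatorname{risk}(\hat{h}) \;\le\; \frac{M_n}{n} \;+\; O\!\left(\sqrt{\frac{\ln(n/\delta)}{n}}\right)
```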
41. Cryptographically private support vector machines
- Author: Taneli Mielikäinen, Helger Lipmaa, and Sven Laur
- Subjects: Kernel perceptron; Support vector machine; Kernel method; Polynomial kernel; Encryption; Computation; Classifier
- Abstract
We propose private protocols implementing the Kernel Adatron and Kernel Perceptron learning algorithms, together with private classification protocols and private polynomial kernel computation protocols. The new protocols return their outputs (the kernel value, the classifier, or the classifications) in encrypted form, so that they can be decrypted only by common agreement of the protocol participants. We show how to use the encrypted classifications to privately estimate many properties of the data and the classifier. The new SVM classifiers are the first to be proven private according to standard cryptographic definitions.