Distributed optimization of multi-class SVMs.
- Source:
- PLoS ONE; 6/1/2017, Vol. 12 Issue 6, p1-18, 18p
- Publication Year:
- 2017
Abstract
- Training of one-vs.-rest SVMs can be parallelized over the number of classes in a straightforward way. Given enough computational resources, one-vs.-rest SVMs can thus be trained on data involving a large number of classes. The same cannot be said, however, for the so-called all-in-one SVMs, which require solving a quadratic program whose size grows quadratically with the number of classes. We develop distributed algorithms for two all-in-one SVM formulations (Lee et al. and Weston and Watkins) that parallelize the computation evenly over the number of classes. This allows us to compare these models to one-vs.-rest SVMs at an unprecedented scale. The results indicate superior accuracy on text classification data. [ABSTRACT FROM AUTHOR]
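
The class-wise parallelism the abstract describes for one-vs.-rest training can be made concrete with a minimal sketch: each class defines an independent binary subproblem (that class against all others), so the subproblems can be trained concurrently on separate workers. The dataset, the solver (scikit-learn's LinearSVC), and the joblib-based parallelism below are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of one-vs.-rest SVM training parallelized over classes.
# Each class yields an independent binary problem, so the loop body can
# run concurrently; joblib and scikit-learn are assumptions for the sketch.
import numpy as np
from joblib import Parallel, delayed
from sklearn.datasets import fetch_20newsgroups_vectorized
from sklearn.svm import LinearSVC

# A sparse text classification dataset, in the spirit of the paper's
# experiments (the actual benchmarks differ).
X, y = fetch_20newsgroups_vectorized(return_X_y=True)
classes = np.unique(y)

def train_binary(c):
    # Relabel: class c is the positive class, all other classes negative.
    clf = LinearSVC(C=1.0)
    clf.fit(X, (y == c).astype(int))
    return clf

# One independent subproblem per class; n_jobs=-1 uses all local cores.
models = Parallel(n_jobs=-1)(delayed(train_binary)(c) for c in classes)

# Prediction: pick the class whose binary SVM gives the highest score.
scores = np.column_stack([m.decision_function(X) for m in models])
y_pred = classes[scores.argmax(axis=1)]
```

This is the easy case the abstract contrasts against: the all-in-one formulations (Lee et al.; Weston and Watkins) couple all classes in a single quadratic program, which is why distributing them evenly over classes requires the dedicated algorithms the paper develops.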
Details
- Language:
- English
- ISSN:
- 1932-6203
- Volume:
- 12
- Issue:
- 6
- Database:
- Complementary Index
- Journal:
- PLoS ONE
- Publication Type:
- Academic Journal
- Accession Number:
- 123348541
- Full Text:
- https://doi.org/10.1371/journal.pone.0178161