Reinforced SVM method and memorization mechanisms.
- Author
- Vapnik, Vladimir and Izmailov, Rauf
- Subjects
- *MEMORIZATION, *ALGORITHMS, *SET functions, *MACHINE learning, *LEARNING ability, *INDUCTION (Logic)
- Abstract
The paper is devoted to two problems: (1) reinforcement of SVM algorithms, and (2) justification of memorization mechanisms for generalization. (1) The current SVM algorithm was designed for the case when the risk for the set of nonnegative slack variables is defined by the ℓ1 norm. In this paper, along with that classical ℓ1 norm, we consider risks defined by the ℓ2 norm and the ℓ∞ norm. Using these norms, we formulate several modifications of the existing SVM algorithm and show that the resulting modified SVM algorithms can improve (sometimes significantly) the classification performance. (2) The generalization ability of existing learning algorithms is usually explained by arguments involving uniform convergence of empirical losses to the corresponding expected losses over a given set of functions. However, along with bounds for uniform convergence of empirical losses to the expected losses, the VC theory also provides bounds for relative uniform convergence. These bounds lead to a more accurate estimate of the expected loss. Advanced methods of estimating the expected risk of error have to leverage these bounds, which also support mechanisms of training-data memorization that, as the paper demonstrates, can improve classification performance. [ABSTRACT FROM AUTHOR]
- Published
- 2021
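The abstract's first contribution concerns replacing the classical ℓ1 risk on the slack variables with ℓ2 or ℓ∞ risks. A minimal sketch of how these alternative slack risks could enter a primal SVM objective is below; the subgradient-descent routine, the toy data, and the reading of the ℓ2 risk as the Euclidean norm of the slack vector are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

# Toy two-class data: two well-separated Gaussian blobs (illustrative only).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.hstack([-np.ones(50), np.ones(50)])

def train_svm(X, y, norm="l1", C=1.0, epochs=500, lr=0.01):
    """Subgradient descent on 0.5*||w||^2 + C * R(xi), where
    xi_i = max(0, 1 - y_i (w.x_i + b)) are the slacks and R is the
    chosen risk: sum (l1), Euclidean norm (l2), or max slack (linf)."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for t in range(epochs):
        margins = y * (X @ w + b)
        xi = np.maximum(0.0, 1.0 - margins)
        # Per-sample weights alpha_i so that the risk subgradient in w
        # is -sum_i alpha_i * y_i * x_i.
        if norm == "l1":                 # R = sum_i xi_i
            alpha = (xi > 0).astype(float)
        elif norm == "l2":               # assumed reading: R = ||xi||_2
            nrm = np.linalg.norm(xi)
            alpha = xi / nrm if nrm > 0 else np.zeros(n)
        elif norm == "linf":             # R = max_i xi_i: only the worst
            alpha = np.zeros(n)          # violator drives the update
            if xi.max() > 0:
                alpha[xi.argmax()] = 1.0
        else:
            raise ValueError(norm)
        step = lr / (1 + 0.01 * t)       # decaying step size
        w -= step * (w - C * (alpha * y) @ X)
        b -= step * (-C * (alpha * y).sum())
    return w, b

for norm in ("l1", "l2", "linf"):
    w, b = train_svm(X, y, norm=norm)
    acc = np.mean(np.sign(X @ w + b) == y)
    print(f"{norm}: train accuracy = {acc:.2f}")
```

On separable data all three variants recover a good separating hyperplane; the point of the sketch is only that switching the slack risk changes which samples contribute to each update (all violators for ℓ1, violators weighted by their relative slack for ℓ2, the single worst violator for ℓ∞).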