
Stability-Based Generalization Analysis of Distributed Learning Algorithms for Big Data.

Authors :
Wu, Xinxing
Zhang, Junping
Wang, Fei-Yue
Source :
IEEE Transactions on Neural Networks & Learning Systems; March 2020, Vol. 31, Issue 3, pp. 801-812, 12 pp.
Publication Year :
2020

Abstract

As efficient approaches to dealing with big data, divide-and-conquer distributed algorithms, such as distributed kernel regression, distributed bootstrap, and distributed structured perceptron training, have been proposed and are broadly used in learning systems. Learning theories have been developed to analyze the feasibility, approximation, and convergence bounds of these distributed learning algorithms, but far less work has studied their stability. In this paper, we discuss the generalization bounds of distributed learning algorithms from the viewpoint of algorithmic stability. First, we introduce a definition of uniform distributed stability for distributed algorithms and derive generalization risk bounds under it. Then, we analyze the stability properties and generalization risk bounds of a class of regularization-based distributed algorithms. The two resulting bounds show that the gap between the generalization distributed risk and the empirical distributed (or leave-one-computer-out) risk depends on the sample size $n$ and the number of working computers $m$ as $\mathcal{O}(m/n^{1/2})$. Furthermore, the results indicate that, for a regularized distributed kernel algorithm to generalize well, the regularization parameter $\lambda$ should be adjusted as the term $m/n^{1/2}$ changes. These theoretical findings provide useful guidance for deploying distributed algorithms on practical big data platforms. We illustrate our theoretical analyses with two simulation experiments. Finally, we discuss the sufficient number of working computers, nonequivalence, and generalization for distributed learning, showing that rules that hold for computation on a single computer may not always carry over to distributed learning. [ABSTRACT FROM AUTHOR]
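To make the divide-and-conquer setting and the $\lambda \propto m/n^{1/2}$ guidance concrete, the following is a minimal sketch (not the authors' implementation) of distributed kernel ridge regression: the data are split across $m$ working computers, each solves a local regularized kernel problem, and the local predictors are averaged. The Gaussian kernel, its width, the proportionality constant `c`, and the helper names `gaussian_kernel`, `local_krr_fit`, and `distributed_krr` are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch only: divide-and-conquer kernel ridge regression in which
# the regularization parameter lambda is tied to the term m / n**0.5 highlighted
# in the abstract. The constant c and the kernel width are hypothetical choices.
import numpy as np

def gaussian_kernel(A, B, width=1.0):
    """Gaussian (RBF) kernel matrix between the rows of A and the rows of B."""
    sq = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2.0 * A @ B.T
    return np.exp(-sq / (2.0 * width**2))

def local_krr_fit(X, y, lam, width=1.0):
    """Solve (K + n_local * lam * I) alpha = y on one working computer."""
    K = gaussian_kernel(X, X, width)
    alpha = np.linalg.solve(K + X.shape[0] * lam * np.eye(X.shape[0]), y)
    return X, alpha

def distributed_krr(X, y, m, c=1.0, width=1.0):
    """Split the data over m machines, fit local KRR models, average predictors.

    lam is set proportional to m / sqrt(n), following the abstract's guidance
    that lambda should track m / n**0.5; the constant c is assumed for illustration.
    """
    n = X.shape[0]
    lam = c * m / np.sqrt(n)
    parts = np.array_split(np.arange(n), m)
    models = [local_krr_fit(X[idx], y[idx], lam, width) for idx in parts]

    def predict(X_test):
        # Average the m local predictors (the divide-and-conquer combination step).
        preds = [gaussian_kernel(X_test, Xi, width) @ ai for Xi, ai in models]
        return np.mean(preds, axis=0)

    return predict

# Tiny usage example on synthetic data.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(600, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(600)
predict = distributed_krr(X, y, m=6)
X_test = np.linspace(-3, 3, 5)[:, None]
print(predict(X_test))
```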

Details

Language :
English
ISSN :
2162-237X
Volume :
31
Issue :
3
Database :
Complementary Index
Journal :
IEEE Transactions on Neural Networks & Learning Systems
Publication Type :
Periodical
Accession number :
142127666
Full Text :
https://doi.org/10.1109/TNNLS.2019.2910188