Federated learning for minimizing nonsmooth convex loss functions
- Author
- Wei, Le-Yin; Yu, Zhan; Zhou, Ding-Xuan
- Subjects
- CONVEX functions; OPTIMIZATION algorithms; ACCESS to information; TRAINING needs; HEALTH websites
- Abstract
Federated learning is a distributed learning framework for protecting privacy, in which local clients collaboratively train a shared model via a central server. Many existing federated learning methods are analyzed under smoothness conditions on the loss functions and require access to gradient information to perform local optimization algorithms such as stochastic gradient descent or dual averaging; some even require strong convexity of the loss function. However, in many real-world applications, such as assessing the readability of texts, first-order gradient information is difficult to obtain, and the smoothness and strong convexity assumptions on the loss functions are not satisfied. This paper aims at providing an understanding of federated learning in the situation where the loss functions are nonsmooth, gradient information is unavailable, and no strong convexity condition is needed. Based on Nesterov's zeroth-order (gradient-free) techniques, we propose a zeroth-order stochastic federated learning method. Both constant and decreasing step size strategies are considered. Moreover, a new type of approximating sequence is proposed for federated learning with strictly decreasing step sizes. Expected error bounds for the proposed approximating sequence and learning rates of the proposed method are derived under suitable selection rules for the step sizes and smoothing parameters. [ABSTRACT FROM AUTHOR]
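To make the abstract's setup concrete, here is a minimal sketch, not the authors' exact algorithm, of a zeroth-order stochastic federated learning loop: each client runs local gradient-free updates using Nesterov's Gaussian-smoothing two-point estimator, and the server averages the client models. The toy nonsmooth convex losses, the decreasing step-size schedule, and all names (`zo_gradient`, `local_zo_updates`, the constants) are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch of zeroth-order federated averaging; all specifics
# (losses, schedules, constants) are assumed for illustration only.
import numpy as np

rng = np.random.default_rng(0)


def zo_gradient(f, x, mu, rng):
    """Nesterov's two-point zeroth-order estimator:
    g = (f(x + mu*u) - f(x)) / mu * u, with u ~ N(0, I)."""
    u = rng.standard_normal(x.shape)
    return (f(x + mu * u) - f(x)) / mu * u


def local_zo_updates(f, x, steps, eta, mu, rng):
    """Run `steps` gradient-free updates on one client's loss f."""
    for _ in range(steps):
        x = x - eta * zo_gradient(f, x, mu, rng)
    return x


# Toy nonsmooth convex losses (absolute deviations), one per client.
clients = [lambda x, c=c: np.abs(x - c).sum() for c in (-1.0, 0.5, 2.0)]

x = np.zeros(5)  # shared model held by the server
for t in range(1, 201):  # communication rounds
    eta = 0.5 / np.sqrt(t)  # decreasing step size (one of the two
                            # strategies mentioned in the abstract)
    mu = 1e-3               # smoothing parameter of the Gaussian oracle
    # Each client starts from the current shared model and updates locally.
    local_models = [
        local_zo_updates(f, x.copy(), steps=5, eta=eta, mu=mu, rng=rng)
        for f in clients
    ]
    x = np.mean(local_models, axis=0)  # server averages client models

print("final shared model:", x)
```

Note that the estimator queries only function values, which matches the abstract's setting where first-order gradient information is unavailable and the loss need not be smooth or strongly convex.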
- Published
- 2023