Fed-DeepONet: Stochastic Gradient-Based Federated Training of Deep Operator Networks.
- Source :
- Algorithms; Sep 2022, Vol. 15, Issue 9, Article 325, 15 pp.
- Publication Year :
- 2022
Abstract
- The Deep Operator Network (DeepONet) framework is a class of neural network architectures trained to learn nonlinear operators, i.e., mappings between infinite-dimensional function spaces. Traditionally, DeepONets are trained with a centralized strategy that requires transferring the training data to a central location. Such a strategy, however, limits our ability to preserve data privacy or to exploit high-performance distributed/parallel computing platforms. To alleviate these limitations, this paper studies the federated training of DeepONets for the first time. That is, we develop a framework, which we refer to as Fed-DeepONet, that allows multiple clients to train DeepONets collaboratively under the coordination of a centralized server. To achieve this, we propose an efficient stochastic gradient-based algorithm that enables the distributed optimization of the DeepONet parameters by averaging first-order estimates of the DeepONet loss gradient. Then, to accelerate the training convergence of Fed-DeepONets, we propose a moment-enhanced (i.e., adaptive) stochastic gradient-based strategy. Finally, we verify the performance of Fed-DeepONet by learning, for different configurations of the number of clients and the fraction of available clients, (i) the solution operator of a gravity pendulum and (ii) the dynamic response of a parametric library of pendulums. [ABSTRACT FROM AUTHOR]
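- The following is a minimal, illustrative sketch (not the authors' released code) of the two ingredients the abstract describes: a DeepONet that combines a branch network (encoding the sampled input function) with a trunk network (encoding the query coordinate) through an inner product, and a FedAvg-style server loop whose update is accelerated with classical momentum, standing in for the paper's moment-enhanced strategy. All class names, layer widths, and hyperparameters below are assumptions chosen for illustration.

```python
import copy
import torch
import torch.nn as nn

class DeepONet(nn.Module):
    """Minimal DeepONet: G(u)(y) ~ <branch(u), trunk(y)>."""
    def __init__(self, n_sensors=100, width=40, p=40):
        super().__init__()
        # Branch net encodes the input function u sampled at n_sensors points.
        self.branch = nn.Sequential(
            nn.Linear(n_sensors, width), nn.Tanh(), nn.Linear(width, p))
        # Trunk net encodes the (here 1-D) query coordinate y, e.g., time t.
        self.trunk = nn.Sequential(
            nn.Linear(1, width), nn.Tanh(), nn.Linear(width, p))

    def forward(self, u, y):
        # Inner product of the two p-dimensional feature vectors.
        return (self.branch(u) * self.trunk(y)).sum(dim=-1, keepdim=True)

def local_update(model, data, lr=1e-3, epochs=1):
    """One client's local SGD pass; returns the updated parameter vector."""
    local = copy.deepcopy(model)
    opt = torch.optim.SGD(local.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for u, y, target in data:  # data: list of (u, y, G(u)(y)) tensors
            opt.zero_grad()
            loss_fn(local(u, y), target).backward()
            opt.step()
    return nn.utils.parameters_to_vector(local.parameters()).detach()

def fed_train(model, clients, rounds=100, frac=0.5, beta=0.9, server_lr=1.0):
    """FedAvg-style loop with server-side (heavy-ball) momentum, standing in
    for the moment-enhanced acceleration mentioned in the abstract."""
    theta = nn.utils.parameters_to_vector(model.parameters()).detach()
    momentum = torch.zeros_like(theta)
    for _ in range(rounds):
        # Sample a fraction `frac` of the available clients this round.
        k = max(1, int(frac * len(clients)))
        chosen = torch.randperm(len(clients))[:k].tolist()
        # Average the selected clients' parameter updates (pseudo-gradient).
        delta = torch.stack(
            [local_update(model, clients[i]) - theta for i in chosen]).mean(0)
        momentum = beta * momentum + delta
        theta = theta + server_lr * momentum
        nn.utils.vector_to_parameters(theta, model.parameters())
    return model
```

- Note that in this federated setup the raw training trajectories never leave a client; only parameter vectors are exchanged with the server, which reflects the data-privacy motivation the abstract cites.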
Details
- Language :
- English
- ISSN :
- 1999-4893
- Volume :
- 15
- Issue :
- 9
- Database :
- Complementary Index
- Journal :
- Algorithms
- Publication Type :
- Academic Journal
- Accession number :
- 159274889
- Full Text :
- https://doi.org/10.3390/a15090325