Model pruning enables localized and efficient federated learning for yield forecasting and data sharing.

Authors:
Li, Andy
Markovic, Milan
Edwards, Peter
Leontidis, Georgios
Source:
Expert Systems with Applications, May 2024, Vol. 242.
Publication Year:
2024

Abstract

Federated Learning (FL) presents a decentralized approach to model training in the agri-food sector and offers the potential for improved machine learning performance while ensuring the safety and privacy of individual farms or data silos. However, the conventional FL approach has two major limitations. First, the heterogeneous data on individual silos can cause the global model to perform well for some clients but not all, as update directions from some clients may hinder others once aggregated. Second, it is inefficient with respect to communication costs during FL and the size of the models exchanged. This paper proposes a new technical solution that applies network pruning to client models and aggregates the pruned models. This method enables local models to be tailored to their respective data distributions, mitigating the data heterogeneity present in agri-food data. It also yields more compact models that transmit less data. We experiment with a soybean yield forecasting dataset and find that this approach can improve inference performance by 15.5% to 20% compared to FedAvg, while reducing local model sizes by up to 84% and the data volume communicated between the clients and the server by 57.1% to 64.7%. Our method demonstrates the potential of efficient, more environmentally friendly models to support the agri-food sector's transition to net zero. Future enhancements of this method could further optimize distributed learning in agri-food, enhancing sustainability and applicability.

• We propose a new federated pruning learning method for yield forecasting.
• Our method can be used to improve local inference performance.
• The method reduces communication costs during training and model sizes.
• Our method facilitates the on-edge implementation of ML models using sparse tensors.
• The method is generalizable and works with different pruning policies and schedules. [ABSTRACT FROM AUTHOR]
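To make the mechanism described in the abstract concrete, the following is a minimal, hypothetical sketch in PyTorch of magnitude-based client pruning followed by plain FedAvg aggregation. The function names (magnitude_prune, fedavg_aggregate), the toy Linear model, and the 0.8 sparsity level are illustrative assumptions, not the authors' implementation; as the highlights note, the method is compatible with different pruning policies and schedules.

```python
# Illustrative sketch only -- not the authors' code. Assumes PyTorch.
import copy
import torch
import torch.nn as nn

def magnitude_prune(model: nn.Module, sparsity: float) -> dict:
    """Zero out the smallest-magnitude weights in each weight matrix
    and return the binary masks (hypothetical helper)."""
    masks = {}
    with torch.no_grad():
        for name, param in model.named_parameters():
            if param.dim() < 2:   # skip biases and norm parameters
                continue
            k = int(sparsity * param.numel())
            if k == 0:
                continue
            # The k-th smallest absolute value serves as the threshold.
            threshold = param.abs().flatten().kthvalue(k).values
            mask = (param.abs() > threshold).float()
            param.mul_(mask)      # apply the mask in place
            masks[name] = mask
    return masks

def fedavg_aggregate(client_states: list) -> dict:
    """Element-wise average of client state dicts (plain FedAvg).
    Pruned entries contribute zeros, so each client keeps its own
    sparse, personalized mask while a shared average is formed."""
    global_state = copy.deepcopy(client_states[0])
    for key in global_state:
        global_state[key] = torch.stack(
            [s[key].float() for s in client_states]).mean(dim=0)
    return global_state

if __name__ == "__main__":
    # Toy demo: two clients sharing one architecture.
    clients = [nn.Linear(10, 1) for _ in range(2)]
    for m in clients:
        magnitude_prune(m, sparsity=0.8)
    global_state = fedavg_aggregate([m.state_dict() for m in clients])
    # Sparse (COO) tensors transmit only the surviving nonzeros,
    # which is where the communication saving would come from.
    sparse_w = clients[0].weight.detach().to_sparse()
    print(f"nonzeros kept: {sparse_w.values().numel()} "
          f"of {clients[0].weight.numel()}")
```

In a full FL round, each client would train locally under its fixed mask before aggregation; the sketch omits training and scheduling, which is where the pruning policies and schedules mentioned in the highlights would plug in.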

Details

Language:
English
ISSN:
0957-4174
Volume:
242
Database:
Academic Search Index
Journal:
Expert Systems with Applications
Publication Type:
Academic Journal
Accession number:
175499824
Full Text:
https://doi.org/10.1016/j.eswa.2023.122847