A performance evaluation of federated learning algorithms

Nilsson, A.; Smith, S.; Ulm, G.; Gustavsson, E.; Jirstrand, M.


Association for Computing Machinery (ACM):
DIDL 2018, 2nd Workshop on Distributed Infrastructures for Deep Learning. Proceedings : Part of Middleware 2018; December 10, 2018, Rennes, France
New York: ACM, 2018
ISBN: 978-1-4503-6119-4
Workshop on Distributed Infrastructures for Deep Learning (DIDL) <2, 2018, Rennes>
International Middleware Conference <2018, Rennes>
Fraunhofer FCC

Federated learning is an approach to distributed machine learning in which a global model is learned by aggregating models that have been trained locally on data-generating clients. In contrast to centralized optimization, the number of clients can be very large, and the clients face challenges of data and network heterogeneity. Examples of clients include smartphones and connected vehicles, which highlights the practical relevance of federated learning. We benchmark three federated learning algorithms and compare their performance against a centralized approach where data resides on the server. The algorithms Federated Averaging (FedAvg), Federated Stochastic Variance Reduced Gradient (FSVRG), and CO-OP are evaluated on the MNIST dataset, using both i.i.d. and non-i.i.d. partitionings of the data. Our results show that FedAvg achieves the highest accuracy among the federated algorithms, regardless of how the data is partitioned. Our comparison between FedAvg and centralized learning shows that they are practically equivalent when i.i.d. data is used. However, the centralized approach outperforms FedAvg with non-i.i.d. data.
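The aggregation step described in the abstract can be illustrated with a minimal sketch of FedAvg-style weighted averaging: each client trains locally, and the server averages the clients' parameters weighted by their local dataset sizes. This is an illustrative simplification, not the paper's evaluation code; the function name and data layout are assumptions.

```python
import numpy as np

def fedavg_aggregate(client_weights, client_sizes):
    """Weighted average of client model parameters (FedAvg-style).

    client_weights: one list of np.ndarray per client (one array per layer).
    client_sizes:   number of local training examples per client; clients
                    with more data get proportionally more influence.
    """
    total = sum(client_sizes)
    num_layers = len(client_weights[0])
    global_weights = []
    for layer in range(num_layers):
        # Sum n_k / n * w_k over all clients k, layer by layer.
        layer_avg = sum(
            (n / total) * w[layer]
            for w, n in zip(client_weights, client_sizes)
        )
        global_weights.append(layer_avg)
    return global_weights

# Two toy clients with a single "layer" each:
clients = [[np.array([0.0, 2.0])], [np.array([4.0, 6.0])]]
sizes = [1, 3]  # the second client holds three times as much data
print(fedavg_aggregate(clients, sizes))  # [array([3., 5.])]
```

In a full federated round, this aggregation alternates with local SGD on each selected client; the data-size weighting is what makes FedAvg robust when client datasets differ in size.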