August 7, 2023
Conference Paper
Title
TinyFL: On-Device Training, Communication and Aggregation on a Microcontroller for Federated Learning
Abstract
In federated learning (FL), in contrast to centralized machine learning (ML), ML models are exchanged rather than the raw data. FL is therefore a decentralized and privacy-compliant process that is currently attracting significant research interest. As a result, initial investigations of FL on microcontrollers (MCUs) have been carried out. However, each of these studies used a PC as the server. In this work, we introduce TinyFL, a method that uses only MCUs to build a low-cost, low-power, and low-storage system. TinyFL uses a hybrid master/slave protocol in which the master MCU is responsible for communication and aggregation. Communication is performed over the inter-integrated circuit (I²C) bus. TinyFL demonstrates that communication and aggregation for FL can be performed using only MCUs. Furthermore, using a gesture recognition use case, we show that training with TinyFL is 11.57% faster than centralized training.
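The aggregation step the abstract attributes to the master MCU can be sketched as a FedAvg-style average of slave model weights. This is a minimal, hypothetical illustration, not the paper's implementation: the slave count, model size, and the `aggregate` function are assumptions, and the weights are passed in directly rather than read over I²C.

```c
#include <stddef.h>

#define NUM_SLAVES 2   /* assumed number of slave MCUs */
#define MODEL_SIZE 4   /* assumed number of model weights */

/* FedAvg-style aggregation on the master MCU: the global model is
   the element-wise mean of the slaves' weight vectors. In TinyFL
   the weights would arrive over I2C; here they are in-memory. */
static void aggregate(const float slave_weights[NUM_SLAVES][MODEL_SIZE],
                      float global_weights[MODEL_SIZE]) {
    for (size_t i = 0; i < MODEL_SIZE; i++) {
        float sum = 0.0f;
        for (size_t s = 0; s < NUM_SLAVES; s++)
            sum += slave_weights[s][i];
        global_weights[i] = sum / (float)NUM_SLAVES;
    }
}
```

After aggregation, the master would broadcast the averaged weights back to the slaves for the next local training round.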