Title: Balancing the communication load of asynchronously parallelized machine learning algorithms
Authors: Keuper, Janis; Pfreundt, Franz-Josef
Year: 2015
Type: Paper
Language: English
Rights: Under Copyright
DOI: 10.24406/h-413961 (https://doi.org/10.24406/h-413961)
URL: https://publica.fraunhofer.de/handle/publica/413961
DDC: 003; 006; 519

Abstract:
Stochastic Gradient Descent (SGD) is the standard numerical method used to solve the core optimization problem of the vast majority of machine learning (ML) algorithms. In the context of large-scale learning, as utilized by many Big Data applications, efficient parallelization of SGD is the focus of active research. Recently, we were able to show that the asynchronous communication paradigm can be applied to achieve a fast and scalable parallelization of SGD. Asynchronous Stochastic Gradient Descent (ASGD) outperforms other, mostly MapReduce-based, parallel algorithms for large-scale machine learning problems. In this paper, we investigate the impact of asynchronous communication frequency and message size on the performance of ASGD applied to large-scale ML in HTC cluster and cloud environments. We introduce a novel algorithm for the automatic balancing of the asynchronous communication load, which allows ASGD to adapt to changing network bandwidths and latencies.
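Since the abstract only summarizes the approach, the following minimal Python sketch illustrates the general idea it describes: workers run asynchronous SGD and periodically merge their local state, while an adaptive rule tunes the communication interval against measured compute and communication times. The worker class, the 10% target ratio, and the doubling/halving rule are illustrative assumptions for this sketch, not the algorithm introduced in the paper.

import threading
import time
import numpy as np

# Toy asynchronous SGD on a shared parameter vector (lock-free, Hogwild-style),
# with a hypothetical load-balancing rule that adapts the communication interval.

def make_data(n=2000, d=20, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(n, d))
    w_true = rng.normal(size=d)
    y = X @ w_true + 0.1 * rng.normal(size=n)
    return X, y

class AsyncSGDWorker(threading.Thread):
    def __init__(self, w_global, X, y, lr=1e-3, steps=5000, comm_interval=16):
        super().__init__()
        self.w_global = w_global              # shared parameter vector
        self.X, self.y = X, y
        self.lr, self.steps = lr, steps
        self.comm_interval = comm_interval    # local steps between "messages"
        self.w_local = w_global.copy()

    def run(self):
        rng = np.random.default_rng()
        t_compute = t_comm = 1e-9
        for step in range(self.steps):
            t0 = time.perf_counter()
            i = rng.integers(len(self.y))
            grad = (self.X[i] @ self.w_local - self.y[i]) * self.X[i]
            self.w_local -= self.lr * grad
            t_compute += time.perf_counter() - t0

            if (step + 1) % self.comm_interval == 0:
                t0 = time.perf_counter()
                # "communication": asynchronously merge local state into the
                # shared vector, then resynchronize the local copy
                delta = self.w_local - self.w_global
                self.w_global += 0.5 * delta
                self.w_local = self.w_global.copy()
                t_comm += time.perf_counter() - t0
                # hypothetical balancing rule: keep communication time at
                # roughly 10% of compute time by adapting the interval
                ratio = t_comm / t_compute
                if ratio > 0.1:
                    self.comm_interval = min(self.comm_interval * 2, 1024)
                elif ratio < 0.05:
                    self.comm_interval = max(self.comm_interval // 2, 1)

if __name__ == "__main__":
    X, y = make_data()
    w = np.zeros(X.shape[1])
    workers = [AsyncSGDWorker(w, X, y) for _ in range(4)]
    for t in workers:
        t.start()
    for t in workers:
        t.join()
    print("final loss:", float(np.mean((X @ w - y) ** 2)))

In a distributed setting the merge step would be a network transfer rather than a shared-memory write, so the measured communication time, and hence the adapted interval, would track the actual bandwidth and latency of the interconnect.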