Keuper, J.; Pfreundt, F.-J. (2016): Distributed training of deep neural networks: Theoretical and practical limits of parallel scalability. Conference paper.
DOI: 10.1109/MLHPC.2016.006
https://publica.fraunhofer.de/handle/publica/394355

Abstract: This paper presents a theoretical analysis and practical evaluation of the main bottlenecks on the way to a scalable distributed solution for training Deep Neural Networks (DNNs). The presented results show that the current state-of-the-art approach, data-parallelized Stochastic Gradient Descent (SGD), quickly turns into a heavily communication-bound problem. In addition, we present simple but fixed theoretical constraints that prevent effective scaling of DNN training beyond only a few dozen nodes. This leads to poor scalability of DNN training in most practical scenarios.
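
To make the communication-bound argument concrete, the following is a minimal back-of-the-envelope sketch (not taken from the paper) of one synchronous data-parallel SGD step: the compute share shrinks as the batch is split across nodes, while the gradient all-reduce does not. All constants (gradient size, bandwidth, latency, per-batch compute time) are assumed placeholders, and the ring all-reduce cost model is a standard textbook approximation, not the paper's own model.

    # Hypothetical cost model for synchronous data-parallel SGD (illustration only).
    def step_time(nodes: int,
                  compute_per_batch: float = 1.0,   # seconds to process the full batch on one node (assumed)
                  gradient_bytes: float = 100e6,    # size of the gradient to synchronize (assumed)
                  bandwidth: float = 10e9,          # effective bytes/s per link (assumed)
                  latency: float = 1e-4) -> float:  # per-message latency in seconds (assumed)
        """Time for one synchronous SGD step with data parallelism.

        Compute is divided across nodes, but the gradient all-reduce
        (modeled here with a simple ring all-reduce cost) is not: this is
        the communication bound the abstract refers to.
        """
        compute = compute_per_batch / nodes
        # Ring all-reduce moves ~2*(n-1)/n of the gradient per node, plus latency terms.
        comm = 2 * (nodes - 1) / nodes * gradient_bytes / bandwidth + 2 * (nodes - 1) * latency
        return compute + comm

    if __name__ == "__main__":
        t1 = step_time(1)
        for n in (1, 2, 4, 8, 16, 32, 64, 128):
            print(f"{n:4d} nodes: modeled speedup ~ {t1 / step_time(n):5.1f}x")

Under these assumed parameters the modeled speedup saturates and then degrades around a few dozen nodes, which is the qualitative behavior described in the abstract; the paper's actual analysis and measurements are more detailed.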