
Asynchronous parallel stochastic gradient descent: A numeric core for scalable distributed machine learning algorithms

 
Authors: Keuper, Janis; Pfreundt, Franz-Josef

Published in:

Association for Computing Machinery (ACM):
MLHPC 2015, Workshop on Machine Learning in High-Performance Computing Environments. Proceedings. SC 2015, November 15-20, 2015, Austin, Texas
New York: ACM, 2015
ISBN: 978-1-4503-4006-9
Art. 1, 5 pp.
Workshop on Machine Learning in High-Performance Computing Environments (MLHPC) 2015, Austin, Texas
Supercomputing Conference (SC) 2015, Austin, Texas
English
Conference Paper
Fraunhofer ITWM

Abstract
The implementation of the vast majority of machine learning (ML) algorithms boils down to solving a numerical optimization problem. In this context, Stochastic Gradient Descent (SGD) methods have long proven to provide good results, both in terms of convergence and accuracy. Recently, several parallelization approaches have been proposed in order to scale SGD to very large ML problems. At their core, most of these approaches follow a MapReduce scheme. This paper presents a novel parallel updating algorithm for SGD, which utilizes the asynchronous single-sided communication paradigm. Compared to existing methods, Asynchronous Parallel Stochastic Gradient Descent (ASGD) provides faster convergence with linear scalability and stable accuracy.
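
For illustration, the following is a minimal Python sketch of the asynchronous update idea: several workers apply lock-free SGD steps to a shared parameter vector without any synchronization barrier. This is a shared-memory toy on a synthetic least-squares problem, not the paper's ASGD implementation over single-sided communication; all names, sizes, and step-size choices here are hypothetical.

import threading
import numpy as np

# Synthetic least-squares problem: recover w_true from noisy observations.
rng = np.random.default_rng(0)
n_samples, n_features = 10_000, 20
X = rng.normal(size=(n_samples, n_features))
w_true = rng.normal(size=n_features)
y = X @ w_true + 0.01 * rng.normal(size=n_samples)

w = np.zeros(n_features)  # shared parameter vector, updated without locks
lr = 0.01                 # hypothetical step size
n_workers = 4
steps_per_worker = 5_000
batch_size = 32

def worker(shared_w):
    # Each worker draws minibatches independently and writes its gradient
    # step straight into the shared weights -- no barrier, no lock.
    # Conflicting writes are simply tolerated (Hogwild-style updates).
    local_rng = np.random.default_rng()
    for _ in range(steps_per_worker):
        idx = local_rng.integers(0, n_samples, size=batch_size)
        grad = X[idx].T @ (X[idx] @ shared_w - y[idx]) / batch_size
        shared_w -= lr * grad  # in-place update on the shared array

threads = [threading.Thread(target=worker, args=(w,)) for _ in range(n_workers)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print("parameter error:", np.linalg.norm(w - w_true))

In the distributed setting described by the abstract, the shared state would not live in one process's memory: workers on different nodes would read and update it via asynchronous single-sided communication. The sketch above only mirrors the lock-free, barrier-free update pattern that makes such schemes scale.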

URL: http://publica.fraunhofer.de/documents/N-374931.html