2017
Conference Paper
Title
Effective parallelisation for machine learning
Abstract
We present a novel parallelisation scheme that simplifies the adaptation of learning algorithms to growing amounts of data as well as growing needs for accurate and confident predictions in critical applications. In contrast to other parallelisation techniques, it can be applied to a broad class of learning algorithms without further mathematical derivations and without writing dedicated code, while at the same time maintaining theoretical performance guarantees. Moreover, our parallelisation scheme reduces the runtime of many learning algorithms to polylogarithmic time on quasi-polynomially many processing units. This is a significant step towards a general answer to an open question [21] on the efficient parallelisation of machine learning algorithms in the sense of Nick's Class (NC). The cost of this parallelisation comes in the form of an increased sample complexity. Our empirical study confirms the potential of our parallelisation scheme with fixed numbers of processors and instances in realistic application scenarios.
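
The scheme described above treats the learning algorithm as a black box: the data is split across processing units, the unmodified base learner is run on each partition in parallel, and the resulting hypotheses are aggregated into a single model. The following Python sketch illustrates this general pattern under stated assumptions; the linear least-squares base learner and the coordinate-wise averaging of parameter vectors are placeholder choices for illustration, since the abstract does not specify the paper's aggregation operator.

# Minimal sketch of the black-box parallelisation pattern the abstract
# describes: train copies of an arbitrary base learner on disjoint data
# partitions in parallel, then aggregate the resulting hypotheses.
# The aggregation operator below (coordinate-wise averaging of parameter
# vectors) is a placeholder assumption, not the paper's scheme.
from concurrent.futures import ProcessPoolExecutor
import numpy as np

def fit_on_partition(args):
    """Run the (unmodified) base learner on one data partition."""
    X_part, y_part = args
    # Placeholder base learner: least-squares linear regression.
    w, *_ = np.linalg.lstsq(X_part, y_part, rcond=None)
    return w

def parallel_fit(X, y, n_parts=8):
    """Partition the data, train in parallel, aggregate the hypotheses."""
    parts = list(zip(np.array_split(X, n_parts), np.array_split(y, n_parts)))
    with ProcessPoolExecutor() as pool:
        hypotheses = list(pool.map(fit_on_partition, parts))
    # Placeholder aggregation: average the learned parameter vectors.
    return np.mean(hypotheses, axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(10_000, 5))
    y = X @ np.arange(1, 6) + rng.normal(scale=0.1, size=10_000)
    print(parallel_fit(X, y))

Because each base learner sees only a fraction of the data, the per-unit runtime drops, while matching the accuracy of a single sequential run requires more data overall, which reflects the sample-complexity trade-off mentioned in the abstract.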
Author(s)
Kamp, Michael  
Boley, Mario  
Missura, Olana  
Gärtner, Thomas  
Mainwork
Advances in Neural Information Processing Systems 30, NIPS 2017. Online resource  
Conference
Conference on Neural Information Processing Systems (NIPS) 2017  
DOI
10.24406/publica-fhg-399015
File(s)
N-477162.pdf (504.39 KB)
Rights
Under Copyright
Language
English
Fraunhofer-Institut für Intelligente Analyse- und Informationssysteme IAIS  