Sparsity in Deep Neural Networks - An Empirical Investigation with TensorQuant

Loroch, Dominik Marek; Pfreundt, Franz-Josef; Wehn, Norbert; Keuper, Janis


Monreale, A.:
ECML PKDD 2018 Workshops : DMLE 2018 and IoTStream 2018, Dublin, Ireland, September 10-14, 2018; Revised selected papers
Cham: Springer Nature, 2019 (Communications in computer and information science 967)
ISBN: 978-3-030-14879-9 (Print)
ISBN: 978-3-030-14880-5 (Online)
ISBN: 3-030-14879-3
ISBN: 3-030-14880-7
European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD) <2018, Dublin>
Workshop "Decentralized Machine Learning on the Edge" (DMLE) <2018, Dublin>
Workshop on IoT Large Scale Machine Learning from Data Streams (IOTStream) <3, 2018, Dublin>
Conference Paper
Fraunhofer ITWM

Deep learning is finding its way into the embedded world with applications such as autonomous driving, smart sensors and augmented reality. However, the computation of deep neural networks is demanding in energy, compute power and memory. Various approaches have been investigated to reduce the necessary resources, one of which is to leverage the sparsity occurring in deep neural networks due to the high levels of redundancy in the network parameters. It has been shown that sparsity can be promoted explicitly and that very high sparsity levels can be achieved. However, these methods are often evaluated on rather small topologies, and it is not clear whether the results transfer to deeper topologies.
In this paper, the TensorQuant toolbox is extended to offer a platform for investigating sparsity, especially in deeper models. Several practically relevant topologies for varying classification problem sizes are investigated to show the differences in sparsity for activations, weights and gradients.
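The sparsity metric underlying such an investigation can be illustrated with a minimal sketch. This is not TensorQuant's actual API; the function name and the `threshold` parameter are illustrative. It simply measures the fraction of (near-)zero elements in a tensor, which is the quantity compared across activations, weights and gradients:

```python
import numpy as np

def sparsity(tensor, threshold=0.0):
    """Fraction of elements whose magnitude is at or below `threshold`.

    With threshold=0.0 this counts exact zeros; a small positive
    threshold additionally treats near-zero values as sparse.
    (Illustrative helper, not part of TensorQuant.)
    """
    tensor = np.asarray(tensor)
    return np.count_nonzero(np.abs(tensor) <= threshold) / tensor.size

# Toy example: a weight matrix with some exact zeros.
weights = np.array([[0.0, 0.5, 0.0],
                    [-0.2, 0.0, 0.1]])
print(sparsity(weights))         # exact zeros: 3 of 6 -> 0.5
print(sparsity(weights, 0.15))   # also counts 0.1 as sparse: 4 of 6
```

A nonzero threshold matters in practice because weights and gradients are rarely exactly zero; sparsity is then defined relative to a magnitude cutoff.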