
Layer-Wise Relevance Propagation: An Overview

Montavon, G.; Binder, A.; Lapuschkin, S.; Samek, W.; Müller, K.-R.


Samek, W.; Neural Information Processing Systems -NIPS- Foundation:
Explainable AI: Interpreting, Explaining and Visualizing Deep Learning
Cham: Springer Nature, 2019 (Lecture Notes in Artificial Intelligence 11700)
ISBN: 978-3-030-28953-9 (Print)
ISBN: 978-3-030-28954-6 (Online)
Workshop "Interpreting, Explaining and Visualizing Deep Learning ... now what?" <2017, Long Beach/Calif.>
Conference Paper
Fraunhofer HHI

For a machine learning model to generalize well, one needs to ensure that its decisions are supported by meaningful patterns in the input data. A prerequisite, however, is that the model is able to explain itself, e.g. by highlighting which input features it uses to support its prediction. Layer-wise Relevance Propagation (LRP) is a technique that brings such explainability and scales to potentially highly complex deep neural networks. It operates by propagating the prediction backward through the neural network, using a set of purposely designed propagation rules. In this chapter, we give a concise introduction to LRP with a discussion of (1) how to implement propagation rules easily and efficiently, (2) how the propagation procedure can be theoretically justified as a ‘deep Taylor decomposition’, (3) how to choose the propagation rules at each layer to deliver high explanation quality, and (4) how LRP can be extended to handle a variety of machine learning scenarios beyond deep neural networks.
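To illustrate the backward propagation the abstract describes, below is a minimal NumPy sketch of one commonly used propagation rule, the LRP-ε rule, applied to a single dense layer. The function name, the stabilizer value, and the toy dimensions are illustrative assumptions, not taken from the chapter itself; the chapter discusses several rules (LRP-0, LRP-ε, LRP-γ) and more efficient gradient-based implementations.

```python
import numpy as np

def lrp_epsilon(a, W, b, R_out, eps=1e-6):
    """Redistribute output relevance R_out onto the inputs of a dense layer
    using the LRP-epsilon rule: R_i = sum_j a_i*W_ij / (z_j + eps*sign(z_j)) * R_j.
    (Illustrative sketch; names and defaults are assumptions.)"""
    z = a @ W + b                               # forward pre-activations z_j
    z = z + eps * np.where(z >= 0, 1.0, -1.0)   # stabilizer avoids division by zero
    s = R_out / z                               # relevance per unit of activation
    return a * (s @ W.T)                        # redistribute by each input's contribution

# Toy example: one dense layer with ReLU outputs taken as initial relevance.
rng = np.random.default_rng(0)
a = rng.random(4)                    # input activations
W = rng.standard_normal((4, 3))      # weights
b = np.zeros(3)                      # zero bias keeps relevance (almost) conserved
R_out = np.maximum(a @ W + b, 0)     # relevance at the layer output
R_in = lrp_epsilon(a, W, b, R_out)   # relevance propagated back to the inputs
```

With zero biases and a small ε, the rule approximately conserves relevance: the sum of `R_in` matches the sum of `R_out`, which is the conservation property that motivates LRP's layer-wise design.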