
Explaining and Interpreting LSTMs

Arras, L.; Arjona-Medina, J.; Widrich, M.; Montavon, G.; Gillhofer, M.; Müller, K.-R.; Hochreiter, S.; Samek, W.


In: Samek, W.; Neural Information Processing Systems (NIPS) Foundation:
Explainable AI: Interpreting, Explaining and Visualizing Deep Learning
Cham: Springer Nature, 2019 (Lecture Notes in Artificial Intelligence 11700)
ISBN: 978-3-030-28953-9 (Print)
ISBN: 978-3-030-28954-6 (Online)
Workshop "Interpreting, Explaining and Visualizing Deep Learning ... now what?" (Long Beach, Calif., 2017)
Fraunhofer HHI

While neural networks have acted as a strong unifying force in the design of modern AI systems, the neural network architectures themselves remain highly heterogeneous due to the variety of tasks to be solved. In this chapter, we explore how to adapt the Layer-wise Relevance Propagation (LRP) technique used for explaining the predictions of feed-forward networks to the LSTM architecture used for sequential data modeling and forecasting. The special accumulators and gated interactions present in the LSTM require both a new propagation scheme and an extension of the underlying theoretical framework to deliver faithful explanations.
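To give a flavor of how such a propagation scheme can differ from the feed-forward case, here is a minimal NumPy sketch of two commonly used LRP rules for recurrent architectures: an epsilon-stabilized rule for linear (weighted-sum) connections, and a "signal-take-all" rule for multiplicative gate interactions, in which the gated signal receives all of the relevance and the gate receives none. The function names and the tiny numerical example are our own illustrations, not code from the chapter.

```python
import numpy as np

def lrp_linear(w, x, r_out, eps=1e-6):
    """Epsilon-LRP for a linear map z = w @ x.

    Each input x[j] receives relevance in proportion to its
    contribution w[k, j] * x[j] to each output z[k].
    """
    z = w @ x                                      # pre-activations, shape (out,)
    denom = z + eps * np.where(z >= 0, 1.0, -1.0)  # stabilized denominator
    contrib = w * x[None, :]                       # (out, in) contribution matrix
    return (contrib / denom[:, None]).T @ r_out    # relevance of inputs, shape (in,)

def lrp_gate(r_product):
    """Signal-take-all rule for a gated product c = gate * signal.

    The multiplicative gate is treated as a switch: all relevance
    flows to the signal, none to the gate.
    """
    return r_product, np.zeros_like(r_product)     # (r_signal, r_gate)

# Toy example: a 2x2 linear connection followed by a gated product.
w = np.array([[1.0, 2.0], [3.0, -1.0]])
x = np.array([0.5, 1.0])
r_out = np.array([1.0, 1.0])

r_in = lrp_linear(w, x, r_out)       # relevance redistributed onto the inputs
r_signal, r_gate = lrp_gate(r_out)   # gate interaction: signal takes all
```

A useful sanity check on the linear rule is (near-)conservation: up to the epsilon stabilizer, the relevance arriving at the inputs sums to the relevance that left the output layer, which is one of the properties a faithful propagation scheme is expected to preserve.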