
Towards Explainable Artificial Intelligence

Samek, W.; Müller, K.-R.


Samek, W.; Neural Information Processing Systems -NIPS- Foundation:
Explainable AI: Interpreting, Explaining and Visualizing Deep Learning
Cham: Springer Nature, 2019 (Lecture Notes in Artificial Intelligence 11700)
ISBN: 978-3-030-28953-9 (Print)
ISBN: 978-3-030-28954-6 (Online)
Workshop "Interpreting, Explaining and Visualizing Deep Learning ... now what?" <2017, Long Beach/Calif.>
Fraunhofer HHI

In recent years, machine learning (ML) has become a key enabling technology for the sciences and industry. Thanks especially to improvements in methodology, the availability of large databases, and increased computational power, today's ML algorithms achieve excellent performance (at times even exceeding the human level) on an increasing number of complex tasks. Deep learning models are at the forefront of this development. However, due to their nested non-linear structure, these powerful models have generally been considered "black boxes" that provide no information about what exactly makes them arrive at their predictions. Since in many applications, e.g., in the medical domain, such a lack of transparency may not be acceptable, the development of methods for visualizing, explaining, and interpreting deep learning models has recently attracted increasing attention. This introductory paper presents recent developments and applications in this field and makes a plea for a wider use of explainable learning algorithms in practice.
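To give a concrete flavor of the kind of explanation method the abstract alludes to, below is a minimal, hypothetical sketch of gradient-based saliency: the explanation is the gradient of the model output with respect to the input features, indicating how sensitive the prediction is to each feature. The logistic model, its weights, and the input are illustrative assumptions, not taken from the book itself.

```python
import numpy as np

# Hypothetical model: f(x) = sigmoid(w.x + b).
# The "explanation" is the input gradient df/dx, a simple
# saliency-style attribution over the input features.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(w, b, x):
    return sigmoid(np.dot(w, x) + b)

def saliency(w, b, x):
    # d/dx sigmoid(w.x + b) = s * (1 - s) * w, where s = f(x)
    s = predict(w, b, x)
    return s * (1.0 - s) * w

w = np.array([2.0, -1.0, 0.0])   # assumed weights for illustration
b = 0.1
x = np.array([0.5, 0.3, 0.8])    # assumed input

rel = saliency(w, b, x)
print(rel)
```

In this sketch, a positive entry in `rel` marks a feature pushing the prediction up, a negative entry one pushing it down, and a zero entry a feature the model ignores; methods surveyed in the book, such as layer-wise relevance propagation, refine this basic idea for deep networks.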