
Supplementing Machine Learning with Knowledge Models Towards Semantic Explainable AI

Sander, Jennifer; Kuwertz, Achim


Ahram, Tareq (Ed.):
Human Interaction, Emerging Technologies and Future Applications IV : Proceedings of the 4th International Conference on Human Interaction and Emerging Technologies: Future Applications (IHIET - AI 2021), April 28-30, 2021, Strasbourg, France
Cham: Springer Nature, 2021 (Advances in Intelligent Systems and Computing 1378)
ISBN: 978-3-030-73270-7 (Print)
ISBN: 978-3-030-74009-2 (Online)
ISBN: 978-3-030-73986-7
International Conference on Human Interaction and Emerging Technologies - Future Applications (IHIET-AI) <4, 2021, Strasbourg>
Fraunhofer IOSB
artificial intelligence; explainable artificial intelligence; explainability; machine learning; deep learning; knowledge model; knowledge engineering; semantics

Explainable Artificial Intelligence (XAI) aims at making the results of Artificial Intelligence (AI) applications more understandable. It may also help to understand the applications themselves and to gain insight into how results are obtained. Such capabilities are particularly required for Machine Learning approaches such as Deep Learning, which today must generally be considered black boxes. In recent years, various XAI approaches have become available. However, many of them adopt a mainly technical perspective and do not sufficiently take into account that giving a well-comprehensible explanation means providing the output in a human-understandable form. By supplementing Machine Learning with semantic knowledge models, Semantic XAI can fill some of these gaps. In this publication, we raise awareness of its potential and, taking Deep Learning for object recognition as an example, present initial research results on how to achieve explainability on a semantic level.
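The core idea sketched in the abstract, linking a classifier's raw output to a semantic knowledge model to produce a human-understandable explanation, can be illustrated with a minimal toy example. This sketch is an assumption for illustration only, not the authors' actual method: the knowledge model, class labels, and attributes below are invented.

```python
# Toy knowledge model: each recognized class is linked to semantic
# attributes a human can reason about (invented for illustration).
KNOWLEDGE_MODEL = {
    "ambulance": {
        "is_a": "emergency vehicle",
        "typical_features": ["red cross or star-of-life marking", "roof light bar"],
    },
    "delivery_van": {
        "is_a": "commercial vehicle",
        "typical_features": ["boxy cargo body", "company livery"],
    },
}

def explain(label: str, confidence: float) -> str:
    """Turn a raw classifier output (label + confidence) into a
    semantic, human-readable explanation via the knowledge model."""
    entry = KNOWLEDGE_MODEL.get(label)
    if entry is None:
        # No background knowledge: fall back to the bare classifier output.
        return (f"Recognized '{label}' ({confidence:.0%}), "
                f"but no background knowledge is available.")
    features = ", ".join(entry["typical_features"])
    return (f"Recognized '{label}' ({confidence:.0%}): an instance of "
            f"{entry['is_a']}, typically identified by {features}.")

print(explain("ambulance", 0.93))
```

In a real system, the dictionary would be replaced by a proper knowledge model (e.g. an ontology), and the explanation would be grounded in which learned features actually fired; the sketch only shows the semantic-lookup step.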