2019
Conference Paper
Title
Understanding Neural Network Decisions by Creating Equivalent Symbolic AI Models
Abstract
Different forms of neural networks have been used in recent years to solve a wide range of problems, typically problems that classic approaches of artificial intelligence and automation could not solve efficiently, such as handwriting recognition, speech recognition, or machine translation of natural languages. Yet it remains very hard to understand how exactly these different types of neural networks make their decisions in specific situations. We cannot verify them as we can verify, e.g., grammars, trees, and classic state machines. Being able to actually prove the reliability of artificial intelligence models becomes ever more important, especially when cyber-physical systems and humans are the subject of the AI's decisions. The aim of this paper is to introduce an approach for analyzing the decision processes of a neural network at a specific point of its training. To this end, we identify the characteristics that artificial neural networks have in common with classic symbolic AI models and the respects in which the two differ. In addition, we describe our first ideas on how to overcome these differences and how to derive from an artificial neural network either an equivalent symbolic model or at least a model similar enough to such a symbolic model to allow for its construction. Our long-term goal is to find, if possible, an appropriate bidirectional transformation between the two AI approaches.
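To make the abstract's central notion concrete, the following is a minimal, hypothetical sketch of one common way to obtain a symbolic stand-in for a trained network: fit an interpretable surrogate (here a decision tree) to the network's own predictions and measure how faithfully it reproduces them. All model choices, hyperparameters, and the surrogate-extraction technique are illustrative assumptions, not the method proposed in the paper.

```python
# Illustrative sketch (not the authors' method): approximate a trained
# neural network with a symbolic surrogate and check its fidelity.
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import accuracy_score

X, y = make_moons(n_samples=2000, noise=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The opaque model whose decisions we want to understand.
net = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
net.fit(X_train, y_train)

# Fit the symbolic model to the *network's* outputs, not the true labels,
# so the tree approximates the network's decision function.
tree = DecisionTreeClassifier(max_depth=5, random_state=0)
tree.fit(X_train, net.predict(X_train))

# "Fidelity": how often the surrogate agrees with the network on unseen data.
fidelity = accuracy_score(net.predict(X_test), tree.predict(X_test))
print(f"surrogate fidelity to the network: {fidelity:.3f}")

# The tree itself is a human-readable symbolic artifact.
print(export_text(tree, feature_names=["x1", "x2"]))
```

A surrogate of this kind is only an approximation of the network, which is precisely the gap the paper addresses: the difference between a model that merely imitates a network's decisions and one that is provably equivalent to it.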