GMD – Forschungszentrum Informationstechnik GmbH
Now showing 1–10 of 40
Publication: Explanation and artificial neural networks (1992)
Diederich, J.
Explanation is an important function in symbolic artificial intelligence (AI). For instance, explanation is used in machine learning and in case-based reasoning, and, most importantly, the explanation of the results of a reasoning process to a user must be a component of any inference system. Experience with expert systems has shown that the ability to generate explanations is crucial for user acceptance of AI systems. In contrast to symbolic systems, neural networks have no explicit, declarative knowledge representation and therefore have considerable difficulty generating explanation structures. In neural networks, knowledge is encoded in numeric parameters (weights) and distributed across the entire system. It is the intention of this paper to discuss the ability of neural networks to generate explanations. It is shown that connectionist systems benefit from the explicit coding of relations and from the use of highly structured networks that allow explanation and explanation components (ECs). Connectionist semantic networks (CSNs), i.e. connectionist systems with an explicit conceptual hierarchy, belong to a class of artificial neural networks which can be extended by an explanation component that gives meaningful responses to a limited class of "How" questions. An explanation component of this kind is described in detail.
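The abstract's central idea, that an explicit, labelled network topology makes "How" questions answerable, can be illustrated with a toy localist sketch. The concepts, weights, and threshold below are invented for illustration and are not taken from the paper: each unit carries a label, and "How was unit X activated?" is answered by tracing which predecessor units contributed most strongly to its net input.

```python
# Hypothetical sketch of an explanation component for a localist network.
# The topology, labels, and threshold are illustrative, not from the paper.

weights = {
    # (source, target): weight -- an explicit, labelled topology
    ("has-wings", "bird"): 0.9,
    ("lays-eggs", "bird"): 0.6,
    ("barks", "bird"): -0.8,
    ("bird", "canary"): 0.7,
    ("is-yellow", "canary"): 0.5,
}

def activations(inputs):
    """One propagation step through the two-layer topology above."""
    act = dict(inputs)  # input units are clamped to the given values
    for layer in ("bird", "canary"):
        net = sum(act.get(src, 0.0) * w
                  for (src, tgt), w in weights.items() if tgt == layer)
        act[layer] = max(0.0, net)  # simple rectifying unit
    return act

def explain_how(unit, act, threshold=0.1):
    """Answer 'How was `unit` activated?' with its strong positive contributors."""
    contribs = {src: act.get(src, 0.0) * w
                for (src, tgt), w in weights.items() if tgt == unit}
    return sorted((s for s, c in contribs.items() if c > threshold),
                  key=lambda s: -contribs[s])

act = activations({"has-wings": 1.0, "lays-eggs": 1.0, "is-yellow": 1.0})
print(explain_how("bird", act))  # contributors, strongest first
```

The explanation is only possible because every unit and link is explicitly labelled, which is exactly the property the abstract claims distinguishes CSNs from ordinary distributed networks.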
Publication: Recurrent and feedforward networks for human-computer interaction (1992)
Diederich, J.; Thümmel, A.; Bartels, E.
The classification, selection and organization of electronic messages (e-mail) is a task that can be supported by an artificial neural network (ANN). The ANNs (simple recurrent networks and feedforward nets) extract relevant information from incoming messages during a training period, learn the reaction to an incoming message, i.e. a sequence of user actions, and use the learned representation to propose user reactions. The results show that (1) simple recurrent nets and feedforward networks can learn a mapping from random input vectors (a coding of an incoming message) to output patterns (sequences of user reactions), (2) both types of networks absorb random noise as part of the input pattern and generalize well, and (3) simple recurrent networks for sequence production, though significantly larger, learn faster than feedforward nets trained on a similarly structured data set.
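Finding (1) can be sketched with a minimal feedforward net trained by the delta rule to map coded messages to coded action sequences. The data, layer sizes, and learning rate below are invented for illustration; the paper's actual architectures and e-mail codings are not reproduced here.

```python
# Minimal sketch of finding (1): a single-layer feedforward net trained
# by the delta rule to map "message codings" (random bit vectors) to
# "action sequence" codings. All sizes and data are illustrative.
import math
import random

random.seed(0)
N_IN, N_OUT = 8, 3

# Four toy message codings (with a constant bias input appended) and
# four toy action-sequence codings as targets.
inputs = [[random.choice([0.0, 1.0]) for _ in range(N_IN)] + [1.0]
          for _ in range(4)]
targets = [[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 0]]

W = [[random.uniform(-0.1, 0.1) for _ in range(N_IN + 1)]
     for _ in range(N_OUT)]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(x):
    return [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in W]

def total_error():
    return sum((t[j] - forward(x)[j]) ** 2
               for x, t in zip(inputs, targets) for j in range(N_OUT))

before = total_error()
for _ in range(2000):                 # delta-rule (gradient) updates
    for x, t in zip(inputs, targets):
        y = forward(x)
        for j in range(N_OUT):
            delta = (t[j] - y[j]) * y[j] * (1.0 - y[j])
            for i in range(N_IN + 1):
                W[j][i] += 0.5 * delta * x[i]
after = total_error()
print(f"squared error: {before:.3f} -> {after:.3f}")
```

A simple recurrent network for finding (3) would add a context layer that copies the hidden state back as extra input at each step; that extension is omitted here to keep the sketch short.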
Publication: Künstliche neuronale Netze und elektronische Kommunikation [Artificial neural networks and electronic communication] (GMD, 1992)
Diederich, J.; Wasserschaff, M.; Mark, S.
Publication: Spreading activation and connectionist models for natural language processing (1990)
Diederich, J.
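The spreading-activation technique named in this title can be illustrated with a small sketch: activation starts at a source concept and propagates along links of a semantic network, attenuated at each hop. The network topology and decay value below are invented for illustration and are not taken from the paper.

```python
# Minimal sketch of spreading activation over a toy semantic network.
# Topology and decay are illustrative, not from the paper.

links = {
    "canary": ["bird", "yellow"],
    "bird":   ["animal", "wings"],
    "animal": ["living-thing"],
}

def spread(source, steps=2, decay=0.5):
    """Propagate activation outward from `source`, halved at each link."""
    act = {source: 1.0}
    frontier = {source}
    for _ in range(steps):
        nxt = set()
        for node in frontier:
            for nb in links.get(node, []):
                a = act[node] * decay
                if a > act.get(nb, 0.0):  # keep the strongest activation
                    act[nb] = a
                    nxt.add(nb)
        frontier = nxt
    return act

print(spread("canary"))
# 'bird'/'yellow' receive 0.5; 'animal'/'wings' receive 0.25
```

In natural language processing, activation levels of this kind are typically read off after a few steps to rank related concepts, e.g. for word-sense disambiguation or associative retrieval.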