  • Publication
    Supported Decision-Making by Explainable Predictions of Ship Trajectories
    Machine learning and deep learning models make accurate predictions for the specific tasks they are trained on, for instance classifying ship vessel types based on their trajectories and other features. Such predictions can support human experts who need to obtain information on ships, e.g., to control illegal fishing. Beyond predicting a certain ship type, there is a need to explain the decision-making behind the classification, for example which features contributed most to the predicted ship type. This paper applies existing explanation approaches to the task of ship classification. The underlying model is a Residual Neural Network trained on an AIS data set. Further, we illustrate the explainability approaches by means of an explanatory case study and conduct a first experiment with a human expert.
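    To make the idea of feature attribution for a trajectory classifier concrete, here is a minimal sketch using gradient-times-input saliency on a small, hypothetical PyTorch network; the model, feature names, and random input are illustrative assumptions, not the ResNet or AIS data used in the paper.

```python
# Illustrative sketch only: gradient-x-input attribution for a hypothetical
# trajectory classifier. Model, features, and data are assumptions, not the
# paper's ResNet/AIS setup.
import torch
import torch.nn as nn

FEATURES = ["speed", "course", "lat", "lon", "length", "draught"]  # hypothetical

class TinyTrajectoryNet(nn.Module):
    def __init__(self, n_features=len(FEATURES), n_classes=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 32), nn.ReLU(),
            nn.Linear(32, n_classes),
        )

    def forward(self, x):
        return self.net(x)

model = TinyTrajectoryNet()
x = torch.randn(1, len(FEATURES), requires_grad=True)  # one stand-in ship sample

logits = model(x)
pred_class = int(logits.argmax(dim=1))

# Gradient of the predicted class score w.r.t. the input features.
logits[0, pred_class].backward()
attribution = (x.grad * x).detach().squeeze()

# Rank features by the magnitude of their contribution to the prediction.
for name, score in sorted(zip(FEATURES, attribution.tolist()),
                          key=lambda p: -abs(p[1])):
    print(f"{name:8s} {score:+.3f}")
```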
  • Publication
    Explanation Framework for Intrusion Detection
    (2021) Franz, Maximilian; Huber, Marco F.
    Machine learning and deep learning are widely used in various applications to assist or even replace human reasoning. For instance, a machine learning based intrusion detection system (IDS) monitors a network for malicious activity or specific policy violations. We propose that IDSs should attach a sufficiently understandable report to each alert so that the operator can review alerts more efficiently. This work aims at complementing an IDS by means of a framework for creating explanations. The explanations support the human operator in understanding alerts and reveal potential false positives. The focus lies on counterfactual instances and explanations based on locally faithful decision boundaries.
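    As a rough illustration of the counterfactual idea, the sketch below greedily perturbs single features of an "alert" sample until a stand-in classifier no longer flags it; the random forest, feature names, and synthetic data are assumptions, not the paper's IDS or explanation framework.

```python
# Illustrative sketch of a counterfactual explanation for an alert.
# The classifier and feature set are hypothetical stand-ins for an IDS model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
FEATURES = ["duration", "bytes_sent", "bytes_recv", "n_failed_logins"]  # assumed

# Synthetic training data: label 1 = "alert", 0 = "benign".
X = rng.normal(size=(500, len(FEATURES)))
y = (X[:, 3] + 0.5 * X[:, 1] > 0.8).astype(int)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

def counterfactual(x, clf, step=0.25, max_iter=200):
    """Greedy search: nudge single features until the prediction flips to benign."""
    x_cf = x.copy()
    for _ in range(max_iter):
        if clf.predict(x_cf.reshape(1, -1))[0] == 0:
            return x_cf
        # Try each feature in both directions; keep the move that lowers
        # the alert probability the most.
        best, best_p = None, clf.predict_proba(x_cf.reshape(1, -1))[0, 1]
        for i in range(len(x_cf)):
            for d in (-step, step):
                cand = x_cf.copy()
                cand[i] += d
                p = clf.predict_proba(cand.reshape(1, -1))[0, 1]
                if p < best_p:
                    best, best_p = cand, p
        if best is None:
            break
        x_cf = best
    return x_cf

alert = X[y == 1][0]
cf = counterfactual(alert, clf)
for name, before, after in zip(FEATURES, alert, cf):
    if not np.isclose(before, after):
        print(f"{name}: {before:.2f} -> {after:.2f}")
```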
  • Publication
    Batch-wise Regularization of Deep Neural Networks for Interpretability
    (2020) Faller, Philipp M.; Peinsipp, Elisabeth
    Fast progress in the field of machine learning and deep learning strongly influences research in many application domains such as autonomous driving or health care. In this paper, we propose a batch-wise regularization technique to enhance the interpretability of deep neural networks (NNs) by means of a global surrogate rule list. For this purpose, we introduce a novel regularization approach that yields a differentiable penalty term. Compared to other regularization approaches, our approach avoids repeatedly creating surrogate models during training of the NN. The experiments show that the proposed approach achieves high fidelity to the main model and also results in interpretable and more accurate models compared to some of the baselines.
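    The general training structure, a task loss plus a differentiable interpretability penalty applied batch-wise, can be sketched as follows; the L1 sparsity term used here is a much simplified stand-in for the paper's rule-list-based penalty, and the model and data are illustrative.

```python
# Structure of batch-wise regularized training (illustrative only).
# The L1 penalty below is a simplified, differentiable stand-in for the
# paper's rule-list-based penalty term.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
task_loss = nn.CrossEntropyLoss()
lam = 0.01  # trade-off between accuracy and interpretability

def interpretability_penalty(model):
    # Differentiable proxy: sparse first-layer weights mean decisions depend
    # on few input features, which keeps an extracted rule list short.
    return model[0].weight.abs().mean()

for step in range(100):                      # dummy training loop
    xb = torch.randn(64, 10)                 # stand-in batch
    yb = (xb[:, 0] > 0).long()               # stand-in labels
    loss = task_loss(model(xb), yb) + lam * interpretability_penalty(model)
    opt.zero_grad()
    loss.backward()
    opt.step()
```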
  • Publication
    Comparison of Angle and Size Features with Deep Learning for Emotion Recognition
    (2019) Dunau, Patrick
    The robust recognition of a person's emotion from images is an important task in human-machine interaction. This task can be considered a classification problem, for which a plethora of methods exists. In this paper, the emotion recognition performance of two fundamentally different approaches is compared: classification based on hand-crafted features against deep learning. This comparison is conducted by means of well-established datasets and highlights the benefits and drawbacks of each approach.
  • Publication
    Gaussian Process based Dynamic Facial Emotion Tracking
    (2019) Dunau, Patrick
    Capturing the emotions of humans is of paramount importance in human-machine interaction. Here, emotions are typically extracted from the human's face recorded in image sequences. In this paper, tracking emotions from images is formulated as a Bayesian state estimation problem where the system state represents the valence-arousal space of emotions. Handcrafted image features are first mapped to the valence-arousal space by means of a Gaussian process. To allow dynamic emotion tracking, a Kalman filter is derived in which an inequality constraint on the emotional state is employed to avoid a drifting state. Experiments based on two well-known facial expression datasets demonstrate the performance of the proposed approach.
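    A minimal numpy sketch of the filtering step described above: a Kalman predict/update on a two-dimensional valence-arousal state, followed by a projection onto a feasible box to enforce the inequality constraint. The model matrices, noise levels, bounds, and the omitted GP mapping are assumptions, not the paper's exact formulation.

```python
# Illustrative constrained Kalman filter over a 2-D valence-arousal state.
# Matrices, noise levels, and the [-1, 1] bound are assumptions; the GP
# observation mapping from image features is omitted here.
import numpy as np

F = np.eye(2)                # random-walk state transition
H = np.eye(2)                # GP output observed directly (simplification)
Q = 0.01 * np.eye(2)         # process noise
R = 0.10 * np.eye(2)         # observation noise

x = np.zeros(2)              # [valence, arousal]
P = np.eye(2)

def kf_step(x, P, z):
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(2) - K @ H) @ P_pred
    # Enforce the inequality constraint by projecting onto the feasible box,
    # which keeps the emotional state from drifting out of range.
    x_new = np.clip(x_new, -1.0, 1.0)
    return x_new, P_new

for z in np.random.default_rng(0).normal(scale=0.3, size=(10, 2)):
    x, P = kf_step(x, P, z)  # z would come from the GP mapping of image features
    print(np.round(x, 3))
```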
  • Publication
    Forcing Interpretability for Deep Neural Networks through Rule-based Regularization
    (2019) Faller, Philipp M.
    Remarkable progress in the field of machine learning strongly drives research in many application domains. For some domains, it is mandatory that the output of machine learning algorithms be interpretable. In this paper, we propose a rule-based regularization technique to enforce interpretability for neural networks (NNs). For this purpose, we train a rule-based surrogate model simultaneously with the NN. From the surrogate, a metric quantifying its degree of explainability is derived and fed back to the training of the NN as a regularization term. We evaluate our model on four datasets and compare it to unregularized models as well as a decision tree (DT) based baseline. The rule-based regularization approach achieves interpretability and competitive accuracy.
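    The overall idea, training the network while a surrogate is fit alongside it and its behavior is fed back as a regularization term, might be sketched roughly as below; the shallow decision tree, fidelity penalty, and synthetic data are illustrative stand-ins for the paper's rule-based surrogate and explainability metric.

```python
# Illustrative sketch: train an NN while fitting a decision-tree surrogate on
# each batch and penalizing disagreement with it. Depth, penalty weight, and
# data are assumptions; the paper's surrogate and metric differ in detail.
import torch
import torch.nn as nn
from sklearn.tree import DecisionTreeClassifier

model = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()
lam = 0.1

for step in range(50):
    xb = torch.randn(128, 8)                 # stand-in batch
    yb = (xb[:, 0] + xb[:, 1] > 0).long()    # stand-in labels

    logits = model(xb)
    # Fit a shallow surrogate on the network's current hard predictions.
    surrogate = DecisionTreeClassifier(max_depth=3)
    surrogate.fit(xb.numpy(), logits.argmax(dim=1).detach().numpy())
    surrogate_labels = torch.from_numpy(surrogate.predict(xb.numpy())).long()

    # Fidelity penalty: push the NN towards behavior the surrogate can express.
    loss = ce(logits, yb) + lam * ce(logits, surrogate_labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
```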
  • Publication
    Situation responsive networking of mobile robots for disaster management
    (2014) Kuntze, Helge-Björn; Frey, Christian W.; Walter, Moriz; Müller, Fabian
    If a natural disaster like an earthquake or an accident in a chemical or nuclear plant hits a populated area, rescue teams have to get a quick overview of the situation in order to identify possible locations of victims, which need to be rescued, and dangerous locations, which need to be secured. Rescue forces must operate quickly in order to save lives, and they often need to operate in dangerous environments. Hence, robot-supported systems are increasingly used to support and accelerate search operations. The objective of the SENEKA concept is the situation-responsive networking of the various robots and sensor systems used by first responders in order to make the search for victims and survivors quicker and more efficient. SENEKA targets the integration of the robot-sensor network into the operation procedures of the rescue teams. The aim of this paper is to report on the objectives and first research results of the ongoing joint research project SENEKA.
  • Publication
    SENEKA - sensor network with mobile robots for disaster management
    (2012) Kuntze, Helge-Björn; Frey, Christian W.; Staehle, Barbara; Wenzel, Andreas
    Developed societies have a high level of preparedness for natural or man-made disasters. But such incidents cannot be completely prevented, and when an incident like an earthquake or an accident in a chemical or nuclear plant hits a populated area, rescue teams need to be deployed. In such situations, rescue teams must get a quick overview of the situation in order to identify possible locations of victims that need to be rescued and dangerous locations that need to be secured. Rescue forces must operate quickly in order to save lives, and they often need to operate in dangerous environments. Hence, robot-supported systems are increasingly used to support and accelerate search operations. The objective of the SENEKA concept is to network the various robots and sensor systems used by first responders in order to make the search for victims and survivors quicker and more efficient. SENEKA targets the integration of the robot-sensor network into the operation procedures of the rescue teams. The aim of this paper is to report on the goals and first research results of the ongoing joint research project SENEKA.