  • Publication
    Validation of XAI Explanations for Multivariate Time Series Classification in the Maritime Domain
    Due to the lack of explanation of their internal mechanisms, state-of-the-art deep learning-based classifiers are often considered black-box models. In the maritime domain, for instance, models that classify ship types based on their trajectories and other features perform well but give no further explanation for their predictions. To gain the trust of the human operators responsible for critical decisions, the reason behind a classification is crucial. In this paper, we introduce explainable artificial intelligence (XAI) approaches to the task of ship type classification. This supports decision-making by providing explanations in terms of the features that contribute most to the prediction, along with their corresponding time intervals. For the LIME explainer, we adapt the time-slice mapping technique (LimeforTime), while for Shapley additive explanations (SHAP) and path integrated gradients (PIG), we represent the relevance of each input variable to generate a heatmap as an explanation. In order to validate the XAI results, the existing perturbation and sequence analyses for classifiers of univariate time series data are employed for testing and evaluating the XAI explanations on multivariate time series. Furthermore, we introduce a novel evaluation technique to assess the quality of the explanations yielded by the chosen XAI methods.
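    The perturbation analysis mentioned in this abstract amounts to masking the parts of the input that an explainer marks as relevant and checking whether the prediction degrades. The following Python sketch illustrates that idea only in generic terms; the function name, the mean-substitution strategy, and the (time steps, features) input layout are assumptions, not the paper's exact protocol.

        import numpy as np

        def perturbation_check(predict_proba, x, relevance, k=0.1):
            """Mask the top-k fraction of (time step, feature) cells ranked by
            |relevance| and report the drop in the predicted class probability.
            predict_proba: callable mapping a (1, T, F) array to class probabilities.
            x:             one multivariate series, shape (T, F).
            relevance:     heatmap from SHAP/PIG/LimeforTime, same shape as x."""
            base = predict_proba(x[None])[0]
            cls = int(np.argmax(base))

            flat = np.abs(relevance).ravel()
            n_mask = max(1, int(k * flat.size))
            top = np.argsort(flat)[-n_mask:]        # most relevant cells

            # Replace masked cells with the per-feature mean (a simple, arbitrary choice).
            means = np.repeat(x.mean(axis=0)[None, :], x.shape[0], axis=0).ravel()
            x_pert = x.copy().ravel()
            x_pert[top] = means[top]
            x_pert = x_pert.reshape(x.shape)

            drop = base[cls] - predict_proba(x_pert[None])[0][cls]
            return drop                             # larger drop suggests a more faithful explanation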
  • Publication
    A Survey on the Explainability of Supervised Machine Learning
    (2021); Huber, Marco F.
    Predictions obtained by, e.g., artificial neural networks are highly accurate, but humans often perceive the models as black boxes: insights into their decision making remain largely opaque. Understanding the decision making is of paramount importance, particularly in highly sensitive areas such as healthcare or finance. The decision making behind such black boxes needs to become more transparent, accountable, and understandable for humans. This survey paper provides essential definitions and an overview of the different principles and methodologies of explainable Supervised Machine Learning (SML). We conduct a state-of-the-art survey that reviews past and recent explainable SML approaches and classifies them according to the introduced definitions. Finally, we illustrate the principles by means of an explanatory case study and discuss important future research directions.
  • Publication
    Are you sure? Prediction revision in automated decision-making
    With the rapid improvements in machine learning and deep learning, the number of decisions made by automated decision support systems (DSS) will increase. Besides the accuracy of predictions, their explainability becomes more important. The algorithms can construct complex mathematical prediction models, which leaves users uncertain about the predictions and raises the need to equip the algorithms with explanations. To examine how users trust automated DSS, an experiment was conducted. Our research aim is to examine how participants supported by a DSS revise their initial prediction under four different approaches (treatments) in a between-subject design study. The four treatments differ in the degree of explainability provided to understand the predictions of the system: first an interpretable regression model, second a Random Forest (considered a black box, BB), third the BB with a local explanation, and last the BB with a global explanation. We observed that all participants improved their predictions after receiving advice, whether it came from a complete BB or from a BB with an explanation. The major finding was that interpretable models were not incorporated into the decision process more than BB models or BB models with explanations.
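    To make the four treatments concrete, a minimal scikit-learn sketch could look like the following. It is illustrative only: the study's actual data, models, and explanation tooling are not specified here, and the local surrogate is a stand-in for whatever local explainer was used.

        import numpy as np
        from sklearn.datasets import make_regression
        from sklearn.linear_model import LinearRegression
        from sklearn.ensemble import RandomForestRegressor

        X, y = make_regression(n_samples=500, n_features=5, noise=10.0, random_state=0)

        # Treatment 1: interpretable model -- coefficients are the explanation.
        interpretable = LinearRegression().fit(X, y)
        print("T1 coefficients:", interpretable.coef_)

        # Treatment 2: black-box model, advice only, no explanation shown.
        bb = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

        # Treatment 3: black box + local explanation for one instance,
        # approximated here by a local linear surrogate around that instance.
        x0 = X[0]
        neighborhood = x0 + np.random.normal(scale=0.1, size=(200, X.shape[1]))
        local_surrogate = LinearRegression().fit(neighborhood, bb.predict(neighborhood))
        print("T3 local weights:", local_surrogate.coef_)

        # Treatment 4: black box + global explanation via impurity-based importances.
        print("T4 global importances:", bb.feature_importances_)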
  • Publication
    Modular and scalable automation for field robots
    This article describes a modular and scalable charging and navigation concept for electrified field robots and other agricultural machines. The concept consists of an underbody charging system on a trailer and a modular navigation box. The underlying conductive charging process is compared to other charging techniques, and the charging time in relation to charging current as well as the mean power consumption in field use are presented. In the navigation box, data from various sensors are combined by multi-sensor fusion with respect to the precise time of arrival. Time synchronization is achieved by a novel method that compensates the data latency jitter using Kalman-based timestamp filtering. Furthermore, navigation functionalities such as motion planning and mapping are presented.
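    For orientation, Kalman-based timestamp filtering can be sketched as a small 1D filter that estimates a smooth clock offset (and drift) from jittery arrival times. The state layout, noise values, and constant-drift assumption below are illustrative and not the article's implementation.

        import numpy as np

        def filter_timestamps(send_times, recv_times, q=1e-6, r=1e-3):
            """Estimate a smooth clock offset between sensor send times and host
            receive times, removing latency jitter. State: [offset, drift]."""
            x = np.array([recv_times[0] - send_times[0], 0.0])   # initial offset, drift
            P = np.eye(2)
            H = np.array([[1.0, 0.0]])                           # we observe the offset only
            filtered = []
            for k in range(1, len(send_times)):
                dt = send_times[k] - send_times[k - 1]
                F = np.array([[1.0, dt], [0.0, 1.0]])
                # Predict offset/drift forward in time.
                x = F @ x
                P = F @ P @ F.T + q * np.eye(2)
                # Update with the measured offset (jittered by transmission latency).
                z = recv_times[k] - send_times[k]
                y = z - H @ x
                S = H @ P @ H.T + r
                K = P @ H.T / S
                x = x + (K * y).ravel()
                P = (np.eye(2) - K @ H) @ P
                filtered.append(send_times[k] + x[0])            # corrected timestamp
            return np.array(filtered)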
  • Publication
    Supported Decision-Making by Explainable Predictions of Ship Trajectories
    Machine learning and deep learning models make accurate predictions for the tasks they are trained on, for instance, classifying ship types based on their trajectory and other features. This can support human experts while they try to obtain information on ships, e.g., to control illegal fishing. Besides supporting the prediction of a certain ship type, there is a need to explain the decision making behind the classification, for example, which features contributed most to the classification of the ship type. This paper introduces existing explanation approaches to the task of ship classification. The underlying model is a Residual Neural Network trained on an AIS data set. Further, we illustrate the explainability approaches by means of an explanatory case study and conduct a first experiment with a human expert.
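    A residual block over trajectory sequences, as used in this kind of classifier, might look like the following PyTorch sketch. The channel count, kernel size, and overall layout are illustrative assumptions, not the paper's exact network.

        import torch
        import torch.nn as nn

        class ResidualBlock1D(nn.Module):
            """One residual block over a trajectory tensor of shape (batch, channels, time)."""
            def __init__(self, channels, kernel_size=7):
                super().__init__()
                pad = kernel_size // 2
                self.conv1 = nn.Conv1d(channels, channels, kernel_size, padding=pad)
                self.bn1 = nn.BatchNorm1d(channels)
                self.conv2 = nn.Conv1d(channels, channels, kernel_size, padding=pad)
                self.bn2 = nn.BatchNorm1d(channels)
                self.relu = nn.ReLU()

            def forward(self, x):
                out = self.relu(self.bn1(self.conv1(x)))
                out = self.bn2(self.conv2(out))
                return self.relu(out + x)           # skip connection: the "residual" part

        # Example: 6 input channels (e.g., lat, lon, speed, course, ...), 128 time steps.
        block = ResidualBlock1D(channels=6)
        features = block(torch.randn(4, 6, 128))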
  • Publication
    Explanation Framework for Intrusion Detection
    (2021); Franz, Maximilian; Huber, Marco F.
    Machine learning and deep learning are widely used in various applications to assist or even replace human reasoning. For instance, a machine learning-based intrusion detection system (IDS) monitors a network for malicious activity or specific policy violations. We propose that an IDS should attach a sufficiently understandable report to each alert so that operators can review alerts more efficiently. This work aims at complementing an IDS with a framework for creating explanations. The explanations support the human operator in understanding alerts and reveal potential false positives. The focus lies on counterfactual instances and on explanations based on locally faithful decision boundaries.
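    A counterfactual instance answers the question: what minimal change to this alert's features would flip the classifier's verdict? The following greedy search is a naive, hypothetical sketch for any sklearn-style classifier; the step size, distance notion, and the assumption of integer class labels 0..C-1 are arbitrary choices, not the framework's actual method.

        import numpy as np

        def greedy_counterfactual(clf, x, target_class, step=0.05, max_iter=200):
            """Nudge one feature at a time toward flipping the prediction, keeping the
            counterfactual close to the original alert x. Assumes integer class labels
            0..C-1 so the label can double as a probability column index."""
            cf = x.astype(float).copy()
            for _ in range(max_iter):
                if clf.predict(cf[None])[0] == target_class:
                    return cf                       # prediction flipped
                best, best_gain = None, 0.0
                p0 = clf.predict_proba(cf[None])[0][target_class]
                for j in range(len(cf)):
                    for delta in (step, -step):
                        cand = cf.copy()
                        cand[j] += delta
                        gain = clf.predict_proba(cand[None])[0][target_class] - p0
                        if gain > best_gain:
                            best, best_gain = cand, gain
                if best is None:
                    break                           # no single-feature move helps
                cf = best
            return None                             # no counterfactual found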
  • Publication
    ASARob - Aufmerksamkeitssensitiver Assistenzroboter
    (2020); Bachter, Hannes; Messmer, Felix; Mosmann, Victor; Putze, Felix; Reich, Daniel; Reiser, Ulrich; Romanelli, Massimo; Scheck, Kevin; Schultz, Tanja
    The goal of the ASARob project was the implementation of robust attention detection and attention guidance for robot-human interaction. To this end, multimodal methods for attention detection and guidance were integrated into the existing mobile robot platform Care-O-bot 4 (care-o-bot.de). The fused methods served as central basic capabilities of the robot, enriching existing assistance functions, such as guiding people to specified places or fetching and delivering objects, and allowing them to be carried out in an intuitive dialog with the user. Attention detection was used in particular to let the robot approach people and conversation partners in an expectation-conforming and context-adapted manner. The multimodal sensing of people and their surroundings was intended to ensure that the attention of persons can be tracked robustly and fault-tolerantly even in unstructured environments such as those expected in everyday life. For example, by capturing users' gaze direction, head rotation, speech, voice, and body posture, partially redundant perception channels were implemented that complement each other and, in particular, provide fallback options through confidence-based information fusion if a channel fails (e.g., when only the back of the head is visible and the user's eyes cannot be seen).
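    In its simplest form, the confidence-based fusion with fallback described above could look like the following hypothetical Python sketch; the channel names, confidence semantics, and the weighted-average rule are illustrative assumptions, not the project's implementation.

        import numpy as np

        def fuse_attention(estimates):
            """Fuse per-channel attention estimates into one score.
            estimates: dict channel -> (attention score in [0, 1], confidence in [0, 1]);
            channels that dropped out report confidence 0 and are ignored."""
            scores = np.array([s for s, c in estimates.values()])
            confs = np.array([c for s, c in estimates.values()])
            if confs.sum() == 0.0:
                return None                         # all channels failed: no estimate
            return float(np.dot(scores, confs) / confs.sum())

        # Example: gaze unavailable (back of the head), head pose and voice still usable.
        fused = fuse_attention({
            "gaze": (0.0, 0.0),
            "head_pose": (0.8, 0.6),
            "voice": (0.7, 0.4),
        })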
  • Publication
    Batch-wise Regularization of Deep Neural Networks for Interpretability
    (2020); Faller, Philipp M.; Peinsipp, Elisabeth
    Fast progress in the field of machine learning and deep learning strongly influences research in many application domains such as autonomous driving or health care. In this paper, we propose a batch-wise regularization technique to enhance the interpretability of deep neural networks (NNs) by means of a global surrogate rule list. For this purpose, we introduce a novel regularization approach that yields a differentiable penalty term. Compared to other regularization approaches, our approach avoids repeatedly creating surrogate models during training of the NN. The experiments show that the proposed approach has high fidelity to the main model and also results in interpretable and more accurate models compared to some of the baselines.
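    In general terms, such a technique adds a differentiable penalty to each batch's loss. The sketch below uses a generic input-gradient sparsity penalty purely as a placeholder; the paper's actual surrogate-rule-list penalty is not reproduced here.

        import torch
        import torch.nn as nn

        def train_step(model, batch_x, batch_y, optimizer, lam=0.1):
            """One batch update: task loss plus a differentiable interpretability penalty.
            The L1 input-gradient penalty is only a stand-in for the paper's term."""
            batch_x = batch_x.requires_grad_(True)
            logits = model(batch_x)
            task_loss = nn.functional.cross_entropy(logits, batch_y)

            # Placeholder penalty: encourage sparse input gradients (simpler decision surface).
            grads = torch.autograd.grad(task_loss, batch_x, create_graph=True)[0]
            penalty = grads.abs().mean()

            loss = task_loss + lam * penalty
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            return loss.item()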
  • Publication
    Deutsche Normungsroadmap Künstliche Intelligenz
    The German Standardization Roadmap on Artificial Intelligence (AI) aims to provide recommendations for standardization activities around AI, which is regarded in Germany and Europe as one of the key technologies for future competitiveness in almost all industries. The EU expects the economy to grow strongly in the coming years with the help of AI. This makes the roadmap's recommendations all the more important: they are intended to strengthen German industry and science in international AI competition, create innovation-friendly conditions, and build trust in the technology.
  • Publication
    A Study on Trust in Black Box Models and Post-hoc Explanations
    (2019); El Bekri, Nadia; Kling, J.; Huber, M.
    Machine learning algorithms that construct complex prediction models are increasingly used for decision-making because of their high accuracy, e.g., to decide whether a bank customer should receive a loan. Due to their complexity, these models are perceived as black boxes. One approach is to augment the models with post-hoc explainability. In this work, we conduct a within-subject design study to evaluate three different explanation approaches with respect to the users' initial trust, their trust in the provided explanation, and the trust they establish in the black box.