  • Publication
    Explainable AI for sensor-based sorting systems
    Explainable artificial intelligence (XAI) can make machine-learning-based systems more transparent. This additional transparency can enable the use of machine learning in many different domains. In our work, we show how XAI methods can be applied to an autoencoder for anomaly detection in a sensor-based sorting system. The sorting system consists of a vibrating feeder, a conveyor belt, a line-scan camera, and an array of fast-switching pneumatic valves. It separates a material stream into two fractions, realizing a binary sorting task. The autoencoder learns to mimic the normal behavior of the nozzle array and can thus detect abnormal behavior. XAI methods are used to explain the output of the autoencoder. We apply both global and local XAI approaches, obtaining explanations for individual results as well as for the autoencoder as a whole. Initial results for both approaches are shown, together with possible interpretations of these results.
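The core idea of the abstract above, detecting anomalies via an autoencoder's reconstruction error, can be sketched in a few lines. This is a minimal illustration, not the paper's model: a PCA acts as a linear stand-in for the autoencoder, and the synthetic 8-channel signals are a hypothetical stand-in for the nozzle-array data.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Hypothetical stand-in for normal nozzle-array signals: 8 channels that
# actually vary along only 2 latent directions plus small noise.
latent = rng.normal(size=(500, 2))
mixing = rng.normal(size=(2, 8))
normal = latent @ mixing + 0.05 * rng.normal(size=(500, 8))

# A 2-component PCA acts as a linear autoencoder: encode, then decode.
ae = PCA(n_components=2).fit(normal)

def anomaly_score(x):
    # Per-sample mean squared reconstruction error.
    recon = ae.inverse_transform(ae.transform(x))
    return np.mean((x - recon) ** 2, axis=1)

# Threshold chosen from the error distribution on normal data.
threshold = np.percentile(anomaly_score(normal), 99)

# Signals off the learned manifold reconstruct poorly and are flagged.
abnormal = rng.normal(size=(10, 8))
print((anomaly_score(abnormal) > threshold).mean())
```

In the paper's setting the autoencoder is nonlinear and the XAI methods then explain *which* inputs drive a high reconstruction error; the thresholding logic is the same.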
  • Publication
    Validation of XAI Explanations for Multivariate Time Series Classification in the Maritime Domain
    Because they offer no explanation of their internal mechanisms, state-of-the-art deep learning-based classifiers are often considered black-box models. For instance, in the maritime domain, models that classify ship types based on their trajectories and other features perform well, but give no further explanation for their predictions. To gain the trust of human operators responsible for critical decisions, the reason behind a classification is crucial. In this paper, we introduce explainable artificial intelligence (XAI) approaches to the task of ship-type classification. This supports decision-making by providing explanations in terms of the features contributing the most towards the prediction, along with their corresponding time intervals. In the case of the LIME explainer, we adapt the time-slice mapping technique (LimeforTime), while for Shapley additive explanations (SHAP) and path integrated gradient (PIG), we represent the relevance of each input variable to generate a heatmap as an explanation. To validate the XAI results, existing perturbation and sequence analyses for classifiers of univariate time series are employed for testing and evaluating the XAI explanations on multivariate time series. Furthermore, we introduce a novel evaluation technique to assess the quality of the explanations yielded by the chosen XAI methods.
  • Publication
    Supported Decision-Making by Explainable Predictions of Ship Trajectories
    Machine Learning and Deep Learning models make accurate predictions on the specific tasks they are trained for, for instance, classifying ship types based on their trajectories and other features. This can support human experts while they try to obtain information on ships, e.g., to control illegal fishing. Besides support in predicting a certain ship type, there is a need to explain the decision-making behind the classification, for example, which features contributed the most to the predicted ship type. This paper introduces existing explanation approaches to the task of ship classification. The underlying model is based on a Residual Neural Network and was trained on an AIS data set. Further, we illustrate the explainability approaches by means of an explanatory case study and conduct a first experiment with a human expert.
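One family of explanation approaches usable for such classifiers is path integrated gradients, which attributes a prediction to input features by integrating gradients along a path from a baseline to the input. The sketch below is illustrative only: a logistic scorer over hypothetical trajectory features stands in for the paper's Residual Neural Network, and the feature names are assumptions.

```python
import numpy as np

# Hypothetical stand-in model: logistic scorer over simple trajectory
# features (e.g. speed, course variance), not the paper's ResNet.
w = np.array([0.8, -0.4, 1.2, 0.1])

def model(x):
    return 1.0 / (1.0 + np.exp(-x @ w))

def model_grad(x):
    p = model(x)
    return p * (1.0 - p) * w            # gradient of sigmoid(w @ x) w.r.t. x

def integrated_gradients(x, baseline, steps=100):
    # Midpoint Riemann-sum approximation of the path integral of gradients
    # along the straight line from the baseline to the input.
    alphas = (np.arange(steps) + 0.5) / steps
    grads = np.array([model_grad(baseline + a * (x - baseline)) for a in alphas])
    return (x - baseline) * grads.mean(axis=0)

x = np.array([1.0, 0.5, 2.0, -1.0])
baseline = np.zeros(4)
attr = integrated_gradients(x, baseline)

# Completeness axiom: attributions sum to the prediction difference.
print(attr.sum(), model(x) - model(baseline))
```

The completeness property shown in the last line is what makes such attributions interpretable as "shares" of the prediction, a natural fit for discussing individual feature contributions with a human expert.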