  • Publication
    Explainable AI for sensor-based sorting systems
    Explainable artificial intelligence (XAI) can make machine learning-based systems more transparent, and this additional transparency can enable the use of machine learning in many different domains. In our work, we show how XAI methods can be applied to an autoencoder for anomaly detection in a sensor-based sorting system. The sorting system consists of a vibrating feeder, a conveyor belt, a line-scan camera, and an array of fast-switching pneumatic valves; it separates a material stream into two fractions, realizing a binary sorting task. The autoencoder learns to mimic the normal behavior of the nozzle array and can thus detect abnormal behavior. XAI methods are then used to explain the output of the autoencoder. Both global and local XAI approaches are applied, yielding explanations for single results as well as for the autoencoder as a whole. Initial results for both approaches are shown, together with possible interpretations of these results.
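    As a concrete illustration of the reconstruction-error idea described above, a minimal autoencoder-based anomaly detector could look like the following PyTorch sketch. It is a hypothetical stand-in, not the authors' model: the ValveAutoencoder class, the layer sizes, and the mean-plus-three-sigma threshold are assumptions made purely for illustration.

    # Hypothetical sketch: an autoencoder learns the "normal" behavior of the
    # nozzle array; samples with high reconstruction error are flagged as
    # anomalous. Architecture, sizes, and threshold are illustrative only.
    import torch
    import torch.nn as nn

    class ValveAutoencoder(nn.Module):
        """Learns to reconstruct 'normal' valve-array activations."""
        def __init__(self, n_features: int = 64, n_latent: int = 8):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Linear(n_features, 32), nn.ReLU(), nn.Linear(32, n_latent))
            self.decoder = nn.Sequential(
                nn.Linear(n_latent, 32), nn.ReLU(), nn.Linear(32, n_features))

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.decoder(self.encoder(x))

    def anomaly_scores(model: ValveAutoencoder, x: torch.Tensor) -> torch.Tensor:
        """Per-sample mean squared reconstruction error; high error = abnormal."""
        with torch.no_grad():
            return ((model(x) - x) ** 2).mean(dim=1)

    def fit_threshold(scores: torch.Tensor) -> float:
        """Threshold fitted on normal training data (assumed: mean + 3 sigma)."""
        return float(scores.mean() + 3 * scores.std())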
  • Publication
    Validation of XAI Explanations for Multivariate Time Series Classification in the Maritime Domain
    Because they offer little explanation of their internal mechanisms, state-of-the-art deep learning-based classifiers are often considered black-box models. For instance, in the maritime domain, models that classify ship types based on their trajectories and other features perform well but give no further explanation for their predictions. To gain the trust of human operators responsible for critical decisions, the reason behind a classification is crucial. In this paper, we introduce explainable artificial intelligence (XAI) approaches to the task of ship-type classification. This supports decision-making by providing explanations in terms of the features contributing most to a prediction, along with their corresponding time intervals. For the LIME explainer, we adapt the time-slice mapping technique (LimeforTime), while for Shapley additive explanations (SHAP) and path integrated gradient (PIG), we represent the relevance of each input variable as a heatmap that serves as the explanation. To validate the XAI results, existing perturbation and sequence analyses for classifiers of univariate time series are employed for testing and evaluating the XAI explanations on multivariate time series. Furthermore, we introduce a novel evaluation technique to assess the quality of the explanations yielded by the chosen XAI method.
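    The perturbation analysis mentioned above can be pictured with a small sketch: occlude the cells that a relevance heatmap ranks highest and compare the resulting drop in predicted class probability with the drop from occluding randomly chosen cells; a faithful explanation should produce the larger drop. This is a hypothetical illustration, assuming a scikit-learn-style predict_proba interface and a (time steps x features) heatmap; probability_drop and perturbation_test are names invented here, not the paper's code.

    # Hypothetical perturbation check for a relevance heatmap on a
    # multivariate time series x of shape (time_steps, features).
    import numpy as np

    def probability_drop(model, x, mask_idx, class_id, fill=0.0):
        """Drop in P(class_id) after setting the masked cells to `fill`."""
        x_pert = x.copy()
        x_pert[np.unravel_index(mask_idx, x.shape)] = fill
        p_orig = model.predict_proba(x[None])[0, class_id]
        p_pert = model.predict_proba(x_pert[None])[0, class_id]
        return p_orig - p_pert

    def perturbation_test(model, x, relevance, class_id, k=20, seed=0):
        """Compare occluding the k most relevant cells vs. k random cells."""
        flat = relevance.ravel()
        top_idx = np.argsort(flat)[-k:]            # most relevant cells
        rng = np.random.default_rng(seed)
        rand_idx = rng.choice(flat.size, size=k, replace=False)
        return (probability_drop(model, x, top_idx, class_id),
                probability_drop(model, x, rand_idx, class_id))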
  • Publication
    A Survey on the Explainability of Supervised Machine Learning
    (2021)
    Burkart, Nadia; Huber, Marco F.
    Predictions obtained by, e.g., artificial neural networks are highly accurate, but humans often perceive these models as black boxes: insights into their decision making are mostly opaque. Understanding the decision making is of paramount importance, particularly in highly sensitive areas such as healthcare or finance. The decision making behind such black boxes needs to become more transparent, accountable, and understandable for humans. This survey paper provides essential definitions and an overview of the different principles and methodologies of explainable Supervised Machine Learning (SML). We conduct a state-of-the-art survey that reviews past and recent explainable SML approaches and classifies them according to the introduced definitions. Finally, we illustrate the principles by means of an explanatory case study and discuss important future directions.
  • Publication
    Are you sure? Prediction revision in automated decision-making
    With the rapid improvements in machine learning and deep learning, the number of decisions made by automated decision support systems (DSS) will increase. Besides the accuracy of predictions, their explainability becomes more important. The underlying algorithms can construct complex mathematical prediction models, which makes their predictions hard to follow and raises the need to equip them with explanations. To examine how users trust automated DSS, we conducted an experiment. Our research aim is to examine how participants supported by a DSS revise their initial prediction under four varying approaches (treatments) in a between-subject design study. The four treatments differ in the degree of explainability offered for understanding the predictions of the system: first an interpretable regression model, second a Random Forest (considered a black box, BB), third the BB with a local explanation, and last the BB with a global explanation. We observed that all participants improved their predictions after receiving advice, regardless of whether it came from a complete BB or a BB with an explanation. The major finding was that interpretable models were not incorporated into the decision process any more than BB models or BB models with explanations.
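    The four treatments can be pictured with a small scikit-learn sketch: an interpretable linear model whose coefficients serve as its explanation, a Random Forest black box (BB), the BB with a crude local attribution (the prediction change when one feature of the instance is reset to the training mean), and the BB with a global explanation (impurity-based feature importances). This is a hypothetical reconstruction for illustration only; the study's actual models and explainers are not specified here, and build_treatments and local_attribution are invented names.

    # Hypothetical stand-ins for the four advice treatments.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.linear_model import LinearRegression

    def build_treatments(X_train, y_train):
        lin = LinearRegression().fit(X_train, y_train)
        rf = RandomForestRegressor(random_state=0).fit(X_train, y_train)
        mean = X_train.mean(axis=0)

        def local_attribution(x):
            """Naive local explanation: effect of resetting each feature."""
            base = rf.predict(x[None])[0]
            deltas = []
            for j in range(len(x)):
                x_ref = x.copy()
                x_ref[j] = mean[j]                  # reset one feature
                deltas.append(base - rf.predict(x_ref[None])[0])
            return np.array(deltas)

        return {
            "interpretable": lambda x: (lin.predict(x[None])[0], lin.coef_),
            "black_box":     lambda x: (rf.predict(x[None])[0], None),
            "bb_local":      lambda x: (rf.predict(x[None])[0], local_attribution(x)),
            "bb_global":     lambda x: (rf.predict(x[None])[0], rf.feature_importances_),
        }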
  • Publication
    Modular and scalable automation for field robots
    This article describes a modular and scalable charging and navigation concept for electrified field robots and other agricultural machines. The concept consists of an underbody charging system on a trailer and a modular navigation box. The underlying conductive charging process is compared with other charging techniques, and charging time is presented in relation to charging current and mean power consumption in field use. In the navigation box, data from various sensors are combined by means of multi-sensor fusion that accounts for the precise time of arrival of each measurement. Time synchronization is achieved by a novel method that compensates data latency jitter using Kalman-based timestamp filtering. Furthermore, navigation functionalities such as motion planning and mapping are presented.
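    The Kalman-based timestamp filtering can be sketched as a small linear filter that tracks the timestamp and inter-arrival period of periodic sensor messages and corrects each jittered host receive time toward the predicted arrival. This is a minimal hypothetical sketch, not the article's implementation: the constant-period model, the TimestampFilter class, and the noise parameters q and r are assumptions.

    # Hypothetical 1D Kalman filter that de-jitters message timestamps.
    import numpy as np

    class TimestampFilter:
        def __init__(self, t0, period, q=1e-6, r=1e-3):
            self.x = np.array([t0, period])              # state: [timestamp, period]
            self.P = np.eye(2)                           # state covariance
            self.F = np.array([[1.0, 1.0], [0.0, 1.0]])  # t += period each step
            self.Q = q * np.eye(2)                       # process noise (assumed)
            self.H = np.array([[1.0, 0.0]])              # we observe the timestamp
            self.R = np.array([[r]])                     # jitter noise (assumed)

        def update(self, t_measured):
            """Predict the next arrival, then correct with the jittered measurement."""
            self.x = self.F @ self.x
            self.P = self.F @ self.P @ self.F.T + self.Q
            y = t_measured - (self.H @ self.x)[0]        # innovation
            S = self.H @ self.P @ self.H.T + self.R
            K = (self.P @ self.H.T) / S[0, 0]            # Kalman gain (2x1)
            self.x = self.x + K[:, 0] * y
            self.P = (np.eye(2) - K @ self.H) @ self.P
            return self.x[0]                             # de-jittered timestamp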
  • Publication
    Cyber-physical systems in manufacturing
    (2016)
    Monostori, László; Kádár, Botond; Bauernhansl, Thomas; Kondoh, Shinsuke; Kumara, Soundar R.; Reinhart, Gunther; Sauer, Olaf; Schuh, Günther; Sihn, Wilfried; Ueda, Kanji
    Cyber-physical systems (CPS) represent one of the most significant advances in the development of computer science and information and communication technologies. They are systems of collaborating computational entities in intensive connection with the surrounding physical world and its ongoing processes, simultaneously providing and using data-accessing and data-processing services available on the Internet. Cyber-physical production systems (CPPS), relying on the latest and foreseeable further developments of computer science and information and communication technologies on the one hand, and of manufacturing science and technology on the other, may lead to the 4th industrial revolution, frequently referred to as Industrie 4.0. The paper underlines that there are significant roots, in general and in the CIRP community in particular, that point towards CPPS. Expectations towards research in and implementation of CPS and CPPS are outlined, and some case studies are introduced. Related new R&D challenges are highlighted.