  • Publication
    Are you sure? Prediction revision in automated decision-making
    With the rapid improvements in machine learning and deep learning, the number of decisions made by automated decision support systems (DSS) will increase. Besides the accuracy of predictions, their explainability is becoming more important. The algorithms can construct complex mathematical prediction models, which makes the resulting predictions hard to trust and raises the need to equip the algorithms with explanations. To examine how users trust automated DSS, we conducted an experiment. Our research aim is to examine how participants supported by a DSS revise their initial prediction under four different approaches (treatments) in a between-subject design study. The four treatments differ in the degree of explainability offered for the system's predictions: first an interpretable regression model, second a Random Forest (considered a black box [BB]), third the BB with a local explanation, and last the BB with a global explanation. We observed that all participants improved their predictions after receiving advice, whether it came from a complete BB or a BB with an explanation. The major finding was that interpretable models were not incorporated more into the decision process than BB models or BB models with explanations.
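    As an illustration of the four treatment conditions, the sketch below fits an interpretable linear regression and a Random Forest on synthetic data and derives local and global explanations from the latter. The abstract does not name the study's tooling, so scikit-learn, SHAP, and the synthetic dataset are assumptions, standing in for whatever models and explanation method the experiment actually used.

```python
# Illustrative sketch of the four treatments; scikit-learn and SHAP are
# assumptions, not the study's confirmed setup.
import numpy as np
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

X, y = make_regression(n_samples=500, n_features=5, noise=10.0, random_state=0)

# Treatment 1: interpretable model -- coefficients are directly readable.
lin = LinearRegression().fit(X, y)
print("coefficients:", lin.coef_)

# Treatment 2: black-box (BB) model -- the user sees the advice only.
rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
print("BB advice:", rf.predict(X[:1]))

# Treatment 3: BB + local explanation -- per-instance feature attributions.
explainer = shap.TreeExplainer(rf)
print("local attributions:", explainer.shap_values(X[:1]))

# Treatment 4: BB + global explanation -- mean attribution magnitude over data.
print("global importances:", np.abs(explainer.shap_values(X[:100])).mean(axis=0))
```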
  • Publication
    Explanation Framework for Intrusion Detection
    Franz, Maximilian; Huber, Marco F. (2021)
    Machine learning and deep learning are widely used in various applications to assist or even replace human reasoning. For instance, a machine-learning-based intrusion detection system (IDS) monitors a network for malicious activity or specific policy violations. We propose that an IDS should attach a sufficiently understandable report to each alert so that the operator can review alerts more efficiently. This work aims to complement an IDS with a framework for creating explanations. The explanations support the human operator in understanding alerts and reveal potential false positives. The focus lies on counterfactual instances and on explanations based on locally faithful decision boundaries.
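    To make the counterfactual idea concrete, here is a minimal sketch of one naive way to produce a counterfactual instance for an alert: find the closest example that the detector labels differently. This is an assumption for illustration only, not the paper's framework; the classifier, the synthetic data, and the nearest-instance search strategy are all placeholders.

```python
# Naive counterfactual search -- a hypothetical stand-in, not the paper's
# actual framework.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=6, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def counterfactual(x, model, X_pool):
    """Return the instance in X_pool closest to x that the model labels
    differently: 'the alert would not have fired had the features looked
    like this instead'."""
    original = model.predict(x.reshape(1, -1))[0]
    flipped = X_pool[model.predict(X_pool) != original]
    distances = np.linalg.norm(flipped - x, axis=1)
    return flipped[np.argmin(distances)]

alert = X[0]                      # an instance the IDS flagged
cf = counterfactual(alert, clf, X)
print("feature changes needed to flip the alert:", cf - alert)
```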