Date
August 3, 2023
Type
Paper (Preprint, Research Paper, Review Paper, White Paper, etc.)
Title

Assessing Systematic Weaknesses of DNNs using Counterfactuals

Title Supplement
Published on arXiv
Abstract
With the advancement of DNNs into safety-critical applications, testing approaches for such models have gained more attention. A current direction is the search for and identification of systematic weaknesses that put safety assumptions based on average performance values at risk. Such weaknesses can take on the form of (semantically coherent) subsets or areas in the input space where a DNN performs systematically worse than its expected average. However, it is non-trivial to attribute the reason for such observed low performances to the specific semantic features that describe the subset. For instance, inhomogeneities within the data w.r.t. other (non-considered) attributes might distort results. However, taking into account all (available) attributes and their interaction is often computationally highly expensive. Inspired by counterfactual explanations, we propose an effective and computationally cheap algorithm to validate the semantic attribution of existing subsets, i.e., to check whether the identified attribute is likely to have caused the degraded performance. We demonstrate this approach on an example from the autonomous driving domain using highly annotated simulated data, where we show for a semantic segmentation model that (i) performance differences among the different pedestrian assets exist, but (ii) only in some cases is the asset type itself the reason for this reduction in the performance.
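The validation idea described in the abstract, checking whether a suspected attribute (e.g. a pedestrian asset type) is actually responsible for an observed performance drop by comparing against counterfactual samples where only that attribute is changed, can be illustrated with a minimal sketch. The snippet below is not the paper's algorithm; the function name, metric, sample values, and threshold are all assumptions made for illustration only.

```python
# Illustrative sketch (not the paper's implementation): decide whether a
# suspected semantic attribute explains a subset's performance drop by
# comparing per-image scores of the weak subset against counterfactual
# renderings of the same scenes in which only that attribute is swapped.
import numpy as np

def attribute_causes_drop(scores_original, scores_counterfactual, threshold=0.02):
    """Return True if swapping the attribute recovers performance.

    scores_original:       per-image metric (e.g. IoU) for the weak subset
    scores_counterfactual: scores for the same scenes with only the suspected
                           attribute replaced (counterfactual versions)
    threshold:             assumed minimum mean improvement required to
                           attribute the drop to the attribute itself
    """
    gap = np.mean(scores_counterfactual) - np.mean(scores_original)
    return gap > threshold

# Hypothetical usage with simulated per-image IoU values
rng = np.random.default_rng(0)
weak_subset = rng.normal(0.62, 0.05, size=100)      # low-performing pedestrian asset
counterfactual = rng.normal(0.71, 0.05, size=100)   # same scenes, asset swapped
print(attribute_causes_drop(weak_subset, counterfactual))  # True -> asset likely the cause
```

If the counterfactual scores do not improve, the drop is more plausibly caused by other, confounding attributes of the subset rather than the suspected one.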
Author(s)
Gannamaneni, Sujan Sai  
Fraunhofer-Institut für Intelligente Analyse- und Informationssysteme IAIS  
Mock, Michael  
Fraunhofer-Institut für Intelligente Analyse- und Informationssysteme IAIS  
Akila, Maram  
Fraunhofer-Institut für Intelligente Analyse- und Informationssysteme IAIS  
Project(s)
safe.trAIn
Funder
Bundesministerium für Wirtschaft und Klimaschutz  
Conference
Association for the Advancement of Artificial Intelligence (AAAI Spring Symposium) 2023  
DOI
10.48550/arXiv.2308.01614
Language
English
Institute(s)
Fraunhofer-Institut für Intelligente Analyse- und Informationssysteme IAIS
Keyword(s)
  • DNN testing
  • Explainability in ML