September 2022
Paper (Preprint, Research Paper, Review Paper, White Paper, etc.)
Title
Transparency and Reliability Assurance Methods for Safeguarding Deep Neural Networks - A Survey
Title Supplement
Paper presented at the Workshop on Trustworthy Artificial Intelligence as part of ECML/PKDD 2022, September 2022, Grenoble, France
Paper published on HAL science ouverte
Abstract
In light of deep neural network applications emerging in diverse fields - e.g., industry, healthcare or finance - weaknesses and failures of these models might bear unacceptable risks. Methods are needed that enable developers to discover and mitigate such weaknesses in order to develop trustworthy Machine Learning (ML), especially in safety-critical application areas. However, this requires insight into the rapidly growing variety of methods for correcting different deficiencies. Unlike similar work that focuses on one particular topic, we consider three areas of action directly associated with the development and evaluation of ML models: transparency, uncertainty estimation and robustness. We provide an overview and comparative assessment of current approaches for building reliable and transparent models, targeted at ML developers.
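
To make one of the three areas of action named in the abstract concrete, the sketch below illustrates uncertainty estimation via Monte Carlo dropout (Gal & Ghahramani, 2016), a standard technique of the kind such surveys cover. The model, dimensions and sample count are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of MC-dropout uncertainty estimation (illustrative only;
# the classifier and data here are placeholders, not from the paper).
import torch
import torch.nn as nn

class SmallClassifier(nn.Module):
    def __init__(self, in_dim=16, hidden=32, n_classes=3, p_drop=0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.ReLU(),
            nn.Dropout(p_drop),  # stays active at inference for MC dropout
            nn.Linear(hidden, n_classes),
        )

    def forward(self, x):
        return self.net(x)

@torch.no_grad()
def mc_dropout_predict(model, x, n_samples=50):
    """Run n_samples stochastic forward passes with dropout enabled;
    return mean class probabilities and their per-class std deviation
    as a simple predictive-uncertainty estimate."""
    model.train()  # enables dropout (freeze batch-norm layers in practice)
    probs = torch.stack(
        [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
    )
    return probs.mean(dim=0), probs.std(dim=0)

model = SmallClassifier()
x = torch.randn(4, 16)                  # batch of 4 dummy inputs
mean_probs, std_probs = mc_dropout_predict(model, x)
print(mean_probs)                       # predictive distribution
print(std_probs)                        # spread ~ model uncertainty
```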
Author(s)
Haedecke, Elena Gina
Universität Bonn  
Pintz, Maximilian Alexander
Universität Bonn  
Project(s)
ZERTIFIZIERTE KI
Funder
Ministerium für Wirtschaft, Industrie, Klimaschutz und Energie des Landes Nordrhein-Westfalen
Conference
Workshop on Trustworthy Artificial Intelligence 2022  
Language
English
Institute
Fraunhofer-Institut für Intelligente Analyse- und Informationssysteme IAIS
Keyword(s)
  • Trustworthy AI
  • Deep Neural Networks
  • Safeguarding AI
  • Transparency
  • Robustness
  • Uncertainty Estimation
  • Survey