Year
2024
Publication Type
Book Article
Title
The Role of Uncertainty Quantification for Trustworthy AI
Abstract
The development of AI systems involves a series of steps, including data acquisition and preprocessing, model selection, training, evaluation, and deployment. However, each of these steps involves certain assumptions that introduce inherent uncertainty, which can result in inaccurate outcomes and reduced confidence in the system. To enhance confidence and comply with the EU AI Act, we recommend using Uncertainty Quantification methods to estimate the belief in the correctness of a model’s output. To make these methods more accessible, we provide insights into the possible sources of uncertainty and offer an overview of the different available methods. We categorize these methods based on when they are used in the process, accounting for various application requirements. We distinguish between three types: data-based, architecture-modifying and post-hoc methods, and share our personal experiences with each.
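As a hedged illustration of the three categories named in the abstract, the sketch below shows Monte Carlo dropout, a well-known representative of the architecture-modifying family: dropout is kept active at inference time, several stochastic forward passes are averaged, and the entropy of the averaged probabilities serves as an uncertainty estimate. This is a minimal sketch assuming PyTorch; the toy model, layer sizes, and sample count are illustrative assumptions and are not taken from the chapter.

import torch
import torch.nn as nn

# Toy classifier with a dropout layer; keeping dropout stochastic at
# inference time yields Monte Carlo dropout, an architecture-modifying
# uncertainty quantification method. Model and sizes are illustrative.
model = nn.Sequential(
    nn.Linear(16, 64),
    nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(64, 3),
)

def mc_dropout_predict(model, x, n_samples=50):
    # Run several stochastic forward passes; train() keeps dropout active.
    model.train()
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )
    mean_probs = probs.mean(dim=0)  # averaged class probabilities
    # Predictive entropy of the averaged distribution as uncertainty score.
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)
    return mean_probs, entropy

x = torch.randn(8, 16)  # dummy batch; in practice a trained model is used
mean_probs, entropy = mc_dropout_predict(model, x)
print(entropy)  # higher entropy indicates a less certain prediction

By contrast, a post-hoc method such as temperature scaling would recalibrate the outputs of an already trained model without touching its architecture, while data-based approaches intervene earlier in the pipeline.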
Author(s)
Deuschel, Jessica (Fraunhofer-Institut für Integrierte Schaltungen IIS)
Foltyn, Andreas (Fraunhofer-Institut für Integrierte Schaltungen IIS)
Roscher, Karsten (Fraunhofer-Institut für Kognitive Systeme IKS)
Scheele, Stephan (Fraunhofer-Institut für Integrierte Schaltungen IIS)
Mainwork
Unlocking Artificial Intelligence  
DOI
10.1007/978-3-031-64832-8_5
Language
English
Institute(s)
Fraunhofer-Institut für Integrierte Schaltungen IIS
Fraunhofer-Institut für Kognitive Systeme IKS
Fraunhofer Group
Fraunhofer-Verbund IUK-Technologie
Keyword(s)
  • artificial intelligence
  • AI
  • trustworthy artificial intelligence
  • trustworthy AI
  • uncertainty
  • uncertainty quantification
  • machine learning
  • ML
  • safety-critical
  • safety-critical systems
