The Role of Uncertainty Quantification for Trustworthy AI

Authors: Deuschel, Jessica; Foltyn, Andreas; Roscher, Karsten; Scheele, Stephan
Date issued: 2024
Date available: 2025-01-02
DOI: 10.1007/978-3-031-64832-8_5
URI: https://publica.fraunhofer.de/handle/publica/480954
Language: en
Document type: book article
Keywords: artificial intelligence; AI; trustworthy artificial intelligence; trustworthy AI; uncertainty; uncertainty quantification; machine learning; ML; safety-critical; safety-critical systems

Abstract: The development of AI systems involves a series of steps, including data acquisition and preprocessing, model selection, training, evaluation, and deployment. However, each of these steps involves certain assumptions that introduce inherent uncertainty, which can result in inaccurate outcomes and reduced confidence in the system. To enhance confidence and comply with the EU AI Act, we recommend using Uncertainty Quantification methods to estimate the belief in the correctness of a model’s output. To make these methods more accessible, we provide insights into the possible sources of uncertainty and offer an overview of the different available methods. We categorize these methods based on when they are used in the process, accounting for various application requirements. We distinguish between three types: data-based, architecture-modifying, and post-hoc methods, and share our personal experiences with each.