2024
Book Article
Title
The Role of Uncertainty Quantification for Trustworthy AI
Abstract
The development of AI systems involves a series of steps, including data acquisition and preprocessing, model selection, training, evaluation, and deployment. Each of these steps, however, rests on assumptions that introduce inherent uncertainty, which can lead to inaccurate outcomes and reduced confidence in the system. To enhance confidence and comply with the EU AI Act, we recommend using Uncertainty Quantification methods to estimate the degree of belief in the correctness of a model’s output. To make these methods more accessible, we provide insights into the possible sources of uncertainty and give an overview of the available methods. We categorize these methods according to when they are applied in the development process, accounting for different application requirements. We distinguish between three types: data-based, architecture-modifying, and post-hoc methods, and share our practical experiences with each.
Author(s)
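
To illustrate the kind of method the abstract refers to, the sketch below shows one widely used uncertainty estimate that can be applied after training: Monte Carlo dropout combined with predictive entropy. This is a minimal, purely illustrative example, not code from the article itself; it assumes a PyTorch classifier that already contains dropout layers, and whether such a technique counts as post-hoc or architecture-dependent depends on the taxonomy used.

```python
import torch
import torch.nn.functional as F

def mc_dropout_predict(model, x, n_samples=20):
    """Illustrative sketch (not from the article): estimate predictive
    uncertainty by averaging several stochastic forward passes with the
    model's dropout layers kept active at inference time."""
    model.eval()
    # Re-enable only the dropout modules so that BatchNorm etc. stay in eval mode.
    for module in model.modules():
        if isinstance(module, torch.nn.Dropout):
            module.train()
    with torch.no_grad():
        probs = torch.stack(
            [F.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )
    mean_probs = probs.mean(dim=0)
    # Predictive entropy: higher values indicate lower belief in the
    # correctness of the model's output for this input.
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)
    return mean_probs, entropy
```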