  • Publication
    Machine Learning Methods for Enhanced Reliable Perception of Autonomous Systems
    (Fraunhofer IKS, 2021)
    Henne, Maximilian
    In our modern life, automated systems are already omnipresent. The latest advances in machine learning (ML) help with increasing automation and the fast-paced progression towards autonomous systems. However, as such methods are not inherently trustworthy and are being introduced into safety-critical systems, additional means are needed. In autonomous driving, for example, we can derive the main challenges when introducing ML in the form of deep neural networks (DNNs) for vehicle perception. DNNs are overconfident in their predictions and assign high confidence scores in the wrong situations. To counteract this, we have introduced several techniques to estimate the uncertainty of the results of DNNs. In addition, we present what are known as out-of-distribution detection methods that identify unknown concepts that have not been learned beforehand, thus helping to avoid wrong decisions. For the task of reliably detecting objects in 2D and 3D, we outline further methods. To apply ML in the perception pipeline of autonomous systems, we propose using the supplementary information from these methods for more reliable decision-making. Our evaluations with respect to safety-related metrics show the potential of this approach. Moreover, we have applied these enhanced ML methods and newly developed ones to the autonomous driving use case. Under variable environmental conditions, such as road scenarios, light, or weather, we have been able to enhance the reliability of perception in automated driving systems. Our ongoing and future research further evaluates and improves the trustworthiness of ML methods so that they can be used safely and at a high level of performance in various types of autonomous systems, ranging from vehicles and autonomous mobile robots to medical devices.
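    The out-of-distribution detection idea mentioned in the abstract can be illustrated with a minimal sketch. This is not the method from the publication; it uses the common baseline of thresholding the maximum softmax probability, with an illustrative threshold value chosen here for the example:

    ```python
    import numpy as np

    def max_softmax_probability(logits):
        """Softmax confidence of the most likely class for each sample."""
        z = logits - logits.max(axis=1, keepdims=True)  # shift for numerical stability
        probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
        return probs.max(axis=1)

    def flag_out_of_distribution(logits, threshold=0.7):
        """Flag samples whose top-class confidence falls below the threshold.

        The threshold is an illustrative placeholder, not a value from the paper.
        """
        return max_softmax_probability(logits) < threshold

    # One confidently classified sample vs. one ambiguous sample.
    logits = np.array([[8.0, 1.0, 0.5],   # one class clearly dominates
                       [1.1, 1.0, 0.9]])  # near-uniform: likely unknown input
    print(flag_out_of_distribution(logits))  # [False  True]
    ```

    A perception pipeline could route samples flagged this way to a fallback path instead of acting on an unreliable prediction.
    
    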
  • Publication
    Benchmarking Uncertainty Estimation Methods for Deep Learning with Safety-Related Metrics
    Deep neural networks generally give accurate predictions, but they often fail to recognize when these predictions may be wrong. This lack of awareness regarding the reliability of their outputs is a major obstacle to deploying such models in safety-critical applications. Several approaches attempt to address this problem by designing models that give more reliable estimates of their uncertainty. However, even though the performance of these models has been compared in various ways, there is no thorough evaluation comparing them in a safety-critical context using metrics designed to describe trade-offs between performance and safe system behavior. In this paper we attempt to fill this gap by evaluating and comparing several state-of-the-art methods for estimating uncertainty in image classification with respect to safety-related requirements and metrics that are suitable to describe the models' performance in safety-critical domains. We show the relationship between the remaining error for predictions with high confidence and its impact on performance for three common datasets. In particular, Deep Ensembles and Learned Confidence show high potential to significantly reduce the remaining error with only moderate performance penalties.
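    The two ingredients this abstract combines, ensemble averaging and an error metric restricted to high-confidence predictions, can be sketched as follows. This is an illustrative reconstruction, not the paper's evaluation code; the array shapes and the confidence bar are assumptions made for the example:

    ```python
    import numpy as np

    def ensemble_predict(member_probs):
        """Average class probabilities over independently trained ensemble members.

        member_probs: array of shape (members, samples, classes).
        """
        return np.mean(member_probs, axis=0)

    def remaining_error_rate(probs, labels, confidence=0.9):
        """Error rate among predictions accepted at or above the confidence bar.

        Predictions below the bar are rejected (e.g. deferred to a fallback)
        and therefore do not count towards the remaining error.
        """
        conf = probs.max(axis=1)
        pred = probs.argmax(axis=1)
        accepted = conf >= confidence
        if not accepted.any():
            return 0.0  # nothing accepted, so no remaining error
        return float((pred[accepted] != labels[accepted]).mean())

    # Two hypothetical ensemble members, three samples, two classes.
    member_probs = np.array([
        [[0.9, 0.1], [0.6, 0.4], [0.2, 0.8]],
        [[0.8, 0.2], [0.4, 0.6], [0.3, 0.7]],
    ])
    probs = ensemble_predict(member_probs)
    labels = np.array([0, 1, 1])
    print(remaining_error_rate(probs, labels, confidence=0.7))  # 0.0
    ```

    The ambiguous middle sample (averaged to 0.5/0.5) falls below the bar and is rejected, so only the two confident, correct predictions contribute to the remaining error.
    
    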
  • Publication
    Managing Uncertainty of AI-based Perception for Autonomous Systems
    (2019)
    Henne, Maximilian
    With the advent of autonomous systems, machine perception is a decisive, safety-critical component in making such systems a reality. However, presently used AI-based perception does not meet the reliability required for use in real-world systems beyond prototypes, such as autonomous cars. In this work, we describe the challenge of reliable perception for autonomous systems. Furthermore, we identify methods and approaches to quantify the uncertainty of AI-based perception. Along with dynamic management of safety, we show how uncertainty information can be utilized for perception so that it meets the high dependability demands of life-critical autonomous systems.
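    The dynamic safety management this abstract alludes to amounts to conditioning system behavior on perception uncertainty. A minimal sketch of that idea, with hypothetical mode names and threshold values that are not from the paper:

    ```python
    def select_operating_mode(uncertainty, caution=0.2, stop=0.5):
        """Map a perception uncertainty score in [0, 1] to an operating mode.

        The mode names and thresholds are illustrative placeholders:
        high uncertainty triggers a minimal-risk maneuver (e.g. safe stop),
        moderate uncertainty a degraded mode (e.g. reduced speed).
        """
        if uncertainty >= stop:
            return "minimal-risk-maneuver"
        if uncertainty >= caution:
            return "degraded"
        return "nominal"

    for u in (0.1, 0.3, 0.6):
        print(u, select_operating_mode(u))
    ```

    The point is that uncertainty estimates only pay off for safety once downstream decision-making actually consumes them, which is the utilization path the abstract describes.
    
    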