Publication

Assured Resilience in Autonomous Systems - Machine Learning Methods for Reliable Perception

2024 , Weiß, Gereon , Gansloser, Jens , Schwaiger, Adrian , Schwaiger, Maximilian

Machine learning in the form of deep neural networks provides a powerful tool for enhanced perception of autonomous systems. However, the results of such networks are still not reliable enough for safety-critical tasks like autonomous driving. We provide an overview of common challenges when applying these methods and introduce our approach for making perception more robust. It includes uncertainty quantification based on ensemble distribution distillation and an out-of-distribution detection approach for identifying unknown inputs. We evaluate these approaches on object detection tasks in different autonomous driving scenarios with varying environmental conditions. The results show that the additional methods can help make object detection more robust and reliable for future use in autonomous systems.
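
As a rough illustration of the ensemble-based uncertainty such an approach builds on (a minimal numpy sketch under assumed inputs, not the ensemble distribution distillation or object detection pipeline described in the paper), the following averages the softmax outputs of several ensemble members and splits the resulting uncertainty into a total and an epistemic part:

```python
import numpy as np

def ensemble_uncertainty(member_probs):
    """Summarize the predictions of an ensemble of classifiers.

    member_probs: array of shape (n_members, n_samples, n_classes)
                  holding each member's softmax output.
    Returns the mean predictive distribution, the total uncertainty
    (entropy of the mean) and the epistemic part (mutual information).
    """
    eps = 1e-12
    mean_probs = member_probs.mean(axis=0)                        # (n_samples, n_classes)
    total = -(mean_probs * np.log(mean_probs + eps)).sum(axis=1)  # predictive entropy
    member_entropy = -(member_probs * np.log(member_probs + eps)).sum(axis=2).mean(axis=0)
    epistemic = total - member_entropy                            # mutual information
    return mean_probs, total, epistemic

# Toy usage: 3 members, 2 samples, 3 classes
probs = np.array([
    [[0.8, 0.1, 0.1], [0.4, 0.3, 0.3]],
    [[0.7, 0.2, 0.1], [0.1, 0.6, 0.3]],
    [[0.9, 0.05, 0.05], [0.3, 0.3, 0.4]],
])
mean_p, total_u, epistemic_u = ensemble_uncertainty(probs)
print(mean_p.argmax(axis=1), total_u, epistemic_u)
```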

Publication

Benchmarking Uncertainty Estimation Methods for Deep Learning with Safety-Related Metrics

2020 , Henne, Maximilian , Schwaiger, Adrian , Roscher, Karsten , Weiß, Gereon

Deep neural networks generally perform very well at making accurate predictions, but they often fail to recognize when these predictions may be wrong. This lack of awareness regarding the reliability of their outputs is a major obstacle to deploying such models in safety-critical applications. Some approaches try to address this problem by designing models that give more reliable estimates of their uncertainty. However, even though the performance of these models is compared in various ways, there is no thorough evaluation comparing them in a safety-critical context using metrics designed to describe the trade-offs between performance and safe system behavior. In this paper we attempt to fill this gap by evaluating and comparing several state-of-the-art methods for estimating uncertainty in image classification with respect to safety-related requirements and metrics that are suitable to describe the models' performance in safety-critical domains. We show the relationship between the remaining error for predictions with high confidence and its impact on performance for three common datasets. In particular, Deep Ensembles and Learned Confidence show high potential to significantly reduce the remaining error with only moderate performance penalties.
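
To make the notion of "remaining error for predictions with high confidence" concrete, the sketch below (plain numpy, an illustrative reading rather than the paper's exact metric definitions) computes, for a given confidence threshold, how many predictions are still accepted (coverage) and how many of those accepted predictions are wrong (remaining error):

```python
import numpy as np

def remaining_error(confidences, correct, threshold):
    """Coverage and remaining error at a confidence threshold.

    confidences: (n,) array of model confidence scores in [0, 1]
    correct:     (n,) boolean array, True where the prediction was right
    threshold:   only predictions with confidence >= threshold are accepted
    """
    accepted = confidences >= threshold
    coverage = accepted.mean()
    if accepted.sum() == 0:
        return coverage, 0.0
    rem_error = (~correct[accepted]).mean()  # error rate among accepted predictions
    return coverage, rem_error

# Toy usage
conf = np.array([0.99, 0.95, 0.80, 0.60, 0.97])
corr = np.array([True, True, False, False, False])
for t in (0.5, 0.9, 0.95):
    cov, err = remaining_error(conf, corr, t)
    print(f"threshold={t:.2f}  coverage={cov:.2f}  remaining error={err:.2f}")
```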

Publication

Measuring Ensemble Diversity and its Effects on Model Robustness

2021 , Heidemann, Lena , Schwaiger, Adrian , Roscher, Karsten

Deep ensembles have been shown to perform well on a variety of tasks in terms of accuracy, uncertainty estimation, and further robustness metrics. The diversity among ensemble members is often named as the main reason for this. Due to its complex and indefinite nature, diversity can be expressed by a multitude of metrics. In this paper, we explore how a selection of these diversity metrics relate to each other, as well as their link to different measures of robustness. Specifically, we address two questions: To what extent can ensembles with the same training conditions differ in their performance and robustness? And are diversity metrics suitable for selecting members to form a more robust ensemble? To this end, we independently train 20 models for each task and compare all possible ensembles of 5 members on several robustness metrics, including performance on corrupted images, out-of-distribution detection, and quality of uncertainty estimation. Our findings reveal that ensembles trained under the same conditions can differ significantly in their robustness, especially regarding out-of-distribution detection capabilities. Across all setups, using different datasets and model architectures, we see that, in terms of robustness metrics, selecting ensemble members based on the considered diversity metrics seldom outperforms a baseline selection based on accuracy. We conclude that there is significant potential to improve the formation of robust deep ensembles and that novel, more sophisticated diversity metrics could be beneficial in that regard.
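
As one concrete example of such a diversity metric, the sketch below (an illustrative choice, not necessarily one of the metrics studied in the paper) computes the average pairwise disagreement between ensemble members, i.e. the fraction of samples on which two members predict different labels, averaged over all member pairs:

```python
import numpy as np
from itertools import combinations

def pairwise_disagreement(member_preds):
    """Average pairwise disagreement of an ensemble.

    member_preds: array of shape (n_members, n_samples) with predicted labels.
    Returns the mean fraction of samples on which a pair of members disagrees.
    """
    pairs = list(combinations(range(member_preds.shape[0]), 2))
    disagreements = [(member_preds[i] != member_preds[j]).mean() for i, j in pairs]
    return float(np.mean(disagreements))

# Toy usage: 3 members, 6 samples
preds = np.array([
    [0, 1, 2, 1, 0, 2],
    [0, 1, 1, 1, 0, 2],
    [0, 2, 2, 1, 1, 2],
])
print(pairwise_disagreement(preds))  # higher value = more diverse ensemble
```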

Publication

Is Uncertainty Quantification in Deep Learning Sufficient for Out-of-Distribution Detection?

2020 , Schwaiger, Adrian , Sinhamahapatra, Poulami , Gansloser, Jens , Roscher, Karsten

Reliable information about the uncertainty of predictions from deep neural networks could greatly facilitate their utilization in safety-critical applications. Current approaches for uncertainty quantification usually focus on in-distribution data, where a high uncertainty should be assigned to incorrect predictions. In contrast, we focus on out-of-distribution data where a network cannot make correct predictions and therefore should always report high uncertainty. In this paper, we compare several state-of-the-art uncertainty quantification methods for deep neural networks regarding their ability to detect novel inputs. We evaluate them on image classification tasks with regard to metrics reflecting requirements important for safety-critical applications. Our results show that a portion of out-of-distribution inputs can be detected with reasonable loss in overall accuracy. However, current uncertainty quantification approaches alone are not sufficient for an overall reliable out-of-distribution detection.
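
A simple baseline in this family is to use the maximum softmax probability as an out-of-distribution score and to check how well it separates known from novel inputs, for example via AUROC. The sketch below is illustrative only and assumes precomputed softmax outputs; it is not the exact set of methods or metrics evaluated in the paper:

```python
import numpy as np

def ood_score(probs):
    """OOD score from softmax outputs: low max probability = likely OOD."""
    return 1.0 - probs.max(axis=1)

def auroc(id_scores, ood_scores):
    """AUROC = probability that a random OOD sample scores higher than
    a random in-distribution sample (ties count half)."""
    diff = ood_scores[:, None] - id_scores[None, :]
    return float((diff > 0).mean() + 0.5 * (diff == 0).mean())

# Toy usage: softmax outputs for in-distribution and OOD inputs
id_probs = np.array([[0.9, 0.05, 0.05], [0.8, 0.1, 0.1]])
ood_probs = np.array([[0.4, 0.35, 0.25], [0.5, 0.3, 0.2]])
print(auroc(ood_score(id_probs), ood_score(ood_probs)))  # 1.0 = perfect separation
```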

Publication

Machine Learning Methods for Enhanced Reliable Perception of Autonomous Systems

2021 , Henne, Maximilian , Gansloser, Jens , Schwaiger, Adrian , Weiß, Gereon

In our modern life, automated systems are already omnipresent. The latest advances in machine learning (ML) help with increasing automation and the fast-paced progression towards autonomous systems. However, as such methods are not inherently trustworthy and are being introduced into safety-critical systems, additional means are needed. In autonomous driving, for example, we can derive the main challenges when introducing ML in the form of deep neural networks (DNNs) for vehicle perception. DNNs tend to be overconfident in their predictions and assign high confidence scores in the wrong situations. To counteract this, we have introduced several techniques to estimate the uncertainty of the results of DNNs. In addition, we present out-of-distribution detection methods that identify unknown concepts that have not been learned beforehand, thus helping to avoid wrong decisions. For the task of reliably detecting objects in 2D and 3D, we outline further methods. To apply ML in the perception pipeline of autonomous systems, we propose using the supplementary information from these methods for more reliable decision-making. Our evaluations with respect to safety-related metrics show the potential of this approach. Moreover, we have applied these enhanced ML methods, as well as newly developed ones, to the autonomous driving use case. Under varying environmental conditions, such as different road scenarios, lighting, or weather, we have been able to enhance the reliability of perception in automated driving systems. Our ongoing and future research focuses on further evaluating and improving the trustworthiness of ML methods in order to use them safely and with a high level of performance in various types of autonomous systems, ranging from vehicles and autonomous mobile robots to medical devices.
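
One way such supplementary information could feed into downstream decisions is sketched below: a hypothetical gate that only passes a detection on when its confidence is high and neither the uncertainty estimate nor the out-of-distribution score exceeds a chosen bound. All field names and thresholds are illustrative assumptions, not the mechanism described in the paper:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    confidence: float   # detector confidence in [0, 1]
    uncertainty: float  # e.g. predictive entropy from an ensemble
    ood_score: float    # e.g. 1 - max softmax, higher = more unfamiliar

def gate_detection(det: Detection,
                   min_conf: float = 0.7,
                   max_unc: float = 0.5,
                   max_ood: float = 0.3) -> str:
    """Return 'accept' if the detection looks reliable, otherwise 'fallback'
    to signal that a conservative behaviour should be used instead."""
    reliable = (det.confidence >= min_conf
                and det.uncertainty <= max_unc
                and det.ood_score <= max_ood)
    return "accept" if reliable else "fallback"

print(gate_detection(Detection("car", confidence=0.92, uncertainty=0.2, ood_score=0.1)))
print(gate_detection(Detection("car", confidence=0.91, uncertainty=0.1, ood_score=0.6)))
```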

Publication

Managing Uncertainty of AI-based Perception for Autonomous Systems

2019 , Henne, Maximilian , Schwaiger, Adrian , Weiß, Gereon

With the advent of autonomous systems, machine perception is a decisive safety-critical part of making such systems become reality. However, currently used AI-based perception does not meet the reliability required for use in real-world systems beyond prototypes, such as autonomous cars. In this work, we describe the challenge of reliable perception for autonomous systems. Furthermore, we identify methods and approaches to quantify the uncertainty of AI-based perception. Together with dynamic safety management, we show a path for how uncertainty information can be utilized in perception so that it meets the high dependability demands of life-critical autonomous systems.
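
The sketch below illustrates one possible reading of coupling uncertainty information with dynamic safety management: a hypothetical monitor that degrades the operating mode as perception uncertainty rises. Mode names and thresholds are assumptions for illustration, not the mechanism proposed in the paper:

```python
def select_driving_mode(perception_uncertainty: float) -> str:
    """Map a scalar perception uncertainty (0 = certain, 1 = clueless)
    to an operating mode, degrading gracefully as uncertainty grows."""
    if perception_uncertainty < 0.2:
        return "nominal"            # full autonomous operation
    if perception_uncertainty < 0.5:
        return "reduced_speed"      # stay autonomous but act more cautiously
    return "minimal_risk_maneuver"  # hand over or come to a safe stop

for u in (0.05, 0.35, 0.8):
    print(u, "->", select_driving_mode(u))
```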