  • Publication
    Beyond Test Accuracy: The Effects of Model Compression on CNNs
    (2022)
    Schwienbacher, Kristian
    Model compression is widely employed to deploy convolutional neural networks on devices with limited computational resources or power constraints. For high-stakes applications, such as autonomous driving, it is, however, important that compression techniques do not impair the safety of the system. In this paper, we therefore investigate the changes introduced by three compression methods - post-training quantization, global unstructured pruning, and the combination of both - that go beyond the test accuracy. To this end, we trained three image classifiers on two datasets and compared them with regard to their class-level performance and their attention to different input regions. Although the deviations in test accuracy were minimal, our results show that the considered compression techniques introduce substantial changes to the models that are reflected in the quality of predictions for individual classes and in the salience of input regions. While we did not observe the introduction of systematic errors or biases towards certain classes, these changes can significantly impact the failure modes of CNNs and are thus highly relevant for safety analyses. We therefore conclude that it is important to be aware of the changes caused by model compression and to consider them already in the early stages of the development process.
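    As a rough illustration of the two compression methods named in the abstract, the following PyTorch sketch applies global unstructured pruning followed by post-training (dynamic) quantization to a stand-in classifier. The ResNet-18 backbone, the 50% sparsity level, and the int8 quantization of the linear layers are illustrative assumptions; the paper's actual models, datasets, and compression toolchain are not specified in this listing.

    import torch
    import torch.nn as nn
    import torch.nn.utils.prune as prune
    from torchvision.models import resnet18

    # Stand-in classifier; the paper's three image classifiers are not named here.
    model = resnet18(num_classes=10)
    model.eval()

    # Global unstructured pruning: zero out the 50% of convolution weights with
    # the smallest L1 magnitude, selected jointly across all conv layers.
    conv_params = [(m, "weight") for m in model.modules() if isinstance(m, nn.Conv2d)]
    prune.global_unstructured(conv_params, pruning_method=prune.L1Unstructured, amount=0.5)
    for module, name in conv_params:
        prune.remove(module, name)  # bake the pruning masks into the weights

    # Post-training dynamic quantization of the remaining dense layers to int8.
    quantized_model = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

    Comparing per-class accuracy and saliency maps between the original and the compressed model would correspond to the kind of analysis the abstract describes.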
  • Publication
    Is Uncertainty Quantification in Deep Learning Sufficient for Out-of-Distribution Detection?
    Reliable information about the uncertainty of predictions from deep neural networks could greatly facilitate their use in safety-critical applications. Current approaches for uncertainty quantification usually focus on in-distribution data, where high uncertainty should be assigned to incorrect predictions. In contrast, we focus on out-of-distribution data, where a network cannot make correct predictions and should therefore always report high uncertainty. In this paper, we compare several state-of-the-art uncertainty quantification methods for deep neural networks regarding their ability to detect novel inputs. We evaluate them on image classification tasks with respect to metrics that reflect requirements important for safety-critical applications. Our results show that a portion of out-of-distribution inputs can be detected at a reasonable loss in overall accuracy. However, current uncertainty quantification approaches alone are not sufficient for overall reliable out-of-distribution detection.
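    The abstract compares several uncertainty quantification methods without listing them here; as one common example, the sketch below (in PyTorch, an assumption) scores inputs by the predictive entropy of Monte Carlo dropout and rejects those above a threshold. The function names, the number of samples, and the threshold-based decision rule are illustrative and not taken from the paper.

    import torch
    import torch.nn.functional as F

    def predictive_entropy(probs: torch.Tensor) -> torch.Tensor:
        # Entropy of the averaged softmax distribution; higher means more uncertain.
        return -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)

    @torch.no_grad()
    def flag_out_of_distribution(model: torch.nn.Module, x: torch.Tensor,
                                 threshold: float, n_samples: int = 10) -> torch.Tensor:
        # Monte Carlo dropout: keep dropout layers active at inference time and
        # average the softmax output over several stochastic forward passes.
        model.train()
        probs = torch.stack(
            [F.softmax(model(x), dim=-1) for _ in range(n_samples)]
        ).mean(dim=0)
        # Inputs whose predictive entropy exceeds the threshold are treated as
        # out-of-distribution; everything else is passed on for classification.
        return predictive_entropy(probs) > threshold

    Sweeping the threshold trades the fraction of detected out-of-distribution inputs against wrongly rejected in-distribution inputs, which is one way to read the accuracy loss mentioned in the abstract.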