Fraunhofer-Gesellschaft
October 12, 2022
Paper (Preprint, Research Paper, Review Paper, White Paper, etc.)
Title

Robust Models are less Over-Confident

Abstract
Despite the success of convolutional neural networks (CNNs) in many academic benchmarks for computer vision tasks, their application in the real world still faces fundamental challenges. One of these open problems is the inherent lack of robustness, unveiled by the striking effectiveness of adversarial attacks. Current attack methods are able to manipulate the network's prediction by adding specific but small amounts of noise to the input. In turn, adversarial training (AT) aims to achieve robustness against such attacks, and ideally better model generalization, by including adversarial samples in the training set. However, an in-depth analysis of the resulting robust models beyond adversarial robustness is still pending. In this paper, we empirically analyze a variety of adversarially trained models that achieve high robust accuracies when facing state-of-the-art attacks, and we show that AT has an interesting side effect: it leads to models that are significantly less overconfident in their decisions than non-robust models, even on clean data. Further, our analysis of robust models shows that not only AT but also the model's building blocks (like activation functions and pooling) have a strong influence on the models' prediction confidences.
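The overconfidence discussed in the abstract is commonly quantified as the mean maximum softmax probability over a batch of predictions. A minimal sketch of that measure (the function names and the logit values are illustrative, not taken from the paper):

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def mean_confidence(logits):
    # Mean maximum softmax probability over a batch of predictions:
    # a standard proxy for how confident a classifier is.
    return softmax(logits).max(axis=-1).mean()

# Hypothetical logits: sharply peaked (overconfident) vs. flatter (calibrated).
overconfident = np.array([[12.0, 0.5, 0.3], [11.0, 1.0, 0.2]])
calibrated    = np.array([[ 2.0, 0.5, 0.3], [ 1.5, 1.0, 0.2]])

print(mean_confidence(overconfident))  # close to 1.0
print(mean_confidence(calibrated))     # noticeably lower
```

Comparing this statistic between adversarially trained and standard models, on both clean and attacked inputs, is the kind of analysis the paper performs.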
Author(s)
Grabinski, Julia
Fraunhofer-Institut für Techno- und Wirtschaftsmathematik ITWM  
Gavrikov, Paul
Keuper, Janis  
Fraunhofer-Institut für Techno- und Wirtschaftsmathematik ITWM  
Keuper, Margret
Open Access
File(s)
Download (892.43 KB)
Rights
CC BY 4.0: Creative Commons Attribution
DOI
10.48550/arXiv.2210.05938
10.24406/publica-1429
Language
English
Institute(s)
Fraunhofer-Institut für Techno- und Wirtschaftsmathematik ITWM
Keyword(s)
  • CNNs
  • adversarial training (AT)