2022
Journal Article
Title

Aliasing and adversarial robust generalization of CNNs

Abstract
Many commonly well-performing convolutional neural network models have been shown to be susceptible to input data perturbations, indicating low model robustness. To reveal model weaknesses, adversarial attacks are specifically optimized to generate small, barely perceptible image perturbations that flip the model prediction. Robustness against attacks can be gained by using adversarial examples during training, which in most cases reduces the measurable model attackability. Unfortunately, this technique can lead to robust overfitting, which results in non-robust models. In this paper, we analyze adversarially trained, robust models in the context of a specific network operation, the downsampling layer, and provide evidence that robust models have learned to downsample more accurately and suffer significantly less from downsampling artifacts, also known as aliasing, than baseline models. In the case of robust overfitting, we observe a strong increase in aliasing and propose a novel early stopping approach based on the measurement of aliasing.
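The abstract's early stopping criterion relies on measuring aliasing at downsampling layers; the paper defines its own measure, so the sketch below is only a hedged illustration of the general idea, not the authors' implementation. It quantifies, for a single feature map, the fraction of spectral energy above the Nyquist limit of a stride-2 grid, i.e. the portion that would fold back as aliasing under naive strided subsampling. The function name and the exact criterion are assumptions made for illustration.

import numpy as np

def aliasing_energy_ratio(feature_map, stride=2):
    # Illustrative proxy (assumption, not the paper's metric): fraction of
    # spectral energy that lies above the Nyquist limit of the grid obtained
    # by subsampling `feature_map` (H x W) with `stride`; under naive strided
    # downsampling this energy folds back into low frequencies (aliasing).
    H, W = feature_map.shape
    power = np.abs(np.fft.fftshift(np.fft.fft2(feature_map))) ** 2

    # Frequencies representable on the coarser grid form the central band
    # of the shifted spectrum.
    h_keep, w_keep = H // stride, W // stride
    h0, w0 = (H - h_keep) // 2, (W - w_keep) // 2
    low_band = power[h0:h0 + h_keep, w0:w0 + w_keep].sum()

    total = power.sum()
    return float((total - low_band) / (total + 1e-12))

# A zero-mean checkerboard has all its energy at the original Nyquist
# frequency, so nearly everything would alias under stride-2 subsampling.
x = ((np.indices((32, 32)).sum(axis=0) % 2) * 2 - 1).astype(float)
print(aliasing_energy_ratio(x))  # ~1.0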
Author(s)
Grabinski, Julia
Fraunhofer-Institut für Techno- und Wirtschaftsmathematik ITWM  
Keuper, Janis  
Fraunhofer-Institut für Techno- und Wirtschaftsmathematik ITWM  
Keuper, Margret
Visual Computing, University of Siegen, Siegen
Journal
Machine Learning
Open Access
DOI
10.1007/s10994-022-06222-8
Language
English
Keyword(s)
  • Adversarial robustness
  • Aliasing
  • Robust overfitting
  • CNNs