Fraunhofer-Gesellschaft
2023
Paper (Preprint, Research Paper, Review Paper, White Paper, etc.)
Title
Don't Look into the Sun: Adversarial Solarization Attacks on Image Classifiers
Title Supplement
Published on arXiv
Abstract
Assessing the robustness of deep neural networks against out-of-distribution inputs is crucial, especially in safety-critical domains like autonomous driving, but also in safety systems where malicious actors can digitally alter inputs to circumvent safety guards. However, designing effective out-of-distribution tests that encompass all possible scenarios while preserving accurate label information is a challenging task. Existing methodologies often entail a compromise between variety and constraint levels for attacks and sometimes even both. In a first step towards a more holistic robustness evaluation of image classification models, we introduce an attack method based on image solarization that is conceptually straightforward yet avoids jeopardizing the global structure of natural images independent of the intensity. Through comprehensive evaluations of multiple ImageNet models, we demonstrate the attack's capacity to degrade accuracy significantly, provided it is not integrated into the training augmentations. Interestingly, even then, no full immunity to accuracy deterioration is achieved. In other settings, the attack can often be simplified into a black-box attack with model-independent parameters. Defenses against other corruptions do not consistently extend to be effective against our specific attack.
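The attack described in the abstract is built on image solarization, the classic photographic effect of inverting pixel values at or above a threshold. The sketch below shows only this underlying transform, assuming 8-bit images and NumPy; the function name, signature, and default threshold are illustrative and are not taken from the paper, which additionally searches over intensities to degrade classifier accuracy.

```python
import numpy as np

def solarize(image: np.ndarray, threshold: int = 128) -> np.ndarray:
    """Solarize an 8-bit image: invert every pixel value >= threshold.

    Pixels below the threshold are left untouched; the rest become
    255 - value. Lower thresholds invert more of the image, which is
    how an intensity parameter arises for an attack to sweep over.
    """
    image = np.asarray(image, dtype=np.uint8)
    return np.where(image < threshold, image, 255 - image).astype(np.uint8)
```

Because the transform is parameterized by a single scalar and needs no gradients, a sweep over thresholds can be run against a model as a black-box attack, consistent with the simplification mentioned in the abstract.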
Author(s)
Gavrikov, Paul
Fraunhofer-Institut für Techno- und Wirtschaftsmathematik ITWM  
Keuper, Janis  
Fraunhofer-Institut für Techno- und Wirtschaftsmathematik ITWM  
DOI
10.48550/arXiv.2308.12661
Language
English
Keyword(s)
  • image-processing-python
  • adversarial-attacks
  • robustness
  • image-classification
  • pytorch
  • image-processing
  • deep-learning
  • deep-neural-networks