December 2024
Journal Article
Title

On Adversarial Robustness of Quantized Neural Networks Against Direct Attacks

Abstract
Deep Neural Networks (DNNs) have proven to be susceptible to synthetically generated samples, so-called adversarial examples. Adversarial examples induce misclassifications by optimizing a perturbation that is applied to the input data. With the increasing use of deep learning on embedded devices and the resulting reliance on quantization techniques to compress deep neural networks, it is critical to investigate the adversarial vulnerability of quantized neural networks. In this paper, we perform an in-depth study of the adversarial robustness of quantized networks against direct attacks, where adversarial examples are both generated and applied on the same network. Our experiments show that quantization makes models resilient to the generation of adversarial examples, even for attacks that demonstrate a high success rate, indicating that it offers some degree of robustness against these attacks. Additionally, we open-source the Adversarial Neural Network Toolkit (ANNT) to support the replication of our results.
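
The direct-attack setting described in the abstract can be pictured with a short, self-contained sketch. The code below is illustrative only and is not the authors' ANNT toolkit: it runs an FGSM-style attack (a standard gradient-sign attack, used here as a stand-in for the attacks studied in the paper) against the same network on which the adversarial example is evaluated, and it simulates weight quantization by rounding weights to 8-bit levels while keeping them in floating point so that input gradients remain available. All model, data, and parameter choices are placeholders.

# Illustrative sketch (not the paper's ANNT toolkit): a direct FGSM-style
# attack, i.e. the adversarial example is generated on and applied to the
# same network. Weight quantization is simulated ("fake" quantization) so
# that gradients w.r.t. the input remain available; the paper's models,
# attacks, and quantization schemes may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F

def quantize_weights_(model: nn.Module, num_bits: int = 8) -> None:
    """Round each parameter to uniform num_bits levels, stored as float."""
    with torch.no_grad():
        for p in model.parameters():
            scale = (p.max() - p.min()) / (2 ** num_bits - 1) + 1e-12
            p.copy_(((p - p.min()) / scale).round() * scale + p.min())

def fgsm_direct_attack(model, x, y, epsilon=0.03):
    """Generate an adversarial example on `model` (the attacked network)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, then clip to [0, 1].
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

# Placeholder network and data; in practice these would be a trained DNN
# and real samples (e.g. images).
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU(),
                      nn.Linear(64, 10))
quantize_weights_(model, num_bits=8)

x = torch.rand(4, 1, 28, 28)
y = torch.randint(0, 10, (4,))
x_adv = fgsm_direct_attack(model, x, y)
print("clean predictions:      ", model(x).argmax(dim=1).tolist())
print("adversarial predictions:", model(x_adv).argmax(dim=1).tolist())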
Author(s)
Shrestha, Abhishek
Fraunhofer-Institut für Offene Kommunikationssysteme FOKUS  
Großmann, Jürgen  
Fraunhofer-Institut für Offene Kommunikationssysteme FOKUS  
Journal
Advances in science, technology and engineering systems journal  
Open Access
DOI
10.25046/aj090604
Additional link
Full text
Language
English
Keyword(s)
  • Deep neural networks
  • Quantization
  • Adversarial attacks