Shrestha, Abhishek; Großmann, Jürgen
Title: On Adversarial Robustness of Quantized Neural Networks Against Direct Attacks
Type: journal article
Issued: 2024-12
Deposited: 2025-01-08
Handle: https://publica.fraunhofer.de/handle/publica/481130
DOI: 10.25046/aj090604
Language: English
Keywords: deep neural networks; quantization; adversarial attacks

Abstract: Deep Neural Networks (DNNs) are susceptible to synthetically generated samples, so-called adversarial examples. Such adversarial examples induce misclassifications by optimizing a perturbation specifically for a given input. With the increasing use of deep learning on embedded devices, and the resulting use of quantization techniques to compress deep neural networks, it is critical to investigate the adversarial vulnerability of quantized neural networks. In this paper, we perform an in-depth study of the adversarial robustness of quantized networks against direct attacks, where adversarial examples are both generated and applied on the same network. Our experiments show that quantization makes models resilient to the generation of adversarial examples, even for attacks that otherwise demonstrate a high success rate, indicating that it offers some degree of robustness against these attacks. Additionally, we open-source the Adversarial Neural Network Toolkit (ANNT) to support the replication of our results.
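For readers unfamiliar with how such direct attacks are mounted, the sketch below shows a minimal one-step FGSM attack in PyTorch. It is illustrative only: FGSM is just one gradient-based attack of the kind the abstract describes, the function and parameter names are hypothetical rather than part of ANNT, and attacking a truly quantized (e.g., INT8) model usually requires gradient approximation, since quantized operations are non-differentiable.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon):
    """One-step FGSM: perturb x along the sign of the input gradient.

    This is a *direct* attack in the paper's sense: the gradient is
    computed on the same network the adversarial example is applied to.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)  # model is assumed to return logits
    loss.backward()
    # Take a single signed-gradient step, then clamp to the valid pixel range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Hypothetical usage on a float32 classifier. For an actually quantized
# model, attacks typically approximate the gradient (e.g., with a
# straight-through estimator) because quantized ops block backpropagation.
# x_adv = fgsm_attack(model, images, labels, epsilon=8 / 255)
# attack_success = (model(x_adv).argmax(dim=1) != labels).float().mean()
```

The commented-out usage illustrates how an attack success rate could be measured on the same network that generated the examples, matching the paper's direct-attack setting.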