2023
Conference Paper
Title
Evaluating the effectiveness of attacks and defenses on machine learning through adversarial samples
Abstract
Adversarial attacks can compromise the robustness of machine learning models, including neural networks. Adversarial defenses can mitigate the impact of such attacks; due to adaptive attacks, however, these defenses are vulnerable as well. This makes it difficult to deploy neural networks in safety- and security-critical areas. An understanding of the effectiveness of adversarial attacks and defenses, however, can facilitate the development of more robust neural networks that are suitable for applications in these areas. The purpose of this paper is to evaluate the effectiveness of adversarial attacks and defenses and to determine how that effectiveness depends on the chosen values of the underlying parameters. To that end, we evaluate the (adaptive) Carlini & Wagner (CW) attack and the kernel density estimation (KDE) defense, measuring their effectiveness over a range of parameter values and identifying the optimal values of these parameters. We also show that changing the parameter values can improve the effectiveness of adversarial attacks and defenses, and we state the necessary trade-offs involved. Furthermore, to substantiate the effect of attack and defense parameters on the effectiveness of adaptive attacks, this paper investigates the effectiveness of the adaptive CW attack for the corresponding optimal values of the CW attack and KDE defense parameters.
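The KDE defense mentioned in the abstract scores a sample by its kernel density under the training features and flags low-density samples as adversarial; its effectiveness hinges on the bandwidth parameter. The following is a minimal illustrative sketch of such a detector and a bandwidth sweep, not the authors' implementation; all names, data, and parameter values here are hypothetical.

```python
import numpy as np

def kde_score(x, train_feats, sigma):
    """Mean Gaussian-kernel density of x under the training features.

    A low score suggests x lies off the data manifold, a common
    indicator of an adversarial sample. The bandwidth `sigma` is the
    kind of defense parameter whose choice the paper investigates.
    """
    sq_dists = np.sum((train_feats - x) ** 2, axis=1)
    return np.mean(np.exp(-sq_dists / (2.0 * sigma ** 2)))

rng = np.random.default_rng(0)
train_feats = rng.normal(0.0, 1.0, size=(500, 10))  # in-distribution features
clean = rng.normal(0.0, 1.0, size=10)               # typical sample
outlier = np.full(10, 4.0)                          # sample far off the manifold

# Sweep the bandwidth: too small or too large a sigma blurs the
# separation between clean and off-manifold samples.
for sigma in (0.5, 1.0, 2.0):
    print(f"sigma={sigma}: clean={kde_score(clean, train_feats, sigma):.3g}, "
          f"outlier={kde_score(outlier, train_feats, sigma):.3g}")
```

In this toy setting the off-manifold sample receives a markedly lower density than the clean one for moderate bandwidths, which is the separation a threshold-based detector would exploit.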
Author(s)
Project(s)
Entwicklung von Zertifizierungsverfahren für die Luftfahrt - SQC
Funder
Bundesministerium für Wirtschaft und Klimaschutz (BMWK)