DLA: Dense-Layer-Analysis for Adversarial Example Detection

Authors: Sperl, Philip; Kao, Ching-yu; Chen, Peng; Lei, Xiao; Böttinger, Konstantin

Published in:

Institute of Electrical and Electronics Engineers -IEEE-; IEEE Computer Society:
5th IEEE European Symposium on Security and Privacy, EuroS&P 2020. Proceedings : 7-11 September 2020, Virtual Event
Los Alamitos, Calif.: IEEE Computer Society Conference Publishing Services (CPS), 2020
ISBN: 978-1-7281-5088-8
ISBN: 978-1-7281-5087-1
pp. 198-215
European Symposium on Security and Privacy (EuroS&P) <5, 2020, Online>
English
Conference paper
Fraunhofer AISEC

Abstract
In recent years, Deep Neural Networks (DNNs) have achieved remarkable results and even shown superhuman capabilities in a broad range of domains. This has led people to trust DNN classifications even in security-sensitive environments such as autonomous driving. Despite their impressive achievements, DNNs are known to be vulnerable to adversarial examples: inputs containing small perturbations crafted to intentionally fool the attacked model. In this paper, we present a novel end-to-end framework to detect such attacks without influencing the target model's performance. Inspired by research in neuron-coverage-guided testing, we show that the dense layers of DNNs carry security-sensitive information. With a secondary DNN, we analyze the activation patterns of the dense layers at classification run-time, which enables effective, real-time detection of adversarial examples. Our prototype implementation successfully detects adversarial examples in image, natural language, and audio processing, covering a variety of target DNN architectures. In addition to effectively defending against state-of-the-art attacks, our approach generalizes between different sets of adversarial examples: our experiments indicate that we are able to detect future, yet unknown, attacks. Finally, we show that our method cannot be easily bypassed by white-box adaptive attacks.

URL: http://publica.fraunhofer.de/dokumente/N-630564.html
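
For illustration, the following is a minimal sketch of the detection idea described in the abstract: forward hooks capture the dense-layer activations of the target model during a normal classification pass, and a secondary DNN classifies the resulting activation pattern as benign or adversarial. This is not the authors' implementation; it assumes PyTorch, and all names (target_model, Detector, detect) are hypothetical.

import torch
import torch.nn as nn

# Buffer filled by the forward hooks during a classification pass.
activations = []

def hook(_module, _inputs, output):
    # Store the flattened dense-layer activation of the current batch.
    activations.append(output.detach().flatten(start_dim=1))

def register_dense_hooks(model):
    # Attach a hook to every fully connected (dense) layer of the target model.
    return [m.register_forward_hook(hook)
            for m in model.modules() if isinstance(m, nn.Linear)]

class Detector(nn.Module):
    # Secondary DNN: binary classifier over the concatenated dense-layer activations.
    # in_dim must equal the total number of dense-layer units of the target model.
    def __init__(self, in_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                 nn.Linear(256, 2))

    def forward(self, x):
        return self.net(x)

def detect(target_model, detector, x):
    # Run the target model unchanged, collect its dense-layer activation pattern,
    # and let the secondary detector decide whether the input looks adversarial.
    activations.clear()
    handles = register_dense_hooks(target_model)
    with torch.no_grad():
        _ = target_model(x)                        # normal classification pass
        pattern = torch.cat(activations, dim=1)    # activation-pattern features
        verdict = detector(pattern).argmax(dim=1)  # 1 = adversarial (by convention)
    for h in handles:
        h.remove()
    return verdict

In such a setup, the detector would be trained on activation patterns collected from benign inputs and from known adversarial examples, so the target model itself stays unmodified, matching the abstract's claim that the target model's performance is not influenced.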