
Optimizing Information Loss Towards Robust Neural Networks

 
Authors: Sperl, Philip; Böttinger, Konstantin

Full text (PDF)

Association for Computing Machinery (ACM):
DYnamic and Novel Advances in Machine Learning and Intelligent Cyber Security Workshop, DYNAMICS 2020. Proceedings: December 7, 2020, co-located with the Annual Computer Security Applications Conference 2020 (ACSAC), Austin, Texas (virtual)
New York: ACM, 2020
ISBN: 978-1-4503-8714-9
9 pp.
DYnamic and Novel Advances in Machine Learning and Intelligent Cyber Security Workshop (DYNAMICS) <2020, Online>
Annual Computer Security Applications Conference (ACSAC) <36, 2020, Online>
English
Conference paper, electronic publication
Fraunhofer AISEC
Keywords: deep learning; adversarial machine learning; neural network security

Abstract
Neural networks (NNs) are vulnerable to adversarial examples. Such inputs differ only slightly from their benign counterparts yet provoke misclassifications by the attacked NNs. The perturbations required to craft the examples are often negligible and even imperceptible to humans. To protect deep-learning-based systems from such attacks, several countermeasures have been proposed, with adversarial training still considered the most effective. Here, NNs are iteratively retrained on adversarial examples, a computationally expensive and time-consuming process that often decreases performance. To overcome the downsides of adversarial training while still providing a high level of security, we present a new training approach we call entropic retraining. Based on an information-theoretically inspired analysis, we investigate the effects of adversarial training and achieve an increase in robustness without laboriously generating adversarial examples. We empirically show that entropic retraining leads to a significant increase in NNs' security and robustness while relying only on the given original data. With our prototype implementation, we validate and show the effectiveness of our approach for various NN architectures and data sets.
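To make the cost argument in the abstract concrete, below is a minimal PyTorch sketch of the conventional adversarial-training loop that entropic retraining is contrasted against, here instantiated with the well-known FGSM attack. This is an illustrative sketch, not the paper's method: the function names and the eps value are assumptions made for the example.

import torch
import torch.nn as nn

def fgsm_example(model, x, y, eps, loss_fn):
    # Craft a fast-gradient-sign adversarial example: take one gradient
    # step on the input in the direction that increases the loss.
    x_adv = x.clone().detach().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

def adversarial_training_step(model, optimizer, x, y, eps=0.03):
    # One iteration of adversarial training: adversarial examples are
    # generated on the fly for every batch before the weight update,
    # which is what makes the procedure expensive and time-consuming.
    loss_fn = nn.CrossEntropyLoss()
    x_adv = fgsm_example(model, x, y, eps, loss_fn)
    optimizer.zero_grad()
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()

Entropic retraining, as described in the abstract, aims for a comparable robustness gain using only the original training data, i.e. without the per-batch example generation step sketched above.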

URL: http://publica.fraunhofer.de/dokumente/N-638756.html