Here you will find scientific publications from the Fraunhofer institutes.

Generation of adversarial examples to prevent misclassification of deep neural network based condition monitoring systems for cyber-physical production systems

Authors: Specht, Felix; Otto, Jens; Niggemann, Oliver; Hammer, Barbara

Postprint urn:nbn:de:0011-n-5256162 (1.1 MByte PDF)
MD5 Fingerprint: 73201b604e05d1833d306a9ce97e3397
© IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.
Created on: 8.1.2019

Institute of Electrical and Electronics Engineers (IEEE):
IEEE 16th International Conference on Industrial Informatics, INDIN 2018. Proceedings : 18-20 July 2018, Porto, Portugal
Piscataway, NJ: IEEE, 2018
ISBN: 978-1-5386-4829-2
ISBN: 978-1-5386-4828-5
ISBN: 978-1-5386-4830-8
International Conference on Industrial Informatics (INDIN) <16, 2018, Porto>
Conference Paper, Electronic Publication
Fraunhofer IOSB

Deep neural network based condition monitoring systems are used to detect system failures of cyber-physical production systems. However, deep neural networks are vulnerable to adversarial examples: manipulated inputs, e.g. process data, that mislead a deep neural network into misclassification. Adversarial example attacks can manipulate the physical production process of a cyber-physical production system without being recognized by the condition monitoring system. Such manipulation of the physical process poses a serious threat to production systems and employees. This paper introduces CyberProtect, a novel approach to preventing misclassification caused by adversarial example attacks. CyberProtect generates adversarial examples and uses them to retrain deep neural networks, which results in a hardened deep neural network with a significantly reduced misclassification rate. Empirical results show that the proposed countermeasure increases the classification rate from 20% to 82%.
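The paper's own pipeline is not reproduced here. As a minimal sketch of the general idea the abstract describes (craft adversarial examples against a trained classifier, then retrain on them), the toy below uses a logistic-regression "condition monitor" on synthetic two-cluster process data and the fast gradient sign method (FGSM) as the attack; all names, data, and hyperparameters are illustrative assumptions, not the authors' setup:

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    # Probability that input x belongs to the "faulty" class.
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def input_gradient(w, b, x, y):
    # d(cross-entropy loss)/dx for logistic regression: (p - y) * w
    p = predict(w, b, x)
    return [(p - y) * wi for wi in w]

def fgsm(w, b, x, y, eps=0.4):
    # Fast gradient sign method: step eps in the sign of the input gradient,
    # i.e. in the direction that increases the classifier's loss.
    g = input_gradient(w, b, x, y)
    return [xi + eps * (1 if gi > 0 else -1 if gi < 0 else 0)
            for xi, gi in zip(x, g)]

def train(X, Y, lr=0.1, epochs=300):
    # Plain batch gradient descent on the logistic loss.
    w, b, n = [0.0, 0.0], 0.0, len(Y)
    for _ in range(epochs):
        gw, gb = [0.0, 0.0], 0.0
        for x, y in zip(X, Y):
            err = predict(w, b, x) - y
            gw = [gwi + err * xi for gwi, xi in zip(gw, x)]
            gb += err
        w = [wi - lr * gwi / n for wi, gwi in zip(w, gw)]
        b -= lr * gb / n
    return w, b

def accuracy(w, b, X, Y):
    return sum((predict(w, b, x) > 0.5) == (y == 1.0)
               for x, y in zip(X, Y)) / len(Y)

# Toy "process data": two Gaussian clusters (normal vs. faulty condition).
X = [[random.gauss(-1, 0.5), random.gauss(-1, 0.5)] for _ in range(100)] + \
    [[random.gauss(+1, 0.5), random.gauss(+1, 0.5)] for _ in range(100)]
Y = [0.0] * 100 + [1.0] * 100

w, b = train(X, Y)

# Step 1: craft adversarial examples against the trained classifier.
X_adv = [fgsm(w, b, x, y) for x, y in zip(X, Y)]

# Step 2: retrain on clean + adversarial data (the hardening step).
w2, b2 = train(X + X_adv, Y + Y)
```

The same two-step structure carries over to deep networks, where the input gradient is obtained by backpropagation instead of the closed-form `(p - y) * w`.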