Title: Generation of adversarial examples to prevent misclassification of deep neural network based condition monitoring systems for cyber-physical production systems
Authors: Specht, Felix; Otto, Jens; Niggemann, Oliver; Hammer, Barbara
Document type: Conference paper
Rights: Under Copyright
Publication year: 2018
Record dates: 2022-03-14; 8.1.2019
Handle: https://publica.fraunhofer.de/handle/publica/402945
DOI: 10.1109/INDIN.2018.8472060
DOI (Fraunhofer Publica): 10.24406/publica-r-402945
Language: en
DDC subject classifications: 004; 670

Abstract: Deep neural network based condition monitoring systems are used to detect system failures of cyber-physical production systems. However, deep neural networks are vulnerable to adversarial examples: manipulated inputs, e.g. process data, that mislead a deep neural network into misclassification. Adversarial example attacks can manipulate the physical production process of a cyber-physical production system without being recognized by the condition monitoring system. Manipulation of the physical process poses a serious threat to production systems and employees. This paper introduces CyberProtect, a novel approach to prevent misclassification caused by adversarial example attacks. CyberProtect generates adversarial examples and uses them to retrain deep neural networks. This results in a hardened deep neural network with a significantly reduced misclassification rate. Empirical results show that the proposed countermeasure increases the classification rate from 20% to 82%.
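
The abstract describes hardening by generating adversarial examples and retraining on them. The sketch below illustrates that general pattern only; it is not the paper's CyberProtect implementation. The use of PyTorch, the FGSM attack, the epsilon value, and the toy 20-feature classifier are all assumptions made for illustration.

```python
# Minimal sketch of adversarial-example generation plus retraining, assuming a
# PyTorch classifier over fixed-length process-data windows and FGSM as the
# attack. The paper's actual attack, model, and parameters are not given here.
import torch
import torch.nn as nn


def fgsm_example(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                 epsilon: float = 0.05) -> torch.Tensor:
    """Generate adversarial examples with the fast gradient sign method (assumed attack)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Perturb the input in the direction that increases the loss.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()


def harden(model: nn.Module, loader, epochs: int = 5, lr: float = 1e-3) -> None:
    """Retrain the model on a mix of clean and adversarial samples."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            x_adv = fgsm_example(model, x, y)
            optimizer.zero_grad()
            # Joint loss: keep accuracy on clean data, gain robustness on adversarial data.
            loss = (nn.functional.cross_entropy(model(x), y)
                    + nn.functional.cross_entropy(model(x_adv), y))
            loss.backward()
            optimizer.step()


if __name__ == "__main__":
    # Toy stand-in for a condition-monitoring classifier: 20 process features,
    # two classes (normal / faulty). Real process data would replace this.
    model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
    x = torch.randn(256, 20)
    y = torch.randint(0, 2, (256,))
    loader = torch.utils.data.DataLoader(
        torch.utils.data.TensorDataset(x, y), batch_size=32, shuffle=True)
    harden(model, loader)
```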