Bunzel, Niklas (2024). Patching the Cracks: Detecting and Addressing Adversarial Examples in Real-World Applications. Conference paper.

DOI: 10.1109/DSN-S60304.2024.00020
Handle: https://publica.fraunhofer.de/handle/publica/475678
Scopus: 2-s2.0-85203846920
Date available: 2024-09-25
Language: en

Abstract: Neural networks, essential for high-security tasks such as autonomous vehicles and facial recognition, are vulnerable to attacks that alter model predictions through small input perturbations. This paper outlines current and future research on detecting real-world adversarial attacks. We present a framework for detecting transferred black-box attacks and a novel method for identifying adversarial patches without prior training, focusing on high-entropy regions. In addition, we investigate the effectiveness and resilience of 3D adversarial attacks to environmental factors.

Keywords: Adversarial 3D Objects and Patches; Adversarial Attacks; Detection; Image Classification; Object Detection
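The abstract mentions locating adversarial patches by focusing on high-entropy regions. As an illustration only, not the paper's actual detector, the sketch below scores sliding windows of a grayscale image by the Shannon entropy of their intensity histogram and flags windows above a cutoff; the window size, bin count, stride, and threshold are hypothetical parameters chosen for the example.

```python
import numpy as np
from scipy.stats import entropy

def local_entropy_scores(gray, window=32, stride=16, bins=64):
    """Score sliding windows of a grayscale image (uint8, HxW) by the
    Shannon entropy of their intensity histogram. Returns a list of
    ((y, x), score) pairs. Illustrative heuristic only."""
    h, w = gray.shape
    scores = []
    for y in range(0, h - window + 1, stride):
        for x in range(0, w - window + 1, stride):
            patch = gray[y:y + window, x:x + window]
            hist, _ = np.histogram(patch, bins=bins, range=(0, 255))
            # scipy normalizes the counts to a probability distribution
            scores.append(((y, x), entropy(hist + 1e-12)))
    return scores

def flag_high_entropy_regions(gray, threshold=3.5, **kwargs):
    """Return top-left corners of windows whose entropy exceeds a
    (hypothetical) threshold -- candidate adversarial-patch regions."""
    return [pos for pos, s in local_entropy_scores(gray, **kwargs) if s > threshold]

if __name__ == "__main__":
    # Synthetic example: a smooth image with one high-variance square pasted in.
    rng = np.random.default_rng(0)
    img = np.full((224, 224), 120, dtype=np.uint8)
    img[80:144, 80:144] = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
    print(flag_high_entropy_regions(img))  # windows overlapping the noisy square
```

This kind of training-free scoring only localizes conspicuous regions; how candidate regions are verified and mitigated is described in the paper itself.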