August 24, 2025
Conference Paper
Title

Adversarial Patch Robustness against Occlusion: A case study

Abstract
Neural networks have demonstrated remarkable success in tasks like image classification and object detection. However, concerns persist regarding their security and robustness. Even state-of-the-art object detectors are vulnerable to localized patch attacks, which can result in safety-critical failures. In such attacks, adversaries introduce a small, subtle adversarial patch into an image, causing detectors to either overlook real objects or identify nonexistent ones. These patches often lead even the most advanced detectors to make highly confident yet erroneous predictions, and their potential real-world consequences amplify the seriousness of these concerns. This paper presents a study of the robustness of patch attacks to occlusion. We evaluated patch attacks using the APRICOT dataset and a set of COCO images attacked with Robust DPatch, testing their performance against occlusions of various sizes and colors in both clipped and unclipped conditions. Moreover, our study demonstrates that digitally applied occlusions can act as a defense mechanism by neutralizing adversarial patches once they have been localized. Simple occlusion is also shown to be a computationally more efficient mitigation strategy than inpainting, and it effectively reduces the hallucinations and false detections or classifications that frequently occur with diffusion-based inpainting methods.
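The occlusion defense described in the abstract can be illustrated with a minimal sketch. Assuming the adversarial patch has already been localized as a bounding box (the function name, coordinates, and fill color below are illustrative, not taken from the paper), the mitigation amounts to overwriting the patch region with a constant color before running the detector:

    import numpy as np

    def occlude_patch(image: np.ndarray,
                      bbox: tuple[int, int, int, int],
                      color: tuple[int, int, int] = (128, 128, 128)) -> np.ndarray:
        """Neutralize a localized adversarial patch by painting over it.

        image: H x W x 3 uint8 array.
        bbox:  (x0, y0, x1, y1) pixel coordinates of the localized patch.
        color: constant RGB fill used as the occluder.
        """
        x0, y0, x1, y1 = bbox
        out = image.copy()
        out[y0:y1, x0:x1] = color  # constant-color occlusion; no inpainting model needed
        return out

    # Hypothetical usage: occlude a 100x100 patch localized at (50, 60),
    # then pass `defended` to the object detector instead of `image`.
    image = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
    defended = occlude_patch(image, (50, 60, 150, 160))

Unlike diffusion-based inpainting, this requires no model inference over the masked region, which is the source of the efficiency gain the abstract reports.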
Author(s)
Bunzel, Niklas  
Fraunhofer-Institut für Sichere Informationstechnologie SIT  
Gelbing, Erik  
Fraunhofer-Institut für Sichere Informationstechnologie SIT  
Mainwork
SecTL 2025, 3rd ACM Workshop on Secure and Trustworthy Deep Learning Systems. Proceedings  
Conference
Workshop on Secure and Trustworthy Deep Learning Systems 2025  
Asia Conference on Computer and Communications Security 2025  
DOI
10.1145/3709021.3737664
Language
English
Institute
Fraunhofer-Institut für Sichere Informationstechnologie SIT