Year
2026
Document Type
Book Article
Title

Statistical Feature-Based Detection of Adversarial Noise and Patch Attacks in Image and Deepfake Analysis

Abstract
Adversarial attacks pose a significant threat to the reliability and trustworthiness of machine learning systems, particularly in image classification tasks like deepfake detection. This chapter presents a comprehensive approach to detecting two prominent types of adversarial attacks: noise perturbation attacks and patch-based attacks. Using the ImageNet classification task as a primary use case, we investigate methods based on statistical features for identifying adversarial noise across a diverse range of attacks. These methods are designed to detect subtle changes in image distributions caused by adversarial manipulations, offering a lightweight and interpretable solution for adversarial attack detection that can be part of a multi-class detector framework. Building upon this foundation, the chapter explores the security-critical application of deepfake detection. Here, patch-based attacks are examined in depth. The proposed detection framework leverages statistical and spatial features to identify patch artifacts, ensuring robustness against these localized yet highly effective attacks. Our analysis compares the effectiveness of statistical detectors across multiple adversarial attack types and evaluates their performance in real-world scenarios. By addressing both noise perturbation and patch attacks, this chapter provides actionable insights and tools for enhancing the security of machine learning systems deployed in high-stakes applications, bridging the gap between theory and practice in adversarial defense.
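The abstract describes lightweight, interpretable detection based on per-image statistical features. As a purely illustrative sketch (not the chapter's actual method), the snippet below computes simple distributional statistics of a high-frequency residual, the kind of feature a statistical detector might use to flag adversarial noise; the residual choice, the specific statistics, and the demonstration setup are assumptions made for illustration.

```python
# Illustrative sketch only; the chapter's concrete feature set, thresholds, and
# classifier are not specified here and are assumed for demonstration.
import numpy as np


def high_frequency_residual(image: np.ndarray) -> np.ndarray:
    """Cheap high-pass proxy: first-order pixel differences along both image axes."""
    img = image.astype(np.float64)
    dx = np.diff(img, axis=1)
    dy = np.diff(img, axis=0)
    return np.concatenate([dx.ravel(), dy.ravel()])


def statistical_features(image: np.ndarray) -> np.ndarray:
    """Distributional statistics (std, skewness, kurtosis) of the residual.

    Adversarial noise tends to perturb the high-frequency band, shifting these
    statistics relative to clean images.
    """
    r = high_frequency_residual(image)
    mean = r.mean()
    std = r.std() + 1e-12
    skew = np.mean(((r - mean) / std) ** 3)
    kurt = np.mean(((r - mean) / std) ** 4)
    return np.array([std, skew, kurt])


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = rng.random((224, 224, 3))                    # stand-in for a real image
    noise = rng.uniform(-8 / 255, 8 / 255, clean.shape)  # L_inf-bounded perturbation
    perturbed = np.clip(clean + noise, 0.0, 1.0)
    print("clean features:    ", statistical_features(clean))
    print("perturbed features:", statistical_features(perturbed))
```

In practice, such features would be computed for a corpus of clean and attacked images and passed to a simple classifier (e.g. logistic regression), keeping the detector lightweight and interpretable as the abstract emphasizes; patch-based attacks would additionally call for spatially localized statistics to pinpoint patch artifacts.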
Author(s)
Bunzel, Niklas  
Fraunhofer-Institut für Sichere Informationstechnologie SIT  
Frick, Raphael
Fraunhofer-Institut für Sichere Informationstechnologie SIT  
Graner, Lukas
Fraunhofer-Institut für Sichere Informationstechnologie SIT  
Göller, Nicolas  
Fraunhofer-Institut für Sichere Informationstechnologie SIT  
Steinebach, Martin  
Fraunhofer-Institut für Sichere Informationstechnologie SIT  
Mainwork
Adversarial Example Detection and Mitigation Using Machine Learning  
DOI
10.1007/978-3-031-99447-0_14
Language
English
Institute
Fraunhofer-Institut für Sichere Informationstechnologie SIT