Year
2025
Type
Report
Title

Detection of Images Generated by Multi-Modal Models

Abstract
This report provides a comprehensive analysis of advancements in artificial image generation, deepfake detection, and explainable AI (XAI) techniques for this task. It was prepared by Fraunhofer IOSB in collaboration with the German Federal Office for Information Security (BSI) to address the challenges posed by AI-generated image content. The report was produced as part of BSI's Project 658 (RealOrRender) and presents the results of work package 2.

Recent years have seen significant progress in generative AI models for image generation, which are now capable of producing highly realistic images. While these models have beneficial applications in areas such as art and medical imaging, they also raise ethical concerns, particularly regarding deepfakes, which can be used to spread misinformation and undermine trust. The report reviews the state of the art in generative models, highlighting their capabilities and the challenges they present.

It then explores techniques for detecting deepfakes, emphasizing the importance of robust methods that can identify AI-generated content. A key result is the developed hybrid detection approach, which combines reconstruction-error-based and classification techniques to improve accuracy and generalization. The detection methods are evaluated on a newly created, diverse dataset, on which the proposed hybrid model outperforms existing state-of-the-art approaches.

Furthermore, the report investigates XAI techniques applicable to deepfake detection and introduces a hybrid method that combines established approaches. These techniques are evaluated against a range of metric-based criteria; the results indicate that certain methods, including the proposed hybrid method, excel in explanatory quality, while others fall short in explaining complex detection models.

The work also incorporates human-centered evaluations, assessing trust, helpfulness, and persuasiveness through user studies. In conclusion, the report underscores the importance of developing detection systems that are not only accurate but also explainable, thereby enhancing trust.
Author(s)
Burkart, Nadia  
Fraunhofer-Institut für Optronik, Systemtechnik und Bildauswertung IOSB  
Specker, Andreas  
Fraunhofer-Institut für Optronik, Systemtechnik und Bildauswertung IOSB  
Golda, Thomas
Fraunhofer-Institut für Optronik, Systemtechnik und Bildauswertung IOSB  
Veerappa, Manjunatha  
Fraunhofer-Institut für Optronik, Systemtechnik und Bildauswertung IOSB  
Alveen, Dominik
Wilhelm, Anna
Publisher
Federal Office for Information Security
File(s)
Download (5.12 MB)
Rights
Use according to copyright law
DOI
10.24406/publica-4826
Language
English