Fraunhofer-Gesellschaft
April 11, 2025
Conference Paper
Title

Internal Activation Revision: Safeguarding Vision Language Models Without Parameter Update

Abstract
Warning: This paper contains offensive content that may disturb some readers. Vision-language models (VLMs) demonstrate strong multimodal capabilities but have been found to be more susceptible to generating harmful content compared to their backbone large language models (LLMs). Our investigation reveals that the integration of images significantly shifts the model's internal activations during the forward pass, diverging from those triggered by textual input. Moreover, the safety alignments of LLMs embedded within VLMs are not sufficiently robust to handle these activation discrepancies, making the models vulnerable to even the simplest jailbreaking attacks. To address this issue, we propose an internal activation revision approach that efficiently revises activations during generation, steering the model toward safer outputs. Our framework incorporates revisions at both the layer and head levels, offering control over the model's generation at varying levels of granularity. In addition, we explore three strategies for constructing positive and negative samples and two approaches for extracting revision vectors, resulting in different variants of our method. Comprehensive experiments demonstrate that the internal activation revision method significantly improves the safety of widely used VLMs, reducing attack success rates by an average of 48.94%, 34.34%, 43.92%, and 52.98% on SafeBench, Safe-Unsafe, Unsafe, and MM-SafetyBench, respectively, while minimally impacting model helpfulness.
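The core idea described above — extracting a revision vector from positive (safe) and negative (harmful) samples and applying it to internal activations during generation — can be sketched with a simplified difference-in-means variant. This is not the paper's exact formulation (the paper revises at both layer and head level and explores several extraction strategies); the function names, the scaling factor `alpha`, and the toy activation data below are all illustrative assumptions.

```python
import numpy as np

def revision_vector(pos_acts, neg_acts):
    # Difference-in-means revision vector at one layer: mean activation on
    # positive (safe) samples minus mean activation on negative (harmful) ones.
    return pos_acts.mean(axis=0) - neg_acts.mean(axis=0)

def revise(hidden, v, alpha=1.0):
    # Shift a hidden state along the revision direction during generation;
    # alpha controls the strength of the intervention.
    return hidden + alpha * v

rng = np.random.default_rng(0)
d = 8                                # toy hidden-state dimension
safe_dir = np.ones(d)                # hypothetical "safety" direction
pos = rng.normal(size=(16, d)) + safe_dir   # activations on safe samples
neg = rng.normal(size=(16, d)) - safe_dir   # activations on harmful samples

v = revision_vector(pos, neg)
h = rng.normal(size=d)               # a hidden state mid-generation
h_revised = revise(h, v, alpha=0.5)

# The revised state projects more strongly onto the safe direction.
print(h_revised @ safe_dir > h @ safe_dir)  # → True
```

In a real VLM this shift would be applied inside the forward pass (e.g. via a hook on a transformer layer or attention head) rather than to standalone vectors, which is what makes the approach training-free: no parameters are updated.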
Author(s)
Li, Qing
Geng, Jiahui
Zhu, Derui
Chen, Zongxiong
Fraunhofer-Institut für Offene Kommunikationssysteme FOKUS  
Song, Kun
Ma, Lei
Karray, Fakhri
Mainwork
39th AAAI Conference on Artificial Intelligence 2025. Proceedings. No.26: AAAI-25 Special Track on AI Alignment
Conference
Conference on Artificial Intelligence 2025  
Conference on Innovative Applications of Artificial Intelligence 2025  
Symposium on Educational Advances in Artificial Intelligence 2025  
Open Access
DOI
10.1609/aaai.v39i26.34954
Language
English
Keyword(s)
  • Integration of images
  • Language model
  • Multi-modal
  • Negative samples
  • Simple++
