Fraunhofer-Gesellschaft
2025
Conference Paper
Title

Evasion Attacks in Continual Learning

Abstract
Continual learning (CL) enables machine learning models to adapt to evolving tasks while addressing challenges such as catastrophic forgetting. However, it inherits vulnerabilities from conventional settings, notably evasion attacks where adversarial perturbations degrade model performance. This study investigates the impact of evasion attacks, specifically Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD), in a class incremental continual learning scenario using the CIFAR-10 dataset. Results show that adversarial examples generated during training largely retain their effectiveness across CL steps, demonstrating transferability over time. Their success varies depending on the similarity between newly introduced and previously learned classes, sometimes increasing or decreasing accordingly. Adversarial training, adapted for the CL setting, is also evaluated. While it improves robustness against specific attacks (mean gain ~30%), it introduces trade-offs such as reduced accuracy on benign inputs and potential overfitting to adversarial examples. These findings highlight the challenge of balancing robustness, generalization, and efficiency, and emphasize the importance of understanding how adversarial examples transfer across tasks in continual learning.
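The abstract's central attack, the Fast Gradient Sign Method, perturbs an input by a step of size ε in the direction of the sign of the loss gradient. As a minimal illustration (not the paper's implementation), the sketch below applies FGSM to a toy linear model with a logistic loss whose gradient is computed by hand; all names and values here are illustrative assumptions:

```python
# FGSM sketch: x_adv = x + eps * sign(dL/dx)
# Toy setting: logistic loss L = log(1 + exp(-y * w.x)) on a linear model,
# with the input gradient derived analytically rather than by autodiff.
import math

def fgsm_perturb(x, w, y, eps):
    """Perturb input x to increase the logistic loss of a linear model.

    x, w : lists of floats (input and weight vector)
    y    : label in {-1, +1}
    eps  : L-infinity perturbation budget
    """
    margin = y * sum(wi * xi for wi, xi in zip(w, x))
    # dL/dx = -y * sigmoid(-margin) * w  for L = log(1 + exp(-margin))
    coef = -y / (1.0 + math.exp(margin))
    grad = [coef * wi for wi in w]
    sign = lambda g: (g > 0) - (g < 0)  # -1, 0, or +1
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

# Example: each coordinate moves by exactly eps toward higher loss.
x_adv = fgsm_perturb([0.5, -0.2], w=[1.0, -1.0], y=1, eps=0.1)
```

PGD, the abstract's second attack, iterates this sign step several times with a smaller step size, projecting back into the ε-ball after each iteration.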
Author(s)
Bunzel, Niklas  
Fraunhofer-Institut für Sichere Informationstechnologie SIT  
Schwarte, Aino
Technische Universität Darmstadt
Mainwork
IEEE 24th International Conference on Trust, Security and Privacy in Computing and Communications, TrustCom 2025. Proceedings  
Funder
Bundesministerium für Forschung, Technologie und Raumfahrt  
Conference
International Conference on Trust, Security and Privacy in Computing and Communications 2025  
DOI
10.1109/Trustcom66490.2025.00292
Language
English
Keyword(s)
  • Adversarial Machine Learning
  • Continual Learning
  • Evasion Attacks
  • Transferability
