2024
Conference Paper
Title
Towards an Empirical Robustness Assessment Through Measuring Adversarial Subspaces
Abstract
Machine learning (ML) paves the way for innovative applications in various domains. However, adversarial examples pose a significant threat to the robustness of ML models, which hinders their use, for instance, in safety-critical applications. The arms race between attacks and defenses against adversarial examples has received much attention, whereas the analysis of how to measure robustness has received little. Robustness scores provide a means to estimate safe regions in the input space for which no adversarial examples exist for a given model. However, these methods often do not scale. On the other hand, empirical investigations have yielded the insight that adversarial examples are not isolated points in the input space but form contiguous subspaces. In this paper, we contribute to these investigations with methods for empirically identifying the extent of adversarial subspaces by analyzing their boundaries. To that end, we apply two methods to measure these boundaries and draw conclusions about the shape, extent, and distribution of adversarial subspaces within the input space. The presented results are a first step towards an efficient and scalable empirical measurement of adversarial subspaces, aiming to quantify the robustness of an ML model in cases where formal verification is not feasible. To the best of our knowledge, this is the first empirical investigation of the extent of adversarial subspaces. We illustrate our results on the OpenSky dataset and identify the challenges in assessing the robustness of ML models.
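The abstract does not specify the two boundary-measurement methods, so the following is only an illustrative sketch, not the paper's approach: one generic way to probe the extent of an adversarial region is a bisection line search along random directions starting from an adversarial example, recording the distance at which the model's prediction flips back. All function names, parameters, and the toy classifier below are hypothetical.

```python
# Hypothetical sketch (not the paper's method): estimating the local extent of an
# adversarial region by line search along random directions from an adversarial example.
import numpy as np


def boundary_distance(predict, x_adv, direction, adv_label, max_radius=1.0, steps=20):
    """Bisection search for the radius at which the prediction stops being adv_label."""
    lo, hi = 0.0, max_radius
    # If the prediction is still adversarial at max_radius, report max_radius as a lower bound.
    if predict(x_adv + max_radius * direction) == adv_label:
        return max_radius
    for _ in range(steps):
        mid = (lo + hi) / 2.0
        if predict(x_adv + mid * direction) == adv_label:
            lo = mid  # still inside the adversarial region
        else:
            hi = mid  # crossed the boundary
    return lo


def estimate_extent(predict, x_adv, n_directions=100, max_radius=1.0, seed=0):
    """Sample random unit directions and collect boundary distances around x_adv."""
    rng = np.random.default_rng(seed)
    adv_label = predict(x_adv)
    dists = []
    for _ in range(n_directions):
        d = rng.normal(size=x_adv.shape)
        d /= np.linalg.norm(d)
        dists.append(boundary_distance(predict, x_adv, d, adv_label, max_radius))
    return np.array(dists)


if __name__ == "__main__":
    # Toy linear classifier standing in for the ML model under test.
    w = np.array([1.0, -1.0])
    predict = lambda x: int(np.dot(w, x) > 0)
    x_adv = np.array([0.2, 0.1])  # assumed adversarial example
    dists = estimate_extent(predict, x_adv, n_directions=50, max_radius=2.0)
    print("mean boundary distance:", dists.mean())
    print("min/max:", dists.min(), dists.max())
```

The distribution of the collected distances gives a rough picture of the shape and extent of the adversarial region around the starting point, in the spirit of the empirical analysis described above.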
Author(s)