  • Publication
    Butterfly Transforms for Efficient Representation of Spatially Variant Point Spread Functions in Bayesian Imaging
    (2023)
    Eberle, Vincent; Frank, Philipp; Stadler, Julia; Enßlin, Torsten A.
    Bayesian imaging algorithms are becoming increasingly important in, e.g., astronomy, medicine and biology. Since many of these algorithms compute iterative solutions to high-dimensional inverse problems, the efficiency and accuracy of the instrument response representation are of high importance for the imaging process. For efficiency reasons, point spread functions, which make up a large fraction of the response functions of telescopes and microscopes, are usually assumed to be spatially invariant in a given field of view and can thus be represented by a convolution. For many instruments, this assumption does not hold and degrades the accuracy of the instrument representation. Here, we discuss the application of butterfly transforms, which are linear neural network structures whose sizes scale sub-quadratically with the number of data points. Butterfly transforms are efficient by design, since they are inspired by the structure of the Cooley–Tukey fast Fourier transform. In this work, we combine them in several ways into butterfly networks, compare the different architectures with respect to their performance, and identify an architecture suitable for efficiently representing a synthetic spatially variant point spread function up to a (Formula presented.) error. Furthermore, we show its application in a short synthetic example.
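    The structural idea behind the efficiency claim can be illustrated with a few lines of code. Below is a minimal NumPy sketch of a butterfly transform: log2(n) sparse stages, each mixing pairs of entries at a fixed stride as in the Cooley–Tukey FFT, so applying it costs O(n log n) instead of O(n^2) for a dense matrix. The function names, the random initialisation and the plain-loop implementation are illustrative assumptions, not the architecture evaluated in the paper.

```python
# Minimal sketch of a butterfly transform (not the paper's implementation):
# log2(n) sparse factors, each mixing pairs of entries at a fixed stride,
# mirroring the data flow of the Cooley-Tukey FFT.
import numpy as np

def random_butterfly_factors(n, rng):
    """One 2x2 mixing block per pair and per stage -> O(n log n) parameters."""
    assert (n & (n - 1)) == 0, "n must be a power of two"
    stages = int(np.log2(n))
    return rng.normal(size=(stages, n // 2, 2, 2))

def butterfly_apply(factors, x):
    """Apply the butterfly transform to a vector x of length n in O(n log n)."""
    n = x.size
    y = x.copy()
    for s, blocks in enumerate(factors):
        stride = 1 << s                      # distance between paired entries
        out = np.empty_like(y)
        b = 0
        for start in range(0, n, 2 * stride):
            for j in range(stride):
                i0, i1 = start + j, start + j + stride
                m = blocks[b]                # 2x2 mixing block for this pair
                out[i0] = m[0, 0] * y[i0] + m[0, 1] * y[i1]
                out[i1] = m[1, 0] * y[i0] + m[1, 1] * y[i1]
                b += 1
        y = out
    return y

rng = np.random.default_rng(0)
n = 8
factors = random_butterfly_factors(n, rng)
x = rng.normal(size=n)
print(butterfly_apply(factors, x))           # dense equivalent would cost O(n^2)
```

    Stacking several such transforms into larger networks, as done in the paper, keeps the sub-quadratic scaling while increasing expressiveness.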
  • Publication
    Counteract Side-Channel Analysis of Neural Networks by Shuffling
    (2022)
    Brosch, M.; Probst, M.
    Machine learning is becoming an essential part of almost every electronic device. Implementations of neural networks are mostly optimized for computational performance or memory footprint. Nevertheless, security is also important in order to keep the network secret and to protect the intellectual property associated with the network. Especially since neural network implementations have been demonstrated to be vulnerable to side-channel analysis, powerful and computationally cheap countermeasures are in demand. In this work, we apply a shuffling countermeasure to a microcontroller implementation of a neural network to prevent side-channel analysis. The countermeasure is effective while the computational overhead is low. We investigate the extensions necessary for our countermeasure and show how shuffling increases the effort for an attack in theory. In addition, we demonstrate the increase in effort for an attacker through experiments on real side-channel measurements. Based on the mechanism of shuffling and our experimental results, we conclude that an attack on a commonly used neural network with shuffling is no longer feasible in a reasonable amount of time.
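    As a rough illustration of the countermeasure, the sketch below shuffles the order in which the output neurons of a dense layer are computed on each inference: the result is unchanged, but the side-channel leakage of a given weight no longer appears at a fixed point in time, which raises the number of traces an attacker needs. The NumPy layer and its dimensions are illustrative assumptions; the paper targets a microcontroller implementation of a full network.

```python
# Minimal sketch of shuffling as a side-channel countermeasure (illustrative,
# not the paper's microcontroller code): evaluate output neurons in a fresh
# random order on every inference.
import numpy as np

def dense_layer_shuffled(x, W, b, rng):
    """Compute W @ x + b, evaluating the output neurons in random order."""
    n_out = W.shape[0]
    order = rng.permutation(n_out)        # fresh permutation per inference
    y = np.empty(n_out)
    for i in order:                       # same result, randomised schedule
        y[i] = W[i] @ x + b[i]
    return y

rng = np.random.default_rng(1)
W = rng.normal(size=(4, 3))
b = rng.normal(size=4)
x = rng.normal(size=3)
print(dense_layer_shuffled(x, W, b, rng))  # identical to the unshuffled result
print(W @ x + b)
```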
  • Publication
    The Influence of Training Parameters on Neural Networks' Vulnerability to Membership Inference Attacks
    (2022)
    Bouanani, Oussama
    With Machine Learning (ML) models being increasingly applied in sensitive domains, the related privacy concerns are rising. Neural networks (NNs) are vulnerable to so-called membership inference attacks (MIAs), which aim at determining whether a particular data sample was used for training the model. The factors that render NNs prone to this privacy attack are not yet fully understood. However, previous work suggests that the setup of the models and the training process might impact a model's risk to MIAs. To investigate these factors in more detail, we set out to experimentally evaluate the influence of the training choices in NNs on the models' vulnerability. Our analyses highlight that the batch size, the activation function, and the application and placement of batch normalization and dropout have the highest impact on the success of MIAs. Additionally, we applied statistical analyses to the experimental results and found a strong positive correlation between a model's ability to resist MIAs and its generalization capacity. We also defined a metric to measure the difference between the distributions of loss values of member and non-member data samples and observed that models scoring higher on that metric were consistently more exposed to the attack. The latter observation was further confirmed by manually generating predictions for member and non-member samples that produce loss values within specific distributions and launching MIAs on them.
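    The link between loss distributions and attack success can be illustrated with a simple loss-threshold attack. The sketch below computes a balanced attack accuracy for a threshold attacker and a crude separation score between member and non-member loss distributions; the synthetic loss values and the particular score are illustrative assumptions, not the metric defined in the paper.

```python
# Minimal sketch of a loss-threshold membership inference attack and of a
# simple separation score between member and non-member loss distributions
# (synthetic data; illustrative only).
import numpy as np

def mia_threshold_accuracy(member_losses, nonmember_losses, threshold):
    """Predict 'member' whenever the per-sample loss is below the threshold."""
    tp = np.mean(member_losses < threshold)      # members correctly accepted
    tn = np.mean(nonmember_losses >= threshold)  # non-members correctly rejected
    return 0.5 * (tp + tn)                       # balanced attack accuracy

def separation_score(member_losses, nonmember_losses):
    """Gap between mean losses, scaled by the pooled standard deviation."""
    pooled = np.std(np.concatenate([member_losses, nonmember_losses]))
    return (np.mean(nonmember_losses) - np.mean(member_losses)) / pooled

rng = np.random.default_rng(2)
member = rng.gamma(shape=1.0, scale=0.2, size=1000)     # low training losses
nonmember = rng.gamma(shape=2.0, scale=0.5, size=1000)  # higher unseen-data losses
print(separation_score(member, nonmember))
print(mia_threshold_accuracy(member, nonmember, threshold=0.4))
```

    The larger the separation between the two loss distributions, the better such a threshold attacker performs, which mirrors the correlation reported in the abstract.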
  • Publication
    Rectifying adversarial inputs using XAI techniques
    (2022)
    Kao, Ching-Yu Franziska; Chen, Junhao; Markert, Karla
    With deep neural networks (DNNs) involved in more and more decision-making processes, critical security problems can occur when DNNs give wrong predictions. Such wrong predictions can be forced with so-called adversarial attacks. These attacks modify the input in such a way that they fool a neural network into a false classification, while the changes remain imperceptible to a human observer. Even for very specialized AI systems, adversarial attacks are still hardly detectable. The current state-of-the-art adversarial defenses can be classified into two categories, pro-active defense and passive defense, both unsuitable for quick rectification: pro-active defense methods aim to correct the input data so that adversarial samples are classified correctly, but they reduce the accuracy on ordinary samples, while passive defense methods aim to filter out and discard the adversarial samples. Neither of these defense mechanisms is suitable for the setup of autonomous driving: when an input has to be classified, we can neither discard the input nor do we have the time for computationally expensive corrections. This motivates our method based on explainable artificial intelligence (XAI) for the correction of adversarial samples. We use two XAI interpretation methods to correct adversarial samples and experimentally compare this approach with baseline methods. Our analysis shows that the proposed method outperforms state-of-the-art approaches.
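    A highly simplified version of saliency-guided rectification is sketched below: compute an attribution map for the predicted class, treat the most salient pixels as the likely carriers of the adversarial perturbation, and replace them with a local median before re-classifying. The toy linear classifier and the top-k/median heuristic are illustrative assumptions, not the two XAI interpretation methods used in the paper.

```python
# Minimal sketch of saliency-guided input rectification (illustrative only).
import numpy as np

def saliency_linear(W, x, cls):
    """For a linear score W @ x, the saliency of class `cls` is |W[cls]|."""
    return np.abs(W[cls]).reshape(x.shape)

def rectify(x, saliency, k=10, radius=1):
    """Replace the k most salient pixels by the median of their neighbourhood
    in the original image."""
    h, w = x.shape
    out = x.copy()
    flat_idx = np.argsort(saliency, axis=None)[-k:]   # k most salient pixels
    for idx in flat_idx:
        i, j = divmod(idx, w)
        patch = x[max(0, i - radius):i + radius + 1,
                  max(0, j - radius):j + radius + 1]
        out[i, j] = np.median(patch)                  # smooth the suspect pixel
    return out

rng = np.random.default_rng(3)
x = rng.random((8, 8))                     # toy "image"
W = rng.normal(size=(10, x.size))          # toy linear classifier
cls = int(np.argmax(W @ x.ravel()))        # predicted class before rectification
x_fixed = rectify(x, saliency_linear(W, x, cls))
print(np.count_nonzero(x_fixed != x))      # pixels actually changed
```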