  • Publication
    Chosen Ciphertext k-Trace Attacks on Masked CCA2 Secure Kyber
    (2021)
    Hamburg, Mike; Hermelink, Julius; Primas, Robert; Samardjiska, Simona; Schamberger, Thomas; Vredendaal, Christine van
    Single-trace attacks are a considerable threat to implementations of classic public-key schemes, and their implications on newer lattice-based schemes are still not well understood. Two recent works have presented successful single-trace attacks targeting the Number Theoretic Transform (NTT), which is at the heart of many lattice-based schemes. However, these attacks either require a quite powerful side-channel adversary or are restricted to specific scenarios such as the encryption of ephemeral secrets. It is still an open question if such attacks can be performed by simpler adversaries while targeting more common public-key scenarios. In this paper, we answer this question positively. First, we present a method for crafting ring/module-LWE ciphertexts that result in sparse polynomials at the input of inverse NTT computations, independent of the used private key. We then demonstrate how this sparseness can be incorporated into a side-channel attack, thereby significantly improving noise resistance of the attack compared to previous works. The effectiveness of our attack is shown on the use-case of CCA2 secure Kyber k-module-LWE, where k ∈ {2, 3, 4}. Our k-trace attack on the long-term secret can handle noise up to σ < 1.2 in the noisy Hamming weight leakage model, also for masked implementations. A 2k-trace variant for Kyber1024 even allows noise σ < 2.2, also in the masked case, with more traces allowing us to recover keys up to σ < 2.7. Single-trace attack variants have a noise tolerance depending on the Kyber parameter set, ranging from σ < 0.5 to σ < 0.7. As a comparison, similar previous attacks in the masked setting were only successful with σ < 0.5.
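    The noise bounds above refer to the noisy Hamming weight leakage model, in which each side-channel sample is the Hamming weight of a processed intermediate plus Gaussian noise of standard deviation σ. A minimal simulation sketch of that model (the function names and the example coefficient are illustrative, not taken from the paper):

```python
import random

def hamming_weight(x: int) -> int:
    """Number of set bits in x."""
    return bin(x).count("1")

def noisy_hw_leakage(value: int, sigma: float) -> float:
    """One simulated side-channel sample: Hamming weight of the processed
    intermediate value plus Gaussian noise with standard deviation sigma."""
    return hamming_weight(value) + random.gauss(0.0, sigma)

# Example: one leakage sample for a hypothetical NTT coefficient at sigma = 1.2
print(noisy_hw_leakage(1234, sigma=1.2))
```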
  • Publication
    Privatsphäre und Maschinelles Lernen
    Every day we all generate large amounts of potentially sensitive data: words we type on our smartphones, products we buy online, health data we record in apps. All of these data have one thing in common: they are fed into machine-learning (ML) models at a wide variety of points. Using the correlations found in this "training data", the models can make ever more precise predictions about our behavior or other questions. For a long time, the assumption was that this process is a one-way street: because of the complex data processing inside ML models, training data can be fed in but cannot be recovered later. In recent years, however, it has been shown that targeted attacks against trained models can reveal information about the original data. Protecting privacy in ML models is therefore a topic of great importance, especially in light of the requirements of the General Data Protection Regulation (GDPR). It can and must be actively strengthened through the use of suitable methods such as differential privacy.
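    Differential privacy, mentioned at the end as one suitable protection method, is typically realized by adding noise calibrated to a query's sensitivity and a privacy budget ε. A minimal sketch of the classic Laplace mechanism (the function name and example values are chosen here purely for illustration):

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release true_value with Laplace noise of scale sensitivity/epsilon,
    the standard mechanism achieving epsilon-differential privacy."""
    return true_value + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)

# Example: a counting query (sensitivity 1) answered with privacy budget epsilon = 0.5
print(laplace_mechanism(true_value=42.0, sensitivity=1.0, epsilon=0.5))
```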
  • Publication
    Review of error correction for PUFs and evaluation on state-of-the-art FPGAs
    (2020)
    Kürzinger, Ludwig
    Efficient error correction and key derivation is a prerequisite to generate secure and reliable keys from PUFs. The most common methods can be divided into linear schemes and pointer-based schemes. This work compares the performance of several previous designs on an algorithmic level concerning the required number of PUF response bits, helper data bits, number of clock cycles, and FPGA slices for two scenarios. One targets the widely used key error probability of 10⁻⁶, while the other one requires a key error probability of 10⁻⁹. In addition, we provide a wide span of new implementation results on state-of-the-art Xilinx FPGAs and set them in context to old synthesis results on legacy FPGAs.
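    As background for the two key error probability targets, the failure rate of a linear scheme is commonly estimated from the per-block failure probability of a t-error-correcting code over n PUF response bits with bit error probability p. A sketch of that standard estimate (all concrete numbers in the example are illustrative and not taken from the paper):

```python
from math import comb

def block_error_prob(n: int, t: int, p: float) -> float:
    """Probability that more than t of n PUF response bits flip,
    i.e. that one block of a t-error-correcting code fails."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(t + 1, n + 1))

def key_error_prob(blocks: int, n: int, t: int, p: float) -> float:
    """Key error probability when the key is derived from `blocks` independent code blocks."""
    return 1.0 - (1.0 - block_error_prob(n, t, p)) ** blocks

# Illustrative example: 128 blocks of a length-13 repetition code (majority vote,
# corrects 6 errors) at a 10% PUF bit error rate
print(key_error_prob(blocks=128, n=13, t=6, p=0.10))
```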
  • Publication
    Retrofitting Leakage Resilient Authenticated Encryption to Microcontrollers
    (2020)
    Unterstein, Florian; Schamberger, Thomas; Tebelmann, Lars
    The security of Internet of Things (IoT) devices relies on fundamental concepts such as cryptographically protected firmware updates. In this context attackers usually have physical access to a device and therefore side-channel attacks have to be considered. This makes the protection of required cryptographic keys and implementations challenging, especially for commercial off-the-shelf (COTS) microcontrollers that typically have no hardware countermeasures. In this work, we demonstrate how unprotected hardware AES engines of COTS microcontrollers can be efficiently protected against side-channel attacks by constructing a leakage resilient pseudo random function (LR-PRF). Using this side-channel protected building block, we implement a leakage resilient authenticated encryption with associated data (AEAD) scheme that enables secured firmware updates. We use concepts from leakage resilience to retrofit side-channel protection on unprotected hardware AES engines by means of software-only modifications. The LR-PRF construction leverages frequent key changes and low data complexity together with key dependent noise from parallel hardware to protect against side-channel attacks. Contrary to most other protection mechanisms such as time-based hiding, no additional true randomness is required. Our concept relies on parallel S-boxes in the AES hardware implementation, a feature that is fortunately present in many microcontrollers as a measure to increase performance. In a case study, we implement the protected AEAD scheme for two popular ARM Cortex-M microcontrollers with differing parallelism. We evaluate the protection capabilities in realistic IoT attack scenarios, where non-invasive EM probes or power consumption measurements are employed by the attacker. We show that the concept provides the side-channel hardening that is required for the long-term security of IoT devices.
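    The LR-PRF idea described above can be pictured as a GGM-style key tree: every intermediate AES key encrypts at most two fixed public plaintexts (low data complexity) and is then immediately replaced (frequent key changes). A minimal software sketch of that general pattern, assuming the pycryptodome AES API; it does not reproduce the paper's exact construction or its reliance on parallel hardware S-boxes:

```python
from Crypto.Cipher import AES  # pycryptodome

P0 = bytes(16)             # two fixed, public plaintexts
P1 = bytes([0xFF] * 16)

def lr_prf(key: bytes, x: bytes) -> bytes:
    """Walk the input bit by bit: each intermediate key encrypts only one of
    two possible plaintexts before being replaced by the resulting ciphertext,
    limiting the data an attacker can average over per key."""
    k = key
    for byte in x:
        for i in range(8):
            bit = (byte >> (7 - i)) & 1
            k = AES.new(k, AES.MODE_ECB).encrypt(P1 if bit else P0)
    return k

# Example: derive a pseudorandom 128-bit value from a key and a public input
print(lr_prf(bytes(16), b"firmware-update!").hex())
```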
  • Publication
    Smart Intersections Improve Traffic Flow and Safety
    (2019)
    Striegel, Martin
    Smart intersections help to address increasing traffic density and improve road safety. By leveraging data from infrastructure sensors and combining and supplying those data to road users, the perception of those road users can be improved. This aids in protecting vulnerable road users (VRUs) and acts as a crucial building block for enabling automated and autonomous driving.