Towards a Formal Model for Quantifying Trust in Distributed Usage Control Systems

Wagner, Paul Georg

Full text: urn:nbn:de:0011-n-6086919 (337 KByte PDF)
MD5 Fingerprint: 753b23841a7511490815878ef6506dee
Created on: 17.11.2020

Beyerer, Jürgen (Ed.); Zander, Tim (Ed.):
Joint Workshop of Fraunhofer IOSB and Institute for Anthropomatics, Vision and Fusion Laboratory 2019. Proceedings: July 29 to August 2, 2019, Triberg-Nussbach, Germany
Karlsruhe: KIT Scientific Publishing, 2020 (Karlsruher Schriften zur Anthropomatik 45)
ISBN: 978-3-7315-1028-4
DOI: 10.5445/KSP/1000118012
Fraunhofer Institute of Optronics, System Technologies and Image Exploitation and Institute for Anthropomatics, Vision and Fusion Laboratory (Joint Workshop) <2019, Triberg-Nussbach>
Conference paper, electronic publication
Fraunhofer IOSB

Distributed usage control is a form of usage control that spans multiple domains and computer systems. As a result, the usage control components responsible for evaluating policies, gathering information, executing actions, and enforcing decisions are operated within the domains of different stakeholders with conflicting interests. To prevent malicious stakeholders from manipulating these components, remote attestation can be used to verify the integrity of their code base. In a distributed setting, however, it is not always apparent which sequence of attestations is necessary and which verifier should conduct them. Furthermore, it is unclear what impact a failed attestation has on the trustworthiness of the usage control system as a whole. To answer these questions, it is necessary to identify which agents need to trust each other in order to securely execute a given usage control function; the sequence of remote attestations occurring across the distributed usage control system can then be examined accordingly. In this work we develop a formal model that represents the trust relationships of distributed usage control systems with multiple collaborating actors. Based on the conducted attestations, we define simple binary and non-binary trust metrics that quantify the trust level a data owner can expect at a given point in time. Finally, we show how the model can be used to determine the level of trust reached in a real-world scenario.
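The contrast between a binary and a non-binary trust metric over a set of attestations can be sketched as follows. This is an illustrative toy example only, not the paper's actual formalism: the function names, the representation of attestations as (verifier, prover) pairs, and the component names are all assumptions made here for demonstration.

```python
# Illustrative sketch only -- the metric definitions below are simple
# placeholders, not the formal model developed in the paper.

def binary_trust(required_attestations, results):
    """Binary metric: trust holds only if every required remote
    attestation succeeded; a single failure voids trust entirely."""
    return all(results.get(a, False) for a in required_attestations)

def fractional_trust(required_attestations, results):
    """A simple non-binary metric: the fraction of required
    attestations that have succeeded so far."""
    if not required_attestations:
        return 1.0
    ok = sum(1 for a in required_attestations if results.get(a, False))
    return ok / len(required_attestations)

# Hypothetical scenario: the data owner attests the hosts running two
# usage control components (component names are assumptions).
required = [("owner", "pep_host"), ("owner", "pdp_host")]
results = {("owner", "pep_host"): True, ("owner", "pdp_host"): False}

print(binary_trust(required, results))      # -> False (one attestation failed)
print(fractional_trust(required, results))  # -> 0.5
```

The sketch mirrors the abstract's point that a failed attestation collapses a binary metric to "untrusted", while a non-binary metric still quantifies how much of the system remains verified.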