Towards a Formal Model for Quantifying Trust in Distributed Usage Control Systems
Distributed usage control is a form of usage control that spans multiple domains and computer systems. As a result, the usage control components responsible for evaluating policies, gathering information, executing actions, and enforcing decisions are operated within the domains of different stakeholders with conflicting interests. To prevent malicious stakeholders from manipulating these components, remote attestation can be used to verify the integrity of their code base. In a distributed setting, however, it is not always apparent which sequence of attestations is necessary and which verifier should conduct them. Furthermore, it is unclear what impact a failed attestation has on the trustworthiness of the usage control system as a whole. To answer these questions, it is necessary to identify which agents need to trust each other in order to securely execute a certain usage control function. The sequence of remote attestations that occur across the distributed usage control system can then be examined accordingly. In this work, we develop a formal model that represents the trust relationships of distributed usage control systems with multiple collaborating actors. Based on the conducted attestations, we define simple binary and non-binary trust metrics that quantify the trust level a data owner can expect at a certain point in time. Finally, we show how the model can be used to determine the level of trust reached in a real-world scenario.
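To illustrate the distinction between binary and non-binary trust metrics over attestation outcomes, the following sketch shows one plausible formulation. The `Attestation` record, the `weight` field, and the specific aggregation rules are illustrative assumptions for this sketch, not the paper's actual definitions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Attestation:
    """One remote attestation of a usage control component (illustrative)."""
    component: str   # component whose code base was attested, e.g. a policy decision point
    verifier: str    # agent that conducted the attestation
    succeeded: bool  # outcome of the integrity verification
    weight: float = 1.0  # relative importance of the component (assumption)

def binary_trust(attestations: list[Attestation]) -> int:
    """Binary metric: 1 only if every conducted attestation succeeded, else 0."""
    if not attestations:
        return 0  # no evidence yields no trust (assumption)
    return int(all(a.succeeded for a in attestations))

def weighted_trust(attestations: list[Attestation]) -> float:
    """Non-binary metric: weighted fraction of successful attestations in [0, 1]."""
    total = sum(a.weight for a in attestations)
    if total == 0:
        return 0.0
    return sum(a.weight for a in attestations if a.succeeded) / total
```

Under this sketch, a single failed attestation collapses the binary metric to 0, while the weighted metric degrades gracefully in proportion to the importance of the compromised component.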