Authors: Di Rienzo, Simone; Frattolillo, Francesco; Cipollone, Roberto; Fanti, Andrea; Brandizzi, Nicolo; Iocchi, Luca
Title: Developing Targeted Communication through a Trust Factor in Multi-Agent Reinforcement Learning
Type: conference paper
Published: 2024-06
Record date: 2024-12-16
URL: https://publica.fraunhofer.de/handle/publica/4807342
Scopus ID: 2-s2.0-85210324423
Language: en
Keywords: Computational Modeling; Multi-Agent Systems; Reinforcement Learning; Trust Factor

Abstract: The concept of trust has long been studied, initially in the context of human interactions and, more recently, in human-machine and human-agent interactions. Despite extensive study, defining trust remains challenging due to its inherent complexity and the diverse factors that influence its dynamics in multi-agent environments. This paper focuses on a specific formalization of one trust factor: predictive reliability, defined as the ability of agents to accurately forecast the actions of their peers in a shared environment. By realizing this trust factor within the framework of multi-agent reinforcement learning (MARL), we integrate it as a criterion for agents to assess and select collaborators. This approach enhances the functionality of MARL systems, promoting improved cooperation and overall effectiveness.
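
The abstract's notion of predictive reliability can be illustrated with a minimal sketch. This is a hypothetical illustration, not the paper's implementation: the class name `PredictiveTrust`, the exponential-moving-average update, and the decay value are all assumptions. It tracks, per peer, how often an agent's forecast of that peer's action matched the action actually taken, and selects as collaborator the peer it forecasts most reliably.

```python
# Hypothetical sketch of a predictive-reliability trust score (assumed
# design, not the paper's method): trust in a peer is an exponential
# moving average of forecast accuracy for that peer's actions.
from collections import defaultdict


class PredictiveTrust:
    """Per-peer trust as a running average of action-forecast accuracy."""

    def __init__(self, decay: float = 0.9):
        self.decay = decay                # weight on past accuracy (assumed value)
        self.score = defaultdict(float)   # peer id -> trust in [0, 1]

    def update(self, peer: str, predicted_action, actual_action) -> None:
        # 1.0 when the forecast matched the peer's actual action, else 0.0.
        hit = 1.0 if predicted_action == actual_action else 0.0
        self.score[peer] = self.decay * self.score[peer] + (1 - self.decay) * hit

    def select_collaborator(self, peers):
        # Choose the peer whose actions we forecast most reliably.
        return max(peers, key=lambda p: self.score[p])


# Toy usage: agent_b is forecast correctly 2 of 3 times, agent_c never.
trust = PredictiveTrust()
for predicted, actual in [("left", "left"), ("left", "right"), ("left", "left")]:
    trust.update("agent_b", predicted, actual)
trust.update("agent_c", "up", "down")
best = trust.select_collaborator(["agent_b", "agent_c"])
```

In a MARL setting such a score would typically be updated online from each agent's learned model of its peers, with the score then gating communication or collaborator selection.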