
Hybrid centralized-distributed resource allocation for device-to-device communication underlaying cellular networks

Maghsudi, S.; Stanczak, S.


IEEE Transactions on Vehicular Technology 65 (2016), No. 4, pp. 2481-2495
ISSN: 0018-9545
Funding: Deutsche Forschungsgemeinschaft (DFG), grant STA 864/3-3
Fraunhofer HHI

The basic idea of device-to-device (D2D) communication is that pairs of suitably selected wireless devices reuse the cellular spectrum to establish direct communication links, provided that the adverse effects of D2D communication on cellular users are minimized and that cellular users are given higher priority in using the limited wireless resources. Despite its great potential in terms of coverage and capacity performance, implementing this new concept poses challenges, particularly with respect to radio resource management; chief among them is the strong need for distributed D2D solutions that operate in the absence of precise channel and network knowledge. To address these challenges, this paper studies a resource allocation problem in a single-cell wireless network in which multiple D2D users share the available radio frequency channels with cellular users. We consider a realistic scenario in which the base station (BS) has strictly limited channel knowledge, while D2D and cellular users have no channel information at all. We prove a lower bound on the cellular aggregate utility in the downlink with fixed BS power, which allows the channel allocation and D2D power control problems to be decoupled. An efficient graph-theoretical approach is proposed for channel allocation, offering flexibility with respect to the allocation criterion (aggregate utility maximization, fairness, or quality-of-service (QoS) guarantees). The power control problem is modeled as a multiagent learning game. We show that the game, defined on a discrete strategy set, is an exact potential game with noisy rewards, and we characterize its set of Nash equilibria. Q-learning better-reply dynamics is then used to reach an equilibrium.
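The interplay of an exact potential game, noisy rewards, and Q-learning better-reply dynamics can be sketched with a toy example. All payoff values, parameters, and the two-player setup below are invented for illustration and are not taken from the paper, where the players are D2D links choosing discrete transmit power levels:

```python
import random

# Hedged sketch: a 2-player exact potential game on a discrete strategy
# set. Both players' expected rewards equal a common potential PHI, so
# any unilateral deviation changes a player's expected reward by exactly
# the change in PHI. Players observe only noisy rewards and run
# Q-learning with (epsilon-greedy) better-reply action selection.

random.seed(0)

ACTIONS = [0, 1, 2]          # illustrative discrete power levels
# Potential PHI[a1][a2]; any local maximizer of PHI under unilateral
# deviations is a Nash equilibrium of the game.
PHI = [[1.0, 2.0, 0.5],
       [2.5, 4.0, 1.0],
       [0.0, 1.5, 3.0]]

def reward(player, a):
    """Noisy reward for `player` under joint action a = [a1, a2];
    its expectation equals the potential (an exact potential game
    in which both utilities coincide with PHI)."""
    return PHI[a[0]][a[1]] + random.gauss(0.0, 0.1)

def is_nash(a):
    """True if no player can improve PHI by a unilateral deviation."""
    for p in (0, 1):
        for alt in ACTIONS:
            b = list(a)
            b[p] = alt
            if PHI[b[0]][b[1]] > PHI[a[0]][a[1]] + 1e-9:
                return False
    return True

def q_learning_better_reply(rounds=3000, alpha=0.1, eps=0.1):
    # Q[p][i]: player p's running value estimate for its own action i.
    Q = [[0.0] * len(ACTIONS) for _ in (0, 1)]
    a = [random.choice(ACTIONS), random.choice(ACTIONS)]
    for _ in range(rounds):
        for p in (0, 1):
            r = reward(p, a)
            Q[p][a[p]] += alpha * (r - Q[p][a[p]])
            # Better-reply step: with prob. 1-eps switch to the
            # best-estimated action; with prob. eps explore.
            if random.random() < eps:
                a[p] = random.choice(ACTIONS)
            else:
                a[p] = max(ACTIONS, key=lambda i: Q[p][i])
    # Final greedy joint action profile.
    return tuple(max(ACTIONS, key=lambda i: Q[p][i]) for p in (0, 1))

profile = q_learning_better_reply()
print(profile, is_nash(profile))
```

Because the game is an exact potential game, better-reply paths cannot cycle in expectation, which is why learning dynamics of this kind settle on a Nash equilibrium; the noise in the rewards is what motivates the Q-learning averaging rather than direct best responses.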