Infiniband-verbs on GPU: A case study of controlling an infiniband network device from the GPU

Oden, L.; Fröning, H.; Pfreundt, F.-J.


Institute of Electrical and Electronics Engineers -IEEE-; IEEE Computer Society, Technical Committee on Parallel Processing:
IEEE International Parallel & Distributed Processing Symposium Workshops, IPDPSW 2014. Vol.2 : Phoenix, Arizona, USA, 19 - 23 May 2014; Proceedings
Piscataway, NJ: IEEE, 2014
ISBN: 978-1-4799-4115-5
ISBN: 978-0-7695-5208-8
ISBN: 978-1-4799-4116-2
ISBN: 978-1-4799-4117-9
International Parallel & Distributed Processing Symposium (IPDPS) <28, 2014, Phoenix/Ariz.>
High-Performance Grid and Cloud Computing Workshop (HPGC) <11, 2014, Phoenix/Ariz.>
Conference Paper
Fraunhofer ITWM

Due to their massive parallelism and high performance per watt, GPUs have gained great popularity in high-performance computing and are a strong candidate for future exascale systems. However, communication and data transfer in GPU-accelerated systems remain a challenging problem. Since the GPU normally cannot control a network device, a hybrid programming model is preferred today, in which the GPU is used for computation and the CPU handles the communication. As a result, communication between distributed GPUs suffers from unnecessary overhead, introduced by switching control flow from GPUs to CPUs and vice versa. In this work, we modify user-space libraries and device drivers of GPUs and the Infiniband network device so that the GPU can control an Infiniband network device and independently source and sink communication requests without any involvement of the CPU. Our performance analysis details the differences from hybrid communication models, in particular that the CPU's advantage in generating work requests outweighs the overhead associated with context switching. In other words, our results show that complex networking protocols like IBVERBS are better handled by CPUs despite the time penalties of context switching, since the overhead of work-request generation cannot be parallelized and does not fit the highly parallel programming model of GPUs.