Title: GASPI - a partitioned global address space programming interface
Type: Conference paper
Authors: Alrutz, T.; Backhaus, J.; Brandes, T.; End, V.; Gerhold, T.; Geiger, A.; Grünewald, D.; Heuveline, V.; Jägersküpper, J.; Knüpfer, A.; Krzikalla, O.; Kügeler, E.; Lojewski, C.; Lonsdale, G.; Müller-Pfefferkorn, R.; Nagel, W.; Oden, L.; Pfreundt, F.-J.; Rahn, M.; Sattler, M.; Schmidtobreick, M.; Schiller, A.; Simmendinger, C.; Soddemann, T.; Sutmann, G.; Weber, H.; Weiss, J.-P.
Year: 2013
Date deposited: 2022-03-12
DOI: 10.1007/978-3-642-35893-7_18
Handle: https://publica.fraunhofer.de/handle/publica/380594
Language: English
Abstract: At the threshold of exascale computing, the limitations of the MPI programming model become more and more pronounced. HPC programmers have to design codes that run and scale on systems with hundreds of thousands of cores. Setting up correspondingly many communication buffers and point-to-point communication links, and relying on bulk-synchronous communication phases, contradicts scalability in these dimensions. Moreover, the reliability of upcoming systems will worsen.
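The abstract contrasts bulk-synchronous, two-sided message passing with the one-sided, notification-based model that GASPI proposes. The following minimal sketch illustrates that model using the publicly documented GASPI C API (as implemented, for example, in GPI-2); it is not taken from the paper, and the segment id, offsets, queue id, and notification id/value are illustrative assumptions.

```c
#include <GASPI.h>
#include <stdlib.h>

/* Sketch: each rank writes one double into the segment of its right
 * neighbour and signals completion with a notification, so no
 * receiver-side message matching or bulk-synchronous phase is needed.
 * Segment id 0, queue 0, and notification id/value are arbitrary choices. */
int main(int argc, char *argv[])
{
  gaspi_proc_init(GASPI_BLOCK);

  gaspi_rank_t rank, nprocs;
  gaspi_proc_rank(&rank);
  gaspi_proc_num(&nprocs);

  /* One globally accessible segment per rank: the partitioned global address space. */
  gaspi_segment_create(0, 2 * sizeof(double),
                       GASPI_GROUP_ALL, GASPI_BLOCK, GASPI_MEM_INITIALIZED);

  gaspi_pointer_t seg_ptr;
  gaspi_segment_ptr(0, &seg_ptr);
  double *data = (double *)seg_ptr;
  data[0] = (double)rank;              /* local source value at offset 0 */

  const gaspi_rank_t right = (rank + 1) % nprocs;

  /* One-sided write of data[0] into the neighbour's segment at offset
   * sizeof(double), combined with notification id 0 and value 1. */
  gaspi_write_notify(0, 0, right,
                     0, sizeof(double), sizeof(double),
                     0, 1, 0, GASPI_BLOCK);

  /* Wait for the neighbour's notification to arrive, then reset it. */
  gaspi_notification_id_t first;
  gaspi_notification_t old_val;
  gaspi_notify_waitsome(0, 0, 1, &first, GASPI_BLOCK);
  gaspi_notify_reset(0, first, &old_val);

  gaspi_wait(0, GASPI_BLOCK);          /* flush queue 0 before shutdown */
  gaspi_proc_term(GASPI_BLOCK);
  return EXIT_SUCCESS;
}
```

In this style, communication resources scale with the number of active neighbours rather than with the total process count, which is the scalability argument the abstract makes against pre-allocated point-to-point links and global synchronisation phases.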