Authors: Bauer, Martin; Kuschel, Christian; Ritter, Daniel; Sembritzki, Klaus
Published: 2013 (record available 2017-12-06)
Handle: https://dl.gi.de/handle/20.500.12116/8605

Title: Comparison of PGAS Languages on a Linked Cell Algorithm
Type: Text/Journal Article
DOI: 10.1007/BF03354238
ISSN: 0177-0454
Language: en
Keywords: Domain Decomposition; Parallel Program; Message Passing Interface; Message Passing Interface Implementation; Global Address Space

Abstract: The intention of partitioned global address space (PGAS) languages is to reduce the development time of parallel programs by abstracting the view on memory and communication. Despite this abstraction, a decent speed-up is promised. In this paper, the performance and implementation time of Co-Array Fortran (CAF), Unified Parallel C (UPC), and the Cascade High Productivity Language (Chapel) are compared by means of a linked cell algorithm. An MPI-parallel reference implementation in C is ported to CAF, Chapel, and UPC, respectively, and optimized with respect to the features available in each language. Our tests show that parallel programs are developed faster with the above-mentioned PGAS languages than with MPI. We observed a performance penalty for the PGAS versions that can be reduced, but only at the expense of a programming effort similar to that of MPI. Programmers should also be aware that using PGAS languages may entail higher administrative effort for compiling and executing programs on different supercomputers.
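
To illustrate the memory and communication abstraction the abstract refers to, the following is a minimal UPC sketch, not taken from the paper: the array name, block size, and initialization are assumptions chosen only to show how a PGAS language distributes data and work across threads without explicit message passing.

    /* Illustrative UPC sketch (hypothetical example, not the paper's linked cell code).
     * A shared array is blocked across all UPC threads; upc_forall assigns each
     * iteration to the thread that owns the element it touches. */
    #include <upc.h>
    #include <stdio.h>

    #define BLOCK 64

    /* One contiguous block of BLOCK elements per thread in the
     * partitioned global address space. */
    shared [BLOCK] double cells[BLOCK * THREADS];

    int main(void) {
        /* The affinity expression &cells[i] makes each thread execute only the
         * iterations whose data it owns locally -- no explicit sends or receives. */
        upc_forall (int i = 0; i < BLOCK * THREADS; i++; &cells[i])
            cells[i] = (double)MYTHREAD;

        upc_barrier;  /* synchronize all threads before reading remote data */

        if (MYTHREAD == 0)
            printf("cells[0] = %f on %d threads\n", cells[0], THREADS);
        return 0;
    }

Such a snippet would be built with a UPC toolchain (for example Berkeley UPC or GNU UPC); the equivalent MPI version would require explicit data decomposition and message exchange, which is the productivity difference the paper quantifies.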