Authors: Pietrzyk, Johannes; Habich, Dirk; Damme, Patrick; Focht, Erich; Lehner, Wolfgang
Date available: 2021-05-04
Date issued: 2019
DOI: http://dx.doi.org/10.1007/s13222-019-00323-w
Handle: https://dl.gi.de/handle/20.500.12116/36380
Abstract: In-memory column-store database systems are the state of the art for the efficient processing of analytical workloads. In these systems, data compression as well as vectorization play an important role. Currently, vectorized processing relies on the regular SIMD (Single Instruction Multiple Data) extensions of modern processors. For example, Intel’s latest SIMD extension supports 512-bit vector registers, which allow the parallel processing of 8 × 64-bit values. From a database system perspective, this vectorization technique is not only very interesting for compression and decompression to reduce the computational overhead, but also for all database operators such as joins, scans, and groupings. In contrast to these SIMD extensions, NEC Corporation has recently introduced a novel pure vector engine (supercomputer) called SX-Aurora TSUBASA as a co-processor. This vector engine features a vector length of 16,384 bits with the world’s highest bandwidth of up to 1.2 TB/s, which fits data-intensive applications such as in-memory database systems perfectly. Therefore, we describe the unique architecture and properties of this novel vector engine in this paper. Moreover, we present selected in-memory column-store-specific evaluation results to show the benefits of this vector engine compared to regular SIMD extensions. Finally, we conclude the paper with an outlook on our ongoing research activities in this direction.
Keywords: Column stores; Experimental evaluation; NEC SX-Aurora TSUBASA; SIMD extension; Vectorization
Title: Evaluating the Vector Supercomputer SX-Aurora TSUBASA as a Co-Processor for In-Memory Database Systems
Type: Text/Journal Article
DOI: 10.1007/s13222-019-00323-w
ISSN: 1610-1995
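Illustration (not part of the original record): the abstract mentions that a 512-bit vector register processes 8 × 64-bit values in parallel. A minimal sketch of what such regular SIMD processing looks like with Intel AVX-512 intrinsics is given below; the function and array names are hypothetical and it assumes a CPU with AVX-512F support.

    #include <immintrin.h>
    #include <stdint.h>
    #include <stddef.h>

    /* Minimal sketch: add two arrays of 64-bit integers using 512-bit
     * AVX-512 registers, i.e. 8 x 64-bit values per instruction.
     * Assumes n is a multiple of 8 and AVX-512F is available. */
    static void add_u64_avx512(const uint64_t *a, const uint64_t *b,
                               uint64_t *out, size_t n)
    {
        for (size_t i = 0; i < n; i += 8) {
            __m512i va = _mm512_loadu_si512((const void *)(a + i));
            __m512i vb = _mm512_loadu_si512((const void *)(b + i));
            __m512i vc = _mm512_add_epi64(va, vb);  /* 8 parallel 64-bit adds */
            _mm512_storeu_si512((void *)(out + i), vc);
        }
    }

By comparison, the SX-Aurora TSUBASA vector engine described in the paper operates on 16,384-bit vector registers, i.e. 256 × 64-bit elements per vector instruction.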