
it - Information Technology 59(3) - June 2017


Newest publications

1 - 7 of 7
  • Journal article
    GPU-GIST – a case of generalized database indexing on modern hardware
    (it - Information Technology: Vol. 59, No. 5, 2017) Beier, Felix; Sattler, Kai-Uwe
    Many different indexes have been developed to accelerate search operations on large data sets. Search trees, the most prominent class, are ubiquitous in database management systems but are also widely used in non-DBMS applications. One approach to lowering the implementation complexity of these structures is index frameworks such as generalized search trees (GiST). Common data management operations are implemented once within the framework, which can then be specialized with data organization and evaluation strategies to model the actual index type. Such frameworks are particularly useful in scientific and engineering applications where the characteristics of the underlying data set are not known a priori and a lot of prototyping is required to find suitable index structures for the workload.
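The framework idea described in this abstract can be sketched minimally: the generic search algorithm lives in the framework, and an index type is modeled by plugging in a "consistent" predicate. This is a hypothetical Python sketch; the names and the interval specialization are invented for illustration and do not reflect the actual GiST API.

```python
class Node:
    """A tree node; entries pair a key with either a child node or a value."""
    def __init__(self, leaf=True):
        self.leaf = leaf
        self.entries = []

def search(node, query, consistent):
    """Framework-provided search: descend into every subtree whose key
    is 'consistent' with the query; collect values at the leaves."""
    hits = []
    for key, payload in node.entries:
        if consistent(key, query):
            if node.leaf:
                hits.append(payload)
            else:
                hits.extend(search(payload, query, consistent))
    return hits

# Specialization that turns the framework into a 1-D range index:
# keys are (lo, hi) intervals, the query is a point.
def interval_consistent(key, query):
    lo, hi = key
    return lo <= query <= hi

# Build a tiny two-level tree and probe it.
leaf_a = Node(leaf=True)
leaf_a.entries = [((1, 3), "a"), ((4, 6), "b")]
leaf_b = Node(leaf=True)
leaf_b.entries = [((7, 9), "c")]
root = Node(leaf=False)
root.entries = [((1, 6), leaf_a), ((7, 9), leaf_b)]

print(search(root, 5, interval_consistent))  # ['b']
```

Swapping in a different `consistent` predicate (e.g., rectangle overlap) would model a different index type without touching the search code, which is the prototyping benefit the abstract describes.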
  • Journal article
    Architecture of a data analytics service in hybrid cloud environments
    (it - Information Technology: Vol. 59, No. 5, 2017) Beier, Felix; Stolze, Knut
    DB2 for z/OS is the backbone of many transactional systems in the world. IBM DB2 Analytics Accelerator (IDAA) is IBM's approach to enhance DB2 for z/OS with very fast processing of OLAP and analytical SQL workload. While IDAA was originally designed as an appliance to be connected directly to System z, the trend in the IT industry is towards cloud environments. That offers a broad range of tools for analytical data processing tasks.
  • Journal article
    Exploiting capabilities of modern processors in data intensive applications
    (it - Information Technology: Vol. 59, No. 5, 2017) Broneske, David; Saake, Gunter
    In main-memory database systems, the time to process the data has become a limiting factor, since the disk access gap is no longer the dominating cost. With changing processing capabilities (e.g., branch prediction, pipelining) in every new CPU architecture, code that was optimal once will probably not stay the best code forever. In this article, we analyze processing capabilities of the classical CPU and describe code optimizations to exploit them. Furthermore, we present state-of-the-art compiler techniques that already implement such code optimizations, and point out gaps where further optimizations could be integrated.
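One CPU-aware optimization of the kind this abstract alludes to is replacing a hard-to-predict branch with predication. The following hypothetical Python sketch shows only the control-flow transformation; the actual performance benefit materializes in compiled code on real CPUs, not in Python.

```python
def select_branch(values, threshold):
    """Selection with a data-dependent branch: on real hardware,
    mispredictions hurt when roughly half the values qualify."""
    out = []
    for v in values:
        if v < threshold:
            out.append(v)
    return out

def select_predicated(values, threshold):
    """Branch-free variant: always write the value, then advance the
    output cursor by the predicate result (bool used as 0/1), so the
    inner loop contains no data-dependent branch."""
    out = [0] * len(values)
    k = 0
    for v in values:
        out[k] = v
        k += (v < threshold)
    return out[:k]

data = [5, 1, 9, 3, 7]
print(select_branch(data, 4))      # [1, 3]
print(select_predicated(data, 4))  # [1, 3]
```

Both variants compute the same result; which one is faster depends on the selectivity of the predicate and the branch predictor of the target CPU, which is exactly why code that was optimal once may not stay optimal.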
  • Journal article
    Heterogeneous placement optimization for database query processing
    (it - Information Technology: Vol. 59, No. 5, 2017) Karnagel, Tomas; Habich, Dirk
    Computing hardware is constantly evolving, and database systems need to adapt to ongoing hardware changes to improve performance. The current hardware trend is heterogeneity, where multiple computing units like CPUs and GPUs are used together in one system. In this paper, we summarize our efforts to use hardware heterogeneity efficiently for query processing. We discuss different approaches to execution and investigate heterogeneous placement in detail by showing how to automatically determine operator placement decisions according to the given hardware environment and query properties.
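A greedy variant of such an operator placement decision can be sketched as follows. This is hypothetical Python: the cost numbers and the simple "compute cost plus transfer penalty" model are invented for illustration and are not the authors' actual optimizer.

```python
def place(operators, costs, transfer_cost):
    """Greedily pick, per operator, the device with the lower estimated
    runtime, charging a penalty when the input must change devices."""
    placement = {}
    location = "CPU"  # assume the base data starts in main memory
    for op in operators:
        best_device, best_time = None, None
        for device in ("CPU", "GPU"):
            t = costs[op][device]
            if device != location:
                t += transfer_cost  # data movement over the interconnect
            if best_time is None or t < best_time:
                best_device, best_time = device, t
        placement[op] = best_device
        location = best_device  # the intermediate result now lives here
    return placement

# Invented per-operator cost estimates for one query plan.
costs = {
    "scan": {"CPU": 2.0, "GPU": 1.0},
    "join": {"CPU": 5.0, "GPU": 2.0},
    "agg":  {"CPU": 1.0, "GPU": 1.0},
}
print(place(["scan", "join", "agg"], costs, transfer_cost=2.0))
# {'scan': 'CPU', 'join': 'GPU', 'agg': 'GPU'}
```

Note how the transfer penalty keeps the scan on the CPU even though the GPU computes it faster, and keeps the aggregation on the GPU once the data is already there: placement depends on both the hardware environment and the query's data flow.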
  • Journal article
    FPGAs for data processing: Current state
    (it - Information Technology: Vol. 59, No. 5, 2017) Teubner, Jens
    To escape a number of physical limitations (e.g., bandwidth and thermal issues), hardware technology is strongly trending toward heterogeneous system designs, where a large share of the application work can be off-loaded to accelerators, such as graphics or network processors.
  • Journal article
    Hardware Based Databases
    (it - Information Technology: Vol. 59, No. 5, 2017) Lehner, Wolfgang
  • Journal article
    Storage class memory and databases: Opportunities and challenges
    (it - Information Technology: Vol. 59, No. 5, 2017) Oukid, Ismail; Kettler, Robert; Willhalm, Thomas
    Storage Class Memory (SCM) is emerging as a viable solution to lift DRAM's scalability limits, both in capacity and energy consumption. Indeed, SCM combines the economic characteristics, non-volatility, and density of traditional storage media with the low latency and byte-addressability of DRAM. In this paper, we survey research on how SCM can be leveraged in databases and explore solutions ranging from using SCM as a disk replacement to single-level storage architectures, where SCM serves as universal memory (i.e., as memory and storage at the same time), together with the challenges that stem from these opportunities. Finally, we synthesize our findings into recommendations on how to exploit the full potential of SCM in next-generation database architectures.