Listing by author "Zeuch, Steffen"
1 - 3 of 3
- Journal: Efficient and Scalable k-Means on GPUs (Datenbank-Spektrum: Vol. 18, No. 3, 2018) Lutz, Clemens; Breß, Sebastian; Rabl, Tilmann; Zeuch, Steffen; Markl, Volker
- Text document: An Overview of Hawk: A Hardware-Tailored Code Generator for the Heterogeneous Many Core Age (BTW 2019 – Workshopband, 2019) Breß, Sebastian; Funke, Henning; Zeuch, Steffen; Rabl, Tilmann; Markl, Volker
  Processor manufacturers build increasingly specialized processors to mitigate the effects of the power wall and deliver improved performance. Currently, database engines have to be manually optimized for each processor, which is a costly and error-prone process. In this paper, we summarize our recent VLDB Journal publication, in which we propose concepts to adapt to the performance enhancements of modern processors and to exploit their capabilities automatically. Our key idea is to create processor-specific code variants and to learn a well-performing code variant for each processor. These code variants leverage various parallelization strategies and apply both generic and processor-specific code transformations. We observe that the performance of code variants may diverge by up to two orders of magnitude. Thus, we need to generate custom code for each processor to reach peak performance. Hawk automatically finds efficient code variants for CPUs, GPUs, and MICs. (See the variant-selection sketch after this listing.)
- Conference paper: Workload Prediction for IoT Data Management Systems (BTW 2023, 2023) Burrell, David; Chatziliadis, Xenofon; Zacharatou, Eleni Tzirita; Zeuch, Steffen; Markl, Volker
  The Internet of Things (IoT) is an emerging technology that allows numerous devices, potentially spread over a large geographical area, to collect and collectively process data from high-speed data streams. To that end, specialized IoT data management systems (IoTDMSs) have emerged. One challenge in these systems is the collection of different metrics from devices in a central location for analysis. This analysis allows IoTDMSs to maintain an overview of the workload on different devices and to optimize their processing. However, as an IoT network comprises many heterogeneous devices with low computation resources and limited bandwidth, collecting and sending workload metrics can increase the latency of data processing tasks across the network. In this ongoing work, we present an approach that avoids unnecessary transmission of workload metrics by predicting CPU, memory, and network usage using machine learning (ML). Specifically, we demonstrate the performance of two ML models, linear regression and a Long Short-Term Memory (LSTM) neural network, and show the features that we explored to train these models. This work is part of ongoing research to develop a monitoring tool for our new IoTDMS named NebulaStream. (See the prediction sketch after this listing.)
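
The Hawk summary above centers on generating several code variants for the same operator and learning, per processor, which variant performs best. The following is a minimal sketch of that benchmark-and-pick idea, assuming toy Python "variants"; the function names and the timing loop are illustrative assumptions and not Hawk's actual code generator or API, which emits processor-specific kernels rather than Python functions.

```python
import time

# Hypothetical kernel variants for the same operator; in Hawk these would be
# generated code using different parallelization strategies and transformations.
def variant_row_at_a_time(data):
    return sum(x * 2 for x in data)

def variant_batched(data, batch=1024):
    total = 0
    for i in range(0, len(data), batch):
        total += sum(x * 2 for x in data[i:i + batch])
    return total

VARIANTS = {
    "row_at_a_time": variant_row_at_a_time,
    "batched": variant_batched,
}

def learn_best_variant(data, variants, repetitions=3):
    """Benchmark each code variant on the current processor and keep the fastest."""
    timings = {}
    for name, fn in variants.items():
        best = float("inf")
        for _ in range(repetitions):
            start = time.perf_counter()
            fn(data)
            best = min(best, time.perf_counter() - start)
        timings[name] = best
    return min(timings, key=timings.get), timings

if __name__ == "__main__":
    sample = list(range(100_000))
    best, timings = learn_best_variant(sample, VARIANTS)
    print("fastest variant on this processor:", best, timings)
```

In Hawk itself, the search space spans parallelization strategies plus generic and processor-specific code transformations; the loop above only mirrors the per-processor learning step at toy scale.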
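
The workload-prediction entry describes replacing periodic metric transmission with on-device prediction of CPU, memory, and network usage via linear regression or an LSTM. The sketch below is a rough illustration under assumed names and thresholds, not the NebulaStream monitoring tool itself: it fits a linear trend to a short window of CPU readings and only flags a reading for transmission when the prediction misses by more than a hypothetical tolerance.

```python
import numpy as np

def fit_linear_trend(history):
    """Fit a least-squares linear trend to a window of recent CPU readings."""
    t = np.arange(len(history))
    slope, intercept = np.polyfit(t, np.asarray(history, dtype=float), 1)
    return slope, intercept

def predict_next(history):
    """Extrapolate the fitted trend one step ahead."""
    slope, intercept = fit_linear_trend(history)
    return slope * len(history) + intercept

def should_transmit(history, observed, threshold=5.0):
    """Send the metric only if the prediction is off by more than `threshold`
    percentage points (hypothetical tolerance); otherwise skip the transmission."""
    return abs(observed - predict_next(history)) > threshold

if __name__ == "__main__":
    cpu_window = [41.0, 42.5, 44.0, 45.2, 46.9]   # hypothetical recent CPU usage (%)
    new_reading = 48.1
    print("predicted:", round(predict_next(cpu_window), 1))
    print("transmit reading?", should_transmit(cpu_window, new_reading))
```

The same windowed setup could feed an LSTM instead of a linear fit, which is the second model the abstract mentions; the threshold-based skip logic is an assumption used here only to show how a prediction can avoid unnecessary metric transmissions.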