Listing by author "Meister, Andreas"
1 - 3 of 3
- Conference paper: Cost-Aware Query Optimization during Cloud-Based Complex Event Processing (Informatik 2014, 2014). Meister, Andreas; Breß, Sebastian; Saake, Gunter. Complex Event Processing describes the problem of timely and continuous processing of event streams. The load of Complex Event Processing systems can vary (e.g., in event rates). Static resource provisioning leads to higher monetary costs because enough resources must be provided to handle peak loads efficiently, so most of the time the resources are not fully utilized. One way to achieve scalable processing and elastic resource allocation that fits varying requirements is Cloud Computing, whose key properties are the pay-as-you-go payment model and high availability. These properties can be used in Complex Event Processing systems to minimize monetary costs while satisfying Service Level Agreements. Such systems must continuously optimize event processing to adapt to varying loads without violating Service Level Agreements. To guarantee efficiency, the cost of the optimization itself must be considered, leading to cost savings without Service Level Agreement violations. In this work, we discuss factors that should be considered during the optimization of cloud-based Complex Event Processing systems that use the pause-train-resume strategy to migrate operators. Furthermore, we propose heuristics to estimate the cost of these factors. In our experiments, a cost-aware optimizer decreased costs by 15 %. This proves that the costs of cloud-based Complex Event Processing systems can be decreased further when optimization is cost-aware.
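The core trade-off the abstract describes, that re-optimization only pays off when its own cost is accounted for, can be sketched as a simple decision rule. This is an illustrative sketch, not the paper's model: all struct fields, names, and numbers below are assumptions. The idea is that a migration via pause-train-resume is only worthwhile if the projected savings of the new plan over its expected lifetime exceed the one-time migration cost.

```cpp
#include <cassert>

// Hypothetical cost model (illustrative only, not from the paper):
// a cost-aware optimizer adopts a new operator placement only if the
// projected monetary savings over the remaining horizon exceed the
// one-time cost of the pause-train-resume migration (downtime plus
// state-transfer overhead).
struct PlanCosts {
    double current_cost_per_hour;  // monetary cost of the running plan
    double new_cost_per_hour;      // monetary cost of the candidate plan
    double migration_cost;         // pause + state transfer + resume
    double horizon_hours;          // expected lifetime of the new plan
};

bool should_migrate(const PlanCosts& c) {
    // Savings accumulated over the horizon must outweigh migration cost.
    double savings = (c.current_cost_per_hour - c.new_cost_per_hour) * c.horizon_hours;
    return savings > c.migration_cost;
}
```

With these hypothetical numbers, a plan saving 2 units/hour over a 4-hour horizon (8 units total) justifies a migration costing 5 units, while a plan saving only 0.5 units/hour (2 units total) does not.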
- Conference paper: Hardware-Sensitive Scan Operator Variants for Compiled Selection Pipelines (Datenbanksysteme für Business, Technologie und Web (BTW 2017), 2017). Broneske, David; Meister, Andreas; Saake, Gunter. The ever-increasing demand for performance on huge data sets forces database systems to tweak the last bit of performance out of their operators. Compiled query plans in particular allow for several tuning opportunities that can be applied depending on the query plan and the underlying data. Apart from classical query optimization opportunities, these include tuning the code with processor-specific optimizations, e.g., Single Instruction Multiple Data (SIMD) processing or predication. In this paper, we examine code optimizations that can be applied to compiled scan pipelines that include aggregations, evaluate the factors that influence the performance of these pipelines, and derive guidelines that a query compiler should implement to choose the best variant for a given query plan and workload.
- Journal article: Toward GPU-accelerated Database Optimization (Datenbank-Spektrum: Vol. 15, No. 2, 2015). Meister, Andreas; Breß, Sebastian; Saake, Gunter. For over three decades, research has investigated optimization options in DBMSs. Nowadays, the hardware used in DBMSs becomes more and more heterogeneous, because processors are bound by a fixed energy budget, leading to increased parallelism. Existing optimization approaches in DBMSs do not exploit parallelism for a single optimization task and, hence, can only benefit from the parallelism offered by current hardware by batch-processing multiple optimization tasks. Since a large optimization space often allows us to process sub-spaces in parallel, we expect large gains in result quality for optimization approaches in DBMSs and, hence, in query processing performance on modern (co-)processors. However, parallel optimization on CPUs is likely to slow down query processing, because DBMSs can already fully exploit the CPU's computing resources due to high data parallelism. In contrast, the communication overhead of co-processors such as GPUs typically leaves plenty of compute resources unused. In this paper, we motivate the use of parallel co-processors for optimization in DBMSs, identify optimization problems that benefit from parallelization, and show how parallel optimization approaches can be designed using the operator placement problem as an example.
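The operator placement problem mentioned above lends itself to the kind of parallel sub-space search the abstract argues for. The following is an illustrative sketch under simplifying assumptions, not the paper's algorithm: each of n operators runs on either the CPU or a GPU (2^n candidate placements), per-operator costs are independent and additive, and the search space is split into strided sub-spaces evaluated by parallel workers, mimicking on the CPU what a massively parallel co-processor could do at much larger scale.

```cpp
#include <cstdint>
#include <future>
#include <limits>
#include <utility>
#include <vector>

// Cost of one placement: bit i of `placement` set means operator i runs
// on the GPU, cleared means it runs on the CPU. (Simplified: transfer
// costs between devices are ignored in this sketch.)
double placement_cost(uint32_t placement, const std::vector<double>& cpu_cost,
                      const std::vector<double>& gpu_cost) {
    double total = 0.0;
    for (size_t op = 0; op < cpu_cost.size(); ++op)
        total += ((placement >> op) & 1u) ? gpu_cost[op] : cpu_cost[op];
    return total;
}

// Enumerate all 2^n placements, splitting the space into `workers`
// strided sub-spaces that are searched in parallel, then merge the
// per-worker minima.
uint32_t best_placement(const std::vector<double>& cpu_cost,
                        const std::vector<double>& gpu_cost, unsigned workers) {
    const uint32_t n = 1u << cpu_cost.size();
    std::vector<std::future<std::pair<double, uint32_t>>> parts;
    for (unsigned w = 0; w < workers; ++w) {
        parts.push_back(std::async(std::launch::async, [=, &cpu_cost, &gpu_cost] {
            double best = std::numeric_limits<double>::max();
            uint32_t arg = 0;
            for (uint32_t p = w; p < n; p += workers) {  // strided sub-space
                double c = placement_cost(p, cpu_cost, gpu_cost);
                if (c < best) { best = c; arg = p; }
            }
            return std::make_pair(best, arg);
        }));
    }
    double best = std::numeric_limits<double>::max();
    uint32_t arg = 0;
    for (auto& f : parts) {
        auto [c, p] = f.get();
        if (c < best) { best = c; arg = p; }
    }
    return arg;
}
```

Because the sub-spaces are independent, each worker needs no coordination until the final merge, which is why this style of enumeration maps well onto data-parallel co-processors.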