Listing by keyword "benchmarking"
1 - 4 of 4
- Conference paper: "Cloud-Native Scalability Benchmarking with Theodolite: Applied to the TeaStore Benchmark" (Softwaretechnik-Trends, Volume 43, Issue 1, 2023). Henning, Sören; Wetzel, Benedikt; Hasselbring, Wilhelm.
  Theodolite is a framework for benchmarking the scalability of cloud-native applications such as microservices. It automates the deployment and monitoring of a cloud-native application for different load intensities and provisioned cloud resources and assesses whether specified service level objectives (SLOs) are fulfilled. Provided as a Kubernetes Operator, Theodolite integrates with the cloud-native ecosystem and runs existing deployment configurations of various systems under test, load generators, and benchmarks. We give a presentation on Theodolite and exemplify its usage by benchmarking the scalability of the TeaStore microservice reference application.
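As an illustration of the SLO assessment described in this entry, here is a minimal sketch, assuming lag samples for one (load, resources) experiment have already been collected; the class name, method, and threshold are hypothetical and not Theodolite's actual API:

```java
import java.util.List;

/** Illustrative sketch (not Theodolite's actual API): decides whether a
 *  lag-trend SLO holds for one (load, resources) experiment by fitting a
 *  least-squares line to lag samples and comparing its slope against a
 *  hypothetical threshold. */
public class SloCheckSketch {

    /** Returns true if the lag trend (slope per sample) stays below maxSlope. */
    static boolean sloFulfilled(List<Double> lagSamples, double maxSlope) {
        int n = lagSamples.size();
        double meanX = (n - 1) / 2.0;
        double meanY = lagSamples.stream()
                .mapToDouble(Double::doubleValue).average().orElse(0);
        double num = 0, den = 0;
        for (int i = 0; i < n; i++) {
            num += (i - meanX) * (lagSamples.get(i) - meanY);
            den += (i - meanX) * (i - meanX);
        }
        double slope = den == 0 ? 0 : num / den;
        return slope < maxSlope; // SLO holds if lag does not grow faster than allowed
    }

    public static void main(String[] args) {
        // Hypothetical lag measurements for one experiment run
        List<Double> lag = List.of(100.0, 105.0, 98.0, 110.0, 103.0);
        System.out.println("SLO fulfilled: " + sloFulfilled(lag, 5.0));
    }
}
```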
- Conference paper: "A Dynamic Resource Demand Analysis Approach for Stream Processing Systems" (Softwaretechnik-Trends, Volume 40, Issue 3, 2020). Rank, Johannes; Hein, Andreas; Krcmar, Helmut.
  Systems that provide real-time business insights based on live data, so-called Stream Processing Systems (SPS), have received much attention in recent years. In many areas, such as stock markets or surveillance, it is essential to process data immediately and react accordingly. As the processing of real-time data is at the heart of SPS, their performance in terms of latency, throughput, and resource utilization plays a crucial role. Traditional performance and benchmarking approaches for SPS usually focus on throughput and latency, trying to answer the question of which engine processes incoming events fastest. However, neglecting the corresponding resource utilization provides only a limited and sometimes even misleading view of their actual performance. Depending on the use case, an engine that achieves faster processing at the cost of higher memory utilization is not always best suited, as the example of IoT edge computing devices with limited resources shows. For this reason, we developed a dynamic performance approach to analyze the resource demands of an SPS. The approach yields fine-grained performance metrics based on the individual processing steps of the SPS, without requiring any knowledge of the actual source code. Moreover, it takes the whole system (engine and streaming application) into account. Since we do not rely on code instrumentation or language-specific profiling techniques but instead use the dynamic tracing capabilities of the Linux kernel, we can support a broad range of different SPSs. We evaluate our approach by inspecting the CPU performance of Apache Flink while running the Yahoo streaming benchmark.
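A minimal sketch of the kind of per-processing-step aggregation this entry describes, assuming (thread name, on-CPU time) samples have already been collected externally (e.g., by Linux kernel tracing); the thread-naming convention and all names below are hypothetical, not the authors' tool:

```java
import java.util.HashMap;
import java.util.Map;

/** Illustrative sketch (not the authors' tool): aggregates on-CPU time
 *  samples, keyed by worker thread name, into per-operator CPU demand.
 *  Assumes samples were already gathered externally, e.g., via kernel
 *  tracing; the name-to-operator mapping is a made-up convention. */
public class CpuDemandSketch {

    /** Maps a worker thread name to a logical processing step,
     *  assuming a hypothetical "<operator> (<subtask>/<parallelism>)" scheme. */
    static String operatorOf(String threadName) {
        int cut = threadName.indexOf(" (");
        return cut >= 0 ? threadName.substring(0, cut) : threadName;
    }

    public static void main(String[] args) {
        // Hypothetical (thread name -> on-CPU nanoseconds) samples
        Map<String, Long> samples = Map.of(
                "Source: events (1/2)", 120_000_000L,
                "Source: events (2/2)", 118_000_000L,
                "Window Aggregation (1/2)", 410_000_000L,
                "Window Aggregation (2/2)", 395_000_000L);

        // Sum the per-thread samples into per-operator CPU demand
        Map<String, Long> demandPerOperator = new HashMap<>();
        samples.forEach((thread, ns) ->
                demandPerOperator.merge(operatorOf(thread), ns, Long::sum));

        demandPerOperator.forEach((op, ns) ->
                System.out.printf("%s: %.1f ms CPU%n", op, ns / 1e6));
    }
}
```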
- Conference paper: "More is Less in Kieker? The Paradox of No Logging Being Slower Than Logging" (Softwaretechnik-Trends, Volume 43, Issue 4, 2023). Reichelt, David Georg; Jung, Reiner; van Hoorn, André.
  Understanding the sources of monitoring overhead is crucial for understanding the performance of a monitored application. The MooBench benchmark measures the monitoring overhead and its sources. MooBench assumes that monitoring overhead emerges from the instrumentation, the data collection, and the writing of data. These three parts are measured through individual factorial experiments. We made the counter-intuitive observation that MooBench consistently and reproducibly reported higher overhead for Kieker and other monitoring frameworks when not writing data. Intuitively, writing should consume resources and therefore slow down (or, since it is parallelized, at least not speed up) the monitoring. In this paper, we present an investigation of this problem in Kieker. We find that lock contention at Kieker's writing queue causes the problem. Therefore, we propose to add a new queue that simply discards all elements. Thereby, a realistic measurement of data collection without writing can be provided.
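The proposed fix can be pictured as a queue whose insert operations discard the element outright, so producers never block or contend on the writer queue's lock when writing is disabled; a minimal sketch, not Kieker's actual implementation:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

/** Illustrative sketch (not Kieker's actual writer queue): a queue that
 *  accepts and immediately discards every element, so producers never
 *  block or contend on a lock when "writing" is disabled. */
public class DiscardingQueue<E> extends LinkedBlockingQueue<E> {
    @Override public boolean offer(E e) { return true; } // drop element, report success
    @Override public void put(E e) { /* drop element, never block */ }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> q = new DiscardingQueue<>();
        q.put("monitoring record");                 // discarded immediately
        System.out.println("queued: " + q.size()); // always 0
    }
}
```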
- Journal article: "Process Engine Benchmarking with Betsy in the Context of ISO/IEC Quality Standards" (Softwaretechnik-Trends, Volume 36, Issue 3, 2016). Geiger, Matthias; Harrer, Simon; Lenhard, Jörg.
  Business process management and automation have been the focus of intense research for a long time. Today, a plethora of process languages for specifying and implementing process models has evolved. Examples of such languages are established international standards such as BPEL 2.0 or, more recently, BPMN 2.0. Implementations of these standards that are able to execute models, so-called process engines, differ in their quality of service, e.g., in performance or usability, but also in the degree to which they actually implement a given standard. Selecting the "best" engine for a particular use case is hard, as none of the existing process standards features an objective certification process to assess the quality of its implementations. To fill this gap, we present the current achievements in process engine benchmarking and point out future directions.
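A toy sketch of the kind of standard-conformance scoring such engine benchmarking enables: the fraction of conformance tests each engine passes. Engine names and test outcomes are made up, and this is not betsy's actual output format:

```java
import java.util.List;
import java.util.Map;

/** Illustrative sketch (not betsy's actual output): scores how completely
 *  each engine implements a language standard as the fraction of passed
 *  conformance tests. All names and results are hypothetical. */
public class ConformanceScoreSketch {
    public static void main(String[] args) {
        // Hypothetical conformance-test outcomes: true = test passed
        Map<String, List<Boolean>> results = Map.of(
                "engineA", List.of(true, true, false, true),
                "engineB", List.of(true, false, false, true));

        results.forEach((engine, tests) -> {
            long passed = tests.stream().filter(p -> p).count();
            System.out.printf("%s: %d/%d tests passed (%.0f%%)%n",
                    engine, passed, tests.size(),
                    100.0 * passed / tests.size());
        });
    }
}
```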