Listing Softwaretechnik-Trends 44(4) - 2024 by keyword "benchmark"
- Conference paper: Evaluating the Overhead of the Performance Profiler Cloudprofiler With MooBench (Softwaretechnik-Trends Volume 44, Issue 4, 2024)
  Yang, Shinhyung; Reichelt, David Georg; Hasselbring, Wilhelm
  Performance engineering has become crucial for cloud-native architectures. Such an architecture deploys multiple services, with each service representing an orchestration of containerized processes. OpenTelemetry is growing in popularity in the cloud-native industry for observing the software's behaviour, and Kieker provides the necessary tools to monitor and analyze the performance of target architectures. Observability overhead is an important aspect of performance engineering, and MooBench is designed to compare different observability frameworks, including OpenTelemetry and Kieker. In this work, we measure the overhead of Cloudprofiler, a performance profiler implemented in C++ that measures native and JVM processes. It minimizes profiling overhead by locating the profiler process outside the target process and by moving the disk-writing overhead off the critical path with buffer blocks and compression threads. Using MooBench, Cloudprofiler's buffered ID handler with Zstandard (ZSTD) lossless data compression showed an average execution time of 2.28 microseconds, 6.15 times faster than the non-buffered, non-compressing handler.
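  The off-critical-path design named in this abstract can be illustrated with a short sketch. The following is a minimal, hypothetical illustration, not Cloudprofiler's actual code: the hot path only appends timestamps to a fixed-size buffer block, and full blocks are handed to a background thread that persists them. The real profiler additionally compresses blocks (e.g. with ZSTD), which is omitted here; all names (`BufferBlock`, `BufferedHandler`, `record`) are invented for the example.

  ```cpp
  // Minimal sketch of the buffer-block idea (hypothetical names, single
  // producer thread assumed; the real Cloudprofiler also compresses blocks
  // with ZSTD before writing, which is omitted here).
  #include <array>
  #include <condition_variable>
  #include <cstdint>
  #include <fstream>
  #include <memory>
  #include <mutex>
  #include <queue>
  #include <thread>

  struct BufferBlock {
      std::array<std::uint64_t, 4096> events;  // raw timestamps
      std::size_t used = 0;
  };

  class BufferedHandler {
  public:
      BufferedHandler() : writer_(&BufferedHandler::drain, this) {}

      ~BufferedHandler() {
          {
              std::lock_guard<std::mutex> lk(m_);
              if (cur_ && cur_->used > 0) q_.push(std::move(cur_));  // flush partial block
              done_ = true;
          }
          cv_.notify_one();
          writer_.join();
      }

      // Hot path: no I/O, no locking except when a block fills up.
      void record(std::uint64_t timestamp_ns) {
          if (!cur_) cur_ = std::make_unique<BufferBlock>();
          cur_->events[cur_->used++] = timestamp_ns;
          if (cur_->used == cur_->events.size()) {
              std::lock_guard<std::mutex> lk(m_);
              q_.push(std::move(cur_));
              cv_.notify_one();
          }
      }

  private:
      // Background thread: takes full blocks off the queue and writes them,
      // keeping the disk overhead off the producer's critical path.
      void drain() {
          std::ofstream out("profile.bin", std::ios::binary);
          std::unique_lock<std::mutex> lk(m_);
          while (!done_ || !q_.empty()) {
              cv_.wait(lk, [this] { return done_ || !q_.empty(); });
              while (!q_.empty()) {
                  auto block = std::move(q_.front());
                  q_.pop();
                  lk.unlock();  // write without blocking the producer
                  out.write(reinterpret_cast<const char*>(block->events.data()),
                            static_cast<std::streamsize>(
                                block->used * sizeof(std::uint64_t)));
                  lk.lock();
              }
          }
      }

      std::unique_ptr<BufferBlock> cur_;
      std::queue<std::unique_ptr<BufferBlock>> q_;
      std::mutex m_;
      std::condition_variable cv_;
      bool done_ = false;
      std::thread writer_;  // declared last: all other members exist when it starts
  };
  ```

  The point the abstract makes is visible in the sketch: the producer's hot path performs neither I/O nor compression, so its cost stays near a single store per event, while the writer thread absorbs the disk overhead.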
- Conference paper: Overhead Measurement Noise in Different Runtime Environments (Softwaretechnik-Trends Volume 44, Issue 4, 2024)
  Reichelt, David Georg; Jung, Reiner; van Hoorn, André
  To detect performance changes, measurements are performed in the same execution environment. In cloud environments, noise from other processes running on the same cluster nodes can change measurement results and thereby make performance changes hard to detect. The benchmark MooBench determines the overhead of different observability tools and is executed continuously. In this study, we compare the suitability of different execution environments for benchmarking observability overhead with MooBench. To do so, we compare the execution times and standard deviations of MooBench in a cloud execution environment with those in three bare-metal execution environments. We find that bare-metal servers yield lower runtimes and standard deviations for multi-threaded MooBench executions. Nevertheless, performance changes of up to 4.41 % remain detectable with GitHub Actions, as long as only sequential workloads are examined.
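  The comparison this abstract describes rests on summary statistics of repeated execution times. As a hedged sketch, not MooBench's actual analysis code, the following computes the mean, sample standard deviation, and coefficient of variation for two sets of samples; the sample values and the function name `summarize` are invented for illustration.

  ```cpp
  // Hedged sketch (not MooBench's analysis code): summary statistics of the
  // kind used to compare measurement noise across execution environments.
  // The sample values below are invented for illustration.
  #include <cmath>
  #include <cstdio>
  #include <vector>

  struct Stats {
      double mean;
      double stddev;  // sample standard deviation (n - 1 denominator)
      double cv;      // coefficient of variation, stddev / mean
  };

  Stats summarize(const std::vector<double>& samples_us) {
      double sum = 0.0;
      for (double s : samples_us) sum += s;
      const double mean = sum / static_cast<double>(samples_us.size());
      double sq = 0.0;
      for (double s : samples_us) sq += (s - mean) * (s - mean);
      const double stddev =
          std::sqrt(sq / static_cast<double>(samples_us.size() - 1));
      return {mean, stddev, stddev / mean};
  }

  int main() {
      // Hypothetical execution times in microseconds per benchmark call.
      const std::vector<double> cloud = {2.41, 2.63, 2.38, 2.95, 2.47};
      const std::vector<double> bare  = {2.29, 2.31, 2.28, 2.30, 2.27};
      const Stats c = summarize(cloud);
      const Stats b = summarize(bare);
      std::printf("cloud:      mean=%.2f us  stddev=%.3f us  cv=%.1f%%\n",
                  c.mean, c.stddev, c.cv * 100.0);
      std::printf("bare metal: mean=%.2f us  stddev=%.3f us  cv=%.1f%%\n",
                  b.mean, b.stddev, b.cv * 100.0);
      return 0;
  }
  ```

  A lower coefficient of variation on the bare-metal samples is what the study's finding of lower standard deviations would look like in such a summary.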