Listing by keyword "performance"
1 - 10 of 63
- Conference paper: Analyzing the Evolution of Data Structures in Trace-Based Memory Monitoring (Softwaretechnik-Trends Band 39, Heft 3, 2019), Weninger, Markus; Gander, Elias; Mössenböck, Hanspeter. Modern software systems are becoming increasingly complex and are thus more prone to performance degradation due to memory leaks. Memory leaks occur if objects that are no longer needed are unintentionally kept alive. While there exists a variety of state-of-the-art memory monitoring tools, most of them only use memory snapshots, i.e., heap dumps, to analyze an application’s live objects at a single point in time. This does not allow developers to identify data structures that grow over time. Trace-based monitoring tools tackle this problem by recording memory events, e.g., allocations or object moves performed by the garbage collector (GC), throughout an application’s run time. In this paper, we present ongoing research on the use of memory traces for detecting the root causes of memory leaks introduced by growing data structures. This encompasses (1) a domain-specific language (DSL) to describe arbitrary data structures, (2) an algorithm to detect instances of previously defined data structures in reconstructed heaps, as well as (3) techniques to analyze the temporal evolution of these data structure instances to identify those possibly involved in memory leaks. All these concepts have been integrated into AntTracks, a trace-based memory monitoring tool, to prove their feasibility.
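  A minimal Java sketch of the third ingredient mentioned in the abstract, analyzing the temporal evolution of data structure instances; it is not AntTracks code, and the class and method names are hypothetical. It assumes the DSL-based detection has already produced per-snapshot sizes for each instance and simply flags instances that grow monotonically.

  ```java
  import java.util.ArrayList;
  import java.util.HashMap;
  import java.util.List;
  import java.util.Map;

  // Hypothetical growth detector: given per-snapshot sizes of data structure
  // instances (as produced by DSL-based detection on reconstructed heaps),
  // report instances that grow monotonically across all snapshots.
  public class GrowthDetector {

      public static List<Long> findMonotonicallyGrowing(Map<Long, long[]> sizesPerInstance) {
          List<Long> suspects = new ArrayList<>();
          for (Map.Entry<Long, long[]> e : sizesPerInstance.entrySet()) {
              long[] sizes = e.getValue();
              boolean growing = sizes.length > 1;
              for (int i = 1; i < sizes.length && growing; i++) {
                  growing = sizes[i] > sizes[i - 1];
              }
              if (growing) {
                  suspects.add(e.getKey()); // instance id of a potential leak root
              }
          }
          return suspects;
      }

      public static void main(String[] args) {
          Map<Long, long[]> sizes = new HashMap<>();
          sizes.put(1L, new long[] {10, 15, 40, 90});   // keeps growing -> suspect
          sizes.put(2L, new long[] {50, 48, 52, 49});   // fluctuates -> not reported
          System.out.println(findMonotonicallyGrowing(sizes)); // prints [1]
      }
  }
  ```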
- Conference paper: An Architectural Template for Parallel Loops and Sections (Softwaretechnik-Trends Band 39, Heft 3, 2019), Frank, Markus; Hakamian, Alireza. The Palladio Component Model (PCM) uses UML-like diagrams to specify architectural software designs, which are used for early design-time analyses of software performance metrics. A current drawback of the PCM is that it does not support the specification of massively parallel software behaviour such as OpenMP parallel loops. For Software Performance Engineers (SPEs), this results in complex modelling workarounds, or the software’s behaviour cannot be modelled at all, leading to inaccurate analyses and semantic discrepancies. In this paper, we present a light-weight PCM metamodel extension that allows SPEs to easily annotate parallel sections (similar to OpenMP) in their software specifications. This significantly reduces the modelling effort through automation.
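  The extension itself is a metamodel change rather than code, but the behaviour it annotates corresponds to an OpenMP-style parallel loop. The Java sketch below (illustrative only, using the standard parallel streams machinery) shows the kind of data-parallel loop an SPE would mark as a parallel section.

  ```java
  import java.util.stream.IntStream;

  // Illustrative only: the kind of data-parallel loop (similar to an OpenMP
  // "parallel for") whose behaviour the proposed PCM annotation is meant to model.
  public class ParallelLoopExample {
      public static void main(String[] args) {
          double[] input = new double[1_000_000];
          double[] output = new double[input.length];

          // Each iteration is independent, so the loop can be split across cores.
          IntStream.range(0, input.length)
                   .parallel()
                   .forEach(i -> output[i] = Math.sqrt(input[i] + i));

          System.out.println("done, last element = " + output[output.length - 1]);
      }
  }
  ```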
- Conference paper: Automatic Performance Modeling of Configurable Scientific Software (Softwaretechnik-Trends Band 44, Heft 4, 2024), Schmid, Larissa. Modern software is configurable and allows users to set many parameters according to their needs. Due to many non-functional parameters, the same functionality can usually be achieved with varying performance. Performance models express application performance as functions of input parameters, helping users and developers understand application behavior. Automatic performance modeling approaches can generate performance models automatically from empirical measurements of the software. Current modeling approaches employ heuristics for deciding which configurations to measure, resulting in a trade-off between the cost of measurements and the accuracy of the model. To overcome this trade-off, we propose approaches to derive the smallest necessary measurement setup based on the results of a system analysis and to automatically identify performance-irrelevant options. Our evaluation with real-world applications shows that we can significantly decrease the cost of performance modeling while maintaining the accuracy of the resulting models.
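  As a toy illustration of what such a performance model is (not the author’s modeling approach), the sketch below fits runtime as a linear function of a single input parameter from a handful of measurements using ordinary least squares; real approaches search over many parameters and model terms.

  ```java
  // Toy illustration: a performance model expressing runtime as a function of one
  // input parameter, fitted from empirical measurements with ordinary least squares.
  public class TinyPerformanceModel {

      // Fit runtime ≈ a + b * n and return {a, b}.
      static double[] fitLinear(double[] n, double[] runtime) {
          int k = n.length;
          double sumX = 0, sumY = 0, sumXY = 0, sumXX = 0;
          for (int i = 0; i < k; i++) {
              sumX += n[i];
              sumY += runtime[i];
              sumXY += n[i] * runtime[i];
              sumXX += n[i] * n[i];
          }
          double b = (k * sumXY - sumX * sumY) / (k * sumXX - sumX * sumX);
          double a = (sumY - b * sumX) / k;
          return new double[] {a, b};
      }

      public static void main(String[] args) {
          double[] n = {100, 200, 400, 800};            // configuration parameter
          double[] seconds = {1.1, 2.0, 4.2, 7.9};      // measured runtimes
          double[] model = fitLinear(n, seconds);
          double predicted = model[0] + model[1] * 1600; // extrapolate to n = 1600
          System.out.printf("runtime(n) ≈ %.3f + %.5f * n, runtime(1600) ≈ %.1f s%n",
                  model[0], model[1], predicted);
      }
  }
  ```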
- Conference paper: Benchmarking Function Hook Latency in Cloud-Native Environments (Softwaretechnik-Trends Band 43, Heft 4, 2023), Kahlhofer, Mario; Kern, Patrick; Henning, Sören; Rass, Stefan. Researchers and engineers are increasingly adopting cloud-native technologies for application development and performance evaluation. While this has improved the reproducibility of benchmarks in the cloud, the complexity of cloud-native environments makes it difficult to run benchmarks reliably. Cloud-native applications are often instrumented or altered at runtime, by dynamically patching or hooking them, which introduces a significant performance overhead. Our work discusses the benchmarking-related pitfalls of the dominant cloud-native technology, Kubernetes, and how they affect performance measurements of dynamically patched or hooked applications. We present recommendations to mitigate these risks and demonstrate how an improper experimental setup can negatively impact latency measurements.
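  A minimal sketch of the basic measurement idea, in plain Java rather than the Kubernetes setup the paper discusses: compare the latency of a direct call with the same call routed through a hook, after a warm-up phase to reduce JIT effects. All names are illustrative.

  ```java
  import java.util.function.LongUnaryOperator;

  // Minimal sketch of measuring hook overhead: time a direct call and the same
  // call wrapped by a "hook" that adds bookkeeping, after a warm-up phase.
  public class HookLatencyBenchmark {

      static long work(long x) { return x * 31 + 7; }

      public static void main(String[] args) {
          LongUnaryOperator direct = HookLatencyBenchmark::work;
          long[] counter = new long[1];
          LongUnaryOperator hooked = x -> { counter[0]++; return work(x); }; // simulated hook

          runAndReport("direct", direct);
          runAndReport("hooked", hooked);
      }

      static void runAndReport(String label, LongUnaryOperator op) {
          long sink = 0;
          for (int i = 0; i < 5_000_000; i++) sink += op.applyAsLong(i); // warm-up
          long start = System.nanoTime();
          for (int i = 0; i < 5_000_000; i++) sink += op.applyAsLong(i);
          long elapsed = System.nanoTime() - start;
          System.out.printf("%s: %.1f ns/call (sink=%d)%n", label, elapsed / 5_000_000.0, sink);
      }
  }
  ```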
- Conference paper: Benchmarking Stream Processing Frameworks for Large Scale Data Shuffling (Softwaretechnik-Trends Band 43, Heft 4, 2023), Henning, Sören; Vogel, Adriano; Leichtfried, Michael; Ertl, Otmar; Rabiser, Rick. Distributed stream processing frameworks help build scalable and reliable applications that perform transformations and aggregations on continuous data streams. We outline our ongoing research on designing a new benchmark for distributed stream processing frameworks. In contrast to other benchmarks, it focuses on use cases where stream processing frameworks are mainly used for redistributing data records to perform state-local aggregations, while the actual aggregation logic is considered a black-box software component. We describe our benchmark architecture based on a real-world use case, show how we implemented it with four state-of-the-art frameworks, and give an overview of initial experimental results.
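  A framework-independent Java sketch of the use case the benchmark targets: records are redistributed ("shuffled") by key so that all records with the same key end up in the same partition, where a black-box aggregator keeps per-key state. The names are illustrative and not part of any framework API.

  ```java
  import java.util.HashMap;
  import java.util.List;
  import java.util.Map;

  // Framework-independent sketch of "shuffle by key, then aggregate state-locally":
  // records are routed to a partition by hashing their key, and each partition
  // applies a black-box aggregation to the records it owns.
  public class ShuffleAndAggregate {

      record Record(String key, long value) {}

      interface Aggregator {                 // the black-box aggregation logic
          void add(String key, long value);
          Map<String, Long> result();
      }

      static class SumAggregator implements Aggregator {
          private final Map<String, Long> state = new HashMap<>();
          public void add(String key, long value) { state.merge(key, value, Long::sum); }
          public Map<String, Long> result() { return state; }
      }

      public static void main(String[] args) {
          int partitions = 4;
          Aggregator[] perPartition = new Aggregator[partitions];
          for (int p = 0; p < partitions; p++) perPartition[p] = new SumAggregator();

          List<Record> stream = List.of(new Record("a", 1), new Record("b", 2),
                                        new Record("a", 3), new Record("c", 5));
          for (Record r : stream) {
              int partition = Math.floorMod(r.key().hashCode(), partitions); // the "shuffle"
              perPartition[partition].add(r.key(), r.value());               // state-local update
          }
          for (int p = 0; p < partitions; p++) {
              System.out.println("partition " + p + ": " + perPartition[p].result());
          }
      }
  }
  ```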
- Conference paper: Better a Microbenchmark on a Cluster than a User at the Office: Flink Cluster Benchmarking (Softwaretechnik-Trends Band 39, Heft 3, 2019), Reichelt, David Georg; Meyer, Lars-Peter; Kühne, Stefan. When operating an Apache Flink cluster, performance problems may occur in all components of its setup. Reproducing those problems in different software or hardware components and on different nodes requires systematic experiments. We present an Apache Flink cluster benchmark set for server operators that measures the performance of an Apache Flink cluster, enabling operators to spot irregularities in software or hardware behaviour.
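  The benchmark set itself is not reproduced here; as a rough illustration of the underlying idea, the sketch below runs a fixed workload several times and reports the median runtime, a number an operator could compare across nodes or cluster configurations. All names are illustrative.

  ```java
  import java.util.Arrays;

  // Illustrative microbenchmark skeleton: run a fixed workload several times and
  // report the median runtime, a number operators can compare across nodes or
  // configurations to spot irregular behaviour.
  public class ClusterMicrobenchmark {

      static void workload() {
          long sum = 0;
          for (int i = 0; i < 20_000_000; i++) sum += i % 7;
          if (sum == 42) System.out.println("unreachable"); // keep the loop from being optimized away
      }

      public static void main(String[] args) {
          int repetitions = 11;
          long[] millis = new long[repetitions];
          for (int r = 0; r < repetitions; r++) {
              long start = System.nanoTime();
              workload();
              millis[r] = (System.nanoTime() - start) / 1_000_000;
          }
          Arrays.sort(millis);
          System.out.println("median runtime: " + millis[repetitions / 2] + " ms");
      }
  }
  ```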
- Conference paper: Co-Simulation of Hardware and Software in Palladio (Softwaretechnik-Trends Band 43, Heft 1, 2023), Weber, Sebastian; Henss, Jörg; Reussner, Ralf. To ensure the fulfilment of quality requirements, for example performance, at design time, the software architect can model the software and simulate it with Palladio. The accuracy of the model depends on the estimation of resource demands, which is difficult and error-prone. Therefore, in the later stages of development, values should be evaluated based on available information instead of being estimated. An unavailable component implementation or delayed hardware access should not force the software architect to stick with the estimation, but should be compensated otherwise, for example with prototypes or hardware simulation. This paper presents six approaches to adapt the evaluation of hardware resource usage in the Palladio software architecture simulation by incorporating co-simulation techniques to compensate for different shortcomings. Possible usage scenarios of the approaches are presented and put in relation to the estimated development effort. We present our initial results of implementing one of the approaches, the integration of a hardware-simulation-based resource demand estimation in Palladio. It can compensate for unavailable hardware, provided that hardware specifications and the software are available. The results show that the parameterization options of the hardware simulation were not sufficient to achieve the desired accuracy and that the simulation time increases significantly.
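  A hypothetical Java sketch of the design idea only; the interfaces and names are illustrative and not part of Palladio’s actual API. Instead of a hard-coded estimate, the architecture simulation asks a pluggable provider for resource demands, and one provider implementation could delegate to a hardware simulator.

  ```java
  // Hypothetical sketch of the co-simulation idea: the architecture simulation
  // obtains resource demands from a pluggable provider instead of a fixed estimate.
  // These interfaces are illustrative and not part of Palladio's actual API.
  public class CoSimulationSketch {

      interface ResourceDemandProvider {
          double demandInCpuCycles(String action, long inputSize);
      }

      // Design-time fallback: a manually estimated demand.
      static class EstimatedDemand implements ResourceDemandProvider {
          public double demandInCpuCycles(String action, long inputSize) {
              return 1_000.0 * inputSize; // rough guess by the architect
          }
      }

      // Later stage: delegate to a (here: faked) hardware simulation run.
      static class HardwareSimulatedDemand implements ResourceDemandProvider {
          public double demandInCpuCycles(String action, long inputSize) {
              return runHardwareSimulator(action, inputSize);
          }
          private double runHardwareSimulator(String action, long inputSize) {
              return 850.0 * inputSize + 12_000; // stands in for a real co-simulation call
          }
      }

      public static void main(String[] args) {
          ResourceDemandProvider provider = new HardwareSimulatedDemand();
          System.out.println("demand: " + provider.demandInCpuCycles("encode", 4_096) + " cycles");
      }
  }
  ```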
- Conference paper: Combating Run-time Performance Bugs with Performance Claim Annotations (Softwaretechnik-Trends Band 40, Heft 3, 2020), Casey, Zachery; Shah, Michael D. Bugs in software are classified by a failure to meet some aspect of a specification. A piece of code which does not match the performance given by a specification contains a performance bug. We believe there is a need for better in-source language support and tools to assist developers in mitigating and documenting performance bugs during the software development life cycle. In this paper, we present our performance claim annotation framework for specifying and monitoring the performance of a program. A performance claim annotation (PCA) is written by a programmer to assert that a section of code’s run-time execution meets a specific metric (e.g., time elapsed) and to perform some action, typically logging, if the code fails to meet the metric during execution. Our implementation uses a combination of the DWARF debugging format and the Pin dynamic binary instrumentation tool to provide an interface for building, using, and checking performance claims in order to reduce performance bugs during the development life cycle.
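  The authors’ implementation builds on DWARF and Pin for native binaries; the Java sketch below is only a conceptual illustration of the same idea, with hypothetical names: a claim attaches an expected time bound to a code section and a log entry is emitted when the bound is violated at run time.

  ```java
  // Conceptual sketch of a performance claim: a code section declares an expected
  // time bound and a log entry is emitted if the bound is violated at run time.
  // This is not the DWARF/Pin-based PCA framework from the paper.
  public class PerformanceClaimSketch {

      static void checkedRun(String claimName, long maxMillis, Runnable section) {
          long start = System.nanoTime();
          section.run();
          long elapsedMillis = (System.nanoTime() - start) / 1_000_000;
          if (elapsedMillis > maxMillis) {
              System.err.printf("performance claim '%s' violated: %d ms > %d ms%n",
                      claimName, elapsedMillis, maxMillis);
          }
      }

      public static void main(String[] args) {
          checkedRun("sort-under-50ms", 50, () -> {
              int[] data = new java.util.Random(1).ints(2_000_000).toArray();
              java.util.Arrays.sort(data);
          });
      }
  }
  ```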
- Conference paper: Commit-Based Continuous Integration of Performance Models (Softwaretechnik-Trends Band 43, Heft 2, 2023), Armbruster, Martin. Architecture-level performance models (aPMs) such as the Palladio Component Model (PCM) can be used, e.g., for performance predictions to explore design alternatives, and combine the aspects of architecture and performance models. An up-to-date architecture model can support software maintenance by reducing architectural degradation and can guide software evolution. At the same time, performance models allow the investigation of software performance without the need to implement or change the system. However, keeping them up to date requires manual effort, which hinders their adoption. Especially in agile software development, which is characterized by incremental and iterative development cycles, short or absent design phases prevent manual modeling activities. This master’s thesis presents an approach building upon previous work with two main goals: (1) closing the gaps by completing the pipeline for aPM extraction and instrumentation, and (2) evaluating the pipeline with a real Git history. In the approach, the Java source code at the state of a new commit is parsed into a code model. Through a state-based comparison with the code model of the previous commit, a delta-based change sequence is obtained which describes how the code model of the previous commit can be transformed into a code model conforming to the state of the new commit. Afterwards, the changes are used to incrementally update the PCM, i.e., only the PCM elements affected by the source code changes are updated while the other PCM elements remain unchanged.
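  A much-simplified Java sketch of the incremental idea described above; all types are hypothetical stand-ins, not the actual code or PCM metamodels. It compares the code model of the previous commit with the new one, derives a change set, and touches only the model elements affected by those changes.

  ```java
  import java.util.HashMap;
  import java.util.HashSet;
  import java.util.Map;
  import java.util.Set;

  // Much-simplified sketch of commit-based incremental model updates: compare the
  // previous and the new code model state, compute the changed elements, and update
  // only the corresponding performance model elements. All types are hypothetical.
  public class IncrementalModelUpdate {

      public static void main(String[] args) {
          // Code model of the previous commit: element id -> content hash.
          Map<String, String> previous = Map.of("ClassA#foo", "h1", "ClassB#bar", "h2");
          // Code model parsed from the new commit.
          Map<String, String> current = Map.of("ClassA#foo", "h1", "ClassB#bar", "h3",
                                               "ClassC#baz", "h4");

          Set<String> changed = new HashSet<>();
          for (Map.Entry<String, String> e : current.entrySet()) {
              if (!e.getValue().equals(previous.get(e.getKey()))) {
                  changed.add(e.getKey()); // added or modified in the new commit
              }
          }

          Map<String, String> performanceModel = new HashMap<>(
                  Map.of("ClassA#foo", "SEFF(foo)", "ClassB#bar", "SEFF(bar)"));
          for (String element : changed) {
              performanceModel.put(element, "SEFF(" + element + ") [re-extracted]");
          }
          System.out.println("updated elements: " + changed);
          System.out.println("performance model: " + performanceModel);
      }
  }
  ```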