Softwaretechnik-Trends 39(4) - 2019

  • Conference paper
    Towards domain-specific extensibility of quality-aware software architecture meta-models
    (Softwaretechnik-Trends Band 39, Heft 4, 2019) Krach, Sebastian Dieter
    The evermore extending presence of software systems in ubiquitous application domains requires current software architecture analyses to incorporate domain-specific concepts. Current model extension approaches are too general, do not provide sufficient support for multiple roles in the development process or impose high effort on the extension developer. In this paper, we present MDSD.tools Characteristics, a framework to enhance existing architecture description languages with easy-to-use quality modeling profiles. It facilitates capturing of domain expert knowledge concerning relevant attributes of architecture entities into reusable specifications. Furthermore, it comprises a notion of contextual information with a model-based specification for information propagation to simplify the integration with existing analyses.
  • Conference paper
    Improving Batch Performance when Migrating to Microservices with Chunking and Coroutines
    (Softwaretechnik-Trends Band 39, Heft 4, 2019) Knoche, Holger
    When migrating enterprise software towards microservices, batch jobs are particularly sensitive to communication overhead introduced by the distributed nature of microservices. As it is not uncommon for a single batch job to process millions of data items, even an additional millisecond of overhead per item may lead to a significant increase in runtime. A common strategy for reducing the average overhead per item is called chunking, which means that individual requests for different data items are grouped into larger requests. However, chunking is difficult to integrate into existing batch jobs, which are traditionally executed sequentially. In this paper, we present a chunking approach based on coroutines, and investigate whether it can be used to mitigate the potential penalty to batch performance during migrations to microservices.
  • Conference paper
    Modelling and Predicting Memory Behaviour in Parallel Systems with Network Links: Palladio-based Experiment Report
    (Softwaretechnik-Trends Band 39, Heft 4, 2019) Gruber, Philipp; Frank, Markus
    This work improves the capabilities of Palladio to predict the performance of parallel software in multicore environments. In previous work, we could show that the accuracy of the Palladio simulations is not sufficient for multicore systems. We assume that one reason for this is the memory bandwidth behaviour, which is not included in the Palladio Component Model and can become a bottleneck in parallel software. We present an approach to model the memory bandwidth behaviour by the means of an already existing network link concept. We can show that by using network link as a memory model we can improve our predictions up to 26% points using 16 cores on a machine and can receive an accuracy of 90% for our use case.
  • Conference paper
    Understanding the Performance of Distributed Database Management Systems in Volatile Environments
    (Softwaretechnik-Trends Band 39, Heft 4, 2019) Domaschka, Jörg; Seybold, Daniel
    Cloud computing provides scalability and elasticity mechanisms on resource level and has become the preferred operational model for many applications. These, in turn, are based on distributed architectures trusting that this leads to scalability and elasticity and hence, good performance. Many applications rely on one or multiple database management systems (DBMS) as storage backends in order to manage their persistent state. Hence, the selection of a DBMS for a specific use case is crucial for performance and other non-functional properties. Yet, the choice is cumbersome due to the large number of available systems and the many impact factors ranging from the size of virtual resources, the type of the DBMS, and its architecture and scaling factor. In this paper, we summarise our experiences with performance evaluation for cloud-hosted DBMS in order to find well-suited configurations for specific use cases. We demonstrate that the overall performance of a distributed DBMS depends on three major domains (workload, cloud environment, and DBMS) with various parameters for each dimension.
  • Conference paper
    On Learning Parametric Dependencies from Monitoring Data
    (Softwaretechnik-Trends Band 39, Heft 4, 2019) Grohmann, Johannes; Eismann, Simon; Kounev, Samuel
    A common approach to predict system performance are so-called architectural performance models. In these models, parametric dependencies describe the relation between the input parameters of a component and its performance properties and therefore significantly increase the model expressiveness. However, manually modeling parametric dependencies is often infeasible in practice. Existing automated extraction approaches require either application source code or dedicated performance tests, which are not always available. We therefore introduced one approach for identification and one for characterization of parametric dependencies, solely based on run-time monitoring data. In this paper, we propose our idea on combining both techniques in order to create a holistic approach for the identification and characterization of parametric dependencies. Furthermore, we discuss challenges we are currently facing and potential ideas on how to overcome them.
  • Conference paper
    Towards Testing the Performance Influence of Hypervisor Hypercall Interface Behavior
    (Softwaretechnik-Trends Band 39, Heft 4, 2019) Beierlieb, Lukas; Iffländer, Lukas; Kounev, Samuel; Milenkoski, Aleksandar
    With the continuing rise of cloud technology hypervisors play a vital role in the performance and reliability of current services. Hypervisors offer so-called hypercall interfaces for communication with the hosted virtual machines. These interfaces require thorough robustness to assure performance, security, and reliability. Existing research focusses on finding hypercall-related vulnerabilities. In this work, we discuss open challenges regarding hypercall interfaces. To address these challenges, we propose an extensive framework architecture to perform robustness testing on hypercall interfaces. This framework supports test campaigns and modeling of hypercall interfaces.
  • Conference paper
    10th Symposium on Software Performance (SSP) Würzburg, November 5-6, 2019: Preface
    (Softwaretechnik-Trends Band 39, Heft 4, 2019) Herbst, Nikolas; Reussner, Ralf
    Around fifty participants from Karlsruhe, Würzburg, Stuttgart, München, Kiel, Nürnberg, Leipzig, Hannover, Ulm, Mannheim, Linz and Hildesheim attended the 10 year anniversary edition of the Symposium on Software Performance in Würzburg. Performance is one of the most relevant quality attributes of any IT system. While good performance leads to high user satisfaction, weak response times lead to loss of users, perceived unavailability of the system, or unnecessarily high costs of network or computing resources. Therefore, various techniques to evaluate, control, and improve the performance of IT systems have been developed, ranging from online monitoring and benchmarking to modeling and prediction. Experience shows, that for system design or later optimization, such techniques should be applied in smart combination. Therefore, the "Symposium on Software Performance" brings together researchers and practitioners interested in software performance, where performance is understood both in classical sense as the amount of useful work accomplished by a software system compared to the time and resources used", as well as in a broader sense as "the manner in which or the efficiency with which a software system reacts or fulfills its intended purpose". The scope of the symposium spans measurement, modeling, benchmark design, and run-time management. The focus is both on classical performance metrics such as response time, throughput and resource utilization, as well as on the relationship of such metrics to other software quality attributes including but not limited to scalability, elasticity, (energy) efficiency, dependability (in terms of availability and reliability), resilience, security and privacy.
  • Conference paper
    Using OPEN.xtrace and Architecture-Level Models to Predict Workload Performance on In-Memory Database Systems
    (Softwaretechnik-Trends Band 39, Heft 4, 2019) Barnert, Maximilian; Streitz, Adrian; Rank, Johannes; Kienegger, Harald; Krcmar, Helmut
    In-Memory Database Systems (IMDB) come into operation on highly dynamic on-premise and cloud environments. Existing approaches use classical modeling notations such as queuing network models (QN) to reflect performance on IMDB. Changes to workload or hardware come along with a recreation of entire models. At the same time, new paradigms for IMDB increase parallelism within database workload, which intensifies the effort to create and parameterize models. To simplify and reduce the effort for researchers and practitioners to model workload performance on IMDB, we propose the use of architecture level performance models and present a model creation process, which transforms database traces of SAP HANA to the Palladio Component Model (PCM). We evaluate our approach based on experiments using analytical workload. We receive prediction errors for response time and throughput below 4 %.
  • Conference paper
    Data Stream Operations as First-Class Entities in Palladio
    (Softwaretechnik-Trends Band 39, Heft 4, 2019) Werle, Dominik; Seifermann, Stephan; Koziolek, Anne
    The Palladio Component Model (PCM) is an approach to simulate the performance of software systems using a component-based modeling language. When simulating PCM models, requests only influence each other if they compete for the same resources. However, for some applications, such as data stream processing, it is not realistic for requests to be this independent. For example, it is common to group requests in windows over time or to join data streams. Modeling the resulting behavior and resource demands in the system via stochastic approximations is possible but has drawbacks. It requires additional effort for determining the approximation and it may require spreading information across model elements that should be encapsulated in one place. In this paper, we propose a way of modeling interaction between requests that is similar to query languages for data streams. Thus, we introduce state into models without sacrificing the understandability and composability of the model.
  • Conference paper
    Memory Leak Visualization using Evolving Software Cities
    (Softwaretechnik-Trends Band 39, Heft 4, 2019) Weninger, Markus; Makor, Lukas; Mössenböck, Hanspeter
    Memory leaks occur when objects that are no longer needed are kept alive unnecessarily. They can have a significant performance impact and, in the worst case, may lead to a crash of the application. Most state-of-the-art memory monitoring tools lack visualizations of memory growth over time. However, domains such as software evolution and program comprehension have shown that graphically visualizing the growth and evolution of a system can help users understand and interpret this growth. In this paper, we present ongoing research on how to visualize an application's memory evolution over time using the software city metaphor. While software cities are typically used to visualize static artifacts of a software system such as classes, we use them to visualize the dynamic memory behavior of an application. In our approach, heap objects can be grouped by arbitrary properties such as their types or their allocating threads. These groups are visualized as buildings arranged in districts, where the size of a building corresponds to the number of objects it represents. Continuously updating the city over time creates the impression of an evolving city. Users can then identify and inspect the buildings, i.e., object groups, that grow the most. We integrated our approach into AntTracks, a trace-based memory monitoring tool developed by us, to prove its feasibility. (A small grouping sketch follows after this list.)
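
Several of the contributions above lend themselves to small illustrative sketches. The examples below (in Kotlin) are editorial illustrations under stated assumptions; none of them reproduce the authors' actual code, tools, or APIs. A first sketch relates to MDSD.tools Characteristics (Krach): domain experts capture quality attributes of architecture entities once, in a reusable profile, and analyses query them without changing the underlying meta-model. All type and property names below are invented for illustration.

```kotlin
// Hypothetical sketch: reusable quality characteristics attached to
// architecture entities and queried by an analysis. Not the real
// MDSD.tools Characteristics API.
data class Characteristic(val name: String, val value: Double)

// A characteristics profile captures domain-expert knowledge once
// and can be reused across models.
class CharacteristicsProfile {
    private val store = mutableMapOf<String, MutableList<Characteristic>>()

    fun annotate(entityId: String, characteristic: Characteristic) {
        store.getOrPut(entityId) { mutableListOf() }.add(characteristic)
    }

    fun of(entityId: String): List<Characteristic> = store[entityId] ?: emptyList()
}

fun main() {
    val profile = CharacteristicsProfile()
    // A domain expert annotates an architecture entity without touching its meta-model.
    profile.annotate("PaymentService", Characteristic("dataSensitivity", 0.9))
    profile.annotate("PaymentService", Characteristic("expectedRequestSizeKB", 12.0))

    // An analysis looks up the domain-specific attribute it needs.
    val sensitivity = profile.of("PaymentService").find { it.name == "dataSensitivity" }
    println("Sensitivity of PaymentService: ${sensitivity?.value}")
}
```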
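A minimal sketch of chunking with coroutines, as motivated by Knoche's contribution, assuming Kotlin coroutines and an invented ChunkingClient; the bulk call is stubbed and the chunk size is arbitrary. Per-item lookups suspend until a full chunk can be issued as a single bulk request, so the sequential per-item code stays unchanged.

```kotlin
import kotlinx.coroutines.*

// Hypothetical sketch of chunking with coroutines (not the paper's actual code).
// A single-threaded dispatcher (runBlocking) is assumed, so no synchronization is shown.
class ChunkingClient(private val chunkSize: Int) {
    private data class Pending(val id: Int, val result: CompletableDeferred<String>)
    private val buffer = mutableListOf<Pending>()

    // Called from each item's coroutine; suspends until its chunk has been flushed.
    suspend fun lookup(id: Int): String {
        val pending = Pending(id, CompletableDeferred())
        buffer.add(pending)
        if (buffer.size >= chunkSize) flush()
        return pending.result.await()
    }

    // Issues one bulk request (stubbed here) for all buffered lookups.
    // A complete implementation must also flush the final, partially filled chunk.
    fun flush() {
        val chunk = buffer.toList().also { buffer.clear() }
        val bulkResult = chunk.associate { it.id to "value-${it.id}" } // stub for the remote bulk call
        chunk.forEach { it.result.complete(bulkResult.getValue(it.id)) }
    }
}

fun main() = runBlocking {
    val client = ChunkingClient(chunkSize = 10)
    // 30 items: the per-item logic remains sequential code; chunking happens transparently.
    val items = (1..30).map { id ->
        launch {
            val value = client.lookup(id)
            if (id % 10 == 0) println("item $id -> $value")
        }
    }
    items.joinAll()
}
```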
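The memory-as-network-link idea of Gruber and Frank can be illustrated with a back-of-the-envelope fair-share model: all cores compete for a fixed memory bandwidth, and per-core throughput is capped at an equal share. The numbers and the linear sharing assumption are illustrative and are not taken from the paper or from Palladio's simulation.

```kotlin
// Illustrative fair-share model of a memory "link": n cores compete for a fixed
// bandwidth B; per-core throughput is capped at B / n. Numbers are made up.
fun predictedSpeedup(cores: Int, perCoreDemandGBs: Double, memoryBandwidthGBs: Double): Double {
    val perCoreShare = memoryBandwidthGBs / cores
    val effectivePerCore = minOf(perCoreDemandGBs, perCoreShare)
    // Speedup relative to a single core that receives its full demand.
    return cores * (effectivePerCore / perCoreDemandGBs)
}

fun main() {
    val demand = 4.0     // GB/s each core would like to stream
    val bandwidth = 40.0 // GB/s total memory bandwidth
    for (cores in listOf(1, 4, 8, 16, 32)) {
        // Beyond 10 cores the shared link saturates and the speedup flattens.
        println("$cores cores -> speedup %.1f".format(predictedSpeedup(cores, demand, bandwidth)))
    }
}
```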
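For the contribution of Grohmann, Eismann, and Kounev, a toy version of the two steps on monitoring data: identify a parametric dependency via correlation, then characterize it with a least-squares fit. The threshold, the data, and the linear model are assumptions; the actual approaches are more elaborate.

```kotlin
import kotlin.math.abs
import kotlin.math.sqrt

// Illustrative sketch: identify a dependency between a monitored input parameter
// and a measured resource demand (correlation), then characterize it (least squares).
fun pearson(x: List<Double>, y: List<Double>): Double {
    val mx = x.average(); val my = y.average()
    val cov = x.indices.sumOf { (x[it] - mx) * (y[it] - my) }
    val sx = sqrt(x.sumOf { (it - mx) * (it - mx) })
    val sy = sqrt(y.sumOf { (it - my) * (it - my) })
    return cov / (sx * sy)
}

fun leastSquares(x: List<Double>, y: List<Double>): Pair<Double, Double> {
    val mx = x.average(); val my = y.average()
    val slope = x.indices.sumOf { (x[it] - mx) * (y[it] - my) } /
                x.sumOf { (it - mx) * (it - mx) }
    return slope to (my - slope * mx) // slope, intercept
}

fun main() {
    // Monitoring data: parameter value (e.g. collection size) vs. CPU demand in ms.
    val param  = listOf(10.0, 20.0, 40.0, 80.0, 160.0)
    val demand = listOf(2.1, 3.9, 8.2, 15.8, 32.5)

    val r = pearson(param, demand)
    if (abs(r) > 0.8) {                                          // identification step
        val (slope, intercept) = leastSquares(param, demand)     // characterization step
        println("demand(p) ≈ %.3f * p + %.3f (r = %.2f)".format(slope, intercept, r))
    } else {
        println("no strong dependency identified (r = %.2f)".format(r))
    }
}
```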
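A hypothetical sketch of a hypercall robustness test campaign in the spirit of Beierlieb et al.: a modelled hypercall signature, boundary and invalid parameter generation, and outcome bookkeeping. The hypercall number, parameter names, and the injector stub are invented; a real campaign would issue the calls against a hypervisor under test and would also observe timeouts and crashes.

```kotlin
import kotlin.random.Random

// Hypothetical test campaign: fuzz a modelled hypercall interface and record outcomes.
data class HypercallParam(val name: String, val validRange: LongRange)
data class Hypercall(val number: Int, val params: List<HypercallParam>)

enum class Outcome { HANDLED, ERROR_CODE, TIMEOUT } // TIMEOUT never produced by the stub below

fun fuzzValue(p: HypercallParam, rnd: Random): Long = when (rnd.nextInt(3)) {
    0 -> p.validRange.random(rnd)  // valid value
    1 -> p.validRange.last + 1     // just outside the valid range
    else -> rnd.nextLong()         // arbitrary bit pattern
}

// Stub: a real injector would issue the call against a test VM / hypervisor.
fun inject(call: Hypercall, args: List<Long>): Outcome =
    if (args.zip(call.params).all { (a, p) -> a in p.validRange }) Outcome.HANDLED
    else Outcome.ERROR_CODE

fun main() {
    val rnd = Random(42)
    val mapGpaRange = Hypercall(
        number = 0x4A, // invented hypercall number
        params = listOf(HypercallParam("gpa", 0L..0xFFFF_FFFFL), HypercallParam("pages", 1L..512L))
    )
    val results = (1..1000).map {
        val args = mapGpaRange.params.map { p -> fuzzValue(p, rnd) }
        inject(mapGpaRange, args)
    }
    println(results.groupingBy { it }.eachCount()) // e.g. {HANDLED=..., ERROR_CODE=...}
}
```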
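For the trace-based model creation of Barnert et al., a simplified sketch of the aggregation step: database trace records are grouped per operation and reduced to resource-demand parameters that could parameterize a PCM model. The record fields and the normalization by processed rows are assumptions and do not reflect the OPEN.xtrace schema or SAP HANA trace formats.

```kotlin
// Illustrative derivation of resource-demand parameters from trace records.
data class TraceRecord(val operation: String, val cpuMs: Double, val rowsProcessed: Long)

data class ResourceDemand(val operation: String, val meanCpuMs: Double, val cpuMsPerRow: Double)

fun deriveDemands(records: List<TraceRecord>): List<ResourceDemand> =
    records.groupBy { it.operation }.map { (op, recs) ->
        ResourceDemand(
            operation = op,
            meanCpuMs = recs.map { it.cpuMs }.average(),
            // Normalizing by rows yields a simple parametric demand for the model.
            cpuMsPerRow = recs.sumOf { it.cpuMs } / recs.sumOf { it.rowsProcessed }
        )
    }

fun main() {
    val trace = listOf(
        TraceRecord("AggregateSales", 120.0, 1_000_000L),
        TraceRecord("AggregateSales", 250.0, 2_100_000L),
        TraceRecord("LookupCustomer", 0.4, 1L)
    )
    deriveDemands(trace).forEach(::println)
}
```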
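The kind of request interaction that Werle, Seifermann, and Koziolek want to make expressible can be seen in a tiny tumbling-window example: how much work a request causes depends on which other requests arrived in the same window, which per-request stochastic approximations capture poorly. The names and the one-second window are illustrative and are not PCM modeling constructs.

```kotlin
// Illustrative stateful stream behavior: requests grouped into tumbling windows.
data class Request(val sensorId: String, val timestampMs: Long, val value: Double)

fun tumblingWindows(requests: List<Request>, windowMs: Long): Map<Long, List<Request>> =
    requests.groupBy { it.timestampMs / windowMs }

fun main() {
    val requests = listOf(
        Request("s1", 100L, 20.1), Request("s2", 450L, 19.8),
        Request("s1", 1200L, 20.4), Request("s2", 1700L, 20.0), Request("s1", 1999L, 20.6)
    )
    tumblingWindows(requests, windowMs = 1000L).forEach { (window, batch) ->
        // The resource demand of this aggregation depends on how many requests share
        // the window, i.e. the requests are not independent of each other.
        val mean = batch.map { it.value }.average()
        println("window $window: ${batch.size} requests, mean value %.2f".format(mean))
    }
}
```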
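Finally, a simplified sketch of the grouping behind the memory city of Weninger, Makor, and Mössenböck: heap objects of a snapshot are grouped by an arbitrary property, group sizes become building sizes, and growth between snapshots flags leak candidates. This is a simplification for illustration, not AntTracks' actual data model or visualization pipeline.

```kotlin
// Illustrative grouping step behind an evolving memory city.
data class HeapObject(val type: String, val allocatingThread: String)

// Group heap objects by an arbitrary property; the group size is the building size.
fun buildingSizes(heap: List<HeapObject>, groupBy: (HeapObject) -> String): Map<String, Int> =
    heap.groupingBy(groupBy).eachCount()

fun main() {
    val snapshotT0 = List(500) { HeapObject("byte[]", "worker-1") } +
                     List(200) { HeapObject("HashMap\$Node", "worker-2") }
    val snapshotT1 = List(500) { HeapObject("byte[]", "worker-1") } +
                     List(1400) { HeapObject("HashMap\$Node", "worker-2") }

    val before = buildingSizes(snapshotT0) { it.type }
    val after = buildingSizes(snapshotT1) { it.type }

    // Buildings that grew between the two snapshots are leak candidates.
    after.forEach { (type, count) ->
        val growth = count - (before[type] ?: 0)
        if (growth > 0) println("building '$type' grew by $growth objects")
    }
}
```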