Listing by author "Zdun, Uwe"
1 - 10 of 63
- Conference paper: Accurate profiling in the presence of dynamic compilation (Software Engineering 2016, 2016) Zheng, Yudi; Bulej, Lubomír; Binder, Walter. Many programming languages are implemented on top of a managed runtime system, such as the Java Virtual Machine (JVM) or the .NET CLR, featuring an optimizing dynamic (just-in-time) compiler. Programs written in those languages are first interpreted (or compiled by a baseline compiler), whereas frequently executed methods are later compiled by the optimizing dynamic compiler. Common feedback-directed optimizations [AHR02] performed by state-of-the-art dynamic compilers, such as the optimizing compiler in the Jikes RVM [Ar00] or Graal [Op], include method inlining and stack allocation of objects based on (partial) escape analysis [Ch99, SWM14], amongst others. Such optimizations result in compiled machine code that does not perform certain operations present at the bytecode level: in the case of inlining, method invocations are removed; in the case of stack allocation, heap allocations are removed and pressure on the garbage collector is reduced. Many profiling tools are implemented using bytecode instrumentation techniques, inserting profiling code into programs at the bytecode level. However, because dynamic compilation is transparent to the instrumented program, a profiler based on bytecode instrumentation is not aware of the optimizations performed by the dynamic compiler. Prevailing profilers based on bytecode instrumentation therefore suffer from two serious limitations: (1) over-profiling of code that is optimized (and in the extreme case completely removed) by the dynamic compiler, and (2) perturbation of the compiler optimizations due to the inserted instrumentation code. We present a novel technique to make profilers implemented with bytecode instrumentation aware of the optimization decisions of the dynamic compiler, and to make the dynamic compiler aware of the inserted profiling code. Our technique enables profilers which collect dynamic metrics that (1) correspond to an execution of the base program (w.r.t. the applied compiler optimizations), and (2) properly reflect the impact of dynamic compiler optimizations. We implement our approach in a state-of-the-art Java virtual machine and demonstrate its significance with concrete profilers. We quantify the impact of escape analysis on allocation profiling and object lifetime analysis, and the impact of method inlining on callsite profiling. We illustrate how our approach enables new kinds of profilers, such as a profiler for non-inlined callsites, and a testing framework for locating performance bugs in dynamic compiler implementations. This work was originally presented at OOPSLA'15 [ZBB15], where it received a Distinguished Paper Award as well as an endorsement from the Artifact Evaluation Committee for having submitted an easy-to-use, well-documented, consistent, and complete artifact. In the meantime, the work has been integrated into the Graal project.
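To make the limitation named in the entry above concrete, here is a minimal sketch of the kind of bytecode-instrumentation allocation profiler the abstract criticizes, written against the ASM library (the `Profile` class and `countAlloc` method are illustrative names, not the paper's API). Because the counter is injected at the bytecode level, it records every `NEW` instruction even when escape analysis would later eliminate the heap allocation entirely, which is precisely the over-profiling problem described above.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

import org.objectweb.asm.ClassVisitor;
import org.objectweb.asm.MethodVisitor;
import org.objectweb.asm.Opcodes;

// Runtime side: a per-type allocation counter (illustrative, not the paper's API).
final class Profile {
    static final ConcurrentHashMap<String, LongAdder> COUNTS = new ConcurrentHashMap<>();

    public static void countAlloc(String type) {
        COUNTS.computeIfAbsent(type, k -> new LongAdder()).increment();
    }
}

// Instrumentation side: emit a call to Profile.countAlloc before every NEW.
final class AllocationProfiler extends ClassVisitor {
    AllocationProfiler(ClassVisitor next) {
        super(Opcodes.ASM9, next);
    }

    @Override
    public MethodVisitor visitMethod(int access, String name, String desc,
                                     String sig, String[] exceptions) {
        MethodVisitor mv = super.visitMethod(access, name, desc, sig, exceptions);
        return new MethodVisitor(Opcodes.ASM9, mv) {
            @Override
            public void visitTypeInsn(int opcode, String type) {
                if (opcode == Opcodes.NEW) {
                    // The counter fires unconditionally: the JIT may later
                    // stack-allocate the object, but this call remains,
                    // over-counting heap allocations.
                    super.visitLdcInsn(type);
                    super.visitMethodInsn(Opcodes.INVOKESTATIC, "Profile",
                            "countAlloc", "(Ljava/lang/String;)V", false);
                }
                super.visitTypeInsn(opcode, type);
            }
        };
    }
}
```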
- Conference paper: Analysis of the trade-offs in different modeling approaches for performance prediction of software systems (Software Engineering 2016, 2016) Kounev, Samuel; Brosig, Fabian; Meier, Philipp; Becker, Steffen; Koziolek, Anne; Koziolek, Heiko; Rygielski, Piotr. A number of performance modeling approaches for predicting the performance of modern software systems and IT infrastructures exist in the literature. These approaches differ in their modeling expressiveness and accuracy, on the one hand, and their modeling overhead and costs, on the other. Considering a representative set of established approaches, we analyze the semantic gaps between them as well as the trade-offs in using them, and we provide guidelines for selecting the approach best suited to a given scenario.
- Conference paper: ATPS 2016: 9. Arbeitstagung Programmiersprachen (Software Engineering 2016, 2016) Krall, Andreas; Schaefer, Ina
- Conference paper: Automated workload characterization for I/O performance analysis in virtualized environments (Software Engineering 2016, 2016) Busch, Axel; Noorshams, Qais; Kounev, Samuel; Koziolek, Anne; Reussner, Ralf; Amrehn, Erich
- Conference paper: Automatic detection and resolution of lexical ambiguity in process models (Software Engineering 2016, 2016) Pittke, Fabian; Leopold, Henrik; Mendling, Jan
- Conference paper: Continuous Software Engineering (Software Engineering 2016, 2016) Hasselbring, Wilhelm
- Conference paper: A controlled experiment on manual test case derivation from UML activity diagrams and state machines (Software Engineering 2016, 2016) Felderer, Michael; Herrmann, Andrea
- Conference paper: Copy and paste redeemed (Software Engineering 2016, 2016) Narasimhan, Krishna; Reichenbach, Christoph. Software evolves continuously. As software evolves, its code bases require implementations of new features. These new functionalities are sometimes mere extensions of existing functionalities with minor changes. A commonly used method of extending an existing feature into a similar new feature is to copy the existing feature and modify it; this method is called "copy-paste-modify". An alternative method of achieving the same goal is to abstract the multiple similar features into one common feature, with appropriate selectors that enable choosing between them. The advantages of copy-paste-modify range from speed of development to a reduced risk of breaking the existing feature; the advantages of abstraction range from user preference for abstracted code to long-term maintenance benefits. In our paper, we describe an informal poll and discuss related work to confirm our beliefs about the advantages of each method of extending features. We observe a potential compromise when developers extend features that are near-clones of existing features. We address this dilemma with a novel approach that can semi-automatically abstract near-clone features, and we evaluate our approach by building a prototype in C++ and abstracting near-clone methods in popular open-source repositories.
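As an illustration of the abstraction the entry above describes, here is a small Java sketch (the paper's prototype targets C++ methods; all names and the VAT figure below are hypothetical): two near-clone methods produced by copy-paste-modify are merged into one common method whose varying part becomes a selector parameter.

```java
import java.util.List;
import java.util.function.DoubleUnaryOperator;

public class NearCloneAbstraction {
    record Item(double price) {}

    // Two near-clone features created by copy-paste-modify: they differ
    // only in how each price is adjusted before summing.
    static double totalPriceNet(List<Item> items) {
        double sum = 0;
        for (Item i : items) sum += i.price();
        return sum;
    }

    static double totalPriceGross(List<Item> items) {
        double sum = 0;
        for (Item i : items) sum += i.price() * 1.19; // hypothetical VAT rate
        return sum;
    }

    // Abstracted feature: the varying expression is lifted into a selector
    // parameter, so both near-clones become calls to one common method.
    static double totalPrice(List<Item> items, DoubleUnaryOperator adjust) {
        double sum = 0;
        for (Item i : items) sum += adjust.applyAsDouble(i.price());
        return sum;
    }

    public static void main(String[] args) {
        List<Item> items = List.of(new Item(10.0), new Item(5.0));
        System.out.println(totalPrice(items, p -> p));        // replaces totalPriceNet
        System.out.println(totalPrice(items, p -> p * 1.19)); // replaces totalPriceGross
    }
}
```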
- Conference paper: Cost-effective evolution of research prototypes into end-user tools: the Mach case study (Software Engineering 2016, 2016) Störrle, Harald. Much of software engineering research hinges on an implementation as a proof-of-concept. The resulting tools are often quite prototypical, to the degree of having little practical benefit. The Model Analyzer/Checker (MACH) is a case study in turning a set of research prototypes for analyzing UML models into a tool that can be used in research and teaching by a broad audience. We document how the requirements and constraints of an academic environment influence design decisions in software tool development. We argue that our approach, while perhaps unconventional, serves its purpose with a remarkable cost-benefit ratio. This paper is a summary of [St15].
  Background: For the purposes of research and publication, there is little value in polishing the usability, stability, portability, extensibility, or performance of a tool, as there is only ever one user of the tool. However, this excludes independent replication, benchmarking, human-factors studies (e.g., observing real users actually using the tool), tool usage in teaching settings, or commercial dissemination. Turning a research prototype into a "proper" tool, however, implies substantial engineering effort with little contribution to the research being conducted. Delegating the task to a student as a thesis project may not be possible, or may prove unsuccessful. All too often, the researcher ends up abandoning the further dissemination of a strand of research for lack of resources.
  Objective: In this particular case, the author's research work focuses on advanced operations on UML models that are beyond the scope of existing modeling tools. Over the years, many small exploratory prototypes have been created, each of which requires a high degree of expertise to use and presents only a negligible contribution to a modeler. Thus, the objective of MACH was to create a single integrated tool from a set of prototypes to realize synergies, to encapsulate it with a user interface that makes it accessible to a wide range of users, and, most of all, to do all this at as little cost as possible.
  Method: We focus on academic stakeholders, thus justifying the assumptions that (1) MACH users have some understanding of models and the underlying concepts, and (2) they have sufficient motivation to use the tool even if its user interface is less polished than commercial end-user software. Driven by the main rationale of rapid prototyping, most of the prototypes mentioned above were implemented in the PROLOG programming language. Long-term usage of the resulting code, usage by third parties, or long-term evolution of the code base was not considered at the time of creation. Thus, creating MACH pursued three goals: (1) combine most or all of the existing prototypes into a single tool; (2) make the major functions available to students and colleagues; but (3) strictly limit the effort in creating the tool.
- Conference paper: Counterexample guided abstraction refinement of product-line behavioural models (Software Engineering 2016, 2016) Cordy, Maxime; Heymans, Patrick; Legay, Axel; Schobbens, Pierre-Yves; Dawagne, Bruno; Leucker, Martin. The model-checking problem for Software Product Lines (SPLs) is harder than for single systems: variability constitutes a new source of complexity that exacerbates the state-explosion problem. Abstraction techniques have successfully alleviated state explosion in single-system models. However, they need to be adapted to SPLs to take into account the set of variants that produce a counterexample. In this paper, we recall the main ideas of a paper published elsewhere that applies CEGAR (Counterexample-Guided Abstraction Refinement) and designs new forms of abstraction specifically for SPLs. Experiments are carried out to evaluate the efficiency of our new abstractions. The results show that our abstractions, combined with an appropriate refinement strategy, hold the potential to achieve large reductions in verification time, although they sometimes perform worse.
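For readers unfamiliar with the technique named in the entry above, the following Java skeleton sketches the classic single-system CEGAR loop; all types and method names are illustrative, and the paper's SPL-specific abstractions and refinement strategies are not reproduced here.

```java
import java.util.List;

// Illustrative types; a real checker would carry transition systems,
// temporal-logic properties, and abstraction functions here.
interface Model {}
interface Property {}
record Counterexample(List<String> trace) {}

abstract class Cegar {
    abstract Model initialAbstraction(Model concrete);
    abstract Counterexample check(Model abstractModel, Property phi); // null = phi holds
    abstract boolean isSpurious(Model concrete, Counterexample cex);
    abstract Model refine(Model abstractModel, Counterexample cex);

    /** Returns true iff the concrete model satisfies phi. */
    boolean verify(Model concrete, Property phi) {
        Model abs = initialAbstraction(concrete);
        while (true) {
            Counterexample cex = check(abs, phi);
            if (cex == null) {
                // phi holds on the over-approximation, hence (for universal
                // properties) on the concrete model as well.
                return true;
            }
            if (!isSpurious(concrete, cex)) {
                return false; // the counterexample is real: property violated
            }
            abs = refine(abs, cex); // spurious: refine the abstraction and retry
        }
    }
}
```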