Listing by author "Zheng, Yudi"
1 - 3 of 3
- Conference paper: Accurate profiling in the presence of dynamic compilation (Software Engineering 2016, 2016). Zheng, Yudi; Bulej, Lubomír; Binder, Walter. Many programming languages are implemented on top of a managed runtime system, such as the Java Virtual Machine (JVM) or the .NET CLR, featuring an optimizing dynamic (just-in-time) compiler. Programs written in those languages are first interpreted (or compiled by a baseline compiler), whereas frequently executed methods are later compiled by the optimizing dynamic compiler. Common feedback-directed optimizations [AHR02] performed by state-of-the-art dynamic compilers, such as the optimizing compiler in the Jikes RVM [Ar00] or Graal [Op], include method inlining and stack allocation of objects based on (partial) escape analysis [Ch99, SWM14], amongst others. Such optimizations result in compiled machine code that does not perform certain operations present at the bytecode level: in the case of inlining, method invocations are removed; in the case of stack allocation, heap allocations are removed and pressure on the garbage collector is reduced. Many profiling tools are implemented using bytecode instrumentation techniques, inserting profiling code into programs at the bytecode level. However, because dynamic compilation is transparent to the instrumented program, a profiler based on bytecode instrumentation is not aware of the optimizations performed by the dynamic compiler. Prevailing profilers based on bytecode instrumentation therefore suffer from two serious limitations: (1) over-profiling of code that is optimized (and in the extreme case completely removed) by the dynamic compiler, and (2) perturbation of the compiler optimizations due to the inserted instrumentation code. We present a novel technique to make profilers implemented with bytecode instrumentation aware of the optimization decisions of the dynamic compiler, and to make the dynamic compiler aware of inserted profiling code. Our technique enables profilers that collect dynamic metrics which (1) correspond to an execution of the base program (w.r.t. the applied compiler optimizations) and (2) properly reflect the impact of dynamic compiler optimizations. We implement our approach in a state-of-the-art Java virtual machine and demonstrate its significance with concrete profilers. We quantify the impact of escape analysis on allocation profiling and object lifetime analysis, and the impact of method inlining on callsite profiling. We illustrate how our approach enables new kinds of profilers, such as a profiler for non-inlined callsites, and a testing framework for locating performance bugs in dynamic compiler implementations. This work was originally presented at OOPSLA'15 [ZBB15], where it received a Distinguished Paper Award as well as an endorsement from the Artifact Evaluation Committee for having submitted an easy-to-use, well-documented, consistent, and complete artifact. In the meantime, the work has been integrated into the Graal project. (An illustrative sketch of the over-profiling problem appears after this listing.)
- Conference paper: Comprehensive multi-platform dynamic program analysis for the Java and Dalvik virtual machines (Software-engineering and management 2015, 2015). Binder, Walter; Zheng, Yudi; Bulej, Lubomír; Sun, Haiyang; Tůma, Petr. Despite its importance for many software engineering tasks, dynamic program analysis is only insufficiently supported on the Java platform [KABM12]. Existing Java Virtual Machines (JVMs) as well as Android's Dalvik Virtual Machine (DVM) lack dedicated mechanisms for expressing arbitrary dynamic program analysis tasks at a high abstraction level, for ensuring complete code coverage of the analysis, and for isolating analysis tasks from the observed program to prevent interference. For example, the JVM Tool Interface requires analysis tasks to be written in low-level native code, and some virtual machines (e.g., the DVM) do not support it at all. As a consequence, dynamic program analysis tools are often implemented using low-level mechanisms, resulting in error-prone, hard-to-maintain code that supports only a particular virtual machine. Moreover, many analysis tools produce unsound profiles (due to interference of the analysis with the observed program) or incomplete profiles (due to limited code coverage). (A sketch of this low-level instrumentation route appears after this listing.)
- Conference paper: Renaissance: Benchmarking Suite for Parallel Applications on the JVM (Software Engineering 2020, 2020). Prokopec, Aleksandar; Rosà, Andrea; Leopoldseder, David; Duboscq, Gilles; Tůma, Petr; Studener, Martin; Bulej, Lubomír; Zheng, Yudi; Villazón, Alex; Simon, Doug; Würthinger, Thomas; Binder, Walter. Our paper published in the proceedings of the 40th ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI 2019) proposes Renaissance, a modern benchmark suite whose aim is to advance compiler and virtual machine (VM) research. The publication is complemented by an accepted artifact. To demonstrate that a compiler optimization, a memory management algorithm, or a synchronization technique is useful, a VM researcher needs benchmarks that demonstrate the desired behavior and at the same time capture representative aspects of real-world applications. During the last decade, multiple new programming paradigms appeared on the Java VM (JVM), including functional programming, big-data processing, parallel and concurrent programming, message passing, stream processing, and machine learning. The JVM has evolved as a platform, too: new features, such as method handles, variable handles, the invokedynamic instruction, lambdas, and atomic and relaxed memory operations, present new challenges for dynamic compilers and runtime environments. Existing benchmark suites do not capture these new applications, because they were created at a time when such workloads did not exist. Renaissance bridges this gap. The Renaissance suite is an ongoing, open-source effort to collect representative real-world workloads and to advance the research and development of VMs. Renaissance is available at https://renaissance.dev/ (A sketch of the kind of modern workload the suite targets appears after this listing.)
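
Regarding the first entry (accurate profiling under dynamic compilation): the minimal Java sketch below, with hypothetical names such as AllocationProfiler and recordAllocation that do not come from the paper, shows the kind of counter update an instrumentation-based allocation profiler conceptually inserts after every allocation site. It only illustrates the limitation the entry describes: if the dynamic compiler later eliminates a heap allocation via (partial) escape analysis, the inserted counter update survives in the compiled code, so the profile over-reports allocations, and the extra call may itself perturb the optimization.

```java
// Hypothetical sketch of what a bytecode-instrumentation-based allocation
// profiler injects at each allocation site; names are illustrative only.
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

final class AllocationProfiler {
    // Per-site allocation counters, keyed e.g. by "Class.method:bci".
    private static final ConcurrentHashMap<String, LongAdder> COUNTS =
            new ConcurrentHashMap<>();

    // The instrumentation inserts a call like this right after every NEW
    // bytecode. If the JIT later stack-allocates the object via (partial)
    // escape analysis, the allocation disappears from the machine code,
    // but this counter update remains: the reported count exceeds the
    // allocations the optimized code actually performs (over-profiling),
    // and the call itself can inhibit the optimization (perturbation).
    static void recordAllocation(String site) {
        COUNTS.computeIfAbsent(site, k -> new LongAdder()).increment();
    }

    static long allocationsAt(String site) {
        LongAdder counter = COUNTS.get(site);
        return counter == null ? 0L : counter.sum();
    }
}
```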
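Regarding the second entry (multi-platform dynamic program analysis): the sketch below shows the low-level route such tools commonly take on the JVM, a java.lang.instrument agent whose ClassFileTransformer rewrites bytecode. The premain/transform hooks are the standard JDK API; the class name AnalysisAgent and the filtering policy are illustrative assumptions, and the actual bytecode rewriting (omitted here, typically done with a library such as ASM) is where the error-prone, VM-specific work lives. The DVM lacks comparable interfaces, as the entry notes for JVMTI, which is part of the multi-platform coverage problem.

```java
// Hypothetical minimal Java agent; hook names (premain, transform) are the
// standard java.lang.instrument API, everything else is an assumption.
import java.lang.instrument.ClassFileTransformer;
import java.lang.instrument.Instrumentation;
import java.security.ProtectionDomain;

public final class AnalysisAgent {
    public static void premain(String agentArgs, Instrumentation inst) {
        inst.addTransformer(new ClassFileTransformer() {
            @Override
            public byte[] transform(ClassLoader loader, String className,
                                    Class<?> classBeingRedefined,
                                    ProtectionDomain domain,
                                    byte[] classfileBuffer) {
                // Skipping the JDK itself avoids instrumenting the analysis's
                // own dependencies, at the price of incomplete code coverage,
                // one of the limitations the entry points out.
                if (className == null || className.startsWith("java/")) {
                    return null; // null means "leave the class unchanged"
                }
                // A real tool would parse and rewrite classfileBuffer here.
                return null;
            }
        });
    }
}
```

Such an agent is packaged with a Premain-Class manifest attribute and attached via -javaagent; everything beyond that point is hand-written bytecode manipulation, which is exactly the maintainability burden the entry criticizes.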
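Regarding the third entry (Renaissance): the miniature workload below is not part of the suite; it is an assumed, self-contained example of the modern JVM features the entry lists. The lambda and method reference compile to invokedynamic call sites, and the parallel stream exercises the fork/join machinery, the kind of behavior the entry says older suites were created too early to capture.

```java
// Hypothetical mini-workload, not a Renaissance benchmark: lambdas and a
// parallel stream stand in for the newer JVM features the suite targets.
import java.util.stream.LongStream;

public final class MiniWorkload {
    public static void main(String[] args) {
        long primes = LongStream.rangeClosed(2, 2_000_000)
                .parallel()                     // fork/join-based parallelism
                .filter(MiniWorkload::isPrime)  // method reference -> invokedynamic
                .count();
        System.out.println("primes: " + primes);
    }

    private static boolean isPrime(long n) {
        for (long d = 2; d * d <= n; d++) {
            if (n % d == 0) {
                return false;
            }
        }
        return true;
    }
}
```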