Listing by author "Benz, Manuel"
1 - 2 of 2
- Conference Paper: Heaps'n Leaks: How Heap Snapshots Improve Android Taint Analysis (Software Engineering 2021, 2021)
  Benz, Manuel; Kristensen, Erik Krogh; Luo, Linghui; Borges Jr., Nataniel P.; Bodden, Eric; Zeller, Andreas
  The assessment of information flows is an essential part of analyzing Android apps, and is frequently supported by static taint analysis. Its precision, however, can suffer from the analysis not being able to precisely determine what elements a pointer can (and cannot) point to. Recent advances in static analysis suggest that incorporating dynamic heap snapshots, taken at one point at runtime, can significantly improve general static analysis. In this paper, we investigate to what extent this also holds for taint analysis, and how various design decisions, such as when and how many snapshots are collected during execution, and how exactly they are used, impact soundness and precision. We have extended FlowDroid to incorporate heap snapshots, yielding our prototype Heapster, and evaluated it on DroidMacroBench, a novel benchmark comprising real-world Android apps that we also make available as an artifact. The results show (1) the use of heap snapshots lowers analysis time and memory consumption while increasing precision; (2) a very good trade-off between precision and recall is achieved by a mixed mode in which the analysis falls back to static points-to relations for objects for which no dynamic data was recorded; and (3) while a single heap snapshot (ideally taken at the end of the execution) suffices to improve performance and precision, a better trade-off can be obtained by using multiple snapshots.
- Conference Paper: TaintBench: Automatic Real-World Malware Benchmarking of Android Taint Analyses (Software Engineering 2022, 2022)
  Luo, Linghui; Pauck, Felix; Piskachev, Goran; Benz, Manuel; Pashchenko, Ivan; Mory, Martin; Bodden, Eric; Hermann, Ben; Massacci, Fabio
  Due to the lack of established real-world benchmark suites for static taint analyses of Android applications, evaluations of these analyses are often restricted and hard to compare. Even in evaluations that do use real-world applications, details about the ground truth in those apps are rarely documented, which makes it difficult to compare and reproduce the results. Our recent study fills this gap. It first defines a set of sensible construction criteria for such a benchmark suite. It further proposes the TaintBench benchmark suite, designed to fulfil these construction criteria. Along with the suite, this paper introduces the TaintBench framework, which allows tool-assisted benchmark suite construction, evaluation and inspection. Our experiments using TaintBench reveal new insights into popular Android taint analysis tools.
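The "mixed mode" described in the first entry, where the analysis prefers points-to sets observed in heap snapshots and falls back to the static over-approximation otherwise, can be illustrated with a minimal sketch. All names below are hypothetical and do not reflect Heapster's actual API; the point is only the lookup-with-fallback structure:

```python
# Illustrative sketch of mixed-mode points-to resolution (hypothetical
# names, not Heapster's real interface): dynamic data from heap
# snapshots is preferred; for variables the snapshots never recorded,
# the analysis falls back to the static points-to relation.

def resolve_points_to(var, dynamic_pts, static_pts):
    """Return the points-to set for `var`, preferring snapshot data."""
    if var in dynamic_pts:
        # Observed at runtime: typically a smaller, more precise set.
        return dynamic_pts[var]
    # No dynamic data recorded: use the static over-approximation.
    return static_pts.get(var, set())

# Example: `x` appeared in a snapshot, `y` did not.
dynamic_pts = {"x": {"obj1"}}
static_pts = {"x": {"obj1", "obj2"}, "y": {"obj3", "obj4"}}

print(resolve_points_to("x", dynamic_pts, static_pts))  # {'obj1'}
print(sorted(resolve_points_to("y", dynamic_pts, static_pts)))  # ['obj3', 'obj4']
```

This mirrors the trade-off the abstract reports: dynamic sets increase precision where coverage exists, while the static fallback preserves recall for unobserved objects.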