Listing by author "Pauck, Felix"
1 - 8 of 8
- Journal article: Cooperative Android App Analysis (Softwaretechnik-Trends Band 44, Heft 2, 2024). Pauck, Felix.
  In this summary, the three main contributions of the thesis "Cooperative Android App Analysis" are presented. The first contribution proposes the cooperative analysis approach. The centerpiece of this approach is the AQL (Android App Analysis Query Language), a domain-specific query language. It allows formulating (AQL-)queries in order to interact with arbitrary analysis tools. As their counterpart, AQL-Answers represent any kind of analysis result in a universal yet well-structured form (see the sketch below). The second contribution uses the AQL to define reproducible benchmarks on which analysis tools can be evaluated automatically. Various benchmarks are then used in the third contribution to conduct a thorough evaluation of 13 Android taint analysis tools. Note that, in the context of the thesis, the cooperative analysis implementation is tailored to Android taint analysis; the concept, however, can be applied to any kind of analysis.
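  Conceptually, the query/answer pairing can be pictured as one uniform interface that every participating tool implements. The Java sketch below is purely illustrative; the names Query, Answer, AnalysisTool and CooperativeAnalysis are hypothetical and do not reflect the actual AQL syntax or the AQL-System API.

  ```java
  // Hypothetical sketch of the cooperative query/answer idea; not the real AQL-System API.
  import java.util.List;

  // A query asks an arbitrary analysis tool a question about an app,
  // e.g. "which taint flows exist in example.apk?".
  record Query(String subject, String appPath) {}

  // An answer embodies any kind of analysis result in one uniform structure,
  // so that results of different tools can be combined and compared.
  record Answer(List<String> results) {}

  // Every participating tool is wrapped behind the same interface.
  interface AnalysisTool {
      Answer run(Query query);
  }

  class CooperativeAnalysis {
      // Ask several tools the same question and merge their answers.
      static Answer askAll(List<AnalysisTool> tools, Query query) {
          List<String> merged = tools.stream()
                  .flatMap(tool -> tool.run(query).results().stream())
                  .distinct()
                  .toList();
          return new Answer(merged);
      }
  }
  ```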
- Conference paper: Cooperative Android App Analysis with CoDiDroid (Software Engineering 2021, 2021). Pauck, Felix; Wehrheim, Heike.
  Novel Android app analysis tools as well as improved versions of available tools are frequently proposed. These proposed tools often tackle a specific single issue that cannot be handled with existing tools. Consequently, the best possible analysis should exploit the advantages of each and every tool. With CoDiDroid we present an analysis framework that allows combining analysis tools such that the best of each tool is used for a more comprehensive and more precise cooperative analysis. Our experimental results show that CoDiDroid indeed allows setting up cooperative analyses that are beneficial with respect to effectiveness, accuracy and scalability.
- Conference paper: GenBenchDroid: Fuzzing Android Taint Analysis Benchmarks (Software Engineering 2023, 2023). Schott, Stefan; Pauck, Felix.
  The conventional approach to assessing the performance of Android taint analysis tools consists of applying the tool to already existing benchmarks and calculating its performance on the contained benchmark cases. Creating and maintaining a benchmark requires a lot of effort, since it needs to comprise various analysis challenges, and since each benchmark case needs a well-documented ground truth; otherwise one cannot know whether a tool's analysis is accurate. This effort is further increased by the frequently changing Android API. All these factors lead to the same, usually manually created, benchmarks being reused over and over again. As a consequence, analysis tools are often over-adapted to these benchmarks. To overcome these issues we propose the concept of benchmark fuzzing, which allows the generation of previously unknown and unique benchmarks, alongside their ground truths, at evaluation time (an example of such a case is sketched below). We implement this approach in our tool GenBenchDroid and additionally show that we are able to find analysis faults that remain uncovered when solely relying on the conventional benchmarking approach.
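  A generated benchmark case can be pictured as a minimal Android-style app whose ground truth is documented alongside the code. The class below is a hypothetical illustration of such a case (a device-identifier source flowing into a log sink); it is not actual GenBenchDroid output.

  ```java
  // Hypothetical benchmark case with a documented ground truth; not GenBenchDroid output.
  import android.app.Activity;
  import android.os.Bundle;
  import android.telephony.TelephonyManager;
  import android.util.Log;

  public class GeneratedCase001 extends Activity {
      @Override
      protected void onCreate(Bundle savedInstanceState) {
          super.onCreate(savedInstanceState);
          TelephonyManager tm =
                  (TelephonyManager) getSystemService(TELEPHONY_SERVICE);
          String deviceId = tm.getDeviceId();  // SOURCE: sensitive device identifier
          String message = "id: " + deviceId;  // taint propagates through concatenation
          Log.i("LEAK", message);              // SINK: the ground truth records exactly one
                                               // expected flow, from getDeviceId() to Log.i()
      }
  }
  ```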
- Conference paper: Jicer: Slicing Android Apps for Cooperative Analysis (Software Engineering 2023, 2023). Pauck, Felix; Wehrheim, Heike.
  Slicing allows identifying which program parts influence or are influenced by a certain statement of a program. Hence, if we know which statement potentially causes an issue, we can slice accordingly and only inspect the slice while debugging (a small example follows below). With Jicer, we proposed a slicer that can be used in a different context, namely cooperative Android app analysis. In combination with taint analysis tools, we employed Jicer to obtain more accurate results.
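  To make the slicing idea concrete, the following hand-made Java example (not produced by Jicer) marks in comments which statements belong to a backward slice with respect to the final print statement.

  ```java
  // Hand-made illustration of a backward slice; not Jicer output.
  public class SliceExample {
      public static void main(String[] args) {
          int a = 2;                      // in slice: 'a' flows into 'sum'
          int b = 3;                      // in slice: 'b' flows into 'sum'
          int unrelated = 42;             // NOT in slice: never affects 'sum'
          int sum = a + b;                // in slice: defines the printed value
          System.out.println(unrelated);  // NOT in slice w.r.t. the criterion below
          System.out.println(sum);        // slicing criterion: which statements
                                          // influence this output?
      }
  }
  ```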
- Text document: Kooperative Android App Analyse (Ausgezeichnete Informatikdissertationen 2023 (Band 24), 2024). Pauck, Felix.
  This summary of the dissertation of the same name presents the concept of cooperative analyses. Already existing analyses are combined in order to bring their complementary strengths together profitably. How arbitrary analyses can be assessed is explained in the context of automated and reproducible benchmarks. Finally, exactly these benchmarks are used to evaluate stand-alone and cooperative analyses. It is shown that cooperative analyses deliver more accurate results than their stand-alone counterparts.
- Conference paper: Reproducing Taint-Analysis Results with ReproDroid (Software Engineering 2020, 2020). Pauck, Felix; Bodden, Eric; Wehrheim, Heike.
  More and more Android taint-analysis tools appear each year. Any paper proposing such a tool typically comes with an in-depth evaluation of its supported features, its accuracy and its applicability to real-world apps. Although the authors spend a lot of effort on these evaluations, comparability is often hindered since the description of the experimental targets is usually limited. To conduct a comparable, automatic and unbiased evaluation of different analysis tools, we propose the framework ReproDroid. The framework enables us to precisely declare our evaluation targets; as a consequence, we refine three well-known benchmarks: DroidBench, ICCBench and DIALDroidBench. Furthermore, we instantiate this framework for six prominent taint-analysis tools, namely Amandroid, DIALDroid, DidFail, DroidSafe, FlowDroid and IccTA. Finally, we use these instances to automatically check whether the promises commonly made in the associated papers are kept.
- Conference paper: TaintBench: Automatic Real-World Malware Benchmarking of Android Taint Analyses (Software Engineering 2022, 2022). Luo, Linghui; Pauck, Felix; Piskachev, Goran; Benz, Manuel; Pashchenko, Ivan; Mory, Martin; Bodden, Eric; Hermann, Ben; Massacci, Fabio.
  Due to the lack of established real-world benchmark suites for static taint analyses of Android applications, evaluations of these analyses are often restricted and hard to compare. Even in evaluations that do use real-world applications, details about the ground truth in those apps are rarely documented, which makes it difficult to compare and reproduce the results. Our recent study fills this gap. It first defines a set of sensible construction criteria for such a benchmark suite. It further proposes the TaintBench benchmark suite, designed to fulfil these construction criteria. Along with the suite, the paper introduces the TaintBench framework, which allows tool-assisted benchmark suite construction, evaluation and inspection. Our experiments using TaintBench reveal new insights into popular Android taint analysis tools.
- Conference paper: Variable Misuse Detection: Software Developers versus Neural Bug Detectors (Software Engineering 2023, 2023). Richter, Cedric; Haltermann, Jan; Jakobs, Marie-Christine; Pauck, Felix; Schott, Stefan; Wehrheim, Heike.
  Finding and fixing software bugs is a central part of software development. Developers are therefore often confronted with the task of identifying whether a code snippet contains a bug and where it is located. Recently, data-driven approaches have been employed to automate this process. These so-called neural bug detectors are trained on millions of buggy and correct code snippets to learn the task of bug detection. This raises the question of how the performance of neural bug detectors compares to that of software developers. As a first step, we study this question in the context of variable misuse bugs (an example of such a bug is sketched below). To this end, we performed a study with over 100 software developers and two state-of-the-art approaches for neural bug detection. Our study shows that software developers are on average slightly better than neural bug detectors, even though the bug detectors are trained specifically for this task. In addition, we identified several bottlenecks in existing neural bug detectors that could be mitigated in the future to improve their bug detection performance.
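  As an illustration of this bug class, the following hand-made Java method (not taken from the study's data set) uses one in-scope variable where another was intended; spotting exactly this kind of mistake is the task given to both developers and neural bug detectors.

  ```java
  // Hand-made illustration of a variable misuse bug; not taken from the study's data set.
  public class VariableMisuse {
      // Intended to clamp 'value' into the range [min, max].
      static int clamp(int value, int min, int max) {
          if (value < min) {
              return min;
          }
          if (value > max) {
              return min;  // BUG: should return 'max'; 'min' is a plausible but wrong
                           // in-scope variable, exactly what misuse detectors look for
          }
          return value;
      }

      public static void main(String[] args) {
          System.out.println(clamp(7, 0, 5)); // prints 0 because of the misuse; expected 5
      }
  }
  ```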