Listing by author "Schott, Stefan"
1 - 2 of 2
- Conference paper: GenBenchDroid: Fuzzing Android Taint Analysis Benchmarks (Software Engineering 2023, 2023). Schott, Stefan; Pauck, Felix.
  The conventional approach to assessing the performance of Android taint analysis tools is to apply the tool to existing benchmarks and to calculate its performance on the contained benchmark cases. Creating and maintaining a benchmark requires considerable effort, since it needs to comprise various analysis challenges and each benchmark case needs a well-documented ground truth; otherwise one cannot know whether a tool's analysis is accurate. This effort is further increased by the frequently changing Android API. All these factors lead to the same, usually manually created, benchmarks being reused over and over again. As a consequence, analysis tools are often over-adapted to these benchmarks. To overcome these issues we propose the concept of benchmark fuzzing, which allows the generation of previously unknown and unique benchmarks, alongside their ground truths, at evaluation time. We implement this approach in our tool GenBenchDroid and additionally show that we are able to find analysis faults that remain uncovered when relying solely on the conventional benchmarking approach.
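The core idea of benchmark fuzzing can be sketched in a few lines: randomly compose a benchmark case from code fragments whose taint behavior is known, so the ground truth falls out by construction. The fragment names and the composition scheme below are illustrative assumptions, not GenBenchDroid's actual implementation.

```python
import random

# Hypothetical Java-like fragments (assumptions for illustration only):
SOURCE = 'String secret = getDeviceId();'   # taint source
SANITIZER = 'secret = "";'                  # removes the taint
SINK = 'sendTextMessage(secret);'           # taint sink

def fuzz_benchmark_case(rng: random.Random) -> tuple[str, bool]:
    """Compose a random benchmark case and its ground truth.

    Returns the generated code and whether a taint leak is expected:
    the leak exists exactly when no sanitizer was inserted between
    source and sink, so the ground truth is known by construction.
    """
    use_sanitizer = rng.random() < 0.5
    lines = [SOURCE]
    if use_sanitizer:
        lines.append(SANITIZER)
    lines.append(SINK)
    return "\n".join(lines), not use_sanitizer

case, leak_expected = fuzz_benchmark_case(random.Random(42))
```

Because each generated case is fresh at evaluation time, an analysis tool cannot have been tuned to it in advance, which is what lets this approach expose faults that fixed benchmarks miss.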
- Conference paper: Variable Misuse Detection: Software Developers versus Neural Bug Detectors (Software Engineering 2023, 2023). Richter, Cedric; Haltermann, Jan; Jakobs, Marie-Christine; Pauck, Felix; Schott, Stefan; Wehrheim, Heike.
  Finding and fixing software bugs is a central part of software development. Developers are therefore often confronted with the task of identifying whether a code snippet contains a bug and where it is located. Recently, data-driven approaches have been employed to automate this process. These so-called neural bug detectors are trained on millions of buggy and correct code snippets to learn the task of bug detection. This raises the question of how the performance of neural bug detectors compares to that of software developers. As a first step, we study this question in the context of variable misuse bugs. To this end, we performed a study with over 100 software developers and two state-of-the-art approaches for neural bug detection. Our study shows that software developers are on average slightly better than neural bug detectors, even though the bug detectors are trained specifically for this task. In addition, we identified several bottlenecks in existing neural bug detectors which could be mitigated in the future to improve their bug detection performance.
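A variable misuse bug, the bug class studied above, is the use of one in-scope variable where another was intended. The snippet below is a made-up example (not taken from the study) showing a buggy and a corrected version of the same function.

```python
def clip_buggy(value, low, high):
    """Clamp value to [low, high] -- with a variable misuse bug."""
    if value < low:
        return low
    if value > high:
        return low   # BUG: 'low' used where 'high' was intended
    return value

def clip_fixed(value, low, high):
    """Correct version: clamp value to [low, high]."""
    if value < low:
        return low
    if value > high:
        return high
    return value
```

Both versions type-check and run without error, which is why this bug class is hard to spot for developers and detectors alike: only the choice of variable distinguishes the buggy program from the correct one.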