Listing by keyword "Fuzzing"
1 - 7 of 7
- Conference Paper: Badger: Complexity Analysis with Fuzzing and Symbolic Execution (Software Engineering and Software Management 2019, 2019). Noller, Yannic; Kersten, Rody; Pasareanu, Corina.
  In this work, we report on our recent research results on “Badger: Complexity Analysis with Fuzzing and Symbolic Execution”, which was published in the proceedings of the 27th ACM SIGSOFT International Symposium on Software Testing and Analysis [NKP18]. Badger employs a hybrid software analysis technique that combines fuzzing and symbolic execution to find performance bottlenecks in software. Our primary goal is to use Badger to discover vulnerabilities related to the worst-case time or space complexity of an application. To this end, we use a cost-guided fuzzing approach that produces inputs which increase code coverage but also maximize a resource-related cost function, such as execution time or memory usage. We combine this fuzzing technique with a customized symbolic execution, likewise guided by heuristics that aim to increase the same cost. Experimental evaluation shows that this hybrid approach lets us use the strengths of both techniques and overcome their individual weaknesses.
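The cost-guided fuzzing loop described in the Badger abstract can be sketched in a few lines. This is a toy illustration, not Badger's implementation: `program` is a hypothetical harness that returns a `(coverage, cost)` pair for an input, and the only mutation is a single random byte flip.

```python
import random

def mutate(data, rng):
    """Single random byte flip; real fuzzers use many mutation operators."""
    i = rng.randrange(len(data))
    return data[:i] + bytes([rng.randrange(256)]) + data[i + 1:]

def cost_guided_fuzz(program, seed, rounds=200):
    """Toy cost-guided fuzzing loop: keep a mutated input whenever it
    covers new branches or raises the observed cost (e.g., running time)."""
    rng = random.Random(0)
    coverage, best_cost = program(seed)
    corpus, seen = [seed], set(coverage)
    for _ in range(rounds):
        child = mutate(rng.choice(corpus), rng)
        coverage, cost = program(child)
        if set(coverage) - seen or cost > best_cost:
            corpus.append(child)
            seen |= set(coverage)
            best_cost = max(best_cost, cost)
    return corpus, best_cost
```

With a made-up target whose cost is the byte sum of the input, the loop steadily climbs toward more expensive inputs, which is the essence of worst-case complexity fuzzing.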
- Text Document: Concolic-Fuzzing of JavaScript Programs using GraalVM and Truffle (SKILL 2021, 2021). Delhougne, Robert.
  The scripting language JavaScript has established itself as a central component of the modern internet. However, the language’s dynamic execution model limits support for source-code analysis, leaving developers without essential tools to maintain safety and security requirements. This paper describes a concolic fuzzer based on the GraalVM for automatically testing JavaScript programs. The fuzzer shows promising results in both code coverage and runtime evaluations and provides developers with additional features such as special analysis targets.
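The concolic idea, a concrete execution steering symbolic path exploration, can be illustrated without GraalVM or Truffle. In this sketch, invented for illustration only, branch decisions are recorded during a concrete run, and new inputs that flip one decision are found by brute-force search over a small domain where a real engine would use an SMT solver.

```python
def program(x):
    """Hypothetical program under test; returns its result plus the
    branch decisions taken, mimicking concolic instrumentation."""
    path = []
    if x > 100:
        path.append(("x>100", True))
        if x % 2 == 0:
            path.append(("x even", True))
            return "deep", path
        path.append(("x even", False))
        return "odd", path
    path.append(("x>100", False))
    return "shallow", path

def concolic_explore(program, seed=0, domain=range(-200, 200)):
    """Toy concolic loop: run concretely, then search for an input that
    follows the same path prefix but flips one branch decision."""
    seen, queue, inputs = set(), [seed], []
    while queue:
        x = queue.pop()
        _, path = program(x)
        key = tuple(path)
        if key in seen:
            continue
        seen.add(key)
        inputs.append(x)
        for i in range(len(path)):
            # negate the i-th decision and look for a witness input
            target = path[:i] + [(path[i][0], not path[i][1])]
            for cand in domain:
                if program(cand)[1][:i + 1] == target:
                    queue.append(cand)
                    break
    return inputs
```

Starting from the single seed 0, the loop discovers one input per feasible path of the toy program.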
- Text Document: Differential Testing: How to find differences between programs that mostly behave identically? (GI SICHERHEIT 2022, 2022). Möller, Jonas.
  Differences between programs based on the same specification may lead to vulnerabilities that cannot be detected by conventional testing. Differential testing finds these discrepancies by executing multiple programs on the same input and comparing their outputs. In this work, we discuss the fundamentals of differential testing and outline a general scheme for differential testing methods, which we use to categorize and analyze current research approaches. Based on this, we formulate our own research questions, which focus on how machine learning can aid differential testing.
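The general scheme named in this abstract, executing multiple programs on the same input and comparing their outputs, fits in a few lines. The two clamp implementations below are made-up examples of the same informal specification; the second wraps instead of clamping, which the differential test exposes.

```python
import random

def differential_test(impl_a, impl_b, gen_input, trials=1000, seed=0):
    """Run two implementations of the same specification on identical
    inputs and collect every disagreement for later inspection."""
    rng = random.Random(seed)
    mismatches = []
    for _ in range(trials):
        x = gen_input(rng)
        a, b = impl_a(x), impl_b(x)
        if a != b:
            mismatches.append((x, a, b))
    return mismatches

# Two supposed implementations of "clamp an integer into 0..255".
clamp_ref = lambda x: max(0, min(255, x))
clamp_buggy = lambda x: x & 255          # wraps around instead of clamping

mismatches = differential_test(clamp_ref, clamp_buggy,
                               lambda rng: rng.randint(-500, 500))
```

Every collected triple `(input, output_a, output_b)` is a concrete witness of a behavioral difference, the raw material that the surveyed methods then triage.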
- Conference Paper: DifFuzz: Differential Fuzzing for Side-Channel Analysis (Software Engineering 2020, 2020). Nilizadeh, Shirin; Noller, Yannic.
  This summary is based on our research results on “DifFuzz: Differential Fuzzing for Side-Channel Analysis”, which was published in the proceedings of the 41st International Conference on Software Engineering. Side-channel analysis investigates the risk that a potential attacker can infer secret information through observations of the system, such as execution time or memory consumption. Side-channel vulnerabilities therefore represent security risks that can cause serious damage and need to be identified and repaired. DifFuzz applies differential fuzzing to identify inputs that trigger such vulnerabilities. Our fuzzing approach analyzes multiple program executions, which vary in their secret information, and uses resource-guided heuristics to identify inputs that maximize the observable cost difference between these executions. Our evaluation shows that such a dynamic analysis approach can find the same side-channel vulnerabilities as state-of-the-art static analysis techniques, and even more, since it does not rely on models for its analysis. Additionally, the advantage of DifFuzz over other techniques is not only that it can generate inputs that expose a vulnerability, but that the resulting cost difference can also be used to estimate the severity of an identified vulnerability. This enables comparing repaired versions of an application.
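The core measurement behind such a dynamic side-channel analysis can be sketched with a classic textbook example (not taken from DifFuzz itself): an early-exit string comparison whose loop count, our stand-in for execution time, depends on the secret.

```python
def unsafe_compare(secret, guess):
    """Early-exit comparison; the iteration count leaks how many
    leading characters of the guess match the secret."""
    cost = 0
    for s, g in zip(secret, guess):
        cost += 1
        if s != g:
            return False, cost
    return len(secret) == len(guess), cost

def cost_difference(public_guess, secret1, secret2):
    """DifFuzz-style observation (simplified): run the program on the
    same public input under two different secrets and report the gap
    in observable cost. A large gap signals a side channel."""
    _, c1 = unsafe_compare(secret1, public_guess)
    _, c2 = unsafe_compare(secret2, public_guess)
    return abs(c1 - c2)
```

A fuzzer in this setting searches for the public input that maximizes this difference; here the guess "aaaa" distinguishes the secrets "aaaz" and "zzzz" by a cost gap of 3, and the size of the gap doubles as a severity estimate.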
- Conference Paper: GenBenchDroid: Fuzzing Android Taint Analysis Benchmarks (Software Engineering 2023, 2023). Schott, Stefan; Pauck, Felix.
  The conventional approach to assessing the performance of Android taint analysis tools is to apply the tool to existing benchmarks and calculate its performance on the contained benchmark cases. Creating and maintaining a benchmark requires a lot of effort, since it needs to comprise various analysis challenges and each benchmark case needs a well-documented ground truth; otherwise one cannot know whether a tool’s analysis is accurate. This effort is further increased by the frequently changing Android API. All these factors lead to the same, usually manually created, benchmarks being reused over and over again. As a consequence, analysis tools are often over-adapted to these benchmarks. To overcome these issues, we propose the concept of benchmark fuzzing, which allows the generation of previously unknown and unique benchmarks, along with their ground truths, at evaluation time. We implement this approach in our tool GenBenchDroid and additionally show that we are able to find analysis faults that remain undiscovered when relying solely on the conventional benchmarking approach.
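The idea of generating a benchmark case together with its ground truth at evaluation time can be illustrated with a toy generator. Everything here (the source, sink, and sanitizer names) is invented for illustration; GenBenchDroid assembles real Android apps, whereas this sketch only assembles code snippets as strings.

```python
import random

def generate_benchmark(rng):
    """Toy benchmark fuzzer: randomly assemble a taint-flow test case
    and record its ground truth alongside it."""
    sanitized = rng.random() < 0.5
    lines = ["data = read_device_id()      # taint source"]
    if sanitized:
        lines.append("data = anonymize(data)       # sanitizer breaks the flow")
    lines.append("send_over_network(data)      # taint sink")
    # Ground truth: a leak exists iff no sanitizer was inserted.
    return "\n".join(lines), not sanitized

rng = random.Random(42)
cases = [generate_benchmark(rng) for _ in range(10)]
```

Because the generator decides the structure, the expected analysis result is known by construction, so a fresh, unseen benchmark with a reliable ground truth is available at every evaluation run.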
- Conference Paper: MoFuzz: A Fuzzer Suite for Testing Model-Driven Software Engineering Tools (Software Engineering 2021, 2021). Nguyen, Hoang Lam; Nassar, Nebras; Kehrer, Timo; Grunske, Lars.
  Fuzzing, or fuzz testing, is an established technique that aims to discover unexpected program behavior (e.g., bugs, vulnerabilities, or crashes) by feeding automatically generated data into a program under test. However, the application of fuzzing to Model-Driven Software Engineering (MDSE) tools is still limited because existing fuzzers struggle to provide structured, well-typed inputs, namely models that conform to the typing and consistency constraints induced by a given meta-model and the underlying modeling framework. We present three approaches for fuzzing MDSE tools: a graph-grammar-based fuzzer and two variants of a coverage-guided mutation-based fuzzer working with different sets of model mutation operators. Our evaluation on a set of real-world MDSE tools shows that our approaches can outperform both standard fuzzers and model generators with respect to their fuzzing capabilities. Moreover, we found that each approach has its own strengths and weaknesses in terms of code coverage and fault-finding capability, so they complement each other and together form a fuzzer suite for testing MDSE tools.
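The obstacle named above, producing inputs that remain well-formed models, is what separates model fuzzing from byte-level fuzzing. Below is a minimal sketch of typed model mutation operators on a made-up graph model; these are illustrative stand-ins, not the paper's actual operators or EMF models.

```python
import random

def mutate_model(model, rng):
    """Apply one typed mutation operator; every operator preserves the
    consistency constraint that each edge connects existing nodes."""
    model = {"nodes": list(model["nodes"]), "edges": list(model["edges"])}
    op = rng.choice(["add_node", "add_edge", "delete_edge"])
    if op == "add_node":
        model["nodes"].append(max(model["nodes"], default=-1) + 1)
    elif op == "add_edge" and len(model["nodes"]) >= 2:
        model["edges"].append(tuple(rng.sample(model["nodes"], 2)))
    elif op == "delete_edge" and model["edges"]:
        model["edges"].pop(rng.randrange(len(model["edges"])))
    return model
```

A coverage-guided loop would feed each mutated model to the MDSE tool under test and keep those models that reach new code, exactly as a conventional fuzzer keeps coverage-increasing byte inputs.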
- Text Document: Sicherheit medizintechnischer Protokolle im Krankenhaus (GI SICHERHEIT 2022, 2022). Saatjohann, Christoph; Ising, Fabian; Gierlings, Matthias; Noss, Dominik; Schimmler, Sascha; Klemm, Alexander; Grundmann, Leif; Frosch, Tilman; Schinzel, Sebastian.
  Medical institutions have repeatedly been affected by cyber attacks in recent years. Even though these attacks currently focus on the institutions’ office IT infrastructure, medical systems and communication protocols form a further, largely overlooked attack surface. In this paper, we analyze the IT security of the widely used medical communication protocols DICOM and HL7 as well as of their protocol implementations. We present the results of a security analysis of the DICOM and HL7 standards, a fuzzer “MedFUZZ” for these protocols, and a vulnerability scanner “MedVAS” that can discover vulnerabilities in productive medical environments.