Title: MoFuzz: A Fuzzer Suite for Testing Model-Driven Software Engineering Tools
Authors: Nguyen, Hoang Lam; Nassar, Nebras; Kehrer, Timo; Grunske, Lars
Editors: Koziolek, Anne; Schaefer, Ina; Seidl, Christoph
Date available: 2020-12-17
Date issued: 2021
Language: en
Type: Text/ConferencePaper
ISBN: 978-3-88579-704-3
ISSN: 1617-5468
DOI: 10.18420/SE2021_29
URI: https://dl.gi.de/handle/20.500.12116/34524
Keywords: Model-Driven Software Engineering; Modeling Tools; Fuzzing; Automated Model Generation; Eclipse Modeling Framework

Abstract: Fuzzing, or fuzz testing, is an established technique that aims to discover unexpected program behavior (e.g., bugs, vulnerabilities, or crashes) by feeding automatically generated data into a program under test. However, the application of fuzzing to Model-Driven Software Engineering (MDSE) tools is still limited because existing fuzzers struggle to provide structured, well-typed inputs, namely models that conform to the typing and consistency constraints induced by a given meta-model and the underlying modeling framework. We present three approaches for fuzzing MDSE tools: a graph-grammar-based fuzzer and two variants of a coverage-guided mutation-based fuzzer, each working with a different set of model mutation operators. Our evaluation on a set of real-world MDSE tools shows that our approaches can outperform both standard fuzzers and model generators with respect to their fuzzing capabilities. Moreover, each of our approaches comes with its own strengths and weaknesses in terms of code coverage and fault-finding capability; they thus complement each other and together form a fuzzer suite for testing MDSE tools.
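
The coverage-guided mutation-based variant described in the abstract follows the classic feedback-driven fuzzing loop: pick a model from a corpus, apply a model mutation operator, execute the tool under test, and keep mutants that reach new code. The following Python sketch illustrates that loop under stated assumptions; MoFuzz itself targets EMF models with typed mutation operators, and the dict-based model representation, the operators, and run_tool below are hypothetical placeholders, not the paper's implementation.

    import copy
    import random

    # Illustrative model mutation operators over a toy dict-based model
    # (stand-ins for typed EMF mutation operators; assumption, not MoFuzz's API).
    def add_node(model):
        m = copy.deepcopy(model)
        next_id = max((n["id"] for n in m["nodes"]), default=0) + 1
        m["nodes"].append({"id": next_id})
        return m

    def delete_node(model):
        m = copy.deepcopy(model)
        if m["nodes"]:
            m["nodes"].pop(random.randrange(len(m["nodes"])))
        return m

    MUTATION_OPERATORS = [add_node, delete_node]

    def fuzz(seed_models, run_tool, iterations=1000):
        """Coverage-guided mutation-based fuzzing loop.

        run_tool(model) is assumed to execute the instrumented tool under
        test and return (covered_branches: set, crashed: bool).
        seed_models must be non-empty.
        """
        corpus = list(seed_models)   # models that exercised new behavior
        seen = set()                 # union of all coverage observed so far
        crashes = []
        for _ in range(iterations):
            parent = random.choice(corpus)
            mutant = random.choice(MUTATION_OPERATORS)(parent)
            coverage, crashed = run_tool(mutant)
            if crashed:
                crashes.append(mutant)
            if not coverage <= seen:   # mutant reached new branches:
                seen |= coverage       # record them and keep the mutant
                corpus.append(mutant)
        return crashes

A concrete run_tool would serialize the mutated model, invoke the MDSE tool under coverage instrumentation, and report the covered branches plus whether the tool crashed; the graph-grammar-based fuzzer mentioned in the abstract would replace the mutation step with grammar-driven model generation.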