Listing by Author "Havrikov, Nikolas"
1 - 3 of 3
- Conference Paper: **Generating Tests that Cover Input Structure** (Software Engineering 2021, 2021). Pereira Borges Jr., Nataniel; Havrikov, Nikolas; Zeller, Andreas.
  To systematically test a program, one needs good inputs: inputs that are valid, so that they are not rejected by the program, and inputs that cover as much of the input space as possible in order to reach a maximum of functionality. We present recent techniques to systematically cover input structure. Our k-path algorithm for grammar production [HZ19] systematically covers syntactic elements of the input as well as their combinations (see the grammar-coverage sketch after this listing). We show how to learn such input structures from graphical user interfaces, notably their interaction language [DBZ19]. Finally, we demonstrate that knowledge bases such as DBPedia can be a reliable source of semantically coherent inputs [Wa20]. All these techniques result in significantly higher code coverage than the state of the art.
- Conference Paper: **Learning Circumstances of Software Failures** (Software Engineering 2021, 2021). Gopinath, Rahul; Havrikov, Nikolas; Kampmann, Alexander; Soremekun, Ezekiel; Zeller, Andreas.
  A program fails. Under which circumstances does the failure occur? Starting with a single failure-inducing input ("The input ((4)) fails") and an input grammar, this talk presents two techniques that use systematic tests to automatically determine the circumstances under which the failure occurs. The DDSET algorithm [Go20] generalizes the input to an _abstract failure-inducing input_ that contains both (concrete) terminal symbols and (abstract) nonterminal symbols from the grammar, for instance "(())", which represents any expression in double parentheses (see the generalization sketch after this listing). The ALHAZEN technique [Ka20] takes this even further, using decision trees to learn input properties such as length or numerical values associated with failures: "The error occurs as soon as there are two parentheses or more." Such abstractions can be used as debugging diagnostics, characterizing the circumstances under which a failure occurs, and as producers of additional failure-inducing tests to help design and validate fixes and repair candidates. Both have the potential to significantly boost the speed and quality of software debugging.
- Conference Paper: **Probabilistic Grammar-based Test Generation** (Software Engineering 2021, 2021). Soremekun, Ezekiel; Pavese, Esteban; Havrikov, Nikolas; Grunske, Lars; Zeller, Andreas.
  Given a program that has been tested on some sample input(s), what does one test next? To further test the program, one needs to construct inputs that cover (new) input features, in a manner that differs from the initial samples. This talk presents an approach that learns from past test inputs to generate new, dissimilar inputs. To achieve this, we present an approach called inputs from hell, which employs probabilistic context-free grammars to learn the distribution of input elements from sample inputs. In this work, we employ probabilistic grammars both as input parsers and as producers. As input parsers, probabilistic grammars let us learn the statistical distribution of input features in sample inputs. As producers, probabilistic grammars ensure that generated inputs are syntactically correct by construction, and they control the distribution of input elements by assigning probabilities to individual production rules. Thus, we create inputs that are dissimilar to the samples by inverting the learned probabilities (see the probabilistic-grammar sketch after this listing). In addition, we generate failure-inducing inputs by learning from inputs that caused failures in the past; this gives us inputs that share similar features and thus also have a high chance of triggering bugs. This approach is useful for bug reproduction and for testing patch completeness.
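
The first entry refers to the k-path algorithm [HZ19], which systematically covers grammar productions and their combinations. The following is a minimal Python sketch of the underlying idea for the simplest case only (k = 1, i.e., covering each individual production at least once); the toy grammar, function names, and depth limit are illustrative assumptions, not the authors' implementation.

```python
import random

# Toy context-free grammar: nonterminals map to lists of expansions;
# each expansion is a sequence of terminals and/or nonterminals.
GRAMMAR = {
    "<expr>":   [["<term>", "+", "<expr>"], ["<term>"]],
    "<term>":   [["<factor>", "*", "<term>"], ["<factor>"]],
    "<factor>": [["(", "<expr>", ")"], ["<digit>"]],
    "<digit>":  [[d] for d in "0123456789"],
}

def produce(symbol, covered, depth=0, max_depth=8):
    """Expand `symbol`, preferring productions that have not been covered yet."""
    if symbol not in GRAMMAR:        # terminal symbol
        return symbol
    expansions = GRAMMAR[symbol]
    uncovered = [i for i in range(len(expansions)) if (symbol, i) not in covered]
    if uncovered and depth < max_depth:
        choice = random.choice(uncovered)
    else:
        # Fall back to the shortest expansion to keep derivations finite.
        choice = min(range(len(expansions)), key=lambda i: len(expansions[i]))
    covered.add((symbol, choice))
    return "".join(produce(s, covered, depth + 1, max_depth)
                   for s in expansions[choice])

# Keep producing inputs until every production has been used at least once.
covered = set()
total_productions = sum(len(alts) for alts in GRAMMAR.values())
while len(covered) < total_productions:
    print(produce("<expr>", covered))
```

The real k-path algorithm also tracks combinations of productions (paths of length k > 1 through the grammar), which this sketch omits.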
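The second entry describes DDSET [Go20], which generalizes a concrete failure-inducing input into an abstract pattern by replacing subtrees of its derivation tree with their nonterminals wherever the failure persists. The sketch below captures this generalization step in simplified form; the tree representation, the `fails` oracle, and the `random_subtree` producer are assumed helpers, and the actual algorithm is considerably more careful about how it gathers evidence.

```python
# Derivation-tree nodes are (symbol, children) pairs; terminals have
# an empty list of children, nonterminal symbols start with "<".

def tree_to_string(tree):
    """Render a derivation tree back into the input string it derives."""
    symbol, children = tree
    return "".join(tree_to_string(c) for c in children) if children else symbol

def nodes(tree, path=()):
    """Yield (path, node) for every nonterminal node in the tree."""
    symbol, children = tree
    if symbol.startswith("<"):
        yield path, tree
    for i, child in enumerate(children or []):
        yield from nodes(child, path + (i,))

def replace(tree, path, subtree):
    """Return a copy of `tree` with the node at `path` swapped for `subtree`."""
    if not path:
        return subtree
    symbol, children = tree
    children = list(children)
    children[path[0]] = replace(children[path[0]], path[1:], subtree)
    return (symbol, children)

def generalize(tree, fails, random_subtree, trials=10):
    """Return the paths of subtrees that can be abstracted to their
    nonterminal: every random replacement still triggers the failure.
    `fails(input_string)` is the test oracle; `random_subtree(symbol)`
    produces a random derivation tree for `symbol` (both assumed here)."""
    abstract = set()
    for path, (symbol, _) in nodes(tree):
        if not path:                 # keep the root concrete
            continue
        if all(fails(tree_to_string(replace(tree, path, random_subtree(symbol))))
               for _ in range(trials)):
            abstract.add(path)
    return abstract
```

Subtrees whose paths end up in `abstract` would be printed as their nonterminal (e.g. "(<expr>)" for "((4))"), yielding the abstract failure-inducing input. ALHAZEN's decision-tree learning of input properties is not covered by this sketch.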
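The third entry builds on probabilistic context-free grammars: production probabilities are learned from samples and then inverted to produce syntactically valid but dissimilar inputs. Below is a minimal Python sketch under the assumption of hard-coded production counts and a simple complement-and-renormalize inversion; the paper's learning and inversion schemes may differ in detail.

```python
import random

# Production counts as they might be learned by parsing sample inputs
# with the grammar (hard-coded here for illustration).
COUNTS = {
    "<expr>":   {("<term>", "+", "<expr>"): 2,   ("<term>",): 18},
    "<term>":   {("<factor>", "*", "<term>"): 1, ("<factor>",): 19},
    "<factor>": {("(", "<expr>", ")"): 3,        ("<digit>",): 17},
    "<digit>":  {(d,): c for d, c in zip("0123456789", [30] + [5] * 9)},
}

def probabilities(counts, invert=False):
    """Turn production counts into probabilities; optionally invert them so
    that rarely seen productions become the most likely ones."""
    probs = {}
    for symbol, rules in counts.items():
        total = sum(rules.values())
        p = {rule: c / total for rule, c in rules.items()}
        if invert:
            # One simple inversion scheme: complement, then renormalize.
            p = {rule: 1 - v for rule, v in p.items()}
            norm = sum(p.values())
            p = {rule: v / norm for rule, v in p.items()}
        probs[symbol] = p
    return probs

def produce(symbol, probs, depth=0, max_depth=10):
    """Expand `symbol`, sampling productions according to `probs`."""
    if symbol not in probs:                 # terminal symbol
        return symbol
    rules = list(probs[symbol])
    weights = [probs[symbol][r] for r in rules]
    if depth >= max_depth:                  # force the shortest rule to terminate
        rule = min(rules, key=len)
    else:
        rule = random.choices(rules, weights=weights)[0]
    return "".join(produce(s, probs, depth + 1, max_depth) for s in rule)

# Sampling from the inverted distribution yields inputs that are
# syntactically valid but dissimilar to the learned samples.
inverted = probabilities(COUNTS, invert=True)
for _ in range(5):
    print(produce("<expr>", inverted))
```

Learning from failure-inducing samples instead, and producing with the learned (non-inverted) probabilities, would give the bug-reproduction use case mentioned in the abstract.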