Title: Understanding Parameters of Deductive Verification: An Empirical Investigation of KeY
Authors: Knüppel, Alexander; Thüm, Thomas; Pardylla, Carsten Immanuel; Schaefer, Ina
Editors: Becker, Steffen; Bogicevic, Ivan; Herzwurm, Georg; Wagner, Stefan
Date: 2019-03-14 (published 2019)
ISBN: 978-3-88579-686-2
ISSN: 1617-5468
DOI: 10.18420/se2019-51
URI: https://dl.gi.de/handle/20.500.12116/20913
Type: Text/Conference Poster
Language: en
Keywords: Deductive Verification; Design by Contract; Formal Methods; Theorem Proving; KeY; Control Parameters; Automated Reasoning

Abstract:
As formal verification of software systems is a complex task comprising many algorithms and heuristics, modern theorem provers offer numerous parameters that users must select to control how a piece of software is verified. Moreover, the number of parameters increases with each new release. One challenge is that default parameters are often insufficient to close proofs automatically and are not optimal in terms of verification effort. The verification phase is thus hardly accessible to non-experts, who typically must follow a time-consuming trial-and-error strategy to choose the right parameters even for trivial pieces of software. To aid users of deductive verification, we apply machine learning techniques to empirically investigate which parameters and combinations thereof impair or improve provability and verification effort. We exemplify our procedure on the deductive verification system KeY 2.6.1 and specified extracts of OpenJDK, and formulate 53 hypotheses of which only three were rejected. We identified parameters that represent a trade-off between high provability and low verification effort, allowing users to prioritize parameter selection for either goal. Our insights give tool builders a better understanding of their control parameters and constitute a stepping stone towards automated deductive verification and better applicability of verification tools for non-experts.
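
Note: The abstract describes relating KeY's strategy parameters to provability and verification effort via machine learning. The following is a minimal illustrative sketch of what such an analysis could look like, not the pipeline used in the paper; the run data is synthetic and the parameter names and values are hypothetical placeholders (loosely mirroring KeY strategy settings), used here only to show how a tree-based model can rank parameter influence on provability.

    # Illustrative sketch (assumption, not the authors' method): fit a tree-based
    # model on per-configuration verification outcomes and inspect feature
    # importances to see which parameter settings matter for provability.
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier

    # Hypothetical results of verification runs: one row per (contract, configuration).
    runs = pd.DataFrame({
        "method_treatment": ["Contract", "Expand", "Contract", "Expand"] * 25,
        "loop_treatment":   ["Invariant", "Expand", "Expand", "Invariant"] * 25,
        "arithmetic":       ["Basic", "DefOps", "DefOps", "Basic"] * 25,
        "proof_closed":     [1, 0, 1, 0] * 25,  # label: was the proof closed automatically?
    })

    X = pd.get_dummies(runs.drop(columns="proof_closed"))  # one-hot encode categorical settings
    y = runs["proof_closed"]

    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

    # Rank parameter values by how strongly they influence provability.
    importance = pd.Series(model.feature_importances_, index=X.columns)
    print(importance.sort_values(ascending=False))

A tree-based model is a natural choice for this kind of exploratory analysis because it handles categorical parameter combinations directly and exposes importance scores, matching the goal of identifying which parameters and combinations thereof impair or improve provability; an analogous regression model on proof steps or time could serve as a proxy for verification effort.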