Authors: Yazdi, Hamed Shariat; Pietsch, Pit; Kehrer, Timo; Kelter, Udo
Editors: Kowalewski, Stefan; Rumpe, Bernhard
Date of publication: 2013
Available in the digital library since: 2018-10-31
ISBN: 978-3-88579-607-7
ISSN: 1617-5468
URI: https://dl.gi.de/handle/20.500.12116/17707
Title: Statistical analysis of changes for synthesizing realistic test models
Type: Text/Conference Paper
Language: en

Abstract: Tools and methods in the context of Model-Driven Engineering have to be evaluated and tested. Unfortunately, adequate test models are scarcely available in many application domains, and available models often lack required properties. Test model generators have recently been proposed to overcome this deficiency. Their basic principle is to synthesize test models by the controlled application of edit operations drawn from a given set of edit operation definitions. If test models are created by randomly selecting edit operations, they become quite unnatural and do not exhibit real-world characteristics; generated sequences of edit operations should rather resemble realistic model evolution. To this end, we have reverse-engineered a carefully selected set of open-source Java projects into class diagrams and computed the differences between subsequent revisions in terms of various edit operations, including generic low-level graph edit operations and high-level edit operations such as model refactorings. Finally, we statistically analyzed the distribution of the frequencies of these edit operations. We checked the goodness of fit of 60 distributions in order to represent the observed statistical properties correctly. Only four distributions were able to adequately describe the observed evolution. These distributions are used to configure our model generator so that it produces more realistic test models.
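The abstract describes fitting candidate probability distributions to observed edit-operation frequencies and selecting the best-fitting ones to configure the generator. A minimal sketch of such a fitting step, assuming Python with NumPy/SciPy, a hand-picked subset of candidate distributions, placeholder count data, and a Kolmogorov-Smirnov goodness-of-fit test; none of these concrete choices are stated in the record:

    # Hypothetical sketch: rank candidate distributions by goodness of fit
    # to per-revision edit-operation counts (data and candidates are placeholders).
    import numpy as np
    from scipy import stats

    # Placeholder data: how often one edit operation occurred per revision pair.
    counts = np.array([3, 7, 2, 12, 5, 9, 4, 15, 6, 8, 3, 11, 5, 7, 10], dtype=float)

    candidates = {
        "exponential": stats.expon,
        "gamma": stats.gamma,
        "lognormal": stats.lognorm,
        "weibull": stats.weibull_min,
    }

    results = []
    for name, dist in candidates.items():
        params = dist.fit(counts)  # maximum-likelihood parameter estimate
        # KS test against the fitted CDF; note that estimating parameters from
        # the same data makes these p-values optimistic.
        ks_stat, p_value = stats.kstest(counts, dist.cdf, args=params)
        results.append((name, ks_stat, p_value))

    # Smaller KS statistic indicates a better fit; the best-fitting distribution
    # would then parameterize the frequency of that edit operation in a generator.
    for name, ks, p in sorted(results, key=lambda r: r[1]):
        print(f"{name:12s} KS={ks:.3f} p={p:.3f}")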