Authors: Kuiter, Elias; Krieter, Sebastian; Sundermann, Chico; Thüm, Thomas; Saake, Gunter
Editors: Engels, Gregor; Hebig, Regina; Tichy, Matthias
Date issued: 2023-01-18
Date available: 2023-01-18
Year: 2023
ISBN: 978-3-88579-726-5
URI: https://dl.gi.de/handle/20.500.12116/40094

Abstract: This work was published at the 37th IEEE/ACM International Conference on Automated Software Engineering (ASE) 2022 [Ku22]. Feature modeling is widely used to systematically model the features of variant-rich software systems and their dependencies. By translating feature models into propositional formulas and analyzing them with solvers, a wide range of automated analyses across all phases of the software development process becomes possible. Most solvers only accept formulas in conjunctive normal form (CNF), so an additional transformation of feature-model formulas is often necessary. However, it is unclear whether this transformation has a noticeable impact on analyses. We compare three transformations for bringing feature-model formulas into CNF. We analyze which transformations can be used to correctly perform feature-model analyses and evaluate three CNF transformation tools on a corpus of 22 real-world feature models. Our empirical evaluation shows that some CNF transformations do not scale to complex feature models or even lead to wrong results for model-counting analyses. Further, the choice of the CNF transformation can substantially influence the performance of subsequent analyses.

Language: en
Keywords: Feature Modeling; Automated Reasoning; Conjunctive Normal Form
Title: Tseitin or not Tseitin? The Impact of CNF Transformations on Feature-Model Analyses
Type: Text/Conference Paper
ISSN: 1617-5468