Listing by author "Siegmund, Norbert"
1 - 10 of 12
- Conference Paper: Accurate Modeling of Performance Histories for Evolving Software Systems (Software Engineering 2021, 2021) Mühlbauer, Stefan; Apel, Sven; Siegmund, Norbert. This work was originally published in the proceedings of the 34th IEEE/ACM International Conference on Automated Software Engineering (ASE 2019). Learning from the history of a software system's performance behavior not only helps to discover and locate performance bugs, but also supports identifying evolutionary performance patterns and general trends. Exhaustive regression testing is usually impractical, because rigorous performance benchmarking requires executing realistic workloads per revision, resulting in long execution times. We devise a novel active revision sampling approach that aims at tracking and understanding a system's performance history by approximating the performance behavior of a software system across all of its revisions. In short, we iteratively sample and measure the performance of specific revisions to learn a performance-evolution model. We select revisions based on how uncertain our model's predictions of their corresponding performance values are. Technically, we use Gaussian Process models, which not only estimate the performance of each revision, but also provide an uncertainty value alongside each estimate. This way, we iteratively improve our model with only a few measurements. Our evaluation with six real-world configurable software systems demonstrates that Gaussian Process models are able to accurately estimate performance-evolution histories with only a few measurements and to reveal interesting behaviors and trends, such as change points.
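The active sampling loop described in this abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the synthetic step-shaped performance history, the hand-rolled squared-exponential Gaussian Process, and all function names are assumptions made for this example.

```python
import numpy as np

def rbf(a, b, length_scale=10.0):
    # Squared-exponential kernel on 1-D revision indices.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def gp_posterior(x_train, y_train, x_query, noise=1e-2):
    # Standard GP regression equations: posterior mean and std at x_query.
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_train, x_query)
    y0 = y_train.mean()                      # center on the empirical mean
    w = np.linalg.solve(K, y_train - y0)
    mean = y0 + Ks.T @ w
    var = 1.0 - np.einsum('ij,ij->j', Ks, np.linalg.solve(K, Ks))
    return mean, np.sqrt(np.maximum(var, 0.0))

def measure(revision):
    # Stand-in for a real benchmark run (synthetic, assumed here):
    # performance regresses from 10s to 14s at revision 60.
    return 10.0 if revision < 60 else 14.0

def sample_history(n_revisions=100, budget=12):
    all_revs = np.arange(n_revisions, dtype=float)
    sampled = [0, n_revisions - 1]           # seed with the endpoints
    observed = [measure(r) for r in sampled]
    while len(sampled) < budget:
        x = np.array(sampled, dtype=float)
        _, std = gp_posterior(x, np.array(observed), all_revs)
        std[sampled] = -np.inf               # never re-measure a revision
        nxt = int(np.argmax(std))            # most uncertain revision next
        sampled.append(nxt)
        observed.append(measure(nxt))
    mean, _ = gp_posterior(np.array(sampled, dtype=float),
                           np.array(observed), all_revs)
    return sorted(sampled), mean
```

Because the kernel is stationary, the uncertainty depends only on where measurements were taken, so the loop spreads a small budget of benchmark runs across the revision range; the learned mean then exposes the change point as the steepest jump in predicted performance.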
- Conference Paper: Bridging the gap between variability in client application and database schema (Datenbanksysteme in Business, Technologie und Web (BTW) – 13. Fachtagung des GI-Fachbereichs "Datenbanken und Informationssysteme" (DBIS), 2009) Siegmund, Norbert; Kästner, Christian; Rosenmüller, Marko; Heidenreich, Florian; Apel, Sven; Saake, Gunter. Database schemas are used to describe the logical design of a database. Diverse groups of users have different views on the global schema, which leads to different local schemas. Research has focused on view integration to generate a global, consistent schema.
- Conference Paper: Generierung maßgeschneiderter Relationenschemata in Softwareproduktlinien mittels Superimposition (Datenbanksysteme für Business, Technologie und Web (BTW), 2011) Schäler, Martin; Leich, Thomas; Siegmund, Norbert; Kästner, Christian; Saake, Gunter. Deriving an individual program from a software product line (program family) requires a specially adapted and mutually aligned range of functionality on both the application and the database side. Modeling tailored relational schemas is challenging, for example, due to the large number of programs that can be derived from a product line. We present an approach for modeling and generating tailored relational schemas by means of superimposition. Using a real, productively deployed case study, we show which advantages our approach yields in the areas of maintenance and evolution, and which challenges remain, for example, due to redundantly defined schema elements.
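The idea of superimposing schema fragments can be illustrated with a small sketch. This is an assumed toy model, not the authors' tool: per-feature fragments are merged by name, same-named tables are unified, and conflicting column redefinitions (the redundancy challenge mentioned in the abstract) are rejected.

```python
def superimpose(*fragments):
    # Merge per-feature schema fragments by name: same-named tables are
    # unified and their column sets combined; a column redefined with a
    # different type is reported as a conflict.
    schema = {}
    for fragment in fragments:
        for table, columns in fragment.items():
            merged = schema.setdefault(table, {})
            for column, ctype in columns.items():
                if merged.get(column, ctype) != ctype:
                    raise ValueError(f"conflicting definition of {table}.{column}")
                merged[column] = ctype
    return schema
```

Selecting a product's features then amounts to passing exactly the fragments of the selected features, yielding a relational schema tailored to that product.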
- Conference Paper: How reviewers think about internal and external validity in empirical software engineering (Software Engineering 2016, 2016) Siegmund, Janet; Siegmund, Norbert; Apel, Sven. Empirical methods have grown common in software engineering, but there is no consensus on how to apply them properly. Is practical relevance key? Do internally valid studies have any value? Should we replicate more to address the trade-off between internal and external validity? We asked the key players of software-engineering research, but they do not agree on answers to these questions. The original paper has been published at the International Conference on Software Engineering 2015 [SSA15]. Empirical research in software engineering has come a long way. From being perceived as a niche science, the awareness of its importance has steadily increased. In 2005, empirical studies were found in about 2% of the papers at major venues and conferences, while in recent years, almost all papers at ICSE, ESEC/FSE, and EMSE reported some kind of empirical evaluation, as we found in a literature review. Thus, the number of empirically investigated claims has increased considerably. With the rising awareness and use of empirical studies, the question of where to go with empirical software-engineering research is also emerging. New programming languages, techniques, and paradigms, new tool support to improve debugging and testing, and new visualizations to present information emerge almost daily, and claims regarding their merits need to be evaluated; otherwise, they remain claims. But how should new approaches be evaluated? Do we want observations that we can fully explain, but with limited generalizability, or do we want results that are applicable to a variety of circumstances, but where we cannot reliably explain the underlying factors and relationships?
In other words, do researchers focus on internal validity and control every aspect of the experimental setting, so that differences in the outcome can only be caused by the newly introduced technique? Or do they focus on external validity and observe their technique in the wild, showing a real-world effect, but without knowing which factors actually caused the observed difference? This tradeoff between internal and external validity is inherent in empirical research. Due to the options' different objectives, we cannot choose both. Deciding for one of these options is not easy, and existing guidelines are too general to assist in making this decision. With our work, we want to raise awareness of this problem: How should we address the tradeoff between internal and external validity? In the end, every time we plan an experiment, we must ask ourselves: Do we ask the right questions? Do we want pure, basic research, or applied research with immediate practical relevance? Is there even a way to design studies such that we can answer both kinds of questions at the same time, or is there no way around replications (i.e., exactly repeated studies, or studies that deviate from the original study design only in a few, well-selected factors) in software-engineering research? To understand how the key players of software-engineering research would address this problem, we conducted a survey among the program-committee members of the major software-engineering venues of recent years [SSA15]. In essence, we found that there is no agreement and that the opinions of the key players differ considerably (illustrated in Fig. 1). Even worse, we also found a profound lack of awareness of the tradeoff between internal and external validity, such that one reviewer would reject a paper that maximizes internal validity, because it "[w]ould show no value at all to SE community".
When we asked about replication, many program-committee members acknowledged that we need more replication in software-engineering research, but also indicated that replications have a hard time being accepted. One reviewer even stated that replications are "a good example of hunting for publications just for the sake of publishing. Come on." If the key players cannot agree on how to address the tradeoff between internal and external validity (or do not even see this tradeoff), and admit that replication, a well-established technique in other disciplines, would have almost no success in software-engineering research, how should we move forward? In the original paper, we shed light on this question, give insights into the participants' responses, and make suggestions on how to address the tradeoff between internal and external validity. References: [SSA15] Janet Siegmund, Norbert Siegmund, and Sven Apel. Views on Internal and External Validity in Empirical Software Engineering. In Proc. Int'l Conf. Software Engineering (ICSE), pages 9-19. IEEE CS, 2015.
- Conference Paper: Identifying Software Performance Changes Across Variants and Versions (Software Engineering 2022, 2022) Mühlbauer, Stefan; Apel, Sven; Siegmund, Norbert. Performance changes of configurable software systems can occur and persist throughout their lifetime. Finding optimal configurations and configuration options that influence performance is already difficult, but in the light of software evolution, configuration-dependent performance changes may lurk in a potentially large number of different versions of the system. Building on previous work, we combine two perspectives, variability and time, and devise an approach to identify configuration-dependent performance changes retrospectively, across the variants and versions of a software system. In a nutshell, we iteratively sample pairs of configurations and versions and measure the respective performance, which we use to actively learn a model that estimates how likely it is that a commit introduces a performance change. For such commits, we infer the configuration options that best explain the observed performance changes. Pursuing a search strategy that selectively and incrementally measures further pairs, we increase the accuracy of identified change points related to configuration options and their interactions. Our evaluation with both real-world software systems and synthesized data demonstrates that we can pinpoint performance shifts to individual configuration options and commits with high accuracy and at scale.
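The two steps this abstract combines (locating the commit that introduces a change, then finding the option that explains it) can be sketched as follows. This is a simplified illustration under assumed names and a synthetic noise-free performance model, not the authors' approach: real measurements would need a noise threshold instead of exact equality, and the search over (configuration, version) pairs is reduced to a plain bisection plus a mean-shift comparison per option.

```python
def perf(config, version, change_commit=5):
    # Synthetic performance model (an assumption for this sketch):
    # 'logging' always costs 1s; 'cache' costs 2s from commit 5 onward.
    base = 10.0
    if config["logging"]:
        base += 1.0
    if config["cache"] and version >= change_commit:
        base += 2.0
    return base

def find_change_commit(config, lo, hi):
    # Bisect for the first version whose performance differs from lo's.
    # Real measurements would compare against a noise threshold here.
    ref = perf(config, lo)
    while lo + 1 < hi:
        mid = (lo + hi) // 2
        if perf(config, mid) == ref:
            lo = mid
        else:
            hi = mid
    return hi

def explaining_option(configs, commit):
    # Score each option by how strongly enabling it separates the
    # performance shifts observed across the given commit.
    shifts = {tuple(sorted(c.items())): perf(c, commit) - perf(c, commit - 1)
              for c in configs}
    best, best_score = None, -1.0
    for option in configs[0]:
        on = [d for c, d in shifts.items() if dict(c)[option]]
        off = [d for c, d in shifts.items() if not dict(c)[option]]
        score = abs(sum(on) / len(on) - sum(off) / len(off))
        if score > best_score:
            best, best_score = option, score
    return best
```

On a full factorial of the two options, the bisection pinpoints the change-introducing commit for an affected configuration, and the shift comparison attributes it to the 'cache' option rather than 'logging'.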
- Conference Paper: Is Performance a Reliable Proxy for Energy Consumption? (Software Engineering 2024 (SE 2024), 2024) Weber, Max; Kaltenecker, Christian; Sattler, Florian; Apel, Sven; Siegmund, Norbert
- Journal Article: On the Role of Program Comprehension in Embedded Systems (Softwaretechnik-Trends Band 31, Heft 2, 2011) Feigenspan, Janet; Siegmund, Norbert; Fruth, Jana
- Conference Paper: Performance prediction in the presence of feature interactions (Software Engineering 2014, 2014) Siegmund, Norbert; Kolesnikov, Sergiy; Kästner, Christian; Apel, Sven; Batory, Don; Rosenmüller, Marko; Saake, Gunter
- Conference Paper: Performance Sensitivity Across Configuration and Workload (Software Engineering 2024 (SE 2024), 2024) Mühlbauer, Stefan; Sattler, Florian; Kaltenecker, Christian; Dorn, Johannes; Apel, Sven; Siegmund, Norbert
- Conference Paper: Performance-Influence Models (Software Engineering 2016, 2016) Siegmund, Norbert; Grebhahn, Alexander; Apel, Sven; Kästner, Christian