Listing P310 - Software Engineering 2021, by publication date
1 - 10 of 55
- ConferencePaper: Using Key Performance Indicators to Compare Software-Development Processes (Software Engineering 2021, 2021). Sürücü, Cem; Song, Bianying; Krüger, Jacob; Saake, Gunter; Leich, Thomas. Extended abstract of our paper published at the Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE) 2020.
- ConferencePaper: 2nd Workshop on Requirement Management in Enterprise Systems Projects (AESP’21) (Software Engineering 2021, 2021). Weiss, Christoph; Keckeis, Johannes. ERP systems and other enterprise systems are the backbone of any company in a digitized world. In almost every company, Enterprise Systems are adapted to the customers' needs through parameterization, modifications (changes to existing functions and logic), or even extensions (new developments of existing functions and logic). However, many such Enterprise Systems projects fail due to missing, incorrect, inadequate, or incomplete requirements; there are "incorrect" expectations as well as divergences between customers and suppliers in how requirements management is defined and approached. These challenges will be highlighted and discussed during this workshop.
- ConferencePaper: What Kind of Questions Do Developers Ask on Stack Overflow? A Comparison of Automated Approaches to Classify Posts Into Question Categories (Software Engineering 2021, 2021). Beyer, Stefanie; Macho, Christian; Di Penta, Massimiliano; Pinzger, Martin. This paper has been published in the journal Empirical Software Engineering, 2020. Stack Overflow (SO) is among the most popular question-and-answer sites used by developers. Labeling posts with tags is one of the features that facilitate searching and browsing SO posts. However, existing tags mainly refer to technological aspects, not to the purpose of a question. In this paper, we argue that tagging posts with their purpose can help developers find the posts that answer their question. We first present a harmonization of existing taxonomies of question categories, which represent the purpose of a question, into seven categories. Next, we present two approaches to automate the classification of posts into the seven question categories, one using regular expressions and one using machine learning. Evaluating both approaches on an independent test set, we found that our regular expressions outperform machine learning. Applying the regular expressions to posts related to Android app development showed that the categories API USAGE, CONCEPTUAL, and DISCREPANCY are assigned most frequently. By integrating our approach into SO, posts could be manually tagged with our categories, which would allow developers to search posts by question category.
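The regex-based variant of such a classifier can be sketched as follows. The patterns and the three category names below are simplified, hypothetical stand-ins; the paper's actual regular expressions and full seven-category taxonomy are not reproduced here.

```python
import re

# Hypothetical, simplified patterns -- illustrative only, not the paper's rules.
CATEGORY_PATTERNS = {
    "API USAGE": re.compile(r"\bhow (?:do|can) i use\b", re.IGNORECASE),
    "CONCEPTUAL": re.compile(r"\bwhat is the difference\b", re.IGNORECASE),
    "DISCREPANCY": re.compile(r"\b(?:doesn'?t|does not) work\b", re.IGNORECASE),
}

def classify(post_title: str) -> list[str]:
    """Return every question category whose pattern matches the title."""
    return [cat for cat, pat in CATEGORY_PATTERNS.items()
            if pat.search(post_title)]

print(classify("How do I use AsyncTask? My callback doesn't work"))
# -> ['API USAGE', 'DISCREPANCY']
```

A post can match several categories at once, which mirrors how a question may both ask about API usage and report unexpected behavior.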
- ConferencePaper: Determining Context Factors for Hybrid Development Methods with Trained Models (Software Engineering 2021, 2021). Klünder, Jil; Karajic, Dzejlana; Tell, Paolo; Karras, Oliver; Münkel, Christian; Münch, Jürgen; MacDonell, Stephen; Hebig, Regina; Kuhrmann, Marco. Selecting a suitable development method for a specific project context is one of the most challenging activities in process design. To extend the so-far statistical construction of hybrid development methods, we analyze 829 data points to investigate which context factors influence the choice of methods or practices. Using exploratory factor analysis, we derive five base clusters consisting of up to 10 methods. Logistic regression analysis then reveals which context factors influence the integration of methods from these clusters into the development process. Our results indicate that only a few context factors, including project/product size and target application domain, significantly influence the choice. This summary refers to the paper "Determining Context Factors for Hybrid Development Methods with Trained Models", published in the proceedings of the International Conference on Software and System Process in 2020.
- ConferencePaper: Generating Tests that Cover Input Structure (Software Engineering 2021, 2021). Pereira Borges Jr., Nataniel; Havrikov, Nikolas; Zeller, Andreas. To systematically test a program, one needs good inputs: inputs that are valid, such that they are not rejected by the program, and inputs that cover as much of the input space as possible in order to reach a maximum of functionality. We present recent techniques to systematically cover input structure. Our k-path algorithm for grammar production [HZ19] systematically covers syntactic elements of the input as well as their combinations. We show how to learn such input structures from graphical user interfaces, notably their interaction language [DBZ19]. Finally, we demonstrate that knowledge bases such as DBpedia can be a reliable source of semantically coherent inputs [Wa20]. All these techniques result in significantly higher code coverage than the state of the art.
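The starting point of such techniques, producing syntactically valid inputs from a grammar, can be sketched in a few lines. The toy grammar below is an assumption for illustration; the paper's k-path algorithm additionally tracks which length-k derivation paths have been covered, which this plain random producer does not do.

```python
import random

# Toy arithmetic-expression grammar (hypothetical example, not from the paper).
GRAMMAR = {
    "<expr>": [["<term>", "+", "<expr>"], ["<term>"]],
    "<term>": [["(", "<expr>", ")"], ["<digit>"]],
    "<digit>": [[str(d)] for d in range(10)],
}

def produce(symbol: str, depth: int = 0) -> str:
    """Expand a grammar symbol into a random string of the language."""
    if symbol not in GRAMMAR:          # terminal symbol: emit as-is
        return symbol
    options = GRAMMAR[symbol]
    if depth > 5:                      # bias toward short expansions to terminate
        options = [min(options, key=len)]
    expansion = random.choice(options)
    return "".join(produce(s, depth + 1) for s in expansion)

print(produce("<expr>"))  # e.g. an expression like "(3)+7"
```

Every produced string is valid by construction, so the program under test exercises its actual logic rather than its input-rejection code.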
- ConferencePaper: Learning Circumstances of Software Failures (Software Engineering 2021, 2021). Gopinath, Rahul; Havrikov, Nikolas; Kampmann, Alexander; Soremekun, Ezekiel; Zeller, Andreas. A program fails. Under which circumstances does the failure occur? Starting with a single failure-inducing input ("The input ((4)) fails") and an input grammar, this talk presents two techniques that use systematic tests to automatically determine the circumstances under which the failure occurs. The DDSET algorithm [Go20] generalizes the input to an _abstract failure-inducing input_ that contains both (concrete) terminal symbols and (abstract) nonterminal symbols from the grammar, for instance "(())", which represents any expression in double parentheses. The ALHAZEN technique [Ka20] takes this even further, using decision trees to learn input properties associated with failures, such as length or numerical values: "The error occurs as soon as there are two parentheses or more." Such abstractions can be used as debugging diagnostics, characterizing the circumstances under which a failure occurs, and as producers of additional failure-inducing tests that help design and validate fixes and repair candidates. Both have the potential to significantly boost the speed and quality of software debugging.
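The feature-learning idea behind ALHAZEN can be sketched in miniature: extract numeric features from test inputs, then find the feature threshold that separates failing from passing runs. The `program_fails` oracle below is a hypothetical stand-in that fails on two or more opening parentheses, matching the "((4))" example; the actual technique uses decision trees over many features rather than this single-feature threshold search.

```python
# Hypothetical program under test: fails on two or more opening parentheses.
def program_fails(inp: str) -> bool:
    return inp.count("(") >= 2

def feature(inp: str) -> int:
    """A single numeric input feature: the number of opening parentheses."""
    return inp.count("(")

def learn_threshold(tests: list[str]) -> int:
    """Smallest feature value observed only in failing runs."""
    failing = [feature(t) for t in tests if program_fails(t)]
    passing = [feature(t) for t in tests if not program_fails(t)]
    return min(v for v in failing if v > max(passing, default=-1))

tests = ["4", "(4)", "((4))", "(((4)))", "1+2"]
print(learn_threshold(tests))
# -> 2: "the error occurs as soon as there are two parentheses or more"
```

The learned threshold is both a diagnosis and a generator: any input satisfying it is a candidate for a new failure-inducing test.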
- ConferencePaper: Learning to Generate Fault-revealing Test Cases in Metamorphic Testing (Software Engineering 2021, 2021). Spieker, Helge; Gotlieb, Arnaud. Metamorphic Testing is a software testing paradigm that uses necessary properties of a system under test, called metamorphic relations (MRs), either to check its expected outputs or to generate new test cases. Metamorphic Testing has been successful in testing programs for which a full oracle is unavailable, or programs with uncertainty about expected outputs, such as learning systems. In this paper, we formulate the effective selection of MRs as a reinforcement learning problem based on contextual bandits. Our method, Adaptive Metamorphic Testing, sequentially selects the MR that is expected to provide the highest payoff, i.e., that is most likely to reveal faults. Which MRs are likely to reveal faults is learned from successive exploration trials: the bandit explores the available MRs and evaluates the fault landscape of the system under test, thereby providing valuable information to the tester. We present experimental results on two machine learning applications, namely image classification and object detection, where Adaptive Metamorphic Testing efficiently identifies weaknesses of the tested systems. The original paper, "Adaptive Metamorphic Testing with Contextual Bandits", first appeared in the Journal of Systems and Software (2020).
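The bandit view of MR selection can be illustrated with a minimal epsilon-greedy loop. Everything below is an assumption for illustration: the MR names and their fault-revealing rates are invented, the rewards are simulated, and the paper's method uses contextual bandits (with context features), not this context-free simplification.

```python
import random

# Hypothetical MRs and simulated per-trial fault-revealing probabilities.
FAULT_RATE = {"mr_rotate": 0.1, "mr_flip": 0.6, "mr_scale": 0.2}

def run_mr(mr: str) -> float:
    """Simulated test run: reward 1.0 if the MR revealed a fault, else 0.0."""
    return 1.0 if random.random() < FAULT_RATE[mr] else 0.0

def select_and_learn(trials: int = 2000, eps: float = 0.1) -> str:
    """Epsilon-greedy bandit: return the MR estimated most fault-revealing."""
    counts = {mr: 0 for mr in FAULT_RATE}
    values = {mr: 0.0 for mr in FAULT_RATE}
    for _ in range(trials):
        if random.random() < eps:                         # explore
            mr = random.choice(list(FAULT_RATE))
        else:                                             # exploit best estimate
            mr = max(values, key=values.get)
        reward = run_mr(mr)
        counts[mr] += 1
        values[mr] += (reward - values[mr]) / counts[mr]  # incremental mean
    return max(values, key=values.get)

print(select_and_learn())  # almost surely "mr_flip" under these rates
```

Exploration keeps estimating all MRs while exploitation concentrates the testing budget on the MR that has revealed the most faults so far.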
- ConferencePaper: A Domain Analysis of Resource and Requirements Monitoring: Towards a Comprehensive Model of the Software Monitoring Domain (Software Engineering 2021, 2021). Rabiser, Rick; Schmid, Klaus; Eichelberger, Holger; Vierhauser, Michael; Grünbacher, Paul. This is a summary of an article published in the Information and Software Technology journal in 2019 (see bibliography), describing a domain model we developed to structure and systematize the field of software monitoring, as well as a reference architecture to support developing software monitoring approaches.
- ConferencePaper: An Empirical Analysis of the Costs of Clone- and Platform-Oriented Software Reuse (Software Engineering 2021, 2021). Krüger, Jacob; Berger, Thorsten. In this extended abstract, we summarize our paper of the same title, published at the Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE) 2020.
- ConferencePaper: Programming in Natural Language with fuSE: Synthesizing Methods from Spoken Utterances Using Deep Natural Language Understanding (Software Engineering 2021, 2021). Weigelt, Sebastian; Steurer, Vanessa; Hey, Tobias; Tichy, Walter. With fuSE, laypeople can create simple programs: one can teach intelligent systems new functions using plain English. fuSE uses deep learning to synthesize source code: it creates method signatures (for newly learned functions) and generates API calls (to form the body). In an evaluation on an unseen dataset, fuSE synthesized 84.6% of the signatures and 66.9% of the API calls correctly.