Listing P310 - Software Engineering 2021, by date of issue
- Conference Paper: Programming in Natural Language with fuSE: Synthesizing Methods from Spoken Utterances Using Deep Natural Language Understanding (Software Engineering 2021, 2021). Authors: Weigelt, Sebastian; Steurer, Vanessa; Hey, Tobias; Tichy, Walter. With fuSE, laypeople can create simple programs: one can teach intelligent systems new functions using plain English. fuSE uses deep learning to synthesize source code: it creates method signatures (for newly learned functions) and generates API calls (to form the body). In an evaluation on an unseen dataset, fuSE synthesized 84.6% of the signatures and 66.9% of the API calls correctly.
- Conference Paper: Trace Link Recovery Using Semantic Relation Graphs and Spreading Activation (Software Engineering 2021, 2021). Authors: Schlutter, Aaron; Vogelsang, Andreas. The paper was first published at the 28th IEEE International Requirements Engineering Conference in 2020. Trace Link Recovery tries to identify and link related existing requirements with each other to support further engineering tasks. Existing approaches are mainly based on algebraic Information Retrieval or machine learning. Machine-learning approaches usually demand reasonably large, labeled datasets for training. Algebraic Information Retrieval approaches, such as the distance between tf-idf scores, also work on smaller datasets without training but are limited in providing explanations for trace links. In this work, we present a Trace Link Recovery approach that is based on an explicit representation of the content of requirements as a semantic relation graph and uses Spreading Activation to answer trace queries over this graph. Our approach is fully automated, including an NLP pipeline that transforms unrestricted natural-language requirements into a graph. We evaluate our approach on five common datasets. Depending on the selected configuration, the predictive power varies strongly. With the best tested configuration, the approach achieves a mean average precision of 40% and a Lag of 50%. Even though the predictive power of our approach does not outperform state-of-the-art approaches, we think that an explicit knowledge representation is an interesting artifact to explore in Trace Link Recovery approaches to generate explanations and refine results. (A minimal spreading-activation sketch follows this listing.)
- Conference Paper: Generating Tests that Cover Input Structure (Software Engineering 2021, 2021). Authors: Pereira Borges Jr., Nataniel; Havrikov, Nikolas; Zeller, Andreas. To systematically test a program, one needs good inputs: inputs that are valid, such that they are not rejected by the program, and inputs that cover as much of the input space as possible in order to reach a maximum of functionality. We present recent techniques to systematically cover input structure. Our k-path algorithm for grammar production [HZ19] systematically covers syntactic elements of the input as well as their combinations. We show how to learn such input structures from graphical user interfaces, notably their interaction language [DBZ19]. Finally, we demonstrate that knowledge bases such as DBpedia can be a reliable source of semantically coherent inputs [Wa20]. All these techniques result in significantly higher code coverage than the state of the art. (A toy grammar-coverage sketch follows this listing.)
- Conference Paper: Isolating Faults in Failure-Inducing Inputs (Software Engineering 2021, 2021). Authors: Kirschner, Lukas; Soremekun, Ezekiel; Zeller, Andreas. Program failures are often caused by faulty inputs (e.g., due to data corruption). When an input induces a failure, one needs to debug the input data, i.e., isolate the faults to obtain valid input data. Typically, debuggers focus on diagnosing faults in the program rather than in the input. This talk instead presents an approach that automatically repairs faults in the input data without requiring program analysis. In addition, we present empirical data on the causes and prevalence of invalid inputs in practice; we found that four percent of inputs in the wild are invalid. We present a general-purpose algorithm called ddmax that automatically isolates faults in invalid inputs and recovers the maximal valid input data. The aim of ddmax is to (1) identify which parts of the input data prevent processing by the program, and (2) recover as much of the (valuable) input data as possible. In our experiments, given a program and an invalid input, ddmax recovers and repairs as much data as possible. The difference between the original failing input and the "maximized" passing input includes all input fragments that could not be processed, i.e., the fault. This approach is useful for automatically debugging and repairing invalid inputs. (A simplified input-maximization sketch follows this listing.)
- Conference Paper: Accurate Modeling of Performance Histories for Evolving Software Systems (Software Engineering 2021, 2021). Authors: Mühlbauer, Stefan; Apel, Sven; Siegmund, Norbert. This work was originally published in the proceedings of the 34th IEEE/ACM International Conference on Automated Software Engineering (ASE 2019). Learning from the history of a software system's performance behavior not only helps to discover and locate performance bugs, but also supports identifying evolutionary performance patterns and general trends. Exhaustive regression testing is usually impractical, because rigorous performance benchmarking requires executing realistic workloads per revision, resulting in large execution times. We devise a novel active revision sampling approach that aims at tracking and understanding a system's performance history by approximating the performance behavior of a software system across all of its revisions. In short, we iteratively sample and measure the performance of specific revisions to learn a performance-evolution model. We select revisions based on how uncertain our model is about their corresponding performance values. Technically, we use Gaussian Process models, which not only estimate the performance of each revision but also provide an uncertainty value alongside each estimate. This way, we iteratively improve our model with only a few measurements. Our evaluation with six real-world configurable software systems demonstrates that Gaussian Process models are able to accurately estimate performance-evolution histories with only a few measurements and to reveal interesting behaviors and trends, such as change points. (A minimal active-sampling sketch follows this listing.)
- Conference Paper: Data-driven Risk Management for Requirements Engineering: An Automated Approach based on Bayesian Networks (Software Engineering 2021, 2021). Authors: Wiesweg, Florian; Vogelsang, Andreas; Mendez, Daniel. This paper was accepted at the 2020 IEEE Requirements Engineering Conference (RE). RE is a means to reduce the risk of delivering a product that does not fulfill the stakeholders' needs. Therefore, a major challenge in RE is to decide how much RE is needed and which RE methods to apply. The quality of such decisions strongly depends on the RE expert's experience and expertise in carefully analyzing the context and current state of a project. Recent work, however, shows that a lack of experience and qualification is a common cause of problems in RE. We trained a series of Bayesian Networks on data from the NaPiRE survey to model relationships between RE problems, their causes, and their effects in projects with different contextual characteristics. These models were used to conduct (1) a post-mortem (diagnostic) analysis, deriving probable causes of sub-optimal RE performance, and (2) a preventive analysis, predicting probable issues a young project might encounter. The method was subjected to a rigorous cross-validation procedure for both use cases before assessing its applicability to real-world scenarios in a case study. (A toy diagnostic-versus-predictive query sketch follows this listing.)
- Conference Paper: Tool Support for Correctness-by-Construction (Software Engineering 2021, 2021). Authors: Runge, Tobias; Schaefer, Ina; Cleophas, Loek; Thüm, Thomas; Kourie, Derrick; Watson, Bruce W. This work was published at the International Conference on Fundamental Approaches to Software Engineering 2019. We tackled a fundamental problem of the correctness-by-construction (CbC) methodology, namely its missing tool support; CbC was proposed by Dijkstra and Hall and revised into a lightweight, more amenable version by Kourie and Watson. CbC is an incremental approach to creating programs using a set of small, easily applicable refinement rules that guarantee the correctness of the program with regard to its pre- and postcondition specification. Our goal was to implement a functional and user-friendly IDE, so that developers adopt the CbC approach and benefit from its advantages (e.g., defects can easily be tracked through the refinement structure of the program). The tool offers a hybrid textual and graphical IDE that programmers can use to refine a specification into a correct implementation. The textual editor suits programmers who want to stay in their familiar environment, while the graphical editor highlights the refinement structure of the program and gives visual feedback when errors occur, using KeY as the verification backend. The tool was evaluated regarding feasibility and the effort needed to develop correct programs. Here, slight benefits in comparison to a standard verification approach were observed. (A minimal refinement-rule proof-obligation sketch follows this listing.)
- Conference Paper: An Exploratory Study on Performance Engineering in Model Transformations (Software Engineering 2021, 2021). Authors: Groner, Raffaela; Beaucamp, Luis; Tichy, Matthias; Becker, Steffen. Model-Driven Software Engineering is used to deal with the increasing complexity of software, but this trend also leads to larger and more complex models and model transformations. While improving the performance of transformation engines has been a research focus, there is no empirical study on how transformation developers deal with performance issues. We used a quantitative questionnaire to investigate whether the performance of transformations is actually important to transformation developers. Based on the answers to the questionnaire, we conducted qualitative semi-structured interviews. The results of the online survey show that 43 of 81 participants have already tried to improve the performance of a transformation and that 34 participants are only sometimes or rarely satisfied with the execution performance. Based on the answers from our 13 interviews, we identified different strategies to prevent or find performance issues in model transformations as well as different types of causes of performance issues and solutions to resolve them. We also compiled a collection of tool features that the interviewees perceived as helpful for finding causes. Overall, our results show that the performance of transformations is relevant and that there is a lack of support for transformation developers without detailed knowledge of the engine in solving performance issues. This summary refers to our work, which was accepted for the Foundations Track of the ACM/IEEE 23rd International Conference on Model Driven Engineering Languages and Systems (MODELS) in 2020.
- Conference Paper: Schönheit und Unzulänglichkeit von Software-Architektur (Beauty and Inadequacy of Software Architecture) (Software Engineering 2021, 2021). Author: Engelschall, Ralf S. Software architecture is the supreme discipline of industrial software engineering. Yet it often shows two opposing sides: on the one hand, it can be conceptually extremely elegant and beautiful; on the other hand, in practice it is regularly weak and inadequate. What lies behind this? Why, even after 50 years of software engineering, do we still struggle so much with software architecture? Where should we do renewed research and perhaps rethink the discipline once more? And how can we train the coming generations of software architects even better?
- Conference Paper: Determining Context Factors for Hybrid Development Methods with Trained Models (Software Engineering 2021, 2021). Authors: Klünder, Jil; Karajic, Dzejlana; Tell, Paolo; Karras, Oliver; Münkel, Christian; Münch, Jürgen; MacDonell, Stephen; Hebig, Regina; Kuhrmann, Marco. Selecting a suitable development method for a specific project context is one of the most challenging activities in process design. To go beyond the purely statistical construction of hybrid development methods used so far, we analyze 829 data points to investigate which context factors influence the choice of methods or practices. Using exploratory factor analysis, we derive five base clusters consisting of up to 10 methods. Logistic regression analysis then reveals which context factors influence the integration of methods from these clusters into the development process. Our results indicate that only a few context factors, including project/product size and target application domain, significantly influence the choice. This summary refers to the paper "Determining Context Factors for Hybrid Development Methods with Trained Models", published in the proceedings of the International Conference on Software and System Process in 2020. (A minimal logistic-regression sketch follows below.)
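The following sketches illustrate some of the techniques summarized in the entries above. They are minimal, hedged approximations written for this listing, not the authors' implementations. The first one relates to "Trace Link Recovery Using Semantic Relation Graphs and Spreading Activation": a toy spreading-activation pass over a hand-made graph of requirement and term nodes. The graph, the decay factor, and the REQ-* node names are invented for illustration; the paper builds its graph automatically with an NLP pipeline from real requirements.

```python
from collections import defaultdict

# Hand-made toy semantic relation graph: nodes are requirements and terms,
# edges are (undirected) semantic relations. Purely illustrative.
EDGES = [
    ("REQ-1", "login"), ("REQ-1", "user"),
    ("REQ-2", "password"), ("REQ-2", "user"),
    ("REQ-3", "report"), ("login", "password"),
]
graph = defaultdict(set)
for a, b in EDGES:
    graph[a].add(b)
    graph[b].add(a)

def spreading_activation(seeds, decay=0.5, steps=3):
    """Propagate activation from the seed nodes over the graph with a decay factor."""
    activation = defaultdict(float)
    for seed in seeds:
        activation[seed] = 1.0
    for _ in range(steps):
        spread = defaultdict(float)
        for node, value in activation.items():
            for neighbor in graph[node]:
                spread[neighbor] += value * decay / len(graph[node])
        for node, value in spread.items():
            activation[node] += value
    return activation

# Trace query: which requirements relate to the term "login"?
scores = spreading_activation({"login"})
ranking = sorted((n for n in scores if n.startswith("REQ-")),
                 key=scores.get, reverse=True)
print(ranking)  # REQ-1 ranks above REQ-2; REQ-3 receives no activation and does not appear
```

Requirements that accumulate the most activation for a query are proposed as trace-link candidates; the explicit activation paths are what make the result explainable.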
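For "Generating Tests that Cover Input Structure", the sketch below generates inputs from a toy grammar until every production alternative has been used at least once. This is plain production coverage, a simplification of the k-path idea from [HZ19], which additionally covers combinations of productions; the grammar and the coverage bookkeeping are invented for illustration.

```python
import random

# Toy grammar: nonterminals map to lists of alternatives; tokens in <...> are nonterminals.
GRAMMAR = {
    "<expr>": [["<term>", "+", "<expr>"], ["<term>"]],
    "<term>": [["<digit>", "<term>"], ["<digit>"]],
    "<digit>": [[d] for d in "0123456789"],
}

def expand(symbol, uncovered, depth=0):
    """Expand a nonterminal, preferring alternatives that have not been covered yet."""
    if symbol not in GRAMMAR:
        return symbol  # terminal symbol
    alternatives = GRAMMAR[symbol]
    preferred = [a for i, a in enumerate(alternatives) if (symbol, i) in uncovered]
    if preferred and depth < 10:
        choice = random.choice(preferred)
    else:
        choice = min(alternatives, key=len)  # shortest alternative, to terminate
    uncovered.discard((symbol, alternatives.index(choice)))
    return "".join(expand(s, uncovered, depth + 1) for s in choice)

uncovered = {(nt, i) for nt, alts in GRAMMAR.items() for i in range(len(alts))}
inputs = []
while uncovered:
    inputs.append(expand("<expr>", uncovered))
print(len(inputs), "inputs cover every production:", inputs)
```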
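For "Isolating Faults in Failure-Inducing Inputs", the sketch below conveys the goal of ddmax with a naive stand-in: it removes the smallest contiguous block that makes an invalid input acceptable again, so the removed block is the isolated fault and the rest is the recovered data. The real ddmax is a delta-debugging-style algorithm that handles multiple, non-contiguous faults far more efficiently; the JSON oracle and the sample input are assumptions for illustration.

```python
import json

def passes(data):
    """Oracle: does the program accept the input? Here: is the string valid JSON?"""
    try:
        json.loads(data)
        return True
    except json.JSONDecodeError:
        return False

def maximize(data):
    """Return a maximal passing variant by removing the smallest contiguous block
    that makes the input acceptable. A toy stand-in for ddmax."""
    if passes(data):
        return data
    for width in range(1, len(data) + 1):
        for start in range(0, len(data) - width + 1):
            candidate = data[:start] + data[start + width:]
            if passes(candidate):
                return candidate
    return ""

broken = '{"name": "Alice", "age": 3=0}'   # corrupted input: stray "=" inside a number
fixed = maximize(broken)
print(fixed)                                # {"name": "Alice", "age": 30}
print(len(broken) - len(fixed), "character(s) isolated as the fault")
```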
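For "Accurate Modeling of Performance Histories for Evolving Software Systems", the sketch below shows uncertainty-driven revision sampling with a Gaussian Process, in the spirit of the paper's active sampling loop. The synthetic measure_performance function, the kernel choice, and the measurement budget are assumptions; the actual approach benchmarks real revisions of real systems.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical stand-in for "check out revision r and benchmark it".
def measure_performance(revision):
    base = 10.0 + 0.01 * revision
    return base + (5.0 if revision >= 600 else 0.0)  # synthetic change point at revision 600

revisions = np.arange(0, 1000).reshape(-1, 1)
sampled = [0, 999]                                   # start with the oldest and newest revision
observed = [measure_performance(r) for r in sampled]

gp = GaussianProcessRegressor(kernel=RBF(length_scale=100.0) + WhiteKernel(), normalize_y=True)
for _ in range(15):                                  # measurement budget
    gp.fit(np.array(sampled).reshape(-1, 1), np.array(observed))
    mean, std = gp.predict(revisions, return_std=True)
    std[sampled] = 0.0                               # do not re-sample measured revisions
    next_rev = int(np.argmax(std))                   # measure the most uncertain revision next
    sampled.append(next_rev)
    observed.append(measure_performance(next_rev))

print("revisions measured:", sorted(sampled))
print("largest predicted jump near revision", int(np.argmax(np.abs(np.diff(mean)))))  # rough estimate
```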
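For "Data-driven Risk Management for Requirements Engineering", the sketch below hand-rolls a three-node Bayesian network to contrast the two kinds of queries the paper describes: a diagnostic (post-mortem) query and a predictive (preventive) query. The network structure and all probabilities are made up; the paper trains its networks on NaPiRE survey data.

```python
# Toy network: Cause -> Problem -> Effect, each variable binary.
P_cause = 0.3                           # P(lack of RE experience)
P_problem = {True: 0.7, False: 0.2}     # P(incomplete requirements | cause)
P_effect = {True: 0.8, False: 0.1}      # P(costly rework | problem)

def joint(cause, problem, effect):
    """Joint probability of one assignment, following the chain Cause -> Problem -> Effect."""
    p = P_cause if cause else 1 - P_cause
    p *= P_problem[cause] if problem else 1 - P_problem[cause]
    p *= P_effect[problem] if effect else 1 - P_effect[problem]
    return p

# Diagnostic (post-mortem) query: given that costly rework occurred,
# how likely is a lack of RE experience to be the cause?
num = sum(joint(True, pr, True) for pr in (True, False))
den = sum(joint(c, pr, True) for c in (True, False) for pr in (True, False))
print("P(cause | effect) =", round(num / den, 3))

# Predictive (preventive) query: how likely is costly rework if the team lacks experience?
num = sum(joint(True, pr, True) for pr in (True, False))
den = sum(joint(True, pr, e) for pr in (True, False) for e in (True, False))
print("P(effect | cause) =", round(num / den, 3))
```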
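For "Tool Support for Correctness-by-Construction", the sketch below shows the kind of proof obligation a CbC refinement step generates, using the classic assignment rule: {P} x := E {Q} is correct iff P implies Q with E substituted for x. It discharges the obligation with the Z3 solver rather than KeY, which the tool actually uses; the concrete pre-/postcondition and expression are invented.

```python
# pip install z3-solver
from z3 import Int, Implies, Not, Solver, substitute, unsat

x, y = Int("x"), Int("y")

# Hypothetical refinement step: refine {y >= 0} ... {x > 0} by the assignment x := y + 1.
pre = y >= 0
post = x > 0
E = y + 1

# Assignment rule: {pre} x := E {post} is correct iff pre implies post[E/x].
obligation = Implies(pre, substitute(post, (x, E)))

solver = Solver()
solver.add(Not(obligation))                           # search for a counterexample
print("refinement correct:", solver.check() == unsat) # True: the obligation is valid
```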
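Finally, for "Determining Context Factors for Hybrid Development Methods with Trained Models", the sketch below fits a logistic regression that relates made-up context factors to whether a method cluster is used. It mirrors only the regression step of the study; the factor names, the data, and the cluster label are assumptions, and the exploratory factor analysis that precedes the regression is not shown.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Made-up project data: context factors [team size, safety-critical (0/1), distributed (0/1)]
# and whether a hypothetical method cluster (e.g., agile practices) was integrated.
X = np.array([[5, 0, 0], [7, 0, 1], [40, 1, 1], [60, 1, 0],
              [9, 0, 0], [55, 1, 1], [12, 0, 1], [48, 1, 0]])
y = np.array([1, 1, 0, 0, 1, 0, 1, 0])   # 1 = cluster used in the development process

model = LogisticRegression().fit(X, y)
for factor, coef in zip(["team size", "safety-critical", "distributed"], model.coef_[0]):
    print(f"{factor}: {coef:+.2f}")       # sign and magnitude hint at the factor's influence
print("P(cluster | small, non-critical, co-located project):",
      round(model.predict_proba([[6, 0, 0]])[0, 1], 2))
```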