Listing by Keyword "MDSE"
- Conference Paper: Claimed Advantages and Disadvantages of (dedicated) Model Transformation Languages: A Systematic Literature Review (Software Engineering 2021, 2021). Götz, Stefan; Tichy, Matthias; Groner, Raffaela. There exists a plethora of claims about the advantages and disadvantages of model transformation languages compared to general-purpose programming languages. With our work, published in the Software and Systems Modelling Journal in 2020 [GTG2020], we aim to create an overview of these claims in the literature and to systematize the evidence for them. For this purpose we conducted a systematic literature review, following a systematic process for searching and selecting relevant publications and extracting data. We selected a total of 58 publications, categorized claims about model transformation languages into 14 separate groups, and conceived a representation to track claims and evidence through the literature. From our results we conclude that (i) current literature claims many advantages of model transformation languages but also points towards certain deficits, (ii) there is insufficient evidence for the claimed advantages and disadvantages, and (iii) there is a lack of research interest in the verification of these claims.
- Workshop Paper: Combining Retrieval-Augmented Generation and Few-Shot Learning for Model Synthesis of Uncommon DSLs (Modellierung 2024 Satellite Events, 2024). Baumann, Nils; Diaz, Juan Sebastian; Michael, Judith; Netz, Lukas; Nqiri, Haron; Reimer, Jan; Rumpe, Bernhard. We introduce a method that empowers large language models (LLMs) to generate models for domain-specific languages (DSLs) for which the LLM has little to no training data. Common LLMs such as GPT-4, Llama 2, or Bard are trained on publicly available data and can therefore produce models for well-known modeling languages such as PlantUML; however, they perform worse on lesser-known or unpublished DSLs. Previous work focused on the use of few-shot learning (FSL) to synthesize models but did not address or evaluate the potential of retrieval-augmented generation (RAG) to provide fitting examples for the FSL-based modeling approach. In this work, we propose a toolchain and test each building block individually: we use the MontiCore Sequence Diagram Language, on which GPT-4 has minimal training data, to assess the extent to which FSL increases the likelihood of synthesizing an accurate model. Additionally, we evaluate how effectively RAG can identify suitable models for user requests and determine whether GPT-4 can distinguish between requests for a specific model and those for general information. We show that RAG and FSL can be used to enable simple model synthesis for uncommon DSLs, as long as there is a fitting knowledge base that can be accessed to provide the needed examples for the FSL approach. (A minimal illustrative sketch of such a retrieval-plus-few-shot pipeline follows after this listing.)
- Journal Article: EMODE – Modellgetriebene Entwicklung multimodaler, kontextsensitiver Anwendungen (EMODE – Model-driven Development of Multimodal, Context-Sensitive Applications) (i-com: Vol. 6, No. 3, 2008). Behring, Alexander; Heinrich, Matthias; Winkler, Matthias; Dargie, Waltenegus. The development of multimodal, context-sensitive applications is attracting growing interest. However, such applications place higher demands on the software development process. This article presents the work and results of the EMODE project, whose goal is to improve the efficiency of developing multimodal, context-sensitive applications. EMODE relies on model-based development, with particular emphasis on the integration of the individual development steps and on continuous tool support.
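
The workshop paper above combines retrieval of fitting example models with few-shot prompting. The following Python sketch is not the authors' toolchain; it only illustrates the general pattern under several assumptions: a toy knowledge base of description/model pairs, a simple lexical-overlap retriever in place of an embedding-based one, and the actual LLM call left out rather than tied to a specific API.

```python
# Illustrative sketch (not the paper's implementation): retrieval-augmented
# few-shot prompting for an uncommon DSL. The knowledge base contents, the
# lexical retrieval, and the omitted LLM call are assumptions for illustration.

def jaccard(a: set[str], b: set[str]) -> float:
    """Lexical overlap score, used here instead of embedding similarity."""
    return len(a & b) / len(a | b) if a | b else 0.0

def retrieve_examples(request: str, knowledge_base: list[dict], k: int = 3) -> list[dict]:
    """Pick the k example models whose descriptions best match the request."""
    req_tokens = set(request.lower().split())
    ranked = sorted(
        knowledge_base,
        key=lambda ex: jaccard(req_tokens, set(ex["description"].lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(request: str, examples: list[dict]) -> str:
    """Assemble a few-shot prompt from the retrieved DSL examples."""
    shots = "\n\n".join(
        f"Request: {ex['description']}\nModel:\n{ex['model']}" for ex in examples
    )
    return (
        "You generate models in the MontiCore Sequence Diagram Language.\n\n"
        f"{shots}\n\nRequest: {request}\nModel:\n"
    )

# Toy knowledge base of (description, DSL model) pairs -- purely illustrative.
knowledge_base = [
    {"description": "login interaction between user and server",
     "model": "sequencediagram Login { user -> server : authenticate(); }"},
    {"description": "checkout flow in a web shop",
     "model": "sequencediagram Checkout { customer -> shop : pay(); }"},
]

request = "a user registering an account with the server"
prompt = build_prompt(request, retrieve_examples(request, knowledge_base))
# The assembled prompt would then be sent to GPT-4 or another LLM; the call is omitted here.
print(prompt)
```

In this sketch the retrieval step only decides which examples enter the prompt; whether the downstream model actually produces valid sequence diagrams depends on the quality and coverage of the knowledge base, which is the point the paper evaluates.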