Title: Combining Retrieval-Augmented Generation and Few-Shot Learning for Model Synthesis of Uncommon DSLs
Authors: Baumann, Nils; Diaz, Juan Sebastian; Michael, Judith; Netz, Lukas; Nqiri, Haron; Reimer, Jan; Rumpe, Bernhard
Editors: Giese, Holger; Rosenthal, Kristina
Date issued: 2024 (record available 2024-03-12)
URI: https://dl.gi.de/handle/20.500.12116/43781
DOI: 10.18420/modellierung2024-ws-007
Type: Text/Workshop Paper
Language: en
Keywords: LLMs; RAG; DSLs; Few-Shot Learning; MDSE

Abstract: We introduce a method that empowers large language models (LLMs) to generate models for domain-specific languages (DSLs) for which the LLM has little to no training data. Common LLMs such as GPT-4, Llama 2, or Bard are trained on publicly available data and can therefore produce models for well-known modeling languages such as PlantUML; however, they perform worse on lesser-known or unpublished DSLs. Previous work focused on few-shot learning (FSL) to synthesize models but did not address or evaluate the potential of retrieval-augmented generation (RAG) to provide fitting examples for the FSL-based modeling approach. In this work, we propose a toolchain and test each building block individually: we use the MontiCore Sequence Diagram Language, on which GPT-4 has minimal training data, to assess the extent to which FSL enhances the likelihood of synthesizing an accurate model. Additionally, we evaluate how effectively RAG can identify suitable models for user requests, and we determine whether GPT-4 can distinguish between requests for a specific model and requests for general information. We show that RAG and FSL can enable simple model synthesis for uncommon DSLs, as long as a fitting knowledge base can be accessed to provide the needed examples for the FSL approach.
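
To make the pipeline described in the abstract concrete, the following is a minimal sketch (not the authors' toolchain) of the RAG + FSL pattern: retrieve the knowledge-base examples most similar to the user request, then assemble them into a few-shot prompt for the LLM. The knowledge base, the toy bag-of-words retriever, and all identifiers are illustrative assumptions; a real setup would use embedding-based retrieval and send the prompt to GPT-4.

```python
# Illustrative sketch only, not the paper's implementation: retrieve similar
# examples from a hypothetical knowledge base, then build a few-shot prompt.
from collections import Counter
import math

# Hypothetical knowledge base of (description, DSL model) pairs.
KNOWLEDGE_BASE = [
    ("customer places an order",
     "sequencediagram OrderFlow { customer -> shop : placeOrder(); }"),
    ("user logs into a system",
     "sequencediagram Login { user -> system : authenticate(); }"),
]

def bow(text):
    """Toy bag-of-words vector; a real retriever would use embeddings."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(request, k=1):
    """Rank knowledge-base entries by similarity to the user request."""
    q = bow(request)
    ranked = sorted(KNOWLEDGE_BASE,
                    key=lambda entry: cosine(q, bow(entry[0])),
                    reverse=True)
    return ranked[:k]

def build_prompt(request):
    """Assemble a few-shot prompt from the retrieved examples."""
    shots = "\n\n".join(f"Request: {desc}\nModel: {model}"
                        for desc, model in retrieve(request))
    return f"{shots}\n\nRequest: {request}\nModel:"

prompt = build_prompt("a shopper orders a product online")
print(prompt)  # In the full toolchain, this prompt would be sent to the LLM.
```

The design choice the abstract highlights is exactly this split: RAG selects which examples enter the prompt, and FSL is what lets the LLM imitate a DSL it was barely trained on, so the quality of the knowledge base bounds the quality of the synthesized models.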