Combining Retrieval-Augmented Generation and Few-Shot Learning for Model Synthesis of Uncommon DSLs

Workshop Paper
Gesellschaft für Informatik e.V.


We introduce a method that enables large language models (LLMs) to generate models for domain-specific languages (DSLs) on which the LLM has little to no training data. Common LLMs such as GPT-4, Llama 2, or Bard are trained on publicly available data and can therefore produce models for well-known modeling languages such as PlantUML; however, they perform worse on lesser-known or unpublished DSLs. Previous work used few-shot learning (FSL) to synthesize models but did not address or evaluate the potential of retrieval-augmented generation (RAG) to provide fitting examples for the FSL-based modeling approach. In this work, we propose a toolchain and test each building block individually: we use the MontiCore Sequence Diagram Language, on which GPT-4 has minimal training data, to assess the extent to which FSL increases the likelihood of synthesizing a correct model. Additionally, we evaluate how effectively RAG identifies suitable example models for user requests, and we determine whether GPT-4 can distinguish requests for a specific model from requests for general information. We show that RAG and FSL enable simple model synthesis for uncommon DSLs, as long as a fitting knowledge base is available to provide the examples needed for the FSL approach.
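The pipeline described in the abstract can be sketched in two steps: a RAG step that retrieves the knowledge-base entries most similar to the user request, and an FSL step that prepends those entries as examples to the prompt sent to the LLM. The following is a minimal illustrative sketch, not the authors' implementation: the knowledge base, the DSL snippets, and the bag-of-words embedding are all hypothetical placeholders (a real pipeline would use a neural embedding model and an actual MontiCore model corpus).

```python
from collections import Counter
from math import sqrt

# Hypothetical knowledge base: natural-language descriptions paired with
# example models of the uncommon DSL (toy sequence-diagram snippets).
KNOWLEDGE_BASE = [
    ("customer places an order",
     "sequencediagram Order { customer -> shop : placeOrder(); }"),
    ("user logs into the system",
     "sequencediagram Login { user -> system : login(); }"),
    ("sensor reports a measurement",
     "sequencediagram Report { sensor -> hub : report(); }"),
]

def embed(text: str) -> Counter:
    """Toy bag-of-words embedding; stands in for a neural embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, k: int = 2):
    """RAG step: return the k knowledge-base entries most similar to the query."""
    q = embed(query)
    return sorted(KNOWLEDGE_BASE,
                  key=lambda entry: cosine(q, embed(entry[0])),
                  reverse=True)[:k]

def build_prompt(query: str) -> str:
    """FSL step: prepend the retrieved examples to the user request."""
    shots = "\n\n".join(f"Request: {desc}\nModel: {model}"
                        for desc, model in retrieve(query))
    return f"{shots}\n\nRequest: {query}\nModel:"

print(build_prompt("a user logs in"))
```

The resulting prompt string would then be sent to the LLM, which completes the final `Model:` line in the style of the retrieved examples.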


Baumann, Nils; Diaz, Juan Sebastian; Michael, Judith; Netz, Lukas; Nqiri, Haron; Reimer, Jan; Rumpe, Bernhard (2024): Combining Retrieval-Augmented Generation and Few-Shot Learning for Model Synthesis of Uncommon DSLs. Modellierung 2024 Satellite Events, LLM4Modeling workshop, Potsdam, March 12–15, 2024. Gesellschaft für Informatik e.V. DOI: 10.18420/modellierung2024-ws-007.