Listing by keyword "Code Generation"
- Conference paper: From Natural Language to Web Applications: Using Large Language Models for Model-Driven Software Engineering (Modellierung 2024, 2024). Netz, Lukas; Michael, Judith; Rumpe, Bernhard. We evaluate the use of Large Language Models (LLMs) to transform natural language into models of a predefined domain-specific language (DSL) within the context of model-driven software engineering. In this work, we systematically test the reliability and correctness of the developed tooling to ensure its usability in an automated model-driven engineering context. Until recently, LLMs such as ChatGPT were not sophisticated enough to yield promising results; the new API access and the release of GPT-4 enabled us to develop improved tooling that can be evaluated systematically. This paper introduces an approach that can produce a running web application from simple informal specifications provided by a domain expert with no prior knowledge of any DSL. We extended our toolchain to include ChatGPT and provided the AI with additional DSL-specific context in order to receive models that can be further processed. We performed tests to ensure the syntactic and semantic correctness of the created models. This approach shows the potential of LLMs to bridge the gap between domain experts and developers, and we discuss its current limitations. (A hedged sketch of the prompt-with-DSL-context pattern appears as the first example after this listing.)
- Conference paper: MLProvCodeGen: A Tool for Provenance Data Input and Capture of Customizable Machine Learning Scripts (BTW 2023, 2023). Mustafa, Tarek Al; König-Ries, Birgitta; Samuel, Sheeba. Over the last decade, machine learning (ML) has dramatically changed the application of and research in computer science. It has become increasingly complicated to assure the transparency and reproducibility of advanced ML systems from raw data to deployment. In this paper, we describe an approach that supplies users with an interface to specify a variety of parameters which together provide complete provenance information, and that automatically generates executable ML code from this information. We introduce MLProvCodeGen (Machine Learning Provenance Code Generator), a JupyterLab extension that generates custom code for ML experiments from user-defined metadata. ML workflows can be generated with different data settings, model parameters, methods, and training parameters, and their results can be reproduced in Jupyter Notebooks. We evaluated our approach with two ML applications, image and multiclass classification, and conducted a user evaluation. (A hedged sketch of metadata-driven script generation appears as the second example after this listing.)
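
The first abstract describes giving an LLM DSL-specific context so that informal requirements come back as models that downstream tooling can parse. The following Python sketch illustrates only that general pattern and is not the authors' actual toolchain: the grammar excerpt, prompt wording, `nl_to_model` helper, and the trivial syntax check are all invented for illustration. It assumes the official `openai` Python client (v1 API) and an `OPENAI_API_KEY` in the environment.

```python
# Hypothetical sketch: prompt an LLM with DSL context, then sanity-check the reply.
# The grammar excerpt and validation below are illustrative inventions, not the
# MDSE toolchain described in the paper.
from openai import OpenAI

DSL_CONTEXT = """\
You translate informal requirements into a class-diagram DSL.
Grammar (excerpt):  classdiagram <Name> { class <Name> { <type> <attr>; } }
Reply with the model only, no explanations.
"""

def nl_to_model(requirement: str) -> str:
    """Send an informal specification plus DSL context to the LLM."""
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": DSL_CONTEXT},
            {"role": "user", "content": requirement},
        ],
    )
    return response.choices[0].message.content

def looks_syntactically_valid(model_text: str) -> bool:
    # Stand-in for a real parser check; the paper validates models with
    # proper DSL tooling before generating the web application.
    return model_text.strip().startswith("classdiagram")

if __name__ == "__main__":
    model = nl_to_model("A web shop where customers place orders for products.")
    print("syntactically plausible:", looks_syntactically_valid(model))
    print(model)
```

In practice the returned model would be fed to a full parser and code generator rather than a prefix check; the point of the sketch is the separation between DSL context, generation, and validation.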
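The second abstract turns user-defined experiment metadata into executable ML code. Below is a minimal sketch of that idea, not MLProvCodeGen's real metadata schema or templates: the `EXPERIMENT` record, the template text, and `generate_script` are hypothetical, showing how a provenance record could be rendered into a reproducible scikit-learn training script.

```python
# Hypothetical sketch of metadata-driven code generation in the spirit of
# MLProvCodeGen: the schema and template are invented for illustration.
EXPERIMENT = {
    "dataset": "mnist_784",
    "model": "RandomForestClassifier",
    "model_params": {"n_estimators": 100, "random_state": 42},
    "test_size": 0.2,
}

TEMPLATE = """\
# Auto-generated from experiment metadata (provenance record: {meta!r})
from sklearn.datasets import fetch_openml
from sklearn.ensemble import {model}
from sklearn.model_selection import train_test_split

X, y = fetch_openml("{dataset}", version=1, return_X_y=True, as_frame=False)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size={test_size}, random_state={seed})
clf = {model}(**{params})
clf.fit(X_train, y_train)
print("accuracy:", clf.score(X_test, y_test))
"""

def generate_script(meta: dict) -> str:
    """Render the metadata into an executable, reproducible training script."""
    return TEMPLATE.format(
        meta=meta,
        model=meta["model"],
        dataset=meta["dataset"],
        test_size=meta["test_size"],
        params=meta["model_params"],
        seed=meta["model_params"]["random_state"],
    )

if __name__ == "__main__":
    # The generated text is itself a runnable Python script; embedding the
    # metadata record in its header keeps the provenance alongside the code.
    print(generate_script(EXPERIMENT))
```

Keeping the full metadata record in the generated script's header is one simple way to make a run self-describing, which is the reproducibility property the abstract emphasizes.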