Listing by keyword "GPT-4"
1 - 2 of 2
- Conference paper: EvalQuiz – LLM-based Automated Generation of Self-Assessment Quizzes in Software Engineering Education (Software Engineering im Unterricht der Hochschulen 2024, 2024). Meißner, Niklas; Speth, Sandro; Kieslinger, Julian; Becker, Steffen. Self-assessment quizzes after lectures, educational videos, or chapters are a common method in software engineering (SE) education to give students the opportunity to test the knowledge they have gained. However, creating these quizzes is time-consuming, cognitively exhausting, and complex, as an expert in the field must create the quizzes and review the lecture material for validity. Therefore, this paper presents a concept for automatically generating self-assessment quizzes from lecture material using a large language model (LLM) to reduce lecturers' workload and simplify the ...
- Conference paper: From Natural Language to Web Applications: Using Large Language Models for Model-Driven Software Engineering (Modellierung 2024, 2024). Netz, Lukas; Michael, Judith; Rumpe, Bernhard. We evaluate the use of Large Language Models (LLMs) to transform natural language into models of a predefined domain-specific language (DSL) within the context of model-driven software engineering. In this work we systematically test the reliability and correctness of the developed tooling to ensure its usability in an automated model-driven engineering context. Until recently, LLMs such as ChatGPT were not sophisticated enough to yield promising results; the new API access and the release of GPT-4 enabled us to develop improved tooling that can be evaluated systematically. This paper introduces an approach that produces a running web application from simple informal specifications provided by a domain expert with no prior knowledge of any DSL. We extended our toolchain to include ChatGPT and provided the AI with additional DSL-specific context in order to receive models that can be processed further. We performed tests to ensure the semantic and syntactic correctness of the created models. This approach shows the potential of LLMs to successfully bridge the gap between domain experts and developers, and we discuss its current limitations.