Listing by keyword "Domain-Specific Languages"
1 - 2 of 2
- Conference paper: Teaching the Use and Engineering of DSLs with JupyterLab: Experiences and Lessons Learned (Modellierung 2022, 2022). Charles, Joel; Jansen, Nico; Michael, Judith; Rumpe, Bernhard. Domain-Specific Languages (DSLs) are tailored to a specific domain, which requires them to provide domain-specific concepts and sophisticated tooling for their engineering; we address these aspects with the language workbench MontiCore. As we use MontiCore for research and teaching, we are interested in lowering the entry barrier to using and engineering MontiCore DSLs. While there are approaches for ready-to-use learning environments, such as web-based editors, only a few provide a solution tailored to specific DSLs. In this paper, we present our experiences using JupyterLab in combination with the MontiCore infrastructure to teach the use and engineering of DSLs in an interactive manner. We have run three practical courses and one conference tutorial with this technical approach. The front end provides immediate feedback and integrates supporting explanations. Initial feedback indicates that this approach can lower the entry barrier to DSL use and engineering for students and practitioners.
- Conference paper: Using Language Workbenches and Domain-Specific Languages for Safety-critical Software Development (Software Engineering and Software Management 2019, 2019). Voelter, Markus. In a 2018 article in the journal Software & Systems Modeling, we discussed the use of DSLs and language workbenches in the context of safety-critical software development. Language workbenches support the efficient creation, integration, and use of domain-specific languages. Typically, they execute models by generating programming-language code, which can lead to increased productivity and higher quality. In safety- or mission-critical environments, however, generated code may not be considered trustworthy because of a lack of trust in the generation mechanisms, which makes it harder to justify the use of language workbenches in such settings. In the SOSYM paper, we demonstrate an approach for using such tools in critical environments. We argue that models created with domain-specific languages are easier to validate, and that the additional risk resulting from the transformation to code can be mitigated by a suitably designed transformation and verification architecture. We validate the approach with an industrial case study from the healthcare domain, and we discuss the degree to which the approach is appropriate for critical software in space, automotive, and robotics systems.