Authors: Thaler, Tom; Houy, Constantin; Fettke, Peter; Loos, Peter
Editors: Betz, Stefanie; Reimer, Ulrich
Date accessioned: 2017-06-21
Date available: 2017-06-21
Date issued: 2016
ISBN: 978-3-88579-649-7
Abstract: The assessment of process modeling exams is a time-consuming and complex task. It is desirable to give each student detailed feedback on their solution in terms of syntactic, semantic, and pragmatic quality. Especially in the case of mass courses with hundreds of participants, individual grading of modeling exams by humans is challenging: besides reliability, consistency, and validity, the efficiency of the grading process must be guaranteed. Against this background, this paper develops initial ideas for the automated assessment of process modeling exams. The goal is to improve modeling education by teaching students not only to model correctly but also to develop good models. Our ideas were prototypically implemented and applied in an exemplary scenario with promising results. We were able to identify limitations but also to derive reliable semi-automated approaches for the assessment of process modeling exams.
Language: en
Title: Automated assessment of process modeling exams: basic ideas and prototypical implementation
Type: Text/Conference Paper
ISSN: 1617-5468