Authors: Tröbs, Eric; Hagedorn, Stefan; Sattler, Kai-Uwe
Editors: König-Ries, Birgitta; Scherzinger, Stefanie; Lehner, Wolfgang; Vossen, Gottfried
Date: 2023-02-23
Year: 2023
ISBN: 978-3-88579-725-8
URI: https://dl.gi.de/handle/20.500.12116/40343
Abstract: Jupyter Notebook is not only a popular tool for publishing data science results, but can also be used for the interactive explanation of teaching content as well as for supervised work on exercises. In order to give students feedback on their solutions, it is necessary to check and evaluate the submitted work. To exploit the possibilities of remote learning as well as to reduce the work needed to evaluate submissions, we present a flexible and efficient framework. It enables automated checking of notebooks for completeness and syntactic correctness as well as fine-grained evaluation of submitted tasks. The framework comes with a high level of parallelization, isolation, and a short and efficient API.
Language: en
Keywords: Jupyter; Teaching; Exercising; Unit-Testing; Automation
Title: JPTest - Grading Data Science Exercises in Jupyter Made Short, Fast and Scalable
Type: Text/Conference Paper
DOI: 10.18420/BTW2023-37
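
The kind of automated notebook checking the abstract describes can be illustrated with a short sketch. The following is not JPTest's API (which this record does not show), but a minimal, assumed Python example built on the independent nbformat and nbclient libraries; the file name submission.ipynb, the grade_submission helper, and the expected task result "42" are all hypothetical.

```python
# Hypothetical sketch of automated notebook grading in the spirit of the
# abstract; this does NOT use JPTest's actual API. It executes a submitted
# notebook and awards points for one sample task.
import nbformat
from nbclient import NotebookClient


def grade_submission(path: str) -> int:
    """Execute a submitted notebook and award points for one sample task."""
    nb = nbformat.read(path, as_version=4)

    # Each submission runs in its own fresh kernel, which provides a degree
    # of isolation; allow_errors lets execution continue past failing cells
    # so errors show up as outputs, and the timeout guards against
    # non-terminating student code.
    client = NotebookClient(nb, timeout=60, kernel_name="python3",
                            allow_errors=True)
    client.execute()

    points = 0

    # Completeness/syntax check: every code cell ran without an error output.
    for cell in nb.cells:
        if cell.cell_type == "code":
            if any(out.output_type == "error"
                   for out in cell.get("outputs", [])):
                return points  # stop grading at the first failing cell

    # Fine-grained task check (hypothetical): the last cell is expected to
    # print the result of task 1, e.g. the number 42.
    last_outputs = nb.cells[-1].get("outputs", [])
    texts = [out.get("text", "")
             for out in last_outputs if out.output_type == "stream"]
    if any("42" in text for text in texts):
        points += 2  # award points for a correct task result

    return points


if __name__ == "__main__":
    print(grade_submission("submission.ipynb"))
```

Because each call executes one notebook in its own kernel, a grader built along these lines could be parallelized across submissions (e.g. with a process pool), which is one plausible reading of the parallelization and isolation the abstract claims for the framework itself.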