Title: Improving the Scalability and Security of Execution Environments for Auto-Graders in the Context of MOOCs
Authors: Serth, Sebastian; Köhler, Daniel; Marschke, Leonard; Auringer, Felix; Hanff, Konrad; Hellenberg, Jan-Eric; Kantusch, Tobias; Paß, Maximilian; Meinel, Christoph
Editors: Greubel, André; Strickroth, Sven; Striewe, Michael
Date issued: 2021 (available online 2021-11-15)
URI: https://dl.gi.de/handle/20.500.12116/37539
DOI: 10.18420/abp2021-1
Type: Text/Conference Paper
Language: en
Keywords: Auto-Grader; Scalability; MOOC; Programming; Security; Execution

Abstract: Learning a programming language requires learners to write code themselves, execute their programs interactively, and receive feedback on the correctness of their code. Many approaches using so-called auto-graders exist to grade students' submissions and provide automated feedback. University classes with hundreds of students and Massive Open Online Courses (MOOCs) with thousands of learners often use such systems. Assessing the submissions usually involves executing the students' source code and thus imposes scalability and security requirements on these systems. In this paper, we evaluate different execution environments and orchestration solutions for auto-graders. We compare the most promising open-source tools with regard to their usability in the scalable environment required for MOOCs. According to our evaluation, Nomad in conjunction with Docker fulfills most requirements. We derive implications for the productive use of Nomad for an auto-grader in MOOCs.
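To illustrate the paper's conclusion, the following is a minimal sketch of how an auto-grader could hand a single student submission to Nomad as a short-lived Docker job via Nomad's HTTP API. The helper name, job naming scheme, Docker image, resource limits, and Nomad address (the agent's default port 4646 on localhost) are illustrative assumptions, not details taken from the paper.

```python
import requests

NOMAD_ADDR = "http://localhost:4646"  # assumption: default local Nomad agent


def register_grading_job(submission_id: str, image: str, command: list[str]) -> dict:
    """Register a short-lived batch job that runs one submission in Docker.

    Hypothetical helper: the payload follows Nomad's JSON job specification;
    naming, image, and resource limits are illustrative assumptions.
    """
    job = {
        "Job": {
            "ID": f"grade-{submission_id}",
            "Name": f"grade-{submission_id}",
            "Type": "batch",               # run once to completion, then stop
            "Datacenters": ["dc1"],
            "TaskGroups": [{
                "Name": "grading",
                "Count": 1,
                "Tasks": [{
                    "Name": "run-submission",
                    "Driver": "docker",    # isolate execution in a container
                    "Config": {
                        "image": image,    # e.g. a sandboxed language runtime
                        "command": command[0],
                        "args": command[1:],
                    },
                    # Cap resources so one submission cannot starve others.
                    "Resources": {"CPU": 200, "MemoryMB": 256},
                }],
            }],
        }
    }
    # PUT /v1/jobs registers the job with the Nomad scheduler.
    response = requests.put(f"{NOMAD_ADDR}/v1/jobs", json=job, timeout=10)
    response.raise_for_status()
    return response.json()


if __name__ == "__main__":
    # Example: run the unit tests for submission 42 inside a Python container.
    print(register_grading_job("42", "python:3.10-slim",
                               ["python3", "-m", "pytest", "/code/tests"]))
```

Using a batch-type job with per-task CPU and memory caps reflects the two requirements the abstract highlights: the scheduler spreads many concurrent grading runs across the cluster (scalability), while the Docker driver and resource limits contain untrusted student code (security).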