Improving the Scalability and Security of Execution Environments for Auto-Graders in the Context of MOOCs
dc.contributor.author | Serth, Sebastian
dc.contributor.author | Köhler, Daniel
dc.contributor.author | Marschke, Leonard
dc.contributor.author | Auringer, Felix
dc.contributor.author | Hanff, Konrad
dc.contributor.author | Hellenberg, Jan-Eric
dc.contributor.author | Kantusch, Tobias
dc.contributor.author | Paß, Maximilian
dc.contributor.author | Meinel, Christoph
dc.contributor.editor | Greubel, André
dc.contributor.editor | Strickroth, Sven
dc.contributor.editor | Striewe, Michael
dc.date.accessioned | 2021-11-15T05:03:40Z
dc.date.available | 2021-11-15T05:03:40Z
dc.date.issued | 2021
dc.description.abstract | Learning a programming language requires learners to write code themselves, execute their programs interactively, and receive feedback about the correctness of their code. Many approaches with so-called auto-graders exist to grade students' submissions and provide feedback on them automatically. University classes with hundreds of students or Massive Open Online Courses (MOOCs) with thousands of learners often use these systems. Assessing the submissions usually includes executing the students' source code and thus imposes requirements on the scalability and security of the systems. In this paper, we evaluate different execution environments and orchestration solutions for auto-graders. We compare the most promising open-source tools regarding their usability in a scalable environment required for MOOCs. According to our evaluation, Nomad, in conjunction with Docker, fulfills most requirements. We derive implications for the productive use of Nomad for an auto-grader in MOOCs. | en
dc.identifier.doi | 10.18420/abp2021-1
dc.identifier.uri | https://dl.gi.de/handle/20.500.12116/37539
dc.language.iso | en
dc.relation.ispartof | Proceedings of the Fifth Workshop "Automatische Bewertung von Programmieraufgaben" (ABP 2021), virtual event, October 28-29, 2021
dc.relation.ispartofseries | Workshop „Automatische Bewertung von Programmieraufgaben“
dc.subject | Auto-Grader
dc.subject | Scalability
dc.subject | MOOC
dc.subject | Programming
dc.subject | Security
dc.subject | Execution
dc.title | Improving the Scalability and Security of Execution Environments for Auto-Graders in the Context of MOOCs | en
dc.type | Text/Conference Paper
gi.conference.sessiontitle | Vollbeiträge „Architekturen für die automatische Bewertung“
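The abstract concludes that Nomad, in conjunction with Docker, fulfills most of the scalability and security requirements for executing untrusted student code. As a rough, hedged illustration of what such a setup can look like (not taken from the paper), the following Python sketch registers a one-shot Nomad batch job via Nomad's HTTP API that runs a submission inside an isolated Docker container. The Nomad address, Docker image, test command, submission path, and resource limits are all illustrative assumptions and would differ in a production auto-grader.

```python
import requests

# Assumption: a Nomad agent reachable on the default port; adjust as needed.
NOMAD_ADDR = "http://localhost:4646"


def submit_grading_job(submission_id: str) -> str:
    """Register a one-shot Nomad batch job that executes a student's
    submission inside a resource-limited, network-isolated Docker container.
    Returns the evaluation ID reported by Nomad."""
    job = {
        "Job": {
            "ID": f"grade-{submission_id}",
            "Name": f"grade-{submission_id}",
            "Type": "batch",                 # run once, then terminate
            "Datacenters": ["dc1"],
            "TaskGroups": [{
                "Name": "grader",
                "Count": 1,
                "Tasks": [{
                    "Name": "run-tests",
                    "Driver": "docker",      # execute inside a container
                    "Config": {
                        "image": "python:3.10-slim",   # hypothetical runtime image
                        "command": "python3",
                        # Hypothetical test command and submission path.
                        "args": ["-m", "unittest", "discover", "/submission"],
                        "network_mode": "none",        # no network for untrusted code
                    },
                    "Resources": {
                        "CPU": 256,        # MHz, illustrative limit
                        "MemoryMB": 256,   # hard memory limit, illustrative
                    },
                }],
            }],
        }
    }
    # POST /v1/jobs registers (or updates) the job with the Nomad cluster.
    response = requests.post(f"{NOMAD_ADDR}/v1/jobs", json=job, timeout=10)
    response.raise_for_status()
    return response.json()["EvalID"]


if __name__ == "__main__":
    print(submit_grading_job("12345"))
```

In this sketch, the batch job type and per-task CPU/memory limits address scalability (Nomad schedules each grading run onto any node with free capacity), while the container boundary and disabled networking address isolation of untrusted code; the paper's actual configuration may differ.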
Files
Original bundle
- Name: paper1.pdf
- Size: 220.11 KB
- Format: Adobe Portable Document Format