Authors: Rüdian, Sylvio; Quandt, Joachim; Hahn, Kathrin; Pinkwart, Niels
Editors: Zender, Raphael; Ifenthaler, Dirk; Leonhardt, Thiemo; Schumacher, Clara
Date accessioned: 2020-09-08
Date available: 2020-09-08
Date issued: 2020
ISBN: 978-3-88579-702-9
URI: https://dl.gi.de/handle/20.500.12116/34170
Abstract: Giving feedback for open writing tasks in online language learning courses is time-consuming and expensive, as it requires manpower. Existing tools can support tutors in various ways, e.g. by finding mistakes. However, whether a submission is appropriate to what was taught in the course section still has to be rated by experienced tutors. In this paper, we explore what kinds of submission meta-data can be extracted from the texts of an online course and used to predict tutor ratings. Our approach is generalizable, scalable, and works with every online language course whose language is supported by the tools that we use. We applied a threshold-based approach and trained a neural network to compare the results. Both methods achieve an accuracy of 70% in 10-fold cross-validation. The approach also identifies "fake" submissions produced by automatic translators, enabling more fine-grained feedback. It does not replace tutors, but instead provides them with a rating based on objective metrics and other submissions. This helps to standardize ratings on a scale that could otherwise vary due to subjective evaluations.
Language: en
Keywords: feedback; online courses; language learning
Title: Automatic Feedback for Open Writing Tasks: Is this text appropriate for this lecture?
Type: Text/Conference Paper
ISSN: 1617-5468
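
The abstract outlines an evaluation setup: extract meta-data features from submitted texts, then predict the tutor rating with either a simple threshold rule or a neural network, comparing both under 10-fold cross-validation. The sketch below illustrates that kind of setup only; the feature set (word count, mean sentence length, type-token ratio), the threshold value, the MLP configuration, and the synthetic data are illustrative assumptions, not the features, model, or data used in the paper.

```python
# Illustrative sketch of a threshold baseline vs. a small neural network under
# 10-fold cross-validation. All features, thresholds, and data are hypothetical,
# not the paper's actual pipeline.
import numpy as np
from sklearn.model_selection import KFold, cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def extract_metadata(text: str) -> list:
    """Toy meta-data features: word count, mean sentence length, type-token ratio."""
    words = text.split()
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".") if s.strip()]
    word_count = len(words)
    mean_sent_len = word_count / max(len(sentences), 1)
    type_token_ratio = len({w.lower() for w in words}) / max(word_count, 1)
    return [word_count, mean_sent_len, type_token_ratio]

# Synthetic stand-in for (submission text, binary tutor rating) pairs.
rng = np.random.default_rng(0)
texts = ["word " * int(n) for n in rng.integers(20, 200, size=200)]
X = np.array([extract_metadata(t) for t in texts])
# Toy "appropriate" label loosely tied to length, with noise to keep it non-trivial.
y = ((X[:, 0] + rng.normal(0, 30, size=len(X))) > 80).astype(int)

def threshold_accuracy(X, y, feature=0, threshold=80.0):
    """Threshold-style baseline: classify by a cutoff on a single feature."""
    preds = (X[:, feature] > threshold).astype(int)
    return float((preds == y).mean())

print("threshold baseline accuracy:", threshold_accuracy(X, y))

# Small neural network evaluated with 10-fold cross-validation.
cv = KFold(n_splits=10, shuffle=True, random_state=0)
mlp = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0),
)
scores = cross_val_score(mlp, X, y, cv=cv)
print("MLP 10-fold CV accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```

On real course data, the features would come from the language-analysis tools mentioned in the abstract, and the labels from tutor ratings; the cross-validation comparison between the threshold rule and the trained classifier would remain the same shape.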