Automatic Feedback for Open Writing Tasks: Is this text appropriate for this lecture?

Text/Conference Paper








Gesellschaft für Informatik e.V.


Giving feedback for open writing tasks in online language learning courses is time-consuming and expensive, because it requires human effort. Existing tools can support tutors in various ways, e.g. by finding mistakes. However, whether a submission is appropriate with respect to what was taught in the corresponding course section still has to be rated by experienced tutors. In this paper, we explore which meta-data can be extracted from texts submitted to an online course and used to predict tutor ratings. Our approach is generalizable and scalable, and it works with any online language course whose language is supported by the tools we use. We applied a threshold-based approach and trained a neural network to compare the results. Both methods achieve an accuracy of 70% in 10-fold cross-validation. The approach also identifies “fake” submissions produced by automatic translators, enabling more fine-grained feedback. It does not replace tutors, but instead provides them with a rating based on objective metrics and other submissions. This helps to standardize ratings on a scale that could otherwise vary due to subjective evaluation.
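The evaluation setup described above (a threshold-based classifier over extracted meta-data, scored with 10-fold cross-validation) can be illustrated with a minimal sketch. Everything here is hypothetical: the single feature, its values, and the 0.5 cut-off are invented for illustration and are not taken from the paper.

```python
# Hypothetical sketch: rate submissions with a fixed threshold on one
# meta-data feature and estimate accuracy via plain 10-fold cross-validation.
# The feature, labels, and threshold are illustrative, not from the paper.
import random

random.seed(0)

# Synthetic data: one meta-data feature per submission (e.g. a lexical
# diversity score in [0, 1]) and a binary tutor rating that mostly
# follows the feature, with some noise to mimic subjective judgments.
features = [random.random() for _ in range(100)]
data = [(x, int(x + random.gauss(0, 0.1) > 0.5)) for x in features]

def threshold_classifier(x, threshold=0.5):
    """Rate a submission as appropriate (1) if its feature exceeds the threshold."""
    return int(x > threshold)

def ten_fold_accuracy(samples):
    """Average the classifier's accuracy over 10 equally sized folds."""
    fold_size = len(samples) // 10
    accuracies = []
    for k in range(10):
        fold = samples[k * fold_size:(k + 1) * fold_size]
        correct = sum(threshold_classifier(x) == y for x, y in fold)
        accuracies.append(correct / len(fold))
    return sum(accuracies) / len(accuracies)

acc = ten_fold_accuracy(data)
print(f"mean 10-fold accuracy: {acc:.2f}")
```

Because the threshold is fixed rather than learned, each fold here only evaluates; in the paper's trained variant (the neural network), the nine remaining folds would serve as training data for each held-out fold.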


Rüdian, Sylvio; Quandt, Joachim; Hahn, Kathrin; Pinkwart, Niels (2020): Automatic Feedback for Open Writing Tasks: Is this text appropriate for this lecture? DELFI 2020 – Die 18. Fachtagung Bildungstechnologien der Gesellschaft für Informatik e.V. Bonn: Gesellschaft für Informatik e.V. PISSN: 1617-5468. ISBN: 978-3-88579-702-9. pp. 265-276. Online. 14.-18. September 2020.