Conference Paper
Automatic Feedback for Open Writing Tasks: Is this text appropriate for this lecture?
Full-text URI
Document Type
Text/Conference Paper
Additional Information
Date
2020
Authors
Journal Title
Journal ISSN
Volume Title
Publisher
Gesellschaft für Informatik e.V.
Abstract
Giving feedback for open writing tasks in online language learning courses is time-consuming and expensive, as it requires manpower. Existing tools can support tutors in various ways, e.g., by finding mistakes. However, whether a submission is appropriate to what was taught in the respective course section still has to be judged by experienced tutors. In this paper, we explore what kind of meta-data can be extracted from texts submitted to an online course and used to predict tutor ratings. Our approach is generalizable and scalable, and it works with any online language course whose language is supported by the tools we use. We applied a threshold-based approach and trained a neural network to compare the results; both methods achieve an accuracy of 70% in 10-fold cross-validation. The approach also identifies “fake” submissions produced by automatic translators, enabling more fine-grained feedback. It does not replace tutors, but instead provides them with a rating based on objective metrics and on other submissions. This helps to standardize ratings on a common scale, which could otherwise vary due to subjective evaluation.
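The abstract contrasts a threshold-based baseline with a trained neural network, both evaluated with 10-fold cross-validation on meta-data extracted from submissions. Below is a minimal sketch of such a comparison, not the authors' implementation: the synthetic feature values, the single-feature threshold rule, the network size, and the scikit-learn setup are all illustrative assumptions.

```python
# Minimal sketch: compare a threshold baseline and a small neural network
# with 10-fold cross-validation on (toy) submission meta-data features.
# Features, threshold rule, and model settings are assumptions, not the paper's.
import numpy as np
from sklearn.base import BaseEstimator, ClassifierMixin
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler


class ThresholdBaseline(BaseEstimator, ClassifierMixin):
    """Rate a submission as appropriate if one feature exceeds a learned cut-off."""

    def __init__(self, feature_index=0):
        self.feature_index = feature_index

    def fit(self, X, y):
        values = X[:, self.feature_index]
        # Pick the cut-off that best separates the two tutor ratings on the training folds.
        self.threshold_ = max(
            np.unique(values), key=lambda t: np.mean((values >= t) == y)
        )
        return self

    def predict(self, X):
        return (X[:, self.feature_index] >= self.threshold_).astype(int)


# Toy stand-in for extracted meta-data (e.g. text length, error counts, vocabulary overlap).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(int)  # 1 = "appropriate"

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
models = [
    ("threshold baseline", ThresholdBaseline()),
    ("neural network", make_pipeline(
        StandardScaler(),
        MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
    )),
]
for name, model in models:
    scores = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
    print(f"{name}: {scores.mean():.2f} mean accuracy (10-fold CV)")
```

On real course data, the feature matrix would hold the extracted submission meta-data and the labels would be tutor ratings; the rest of the evaluation loop stays the same.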