Browsing by subject "online courses"
1 - 2 of 2
- Conference paper: Automatic Feedback for Open Writing Tasks: Is this text appropriate for this lecture? (DELFI 2020 – Die 18. Fachtagung Bildungstechnologien der Gesellschaft für Informatik e.V., 2020) Rüdian, Sylvio; Quandt, Joachim; Hahn, Kathrin; Pinkwart, Niels
  Giving feedback for open writing tasks in online language learning courses is time-consuming and expensive, as it requires manpower. Existing tools can support tutors in various ways, e.g. by finding mistakes. However, whether a submission is appropriate to what was taught in the course section still has to be judged by experienced tutors. In this paper, we explore which submission meta-data can be extracted from the texts of an online course and used to predict tutor ratings. Our approach is generalizable, scalable, and works with every online language course whose language is supported by the tools we use. We applied a threshold-based approach and trained a neural network to compare the results. Both methods achieve an accuracy of 70% in 10-fold cross-validation. The approach also identifies “fake” submissions produced by automatic translators, enabling more fine-grained feedback. It does not replace tutors, but provides them with a rating based on objective metrics and other submissions. This helps to standardize ratings on a common scale, which could otherwise vary due to subjective evaluation.
- Conference paper: Is the context-based Word2Vec representation useful to determine Question Words for Generators? (DELFI 2020 – Die 18. Fachtagung Bildungstechnologien der Gesellschaft für Informatik e.V., 2020) Rüdian, Sylvio; Pinkwart, Niels
  Question and answer generation approaches focus on the quality and correctness of the questions generated for online courses, but often fail to choose a suitable question word, a deficiency reported by many previous studies. In this experimental study, we explored whether the semantics-based word2vec representation can be used to predict question words. We compared two prediction pipelines and observed that splitting the problem into several subproblems performs similarly to feeding a neural network with all the data. Although taking the context-based representation into account is promising, the success rate is still low, yet better than guessing.
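The first entry compares a simple threshold rule with a neural network, both evaluated with 10-fold cross-validation on meta-data extracted from submissions. The sketch below illustrates that kind of comparison only in outline; the synthetic features, the vocabulary-overlap threshold, and the MLP configuration are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): rating submissions as
# appropriate/inappropriate from extracted meta-data features, comparing a
# simple threshold rule with a small neural network under 10-fold CV.
# Feature meanings and data are hypothetical placeholders.
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical meta-data per submission, e.g. length, error rate,
# overlap with the course vocabulary (synthetic values for illustration).
X = rng.normal(size=(200, 3))
y = (X[:, 2] + 0.5 * rng.normal(size=200) > 0).astype(int)  # 1 = appropriate

# Threshold-based baseline: accept if the overlap feature exceeds a cut-off.
threshold_preds = (X[:, 2] > 0.0).astype(int)
print("threshold accuracy:", (threshold_preds == y).mean())

# Small feed-forward network, evaluated with 10-fold cross-validation.
clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0),
)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
print("neural network 10-fold CV accuracy:", scores.mean())
```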
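The second entry predicts question words from a context-based word2vec representation. The sketch below shows one plausible reading of that idea: averaged word vectors of an answer phrase feed a classifier over question words. The toy corpus, the (phrase, question word) pairs, and the logistic-regression classifier are hypothetical placeholders, not the authors' pipeline.

```python
# Illustrative sketch (not the authors' pipeline): averaged word2vec vectors
# of an answer phrase are used to predict a question word.
import numpy as np
from gensim.models import Word2Vec
from sklearn.linear_model import LogisticRegression

# Tiny toy corpus; a real course corpus would be used instead.
corpus = [
    ["the", "teacher", "explains", "grammar", "in", "the", "classroom"],
    ["students", "met", "on", "monday", "in", "berlin"],
    ["anna", "writes", "an", "essay", "about", "history"],
]
w2v = Word2Vec(corpus, vector_size=50, window=3, min_count=1, epochs=100, seed=0)

def phrase_vector(words):
    """Average the word2vec vectors of the in-vocabulary words of a phrase."""
    vecs = [w2v.wv[w] for w in words if w in w2v.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(w2v.vector_size)

# Hypothetical (answer phrase -> question word) training pairs.
pairs = [
    (["the", "teacher"], "who"),
    (["in", "berlin"], "where"),
    (["on", "monday"], "when"),
    (["anna"], "who"),
]
X = np.array([phrase_vector(p) for p, _ in pairs])
y = [q for _, q in pairs]

# Classify the question word from the averaged context vector.
clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict([phrase_vector(["in", "the", "classroom"])]))
```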