Listing by keyword "feedback"
1 - 2 of 2
- Conference abstract: Automatic feedback and hints on steps students take when learning how to program (21. Fachtagung Bildungstechnologien (DELFI), 2023). Jeuring, Johan.
  Every year, millions of students learn how to write programs. Learning activities for beginners almost always include programming tasks that require a student to write a program to solve a particular problem. When learning how to solve such a task, many students need feedback on their previous actions, and hints on how to proceed. For tasks such as programming, which are most often solved stepwise, the feedback should take into account the steps a student has taken towards implementing a solution, and the hints should help a student complete or improve a possibly partial solution. In this talk I will give an overview of approaches to automatic feedback and hints on programming steps and discuss our research on how to evaluate the quality of feedback and hints. I will also take the opportunity to involve the audience in some of the dilemmas we are facing.
- Conference paper: Automatic Feedback for Open Writing Tasks: Is this text appropriate for this lecture? (DELFI 2020 – Die 18. Fachtagung Bildungstechnologien der Gesellschaft für Informatik e.V., 2020). Rüdian, Sylvio; Quandt, Joachim; Hahn, Kathrin; Pinkwart, Niels.
  Giving feedback for open writing tasks in online language learning courses is time-consuming and expensive, as it requires manpower. Existing tools can support tutors in various ways, e.g. by finding mistakes. However, whether a submission is appropriate to what was taught in the course section still has to be rated by experienced tutors. In this paper, we explore what kind of submission meta-data from texts of an online course can be extracted and used to predict tutor ratings. Our approach is generalizable, scalable and works with every online language course where the language is supported by the tools that we use. We applied a threshold-based approach and trained a neural network to compare the results. Both methods achieve an accuracy of 70% in 10-fold cross-validation. This approach also identifies "fake" submissions from automatic translators to enable more fine-granular feedback. It does not replace tutors, but instead provides them with a rating based on objective metrics and other submissions. This helps to standardize ratings on a scale, which could otherwise vary due to subjective evaluations.
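The evaluation setup described in the second abstract (a threshold-based rule compared against a trained neural network under 10-fold cross-validation on extracted text meta-data) can be illustrated with a minimal sketch. This is not the authors' code or data: the meta-features, labels, and scikit-learn models below are hypothetical stand-ins chosen only to show the comparison pattern.

```python
# Illustrative sketch (assumed setup, not the paper's implementation):
# compare a threshold-based classifier with a small neural network
# using 10-fold cross-validation, as the abstract describes.
import numpy as np
from sklearn.base import BaseEstimator, ClassifierMixin
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical per-submission meta-data (synthetic), e.g. token count,
# error rate, vocabulary overlap with the course section.
X = rng.normal(size=(200, 3))
# Binary tutor rating: 1 = appropriate for the lecture, 0 = not.
y = (X[:, 2] + 0.5 * X[:, 0] + rng.normal(scale=0.5, size=200) > 0).astype(int)

class ThresholdClassifier(BaseEstimator, ClassifierMixin):
    """Stand-in for a threshold-based approach: rate a submission
    positive if one meta-feature exceeds a cutoff learned on the
    training folds."""
    def fit(self, X, y):
        # Grid-search a cutoff on one feature for best training accuracy.
        feat = X[:, 2]
        candidates = np.quantile(feat, np.linspace(0.05, 0.95, 19))
        accs = [((feat > t).astype(int) == y).mean() for t in candidates]
        self.threshold_ = candidates[int(np.argmax(accs))]
        return self
    def predict(self, X):
        return (X[:, 2] > self.threshold_).astype(int)

for name, model in [
    ("threshold", ThresholdClassifier()),
    ("neural net", make_pipeline(StandardScaler(),
                                 MLPClassifier(hidden_layer_sizes=(16,),
                                               max_iter=2000,
                                               random_state=0))),
]:
    scores = cross_val_score(model, X, y, cv=10)  # 10-fold CV, as in the paper
    print(f"{name}: mean accuracy {scores.mean():.2f}")
```

Running both models through the same cross-validation splits is what makes the abstract's comparison meaningful: a single learned cutoff is a simple, interpretable baseline, and the paper reports both methods at about 70% accuracy.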