Authors: Wolf, Karsten D.; Maya, Fatima; Heilmann, Lisanne; Kiesler, Natalie; Schulz, Sandra
Date Available: 2024-10-21
Date Issued: 2024
URI: https://dl.gi.de/handle/20.500.12116/45057
Abstract: Providing timely formative feedback to students is very important for supporting self-regulated learning and deep learning strategies. Feedback has been shown to increase student engagement, satisfaction, and learning outcomes, especially in generative learning tasks such as ePortfolios and other forms of multimodal composition. However, providing detailed formative feedback places high demands on teachers' resources. It would therefore be highly beneficial if Large Language Models (LLMs) could be used to support the feedback process. This paper first describes a general architecture for multimodal formative assessment analysis and feedback generation. The architecture is based on assessment rubrics, from which task-specific AI analysis pipelines are built to generate explainable assessment metrics, which in turn are used to produce helpful feedback. An example feedback pipeline for student video submissions in an ePortfolio is given, along with a prompting chain for feedback generation. The paper also describes the further steps necessary to evaluate and optimise this process in real classroom scenarios.
Language: en
Keywords: ePortfolios; Explanatory Videos; Multimodal Formative Assessment; Large Language Models (LLM); Prompt Engineering; Few-Shot Learning; Assessment Rubrics
Title: Explainable Feedback for Learning Based on Rubric-Based Multimodal Assessment Analytics with AI
Type: Text/Conference Paper
DOI: 10.18420/delfi2024-ws-40