Authors: Normann, Marc; Stolzenburg, Frieder
Date: 2023-09-20
Year: 2023
Handle: https://dl.gi.de/handle/20.500.12116/42407

Abstract: The field of AI often provides opaque methods and algorithms that are based on personal data. Challenges can therefore arise regarding acceptance and trust in an adaptive learning environment that provides learning recommendations based on students' interactions with learning content. To prevent users from rejecting such systems and to ensure fair usage in the educational sector, this research project aims to identify the most important factors and conditions for the acceptance of adaptive learning systems. The PhD project contributes at the intersection of engineering and human-machine interfaces, with a focus on social science and on the trustworthy and responsible handling of educational AI systems. The role of explainable AI methods in relation to trust and acceptance will be studied, and consequent changes in students' motivation may be examined.

Language: en
Keywords: Explainable AI; Trustworthy AI; E-Learning; Motivation
Title: Students' Acceptance of Explainable, AI-based Learning Path Recommendations in an Adaptive Learning System
Type: Text
DOI: 10.18420/ki2023-dc-07