Title: Auto-Generating Multimedia Language Learning Material for Children with Off-the-Shelf AI
Authors: Draxler, Fiona; Haller, Laura; Schmidt, Albrecht; Chuang, Lewis L.
Editors: Mühlhäuser, Max; Reuter, Christian; Pfleging, Bastian; Kosch, Thomas; Matviienko, Andrii; Gerling, Kathrin; Mayer, Sven; Heuten, Wilko; Döring, Tanja; Müller, Florian; Schmitz, Martin
Date: 2022-08-31
Language: en
Type: Text/Conference Paper
DOI: 10.1145/3543758.3543777
URI: https://dl.gi.de/handle/20.500.12116/39291
Keywords: Mobile Language Learning; Content Generation; Object Detection; Applied Machine Learning

Abstract: The unique affordances of mobile devices enable the design of novel language learning experiences with auto-generated learning materials. Thus, they can support independent learning without increasing the burden on teachers. In this paper, we investigate the potential and the design requirements of such learning experiences for children. We implement a novel mobile app that auto-generates context-based multimedia material for learning English. It automatically labels photos that children take with the app and uses them as triggers for generating content via machine translation, image retrieval, and text-to-speech. An exploratory study with 25 children showed that they engaged to an equal extent with this app and with a non-personal version that used random instead of personal photos. Overall, the children appreciated the independence gained compared to learning at school but missed their teachers' support. From a technological perspective, we found that auto-generation works in many cases. However, handling erroneous input, such as blurry images and spelling mistakes, is crucial for children as a target group. We conclude with design recommendations for future projects, including scaffolds for the photo-taking process and information redundancy for identifying inaccurate auto-generation results.