Authors: Rüdian, Sylvio; Heuts, Alexander; Pinkwart, Niels
Editors: Zender, Raphael; Ifenthaler, Dirk; Leonhardt, Thiemo; Schumacher, Clara
Date available: 2020-09-08
Date issued: 2020
ISBN: 978-3-88579-702-9
ISSN: 1617-5468
URI: https://dl.gi.de/handle/20.500.12116/34171
Title: Educational Text Summarizer: Which sentences are worth asking for?
Type: Text/Conference Paper
Language: en
Keywords: Question generation; Online courses; Text summarization

Abstract: Many question generation approaches focus on the generation process itself, but they work with single sentences as input only. Although the results of state-of-the-art question generation are quite good, it cannot be used in practice, because selecting which sentences are worth asking about in an educational setting is currently not possible in an automated way. This limits the ability to generate interactive course materials at scale. In this paper, we conduct a study in which we compare teachers' sentence selections from texts with nine algorithms to find the most appropriate ones with respect to reading comprehension. 30 teachers compared the winning algorithm, Edmundson, with LexRank, which previous literature had identified as the optimal algorithm. The results show that Edmundson outperforms LexRank.
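
For context, the following is a minimal sketch of how the two summarizers compared in the abstract (Edmundson and LexRank) can be run to select sentences from a course text, assuming the open-source sumy library; this is not necessarily the implementation or configuration used in the study, and the cue-word lists and sentence count are placeholder values.

# Sketch: extractive sentence selection with Edmundson and LexRank via sumy.
# The bonus/stigma word lists and SENTENCES count are illustrative placeholders.
from sumy.parsers.plaintext import PlaintextParser
from sumy.nlp.tokenizers import Tokenizer
from sumy.nlp.stemmers import Stemmer
from sumy.utils import get_stop_words
from sumy.summarizers.edmundson import EdmundsonSummarizer
from sumy.summarizers.lex_rank import LexRankSummarizer

LANGUAGE = "english"
SENTENCES = 5  # number of sentences to select per text (placeholder)

def select_sentences(text: str):
    parser = PlaintextParser.from_string(text, Tokenizer(LANGUAGE))
    stemmer = Stemmer(LANGUAGE)

    # Edmundson: cue-phrase based extractive summarizer; requires word lists.
    edmundson = EdmundsonSummarizer(stemmer)
    edmundson.bonus_words = ("important", "significant")  # placeholder cue words
    edmundson.stigma_words = ("example",)                  # placeholder
    edmundson.null_words = get_stop_words(LANGUAGE)

    # LexRank: graph-based sentence-centrality summarizer.
    lexrank = LexRankSummarizer(stemmer)
    lexrank.stop_words = get_stop_words(LANGUAGE)

    edmundson_sents = [str(s) for s in edmundson(parser.document, SENTENCES)]
    lexrank_sents = [str(s) for s in lexrank(parser.document, SENTENCES)]
    return edmundson_sents, lexrank_sents

if __name__ == "__main__":
    sample_text = "Photosynthesis converts light energy into chemical energy. ..."
    e_sents, l_sents = select_sentences(sample_text)
    print("Edmundson selection:", e_sents)
    print("LexRank selection:", l_sents)

The selected sentences would then serve as input to a downstream question generation model; how they are ranked against teachers' selections is the subject of the study itself and is not shown here.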