On the State of German (Abstractive) Text Summarization

dc.contributor.author: Aumiller, Dennis
dc.contributor.author: Fan, Jing
dc.contributor.author: Gertz, Michael
dc.contributor.editor: König-Ries, Birgitta
dc.contributor.editor: Scherzinger, Stefanie
dc.contributor.editor: Lehner, Wolfgang
dc.contributor.editor: Vossen, Gottfried
dc.date.accessioned: 2023-02-23T13:59:45Z
dc.date.available: 2023-02-23T13:59:45Z
dc.date.issued: 2023
dc.description.abstract: With recent advancements in the area of Natural Language Processing, the focus is slowly shifting from a purely English-centric view towards more language-specific solutions, including German. Especially practical for businesses analyzing their growing amounts of textual data are text summarization systems, which transform long input documents into compressed, more digestible summary texts. In this work, we assess the particular landscape of German abstractive text summarization and investigate why practically useful solutions for abstractive text summarization are still absent in industry. Our focus is two-fold, analyzing a) training resources and b) publicly available summarization systems. We show that popular existing datasets exhibit crucial flaws in their assumptions about the original sources, which frequently leads to detrimental effects on system generalization and evaluation biases. We confirm that for the most popular training dataset, MLSUM, over 50% of the training set is unsuitable for abstractive summarization purposes. Furthermore, available systems frequently fail to compare against simple baselines, and ignore more effective and efficient extractive summarization approaches. We attribute poor evaluation quality to a variety of factors, which are investigated in more detail in this work: a lack of qualitative (and diverse) gold data considered for training, understudied (and untreated) positional biases in some of the existing datasets, and the lack of easily accessible and streamlined pre-processing strategies or analysis tools. We therefore provide a comprehensive assessment of available models on the cleaned versions of the datasets, and find that this can lead to a reduction of more than 20 ROUGE-1 points during evaluation. As a cautious reminder for future work, we finally highlight the problems of solely relying on n-gram-based scoring methods by presenting particularly problematic failure cases. Code for dataset filtering and reproducing results can be found online: https://github.com/anonymized-user/anonymized-repository
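To make the dataset-filtering and scoring ideas from the abstract concrete, here is a minimal Python sketch. It is not the authors' actual pipeline (that code lives in the linked repository); the sample keys "text" and "summary" and the exact-containment criterion are illustrative assumptions. It flags training samples whose reference summary is copied verbatim from the source article (one simple form of the flaw described for MLSUM) and includes a bare-bones ROUGE-1 F1 to show what n-gram-based scoring actually measures.

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """Unigram-overlap F1 between a candidate and a reference text.
    Illustrates the n-gram matching that ROUGE-1 is based on."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

def is_fully_extractive(article: str, summary: str) -> bool:
    """Flag samples whose 'summary' appears verbatim in the article,
    making them unsuitable for training *abstractive* systems."""
    return summary.strip().lower() in article.lower()

def filter_dataset(samples: list[dict]) -> list[dict]:
    """Keep only samples usable for abstractive training.
    The keys 'text' and 'summary' are assumed, not taken from the paper."""
    return [s for s in samples
            if not is_fully_extractive(s["text"], s["summary"])]

# A sample whose summary is just the article's lead sentence would be
# dropped, even though its unigram overlap with the article is perfect
# on the precision side.
sample = {
    "text": "Der Bundestag hat ein neues Gesetz beschlossen. Es tritt 2024 in Kraft.",
    "summary": "Der Bundestag hat ein neues Gesetz beschlossen.",
}
print(is_fully_extractive(sample["text"], sample["summary"]))   # True
print(round(rouge1_f1(sample["summary"], sample["text"]), 2))   # 0.74
```

The repository's real filtering criteria are presumably more nuanced (the abstract also mentions positional biases); this exact-containment check is only the simplest instance of the problem the paper describes.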
dc.identifier.doi: 10.18420/BTW2023-10
dc.identifier.isbn: 978-3-88579-725-8
dc.identifier.uri: https://dl.gi.de/handle/20.500.12116/40314
dc.language.iso: en
dc.publisher: Gesellschaft für Informatik e.V.
dc.relation.ispartof: BTW 2023
dc.relation.ispartofseries: Lecture Notes in Informatics (LNI) - Proceedings, Volume P-331
dc.subject: Abstractive Text Summarization
dc.subject: Natural Language Generation
dc.subject: German
dc.subject: Evaluation
dc.title: On the State of German (Abstractive) Text Summarization
dc.type: Text/Conference Paper
gi.citation.endPage: 220
gi.citation.publisherPlace: Bonn
gi.citation.startPage: 195
gi.conference.date: March 6-10, 2023
gi.conference.location: Dresden, Germany

Files

Original bundle
Name: B2-3.pdf
Size: 692.48 KB
Format: Adobe Portable Document Format