
Habilitation Abstract: Towards Explainable Fact Checking

dc.contributor.authorAugenstein, Isabelle
dc.date.accessioned2023-01-18T13:08:25Z
dc.date.available2023-01-18T13:08:25Z
dc.date.issued2022
dc.description.abstractWith the substantial rise in the amount of mis- and disinformation online, fact checking has become an important task to automate. This article is a summary of a habilitation (doctor scientiarum) thesis submitted to the University of Copenhagen, which was successfully defended in December 2021 (Augenstein in Towards Explainable Fact Checking. Dr. Scient. thesis, University of Copenhagen, Faculty of Science, 2021). The dissertation addresses several fundamental research gaps within automatic fact checking. The contributions are organised along three verticals: (1) the fact-checking subtask they address; (2) methods which only require small amounts of manually labelled data; (3) methods for explainable fact checking, addressing the problem of opaqueness in the decision-making of black-box fact-checking models.
dc.identifier.doi10.1007/s13218-022-00774-6
dc.identifier.pissn1610-1987
dc.identifier.urihttp://dx.doi.org/10.1007/s13218-022-00774-6
dc.identifier.urihttps://dl.gi.de/handle/20.500.12116/40053
dc.publisherSpringer
dc.relation.ispartofKI - Künstliche Intelligenz: Vol. 36, No. 0
dc.relation.ispartofseriesKI - Künstliche Intelligenz
dc.subjectAutomatic fact checking
dc.subjectExplainable AI
dc.subjectLow-resource learning
dc.subjectMulti-task learning
dc.subjectNatural language understanding
dc.titleHabilitation Abstract: Towards Explainable Fact Checking
dc.typeText/Journal Article
gi.citation.endPage258
gi.citation.startPage255