Habilitation Abstract: Towards Explainable Fact Checking
dc.contributor.author | Augenstein, Isabelle | |
dc.date.accessioned | 2023-01-18T13:08:25Z | |
dc.date.available | 2023-01-18T13:08:25Z | |
dc.date.issued | 2022 | |
dc.description.abstract | With the substantial rise in the amount of mis- and disinformation online, fact checking has become an important task to automate. This article is a summary of a habilitation (doctor scientiarum) thesis submitted to the University of Copenhagen, which was successfully defended in December 2021 (Augenstein in Towards Explainable Fact Checking. Dr. Scient. thesis, University of Copenhagen, Faculty of Science, 2021). The dissertation addresses several fundamental research gaps within automatic fact checking. The contributions are organised along three verticals: (1) the fact-checking subtask they address; (2) methods which require only small amounts of manually labelled data; (3) methods for explainable fact checking, addressing the problem of opaqueness in the decision-making of black-box fact-checking models. | de |
dc.identifier.doi | 10.1007/s13218-022-00774-6 | |
dc.identifier.pissn | 1610-1987 | |
dc.identifier.uri | http://dx.doi.org/10.1007/s13218-022-00774-6 | |
dc.identifier.uri | https://dl.gi.de/handle/20.500.12116/40053 | |
dc.publisher | Springer | |
dc.relation.ispartof | KI - Künstliche Intelligenz: Vol. 36, No. 0 | |
dc.relation.ispartofseries | KI - Künstliche Intelligenz | |
dc.subject | Automatic fact checking | |
dc.subject | Explainable AI | |
dc.subject | Low-resource learning | |
dc.subject | Multi-task learning | |
dc.subject | Natural language understanding | |
dc.title | Habilitation Abstract: Towards Explainable Fact Checking | de |
dc.type | Text/Journal Article | |
gi.citation.endPage | 258 | |
gi.citation.startPage | 255 |