Title: Reliable Rules for Relation Extraction in a Multimodal Setting
Authors: Engelmann, Björn; Schaer, Philipp
Editors: König-Ries, Birgitta; Scherzinger, Stefanie; Lehner, Wolfgang; Vossen, Gottfried
Date: 2023-02-23
Year: 2023
ISBN: 978-3-88579-725-8
URI: https://dl.gi.de/handle/20.500.12116/40378
Language: en
Keywords: Relation Extraction; Knowledge Extraction; Knowledge Base Construction; Explainable AI; Multimodal Documents
Type: Text/Conference Paper
DOI: 10.18420/BTW2023-69

Abstract: We present an approach to extract relations from multimodal documents using only a small amount of training data. Furthermore, we derive explanations in the form of extraction rules from the underlying model to ensure the reliability of the extraction. Finally, we evaluate how reliable the extracted rules are (i.e., how high their model fidelity is) and which type of classifier is most suitable in terms of F1 score and explainability. Our code and data are available at https://osf.io/dn9hm/?view_only=7e65fd1d4aae44e1802bb5ddd3465e08.
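
The abstract reports two evaluation quantities: model fidelity of the extracted rules and the F1 score of the classifier. The sketch below is not taken from the paper's code; it only illustrates how these two metrics are commonly computed, taking "fidelity" in its usual XAI sense (agreement between the extracted rules and the underlying model) and using hypothetical prediction arrays.

```python
# Minimal sketch (assumptions, not the authors' implementation):
# fidelity = fraction of instances where the extracted rules agree with the model,
# F1 = harmonic mean of precision and recall against the gold labels.
from sklearn.metrics import accuracy_score, f1_score


def fidelity(model_preds, rule_preds):
    """Agreement between the underlying model and the rules extracted from it."""
    return accuracy_score(model_preds, rule_preds)


# Hypothetical predictions for illustration only.
gold_labels = [1, 0, 1, 1, 0, 1]   # ground-truth relation labels
model_preds = [1, 0, 1, 0, 0, 1]   # predictions of the trained classifier
rule_preds  = [1, 0, 1, 0, 1, 1]   # predictions of the extracted rules

print("Fidelity (rules vs. model):", fidelity(model_preds, rule_preds))
print("F1 (model vs. gold):", f1_score(gold_labels, model_preds))
```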