Authors: Finzel, Bettina; Saranti, Anna; Angerschmid, Alessa; Tafler, David; Pfeifer, Bastian; Holzinger, Andreas

Date available: 2023-01-18
Date issued: 2022

DOI: http://dx.doi.org/10.1007/s13218-022-00781-7
URI: https://dl.gi.de/handle/20.500.12116/40054

Title: Generating Explanations for Conceptual Validation of Graph Neural Networks: An Investigation of Symbolic Predicates Learned on Relevance-Ranked Sub-Graphs

Abstract: Graph Neural Networks (GNN) show good performance in relational data classification. However, their contribution to concept learning and the validation of their output from an application domain's and user's perspective have not been thoroughly studied. We argue that combining symbolic learning methods, such as Inductive Logic Programming (ILP), with statistical machine learning methods, especially GNNs, is an essential forward-looking step to perform powerful and validatable relational concept learning. In this contribution, we introduce a benchmark for the conceptual validation of GNN classification outputs. It consists of symbolic representations of symmetric and non-symmetric figures taken from a well-known Kandinsky Pattern data set. We further provide a novel validation framework that can be used to generate comprehensible explanations with ILP on top of the relevance output of GNN explainers and human-expected relevance for concepts learned by GNNs. Our experiments conducted on our benchmark data set demonstrate that it is possible to extract symbolic concepts from the most relevant explanations that are representative of what a GNN has learned. Our findings open up a variety of avenues for future research on validatable explanations for GNNs.

Keywords: Explainable AI (xAI); Graph neural networks (GNN); Inductive logic programming (ILP); Kandinsky pattern (KP); Symbolic AI

Type: Text/Journal Article
DOI: 10.1007/s13218-022-00781-7
ISSN: 1610-1987