Title: Expanding Knowledge Graphs Through Text: Leveraging Large Language Models for Inductive Link Prediction
Authors: Hamann, Felix; Falk, Maurice; Walker, Lukas
Editors: Klein, Maike; Krupka, Daniel; Winter, Cornelia; Gergeleit, Martin; Martin, Ludger
Date: 2024-10-21 (2024)
Type: Text/Conference Paper
Language: en
Keywords: Inductive Link Prediction; Knowledge Graph Completion; Large Language Models; Prompting
ISBN: 978-3-88579-746-3
ISSN: 1617-5468; 2944-7682
DOI: 10.18420/inf2024_123
URL: https://dl.gi.de/handle/20.500.12116/45096

Abstract: Knowledge graphs (KGs) play a crucial role in knowledge modeling across domains such as web search, medical applications, and technical support, yet they are often incomplete. To mitigate this problem, knowledge graph completion (KGC) can be used to infer missing links in the graph. Taking this a step further, an automated knowledge acquisition process may also incorporate links for entirely new, unseen entities; this task is known as inductive link prediction (I-LP). Optionally, text is leveraged as an external source of information to infer the correct linkage of such entities. Depending on the context, this text either provides a single comprehensive description of the entity or contains numerous incidental references to it. This paper presents a study exploring the application of LLAMA3, as a representative of the current generation of large language models (LLMs), to I-LP. Through experiments on popular benchmark datasets such as Wikidata5m, FB15k-237, WN18RR, and IRT2, we evaluate the performance of LLMs at inserting new facts into a knowledge base, given textual references to the target entity. By design, these benchmarks vary significantly in the quality of the associated text, as well as in the number of entities and links they contain. The paper explores several prompt formulations and studies whether pre-emptive retrieval of text helps. For automated link prediction, we implement the full cycle of prompt generation, answer processing, entity candidate lookup, and, finally, link prediction. Our results show that LLM-based inductive link prediction is outperformed by previously proposed models that fine-tune task-specific LM encoders.