Expanding Knowledge Graphs Through Text: Leveraging Large Language Models for Inductive Link Prediction

dc.contributor.author: Hamann, Felix
dc.contributor.author: Falk, Maurice
dc.contributor.author: Walker, Lukas
dc.contributor.editor: Klein, Maike
dc.contributor.editor: Krupka, Daniel
dc.contributor.editor: Winter, Cornelia
dc.contributor.editor: Gergeleit, Martin
dc.contributor.editor: Martin, Ludger
dc.date.accessioned: 2024-10-21T18:24:13Z
dc.date.available: 2024-10-21T18:24:13Z
dc.date.issued: 2024
dc.description.abstract: Knowledge graphs (KGs) play a crucial role in knowledge modeling across domains such as web search, medical applications, and technical support, yet they are often incomplete. To mitigate this problem, knowledge graph completion (KGC) can be used to infer missing links in the graph. Going a step further, an automated knowledge acquisition process may incorporate links for entirely new, unseen entities. This process is known as inductive link prediction (I-LP). Optionally, text is leveraged as an external source of information to infer the correct linkage of such entities. Depending on the context, this text either provides a comprehensive singular description of the entity or includes numerous incidental references to it. This paper presents a study that explores the application of LLAMA3, as a representative of the current generation of large language models (LLMs), to I-LP. Through experiments on popular benchmark datasets such as Wikidata5m, FB15k-237, WN18RR, and IRT2, we evaluate the performance of LLMs at inserting new facts into a knowledge base, given textual references to the target object. By design, these benchmarks vary significantly in the quality of the associated text as well as in the number of entities and links included. This paper explores several prompt formulations and studies whether pre-emptive retrieval of text helps. For automated link prediction, we implement the full cycle of prompt generation, answer processing, entity candidate lookup, and finally link prediction. Our results show that LLM-based inductive link prediction is outperformed by previously proposed models that fine-tune task-specific LM encoders.
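The "full cycle" the abstract names — prompt generation, answer processing, entity candidate lookup, and link prediction — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the stubbed LLM, the toy entity list, and every function name are hypothetical, and candidate lookup is approximated here with simple fuzzy string matching.

```python
# Hypothetical sketch of an I-LP cycle: prompt generation -> LLM answer ->
# answer processing -> entity candidate lookup -> link prediction.
from difflib import get_close_matches

# Toy entity vocabulary standing in for the knowledge graph's entity set.
KNOWN_ENTITIES = ["Berlin", "Germany", "Paris", "France"]

def build_prompt(mention, relation, context):
    """Prompt generation: ask for the missing target of a relation."""
    return (f"Context: {context}\n"
            f"Question: For the new entity '{mention}', what is the target "
            f"of the relation '{relation}'? Answer with one entity name.")

def stub_llm(prompt):
    """Stand-in for a real LLM call (e.g. LLAMA3); returns a canned answer."""
    return "germany"  # pretend the model answered in lowercase

def lookup_candidates(answer, entities):
    """Entity candidate lookup: map free-form text to known entities."""
    return get_close_matches(answer.strip().title(), entities, n=3, cutoff=0.6)

def predict_link(mention, relation, context):
    """Full cycle: returns a (head, relation, tail) triple, or None."""
    prompt = build_prompt(mention, relation, context)
    answer = stub_llm(prompt)                       # answer processing
    candidates = lookup_candidates(answer, KNOWN_ENTITIES)
    if not candidates:
        return None                                 # no matching entity
    return (mention, relation, candidates[0])       # link prediction

triple = predict_link("Potsdam", "located_in",
                      "Potsdam is a city near Berlin.")
# -> ("Potsdam", "located_in", "Germany")
```

In a real system the stub would be replaced by a model call, and candidate lookup would typically use the benchmark's entity index rather than string similarity.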
dc.identifier.doi: 10.18420/inf2024_123
dc.identifier.isbn: 978-3-88579-746-3
dc.identifier.pissn: 1617-5468
dc.identifier.uri: https://dl.gi.de/handle/20.500.12116/45096
dc.language.iso: en
dc.publisher: Gesellschaft für Informatik e.V.
dc.relation.ispartof: INFORMATIK 2024
dc.relation.ispartofseries: Lecture Notes in Informatics (LNI) - Proceedings, Volume P-352
dc.subject: Inductive Link Prediction
dc.subject: Knowledge Graph Completion
dc.subject: Large Language Models
dc.subject: Prompting
dc.title: Expanding Knowledge Graphs Through Text: Leveraging Large Language Models for Inductive Link Prediction
dc.type: Text/Conference Paper
gi.citation.endPage: 1417
gi.citation.publisherPlace: Bonn
gi.citation.startPage: 1407
gi.conference.date: 24.-26. September 2024
gi.conference.location: Wiesbaden
gi.conference.sessiontitle: AI@WORK

Files

Original bundle
Name: Hamann_et_al_Expanding_Knowledge_Graphs.pdf
Size: 518.09 KB
Format: Adobe Portable Document Format