Title: Assessing Large Language Models for Annotating Data in Dementia-Related Texts: A Comparative Study with Human Annotators
Authors: Sumaiya Suravee; Teodor Stoev; Sara Konow; Kristina Yordanova
Editors: Maike Klein; Daniel Krupka; Cornelia Winter; Martin Gergeleit; Ludger Martin
Date issued: 2024 (available online: 2024-10-21)
Type: Text/Conference Paper
Language: en
Keywords: Data Annotation; Large Language Model; People with Dementia; Named Entity Recognition
ISBN: 978-3-88579-746-3
ISSN: 1617-5468 (print); 2944-7682 (online)
DOI: 10.18420/inf2024_36
URI: https://dl.gi.de/handle/20.500.12116/45195

Abstract: As the aging population grows, the incidence of dementia is rising sharply, making it necessary to extract domain-specific information from texts to gain insights into the condition. Training Natural Language Processing (NLP) models for this purpose requires substantial amounts of annotated data, which is typically produced by human annotators. While human annotation is precise, it is also labor-intensive and costly. Large Language Models (LLMs) present a promising alternative that could streamline and economize the annotation process. However, LLMs may struggle with complex, domain-specific contexts, potentially leading to inaccuracies. This paper investigates the effectiveness of LLMs in annotating words and phrases in ambiguous dementia-related texts by comparing LLM-generated annotations with those produced by human annotators. We followed a specific annotation scheme and had both the LLM and human raters annotate a corpus of informal texts from forums for family carers of people with dementia. The results indicate moderate inter-rater agreement between the LLM and expert annotators, with the LLM identifying nearly twice as many instances as the human raters. Although LLMs can partially automate the annotation process, they are not yet fully reliable for complex domains. By refining LLM-generated data through expert review, it is possible to reduce the burden on human raters and accelerate the creation of annotated datasets.
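
Note: The record contains no code, but as a rough illustration of the comparison the abstract describes, the sketch below computes token-level inter-rater agreement between an LLM and a human rater using Cohen's kappa. The labels, the token-level granularity, and the choice of kappa as the metric are all assumptions made for illustration; the record does not state which agreement measure the paper actually uses.

    # Minimal sketch (assumed setup): compare LLM and human annotations at the
    # token level, where 1 marks a token inside an annotated span and 0 outside.
    from sklearn.metrics import cohen_kappa_score

    # Hypothetical labels for ten tokens; the real corpus and scheme differ.
    human_labels = [0, 1, 1, 0, 0, 1, 0, 0, 1, 1]
    llm_labels   = [0, 1, 1, 1, 0, 1, 0, 1, 1, 1]  # LLM flags more tokens

    # Cohen's kappa corrects raw percentage agreement for chance agreement.
    print(f"Cohen's kappa: {cohen_kappa_score(human_labels, llm_labels):.2f}")

A kappa around 0.4-0.6 is conventionally read as "moderate" agreement, which matches the level the abstract reports; the over-marking in llm_labels mirrors the finding that the LLM identified nearly twice as many instances as the human raters.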