Conference Paper

Assessing Large Language Models for annotating data in Dementia-Related texts: A Comparative Study with Human Annotators


Document Type

Text/Conference Paper

Additional Information

Date

2024

Publisher

Gesellschaft für Informatik e.V.

Abstract

As the aging population grows, the incidence of dementia is rising sharply, necessitating the extraction of domain-specific information from texts to gain valuable insights into the condition. Training Natural Language Processing (NLP) models for this purpose requires substantial amounts of annotated data, which is typically produced by human annotators. While human annotation is precise, it is also labor-intensive and costly. Large Language Models (LLMs) present a promising alternative that could potentially streamline and economize the annotation process. However, LLMs may struggle with complex, domain-specific contexts, potentially leading to inaccuracies. This paper investigates the effectiveness of LLMs in annotating words and phrases in ambiguous dementia-related texts by comparing LLM-generated annotations with those produced by human annotators. We followed a specific annotation scheme and had both the LLM and human raters annotate a corpus of informal texts from forums of family carers of people with dementia. The results indicate a moderate overlap in inter-rater agreement between LLM and expert annotators, with the LLM identifying nearly twice as many instances as the human raters. Although LLMs can partially automate the annotation process, they are not yet fully reliable for complex domains. By refining LLM-generated data through expert review, it is possible to reduce the burden on human raters and accelerate the creation of annotated datasets.

Citation

Suravee, Sumaiya; Stoev, Teodor; Konow, Sara; Yordanova, Kristina (2024): Assessing Large Language Models for annotating data in Dementia-Related texts: A Comparative Study with Human Annotators. INFORMATIK 2024. DOI: 10.18420/inf2024_36. Bonn: Gesellschaft für Informatik e.V. ISSN: 2944-7682. PISSN: 1617-5468. EISSN: 2944-7682. ISBN: 978-3-88579-746-3. pp. 487-498. 8th International Workshop on Annotation of useR Data for UbiquitOUs Systems. Wiesbaden, 24-26 September 2024.
