Title: Variability of annotations over time: An experimental study in the dementia-related named entity recognition domain
Authors: Stoev, Teodor; Suravee, Sumaiya; Yordanova, Kristina
Editors: Klein, Maike; Krupka, Daniel; Winter, Cornelia; Gergeleit, Martin; Martin, Ludger
Date: 2024-10-21
Type: Text/Conference Paper
Language: en
ISBN: 978-3-88579-746-3
ISSN: 1617-5468; 2944-7682
DOI: 10.18420/inf2024_35
URL: https://dl.gi.de/handle/20.500.12116/45194
Keywords: data annotation; annotation evolution; annotators; annotation variability; data labeling

Abstract: Data annotation is a crucial step in many domains where Machine Learning (ML) approaches are utilized. Despite the availability of automated and semi-automated data labeling methods, manual annotation by experts remains essential for developing high-quality models in certain scenarios. This study explores how annotations can evolve over time through an experiment focused on annotating named entities and relationships within the domain of dementia and related behaviors. Two annotators labeled a task-specific text corpus on two separate occasions, one year apart. Our findings revealed an increase in both the quantity and quality of annotated entities and relationships for both annotators. Statistical tests were conducted to assess the significance of the changes in annotations. The results indicate substantial variability in annotations over time, particularly in complex domains. The paper also discusses potential reasons for these variations.