Listing by keyword "annotation"
1 - 6 of 6
- Workshop contribution: Data Modelling for Historical Corpus Annotation (INF-DH-2018, 2018). Vertan, Cristina. In this article we discuss the problem of annotating historical languages for which few or no resources are available and which do not follow the standard paradigm of Indo-European languages. We show that in such cases a tool adequate to the data model has to be developed, rather than the data being adapted to existing tools.
- Text document: Exploring the Use of the Pronoun I in German Academic Texts with Machine Learning (INFORMATIK 2020, 2021). Andresen, Melanie; Knorr, Dagmar. The use of the pronoun ich ('I') in academic language is a source of constant debate and a frequent cause of insecurity for students. We explore manually annotated instances of I from a German learner corpus. Using machine learning techniques, we investigate to what extent it is possible to automatically distinguish between different types of I usage (author I vs. narrator I). We additionally inspect which context words are good indicators of one type or the other. The results show that automatic classification is not straightforward; while the classification results are imperfect, they would greatly facilitate manual annotation. The distinctive features are in line with previous research and indicate that the author I is a more homogeneous class.
- Conference contribution: Human-machine Collaboration on Data Annotation of Images by Semi-automatic Labeling (Mensch und Computer 2021 - Tagungsband, 2021). Haider, Tom; Michahelles, Florian. Deploying deep neural network architectures in computer vision applications requires labeled images, which human workers create in a manual, cumbersome process of drawing bounding boxes and segmentation masks. In this work, we propose an image labeling companion that supports human workers in labeling images faster and more efficiently. Our data pipeline utilizes one-shot, few-shot, and pre-trained object detection models to provide bounding box suggestions, thereby reducing the required user interactions during labeling to corrective adjustments. The resulting labels are then used to continuously update the underlying suggestion models. Optionally, we apply a refinement step in which an available bounding box is converted into a finer segmentation mask. We evaluate our approach with a group of participants who label images both manually and with the system. In all our experiments, the achieved quality is consistently comparable to manually created labels, with execution times faster by a factor of 2 to 6.
- Conference contribution: The InsightsNet Climate Change Corpus (ICCC) (BTW 2023, 2023). Bartsch, Sabine; Duan, Changxu; Tan, Sherry; Volkanovska, Elena; Stille, Wolfgang. The discourse on climate change has become a centerpiece of public debate, creating a pressing need to analyze the multitude of messages produced by the participants in this communication process. In addition to text, messages on this topic are communicated through images, videos, tables, and other data objects that are embedded within a document and accompany the text. This paper presents the process of building the InsightsNet Climate Change Corpus (ICCC), a multimodal corpus on the topic of climate change, using NLP tools to enrich the corpus metadata. The resulting dataset lends itself to exploring the interplay between the various modalities that constitute the discourse on climate change.
- Conference contribution: MediaBrain: Annotating Videos based on Brain-Computer Interaction (Mensch & Computer 2012: interaktiv informiert – allgegenwärtig und allumfassend!?, 2012). Sahami Shirazi, Alireza; Funk, Markus; Pfleiderer, Florian; Glück, Hendrik; Schmidt, Albrecht. Adding notes to time segments on a video timeline makes it easier to search, find, and play back important segments of the video. Various approaches have been explored to annotate videos (semi-)automatically in order to summarize them. In this research we investigate the feasibility of implicitly annotating videos based on brain signals retrieved from a Brain-Computer Interface (BCI) headset. The signals provided by the BCI can reveal different information such as brain activity, facial expressions, or the level of the users' excitement. This information correlates with the scenes the users watch in a video. Thus, it can be used for annotating a video and automatically generating a summary. To this end, an annotation tool called MediaBrain was developed and a user study conducted. The results reveal that it is possible to annotate a video and select a set of highlights based on the excitement information.
- Conference contribution: Toward Cyber-Physical Research Practice based on Mixed Reality (Mensch und Computer 2017 - Workshopband, 2017). Hoffmeister, Anouk; Berger, Florian; Pogorzhelskiy, Michael; Zhang, Guangtao; Zwick, Carola; Müller-Birn, Claudia. Research practice has benefited greatly from advances in technology. Yet disciplines handling physical objects still experience several limitations when connecting analog research practices to digital resources. Utilizing recent developments in mixed reality technology, we propose a digital research environment that overcomes these limitations. We present a software and hardware prototype that blends analog annotation practices with their digital counterpart. Our approach advances the state of the art by enabling real-time handling of digital representations through the manipulation of actual physical objects. We discuss our future work on blending objects and augmenting information using mixed reality technologies, and on connecting object-centered research practices to online data sources.