Title: AI-supported data annotation in the context of UAV-based weed detection in sugar beet fields using Deep Neural Networks
Authors: Boysen, Jonas; Stein, Anthony; Gandorfer, Markus; Hoffmann, Christa; El Benni, Nadja; Cockburn, Marianne; Anken, Thomas; Floto, Helga
Date: 2022-02-24
Published: 2022
ISBN: 978-3-88579-711-1
ISSN: 1617-5468
URI: https://dl.gi.de/handle/20.500.12116/38432
Type: Text/Conference Paper
Language: en
Keywords: weed detection; data annotation; Convolutional Neural Networks; semantic segmentation; interactive AI

Abstract: Recent Deep Learning-based Computer Vision methods have proved successful in a variety of tasks, including the classification, detection and segmentation of crop and weed plants with Convolutional Neural Networks (CNNs). Such solutions require a vast amount of labeled data. The annotation is a tedious and time-consuming task, which often constitutes a limiting factor in the Machine Learning process. In this work, an annotation pipeline for UAV-based images of sugar beet fields at BBCH stages 12 to 17 is presented. For the creation of pixel-wise annotated data, we utilize a threshold-based method for the creation of a binary plant mask, a row detection based on the Hough Transform, and a lightweight CNN for the classification of small, cropped images. Our findings demonstrate that image data annotation efficiency can be increased by applying an AI-based approach already at the crucial training data collection step of the Machine Learning process.
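
The first two pipeline stages described in the abstract (threshold-based plant mask, Hough-based row detection) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the Excess Green (ExG) index and the fixed threshold of 0.1 are common choices assumed here, the function names are hypothetical, and the Hough step is a bare accumulator that only recovers the dominant row angle.

```python
import numpy as np

def plant_mask(rgb):
    """Binary plant mask from the Excess Green index ExG = 2G - R - B.
    The 0.1 threshold is an illustrative choice; an adaptive method
    such as Otsu's would pick it per image."""
    rgb = rgb.astype(np.float32) / 255.0
    exg = 2 * rgb[..., 1] - rgb[..., 0] - rgb[..., 2]
    return exg > 0.1

def dominant_row_angle(mask, n_theta=180):
    """Minimal Hough Transform over foreground pixels: vote in
    (theta, rho) space and return the dominant line angle in degrees
    (0 = vertical image columns in this x*cos + y*sin parameterization)."""
    ys, xs = np.nonzero(mask)
    thetas = np.deg2rad(np.arange(n_theta))
    diag = int(np.hypot(*mask.shape))          # bound on |rho|
    acc = np.zeros((n_theta, 2 * diag + 1), dtype=np.int32)
    # rho for every (pixel, theta) pair, shifted to non-negative indices
    rhos = np.round(xs[:, None] * np.cos(thetas)
                    + ys[:, None] * np.sin(thetas)).astype(int) + diag
    for t in range(n_theta):
        np.add.at(acc[t], rhos[:, t], 1)       # accumulate votes per theta
    t_best, _ = np.unravel_index(acc.argmax(), acc.shape)
    return np.degrees(thetas[t_best])
```

On a nadir UAV image the returned angle gives the crop-row orientation, which the pipeline can then use to separate in-row sugar beet plants from inter-row weeds before the CNN classification stage.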