Browsing by subject "data labeling"
- Conference paper: "A digital weed counting system for the weed control performance evaluation" (42. GIL-Jahrestagung, Künstliche Intelligenz in der Agrar- und Ernährungswirtschaft, 2022). Pamornnak, Burawich; Scholz, Christian; Becker, Silke; Ruckelshausen, Arno.
  Weed counting is one of the key methods for assessing the performance of a weed control process. This article presents a digital weed counting system intended to replace the conventional manual counting frame known as the "Göttinger Zähl- und Schätzrahmen" ("Göttinger Rahmen"), whose reliance on human counting limits its use on large-scale field experiments. The proposed method, demonstrated on a maize field, consists of two main parts: a virtual weed counting frame and a weed counting core. The system is implemented as a mobile application for smartphones (Android) with server-based processing. The pre-processed image from the phone is sent to the weed counting core, a pre-trained convolutional neural network (CNN, i.e. deep learning) model running on the server; the number of detected weeds is then sent back to the phone for display. In a first evaluation, 100 frames on a 1-hectare field area were assessed. The absolute weed counting errors fall into three groups: A-Group (0-10 weeds error) at 73 %, B-Group (11-20 weeds error) at 17 %, and C-Group (21-30 weeds error) at 10 %. Overall, the system achieves a correlation of 0.97 with the manual counts and a 12.8 % counting error. These results show that the digital version of the "Göttinger Rahmen" has the potential to become a practical tool for weed control evaluations.
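  The evaluation scheme described in the abstract (absolute-error groups plus a correlation against manual reference counts) can be sketched as follows. This is a minimal illustration only; the function and variable names are our own, and the error thresholds are taken from the abstract's group definitions, not from the authors' code.

  ```python
  from statistics import correlation

  def evaluate_counts(manual, automatic):
      """Compare automatic weed counts against manual reference counts.

      Frames are grouped by absolute counting error using the
      abstract's thresholds: A = 0-10, B = 11-20, C = 21-30 weeds off.
      Also returns the Pearson correlation and the overall relative
      counting error in percent.
      """
      errors = [abs(m - a) for m, a in zip(manual, automatic)]
      groups = {"A": 0, "B": 0, "C": 0}
      for e in errors:
          if e <= 10:
              groups["A"] += 1
          elif e <= 20:
              groups["B"] += 1
          elif e <= 30:
              groups["C"] += 1
      r = correlation(manual, automatic)           # Pearson correlation
      rel_error = sum(errors) / sum(manual) * 100  # overall % counting error
      return groups, r, rel_error
  ```

  With 100 frames of per-frame counts, such a routine would yield the kind of group percentages and correlation figure reported above.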
- Conference paper: "Human-machine Collaboration on Data Annotation of Images by Semi-automatic Labeling" (Mensch und Computer 2021 - Tagungsband, 2021). Haider, Tom; Michahelles, Florian.
  Deploying deep neural network architectures in computer vision applications requires labeled images, which human workers create in a manual, cumbersome process of drawing bounding boxes and segmentation masks. In this work, we propose an image-labeling companion that helps human workers label images faster and more efficiently. Our data pipeline uses One-Shot, Few-Shot, and pre-trained object detection models to provide bounding box suggestions, reducing the user interactions required during labeling to corrective adjustments. The resulting labels are then used to continuously update the underlying suggestion models. Optionally, a refinement step converts an available bounding box into a finer segmentation mask. We evaluate our approach with a group of participants who label images using our tool, both manually and with system support. In all our experiments, the achieved quality is consistently comparable to manually created labels, at execution times that are faster by a factor of 2 to 6.
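  The core idea of reducing labeling to "corrective adjustments" can be illustrated with a small sketch: suggestions that already match the final human-corrected box closely only cost a confirmation, while the rest need a corrective drag. The names, the `Box` type, and the `accept_iou` threshold below are our own illustrative assumptions, not the paper's implementation.

  ```python
  from dataclasses import dataclass

  @dataclass
  class Box:
      # axis-aligned bounding box in pixel coordinates
      x: int
      y: int
      w: int
      h: int

  def iou(a: Box, b: Box) -> float:
      """Intersection over union of two boxes."""
      x1, y1 = max(a.x, b.x), max(a.y, b.y)
      x2, y2 = min(a.x + a.w, b.x + b.w), min(a.y + a.h, b.y + b.h)
      inter = max(0, x2 - x1) * max(0, y2 - y1)
      union = a.w * a.h + b.w * b.h - inter
      return inter / union if union else 0.0

  def triage_suggestions(suggested, reference, accept_iou=0.9):
      """Split model suggestions into 'accept as-is' and 'needs adjustment'.

      A suggestion close enough to the final (human-corrected) box is
      merely confirmed; the rest require a corrective adjustment, which
      is still cheaper than drawing a box from scratch.
      """
      accepted, adjust = [], []
      for s, ref in zip(suggested, reference):
          (accepted if iou(s, ref) >= accept_iou else adjust).append(s)
      return accepted, adjust
  ```

  In such a scheme, the fraction of suggestions landing in the "accept" bucket is what drives the reported 2-6x speedup over fully manual annotation.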