Browsing by author "Bodesheim, Paul"
1 - 4 of 4
- Text document: Automatic Plant Cover Estimation with Convolutional Neural Networks (INFORMATIK 2021, 2021). Körschens, Matthias; Bodesheim, Paul; Römermann, Christine; Bucher, Solveig Franziska; Migliavacca, Mirco; Ulrich, Josephine; Denzler, Joachim. Abstract: Monitoring the responses of plants to environmental changes is essential for plant biodiversity research. Currently, however, this is still done manually by botanists in the field. This work is very laborious, and the data obtained, although collected with a standardized method for estimating plant coverage, is usually subjective and has a coarse temporal resolution. To remedy these caveats, we investigate approaches using convolutional neural networks (CNNs) to automatically extract the relevant data from images, focusing on plant community composition and the coverages of 9 herbaceous plant species. To this end, we investigate several standard CNN architectures and different pretraining methods. We find that a custom CNN operating at higher image resolutions outperforms our previous approach, reaching a mean absolute error of 5.16%. In addition to these investigations, we conduct an error analysis based on the temporal aspect of the plant cover images. This analysis shows where automatic approaches struggle, for example with occlusion and with likely misclassifications caused by temporal changes. (An illustrative sketch of such a coverage regression model follows this listing.)
- Text document: Deep Learning Pipeline for Automated Visual Moth Monitoring: Insect Localization and Species Classification (INFORMATIK 2021, 2021). Korsch, Dimitri; Bodesheim, Paul; Denzler, Joachim. Abstract: Biodiversity monitoring is crucial for tracking and counteracting adverse trends in population fluctuations. However, automatic recognition systems are rarely applied so far, and the generated masses of data are evaluated manually by experts. In particular, deep learning methods for visual monitoring are not yet as established in biodiversity research as in other areas such as advertising or entertainment. In this paper, we present a deep learning pipeline for analyzing images captured by a moth scanner, an automated visual monitoring system for moth species developed within the AMMOD project. We first localize individuals with a moth detector and afterward determine the species of the detected insects with a classifier. Our detector achieves up to 99.01% mean average precision, and our classifier distinguishes 200 moth species with an accuracy of 93.13% on image cutouts depicting single insects. Combining both in our pipeline improves the accuracy of species identification in images of the moth scanner from 79.62% to 88.05%. (A sketch of a generic detect-then-classify pipeline follows this listing.)
- Text document: Exploiting Web Images for Moth Species Classification (INFORMATIK 2021, 2021). Böhlke, Julia; Korsch, Dimitri; Bodesheim, Paul; Denzler, Joachim. Abstract: Due to shrinking habitats, moth populations are declining rapidly. An automated moth population monitoring tool is needed to support conservationists in making informed decisions for counteracting this trend. A non-invasive tool would involve the automatic classification of moth images, a fine-grained recognition problem. Currently, the lack of images annotated by experts is the main hindrance to such a classification model. To understand how to achieve acceptable predictive accuracies, we investigate the effect of differently sized datasets and of data acquired from the Internet. We find the use of web data immensely beneficial and observe that a few images from the evaluation domain are enough to mitigate the domain shift in web data. Our experiments show that counteracting the domain shift can yield a relative reduction of the error rate of over 60%. Lastly, the effect of label noise in web data and proposed filtering techniques are analyzed and evaluated. (A sketch of a simple label-noise filter follows this listing.)
- Text document: Minimizing the Annotation Effort for Detecting Wildlife in Camera Trap Images with Active Learning (INFORMATIK 2021, 2021). Auer, Daphne; Bodesheim, Paul; Fiderer, Christian; Heurich, Marco; Denzler, Joachim. Abstract: Analyzing camera trap images is a challenging task due to complex scene structures at different locations, heavy occlusions, and varying sizes of animals. One particular problem is the large fraction of images showing only background scenes, which are recorded when a motion detector is triggered by signals other than animal movements. To identify these background images automatically, an active learning approach is used to train binary classifiers with small amounts of labeled data, keeping the annotation effort of humans minimal. By training classifiers for single sites or small sets of camera traps, we follow a region-based approach and particularly focus on distinct models for daytime and nighttime images. Our approach is evaluated on camera trap images from the Bavarian Forest National Park. It achieves comparable or even superior performance to publicly available detectors trained with millions of labeled images while requiring significantly smaller amounts of annotated training images. (A sketch of a generic uncertainty-sampling loop follows this listing.)
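
The plant cover estimation entry describes a CNN that regresses per-species coverage values and reports the mean absolute error. The following is a minimal, hypothetical PyTorch sketch of that setup, not the authors' custom architecture; the ResNet backbone, the nine-output regression head, and the training step are illustrative assumptions.

```python
# Hypothetical sketch: CNN regression of per-species plant cover (not the paper's custom model).
import torch
import torch.nn as nn
from torchvision import models

NUM_SPECIES = 9  # herbaceous species considered in the paper

# Standard backbone with a regression head producing one coverage value per species.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Sequential(
    nn.Linear(model.fc.in_features, NUM_SPECIES),
    nn.Sigmoid(),  # coverage fractions in [0, 1]
)

criterion = nn.L1Loss()  # mean absolute error, the metric reported in the abstract
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, cover_targets):
    """One optimization step on a batch of images and ground-truth cover fractions."""
    optimizer.zero_grad()
    predictions = model(images)  # shape: (batch, NUM_SPECIES)
    loss = criterion(predictions, cover_targets)
    loss.backward()
    optimizer.step()
    return loss.item()
```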
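The moth monitoring entry describes a two-stage pipeline: a detector localizes individual insects and a classifier assigns a species to each cutout. The sketch below illustrates that general pattern with off-the-shelf torchvision components; the chosen models, the score threshold, and the 200-class head are assumptions, not the AMMOD moth scanner implementation.

```python
# Hypothetical two-stage sketch: detect insects, then classify each cropped detection.
import torch
from torchvision import models
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import resized_crop

NUM_MOTH_SPECIES = 200  # number of species distinguished in the paper

detector = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
classifier = models.resnet50(num_classes=NUM_MOTH_SPECIES).eval()  # assumed fine-tuned on moth cutouts

@torch.no_grad()
def identify_moths(image, score_threshold=0.5):
    """Return (box, species_id) pairs for one image tensor of shape (3, H, W)."""
    detections = detector([image])[0]
    results = []
    for box, score in zip(detections["boxes"], detections["scores"]):
        if score < score_threshold:
            continue
        x1, y1, x2, y2 = box.int().tolist()
        cutout = resized_crop(image, top=y1, left=x1,
                              height=max(y2 - y1, 1), width=max(x2 - x1, 1),
                              size=[224, 224])
        species_id = classifier(cutout.unsqueeze(0)).argmax(dim=1).item()
        results.append((box.tolist(), species_id))
    return results
```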
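The web-data entry discusses combining noisily labeled web images with a few images from the target domain and filtering label noise. One simple, hypothetical way to filter noisy web labels is to score each web image with a model trained on the small clean set and discard images whose web label receives negligible probability; the function below sketches that idea and is not the filtering technique proposed in the paper.

```python
# Hypothetical label-noise filter for web images (illustrative, not the paper's method).
import torch

@torch.no_grad()
def filter_web_images(model, web_images, web_labels, confidence_threshold=0.3):
    """Keep web images whose noisy label remains plausible under a model
    trained on a small set of cleanly annotated images."""
    model.eval()
    kept = []
    for image, noisy_label in zip(web_images, web_labels):
        probabilities = torch.softmax(model(image.unsqueeze(0)), dim=1).squeeze(0)
        # Keep the sample if the model assigns the web label a non-negligible probability.
        if probabilities[noisy_label] >= confidence_threshold:
            kept.append((image, noisy_label))
    return kept
```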
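The camera trap entry trains binary background-vs-animal classifiers with active learning, repeatedly asking humans to label only the most informative images. The loop below is a generic uncertainty-sampling sketch using scikit-learn on precomputed image features; the feature representation, query size, and classifier choice are assumptions rather than the paper's setup.

```python
# Hypothetical active-learning loop with uncertainty sampling (generic sketch).
import numpy as np
from sklearn.linear_model import LogisticRegression

def active_learning_loop(features, oracle_labels, initial=20, query_size=10, rounds=5):
    """features: (N, D) image features; oracle_labels: human annotations (0=background, 1=animal).
    Assumes both classes appear in the initial labeled subset."""
    rng = np.random.default_rng(0)
    labeled = list(rng.choice(len(features), size=initial, replace=False))
    unlabeled = [i for i in range(len(features)) if i not in labeled]

    classifier = LogisticRegression(max_iter=1000)
    for _ in range(rounds):
        classifier.fit(features[labeled], oracle_labels[labeled])
        # Query the images the classifier is least certain about (probability closest to 0.5).
        probabilities = classifier.predict_proba(features[unlabeled])[:, 1]
        uncertainty = -np.abs(probabilities - 0.5)
        query = [unlabeled[i] for i in np.argsort(uncertainty)[-query_size:]]
        labeled.extend(query)  # the queried images get annotated by a human
        unlabeled = [i for i in unlabeled if i not in query]
    return classifier
```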