Schneider, Daniel
Research Data (Open Access): Identifying and Counting Avian Blood Cells in Whole Slide Images via Deep Learning
Vogelbacher, Markus; Strehmann, Finja; Bellafkir, Hicham; Mühling, Markus; Korfhage, Nikolaus; Schneider, Daniel; Rösner, Sascha; Schabo, Dana G.; Farwig, Nina; Freisleben, Bernd

Research Data (Open Access): Recognition of European mammals and birds in camera trap images using deep neural networks (Philipps-Universität Marburg)
Schneider, Daniel; Lindner, Kim; Vogelbacher, Markus; Bellafkir, Hicham; Mühling, Markus; Farwig, Nina; Freisleben, Bernd

This record contains the trained models and the test data sets presented in the papers "Recognizing European mammals and birds in camera trap images using convolutional neural networks" (Schneider et al., 2023) and "Recognition of European mammals and birds in camera trap images using deep neural networks" (Schneider et al., 2024). In these publications, we present deep neural network models that recognize both mammal and bird species in camera trap images.

The archive files "model2023_ConvNextBase.tar" and "model2023_EfficientNetV2.tar" as well as "model2024_ConvNextBase_species.tar" and "model2024_ConvNextBase_taxonomy.tar" contain the best trained models from our 2023 and 2024 papers, respectively. All models are provided in the TensorFlow 2 SavedModel format (https://www.tensorflow.org/guide/saved_model). A script to load and run the models, along with a code snippet to perform predictions with them, can be found in our Git repository: https://github.com/umr-ds/Marburg-Camera-Traps.

The archive files "data_MOF.tar" and "data_BNP.tar" contain our Marburg Open Forest (MOF) and Białowieża National Park (BNP) data sets, consisting of about 2,500 and 15,000 labeled camera trap images, respectively. Each archive contains two folders, "img" and "md". The "img" folder contains the images, grouped into subfolders by recording date and camera trap ID. The "md" folder contains the metadata for each image, which consists of the bounding box detections obtained with the MegaDetector model (https://github.com/agentmorris/MegaDetector). The metadata is grouped into YAML files for each label at different taxonomic levels.
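Since the metadata stores MegaDetector bounding boxes, a small sketch may help when working with them. MegaDetector reports boxes in normalized form [x_min, y_min, width, height] with values in 0..1 relative to the image size; the detection dictionary below is a hypothetical example in that style, not taken from this data set, and the exact YAML schema in the "md" folder may differ.

```python
def to_pixel_bbox(bbox, img_w, img_h):
    """Convert a normalized MegaDetector-style bbox [x_min, y_min, width, height]
    (values in 0..1) to absolute pixel coordinates (x, y, w, h)."""
    x, y, w, h = bbox
    return (round(x * img_w), round(y * img_h), round(w * img_w), round(h * img_h))

# Hypothetical detection entry in the MegaDetector output style.
detection = {"category": "1", "conf": 0.92, "bbox": [0.25, 0.10, 0.50, 0.40]}

# For a 1920x1080 image this yields (480, 108, 960, 432).
print(to_pixel_bbox(detection["bbox"], 1920, 1080))
```

Keeping boxes normalized in the metadata, as MegaDetector does, means the same annotation remains valid if images are resized; conversion to pixels is only needed for cropping or drawing.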