Project Details
BigPlantSens - Assessing the Synergies of Big Data and Deep Learning for the Remote Sensing of Plant Species
Applicant
Dr. Teja Kattenborn
Subject Area
Ecology and Biodiversity of Plants and Ecosystems
Forestry
Geodesy, Photogrammetry, Remote Sensing, Geoinformatics, Cartography
Physical Geography
Term
since 2020
Project identifier
Deutsche Forschungsgemeinschaft (DFG) - Project number 444524904
Various tasks - including research, nature conservation, and economic activities such as forestry, agriculture, or ecosystem service assessments - require accurate information on the geographical distribution of plant species. Owing to novel very-high-spatial-resolution satellite missions and Unmanned Aerial Vehicles (UAVs), Earth observation data revealing vegetation patterns in high spatial and temporal detail are increasingly available. Consequently, efficient methods are needed to harness this growing source of information for vegetation analysis.

In the field of remote sensing of vegetation, deep learning methods such as Convolutional Neural Networks (CNNs) are currently revolutionizing pattern and object recognition. In tandem with advances in high-resolution sensor technology, CNNs are therefore expected to expand our capability to determine spatially explicit vegetation patterns. However, CNNs commonly require ample reference observations to learn the pivotal image features. A big data approach may provide these reference observations: various initiatives (e.g., CLEF, GBIF, Pl@ntNet) supply vast amounts of labelled image data on plant species, i.e., photographs together with species names. As a result of ongoing Open Data efforts, such image datasets are freely accessible and continue to grow. However, it remains unclear whether CNN models trained on such image datasets are directly applicable to very-high-resolution Earth observation data, given differences in spatial resolution, image quality, and viewing geometry.

Accordingly, the proposed project aims to assess the synergies of big data and high-spatial-resolution Earth observation data for fully automated vegetation mapping. The proposed approach uses big data in the form of freely available imagery tagged with species names to train CNN models; the trained models are then applied to high-resolution Earth observation data to reveal the spatial distribution of the target species (a simplified sketch of this workflow is given below). In doing so, we seek to identify which characteristics of the training images affect mapping accuracy (e.g., acquisition geometry, image quality) and will develop an algorithm for filtering the image datasets according to these characteristics before training. Specifically, our research questions are:

1) How accurately can the spatial distribution of different plant species be identified using the proposed big data approach combined with deep learning and very-high-resolution remote sensing data?
2) What are the critical factors determining the value of big data-based image datasets for CNN training, and can these be filtered efficiently using deep learning?
3) How does the spatial resolution of the Earth observation data limit plant species identification?
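The following Python sketch illustrates the general workflow described above in minimal form; it is not the project's actual implementation. It assumes PyTorch and torchvision, crowd-sourced plant photographs organized in one folder per species, and a hypothetical UAV orthoimage path. A CNN species classifier is trained on the photographs and then applied tile-by-tile to the orthoimage to produce a coarse species map.

```python
# Minimal sketch (assumptions: PyTorch/torchvision available, hypothetical paths).
import numpy as np
import torch
import torch.nn as nn
from PIL import Image
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# 1) Train a CNN on labelled plant photographs (one folder per species).
train_tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_ds = datasets.ImageFolder("plant_photos/", transform=train_tf)  # hypothetical path
train_dl = DataLoader(train_ds, batch_size=32, shuffle=True)

model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, len(train_ds.classes))
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for x, y in train_dl:
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()

# 2) Apply the trained model tile-by-tile to a UAV orthoimage to map
#    the spatial distribution of the target species.
ortho = np.array(Image.open("uav_orthomosaic.tif"))  # hypothetical RGB orthoimage
tile = 224
species_map = []
model.eval()
with torch.no_grad():
    for i in range(0, ortho.shape[0] - tile + 1, tile):
        row = []
        for j in range(0, ortho.shape[1] - tile + 1, tile):
            patch = ortho[i:i + tile, j:j + tile, :3]          # RGB tile
            x = torch.from_numpy(patch).permute(2, 0, 1).float().unsqueeze(0) / 255.0
            row.append(model(x).argmax(dim=1).item())          # predicted species index
        species_map.append(row)
species_map = np.array(species_map)  # one predicted species label per tile
```

Tile-wise classification is only the simplest way to transfer a photo-trained classifier to orthoimagery; finer-grained approaches (e.g., pixel-wise segmentation) are possible but beyond the scope of this sketch.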
DFG Programme
Research Grants