Is Your Training Data Really Ground Truth? A Quality Assessment of Manual Annotation for Individual Tree Crown Delineation
- Author
- Steier, Janik; Goebel, Mona; Iwaszczuk, Dorota
- Subjects
- CROWNS (Botany); OBJECT recognition (Computer vision); FOREST density; FOREST mapping; REMOTE-sensing images
- Abstract
- For the accurate and automatic mapping of forest stands based on very-high-resolution satellite imagery and digital orthophotos, precise object detection at the individual tree level is necessary. Currently, supervised deep learning models are primarily applied for this task. Training a reliable model requires an accurate tree crown annotation dataset, and such training datasets are still generated through manual annotation and labeling. Because of the intricate contours of tree crowns, the vegetation density in natural forests, and the insufficient ground sampling distance of the imagery, manually generated annotations are error-prone, and it is unlikely that the manually delineated tree crowns represent the true conditions on the ground. If such error-prone annotations are used as training data, the resulting deep learning models may produce inaccurate mapping results. This study critically validates manual tree crown annotations on two study sites: a forest-like plantation on a cemetery and a natural city forest. The validation assesses the quality of a training dataset against tree reference data in the form of an official tree register and tree segments extracted from UAV laser scanning (ULS) data. The results reveal that the manual annotations correctly detect only 37% of the tree crowns in the forest-like plantation area and only 10% in the natural forest. Furthermore, at both study sites, multiple trees are frequently annotated as a single tree.
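- The following is a minimal sketch of how such a validation could be computed, assuming annotated crown polygons are matched one-to-one to reference crown segments by intersection-over-union (IoU) with an illustrative 0.5 threshold. The abstract does not state the authors' exact matching criterion, so the `detection_rate` function, the threshold, and the shapely-based polygon handling below are assumptions for illustration only.

```python
# Illustrative sketch (not the authors' code): match annotated tree crowns to
# reference crown segments and compute a detection rate. The IoU threshold of
# 0.5 and the one-to-one matching rule are assumptions.
from shapely.geometry import Polygon

def detection_rate(annotations, references, iou_threshold=0.5):
    """Fraction of reference crowns matched by exactly one annotation polygon."""
    def iou(a, b):
        inter = a.intersection(b).area
        union = a.union(b).area
        return inter / union if union > 0 else 0.0

    matched = 0
    used = set()  # annotation indices already assigned to a reference crown
    for ref in references:
        best_idx, best_iou = None, 0.0
        for i, ann in enumerate(annotations):
            if i in used:
                continue
            score = iou(ann, ref)
            if score > best_iou:
                best_idx, best_iou = i, score
        if best_idx is not None and best_iou >= iou_threshold:
            matched += 1
            used.add(best_idx)  # one annotation cannot "detect" several trees
    return matched / len(references) if references else 0.0

# Toy usage: two reference crowns, one annotation spanning both (an over-merged crown).
refs = [Polygon([(0, 0), (2, 0), (2, 2), (0, 2)]),
        Polygon([(3, 0), (5, 0), (5, 2), (3, 2)])]
anns = [Polygon([(0, 0), (5, 0), (5, 2), (0, 2)])]
print(detection_rate(anns, refs))  # 0.0: the merged annotation fails the IoU test for both trees
```

- The toy example mirrors the failure mode described in the abstract: when several trees are delineated as a single crown, the over-merged polygon overlaps each reference crown too little to count as a correct detection.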
- Published
- 2024