1. Instance segmentation and stand-scale forest mapping based on UAV images derived RGB and CHM.
- Author
- Xie, Yunhong; Wang, Yifu; Sun, Zhao; Liang, Ruiting; Ding, Zhidan; Wang, Baoying; Huang, Shaodong; Sun, Yujun
- Subjects
- FOREST mapping; DEEP learning; DRONE aircraft; TREE crops; FEATURE extraction; FOREST reserVES; AUTOMOBILE license plates
- Abstract
• RGB & CHM fusion for precise canopy data. • Variable fusion thresholds impact segmentation. • Stand-scale forest mapping with Mask R-CNN & instance merging. The tree canopy represents a fundamental element of tree-related information. However, extracting precise canopy information from remote sensing images remains a significant challenge due to varying canopy sizes, mutual overlap, and diverse woodland environments. This study leverages high-resolution images of Chinese fir captured by an unmanned aerial vehicle (UAV) over a state forest farm in Jiangle County, Fujian Province, China. The images are combined with the Mask R-CNN model to autonomously extract attributes at both the individual-tree and stand levels, enabling precise forest mapping at the level of individual trees. The fusion applies a band-wise saturation-weighted blend of RGB images and thermally enhanced Canopy Height Model (CHM) images: the fusion threshold equals the weight assigned to the CHM, the RGB weight is 1 minus the fusion threshold, and the threshold ranges from 0 to 1 in intervals of 0.1. The dataset is then trained with three instance segmentation models, using feature extraction networks based on ResNet50, ResNet101, and ResNeXt101, respectively. An instance-merging approach based on the canopy cross-occupancy ratio is introduced to improve the accuracy of individual-tree and stand-level attribute extraction, as well as forest mapping of individual tree canopies from large stand images. Performance is evaluated with two metrics: Bounding Box Average Precision (Box-AP) and Segmentation Average Precision (Segm-AP). The results show that the Mask R-CNN model with a ResNeXt101 backbone, combined with sample fusion at a threshold of 0.1, performed best; segmentation accuracy was acceptable, with a Box-AP of 51.697 % and a Segm-AP of 54.946 %.
Extraction of the individual-level attributes (crown area, north–south crown width, and east–west crown width) yielded R2 values of 0.933, 0.871, and 0.877, respectively. At the stand level, canopy density and individual population extraction yielded R2 values of 0.901 and 0.912, respectively. Relative to the original images (fusion threshold of 0), segmentation accuracy improved for all combinations: the optimal configuration increased R2 by 0.019 for canopy density and by 0.014 for individual population extraction, while Box-AP and Segm-AP improved by 10.795 % and 10.746 %, respectively. The instance-merging method raises the accuracy of individual population extraction by 5 %. This study underscores the precision and efficacy of the Mask R-CNN instance segmentation model and the fusion strategy, providing robust support for integrating deep learning into canopy extraction research. The results are significant for large-scale forestry surveys and the advancement of precision forestry, highlighting their substantial potential. [ABSTRACT FROM AUTHOR]
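The weighted fusion described in the abstract can be sketched as a per-band blend, where the fusion threshold is the CHM weight and the RGB weight is its complement. This is a minimal illustration, not the authors' exact saturation-weighted method; it assumes the CHM has already been rendered as a 3-band color image aligned with the RGB orthophoto, and the function and array names are hypothetical.

```python
import numpy as np

def fuse_rgb_chm(rgb: np.ndarray, chm_rgb: np.ndarray, threshold: float) -> np.ndarray:
    """Blend an RGB image with a color-rendered CHM image, band by band.

    `threshold` is the weight on the CHM; the RGB weight is 1 - threshold,
    mirroring the abstract. Both inputs are H x W x 3 float arrays in [0, 1].
    """
    if not 0.0 <= threshold <= 1.0:
        raise ValueError("fusion threshold must lie in [0, 1]")
    return (1.0 - threshold) * rgb + threshold * chm_rgb

# Sweep thresholds 0.0, 0.1, ..., 1.0 as in the study's experiments
thresholds = [round(0.1 * k, 1) for k in range(11)]
```

A threshold of 0 reproduces the original RGB image (the study's baseline), while 0.1 (the best-performing setting reported above) mixes in a small CHM contribution.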
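The instance-merging step can likewise be sketched. The abstract does not define the canopy cross-occupancy ratio precisely, so this sketch assumes it is the overlap area divided by the smaller crown's area, and that crowns exceeding a threshold are unioned into one instance; the threshold value and helper names are illustrative, not the paper's.

```python
import numpy as np

def cross_occupancy(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Overlap area divided by the smaller mask's area (boolean H x W masks)."""
    inter = np.logical_and(mask_a, mask_b).sum()
    smaller = min(mask_a.sum(), mask_b.sum())
    return float(inter) / smaller if smaller else 0.0

def merge_instances(masks: list, ratio_thresh: float = 0.5) -> list:
    """Greedily union crown masks whose cross-occupancy exceeds the threshold."""
    merged: list = []
    for mask in masks:
        for i, kept in enumerate(merged):
            if cross_occupancy(mask, kept) > ratio_thresh:
                merged[i] = np.logical_or(kept, mask)  # same tree, split instance
                break
        else:
            merged.append(mask.astype(bool))
    return merged
```

Merging duplicate detections of the same crown (e.g., across adjacent image tiles) is what lets a per-tile detector produce a single stand-scale map; the abstract credits this step with a 5 % gain in individual population extraction.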
- Published
- 2024