10 results for "Zhang, Chunlong"
Search Results
2. Mapping of Rubber Forest Growth Models Based on Point Cloud Data.
- Author
Zhou, Hang, Zhang, Gan, Zhang, Junxiong, and Zhang, Chunlong
- Subjects
POINT cloud, FOREST mapping, TRANSFORMER models, FOREST management, FEATURE extraction, CLOUD storage - Abstract
A point cloud-based 3D model of a forest helps in understanding the growth and distribution patterns of trees and improves the fine-grained management of forestry resources. This paper describes the process of constructing a fine rubber forest growth model map based on 3D point clouds. First, a multi-scale feature extraction module within the point cloud column is used to enhance the learning capability of PointPillars. A Swin Transformer module is employed in the backbone to enrich the contextual semantics and acquire global features through the self-attention mechanism. All of the rubber trees are accurately identified and segmented to facilitate single-trunk localisation and feature extraction. Then, the structural parameters of the trunks calculated by the RANSAC and IRTLS cylindrical fitting methods are compared, and a growth model map of the rubber trees is constructed. The experimental results show that the precision and recall of target detection reach 0.9613 and 0.8754, respectively, better than the original network. The constructed rubber forest information map contains detailed and accurate trunk locations and key structural parameters, which are useful for optimising forestry resource management and for guiding the mechanisation of rubber tapping. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
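The abstract above compares RANSAC and IRTLS cylindrical fitting for estimating trunk parameters. As a rough sketch of the RANSAC idea only (not the authors' implementation; the slice data, tolerance, and iteration count below are illustrative assumptions), the following fits a circle to one horizontal slice of a trunk point cloud:

```python
import numpy as np

def ransac_circle(points, n_iters=200, tol=0.01, seed=0):
    """Fit a circle (trunk cross-section) to 2D points with RANSAC.

    points: (N, 2) array of x-y coordinates from one horizontal slice
    of a trunk point cloud. Returns (cx, cy, r) of the best model.
    """
    rng = np.random.default_rng(seed)
    best, best_inliers = None, -1
    for _ in range(n_iters):
        # Three non-collinear points determine a candidate circle.
        p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
        # Circumcircle centre from the perpendicular-bisector equations:
        # (p2 - p1) . c = (|p2|^2 - |p1|^2) / 2, similarly for p3.
        a = np.array([[p2[0] - p1[0], p2[1] - p1[1]],
                      [p3[0] - p1[0], p3[1] - p1[1]]])
        b = 0.5 * np.array([p2 @ p2 - p1 @ p1, p3 @ p3 - p1 @ p1])
        try:
            cx, cy = np.linalg.solve(a, b)
        except np.linalg.LinAlgError:
            continue  # degenerate (collinear) sample
        r = np.hypot(p1[0] - cx, p1[1] - cy)
        # Count points whose radial distance matches within tolerance.
        dist = np.abs(np.hypot(points[:, 0] - cx, points[:, 1] - cy) - r)
        inliers = int((dist < tol).sum())
        if inliers > best_inliers:
            best, best_inliers = (cx, cy, r), inliers
    return best

# Synthetic slice: noisy points on a 0.15 m radius trunk centred at (1, 2).
rng = np.random.default_rng(1)
t = rng.uniform(0, 2 * np.pi, 200)
pts = np.c_[1.0 + 0.15 * np.cos(t), 2.0 + 0.15 * np.sin(t)]
pts += rng.normal(0, 0.003, pts.shape)
cx, cy, r = ransac_circle(pts)
```

A least-squares refit over the final inlier set (which is roughly what an IRTLS-style refinement adds) would typically follow the sample consensus step.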
3. Robust Cherry Tomatoes Detection Algorithm in Greenhouse Scene Based on SSD
- Author
Junxiong Zhang, Jun Fu, Lin Lv, Yuan Ting, Jin Gao, Zhang Chunlong, Zhang Wenqiang, Fan Zhang, and Wei Li
- Subjects
cherry tomatoes, robotic harvesting, SSD, deep learning, computer vision, greenhouse, image resolution, pixel, detector, network model, plant science, agronomy and crop science, food science - Abstract
The detection of cherry tomatoes in greenhouse scenes is of great significance for robotic harvesting. This paper presents a deep learning method for cherry tomato detection that reduces the influence of illumination, growth differences, and occlusion. Given the greenhouse operating environment and the accuracy requirements, the Single Shot multi-box Detector (SSD) was selected for its strong anti-interference ability and its capacity to learn directly from datasets. The first step was to build datasets covering the various conditions found in greenhouses. According to the characteristics of cherry tomatoes, image samples with illumination changes, image rotation, and noise enhancement were used to expand the datasets. The training datasets were then used to train and construct the network model. To study the effects of the base network and the network input size, one comparative experiment was designed on different base networks (VGG16, MobileNet, and Inception V2), and another varied the network input size between 300 × 300 and 512 × 512 pixels. Analysis of the experimental results shows that Inception V2 is the best base network, with an average precision of 98.85% in the greenhouse environment. Compared with other detection methods, this method shows a substantial improvement in cherry tomato detection.
- Published
- 2020
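The dataset expansion named in the abstract above (illumination change, image rotation, noise enhancement) can be sketched as follows. This is an illustrative assumption of how such augmentation might look, not the authors' code; the brightness-gain range and noise level are invented parameters:

```python
import numpy as np

def augment(image, rng):
    """Generate illumination-shifted, rotated, and noisy variants of one
    sample, mirroring the three expansion strategies in the abstract.

    image: (H, W, 3) uint8 array. Returns a list of augmented uint8 images.
    """
    out = []
    # Illumination change: scale brightness by a random gain.
    gain = rng.uniform(0.6, 1.4)
    out.append(np.clip(image.astype(np.float32) * gain, 0, 255).astype(np.uint8))
    # Rotation: 90-degree multiples keep bounding-box labels easy to remap.
    out.append(np.rot90(image, k=int(rng.integers(1, 4))).copy())
    # Noise enhancement: additive Gaussian noise.
    noisy = image.astype(np.float32) + rng.normal(0, 8.0, image.shape)
    out.append(np.clip(noisy, 0, 255).astype(np.uint8))
    return out

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
variants = augment(img, rng)
```

For detection tasks the corresponding box annotations must be transformed alongside each image (trivial for brightness and noise, a coordinate remap for rotation).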
4. A modified U-Net with a specific data argumentation method for semantic segmentation of weed images in the field
- Author
Kunlin Zou, Fan Zhang, Xin Chen, Yonglin Wang, and Zhang Chunlong
- Subjects
Computer science, Deep learning, Pattern recognition, Image segmentation, Segmentation, Artificial intelligence, Weed, Horticulture, Forestry, Computer Science Applications, Agronomy and Crop Science - Abstract
Weeds are harmful to crop yield. The segmentation of weeds in images is of great significance for precise weeding and for reducing herbicide pollution. However, in the field environment crops and weeds look similar, so it is difficult to accurately segment weeds from complex field images. In this paper, an algorithm based on deep learning was proposed to segment weeds from images; it can separate weeds from both the soil and the crops. The semantic segmentation algorithm was developed with a simplified U-net. Because image labeling for the semantic segmentation of weeds is difficult, an image augmentation method was proposed, and the network was trained by a two-stage method composed of pre-training and fine-tuning. After training, the intersection over union (IoU) of this method was 92.91% and the average segmentation time per image (ST) was 51.71 ms. The results demonstrate that the modified U-Net can effectively segment weeds from images containing a significant amount of other plants. The weed-targeted image segmentation method proposed in this paper can accurately segment weeds in complex field environments and has a relatively wide range of applicability.
- Published
- 2021
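The IoU figure reported above (92.91%) is the standard intersection-over-union of the predicted and ground-truth masks; a minimal sketch on toy boolean masks:

```python
import numpy as np

def iou(pred, target):
    """Intersection over union of two boolean segmentation masks
    (the metric the abstract reports as 92.91%)."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union else 1.0

pred = np.zeros((4, 4), bool); pred[:2, :] = True   # top half predicted weed
gt   = np.zeros((4, 4), bool); gt[:, :2]  = True    # left half is true weed
# overlap 2x2 = 4 pixels, union 8 + 8 - 4 = 12 -> IoU = 1/3
score = iou(pred, gt)
```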
5. UAV environmental perception and autonomous obstacle avoidance: A deep learning and depth camera combined solution
- Author
Dashuai Wang, Zhang Chunlong, Nan Li, Wei Li, and Xiaoguang Liu
- Subjects
Computer science, Deep learning, Image processing, Object detection, Computer vision, Obstacle avoidance, Collision avoidance, Artificial intelligence, Forestry, Horticulture, Computer Science Applications, Agronomy and Crop Science - Abstract
In agriculture, Unmanned Aerial Vehicles (UAVs) have shown great potential for plant protection. Uncertain obstacles randomly distributed in unstructured farmland usually pose significant collision risks to flight safety. To improve the UAV's intelligence and minimize the obstacles' adverse impacts on operating safety and efficiency, we put forward a comprehensive solution consisting of deep-learning-based object detection, image processing, RGB-D information fusion, and a Task Control System (TCS). Taking full advantage of both deep learning and the depth camera, this solution allows the UAV to perceive not only the presence of obstacles but also attributes such as category, profile, and 3D spatial position. Based on the object detection results, the collision avoidance strategy generation method and the corresponding calculation of the optimal collision avoidance flight path are elaborated in detail. A series of experiments was conducted to verify the UAV's environmental perception ability and autonomous obstacle avoidance performance. Results show that the average detection accuracy of the CNN model is 75.4% and the mean processing time per image is 53.33 ms. Additionally, we find that the prediction accuracy of an obstacle's profile and position depends heavily on the relative distance between the object and the depth camera: when the distance is between 4.5 m and 8.0 m, the errors in the object's depth, width, and height are −0.53 m, −0.26 m, and −0.24 m, respectively. Outcomes of simulated flight experiments indicate that the UAV can autonomously determine the optimal obstacle avoidance strategy and generate a distance-minimized flight path based on the results of RGB-D information fusion. The proposed solution has extensive potential to enhance the UAV's environmental perception and autonomous obstacle avoidance abilities.
- Published
- 2020
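The 3D spatial position mentioned in the abstract above is typically recovered from a detection's pixel coordinates and its depth reading via the pinhole camera model. A minimal sketch (the intrinsic values below are illustrative, not the paper's camera):

```python
def deproject(u, v, depth, fx, fy, cx, cy):
    """Back-project a pixel (u, v) with metric depth into camera-frame
    3D coordinates using the pinhole model. fx, fy are focal lengths in
    pixels; (cx, cy) is the principal point, as reported by an RGB-D
    camera's intrinsics."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return x, y, depth

# A detection centred at pixel (400, 300), measured 6.0 m away.
x, y, z = deproject(400, 300, 6.0, fx=600.0, fy=600.0, cx=320.0, cy=240.0)
```

Fusing this position with the detected category and profile is the RGB-D information fusion step the abstract describes.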
6. A rapid segmentation method for weed based on CDM and ExG index.
- Author
Han, Xiaowu, Wang, Han, Yuan, Ting, Zou, Kunlin, Liao, Qianfeng, Deng, Kai, Zhang, Zhiqin, Zhang, Chunlong, and Li, Wei
- Subjects
WEEDS, WEED control, AGRICULTURE - Abstract
Precision spraying technology is promising for effective weed control in agriculture, but it depends on the perception of weed information. However, existing weed segmentation algorithms have significant limitations in accurately segmenting all weed types in real time. In this study, a rapid weed segmentation method was proposed based on a crop detection model (CDM) and the Excess Green Index (ExG). In the CDM, the normal convolution in the YOLO-V4 Tiny backbone network was replaced with a depthwise separable convolution to increase the receptive field of the network while reducing the number of parameters. Moreover, the CDM incorporated an SPP structure into YOLO-V4 Tiny to reduce the effect of different target scales on detection results. After training, the AP value of the CDM increased to 94.83%, with an inference time of 11.13 ms per image. Subsequently, the CDM was combined with the ExG index and an optimized Otsu method to segment weeds from the maize field accurately and rapidly. The experimental results show that the proposed algorithm can segment weeds in real time and accurately, with a precision of 92.50%, an IoU of 76.14%, and an accuracy (Acc) of 98.10%; the segmentation time per image was 15.40 ms. Lastly, two deployment methods were proposed to adapt the algorithm to different field spraying requirements. In brief, the proposed method provides a reliable and efficient solution for weed segmentation in agricultural fields.
• A lightweight crop detection model (CDM) is proposed to detect maize seedlings in the complex field environment.
• Combining the distribution characteristics of weeds, an improved Otsu method is devised to segment weeds.
• A rapid and accurate weed segmentation approach is designed to guide precise weeding.
• The proposed method achieves 92.50% precision, 76.14% IoU, 98.10% accuracy, and 15.40 ms inference time.
[ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
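The ExG-plus-Otsu stage described in the abstract above can be sketched as follows. This uses plain Otsu thresholding rather than the paper's optimized variant, and the toy image and its colours are invented for illustration:

```python
import numpy as np

def exg(image):
    """Excess Green index ExG = 2g - r - b on chromatic coordinates,
    computed per pixel from an (H, W, 3) RGB uint8 image."""
    f = image.astype(np.float32)
    s = f.sum(axis=2) + 1e-6          # avoid division by zero
    r, g, b = f[..., 0] / s, f[..., 1] / s, f[..., 2] / s
    return 2 * g - r - b

def otsu_threshold(values, bins=256):
    """Plain Otsu: choose the threshold that maximises the
    between-class variance of the histogram."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                 # class-0 weight up to each bin
    mu = np.cumsum(p * centers)       # class-0 cumulative mean mass
    mu_t = mu[-1]                     # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        var_between = (mu_t * w0 - mu) ** 2 / (w0 * (1 - w0))
    var_between = np.nan_to_num(var_between)
    return centers[np.argmax(var_between)]

# Toy image: a green "plant" block on brown "soil".
img = np.full((40, 40, 3), (120, 90, 60), np.uint8)   # soil background
img[10:30, 10:30] = (40, 160, 40)                      # vegetation patch
g = exg(img)
mask = g > otsu_threshold(g.ravel())   # True where vegetation
```

In the pipeline the abstract describes, the CDM's maize detections would then be masked out so that the remaining green pixels are labelled as weeds.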
7. Robust Cherry Tomatoes Detection Algorithm in Greenhouse Scene Based on SSD.
- Author
Yuan, Ting, Lv, Lin, Zhang, Fan, Fu, Jun, Gao, Jin, Zhang, Junxiong, Li, Wei, Zhang, Chunlong, and Zhang, Wenqiang
- Subjects
GREENHOUSES, TOMATOES, DEEP learning, CHERRIES, PIXELS, ALGORITHMS, NETWORK effect - Abstract
The detection of cherry tomatoes in greenhouse scenes is of great significance for robotic harvesting. This paper presents a deep learning method for cherry tomato detection that reduces the influence of illumination, growth differences, and occlusion. Given the greenhouse operating environment and the accuracy requirements, the Single Shot multi-box Detector (SSD) was selected for its strong anti-interference ability and its capacity to learn directly from datasets. The first step was to build datasets covering the various conditions found in greenhouses. According to the characteristics of cherry tomatoes, image samples with illumination changes, image rotation, and noise enhancement were used to expand the datasets. The training datasets were then used to train and construct the network model. To study the effects of the base network and the network input size, one comparative experiment was designed on different base networks (VGG16, MobileNet, and Inception V2), and another varied the network input size between 300 × 300 and 512 × 512 pixels. Analysis of the experimental results shows that Inception V2 is the best base network, with an average precision of 98.85% in the greenhouse environment. Compared with other detection methods, this method shows a substantial improvement in cherry tomato detection. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
8. A segmentation network for smart weed management in wheat fields.
- Author
Zou, Kunlin, Liao, Qianfeng, Zhang, Fan, Che, Xiaoxi, and Zhang, Chunlong
- Subjects
WEED control, WEEDS, WHEAT, IMAGE segmentation - Abstract
Precision mechanical weed control is important for wheat cultivation, and accurate segmentation of weeds and wheat in images is a critical step in precision weeding. A modified U-net for segmenting wheat and weeds in images is presented in this paper. An image classification task was used to select the backbone network for the encoding part, and an image segmentation task on similar datasets was used to select and pre-train the decoding network. The training process applied transfer learning. Experimental results show that the segmentation IoU reached 88.98% and the average speed on embedded devices was 52 FPS. The results demonstrate that the modified neural network can effectively segment wheat and weeds in images and can be used to guide precision weeding.
• Research on wheat seedling and weed recognition algorithms is needed.
• A modified U-net-based network was used to segment wheat and weeds.
• We propose an encoder determination method for semantic segmentation networks based on classification networks.
• The semantic segmentation network obtained in this paper can accurately and quickly segment wheat and weeds in field images.
[ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
9. A modified U-Net with a specific data argumentation method for semantic segmentation of weed images in the field.
- Author
Zou, Kunlin, Chen, Xin, Wang, Yonglin, Zhang, Chunlong, and Zhang, Fan
- Subjects
IMAGE segmentation, DEEP learning, WEEDS, ARTIFICIAL neural networks, CROP yields - Abstract
• To spray pesticides accurately, previous research segmented crops from the image; however, that approach requires a different segmentation algorithm for each crop and is not very practical. We propose an image segmentation method for segmenting the most common weed (green bristlegrass) from images and construct a neural network model for the task. This model can segment green bristlegrass in different crop fields under complex conditions, with strong robustness and adaptability.
• As weeds are difficult to mark manually, we propose a two-stage training method consisting of pre-training and fine-tuning, together with a method for synthesizing pre-training samples. Synthetic images were used to pre-train the network, which was then fine-tuned with real images. This approach effectively reduces the need for manually labeled real samples.
• We propose a new modified U-net based on the characteristics of the weed segmentation task. The U-net was simplified, which reduced the per-image segmentation time while the accuracy of weed segmentation remained high.
Weeds are harmful to crop yield. The segmentation of weeds in images is of great significance for precise weeding and for reducing herbicide pollution. However, in the field environment crops and weeds look similar, so it is difficult to accurately segment weeds from complex field images. In this paper, an algorithm based on deep learning was proposed to segment weeds from images; it can separate weeds from both the soil and the crops. The semantic segmentation algorithm was developed with a simplified U-net. Because image labeling for the semantic segmentation of weeds is difficult, an image augmentation method was proposed, and the network was trained by a two-stage method composed of pre-training and fine-tuning. After training, the intersection over union (IoU) of this method was 92.91% and the average segmentation time per image (ST) was 51.71 ms. The results demonstrate that the modified U-Net can effectively segment weeds from images containing a significant amount of other plants. The weed-targeted image segmentation method proposed in this paper can accurately segment weeds in complex field environments and has a relatively wide range of applicability. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
10. UAV environmental perception and autonomous obstacle avoidance: A deep learning and depth camera combined solution.
- Author
Wang, Dashuai, Li, Wei, Liu, Xiaoguang, Li, Nan, and Zhang, Chunlong
- Subjects
LANDSCAPE assessment, OPERANT conditioning, DEEP learning, IMAGE processing, DATA mining, CAMERAS - Abstract
• Applied a depth camera on a UAV to sense the unstructured farmland environment.
• Proposed a deep-learning-based RGB-D information extraction and fusion method.
• Optimized obstacle avoidance strategies according to differences in obstacles' attributes.
• Improved the level of intelligence and automation of the UAV.
In agriculture, Unmanned Aerial Vehicles (UAVs) have shown great potential for plant protection. Uncertain obstacles randomly distributed in unstructured farmland usually pose significant collision risks to flight safety. To improve the UAV's intelligence and minimize the obstacles' adverse impacts on operating safety and efficiency, we put forward a comprehensive solution consisting of deep-learning-based object detection, image processing, RGB-D information fusion, and a Task Control System (TCS). Taking full advantage of both deep learning and the depth camera, this solution allows the UAV to perceive not only the presence of obstacles but also attributes such as category, profile, and 3D spatial position. Based on the object detection results, the collision avoidance strategy generation method and the corresponding calculation of the optimal collision avoidance flight path are elaborated in detail. A series of experiments was conducted to verify the UAV's environmental perception ability and autonomous obstacle avoidance performance. Results show that the average detection accuracy of the CNN model is 75.4% and the mean processing time per image is 53.33 ms. Additionally, we find that the prediction accuracy of an obstacle's profile and position depends heavily on the relative distance between the object and the depth camera: when the distance is between 4.5 m and 8.0 m, the errors in the object's depth, width, and height are −0.53 m, −0.26 m, and −0.24 m, respectively. Outcomes of simulated flight experiments indicate that the UAV can autonomously determine the optimal obstacle avoidance strategy and generate a distance-minimized flight path based on the results of RGB-D information fusion. The proposed solution has extensive potential to enhance the UAV's environmental perception and autonomous obstacle avoidance abilities. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
Discovery Service for Jio Institute Digital Library