11 results for "Tang, Yunchao"
Search Results
2. Real-Time Defect Detection for Metal Components: A Fusion of Enhanced Canny–Devernay and YOLOv6 Algorithms.
- Author
-
Wang, Hongjun, Xu, Xiujin, Liu, Yuping, Lu, Deda, Liang, Bingqiang, and Tang, Yunchao
- Subjects
METAL detectors, METAL defects, DEEP learning, COMPUTER vision, SURFACE defects, METALS in the body - Abstract
Due to the presence of numerous surface defects, the inadequate contrast between defective and non-defective regions, and the resemblance between noise and subtle defects, edge detection poses a significant challenge in dimensional error detection, leading to increased dimensional measurement inaccuracies. These issues serve as major bottlenecks in the domain of automatic detection of high-precision metal parts. To address these challenges, this research proposes a combined approach involving the utilization of the YOLOv6 deep learning network in conjunction with metal lock body parts for the rapid and accurate detection of surface flaws in metal workpieces. Additionally, an enhanced Canny–Devernay sub-pixel edge detection algorithm is employed to determine the size of the lock core bead hole. The methodology is as follows: The data set for surface defect detection is acquired using the labeling software labelImg and subsequently utilized for training the YOLOv6 model to obtain the model weights. For size measurement, the region of interest (ROI) corresponding to the lock cylinder bead hole is first extracted. Subsequently, Gaussian filtering is applied to the ROI, followed by sub-pixel edge detection using the improved Canny–Devernay algorithm. Finally, the edges are fitted using the least squares method to determine the radius of the fitted circle. The measured value is obtained through size conversion. Experimental detection involves employing the YOLOv6 method to identify surface defects in the lock body workpiece, resulting in an achieved mean Average Precision (mAP) value of 0.911. Furthermore, the size of the lock core bead hole is measured using the improved Canny–Devernay sub-pixel edge detection technique, yielding an average error of less than 0.03 mm. The findings of this research showcase the successful development of a practical method for applying machine vision in the realm of the automatic detection of metal parts.
This achievement is accomplished through the exploration of identification methods and size-measuring techniques for common defects found in metal parts. Consequently, the study establishes a valuable framework for effectively utilizing machine vision in the field of metal parts inspection and defect detection. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
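The final measurement step in the abstract above (fitting a circle to sub-pixel edge points by least squares, then reading off the radius) can be sketched as follows. This is a minimal illustration using the algebraic (Kåsa) formulation on synthetic edge points, not the authors' implementation:

```python
import numpy as np

def fit_circle(points):
    """Algebraic (Kasa) least-squares circle fit.

    Solves x^2 + y^2 = a*x + b*y + c for (a, b, c); then
    center = (a/2, b/2) and radius = sqrt(c + cx^2 + cy^2).
    """
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = x ** 2 + y ** 2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    cx, cy = a / 2.0, b / 2.0
    return cx, cy, np.sqrt(c + cx ** 2 + cy ** 2)

# Synthetic sub-pixel edge samples on a circle of radius 4 centered at (1, -2)
theta = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
edge = np.column_stack([1.0 + 4.0 * np.cos(theta),
                        -2.0 + 4.0 * np.sin(theta)])
cx, cy, r = fit_circle(edge)
```

In the paper's pipeline, the fitted radius in pixels would then be converted to millimetres via the camera calibration scale.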
3. Grand Challenges of Machine-Vision Technology in Civil Structural Health Monitoring
- Author
-
Lin, Yunfan, Yao, Minghui, Zou, Xiangjun, Huang, Xueyu, Tang, Yunchao, and Huang, Zhaofeng
- Subjects
Engineering management, Engineering, Machine vision, business.industry, Deep learning, General Medicine, Artificial intelligence, Structural health monitoring, business, Civil infrastructure, Condition assessment, Grand Challenges, Computer technology - Abstract
Machine-vision technology has progressed remarkably in both accuracy and speed owing to advances in computer technology and artificial intelligence. In this paper, state-of-the-art research on vision-based techniques for civil infrastructure condition assessment is reviewed, and the major challenges of machine-vision techniques in civil structural health monitoring are presented.
- Published
- 2020
4. Identification and Detection of Biological Information on Tiny Biological Targets Based on Subtle Differences.
- Author
-
Chen, Siyu, Tang, Yunchao, Zou, Xiangjun, Huo, Hanlin, Hu, Kewei, Hu, Boran, and Pan, Yaoqiang
- Subjects
DRUG target, IMAGE fusion, DIAGNOSTIC sex determination, IMAGE intensifiers, SEX (Biology), FEATURE extraction - Abstract
To detect different biological features and dynamic tiny targets with subtle features more accurately and efficiently, and to analyze the subtle differences between biological features, this paper proposes classifying and identifying local contour edge images of biological features across different types of targets that show high similarity in their subtle features. Pigeons are taken as the study object: there is little difference in appearance between female and male pigeons, and traditional methods must manually inspect the morphology near the anus, or resort to chromosome or even molecular biological examination, to identify sex accurately. This paper proposes a compound marker region for extracting sex features. This region is strongly correlated with sex differences in pigeons, and its proportion of the image area is low, which reduces computational cost. A dual-weight image fusion feature enhancement algorithm based on edge detection is also proposed: after the color information and contour information of the image are extracted, a new feature-enhanced image is fused according to a pair of weights, increasing the difference between tiny features and thereby enabling visual detection and identification of pigeon sex. The results show a detection accuracy of 98% and an F1 score of 0.98. Compared with the original data set without any enhancement, accuracy increased by 32% and the F1 score increased by 0.35. Experiments show that this method achieves accurate visual sex classification of pigeons and provides intelligent decision data for pigeon breeding. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
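The dual-weight fusion idea described above — extracting a colour map and an edge (contour) map, then blending them under a pair of weights to amplify tiny differences — might be sketched as below. The Sobel-based edge map and the example weights (0.6, 0.4) are illustrative assumptions, not values from the paper:

```python
import numpy as np

def sobel_edges(gray):
    """Edge-magnitude map from Sobel gradients (pure NumPy, edge padding)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(gray.astype(float), 1, mode="edge")
    h, w = gray.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(3):                 # correlate with the 3x3 kernels
        for j in range(3):
            patch = pad[i:i + h, j:j + w]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return np.hypot(gx, gy)

def dual_weight_fusion(color_map, edge_map, w_color=0.6, w_edge=0.4):
    """Blend two feature maps under a pair of weights; output is uint8."""
    def norm(a):
        a = a.astype(float)
        rng = a.max() - a.min()
        return (a - a.min()) / rng if rng > 0 else np.zeros_like(a)
    fused = w_color * norm(color_map) + w_edge * norm(edge_map)
    return (255 * fused).astype(np.uint8)

# Toy grayscale image with a vertical step edge
gray = np.zeros((8, 8))
gray[:, 4:] = 255.0
edges = sobel_edges(gray)
fused = dual_weight_fusion(gray, edges)
```

In practice the two inputs would be the extracted colour-feature image and contour image, and the weight pair would be tuned so that subtle sex-related features are emphasised.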
5. A Novel Agricultural Machinery Intelligent Design System Based on Integrating Image Processing and Knowledge Reasoning.
- Author
-
Li, Cheng'en, Tang, Yunchao, Zou, Xiangjun, Zhang, Po, Lin, Junqiang, Lian, Guoping, and Pan, Yaoqiang
- Subjects
MACHINE design, AGRICULTURAL equipment, IMAGE processing, VIRTUAL prototypes, APPLICATION software, COMPUTER performance, AGRICULTURAL technology, DIGITAL image processing - Abstract
Agricultural machinery intelligence is the inevitable direction of agricultural machinery design, and intelligent design systems are important tools in this process. In this paper, to address the low processing power of traditional agricultural machinery design systems in analyzing data such as fit, tolerance, interchangeability, and the assembly process, and to overcome the high cost of intelligent design modules, their lack of data compatibility, and inconsistency between modules, a novel agricultural machinery intelligent design system integrating image processing and knowledge reasoning is constructed. An image-processing algorithm and trigger are used to detect the feature parameters of key parts of agricultural machinery and build a virtual prototype. At the same time, a special knowledge base of agricultural machinery is constructed to analyze the test data of the virtual prototype. The results of practical application and software evaluation by third-party institutions show that the system improves the efficiency of intelligent design of key parts of agricultural machinery by approximately 20%, reduces the operation error rate of personnel by approximately 40% and the consumption of computer resources by approximately 30%, and greatly reduces the purchase cost of intelligent design systems, providing a reference for intelligent design to guide actual production. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
6. A Study on Long-Close Distance Coordination Control Strategy for Litchi Picking.
- Author
-
Wang, Hongjun, Lin, Yiyan, Xu, Xiujin, Chen, Zhaoyi, Wu, Zihao, and Tang, Yunchao
- Subjects
LITCHI, POINT cloud, COMPUTER vision, ROBOTICS, CAMERAS - Abstract
For the automated robotic picking of bunch-type fruit, the strategy is to roughly determine the location of the bunches and plan the picking route from a long distance, and then locate the picking point precisely from a closer, more appropriate position. The latter reduces the amount of information to be processed and yields more precise and detailed features, thus improving the accuracy of the vision system. In this study, a long-close distance coordination control strategy for a litchi picking robot was proposed based on an Intel RealSense D435i camera combined with a point cloud map collected by the camera. The YOLOv5 object detection network and the DBSCAN point cloud clustering method were used to determine the location of bunch fruits at long distance and thereby deduce the picking sequence. After reaching the close-distance position, the Mask R-CNN instance segmentation method was used to segment the more distinctive bifurcate stems in the field of view. By processing the segmentation masks, a dual reference model of "Point + Line" was proposed, which guided picking by the robotic arm. Compared with existing studies, this strategy took into account the advantages and disadvantages of depth cameras. In experiments on the complete process, the density-clustering approach at long distance was able to distinguish different bunches, while a success rate of 88.46% was achieved in locating fruit-bearing branches at close distance. This exploratory work provides a theoretical and technical reference for future research on fruit-picking robots. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
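The long-distance step above clusters detected fruit points with DBSCAN to separate individual bunches. A minimal 2-D DBSCAN sketch illustrates the idea (the actual system clusters 3-D point clouds from the depth camera; the example centroids are hypothetical):

```python
import math

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN over 2-D points; returns one label per point (-1 = noise)."""
    def region(i):
        return [j for j, q in enumerate(points)
                if math.dist(points[i], q) <= eps]

    labels = [None] * len(points)
    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = region(i)
        if len(nbrs) < min_pts:
            labels[i] = -1                 # provisional noise
            continue
        cluster += 1
        labels[i] = cluster
        queue = list(nbrs)
        while queue:
            j = queue.pop()
            if labels[j] == -1:            # noise reachable from a core point
                labels[j] = cluster        # becomes a border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            j_nbrs = region(j)
            if len(j_nbrs) >= min_pts:     # j is a core point: keep expanding
                queue.extend(j_nbrs)
    return labels

# Two well-separated groups of detected fruit centroids (hypothetical)
pts = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.05, 0.05),
       (10.0, 10.0), (10.1, 10.0), (10.0, 10.1), (10.05, 10.05)]
labels = dbscan(pts, eps=0.5, min_pts=3)
```

Each resulting cluster corresponds to one bunch; the picking sequence can then be derived from the cluster positions.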
7. Vision-Based Three-Dimensional Reconstruction and Monitoring of Large-Scale Steel Tubular Structures.
- Author
-
Tang, Yunchao, Chen, Mingyou, Lin, Yunfan, Huang, Xueyu, Huang, Kuangyu, He, Yuxin, and Li, Lijuan
- Subjects
TUBULAR steel structures, DEEP learning, VIBRATION (Mechanics), POINT cloud, CYCLIC loads, CONCRETE-filled tubes, ALGORITHMS - Abstract
A four-ocular vision system is proposed for the three-dimensional (3D) reconstruction of large-scale concrete-filled steel tube (CFST) specimens under complex testing conditions. These measurements are vitally important for evaluating the seismic performance and 3D deformation of large-scale specimens. A four-ocular vision system is constructed to sample the large-scale CFST; then point cloud acquisition, point cloud filtering, and point cloud stitching algorithms are applied to obtain a 3D point cloud of the specimen surface. A point cloud correction algorithm based on geometric features and a deep learning algorithm are utilized, respectively, to correct the coordinates of the stitched point cloud. This enhances the vision measurement accuracy in complex environments and therefore yields a higher-accuracy 3D model for the purposes of real-time complex surface monitoring. The performance indicators of the two algorithms are evaluated on actual tasks. The cross-sectional diameters at specific heights in the reconstructed models are calculated and compared against laser rangefinder data to test the performance of the proposed algorithms. A visual tracking test on a CFST under cyclic loading shows that the reconstructed output well reflects the complex 3D surface after correction and meets the requirements for dynamic monitoring. The proposed methodology is applicable to complex environments featuring dynamic movement, mechanical vibration, and continuously changing features. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
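Point cloud stitching of the kind described typically hinges on estimating the rigid transform between overlapping views. A minimal Kabsch/SVD sketch under that assumption (this is not the authors' geometric or deep-learning correction algorithm) is:

```python
import numpy as np

def rigid_transform(src, dst):
    """Best-fit rotation R and translation t with dst ≈ R @ src + t (Kabsch)."""
    src, dst = np.asarray(src, dtype=float), np.asarray(dst, dtype=float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # cross-covariance of centered clouds
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T)) # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

# Synthetic check: rotate a small cloud 30 degrees about z and translate it
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, 2.0, 3.0])
cloud = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0],
                  [0, 0, 1], [1, 1, 1], [2, 0.5, 0.3]], dtype=float)
R_est, t_est = rigid_transform(cloud, cloud @ R_true.T + t_true)
```

With correspondences from the four camera views, such a transform maps each partial cloud into a common frame before the correction stage refines the coordinates.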
8. Fruit detection and positioning technology for a Camellia oleifera C. Abel orchard based on improved YOLOv4-tiny model and binocular stereo vision.
- Author
-
Tang, Yunchao, Zhou, Hao, Wang, Hongjun, and Zhang, Yunqi
- Subjects
BINOCULAR vision, CAMELLIA oleifera, OBJECT recognition (Computer vision), FRUIT, K-means clustering, MOBILE robots - Abstract
• Object detection based on deep learning is applied to binocular location. • An improved Camellia oleifera fruit detection model is proposed. • The calculation amount of stereo matching is reduced. • It provides a visual technical reference for field picking robots. In the complex environment of an orchard, changes in illumination, leaf occlusion, and fruit overlap make it challenging for mobile picking robots to detect and locate oil-seed camellia fruit. To address this problem, YOLO-Oleifera was developed as a fruit detection model based on the YOLOv4-tiny model. To obtain clustering results appropriate to the size of the Camellia oleifera fruit, the k-means++ clustering algorithm was used instead of the k-means clustering algorithm used by the YOLOv4-tiny model to determine bounding box priors. Two convolutional kernels of 1 × 1 and 3 × 3 were added after the second and third CSPBlock modules of the YOLOv4-tiny model, respectively. This allows the learning of Camellia oleifera fruit feature information while reducing overall computational complexity. Compared with the classic stereo matching method based on binocular camera images, this method innovatively used the bounding box generated by the YOLO-Oleifera model to extract the region of interest of the fruit, and then adaptively performed stereo matching according to the generation mechanism of the bounding box. This allows the disparity to be determined and facilitates the subsequent use of the triangulation principle to determine the picking position of the fruit. An ablation experiment demonstrated the effective improvement of the YOLOv4-tiny model. Camellia oleifera fruit images obtained under sunlight and shading conditions were used to test the YOLO-Oleifera model, and the model robustly detected the fruit under different illumination conditions. Occluded Camellia oleifera fruit decreased precision and recall due to the loss of semantic information.
Compared with the deep learning models YOLOv5-s, YOLOv3-tiny, and YOLOv4-tiny, the YOLO-Oleifera model achieved the highest AP of 0.9207 with the smallest data weight of 29 MB. The YOLO-Oleifera model took an average of 31 ms to detect each fruit image, fast enough to meet the demand for real-time detection. The algorithm exhibited high positioning stability and robust function despite changes in illumination. The results of this study can provide a technical reference for the robust detection and positioning of Camellia oleifera fruit by a mobile picking robot in a complex orchard environment. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
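The k-means++ change described above replaces random seeding with distance-weighted seeding when clustering labeled box dimensions into anchor priors. A sketch of the seeding rule on hypothetical (width, height) pairs, using Euclidean distance for brevity (anchor clustering in YOLO-family work often uses a 1 − IoU distance instead):

```python
import random

def kmeans_pp_seeds(boxes, k, rng=None):
    """k-means++ seeding: each new center is drawn with probability
    proportional to its squared distance from the nearest chosen center."""
    rng = rng or random.Random(0)
    seeds = [rng.choice(boxes)]
    while len(seeds) < k:
        # Squared distance from each box to its nearest current seed
        d2 = [min((bw - sw) ** 2 + (bh - sh) ** 2 for sw, sh in seeds)
              for bw, bh in boxes]
        r = rng.uniform(0, sum(d2))
        acc, chosen = 0.0, boxes[-1]   # fallback guards float rounding
        for box, w2 in zip(boxes, d2):
            acc += w2
            if acc >= r:
                chosen = box
                break
        seeds.append(chosen)
    return seeds

# Hypothetical (width, height) pairs from labeled fruit boxes
boxes = [(10, 12), (11, 10), (9, 11), (48, 52), (50, 49), (120, 80)]
seeds = kmeans_pp_seeds(boxes, k=3)
```

A full anchor computation would then run standard k-means iterations from these seeds; the spread-out initialization is what makes the resulting priors better match the fruit size distribution.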
9. Transforming unmanned pineapple picking with spatio-temporal convolutional neural networks.
- Author
-
Meng, Fan, Li, Jinhui, Zhang, Yunqi, Qi, Shaojun, and Tang, Yunchao
- Subjects
PINEAPPLE, CONVOLUTIONAL neural networks, TRANSFORMER models, COMPUTER vision, AGRICULTURE, AGRICULTURAL development - Abstract
Automated pineapple harvesting has emerged as a prominent prospective development within the agricultural domain. Nevertheless, the intricate growth conditions that pineapples encounter in the field, such as inadequate light, overexposure, obstructions caused by fruit leaves, or the overlapping of fruits, pose substantial challenges to the accuracy and robustness of traditional real-time detection algorithms. In recent times, Transformer models, when applied to computer vision, have exhibited commendable performance, underscoring their potential for target detection in smart agricultural applications. In this report, we propose a spatio-temporal convolutional neural network model that leverages the shifted window Transformer fusion region convolutional neural network model for the purpose of detecting pineapple fruits. Our study includes a comparative analysis of these results and those obtained through the utilization of conventional models. Additionally, we investigate the influence of various aspects of data preparation, including image resolution, object size, and object complexity, on the ultimate pineapple detection outcomes. Experimental findings elucidate that, in the case of detecting a single-category target like a pineapple, the employment of 2000 annotated supervised data points yields the optimal detection accuracy. Further augmenting the size of the training dataset does not yield any significant improvement in detection accuracy. Furthermore, images of pineapples captured from greater distances encompass smaller targets and an increased number of pineapple instances, rendering them more intricate and challenging to accurately detect. In summary, our study employs the spatio-temporal convolutional neural network model to attain pineapple detection with an impressive accuracy rate of 92.54% and an average inference time of 0.163 s, thus affirming the efficacy of our developed model in achieving superior detection results. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
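The shifted-window Transformer referenced above computes attention within non-overlapping local windows of the feature map. A sketch of the window partition/reverse bookkeeping (Swin-style and illustrative only; the paper's fusion model is not reproduced here):

```python
import numpy as np

def window_partition(x, win):
    """Split an (H, W, C) feature map into (num_windows, win, win, C) tiles."""
    H, W, C = x.shape
    assert H % win == 0 and W % win == 0, "H and W must be divisible by win"
    x = x.reshape(H // win, win, W // win, win, C)
    return x.transpose(0, 2, 1, 3, 4).reshape(-1, win, win, C)

def window_reverse(windows, win, H, W):
    """Inverse of window_partition."""
    C = windows.shape[-1]
    x = windows.reshape(H // win, W // win, win, win, C)
    return x.transpose(0, 2, 1, 3, 4).reshape(H, W, C)

# An 8x8 feature map with 3 channels, split into 4x4 windows
feat = np.arange(8 * 8 * 3).reshape(8, 8, 3)
wins = window_partition(feat, 4)
restored = window_reverse(wins, 4, 8, 8)
```

In a shifted-window layer, the map is additionally rolled by half a window between blocks so that information flows across window boundaries.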
10. Automatic classification of asphalt pavement cracks using a novel integrated generative adversarial networks and improved VGG model.
- Author
-
Que, Yun, Dai, Yi, Ji, Xue, Kwan Leung, Anthony, Chen, Zheng, Jiang, Zhenliang, and Tang, Yunchao
- Subjects
GENERATIVE adversarial networks, CRACKING of pavements, AUTOMATIC classification, DATA augmentation, DEEP learning, ASPHALT pavements, EXTREME weather - Abstract
• A GAN-based pavement cracking image augmentation model was proposed. • The improved VGG model outperformed other algorithms in cracking classification. • The novel integrated GAN and VGG model improved cracking classification accuracy. Crack development has intensified in recent decades under extreme weather events, causing pavement damage. Although various automatic or semi-automatic crack classification algorithms have been proposed, most of them require manual extraction of image features, which is considerably labor-intensive and compromises classification accuracy and efficiency. Moreover, collecting original images for model training is difficult due to various limitations. This study proposes a Generative Adversarial Network (GAN)-based method for data augmentation of the collected crack digital images and a modified deep learning network (i.e., VGG) for crack classification. Firstly, according to the characteristics of the collected data, a GAN-based image generation model is established to expand the training dataset. Then, an improved VGG model is built based on the most efficient model via comparisons of several mainstream feature extraction networks. Finally, comparison studies of classification performance are performed for different classification models (i.e., the improved VGG and traditionally used ones) and datasets (i.e., generated by GAN-based and traditional methods). The model trained on the dataset expanded by the GAN has a higher accuracy rate and lower loss values than those trained by traditional methods. The improved VGG model performs similarly on the validation set and the training set. Compared to the original VGG model, the crack prediction accuracy of the improved VGG model is increased by 5.9% (to 96.30%), and the F1-score is increased by 5.78% (to 96.23%). Evaluated on the same GAN-expanded test set, the improved VGG model has a higher recall and F1-score than GoogLeNet, ResNet18, and AlexNet.
The novel integrated GAN and modified VGG model shows satisfactory efficiency for classifying pavement cracks. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
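The accuracy, recall, and F1 comparisons reported above follow the standard per-class definitions. A small sketch with hypothetical crack-class labels shows how such numbers are computed:

```python
from collections import Counter

def classification_metrics(y_true, y_pred, labels):
    """Per-class precision/recall/F1 and overall accuracy."""
    tp, fp, fn = Counter(), Counter(), Counter()
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1
        else:
            fp[p] += 1   # predicted p, but the true class was t
            fn[t] += 1   # missed a true instance of t
    per_class = {}
    for c in labels:
        prec = tp[c] / (tp[c] + fp[c]) if tp[c] + fp[c] else 0.0
        rec = tp[c] / (tp[c] + fn[c]) if tp[c] + fn[c] else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        per_class[c] = {"precision": prec, "recall": rec, "f1": f1}
    accuracy = sum(tp.values()) / len(y_true)
    return per_class, accuracy

# Hypothetical labels for two crack classes
y_true = ["transverse", "transverse", "alligator", "alligator"]
y_pred = ["transverse", "alligator", "alligator", "alligator"]
metrics, acc = classification_metrics(y_true, y_pred,
                                      ["transverse", "alligator"])
```

The F1-score is the harmonic mean of precision and recall, which is why the paper reports it alongside accuracy: it penalizes a model that trades one for the other.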
11. Collision-free path planning for a guava-harvesting robot based on recurrent deep reinforcement learning.
- Author
-
Lin, Guichao, Zhu, Lixue, Li, Jinhui, Zou, Xiangjun, and Tang, Yunchao
- Subjects
DEEP learning, REINFORCEMENT learning, ROBOTIC path planning, RECURRENT neural networks, MACHINE learning, ALGORITHMS - Abstract
• An image-processing pipeline is introduced to locate guava fruits and obstacles. • A recurrent deep reinforcement learning algorithm is proposed to predict collision-free paths. • The robot is trained in a simulation environment and transferred to the real world directly. • The robot only needs 29 ms to plan a collision-free path with a success rate of 90.9%. In unstructured orchard environments, picking a target fruit without colliding with neighboring branches is a significant challenge for guava-harvesting robots. This paper introduces a fast and robust collision-free path-planning method based on deep reinforcement learning. A recurrent neural network is first adopted to remember and exploit the past states observed by the robot, then a deep deterministic policy gradient (DDPG) algorithm predicts a collision-free path from the states. A simulation environment is developed and its parameters are randomized during the training phase to enable recurrent DDPG to generalize to real-world scenarios. We also introduce an image-processing method that uses a deep neural network to detect obstacles and approximates them with multiple three-dimensional line segments. Simulations show that recurrent DDPG needs only 29 ms to plan a collision-free path, with a success rate of 90.90%. Field tests show that recurrent DDPG can increase grasp, detachment, and harvest success rates by 19.43%, 9.11%, and 10.97%, respectively, compared to cases where no collision-free path-planning algorithm is implemented. Recurrent DDPG strikes a strong balance between efficiency and robustness and may be suitable for other fruits. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
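The abstract above approximates branch obstacles with three-dimensional line segments; checking a planned path against them then reduces to point-to-segment distance tests. A minimal geometric sketch (the clearance threshold and helper names are illustrative, not from the paper):

```python
import numpy as np

def point_segment_distance(p, a, b):
    """Shortest distance from point p to the 3-D segment ab."""
    p, a, b = (np.asarray(v, dtype=float) for v in (p, a, b))
    ab = b - a
    denom = float(ab @ ab)
    if denom == 0.0:                      # degenerate segment
        return float(np.linalg.norm(p - a))
    # Project p onto the segment, clamping to the endpoints
    t = float(np.clip((p - a) @ ab / denom, 0.0, 1.0))
    return float(np.linalg.norm(p - (a + t * ab)))

def path_is_collision_free(waypoints, segments, clearance):
    """True if every waypoint keeps at least `clearance` from every segment."""
    return all(point_segment_distance(w, a, b) >= clearance
               for w in waypoints for a, b in segments)

# One branch segment along the x-axis; a waypoint 2 units above it
branch = ((-1.0, 0.0, 0.0), (1.0, 0.0, 0.0))
d = point_segment_distance((0.0, 2.0, 0.0), *branch)
safe = path_is_collision_free([(0.0, 2.0, 0.0)], [branch], clearance=1.5)
```

In the paper's setting, such a check would serve as the collision criterion inside the simulation environment rather than as the learned planner itself.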