151 results for "image-based visual servoing"
Search Results
2. Fault-tolerant visual servo control for a robotic arm with actuator faults.
- Author
- Li, Jiashuai, Peng, Xiuyan, Li, Bing, Sreeram, Victor, and Wu, Jiawei
- Subjects
- Radial basis functions; fault-tolerant control systems; adaptive control systems; couplings (gearing); actuators
- Abstract
The study targets uncertain coupling faults in robotic arm actuators and proposes a new fault-tolerant visual servo control strategy. Specifically, it considers both multiplicative and additive actuator faults within the dynamics of the robotic arm, treating the coupling faults and time-varying disturbances as an aggregate of concentrated uncertainties. A radial basis function neural network-based state observer is introduced to approximate these concentrated uncertainties (which include the fault information) online, eliminating the need for prior knowledge of the faults. Furthermore, a fault-tolerant controller based on a non-singular fast terminal sliding mode is proposed, which decouples the nominal quantities and the concentrated uncertainties and develops individual adaptive control laws for each. This effectively reduces the detrimental impact of coupled faults and disturbances on the system's performance, facilitating image feature trajectory tracking with minimal jitter, high precision, and strong transient response. The stability of the state observer and the fault-tolerant controller is established through Lyapunov theory. Lastly, numerical simulations validate the efficacy and robustness of the proposed fault-tolerant visual servo control approach. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
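The RBF-neural-network observer described above hinges on a standard mechanism: a Gaussian radial basis function network whose weights are adapted online from the estimation error. A minimal sketch of that mechanism in isolation, where the centers, width, learning rate, and target signal are illustrative choices rather than values from the paper:

```python
import math

def rbf_features(x, centers, width=0.5):
    # Gaussian radial basis functions centered at `centers`
    return [math.exp(-((x - c) ** 2) / (2 * width ** 2)) for c in centers]

def rbf_predict(x, w, centers):
    # weighted sum of basis responses: the network's uncertainty estimate
    return sum(wi * p for wi, p in zip(w, rbf_features(x, centers)))

def rbf_train(samples, centers, lr=0.2, epochs=500):
    # stochastic-gradient weight update driven by the estimation error,
    # the same adaptation an online RBF-NN observer performs
    w = [0.0] * len(centers)
    for _ in range(epochs):
        for x, d in samples:
            err = d - rbf_predict(x, w, centers)
            w = [wi + lr * err * p
                 for wi, p in zip(w, rbf_features(x, centers))]
    return w
```

Fitting an "unknown" disturbance such as 0.3·sin(2x) over [-1, 1] with nine centers illustrates why no prior fault model is needed: the network learns the mapping from data alone.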
3. Fixed-time controller of visual quadrotor for tracking a moving target.
- Author
- He, Wei and Yuan, Liang
- Subjects
- Linear velocity; backstepping control method; model airplanes; monoculars; computer simulation
- Abstract
To eliminate the dependence of finite-time control on the initial system state in image-based visual servoing (IBVS) of a quadrotor UAV (QUAV) tracking a moving target, we propose a control scheme based on fixed-time stability. We select the image moments of the virtual camera plane as features and, based on them, establish an image moment dynamics model that includes the target motion parameters. We then use the backstepping method to design the fixed-time controller for the system. The controller design consists of two parts. First, we design a fixed-time stable linear velocity observer to address the unmeasurable linear velocity and unmeasurable monocular camera depth of the QUAV. Then, using a high-order tracking differentiator to estimate the target linear velocity as a feedforward term, combined with the QUAV linear velocity estimated by the fixed-time observer, we design the system's fixed-time controller via backstepping. We prove the fixed-time stability of the controller using Lyapunov theory. The effectiveness and robustness of the proposed method are demonstrated by numerical simulation, and comparative simulations show that the method guarantees the system's convergence rate and achieves high control accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
4. Finite-time dynamic visual servo control for quadrotor tracking unknown motion target: Finite-time dynamic visual servo control for quadrotor
- Author
- Hu, Shengrong, Wang, Qiang, Wang, Fei, and Li, Yixian
- Published
- 2024
- Full Text
- View/download PDF
5. Image-Based Visual Servoing for Three Degree-of-Freedom Robotic Arm with Actuator Faults.
- Author
- Li, Jiashuai, Peng, Xiuyan, Li, Bing, Li, Mingze, and Wu, Jiawei
- Subjects
- Sliding mode control; iterative learning control; actuators; fault-tolerant control systems; robotics
- Abstract
This study presents a novel image-based visual servoing fault-tolerant control strategy aimed at ensuring the successful completion of visual servoing tasks despite the presence of robotic arm actuator faults. Initially, a depth-independent image-based visual servoing model is established to mitigate the effects of inaccurate camera parameters and missing depth information on the system. Additionally, a robotic arm dynamic model is constructed, which simultaneously considers both multiplicative and additive actuator faults. Subsequently, model uncertainties, unknown disturbances, and coupled actuator faults are consolidated as centralized uncertainties, and an iterative learning fault observer is designed to estimate them. Based on this, suitable sliding surfaces and control laws are developed within the super-twisting sliding mode visual servo controller to rapidly reduce the control deviation to near zero and circumvent the chattering phenomenon typically observed in traditional sliding mode control. Finally, through comparative simulations of different control strategies, the proposed method is shown to effectively counteract the effect of actuator faults and exhibit robust performance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
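The super-twisting algorithm referenced above is a second-order sliding mode law whose control signal is continuous, which is why it avoids the chattering of plain sign-based sliding mode. A minimal scalar sketch on s' = u + d(t) with a bounded-derivative disturbance — the gains and disturbance are illustrative, not the paper's design, and satisfy the usual convergence condition k2 > sup|d'|:

```python
import math

def super_twisting(s0, k1=1.5, k2=1.1, dt=1e-4, t_end=5.0):
    """Super-twisting sketch on a scalar sliding variable s' = u + d(t).
       u = -k1*sqrt(|s|)*sign(s) + v,  v' = -k2*sign(s):
       the discontinuity is hidden under an integrator, so u stays
       continuous while s still reaches zero in finite time."""
    def d(t):
        return 0.2 * math.sin(3.0 * t)   # disturbance with |d'| <= 0.6 < k2
    s, v, t = s0, 0.0, 0.0
    while t < t_end:
        u = -k1 * math.sqrt(abs(s)) * math.copysign(1.0, s) + v
        s += dt * (u + d(t))
        v += dt * (-k2 * math.copysign(1.0, s))
        t += dt
    return s
```

Despite the persistent sinusoidal disturbance, the sliding variable is driven to (and held at) a small neighborhood of zero set only by the integration step, with no high-frequency switching in u.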
6. A Novel Fuzzy Image-Based UAV Landing Using RGBD Data and Visual SLAM
- Author
- Shayan Sepahvand, Niloufar Amiri, Houman Masnavi, Iraj Mantegh, and Farrokh Janabi-Sharifi
- Subjects
- image-based visual servoing; SLAM; fuzzy systems; UAV landing; safe landing zone; Motor vehicles. Aeronautics. Astronautics (TL1-4050)
- Abstract
In this work, an innovative perception-guided approach is proposed for landing zone detection and realization of Unmanned Aerial Vehicles (UAVs) operating in unstructured environments ridden with obstacles. To accommodate secure landing, two well-established tools, namely fuzzy systems and visual Simultaneous Localization and Mapping (vSLAM), are implemented into the landing pipeline. Firstly, colored images and point clouds acquired by a visual sensory device are processed to serve as characterizing maps that acquire information about flatness, steepness, inclination, and depth variation. By leveraging these images, a novel fuzzy map infers the areas for risk-free landing on which the UAV can safely land. Subsequently, the vSLAM system is employed to estimate the platform’s pose and an additional set of point clouds. The vSLAM point clouds presented in the corresponding keyframe are projected back onto the image plane on which a threshold fuzzy landing score map is applied. In other words, this binary image serves as a mask for the re-projected vSLAM world points to identify the best subset for landing. Once these image points are identified, their corresponding world points are located, and among them, the center of the cluster with the largest area is chosen as the point to land. Depending on the UAV’s size, four synthesis points are added to the vSLAM point cloud to execute the image-based visual servoing landing using image moment features. The effectiveness of the landing package is assessed through the ROS Gazebo simulation environment, where comparisons are made with a state-of-the-art landing site detection method.
- Published
- 2024
- Full Text
- View/download PDF
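The core fuzzy-map idea above — combining per-cell terrain scores into a single landing score — can be sketched with a Mamdani-style min conjunction. This toy version uses only two criteria and is not the paper's inference system; the maps and membership values are invented for illustration:

```python
def landing_score(flatness, depth_uniformity):
    # Mamdani-style fuzzy AND: a cell is safe only if it is flat
    # AND its depth variation is low, so take the minimum membership.
    return min(flatness, depth_uniformity)

def best_cell(flat_map, depth_map):
    # scan the grid and return the (row, col) with the highest landing score
    best, best_rc = -1.0, None
    for r, (frow, drow) in enumerate(zip(flat_map, depth_map)):
        for c, (f, d) in enumerate(zip(frow, drow)):
            s = landing_score(f, d)
            if s > best:
                best, best_rc = s, (r, c)
    return best_rc, best
```

In the paper the thresholded score map is further used as a binary mask over re-projected vSLAM world points; this sketch stops at the score-map stage.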
7. Fuzzy Adaptive Model Predictive Control for Image-based Visual Servoing of Robot Manipulators With Kinematic Constraints.
- Author
- Zhu, Tianqi, Mao, Jianliang, Han, Linyan, and Zhang, Chuanlin
- Abstract
This paper presents a novel image-based visual servoing (IBVS) controller for a six-degree-of-freedom (6-DoF) robot manipulator by employing a fuzzy adaptive model predictive control (FAMPC) approach. The control strategy allows the robot to track the desired feature points adaptively and fulfill the kinematic constraints arising in a vision-guided task with different initial Cartesian poses. To this aim, the successive linearization method is first employed to transform the nonlinear IBVS model into a linear time-invariant (LTI) one at each sampling instant. The nonlinear optimization problem is thereby reduced to a convex quadratic programming (QP) problem. Subsequently, fuzzy logic is exploited to tune the weighting coefficients in the cost function on the basis of image pixel changes at each step, endowing the controller with reliable adaptation to different working environments. Experimental comparison tests performed on a 6-DoF robot manipulator with an eye-in-hand configuration demonstrate the efficacy of the proposed controller. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
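The successive-linearization step above is what turns the servo task into a convex QP. For a fully decoupled linearized feature model the QP even splits per axis into a closed-form minimizer followed by clipping; the sketch below uses that special case with a one-step horizon and illustrative weights — far simpler than the paper's FAMPC, but it shows the predictive-control structure:

```python
def mpc_step(e, dt=0.05, lam=1e-3, u_max=0.5):
    """One predictive step for the decoupled linearized model
       e_next = e + dt*u: minimize |e_next|^2 + lam*|u|^2 subject to
       |u_i| <= u_max. Because the model is decoupled, the QP separates
       per axis: closed-form minimizer, then projection onto the box."""
    u = []
    for ei in e:
        ui = -dt * ei / (dt * dt + lam)       # unconstrained minimizer
        u.append(max(-u_max, min(u_max, ui))) # input (velocity) constraint
    return u

def run_servo(e, steps=400):
    # roll the one-step controller forward on the same model (dt matches
    # the dt baked into mpc_step's defaults)
    for _ in range(steps):
        u = mpc_step(e)
        e = [ei + 0.05 * ui for ei, ui in zip(e, u)]
    return e
```

While the error is large the inputs saturate at the constraint (a constant-rate approach); once the unconstrained minimizer fits inside the box, the loop contracts geometrically to zero.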
8. Adaptive visual servoing for the robot manipulator with extreme learning machine and reinforcement learning.
- Author
- Li, Jiashuai, Peng, Xiuyan, Li, Bing, Sreeram, Victor, Wu, Jiawei, and Mi, Wansheng
- Subjects
- Reinforcement learning; machine learning; particle swarm optimization; manipulators (machinery); camera calibration; robots
- Abstract
In this study, a novel image‐based visual servo (IBVS) controller for robot manipulators is investigated using an optimized extreme learning machine (ELM) algorithm and an offline reinforcement learning (RL) algorithm. First of all, the classical IBVS method and its difficulties in accurately estimating the image interaction matrix and avoiding the singularity of pseudo‐inverse are introduced. Subsequently, an IBVS method based on ELM and RL is proposed to solve the problem of the singularity of the pseudo‐inverse solution and tune adaptive servo gain, improving the servo efficiency and stability. Specifically, the ELM algorithm optimized by particle swarm optimization (PSO) was used to approximate the pseudo‐inverse of the image interaction matrix to reduce the influence of camera calibration errors. Then, the RL algorithm was adopted to tune the adaptive visual servo gain in continuous space and improve the convergence speed. Finally, comparative simulation experiments on a 6‐DOF robot manipulator were conducted to verify the effectiveness of the proposed IBVS controller. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
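The ELM at the core of the approach is simple to sketch: a randomly weighted hidden layer whose output weights are solved in closed form by least squares. The version below is 1-D, solves the ridge-regularized normal equations by Gaussian elimination, and omits the PSO tuning and the pseudo-inverse-of-interaction-matrix role described in the paper; sizes and ranges are illustrative.

```python
import math
import random

def solve(A, b):
    # Gaussian elimination with partial pivoting for the small normal system
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def elm_train(samples, hidden=20, seed=0):
    """Extreme learning machine: random (frozen) input weights and biases,
       tanh hidden layer, output weights from ridge least squares."""
    rng = random.Random(seed)
    w_in = [rng.uniform(-2, 2) for _ in range(hidden)]
    b_in = [rng.uniform(-2, 2) for _ in range(hidden)]
    H = [[math.tanh(w * x + b) for w, b in zip(w_in, b_in)] for x, _ in samples]
    y = [t for _, t in samples]
    # normal equations (H^T H + eps*I) beta = H^T y
    A = [[sum(Hk[i] * Hk[j] for Hk in H) + (1e-6 if i == j else 0.0)
          for j in range(hidden)] for i in range(hidden)]
    rhs = [sum(Hk[i] * yk for Hk, yk in zip(H, y)) for i in range(hidden)]
    return w_in, b_in, solve(A, rhs)

def elm_predict(x, model):
    w_in, b_in, beta = model
    return sum(bi * math.tanh(w * x + b) for bi, w, b in zip(beta, w_in, b_in))
```

Because only the output layer is trained, fitting is a single linear solve — the speed that makes ELMs attractive for approximating mappings like the interaction-matrix pseudo-inverse online.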
9. Robotic grasping and assembly of screws based on visual servoing using point features.
- Author
- Hao, Tiantian and Xu, De
- Subjects
- Robotic assembly; workflow; screws; robots; algorithms; feature extraction
- Abstract
The robotic assembly of screws is a basic task in the automated assembly of complex equipment. However, a complete robotic assembly framework is difficult to design because multiple technologies must be integrated to achieve efficient and stable operation. In this paper, a robotic assembly workflow is proposed, which mainly consists of a feature extraction stage, a grasping stage, and an installation stage. In the feature extraction stage, a feature extraction algorithm consisting of a semantic segmentation network and an object classification module is designed. The semantic segmentation network segments the areas of objects of multiple categories, and the object classification module selects an appropriate target object. The grasping and installation stages involve the position alignment of the objects. A position alignment method is developed based on image-based visual servoing using point features extracted from the segmented areas. The experiments are conducted on a real robot. The alignment errors in the grasping stage are less than 0.53 mm, and assemblies of an M6 screw succeeded in all ten experiments. The experimental results verify the effectiveness of the proposed method. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
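The position-alignment stage builds on the classic IBVS law v = -λ·L⁺·e for point features. A minimal sketch for a single normalized image point under pure x-y camera translation at known depth Z, where the interaction matrix reduces to L = -(1/Z)·I — values illustrative; the paper's pipeline adds segmentation-based feature extraction on top of this loop:

```python
def ibvs_translation(s, s_star, Z=1.0, lam=2.0, dt=0.01, steps=500):
    """Classic IBVS law v = -lam * L^+ * e for one normalized image point
       under pure x-y camera translation at known depth Z. Here
       L = -(1/Z) * I, so L^+ = -Z * I and the closed loop obeys
       e' = -lam * e (exponential decay of the feature error)."""
    x, y = s
    for _ in range(steps):
        ex, ey = x - s_star[0], y - s_star[1]
        vx, vy = lam * Z * ex, lam * Z * ey   # v = -lam * L^+ * e
        x += dt * (-vx / Z)                   # feature dynamics s' = L * v
        y += dt * (-vy / Z)
    return x, y
```

Driving the image-plane error, rather than a Cartesian pose error, to zero is what makes the alignment robust to calibration errors — the servo only needs the features to coincide in the image.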
10. Robot manipulator visual servoing based on image moments and improved firefly optimization algorithm-based extreme learning machine.
- Author
- Zhou, Zhiyu, Wang, Junjie, Zhu, Zefei, and Xia, Jingsong
- Subjects
- Optimization algorithms; fireflies; machine learning; robots; nonlinear equations; problem solving
- Abstract
We propose an improved extreme learning machine (ELM) to solve the decoupling problem between the camera coordinates and the image moment features in a robot manipulator image-based visual servoing system, that is, to determine the nonlinear relationship between them. First, an improved firefly optimization algorithm (IFOA) based on an adaptive inertial weight and individual variations is proposed. Then, the IFOA optimizes the weights and hidden biases of the ELM algorithm, improving its training accuracy. Finally, the improved firefly optimization algorithm is integrated into the ELM (IFOA-ELM) to solve the decoupling problem and ensure stable performance. The experimental results show that the estimated error of the rotation angle around the camera frame in the visual servoing system determined by the IFOA-ELM algorithm is less than 0.25°, confirming that the proposed algorithm exhibits good robustness and stability. • A novel extreme learning machine (ELM) is used to solve the decoupling problem of the nonlinear mapping in visual servoing. • A new firefly algorithm uses an adaptive inertial weight and individual variations for optimization and exhibits faster convergence. • The input weights, biases, and output weights of the hidden layers of the ELM are optimized by the improved firefly optimization algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
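For reference, the baseline firefly algorithm that the IFOA improves on can be sketched as follows. This is the plain version — fixed attraction model plus a shrinking random walk — without the adaptive inertial weight and individual-variation operators the paper adds; all parameters are illustrative:

```python
import math
import random

def firefly_minimize(f, dim=2, n=15, iters=100,
                     beta0=1.0, gamma=1.0, alpha=0.2, seed=0):
    """Plain firefly algorithm: each firefly moves toward every brighter
       (lower-cost) one, with attraction decaying in squared distance,
       plus a random walk whose scale shrinks over the iterations."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-2.0, 2.0) for _ in range(dim)] for _ in range(n)]
    for it in range(iters):
        step = alpha * (0.97 ** it)          # shrinking random-walk scale
        cost = [f(x) for x in pop]           # brightness = -cost; refreshed
        for i in range(n):                   # once per sweep
            for j in range(n):
                if cost[j] < cost[i]:        # firefly j is brighter
                    r2 = sum((a - b) ** 2 for a, b in zip(pop[i], pop[j]))
                    beta = beta0 * math.exp(-gamma * r2)   # attraction
                    pop[i] = [a + beta * (b - a) + step * (rng.random() - 0.5)
                              for a, b in zip(pop[i], pop[j])]
    return min(pop, key=f)
```

The swarm contracts onto the best-found region while the decaying random walk shifts the search from exploration to exploitation — the behavior the paper's adaptive inertial weight tunes more carefully.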
11. Adaptive intelligent vision-based control of a flexible-link manipulator.
- Author
- Sahu, Umesh Kumar, Patra, Dipti, and Subudhi, Bidyadhar
- Subjects
- Intelligent control systems; reinforcement learning; range of motion of joints; image sensors; robust control
- Abstract
Present space robots such as planetary robots and flexible robots have structural flexibility in their arms and joints, which leads to tip-positioning error owing to tip deflection. The flexible-link manipulator (FLM) is a non-collocated system with unstable and inaccurate performance, so tip-tracking of an FLM poses difficult control challenges. The purpose of this study is to design an adaptive intelligent tip-tracking control strategy for FLMs to deal with these challenges. A vision sensor is utilized in conjunction with a traditional mechanical sensor to directly measure the tip position. Among visual servoing control techniques, image-based visual servoing (IBVS) is the more efficient; however, the IBVS scheme faces numerous difficulties that impair performance in real-time applications, including singularities in the interaction matrix, local minima in the trajectory, and visibility issues. To address these issues, a novel adaptive intelligent IBVS (AI-IBVS) controller for tip-tracking control of a two-link flexible manipulator (TLFM) is designed in this study. In particular, this paper addresses the IBVS issues along with retention of the visual features in the field of view (FOV). First, in order to retain the object within the camera FOV, an intelligent controller with off-policy reinforcement learning (RL) is proposed. Second, a composite controller for the TLFM is developed to combine the RL controller and the IBVS controller. Simulations have been conducted to examine the effectiveness and robustness of the proposed controller. The results show that the AI-IBVS controller developed here possesses self-learning and decision-making capabilities for robust tip-tracking control of the TLFM. Further, a comparison with other similar approaches is presented. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
12. Model predictive control for constrained robot manipulator visual servoing tuned by reinforcement learning
- Author
- Jiashuai Li, Xiuyan Peng, Bing Li, Victor Sreeram, Jiawei Wu, Ziang Chen, and Mingze Li
- Subjects
- robot manipulator; image-based visual servoing; model predictive control; reinforcement learning; Biotechnology (TP248.13-248.65); Mathematics (QA1-939)
- Abstract
For constrained image-based visual servoing (IBVS) of robot manipulators, a model predictive control (MPC) strategy tuned by reinforcement learning (RL) is proposed in this study. First, model predictive control is used to transform the image-based visual servo task into a nonlinear optimization problem while taking system constraints into consideration. In the design of the model predictive controller, a depth-independent visual servo model is presented as the predictive model. Next, a suitable model predictive control objective function weight matrix is trained and obtained by a deep-deterministic-policy-gradient-based (DDPG) RL algorithm. Then, the proposed controller gives the sequential joint signals, so that the robot manipulator can respond to the desired state quickly. Finally, appropriate comparative simulation experiments are developed to illustrate the efficacy and stability of the suggested strategy.
- Published
- 2023
- Full Text
- View/download PDF
13. Global finite-time control for image-based visual servoing of quadrotor using backstepping method.
- Author
- He, Wei and Yuan, Liang
- Subjects
- Backstepping control method; linear velocity; adaptive control systems; drone aircraft
- Abstract
The main objective of this paper is to use a novel finite-time control method to solve the global finite-time convergence problem of image-based visual servoing of a quadrotor unmanned aerial vehicle (QUAV). The effects of external wind resistance and system uncertainty are considered in the QUAV dynamics, and a disturbance observer is used to compensate for them. To obtain the target feature depth information, a novel nonlinear finite-time linear velocity observer is proposed using the backstepping method. Based on these two observers, we use the backstepping method to design the global finite-time controller of the system, which is proved globally finite-time stable via the Lyapunov method. Finally, numerical simulation and ROS Gazebo simulation results demonstrate the effectiveness of the proposed control scheme. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
14. Image-Based Visual Servoing for Three Degree-of-Freedom Robotic Arm with Actuator Faults
- Author
- Jiashuai Li, Xiuyan Peng, Bing Li, Mingze Li, and Jiawei Wu
- Subjects
- image-based visual servoing; fault-tolerant control; iterative learning; sliding mode control; robotic arm; Materials of engineering and construction. Mechanics of materials (TA401-492); Production of electric energy or power. Powerplants. Central stations (TK1001-1841)
- Abstract
This study presents a novel image-based visual servoing fault-tolerant control strategy aimed at ensuring the successful completion of visual servoing tasks despite the presence of robotic arm actuator faults. Initially, a depth-independent image-based visual servoing model is established to mitigate the effects of inaccurate camera parameters and missing depth information on the system. Additionally, a robotic arm dynamic model is constructed, which simultaneously considers both multiplicative and additive actuator faults. Subsequently, model uncertainties, unknown disturbances, and coupled actuator faults are consolidated as centralized uncertainties, and an iterative learning fault observer is designed to estimate them. Based on this, suitable sliding surfaces and control laws are developed within the super-twisting sliding mode visual servo controller to rapidly reduce the control deviation to near zero and circumvent the chattering phenomenon typically observed in traditional sliding mode control. Finally, through comparative simulations of different control strategies, the proposed method is shown to effectively counteract the effect of actuator faults and exhibit robust performance.
- Published
- 2024
- Full Text
- View/download PDF
15. Online Predictive Visual Servo Control for Constrained Target Tracking of Fixed-Wing Unmanned Aerial Vehicles
- Author
- Lingjie Yang, Xiangke Wang, Yu Zhou, Zhihong Liu, and Lincheng Shen
- Subjects
- target tracking; fixed-wing UAV; pan-tilt camera; image-based visual servoing; model predictive control; Motor vehicles. Aeronautics. Astronautics (TL1-4050)
- Abstract
This paper proposes an online predictive control method for fixed-wing unmanned aerial vehicles (UAVs) with a pan-tilt camera in target tracking. It aims to achieve long-term tracking while concurrently maintaining the target near the image center. Particularly, this work takes the UAV and pan-tilt camera as an overall system and deals with the target tracking problem via joint optimization, so that the tracking ability of the UAV can be improved. The image captured by the pan-tilt camera is the unique input associated with the target, and model predictive control (MPC) is used to solve the optimization problem with constraints that cannot be performed by the classic image-based visual servoing (IBVS). In addition to the dynamic constraint of the UAV, the perception constraint of the camera is also taken into consideration, which is described by the maximum distance between the target and the camera. The accurate detection of the target depends on the amount of its feature information contained in the image, which is highly related to the relative distance between the target and the camera. Moreover, considering the real-time requirements of practical applications, an MPC strategy based on soft constraints and a warm start is presented. Furthermore, a switching-based approach is proposed to return the target back to the perception range quickly once it exceeds the range, and the exponential asymptotic stability of the switched controller is proven as well. Both numerical and hardware-in-the-loop (HITL) simulations are conducted to verify the effectiveness and superiority of the proposed method compared with the existing method.
- Published
- 2024
- Full Text
- View/download PDF
16. Image-Based Visual Servoing Control of Quadrotor with MsQL Method
- Author
- Yi, Xin-Ning, Luo, Biao, and Xue, Shan (volume edited by Jing, Xingjian, Ding, Hu, and Wang, Jiqiang)
- Published
- 2022
- Full Text
- View/download PDF
17. Pesticide-Free Robotic Control of Aphids as Crop Pests
- Author
- Virginie Lacotte, Toan NGuyen, Javier Diaz Sempere, Vivien Novales, Vincent Dufour, Richard Moreau, Minh Tu Pham, Kanty Rabenorosoa, Sergio Peignier, François G. Feugier, Robin Gaetani, Thomas Grenier, Bruno Masenelli, Pedro da Silva, Abdelaziz Heddi, and Arnaud Lelevé
- Subjects
- farming; robotics; aphid detection; laser-based neutralization; deep learning; image-based visual servoing; Agriculture (General) (S1-972); Engineering (General). Civil engineering (General) (TA1-2040)
- Abstract
Because our civilization has relied on pesticides to fight weeds, insects, and diseases since antiquity, the use of these chemicals has become natural and exclusive. Unfortunately, the use of pesticides has progressively had alarming effects on water quality, biodiversity, and human health. This paper proposes to improve farming practices by replacing pesticides with a laser-based robotic approach. This study focused on the neutralization of aphids, as they are among the most harmful pests for crops and complex to control. With the help of deep learning, we developed a mobile robot that spans crop rows, locates aphids, and neutralizes them with laser beams. We have built a prototype with the sole purpose of validating the localization-neutralization loop on a single seedling row. The experiments performed in our laboratory demonstrate the feasibility of detecting different lines of aphids (50% detected at 3 cm/s) and of neutralizing them (90% mortality) without impacting the growth of their host plants. The results are encouraging since aphids are one of the most challenging crop pests to eradicate. However, enhancements in detection and mainly in targeting are necessary to be useful in a real farming context. Moreover, robustness regarding field conditions should be evaluated.
- Published
- 2022
- Full Text
- View/download PDF
18. Visual servoing of quadrotor UAVs for slant targets with autonomous object search.
- Author
- Shi, Lintao, Li, Baoquan, Shi, Wuxi, and Zhang, Xuebo
- Abstract
In this paper, an enhanced visual servoing method is designed for a quadrotor unmanned aerial vehicle (UAV) based on virtual-plane image moments, under the underactuation and tight coupling constraints of UAV kinematics. Moreover, in order to make the UAV search for visual targets autonomously in the target vicinity during flight, a flexible flight system is developed with stages of take-off, target searching, and image-based visual servoing (IBVS). With a dual-camera sensor configuration, the UAV system searches for targets from given directions while performing localization. A virtual image plane is constructed and image moments are adopted to decouple the UAV's lateral movement. For a non-horizontal target, homography is utilized to construct the target plane and transform it into a horizontal plane. Backstepping techniques are used to derive the nonlinear controller that realizes the IBVS strategy. Stability analysis proves global asymptotic performance of the closed-loop system. Experimental verification shows the feasibility of the overall flight system and the effectiveness of the visual servoing controller. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
19. Visual servoing control of 4-DOF palletizing robotic arm for vision based sorting robot system.
- Author
- Cong, Vo Duy
- Abstract
In this paper, an image-based visual servoing scheme is integrated into a vision-based sorting robot system to increase the flexibility and performance of the system. A novel decoupled image-based visual servoing method is proposed to control a 4-DOF robot arm for grasping products. Area, orientation angle, and centroid features extracted from the image are used as inputs to control the velocities of the robot arm. A multi-threshold algorithm is presented to detect and classify objects and extract the image features. Owing to their simplicity and efficiency, the image processing and visual servoing algorithms can be implemented in real time on a low-cost embedded computer. Furthermore, the system is easy to set up because robot and camera calibrations are not required. Experimental results reveal the time effectiveness and robustness of the system, which can be used in industrial processes to reduce the required time and improve the performance and flexibility of the production line. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
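The area, orientation-angle, and centroid features used above are standard image-moment quantities of a binary mask. A sketch of their computation from raw moments (m00, m10, m01) and central moments (mu20, mu02, mu11) — the paper's multi-threshold segmentation that produces the mask is not reproduced:

```python
import math

def moment_features(mask):
    """Area, centroid, and orientation of a binary mask (list of 0/1 rows),
       using the standard raw- and central-moment definitions."""
    m00 = m10 = m01 = 0.0
    for r, row in enumerate(mask):
        for c, v in enumerate(row):
            if v:
                m00 += 1
                m10 += c
                m01 += r
    cx, cy = m10 / m00, m01 / m00                     # centroid
    mu20 = mu02 = mu11 = 0.0
    for r, row in enumerate(mask):
        for c, v in enumerate(row):
            if v:
                mu20 += (c - cx) ** 2
                mu02 += (r - cy) ** 2
                mu11 += (c - cx) * (r - cy)
    theta = 0.5 * math.atan2(2 * mu11, mu20 - mu02)   # orientation angle
    return m00, (cx, cy), theta
```

These three scalars are exactly the decoupled feature set the abstract mentions: area drives approach depth, the centroid drives planar translation, and theta drives the wrist rotation.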
20. The Study of 3D Reaching Tasks on Visual Servoing Using Quaternion.
- Author
- Liucun Zhu, Junqi Luo, Mingyou Chen, and Haofeng Deng
- Subjects
- Quaternions; Jacobian matrices; monocular vision; monoculars
- Abstract
Image-based monocular visual servoing has difficulty realizing grasping tasks in three dimensions. This study presents a quaternion-based method for 3D reaching tasks in visual servoing. The method uses a camera and a laser scanner to acquire image and depth information; the parameters of the Jacobian matrix are then determined via the quaternion method. A particular constraint mechanism is also proposed to optimize the reaching trajectory of the robot. The simulation results demonstrate that the proposed method can accomplish 3D reaching tasks successfully, with a significant performance improvement over conventional methods. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
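As background for the quaternion machinery such a method builds on, here are the standard Hamilton-convention operations for rotating a 3-D point by a unit quaternion q = (w, x, y, z). This is textbook material, not the paper's Jacobian construction:

```python
import math

def quat_mul(a, b):
    # Hamilton product of quaternions a = (w, x, y, z), b = (w, x, y, z)
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def quat_axis_angle(axis, angle):
    # unit quaternion for a rotation of `angle` radians about `axis`
    s = math.sin(angle / 2)
    n = math.sqrt(sum(a * a for a in axis))
    return (math.cos(angle / 2), *(a * s / n for a in axis))

def quat_rotate(q, v):
    # rotate the 3-D point v: embed as a pure quaternion and conjugate,
    # p' = q * (0, v) * q^-1 (q^-1 is the conjugate for unit quaternions)
    w, x, y, z = q
    p = quat_mul(quat_mul(q, (0.0, *v)), (w, -x, -y, -z))
    return p[1:]
```

Parameterizing orientation this way avoids the gimbal-lock singularities of Euler angles, which is the usual motivation for quaternion formulations of reaching tasks.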
21. Image-based finite-time visual servoing of a quadrotor for tracking a moving target.
- Author
- He, Wei and Yuan, Liang
- Abstract
This paper proposes an image-based visual servoing control method for a quadrotor UAV (QUAV) tracking a moving target. First, a dynamic image model with moving-target parameters is established based on image moment features in the virtual camera plane. To handle the unpredictability of the moving target in space, we use a high-order differentiator to estimate its state parameters. To address the unknown image depth arising from a monocular camera, we derive a nonlinear finite-time linear velocity observer in the virtual image plane, which can estimate the linear velocity of the QUAV while avoiding the measurement of image depth. Based on the above information, we design the global finite-time controller and use Lyapunov theory to prove the finite-time stability of the system. Finally, numerical simulations verify the convergence of the proposed control scheme, and ROS Gazebo simulations demonstrate its improved tracking-error performance. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
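A common concrete choice for the "high-order differentiator" role described above is Levant's robust exact differentiator, a super-twisting-based observer whose state z1 converges to f'(t) in finite time when |f''| is bounded. The sketch below is that standard first-order form with illustrative gains; the paper's differentiator may differ:

```python
import math

def track_derivative(f, t_end=6.0, dt=1e-4, lam1=2.0, lam2=2.0):
    """First-order robust exact differentiator (super-twisting form):
       z0 tracks f(t) and z1 tracks f'(t), valid when |f''| <= lam2.
       Unlike finite-difference differentiation, the estimate does not
       amplify high-frequency content of the sign nonlinearity."""
    z0, z1, t = f(0.0), 0.0, 0.0
    while t < t_end:
        e = z0 - f(t)
        z0 += dt * (z1 - lam1 * math.sqrt(abs(e)) * math.copysign(1.0, e))
        z1 += dt * (-lam2 * math.copysign(1.0, e))
        t += dt
    return z1
```

Applied to a target trajectory measurement, z1 delivers the target-velocity estimate that the abstract feeds forward into the controller.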
22. Pesticide-Free Robotic Control of Aphids as Crop Pests.
- Author
- Lacotte, Virginie, NGuyen, Toan, Sempere, Javier Diaz, Novales, Vivien, Dufour, Vincent, Moreau, Richard, Pham, Minh Tu, Rabenorosoa, Kanty, Peignier, Sergio, Feugier, François G., Gaetani, Robin, Grenier, Thomas, Masenelli, Bruno, da Silva, Pedro, Heddi, Abdelaziz, and Lelevé, Arnaud
- Subjects
- Agricultural pests; aphid control; host plants; robotics; deep learning; pesticides
- Abstract
Because our civilization has relied on pesticides to fight weeds, insects, and diseases since antiquity, the use of these chemicals has become natural and exclusive. Unfortunately, the use of pesticides has progressively had alarming effects on water quality, biodiversity, and human health. This paper proposes to improve farming practices by replacing pesticides with a laser-based robotic approach. This study focused on the neutralization of aphids, as they are among the most harmful pests for crops and complex to control. With the help of deep learning, we developed a mobile robot that spans crop rows, locates aphids, and neutralizes them with laser beams. We have built a prototype with the sole purpose of validating the localization-neutralization loop on a single seedling row. The experiments performed in our laboratory demonstrate the feasibility of detecting different lines of aphids (50% detected at 3 cm/s) and of neutralizing them (90% mortality) without impacting the growth of their host plants. The results are encouraging since aphids are one of the most challenging crop pests to eradicate. However, enhancements in detection and mainly in targeting are necessary to be useful in a real farming context. Moreover, robustness regarding field conditions should be evaluated. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
23. Logarithmic Observation of Feature Depth for Image-Based Visual Servoing.
- Author
-
Li, Xiangfei, Zhao, Huan, and Ding, Han
- Subjects
- *
STEREO vision (Computer science) , *CAMERA calibration , *KINECT (Motion sensor) , *SPACE robotics , *COMPUTER vision , *NOISE measurement , *FEATURE selection , *ROBOTICS - Abstract
Due to its robustness to robot modeling and camera calibration errors and its avoidance of a complete target geometry model, image-based visual servoing has always been an important topic in fields such as robotics and computer vision. When the image information obtained by the camera is mapped to the robotic task space to design the servoing control law, the resulting interaction matrix, which links the spatial velocity of the camera to the temporal variation of the selected image features, depends on the unknown feature depths. The use of inaccurate feature depths may affect the stability and robustness of the controller, and even cause the task to fail. In this article, based on the perspective camera model and the principle of the reduced-order observer, a novel logarithmic observer is presented for on-line recovery of feature depth. Compared with the typical observers now available, the presented observer offers several advantages: global convergence, an error structure that converges faster than an exponential one, a less restrictive observability condition, and greater robustness against noisy measurements. The comparison results of numerical simulations indicate the superiority of the presented observer, and real experiments with a Kinect v2 sensor further validate its effectiveness in practical situations. Note to Practitioners—This article was motivated by the depth problem in the image-based visual servoing scheme, but the observer can also be used in other situations where image depth information is needed, such as 3D reconstruction and robot navigation. Existing depth acquisition methods include TOF sensors, stereo vision, depth observers, and so on. However, TOF sensors are sensitive to lighting conditions, the mounting space of stereo vision is somewhat large, and most existing observers trade off observation performance against computational complexity.
In this article, a novel logarithmic reduced-order observer structure is described in detail, which can be used to estimate image depth information easily. The simulations and experiments verify the good performance of the observer. The limitations of the given observer are that the estimation accuracy degrades under weak excitation, and the camera needs to be calibrated in advance. Future work will focus on overcoming these two limitations. [ABSTRACT FROM AUTHOR]
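To illustrate the underlying principle of depth observation from known camera motion, here is a minimal sketch assuming noise-free feature-velocity measurements, a static point, and a simple gradient-type update on the inverse depth; this is a stand-in for (not a reproduction of) the paper's logarithmic reduced-order observer, with all gains and motions chosen for illustration:

```python
import numpy as np

def simulate_depth_observer(Z_true=2.0, Z_init=0.5, k=200.0, dt=1e-3, T=3.0):
    """Gradient-type observer for the inverse depth theta = 1/Z of a point
    feature under known camera translation. Feature kinematics for
    s = (x, y): sdot = A(s, v) * theta, with A = [-vx + x*vz, -vy + y*vz]."""
    v = np.array([0.1, 0.05, 0.0])   # known camera velocity (vz = 0 keeps Z fixed)
    s = np.array([0.3, 0.2])         # normalized image point
    theta_hat = 1.0 / Z_init
    for _ in range(int(T / dt)):
        A = np.array([-v[0] + s[0] * v[2], -v[1] + s[1] * v[2]])
        sdot = A / Z_true                                 # measured feature velocity
        theta_hat += dt * k * A @ (sdot - A * theta_hat)  # gradient correction
        s = s + dt * sdot                                 # propagate the true feature
    return 1.0 / theta_hat
```

The estimate converges to the true depth whenever the translation excites the feature (the observability condition mentioned in the abstract); under weak excitation the update stalls, which matches the stated limitation.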
- Published
- 2022
- Full Text
- View/download PDF
24. Sliding Surface Designs for Visual Servo Control of Quadrotors
- Author
-
Tolga Yuksel
- Subjects
image-based visual servoing ,sliding-surface designs ,fuzzy logic ,Motor vehicles. Aeronautics. Astronautics ,TL1-4050 - Abstract
Autonomy is the main task of a quadrotor, and visual servoing assists with this task while providing fault tolerance under GPS failure. The main approach to visual servoing is image-based visual servoing, which uses image features directly without the need for pose estimation. The linear control law of image-based visual servoing typically relies on the classical sliding-surface design of sliding mode control, which focuses only on driving the image-feature error to zero. Beyond convergence alone, performance characteristics such as visual-feature convergence time, error, and motion characteristics should be taken into consideration when controlling a quadrotor under velocity limitations and disturbance. In this study, an image-based visual servoing system for quadrotors with five different sliding surface designs is proposed, using analytical techniques and fuzzy logic. The proposed visual servo system was simulated, utilizing the moment characteristics of a preset shape, to demonstrate the effectiveness of these designs. The stated parameters, convergence time, errors, motion characteristics, and the length of the path followed by the quadrotor were compared for each of these design approaches, and these designs yielded a convergence time that was 46.77% shorter and a path length that was 6.15% shorter. In addition to demonstrating the superiority of the designs, the simulations incorporate velocity constraints and disturbance resilience, reflecting realistic operating conditions.
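Two representative surface families of the kind compared in such studies can be sketched as follows; the classical linear surface and a nonsingular-terminal-style alternative. The exponents and coefficients are illustrative assumptions, not the paper's tuned designs:

```python
import numpy as np

def classical_surface(e, edot, lam=1.0):
    """Classical linear sliding surface on the image-feature error:
    s = edot + lam*e, which only encodes asymptotic convergence."""
    return edot + lam * e

def terminal_surface(e, edot, beta=1.0, p=5, q=3):
    """Nonsingular-terminal-style surface s = e + beta*|edot|^(p/q)*sign(edot);
    holding s = 0 enforces finite-time convergence of e (p > q keeps the
    control law nonsingular at edot = 0)."""
    return e + beta * np.abs(edot) ** (p / q) * np.sign(edot)
```

The design freedom lies in which error dynamics the surface encodes once the reaching phase ends; that is what the five designs in the abstract trade off against convergence time and path length.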
- Published
- 2023
- Full Text
- View/download PDF
25. Control barrier function based visual servoing for Mobile Manipulator Systems under functional limitations.
- Author
-
Heshmati-Alamdari, Shahab, Sharifi, Maryam, Karras, George C., and Fourlas, George K.
- Subjects
- *
STEADY-state responses , *ROBUST control , *SYSTEM dynamics , *COMPUTER systems , *DYNAMICAL systems , *MANIPULATORS (Machinery) - Abstract
This paper proposes a new control strategy for Mobile Manipulator Systems (MMSs) that integrates image-based visual servoing (IBVS) to address operational limitations and safety constraints. The proposed approach, based on the concept of control barrier functions (CBFs), provides a solution to various operational challenges including visibility constraints, manipulator joint limits, predefined system velocity bounds, and system dynamic uncertainties. The proposed control strategy has a two-tiered structure: at the first level, a CBF-IBVS controller calculates control commands while taking the Field of View (FoV) constraints into account. By leveraging null-space techniques, these commands are mapped to the joint-level configuration of the MMS while respecting the system's operational limits. Subsequently, at the second level, a CBF velocity controller for the entire MMS tracks the commands at the joint level, ensuring compliance with the predefined velocity limitations as well as the safety of the combined system dynamics. The proposed control strategy offers superior transient and steady-state responses and heightened resilience to disturbances and modeling uncertainties. Furthermore, due to its low computational complexity, it can be easily implemented on an onboard computing system, facilitating real-time operation. The strategy's effectiveness is illustrated via simulation results, which reveal enhanced performance and system safety compared to conventional IBVS methods. The results indicate that the proposed approach effectively addresses the challenging operational limitations and safety constraints of mobile manipulator systems, making it suitable for practical applications. [ABSTRACT FROM AUTHOR]
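The CBF mechanism behind such safety filters can be shown in its simplest scalar form: for a single-integrator feature coordinate and one barrier constraint, the safety QP has a closed-form solution. This is a generic illustration of the CBF idea, not the paper's multi-constraint MMS controller:

```python
def cbf_filter(u_nom, x, x_max=1.0, alpha=2.0):
    """Scalar CBF safety filter for xdot = u, enforcing |x| <= x_max via
    the barrier h(x) = x_max**2 - x**2. Closed-form solution of
        min (u - u_nom)^2   s.t.  hdot + alpha*h >= 0  <=>  2*x*u <= alpha*h."""
    h = x_max ** 2 - x ** 2
    a, b = 2.0 * x, alpha * h
    if a * u_nom <= b:
        return u_nom        # nominal command already satisfies the barrier
    return b / a            # minimally modified safe command
```

Near the boundary the filter attenuates the nominal command just enough to keep h nonnegative, which is the same principle the paper applies to FoV, joint-limit, and velocity constraints simultaneously.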
- Published
- 2024
- Full Text
- View/download PDF
26. Adaptive dynamic programming-based visual servoing control for quadrotor.
- Author
-
Yi, Xinning, Luo, Biao, and Zhao, Yuqian
- Subjects
- *
ADAPTIVE fuzzy control , *DYNAMIC programming , *CLOSED loop systems , *ADAPTIVE control systems , *TIME-varying systems , *DYNAMIC models - Abstract
In this paper, the problem of image-based visual servoing (IBVS) control for a quadrotor is addressed by developing an adaptive dynamic programming (ADP) method. The perspective projection model and image-moment features are used to derive the quadrotor-image dynamic model. By dividing the model into three subsystems, effective subsystem controllers are designed to make the quadrotor complete the visual servoing task. The height subsystem controller is designed with the backstepping method, and the yaw subsystem controller is based on a direct control method. The lateral subsystem is a time-varying system with input constraints, for which an ADP-based control is developed. The time-varying terms lead to a time-varying Hamilton–Jacobi–Bellman (HJB) equation, whose analytic solution cannot be obtained. Thus, the ADP-based IBVS control method is developed by utilizing a critic neural network structure to approximate the time-dependent value function of the HJB equation. It is proved that the ADP method guarantees that the closed-loop system states and the weight estimation errors are uniformly ultimately bounded. The experimental results demonstrate the effectiveness of the developed ADP-based IBVS control method. [ABSTRACT FROM AUTHOR]
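The role the critic plays in an ADP scheme can be reduced to its simplest form: for a linear-in-weights critic, one learning step is a gradient move on the squared temporal-difference residual. The fixed basis and rates here are illustrative assumptions standing in for the paper's critic neural network:

```python
import numpy as np

def critic_update(w, phi, phi_next, r, alpha=0.1, gamma=0.95):
    """One gradient step for a critic V(x) ~= w @ phi(x), driven by the
    temporal-difference error of the Bellman/HJB recursion."""
    td = r + gamma * (w @ phi_next) - w @ phi   # TD residual at this transition
    return w + alpha * td * phi                 # move weights along the basis
```

Iterating this update over observed transitions makes w approximate the value function, which the controller then differentiates to improve the policy.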
- Published
- 2022
- Full Text
- View/download PDF
27. Robot Manipulator Visual Servoing via Kalman Filter- Optimized Extreme Learning Machine and Fuzzy Logic.
- Author
-
Zhiyu Zhou, Yanjun Hu, Jiangfei Ji, Yaming Wang, Zefei Zhu, Donghe Yang, and Ji Chen
- Subjects
FUZZY logic ,ROBOT kinematics ,JACOBIAN matrices ,KALMAN filtering ,ROBOTS ,MACHINE learning ,MOBILE robots - Abstract
Visual servoing (VS) based on the Kalman filter (KF) algorithm, as in the case of KF-based image-based visual servoing (IBVS) systems, suffers from three problems in uncalibrated environments: perturbation noises of the robot system, errors in the noise statistics, and slow convergence. To solve these three problems, we use an IBVS scheme based on the KF, an African vultures optimization algorithm-enhanced extreme learning machine (AVOA-ELM), and fuzzy logic (FL). First, the KF estimates the Jacobian matrix online. We propose an AVOA-ELM error compensation model to compensate for the sub-optimal estimation of the KF, addressing the problems of disturbance noises and noise statistics errors. Next, an FL controller is designed for gain adaptation, which addresses the slow convergence of the KF-based IBVS system. We then propose a visual servoing scheme combining FL and KF-AVOA-ELM (FL-KF-AVOA-ELM). Finally, we verify the algorithm on the 6-DOF robotic manipulator PUMA 560. Compared with existing methods, our algorithm can solve the three problems mentioned above without camera parameters, a robot kinematic model, or target depth information. We also compared the proposed method with other KF-based IBVS methods under different disturbance-noise environments, and the proposed method achieves the best results under all three evaluation metrics. [ABSTRACT FROM AUTHOR]
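The core of KF-based uncalibrated IBVS is estimating the image Jacobian online from feature displacements ds = J dq. A minimal sketch for a constant 2x2 Jacobian with noise-free measurements follows; dimensions, covariances, and the random excitation are illustrative assumptions:

```python
import numpy as np

def kf_jacobian_estimate(J_true, steps=50, seed=0):
    """Kalman-filter estimation of an unknown image Jacobian, with the
    vectorized Jacobian as the state and ds = H @ vec(J) as the measurement."""
    rng = np.random.default_rng(seed)
    n = J_true.size
    x = np.zeros(n)                    # vectorized (row-major) Jacobian estimate
    P = np.eye(n) * 10.0               # state covariance
    Q = np.eye(n) * 1e-6               # process noise (allows Jacobian drift)
    R = np.eye(2) * 1e-4               # assumed measurement noise
    for _ in range(steps):
        dq = rng.standard_normal(2) * 0.1      # joint increment (excitation)
        ds = J_true @ dq                       # measured feature shift
        H = np.kron(np.eye(2), dq)             # ds = H @ vec(J)
        P = P + Q
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (ds - H @ x)
        P = (np.eye(n) - K @ H) @ P
    return x.reshape(2, 2)
```

The same recursion extends to more features and joints by enlarging the Kronecker structure; the compensation and fuzzy-gain stages in the abstract sit on top of this estimator.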
- Published
- 2022
- Full Text
- View/download PDF
28. A small family service robot system with uncalibrated monocular camera for visual servoing tracking fast moving family targets in short range.
- Author
-
Zhou, Zhihui, Zhu, Shiqiang, Zhu, Kaiyuan, Cheng, Chao, and Gu, Jason
- Subjects
FAMILY services ,HUMAN-robot interaction ,INFRARED cameras ,MONOCULARS ,ROBOTS ,OBJECT recognition (Computer vision) ,ANGULAR velocity - Abstract
Small family service robots are an important class of service robots, and accurate tracking of moving targets is the primary condition for robot-human interaction. Compared with long-range target tracking, the limited space of a home leads to a short distance between the robot and the target, which often produces a large circumferential relative angular velocity between the robot and a moving target. This dramatically reduces the robot's ability to track fast target motion at close range and at different scales. In this paper, a novel family service robot system based on monocular RGB input and on-robot computation is proposed to track fast-moving targets at close range. On a robot body with two rotational degrees of freedom, a neural network model performs visual detection and recognition of the target object. Combined with the target's historical information, the new relative azimuth of the target object is calculated to control changes in the pitch and yaw angles of the robot body and realize visual servo tracking of the moving target. Experimental results confirm the visual servoing tracking ability of the small family service robot system for indoor moving objects, which will positively promote the application of small intelligent service robots. [ABSTRACT FROM AUTHOR]
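The pitch/yaw correction that recenters a detected target can be sketched with a pinhole model; the intrinsics below are assumed placeholder values, and signs depend on the robot's axis conventions:

```python
import math

def gaze_increments(u, v, f=600.0, cu=320.0, cv=240.0):
    """Yaw/pitch angle corrections that would recenter a target detected
    at pixel (u, v), for a body with two rotational DOFs. Pinhole model
    with assumed focal length f (px) and principal point (cu, cv)."""
    dyaw = math.atan2(u - cu, f)     # horizontal pixel offset -> yaw
    dpitch = math.atan2(v - cv, f)   # vertical pixel offset -> pitch
    return dyaw, dpitch
```

Because the mapping is purely angular, it needs no depth estimate, which is what makes two-axis visual tracking feasible with a single RGB camera.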
- Published
- 2022
- Full Text
- View/download PDF
29. Low-Complexity Leader-Following Formation Control of Mobile Robots Using Only FOV-Constrained Visual Feedback.
- Author
-
Miao, Zhiqiang, Zhong, Hang, Wang, Yaonan, Zhang, Hui, Tan, Haoran, and Fierro, Rafael
- Abstract
This article aims to solve the problem of image-based formation control of mobile robots, providing a low-cost and easy-to-implement solution for mobile robots relying merely on a monocular camera under field-of-view (FOV) constraints. A low-complexity image-based visual servo controller is proposed, which can achieve the desired relative position on the image plane and satisfy the FOV constraints without feature depth or leader velocity information. To facilitate the control design, a state transformation is first performed to decouple the visual motion kinematics. Then, an error transformation is introduced to handle the FOV constraints, and performance specifications are incorporated into the error transformation to achieve the predefined control performance. Finally, a simple static controller is derived using only information from images, and the stability of the uncertain system with unknown control direction/coefficients under the given performance control condition is analyzed. The effectiveness and performance of the proposed visual servoing controller are illustrated using both simulations and experiments. [ABSTRACT FROM AUTHOR]
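The error-transformation idea can be sketched generically: the constrained error is mapped through a shrinking performance funnel into an unconstrained variable that blows up at the boundary, so any bounded control of the transformed error keeps the original error (and hence the feature) inside the FOV. The exponential funnel and log transformation below are common illustrative choices; the paper's exact forms may differ:

```python
import numpy as np

def performance_funnel(t, rho0=1.0, rho_inf=0.1, l=1.0):
    """Exponentially shrinking performance bound rho(t)."""
    return (rho0 - rho_inf) * np.exp(-l * t) + rho_inf

def transform_error(e, rho):
    """Map the constrained error e in (-rho, rho) to an unconstrained
    variable (here atanh of the normalized error); it diverges as
    |e| -> rho, enforcing the constraint without optimization."""
    z = e / rho
    return 0.5 * np.log((1.0 + z) / (1.0 - z))
```

Keeping the transformed error bounded then guarantees |e(t)| < rho(t) for all time, which encodes both the FOV constraint and the prescribed transient performance.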
- Published
- 2022
- Full Text
- View/download PDF
30. MAT-Fly: An Educational Platform for Simulating Unmanned Aerial Vehicles Aimed to Detect and Track Moving Objects
- Author
-
Giuseppe Silano and Luigi Iannelli
- Subjects
Educational ,Matlab/Simulink ,image-based visual servoing ,trajectory control ,vision detection and tracking ,software-in-the-loop ,Electrical engineering. Electronics. Nuclear engineering ,TK1-9971 - Abstract
The main motivation of this work is to propose a simulation approach for a specific task within the Unmanned Aerial Vehicle (UAV) field, i.e., the visual detection and tracking of arbitrary moving objects. In particular, we describe MAT-Fly, a numerical simulation platform for multi-rotor aircraft characterized by ease of use and of control development. The platform is based on Matlab® and the MathWorks™ Virtual Reality (VR) and Computer Vision System (CVS) toolboxes, which work together to simulate the behavior of a quad-rotor while tracking a car that moves along a nontrivial path. The VR toolbox was chosen due to the familiarity that students have with Matlab and because its simple structure does not require a notable effort from the user during the learning and development phases. The overall architecture is quite modular, so each block can easily be replaced with others, simplifying code reuse and platform customization. Some simple testbeds are presented to show the validity of the approach and how the platform works. The simulator is released as open source, making it possible to inspect any part of the system, and is available for educational purposes.
- Published
- 2021
- Full Text
- View/download PDF
31. Image-based visual servoing with depth estimation.
- Author
-
Gongye, Qingxuan, Cheng, Peng, and Dong, Jiuxiang
- Subjects
- *
JACOBIAN matrices , *KALMAN filtering , *MATHEMATICAL models , *INFORMATION storage & retrieval systems , *MANIPULATORS (Machinery) , *FUZZY logic - Abstract
For the depth estimation problem in image-based visual servoing (IBVS) control, this paper proposes a new observer structure based on the Kalman filter (KF) to recover the feature depth in real time. First, two different mathematical models of the system are established according to the number of states. The first extracts the depth information from the Jacobian matrix as the state vector of the system; the other uses the depth information together with the coordinates of the point on the two-dimensional image plane as the state vector. The KF is used to estimate the unknown depth information of the system in real time. A fuzzy controller is then used to obtain a gain-adjustment method for the IBVS controller of a 6-degree-of-freedom (6-DOF) manipulator: the gain matrix is obtained by taking the depth and error information as inputs of the fuzzy controller. Compared with existing works, the proposed observer exhibits less redundant motion while solving the Jacobian-matrix depth estimation problem, and it also reduces the time needed for the camera to reach the target. Finally, the experimental results on a 6-DOF robot with an eye-in-hand configuration demonstrate the effectiveness and practicability of the proposed method. [ABSTRACT FROM AUTHOR]
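The fuzzy gain-adjustment stage can be sketched as a tiny Takagi-Sugeno rule base mapping depth and image error to a scalar IBVS gain. All membership breakpoints and rule consequents below are made-up illustrative values, not the paper's tuned rule base:

```python
def fuzzy_gain(depth, err, d_max=2.0, e_max=100.0):
    """Takagi-Sugeno fuzzy inference with two linguistic terms per input
    ({near, far} x {small, large}) and constant consequents."""
    near = max(0.0, 1.0 - depth / d_max); far = 1.0 - near
    small = max(0.0, 1.0 - err / e_max); large = 1.0 - small
    rules = {('near', 'small'): 0.2, ('near', 'large'): 0.5,
             ('far', 'small'): 0.6, ('far', 'large'): 1.0}
    w = {('near', 'small'): near * small, ('near', 'large'): near * large,
         ('far', 'small'): far * small, ('far', 'large'): far * large}
    num = sum(w[k] * rules[k] for k in rules)   # weighted rule outputs
    den = sum(w.values())                       # normalization
    return num / den
```

The effect is a gain that grows with depth and error and shrinks close to the target, reducing overshoot near convergence while speeding up the approach phase.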
- Published
- 2022
- Full Text
- View/download PDF
32. Design of a Gough–Stewart Platform Based on Visual Servoing Controller.
- Author
-
Zhu, Minglei, Huang, Cong, Song, Shijie, and Gong, Dawei
- Subjects
- *
ROBOT design & construction , *PARALLEL robots - Abstract
Designing a robot with the best accuracy is always an attractive research direction in the robotics community. In order to create a Gough–Stewart platform with guaranteed accuracy performance for a dedicated controller, this paper describes a novel advanced optimal design methodology: control-based design. This method accounts for the controller's positioning accuracy in the design process when determining the optimal geometric parameters of the robot. In this paper, three types of visual servoing controllers are applied to control the motions of the Gough–Stewart platform: leg-direction-based visual servoing, line-based visual servoing, and image-moment visual servoing. For these controllers, the positioning error models are analyzed, taking into account the camera observation error together with the controller singularities. In the next step, optimization problems are formulated to obtain, for each type of controller, the optimal geometric parameters of the robot and the placement of the camera for the Gough–Stewart platform. Then, we perform co-simulations on the three optimized Gough–Stewart platforms to test the positioning accuracy and the robustness with respect to manufacturing errors. It turns out that the control-based design methodology yields both the optimal design parameters of the robot and the best performance of the combined {robot + dedicated controller} pair. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
33. Robust predictive visual servoing control for an inertially stabilized platform with uncertain kinematics.
- Author
-
Liu, Xiangyang, Mao, Jianliang, Yang, Jun, Li, Shihua, and Yang, Kaifeng
- Subjects
PREDICTIVE control systems ,ANGULAR velocity ,DEGREES of freedom ,CLOSED loop systems ,PARALLEL kinematic machines ,COST functions ,KINEMATICS - Abstract
In this paper, a disturbance observer (DOB) based predictive control approach is developed for the image-based visual servoing of an inertially stabilized platform (ISP). Owing to the limited degrees of freedom of a two-axis ISP, it is hard to estimate the varying feature depth of the target at each control cycle when using an uncalibrated camera, which complicates the design of the visual servoing controller. To this end, a depth-independent kinematic matrix that involves only nominal parameters is obtained by employing a partitioned scheme in the system modeling. The uncertain kinematics arising from the unknown feature depth, angular velocity tracking errors, and uncalibrated intrinsic parameters is treated as a lumped uncertainty. A discrete-time DOB is then constructed to estimate the lumped uncertainty in real time. Instead of taking an integral action to eliminate tracking errors induced by the uncertain kinematics, the disturbance estimate is actively incorporated into the receding-horizon optimization of the predictive controller. The stability of the closed-loop system is fully analyzed. Experiments on tracking a moving target validate the promising qualities of the proposed approach. • The kinematics from the unknown feature depth is viewed as the lumped disturbance. • The disturbance estimation is completely compensated with the designed cost function. • The stability of the closed-loop system with the observer is fully analyzed. [ABSTRACT FROM AUTHOR]
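A discrete-time DOB in its simplest scalar form estimates the lumped uncertainty from the one-step prediction residual. The first-order plant, gain, and constant disturbance below are illustrative assumptions standing in for the ISP kinematic loop:

```python
def simulate_dob(d_true=0.7, L=0.5, steps=40):
    """First-order discrete-time disturbance observer for the scalar loop
    x[k+1] = x[k] + u[k] + d: the estimate d_hat is corrected by the
    residual between the observed and predicted one-step transition."""
    x, d_hat = 0.0, 0.0
    for _ in range(steps):
        u = -0.5 * x - d_hat                          # feedback + compensation
        x_next = x + u + d_true                       # true plant with disturbance
        d_hat = d_hat + L * (x_next - x - u - d_hat)  # DOB update on the residual
        x = x_next
    return d_hat, x
```

The estimation error contracts by a factor (1 - L) per step, so d_hat converges geometrically; feeding it forward removes the steady-state error without an explicit integral term, mirroring how the abstract's controller folds the disturbance estimate into the receding-horizon optimization.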
- Published
- 2021
- Full Text
- View/download PDF
34. Image-Based Visual Servo Tracking Control of a Ground Moving Target for a Fixed-Wing Unmanned Aerial Vehicle.
- Author
-
Yang, Lingjie, Liu, Zhihong, Wang, Xiangke, Yu, Xianguo, Wang, Guanzheng, and Shen, Lincheng
- Abstract
This paper proposes a new control method for the ground moving target tracking problem by a fixed-wing unmanned aerial vehicle (UAV) with a monocular pan-tilt camera. By utilizing image-based visual servoing, the control law can be designed directly in the image plane, thereby avoiding errors caused by 3D position calculation. Based on that, we design a control framework integrating the control of the UAV and the pan-tilt, which enables the UAV to track the target while keeping the feature point near the image center. Furthermore, considering the restricted characteristics of the low-cost pan-tilt camera we use, we present a deterministic finite automaton model to transition between tracking states when the pan-tilt attitude reaches saturation, thereby improving the UAV's ability to track the moving target. A stability proof of the controller is given, and extensive hardware-in-the-loop (HIL) simulations and real flights are provided. The results show that the proposed method achieves continuous, robust tracking of the ground moving target. [ABSTRACT FROM AUTHOR]
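The abstract does not spell out the automaton, but a table-driven deterministic finite automaton for managing tracking states under pan-tilt saturation might look like the sketch below; the state and event names are hypothetical:

```python
# Hypothetical tracking-state DFA: (state, event) -> next state.
TRANSITIONS = {
    ("track", "pan_saturated"): "turn_uav",     # hand the correction to the UAV
    ("turn_uav", "target_centered"): "track",   # pan-tilt resumes fine tracking
    ("track", "target_lost"): "search",
    ("search", "target_found"): "track",
}

def step(state, event):
    """Advance the DFA; unknown (state, event) pairs leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)
```

Encoding the mode logic as an explicit transition table keeps the hybrid controller auditable: every saturation-handling behavior corresponds to one entry.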
- Published
- 2021
- Full Text
- View/download PDF
35. Robust image-based control for spacecraft uncooperative rendezvous and synchronization using a zooming camera.
- Author
-
Zhao, Xiangtian, Emami, M. Reza, and Zhang, Shijie
- Subjects
- *
FOCAL length , *CAMERAS , *SYNCHRONIZATION , *CLOSED loop systems , *SLIDING mode control , *SPACE vehicles , *ROBUST control - Abstract
This paper presents a new image-based control scheme for spacecraft rendezvous and synchronization with an uncooperative tumbling target, which is capable of autonomously adjusting the camera focal length, in order to extend the working range of visual servoing and guarantee that the target remains within the camera field-of-view. Unlike conventional visual servoing, the new scheme is based on a system model that is invariant to changes in camera-intrinsic parameters. An active zooming strategy is proposed which ensures that the target remains in the image plane with a proper size during the visual servoing. By utilizing the image features as feedback information, a finite-time controller is designed, which is robust to the unknown target's motion as well as the external perturbations, with the ability to estimate and adapt to the upper bound of the uncertainties. The closed-loop system stability is proved using the Lyapunov theory. Simulation scenarios are studied for two different onboard cameras, namely, a fixed-focal-length camera and a zooming camera. In addition, a comparative study with a conventional sliding-mode controller is performed to evaluate the convergence and accuracy of the proposed control scheme. • Image-based spacecraft orbit–attitude control with zooming capability. • Far-range rendezvous and synchronization with uncooperative objects. • System model formulated in the invariant space. • Controller robust to unknown target motions and external perturbations. • Extensive simulation studies show superior performance. [ABSTRACT FROM AUTHOR]
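The focal-length-invariance idea can be illustrated with a pinhole model: normalizing pixel coordinates by the current focal length yields a feature that is unchanged by zooming, so the feedback signal stays consistent while the active zoom keeps the target in frame. The intrinsics below are assumed placeholder values, and distortion is ignored:

```python
def project(P, f, c=(320.0, 240.0)):
    """Pinhole projection of camera-frame point P = (X, Y, Z) with focal
    length f (px) and principal point c."""
    X, Y, Z = P
    return (c[0] + f * X / Z, c[1] + f * Y / Z)

def invariant_feature(uv, f, c=(320.0, 240.0)):
    """Feature invariant to focal-length changes: normalize the pixel
    offset by the current f, recovering the metric ray direction (X/Z, Y/Z)."""
    return ((uv[0] - c[0]) / f, (uv[1] - c[1]) / f)
```

Because the normalized feature depends only on the ray geometry, the control error is identical before and after a zoom step, which is the property the abstract's invariant system model exploits.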
- Published
- 2021
- Full Text
- View/download PDF
36. A Bayesian Deep Neural Network for Safe Visual Servoing in Human–Robot Interaction
- Author
-
Lei Shi, Cosmin Copot, and Steve Vanlanduit
- Subjects
safety ,human–robot interaction ,Bayesian neural network ,deep learning ,image-based visual servoing ,Mechanical engineering and machinery ,TJ1-1570 ,Electronic computers. Computer science ,QA75.5-76.95 - Abstract
Safety is an important issue in human–robot interaction (HRI) applications. Various research works have focused on different levels of safety in HRI. If a human/obstacle is detected, a repulsive action can be taken to avoid a collision. Common repulsive actions include distance methods, potential field methods, and safety field methods. Machine-learning-based approaches are less explored for selecting the repulsive action, and few works consider the uncertainty of data-based approaches or the efficiency of the executing task during collision avoidance. In this study, we describe a system that can avoid collision with human hands while the robot executes an image-based visual servoing (IBVS) task. We use Monte Carlo dropout (MC dropout) to transform a deep neural network (DNN) into a Bayesian DNN and learn the repulsive position for hand avoidance. The Bayesian DNN allows the IBVS task to converge faster than using the opposite repulsive pose, and it allows the robot to avoid undesired poses that a plain DNN cannot. The experimental results show that the Bayesian DNN has adequate accuracy and generalizes well to unseen data. The prediction interval coverage probabilities (PICP) of the predictions along the x, y, and z directions are 0.84, 0.94, and 0.95, respectively. In regions unseen in the training data, the Bayesian DNN is also more robust than a plain DNN. We further implement the system on a UR10 robot and test the robustness of the Bayesian DNN and the IBVS convergence speed. Results show that the Bayesian DNN avoids poses outside the reach range of the robot and lets the IBVS task converge faster than using the opposite repulsive pose.
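MC dropout itself is simple to demonstrate: keep dropout active at inference time and average many stochastic forward passes; the spread of the samples is the uncertainty estimate. The toy two-layer network below uses untrained illustrative weights, not the paper's model:

```python
import numpy as np

def mc_dropout_predict(x, W1, W2, p=0.5, n_samples=200, seed=0):
    """Monte Carlo dropout on a toy ReLU network y = W2 @ drop(relu(W1 @ x)).
    Returns the sample mean (prediction) and std (uncertainty)."""
    rng = np.random.default_rng(seed)
    preds = []
    for _ in range(n_samples):
        h = np.maximum(0.0, W1 @ x)          # hidden ReLU activations
        mask = rng.random(h.shape) > p       # Bernoulli dropout mask
        h = h * mask / (1.0 - p)             # inverted-dropout scaling
        preds.append(W2 @ h)
    preds = np.asarray(preds)
    return preds.mean(axis=0), preds.std(axis=0)
```

A large std flags inputs far from the training distribution, which is the signal the abstract's system uses to reject unreliable repulsive poses.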
- Published
- 2021
- Full Text
- View/download PDF
37. Vision-based neural predictive tracking control for multi-manipulator systems with parametric uncertainty.
- Author
-
Wu, Jinhui, Jin, Zhehao, Liu, Andong, and Yu, Li
- Subjects
TRACKING control systems ,RECURRENT neural networks ,ARTIFICIAL neural networks ,MANIPULATORS (Machinery) ,POLE assignment ,PREDICTIVE control systems - Abstract
To deal with the coordination problem for multi-manipulator trajectory tracking systems with parametric uncertainties, this paper proposes a two-layer control scheme incorporating a model predictive strategy and an extended state observer. In the kinematic layer, visual information is incorporated and a visual servoing error model is derived with the image-based visual servoing strategy. A recurrent neural network model predictive control approach is proposed to obtain velocities that serve as reference signals for the dynamic layer. For the dynamics, a linear time-varying dynamic model of the multi-manipulator system coupled with the object is established, in which the parametric uncertainty is treated as an added disturbance. An extended state observer is then designed to estimate the disturbance, with its gains chosen by pole placement. The input-to-state practical stability of the system is further analyzed under a bounded disturbance. Finally, simulations and comparisons are given to verify the effectiveness and robustness of the proposed algorithm. • The kinematics and dynamics of the multi-manipulator visual servoing system are modeled. • A two-layer control scheme is proposed to handle the coordination problem for multi-manipulator trajectory tracking systems. • A model predictive controller combined with a recurrent neural network and an extended state observer is used to solve the optimization problem. • The input-to-state practical stability of the system and the maximal admissible bound of the uncertainty are given. [ABSTRACT FROM AUTHOR]
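An extended state observer treats the lumped uncertainty as an extra state of a Luenberger-style observer; with both observer poles placed at the same location, the gains follow directly from the characteristic polynomial. The scalar plant, pole location, and constant disturbance below are illustrative assumptions:

```python
def simulate_eso(d_true=0.5, pole=20.0, dt=1e-3, T=1.0):
    """Second-order ESO for xdot = u + d, with d as the extended state.
    Placing both observer poles at -pole gives gains b1 = 2*pole and
    b2 = pole**2 (pole-placement design, as in the abstract)."""
    b1, b2 = 2 * pole, pole ** 2
    x, z1, z2 = 0.0, 0.0, 0.0   # plant state, state estimate, disturbance estimate
    u = 0.0                     # zero input: the ESO still reconstructs d
    for _ in range(int(T / dt)):
        x += dt * (u + d_true)          # plant: xdot = u + d
        e = x - z1                      # observation error
        z1 += dt * (z2 + u + b1 * e)    # state channel
        z2 += dt * (b2 * e)             # extended (disturbance) channel
    return z2
```

Unlike the one-step residual DOB, the ESO augments the state vector, so the same structure generalizes directly to vector-valued joints and time-varying disturbances.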
- Published
- 2021
- Full Text
- View/download PDF
38. Model Predictive Control for Uncalibrated and Constrained Image-Based Visual Servoing Without Joint Velocity Measurements
- Author
-
Zhoujingzi Qiu, Shiqiang Hu, and Xinwu Liang
- Subjects
Image-based visual servoing ,model predictive control ,constrained optimization control ,depth-independent interaction matrix ,sliding mode observer ,Electrical engineering. Electronics. Nuclear engineering ,TK1-9971 - Abstract
This paper presents a novel scheme for image-based visual servoing (IBVS) of a robot manipulator that considers robot dynamics without using joint velocity measurements, in the presence of constraints, uncalibrated camera intrinsic and extrinsic parameters, and unknown feature position parameters. A model predictive control (MPC) method based on an identification algorithm and a sliding mode observer is proposed. With the MPC method, the IBVS task can be cast as a nonlinear optimization problem in which the visibility and torque constraints are explicitly taken into account. By using the depth-independent interaction matrix framework, the identification algorithm updates the unknown parameters and the prediction model. In addition, many existing controllers require joint velocity measurements, which can be contaminated by noise, degrading IBVS performance. To operate without joint velocity measurements, the sliding mode observer estimates the joint velocities of the IBVS system. Simulation results for both eye-in-hand and eye-to-hand camera configurations verify the effectiveness of the proposed control method.
- Published
- 2019
- Full Text
- View/download PDF
39. A Robust Feature Extraction Method for Image-Based Visual Servoing
- Author
-
Qiu, Zhoujingzi, Hu, Shiqiang, Luo, Lingkun, Tang, Fuhui, Cai, Jiyuan, Zhang, Hong, Diniz Junqueira Barbosa, Simone, Series editor, Chen, Phoebe, Series editor, Du, Xiaoyong, Series editor, Filipe, Joaquim, Series editor, Kotenko, Igor, Series editor, Liu, Ting, Series editor, Sivalingam, Krishna M., Series editor, Washio, Takashi, Series editor, Sun, Fuchun, editor, Liu, Huaping, editor, and Hu, Dewen, editor
- Published
- 2017
- Full Text
- View/download PDF
40. Adaptive Image-Based Visual Servoing Using Reinforcement Learning With Fuzzy State Coding.
- Author
-
Shi, Haobin, Wu, Haibo, Xu, Chenxi, Zhu, Jinhui, Hwang, Maxwell, and Hwang, Kao-Shing
- Subjects
JACOBIAN matrices ,REINFORCEMENT learning ,ADAPTIVE fuzzy control ,FEATURE extraction ,IMAGE - Abstract
Image-based visual servoing (IBVS) allows precise control of positioning and motion for relatively stationary targets using visual feedback. For IBVS, a mixture parameter $\beta$ allows better approximation of the image Jacobian matrix, which has a significant effect on the performance of IBVS. However, the setting for the mixture parameter depends on the camera's real-time posture; there is no clear way to define the change rules for most IBVS applications. Using simple model-free reinforcement learning, Q-learning, this article proposes a method to adaptively adjust the image Jacobian matrix for IBVS. If the state-space is discretized, traditional Q-learning encounters problems with the resolution that can cause sudden changes in the action, so the visual servoing system performs poorly. Besides, a robot in a real-world environment also cannot learn on as large a scale as virtual agents, so the efficiency with which agents learn must be increased. This article proposes a method that uses fuzzy state coding to accelerate learning during the training phase and to produce a smooth output in the application phase of the learning experience. A method that compensates for delay also allows more accurate extraction of features in a real environment. The results for simulation and experiment demonstrate that the proposed method performs better than other methods, in terms of learning speed, movement trajectory, and convergence time. [ABSTRACT FROM AUTHOR]
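The fuzzy state coding can be sketched in one dimension: a continuous state activates several overlapping fuzzy cells, the TD target interpolates their values, and the update spreads credit in proportion to membership, yielding smooth value estimates and smooth greedy actions. Grid, width, and learning rates below are illustrative assumptions:

```python
import numpy as np

def memberships(x, centers, width=1.0):
    """Triangular fuzzy memberships of a 1-D state, normalized to sum to one."""
    m = np.maximum(0.0, 1.0 - np.abs(x - centers) / width)
    return m / m.sum()

def fuzzy_q_update(Q, x, a, r, x_next, centers, alpha=0.5, gamma=0.9):
    """Q-learning step with fuzzy state coding: Q has shape
    (len(centers), n_actions); the state contributes to every overlapping
    cell in proportion to its membership."""
    mu, mu_n = memberships(x, centers), memberships(x_next, centers)
    q_next = (mu_n @ Q).max()                # fuzzy-interpolated next value
    td = r + gamma * q_next - mu @ Q[:, a]   # TD error at the coded state
    Q[:, a] += alpha * td * mu               # credit split by membership
    return Q
```

Because neighboring states share cells, the learned action varies continuously with the state, avoiding the sudden action jumps that a hard discretization produces.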
- Published
- 2020
- Full Text
- View/download PDF
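The mixture-parameter idea in the abstract above — blending the interaction matrices evaluated at the current and desired features — can be sketched with the standard point-feature interaction matrix. This is a minimal illustration, not the paper's learned scheme: the depths and the reuse of the same depth for the desired features are simplifying assumptions, and the fixed `beta` here is exactly what the paper replaces with a Q-learned, fuzzy-coded policy.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Standard interaction matrix of one normalized image point (x, y)
    at depth Z, relating feature velocity to camera twist (v, omega)."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, lam=0.5, beta=0.5):
    """Camera velocity command from a beta-weighted mix of the current
    and desired interaction matrices: L = beta*L(s,Z) + (1-beta)*L(s*,Z).
    Reusing the current depths for the desired features is an assumption."""
    L_cur = np.vstack([interaction_matrix(x, y, Z)
                       for (x, y), Z in zip(features, depths)])
    L_des = np.vstack([interaction_matrix(x, y, Z)
                       for (x, y), Z in zip(desired, depths)])
    L = beta * L_cur + (1.0 - beta) * L_des
    e = (np.asarray(features) - np.asarray(desired)).ravel()
    return -lam * np.linalg.pinv(L) @ e   # classical law v = -lam * L^+ e
```

At the goal (`features == desired`) the error, and hence the commanded velocity, is zero; away from it, the blended Jacobian trades the distinct convergence behaviors of the current-only and desired-only choices.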
41. Adaptive Switch Image-based Visual Servoing for Industrial Robots.
- Author
-
Ghasemi, Ahmad, Li, Pengcheng, and Xie, Wen-Fang
- Abstract
In this paper, an adaptive switch image-based visual servoing (IBVS) controller for industrial robots is presented. The proposed control algorithm decouples the rotational and translational camera motions and decomposes the IBVS control into three separate stages with different gains. This method increases the system response speed and improves the tracking performance of IBVS, while the proposed adaptive law deals with the uncertainties of the monocular camera in the eye-in-hand configuration. The stability of the designed controller is proved using the Lyapunov method. Experimental results on a 6-degree-of-freedom (DOF) robot show a significant enhancement of the control performance over other IBVS methods in terms of response time and tracking performance. The designed visual servoing controller also demonstrates its capability to overcome some of the inherent drawbacks of IBVS, such as its inability to perform a 180° camera rotation around its center. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
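The staged, decoupled structure the abstract above describes can be sketched as a simple gain scheduler: drive the rotational error down first, then the translational error, then refine both. The thresholds, gain values, and two-way split are illustrative assumptions, not the paper's tuned controller.

```python
import numpy as np

def switch_gains(rot_err, trans_err, rot_tol=0.05, trans_tol=0.01):
    """Return (translational gain, rotational gain) for a switched IBVS
    scheme with three stages. Thresholds and gains are illustrative."""
    if np.linalg.norm(rot_err) > rot_tol:
        return 0.0, 1.2      # stage 1: correct rotation only
    if np.linalg.norm(trans_err) > trans_tol:
        return 0.8, 0.2      # stage 2: mainly translation
    return 0.3, 0.3          # stage 3: fine regulation of both
```

Handling rotation first is what lets such schemes cope with large rotations (e.g. near 180° about the optical axis) that make a coupled single-gain IBVS law degenerate.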
42. A new decoupled control law for image-based visual servoing control of robot manipulators
- Author
-
Cong, Vo Duy and Hanh, Le Duc
- Published
- 2022
- Full Text
- View/download PDF
43. Marker-guided auto-landing on a moving platform
- Author
-
Borshchova, Iryna and O’Young, Siu
- Published
- 2017
- Full Text
- View/download PDF
44. A Fully-Autonomous Aerial Robot for Search and Rescue Applications in Indoor Environments using Learning-Based Techniques.
- Author
-
Sampedro, Carlos, Rodriguez-Ramos, Alejandro, Bavle, Hriday, Carrio, Adrian, de la Puente, Paloma, and Campoy, Pascual
- Abstract
Search and Rescue (SAR) missions represent an important challenge in the robotics research field, as they usually involve highly variable scenarios that require a high level of autonomy and versatile decision-making capabilities. This challenge becomes even more relevant for aerial robotic platforms owing to their limited payload and computational capabilities. In this paper, we present a fully-autonomous aerial robotic solution for executing complex SAR missions in unstructured indoor environments. The proposed system is based on the combination of a complete hardware configuration and a flexible system architecture which allows the execution of high-level missions in a fully unsupervised manner (i.e. without human intervention). In order to obtain flexible and versatile behaviors from the proposed aerial robot, several learning-based capabilities have been integrated for target recognition and interaction. The target recognition capability includes a supervised learning classifier based on a computationally-efficient Convolutional Neural Network (CNN) model trained for target/background classification, while the capability to interact with the target for rescue operations introduces a novel Image-Based Visual Servoing (IBVS) algorithm which integrates a recent deep reinforcement learning method named Deep Deterministic Policy Gradients (DDPG). In order to train the aerial robot to perform IBVS tasks, a reinforcement learning framework has been developed which integrates a deep reinforcement learning agent (e.g. DDPG) with a Gazebo-based simulator for aerial robotics. The proposed system has been validated in a wide range of simulation flights, using Gazebo and PX4 Software-In-The-Loop, and in real flights in cluttered indoor environments, demonstrating the versatility of the proposed system in complex SAR missions. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
45. Visual servoing framework using Gaussian process for an aerial parallel manipulator.
- Author
-
Cho, Sungwook and Shim, David Hyunchul
- Subjects
MANIPULATORS (Machinery) ,PARALLEL robots ,GAUSSIAN processes ,PARALLEL processing ,RELATIVE velocity ,FLIGHT testing - Abstract
This paper proposes a Gaussian-process-based visual servoing framework for an aerial parallel manipulator. Our aerial parallel manipulator uses an on-board eye-in-hand vision sensor attached to the end-effector of a three-degrees-of-freedom parallel manipulator. Compared with a serial manipulator, it has three major advantages: it is small, light in weight, and behaves linearly with respect to the host vehicle. However, it has a critical drawback: its workspace is too small to perform the mission on its own while hovering. To overcome this limited-workspace problem and perform missions more actively, the proposed visual servoing framework generates relative body velocity commands for the host vehicle from a feature path interpolated and extrapolated between the initial and desired features, which is fed to the underactuated aerial parallel manipulator. It generates control inputs that are not only numerically stable but also feasible. Furthermore, it overcomes weaknesses of traditional image-based visual servoing, such as singularities, uncertainties, and local minima, when computing the image Jacobian under large disparity between the target and the unmanned aerial vehicle. Flight tests show that the proposed framework reliably performs pick-and-replace tasks autonomously and can be applied in large-displacement environments. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
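The feature-path idea in the abstract above — servoing along intermediate waypoints between the initial and desired features rather than jumping directly to the goal — can be sketched as follows. A plain linear interpolation stands in here for the paper's Gaussian-process interpolation/extrapolation; the point is that tracking short segments keeps the image Jacobian well-conditioned under large initial disparity.

```python
import numpy as np

def feature_path(s0, s_star, n_steps):
    """Waypoints between initial features s0 and desired features s_star.
    Linear interpolation is a simplifying stand-in for the paper's
    Gaussian-process path; each waypoint is servoed in turn."""
    s0, s_star = np.asarray(s0, float), np.asarray(s_star, float)
    alphas = np.linspace(0.0, 1.0, n_steps)
    return [(1.0 - a) * s0 + a * s_star for a in alphas]
```

Each waypoint then serves as the "desired" feature vector for a short, well-behaved IBVS step.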
46. An intelligent IBVS system for robot manipulators
- Author
-
Tolga Yüksel
- Subjects
Image-based visual servoing ,Robot manipulator ,Neural network ,Fuzzy logic ,Engineering (General). Civil engineering (General) ,TA1-2040 - Abstract
Among visual servoing (VS) approaches, image-based visual servoing (IBVS) is one of the most popular for robot manipulators because it does not require pose estimation. Despite this popularity, IBVS faces two fundamental problems in practice: obtaining the inverse of the interaction matrix, and finding a suitable constant gain for the controller. Although the interaction matrix is used together with its pseudo-inverse in IBVS, the control law fails when singularities occur. A constant gain, on the other hand, forces a trade-off between convergence speed and end-effector velocities. In this study, an intelligent IBVS system is proposed to solve these problems. In the first stage, a trained artificial neural network (ANN) replaces the inverse of the interaction matrix, solving the singularity problem. In addition, the initial velocity discontinuity caused by the classical velocity controller is eliminated by the continuous velocity controller employed. In the second stage, instead of a constant gain, a fuzzy logic unit inspired by fuzzy sliding mode is used, which computes the gain at each cycle from the error and its derivative. With this adaptive gain approach, fast convergence is achieved without requiring high velocities.
- Published
- 2016
47. Evolving Fuzzy Uncalibrated Visual Servoing for Mobile Robots
- Author
-
Gonçalves, P. J. S., Lopes, P. J. F., Torres, P. M. B., Sequeira, J. M. R., Madureira, Ana, editor, Reis, Cecilia, editor, and Marques, Viriato, editor
- Published
- 2013
- Full Text
- View/download PDF
48. Improved Method of Robot Trajectory in IBVS Based on an Efficient Second-Order Minimization Technique
- Author
-
Zhang, Jie, Liu, Ding, Yang, Yanxi, Zheng, Gang, Hutchison, David, editor, Kanade, Takeo, editor, Kittler, Josef, editor, Kleinberg, Jon M., editor, Mattern, Friedemann, editor, Mitchell, John C., editor, Naor, Moni, editor, Nierstrasz, Oscar, editor, Pandu Rangan, C., editor, Steffen, Bernhard, editor, Sudan, Madhu, editor, Terzopoulos, Demetri, editor, Tygar, Doug, editor, Vardi, Moshe Y., editor, Weikum, Gerhard, editor, Goebel, Randy, editor, Siekmann, Jörg, editor, Wahlster, Wolfgang, editor, Su, Chun-Yi, editor, Rakheja, Subhash, editor, and Liu, Honghai, editor
- Published
- 2012
- Full Text
- View/download PDF
49. Output Feedback Image-Based Visual Servoing of Rotorcrafts.
- Author
-
Li, Jianan, Xie, Hui, Ma, Rui, and Low, K. H.
- Abstract
This paper presents an improved output-feedback image-based visual servoing (IBVS) law for rotorcraft unmanned aerial vehicles (RUAVs). The control law enables a RUAV with a minimal set of sensors, i.e. an inertial measurement unit (IMU) and a single downward-facing camera, to regulate its position and heading relative to a planar visual target consisting of multiple points. Compared to our previous work, a twofold improvement is made. First, the desired value of the image feature controlling the vertical motion of the RUAV is a function of the other image features instead of a constant. This modification helps keep the visual target in the camera's field of view by indirectly adjusting the height of the vehicle. Second, the proposed approach simplifies our previous output feedback law by reducing the dimension of the observer filter state space while keeping the same asymptotic stability result. Both simulation and experimental results are presented to demonstrate the performance of the proposed controller. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
50. Shadow Removal in Acquired Image for Visual Servoing of Unmanned Ground Vehicles.
- Author
-
Sandeep, Kundala, Parthasarathy, Eswaran, and Prathibha, Lakshmi
- Subjects
COMPUTER vision ,MILITARY applications of virtual reality ,FEATURE extraction ,PIXELS ,PARAMETER estimation - Abstract
Vision-guided robotic operation is one of the new concepts in teleoperated Unmanned Ground Vehicles (UGVs) for military applications. The objective of Visual Servoing (VS) is to control the position of the robotic arm using data extracted from the vision sensor, such as the distance of an object from the reference frame or the length and width of the object. In the Image-Based Visual Servoing (IBVS) scheme, position control values are computed directly from image features. This work proposes a technique for removing the object's shadow from an image acquired with a monocular camera, using an advanced light model. The 2D spatial coordinates and the Point-of-Contact (PoC) of the object with respect to the ground plane are computed using straight-line equations. The PoC analysis maps pixel distances to spatial distances in order to evaluate the mean pixel distance and the percentage error between the actual and the calculated distance. Other parameters, such as the width of the object and the distance between the camera and the object, are also estimated. The computed results yield a maximum error of 3 cm, and a 2.6% error for a camera fixed at a height of 160 cm with a tilt angle of 50°, covering an area of 150 cm × 120 cm. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
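The pixel-to-ground mapping used in the entry above — turning an image point on the ground plane into a spatial distance for a tilted, fixed-height camera — can be sketched with a minimal pinhole/flat-ground model. This is an illustrative stand-in for the paper's straight-line equations; the intrinsics are assumed known from calibration, and the small-angle lateral projection is an approximation.

```python
import math

def pixel_to_ground(u, v, fx, fy, cx, cy, cam_height, pitch_deg):
    """Map pixel (u, v) to ground-plane (forward, lateral) coordinates,
    in the units of cam_height, for a camera pitched pitch_deg below
    the horizontal. Minimal pinhole/flat-ground model; (fx, fy, cx, cy)
    are assumed calibrated intrinsics."""
    # angle of the pixel ray below the horizontal
    ray_pitch = math.radians(pitch_deg) + math.atan((v - cy) / fy)
    if ray_pitch <= 0:
        raise ValueError("ray does not intersect the ground plane")
    forward = cam_height / math.tan(ray_pitch)   # distance along the ground
    slant = cam_height / math.sin(ray_pitch)     # camera-to-point distance
    lateral = slant * (u - cx) / fx              # approximate lateral offset
    return forward, lateral
```

With the principal-point pixel and a 45° pitch at 1 m height, the forward distance is exactly 1 m, which makes the model easy to sanity-check against measured ground truth.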