Deep reinforcement learning on variable stiffness compliant control for programming-free robotic assembly in smart manufacturing.
- Source :
- International Journal of Production Research; Oct 2024, Vol. 62, Issue 19, p7073-7095, 23p
- Publication Year :
- 2024
Abstract
- Industrial robots are now widely deployed across many sectors, and assembly is one of their dominant application fields. Assembly, a critical manufacturing process, remains challenging for robots because of the complex contact states between the robot and its environment (i.e. the assembly components). During an assembly task, the robot must switch its controller from a non-contact mode to a contact-rich mode as the contact condition changes, and compliant behaviour is necessary for robustness to uncertain contact and for safety of physical interaction. This paper proposes a deep reinforcement learning (DRL) method that achieves such compliance through variable stiffness compliant control. Concretely, a Cartesian compliant controller is built on a virtual dynamics model with a variable non-diagonal stiffness matrix to derive a desired motion that reacts to the external force/torque. On top of that, a deep deterministic policy gradient (DDPG)-based agent is deployed to fine-tune this non-diagonal stiffness matrix. After trial-and-error learning, robots can handle changes in contact state in assembly tasks without pausing or switching controller modes, thus increasing task efficiency and compliance. Simulations and experiments show that the method allows robots to complete assembly tasks safely and efficiently under noisy observations. [ABSTRACT FROM AUTHOR]
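The abstract names the controller structure only at a high level. As a minimal sketch under assumptions (not the paper's implementation), the snippet below shows how a virtual dynamics model with a full, non-diagonal stiffness matrix can turn a measured external force/torque into a corrected desired motion; the stiffness matrix is the quantity a DDPG-style policy would tune. The function name, gains, integration scheme, and the triangular parameterisation of K are all illustrative assumptions.

```python
# Minimal sketch, NOT the authors' implementation: a Cartesian compliant
# controller built on a virtual mass-spring-damper model with a full
# (non-diagonal) 6x6 stiffness matrix K. Names, gains, and the integration
# scheme are illustrative assumptions.
import numpy as np

def compliant_step(x, x_dot, x_ref, f_ext, K, M=None, D=None, dt=0.001):
    """One Euler step of the virtual dynamics  M*a + D*v + K*(x - x_ref) = f_ext.

    x, x_dot : current desired 6-D pose and velocity (translation + orientation coords)
    x_ref    : reference pose from the nominal assembly trajectory
    f_ext    : measured external wrench (force/torque), 6-D
    K        : 6x6 stiffness matrix; off-diagonal entries couple the axes,
               which is the part a learning agent (e.g. DDPG) could tune online
    Returns the updated desired pose and velocity for the low-level controller.
    """
    M = np.eye(6) if M is None else M            # virtual inertia (illustrative)
    D = 50.0 * np.eye(6) if D is None else D     # virtual damping (illustrative)
    x_err = x - x_ref
    x_ddot = np.linalg.solve(M, f_ext - D @ x_dot - K @ x_err)
    x_dot_new = x_dot + dt * x_ddot
    x_new = x + dt * x_dot_new
    return x_new, x_dot_new

# A DDPG-style policy could output the stiffness indirectly, e.g. as a
# lower-triangular factor L so that K = L @ L.T stays positive semi-definite.
L = np.tril(np.random.uniform(0.0, 10.0, (6, 6)))   # stand-in for a policy output
K = L @ L.T
```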
Details
- Language :
- English
- ISSN :
- 0020-7543
- Volume :
- 62
- Issue :
- 19
- Database :
- Complementary Index
- Journal :
- International Journal of Production Research
- Publication Type :
- Academic Journal
- Accession number :
- 179297129
- Full Text :
- https://doi.org/10.1080/00207543.2024.2318488