
Target-Oriented Multi-Agent Coordination with Hierarchical Reinforcement Learning.

Authors:
Yu, Yuekang
Zhai, Zhongyi
Li, Weikun
Ma, Jianyu
Source:
Applied Sciences (2076-3417); Aug 2024, Vol. 14, Issue 16, p7084, 21p
Publication Year:
2024

Abstract

In target-oriented multi-agent tasks, agents collaborate to achieve goals defined by specific objects, or targets, in their environment. Success hinges on effective coordination between agents and these targets, especially in dynamic environments where targets may shift; agents must adapt to such changes and re-evaluate their target interactions. Inefficient coordination leads to wasted resources, longer task times, and lower overall performance. To address this challenge, we introduce regulatory hierarchical multi-agent coordination (RHMC), a hierarchical reinforcement learning approach. RHMC divides the coordination task into two levels: a high-level policy that assigns targets based on the environmental state, and a low-level policy that executes primitive actions guided by each agent's target assignment and observations. Stabilizing RHMC's high-level policy is crucial for effective learning. This stability is achieved through reward regularization, which reduces the high-level policy's reliance on the dynamic low-level policy and keeps it focused on broad coordination rather than on specific agent actions. By minimizing this dependence, RHMC adapts more readily to environmental changes and learns more efficiently. Experiments demonstrate RHMC's superiority over existing methods in global reward and learning efficiency, highlighting its effectiveness in multi-agent coordination.
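The record contains only the abstract, not the paper's implementation. As a rough illustration of the two-level structure the abstract describes (a high-level policy assigning targets, a low-level policy executing primitive actions, and a regularized high-level reward), the following Python sketch may help. Every class, function, and parameter name here is hypothetical, and the regularization form is an assumption rather than the paper's actual formula.

import numpy as np

class HighLevelPolicy:
    """Hypothetical high-level policy: assigns one target per agent."""
    def __init__(self, n_agents, n_targets, seed=0):
        self.n_agents = n_agents
        self.n_targets = n_targets
        self.rng = np.random.default_rng(seed)

    def assign_targets(self, global_state):
        # Placeholder for a learned assignment policy: a trained model
        # would map the global state to target assignments; here we
        # simply sample uniformly at random.
        return self.rng.integers(0, self.n_targets, size=self.n_agents)

class LowLevelPolicy:
    """Hypothetical low-level policy: acts toward the assigned target."""
    def act(self, position, target_position):
        # Placeholder primitive action: a unit step toward the target.
        direction = target_position - position
        norm = np.linalg.norm(direction)
        return direction / norm if norm > 0 else np.zeros_like(direction)

def regularized_high_level_reward(env_reward, low_level_term, beta=0.5):
    # Assumed form of the regularization idea only: damp the part of
    # the high-level learning signal that comes from low-level behavior,
    # so the assignment policy depends less on the still-changing
    # low-level policy. The paper's exact formulation is not reproduced.
    return env_reward - beta * low_level_term

# Usage sketch: three agents, two targets, positions as the global state.
high = HighLevelPolicy(n_agents=3, n_targets=2)
low = LowLevelPolicy()
targets = np.array([[0.0, 0.0], [5.0, 5.0]])
positions = np.array([[1.0, 1.0], [4.0, 0.0], [2.0, 3.0]])
assignment = high.assign_targets(global_state=positions)
actions = np.stack([low.act(positions[i], targets[assignment[i]])
                    for i in range(3)])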

Details

Language:
English
ISSN:
2076-3417
Volume:
14
Issue:
16
Database:
Complementary Index
Journal:
Applied Sciences (2076-3417)
Publication Type:
Academic Journal
Accession Number:
179351112
Full Text:
https://doi.org/10.3390/app14167084