
Performance of NPG in Countable State-Space Average-Cost RL

Authors :
Murthy, Yashaswini
Grosof, Isaac
Maguluri, Siva Theja
Srikant, R.
Publication Year :
2024

Abstract

We consider policy optimization methods in reinforcement learning settings where the state space is arbitrarily large, or even countably infinite. The motivation arises from control problems in communication networks, matching markets, and other queueing systems. Specifically, we consider the popular Natural Policy Gradient (NPG) algorithm, which has been studied in the past only under the assumption that the cost is bounded and the state space is finite, neither of which holds for the aforementioned control problems. Assuming a Lyapunov drift condition, which is naturally satisfied in some cases and can be satisfied in other cases at a small cost in performance, we design a state-dependent step-size rule which dramatically improves the performance of NPG for our intended applications. In addition to experimentally verifying the performance improvement, we also theoretically show that the iteration complexity of NPG can be made independent of the size of the state space. The key analytical tool we use is the connection between NPG step-sizes and the solution to Poisson's equation. In particular, we provide policy-independent bounds on the solution to Poisson's equation, which are then used to guide the choice of NPG step-sizes.

Comment: 24 pages; 3 figures
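As a rough illustration of the step-size mechanism the abstract describes: for softmax-parameterized policies, an NPG update reduces to a multiplicative-weights update on action probabilities, and a state-dependent step size simply scales the exponent per state. The sketch below is a hypothetical toy example, not the paper's algorithm or step-size rule; the function name `npg_step`, the shapes, and the sample step sizes are all assumptions for illustration (costs are minimized, so lower Q-values gain probability).

```python
import numpy as np

def npg_step(pi, Q, eta):
    """One softmax-NPG (multiplicative-weights) update, minimizing cost:
        pi_{t+1}(a|s) proportional to pi_t(a|s) * exp(-eta[s] * Q[s, a]).
    pi:  (S, A) current policy; each row sums to 1
    Q:   (S, A) state-action relative-cost estimates
    eta: (S,)   state-dependent step sizes (hypothetical values below)
    """
    logits = np.log(pi) - eta[:, None] * Q
    # Subtract the row max before exponentiating for numerical stability.
    w = np.exp(logits - logits.max(axis=1, keepdims=True))
    return w / w.sum(axis=1, keepdims=True)

# Toy setting: 3 states, 2 actions, uniform initial policy.
pi = np.full((3, 2), 0.5)
Q = np.array([[1.0, 0.0], [0.0, 2.0], [0.5, 0.5]])
eta = np.array([1.0, 0.5, 0.1])  # e.g. smaller steps in "larger" states
pi_next = npg_step(pi, Q, eta)
```

Here each row of `pi_next` still sums to 1, and within each state the action with the smaller estimated cost gains probability mass; choosing `eta` to shrink with the state (guided, in the paper, by bounds on the solution to Poisson's equation) is what controls the update in unbounded-cost settings.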

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2405.20467
Document Type :
Working Paper