
Strategy-Driven Limit Theorems Associated with Bandit Problems

Authors :
Chen, Zengjing
Feng, Shui
Zhang, Guodong
Publication Year :
2022

Abstract

Motivated by the study of the asymptotic behaviour of bandit problems, we obtain several strategy-driven limit theorems, including the law of large numbers, the large deviation principle, and the central limit theorem. Unlike the classical limit theorems, we develop sampling-strategy-driven limit theorems that generate the maximum or minimum average reward. The law of large numbers identifies all possible limits that are achievable under various strategies. The large deviation principle provides the maximum decay of probabilities for deviations from the limiting domain. To describe the fluctuations around averages, we obtain strategy-driven central limit theorems under optimal strategies. The limits in these theorems are identified explicitly and depend heavily on the structure of the events or the integrating functions and on the strategies. This demonstrates the key signature of the learning structure. Our results can be used to estimate the maximal (minimal) rewards and to identify the conditions for avoiding Parrondo's paradox in the two-armed bandit problem. They also lay the theoretical foundation for statistical inference in determining the arm that offers the higher mean reward.
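
The following is a minimal simulation sketch, not taken from the paper, illustrating the central idea that the limiting average reward in a two-armed bandit depends on the sampling strategy: uniform random sampling drives the empirical average toward the mean of the two arm means, while an adaptive "play the empirically better arm" strategy drives it toward the larger arm mean. The arm means, the exploration schedule, and all function names are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-armed Bernoulli bandit; mu values are illustrative only.
mu = np.array([0.4, 0.7])
n = 100_000  # number of pulls

def average_reward(choose_arm):
    """Run n pulls with the given arm-selection rule and return the
    empirical average reward (the quantity whose limit the strategy drives)."""
    counts = np.zeros(2)
    sums = np.zeros(2)
    total = 0.0
    for t in range(n):
        arm = choose_arm(t, counts, sums)
        reward = rng.random() < mu[arm]
        counts[arm] += 1
        sums[arm] += reward
        total += reward
    return total / n

# Uniform random sampling: the average converges to (mu1 + mu2) / 2.
uniform = average_reward(lambda t, counts, sums: rng.integers(2))

# Adaptive strategy with decaying exploration: the average approaches
# max(mu1, mu2), the maximal achievable limit.
def greedy(t, counts, sums):
    if rng.random() < 1.0 / (t + 1) ** 0.5:   # decaying exploration
        return rng.integers(2)
    means = sums / np.maximum(counts, 1)
    return int(np.argmax(means))

adaptive = average_reward(greedy)
print(f"uniform strategy : {uniform:.3f}  (≈ (mu1+mu2)/2 = {mu.mean():.3f})")
print(f"adaptive strategy: {adaptive:.3f}  (≈ max(mu1,mu2) = {mu.max():.3f})")
```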

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2204.04442
Document Type :
Working Paper