
Non-Stationary Latent Bandits

Authors :
Hong, Joey
Kveton, Branislav
Zaheer, Manzil
Chow, Yinlam
Ahmed, Amr
Ghavamzadeh, Mohammad
Boutilier, Craig
Publication Year :
2020

Abstract

Users of recommender systems often behave in a non-stationary fashion, due to their evolving preferences and tastes over time. In this work, we propose a practical approach for fast personalization to non-stationary users. The key idea is to frame this problem as a latent bandit, where the prototypical models of user behavior are learned offline and the latent state of the user is inferred online from its interactions with the models. We call this problem a non-stationary latent bandit. We propose Thompson sampling algorithms for regret minimization in non-stationary latent bandits, analyze them, and evaluate them on a real-world dataset. The main strength of our approach is that it can be combined with rich offline-learned models, which can be misspecified, and are subsequently fine-tuned online using posterior sampling. In this way, we naturally combine the strengths of offline and online learning.

Comment: 15 pages, 4 figures
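The abstract describes Thompson sampling over latent user states, using behavior models learned offline and a belief that is updated online as the user's state drifts. The following is a minimal, illustrative sketch of that idea in Python, not the paper's actual algorithm or analysis: the reward matrix `mu`, the transition matrix `P`, the Bernoulli reward likelihood, and the belief-propagation step are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Illustrative offline-learned quantities (assumed, not from the paper) ---
# mu[s, a]: mean Bernoulli reward of arm a under latent user state s.
mu = np.array([[0.9, 0.2, 0.1],
               [0.1, 0.8, 0.3],
               [0.2, 0.3, 0.7]])
# P[s, s']: probability the latent state switches from s to s' between rounds.
P = np.array([[0.95, 0.03, 0.02],
              [0.02, 0.95, 0.03],
              [0.03, 0.02, 0.95]])

n_states, n_arms = mu.shape
belief = np.full(n_states, 1.0 / n_states)  # prior belief over latent states
horizon = 500

def true_user(T):
    """Simulate a non-stationary user whose hidden state follows P."""
    s = rng.integers(n_states)
    for _ in range(T):
        yield s
        s = rng.choice(n_states, p=P[s])

total_reward = 0.0
for s_true in true_user(horizon):
    # Thompson sampling over the latent state: sample a state from the belief,
    # then act greedily with respect to the offline-learned reward model.
    s_sampled = rng.choice(n_states, p=belief)
    a = int(np.argmax(mu[s_sampled]))

    # Observe a Bernoulli reward generated by the (hidden) true state.
    r = float(rng.random() < mu[s_true, a])
    total_reward += r

    # Bayesian belief update: reweight by the reward likelihood, then
    # propagate through the transition model to track non-stationarity.
    likelihood = mu[:, a] ** r * (1.0 - mu[:, a]) ** (1.0 - r)
    belief = belief * likelihood
    belief /= belief.sum()
    belief = belief @ P

print(f"average reward over {horizon} rounds: {total_reward / horizon:.3f}")
```

In this sketch, the propagation of the belief through `P` is what accounts for the user's drifting state; with an identity transition matrix it reduces to ordinary latent-state Thompson sampling with a fixed (but unknown) user state.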

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2012.00386
Document Type :
Working Paper