
Trajectory-wise Iterative Reinforcement Learning Framework for Auto-bidding

Authors:
Li, Haoming
Huo, Yusen
Dou, Shuai
Zheng, Zhenzhe
Zhang, Zhilin
Yu, Chuan
Xu, Jian
Wu, Fan
Publication Year:
2024

Abstract

In online advertising, advertisers participate in ad auctions to acquire ad opportunities, often by utilizing auto-bidding tools provided by demand-side platforms (DSPs). Current auto-bidding algorithms typically employ reinforcement learning (RL). However, due to safety concerns, most RL-based auto-bidding policies are trained in simulation, leading to performance degradation when deployed in online environments. To narrow this gap, we can deploy multiple auto-bidding agents in parallel to collect a large interaction dataset. Offline RL algorithms can then be utilized to train a new policy, which can subsequently be deployed for further data collection, resulting in an iterative training framework that we refer to as iterative offline RL. In this work, we identify the performance bottleneck of this iterative offline RL framework, which originates from the ineffective exploration and exploitation caused by the inherent conservatism of offline RL algorithms. To overcome this bottleneck, we propose Trajectory-wise Exploration and Exploitation (TEE), which introduces novel data collection and utilization methods for iterative offline RL from a trajectory perspective. Furthermore, to ensure the safety of online exploration while preserving dataset quality for TEE, we propose Safe Exploration by Adaptive Action Selection (SEAS). Both offline experiments and real-world experiments on the Alibaba display advertising platform demonstrate the effectiveness of our proposed method.

Comment: Accepted by The Web Conference 2024 (WWW'24) as an oral paper
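The abstract describes an iterative offline RL loop: deploy agents in parallel, collect trajectories, retrain a policy offline, then redeploy it. The sketch below illustrates only that outer loop under toy assumptions; the auction environment, policy representation, and "offline training" step are simplified placeholders, not the paper's TEE or SEAS methods or Alibaba's production setup.

```python
# Minimal sketch of the iterative offline RL loop from the abstract.
# All components here are toy stand-ins, not the paper's actual algorithms.

import random

def toy_auction_step(bid):
    """Toy stand-in for an ad auction: unit reward if the bid clears a random price."""
    clearing_price = random.uniform(0.0, 1.0)
    return 1.0 if bid >= clearing_price else 0.0

def collect_trajectory(policy, horizon=10):
    """One deployed agent interacts online with the current policy."""
    trajectory = []
    for _ in range(horizon):
        bid = policy()
        reward = toy_auction_step(bid)
        trajectory.append((bid, reward))
    return trajectory

def train_offline(dataset):
    """Toy 'offline RL' update: imitate bids that earned reward.
    A real system would run a conservative offline RL algorithm here."""
    good_bids = [bid for traj in dataset for bid, reward in traj if reward > 0]
    mean_bid = sum(good_bids) / len(good_bids) if good_bids else 0.5
    # New policy: bid near the mean of previously successful bids.
    return lambda: min(1.0, max(0.0, random.gauss(mean_bid, 0.1)))

# Iterative offline RL: alternate parallel data collection and offline training.
policy = lambda: random.uniform(0.0, 1.0)  # initial behavior policy
dataset = []
for iteration in range(3):
    for _ in range(8):  # multiple auto-bidding agents deployed in parallel
        dataset.append(collect_trajectory(policy))
    policy = train_offline(dataset)  # retrain, then redeploy next round
```

The paper's contribution targets the weakness this loop inherits from conservative offline RL: the retrained policy explores and exploits ineffectively, which TEE and SEAS address at the trajectory level.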

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2402.15102
Document Type:
Working Paper