
Distributionally Robust Policy Evaluation and Learning for Continuous Treatment with Observational Data

Authors:
Leung, Cheuk Hang
Huang, Yiyan
Li, Yijun
Wu, Qi
Publication Year:
2025

Abstract

Using offline observational data for policy evaluation and learning allows decision-makers to evaluate and learn a policy that connects characteristics and interventions. Most existing literature has focused either on discrete treatment spaces or has assumed no difference between the distributions of the policy-learning and policy-deployment environments. These restrictions limit applicability in many real-world scenarios where distribution shifts arise under continuous treatment. To overcome these challenges, this paper develops a distributionally robust policy for the continuous treatment setting. The proposed distributionally robust estimators for policy evaluation and learning are built on the Inverse Probability Weighting (IPW) method, extended from the discrete to the continuous treatment case. Specifically, we introduce a kernel function into the proposed IPW estimator to mitigate the exclusion of observations that occurs when the standard IPW method is applied to continuous treatments. We then provide a finite-sample analysis that guarantees the convergence of the proposed distributionally robust policy evaluation and learning estimators. Comprehensive experiments further verify the effectiveness of our approach when distribution shifts are present.
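To make the kernel-smoothed IPW idea concrete, here is a minimal sketch (not the paper's implementation) of evaluating a continuous-treatment policy from observational data. It assumes a Gaussian kernel, a known generalized propensity score `gps` = f(a|x), and a hypothetical bandwidth `h`; with discrete IPW, observations whose treatment differs from the policy's would be dropped entirely, whereas the kernel instead down-weights them smoothly.

```python
import numpy as np

def gaussian_kernel(u):
    # Standard Gaussian kernel K(u) = exp(-u^2 / 2) / sqrt(2*pi)
    return np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)

def kernel_ipw_value(y, a, pi_x, gps, h):
    """Kernel-smoothed IPW estimate of a policy's value under continuous
    treatment (illustrative sketch, not the paper's exact estimator).

    y    : observed outcomes, shape (n,)
    a    : observed continuous treatments, shape (n,)
    pi_x : treatments the evaluated policy would assign, pi(x_i), shape (n,)
    gps  : generalized propensity score f(a_i | x_i), shape (n,)
    h    : kernel bandwidth (hypothetical tuning parameter)
    """
    # Each unit is weighted by how close its observed treatment is to the
    # policy's recommendation, reweighted by the propensity density.
    weights = gaussian_kernel((a - pi_x) / h) / (h * gps)
    return np.mean(weights * y)

# Toy check: treatments uniform on [0, 1] (so gps = 1), outcome y ~ 2a,
# and a constant policy pi(x) = 0.5; the true policy value is 1.0.
rng = np.random.default_rng(0)
n = 10_000
a = rng.uniform(0.0, 1.0, n)
y = 2.0 * a + rng.normal(0.0, 0.1, n)
v = kernel_ipw_value(y, a, pi_x=np.full(n, 0.5), gps=np.ones(n), h=0.1)
print(v)
```

The estimate should concentrate near the true value 1.0 as n grows; the bandwidth `h` trades bias (large `h`) against variance (small `h`), which is the trade-off the paper's finite-sample analysis quantifies.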

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2501.10693
Document Type:
Working Paper