
Optimizing the Performative Risk under Weak Convexity Assumptions

Authors:
Zhao, Yulai
Publication Year:
2022

Abstract

In performative prediction, a predictive model impacts the distribution that generates future data, a phenomenon ignored in classical supervised learning. In this closed-loop setting, the natural measure of performance, the performative risk ($\mathrm{PR}$), captures the expected loss incurred by a predictive model \emph{after} deployment. The core difficulty of using the performative risk as an optimization objective is that the data distribution itself depends on the model parameters. This dependence is governed by the environment and is not under the control of the learner. As a consequence, even the choice of a convex loss function can result in a highly non-convex $\mathrm{PR}$ minimization problem. Prior work has identified a pair of general conditions on the loss and on the mapping from model parameters to distributions that together imply the convexity of the performative risk. In this paper, we relax these assumptions and focus on obtaining weaker notions of convexity, without sacrificing the amenability of the $\mathrm{PR}$ minimization problem to iterative optimization methods.

Comment: NeurIPS 2022 Workshop on Optimization for Machine Learning
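For reference, a standard formalization of the objective described above (the notation here follows the broader performative prediction literature and is not quoted from this paper): writing $\mathcal{D}(\theta)$ for the distribution induced by deploying a model with parameters $\theta$, and $\ell(z; \theta)$ for the loss on a data point $z$, the performative risk is

$$\mathrm{PR}(\theta) = \mathbb{E}_{Z \sim \mathcal{D}(\theta)}\big[\ell(Z; \theta)\big].$$

The non-convexity noted in the abstract arises because $\theta$ enters twice, through the loss and through the distribution map $\mathcal{D}(\theta)$; convexity of $\ell(z; \cdot)$ for each fixed $z$ therefore does not by itself make $\mathrm{PR}$ convex in $\theta$.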

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2209.00771
Document Type:
Working Paper