
Model Uncertainty Stochastic Mean-Field Control

Authors:
Agram, Nacira
Øksendal, Bernt
Publication Year:
2016

Abstract

We consider the problem of optimal control of a mean-field stochastic differential equation under model uncertainty. The model uncertainty is represented by ambiguity about the law $\mathcal{L}(X(t))$ of the state $X(t)$ at time $t$. For example, it could be the law $\mathcal{L}_{\mathbb{P}}(X(t))$ of $X(t)$ with respect to the given, underlying probability measure $\mathbb{P}$; this is the classical case with no model uncertainty. But it could also be the law $\mathcal{L}_{\mathbb{Q}}(X(t))$ with respect to some other probability measure $\mathbb{Q}$ or, more generally, any random measure $\mu(t)$ on $\mathbb{R}$ with total mass $1$. We formulate this model uncertainty control problem as a two-player stochastic differential game for a stochastic differential equation (SDE) of mean-field related type. The control of one player, representing the uncertainty about the law of the state, is a measure-valued stochastic process $\mu(t)$, while the control of the other player is a classical real-valued stochastic process $u(t)$. Control with respect to random probability-measure processes $\mu(t)$ on $\mathbb{R}$ is a new type of stochastic control problem that has not been studied before. By introducing operator-valued backward stochastic differential equations, we obtain a sufficient maximum principle for Nash equilibria of such games in the general nonzero-sum case, and for saddle points in the zero-sum case. As an application, we find an explicit solution of the problem of optimal consumption, under model uncertainty, of a cash flow described by an SDE of mean-field related type.

Comment: 19 pages
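
For illustration, a generic sketch of the setup described in the abstract (with placeholder coefficients $b$, $\sigma$, $f$, $g$ and horizon $T$, not necessarily the exact formulation used in the paper) is the controlled mean-field SDE
\[
dX(t) = b\big(t, X(t), \mu(t), u(t)\big)\,dt + \sigma\big(t, X(t), \mu(t), u(t)\big)\,dB(t), \qquad X(0) = x_0,
\]
where $u(t)$ is the real-valued control of one player and $\mu(t)$ is the measure-valued control of the other player, representing the ambiguous law of the state. In the zero-sum case one then seeks a saddle point of a performance functional of the form
\[
J(u, \mu) = \mathbb{E}\Big[\int_0^T f\big(t, X(t), \mu(t), u(t)\big)\,dt + g\big(X(T), \mu(T)\big)\Big],
\qquad \sup_{u}\inf_{\mu} J(u,\mu) = \inf_{\mu}\sup_{u} J(u,\mu).
\]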

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.1611.01385
Document Type:
Working Paper