
Deep reinforcement learning in World-Earth system models to discover sustainable management strategies

Authors:
Strnad, Felix M.
Barfuss, Wolfram
Donges, Jonathan F.
Heitzig, Jobst
Source:
Chaos 29, 123122 (2019)
Publication Year:
2019

Abstract

Increasingly complex, non-linear World-Earth system models are used to describe the dynamics of the biophysical Earth system and the socio-economic and socio-cultural World of human societies and their interactions. Identifying pathways towards a sustainable future in these models to inform policy makers and the wider public, e.g., pathways leading to robust mitigation of dangerous anthropogenic climate change, is a challenging and widely investigated task in the fields of climate research and broader Earth system science. This problem becomes particularly difficult when constraints on avoiding transgressions of planetary boundaries and social foundations need to be taken into account. In this work, we propose to combine recently developed machine learning techniques, namely deep reinforcement learning (DRL), with classical analysis of trajectories in the World-Earth system. Based on the concept of the agent-environment interface, we develop an agent that is generally able to act and learn in variable manageable environment models of the Earth system. We demonstrate the potential of our framework by applying DRL algorithms to two stylized World-Earth system models. We thereby conceptually explore the feasibility of finding novel global governance policies that lead into a safe and just operating space constrained by certain planetary and socio-economic boundaries. The artificially intelligent agent learns that the timing of a specific mix of taxing carbon emissions and subsidizing renewables is of crucial relevance for finding World-Earth system trajectories that are sustainable in the long term.

Comment: 16 pages, 8 figures
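The central technical idea referenced in the abstract is the agent-environment interface: a learning agent repeatedly observes the state of a (stylized) World-Earth model, chooses a management action, and receives a reward that encodes staying within planetary and socio-economic boundaries. The sketch below illustrates only this interaction loop; ToyWorldEarthEnv, its parameters, and the tabular Q-learning update are hypothetical placeholders and do not reproduce the paper's stylized World-Earth models or its deep reinforcement learning (DQN-style) agent.

```python
# Minimal sketch of the agent-environment interface described in the abstract.
# ToyWorldEarthEnv is a hypothetical placeholder, NOT one of the paper's
# stylized World-Earth models; it only illustrates the reset/step protocol
# through which an agent acts and learns.

import numpy as np


class ToyWorldEarthEnv:
    """Hypothetical 1-D environment: the state drifts toward an unsafe
    boundary unless the agent pays a management cost to push it back."""

    def __init__(self, boundary=1.0, drift=0.05, control=0.15, horizon=200):
        self.boundary = boundary   # crossing this boundary ends the episode
        self.drift = drift         # unmanaged drift per time step
        self.control = control     # effect of the management action
        self.horizon = horizon
        self.reset()

    def reset(self):
        self.state = 0.0
        self.t = 0
        return self.state

    def step(self, action):
        # action 0: do nothing; action 1: apply management (e.g. a tax/subsidy mix)
        self.state = max(self.state + self.drift - self.control * action, 0.0)
        self.t += 1
        crossed = self.state >= self.boundary
        done = crossed or self.t >= self.horizon
        # reward: staying inside the safe region is good, management is costly
        reward = -10.0 if crossed else 1.0 - 0.2 * action
        return self.state, reward, done


def discretize(state, boundary, n_bins=20):
    """Map the continuous state to a coarse bin index for the Q-table."""
    return min(int(state / boundary * n_bins), n_bins - 1)


# Tabular Q-learning stands in here for the deep RL algorithms of the paper,
# purely to show how an agent learns through the same interface.
env = ToyWorldEarthEnv()
n_bins, n_actions = 20, 2
q = np.zeros((n_bins, n_actions))
alpha, gamma, eps = 0.1, 0.99, 0.1

for episode in range(500):
    s = discretize(env.reset(), env.boundary, n_bins)
    done = False
    while not done:
        # epsilon-greedy action selection
        a = np.random.randint(n_actions) if np.random.rand() < eps else int(np.argmax(q[s]))
        state, r, done = env.step(a)
        s_next = discretize(state, env.boundary, n_bins)
        q[s, a] += alpha * (r + gamma * np.max(q[s_next]) * (not done) - q[s, a])
        s = s_next

print("Learned greedy action per state bin:", np.argmax(q, axis=1))
```

In this toy setting the learned policy typically applies the management action only when the state approaches the boundary, loosely echoing the paper's finding that the timing of interventions matters; the paper itself draws this conclusion from DRL agents acting in its two stylized World-Earth models, not from a construction like the one above.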

Details

Database:
arXiv
Journal:
Chaos 29, 123122 (2019)
Publication Type:
Report
Accession Number:
edsarx.1908.05567
Document Type:
Working Paper
Full Text:
https://doi.org/10.1063/1.5124673