
QFlip: An Adaptive Reinforcement Learning Strategy for the FlipIt Security Game

Authors :
Oakley, Lisa
Oprea, Alina
Source :
Decision and Game Theory for Security. GameSec 2019. Lecture Notes in Computer Science, vol 11836. Springer, Cham. pp 364-384
Publication Year :
2019

Abstract

A rise in Advanced Persistent Threats (APTs) has introduced a need for robustness against long-running, stealthy attacks which circumvent existing cryptographic security guarantees. FlipIt is a security game that models attacker-defender interactions in advanced scenarios such as APTs. Previous work extensively analyzed non-adaptive strategies in FlipIt, but adaptive strategies arise naturally in practical interactions as players receive feedback during the game. We model the FlipIt game as a Markov Decision Process and introduce QFlip, an adaptive strategy for FlipIt based on temporal difference reinforcement learning. We prove theoretical results on the convergence of our new strategy against an opponent playing a Periodic strategy. We confirm our analysis experimentally through extensive evaluation of QFlip against specific opponents. QFlip converges to the optimal adaptive strategy for Periodic and Exponential opponents using associated state spaces. Finally, we introduce a generalized QFlip strategy with a composite state space that outperforms a Greedy strategy for several opponent distributions, including Periodic and Uniform, without prior knowledge of the opponent's strategy. We also release an OpenAI Gym environment for FlipIt to facilitate future research.

Comment: Outstanding Student Paper award
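To give a concrete sense of the temporal difference approach the abstract describes, below is a minimal tabular Q-learning sketch against a Periodic opponent in a FlipIt-like setting. This is not the paper's QFlip implementation: the state space (ticks since the learner's last flip), the flip cost, the reward shaping, and the state cap are assumptions chosen only to make the example self-contained and runnable.

```python
import random
from collections import defaultdict

# Hedged sketch: tabular Q-learning (a temporal-difference method) for a
# FlipIt-like game against a Periodic opponent. State space, costs, and
# rewards are illustrative assumptions, not the paper's definitions.

ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1      # learning rate, discount, exploration
FLIP_COST, PERIOD, TICKS = 4.0, 10, 100_000 # assumed move cost, opponent period, game length
MAX_STATE = 2 * PERIOD                      # cap the state to keep the table bounded

Q = defaultdict(lambda: [0.0, 0.0])         # Q[state] -> [value(wait), value(flip)]

def choose_action(state):
    """Epsilon-greedy action selection over the two FlipIt moves."""
    if random.random() < EPSILON:
        return random.randrange(2)
    return 0 if Q[state][0] >= Q[state][1] else 1

agent_controls = False   # does the learner currently own the resource?
since_flip = 0           # ticks since the learner's last flip (the assumed state)

for t in range(TICKS):
    state, action = since_flip, choose_action(since_flip)

    if t % PERIOD == 0:          # Periodic opponent flips, seizing control
        agent_controls = False
    if action == 1:              # learner flips: takes control, pays a move cost
        agent_controls, since_flip = True, 0
    else:                        # learner waits
        since_flip = min(since_flip + 1, MAX_STATE)

    reward = (1.0 if agent_controls else 0.0) - (FLIP_COST if action == 1 else 0.0)

    # Temporal-difference (Q-learning) update toward the bootstrapped target.
    next_state = since_flip
    Q[state][action] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][action])
```

Under these assumptions, the learned policy tends toward flipping shortly after the Periodic opponent's moves, which mirrors the qualitative behavior the abstract reports for QFlip against a Periodic strategy.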

Details

Database :
arXiv
Journal :
Decision and Game Theory for Security. GameSec 2019. Lecture Notes in Computer Science, vol 11836. Springer, Cham. pp 364-384
Publication Type :
Report
Accession number :
edsarx.1906.11938
Document Type :
Working Paper
Full Text :
https://doi.org/10.1007/978-3-030-32430-8_22