Extraction of Complex DNN Models: Real Threat or Boogeyman?
- Publication Year :
- 2019
Abstract
- Recently, machine learning (ML) has introduced advanced solutions to many domains. Since ML models provide a business advantage to model owners, protecting the intellectual property of ML models has emerged as an important consideration. The confidentiality of ML models can be protected by exposing them to clients only via prediction APIs. However, model extraction attacks can steal the functionality of ML models using the information leaked to clients through the results returned via the API. In this work, we question whether model extraction is a serious threat to complex, real-life ML models. We evaluate the current state-of-the-art model extraction attack (Knockoff nets) against complex models. We reproduce and confirm the results in the original paper, but we also show that the performance of this attack can be limited by several factors, including the ML model architecture and the granularity of the API response. Furthermore, we introduce a defense based on distinguishing queries used for Knockoff nets from benign queries. Despite the limitations of Knockoff nets, we show that a more realistic adversary can effectively steal complex ML models and evade known defenses.
- Comment: 16 pages, 1 figure; accepted for publication in the AAAI-20 Workshop on Engineering Dependable and Secure Machine Learning Systems (AAAI-EDSMLS 2020)
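To make the attack described in the abstract concrete, below is a minimal sketch of a Knockoff-nets-style extraction loop: the adversary queries the victim's prediction API with surrogate images and trains a clone to match the returned posteriors. This is an illustration under stated assumptions, not the authors' implementation; `victim_api` (a hypothetical black-box endpoint returning class probabilities for a batch of images) and `transfer_loader` (an assumed stream of unlabeled surrogate images, e.g. from a public dataset) are placeholder names.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Surrogate ("knockoff") architecture chosen by the adversary; it need not
# match the victim's architecture, which the abstract notes can limit the attack.
knockoff = models.resnet18(num_classes=10)
optimizer = torch.optim.SGD(knockoff.parameters(), lr=0.01, momentum=0.9)

def extraction_epoch(victim_api, transfer_loader):
    """One pass of query-and-distill model extraction (hypothetical API names)."""
    knockoff.train()
    for images in transfer_loader:
        with torch.no_grad():
            # Information leaked through the prediction API: full class posteriors.
            # A coarser response (e.g. top-1 label only) reduces this signal.
            posteriors = victim_api(images)
        logits = knockoff(images)
        # Cross-entropy against the victim's soft labels, i.e. train the clone
        # to reproduce the victim's output distribution on the transfer set.
        loss = -(posteriors * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

The defense sketched in the abstract would sit in front of `victim_api`, flagging query streams whose distribution differs from benign client traffic; the paper's point is that a realistic adversary can craft queries that evade such detectors.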
Details
- Database :
- arXiv
- Publication Type :
- Report
- Accession number :
- edsarx.1910.05429
- Document Type :
- Working Paper