
Locking Machine Learning Models into Hardware

Authors:
Clifford, Eleanor
Saravanan, Adhithya
Langford, Harry
Zhang, Cheng
Zhao, Yiren
Mullins, Robert
Shumailov, Ilia
Hayes, Jamie
Publication Year: 2024

Abstract

Modern Machine Learning models are expensive IP, and business competitiveness often depends on keeping this IP confidential. This in turn restricts how these models can be deployed -- for example, it is unclear how to deploy a model on-device without inevitably leaking the underlying model. At the same time, confidential computing technologies such as Multi-Party Computation or Homomorphic Encryption remain impractical for wide adoption. In this paper we take a different approach and investigate the feasibility of ML-specific mechanisms that deter unauthorized model use by restricting the model to be usable only on specific hardware, making adoption on unauthorized hardware inconvenient. That way, even if the IP is compromised, it cannot be trivially used without specialised hardware or major model adjustment. In a sense, we seek to enable cheap locking of machine learning models into specific hardware. We demonstrate that locking mechanisms are feasible either by targeting the efficiency of model representations, such as making models incompatible with quantisation, or by tying the model's operation to specific characteristics of the hardware, such as the number of cycles for arithmetic operations. We demonstrate that locking incurs negligible work and latency overheads, while significantly restricting the usability of the resultant model on unauthorized hardware.

Comment: 10 pages, 2 figures of main text; 14 pages, 16 figures of appendices
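The quantisation-incompatibility idea from the abstract can be illustrated with a minimal sketch. This is not the paper's actual mechanism; the toy setup below (a weight vector whose functional component is smaller than one int8 quantisation step, and the helper `quantise_int8`) is entirely hypothetical. It only shows why a model whose behaviour depends on sub-quantisation-resolution detail stops working once its weights are naively quantised:

```python
import numpy as np

# Hypothetical illustration, not the authors' method: a "locked" weight
# vector whose functional signal is carried in a fine-grained component
# smaller than one int8 quantisation step.
rng = np.random.default_rng(0)

carrier = rng.normal(size=128)        # large-magnitude carrier weights
signal = 1e-3 * rng.normal(size=128)  # fine-grained functional component
w_locked = carrier + signal

def quantise_int8(x):
    """Naive symmetric per-tensor int8 quantisation (quantise, then dequantise)."""
    scale = np.abs(x).max() / 127.0
    return np.round(x / scale).astype(np.int8) * scale

x = rng.normal(size=128)
y_full = w_locked @ x                  # full-precision output
y_quant = quantise_int8(w_locked) @ x  # output after int8 round-trip

# The quantisation step (~max|w|/127) dwarfs the 1e-3 signal, so the
# fine-grained component is destroyed along with part of the output.
print("output drift after int8 quantisation:", abs(y_full - y_quant))
```

In this sketch, quantisation introduces a per-weight error on the order of the quantisation step, which swamps the deliberately tiny functional component; a real locking scheme would have to arrange this sensitivity while preserving accuracy in full precision.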

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2405.20990
Document Type: Working Paper