
Occlumency: Privacy-preserving Remote Deep-learning Inference Using SGX

Authors :
Chenren Xu
Zhiqi Lin
Saumay Pushp
Caihua Li
Yunxin Liu
Junehwa Song
Fengyuan Xu
Taegyeong Lee
Lintao Zhang
Youngki Lee
Source :
MobiCom
Publication Year :
2019
Publisher :
ACM, 2019.

Abstract

Deep learning (DL) is receiving huge attention as an enabling technique for emerging mobile and IoT applications. Because DNN models have high computation and memory costs, it is common practice to offload DNN inference to cloud services. However, such cloud-offloaded inference raises serious privacy concerns: malicious external attackers or untrustworthy internal administrators of the cloud may leak highly sensitive and private data such as images, voice, and text. In this paper, we propose Occlumency, a novel cloud-driven solution designed to protect user privacy without giving up the benefits of powerful cloud resources. Occlumency leverages a secure SGX enclave to preserve the confidentiality and integrity of user data throughout the entire DL inference process. DL inference inside an SGX enclave, however, suffers severe performance degradation due to the limited physical memory and inefficient page swapping. We designed a suite of novel techniques to accelerate DL inference inside the memory-constrained enclave and implemented Occlumency based on Caffe. Our experiments with various DNN models show that Occlumency improves inference speed by 3.6x over baseline DL inference in SGX and performs secure DL inference with only 72% latency overhead compared to inference in the native environment.
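
The abstract does not detail the acceleration techniques, but the core constraint it describes is running inference within an enclave whose physical memory is far smaller than a full DNN model. The following minimal C++ sketch illustrates one generic way such a constraint can be handled, namely loading one layer's weights at a time and releasing them before the next layer, so peak memory stays bounded by a single layer. This is an assumption-laden illustration, not Occlumency's actual design; the function fetch_weights and the LayerSpec structure are hypothetical, and a real system would run this loop inside an SGX enclave and fetch encrypted weights from untrusted memory.

// Illustrative sketch only: bounding peak memory during layer-wise inference
// by loading one layer's weights at a time. Plain std::vector stands in for
// enclave-managed buffers; fetch_weights is a hypothetical stand-in for
// pulling (and verifying) weights from untrusted storage.
#include <cstdio>
#include <vector>
#include <numeric>

struct LayerSpec { int in_dim; int out_dim; };   // hypothetical layer descriptor

// Hypothetical helper: pretend to fetch this layer's weights from outside.
std::vector<float> fetch_weights(const LayerSpec& s) {
    return std::vector<float>(static_cast<size_t>(s.in_dim) * s.out_dim, 0.01f);
}

int main() {
    std::vector<LayerSpec> model = {{8, 16}, {16, 16}, {16, 4}};
    std::vector<float> act(8, 1.0f);             // input activations

    for (const auto& layer : model) {
        std::vector<float> w = fetch_weights(layer);   // load only this layer
        std::vector<float> next(layer.out_dim, 0.0f);
        for (int o = 0; o < layer.out_dim; ++o)        // simple fully connected step
            for (int i = 0; i < layer.in_dim; ++i)
                next[o] += act[i] * w[static_cast<size_t>(o) * layer.in_dim + i];
        act = std::move(next);      // this layer's weights are freed before the next load
    }

    float sum = std::accumulate(act.begin(), act.end(), 0.0f);
    std::printf("output sum: %f\n", sum);
    return 0;
}

The point of the sketch is only the memory discipline: at no time do two layers' weight buffers coexist, which is the kind of behavior needed to avoid the expensive enclave page swapping the abstract mentions.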

Details

Database :
OpenAIRE
Journal :
The 25th Annual International Conference on Mobile Computing and Networking
Accession number :
edsair.doi...........daf81841df1ebf21fff808d459638f9e
Full Text :
https://doi.org/10.1145/3300061.3345447