
Enabling Homomorphically Encrypted Inference for Large DNN Models.

Authors :
Lloret-Talavera, Guillermo
Jorda, Marc
Servat, Harald
Boemer, Fabian
Chauhan, Chetan
Tomishima, Shigeki
Shah, Nilesh N.
Pena, Antonio J.
Source :
IEEE Transactions on Computers. May 2022, Vol. 71 Issue 5, p1145-1155. 11p.
Publication Year :
2022

Abstract

The proliferation of machine learning services in the last few years has raised data privacy concerns. Homomorphic encryption (HE) enables inference over encrypted data, but it incurs 100x–10,000x memory and runtime overheads. Secure deep neural network (DNN) inference using HE is currently limited by computing and memory resources, with frameworks requiring hundreds of gigabytes of DRAM to evaluate small models. To overcome these limitations, in this paper we explore the feasibility of leveraging hybrid memory systems composed of DRAM and persistent memory. In particular, we explore the recently-released Intel® Optane™ PMem technology and the Intel® HE-Transformer nGraph® to run large neural networks such as MobileNetV2 (in its largest variant) and ResNet-50 for the first time in the literature. We present an in-depth analysis of the efficiency of the executions under different hardware and software configurations. Our results show that DNN inference using HE exhibits access patterns that are friendly to this memory configuration, yielding efficient executions. [ABSTRACT FROM AUTHOR]
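To illustrate the core property the abstract relies on — a server computing on ciphertexts without ever decrypting them — below is a minimal toy sketch of the Paillier cryptosystem in Python. This is an illustrative additively homomorphic scheme only; the paper's HE-Transformer framework actually uses the CKKS scheme, and the toy parameters here are far too small for any real security:

```python
# Toy Paillier cryptosystem (illustration only; NOT the CKKS scheme used by
# HE-Transformer, and the primes are far too small to be secure).
# Demonstrates the additive homomorphic property:
#   Dec(Enc(a) * Enc(b) mod n^2) == a + b
# which is what lets an untrusted server compute on encrypted inputs.
import math
import random

p, q = 293, 433            # toy primes; real deployments use >1024-bit primes
n = p * q
n2 = n * n
g = n + 1                  # standard generator choice that simplifies Enc/Dec
lam = math.lcm(p - 1, q - 1)

def L(u):
    """The L function from Paillier decryption: L(u) = (u - 1) / n."""
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)   # precomputed decryption constant

def encrypt(m):
    """Encrypt plaintext m (0 <= m < n) with fresh randomness r."""
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

a, b = 21, 17
ca, cb = encrypt(a), encrypt(b)
# Multiplying ciphertexts adds the underlying plaintexts -- the server never
# sees a, b, or a + b in the clear:
assert decrypt((ca * cb) % n2) == a + b
```

The 100x–10,000x overheads the abstract cites come from exactly this kind of blow-up: each plaintext value becomes a large modular object (here a number mod n², in CKKS a large polynomial), which is why evaluating even small DNNs can require hundreds of gigabytes of memory.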

Details

Language :
English
ISSN :
0018-9340
Volume :
71
Issue :
5
Database :
Academic Search Index
Journal :
IEEE Transactions on Computers
Publication Type :
Academic Journal
Accession number :
156273025
Full Text :
https://doi.org/10.1109/TC.2021.3076123