1. Privacy Preserving Deep Learning with Distributed Encoders
- Author
- Yitian Zhang, Errol Colak, Hojjat Salehinejad, Joseph Barfett, and Shahrokh Valaee
- Subjects
- Artificial neural network, Computer science, Deep learning, Inference, Encryption, Machine learning, Annotation, Artificial intelligence, Encoder, MNIST database
- Abstract
In this paper, we propose a distributed machine learning framework for training and inference on distributed data while preserving the privacy of the data owners. In the training mode, we deploy an encoder on the end-user device which extracts high-level features from the input data. The extracted features, along with the corresponding annotations, are sent to a centralized machine learning server. In the inference mode, users submit the features extracted by the encoder to the server instead of the original data. This approach enables users to contribute to training a machine learning model and to use inference services without sharing their original data with the server or a third party. We have studied this approach on the MNIST, Fashion, SVHN, and CIFAR-10 datasets. The results show high classification accuracy for neural networks trained on the encoded features, as well as high encryption performance of the encoders.
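As a rough illustration of the client/server split described in the abstract (not the authors' released code), the sketch below places an encoder on the end-user device and a classifier on the server, so that only encoded feature vectors and their labels ever leave the device. The use of PyTorch, the layer sizes, and the 28x28 single-channel input are assumptions made for this example.

```python
# Minimal sketch, assuming a fixed on-device encoder and a server-side classifier
# trained only on the encoded features (never on raw images).
import torch
import torch.nn as nn

class ClientEncoder(nn.Module):
    """Runs on the end-user device; maps a raw image to a feature vector."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # 32-dim feature vector
        )

    def forward(self, x):
        return self.net(x)

class ServerClassifier(nn.Module):
    """Runs on the server; sees only encoded features and labels."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(32, 64), nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, features):
        return self.head(features)

if __name__ == "__main__":
    encoder = ClientEncoder().eval()       # fixed encoder on the device
    classifier = ServerClassifier()
    images = torch.randn(8, 1, 28, 28)     # stand-in for an MNIST-sized batch
    labels = torch.randint(0, 10, (8,))

    with torch.no_grad():
        features = encoder(images)         # only these features leave the device
    logits = classifier(features)          # server-side training/inference step
    loss = nn.functional.cross_entropy(logits, labels)
    print(features.shape, loss.item())
```

In this split, the server never observes raw pixels during either training or inference; it operates solely on the feature vectors produced by the on-device encoder.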
- Published
- 2019