
Feature Inference Attack on Model Predictions in Vertical Federated Learning

Authors:
Luo, Xinjian
Wu, Yuncheng
Xiao, Xiaokui
Ooi, Beng Chin
Publication Year:
2020

Abstract

Federated learning (FL) is an emerging paradigm that enables multiple organizations to collaborate on data analysis without revealing their private data to each other. Recently, vertical FL, where the participating organizations hold the same set of samples but disjoint features and only one organization owns the labels, has received increasing attention. This paper presents several feature inference attack methods to investigate the potential privacy leakage in the model prediction stage of vertical FL. The attack methods consider the most stringent setting, in which the adversary controls only the trained vertical FL model and the model predictions, relying on no background information. We first propose two specific attacks on the logistic regression (LR) and decision tree (DT) models, based on individual prediction outputs. We then design a general attack method, based on multiple prediction outputs accumulated by the adversary, to handle more complex models such as neural network (NN) and random forest (RF) models. Experimental evaluations demonstrate the effectiveness of the proposed attacks and highlight the need for private mechanisms that protect prediction outputs in vertical FL.

Comment: Accepted at the IEEE 37th International Conference on Data Engineering (ICDE 2021); 15 pages
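To make the threat model concrete, below is a minimal, self-contained sketch of the kind of inference the abstract describes for the logistic regression case, under strong simplifying assumptions: binary LR, a passive party holding a single feature, and an adversary that sees the full trained model and the confidence score. It is not the paper's actual attack algorithm; all variable names are illustrative.

```python
# Hedged toy sketch of feature inference from a vertical-FL logistic regression
# prediction. Assumptions (not from the paper): binary LR, the passive party
# holds exactly one feature, and the adversary knows all model weights, its own
# features, and the returned confidence score.
import numpy as np

rng = np.random.default_rng(0)

# Trained vertical LR model: adversary-side weights, target-side weight, bias.
w_adv = rng.normal(size=3)   # weights for the adversary's own 3 features
w_tgt = 1.7                  # weight for the target party's single feature
b = -0.4

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One prediction round: the adversary knows its own inputs and the output.
x_adv = rng.normal(size=3)   # adversary's local features (known to it)
x_tgt = 2.3                  # target party's private feature (unknown)
p = sigmoid(w_adv @ x_adv + w_tgt * x_tgt + b)   # confidence score received

# Inversion: logit(p) = w_adv . x_adv + w_tgt * x_tgt + b  =>  solve for x_tgt.
logit = np.log(p / (1.0 - p))
x_tgt_hat = (logit - w_adv @ x_adv - b) / w_tgt

print(f"true feature = {x_tgt:.4f}, inferred feature = {x_tgt_hat:.4f}")
```

The abstract's general attack on NN and RF models instead accumulates multiple prediction outputs; the sketch above only illustrates the simplest single-output LR setting it mentions.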

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2010.10152
Document Type:
Working Paper
Full Text:
https://doi.org/10.1109/ICDE51399.2021.00023