
GCON: Differentially Private Graph Convolutional Network via Objective Perturbation

Authors:
Wei, Jianxin
Zhu, Yizheng
Xiao, Xiaokui
Bao, Ergute
Yang, Yin
Cai, Kuntai
Ooi, Beng Chin
Publication Year:
2024

Abstract

Graph Convolutional Networks (GCNs) are popular machine learning models with a wide range of applications in graph analytics, including healthcare, transportation, and finance. Like other neural networks, a GCN may memorize parts of the training data through its model weights. Thus, when the underlying graph contains sensitive information such as interpersonal relationships, a GCN trained without privacy-protection measures could be exploited to extract private data, leading to potential violations of privacy regulations such as the GDPR. To defend against such attacks, a promising approach is to train the GCN with differential privacy (DP), a rigorous framework that provides strong privacy protection by injecting random noise into the trained model weights. However, training a large graph neural network under DP is highly challenging. Existing solutions either introduce random perturbations into the graph topology, which severely distorts the network's message passing, or inject randomness into each neighborhood aggregation operation, which leads to a high noise scale when the GCN performs multiple levels of aggregation. Motivated by this, we propose GCON, a novel and effective solution for training GCNs with edge differential privacy. The main idea is to (i) convert the GCN training process into a convex optimization problem, and then (ii) apply the classic technique of perturbing the objective function to satisfy DP. Extensive experiments on multiple benchmark datasets demonstrate GCON's consistent and superior performance over existing solutions in a wide variety of settings.
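The objective-perturbation idea named in step (ii) predates GCON: for a convex, regularized loss, one samples a random vector and adds a linear noise term to the objective before optimizing, so that the released minimizer itself satisfies DP. Below is a minimal sketch of that classic recipe (in the spirit of Chaudhuri et al.'s private empirical risk minimization) applied to a plain logistic-regression objective. It is not GCON's algorithm; the loss, noise distribution, and scale are illustrative assumptions, not the paper's calibration.

```python
import numpy as np
from scipy.optimize import minimize

def perturbed_objective_train(X, y, eps, lam=1.0, rng=None):
    """Illustrative objective perturbation for a convex loss (NOT GCON).

    Adds a random linear term b^T w / n to a regularized logistic loss
    before optimizing, following the classic private-ERM recipe. Noise
    scale and privacy accounting here are simplified assumptions.
    """
    rng = np.random.default_rng() if rng is None else rng
    n, d = X.shape

    # Sample b with density proportional to exp(-(eps/2) * ||b||):
    # norm ~ Gamma(shape=d, scale=2/eps), direction uniform on the sphere.
    norm = rng.gamma(shape=d, scale=2.0 / eps)
    direction = rng.normal(size=d)
    direction /= np.linalg.norm(direction)
    b = norm * direction

    def objective(w):
        # Average logistic loss (numerically stable via logaddexp)
        # + L2 regularizer + the random linear perturbation term.
        z = y * (X @ w)
        loss = np.mean(np.logaddexp(0.0, -z))
        reg = 0.5 * lam * np.dot(w, w)
        noise = np.dot(b, w) / n
        return loss + reg + noise

    result = minimize(objective, np.zeros(d), method="L-BFGS-B")
    return result.x

# Hypothetical usage on synthetic data:
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = np.sign(X @ rng.normal(size=10) + 0.1 * rng.normal(size=500))
w_priv = perturbed_objective_train(X, y, eps=1.0, rng=rng)
```

Because the noise enters the objective rather than the final weights or the gradients, the optimizer can still converge exactly; convexity is what makes the privacy analysis of the released minimizer tractable, which is why GCON first convexifies GCN training.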

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2407.05034
Document Type:
Working Paper