
A Duty to Forget, a Right to be Assured? Exposing Vulnerabilities in Machine Unlearning Services

Authors:
Hu, Hongsheng
Wang, Shuo
Chang, Jiamin
Zhong, Haonan
Sun, Ruoxi
Hao, Shuang
Zhu, Haojin
Xue, Minhui
Publication Year:
2023

Abstract

The right to be forgotten requires the removal, or "unlearning," of a user's data from machine learning models. However, in the context of Machine Learning as a Service (MLaaS), retraining a model from scratch to fulfill an unlearning request is impractical because the service provider (the server) lacks the original training data. Approximate unlearning, meanwhile, entails a complex trade-off between utility (model performance) and privacy (unlearning performance). In this paper, we explore the potential threats posed by unlearning services in MLaaS, specifically over-unlearning, where more information is unlearned than expected. We propose two strategies that leverage over-unlearning to disrupt this trade-off under black-box access settings, in which existing machine unlearning attacks are not applicable. We evaluate the effectiveness of these strategies through extensive experiments on benchmark datasets, across various model architectures and representative unlearning approaches. Results indicate that both strategies can significantly undermine model efficacy in unlearning scenarios. This study uncovers an underexplored gap between unlearning and contemporary MLaaS, highlighting the need for careful consideration in balancing data unlearning, model utility, and security.

Comment: To appear in the Network and Distributed System Security Symposium (NDSS) 2024, San Diego, CA, USA
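To make the over-unlearning notion concrete, a minimal sketch of how one might quantify it under black-box access: compare a model's accuracy on *retained* (non-forgotten) data before and after an unlearning request. The function names and the gap metric below are illustrative assumptions, not the paper's actual strategies.

```python
# Hypothetical sketch: an excess utility drop on retained data after an
# unlearning request hints that more information was removed than the
# forget set alone should account for (over-unlearning).

def accuracy(preds, labels):
    """Fraction of correct predictions."""
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def over_unlearning_gap(retain_acc_before, retain_acc_after):
    """Utility lost on retained data after unlearning.

    A large positive gap suggests over-unlearning: the server degraded
    the model beyond what removing the requested data would explain.
    """
    return retain_acc_before - retain_acc_after

# Toy numbers standing in for black-box queries against an MLaaS model.
before = accuracy([1, 0, 1, 1], [1, 0, 1, 1])   # retained-set accuracy pre-unlearning
after = accuracy([1, 0, 0, 0], [1, 0, 1, 1])    # retained-set accuracy post-unlearning
gap = over_unlearning_gap(before, after)
print(f"over-unlearning gap: {gap:.2f}")
```

In practice such a measurement would use held-out retained samples and repeated queries; the sketch only illustrates the utility-versus-privacy tension the abstract describes.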

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2309.08230
Document Type:
Working Paper