
KDGAN: Knowledge distillation‐based model copyright protection for secure and communication‐efficient model publishing.

Authors :
Xie, Bingyi
Xu, Honghui
Seo, Daehee
Shin, DongMyung
Cai, Zhipeng
Source :
IET Communications (Wiley-Blackwell). Aug 2024, Vol. 18, Issue 14, p860-868. 9p.
Publication Year :
2024

Abstract

Deep learning‐based models have become ubiquitous across a wide range of applications, including computer vision, natural language processing, and robotics. Despite their efficacy, deep neural network (DNN) models face a significant challenge: the risk of copyright leakage, which stems from the inherent vulnerability of exposing the entire model architecture, compounded by the communication burden of publishing large models. It remains challenging to safeguard the intellectual property rights of DNN models while also reducing communication time during model publishing. To this end, this paper introduces a novel approach that uses knowledge distillation to train a surrogate model to stand in for the original DNN model. Specifically, a knowledge distillation generative adversarial network (KDGAN) is proposed to train a student model that achieves remarkable performance while safeguarding the copyright of the original large teacher model and improving communication efficiency during model publishing. Comprehensive experiments demonstrate the efficacy of model copyright protection, communication‐efficient model publishing, and the superiority of the proposed KDGAN model over other copyright protection mechanisms. [ABSTRACT FROM AUTHOR]
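The abstract does not specify the KDGAN training objective, so the following is a minimal, hypothetical PyTorch sketch of GAN-style knowledge distillation: the student minimizes a temperature-softened KL loss against the teacher plus a task loss, while a small discriminator (assumed here to operate on softmax outputs) tries to tell teacher predictions from student predictions. The class names, the loss weights (alpha and the 0.1 adversarial weight), and the temperature are illustrative assumptions, not the paper's settings.

```python
# Hypothetical sketch of GAN-style knowledge distillation (not the paper's
# exact KDGAN architecture, which the abstract does not detail).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Discriminator(nn.Module):
    """Judges whether a softmax output came from the teacher or the student."""
    def __init__(self, num_classes: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_classes, 64), nn.ReLU(),
            nn.Linear(64, 1),  # single logit: teacher (1) vs. student (0)
        )

    def forward(self, probs: torch.Tensor) -> torch.Tensor:
        return self.net(probs)

def distillation_step(teacher, student, disc, x, y,
                      opt_s, opt_d, temperature=4.0, alpha=0.5):
    """One training step: the student imitates the teacher while the
    discriminator's adversarial signal sharpens the imitation."""
    with torch.no_grad():
        t_logits = teacher(x)          # teacher is frozen
    s_logits = student(x)

    # Discriminator update: teacher outputs are "real", student "fake".
    t_probs = F.softmax(t_logits, dim=1)
    s_probs = F.softmax(s_logits, dim=1).detach()
    d_real, d_fake = disc(t_probs), disc(s_probs)
    d_loss = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
              + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Student update: softened KD loss + task loss + fool-the-discriminator loss.
    kd = F.kl_div(F.log_softmax(s_logits / temperature, dim=1),
                  F.softmax(t_logits / temperature, dim=1),
                  reduction="batchmean") * temperature ** 2
    ce = F.cross_entropy(s_logits, y)
    adv_out = disc(F.softmax(s_logits, dim=1))
    adv = F.binary_cross_entropy_with_logits(adv_out, torch.ones_like(adv_out))
    s_loss = alpha * kd + (1 - alpha) * ce + 0.1 * adv
    opt_s.zero_grad(); s_loss.backward(); opt_s.step()
    return d_loss.item(), s_loss.item()
```

Under this reading, only the trained student is ever published: the smaller surrogate reduces communication cost, and the teacher's architecture and weights are never exposed, which is the copyright-protection claim the abstract makes.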

Details

Language :
English
ISSN :
1751-8628
Volume :
18
Issue :
14
Database :
Academic Search Index
Journal :
IET Communications (Wiley-Blackwell)
Publication Type :
Academic Journal
Accession number :
178945483
Full Text :
https://doi.org/10.1049/cmu2.12795