
GumbelSoft: Diversified Language Model Watermarking via the GumbelMax-trick

Authors :
Fu, Jiayi
Zhao, Xuandong
Yang, Ruihan
Zhang, Yuansen
Chen, Jiangjie
Xiao, Yanghua
Publication Year :
2024

Abstract

Large language models (LLMs) excel at generating human-like text, but they also raise concerns about misuse in fake news and academic dishonesty. Decoding-based watermarks, particularly the GumbelMax-trick-based watermark (GM watermark), stand out as solutions for safeguarding machine-generated text due to their notable detectability. However, the GM watermark faces a major challenge with generation diversity: it always yields identical outputs for the same prompt, which degrades generation diversity and user experience. To overcome this limitation, we propose a new type of GM watermark, the Logits-Addition watermark, along with three variants specifically designed to enhance diversity. Among these, the GumbelSoft watermark (a softmax variant of the Logits-Addition watermark) demonstrates superior performance in high-diversity settings: its AUROC score exceeds those of the two alternative variants by 0.1 to 0.3 and surpasses other decoding-based watermarking methods by at least 0.1.
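As background for the abstract's terminology, the GumbelMax trick states that adding independent Gumbel(0,1) noise to a token's logits and taking the argmax is equivalent to sampling from the softmax distribution over those logits; GM watermarks exploit this by replacing the fresh noise with pseudorandom noise derived from a secret key, which is also why the same prompt yields identical outputs. The sketch below (not the paper's implementation; function and variable names are illustrative) empirically checks the basic equivalence:

```python
import numpy as np

def gumbel_max_sample(logits, rng):
    """Sample a token index via the GumbelMax trick.

    Adding Gumbel(0,1) noise to the logits and taking the argmax
    is distributionally equivalent to sampling from softmax(logits).
    In a GM watermark, `rng` would be a keyed pseudorandom source
    rather than fresh randomness (illustrative simplification).
    """
    u = rng.uniform(size=logits.shape)
    g = -np.log(-np.log(u))  # Gumbel(0,1) noise
    return int(np.argmax(logits + g))

rng = np.random.default_rng(0)
logits = np.array([2.0, 1.0, 0.5])
softmax = np.exp(logits) / np.exp(logits).sum()

# Empirical token frequencies should approach the softmax distribution.
counts = np.zeros(len(logits))
for _ in range(20000):
    counts[gumbel_max_sample(logits, rng)] += 1
empirical = counts / counts.sum()
```

With fresh noise each draw the empirical frequencies match softmax sampling; fixing the noise per (key, context) pair instead makes the argmax, and hence the generated text, deterministic, which is the diversity problem the GumbelSoft variant targets.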

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2402.12948
Document Type :
Working Paper