
Smoothed Embeddings for Robust Language Models

Authors :
Hase, Ryo
Rashid, Md Rafi Ur
Lewis, Ashley
Liu, Jing
Koike-Akino, Toshiaki
Parsons, Kieran
Wang, Ye
Publication Year :
2025

Abstract

Improving the safety and reliability of large language models (LLMs) is crucial to realizing trustworthy AI systems. Although alignment methods aim to suppress harmful content generation, LLMs often remain vulnerable to jailbreaking attacks that use adversarial inputs to subvert alignment and induce harmful outputs. We propose the Randomized Embedding Smoothing and Token Aggregation (RESTA) defense, which adds random noise to the embedding vectors and performs aggregation during the generation of each output token, with the aim of better preserving semantic information. Our experiments demonstrate that our approach achieves superior robustness-versus-utility tradeoffs compared to baseline defenses.

Comment: Presented in the Safe Generative AI Workshop at NeurIPS 2024
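The core idea described above can be sketched in a toy form: perturb the input embedding with random noise several times, score each noisy copy, and aggregate the per-sample token decisions. The linear scoring function, the majority-vote aggregation rule, and all parameter names below are illustrative assumptions for exposition, not the paper's exact procedure.

```python
import random


def toy_logits(embedding, weight):
    # Hypothetical stand-in for an LLM's next-token scores:
    # a simple linear map from the embedding to vocabulary logits.
    return [sum(w * e for w, e in zip(row, embedding)) for row in weight]


def smoothed_next_token(embedding, weight, num_samples=8, sigma=0.1, seed=0):
    """Randomized embedding smoothing with token aggregation (sketch).

    Adds i.i.d. Gaussian noise to the input embedding, computes next-token
    logits for each noisy copy, and aggregates by majority vote over the
    per-sample argmax tokens. The voting rule is an assumption here; the
    paper may aggregate differently.
    """
    rng = random.Random(seed)
    votes = {}
    for _ in range(num_samples):
        # Perturb the embedding with zero-mean Gaussian noise.
        noisy = [e + rng.gauss(0.0, sigma) for e in embedding]
        logits = toy_logits(noisy, weight)
        # Each noisy copy casts one vote for its argmax token.
        tok = max(range(len(logits)), key=logits.__getitem__)
        votes[tok] = votes.get(tok, 0) + 1
    # Return the token chosen most often across the noisy samples.
    return max(votes, key=votes.get)
```

With small noise the aggregated decision matches the clean argmax, while larger noise trades utility for robustness to adversarial perturbations of the embedding; this tension is the robustness-versus-utility tradeoff the abstract refers to.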

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2501.16497
Document Type :
Working Paper