
Towards Comprehensive and Efficient Post Safety Alignment of Large Language Models via Safety Patching

Authors:
Zhao, Weixiang
Hu, Yulin
Li, Zhuojun
Deng, Yang
Zhao, Yanyan
Qin, Bing
Chua, Tat-Seng
Publication Year:
2024

Abstract

Safety alignment of large language models (LLMs) has been gaining increasing attention. However, current safety-aligned LLMs suffer from fragile and imbalanced safety mechanisms: they can still be induced to generate unsafe responses, exhibit over-safety by rejecting safe user inputs, and fail to preserve general utility after safety alignment. To this end, we propose a novel post safety alignment (PSA) method to address these inherent and emerging safety challenges, including safety enhancement, over-safety mitigation, and utility preservation. Specifically, we introduce SafePatching, a novel framework for comprehensive and efficient PSA, in which two distinct safety patches are developed on harmful data to enhance safety and mitigate over-safety concerns, and then seamlessly integrated into the target LLM backbone without compromising its utility. Extensive experiments show that SafePatching achieves more comprehensive and efficient PSA than baseline methods. It even enhances the utility of the backbone, further optimizing the balance between being helpful and harmless in current aligned LLMs. SafePatching also demonstrates its superiority in continual PSA scenarios.

Comment: 24 pages, 8 figures and 12 tables
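To make the patching idea concrete, below is a minimal, hypothetical sketch of what a patch-and-merge PSA step could look like. It assumes each safety patch is a per-parameter delta between a model tuned on harmful data and the original backbone, merged back with per-patch scaling. The helper names (parameter_delta, apply_patches), the toy checkpoints, and the uniform scaling scheme are illustrative assumptions, not the paper's actual algorithm.

```python
# A minimal, hypothetical sketch of patch-and-merge post safety alignment
# (an illustration, NOT SafePatching's actual algorithm): each "safety patch"
# is modeled as a per-parameter delta between a tuned model and the backbone,
# and the patches are merged back into the backbone with per-patch scales.
import torch
from torch import nn


@torch.no_grad()
def parameter_delta(tuned: nn.Module, backbone: nn.Module) -> dict[str, torch.Tensor]:
    """Compute a 'patch': the per-parameter difference tuned - backbone."""
    base = dict(backbone.named_parameters())
    return {name: p.detach() - base[name] for name, p in tuned.named_parameters()}


@torch.no_grad()
def apply_patches(backbone: nn.Module,
                  patches: list[dict[str, torch.Tensor]],
                  scales: list[float]) -> nn.Module:
    """Merge scaled patches into the backbone's parameters in place."""
    for name, param in backbone.named_parameters():
        for patch, scale in zip(patches, scales):
            if name in patch:
                param.add_(scale * patch[name])
    return backbone


# Usage with toy models (a real run would load LLM checkpoints, e.g. via
# transformers.AutoModelForCausalLM.from_pretrained; the names are placeholders):
backbone = nn.Linear(4, 4)
safety_tuned = nn.Linear(4, 4)       # stand-in: model tuned to refuse harmful inputs
oversafety_tuned = nn.Linear(4, 4)   # stand-in: model tuned to accept benign inputs
patches = [parameter_delta(safety_tuned, backbone),
           parameter_delta(oversafety_tuned, backbone)]
apply_patches(backbone, patches, scales=[1.0, 1.0])
```

Uniform scaling is the simplest possible merge rule; a more selective integration, such as restricting each patch to the parameters where it matters most, is a natural refinement for avoiding interference between the two patches and with the backbone's general utility.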

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2405.13820
Document Type:
Working Paper