SDPO: Segment-Level Direct Preference Optimization for Social Agents

Authors:
Kong, Aobo
Ma, Wentao
Zhao, Shiwan
Li, Yongbin
Wu, Yuchuan
Wang, Ke
Liu, Xiaoqian
Li, Qicheng
Qin, Yong
Huang, Fei
Publication Year:
2025

Abstract

Social agents powered by large language models (LLMs) can simulate human social behaviors but fall short in handling complex goal-oriented social dialogues. Direct Preference Optimization (DPO) has proven effective in aligning LLM behavior with human preferences across a variety of agent tasks. Existing DPO-based approaches for multi-turn interactions are divided into turn-level and session-level methods. Turn-level methods are overly fine-grained, focusing exclusively on individual turns, while session-level methods are too coarse-grained, often introducing training noise. To address these limitations, we propose Segment-Level Direct Preference Optimization (SDPO), which focuses on specific key segments within interactions to optimize multi-turn agent behavior while minimizing training noise. Evaluations on the SOTOPIA benchmark demonstrate that SDPO-tuned agents consistently outperform both existing DPO-based methods and proprietary LLMs like GPT-4o, underscoring SDPO's potential to advance the social intelligence of LLM-based agents. We release our code and data at https://github.com/AlibabaResearch/DAMO-ConvAI/tree/main/SDPO.
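
The abstract's contrast between turn-, session-, and segment-level optimization can be made concrete with the standard DPO objective restricted to a token mask. The sketch below is illustrative only, not the paper's released implementation: it assumes per-token log-probabilities have been precomputed for the chosen and rejected dialogues, and the names (segment_dpo_loss, chosen_mask, beta) are hypothetical.

import torch
import torch.nn.functional as F

def segment_dpo_loss(
    pi_chosen, ref_chosen,        # (batch, seq) per-token log-probs, chosen dialogue
    pi_rejected, ref_rejected,    # (batch, seq) per-token log-probs, rejected dialogue
    chosen_mask, rejected_mask,   # (batch, seq) 1.0 on tokens inside the key segment
    beta=0.1,
):
    # Sum policy/reference log-ratios over the masked tokens only. A turn-level
    # method would mask a single turn and a session-level method the whole
    # session; a segment-level method masks the key segment of the interaction.
    chosen_logratio = ((pi_chosen - ref_chosen) * chosen_mask).sum(-1)
    rejected_logratio = ((pi_rejected - ref_rejected) * rejected_mask).sum(-1)
    # Standard DPO (Bradley-Terry) loss on the segment-restricted log-ratios.
    return -F.logsigmoid(beta * (chosen_logratio - rejected_logratio)).mean()

On this view, the granularity of the preference signal is entirely determined by the mask, which is what lets a segment-level objective target the decisive turns while discarding the noisy remainder of the session.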

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2501.01821
Document Type:
Working Paper