
Towards Reliable Evaluation of Behavior Steering Interventions in LLMs

Authors:
Pres, Itamar
Ruis, Laura
Lubana, Ekdeep Singh
Krueger, David
Publication Year:
2024

Abstract

Representation engineering methods have recently shown promise for enabling efficient steering of model behavior. However, evaluation pipelines for these methods have primarily relied on subjective demonstrations, instead of quantitative, objective metrics. We aim to take a step towards addressing this issue by advocating for four properties missing from current evaluations: (i) contexts sufficiently similar to downstream tasks should be used for assessing intervention quality; (ii) model likelihoods should be accounted for; (iii) evaluations should allow for standardized comparisons across different target behaviors; and (iv) baseline comparisons should be offered. We introduce an evaluation pipeline grounded in these criteria, offering both a quantitative and visual analysis of how effectively a given method works. We use this pipeline to evaluate two representation engineering methods on how effectively they can steer behaviors such as truthfulness and corrigibility, finding that some interventions are less effective than previously reported.

Comment: Accepted to the NeurIPS 2024 Workshop on Foundation Model Interventions

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2410.17245
Document Type:
Working Paper