
DualPrompt: Complementary Prompting for Rehearsal-free Continual Learning

Authors:
Wang, Zifeng
Zhang, Zizhao
Ebrahimi, Sayna
Sun, Ruoxi
Zhang, Han
Lee, Chen-Yu
Ren, Xiaoqi
Su, Guolong
Perot, Vincent
Dy, Jennifer
Pfister, Tomas
Publication Year:
2022
Publisher:
arXiv, 2022.

Abstract

Continual learning aims to enable a single model to learn a sequence of tasks without catastrophic forgetting. Top-performing methods usually require a rehearsal buffer to store past pristine examples for experience replay, which, however, limits their practical value due to privacy and memory constraints. In this work, we present a simple yet effective framework, DualPrompt, which learns a tiny set of parameters, called prompts, to properly instruct a pre-trained model to learn tasks arriving sequentially without buffering past examples. DualPrompt presents a novel approach to attach complementary prompts to the pre-trained backbone, and then formulates the objective as learning task-invariant and task-specific "instructions". With extensive experimental validation, DualPrompt consistently sets state-of-the-art performance under the challenging class-incremental setting. In particular, DualPrompt outperforms recent advanced continual learning methods with relatively large buffer sizes. We also introduce a more challenging benchmark, Split ImageNet-R, to help generalize rehearsal-free continual learning research. Source code is available at https://github.com/google-research/l2p.

Comment: Published at ECCV 2022 as a conference paper.
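To make the "complementary prompts" idea concrete, below is a minimal PyTorch sketch of learning task-invariant and task-specific prompt parameters against a frozen backbone. It is an illustration under simplifying assumptions, not the authors' implementation: the paper attaches its general (G-Prompt) and expert (E-Prompt) components inside selected self-attention layers and selects the expert prompt at inference by matching query features to learned keys, whereas this sketch prepends both at the input and assumes the task id is given. All names and sizes here are hypothetical.

# Minimal sketch, assuming a PyTorch setting with a frozen ViT backbone;
# simplifications and names are illustrative, not the authors' code.
import torch
import torch.nn as nn

class DualPromptSketch(nn.Module):
    def __init__(self, num_tasks: int, embed_dim: int = 768,
                 g_len: int = 5, e_len: int = 20):
        super().__init__()
        # Shared "general" prompt: learns task-invariant instructions.
        self.g_prompt = nn.Parameter(0.02 * torch.randn(g_len, embed_dim))
        # One "expert" prompt per task: learns task-specific instructions.
        self.e_prompts = nn.Parameter(0.02 * torch.randn(num_tasks, e_len, embed_dim))

    def forward(self, tokens: torch.Tensor, task_id: int) -> torch.Tensor:
        # tokens: (batch, seq_len, embed_dim) patch embeddings from a
        # frozen pre-trained ViT; only prompts (and a classifier) train.
        b = tokens.shape[0]
        g = self.g_prompt.unsqueeze(0).expand(b, -1, -1)
        e = self.e_prompts[task_id].unsqueeze(0).expand(b, -1, -1)
        # Prepend both prompt sets so attention can condition on them.
        return torch.cat([g, e, tokens], dim=1)

# Example: 8 images, 196 patches, ViT-Base width; output gains 25 tokens.
prompts = DualPromptSketch(num_tasks=10)
x = torch.randn(8, 196, 768)
y = prompts(x, task_id=3)  # shape: (8, 5 + 20 + 196, 768)

Because only the prompt parameters are updated per task while the backbone stays frozen, no rehearsal buffer of past examples is needed; this is the property the abstract refers to as rehearsal-free continual learning.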

Details

Database:
OpenAIRE
Accession number:
edsair.doi.dedup.....ea96294bce7a1b3853fafb8406801004
Full Text:
https://doi.org/10.48550/arxiv.2204.04799