
Phased Consistency Model

Authors:
Wang, Fu-Yun
Huang, Zhaoyang
Bergman, Alexander William
Shen, Dazhong
Gao, Peng
Lingelbach, Michael
Sun, Keqiang
Bian, Weikang
Song, Guanglu
Liu, Yu
Li, Hongsheng
Wang, Xiaogang
Publication Year:
2024

Abstract

The consistency model (CM) has recently made significant progress in accelerating the generation of diffusion models. However, its application to high-resolution, text-conditioned image generation in the latent space (i.e., the latent consistency model, LCM) remains unsatisfactory. In this paper, we identify three key flaws in the current design of LCM, investigate the reasons behind these limitations, and propose the Phased Consistency Model (PCM), which generalizes the design space and addresses all of the identified limitations. Our evaluations show that PCM significantly outperforms LCM across 1-16 step generation settings. Although PCM is designed specifically for multi-step refinement, its 1-step generation results are superior or comparable to those of previous state-of-the-art methods built specifically for 1-step generation. Furthermore, we show that PCM's methodology is versatile and applies to video generation, enabling us to train a state-of-the-art few-step text-to-video generator. More details are available at https://g-u-n.github.io/projects/pcm/.
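For context on the "1-16 step generation settings" mentioned above, the following is a minimal, generic sketch of multistep consistency sampling in the style of Song et al.'s consistency models: the model's consistency function maps a noisy sample directly to a clean estimate, which is then re-noised to a lower noise level and refined again. This is not PCM's phased scheme from the paper; the function name `consistency_fn`, the time schedule, and the noise bounds are hypothetical placeholders chosen for illustration.

```python
# Generic multistep consistency sampling sketch (not PCM's phased variant).
# `consistency_fn`, the schedule, and sigma bounds are illustrative assumptions.
import torch


def multistep_consistency_sample(
    consistency_fn,              # hypothetical: f(x, sigma) -> clean estimate x0
    shape,                       # output tensor shape, e.g. (batch, C, H, W)
    time_points,                 # decreasing noise levels, e.g. [80.0, 20.0, 5.0, 1.0]
    sigma_min: float = 0.002,    # smallest noise level of the model
    device: str = "cpu",
):
    """Few-step sampling: predict x0, then re-noise to the next lower level."""
    sigma_max = time_points[0]
    # Start from pure Gaussian noise at the highest noise level.
    x = torch.randn(shape, device=device) * sigma_max
    x = consistency_fn(x, sigma_max)
    for sigma in time_points[1:]:
        z = torch.randn_like(x)
        # Perturb the current clean estimate back to noise level `sigma`.
        x_sigma = x + (sigma ** 2 - sigma_min ** 2) ** 0.5 * z
        # One additional consistency-function evaluation refines the estimate.
        x = consistency_fn(x_sigma, sigma)
    return x


if __name__ == "__main__":
    # Dummy consistency function so the sketch runs end to end; a trained
    # (latent) consistency model would take its place in practice.
    dummy_fn = lambda x, sigma: x / (1.0 + sigma)
    sample = multistep_consistency_sample(dummy_fn, (1, 3, 64, 64), [80.0, 20.0, 5.0, 1.0])
    print(sample.shape)
```

Each additional entry in `time_points` adds one model evaluation, which is what a "1-16 step" setting varies; PCM's contribution concerns how the trajectory is partitioned and trained, not this outer sampling loop.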

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2405.18407
Document Type:
Working Paper