响马读paper

An academic exchange community whose members are required to read at least one paper per month and check in

2023, arXiv. DOI: 10.48550/arXiv.2303.01469 arXiv ID: 2303.01469
Consistency Models
Yang Song, Prafulla Dhariwal, Mark Chen, Ilya Sutskever
Abstract:
Diffusion models have significantly advanced the fields of image, audio, and
video generation, but they depend on an iterative sampling process that causes
slow generation. To overcome this limitation, we propose consistency models, a
new family of models that generate high quality samples by directly mapping
noise to data. They support fast one-step generation by design, while still
allowing multistep sampling to trade compute for sample quality. They also
support zero-shot data editing, such as image inpainting, colorization, and
super-resolution, without requiring explicit training on these tasks.
Consistency models can be trained either by distilling pre-trained diffusion
models, or as standalone generative models altogether. Through extensive
experiments, we demonstrate that they outperform existing distillation
techniques for diffusion models in one- and few-step sampling, achieving the
new state-of-the-art FID of 3.55 on CIFAR-10 and 6.20 on ImageNet 64x64 for
one-step generation. When trained in isolation, consistency models become a new
family of generative models that can outperform existing one-step,
non-adversarial generative models on standard benchmarks such as CIFAR-10,
ImageNet 64x64 and LSUN 256x256.
2023-12-31 14:32:00
#paper Consistency Models https://doi.org/10.48550/arXiv.2303.01469 Diffusion models are now a core technique in generative AI, but their iterative sampling makes generation slow, which becomes an obstacle in practical applications. Consistency models (CM) are an efficient improvement over standard diffusion models: building on the probability flow (PF) ODE trajectory, they learn a mapping from any point on the trajectory (i.e., any timestep of the iterative evolution) back to the trajectory's origin, i.e., the original image. CM makes single-step sampling from diffusion models much higher quality, which has spurred many practical applications such as image editing and image inpainting; a large number of diffusion-based applications now use CM. This is work from earlier this year by Yang Song together with Ilya Sutskever; all four authors are diffusion-model experts at OpenAI.
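To make the one-step versus multistep trade-off concrete, below is a minimal PyTorch-style sketch of the two sampling procedures the paper describes. It assumes a trained consistency function f_theta(x, sigma) that maps a noisy input at noise level sigma back to the data endpoint of its PF ODE trajectory; the names f_theta, sigma_max, sigma_min, and the example schedule sigmas are illustrative placeholders, not the paper's exact settings.

```python
import torch

def one_step_sample(f_theta, shape, sigma_max=80.0):
    # One network evaluation: draw pure noise at the largest noise level
    # and map it directly to a data sample.
    x = torch.randn(shape) * sigma_max
    return f_theta(x, sigma_max)

def multistep_sample(f_theta, shape, sigmas, sigma_min=0.002):
    # sigmas: decreasing noise levels, e.g. [80.0, 24.0, 5.8].
    # Alternate mapping-to-data and re-noising to trade extra compute
    # for better sample quality.
    x = torch.randn(shape) * sigmas[0]
    x = f_theta(x, sigmas[0])
    for sigma in sigmas[1:]:
        z = torch.randn_like(x)
        x_noisy = x + (sigma**2 - sigma_min**2) ** 0.5 * z
        x = f_theta(x_noisy, sigma)
    return x
```

For example, multistep_sample(f_theta, (1, 3, 64, 64), [80.0, 24.0, 5.8]) spends three network evaluations instead of one, trading compute for sample quality as the abstract notes.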