周周复始 (2023-05-31 22:29):
#paper doi:https://doi.org/10.48550/arXiv.2201.00308. DiffuseVAE: Efficient, Controllable and High-Fidelity Generation from Low-Dimensional Latents. 2022. Diffusion probabilistic models currently produce state-of-the-art results on several competitive image-synthesis benchmarks, but they lack a low-dimensional, interpretable latent space and are slow at generation. Variational autoencoders (VAEs), by contrast, typically have a low-dimensional latent space but generate samples of poor quality. Motivated by this, the paper proposes a new generative framework, DiffuseVAE, which integrates a VAE into the diffusion-model framework and uses it to design novel conditional parameterizations for diffusion models. The paper shows that the resulting model equips diffusion models with a low-dimensional VAE-inferred latent code that can be used for downstream tasks such as conditional generation.
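A minimal sketch of the two-stage idea in PyTorch (all module names, shapes, and the channel-concatenation form of the conditioning are my assumptions for illustration, not code from the paper): sample a low-dimensional latent z, decode it with the VAE into a coarse reconstruction, then refine that reconstruction with a conditional DDPM reverse process.

```python
import torch
import torch.nn as nn

# Toy stand-ins for the two stages; the paper uses a conv VAE and a UNet DDPM.
class TinyVAEDecoder(nn.Module):
    def __init__(self, z_dim=64, img_dim=3 * 32 * 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, 256), nn.ReLU(),
            nn.Linear(256, img_dim), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z).view(-1, 3, 32, 32)

class TinyDenoiser(nn.Module):
    # Predicts the noise eps from the noisy image x_t and the VAE
    # reconstruction, here conditioned by channel concatenation (assumed).
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(3 + 3, 3, kernel_size=3, padding=1)

    def forward(self, x_t, t, x_recon):
        return self.net(torch.cat([x_t, x_recon], dim=1))  # t ignored in this stub

@torch.no_grad()
def diffusevae_sample(decoder, denoiser, betas, n=4, z_dim=64):
    """Two-stage sampling: VAE decode -> conditional DDPM reverse process."""
    z = torch.randn(n, z_dim)            # low-dimensional, controllable latent
    x_recon = decoder(z)                 # stage 1: coarse VAE reconstruction
    alphas = 1.0 - betas
    alpha_bar = torch.cumprod(alphas, dim=0)
    x_t = torch.randn(n, 3, 32, 32)      # stage 2: ancestral DDPM sampling
    for t in reversed(range(len(betas))):
        eps = denoiser(x_t, t, x_recon)
        mean = (x_t - betas[t] / torch.sqrt(1 - alpha_bar[t]) * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x_t) if t > 0 else torch.zeros_like(x_t)
        x_t = mean + torch.sqrt(betas[t]) * noise
    return x_t

betas = torch.linspace(1e-4, 0.02, 10)   # T=10 reverse steps, as in the abstract's example
samples = diffusevae_sample(TinyVAEDecoder(), TinyDenoiser(), betas)
```

Because z is low-dimensional, interpolating or editing it gives coarse control over the final sample, while the second-stage DDPM restores fidelity.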
DiffuseVAE: Efficient, Controllable and High-Fidelity Generation from Low-Dimensional Latents
Abstract:
Diffusion probabilistic models have been shown to generate state-of-the-art results on several competitive image synthesis benchmarks but lack a low-dimensional, interpretable latent space and are slow at generation. On the other hand, standard Variational Autoencoders (VAEs) typically have access to a low-dimensional latent space but exhibit poor sample quality. We present DiffuseVAE, a novel generative framework that integrates a VAE within a diffusion model framework, and leverage this to design novel conditional parameterizations for diffusion models. We show that the resulting model equips diffusion models with a low-dimensional VAE-inferred latent code which can be used for downstream tasks like controllable synthesis. The proposed method also improves upon the speed vs. quality tradeoff exhibited in standard unconditional DDPM/DDIM models (for instance, FID of 16.47 vs. 34.36 using a standard DDIM on the CelebA-HQ-128 benchmark with T=10 reverse process steps) without having explicitly trained for such an objective. Furthermore, the proposed model exhibits synthesis quality comparable to state-of-the-art models on standard image synthesis benchmarks like CIFAR-10 and CelebA-64 while outperforming most existing VAE-based methods. Lastly, we show that the proposed method exhibits inherent generalization to different types of noise in the conditioning signal. For reproducibility, our source code is publicly available at this https URL.
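To make the conditional parameterization concrete, here is a hedged sketch of one training step for the second-stage DDPM, reusing the TinyDenoiser stub from the sketch above. The standard eps-prediction loss and the exact conditioning form are my reading of the setup, not the paper's verbatim formulation:

```python
import torch
import torch.nn.functional as F

def ddpm_conditional_loss(denoiser, x0, x_recon, alpha_bar):
    """Standard eps-prediction DDPM loss, with the (assumed frozen) VAE
    reconstruction x_recon passed as the conditioning signal."""
    n = x0.shape[0]
    t = torch.randint(0, len(alpha_bar), (n,))             # random timestep per sample
    eps = torch.randn_like(x0)                             # target noise
    ab = alpha_bar[t].view(n, 1, 1, 1)
    x_t = torch.sqrt(ab) * x0 + torch.sqrt(1 - ab) * eps   # q(x_t | x_0) in closed form
    eps_hat = denoiser(x_t, t, x_recon)                    # denoiser sees the VAE output
    return F.mse_loss(eps_hat, eps)

# e.g. loss = ddpm_conditional_loss(TinyDenoiser(), x0, vae_reconstruct(x0), alpha_bar)
# where vae_reconstruct is a hypothetical encoder+decoder pass over the clean image.
```

One plausible reading of the abstract's T=10 DDIM comparison is that this conditioning helps precisely in the few-step regime, since the denoiser always sees the coarse VAE reconstruction as guidance rather than having to synthesize everything from noise alone.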