尹志 (2025-02-28 15:55):
#paper doi:10.48550/arXiv.2205.15463 Few-Shot Diffusion Models. The paper proposes a technique for few-shot generation that combines a diffusion model with a set-based ViT. Experiments show the model can generate samples from a previously unseen class given as few as 5 examples.
arXiv, 2022-05-30T23:20:33Z. DOI: 10.48550/arXiv.2205.15463
Few-Shot Diffusion Models
Abstract:
Denoising diffusion probabilistic models (DDPM) are powerful hierarchical latent variable models with remarkable sample generation quality and training stability. These properties can be attributed to parameter sharing in the generative hierarchy, as well as a parameter-free diffusion-based inference procedure. In this paper, we present Few-Shot Diffusion Models (FSDM), a framework for few-shot generation leveraging conditional DDPMs. FSDMs are trained to adapt the generative process conditioned on a small set of images from a given class by aggregating image patch information using a set-based Vision Transformer (ViT). At test time, the model is able to generate samples from previously unseen classes conditioned on as few as 5 samples from that class. We empirically show that FSDM can perform few-shot generation and transfer to new datasets. We benchmark variants of our method on complex vision datasets for few-shot learning and compare to unconditional and conditional DDPM baselines. Additionally, we show how conditioning the model on patch-based input set information improves training convergence.
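The conditioning mechanism the abstract describes (aggregating patch information from a small support set into a context vector that conditions the denoiser) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: mean-pooling stands in for the set-based ViT aggregation, and all names, shapes, and dimensions are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def patchify(images, patch=8):
    # images: (n, H, W) -> (n, num_patches, patch*patch)
    # Split each image into non-overlapping square patches.
    n, H, W = images.shape
    p = images.reshape(n, H // patch, patch, W // patch, patch)
    return p.transpose(0, 1, 3, 2, 4).reshape(n, -1, patch * patch)

def set_encoder(support, W_embed):
    # Simplified stand-in for the set-based ViT: embed patches linearly,
    # then mean-pool over patches and over set elements. Pooling makes
    # the context permutation-invariant w.r.t. the support set ordering.
    patches = patchify(support)        # (n, P, d_in)
    tokens = patches @ W_embed         # (n, P, d)
    return tokens.mean(axis=(0, 1))    # (d,) context vector c

# A few-shot support set: 5 images of a new class (random stand-ins here).
support = rng.normal(size=(5, 32, 32))
W_embed = rng.normal(size=(64, 16)) / 8.0   # hypothetical embedding matrix

c = set_encoder(support, W_embed)
print(c.shape)  # context vector passed to the conditional denoiser eps(x_t, t, c)
```

In the full model this context would be produced by transformer layers over the patch tokens and fed into each level of the conditional DDPM; the pooling here only illustrates the set-input, order-invariant interface.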