符毓 Yu (2024-06-30 23:02):
#paper doi.org/10.48550/arXiv.2404.17569, 2024, MaPa: Text-driven Photorealistic Material Painting for 3D Shapes. This paper presents a method for painting high-quality material surfaces onto 3D models from text. The algorithm has four steps. First, the mesh is decomposed into segments, and a segment-controlled image generation technique (specifically ControlNet) is used to project them into 2D images. Second, the segments are grouped according to similar material properties and appearance. Third, each material group goes through a selection process in which a suitable material graph is identified and optimized to accurately represent its texture and properties. The final step is iterative: the material graphs are repeatedly rendered and refined across multiple views to fill any gaps in the visual data, and the grouping and optimization stages are repeated until every segment of the mesh is accurately represented by a corresponding material graph. This comprehensive approach ensures detailed, photorealistic material textures tailored to the unique characteristics of each segment of the 3D mesh.
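To make the four-step loop concrete, here is a minimal sketch in Python. Every name in it (Segment, generate_image, group, select_graph, optimize) is a hypothetical placeholder standing in for the components described above; it is not the MaPa codebase or its API.

```python
# Hypothetical sketch of the four-step loop described above; all helpers are
# placeholders that would wrap the real components (ControlNet, the grouping
# step, material-graph selection, differentiable rendering).
from dataclasses import dataclass

@dataclass
class Segment:
    sid: int
    material_graph: object = None  # set once a fitted material graph is assigned

def paint_materials(segments, prompt, generate_image, group, select_graph, optimize, rounds=3):
    for _ in range(rounds):
        todo = [s for s in segments if s.material_graph is None]
        if not todo:
            break
        # Step 1: synthesize a 2D image aligned with the mesh segments
        # (segment-controlled generation, e.g. ControlNet conditioned on segment masks)
        image = generate_image(todo, prompt)
        # Step 2: group segments whose appearance in the image suggests the same material
        for grp in group(todo, image):
            # Step 3: select a candidate procedural material graph and initialize its parameters
            graph = select_graph(grp, image)
            # Step 4: fine-tune the graph through differentiable rendering to match the image
            graph = optimize(graph, grp, image)
            for seg in grp:
                seg.material_graph = graph
        # Next round renders new views to cover still-unassigned segments, then the
        # grouping and optimization stages repeat until every segment has a graph.
    return segments
```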
MaPa: Text-driven Photorealistic Material Painting for 3D Shapes
Abstract:
This paper aims to generate materials for 3D meshes from text descriptions. Unlike existing methods that synthesize texture maps, we propose to generate segment-wise procedural material graphs as the appearance representation, which supports high-quality rendering and provides substantial flexibility in editing. Instead of relying on extensive paired data, i.e., 3D meshes with material graphs and corresponding text descriptions, to train a material graph generative model, we propose to leverage the pre-trained 2D diffusion model as a bridge to connect the text and material graphs. Specifically, our approach decomposes a shape into a set of segments and designs a segment-controlled diffusion model to synthesize 2D images that are aligned with mesh parts. Based on generated images, we initialize parameters of material graphs and fine-tune them through the differentiable rendering module to produce materials in accordance with the textual description. Extensive experiments demonstrate the superior performance of our framework in photorealism, resolution, and editability over existing methods. Project page: https://zju3dv.github.io/MaPa
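As a toy illustration of the last step in the abstract (fine-tuning material-graph parameters through a differentiable rendering module), the snippet below fits two scalar parameters so that a crude differentiable shading model reproduces a target image crop. The shading model and parameters are invented for this example and are far simpler than MaPa's actual procedural material graphs and renderer.

```python
# Toy stand-in for the fine-tuning step: fit "material" parameters by gradient descent
# through a differentiable render function so its output matches a target (e.g. a crop
# of the diffusion-generated image). Not the authors' renderer or material representation.
import torch

def render(albedo, roughness, n_dot_l):
    # crude shading: a diffuse lobe plus a sharper lobe damped by roughness
    return albedo * n_dot_l + (1.0 - roughness) * n_dot_l ** 8

n_dot_l = torch.rand(64, 64)                                    # fixed geometry term for one segment
target = render(torch.tensor(0.8), torch.tensor(0.3), n_dot_l)  # pretend this came from the 2D image

albedo = torch.tensor(0.5, requires_grad=True)                  # initial guess from the selected graph
roughness = torch.tensor(0.5, requires_grad=True)
opt = torch.optim.Adam([albedo, roughness], lr=0.05)

for _ in range(300):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(render(albedo, roughness, n_dot_l), target)
    loss.backward()
    opt.step()

print(albedo.item(), roughness.item())  # should move toward the 0.8 / 0.3 used to build the target
```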