符毓
(2024-06-30 23:02):
#paper doi.org/10.48550/arXiv.2404.17569, 2024, MaPa: Text-driven Photorealistic Material Painting for 3D Shapes. This paper presents an algorithm that paints high-quality, photorealistic material surfaces onto 3D models from text prompts.
The algorithm has four steps. First, the mesh is decomposed into segments, which are projected to 2D images using segment-controlled image generation (implemented with ControlNet). Second, the segments are grouped by similar material properties and appearance. Third, each material group goes through a selection process in which a suitable material graph is identified and optimized to accurately represent its texture and characteristics. The final step is iterative: the material graphs are repeatedly rendered and refined across multiple views, any gaps in the visual coverage are filled, and the grouping and optimization stages are repeated until every segment of the mesh is accurately represented by a corresponding material graph. This comprehensive approach ensures detailed, photorealistic material textures tailored to the distinct characteristics of each segment of the 3D mesh.
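Below is a hypothetical Python sketch of how those four stages might fit together. Every helper function is a placeholder standing in for the paper's components (mesh segmentation, the segment-controlled ControlNet, material-graph retrieval and fitting, and view-coverage checking); none of these names come from the authors' code.

```python
"""Hypothetical skeleton of the four-stage MaPa-style pipeline (not the authors' API)."""


def decompose_mesh(mesh):
    """Step 1a: split the mesh into segments."""
    raise NotImplementedError


def generate_image(mesh, segments, prompt, view):
    """Step 1b: segment-controlled 2D image generation (ControlNet)."""
    raise NotImplementedError


def group_segments(segments, image):
    """Step 2: cluster segments with similar material appearance."""
    raise NotImplementedError


def select_and_optimize_graph(group, image, prompt):
    """Step 3: pick a candidate material graph and fit its parameters."""
    raise NotImplementedError


def uncovered_segments(mesh, materials, views):
    """Step 4: find segments still lacking a material in the rendered views."""
    raise NotImplementedError


def paint_materials(mesh, prompt, views, max_rounds=3):
    """Assign an optimized material graph to every mesh segment."""
    segments = decompose_mesh(mesh)
    materials = {}
    pending = segments
    for view in views[:max_rounds]:
        if not pending:
            break
        image = generate_image(mesh, pending, prompt, view)          # step 1
        for group in group_segments(pending, image):                  # step 2
            graph = select_and_optimize_graph(group, image, prompt)   # step 3
            for seg in group:
                materials[seg] = graph
        pending = uncovered_segments(mesh, materials, views)          # step 4: iterate
    return materials
```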
arXiv,
2024.
DOI: 10.48550/arXiv.2404.17569
MaPa: Text-driven Photorealistic Material Painting for 3D Shapes
Shangzhan Zhang,
Sida Peng,
Tao Xu,
Yuanbo Yang,
Tianrun Chen,
Nan Xue,
Yujun Shen,
Hujun Bao,
Ruizhen Hu,
Xiaowei Zhou
Abstract:
This paper aims to generate materials for 3D meshes from text descriptions. Unlike existing methods that synthesize texture maps, we propose to generate segment-wise procedural material graphs as the appearance representation, which supports high-quality rendering and provides substantial flexibility in editing. Instead of relying on extensive paired data, i.e., 3D meshes with material graphs and corresponding text descriptions, to train a material graph generative model, we propose to leverage the pre-trained 2D diffusion model as a bridge to connect the text and material graphs. Specifically, our approach decomposes a shape into a set of segments and designs a segment-controlled diffusion model to synthesize 2D images that are aligned with mesh parts. Based on generated images, we initialize parameters of material graphs and fine-tune them through the differentiable rendering module to produce materials in accordance with the textual description. Extensive experiments demonstrate the superior performance of our framework in photorealism, resolution, and editability over existing methods. Project page: https://zju3dv.github.io/MaPa
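Since the abstract describes initializing material-graph parameters and fine-tuning them through a differentiable rendering module, here is a minimal PyTorch-style sketch of that idea under a toy assumption of a flat-shading renderer. `MaterialGraph`, `render_segment`, and `fit_material` are hypothetical stand-ins, not the paper's code.

```python
import torch


class MaterialGraph(torch.nn.Module):
    """Placeholder: a procedural material with a few learnable parameters."""

    def __init__(self):
        super().__init__()
        self.base_color = torch.nn.Parameter(torch.rand(3))
        self.roughness = torch.nn.Parameter(torch.tensor(0.5))

    def forward(self):
        # A real material graph would output full SVBRDF maps here.
        return self.base_color.clamp(0, 1), self.roughness.clamp(0, 1)


def render_segment(base_color, roughness, h, w):
    """Placeholder differentiable renderer: flat shading of one segment."""
    shade = 1.0 - 0.5 * roughness
    return (base_color * shade).view(1, 3, 1, 1).expand(1, 3, h, w)


def fit_material(target_image, steps=200, lr=1e-2):
    """Fine-tune material parameters so the rendering matches the target image."""
    graph = MaterialGraph()
    opt = torch.optim.Adam(graph.parameters(), lr=lr)
    for _ in range(steps):
        base_color, roughness = graph()
        rendered = render_segment(base_color, roughness,
                                  target_image.shape[-2], target_image.shape[-1])
        loss = torch.nn.functional.mse_loss(rendered, target_image)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return graph


if __name__ == "__main__":
    # Toy target: a uniform reddish image standing in for a diffusion-generated view.
    target = torch.full((1, 3, 64, 64), 0.6)
    target[:, 0] = 0.9
    fitted = fit_material(target)
    print(fitted.base_color.data, fitted.roughness.data)
```

In the actual method the target would be the ControlNet-generated image of the segment and the renderer a physically based differentiable renderer; this sketch only illustrates the optimization loop.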