符毓 (2026-04-30 22:46):
#paper doi: arXiv:2604.26509v1, 2026, 3D Generation for Embodied AI and Robotic Simulation: A Survey. This simulation-centric survey reviews 3D generation techniques for embodied AI, organized around three parts: the Data Generator, which produces simulation-ready assets; Simulation Environments, which construct interactive worlds; and the Sim2Real Bridge, which supports transfer to the real world. Within each part, the survey traces an evolution from appearance-oriented generation toward physics-aware, simulator-compatible outputs: data generators increasingly produce assets with physical annotations and kinematic structure; scene-level methods integrate physical and semantic constraints into layout synthesis; and Sim2Real methods leverage generative models to narrow domain gaps in both appearance and dynamics. Across all three parts, a consistent trend emerges: the goal of 3D generation has shifted from visual realism to simulation readiness, making generation a core infrastructure layer for embodied learning. Key challenges remain, however, including the scarcity of physical annotations, the gap between geometric realism and simulator deployability, limited support for deformable and dynamic assets, fragmented evaluation standards, and the persistent sim-to-real gap. Fundamentally, the current ecosystem remains modular and disconnected: generative models, physics engines, and robot learning systems are each optimized in isolation and linked through brittle conversion pipelines.
arXiv, 2026-04-29T10:17:55Z. DOI: 10.48550/arXiv.2604.26509
3D Generation for Embodied AI and Robotic Simulation: A Survey
Tianwei Ye, Yifan Mao, Minwen Liao, Jian Liu, Chunchao Guo, Dazhao Du, Quanxin Shou, Fangqi Zhu, Song Guo
Abstract:
Embodied AI and robotic systems increasingly depend on scalable, diverse, and physically grounded 3D content for simulation-based training and real-world deployment. While 3D generative modeling has advanced rapidly, embodied applications impose requirements far beyond visual realism: generated objects must carry kinematic structure and material properties, scenes must support interaction and task execution, and the resulting content must bridge the gap between simulation and reality. This work presents the first survey of 3D generation for embodied AI and organizes the literature around three roles that 3D generation plays in embodied systems. In \emph{Data Generator}, 3D generation produces simulation-ready objects and assets, including articulated, physically grounded, and deformable content for downstream interaction; in \emph{Simulation Environments}, it constructs interactive and task-oriented worlds, spanning structure-aware, controllable, and agentic scene generation; and in \emph{Sim2Real Bridge}, it supports digital twin reconstruction, data augmentation, and synthetic demonstrations for downstream robot learning and real-world transfer. We also show that the field is shifting from visual realism toward interaction readiness, and we identify the main bottlenecks, including limited physical annotations, the gap between geometric quality and physical validity, fragmented evaluation, and the persistent sim-to-real divide, that must be addressed for 3D generation to become a dependable foundation for embodied intelligence. Our project page is at https://3dgen4robot.github.io.