王昊 (2022-10-25 10:11):
#paper doi: 10.48550/arXiv.2110.07342 So Yeon Min, Devendra Singh Chaplot, Pradeep Ravikumar, Yonatan Bisk, and Ruslan Salakhutdinov. 2022. FILM: Following Instructions in Language with Modular Methods. Retrieved July 13, 2022 from http://arxiv.org/abs/2110.07342. An algorithm paper for vision-and-language navigation; at the time of writing it ranks 4th on the ALFRED leaderboard. The paper proposes a modular method with structured representations that (1) builds a semantic map of the scene and (2) performs exploration with a semantic search policy, in order to achieve the natural language goal. FILM has four components: 1. converting the language instruction into a structured form (Language Processing); 2. converting egocentric visual input into a semantic metric map (Semantic Mapping); 3. predicting a likely location of the goal object on the map (Semantic Search Policy); 4. outputting subsequent navigation/interaction actions (Deterministic Policy). FILM requires no input that provides sequential guidance, i.e. neither expert trajectories nor low-level language instructions.
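The four-component pipeline described above can be sketched roughly as follows. This is a minimal illustrative sketch only: all function names, the toy subtask list, and the placeholder outputs are hypothetical and do not reflect the authors' actual code or models.

```python
# Hedged sketch of FILM's four-module pipeline.
# Each module is a trained or engineered component in the paper;
# here each is a trivial stand-in to show the data flow.

def parse_instruction(instruction):
    # 1. Language Processing: map the instruction to a structured
    #    sequence of (action, object) subtasks. Hard-coded toy output.
    return [("PickupObject", "Apple"), ("PutObject", "Fridge")]

def update_semantic_map(semantic_map, observation):
    # 2. Semantic Mapping: project the egocentric observation into a
    #    top-down semantic metric map (here just accumulated in a list).
    semantic_map.append(observation)
    return semantic_map

def predict_search_goal(semantic_map, target_object):
    # 3. Semantic Search Policy: if the target is not yet observed,
    #    predict a promising (x, y) map location to explore.
    return (0, 0)  # placeholder coordinate

def plan_low_level_actions(goal_location):
    # 4. Deterministic Policy: plan navigation/interaction actions
    #    toward the predicted goal location.
    return ["MoveAhead", "RotateLeft", "Interact"]

def film_step(instruction, observation, semantic_map):
    subtasks = parse_instruction(instruction)
    semantic_map = update_semantic_map(semantic_map, observation)
    goal = predict_search_goal(semantic_map, subtasks[0][1])
    return plan_low_level_actions(goal)

actions = film_step("Put a chilled apple in the fridge", "frame_0", [])
print(actions)
```

The point of the modular design is visible even in this toy: spatial state lives in an explicit map rather than in hidden neural states, so no expert trajectory or step-by-step instruction is needed to drive the sequence.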
FILM: Following Instructions in Language with Modular Methods
Abstract:
Recent methods for embodied instruction following are typically trained end-to-end using imitation learning. This often requires the use of expert trajectories and low-level language instructions. Such approaches assume that neural states will integrate multimodal semantics to perform state tracking, building spatial memory, exploration, and long-term planning. In contrast, we propose a modular method with structured representations that (1) builds a semantic map of the scene and (2) performs exploration with a semantic search policy, to achieve the natural language goal. Our modular method achieves SOTA performance (24.46 %) with a substantial (8.17 % absolute) gap from previous work while using less data by eschewing both expert trajectories and low-level instructions. Leveraging low-level language, however, can further increase our performance (26.49 %). Our findings suggest that an explicit spatial memory and a semantic search policy can provide a stronger and more general representation for state-tracking and guidance, even in the absence of expert trajectories or low-level instructions.