姗姗来迟 (2023-02-27 21:25):
#paper https://openaccess.thecvf.com/content_CVPR_2019/html/Tang_Learning_to_Compose_Dynamic_Tree_Structures_for_Visual_Contexts_CVPR_2019_paper.html title: Learning to Compose Dynamic Tree Structures for Visual Contexts. This paper proposes composing dynamic tree structures that place the objects in an image into a visual context, which helps visual reasoning tasks such as scene graph generation and visual question answering. The visual context tree model, called VCTree, has two key advantages: 1) an efficient and expressive binary tree encodes the inherent parallel/hierarchical relationships among objects; 2) the dynamic structure varies from image to image and from task to task, allowing more content-/task-specific message passing.
Learning to Compose Dynamic Tree Structures for Visual Contexts
Abstract:
We propose to compose dynamic tree structures that place the objects in an image into a visual context, helping visual reasoning tasks such as scene graph generation and visual Q&A. Our visual context tree model, dubbed VCTree, has two key advantages over existing structured object representations including chains and fully-connected graphs: 1) The efficient and expressive binary tree encodes the inherent parallel/hierarchical relationships among objects, e.g., "clothes" and "pants" usually co-occur and belong to "person"; 2) the dynamic structure varies from image to image and task to task, allowing more content-/task-specific message passing among objects. To construct a VCTree, we design a score function that calculates the task-dependent validity between each object pair, and the tree is the binary version of the maximum spanning tree from the score matrix. Then, visual contexts are encoded by bidirectional TreeLSTM and decoded by task-specific models. We develop a hybrid learning procedure which integrates end-task supervised learning and the tree structure reinforcement learning, where the former's evaluation result serves as a self-critic for the latter's structure exploration. Experimental results on two benchmarks, which require reasoning over contexts: Visual Genome for scene graph generation and VQA2.0 for visual Q&A, show that VCTree outperforms state-of-the-art results while discovering interpretable visual context structures.
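The abstract describes the tree-construction step only at a high level: compute pairwise validity scores, take the maximum spanning tree, then binarize it. Below is a minimal sketch of that pipeline, not the authors' code; the names (`Node`, `maximum_spanning_tree`, `binarize`) and the toy score matrix are hypothetical, and in VCTree the scores would be predicted from visual features rather than hand-written.

```python
# Hypothetical sketch of a VCTree-style structure build:
# maximum spanning tree over a pairwise score matrix, then
# left-child/right-sibling conversion into a binary tree.
import numpy as np


class Node:
    def __init__(self, idx):
        self.idx = idx      # index of the object this node represents
        self.left = None    # left child  = first child in the multi-branch tree
        self.right = None   # right child = next sibling in the multi-branch tree


def maximum_spanning_tree(scores):
    """Prim-style maximum spanning tree over a symmetric score matrix.
    Returns parent[i] for every node (the root has parent -1)."""
    n = scores.shape[0]
    in_tree = [False] * n
    parent = [-1] * n
    best = np.full(n, -np.inf)
    best[0] = 0.0           # grow the tree starting from node 0
    for _ in range(n):
        u = max((i for i in range(n) if not in_tree[i]), key=lambda i: best[i])
        in_tree[u] = True
        for v in range(n):
            if not in_tree[v] and scores[u, v] > best[v]:
                best[v] = scores[u, v]
                parent[v] = u
    return parent


def binarize(parent):
    """Left-child/right-sibling conversion of the multi-branch MST into a binary tree."""
    n = len(parent)
    children = [[] for _ in range(n)]
    root = None
    for v, p in enumerate(parent):
        if p == -1:
            root = v
        else:
            children[p].append(v)
    nodes = [Node(i) for i in range(n)]
    for p in range(n):
        prev = None
        for c in children[p]:
            if prev is None:
                nodes[p].left = nodes[c]    # first child becomes the left child
            else:
                prev.right = nodes[c]       # later children chain off as right siblings
            prev = nodes[c]
    return nodes[root]


if __name__ == "__main__":
    # Toy pairwise "validity" scores for 4 objects (illustrative values only).
    scores = np.array([[0.0, 0.9, 0.2, 0.1],
                       [0.9, 0.0, 0.7, 0.3],
                       [0.2, 0.7, 0.0, 0.6],
                       [0.1, 0.3, 0.6, 0.0]])
    root = binarize(maximum_spanning_tree(scores))
    print("root object:", root.idx)
```

In the paper the resulting binary tree is then traversed by a bidirectional TreeLSTM for context encoding, and the score function itself is refined with reinforcement learning using the end-task result as a self-critic; none of that is shown here.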