响马读paper

An academic exchange community whose members are required to read at least one paper per month and check in

2019, arXiv. DOI: 10.48550/arXiv.1907.10830 arXiv ID: 1907.10830
U-GAT-IT: Unsupervised Generative Attentional Networks with Adaptive Layer-Instance Normalization for Image-to-Image Translation
Junho Kim, Minjae Kim, Hyeonwoo Kang, Kwanghee Lee
Abstract:
We propose a novel method for unsupervised image-to-image translation, which incorporates a new attention module and a new learnable normalization function in an end-to-end manner. The attention module guides our model to focus on more important regions distinguishing between source and target domains based on the attention map obtained by the auxiliary classifier. Unlike previous attention-based methods, which cannot handle the geometric changes between domains, our model can translate both images requiring holistic changes and images requiring large shape changes. Moreover, our new AdaLIN (Adaptive Layer-Instance Normalization) function helps our attention-guided model to flexibly control the amount of change in shape and texture by learned parameters depending on datasets. Experimental results show the superiority of the proposed method compared to the existing state-of-the-art models with a fixed network architecture and hyper-parameters. Our code and datasets are available at this https URL or this https URL.
2022-09-30 11:06:00
#paper doi:10.48550/arXiv.1907.10830 U-GAT-IT: Unsupervised Generative Attentional Networks with Adaptive Layer-Instance Normalization for Image-to-Image Translation, ICLR 2020. This is another image-to-image translation paper, and again the key contribution is an effective change to the network architecture. The authors achieve unsupervised image translation by introducing a new attention module together with a new learnable normalization function (AdaLIN). The proposed attention module handles geometric deformation between domains well, which is why the architecture performs so strongly on translations involving large artistic style changes.
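The core idea of AdaLIN described in the abstract is to learn a per-channel blending ratio between Instance Normalization (statistics over each channel's spatial positions) and Layer Normalization (statistics over all channels and positions), then apply a scale and shift produced from the attention features. A minimal NumPy sketch of that computation, assuming NCHW feature maps and treating `gamma`, `beta`, and `rho` as given parameters (in the paper, `gamma` and `beta` are predicted by fully connected layers from the attention features, and `rho` is a learned, clipped parameter):

```python
import numpy as np

def adalin(x, gamma, beta, rho, eps=1e-5):
    """Sketch of Adaptive Layer-Instance Normalization.

    x     : feature map of shape (N, C, H, W)
    gamma : scale (scalar or broadcastable to x)
    beta  : shift (scalar or broadcastable to x)
    rho   : blending ratio in [0, 1]; 1 -> pure IN, 0 -> pure LN
    """
    # Instance norm: statistics per sample and per channel, over H, W
    mu_i = x.mean(axis=(2, 3), keepdims=True)
    var_i = x.var(axis=(2, 3), keepdims=True)
    x_in = (x - mu_i) / np.sqrt(var_i + eps)

    # Layer norm: statistics per sample, over C, H, W jointly
    mu_l = x.mean(axis=(1, 2, 3), keepdims=True)
    var_l = x.var(axis=(1, 2, 3), keepdims=True)
    x_ln = (x - mu_l) / np.sqrt(var_l + eps)

    # rho is constrained to [0, 1] (the paper clips it after each update);
    # it interpolates between the two normalized versions of the feature map.
    rho = np.clip(rho, 0.0, 1.0)
    return gamma * (rho * x_in + (1.0 - rho) * x_ln) + beta
```

This makes the abstract's claim concrete: with `rho` near 1 the layer behaves like Instance Normalization (good at changing texture/style), and with `rho` near 0 it behaves like Layer Normalization (better at larger shape changes), so the network can tune the mix per dataset.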