负负
(2022-10-29 19:25):
#paper Understanding the role of individual units in a deep neural network (https://doi.org/10.1073/pnas.1907375117) PNAS, 2020.
By upsampling the activation maps of individual units in a VGG16 network trained on the Places365 dataset, the authors identify the concepts that individual units in a deep neural network have learned, examine the roles these units play in the scene classifier and in the generator of a generative adversarial network (GAN), and close by discussing applications of these findings. Main findings (code sketches of the experiments follow the list):
1. In the scene classifier, units in shallower layers tend to learn abstract concepts such as colors and materials, while units in deeper layers tend to learn concrete concepts such as objects and object parts.
2. A subset of units is important for scene recognition: switching these units off degrades classification accuracy, and units that matter across many scene classes tend to be more interpretable.
3. The units of the GAN generator learn features in the reverse order of the scene classifier: shallow-layer units tend to learn concrete concepts, while deeper-layer units tend to learn abstract ones.
4. Switching subsets of generator units off or on removes or adds the corresponding elements in the generated scene, and the generator places inserted objects at locations that fit the scene's structure, so scenes can be "painted" by manipulating unit activations in the GAN.
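The upsampling step behind finding 1 is the paper's network dissection procedure: threshold a unit's upsampled activation map and measure its overlap (IoU) with a concept's segmentation mask. A minimal PyTorch sketch, assuming a torchvision VGG16 as a stand-in (the paper's model is trained on Places365), a random image, and a hypothetical binary mask `concept_mask` for one concept; the paper computes the threshold as a top quantile over the whole dataset, simplified here to a per-image quantile:

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Stand-in weights; the paper dissects a Places365-trained VGG16.
model = models.vgg16(weights="IMAGENET1K_V1").eval()

activations = {}
def save_activation(module, inputs, output):
    activations["conv5_3"] = output.detach()

# features[28] is conv5_3, the layer the paper dissects.
model.features[28].register_forward_hook(save_activation)

image = torch.rand(1, 3, 224, 224)         # hypothetical input image
concept_mask = torch.rand(224, 224) > 0.9  # hypothetical concept segmentation

with torch.no_grad():
    model(image)

unit = 42                                  # arbitrary unit to inspect
act = activations["conv5_3"][0, unit]      # 14x14 activation map

# Upsample to image resolution, then binarize at a top-1% quantile.
up = F.interpolate(act[None, None], size=(224, 224),
                   mode="bilinear", align_corners=False)[0, 0]
unit_mask = up > torch.quantile(up.flatten(), 0.99)

# IoU between the unit's active region and the concept region;
# a consistently high IoU labels the unit a detector for that concept.
inter = (unit_mask & concept_mask).sum().float()
union = (unit_mask | concept_mask).sum().float()
print("IoU:", (inter / union.clamp(min=1)).item())
```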
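Finding 2 comes from an ablation experiment: zero out selected units and measure how much classification accuracy drops. A minimal sketch under the same assumptions, with placeholder unit indices and a hypothetical labeled `val_loader`:

```python
import torch
from torchvision import models

model = models.vgg16(weights="IMAGENET1K_V1").eval()  # stand-in for the Places365 VGG16
units_to_zero = [12, 42, 101]                         # hypothetical "important" units

def ablate(module, inputs, output):
    output[:, units_to_zero] = 0.0  # silence the chosen conv5_3 channels
    return output

handle = model.features[28].register_forward_hook(ablate)

@torch.no_grad()
def accuracy(model, loader):
    correct = total = 0
    for x, y in loader:
        correct += (model(x).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return correct / max(total, 1)

# acc_ablated = accuracy(model, val_loader)   # with the units silenced
# handle.remove()
# acc_baseline = accuracy(model, val_loader)  # unmodified network
# A large (acc_baseline - acc_ablated) marks the units as important.
```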
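Finding 4 applies the same kind of intervention to the generator. A minimal sketch, assuming a hypothetical pretrained generator `G` (the paper uses a Progressive GAN trained on LSUN scenes) whose intermediate layer `G.layer4` contains object-level units, with placeholder indices for units correlated with "tree":

```python
import torch

tree_units = [33, 87, 214]  # hypothetical units whose activation maps match trees

def remove_trees(module, inputs, output):
    output[:, tree_units] = 0.0           # switch the units off: trees disappear
    return output

def add_trees(module, inputs, output):
    output[:, tree_units] = output.max()  # force strong activation: trees appear
    return output

# z = torch.randn(1, 512)                 # hypothetical latent code
# handle = G.layer4.register_forward_hook(remove_trees)
# img_without_trees = G(z)
# handle.remove()
# handle = G.layer4.register_forward_hook(add_trees)
# img_with_trees = G(z)
# handle.remove()
```

Because the layers downstream of the edit still compose the final image, forced activations only turn into objects where the scene context permits them, which is the basis of the scene-painting application in finding 4.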
IF: 9.400, Q1
Proceedings of the National Academy of Sciences of the United States of America, 2020-12-01.
DOI: 10.1073/pnas.1907375117
PMID: 32873639
Understanding the role of individual units in a deep neural network
Abstract:
Deep neural networks excel at finding hierarchical representations that solve complex tasks over large datasets. How can we humans understand these learned representations? In this work, we present network dissection, an analytic framework to systematically identify the semantics of individual hidden units within image classification and image generation networks. First, we analyze a convolutional neural network (CNN) trained on scene classification and discover units that match a diverse set of object concepts. We find evidence that the network has learned many object classes that play crucial roles in classifying scene classes. Second, we use a similar analytic method to analyze a generative adversarial network (GAN) model trained to generate scenes. By analyzing changes made when small sets of units are activated or deactivated, we find that objects can be added and removed from the output scenes while adapting to the context. Finally, we apply our analytic framework to understanding adversarial attacks and to semantic image editing.
Keywords: