Vincent (2023-03-31 15:34):
#paper https://doi.org/10.48550/arXiv.1904.10098 ICML 2019 DAG-GNN: DAG Structure Learning with Graph Neural Networks. Structure learning for directed acyclic graphs (DAGs) is a highly challenging problem: the search space grows super-exponentially with the number of nodes. A common strategy is to cast structure learning as a score-based optimization problem. To keep that problem tractable, traditional methods typically assume a linear structural equation model (linear SEM). Building on the linear-SEM framework, this paper develops a DAG-learning method based on a variational autoencoder (VAE) and graph neural networks (GNNs). Thanks to the nonlinear fitting capacity of neural networks, the method performs at least as well as linear SEM while also handling some nonlinear cases. Through experiments on simulated and real data, the paper shows the method achieves higher accuracy and a lower false discovery rate than linear SEM.
DAG-GNN: DAG Structure Learning with Graph Neural Networks
Abstract:
Learning a faithful directed acyclic graph (DAG) from samples of a joint distribution is a challenging combinatorial problem, owing to the intractable search space superexponential in the number of graph nodes. A recent breakthrough formulates the problem as a continuous optimization with a structural constraint that ensures acyclicity (Zheng et al., 2018). The authors apply the approach to the linear structural equation model (SEM) and the least-squares loss function that are statistically well justified but nevertheless limited. Motivated by the widespread success of deep learning that is capable of capturing complex nonlinear mappings, in this work we propose a deep generative model and apply a variant of the structural constraint to learn the DAG. At the heart of the generative model is a variational autoencoder parameterized by a novel graph neural network architecture, which we coin DAG-GNN. In addition to the richer capacity, an advantage of the proposed model is that it naturally handles discrete variables as well as vector-valued ones. We demonstrate that on synthetic data sets, the proposed method learns more accurate graphs for nonlinearly generated samples; and on benchmark data sets with discrete variables, the learned graphs are reasonably close to the global optima. The code is available at \url{this https URL}.
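The key idea enabling continuous optimization here is a smooth penalty that is zero exactly when the weighted adjacency matrix encodes a DAG. As a rough illustration (not the paper's implementation), the sketch below uses a polynomial acyclicity penalty in the spirit of the constraint family from Zheng et al. (2018) and the variant used in DAG-GNN; the exact form and scaling constant are my assumption for demonstration.

```python
import numpy as np

def acyclicity_penalty(A: np.ndarray) -> float:
    """Smooth penalty h(A) = tr[(I + (A∘A)/d)^d] - d.

    h(A) = 0 when the weighted adjacency matrix A has no directed
    cycles, and h(A) > 0 otherwise, so h can be driven to zero inside
    a continuous (e.g. augmented-Lagrangian) optimization loop.
    """
    d = A.shape[0]
    # Hadamard square A∘A makes the penalty insensitive to edge signs.
    M = np.eye(d) + (A * A) / d
    return float(np.trace(np.linalg.matrix_power(M, d)) - d)

# Strictly upper-triangular weights encode a DAG: penalty is zero.
dag = np.array([[0.0, 1.5],
                [0.0, 0.0]])
# A two-node cycle violates acyclicity: penalty is positive.
cyc = np.array([[0.0, 1.0],
                [1.0, 0.0]])

print(acyclicity_penalty(dag))  # 0.0
print(acyclicity_penalty(cyc))  # 0.5
```

In the actual method this penalty is attached to the VAE's evidence lower bound as an equality constraint, so gradient-based training simultaneously fits the data and pushes the learned graph toward acyclicity.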