张德祥 (2022-11-16 09:16):
#paper https://doi.org/10.48550/arXiv.2206.02063 Active Bayesian Causal Inference: We sequentially design experiments that are maximally informative about our target causal query, collect the corresponding interventional data, and update our beliefs to choose the next experiment. In this work, we consider the more general setting where we are interested in causal reasoning but are not given a reference causal model a priori. In this setting, causal discovery can be viewed as a means to an end rather than the primary goal. Focusing on actively learning the full causal model for subsequent causal reasoning can be disadvantageous for two reasons. First, if we are only interested in particular aspects of the causal model, spending samples on learning the full causal graph is suboptimal. Second, inferring causal structure from small amounts of data entails significant epistemic uncertainty. We propose Active Bayesian Causal Inference (ABCI), a fully Bayesian framework for integrating causal discovery and reasoning with experimental design. The basic approach is to place a Bayesian prior over a chosen class of causal models and cast the learning problem as Bayesian inference over the model posterior. Given the unobserved causal model, we formalize causal reasoning by introducing a target causal query. Following the Bayesian optimal experimental design approach [10, 42], we then choose the admissible intervention that, under our current beliefs about the true causal model, is most informative about our target query. Given the observed data, we update our beliefs by computing the posterior over causal models and queries, and use it to design the next experiment.
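To make the design-experiment-update loop concrete, here is a minimal sketch (NumPy only) of Bayesian optimal experimental design on a hypothetical two-variable world with linear-Gaussian mechanisms. The target query is the graph itself, and each round picks the intervention with the largest Monte Carlo estimate of expected information gain. The graphs, slope, noise level, and discrete graph posterior are illustrative assumptions, not the paper's GP-based implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical world: the true graph is either X -> Y or Y -> X,
# with linear-Gaussian mechanisms (illustrative, not from the paper).
GRAPHS = ["X->Y", "Y->X"]
SLOPE, NOISE = 2.0, 0.5

def sample(graph, do_var, do_val):
    """Draw one sample under the intervention do(do_var = do_val)."""
    cause, effect = ("X", "Y") if graph == "X->Y" else ("Y", "X")
    c = do_val if do_var == cause else rng.normal(0.0, 1.0)
    e = do_val if do_var == effect else SLOPE * c + rng.normal(0.0, NOISE)
    vals = {cause: c, effect: e}
    return vals["X"], vals["Y"]

def log_lik(graph, x, y, do_var):
    """Log-likelihood of (x, y) under `graph`, given which variable was clamped."""
    cause, effect = ("X", "Y") if graph == "X->Y" else ("Y", "X")
    vals = {"X": x, "Y": y}
    ll = 0.0
    if do_var != cause:  # cause follows its standard-normal prior
        ll += -0.5 * vals[cause] ** 2 - 0.5 * np.log(2 * np.pi)
    if do_var != effect:  # effect follows its mechanism plus noise
        r = (vals[effect] - SLOPE * vals[cause]) / NOISE
        ll += -0.5 * r ** 2 - np.log(NOISE) - 0.5 * np.log(2 * np.pi)
    return ll

def entropy(p):
    p = np.clip(p, 1e-12, 1.0)
    return float(-np.sum(p * np.log(p)))

def expected_info_gain(posterior, do_var, do_val, n_sim=200):
    """Monte Carlo estimate of the expected reduction in posterior entropy."""
    gain = 0.0
    for g, pg in zip(GRAPHS, posterior):
        for _ in range(n_sim):
            x, y = sample(g, do_var, do_val)
            ll = np.array([log_lik(h, x, y, do_var) for h in GRAPHS])
            post = posterior * np.exp(ll - ll.max())
            post /= post.sum()
            gain += (pg / n_sim) * (entropy(posterior) - entropy(post))
    return gain

# Active loop: design -> experiment -> Bayesian update.
posterior = np.array([0.5, 0.5])        # uniform prior over the two graphs
for t in range(5):
    designs = [("X", 1.0), ("Y", 1.0)]  # admissible interventions
    best = max(designs, key=lambda d: expected_info_gain(posterior, *d))
    x, y = sample("X->Y", *best)        # ground truth used only to simulate data
    ll = np.array([log_lik(g, x, y, best[0]) for g in GRAPHS])
    posterior = posterior * np.exp(ll - ll.max())
    posterior /= posterior.sum()
    print(f"round {t}: do({best[0]}={best[1]}), posterior {posterior.round(3)}")
```

In this toy setting the posterior typically concentrates on the true graph within a handful of interventional samples, which is the data-efficiency argument the paper makes for targeting the query directly rather than learning the full model first.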
Active Bayesian Causal Inference
Abstract:
Causal discovery and causal reasoning are classically treated as separate and consecutive tasks: one first infers the causal graph, and then uses it to estimate causal effects of interventions. However, such a two-stage approach is uneconomical, especially in terms of actively collected interventional data, since the causal query of interest may not require a fully-specified causal model. From a Bayesian perspective, it is also unnatural, since a causal query (e.g., the causal graph or some causal effect) can be viewed as a latent quantity subject to posterior inference -- other unobserved quantities that are not of direct interest (e.g., the full causal model) ought to be marginalized out in this process and contribute to our epistemic uncertainty. In this work, we propose Active Bayesian Causal Inference (ABCI), a fully-Bayesian active learning framework for integrated causal discovery and reasoning, which jointly infers a posterior over causal models and queries of interest. In our approach to ABCI, we focus on the class of causally-sufficient, nonlinear additive noise models, which we model using Gaussian processes. We sequentially design experiments that are maximally informative about our target causal query, collect the corresponding interventional data, and update our beliefs to choose the next experiment. Through simulations, we demonstrate that our approach is more data-efficient than several baselines that only focus on learning the full causal graph. This allows us to accurately learn downstream causal queries from fewer samples while providing well-calibrated uncertainty estimates for the quantities of interest.
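As a companion to the abstract's mention of nonlinear additive noise models with GP-modeled mechanisms, here is a minimal sketch using scikit-learn's GaussianProcessRegressor as a stand-in for the paper's GP machinery; the sin(x) mechanism, noise scale, and kernel choice are illustrative assumptions, not from the paper. It shows how a GP posterior over a mechanism yields both a point estimate and calibrated uncertainty for an interventional query such as E[Y | do(X = x*)].

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)

# Toy additive noise model: Y = f(X) + eps, with a nonlinear mechanism f
# (here sin, purely for illustration) and small Gaussian noise.
X = rng.uniform(-3, 3, size=(40, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=40)

# GP prior over the mechanism f; the WhiteKernel term absorbs the additive noise.
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X, y)

# Posterior mean and epistemic uncertainty at test inputs, e.g. for the
# interventional query E[Y | do(X = x*)] in a causally sufficient model.
x_star = np.linspace(-3, 3, 5).reshape(-1, 1)
mean, std = gp.predict(x_star, return_std=True)
for xv, m, s in zip(x_star[:, 0], mean, std):
    print(f"do(X={xv:+.1f}): E[Y] ~ {m:+.2f} +/- {s:.2f}")
```

The posterior standard deviation is what gives the "well-calibrated uncertainty estimates" the abstract refers to: queries in regions with little data come back with wide error bars rather than overconfident point estimates.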