翁凯 (2022-04-30 23:23):
#paper DOI: 10.1126/science.1192788 Science, 2011, How to Grow a Mind: Statistics, Structure, and Abstraction. This is a review that offers what I find to be a fairly convincing account of how the human brain learns. A hallmark of human learning is that it works well from very few examples (i.e., very sparse data), especially when learning causal relations. The authors argue that this efficiency comes from abstract knowledge guiding the learning, and that Bayes' theorem explains well how abstract knowledge does the guiding. Moreover, Bayesian methods can exploit many different forms of abstract knowledge, which avoids the traditional approach's need to enumerate every possibility (one long numerical vector at a time). As for how abstract knowledge is itself learned from data, e.g., how one figures out which structural form is the right one, the authors note that the various forms (trees, spaces, rings, orders, ...) can all be represented as graphs, and a hierarchical Bayesian model can then generate the graph that is needed; nonparametric hierarchical Bayesian models even come with Occam's razor built in, introducing more variables only when the data demand it. Some important questions remain unsolved by hierarchical Bayesian models, though, such as: how does learning get started in the first place? There must be something to serve as a foundation, right? The authors point out that some Bayesian modelers believe even the most abstract concepts (such as the concept of causality itself) could in principle be learned. The authors also discuss other topics, such as Turing-complete compositional representations and how the brain might actually implement Bayesian algorithms, but those are not my current interest (or, more honestly, I just don't have time tonight to reread them carefully... even though I did read this paper back when it came out in 2011). Interested readers can go straight to the paper.
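To make the "abstract knowledge guides learning from sparse data" point concrete, here is a minimal sketch in the spirit of Tenenbaum's classic "number game" (the hypothesis space, uniform prior, and numbers below are my own illustrative assumptions, not taken from the paper). A prior over candidate concepts plus the "size principle" likelihood lets just three examples pick out the tighter hypothesis, which is also an instance of Bayesian Occam's razor at work:

```python
import math

# Candidate concepts over the integers 1..100: the learner's abstract
# "hypothesis space". Names and prior are illustrative assumptions.
hypotheses = {
    "even numbers":    [n for n in range(1, 101) if n % 2 == 0],
    "multiples of 10": [n for n in range(1, 101) if n % 10 == 0],
    "powers of 2":     [2, 4, 8, 16, 32, 64],
    "any number 1-100": list(range(1, 101)),
}
prior = {h: 1.0 / len(hypotheses) for h in hypotheses}  # uniform prior

def posterior(data):
    """Bayes' rule with the 'size principle' likelihood: each example is
    assumed drawn uniformly from the true concept's extension, so
    P(data | h) = (1/|h|)^n if every example is in h, else 0."""
    scores = {}
    for h, ext in hypotheses.items():
        if all(x in ext for x in data):
            scores[h] = prior[h] * (1.0 / len(ext)) ** len(data)
        else:
            scores[h] = 0.0
    z = sum(scores.values())
    return {h: s / z for h, s in scores.items()}

# Sparse data already concentrates belief on the smallest consistent
# concept: an automatic Occam's razor.
for data in ([16], [16, 8], [16, 8, 2]):
    post = posterior(data)
    best = max(post, key=post.get)
    print(data, "->", best, {h: round(p, 3) for h, p in post.items()})
```

With a single example [16], "powers of 2" already beats "even numbers" (likelihood 1/6 vs 1/50); by three examples its posterior dominates, without any explicit complexity penalty.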
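On the "Occam's razor built in" point: nonparametric priors such as the Chinese restaurant process open a new cluster (i.e., introduce a new latent variable) only with probability proportional to a small concentration parameter, so model complexity grows slowly with the data. A toy simulation of the prior alone (my own sketch; the choice alpha = 1.0 is an assumption):

```python
import random

def crp_partition(n, alpha=1.0, seed=0):
    """Chinese restaurant process: a nonparametric prior over partitions.
    Customer i joins an existing table with probability proportional to
    that table's size, or opens a new table with probability
    proportional to alpha."""
    random.seed(seed)
    tables = []  # sizes of the clusters created so far
    for i in range(n):
        weights = tables + [alpha]       # existing tables, then "new table"
        r = random.uniform(0, i + alpha) # total weight is i + alpha
        acc = 0.0
        for k, w in enumerate(weights):
            acc += w
            if r <= acc:
                break
        if k == len(tables):
            tables.append(1)  # a new variable, only rarely
        else:
            tables[k] += 1
    return tables

for n in (10, 100, 1000):
    print(n, "observations ->", len(crp_partition(n)), "clusters")
```

The expected number of clusters grows only on the order of alpha * log(n), which is one way to see "introduce more variables only when the data demand it".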
How to grow a mind: statistics, structure, and abstraction
Abstract:
In coming to understand the world - in learning concepts, acquiring language, and grasping causal relations - our minds make inferences that appear to go far beyond the data available. How do we do it? This review describes recent approaches to reverse-engineering human learning and cognitive development and, in parallel, engineering more humanlike machine learning systems. Computational models that perform probabilistic inference over hierarchies of flexibly structured representations can address some of the deepest questions about the nature and origins of human thought: How does abstract knowledge guide learning and reasoning from sparse data? What forms does our knowledge take, across different domains and tasks? And how is that abstract knowledge itself acquired?