姗姗来迟 (2023-03-27 15:44):
#paper arXiv:2201.11903 Chain-of-Thought Prompting Elicits Reasoning in Large Language Models. My reading notes are recorded in my blog post: https://blog.csdn.net/weixin_44845357/article/details/129566376 The main point is to understand chain-of-thought prompting (a technique that elicits complex multi-step reasoning by providing step-by-step worked answers as examples).
arXiv, 2022.
Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
Abstract:
We explore how generating a chain of thought -- a series of intermediate reasoning steps -- significantly improves the ability of large language models to perform complex reasoning. In particular, we show how such reasoning abilities emerge naturally in sufficiently large language models via a simple method called chain of thought prompting, where a few chain of thought demonstrations are provided as exemplars in prompting. Experiments on three large language models show that chain of thought prompting improves performance on a range of arithmetic, commonsense, and symbolic reasoning tasks. The empirical gains can be striking. For instance, prompting a 540B-parameter language model with just eight chain of thought exemplars achieves state of the art accuracy on the GSM8K benchmark of math word problems, surpassing even finetuned GPT-3 with a verifier.
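To make the method concrete: a chain-of-thought prompt pairs each exemplar question with its intermediate reasoning steps before the final answer, then appends the new question so the model continues in the same style. A minimal sketch in Python (the exemplar is adapted from the well-known tennis-ball example in the paper; the helper function `build_cot_prompt` is my own illustrative wrapper, not from the paper):

```python
# A few-shot chain-of-thought prompt: each exemplar shows the
# intermediate reasoning steps before stating the final answer.
EXEMPLARS = [
    (
        "Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
        "Each can has 3 tennis balls. How many tennis balls does he have now?",
        "Roger started with 5 balls. 2 cans of 3 tennis balls each is "
        "6 tennis balls. 5 + 6 = 11. The answer is 11.",
    ),
]

def build_cot_prompt(exemplars, question):
    """Concatenate worked exemplars, then append the new question."""
    parts = [f"Q: {q}\nA: {steps}" for q, steps in exemplars]
    # Leave the final answer open so the model generates its own
    # chain of thought before the answer.
    parts.append(f"Q: {question}\nA:")
    return "\n\n".join(parts)

prompt = build_cot_prompt(
    EXEMPLARS,
    "A juggler has 16 balls. Half of the balls are golf balls. "
    "How many golf balls are there?",
)
print(prompt)
```

In standard few-shot prompting the exemplar answers would contain only "The answer is 11."; the paper's observation is that inserting the intermediate steps is what unlocks multi-step reasoning in sufficiently large models.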