[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-zjunlp--Prompt4ReasoningPapers":3,"tool-zjunlp--Prompt4ReasoningPapers":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 
代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",158594,2,"2026-04-16T23:34:05",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108322,"2026-04-10T11:39:34",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 
协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[52,13,15,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":32,"last_commit_at":59,"category_tags":60,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":76,"owner_email":77,"owner_twitter":78,"owner_website":79,"owner_url":80,"languages":76,"stars":81,"forks":82,"last_commit_at":83,"license":84,"difficulty_score":85,"env_os":86,"env_gpu":87,"env_ram":87,"env_deps":88,"category_tags":91,"github_topics":92,"view_count":32,"oss_zip_url":76,"oss_zip_packed_at":76,"status":17,"created_at":113,"updated_at":114,"faqs":115,"releases":145},8240,"zjunlp\u002FPrompt4ReasoningPapers","Prompt4ReasoningPapers","[ACL 2023] Reasoning with Language Model Prompting: A Survey","Prompt4ReasoningPapers 是一个专注于“大语言模型提示推理”领域的开源学术资源库，由浙江大学团队维护并收录于 ACL 2023 综述论文。它旨在解决研究人员在面对海量推理相关文献时难以系统梳理、分类和追踪最新进展的痛点。\n\n该项目不仅仅是一份简单的论文列表，更提供了一套结构化的知识体系。它将复杂的推理方法细分为策略增强（如提示工程、多阶段优化、外部工具调用）和知识增强（隐式与显式知识结合）等类别，帮助使用者快速定位所需技术路线。此外，资源库还整理了相关的基准测试数据集和实用工具代码，极大地降低了复现实验和入门研究的门槛。\n\nPrompt4ReasoningPapers 
特别适合人工智能领域的研究人员、算法工程师以及希望深入理解大模型推理机制的开发者使用。无论是需要撰写综述的学者，还是正在寻找特定推理优化策略的技术人员，都能从中获得系统的理论指导和丰富的实践资源。其独特的亮点在于将前沿学术论文与可运行的代码框架（如 EasyInstruct、EasyEdit 等）紧密结合，实现了从理论研究到工程落地的无缝衔接，是探索大模型逻辑推理能力的必备指南。","# Reasoning with Language Model Prompting Papers\n[![Awesome](https:\u002F\u002Fawesome.re\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fzjunlp\u002FPrompt4ReasoningPapers) \n[![License: MIT](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLicense-MIT-green.svg)](https:\u002F\u002Fopensource.org\u002Flicenses\u002FMIT)\n![](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flast-commit\u002Fzjunlp\u002FPrompt4ReasoningPapers?color=green) \n![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FPRs-Welcome-red) \n\n## 🔔 News\n- **2024-03-05 We release a new paper: \"[KnowAgent: Knowledge-Augmented Planning for LLM-Based Agents](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.03101)\".**\n- **2024-02-06 We release a new paper: \"[EasyInstruct: An Easy-to-use Instruction Processing Framework for Large Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.03049)\" with an HF demo [EasyInstruct](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fzjunlp\u002FEasyInstruct).**\n- **2024-01-03  We release a new paper:\"[A Comprehensive Study of Knowledge Editing for Large Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.01286)\" with a new benchmark [KnowEdit](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002Fzjunlp\u002FKnowEdit)! 
We are looking forward to any comments or discussions on this topic :)**\n- **2023-7-12 We release [EasyEdit](https:\u002F\u002Fgithub.com\u002Fzjunlp\u002FEasyEdit), an easy-to-use knowledge editing framework for Large Language Models.**\n- **2023-6-19 We open-source [KnowLM](https:\u002F\u002Fgithub.com\u002Fzjunlp\u002FKnowLM), a knowledgeable large language model framework with pre-training and instruction fine-tuning code (supports multi-machine multi-GPU setup) and various LLMs**.\n- **2023-3-27 We release [EasyInstruct](https:\u002F\u002Fgithub.com\u002Fzjunlp\u002FEasyInstruct), a package for instructing Large Language Models (LLMs) like ChatGPT in your research experiments. It is designed to be easy to use and easy to extend!**\n- **2023-2-19 We upload a [tutorial](https:\u002F\u002Fgithub.com\u002Fzjunlp\u002FPrompt4ReasoningPapers\u002Fblob\u002Fmain\u002Ftutorial.pdf) of our survey paper to help you learn more about reasoning with language model prompting (Attached with a [video](https:\u002F\u002Fwww.techbeat.net\u002Ftalk-info?id=756) (Chinese) of the tutorial).**\n- **2022-12-19  We release a new survey paper:\"[Reasoning with Language Model Prompting: A Survey](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.09597)\" based on this repository! 
We are looking forward to any comments or discussions on this topic :)**\n- **2022-09-14 We create this repository to maintain a paper list on *Reasoning with Language Model Prompting*.**\n\n---\n\n## 🔍 Contents\n\n- [🌟 Introduction](#-introduction)\n- [📜 Papers](#-papers)\n  - [Overview](#overview)\n  - [Methods](#methods)\n    - [Strategy Enhanced Reasoning](#strategy-enhanced-reasoning)\n      - [Prompt Engineering](#prompt-engineering)\n        - [Single-Stage](#single-stage)\n        - [Multi-Stage](#multi-stage)\n      - [Process Optimization](#process-optimization)\n        - [Self-Optimization](#self-optimization)\n        - [Ensemble-Optimization](#ensemble-optimization)\n        - [Iterative-Optimization](#iterative-optimization)\n      - [External Engine](#external-engine)\n        - [Physical Simulator](#physical-simulator)\n        - [Code Interpreter](#code-interpreter)\n        - [Tool Learning](#tool-learning)\n    - [Knowledge Enhanced Reasoning](#knowledge-enhanced-reasoning)\n      - [Implicit Knowledge](#implicit-knowledge)\n      - [Explicit Knowledge](#explicit-knowledge)\n    - [Others](#others)\n  - [Analysis](#analysis)\n- [🧰 Resources](#-resources)\n    - [Benchmarks and Tasks](#benchmarks-and-tasks)\n    - [Tools](#tools)\n- [🎉 Contributing](#-contributing)\n- [🚩 Citation](#-citation)\n\n---\n\n## 🌟 Introduction\n\nReasoning, as an essential ability for complex problem-solving, can provide back-end support for various real-world applications, such as medical diagnosis, negotiation, etc. This paper provides a comprehensive survey of cutting-edge research on reasoning with language model prompting. We introduce research works with comparisons and summaries and provide systematic resources to help beginners. We also discuss the potential reasons for the emergence of such reasoning abilities and highlight future research directions.\n\n---\n\n## 📜 Papers\n\n### Overview\n\n1. 
**Reasoning with Language Model Prompting: A Survey.**\n\n   *Shuofei Qiao, Yixin Ou, Ningyu Zhang, Xiang Chen, Yunzhi Yao, Shumin Deng, Chuanqi Tan, Fei Huang, Huajun Chen.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.09597)], 2022.12\n\n2. **Towards Reasoning in Large Language Models: A Survey.**\n\n   *Jie Huang, Kevin Chen-Chuan Chang.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.10403)], 2022.12\n\n3. **A Survey of Deep Learning for Mathematical Reasoning.**\n\n   *Pan Lu, Liang Qiu, Wenhao Yu, Sean Welleck, Kai-Wei Chang.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.10535)], 2022.12\n\n4. **A Survey for In-context Learning.**\n\n   *Qingxiu Dong, Lei Li, Damai Dai, Ce Zheng, Zhiyong Wu, Baobao Chang, Xu Sun, Jingjing Xu, Lei Li, Zhifang Sui.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.00234)], 2022.12\n\n5. **Knowledge-enhanced Neural Machine Reasoning: A Review.**\n\n   *Tanmoy Chowdhury, Chen Ling, Xuchao Zhang, Xujiang Zhao, Guangji Bai, Jian Pei, Haifeng Chen, Liang Zhao.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.02093)], 2023.2\n\n6. **Augmented Language Models: a Survey.**\n\n   *Grégoire Mialon, Roberto Dessì, Maria Lomeli, Christoforos Nalmpantis, Ram Pasunuru, Roberta Raileanu, Baptiste Rozière, Timo Schick, Jane Dwivedi-Yu, Asli Celikyilmaz, Edouard Grave, Yann LeCun, Thomas Scialom.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.07842)], 2023.2\n\n7. **The Life Cycle of Knowledge in Big Language Models: A Survey.**\n\n   *Boxi Cao, Hongyu Lin, Xianpei Han, Le Sun.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.07616)], 2023.3\n\n8. **Is Prompt All You Need? No. A Comprehensive and Broader View of Instruction Learning.**\n\n   *Renze Lou, Kai Zhang, Wenpeng Yin.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.10475)], 2023.3\n\n9. 
**Logical Reasoning over Natural Language as Knowledge Representation: A Survey.**\n\n   *Zonglin Yang, Xinya Du, Rui Mao, Jinjie Ni, Erik Cambria.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.12023)], 2023.3\n\n10. **Nature Language Reasoning, A Survey.**\n\n    *Fei Yu, Hongbo Zhang, Benyou Wang.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.14725)], 2023.3\n\n11. **A Survey of Large Language Models.**\n\n    *Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, Yifan Du, Chen Yang, Yushuo Chen, Zhipeng Chen, Jinhao Jiang, Ruiyang Ren, Yifan Li, Xinyu Tang, Zikang Liu, Peiyu Liu, Jian-Yun Nie, Ji-Rong Wen.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.18223)], 2023.3\n\n12. **Tool Learning with Foundation Models.**\n\n    *Yujia Qin, Shengding Hu, Yankai Lin, Weize Chen, Ning Ding, Ganqu Cui, Zheni Zeng, Yufei Huang, Chaojun Xiao, Chi Han, Yi Ren Fung, Yusheng Su, Huadong Wang, Cheng Qian, Runchu Tian, Kunlun Zhu, Shihao Liang, Xingyu Shen, Bokai Xu, Zhen Zhang, Yining Ye, Bowen Li, Ziwei Tang, Jing Yi, Yuzhang Zhu, Zhenning Dai, Lan Yan, Xin Cong, Yaxi Lu, Weilin Zhao, Yuxiang Huang, Junxi Yan, Xu Han, Xian Sun, Dahai Li, Jason Phang, Cheng Yang, Tongshuang Wu, Heng Ji, Zhiyuan Liu, Maosong Sun.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.08354)], 2023.4\n\n13. **A Survey of Chain of Thought Reasoning: Advances, Frontiers and Future.**\n\n    *Zheng Chu, Jingchang Chen, Qianglong Chen, Weijiang Yu, Tao He, Haotian Wang, Weihua Peng, Ming Liu, Bing Qin, Ting Liu.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.15402)], 2023.9\n\n14. 
**A Survey of Reasoning with Foundation Models: Concepts, Methodologies, and Outlook.**\n\n    *Jiankai Sun, Chuanyang Zheng, Enze Xie, Zhengying Liu, Ruihang Chu, Jianing Qiu, Jiaqi Xu, Mingyu Ding, Hongyang Li, Mengzhe Geng, Yue Wu, Wenhai Wang, Junsong Chen, Zhangyue Yin, Xiaozhe Ren, Jie Fu, Junxian He, Wu Yuan, Qi Liu, Xihui Liu, Yu Li, Hao Dong, Yu Cheng, Ming Zhang, Pheng Ann Heng, Jifeng Dai, Ping Luo, Jingdong Wang, Ji-Rong Wen, Xipeng Qiu, Yike Guo, Hui Xiong, Qun Liu, Zhenguo Li.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.11562)], 2023.12\n\n### Methods\n\n#### Strategy Enhanced Reasoning\n\n##### Prompt Engineering\n\n###### Single-Stage\n\n1. **Prompting Contrastive Explanations for Commonsense Reasoning Tasks.**\n\n   *Bhargavi Paranjape, Julian Michael, Marjan Ghazvininejad, Luke Zettlemoyer, Hannaneh Hajishirzi.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.06823)], 2021.6\n\n2. **Template Filling for Controllable Commonsense Reasoning.**\n\n   *Dheeraj Rajagopal, Vivek Khetan, Bogdan Sacaleanu, Anatole Gershman, Andrew Fano, Eduard Hovy.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.00539)], 2021.11\n\n3. **Chain of Thought Prompting Elicits Reasoning in Large Language Models.**\n\n   *Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed H. Chi, Quoc V. Le, Denny Zhou.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2201.11903)], 2022.1\n\n4. **Large Language Models are Zero-Shot Reasoners.**\n\n   *Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, Yusuke Iwasawa.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.11916)], 2022.5\n\n5. **Psychologically-informed chain-of-thought prompts for metaphor understanding in large language models.**\n\n   *Ben Prystawski, Paul Thibodeau, Noah Goodman.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.08141)], 2022.9\n\n6. 
**Complexity-based Prompting for Multi-step Reasoning.**\n\n   *Yao Fu, Hao Peng, Ashish Sabharwal, Peter Clark, Tushar Khot.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.00720)], 2022.10\n\n7. **Language Models are Multilingual Chain-of-thought Reasoners.**\n\n   *Freda Shi, Mirac Suzgun, Markus Freitag, Xuezhi Wang, Suraj Srivats, Soroush Vosoughi, Hyung Won Chung, Yi Tay, Sebastian Ruder, Denny Zhou, Dipanjan Das, Jason Wei.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.03057)], 2022.10\n\n8. **Automatic Chain of Thought Prompting in Large Language Models.**\n\n    *Zhuosheng Zhang, Aston Zhang, Mu Li, Alex Smola.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.03493)], 2022.10\n\n9. **Large Language Models are few(1)-shot Table Reasoners.**\n\n    *Wenhu Chen.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.06710)], 2022.10\n\n10. **Teaching Algorithmic Reasoning via In-context Learning.**\n\n    *Hattie Zhou, Azade Nova, Hugo Larochelle, Aaron Courville, Behnam Neyshabur, Hanie Sedghi.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.09066)], 2022.11\n\n11. **Active Prompting with Chain-of-Thought for Large Language Models.**\n\n     *Shizhe Diao, Pengcheng Wang, Yong Lin, Tong Zhang.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.12246)], 2023.2\n\n12. **Automatic Prompt Augmentation and Selection with Chain-of-Thought from Labeled Data.**\n\n     *KaShun Shum, Shizhe Diao, Tong Zhang.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.12822)], 2023.2\n\n13. **A prompt pattern catalog to enhance prompt engineering with chatgpt.**\n\n     *Jules White, Quchen Fu, Sam Hays, Michael Sandborn, Carlos Olea, Henry Gilbert, Ashraf Elnashar, Jesse Spencer-Smith, Douglas C Schmidt.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.11382)], 2023.2\n\n14. 
**ChatGPT Prompt Patterns for Improving Code Quality, Refactoring, Requirements Elicitation, and Software Design.**\n\n     *Jules White, Sam Hays, Quchen Fu, Jesse Spencer-Smith, Douglas C Schmidt.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.07839)], 2023.3\n\n15. **Learning to Reason and Memorize with Self-Notes.**\n\n     *Jack Lanchantin, Shubham Toshniwal, Jason Weston, Arthur Szlam, Sainbayar Sukhbaatar.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.00833)], 2023.5\n\n16. **Plan-and-Solve Prompting: Improving Zero-Shot Chain-of-Thought Reasoning by Large Language Models.**\n\n     *Lei Wang, Wanyu Xu, Yihuai Lan, Zhiqiang Hu, Yunshi Lan, Roy Ka-Wei Lee, Ee-Peng Lim.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.04091)], 2023.5\n\n17. **Beyond Chain-of-Thought, Effective Graph-of-Thought Reasoning in Large Language Models.**\n\n     *Yao Yao, Zuchao Li, Hai Zhao.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.16582)], 2023.5\n\n18. **Re-Reading Improves Reasoning in Language Models.**\n\n     *Xiaohan Xu, Chongyang Tao, Tao Shen, Can Xu, Hongbo Xu, Guodong Long, Jian-guang Lou.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.06275)], 2023.9\n\n19. **Query-Dependent Prompt Evaluation and Optimization with Offline Inverse RL.**\n\n    *Hao Sun, Alihan Huyuk, Mihaela van der Schaar.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.06553)], 2023.9\n\n20. **Abstract Meaning Representation-Based Logic-Driven Data Augmentation for Logical Reasoning.**\n\n    *Qiming Bao, Alex Peng, Zhenyun Deng, Wanjun Zhong, Gaël Gendron, Neşet Tan, Nathan Young, Yang Chen, Yonghua Zhu, Michael Witbrock, Jiamou Liu.* [[abs](https:\u002F\u002Faclanthology.org\u002F2024.findings-acl.353\u002F)], [[code](https:\u002F\u002Fgithub.com\u002FStrong-AI-Lab\u002FLogical-Equivalence-driven-AMR-Data-Augmentation-for-Representation-Learning)], 2024.8\n\n\n###### Multi-Stage\n\n1. 
**Iteratively Prompt Pre-trained Language Models for Chain of Thought.**\n\n   *Boshi Wang, Xiang Deng, Huan Sun.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.08383)], 2022.3\n\n2. **Selection-Inference: Exploiting Large Language Models for Interpretable Logical Reasoning.**\n\n   *Antonia Creswell, Murray Shanahan, Irina Higgins.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.09712)], 2022.5\n\n3. **Least-to-Most Prompting Enables Complex Reasoning in Large Language Models.**\n\n   *Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Olivier Bousquet, Quoc Le, Ed Chi.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.10625)], 2022.5\n\n4. **Maieutic Prompting: Logically Consistent Reasoning with Recursive Explanations.**\n\n   *Jaehun Jung, Lianhui Qin, Sean Welleck, Faeze Brahman, Chandra Bhagavatula, Ronan Le Bras, Yejin Choi.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.11822)], 2022.5\n\n5. **Faithful Reasoning Using Large Language Models.**\n\n   *Antonia Creswell, Murray Shanahan.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2208.14271)], 2022.8\n\n6. **Compositional Semantic Parsing with Large Language Models.**\n\n   *Andrew Drozdov, Nathanael Schärli, Ekin Akyürek, Nathan Scales, Xinying Song, Xinyun Chen, Olivier Bousquet, Denny Zhou.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.15003)], 2022.9\n\n7. **Decomposed Prompting: A Modular Approach for Solving Complex Tasks.**\n\n   *Tushar Khot, Harsh Trivedi, Matthew Finlayson, Yao Fu, Kyle Richardson, Peter Clark, Ashish Sabharwal.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.02406)], 2022.10\n\n8. **Measuring and Narrowing the Compositionality Gap in Language Models.**\n\n   *Ofir Press, Muru Zhang, Sewon Min, Ludwig Schmidt, Noah A. Smith, Mike Lewis.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.03350)], 2022.10\n\n9. 
**Successive Prompting for Decomposing Complex Questions.**\n\n   *Dheeru Dua, Shivanshu Gupta, Sameer Singh, Matt Gardner.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.04092)], 2022.12\n\n10. **The Impact of Symbolic Representations on In-context Learning for Few-shot Reasoning.**\n\n    *Hanlin Zhang, Yi-Fan Zhang, Li Erran Li, Eric Xing.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.08686)], 2022.12\n\n11. **LAMBADA: Backward Chaining for Automated Reasoning in Natural Language.**\n\n    *Seyed Mehran Kazemi, Najoung Kim, Deepti Bhatia, Xin Xu, Deepak Ramachandran.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.13894)], 2022.12\n\n12. **Iterated Decomposition: Improving Science Q&A by Supervising Reasoning Processes.**\n\n    *Justin Reppert, Ben Rachbach, Charlie George, Luke Stebbing, Jungwon Byun, Maggie Appleton, Andreas Stuhlmüller.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.01751)], 2023.1\n\n13. **Self-Polish: Enhance Reasoning in Large Language Models via Problem Refinement.**\n\n    *Zhiheng Xi, Senjie Jin, Yuhao Zhou, Rui Zheng, Songyang Gao, Tao Gui, Qi Zhang, Xuanjing Huang.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.14497)], 2023.5\n\n14. **Multi-Step Deductive Reasoning Over Natural Language: An Empirical Study on Out-of-Distribution Generalisation.**\n\n    *Qiming Bao, Alex Peng, Tim Hartill, Neset Tan, Zhenyun Deng, Michael Witbrock, Jiamou Liu.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2207.14000)], [[code](https:\u002F\u002Fgithub.com\u002FStrong-AI-Lab\u002FMulti-Step-Deductive-Reasoning-Over-Natural-Language)], 2022.8\n\n15. 
**Exploring Iterative Enhancement for Improving Learnersourced Multiple-Choice Question Explanations with Large Language Models.**\n\n    *Qiming Bao, Juho Leinonen, Alex Peng, Wanjun Zhong, Timothy Pistotti, Alice Huang, Paul Denny, Michael Witbrock, Jiamou Liu.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.10444)], [[code](https:\u002F\u002Fgithub.com\u002FStrong-AI-Lab\u002FExplanation-Generation)], 2025.3\n\n16. **ChatLogic: Integrating Logic Programming with Large Language Models for Multi-step Reasoning.**\n\n    *Zhongsheng Wang, Jiamou Liu, Qiming Bao, Hongfei Rong, Jingfeng Zhang.* [[abs](https:\u002F\u002Fopenreview.net\u002Fforum?id=AOqGF7Po7Z)], [[code](https:\u002F\u002Fgithub.com\u002FStrong-AI-Lab\u002FChatLogic)], 2024.2\n\n\n##### Process Optimization\n\n###### Self-Optimization\n\n1. **Reframing Human-AI Collaboration for Generating Free-Text Explanations.**\n\n   *Sarah Wiegreffe, Jack Hessel, Swabha Swayamdipta, Mark Riedl, Yejin Choi.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2112.08674)], 2021.12\n\n2. **The Unreliability of Explanations in Few-Shot In-Context Learning.**\n\n   *Xi Ye, Greg Durrett.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.03401)], 2022.5\n\n3. **Discriminator-Guided Multi-step Reasoning with Language Models.**\n\n   *Muhammad Khalifa, Lajanugen Logeswaran, Moontae Lee, Honglak Lee, Lu Wang.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.14934)], 2023.5\n   \n4. **RCOT: Detecting and Rectifying Factual Inconsistency in Reasoning by Reversing Chain-of-Thought.**\n\n   *Tianci Xue, Ziqi Wang, Zhenhailong Wang, Chi Han, Pengfei Yu, Heng Ji.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.11499)], 2023.5\n   \n###### Ensemble-Optimization\n\n1. **Self-Consistency Improves Chain of Thought Reasoning in Language Models.**\n\n   *Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed H. 
Chi, Sharan Narang, Aakanksha Chowdhery, Denny Zhou.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.11171)], 2022.3\n\n2. **On the Advance of Making Language Models Better Reasoners.**\n\n   *Yifei Li, Zeqi Lin, Shizhuo Zhang, Qiang Fu, Bei Chen, Jian-Guang Lou, Weizhu Chen.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.02336)], 2022.6\n\n3. **Complexity-based Prompting for Multi-step Reasoning.**\n\n   *Yao Fu, Hao Peng, Ashish Sabharwal, Peter Clark, Tushar Khot.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.00720)], 2022.10\n\n4. **Large Language Models are reasoners with Self-Verification.**\n\n   *Yixuan Weng, Minjun Zhu, Shizhu He, Kang Liu, Jun Zhao.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.09561)], 2022.12\n\n5. **Answering Questions by Meta-Reasoning over Multiple Chains of Thought.**\n\n   *Ori Yoran, Tomer Wolfson, Ben Bogin, Uri Katz, Daniel Deutch, Jonathan Berant.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.13007)], 2023.4\n\n6. **Tree of Thoughts: Deliberate Problem Solving with Large Language Models.**\n\n   *Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L. Griffiths, Yuan Cao, Karthik Narasimhan.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.10601)], 2023.5\n\n7. **Improving Factuality and Reasoning in Language Models through Multiagent Debate.**\n\n   *Yilun Du, Shuang Li, Antonio Torralba, Joshua B. Tenenbaum, Igor Mordatch.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.14325)], 2023.5\n\n8. **AutoMix: Automatically Mixing Language Models**\n\n   *Aman Madaan, Pranjal Aggarwal, Ankit Anand, Srividya Pranavi Potharaju, Swaroop Mishra, Pei Zhou, Aditya Gupta, Dheeraj Rajagopal, Karthik Kappaganthu, Yiming Yang, Shyam Upadhyay, Mausam, Manaal Faruqui.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.12963)], 2023.9\n\n9. 
**Reversal of Thought: Enhancing Large Language Models with Preference-Guided Reverse Reasoning Warm-up.** \n\n    *Jiahao Yuan, Dehui Du, Hao Zhang, Zixiang Di, Usman Naseem.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.12323)], [[code](https:\u002F\u002Fgithub.com\u002FJiahao-Yuan\u002FReversal-of-Thought)], 2024.10\n\n###### Iterative-Optimization\n\n1. **STaR: Bootstrapping Reasoning With Reasoning.**\n\n   *Eric Zelikman, Yuhuai Wu, Noah D. Goodman.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.14465)], 2022.3\n\n2. **Large Language Models Can Self-Improve.**\n\n   *Jiaxin Huang, Shixiang Shane Gu, Le Hou, Yuexin Wu, Xuezhi Wang, Hongkun Yu, Jiawei Han.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.11610)], 2022.10\n\n3. **Reflexion: An Autonomous Agent with Dynamic Memory and Self-reflection.**\n\n   *Noah Shinn, Beck Labash, Ashwin Gopinath.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.11366)], 2023.3\n\n4. **Self-Refine: Iterative Refinement with Self-Feedback.**\n\n   *Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, Sean Welleck, Bodhisattwa Prasad Majumder, Shashank Gupta, Amir Yazdanbakhsh, Peter Clark.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.17651)], 2023.3\n\n5. **REFINER: Reasoning Feedback on Intermediate Representations.**\n\n   *Debjit Paul, Mete Ismayilzada, Maxime Peyrard, Beatriz Borges, Antoine Bosselut, Robert West, Boi Faltings.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.01904)], 2023.4\n\n6. **Reasoning with Language Model is Planning with World Model**\n\n   *Shibo Hao\\*, Yi Gu\\*, Haodi Ma, Joshua Jiahua Hong, Zhen Wang, Daisy Zhe Wang, Zhiting Hu* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.14992)], 2023.5\n\n7. 
**Enhancing Zero-Shot Chain-of-Thought Reasoning in Large Language Models through Logic.**\n\n   *Xufeng Zhao, Mengdi Li, Wenhao Lu, Cornelius Weber, Jae Hee Lee, Kun Chu, Stefan Wermter.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.13339)] [[code](https:\u002F\u002Fgithub.com\u002Fxf-zhao\u002FLoT)], 2024.2\n\n##### External Engine\n\n###### Physical Simulator\n\n1. **Mind's Eye: Grounded Language Model Reasoning through Simulation.**\n\n   *Ruibo Liu, Jason Wei, Shixiang Shane Gu, Te-Yen Wu, Soroush Vosoughi, Claire Cui, Denny Zhou, Andrew M. Dai*. [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.05359)], 2022.10\n\n###### Code Interpreter\n\n1. **Language Models of Code are Few-Shot Commonsense Learners.**\n\n   *Aman Madaan, Shuyan Zhou, Uri Alon, Yiming Yang, Graham Neubig.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.07128)], 2022.10\n\n2. **PAL: Program-aided Language Models.**\n\n   *Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, Graham Neubig.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.10435)], 2022.11\n\n3. **Program of Thoughts Prompting: Disentangling Computation from Reasoning for Numerical Reasoning Tasks.**\n\n   *Wenhu Chen, Xueguang Ma, Xinyi Wang, William W. Cohen.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.12588)], 2022.11\n\n4. **Faithful Chain-of-Thought Reasoning.**\n\n   *Qing Lyu, Shreya Havaldar, Adam Stein, Li Zhang, Delip Rao, Eric Wong, Marianna Apidianaki, Chris Callison-Burch.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.13379)], 2023.1\n\n5. **Large Language Models are Versatile Decomposers: Decompose Evidence and Questions for Table-based Reasoning.**\n\n   *Yunhu Ye, Binyuan Hui, Min Yang, Binhua Li, Fei Huang, Yongbin Li.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.13808)], 2023.1\n\n6. 
**Synthetic Prompting: Generating Chain-of-Thought Demonstrations for Large Language Models.**\n\n   *Zhihong Shao, Yeyun Gong, Yelong Shen, Minlie Huang, Nan Duan, Weizhu Chen.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.00618)], 2023.2\n\n7. **MathPrompter: Mathematical Reasoning Using Large Language Models.**\n\n   *Shima Imani, Liang Du, Harsh Shrivastava.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.05398)], 2023.3\n\n8. **Automatic Model Selection with Large Language Models for Reasoning.**\n\n   *Xu Zhao, Yuxi Xie, Kenji Kawaguchi, Junxian He, Qizhe Xie.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.14333)], 2023.5\n\n9. **Code Prompting: a Neural Symbolic Method for Complex Reasoning in Large Language Models.**\n\n   *Yi Hu, Haotong Yang, Zhouchen Lin, Muhan Zhang.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.18507)], 2023.5\n\n10. **The Magic of IF: Investigating Causal Reasoning Abilities in Large Language Models of Code.**\n\n    *Xiao Liu, Da Yin, Chen Zhang, Yansong Feng, Dongyan Zhao.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.19213)], 2023.5\n\n11. **When Do Program-of-Thought Works for Reasoning?**\n\n    *Zhen Bi, Ningyu Zhang, Yinuo Jiang, Shumin Deng, Guozhou Zheng, Huajun Chen.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.15452)], 2023.12\n\n###### Tool Learning\n\n1. **Toolformer: Language Models Can Teach Themselves to Use Tools.**\n\n   *Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, Thomas Scialom.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.04761)], 2023.2\n\n2. **ART: Automatic multi-step reasoning and tool-use for large language models.**\n\n   *Bhargavi Paranjape, Scott Lundberg, Sameer Singh, Hannaneh Hajishirzi, Luke Zettlemoyer, Marco Tulio Ribeiro.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.09014)], 2023.3\n\n3. 
**Chameleon: Plug-and-Play Compositional Reasoning with Large Language Models.**\n\n   *Pan Lu, Baolin Peng, Hao Cheng, Michel Galley, Kai-Wei Chang, Ying Nian Wu, Song-Chun Zhu, Jianfeng Gao.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.09842)], 2023.4\n\n4. **CRITIC: Large Language Models Can Self-Correct with Tool-Interactive Critiquing.**\n\n   *Zhibin Gou, Zhihong Shao, Yeyun Gong, Yelong Shen, Yujiu Yang, Nan Duan, Weizhu Chen.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.11738)], 2023.5\n\n5. **Making Language Models Better Tool Learners with Execution Feedback.**\n\n   *Shuofei Qiao, Honghao Gui, Huajun Chen, Ningyu Zhang.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.13068)], 2023.5\n\n6. **CREATOR: Disentangling Abstract and Concrete Reasonings of Large Language Models through Tool Creation.**\n\n   *Cheng Qian, Chi Han, Yi R. Fung, Yujia Qin, Zhiyuan Liu, Heng Ji.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.14318)], 2023.5\n\n7. **ChatCoT: Tool-Augmented Chain-of-Thought Reasoning on Chat-based Large Language Models.**\n\n   *Zhipeng Chen, Kun Zhou, Beichen Zhang, Zheng Gong, Wayne Xin Zhao, Ji-Rong Wen.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.14323)], 2023.5\n\n8. **MultiTool-CoT: GPT-3 Can Use Multiple External Tools with Chain of Thought Prompting.**\n\n   *Tatsuro Inaba, Hirokazu Kiyomaru, Fei Cheng, Sadao Kurohashi.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.16896)], 2023.5\n   \n9. **ToolkenGPT: Augmenting Frozen Language Models with Massive Tools via Tool Embeddings**\n\n   *Shibo Hao, Tianyang Liu, Zhen Wang, Zhiting Hu* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.11554)], 2023.5\n   \n10. 
**SynWorld: Virtual Scenario Synthesis for Agentic Action Knowledge Refinement**\n    \n    *Runnan Fang, Xiaobin Wang, Yuan Liang, Shuofei Qiao, Jialong Wu, Zekun Xi, Ningyu Zhang, Yong Jiang, Pengjun Xie, Fei Huang, Huajun Chen* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.03561)], 2025.4\n\n\n#### Knowledge Enhanced Reasoning\n\n##### Implicit Knowledge\n\n1. **Generated Knowledge Prompting for Commonsense Reasoning.**\n\n    *Jiacheng Liu, Alisa Liu, Ximing Lu, Sean Welleck, Peter West, Ronan Le Bras, Yejin Choi, Hannaneh Hajishirzi.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.08387)], 2021.10\n\n2. **Rainier: Reinforced Knowledge Introspector for Commonsense Question Answering.**\n\n    *Jiacheng Liu, Skyler Hallinan, Ximing Lu, Pengfei He, Sean Welleck, Hannaneh Hajishirzi, Yejin Choi.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.03078)], 2022.10\n\n3. **Explanations from Large Language Models Make Small Reasoners Better.**\n\n    *Shiyang Li, Jianshu Chen, Yelong Shen, Zhiyu Chen, Xinlu Zhang, Zekun Li, Hong Wang, Jing Qian, Baolin Peng, Yi Mao, Wenhu Chen, Xifeng Yan.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.06726)], 2022.10\n\n4. **PINTO: Faithful Language Reasoning Using Prompt-Generated Rationales.**\n\n    *Peifeng Wang, Aaron Chan, Filip Ilievski, Muhao Chen, Xiang Ren.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.01562)], 2022.11\n\n5. **TSGP: Two-Stage Generative Prompting for Unsupervised Commonsense Question Answering.**\n\n    *Yueqing Sun, Yu Zhang, Le Qi, Qi Shi.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.13515)], 2022.11\n\n6. **Distilling Multi-Step Reasoning Capabilities of Large Language Models into Smaller Models via Semantic Decompositions.**\n\n    *Kumar Shridhar, Alessandro Stolfo, Mrinmaya Sachan.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.00193)], 2022.12\n\n7. 
**Teaching Small Language Models to Reason.**\n\n    *Lucie Charlotte Magister, Jonathan Mallinson, Jakub Adamek, Eric Malmi, Aliaksei Severyn.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.08410)], 2022.12\n\n8. **Large Language Models Are Reasoning Teachers.**\n\n    *Namgyu Ho, Laura Schmid, Se-Young Yun.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.10071)], 2022.12\n\n9. **Specializing Smaller Language Models towards Multi-Step Reasoning.**\n\n    *Yao Fu, Hao Peng, Litu Ou, Ashish Sabharwal, Tushar Khot.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.12726)], 2023.1\n\n10. **PaD: Program-aided Distillation Specializes Large Models in Reasoning.**\n\n    *Xuekai Zhu, Biqing Qi, Kaiyan Zhang, Xingwei Long, Bowen Zhou.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.13888)], 2023.5\n\n##### Explicit Knowledge\n\n1. **MemPrompt: Memory-assisted prompt editing to improve GPT-3 after deployment**\n\n   *Aman Madaan, Niket Tandon, Peter Clark, Yiming Yang.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2201.06009)], 2022.1\n\n2. **LogicSolver: Towards Interpretable Math Word Problem Solving with Logical Prompt-enhanced Learning.**\n\n   *Zhicheng Yang, Jinghui Qin, Jiaqi Chen, Liang Lin, Xiaodan Liang.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.08232)], 2022.5\n\n3. **Selective Annotation Makes Language Models Better Few-Shot Learners.**\n\n   *Hongjin Su, Jungo Kasai, Chen Henry Wu, Weijia Shi, Tianlu Wang, Jiayi Xin, Rui Zhang, Mari Ostendorf, Luke Zettlemoyer, Noah A. Smith, Tao Yu.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.01975)], 2022.9\n\n4. **Dynamic Prompt Learning via Policy Gradient for Semi-structured Mathematical Reasoning.**\n\n   *Pan Lu, Liang Qiu, Kai-Wei Chang, Ying Nian Wu, Song-Chun Zhu, Tanmay Rajpurohit, Peter Clark, Ashwin Kalyan.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.14610)], 2022.9\n\n5. 
**Interleaving Retrieval with Chain-of-Thought Reasoning for Knowledge-Intensive Multi-Step Questions.**\n\n   *Harsh Trivedi, Niranjan Balasubramanian, Tushar Khot, Ashish Sabharwal.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.10509)], 2022.12\n\n6. **Rethinking with Retrieval: Faithful Large Language Model Inference.**\n\n   *Hangfeng He, Hongming Zhang, Dan Roth.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.00303)], 2023.1\n\n7. **Verify-and-Edit: A Knowledge-Enhanced Chain-of-Thought Framework.**\n\n   *Ruochen Zhao, Xingxuan Li, Shafiq Joty, Chengwei Qin, Lidong Bing.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.03268)], 2023.5\n\n8. **A Dynamic Prompt-tuning Method for Data Augmentation with Associated Knowledge.**\n\n   *Qianqian Qi, Qiming Bao, Alex Yuxuan Peng, Jiamou Liu, Michael Witbrock.* [[abs](https:\u002F\u002Fopenreview.net\u002Fforum?id=hli7A0ioiS_)], 2023.3\n\n9. **Enhancing Data Augmentation with Knowledge-Enriched Data Generation via Dynamic Prompt-Tuning Method.**\n\n   *Qianqian Qi, Qiming Bao, Alex Yuxuan Peng, Jiamou Liu, Michael Witbrock.* [[abs](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F10651072\u002F)], 2020.03\n\n10. **HHH: An Online Medical Chatbot System based on Knowledge Graph and Hierarchical Bi-Directional Attention.**\n\n    *Qiming Bao, Lin Ni, Jiamou Liu.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2002.03140)], 2020.02\n\n11. **Agentic Knowledgeable Self-awareness.**\n\n    *Shuofei Qiao, Zhisong Qiu, Baochang Ren, Xiaobin Wang, Xiangyuan Ru, Ningyu Zhang, Xiang Chen, Yong Jiang, Pengjun Xie, Fei Huang, Huajun Chen.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.03553)], 2025.03\n\n\n#### Others\n\n1. **Language Model Cascades.**\n\n   *David Dohan, Winnie Xu, Aitor Lewkowycz, Jacob Austin, David Bieber, Raphael Gontijo Lopes, Yuhuai Wu, Henryk Michalewski, Rif A. Saurous, Jascha Sohl-dickstein, Kevin Murphy, Charles Sutton*. 
[[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2207.10342)], 2022.7\n\n2. **Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering.**\n\n   *Pan Lu, Swaroop Mishra, Tony Xia, Liang Qiu, Kai-Wei Chang, Song-Chun Zhu, Oyvind Tafjord, Peter Clark, Ashwin Kalyan.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.09513)], 2022.9\n\n3. **Multimodal Analogical Reasoning over Knowledge Graphs.**\n\n   *Ningyu Zhang, Lei Li, Xiang Chen, Xiaozhuan Liang, Shumin Deng, Huajun Chen.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.00312)], 2022.10\n\n4. **Scaling Instruction-Finetuned Language Models.**\n\n   *Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Alex Castro-Ros, Marie Pellat, Kevin Robinson, Dasha Valter, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, Jason Wei.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.11416)], 2022.10\n\n5. **See, Think, Confirm: Interactive Prompting Between Vision and Language Models for Knowledge-based Visual Reasoning.**\n\n   *Zhenfang Chen, Qinhong Zhou, Yikang Shen, Yining Hong, Hao Zhang, Chuang Gan.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.05226)], 2023.1\n\n6. **Multimodal Chain-of-Thought Reasoning in Language Models.**\n\n   *Zhuosheng Zhang, Aston Zhang, Mu Li, Hai Zhao, George Karypis, Alex Smola.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.00923)], 2023.2\n\n7. 
**Language Is not All You Need: Aligning Perception with Language Models.**\n\n   *Shaohan Huang, Li Dong, Wenhui Wang, Yaru Hao, Saksham Singhal, Shuming Ma, Tengchao Lv, Lei Cui, Owais Khan Mohammed, Qiang Liu, Kriti Aggarwal, Zewen Chi, Johan Bjorck, Vishrav Chaudhary, Subhojit Som, Xia Song, Furu Wei.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.14045)], 2023.2\n\n8. **Visual ChatGPT: Talking, Drawing and Editing with Visual Foundation Models.**\n\n   *Chenfei Wu, Shengming Yin, Weizhen Qi, Xiaodong Wang, Zecheng Tang, Nan Duan.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.04671)], 2023.3\n\n9. **ViperGPT: Visual Inference via Python Execution for Reasoning.**\n\n   *Dídac Surís, Sachit Menon, Carl Vondrick.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.08128)], 2023.3\n\n10. **MM-REACT: Prompting ChatGPT for Multimodal Reasoning and Action.**\n\n    *Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Ehsan Azarnasab, Faisal Ahmed, Zicheng Liu, Ce Liu, Michael Zeng, Lijuan Wang.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.11381)], 2023.3\n\n11. **Boosting Theory-of-Mind Performance in Large Language Models via Prompting.**\n\n    *Shima Rahimi Moghaddam, Christopher J. Honey.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.11490)], 2023.4\n\n12. **Enhancing Reasoning Capabilities of LLMs via Principled Synthetic Logic Corpus.**\n\n    *Terufumi Morishita, Gaku Morio, Atsuki Yamaguchi, Yasuhiro Sogawa.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.12498)], 2024.11\n\n13. **Multi2Claim: Generating Scientific Claims from Multi-Choice Questions for Scientific Fact-Checking.**\n\n    *Neset TAN, Trung Nguyen, Josh Bensemann, Alex Peng, Qiming Bao, Yang Chen, Mark Gahegan, Michael Witbrock.* [[abs](https:\u002F\u002Faclanthology.org\u002F2023.eacl-main.194\u002F)], 2023.5\n\n14. 
**Input-length-shortening and text generation via attention values.**\n\n    *Neset TAN, Alex Peng, Joshua Bensemann, Qiming Bao, Tim Hartill, Mark Gahegan, Michael Witbrock.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.07585)], 2023.3\n\n15. **DeepQR: Neural-based Quality Ratings for Learnersourced Multiple-Choice Questions.**\n\n    *Lin Ni, Qiming Bao, Xiaoxuan Li, Qianqian Qi, Paul Denny, Jim Warren, Michael Witbrock, Jiamou Liu.* [[abs](https:\u002F\u002Fojs.aaai.org\u002Findex.php\u002FAAAI\u002Farticle\u002Fview\u002F21562)], 2022.3\n\n16. **CoRA: Optimizing Low-Rank Adaptation with Common Subspace of Large Language Models.**\n\n    *Xiaojun Xiao, Sen Shen, Qiming Bao, Hongfei Rong, Kairui Liu, Zhongsheng Wang, Jiamou Liu.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.02119)], 2024.8\n\n17. **Natural Language Processing and Reasoning.**\n\n    *Qiming Bao, Michael Witbrock, Jiamou Liu.* [[abs](https:\u002F\u002F14h034160212.github.io\u002FQiming_Bao_IEEE_VTS_Natural_Language_Processing_Reasoning_Invited_Talk_Final.pdf)], 2022.8\n\n18. **Relating Blindsight and AI: A Review.**\n\n    *Joshua Bensemann, Qiming Bao, Gaël Gendron, Tim Hartill, Michael Witbrock.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2201.00616)], 2022.1\n\n19. **Developing And Assessing Language Models For Logical Reasoning Over Natural Language.**\n\n    *Qiming Bao.* [[abs](https:\u002F\u002Fhdl.handle.net\u002F2292\u002F71735)], 2025.4\n\n### Analysis\n\n1. **Can language models learn from explanations in context?**\n\n   *Andrew K. Lampinen, Ishita Dasgupta, Stephanie C. Y. Chan, Kory Matthewson, Michael Henry Tessler, Antonia Creswell, James L. McClelland, Jane X. Wang, Felix Hill.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.02329)], 2022.4\n\n2. **Emergent Abilities of Large Language Models.**\n\n   *Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, Ed H. 
Chi, Tatsunori Hashimoto, Oriol Vinyals, Percy Liang, Jeff Dean, William Fedus.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.07682)], 2022.6\n\n3. **Language models show human-like content effects on reasoning.**\n\n   *Ishita Dasgupta, Andrew K. Lampinen, Stephanie C. Y. Chan, Antonia Creswell, Dharshan Kumaran, James L. McClelland, Felix Hill.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2207.07051)], 2022.7\n\n4. **Rationale-Augmented Ensembles in Language Models.**\n\n   *Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Denny Zhou.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2207.00747)], 2022.7\n\n5. **Can Large Language Models Truly Understand Prompts? A Case Study with Negated Prompts.**\n\n   *Joel Jang, Seonghyeon Ye, Minjoon Seo.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.12711)], 2022.9\n\n6. **Text and Patterns: For Effective Chain of Thought, It Takes Two to Tango.**\n\n   *Aman Madaan, Amir Yazdanbakhsh.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.07686)], 2022.9\n\n7. **Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them.**\n\n   *Mirac Suzgun, Nathan Scales, Nathanael Schärli, Sebastian Gehrmann, Yi Tay, Hyung Won Chung, Aakanksha Chowdhery, Quoc V. Le, Ed H. Chi, Denny Zhou, Jason Wei.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.09261)], 2022.10\n\n8. **Language Models are Greedy Reasoners: A Systematic Formal Analysis of Chain-of-thought.**\n\n   *Abulhair Saparov, He He.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.01240)], 2022.10\n\n9. **Knowledge Unlearning for Mitigating Privacy Risks in Language Models.**\n\n   *Joel Jang, Dongkeun Yoon, Sohee Yang, Sungmin Cha, Moontae Lee, Lajanugen Logeswaran, Minjoon Seo.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.01504)], 2022.10\n\n10. **Emergent Analogical Reasoning in Large Language Models.**\n\n    *Taylor Webb, Keith J. 
Holyoak, Hongjing Lu.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.09196)], 2022.12\n\n11. **Towards Understanding Chain-of-Thought Prompting: An Empirical Study of What Matters.**\n\n    *Boshi Wang, Sewon Min, Xiang Deng, Jiaming Shen, You Wu, Luke Zettlemoyer, Huan Sun.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.10001)], 2022.12\n\n12. **On Second Thought, Let’s Not Think Step by Step! Bias and Toxicity in Zero-Shot Reasoning.**\n\n    *Omar Shaikh, Hongxin Zhang, William Held, Michael Bernstein, Diyi Yang.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.08061)], 2022.12\n\n13. **Can Retriever-Augmented Language Models Reason? The Blame Game Between the Retriever and the Language Model.**\n\n    *Parishad BehnamGhader, Santiago Miret, Siva Reddy.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.09146)], 2022.12\n\n14. **Why Can GPT Learn In-Context? Language Models Secretly Perform Gradient Descent as Meta-Optimizers.**\n\n    *Damai Dai, Yutao Sun, Li Dong, Yaru Hao, Zhifang Sui, Furu Wei.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.10559)], 2022.12\n\n15. **Dissociating language and thought in large language models: a cognitive perspective.**\n\n    *Kyle Mahowald, Anna A. Ivanova, Idan A. Blank, Nancy Kanwisher, Joshua B. Tenenbaum, Evelina Fedorenko.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.06627)], 2023.1\n\n16. **Large Language Models Can Be Easily Distracted by Irrelevant Context.**\n\n    *Freda Shi, Xinyun Chen, Kanishka Misra, Nathan Scales, David Dohan, Ed Chi, Nathanael Schärli, Denny Zhou.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.00093)], 2023.2\n\n17. **A Multitask, Multilingual, Multimodal Evaluation of ChatGPT on Reasoning, Hallucination, and Interactivity.**\n\n    *Yejin Bang, Samuel Cahyawijaya, Nayeon Lee, Wenliang Dai, Dan Su, Bryan Wilie, Holy Lovenia, Ziwei Ji, Tiezheng Yu, Willy Chung, Quyet V. 
Do, Yan Xu, Pascale Fung.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.04023)], 2023.2\n\n18. **ChatGPT is a Knowledgeable but Inexperienced Solver: An Investigation of Commonsense Problem in Large Language Models.**\n\n    *Ning Bian, Xianpei Han, Le Sun, Hongyu Lin, Yaojie Lu, Ben He.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.16421)], 2023.3\n\n19. **Why think step-by-step? Reasoning emerges from the locality of experience.**\n\n    *Ben Prystawski, Noah D. Goodman.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.03843)], 2023.4\n\n20. **Learning Deductive Reasoning from Synthetic Corpus based on Formal Logic.**\n\n    *Terufumi Morishita, Gaku Morio, Atsuki Yamaguchi, Yasuhiro Sogawa.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.07336)], 2023.8\n\n21. **Assessing and Enhancing the Robustness of Large Language Models with Task Structure Variations for Logical Reasoning.**\n\n    *Qiming Bao, Gaël Gendron, Alex Peng, Neset Tan, Michael Witbrock, Jiamou Liu.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.09430)], [[code](https:\u002F\u002Fgithub.com\u002FStrong-AI-Lab\u002FLogical-and-abstract-reasoning)], 2024.12\n\n22. **Large Language Models Are Not Strong Abstract Reasoners.**\n\n    *Gaël Gendron, Qiming Bao, Michael Witbrock, Gillian Dobbie.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.19555)], [[code](https:\u002F\u002Fgithub.com\u002FStrong-AI-Lab\u002FLogical-and-abstract-reasoning)], 2024.08\n\n23. **AbductionRules: Training Transformers to Explain Unexpected Inputs.**\n\n    *Nathan Young, Qiming Bao, Joshua Ljudo Bensemann, Michael J. Witbrock.* [[abs](https:\u002F\u002Faclanthology.org\u002F2022.findings-acl.19\u002F)], [[code](https:\u002F\u002Fgithub.com\u002FStrong-AI-Lab\u002FAbductionRules)], 2022.08\n\n24. **Can Pruning Improve Reasoning? 
Revisiting Long-CoT Compression with Capability in Mind for Better Reasoning**\n\n    *Shangziqi Zhao, Jiahao Yuan, Guisong Yang, Usman Naseem.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.14582)], 2025.05\n\n---\n\n## 🧰 Resources\n\n### Benchmarks and Tasks\n\n|     Reasoning Skills      | Benchmarks                                                   |\n| :-----------------------: | ------------------------------------------------------------ |\n| **Arithmetic Reasoning**  | [GSM8K](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.14168), [SVAMP](https:\u002F\u002Faclanthology.org\u002F2021.naacl-main.168), [ASDiv](https:\u002F\u002Faclanthology.org\u002F2020.acl-main.92\u002F), [AQuA-RAT](https:\u002F\u002Faclanthology.org\u002FP17-1015\u002F), [MAWPS](https:\u002F\u002Faclanthology.org\u002FN16-1136\u002F), [AddSub](https:\u002F\u002Faclanthology.org\u002FD14-1058\u002F), [MultiArith](https:\u002F\u002Faclanthology.org\u002FD15-1202\u002F), [SingleEq](https:\u002F\u002Faclanthology.org\u002FQ15-1042\u002F), [SingleOp](https:\u002F\u002Fdoi.org\u002F10.1162\u002Ftacl_a_00118) |\n| **Commonsense Reasoning** | [CommonsenseQA](https:\u002F\u002Faclanthology.org\u002FN19-1421\u002F), [StrategyQA](https:\u002F\u002Fdirect.mit.edu\u002Ftacl\u002Farticle\u002Fdoi\u002F10.1162\u002Ftacl_a_00370\u002F100680\u002FDid-Aristotle-Use-a-Laptop-A-Question-Answering), [ARC](https:\u002F\u002Farxiv.org\u002Fabs\u002F1803.05457), [SayCan](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.01691), [BoolQA](https:\u002F\u002Faclanthology.org\u002FN19-1300\u002F), [HotpotQA](https:\u002F\u002Faclanthology.org\u002FD18-1259\u002F), [OpenBookQA](https:\u002F\u002Faclanthology.org\u002FD18-1260\u002F), [PIQA](https:\u002F\u002Fyonatanbisk.com\u002Fpiqa\u002F), [WikiWhy](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.12152) |\n|  **Symbolic Reasoning**   | [Last Letter Concatenation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2201.11903), [Coin 
Flip](https:\u002F\u002Farxiv.org\u002Fabs\u002F2201.11903), Reverse List |\n|   **Logical Reasoning**   | [ProofWriter](https:\u002F\u002Farxiv.org\u002Fabs\u002F2012.13048), [EntailmentBank](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.08661), [RuleTaker](https:\u002F\u002Fwww.ijcai.org\u002Fproceedings\u002F2020\u002F537), [CLUTRR](https:\u002F\u002Faclanthology.org\u002FD19-1458\u002F), [FLD](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.07336), [FLDx2](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.12498) |\n| **Multimodal Reasoning**  | [SCIENCEQA](https:\u002F\u002Fscienceqa.github.io\u002F)                    |\n|        **Others**         | [BIG-bench](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2206.04615), [SCAN](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Flake18a.html), [Chain-of-Thought Hub](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.17306), [MR-BEN](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.13975), [WorFBench](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.07869) |\n\n### Tools\n\n- **[ThoughtSource](https:\u002F\u002Fgithub.com\u002FOpenBioLink\u002FThoughtSource)**: A central, open resource for data and tools related to chain-of-thought reasoning in LLMs. 
\n- **[LangChain](https:\u002F\u002Fgithub.com\u002Fhwchase17\u002Flangchain)**: A library designed to help developers build applications using LLMs combined with other sources of computation or knowledge.\n- **[LogiTorch](https:\u002F\u002Fgithub.com\u002FLogiTorch\u002Flogitorch)**: A PyTorch-based library for logical reasoning on natural language.\n- **[λprompt](https:\u002F\u002Fgithub.com\u002Fapproximatelabs\u002Flambdaprompt)**: A library for building full LLM-based prompt machines, including ones that self-edit to correct themselves and even self-write their own execution code.\n- **[Promptify](https:\u002F\u002Fgithub.com\u002Fpromptslab\u002FPromptify)**: A prompt-engineering library for solving NLP problems with LLMs and easily generating prompts for different NLP tasks with popular generative models such as GPT and PaLM.\n- **[MiniChain](https:\u002F\u002Fgithub.com\u002Fsrush\u002FMiniChain)**: A tiny library for coding with large language models that aims to implement the core prompt chaining functionality.\n- **[LlamaIndex](https:\u002F\u002Fgithub.com\u002Fjerryjliu\u002Fllama_index)**: A project that provides a central interface to connect your LLMs with external data.\n- **[EasyInstruct](https:\u002F\u002Fgithub.com\u002Fzjunlp\u002FEasyInstruct)**: A package for instructing Large Language Models (LLMs) like GPT-3 in your research experiments. It is designed to be easy to use and easy to extend.\n\n---\n\n## 🎉 Contributing\n\n- Add a new paper or update an existing paper, considering which category the work belongs to.\n- Use the same format as existing entries to describe the work.\n- Add the abstract link of the paper (`\u002Fabs\u002F` format if it is an arXiv publication).\n- A very brief explanation of why you think a paper should be added or updated is recommended.\n\n**Don't worry if you get something wrong; it will be fixed for you. 
Just contribute and promote your awesome work here!**\n\n### Contributors\n\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fzjunlp\u002FPrompt4ReasoningPapers\u002Fgraphs\u002Fcontributors\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fzjunlp_Prompt4ReasoningPapers_readme_d48623904465.png\" \u002F>\n\u003C\u002Fa>\n\n---\n\n## 🚩Citation \n\nIf you find this survey useful for your research, please consider citing\n\n```\n@inproceedings{qiao-etal-2023-reasoning,\n    title = \"Reasoning with Language Model Prompting: A Survey\",\n    author = \"Qiao, Shuofei  and\n      Ou, Yixin  and\n      Zhang, Ningyu  and\n      Chen, Xiang  and\n      Yao, Yunzhi  and\n      Deng, Shumin  and\n      Tan, Chuanqi  and\n      Huang, Fei  and\n      Chen, Huajun\",\n    booktitle = \"Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)\",\n    month = jul,\n    year = \"2023\",\n    address = \"Toronto, Canada\",\n    publisher = \"Association for Computational Linguistics\",\n    url = \"https:\u002F\u002Faclanthology.org\u002F2023.acl-long.294\",\n    pages = \"5368--5393\",\n    abstract = \"Reasoning, as an essential ability for complex problem-solving, can provide back-end support for various real-world applications, such as medical diagnosis, negotiation, etc. This paper provides a comprehensive survey of cutting-edge research on reasoning with language model prompting. We introduce research works with comparisons and summaries and provide systematic resources to help beginners. We also discuss the potential reasons for emerging such reasoning abilities and highlight future research directions. 
Resources are available at https:\u002F\u002Fgithub.com\u002Fzjunlp\u002FPrompt4ReasoningPapers (updated periodically).\",\n}\n```\n\n","# 使用语言模型提示进行推理的论文\n[![Awesome](https:\u002F\u002Fawesome.re\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fzjunlp\u002FPrompt4ReasoningPapers) \n[![许可证：MIT](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLicense-MIT-green.svg)](https:\u002F\u002Fopensource.org\u002Flicenses\u002FMIT)\n![](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flast-commit\u002Fzjunlp\u002FPrompt4ReasoningPapers?color=green) \n![](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FPRs-Welcome-red) \n\n## 🔔 最新消息\n- **2024年3月5日 我们发布了一篇新论文：“[KnowAgent：基于LLM的智能体的知识增强规划](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.03101)”**。\n- **2024年2月6日 我们发布了一篇新论文：“[EasyInstruct：大型语言模型的易用指令处理框架](https:\u002F\u002Farxiv.org\u002Fabs\u002F2402.03049)” ，并附带了一个HF演示 [EasyInstruct](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fzjunlp\u002FEasyInstruct)**。\n- **2024年1月3日 我们发布了一篇新论文：“[大型语言模型知识编辑的综合研究](https:\u002F\u002Farxiv.org\u002Fabs\u002F2401.01286)” ，同时推出一个新的基准测试集 [KnowEdit](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002Fzjunlp\u002FKnowEdit)! 
我们期待大家对这一主题的任何评论或讨论 :)**\n- **2023年7月12日 我们发布了 [EasyEdit](https:\u002F\u002Fgithub.com\u002Fzjunlp\u002FEasyEdit)，一个易于使用的大型语言模型知识编辑框架。**\n- **2023年6月19日 我们开源了 [KnowLM](https:\u002F\u002Fgithub.com\u002Fzjunlp\u002FKnowLM)，这是一个具备预训练和指令微调代码（支持多机多GPU部署）以及多种LLM的大规模知识型语言模型框架**。\n- **2023年3月27日 我们发布了 [EasyInstruct](https:\u002F\u002Fgithub.com\u002Fzjunlp\u002FEasyInstruct)，这是一个用于在您的研究实验中指导大型语言模型（LLMs），如ChatGPT的工具包。它设计得简单易用且易于扩展！**\n- **2023年2月19日 我们上传了我们的综述论文的[教程](https:\u002F\u002Fgithub.com\u002Fzjunlp\u002FPrompt4ReasoningPapers\u002Fblob\u002Fmain\u002Ftutorial.pdf)，帮助您更深入地了解使用语言模型提示进行推理的方法（附有该教程的[视频](https:\u002F\u002Fwww.techbeat.net\u002Ftalk-info?id=756)（中文））。**\n- **2022年12月19日 我们基于此仓库发布了一篇新的综述论文：“[使用语言模型提示进行推理：综述](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.09597)”！我们期待大家对这一主题的任何评论或讨论 :)**\n- **2022年9月14日 我们创建了这个仓库，用于维护关于“使用语言模型提示进行推理”的论文列表。**\n\n---\n\n## 🔍 目录\n\n- [🌟 简介](#-introduction)\n- [📜 论文](#-papers)\n  - [概述](#overview)\n  - [方法](#methods)\n    - [策略增强推理](#strategy-enhanced-reasoning)\n      - [提示工程](#prompt-engineering)\n        - [单阶段](#single-stage)\n        - [多阶段](#multi-stage)\n      - [流程优化](#process-Optimization)\n        - [自我优化](#self-optimization)\n        - [集成优化](#ensemble-optimization)\n        - [迭代优化](#iterative-optimization)\n      - [外部引擎](#external-engine)\n        - [物理模拟器](#physical-simulator)\n        - [代码解释器](#code-interpreter)\n        - [工具学习](#tool-learning)\n    - [知识增强推理](#knowledge-enhanced-reasoning)\n      - [隐式知识](#implicit-knowledge)\n      - [显式知识](#explicit-knowledge)\n    - [其他](#others)\n  - [分析](#analysis)\n- [🧰 资源](#-resources)\n    - [基准测试与任务](#benchmarks-and-tasks)\n    - [工具](#tools)\n- [🎉 贡献](#-contributing)\n- [🚩 引用 ](#-citation)\n\n---\n\n## 🌟 简介\n\n推理作为解决复杂问题的一项关键能力，可以为各种实际应用提供后端支持，例如医疗诊断、谈判等。本文对当前使用语言模型提示进行推理的前沿研究进行了全面综述。我们介绍了相关研究工作，并进行了比较和总结，同时提供了系统的资源来帮助初学者入门。此外，我们还探讨了这种推理能力出现的潜在原因，并指出了未来的研究方向。\n\n---\n\n## 📜 论文\n\n### 概述\n\n1. 
**基于语言模型提示的推理：综述。**\n\n   *乔硕飞、欧义欣、张宁宇、陈翔、姚云志、邓淑敏、谭传奇、黄飞、陈华军。* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.09597)]，2022年12月\n\n2. **迈向大型语言模型中的推理：综述。**\n\n   *黄杰、凯文·陈传昌。* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.10403)]，2022年12月\n\n3. **数学推理的深度学习综述。**\n\n   *陆攀、邱亮、于文浩、肖恩·韦莱克、常凯威。* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.10535)]，2022年12月\n\n4. **上下文学习综述。**\n\n   *董庆秀、李磊、戴大迈、郑策、吴志勇、常宝宝、孙旭、徐晶晶、李磊、隋志芳。* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.00234)]，2022年12月\n\n5. **知识增强型神经机器推理：综述。**\n\n   *坦莫伊·乔杜里、凌晨、张旭超、赵旭江、白广济、裴健、陈海峰、赵亮。* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.02093)]，2023年2月\n\n6. **增强型语言模型：综述。**\n\n   *格雷瓜尔·米亚隆、罗伯托·德西、玛丽亚·洛梅利、克里斯托福罗斯·纳尔帕蒂斯、拉姆·帕苏努鲁、罗伯塔·赖列阿努、巴普蒂斯特·罗齐耶、蒂莫·希克、简·德维迪-余、阿斯莉·切利基尔马兹、爱德华·格拉夫、扬·勒丘恩、托马斯·西亚洛姆。* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.07842)]，2023年2月\n\n7. **大型语言模型中知识的生命周期：综述。**\n\n   *曹博熙、林洪宇、韩先培、孙乐。* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.07616)]，2023年3月\n\n8. **提示就是全部吗？不是。指令学习的全面且更广阔的视角。**\n\n   *楼仁泽、张凯、尹文鹏。* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.10475)]，2023年3月\n\n9. **以自然语言为知识表示的逻辑推理：综述。**\n\n   *杨宗林、杜新雅、毛睿、倪金杰、埃里克·坎布里亚。* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.12023)]，2023年3月\n\n10. **自然语言推理：综述。**\n\n    *于飞、张洪波、王本友。* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.14725)]，2023年3月\n\n11. **大型语言模型综述。**\n\n    *赵鑫伟、周坤、李俊毅、唐天一、王小雷、侯玉鹏、闵英谦、张贝辰、张俊杰、董子灿、杜一凡、杨晨、陈雨硕、陈志鹏、蒋金浩、任瑞阳、李一凡、唐欣宇、刘子康、刘沛宇、聂建云、温继荣。* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.18223)]，2023年3月\n\n12. **基础模型的工具学习。**\n\n    *秦宇佳、胡圣鼎、林彦楷、陈伟泽、丁宁、崔甘渠、曾珍怡、黄宇飞、肖超君、韩驰、任义丰、苏宇生、王华东、钱成、田润初、朱昆仑、梁世豪、沈星宇、徐博凯、张振、叶依宁、李博文、唐子威、易静、朱宇章、戴振宁、燕兰、丛欣、陆雅茜、赵伟琳、黄宇翔、严俊熙、韩旭、孙贤、李大海、方杰森、杨程、吴彤爽、季恒、刘志远、孙茂松。* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.08354)]，2023年4月\n\n13. **思维链推理综述：进展、前沿与未来。**\n\n    *楚铮、陈景昌、陈强龙、俞卫江、何涛、王浩天、彭卫华、刘明、秦兵、刘婷。* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.15402)]，2023年9月\n\n14. 
**基础模型推理综述：概念、方法论与展望。**\n\n    *孙建凯、郑川阳、谢恩泽、刘正英、储瑞航、邱嘉宁、许佳琪、丁明宇、李宏洋、耿孟哲、吴岳、王文海、陈俊松、殷张悦、任晓哲、傅杰、何俊贤、袁武、刘琦、刘锡辉、李宇、董浩、程宇、张明、潘安恒、戴继峰、罗平、王京东、温继荣、邱锡鹏、郭一科、熊辉、刘群、李振国。* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.11562)]，2023年12月\n\n### 方法\n\n#### 策略增强型推理\n\n##### 提示工程\n\n###### 单阶段\n\n1. **针对常识推理任务的对比解释提示。**\n\n   *巴拉吉·帕兰贾佩、朱利安·迈克尔、马尔詹·加兹维内贾德、卢克·泽特勒莫耶、汉娜内·哈吉希日齐。* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.06823)]，2021年6月\n\n2. **用于可控常识推理的模板填充。**\n\n   *迪拉吉·拉贾戈帕尔、维韦克·凯坦、博格丹·萨卡列努、阿纳托尔·格尔什曼、安德鲁·法诺、爱德华·霍维。* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.00539)]，2021年11月\n\n3. **思维链提示激发大型语言模型的推理能力。**\n\n   *杰森·魏、王雪芝、达勒·舒尔曼斯、马尔滕·博斯马、布莱恩·伊克特、夏菲、爱德·H·奇、阮文奎、周登尼。* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2201.11903)]，2022年1月\n\n4. **大型语言模型是零样本推理者。**\n\n   *小岛武士、顾世祥、麦克尔·里德、松尾丰、岩泽佑介。* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.11916)]，2022年5月\n\n5. **基于心理学的思维链提示用于大型语言模型中的隐喻理解。**\n\n   *本·普里斯托斯基、保罗·蒂博多、诺亚·古德曼。* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.08141)]，2022年9月\n\n6. **基于复杂度的多步推理提示。**\n\n   *付瑶、彭浩、阿希什·萨巴瓦尔、彼得·克拉克、图沙尔·科特。* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.00720)]，2022年10月\n\n7. **语言模型是多语种思维链推理者。**\n\n   *史芙蕾、米拉克·苏兹贡、马库斯·弗莱塔格、王雪芝、苏拉杰·斯里瓦茨、索鲁什·沃索吉、郑炯元、泰伊、塞巴斯蒂安·鲁德尔、周登尼、迪潘詹·达斯、杰森·魏。* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.03057)]，2022年10月\n\n8. **大型语言模型中的自动思维链提示。**\n\n   *张卓胜、张阿斯顿、李牧、亚历克斯·斯莫拉。* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.03493)]，2022年10月\n\n9. **大型语言模型是少样本（1次）表格推理者。**\n\n   *陈文虎。* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.06710)]，2022年10月\n\n10. **通过上下文学习教授算法性推理。**\n\n    *周哈蒂、诺瓦·阿扎德、于戈·拉罗谢尔、阿伦·库维尔、贝赫南·奈沙布尔、哈妮·塞德吉。* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.09066)]，2022年11月\n\n11. **大型语言模型的主动思维链提示。**\n\n     *刁士哲、王鹏程、林勇、张通。* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.12246)]，2023年2月\n\n12. **基于标注数据的思维链自动提示增强与选择。**\n\n     *舒凯顺、刁士哲、张通。* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.12822)]，2023年2月\n\n13. 
**用于增强ChatGPT提示工程的提示模式目录。**\n\n     *朱尔斯·怀特、傅秋臣、山姆·海斯、迈克尔·桑德伯恩、卡洛斯·奥莱亚、亨利·吉尔伯特、阿什拉夫·埃尔纳沙尔、杰西·斯宾塞-史密斯、道格拉斯·施密特。* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.11382)]，2023年2月\n\n14. **ChatGPT提示模式用于提升代码质量、重构、需求获取与软件设计。**\n\n     *朱尔斯·怀特、山姆·海斯、傅秋臣、杰西·斯宾塞-史密斯、道格拉斯·施密特。* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.07839)]，2023年3月\n\n15. **通过自注释学习推理与记忆。**\n\n   *杰克·朗尚坦、舒巴姆·托什尼瓦尔、杰森·韦斯顿、阿瑟·斯拉姆、赛恩巴亚尔·苏赫巴特尔.* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.00833)], 2023年5月\n\n16. **计划与求解提示：通过大型语言模型提升零样本链式思维推理能力。**\n\n     *王磊、许万宇、兰义怀、胡志强、兰云石、李家伟、林亿鹏.* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.04091)], 2023年5月\n\n17. **超越链式思维：大型语言模型中的高效图式思维推理。**\n\n     *姚瑶、李祖超、赵海.* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.16582)], 2023年5月\n\n18. **重读提升语言模型的推理能力。**\n\n     *徐晓涵、陶崇阳、沈涛、徐灿、徐洪波、龙国栋、楼建光.* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.06275)], 2023年9月\n\n19. **基于离线逆强化学习的查询依赖型提示评估与优化。**\n\n    *孙浩、阿里汗·胡尤克、米哈埃拉·范德沙尔.* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.06553)], 2023年9月\n\n20. **基于抽象意义表示的逻辑驱动数据增强用于逻辑推理。**\n\n    *鲍启明、Alex Peng、邓振云、钟万军、盖尔·根德隆、内塞特·坦、内森·杨、陈阳、朱永华、迈克尔·维特布罗克、刘嘉谋.* [[摘要](https:\u002F\u002Faclanthology.org\u002F2024.findings-acl.353\u002F)], [[代码](https:\u002F\u002Fgithub.com\u002FStrong-AI-Lab\u002FLogical-Equivalence-driven-AMR-Data-Augmentation-for-Representation-Learning)], 2024年8月\n\n\n###### 多阶段\n\n1. **迭代式提示预训练语言模型以实现链式思维。**\n\n   *王博思、邓翔、孙欢.* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.08383)], 2022年3月\n\n2. **选择—推理：利用大型语言模型实现可解释的逻辑推理。**\n\n   *安东尼娅·克雷斯威尔、默里·沙纳汉、伊琳娜·希金斯.* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.09712)], 2022年5月\n\n3. **由简入繁提示法使大型语言模型具备复杂推理能力。**\n\n   *周登尼、纳撒尼尔·舍尔利、侯乐、贾森·魏、内森·斯凯尔斯、王雪芝、戴尔·舒尔曼斯、奥利维尔·布斯凯、吴科、艾德·奇.* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.10625)], 2022年5月\n\n4. **助产术提示：通过递归解释实现逻辑一致的推理。**\n\n   *郑在勋、秦连辉、肖恩·韦莱克、法泽·布拉曼、钱德拉·巴加瓦图拉、罗南·勒布拉斯、崔艺珍.* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.11822)], 2022年5月\n\n5. 
**使用大型语言模型进行忠实推理。**\n\n   *安东尼娅·克雷斯威尔、默里·沙纳汉.* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2208.14271)], 2022年8月\n\n6. **利用大型语言模型进行组合语义解析。**\n\n   *安德鲁·德罗兹多夫、纳撒尼尔·舍尔利、埃金·阿基尤雷克、内森·斯凯尔斯、宋欣颖、陈欣芸、奥利维尔·布斯凯、周登尼.* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.15003)], 2022年9月\n\n7. **分解式提示：一种解决复杂任务的模块化方法。**\n\n   *库赫特·图沙尔、特里维迪·哈什、芬利森·马修、傅尧、理查森·凯尔、克拉克·彼得、萨巴瓦尔·阿希什.* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.02406)], 2022年10月\n\n8. **衡量并缩小语言模型中的组合性差距。**\n\n   *奥菲尔·普雷斯、张慕儒、闵世温、路德维希·施密特、诺亚·A·史密斯、迈克·刘易斯.* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.03350)], 2022年10月\n\n9. **连续提示法用于分解复杂问题。**\n\n   *杜阿·德赫鲁、古普塔·希万舒、辛格·萨米尔、加德纳·马特.* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.04092)], 2022年12月\n\n10. **符号表示对少样本推理中上下文学习的影响。**\n\n    *张翰林、张一凡、李二然、邢埃里克.* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.08686)], 2022年12月\n\n11. **LAMBADA：基于自然语言的自动推理中的逆向链式推理。**\n\n    *卡泽米·赛耶德·梅赫兰、金娜琼、比蒂·迪普蒂、徐鑫、拉马昌德兰·迪帕克.* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.13894)], 2022年12月\n\n12. **迭代分解：通过监督推理过程改进科学问答。**\n\n    *雷珀特·贾斯汀、拉赫巴赫·本、乔治·查理、斯特宾·卢克、卞正源、阿普尔顿·玛吉、施图尔穆勒·安德烈亚斯.* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.01751)], 2023年1月\n\n13. **自我打磨：通过问题精炼提升大型语言模型的推理能力。**\n\n    *席志恒、金森杰、周宇豪、郑睿、高松阳、桂涛、张琪、黄宣静.* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.14497)], 2023年5月\n\n14. **自然语言上的多步演绎推理：一项关于分布外泛化的实证研究。**\n\n    *鲍启明、Alex Peng、哈蒂尔·蒂姆、内塞特·坦、邓振云、迈克尔·维特布罗克、刘嘉谋.* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2207.14000)], [[代码](https:\u002F\u002Fgithub.com\u002FStrong-AI-Lab\u002FMulti-Step-Deductive-Reasoning-Over-Natural-Language)], 2022年8月\n\n15. **探索迭代增强方法，以改进大型语言模型生成的学习者来源多项选择题解释。**\n\n    *鲍启明、莱诺宁·朱霍、Alex Peng、钟万军、皮斯托蒂·蒂莫西、黄爱丽丝、丹尼·保罗、维特布罗克·迈克尔以及刘嘉谋.* [[摘要](http:\u002F\u002Farxiv.org\u002Fabs\u002F2309.10444)], [[代码](https:\u002F\u002Fgithub.com\u002FStrong-AI-Lab\u002FExplanation-Generation)], 2025年3月\n\n16. 
**ChatLogic：将逻辑编程与大型语言模型结合用于多步推理。**\n\n    *王仲生、刘嘉谋、鲍启明、荣洪飞、张景峰.* [[摘要](https:\u002F\u002Fopenreview.net\u002Fforum?id=AOqGF7Po7Z)], [[代码](https:\u002F\u002Fgithub.com\u002FStrong-AI-Lab\u002FChatLogic)], 2024年2月\n\n\n##### 过程优化\n\n###### 自我优化\n\n1. **重新构想人机协作以生成自由文本解释。**\n\n   *莎拉·维格雷夫、杰克·赫塞尔、斯瓦布哈·斯瓦扬迪普塔、马克·里德尔、崔艺珍.* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2112.08674)], 2021年12月\n\n2. **少样本上下文学习中解释的不可靠性。**\n\n   *叶曦、格雷格·杜雷特.* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.03401)], 2022年5月\n\n3. **判别器引导下的语言模型多步推理。**\n\n   *穆罕默德·哈利法、拉贾努根·洛格斯瓦兰、李文泰、李洪洛、王陆.* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.14934)], 2023年5月\n\n4. **RCOT：通过逆转链式思维检测并纠正推理中的事实性不一致。**\n\n   *薛天赐、王子淇、王振海龙、韩驰、于鹏飞、季恒.* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.11499)], 2023年5月\n\n###### 集成优化\n\n1. **自一致性提升语言模型的链式思维推理能力。**\n\n   *王雪芝、贾森·魏、戴尔·舒尔曼斯、吴科、艾德·H·奇、纳朗·沙兰、乔德里·阿坎克夏、周登尼.* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.11171)], 2022年3月\n\n2. **关于提升语言模型推理能力的进展。**\n\n   *李一飞、林泽齐、张士卓、傅强、陈贝、楼建光、陈伟竹.* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.02336)], 2022年6月\n\n3. **基于复杂性的提示用于多步推理。**\n\n   *傅尧、彭浩、萨巴瓦尔·阿希什、克拉克·彼得、库赫特·图沙尔.* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.00720)], 2022年10月\n\n4. **大型语言模型是具有自我验证能力的推理者。**\n\n*Yixuan Weng、Minjun Zhu、Shizhu He、Kang Liu、Jun Zhao.* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.09561)]，2022年12月\n\n5. **通过多条思维链的元推理回答问题。**\n\n   *Ori Yoran、Tomer Wolfson、Ben Bogin、Uri Katz、Daniel Deutch、Jonathan Berant.* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.13007)]，2023年4月\n\n6. **思维之树：利用大型语言模型进行审慎的问题解决。**\n\n   *Shunyu Yao、Dian Yu、Jeffrey Zhao、Izhak Shafran、Thomas L. Griffiths、Yuan Cao、Karthik Narasimhan.* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.10601)]，2023年5月\n\n7. **通过多智能体辩论提升语言模型的事实性和推理能力。**\n\n   *Yilun Du、Shuang Li、Antonio Torralba、Joshua B. Tenenbaum、Igor Mordatch.* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.14325)]，2023年5月\n\n8. 
**AutoMix：自动混合语言模型**\n\n   *Aman Madaan、Pranjal Aggarwal、Ankit Anand、Srividya Pranavi Potharaju、Swaroop Mishra、Pei Zhou、Aditya Gupta、Dheeraj Rajagopal、Karthik Kappaganthu、Yiming Yang、Shyam Upadhyay、Mausam、Manaal Faruqui.* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.12963)]，2023年9月\n\n9. **逆向思维：以偏好引导的逆向推理预热增强大型语言模型。**\n\n    *Jiahao Yuan、Dehui Du、Hao Zhang、Zixiang Di、Usman Naseem.* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.12323)]，[[代码](https:\u002F\u002Fgithub.com\u002FJiahao-Yuan\u002FReversal-of-Thought)]，2024年10月\n\n###### 迭代优化\n\n1. **STaR：用推理来启动推理。**\n\n   *Eric Zelikman、Yuhuai Wu、Noah D. Goodman.* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.14465)]，2022年3月\n\n2. **大型语言模型可以自我改进。**\n\n   *Jiaxin Huang、Shixiang Shane Gu、Le Hou、Yuexin Wu、Xuezhi Wang、Hongkun Yu、Jiawei Han.* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.11610)]，2022年10月\n\n3. **Reflexion：具有动态记忆和自我反思能力的自主智能体。**\n\n   *Noah Shinn、Beck Labash、Ashwin Gopinath.* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.11366)]，2023年3月\n\n4. **Self-Refine：基于自我反馈的迭代精炼。**\n\n   *Aman Madaan、Niket Tandon、Prakhar Gupta、Skyler Hallinan、Luyu Gao、Sarah Wiegreffe、Uri Alon、Nouha Dziri、Shrimai Prabhumoye、Yiming Yang、Sean Welleck、Bodhisattwa Prasad Majumder、Shashank Gupta、Amir Yazdanbakhsh、Peter Clark.* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.17651)]，2023年3月\n\n5. **REFINER：对中间表示的推理反馈。**\n\n   *Debjit Paul、Mete Ismayilzada、Maxime Peyrard、Beatriz Borges、Antoine Bosselut、Robert West、Boi Faltings.* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.01904)]，2023年4月\n\n6. **使用语言模型进行推理即是在使用世界模型进行规划**\n\n   *Shibo Hao\\*、Yi Gu\\*、Haodi Ma、Joshua Jiahua Hong、Zhen Wang、Daisy Zhe Wang、Zhiting Hu* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.14992)]，2023年5月\n\n7. 
**通过逻辑增强大型语言模型的零样本思维链推理能力。**\n\n   *Xufeng Zhao、Mengdi Li、Wenhao Lu、Cornelius Weber、Jae Hee Lee、Kun Chu、Stefan Wermter.* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2309.13339)] [[代码](https:\u002F\u002Fgithub.com\u002Fxf-zhao\u002FLoT)]，2024年2月\n\n##### 外部引擎\n\n###### 物理模拟器\n\n1. **心灵之眼：通过仿真实现 grounded 语言模型推理。**\n\n   *Ruibo Liu、Jason Wei、Shixiang Shane Gu、Te-Yen Wu、Soroush Vosoughi、Claire Cui、Denny Zhou、Andrew M. Dai*. [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.05359)]，2022年10月\n\n###### 代码解释器\n\n1. **代码领域的语言模型是少样本常识学习者。**\n\n   *Aman Madaan、Shuyan Zhou、Uri Alon、Yiming Yang、Graham Neubig.* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.07128)]，2022年10月\n\n2. **PAL：程序辅助语言模型。**\n\n   *Luyu Gao、Aman Madaan、Shuyan Zhou、Uri Alon、Pengfei Liu、Yiming Yang、Jamie Callan、Graham Neubig.* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.10435)]，2022年11月\n\n3. **思维程序提示：为数值推理任务将计算与推理解耦。**\n\n   *Wenhu Chen、Xueguang Ma、Xinyi Wang、William W. Cohen.* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.12588)]，2022年11月\n\n4. **忠实的思维链推理。**\n\n   *Qing Lyu、Shreya Havaldar、Adam Stein、Li Zhang、Delip Rao、Eric Wong、Marianna Apidianaki、Chris Callison-Burch.* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.13379)]，2023年1月\n\n5. **大型语言模型是多功能分解器：为基于表格的推理分解证据和问题。**\n\n   *Yunhu Ye、Binyuan Hui、Min Yang、Binhua Li、Fei Huang、Yongbin Li.* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.13808)]，2023年1月\n\n6. **合成提示：为大型语言模型生成思维链示例。**\n\n   *Zhihong Shao、Yeyun Gong、Yelong Shen、Minlie Huang、Nan Duan、Weizhu Chen.* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.00618)]，2023年2月\n\n7. **MathPrompter：利用大型语言模型进行数学推理。**\n\n   *Shima Imani、Liang Du、Harsh Shrivastava.* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.05398)]，2023年3月\n\n8. **利用大型语言模型进行推理时的自动模型选择。**\n\n   *Xu Zhao、Yuxi Xie、Kenji Kawaguchi、Junxian He、Qizhe Xie.* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.14333)]，2023年5月\n\n9. 
**代码提示：一种用于大型语言模型复杂推理的神经符号方法。**\n\n   *Yi Hu、Haotong Yang、Zhouchen Lin、Muhan Zhang.* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.18507)]，2023年5月\n\n10. **IF 的魔力：探究代码领域大型语言模型的因果推理能力。**\n\n    *Xiao Liu、Da Yin、Chen Zhang、Yansong Feng、Dongyan Zhao.* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.19213)]，2023年5月\n\n11. **思维程序何时适用于推理？**\n\n    *Zhen Bi、Ningyu Zhang、Yinuo Jiang、Shumin Deng、Guozhou Zheng、Huajun Chen.* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.15452)]，2023年12月\n\n###### 工具学习\n\n1. **Toolformer：语言模型可以自我教授如何使用工具。**\n\n   *Timo Schick、Jane Dwivedi-Yu、Roberto Dessì、Roberta Raileanu、Maria Lomeli、Luke Zettlemoyer、Nicola Cancedda、Thomas Scialom.* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.04761)]，2023年2月\n\n2. **ART：大型语言模型的自动多步推理与工具使用。**\n\n   *Bhargavi Paranjape、Scott Lundberg、Sameer Singh、Hannaneh Hajishirzi、Luke Zettlemoyer、Marco Tulio Ribeiro.* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.09014)]，2023年3月\n\n3. **Chameleon：利用大型语言模型进行即插即用的组合式推理。**\n\n   *Pan Lu、Baolin Peng、Hao Cheng、Michel Galley、Kai-Wei Chang、Ying Nian Wu、Song-Chun Zhu、Jianfeng Gao.* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.09842)]，2023年4月\n\n4. **CRITIC：大型语言模型可通过工具交互式批评实现自我修正。**\n\n   *Zhibin Gou、Zhihong Shao、Yeyun Gong、Yelong Shen、Yujiu Yang、Nan Duan、Weizhu Chen.* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.11738)]，2023年5月\n\n5. **通过执行反馈使语言模型成为更好的工具学习者。**\n\n   *Shuofei Qiao、Honghao Gui、Huajun Chen、Ningyu Zhang.* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.13068)]，2023年5月\n\n6. **CREATOR：通过工具创建解构大型语言模型的抽象与具体推理。**\n\n*程谦、池瀚、Yi R. Fung、秦宇佳、刘志远、季恒.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.14318)], 2023年5月\n\n7. **ChatCoT：基于聊天型大语言模型的工具增强思维链推理。**\n\n   *陈志鹏、周坤、张贝晨、龚政、Wayne Xin Zhao、文继荣.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.14323)], 2023年5月\n\n8. **MultiTool-CoT：GPT-3 可通过思维链提示使用多种外部工具。**\n\n   *稻叶达郎、清丸博一、程飞、黑桥贞夫.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.16896)], 2023年5月\n   \n9. 
**ToolkenGPT：通过工具嵌入大规模扩展冻结语言模型的功能**\n\n   *郝世博、刘天阳、王振、胡志婷* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.11554)], 2023年5月\n   \n10. **SynWorld：用于智能体行动知识精炼的虚拟场景合成**\n    \n    *方润楠、王晓彬、梁源、乔硕飞、吴嘉隆、席泽坤、张宁宇、江勇、谢鹏军、黄飞、陈华俊* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.03561)], 2025年4月\n\n\n#### 知识增强推理\n\n##### 隐式知识\n\n1. **面向常识推理的生成式知识提示。**\n\n    *刘家成、Alisa Liu、陆锡明、Sean Welleck、Peter West、Ronan Le Bras、Yejin Choi、Hannaneh Hajishirzi.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.08387)], 2021年10月\n\n2. **Rainier：用于常识问答的强化知识内省器。**\n\n    *刘家成、Skyler Hallinan、陆锡明、何鹏飞、Sean Welleck、Hannaneh Hajishirzi、Yejin Choi.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.03078)], 2022年10月\n\n3. **大型语言模型的解释使小型推理模型表现更好。**\n\n    *李诗洋、陈建树、沈业龙、陈志宇、张欣璐、李泽坤、王洪、钱静、彭宝林、毛毅、陈文虎、严西峰.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.06726)], 2022年10月\n\n4. **PINTO：利用提示生成的理由实现忠实的语言推理。**\n\n    *王培峰、Aaron Chan、Filip Ilievski、陈沐浩、任翔.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.01562)], 2022年11月\n\n5. **TSGP：用于无监督常识问答的两阶段生成式提示。**\n\n    *孙月青、张宇、齐乐、石琪.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.13515)], 2022年11月\n\n6. **通过语义分解将大型语言模型的多步推理能力蒸馏到小型模型中。**\n\n    *库马尔·施里达尔、亚历山德罗·斯托尔福、姆林玛雅·萨昌.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.00193)], 2022年12月\n\n7. **教导小型语言模型进行推理。**\n\n    *露西·夏洛特·马吉斯特、乔纳森·马林森、雅库布·阿达梅克、埃里克·马尔米、阿里克谢·塞维林.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.08410)], 2022年12月\n\n8. **大型语言模型是推理教师。**\n\n    *洪南圭、劳拉·施密德、尹世英.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.10071)], 2022年12月\n\n9. **将小型语言模型专门化为多步推理。**\n\n    *傅瑶、彭浩、欧立图、阿希什·萨巴瓦尔、图沙尔·科特.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.12726)], 2023年1月\n\n10. **PaD：程序辅助蒸馏使大型模型在推理方面更加专业。**\n\n    *朱学凯、戚碧青、张凯燕、龙兴伟、周博文.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.13888)], 2023年5月\n\n##### 显式知识\n\n1. 
**MemPrompt：部署后通过记忆辅助提示编辑改进 GPT-3**\n\n   *阿曼·马丹、尼凯特·坦东、彼得·克拉克、杨一鸣.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2201.06009)], 2022年1月\n\n2. **LogicSolver：迈向可解释的数学应用题求解——基于逻辑提示的学习。**\n\n   *杨志诚、秦景辉、陈佳琪、林亮、梁晓丹.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.08232)], 2022年5月\n\n3. **选择性标注使语言模型成为更好的少样本学习者。**\n\n   *苏洪进、笠井纯悟、吴亨利、史伟嘉、王天禄、辛佳怡、张睿、玛丽·奥斯坦多夫、卢克·泽特勒莫耶、诺亚·A·史密斯、余涛.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.01975)], 2022年9月\n\n4. **基于策略梯度的动态提示学习，用于半结构化数学推理。**\n\n   *潘璐、邱亮、常凯威、吴颖年、朱松春、谭迈·拉杰普罗希特、彼得·克拉克、阿什温·卡利亚恩.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.14610)], 2022年9月\n\n5. **在密集知识型多步问题中，将检索与思维链推理交织进行。**\n\n   *哈什·特里维迪、尼兰詹·巴拉苏布拉马尼安、图沙尔·科特、阿希什·萨巴瓦尔.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.10509)], 2022年12月\n\n6. **借助检索重新思考：忠实的大语言模型推理。**\n\n   *何航峰、张宏明、丹·罗斯.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.00303)], 2023年1月\n\n7. **验证与编辑：一种知识增强型思维链框架。**\n\n   *赵若晨、李星轩、沙菲克·乔蒂、秦成伟、宾立东.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.03268)], 2023年5月\n\n8. **一种结合关联知识进行数据增强的动态提示调优方法。**\n\n   *齐倩倩、鲍启明、彭宇轩、刘嘉谋、迈克尔·维特布罗克.* [[abs](https:\u002F\u002Fopenreview.net\u002Fforum?id=hli7A0ioiS_)], 2023年3月\n\n9. **通过动态提示调优方法进行知识丰富型数据生成，从而增强数据增广效果。**\n\n   *齐倩倩、鲍启明、彭宇轩、刘嘉谋、迈克尔·维特布罗克.* [[abs](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F10651072\u002F)], 2020年3月\n\n10. **HHH：基于知识图谱和层次双向注意力的在线医疗聊天机器人系统。**\n\n    *鲍启明、倪琳、刘嘉谋.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2002.03140)], 2020年2月\n\n11. **智能体的知识型自我意识。**\n\n    *乔硕飞、邱志松、任宝昌、王晓彬、茹向远、张宁宇、陈向、江勇、谢鹏军、黄飞、陈华俊.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.03553)], 2025年4月\n\n\n#### 其他\n\n1. **语言模型级联。**\n\n   *大卫·多汉、Winnie Xu、艾托尔·莱科维奇、雅各布·奥斯汀、大卫·比伯、拉斐尔·贡蒂霍·洛佩斯、吴宇怀、亨里克·米哈列夫斯基、Rif A. Saurous、贾莎·索尔-迪克斯坦、凯文·墨菲、查尔斯·萨顿*. [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2207.10342)], 2022年7月\n\n2. 
**学会解释：通过思维链进行多模态推理以解答科学问题。**\n\n   *潘璐、斯瓦鲁普·米什拉、托尼·夏、邱亮、常凯威、朱松春、奥伊温德·塔夫约德、彼得·克拉克、阿什温·卡利亚恩.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.09513)], 2022年9月\n\n3. **基于知识图谱的多模态类比推理。**\n\n   *张宁宇、李磊、陈向、梁小专、邓淑敏、陈华俊.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.00312)], 2022年10月\n\n4. **指令微调后的语言模型规模化。**\n\n   *Hyung Won Chung、Le Hou、Shayne Longpre、Barret Zoph、Yi Tay、William Fedus、Yunxuan Li、Xuezhi Wang、Mostafa Dehghani、Siddhartha Brahma、Albert Webson、Shixiang Shane Gu、Zhuyun Dai、Mirac Suzgun、Xinyun Chen、Aakanksha Chowdhery、Alex Castro-Ros、Marie Pellat、Kevin Robinson、Dasha Valter、Sharan Narang、Gaurav Mishra、Adams Yu、Vincent Zhao、Yanping Huang、Andrew Dai、Hongkun Yu、Slav Petrov、Ed H. Chi、Jeff Dean、Jacob Devlin、Adam Roberts、Denny Zhou、Quoc V. Le、Jason Wei.* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.11416)], 2022年10月\n\n5. **看、想、确认：基于知识的视觉推理中视觉与语言模型之间的交互式提示方法。**\n\n   *陈振芳、周钦宏、沈一康、洪怡宁、张浩、甘闯。* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.05226)], 2023年1月\n\n6. **语言模型中的多模态思维链推理。**\n\n   *张卓生、阿斯顿·张、穆力、赵海、乔治·卡里皮斯、亚历克斯·斯莫拉。* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.00923)], 2023年2月\n\n7. **语言并非一切：将感知与语言模型对齐。**\n\n   *黄绍涵、李东、王文辉、郝雅茹、萨克沙姆·辛格哈尔、马树明、吕腾超、崔磊、欧韦斯·汗·穆罕默德、刘强、克里蒂·阿加瓦尔、池泽文、约翰·比约克、维什拉夫·乔杜里、苏博吉特·索姆、宋霞、魏富。* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.14045)], 2023年2月\n\n8. **视觉ChatGPT：与视觉基础模型对话、绘图和编辑。**\n\n   *吴晨飞、尹圣明、齐伟珍、王晓东、唐泽成、段楠。* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.04671)], 2023年3月\n\n9. **ViperGPT：通过Python执行进行视觉推理。**\n\n   *迪达克·苏里斯、萨奇特·梅农、卡尔·冯德里克。* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.08128)], 2023年3月\n\n10. **MM-REACT：提示ChatGPT实现多模态推理与行动。**\n\n    *杨正元、李林杰、王建峰、凯文·林、埃桑·阿扎尔纳萨布、费萨尔·艾哈迈德、刘子程、刘策、迈克尔·曾、王丽娟。* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.11381)], 2023年3月\n\n11. **通过提示提升大型语言模型的心智理论表现。**\n\n    *希玛·拉希米·莫加达姆、克里斯托弗·J·霍尼。* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.11490)], 2023年4月\n\n12. 
**通过原则性的合成逻辑语料库增强LLM的推理能力。**\n\n    *森下照文、森尾岳、山口敦纪、曾川靖弘。* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.12498)], 2024年11月\n\n13. **Multi2Claim：从多项选择题生成科学主张，用于科学事实核查。**\n\n    *内塞特·坦、阮忠、乔什·本塞曼、亚历克斯·彭、包启明、陈阳、马克·盖黑根、迈克尔·维特布罗克。* [[abs](https:\u002F\u002Faclanthology.org\u002F2023.eacl-main.194\u002F)], 2023年5月\n\n14. **通过注意力值缩短输入长度并生成文本。**\n\n    *内塞特·坦、亚历克斯·彭、乔什·本塞曼、包启明、蒂姆·哈蒂尔、马克·盖黑根、迈克尔·维特布罗克。* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.07585)], 2023年2月\n\n15. **DeepQR：基于神经网络的学习者生成多项选择题质量评分系统。**\n\n    *林妮、包启明、李晓萱、齐倩倩、保罗·丹尼、吉姆·沃伦、迈克尔·维特布罗克、刘嘉谋。* [[abs](https:\u002F\u002Fojs.aaai.org\u002Findex.php\u002FAAAI\u002Farticle\u002Fview\u002F21562)], 2022年3月\n\n16. **CoRA：利用大型语言模型的公共子空间优化低秩适应。**\n\n    *肖俊、沈森、包启明、荣洪飞、刘凯瑞、王仲盛、刘嘉谋。* [[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2409.02119)], 2024年8月\n\n17. **自然语言处理与推理。**\n\n    *包启明、迈克尔·维特布罗克、刘嘉谋。* [[abs](https:\u002F\u002F14h034160212.github.io\u002FQiming_Bao_IEEE_VTS_Natural_Language_Processing_Reasoning_Invited_Talk_Final.pdf)], 2022年8月\n\n18. **盲视与人工智能的关系：综述。**\n\n    *乔什·本塞曼、包启明、盖尔·让德龙、蒂姆·哈蒂尔、迈克尔·维特布罗克。* [[abs](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2201.00616)], 2022年1月\n\n19. **开发与评估用于自然语言逻辑推理的语言模型。**\n\n    *包启明。* [[abs](https:\u002F\u002Fhdl.handle.net\u002F2292\u002F71735)], 2025年4月\n\n### 分析\n\n1. **语言模型能否从上下文中的解释中学习？**\n\n   *安德鲁·K·兰皮宁、伊希塔·达斯古普塔、斯蒂芬妮·C·Y·陈、科里·马修森、迈克尔·亨利·泰斯勒、安东尼娅·克雷斯韦尔、詹姆斯·L·麦克莱兰、简·X·王、费利克斯·希尔。* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.02329)]，2022年4月\n\n2. **大型语言模型的涌现能力。**\n\n   *杰森·魏、易泰、里希·博马萨尼、科林·拉菲尔、巴雷特·佐夫、塞巴斯蒂安·博格奥德、丹妮·约加塔玛、马尔滕·博斯马、丹尼·周、唐纳德·梅茨勒、埃德·H·奇、辰范·桥本、奥里奥尔·维尼亚尔斯、珀西·梁、杰夫·迪恩、威廉·费杜斯。* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.07682)]，2022年6月\n\n3. **语言模型在推理中表现出类似人类的内容效应。**\n\n   *伊希塔·达斯古普塔、安德鲁·K·兰皮宁、斯蒂芬妮·C·Y·陈、安东尼娅·克雷斯韦尔、达尔尚·库马拉南、詹姆斯·L·麦克莱兰、费利克斯·希尔。* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2207.07051)]，2022年7月\n\n4. 
**语言模型中的理由增强集成方法。**\n\n   *王雪芝、杰森·魏、戴尔·舒尔曼斯、阮家乐、埃德·奇、丹尼·周。* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2207.00747)]，2022年7月\n\n5. **大型语言模型真的能理解提示吗？以否定提示为例的研究。**\n\n   *乔尔·张、成贤延、徐敏俊。* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.12711)]，2022年9月\n\n6. **文本与模式：有效的思维链需要双方配合**\n\n   *阿曼·马丹、阿米尔·亚兹丹巴赫什。* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.07686)]，2022年9月\n\n7. **具有挑战性的BIG-Bench任务及思维链能否解决它们。**\n\n   *米拉克·苏兹贡、内森·斯凯尔斯、纳塔纳埃尔·舍尔利、塞巴斯蒂安·格尔曼、易泰、洪元·郑、阿坎克沙·乔德里、阮家乐、埃德·H·奇、丹尼·周、杰森·魏。* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.09261)]，2022年10月\n\n8. **语言模型是贪婪的推理者：对思维链的系统性形式化分析。**\n\n   *阿布尔海尔·萨帕罗夫、何贺。* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.01240)]，2022年10月\n\n9. **知识遗忘用于缓解语言模型中的隐私风险。**\n\n   *乔尔·张、尹东根、杨素熙、车成民、李文泰、拉贾努根·洛格斯瓦兰、徐敏俊。* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.01504)]，2022年10月\n\n10. **大型语言模型中的涌现类比推理。**\n\n    *泰勒·韦伯、基思·J·霍利奥克、陆宏景。* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.09196)]，2022年12月\n\n11. **迈向对思维链提示的理解：一项关于关键因素的实证研究。**\n\n    *王博思、闵世源、邓翔、沈嘉明、吴友、卢克·泽特莫耶、孙欢。* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.10001)]，2022年12月\n\n12. **再想想吧，我们还是别一步一步地思考了！零样本推理中的偏见与毒性。**\n\n    *奥马尔·谢赫、张宏鑫、威廉·赫尔德、迈克尔·伯恩斯坦、杨迪毅。* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.08061)]，2022年12月\n\n13. **检索增强型语言模型能进行推理吗？检索器与语言模型之间的责任归属问题。**\n\n    *帕里沙德·贝纳姆加德尔、圣地亚哥·米雷特、西瓦·雷迪。* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.09146)]，2022年12月\n\n14. **GPT为何能在上下文中学习？语言模型作为元优化器秘密执行梯度下降。**\n\n    *代大伟、孙宇涛、董力、郝雅茹、隋志芳、魏福瑞。* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.10559)]，2022年12月\n\n15. **从认知视角看大型语言模型中语言与思维的分离。**\n\n    *凯尔·马霍瓦尔德、安娜·A·伊万诺娃、伊丹·A·布兰克、南希·坎威舍、约书亚·B·特南鲍姆、埃韦丽娜·费多伦科。* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.06627)]，2023年1月\n\n16. **大型语言模型很容易被无关的上下文分散注意力。**\n\n    *弗雷达·施伊、陈欣云、卡尼什卡·米斯拉、内森·斯凯尔斯、大卫·多汉、埃德·奇、纳塔纳埃尔·舍尔利、丹尼·周。* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.00093)]，2023年2月\n\n17. 
**ChatGPT在推理、幻觉和交互性方面的多任务、多语言、多模态评估。**\n\n    *裴艺珍、塞缪尔·卡哈维贾亚、李娜妍、戴文亮、苏丹、布莱恩·威利、霍利·洛维尼亚、季子薇、于铁正、钟威利、杜越、许燕、冯佩斯卡尔。* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.04023)]，2023年2月\n\n18. **ChatGPT是一位博学但缺乏经验的解题者：对大型语言模型中常识问题的探究。**\n\n    *卞宁、韩先培、孙乐、林鸿宇、陆耀杰、何奔。* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.16421)]，2023年3月\n\n19. **为什么要一步一步地思考？推理源于经验的局部性。**\n\n    *本·普里斯托斯基、诺亚·D·古德曼。* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.03843)]，2023年4月\n\n20. **基于形式逻辑的合成语料库学习演绎推理。**\n\n    *森下照文、森尾岳、山口敦树、曾川康弘。* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.07336)]，2023年8月\n\n21. **通过逻辑推理任务结构的变化评估并提升大型语言模型的鲁棒性。**\n\n    *包启明、盖尔·根德隆、亚历克斯·彭、内塞特·坦、迈克尔·维特布罗克、刘佳谋。* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.09430)]，[[代码](https:\u002F\u002Fgithub.com\u002FStrong-AI-Lab\u002FLogical-and-abstract-reasoning)]，2024年12月\n\n22. **大型语言模型并非强大的抽象推理者。**\n\n    *盖尔·根德隆、包启明、迈克尔·维特布罗克、吉利安·多比。* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.19555)]，[[代码](https:\u002F\u002Fgithub.com\u002FStrong-AI-Lab\u002FLogical-and-abstract-reasoning)]，2024年8月\n\n23. **AbductionRules：训练Transformer解释意外输入。**\n\n    *内森·杨、包启明、约书亚·柳多·本塞曼、迈克尔·J·维特布罗克。* [[摘要](https:\u002F\u002Faclanthology.org\u002F2022.findings-acl.19\u002F)]，[[代码](https:\u002F\u002Fgithub.com\u002FStrong-AI-Lab\u002FAbductionRules)]，2022年8月\n\n24. 
**剪枝能改善推理吗？重新审视长思维链压缩，以提升推理能力为目标**\n\n    *赵尚子琪、袁嘉浩、杨贵松、乌斯曼·纳西姆。* [[摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.14582)]，2025年5月\n---\n\n## 🧰 资源\n\n### 基准与任务\n\n|     推理技能      | 基准                                                   |\n| :-----------------------: | ------------------------------------------------------------ |\n| **算术推理**  | [GSM8K](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.14168), [SVAMP](https:\u002F\u002Faclanthology.org\u002F2021.naacl-main.168), [ASDiv](https:\u002F\u002Faclanthology.org\u002F2020.acl-main.92\u002F), [AQuA-RAT](https:\u002F\u002Faclanthology.org\u002FP17-1015\u002F), [MAWPS](https:\u002F\u002Faclanthology.org\u002FN16-1136\u002F), [AddSub](https:\u002F\u002Faclanthology.org\u002FD14-1058\u002F), [MultiArith](https:\u002F\u002Faclanthology.org\u002FD15-1202\u002F), [SingleEq](https:\u002F\u002Faclanthology.org\u002FQ15-1042\u002F), [SingleOp]( https:\u002F\u002Fdoi.org\u002F10.1162\u002Ftacl_a_00118) |\n| **常识推理** | [CommonsenseQA](https:\u002F\u002Faclanthology.org\u002FN19-1421\u002F), [StrategyQA](https:\u002F\u002Fdirect.mit.edu\u002Ftacl\u002Farticle\u002Fdoi\u002F10.1162\u002Ftacl_a_00370\u002F100680\u002FDid-Aristotle-Use-a-Laptop-A-Question-Answering), [ARC](https:\u002F\u002Farxiv.org\u002Fabs\u002F1803.05457), [SayCan](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.01691), [BoolQA](https:\u002F\u002Faclanthology.org\u002FN19-1300\u002F), [HotpotQA](https:\u002F\u002Faclanthology.org\u002FD18-1259\u002F), [OpenBookQA](https:\u002F\u002Faclanthology.org\u002FD18-1260\u002F), [PIQA](https:\u002F\u002Fyonatanbisk.com\u002Fpiqa\u002F), [WikiWhy](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.12152) |\n|  **符号推理**   | [最后一个字母拼接](https:\u002F\u002Farxiv.org\u002Fabs\u002F2201.11903), [硬币翻转](https:\u002F\u002Farxiv.org\u002Fabs\u002F2201.11903), 反转列表 |\n|   **逻辑推理**   | [ProofWriter](https:\u002F\u002Farxiv.org\u002Fabs\u002F2012.13048), [EntailmentBank](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.08661), 
[RuleTaker](https:\u002F\u002Fwww.ijcai.org\u002Fproceedings\u002F2020\u002F537), [CLUTRR](https:\u002F\u002Faclanthology.org\u002FD19-1458\u002F), [FLD](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.07336), [FLDx2](https:\u002F\u002Farxiv.org\u002Fabs\u002F2411.12498) |\n| **多模态推理**  | [SCIENCEQA](https:\u002F\u002Fscienceqa.github.io\u002F)                    |\n|        **其他**         | [BIG-bench](https:\u002F\u002Fdoi.org\u002F10.48550\u002FarXiv.2206.04615), [SCAN](http:\u002F\u002Fproceedings.mlr.press\u002Fv80\u002Flake18a.html), [Chain-of-Thought Hub](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.17306), [MR-BEN](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.13975), [WorFBench](https:\u002F\u002Farxiv.org\u002Fabs\u002F2410.07869) |\n\n### 工具\n\n- **[ThoughtSource](https:\u002F\u002Fgithub.com\u002FOpenBioLink\u002FThoughtSource)**：一个集中式的开源资源，用于提供与大语言模型链式思维推理相关的数据和工具。\n- **[LangChain](https:\u002F\u002Fgithub.com\u002Fhwchase17\u002Flangchain)**：一个旨在帮助开发者构建结合大语言模型与其他计算或知识来源的应用程序的库。\n- **[LogiTorch](https:\u002F\u002Fgithub.com\u002FLogiTorch\u002Flogitorch)**：一个基于PyTorch的自然语言逻辑推理库。\n- **[λprompt](https:\u002F\u002Fgithub.com\u002Fapproximatelabs\u002Flambdaprompt)**：一个允许构建完整的大型语言模型提示引擎的库，其中包括能够自我编辑以纠正甚至自动生成执行代码的功能。\n- **[Promptify](https:\u002F\u002Fgithub.com\u002Fpromptslab\u002FPromptify)**：提示工程工具，可利用大语言模型解决自然语言处理问题，并轻松为GPT、PaLM等主流生成模型生成不同NLP任务的提示，尽在Promptify。\n- **[MiniChain](https:\u002F\u002Fgithub.com\u002Fsrush\u002FMiniChain)**：一个小型库，用于使用大型语言模型进行编程，旨在实现核心的提示链功能。\n- **[LlamaIndex](https:\u002F\u002Fgithub.com\u002Fjerryjliu\u002Fllama_index)**：一个项目，提供了一个中心化的接口，用于将您的大语言模型与外部数据连接起来。\n- **[EasyInstruct](https:\u002F\u002Fgithub.com\u002Fzjunlp\u002FEasyInstruct)**：一个用于在研究实验中指导大型语言模型（如GPT-3）的软件包。它设计简单易用且易于扩展。\n\n---\n\n## 🎉 贡献\n\n- 添加新论文或更新现有论文，并思考该工作应归入哪个类别。\n- 使用与现有条目相同的格式来描述该工作。\n- 添加论文的摘要链接（如果是arXiv预印本，则采用`\u002Fabs\u002F`格式）。\n- 建议简要说明您认为为何应添加或更新某篇论文。\n\n**如果您不小心写错了也没关系，我们会帮您修正。只需贡献您的优秀工作，推广到这里吧！**\n\n### 贡献者\n\n\u003Ca 
href=\"https:\u002F\u002Fgithub.com\u002Fzjunlp\u002FPrompt4ReasoningPapers\u002Fgraphs\u002Fcontributors\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fzjunlp_Prompt4ReasoningPapers_readme_d48623904465.png\" \u002F>\n\u003C\u002Fa>\n\n---\n\n## 🚩引用 \n\n如果您发现本综述对您的研究有所帮助，请考虑引用以下内容：\n\n```\n@inproceedings{qiao-etal-2023-reasoning,\n    title = \"Reasoning with Language Model Prompting: A Survey\",\n    author = \"Qiao, Shuofei  and\n      Ou, Yixin  and\n      Zhang, Ningyu  and\n      Chen, Xiang  and\n      Yao, Yunzhi  and\n      Deng, Shumin  and\n      Tan, Chuanqi  and\n      Huang, Fei  and\n      Chen, Huajun\",\n    booktitle = \"Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)\",\n    month = jul,\n    year = \"2023\",\n    address = \"Toronto, Canada\",\n    publisher = \"Association for Computational Linguistics\",\n    url = \"https:\u002F\u002Faclanthology.org\u002F2023.acl-long.294\",\n    pages = \"5368--5393\",\n    abstract = \"Reasoning, as an essential ability for complex problem-solving, can provide back-end support for various real-world applications, such as medical diagnosis, negotiation, etc. This paper provides a comprehensive survey of cutting-edge research on reasoning with language model prompting. We introduce research works with comparisons and summaries and provide systematic resources to help beginners. We also discuss the potential reasons for emerging such reasoning abilities and highlight future research directions. Resources are available at https:\u002F\u002Fgithub.com\u002Fzjunlp\u002FPrompt4ReasoningPapers (updated periodically).\",\n}\n```","# Prompt4ReasoningPapers 快速上手指南\n\n## 📖 项目简介\n**Prompt4ReasoningPapers** 并非一个可安装的软件工具或代码库，而是一个由浙江大学 ZJUNLP 团队维护的**开源论文列表与综述资源库**。该项目旨在系统性地整理关于“基于语言模型提示的推理（Reasoning with Language Model Prompting）”的前沿研究论文、基准测试及相关资源。\n\n本指南将指导开发者如何获取该资源库，并高效利用其中的论文列表和分类体系进行学习与研究。\n\n## 🛠️ 环境准备\n\n本项目主要为文档和资源索引，无需复杂的运行环境。仅需具备以下基础条件即可浏览和使用：\n\n*   **操作系统**：Windows \u002F macOS \u002F Linux 均可。\n*   **前置依赖**：\n    *   **Git**：用于克隆仓库到本地（推荐）。\n    *   **Web 浏览器**：用于直接访问 GitHub 页面或论文链接。\n    *   **PDF 阅读器**：用于阅读下载的学术论文。\n\n> **💡 国内加速建议**：\n> 若访问 GitHub 速度较慢，推荐使用国内镜像源克隆仓库：\n> *   **Gitee 镜像**（如有同步）：`https:\u002F\u002Fgitee.com\u002Fmirrors\u002FPrompt4ReasoningPapers` (需确认最新同步状态)\n> *   **通用加速**：使用 `git clone https:\u002F\u002Fghproxy.com\u002Fhttps:\u002F\u002Fgithub.com\u002Fzjunlp\u002FPrompt4ReasoningPapers.git` 通过代理加速克隆。\n\n## 📥 
安装步骤（获取资源）\n\n由于本项目是资源列表，所谓的“安装”即是将仓库克隆到本地以便离线查阅或贡献。\n\n### 方法一：使用 Git 克隆（推荐）\n\n在终端中执行以下命令：\n\n```bash\n# 官方源\ngit clone https:\u002F\u002Fgithub.com\u002Fzjunlp\u002FPrompt4ReasoningPapers.git\n\n# 或者使用国内加速源（如果上述速度慢）\ngit clone https:\u002F\u002Fghproxy.com\u002Fhttps:\u002F\u002Fgithub.com\u002Fzjunlp\u002FPrompt4ReasoningPapers.git\n```\n\n进入目录：\n```bash\ncd Prompt4ReasoningPapers\n```\n\n### 方法二：直接在线浏览\n\n无需下载，直接访问项目主页查看最新整理的论文列表：\n*   **GitHub 主页**: [https:\u002F\u002Fgithub.com\u002Fzjunlp\u002FPrompt4ReasoningPapers](https:\u002F\u002Fgithub.com\u002Fzjunlp\u002FPrompt4ReasoningPapers)\n*   **综述论文**: [Reasoning with Language Model Prompting: A Survey](https:\u002F\u002Farxiv.org\u002Fabs\u002F2212.09597)\n\n## 🚀 基本使用\n\n本项目的使用核心在于**检索论文**和**理解分类体系**。以下是快速上手的三个步骤：\n\n### 1. 浏览论文分类体系\n打开本地的 `README.md` 文件或在线查看 **Contents** 章节。项目将推理方法系统地分为以下几类，您可根据研究兴趣直达对应板块：\n\n*   **Strategy Enhanced Reasoning (策略增强推理)**\n    *   *Prompt Engineering*: 包含单阶段（Single-Stage，如 Chain-of-Thought）和多阶段（Multi-Stage）提示工程。\n    *   *Process Optimization*: 涉及自优化、集成优化和迭代优化。\n    *   *External Engine*: 结合物理模拟器、代码解释器或工具学习。\n*   **Knowledge Enhanced Reasoning (知识增强推理)**\n    *   区分隐式知识（Implicit）和显式知识（Explicit）的利用。\n*   **Analysis & Resources**: 包含各类基准测试（Benchmarks）和分析文章。\n\n### 2. 查找特定论文\n在 `README.md` 的 **Papers** 部分，每篇论文都提供了标题、作者、发表时间和 `[abs]` 链接。\n\n**示例：查找思维链（Chain-of-Thought）相关论文**\n1.  定位到 `Methods` -> `Strategy Enhanced Reasoning` -> `Prompt Engineering` -> `Single-Stage`。\n2.  找到经典论文：**\"Chain of Thought Prompting Elicits Reasoning in Large Language Models\"** (Jason Wei et al., 2022)。\n3.  点击链接 `[abs](https:\u002F\u002Farxiv.org\u002Fabs\u002F2201.11903)` 直接跳转至 arXiv 阅读摘要或下载 PDF。\n\n### 3. 利用配套资源\n项目不仅列出论文，还关联了相关的代码库和基准测试。\n*   **教程视频**：在 `News` 栏目中可找到关于该综述的中文讲解视频链接，适合初学者快速建立概念。\n*   **关联项目**：注意查看同一团队（zjunlp）发布的配套工具，如 `EasyEdit` (知识编辑)、`KnowLM` (知识增强大模型框架) 等，这些通常在新闻更新中提及并附有 GitHub 链接。\n\n### 4. 参与贡献（可选）\n如果您发现了新的相关论文，可以通过以下方式贡献：\n1.  Fork 本仓库。\n2.  
按照现有的 Markdown 格式在 `README.md` 对应分类下添加论文条目。\n3.  提交 Pull Request (PR)。\n\n---\n*注：本指南仅针对资源库的获取与使用。若需复现具体论文中的算法，请参考各论文链接中指向的独立代码仓库。*","某金融科技公司算法团队正致力于提升大模型在复杂信贷风险评估中的逻辑推理能力，以辅助人工审批。\n\n### 没有 Prompt4ReasoningPapers 时\n- **文献检索如大海捞针**：团队成员需手动在 arXiv 和谷歌学术中筛选海量论文，难以区分哪些是真正提升推理能力的有效方法，耗时且易遗漏关键成果。\n- **技术路线选择盲目**：面对思维链（CoT）、自洽性（Self-Consistency）等众多提示策略，缺乏系统对比，只能凭经验“试错”，导致实验周期长、资源浪费严重。\n- **复现门槛高企**：开源代码分散且标准不一，缺乏统一的基准测试（Benchmark）参考，新人上手困难，难以快速验证前沿算法在实际业务数据上的效果。\n- **知识更新滞后**：无法及时追踪如“外部工具调用”或“知识增强推理”等最新细分领域的突破，导致模型迭代速度落后于竞争对手。\n\n### 使用 Prompt4ReasoningPapers 后\n- **一站式权威导航**：直接利用其分类清晰的论文列表，快速定位到\"Multi-Stage Prompt Engineering\"等与风控场景高度匹配的前沿研究，文献调研效率提升数倍。\n- **策略选型有据可依**：参考综述中对各类方法的优缺点总结及对比，迅速锁定适合金融逻辑推导的“过程优化”方案，大幅减少无效实验尝试。\n- **资源对接无缝衔接**：通过集成的 Benchmarks 和 Tools 链接，直接复用成熟的评估框架和代码库，将新算法的验证周期从数周缩短至几天。\n- **紧跟前沿动态**：依托社区持续更新的机制（如新增的 KnowAgent 规划论文），团队能即时引入知识增强规划技术，显著提升模型处理复杂因果链条的准确性。\n\nPrompt4ReasoningPapers 将碎片化的推理研究转化为系统化的工程指南，让团队从“盲目摸索”转向“精准打击”，极大加速了高可靠推理模型的落地进程。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fzjunlp_Prompt4ReasoningPapers_0e8b174f.png","zjunlp","ZJUNLP","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fzjunlp_4dd6d5d4.jpg","Knowledge Engine Lab: A NLP & KG Group of  Zhejiang  University",null,"huajunsir@zju.edu.cn","ChenHuajun","http:\u002F\u002Fzjunlp.org","https:\u002F\u002Fgithub.com\u002Fzjunlp",1005,67,"2026-04-14T15:18:59","MIT",1,"","未说明",{"notes":89,"python":87,"dependencies":90},"该项目主要是一个论文综述列表和资源集合（Awesome List），用于整理关于“语言模型提示推理”的研究论文、方法和基准测试。README 
内容中未包含任何可执行的代码库、安装脚本或具体的运行环境配置需求。用户主要使用该仓库来查阅文献链接和了解相关研究进展，而非部署软件服务。",[],[35,16,14],[93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112],"prompt","reasoning","awsome-list","chain-of-thought","paper-list","survey","nlp","datasets","language-models","natural-language-processing","large-language-models","artificial-intelligence","arithmetic-reasoning","commonsense-reasoning","symbolic-reasoning","prompt-engineering","logical-reasoning","llm","chatgpt","gpt-3","2026-03-27T02:49:30.150509","2026-04-17T08:24:11.268467",[116,121,126,131,135,140],{"id":117,"question_zh":118,"answer_zh":119,"source_url":120},36894,"如何获取论文中的分类图（Taxonomy Figure）源文件？","分类图是使用 LaTeX 绘制的。您可以在仓库中找到名为 `categories_tree_small.tex` 或 `categories_tree_big.tex` 的源文件来生成或查看该图片。","https:\u002F\u002Fgithub.com\u002Fzjunlp\u002FPrompt4ReasoningPapers\u002Fissues\u002F17",{"id":122,"question_zh":123,"answer_zh":124,"source_url":125},36895,"为什么 Chain-of-Thought (CoT) 方法被归类为单阶段（Single-Stage）而不是多阶段（Multi-Stage）？","虽然 CoT 包含生成推理步骤和答案两个部分，但维护者指出，实验表明获取答案的步骤对于推理本身并非必须，其主要作用是让模型按特定格式输出。因此，与典型的多阶段方法相比，CoT 的两个阶段代表性不强，故将其归类为单阶段方法。","https:\u002F\u002Fgithub.com\u002Fzjunlp\u002FPrompt4ReasoningPapers\u002Fissues\u002F5",{"id":127,"question_zh":128,"answer_zh":129,"source_url":130},36896,"如果想推荐新的相关论文或综述到这个仓库，应该怎么做？","您可以直接在 GitHub 上提交 Pull Request (PR)。维护者欢迎社区贡献，并建议用户直接通过 PR 添加新的工作内容以便审查和合并。","https:\u002F\u002Fgithub.com\u002Fzjunlp\u002FPrompt4ReasoningPapers\u002Fissues\u002F15",{"id":132,"question_zh":133,"answer_zh":134,"source_url":125},36897,"Zero-CoT (Large Language Models are Zero-Shot Reasoners) 为什么也被归类为单阶段方法？","尽管 Zero-CoT 通过两阶段的 input-output 得到结果，但其核心逻辑与标准 CoT 类似，重点在于激发模型的推理能力而非复杂的多阶段交互流程。根据项目目前的分类标准，这类通过提示工程直接激发推理的方法通常归入单阶段范畴（参考对 CoT 的分类解释）。",{"id":136,"question_zh":137,"answer_zh":138,"source_url":139},36898,"仓库是否接受关于逻辑推理数据增强和评估的新论文？","是的，仓库接受相关领域的最新论文。例如，关于对比学习的数据增强、提示增强（Prompt Augmentation）以及分布外（OOD）逻辑推理评估的论文已被考虑添加。如果您有相关论文，可以通过 Issue 讨论或直接提交 PR 
进行推荐。","https:\u002F\u002Fgithub.com\u002Fzjunlp\u002FPrompt4ReasoningPapers\u002Fissues\u002F6",{"id":141,"question_zh":142,"answer_zh":143,"source_url":144},36899,"是否有其他视角的逻辑推理综述可以补充参考？","有的。除了本仓库整理的内容外，还有从推理范式（端到端、前向、后向）角度出发的综述《Nature Language Reasoning, A Survey》。该工作已被本项目收录，可以作为互补资料帮助更好地理解推理任务。","https:\u002F\u002Fgithub.com\u002Fzjunlp\u002FPrompt4ReasoningPapers\u002Fissues\u002F4",[]]