[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-dave1010--tree-of-thought-prompting":3,"tool-dave1010--tree-of-thought-prompting":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",145895,2,"2026-04-08T11:32:59",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108111,"2026-04-08T11:23:26",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 
恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":10,"last_commit_at":59,"category_tags":60,"status":17},4487,"LLMs-from-scratch","rasbt\u002FLLMs-from-scratch","LLMs-from-scratch 是一个基于 PyTorch 的开源教育项目，旨在引导用户从零开始一步步构建一个类似 ChatGPT 的大型语言模型（LLM）。它不仅是同名技术著作的官方代码库，更提供了一套完整的实践方案，涵盖模型开发、预训练及微调的全过程。\n\n该项目主要解决了大模型领域“黑盒化”的学习痛点。许多开发者虽能调用现成模型，却难以深入理解其内部架构与训练机制。通过亲手编写每一行核心代码，用户能够透彻掌握 Transformer 架构、注意力机制等关键原理，从而真正理解大模型是如何“思考”的。此外，项目还包含了加载大型预训练权重进行微调的代码，帮助用户将理论知识延伸至实际应用。\n\nLLMs-from-scratch 特别适合希望深入底层原理的 AI 开发者、研究人员以及计算机专业的学生。对于不满足于仅使用 API，而是渴望探究模型构建细节的技术人员而言，这是极佳的学习资源。其独特的技术亮点在于“循序渐进”的教学设计：将复杂的系统工程拆解为清晰的步骤，配合详细的图表与示例，让构建一个虽小但功能完备的大模型变得触手可及。无论你是想夯实理论基础，还是为未来研发更大规模的模型做准备",90106,"2026-04-06T11:19:32",[35,15,13,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":67,"readme_en":68,"readme_zh":69,"quickstart_zh":70,"use_case_zh":71,"hero_image_url":72,"owner_login":73,"owner_name":74,"owner_avatar_url":75,"owner_bio":76,"owner_company":77,"owner_location":78,"owner_email":79,"owner_twitter":73,"owner_website":80,"owner_url":81,"languages":76,"stars":82,"forks":83,"last_commit_at":84,"license":85,"difficulty_score":86,"env_os":87,"env_gpu":88,"env_ram":88,"env_deps":89,"category_tags":92,"github_topics":93,"view_count":32,"oss_zip_url":76,"oss_zip_packed_at":76,"status":17,"created_at":97,"updated_at":98,"faqs":99,"releases":127},5545,"dave1010\u002Ftree-of-thought-prompting","tree-of-thought-prompting","Using Tree-of-Thought Prompting to boost ChatGPT's reasoning","tree-of-thought-prompting 是一种旨在提升大语言模型（如 ChatGPT）推理能力的提示工程技术。它借鉴了“思维树”框架的理念，对现有的“思维链”方法进行了扩展与升级，让模型在回答问题时能够像人类一样进行多路径探索、自我纠错并逐步积累知识，从而显著增强解决复杂逻辑问题的能力。\n\n该技术主要解决了大模型在面对多步骤推理或包含干扰信息的复杂场景时容易“迷路”或得出错误结论的痛点。例如，在追踪物体位置变化的逻辑题中，传统方法可能导致模型忽略关键转折步骤，而 tree-of-thought-prompting 能引导模型自主发现并修正推理过程中的偏差，使较低版本的模型也能展现出接近高阶模型的推理水平。\n\n这一工具特别适合 AI 开发者、研究人员以及需要处理复杂逻辑任务的技术人员使用。其核心亮点在于赋予了模型“深思熟虑”的机制：不再局限于单一线性的思考路径，而是允许模型生成多种可能的解题思路，评估每条路径的可行性，并在发现错误时自动回溯调整。这种模拟人类决策过程的方法，无需更换更强大的模型底座，即可通过优化提示策略有效挖掘现有模型的潜力，为构建更可靠的智能应用提供了新思路","tree-of-thought-prompting 是一种旨在提升大语言模型（如 ChatGPT）推理能力的提示工程技术。它借鉴了“思维树”框架的理念，对现有的“思维链”方法进行了扩展与升级，让模型在回答问题时能够像人类一样进行多路径探索、自我纠错并逐步积累知识，从而显著增强解决复杂逻辑问题的能力。\n\n该技术主要解决了大模型在面对多步骤推理或包含干扰信息的复杂场景时容易“迷路”或得出错误结论的痛点。例如，在追踪物体位置变化的逻辑题中，传统方法可能导致模型忽略关键转折步骤，而 tree-of-thought-prompting 能引导模型自主发现并修正推理过程中的偏差，使较低版本的模型也能展现出接近高阶模型的推理水平。\n\n这一工具特别适合 AI 开发者、研究人员以及需要处理复杂逻辑任务的技术人员使用。其核心亮点在于赋予了模型“深思熟虑”的机制：不再局限于单一线性的思考路径，而是允许模型生成多种可能的解题思路，评估每条路径的可行性，并在发现错误时自动回溯调整。这种模拟人类决策过程的方法，无需更换更强大的模型底座，即可通过优化提示策略有效挖掘现有模型的潜力，为构建更可靠的智能应用提供了新思路。","# Using Tree-of-Thought Prompting to boost ChatGPT's reasoning\n\n[![DOI](https:\u002F\u002Fzenodo.org\u002Fbadge\u002FDOI\u002F10.5281\u002Fzenodo.10323452.svg)](https:\u002F\u002Fdoi.org\u002F10.5281\u002Fzenodo.10323452)\n\n\n## Abstract\n\nTree-of-Thought (ToT) Prompting, a fresh technique borrowing ideas from the Tree-of-Thoughts framework, broadens and enhances the well-established Chain-of-Thought prompting concept, thereby enabling Large Language Models, like ChatGPT, to exhibit superior reasoning abilities. 
This Tree-of-Thought Prompting technique permits Large Language Models to rectify their errors autonomously while progressively accumulating knowledge.\n\nIn one example, a ToT prompt improves ChatGPT 3.5's reasoning ability to answer a question that could previously only be answered by ChatGPT 4.\n\n## Complex questions for LLMs\n\nLarge Language Models (LLMs) often struggle to answer certain types of complex questions. Here's a representative example:\n\n    Bob is in the living room.\n    He walks to the kitchen, carrying a cup.\n    He puts a ball in the cup and carries the cup to the bedroom.\n    He turns the cup upside down, then walks to the garden.\n    He puts the cup down in the garden, then walks to the garage.\n    Where is the ball?\n\nThe correct answer is that the ball is in the bedroom. Although LLMs are non-deterministic, the responses I received during several attempts were as follows:\n\n* ChatGPT 3.5: The ball is in the garden. ❌\n* ChatGPT 4: The ball is in the bedroom. [followed by an explanation] ✅\n\n## Chain-of-Thought prompting\n\nA widely recognized concept known as \"Chain-of-Thought prompting\" (CoT for short) exists, which encourages Large Language Models (LLMs) to elucidate their thought process, thereby increasing the likelihood of a correct response. The roots of this concept can be traced back to a paper from January 2022 titled: [Chain-of-Thought Prompting Elicits Reasoning in Large Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2201.11903).\n\nAt its core, the CoT technique can be implemented by adding something akin to the following to a prompt:\n\n    Think carefully and logically, explaining your answer.\n\nThis approach enhances the performance of LLMs in answering many questions, although its effectiveness varies.\n\nHere's an example of ChatGPT 3.5's response:\n\n    Based on the given sequence of events, we can logically determine the location of the ball. Let's analyze each step:\n    1. Bob is in the living room.\n     - No mention of the ball in the living room.\n    2. Bob walks to the kitchen, carrying a cup.\n     - No mention of the ball in the kitchen yet.\n    3. Bob puts a ball in the cup.\n     - The ball is now inside the cup.\n    4. Bob carries the cup to the bedroom.\n     - The ball is still inside the cup.\n    5. Bob turns the cup upside down.\n     - If the ball was inside the cup, it would have fallen out when Bob turned it upside down.\n    6. Bob walks to the garden.\n     - Bob left the cup in the garden, but there is no mention of the ball being removed from the cup.\n    7. Bob puts the cup down in the garden.\n     - The ball, if it was in the cup, would still be in the cup in the garden.\n    8. Bob walks to the garage.\n     - No mention of the ball being taken to the garage.\n    Based on the given information, we can conclude that the ball is still inside the cup in the garden.\n\n* ChatGPT 3.5 CoT: The ball is in the garden. ❌\n\nYou can see how ChatGPT realised the ball fell out at step 5 but still didn't get the answer correct.\n\nCan we do any better, without resorting to ChatGPT 4?\n\n## Tree-of-Thought Framework\n\nRecent weeks have seen the emergence of several papers and Python repositories introducing the concept of Tree-of-Thoughts (ToT):\n\n* [Large Language Model Guided Tree-of-Thought](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.08291), 15 May 2023. 
[Github](https:\u002F\u002Fgithub.com\u002Fjieyilong\u002Ftree-of-thought-puzzle-solver).\n* [Tree of Thoughts: Deliberate Problem Solving with Large Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.10601), 17 May 2023. [Github](https:\u002F\u002Fgithub.com\u002Fysymyth\u002Ftree-of-thought-llm) (no code as of 22nd May)\n* [Tree of Thoughts](https:\u002F\u002Fgithub.com\u002Fkyegomez\u002Ftree-of-thoughts) Github, 21st May 2023\n\nThe underlying principle is to provide the LLM an opportunity to explore multiple branches of reasoning while self-evaluating as it progresses. According to the referenced papers, this approach seems to be highly effective for certain problems.\n\n## Tree-of-Thought Prompting\n\nThe existing papers discussing the Tree-of-Thought framework appear to be premised on making multiple calls to an LLM, typically via Python. One of the main advantages of Chain-of-Thought reasoning is that it can be achieved with ChatGPT in a single sentence prompt.\n\nThis led me to ponder: could we improve upon the results of CoT by utilizing a ToT-style prompt?\n\nHere's the prompt I used:\n\n    Imagine three different experts are answering this question.\n    All experts will write down 1 step of their thinking,\n    then share it with the group.\n    Then all experts will go on to the next step, etc.\n    If any expert realises they're wrong at any point then they leave.\n    The question is...\n\nAnd here's the response from ChatGPT 3.5:\n\n    Expert 1: The ball is in the living room.\n\n    Expert 2: The ball is in the kitchen.\n\n    Expert 3: The ball is in the bedroom.\n\n    Expert 1: Bob carries the cup to the bedroom, so the ball must be in the cup.\n\n    Expert 2: Oh, I see my mistake. Yes, the ball is in the cup.\n\n    Expert 3: Agreed, the ball is in the cup in the bedroom.\n\n    Expert 1: Next, Bob turns the cup upside down in the bedroom.\n\n    Expert 2: After that, Bob walks to the garden and puts the cup down.\n\n    Expert 3: Therefore, the ball must have fallen out of the cup when Bob turned it upside down in the bedroom. So, the ball is in the bedroom, not in the cup anymore.\n\n    Expert 1: Oh, you're right. I made a mistake. The ball is in the bedroom, not in the cup.\n\n    Expert 2: Agreed, the ball is in the bedroom.\n\n    Expert 3: Bob then walks to the garage, so the ball remains in the bedroom. It is not in the garden or the garage.\n\n    Expert 1: Absolutely, the ball is still in the bedroom.\n\n    Expert 2: Yes, the ball hasn't moved from the bedroom.\n\n    Expert 3: Therefore, the ball is in the bedroom, not in the garden or the garage.\n\n    All three experts agree that the ball is in the bedroom.\n\n* ChatGPT 3.5 ToT: The ball is in the bedroom. ✅\n\nYou can see how this time ChatGPT 3.5 gets the answer correct!\n\n# Limitations\n\n* This approach has not undergone extensive testing yet, but preliminary results indicate improved performance over traditional CoT prompts.\n* The ToT framework comprises more components than those represented in my example prompt.\n\n\n# Enhancement, Feedback, and Contributions\n\nThere's room to refine the prompt for improved performance, potentially leading to more concise responses.\n\nShould you find success with any other prompts, we welcome you to submit a PR and add them to `tree-of-thought-prompts.txt`.\n\n# Additional thoughts\n\nThe Tree-of-Thought framework bears resemblance to an **organizational decision-making process** involving multiple stakeholders. 
The structure tries to emulate consensus and diversity of thought.\n\nThe effect can be extended by using multiple agents that are fine-tuned to handle different parts of a task. AutoGPT is a good example of this, where it self-evaluates LLM plans against a goal. Using differently tuned LLMs for specialisation or even completely different LLMs would allow greater diversity of thought. These LLMs could interact in different modes (using Team Topologies terminology):\n\n* **Collaboration**: This could involve combining and iterating the outputs of multiple models to generate a final response to a user's input. For example, a general-purpose model might generate a preliminary response, which is then refined or enhanced by a specialized model.\n* **X-as-a-Service**: Some models might provide services to others, such as pre-processing input data, post-processing output data, or providing contextual information. These models would operate in a service role, supporting the functions of the other models.\n* **Facilitating**: Some models might play a facilitative role, for example by training other models, monitoring their performance, or providing feedback that can be used to improve them.\n\nHigh-performing teams often outperform individuals in decision-making. Therefore, it's plausible that adopting other organizational structures and characteristics could enhance the performance of LLMs. In addition to diversity of thought, specialisation and consensus, we may be able to emulate:\n\n* **Hierarchy**: where simpler queries are handled by a lower-level model and more complex ones are escalated to more capable or specialized models\n* **Redundancy**: ensuring that if one model fails to generate an accurate or useful output, another might be able to step in and provide a better result\n\n# Acknowledgements\n\n* [Chain-of-Thought Prompting Elicits Reasoning in Large Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2201.11903), Jan 2022.\n* [Large Language Model Guided Tree-of-Thought](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.08291), 15 May 2023. [Github](https:\u002F\u002Fgithub.com\u002Fjieyilong\u002Ftree-of-thought-puzzle-solver).\n* [Tree of Thoughts: Deliberate Problem Solving with Large Language Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.10601), 17 May 2023. 
[Github](https:\u002F\u002Fgithub.com\u002Fprinceton-nlp\u002Ftree-of-thought-llm)\n* [Tree of Thoughts](https:\u002F\u002Fgithub.com\u002Fkyegomez\u002Ftree-of-thoughts) Github, 21st May 2023\n\n\n# Citations\n\nPlease cite this repository if you use the code.\n\n    @misc{tree-of-thought-prompting,\n    \ttitle        = {Using Tree-of-Thought Prompting to boost ChatGPT's reasoning},\n    \tauthor       = {Dave Hulbert},\n    \tyear         = 2023,\n    \tmonth        = may,\n    \tjournal      = {GitHub repository},\n    \tpublisher    = {Zenodo},\n    \tdoi          = {10.5281\u002FZENODO.10323452},\n    \turl          = {https:\u002F\u002Fdoi.org\u002F10.5281\u002Fzenodo.10323452},\n    \thowpublished = {\\url{https:\u002F\u002Fgithub.com\u002Fdave1010\u002Ftree-of-thought-prompting}}\n    }\n","# 使用思维树提示增强ChatGPT的推理能力\n\n[![DOI](https:\u002F\u002Fzenodo.org\u002Fbadge\u002FDOI\u002F10.5281\u002Fzenodo.10323452.svg)](https:\u002F\u002Fdoi.org\u002F10.5281\u002Fzenodo.10323452)\n\n\n## 摘要\n\n思维树提示（ToT）是一种借鉴思维树框架理念的新方法，它扩展并增强了成熟的思维链提示概念，从而使像ChatGPT这样的大型语言模型展现出更出色的推理能力。这种思维树提示技术使大型语言模型能够在逐步积累知识的同时，自主纠正错误。\n\n在一个示例中，使用思维树提示后，ChatGPT 3.5的推理能力得到了提升，能够回答原本只有ChatGPT 4才能正确解答的问题。\n\n## 面向LLM的复杂问题\n\n大型语言模型（LLMs）通常难以回答某些类型的复杂问题。以下是一个具有代表性的例子：\n\n    Bob在客厅里。\n    他拿着一个杯子走向厨房。\n    他在杯子里放了一个球，然后端着杯子走进卧室。\n    他把杯子倒过来，接着走到花园。\n    他在花园里放下杯子，随后走向车库。\n    球现在在哪里？\n\n正确答案是：球在卧室里。尽管LLMs具有非确定性，但在多次尝试中，我得到的回答如下：\n\n* ChatGPT 3.5：球在花园里。❌\n* ChatGPT 4：球在卧室里。[并附有解释] ✅\n\n## 思维链提示\n\n一种广为人知的概念称为“思维链提示”（简称CoT），它鼓励大型语言模型（LLMs）阐明其思考过程，从而提高回答正确的可能性。这一概念的起源可以追溯到2022年1月发表的一篇论文：《思维链提示激发大型语言模型的推理能力》（Chain-of-Thought Prompting Elicits Reasoning in Large Language Models, arXiv:2201.11903）。\n\n从本质上讲，思维链提示技术可以通过在提示中加入类似以下内容来实现：\n\n    请仔细、逻辑地思考，并解释你的答案。\n\n这种方法能够提升LLMs在许多问题上的表现，但其效果因情况而异。\n\n以下是ChatGPT 3.5的一个回答示例：\n\n    根据给定的事件顺序，我们可以逻辑地推断出球的位置。让我们逐条分析：\n    1. Bob在客厅里。\n     - 未提及球在客厅中。\n    2. Bob拿着杯子走向厨房。\n     - 目前尚未提到球在厨房。\n    3. Bob把球放入杯子中。\n     - 此时球已在杯子里。\n    4. Bob端着杯子走进卧室。\n     - 球仍然在杯子里。\n    5. Bob把杯子倒过来。\n     - 如果球在杯子里，那么当Bob将杯子倒置时，球就会掉出来。\n    6. Bob走向花园。\n     - Bob把杯子留在了花园里，但并未提到球被从杯子里取出。\n    7. Bob把杯子放在花园里。\n     - 如果球在杯子里，它应该仍在花园里的杯子里。\n    8. 
Bob走向车库。\n     - 未提及球被带到车库。\n    综上所述，我们可以得出结论：球仍然在花园里的杯子中。\n\n* ChatGPT 3.5 CoT：球在花园里。❌\n\n可以看出，ChatGPT意识到球在第5步就已掉出，但仍未能给出正确答案。\n\n那么，在不依赖ChatGPT 4的情况下，我们能否做得更好呢？\n\n## 思维树框架\n\n近几周，出现了多篇论文和Python代码库，介绍了思维树（ToT）的概念：\n\n* 《大型语言模型引导下的思维树》（Large Language Model Guided Tree-of-Thought, arXiv:2305.08291），2023年5月15日。[GitHub链接](https:\u002F\u002Fgithub.com\u002Fjieyilong\u002Ftree-of-thought-puzzle-solver)。\n* 《思维树：利用大型语言模型进行审慎的问题解决》（Tree of Thoughts: Deliberate Problem Solving with Large Language Models, arXiv:2305.10601），2023年5月17日。[GitHub链接](https:\u002F\u002Fgithub.com\u002Fysymyth\u002Ftree-of-thought-llm)（截至5月22日尚无代码）。\n* 《思维树》（Tree of Thoughts, GitHub），2023年5月21日。\n\n其核心原理是为LLM提供探索多种推理路径的机会，并在推进过程中进行自我评估。根据相关论文，这种方法对某些问题似乎非常有效。\n\n## 思维树提示\n\n现有的关于思维树框架的论文似乎都基于通过Python对LLM进行多次调用。而思维链推理的一大优势在于，它只需通过一句简单的提示就能在ChatGPT中实现。\n\n这促使我思考：我们是否可以通过使用类似思维树风格的提示来改进思维链提示的效果呢？\n\n我使用的提示如下：\n\n    假设三位不同的专家正在回答这个问题。\n    所有专家都会写下自己思考的第一步，\n    然后将其分享给小组成员。\n    接着，所有专家将继续下一步，以此类推。\n    如果任何一位专家在任何时候发现自己错了，就会退出。\n    问题是……\n\n以下是ChatGPT 3.5的回应：\n\n    专家1：球在客厅里。\n\n    专家2：球在厨房里。\n\n    专家3：球在卧室里。\n\n    专家1：Bob把杯子端进了卧室，所以球一定在杯子里。\n\n    专家2：哦，我明白了，我的确错了。没错，球确实在杯子里。\n\n    专家3：同意，球确实在卧室里的杯子里。\n\n    专家1：接下来，Bob在卧室里把杯子倒了过来。\n\n    专家2：之后，Bob走向花园，把杯子放下了。\n\n    专家3：因此，球一定是在Bob把杯子倒置时从杯子里掉出来的。所以，球在卧室里，不再在杯子里了。\n\n    专家1：哦，你说得对。我确实犯了错误。球确实在卧室里，而不是在杯子里。\n\n    专家2：同意，球在卧室里。\n\n    专家3：Bob随后走向车库，所以球仍然在卧室里。它既不在花园里，也不在车库里。\n\n    专家1：完全同意，球还在卧室里。\n\n    专家2：是的，球一直都在卧室里。\n\n    专家3：因此，球在卧室里，而不是在花园或车库里。\n\n    三位专家一致认为，球在卧室里。\n\n* ChatGPT 3.5 ToT：球在卧室里。✅\n\n可以看到，这一次ChatGPT 3.5给出了正确的答案！\n\n# 局限性\n\n* 该方法尚未经过广泛测试，但初步结果表明，其性能优于传统的思维链提示。\n* 思维树框架包含的内容比我在示例中展示的提示更为丰富。\n\n\n# 改进、反馈与贡献\n\n该提示仍有优化空间，以进一步提升性能，甚至可能生成更简洁的回答。\n\n如果您在其他提示方面取得了成功，欢迎提交拉取请求，并将其添加到`tree-of-thought-prompts.txt`文件中。\n\n# 补充思考\n\n思维树框架与涉及多方利益相关者的**组织决策过程**颇为相似。该结构旨在模拟共识和思想多样性。\n\n这一效果可以通过使用多个经过微调、分别负责处理任务不同部分的智能体来进一步扩展。AutoGPT便是一个很好的例子，它会根据目标对大语言模型生成的计划进行自我评估。如果采用针对特定领域进行微调的模型，甚至完全不同的大语言模型，就能实现更广泛的思想多样性。这些模型可以按照团队拓扑学的术语以不同模式进行交互：\n\n* **协作模式**：这可能涉及将多个模型的输出结合起来并迭代优化，从而生成对用户输入的最终响应。例如，一个通用模型可能会先生成初步响应，再由一个专业模型对其进行细化或增强。\n* **X即服务模式**：某些模型可以为其他模型提供服务，比如对输入数据进行预处理、对输出数据进行后处理，或者提供上下文信息。这些模型将以服务角色运行，支持其他模型的功能。\n* **促进模式**：有些模型可能扮演促进者的角色，例如训练其他模型、监控其性能，或提供可用于改进它们的反馈。\n\n高效团队在决策方面往往优于个人。因此，借鉴其他组织结构和特征来提升大语言模型的表现是完全可行的。除了思想多样性、专业化和共识之外，我们还可以尝试模拟：\n\n* **层级结构**：较简单的查询由低级别模型处理，而更复杂的查询则会被升级到能力更强或更专业的模型；\n* **冗余机制**：确保当某个模型无法生成准确或有用的结果时，另一个模型能够介入并提供更好的答案。\n\n# 致谢\n\n* [思维链提示激发大型语言模型的推理能力](https:\u002F\u002Farxiv.org\u002Fabs\u002F2201.11903)，2022年1月。\n* [大语言模型引导的思维树](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.08291)，2023年5月15日。[GitHub仓库](https:\u002F\u002Fgithub.com\u002Fjieyilong\u002Ftree-of-thought-puzzle-solver)。\n* [思维树：利用大型语言模型进行审慎问题解决](https:\u002F\u002Farxiv.org\u002Fabs\u002F2305.10601)，2023年5月17日。[GitHub仓库](https:\u002F\u002Fgithub.com\u002Fprinceton-nlp\u002Ftree-of-thought-llm)。\n* [思维树](https:\u002F\u002Fgithub.com\u002Fkyegomez\u002Ftree-of-thoughts)，GitHub，2023年5月21日。\n\n\n# 引用\n\n如果您使用了本仓库中的代码，请引用如下：\n\n    @misc{tree-of-thought-prompting,\n    \ttitle        = {利用思维树提示提升ChatGPT的推理能力},\n    \tauthor       = {Dave Hulbert},\n    \tyear         = 2023,\n    \tmonth        = may,\n    \tjournal      = {GitHub仓库},\n    \tpublisher    = {Zenodo},\n    \tdoi          = {10.5281\u002FZENODO.10323452},\n    \turl          = {https:\u002F\u002Fdoi.org\u002F10.5281\u002Fzenodo.10323452},\n    \thowpublished = {\\url{https:\u002F\u002Fgithub.com\u002Fdave1010\u002Ftree-of-thought-prompting}}\n    }","# Tree-of-Thought Prompting 快速上手指南\n\nTree-of-Thought 
(ToT) Prompting 是一种创新的提示工程技术，旨在通过模拟多专家协作与自我纠错机制，显著提升大语言模型（如 ChatGPT）在复杂逻辑推理任务中的表现。本工具无需安装额外的 Python 库或依赖环境，核心在于使用特定的提示词模板。\n\n## 环境准备\n\n本项目本质上是一套**提示词（Prompt）策略**，而非需要编译安装的软件包，因此对环境要求极低：\n\n*   **系统要求**：任意支持现代浏览器的操作系统（Windows, macOS, Linux）。\n*   **前置依赖**：\n    *   一个可用的大语言模型对话界面（例如：ChatGPT Web 版、API 调用终端或其他支持长文本输入的 LLM 平台）。\n    *   无需安装 Python、Node.js 或任何第三方库。\n\n## 安装步骤\n\n由于本工具是纯文本提示词方案，**无需执行任何安装命令**。\n\n1.  访问你的大模型服务页面（如 chat.openai.com）。\n2.  新建一个对话窗口。\n3.  直接复制下方的“基本使用”中的提示词模板即可开始使用。\n\n> **提示**：如果你希望通过代码批量调用，只需将下文提供的 Prompt 字符串通过 API 发送给模型即可，无需克隆仓库或运行 `pip install`。\n\n## 基本使用\n\n### 核心原理\n传统的思维链（Chain-of-Thought）要求模型单线思考，而 ToT Prompting 通过让模型模拟“三位专家”逐步分享思路、互相验证并剔除错误分支，从而实现更准确的推理。\n\n### 使用示例\n\n将以下提示词模板复制到对话框中，并将 `[在此处插入你的复杂问题]` 替换为你实际需要解决的问题：\n\n```text\nImagine three different experts are answering this question.\nAll experts will write down 1 step of their thinking,\nthen share it with the group.\nThen all experts will go on to the next step, etc.\nIf any expert realises they're wrong at any point then they leave.\nThe question is...\n\n[在此处插入你的复杂问题]\n```\n\n### 实战案例\n\n假设你需要解决以下逻辑陷阱题：\n\n> Bob is in the living room. He walks to the kitchen, carrying a cup. He puts a ball in the cup and carries the cup to the bedroom. He turns the cup upside down, then walks to the garden. He puts the cup down in the garden, then walks to the garage. Where is the ball?\n\n**操作步骤：**\n\n1.  构建完整 Prompt：\n\n```text\nImagine three different experts are answering this question.\nAll experts will write down 1 step of their thinking,\nthen share it with the group.\nThen all experts will go on to the next step, etc.\nIf any expert realises they're wrong at any point then they leave.\nThe question is...\n\nBob is in the living room.\nHe walks to the kitchen, carrying a cup.\nHe puts a ball in the cup and carries the cup to the bedroom.\nHe turns the cup upside down, then walks to the garden.\nHe puts the cup down in the garden, then walks to the garage.\nWhere is the ball?\n```\n\n2.  
发送给模型。\n\n**预期效果：**\n模型将模拟三位专家的对话过程。专家们会逐步推导，当某位专家发现逻辑错误（例如意识到杯子倒置时球已掉落）时会主动纠正或退出，最终达成共识输出正确答案（球在卧室），即使使用的是较旧版本的模型（如 GPT-3.5），也能达到接近 GPT-4 的推理水平。","某教育科技公司的算法工程师正在优化一款面向中小学生的逻辑推理辅导机器人，旨在让其在不升级昂贵模型的前提下准确解答多步骤陷阱题。\n\n### 没有 tree-of-thought-prompting 时\n- 模型容易陷入线性思维陷阱，像原文案例中那样，虽然识别到“杯子倒置”的关键动作，却未能正确推导物体掉落的最终位置，导致答案错误。\n- 面对复杂情境时缺乏自我纠错机制，一旦中间推理步骤出现偏差，模型会沿着错误路径一直走到黑，无法自主回溯修正。\n- 为了获得正确答案被迫升级到 GPT-4 等高级模型，显著增加了 API 调用成本和系统延迟，难以在大规模并发场景下落地。\n- 输出的解题过程往往只是简单的步骤罗列，缺乏对多种可能性的探索与评估，无法给学生提供具有启发性的多维思路。\n\n### 使用 tree-of-thought-prompting 后\n- 模型能够模拟人类“三思而后行”的过程，并行生成多条推理分支，通过对比不同路径（如球在杯内 vs 球已掉落）自主锁定唯一正确解。\n- 内置的自我反思与剪枝机制让模型能在发现逻辑矛盾时及时回退，重新规划思考路径，从而在 GPT-3.5 上实现了原本只有 GPT-4 才能达到的推理准确率。\n- 在保持低成本模型运行的同时解决了高难度逻辑题，大幅降低了算力开支，使高质量智能辅导服务更具商业可行性。\n- 生成的回答不仅包含结论，还展示了完整的决策树演变过程，清晰地解释了为何排除其他选项，极大地提升了教学互动的深度。\n\ntree-of-thought-prompting 通过赋予模型自主探索与纠错的能力，以极低的成本突破了现有大模型在复杂逻辑推理任务上的性能瓶颈。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdave1010_tree-of-thought-prompting_4e8c8b27.png","dave1010","Dave Hulbert","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fdave1010_aab7a133.jpg",null,"Passenger Technology Group","Poole, Bournemouth, UK","dave1010@gmail.com","https:\u002F\u002Fdave.engineer","https:\u002F\u002Fgithub.com\u002Fdave1010",816,78,"2026-04-01T11:53:22","MIT",1,"","未说明",{"notes":90,"python":88,"dependencies":91},"该工具并非传统的代码库或需要本地部署的模型，而是一种提示词（Prompt）工程技术。它不需要安装特定的 Python 环境、依赖库或 GPU 硬件。用户只需将 README 中提供的特定提示词模板（模拟多位专家逐步推理并自我纠错）复制到现有的大语言模型聊天界面（如 ChatGPT Web 版）中即可使用。其运行依赖于外部 API 服务（如 OpenAI），而非本地计算资源。",[],[35,14],[94,95,96],"large-language-models","llm","prompt-engineering","2026-03-27T02:49:30.150509","2026-04-08T22:45:50.220369",[100,105,110,115,119,123],{"id":101,"question_zh":102,"answer_zh":103,"source_url":104},25173,"新的思维树（ToT）提示词在 ChatGPT 3.5 上效果不佳，总是得出错误结论怎么办？","根据测试，新的 ToT 提示词在 ChatGPT 3.5 上的成功率约为 50%，而在 ChatGPT 4 上可达 100%。如果必须在 3.5 上使用且遇到错误，建议尝试原始的提示词版本，该版本要求专家逐步分享思考过程，若发现错误则退出，直到达成一致才宣布结论。此外，增加重试次数（如 6 次）也能提高获得正确答案的概率。","https:\u002F\u002Fgithub.com\u002Fdave1010\u002Ftree-of-thought-prompting\u002Fissues\u002F2",{"id":106,"question_zh":107,"answer_zh":108,"source_url":109},25174,"哪里可以找到普林斯顿大学论文中提到的官方思维树（ToT）代码和提示词示例？","官方的 ToT 代码库位于 https:\u002F\u002Fgithub.com\u002Fprinceton-nlp\u002Ftree-of-thought-llm。具体的提示词示例可以在以下路径找到：\n1. 提示词目录：https:\u002F\u002Fgithub.com\u002Fprinceton-nlp\u002Ftree-of-thought-llm\u002Ftree\u002Fmaster\u002Fsrc\u002Ftot\u002Fprompts\n2. 
文本任务提示词代码：https:\u002F\u002Fgithub.com\u002Fprinceton-nlp\u002Ftree-of-thought-llm\u002Fblob\u002Fmaster\u002Fsrc\u002Ftot\u002Fprompts\u002Ftext.py\n注意：部分用户反馈旧的 \u002Fprompts\u002F 链接可能已失效，请使用上述 src\u002Ftot\u002Fprompts 路径。","https:\u002F\u002Fgithub.com\u002Fdave1010\u002Ftree-of-thought-prompting\u002Fissues\u002F3",{"id":111,"question_zh":112,"answer_zh":113,"source_url":114},25175,"《Deliberate Problem Solving with Large Language Models》论文是否有配套的开源代码？","是的，该论文有配套的开源代码。代码托管在普林斯顿 NLP 团队的 GitHub 仓库中：https:\u002F\u002Fgithub.com\u002Fprinceton-nlp\u002Ftree-of-thought-llm。该项目包含了实现思维树（ToT）方法的具体代码和示例。","https:\u002F\u002Fgithub.com\u002Fdave1010\u002Ftree-of-thought-prompting\u002Fissues\u002F5",{"id":116,"question_zh":117,"answer_zh":118,"source_url":109},25176,"为什么我在某些链接中找不到官方 ToT 的提示词文件？","官方仓库（princeton-nlp\u002Ftree-of-thought-llm）的结构可能发生过调整，导致旧链接失效。请直接访问重构后的路径获取提示词：\n- 通用提示词文件夹：https:\u002F\u002Fgithub.com\u002Fprinceton-nlp\u002Ftree-of-thought-llm\u002Ftree\u002Fmaster\u002Fsrc\u002Ftot\u002Fprompts\n- 具体文本提示词实现：https:\u002F\u002Fgithub.com\u002Fprinceton-nlp\u002Ftree-of-thought-llm\u002Fblob\u002Fmaster\u002Fsrc\u002Ftot\u002Fprompts\u002Ftext.py",{"id":120,"question_zh":121,"answer_zh":122,"source_url":104},25177,"在不同版本的 ChatGPT 模型上使用思维树（ToT）提示词，效果有何差异？","根据用户实验数据（每个版本重试 6 次），ToT 提示词在 ChatGPT 3.5 上的成功率约为 50%，经常出现推理错误；而在 ChatGPT 4 上的成功率可达 100%。因此，对于复杂的推理任务，强烈建议使用 ChatGPT 4 或更高版本的模型以获得最佳效果。",{"id":124,"question_zh":125,"answer_zh":126,"source_url":114},25178,"除了 GitHub 仓库，还有哪些地方更新了关于 ToT 代码可用性的信息？","维护者已确认代码可用性，并同步更新了相关资源链接。除了 GitHub 仓库外，相关的 Medium 文章（Using Tree of Thought Prompting to Boost ChatGPT's Reasoning）也已更新，以反映官方代码库的存在和最新链接。用户可以参考这些更新后的文档获取更多信息。",[128,132],{"id":129,"version":130,"summary_zh":76,"released_at":131},154573,"v0.2","2023-12-09T16:15:19",{"id":133,"version":134,"summary_zh":76,"released_at":135},154574,"v0.1","2023-12-09T16:02:45"]