[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-bytedance--deer-flow":3,"tool-bytedance--deer-flow":62},[4,18,26,36,46,54],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 
代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",160784,2,"2026-04-19T11:32:54",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":42,"last_commit_at":43,"category_tags":44,"status":17},8272,"opencode","anomalyco\u002Fopencode","OpenCode 是一款开源的 AI 编程助手（Coding Agent），旨在像一位智能搭档一样融入您的开发流程。它不仅仅是一个代码补全插件，而是一个能够理解项目上下文、自主规划任务并执行复杂编码操作的智能体。无论是生成全新功能、重构现有代码，还是排查难以定位的 Bug，OpenCode 都能通过自然语言交互高效完成，显著减少开发者在重复性劳动和上下文切换上的时间消耗。\n\n这款工具专为软件开发者、工程师及技术研究人员设计，特别适合希望利用大模型能力来提升编码效率、加速原型开发或处理遗留代码维护的专业人群。其核心亮点在于完全开源的架构，这意味着用户可以审查代码逻辑、自定义行为策略，甚至私有化部署以保障数据安全，彻底打破了传统闭源 AI 助手的“黑盒”限制。\n\n在技术体验上，OpenCode 提供了灵活的终端界面（Terminal UI）和正在测试中的桌面应用程序，支持 macOS、Windows 及 Linux 全平台。它兼容多种包管理工具，安装便捷，并能无缝集成到现有的开发环境中。无论您是追求极致控制权的资深极客，还是渴望提升产出的独立开发者，OpenCode 都提供了一个透明、可信",144296,1,"2026-04-16T14:50:03",[13,45],"插件",{"id":47,"name":48,"github_repo":49,"description_zh":50,"stars":51,"difficulty_score":32,"last_commit_at":52,"category_tags":53,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 
都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",109154,"2026-04-18T11:18:24",[14,15,13],{"id":55,"name":56,"github_repo":57,"description_zh":58,"stars":59,"difficulty_score":32,"last_commit_at":60,"category_tags":61,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[45,13,15,14],{"id":63,"github_repo":64,"name":65,"description_en":66,"description_zh":67,"ai_summary_zh":67,"readme_en":68,"readme_zh":69,"quickstart_zh":70,"use_case_zh":71,"hero_image_url":72,"owner_login":73,"owner_name":74,"owner_avatar_url":75,"owner_bio":76,"owner_company":77,"owner_location":77,"owner_email":77,"owner_twitter":78,"owner_website":79,"owner_url":80,"languages":81,"stars":121,"forks":122,"last_commit_at":123,"license":124,"difficulty_score":10,"env_os":125,"env_gpu":126,"env_ram":127,"env_deps":128,"category_tags":137,"github_topics":139,"view_count":32,"oss_zip_url":77,"oss_zip_packed_at":77,"status":17,"created_at":158,"updated_at":159,"faqs":160,"releases":190},9651,"bytedance\u002Fdeer-flow","deer-flow","An open-source long-horizon SuperAgent harness that researches, codes, and creates. 
With the help of sandboxes, memories, tools, skills, sub-agents, and a message gateway, it handles tasks of varying complexity that can take anywhere from minutes to hours.","DeerFlow 是一款开源的“超级智能体”框架，专为执行耗时数分钟至数小时的复杂长程任务而设计。它不仅能进行深度信息调研，还能自主编写代码并生成最终成果，真正实现了从“思考”到“行动”的全流程自动化。\n\n面对传统 AI 难以独立完成的宏大目标，DeerFlow 通过协调多个“子智能体”分工协作来解决问题。它拥有记忆机制以维持长期上下文，利用沙箱环境安全地测试和运行代码，并借助可扩展的技能库调用各类工具。这种架构让 AI 能够像人类团队一样，将大任务拆解、规划并逐步落实，有效克服了单次对话的局限性。\n\n这款工具特别适合开发者、技术研究人员以及需要处理复杂工作流的进阶用户。无论是构建自动化研发流程、进行深度行业调研，还是开发多智能体应用，DeerFlow 都能提供强大的底层支持。\n\n其技术亮点在于全新的 2.0 架构，支持模块化技能扩展与安全的代码沙箱执行。此外，它还集成了字节跳动自研的 InfoQuest 智能搜索套件，并兼容 LangSmith 等追踪工具，让复杂的智能体行为可观察、可调试。作为一个完全开源的项目，DeerFlow 为构建下一代自主 AI 系统提供了灵活且坚实的基础。","# 🦌 DeerFlow - 2.0\n\nEnglish | [中文](.\u002FREADME_zh.md) | [日本語](.\u002FREADME_ja.md) | [Français](.\u002FREADME_fr.md) | [Русский](.\u002FREADME_ru.md)\n\n[![Python](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FPython-3.12%2B-3776AB?logo=python&logoColor=white)](.\u002Fbackend\u002Fpyproject.toml)\n[![Node.js](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FNode.js-22%2B-339933?logo=node.js&logoColor=white)](.\u002FMakefile)\n[![License: MIT](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLicense-MIT-yellow.svg)](.\u002FLICENSE)\n\n\u003Ca href=\"https:\u002F\u002Ftrendshift.io\u002Frepositories\u002F14699\" target=\"_blank\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_deer-flow_readme_4a68feb902da.png\" alt=\"bytedance%2Fdeer-flow | Trendshift\" style=\"width: 250px; height: 55px;\" width=\"250\" height=\"55\"\u002F>\u003C\u002Fa>\n> On February 28th, 2026, DeerFlow claimed the 🏆 #1 spot on GitHub Trending following the launch of version 2. Thanks a million to our incredible community — you made this happen! 
💪🔥\n\nDeerFlow (**D**eep **E**xploration and **E**fficient **R**esearch **Flow**) is an open-source **super agent harness** that orchestrates **sub-agents**, **memory**, and **sandboxes** to do almost anything — powered by **extensible skills**.\n\nhttps:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002Fa8bcadc4-e040-4cf2-8fda-dd768b999c18\n\n> [!NOTE]\n> **DeerFlow 2.0 is a ground-up rewrite.** It shares no code with v1. If you're looking for the original Deep Research framework, it's maintained on the [`1.x` branch](https:\u002F\u002Fgithub.com\u002Fbytedance\u002Fdeer-flow\u002Ftree\u002Fmain-1.x) — contributions there are still welcome. Active development has moved to 2.0.\n\n## Official Website\n\n[\u003Cimg width=\"2880\" height=\"1600\" alt=\"image\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_deer-flow_readme_dac4d6e08b1c.png\" \u002F>](https:\u002F\u002Fdeerflow.tech)\n\nLearn more and see **real demos** on our [**official website**](https:\u002F\u002Fdeerflow.tech).\n\n## Coding Plan from ByteDance Volcengine\n\n\u003Cimg width=\"4808\" height=\"2400\" alt=\"英文方舟\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_deer-flow_readme_b699f03b4900.png\" \u002F>\n\n- We strongly recommend using Doubao-Seed-2.0-Code, DeepSeek v3.2, and Kimi 2.5 to run DeerFlow\n- [Learn more](https:\u002F\u002Fwww.byteplus.com\u002Fen\u002Factivity\u002Fcodingplan?utm_campaign=deer_flow&utm_content=deer_flow&utm_medium=devrel&utm_source=OWO&utm_term=deer_flow)\n- [中国大陆地区的开发者请点击这里](https:\u002F\u002Fwww.volcengine.com\u002Factivity\u002Fcodingplan?utm_campaign=deer_flow&utm_content=deer_flow&utm_medium=devrel&utm_source=OWO&utm_term=deer_flow)\n\n## InfoQuest\n\nDeerFlow now integrates [InfoQuest](https:\u002F\u002Fdocs.byteplus.com\u002Fen\u002Fdocs\u002FInfoQuest\u002FWhat_is_Info_Quest), an intelligent search and crawling toolset developed independently by BytePlus (a free online experience is available).\n\n\u003Ca 
href=\"https:\u002F\u002Fdocs.byteplus.com\u002Fen\u002Fdocs\u002FInfoQuest\u002FWhat_is_Info_Quest\" target=\"_blank\">\n  \u003Cimg\n    src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_deer-flow_readme_ec01e74a7606.png\"   alt=\"InfoQuest_banner\"\n  \u002F>\n\u003C\u002Fa>\n\n---\n\n## Table of Contents\n\n- [🦌 DeerFlow - 2.0](#-deerflow---20)\n  - [Official Website](#official-website)\n  - [Coding Plan from ByteDance Volcengine](#coding-plan-from-bytedance-volcengine)\n  - [InfoQuest](#infoquest)\n  - [Table of Contents](#table-of-contents)\n  - [One-Line Agent Setup](#one-line-agent-setup)\n  - [Quick Start](#quick-start)\n    - [Configuration](#configuration)\n    - [Running the Application](#running-the-application)\n      - [Deployment Sizing](#deployment-sizing)\n      - [Option 1: Docker (Recommended)](#option-1-docker-recommended)\n      - [Option 2: Local Development](#option-2-local-development)\n    - [Advanced](#advanced)\n      - [Sandbox Mode](#sandbox-mode)\n      - [MCP Server](#mcp-server)\n      - [IM Channels](#im-channels)\n      - [LangSmith Tracing](#langsmith-tracing)\n      - [Langfuse Tracing](#langfuse-tracing)\n      - [Using Both Providers](#using-both-providers)\n  - [From Deep Research to Super Agent Harness](#from-deep-research-to-super-agent-harness)\n  - [Core Features](#core-features)\n    - [Skills \\& Tools](#skills--tools)\n      - [Claude Code Integration](#claude-code-integration)\n    - [Sub-Agents](#sub-agents)\n    - [Sandbox \\& File System](#sandbox--file-system)\n    - [Context Engineering](#context-engineering)\n    - [Long-Term Memory](#long-term-memory)\n  - [Recommended Models](#recommended-models)\n  - [Embedded Python Client](#embedded-python-client)\n  - [Documentation](#documentation)\n  - [⚠️ Security Notice](#️-security-notice)\n    - [Improper Deployment May Introduce Security Risks](#improper-deployment-may-introduce-security-risks)\n    - [Security 
Recommendations](#security-recommendations)\n  - [Contributing](#contributing)\n  - [License](#license)\n  - [Acknowledgments](#acknowledgments)\n    - [Key Contributors](#key-contributors)\n  - [Star History](#star-history)\n\n## One-Line Agent Setup\n\nIf you use Claude Code, Codex, Cursor, Windsurf, or another coding agent, you can hand it the setup instructions in one sentence:\n\n```text\nHelp me clone DeerFlow if needed, then bootstrap it for local development by following https:\u002F\u002Fraw.githubusercontent.com\u002Fbytedance\u002Fdeer-flow\u002Fmain\u002FInstall.md\n```\n\nThat prompt is intended for coding agents. It tells the agent to clone the repo if needed, choose Docker when available, and stop with the exact next command plus any missing config the user still needs to provide.\n\n## Quick Start\n\n### Configuration\n\n1. **Clone the DeerFlow repository**\n\n   ```bash\n   git clone https:\u002F\u002Fgithub.com\u002Fbytedance\u002Fdeer-flow.git\n   cd deer-flow\n   ```\n\n2. **Run the setup wizard**\n\n   From the project root directory (`deer-flow\u002F`), run:\n\n   ```bash\n   make setup\n   ```\n\n   This launches an interactive wizard that guides you through choosing an LLM provider, optional web search, and execution\u002Fsafety preferences such as sandbox mode, bash access, and file-write tools. It generates a minimal `config.yaml` and writes your keys to `.env`. Takes about 2 minutes.\n\n   The wizard also lets you configure an optional web search provider, or skip it for now.\n\n   Run `make doctor` at any time to verify your setup and get actionable fix hints.\n\n   > **Advanced \u002F manual configuration**: If you prefer to edit `config.yaml` directly, run `make config` instead to copy the full template. 
See `config.example.yaml` for the complete reference including CLI-backed providers (Codex CLI, Claude Code OAuth), OpenRouter, Responses API, and more.\n\n   \u003Cdetails>\n   \u003Csummary>Manual model configuration examples\u003C\u002Fsummary>\n\n   ```yaml\n   models:\n     - name: gpt-4o\n       display_name: GPT-4o\n       use: langchain_openai:ChatOpenAI\n       model: gpt-4o\n       api_key: $OPENAI_API_KEY\n\n     - name: openrouter-gemini-2.5-flash\n       display_name: Gemini 2.5 Flash (OpenRouter)\n       use: langchain_openai:ChatOpenAI\n       model: google\u002Fgemini-2.5-flash-preview\n       api_key: $OPENROUTER_API_KEY\n       base_url: https:\u002F\u002Fopenrouter.ai\u002Fapi\u002Fv1\n\n     - name: gpt-5-responses\n       display_name: GPT-5 (Responses API)\n       use: langchain_openai:ChatOpenAI\n       model: gpt-5\n       api_key: $OPENAI_API_KEY\n       use_responses_api: true\n       output_version: responses\u002Fv1\n\n     - name: qwen3-32b-vllm\n       display_name: Qwen3 32B (vLLM)\n       use: deerflow.models.vllm_provider:VllmChatModel\n       model: Qwen\u002FQwen3-32B\n       api_key: $VLLM_API_KEY\n       base_url: http:\u002F\u002Flocalhost:8000\u002Fv1\n       supports_thinking: true\n       when_thinking_enabled:\n         extra_body:\n           chat_template_kwargs:\n             enable_thinking: true\n   ```\n\n   OpenRouter and similar OpenAI-compatible gateways should be configured with `langchain_openai:ChatOpenAI` plus `base_url`. If you prefer a provider-specific environment variable name, point `api_key` at that variable explicitly (for example `api_key: $OPENROUTER_API_KEY`).\n\n   To route OpenAI models through `\u002Fv1\u002Fresponses`, keep using `langchain_openai:ChatOpenAI` and set `use_responses_api: true` with `output_version: responses\u002Fv1`.\n\n   For vLLM 0.19.0, use `deerflow.models.vllm_provider:VllmChatModel`. 
For Qwen-style reasoning models, DeerFlow toggles reasoning with `extra_body.chat_template_kwargs.enable_thinking` and preserves vLLM's non-standard `reasoning` field across multi-turn tool-call conversations. Legacy `thinking` configs are normalized automatically for backward compatibility. Reasoning models may also require the server to be started with `--reasoning-parser ...`. If your local vLLM deployment accepts any non-empty API key, you can still set `VLLM_API_KEY` to a placeholder value.\n\n   CLI-backed provider examples:\n\n   ```yaml\n   models:\n     - name: gpt-5.4\n       display_name: GPT-5.4 (Codex CLI)\n       use: deerflow.models.openai_codex_provider:CodexChatModel\n       model: gpt-5.4\n       supports_thinking: true\n       supports_reasoning_effort: true\n\n     - name: claude-sonnet-4.6\n       display_name: Claude Sonnet 4.6 (Claude Code OAuth)\n       use: deerflow.models.claude_provider:ClaudeChatModel\n       model: claude-sonnet-4-6\n       max_tokens: 4096\n       supports_thinking: true\n   ```\n\n   - Codex CLI reads `~\u002F.codex\u002Fauth.json`\n   - Claude Code accepts `CLAUDE_CODE_OAUTH_TOKEN`, `ANTHROPIC_AUTH_TOKEN`, `CLAUDE_CODE_CREDENTIALS_PATH`, or `~\u002F.claude\u002F.credentials.json`\n   - ACP agent entries are separate from model providers — if you configure `acp_agents.codex`, point it at a Codex ACP adapter such as `npx -y @zed-industries\u002Fcodex-acp`\n   - On macOS, export Claude Code auth explicitly if needed:\n\n   ```bash\n   eval \"$(python3 scripts\u002Fexport_claude_code_oauth.py --print-export)\"\n   ```\n\n   API keys can also be set manually in `.env` (recommended) or exported in your shell:\n\n   ```bash\n   OPENAI_API_KEY=your-openai-api-key\n   TAVILY_API_KEY=your-tavily-api-key\n   ```\n\n   \u003C\u002Fdetails>\n\n### Running the Application\n\n#### Deployment Sizing\n\nUse the table below as a practical starting point when choosing how to run DeerFlow:\n\n| Deployment target | Starting point | 
Recommended | Notes |\n|---------|-----------|------------|-------|\n| Local evaluation \u002F `make dev` | 4 vCPU, 8 GB RAM, 20 GB free SSD | 8 vCPU, 16 GB RAM | Good for one developer or one light session with hosted model APIs. `2 vCPU \u002F 4 GB` is usually not enough. |\n| Docker development \u002F `make docker-start` | 4 vCPU, 8 GB RAM, 25 GB free SSD | 8 vCPU, 16 GB RAM | Image builds, bind mounts, and sandbox containers need more headroom than pure local dev. |\n| Long-running server \u002F `make up` | 8 vCPU, 16 GB RAM, 40 GB free SSD | 16 vCPU, 32 GB RAM | Preferred for shared use, multi-agent runs, report generation, or heavier sandbox workloads. |\n\n- These numbers cover DeerFlow itself. If you also host a local LLM, size that service separately.\n- Linux plus Docker is the recommended deployment target for a persistent server. macOS and Windows are best treated as development or evaluation environments.\n- If CPU or memory usage stays pinned, reduce concurrent runs first, then move to the next sizing tier.\n\n#### Option 1: Docker (Recommended)\n\n**Development** (hot-reload, source mounts):\n\n```bash\nmake docker-init    # Pull sandbox image (only once or when image updates)\nmake docker-start   # Start services (auto-detects sandbox mode from config.yaml)\n```\n\n`make docker-start` starts `provisioner` only when `config.yaml` uses provisioner mode (`sandbox.use: deerflow.community.aio_sandbox:AioSandboxProvider` with `provisioner_url`).\n\nDocker builds use the upstream `uv` registry by default. 
If you need faster mirrors in restricted networks, export `UV_INDEX_URL=https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple` and `NPM_REGISTRY=https:\u002F\u002Fregistry.npmmirror.com` before running `make docker-init` or `make docker-start`.\n\nBackend processes automatically pick up `config.yaml` changes on the next config access, so model metadata updates do not require a manual restart during development.\n\n> [!TIP]\n> On Linux, if Docker-based commands fail with `permission denied while trying to connect to the Docker daemon socket at unix:\u002F\u002F\u002Fvar\u002Frun\u002Fdocker.sock`, add your user to the `docker` group and re-login before retrying. See [CONTRIBUTING.md](CONTRIBUTING.md#linux-docker-daemon-permission-denied) for the full fix.\n\n**Production** (builds images locally, mounts runtime config and data):\n\n```bash\nmake up     # Build images and start all production services\nmake down   # Stop and remove containers\n```\n\n> [!NOTE]\n> The LangGraph agent server currently runs via `langgraph dev` (the open-source CLI server).\n\nAccess: http:\u002F\u002Flocalhost:2026\n\nSee [CONTRIBUTING.md](CONTRIBUTING.md) for detailed Docker development guide.\n\n#### Option 2: Local Development\n\nIf you prefer running services locally:\n\nPrerequisite: complete the \"Configuration\" steps above first (`make setup`). `make dev` requires a valid `config.yaml` in the project root (can be overridden via `DEER_FLOW_CONFIG_PATH`). Run `make doctor` to verify your setup before starting.\nOn Windows, run the local development flow from Git Bash. Native `cmd.exe` and PowerShell shells are not supported for the bash-based service scripts, and WSL is not guaranteed because some scripts rely on Git for Windows utilities such as `cygpath`.\n\n1. **Check prerequisites**:\n   ```bash\n   make check  # Verifies Node.js 22+, pnpm, uv, nginx\n   ```\n\n2. **Install dependencies**:\n   ```bash\n   make install  # Install backend + frontend dependencies\n   ```\n\n3. 
**(Optional) Pre-pull sandbox image**:\n   ```bash\n   # Recommended if using Docker\u002FContainer-based sandbox\n   make setup-sandbox\n   ```\n\n4. **(Optional) Load sample memory data for local review**:\n   ```bash\n   python scripts\u002Fload_memory_sample.py\n   ```\n   This copies the sample fixture into the default local runtime memory file so reviewers can immediately test `Settings > Memory`.\n   See [backend\u002Fdocs\u002FMEMORY_SETTINGS_REVIEW.md](backend\u002Fdocs\u002FMEMORY_SETTINGS_REVIEW.md) for the shortest review flow.\n\n5. **Start services**:\n   ```bash\n   make dev\n   ```\n\n6. **Access**: http:\u002F\u002Flocalhost:2026\n\n#### Startup Modes\n\nDeerFlow supports multiple startup modes across two dimensions:\n\n- **Dev \u002F Prod** — dev enables hot-reload; prod uses pre-built frontend\n- **Standard \u002F Gateway** — standard uses a separate LangGraph server (4 processes); Gateway mode (experimental) embeds the agent runtime in the Gateway API (3 processes)\n\n| | **Local Foreground** | **Local Daemon** | **Docker Dev** | **Docker Prod** |\n|---|---|---|---|---|\n| **Dev** | `.\u002Fscripts\u002Fserve.sh --dev`\u003Cbr\u002F>`make dev` | `.\u002Fscripts\u002Fserve.sh --dev --daemon`\u003Cbr\u002F>`make dev-daemon` | `.\u002Fscripts\u002Fdocker.sh start`\u003Cbr\u002F>`make docker-start` | — |\n| **Dev + Gateway** | `.\u002Fscripts\u002Fserve.sh --dev --gateway`\u003Cbr\u002F>`make dev-pro` | `.\u002Fscripts\u002Fserve.sh --dev --gateway --daemon`\u003Cbr\u002F>`make dev-daemon-pro` | `.\u002Fscripts\u002Fdocker.sh start --gateway`\u003Cbr\u002F>`make docker-start-pro` | — |\n| **Prod** | `.\u002Fscripts\u002Fserve.sh --prod`\u003Cbr\u002F>`make start` | `.\u002Fscripts\u002Fserve.sh --prod --daemon`\u003Cbr\u002F>`make start-daemon` | — | `.\u002Fscripts\u002Fdeploy.sh`\u003Cbr\u002F>`make up` |\n| **Prod + Gateway** | `.\u002Fscripts\u002Fserve.sh --prod --gateway`\u003Cbr\u002F>`make start-pro` | `.\u002Fscripts\u002Fserve.sh --prod 
--gateway --daemon`\u003Cbr\u002F>`make start-daemon-pro` | — | `.\u002Fscripts\u002Fdeploy.sh --gateway`\u003Cbr\u002F>`make up-pro` |\n\n| Action | Local | Docker Dev | Docker Prod |\n|---|---|---|---|\n| **Stop** | `.\u002Fscripts\u002Fserve.sh --stop`\u003Cbr\u002F>`make stop` | `.\u002Fscripts\u002Fdocker.sh stop`\u003Cbr\u002F>`make docker-stop` | `.\u002Fscripts\u002Fdeploy.sh down`\u003Cbr\u002F>`make down` |\n| **Restart** | `.\u002Fscripts\u002Fserve.sh --restart [flags]` | `.\u002Fscripts\u002Fdocker.sh restart` | — |\n\n> **Gateway mode** eliminates the LangGraph server process — the Gateway API handles agent execution directly via async tasks, managing its own concurrency.\n\n#### Why Gateway Mode?\n\nIn standard mode, DeerFlow runs a dedicated [LangGraph Platform](https:\u002F\u002Flangchain-ai.github.io\u002Flanggraph\u002F) server alongside the Gateway API. This architecture works well but has trade-offs:\n\n| | Standard Mode | Gateway Mode |\n|---|---|---|\n| **Architecture** | Gateway (REST API) + LangGraph (agent runtime) | Gateway embeds agent runtime |\n| **Concurrency** | `--n-jobs-per-worker` per worker (requires license) | `--workers` × async tasks (no per-worker cap) |\n| **Containers \u002F Processes** | 4 (frontend, gateway, langgraph, nginx) | 3 (frontend, gateway, nginx) |\n| **Resource usage** | Higher (two Python runtimes) | Lower (single Python runtime) |\n| **LangGraph Platform license** | Required for production images | Not required |\n| **Cold start** | Slower (two services to initialize) | Faster |\n\nBoth modes are functionally equivalent — the same agents, tools, and skills work in either mode.\n\n#### Docker Production Deployment\n\n`deploy.sh` supports building and starting separately. 
Images are mode-agnostic — runtime mode is selected at start time:\n\n```bash\n# One-step (build + start)\ndeploy.sh                    # standard mode (default)\ndeploy.sh --gateway          # gateway mode\n\n# Two-step (build once, start with any mode)\ndeploy.sh build              # build all images\ndeploy.sh start              # start in standard mode\ndeploy.sh start --gateway    # start in gateway mode\n\n# Stop\ndeploy.sh down\n```\n\n### Advanced\n#### Sandbox Mode\n\nDeerFlow supports multiple sandbox execution modes:\n- **Local Execution** (runs sandbox code directly on the host machine)\n- **Docker Execution** (runs sandbox code in isolated Docker containers)\n- **Docker Execution with Kubernetes** (runs sandbox code in Kubernetes pods via provisioner service)\n\nFor Docker development, service startup follows `config.yaml` sandbox mode. In Local\u002FDocker modes, `provisioner` is not started.\n\nSee the [Sandbox Configuration Guide](backend\u002Fdocs\u002FCONFIGURATION.md#sandbox) to configure your preferred mode.\n\n#### MCP Server\n\nDeerFlow supports configurable MCP servers and skills to extend its capabilities.\nFor HTTP\u002FSSE MCP servers, OAuth token flows are supported (`client_credentials`, `refresh_token`).\nSee the [MCP Server Guide](backend\u002Fdocs\u002FMCP_SERVER.md) for detailed instructions.\n\n#### IM Channels\n\nDeerFlow supports receiving tasks from messaging apps. 
Channels auto-start when configured — no public IP required for any of them.\n\n| Channel | Transport | Difficulty |\n|---------|-----------|------------|\n| Telegram | Bot API (long-polling) | Easy |\n| Slack | Socket Mode | Moderate |\n| Feishu \u002F Lark | WebSocket | Moderate |\n| WeChat | Tencent iLink (long-polling) | Moderate |\n| WeCom | WebSocket | Moderate |\n\n**Configuration in `config.yaml`:**\n\n```yaml\nchannels:\n  # LangGraph Server URL (default: http:\u002F\u002Flocalhost:2024)\n  langgraph_url: http:\u002F\u002Flocalhost:2024\n  # Gateway API URL (default: http:\u002F\u002Flocalhost:8001)\n  gateway_url: http:\u002F\u002Flocalhost:8001\n\n  # Optional: global session defaults for all mobile channels\n  session:\n    assistant_id: lead_agent  # or a custom agent name; custom agents are routed via lead_agent + agent_name\n    config:\n      recursion_limit: 100\n    context:\n      thinking_enabled: true\n      is_plan_mode: false\n      subagent_enabled: false\n\n  feishu:\n    enabled: true\n    app_id: $FEISHU_APP_ID\n    app_secret: $FEISHU_APP_SECRET\n    # domain: https:\u002F\u002Fopen.feishu.cn       # China (default)\n    # domain: https:\u002F\u002Fopen.larksuite.com   # International\n\n  wecom:\n    enabled: true\n    bot_id: $WECOM_BOT_ID\n    bot_secret: $WECOM_BOT_SECRET\n\n  slack:\n    enabled: true\n    bot_token: $SLACK_BOT_TOKEN     # xoxb-...\n    app_token: $SLACK_APP_TOKEN     # xapp-... 
(Socket Mode)\n    allowed_users: []               # empty = allow all\n\n  telegram:\n    enabled: true\n    bot_token: $TELEGRAM_BOT_TOKEN\n    allowed_users: []               # empty = allow all\n\n  wechat:\n    enabled: false\n    bot_token: $WECHAT_BOT_TOKEN\n    ilink_bot_id: $WECHAT_ILINK_BOT_ID\n    qrcode_login_enabled: true      # optional: allow first-time QR bootstrap when bot_token is absent\n    allowed_users: []               # empty = allow all\n    polling_timeout: 35\n    state_dir: .\u002F.deer-flow\u002Fwechat\u002Fstate\n    max_inbound_image_bytes: 20971520\n    max_outbound_image_bytes: 20971520\n    max_inbound_file_bytes: 52428800\n    max_outbound_file_bytes: 52428800\n\n    # Optional: per-channel \u002F per-user session settings\n    session:\n      assistant_id: mobile-agent  # custom agent names are also supported here\n      context:\n        thinking_enabled: false\n      users:\n        \"123456789\":\n          assistant_id: vip-agent\n          config:\n            recursion_limit: 150\n          context:\n            thinking_enabled: true\n            subagent_enabled: true\n```\n\nNotes:\n- `assistant_id: lead_agent` calls the default LangGraph assistant directly.\n- If `assistant_id` is set to a custom agent name, DeerFlow still routes through `lead_agent` and injects that value as `agent_name`, so the custom agent's SOUL\u002Fconfig takes effect for IM channels.\n\nSet the corresponding API keys in your `.env` file:\n\n```bash\n# Telegram\nTELEGRAM_BOT_TOKEN=123456789:ABCdefGHIjklMNOpqrSTUvwxYZ\n\n# Slack\nSLACK_BOT_TOKEN=xoxb-...\nSLACK_APP_TOKEN=xapp-...\n\n# Feishu \u002F Lark\nFEISHU_APP_ID=cli_xxxx\nFEISHU_APP_SECRET=your_app_secret\n\n# WeChat iLink\nWECHAT_BOT_TOKEN=your_ilink_bot_token\nWECHAT_ILINK_BOT_ID=your_ilink_bot_id\n\n# WeCom\nWECOM_BOT_ID=your_bot_id\nWECOM_BOT_SECRET=your_bot_secret\n```\n\n**Telegram Setup**\n\n1. 
Chat with [@BotFather](https:\u002F\u002Ft.me\u002FBotFather), send `\u002Fnewbot`, and copy the HTTP API token.\n2. Set `TELEGRAM_BOT_TOKEN` in `.env` and enable the channel in `config.yaml`.\n\n**Slack Setup**\n\n1. Create a Slack App at [api.slack.com\u002Fapps](https:\u002F\u002Fapi.slack.com\u002Fapps) → Create New App → From scratch.\n2. Under **OAuth & Permissions**, add Bot Token Scopes: `app_mentions:read`, `chat:write`, `im:history`, `im:read`, `im:write`, `files:write`.\n3. Enable **Socket Mode** → generate an App-Level Token (`xapp-…`) with `connections:write` scope.\n4. Under **Event Subscriptions**, subscribe to bot events: `app_mention`, `message.im`.\n5. Set `SLACK_BOT_TOKEN` and `SLACK_APP_TOKEN` in `.env` and enable the channel in `config.yaml`.\n\n**Feishu \u002F Lark Setup**\n\n1. Create an app on [Feishu Open Platform](https:\u002F\u002Fopen.feishu.cn\u002F) → enable **Bot** capability.\n2. Add permissions: `im:message`, `im:message.p2p_msg:readonly`, `im:resource`.\n3. Under **Events**, subscribe to `im.message.receive_v1` and select **Long Connection** mode.\n4. Copy the App ID and App Secret. Set `FEISHU_APP_ID` and `FEISHU_APP_SECRET` in `.env` and enable the channel in `config.yaml`.\n\n**WeChat Setup**\n\n1. Enable the `wechat` channel in `config.yaml`.\n2. Either set `WECHAT_BOT_TOKEN` in `.env`, or set `qrcode_login_enabled: true` for first-time QR bootstrap.\n3. When `bot_token` is absent and QR bootstrap is enabled, watch backend logs for the QR content returned by iLink and complete the binding flow.\n4. After the QR flow succeeds, DeerFlow persists the acquired token under `state_dir` for later restarts.\n5. For Docker Compose deployments, keep `state_dir` on a persistent volume so the `get_updates_buf` cursor and saved auth state survive restarts.\n\n**WeCom Setup**\n\n1. Create a bot on the WeCom AI Bot platform and obtain the `bot_id` and `bot_secret`.\n2. 
Enable `channels.wecom` in `config.yaml` and fill in `bot_id` \u002F `bot_secret`.\n3. Set `WECOM_BOT_ID` and `WECOM_BOT_SECRET` in `.env`.\n4. Make sure backend dependencies include `wecom-aibot-python-sdk`. The channel uses a WebSocket long connection and does not require a public callback URL.\n5. The current integration supports inbound text, image, and file messages. Final images\u002Ffiles generated by the agent are also sent back to the WeCom conversation.\n\nWhen DeerFlow runs in Docker Compose, IM channels execute inside the `gateway` container. In that case, do not point `channels.langgraph_url` or `channels.gateway_url` at `localhost`; use container service names such as `http:\u002F\u002Flanggraph:2024` and `http:\u002F\u002Fgateway:8001`, or set `DEER_FLOW_CHANNELS_LANGGRAPH_URL` and `DEER_FLOW_CHANNELS_GATEWAY_URL`.\n\n**Commands**\n\nOnce a channel is connected, you can interact with DeerFlow directly from the chat:\n\n| Command | Description |\n|---------|-------------|\n| `\u002Fnew` | Start a new conversation |\n| `\u002Fstatus` | Show current thread info |\n| `\u002Fmodels` | List available models |\n| `\u002Fmemory` | View memory |\n| `\u002Fhelp` | Show help |\n\n> Messages without a command prefix are treated as regular chat — DeerFlow creates a thread and responds conversationally.\n\n#### LangSmith Tracing\n\nDeerFlow has built-in [LangSmith](https:\u002F\u002Fsmith.langchain.com) integration for observability. 
When enabled, all LLM calls, agent runs, and tool executions are traced and visible in the LangSmith dashboard.\n\nAdd the following to your `.env` file:\n\n```bash\nLANGSMITH_TRACING=true\nLANGSMITH_ENDPOINT=https:\u002F\u002Fapi.smith.langchain.com\nLANGSMITH_API_KEY=lsv2_pt_xxxxxxxxxxxxxxxx\nLANGSMITH_PROJECT=xxx\n```\n\n#### Langfuse Tracing\n\nDeerFlow also supports [Langfuse](https:\u002F\u002Flangfuse.com) observability for LangChain-compatible runs.\n\nAdd the following to your `.env` file:\n\n```bash\nLANGFUSE_TRACING=true\nLANGFUSE_PUBLIC_KEY=pk-lf-xxxxxxxxxxxxxxxx\nLANGFUSE_SECRET_KEY=sk-lf-xxxxxxxxxxxxxxxx\nLANGFUSE_BASE_URL=https:\u002F\u002Fcloud.langfuse.com\n```\n\nIf you are using a self-hosted Langfuse instance, set `LANGFUSE_BASE_URL` to your deployment URL.\n\n#### Using Both Providers\n\nIf both LangSmith and Langfuse are enabled, DeerFlow attaches both tracing callbacks and reports the same model activity to both systems.\n\nIf a provider is explicitly enabled but missing required credentials, or if its callback fails to initialize, DeerFlow fails fast when tracing is initialized during model creation and the error message names the provider that caused the failure.\n\nFor Docker deployments, tracing is disabled by default. Set `LANGSMITH_TRACING=true` and `LANGSMITH_API_KEY` in your `.env` to enable it.\n\n## From Deep Research to Super Agent Harness\n\nDeerFlow started as a Deep Research framework — and the community ran with it. Since launch, developers have pushed it far beyond research: building data pipelines, generating slide decks, spinning up dashboards, automating content workflows. Things we never anticipated.\n\nThat told us something important: DeerFlow wasn't just a research tool. It was a **harness** — a runtime that gives agents the infrastructure to actually get work done.\n\nSo we rebuilt it from scratch.\n\nDeerFlow 2.0 is no longer a framework you wire together. 
It's a super agent harness — batteries included, fully extensible. Built on LangGraph and LangChain, it ships with everything an agent needs out of the box: a filesystem, memory, skills, sandbox-aware execution, and the ability to plan and spawn sub-agents for complex, multi-step tasks.\n\nUse it as-is. Or tear it apart and make it yours.\n\n## Core Features\n\n### Skills & Tools\n\nSkills are what make DeerFlow do *almost anything*.\n\nA standard Agent Skill is a structured capability module — a Markdown file that defines a workflow, best practices, and references to supporting resources. DeerFlow ships with built-in skills for research, report generation, slide creation, web pages, image and video generation, and more. But the real power is extensibility: add your own skills, replace the built-in ones, or combine them into compound workflows.\n\nSkills are loaded progressively — only when the task needs them, not all at once. This keeps the context window lean and makes DeerFlow work well even with token-sensitive models.\n\nWhen you install `.skill` archives through the Gateway, DeerFlow accepts standard optional frontmatter metadata such as `version`, `author`, and `compatibility` instead of rejecting otherwise valid external skills.\n\nTools follow the same philosophy. DeerFlow comes with a core toolset — web search, web fetch, file operations, bash execution — and supports custom tools via MCP servers and Python functions. Swap anything. 
Add anything.\n\nGateway-generated follow-up suggestions now normalize both plain-string model output and block\u002Flist-style rich content before parsing the JSON array response, so provider-specific content wrappers do not silently drop suggestions.\n\n```\n# Paths inside the sandbox container\n\u002Fmnt\u002Fskills\u002Fpublic\n├── research\u002FSKILL.md\n├── report-generation\u002FSKILL.md\n├── slide-creation\u002FSKILL.md\n├── web-page\u002FSKILL.md\n└── image-generation\u002FSKILL.md\n\n\u002Fmnt\u002Fskills\u002Fcustom\n└── your-custom-skill\u002FSKILL.md      ← yours\n```\n\n#### Claude Code Integration\n\nThe `claude-to-deerflow` skill lets you interact with a running DeerFlow instance directly from [Claude Code](https:\u002F\u002Fdocs.anthropic.com\u002Fen\u002Fdocs\u002Fclaude-code). Send research tasks, check status, manage threads — all without leaving the terminal.\n\n**Install the skill**:\n\n```bash\nnpx skills add https:\u002F\u002Fgithub.com\u002Fbytedance\u002Fdeer-flow --skill claude-to-deerflow\n```\n\nThen make sure DeerFlow is running (default at `http:\u002F\u002Flocalhost:2026`) and use the `\u002Fclaude-to-deerflow` command in Claude Code.\n\n**What you can do**:\n- Send messages to DeerFlow and get streaming responses\n- Choose execution modes: flash (fast), standard, pro (planning), ultra (sub-agents)\n- Check DeerFlow health, list models\u002Fskills\u002Fagents\n- Manage threads and conversation history\n- Upload files for analysis\n\n**Environment variables** (optional, for custom endpoints):\n\n```bash\nDEERFLOW_URL=http:\u002F\u002Flocalhost:2026            # Unified proxy base URL\nDEERFLOW_GATEWAY_URL=http:\u002F\u002Flocalhost:2026    # Gateway API\nDEERFLOW_LANGGRAPH_URL=http:\u002F\u002Flocalhost:2026\u002Fapi\u002Flanggraph  # LangGraph API\n```\n\nSee [`skills\u002Fpublic\u002Fclaude-to-deerflow\u002FSKILL.md`](skills\u002Fpublic\u002Fclaude-to-deerflow\u002FSKILL.md) for the full API reference.\n\n### Sub-Agents\n\nComplex 
tasks rarely fit in a single pass. DeerFlow decomposes them.\n\nThe lead agent can spawn sub-agents on the fly — each with its own scoped context, tools, and termination conditions. Sub-agents run in parallel when possible, report back structured results, and the lead agent synthesizes everything into a coherent output.\n\nThis is how DeerFlow handles tasks that take minutes to hours: a research task might fan out into a dozen sub-agents, each exploring a different angle, then converge into a single report — or a website — or a slide deck with generated visuals. One harness, many hands.\n\n### Sandbox & File System\n\nDeerFlow doesn't just *talk* about doing things. It has its own computer.\n\nEach task gets its own execution environment with a full filesystem view — skills, workspace, uploads, outputs. The agent reads, writes, and edits files. It can view images and, when configured safely, execute shell commands.\n\nWith `AioSandboxProvider`, shell execution runs inside isolated containers. With `LocalSandboxProvider`, file tools still map to per-thread directories on the host, but host `bash` is disabled by default because it is not a secure isolation boundary. Re-enable host bash only for fully trusted local workflows.\n\nThis is the difference between a chatbot with tool access and an agent with an actual execution environment.\n\n```\n# Paths inside the sandbox container\n\u002Fmnt\u002Fuser-data\u002F\n├── uploads\u002F          ← your files\n├── workspace\u002F        ← agents' working directory\n└── outputs\u002F          ← final deliverables\n```\n\n### Context Engineering\n\n**Isolated Sub-Agent Context**: Each sub-agent runs in its own isolated context. This means that the sub-agent will not be able to see the context of the main agent or other sub-agents. 
This keeps each sub-agent focused on its own task, free of irrelevant history from the lead agent or its siblings.\n\n**Summarization**: Within a session, DeerFlow manages context aggressively — summarizing completed sub-tasks, offloading intermediate results to the filesystem, compressing what's no longer immediately relevant. This lets it stay sharp across long, multi-step tasks without blowing the context window.\n\n**Strict Tool-Call Recovery**: When a provider or middleware interrupts a tool-call loop, DeerFlow now strips provider-level raw tool-call metadata on forced-stop assistant messages and injects placeholder tool results for dangling calls before the next model invocation. This keeps OpenAI-compatible reasoning models that strictly validate `tool_call_id` sequences from failing with malformed history errors.\n\n### Long-Term Memory\n\nMost agents forget everything the moment a conversation ends. DeerFlow remembers.\n\nAcross sessions, DeerFlow builds a persistent memory of your profile, preferences, and accumulated knowledge. The more you use it, the better it knows you — your writing style, your technical stack, your recurring workflows. Memory is stored locally and stays under your control.\n\nMemory updates now skip duplicate fact entries at apply time, so repeated preferences and context do not accumulate endlessly across sessions.\n\n## Recommended Models\n\nDeerFlow is model-agnostic — it works with any LLM that implements the OpenAI-compatible API. 
That said, it performs best with models that support:\n\n- **Long context windows** (100k+ tokens) for deep research and multi-step tasks\n- **Reasoning capabilities** for adaptive planning and complex decomposition\n- **Multimodal inputs** for image understanding and video comprehension\n- **Strong tool-use** for reliable function calling and structured outputs\n\n## Embedded Python Client\n\nDeerFlow can be used as an embedded Python library without running the full HTTP services. The `DeerFlowClient` provides direct in-process access to all agent and Gateway capabilities, returning the same response schemas as the HTTP Gateway API:\n\n```python\nfrom deerflow.client import DeerFlowClient\n\nclient = DeerFlowClient()\n\n# Chat\nresponse = client.chat(\"Analyze this paper for me\", thread_id=\"my-thread\")\n\n# Streaming (LangGraph SSE protocol: values, messages-tuple, end)\nfor event in client.stream(\"hello\"):\n    if event.type == \"messages-tuple\" and event.data.get(\"type\") == \"ai\":\n        print(event.data[\"content\"])\n\n# Configuration & management — returns Gateway-aligned dicts\nmodels = client.list_models()        # {\"models\": [...]}\nskills = client.list_skills()        # {\"skills\": [...]}\nclient.update_skill(\"web-search\", enabled=True)\nclient.upload_files(\"thread-1\", [\".\u002Freport.pdf\"])  # {\"success\": True, \"files\": [...]}\n```\n\nAll dict-returning methods are validated against Gateway Pydantic response models in CI (`TestGatewayConformance`), ensuring the embedded client stays in sync with the HTTP API schemas. The HTTP Gateway also exposes `DELETE \u002Fapi\u002Fthreads\u002F{thread_id}` to remove DeerFlow-managed local thread data after the LangGraph thread itself has been deleted. 
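The `client.stream(...)` loop above consumes LangGraph-style SSE events. As a minimal, self-contained sketch of that wire framing (illustrative only; `parse_sse` and the sample frames below are hypothetical helpers, not part of the DeerFlow API), a stdlib-only parser can group `event:` and `data:` lines into the same `values` / `messages-tuple` / `end` event shapes:

```python
import json

def parse_sse(lines):
    """Group SSE 'event:'/'data:' lines into (event_name, json_payload) frames.

    Hypothetical helper for illustration; a blank line terminates each frame,
    following standard Server-Sent Events framing.
    """
    event, data_lines = "message", []
    for line in lines:
        line = line.rstrip("\n")
        if line.startswith("event:"):
            event = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data_lines.append(line[len("data:"):].strip())
        elif line == "" and data_lines:  # blank line closes the frame
            yield event, json.loads("\n".join(data_lines))
            event, data_lines = "message", []

# Sample frames shaped like a messages-tuple chunk followed by the end event
raw = [
    "event: messages-tuple",
    'data: {"type": "ai", "content": "hello"}',
    "",
    "event: end",
    "data: {}",
    "",
]

for name, payload in parse_sse(raw):
    print(name, payload)
```

Filtering on `name == "messages-tuple"` and `payload.get("type") == "ai"` then mirrors the streaming example above.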
See `backend\u002Fpackages\u002Fharness\u002Fdeerflow\u002Fclient.py` for full API documentation.\n\n## Documentation\n\n- [Contributing Guide](CONTRIBUTING.md) - Development environment setup and workflow\n- [Configuration Guide](backend\u002Fdocs\u002FCONFIGURATION.md) - Setup and configuration instructions\n- [Architecture Overview](backend\u002FCLAUDE.md) - Technical architecture details\n- [Backend Architecture](backend\u002FREADME.md) - Backend architecture and API reference\n\n## ⚠️ Security Notice\n\n### Improper Deployment May Introduce Security Risks\n\nDeerFlow has key high-privilege capabilities including **system command execution, resource operations, and business logic invocation**, and is designed by default to be **deployed in a local trusted environment (accessible only via the 127.0.0.1 loopback interface)**. If you deploy the agent in untrusted environments — such as LAN networks, public cloud servers, or other multi-endpoint accessible environments — without strict security measures, it may introduce security risks, including:\n\n- **Unauthorized invocation**: Agent functionality could be discovered by malicious third parties or internet scanners, triggering bulk unauthorized requests that execute high-risk operations such as system commands and file read\u002Fwrite, potentially causing serious security consequences.\n- **Compliance and legal risks**: If the agent is abused to conduct cyberattacks, data theft, or other illegal activities, you may face legal liability and compliance exposure.\n\n### Security Recommendations\n\n**Note: We strongly recommend deploying DeerFlow in a local trusted network environment.** If you need cross-device or cross-network deployment, you must implement strict security measures, such as:\n\n- **IP allowlist**: Use `iptables`, or deploy hardware firewalls \u002F switches with Access Control Lists (ACL), to **configure IP allowlist rules** and deny access from all other IP 
addresses.\n- **Authentication gateway**: Configure a reverse proxy (e.g., nginx) and **enable strong pre-authentication**, blocking any unauthenticated access.\n- **Network isolation**: Where possible, place the agent and trusted devices in the **same dedicated VLAN**, isolated from other network devices.\n- **Stay updated**: Continue to follow DeerFlow's security feature updates.\n\n## Contributing\n\nWe welcome contributions! Please see [CONTRIBUTING.md](CONTRIBUTING.md) for development setup, workflow, and guidelines.\n\nRegression coverage includes Docker sandbox mode detection and provisioner kubeconfig-path handling tests in `backend\u002Ftests\u002F`.\nGateway artifact serving now forces active web content types (`text\u002Fhtml`, `application\u002Fxhtml+xml`, `image\u002Fsvg+xml`) to download as attachments instead of inline rendering, reducing XSS risk for generated artifacts.\n\n## License\n\nThis project is open source and available under the [MIT License](.\u002FLICENSE).\n\n## Acknowledgments\n\nDeerFlow is built upon the incredible work of the open-source community. We are deeply grateful to all the projects and contributors whose efforts have made DeerFlow possible. 
Truly, we stand on the shoulders of giants.\n\nWe would like to extend our sincere appreciation to the following projects for their invaluable contributions:\n\n- **[LangChain](https:\u002F\u002Fgithub.com\u002Flangchain-ai\u002Flangchain)**: Their exceptional framework powers our LLM interactions and chains, enabling seamless integration and functionality.\n- **[LangGraph](https:\u002F\u002Fgithub.com\u002Flangchain-ai\u002Flanggraph)**: Their innovative approach to multi-agent orchestration has been instrumental in enabling DeerFlow's sophisticated workflows.\n\nThese projects exemplify the transformative power of open-source collaboration, and we are proud to build upon their foundations.\n\n### Key Contributors\n\nA heartfelt thank you goes out to the core authors of `DeerFlow`, whose vision, passion, and dedication have brought this project to life:\n\n- **[Daniel Walnut](https:\u002F\u002Fgithub.com\u002FhetaoBackend\u002F)**\n- **[Henry Li](https:\u002F\u002Fgithub.com\u002Fmagiccube\u002F)**\n\nYour unwavering commitment and expertise have been the driving force behind DeerFlow's success. 
We are honored to have you at the helm of this journey.\n\n## Star History\n\n[![Star History Chart](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_deer-flow_readme_1bf7aad60d33.png)](https:\u002F\u002Fstar-history.com\u002F#bytedance\u002Fdeer-flow&Date)\n","# 🦌 DeerFlow - 2.0\n\n英语 | [中文](.\u002FREADME_zh.md) | [日语](.\u002FREADME_ja.md) | [法语](.\u002FREADME_fr.md) | [俄语](.\u002FREADME_ru.md)\n\n[![Python](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FPython-3.12%2B-3776AB?logo=python&logoColor=white)](.\u002Fbackend\u002Fpyproject.toml)\n[![Node.js](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FNode.js-22%2B-339933?logo=node.js&logoColor=white)](.\u002FMakefile)\n[![License: MIT](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLicense-MIT-yellow.svg)](.\u002FLICENSE)\n\n\u003Ca href=\"https:\u002F\u002Ftrendshift.io\u002Frepositories\u002F14699\" target=\"_blank\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_deer-flow_readme_4a68feb902da.png\" alt=\"bytedance%2Fdeer-flow | Trendshift\" style=\"width: 250px; height: 55px;\" width=\"250\" height=\"55\"\u002F>\u003C\u002Fa>\n> 2026年2月28日，DeerFlow在推出2.0版本后，荣登GitHub趋势榜榜首！衷心感谢我们了不起的社区——是你们让这一切成为现实！💪🔥\n\nDeerFlow（**D**eep **E**xploration and **E**fficient **R**esearch **Flow**）是一个开源的**超级智能体框架**，用于编排**子智能体**、**记忆模块**和**沙盒环境**，以完成几乎任何任务——这一切都由**可扩展的能力模块**驱动。\n\nhttps:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002Fa8bcadc4-e040-4cf2-8fda-dd768b999c18\n\n> [!NOTE]\n> **DeerFlow 2.0 是一次从零开始的重写。** 它与 v1 版本没有任何代码共享。如果您正在寻找最初的 Deep Research 框架，它仍然维护在 [`1.x` 分支](https:\u002F\u002Fgithub.com\u002Fbytedance\u002Fdeer-flow\u002Ftree\u002Fmain-1.x)上——欢迎继续贡献代码。目前的开发重心已转移到 2.0 版本。\n\n## 官方网站\n\n[\u003Cimg width=\"2880\" height=\"1600\" alt=\"image\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_deer-flow_readme_dac4d6e08b1c.png\" 
\u002F>](https:\u002F\u002Fdeerflow.tech)\n\n访问我们的[**官方网站**](https:\u002F\u002Fdeerflow.tech)，了解更多内容并观看**真实演示**。\n\n## 字节跳动火山引擎编程计划\n\n\u003Cimg width=\"4808\" height=\"2400\" alt=\"英文方舟\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_deer-flow_readme_b699f03b4900.png\" \u002F>\n\n- 我们强烈建议使用 Doubao-Seed-2.0-Code、DeepSeek v3.2 和 Kimi 2.5 来运行 DeerFlow\n- [了解更多](https:\u002F\u002Fwww.byteplus.com\u002Fen\u002Factivity\u002Fcodingplan?utm_campaign=deer_flow&utm_content=deer_flow&utm_medium=devrel&utm_source=OWO&utm_term=deer_flow)\n- [中国大陆地区的开发者请点击这里](https:\u002F\u002Fwww.volcengine.com\u002Factivity\u002Fcodingplan?utm_campaign=deer_flow&utm_content=deer_flow&utm_medium=devrel&utm_source=OWO&utm_term=deer_flow)\n\n## InfoQuest\n\nDeerFlow 新近集成了字节跳动自主研发的智能搜索与爬虫工具套件——[InfoQuest（支持免费在线体验）](https:\u002F\u002Fdocs.byteplus.com\u002Fen\u002Fdocs\u002FInfoQuest\u002FWhat_is_Info_Quest)\n\n\u003Ca href=\"https:\u002F\u002Fdocs.byteplus.com\u002Fen\u002Fdocs\u002FInfoQuest\u002FWhat_is_Info_Quest\" target=\"_blank\">\n  \u003Cimg\n    src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_deer-flow_readme_ec01e74a7606.png\"   alt=\"InfoQuest_banner\"\n  \u002F>\n\u003C\u002Fa>\n\n---\n\n## 目录\n\n- [🦌 DeerFlow - 2.0](#-deerflow---20)\n  - [官方网站](#official-website)\n  - [字节跳动火山引擎编程计划](#coding-plan-from-bytedance-volcengine)\n  - [InfoQuest](#infoquest)\n  - [目录](#table-of-contents)\n  - [一行式智能体设置](#one-line-agent-setup)\n  - [快速入门](#quick-start)\n    - [配置](#configuration)\n    - [运行应用](#running-the-application)\n      - [部署规模](#deployment-sizing)\n      - [选项1：Docker（推荐）](#option-1-docker-recommended)\n      - [选项2：本地开发](#option-2-local-development)\n    - [高级功能](#advanced)\n      - [沙盒模式](#sandbox-mode)\n      - [MCP服务器](#mcp-server)\n      - [IM渠道](#im-channels)\n      - [LangSmith追踪](#langsmith-tracing)\n      - [Langfuse追踪](#langfuse-tracing)\n      - [同时使用两家提供商](#using-both-providers)\n  - [从Deep 
Research到超级智能体框架](#from-deep-research-to-super-agent-harness)\n  - [核心特性](#core-features)\n    - [技能与工具](#skills--tools)\n      - [Claude Code集成](#claude-code-integration)\n    - [子智能体](#sub-agents)\n    - [沙盒与文件系统](#sandbox--file-system)\n    - [上下文工程](#context-engineering)\n    - [长期记忆](#long-term-memory)\n  - [推荐模型](#recommended-models)\n  - [嵌入式Python客户端](#embedded-python-client)\n  - [文档](#documentation)\n  - [⚠️ 安全须知](#️-security-notice)\n    - [不当部署可能带来安全风险](#improper-deployment-may-introduce-security-risks)\n    - [安全建议](#security-recommendations)\n  - [贡献代码](#contributing)\n  - [许可证](#license)\n  - [致谢](#acknowledgments)\n    - [关键贡献者](#key-contributors)\n  - [星标历史](#star-history)\n\n## 一行式智能体设置\n\n如果您使用 Claude Code、Codex、Cursor、Windsurf 或其他编程智能体，只需用一句话就能向其传达设置指令：\n\n```text\n如果需要的话帮我克隆 DeerFlow，然后按照 https:\u002F\u002Fraw.githubusercontent.com\u002Fbytedance\u002Fdeer-flow\u002Fmain\u002FInstall.md 的说明进行本地开发的初始化。\n```\n\n这条提示专为编程智能体设计。它会指示智能体在必要时克隆仓库，在有 Docker 环境时选择 Docker，并在执行完下一步命令后停止，同时提醒用户还需补充其他必要的配置。\n\n## 快速入门\n\n### 配置\n\n1. **克隆 DeerFlow 仓库**\n\n   ```bash\n   git clone https:\u002F\u002Fgithub.com\u002Fbytedance\u002Fdeer-flow.git\n   cd deer-flow\n   ```\n\n2. 
**运行设置向导**\n\n   从项目根目录（`deer-flow\u002F`）中，运行：\n\n   ```bash\n   make setup\n   ```\n\n   这将启动一个交互式向导，引导您选择 LLM 提供商、可选的网络搜索功能，以及执行和安全偏好设置，例如沙盒模式、Bash 访问权限和文件写入工具。它会生成一个最小化的 `config.yaml` 文件，并将您的密钥写入 `.env` 文件。整个过程大约需要 2 分钟。\n\n   向导还允许您配置可选的网络搜索提供商，或者暂时跳过此步骤。\n\n   您可以随时运行 `make doctor` 来验证您的配置，并获取可行的修复建议。\n\n   > **高级\u002F手动配置**：如果您更倾向于直接编辑 `config.yaml` 文件，可以改用 `make config` 命令来复制完整的模板文件。请参阅 `config.example.yaml`，其中包含了完整的参考信息，包括通过 CLI 支持的提供商（Codex CLI、Claude Code OAuth）、OpenRouter、Responses API 等。\n\n   \u003Cdetails>\n   \u003Csummary>手动模型配置示例\u003C\u002Fsummary>\n\n   ```yaml\n   models:\n     - name: gpt-4o\n       display_name: GPT-4o\n       use: langchain_openai:ChatOpenAI\n       model: gpt-4o\n       api_key: $OPENAI_API_KEY\n\n     - name: openrouter-gemini-2.5-flash\n       display_name: Gemini 2.5 Flash (OpenRouter)\n       use: langchain_openai:ChatOpenAI\n       model: google\u002Fgemini-2.5-flash-preview\n       api_key: $OPENROUTER_API_KEY\n       base_url: https:\u002F\u002Fopenrouter.ai\u002Fapi\u002Fv1\n\n     - name: gpt-5-responses\n       display_name: GPT-5 (Responses API)\n       use: langchain_openai:ChatOpenAI\n       model: gpt-5\n       api_key: $OPENAI_API_KEY\n       use_responses_api: true\n       output_version: responses\u002Fv1\n\n     - name: qwen3-32b-vllm\n       display_name: Qwen3 32B (vLLM)\n       use: deerflow.models.vllm_provider:VllmChatModel\n       model: Qwen\u002FQwen3-32B\n       api_key: $VLLM_API_KEY\n       base_url: http:\u002F\u002Flocalhost:8000\u002Fv1\n       supports_thinking: true\n       when_thinking_enabled:\n         extra_body:\n           chat_template_kwargs:\n             enable_thinking: true\n   ```\n\n   OpenRouter 及类似兼容 OpenAI 的网关应使用 `langchain_openai:ChatOpenAI` 并配合 `base_url` 进行配置。如果您希望使用特定提供商的环境变量名，请明确将 `api_key` 指向该变量（例如 `api_key: $OPENROUTER_API_KEY`）。\n\n   若要通过 `\u002Fv1\u002Fresponses` 路由 OpenAI 模型，仍需使用 `langchain_openai:ChatOpenAI`，并设置 `use_responses_api: true` 和 `output_version: 
responses\u002Fv1`。\n\n   对于 vLLM 0.19.0，请使用 `deerflow.models.vllm_provider:VllmChatModel`。对于 Qwen 类型的推理模型，DeerFlow 通过 `extra_body.chat_template_kwargs.enable_thinking` 来启用推理功能，并在多轮工具调用对话中保留 vLLM 的非标准 `reasoning` 字段。旧版的 `thinking` 配置会自动规范化以确保向后兼容性。推理模型可能还需要在启动服务器时添加 `--reasoning-parser ...` 参数。如果您的本地 vLLM 部署接受任何非空 API 密钥，您仍然可以将 `VLLM_API_KEY` 设置为占位符值。\n\n   通过 CLI 支持的提供商示例：\n\n   ```yaml\n   models:\n     - name: gpt-5.4\n       display_name: GPT-5.4 (Codex CLI)\n       use: deerflow.models.openai_codex_provider:CodexChatModel\n       model: gpt-5.4\n       supports_thinking: true\n       supports_reasoning_effort: true\n\n     - name: claude-sonnet-4.6\n       display_name: Claude Sonnet 4.6 (Claude Code OAuth)\n       use: deerflow.models.claude_provider:ClaudeChatModel\n       model: claude-sonnet-4-6\n       max_tokens: 4096\n       supports_thinking: true\n   ```\n\n   - Codex CLI 会读取 `~\u002F.codex\u002Fauth.json`\n   - Claude Code 接受 `CLAUDE_CODE_OAUTH_TOKEN`、`ANTHROPIC_AUTH_TOKEN`、`CLAUDE_CODE_CREDENTIALS_PATH` 或 `~\u002F.claude\u002F.credentials.json`\n   - ACP 代理条目与模型提供商是分开的——如果您配置了 `acp_agents.codex`，请将其指向 Codex ACP 适配器，例如 `npx -y @zed-industries\u002Fcodex-acp`\n   - 在 macOS 上，如有必要，可显式导出 Claude Code 的认证信息：\n\n   ```bash\n   eval \"$(python3 scripts\u002Fexport_claude_code_oauth.py --print-export)\"\n   ```\n\n   API 密钥也可以手动设置在 `.env` 文件中（推荐），或直接在 shell 中导出：\n\n   ```bash\n   OPENAI_API_KEY=your-openai-api-key\n   TAVILY_API_KEY=your-tavily-api-key\n   ```\n\n   \u003C\u002Fdetails>\n\n### 运行应用\n\n#### 部署规模建议\n\n在选择 DeerFlow 的运行方式时，可参考下表作为实际的起点：\n\n| 部署目标         | 起始配置           | 推荐配置          | 备注                                                         |\n|------------------|-------------------|------------------|------------------------------------------------------------|\n| 本地评估 \u002F `make dev` | 4 vCPU、8 GB 内存、20 GB 空闲 SSD | 8 vCPU、16 GB 内存 | 适合单个开发者或使用托管模型 API 的轻量级会话。`2 vCPU \u002F 4 GB` 通常不够。 |\n| Docker 开发 \u002F `make docker-start` | 4 
vCPU、8 GB 内存、25 GB 空闲 SSD | 8 vCPU、16 GB 内存 | 镜像构建、绑定挂载和沙盒容器比纯本地开发需要更多的资源余量。 |\n| 长期运行服务器 \u002F `make up`       | 8 vCPU、16 GB 内存、40 GB 空闲 SSD | 16 vCPU、32 GB 内存 | 更适合共享使用、多智能体运行、报告生成或较重的沙盒工作负载。 |\n\n- 上述配置仅针对 DeerFlow 本身。如果您还部署了本地 LLM，则需单独为其规划资源。\n- 建议在 Linux 系统上使用 Docker 部署持久化服务器。macOS 和 Windows 则更适合用作开发或评估环境。\n- 如果 CPU 或内存占用持续居高不下，请先减少并发运行数量，再考虑升级到更高一级的资源配置。\n\n#### 方案一：Docker（推荐）\n\n**开发模式**（热重载、源码挂载）：\n\n```bash\nmake docker-init    # 拉取沙盒镜像（仅首次执行或镜像更新时）\nmake docker-start   # 启动服务（根据 config.yaml 自动检测是否为沙盒模式）\n```\n\n`make docker-start` 只有当 `config.yaml` 使用 provisioner 模式（即 `sandbox.use: deerflow.community.aio_sandbox:AioSandboxProvider` 并设置了 `provisioner_url`）时，才会启动 `provisioner` 服务。\n\nDocker 构建默认使用上游的 `uv` 注册表。若在受限网络中需要更快的镜像源，可在执行 `make docker-init` 或 `make docker-start` 之前，先导出环境变量 `UV_INDEX_URL=https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple` 和 `NPM_REGISTRY=https:\u002F\u002Fregistry.npmmirror.com`。\n\n后端进程会在下次访问配置时自动加载 `config.yaml` 的更改，因此在开发过程中更新模型元数据无需手动重启。\n\n> [!TIP]\n> 在 Linux 系统上，如果 Docker 相关命令因权限不足而失败（错误信息为“permission denied while trying to connect to the Docker daemon socket at unix:\u002F\u002F\u002Fvar\u002Frun\u002Fdocker.sock”），请将当前用户添加到 `docker` 用户组，并重新登录后再试。完整修复方法请参见 [CONTRIBUTING.md](CONTRIBUTING.md#linux-docker-daemon-permission-denied)。\n\n**生产模式**（本地构建镜像、挂载运行时配置和数据）：\n\n```bash\nmake up     # 构建镜像并启动所有生产服务\nmake down   # 停止并移除容器\n```\n\n> [!NOTE]\n> 当前 LangGraph 智能体服务器通过 `langgraph dev`（开源 CLI 服务器）运行。\n\n访问地址：http:\u002F\u002Flocalhost:2026\n\n详细的 Docker 开发指南请参阅 [CONTRIBUTING.md](CONTRIBUTING.md)。\n\n#### 方案二：本地开发\n\n如果您更倾向于在本地运行服务：\n\n前提条件：请先完成上述“配置”步骤（`make setup`）。`make dev` 需要在项目根目录下存在有效的 `config.yaml` 文件（可通过 `DEER_FLOW_CONFIG_PATH` 覆盖）。在启动前，请运行 `make doctor` 检查您的环境配置是否正确。\n在 Windows 系统上，请使用 Git Bash 来运行本地开发流程。原生的 `cmd.exe` 和 PowerShell 不支持基于 bash 的服务脚本，而 WSL 也未必适用，因为部分脚本依赖于 Windows 版 Git 提供的工具，如 `cygpath`。\n\n1. **检查前置条件**：\n   ```bash\n   make check  # 验证 Node.js 22+、pnpm、uv、nginx 是否已安装\n   ```\n\n2. 
**安装依赖**：\n   ```bash\n   make install  # 安装前后端依赖\n   ```\n\n3. **（可选）预先拉取沙盒镜像**：\n   ```bash\n   # 如果使用 Docker\u002F容器化沙盒，建议提前执行此步骤\n   make setup-sandbox\n   ```\n\n4. **（可选）加载示例记忆数据以便本地查看**：\n   ```bash\n   python scripts\u002Fload_memory_sample.py\n   ```\n   此操作会将示例数据复制到默认的本地运行时记忆文件中，方便测试人员立即体验“设置 > 记忆”功能。\n   最短的审查流程请参阅 [backend\u002Fdocs\u002FMEMORY_SETTINGS_REVIEW.md](backend\u002Fdocs\u002FMEMORY_SETTINGS_REVIEW.md)。\n\n5. **启动服务**：\n   ```bash\n   make dev\n   ```\n\n6. **访问地址**：http:\u002F\u002Flocalhost:2026\n\n#### 启动模式\n\nDeerFlow 支持两种维度的多种启动模式：\n\n- **开发 \u002F 生产** — 开发模式支持热重载；生产模式则使用预构建的前端。\n- **标准 \u002F 网关** — 标准模式使用独立的 LangGraph 服务器（共 4 个进程）；网关模式（实验性）则将智能体运行时嵌入到网关 API 中（共 3 个进程）。\n\n|                | 本地前台模式 | 本地守护进程模式 | Docker 开发模式 | Docker 生产模式 |\n|----------------|-------------|-----------------|---------------|---------------|\n| **开发模式**   | `.\u002Fscripts\u002Fserve.sh --dev`\u003Cbr>`make dev` | `.\u002Fscripts\u002Fserve.sh --dev --daemon`\u003Cbr>`make dev-daemon` | `.\u002Fscripts\u002Fdocker.sh start`\u003Cbr>`make docker-start` | — |\n| **开发 + 网关** | `.\u002Fscripts\u002Fserve.sh --dev --gateway`\u003Cbr>`make dev-pro` | `.\u002Fscripts\u002Fserve.sh --dev --gateway --daemon`\u003Cbr>`make dev-daemon-pro` | `.\u002Fscripts\u002Fdocker.sh start --gateway`\u003Cbr>`make docker-start-pro` | — |\n| **生产模式**   | `.\u002Fscripts\u002Fserve.sh --prod`\u003Cbr>`make start` | `.\u002Fscripts\u002Fserve.sh --prod --daemon`\u003Cbr>`make start-daemon` | — | `.\u002Fscripts\u002Fdeploy.sh`\u003Cbr>`make up` |\n| **生产 + 网关**| `.\u002Fscripts\u002Fserve.sh --prod --gateway`\u003Cbr>`make start-pro` | `.\u002Fscripts\u002Fserve.sh --prod --gateway --daemon`\u003Cbr>`make start-daemon-pro` | — | `.\u002Fscripts\u002Fdeploy.sh --gateway`\u003Cbr>`make up-pro` |\n\n| 操作         | 本地模式 | Docker 开发模式 | Docker 生产模式 |\n|--------------|---------|---------------|---------------|\n| **停止**     | `.\u002Fscripts\u002Fserve.sh --stop`\u003Cbr>`make stop` | 
`.\u002Fscripts\u002Fdocker.sh stop`\u003Cbr>`make docker-stop` | `.\u002Fscripts\u002Fdeploy.sh down`\u003Cbr>`make down` |\n| **重启**     | `.\u002Fscripts\u002Fserve.sh --restart [flags]` | `.\u002Fscripts\u002Fdocker.sh restart` | — |\n\n> **网关模式**去除了 LangGraph 服务器进程——网关 API 直接通过异步任务处理智能体执行，并自行管理并发。\n\n#### 为何选择网关模式？\n\n在标准模式下，DeerFlow 会同时运行一个专用的 [LangGraph 平台](https:\u002F\u002Flangchain-ai.github.io\u002Flanggraph\u002F)服务器以及网关 API。这种架构虽然可行，但存在一些权衡：\n\n|                | 标准模式 | 网关模式 |\n|----------------|---------|---------|\n| **架构**       | 网关（REST API）+ LangGraph（智能体运行时） | 网关直接嵌入智能体运行时 |\n| **并发控制**   | 每个工作线程的 `--n-jobs-per-worker`（需许可证） | 通过 `--workers` × 异步任务实现（无每工作线程上限限制） |\n| **容器\u002F进程**  | 4 个（前端、网关、LangGraph、Nginx） | 3 个（前端、网关、Nginx） |\n| **资源消耗**   | 较高（两个 Python 运行时） | 较低（单个 Python 运行时） |\n| **LangGraph 平台许可证** | 生产镜像需要 | 无需 |\n| **冷启动时间** | 较长（需初始化两个服务） | 较短（只需初始化一个服务） |\n\n两种模式在功能上是等效的——相同的智能体、工具和技能在任一模式下均可正常工作。\n\n#### Docker 生产部署\n\n`deploy.sh` 支持分别构建和启动镜像。镜像本身与运行模式无关——运行时模式是在启动时指定的：\n\n```bash\n\n# 一步式（构建 + 启动）\ndeploy.sh                    # 标准模式（默认）\ndeploy.sh --gateway          # 网关模式\n\n# 两步式（先构建一次，再以任意模式启动）\ndeploy.sh build              # 构建所有镜像\ndeploy.sh start              # 以标准模式启动\ndeploy.sh start --gateway    # 以网关模式启动\n\n# 停止\ndeploy.sh down\n```\n\n### 高级功能\n#### 沙箱模式\n\nDeerFlow 支持多种沙箱执行模式：\n- **本地执行**（直接在宿主机上运行沙箱代码）\n- **Docker 执行**（在隔离的 Docker 容器中运行沙箱代码）\n- **Kubernetes 上的 Docker 执行**（通过 provisioner 服务将沙箱代码运行在 Kubernetes Pod 中）\n\n对于 Docker 开发环境，服务启动会遵循 `config.yaml` 中配置的沙箱模式。在本地或 Docker 模式下，`provisioner` 不会被启动。\n\n请参阅 [沙箱配置指南](backend\u002Fdocs\u002FCONFIGURATION.md#sandbox)，以配置您偏好的模式。\n\n#### MCP 服务器\n\nDeerFlow 支持可配置的 MCP 服务器和技能，以扩展其功能。对于 HTTP\u002FSSE 类型的 MCP 服务器，支持 OAuth token 流程（`client_credentials`、`refresh_token`）。详细说明请参阅 [MCP 服务器指南](backend\u002Fdocs\u002FMCP_SERVER.md)。\n\n#### IM 渠道\n\nDeerFlow 支持从消息应用接收任务。配置完成后，渠道会自动启动，且无需任何公共 IP 地址。\n\n| 渠道       | 传输方式         | 难度     
|\n|------------|------------------|----------|\n| Telegram   | Bot API（长轮询） | 简单     |\n| Slack      | Socket Mode      | 中等     |\n| Feishu \u002F Lark | WebSocket      | 中等     |\n| WeChat     | Tencent iLink（长轮询） | 中等     |\n| WeCom      | WebSocket        | 中等     |\n\n**`config.yaml` 中的配置：**\n\n```yaml\nchannels:\n  # LangGraph 服务器 URL（默认：http:\u002F\u002Flocalhost:2024）\n  langgraph_url: http:\u002F\u002Flocalhost:2024\n  # 网关 API URL（默认：http:\u002F\u002Flocalhost:8001）\n  gateway_url: http:\u002F\u002Flocalhost:8001\n\n  # 可选：所有移动端渠道的全局会话默认值\n  session:\n    assistant_id: lead_agent  # 或自定义代理名称；自定义代理会通过 lead_agent + agent_name 路由\n    config:\n      recursion_limit: 100\n    context:\n      thinking_enabled: true\n      is_plan_mode: false\n      subagent_enabled: false\n\n  feishu:\n    enabled: true\n    app_id: $FEISHU_APP_ID\n    app_secret: $FEISHU_APP_SECRET\n    # domain: https:\u002F\u002Fopen.feishu.cn       # 中国（默认）\n    # domain: https:\u002F\u002Fopen.larksuite.com   # 国际\n\n  wecom:\n    enabled: true\n    bot_id: $WECOM_BOT_ID\n    bot_secret: $WECOM_BOT_SECRET\n\n  slack:\n    enabled: true\n    bot_token: $SLACK_BOT_TOKEN     # xoxb-...\n    app_token: $SLACK_APP_TOKEN     # xapp-...（Socket Mode）\n    allowed_users: []               # 空列表表示允许所有用户\n\n  telegram:\n    enabled: true\n    bot_token: $TELEGRAM_BOT_TOKEN\n    allowed_users: []               # 空列表表示允许所有用户\n\n  wechat:\n    enabled: false\n    bot_token: $WECHAT_BOT_TOKEN\n    ilink_bot_id: $WECHAT_ILINK_BOT_ID\n    qrcode_login_enabled: true      # 可选：当缺少 bot_token 时，允许首次二维码引导登录\n    allowed_users: []               # 空列表表示允许所有用户\n    polling_timeout: 35\n    state_dir: .\u002F.deer-flow\u002Fwechat\u002Fstate\n    max_inbound_image_bytes: 20971520\n    max_outbound_image_bytes: 20971520\n    max_inbound_file_bytes: 52428800\n    max_outbound_file_bytes: 52428800\n\n    # 可选：针对特定渠道或用户的会话设置\n    session:\n      assistant_id: mobile-agent  # 此处也支持自定义代理名称\n      context:\n        
thinking_enabled: false\n      users:\n        \"123456789\":\n          assistant_id: vip-agent\n          config:\n            recursion_limit: 150\n          context:\n            thinking_enabled: true\n            subagent_enabled: true\n```\n\n注意事项：\n- 当 `assistant_id: lead_agent` 时，会直接调用默认的 LangGraph 助理。\n- 如果将 `assistant_id` 设置为自定义代理名称，DeerFlow 仍会通过 `lead_agent` 进行路由，并将该值作为 `agent_name` 注入，从而使自定义代理的 SOUL\u002F配置对 IM 渠道生效。\n\n请在 `.env` 文件中设置相应的 API 密钥：\n\n```bash\n# Telegram\nTELEGRAM_BOT_TOKEN=123456789:ABCdefGHIjklMNOpqrSTUvwxYZ\n\n# Slack\nSLACK_BOT_TOKEN=xoxb-...\nSLACK_APP_TOKEN=xapp-...\n\n# Feishu \u002F Lark\nFEISHU_APP_ID=cli_xxxx\nFEISHU_APP_SECRET=your_app_secret\n\n# WeChat iLink\nWECHAT_BOT_TOKEN=your_ilink_bot_token\nWECHAT_ILINK_BOT_ID=your_ilink_bot_id\n\n# WeCom\nWECOM_BOT_ID=your_bot_id\nWECOM_BOT_SECRET=your_bot_secret\n```\n\n**Telegram 设置**\n\n1. 与 [@BotFather](https:\u002F\u002Ft.me\u002FBotFather) 对话，发送 `\u002Fnewbot`，然后复制 HTTP API 令牌。\n2. 在 `.env` 中设置 `TELEGRAM_BOT_TOKEN`，并在 `config.yaml` 中启用该渠道。\n\n**Slack 设置**\n\n1. 在 [api.slack.com\u002Fapps](https:\u002F\u002Fapi.slack.com\u002Fapps) 创建 Slack 应用 → Create New App → From scratch。\n2. 在 **OAuth & Permissions** 中添加 Bot Token Scopes：`app_mentions:read`、`chat:write`、`im:history`、`im:read`、`im:write`、`files:write`。\n3. 启用 **Socket Mode** → 生成带 `connections:write` 权限范围的 App-Level Token（`xapp-…`）。\n4. 在 **Event Subscriptions** 中订阅机器人事件：`app_mention`、`message.im`。\n5. 在 `.env` 中设置 `SLACK_BOT_TOKEN` 和 `SLACK_APP_TOKEN`，并在 `config.yaml` 中启用该渠道。\n\n**Feishu \u002F Lark 设置**\n\n1. 在[飞书开放平台](https:\u002F\u002Fopen.feishu.cn\u002F)创建应用 → 启用**机器人（Bot）**能力。\n2. 添加权限：`im:message`、`im:message.p2p_msg:readonly`、`im:resource`。\n3. 在**事件（Events）**中订阅 `im.message.receive_v1`，并选择**长连接**模式。\n4. 复制 App ID 和 App Secret，在 `.env` 中设置 `FEISHU_APP_ID` 和 `FEISHU_APP_SECRET`，并在 `config.yaml` 中启用该渠道。\n\n**WeChat 设置**\n\n1. 在 `config.yaml` 中启用 `wechat` 渠道。\n2. 
Set `WECHAT_BOT_TOKEN` in `.env`, or set `qrcode_login_enabled: true` for a first-time QR bootstrap.
3. If `bot_token` is missing and QR bootstrap is enabled, check the backend logs for the QR code content returned by iLink and complete the binding flow.
4. Once the QR flow succeeds, DeerFlow persists the acquired token to `state_dir` so it can be reused across restarts.
5. For Docker Compose deployments, keep `state_dir` on persistent storage so the `get_updates_buf` cursor and the saved auth state survive restarts.

**WeCom setup**

1. Create a bot on the WeCom AI Bot platform and obtain its `bot_id` and `bot_secret`.
2. Enable `channels.wecom` in `config.yaml` and fill in `bot_id`/`bot_secret`.
3. Set `WECOM_BOT_ID` and `WECOM_BOT_SECRET` in `.env`.
4. Make sure `wecom-aibot-python-sdk` is included in the backend dependencies. This channel uses a long-lived WebSocket connection and does not require a public callback URL.
5. The current integration supports inbound text, image, and file messages. Final images/files produced by the agent are also sent back to the WeCom conversation.

When DeerFlow runs under Docker Compose, the IM channels operate inside the `gateway` container. In that case, do not point `channels.langgraph_url` or `channels.gateway_url` at `localhost`; use the container service names, e.g. `http://langgraph:2024` and `http://gateway:8001`, or set `DEER_FLOW_CHANNELS_LANGGRAPH_URL` and `DEER_FLOW_CHANNELS_GATEWAY_URL`.

**Commands**

Once a channel is connected, you can interact with DeerFlow directly from the chat:

| Command | Description |
|---------|-------------|
| `/new` | Start a new conversation |
| `/status` | Show current thread info |
| `/models` | List available models |
| `/memory` | View memory |
| `/help` | Show help |

> Messages without a command prefix are treated as regular chat; DeerFlow creates a thread and replies conversationally.

#### LangSmith Tracing

DeerFlow ships with built-in [LangSmith](https://smith.langchain.com) integration for observability. 
Once enabled, every LLM call, agent run, and tool execution is traced and visible in the LangSmith dashboard.

Add the following to your `.env` file:

```bash
LANGSMITH_TRACING=true
LANGSMITH_ENDPOINT=https://api.smith.langchain.com
LANGSMITH_API_KEY=lsv2_pt_xxxxxxxxxxxxxxxx
LANGSMITH_PROJECT=xxx
```

#### Langfuse Tracing

DeerFlow also supports [Langfuse](https://langfuse.com) observability for LangChain-compatible runs.

Add the following to your `.env` file:

```bash
LANGFUSE_TRACING=true
LANGFUSE_PUBLIC_KEY=pk-lf-xxxxxxxxxxxxxxxx
LANGFUSE_SECRET_KEY=sk-lf-xxxxxxxxxxxxxxxx
LANGFUSE_BASE_URL=https://cloud.langfuse.com
```

If you run a self-hosted Langfuse instance, set `LANGFUSE_BASE_URL` to your deployment URL.

#### Using Both Providers

When both LangSmith and Langfuse are enabled, DeerFlow attaches both tracing callbacks and reports the same model activity to both systems.

If a provider is explicitly enabled but its required credentials are missing, or its callback fails to initialize, DeerFlow raises an error immediately when tracing is initialized during model creation, and the error message names the provider that caused the failure.

Tracing is disabled by default in Docker deployments. To enable it, set `LANGSMITH_TRACING=true` and `LANGSMITH_API_KEY` in your `.env` file.

## From Deep Research to Super Agent Harness

DeerFlow started as a deep-research framework, and the community ran with it. Since launch, developers have pushed it far beyond research: building data pipelines, generating slide decks, assembling dashboards, and automating content workflows, tasks we never anticipated.

That told us something important: DeerFlow was never just a research tool. It was a **harness**, a runtime that gives agents the infrastructure to actually get work done.

So we redesigned it from the ground up.

DeerFlow 2.0 is no longer a framework you have to assemble yourself. It is a batteries-included, fully extensible super agent harness. Built on LangGraph and LangChain, it ships with a file system, memory, skills, sandbox-aware execution, and the ability to plan and spawn subagents for complex multi-step work, all ready for agents to use out of the box.

Use it as-is, or take only the pieces you need and build your own way.

## Core Capabilities

### Skills & Tools

Skills are what let DeerFlow do *almost anything*.

A standard agent skill is a structured capability module: a Markdown file that defines a workflow, best practices, and related resources. DeerFlow ships with built-in skills for research, report writing, slide generation, web page creation, and image and video generation. The real strength, though, is extensibility: add your own skills, replace the defaults, or combine several skills into composite workflows.

Skills are loaded sequentially, only when a task needs them. 
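To make the skill format concrete, here is a hypothetical `SKILL.md` sketch. The skill name, workflow, and file contents are illustrative only; `version`, `author`, and `compatibility` are the kind of standard optional frontmatter fields the gateway accepts:

```markdown
---
name: changelog-writer   # hypothetical skill name, not shipped with DeerFlow
version: 0.1.0           # standard optional frontmatter metadata
author: you
compatibility: ">=2.0"
---

# Changelog Writer

## Workflow
1. Collect merged PR titles from the workspace.
2. Group them by change type (feat / fix / chore).
3. Write the result to outputs/CHANGELOG.md.

## Best Practices
- Use the imperative mood for entries.
- Link issue IDs where available.
```

A skill like this is just a document the agent reads when the task calls for it; no code registration is involved.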
On-demand loading keeps the context window efficient and lets DeerFlow run smoothly even on models that are sensitive to token counts.

When installing a `.skill` archive through the gateway, DeerFlow accepts standard optional frontmatter metadata such as `version`, `author`, and `compatibility`, rather than rejecting otherwise valid external skills.

Tools follow the same philosophy. DeerFlow ships with a base toolset for web search, web fetch, file operations, and bash execution, and supports custom tools via MCP servers and Python functions. Swap them out or add your own as you see fit.

Follow-up suggestions generated by the gateway now normalize plain-string model output and block/list-style rich content before parsing the JSON-array response, preventing suggestions from being silently dropped because of provider-specific content wrappers.

```
# Paths inside the sandbox container
/mnt/skills/public
├── research/SKILL.md
├── report-generation/SKILL.md
├── slide-creation/SKILL.md
├── web-page/SKILL.md
└── image-generation/SKILL.md

/mnt/skills/custom
└── your-custom-skill/SKILL.md      ← your skill
```

#### Claude Code Integration

The `claude-to-deerflow` skill lets you interact with a running DeerFlow instance directly from [Claude Code](https://docs.anthropic.com/en/docs/claude-code). Send research tasks, check status, manage threads, all without leaving the terminal.

**Install the skill**:

```bash
npx skills add https://github.com/bytedance/deer-flow --skill claude-to-deerflow
```

Then make sure DeerFlow is running (default address `http://localhost:2026`) and use the `/claude-to-deerflow` command in Claude Code.

**What you can do**:
- Send messages to DeerFlow and get streaming responses
- Choose an execution mode: flash (fast), standard, pro (planning), or ultra (subagents)
- Check DeerFlow's health; list models/skills/agents
- Manage threads and conversation history
- Upload files for analysis

**Environment variables** (optional, for custom endpoints):

```bash
DEERFLOW_URL=http://localhost:2026            # unified proxy base URL
DEERFLOW_GATEWAY_URL=http://localhost:2026    # Gateway API
DEERFLOW_LANGGRAPH_URL=http://localhost:2026/api/langgraph  # LangGraph API
```

For the full API reference, see [`skills/public/claude-to-deerflow/SKILL.md`](skills/public/claude-to-deerflow/SKILL.md).

### Subagents

Complex tasks are rarely finished in a single pass. DeerFlow breaks them down.

The lead agent can dynamically spawn subagents at any time, each with its own scoped context, tools, and termination conditions. Subagents run in parallel where possible and return structured results, while the lead agent synthesizes everything into one coherent output.

This is how DeerFlow 
handles tasks that take minutes to hours: a single research task may fan out into a dozen subagents, each exploring a different angle, then converge into a report, a website, or a slide deck with generated visuals. One orchestrator, many hands.

### Sandbox & File System

DeerFlow doesn't just *talk about* getting things done; it has its own computer.

Every task gets an isolated execution environment with a full file-system view: skills, workspace, uploaded files, outputs. The agent can read, write, and edit files. It can view images and, under a secure configuration, execute shell commands.

With `AioSandboxProvider`, shell execution runs inside an isolated container. With `LocalSandboxProvider`, file tools still map to a thread-scoped directory on the host, but host `bash` is disabled by default because it is not a safe isolation boundary. Re-enable host `bash` only in fully trusted local workflows.

This is the difference between a chatbot with tool access and an agent with a real execution environment.

```
# Paths inside the sandbox container
/mnt/user-data/
├── uploads/          ← your files
├── workspace/        ← the agent's working directory
└── outputs/          ← final deliverables
```

### Context Engineering

**Isolated subagent contexts**: every subagent runs in its own context, so it cannot see the lead agent's context or any sibling subagent's. This matters: it keeps each subagent focused on the task at hand, free from interference from the lead agent or other subagents.

**Summarization**: within a session, DeerFlow actively manages context: summarizing completed subtasks, offloading intermediate results to the file system, and compacting parts that are no longer immediately relevant. This keeps it effective across long multi-step tasks without exceeding the context window.

**Strict tool-call recovery**: when a provider or middleware interrupts a tool-call loop, DeerFlow now strips provider-level raw tool-call metadata from the force-stopped assistant message and injects placeholder results for dangling tool calls before the next model invocation. This prevents OpenAI-compatible inference models that strictly validate `tool_call_id` sequences from failing on malformed history.

### Long-Term Memory

Most agents forget everything when the conversation ends. DeerFlow remembers.

Across sessions, DeerFlow builds a persistent memory store of your profile, preferences, and accumulated knowledge. The more you use it, the better it knows you: your writing style, your tech stack, the workflows you repeat. Memory is stored locally and always under your control.

Memory updates now skip duplicate fact entries when applied, so repeated preferences and context no longer accumulate endlessly across sessions.

## Recommended Models

DeerFlow is model-agnostic: it works with any LLM that implements an OpenAI-compatible API. That said, it performs best with models that offer:

- **Long context windows** (100K+ tokens) for deep research and multi-step tasks
- **Reasoning ability** for adaptive planning and complex decomposition
- **Multimodal input** for image understanding and video parsing
- **Strong tool use** for reliable function calling and structured output

## Embedded Python Client

DeerFlow can be used as an embedded Python library without running the full HTTP service. `DeerFlowClient` gives direct in-process access to all agent and Gateway features and returns the same response schemas as the HTTP Gateway API. The HTTP Gateway also exposes a `DELETE /api/threads/{thread_id}` endpoint for removing DeerFlow-managed local thread data after the LangGraph thread itself has been deleted:

```python
from deerflow.client import DeerFlowClient

client = DeerFlowClient()

# Chat
response = client.chat("Help me analyze this paper", thread_id="my-thread")

# Streaming (LangGraph SSE protocol: values, messages-tuple, end)
for event in client.stream("hello"):
    if event.type == "messages-tuple" and event.data.get("type") == "ai":
        print(event.data["content"])

# Configuration & management: returns Gateway-aligned dicts
models = client.list_models()        # {"models": [...]}
skills = client.list_skills()        # {"skills": [...]}
client.update_skill("web-search", enabled=True)
client.upload_files("thread-1", ["./report.pdf"])  # {"success": True, "files": [...]}
```

All dict-returning methods are validated in CI against the Gateway Pydantic response models (`TestGatewayConformance`) to keep the embedded client in sync with the HTTP API schemas. For the full API documentation, see `backend/packages/harness/deerflow/client.py`.

## Documentation

- [Contributing Guide](CONTRIBUTING.md) - development setup and workflow
- [Configuration Guide](backend/docs/CONFIGURATION.md) - setup and configuration instructions
- [Architecture Overview](backend/CLAUDE.md) - technical architecture details
- [Backend Architecture](backend/README.md) - backend architecture and API reference

## ⚠️ Security Notice

### Improper Deployment Can Introduce Security Risks

DeerFlow has critical high-privilege capabilities, including **system command execution, resource manipulation, and business-logic invocation**, and is designed by default for **deployment in a trusted local environment (accessible only via the 127.0.0.1 loopback interface)**. If you deploy the agent in an untrusted environment (such as a LAN, a public cloud server, or any environment reachable from multiple endpoints) without strict security measures, you may face the following risks:

- **Unauthorized invocation**: the agent's capabilities may be discovered by unauthorized third parties or malicious network scanners, triggering large volumes of unauthorized requests that execute system commands, read or write files, and perform other high-risk operations, with potentially severe security consequences.
- **Compliance and legal risk**: if the agent is invoked illegitimately to carry out network attacks, data theft, or other unlawful activity, it may expose you to legal liability and compliance risk.

### Security Recommendations

**Note: we strongly recommend deploying DeerFlow in a trusted local network environment.** If cross-device or cross-network deployment is required, you must implement strict security measures, for example:

- **IP allowlisting**: use `iptables`, or deploy a hardware firewall/switch with access control lists (ACLs), to **configure IP allowlist rules** and deny access from all other IP addresses.
- **Authentication gateway**: configure a reverse proxy (such as nginx) and **enable strong authentication** to block any unauthenticated access.
- **Network isolation**: where possible, place the agent and trusted devices in **the same dedicated VLAN**, isolated from other network devices.
- **Stay updated**: keep up with DeerFlow's security feature updates.

## Contributing

Contributions are welcome! See [CONTRIBUTING.md](CONTRIBUTING.md) for development setup, workflow, and guidelines.

Regression coverage includes Docker sandbox mode detection and the provisioner kubeconfig-path handling test cases in `backend/tests/`. The gateway now forces active web content types (`text/html`, `application/xhtml+xml`, `image/svg+xml`) to download as attachments rather than render inline when serving build artifacts, reducing the XSS risk from generated artifacts.

## License

This project is open source under the [MIT License](./LICENSE).

## Acknowledgments

DeerFlow is built on the excellent work of the open-source community. We are deeply grateful to every project and developer whose contributions made DeerFlow possible; we truly stand on the shoulders of giants.

In particular, we thank the following projects for their invaluable support:

- **[LangChain](https://github.com/langchain-ai/langchain)**: their outstanding framework gives us powerful LLM 
interaction and chaining capabilities, enabling seamless integration and efficient functionality.
- **[LangGraph](https://github.com/langchain-ai/langgraph)**: their innovative approach to multi-agent orchestration was essential to realizing DeerFlow's sophisticated workflows.

These projects demonstrate the transformative power of open-source collaboration, and we are proud to build on their foundations.

### Core Contributors

Heartfelt thanks to the core authors of `DeerFlow`, whose vision, passion, and dedication brought this project to life:

- **[Daniel Walnut](https://github.com/hetaoBackend/)**
- **[Henry Li](https://github.com/magiccube/)**

Your unwavering commitment and expertise have been the driving force behind DeerFlow's success. We are honored to walk this journey with you.

## Star History

[![Star History Chart](https://oss.gittoolsai.com/images/bytedance_deer-flow_readme_1bf7aad60d33.png)](https://star-history.com/#bytedance/deer-flow&Date)","# DeerFlow 2.0 Quick Start Guide

DeerFlow (**D**eep **E**xploration and **E**fficient **R**esearch **Flow**) is an open-source **Super Agent Harness** that orchestrates subagents, memory modules, and sandboxed environments to execute complex tasks through extensible skills.

## 1. Prerequisites

### System Requirements
*   **OS**: Linux (Ubuntu/Debian) or macOS recommended. Windows users should use WSL2.
*   **Python**: 3.12 or later.
*   **Node.js**: 22 or later.
*   **Docker**: Docker Desktop or Docker Engine recommended (for sandbox isolation).

### Hardware Recommendations
*   **Local development/evaluation**: at least 4 vCPU, 8 GB RAM, 20 GB free SSD.
*   **Long-running/multi-agent workloads**: 8 vCPU, 16 GB RAM or more recommended.

### Dependencies
Make sure `git`, `make`, and `docker` (optional but recommended) are installed.

## 2. 
Installation Steps

### Step 1: Clone the Project
```bash
git clone https://github.com/bytedance/deer-flow.git
cd deer-flow
```

### Step 2: Run the Setup Wizard
Run the following command in the project root to launch the interactive setup wizard. It walks you through choosing an LLM provider, configuring API keys, and generating the `config.yaml` and `.env` files.

```bash
make setup
```

> **Tips**:
> *   The process takes about 2 minutes.
> *   If you are in mainland China, consider setting mirror environment variables before running to speed up dependency downloads:
>     ```bash
>     export UV_INDEX_URL=https://pypi.tuna.tsinghua.edu.cn/simple
>     export NPM_REGISTRY=https://registry.npmmirror.com
>     ```
> *   To check configuration status manually, run `make doctor` at any time.

### Step 3: Start the Services (Docker Mode Recommended)

**Initialize the sandbox image** (on first run or after image updates):
```bash
make docker-init
```

**Start the services**:
```bash
make docker-start
```
*Note: `make docker-start` automatically detects from `config.yaml` whether sandbox mode is enabled. The backend process hot-reloads configuration changes, so no manual restart is needed during development.*

> **Note for Linux users**: if you hit a `permission denied` error connecting to the Docker socket, add your user to the docker group and log in again:
> `sudo usermod -aG docker $USER`

## 3. Basic Usage

### Configure a Model
Before running, make sure the LLM API keys are configured in your `.env` file. DeerFlow supports many models; **Doubao-Seed-2.0-Code**, **DeepSeek v3.2**, or **Kimi 2.5** are recommended for the best coding and reasoning performance.

You can also specify a model manually in `config.yaml` (example):
```yaml
models:
  - name: deepseek-v3
    display_name: DeepSeek V3
    use: langchain_openai:ChatOpenAI
    model: deepseek-chat
    api_key: $DEEPSEEK_API_KEY
    base_url: https://api.deepseek.com/v1
```

### Run the Agent
Once the services are up, DeerFlow runs according to your configuration. You can interact with it in the following ways:

1.  **Command-line interaction**: type natural-language instructions directly in the terminal; DeerFlow will plan the task, invoke subagents, search for information (if InfoQuest or Tavily is configured), and execute code.
2.  **Integrated InfoQuest**: DeerFlow integrates [InfoQuest](https://docs.byteplus.com/en/docs/InfoQuest/What_is_Info_Quest), an intelligent search tool built by ByteDance's Volcano Engine, which can be enabled in the configuration to strengthen web retrieval.

### Verify the Installation
Run the following command to verify the environment:
```bash
make doctor
```
If everything is healthy, the system prints a summary of the current configuration and its health status.

---
*For more advanced features (MCP Server, LangSmith tracing, custom sandboxes, etc.), see the official documentation or visit the [DeerFlow website](https://deerflow.tech) for a live demo.*","The head of engineering at a startup needs to complete deep research on a competitor's new technology within 48 hours and build a runnable prototype system based on the findings.

### Without deer-flow
- **Fragmented information gathering**: engineers switch manually between multiple search engines, technical forums, and documentation sites, spending hours collating scattered information and easily missing key details.
- **Frequent context breaks**: moving from research to coding requires manually turning notes into code logic; the interruptions cause implementation drift and repeated rework.
- **Tedious environment setup**: validating an idea requires standing up an isolated sandbox; installing dependencies and debugging configuration consumes time that should go to core logic.
- **Long tasks are hard to sustain**: on complex tasks spanning hours, ordinary AI assistants lose direction or forget early instructions, and cannot independently close the loop of research, coding, and verification.

### With deer-flow
- **Automated deep exploration**: deer-flow calls tools such as InfoQuest to autonomously run deep web-wide searches, automatically filtering noise and producing structured, high-value technical intelligence.
- **Seamless research-to-development handoff**: as a super agent harness, it uses its memory module to maintain long-range context and turns research conclusions directly into a coding plan, with no manual translation step.
- **Built-in secure sandbox**: it automatically spins up an isolated sandbox to write and run code, keeping experiments safe with zero-configuration startup.
- **Multi-agent collaboration**: by dispatching subagents to handle material analysis, architecture design, and concrete coding in parallel, it compresses days of work into a few hours, completed automatically.

By fusing scattered research, memory, sandboxing, and multi-agent collaboration into one system, deer-flow turns developers from busy executors into efficient task commanders.","https://oss.gittoolsai.com/images/bytedance_deer-flow_dac4d6e0.png","bytedance","Bytedance 
Inc.","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fbytedance_7fee2b15.png","",null,"ByteDanceOSS","https:\u002F\u002Fopensource.bytedance.com","https:\u002F\u002Fgithub.com\u002Fbytedance",[82,86,90,94,98,102,106,110,114,117],{"name":83,"color":84,"percentage":85},"Python","#3572A5",69,{"name":87,"color":88,"percentage":89},"TypeScript","#3178c6",19.4,{"name":91,"color":92,"percentage":93},"HTML","#e34c26",4.6,{"name":95,"color":96,"percentage":97},"Shell","#89e051",2.2,{"name":99,"color":100,"percentage":101},"CSS","#663399",2.1,{"name":103,"color":104,"percentage":105},"JavaScript","#f1e05a",1.7,{"name":107,"color":108,"percentage":109},"MDX","#fcb32c",0.6,{"name":111,"color":112,"percentage":113},"Makefile","#427819",0.2,{"name":115,"color":116,"percentage":113},"Dockerfile","#384d54",{"name":118,"color":119,"percentage":120},"Batchfile","#C1F12E",0,62604,8102,"2026-04-19T07:50:56","MIT","Linux, macOS, Windows","未说明（主要依赖外部 LLM API，若本地部署 vLLM 则需自行配置相应 GPU）","最低 8GB，推荐 16GB（长期运行服务器推荐 32GB）",{"notes":129,"python":130,"dependencies":131},"1. 推荐使用 Docker 部署，Linux 为生产环境首选，macOS\u002FWindows 适合开发评估。2. 核心功能依赖外部大模型 API（如 Doubao-Seed-2.0, DeepSeek v3.2, Kimi 2.5, GPT-4o 等），非必须本地显卡。3. 若选择本地部署 vLLM 运行开源模型，需单独规划 GPU 资源。4. 网络受限环境下可配置清华\u002F阿里镜像加速 Python 和 NPM 依赖下载。5. 包含沙箱模式，需确保 Docker 权限配置正确。","3.12+",[132,133,134,135,136],"Node.js 22+","Docker (推荐)","uv (Python 包管理)","langchain_openai","vllm (可选，用于本地模型)",[35,138,14,15,13],"其他",[140,141,142,143,144,145,146,147,148,149,150,151,152,153,154,155,156,157],"agent","agentic","agentic-framework","agentic-workflow","ai","ai-agents","deep-research","langchain","langgraph","llm","multi-agent","nodejs","podcast","python","langmanus","typescript","harness","superagent","2026-03-27T02:49:30.150509","2026-04-20T04:06:11.453770",[161,166,171,176,181,185],{"id":162,"question_zh":163,"answer_zh":164,"source_url":165},43356,"配置 Ollama 模型时出现 404 Not Found 或连接失败，应该如何正确配置 conf.yaml？","配置 Ollama 时需注意以下三点：\n1. 
The `base_url` must end with `/v1` to be compatible with the ChatGPT-style API format (e.g. `http://localhost:11434/v1/`).
2. The `api_key` field cannot be omitted; even for local runs, fill in any non-empty string (such as `fake` or `cc`).
3. Do not prefix the model name with `ollama/`; use the model tag directly (such as `qwen3:8b` or `mistral-small3.1:24b`).
4. Avoid reasoning-style models (such as DeepSeek R1), which may cause errors.

A correct configuration example:
```yaml
BASIC_MODEL:
  model: "qwen3:8b"
  base_url: "http://localhost:11434/v1/"
  api_key: "fake"
```","https://github.com/bytedance/deer-flow/issues/111",{"id":167,"question_zh":168,"answer_zh":169,"source_url":170},43357,"How do I replace the hard-coded `/mnt/user-data` path to mount a local directory on Windows or WSL?","You can map host paths to container paths by setting `mounts` under the `sandbox` section of the config file (config.yaml). Note that `host_path` must be an absolute path.

Example configuration:
```yaml
sandbox:
  use: deerflow.sandbox.local:LocalSandboxProvider
  allow_host_bash: false
  mounts:
   - host_path: /home/william/workspace    # absolute path on the host (/mnt/d/... under WSL, or a Linux path)
     container_path: /mnt/my-project       # virtual path accessible inside the container
     read_only: false
```
After mapping, `/mnt/my-project/test.md` inside the container corresponds to `/home/william/workspace/test.md` on the host.
Note: if you still get a permission error under WSL, check the read/write permissions on the host directory.","https://github.com/bytedance/deer-flow/issues/2185",{"id":172,"question_zh":173,"answer_zh":174,"source_url":175},43358,"How do I fix the runtime error `AttributeError: 'AIMessage' object has no attribute 'tool_call_chunks'`?","This error is usually caused by an incompatible `langgraph` version. The fix is to downgrade the `langgraph` library to version 0.3.5.

Run:
```bash
pip install langgraph==0.3.5
```
The issue has been fixed in later releases (see PR #136); if you cannot downgrade, try updating the project to the latest code.","https://github.com/bytedance/deer-flow/issues/132",{"id":177,"question_zh":178,"answer_zh":179,"source_url":180},43359,"What server configuration is recommended for production? Does the open-source version support concurrent multi-user access?","The maintainers currently advise against deploying the code straight to production; evaluate it locally first.
On concurrency: the current open-source version, when launched with LangGraph, does not provide a free multi-threaded mode, so it does not natively support concurrent multi-user requests.
If you need enterprise features (such as multi-threading or PostgreSQL data storage), you generally have to remove the enterprise private package and install an open-source replacement, but this may involve license 
验证问题，需谨慎操作。","https:\u002F\u002Fgithub.com\u002Fbytedance\u002Fdeer-flow\u002Fissues\u002F1175",{"id":182,"question_zh":183,"answer_zh":184,"source_url":180},43360,"如何去除 LangGraph 企业版限制并在本地运行开源版本？","可以通过修改 Dockerfile 来卸载企业版私有包并安装开源版本。具体步骤如下：\n\n1. 卸载企业版包：`RUN pip uninstall -y langgraph-api`\n2. 安装开源版本及内存存储支持：`RUN pip install langgraph-api \"langgraph-cli[inmem]\"`\n\nDockerfile 片段示例：\n```dockerfile\nFROM langchain\u002Flanggraph-api:3.12-wolfi\n# 删除企业版私有包\nRUN pip uninstall -y langgraph-api\n# 安装开源版本，license 验证将被跳过\nRUN pip install langgraph-api \"langgraph-cli[inmem]\"\n```\n注意：如果启动时仍显示认证失败，请确认 `pip uninstall` 命令是否成功执行。",{"id":186,"question_zh":187,"answer_zh":188,"source_url":189},43361,"是否有可用的公共部署实例或公网部署教程？","社区成员已提供基于阿里云 + 容器函数的公网部署方案和免费试用地址。\n\n1. 公共试用地址：https:\u002F\u002Fdeerflow.top\n2. 部署教程及修改参考代码：https:\u002F\u002Fgithub.com\u002Fchmod777john\u002Fdeer-flow-deploy\n\n注意：公网部署需要对源码进行一定修改（如 CORS 配置、域名绑定等），上述仓库提供了具体的部署方法参考，但代码可能较为粗糙，仅供学习借鉴。","https:\u002F\u002Fgithub.com\u002Fbytedance\u002Fdeer-flow\u002Fissues\u002F72",[]]