[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-langchain-ai--agent-protocol":3,"tool-langchain-ai--agent-protocol":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":68,"owner_location":68,"owner_email":79,"owner_twitter":76,"owner_website":80,"owner_url":81,"languages":82,"stars":99,"forks":100,"last_commit_at":101,"license":102,"difficulty_score":10,"env_os":103,"env_gpu":103,"env_ram":103,"env_deps":104,"category_tags":109,"github_topics":68,"view_count":10,"oss_zip_url":68,"oss_zip_packed_at":68,"status":16,"created_at":110,"updated_at":111,"faqs":112,"releases":143},750,"langchain-ai\u002Fagent-protocol","agent-protocol",null,"agent-protocol 致力于为大语言模型（LLM）代理的生产级服务建立一套通用的标准化接口。简单来说，它就像是为智能体应用制定的一套“交通规则”，定义了如何执行任务、管理多轮对话线程以及处理长期记忆。\n\n在当前的开发环境中，不同框架间的交互标准往往不一致，导致集成复杂且难以维护。agent-protocol 通过明确“运行（Runs）”、“线程（Threads）”和“存储（Store）”三大核心模块，有效解决了多轮交互中的状态同步、历史记录追踪及并发控制难题。无论是短暂的一次性请求，还是复杂的持续对话场景，都能找到对应的 API 支持。\n\n这套协议特别适合正在构建或部署 LLM 应用的开发者、系统架构师以及相关领域的研究人员。它提供了详尽的 OpenAPI 文档，并拥有 Python 和 JavaScript 的参考实现，帮助团队快速落地标准化的 Agent 服务，无需重复造轮子。","# Agent Protocol\n\nAgent Protocol is our attempt at codifying the framework-agnostic APIs that are needed to serve LLM agents in production. This document explains the purpose of the protocol and makes the case for each of the endpoints in the spec. 
We finish by listing some roadmap items for the future.\n\nSee the full OpenAPI docs [here](https:\u002F\u002Flangchain-ai.github.io\u002Fagent-protocol\u002Fapi.html) and the JSON spec [here](https:\u002F\u002Flangchain-ai.github.io\u002Fagent-protocol\u002Fopenapi.json).\n\n[LangGraph Platform](https:\u002F\u002Fwww.langchain.com\u002Fpricing-langgraph-platform) implements a superset of this protocol, but we very much welcome other implementations from the community.\n\n## Resources\n\n- [Agent Protocol OpenAPI Docs](https:\u002F\u002Flangchain-ai.github.io\u002Fagent-protocol\u002Fapi.html)\n- [Agent Protocol JSON Spec](https:\u002F\u002Flangchain-ai.github.io\u002Fagent-protocol\u002Fopenapi.json)\n- [Agent Protocol Python Server Stubs](.\u002Fserver\u002F) - a Python server, using Pydantic V2 and FastAPI, auto-generated from the OpenAPI spec\n- [LangGraph.js API](https:\u002F\u002Fgithub.com\u002Flangchain-ai\u002Flanggraphjs-api\u002Ftree\u002Fmain\u002Flibs\u002Flanggraph-api) - an open-source implementation of this protocol, for LangGraph.js agents, using in-memory storage\n- [LangGraph Platform](https:\u002F\u002Fwww.langchain.com\u002Fpricing-langgraph-platform) - a commercial platform that implements a superset of this protocol for deploying any LLM agent in production\n\n## Why Agent Protocol\n\nWhat is the right API to serve an LLM application in production? We believe it’s centered around 3 important concepts:\n\n- Runs: APIs for executing an agent\n- Threads: APIs to organize multi-turn executions of agents\n- Store: APIs to work with long-term memory\n\nLet’s dive deeper into each one, starting with the requirements, and then presenting the Protocol endpoints that meet these requirements.\n\n## Stateless Runs: one-shot interactions\n\nIn some cases, you may want to create a thread and run in one request, and have the thread be deleted after the run concludes. 
This is useful for ephemeral or stateless interactions, where you don’t need to keep track of the thread’s state.\n\n- [`POST \u002Fruns\u002Fwait`](https:\u002F\u002Flangchain-ai.github.io\u002Fagent-protocol\u002Fapi.html#tag\u002Fruns\u002FPOST\u002Fruns\u002Fwait) - Create an ephemeral run, and wait for its final output, which is returned in the response.\n- [`POST \u002Fruns\u002Fstream`](https:\u002F\u002Flangchain-ai.github.io\u002Fagent-protocol\u002Fapi.html#tag\u002Fruns\u002FPOST\u002Fruns\u002Fstream) - Create an ephemeral run, and stream output as produced.\n\n## Threads: multi-turn interactions\n\nWhat APIs do you need to enable multi-turn interactions?\n\n- Persistent state\n  - Get and update state\n  - Track history of past states of a thread, modelled as an append-only log of states\n  - Optimize storage by storing only diffs between states\n- Concurrency controls\n  - Ensure that only one run per thread is active at a time\n  - Customizable handling of concurrent runs (interrupt, enqueue, or rollback)\n- CRUD endpoints for threads\n  - List threads by user, or other metadata\n  - List threads by status (idle, interrupted, errored, finished)\n  - Copy or delete threads\n\nEndpoints:\n\n- [`POST \u002Fthreads`](https:\u002F\u002Flangchain-ai.github.io\u002Fagent-protocol\u002Fapi.html#tag\u002Fthreads\u002FPOST\u002Fthreads) - Create a thread.\n- [`POST \u002Fthreads\u002Fsearch`](https:\u002F\u002Flangchain-ai.github.io\u002Fagent-protocol\u002Fapi.html#tag\u002Fthreads\u002FPOST\u002Fthreads\u002Fsearch) - Search threads.\n- [`GET \u002Fthreads\u002F{thread_id}`](https:\u002F\u002Flangchain-ai.github.io\u002Fagent-protocol\u002Fapi.html#tag\u002Fthreads\u002FGET\u002Fthreads\u002F%7Bthread_id%7D) - Get a thread.\n- [`GET \u002Fthreads\u002F{thread_id}\u002Fhistory`](https:\u002F\u002Flangchain-ai.github.io\u002Fagent-protocol\u002Fapi.html#tag\u002Fthreads\u002FGET\u002Fthreads\u002F%7Bthread_id%7D\u002Fhistory) - Browse past revisions 
of a thread’s state. Revisions are created by runs, or through the PATCH endpoint below.\n- [`POST \u002Fthreads\u002F{thread_id}\u002Fcopy`](https:\u002F\u002Flangchain-ai.github.io\u002Fagent-protocol\u002Fapi.html#tag\u002Fthreads\u002FPOST\u002Fthreads\u002F%7Bthread_id%7D\u002Fcopy) - Create an independent copy of a thread.\n- [`DELETE \u002Fthreads\u002F{thread_id}`](https:\u002F\u002Flangchain-ai.github.io\u002Fagent-protocol\u002Fapi.html#tag\u002Fthreads\u002FDELETE\u002Fthreads\u002F%7Bthread_id%7D) - Delete a thread.\n- [`PATCH \u002Fthreads\u002F{thread_id}`](https:\u002F\u002Flangchain-ai.github.io\u002Fagent-protocol\u002Fapi.html#tag\u002Fthreads\u002FPATCH\u002Fthreads\u002F%7Bthread_id%7D) - Update a thread's values or metadata. Updating values creates a new revision in the thread's history.\n\n## Agents: Introspection\n\nBefore you make use of an agent, it's sometimes useful to know what it can do, what inputs it accepts, what it returns, etc. This is where the introspection endpoints come in.\n\nEndpoints:\n\n- [`POST \u002Fagents\u002Fsearch`](https:\u002F\u002Flangchain-ai.github.io\u002Fagent-protocol\u002Fapi.html#tag\u002Fagents\u002FPOST\u002Fagents\u002Fsearch) - List all agents, optionally filtered by metadata or name.\n- [`GET \u002Fagents\u002F{agent_id}`](https:\u002F\u002Flangchain-ai.github.io\u002Fagent-protocol\u002Fapi.html#tag\u002Fagents\u002FGET\u002Fagents\u002F%7Bagent_id%7D) - Get basic information about an agent, including its name, description, metadata.\n- [`GET \u002Fagents\u002F{agent_id}\u002Fschemas`](https:\u002F\u002Flangchain-ai.github.io\u002Fagent-protocol\u002Fapi.html#tag\u002Fagents\u002FGET\u002Fagents\u002F%7Bagent_id%7D\u002Fschemas) - Get the input, output, state and config schemas for an agent. 
All schemas are represented in JSON Schema format.\n\n## Background Runs: Atomic agent executions\n\nWhat do we need out of an API to execute an agent?\n\n- Support the two paradigms for launching a run\n  - Fire and forget, ie. launch a run in the background, but don’t wait for it to finish\n  - Waiting on a reply (blocking or polling), ie. launch a run and wait\u002Fstream its output\n- Support CRUD for agent executions\n  - List and get runs\n  - Cancel and delete runs\n- Flexible ways to consume output\n  - Get the final state\n  - Multiple types of streaming output, eg. token-by-token, intermediate steps, etc.\n  - Able to reconnect to output stream if disconnected\n- Handling edge cases\n  - Failures should be handled gracefully, and retried if desired\n  - Bursty traffic should be queued up\n\nBase Endpoints:\n\n- [`GET \u002Fthreads\u002F{thread_id}\u002Fruns`](https:\u002F\u002Flangchain-ai.github.io\u002Fagent-protocol\u002Fapi.html#tag\u002Fbackground-runs\u002FGET\u002Fthreads\u002F{thread_id}\u002Fruns) - List runs for a thread.\n- [`POST \u002Fruns`](https:\u002F\u002Flangchain-ai.github.io\u002Fagent-protocol\u002Fapi.html#tag\u002Fbackground-runs\u002FPOST\u002Fruns) - Create a background run.\n- [`GET \u002Fruns\u002F{run_id}`](https:\u002F\u002Flangchain-ai.github.io\u002Fagent-protocol\u002Fapi.html#tag\u002Fbackground-runs\u002FGET\u002Fruns\u002F{run_id}) - Get a run and its status.\n- [`POST \u002Fruns\u002F{run_id}\u002Fcancel`](https:\u002F\u002Flangchain-ai.github.io\u002Fagent-protocol\u002Fapi.html#tag\u002Fbackground-runs\u002FPOST\u002Fruns\u002F{run_id}\u002Fcancel) - Cancel a run. If the run hasn’t started, cancel it immediately, if it’s currently running then cancel it as soon as possible.\n- [`DELETE \u002Fruns\u002F{run_id}`](https:\u002F\u002Flangchain-ai.github.io\u002Fagent-protocol\u002Fapi.html#tag\u002Fbackground-runs\u002FDELETE\u002Fruns\u002F{run_id}) - Delete a finished run. 
A pending run needs to be cancelled first, see previous endpoint.\n- [`GET \u002Fruns\u002F{run_id}\u002Fwait`](https:\u002F\u002Flangchain-ai.github.io\u002Fagent-protocol\u002Fapi.html#tag\u002Fbackground-runs\u002FGET\u002Fruns\u002F{run_id}\u002Fwait) - Wait for a run to finish, return the final output. If the run already finished, returns its final output immediately.\n- [`GET \u002Fruns\u002F{run_id}\u002Fstream`](https:\u002F\u002Flangchain-ai.github.io\u002Fagent-protocol\u002Fapi.html#tag\u002Fbackground-runs\u002FGET\u002Fruns\u002F{run_id}\u002Fstream) - Join the output stream of an existing run. Only output produced after this endpoint is called will be streamed.\n\n## Store: Long-term memory\n\nWhat do you need out of a memory API for agents?\n\n- Customizable memory scopes\n  - Storing memory against the user, thread, assistant, company, etc\n  - Accessing memory from different scopes in the same run\n- Flexible storage\n  - Support simple text memories, as well as structured data\n  - CRUD operations for memories (create, read, update, delete)\n- Search and retrieval\n  - Get a single memory by namespace and key\n  - List memories filtered by namespace, contents, sorted by time, etc\n\nEndpoints:\n\n- [`PUT \u002Fstore\u002Fitems`](https:\u002F\u002Flangchain-ai.github.io\u002Fagent-protocol\u002Fapi.html#tag\u002Fstore\u002FPUT\u002Fstore\u002Fitems) - Create or update a memory item, at a given namespace and key.\n- [`DELETE \u002Fstore\u002Fitems`](https:\u002F\u002Flangchain-ai.github.io\u002Fagent-protocol\u002Fapi.html#tag\u002Fstore\u002FDELETE\u002Fstore\u002Fitems) - Delete a memory item, at a given namespace and key.\n- [`GET \u002Fstore\u002Fitems`](https:\u002F\u002Flangchain-ai.github.io\u002Fagent-protocol\u002Fapi.html#tag\u002Fstore\u002FGET\u002Fstore\u002Fitems) - Get a memory item, at a given namespace and key.\n- [`POST 
\u002Fstore\u002Fitems\u002Fsearch`](https:\u002F\u002Flangchain-ai.github.io\u002Fagent-protocol\u002Fapi.html#tag\u002Fstore\u002FPOST\u002Fstore\u002Fitems\u002Fsearch) - Search memory items.\n- [`POST \u002Fstore\u002Fnamespaces`](https:\u002F\u002Flangchain-ai.github.io\u002Fagent-protocol\u002Fapi.html#tag\u002Fstore\u002FPOST\u002Fstore\u002Fnamespaces) - List namespaces.\n\n## Messages\n\nMessages have emerged as a core primitive in dealing with LLMs, and as such we have first-class support for messages in Agent Protocol. This is in addition to completely customizable input\u002Foutput schemas for agents. We define a Message spec, which is a subset of the message formats supported by major LLM providers, such as OpenAI and Anthropic. In all endpoints that expose thread values, there is also a separate `messages` field, which agents can optionally implement.\n\n## Agent Protocol in Action\n\nBelow are a few illustrative “user journeys” in [Hurl](https:\u002F\u002Fhurl.dev) format, each showing a common sequence of API calls against your Agent Protocol service (listening at localhost:8000, no auth required).\n\nThey’re organized so that you can paste each journey into its own .hurl file (or combine them), then run them with the “hurl” command. This should give you a good sense of how the protocol can be used in practice.\n\n### Journey 1: Create Thread → Get Thread → Create Run → Wait for Output\n\nThis journey demonstrates the typical sequence of creating a thread, launching a run, and waiting for the final output. You can then repeat the two last steps to launch more runs in the same thread. This is the most common pattern for multi-turn interactions, such as a chatbot conversation.\n\n```hurl\n# 1. 
Create a brand new thread\nPOST http:\u002F\u002Flocalhost:8000\u002Fthreads\nContent-Type: application\u002Fjson\n\n{\n  \"thread_id\": \"229c1834-bc04-4d90-8fd6-77f6b9ef1462\",\n  \"metadata\": {\n    \"purpose\": \"support-chat\"\n  }\n}\n\nHTTP\u002F1.1 200\n[Asserts]\njsonpath \"$.thread_id\" == \"229c1834-bc04-4d90-8fd6-77f6b9ef1462\"\n\n\n# 2. Retrieve the thread we just created\nGET http:\u002F\u002Flocalhost:8000\u002Fthreads\u002F229c1834-bc04-4d90-8fd6-77f6b9ef1462\n\nHTTP\u002F1.1 200\n[Asserts]\njsonpath \"$.status\" == \"idle\"\n\n\n# 3. Create a run in the existing thread (background run).\n# Capture the run_id for the next step.\nPOST http:\u002F\u002Flocalhost:8000\u002Fthreads\u002F229c1834-bc04-4d90-8fd6-77f6b9ef1462\u002Fruns\nContent-Type: application\u002Fjson\n\n{\n  \"input\": {\n    \"message\": \"Hi there, what's the weather?\"\n  },\n  \"metadata\": {\n    \"requestType\": \"weatherQuery\"\n  }\n}\n\nHTTP\u002F1.1 200\n[Captures]\nrun_id: jsonpath \"$.run_id\"\n[Asserts]\njsonpath \"$.status\" == \"pending\"\n\n\n# 4. Wait for final run output\nGET http:\u002F\u002Flocalhost:8000\u002Fthreads\u002F229c1834-bc04-4d90-8fd6-77f6b9ef1462\u002Fruns\u002F{{run_id}}\u002Fwait\n\nHTTP\u002F1.1 200\n[Asserts]\n# For example, check that the run status is success or error,\n# depending on your actual system's response:\njsonpath \"$.status\" == \"success\"\n```\n\nYou can replace the last step with `GET \u002Fthreads\u002F{thread_id}\u002Fruns\u002F{run_id}\u002Fstream` to stream the output as it’s produced, or with `GET \u002Fthreads\u002F{thread_id}` to poll status\u002Foutput without waiting.\n\n### Journey 2: Ephemeral “Stateless” Run (Create + Wait)\n\nThis journey demonstrates a one-shot run, where you create a thread and run in one request, and wait for the final output. This is useful for stateless interactions, where you want to start fresh each time. 
Good use cases include extraction or research agents.\n\n```hurl\n# Launch a one-shot run with a brand new ephemeral thread,\n# and wait for the final output right away.\nPOST http:\u002F\u002Flocalhost:8000\u002Fruns\u002Fwait\nContent-Type: application\u002Fjson\n\n{\n  \"input\": {\n    \"prompt\": \"What's the fastest route to the airport?\"\n  },\n  \"metadata\": {\n    \"useCase\": \"travelPlan\"\n  },\n  \"config\": {\n    \"tags\": [\"ephemeral\", \"demo\"]\n  }\n}\n\nHTTP\u002F1.1 200\n```\n\n### Journey 3: Using the Store (Add, Retrieve, and Delete an Item)\n\nThis journey demonstrates how to use the Store API to add, retrieve, and delete an item. This is useful for storing long-term memory, such as user profiles, preferences, or other structured data, which can be accessed both inside and outside the agent.\n\n```hurl\n# 1. Put (store or update) an item in the store\nPUT http:\u002F\u002Flocalhost:8000\u002Fstore\u002Fitems\nContent-Type: application\u002Fjson\n\n{\n  \"namespace\": [\"user_profiles\"],\n  \"key\": \"profile_jane_doe\",\n  \"value\": {\n    \"displayName\": \"Jane Doe\",\n    \"role\": \"customer\"\n  }\n}\n\nHTTP\u002F1.1 204\n\n\n# 2. Retrieve it by namespace\u002Fkey\nGET http:\u002F\u002Flocalhost:8000\u002Fstore\u002Fitems?key=profile_jane_doe&namespace=user_profiles\n\nHTTP\u002F1.1 200\n[Asserts]\njsonpath \"$.value.displayName\" == \"Jane Doe\"\njsonpath \"$.value.role\" == \"customer\"\n\n\n# 3. 
Delete the item\nDELETE http:\u002F\u002Flocalhost:8000\u002Fstore\u002Fitems\nContent-Type: application\u002Fjson\n\n{\n  \"namespace\": [\"user_profiles\"],\n  \"key\": \"profile_jane_doe\"\n}\n\nHTTP\u002F1.1 204\n```\n\n## Roadmap\n\n- Add detailed specification for each stream mode (currently this is left open to the implementer)\n- Add Store endpoint to perform a vector search over memory entries\n- Add param for `POST \u002Fthreads\u002F{thread_id}\u002Fruns\u002F{run_id}\u002Fstream` to replay events since `event-id` before streaming new events\n- Add param to `POST \u002Fthreads\u002F{thread_id}\u002Fruns ` to optionally allow concurrent runs on the same thread (current spec makes this forbidden)\n- (Open an issue and let us know what else should be here!)\n","# Agent 协议\n\nAgent Protocol 是我们尝试将用于在生产环境中服务 LLM 智能体的框架无关 API 进行规范化的努力。本文档解释了该协议的目的，并阐述了规范中每个端点的理由。最后，我们列出了一些未来的路线图项目。\n\n查看完整的 OpenAPI 文档 [此处](https:\u002F\u002Flangchain-ai.github.io\u002Fagent-protocol\u002Fapi.html)，查看 JSON 规范 [此处](https:\u002F\u002Flangchain-ai.github.io\u002Fagent-protocol\u002Fopenapi.json)。\n\n[LangGraph 平台](https:\u002F\u002Fwww.langchain.com\u002Fpricing-langgraph-platform) 实现了该协议的超集，但我们非常欢迎社区的其他实现。\n\n## 资源\n\n- [Agent 协议 OpenAPI 文档](https:\u002F\u002Flangchain-ai.github.io\u002Fagent-protocol\u002Fapi.html)\n- [Agent 协议 JSON 规范](https:\u002F\u002Flangchain-ai.github.io\u002Fagent-protocol\u002Fopenapi.json)\n- [Agent 协议 Python 服务器存根](.\u002Fserver\u002F) - 一个使用 Pydantic V2 和 FastAPI 的 Python 服务器，由 OpenAPI 规范自动生成\n- [LangGraph.js API](https:\u002F\u002Fgithub.com\u002Flangchain-ai\u002Flanggraphjs-api\u002Ftree\u002Fmain\u002Flibs\u002Flanggraph-api) - 此协议的一个开源实现，用于 LangGraph.js 智能体，使用内存存储\n- [LangGraph 平台](https:\u002F\u002Fwww.langchain.com\u002Fpricing-langgraph-platform) - 一个商业平台，实现了该协议的超集，用于在生产环境中部署任何 LLM 智能体\n\n## 为什么需要 Agent 协议\n\n什么是适合在生产环境中服务 LLM 应用程序的正确 API？我们相信它围绕 3 个重要概念构建：\n\n- Runs（运行）：执行智能体的 API\n- Threads（线程）：组织智能体多轮执行的 API\n- Store（存储）：处理长期记忆的 
API\n\n让我们深入探讨每一个，首先从需求开始，然后展示满足这些需求的协议端点。\n\n## 无状态运行：一次性交互\n\n在某些情况下，您可能希望在单个请求中创建线程并运行，并在运行结束后删除该线程。这对于短暂或无状态的交互很有用，在这些交互中您不需要跟踪线程的状态。\n\n- [`POST \u002Fruns\u002Fwait`](https:\u002F\u002Flangchain-ai.github.io\u002Fagent-protocol\u002Fapi.html#tag\u002Fruns\u002FPOST\u002Fruns\u002Fwait) - 创建一个临时运行，并等待其最终输出，输出将在响应中返回。\n- [`POST \u002Fruns\u002Fstream`](https:\u002F\u002Flangchain-ai.github.io\u002Fagent-protocol\u002Fapi.html#tag\u002Fruns\u002FPOST\u002Fruns\u002Fstream) - 创建一个临时运行，并在输出产生时流式传输。\n\n## 线程：多轮交互\n\n您需要哪些 API 来启用多轮交互？\n\n- 持久化状态\n  - 获取和更新状态\n  - 跟踪线程过去状态的历史，建模为仅追加的状态日志\n  - 通过仅存储状态之间的差异来优化存储\n- 并发控制\n  - 确保同一时间每个线程只有一个运行处于活动状态\n  - 可自定义处理并发运行（中断、入队或回滚）\n- 线程的 CRUD 端点\n  - 按用户或其他元数据列出线程\n  - 按状态列出线程（空闲、已中断、出错、完成）\n  - 复制或删除线程\n\n端点：\n\n- [`POST \u002Fthreads`](https:\u002F\u002Flangchain-ai.github.io\u002Fagent-protocol\u002Fapi.html#tag\u002Fthreads\u002FPOST\u002Fthreads) - 创建线程。\n- [`POST \u002Fthreads\u002Fsearch`](https:\u002F\u002Flangchain-ai.github.io\u002Fagent-protocol\u002Fapi.html#tag\u002Fthreads\u002FPOST\u002Fthreads\u002Fsearch) - 搜索线程。\n- [`GET \u002Fthreads\u002F{thread_id}`](https:\u002F\u002Flangchain-ai.github.io\u002Fagent-protocol\u002Fapi.html#tag\u002Fthreads\u002FGET\u002Fthreads\u002F%7Bthread_id%7D) - 获取线程。\n- [`GET \u002Fthreads\u002F{thread_id}\u002Fhistory`](https:\u002F\u002Flangchain-ai.github.io\u002Fagent-protocol\u002Fapi.html#tag\u002Fthreads\u002FGET\u002Fthreads\u002F%7Bthread_id%7D\u002Fhistory) - 浏览线程状态的历史修订版。修订版由运行创建，或通过下面的 PATCH 端点创建。\n- [`POST \u002Fthreads\u002F{thread_id}\u002Fcopy`](https:\u002F\u002Flangchain-ai.github.io\u002Fagent-protocol\u002Fapi.html#tag\u002Fthreads\u002FPOST\u002Fthreads\u002F%7Bthread_id%7D\u002Fcopy) - 创建线程的独立副本。\n- [`DELETE \u002Fthreads\u002F{thread_id}`](https:\u002F\u002Flangchain-ai.github.io\u002Fagent-protocol\u002Fapi.html#tag\u002Fthreads\u002FDELETE\u002Fthreads\u002F%7Bthread_id%7D) - 删除线程。\n- [`PATCH 
\u002Fthreads\u002F{thread_id}`](https:\u002F\u002Flangchain-ai.github.io\u002Fagent-protocol\u002Fapi.html#tag\u002Fthreads\u002FPATCH\u002Fthreads\u002F%7Bthread_id%7D) - 更新线程的值或元数据。更新值会在线程历史中创建新的修订版。\n\n## 智能体：内省\n\n在使用智能体之前，了解它能做什么、接受什么输入、返回什么等有时很有用。这就是内省端点发挥作用的地方。\n\n端点：\n\n- [`POST \u002Fagents\u002Fsearch`](https:\u002F\u002Flangchain-ai.github.io\u002Fagent-protocol\u002Fapi.html#tag\u002Fagents\u002FPOST\u002Fagents\u002Fsearch) - 列出所有智能体，可选择性地按元数据或名称过滤。\n- [`GET \u002Fagents\u002F{agent_id}`](https:\u002F\u002Flangchain-ai.github.io\u002Fagent-protocol\u002Fapi.html#tag\u002Fagents\u002FGET\u002Fagents\u002F%7Bagent_id%7D) - 获取智能体的基本信息，包括其名称、描述、元数据。\n- [`GET \u002Fagents\u002F{agent_id}\u002Fschemas`](https:\u002F\u002Flangchain-ai.github.io\u002Fagent-protocol\u002Fapi.html#tag\u002Fagents\u002FGET\u002Fagents\u002F%7Bagent_id%7D\u002Fschemas) - 获取智能体的输入、输出、状态和配置模式。所有模式均以 JSON Schema 格式表示。\n\n## 后台运行：原子化智能体执行\n\n要执行一个智能体，我们需要 API 提供什么功能？\n\n- 支持两种启动运行的范式\n  - “发射即忘”（Fire and Forget），即在后台启动运行，但不等待其完成\n  - 等待回复（阻塞或轮询），即启动运行并等待\u002F流式传输其输出\n- 支持智能体执行的 CRUD (增删改查) 操作\n  - 列出和获取运行\n  - 取消和删除运行\n- 灵活的消费输出方式\n  - 获取最终状态\n  - 多种类型的流式输出，例如逐 token、中间步骤等\n  - 如果断开连接，能够重新连接到输出流\n- 处理边界情况\n  - 故障应被优雅地处理，如果需要可以重试\n  - 突发流量应该被排队\n\n基础端点：\n\n- [`GET \u002Fthreads\u002F{thread_id}\u002Fruns`](https:\u002F\u002Flangchain-ai.github.io\u002Fagent-protocol\u002Fapi.html#tag\u002Fbackground-runs\u002FGET\u002Fthreads\u002F{thread_id}\u002Fruns) - 列出线程的运行。\n- [`POST \u002Fruns`](https:\u002F\u002Flangchain-ai.github.io\u002Fagent-protocol\u002Fapi.html#tag\u002Fbackground-runs\u002FPOST\u002Fruns) - 创建一个后台运行。\n- [`GET \u002Fruns\u002F{run_id}`](https:\u002F\u002Flangchain-ai.github.io\u002Fagent-protocol\u002Fapi.html#tag\u002Fbackground-runs\u002FGET\u002Fruns\u002F{run_id}) - 获取运行及其状态。\n- [`POST 
\u002Fruns\u002F{run_id}\u002Fcancel`](https:\u002F\u002Flangchain-ai.github.io\u002Fagent-protocol\u002Fapi.html#tag\u002Fbackground-runs\u002FPOST\u002Fruns\u002F{run_id}\u002Fcancel) - 取消运行。如果运行尚未开始，立即取消；如果正在运行，则尽快取消。\n- [`DELETE \u002Fruns\u002F{run_id}`](https:\u002F\u002Flangchain-ai.github.io\u002Fagent-protocol\u002Fapi.html#tag\u002Fbackground-runs\u002FDELETE\u002Fruns\u002F{run_id}) - 删除已完成的运行。待处理的运行需要先取消，参见上一个端点。\n- [`GET \u002Fruns\u002F{run_id}\u002Fwait`](https:\u002F\u002Flangchain-ai.github.io\u002Fagent-protocol\u002Fapi.html#tag\u002Fbackground-runs\u002FGET\u002Fruns\u002F{run_id}\u002Fwait) - 等待运行完成，返回最终输出。如果运行已完成，立即返回其最终输出。\n- [`GET \u002Fruns\u002F{run_id}\u002Fstream`](https:\u002F\u002Flangchain-ai.github.io\u002Fagent-protocol\u002Fapi.html#tag\u002Fbackground-runs\u002FGET\u002Fruns\u002F{run_id}\u002Fstream) - 加入现有运行的输出流。仅调用此端点后产生的输出才会被流式传输。\n\n## 存储：长期记忆\n\n对于智能体的记忆 API，你需要什么功能？\n\n- 可自定义的记忆范围\n  - 针对用户、线程、助手、公司等存储记忆\n  - 在同一运行中访问不同范围的记忆\n- 灵活的存储\n  - 支持简单的文本记忆以及结构化数据\n  - 记忆的 CRUD (增删改查) 操作（创建、读取、更新、删除）\n- 搜索和检索\n  - 通过命名空间和键获取单个记忆\n  - 按命名空间、内容过滤，按时间排序等列出记忆\n\n端点：\n\n- [`PUT \u002Fstore\u002Fitems`](https:\u002F\u002Flangchain-ai.github.io\u002Fagent-protocol\u002Fapi.html#tag\u002Fstore\u002FPUT\u002Fstore\u002Fitems) - 在指定的命名空间和键处创建或更新记忆项。\n- [`DELETE \u002Fstore\u002Fitems`](https:\u002F\u002Flangchain-ai.github.io\u002Fagent-protocol\u002Fapi.html#tag\u002Fstore\u002FDELETE\u002Fstore\u002Fitems) - 在指定的命名空间和键处删除记忆项。\n- [`GET \u002Fstore\u002Fitems`](https:\u002F\u002Flangchain-ai.github.io\u002Fagent-protocol\u002Fapi.html#tag\u002Fstore\u002FGET\u002Fstore\u002Fitems) - 在指定的命名空间和键处获取记忆项。\n- [`POST \u002Fstore\u002Fitems\u002Fsearch`](https:\u002F\u002Flangchain-ai.github.io\u002Fagent-protocol\u002Fapi.html#tag\u002Fstore\u002FPOST\u002Fstore\u002Fitems\u002Fsearch) - 搜索记忆项。\n- [`POST 
\u002Fstore\u002Fnamespaces`](https:\u002F\u002Flangchain-ai.github.io\u002Fagent-protocol\u002Fapi.html#tag\u002Fstore\u002FPOST\u002Fstore\u002Fnamespaces) - 列出命名空间。\n\n## 消息\n\n消息已成为处理大语言模型 (LLMs) 的核心原语，因此我们在 Agent Protocol 中对消息提供了第一类支持。此外，我们还完全支持为智能体定制输入\u002F输出模式 (schemas)。我们定义了一个消息规范 (Message spec)，它是主要 LLM 提供商（如 OpenAI 和 Anthropic）支持的格式的子集。在所有暴露线程值的端点中，还有一个单独的 `messages` 字段，智能体可以选择实现。\n\n## Agent Protocol 实战\n\n以下是几个使用 [Hurl](https:\u002F\u002Fhurl.dev) 格式的示例“用户旅程”，每个都展示了针对您的 Agent Protocol 服务（监听在 localhost:8000，无需认证）的常见 API 调用序列。\n\n它们经过组织，以便您可以将每个旅程粘贴到独立的 .hurl 文件中（或合并它们），然后使用\"hurl\"命令运行。这应该能让您很好地了解该协议在实际中如何使用。\n\n### 旅程 1：创建线程 → 获取线程 → 创建运行 → 等待输出\n\n此旅程演示了创建线程、启动运行并等待最终输出的典型序列。然后您可以重复最后两步以在同一线程中启动更多运行。这是多轮交互（如聊天机器人对话）最常见的模式。\n\n```hurl\n# 1. Create a brand new thread\nPOST http:\u002F\u002Flocalhost:8000\u002Fthreads\nContent-Type: application\u002Fjson\n\n{\n  \"thread_id\": \"229c1834-bc04-4d90-8fd6-77f6b9ef1462\",\n  \"metadata\": {\n    \"purpose\": \"support-chat\"\n  }\n}\n\nHTTP\u002F1.1 200\n[Asserts]\njsonpath \"$.thread_id\" == \"229c1834-bc04-4d90-8fd6-77f6b9ef1462\"\n\n\n# 2. Retrieve the thread we just created\nGET http:\u002F\u002Flocalhost:8000\u002Fthreads\u002F229c1834-bc04-4d90-8fd6-77f6b9ef1462\n\nHTTP\u002F1.1 200\n[Asserts]\njsonpath \"$.status\" == \"idle\"\n\n\n# 3. Create a run in the existing thread (background run).\n# Capture the run_id for the next step.\nPOST http:\u002F\u002Flocalhost:8000\u002Fthreads\u002F229c1834-bc04-4d90-8fd6-77f6b9ef1462\u002Fruns\nContent-Type: application\u002Fjson\n\n{\n  \"input\": {\n    \"message\": \"Hi there, what's the weather?\"\n  },\n  \"metadata\": {\n    \"requestType\": \"weatherQuery\"\n  }\n}\n\nHTTP\u002F1.1 200\n[Captures]\nrun_id: jsonpath \"$.run_id\"\n[Asserts]\njsonpath \"$.status\" == \"pending\"\n\n\n# 4. 
Wait for final run output\nGET http:\u002F\u002Flocalhost:8000\u002Fthreads\u002F229c1834-bc04-4d90-8fd6-77f6b9ef1462\u002Fruns\u002F{{run_id}}\u002Fwait\n\nHTTP\u002F1.1 200\n[Asserts]\n# For example, check that the run status is success or error,\n# depending on your actual system's response:\njsonpath \"$.status\" == \"success\"\n```\n\n您可以将最后一步替换为 `GET \u002Fthreads\u002F{thread_id}\u002Fruns\u002F{run_id}\u002Fstream` 以流式传输产生的输出，或者使用 `GET \u002Fthreads\u002F{thread_id}` 进行轮询状态\u002F输出而无需等待。\n\n### 旅程 2：临时“无状态”运行（创建 + 等待）\n\n此旅程演示了一次性运行，其中您在一次请求中创建线程并运行，并等待最终输出。这对于无状态交互很有用，您希望每次从头开始。良好的用例包括提取或研究智能体。\n\n```hurl\n# Launch a one-shot run with a brand new ephemeral thread,\n# and wait for the final output right away.\nPOST http:\u002F\u002Flocalhost:8000\u002Fruns\u002Fwait\nContent-Type: application\u002Fjson\n\n{\n  \"input\": {\n    \"prompt\": \"What's the fastest route to the airport?\"\n  },\n  \"metadata\": {\n    \"useCase\": \"travelPlan\"\n  },\n  \"config\": {\n    \"tags\": [\"ephemeral\", \"demo\"]\n  }\n}\n\nHTTP\u002F1.1 200\n```\n\n### 旅程 3：使用存储（添加、检索和删除项）\n\n本旅程演示了如何使用 Store API（存储 API）来添加、检索和删除一项内容。这对于存储长期记忆非常有用，例如用户档案、偏好设置或其他结构化数据，这些数据既可以在智能体内部访问，也可以在智能体外部访问。\n\n```hurl\n# 1. Put (store or update) an item in the store\nPUT http:\u002F\u002Flocalhost:8000\u002Fstore\u002Fitems\nContent-Type: application\u002Fjson\n\n{\n  \"namespace\": [\"user_profiles\"],\n  \"key\": \"profile_jane_doe\",\n  \"value\": {\n    \"displayName\": \"Jane Doe\",\n    \"role\": \"customer\"\n  }\n}\n\nHTTP\u002F1.1 204\n\n\n# 2. Retrieve it by namespace\u002Fkey\nGET http:\u002F\u002Flocalhost:8000\u002Fstore\u002Fitems?key=profile_jane_doe&namespace=user_profiles\n\nHTTP\u002F1.1 200\n[Asserts]\njsonpath \"$.value.displayName\" == \"Jane Doe\"\njsonpath \"$.value.role\" == \"customer\"\n\n\n# 3. 
Delete the item\nDELETE http:\u002F\u002Flocalhost:8000\u002Fstore\u002Fitems\nContent-Type: application\u002Fjson\n\n{\n  \"namespace\": [\"user_profiles\"],\n  \"key\": \"profile_jane_doe\"\n}\n\nHTTP\u002F1.1 204\n```\n\n## 路线图\n\n- 为每种流模式添加详细规范（目前这部分留给实现者决定）\n- 添加 Store 端点以对记忆条目执行向量搜索\n- 为 `POST \u002Fthreads\u002F{thread_id}\u002Fruns\u002F{run_id}\u002Fstream` 添加参数，以便在流式传输新事件之前重放自 `event-id` 以来的事件\n- 向 `POST \u002Fthreads\u002F{thread_id}\u002Fruns` 添加参数，以选择性地允许在同一线程上进行并发运行（当前规范禁止此操作）\n- （提交一个 Issue（问题）并告诉我们这里还应该包含什么！）","# Agent Protocol 快速上手指南\n\n**Agent Protocol** 是一套旨在为生产环境中的 LLM 代理（Agents）提供框架无关 API 的规范。它定义了用于执行代理、管理多轮对话线程以及长期记忆存储的核心接口。本指南将指导您如何部署参考服务器并测试基础 API。\n\n## 环境准备\n\n在开始之前，请确保您的开发环境满足以下要求：\n\n- **操作系统**: Linux \u002F macOS \u002F Windows\n- **Python 版本**: 3.10 或更高版本\n- **依赖工具**: Git, Pip\n- **网络建议**: 由于涉及 GitHub 仓库克隆及 PyPI 包下载，建议使用国内镜像源加速（如清华源）。\n\n## 安装步骤\n\n### 1. 克隆项目仓库\n从 GitHub 获取官方代码库。如果连接缓慢，可使用国内镜像地址。\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Flangchain-ai\u002Fagent-protocol.git\ncd agent-protocol\n```\n\n### 2. 安装服务端依赖\n进入 `server\u002F` 目录，该目录包含基于 Pydantic V2 和 FastAPI 自动生成的 Python 服务器存根。\n\n```bash\ncd server\n# 推荐使用国内 PyPI 镜像源安装依赖\npip install -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple fastapi uvicorn pydantic\n```\n*(注：如果目录下存在 `requirements.txt`，请优先使用 `pip install -r requirements.txt`)*\n\n### 3. 启动服务\n使用 Uvicorn 启动 FastAPI 应用，默认监听端口为 8000。\n\n```bash\nuvicorn main:app --host 0.0.0.0 --port 8000\n```\n*(注：具体入口文件名为 `main.py`，若不同请根据实际目录结构调整)*\n\n## 基本使用\n\n启动服务后，您可以使用支持 HTTP 请求的工具（如 Hurl 或 cURL）来调用协议接口。以下示例展示了创建线程、启动运行并等待输出的标准流程。\n\n### 示例场景：创建线程 → 获取线程 → 创建运行 → 等待输出\n\n将以下代码保存为 `journey1.hurl` 文件，并在终端运行 `hurl journey1.hurl` 进行测试。\n\n```hurl\n# 1. 
Create a brand new thread\nPOST http:\u002F\u002Flocalhost:8000\u002Fthreads\nContent-Type: application\u002Fjson\n\n{\n  \"thread_id\": \"229c1834-bc04-4d90-8fd6-77f6b9ef1462\",\n  \"metadata\": {\n    \"purpose\": \"support-chat\"\n  }\n}\n\nHTTP\u002F1.1 200\n[Asserts]\njsonpath \"$.thread_id\" == \"229c1834-bc04-4d90-8fd6-77f6b9ef1462\"\n\n\n# 2. Retrieve the thread we just created\nGET http:\u002F\u002Flocalhost:8000\u002Fthreads\u002F229c1834-bc04-4d90-8fd6-77f6b9ef1462\n\nHTTP\u002F1.1 200\n[Asserts]\njsonpath \"$.status\" == \"idle\"\n\n\n# 3. Create a run in the existing thread (background run).\n# Capture the run_id for the next step.\nPOST http:\u002F\u002Flocalhost:8000\u002Fthreads\u002F229c1834-bc04-4d90-8fd6-77f6b9ef1462\u002Fruns\nContent-Type: application\u002Fjson\n\n{\n  \"input\": {\n    \"message\": \"Hi there, what's the weather?\"\n  },\n  \"metadata\": {\n    \"requestType\": \"weatherQuery\"\n  }\n}\n\nHTTP\u002F1.1 200\n[Captures]\nrun_id: jsonpath \"$.run_id\"\n[Asserts]\njsonpath \"$.status\" == \"pending\"\n\n\n# 4. 
Wait for final run output\nGET http:\u002F\u002Flocalhost:8000\u002Fthreads\u002F229c1834-bc04-4d90-8fd6-77f6b9ef1462\u002Fruns\u002F{{run_id}}\u002Fwait\n\nHTTP\u002F1.1 200\n[Asserts]\n# For example, check that the run status is success or error,\n# depending on your actual system's response:\njsonpath \"$.status\" == \"success\"\n```\n\n### Other common patterns\n- **Stateless runs**: use `POST \u002Fruns\u002Fwait` to create an ephemeral thread and fetch the result in a single call.\n- **Streaming**: replace the last step with `GET \u002Fthreads\u002F{thread_id}\u002Fruns\u002F{run_id}\u002Fstream` to receive output in real time.\n\nFor the complete API documentation and specification, see the [Agent Protocol OpenAPI Docs](https:\u002F\u002Flangchain-ai.github.io\u002Fagent-protocol\u002Fapi.html).","A fintech team is building a wealth-management assistant with long-term memory that must handle users' cross-session investment preferences and complex account queries.\n\n### Without agent-protocol\n- Every model call needs hand-written, one-off interface logic, leaving the backend full of redundant code that is hard to maintain and update.\n- Conversation history and user state are scattered across the database, making precise multi-turn context linking and retrieval difficult.\n- Switching the underlying LLM means reworking the whole backend architecture, with high migration cost and a real risk of introducing new bugs.\n- With no unified concurrency control, heavy load easily causes data conflicts or state loss within the same session.\n\n### With agent-protocol\n- agent-protocol provides standardized Runs and Threads interfaces, giving every agent a uniform execution entry point and documented contract.\n- Built-in thread management automatically handles state persistence, incremental storage, and history rollback for multi-turn conversations.\n- Because the spec is framework-agnostic, models from different vendors can be swapped in without touching core business logic.\n- Native concurrency control and state locking keep operations within a user session safe and ordered, even under high traffic.\n\nStandardized interfaces sharply reduce the development and operations burden of agent applications, letting the team focus on the business logic itself.","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Flangchain-ai_agent-protocol_6c24ff42.png","langchain-ai","LangChain","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Flangchain-ai_8e6aaeef.png","","support@langchain.dev","https:\u002F\u002Fwww.langchain.com","https:\u002F\u002Fgithub.com\u002Flangchain-ai",[83,87,91,95],{"name":84,"color":85,"percentage":86},"Python","#3572A5",99.2,{"name":88,"color":89,"percentage":90},"Makefile","#427819",0.4,{"name":92,"color":93,"percentage":94},"Shell","#89e051",0.3,{"name":96,"color":97,"percentage":98},"HTML","#e34c26",0.1,546,45,"2026-04-05T10:16:27","MIT","Not specified",{"notes":105,"python":103,"dependencies":106},"This project is an API protocol specification that defines a production interface standard for LLM agents; it is not a standalone model inference engine. The provided Python server stubs are a sample implementation only, and actual hardware requirements depend on the agent framework being deployed (such as 
LangGraph).",[107,108],"pydantic>=2.0","fastapi",[15,13],"2026-03-27T02:49:30.150509","2026-04-06T05:35:48.091469",[113,118,123,128,133,138],{"id":114,"question_zh":115,"answer_zh":116,"source_url":117},3195,"What is the relationship between this project and the existing `Agent Protocol` project?","None at all. They are two independent projects.","https:\u002F\u002Fgithub.com\u002Flangchain-ai\u002Fagent-protocol\u002Fissues\u002F1",{"id":119,"question_zh":120,"answer_zh":121,"source_url":122},3196,"Can a run be started at the same time a thread is created (without a separate thread-creation call)?","Yes. A `POST \u002Fthreads\u002Fruns` endpoint has been added that creates a thread and starts a run in one request, which suits single-shot implementations and saves an extra call. See PR #8 for the implementation.","https:\u002F\u002Fgithub.com\u002Flangchain-ai\u002Fagent-protocol\u002Fissues\u002F6",{"id":124,"question_zh":125,"answer_zh":126,"source_url":127},3197,"How can an agent's metadata and runtime configuration be retrieved?","`\u002Fmetadata` and `\u002Fconfiguration` endpoints have been added. They expose metadata such as agent tags and versions along with runtime configuration parameters, making it easier to organize and manage multiple agents. See PR #23 for the implementation.","https:\u002F\u002Fgithub.com\u002Flangchain-ai\u002Fagent-protocol\u002Fissues\u002F7",{"id":129,"question_zh":130,"answer_zh":131,"source_url":132},3198,"How do I resolve the validation error caused by the assistant ID type mismatch at run time?","This was fixed in PR #23. Previously the assistant ID type in the `runCreate*` methods did not match the model definition, so type-validation tools such as pydantic raised exceptions; this has been corrected.","https:\u002F\u002Fgithub.com\u002Flangchain-ai\u002Fagent-protocol\u002Fissues\u002F20",{"id":134,"question_zh":135,"answer_zh":136,"source_url":137},3199,"Where can project-related discussion take place?","The repository's Discussions tab has been enabled (similar to the LangChain repo); that is the recommended place for project-related discussion.","https:\u002F\u002Fgithub.com\u002Flangchain-ai\u002Fagent-protocol\u002Fissues\u002F3",{"id":139,"question_zh":140,"answer_zh":141,"source_url":142},3200,"How can a thread's status be updated through the PATCH endpoint?","The `ThreadPatch` model does not yet expose a status field, so the status cannot be updated directly via PATCH. To update it, watch for upcoming changes to the definition at line 2282 of `openapi.json`.","https:\u002F\u002Fgithub.com\u002Flangchain-ai\u002Fagent-protocol\u002Fissues\u002F34",[144,149,154,159,164,169,174,179],{"id":145,"version":146,"summary_zh":147,"released_at":148},102713,"0.1.3","## What's Changed\r\n* Add user journeys in hurl format by @nfcampos in https:\u002F\u002Fgithub.com\u002Flangchain-ai\u002Fagent-protocol\u002Fpull\u002F19\r\n* Add 404 responses for all 
`\u002Fthreads` that were missing them, thanks to @beryder in https:\u002F\u002Fgithub.com\u002Flangchain-ai\u002Fagent-protocol\u002Fpull\u002F22\r\n* Add \u002Fagents endpoints for introspection of available agents and their schemas by @nfcampos in https:\u002F\u002Fgithub.com\u002Flangchain-ai\u002Fagent-protocol\u002Fpull\u002F23\r\n\r\n## New Contributors\r\n* @beryder made their first contribution in https:\u002F\u002Fgithub.com\u002Flangchain-ai\u002Fagent-protocol\u002Fpull\u002F22\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Flangchain-ai\u002Fagent-protocol\u002Fcompare\u002F0.1.2...0.1.3","2025-02-21T23:06:24",{"id":150,"version":151,"summary_zh":152,"released_at":153},102714,"0.1.2","- Add Stateless runs endpoints for simpler one-shot runs \r\n- Add auto-generated Python server stubs, these will be kept up-to-date with the spec, and are a great starting point for a custom implementation\r\n- Make clear that the type of `input` is unconstrained, ie any valid JSON value is accepted\r\n- Remove `interrupt_before` \u002F `interrupt_after` parameters, which were too specific to LangGraph\r\n- Remove `feedback_keys` and `tasks` parameters, which were too specific to LangGraph","2025-02-05T00:37:29",{"id":155,"version":156,"summary_zh":157,"released_at":158},102715,"0.1.1","- Make assistant_id optional on all endpoints to create runs. 
Implementations that don't need to handle multiple agents per deployment don't need this requirement","2025-01-24T17:01:13",{"id":160,"version":161,"summary_zh":162,"released_at":163},102708,"0.2.1","- Fix typo","2025-04-14T22:26:49",{"id":165,"version":166,"summary_zh":167,"released_at":168},102709,"0.2.0","## What's Changed\r\n* fix: adding thread status in the thread search api #34 by @MadaraUchiha-314 in https:\u002F\u002Fgithub.com\u002Flangchain-ai\u002Fagent-protocol\u002Fpull\u002F35\r\n* Consolidate run creation endpoints under \u002Fruns (instead of under \u002Fthreads\u002Fruns) by @nfcampos in https:\u002F\u002Fgithub.com\u002Flangchain-ai\u002Fagent-protocol\u002Fpull\u002F47 and https:\u002F\u002Fgithub.com\u002Flangchain-ai\u002Fagent-protocol\u002Fpull\u002F48\r\n* Add streaming value to the capabilities enum by @nfcampos in https:\u002F\u002Fgithub.com\u002Flangchain-ai\u002Fagent-protocol\u002Fpull\u002F46\r\n\r\n## New Contributors\r\n* @MadaraUchiha-314 made their first contribution in https:\u002F\u002Fgithub.com\u002Flangchain-ai\u002Fagent-protocol\u002Fpull\u002F35\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Flangchain-ai\u002Fagent-protocol\u002Fcompare\u002F0.1.6...0.2.0","2025-04-14T15:24:13",{"id":170,"version":171,"summary_zh":172,"released_at":173},102710,"0.1.6","## What's Changed\r\n* Change 200 responses to 204 for Deletes and Cancel, add missing Store 404 responses by @beryder in https:\u002F\u002Fgithub.com\u002Flangchain-ai\u002Fagent-protocol\u002Fpull\u002F33\r\n* Add specification for runs\u002Fwait schema by @nfcampos in https:\u002F\u002Fgithub.com\u002Flangchain-ai\u002Fagent-protocol\u002Fpull\u002F37\r\n* Use RunsWaitResponse in \u002Fthreads\u002F{thread_id}\u002Fruns\u002Fwait by @nfcampos in https:\u002F\u002Fgithub.com\u002Flangchain-ai\u002Fagent-protocol\u002Fpull\u002F39\r\n* fix spelling by @adilhafeez in https:\u002F\u002Fgithub.com\u002Flangchain-ai\u002Fagent-protocol\u002Fpull\u002F40\r\n* Add 
capabilities map for Agent by @nfcampos in https:\u002F\u002Fgithub.com\u002Flangchain-ai\u002Fagent-protocol\u002Fpull\u002F44\r\n\r\n## New Contributors\r\n* @adilhafeez made their first contribution in https:\u002F\u002Fgithub.com\u002Flangchain-ai\u002Fagent-protocol\u002Fpull\u002F40\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Flangchain-ai\u002Fagent-protocol\u002Fcompare\u002F0.1.5...0.1.6","2025-03-26T21:23:34",{"id":175,"version":176,"summary_zh":177,"released_at":178},102711,"0.1.5","## What's Changed\r\n* In Thread History endpoint add metadata, and convert checkpoint_id to a checkpoint object by @nfcampos in https:\u002F\u002Fgithub.com\u002Flangchain-ai\u002Fagent-protocol\u002Fpull\u002F31\r\n* Fix issues with generated server by @beryder in https:\u002F\u002Fgithub.com\u002Flangchain-ai\u002Fagent-protocol\u002Fpull\u002F30\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Flangchain-ai\u002Fagent-protocol\u002Fcompare\u002F0.1.4...0.1.5","2025-02-26T23:03:44",{"id":180,"version":181,"summary_zh":182,"released_at":183},102712,"0.1.4","* Consolidated Thread State endpoints, `Get Thread State` endpoint was duplicating functionality in `Get Thread`, so has been removed. `Update Thread State` has been removed, and updating thread values is now achieved in `Patch Thread`. The response schema of `Get Thread History` is now simplified, to remove properties that wouldn't generalize well to other frameworks\r\n* Add Thread.messages as a top-level property, providing a more strongly typed interaction pattern for dealing with messages, which many agents will want to implement in a compatible format.","2025-02-26T00:40:42"]
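The create-thread → create-run → wait sequence shown in the Journey 1 hurl example can be driven from any HTTP client, not just Hurl. The sketch below is a minimal illustration, not part of the spec: the helper names are our own, the base URL assumes the reference server from the quick-start guide is listening on `http://localhost:8000`, and the `"<run_id>"` placeholder stands in for the id captured from the run-creation response. It only builds the three requests as plain data so the sequence is easy to inspect; actually sending them is left to whatever HTTP library you prefer.

```python
import uuid

# Assumed reference-server address from the quick-start guide.
BASE = "http://localhost:8000"

def create_thread_request(thread_id: str, metadata: dict) -> dict:
    # Step 1: POST /threads registers a new thread under a client-chosen id.
    return {"method": "POST", "url": f"{BASE}/threads",
            "body": {"thread_id": thread_id, "metadata": metadata}}

def create_run_request(thread_id: str, input_payload: dict) -> dict:
    # Step 3: POST /threads/{thread_id}/runs starts a background run; the
    # response is expected to carry a run_id and an initial "pending" status.
    return {"method": "POST", "url": f"{BASE}/threads/{thread_id}/runs",
            "body": {"input": input_payload}}

def wait_for_run_request(thread_id: str, run_id: str) -> dict:
    # Step 4: GET /threads/{thread_id}/runs/{run_id}/wait blocks until the
    # run reaches a terminal status such as "success" or "error".
    return {"method": "GET",
            "url": f"{BASE}/threads/{thread_id}/runs/{run_id}/wait",
            "body": None}

thread_id = str(uuid.uuid4())
plan = [
    create_thread_request(thread_id, {"purpose": "support-chat"}),
    create_run_request(thread_id, {"message": "Hi there, what's the weather?"}),
    # "<run_id>" is a placeholder; the real id comes from step 2's response.
    wait_for_run_request(thread_id, "<run_id>"),
]
for step in plan:
    print(step["method"], step["url"])
```

Keeping the plan as data also makes it trivial to swap in the streaming variant: replacing the last URL's `/wait` suffix with `/stream` yields the real-time journey described above.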
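Journey 3 treats the store as a namespaced key-value map: each item lives under a (namespace, key) pair, `PUT` upserts, `GET` retrieves, and `DELETE` removes. The following in-memory sketch models only those semantics; it is illustrative (the class and method names are ours), and the real interface is the HTTP Store API shown in the hurl example.

```python
from typing import Any, Optional, Tuple

class InMemoryStore:
    """Toy model of the Store API's namespaced key-value semantics."""

    def __init__(self) -> None:
        # Items are addressed by a (namespace, key) pair.
        self._items: dict[Tuple[Tuple[str, ...], str], Any] = {}

    def put(self, namespace: list, key: str, value: Any) -> None:
        # Mirrors PUT /store/items: create-or-update, no response body (204).
        self._items[(tuple(namespace), key)] = value

    def get(self, namespace: list, key: str) -> Optional[Any]:
        # Mirrors GET /store/items?namespace=...&key=...
        return self._items.get((tuple(namespace), key))

    def delete(self, namespace: list, key: str) -> None:
        # Mirrors DELETE /store/items; a missing key is silently ignored
        # here (an assumption — the HTTP API may instead return 404).
        self._items.pop((tuple(namespace), key), None)

store = InMemoryStore()
store.put(["user_profiles"], "profile_jane_doe",
          {"displayName": "Jane Doe", "role": "customer"})
profile = store.get(["user_profiles"], "profile_jane_doe")
store.delete(["user_profiles"], "profile_jane_doe")
```

Note that the namespace is a list, exactly as in the journey's JSON bodies, so hierarchical namespaces such as `["users", "jane", "prefs"]` compose naturally.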