[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-emcie-co--parlant":3,"tool-emcie-co--parlant":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",140436,2,"2026-04-05T23:32:43",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":79,"owner_email":79,"owner_twitter":79,"owner_website":80,"owner_url":81,"languages":82,"stars":116,"forks":117,"last_commit_at":118,"license":119,"difficulty_score":23,"env_os":120,"env_gpu":120,"env_ram":120,"env_deps":121,"category_tags":125,"github_topics":126,"view_count":23,"oss_zip_url":79,"oss_zip_packed_at":79,"status":16,"created_at":138,"updated_at":139,"faqs":140,"releases":169},3071,"emcie-co\u002Fparlant","parlant","The conversational control layer for customer-facing AI agents - Parlant is a context-engineering framework optimized for controlling customer interactions.","Parlant 是一款专为面向客户的 AI 智能体设计的对话控制层框架，旨在优化企业级 B2C 及敏感 B2B 场景下的交互质量。它通过先进的“上下文工程”技术，确保 AI 在对话中始终保持一致、合规且符合品牌调性。\n\n在实际开发中，开发者常面临两难困境：单纯依赖系统提示词（System Prompts）会导致指令过多时 AI 注意力分散；而使用复杂的路由图（Routed Graphs）又容易因自然语言的多样性变得脆弱不堪。Parlant 巧妙解决了这一难题，它允许开发者一次性定义规则、知识库和工具，引擎则能在对话过程中实时筛选，仅将当前回合最相关的上下文注入提示词，从而避免信息过载并提升响应精准度。\n\n这款工具特别适合需要构建高可靠性客服机器人或业务助理的软件开发者和工程师使用。其独特亮点在于动态上下文管理機制，能够像专家一样判断何时调用特定工具或遵循何种指南，例如仅在客户提及专业金融术语时才触发深度回答策略。通过 Parlant，团队可以更轻松地打造出既灵活又稳健的对话式 AI 应用，无需在复杂性与可控性之间做出妥协。","\u003Cdiv align=\"center\">\n\n\u003Cpicture>\n  \u003Csource media=\"(prefers-color-scheme: dark)\" 
srcset=\"https:\u002F\u002Fgithub.com\u002Femcie-co\u002Fparlant\u002Fblob\u002Fdevelop\u002Fdocs\u002FLogoTransparentLight.png?raw=true\">\n  \u003Cimg alt=\"Parlant\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Femcie-co_parlant_readme_7b32078bcc73.png\" width=400 \u002F>\n\u003C\u002Fpicture>\n\n### The conversational control layer for customer-facing AI agents\n\n\u003Cp>\n  \u003Ca href=\"https:\u002F\u002Fpypi.org\u002Fproject\u002Fparlant\u002F\">\u003Cimg alt=\"PyPI\" src=\"https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Fparlant?color=blue\">\u003C\u002Fa>\n  \u003Cimg alt=\"Python 3.10+\" src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fpython-3.10+-blue\">\n  \u003Ca href=\"https:\u002F\u002Fopensource.org\u002Flicenses\u002FApache-2.0\">\u003Cimg alt=\"License\" src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Flicense-Apache%202.0-green\">\u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fdiscord.gg\u002FduxWqxKk6J\">\u003Cimg alt=\"Discord\" src=\"https:\u002F\u002Fimg.shields.io\u002Fdiscord\u002F1312378700993663007?color=7289da&logo=discord&logoColor=white\">\u003C\u002Fa>\n  \u003Cimg alt=\"GitHub Repo stars\" src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Femcie-co\u002Fparlant?style=social\">\n\u003C\u002Fp>\n\n\u003Cp>\n  \u003Ca href=\"https:\u002F\u002Fwww.parlant.io\u002F\" target=\"_blank\">Website\u003C\u002Fa> &bull;\n  \u003Ca href=\"https:\u002F\u002Fwww.parlant.io\u002Fdocs\u002Fquickstart\u002Finstallation\" target=\"_blank\">Quick Start\u003C\u002Fa> &bull;\n  \u003Ca href=\"https:\u002F\u002Fwww.parlant.io\u002Fdocs\u002Fquickstart\u002Fexamples\" target=\"_blank\">Examples\u003C\u002Fa> &bull;\n  \u003Ca href=\"https:\u002F\u002Fdiscord.gg\u002FduxWqxKk6J\" target=\"_blank\">Discord\u003C\u002Fa>\n\u003C\u002Fp>\n\n\u003Cp>\n  \u003Ca href=\"https:\u002F\u002Fzdoc.app\u002Fde\u002Femcie-co\u002Fparlant\">Deutsch\u003C\u002Fa> |\n  \u003Ca 
href=\"https:\u002F\u002Fzdoc.app\u002Fes\u002Femcie-co\u002Fparlant\">Español\u003C\u002Fa> |\n  \u003Ca href=\"https:\u002F\u002Fzdoc.app\u002Ffr\u002Femcie-co\u002Fparlant\">français\u003C\u002Fa> |\n  \u003Ca href=\"https:\u002F\u002Fzdoc.app\u002Fja\u002Femcie-co\u002Fparlant\">日本語\u003C\u002Fa> |\n  \u003Ca href=\"https:\u002F\u002Fzdoc.app\u002Fko\u002Femcie-co\u002Fparlant\">한국어\u003C\u002Fa> |\n  \u003Ca href=\"https:\u002F\u002Fzdoc.app\u002Fpt\u002Femcie-co\u002Fparlant\">Português\u003C\u002Fa> |\n  \u003Ca href=\"https:\u002F\u002Fzdoc.app\u002Fru\u002Femcie-co\u002Fparlant\">Русский\u003C\u002Fa> |\n  \u003Ca href=\"https:\u002F\u002Fzdoc.app\u002Fzh\u002Femcie-co\u002Fparlant\">中文\u003C\u002Fa>\n\u003C\u002Fp>\n\n\u003Ca href=\"https:\u002F\u002Ftrendshift.io\u002Frepositories\u002F12768\" target=\"_blank\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Femcie-co_parlant_readme_4a68feb902da.png\" alt=\"Trending\" style=\"width: 250px; height: 55px;\" width=\"250\" height=\"55\"\u002F>\n\u003C\u002Fa>\n\n\u003C\u002Fdiv>\n\n&nbsp;\n\n**Parlant streamlines conversational context engineering for enterprise-grade B2C (business to consumer) and sensitive B2B interactions that need to be consistent, compliant, and on-brand.**\n\n## Why Parlant?\n\nConversational context engineering is hard because real-world interactions are diverse, nuanced, and non-linear.\n\n### ❌ The Problem: What you've probably tried and couldn't get to work at scale\n**System prompts** work until production complexity kicks in. 
The more instructions you add to a prompt, the faster your agent stops paying attention to any of them.\n\n**Routed graphs** solve the prompt-overload problem, but the more routing you add, the more fragile it becomes when faced with the chaos of natural interactions.\n\n### 🔑 The Solution: Context engineering, optimized for conversational control\nParlant solves this with [context engineering](https:\u002F\u002Fwww.gartner.com\u002Fen\u002Farticles\u002Fcontext-engineering): getting the right context, no more and no less, into the prompt at the right time. You define your rules, knowledge, and tools once; the engine narrows the context in real-time to what's immediately relevant to the current turn.\n\n\u003Cimg alt=\"Parlant Demo\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Femcie-co_parlant_readme_ee4a3954e8b1.gif\" width=\"100%\" \u002F>\n\n## Getting started\n\n```bash\npip install parlant\n```\n\n```python\nimport parlant.sdk as p\n\nasync with p.Server() as server:\n    agent = await server.create_agent(\n        name=\"Customer Support\",\n        description=\"Handles customer inquiries for an airline\",\n    )\n\n    # Evaluate and call tools only under the right conditions\n    expert_customer = await agent.create_observation(\n        condition=\"customer uses financial terminology like DTI or amortization\",\n        tools=[research_deep_answer],\n    )\n\n    # When the expert observation holds, always respond\n    # with depth. Set the guideline to automatically match\n    # whenever the observation it depends on holds...\n    expert_answers = await agent.create_guideline(\n        matcher=p.MATCH_ALWAYS,\n        action=\"respond with technical depth\",\n        dependencies=[expert_customer],\n    )\n\n    beginner_answers = await agent.create_guideline(\n        condition=\"customer seems new to the topic\",\n        action=\"simplify and use concrete examples\",\n    )\n\n    # When both match, the beginner wins. 
Neither expert-level\n    # tool-data nor instructions can enter the agent's context.\n    await beginner_answers.exclude(expert_customer)\n```\n\nFollow the **[5-minute quickstart](https:\u002F\u002Fwww.parlant.io\u002Fdocs\u002Fquickstart\u002Finstallation)** for a full walkthrough.\n\n## Parlant at a glance\n\nYou define your agent's behavior in code (not prompts), and the engine dynamically narrows the context on each turn to only what's immediately relevant, so the LLM stays focused and your agent stays aligned.\n\n```mermaid\ngraph TD\n    O[Observations] -->|Events| E[Contextual Matching Engine]\n    G[Guidelines] -->|Instructions| E\n    J[\"Journeys (SOPs)\"] -->|Current Steps| E\n    R[Retrievers] -->|Domain Knowledge| E\n    GL[Glossary] -->|Domain Terms| E\n    V[Variables] -->|Memories| E\n    E -->|Tool Requests| T[Tool Caller]\n    T -.->|Results + Optional Extra Matching Iterations| E\n    T -->|**Key Result:**\u003Cbr\u002F>Focused Context Window| M[Message Generation]\n```\n\nInstead of sending a large system prompt followed by a raw conversation to the model, Parlant first assembles a focused context — matching only the instructions and tools relevant to each conversational turn — then generates a response from that narrowed context.\n\n```mermaid\n%%{init: {'theme': 'base', 'themeVariables': {'primaryColor': '#e8f5e9', 'primaryTextColor': '#1b5e20', 'primaryBorderColor': '#81c784', 'lineColor': '#66bb6a', 'secondaryColor': '#fff9e1', 'tertiaryColor': 'transparent'}}}%%\nflowchart LR\n    A(User):::outputNode\n\n    subgraph Engine[\"Parlant Engine\"]\n        direction LR\n        B[\"Match Guidelines and Resolve Journey States\"]:::matchNode\n        C[\"Call Contextually-Associated Tools and Workflows\"]:::toolNode\n        D[\"Generated Message\"]:::composeNode\n        E[\"Canned Message\"]:::cannedNode\n    end\n\n    A a@-->|💬 User Input| B\n    B b@--> C\n    C c@-->|Fluid Output Mode?| D\n    C d@-->|Strict Output Mode?| E\n    D e@-->|💬 
Fluid Output| A\n    E f@-->|💬 Canned Output| A\n\n    a@{animate: true}\n    b@{animate: true}\n    c@{animate: true}\n    d@{animate: true}\n    e@{animate: true}\n    f@{animate: true}\n\n    linkStyle 2 stroke-width:2px\n    linkStyle 4 stroke-width:2px\n    linkStyle 3 stroke-width:2px,stroke:#3949AB\n    linkStyle 5 stroke-width:2px,stroke:#3949AB\n\n    classDef composeNode fill:#F9E9CB,stroke:#AB8139,stroke-width:2px,color:#7E5E1A,stroke-width:0\n    classDef cannedNode fill:#DFE3F9,stroke:#3949AB,stroke-width:2px,color:#1a237e,stroke-width:0\n```\n\nIn this way, adding more rules makes the agent smarter, not more confused — because the engine filters context relevance, not the LLM.\n\n## Is Parlant for you?\n\nParlant is built for teams that need their AI agent to behave reliably in front of real customers. It's a good fit if:\n\n- You're building a **customer-facing agent** — support, sales, onboarding, advisory — where tone, accuracy, and compliance matter.\n- You have **dozens or hundreds of behavioral rules** and your system prompt is buckling under the weight.\n- You're in a **regulated or high-stakes domain** (finance, insurance, healthcare, telecom) where every response needs to be explainable and auditable.\n\n**_Parlant is deployed in production at the most stringent organizations, including banks._**\n\n> _Parlant isn't just a framework. It's a high-level software that solves the conversational modeling problem head-on._\n> — **Sarthak Dalabehera**, Principal Engineer, Slice Bank\n\n> _By far the most elegant conversational AI framework that I've come across._\n> — **Vishal Ahuja**, Senior Lead, Applied AI, JPMorgan Chase\n\n> _Parlant dramatically reduces the need for prompt engineering and complex flow control. 
Building agents becomes closer to domain modeling._\n> — **Diogo Santiago**, AI Engineer, Oracle\n\n## Features\n\n- **[Guidelines](https:\u002F\u002Fparlant.io\u002Fdocs\u002Fconcepts\u002Fcustomization\u002Fguidelines)** —\n  Behavioral rules as condition-action pairs; the engine matches only what's relevant per turn.\n\n- **[Relationships](https:\u002F\u002Fparlant.io\u002Fdocs\u002Fconcepts\u002Fcustomization\u002Frelationships)** —\n  Dependencies and exclusions between guidelines to keep the context narrow and focused.\n\n- **[Journeys](https:\u002F\u002Fparlant.io\u002Fdocs\u002Fconcepts\u002Fcustomization\u002Fjourneys)** —\n  Multi-turn SOPs that adapt to how the customer actually interacts.\n\n- **[Canned Responses](https:\u002F\u002Fparlant.io\u002Fdocs\u002Fconcepts\u002Fcustomization\u002Fcanned-responses)** —\n  Pre-approved response templates that eliminate hallucination at critical moments.\n\n- **[Tools](https:\u002F\u002Fparlant.io\u002Fdocs\u002Fconcepts\u002Fcustomization\u002Ftools)** —\n  External APIs and workflows, triggered only when their observation matches.\n\n- **[Glossary](https:\u002F\u002Fparlant.io\u002Fdocs\u002Fconcepts\u002Fcustomization\u002Fglossary)** —\n  Domain-specific vocabulary so the agent understands customer language.\n\n- **[Explainability](https:\u002F\u002Fparlant.io\u002Fdocs\u002Fadvanced\u002Fexplainability)** —\n  Full OpenTelemetry tracing — every guideline match and decision is logged.\n\n## [Guidelines](https:\u002F\u002Fparlant.io\u002Fdocs\u002Fconcepts\u002Fcustomization\u002Fguidelines)\n\nBehavioral rules as condition-action pairs: when the condition applies, the action kicks into context.\n\nInstead of cramming all guidelines in a single prompt, the engine evaluates which ones apply on each conversational turn and only includes the relevant ones in the LLM's context.\n\nThis lets you define hundreds of guidelines without degrading adherence.\n\n```python\nawait agent.create_guideline(\n    
condition=\"customer uses financial terminology like DTI or amortization\",\n    action=\"respond with technical depth — skip basic explanations\",\n)\n```\n\n## [Relationships](https:\u002F\u002Fparlant.io\u002Fdocs\u002Fconcepts\u002Fcustomization\u002Frelationships)\n\nRelationships between elements help you keep the final context just right: narrow and focused.\n\n**Exclusion** relationships keep certain guidelines out of the model's attention when conflicting ones are matched.\n\n```python\nfor_experts = await agent.create_guideline(\n    condition=\"customer uses financial terminology\",\n    action=\"respond with technical depth\",\n)\n\nfor_beginners = await agent.create_guideline(\n    condition=\"customer seems new to the topic\",\n    action=\"simplify and use concrete examples\",\n)\n\n# In conflicting reads of the customer, set which takes priority\nawait for_beginners.exclude(for_experts)\n```\n\n**Dependency** relationships ensure a guideline only activates when another one has set the stage, helping you create _topic-based guideline hierarchies._\n\n```python\nsuspects_fraud = await agent.create_observation(\n    condition=\"customer suspects unauthorized transactions on their card\",\n)\n\nawait agent.create_guideline(\n    condition=\"customer wants to take action regarding the transaction\",\n    action=\"ask whether they want to dispute the transaction or lock the card\",\n    # Only activates when fraud suspicion has been established\n    dependencies=[suspects_fraud],\n)\n```\n\n## [Journeys](https:\u002F\u002Fparlant.io\u002Fdocs\u002Fconcepts\u002Fcustomization\u002Fjourneys)\n\nMulti-turn SOPs (Standard Operating Procedures). Define a flow for processes like booking, troubleshooting, or onboarding. 
The agent follows the flow but adapts — it can fast-forward states, revisit earlier ones, or adjust pace based on how the customer interacts.\n\n```python\njourney = await agent.create_journey(\n    title=\"Book Flight\",\n    description=\"Guide the customer through flight booking\",\n    conditions=[\"customer wants to book a flight\"],\n)\n\nt0 = await journey.initial_state.transition_to(\n    # Instruction to follow while in this state (could be multiple turns)\n    chat_state=\"See if they're interested in last-minute deals\",\n)\n\n# Branch A - not interested in deals\nt1 = await t0.target.transition_to(\n    chat_state=\"Determine where they want to go and when\",\n    condition=\"They aren't interested\",\n)\n\n# Branch B - interested in deals\nt2 = await t0.target.transition_to(\n    tool_state=load_latest_flight_deals,\n    condition=\"They are\",\n)\n\nt3 = await t2.target.transition_to(\n    chat_state=\"List deals and see if they're interested\",\n)\n```\n\n## [Canned Responses](https:\u002F\u002Fparlant.io\u002Fdocs\u002Fconcepts\u002Fcustomization\u002Fcanned-responses)\n\nAt critical moments or conversational events, limit the agent to using only pre-approved response templates.\n\nAfter running the matching sequence and drafting a message to the customer, the agent selects the template that best matches its generated draft instead of sending it directly, eliminating hallucination risk entirely and keeping wording exact to the letter.\n\n```python\nawait agent.create_guideline(\n    condition=\"The customer discusses things unrelated to our business\",\n    action=\"Tell them you can't help with that\",\n    # Strict composition mode triggers when this guideline\n    # matches - the rest of the agent stays fluid\n    composition_mode=p.CompositionMode.STRICT,\n    canned_responses=[\n        await agent.create_canned_response(\n            \"Sorry, but I can't help you with that.\"\n        )\n    ],\n    priority=100,  # Top priority, focuses the 
agent on this alone\n)\n```\n\n## [Tools](https:\u002F\u002Fparlant.io\u002Fdocs\u002Fconcepts\u002Fcustomization\u002Ftools)\n\nTools activate only when their observation matches; they don't sit in the context permanently. This prevents the false-positive invocations that plague traditional LLM tool setups.\n\n```python\n@p.tool\nasync def query_docs(context: p.ToolContext, user_query: str) -> p.ToolResult:\n    results = search_knowledge_base(user_query)\n    return p.ToolResult(results)\n\nawait agent.create_observation(\n    condition=\"customer asks about service features\",\n    tools=[query_docs],\n)\n```\n\nTools can also feed custom values into canned response templates.\n\n## [Glossary](https:\u002F\u002Fparlant.io\u002Fdocs\u002Fconcepts\u002Fcustomization\u002Fglossary)\n\nDomain-specific vocabulary for your agent. Map colloquial terms and synonyms to precise business definitions so the agent understands customer language.\n\n```python\nawait agent.create_term(\n    name=\"Ocean View\",\n    description=\"Room category with direct view of the Atlantic\",\n    synonyms=[\"sea view\", \"rooms with a view to the Atlantic\"],\n)\n```\n\n## [Explainability](https:\u002F\u002Fparlant.io\u002Fdocs\u002Fadvanced\u002Fexplainability)\n\nEvery decision is traced with OpenTelemetry. Parlant ships out of the box with elaborate logs, metrics, and traces.\n\n## Framework Integration\n\nParlant handles conversational governance; it doesn't replace your existing stack.\n\nUse it alongside frameworks like LangGraph, Agno, LlamaIndex, or others for workflow automation and knowledge retrieval. 
Parlant takes over the behavioral control layer while your framework of choice handles the rest of your agent's processing logic.\n\nAny external workflow or agent becomes a Parlant tool, triggered only when relevant:\n\n```python\nfrom my_workflows import refund_graph  # a compiled LangGraph StateGraph\n\n@p.tool\nasync def run_refund_workflow(\n  context: p.ToolContext,\n  order_id: str\n) -> p.ToolResult:\n    result = await refund_graph.ainvoke({\"order_id\": order_id})\n\n    # Graph result can inject both data and instructions into the agent.\n    # Instructions are transformed to guidelines, and participate\n    # in contextual guideline resolution (including prioritizations)\n\n    return p.ToolResult(\n        data=result[\"data\"],\n        # Inject dynamic guidelines from workflow result\n        guidelines=[\n            {\"action\": inst, \"priority\": 3} for inst in result[\"instructions\"]\n        ],\n    )\n\nawait agent.create_observation(\n    condition=\"customer wants to process a refund\",\n    tools=[run_refund_workflow],\n)\n```\n\nThe same pattern works with LlamaIndex query engines, Agno agents, or any async Python function.\n\n## LLM Agnostic\n\nParlant works with most LLM providers. The recommended provider is [Emcie](https:\u002F\u002Fwww.emcie.co), which delivers an ideal cost\u002Fquality value since it's built specifically for Parlant, though OpenAI and Anthropic deliver excellent quality outputs as well. 
You can also use any model and provider via LiteLLM, but choose a capable one: off-the-shelf models that are too small tend to produce inconsistent results.\n\nGenerally, you can swap models without changing behavioral configuration.\n\n## [Official React Chat Widget](https:\u002F\u002Fgithub.com\u002Femcie-co\u002Fparlant-chat-react)\n\nDrop-in chat component to get a frontend running immediately.\n\n## Learn more\n\n- **[How Parlant ensures compliance](https:\u002F\u002Fwww.parlant.io\u002Fblog\u002Fhow-parlant-guarantees-compliance)** — deep dive into the engine\n- **[Parlant vs LangGraph](https:\u002F\u002Fwww.parlant.io\u002Fblog\u002Fparlant-vs-langgraph)** — when to use which\n- **[Parlant vs DSPy](https:\u002F\u002Fwww.parlant.io\u002Fblog\u002Fparlant-vs-dspy)** — different tools for different problems\n\n## Community\n\n- **[Discord](https:\u002F\u002Fdiscord.gg\u002FduxWqxKk6J)** — ask questions, share what you're building\n- **[GitHub Issues](https:\u002F\u002Fgithub.com\u002Femcie-co\u002Fparlant\u002Fissues)** — bug reports and feature requests\n- **[Contact](https:\u002F\u002Fparlant.io\u002Fcontact)** — reach the engineering team directly\n\n**If Parlant helps you build better agents, [give it a star](https:\u002F\u002Fgithub.com\u002Femcie-co\u002Fparlant) — it helps others find the project.**\n\n## License\n\nApache 2.0 — free for commercial use.\n\n---\n\n\u003Cdiv align=\"center\">\n\n**[Try it now](https:\u002F\u002Fwww.parlant.io\u002Fdocs\u002Fquickstart\u002Finstallation)** &bull; **[Join Discord](https:\u002F\u002Fdiscord.gg\u002FduxWqxKk6J)** &bull; **[Read the docs](https:\u002F\u002Fwww.parlant.io\u002F)**\n\nBuilt by the team at **[Emcie](https:\u002F\u002Femcie.co)**\n\n\u003C\u002Fdiv>\n","\u003Cdiv align=\"center\">\n\n\u003Cpicture>\n  \u003Csource media=\"(prefers-color-scheme: dark)\" 
srcset=\"https:\u002F\u002Fgithub.com\u002Femcie-co\u002Fparlant\u002Fblob\u002Fdevelop\u002Fdocs\u002FLogoTransparentLight.png?raw=true\">\n  \u003Cimg alt=\"Parlant\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Femcie-co_parlant_readme_7b32078bcc73.png\" width=400 \u002F>\n\u003C\u002Fpicture>\n\n### 面向客户的AI智能体的对话控制层\n\n\u003Cp>\n  \u003Ca href=\"https:\u002F\u002Fpypi.org\u002Fproject\u002Fparlant\u002F\">\u003Cimg alt=\"PyPI\" src=\"https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Fparlant?color=blue\">\u003C\u002Fa>\n  \u003Cimg alt=\"Python 3.10+\" src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fpython-3.10+-blue\">\n  \u003Ca href=\"https:\u002F\u002Fopensource.org\u002Flicenses\u002FApache-2.0\">\u003Cimg alt=\"License\" src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Flicense-Apache%202.0-green\">\u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fdiscord.gg\u002FduxWqxKk6J\">\u003Cimg alt=\"Discord\" src=\"https:\u002F\u002Fimg.shields.io\u002Fdiscord\u002F1312378700993663007?color=7289da&logo=discord&logoColor=white\">\u003C\u002Fa>\n  \u003Cimg alt=\"GitHub Repo stars\" src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Femcie-co\u002Fparlant?style=social\">\n\u003C\u002Fp>\n\n\u003Cp>\n  \u003Ca href=\"https:\u002F\u002Fwww.parlant.io\u002F\" target=\"_blank\">官网\u003C\u002Fa> &bull;\n  \u003Ca href=\"https:\u002F\u002Fwww.parlant.io\u002Fdocs\u002Fquickstart\u002Finstallation\" target=\"_blank\">快速入门\u003C\u002Fa> &bull;\n  \u003Ca href=\"https:\u002F\u002Fwww.parlant.io\u002Fdocs\u002Fquickstart\u002Fexamples\" target=\"_blank\">示例\u003C\u002Fa> &bull;\n  \u003Ca href=\"https:\u002F\u002Fdiscord.gg\u002FduxWqxKk6J\" target=\"_blank\">Discord\u003C\u002Fa>\n\u003C\u002Fp>\n\n\u003Cp>\n  \u003Ca href=\"https:\u002F\u002Fzdoc.app\u002Fde\u002Femcie-co\u002Fparlant\">Deutsch\u003C\u002Fa> |\n  \u003Ca href=\"https:\u002F\u002Fzdoc.app\u002Fes\u002Femcie-co\u002Fparlant\">Español\u003C\u002Fa> |\n  \u003Ca 
href=\"https:\u002F\u002Fzdoc.app\u002Ffr\u002Femcie-co\u002Fparlant\">français\u003C\u002Fa> |\n  \u003Ca href=\"https:\u002F\u002Fzdoc.app\u002Fja\u002Femcie-co\u002Fparlant\">日本語\u003C\u002Fa> |\n  \u003Ca href=\"https:\u002F\u002Fzdoc.app\u002Fko\u002Femcie-co\u002Fparlant\">한국어\u003C\u002Fa> |\n  \u003Ca href=\"https:\u002F\u002Fzdoc.app\u002Fpt\u002Femcie-co\u002Fparlant\">Português\u003C\u002Fa> |\n  \u003Ca href=\"https:\u002F\u002Fzdoc.app\u002Fru\u002Femcie-co\u002Fparlant\">Русский\u003C\u002Fa> |\n  \u003Ca href=\"https:\u002F\u002Fzdoc.app\u002Fzh\u002Femcie-co\u002Fparlant\">中文\u003C\u002Fa>\n\u003C\u002Fp>\n\n\u003Ca href=\"https:\u002F\u002Ftrendshift.io\u002Frepositories\u002F12768\" target=\"_blank\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Femcie-co_parlant_readme_4a68feb902da.png\" alt=\"Trending\" style=\"width: 250px; height: 55px;\" width=\"250\" height=\"55\"\u002F>\n\u003C\u002Fa>\n\n\u003C\u002Fdiv>\n\n&nbsp;\n\n**Parlant简化了企业级B2C（企业对消费者）和需要保持一致性、合规性及品牌调性的敏感B2B交互中的对话上下文工程。**\n\n## 为什么选择Parlant？\n\n对话上下文工程之所以困难，是因为现实世界的交互是多样、细微且非线性的。\n\n### ❌ 问题：你可能已经尝试过但无法规模化落地的方法\n**系统提示词**在生产环境复杂度较低时有效。然而，当你向提示词中添加越来越多的指令时，你的智能体会逐渐忽略这些指令。\n\n**路由图**可以解决提示词过载的问题，但随着路由数量的增加，它在面对自然交互的混乱局面时会变得越来越脆弱。\n\n### 🔑 解决方案：针对对话控制优化的上下文工程\nParlant通过[上下文工程](https:\u002F\u002Fwww.gartner.com\u002Fen\u002Farticles\u002Fcontext-engineering)来解决这一难题：在恰当的时间将恰到好处的上下文注入到提示词中。你只需一次性定义规则、知识和工具；引擎会实时缩小上下文范围，使其仅包含当前对话轮次中直接相关的内容。\n\n\u003Cimg alt=\"Parlant Demo\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Femcie-co_parlant_readme_ee4a3954e8b1.gif\" width=\"100%\" \u002F>\n\n## 开始使用\n\n```bash\npip install parlant\n```\n\n```python\nimport parlant.sdk as p\n\nasync with p.Server() as server:\n    agent = await server.create_agent(\n        name=\"Customer Support\",\n        description=\"Handles customer inquiries for an airline\",\n    )\n\n    # 在特定条件下评估并调用工具\n    expert_customer = await agent.create_observation(\n        condition=\"customer 
uses financial terminology like DTI or amortization\",\n        tools=[research_deep_answer],\n    )\n\n    # 当专家观察成立时，始终以深度回应\n    # 将指南设置为自动匹配\n    # 只要其依赖的观察成立...\n    expert_answers = await agent.create_guideline(\n        matcher=p.MATCH_ALWAYS,\n        action=\"respond with technical depth\",\n        dependencies=[expert_customer],\n    )\n\n    beginner_answers = await agent.create_guideline(\n        condition=\"customer seems new to the topic\",\n        action=\"simplify and use concrete examples\",\n    )\n\n    # 当两者同时匹配时，优先执行初学者指南。无论是专家级别的工具数据还是指令，都不会进入智能体的上下文。\n    await beginner_answers.exclude(expert_customer)\n```\n\n请参阅**[5分钟快速入门](https:\u002F\u002Fwww.parlant.io\u002Fdocs\u002Fquickstart\u002Finstallation)**以获取完整教程。\n\n## Parlant概览\n\n你通过代码而非提示词来定义智能体的行为，引擎会在每一回合动态缩小上下文范围，只保留与当前对话直接相关的内容，从而使大模型保持专注，确保你的智能体始终如一地遵循既定目标。\n\n```mermaid\ngraph TD\n    O[Observations] -->|Events| E[Contextual Matching Engine]\n    G[Guidelines] -->|Instructions| E\n    J[\"Journeys (SOPs)\"] -->|Current Steps| E\n    R[Retrievers] -->|Domain Knowledge| E\n    GL[Glossary] -->|Domain Terms| E\n    V[Variables] -->|Memories| E\n    E -->|Tool Requests| T[Tool Caller]\n    T -.->|Results + Optional Extra Matching Iterations| E\n    T -->|**Key Result:**\u003Cbr\u002F>Focused Context Window| M[Message Generation]\n```\n\n与先向模型发送一个庞大的系统提示词，再附上原始对话内容不同，Parlant会首先构建一个聚焦的上下文——仅匹配与每一轮对话相关的指令和工具——然后基于这个精简后的上下文生成响应。\n\n```mermaid\n%%{init: {'theme': 'base', 'themeVariables': {'primaryColor': '#e8f5e9', 'primaryTextColor': '#1b5e20', 'primaryBorderColor': '#81c784', 'lineColor': '#66bb6a', 'secondaryColor': '#fff9e1', 'tertiaryColor': 'transparent'}}}%%\nflowchart LR\n    A(User):::outputNode\n\n    subgraph Engine[\"Parlant Engine\"]\n        direction LR\n        B[\"Match Guidelines and Resolve Journey States\"]:::matchNode\n        C[\"Call Contextually-Associated Tools and Workflows\"]:::toolNode\n        D[\"Generated Message\"]:::composeNode\n        E[\"Canned 
Message\"]:::cannedNode\n    end\n\n    A a@-->|💬 User Input| B\n    B b@--> C\n    C c@-->|Fluid Output Mode?| D\n    C d@-->|Strict Output Mode?| E\n    D e@-->|💬 Fluid Output| A\n    E f@-->|💬 Canned Output| A\n\n    a@{animate: true}\n    b@{animate: true}\n    c@{animate: true}\n    d@{animate: true}\n    e@{animate: true}\n    f@{animate: true}\n\n    linkStyle 2 stroke-width:2px\n    linkStyle 4 stroke-width:2px\n    linkStyle 3 stroke-width:2px,stroke:#3949AB\n    linkStyle 5 stroke-width:2px,stroke:#3949AB\n\n    classDef composeNode fill:#F9E9CB,stroke:#AB8139,stroke-width:2px,color:#7E5E1A,stroke-width:0\n    classDef cannedNode fill:#DFE3F9,stroke:#3949AB,stroke-width:2px,color:#1a237e,stroke-width:0\n```\n\n这样一来，即使添加更多规则，智能体也会变得更加智能，而不会陷入混乱——因为过滤上下文相关性的主体是引擎，而不是大模型。\n\n## Parlant 适合您吗？\n\nParlant 专为需要在真实客户面前保持可靠表现的团队打造。如果您符合以下情况，Parlant 就是理想选择：\n\n- 您正在构建面向客户的智能助手——例如客服、销售、用户引导或咨询服务——其中语气、准确性和合规性至关重要。\n- 您拥有数十甚至数百条行为规则，而系统提示词已不堪重负。\n- 您所在的行业属于受监管或高风险领域（如金融、保险、医疗、电信），要求每一条回复都可解释且可审计。\n\n**_Parlant 已在包括银行在内的最严格机构中投入生产环境使用。_**\n\n> _Parlant 不仅仅是一个框架，它是一款高层次的软件，直面解决对话式建模难题。_\n> — **Sarthak Dalabehera**，Slice Bank 首席工程师\n\n> _迄今为止，我遇到过的最优雅的对话式 AI 框架。_\n> — **Vishal Ahuja**，摩根大通应用 AI 高级主管\n\n> _Parlant 极大地减少了对提示工程和复杂流程控制的需求，使智能助手的构建更接近领域建模。_\n> — **Diogo Santiago**，Oracle AI 工程师\n\n## 核心功能\n\n- **[指南](https:\u002F\u002Fparlant.io\u002Fdocs\u002Fconcepts\u002Fcustomization\u002Fguidelines)** —\n  行为规则以条件-动作对的形式定义；引擎仅匹配当前轮次相关的规则。\n\n- **[关系](https:\u002F\u002Fparlant.io\u002Fdocs\u002Fconcepts\u002Fcustomization\u002Frelationships)** —\n  指南之间的依赖与排除关系，确保上下文始终聚焦且不冗余。\n\n- **[旅程](https:\u002F\u002Fparlant.io\u002Fdocs\u002Fconcepts\u002Fcustomization\u002Fjourneys)** —\n  多轮次标准操作流程，可根据客户的实际交互方式动态调整。\n\n- **[预设回复](https:\u002F\u002Fparlant.io\u002Fdocs\u002Fconcepts\u002Fcustomization\u002Fcanned-responses)** —\n  经过批准的回复模板，在关键场景下彻底避免幻觉问题。\n\n- **[工具](https:\u002F\u002Fparlant.io\u002Fdocs\u002Fconcepts\u002Fcustomization\u002Ftools)** —\n  外部 API 
和工作流仅在观察结果匹配时才会触发。\n\n- **[术语表](https:\u002F\u002Fparlant.io\u002Fdocs\u002Fconcepts\u002Fcustomization\u002Fglossary)** —\n  领域专属词汇表，帮助智能助手理解客户的语言。\n\n- **[可解释性](https:\u002F\u002Fparlant.io\u002Fdocs\u002Fadvanced\u002Fexplainability)** —\n  全链路 OpenTelemetry 跟踪——每次规则匹配及决策都会被完整记录。\n\n## [指南](https:\u002F\u002Fparlant.io\u002Fdocs\u002Fconcepts\u002Fcustomization\u002Fguidelines)\n\n行为规则以条件-动作对的形式定义：当条件满足时，动作即被激活并纳入上下文。\n\n与将所有指南塞入单一提示词不同，引擎会在每一轮对话中评估哪些指南适用，并仅将相关部分注入大模型的上下文中。\n\n这种方式允许您定义数百条指南，而不会降低执行一致性。\n\n```python\nawait agent.create_guideline(\n    condition=\"客户使用 DTI 或摊销等金融术语\",\n    action=\"以技术深度回应，省略基础解释\",\n)\n```\n\n## [关系](https:\u002F\u002Fparlant.io\u002Fdocs\u002Fconcepts\u002Fcustomization\u002Frelationships)\n\n元素间的关系有助于精确控制最终上下文，使其既紧凑又聚焦。\n\n**排除**关系可在存在冲突的指南被匹配时，将某些指南排除在模型关注范围之外。\n\n```python\nfor_experts = await agent.create_guideline(\n    condition=\"客户使用金融术语\",\n    action=\"以技术深度回应\",\n)\n\nfor_beginners = await agent.create_guideline(\n    condition=\"客户似乎对该主题较为陌生\",\n    action=\"简化表达并使用具体示例\",\n)\n\n# 在客户表述存在冲突时，设定优先级\nawait for_beginners.exclude(for_experts)\n```\n\n**依赖**关系则确保某条指南仅在另一条指南先行设置前提后才会生效，从而帮助您构建基于主题的指南层级结构。\n\n```python\nsuspects_fraud = await agent.create_observation(\n    condition=\"客户怀疑其银行卡存在未经授权的交易\",\n)\n\nawait agent.create_guideline(\n    condition=\"客户希望就该笔交易采取行动\",\n    action=\"询问客户是希望对交易提出异议还是锁定卡片\",\n    # 仅在确认欺诈嫌疑后才会触发\n    dependencies=[suspects_fraud],\n)\n```\n\n## [旅程](https:\u002F\u002Fparlant.io\u002Fdocs\u002Fconcepts\u002Fcustomization\u002Fjourneys)\n\n多轮次标准操作流程。您可以为预订、故障排除或用户引导等流程定义一套完整的对话流程。智能助手会遵循该流程，但同时具备自适应能力——它能够跳过某些步骤、返回之前的状态，或根据客户的互动方式调整节奏。\n\n```python\njourney = await agent.create_journey(\n    title=\"预订航班\",\n    description=\"引导客户完成航班预订\",\n    conditions=[\"客户希望预订航班\"],\n)\n\nt0 = await journey.initial_state.transition_to(\n    # 该状态下的指令（可能持续多轮）\n    chat_state=\"先了解客户是否对特价机票感兴趣\",\n)\n\n# 分支 A - 不感兴趣\nt1 = await t0.target.transition_to(\n    chat_state=\"确定客户的出发地和出行时间\",\n    
condition=\"客户表示不感兴趣\",\n)\n\n# 分支 B - 对特价机票感兴趣\nt2 = await t0.target.transition_to(\n    tool_state=load_latest_deals,  # 加载最新特价机票（示例工具名，需事先用 @p.tool 定义）\n    condition=\"客户表示感兴趣\",\n)\n\n# 分支 B 的后续步骤：列出已加载的特价机票\nt3 = await t2.target.transition_to(\n    chat_state=\"列出特价机票并询问客户是否感兴趣\",\n)\n```\n\n## [预设回复](https:\u002F\u002Fparlant.io\u002Fdocs\u002Fconcepts\u002Fcustomization\u002Fcanned-responses)\n\n在关键对话时刻或特定事件发生时，限制智能助手仅使用预先批准的回复模板。\n\n在完成规则匹配并生成初步回复后，智能助手会从预设模板中挑选最匹配的一项，而非直接发送生成内容，从而完全消除幻觉风险，并确保措辞精准无误。\n\n```python\nawait agent.create_guideline(\n    condition=\"客户谈论与我们业务无关的话题\",\n    action=\"告知客户无法提供帮助\",\n    # 当此指南匹配时，将触发严格组合模式——其余部分仍保持灵活\n    composition_mode=p.CompositionMode.STRICT,\n    canned_responses=[\n        await agent.create_canned_response(\n            \"很抱歉，这件事我帮不上忙。\",\n        )\n    ],\n    priority=100,  # 最高优先级，使智能助手专注于此任务\n)\n```\n\n## [工具](https:\u002F\u002Fparlant.io\u002Fdocs\u002Fconcepts\u002Fcustomization\u002Ftools)\n\n工具仅在观察结果匹配时才会被激活，而不会一直保留在上下文中。这有效避免了传统大模型工具调用中常见的误触发问题。\n\n```python\n@p.tool\nasync def query_docs(context: p.ToolContext, user_query: str) -> p.ToolResult:\n    results = search_knowledge_base(user_query)\n    return p.ToolResult(results)\n\nawait agent.create_observation(\n    condition=\"客户询问服务功能\",\n    tools=[query_docs],\n)\n```\n\n此外，工具还可以向预设回复模板注入自定义值。\n\n## [术语表](https:\u002F\u002Fparlant.io\u002Fdocs\u002Fconcepts\u002Fcustomization\u002Fglossary)\n\n为您的智能体定义领域特定的词汇。将日常用语和同义词映射到精确的业务定义，以便智能体能够理解客户的语言。\n\n```python\nawait agent.create_term(\n    name=\"Ocean View\",\n    description=\"Room category with direct view of the Atlantic\",\n    synonyms=[\"sea view\", \"rooms with a view to the Atlantic\"],\n)\n```\n\n## [可解释性](https:\u002F\u002Fparlant.io\u002Fdocs\u002Fadvanced\u002Fexplainability)\n\n每个决策都会通过 OpenTelemetry 进行追踪。Parlant 开箱即用地提供了详尽的日志、指标和追踪信息。\n\n## 框架集成\n\nParlant 负责对话治理；它不会取代您现有的技术栈。\n\n您可以将其与 LangGraph、Agno、LlamaIndex 等框架一起使用，以实现工作流自动化和知识检索。Parlant 负责行为控制层，而您选择的框架则负责智能体其余的处理逻辑。\n\n任何外部工作流或智能体都可以成为 Parlant 的工具，仅在相关时才会被触发：\n\n```python\nfrom 
my_workflows import refund_graph  # 已编译的 LangGraph StateGraph\n\n@p.tool\nasync def run_refund_workflow(\n  context: p.ToolContext,\n  order_id: str\n) -> p.ToolResult:\n    result = await refund_graph.ainvoke({\"order_id\": order_id})\n\n    # 图结果可以将数据和指令注入到智能体中。\n    # 指令会被转换为指南，并参与上下文相关的指南解析（包括优先级排序）。\n\n    return p.ToolResult(\n        data=result[\"data\"],\n        # 从工作流结果中注入动态指南\n        guidelines=[\n            {\"action\": inst, \"priority\": 3} for inst in result[\"instructions\"]\n        ],\n    )\n\nawait agent.create_observation(\n    condition=\"customer wants to process a refund\",\n    tools=[run_refund_workflow],\n)\n```\n\n同样的模式也适用于 LlamaIndex 查询引擎、Agno 智能体，或任何异步 Python 函数。\n\n## 大模型无关\n\nParlant 兼容大多数大模型提供商。推荐使用 [Emcie](https:\u002F\u002Fwww.emcie.co)，因为它专为 Parlant 打造，能够提供理想的成本效益比；不过，OpenAI 和 Anthropic 也能输出高质量的结果。您也可以通过 LiteLLM 使用任何模型和提供商，但需要确保模型质量足够好——过于小型的现成模型往往会产生不一致的结果。\n\n通常，您可以在不更改行为配置的情况下切换不同的模型。\n\n## [官方 React 聊天组件](https:\u002F\u002Fgithub.com\u002Femcie-co\u002Fparlant-chat-react)\n\n即插即用的聊天组件，可让您快速搭建前端界面。\n\n## 了解更多\n\n- **[Parlant 如何确保合规性](https:\u002F\u002Fwww.parlant.io\u002Fblog\u002Fhow-parlant-guarantees-compliance)** — 对引擎的深入剖析\n- **[Parlant 与 LangGraph 的比较](https:\u002F\u002Fwww.parlant.io\u002Fblog\u002Fparlant-vs-langgraph)** — 何时使用哪一种\n- **[Parlant 与 DSPy 的比较](https:\u002F\u002Fwww.parlant.io\u002Fblog\u002Fparlant-vs-dspy)** — 不同工具解决不同问题\n\n## 社区\n\n- **[Discord](https:\u002F\u002Fdiscord.gg\u002FduxWqxKk6J)** — 提问、分享您的项目进展\n- **[GitHub Issues](https:\u002F\u002Fgithub.com\u002Femcie-co\u002Fparlant\u002Fissues)** — 用于报告 bug 和提出功能请求\n- **[联系我们](https:\u002F\u002Fparlant.io\u002Fcontact)** — 直接联系工程团队\n\n**如果您觉得 Parlant 帮助您构建了更优秀的智能体，请为它[点个 Star](https:\u002F\u002Fgithub.com\u002Femcie-co\u002Fparlant)**——这有助于更多人发现这个项目。\n\n## 许可证\n\nApache 2.0 — 免费用于商业用途。\n\n---\n\n\u003Cdiv align=\"center\">\n\n**[立即试用](https:\u002F\u002Fwww.parlant.io\u002Fdocs\u002Fquickstart\u002Finstallation)** &bull; **[加入 
Discord](https:\u002F\u002Fdiscord.gg\u002FduxWqxKk6J)** &bull; **[阅读文档](https:\u002F\u002Fwww.parlant.io\u002F)**\n\n由 **[Emcie](https:\u002F\u002Femcie.co)** 团队打造\n\n\u003C\u002Fdiv>","# Parlant 快速上手指南\n\nParlant 是一个专为面向客户的 AI 代理设计的**对话控制层**。它通过“上下文工程”技术，动态筛选与当前对话轮次最相关的规则、知识和工具，解决传统 System Prompt 过长导致模型注意力分散的问题，特别适用于金融、医疗等对合规性和一致性要求极高的场景。\n\n## 环境准备\n\n在开始之前，请确保您的开发环境满足以下要求：\n\n*   **操作系统**：Linux, macOS 或 Windows\n*   **Python 版本**：3.10 或更高版本 (`python3.10+`)\n*   **依赖管理**：推荐使用 `pip` 或 `venv` 虚拟环境\n\n## 安装步骤\n\n使用 pip 直接安装最新版本的 Parlant：\n\n```bash\npip install parlant\n```\n\n> **提示**：国内开发者若遇到下载速度慢的问题，可使用清华或阿里镜像源加速安装：\n> ```bash\n> pip install parlant -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n> ```\n\n## 基本使用\n\nParlant 的核心逻辑是通过代码定义**观察 (Observations)** 和 **准则 (Guidelines)**，引擎会自动根据对话上下文动态组装 Prompt。\n\n以下是一个最简单的异步使用示例，展示如何创建一个客服代理，并根据客户是否使用专业术语来调整回答深度：\n\n```python\nimport parlant.sdk as p\n\nasync with p.Server() as server:\n    # 1. 创建代理实例\n    agent = await server.create_agent(\n        name=\"Customer Support\",\n        description=\"Handles customer inquiries for an airline\",\n    )\n\n    # 2. 定义观察条件 (Observation)\n    # 当检测到客户使用金融术语（如 DTI 或摊销）时触发\n    expert_customer = await agent.create_observation(\n        condition=\"customer uses financial terminology like DTI or amortization\",\n        tools=[research_deep_answer], # 可选：关联特定工具（需事先用 @p.tool 定义）\n    )\n\n    # 3. 定义行为准则 (Guideline) - 专家模式\n    # 依赖上述观察，一旦匹配始终执行深度回答\n    expert_answers = await agent.create_guideline(\n        matcher=p.MATCH_ALWAYS,\n        action=\"respond with technical depth\",\n        dependencies=[expert_customer],\n    )\n\n    # 4. 定义行为准则 (Guideline) - 新手模式\n    # 当客户看起来是新手时，简化回答并使用具体例子\n    beginner_answers = await agent.create_guideline(\n        condition=\"customer seems new to the topic\",\n        action=\"simplify and use concrete examples\",\n    )\n\n    # 5. 
设置互斥关系\n    # 当两者都匹配时，优先“新手模式”，排除“专家模式”的上下文干扰\n    await beginner_answers.exclude(expert_customer)\n    \n    # 此时代理已配置完成，可接入对话流\n```\n\n### 核心概念简述\n*   **Observations (观察)**：定义触发特定行为的条件（如用户意图、关键词）。\n*   **Guidelines (准则)**：定义“条件 - 动作”对，只有匹配当前上下文的准则才会被送入 LLM。\n*   **Relationships (关系)**：通过 `exclude` (互斥) 和 `dependencies` (依赖) 管理准则间的优先级，确保上下文精准聚焦。\n\n更多详细用法请参考官方 [5 分钟快速入门教程](https:\u002F\u002Fwww.parlant.io\u002Fdocs\u002Fquickstart\u002Finstallation)。","某大型航空公司的客服团队正在部署 AI 助手，以处理从简单航班查询到复杂退改签及金融术语咨询的各类客户请求。\n\n### 没有 Parlant 时\n- **指令失效**：随着系统提示词（System Prompts）中堆砌的规则越来越多，AI 开始忽略关键约束，导致回复偏离品牌语调或遗漏合规声明。\n- **流程脆弱**：依赖硬编码的路由图谱难以应对用户自然的非线性对话，一旦用户突然切换话题，对话逻辑极易断裂或陷入死循环。\n- **上下文过载**：无论用户问题简单与否，AI 都被迫加载全部知识库和工具列表，不仅响应延迟高，还常因信息噪音产生幻觉。\n- **维护噩梦**：每次更新业务规则或新增工具，都需要重新调整庞大的提示词结构，测试成本极高且容易引入新漏洞。\n\n### 使用 Parlant 后\n- **动态精准控场**：Parlant 通过上下文工程，仅在检测到特定条件（如用户提及“摊销”或“DTI”等专业术语）时，才动态激活相应的专家工具和深度回复指南。\n- **自适应对话流**：不再依赖僵化的路由图，Parlant 实时筛选当前轮次最相关的上下文，让 AI 能灵活跟随用户跳跃的思维，保持对话连贯自然。\n- **性能与准确性双升**：通过“按需注入”原则，大幅减少无效上下文干扰，显著降低延迟并杜绝了因信息过载导致的胡编乱造。\n- **规则解耦易扩展**：开发人员只需定义独立的观察条件和行为准则，无需触碰核心提示词，即可快速迭代业务逻辑，确保企业级合规性。\n\nParlant 将原本脆弱的提示词堆砌转化为智能的动态上下文控制层，让面向客户的 AI 代理在复杂多变的真实对话中始终保持一致、合规且专业。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Femcie-co_parlant_ee4a3954.gif","emcie-co","Emcie","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Femcie-co_4211b47e.png","Extremely Cool 
Technologies",null,"https:\u002F\u002Fwww.emcie.co","https:\u002F\u002Fgithub.com\u002Femcie-co",[83,87,91,95,99,103,107,110,113],{"name":84,"color":85,"percentage":86},"Python","#3572A5",91.3,{"name":88,"color":89,"percentage":90},"Gherkin","#5B2063",4.7,{"name":92,"color":93,"percentage":94},"TypeScript","#3178c6",3.6,{"name":96,"color":97,"percentage":98},"CSS","#663399",0.2,{"name":100,"color":101,"percentage":102},"JavaScript","#f1e05a",0.1,{"name":104,"color":105,"percentage":106},"Shell","#89e051",0,{"name":108,"color":109,"percentage":106},"SCSS","#c6538c",{"name":111,"color":112,"percentage":106},"Dockerfile","#384d54",{"name":114,"color":115,"percentage":106},"HTML","#e34c26",17868,1513,"2026-04-03T15:35:37","Apache-2.0","未说明",{"notes":122,"python":123,"dependencies":124},"该工具是一个用于构建面向客户 AI 代理的对话控制层框架，通过 pip install parlant 安装。它侧重于上下文工程（Context Engineering），动态筛选与当前对话相关的规则和工具，而非依赖庞大的系统提示词。支持异步操作（async\u002Fawait）。具体操作系统、GPU 及内存需求未在 README 中明确列出，通常此类纯逻辑编排框架对硬件无特殊要求，主要取决于后端调用的大模型服务。","3.10+",[67],[26,15,13],[127,128,129,130,131,132,133,134,135,136,137],"ai-agents","genai","llm","customer-service","customer-success","gemini","llama3","openai","python","ai-alignment","hacktoberfest","2026-03-27T02:49:30.150509","2026-04-06T08:45:48.677921",[141,146,151,156,161,165],{"id":142,"question_zh":143,"answer_zh":144,"source_url":145},14139,"如何配置 Ollama 的自定义基础 URL（OLLAMA_BASE_URL）和安装相关依赖？","要使用自定义的 Ollama 服务地址，请确保参考官方文档进行配置。具体文档地址为：https:\u002F\u002Fgithub.com\u002Femcie-co\u002Fparlant\u002Fblob\u002Fdevelop\u002Fdocs\u002Fadapters\u002Fnlp\u002Follama.md。此外，如果需要支持 Ollama 功能，请通过命令 `pip install 'parlant[ollama]'` 安装包含 Ollama 支持的额外依赖包。","https:\u002F\u002Fgithub.com\u002Femcie-co\u002Fparlant\u002Fissues\u002F544",{"id":147,"question_zh":148,"answer_zh":149,"source_url":150},14140,"调用 create_guideline() 时服务器挂起或卡在“Caching entity embeddings”进度条不动怎么办？","这通常是因为在异步代码中调用 `create_guideline()` 或其他类似方法时遗漏了 `await` 关键字。请检查您的初始化代码，确保所有异步方法调用前都加上了 `await`。例如，应使用 `await 
agent.create_guideline(...)` 而不是 `agent.create_guideline(...)`。另外，如果您使用了 AI 生成的代码骨架，检查是否在不该使用的地方添加了 `event.wait()` 导致事件循环阻塞。","https:\u002F\u002Fgithub.com\u002Femcie-co\u002Fparlant\u002Fissues\u002F579",{"id":152,"question_zh":153,"answer_zh":154,"source_url":155},14141,"当 NLP 提供商（如 Gemini）返回配额耗尽或错误请求时，Parlant 无限重试并刷屏报错，如何处理？","目前社区建议的解决方案是引入一种机制来标记已耗尽的 API 密钥以避免无效重试。具体思路包括：1. 在服务模块中设置一个状态标志（如 `resource_exhausted`）；2. 创建一个新的策略类，在重试逻辑之前捕获异常并检查该标志；3. 如果检测到资源耗尽，直接抛出清晰的错误并停止后续重试，而不是打印难以阅读的 JSON 错误堆栈。维护者已将此改进列入路线图，旨在让系统在遇到此类错误时能根据元数据（如 retry-after）智能决定重试策略。","https:\u002F\u002Fgithub.com\u002Femcie-co\u002Fparlant\u002Fissues\u002F531",{"id":157,"question_zh":158,"answer_zh":159,"source_url":160},14142,"如何连接自托管的 OpenAI API 兼容服务器（如 vLLM 或 LM Studio）？","虽然这是一个功能增强请求，但目前的解决思路是通过配置允许传递自定义的 `base_url` 参数。用户可以期望在 `OpenAISchematicGenerator` 类或通过环境变量（如 `OPENAI_BASE_URL`）指定自托管服务的地址。这将使 AsyncClient 实例连接到本地的 vLLM 或 Triton 推理服务器，而无需编写自定义适配器。请关注后续版本更新以获取原生支持。","https:\u002F\u002Fgithub.com\u002Femcie-co\u002Fparlant\u002Fissues\u002F359",{"id":162,"question_zh":163,"answer_zh":164,"source_url":150},14143,"为什么添加指南（guidelines）和旅程（journeys）后服务器无法完成缓存启动？","如果在添加指南和旅程后服务器卡在缓存步骤，首先应确认代码中是否正确使用了 `await` 异步调用。如果重启服务器后问题依旧出现在相同位置，极有可能是代码逻辑中存在同步阻塞操作（如缺少 await 或错误的等待机制）。请审查调用 `create_guideline` 或相关嵌入缓存函数的代码段，确保它们被正确地异步执行。",{"id":166,"question_zh":167,"answer_zh":168,"source_url":155},14144,"遇到 API 密钥限额耗尽时，如何避免日志被相同的错误信息淹没？","理想的处理方式是系统能够识别特定的错误元数据（如 \"error_code\" 或 \"status\"），并在检测到密钥耗尽时停止重试。在当前版本中，开发者可以通过自定义策略来实现这一点：在生成器策略链的前端加入一个检查函数，一旦捕获到资源耗尽异常，立即标记该服务不可用并抛出单一、清晰的错误信息，从而阻止后续的重复重试和日志刷屏。",[170,175,180,185,190,195,200,205,210,215,220,225,230,235,240,245,250,255,260,265],{"id":171,"version":172,"summary_zh":173,"released_at":174},80868,"v3.3.0","## [3.3.0] - 2026-03-15\n\n### 新增功能\n\n- 通过 `Server.create_agent(planner=...)` 添加按代理配置的规划器，允许每个代理使用自定义的 `Planner` 实现。\n- 在 `Guideline` 和 `Tag` 上的 `depend_on()`、`exclude()` 和 `prioritize_over()` 方法中，支持将 `Tag` 作为目标，从而建立针对所有共享特定标签的指南之间的依赖和优先级关系。\n- 在 
SDK 中新增 `Tag.depend_on()`、`Tag.exclude()` 和 `Tag.prioritize_over()` 方法，支持基于标签的指南和旅程之间的依赖与优先级关系。\n- 在关系解析器中，支持将自定义 TAG 作为 DEPENDENCY 关系的源。\n- 在 `Agent` 和 `Journey` 的 `create_guideline`、`create_observation` 和 `create_journey` 方法中添加 `tags` 参数，允许在创建实体时为其附加自定义标签。\n- 在 SDK 中新增 `Tag.reevaluate_after()` 方法，支持基于标签的工具重新评估关系。\n- 引入引擎中的标签驱动型重新评估功能：当某个工具触发时，所有与该工具存在重新评估关系的标签所关联的指南都会被重新评估。\n- 在 SDK 的 `GuidelineMatchingContext` 中新增 `staged_events` 字段。\n- 为指南和旅程添加 `priority` 属性，用于在关系解析器中进行基于优先级的筛选。\n- 新增暂态指南（原名“工具提供的指南”），允许工具动态地将行为指南注入到代理的上下文中。\n- 在 SDK 中新增 `Agent.utter()` 方法，支持使用暂态指南以编程方式生成代理消息。\n- 在 SDK 中新增 `Customer.update()` 和 `CustomerMetadata`，允许工具更新客户名称和元数据。\n- 在 SDK 中新增 `Session.update()`、`SessionMetadata` 和 `SessionLabels`，允许工具更新会话属性、元数据和标签。\n- 在 SDK 的 `Session` 类中新增 `customer`、`agent`、`mode` 和 `title` 属性。\n- 在 SDK 中新增 `Server.get_tag()` 方法，支持通过 `id` 或 `name` 进行查找。\n- 通过可选的 `name` 查询参数，在 `TagStore.list_tags()` 和 `GET \u002Ftags` API 端点中实现基于名称的过滤。\n- 在 `TagStore` 中强制执行标签名称唯一性，若尝试创建同名标签则会抛出错误。\n\n### 变更内容\n\n- 将感知性能策略中的扩展思考指示器设为可选。\n- 修改 `Tag` 和 `Guideline` 上的 `reevaluate_after()` 方法，使其接受多个工具（`*tools`）并返回 `Sequence[Relationship]`。\n- 将 SDK 中 `Guideline`、`Journey`、`Capability`、`Term`、`Variable`、 `Customer` 和 `Agent` 的 `tags` 字段类型由 `Sequence[TagId]` 更改为 `Sequence[Tag]`。\n- 修改 `Tag.preamble()` 方法，使其返回完整的 `Tag` 对象，而非仅返回 `TagId`。\n- 升级 MCP 服务并提高依赖版本，以解决安全漏洞。\n\n### 已弃用\n\n- OpenAPI 工具服务现已弃用；请迁移到 SDK 工具服务。\n\n### 修复内容\n\n- 修复在前言之后立即发送新消息时出现的死锁问题。\n- 修复关系解析器中针对自定义标签依赖目标（指南依赖）的传递性筛选问题。","2026-03-15T15:30:56",{"id":176,"version":177,"summary_zh":178,"released_at":179},80869,"v3.2.1","## [3.2.1] - 2026-02-17\n\n### 新增\n\n- 在指南、观察和旅程的创建方法中添加可选的 `dependencies` 参数\n- 为指南和旅程中的 `prioritize_over()` 方法添加 `exclude()` 别名\n- 在 `create_observation` 方法中添加 `tools` 参数\n\n### 变更\n\n- 弃用 `attach_tool()`，改用带有 `tools` 参数的 `create_guideline()` 和 `create_observation()`\n\n### 修复\n\n- 在重组预设回复时保留草稿消息的语言\n- 修复在设置过程中发生异常时服务器卡死的问题\n- 
修复预设回复字段提取逻辑，使其能够正确处理假值","2026-02-17T17:13:48",{"id":181,"version":182,"summary_zh":183,"released_at":184},80870,"v3.2.0","## [3.2.0] - 2026-02-09\n\n### 新增功能\n\n- 为指南、旅程、旅程节点和会话添加标签，用于分类和筛选\n- 增加从匹配实体（指南、观察、旅程）自动传播会话标签的功能\n- 在指南中新增 `track` 参数，用于控制“是否已应用”的跟踪逻辑\n- 支持在 `prioritize_over()` 和 `depend_on()` 方法中指定多个目标\n- 为预设回复新增 `field_dependencies`，以明确字段的可用性要求\n- 为指南、旅程和旅程状态新增 `attach_retriever()` 方法，用于条件性数据检索\n- 为旅程新增 `on_match` 和 `on_message` 钩子，用于生命周期回调\n- 增加针对每个客服人员的前言配置（自定义示例和说明）\n- 为流式模式下的首次客服消息，新增独立的默认问候回复\n- 新增流式消息输出模式\n- 允许指定自定义的旅程节点 ID\n- 将匹配的指南\u002F旅程状态添加到完成就绪事件中\n\n### 变更\n\n- 使 SDK 指南中的条件变为可选\n- 调整默认前言示例\n- 降低关系型指南解析器的日志级别\n- 在自定义指南匹配批次中增加激活\u002F跳过日志\n\n### 修复\n\n- 修复启动时的 WebSocket 警告\n- 修复客服意图提出器（指南被错误重写的问题）\n- 修复多个客户指南匹配器无法正常工作的问题\n- 修复 SDK 中上下文变量访问的 bug","2026-02-09T08:34:14",{"id":186,"version":187,"summary_zh":188,"released_at":189},80871,"v3.1.0","## [3.1.0] - 2026-01-05\n\n### 新增功能\n\n- 在 SDK 中为 Server、Agent 和 Customer 添加 .current 属性\n- 添加 \u002Fhealthz 端点\n- 添加会话元数据的 CRUD 操作 API\n- 添加 EmcieService 服务\n- 添加 GLM 服务\n- 添加 Mistral 服务\n- 添加 OpenRouter 服务\n- 添加 OpenTelemetry 集成，支持 Meter、Logger 和 Tracer\n- 添加 Qdrant VectorDatabase 适配器\n- 添加 Snowflake Cortex 服务\n- 增加配置和扩展 FastAPI 应用对象的能力\n- 添加延迟检索器\n- 添加动态组合模式\n- 添加后续预设回复功能\n- 添加指南关键性级别\n- 添加指南 on_match() 钩子\n- 添加上下文变量值的持久化选项（变量存储）\n- 增加指南描述字段\n- 允许通过钩子跳过预设回复选择，并直接使用草稿内容\n- 允许通过环境变量控制工具结果负载的最大值\n- 允许按代理分别控制感知性能策略\n- 允许旅程从一个工具状态过渡到另一个工具状态\n- 允许在通过 SDK 和 API 创建代理时指定自定义 ID\n- 允许在通过 SDK 和 API 创建客户时指定自定义 ID\n- 允许在通过 SDK 和 API 创建指南、旅程和术语表条目时指定自定义 ID\n- 在服务器对象中暴露 IoC 容器\n- 支持为匹配的指南和旅程状态添加自定义 canrep 字段\n- 支持基于代码的自定义指南匹配器\n\n### 变更\n\n- 将默认的 NLPService 更改为 EmcieService\n- 提升了当首个状态为工具状态时，旅程状态匹配的效率\n- 将 ContextualCorrelator 重命名为 Tracer\n- 将 LoadedContext 重命名为 EngineContext\n- 支持 LiteLLM 的代理 URL\n\n### 修复\n\n- 修复响应分析过程中取消操作导致的严重 bug\n- 修复 TransientVectorDatabase 中的关键相似度计算错误\n- 修复在某些边缘情况下对旅程和工具进行不必要的额外评估问题\n- 通过使用函数调用技巧替代结构化输出，提升了 Gemini Flash 2.5 
输出的一致性","2026-01-05T13:28:44",{"id":191,"version":192,"summary_zh":193,"released_at":194},80872,"v3.0.4","## [3.0.4] - 2025-11-18\n\n### 修复\n\n- 修复当无过滤条件匹配时 NanoDB 查询失败的 bug\n- 扩展工具洞察功能，使其跨迭代生效\n- 将已弃用的 status.HTTP_422_UNPROCESSABLE_ENTITY 更改为 status.HTTP_422_UNPROCESSABLE_CONTENT\n- 通过添加缺失的 websocket-client 依赖项修复损坏的 CLI\n- 添加用于嵌入器初始化的特定类\n- 在 OllamaEmbedder 中仅设置一次基础 URL\n- 更新依赖以提升安全性，升级 FastAPI，并修复 hugging_face.py 中的 mypy 错误\n- 升级 torch 以修复漏洞","2025-11-18T22:08:21",{"id":196,"version":197,"summary_zh":198,"released_at":199},80873,"v3.0.3","### 修复\n\n- 修复在某些环境中因 FastMCP 版本过旧而导致的安装问题\n- 升级 OpenTelemetry 和 Uvicorn 的版本\n- 将 ChromaDB 提升为可选依赖，通过 `parlant[chroma]` 安装\n- 更新集成 UI 的 NPM 依赖\n- 修复 API 类型的默认值，以避免触发 Pydantic 警告","2025-10-24T09:00:47",{"id":201,"version":202,"summary_zh":203,"released_at":204},80874,"v3.0.2","### 新增\n\n- 新增 docs\u002F\\* 和 llms.txt\n- 新增 Vertex NLP 服务\n- 新增 Ollama NLP 服务\n- 在 SDK 中添加了 LiteLLM 支持\n- 在 SDK 中添加了 Gemini 支持\n- 新增 Journey.create_observation() 辅助方法\n- 新增授权权限 READ_AGENT_DESCRIPTION\n- 向 BedrockService 添加了可选的 AWS_SESSION_TOKEN 参数\n- 支持通过 API 创建状态事件\n\n### 变更\n\n- 将工具调用成功日志级别调整为 DEBUG\n- 优化 canrep 功能，在严格模式下若未找到 canrep 候选回复，则不再生成草稿\n- 从状态事件中移除 `acknowledged_event_offset`\n- 从 `LoadedContext.interaction` 中移除 `last_known_event_offset`\n\n### 修复\n\n- 修复内置 NLP 服务中缺失 API 密钥的显示问题\n- 改进了预设回复生成功能\n- 修复在某些情况下旅程路径为 null 的 bug\n- 修复旅程节点选择中终端节点的小 bug\n- 修复版本升级后评估结果无法正常显示的问题","2025-08-27T14:24:52",{"id":206,"version":207,"summary_zh":208,"released_at":209},80875,"v3.0.1","今天，我们非常高兴地宣布推出 **Parlant 3.0**——这是我们迄今为止最重要的版本发布。这一版本将 Parlant 打造成一个真正适用于生产环境的、面向客户的应用对话式 AI 框架。凭借显著的性能提升、更出色的开发者体验以及企业级的安全特性，Parlant 3.0 已经准备好解决您最棘手的 AI 一致性问题，并为最关键的客户交互应用提供强大支持。\n\n\n## Parlant 3.0 的新特性\n\n本次发布聚焦于四个对在生产环境中部署对话式 AI 团队至关重要的核心领域：\n\n1. **延迟优化与感知性能**——大幅提速，带来更加流畅的用户体验\n2. **增强型对话流程**——通过状态图和工具集成，打造更强大的对话流\n3. **预设回复**——全新升级（原“话语模板”更名为“预设回复”），现支持灵活的组合模式\n4. 
**生产就绪性**——强化 API 接口、人工转接功能、自定义 NLP 服务，以及极致的引擎扩展能力\n\n接下来，我们将逐一深入探讨这些方面。\n\n## 性能提升\n\n对于对话式 AI 而言，性能至关重要。用户期望获得流畅的交互体验，因为卡顿或长时间延迟都会打断对话的连贯性。Parlant 3.0 在实际性能和用户感知性能两方面都实现了显著提升。\n\n### 优化的响应生成流水线\n\n我们对响应生成流水线进行了全面重构，并引入了多项关键优化：\n\n- **并行处理**：对话流程的状态匹配与规则评估可并行执行，从而将响应延迟缩短多达 60%。\n- **对话流程预测性激活**：引擎会根据当前对话上下文预测哪些流程将被触发，进而提前准备相关状态。\n\n### 基于前导回复的感知性能优化\n\n除了提升实际速度之外，Parlant 3.0 还采用了“感知性能”技术。其中最具影响力的改进便是 **前导回复**：\n\n![演示 GIF](https:\u002F\u002Fparlant.io\u002Fimg\u002Fblog-preamble-responses.png)\n\n这些即时确认消息能够在后台处理复杂请求的同时，持续保持用户的参与感。最终呈现出一种即使在执行复杂操作时也依然迅速、流畅的对话体验。\n\n\n## 增强型对话流程：引导复杂对话\n\nParlant 3.0 中的对话流程已发展成为一套用于管理复杂多步骤对话的成熟系统。它在结构化与灵活性之间取得了良好平衡，既能引导用户完成既定流程，又能灵活适应自然的对话模式。\n\n### 对话流程架构的改进\n\n**灵活的状态转换**：与僵化的对话框架不同，Parlant 的对话流程允许代理根据上下文和用户需求跳过某些状态、返回先前状态，或直接跳转到后续步骤：\n\n```python\nasync def create_scheduling_journey(agent: p.Agent):\n    journey = await agent.create_journey(\n        title=\"Schedule Appoin","2025-08-16T17:06:20",{"id":211,"version":212,"summary_zh":213,"released_at":214},80876,"v2.2.0","## [2.2.0] - 2025-05-20\n\n### 新增\n- 添加旅程\n- 添加指南属性评估功能\n- 在添加直接工具指南时，新增自动推导指南动作的功能\n- 在工具洞察中新增“无效”和“缺失”工具参数选项\n\n### 变更\n- 将指南动作设置为可选","2025-05-20T11:30:53",{"id":216,"version":217,"summary_zh":218,"released_at":219},80877,"v2.1.2","## [2.1.2] - 2025-05-07\n\n### 变更\n- 从话语重组提示中移除交互历史\n- 在话语字段替换中使用整个交互中的工具调用\n- 改进话语渲染失败时的错误处理和报告\n\n### 修复\n- 始终对话语选择进行推理，以提升性能","2025-05-07T08:39:51",{"id":221,"version":222,"summary_zh":223,"released_at":224},80878,"v2.1.1","## [2.1.1] - 2025-04-30\r\n\r\n### Fixed\r\n- Fixed rendering relationships in CLI\r\n- Fixed parlant client using old imports from python client SDK","2025-04-30T11:06:08",{"id":226,"version":227,"summary_zh":228,"released_at":229},80879,"v2.1.0","## [2.1.0] - 2025-04-29\r\n\r\n### Added\r\n- ToolParameterOptions.choice_provider can now access ToolContext\r\n- Added utterance\u002Fdraft toggle in the integrated UI\r\n- Added new guideline relationship: Dependency\r\n- Added tool relationships and the OVERLAP 
relationship\r\n- Added the 'overlap' property to tools. By default, tools will be assumed not to overlap with each other, simplifying their evaluation at runtime.\r\n\r\n### Changed\r\n- Improved tool calling efficiency by adjusting the prompt to the tool at hand\r\n- Revised completion schema (ARQs) for tool calling\r\n- Utterances now follow a 2-stage process: draft + select\r\n- Changed guest customer name to Guest\r\n\r\n### Fixed\r\n- Fixed deprioritized guidelines always being skipped\r\n- Fixed agent creation with tags\r\n- Fixed client CLI exit status when encountering an error\r\n- Fixed agent update\r\n\r\n### Known Issues\r\n- OpenAPI tool services sometimes run into issues due to a version update in aiopenapi3","2025-04-29T18:04:04",{"id":231,"version":232,"summary_zh":233,"released_at":234},80880,"v2.0.0","# [2.0.0] - 2025-04-09\r\n\r\n### Added\r\n- Improved tool parameter flexibility: custom types, Pydantic models, and annotated ToolParameterOptions\r\n- Allow returning a new (modified) container in modules using configure_module()\r\n- Added Tool Insights with tool parameter options\r\n- Added support for default values for tool parameters in tool calling\r\n- Added WebSocket logger feature for streaming logs in real time\r\n- Added a log viewer to the sandbox UI\r\n- Added API and CLI for Utterances\r\n- Added support for the --migrate CLI flag to enable seamless store version upgrades during server startup\r\n- Added clear rate limit error logs for NLP adapters\r\n- Added enabled\u002Fdisabled flag for guidelines to facilitate experimentation without deletion\r\n- Allow different schematic generators to adjust incoming prompts in a structured manner\r\n- Added tags to context variables, guidelines, glossary and agents\r\n- Added guideline matching strategies\r\n- Added guideline relationships\r\n\r\n### Changed\r\n- Made the message generator slightly more polite by default, following user feedback\r\n- Allow only specifying guideline condition 
or action when updating guideline from CLI\r\n- Renamed guideline proposer with guideline matcher\r\n\r\n### Fixed\r\n- Lowered likelihood of the agent hallucinating facts in fluid mode\r\n- Lowered likelihood of the agent offering services that were not specifically mentioned by the business","2025-04-09T08:48:17",{"id":236,"version":237,"summary_zh":238,"released_at":239},80881,"v1.6.7","## [1.6.7] - 2025-02-23\r\n\r\n### Fixed\r\n\r\n- Fix major issue with tool-call inference of non-string parameters on OpenAI","2025-02-23T17:27:41",{"id":241,"version":242,"summary_zh":243,"released_at":244},80882,"v1.6.6","## [1.6.6] - 2025-02-23\r\n\r\n### Fixed\r\n\r\n- Fix unicode generation issues in gemini\r\n- Adapt tool calling with optional parameters","2025-02-23T16:19:25",{"id":246,"version":247,"summary_zh":248,"released_at":249},80883,"v1.6.5","## [1.6.5] - 2025-02-20\r\n\r\n### Fixed\r\n\r\n- Improve guideline proposer generation consistency","2025-02-20T12:32:21",{"id":251,"version":252,"summary_zh":253,"released_at":254},80884,"v1.6.4","## [1.6.4] - 2025-02-20\r\n\r\n### Fixed\r\n\r\n- Upgrade to Gemini 2.0 Flash","2025-02-20T10:27:47",{"id":256,"version":257,"summary_zh":258,"released_at":259},80885,"v1.6.3","## [1.6.3] - 2025-02-18\r\n### Fixed\r\n- Fix Cerebras generation (as well as other Llama 3 generations)","2025-02-18T15:09:49",{"id":261,"version":262,"summary_zh":263,"released_at":264},80886,"v1.6.2","## [1.6.2] - 2025-01-29\r\n\r\n### Fixed\r\n- Fix loading DeepSeek service during server boot","2025-01-29T16:38:46",{"id":266,"version":267,"summary_zh":268,"released_at":269},80887,"v1.6.1","## [1.6.1] - 2025-01-20\r\n\r\n### Fixed\r\n- Fix ToolCaller not getting clear information on a parameter being optional\r\n- Ensure ToolCaller only calls a tool if all required args were given\r\n- Improve valid JSON generation likelihood in MessageEventGenerator\r\n- Improve ToolCaller's ability to correctly run multiple tools at once\r\n","2025-01-20T20:15:24"]