[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-videosdk-live--agents":3,"tool-videosdk-live--agents":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 
代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",155373,2,"2026-04-14T11:34:08",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108322,"2026-04-10T11:39:34",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 
协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[52,13,15,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":32,"last_commit_at":59,"category_tags":60,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":76,"owner_email":77,"owner_twitter":78,"owner_website":79,"owner_url":80,"languages":81,"stars":93,"forks":94,"last_commit_at":95,"license":96,"difficulty_score":32,"env_os":97,"env_gpu":98,"env_ram":98,"env_deps":99,"category_tags":107,"github_topics":76,"view_count":32,"oss_zip_url":76,"oss_zip_packed_at":76,"status":17,"created_at":110,"updated_at":111,"faqs":112,"releases":118},7418,"videosdk-live\u002Fagents","agents","Open-source framework for developing real-time multimodal conversational AI agents.","VideoSDK AI Agents 是一个开源的 Python 框架，专为构建能够实时参与音视频会议的语音及多模态 AI 智能体而设计。它主要解决了开发者在打造生产级实时对话应用时面临的复杂技术挑战，如低延迟音频流处理、自动发言检测、打断机制以及媒体路由等繁琐的基础设施问题。\n\n通过该框架，开发者可以将代理工作节点、AI 模型与用户设备无缝连接成一条低延迟管道，从而专注于核心业务逻辑而非底层通信细节。其最新 v1.0.0 版本引入了统一的 `Pipeline` 
类，能够自动组合语音识别（STT）、大语言模型（LLM）、语音合成（TTS）及虚拟形象等组件，并智能选择最佳执行模式；同时提供基于装饰器的钩子系统，让数据拦截与变换更加灵活便捷。\n\n这款工具非常适合需要快速开发实时语音助手、会议陪练或交互式客服系统的软件工程师与技术团队。无论是采用传统的级联模式还是最新的统一实时模型（如 Gemini Live），VideoSDK AI Agents 都能帮助开发者高效构建稳定、流畅且具备自然交互能力的智能应用。","\u003C!--BEGIN_BANNER_IMAGE-->\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvideosdk-live_agents_readme_b8d2a1e88706.png\" alt=\"VideoSDK AI Agents Banner\" style=\"width:100%;\">\n\u003C\u002Fp>\n\u003C!--END_BANNER_IMAGE-->\n\n# VideoSDK AI Agents\nOpen-source Python framework for building production-ready, real-time voice and multimodal AI agents.\n\n![PyPI - Version](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Fvideosdk-agents)\n[![PyPI Downloads](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvideosdk-live_agents_readme_f5475ba7dd49.png)](https:\u002F\u002Fpepy.tech\u002Fprojects\u002Fvideosdk-agents)\n[![Twitter Follow](https:\u002F\u002Fimg.shields.io\u002Ftwitter\u002Ffollow\u002Fvideo_sdk)](https:\u002F\u002Fx.com\u002Fvideo_sdk)\n[![YouTube](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FYouTube-VideoSDK-red)](https:\u002F\u002Fwww.youtube.com\u002Fc\u002FVideoSDK)\n[![LinkedIn](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLinkedIn-VideoSDK-blue)](https:\u002F\u002Fwww.linkedin.com\u002Fcompany\u002Fvideo-sdk\u002F)\n[![Discord](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDiscord-Join%20Us-7289DA)](https:\u002F\u002Fdiscord.com\u002Finvite\u002Ff2WsNDN9S5)\n[![Ask DeepWiki](https:\u002F\u002Fdeepwiki.com\u002Fbadge.svg)](https:\u002F\u002Fdeepwiki.com\u002Fvideosdk-live\u002Fagents)\n\nThe **VideoSDK AI Agents framework** is a Python SDK for building AI agents that join VideoSDK rooms as real-time participants. 
It connects your agent worker, AI models, and user devices into a single low-latency pipeline — handling audio streaming, turn detection, interruptions, and media routing automatically so you can focus on agent logic.\n\n\u003C!-- ![VideoSDK AI Agents High Level Architecture](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvideosdk-live_agents_readme_3ea6bcd399eb.png) -->\n\u003C!-- ![VideoSDK AI Agents High Level Architecture](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvideosdk-live_agents_readme_6801c8051e8b.png) -->\n\n![VideoSDK AI Agents High Level Architecture](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvideosdk-live_agents_readme_18fcee8bab61.png)\n\n\n## Overview\n\n**VideoSDK AI Agents** is a Python framework that lets you build voice and multimodal AI agents that participate directly in VideoSDK rooms. The framework manages the full agent lifecycle — from joining a room and processing live audio, to running STT → LLM → TTS pipelines or connecting to unified realtime models, to handling turn detection, VAD, interruptions, and clean teardown.\n\n**v1.0.0** introduces a unified `Pipeline` class that replaces the previous `CascadingPipeline` and `RealtimePipeline`. Pass in any combination of components — STT, LLM, TTS, VAD, turn detector, avatar — and the framework wires them together and selects the optimal execution mode automatically. 
A decorator-based hooks system (`@pipeline.on(...)`) lets you intercept and transform data at any stage without subclassing.\n\n\u003Ctable width=\"100%\">\n  \u003Ctr>\n    \u003Ctd width=\"50%\" valign=\"top\" style=\"padding-left: 20px;\">\n      \u003Ch3>🎙️ \u003Ca href=\"examples\u002Fcascade_basic.py\" target=\"_blank\">Agent with Cascade Mode\u003C\u002Fa>\u003C\u002Fh3>\n      \u003Cp>Build an AI Voice Agent using Cascade Mode (STT → LLM → TTS).\u003C\u002Fp>\n    \u003C\u002Ftd>\n    \u003Ctd width=\"50%\" valign=\"top\" style=\"padding-left: 20px;\">\n      \u003Ch3>⚡ \u003Ca href=\"examples\u002Frealtime_basic.py\" target=\"_blank\">Agent with Realtime Mode\u003C\u002Fa>\u003C\u002Fh3>\n      \u003Cp>Build an AI Voice Agent using a unified Realtime model (e.g. Gemini Live).\u003C\u002Fp>\n    \u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr>\n    \u003Ctd width=\"50%\" valign=\"top\" style=\"padding-left: 20px;\">\n      \u003Ch3>💻 \u003Ca href=\"https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fintroduction\" target=\"_blank\">Agent Documentation\u003C\u002Fa>\u003C\u002Fh3>\n      \u003Cp>The VideoSDK Agent Official Documentation.\u003C\u002Fp>\n    \u003C\u002Ftd>\n    \u003Ctd width=\"50%\" valign=\"top\" style=\"padding-left: 20px;\">\n      \u003Ch3>📚 \u003Ca href=\"https:\u002F\u002Fdocs.videosdk.live\u002Fagent-sdk-reference\u002Fagents\u002F\" target=\"_blank\">SDK Reference\u003C\u002Fa>\u003C\u002Fh3>\n      \u003Cp>Reference Docs for Agents Framework.\u003C\u002Fp>\n    \u003C\u002Ftd>\n  \u003C\u002Ftr>\n\u003C\u002Ftable>\n\n\u003Cdiv style={{marginTop: '1.5rem'}}>\u003C\u002Fdiv>\n\n\n| #  | Feature                         | Description                                                                 |\n|----|----------------------------------|-----------------------------------------------------------------------------|\n| 1  | **🎤 Real-time Communication (Audio\u002FVideo)**       | Agents can listen, speak, and interact live in 
meetings.                   |\n| 2  | **📞 SIP & Telephony Integration**   | Seamlessly connect agents to phone systems via SIP for call handling, routing, and PSTN access. |\n| 3  | **🧍 Virtual Avatars**               | Build or plug in any avatar provider — the framework handles audio routing, sync, and teardown automatically. |\n| 4  | **🤖 Multi-Model Support**           | Integrate with OpenAI, Gemini, AWS NovaSonic, Anthropic, and more.         |\n| 5  | **🧩 Cascade Mode**                  | Compose any STT → LLM → TTS chain across providers for full control and flexibility. |\n| 6  | **⚡ Realtime Mode**                  | Use unified realtime models (OpenAI Realtime, AWS Nova Sonic, Gemini Live) for lowest latency. |\n| 7  | **🔀 Hybrid Mode**                   | Mix cascade and realtime components — custom STT with a realtime model, or realtime with custom TTS. |\n| 8  | **🪝 Pipeline Hooks**                | Intercept and transform data at any stage (STT, LLM, TTS, turns) using `@pipeline.on(...)`. |\n| 9  | **🛠️ Function Tools**               | Extend agent capabilities with any external tool or API call.               |\n| 10 | **🌐 MCP Integration**               | Connect agents to external data sources and tools using Model Context Protocol. |\n| 11 | **🔗 A2A Protocol**                  | Reliable agent-to-agent routing with correlation-based request tracking.    |\n| 12 | **🦜 LangChain & LangGraph**         | Plug in any LangChain `BaseChatModel` or LangGraph `StateGraph` as the agent's LLM. |\n| 13 | **📊 Observability**                 | Built-in metrics, OpenTelemetry tracing, and structured logging per component. |\n\n> \\[!IMPORTANT]\n>\n> **Star VideoSDK Repositories** ⭐️\n>\n> Get instant notifications for new releases and updates. Your support helps us grow and improve VideoSDK!\n\n---\n\n## Pipeline Modes\n\nAll agents are built around a single `Pipeline` class. 
Pass in your components — the SDK picks the right execution mode automatically.\n\n### Cascade Mode — STT → LLM → TTS\n\nMix and match any provider for each stage. Best when you need custom STT, specific LLM behaviour, or a particular TTS voice.\n\n```python\nasync def start_session(context: JobContext):\n    pipeline = Pipeline(\n        stt=DeepgramSTT(),\n        llm=GoogleLLM(),\n        tts=CartesiaTTS(),\n        vad=SileroVAD(),\n        turn_detector=TurnDetector(),\n    )\n    session = AgentSession(agent=MyAgent(), pipeline=pipeline)\n    await session.start(wait_for_participant=True, run_until_shutdown=True)\n```\n\n### Realtime Mode — Lowest Latency with Unified Models\n\nUse a single realtime model for the entire voice pipeline. Best for sub-500ms response latency.\n\n```python\nasync def start_session(context: JobContext):\n    pipeline = Pipeline(\n        llm=GeminiRealtime(\n            model=\"gemini-3.1-flash-live-preview\",\n            config=GeminiLiveConfig(voice=\"Leda\", response_modalities=[\"AUDIO\"]),\n        )\n    )\n    session = AgentSession(agent=MyAgent(), pipeline=pipeline)\n    await session.start(wait_for_participant=True, run_until_shutdown=True)\n```\n\n### Hybrid Mode — Mix & Match\n\nUse an external STT with a Realtime LLM, or a Realtime model with a custom TTS:\n\n```python\n# External STT → Realtime LLM\npipeline = Pipeline(stt=DeepgramSTT(), llm=OpenAIRealtime(...))\n\n# Realtime LLM → External TTS\npipeline = Pipeline(llm=OpenAIRealtime(...), tts=ElevenLabsTTS(...))\n```\n\n### Pipeline Hooks — Intercept Any Stage\n\n```python\n@pipeline.on(\"stt\")\nasync def clean_transcript(text: str) -> str:\n    return text.strip()\n\n@pipeline.on(\"llm\")\nasync def route_llm(messages):\n    if \"transfer\" in messages[-1].content:\n        yield \"Transferring you now.\"  # bypass LLM entirely\n\n@pipeline.on(\"tts\")\nasync def fix_pronunciation(text: str) -> str:\n    return text.replace(\"VideoSDK\", \"Video S D 
K\")\n\n@pipeline.on(\"user_turn_start\")\nasync def on_user_starts():\n    print(\"User is speaking...\")\n```\n\nAvailable hook points: `stt` · `tts` · `llm` · `vision_frame` · `user_turn_start` · `user_turn_end` · `agent_turn_start` · `agent_turn_end`\n\n---\n\n## Pre-requisites\n\nBefore you begin, ensure you have:\n\n- A VideoSDK authentication token (generate from [app.videosdk.live](https:\u002F\u002Fapp.videosdk.live))\n   - A VideoSDK meeting ID (you can generate one using the [Create Room API](https:\u002F\u002Fdocs.videosdk.live\u002Fapi-reference\u002Frealtime-communication\u002Fcreate-room) or through the VideoSDK dashboard)\n- Python 3.12 or higher\n- Third-Party API Keys:\n   - API keys for the services you intend to use (e.g., OpenAI for LLM\u002FSTT\u002FTTS, ElevenLabs for TTS, Google for Gemini etc.).\n\n## Installation\n\n### Using UV (Recommended)\n\n[UV](https:\u002F\u002Fdocs.astral.sh\u002Fuv\u002F) is a fast Python package manager that handles virtual environments and dependency management automatically.\n\n> If you don't have UV installed, see the [UV installation guide](https:\u002F\u002Fdocs.astral.sh\u002Fuv\u002Fgetting-started\u002Finstallation\u002F).\n\n- Install the core VideoSDK AI Agent package:\n  ```bash\n  uv add videosdk-agents\n  ```\n\n- Install Optional Plugins:\n  ```bash\n  uv add videosdk-plugins-openai\n  uv add videosdk-plugins-deepgram\n  ```\n\n- Run your agent:\n  ```bash\n  uv run python main.py\n  ```\n\n### Using pip\n\n- Create and activate a virtual environment with Python 3.12 or higher.\n    \u003Cdetails>\n    \u003Csummary>\u003Cstrong> macOS \u002F Linux\u003C\u002Fstrong>\u003C\u002Fsummary>\n\n    ```bash\n    python3 -m venv venv\n    source venv\u002Fbin\u002Factivate\n    ```\n    \u003C\u002Fdetails>\n    \u003Cdetails>\n    \u003Csummary>\u003Cstrong> Windows\u003C\u002Fstrong>\u003C\u002Fsummary>\n\n    ```bash\n    python -m venv venv\n    venv\\Scripts\\activate\n    ```\n    
\u003C\u002Fdetails>\n\n- Install the core VideoSDK AI Agent package\n  ```bash\n  pip install videosdk-agents\n  ```\n- Install Optional Plugins. Plugins help integrate different providers for Realtime, STT, LLM, TTS, and more. Install what your use case needs:\n  ```bash\n  # Example: Install the Turn Detector plugin\n  pip install videosdk-plugins-turn-detector\n  ```\n  👉 Supported plugins (Realtime, LLM, STT, TTS, VAD, Avatar, SIP) are listed in the [Supported Libraries](#supported-libraries-and-plugins) section below.\n\n### Development Setup\n\nTo set up the project locally, clone the repo and install all packages (core + all plugins) as editable installs:\n\n**Using UV (Recommended):**\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fvideosdk-live\u002Fagents.git\ncd agents\nuv sync\nuv run python examples\u002Fcascade_basic.py\n```\n\n**Using pip:**\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fvideosdk-live\u002Fagents.git\ncd agents\nbash setup.sh\nsource venv\u002Fbin\u002Factivate\npython examples\u002Fcascade_basic.py\n```\n\n\n## Generating a VideoSDK Meeting ID\n\nBefore your AI agent can join a meeting, you'll need to create a meeting ID. 
You can generate one using the VideoSDK Create Room API:\n\n### Using cURL\n\n```bash\ncurl -X POST https:\u002F\u002Fapi.videosdk.live\u002Fv2\u002Frooms \\\n  -H \"Authorization: YOUR_JWT_TOKEN_HERE\" \\\n  -H \"Content-Type: application\u002Fjson\"\n```\n\nFor more details on the Create Room API, refer to the [VideoSDK documentation](https:\u002F\u002Fdocs.videosdk.live\u002Fapi-reference\u002Frealtime-communication\u002Fcreate-room).\n\n## Getting Started: Your First Agent\n\n### Quick Start\n\nNow that you've installed the necessary packages, you're ready to build!\n\n### Step 1: Creating a Custom Agent\n\nFirst, let's create a custom voice agent by inheriting from the base `Agent` class:\n\n```python title=\"main.py\"\nfrom videosdk.agents import Agent, function_tool\n\n# External Tool (defined outside the class; see Step 2)\n# async def get_weather(latitude: str, longitude: str):\n\nclass VoiceAgent(Agent):\n    def __init__(self):\n        super().__init__(\n            instructions=\"You are a helpful voice assistant that can answer questions and help with tasks.\",\n            tools=[get_weather],  # Register any external tool defined outside of this scope\n        )\n\n    async def on_enter(self) -> None:\n        \"\"\"Called when the agent first joins the meeting\"\"\"\n        await self.session.say(\"Hi there! How can I help you today?\")\n\n    async def on_exit(self) -> None:\n        \"\"\"Called when the agent exits the meeting\"\"\"\n        await self.session.say(\"Goodbye!\")\n```\n\nThis code defines a basic voice agent with:\n\n- Custom instructions that define the agent's personality and capabilities\n- An entry message when joining a meeting\n- A farewell message when the agent exits the meeting\n\n### Step 2: Implementing Function Tools\n\nFunction tools allow your agent to perform actions beyond conversation. 
There are two ways to define tools:\n\n- **External Tools:** Defined as standalone functions outside the agent class and registered via the `tools` argument in the agent's constructor.\n- **Internal Tools:** Defined as methods inside the agent class and decorated with `@function_tool`.\n\nBelow is an example of both:\n\n```python\nimport aiohttp\n\n# External Function Tools\n@function_tool\nasync def get_weather(latitude: str, longitude: str):\n    print(f\"Getting weather for {latitude}, {longitude}\")\n    url = f\"https:\u002F\u002Fapi.open-meteo.com\u002Fv1\u002Fforecast?latitude={latitude}&longitude={longitude}&current=temperature_2m\"\n    async with aiohttp.ClientSession() as session:\n        async with session.get(url) as response:\n            if response.status == 200:\n                data = await response.json()\n                return {\n                    \"temperature\": data[\"current\"][\"temperature_2m\"],\n                    \"temperature_unit\": \"Celsius\",\n                }\n            else:\n                raise Exception(\n                    f\"Failed to get weather data, status code: {response.status}\"\n                )\n\nclass VoiceAgent(Agent):\n# ... 
previous code ...\n# Internal Function Tools\n    @function_tool\n    async def get_horoscope(self, sign: str) -> dict:\n        horoscopes = {\n            \"Aries\": \"Today is your lucky day!\",\n            \"Taurus\": \"Focus on your goals today.\",\n            \"Gemini\": \"Communication will be important today.\",\n        }\n        return {\n            \"sign\": sign,\n            \"horoscope\": horoscopes.get(sign, \"The stars are aligned for you today!\"),\n        }\n```\n\n- Use external tools for reusable, standalone functions (registered via `tools=[...]`).\n- Use internal tools for agent-specific logic as class methods.\n- Both must be decorated with `@function_tool` for the agent to recognize and use them.\n\n\n### Step 3: Setting Up the Pipeline\n\nConnect your agent to an AI model using the unified `Pipeline` class. Pass in whichever components you need — the SDK handles the rest.\n\n**Realtime mode** (single model, lowest latency):\n\n```python\nasync def start_session(context: JobContext):\n    pipeline = Pipeline(\n        llm=GeminiRealtime(\n            model=\"gemini-3.1-flash-live-preview\",\n            config=GeminiLiveConfig(voice=\"Leda\", response_modalities=[\"AUDIO\"]),\n        )\n    )\n    session = AgentSession(agent=VoiceAgent(), pipeline=pipeline)\n    await session.start(wait_for_participant=True, run_until_shutdown=True)\n```\n\n**Cascade mode** (STT → LLM → TTS, full provider control):\n\n```python\nasync def start_session(context: JobContext):\n    pipeline = Pipeline(\n        stt=DeepgramSTT(),\n        llm=GoogleLLM(),\n        tts=CartesiaTTS(),\n        vad=SileroVAD(),\n        turn_detector=TurnDetector(),\n    )\n    session = AgentSession(agent=VoiceAgent(), pipeline=pipeline)\n    await session.start(wait_for_participant=True, run_until_shutdown=True)\n```\n### Step 4: Assembling and Starting the Agent Session\n\n```python\nfrom videosdk.agents import AgentSession, WorkerJob, RoomOptions, JobContext\n\nasync 
def start_session(context: JobContext):\n    session = AgentSession(\n        agent=VoiceAgent(),\n        pipeline=pipeline,\n    )\n    await session.start(wait_for_participant=True, run_until_shutdown=True)\n\ndef make_context() -> JobContext:\n    room_options = RoomOptions(\n        room_id=\"\u003Cmeeting_id>\",\n        name=\"Test Agent\",\n        playground=True,\n    )\n    return JobContext(room_options=room_options)\n\nif __name__ == \"__main__\":\n    job = WorkerJob(entrypoint=start_session, jobctx=make_context)\n    job.start()\n```\n### Step 5: Connecting with VideoSDK Client Applications\n\nAfter setting up your AI Agent, you'll need a client application to connect with it. You can use any of the VideoSDK quickstart examples to create a client that joins the same meeting:\n\n- [JavaScript](https:\u002F\u002Fgithub.com\u002Fvideosdk-live\u002Fquickstart\u002Ftree\u002Fmain\u002Fjs-rtc)\n- [React](https:\u002F\u002Fgithub.com\u002Fvideosdk-live\u002Fquickstart\u002Ftree\u002Fmain\u002Freact-rtc)\n- [React Native](https:\u002F\u002Fgithub.com\u002Fvideosdk-live\u002Fquickstart\u002Ftree\u002Fmain\u002Freact-native)\n- [Android](https:\u002F\u002Fgithub.com\u002Fvideosdk-live\u002Fquickstart\u002Ftree\u002Fmain\u002Fandroid-rtc)\n- [Flutter](https:\u002F\u002Fgithub.com\u002Fvideosdk-live\u002Fquickstart\u002Ftree\u002Fmain\u002Fflutter-rtc)\n- [iOS](https:\u002F\u002Fgithub.com\u002Fvideosdk-live\u002Fquickstart\u002Ftree\u002Fmain\u002Fios-rtc)\n- [Unity](http:\u002F\u002Fgithub.com\u002Fvideosdk-live\u002Fvideosdk-rtc-unity-sdk-example)\n- [IoT](https:\u002F\u002Fgithub.com\u002Fvideosdk-live\u002Fvideosdk-rtc-iot-sdk-example)\n\nWhen setting up your client application, make sure to use the same meeting ID that your AI Agent is using.\n\n### Step 6: Running the Project\nOnce you have completed the setup, you can run your AI Voice Agent project using Python. 
Make sure your `.env` file is properly configured and all dependencies are installed.\n\n```bash\npython main.py\n```\n> [!TIP]\n>\n> **Console Mode** — test your agent locally without a meeting room.\n> Set `playground=True` in `RoomOptions` and run `python main.py` to interact via your mic and speakers directly from the terminal.\n\n\n### Step 7: Deployment\n\nFor deployment options and guides, check out the official documentation: [Deployment](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fdeployments\u002Fintroduction)\n\n---\n\n\u003C!-- - For detailed guides, tutorials, and API references, check out our official [VideoSDK AI Agents Documentation](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fintroduction).\n- To see the framework in action, explore the code in the [Examples](examples\u002F) directory. It is a great place to quickstart. -->\n\n## VideoSDK Inference\n\nVideoSDK Inference provides a **unified gateway** to access STT, LLM, TTS, Denoise, and Realtime models — without managing individual provider API keys. 
Authentication is handled via your `VIDEOSDK_AUTH_TOKEN` and usage is billed from your VideoSDK account balance.\n\n```python\nfrom videosdk.agents.inference import STT, LLM, TTS, Denoise, Realtime\n```\n\n**Cascade Mode with VideoSDK Inference:**\n\n```python\nasync def start_session(context: JobContext):\n    pipeline = Pipeline(\n        stt=STT.sarvam(model_id=\"saarika:v2.5\", language=\"en-IN\"),\n        llm=LLM.google(model_id=\"gemini-2.5-flash\"),\n        tts=TTS.sarvam(model_id=\"bulbul:v2\", speaker=\"anushka\", language=\"en-IN\"),\n        denoise=Denoise.sanas(),\n        vad=SileroVAD(),\n    )\n    session = AgentSession(agent=MyAgent(), pipeline=pipeline)\n    await session.start(wait_for_participant=True, run_until_shutdown=True)\n```\n\n**Realtime Mode with VideoSDK Inference:**\n\n```python\nasync def start_session(context: JobContext):\n    pipeline = Pipeline(\n        llm=Realtime.gemini(\n            model_id=\"gemini-3.1-flash-live-preview\",\n            voice=\"Puck\",\n            language_code=\"en-US\",\n            response_modalities=[\"AUDIO\"],\n        )\n    )\n    session = AgentSession(agent=MyAgent(), pipeline=pipeline)\n    await session.start(wait_for_participant=True, run_until_shutdown=True)\n```\n\n> See [Inference Pricing](https:\u002F\u002Fdocs.videosdk.live\u002Fhelp_docs\u002Fpricing-inference) for provider-wise billing details.\n\n---\n\n## Supported Libraries and Plugins\n\nThe framework supports integration with various AI models and tools, across multiple categories:\n\n\n| Category                 | Services |\n|--------------------------|----------|\n| **Real-time Models**     | [OpenAI](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Frealtime\u002Fopenai) &#124; [Gemini](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Frealtime\u002Fgoogle-live-api) &#124; [AWS Nova 
Sonic](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Frealtime\u002Faws-nova-sonic) &#124; [Azure Voice Live](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Frealtime\u002Fazure-voice-live)|\n| **Speech-to-Text (STT)** | [OpenAI](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Fstt\u002Fopenai) &#124; [Google](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Fstt\u002Fgoogle) &#124; [Azure AI Speech](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Fstt\u002Fazure-ai-stt) &#124; [Azure OpenAI](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Fstt\u002Fazureopenai) &#124; [Sarvam AI](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Fstt\u002Fsarvam-ai) &#124; [Deepgram](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Fstt\u002Fdeepgram) &#124; [Cartesia](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Fstt\u002Fcartesia-stt) &#124; [AssemblyAI](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Fstt\u002Fassemblyai) &#124; [Navana](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Fstt\u002Fnavana) |\n| **Language Models (LLM)**| [OpenAI](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Fllm\u002Fopenai) &#124; [Azure OpenAI](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Fllm\u002Fazureopenai) &#124; [Google](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Fllm\u002Fgoogle-llm) &#124; [Sarvam AI](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Fllm\u002Fsarvam-ai-llm) &#124; [Anthropic](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Fllm\u002Fanthropic-llm) &#124; [Cerebras](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Fllm\u002FCerebras-llm) |\n| **Text-to-Speech (TTS)** | 
[OpenAI](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Ftts\u002Fopenai) &#124; [Google](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Ftts\u002Fgoogle-tts) &#124; [AWS Polly](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Ftts\u002Faws-polly-tts) &#124; [Azure AI Speech](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Ftts\u002Fazure-ai-tts) &#124; [Azure OpenAI](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Ftts\u002Fazureopenai) &#124; [Deepgram](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Ftts\u002Fdeepgram) &#124; [Sarvam AI](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Ftts\u002Fsarvam-ai-tts) &#124; [ElevenLabs](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Ftts\u002Feleven-labs) &#124; [Cartesia](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Ftts\u002Fcartesia-tts) &#124; [Resemble AI](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Ftts\u002Fresemble-ai-tts) &#124; [Smallest AI](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Ftts\u002Fsmallestai-tts) &#124; [Speechify](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Ftts\u002Fspeechify-tts) &#124; [InWorld](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Ftts\u002Finworld-ai-tts) &#124; [Neuphonic](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Ftts\u002Fneuphonic-tts) &#124; [Rime AI](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Ftts\u002Frime-ai-tts) &#124; [Hume AI](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Ftts\u002Fhume-ai-tts) &#124; [Groq](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Ftts\u002Fgroq-ai-tts) &#124; [LMNT AI](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Ftts\u002Flmnt-ai-tts) 
&#124; [Papla Media](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Ftts\u002Fpapla-media) |\n| **Voice Activity Detection (VAD)** | [SileroVAD](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Fsilero-vad) |\n| **Turn Detection Model** | [Namo Turn Detector](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Fnamo-turn-detector) |\n| **Virtual Avatar** | [Simli](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fcore-components\u002Favatar) &#124; [Anam](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Favatar\u002Fanam) &#124; Custom (implement `connect` \u002F `aclose` protocol) |\n| **LLM Orchestration** | [LangChain](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Fllm\u002Flangchain) &#124; [LangGraph](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Fllm\u002Flanggraph) |\n| **Denoise** | [RNNoise](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fcore-components\u002Fde-noise) |\n\n> [!TIP]\n> **Installation Examples**\n>\n> ```bash\n> # Install with specific plugins\n> pip install videosdk-agents[openai,elevenlabs,silero]\n>\n> # Install individual plugins\n> pip install videosdk-plugins-anthropic\n> pip install videosdk-plugins-deepgram\n> ```\n\n\n\n## Examples\n\nExplore the following examples to see the framework in action:\n\n### Core Mode Examples\n\n\u003Ctable width=\"100%\">\n  \u003Ctr>\n    \u003Ctd width=\"50%\" valign=\"top\" style=\"padding-left: 20px;\">\n      \u003Ch3>🎙️ \u003Ca href=\"examples\u002Fcascade_basic.py\" target=\"_blank\">Cascade Mode (Basic)\u003C\u002Fa>\u003C\u002Fh3>\n      \u003Cp>Simple STT → LLM → TTS voice agent using Google LLM + Deepgram STT + Cartesia TTS.\u003C\u002Fp>\n    \u003C\u002Ftd>\n    \u003Ctd width=\"50%\" valign=\"top\" style=\"padding-left: 20px;\">\n      \u003Ch3>🔧 \u003Ca href=\"examples\u002Fcascade_advanced.py\" target=\"_blank\">Cascade Mode 
(Advanced)\u003C\u002Fa>\u003C\u002Fh3>\n      \u003Cp>Advanced cascade agent with VAD, turn detection, and interruption handling.\u003C\u002Fp>\n    \u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr>\n    \u003Ctd width=\"50%\" valign=\"top\" style=\"padding-left: 20px;\">\n      \u003Ch3>⚡ \u003Ca href=\"examples\u002Frealtime_basic.py\" target=\"_blank\">Realtime Mode\u003C\u002Fa>\u003C\u002Fh3>\n      \u003Cp>Minimal realtime agent using Gemini Live for lowest-latency voice interactions.\u003C\u002Fp>\n    \u003C\u002Ftd>\n    \u003Ctd width=\"50%\" valign=\"top\" style=\"padding-left: 20px;\">\n      \u003Ch3>🔀 \u003Ca href=\"examples\u002Fhybrid_mode(cascade+realtime)\u002F\" target=\"_blank\">Hybrid Mode\u003C\u002Fa>\u003C\u002Fh3>\n      \u003Cp>Mix cascade and realtime — custom STT with a realtime model, or realtime with custom TTS.\u003C\u002Fp>\n    \u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr>\n    \u003Ctd width=\"50%\" valign=\"top\" style=\"padding-left: 20px;\">\n      \u003Ch3>🧩 \u003Ca href=\"examples\u002Fcomposable_pipelines\u002F\" target=\"_blank\">Composable Pipelines\u003C\u002Fa>\u003C\u002Fh3>\n      \u003Cp>Flexible Pipeline configs — transcription-only, LLM-only, voice+chat, full voice agent.\u003C\u002Fp>\n    \u003C\u002Ftd>\n    \u003Ctd width=\"50%\" valign=\"top\" style=\"padding-left: 20px;\">\n      \u003Ch3>🪝 \u003Ca href=\"examples\u002Fvoice_pipeline_hooks.py\" target=\"_blank\">Pipeline Hooks\u003C\u002Fa>\u003C\u002Fh3>\n      \u003Cp>Intercept and transform STT, LLM, and TTS data at any stage using \u003Ccode>@pipeline.on(...)\u003C\u002Fcode>.\u003C\u002Fp>\n    \u003C\u002Ftd>\n  \u003C\u002Ftr>\n\u003C\u002Ftable>\n\n### Integrations & Advanced Features\n\n\u003Ctable width=\"100%\">\n  \u003Ctr>\n    \u003Ctd width=\"50%\" valign=\"top\" style=\"padding-left: 20px;\">\n      \u003Ch3>🌐 \u003Ca href=\"examples\u002Fmcp_server_examples\u002F\" target=\"_blank\">Agent with MCP Server\u003C\u002Fa>\u003C\u002Fh3>\n      
\u003Cp>Stock Market Analyst Agent with real-time market data access via Model Context Protocol.\u003C\u002Fp>\n    \u003C\u002Ftd>\n    \u003Ctd width=\"50%\" valign=\"top\" style=\"padding-left: 20px;\">\n      \u003Ch3>🤝 \u003Ca href=\"examples\u002Fa2a\u002F\" target=\"_blank\">Agent-to-Agent (A2A)\u003C\u002Fa>\u003C\u002Fh3>\n      \u003Cp>Multi-agent workflow: customer agent that transfers loan queries to a Loan Specialist Agent.\u003C\u002Fp>\n    \u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr>\n    \u003Ctd width=\"50%\" valign=\"top\" style=\"padding-left: 20px;\">\n      \u003Ch3>🦜 \u003Ca href=\"examples\u002Flangchain\u002F\" target=\"_blank\">LangChain Integration\u003C\u002Fa>\u003C\u002Fh3>\n      \u003Cp>Use LangChain tools and agents within the VideoSDK agent framework.\u003C\u002Fp>\n    \u003C\u002Ftd>\n    \u003Ctd width=\"50%\" valign=\"top\" style=\"padding-left: 20px;\">\n      \u003Ch3>🕸️ \u003Ca href=\"examples\u002Flanggraph\u002F\" target=\"_blank\">LangGraph Integration\u003C\u002Fa>\u003C\u002Fh3>\n      \u003Cp>Orchestrate multi-step agent workflows using LangGraph state machines.\u003C\u002Fp>\n    \u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr>\n    \u003Ctd width=\"50%\" valign=\"top\" style=\"padding-left: 20px;\">\n      \u003Ch3>🧠 \u003Ca href=\"examples\u002Fmem0\u002F\" target=\"_blank\">Memory Agent (Mem0)\u003C\u002Fa>\u003C\u002Fh3>\n      \u003Cp>Persistent memory across sessions using Mem0 for long-term context retention.\u003C\u002Fp>\n    \u003C\u002Ftd>\n    \u003Ctd width=\"50%\" valign=\"top\" style=\"padding-left: 20px;\">\n      \u003Ch3>👁️ \u003Ca href=\"examples\u002Fvision\u002F\" target=\"_blank\">Vision Agent\u003C\u002Fa>\u003C\u002Fh3>\n      \u003Cp>Multimodal agent that processes video frames alongside voice using cascading or realtime pipelines.\u003C\u002Fp>\n    \u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr>\n    \u003Ctd width=\"50%\" valign=\"top\" style=\"padding-left: 20px;\">\n      \u003Ch3>🔄 
\u003Ca href=\"examples\u002Fn8n_workflow\u002F\" target=\"_blank\">n8n Workflow Integration\u003C\u002Fa>\u003C\u002Fh3>\n      \u003Cp>Trigger n8n automation workflows from within your agent using webhooks.\u003C\u002Fp>\n    \u003C\u002Ftd>\n    \u003Ctd width=\"50%\" valign=\"top\" style=\"padding-left: 20px;\">\n      \u003Ch3>🧑‍💼 \u003Ca href=\"examples\u002Fhuman_in_the_loop\u002F\" target=\"_blank\">Human in the Loop\u003C\u002Fa>\u003C\u002Fh3>\n      \u003Cp>Escalate to a human agent mid-conversation via Discord or other channels.\u003C\u002Fp>\n    \u003C\u002Ftd>\n  \u003C\u002Ftr>\n\u003C\u002Ftable>\n\n### Use Case Examples\n\n\u003Ctable width=\"100%\">\n  \u003Ctr>\n    \u003Ctd width=\"50%\" valign=\"top\" style=\"padding-left: 20px;\">\n      \u003Ch3>📞 \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fvideosdk-community\u002Fai-telephony-demo\" target=\"_blank\">AI Telephony Agent\u003C\u002Fa>\u003C\u002Fh3>\n      \u003Cp>Hospital appointment booking via a voice-enabled telephony agent.\u003C\u002Fp>\n    \u003C\u002Ftd>\n    \u003Ctd width=\"50%\" valign=\"top\" style=\"padding-left: 20px;\">\n      \u003Ch3>✈️ \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fvideosdk-community\u002Fvideosdk-whatsapp-ai-calling-agent\" target=\"_blank\">AI WhatsApp Agent\u003C\u002Fa>\u003C\u002Fh3>\n      \u003Cp>Ask about available hotel rooms and book on the go.\u003C\u002Fp>\n    \u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr>\n    \u003Ctd width=\"50%\" valign=\"top\" style=\"padding-left: 20px;\">\n      \u003Ch3>🛒 \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fvideosdk-live\u002Fagents-quickstart\u002Ftree\u002Fmain\u002FRAG\" target=\"_blank\">Agent with Knowledge (RAG)\u003C\u002Fa>\u003C\u002Fh3>\n      \u003Cp>Agent that answers questions based on documentation knowledge.\u003C\u002Fp>\n    \u003C\u002Ftd>\n    \u003Ctd width=\"50%\" valign=\"top\" style=\"padding-left: 20px;\">\n      \u003Ch3>🎭 \u003Ca 
href=\"https:\u002F\u002Fgithub.com\u002Fvideosdk-live\u002Fagents-quickstart\u002Ftree\u002Fmain\u002FVirtual%20Avatar\" target=\"_blank\">Virtual Avatar Agent\u003C\u002Fa>\u003C\u002Fh3>\n      \u003Cp>A Virtual Avatar Agent that presents a weather forecast.\u003C\u002Fp>\n    \u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr>\n    \u003Ctd width=\"50%\" valign=\"top\" style=\"padding-left: 20px;\">\n      \u003Ch3>🏥 \u003Ca href=\"use_case_examples\u002Fappointment_booking_agent.py\" target=\"_blank\">Appointment Booking\u003C\u002Fa>\u003C\u002Fh3>\n      \u003Cp>Healthcare front-desk receptionist for scheduling clinic appointments.\u003C\u002Fp>\n    \u003C\u002Ftd>\n    \u003Ctd width=\"50%\" valign=\"top\" style=\"padding-left: 20px;\">\n      \u003Ch3>📣 \u003Ca href=\"use_case_examples\u002Fannouncement_agent.py\" target=\"_blank\">Announcement Agent\u003C\u002Fa>\u003C\u002Fh3>\n      \u003Cp>Proactive outbound agent for broadcasting announcements.\u003C\u002Fp>\n    \u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr>\n    \u003Ctd width=\"50%\" valign=\"top\" style=\"padding-left: 20px;\">\n      \u003Ch3>🎧 \u003Ca href=\"use_case_examples\u002Fcustomer_support_agent.py\" target=\"_blank\">Customer Support\u003C\u002Fa>\u003C\u002Fh3>\n      \u003Cp>AI-powered customer support agent with escalation and knowledge base.\u003C\u002Fp>\n    \u003C\u002Ftd>\n    \u003Ctd width=\"50%\" valign=\"top\" style=\"padding-left: 20px;\">\n      \u003Ch3>📂 \u003Ca href=\"use_case_examples\u002F\" target=\"_blank\">More Use Cases\u003C\u002Fa>\u003C\u002Fh3>\n      \u003Cp>Call center, IVR, medical triage, language tutor, meeting notes, and more.\u003C\u002Fp>\n    \u003C\u002Ftd>\n  \u003C\u002Ftr>\n\u003C\u002Ftable>\n\n## Documentation\n\nFor comprehensive guides and API references:\n\n\u003Ctable width=\"100%\">\n  \u003Ctr>\n    \u003Ctd width=\"33%\" valign=\"top\" style=\"padding-left: 20px;\">\n      \u003Ch3>📄 \u003Ca 
href=\"https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fintroduction\" target=\"_blank\">Official Documentation\u003C\u002Fa>\u003C\u002Fh3>\n      \u003Cp>Complete framework documentation\u003C\u002Fp>\n    \u003C\u002Ftd>\n    \u003Ctd width=\"33%\" valign=\"top\" style=\"padding-left: 20px;\">\n      \u003Ch3>📝 \u003Ca href=\"https:\u002F\u002Fdocs.videosdk.live\u002Fagent-sdk-reference\u002Fagents\u002F\" target=\"_blank\">API Reference\u003C\u002Fa>\u003C\u002Fh3>\n      \u003Cp>Detailed API documentation\u003C\u002Fp>\n    \u003C\u002Ftd>\n    \u003Ctd width=\"33%\" valign=\"top\" style=\"padding-left: 20px;\">\n      \u003Ch3>📂 \u003Ca href=\"examples\u002F\" target=\"_blank\">Examples Directory\u003C\u002Fa>\u003C\u002Fh3>\n      \u003Cp>Additional code examples\u003C\u002Fp>\n    \u003C\u002Ftd>\n  \u003C\u002Ftr>\n\u003C\u002Ftable>\n\n\n## Contributing\n\nWe welcome contributions! Here's how you can help:\n\n\u003Ctable width=\"100%\">\n  \u003Ctr>\n    \u003Ctd width=\"25%\" valign=\"top\" style=\"padding-left: 20px;\">\n      \u003Ch3>🐞 \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fvideosdk-live\u002Fagents\u002Fissues\" target=\"_blank\">Report Issues\u003C\u002Fa>\u003C\u002Fh3>\n      \u003Cp>Open an issue for bugs or feature requests\u003C\u002Fp>\n    \u003C\u002Ftd>\n    \u003Ctd width=\"25%\" valign=\"top\" style=\"padding-left: 20px;\">\n      \u003Ch3>🔀 \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fvideosdk-live\u002Fagents\u002Fpulls\" target=\"_blank\">Submit PRs\u003C\u002Fa>\u003C\u002Fh3>\n      \u003Cp>Create a pull request with improvements\u003C\u002Fp>\n    \u003C\u002Ftd>\n    \u003Ctd width=\"25%\" valign=\"top\" style=\"padding-left: 20px;\">\n      \u003Ch3>🛠️ \u003Ca href=\"BUILD_YOUR_OWN_PLUGIN.md\" target=\"_blank\">Build Plugins\u003C\u002Fa>\u003C\u002Fh3>\n      \u003Cp>Follow our plugin development guide\u003C\u002Fp>\n    \u003C\u002Ftd>\n    \u003Ctd width=\"25%\" valign=\"top\" style=\"padding-left: 
20px;\">\n      \u003Ch3>💬 \u003Ca href=\"https:\u002F\u002Fdiscord.com\u002Finvite\u002FGpmj6eCq5u\" target=\"_blank\">Join Community\u003C\u002Fa>\u003C\u002Fh3>\n      \u003Cp>Connect with us on Discord\u003C\u002Fp>\n    \u003C\u002Ftd>\n  \u003C\u002Ftr>\n\u003C\u002Ftable>\n\nThe framework is under active development, so contributions in the form of new plugins, features, bug fixes, or documentation improvements are highly appreciated.\n\n### 🛠️ Building Custom Plugins\n\nWant to integrate a new AI provider? Check out **[BUILD YOUR OWN PLUGIN](BUILD_YOUR_OWN_PLUGIN.md)** for:\n\n- Step-by-step plugin creation guide  \n- Directory structure and file requirements  \n- Implementation examples for STT, LLM, and TTS  \n- Testing and submission guidelines  \n\n## Community & Support\n\nStay connected with VideoSDK:\n\n\u003Ctable width=\"100%\">\n  \u003Ctr>\n    \u003Ctd width=\"25%\" valign=\"top\" style=\"padding-left: 20px;\">\n      \u003Ch3>💬 \u003Ca href=\"https:\u002F\u002Fdiscord.com\u002Finvite\u002FGpmj6eCq5u\" target=\"_blank\">Discord\u003C\u002Fa>\u003C\u002Fh3>\n      \u003Cp>Join our community\u003C\u002Fp>\n    \u003C\u002Ftd>\n    \u003Ctd width=\"25%\" valign=\"top\" style=\"padding-left: 20px;\">\n      \u003Ch3>🐦 \u003Ca href=\"https:\u002F\u002Fx.com\u002Fvideo_sdk\" target=\"_blank\">Twitter\u003C\u002Fa>\u003C\u002Fh3>\n      \u003Cp>@video_sdk\u003C\u002Fp>\n    \u003C\u002Ftd>\n    \u003Ctd width=\"25%\" valign=\"top\" style=\"padding-left: 20px;\">\n      \u003Ch3>▶️ \u003Ca href=\"https:\u002F\u002Fwww.youtube.com\u002Fc\u002FVideoSDK\" target=\"_blank\">YouTube\u003C\u002Fa>\u003C\u002Fh3>\n      \u003Cp>VideoSDK Channel\u003C\u002Fp>\n    \u003C\u002Ftd>\n    \u003Ctd width=\"25%\" valign=\"top\" style=\"padding-left: 20px;\">\n      \u003Ch3>🔗 \u003Ca href=\"https:\u002F\u002Fwww.linkedin.com\u002Fcompany\u002Fvideo-sdk\u002F\" target=\"_blank\">LinkedIn\u003C\u002Fa>\u003C\u002Fh3>\n      \u003Cp>VideoSDK Company\u003C\u002Fp>\n    
\u003C\u002Ftd>\n  \u003C\u002Ftr>\n\u003C\u002Ftable>\n\n> [!TIP]\n>\n> **Support the Project!** ⭐️  \n> Star the repository, join the community, and help us improve VideoSDK by providing feedback, reporting bugs, or contributing plugins.\n\n---\n\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fvideosdk-live\u002Fagents\u002Fgraphs\u002Fcontributors\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvideosdk-live_agents_readme_2076d710a3aa.png\" \u002F>\n\u003C\u002Fa>\n\n**\u003Ccenter>Made with ❤️ by The VideoSDK Team\u003C\u002Fcenter>**\n","\u003C!--BEGIN_BANNER_IMAGE-->\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvideosdk-live_agents_readme_b8d2a1e88706.png\" alt=\"VideoSDK AI Agents Banner\" style=\"width:100%;\">\n\u003C\u002Fp>\n\u003C!--END_BANNER_IMAGE-->\n\n# VideoSDK AI Agents\nAn open-source Python framework for building production-grade, real-time voice and multimodal AI agents.\n\n![PyPI - Version](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Fvideosdk-agents)\n[![PyPI Downloads](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvideosdk-live_agents_readme_f5475ba7dd49.png)](https:\u002F\u002Fpepy.tech\u002Fprojects\u002Fvideosdk-agents)\n[![Twitter Follow](https:\u002F\u002Fimg.shields.io\u002Ftwitter\u002Ffollow\u002Fvideo_sdk)](https:\u002F\u002Fx.com\u002Fvideo_sdk)\n[![YouTube](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FYouTube-VideoSDK-red)](https:\u002F\u002Fwww.youtube.com\u002Fc\u002FVideoSDK)\n[![LinkedIn](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLinkedIn-VideoSDK-blue)](https:\u002F\u002Fwww.linkedin.com\u002Fcompany\u002Fvideo-sdk\u002F)\n[![Discord](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDiscord-Join%20Us-7289DA)](https:\u002F\u002Fdiscord.com\u002Finvite\u002Ff2WsNDN9S5)\n[![Ask DeepWiki](https:\u002F\u002Fdeepwiki.com\u002Fbadge.svg)](https:\u002F\u002Fdeepwiki.com\u002Fvideosdk-live\u002Fagents)\n\nThe **VideoSDK AI Agents framework** is a Python SDK for building AI agents that join VideoSDK rooms as real-time participants. It connects your agent worker, AI models, and user devices into a single low-latency pipeline, automatically handling audio streaming, turn detection, interruptions, and media routing so you can focus on your agent logic.\n\n\u003C!-- ![VideoSDK AI Agents High Level Architecture](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvideosdk-live_agents_readme_3ea6bcd399eb.png) -->\n\u003C!-- ![VideoSDK AI Agents High Level Architecture](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvideosdk-live_agents_readme_6801c8051e8b.png) -->\n\n![VideoSDK AI Agents High Level Architecture](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvideosdk-live_agents_readme_18fcee8bab61.png)\n\n\n## Overview\n\n**VideoSDK AI Agents** is a Python framework for building voice and multimodal AI agents that participate directly in VideoSDK meetings. The framework manages the agent's full lifecycle: joining the meeting, processing real-time audio, running an STT → LLM → TTS pipeline or a unified realtime model, handling turn detection, VAD, and interruptions, and ending the session cleanly.\n\n**v1.0.0** introduces a unified `Pipeline` class that replaces the previous `CascadingPipeline` and `RealtimePipeline`. Pass in any combination of components — STT, LLM, TTS, VAD, turn detector, avatar, and more — and the framework wires them together and picks the optimal execution mode. A decorator-based hook system (`@pipeline.on(...)`) lets you intercept and transform data at any stage without subclassing.\n\n\u003Ctable width=\"100%\">\n  \u003Ctr>\n    \u003Ctd width=\"50%\" valign=\"top\" style=\"padding-left: 20px;\">\n      \u003Ch3>🎙️ \u003Ca href=\"examples\u002Fcascade_basic.py\" target=\"_blank\">Agent with Cascade Mode\u003C\u002Fa>\u003C\u002Fh3>\n      \u003Cp>Build an AI voice agent using the cascade (STT → LLM → TTS) pipeline.\u003C\u002Fp>\n    \u003C\u002Ftd>\n    \u003Ctd width=\"50%\" valign=\"top\" style=\"padding-left: 20px;\">\n      \u003Ch3>⚡ \u003Ca href=\"examples\u002Frealtime_basic.py\" target=\"_blank\">Agent with Realtime Mode\u003C\u002Fa>\u003C\u002Fh3>\n      \u003Cp>Build an AI voice agent using a unified realtime model such as Gemini Live.\u003C\u002Fp>\n    \u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr>\n    \u003Ctd width=\"50%\" valign=\"top\" style=\"padding-left: 20px;\">\n      \u003Ch3>💻 \u003Ca href=\"https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fintroduction\" target=\"_blank\">Agents Documentation\u003C\u002Fa>\u003C\u002Fh3>\n      \u003Cp>Official VideoSDK Agents documentation.\u003C\u002Fp>\n    \u003C\u002Ftd>\n    \u003Ctd width=\"50%\" valign=\"top\" style=\"padding-left: 20px;\">\n      \u003Ch3>📚 \u003Ca 
href=\"https:\u002F\u002Fdocs.videosdk.live\u002Fagent-sdk-reference\u002Fagents\u002F\" target=\"_blank\">SDK Reference\u003C\u002Fa>\u003C\u002Fh3>\n      \u003Cp>Reference documentation for the agent framework.\u003C\u002Fp>\n    \u003C\u002Ftd>\n  \u003C\u002Ftr>\n\u003C\u002Ftable>\n\n\u003Cdiv style={{marginTop: '1.5rem'}}>\u003C\u002Fdiv>\n\n\n| #  | Feature                          | Description                                                                 |\n|----|----------------------------------|-----------------------------------------------------------------------------|\n| 1  | **🎤 Real-time Communication (Audio\u002FVideo)** | Agents can listen, speak, and interact live in meetings.                   |\n| 2  | **📞 SIP & Telephony Integration**   | Seamlessly connect agents to phone systems via SIP for call handling, routing, and PSTN access. |\n| 3  | **🧍 Virtual Avatars**               | Build or plug in any avatar provider; the framework handles audio routing, sync, and cleanup automatically. |\n| 4  | **🤖 Multi-Model Support**           | Integrate with OpenAI, Gemini, AWS NovaSonic, Anthropic, and more.         |\n| 5  | **🧩 Cascade Mode**                  | Compose any STT → LLM → TTS chain across providers for full control and flexibility. |\n| 6  | **⚡ Realtime Mode**                  | Use unified realtime models (e.g. OpenAI Realtime, AWS Nova Sonic, Gemini Live) for the lowest latency. |\n| 7  | **🔀 Hybrid Mode**                   | Mix cascade and realtime components, e.g. custom STT with a realtime model, or realtime with custom TTS. |\n| 8  | **🪝 Pipeline Hooks**                | Intercept and transform data at any stage (STT, LLM, TTS, turns) using `@pipeline.on(...)`. |\n| 9  | **🛠️ Function Tools**               | Extend agent capabilities with any external tool or API call.               |\n| 10 | **🌐 MCP Integration**               | Connect agents to external data sources and tools using the Model Context Protocol (MCP). |\n| 11 | **🔗 A2A Protocol**                  | Reliable agent-to-agent routing with correlation-based request tracking.    |\n| 12 | **🦜 LangChain & LangGraph**         | Plug in any LangChain `BaseChatModel` or LangGraph `StateGraph` as the agent's LLM. |\n| 13 | **📊 Observability**                 | Built-in metrics, OpenTelemetry tracing, and structured per-component logging. |\n\n> [!IMPORTANT]\n>\n> **Star the VideoSDK repository** ⭐️\n>\n> Be the first to get notified about new releases and updates. Your support helps us grow and improve VideoSDK!\n\n---\n\n## Pipeline Modes\n\nEvery agent is built around a single `Pipeline` class. Pass in the components you need and the SDK picks the right execution mode automatically.\n\n### Cascade Mode — STT → LLM → TTS\n\nMix and match providers for each stage. This mode is best when you need custom STT, specific LLM behavior, or a particular TTS voice.\n\n```python\nasync def start_session(context: JobContext):\n    pipeline = Pipeline(\n        stt=DeepgramSTT(),\n        
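# Note: every stage in this pipeline is a pluggable component, so any provider\n        # from the Supported Libraries table can be swapped in here (for example a\n        # different STT or TTS plugin), and an optional denoise=... stage can be\n        # added as well, as shown later in the VideoSDK Inference section.\n        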
llm=GoogleLLM(),\n        tts=CartesiaTTS(),\n        vad=SileroVAD(),\n        turn_detector=TurnDetector(),\n    )\n    session = AgentSession(agent=MyAgent(), pipeline=pipeline)\n    await session.start(wait_for_participant=True, run_until_shutdown=True)\n```\n\n### Realtime Mode — One Unified Model for Lowest Latency\n\nA single realtime model handles the entire speech loop. Best for scenarios that need sub-500 ms response latency.\n\n```python\nasync def start_session(context: JobContext):\n    pipeline = Pipeline(\n        llm=GeminiRealtime(\n            model=\"gemini-3.1-flash-live-preview\",\n            config=GeminiLiveConfig(voice=\"Leda\", response_modalities=[\"AUDIO\"]),\n        )\n    )\n    session = AgentSession(agent=MyAgent(), pipeline=pipeline)\n    await session.start(wait_for_participant=True, run_until_shutdown=True)\n```\n\n### Hybrid Mode — Mix and Match\n\nCombine an external STT with a realtime LLM, or a realtime model with a custom TTS:\n\n```python\n# External STT → realtime LLM\npipeline = Pipeline(stt=DeepgramSTT(), llm=OpenAIRealtime(...))\n\n# Realtime LLM → external TTS\npipeline = Pipeline(llm=OpenAIRealtime(...), tts=ElevenLabsTTS(...))\n```\n\n### Pipeline Hooks — Intercept Any Stage\n\n```python\n@pipeline.on(\"stt\")\nasync def clean_transcript(text: str) -> str:\n    return text.strip()\n\n@pipeline.on(\"llm\")\nasync def route_llm(messages):\n    if \"transfer\" in messages[-1].content:\n        yield \"Transferring you now.\"  # bypasses the LLM entirely\n\n@pipeline.on(\"tts\")\nasync def fix_pronunciation(text: str) -> str:\n    return text.replace(\"VideoSDK\", \"Video S D K\")\n\n@pipeline.on(\"user_turn_start\")\nasync def on_user_starts():\n    print(\"User is speaking...\")\n```\n\nAvailable hook points: `stt` · `tts` · `llm` · `vision_frame` · `user_turn_start` · `user_turn_end` · `agent_turn_start` · `agent_turn_end`\n\n---\n\n## Prerequisites\n\nBefore you begin, make sure you have:\n\n- A VideoSDK auth token (generate one from [app.videosdk.live](https:\u002F\u002Fapp.videosdk.live))\n- A VideoSDK meeting ID (generate one via the [Create Room API](https:\u002F\u002Fdocs.videosdk.live\u002Fapi-reference\u002Frealtime-communication\u002Fcreate-room) or the VideoSDK dashboard)\n- Python 3.12 or higher\n- Third-party API keys:\n   - API keys for the services you plan to use (e.g., OpenAI for LLM\u002FSTT\u002FTTS, ElevenLabs for TTS, Google for Gemini, and so on).\n\n## Installation\n\n### Using UV (Recommended)\n\n[UV](https:\u002F\u002Fdocs.astral.sh\u002Fuv\u002F) is a fast Python package manager that handles virtual environments and dependency management automatically.\n\n> If you don't have UV installed yet, see the [UV installation guide](https:\u002F\u002Fdocs.astral.sh\u002Fuv\u002Fgetting-started\u002Finstallation\u002F).\n\n- Install the core VideoSDK AI Agents package:\n  ```bash\n  uv add videosdk-agents\n  ```\n\n- Install optional plugins:\n  ```bash\n  uv add videosdk-plugins-openai\n  uv add videosdk-plugins-deepgram\n  ```\n\n- Run your agent:\n  ```bash\n  uv run python main.py\n  ```\n\n### Using pip\n\n- Create and activate a virtual environment with Python 3.12 or higher.\n    \u003Cdetails>\n    \u003Csummary>\u003Cstrong> macOS \u002F Linux\u003C\u002Fstrong>\u003C\u002Fsummary>\n\n    ```bash\n    python3 -m venv venv\n    source venv\u002Fbin\u002Factivate\n    ```\n    \u003C\u002Fdetails>\n    \u003Cdetails>\n    \u003Csummary>\u003Cstrong> Windows\u003C\u002Fstrong>\u003C\u002Fsummary>\n\n    ```bash\n    python -m venv venv\n    venv\\Scripts\\activate\n    ```\n    \u003C\u002Fdetails>\n\n- Install the core VideoSDK AI Agents package\n  ```bash\n  pip install videosdk-agents\n  ```\n- Install optional plugins. Plugins integrate different providers for realtime interaction, STT, LLM, TTS, and more. Install the ones your use case needs:\n  ```bash\n  # Example: install the turn-detector plugin\n  pip install videosdk-plugins-turn-detector\n  ```\n  👉 Supported plugins (Realtime, LLM, STT, TTS, VAD, Avatar, SIP) are listed in the Supported Libraries and Plugins section below.\n\n### Development Setup\n\nTo set up the project locally, clone the repository and install all packages (core + all plugins) in editable mode:\n\n**Using UV (Recommended):**\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fvideosdk-live\u002Fagents.git\ncd agents\nuv sync\nuv run python examples\u002Fcascade_basic.py\n```\n\n**Using pip:**\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fvideosdk-live\u002Fagents.git\ncd agents\nbash setup.sh\nsource venv\u002Fbin\u002Factivate\npython examples\u002Fcascade_basic.py\n```\n\n\n## Generating a VideoSDK Meeting ID\n\nBefore your AI agent can join a meeting, you need to create a meeting ID. You can generate one using VideoSDK's Create Room API:\n\n### Using cURL\n\n```bash\ncurl -X POST https:\u002F\u002Fapi.videosdk.live\u002Fv2\u002Frooms \\\n  -H \"Authorization: YOUR_JWT_TOKEN_HERE\" \\\n  -H \"Content-Type: 
application\u002Fjson\"\n```\n\nFor more details on the Create Room API, see the [VideoSDK documentation](https:\u002F\u002Fdocs.videosdk.live\u002Fapi-reference\u002Frealtime-communication\u002Fcreate-room).\n\n## Getting Started: Your First Agent\n\n### Quick Start\n\nNow that you have the required packages installed, you're ready to build!\n\n### Step 1: Create a Custom Agent\n\nFirst, create a custom voice agent by inheriting from the base `Agent` class:\n\n```python title=\"main.py\"\nfrom videosdk.agents import Agent, function_tool\n\n# External tool (defined in Step 2)\n# async def get_weather(latitude: str, longitude: str):\n\nclass VoiceAgent(Agent):\n    def __init__(self):\n        super().__init__(\n            instructions=\"You are a helpful voice assistant that can answer questions and help with tasks.\",\n            tools=[get_weather],  # register any external tools defined outside this class\n        )\n\n    async def on_enter(self) -> None:\n        \"\"\"Called when the agent first joins the meeting\"\"\"\n        await self.session.say(\"Hello! How can I help you today?\")\n\n    async def on_exit(self) -> None:\n        \"\"\"Called when the agent leaves the meeting\"\"\"\n        await self.session.say(\"Goodbye!\")\n```\n\nThis code defines a basic voice agent with:\n\n- Custom instructions that define the agent's personality and capabilities\n- A greeting when it joins the meeting\n- Entry and exit handlers (`on_enter` \u002F `on_exit`) that run as the agent joins and leaves\n\n### Step 2: Implement Function Tools\n\nFunction tools let your agent perform actions beyond conversation. There are two ways to define tools:\n\n- **External tools:** Defined as standalone functions outside the agent class and registered via the `tools` parameter in the agent's constructor.\n- **Internal tools:** Defined as methods inside the agent class and marked with the `@function_tool` decorator.\n\nHere is an example of both:\n\n```python\nimport aiohttp\n\n# External function tool\n@function_tool\nasync def get_weather(latitude: str, longitude: str):\n    print(f\"Fetching weather for {latitude}, {longitude}\")\n    url = f\"https:\u002F\u002Fapi.open-meteo.com\u002Fv1\u002Fforecast?latitude={latitude}&longitude={longitude}&current=temperature_2m\"\n    async with aiohttp.ClientSession() as session:\n        async with session.get(url) as response:\n            if response.status == 200:\n                data = await response.json()\n                return {\n                    \"temperature\": data[\"current\"][\"temperature_2m\"],\n                    \"temperature_unit\": \"celsius\",\n                }\n            else:\n                raise Exception(\n                    f\"Failed to fetch weather data: {response.status}\"\n                )\n\nclass VoiceAgent(Agent):\n    # ... code from Step 1 ...\n\n    # Internal function tool\n    @function_tool\n    async def get_horoscope(self, sign: str) -> dict:\n        horoscopes = {\n            \"Aries\": \"Today is your lucky day!\",\n            \"Taurus\": \"Focus on your goals today.\",\n            \"Gemini\": \"Communication will be important today.\",\n        }\n        return {\n            \"sign\": sign,\n            \"horoscope\": horoscopes.get(sign, \"The stars are aligned for you today!\"),\n        }\n```\n\n- Use external tools (registered via `tools=[...]`) for reusable, standalone functions.\n- Use internal tools, defined as class methods, for agent-specific logic.\n- Both external and internal tools must use the `@function_tool` decorator so the agent can recognize and call them.\n\n\n### Step 3: Set Up the Pipeline\n\nConnect your agent to AI models with the unified `Pipeline` class. Pass in the components you need and the SDK handles the rest.\n\n**Realtime mode** (single model, lowest latency):\n\n```python\nasync def start_session(context: JobContext):\n    pipeline = Pipeline(\n        llm=GeminiRealtime(\n            model=\"gemini-3.1-flash-live-preview\",\n            config=GeminiLiveConfig(voice=\"Leda\", response_modalities=[\"AUDIO\"]),\n        )\n    )\n    session = AgentSession(agent=VoiceAgent(), pipeline=pipeline)\n    await session.start(wait_for_participant=True, run_until_shutdown=True)\n```\n\n**Cascade mode** (STT → LLM → TTS, full provider control):\n\n```python\nasync def start_session(context: JobContext):\n    pipeline = Pipeline(\n        stt=DeepgramSTT(),\n        llm=GoogleLLM(),\n        tts=CartesiaTTS(),\n        vad=SileroVAD(),\n        turn_detector=TurnDetector(),\n    )\n    session = AgentSession(agent=VoiceAgent(), pipeline=pipeline)\n    await session.start(wait_for_participant=True, run_until_shutdown=True)\n```\n\n### Step 4: Assemble and Start the Agent Session\n\n```python\nfrom videosdk.agents import AgentSession, WorkerJob, RoomOptions, JobContext\n\nasync def start_session(context: JobContext):\n    session = AgentSession(\n        agent=VoiceAgent(),\n        pipeline=pipeline,\n    )\n    await session.start(wait_for_participant=True, run_until_shutdown=True)\n\ndef make_context() -> JobContext:\n    room_options = RoomOptions(\n        room_id=\"\u003Cmeeting_id>\",\n        name=\"Test Agent\",\n        playground=True,\n    )\n    return 
JobContext(room_options=room_options)\n\nif __name__ == \"__main__\":\n    job = WorkerJob(entrypoint=start_session, jobctx=make_context)\n    job.start()\n```\n\n### Step 5: Connect with a VideoSDK Client App\n\nOnce your AI agent is set up, you need a client application to connect with it. You can use any of the VideoSDK quickstart examples to create a client that joins the same meeting:\n\n- [JavaScript](https:\u002F\u002Fgithub.com\u002Fvideosdk-live\u002Fquickstart\u002Ftree\u002Fmain\u002Fjs-rtc)\n- [React](https:\u002F\u002Fgithub.com\u002Fvideosdk-live\u002Fquickstart\u002Ftree\u002Fmain\u002Freact-rtc)\n- [React Native](https:\u002F\u002Fgithub.com\u002Fvideosdk-live\u002Fquickstart\u002Ftree\u002Fmain\u002Freact-native)\n- [Android](https:\u002F\u002Fgithub.com\u002Fvideosdk-live\u002Fquickstart\u002Ftree\u002Fmain\u002Fandroid-rtc)\n- [Flutter](https:\u002F\u002Fgithub.com\u002Fvideosdk-live\u002Fquickstart\u002Ftree\u002Fmain\u002Fflutter-rtc)\n- [iOS](https:\u002F\u002Fgithub.com\u002Fvideosdk-live\u002Fquickstart\u002Ftree\u002Fmain\u002Fios-rtc)\n- [Unity](http:\u002F\u002Fgithub.com\u002Fvideosdk-live\u002Fvideosdk-rtc-unity-sdk-example)\n- [IoT](https:\u002F\u002Fgithub.com\u002Fvideosdk-live\u002Fvideosdk-rtc-iot-sdk-example)\n\nWhen setting up the client app, make sure to use the same meeting ID as your AI agent.\n\n### Step 6: Run the Project\n\nOnce everything is set up, run your AI voice agent with Python. Make sure your `.env` file is configured correctly and all dependencies are installed.\n\n```bash\npython main.py\n```\n\n> [!TIP]\n>\n> **Console Mode** — test your agent locally without a meeting room.\n> Set `playground=True` in `RoomOptions` and run `python main.py` to interact directly through your terminal's microphone and speaker.\n\n\n### Step 7: Deployment\n\nFor deployment options and guides, check the official docs: [Deployments](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fdeployments\u002Fintroduction)\n\n---\n\n\u003C!-- - For detailed guides, tutorials, and API references, see our official [VideoSDK AI Agents documentation](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fintroduction).\n- To see the framework in action, browse the code in the [examples](examples\u002F) directory. It's a great place to get started. -->\n\n## VideoSDK Inference\n\nVideoSDK Inference provides a **unified gateway** for accessing STT, LLM, TTS, denoising, and realtime models, with no per-provider API keys to manage. Authentication uses your `VIDEOSDK_AUTH_TOKEN`, and usage is billed against your VideoSDK account balance.\n\n```python\nfrom videosdk.agents.inference import STT, LLM, TTS, Denoise, 
Realtime\n```\n\n**Cascade mode with VideoSDK Inference:**\n\n```python\nasync def start_session(context: JobContext):\n    pipeline = Pipeline(\n        stt=STT.sarvam(model_id=\"saarika:v2.5\", language=\"en-IN\"),\n        llm=LLM.google(model_id=\"gemini-2.5-flash\"),\n        tts=TTS.sarvam(model_id=\"bulbul:v2\", speaker=\"anushka\", language=\"en-IN\"),\n        denoise=Denoise.sanas(),\n        vad=SileroVAD(),\n    )\n    session = AgentSession(agent=MyAgent(), pipeline=pipeline)\n    await session.start(wait_for_participant=True, run_until_shutdown=True)\n```\n\n**Realtime mode with VideoSDK Inference:**\n\n```python\nasync def start_session(context: JobContext):\n    pipeline = Pipeline(\n        llm=Realtime.gemini(\n            model_id=\"gemini-3.1-flash-live-preview\",\n            voice=\"Puck\",\n            language_code=\"en-US\",\n            response_modalities=[\"AUDIO\"],\n        )\n    )\n    session = AgentSession(agent=MyAgent(), pipeline=pipeline)\n    await session.start(wait_for_participant=True, run_until_shutdown=True)\n```\n\n> For per-provider billing details, see [Inference Pricing](https:\u002F\u002Fdocs.videosdk.live\u002Fhelp_docs\u002Fpricing-inference).\n\n---\n\n## Supported Libraries and Plugins\n\nThe framework supports integrations with a wide range of AI models and tools across the following categories:\n\n\n| Category                 | Services |\n|--------------------------|----------|\n| **Realtime Models**     | [OpenAI](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Frealtime\u002Fopenai) &#124; [Gemini](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Frealtime\u002Fgoogle-live-api) &#124; [AWS Nova Sonic](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Frealtime\u002Faws-nova-sonic) &#124; [Azure Voice Live](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Frealtime\u002Fazure-voice-live)|\n| **Speech-to-Text (STT)** | [OpenAI](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Fstt\u002Fopenai) &#124; [Google](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Fstt\u002Fgoogle) &#124; [Azure AI 
Speech](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Fstt\u002Fazure-ai-stt) &#124; [Azure OpenAI](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Fstt\u002Fazureopenai) &#124; [Sarvam AI](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Fstt\u002Fsarvam-ai) &#124; [Deepgram](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Fstt\u002Fdeepgram) &#124; [Cartesia](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Fstt\u002Fcartesia-stt) &#124; [AssemblyAI](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Fstt\u002Fassemblyai) &#124; [Navana](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Fstt\u002Fnavana) |\n| **Language Models (LLM)**| [OpenAI](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Fllm\u002Fopenai) &#124; [Azure OpenAI](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Fllm\u002Fazureopenai) &#124; [Google](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Fllm\u002Fgoogle-llm) &#124; [Sarvam AI](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Fllm\u002Fsarvam-ai-llm) &#124; [Anthropic](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Fllm\u002Fanthropic-llm) &#124; [Cerebras](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Fllm\u002FCerebras-llm) |\n| **Text-to-Speech (TTS)** | [OpenAI](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Ftts\u002Fopenai) &#124; [Google](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Ftts\u002Fgoogle-tts) &#124; [AWS Polly](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Ftts\u002Faws-polly-tts) &#124; [Azure AI Speech](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Ftts\u002Fazure-ai-tts) &#124; [Azure 
OpenAI](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Ftts\u002Fazureopenai) &#124; [Deepgram](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Ftts\u002Fdeepgram) &#124; [Sarvam AI](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Ftts\u002Fsarvam-ai-tts) &#124; [ElevenLabs](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Ftts\u002Feleven-labs) &#124; [Cartesia](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Ftts\u002Fcartesia-tts) &#124; [Resemble AI](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Ftts\u002Fresemble-ai-tts) &#124; [Smallest AI](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Ftts\u002Fsmallestai-tts) &#124; [Speechify](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Ftts\u002Fspeechify-tts) &#124; [InWorld](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Ftts\u002Finworld-ai-tts) &#124; [Neuphonic](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Ftts\u002Fneuphonic-tts) &#124; [Rime AI](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Ftts\u002Frime-ai-tts) &#124; [Hume AI](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Ftts\u002Fhume-ai-tts) &#124; [Groq](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Ftts\u002Fgroq-ai-tts) &#124; [LMNT AI](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Ftts\u002Flmnt-ai-tts) &#124; [Papla Media](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Ftts\u002Fpapla-media) |\n| **Voice Activity Detection (VAD)** | [SileroVAD](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Fsilero-vad) |\n| **Turn Detection Model** | [Namo Turn Detector](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Fnamo-turn-detector) |\n| **Virtual Avatar** | 
[Simli](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fcore-components\u002Favatar) &#124; [Anam](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Favatar\u002Fanam) &#124; 自定义（实现 `connect` \u002F `aclose` 协议） |\n| **LLM 编排** | [LangChain](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Fllm\u002Flangchain) &#124; [LangGraph](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fplugins\u002Fllm\u002Flanggraph) |\n| **降噪** | [RNNoise](https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fcore-components\u002Fde-noise) |\n\n> [!TIP]\n> **安装示例**\n>\n> ```bash\n> # 安装带有特定插件的包（加引号可避免 zsh 等 shell 对方括号的通配符展开）\n> pip install \"videosdk-agents[openai,elevenlabs,silero]\"\n>\n> # 安装单个插件\n> pip install videosdk-plugins-anthropic\n> pip install videosdk-plugins-deepgram\n> ```\n\n\n\n## 示例\n\n通过以下示例，您可以了解该框架的实际应用：\n\n### 核心模式示例\n\n\u003Ctable width=\"100%\">\n  \u003Ctr>\n    \u003Ctd width=\"50%\" valign=\"top\" style=\"padding-left: 20px;\">\n      \u003Ch3>🎙️ \u003Ca href=\"examples\u002Fcascade_basic.py\" target=\"_blank\">级联模式（基础）\u003C\u002Fa>\u003C\u002Fh3>\n      \u003Cp>使用 Google LLM + Deepgram STT + Cartesia TTS 的简单 STT → LLM → TTS 语音助手。\u003C\u002Fp>\n    \u003C\u002Ftd>\n    \u003Ctd width=\"50%\" valign=\"top\" style=\"padding-left: 20px;\">\n      \u003Ch3>🔧 \u003Ca href=\"examples\u002Fcascade_advanced.py\" target=\"_blank\">级联模式（高级）\u003C\u002Fa>\u003C\u002Fh3>\n      \u003Cp>具备 VAD、轮次检测及打断处理功能的高级级联助手。\u003C\u002Fp>\n    \u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr>\n    \u003Ctd width=\"50%\" valign=\"top\" style=\"padding-left: 20px;\">\n      \u003Ch3>⚡ \u003Ca href=\"examples\u002Frealtime_basic.py\" target=\"_blank\">实时模式\u003C\u002Fa>\u003C\u002Fh3>\n      \u003Cp>使用 Gemini Live 实现最低延迟语音交互的极简实时助手。\u003C\u002Fp>\n    \u003C\u002Ftd>\n    \u003Ctd width=\"50%\" valign=\"top\" style=\"padding-left: 20px;\">\n      \u003Ch3>🔀 \u003Ca href=\"examples\u002Fhybrid_mode(cascade+realtime)\u002F\" 
target=\"_blank\">混合模式\u003C\u002Fa>\u003C\u002Fh3>\n      \u003Cp>将级联模式与实时模式结合——自定义 STT 配合实时模型，或实时模式搭配自定义 TTS。\u003C\u002Fp>\n    \u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr>\n    \u003Ctd width=\"50%\" valign=\"top\" style=\"padding-left: 20px;\">\n      \u003Ch3>🧩 \u003Ca href=\"examples\u002Fcomposable_pipelines\u002F\" target=\"_blank\">可组合管道\u003C\u002Fa>\u003C\u002Fh3>\n      \u003Cp>灵活的管道配置——仅转录、仅 LLM、语音+聊天、完整语音助手。\u003C\u002Fp>\n    \u003C\u002Ftd>\n    \u003Ctd width=\"50%\" valign=\"top\" style=\"padding-left: 20px;\">\n      \u003Ch3>🪝 \u003Ca href=\"examples\u002Fvoice_pipeline_hooks.py\" target=\"_blank\">管道钩子\u003C\u002Fa>\u003C\u002Fh3>\n      \u003Cp>使用 \u003Ccode>@pipeline.on(...)\u003C\u002Fcode> 在任何阶段拦截并转换 STT、LLM 和 TTS 数据。\u003C\u002Fp>\n    \u003C\u002Ftd>\n  \u003C\u002Ftr>\n\u003C\u002Ftable>\n\n### 集成与高级功能\n\n\u003Ctable width=\"100%\">\n  \u003Ctr>\n    \u003Ctd width=\"50%\" valign=\"top\" style=\"padding-left: 20px;\">\n      \u003Ch3>🌐 \u003Ca href=\"examples\u002Fmcp_server_examples\u002F\" target=\"_blank\">带有 MCP 服务器的代理\u003C\u002Fa>\u003C\u002Fh3>\n      \u003Cp>股票市场分析师代理，可通过模型上下文协议实时访问市场数据。\u003C\u002Fp>\n    \u003C\u002Ftd>\n    \u003Ctd width=\"50%\" valign=\"top\" style=\"padding-left: 20px;\">\n      \u003Ch3>🤝 \u003Ca href=\"examples\u002Fa2a\u002F\" target=\"_blank\">代理间通信（A2A）\u003C\u002Fa>\u003C\u002Fh3>\n      \u003Cp>多代理工作流：客户代理将贷款咨询转交给贷款专员代理。\u003C\u002Fp>\n    \u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr>\n    \u003Ctd width=\"50%\" valign=\"top\" style=\"padding-left: 20px;\">\n      \u003Ch3>🦜 \u003Ca href=\"examples\u002Flangchain\u002F\" target=\"_blank\">LangChain 集成\u003C\u002Fa>\u003C\u002Fh3>\n      \u003Cp>在 VideoSDK 代理框架中使用 LangChain 工具和代理。\u003C\u002Fp>\n    \u003C\u002Ftd>\n    \u003Ctd width=\"50%\" valign=\"top\" style=\"padding-left: 20px;\">\n      \u003Ch3>🕸️ \u003Ca href=\"examples\u002Flanggraph\u002F\" target=\"_blank\">LangGraph 集成\u003C\u002Fa>\u003C\u002Fh3>\n      \u003Cp>使用 LangGraph 
状态机编排多步骤代理工作流。\u003C\u002Fp>\n    \u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr>\n    \u003Ctd width=\"50%\" valign=\"top\" style=\"padding-left: 20px;\">\n      \u003Ch3>🧠 \u003Ca href=\"examples\u002Fmem0\u002F\" target=\"_blank\">记忆代理（Mem0）\u003C\u002Fa>\u003C\u002Fh3>\n      \u003Cp>利用 Mem0 实现跨会话的持久化记忆，以保持长期上下文。\u003C\u002Fp>\n    \u003C\u002Ftd>\n    \u003Ctd width=\"50%\" valign=\"top\" style=\"padding-left: 20px;\">\n      \u003Ch3>👁️ \u003Ca href=\"examples\u002Fvision\u002F\" target=\"_blank\">视觉代理\u003C\u002Fa>\u003C\u002Fh3>\n      \u003Cp>多模态代理，通过级联或实时管道同时处理视频帧和语音。\u003C\u002Fp>\n    \u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr>\n    \u003Ctd width=\"50%\" valign=\"top\" style=\"padding-left: 20px;\">\n      \u003Ch3>🔄 \u003Ca href=\"examples\u002Fn8n_workflow\u002F\" target=\"_blank\">n8n 工作流集成\u003C\u002Fa>\u003C\u002Fh3>\n      \u003Cp>使用 Webhook 在您的代理中触发 n8n 自动化工作流。\u003C\u002Fp>\n    \u003C\u002Ftd>\n    \u003Ctd width=\"50%\" valign=\"top\" style=\"padding-left: 20px;\">\n      \u003Ch3>🧑‍💼 \u003Ca href=\"examples\u002Fhuman_in_the_loop\u002F\" target=\"_blank\">人工介入\u003C\u002Fa>\u003C\u002Fh3>\n      \u003Cp>在对话过程中通过 Discord 或其他渠道升级到人工客服。\u003C\u002Fp>\n    \u003C\u002Ftd>\n  \u003C\u002Ftr>\n\u003C\u002Ftable>\n\n### 使用场景示例\n\n\u003Ctable width=\"100%\">\n  \u003Ctr>\n    \u003Ctd width=\"50%\" valign=\"top\" style=\"padding-left: 20px;\">\n      \u003Ch3>📞 \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fvideosdk-community\u002Fai-telephony-demo\" target=\"_blank\">AI 电话客服代理\u003C\u002Fa>\u003C\u002Fh3>\n      \u003Cp>通过语音驱动的电话客服代理预约医院挂号。\u003C\u002Fp>\n    \u003C\u002Ftd>\n    \u003Ctd width=\"50%\" valign=\"top\" style=\"padding-left: 20px;\">\n      \u003Ch3>✈️ \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fvideosdk-community\u002Fvideosdk-whatsapp-ai-calling-agent\" target=\"_blank\">AI WhatsApp 代理\u003C\u002Fa>\u003C\u002Fh3>\n      \u003Cp>随时随地查询并预订酒店客房。\u003C\u002Fp>\n    \u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr>\n    
\u003Ctd width=\"50%\" valign=\"top\" style=\"padding-left: 20px;\">\n      \u003Ch3>🛒 \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fvideosdk-live\u002Fagents-quickstart\u002Ftree\u002Fmain\u002FRAG\" target=\"_blank\">具备知识库的代理（RAG）\u003C\u002Fa>\u003C\u002Fh3>\n      \u003Cp>基于文档知识回答问题的代理。\u003C\u002Fp>\n    \u003C\u002Ftd>\n    \u003Ctd width=\"50%\" valign=\"top\" style=\"padding-left: 20px;\">\n      \u003Ch3>🎭 \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fvideosdk-live\u002Fagents-quickstart\u002Ftree\u002Fmain\u002FVirtual%20Avatar\" target=\"_blank\">虚拟化身代理\u003C\u002Fa>\u003C\u002Fh3>\n      \u003Cp>展示天气预报的虚拟化身代理。\u003C\u002Fp>\n    \u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr>\n    \u003Ctd width=\"50%\" valign=\"top\" style=\"padding-left: 20px;\">\n      \u003Ch3>🏥 \u003Ca href=\"use_case_examples\u002Fappointment_booking_agent.py\" target=\"_blank\">预约挂号\u003C\u002Fa>\u003C\u002Fh3>\n      \u003Cp>用于诊所预约的医疗前台接待员。\u003C\u002Fp>\n    \u003C\u002Ftd>\n    \u003Ctd width=\"50%\" valign=\"top\" style=\"padding-left: 20px;\">\n      \u003Ch3>📣 \u003Ca href=\"use_case_examples\u002Fannouncement_agent.py\" target=\"_blank\">公告代理\u003C\u002Fa>\u003C\u002Fh3>\n      \u003Cp>主动外呼代理，用于发布公告。\u003C\u002Fp>\n    \u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr>\n    \u003Ctd width=\"50%\" valign=\"top\" style=\"padding-left: 20px;\">\n      \u003Ch3>🎧 \u003Ca href=\"use_case_examples\u002Fcustomer_support_agent.py\" target=\"_blank\">客户支持\u003C\u002Fa>\u003C\u002Fh3>\n      \u003Cp>具备升级机制和知识库的 AI 客户支持代理。\u003C\u002Fp>\n    \u003C\u002Ftd>\n    \u003Ctd width=\"50%\" valign=\"top\" style=\"padding-left: 20px;\">\n      \u003Ch3>📂 \u003Ca href=\"use_case_examples\u002F\" target=\"_blank\">更多使用场景\u003C\u002Fa>\u003C\u002Fh3>\n      \u003Cp>呼叫中心、IVR、医疗分诊、语言辅导、会议记录等。\u003C\u002Fp>\n    \u003C\u002Ftd>\n  \u003C\u002Ftr>\n\u003C\u002Ftable>\n\n## 文档\n\n获取全面指南和 API 参考：\n\n\u003Ctable width=\"100%\">\n  \u003Ctr>\n    \u003Ctd width=\"33%\" valign=\"top\" 
style=\"padding-left: 20px;\">\n      \u003Ch3>📄 \u003Ca href=\"https:\u002F\u002Fdocs.videosdk.live\u002Fai_agents\u002Fintroduction\" target=\"_blank\">官方文档\u003C\u002Fa>\u003C\u002Fh3>\n      \u003Cp>完整的框架文档\u003C\u002Fp>\n    \u003C\u002Ftd>\n    \u003Ctd width=\"33%\" valign=\"top\" style=\"padding-left: 20px;\">\n      \u003Ch3>📝 \u003Ca href=\"https:\u002F\u002Fdocs.videosdk.live\u002Fagent-sdk-reference\u002Fagents\u002F\" target=\"_blank\">API 参考\u003C\u002Fa>\u003C\u002Fh3>\n      \u003Cp>详细的 API 文档\u003C\u002Fp>\n    \u003C\u002Ftd>\n    \u003Ctd width=\"33%\" valign=\"top\" style=\"padding-left: 20px;\">\n      \u003Ch3>📂 \u003Ca href=\"examples\u002F\" target=\"_blank\">示例目录\u003C\u002Fa>\u003C\u002Fh3>\n      \u003Cp>更多代码示例\u003C\u002Fp>\n    \u003C\u002Ftd>\n  \u003C\u002Ftr>\n\u003C\u002Ftable>\n\n## 贡献\n\n我们欢迎贡献！以下是您可以提供帮助的方式：\n\n\u003Ctable width=\"100%\">\n  \u003Ctr>\n    \u003Ctd width=\"25%\" valign=\"top\" style=\"padding-left: 20px;\">\n      \u003Ch3>🐞 \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fvideosdk-live\u002Fagents\u002Fissues\" target=\"_blank\">报告问题\u003C\u002Fa>\u003C\u002Fh3>\n      \u003Cp>提交关于 bug 或功能请求的问题\u003C\u002Fp>\n    \u003C\u002Ftd>\n    \u003Ctd width=\"25%\" valign=\"top\" style=\"padding-left: 20px;\">\n      \u003Ch3>🔀 \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fvideosdk-live\u002Fagents\u002Fpulls\" target=\"_blank\">提交 PR\u003C\u002Fa>\u003C\u002Fh3>\n      \u003Cp>创建包含改进的拉取请求\u003C\u002Fp>\n    \u003C\u002Ftd>\n    \u003Ctd width=\"25%\" valign=\"top\" style=\"padding-left: 20px;\">\n      \u003Ch3>🛠️ \u003Ca href=\"BUILD_YOUR_OWN_PLUGIN.md\" target=\"_blank\">开发插件\u003C\u002Fa>\u003C\u002Fh3>\n      \u003Cp>按照我们的插件开发指南进行操作\u003C\u002Fp>\n    \u003C\u002Ftd>\n    \u003Ctd width=\"25%\" valign=\"top\" style=\"padding-left: 20px;\">\n      \u003Ch3>💬 \u003Ca href=\"https:\u002F\u002Fdiscord.com\u002Finvite\u002FGpmj6eCq5u\" target=\"_blank\">加入社区\u003C\u002Fa>\u003C\u002Fh3>\n      \u003Cp>在 Discord 
上与我们交流\u003C\u002Fp>\n    \u003C\u002Ftd>\n  \u003C\u002Ftr>\n\u003C\u002Ftable>\n\n该框架目前处于积极开发阶段，因此我们非常欢迎以新插件、新功能、bug 修复或文档改进等形式做出的贡献。\n\n### 🛠️ 构建自定义插件\n\n想要集成新的 AI 提供商吗？请查看 **[构建您自己的插件](BUILD_YOUR_OWN_PLUGIN.md)**，了解：\n\n- 分步插件创建指南  \n- 目录结构和文件要求  \n- STT、LLM 和 TTS 的实现示例  \n- 测试和提交指南\n\n## 社区与支持\n\n与 VideoSDK 保持联系：\n\n\u003Ctable width=\"100%\">\n  \u003Ctr>\n    \u003Ctd width=\"25%\" valign=\"top\" style=\"padding-left: 20px;\">\n      \u003Ch3>💬 \u003Ca href=\"https:\u002F\u002Fdiscord.com\u002Finvite\u002FGpmj6eCq5u\" target=\"_blank\">Discord\u003C\u002Fa>\u003C\u002Fh3>\n      \u003Cp>加入我们的社区\u003C\u002Fp>\n    \u003C\u002Ftd>\n    \u003Ctd width=\"25%\" valign=\"top\" style=\"padding-left: 20px;\">\n      \u003Ch3>🐦 \u003Ca href=\"https:\u002F\u002Fx.com\u002Fvideo_sdk\" target=\"_blank\">Twitter\u003C\u002Fa>\u003C\u002Fh3>\n      \u003Cp>@video_sdk\u003C\u002Fp>\n    \u003C\u002Ftd>\n    \u003Ctd width=\"25%\" valign=\"top\" style=\"padding-left: 20px;\">\n      \u003Ch3>▶️ \u003Ca href=\"https:\u002F\u002Fwww.youtube.com\u002Fc\u002FVideoSDK\" target=\"_blank\">YouTube\u003C\u002Fa>\u003C\u002Fh3>\n      \u003Cp>VideoSDK 频道\u003C\u002Fp>\n    \u003C\u002Ftd>\n    \u003Ctd width=\"25%\" valign=\"top\" style=\"padding-left: 20px;\">\n      \u003Ch3>🔗 \u003Ca href=\"https:\u002F\u002Fwww.linkedin.com\u002Fcompany\u002Fvideo-sdk\u002F\" target=\"_blank\">LinkedIn\u003C\u002Fa>\u003C\u002Fh3>\n      \u003Cp>VideoSDK 公司\u003C\u002Fp>\n    \u003C\u002Ftd>\n  \u003C\u002Ftr>\n\u003C\u002Ftable>\n\n> [!TIP]\n>\n> **支持本项目！** ⭐️  \n> 请给仓库点个赞，加入社区，并通过提供反馈、报告 bug 或贡献插件来帮助我们改进 VideoSDK。\n\n---\n\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fvideosdk-live\u002Fagents\u002Fgraphs\u002Fcontributors\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvideosdk-live_agents_readme_2076d710a3aa.png\" \u002F>\n\u003C\u002Fa>\n\n**\u003Ccenter>由 VideoSDK 团队用心打造\u003C\u002Fcenter>**","# VideoSDK AI Agents 快速上手指南\n\nVideoSDK AI Agents 是一个开源 Python 
框架，用于构建生产级的实时语音和多模态 AI 智能体。它能让 AI 作为实时参与者加入视频会议房间，自动处理音频流、说话人检测、打断逻辑和媒体路由，让你专注于智能体业务逻辑。\n\n## 环境准备\n\n在开始之前，请确保满足以下要求：\n\n*   **操作系统**: macOS, Linux 或 Windows\n*   **Python 版本**: Python 3.12 或更高版本\n*   **VideoSDK 凭证**:\n    *   VideoSDK 认证 Token (JWT)：可在 [VideoSDK 控制台](https:\u002F\u002Fapp.videosdk.live) 生成。\n    *   Meeting ID (房间号)：可通过 [Create Room API](https:\u002F\u002Fdocs.videosdk.live\u002Fapi-reference\u002Frealtime-communication\u002Fcreate-room) 或控制台创建。\n*   **第三方 API Key**: 根据你选择的模型提供商（如 OpenAI, Google Gemini, Deepgram, ElevenLabs 等）准备相应的 API Key。\n\n## 安装步骤\n\n推荐使用 **UV** 包管理器，它能自动处理虚拟环境和依赖，速度更快。\n\n### 方式一：使用 UV (推荐)\n\n1.  **安装 UV** (如果尚未安装):\n    参考 [UV 安装指南](https:\u002F\u002Fdocs.astral.sh\u002Fuv\u002Fgetting-started\u002Finstallation\u002F)。\n\n2.  **初始化项目并安装核心包**:\n    ```bash\n    uv init my-agent-project\n    cd my-agent-project\n    uv add videosdk-agents\n    ```\n\n3.  **安装所需插件**:\n    根据需求安装特定提供商的插件（例如 OpenAI 和 Deepgram）：\n    ```bash\n    uv add videosdk-plugins-openai\n    uv add videosdk-plugins-deepgram\n    ```\n\n4.  **运行智能体**:\n    ```bash\n    uv run python main.py\n    ```\n\n### 方式二：使用 pip\n\n1.  **创建并激活虚拟环境**:\n\n    *   **macOS \u002F Linux**:\n        ```bash\n        python3 -m venv venv\n        source venv\u002Fbin\u002Factivate\n        ```\n    *   **Windows**:\n        ```bash\n        python -m venv venv\n        venv\\Scripts\\activate\n        ```\n\n2.  **安装核心包**:\n    ```bash\n    pip install videosdk-agents\n    ```\n\n3.  **安装所需插件**:\n    ```bash\n    # 示例：安装 OpenAI 插件和轮次检测插件\n    pip install videosdk-plugins-openai\n    pip install videosdk-plugins-turn-detector\n    ```\n\n## 基本使用\n\n以下是构建第一个语音智能体的最小化示例。该示例展示了如何定义一个继承自 `Agent` 类的自定义智能体，并配置级联模式（STT → LLM → TTS）。\n\n### 1. 
编写代码 (`main.py`)\n\n```python\nimport os\nfrom videosdk.agents import Agent, AgentSession, JobContext, Pipeline\nfrom videosdk.plugins.openai import OpenAILLM # 级联模式使用 OpenAILLM；如需实时模式可改用 OpenAIRealtime\nfrom videosdk.plugins.deepgram import DeepgramSTT\nfrom videosdk.plugins.elevenlabs import ElevenLabsTTS\n\n# 设置环境变量 (在实际项目中建议使用 .env 文件)\nos.environ[\"VIDEOSDK_TOKEN\"] = \"YOUR_VIDEO_SDK_TOKEN\"\nos.environ[\"OPENAI_API_KEY\"] = \"YOUR_OPENAI_API_KEY\"\nos.environ[\"DEEPGRAM_API_KEY\"] = \"YOUR_DEEPGRAM_API_KEY\"\nos.environ[\"ELEVENLABS_API_KEY\"] = \"YOUR_ELEVENLABS_API_KEY\"\n\nclass VoiceAgent(Agent):\n    def __init__(self):\n        super().__init__(\n            instructions=\"You are a helpful voice assistant. Keep responses concise.\",\n            tools=[] # 可在此注册外部工具函数\n        )\n\n    async def on_enter(self) -> None:\n        \"\"\"当智能体加入会议时触发\"\"\"\n        await self.session.say(\"Hi there! How can I help you today?\")\n    \n    async def on_exit(self) -> None:\n        \"\"\"当智能体离开会议时触发\"\"\"\n        await self.session.say(\"Goodbye!\")\n\nasync def start_session(context: JobContext):\n    # 配置 Pipeline：级联模式 (STT -> LLM -> TTS)\n    pipeline = Pipeline(\n        stt=DeepgramSTT(),\n        llm=OpenAILLM(model=\"gpt-4o\"), # 或使用 GoogleLLM, AnthropicLLM 等\n        tts=ElevenLabsTTS(),\n    )\n    \n    session = AgentSession(agent=VoiceAgent(), pipeline=pipeline)\n    \n    # 启动会话：等待用户加入后开始，直到关闭\n    await session.start(wait_for_participant=True, run_until_shutdown=True)\n\n# 入口点 (具体启动方式取决于你的部署环境，本地测试通常配合 VideoSDK Worker)\n# 此处仅为逻辑展示，实际运行需配合 VideoSDK Worker 服务\n```\n\n### 2. 获取 Meeting ID\n\n在运行智能体前，你需要一个有效的 Meeting ID。可以使用 cURL 调用 API 生成：\n\n```bash\ncurl -X POST https:\u002F\u002Fapi.videosdk.live\u002Fv2\u002Frooms \\\n  -H \"Authorization: YOUR_JWT_TOKEN_HERE\" \\\n  -H \"Content-Type: application\u002Fjson\"\n```\n\n### 3. 运行与测试\n\n1.  将上述代码保存为 `main.py`。\n2.  替换代码中的 `YOUR_VIDEO_SDK_TOKEN` 和其他 API Key。\n3.  使用安装步骤中的命令运行脚本（例如 `uv run python main.py`）。\n4.  
智能体启动后，使用生成的 Meeting ID 在客户端（Web\u002FMobile\u002FDesktop）加入房间，即可与 AI 进行实时语音对话。\n\n> **提示**: VideoSDK 支持多种运行模式，包括**级联模式**（灵活组合不同厂商的 STT\u002FLLM\u002FTTS）、**实时模式**（使用 Gemini Live 或 OpenAI Realtime 等统一模型以获得超低延迟）以及**混合模式**。只需更改 `Pipeline` 初始化时的组件即可切换。","某在线教育平台希望为外教一对一课程引入“AI 陪练助手”，使其能实时加入视频教室，在学生卡壳时提供语音提示或纠正发音。\n\n### 没有 agents 时\n- **开发链路割裂**：团队需分别对接语音识别（STT）、大模型（LLM）和语音合成（TTS）三个独立服务，手动编写代码串联数据流，耗时数周。\n- **交互体验生硬**：难以精准判断用户说话结束时机，导致 AI 频繁打断学生发言，或在学生说完后延迟过久才回应，破坏课堂节奏。\n- **并发维护困难**：随着课程量增加，音频流的低延迟传输和房间状态管理变得极其复杂，服务器资源消耗巨大且不稳定。\n- **多模态扩展受阻**：若想让学生看到 AI 的数字人形象或共享屏幕，需重构底层架构，几乎相当于重写项目。\n\n### 使用 agents 后\n- **流水线一键构建**：利用 agents 统一的 `Pipeline` 类，只需配置组件即可自动连接 STT→LLM→TTS 全链路，将开发周期从数周缩短至几天。\n- **拟人化实时互动**：框架内置高精度的语音活动检测（VAD）和轮次判断机制，AI 能自然地在学生停顿时介入，支持随时打断，对话流畅度媲美真人。\n- **基础设施托管**：agents 自动处理音频流的低延迟路由与房间生命周期管理，开发者无需关心底层并发细节，系统稳定性显著提升。\n- **多模态无缝集成**：基于原生支持的视频房间能力，可轻松为 AI 添加虚拟形象或屏幕共享功能，无需改动核心逻辑即可升级交互维度。\n\nagents 通过将复杂的实时音视频通信与 AI 推理链路封装为标准化流程，让开发者能专注于业务逻辑，快速打造出具备“真人感”的多模态智能助教。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fvideosdk-live_agents_b8d2a1e8.png","videosdk-live","Video SDK","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fvideosdk-live_39073a67.jpg","Video SDK is an API that enables developers to easily build live audio & video experiences in any platform within minutes. 
👩‍💻👨‍💻",null,"support@videosdk.live","video_sdk","https:\u002F\u002Fwww.videosdk.live","https:\u002F\u002Fgithub.com\u002Fvideosdk-live",[82,86,90],{"name":83,"color":84,"percentage":85},"Python","#3572A5",100,{"name":87,"color":88,"percentage":89},"Shell","#89e051",0,{"name":91,"color":92,"percentage":89},"HTML","#e34c26",615,86,"2026-04-13T13:07:44","Apache-2.0","Linux, macOS, Windows","未说明",{"notes":100,"python":101,"dependencies":102},"该工具是一个用于构建实时语音和多模态 AI 代理的 Python 框架，需连接 VideoSDK 房间。运行前需要 VideoSDK 认证令牌和会议 ID，以及所使用第三方服务（如 OpenAI, Google Gemini, ElevenLabs 等）的 API 密钥。推荐使用 UV 作为包管理器来安装核心包和可选插件。支持级联模式（STT→LLM→TTS）、实时模式和混合模式。","3.12+",[103,104,105,106],"videosdk-agents","videosdk-plugins-openai","videosdk-plugins-deepgram","videosdk-plugins-turn-detector",[108,14,13,109],"音频","其他","2026-03-27T02:49:30.150509","2026-04-14T20:55:34.910814",[113],{"id":114,"question_zh":115,"answer_zh":116,"source_url":117},33287,"为什么无法运行 Agent Avatar（虚拟人代理）？","此问题通常由版本不兼容引起。请尝试以下两个步骤解决：\n1. 将 videosdk-agents 包升级到 0.0.46 版本：`pip install videosdk-agents==0.0.46`\n2. 
指定安装 simli-ai 库的特定版本以修复库冲突：`pip install simli-ai==0.1.25`","https:\u002F\u002Fgithub.com\u002Fvideosdk-live\u002Fagents\u002Fissues\u002F113",[119,124,129,134,139,144,149,154,159,164,169,174,179,184,189,194,199,204,209,214],{"id":120,"version":121,"summary_zh":122,"released_at":123},255465,"v1.0.6","- 对话图支持 #259\n     - 按照有向图构建多轮语音和聊天 AI 助手。您定义对话流程，LLM 负责处理自然语言交互。引擎会严格遵循您的转接逻辑——LLM 不会做出任何路由决策。\n- 修复：在创建房间之前使用环境中的房间 ID #260    ","2026-04-10T06:29:33",{"id":125,"version":126,"summary_zh":127,"released_at":128},255466,"v1.0.5","- 错误修复与功能增强\n    - 从代理框架中移除了 PlaygroundManager 及相关代码。#255\n    - 杂项：移除旧的 videosdk CLI 入口点 #256\n    - 修复：在静音帧期间停止重置 VAD 模型状态 #257\n","2026-04-08T14:21:07",{"id":130,"version":131,"summary_zh":132,"released_at":133},255467,"v1.0.4","- 修复：解决长时间对话后 VAD 卡住的问题 #251\n    - 音频采集相关改进\n    - 修复：当打包文件损坏时，自动下载 silero VAD 模型","2026-04-06T07:46:09",{"id":135,"version":136,"summary_zh":137,"released_at":138},255468,"v1.0.3","- 功能：增强上下文处理、大模型行为及工具执行 #248 \n\n    - 支持可配置的递归与并行工具调用执行\n    - **功能：ContextWindow**\n       - 为基于大模型的对话摘要生成添加 ContextWindow，用智能历史压缩替代盲目的截断\n       - 支持自动摘要 + 结合 `max_tokens` 和 `max_context_items` 进行截断\n       - 通过 `keep_recent_turns` 保留最后 N 轮对话，以保证回复质量\n       - 引入 `max_tool_calls_per_turn` 来防止工具调用陷入无限循环\n\n        使用方法：\n```python\n            from videosdk.agents import ContextWindow\n            \n            pipeline = Pipeline(\n                ....\n                context_window=ContextWindow(\n                    max_tokens=4000,\n                    max_context_items=20,\n                    keep_recent_turns=3,\n                    max_tool_calls_per_turn=10,\n                ),\n            )\n```\n\n- 修复大模型钩子时机及 TTS 钩子导致的双倍语音问题 #247\n      - 修复 `@pipeline.on(\"llm\")` 钩子在 TTS 已经说出文本之后才触发的问题，从而无法修改文本\n      - 修复使用 `@pipeline.on(\"tts\")` 钩子时偶尔出现的代理双倍说话问题\n\n- 功能：为轨道录制启动添加重试逻辑，并修复合并负载 #249","2026-04-04T11:31:53",{"id":140,"version":141,"summary_zh":142,"released_at":143},255469,"v1.0.2","- 修复 bug 和优化改进\n- 新增对 SarvamAILLM 的模型支持 
#242\n","2026-04-02T11:14:59",{"id":145,"version":146,"summary_zh":147,"released_at":148},255470,"v1.0.1","- 修复无 participant_id 的自管理头像的连接流程（https:\u002F\u002Fgithub.com\u002Fvideosdk-live\u002Fagents\u002Fpull\u002F240）\n        - 修复无 participant_id 的自管理头像的连接流程\n        - 修复在使用 S2S 头像时出现的中断问题","2026-03-31T05:06:41",{"id":150,"version":151,"summary_zh":152,"released_at":153},255471,"v1.0.0","# Agents v1.0.0\n- 功能：Agents v1.0.0 — 统一的管道、钩子系统、头像及模块化架构 #169\n- 重构\u002F代理框架核心指标收集 #207\n\n> **稳定版**\n>\n> 这是 VideoSDK 代理框架全新统一管道架构的稳定版本。\n> 它与 v0.x 版本（最高至 v0.0.73）**不兼容**。\n\n---\n\n## 架构\n\nAgents v1.0.0 引入了一个围绕三大原则重新设计的核心：\n\n- **松耦合** — STT、LLM、TTS、VAD 和话轮检测等各阶段相互独立、可替换，彼此之间不存在硬性依赖。\n- **低延迟优化** — `PipelineOrchestrator` 负责管理组件连接与事件路由，确保每个阶段在运行时不会阻塞其他阶段。\n- **设计上可组合** — 将任意组件组合传递给 `Pipeline(...)`，调度器会自动选择合适的执行模式；通过 `@pipeline.on(...)` 钩子接入任意阶段，即可拦截或转换数据，而无需继承类。\n\n`PipelineOrchestrator` 处于核心位置，负责管理组件生命周期、在各阶段间路由音频和文本、分发话轮事件，并协调钩子。开发者无需直接与它交互，一切由传入 `Pipeline` 的配置驱动。\n\n---\n\n## 新特性\n\n### 统一管道架构\n\n`CascadingPipeline` 和 `RealtimePipeline` 已被单一的 **`Pipeline`** 类取代。只需配置一个管道，SDK 便会根据您提供的组件自动确定执行模式。\n\n**旧版 (v0.x)：**\n```python\nfrom videosdk.agents import CascadingPipeline, RealtimePipeline\n\npipeline = CascadingPipeline(stt=..., llm=..., tts=..., vad=..., turn_detector=...)\npipeline = RealtimePipeline(llm=OpenAIRealtime(...))\n```\n\n**新版 (v1.0.0)：**\n```python\nfrom videosdk.agents import Pipeline\n\npipeline = Pipeline(stt=..., llm=..., tts=..., vad=..., turn_detector=...)\npipeline = Pipeline(llm=OpenAIRealtime(...))\n```\n\n---\n\n### 🧩 级联模式 — STT → LLM → TTS\n\n您可以自由组合任意提供商链路，完全掌控每个环节。\n\n```python\npipeline = Pipeline(\n    stt=DeepgramSTT(),\n    llm=GoogleLLM(),\n    tts=CartesiaTTS(),\n    vad=SileroVAD(),\n    turn_detector=TurnDetector(),\n)\nsession = AgentSession(agent=MyAgent(), pipeline=pipeline)\nawait session.start(wait_for_participant=True, run_until_shutdown=True)\n```\n\n---\n\n### ⚡ 实时模式 — 使用统一模型实现最低延迟\n\n只需使用一个实时模型即可完成整个语音处理流程。\n\n```python\npipeline = 
Pipeline(\n    llm=GeminiRealtime(\n        model=\"gemini-3.1-flash-live-preview\",\n        config=GeminiLiveConfig(voice=\"Leda\", response_modalities=[\"AUDIO\"]),\n    )\n)\nsession = AgentSession(agent=MyAgent(), pipeline=pipeline)\nawait session.start(wait_for_participant=True, run_until_shutdown=True)\n```\n\n其他支持的实时模型包括：`OpenAIRealtime`、`AWSNovaSonic`、`AzureVoiceLive`。\n\n---\n\n### 🔀 混合模式 — 结合级联与实","2026-03-27T15:02:23",{"id":155,"version":156,"summary_zh":157,"released_at":158},255472,"v0.0.73","- 错误修复与改进\n    - 修复推理网关中的实时管道参数 https:\u002F\u002Fgithub.com\u002Fvideosdk-live\u002Fagents\u002Fpull\u002F235","2026-03-25T08:36:23",{"id":160,"version":161,"summary_zh":162,"released_at":163},255473,"v0.0.72","- 修复\u002F最小字数 #233\n      - 错误修复与改进","2026-03-20T11:40:53",{"id":165,"version":166,"summary_zh":167,"released_at":168},255474,"v0.0.71","修复：转录时间戳 + 保护传输转录 #231\r\n- 错误修复与改进\r\n\r\n","2026-03-18T10:13:23",{"id":170,"version":171,"summary_zh":172,"released_at":173},255475,"v0.0.70","- Worker refactor, updates, and improvements  #222\r\n    - update: wait time added\r\n- feat: add support for screen sharing streams #229\r\n","2026-03-14T15:03:31",{"id":175,"version":176,"summary_zh":177,"released_at":178},255476,"v0.0.69","- Plugin Update : Fix interrupt in sarvam tts https:\u002F\u002Fgithub.com\u002Fvideosdk-live\u002Fagents\u002Fpull\u002F226","2026-03-13T07:55:23",{"id":180,"version":181,"summary_zh":182,"released_at":183},255477,"v0.0.68","- Inferencing common data type structure #221\r\n   - Improved VideoSDK Inference Gateway\r\n- feat: add `no_participant_timeout_seconds` to auto-end session when no participant joins #224\r\n   - When the agent joins a meeting but no participant connects within the configured timeout (default 90s), the session now ends automatically instead of staying connected indefinitely.\r\n    - Add `no_participant_timeout_seconds` param to `RoomOptions`\r\n- agent participant implementation for #215\r\n- Force cleanup after 
jobctx is cleaned up #223","2026-03-12T12:26:03",{"id":185,"version":186,"summary_zh":187,"released_at":188},255478,"1.0.0b1","# Agents 1.0.0b1 \r\n\r\n⚠️ **Beta Release**\r\n\r\nThis release introduces the new **unified pipeline architecture** for the VideoSDK Agents framework.\r\n\r\nIt is **not backward compatible with v0.x (up to v0.0.67)**.\r\n\r\nThis beta release is intended for testing the new architecture before the stable `1.0.0` release.\r\n\r\n# Unified Pipeline Architecture\r\n\r\n`CascadingPipeline` and `RealtimePipeline` have been replaced with a single **Pipeline** class.\r\n\r\nDevelopers now configure **one pipeline**, and the SDK automatically determines the optimal execution mode based on the components provided.\r\n\r\n---\r\n\r\n# Before (v0.0.67 and earlier)\r\n\r\n```python\r\nfrom videosdk.agents import CascadingPipeline, RealtimePipeline\r\n\r\n# Cascading pipeline\r\npipeline = CascadingPipeline(\r\n    stt=..., \r\n    llm=..., \r\n    tts=..., \r\n    vad=..., \r\n    turn_detector=...\r\n)\r\n\r\n# Realtime pipeline\r\npipeline = RealtimePipeline(\r\n    llm=OpenAIRealtime(...)\r\n)\r\n```\r\n\r\n---\r\n\r\n# After (v1.0.0b1)\r\n\r\n```python\r\nfrom videosdk.agents import Pipeline\r\n\r\n# Full voice agent\r\npipeline = Pipeline(\r\n    stt=..., \r\n    llm=..., \r\n    tts=..., \r\n    vad=..., \r\n    turn_detector=...\r\n)\r\n\r\n# Realtime voice agent\r\npipeline = Pipeline(\r\n    llm=OpenAIRealtime(...)\r\n)\r\n```\r\n\r\n---\r\n\r\n# Flexible Agent Composition\r\n\r\nThe new `Pipeline` allows you to build different types of agents depending on your use case.\r\n\r\n### Transcription Agent\r\n\r\n```python\r\nPipeline(stt=...)\r\n```\r\n\r\n### Voice + Chat Agent\r\n\r\n```python\r\nPipeline(stt=..., llm=..., tts=...)\r\n```\r\n\r\n### Full Voice Agent with Turn Detection\r\n\r\n```python\r\nPipeline(stt=..., llm=..., tts=..., vad=..., turn_detector=...)\r\n```\r\n\r\n### Chatbot (Text 
Only)\r\n\r\n```python\r\nPipeline(llm=...)\r\n```\r\n\r\n### Realtime Voice Agent\r\n\r\n```python\r\nPipeline(llm=OpenAIRealtime(...))\r\n```\r\n\r\nYou simply include the components you need, and the SDK handles the rest.\r\n\r\n---\r\n\r\n# Conversational Flow Removal\r\n\r\nThe previous **Conversational Flow abstraction has been removed**.\r\n\r\nInstead of defining conversation flows separately, developers can now **directly control behavior using Pipeline Hooks**.\r\n\r\nThis gives full flexibility to intercept and modify data at any stage of the pipeline.\r\n\r\n---\r\n\r\n# Pipeline Hooks\r\n\r\nCustomize pipeline behavior using:\r\n\r\n```python\r\n@pipeline.on(...)\r\n```\r\n\r\nAvailable hook points include:\r\n\r\n- `stt`\r\n- `tts`\r\n- `llm`\r\n- `vision_frame`\r\n- `user_turn_start`\r\n- `user_turn_end`\r\n- `agent_turn_start`\r\n- `agent_turn_end`\r\n- `content_generated`\r\n\r\nHooks allow you to preprocess input, modify outputs, control LLM invocation, and implement custom logic.\r\n---\r\n\r\nThese hooks allow you to implement:\r\n\r\n- custom preprocessing\r\n- business logic\r\n- LLM routing\r\n- speech formatting\r\n- pronunciation correction\r\n\r\ndirectly inside the pipeline.\r\n\r\n---\r\n\r\n## API Changes\r\n\r\n| Previous (v0.x) | Replacement (v1.0.0b1) |\r\n|-----------------|------------------------|\r\n| `CascadingPipeline` | Use **`Pipeline`** |\r\n| `RealtimePipeline` | Use **`Pipeline`** |\r\n| `ConversationalFlow` | Use **`Pipeline Hooks`** |\r\n\r\nInstead of defining conversational flows separately, you can now implement custom logic directly using pipeline hooks:\r\n\r\n```python\r\n@pipeline.on(\"stt\")\r\n@pipeline.on(\"llm\")\r\n@pipeline.on(\"tts\")\r\n```\r\n\r\nThis provides more flexibility to control preprocessing, LLM invocation, and speech synthesis behavior directly within the pipeline.\r\n\r\nOther features such as **function tools, Agent lifecycle, AgentSession, and WorkerJob** continue to work as 
before.\r\n\r\n---\r\n\r\n# Migration Guide\r\n\r\nReplace:\r\n\r\n```python\r\nCascadingPipeline(...)\r\nRealtimePipeline(...)\r\n```\r\n\r\nwith:\r\n\r\n```python\r\nPipeline(...)\r\n```\r\n\r\nConstructor arguments remain the same — simply pass your components and the SDK will handle execution automatically.","2026-03-05T11:07:59",{"id":190,"version":191,"summary_zh":192,"released_at":193},255479,"v0.0.67","- Fix\u002Fcalculations #216\r\n    -  Metrics improvements and refinements\r\n- fix: prevent duplicate MCP tools by only extending with newly added tools #217\r\n- Updates Cambai Plugin #218","2026-03-03T10:53:52",{"id":195,"version":196,"summary_zh":197,"released_at":198},255480,"v0.0.66","- deprecated : remove the cli in version >= 0.0.65 #210 \r\n- Remove unused parameter #205\r\n- feat: upgrade Simli AI plugin to SDK v2.0 #206\r\n- feat: enhance TTS and STT plugins with additional configuration options #212 \r\n    - Cartesia,Deepgram, ElevenLabs, Google Plugin Updates.","2026-02-26T13:51:16",{"id":200,"version":201,"summary_zh":202,"released_at":203},255481,"v0.0.65","- feat: add Anam virtual avatar support (#188)\r\n    - Add support for Anam Virtual Avatar with VideoSDK AI Agents\r\n- chore: update Sarvam plugin (#199)\r\n     - Upgrade Sarvam AI plugin to the latest configuration provided by Sarvam (STT, LLM, TTS)\r\n- feat: add Camb AI TTS plugin (#200)\r\n    - Add support for Camb AI Text-to-Speech (TTS)\r\n- fix: TURN-D handling in cascading pipeline (#203)\r\n    - Fix issue where missing TURN-D caused unintended agent execution\r\n- fix(rnnoise): resolve segfaults and init errors on ARM64\u002FLinux (#202)\r\n    - Fix rnnoise build issues on Linux ARM64 and Resolve segmentation faults and initialization errors\r\n    - Now supported across all platforms","2026-02-23T13:14:32",{"id":205,"version":206,"summary_zh":207,"released_at":208},255482,"v0.0.64","- fix linux rnnoise issue #186\r\n- fix: allow session.say inside tools without interrupting 
LLM #189\r\n- feat : add keywords, keyterm prompting, and language param #193\r\n- Feature\u002Fkrisp deepgram #194\r\n     - Add support for Denoise in VideoSDK inference for the providers Krisp, ai-coustics, and Sanas.\r\n     - Add Deepgram TTS support in VideoSDK inference.\r\n - fix(turn-detector): download model once and use cache #198 ","2026-02-17T06:10:01",{"id":210,"version":211,"summary_zh":212,"released_at":213},255483,"v0.0.63","- Usage & Tracing Updates #187  : \r\n     - Added usage collection for STT, LLM, TTS, Realtime providers, and Knowledge Base to improve pricing estimates.\r\n     - Expanded tracing for STT, LLM, TTS, VAD, Background Audio, and Knowledge Base.\r\n     - Added VAD span tracking and fallback event tracing.\r\n     - Enhanced STT and EOU traces with additional metadata.\r\n    - Deepgram STT: set endpointing = 0 (disabled at API level).","2026-02-12T13:31:25",{"id":215,"version":216,"summary_zh":217,"released_at":218},255484,"v0.0.62","- TTS Updates: ElevenLabs + Cartesia (#174) \r\n     - Added a new optional language parameter to ElevenLabsTTS().\r\n     - Added GenerationConfig support to CartesiaTTS for fine-grained control over generation parameters.\r\n```\r\ntts = CartesiaTTS(\r\n    model=\"sonic-3\",\r\n    generation_config=GenerationConfig(\r\n        volume=1.0,\r\n        speed=1.2,\r\n        emotion=\"happy\"\r\n    )\r\n)\r\n```\r\n- Fix recording issue when multiple participants join (#178) \r\n   - Ensures agent and main user recordings are not affected during multi-participant users in sessions. \r\n\r\n- Fix INVALID_PROTOBUF error when loading Silero VAD ONNX model (#179)\r\n\r\n- Parameter added in assemblyai STT (#182) \r\n\r\n","2026-01-27T17:32:39"]