[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-strands-agents--sdk-python":3,"tool-strands-agents--sdk-python":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 
代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",146793,2,"2026-04-08T23:32:35",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108111,"2026-04-08T11:23:26",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 
助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":10,"last_commit_at":59,"category_tags":60,"status":17},4487,"LLMs-from-scratch","rasbt\u002FLLMs-from-scratch","LLMs-from-scratch 是一个基于 PyTorch 的开源教育项目，旨在引导用户从零开始一步步构建一个类似 ChatGPT 的大型语言模型（LLM）。它不仅是同名技术著作的官方代码库，更提供了一套完整的实践方案，涵盖模型开发、预训练及微调的全过程。\n\n该项目主要解决了大模型领域“黑盒化”的学习痛点。许多开发者虽能调用现成模型，却难以深入理解其内部架构与训练机制。通过亲手编写每一行核心代码，用户能够透彻掌握 Transformer 架构、注意力机制等关键原理，从而真正理解大模型是如何“思考”的。此外，项目还包含了加载大型预训练权重进行微调的代码，帮助用户将理论知识延伸至实际应用。\n\nLLMs-from-scratch 特别适合希望深入底层原理的 AI 开发者、研究人员以及计算机专业的学生。对于不满足于仅使用 API，而是渴望探究模型构建细节的技术人员而言，这是极佳的学习资源。其独特的技术亮点在于“循序渐进”的教学设计：将复杂的系统工程拆解为清晰的步骤，配合详细的图表与示例，让构建一个虽小但功能完备的大模型变得触手可及。无论你是想夯实理论基础，还是为未来研发更大规模的模型做准备",90106,"2026-04-06T11:19:32",[35,15,13,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":72,"owner_avatar_url":73,"owner_bio":74,"owner_company":75,"owner_location":75,"owner_email":75,"owner_twitter":75,"owner_website":75,"owner_url":76,"languages":77,"stars":82,"forks":83,"last_commit_at":84,"license":85,"difficulty_score":32,"env_os":86,"env_gpu":87,"env_ram":87,"env_deps":88,"category_tags":94,"github_topics":95,"view_count":32,"oss_zip_url":75,"oss_zip_packed_at":75,"status":17,"created_at":114,"updated_at":115,"faqs":116,"releases":145},5733,"strands-agents\u002Fsdk-python","sdk-python","A model-driven approach to building AI agents in just a few lines of code.","Strands Agents 是一个基于 Python 的开源开发工具包，旨在通过“模型驱动”的理念，让开发者仅需几行代码即可构建并运行功能强大的人工智能代理。它有效解决了传统 AI 应用开发中流程繁琐、配置复杂以及难以灵活适配不同大模型的痛点，帮助用户快速从原型验证过渡到生产部署。\n\n这款工具非常适合希望高效构建 AI 应用的软件开发者、技术研究人员以及需要自动化工作流的工程团队。无论是打造简单的对话助手，还是设计复杂的多智能体协作系统，Strands Agents 都能提供轻量且灵活的支撑。\n\n其核心技术亮点在于广泛的模型兼容性，原生支持 Amazon Bedrock、Anthropic、OpenAI、Ollama 
等主流服务商，让用户无需修改代码即可自由切换底层模型。此外，它内置了对模型上下文协议（MCP）的支持，能够直接连接数千种预制工具；同时提供便捷的 Python 装饰器语法来自定义工具，并支持目录热重载功能，极大提升了开发迭代效率。配合对多智能体系统和流式输出的原生支持，Strands Agents 成为了连接大模型能力与实际业务场景的得力桥梁。","\u003Cdiv align=\"center\">\n  \u003Cdiv>\n    \u003Ca href=\"https:\u002F\u002Fstrandsagents.com\">\n      \u003Cimg src=\"https:\u002F\u002Fstrandsagents.com\u002Flatest\u002Fassets\u002Flogo-github.svg\" alt=\"Strands Agents\" width=\"55px\" height=\"105px\">\n    \u003C\u002Fa>\n  \u003C\u002Fdiv>\n\n  \u003Ch1>\n    Strands Agents\n  \u003C\u002Fh1>\n\n  \u003Ch2>\n    A model-driven approach to building AI agents in just a few lines of code.\n  \u003C\u002Fh2>\n\n  \u003Cdiv align=\"center\">\n    \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fgraphs\u002Fcommit-activity\">\u003Cimg alt=\"GitHub commit activity\" src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fcommit-activity\u002Fm\u002Fstrands-agents\u002Fsdk-python\"\u002F>\u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fissues\">\u003Cimg alt=\"GitHub open issues\" src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fissues\u002Fstrands-agents\u002Fsdk-python\"\u002F>\u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpulls\">\u003Cimg alt=\"GitHub open pull requests\" src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fissues-pr\u002Fstrands-agents\u002Fsdk-python\"\u002F>\u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fblob\u002Fmain\u002FLICENSE\">\u003Cimg alt=\"License\" src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flicense\u002Fstrands-agents\u002Fsdk-python\"\u002F>\u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fpypi.org\u002Fproject\u002Fstrands-agents\u002F\">\u003Cimg alt=\"PyPI version\" src=\"https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Fstrands-agents\"\u002F>\u003C\u002Fa>\n    
\u003Ca href=\"https:\u002F\u002Fpython.org\">\u003Cimg alt=\"Python versions\" src=\"https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fpyversions\u002Fstrands-agents\"\u002F>\u003C\u002Fa>\n  \u003C\u002Fdiv>\n  \n  \u003Cp>\n    \u003Ca href=\"https:\u002F\u002Fstrandsagents.com\u002F\">Documentation\u003C\u002Fa>\n    ◆ \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsamples\">Samples\u003C\u002Fa>\n    ◆ \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\">Python SDK\u003C\u002Fa>\n    ◆ \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Ftools\">Tools\u003C\u002Fa>\n    ◆ \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fagent-builder\">Agent Builder\u003C\u002Fa>\n    ◆ \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fmcp-server\">MCP Server\u003C\u002Fa>\n  \u003C\u002Fp>\n\u003C\u002Fdiv>\n\nStrands Agents is a simple yet powerful SDK that takes a model-driven approach to building and running AI agents. 
From simple conversational assistants to complex autonomous workflows, from local development to production deployment, Strands Agents scales with your needs.\n\n## Feature Overview\n\n- **Lightweight & Flexible**: Simple agent loop that just works and is fully customizable\n- **Model Agnostic**: Support for Amazon Bedrock, Anthropic, Gemini, LiteLLM, Llama, Ollama, OpenAI, Writer, and custom providers\n- **Advanced Capabilities**: Multi-agent systems, autonomous agents, and streaming support\n- **Built-in MCP**: Native support for Model Context Protocol (MCP) servers, enabling access to thousands of pre-built tools\n\n## Quick Start\n\n```bash\n# Install Strands Agents\npip install strands-agents strands-agents-tools\n```\n\n```python\nfrom strands import Agent\nfrom strands_tools import calculator\nagent = Agent(tools=[calculator])\nagent(\"What is the square root of 1764\")\n```\n\n> **Note**: For the default Amazon Bedrock model provider, you'll need AWS credentials configured and model access enabled for Claude 4 Sonnet in the us-west-2 region. 
See the [Quickstart Guide](https:\u002F\u002Fstrandsagents.com\u002F) for details on configuring other model providers.\n\n## Installation\n\nEnsure you have Python 3.10+ installed, then:\n\n```bash\n# Create and activate virtual environment\npython -m venv .venv\nsource .venv\u002Fbin\u002Factivate  # On Windows use: .venv\\Scripts\\activate\n\n# Install Strands and tools\npip install strands-agents strands-agents-tools\n```\n\n## Features at a Glance\n\n### Python-Based Tools\n\nEasily build tools using Python decorators:\n\n```python\nfrom strands import Agent, tool\n\n@tool\ndef word_count(text: str) -> int:\n    \"\"\"Count words in text.\n\n    This docstring is used by the LLM to understand the tool's purpose.\n    \"\"\"\n    return len(text.split())\n\nagent = Agent(tools=[word_count])\nresponse = agent(\"How many words are in this sentence?\")\n```\n\n**Hot Reloading from Directory:**\nEnable automatic tool loading and reloading from the `.\u002Ftools\u002F` directory:\n\n```python\nfrom strands import Agent\n\n# Agent will watch .\u002Ftools\u002F directory for changes\nagent = Agent(load_tools_from_directory=True)\nresponse = agent(\"Use any tools you find in the tools directory\")\n```\n\n### MCP Support\n\nSeamlessly integrate Model Context Protocol (MCP) servers:\n\n```python\nfrom strands import Agent\nfrom strands.tools.mcp import MCPClient\nfrom mcp import stdio_client, StdioServerParameters\n\naws_docs_client = MCPClient(\n    lambda: stdio_client(StdioServerParameters(command=\"uvx\", args=[\"awslabs.aws-documentation-mcp-server@latest\"]))\n)\n\nwith aws_docs_client:\n   agent = Agent(tools=aws_docs_client.list_tools_sync())\n   response = agent(\"Tell me about Amazon Bedrock and how to use it with Python\")\n```\n\n### Multiple Model Providers\n\nSupport for various model providers:\n\n```python\nfrom strands import Agent\nfrom strands.models import BedrockModel\nfrom strands.models.ollama import OllamaModel\nfrom strands.models.llamaapi 
import LlamaAPIModel\nfrom strands.models.gemini import GeminiModel\nfrom strands.models.llamacpp import LlamaCppModel\n\n# Bedrock\nbedrock_model = BedrockModel(\n  model_id=\"us.amazon.nova-pro-v1:0\",\n  temperature=0.3,\n  streaming=True, # Enable\u002Fdisable streaming\n)\nagent = Agent(model=bedrock_model)\nagent(\"Tell me about Agentic AI\")\n\n# Google Gemini\ngemini_model = GeminiModel(\n  client_args={\n    \"api_key\": \"your_gemini_api_key\",\n  },\n  model_id=\"gemini-2.5-flash\",\n  params={\"temperature\": 0.7}\n)\nagent = Agent(model=gemini_model)\nagent(\"Tell me about Agentic AI\")\n\n# Ollama\nollama_model = OllamaModel(\n  host=\"http:\u002F\u002Flocalhost:11434\",\n  model_id=\"llama3\"\n)\nagent = Agent(model=ollama_model)\nagent(\"Tell me about Agentic AI\")\n\n# Llama API\nllama_model = LlamaAPIModel(\n    model_id=\"Llama-4-Maverick-17B-128E-Instruct-FP8\",\n)\nagent = Agent(model=llama_model)\nresponse = agent(\"Tell me about Agentic AI\")\n```\n\nBuilt-in providers:\n - [Amazon Bedrock](https:\u002F\u002Fstrandsagents.com\u002Fdocs\u002Fuser-guide\u002Fconcepts\u002Fmodel-providers\u002Famazon-bedrock\u002F)\n - [Anthropic](https:\u002F\u002Fstrandsagents.com\u002Fdocs\u002Fuser-guide\u002Fconcepts\u002Fmodel-providers\u002Fanthropic\u002F)\n - [Gemini](https:\u002F\u002Fstrandsagents.com\u002Fdocs\u002Fuser-guide\u002Fconcepts\u002Fmodel-providers\u002Fgemini\u002F)\n - [Cohere](https:\u002F\u002Fstrandsagents.com\u002Fdocs\u002Fuser-guide\u002Fconcepts\u002Fmodel-providers\u002Fcohere\u002F)\n - [LiteLLM](https:\u002F\u002Fstrandsagents.com\u002Fdocs\u002Fuser-guide\u002Fconcepts\u002Fmodel-providers\u002Flitellm\u002F)\n - [llama.cpp](https:\u002F\u002Fstrandsagents.com\u002Fdocs\u002Fuser-guide\u002Fconcepts\u002Fmodel-providers\u002Fllamacpp\u002F)\n - [LlamaAPI](https:\u002F\u002Fstrandsagents.com\u002Fdocs\u002Fuser-guide\u002Fconcepts\u002Fmodel-providers\u002Fllamaapi\u002F)\n - 
[MistralAI](https:\u002F\u002Fstrandsagents.com\u002Fdocs\u002Fuser-guide\u002Fconcepts\u002Fmodel-providers\u002Fmistral\u002F)\n - [Ollama](https:\u002F\u002Fstrandsagents.com\u002Fdocs\u002Fuser-guide\u002Fconcepts\u002Fmodel-providers\u002Follama\u002F)\n - [OpenAI](https:\u002F\u002Fstrandsagents.com\u002Fdocs\u002Fuser-guide\u002Fconcepts\u002Fmodel-providers\u002Fopenai\u002F)\n - [OpenAI Responses API](https:\u002F\u002Fstrandsagents.com\u002Fdocs\u002Fuser-guide\u002Fconcepts\u002Fmodel-providers\u002Fopenai\u002F)\n - [SageMaker](https:\u002F\u002Fstrandsagents.com\u002Fdocs\u002Fuser-guide\u002Fconcepts\u002Fmodel-providers\u002Fsagemaker\u002F)\n - [Writer](https:\u002F\u002Fstrandsagents.com\u002Fdocs\u002Fuser-guide\u002Fconcepts\u002Fmodel-providers\u002Fwriter\u002F)\n\nCustom providers can be implemented using [Custom Providers](https:\u002F\u002Fstrandsagents.com\u002Fdocs\u002Fuser-guide\u002Fconcepts\u002Fmodel-providers\u002Fcustom_model_provider\u002F)\n\n### Example tools\n\nStrands offers an optional strands-agents-tools package with pre-built tools for quick experimentation:\n\n```python\nfrom strands import Agent\nfrom strands_tools import calculator\nagent = Agent(tools=[calculator])\nagent(\"What is the square root of 1764\")\n```\n\nIt's also available on GitHub via [strands-agents\u002Ftools](https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Ftools).\n\n### Bidirectional Streaming\n\n> **⚠️ Experimental Feature**: Bidirectional streaming is currently in experimental status. APIs may change in future releases as we refine the feature based on user feedback and evolving model capabilities.\n\nBuild real-time voice and audio conversations with persistent streaming connections. Unlike traditional request-response patterns, bidirectional streaming maintains long-running conversations where users can interrupt, provide continuous input, and receive real-time audio responses. 
Get started with your first BidiAgent by following the [Quickstart](https:\u002F\u002Fstrandsagents.com\u002Fdocs\u002Fuser-guide\u002Fconcepts\u002Fbidirectional-streaming\u002Fquickstart\u002F) guide. \n\n**Supported Model Providers:**\n- Amazon Nova Sonic (v1, v2)\n- Google Gemini Live\n- OpenAI Realtime API\n\n**Installation:**\n\n```bash\n# Server-side only (no audio I\u002FO dependencies)\npip install strands-agents[bidi]\n\n# With audio I\u002FO support (includes PyAudio dependency)\npip install strands-agents[bidi,bidi-io]\n```\n\n**Quick Example:**\n\n```python\nimport asyncio\nfrom strands.experimental.bidi import BidiAgent\nfrom strands.experimental.bidi.models import BidiNovaSonicModel\nfrom strands.experimental.bidi.io import BidiAudioIO, BidiTextIO\nfrom strands.experimental.bidi.tools import stop_conversation\nfrom strands_tools import calculator\n\nasync def main():\n    # Create bidirectional agent with Nova Sonic v2\n    model = BidiNovaSonicModel()\n    agent = BidiAgent(model=model, tools=[calculator, stop_conversation])\n\n    # Setup audio and text I\u002FO (requires bidi-io extra)\n    audio_io = BidiAudioIO()\n    text_io = BidiTextIO()\n\n    # Run with real-time audio streaming\n    # Say \"stop conversation\" to gracefully end the conversation\n    await agent.run(\n        inputs=[audio_io.input()],\n        outputs=[audio_io.output(), text_io.output()]\n    )\n\nif __name__ == \"__main__\":\n    asyncio.run(main())\n```\n\n> **Note**: `BidiAudioIO` and `BidiTextIO` require the `bidi-io` extra. 
For server-side deployments where audio I\u002FO is handled by clients (browsers, mobile apps), install only `strands-agents[bidi]` and implement custom input\u002Foutput handlers using the `BidiInput` and `BidiOutput` protocols.\n\n**Configuration Options:**\n\n```python\nfrom strands.experimental.bidi.models import BidiNovaSonicModel\n\n# Configure audio settings and turn detection (v2 only)\nmodel = BidiNovaSonicModel(\n    provider_config={\n        \"audio\": {\n            \"input_rate\": 16000,\n            \"output_rate\": 16000,\n            \"voice\": \"matthew\"\n        },\n        \"turn_detection\": {\n            \"endpointingSensitivity\": \"MEDIUM\"  # HIGH, MEDIUM, or LOW\n        },\n        \"inference\": {\n            \"max_tokens\": 2048,\n            \"temperature\": 0.7\n        }\n    }\n)\n\n# Configure I\u002FO devices\naudio_io = BidiAudioIO(\n    input_device_index=0,  # Specific microphone\n    output_device_index=1,  # Specific speaker\n    input_buffer_size=10,\n    output_buffer_size=10\n)\n\n# Text input mode (type messages instead of speaking)\ntext_io = BidiTextIO()\nawait agent.run(\n    inputs=[text_io.input()],  # Use text input\n    outputs=[audio_io.output(), text_io.output()]\n)\n\n# Multi-modal: Both audio and text input\nawait agent.run(\n    inputs=[audio_io.input(), text_io.input()],  # Speak OR type\n    outputs=[audio_io.output(), text_io.output()]\n)\n```\n\n## Documentation\n\nFor detailed guidance & examples, explore our documentation:\n\n- [User Guide](https:\u002F\u002Fstrandsagents.com\u002F)\n- [Quick Start Guide](https:\u002F\u002Fstrandsagents.com\u002Fdocs\u002Fuser-guide\u002Fquickstart\u002F)\n- [Agent Loop](https:\u002F\u002Fstrandsagents.com\u002Fdocs\u002Fuser-guide\u002Fconcepts\u002Fagents\u002Fagent-loop\u002F)\n- [Examples](https:\u002F\u002Fstrandsagents.com\u002Fdocs\u002Fexamples\u002F)\n- [API 
Reference](https:\u002F\u002Fstrandsagents.com\u002Fdocs\u002Fapi\u002Fpython\u002Fstrands.agent.agent\u002F)\n- [Production & Deployment Guide](https:\u002F\u002Fstrandsagents.com\u002Fdocs\u002Fuser-guide\u002Fdeploy\u002Foperating-agents-in-production\u002F)\n\n## Contributing ❤️\n\nWe welcome contributions! See our [Contributing Guide](CONTRIBUTING.md) for details on:\n- Reporting bugs & features\n- Development setup\n- Contributing via Pull Requests\n- Code of Conduct\n- Reporting of security issues\n\n## License\n\nThis project is licensed under the Apache License 2.0 - see the [LICENSE](LICENSE) file for details.\n\n## Security\n\nSee [CONTRIBUTING](CONTRIBUTING.md#security-issue-notifications) for more information.\n\n","\u003Cdiv align=\"center\">\n  \u003Cdiv>\n    \u003Ca href=\"https:\u002F\u002Fstrandsagents.com\">\n      \u003Cimg src=\"https:\u002F\u002Fstrandsagents.com\u002Flatest\u002Fassets\u002Flogo-github.svg\" alt=\"Strands Agents\" width=\"55px\" height=\"105px\">\n    \u003C\u002Fa>\n  \u003C\u002Fdiv>\n\n  \u003Ch1>\n    Strands Agents\n  \u003C\u002Fh1>\n\n  \u003Ch2>\n    一种基于模型驱动的方法，只需几行代码即可构建AI智能体。\n  \u003C\u002Fh2>\n\n  \u003Cdiv align=\"center\">\n    \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fgraphs\u002Fcommit-activity\">\u003Cimg alt=\"GitHub提交活动\" src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fcommit-activity\u002Fm\u002Fstrands-agents\u002Fsdk-python\"\u002F>\u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fissues\">\u003Cimg alt=\"GitHub开放问题\" src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fissues\u002Fstrands-agents\u002Fsdk-python\"\u002F>\u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpulls\">\u003Cimg alt=\"GitHub开放拉取请求\" src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fissues-pr\u002Fstrands-agents\u002Fsdk-python\"\u002F>\u003C\u002Fa>\n    
\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fblob\u002Fmain\u002FLICENSE\">\u003Cimg alt=\"许可证\" src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flicense\u002Fstrands-agents\u002Fsdk-python\"\u002F>\u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fpypi.org\u002Fproject\u002Fstrands-agents\u002F\">\u003Cimg alt=\"PyPI版本\" src=\"https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Fstrands-agents\"\u002F>\u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fpython.org\">\u003Cimg alt=\"Python版本\" src=\"https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fpyversions\u002Fstrands-agents\"\u002F>\u003C\u002Fa>\n  \u003C\u002Fdiv>\n  \n  \u003Cp>\n    \u003Ca href=\"https:\u002F\u002Fstrandsagents.com\u002F\">文档\u003C\u002Fa>\n    ◆ \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsamples\">示例\u003C\u002Fa>\n    ◆ \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\">Python SDK\u003C\u002Fa>\n    ◆ \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Ftools\">工具\u003C\u002Fa>\n    ◆ \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fagent-builder\">智能体构建器\u003C\u002Fa>\n    ◆ \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fmcp-server\">MCP服务器\u003C\u002Fa>\n  \u003C\u002Fp>\n\u003C\u002Fdiv>\n\nStrands Agents是一款简单而强大的SDK，采用基于模型驱动的方式构建和运行AI智能体。无论是简单的对话助手，还是复杂的自主工作流；无论是在本地开发环境，还是在生产环境中部署，Strands Agents都能根据您的需求灵活扩展。\n\n## 功能概览\n\n- **轻量级且灵活**：简单易用且完全可定制的智能体循环\n- **模型无关**：支持Amazon Bedrock、Anthropic、Gemini、LiteLLM、Llama、Ollama、OpenAI、Writer以及自定义提供商\n- **高级功能**：多智能体系统、自主智能体及流式传输支持\n- **内置MCP**：原生支持模型上下文协议（MCP）服务器，可访问数千个预构建工具\n\n## 快速入门\n\n```bash\n# 安装Strands Agents\npip install strands-agents strands-agents-tools\n```\n\n```python\nfrom strands import Agent\nfrom strands_tools import calculator\nagent = Agent(tools=[calculator])\nagent(\"1764的平方根是多少\")\n```\n\n> **注意**：对于默认的Amazon Bedrock模型提供商，您需要配置AWS凭证，并在us-west-2区域启用Claude 4 
Sonnet的模型访问权限。有关其他模型提供商的配置，请参阅[快速入门指南](https:\u002F\u002Fstrandsagents.com\u002F)。\n\n## 安装步骤\n\n请确保已安装Python 3.10及以上版本，然后执行以下操作：\n\n```bash\n# 创建并激活虚拟环境\npython -m venv .venv\nsource .venv\u002Fbin\u002Factivate  # Windows用户使用：.venv\\Scripts\\activate\n\n# 安装Strands及其工具\npip install strands-agents strands-agents-tools\n```\n\n## 功能一览\n\n### 基于Python的工具\n\n使用Python装饰器轻松构建工具：\n\n```python\nfrom strands import Agent, tool\n\n@tool\ndef word_count(text: str) -> int:\n    \"\"\"统计文本中的单词数量。\n\n    此文档字符串将被大模型用于理解工具的功能。\n    \"\"\"\n    return len(text.split())\n\nagent = Agent(tools=[word_count])\nresponse = agent(\"这句话有多少个单词？\")\n```\n\n**目录热重载**：\n启用从`.\u002Ftools\u002F`目录自动加载并热重载工具：\n\n```python\nfrom strands import Agent\n\n# 智能体会监视.\u002Ftools\u002F目录以检测更改\nagent = Agent(load_tools_from_directory=True)\nresponse = agent(\"使用工具目录中的任何工具\")\n```\n\n### MCP支持\n\n无缝集成模型上下文协议（MCP）服务器：\n\n```python\nfrom strands import Agent\nfrom strands.tools.mcp import MCPClient\nfrom mcp import stdio_client, StdioServerParameters\n\naws_docs_client = MCPClient(\n    lambda: stdio_client(StdioServerParameters(command=\"uvx\", args=[\"awslabs.aws-documentation-mcp-server@latest\"]))\n)\n\nwith aws_docs_client:\n   agent = Agent(tools=aws_docs_client.list_tools_sync())\n   response = agent(\"请告诉我关于Amazon Bedrock的信息，以及如何用Python使用它\")\n```\n\n### 多种模型提供商\n\n支持多种模型提供商：\n\n```python\nfrom strands import Agent\nfrom strands.models import BedrockModel\nfrom strands.models.ollama import OllamaModel\nfrom strands.models.llamaapi import LlamaAPIModel\nfrom strands.models.gemini import GeminiModel\nfrom strands.models.llamacpp import LlamaCppModel\n\n# Bedrock\nbedrock_model = BedrockModel(\n  model_id=\"us.amazon.nova-pro-v1:0\",\n  temperature=0.3,\n  streaming=True, # 启用或禁用流式传输\n)\nagent = Agent(model=bedrock_model)\nagent(\"请介绍一下代理型AI\")\n\n# Google Gemini\ngemini_model = GeminiModel(\n  client_args={\n    \"api_key\": \"your_gemini_api_key\",\n  },\n  
model_id=\"gemini-2.5-flash\",\n  params={\"temperature\": 0.7}\n)\nagent = Agent(model=gemini_model)\nagent(\"请介绍一下代理型AI\")\n\n# Ollama\nollama_model = OllamaModel(\n  host=\"http:\u002F\u002Flocalhost:11434\",\n  model_id=\"llama3\"\n)\nagent = Agent(model=ollama_model)\nagent(\"请介绍一下代理型AI\")\n\n# Llama API\nllama_model = LlamaAPIModel(\n    model_id=\"Llama-4-Maverick-17B-128E-Instruct-FP8\",\n)\nagent = Agent(model=llama_model)\nresponse = agent(\"告诉我关于代理式AI的信息\")\n```\n\n内置模型提供商：\n - [Amazon Bedrock](https:\u002F\u002Fstrandsagents.com\u002Fdocs\u002Fuser-guide\u002Fconcepts\u002Fmodel-providers\u002Famazon-bedrock\u002F)\n - [Anthropic](https:\u002F\u002Fstrandsagents.com\u002Fdocs\u002Fuser-guide\u002Fconcepts\u002Fmodel-providers\u002Fanthropic\u002F)\n - [Gemini](https:\u002F\u002Fstrandsagents.com\u002Fdocs\u002Fuser-guide\u002Fconcepts\u002Fmodel-providers\u002Fgemini\u002F)\n - [Cohere](https:\u002F\u002Fstrandsagents.com\u002Fdocs\u002Fuser-guide\u002Fconcepts\u002Fmodel-providers\u002Fcohere\u002F)\n - [LiteLLM](https:\u002F\u002Fstrandsagents.com\u002Fdocs\u002Fuser-guide\u002Fconcepts\u002Fmodel-providers\u002Flitellm\u002F)\n - [llama.cpp](https:\u002F\u002Fstrandsagents.com\u002Fdocs\u002Fuser-guide\u002Fconcepts\u002Fmodel-providers\u002Fllamacpp\u002F)\n - [LlamaAPI](https:\u002F\u002Fstrandsagents.com\u002Fdocs\u002Fuser-guide\u002Fconcepts\u002Fmodel-providers\u002Fllamaapi\u002F)\n - [MistralAI](https:\u002F\u002Fstrandsagents.com\u002Fdocs\u002Fuser-guide\u002Fconcepts\u002Fmodel-providers\u002Fmistral\u002F)\n - [Ollama](https:\u002F\u002Fstrandsagents.com\u002Fdocs\u002Fuser-guide\u002Fconcepts\u002Fmodel-providers\u002Follama\u002F)\n - [OpenAI](https:\u002F\u002Fstrandsagents.com\u002Fdocs\u002Fuser-guide\u002Fconcepts\u002Fmodel-providers\u002Fopenai\u002F)\n - [OpenAI Responses API](https:\u002F\u002Fstrandsagents.com\u002Fdocs\u002Fuser-guide\u002Fconcepts\u002Fmodel-providers\u002Fopenai\u002F)\n - 
[SageMaker](https:\u002F\u002Fstrandsagents.com\u002Fdocs\u002Fuser-guide\u002Fconcepts\u002Fmodel-providers\u002Fsagemaker\u002F)\n - [Writer](https:\u002F\u002Fstrandsagents.com\u002Fdocs\u002Fuser-guide\u002Fconcepts\u002Fmodel-providers\u002Fwriter\u002F)\n\n自定义模型提供商可以通过[自定义模型提供商](https:\u002F\u002Fstrandsagents.com\u002Fdocs\u002Fuser-guide\u002Fconcepts\u002Fmodel-providers\u002Fcustom_model_provider\u002F)实现。\n\n### 示例工具\n\nStrands 提供了一个可选的 strands-agents-tools 包，其中包含预构建的工具，方便快速实验：\n\n```python\nfrom strands import Agent\nfrom strands_tools import calculator\nagent = Agent(tools=[calculator])\nagent(\"1764 的平方根是多少\")\n```\n\n该包也可在 GitHub 上通过 [strands-agents\u002Ftools](https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Ftools) 获取。\n\n### 双向流式传输\n\n> **⚠️ 实验性功能**：双向流式传输目前处于实验阶段。随着我们根据用户反馈和不断发展的模型能力完善此功能，相关 API 可能在未来的版本中发生变化。\n\n通过持久的流式连接构建实时语音和音频对话。与传统的请求-响应模式不同，双向流式传输可以维持长时间的对话，用户可以在其中随时打断、持续输入，并获得实时的音频回应。按照[快速入门](https:\u002F\u002Fstrandsagents.com\u002Fdocs\u002Fuser-guide\u002Fconcepts\u002Fbidirectional-streaming\u002Fquickstart\u002F)指南，即可开始使用您的第一个 BidiAgent。\n\n**支持的模型提供商：**\n- Amazon Nova Sonic (v1, v2)\n- Google Gemini Live\n- OpenAI Realtime API\n\n**安装：**\n\n```bash\n# 仅服务器端（无音频 I\u002FO 依赖）\npip install strands-agents[bidi]\n\n# 带有音频 I\u002FO 支持（包含 PyAudio 依赖）\npip install strands-agents[bidi,bidi-io]\n```\n\n**快速示例：**\n\n```python\nimport asyncio\nfrom strands.experimental.bidi import BidiAgent\nfrom strands.experimental.bidi.models import BidiNovaSonicModel\nfrom strands.experimental.bidi.io import BidiAudioIO, BidiTextIO\nfrom strands.experimental.bidi.tools import stop_conversation\nfrom strands_tools import calculator\n\nasync def main():\n    # 创建带有 Nova Sonic v2 的双向代理\n    model = BidiNovaSonicModel()\n    agent = BidiAgent(model=model, tools=[calculator, stop_conversation])\n\n    # 设置音频和文本 I\u002FO（需要 bidi-io 附加组件）\n    audio_io = BidiAudioIO()\n    text_io = BidiTextIO()\n\n    # 运行实时音频流\n    # 说出“停止对话”以优雅地结束对话\n    await agent.run(\n  
      inputs=[audio_io.input()],\n        outputs=[audio_io.output(), text_io.output()]\n    )\n\nif __name__ == \"__main__\":\n    asyncio.run(main())\n```\n\n> **注意**：`BidiAudioIO` 和 `BidiTextIO` 需要 `bidi-io` 附加组件。对于由客户端（浏览器、移动应用）处理音频 I\u002FO 的服务器端部署，只需安装 `strands-agents[bidi]`，并使用 `BidiInput` 和 `BidiOutput` 协议实现自定义的输入输出处理器。\n\n**配置选项：**\n\n```python\nfrom strands.experimental.bidi.models import BidiNovaSonicModel\n\n# 配置音频设置和话轮检测（仅 v2）\nmodel = BidiNovaSonicModel(\n    provider_config={\n        \"audio\": {\n            \"input_rate\": 16000,\n            \"output_rate\": 16000,\n            \"voice\": \"matthew\"\n        },\n        \"turn_detection\": {\n            \"endpointingSensitivity\": \"MEDIUM\"  # HIGH、MEDIUM 或 LOW\n        },\n        \"inference\": {\n            \"max_tokens\": 2048,\n            \"temperature\": 0.7\n        }\n    }\n)\n\n# 配置 I\u002FO 设备\naudio_io = BidiAudioIO(\n    input_device_index=0,  # 特定麦克风\n    output_device_index=1,  # 特定扬声器\n    input_buffer_size=10,\n    output_buffer_size=10\n)\n\n# 文本输入模式（输入消息而非说话）\ntext_io = BidiTextIO()\nawait agent.run(\n    inputs=[text_io.input()],  # 使用文本输入\n    outputs=[audio_io.output(), text_io.output()]\n)\n\n# 多模态：同时使用音频和文本输入\nawait agent.run(\n    inputs=[audio_io.input(), text_io.input()],  # 说话或打字\n    outputs=[audio_io.output(), text_io.output()]\n)\n```\n\n## 文档\n\n如需详细指南和示例，请参阅我们的文档：\n\n- [用户指南](https:\u002F\u002Fstrandsagents.com\u002F)\n- [快速入门指南](https:\u002F\u002Fstrandsagents.com\u002Fdocs\u002Fuser-guide\u002Fquickstart\u002F)\n- [代理循环](https:\u002F\u002Fstrandsagents.com\u002Fdocs\u002Fuser-guide\u002Fconcepts\u002Fagents\u002Fagent-loop\u002F)\n- [示例](https:\u002F\u002Fstrandsagents.com\u002Fdocs\u002Fexamples\u002F)\n- [API 参考](https:\u002F\u002Fstrandsagents.com\u002Fdocs\u002Fapi\u002Fpython\u002Fstrands.agent.agent\u002F)\n- 
[生产与部署指南](https:\u002F\u002Fstrandsagents.com\u002Fdocs\u002Fuser-guide\u002Fdeploy\u002Foperating-agents-in-production\u002F)\n\n## 贡献 ❤️\n\n我们欢迎各种形式的贡献！请参阅我们的[贡献指南](CONTRIBUTING.md)，了解以下内容：\n- 报告漏洞与功能需求\n- 开发环境搭建\n- 通过 Pull Request 贡献代码\n- 行为准则\n- 安全问题报告\n\n## 许可证\n\n本项目采用 Apache License 2.0 许可证——详情请参阅[LICENSE](LICENSE)文件。\n\n## 安全性\n\n更多信息请参阅[CONTRIBUTING](CONTRIBUTING.md#security-issue-notifications)。","# Strands Agents (sdk-python) 快速上手指南\n\nStrands Agents 是一个轻量级且强大的 Python SDK，采用模型驱动的方式，仅需几行代码即可构建和运行 AI 智能体（Agent）。它支持从简单的对话助手到复杂的自主工作流，并兼容多种大模型提供商。\n\n## 环境准备\n\n在开始之前，请确保您的开发环境满足以下要求：\n\n*   **操作系统**：Linux, macOS 或 Windows\n*   **Python 版本**：Python 3.10 或更高版本\n*   **前置依赖**：\n    *   若使用默认 Amazon Bedrock 模型，需配置 AWS 凭证并在 `us-west-2` 区域启用 Claude 模型访问权限。\n    *   若使用其他模型（如 Ollama, OpenAI, Gemini 等），请提前准备好相应的 API Key 或服务地址。\n\n> **提示**：国内开发者建议使用国内镜像源加速安装过程（如下方安装命令所示）。\n\n## 安装步骤\n\n推荐使用虚拟环境进行隔离安装。\n\n1.  **创建并激活虚拟环境**\n\n    ```bash\n    python -m venv .venv\n    # Linux\u002FmacOS\n    source .venv\u002Fbin\u002Factivate\n    # Windows\n    .venv\\Scripts\\activate\n    ```\n\n2.  **安装 Strands Agents 及工具包**\n\n    使用 `-i` 参数指定清华镜像源以加速下载：\n\n    ```bash\n    pip install strands-agents strands-agents-tools -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n    ```\n\n## 基本使用\n\n以下是最简单的使用示例，展示如何创建一个带有计算器工具的 Agent 并执行任务。\n\n### 1. 快速示例\n\n```python\nfrom strands import Agent\nfrom strands_tools import calculator\n\n# 初始化 Agent 并加载计算器工具\nagent = Agent(tools=[calculator])\n\n# 调用 Agent 执行自然语言指令\nresponse = agent(\"What is the square root of 1764\")\nprint(response)\n```\n\n### 2. 
自定义 Python 工具\n\n您可以通过装饰器轻松定义自己的工具：\n\n```python\nfrom strands import Agent, tool\n\n@tool\ndef word_count(text: str) -> int:\n    \"\"\"Count words in text.\n\n    This docstring is used by the LLM to understand the tool's purpose.\n    \"\"\"\n    return len(text.split())\n\n# 将自定义工具传递给 Agent\nagent = Agent(tools=[word_count])\nresponse = agent(\"How many words are in this sentence?\")\n```\n\n### 3. 切换模型提供商\n\nStrands 支持多种模型后端。以下是切换到 Google Gemini 模型的示例：\n\n```python\nfrom strands import Agent\nfrom strands.models.gemini import GeminiModel\n\n# 配置 Gemini 模型\ngemini_model = GeminiModel(\n  client_args={\n    \"api_key\": \"your_gemini_api_key\", # 替换为您的 API Key\n  },\n  model_id=\"gemini-2.5-flash\",\n  params={\"temperature\": 0.7}\n)\n\n# 创建指定模型的 Agent\nagent = Agent(model=gemini_model)\nresponse = agent(\"Tell me about Agentic AI\")\n```\n\n支持的内置模型提供商包括：Amazon Bedrock, Anthropic, Gemini, Ollama, OpenAI, LlamaAPI, MistralAI 等。","某电商数据团队需要快速构建一个能自动分析每日销售报表、计算关键指标并回答业务人员自然语言提问的智能助手。\n\n### 没有 sdk-python 时\n- **开发门槛高**：工程师需手动编写复杂的 Agent 循环逻辑，处理消息历史、工具调用解析及错误重试，代码量大且易出错。\n- **模型切换困难**：若想从 OpenAI 切换到 Amazon Bedrock 或本地 Ollama 模型，必须重构底层连接代码，缺乏统一的抽象层。\n- **工具集成繁琐**：每新增一个数据分析函数（如“计算同比增长率”），都需要编写大量样板代码来定义输入输出格式，以便让大模型理解。\n- **上下文管理复杂**：难以灵活控制多轮对话中的上下文窗口，导致长文档分析时容易丢失关键信息或超出令牌限制。\n\n### 使用 sdk-python 后\n- **极速构建代理**：仅需几行代码即可实例化 Agent，内置的模型驱动循环自动处理对话流与工具调度，让开发者聚焦业务逻辑。\n- **无缝模型切换**：凭借模型无关架构，只需修改配置参数即可在 Anthropic、Gemini 或本地模型间自由切换，无需改动核心代码。\n- **声明式工具定义**：利用 Python 装饰器 `@tool` 即可将普通函数转化为智能工具，文档字符串自动成为模型的理解依据，极大简化集成流程。\n- **原生生态支持**：直接加载内置的计算器、文件读取等工具包，或热加载本地目录下的自定义脚本，瞬间赋予 Agent 复杂执行能力。\n\nsdk-python 通过极简的模型驱动架构，将原本需要数天开发的智能体工程缩减为分钟级的配置任务，让 AI 
应用落地变得触手可及。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstrands-agents_sdk-python_2a2dba18.png","strands-agents","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fstrands-agents_c03901da.png","",null,"https:\u002F\u002Fgithub.com\u002Fstrands-agents",[78],{"name":79,"color":80,"percentage":81},"Python","#3572A5",100,5575,768,"2026-04-08T19:28:51","Apache-2.0","Linux, macOS, Windows","未说明",{"notes":89,"python":90,"dependencies":91},"该工具是一个模型驱动的 AI Agent SDK，本身不捆绑特定大模型，而是通过适配器连接外部模型服务（如 Amazon Bedrock, Ollama, OpenAI, Gemini 等）。因此，本地运行通常不需要高性能 GPU 或大量内存，具体硬件需求取决于您选择连接的模型提供商（例如：若连接本地 Ollama 或 llama.cpp，则需满足对应模型的硬件要求；若连接云端 API，则仅需网络环境）。默认配置下使用 Amazon Bedrock 需要配置 AWS 凭证。双向流媒体功能（Bidi Streaming）处于实验阶段，如需本地音频处理需安装额外依赖 `bidi-io`（包含 PyAudio）。","3.10+",[72,92,93],"strands-agents-tools","PyAudio (可选，用于双向流媒体音频输入输出)",[14,15,13,35],[96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,72],"agentic","agentic-ai","agents","ai","autonomous-agents","genai","llm","multi-agent-systems","python","anthropic","litellm","machine-learning","ollama","mcp","opentelemetry","bedrock","llama","openai","2026-03-27T02:49:30.150509","2026-04-09T10:04:27.173622",[117,122,126,131,136,141],{"id":118,"question_zh":119,"answer_zh":120,"source_url":121},26004,"调用 Bedrock 模型时遇到\"AccessDeniedException: You don't have access to the model with the specified model ID\"错误怎么办？","这通常是因为您尚未在 AWS Bedrock 控制台中为该特定区域启用该模型的访问权限。即使您的账户拥有 Anthropic 等模型的通用访问权，也需要针对每个部署区域（如 eu-west-1, us-east-1）单独启用。解决方法：登录 AWS Bedrock 控制台，选择对应的区域，找到报错的模型（例如 Claude 3.5 Sonnet 或 Claude 3.7 Sonnet），点击\"Request access\"或启用该模型。启用后即可正常调用。","https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fissues\u002F38",{"id":123,"question_zh":124,"answer_zh":125,"source_url":121},26005,"使用 Bedrock 模型时遇到\"ValidationException: Invocation of model ID ... 
with on-demand throughput isn't supported\"错误如何解决？","该错误表明您尝试调用的模型不支持按需吞吐量（on-demand throughput），或者模型标识符格式不正确。对于某些新模型（如 Claude 3.7 Sonnet 或 Amazon Nova），可能需要使用推理配置文件（Inference Profile）的 ID 或 ARN，而不是直接使用该模型的原始 ID。请检查 AWS Bedrock 文档，确认该模型是否支持按需模式，或查找并替换为正确的推理配置文件 ID。此外，确保模型 ID 包含正确的区域前缀（如 `eu.` 或 `us.`），如果不确定，可以尝试移除区域前缀或使用完整的 ARN。",{"id":127,"question_zh":128,"answer_zh":129,"source_url":130},26006,"对话过程中出现\"ValidationException: tool_use ids were found without tool_result blocks\"错误导致对话中断，该如何修复？","这是一个已知的回归问题，通常发生在 Bedrock 未完整返回工具结果（tool_result）而直接结束消息时，导致后续请求中遗留了孤立的 `tool_use` 块。目前的临时解决方案是确保在发送新消息前清理这些孤立块。虽然官方正在修复此问题（重新引入清理机制），但用户可以检查是否升级到了包含修复的最新版本。如果问题依旧，建议在代码逻辑中手动干预，确保每个 `tool_use` 都有对应的 `tool_result`，或者在捕获到此异常时重置会话状态。","https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fissues\u002F495",{"id":132,"question_zh":133,"answer_zh":134,"source_url":135},26007,"Strands SDK 是否支持 VLLM 或 SGLang 模型以实现 Token-in\u002FToken-out 功能？","目前 Strands 核心仓库为了保持可维护性，不再直接集成所有新的模型提供商。但是，社区已经开发了独立的包来支持这些需求。对于 SGLang 支持，您可以查看并使用社区包 `strands-sglang`（参考相关文档讨论）。如果您需要 VLLM 或其他提供商的支持，官方鼓励将其作为独立的 PyPI 包发布，一旦发布，官方文档会将其列为支持的社区模型提供商。这样可以确保您能使用最新的 Token-in\u002FToken-out 特性进行 Agentic RL 训练，同时保持核心 SDK 的精简。","https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fissues\u002F1368",{"id":137,"question_zh":138,"answer_zh":139,"source_url":140},26008,"在使用 Gemini 模型（如 gemini-3-pro-preview）进行工具调用时遇到\"INVALID_ARGUMENT (400 Bad Request)\"错误是什么原因？","此错误通常与特定模型版本对工具调用（Tool Calling）参数的兼容性有关。当切换到预览版模型（如 gemini-3-pro-preview）时，其接受的参数格式可能与稳定版不同，导致请求被拒绝。建议检查该预览版模型的官方文档，确认其支持的工具定义格式。如果问题持续，可能是 SDK 与该特定预览版模型之间存在暂时的不兼容，建议回退到稳定的模型版本（如 gemini-2.5-flash），或者关注官方 Issue 追踪以获取针对该特定模型版本的补丁更新。","https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fissues\u002F1199",{"id":142,"question_zh":143,"answer_zh":144,"source_url":121},26009,"如何在 Lambda 等不同区域部署 Strands Agent 时避免模型访问错误？","在 Lambda 等多区域部署时，必须确保在每个目标区域（Region）的 Bedrock 控制台中都单独启用了所需的模型。例如，如果在 `eu-central-1` 和 `eu-west-1` 
部署，您需要分别在这两个区域的控制台中启用 Claude 或其他模型。仅仅在一个区域启用是不够的。此外，代码中指定的 `model_id` 必须与部署区域匹配（例如在欧洲区域使用带 `eu.` 前缀的模型 ID），并且要确保依赖项文件夹在打包部署时没有损坏或缺失。",[146,151,156,161,166,171,176,181,186,191,196,201,206,211,216,221,226,231,236,241],{"id":147,"version":148,"summary_zh":149,"released_at":150},163376,"v1.35.0","## 变更内容\n\n\n### 功能特性\n\n#### 底层服务层级支持 — [PR#1799](https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1799)\n\nAmazon Bedrock 现在提供了服务层级（优先级、标准、弹性），允许您在每个请求的基础上控制延迟与成本之间的权衡。`BedrockModel` 接受一个新的 `service_tier` 配置字段，这与其他 Bedrock 特定功能（如护栏）的暴露方式保持一致。如果未设置该字段，则会省略它，Bedrock 将使用其默认行为。\n\n```python\nfrom strands import Agent\nfrom strands.models.bedrock import BedrockModel\n\n# 使用“弹性”层级进行成本优化的批量处理\nmodel = BedrockModel(\n    model_id=\"us.anthropic.claude-sonnet-4-20250514-v1:0\",\n    service_tier=\"flex\",\n)\nagent = Agent(model=model)\n\n# 对于延迟敏感的应用，使用“优先级”层级\nrealtime_model = BedrockModel(\n    model_id=\"us.anthropic.claude-sonnet-4-20250514-v1:0\",\n    service_tier=\"priority\",\n)\n```\n\n有效值为 `\"default\"`、`\"priority\"` 和 `\"flex\"`。如果模型或区域不支持指定的层级，Bedrock 将返回 `ValidationException`。\n\n### Bug 修复\n\n- **滑动窗口对话管理器的用户优先强制执行** — [PR#2087](https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F2087)：滑动窗口可能会生成以助手消息开头的截断对话，从而导致对要求用户优先顺序的提供商（包括 Bedrock Nova）抛出 `ValidationException`。现在，截断点验证会确保剩余的第一条消息始终具有 `role == \"user\"`。此外，还修复了 `toolUse` 护栏中的短路逻辑错误，该错误会导致孤立的工具调用块在窗口边界处漏过。\n\n- **MCP `_meta` 字段的转发** — [PR#1918](https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1918)、[PR#2081](https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F2081)：根据 MCP 规范的自定义元数据曾被静默丢弃，因为 `MCPClient` 从未将 `_meta` 字段转发到 `ClientSession.call_tool()`。此外，OTEL 仪器化使用了 `model_dump()` 而不是 `model_dump(by_alias=True)`，导致该字段被序列化为 `\"meta\"` 而不是 `\"_meta\"`，从而破坏了有效载荷。无论是直接的 `call_tool` 路径，还是任务增强的执行路径，现在都能正确地转发 `meta`。\n\n- **工具异常传播至 OpenTelemetry 活动跨度** — 
[PR#2046](https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F2046)：当工具抛出异常时，原始异常会在到达 `end_tool_call_span` 之前被丢弃，导致所有工具活动跨度即使在发生错误时也会显示 `StatusCode.OK`。现在，工具错误会正确地传播为 `StatusCode.ERROR`，同时保留原始的异常类型和回溯信息，以便 Langfuse 等可观测性后端能够更好地进行监控。\n\n- **Anthropic 流式传输过早终止** — [PR#2047](https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F2047)：当流式传输在最终的 `message_stop` 事件之前终止时，Anthropic 提供商会因 `AttributeError` 而崩溃，原因是它","2026-04-08T19:41:20",{"id":152,"version":153,"summary_zh":154,"released_at":155},163377,"v1.34.1","## 变更内容\n* 修复：由 @JackYPCOnline 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F2018 中修复类型不兼容问题\n* 修复：由 @lizradway 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F2022 中隔离 LangFuse 的环境变量\n* 修复：由 @zastrowm 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F2032 中恢复显式调用 span.end()，以修复 Span 结束时间的回归问题\n* 功能（上下文）：由 @lizradway 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F2009 中跟踪上下文 Token 数量\n\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fcompare\u002Fv1.34.0...v1.34.1","2026-04-01T20:32:21",{"id":157,"version":158,"summary_zh":159,"released_at":160},163378,"v1.34.0","## 变更内容\n* chore：移除必需集成测试提供商中的 Cohere，由 @zastrowm 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1967 中完成\n* feat：添加 AgentAsTool 功能，由 @notowen333 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1932 中完成\n* feat：自动包装传入工具列表中的 Agent 实例，由 @agent-of-mkmeral 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1997 中完成\n* feat（遥测）：根据 GenAI 语义约定，在聊天跨度上发出系统提示，由 @sanjeed5 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1818 中完成\n* feat（MCP）：添加对 MCP 激发 -32042 错误处理的支持，由 @Christian-kam 在 
https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1745 中完成\n* fix：修复 Ollama 的输入\u002F输出 token 计数问题，由 @lizradway 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F2008 中完成\n* feat：为服务器端对话管理添加有状态模型支持，由 @pgrayy 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F2004 中完成\n* feat：为 OpenAI Responses API 添加内置工具支持，由 @pgrayy 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F2011 中完成\n* fix：处理 OpenAIResponsesModel 请求格式化中的推理内容，由 @pgrayy 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F2013 中完成\n\n## 新贡献者\n* @notowen333 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1932 中完成了首次贡献\n* @agent-of-mkmeral 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1997 中完成了首次贡献\n* @sanjeed5 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1818 中完成了首次贡献\n* @Christian-kam 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1745 中完成了首次贡献\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fcompare\u002Fv1.33.0...v1.34.0","2026-03-31T18:45:59",{"id":162,"version":163,"summary_zh":164,"released_at":165},163379,"v1.33.0","已将 litellm 硬性锁定至 ≤1.82.6，以应对供应链攻击——[PyPI 上 litellm 1.82.8 的供应链攻击](https:\u002F\u002Ffuturesearch.ai\u002Fblog\u002Flitellm-pypi-supply-chain-attack\u002F)\n\n## 变更内容\n\n* 修复：摘要对话管理器有时会返回空响应，由 @Unshure 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1947 中完成\n* 修复：从 swarm 测试中移除代理，以提高测试的一致性，由 @Unshure 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1946 中完成\n* 修复：严重：为缓解供应链攻击，硬性锁定 `litellm\u003C=1.82.6`，由 @udaymehta 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1961 中完成\n\n## 新贡献者\n* @udaymehta 在 
https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1961 中完成了首次贡献\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fcompare\u002Fv1.32.0...v1.33.0","2026-03-24T17:55:24",{"id":167,"version":168,"summary_zh":169,"released_at":170},163380,"v1.32.0","## 变更内容\n* 修复（事件循环）：确保所有循环指标都包含结束时间和持续时间，由 @stephentreacy 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1903 中完成\n* 修复：固定 mistralai 依赖的上限版本，由 @mkmeral 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1935 中完成\n* 修复：当流式响应包含 toolUse 块时，覆盖 end_turn 的停止原因，由 @atian8179 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1827 中完成\n\n## 新贡献者\n* @stephentreacy 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1903 中完成了首次贡献\n* @atian8179 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1827 中完成了首次贡献\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fcompare\u002Fv1.31.0...v1.32.0","2026-03-20T14:02:41",{"id":172,"version":173,"summary_zh":174,"released_at":175},163381,"v1.31.0","## 变更内容\n* 功能：由 @mkmeral 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1854 中实现，将 A2A 请求上下文元数据作为调用状态传递。\n* 修复：由 @mehtarac 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1915 中修复 S3 会话管理器的 bug。\n* 修复（图）：由 @giulio-leone 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1846 中实现，仅评估已完成节点的出边。\n* 修复（OpenAI）：由 @giulio-leone 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1878 中实现，始终对工具消息使用字符串内容。\n* 功能：由 @BV-Venky 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1793 中扩展 OpenAI 依赖范围，以支持 2.x 版本，从而兼容 LiteLLM。\n* 修复：由 @JackYPCOnline 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1870 中修复在 Graph\u002FSwarm 
会话持久化中序列化包含二进制内容的多模态提示时出现的 TypeError。\n* 修复：由 @zastrowm 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1929 中将代码片段中的 Python 语言名称改为小写。\n* 修复：由 @Unshure 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1931 中改进 OpenAI 响应的 API 错误处理。\n\n## 新贡献者\n* @BV-Venky 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1793 中完成了首次贡献。\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fcompare\u002Fv1.30.0...v1.31.0","2026-03-19T14:07:19",{"id":177,"version":178,"summary_zh":179,"released_at":180},163382,"v1.30.0","## 变更内容\n* 功能：添加“anthropic”缓存策略，以绕过模型 ID 检查，由 @kevmyung 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1808 中实现\n* 功能：在可能的情况下将工具结果序列化为 JSON 格式，由 @clareliguori 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1752 中实现\n* 修复：摘要管理器使用结构化输出，由 @pgrayy 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1805 中实现\n* 功能（MCP）：在 MCPClient 上公开 InitializeResult 中的服务器指令，由 @ShotaroKataoka 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1814 中实现\n* 修复：为附加属性添加 LANGFUSE_BASE_URL 检查，由 @poshinchen 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1826 中实现\n* 功能（会话）：添加脏标志，以跳过不必要的代理状态持久化，由 @Unshure 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1803 中实现\n* 功能：添加公共 tool_spec 设置器，由 @mkmeral 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1822 中实现\n* 功能：添加 CancellationToken，用于优雅地取消代理执行，由 @jgoyani1 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1772 中实现\n* 功能（会话）：优化会话管理器的初始化，由 @Unshure 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1829 中实现\n* 修复（Mistral）：在流式模式下报告用量指标，由 @jackatorcflo 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1697 
中实现\n* 修复（openai_responses）：在多轮对话中，助手消息使用 output_text，由 @giulio-leone 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1851 中实现\n* 功能（钩子）：向 AfterInvocationEvent 添加 resume 标志，由 @mkmeral 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1767 中实现\n* 修复：将缓存点设置在最后一条用户消息上，而非助手消息，由 @kevmyung 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1821 中实现\n* 功能（技能）：将代理技能作为插件添加，由 @mkmeral 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1755 中实现\n* 功能（Steering）：将 Steering 从实验性功能移至生产环境，由 @dbschmigelski 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1853 中实现\n* 修复：打破循环引用，以防止使用 MCPClient 时代理清理操作卡死，由 @dbschmigelski 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1830 中实现\n* 修复：在每个 initialize_* 方法结束时将 _is_new_session 设置为 False，由 @mehtarac 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1859 中实现\n\n## 新贡献者\n* @ShotaroKataoka 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1814 中完成了首次贡献\n* @jgoyani1 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1772 中完成了首次贡献\n* @jackatorcflo 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1697 中完成了首次贡献\n* @giulio-leone 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1851 中完成了首次贡献\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fcompare\u002Fv1.29.0...v1.30.0","2026-03-11T18:34:05",{"id":182,"version":183,"summary_zh":184,"released_at":185},163383,"v1.29.0","## 变更内容\n* 测试：为解决 hatch 的问题，将 virtualenv 锁定至 \u003C21 版本，由 @clareliguori 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1771 中完成。\n* 修复（遥测）：为 LangFuse 添加了最新的语义约定作为跨度属性，由 @poshinchen 在 
https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1768 中完成。\n* 修复：在工具执行后保留 guardrail_latest_message 包装，由 @austinmw 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1658 中完成。\n* 功能（对话管理器）：改进工具结果截断策略，由 @kevmyung 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1756 中完成。\n* 功能（插件）：使用 @hook 和 @tool 装饰器提升插件创建的开发体验，由 @Unshure 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1740 中完成。\n* CI：将 actions\u002Fupload-artifact 从 6 升级到 7，由 @dependabot[bot] 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1777 中完成。\n* CI：将 actions\u002Fdownload-artifact 从 7 升级到 8，由 @dependabot[bot] 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1776 中完成。\n* 修复：从 ConcurrentToolExecutor 抛出异常（#1796），由 @charles-dyfis-net 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1797 中完成。\n* 功能：添加 OpenAI Responses API 模型实现，由 @notgitika 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F975 中完成。\n\n## 新贡献者\n* @austinmw 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1658 中完成了首次贡献。\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fcompare\u002Fv1.28.0...v1.29.0","2026-03-04T21:13:13",{"id":187,"version":188,"summary_zh":189,"released_at":190},163384,"v1.28.0","## 变更内容\n* 修复：在新账号中更新 agentcore 的区域，由 @afarntrog 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1715 中完成\n* 修复：移除适用于 Python 3.14 时会失败的测试，由 @Unshure 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1717 中完成\n* 功能：(hooks) 支持 add_hook 的联合类型和类型列表，由 @Unshure 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1719 中完成\n* 功能：通过懒加载将 pyaudio 设为可选依赖，由 @mehtarac 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1731 
中完成\n* 功能：(hooks) 为代理扩展性添加 Plugin 协议，由 @Unshure 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1733 中完成\n* 功能：为 Agent 添加 plugins 参数，由 @Unshure 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1734 中完成\n* 重构：(plugins) 将 Plugin 从 Protocol 转换为 ABC，由 @Unshure 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1741 中完成\n* 功能：(steering) 将 SteeringHandler 从 HookProvider 迁移到 Plugin，由 @Unshure 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1738 中完成\n* 杂项：切换至 Sonnet 4.6 用于 Anthropic 提供商集成测试，由 @clareliguori 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1754 中完成\n* 修复：将 init_plugin 重命名为 init_agent，由 @Unshure 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1765 中完成\n\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fcompare\u002Fv1.27.0...v1.28.0","2026-02-25T19:32:15",{"id":192,"version":193,"summary_zh":194,"released_at":195},163385,"v1.27.0","## 变更内容\n* 功能：将异常传播到 AfterToolCallEvent，适用于装饰工具 (#1565)，由 @charles-dyfis-net 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1566 中实现\n* 功能（工作流）：在 PR 中添加常规提交工作流，由 @mkmeral 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1645 中实现\n* 修复：A2AAgent 返回空的 AgentResult 内容，由 @afarntrog 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1675 中修复\n* 自动运行维护者 PR 的审查工作流，由 @mehtarac 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1673 中实现\n* 修复：修正集成测试中 approval-env 的输出引用，由 @afarntrog 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1685 中修复\n* 修复：更新 strands agent 工作流的 approval 环境变量，由 @Unshure 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1701 中修复\n* 修复：更新允许的角色以包含维护者，由 @afarntrog 在 
https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1704 中修复\n* 修复：在 Gemini 工具使用时传播 reasoningSignature，由 @afarntrog 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1703 中修复\n* CI：将 actions\u002Fgithub-script 从 7 升级到 8，由 @dependabot[bot] 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1699 中完成\n* CI：将 amannn\u002Faction-semantic-pull-request 从 5 升级到 6，由 @dependabot[bot] 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1684 中完成\n* 修复：处理包含工具调用但无其他助手内容的 OpenAI 模型响应，由 @clareliguori 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1562 中修复\n* 修复：更新工作流执行的最终条件，由 @Unshure 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1708 中修复\n* 修复：为支持 Tasks，将 mcp 的最低依赖版本升级至 1.23.0，由 @clareliguori 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1674 中修复\n* 功能（代理）：添加 concurrent_invocation_mode 参数，由 @zastrowm 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1707 中实现\n* 测试：为 Python 3.14 提供覆盖率，由 @awsarron 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1178 中实现\n* 功能（代理）：添加用于注册钩子回调的 add_hook 便捷方法，由 @Unshure 在 https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1706 中实现\n\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fcompare\u002Fv1.26.0...v1.27.0","2026-02-19T17:10:50",{"id":197,"version":198,"summary_zh":199,"released_at":200},163386,"v1.26.0","## What's Changed\r\n* ci: bump aws-actions\u002Fconfigure-aws-credentials from 5 to 6 by @dependabot[bot] in https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1632\r\n* docs: add guidance on using Protocol instead of Callable for extensible interfaces by @dbschmigelski in https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1637\r\n* 
feat(mcp): Implement basic support for Tasks by @LucaButBoring in https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1475\r\n* fix(multiagent): set empty text part data in `parts` for `Artifact` by @punkyoon in https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1643\r\n* fix(summarizing_conversation_manager): use model stream to generate summary by @mkmeral in https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1653\r\n* fix(bedrock): add 'prompt is too long' to context window overflow mes… by @eladb3 in https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1663\r\n* fix: fix mcp tests by @afarntrog in https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1664\r\n\r\n## New Contributors\r\n* @LucaButBoring made their first contribution in https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1475\r\n* @punkyoon made their first contribution in https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1643\r\n* @eladb3 made their first contribution in https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1663\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fcompare\u002Fv1.25.0...v1.26.0","2026-02-11T19:49:46",{"id":202,"version":203,"summary_zh":204,"released_at":205},163387,"v1.25.0","## Major Features\r\n\r\n### A2AAgent: First-Class Client for Remote A2A Agents - [PR#1441](https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1441)\r\n\r\nThe new `A2AAgent` class makes it simple to connect to and invoke remote agents that implement the [Agent-to-Agent (A2A) protocol](https:\u002F\u002Fgithub.com\u002Fgoogle\u002Fa2a). `A2AAgent` implements the `AgentBase` protocol, so it can be called synchronously, asynchronously, or used in streaming mode just like a local `Agent`. 
It automatically discovers the remote agent's card to populate its name and description, and manages HTTP client lifecycle for you.\r\n\r\n```python\r\nfrom strands.agent.a2a_agent import A2AAgent\r\n\r\n# Connect to a remote A2A agent\r\na2a_agent = A2AAgent(endpoint=\"http:\u002F\u002Flocalhost:9000\")\r\n\r\n# Invoke it like any other agent\r\nresult = a2a_agent(\"Show me 10 ^ 6\")\r\nprint(result.message)\r\n\r\n# Or stream events asynchronously\r\nasync for event in a2a_agent.stream_async(\"Summarize this report\"):\r\n    if event.get(\"type\") == \"a2a_stream\":\r\n        print(f\"A2A event: {event['event']}\")\r\n    elif \"result\" in event:\r\n        print(f\"Final: {event['result'].message}\")\r\n```\r\n\r\n### A2AAgent Support in Graph Workflows - [PR#1615](https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1615)\r\n\r\nGraph nodes now accept any `AgentBase` implementation as an executor, not just the concrete `Agent` class. This means `A2AAgent` instances and other custom `AgentBase` implementations can participate in graph-based multi-agent workflows alongside local agents. 
The `Agent` class also now explicitly extends `AgentBase` for compile-time protocol verification.\r\n\r\n```python\r\nfrom strands import Agent\r\nfrom strands.agent.a2a_agent import A2AAgent\r\nfrom strands.multiagent.graph import GraphBuilder\r\n\r\nlocal_agent = Agent(name=\"summarizer\", system_prompt=\"Summarize input concisely.\")\r\nremote_agent = A2AAgent(endpoint=\"http:\u002F\u002Fremote-agent:9000\")\r\n\r\nbuilder = GraphBuilder()\r\nbuilder.add_node(remote_agent, \"research\")\r\nbuilder.add_node(local_agent, \"summarize\")\r\nbuilder.add_edge(\"research\", \"summarize\")\r\ngraph = builder.build()\r\n\r\nresult = graph(\"Analyze recent AI trends\")\r\n```\r\n\r\n### Interrupt Support for MultiAgent Graph Nodes - [PR#1606](https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1606)\r\n\r\nInterrupts now propagate correctly through nested multi-agent graph nodes. When an agent inside a nested `Graph`, `Swarm`, or any custom `MultiAgentBase` node raises an interrupt, the outer graph pauses execution and surfaces the interrupt to the caller. After the caller provides a response, the outer graph resumes execution from where it left off. 
This builds on the interrupt support for single-agent graph nodes added in v1.24.0.\r\n\r\n```python\r\nfrom strands import Agent, tool\r\nfrom strands.interrupt import Interrupt\r\nfrom strands.multiagent import GraphBuilder, Status\r\nfrom strands.types.tools import ToolContext\r\n\r\n@tool(context=True)\r\ndef approval_tool(tool_context: ToolContext) -> str:\r\n    return tool_context.interrupt(\"approval\", reason=\"Needs human approval\")\r\n\r\nagent = Agent(name=\"reviewer\", tools=[approval_tool])\r\n\r\ninner_graph = GraphBuilder()\r\ninner_graph.add_node(agent, \"reviewer\")\r\nouter_graph = GraphBuilder()\r\nouter_graph.add_node(inner_graph.build(), \"review_pipeline\")\r\n\r\ngraph = outer_graph.build()\r\nresult = graph(\"Review this document\")\r\n\r\nwhile result.status == Status.INTERRUPTED:\r\n    responses = [\r\n        {\"interruptResponse\": {\"interruptId\": i.id, \"response\": \"Approved\"}}\r\n        for i in result.interrupts\r\n    ]\r\n    result = graph(responses)\r\n```\r\n\r\n### S3 Location Support for Documents, Images, and Videos - [PR#1572](https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1572)\r\n\r\nMedia content types now support S3 locations as a source, allowing you to reference documents, images, and videos stored in Amazon S3 directly without base64 encoding. The new `S3Location` type includes a required `uri` field and an optional `bucketOwner` field for cross-account access. 
On Bedrock, S3 locations are passed through to the API natively.\r\n\r\n```python\r\nfrom strands import Agent\r\n\r\nagent = Agent()\r\n\r\nresponse = agent([{\r\n    \"role\": \"user\",\r\n    \"content\": [\r\n        {\"text\": \"Summarize this document:\"},\r\n        {\r\n            \"document\": {\r\n                \"format\": \"pdf\",\r\n                \"name\": \"report\",\r\n                \"source\": {\r\n                    \"location\": {\r\n                        \"type\": \"s3\",\r\n                        \"uri\": \"s3:\u002F\u002Fmy-bucket\u002Fdocuments\u002Freport.pdf\",\r\n                        \"bucketOwner\": \"123456789012\"  # optional, for cross-account\r\n                    }\r\n                }\r\n            }\r\n        }\r\n    ]\r\n}])\r\n```\r\n\r\n### Configurable Structured Output Prompt - [PR#1627](https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1627)\r\n\r\nThe prompt message the agent uses to request structured output formatting is now configurable via the `structured_output_prompt` parameter. 
Previously, the hardcoded message `\"You must format the previous response as structured output.\"` could trigge","2026-02-05T19:48:35",{"id":207,"version":208,"summary_zh":209,"released_at":210},163388,"v1.24.0","## What's Changed\r\n* test: fix flaky openai structured output test by adding Field guidance by @dbschmigelski in https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1534\r\n* interrupts - multiagent - do not emit AfterNodeCallEvent on interrupt by @pgrayy in https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1539\r\n* ci: add workflow for lambda layer publish by @dbschmigelski in https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F870\r\n* fix: Populate tool_args correctly for steering by @clareliguori in https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1531\r\n* interrupts - graph - agent based by @pgrayy in https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1533\r\n* chore: refactor use_span to be closed automatically by @poshinchen in https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1293\r\n* ci: limit permission scope on lambda layer github action by @dbschmigelski in https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1555\r\n* chore: Enable Auto-close labels on Pull requests as well. 
by @yonib05 in https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1552\r\n* Use devtools actions by @Unshure in https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1554\r\n* feat(bedrock): add automatic prompt caching support by @kevmyung in https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1438\r\n* feat(hooks): add retry mechanism for tool calls by @dbschmigelski in https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1556\r\n* feat(tools): move ToolProvider out of experimental namespace by @Unshure in https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1567\r\n* [FIX] models - gemini - start and stop reasoningContent by @JackYPCOnline in https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1557\r\n* feat(agent): update AgentResult __str__ priority order by @afarntrog in https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1553\r\n* callback handler - fix reporting of tool when missing delta by @pgrayy in https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1573\r\n* feat(hooks): Add invocation state by @mkmeral in https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1550\r\n* test(steering): Fix failing integ tests by @mkmeral in https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1580\r\n\r\n## New Contributors\r\n* @kevmyung made their first contribution in https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1438\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fcompare\u002Fv1.23.0...v1.24.0","2026-01-29T01:23:20",{"id":212,"version":213,"summary_zh":214,"released_at":215},163389,"v1.23.0"," ## Major Features\r\n ### Configurable Retry Strategy for Model Calls - 
[PR#1424](https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1424)\r\n Agents now support customizable retry behavior when handling model throttling exceptions. The new `ModelRetryStrategy` class allows you to configure maximum retry attempts, initial delay, and maximum delay, replacing the previously hardcoded retry logic.\r\n\r\n```python\r\n from strands import Agent, ModelRetryStrategy\r\n \r\n agent = Agent(\r\n     model=\"anthropic.claude-3-sonnet\",\r\n     retry_strategy=ModelRetryStrategy(\r\n         max_attempts=3,\r\n         initial_delay=2,\r\n         max_delay=60\r\n     )\r\n )\r\n```\r\n\r\n ### Model Response Steering - [PR#1429](https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1429)\r\n\r\n Steering handlers can now intercept and guide model responses through the new `steer_after_model()` method. This enables validation of model outputs, enforcement of required tool usage, and conversation flow control based on model responses before the agent completes.\r\n\r\n```python\r\n from strands.experimental.steering import Guide, Proceed, SteeringHandler\r\n \r\n class ForceToolUsageHandler(SteeringHandler):\r\n     def __init__(self, required_tool: str):\r\n         super().__init__()\r\n         self.required_tool = required_tool\r\n \r\n     async def steer_after_model(self, *, agent, message, stop_reason, **kwargs):\r\n         if stop_reason != \"end_turn\":\r\n             return Proceed(reason=\"Model still processing\")\r\n \r\n         # Check if required tool was used\r\n         for block in message.get(\"content\", []):\r\n             if \"toolUse\" in block and block[\"toolUse\"].get(\"name\") == self.required_tool:\r\n                 return Proceed(reason=\"Required tool was used\")\r\n \r\n         # Force tool usage\r\n         return Guide(reason=f\"You MUST use the {self.required_tool} tool.\")\r\n \r\n agent = Agent(tools=[log_tool], 
hooks=[ForceToolUsageHandler(\"log_activity\")])\r\n```\r\n\r\n ### Input Messages in BeforeInvocationEvent - [PR#1474](https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1474)\r\n The `BeforeInvocationEvent` hook now exposes the input messages through a new `messages` parameter, enabling preprocessing, guardrails, and filters to run before the agent processes user input.\r\n\r\n```python\r\n from strands import Agent\r\n from strands.hooks import HookProvider, BeforeInvocationEvent\r\n \r\n class PreprocessingHook(HookProvider):\r\n     def register_hooks(self, registry):\r\n         registry.add_callback(BeforeInvocationEvent, self.preprocess)\r\n     \r\n     def preprocess(self, event):\r\n         # Access input messages for preprocessing\r\n         messages = event.messages\r\n         # Apply guardrails or filters here\r\n         pass\r\n \r\n agent = Agent(hooks=[PreprocessingHook()])\r\n```\r\n\r\n ### Nova Sonic 2 Support for BidiAgent - [PR#1476](https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1476)\r\n BidiAgent now supports Amazon Nova Sonic v2 models with enhanced features including the turn_taking parameter and improved text input handling for bidirectional voice conversations.\r\n\r\n```python\r\n from strands.experimental.bidi.models import BidiNovaSonicModel\r\n from strands.experimental.bidi import BidiAgent\r\n \r\n model = BidiNovaSonicModel(model_id=\"us.amazon.nova-sonic-v2:0\")\r\n agent = BidiAgent(model=model)\r\n```\r\n\r\n ### Graph Interrupts from Hooks - [PR#1478](https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1478)\r\n Graph nodes can now be interrupted from `BeforeNodeCallEvent` hooks, enabling approval workflows and human-in-the-loop validation before specific agents execute in multi-agent graphs.\r\n\r\n```python\r\n from strands import Agent\r\n from strands.hooks import HookProvider, BeforeNodeCallEvent\r\n from strands.multiagent 
import GraphBuilder, Status\r\n \r\n class ApprovalHook(HookProvider):\r\n     def register_hooks(self, registry):\r\n         registry.add_callback(BeforeNodeCallEvent, self.approve)\r\n \r\n     def approve(self, event):\r\n         if event.node_id == \"info_agent\":\r\n             return\r\n \r\n         response = event.interrupt(\"my_interrupt\", reason=f\"{event.node_id} needs approval\")\r\n         if response != \"APPROVE\":\r\n             event.cancel_node = \"node rejected\"\r\n \r\n builder = GraphBuilder()\r\n builder.add_node(info_agent, \"info_agent\")\r\n builder.add_node(weather_agent, \"weather_agent\")\r\n builder.add_edge(\"info_agent\", \"weather_agent\")\r\n builder.set_hook_providers([ApprovalHook()])\r\n graph = builder.build()\r\n \r\n result = graph(\"What is the weather?\")\r\n while result.status == Status.INTERRUPTED:\r\n     # Handle interrupts and resume with responses\r\n     result = graph(interrupt_responses)\r\n```\r\n\r\n ### Multiagent Hook Events Graduated - [PR#1498](https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1498)\r\n Multiagent hook events (`BeforeNodeCallEvent`, `AfterNodeCallEvent`) have been promoted from experimental to the stable API. These events are now available directly from `strands.hooks` without the experimental import path.\r\n\r\n```python\r\n from strands.hooks import BeforeNodeCallEvent, AfterNodeCa","2026-01-21T20:10:36",{"id":217,"version":218,"summary_zh":219,"released_at":220},163390,"v1.22.0","## Major Features\r\n\r\n### MCP Resource Operations - [PR#1117](https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1117)\r\n\r\nThe MCP client now supports resource operations, enabling agents to list, read, and work with resources provided by MCP servers. 
This includes static resources, binary resources, and parameterized resource templates.\r\n\r\n```python\r\nfrom strands.tools.mcp import MCPClient\r\n\r\nwith MCPClient(server_transport) as client:\r\n    # List available resources\r\n    resources = client.list_resources_sync()\r\n    for resource in resources.resources:\r\n        print(f\"Resource: {resource.name} at {resource.uri}\")\r\n    \r\n    # Read a specific resource\r\n    content = client.read_resource_sync(\"file:\u002F\u002Fdocuments\u002Freport.txt\")\r\n    text = content.contents[0].text\r\n    \r\n    # List resource templates (parameterized resources)\r\n    templates = client.list_resource_templates_sync()\r\n    for template in templates.resourceTemplates:\r\n        print(f\"Template: {template.uriTemplate}\")\r\n```\r\n\r\n### Bedrock Guardrails Latest Message Option - [PR#1224](https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1224)\r\n\r\nBedrock models now support the `guardrail_latest_message` parameter, which sends only the latest user message to AWS Bedrock Guardrails for evaluation instead of the entire conversation history. This reduces token usage and enables conversation recovery after guardrail interventions.\r\n\r\n```python\r\nfrom strands.models.bedrock import BedrockModel\r\n\r\nmodel = BedrockModel(\r\n    model_id=\"us.anthropic.claude-sonnet-4-20250514-v1:0\",\r\n    guardrail_id=\"your-guardrail-id\",\r\n    guardrail_version=\"DRAFT\",\r\n    guardrail_latest_message=True  # Only evaluate the latest user message\r\n)\r\n```\r\n\r\nSee the [Bedrock Guardrails documentation](https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fdocs\u002Fpull\u002F340) for more details.\r\n\r\n### LiteLLM Non-Streaming Support - [PR#512](https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F512)\r\n\r\nThe LiteLLM model provider now correctly handles non-streaming responses, fixing an issue where `stream=False` would raise an error. 
Both streaming and non-streaming modes now work seamlessly.\r\n\r\n```python\r\nfrom strands import Agent\r\nfrom strands.models.litellm import LiteLLMModel\r\n\r\n# Use non-streaming mode for simpler response handling\r\nmodel = LiteLLMModel(\r\n    model_id=\"gpt-3.5-turbo\",\r\n    params={\"stream\": False}\r\n)\r\n\r\n# Works correctly now - no more ValueError\r\nagent = Agent(model=model)\r\nresult = agent(\"What is 2+2?\")\r\n```\r\n\r\n---\r\n\r\n## Major Bug Fixes\r\n\r\n- **Concurrent Agent Invocations** - [PR#1453](https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1453)  \r\n  Fixed critical agent state corruption when multiple concurrent invocations occurred on the same agent instance. A new `ConcurrencyException` is now raised to prevent concurrent invocations and protect agent state integrity.\r\n\r\n- **Gemini Empty Stream Handling** - [PR#1420](https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1420)  \r\n  Fixed an `UnboundLocalError` crash when Gemini returns an empty event stream by properly initializing variables before the stream loop.\r\n\r\n- **Deprecation Warning on Import** - [PR#1380](https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1380)  \r\n  Fixed unwanted deprecation warnings appearing when importing `strands` by using lazy `__getattr__` to emit warnings only when deprecated aliases are actually accessed.\r\n\r\n---\r\n\r\n## What's Changed\r\n* docs: update github agent action to reference S3_SESSION_BUCKET by @dbschmigelski in https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1418\r\n* feat: provide extra command content as the the prompt to the agent by @zastrowm in https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1419\r\n* [FEATURE] add MCP resource operations in MCP Tools by @xiehust in https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1117\r\n* fix: import errors for models with 
optional imports by @mehtarac in https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1384\r\n* add BidiGeminiLiveModel and BidiOpenAIRealtimeModel to the init by @mehtarac in https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1383\r\n* bidi - async - remove cancelling call by @pgrayy in https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1357\r\n* feat(bedrock): add guardrail_latest_message option by @aiancheruk in https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1224\r\n* fix(gemini): UnboundLocal Exception Fix by @emattiza in https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1420\r\n* fix! Litellm handle non streaming response fix for issue #477 by @schleidl in https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F512\r\n* feat(agent-interface): introduce AgentBase Protocol as the interface for agent classes to implement by @awsarron in https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1126\r\n* ci: update pytest requirement from \u003C9.0.0,>=8.0.0 to >=8.0.0,\u003C10.0.0 in the dev-dependencies group by @dependabot[bot] in https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1161\r\n* feat: pass invocation_state to model providers by @tirth14 in https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1414\r\n* Add Security.md file b","2026-01-13T21:29:19",{"id":222,"version":223,"summary_zh":224,"released_at":225},163391,"v1.21.0","\r\n## Major Features\r\n\r\n### Custom HTTP Client Support for OpenAI and Gemini - [PR#1366](https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1366)\r\n\r\nThe OpenAI and Gemini model providers now accept a pre-configured client via the `client` parameter, enabling connection pooling, proxy configuration, custom timeouts, and centralized observability 
across all model requests. The client is reused for all requests and its lifecycle is managed by the caller, not the model provider.\r\n\r\n```python\r\nfrom strands.models.openai import OpenAIModel\r\nimport httpx\r\n\r\n# Create custom client with proxy and timeout configuration\r\ncustom_client = httpx.AsyncClient(\r\n    proxy=\"http:\u002F\u002Fproxy.example.com:8080\",\r\n    timeout=60.0\r\n)\r\n\r\nmodel = OpenAIModel(model_id=\"gpt-4o-mini\", client=custom_client)\r\n```\r\n\r\n### Gemini Built-in Tools (Google Search, Code Execution) - [PR#1050](https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1050)\r\n\r\nGemini models now support native Google tools like GoogleSearch and CodeExecution through the `gemini_tools` parameter. These tools integrate directly with Gemini's API without requiring custom function implementations, enabling agents to search the web and execute code natively.\r\n\r\n```python\r\nfrom strands.models.gemini import GeminiModel\r\nfrom google.genai import types\r\n\r\nmodel = GeminiModel(\r\n    model_id=\"gemini-2.0-flash-exp\",\r\n    gemini_tools=[types.Tool(google_search=types.GoogleSearch())]\r\n)\r\n```\r\n\r\n### Hook-based Model Retry on Exceptions - [PR#1405](https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1405)\r\n\r\nHooks can now retry model invocations by setting `event.retry = True` in the `AfterModelCallEvent` handler, enabling custom retry logic for transient errors, rate limits, or quality checks. 
This provides fine-grained control over model retry behavior beyond basic exception handling.\r\n\r\n```python\r\nimport asyncio\r\nimport logging\r\n\r\nfrom strands.hooks import AfterModelCallEvent, BeforeInvocationEvent, HookProvider\r\n\r\nlogger = logging.getLogger(__name__)\r\n\r\nclass RetryOnServiceUnavailable(HookProvider):\r\n    def __init__(self, max_retries=3):\r\n        self.max_retries = max_retries\r\n        self.retry_count = 0\r\n\r\n    def register_hooks(self, registry, **kwargs):\r\n        registry.add_callback(BeforeInvocationEvent, self.reset_counts)\r\n        registry.add_callback(AfterModelCallEvent, self.handle_retry)\r\n\r\n    def reset_counts(self, event=None):\r\n        self.retry_count = 0\r\n\r\n    async def handle_retry(self, event):\r\n        if event.exception:\r\n            if \"ServiceUnavailable\" in str(event.exception):\r\n                logger.info(\"ServiceUnavailable encountered\")\r\n                count = self.retry_count\r\n                if count \u003C self.max_retries:\r\n                    logger.info(\"Retrying model call\")\r\n                    self.retry_count = count + 1\r\n                    event.retry = True\r\n                    await asyncio.sleep(2 ** count)  # Exponential backoff\r\n        else:\r\n            # reset counts in the successful case\r\n            self.reset_counts(None)\r\n```\r\n\r\n### Per-Turn Conversation Management - [PR#1374](https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1374)\r\n\r\nConversation managers now support mid-execution management via the `per_turn` parameter, applying conversation window management before each model call rather than only at the end. 
This prevents context window overflow during multi-turn conversations with tools or long responses.\r\n\r\n```python\r\nfrom strands import Agent\r\nfrom strands.agent.conversation_manager import SlidingWindowConversationManager\r\n\r\n# Enable management before every model call\r\nmanager = SlidingWindowConversationManager(\r\n    per_turn=True,\r\n    window_size=40\r\n)\r\n\r\n# Or manage every N turns\r\nmanager = SlidingWindowConversationManager(\r\n    per_turn=3,  # Manage every 3 model calls\r\n    window_size=40\r\n)\r\n\r\nagent = Agent(model=model, conversation_manager=manager)\r\n```\r\n\r\n### Agent Invocation Metrics - [PR#1387](https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1387)\r\n\r\nMetrics now track per-invocation data through `agent_invocations` and `latest_agent_invocation` properties, providing granular insight into each agent call's performance, token usage, and execution time. This enables detailed performance analysis for multi-invocation workflows.\r\n\r\n```python\r\nfrom strands import Agent\r\n\r\nagent = Agent(model=model)\r\nresult = agent(\"Analyze this data\")\r\n\r\n# Access invocation-level metrics\r\nlatest = result.metrics.latest_agent_invocation\r\nprint(f\"Cycles: {len(latest.cycles)}\")\r\nprint(f\"Tokens: {latest.usage}\")\r\n\r\n# Access all invocations\r\nfor invocation in result.metrics.agent_invocations:\r\n    print(f\"Invocation usage: {invocation.usage}\")\r\n```\r\n\r\n### ToolRegistry Replace Method - [PR#1182](https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1182)\r\n\r\nThe `ToolRegistry` now supports replacing existing tools via the `replace()` method, enabling dynamic tool updates without re-registering all tools. 
This is particularly useful for hot-reloading tool implementations or updating tools based on runtime conditions.\r\n\r\n```python\r\nfrom strands.tools.registry import ToolRegistry\r\nfrom strands import tool\r\n\r\nregistry","2026-01-02T19:26:07",{"id":227,"version":228,"summary_zh":229,"released_at":230},163392,"v1.20.0","## Major Features\r\n\r\n### Swarm Interrupts for Human-in-the-Loop - [PR#1193](https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1193)\r\n\r\nSwarm multi-agent systems now support interrupts, enabling Human-in-the-Loop patterns for approval workflows and user interaction during agent execution. Interrupts can be triggered via BeforeNodeCallEvent hooks or directly within agent tools using ToolContext.\r\n\r\n```python\r\nfrom strands import Agent, tool\r\nfrom strands.experimental.hooks.multiagent import BeforeNodeCallEvent\r\nfrom strands.hooks import HookProvider\r\nfrom strands.multiagent import Swarm\r\nfrom strands.multiagent.base import Status\r\n\r\n# Example 1: Interrupt via Hook\r\nclass ApprovalHook(HookProvider):\r\n    def register_hooks(self, registry):\r\n        registry.add_callback(BeforeNodeCallEvent, self.approve)\r\n\r\n    def approve(self, event):\r\n        response = event.interrupt(\"approval\", reason=f\"{event.node_id} needs approval\")\r\n        if response != \"APPROVE\":\r\n            event.cancel_node = \"rejected\"\r\n\r\nswarm = Swarm([agent1, agent2], hooks=[ApprovalHook()])\r\nresult = swarm(\"Task requiring approval\")\r\n\r\n# Handle interrupts\r\nwhile result.status == Status.INTERRUPTED:\r\n    for interrupt in result.interrupts:\r\n        user_input = input(f\"{interrupt.reason}: \")\r\n        responses = [{\"interruptResponse\": {\"interruptId\": interrupt.id, \"response\": user_input}}]\r\n    result = swarm(responses)\r\n\r\n# Example 2: Interrupt via Tool\r\n@tool(context=True)\r\ndef get_user_info(tool_context: ToolContext) -> str:\r\n    response = 
tool_context.interrupt(\"user_info\", reason=\"need user name\")\r\n    return f\"User: {response}\"\r\n\r\nuser_agent = Agent(name=\"user\", tools=[get_user_info])\r\nswarm = Swarm([user_agent])\r\nresult = swarm(\"Who is the user?\")\r\n# Resume with interrupt response as shown above\r\n```\r\n\r\nSee the [interrupts documentation](https:\u002F\u002Fstrandsagents.com\u002Flatest\u002Fdocumentation\u002Fdocs\u002Fuser-guide\u002Fconcepts\u002Finterrupts\u002F) for more details.\r\n\r\n### AgentResult Access in AfterInvocationEvent - [PR#1125](https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1125)\r\n\r\nHooks can now access the complete AgentResult in AfterInvocationEvent, enabling post-invocation actions based on the agent's output, stop reason, and metrics. This enhancement allows for richer observability and custom handling of agent results.\r\n\r\n```python\r\nfrom strands import Agent\r\nfrom strands.hooks import AfterInvocationEvent, HookProvider\r\n\r\nclass ResultLoggingHook(HookProvider):\r\n    def register_hooks(self, registry):\r\n        registry.add_callback(AfterInvocationEvent, self.log_result)\r\n\r\n    def log_result(self, event: AfterInvocationEvent):\r\n        # Access the complete AgentResult\r\n        if event.result:\r\n            print(f\"Stop reason: {event.result.stop_reason}\")\r\n            print(f\"Tokens used: {event.result.usage}\")\r\n            print(f\"Response: {event.result.text}\")\r\n\r\nagent = Agent(hooks=[ResultLoggingHook()])\r\nresult = agent(\"What is 2+2?\")\r\n```\r\n\r\n---\r\n\r\n## Major Bug Fixes\r\n\r\n- **Structured Output Display Fix** - [PR#1290](https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1290)  \r\n  Fixed `AgentResult.__str__()` to return structured output JSON when no text content is present, resolving issues where `print(agent_result)` showed empty output and structured output was lost in multi-agent graph propagation.\r\n\r\n- **Tool 
Spec Composition Keywords Fix** - [PR#1301](https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1301)  \r\n  Fixed tool specification handling for JSON Schema composition keywords (anyOf, oneOf, allOf, not), preventing models from incorrectly returning string-encoded JSON for optional parameters like `Optional[List[str]]`.\r\n\r\n- **MCP Client Resource Leak Fix** - [PR#1321](https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1321)  \r\n  Fixed file descriptor leak in MCP client by properly closing the asyncio event loop, preventing resource exhaustion in multi-tenant applications that create many MCP clients.\r\n\r\n---\r\n\r\n## All Changes \r\n\r\n* Remove toolResult message when toolUse is missing due to pagination in session management by @afarntrog in https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1274\r\n* interrupts - swarm by @pgrayy in https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1193\r\n* fix(agent): Return structured output JSON when AgentResult has no text by @afarntrog in https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1290\r\n* bidi - fix record direct tool call by @pgrayy in https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1300\r\n* Update doc strings to eliminate warnings in doc build by @zastrowm in https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1284\r\n* fix: fix broken tool spec with composition keywords  by @mkmeral in https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1301\r\n* bidi - tests - lint by @pgrayy in https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1307\r\n* bidi - fix mypy errors by @pgrayy in https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1308\r\n* feat(hooks): add AgentResult to AfterInvocationEvent by @Ratish1 in 
https:\u002F\u002Fgithub","2025-12-15T20:36:15",{"id":232,"version":233,"summary_zh":234,"released_at":235},163393,"v1.19.0","## What's New\r\n\r\n- **Bidirectional Agents (Experimental):** This release introduces BidiAgent for real-time voice conversations with AI agents through persistent connections that support continuous audio streaming, natural interruptions, and concurrent tool execution. This experimental feature allows developers to build voice assistants and interactive applications with support for Amazon Nova Sonic, OpenAI Realtime API, and Google Gemini Live.\r\n\r\n- **Steering (Experimental):** Enables modular prompting with progressive disclosure for complex agent workflows through just-in-time feedback loops. Instead of front-loading all instructions into monolithic prompts, steering handlers provide contextual guidance that appears when relevant, maintaining agent effectiveness on multi-step tasks while preserving adaptive reasoning capabilities.\r\n\r\n\r\n## What's Changed\r\n* fix: avoid KeyError in direct tool calls with context by @qmays-phdata in https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1213\r\n* fix: attached custom attributes to all spans by @poshinchen in https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1235\r\n* hooks - before node call - cancel node by @pgrayy in https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1203\r\n* interrupts - support falsey responses by @pgrayy in https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1256\r\n* Bidirectional Streaming Agent by @mehtarac in https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1276\r\n* mcp - elicitation - fix server request test by @pgrayy in https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1281\r\n* feat(steering): add experimental steering for modular prompting by @dbschmigelski in 
https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1280\r\n* test(steering): adjust integ test system prompts to reduce flakiness by @dbschmigelski in https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1282\r\n\r\n## New Contributors\r\n* @qmays-phdata made their first contribution in https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1213\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fcompare\u002Fv1.18.0...v1.19.0","2025-12-03T18:43:54",{"id":237,"version":238,"summary_zh":239,"released_at":240},163394,"v1.18.0","## What's Changed\r\n* multi agent input by @pgrayy in https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1196\r\n* interrupt - activate - set context separately by @pgrayy in https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1194\r\n* In PrintingCallbackHandler, make the verbose description and counting… by @marcbrooker in https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1211\r\n* fix: fix swarm session management integ test. 
by @JackYPCOnline in https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1155\r\n* move tool caller definition out of agent module by @pgrayy in https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1215\r\n* interrupt - interruptible multi agent hook interface by @pgrayy in https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1207\r\n* security(tool_loader): prevent tool name and sys modules collisions i… by @dbschmigelski in https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1214\r\n* fix(mcp): protect connection on non-fatal client side timeout error by @dbschmigelski in https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1231\r\n* fix(litellm): populate cacheWriteInputTokens from cache_creation_input_token not cache_creation_tokens by @dbschmigelski in https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1233\r\n* fix: fix integ test for mcp elicitation_server by @JackYPCOnline in https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1234\r\n\r\n## New Contributors\r\n* @marcbrooker made their first contribution in https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1211\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fcompare\u002Fv1.17.0...v1.18.0","2025-11-21T21:25:57",{"id":242,"version":243,"summary_zh":244,"released_at":245},163395,"v1.17.0","# Strands Agents SDK v1.17.0 Release Notes\r\n\r\n## Features\r\n\r\n### Configurable Timeout for MCP Agent Tools - [PR#1184](https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1184)\r\n\r\nYou can now set custom timeout values when creating MCP (Model Context Protocol) agent tools, providing better control over tool execution time limits and improving reliability when working with external MCP 
servers.\r\n\r\n```python\r\nfrom datetime import timedelta\r\n\r\nfrom strands import Agent\r\nfrom strands.tools.mcp import MCPAgentTool\r\n\r\n# Create MCP tool with custom 30-second timeout\r\nmcp_tool = MCPAgentTool(\r\n    ...,\r\n    timeout=timedelta(seconds=30)\r\n)\r\n\r\nagent = Agent(tools=[mcp_tool])\r\n```\r\n\r\nThis feature is especially useful when working with MCP servers that may have varying response times, allowing you to fine-tune timeout behavior for different use cases.\r\n\r\n---\r\n\r\n## Bug Fixes\r\n\r\n- **Swarm Handoff Timing** - [PR#1147](https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1147)  \r\n  Fixed swarm handoff behavior to only switch to the handoff node after the current node completes execution. Previously, the switch occurred mid-execution, causing incorrect event emissions and invalid swarm state when tools were interrupted concurrently with handoff tools.\r\n\r\n- **LiteLLM Stream Parameter Validation** - [PR#1183](https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1183)  \r\n  Added validation for the `stream` parameter in LiteLLM to prevent a `TypeError` when `stream=False` is provided. The SDK now properly handles both streaming and non-streaming responses with clear error messaging.\r\n\r\n- **Optional MetadataEvent Fields** - [PR#1187](https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1187)  \r\n  Fixed handling of MetadataEvents when custom model implementations omit optional `usage` or `metrics` fields. The SDK now provides sensible defaults, preventing KeyError exceptions and enabling greater flexibility for custom model providers.\r\n\r\n- **A2A Protocol File Data Decoding** - [PR#1195](https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1195)  \r\n  Fixed the A2A (Agent-to-Agent) executor to properly base64 decode file bytes from A2A messages before passing to Strands agents. 
Previously, agents were receiving base64-encoded strings instead of actual binary file content.\r\n\r\n---\r\n\r\n## All changes\r\n\r\n* feat: allow setting a timeout when creating MCPAgentTool by @AnirudhKonduru in https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1184\r\n* fix(litellm): add validation for stream parameter in LiteLLM by @dbschmigelski in https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1183\r\n* fix(event_loop): handle MetadataEvents without optional usage and metrics by @dbschmigelski in https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1187\r\n* swarm - switch to handoff node only after current node stops by @pgrayy in https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1147\r\n* fix(a2a): base64 decode byte data before placing in ContentBlocks by @dbschmigelski in https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1195\r\n\r\n## New Contributors\r\n\r\n* @AnirudhKonduru made their first contribution in https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fpull\u002F1184\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fstrands-agents\u002Fsdk-python\u002Fcompare\u002Fv1.16.0...v1.17.0","2025-11-18T19:09:39"]
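The v1.22.0 bug-fix list above mentions the lazy `__getattr__` technique (PR#1380) used to defer deprecation warnings until a deprecated alias is actually accessed, instead of warning on `import strands`. Below is a minimal stdlib sketch of that pattern (PEP 562 module `__getattr__`); the names `OldName`/`NewName` and the mapping table are illustrative, and the SDK's actual implementation may differ in detail:

```python
import types
import warnings

# Stand-in for a package module; a real package would define __getattr__
# at module top level instead of assigning it onto a ModuleType instance.
mod = types.ModuleType("demo")
mod.NewName = object()  # the current, supported name

_DEPRECATED = {"OldName": "NewName"}  # deprecated alias -> replacement

def _getattr(name):
    # Invoked only when `name` is not found in the module's namespace,
    # so the warning fires on first access of an alias, not at import time.
    if name in _DEPRECATED:
        new = _DEPRECATED[name]
        warnings.warn(
            f"{name} is deprecated; use {new} instead",
            DeprecationWarning,
            stacklevel=2,
        )
        return getattr(mod, new)
    raise AttributeError(f"module {mod.__name__!r} has no attribute {name!r}")

mod.__getattr__ = _getattr

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    resolved = mod.OldName  # alias resolves to the replacement object

assert resolved is mod.NewName
assert caught[0].category is DeprecationWarning
```

Because the warning is emitted inside the attribute lookup rather than at module import, code that never touches the deprecated alias sees no warning at all.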