[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-Ingenimax--agent-sdk-go":3,"tool-Ingenimax--agent-sdk-go":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks）。",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":70,"readme_en":71,"readme_zh":72,"quickstart_zh":73,"use_case_zh":74,"hero_image_url":75,"owner_login":76,"owner_name":76,"owner_avatar_url":77,"owner_bio":76,"owner_company":78,"owner_location":78,"owner_email":79,"owner_twitter":78,"owner_website":80,"owner_url":81,"languages":82,"stars":113,"forks":114,"last_commit_at":115,"license":116,"difficulty_score":23,"env_os":117,"env_gpu":118,"env_ram":118,"env_deps":119,"category_tags":125,"github_topics":126,"view_count":23,"oss_zip_url":78,"oss_zip_packed_at":78,"status":16,"created_at":131,"updated_at":132,"faqs":133,"releases":164},3744,"Ingenimax\u002Fagent-sdk-go","agent-sdk-go","A powerful Go framework for building production-ready AI agents!","agent-sdk-go 是一款专为 Go 语言开发者打造的强大框架，旨在帮助用户轻松构建可用于生产环境的高性能 AI 智能体。它解决了在开发复杂 AI 应用时，难以高效整合多模型支持、记忆管理、工具调用及企业级安全机制的痛点。\n\n无论是需要快速原型验证的独立开发者，还是追求高可用性与安全性的企业技术团队，都能通过 agent-sdk-go 获得流畅的开发体验。该工具不仅无缝兼容 OpenAI、Anthropic 和 Google Vertex AI 等主流大模型，还具备独特的模块化设计：支持即插即用的工具生态、基于向量的高级记忆管理，以及符合 Model Context Protocol (MCP) 标准的服务器集成。\n\n此外，agent-sdk-go 内置了完善的可观测性系统、令牌用量追踪和多租户隔离机制，确保应用在规模化部署时的稳定与安全。通过直观的 YAML 配置和零样本引导功能，开发者可以迅速定义复杂的任务流程，甚至直接通过 CLI 工具与 AI 进行交互式对话。如果你希望用 Go 语言构建灵活、可扩展且具备企业级特性的 AI 智能体，agent-sdk-go 将是一个值得信赖的选择。","agent-sdk-go 是一款专为 Go 语言开发者打造的强大框架，旨在帮助用户轻松构建可用于生产环境的高性能 AI 智能体。它解决了在开发复杂 AI 应用时，难以高效整合多模型支持、记忆管理、工具调用及企业级安全机制的痛点。\n\n无论是需要快速原型验证的独立开发者，还是追求高可用性与安全性的企业技术团队，都能通过 agent-sdk-go 
获得流畅的开发体验。该工具不仅无缝兼容 OpenAI、Anthropic 和 Google Vertex AI 等主流大模型，还具备独特的模块化设计：支持即插即用的工具生态、基于向量的高级记忆管理，以及符合 Model Context Protocol (MCP) 标准的服务器集成。\n\n此外，agent-sdk-go 内置了完善的可观测性系统、令牌用量追踪和多租户隔离机制，确保应用在规模化部署时的稳定与安全。通过直观的 YAML 配置和零样本引导功能，开发者可以迅速定义复杂的任务流程，甚至直接通过 CLI 工具与 AI 进行交互式对话。如果你希望用 Go 语言构建灵活、可扩展且具备企业级特性的 AI 智能体，agent-sdk-go 将是一个值得信赖的选择。","\u003Cdiv align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FIngenimax_agent-sdk-go_readme_26fb7b1b12f9.png\" alt=\"Ingenimax\" width=\"400\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FIngenimax_agent-sdk-go_readme_2ab0a80ed303.png\" alt=\"Ingenimax\" width=\"400\">\n\t\n\u003C\u002Fdiv>\n\n# Agent Go SDK\n\nA powerful Go framework for building production-ready AI agents that seamlessly integrates memory management, tool execution, multi-LLM support, and enterprise features into a flexible, extensible architecture.\n\n## Documentation\n\n📖 **[docs.goagents.dev](https:\u002F\u002Fdocs.goagents.dev\u002F)** — Full documentation, guides, and reference.\n\n## Community\n\n[![Discord](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDiscord-Join%20Our%20Community-5865F2?style=for-the-badge&logo=discord&logoColor=white)](https:\u002F\u002Fdiscord.com\u002Finvite\u002FMjJbDG2nQZ)\n\nJoin our Discord server to collaborate, share what you're building, and get community support for agent-sdk-go!\n\n## Features\n\n### Core Capabilities\n- 🧠 **Multi-Model Intelligence**: Seamless integration with OpenAI, Anthropic, and Google Vertex AI (Gemini models).\n- 🔧 **Modular Tool Ecosystem**: Expand agent capabilities with plug-and-play tools for web search, data retrieval, and custom operations\n- 📝 **Advanced Memory Management**: Persistent conversation tracking with buffer and vector-based retrieval options\n- 🔌 **MCP Integration**: Support for Model Context Protocol (MCP) servers via HTTP and stdio transports\n- 📊 **Token Usage Tracking**: Built-in token counting for cost monitoring, 
usage analytics, and optimization\n\n### Enterprise-Ready\n- 🚦 **Built-in Guardrails**: Comprehensive safety mechanisms for responsible AI deployment\n- 📈 **Complete Observability**: Integrated tracing and logging for monitoring and debugging\n- 🏢 **Enterprise Multi-tenancy**: Securely support multiple organizations with isolated resources\n\n### Development Experience\n- 🛠️ **Structured Task Framework**: Plan, approve, and execute complex multi-step operations\n- 📄 **Declarative Configuration**: Define sophisticated agents and tasks using intuitive YAML definitions\n- 🧙 **Zero-Effort Bootstrapping**: Auto-generate complete agent configurations from simple system prompts\n\n## Getting Started\n\n### Prerequisites\n\n- Go 1.23+\n- Redis (optional, for distributed memory)\n\n### Installation\n\n#### As a Go Library\n\nAdd the SDK to your Go project:\n\n```bash\ngo get github.com\u002FIngenimax\u002Fagent-sdk-go\n```\n\n#### As a CLI Tool (Headless SDK)\n\n**Option 1: Download Pre-built Binaries (Recommended)**\n\nDownload the latest release for your platform from [GitHub Releases](https:\u002F\u002Fgithub.com\u002FIngenimax\u002Fagent-sdk-go\u002Freleases) and add it to your PATH.\n\n**Option 2: Install via Go**\n\n```bash\ngo install github.com\u002FIngenimax\u002Fagent-sdk-go\u002Fcmd\u002Fagent-cli@latest\n```\n\n**Option 3: Build from Source**\n\n```bash\n# Clone the repository\ngit clone https:\u002F\u002Fgithub.com\u002FIngenimax\u002Fagent-sdk-go\ncd agent-sdk-go\n\n# Build the CLI tool\nmake build-cli\n\n# Install to system PATH (optional)\nmake install\n```\n\n**Quick CLI Start:**\n```bash\n# Initialize configuration\nagent-cli init\n\n# Option 1: Set environment variables\nexport OPENAI_API_KEY=your_api_key_here\n\n# Option 2: Use .env file (recommended)\ncp env.example .env\n# Edit .env with your API keys\n\n# Run a simple query\nagent-cli run \"What's the weather in San Francisco?\"\n\n# Start interactive chat\nagent-cli chat\n```\n\n### 
Configuration\n\nThe SDK uses environment variables for configuration. Key variables include:\n\n- `OPENAI_API_KEY`: Your OpenAI API key\n- `OPENAI_MODEL`: The model to use (e.g., gpt-4o-mini)\n- `LOG_LEVEL`: Logging level (debug, info, warn, error)\n- `REDIS_ADDRESS`: Redis server address (if using Redis for memory)\n\nSee `.env.example` for a complete list of configuration options.\n\n### Get Help with Nina (AI Assistant)\n\nNina is an AI assistant that knows the agent-sdk-go codebase inside and out. Connect to Nina via MCP (Model Context Protocol) to get help directly in your IDE.\n\n#### Cursor IDE\n\nAdd to `~\u002F.cursor\u002Fmcp.json`:\n\n```json\n{\n  \"mcpServers\": {\n    \"agent-sdk-go\": {\n      \"url\": \"https:\u002F\u002Fnina.agentgogo.app\u002Fmcp\",\n      \"transport\": \"sse\"\n    }\n  }\n}\n```\n\nRestart Cursor IDE and Nina's tools will be available in your AI assistant.\n\n#### Claude Desktop\n\nAdd to `claude_desktop_config.json`:\n\n| Platform | Config Location |\n|----------|-----------------|\n| macOS | `~\u002FLibrary\u002FApplication Support\u002FClaude\u002Fclaude_desktop_config.json` |\n| Windows | `%APPDATA%\\Claude\\claude_desktop_config.json` |\n\n```json\n{\n  \"mcpServers\": {\n    \"agent-sdk-go\": {\n      \"url\": \"https:\u002F\u002Fnina.agentgogo.app\u002Fmcp\",\n      \"transport\": \"sse\"\n    }\n  }\n}\n```\n\nRestart Claude Desktop and Nina's tools will be available via the 🔌 icon.\n\n#### Available Tools\n\n| Tool | Description |\n|------|-------------|\n| `ask_nina` | Ask questions about agent-sdk-go, Go programming, or development |\n| `search_sdk` | Search the SDK documentation and source code |\n| `get_sdk_status` | Get status of Nina's SDK knowledge base |\n\n## Usage Examples\n\n### Creating a Simple Agent\n\n```go\npackage main\n\nimport 
(\n\t\"context\"\n\t\"fmt\"\n\n\t\"github.com\u002FIngenimax\u002Fagent-sdk-go\u002Fpkg\u002Fagent\"\n\t\"github.com\u002FIngenimax\u002Fagent-sdk-go\u002Fpkg\u002Fconfig\"\n\t\"github.com\u002FIngenimax\u002Fagent-sdk-go\u002Fpkg\u002Fllm\u002Fopenai\"\n\t\"github.com\u002FIngenimax\u002Fagent-sdk-go\u002Fpkg\u002Flogging\"\n\t\"github.com\u002FIngenimax\u002Fagent-sdk-go\u002Fpkg\u002Fmemory\"\n\t\"github.com\u002FIngenimax\u002Fagent-sdk-go\u002Fpkg\u002Fmultitenancy\"\n\t\"github.com\u002FIngenimax\u002Fagent-sdk-go\u002Fpkg\u002Ftools\"\n\t\"github.com\u002FIngenimax\u002Fagent-sdk-go\u002Fpkg\u002Ftools\u002Fwebsearch\"\n)\n\nfunc main() {\n\t\u002F\u002F Create a logger\n\tlogger := logging.New()\n\n\t\u002F\u002F Get configuration\n\tcfg := config.Get()\n\n\t\u002F\u002F Create a new agent with OpenAI\n\topenaiClient := openai.NewClient(cfg.LLM.OpenAI.APIKey,\n\t\topenai.WithLogger(logger))\n\n\tagent, err := agent.NewAgent(\n\t\tagent.WithLLM(openaiClient),\n\t\tagent.WithMemory(memory.NewConversationBuffer()),\n\t\tagent.WithTools(createTools(logger).List()...),\n\t\tagent.WithSystemPrompt(\"You are a helpful AI assistant. 
When you don't know the answer or need real-time information, use the available tools to find the information.\"),\n\t\tagent.WithName(\"ResearchAssistant\"),\n\t)\n\tif err != nil {\n\t\tlogger.Error(context.Background(), \"Failed to create agent\", map[string]interface{}{\"error\": err.Error()})\n\t\treturn\n\t}\n\n\t\u002F\u002F Create a context with organization ID and conversation ID\n\tctx := context.Background()\n\tctx = multitenancy.WithOrgID(ctx, \"default-org\")\n\tctx = context.WithValue(ctx, memory.ConversationIDKey, \"conversation-123\")\n\n\t\u002F\u002F Run the agent\n\tresponse, err := agent.Run(ctx, \"What's the weather in San Francisco?\")\n\tif err != nil {\n\t\tlogger.Error(ctx, \"Failed to run agent\", map[string]interface{}{\"error\": err.Error()})\n\t\treturn\n\t}\n\n\tfmt.Println(response)\n}\n\nfunc createTools(logger logging.Logger) *tools.Registry {\n\t\u002F\u002F Get configuration\n\tcfg := config.Get()\n\n\t\u002F\u002F Create tools registry\n\ttoolRegistry := tools.NewRegistry()\n\n\t\u002F\u002F Add web search tool if API keys are available\n\tif cfg.Tools.WebSearch.GoogleAPIKey != \"\" && cfg.Tools.WebSearch.GoogleSearchEngineID != \"\" {\n\t\tsearchTool := websearch.New(\n\t\t\tcfg.Tools.WebSearch.GoogleAPIKey,\n\t\t\tcfg.Tools.WebSearch.GoogleSearchEngineID,\n\t\t)\n\t\ttoolRegistry.Register(searchTool)\n\t}\n\n\treturn toolRegistry\n}\n```\n\n### Token Usage Tracking\n\nThe SDK provides built-in token usage tracking for cost monitoring and usage analytics. 
You can access token information using the detailed generation methods:\n\n```go\npackage main\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"log\"\n\n\t\"github.com\u002FIngenimax\u002Fagent-sdk-go\u002Fpkg\u002Fllm\u002Fanthropic\"\n)\n\nfunc main() {\n\t\u002F\u002F Create LLM client\n\tclient := anthropic.NewClient(\"your-api-key\",\n\t\tanthropic.WithModel(\"claude-3-haiku-20240307\"),\n\t)\n\n\tctx := context.Background()\n\tprompt := \"Explain quantum computing in one paragraph.\"\n\n\t\u002F\u002F Traditional method (backward compatible)\n\tcontent, err := client.Generate(ctx, prompt)\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\tfmt.Printf(\"Response: %s\\n\", content)\n\n\t\u002F\u002F New detailed method with token usage\n\tresponse, err := client.GenerateDetailed(ctx, prompt)\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\n\tfmt.Printf(\"Response: %s\\n\", response.Content)\n\tfmt.Printf(\"Model: %s\\n\", response.Model)\n\n\tif response.Usage != nil {\n\t\tfmt.Printf(\"Token Usage:\\n\")\n\t\tfmt.Printf(\"  Input Tokens: %d\\n\", response.Usage.InputTokens)\n\t\tfmt.Printf(\"  Output Tokens: %d\\n\", response.Usage.OutputTokens)\n\t\tfmt.Printf(\"  Total Tokens: %d\\n\", response.Usage.TotalTokens)\n\n\t\t\u002F\u002F Calculate estimated cost (adjust based on actual pricing)\n\t\tinputCost := float64(response.Usage.InputTokens) * 0.25 \u002F 1000000\n\t\toutputCost := float64(response.Usage.OutputTokens) * 1.25 \u002F 1000000\n\t\tfmt.Printf(\"  Estimated Cost: $%.6f\\n\", inputCost + outputCost)\n\t}\n}\n```\n\n**Available Methods:**\n- `Generate()` - Traditional method returning string (unchanged)\n- `GenerateDetailed()` - New method returning `*LLMResponse` with usage info\n- `GenerateWithTools()` - Traditional method with tools (unchanged)\n- `GenerateWithToolsDetailed()` - New method with tools and usage info\n\n**Provider Support:**\n- ✅ **Anthropic**: Full token usage support\n- ✅ **OpenAI**: Full support including reasoning tokens\n- ✅ **Azure 
OpenAI**: Full support (similar to OpenAI)\n- ❌ **Ollama\u002FvLLM**: Local models don't provide usage data (Usage=nil)\n\nSee the [token usage example](examples\u002Ftoken-usage\u002F) for a complete demonstration.\n\n### Advanced YAML Configuration\n\nThe SDK now supports comprehensive YAML-based agent configuration with advanced features including behavioral settings, tool configuration, MCP integration, sub-agents, and environment variable expansion.\n\n**Example: Complete Agent with YAML Configuration**\n\n```go\npackage main\n\nimport (\n\t\"context\"\n\t\"log\"\n\t\"os\"\n\n\t\"github.com\u002FIngenimax\u002Fagent-sdk-go\u002Fpkg\u002Fagent\"\n\t\"github.com\u002FIngenimax\u002Fagent-sdk-go\u002Fpkg\u002Fllm\u002Fopenai\"\n)\n\nfunc main() {\n\t\u002F\u002F Create LLM client\n\tllm := openai.NewClient(os.Getenv(\"OPENAI_API_KEY\"))\n\n\t\u002F\u002F Load agent configurations from YAML\n\tconfigs, err := agent.LoadAgentConfigsFromFile(\"agents.yaml\")\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\n\t\u002F\u002F Create agent directly from configuration\n\tagentInstance, err := agent.NewAgentFromConfig(\"research_assistant\", configs, nil, agent.WithLLM(llm))\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\n\t\u002F\u002F Run the agent\n\tresult, err := agentInstance.Run(context.Background(), \"What are the latest developments in renewable energy?\")\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\n\tprintln(result)\n}\n```\n\n**agents.yaml** (Advanced Configuration):\n```yaml\nresearch_assistant:\n  role: \"Advanced Research Assistant\"\n  goal: \"Provide comprehensive research and analysis\"\n  backstory: \"Expert researcher with access to multiple data sources and specialized sub-agents\"\n\n  # Behavioral settings\n  max_iterations: 15\n  require_plan_approval: false\n\n  # LLM configuration\n  llm_config:\n    temperature: 0.7\n    enable_reasoning: true\n    reasoning_budget: 20000\n\n  # Built-in and custom tools\n  tools:\n    - type: \"builtin\"\n      
name: \"websearch\"\n      enabled: true\n      config:\n        api_key: \"${SEARCH_API_KEY}\"\n        engine: \"brave\"\n\n    - type: \"builtin\"\n      name: \"calculator\"\n      enabled: true\n\n  # MCP server integration\n  mcp:\n    mcpServers:\n      filesystem:\n        command: \"npx\"\n        args: [\"-y\", \"@modelcontextprotocol\u002Fserver-filesystem\", \".\"]\n\n      database:\n        command: \"python\"\n        args: [\"-m\", \"mcp_server_database\"]\n        env:\n          DATABASE_URL: \"${DATABASE_URL}\"\n\n  # Memory configuration\n  memory:\n    type: \"redis\"\n    config:\n      address: \"${REDIS_ADDRESS}\"\n      db: 0\n\n  # Sub-agents for specialized tasks\n  sub_agents:\n    data_analyzer:\n      role: \"Data Analysis Specialist\"\n      goal: \"Analyze complex datasets and provide insights\"\n      backstory: \"Expert in statistical analysis and data visualization\"\n      max_iterations: 8\n      llm_config:\n        temperature: 0.3\n\n    report_writer:\n      role: \"Technical Writer\"\n      goal: \"Create comprehensive reports and documentation\"\n      backstory: \"Skilled at converting complex data into clear reports\"\n      tools:\n        - type: \"builtin\"\n          name: \"text_processor\"\n          enabled: true\n\n  # Runtime settings\n  runtime:\n    log_level: \"info\"\n    enable_tracing: true\n    timeout: \"300s\"\n```\n\n**Key Features of Advanced YAML Configuration:**\n\n- **Environment Variable Expansion**: Use `${VAR}` syntax for sensitive data\n- **Behavioral Settings**: Configure iterations, plan approval, and runtime behavior\n- **LLM Configuration**: Fine-tune temperature, reasoning, and model-specific settings\n- **Tool Integration**: Configure built-in, custom, MCP, and agent tools declaratively\n- **Sub-Agents**: Create hierarchical agent structures with specialized capabilities\n- **Memory Backends**: Configure buffer, Redis, or vector memory systems\n- **MCP Integration**: Seamless Model 
Context Protocol server configuration\n- **Structured Responses**: Define JSON schema for consistent output formats\n\n### Creating an Agent with YAML Configuration (Basic)\n\n```go\npackage main\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"log\"\n\t\"os\"\n\t\"path\u002Ffilepath\"\n\t\"strings\"\n\n\t\"github.com\u002FIngenimax\u002Fagent-sdk-go\u002Fpkg\u002Fagent\"\n\t\"github.com\u002FIngenimax\u002Fagent-sdk-go\u002Fpkg\u002Fllm\u002Fopenai\"\n)\n\nfunc main() {\n\t\u002F\u002F Get OpenAI API key from environment\n\tapiKey := os.Getenv(\"OPENAI_API_KEY\")\n\tif apiKey == \"\" {\n\t\tlog.Fatal(\"OpenAI API key not provided. Set OPENAI_API_KEY environment variable.\")\n\t}\n\n\t\u002F\u002F Create the LLM client\n\tllm := openai.NewClient(apiKey)\n\n\t\u002F\u002F Load agent configurations\n\tagentConfigs, err := agent.LoadAgentConfigsFromFile(\"agents.yaml\")\n\tif err != nil {\n\t\tlog.Fatalf(\"Failed to load agent configurations: %v\", err)\n\t}\n\n\t\u002F\u002F Load task configurations\n\ttaskConfigs, err := agent.LoadTaskConfigsFromFile(\"tasks.yaml\")\n\tif err != nil {\n\t\tlog.Fatalf(\"Failed to load task configurations: %v\", err)\n\t}\n\n\t\u002F\u002F Create variables map for template substitution\n\tvariables := map[string]string{\n\t\t\"topic\": \"Artificial Intelligence\",\n\t}\n\n\t\u002F\u002F Create the agent for a specific task\n\ttaskName := \"research_task\"\n\tagent, err := agent.CreateAgentForTask(taskName, agentConfigs, taskConfigs, variables, agent.WithLLM(llm))\n\tif err != nil {\n\t\tlog.Fatalf(\"Failed to create agent for task: %v\", err)\n\t}\n\n\t\u002F\u002F Execute the task\n\tfmt.Printf(\"Executing task '%s' with topic '%s'...\\n\", taskName, variables[\"topic\"])\n\tresult, err := agent.ExecuteTaskFromConfig(context.Background(), taskName, taskConfigs, variables)\n\tif err != nil {\n\t\tlog.Fatalf(\"Failed to execute task: %v\", err)\n\t}\n\n\t\u002F\u002F Print the result\n\tfmt.Println(\"\\nTask 
Result:\")\n\tfmt.Println(result)\n}\n```\n\nExample YAML configurations:\n\n**agents.yaml**:\n```yaml\nresearcher:\n  role: >\n    {topic} Senior Data Researcher\n  goal: >\n    Uncover cutting-edge developments in {topic}\n  backstory: >\n    You're a seasoned researcher with a knack for uncovering the latest\n    developments in {topic}. Known for your ability to find the most relevant\n    information and present it in a clear and concise manner.\n\nreporting_analyst:\n  role: >\n    {topic} Reporting Analyst\n  goal: >\n    Create detailed reports based on {topic} data analysis and research findings\n  backstory: >\n    You're a meticulous analyst with a keen eye for detail. You're known for\n    your ability to turn complex data into clear and concise reports, making\n    it easy for others to understand and act on the information you provide.\n```\n\n**tasks.yaml**:\n```yaml\nresearch_task:\n  description: >\n    Conduct a thorough research about {topic}\n    Make sure you find any interesting and relevant information given\n    the current year is 2025.\n  expected_output: >\n    A list with 10 bullet points of the most relevant information about {topic}\n  agent: researcher\n\nreporting_task:\n  description: >\n    Review the context you got and expand each topic into a full section for a report.\n    Make sure the report is detailed and contains any and all relevant information.\n  expected_output: >\n    A fully fledged report with the main topics, each with a full section of information.\n    Formatted as markdown without '```'\n  agent: reporting_analyst\n  output_file: \"{topic}_report.md\"\n```\n\n### Structured Output with YAML Configuration\n\nThe SDK supports defining structured output (JSON responses) directly in YAML configuration files. 
This allows you to automatically apply structured output when creating agents from YAML and unmarshal responses directly into Go structs.\n\n**agents.yaml with structured output**:\n```yaml\nresearcher:\n  role: >\n    {topic} Senior Data Researcher\n  goal: >\n    Uncover cutting-edge developments in {topic}\n  backstory: >\n    You're a seasoned researcher with a knack for uncovering the latest\n    developments in {topic}. Known for your ability to find the most relevant\n    information and present it in a clear and concise manner.\n  response_format:\n    type: \"json_object\"\n    schema_name: \"ResearchResult\"\n    schema_definition:\n      type: \"object\"\n      properties:\n        findings:\n          type: \"array\"\n          items:\n            type: \"object\"\n            properties:\n              title:\n                type: \"string\"\n                description: \"Title of the finding\"\n              description:\n                type: \"string\"\n                description: \"Detailed description\"\n              source:\n                type: \"string\"\n                description: \"Source of the information\"\n        summary:\n          type: \"string\"\n          description: \"Executive summary of findings\"\n        metadata:\n          type: \"object\"\n          properties:\n            total_findings:\n              type: \"integer\"\n            research_date:\n              type: \"string\"\n```\n\n**tasks.yaml with structured output**:\n```yaml\nresearch_task:\n  description: >\n    Conduct a thorough research about {topic}\n    Make sure you find any interesting and relevant information.\n  expected_output: >\n    A structured JSON response with findings, summary, and metadata\n  agent: researcher\n  output_file: \"{topic}_report.json\"\n  response_format:\n    type: \"json_object\"\n    schema_name: \"ResearchResult\"\n    schema_definition:\n      # Same schema as above\n```\n\n**Usage in Go code**:\n```go\n\u002F\u002F 
Define your Go struct to match the YAML schema\ntype ResearchResult struct {\n    Findings []struct {\n        Title       string `json:\"title\"`\n        Description string `json:\"description\"`\n        Source      string `json:\"source\"`\n    } `json:\"findings\"`\n    Summary  string `json:\"summary\"`\n    Metadata struct {\n        TotalFindings int    `json:\"total_findings\"`\n        ResearchDate  string `json:\"research_date\"`\n    } `json:\"metadata\"`\n}\n\n\u002F\u002F Create agent and execute task\nagent, err := agent.CreateAgentForTask(\"research_task\", agentConfigs, taskConfigs, variables, agent.WithLLM(llm))\nresult, err := agent.ExecuteTaskFromConfig(context.Background(), \"research_task\", taskConfigs, variables)\n\n\u002F\u002F Unmarshal structured output\nvar structured ResearchResult\nerr = json.Unmarshal([]byte(result), &structured)\n```\n\nFor more details, see [Structured Output with YAML Configuration](docs\u002Fstructured_output_yaml.md).\n\n### Auto-Generating Agent Configurations\n\n```go\npackage main\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"os\"\n\n\t\"github.com\u002FIngenimax\u002Fagent-sdk-go\u002Fpkg\u002Fagent\"\n\t\"github.com\u002FIngenimax\u002Fagent-sdk-go\u002Fpkg\u002Fconfig\"\n\t\"github.com\u002FIngenimax\u002Fagent-sdk-go\u002Fpkg\u002Fllm\u002Fopenai\"\n)\n\nfunc main() {\n\t\u002F\u002F Load configuration\n\tcfg := config.Get()\n\n\t\u002F\u002F Create LLM client\n\topenaiClient := openai.NewClient(cfg.LLM.OpenAI.APIKey)\n\n\t\u002F\u002F Create agent with auto-configuration from system prompt\n\tagent, err := agent.NewAgentWithAutoConfig(\n\t\tcontext.Background(),\n\t\tagent.WithLLM(openaiClient),\n\t\tagent.WithSystemPrompt(\"You are a travel advisor who helps users plan trips and vacations. 
You specialize in finding hidden gems and creating personalized itineraries based on travelers' preferences.\"),\n\t\tagent.WithName(\"Travel Assistant\"),\n\t)\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\n\t\u002F\u002F Access the generated configurations\n\tagentConfig := agent.GetGeneratedAgentConfig()\n\ttaskConfigs := agent.GetGeneratedTaskConfigs()\n\n\t\u002F\u002F Print generated agent details\n\tfmt.Printf(\"Generated Agent Role: %s\\n\", agentConfig.Role)\n\tfmt.Printf(\"Generated Agent Goal: %s\\n\", agentConfig.Goal)\n\tfmt.Printf(\"Generated Agent Backstory: %s\\n\", agentConfig.Backstory)\n\n\t\u002F\u002F Print generated tasks\n\tfmt.Println(\"\\nGenerated Tasks:\")\n\tfor taskName, taskConfig := range taskConfigs {\n\t\tfmt.Printf(\"- %s: %s\\n\", taskName, taskConfig.Description)\n\t}\n\n\t\u002F\u002F Save the generated configurations to YAML files\n\tagentConfigMap := map[string]agent.AgentConfig{\n\t\t\"Travel Assistant\": *agentConfig,\n\t}\n\n\t\u002F\u002F Save agent configs to file\n\tagentYaml, _ := os.Create(\"agent_config.yaml\")\n\tdefer agentYaml.Close()\n\tagent.SaveAgentConfigsToFile(agentConfigMap, agentYaml)\n\n\t\u002F\u002F Save task configs to file\n\ttaskYaml, _ := os.Create(\"task_config.yaml\")\n\tdefer taskYaml.Close()\n\tagent.SaveTaskConfigsToFile(taskConfigs, taskYaml)\n\n\t\u002F\u002F Use the auto-configured agent\n\tresponse, err := agent.Run(context.Background(), \"I want to plan a 3-day trip to Tokyo.\")\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\tfmt.Println(response)\n}\n```\n\nThe auto-configuration feature uses LLM reasoning to derive a complete agent profile and associated tasks from a simple system prompt. 
The generated configurations include:\n\n- **Agent Profile**: Role, goal, and backstory that define the agent's persona\n- **Task Definitions**: Specialized tasks the agent can perform, with descriptions and expected outputs\n- **Reusable YAML**: Save configurations for reuse in other applications\n\nThis approach dramatically reduces the effort needed to create specialized agents while ensuring consistency and quality.\n\n### Using MCP Servers with an Agent\n\nThe SDK supports both **eager** and **lazy** MCP server initialization:\n\n- **Eager**: MCP servers are initialized when the agent is created\n- **Lazy**: MCP servers are initialized only when their tools are first called (recommended)\n\n#### Lazy MCP Integration (Recommended)\n\n```go\npackage main\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"log\"\n\t\"os\"\n\n\t\"github.com\u002FIngenimax\u002Fagent-sdk-go\u002Fpkg\u002Fagent\"\n\t\"github.com\u002FIngenimax\u002Fagent-sdk-go\u002Fpkg\u002Fllm\u002Fopenai\"\n\t\"github.com\u002FIngenimax\u002Fagent-sdk-go\u002Fpkg\u002Fmemory\"\n)\n\nfunc main() {\n\t\u002F\u002F Create OpenAI LLM client\n\tapiKey := os.Getenv(\"OPENAI_API_KEY\")\n\tllm := openai.NewClient(apiKey, openai.WithModel(\"gpt-4o-mini\"))\n\n\t\u002F\u002F Define lazy MCP configurations\n\t\u002F\u002F Note: The CLI supports dynamic tool discovery, but the SDK requires explicit tool definitions\n\tlazyMCPConfigs := []agent.LazyMCPConfig{\n\t\t{\n\t\t\tName:    \"aws-api-server\",\n\t\t\tType:    \"stdio\",\n\t\t\tCommand: \"docker\",\n\t\t\tArgs:    []string{\"run\", \"--rm\", \"-i\", \"public.ecr.aws\u002Fawslabs-mcp\u002Fawslabs\u002Faws-api-mcp-server:latest\"},\n\t\t\tEnv:     []string{\"AWS_REGION=us-west-2\"},\n\t\t\tTools: []agent.LazyMCPToolConfig{\n\t\t\t\t{\n\t\t\t\t\tName:        \"suggest_aws_commands\",\n\t\t\t\t\tDescription: \"Suggest AWS CLI commands based on natural language\",\n\t\t\t\t\tSchema:      map[string]interface{}{\"type\": \"object\", \"properties\": 
map[string]interface{}{\"query\": map[string]interface{}{\"type\": \"string\"}}},\n\t\t\t\t},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tName:    \"kubectl-ai\",\n\t\t\tType:    \"stdio\",\n\t\t\tCommand: \"kubectl-ai\",\n\t\t\tArgs:    []string{\"--mcp-server\"},\n\t\t\tTools: []agent.LazyMCPToolConfig{\n\t\t\t\t{\n\t\t\t\t\tName:        \"kubectl\",\n\t\t\t\t\tDescription: \"Execute kubectl commands against Kubernetes cluster\",\n\t\t\t\t\tSchema:      map[string]interface{}{\"type\": \"object\", \"properties\": map[string]interface{}{\"command\": map[string]interface{}{\"type\": \"string\"}}},\n\t\t\t\t},\n\t\t\t},\n\t\t},\n\t}\n\n\t\u002F\u002F Create agent with lazy MCP configurations\n\tmyAgent, err := agent.NewAgent(\n\t\tagent.WithLLM(llm),\n\t\tagent.WithLazyMCPConfigs(lazyMCPConfigs),\n\t\tagent.WithMemory(memory.NewConversationBuffer()),\n\t\tagent.WithSystemPrompt(\"You are an AI assistant with access to AWS and Kubernetes tools.\"),\n\t)\n\tif err != nil {\n\t\tlog.Fatalf(\"Failed to create agent: %v\", err)\n\t}\n\n\t\u002F\u002F Use the agent - MCP servers will be initialized on first tool use\n\tresponse, err := myAgent.Run(context.Background(), \"List my EC2 instances and show cluster pods\")\n\tif err != nil {\n\t\tlog.Fatalf(\"Failed to run agent: %v\", err)\n\t}\n\n\tfmt.Println(\"Agent Response:\", response)\n}\n```\n\n#### Eager MCP Integration\n\n```go\npackage main\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"log\"\n\t\"os\"\n\n\t\"github.com\u002FIngenimax\u002Fagent-sdk-go\u002Fpkg\u002Fagent\"\n\t\"github.com\u002FIngenimax\u002Fagent-sdk-go\u002Fpkg\u002Finterfaces\"\n\t\"github.com\u002FIngenimax\u002Fagent-sdk-go\u002Fpkg\u002Fllm\u002Fopenai\"\n\t\"github.com\u002FIngenimax\u002Fagent-sdk-go\u002Fpkg\u002Fmcp\"\n\t\"github.com\u002FIngenimax\u002Fagent-sdk-go\u002Fpkg\u002Fmemory\"\n\t\"github.com\u002FIngenimax\u002Fagent-sdk-go\u002Fpkg\u002Fmultitenancy\"\n)\n\nfunc main() {\n\tlogger := log.New(os.Stderr, \"AGENT: \", 
log.LstdFlags)\n\n\t\u002F\u002F Create OpenAI LLM client\n\tapiKey := os.Getenv(\"OPENAI_API_KEY\")\n\tif apiKey == \"\" {\n\t\tlogger.Fatal(\"Please set the OPENAI_API_KEY environment variable.\")\n\t}\n\tllm := openai.NewClient(apiKey, openai.WithModel(\"gpt-4o-mini\"))\n\n\t\u002F\u002F Create MCP servers\n\tvar mcpServers []interfaces.MCPServer\n\n\t\u002F\u002F Connect to HTTP-based MCP server\n\thttpServer, err := mcp.NewHTTPServer(context.Background(), mcp.HTTPServerConfig{\n\t\tBaseURL: \"http:\u002F\u002Flocalhost:8083\u002Fmcp\",\n\t})\n\tif err != nil {\n\t\tlogger.Printf(\"Warning: Failed to initialize HTTP MCP server: %v\", err)\n\t} else {\n\t\tmcpServers = append(mcpServers, httpServer)\n\t\tlogger.Println(\"Successfully initialized HTTP MCP server.\")\n\t}\n\n\t\u002F\u002F Connect to stdio-based MCP server\n\tstdioServer, err := mcp.NewStdioServer(context.Background(), mcp.StdioServerConfig{\n\t\tCommand: \"go\",\n\t\tArgs:    []string{\"run\", \".\u002Fserver-stdio\u002Fmain.go\"},\n\t})\n\tif err != nil {\n\t\tlogger.Printf(\"Warning: Failed to initialize STDIO MCP server: %v\", err)\n\t} else {\n\t\tmcpServers = append(mcpServers, stdioServer)\n\t\tlogger.Println(\"Successfully initialized STDIO MCP server.\")\n\t}\n\n\t\u002F\u002F Create agent with MCP server support\n\tmyAgent, err := agent.NewAgent(\n\t\tagent.WithLLM(llm),\n\t\tagent.WithMCPServers(mcpServers),\n\t\tagent.WithMemory(memory.NewConversationBuffer()),\n\t\tagent.WithSystemPrompt(\"You are an AI assistant that can use tools from MCP servers.\"),\n\t\tagent.WithName(\"MCPAgent\"),\n\t)\n\tif err != nil {\n\t\tlogger.Fatalf(\"Failed to create agent: %v\", err)\n\t}\n\n\t\u002F\u002F Create context with organization and conversation IDs\n\tctx := context.Background()\n\tctx = multitenancy.WithOrgID(ctx, \"default-org\")\n\tctx = context.WithValue(ctx, memory.ConversationIDKey, \"mcp-demo\")\n\n\t\u002F\u002F Run the agent with a query that will use MCP tools\n\tresponse, err := 
myAgent.Run(ctx, \"What time is it right now?\")\n\tif err != nil {\n\t\tlogger.Fatalf(\"Agent run failed: %v\", err)\n\t}\n\n\tfmt.Println(\"Agent response:\", response)\n}\n```\n\n## Architecture\n\nThe SDK follows a modular architecture with these key components:\n\n- **Agent**: Coordinates the LLM, memory, and tools\n- **LLM**: Interface to language model providers (OpenAI, Anthropic, Google Vertex AI)\n- **Memory**: Stores conversation history and context\n- **Tools**: Extend the agent's capabilities\n- **Vector Store**: For semantic search and retrieval\n- **Guardrails**: Ensures safe and responsible AI usage\n- **Execution Plan**: Manages planning, approval, and execution of complex tasks\n- **Configuration**: YAML-based agent and task definitions\n\n### Supported LLM Providers\n\n- **OpenAI**: GPT-4, GPT-3.5, and other OpenAI models\n- **Anthropic**: Claude 3.5 Sonnet, Claude 3 Haiku, and other Claude models\n- **DeepSeek**: DeepSeek-V3.2 chat and reasoning models\n  - Native tool\u002Ffunction calling support\n  - Cost-effective pricing with cache optimization\n  - 128K token context window\n  - Reasoning mode with up to 64K output tokens\n  - Full feature parity with OpenAI\u002FAnthropic\n- **Google Vertex AI**: Gemini 1.5 Pro, Gemini 1.5 Flash, Gemini 2.0 Flash, and Gemini Pro Vision\n  - Advanced reasoning modes (none, minimal, comprehensive)\n  - Multimodal capabilities with vision models\n  - Function calling and tool integration\n  - Flexible authentication (ADC or service account files)\n- **Ollama**: Local LLM server supporting various open-source models\n  - Run models locally without external API calls\n  - Support for Llama2, Mistral, CodeLlama, and other models\n  - Model management (list, pull, switch models)\n  - Local processing for reduced latency and privacy\n- **vLLM**: High-performance local LLM inference with PagedAttention\n  - Optimized for GPU inference with CUDA\n  - Efficient memory management for large models\n  - Support for 
Llama2, Mistral, CodeLlama, and other models\n  - Model management (list, pull, switch models)\n  - Local processing for reduced latency and privacy\n\n## CLI Tool (Headless SDK)\n\nThe Agent SDK includes a powerful command-line interface for headless usage:\n\n### CLI Features\n\n- 🤖 **Multiple LLM Providers**: OpenAI, Anthropic, DeepSeek, Google Vertex AI, Ollama, vLLM\n- 💬 **Interactive Chat Mode**: Real-time conversations with persistent memory\n- 📝 **Task Execution**: Run predefined tasks from YAML configurations\n- 🎨 **Auto-Configuration**: Generate agent configs from simple prompts\n- 🔧 **Flexible Configuration**: JSON-based configuration with environment variables\n- 🛠️ **Rich Tool Integration**: Web search, GitHub, MCP servers, and more\n- 🔌 **MCP Server Management**: Add, list, remove, and test MCP servers\n- 📄 **.env File Support**: Automatic loading of environment variables from .env files\n\n### CLI Commands\n\n```bash\n# Initialize configuration\nagent-cli init\n\n# Run agent with a single prompt\nagent-cli run \"Explain quantum computing in simple terms\"\n\n# Direct execution (no setup required)\nagent-cli --prompt \"What is 2+2?\"\n\n# Direct execution with MCP server\nagent-cli --prompt \"List my EC2 instances\" \\\n  --mcp-config .\u002Faws_api_server.json \\\n  --allowedTools \"mcp__aws__suggest_aws_commands,mcp__aws__call_aws\" \\\n  --dangerously-skip-permissions\n\n# Execute predefined tasks\nagent-cli task --agent-config=agents.yaml --task-config=tasks.yaml --task=research_task --topic=\"AI\"\n\n# Start interactive chat\nagent-cli chat\n\n# Generate configurations from system prompt\nagent-cli generate --prompt=\"You are a travel advisor\" --output=.\u002Fconfigs\n\n# List available resources\nagent-cli list providers\nagent-cli list models\nagent-cli list tools\n\n# Manage configuration\nagent-cli config show\nagent-cli config set provider anthropic\n\n# Manage MCP servers\nagent-cli mcp add --type=http 
--url=http:\u002F\u002Flocalhost:8083\u002Fmcp --name=my-server\nagent-cli mcp list\nagent-cli mcp remove --name=my-server\n\n# Import\u002FExport MCP servers from JSON config\nagent-cli mcp import --file=mcp-servers.json\nagent-cli mcp export --file=mcp-servers.json\n\n# Direct execution with MCP servers and tool filtering\nagent-cli --prompt \"List my EC2 instances\" \\\n  --mcp-config .\u002Faws_api_server.json \\\n  --allowedTools \"suggest_aws_commands,call_aws\" \\\n  --dangerously-skip-permissions\n\n# Kubernetes management with kubectl-ai\nagent-cli --prompt \"List all pods in the default namespace\" \\\n  --mcp-config .\u002Fkubectl_ai.json \\\n  --allowedTools \"kubectl\" \\\n  --dangerously-skip-permissions\n```\n\n### Advanced MCP Features\n\nThe CLI now supports **dynamic tool discovery** and **flexible tool filtering**:\n\n- **No Hardcoded Tools**: MCP servers define their own tools and schemas\n- **Dynamic Discovery**: Tools are discovered when MCP servers are first initialized\n- **Flexible Filtering**: Use `--allowedTools` to specify exactly which tools can be used\n- **JSON Configuration**: Load MCP server configurations from external JSON files\n- **Environment Variables**: Each MCP server can specify custom environment variables\n\n**Popular MCP Servers:**\n- **AWS API Server**: AWS CLI operations and suggestions\n- **kubectl-ai**: Kubernetes cluster management via natural language\n- **Filesystem Server**: File system operations and management\n- **Database Server**: SQL query execution and database operations\n\n### CLI Documentation\n\nFor complete CLI documentation, see: [CLI README](cmd\u002Fagent-cli\u002FREADME.md)\n\n## Examples\n\nCheck out the `cmd\u002Fexamples` directory for complete examples:\n\n- **Simple Agent**: Basic agent with system prompt\n- **YAML Configuration**: Defining agents and tasks in YAML\n- **Auto-Configuration**: Generating agent configurations from system prompts\n- **Agent Config Wizard**: Interactive CLI for 
creating and using agents\n- **MCP Integration**: Using Model Context Protocol servers with agents\n- **Multi-LLM Support**: Examples using OpenAI, Azure OpenAI, Anthropic, and Vertex AI\n- **Vertex AI Integration**: Comprehensive examples with Gemini models, reasoning modes, and tools\n\n### LLM Provider Examples\n\n- `examples\u002Fllm\u002Fopenai\u002F`: OpenAI integration examples\n- `examples\u002Fllm\u002Fazureopenai\u002F`: Azure OpenAI integration examples with deployment-based configuration\n- `examples\u002Fllm\u002Fanthropic\u002F`: Anthropic Claude integration examples\n- `examples\u002Fllm\u002Follama\u002F`: Ollama local LLM integration examples\n- `examples\u002Fllm\u002Fvllm\u002F`: vLLM high-performance local LLM integration examples\n\n## Agent GoGo - Deploy an agent based on this SDK quickly\n- Self-host or launch your agent with our Cloud Gateway. Visit https:\u002F\u002Fagentgogo.app to learn more\n\n## License\n\nThis project is licensed under the MIT License - see the LICENSE file for details.\n\n## Documentation\n\n📖 **Full documentation available at [docs.goagents.dev](https:\u002F\u002Fdocs.goagents.dev\u002F)**\n\nFor more detailed information, you can also refer to the following documents:\n\n- [Environment Variables](docs\u002Fenvironment_variables.md)\n- [Memory](docs\u002Fmemory.md)\n- [Tracing](docs\u002Ftracing.md)\n- [Vector Store](docs\u002Fvectorstore.md)\n- [DataStore](docs\u002Fdatastore.md) - PostgreSQL and Supabase integration for structured data\n- [LLM](docs\u002Fllm.md)\n- [Multitenancy](docs\u002Fmultitenancy.md)\n- [Task](docs\u002Ftask.md)\n- [Tools](docs\u002Ftools.md)\n- [Agent](docs\u002Fagent.md)\n- [Execution Plan](docs\u002Fexecution_plan.md)\n- [Guardrails](docs\u002Fguardrails.md)\n- [MCP](docs\u002Fmcp.md)\n","\u003Cdiv align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FIngenimax_agent-sdk-go_readme_26fb7b1b12f9.png\" alt=\"Ingenimax\" width=\"400\">\n\u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FIngenimax_agent-sdk-go_readme_2ab0a80ed303.png\" alt=\"Ingenimax\" width=\"400\">\n\t\n\u003C\u002Fdiv>\n\n# Agent Go SDK\n\n一个功能强大的 Go 框架，用于构建生产就绪的 AI 代理，可将内存管理、工具执行、多模型支持和企业级特性无缝集成到灵活且可扩展的架构中。\n\n## 文档\n\n📖 **[docs.goagents.dev](https:\u002F\u002Fdocs.goagents.dev\u002F)** — 完整的文档、指南和参考。\n\n## 社区\n\n[![Discord](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDiscord-Join%20Our%20Community-5865F2?style=for-the-badge&logo=discord&logoColor=white)](https:\u002F\u002Fdiscord.com\u002Finvite\u002FMjJbDG2nQZ)\n\n加入我们的 Discord 社区，一起协作、分享你的项目，并获得 agent-sdk-go 的社区支持！\n\n## 特性\n\n### 核心能力\n- 🧠 **多模型智能**：无缝集成 OpenAI、Anthropic 和 Google Vertex AI（Gemini 模型）。\n- 🔧 **模块化工具生态**：通过即插即用的工具扩展代理能力，支持网络搜索、数据检索和自定义操作。\n- 📝 **高级内存管理**：持久化对话跟踪，提供缓冲区和向量检索选项。\n- 🔌 **MCP 集成**：支持通过 HTTP 和 stdio 传输协议连接 Model Context Protocol (MCP) 服务器。\n- 📊 **Token 使用追踪**：内置 Token 计数功能，用于成本监控、使用分析和优化。\n\n### 企业级支持\n- 🚦 **内置安全机制**：全面的安全防护措施，确保负责任的 AI 部署。\n- 📈 **完整可观测性**：集成追踪与日志记录功能，便于监控与调试。\n- 🏢 **企业级多租户支持**：安全地为多个组织提供服务，实现资源隔离。\n\n### 开发体验\n- 🛠️ **结构化任务框架**：规划、审批并执行复杂的多步骤操作。\n- 📄 **声明式配置**：使用直观的 YAML 定义来构建复杂的代理和任务。\n- 🧙 **零成本快速启动**：根据简单的系统提示自动生成完整的代理配置。\n\n## 快速入门\n\n### 前置条件\n\n- Go 1.23+\n- Redis（可选，用于分布式内存）\n\n### 安装\n\n#### 作为 Go 库\n\n将 SDK 添加到你的 Go 项目中：\n\n```bash\ngo get github.com\u002FIngenimax\u002Fagent-sdk-go\n```\n\n#### 作为 CLI 工具（无头 SDK）\n\n**选项 1：下载预编译二进制文件（推荐）**\n\n从 [GitHub Releases](https:\u002F\u002Fgithub.com\u002FIngenimax\u002Fagent-sdk-go\u002Freleases) 下载适用于你平台的最新版本，并将其添加到你的 PATH 中。\n\n**选项 2：通过 Go 安装**\n\n```bash\ngo install github.com\u002FIngenimax\u002Fagent-sdk-go\u002Fcmd\u002Fagent-cli@latest\n```\n\n**选项 3：从源码构建**\n\n```bash\n# 克隆仓库\ngit clone https:\u002F\u002Fgithub.com\u002FIngenimax\u002Fagent-sdk-go\ncd agent-sdk-go\n\n# 构建 CLI 工具\nmake build-cli\n\n# 安装到系统 PATH（可选）\nmake install\n```\n\n**CLI 快速启动：**\n```bash\n# 初始化配置\nagent-cli init\n\n# 选项 1：设置环境变量\nexport OPENAI_API_KEY=your_api_key_here\n\n# 选项 2：使用 .env 
文件（推荐）\ncp env.example .env\n# 编辑 .env 文件以填写你的 API 密钥\n\n# 运行简单查询\nagent-cli run \"旧金山的天气如何？\"\n\n# 启动交互式聊天\nagent-cli chat\n```\n\n### 配置\n\nSDK 使用环境变量进行配置。关键变量包括：\n\n- `OPENAI_API_KEY`：你的 OpenAI API 密钥\n- `OPENAI_MODEL`：要使用的模型（例如 gpt-4o-mini）\n- `LOG_LEVEL`：日志级别（debug、info、warn、error）\n- `REDIS_ADDRESS`：Redis 服务器地址（如果使用 Redis 作为内存存储）\n\n完整配置选项请参阅 `.env.example` 文件。\n\n### 获取 Nina（AI 助手）的帮助\n\nNina 是一位对 agent-sdk-go 代码库了如指掌的 AI 助手。通过 MCP（Model Context Protocol）连接 Nina，即可在你的 IDE 中直接获得帮助。\n\n#### Cursor IDE\n\n将以下内容添加到 `~\u002F.cursor\u002Fmcp.json`：\n\n```json\n{\n  \"mcpServers\": {\n    \"agent-sdk-go\": {\n      \"url\": \"https:\u002F\u002Fnina.agentgogo.app\u002Fmcp\",\n      \"transport\": \"sse\"\n    }\n  }\n}\n```\n\n重启 Cursor IDE 后，Nina 的工具将出现在你的 AI 助手中。\n\n#### Claude Desktop\n\n将以下内容添加到 `claude_desktop_config.json`：\n\n| 平台 | 配置位置 |\n|----------|-----------------|\n| macOS | `~\u002FLibrary\u002FApplication Support\u002FClaude\u002Fclaude_desktop_config.json` |\n| Windows | `%APPDATA%\\Claude\\claude_desktop_config.json` |\n\n```json\n{\n  \"mcpServers\": {\n    \"agent-sdk-go\": {\n      \"url\": \"https:\u002F\u002Fnina.agentgogo.app\u002Fmcp\",\n      \"transport\": \"sse\"\n    }\n  }\n}\n```\n\n重启 Claude Desktop 后，Nina 的工具将通过 🔌 图标提供服务。\n\n#### 可用工具\n\n| 工具 | 描述 |\n|------|-------------|\n| `ask_nina` | 提问关于 agent-sdk-go、Go 编程或开发的问题 |\n| `search_sdk` | 搜索 SDK 文档和源代码 |\n| `get_sdk_status` | 获取 Nina 的 SDK 知识库状态 |\n\n## 使用示例\n\n### 创建一个简单的智能体\n\n```go\npackage main\n\nimport 
(\n\t\"context\"\n\t\"fmt\"\n\n\t\"github.com\u002FIngenimax\u002Fagent-sdk-go\u002Fpkg\u002Fagent\"\n\t\"github.com\u002FIngenimax\u002Fagent-sdk-go\u002Fpkg\u002Fconfig\"\n\t\"github.com\u002FIngenimax\u002Fagent-sdk-go\u002Fpkg\u002Fllm\u002Fopenai\"\n\t\"github.com\u002FIngenimax\u002Fagent-sdk-go\u002Fpkg\u002Flogging\"\n\t\"github.com\u002FIngenimax\u002Fagent-sdk-go\u002Fpkg\u002Fmemory\"\n\t\"github.com\u002FIngenimax\u002Fagent-sdk-go\u002Fpkg\u002Fmultitenancy\"\n\t\"github.com\u002FIngenimax\u002Fagent-sdk-go\u002Fpkg\u002Ftools\"\n\t\"github.com\u002FIngenimax\u002Fagent-sdk-go\u002Fpkg\u002Ftools\u002Fwebsearch\"\n)\n\nfunc main() {\n\t\u002F\u002F 创建日志记录器\n\tlogger := logging.New()\n\n\t\u002F\u002F 获取配置\n\tcfg := config.Get()\n\n\t\u002F\u002F 使用 OpenAI 创建一个新的智能体\n\topenaiClient := openai.NewClient(cfg.LLM.OpenAI.APIKey,\n\t\topenai.WithLogger(logger))\n\n\tagent, err := agent.NewAgent(\n\t\tagent.WithLLM(openaiClient),\n\t\tagent.WithMemory(memory.NewConversationBuffer()),\n\t\tagent.WithTools(createTools(logger).List()...),\n\t\tagent.WithSystemPrompt(\"你是一位有用的 AI 助手。当你不知道答案或需要实时信息时，可以使用可用工具来查找信息。\"),\n\t\tagent.WithName(\"ResearchAssistant\"),\n\t)\n\tif err != nil {\n\t\tlogger.Error(context.Background(), \"创建智能体失败\", map[string]interface{}{\"error\": err.Error()})\n\t\treturn\n\t}\n\n\t\u002F\u002F 创建包含组织 ID 和对话 ID 的上下文\n\tctx := context.Background()\n\tctx = multitenancy.WithOrgID(ctx, \"default-org\")\n\tctx = context.WithValue(ctx, memory.ConversationIDKey, \"conversation-123\")\n\n\t\u002F\u002F 运行智能体\n\tresponse, err := agent.Run(ctx, \"旧金山的天气如何？\")\n\tif err != nil {\n\t\tlogger.Error(ctx, \"运行智能体失败\", map[string]interface{}{\"error\": err.Error()})\n\t\treturn\n\t}\n\n\tfmt.Println(response)\n}\n\nfunc createTools(logger logging.Logger) *tools.Registry {\n\t\u002F\u002F 获取配置\n\tcfg := config.Get()\n\n\t\u002F\u002F 创建工具注册表\n\ttoolRegistry := tools.NewRegistry()\n\n\t\u002F\u002F 如果有 API 密钥，则添加网络搜索工具\n\tif cfg.Tools.WebSearch.GoogleAPIKey 
!= \"\" && cfg.Tools.WebSearch.GoogleSearchEngineID != \"\" {\n\t\tsearchTool := websearch.New(\n\t\t\tcfg.Tools.WebSearch.GoogleAPIKey,\n\t\t\tcfg.Tools.WebSearch.GoogleSearchEngineID,\n\t\t)\n\t\ttoolRegistry.Register(searchTool)\n\t}\n\n\treturn toolRegistry\n}\n```\n\n### 令牌使用跟踪\n\nSDK 提供了内置的令牌使用跟踪功能，用于成本监控和使用情况分析。你可以通过详细生成方法访问令牌信息：\n\n```go\npackage main\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"log\"\n\n\t\"github.com\u002FIngenimax\u002Fagent-sdk-go\u002Fpkg\u002Fllm\u002Fanthropic\"\n)\n\nfunc main() {\n\t\u002F\u002F 创建 LLM 客户端\n\tclient := anthropic.NewClient(\"your-api-key\",\n\t\tanthropic.WithModel(\"claude-3-haiku-20240307\"),\n\t)\n\n\tctx := context.Background()\n\tprompt := \"用一段话解释量子计算。\"\n\n\t\u002F\u002F 传统方法（向后兼容）\n\tcontent, err := client.Generate(ctx, prompt)\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\tfmt.Printf(\"响应：%s\\n\", content)\n\n\t\u002F\u002F 新的详细方法，包含令牌使用情况\n\tresponse, err := client.GenerateDetailed(ctx, prompt)\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\n\tfmt.Printf(\"响应：%s\\n\", response.Content)\n\tfmt.Printf(\"模型：%s\\n\", response.Model)\n\n\tif response.Usage != nil {\n\t\tfmt.Printf(\"令牌使用情况：\\n\")\n\t\tfmt.Printf(\"  输入令牌： %d\\n\", response.Usage.InputTokens)\n\t\tfmt.Printf(\"  输出令牌： %d\\n\", response.Usage.OutputTokens)\n\t\tfmt.Printf(\"  总令牌： %d\\n\", response.Usage.TotalTokens)\n\n\t\t\u002F\u002F 计算预估成本（根据实际定价调整）\n\t\tinputCost := float64(response.Usage.InputTokens) * 0.25 \u002F 1000000\n\t\toutputCost := float64(response.Usage.OutputTokens) * 1.25 \u002F 1000000\n\t\tfmt.Printf(\"  预估成本：$%.6f\\n\", inputCost + outputCost)\n\t}\n}\n```\n\n**可用方法：**\n- `Generate()` - 传统方法，返回字符串（未更改）\n- `GenerateDetailed()` - 新方法，返回包含使用信息的 `*LLMResponse`\n- `GenerateWithTools()` - 传统方法，支持工具调用（未更改）\n- `GenerateWithToolsDetailed()` - 新方法，支持工具调用并提供使用信息\n\n**提供商支持：**\n- ✅ **Anthropic**：完全支持令牌使用跟踪\n- ✅ **OpenAI**：完全支持，包括推理令牌\n- ✅ **Azure OpenAI**：完全支持（与 OpenAI 类似）\n- ❌ **Ollama\u002FvLLM**：本地模型不提供使用数据（Usage=nil）\n\n完整的演示请参阅 
[令牌使用示例](examples\u002Ftoken-usage\u002F)。\n\n### 高级 YAML 配置\n\nSDK 现在支持基于 YAML 的全面代理配置，具备行为设置、工具配置、MCP 集成、子代理以及环境变量扩展等高级功能。\n\n**示例：使用 YAML 配置的完整代理**\n\n```go\npackage main\n\nimport (\n\t\"context\"\n\t\"log\"\n\t\"os\"\n\n\t\"github.com\u002FIngenimax\u002Fagent-sdk-go\u002Fpkg\u002Fagent\"\n\t\"github.com\u002FIngenimax\u002Fagent-sdk-go\u002Fpkg\u002Fllm\u002Fopenai\"\n)\n\nfunc main() {\n\t\u002F\u002F 创建 LLM 客户端\n\tllm := openai.NewClient(os.Getenv(\"OPENAI_API_KEY\"))\n\n\t\u002F\u002F 从 YAML 文件加载代理配置\n\tconfigs, err := agent.LoadAgentConfigsFromFile(\"agents.yaml\")\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\n\t\u002F\u002F 直接从配置创建代理\n\tagentInstance, err := agent.NewAgentFromConfig(\"research_assistant\", configs, nil, agent.WithLLM(llm))\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\n\t\u002F\u002F 运行代理\n\tresult, err := agentInstance.Run(context.Background(), \"可再生能源领域的最新进展是什么？\")\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\n\tprintln(result)\n}\n```\n\n**agents.yaml**（高级配置）：\n```yaml\nresearch_assistant:\n  role: \"高级研究助理\"\n  goal: \"提供全面的研究与分析\"\n  backstory: \"拥有多种数据源和专业子代理的专家研究员\"\n\n  # 行为设置\n  max_iterations: 15\n  require_plan_approval: false\n\n  # LLM 配置\n  llm_config:\n    temperature: 0.7\n    enable_reasoning: true\n    reasoning_budget: 20000\n\n  # 内置及自定义工具\n  tools:\n    - type: \"builtin\"\n      name: \"websearch\"\n      enabled: true\n      config:\n        api_key: \"${SEARCH_API_KEY}\"\n        engine: \"brave\"\n\n    - type: \"builtin\"\n      name: \"calculator\"\n      enabled: true\n\n  # MCP 服务器集成\n  mcp:\n    mcpServers:\n      filesystem:\n        command: \"npx\"\n        args: [\"-y\", \"@modelcontextprotocol\u002Fserver-filesystem\", \".\"]\n\n      database:\n        command: \"python\"\n        args: [\"-m\", \"mcp_server_database\"]\n        env:\n          DATABASE_URL: \"${DATABASE_URL}\"\n\n  # 内存配置\n  memory:\n    type: \"redis\"\n    config:\n      address: \"${REDIS_ADDRESS}\"\n      db: 0\n\n  # 
用于专项任务的子代理\n  sub_agents:\n    data_analyzer:\n      role: \"数据分析专家\"\n      goal: \"分析复杂数据集并提供洞察\"\n      backstory: \"擅长统计分析和数据可视化\"\n      max_iterations: 8\n      llm_config:\n        temperature: 0.3\n\n    report_writer:\n      role: \"技术文档撰写员\"\n      goal: \"撰写综合性报告和文档\"\n      backstory: \"善于将复杂数据转化为清晰的报告\"\n      tools:\n        - type: \"builtin\"\n          name: \"text_processor\"\n          enabled: true\n\n  # 运行时设置\n  runtime:\n    log_level: \"info\"\n    enable_tracing: true\n    timeout: \"300s\"\n```\n\n**高级 YAML 配置的关键特性：**\n\n- **环境变量扩展**：使用 `${VAR}` 语法处理敏感数据\n- **行为设置**：配置迭代次数、计划审批要求及运行时行为\n- **LLM 配置**：微调温度、推理能力及模型特定参数\n- **工具集成**：以声明式方式配置内置工具、自定义工具、MCP 工具及代理工具\n- **子代理**：构建具有专业化能力的层级化代理结构\n- **内存后端**：配置缓冲区、Redis 或向量存储等内存系统\n- **MCP 集成**：无缝配置 Model Context Protocol 服务器\n- **结构化响应**：定义 JSON 模式以确保输出格式一致\n\n### 使用 YAML 配置创建代理（基础）\n\n```go\npackage main\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"log\"\n\t\"os\"\n\t\"path\u002Ffilepath\"\n\t\"strings\"\n\n\t\"github.com\u002FIngenimax\u002Fagent-sdk-go\u002Fpkg\u002Fagent\"\n\t\"github.com\u002FIngenimax\u002Fagent-sdk-go\u002Fpkg\u002Fllm\u002Fopenai\"\n)\n\nfunc main() {\n\t\u002F\u002F 从环境变量获取 OpenAI API 密钥\n\tapiKey := os.Getenv(\"OPENAI_API_KEY\")\n\tif apiKey == \"\" {\n\t\tlog.Fatal(\"未提供 OpenAI API 密钥。请设置 OPENAI_API_KEY 环境变量。\")\n\t}\n\n\t\u002F\u002F 创建 LLM 客户端\n\tllm := openai.NewClient(apiKey)\n\n\t\u002F\u002F 加载代理配置\n\tagentConfigs, err := agent.LoadAgentConfigsFromFile(\"agents.yaml\")\n\tif err != nil {\n\t\tlog.Fatalf(\"加载代理配置失败：%v\", err)\n\t}\n\n\t\u002F\u002F 加载任务配置\n\ttaskConfigs, err := agent.LoadTaskConfigsFromFile(\"tasks.yaml\")\n\tif err != nil {\n\t\tlog.Fatalf(\"加载任务配置失败：%v\", err)\n\t}\n\n\t\u002F\u002F 创建用于模板替换的变量映射\n\tvariables := map[string]string{\n\t\t\"topic\": \"人工智能\",\n\t}\n\n\t\u002F\u002F 为特定任务创建代理\n\ttaskName := \"research_task\"\n\tagent, err := agent.CreateAgentForTask(taskName, agentConfigs, taskConfigs, variables, agent.WithLLM(llm))\n\tif err != nil 
{\n\t\tlog.Fatalf(\"为任务创建代理失败：%v\", err)\n\t}\n\n\t\u002F\u002F 执行任务\n\tfmt.Printf(\"正在执行主题为 '%s' 的任务...\\n\", variables[\"topic\"])\n\tresult, err := agent.ExecuteTaskFromConfig(context.Background(), taskName, taskConfigs, variables)\n\tif err != nil {\n\t\tlog.Fatalf(\"执行任务失败：%v\", err)\n\t}\n\n\t\u002F\u002F 打印结果\n\tfmt.Println(\"\\n任务结果：\")\n\tfmt.Println(result)\n}\n```\n\nYAML 配置示例：\n\n**agents.yaml**：\n```yaml\nresearcher:\n  role: >\n    {topic} 高级数据研究员\n  goal: >\n    探索 {topic} 领域的前沿发展\n  backstory: >\n    您是一位经验丰富的研究员，擅长挖掘 {topic} 领域的最新动态。以能够找到最相关的信息并以清晰简洁的方式呈现而闻名。\n  \nreporting_analyst:\n  role: >\n    {topic} 报告分析师\n  goal: >\n    基于 {topic} 数据分析和研究成果，撰写详细报告\n  backstory: >\n    您是一位细致入微的分析师，对细节有着敏锐的洞察力。您以能够将复杂数据转化为清晰简洁的报告而著称，使他人易于理解并采取行动。\n```\n\n**tasks.yaml**：\n```yaml\nresearch_task:\n  description: >\n    对 {topic} 进行全面研究\n    确保在 2025 年这一背景下，找到所有有趣且相关的信息。\n  expected_output: >\n    关于 {topic} 的 10 条要点列表\n  agent: researcher\n\nreporting_task:\n  description: >\n    审阅您所获得的背景信息，并将每个主题扩展为报告中的完整章节。\n    确保报告内容详尽，涵盖所有相关信息。\n  expected_output: >\n    一份完整的报告，包含主要主题及其详细信息。\n    格式为 Markdown，不含 '```'\n  agent: reporting_analyst\n  output_file: \"{topic}_report.md\"\n```\n\n### 使用 YAML 配置的结构化输出\n\nSDK 支持直接在 YAML 配置文件中定义结构化输出（JSON 响应）。这使得您可以在从 YAML 创建代理时自动应用结构化输出，并将响应直接反序列化为 Go 结构体。\n\n**包含结构化输出的 agents.yaml**:\n```yaml\nresearcher:\n  role: >\n    {topic} 高级数据研究员\n  goal: >\n    探索 {topic} 领域的前沿发展\n  backstory: >\n    您是一位经验丰富的研究员，擅长挖掘 {topic} 领域的最新进展。以能够找到最相关的信息并以清晰简洁的方式呈现而闻名。\n  response_format:\n    type: \"json_object\"\n    schema_name: \"ResearchResult\"\n    schema_definition:\n      type: \"object\"\n      properties:\n        findings:\n          type: \"array\"\n          items:\n            type: \"object\"\n            properties:\n              title:\n                type: \"string\"\n                description: \"发现的标题\"\n              description:\n                type: \"string\"\n                description: \"详细描述\"\n              
source:\n                type: \"string\"\n                description: \"信息来源\"\n        summary:\n          type: \"string\"\n          description: \"发现的执行摘要\"\n        metadata:\n          type: \"object\"\n          properties:\n            total_findings:\n              type: \"integer\"\n            research_date:\n              type: \"string\"\n```\n\n**包含结构化输出的 tasks.yaml**:\n```yaml\nresearch_task:\n  description: >\n    对 {topic} 进行全面研究\n    确保找到任何有趣且相关的信息。\n  expected_output: >\n    包含发现、摘要和元数据的结构化 JSON 响应\n  agent: researcher\n  output_file: \"{topic}_report.json\"\n  response_format:\n    type: \"json_object\"\n    schema_name: \"ResearchResult\"\n    schema_definition:\n      # 与上面相同的模式\n```\n\n**在 Go 代码中的用法**:\n```go\n\u002F\u002F 定义与 YAML 模式匹配的 Go 结构体\ntype ResearchResult struct {\n    Findings []struct {\n        Title       string `json:\"title\"`\n        Description string `json:\"description\"`\n        Source      string `json:\"source\"`\n    } `json:\"findings\"`\n    Summary  string `json:\"summary\"`\n    Metadata struct {\n        TotalFindings int    `json:\"total_findings\"`\n        ResearchDate  string `json:\"research_date\"`\n    } `json:\"metadata\"`\n}\n\n\u002F\u002F 创建代理并执行任务\nagent, err := agent.CreateAgentForTask(\"research_task\", agentConfigs, taskConfigs, variables, agent.WithLLM(llm))\nresult, err := agent.ExecuteTaskFromConfig(context.Background(), \"research_task\", taskConfigs, variables)\n\n\u002F\u002F 反序列化结构化输出\nvar structured ResearchResult\nerr = json.Unmarshal([]byte(result), &structured)\n```\n\n更多详情，请参阅 [使用 YAML 配置的结构化输出](docs\u002Fstructured_output_yaml.md)。\n\n### 自动生成代理配置\n\n```go\npackage main\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"os\"\n\n\t\"github.com\u002FIngenimax\u002Fagent-sdk-go\u002Fpkg\u002Fagent\"\n\t\"github.com\u002FIngenimax\u002Fagent-sdk-go\u002Fpkg\u002Fconfig\"\n\t\"github.com\u002FIngenimax\u002Fagent-sdk-go\u002Fpkg\u002Fllm\u002Fopenai\"\n)\n\nfunc main() {\n\t\u002F\u002F 
加载配置\n\tcfg := config.Get()\n\n\t\u002F\u002F 创建 LLM 客户端\n\topenaiClient := openai.NewClient(cfg.LLM.OpenAI.APIKey)\n\n\t\u002F\u002F 根据系统提示自动生成配置创建代理\n\tagent, err := agent.NewAgentWithAutoConfig(\n\t\tcontext.Background(),\n\t\tagent.WithLLM(openaiClient),\n\t\tagent.WithSystemPrompt(\"您是一位旅行顾问，帮助用户规划旅行和度假。您擅长寻找隐藏的宝藏，并根据旅客的偏好制定个性化行程。\"),\n\t\tagent.WithName(\"旅行助手\"),\n\t)\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\n\t\u002F\u002F 获取生成的配置\n\tagentConfig := agent.GetGeneratedAgentConfig()\n\ttaskConfigs := agent.GetGeneratedTaskConfigs()\n\n\t\u002F\u002F 打印生成的代理详情\n\tfmt.Printf(\"生成的代理角色： %s\\n\", agentConfig.Role)\n\tfmt.Printf(\"生成的代理目标： %s\\n\", agentConfig.Goal)\n\tfmt.Printf(\"生成的代理背景故事： %s\\n\", agentConfig.Backstory)\n\n\t\u002F\u002F 打印生成的任务\n\tfmt.Println(\"\\n生成的任务：\")\n\tfor taskName, taskConfig := range taskConfigs {\n\t\tfmt.Printf(\"- %s: %s\\n\", taskName, taskConfig.Description)\n\t}\n\n\t\u002F\u002F 将生成的配置保存为 YAML 文件\n\tagentConfigMap := map[string]agent.AgentConfig{\n\t\t\"旅行助手\": *agentConfig,\n\t}\n\n\t\u002F\u002F 保存代理配置到文件\n\tagentYaml, _ := os.Create(\"agent_config.yaml\")\n\tdefer agentYaml.Close()\n\tagent.SaveAgentConfigsToFile(agentConfigMap, agentYaml)\n\n\t\u002F\u002F 保存任务配置到文件\n\ttaskYaml, _ := os.Create(\"task_config.yaml\")\n\tdefer taskYaml.Close()\n\tagent.SaveTaskConfigsToFile(taskConfigs, taskYaml)\n\n\t\u002F\u002F 使用自动生成的代理\n\tresponse, err := agent.Run(context.Background(), \"我想计划一次为期三天的东京之旅。\")\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\tfmt.Println(response)\n}\n```\n\n自动生成配置功能利用 LLM 推理，从简单的系统提示中推导出完整的代理档案及其相关任务。生成的配置包括：\n\n- **代理档案**：定义代理人格的角色、目标和背景故事\n- **任务定义**：代理可以执行的专业化任务，附有描述和预期输出\n- **可重用 YAML**：保存配置以便在其他应用程序中重复使用\n\n这种方法大大减少了创建专业化代理所需的工作量，同时确保了配置的一致性和质量。\n\n### 使用带有代理的 MCP 服务器\n\nSDK 支持 **立即初始化** 和 **延迟初始化** 的 MCP 服务器：\n\n- **立即初始化**：MCP 服务器在代理创建时即被初始化\n- **延迟初始化**：MCP 服务器仅在其工具首次被调用时才被初始化（推荐）\n\n#### 延迟初始化的 MCP 集成（推荐）\n\n```go\npackage main\n\nimport 
(\n\t\"context\"\n\t\"fmt\"\n\t\"log\"\n\t\"os\"\n\n\t\"github.com\u002FIngenimax\u002Fagent-sdk-go\u002Fpkg\u002Fagent\"\n\t\"github.com\u002FIngenimax\u002Fagent-sdk-go\u002Fpkg\u002Fllm\u002Fopenai\"\n\t\"github.com\u002FIngenimax\u002Fagent-sdk-go\u002Fpkg\u002Fmemory\"\n)\n\nfunc main() {\n\t\u002F\u002F 创建 OpenAI LLM 客户端\n\tapiKey := os.Getenv(\"OPENAI_API_KEY\")\n\tllm := openai.NewClient(apiKey, openai.WithModel(\"gpt-4o-mini\"))\n\n\t\u002F\u002F 定义延迟初始化的 MCP 配置\n\t\u002F\u002F 注意：CLI 支持动态工具发现，但 SDK 需要显式定义工具\n\tlazyMCPConfigs := []agent.LazyMCPConfig{\n\t\t{\n\t\t\tName:    \"aws-api-server\",\n\t\t\tType:    \"stdio\",\n\t\t\tCommand: \"docker\",\n\t\t\tArgs:    []string{\"run\", \"--rm\", \"-i\", \"public.ecr.aws\u002Fawslabs-mcp\u002Fawslabs\u002Faws-api-mcp-server:latest\"},\n\t\t\tEnv:     []string{\"AWS_REGION=us-west-2\"},\n\t\t\tTools: []agent.LazyMCPToolConfig{\n\t\t\t\t{\n\t\t\t\t\tName:        \"suggest_aws_commands\",\n\t\t\t\t\tDescription: \"根据自然语言建议 AWS CLI 命令\",\n\t\t\t\t\tSchema:      map[string]interface{}{\"type\": \"object\", \"properties\": map[string]interface{}{\"query\": map[string]interface{}{\"type\": \"string\"}}},\n\t\t\t\t},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tName:    \"kubectl-ai\",\n\t\t\tType:    \"stdio\",\n\t\t\tCommand: \"kubectl-ai\",\n\t\t\tArgs:    []string{\"--mcp-server\"},\n\t\t\tTools: []agent.LazyMCPToolConfig{\n\t\t\t\t{\n\t\t\t\t\tName:        \"kubectl\",\n\t\t\t\t\tDescription: \"对 Kubernetes 集群执行 kubectl 命令\",\n\t\t\t\t\tSchema:      map[string]interface{}{\"type\": \"object\", \"properties\": map[string]interface{}{\"command\": map[string]interface{}{\"type\": \"string\"}}},\n\t\t\t\t},\n\t\t\t},\n\t\t},\n\t}\n\n\t\u002F\u002F 创建带有延迟初始化 MCP 配置的代理\n\tmyAgent, err := agent.NewAgent(\n\t\tagent.WithLLM(llm),\n\t\tagent.WithLazyMCPConfigs(lazyMCPConfigs),\n\t\tagent.WithMemory(memory.NewConversationBuffer()),\n\t\tagent.WithSystemPrompt(\"你是一位可以访问 AWS 和 Kubernetes 工具的 AI 助手。\"),\n\t)\n\tif err != nil 
{\n\t\tlog.Fatalf(\"创建代理失败：%v\", err)\n\t}\n\n\t\u002F\u002F 使用代理 - MCP 服务器将在首次使用工具时初始化\n\tresponse, err := myAgent.Run(context.Background(), \"列出我的 EC2 实例并显示集群中的 Pod\")\n\tif err != nil {\n\t\tlog.Fatalf(\"运行代理失败：%v\", err)\n\t}\n\n\tfmt.Println(\"代理响应：\", response)\n}\n```\n\n#### 立即初始化的 MCP 集成\n\n```go\npackage main\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"log\"\n\t\"os\"\n\n\t\"github.com\u002FIngenimax\u002Fagent-sdk-go\u002Fpkg\u002Fagent\"\n\t\"github.com\u002FIngenimax\u002Fagent-sdk-go\u002Fpkg\u002Finterfaces\"\n\t\"github.com\u002FIngenimax\u002Fagent-sdk-go\u002Fpkg\u002Fllm\u002Fopenai\"\n\t\"github.com\u002FIngenimax\u002Fagent-sdk-go\u002Fpkg\u002Fmcp\"\n\t\"github.com\u002FIngenimax\u002Fagent-sdk-go\u002Fpkg\u002Fmemory\"\n\t\"github.com\u002FIngenimax\u002Fagent-sdk-go\u002Fpkg\u002Fmultitenancy\"\n)\n\nfunc main() {\n\tlogger := log.New(os.Stderr, \"AGENT: \", log.LstdFlags)\n\n\t\u002F\u002F 创建 OpenAI LLM 客户端\n\tapiKey := os.Getenv(\"OPENAI_API_KEY\")\n\tif apiKey == \"\" {\n\t\tlogger.Fatal(\"请设置 OPENAI_API_KEY 环境变量。\")\n\t}\n\tllm := openai.NewClient(apiKey, openai.WithModel(\"gpt-4o-mini\"))\n\n\t\u002F\u002F 创建 MCP 服务器\n\tvar mcpServers []interfaces.MCPServer\n\n\t\u002F\u002F 连接到基于 HTTP 的 MCP 服务器\n\thttpServer, err := mcp.NewHTTPServer(context.Background(), mcp.HTTPServerConfig{\n\t\tBaseURL: \"http:\u002F\u002Flocalhost:8083\u002Fmcp\",\n\t})\n\tif err != nil {\n\t\tlogger.Printf(\"警告：初始化 HTTP MCP 服务器失败：%v\", err)\n\t} else {\n\t\tmcpServers = append(mcpServers, httpServer)\n\t\tlogger.Println(\"成功初始化 HTTP MCP 服务器。\")\n\t}\n\n\t\u002F\u002F 连接到基于 stdio 的 MCP 服务器\n\tstdioServer, err := mcp.NewStdioServer(context.Background(), mcp.StdioServerConfig{\n\t\tCommand: \"go\",\n\t\tArgs:    []string{\"run\", \".\u002Fserver-stdio\u002Fmain.go\"},\n\t})\n\tif err != nil {\n\t\tlogger.Printf(\"警告：初始化 STDIO MCP 服务器失败：%v\", err)\n\t} else {\n\t\tmcpServers = append(mcpServers, stdioServer)\n\t\tlogger.Println(\"成功初始化 STDIO MCP 
服务器。\")\n\t}\n\n\t\u002F\u002F 创建支持 MCP 服务器的代理\n\tmyAgent, err := agent.NewAgent(\n\t\tagent.WithLLM(llm),\n\t\tagent.WithMCPServers(mcpServers),\n\t\tagent.WithMemory(memory.NewConversationBuffer()),\n\t\tagent.WithSystemPrompt(\"你是一位可以使用 MCP 服务器工具的 AI 助手。\"),\n\t\tagent.WithName(\"MCPAgent\"),\n\t)\n\tif err != nil {\n\t\tlogger.Fatalf(\"创建代理失败：%v\", err)\n\t}\n\n\t\u002F\u002F 创建包含组织和对话 ID 的上下文\n\tctx := context.Background()\n\tctx = multitenancy.WithOrgID(ctx, \"default-org\")\n\tctx = context.WithValue(ctx, memory.ConversationIDKey, \"mcp-demo\")\n\n\t\u002F\u002F 运行将使用 MCP 工具的代理\n\tresponse, err := myAgent.Run(ctx, \"现在几点了？\")\n\tif err != nil {\n\t\tlogger.Fatalf(\"代理运行失败：%v\", err)\n\t}\n\n\tfmt.Println(\"代理响应：\", response)\n}\n```\n\n## 架构\n\n该 SDK 采用模块化架构，包含以下关键组件：\n\n- **代理**：协调 LLM、记忆和工具\n- **LLM**：与语言模型提供商（OpenAI、Anthropic、Google Vertex AI）的接口\n- **记忆**：存储对话历史和上下文\n- **工具**：扩展代理的能力\n- **向量存储**：用于语义搜索和检索\n- **护栏**：确保安全且负责任的 AI 使用\n- **执行计划**：管理复杂任务的规划、审批和执行\n- **配置**：基于 YAML 的代理和任务定义\n\n### 支持的 LLM 提供商\n\n- **OpenAI**：GPT-4、GPT-3.5 及其他 OpenAI 模型\n- **Anthropic**：Claude 3.5 Sonnet、Claude 3 Haiku 及其他 Claude 模型\n- **DeepSeek**：DeepSeek-V3.2 对话和推理模型\n  - 原生工具\u002F函数调用支持\n  - 具有缓存优化的成本效益定价\n  - 128K 令牌上下文窗口\n  - 推理模式下可生成高达 64K 输出令牌\n  - 功能与 OpenAI\u002FAnthropic 完全对等\n- **Google Vertex AI**：Gemini 1.5 Pro、Gemini 1.5 Flash、Gemini 2.0 Flash 及 Gemini Pro Vision\n  - 先进的推理模式（无、最小、全面）\n  - 具备视觉模型的多模态能力\n  - 函数调用和工具集成\n  - 灵活的身份验证方式（ADC 或服务账户文件）\n- **Ollama**：本地 LLM 服务器，支持多种开源模型\n  - 无需外部 API 调用即可在本地运行模型\n  - 支持 Llama2、Mistral、CodeLlama 等模型\n  - 模型管理功能（列出、拉取、切换模型）\n  - 本地处理以降低延迟并保护隐私\n- **vLLM**：高性能本地 LLM 推理，采用 PagedAttention 技术\n  - 针对 CUDA 加速的 GPU 推理进行了优化\n  - 高效的内存管理，适用于大型模型\n  - 支持 Llama2、Mistral、CodeLlama 等模型\n  - 模型管理功能（列出、拉取、切换模型）\n  - 本地处理以降低延迟并保护隐私\n\n## CLI 工具（无头 SDK）\n\nAgent SDK 包含一个功能强大的命令行界面，可用于无头使用：\n\n### CLI 功能\n\n- 🤖 **多模型提供商支持**：OpenAI、Anthropic、DeepSeek、Google Vertex AI、Ollama、vLLM\n- 💬 **交互式聊天模式**：实时对话，具备持久化记忆\n- 📝 **任务执行**：从 YAML 
## CLI Tool (Headless SDK)

The Agent SDK includes a powerful command-line interface for headless use:

### CLI Features

- 🤖 **Multi-provider support**: OpenAI, Anthropic, DeepSeek, Google Vertex AI, Ollama, vLLM
- 💬 **Interactive chat mode**: real-time conversations with persistent memory
- 📝 **Task execution**: run predefined tasks from YAML configuration
- 🎨 **Auto-configuration**: generate agent configurations from a simple prompt
- 🔧 **Flexible configuration**: JSON-based configuration with environment variable support
- 🛠️ **Rich tool integration**: web search, GitHub, MCP servers, and more
- 🔌 **MCP server management**: add, list, remove, and test MCP servers
- 📄 **.env file support**: automatically loads environment variables from .env files

### CLI Commands

```bash
# Initialize configuration
agent-cli init

# Run the agent with a single prompt
agent-cli run "Explain quantum computing in plain language"

# Direct execution (no setup required)
agent-cli --prompt "What is 2+2?"

# Direct execution with MCP servers
agent-cli --prompt "List my EC2 instances" \
  --mcp-config ./aws_api_server.json \
  --allowedTools "mcp__aws__suggest_aws_commands,mcp__aws__call_aws" \
  --dangerously-skip-permissions

# Execute a predefined task
agent-cli task --agent-config=agents.yaml --task-config=tasks.yaml --task=research_task --topic="AI"

# Start an interactive chat
agent-cli chat

# Generate a configuration from a system prompt
agent-cli generate --prompt="You are a travel advisor" --output=./configs

# List available resources
agent-cli list providers
agent-cli list models
agent-cli list tools

# Manage configuration
agent-cli config show
agent-cli config set provider anthropic

# Manage MCP servers
agent-cli mcp add --type=http --url=http://localhost:8083/mcp --name=my-server
agent-cli mcp list
agent-cli mcp remove --name=my-server

# Import/export MCP servers from a JSON config
agent-cli mcp import --file=mcp-servers.json
agent-cli mcp export --file=mcp-servers.json

# Direct execution with MCP servers and tool filtering
agent-cli --prompt "List my EC2 instances" \
  --mcp-config ./aws_api_server.json \
  --allowedTools "suggest_aws_commands,call_aws" \
  --dangerously-skip-permissions

# Kubernetes management with kubectl-ai
agent-cli --prompt "List all pods in the default namespace" \
  --mcp-config ./kubectl_ai.json \
  --allowedTools "kubectl" \
  --dangerously-skip-permissions
```

### Advanced MCP Features

The CLI now supports **dynamic tool discovery** and **flexible tool filtering**:

- **No hardcoded tools**: MCP servers define their own tools and schemas
- **Dynamic discovery**: tools are discovered when an MCP server is first initialized
- **Flexible filtering**: use `--allowedTools` to specify exactly which tools may be used
- **JSON configuration**: load MCP server configurations from external JSON files
- **Environment variables**: each MCP server can specify custom environment variables

**Popular MCP servers:**
- **AWS API server**: AWS CLI operations and suggestions
- **kubectl-ai**: manage Kubernetes clusters with natural language
- **Filesystem server**: file system operations and management
- **Database server**: SQL query execution and database operations

### CLI Documentation

For complete CLI documentation, see the [CLI README](cmd/agent-cli/README.md).

## Examples

See the `cmd/examples` directory for complete examples:

- **Simple agent**: a basic agent with a system prompt
- **YAML configuration**: defining agents and tasks in YAML
- **Auto-configuration**: generating agent configurations from a system prompt
- **Agent configuration wizard**: an interactive CLI for creating and using agents
- **MCP integration**: using Model Context Protocol servers with agents
- **Multi-LLM support**: examples using OpenAI, Azure OpenAI, Anthropic, and Vertex AI
- **Vertex AI integration**: comprehensive examples covering Gemini models, reasoning modes, and tools

### LLM Provider Examples

- `examples/llm/openai/`: OpenAI integration examples
- `examples/llm/azureopenai/`: Azure OpenAI integration with deployment-based configuration
- `examples/llm/anthropic/`: Anthropic Claude integration examples
- `examples/llm/ollama/`: local Ollama LLM integration examples
- `examples/llm/vllm/`: high-performance local vLLM integration examples

## Agent GoGo - Quickly Deploy Agents Built with This SDK

- Host it yourself or launch your agents through our Cloud Gateway. Visit https://agentgogo.app to learn more.

## License

This project is licensed under the MIT License; see the LICENSE file for details.

## Documentation

📖 **Full documentation is available at [docs.goagents.dev](https://docs.goagents.dev/)**

For more detail, you can also consult:

- [Environment variables](docs/environment_variables.md)
- [Memory](docs/memory.md)
- [Tracing](docs/tracing.md)
- [Vector stores](docs/vectorstore.md)
- [Datastore](docs/datastore.md) - PostgreSQL and Supabase integration for structured data
- [LLM](docs/llm.md)
- [Multi-tenancy](docs/multitenancy.md)
- [Tasks](docs/task.md)
- [Tools](docs/tools.md)
- [Agents](docs/agent.md)
- [Execution plans](docs/execution_plan.md)
- [Guardrails](docs/guardrails.md)
- [MCP](docs/mcp.md)

# agent-sdk-go Quick Start Guide

`agent-sdk-go` is a powerful Go framework for building production-grade AI agents. It integrates memory management, tool execution, multi-model support (OpenAI, Anthropic, Google Vertex AI), and enterprise-grade features.

## Prerequisites

Before you begin, make sure your development environment meets the following requirements:

*   **Go version**: 1.23 or later
*   **Optional dependency**: Redis (if you need distributed memory)

## Installation

You can add the SDK to a project as a Go library, or use it as a command-line tool (CLI).

### Option A: Install as a Go Library

Run the following in your project directory to add the dependency:

```bash
go get github.com/Ingenimax/agent-sdk-go
```

### Option B: Install the CLI (Headless Mode)

Prebuilt binaries are recommended; you can also install via Go.

**Option 1: Download a prebuilt binary (recommended)**
Visit [GitHub Releases](https://github.com/Ingenimax/agent-sdk-go/releases), download the latest version for your platform, and add it to your system `PATH`.

**Option 2: Install via Go**

```bash
go install github.com/Ingenimax/agent-sdk-go/cmd/agent-cli@latest
```

**Option 3: Build from source**

```bash
# Clone the repository
git clone https://github.com/Ingenimax/agent-sdk-go
cd agent-sdk-go

# Build the CLI
make build-cli

# (Optional) install into your PATH
make install
```

## Basic Usage

### 1. Configure Environment Variables

The SDK is configured primarily through environment variables. Copy the example file and edit it, or export variables directly.

**Using a .env file (recommended):**
```bash
cp env.example .env
# Edit .env and fill in your API key
```

**Key settings:**
*   `OPENAI_API_KEY`: your OpenAI API key
*   `OPENAI_MODEL`: the model to use (e.g. `gpt-4o-mini`)
*   `LOG_LEVEL`: log level (`debug`, `info`, `warn`, `error`)
*   `REDIS_ADDRESS`: Redis address (if using Redis-backed memory)

### 2. Try the CLI

Once installed, you can run a one-off query or enter interactive mode right away:

```bash
# Initialize configuration
agent-cli init

# Run a single query
agent-cli run "What's the weather in San Francisco?"

# Start an interactive chat
agent-cli chat
```

### 3. Code Example: A Simple Agent

The following minimal Go example shows how to initialize an agent with memory and tool capabilities:

```go
package main

import (
	"context"
	"fmt"

	"github.com/Ingenimax/agent-sdk-go/pkg/agent"
	"github.com/Ingenimax/agent-sdk-go/pkg/config"
	"github.com/Ingenimax/agent-sdk-go/pkg/llm/openai"
	"github.com/Ingenimax/agent-sdk-go/pkg/logging"
	"github.com/Ingenimax/agent-sdk-go/pkg/memory"
	"github.com/Ingenimax/agent-sdk-go/pkg/multitenancy"
	"github.com/Ingenimax/agent-sdk-go/pkg/tools"
)

func main() {
	// Create a logger
	logger := logging.New()
	cfg := config.Get()

	// Initialize the OpenAI client
	openaiClient := openai.NewClient(cfg.LLM.OpenAI.APIKey,
		openai.WithLogger(logger))

	// Create the agent
	agentInstance, err := agent.NewAgent(
		agent.WithLLM(openaiClient),
		agent.WithMemory(memory.NewConversationBuffer()),
		agent.WithTools(tools.NewRegistry().List()...),
		agent.WithSystemPrompt("You are a helpful AI assistant."),
		agent.WithName("MyAssistant"),
	)
	if err != nil {
		logger.Error(context.Background(), "Failed to create agent", map[string]interface{}{"error": err.Error()})
		return
	}

	// Build the context (with organization ID and conversation ID)
	ctx := context.Background()
	ctx = multitenancy.WithOrgID(ctx, "default-org")
	ctx = context.WithValue(ctx, memory.ConversationIDKey, "conversation-123")

	// Run the agent
	response, err := agentInstance.Run(ctx, "Hello, who are you?")
	if err != nil {
		logger.Error(ctx, "Failed to run agent", map[string]interface{}{"error": err.Error()})
		return
	}

	fmt.Println(response)
}
```

### Going Further: YAML Configuration

For complex agent orchestration, the SDK supports declaring agent behavior, tool chains, and sub-agents in YAML files. Load configurations with `agent.LoadAgentConfigsFromFile` and instantiate agents directly with `agent.NewAgentFromConfig`, avoiding large amounts of boilerplate. See the official documentation for configuration details.

## Example Scenario

A back-end team at an e-commerce platform needs to build a customer-service agent that automatically handles return requests, checks inventory, and generates shipping labels.

### Without agent-sdk-go

- **Painful model switching**: the team hand-writes large amounts of glue code to adapt OpenAI and Google Gemini; switching models or running A/B tests means costly refactoring.
- **Messy memory management**: without native short- and long-term memory, it is hard to track multi-turn context through a complex returns flow, so the bot keeps re-asking users for information.
- **No observability**: per-step token usage and latency cannot be monitored, so debugging failures is guesswork against a black box.
- **Weak safety controls**: with no built-in guardrails, the team worries the AI could call the database and delete orders, and spends weeks building its own permission-checking middleware.

### With agent-sdk-go

- **Seamless multi-model integration**: switch freely between GPT-4o and Gemini through simple configuration and quickly compare how each model performs on the returns scenario.
- **Automated advanced memory**: built-in buffer and vector-retrieval memory lets the agent remember the user's earlier product IDs and return reasons, enabling smooth multi-turn interaction.
- **End-to-end observability**: integrated tracing and logging expose tool-call details and token costs per step, making expensive hotspots easy to find and optimize.
- **Enterprise-grade safety**: built-in guardrails restrict the agent to "query" and "create" operations only, ruling out accidental data deletion at the architecture level.

agent-sdk-go condenses weeks of agent infrastructure work into declarative configuration, letting the team focus on business logic instead of rebuilding framework plumbing.
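The "query and create only" guardrail above, like the CLI's `--allowedTools` flag, boils down to filtering the tool set an agent may call. The following stdlib-only sketch illustrates that allowlist idea; it is a conceptual example, not the SDK's actual Guardrails API, and the tool names are hypothetical.

```go
package main

import (
	"fmt"
	"sort"
)

// filterTools keeps only the tools named in allowed, mirroring the idea
// behind --allowedTools and the scenario's query/create-only guardrail.
// Conceptual sketch only; not the SDK's Guardrails implementation.
func filterTools(available []string, allowed map[string]bool) []string {
	var kept []string
	for _, name := range available {
		if allowed[name] {
			kept = append(kept, name)
		}
	}
	sort.Strings(kept) // deterministic order for display
	return kept
}

func main() {
	available := []string{"query_order", "create_shipping_label", "delete_order", "update_inventory"}
	// Guardrail: the returns agent may only read and create, never delete.
	allowed := map[string]bool{"query_order": true, "create_shipping_label": true}
	fmt.Println(filterTools(available, allowed))
}
```

Registering only the filtered tools with the agent means a destructive call can never be selected, regardless of what the model proposes.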
## FAQ

**Q: Is the A2A protocol supported for remote agent communication?**

Yes. This was implemented and merged in PR #294 (commit bfb0d06) with full support for A2A v0.3.0. Key components:
- `pkg/a2a/server.go`: an HTTP server exposing the agent via JSON-RPC plus an agent card
- `pkg/a2a/executor.go`: an AgentExecutor bridging RunStream to A2A events
- `pkg/a2a/client.go`: a client for discovering and calling remote A2A agents
- `pkg/a2a/tool.go`: RemoteAgentTool, which wraps an A2A agent as an SDK tool
- `pkg/a2a/agent_card.go`: CardBuilder, which generates agent cards from metadata

Example code lives in `examples/a2a/server` and `examples/a2a/client`. Agents built with this SDK can now be discovered and called by Google ADK, LangChain, CrewAI, and other A2A-compatible frameworks. (Source: https://github.com/Ingenimax/agent-sdk-go/issues/129)

**Q: How do I get input/output token usage statistics for LLM calls?**

This was implemented in PR #229. You can now read input and output token counts directly from the package's functions instead of parsing responses yourself. Make sure you upgrade to a release that includes this PR. (Source: https://github.com/Ingenimax/agent-sdk-go/issues/166)

**Q: How do I track a session ID in Langfuse so user interactions are grouped?**

Pass the sessionID (as the conversationID) with the request. `otel_langfuse.go` in the SDK extracts this ID and attaches it to the current trace as the `langfuse.session.id` attribute. The logic:
1. Read the conversationID from the context (usually set via the memory package).
2. When creating the root span, add attributes such as `attribute.String("langfuse.trace.name", contextID)`.
3. Where available, the organization ID (`langfuse.user.id`) and agent name are attached as well.

This groups all traces belonging to the same user session in Langfuse, which helps when analyzing user journeys or debugging multi-turn interactions. See the latest tracing documentation for wrapping traces in a session keyed by conversationId. (Source: https://github.com/Ingenimax/agent-sdk-go/issues/208)

**Q: How is conversation history formatted for different LLM providers? Are provider-specific formats supported?**

Early versions used a generic formatHistoryIntoPrompt function that flattened history into a plain string, ignoring each provider's native API types (such as Gemini's `genai.Content` or OpenAI's `ChatCompletionMessageParamUnion`). This was fixed in a refactor (see PR #191): history is now formatted using each provider's native message structures, which improves response accuracy. System-message handling was also improved to better support summarization. (Source: https://github.com/Ingenimax/agent-sdk-go/issues/141)

**Q: How can I see full, unhashed prompts and responses in Langfuse traces?**

By default, prompts and responses may be hashed for privacy or length reasons, while users often want the full conversation for quality review. The relevant code exists (see `pkg/tracing/otel_langfuse.go`), but enabling unhashed logging may require an environment variable or a specific option. Check the latest tracing configuration docs for a setting along the lines of "LOG_PROMPT_RESPONSE_RAW", or confirm whether it must be enabled explicitly when initializing the tracer. If it is not supported natively, you can build a custom tracer on the SDK, but the preferred path is official support via an environment variable. (Source: https://github.com/Ingenimax/agent-sdk-go/issues/221)

**Q: `go mod tidy` fails with a missing `github.com/openai/openai-go` package. What should I do?**

This is a known dependency issue, usually triggered when the gemini client or another module indirectly pulls in the wrong version of openai-go (for example, v1.12.0 may not contain the expected subpackage path). To resolve it:
1. Check whether your project imports `github.com/openai/openai-go` or its subpackages directly, and verify version compatibility.
2. Clean up `go.mod` and `go.sum`, then rerun `go mod tidy`.
3. If the problem persists, the SDK's internal test code or a specific module (such as `pkg/embedding`) may be pulling in an incompatible dependency. Upgrade to the latest SDK release, since the maintainers have fixed the gemini client and related dependencies. If it still fails, consider manually replacing the dependency version or adding an `exclude` directive in `go.mod`. (Source: https://github.com/Ingenimax/agent-sdk-go/issues/177)

## Release History

### v0.2.42 (2026-03-13)

Features:
- c47267c: feat: add LangFuse content tracing and session ID support (#221, #208) (@meidad)

Others:
- 5b6b33b: Merge pull request #297 from Ingenimax/feat/langfuse-tracing-improvements (@meidad)

Installation (the same pattern applies to every release below, substituting the version):
- CLI tool: download the binary for your platform from the release assets and add it to your `PATH`.
- Go library:

```bash
go get github.com/Ingenimax/agent-sdk-go@v0.2.42
```

Full changelog: https://github.com/Ingenimax/agent-sdk-go/compare/v0.2.41...v0.2.42

### v0.2.41 (2026-03-13)

Features:
- e74236a: feat: add WithDisableFinalSummary option to skip the final LLM call (#230) (@meidad)

Others:
- e9f2120: Merge pull request #298 from Ingenimax/feat/disable-final-summary (@meidad)

Full changelog: https://github.com/Ingenimax/agent-sdk-go/compare/v0.2.40...v0.2.41
### v0.2.40 (2026-03-11)

Features:
- 893ade2: feat: add prompt caching support for Anthropic Bedrock (@danostrosky)
- 9e055f4: feat: pass CacheConfig through to the Agent layer for prompt caching (@danostrosky)

Bug fixes:
- 32a0e5e: fix: pass CacheConfig in the streaming tool-call path (@danostrosky)

Others:
- f87854e: Merge pull request #290 from danostrosky's feat/bedrock-prompt-caching branch (@meidad)

Full changelog: https://github.com/Ingenimax/agent-sdk-go/compare/v0.2.39...v0.2.40

### v0.2.39 (2026-03-10)

Features:
- bfb0d06: feat(a2a): add A2A protocol support for cross-framework agent interoperability (#294) (@cleversonledur)

Full changelog: https://github.com/Ingenimax/agent-sdk-go/compare/v0.2.38...v0.2.39

### v0.2.38 (2026-02-04)

Bug fixes:
- 297cf20: fix: populate the Content field in StreamEventToolResult (#287) (@Ahmed1Ossama13)

Full changelog: https://github.com/Ingenimax/agent-sdk-go/compare/v0.2.37...v0.2.38

### v0.2.37 (2026-01-23)

Others:
- 6812ca2: fix merge config - image generation (#286) (@cleverson-ingenimax)

Full changelog: https://github.com/Ingenimax/agent-sdk-go/compare/v0.2.36...v0.2.37

### v0.2.36 (2026-01-21)

Others:
- b34b98c: add multi-turn support for image generation, plus more examples (#285) (@cleverson-ingenimax)

Full changelog: https://github.com/Ingenimax/agent-sdk-go/compare/v0.2.35...v0.2.36

### v0.2.35 (2026-01-20)

Features:
- 4842aef: feat: add image generation support for Vertex AI (#283) (@cleverson-ingenimax)

Full changelog: https://github.com/Ingenimax/agent-sdk-go/compare/v0.2.34...v0.2.35

### v0.2.34 (2026-01-12)

Others:
- 87696d5: MessageStartData mismatch (#278) (@RazGvili)

Full changelog: https://github.com/Ingenimax/agent-sdk-go/compare/v0.2.33...v0.2.34

### v0.2.33 (2026-01-12)

Others:
- 8c41f14: tool filtering for MCP servers via config file (#281) (@vnalawad-tibco)

Full changelog: https://github.com/Ingenimax/agent-sdk-go/compare/v0.2.32...v0.2.33

### v0.2.32 (2026-01-06)

Features:
- dff6eed: feat(anthropic): add prompt caching support (#277) (@backjo)

Full changelog: https://github.com/Ingenimax/agent-sdk-go/compare/v0.2.31...v0.2.32

### v0.2.31 (2025-12-30)

Bug fixes:
- cda266f: fix: include thinking parameter in Bedrock requests (#274) (@RazGvili)

Full changelog: https://github.com/Ingenimax/agent-sdk-go/compare/v0.2.30...v0.2.31

### v0.2.30 (2025-12-29)

Features:
- 6d3a73c: feat: bedrock support (#273) (@danostrosky)

Full changelog: https://github.com/Ingenimax/agent-sdk-go/compare/v0.2.29...v0.2.30

### v0.2.29 (2025-12-29)

Others:
- d0ed5fa: added support for HTTP Transport type in MCP server configuration (#271) (@vnalawad-tibco)

Full changelog: https://github.com/Ingenimax/agent-sdk-go/compare/v0.2.28...v0.2.29

### v0.2.28 (2025-12-23)

Features:
- 3ab43d8: feat(graphrag): add Graph-based Retrieval-Augmented Generation support (#270) (@cleverson-ingenimax)

Others:
- 6065e1f: Update README.md (@meidad)
- c889b6c: Update README.md (@meidad)

Full changelog: https://github.com/Ingenimax/agent-sdk-go/compare/v0.2.27...v0.2.28

### v0.2.27 (2025-12-19)

Features:
- fe63f30: feat: enhance config service integration (#269) (@cleverson-ingenimax)

Full changelog: https://github.com/Ingenimax/agent-sdk-go/compare/v0.2.26...v0.2.27

### v0.2.26 (2025-12-17)

Others:
- 1f6e21e: fix expand and add more logs (#268) (@cleverson-ingenimax)

Full changelog: https://github.com/Ingenimax/agent-sdk-go/compare/v0.2.25...v0.2.26

### v0.2.25 (2025-12-16)

Bug fixes:
- 106cf05: fix: append tools in WithTools to preserve sub-agent tools (#267) (@cleverson-ingenimax)

Others:
- 4b404e7: Merge pull request #266 from Ingenimax/docs/readme-update (@meidad)

Full changelog: https://github.com/Ingenimax/agent-sdk-go/compare/v0.2.24...v0.2.25

### v0.2.24 (2025-12-11)

Bug fixes:
- 6368390: fix: add memoryConfig fallback when memory instance is nil (@meidad)

Others:
- 36acc8e: Merge pull request #265 from Ingenimax/fix/memory-config-fallback (@meidad)

Full changelog: https://github.com/Ingenimax/agent-sdk-go/compare/v0.2.23...v0.2.24

### v0.2.23 (2025-12-11)

Features:
- a0bba87: feat: add DataStore field to Agent and expose in config endpoint (@meidad)

Others:
- 0f76b22: Merge pull request #264 from Ingenimax/feature/add-datastore-to-agent (@meidad)

Full changelog: https://github.com/Ingenimax/agent-sdk-go/compare/v0.2.22...v0.2.23