[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-temporal-community--temporal-ai-agent":3,"tool-temporal-community--temporal-ai-agent":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":79,"owner_email":80,"owner_twitter":81,"owner_website":82,"owner_url":83,"languages":84,"stars":112,"forks":113,"last_commit_at":114,"license":115,"difficulty_score":10,"env_os":116,"env_gpu":116,"env_ram":116,"env_deps":117,"category_tags":122,"github_topics":79,"view_count":10,"oss_zip_url":79,"oss_zip_packed_at":79,"status":16,"created_at":123,"updated_at":124,"faqs":125,"releases":154},399,"temporal-community\u002Ftemporal-ai-agent","temporal-ai-agent","This demo shows a multi-turn conversation with an AI agent running inside a Temporal workflow.","temporal-ai-agent 是一个基于 Temporal 工作流构建的 AI 智能体框架，旨在实现可靠的多轮对话与自动化任务执行。它能围绕特定目标（如金融、HR、旅行等）主动收集信息，并在过程中调用各类工具完成操作。\n\n传统 AI 应用常面临状态丢失或任务中断的困境，而 temporal-ai-agent 利用 Temporal 强大的状态管理和错误处理机制，确保了长流程任务的耐久性与可观测性。它支持单智能体专注单一目标，也提供实验性的多智能体协作模式，允许在对话中动态切换角色。\n\n对于开发者和技术研究者而言，这是一个构建复杂 AI 应用的理想起点。它不仅兼容 OpenAI、Anthropic、Google 及本地 Ollama 等多种大模型，还原生支持 Model Context Protocol (MCP)，方便集成 Stripe、数据库等外部服务。系统采用代码优先设计，配置仅需两个环境变量，让构建稳健的 Agentic AI 应用变得既灵活又高效。","# Temporal AI Agent\n\nThis demo shows a multi-turn conversation with an AI agent running inside a Temporal workflow. 
The purpose of the agent is to collect information towards a goal, running tools along the way. The agent supports both native tools and Model Context Protocol (MCP) tools, allowing it to interact with external services.\n\nThe agent operates in single-agent mode by default, focusing on one specific goal. It also supports experimental multi-agent\u002Fmulti-goal mode where users can choose between different agent types and switch between them during conversations.\n\nGoals are organized in the `\u002Fgoals\u002F` directory by category (finance, HR, travel, ecommerce, etc.) and can leverage both native and MCP tools.\n\nThe AI will respond with clarifications and ask for any missing information to that goal. You can configure it to use any LLM supported by [LiteLLM](https:\u002F\u002Fdocs.litellm.ai\u002Fdocs\u002Fproviders), including:\n- OpenAI models (GPT-4, GPT-3.5)\n- Anthropic Claude models\n- Google Gemini models\n- Deepseek models\n- Ollama models (local)\n- And many more!\n\nIt's really helpful to [watch the demo (5 minute YouTube video)](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=GEXllEH2XiQ) to understand how interaction works.\n\n[![Watch the demo](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ftemporal-community_temporal-ai-agent_readme_5737ff66ae84.jpeg)](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=GEXllEH2XiQ)\n\n### Multi-Agent Demo Video\nSee multi-agent execution in action [here](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=8Dc_0dC14yY).\n\n## Why Temporal?\nThere are a lot of AI and Agentic AI tools out there, and more on the way. But why Temporal? Temporal gives this system reliability, state management, a code-first approach that we really like, built-in observability and easy error handling.\nFor more, check out [architecture-decisions](docs\u002Farchitecture-decisions.md).\n\n## What is \"Agentic AI\"?\nThese are the key elements of an agentic framework:\n1. 
Goals that a system can accomplish, made up of tools that can execute individual steps\n2. Agent loops - executing an LLM, executing tools, and eliciting input from an external source such as a human: repeat until goal(s) are done\n3. Support for tool calls that require input and approval\n4. Use of an LLM to check human input for relevance before calling the 'real' LLM\n5. Use of an LLM to summarize and compact the conversation history\n6. Prompt construction made of system prompts, conversation history, and tool metadata - sent to the LLM to create user questions and confirmations\n7. Ideally high durability (done in this system with Temporal Workflow and Activities)\n\nFor a deeper dive into this, check out the [architecture guide](docs\u002Farchitecture.md).\n\n## 🔧 MCP Tool Calling Support\n\nThis agent acts as an **MCP (Model Context Protocol) client**, enabling seamless integration with external services and tools. The system supports two types of tools:\n- **Native Tools**: Custom tools implemented directly in the codebase (in `\u002Ftools\u002F`)\n- **MCP Tools**: External tools accessed via Model Context Protocol (MCP) servers like Stripe, databases, or APIs. Configuration is covered in [the Setup guide](docs\u002Fsetup.md)\n- Set `AGENT_GOAL=goal_food_ordering` with `SHOW_CONFIRM=False` in `.env` for an example of a goal that calls MCP Tools (Stripe).\n\n## Setup and Configuration\nSee [the Setup guide](docs\u002Fsetup.md) for detailed instructions. The basic configuration requires just two environment variables:\n```bash\nLLM_MODEL=openai\u002Fgpt-4o  # or any other model supported by LiteLLM\nLLM_KEY=your-api-key-here\n```\n\n## Customizing Interaction & Tools\nSee [the guide to adding goals and tools](docs\u002Fadding-goals-and-tools.md).\n\nThe system supports MCP (Model Context Protocol) for easy integration with external services. 
MCP server configurations are managed in `shared\u002Fmcp_config.py`, and goals are organized by category in the `\u002Fgoals\u002F` directory.\n\n## Architecture\nSee [the architecture guide](docs\u002Farchitecture.md).\n\n## Testing\n\nThe project includes comprehensive tests for workflows and activities using Temporal's testing framework:\n\n```bash\n# Install dependencies including test dependencies\nuv sync\n\n# Run all tests\nuv run pytest\n\n# Run with time-skipping for faster execution\nuv run pytest --workflow-environment=time-skipping\n```\n\n**Test Coverage:**\n- ✅ **Workflow Tests**: AgentGoalWorkflow signals, queries, state management\n- ✅ **Activity Tests**: ToolActivities, LLM integration (mocked), environment configuration\n- ✅ **Integration Tests**: End-to-end workflow and activity execution\n\n- **Quick Start**: [testing.md](docs\u002Ftesting.md) - Simple commands to run tests\n- **Comprehensive Guide**: [tests\u002FREADME.md](tests\u002FREADME.md) - Detailed testing documentation, patterns, and best practices\n\n## Development\n\nTo contribute to this project, see [contributing.md](docs\u002Fcontributing.md).\n\nStart the Temporal Server and API server, see [setup](docs\u002Fsetup.md)\n\n## Productionalization & Adding Features\n- In a prod setting, I would need to ensure that payload data is stored separately (e.g. in S3 or a noSQL db - the claim-check pattern), or otherwise 'garbage collected'. Without these techniques, long conversations will fill up the workflow's conversation history, and start to breach Temporal event history payload limits.\n- A single worker can easily support many agent workflows (chats) running at the same time. Currently the workflow ID is the same each time, so it will only run one agent at a time. To run multiple agents, you can use a different workflow ID each time (e.g. by using a UUID or timestamp).\n- Perhaps the UI should show when the LLM response is being retried (i.e. 
activity retry attempt because the LLM provided bad output)\n- The project now includes comprehensive tests for workflows and activities! [See testing guide](docs\u002Ftesting.md).\n\nSee [the todo](docs\u002Ftodo.md) for more details on things we want to do (or that you could contribute!).\n\nSee [the guide to adding goals and tools](docs\u002Fadding-goals-and-tools.md) for more ways you can add features.\n\n## Enablement Guide (internal resource for Temporal employees)\nCheck out the [slides](https:\u002F\u002Fdocs.google.com\u002Fpresentation\u002Fd\u002F1wUFY4v17vrtv8llreKEBDPLRtZte3FixxBUn0uWy5NU\u002Fedit#slide=id.g3333e5deaa9_0_0) here and the [enablement guide](https:\u002F\u002Fdocs.google.com\u002Fdocument\u002Fd\u002F14E0cEOibUAgHPBqConbWXgPUBY0Oxrnt6_AImdiheW4\u002Fedit?tab=t.0#heading=h.ajnq2v3xqbu1).\n\n\n","# Temporal AI 智能体\n\n此演示展示了在 Temporal 工作流（Workflow）中运行的 AI 智能体（Agent）的多轮对话。智能体的目的是收集信息以实现目标，并在此过程中运行工具。该智能体支持原生工具和模型上下文协议（Model Context Protocol, MCP）工具，允许它与外部服务交互。\n\n默认情况下，智能体以单智能体模式运行，专注于一个特定目标。它还支持实验性的多智能体\u002F多目标模式，用户可以在不同智能体类型之间选择并在对话期间切换。\n\n目标按类别（金融、人力资源、旅行、电子商务等）组织在 `\u002Fgoals\u002F` 目录中，并可同时利用原生工具和 MCP 工具。\n\nAI 会做出澄清式回应，并询问实现该目标所缺失的信息。您可以配置它使用 [LiteLLM](https:\u002F\u002Fdocs.litellm.ai\u002Fdocs\u002Fproviders) 支持的任何大语言模型（LLM），包括：\n- OpenAI 模型 (GPT-4, GPT-3.5)\n- Anthropic Claude 模型\n- Google Gemini 模型\n- Deepseek 模型\n- Ollama 模型 (本地)\n- 以及更多！\n\n建议先[观看演示视频（5 分钟 YouTube 视频）](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=GEXllEH2XiQ)，这对理解交互方式非常有帮助。\n\n[![Watch the demo](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ftemporal-community_temporal-ai-agent_readme_5737ff66ae84.jpeg)](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=GEXllEH2XiQ)\n\n### 多智能体演示视频\n在[此处](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=8Dc_0dC14yY)查看多智能体执行的演示。\n\n## 为什么选择 Temporal？\n市面上有很多 AI 和智能体 AI 工具，而且还在不断增加。但为什么要用 Temporal？Temporal 为该系统带来了可靠性、状态管理、我们非常喜欢的代码优先方法、内置的可观测性以及简单的错误处理。\n更多信息，请查看 [architecture-decisions](docs\u002Farchitecture-decisions.md)。\n\n## 什么是“智能体 AI”？\n以下是智能体框架的关键要素：\n1. 
系统可以完成的目标，由可执行单个步骤的工具组成\n2. 智能体循环 - 执行 LLM，执行工具，并从外部来源（如人类）获取输入：重复直到目标完成\n3. 支持需要输入和批准的工具调用\n4. 使用 LLM 在调用“真实”LLM 之前检查人类输入的相关性\n5. 使用 LLM 总结和压缩对话历史\n6. 提示词构建包含系统提示词、对话历史和工具元数据 - 发送给 LLM 以创建用户问题和确认\n7. 理想情况下具有高持久性（本系统中通过 Temporal 工作流和活动（Activities）实现）\n\n如需深入了解，请查看 [架构指南](docs\u002Farchitecture.md)。\n\n## 🔧 MCP 工具调用支持\n\n该智能体充当 **MCP（模型上下文协议）客户端**，支持与外部服务和工具的无缝集成。系统支持两种类型的工具：\n- **原生工具**：直接在代码库中实现的自定义工具（在 `\u002Ftools\u002F` 中）\n- **MCP 工具**：通过模型上下文协议（MCP）服务器访问的外部工具，如 Stripe、数据库或 API。配置方法见[设置指南](docs\u002Fsetup.md)\n- 在 `.env` 中设置 `AGENT_GOAL=goal_food_ordering` 和 `SHOW_CONFIRM=False`，即可体验调用 MCP 工具（Stripe）的目标示例。\n\n## 设置与配置\n详细说明请参见[设置指南](docs\u002Fsetup.md)。基本配置只需要两个环境变量：\n```bash\nLLM_MODEL=openai\u002Fgpt-4o  # or any other model supported by LiteLLM\nLLM_KEY=your-api-key-here\n```\n\n## 自定义交互与工具\n请参见 [添加目标和工具的指南](docs\u002Fadding-goals-and-tools.md)。\n\n系统支持 MCP（模型上下文协议），便于与外部服务集成。MCP 服务器配置在 `shared\u002Fmcp_config.py` 中管理，目标按类别组织在 `\u002Fgoals\u002F` 目录中。\n\n## 架构\n请参见 [架构指南](docs\u002Farchitecture.md)。\n\n## 测试\n\n该项目基于 Temporal 的测试框架，为工作流和活动提供了全面的测试：\n\n```bash\n# Install dependencies including test dependencies\nuv sync\n\n# Run all tests\nuv run pytest\n\n# Run with time-skipping for faster execution\nuv run pytest --workflow-environment=time-skipping\n```\n\n**测试覆盖范围：**\n- ✅ **工作流测试**：AgentGoalWorkflow 信号、查询、状态管理\n- ✅ **活动测试**：ToolActivities、LLM 集成（模拟）、环境配置\n- ✅ **集成测试**：端到端工作流和活动执行\n\n- **快速开始**：[testing.md](docs\u002Ftesting.md) - 运行测试的简单命令\n- **综合指南**：[tests\u002FREADME.md](tests\u002FREADME.md) - 详细的测试文档、模式和最佳实践\n\n## 开发\n\n要为此项目做贡献，请参见 [contributing.md](docs\u002Fcontributing.md)。\n\n启动 Temporal 服务器和 API 服务器，请参见 [setup](docs\u002Fsetup.md)。\n\n## 生产化与添加功能\n- 在生产环境中，我需要确保有效载荷数据被单独存储（例如存入 S3 或 NoSQL 数据库，即 claim-check 模式），或以其他方式进行“垃圾回收”。如果没有这些技术，长对话会填满工作流的对话历史，并最终触及 Temporal 事件历史的负载上限。\n- 单个工作器可以轻松支持同时运行的许多智能体工作流（聊天）。目前每次的工作流 ID 都相同，因此一次只会运行一个智能体。要运行多个智能体，您可以每次使用不同的工作流 ID（例如使用 UUID 或时间戳）。\n- 或许 UI 应该在 LLM 响应被重试时予以提示（即由于 LLM 
提供不良输出而导致的活动重试尝试）\n- 该项目现在包含针对工作流和活动的全面测试！[参见测试指南](docs\u002Ftesting.md)。\n\n有关我们要做的事情（或你可以贡献的事情）的更多详细信息，请参见 [todo](docs\u002Ftodo.md)。\n\n有关您可以添加功能的更多方式，请参见 [添加目标和工具的指南](docs\u002Fadding-goals-and-tools.md)。\n\n## 启用指南（Temporal 员工的内部资源）\n请查看这里的 [幻灯片](https:\u002F\u002Fdocs.google.com\u002Fpresentation\u002Fd\u002F1wUFY4v17vrtv8llreKEBDPLRtZte3FixxBUn0uWy5NU\u002Fedit#slide=id.g3333e5deaa9_0_0) 和 [启用指南](https:\u002F\u002Fdocs.google.com\u002Fdocument\u002Fd\u002F14E0cEOibUAgHPBqConbWXgPUBY0Oxrnt6_AImdiheW4\u002Fedit?tab=t.0#heading=h.ajnq2v3xqbu1)。","# temporal-ai-agent 快速上手指南\n\n## 环境准备\n\n在开始之前，请确保您的开发环境满足以下要求：\n\n- **操作系统**: Linux \u002F macOS \u002F Windows\n- **编程语言**: Python 3.x\n- **包管理工具**: [`uv`](https:\u002F\u002Fgithub.com\u002Fastral-sh\u002Fuv) (用于依赖管理和运行脚本)\n- **中间件**: 本地或云端的 Temporal Server (需预先启动)\n- **大模型服务**: 支持通过 [LiteLLM](https:\u002F\u002Fdocs.litellm.ai\u002Fdocs\u002Fproviders) 调用的模型\n  - 推荐国内开发者使用：Deepseek、Ollama (本地部署)、Google Gemini 等\n  - 其他支持：OpenAI (GPT-4\u002F3.5)、Anthropic Claude 等\n\n## 安装步骤\n\n1. **克隆项目代码**\n   ```bash\n   git clone \u003Crepository-url>\n   cd temporal-ai-agent\n   ```\n\n2. **安装依赖**\n   使用 `uv` 同步项目依赖（包含测试依赖）：\n   ```bash\n   uv sync\n   ```\n\n3. **配置环境变量**\n   在项目根目录创建 `.env` 文件，配置基础参数（详细配置请查阅 `docs\u002Fsetup.md`）：\n   ```bash\n   LLM_MODEL=openai\u002Fgpt-4o  # 或任何 LiteLLM 支持的模型\n   LLM_KEY=your-api-key-here\n   ```\n   *提示：若使用本地 Ollama 模型，可设置为 `ollama\u002Fllama3` 等。*\n\n## 基本使用\n\n1. **启动服务**\n   确保 Temporal Server 和 API Server 已经运行（具体启动命令请参考 `docs\u002Fsetup.md`）。\n\n2. **配置目标与工具**\n   系统支持多种目标分类（finance, HR, travel 等），位于 `\u002Fgoals\u002F` 目录下。\n   若要测试 MCP 工具调用（例如 Stripe），可在 `.env` 中添加以下配置：\n   ```bash\n   AGENT_GOAL=goal_food_ordering\n   SHOW_CONFIRM=False\n   ```\n\n3. 
**运行与验证**\n   项目内置了完善的测试框架，可用于验证工作流和活动逻辑：\n   ```bash\n   # 运行所有测试\n   uv run pytest\n\n   # 加速运行（跳过时间等待）\n   uv run pytest --workflow-environment=time-skipping\n   ```\n   正式运行 Agent 交互的具体命令请参照 `docs\u002Fsetup.md` 中的完整操作指引。\n\n---\n**相关文档**\n- **架构说明**: [docs\u002Farchitecture.md](docs\u002Farchitecture.md)\n- **添加新目标与工具**: [docs\u002Fadding-goals-and-tools.md](docs\u002Fadding-goals-and-tools.md)\n- **MCP 配置**: `shared\u002Fmcp_config.py`","某企业行政专员需要为频繁出差的员工自动完成跨平台的机票酒店预订及后续报销单生成，涉及多个外部系统交互与人工确认环节。\n\n### 没有 temporal-ai-agent 时\n- 手动切换不同供应商平台查询比价，耗时费力且数据难以统一汇总。\n- 自定义 Python 脚本缺乏状态管理，一旦网络中断或超时，所有进度丢失需重头再来。\n- 审批流程依赖邮件或即时通讯工具，缺乏结构化记录，难以追溯决策依据。\n- 遇到 API 异常时无优雅降级方案，导致整个自动化任务直接崩溃报错。\n\n### 使用 temporal-ai-agent 后\n- temporal-ai-agent 作为智能体自主规划步骤，通过 MCP 工具无缝调用各预订接口获取最优方案。\n- 依托 Temporal 工作流引擎，任务状态持久化存储，任何中断后都能精确恢复至断点继续执行。\n- 内置人机协作循环，在支付前自动暂停并询问用户确认，确保关键操作符合业务规范。\n- 具备完善的错误处理与日志观察能力，API 调用失败会自动重试或通知管理员，保障任务高可用。\n\n它将分散的自动化步骤整合为可靠的工作流，显著提升了复杂业务场景下的执行稳定性与可维护性。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ftemporal-community_temporal-ai-agent_5737ff66.jpg","temporal-community","Temporal Community","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Ftemporal-community_c21ab371.png","This organization hosts code samples and other projects for the Temporal Community. 
Repos in this organization have no guarantee of support or maintenance.",null,"community@temporal.io","temporalio","https:\u002F\u002Ftemporal.io\u002F","https:\u002F\u002Fgithub.com\u002Ftemporal-community",[85,89,93,97,101,105,108],{"name":86,"color":87,"percentage":88},"Python","#3572A5",86.7,{"name":90,"color":91,"percentage":92},"JavaScript","#f1e05a",9.6,{"name":94,"color":95,"percentage":96},"C#","#178600",2.6,{"name":98,"color":99,"percentage":100},"Makefile","#427819",0.4,{"name":102,"color":103,"percentage":104},"CSS","#663399",0.3,{"name":106,"color":107,"percentage":104},"Dockerfile","#384d54",{"name":109,"color":110,"percentage":111},"HTML","#e34c26",0.1,658,168,"2026-04-02T03:09:20","MIT","未说明",{"notes":118,"python":116,"dependencies":119},"需要部署并运行 Temporal Server；通过 .env 文件配置 LLM 模型名称及 API 密钥（支持 OpenAI、Anthropic、Google、Deepseek、Ollama 等）；使用 uv 工具管理依赖和运行测试；支持 MCP 协议集成外部服务；本地运行 Ollama 时可能需要 GPU 资源但非强制。",[120,81,121],"litellm","pytest",[15,26],"2026-03-27T02:49:30.150509","2026-04-06T07:13:00.497938",[126,131,136,140,145,149],{"id":127,"question_zh":128,"answer_zh":129,"source_url":130},1485,"项目使用了什么许可证？","维护者已在仓库中添加 MIT 许可证。这意味着您可以自由地使用和修改代码，无需单独申请版权许可。","https:\u002F\u002Fgithub.com\u002Ftemporal-community\u002Ftemporal-ai-agent\u002Fissues\u002F17",{"id":132,"question_zh":133,"answer_zh":134,"source_url":135},1486,"如何配置使用 OpenAI 还是 Ollama 模型？","项目现已支持通过环境变量选择模型类型。请查看 .env.example 文件（第 8 行）以了解具体配置项。","https:\u002F\u002Fgithub.com\u002Ftemporal-community\u002Ftemporal-ai-agent\u002Fissues\u002F1",{"id":137,"question_zh":138,"answer_zh":139,"source_url":135},1487,"本地 Ollama 模型的效果如何？","维护者反馈目前尚未找到比 OpenAI gpt-4o 更适合此用例的 Ollama 模型，建议优先考虑 OpenAI。",{"id":141,"question_zh":142,"answer_zh":143,"source_url":144},1488,"为什么 Workflow 失败会导致前端界面卡死？","当 Workflow 
因错误被终止或取消时，查询句柄仍然有效，但无法获取有效对话历史，导致前端轮询陷入死循环。","https:\u002F\u002Fgithub.com\u002Ftemporal-community\u002Ftemporal-ai-agent\u002Fissues\u002F2",{"id":146,"question_zh":147,"answer_zh":148,"source_url":144},1489,"如何修复 get_conversation_history 以防止前端卡死？","需要在查询前检查 Workflow 状态。如果状态为 TERMINATED、CANCELED 或 FAILED，应直接返回空列表，避免无效查询。\n\n代码示例：\ndescription = await handle.describe()\nif description.status in [WorkflowExecutionStatus.WORKFLOW_EXECUTION_STATUS_TERMINATED, ...]:\n    return []\nconversation_history = await handle.query(\"get_conversation_history\")",{"id":150,"question_zh":151,"answer_zh":152,"source_url":153},1490,"如何解决本地 LLM 响应中的 JSON 解析失败问题？","本地 LLM 响应可能包含 Markdown 标记（如 ```json）或额外文本。解决方案是提取标记之间的内容再解析。\n\n代码逻辑：\nstart_marker = \"```json\"\nend_marker = \"```\"\njson_start = response_content.index(start_marker) + len(start_marker)\njson_end = response_content.index(end_marker, json_start)\njson_str = response_content[json_start:json_end].strip()\ndata = json.loads(json_str)","https:\u002F\u002Fgithub.com\u002Ftemporal-community\u002Ftemporal-ai-agent\u002Fissues\u002F3",[155,160,165,170],{"id":156,"version":157,"summary_zh":158,"released_at":159},100987,"0.4.1","## What's Changed\r\n* Mcp enhancements by @steveandroulakis in https:\u002F\u002Fgithub.com\u002Ftemporal-community\u002Ftemporal-ai-agent\u002Fpull\u002F43\r\n* [Bug fix] Update API to use proper query by @MasonEgger in https:\u002F\u002Fgithub.com\u002Ftemporal-community\u002Ftemporal-ai-agent\u002Fpull\u002F44\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Ftemporal-community\u002Ftemporal-ai-agent\u002Fcompare\u002F0.4.0...0.4.1","2025-06-17T15:42:09",{"id":161,"version":162,"summary_zh":163,"released_at":164},100988,"0.4.0","# Model Context Protocol (MCP) Support\r\n\r\n## What's Changed\r\n* Model Context Protocol (MCP) support with new use case by @steveandroulakis in https:\u002F\u002Fgithub.com\u002Ftemporal-community\u002Ftemporal-ai-agent\u002Fpull\u002F42\r\n\r\n\r\n**Full 
Changelog**: https:\u002F\u002Fgithub.com\u002Ftemporal-community\u002Ftemporal-ai-agent\u002Fcompare\u002F0.3.0...0.4.0","2025-06-09T23:42:22",{"id":166,"version":167,"summary_zh":168,"released_at":169},100989,"0.3.0","## What's Changed\r\n* relocking Poetry lock file to align with pyproject.toml by @MasonEgger in https:\u002F\u002Fgithub.com\u002Ftemporal-community\u002Ftemporal-ai-agent\u002Fpull\u002F32\r\n* Docker setup by @steveandroulakis in https:\u002F\u002Fgithub.com\u002Ftemporal-community\u002Ftemporal-ai-agent\u002Fpull\u002F34\r\n* Review dallastexas92 nostripekey by @steveandroulakis in https:\u002F\u002Fgithub.com\u002Ftemporal-community\u002Ftemporal-ai-agent\u002Fpull\u002F35\r\n* Jonymusky litellm integration by @steveandroulakis in https:\u002F\u002Fgithub.com\u002Ftemporal-community\u002Ftemporal-ai-agent\u002Fpull\u002F36\r\n* Use mock football data if no key by @steveandroulakis in https:\u002F\u002Fgithub.com\u002Ftemporal-community\u002Ftemporal-ai-agent\u002Fpull\u002F37\r\n* todo list by @steveandroulakis in https:\u002F\u002Fgithub.com\u002Ftemporal-community\u002Ftemporal-ai-agent\u002Fpull\u002F38\r\n* fix(setup): add stripe to Python dep by @kawofong in https:\u002F\u002Fgithub.com\u002Ftemporal-community\u002Ftemporal-ai-agent\u002Fpull\u002F39\r\n* Temporal tests by @steveandroulakis in https:\u002F\u002Fgithub.com\u002Ftemporal-community\u002Ftemporal-ai-agent\u002Fpull\u002F40\r\n* Enhance Dev Experience and Code Quality by @steveandroulakis in https:\u002F\u002Fgithub.com\u002Ftemporal-community\u002Ftemporal-ai-agent\u002Fpull\u002F41\r\n\r\n## New Contributors\r\n* @MasonEgger made their first contribution in https:\u002F\u002Fgithub.com\u002Ftemporal-community\u002Ftemporal-ai-agent\u002Fpull\u002F32\r\n* @kawofong made their first contribution in https:\u002F\u002Fgithub.com\u002Ftemporal-community\u002Ftemporal-ai-agent\u002Fpull\u002F39\r\n\r\n**Full Changelog**: 
https:\u002F\u002Fgithub.com\u002Ftemporal-community\u002Ftemporal-ai-agent\u002Fcompare\u002F0.2.0...0.3.0","2025-06-04T17:45:49",{"id":171,"version":172,"summary_zh":173,"released_at":174},100990,"0.2.0","# Changelog\r\n\r\nAll notable changes to this project will be documented in this file.\r\n\r\n## [0.2.0] - 2025-04-24\r\n\r\n## Commits \r\n* (Everything prior, as this is the first \"official\" release)\r\n\r\n![0.2.0 Changes Screenshot](https:\u002F\u002Fgithub.com\u002Ftemporal-community\u002Ftemporal-ai-agent\u002Fblob\u002Fmain\u002Fassets\u002F0.2.0_changes.jpeg)\r\n\r\n### Added\r\n- **Multi‑goal agent architecture** with dynamic goal switching (`goal_choose_agent_type`, `ListAgents`, `ChangeGoal`).\r\n    - See [the architecture guide](https:\u002F\u002Fgithub.com\u002Ftemporal-community\u002Ftemporal-ai-agent\u002Fblob\u002Fmain\u002Farchitecture.md) and [setup guide](https:\u002F\u002Fgithub.com\u002Ftemporal-community\u002Ftemporal-ai-agent\u002Fblob\u002Fmain\u002Fsetup.md).\r\n- **New goal categories & agents**: HR PTO scheduling\u002Fchecking, paycheck integration, Financial (balances, money movement, loan application), E‑commerce order tracking.\r\n    - See [the guide for adding goals and tools](https:\u002F\u002Fgithub.com\u002Ftemporal-community\u002Ftemporal-ai-agent\u002Fblob\u002Fmain\u002Fadding-goals-and-tools.md).\r\n- **Force Confirmation**: `SHOW_CONFIRM` will show a confirmation box before allowing the agent to run a tool.\r\n- **Grok (`x.ai`) LLM provider** support via `GROK_API_KEY`.\r\n- Extensive **docs**: `setup.md`, `architecture.md`, `architecture-decisions.md`, `adding-goals-and-tools.md`, plus new diagrams & assets.\r\n\r\n### Changed\r\n- **UI Confirmation Box** is less 'debug' looking and prettier.\r\n- Package renamed to **`temporal_AI_agent`** and version bumped to **0.2.0** in `pyproject.toml`.\r\n- Environment variables changed (see `.env_example`): (`RAPIDAPI_HOST_*`, `AGENT_GOAL` defaults, `GOAL_CATEGORIES`, 
`SHOW_CONFIRM`, `FIN_START_REAL_WORKFLOW`).\r\n\r\n## [0.1.0] - 2025-01-04\r\n\r\n### Added\r\n- **Initial release** of the Temporal AI Agent demo.\r\n- **Single goal agent** architecture with a single goal and agent type.\r\n    - This is the agent demoed in the [YouTube video](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=GEXllEH2XiQ).\r\n\r\n","2025-04-28T16:49:23"]