[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-dynamiq-ai--dynamiq":3,"tool-dynamiq-ai--dynamiq":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":79,"owner_email":80,"owner_twitter":81,"owner_website":82,"owner_url":83,"languages":84,"stars":96,"forks":97,"last_commit_at":98,"license":99,"difficulty_score":23,"env_os":100,"env_gpu":101,"env_ram":101,"env_deps":102,"category_tags":106,"github_topics":107,"view_count":115,"oss_zip_url":79,"oss_zip_packed_at":79,"status":16,"created_at":116,"updated_at":117,"faqs":118,"releases":139},175,"dynamiq-ai\u002Fdynamiq","dynamiq","Dynamiq is an orchestration framework for agentic AI and LLM applications","Dynamiq 是一个面向智能体（agentic）AI 和大语言模型（LLM）应用的编排框架，帮助开发者更高效地构建复杂的生成式 AI 应用。它专注于简化检索增强生成（RAG）和 LLM 智能体的工作流设计，让多个组件（如语言模型、工具、记忆模块等）能够协同工作。在实际开发中，用户常面临流程编排复杂、异步执行困难、工具集成繁琐等问题，Dynamiq 通过清晰的节点化架构和内置支持（如 ReAct 智能体、代码解释器工具等）有效缓解这些挑战。该框架主要面向 AI 工程师、研究人员和有一定编程基础的开发者，尤其适合需要快速搭建可扩展、可维护的 LLM 应用原型或产品的团队。其技术亮点包括对异步执行的良好支持、模块化的节点设计，以及与 E2B 等外部工具的无缝集成，使得构建具备真实任务解决能力的智能体变得更加直观和灵活。","\n\u003Cp align=\"center\">\n  \u003Ca href=\"https:\u002F\u002Fwww.getdynamiq.ai\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdynamiq-ai_dynamiq_readme_5a543c644ef0.png\" alt=\"Dynamiq\">\u003C\u002Fa>\n\u003C\u002Fp>\n\n\n\u003Cp align=\"center\">\n    \u003Cem>Dynamiq is an 
orchestration framework for agentic AI and LLM applications\u003C\u002Fem>\n\u003C\u002Fp>\n\n\u003Cp align=\"center\">\n  \u003Ca href=\"https:\u002F\u002Fgetdynamiq.ai\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fwebsite?label=website&up_message=online&url=https%3A%2F%2Fgetdynamiq.ai\" alt=\"Website\">\n  \u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Freleases\" target=\"_blank\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Frelease\u002Fdynamiq-ai\u002Fdynamiq\" alt=\"Release Notes\">\n  \u003C\u002Fa>\n  \u003Ca href=\"#\" target=\"_blank\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FPython-3.10%2B-brightgreen.svg\" alt=\"Python 3.10+\">\n  \u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fblob\u002Fmain\u002FLICENSE\" target=\"_blank\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLicense-Apache_2.0-blue.svg\" alt=\"License\">\n  \u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fdynamiq-ai.github.io\u002Fdynamiq\" target=\"_blank\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fwebsite?label=documentation&up_message=online&url=https%3A%2F%2Fdynamiq-ai.github.io%2Fdynamiq\" alt=\"Documentation\">\n  \u003C\u002Fa>\n\u003C\u002Fp>\n\n\nWelcome to Dynamiq! 🤖\n\nDynamiq is your all-in-one Gen AI framework, designed to streamline the development of AI-powered applications. Dynamiq specializes in orchestrating retrieval-augmented generation (RAG) and large language model (LLM) agents.\n\n## Getting Started\n\nReady to dive in? Here's how you can get started with Dynamiq:\n\n### Installation\n\nFirst, let's get Dynamiq installed. You'll need Python, so make sure that's set up on your machine. 
Then run:\n\n```sh\npip install dynamiq\n```\n\nOr build the Python package from the source code:\n```sh\ngit clone https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq.git\ncd dynamiq\npoetry install\n```\n\n## Documentation\nFor more examples and detailed guides, please refer to our [documentation](https:\u002F\u002Fdynamiq-ai.github.io\u002Fdynamiq).\n\n## Examples\n\n### Simple LLM Flow\n\nHere's a simple example to get you started with Dynamiq:\n\n```python\nfrom dynamiq.nodes.llms.openai import OpenAI\nfrom dynamiq.connections import OpenAI as OpenAIConnection\nfrom dynamiq.prompts import Prompt, Message\n\n# Define the prompt template for translation\nprompt_template = \"\"\"\nTranslate the following text into English: {{ text }}\n\"\"\"\n\n# Create a Prompt object with the defined template\nprompt = Prompt(messages=[Message(content=prompt_template, role=\"user\")])\n\n# Setup your LLM (Large Language Model) Node\nllm = OpenAI(\n    id=\"openai\",  # Unique identifier for the node\n    connection=OpenAIConnection(api_key=\"OPENAI_API_KEY\"),  # Connection using API key\n    model=\"gpt-4o\",  # Model to be used\n    temperature=0.3,  # Sampling temperature for the model\n    max_tokens=1000,  # Maximum number of tokens in the output\n    prompt=prompt  # Prompt to be used for the model\n)\n\n# Run the LLM node with the input data\nresult = llm.run(\n    input_data={\n        \"text\": \"Hola Mundo!\"  # Text to be translated\n    }\n)\n\n# Print the result of the translation\nprint(result.output)\n```\n\n\n### Simple ReAct Agent with asynchronous execution\nAn agent that has access to the E2B Code Interpreter and is capable of solving complex coding tasks.\n\n```python\nimport asyncio\n\nfrom dynamiq.nodes.llms.openai import OpenAI\nfrom dynamiq.connections import OpenAI as OpenAIConnection, E2B as E2BConnection\nfrom dynamiq.nodes.agents import Agent\nfrom dynamiq.nodes.tools.e2b_sandbox import E2BInterpreterTool\n\n# Initialize the E2B tool\ne2b_tool = 
E2BInterpreterTool(\n    connection=E2BConnection(api_key=\"E2B_API_KEY\")\n)\n\n# Setup your LLM\nllm = OpenAI(\n    id=\"openai\",\n    connection=OpenAIConnection(api_key=\"OPENAI_API_KEY\"),\n    model=\"gpt-4o\",\n    temperature=0.3,\n    max_tokens=1000,\n)\n\n# Create the agent\nagent = Agent(\n    name=\"react-agent\",\n    llm=llm, # Language model instance\n    tools=[e2b_tool],  # List of tools that the agent can use\n    role=\"Senior Data Scientist\",  # Role of the agent\n    max_loops=10, # Limit on the number of processing loops\n)\n\nasync def run_async_agent():\n    # Run the agent asynchronously with an input\n    result = await agent.run(\n        input_data={\n            \"input\": \"Add the first 10 numbers and tell if the result is prime.\",\n        }\n    )\n\n    print(result.output.get(\"content\"))\n\n\n# Execute the async function\nif __name__ == \"__main__\":\n    asyncio.run(run_async_agent())\n```\n\n### Configuring Two Parallel Agents with WorkFlow\n\n```python\nfrom dynamiq import Workflow\nfrom dynamiq.nodes.llms import OpenAI\nfrom dynamiq.connections import OpenAI as OpenAIConnection\nfrom dynamiq.nodes.agents import Agent\n\n# Setup your LLM\nllm = OpenAI(\n    connection=OpenAIConnection(api_key=\"OPENAI_API_KEY\"),\n    model=\"gpt-4o\",\n    temperature=0.1,\n)\n\n# Define the first agent: a question answering agent\nfirst_agent = Agent(\n    name=\"Expert Agent\",\n    llm=llm,\n    role=\"Professional writer with the goal of producing well-written and informative responses.\",\n    id=\"agent_1\",\n    max_loops=5\n)\n\n# Define the second agent: a poetic writer\nsecond_agent = Agent(\n    name=\"Poetic Rewriter Agent\",\n    llm=llm,\n    role=\"Professional writer with the goal of rewriting user input as a poem without changing its meaning.\",\n    id=\"agent_2\",\n    max_loops=5\n)\n\n\n# Create a workflow to run both agents with the same input\n# The `Workflow` class simplifies setting up and executing a series of 
nodes in a pipeline.\n# It automatically handles running the agents in parallel where possible.\nwf = Workflow()\nwf.flow.add_nodes(first_agent)\nwf.flow.add_nodes(second_agent)\n\n# Equivalent alternative way to define the workflow:\n# from dynamiq.flows import Flow\n# wf = Workflow(flow=Flow(nodes=[first_agent, second_agent]))\n\n# Run the workflow with an input\nresult = wf.run(\n    input_data={\"input\": \"How are sin(x) and cos(x) connected in electrodynamics?\"},\n)\n\n# Print the input and output for both agents\nprint('--- Agent 1: Input ---\\n', result.output[first_agent.id].get(\"input\").get('input'))\nprint('--- Agent 1: Output ---\\n', result.output[first_agent.id].get(\"output\").get('content'))\nprint('--- Agent 2: Input ---\\n', result.output[second_agent.id].get(\"input\").get('input'))\nprint('--- Agent 2: Output ---\\n', result.output[second_agent.id].get(\"output\").get('content'))\n```\n\n### Configuring Two Sequential Agents with WorkFlow\n\n```python\nfrom dynamiq import Workflow\nfrom dynamiq.nodes.llms import OpenAI\nfrom dynamiq.connections import OpenAI as OpenAIConnection\nfrom dynamiq.nodes.agents import Agent\n\nfrom dynamiq.nodes.node import InputTransformer, NodeDependency\n\n# Setup your LLM\nllm = OpenAI(\n    connection=OpenAIConnection(api_key=\"OPENAI_API_KEY\"),\n    model=\"gpt-4o\",\n    temperature=0.1,\n)\n\nfirst_agent = Agent(\n    name=\"Expert Agent\",\n    llm=llm,\n    role=\"Professional writer with the goal of producing well-written and informative responses.\",  # Role of the agent\n    id=\"agent_1\",\n    max_loops=5\n)\n\nsecond_agent = Agent(\n    name=\"Poetic Rewriter Agent\",\n    llm=llm,\n    role=\"Professional writer with the goal of rewriting user input as a poem without changing its meaning.\",  # Role of the agent\n    id=\"agent_2\",\n    depends=[NodeDependency(first_agent)],  # Set dependency on the first agent\n    input_transformer=InputTransformer(\n        selector={\"input\": 
f\"${[first_agent.id]}.output.content\"}  # Extract the output of the first agent as input\n    ),\n    max_loops=5\n)\n\n# Create a workflow to run the agents sequentially based on dependencies.\n# Without a workflow, you would need to run `first_agent`, collect its output,\n# and then manually pass that output as input to `second_agent`. The workflow automates this process.\nwf = Workflow()\nwf.flow.add_nodes(first_agent)\nwf.flow.add_nodes(second_agent)\n\n# Equivalent alternative way to define the workflow:\n# from dynamiq.flows import Flow\n# wf = Workflow(flow=Flow(nodes=[first_agent, second_agent]))\n\n# Run the workflow with an input\nresult = wf.run(\n    input_data={\"input\": \"How are sin(x) and cos(x) connected in electrodynamics?\"},\n)\n\n# Print the input and output for both agents\nprint('--- Agent 1: Input ---\\n', result.output[first_agent.id].get(\"input\").get('input'))\nprint('--- Agent 1: Output ---\\n', result.output[first_agent.id].get(\"output\").get('content'))\nprint('--- Agent 2: Input ---\\n', result.output[second_agent.id].get(\"input\").get('input'))\nprint('--- Agent 2: Output ---\\n', result.output[second_agent.id].get(\"output\").get('content'))\n```\n\n### Multi-agent orchestration\n```python\nfrom dynamiq import Workflow\nfrom dynamiq.connections import OpenAI as OpenAIConnection, ScaleSerp as ScaleSerpConnection\nfrom dynamiq.flows import Flow\nfrom dynamiq.nodes.agents import Agent\nfrom dynamiq.nodes.llms import OpenAI\nfrom dynamiq.nodes.tools.scale_serp import ScaleSerpTool\nfrom dynamiq.nodes.types import Behavior, InferenceMode\n\nllm = OpenAI(\n    connection=OpenAIConnection(api_key=\"OPENAI_API_KEY\"),\n    model=\"gpt-4o\",\n    temperature=0.1,\n)\n\nsearch_tool = ScaleSerpTool(connection=ScaleSerpConnection(api_key=\"SCALESERP_API_KEY\"))\n\nresearch_agent = Agent(\n    name=\"Research Analyst\",\n    role=\"Find recent market news and provide referenced highlights.\",\n    llm=llm,\n    tools=[search_tool],\n    
inference_mode=InferenceMode.XML,\n    max_loops=6,\n    behaviour_on_max_loops=Behavior.RETURN,\n)\n\nwriter_agent = Agent(\n    name=\"Brief Writer\",\n    role=\"Turn research highlights into a concise executive brief.\",\n    llm=llm,\n    inference_mode=InferenceMode.XML,\n    max_loops=4,\n    behaviour_on_max_loops=Behavior.RETURN,\n)\n\nmanager_agent = Agent(\n    name=\"Manager\",\n    role=(\n        \"Delegate research and writing to sub-agents.\\n\"\n        \"Always call tools with {'input': '\u003Ctask>'} payloads and assemble the final brief.\"\n    ),\n    llm=llm,\n    tools=[research_agent, writer_agent],\n    inference_mode=InferenceMode.XML,\n    parallel_tool_calls_enabled=True,\n    max_loops=8,\n    behaviour_on_max_loops=Behavior.RETURN,\n)\n\nworkflow = Workflow(flow=Flow(nodes=[manager_agent]))\n\nresult = workflow.run(\n    input_data={\"input\": \"Summarize the latest developments in battery technology for investors.\"},\n)\n\nprint(result.output[manager_agent.id][\"output\"][\"content\"])\n\n```\n\n### RAG - document indexing flow\nThis workflow takes input PDF files, pre-processes them, converts them to vector embeddings, and stores them in the Pinecone vector database.\nThe example provided is for an existing index in Pinecone. 
You can find examples for index creation on the `docs\u002Ftutorials\u002Frag` page.\n\n```python\nfrom io import BytesIO\n\nfrom dynamiq import Workflow\nfrom dynamiq.connections import OpenAI as OpenAIConnection, Pinecone as PineconeConnection\nfrom dynamiq.nodes.converters import PyPDFConverter\nfrom dynamiq.nodes.splitters.document import DocumentSplitter\nfrom dynamiq.nodes.embedders import OpenAIDocumentEmbedder\nfrom dynamiq.nodes.writers import PineconeDocumentWriter\n\nrag_wf = Workflow()\n\n# PyPDF document converter\nconverter = PyPDFConverter(document_creation_mode=\"one-doc-per-page\")\nrag_wf.flow.add_nodes(converter)  # add node to the DAG\n\n# Document splitter\ndocument_splitter = (\n    DocumentSplitter(\n        split_by=\"sentence\",\n        split_length=10,\n        split_overlap=1,\n    )\n    .inputs(documents=converter.outputs.documents)  # map converter node output to the expected input of the current node\n    .depends_on(converter)\n)\nrag_wf.flow.add_nodes(document_splitter)\n\n# OpenAI vector embeddings\nembedder = (\n    OpenAIDocumentEmbedder(\n        connection=OpenAIConnection(api_key=\"OPENAI_API_KEY\"),\n        model=\"text-embedding-3-small\",\n    )\n    .inputs(documents=document_splitter.outputs.documents)\n    .depends_on(document_splitter)\n)\nrag_wf.flow.add_nodes(embedder)\n\n# Pinecone vector storage\nvector_store = (\n    PineconeDocumentWriter(\n        connection=PineconeConnection(api_key=\"PINECONE_API_KEY\"),\n        index_name=\"default\",\n        dimension=1536,\n    )\n    .inputs(documents=embedder.outputs.documents)\n    .depends_on(embedder)\n)\nrag_wf.flow.add_nodes(vector_store)\n\n# Prepare input PDF files\nfile_paths = [\"example.pdf\"]\ninput_data = {\n    \"files\": [\n        BytesIO(open(path, \"rb\").read()) for path in file_paths\n    ],\n    \"metadata\": [\n        {\"filename\": path} for path in file_paths\n    ],\n}\n\n# Run RAG indexing flow\nrag_wf.run(input_data=input_data)\n```\n\n### 
RAG - document retrieval flow\nSimple retrieval RAG flow that searches for relevant documents and answers the original user question using retrieved documents.\n\n```python\nfrom dynamiq import Workflow\nfrom dynamiq.connections import OpenAI as OpenAIConnection, Pinecone as PineconeConnection\nfrom dynamiq.nodes.embedders import OpenAITextEmbedder\nfrom dynamiq.nodes.retrievers import PineconeDocumentRetriever\nfrom dynamiq.nodes.llms import OpenAI\nfrom dynamiq.prompts import Message, Prompt\n\n# Initialize the RAG retrieval workflow\nretrieval_wf = Workflow()\n\n# Shared OpenAI connection\nopenai_connection = OpenAIConnection(api_key=\"OPENAI_API_KEY\")\n\n# OpenAI text embedder for query embedding\nembedder = OpenAITextEmbedder(\n    connection=openai_connection,\n    model=\"text-embedding-3-small\",\n)\nretrieval_wf.flow.add_nodes(embedder)\n\n# Pinecone document retriever\ndocument_retriever = (\n    PineconeDocumentRetriever(\n        connection=PineconeConnection(api_key=\"PINECONE_API_KEY\"),\n        index_name=\"default\",\n        dimension=1536,\n        top_k=5,\n    )\n    .inputs(embedding=embedder.outputs.embedding)\n    .depends_on(embedder)\n)\nretrieval_wf.flow.add_nodes(document_retriever)\n\n# Define the prompt template\nprompt_template = \"\"\"\nPlease answer the question based on the provided context.\n\nQuestion: {{ query }}\n\nContext:\n{% for document in documents %}\n- {{ document.content }}\n{% endfor %}\n\n\"\"\"\n\n# OpenAI LLM for answer generation\nprompt = Prompt(messages=[Message(content=prompt_template, role=\"user\")])\n\nanswer_generator = (\n    OpenAI(\n        connection=openai_connection,\n        model=\"gpt-4o\",\n        prompt=prompt,\n    )\n    .inputs(\n        documents=document_retriever.outputs.documents,\n        query=embedder.outputs.query,\n    )  # take documents from the vector store node and query from the embedder\n    .depends_on([document_retriever, 
embedder])\n)\nretrieval_wf.flow.add_nodes(answer_generator)\n\n# Run the RAG retrieval flow\nquestion = \"What are the line items provided in the invoice?\"\nresult = retrieval_wf.run(input_data={\"query\": question})\n\nanswer = result.output.get(answer_generator.id).get(\"output\", {}).get(\"content\")\nprint(answer)\n```\n\n### Simple Chatbot with Memory\nA simple chatbot that uses the `Memory` module to store and retrieve conversation history.\n\n```python\nfrom dynamiq.connections import OpenAI as OpenAIConnection\nfrom dynamiq.memory import Memory\nfrom dynamiq.memory.backends.in_memory import InMemory\nfrom dynamiq.nodes.agents import Agent\nfrom dynamiq.nodes.llms import OpenAI\n\nAGENT_ROLE = \"helpful assistant, goal is to provide useful information and answer questions\"\nllm = OpenAI(\n    connection=OpenAIConnection(api_key=\"OPENAI_API_KEY\"),\n    model=\"gpt-4o\",\n    temperature=0.1,\n)\n\nmemory = Memory(backend=InMemory())\n\nagent = Agent(\n    name=\"Agent\",\n    llm=llm,\n    role=AGENT_ROLE,\n    id=\"agent\",\n    memory=memory,\n)\n\n\ndef main():\n    print(\"Welcome to the AI Chat! 
(Type 'exit' to end)\")\n    while True:\n        user_input = input(\"You: \")\n        user_id = \"user\"\n        session_id = \"session\"\n        if user_input.lower() == \"exit\":\n            break\n\n        response = agent.run({\"input\": user_input, \"user_id\": user_id, \"session_id\": session_id})\n        response_content = response.output.get(\"content\")\n        print(f\"AI: {response_content}\")\n\n\nif __name__ == \"__main__\":\n    main()\n```\n\n### Graph Orchestrator\nThe Graph Orchestrator lets you create any architecture tailored to a specific use case.\nThe example below is a simple workflow that manages an iterative process of email feedback and refinement.\n\n```python\nfrom typing import Any\n\nfrom dynamiq.connections import OpenAI as OpenAIConnection\nfrom dynamiq.nodes.agents.orchestrators.graph import END, START, GraphOrchestrator\nfrom dynamiq.nodes.agents.orchestrators.graph_manager import GraphAgentManager\nfrom dynamiq.nodes.agents import Agent\nfrom dynamiq.nodes.llms import OpenAI\n\nllm = OpenAI(\n    connection=OpenAIConnection(api_key=\"OPENAI_API_KEY\"),\n    model=\"gpt-4o\",\n    temperature=0.1,\n)\n\nemail_writer = Agent(\n    name=\"email-writer-agent\",\n    llm=llm,\n    role=\"Write personalized emails taking into account feedback.\",\n)\n\n\ndef gather_feedback(context: dict[str, Any], **kwargs):\n    \"\"\"Gather feedback about email draft.\"\"\"\n    feedback = input(\n        f\"Email draft:\\n\"\n        f\"{context.get('history', [{}])[-1].get('content', 'No draft')}\\n\"\n        f\"Type in SEND to send email, CANCEL to exit, or provide feedback to refine email: \\n\"\n    )\n\n    reiterate = True\n\n    result = f\"Gathered feedback: {feedback}\"\n\n    feedback = feedback.strip().lower()\n    if feedback == \"send\":\n        print(\"####### Email was sent! #######\")\n        result = \"Email was sent!\"\n        reiterate = False\n    elif feedback == \"cancel\":\n        print(\"####### Email was canceled! 
#######\")\n        result = \"Email was canceled!\"\n        reiterate = False\n\n    return {\"result\": result, \"reiterate\": reiterate}\n\n\ndef router(context: dict[str, Any], **kwargs):\n    \"\"\"Determines next state based on provided feedback.\"\"\"\n    if context.get(\"reiterate\", False):\n        return \"generate_sketch\"\n\n    return END\n\n\norchestrator = GraphOrchestrator(\n    name=\"Graph orchestrator\",\n    manager=GraphAgentManager(llm=llm),\n)\n\n# Attach tasks to the states. These tasks will be executed when the respective state is triggered.\norchestrator.add_state_by_tasks(\"generate_sketch\", [email_writer])\norchestrator.add_state_by_tasks(\"gather_feedback\", [gather_feedback])\n\n# Define the flow between states by adding edges.\n# This configuration creates the sequence of states from START -> \"generate_sketch\" -> \"gather_feedback\".\norchestrator.add_edge(START, \"generate_sketch\")\norchestrator.add_edge(\"generate_sketch\", \"gather_feedback\")\n\n# Add a conditional edge to the \"gather_feedback\" state, allowing the flow to branch based on a condition.\n# The router function will determine whether the flow should go to \"generate_sketch\" (reiterate) or END (finish the process).\norchestrator.add_conditional_edge(\"gather_feedback\", [\"generate_sketch\", END], router)\n\n\nif __name__ == \"__main__\":\n    print(\"Welcome to email writer.\")\n    email_details = input(\"Provide email details: \")\n    orchestrator.run(input_data={\"input\": f\"Write and post email, provide feedback about status of email: {email_details}\"})\n```\n\n## Contributing\n\nWe love contributions! Whether it's bug reports, feature requests, or pull requests, head over to our [CONTRIBUTING.md](CONTRIBUTING.md) to see how you can help.\n\n## License\n\nDynamiq is open-source and available under the [Apache 2 License](LICENSE).\n\nHappy coding! 
🚀\n","\u003Cp align=\"center\">\n  \u003Ca href=\"https:\u002F\u002Fwww.getdynamiq.ai\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdynamiq-ai_dynamiq_readme_5a543c644ef0.png\" alt=\"Dynamiq\">\u003C\u002Fa>\n\u003C\u002Fp>\n\n\n\u003Cp align=\"center\">\n    \u003Cem>Dynamiq 是一个用于智能体（agentic）AI 和大语言模型（LLM）应用的编排框架\u003C\u002Fem>\n\u003C\u002Fp>\n\n\u003Cp align=\"center\">\n  \u003Ca href=\"https:\u002F\u002Fgetdynamiq.ai\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fwebsite?label=website&up_message=online&url=https%3A%2F%2Fgetdynamiq.ai\" alt=\"Website\">\n  \u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Freleases\" target=\"_blank\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Frelease\u002Fdynamiq-ai\u002Fdynamiq\" alt=\"Release Notes\">\n  \u003C\u002Fa>\n  \u003Ca href=\"#\" target=\"_blank\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FPython-3.10%2B-brightgreen.svg\" alt=\"Python 3.10+\">\n  \u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fblob\u002Fmain\u002FLICENSE\" target=\"_blank\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLicense-Apache_2.0-blue.svg\" alt=\"License\">\n  \u003C\u002Fa>\n  \u003Ca href=\"https:\u002F\u002Fdynamiq-ai.github.io\u002Fdynamiq\" target=\"_blank\">\n    \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fwebsite?label=documentation&up_message=online&url=https%3A%2F%2Fdynamiq-ai.github.io%2Fdynamiq\" alt=\"Documentation\">\n  \u003C\u002Fa>\n\u003C\u002Fp>\n\n\n欢迎使用 Dynamiq！🤖\n\nDynamiq 是您的一站式生成式 AI（Gen AI）框架，旨在简化 AI 驱动应用的开发。Dynamiq 专注于编排检索增强生成（RAG, Retrieval-Augmented Generation）和大语言模型（LLM, Large Language Model）智能体（agents）。\n\n## 快速开始\n\n准备好了吗？以下是使用 Dynamiq 的入门方法：\n\n### 安装\n\n首先，安装 Dynamiq。您需要 Python，请确保已在您的机器上安装。然后运行：\n\n```sh\npip install dynamiq\n```\n\n或者从源代码构建 Python 包：\n```sh\ngit clone 
https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq.git\ncd dynamiq\npoetry install\n```\n\n## 文档\n更多示例和详细指南，请参阅我们的 [文档](https:\u002F\u002Fdynamiq-ai.github.io\u002Fdynamiq)。\n\n## 示例\n\n### 简单的 LLM 流程\n\n以下是一个简单的示例，帮助您快速上手 Dynamiq：\n\n```python\nfrom dynamiq.nodes.llms.openai import OpenAI\nfrom dynamiq.connections import OpenAI as OpenAIConnection\nfrom dynamiq.prompts import Prompt, Message\n\n# 定义翻译用的提示模板\nprompt_template = \"\"\"\n将以下文本翻译成英文：{{ text }}\n\"\"\"\n\n# 使用定义的模板创建 Prompt 对象\nprompt = Prompt(messages=[Message(content=prompt_template, role=\"user\")])\n\n# 设置您的 LLM（大语言模型）节点\nllm = OpenAI(\n    id=\"openai\",  # 节点的唯一标识符\n    connection=OpenAIConnection(api_key=\"OPENAI_API_KEY\"),  # 使用 API 密钥的连接\n    model=\"gpt-4o\",  # 要使用的模型\n    temperature=0.3,  # 模型的采样温度\n    max_tokens=1000,  # 输出的最大 token 数量\n    prompt=prompt  # 模型使用的提示\n)\n\n# 使用输入数据运行 LLM 节点\nresult = llm.run(\n    input_data={\n        \"text\": \"Hola Mundo!\"  # 要翻译的文本\n    }\n)\n\n# 打印翻译结果\nprint(result.output)\n```\n\n\n### 带异步执行的简单 ReAct 智能体\n一个可访问 E2B 代码解释器并能解决复杂编码任务的智能体。\n\n```python\nfrom dynamiq.nodes.llms.openai import OpenAI\nfrom dynamiq.connections import OpenAI as OpenAIConnection, E2B as E2BConnection\nfrom dynamiq.nodes.agents import Agent\nfrom dynamiq.nodes.tools.e2b_sandbox import E2BInterpreterTool\n\n# 初始化 E2B 工具\ne2b_tool = E2BInterpreterTool(\n    connection=E2BConnection(api_key=\"E2B_API_KEY\")\n)\n\n# 设置您的 LLM\nllm = OpenAI(\n    id=\"openai\",\n    connection=OpenAIConnection(api_key=\"OPENAI_API_KEY\"),\n    model=\"gpt-4o\",\n    temperature=0.3,\n    max_tokens=1000,\n)\n\n# 创建智能体\nagent = Agent(\n    name=\"react-agent\",\n    llm=llm, # 语言模型实例\n    tools=[e2b_tool],  # 智能体可使用的工具列表\n    role=\"高级数据科学家\",  # 智能体的角色\n    max_loops=10, # 处理循环次数上限\n)\n\nasync def run_async_agent():\n    # 异步运行智能体并传入输入\n    result = await agent.run(\n        input_data={\n            \"input\": \"将前 10 个数字相加，并判断结果是否为质数。\",\n        }\n    )\n\n    
print(result.output.get(\"content\"))\n\n\n# 执行异步函数\nimport asyncio\n\nif __name__ == \"__main__\":\n    asyncio.run(run_async_agent())\n```\n\n### 使用 WorkFlow 配置两个并行智能体\n\n```python\nfrom dynamiq import Workflow\nfrom dynamiq.nodes.llms import OpenAI\nfrom dynamiq.connections import OpenAI as OpenAIConnection\nfrom dynamiq.nodes.agents import Agent\n\n# 设置您的 LLM\nllm = OpenAI(\n    connection=OpenAIConnection(api_key=\"OPENAI_API_KEY\"),\n    model=\"gpt-4o\",\n    temperature=0.1,\n)\n\n# 定义第一个智能体：问答智能体\nfirst_agent = Agent(\n    name=\"专家智能体\",\n    llm=llm,\n    role=\"专业写手，目标是生成条理清晰且信息丰富的回答。\",\n    id=\"agent_1\",\n    max_loops=5\n)\n\n# 定义第二个智能体：诗歌重写智能体\nsecond_agent = Agent(\n    name=\"诗歌重写智能体\",\n    llm=llm,\n    role=\"专业写手，目标是将用户输入重写为诗歌，同时不改变其原意。\",\n    id=\"agent_2\",\n    max_loops=5\n)\n\n\n# 创建一个工作流，以相同输入同时运行两个智能体\n# `Workflow` 类简化了在流水线中设置和执行一系列节点的过程。\n# 它会自动尽可能地并行运行智能体。\nwf = Workflow()\nwf.flow.add_nodes(first_agent)\nwf.flow.add_nodes(second_agent)\n\n# 定义工作流的等效替代方式：\n# from dynamiq.flows import Flow\n# wf = Workflow(flow=Flow(nodes=[first_agent, second_agent]))\n\n# 使用输入运行工作流\nresult = wf.run(\n    input_data={\"input\": \"在电动力学中，sin(x) 和 cos(x) 是如何关联的？\"},\n)\n\n# 打印两个智能体的输入和输出\nprint('--- 智能体 1: 输入 ---\\n', result.output[first_agent.id].get(\"input\").get('input'))\nprint('--- 智能体 1: 输出 ---\\n', result.output[first_agent.id].get(\"output\").get('content'))\nprint('--- 智能体 2: 输入 ---\\n', result.output[second_agent.id].get(\"input\").get('input'))\nprint('--- 智能体 2: 输出 ---\\n', result.output[second_agent.id].get(\"output\").get('content'))\n```\n\n### 使用 WorkFlow 配置两个串行智能体\n\n```python\nfrom dynamiq import Workflow\nfrom dynamiq.nodes.llms import OpenAI\nfrom dynamiq.connections import OpenAI as OpenAIConnection\nfrom dynamiq.nodes.agents import Agent\n\nfrom dynamiq.nodes.node import InputTransformer, NodeDependency\n\n# 设置你的 LLM（大语言模型）\nllm = OpenAI(\n    connection=OpenAIConnection(api_key=\"OPENAI_API_KEY\"),\n    model=\"gpt-4o\",\n    
temperature=0.1,\n)\n\nfirst_agent = Agent(\n    name=\"Expert Agent\",\n    llm=llm,\n    role=\"Professional writer with the goal of producing well-written and informative responses.\",  # Agent 的角色\n    id=\"agent_1\",\n    max_loops=5\n)\n\nsecond_agent = Agent(\n    name=\"Poetic Rewriter Agent\",\n    llm=llm,\n    role=\"Professional writer with the goal of rewriting user input as a poem without changing its meaning.\",  # Agent 的角色\n    id=\"agent_2\",\n    depends=[NodeDependency(first_agent)],  # 设置对第一个 agent 的依赖\n    input_transformer=InputTransformer(\n        selector={\"input\": f\"${[first_agent.id]}.output.content\"}  # 将第一个 agent 的输出提取为输入\n    ),\n    max_loops=5\n)\n\n# 创建一个工作流（Workflow），根据依赖关系依次运行 agents。\n# 如果没有工作流，你需要手动先运行 `first_agent`，收集其输出，\n# 然后再手动将该输出作为输入传递给 `second_agent`。工作流自动完成这一过程。\nwf = Workflow()\nwf.flow.add_nodes(first_agent)\nwf.flow.add_nodes(second_agent)\n\n# 定义工作流的等效替代方式：\n# from dynamiq.flows import Flow\n# wf = Workflow(flow=Flow(nodes=[first_agent, second_agent]))\n\n# 使用输入运行工作流\nresult = wf.run(\n    input_data={\"input\": \"How are sin(x) and cos(x) connected in electrodynamics?\"},\n)\n\n# 打印两个 agents 的输入和输出\nprint('--- Agent 1: Input ---\\n', result.output[first_agent.id].get(\"input\").get('input'))\nprint('--- Agent 1: Output ---\\n', result.output[first_agent.id].get(\"output\").get('content'))\nprint('--- Agent 2: Input ---\\n', result.output[second_agent.id].get(\"input\").get('input'))\nprint('--- Agent 2: Output ---\\n', result.output[second_agent.id].get(\"output\").get('content'))\n```\n\n### 多智能体编排（Multi-agent orchestration）\n```python\nfrom dynamiq import Workflow\nfrom dynamiq.connections import OpenAI as OpenAIConnection, ScaleSerp as ScaleSerpConnection\nfrom dynamiq.flows import Flow\nfrom dynamiq.nodes.agents import Agent\nfrom dynamiq.nodes.llms import OpenAI\nfrom dynamiq.nodes.tools.scale_serp import ScaleSerpTool\nfrom dynamiq.nodes.types import Behavior, InferenceMode\n\nllm = OpenAI(\n    
connection=OpenAIConnection(api_key=\"OPENAI_API_KEY\"),\n    model=\"gpt-4o\",\n    temperature=0.1,\n)\n\nsearch_tool = ScaleSerpTool(connection=ScaleSerpConnection(api_key=\"SCALESERP_API_KEY\"))\n\nresearch_agent = Agent(\n    name=\"Research Analyst\",\n    role=\"Find recent market news and provide referenced highlights.\",\n    llm=llm,\n    tools=[search_tool],\n    inference_mode=InferenceMode.XML,\n    max_loops=6,\n    behaviour_on_max_loops=Behavior.RETURN,\n)\n\nwriter_agent = Agent(\n    name=\"Brief Writer\",\n    role=\"Turn research highlights into a concise executive brief.\",\n    llm=llm,\n    inference_mode=InferenceMode.XML,\n    max_loops=4,\n    behaviour_on_max_loops=Behavior.RETURN,\n)\n\nmanager_agent = Agent(\n    name=\"Manager\",\n    role=(\n        \"Delegate research and writing to sub-agents.\\n\"\n        \"Always call tools with {'input': '\u003Ctask>'} payloads and assemble the final brief.\"\n    ),\n    llm=llm,\n    tools=[research_agent, writer_agent],\n    inference_mode=InferenceMode.XML,\n    parallel_tool_calls_enabled=True,\n    max_loops=8,\n    behaviour_on_max_loops=Behavior.RETURN,\n)\n\nworkflow = Workflow(flow=Flow(nodes=[manager_agent]))\n\nresult = workflow.run(\n    input_data={\"input\": \"Summarize the latest developments in battery technology for investors.\"},\n)\n\nprint(result.output[manager_agent.id][\"output\"][\"content\"])\n```\n\n### RAG（检索增强生成）- 文档索引流程\n此工作流接收输入的 PDF 文件，对其进行预处理，转换为向量嵌入（vector embeddings），并存储到 Pinecone 向量数据库中。  \n此处示例使用的是 Pinecone 中已存在的索引。你可以在 `docs\u002Ftutorials\u002Frag` 页面找到创建索引的示例。\n\n```python\nfrom io import BytesIO\n\nfrom dynamiq import Workflow\nfrom dynamiq.connections import OpenAI as OpenAIConnection, Pinecone as PineconeConnection\nfrom dynamiq.nodes.converters import PyPDFConverter\nfrom dynamiq.nodes.splitters.document import DocumentSplitter\nfrom dynamiq.nodes.embedders import OpenAIDocumentEmbedder\nfrom dynamiq.nodes.writers import PineconeDocumentWriter\n\nrag_wf 
= Workflow()\n\n# PyPDF 文档转换器\nconverter = PyPDFConverter(document_creation_mode=\"one-doc-per-page\")\nrag_wf.flow.add_nodes(converter)  # 将节点添加到 DAG（有向无环图）\n\n# 文档切分器\ndocument_splitter = (\n    DocumentSplitter(\n        split_by=\"sentence\",\n        split_length=10,\n        split_overlap=1,\n    )\n    .inputs(documents=converter.outputs.documents)  # 将 converter 节点的输出映射为当前节点的预期输入\n    .depends_on(converter)\n)\nrag_wf.flow.add_nodes(document_splitter)\n\n# OpenAI 向量嵌入\nembedder = (\n    OpenAIDocumentEmbedder(\n        connection=OpenAIConnection(api_key=\"OPENAI_API_KEY\"),\n        model=\"text-embedding-3-small\",\n    )\n    .inputs(documents=document_splitter.outputs.documents)\n    .depends_on(document_splitter)\n)\nrag_wf.flow.add_nodes(embedder)\n\n# Pinecone 向量存储\nvector_store = (\n    PineconeDocumentWriter(\n        connection=PineconeConnection(api_key=\"PINECONE_API_KEY\"),\n        index_name=\"default\",\n        dimension=1536,\n    )\n    .inputs(documents=embedder.outputs.documents)\n    .depends_on(embedder)\n)\nrag_wf.flow.add_nodes(vector_store)\n\n# 准备输入的 PDF 文件\nfile_paths = [\"example.pdf\"]\ninput_data = {\n    \"files\": [\n        BytesIO(open(path, \"rb\").read()) for path in file_paths\n    ],\n    \"metadata\": [\n        {\"filename\": path} for path in file_paths\n    ],\n}\n\n# 运行 RAG 索引流程\nrag_wf.run(input_data=input_data)\n```\n\n### RAG - 文档检索流程\n一个简单的检索型 RAG 流程，用于搜索相关文档，并利用检索到的文档回答用户的原始问题。\n\n```python\nfrom dynamiq import Workflow\nfrom dynamiq.connections import OpenAI as OpenAIConnection, Pinecone as PineconeConnection\nfrom dynamiq.nodes.embedders import OpenAITextEmbedder\nfrom dynamiq.nodes.retrievers import PineconeDocumentRetriever\nfrom dynamiq.nodes.llms import OpenAI\nfrom dynamiq.prompts import Message, Prompt\n\n# 初始化 RAG 检索工作流\nretrieval_wf = Workflow()\n\n# 共享的 OpenAI 连接\nopenai_connection = OpenAIConnection(api_key=\"OPENAI_API_KEY\")\n\n# 用于查询嵌入的 OpenAI 文本嵌入器\nembedder = OpenAITextEmbedder(\n    
connection=openai_connection,\n    model=\"text-embedding-3-small\",\n)\nretrieval_wf.flow.add_nodes(embedder)\n\n# Pinecone 文档检索器\ndocument_retriever = (\n    PineconeDocumentRetriever(\n        connection=PineconeConnection(api_key=\"PINECONE_API_KEY\"),\n        index_name=\"default\",\n        dimension=1536,\n        top_k=5,\n    )\n    .inputs(embedding=embedder.outputs.embedding)\n    .depends_on(embedder)\n)\nretrieval_wf.flow.add_nodes(document_retriever)\n\n# 定义提示模板（prompt template）\nprompt_template = \"\"\"\n请根据提供的上下文回答问题。\n\n问题: {{ query }}\n\n上下文:\n{% for document in documents %}\n- {{ document.content }}\n{% endfor %}\n\n\"\"\"\n\n# 使用 OpenAI LLM 生成答案\nprompt = Prompt(messages=[Message(content=prompt_template, role=\"user\")])\n\nanswer_generator = (\n    OpenAI(\n        connection=openai_connection,\n        model=\"gpt-4o\",\n        prompt=prompt,\n    )\n    .inputs(\n        documents=document_retriever.outputs.documents,\n        query=embedder.outputs.query,\n    )  # 从向量存储节点获取文档，从嵌入器（embedder）获取查询\n    .depends_on([document_retriever, embedder])\n)\nretrieval_wf.flow.add_nodes(answer_generator)\n\n# 运行 RAG（检索增强生成）检索流程\nquestion = \"发票中提供了哪些明细项目？\"\nresult = retrieval_wf.run(input_data={\"query\": question})\n\nanswer = result.output.get(answer_generator.id).get(\"output\", {}).get(\"content\")\nprint(answer)\n```\n\n### 带记忆功能的简单聊天机器人\n一个使用 `Memory` 模块存储和检索对话历史的简单聊天机器人。\n\n```python\nfrom dynamiq.connections import OpenAI as OpenAIConnection\nfrom dynamiq.memory import Memory\nfrom dynamiq.memory.backends.in_memory import InMemory\nfrom dynamiq.nodes.agents import Agent\nfrom dynamiq.nodes.llms import OpenAI\n\nAGENT_ROLE = \"乐于助人的助手，目标是提供有用的信息并回答问题\"\nllm = OpenAI(\n    connection=OpenAIConnection(api_key=\"OPENAI_API_KEY\"),\n    model=\"gpt-4o\",\n    temperature=0.1,\n)\n\nmemory = Memory(backend=InMemory())\n\nagent = Agent(\n    name=\"Agent\",\n    llm=llm,\n    role=AGENT_ROLE,\n    id=\"agent\",\n    memory=memory,\n)\n\n\ndef 
main():\n    print(\"欢迎使用 AI 聊天！（输入 'exit' 退出）\")\n    while True:\n        user_input = input(\"你: \")\n        user_id = \"user\"\n        session_id = \"session\"\n        if user_input.lower() == \"exit\":\n            break\n\n        response = agent.run({\"input\": user_input, \"user_id\": user_id, \"session_id\": session_id})\n        response_content = response.output.get(\"content\")\n        print(f\"AI: {response_content}\")\n\n\nif __name__ == \"__main__\":\n    main()\n```\n\n### 图编排器（Graph Orchestrator）\n图编排器允许创建针对特定用例定制的任意架构。  \n以下是一个管理邮件反馈与迭代优化流程的简单工作流示例。\n\n```python\nfrom typing import Any\n\nfrom dynamiq.connections import OpenAI as OpenAIConnection\nfrom dynamiq.nodes.agents.orchestrators.graph import END, START, GraphOrchestrator\nfrom dynamiq.nodes.agents.orchestrators.graph_manager import GraphAgentManager\nfrom dynamiq.nodes.agents import Agent\nfrom dynamiq.nodes.llms import OpenAI\n\nllm = OpenAI(\n    connection=OpenAIConnection(api_key=\"OPENAI_API_KEY\"),\n    model=\"gpt-4o\",\n    temperature=0.1,\n)\n\nemail_writer = Agent(\n    name=\"email-writer-agent\",\n    llm=llm,\n    role=\"根据反馈撰写个性化邮件。\",\n)\n\n\ndef gather_feedback(context: dict[str, Any], **kwargs):\n    \"\"\"收集关于邮件草稿的反馈。\"\"\"\n    feedback = input(\n        f\"邮件草稿:\\n\"\n        f\"{context.get('history', [{}])[-1].get('content', '无草稿')}\\n\"\n        f\"输入 SEND 发送邮件，CANCEL 退出，或提供反馈以优化邮件：\\n\"\n    )\n\n    reiterate = True\n\n    result = f\"收集到的反馈: {feedback}\"\n\n    feedback = feedback.strip().lower()\n    if feedback == \"send\":\n        print(\"####### 邮件已发送！ #######\")\n        result = \"邮件已发送！\"\n        reiterate = False\n    elif feedback == \"cancel\":\n        print(\"####### 邮件已取消！ #######\")\n        result = \"邮件已取消！\"\n        reiterate = False\n\n    return {\"result\": result, \"reiterate\": reiterate}\n\n\ndef router(context: dict[str, Any], **kwargs):\n    \"\"\"根据提供的反馈决定下一个状态。\"\"\"\n    if context.get(\"reiterate\", False):\n        return 
\"generate_sketch\"\n\n    return END\n\n\norchestrator = GraphOrchestrator(\n    name=\"Graph orchestrator\",\n    manager=GraphAgentManager(llm=llm),\n)\n\n# 将任务附加到状态。当相应状态被触发时，这些任务将被执行。\norchestrator.add_state_by_tasks(\"generate_sketch\", [email_writer])\norchestrator.add_state_by_tasks(\"gather_feedback\", [gather_feedback])\n\n# 通过添加边（edges）定义状态之间的流程。\n# 此配置创建了从 START -> \"generate_sketch\" -> \"gather_feedback\" 的状态序列。\norchestrator.add_edge(START, \"generate_sketch\")\norchestrator.add_edge(\"generate_sketch\", \"gather_feedback\")\n\n# 为 \"gather_feedback\" 状态添加条件边，使流程可根据条件进行分支。\n# router 函数将决定流程应转向 \"generate_sketch\"（继续迭代）还是 END（结束流程）。\norchestrator.add_conditional_edge(\"gather_feedback\", [\"generate_sketch\", END], router)\n\n\nif __name__ == \"__main__\":\n    print(\"欢迎使用邮件撰写工具。\")\n    email_details = input(\"请提供邮件详情：\")\n    orchestrator.run(input_data={\"input\": f\"撰写并发送邮件，并提供关于邮件状态的反馈：{email_details}\"})\n```\n\n## 贡献指南\n\n我们非常欢迎各种贡献！无论是提交 bug 报告、功能请求，还是发起 pull request，请查看我们的 [CONTRIBUTING.md](CONTRIBUTING.md) 了解如何参与。\n\n## 许可证\n\nDynamiq 是开源项目，采用 [Apache 2.0 许可证](LICENSE) 发布。\n\n祝你编码愉快！🚀","# Dynamiq 快速上手指南\n\nDynamiq 是一个用于编排智能体（Agentic AI）和大语言模型（LLM）应用的开源框架，特别适用于构建 RAG（检索增强生成）和多智能体系统。\n\n---\n\n## 环境准备\n\n- **操作系统**：支持 Linux、macOS 和 Windows\n- **Python 版本**：3.10 或更高版本\n- **依赖服务**（按需）：\n  - OpenAI API Key（用于调用 GPT 模型）\n  - 其他工具 API（如 E2B、ScaleSerp、Pinecone 等，根据示例需求）\n\n> 💡 建议使用虚拟环境（如 `venv` 或 `conda`）隔离依赖。\n\n---\n\n## 安装步骤\n\n### 方式一：通过 PyPI 安装（推荐）\n\n```sh\npip install dynamiq\n```\n\n> 若国内网络访问较慢，可尝试使用清华源加速：\n>\n> ```sh\n> pip install dynamiq -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n> ```\n\n### 方式二：从源码安装（适用于开发或最新特性）\n\n```sh\ngit clone https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq.git\ncd dynamiq\npoetry install\n```\n\n> 需提前安装 [Poetry](https:\u002F\u002Fpython-poetry.org\u002F)。\n\n---\n\n## 基本使用\n\n以下是最简单的 LLM 调用示例，实现文本翻译功能：\n\n```python\nfrom dynamiq.nodes.llms.openai import OpenAI\nfrom dynamiq.connections 
import OpenAI as OpenAIConnection\nfrom dynamiq.prompts import Prompt, Message\n\n# 定义提示模板\nprompt_template = \"\"\"\nTranslate the following text into English: {{ text }}\n\"\"\"\n\nprompt = Prompt(messages=[Message(content=prompt_template, role=\"user\")])\n\n# 配置 LLM 节点\nllm = OpenAI(\n    id=\"openai\",\n    connection=OpenAIConnection(api_key=\"OPENAI_API_KEY\"),  # 替换为你的 API Key\n    model=\"gpt-4o\",\n    temperature=0.3,\n    max_tokens=1000,\n    prompt=prompt\n)\n\n# 执行推理\nresult = llm.run(\n    input_data={\n        \"text\": \"Hola Mundo!\"\n    }\n)\n\nprint(result.output)\n```\n\n> ✅ 将 `OPENAI_API_KEY` 替换为你自己的 OpenAI API 密钥即可运行。\n\n更多高级用法（如多智能体、工作流、RAG 等）请参考 [官方文档](https:\u002F\u002Fdynamiq-ai.github.io\u002Fdynamiq)。","一家金融科技初创公司的数据团队需要构建一个智能投研助手，能自动分析上市公司财报、生成摘要，并执行Python代码进行财务指标计算。\n\n### 没有 dynamiq 时\n- 需手动拼接多个组件：分别调用LLM API、向量数据库检索和代码解释器，逻辑分散在不同模块中，维护成本高  \n- 实现ReAct（推理+行动）代理需自行编写复杂的状态管理与工具调度逻辑，容易出错且难以调试  \n- 异步执行多个AI步骤（如并行检索多份财报）需额外引入asyncio或任务队列框架，开发周期长  \n- 不同工具（如E2B代码沙箱）的输入输出格式不统一，需大量胶水代码做数据转换  \n\n### 使用 dynamiq 后\n- 通过声明式节点（Node）组合即可构建完整RAG+Agent流程，LLM、检索器和工具天然集成，代码结构清晰  \n- 内置ReAct代理支持自动规划与工具调用，只需定义角色和可用工具，无需手写决策循环  \n- 原生支持异步执行流，轻松实现“同时分析三份财报并汇总结果”的并发任务，提升响应速度  \n- 统一的数据接口自动处理各组件间的数据传递，例如将LLM输出直接作为E2B沙箱的代码输入，零转换开销  \n\ndynamiq 将复杂的AI应用编排简化为可组合、可复用的流程定义，让开发者聚焦业务逻辑而非底层胶水代码。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fdynamiq-ai_dynamiq_d10f9da2.png","dynamiq-ai","Dynamiq","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fdynamiq-ai_1273f490.png","Operating platform for agentic Generative AI applications",null,"hello@getdynamiq.ai","DynamiqAGI","https:\u002F\u002Fgetdynamiq.ai","https:\u002F\u002Fgithub.com\u002Fdynamiq-ai",[85,89,93],{"name":86,"color":87,"percentage":88},"Python","#3572A5",99.9,{"name":90,"color":91,"percentage":92},"Makefile","#427819",0,{"name":94,"color":95,"percentage":92},"Dockerfile","#384d54",1044,125,"2026-04-05T03:35:16","Apache-2.0","Linux, macOS, 
Windows","未说明",{"notes":103,"python":104,"dependencies":105},"需要提供 OpenAI、E2B、ScaleSerp、Pinecone 等第三方服务的 API 密钥；支持通过 pip 或 poetry 安装；部分功能依赖异步运行环境。","3.10+",[],[14,26,13,15],[108,109,110,111,112,113,114],"agents","ai","generative-ai","gpt","llm","llmops","rag",4,"2026-03-27T02:49:30.150509","2026-04-06T05:36:54.030937",[119,124,129,134],{"id":120,"question_zh":121,"answer_zh":122,"source_url":123},394,"前端应用（App）是否开源？","不，前端应用不是开源的。","https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fissues\u002F461",{"id":125,"question_zh":126,"answer_zh":127,"source_url":128},395,"运行 gpt_researcher 示例需要哪些环境变量？","需要设置以下环境变量：\n- OPENAI_API_KEY：你的 OpenAI API 密钥。\n- TAVILY_API_KEY：你的 Tavily API 密钥。\n- PINECONE_API_KEY：你的 Pinecone API 密钥。\n- PINECONE_CLOUD：你的 Pinecone 云提供商（默认使用 aws）。\n- PINECONE_REGION：你的 Pinecone 区域（默认使用 us-east-1）。\n- ZENROWS_API_KEY：你的 ZenRows API 密钥。\n\n此外，如果使用 markdown，可能需要安装 pango、gdk-pixbuf 和 libffi，例如在 macOS 上运行：\n```bash\nbrew install pango gdk-pixbuf libffi\n```","https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fissues\u002F324",{"id":130,"question_zh":131,"answer_zh":132,"source_url":133},396,"能否将 Dynamiq 用作本地开源模型的 RAG 框架？","可以。你可以使用任何本地运行的开源大语言模型（LLM）。例如，可以参考使用 Ollama 提供商的示例：https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fblob\u002Fmain\u002Fexamples\u002Fllm\u002Follama.py。此外，也可以使用 CustomLLM 节点，代码位于：https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fblob\u002Fmain\u002Fdynamiq\u002Fnodes\u002Fllms\u002Fcustom_llm.py。","https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fissues\u002F92",{"id":135,"question_zh":136,"answer_zh":137,"source_url":138},397,"如何为 Dynamiq 添加监控支持或贡献相关文档？","Dynamiq 支持通过 OpenTelemetry 进行代理（Agent）可观测性，可与 Grafana 或其他开源 OTel 工具集成。如需为文档添加监控相关内容，建议联系维护者了解文档贡献流程（例如通过 Issue 中提到的 @acoola 和 
@maksDev123）。","https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fissues\u002F64",[140,145,150,155,160,165,170,175,180,185,190,195,200,205,210,215,220,225,230,235],{"id":141,"version":142,"summary_zh":143,"released_at":144},100065,"v0.46.0","## What's Changed\r\n* chore: unify nodes naming convention by @tyaroshko in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F661\r\n* chore: allow parallel execution for more search\u002Fscrape nodes by @tyaroshko in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F666\r\n* feat: whitelist tools for input streaming by @mykhailobuleshnyi in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F667\r\n* feat: e2b files by @maksymbuleshnyi in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F663\r\n* fix: schemas by @maksymbuleshnyi in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F660\r\n* feat: limit subagents per one run by @mykhailobuleshnyi in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F665\r\n* fix: truncated streaming by @maksymbuleshnyi in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F675\r\n* feat: optimize streaming events by @maksymbuleshnyi in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F670\r\n* refactor: update agent-accessible params for search\u002Fscrape nodes by @tyaroshko in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F672\r\n* fix: fix dublicate openAI bug by @mykhailobuleshnyi in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F677\r\n* fix: file write append by @maksymbuleshnyi in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F676\r\n* fix: agent input validation by @maksymbuleshnyi in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F678\r\n* build: dynamiq v0.46.0 by @tyaroshko in 
https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F679\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fcompare\u002Fv0.45.0...v0.46.0","2026-04-03T15:00:36",{"id":146,"version":147,"summary_zh":148,"released_at":149},100066,"v0.45.0","## What's Changed\r\n* feat: Sandbox full setup by @mykhailobuleshnyi in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F648\r\n* feat: input streaming for function calling by @mykhailobuleshnyi in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F649\r\n* feat: Partial read FileReadTool by @mykhailobuleshnyi in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F656\r\n* fix: make output_filename field optional by @tyaroshko in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F652\r\n* feat: migrate dynamiq-extra into dynamiq by @vitalii-dynamiq in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F651\r\n* feat: native function calling by @maksymbuleshnyi in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F650\r\n* feat: add input error event by @maksymbuleshnyi in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F655\r\n* build: dynamiq v0.45.0 by @vitalii-dynamiq in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F658\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fcompare\u002Fv0.44.0...v0.45.0","2026-03-26T21:40:21",{"id":151,"version":152,"summary_zh":153,"released_at":154},100067,"v0.43.0","## What's Changed\r\n* fix: yaml inline connection support by @acoola in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F624\r\n* fix: update Exa parameters for agent input schema by @tyaroshko in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F623\r\n* chore: include cached tokens in LLM cost 
calculation by @tyaroshko in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F627\r\n* fix: parallel tracing by @maksymbuleshnyi in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F628\r\n* feat: reworked subagents by @mykhailobuleshnyi in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F611\r\n* feat: save tool output to files in the sandbox by @tyaroshko in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F587\r\n* build: dynamiq v0.43.0 by @tyaroshko in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F630\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fcompare\u002Fv0.42.0...v0.43.0","2026-03-16T20:27:17",{"id":156,"version":157,"summary_zh":158,"released_at":159},100068,"v0.42.0","## What's Changed\r\n* feat: add tool result status by @maksymbuleshnyi in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F618\r\n* feat: streaming parallel calls at once by @mykhailobuleshnyi in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F617\r\n* fix: improve action generation by @maksymbuleshnyi in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F615\r\n* feat: add cache control for Anthropic models by @tyaroshko in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F613\r\n* build: dynamiq v0.42.0 by @tyaroshko in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F619\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fcompare\u002Fv0.41.0...v0.42.0","2026-03-12T19:06:41",{"id":161,"version":162,"summary_zh":163,"released_at":164},100069,"v0.41.0","## What's Changed\r\n* feat: add lightweight code interpreter by @maksymbuleshnyi in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F593\r\n* fix: remove tool name mentions by @tyaroshko in 
https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F592\r\n* fix: handle E2B sandbox timeout issues by @tyaroshko in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F594\r\n* feat: tool input streaming by @mykhailobuleshnyi in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F597\r\n* fix: file upload fix by @mykhailobuleshnyi in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F600\r\n* fix: enhance ShellCommandResult structure and error handling by @delamainer in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F602\r\n* fix: improve max input tokens handling by @maksymbuleshnyi in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F605\r\n* fix: update ContextManagerTool timeout by @tyaroshko in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F603\r\n* refactor: optimize main agent prompt by @tyaroshko in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F604\r\n* fix: remove redundant parametrs by @mykhailobuleshnyi in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F606\r\n* fix: fix context overflow by @mykhailobuleshnyi in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F607\r\n* feat: improve followup by @maksymbuleshnyi in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F601\r\n* build: dynamiq v0.41.0 by @acoola in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F609\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fcompare\u002Fv0.40.0...v0.41.0","2026-03-06T12:12:08",{"id":166,"version":167,"summary_zh":168,"released_at":169},100070,"v0.40.0","## What's Changed\r\n* ci: run slow tests manually by @acoola in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F579\r\n* feat: handle async context in threads by @acoola in 
https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F580\r\n* feat: safe parallel tool run by @mykhailobuleshnyi in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F586\r\n* fix: upd skills tool usage by @olbychos in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F588\r\n* feat: extra streaming parameters by @maksymbuleshnyi in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F583\r\n* feat: update summarization of conversation history by @maksymbuleshnyi in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F582\r\n* feat: Return files from agent by @mykhailobuleshnyi in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F589\r\n* build: dynamiq v0.40.0 by @acoola in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F591\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fcompare\u002Fv0.39.2...v0.40.0","2026-02-25T20:21:10",{"id":171,"version":172,"summary_zh":173,"released_at":174},100071,"v0.39.2","## What's Changed\r\n* feat: add sandbox url, improve skills ingestion by @olbychos in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F572\r\n* feat: add streaming output fields by @maksymbuleshnyi in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F574\r\n* feat: file write and streaming by @maksymbuleshnyi in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F570\r\n* build: dynamiq v0.39.2 by @acoola in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F575\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fcompare\u002Fv0.39.1...v0.39.2","2026-02-19T19:32:09",{"id":176,"version":177,"summary_zh":178,"released_at":179},100072,"v0.39.1","## What's Changed\r\n* fix: fix exist method in sandbox by @mykhailobuleshnyi in 
https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F565\r\n* fix: yaml roundtrip by @maksymbuleshnyi in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F567\r\n* fix: add file messages handling for anthropic by @TrachukT in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F564\r\n* chore: improve skills ingestion by @olbychos in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F568\r\n* build: dynamiq v0.39.1 by @acoola in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F569\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fcompare\u002Fv0.39.0...v0.39.1","2026-02-18T15:15:52",{"id":181,"version":182,"summary_zh":183,"released_at":184},100073,"v0.39.0","## What's Changed\r\n* feat: add possibility to rewrite method type in HttpApiCall by @TrachukT in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F554\r\n* chore: add params support in SQL Executor tool by @acoola in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F556\r\n* fix: add tests retry by @maksymbuleshnyi in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F558\r\n* fix: sandbox dublication by @mykhailobuleshnyi in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F553\r\n* feat: upgrade litellm package, fix related errors and Weaviate conn by @TrachukT in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F555\r\n* chore: improve conn manager and yaml loader by @acoola in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F562\r\n* fix: No copy when single tool is runned by @mykhailobuleshnyi in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F559\r\n* feat: add skills sandbox usage by @olbychos in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F552\r\n* feat: make 
todo tool use sandbox by @mykhailobuleshnyi in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F561\r\n* feat: sandbox file sharing by @maksymbuleshnyi in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F560\r\n* fix: upd xml parser by @olbychos in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F557\r\n* build: dynamiq v0.39.0 by @acoola in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F563\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fcompare\u002Fv0.38.0...v0.39.0","2026-02-16T19:28:03",{"id":186,"version":187,"summary_zh":188,"released_at":189},100074,"v0.38.0","## What's Changed\r\n* feat: add baseline skills by @olbychos in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F515\r\n* feat: add e2b sandbox by @maksymbuleshnyi in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F541\r\n* feat: files sandbox by @mykhailobuleshnyi in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F549\r\n* feat: perception of files from sandbox by @maksymbuleshnyi in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F550\r\n* feat: add support of jsonpath requirements values by @acoola in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F548\r\n* build: dynamiq v0.38.0 by @acoola in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F551\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fcompare\u002Fv0.37.1...v0.38.0","2026-02-11T19:57:40",{"id":191,"version":192,"summary_zh":193,"released_at":194},100075,"v0.37.1","## What's Changed\r\n* fix: human feedback prompts by @maksymbuleshnyi in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F542\r\n* chore: add extra file path sanitization by @acoola in 
https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F543\r\n* build: dynamiq v0.37.1 by @acoola in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F544\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fcompare\u002Fv0.37.0...v0.37.1","2026-02-04T18:38:09",{"id":196,"version":197,"summary_zh":198,"released_at":199},100076,"v0.37.0","## What's Changed\r\n* feat: add tool action classification by @maksymbuleshnyi in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F528\r\n* feat: context compaction by @maksymbuleshnyi in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F523\r\n* feat: add reranker by @olbychos in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F532\r\n* fix: agent node creation by @mykhailobuleshnyi in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F533\r\n* fix: python code executor by @olbychos in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F534\r\n* chore: unify human feedback tools by @maksymbuleshnyi in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F536\r\n* fix: rework parallel tool calling by @mykhailobuleshnyi in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F537\r\n* chore: improve node\u002Fworkflow cancellation by @acoola in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F538\r\n* refactor: update pgvector node by @tyaroshko in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F530\r\n* fix: streaming events by @maksymbuleshnyi in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F539\r\n* fix: fragile regex in ReAct agent action parsing by @minimAluminiumalism in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F506\r\n* feat: agent reasoning by @maksymbuleshnyi in 
https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F531\r\n* build(deps): bump pypdf from 6.3.0 to 6.6.2 by @dependabot[bot] in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F535\r\n* build: dynamiq v0.37.0 by @acoola in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F540\r\n\r\n## New Contributors\r\n* @mykhailobuleshnyi made their first contribution in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F533\r\n* @minimAluminiumalism made their first contribution in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F506\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fcompare\u002Fv0.36.1...v0.37.0","2026-02-03T10:37:35",{"id":201,"version":202,"summary_zh":203,"released_at":204},100077,"v0.36.1","## What's Changed\r\n* fix: rollback litellm version, add test for update error check by @TrachukT in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F525\r\n* build: dynamiq v0.36.1 by @acoola in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F526\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fcompare\u002Fv0.36.0...v0.36.1","2026-01-16T15:24:18",{"id":206,"version":207,"summary_zh":208,"released_at":209},100078,"v0.36.0","## What's Changed\r\n* feat: add delegation toggle by @olbychos in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F492\r\n* fix: add subagents memory by @olbychos in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F493\r\n* fix: use other inner node type params by @acoola in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F499\r\n* ci: add Bugbot project rules by @acoola in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F500\r\n* feat: add new image generation and editing nodes by @TrachukT in 
https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F495\r\n* feat: update agents tests by @maksymbuleshnyi in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F507\r\n* feat: add model specific prompting by @maksymbuleshnyi in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F490\r\n* feat: add graph kb logic and tools by @olbychos in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F504\r\n* feat: handle requirements data in yaml loader by @acoola in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F509\r\n* fix: metadata processing by @maksymbuleshnyi in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F511\r\n* fix: override delete_documents_by_file_ids for pgvector by @tyaroshko in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F517\r\n* fix: dry run for VectorStoreWriter node by @tyaroshko in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F522\r\n* feat: refactor agents by @maksymbuleshnyi in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F510\r\n* build: dynamiq v0.36.0 by @acoola in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F524\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fcompare\u002Fv0.35.1...v0.36.0","2026-01-15T17:32:23",{"id":211,"version":212,"summary_zh":213,"released_at":214},100079,"v0.35.1","## What's Changed\r\n* ci: increase allowed PR title length by @acoola in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F488\r\n* fix: update connection config for watsonx parameters by @TrachukT in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F475\r\n* feat: add python 3.13 support by @acoola in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F489\r\n* feat: add subagents final answer by @olbychos 
in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F487\r\n* fix: client reinit by @acoola in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F494\r\n* fix: tracing client flush on error and skip by @acoola in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F496\r\n* build: dynamiq v0.35.1 by @acoola in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F497\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fcompare\u002Fv0.35.0...v0.35.1","2025-12-11T17:24:33",{"id":216,"version":217,"summary_zh":218,"released_at":219},100080,"v0.35.0","## What's Changed\r\n* feat: add pyyaml package to python sandbox by @maksymbuleshnyi in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F479\r\n* feat: add human feedback to the trace by @maksymbuleshnyi in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F478\r\n* fix: check Pinecone namespace before deletion by @tyaroshko in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F476\r\n* fix: lock e2b dependency by @maksymbuleshnyi in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F481\r\n* feat: fail flow\u002Fworkflow on node failure by @acoola in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F480\r\n* feat: add search tool by @olbychos in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F482\r\n* feat: add e2b domain by @olbychos in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F483\r\n* feat: add llm fallback by @acoola in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F484\r\n* build: dynamiq v0.35.0 by @acoola in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F486\r\n\r\n\r\n**Full Changelog**: 
https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fcompare\u002Fv0.34.1...v0.35.0","2025-12-02T17:20:22",{"id":221,"version":222,"summary_zh":223,"released_at":224},100081,"v0.34.1","## What's Changed\r\n* feat: update version of e2b by @maksymbuleshnyi in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F472\r\n* build: bump pkg to v0.34.1 by @acoola in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F473\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fcompare\u002Fv0.34.0...v0.34.1","2025-11-21T14:56:41",{"id":226,"version":227,"summary_zh":228,"released_at":229},100082,"v0.34.0","## What's Changed\r\n* feat: add client check and auto reinit by @acoola in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F463\r\n* feat: update prefix for watsonx models by @TrachukT in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F468\r\n* feat: add python code executor, update file agent tools by @olbychos in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F462\r\n* feat: update cli config for service deploy, add resource list by @TrachukT in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F467\r\n* feat: update search tools by @olbychos in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F466\r\n* build: bump pypdf 6.3.0 by @acoola in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F470\r\n* feat: add OpenSearch node by @tyaroshko in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F465\r\n* feat: update Unstructured, add handling for base64 images by @tyaroshko in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F464\r\n* build: bump pkg to v0.34.0 by @acoola in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F471\r\n\r\n\r\n**Full Changelog**: 
https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fcompare\u002Fv0.33.0...v0.34.0","2025-11-21T09:08:10",{"id":231,"version":232,"summary_zh":233,"released_at":234},100083,"v0.33.0","## What's Changed\r\n* chore: improve CM clients refresh by @acoola in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F457\r\n* chore: add agent example by @olbychos in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F454\r\n* fix: trim llm chunks by @olbychos in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F456\r\n* feat: add dynamiq memory by @olbychos in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F453\r\n* feat: add VectorStoreWriter tool by @tyaroshko in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F451\r\n* feat: add logic to return files instance as Elevenlabs output by @TrachukT in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F458\r\n* build: bump pkg to v0.33.0 by @acoola in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F459\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fcompare\u002Fv0.32.0...v0.33.0","2025-11-03T19:08:31",{"id":236,"version":237,"summary_zh":238,"released_at":239},100084,"v0.32.0","## What's Changed\r\n* fix: agents generation by @olbychos in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F443\r\n* fix: add pinecone metric by @olbychos in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F444\r\n* fix: update xml inference mode by @olbychos in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F446\r\n* chore: improve node\u002Fflow run and reduce CPU usage by @acoola in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F437\r\n* fix: infinite loop hang in sync flow run by @acoola in 
https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F449\r\n* feat: add agent file processing by @maksymbuleshnyi in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F442\r\n* feat: agent automatic context management by @maksymbuleshnyi in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F447\r\n* feat: add config to output transform by @maksymbuleshnyi in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F450\r\n* build: bump pkg to v0.32.0 by @acoola in https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fpull\u002F455\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fdynamiq-ai\u002Fdynamiq\u002Fcompare\u002Fv0.31.0...v0.32.0","2025-10-28T20:34:57"]