[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-mastra-ai--mastra":3,"tool-mastra-ai--mastra":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",140436,2,"2026-04-05T23:32:43",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":70,"readme_en":71,"readme_zh":72,"quickstart_zh":73,"use_case_zh":74,"hero_image_url":75,"owner_login":76,"owner_name":77,"owner_avatar_url":78,"owner_bio":79,"owner_company":80,"owner_location":80,"owner_email":80,"owner_twitter":67,"owner_website":81,"owner_url":82,"languages":83,"stars":102,"forks":103,"last_commit_at":104,"license":105,"difficulty_score":23,"env_os":106,"env_gpu":107,"env_ram":107,"env_deps":108,"category_tags":115,"github_topics":116,"view_count":130,"oss_zip_url":80,"oss_zip_packed_at":80,"status":16,"created_at":131,"updated_at":132,"faqs":133,"releases":162},2649,"mastra-ai\u002Fmastra","mastra","From the team behind Gatsby, Mastra is a framework for building AI-powered applications and agents with a modern TypeScript stack.","Mastra 是由 Gatsby 团队打造的现代化 TypeScript 框架，专为构建 AI 驱动的应用程序和智能体（Agents）而设计。它旨在解决开发者在将 AI 原型转化为生产级应用时面临的复杂挑战，提供从开发、调试到规模化部署的一站式解决方案。\n\n这款工具非常适合熟悉 JavaScript 或 TypeScript 的前后端开发者，尤其是那些希望利用 React、Next.js 或 Node.js 生态快速落地 AI 产品的技术团队。Mastra 的核心优势在于其“开箱即用”的完整能力：通过统一的接口连接 40 多家主流大模型提供商，简化了模型切换成本；内置基于图的引擎，既能编排复杂的多步骤工作流，也能构建具备自主推理能力的智能体。\n\n特别值得一提的是，Mastra 原生支持“人机协同”模式，允许工作流在执行中暂停以等待用户确认，并能持久化保存状态以便随时恢复。此外，它还提供了完善的上下文记忆管理、自动化评估体系以及可观测性工具，帮助团队持续优化 AI 表现。无论是需要将 AI 助手集成到现有 Web 应用中，还是构建独立的 MCP 服务器，Mastra 都能让构建可靠、可控","Mastra 是由 Gatsby 团队打造的现代化 TypeScript 框架，专为构建 AI 驱动的应用程序和智能体（Agents）而设计。它旨在解决开发者在将 AI 
原型转化为生产级应用时面临的复杂挑战，提供从开发、调试到规模化部署的一站式解决方案。\n\n这款工具非常适合熟悉 JavaScript 或 TypeScript 的前后端开发者，尤其是那些希望利用 React、Next.js 或 Node.js 生态快速落地 AI 产品的技术团队。Mastra 的核心优势在于其“开箱即用”的完整能力：通过统一的接口连接 40 多家主流大模型提供商，简化了模型切换成本；内置基于图的引擎，既能编排复杂的多步骤工作流，也能构建具备自主推理能力的智能体。\n\n特别值得一提的是，Mastra 原生支持“人机协同”模式，允许工作流在执行中暂停以等待用户确认，并能持久化保存状态以便随时恢复。此外，它还提供了完善的上下文记忆管理、自动化评估体系以及可观测性工具，帮助团队持续优化 AI 表现。无论是需要将 AI 助手集成到现有 Web 应用中，还是构建独立的 MCP 服务器，Mastra 都能让构建可靠、可控的 AI 产品变得更加简单高效。","# Mastra\n\n[![npm version](https:\u002F\u002Fbadge.fury.io\u002Fjs\u002F@mastra%2Fcore.svg)](https:\u002F\u002Fwww.npmjs.com\u002Fpackage\u002F@mastra\u002Fcore)\n[![CodeQl](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Factions\u002Fworkflows\u002Fgithub-code-scanning\u002Fcodeql\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Factions\u002Fworkflows\u002Fgithub-code-scanning\u002Fcodeql)\n[![GitHub Repo stars](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fmastra-ai\u002Fmastra)](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fstargazers)\n[![Discord](https:\u002F\u002Fimg.shields.io\u002Fdiscord\u002F1309558646228779139?logo=discord&label=Discord&labelColor=white&color=7289DA)](https:\u002F\u002Fdiscord.gg\u002FBTYqqHKUrf)\n[![Twitter Follow](https:\u002F\u002Fimg.shields.io\u002Ftwitter\u002Ffollow\u002Fmastra?style=social)](https:\u002F\u002Fx.com\u002Fmastra)\n[![NPM Downloads](https:\u002F\u002Fimg.shields.io\u002Fnpm\u002Fdm\u002F%40mastra%252Fcore)](https:\u002F\u002Fwww.npmjs.com\u002Fpackage\u002F@mastra\u002Fcore)\n[![Static Badge](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FY%20Combinator-W25-orange)](https:\u002F\u002Fwww.ycombinator.com\u002Fcompanies?batch=W25)\n\nMastra is a framework for building AI-powered applications and agents with a modern TypeScript stack.\n\nIt includes everything you need to go from early prototypes to production-ready applications. 
Mastra integrates with frontend and backend frameworks like React, Next.js, and Node, or you can deploy it anywhere as a standalone server. It's the easiest way to build, tune, and scale reliable AI products.\n\n## Why Mastra?\n\nPurpose-built for TypeScript and designed around established AI patterns, Mastra gives you everything you need to build great AI applications out-of-the-box.\n\nSome highlights include:\n\n- [**Model routing**](https:\u002F\u002Fmastra.ai\u002Fmodels) - Connect to 40+ providers through one standard interface. Use models from OpenAI, Anthropic, Gemini, and more.\n\n- [**Agents**](https:\u002F\u002Fmastra.ai\u002Fdocs\u002Fagents\u002Foverview) - Build autonomous agents that use LLMs and tools to solve open-ended tasks. Agents reason about goals, decide which tools to use, and iterate internally until the model emits a final answer or an optional stopping condition is met.\n\n- [**Workflows**](https:\u002F\u002Fmastra.ai\u002Fdocs\u002Fworkflows\u002Foverview) - When you need explicit control over execution, use Mastra's graph-based workflow engine to orchestrate complex multi-step processes. Mastra workflows use an intuitive syntax for control flow (`.then()`, `.branch()`, `.parallel()`).\n\n- [**Human-in-the-loop**](https:\u002F\u002Fmastra.ai\u002Fdocs\u002Fworkflows\u002Fsuspend-and-resume) - Suspend an agent or workflow and await user input or approval before resuming. Mastra uses [storage](https:\u002F\u002Fmastra.ai\u002Fdocs\u002Fserver-db\u002Fstorage) to remember execution state, so you can pause indefinitely and resume where you left off.\n\n- **Context management** - Give your agents the right context at the right time. 
Provide [conversation history](https:\u002F\u002Fmastra.ai\u002Fdocs\u002Fmemory\u002Fconversation-history), [retrieve](https:\u002F\u002Fmastra.ai\u002Fdocs\u002Frag\u002Foverview) data from your sources (APIs, databases, files), and add human-like [working](https:\u002F\u002Fmastra.ai\u002Fdocs\u002Fmemory\u002Fworking-memory) and [semantic](https:\u002F\u002Fmastra.ai\u002Fdocs\u002Fmemory\u002Fsemantic-recall) memory so your agents behave coherently.\n\n- **Integrations** - Bundle agents and workflows into existing React, Next.js, or Node.js apps, or ship them as standalone endpoints. When building UIs, integrate with agentic libraries like Vercel's AI SDK UI and CopilotKit to bring your AI assistant to life on the web.\n\n- [**MCP servers**](https:\u002F\u002Fmastra.ai\u002Fdocs\u002Ftools-mcp\u002Fmcp-overview) - Author Model Context Protocol servers, exposing agents, tools, and other structured resources via the MCP interface. These can then be accessed by any system or agent that supports the protocol.\n\n- **Production essentials** - Shipping reliable agents takes ongoing insight, evaluation, and iteration. 
With built-in [evals](https:\u002F\u002Fmastra.ai\u002Fdocs\u002Fevals\u002Foverview) and [observability](https:\u002F\u002Fmastra.ai\u002Fdocs\u002Fobservability\u002Foverview), Mastra gives you the tools to observe, measure, and refine continuously.\n\n## Get started\n\nThe **recommended** way to get started with Mastra is by running the command below:\n\n```shell\nnpm create mastra@latest\n```\n\nFollow the [Installation guide](https:\u002F\u002Fmastra.ai\u002Fdocs\u002Fgetting-started\u002Finstallation) for step-by-step setup with the CLI or a manual install.\n\nIf you're new to AI agents, check out our [templates](https:\u002F\u002Fmastra.ai\u002Fdocs\u002Fgetting-started\u002Ftemplates), [course](https:\u002F\u002Fmastra.ai\u002Fcourse), and [YouTube videos](https:\u002F\u002Fyoutube.com\u002F@mastra-ai) to start building with Mastra today.\n\n## Documentation\n\nVisit our [official documentation](https:\u002F\u002Fmastra.ai\u002Fdocs).\n\n## Build with AI\n\nLearn how to make your agent a Mastra expert by following the [Build with AI guide](https:\u002F\u002Fmastra.ai\u002Fdocs\u002Fgetting-started\u002Fbuild-with-ai).\n\n## Contributing\n\nLooking to contribute? All types of help are appreciated, from coding to testing and feature specification. Read [CONTRIBUTING.md](.\u002FCONTRIBUTING.md) for more details on how to get involved.\n\nIf you are a developer and would like to contribute with code, please open an issue to discuss before opening a Pull Request.\n\nInformation about the project setup can be found in the [development documentation](.\u002FDEVELOPMENT.md)\n\n## Support\n\nWe have an [open community Discord](https:\u002F\u002Fdiscord.gg\u002FBTYqqHKUrf). 
Come and say hello and let us know if you have any questions or need any help getting things running.\n\nIt's also super helpful if you leave the project a star here at the [top of the page](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra)\n\n## Licensing\n\nThis repository uses a dual-license model:\n\n- **Apache License 2.0** — The core framework and the vast majority of this codebase is open source under Apache-2.0.\n- **Mastra Enterprise License** — Code in any directory named `ee\u002F` (e.g., `packages\u002Fcore\u002Fsrc\u002Fauth\u002Fee\u002F`) is source-available under the Mastra Enterprise License. These features require a valid enterprise license for production use but can be freely used for development and testing.\n\nSee [LICENSE.md](.\u002FLICENSE.md) for the full license mapping and [ee\u002FLICENSE](.\u002Fee\u002FLICENSE) for the enterprise license terms.\n\n## Security\n\nWe are committed to maintaining the security of this repo and of Mastra as a whole. If you discover a security finding we ask you to please responsibly disclose this to us at [security@mastra.ai](mailto:security@mastra.ai) and we will get back to you.\n","# Mastra\n\n[![npm version](https:\u002F\u002Fbadge.fury.io\u002Fjs\u002F@mastra%2Fcore.svg)](https:\u002F\u002Fwww.npmjs.com\u002Fpackage\u002F@mastra\u002Fcore)\n[![CodeQl](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Factions\u002Fworkflows\u002Fgithub-code-scanning\u002Fcodeql\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Factions\u002Fworkflows\u002Fgithub-code-scanning\u002Fcodeql)\n[![GitHub Repo stars](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fmastra-ai\u002Fmastra)](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fstargazers)\n[![Discord](https:\u002F\u002Fimg.shields.io\u002Fdiscord\u002F1309558646228779139?logo=discord&label=Discord&labelColor=white&color=7289DA)](https:\u002F\u002Fdiscord.gg\u002FBTYqqHKUrf)\n[![Twitter 
Follow](https:\u002F\u002Fimg.shields.io\u002Ftwitter\u002Ffollow\u002Fmastra?style=social)](https:\u002F\u002Fx.com\u002Fmastra)\n[![NPM Downloads](https:\u002F\u002Fimg.shields.io\u002Fnpm\u002Fdm\u002F%40mastra%252Fcore)](https:\u002F\u002Fwww.npmjs.com\u002Fpackage\u002F@mastra\u002Fcore)\n[![Static Badge](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FY%20Combinator-W25-orange)](https:\u002F\u002Fwww.ycombinator.com\u002Fcompanies?batch=W25)\n\nMastra 是一个基于现代 TypeScript 技术栈的框架，用于构建由 AI 驱动的应用程序和智能体。\n\n它提供了从早期原型到生产就绪应用所需的一切。Mastra 可以与 React、Next.js 和 Node 等前后端框架集成，也可以作为独立服务器部署在任何地方。它是构建、调优和扩展可靠 AI 产品的最简单方式。\n\n## 为什么选择 Mastra？\n\nMastra 专为 TypeScript 打造，并围绕成熟的 AI 模式设计，开箱即用，为您提供构建优秀 AI 应用所需的全部功能。\n\n亮点包括：\n\n- [**模型路由**](https:\u002F\u002Fmastra.ai\u002Fmodels) - 通过一个标准接口连接 40 多家提供商。支持 OpenAI、Anthropic、Gemini 等多种模型。\n\n- [**智能体**](https:\u002F\u002Fmastra.ai\u002Fdocs\u002Fagents\u002Foverview) - 构建能够使用大语言模型和工具解决开放式任务的自主智能体。智能体会对目标进行推理，决定使用哪些工具，并在内部迭代，直到模型生成最终答案或满足可选的停止条件。\n\n- [**工作流**](https:\u002F\u002Fmastra.ai\u002Fdocs\u002Fworkflows\u002Foverview) - 当您需要对执行过程进行显式控制时，可以使用 Mastra 基于图的工作流引擎来编排复杂的多步骤流程。Mastra 工作流采用直观的语法来控制流程（`.then()`、`.branch()`、`.parallel()`）。\n\n- [**人机协作**](https:\u002F\u002Fmastra.ai\u002Fdocs\u002Fworkflows\u002Fsuspend-and-resume) - 暂停智能体或工作流，在恢复之前等待用户输入或批准。Mastra 使用 [存储](https:\u002F\u002Fmastra.ai\u002Fdocs\u002Fserver-db\u002Fstorage) 来记住执行状态，因此您可以无限期暂停，并在中断处继续。\n\n- **上下文管理** - 在正确的时间为您的智能体提供合适的上下文。提供 [对话历史](https:\u002F\u002Fmastra.ai\u002Fdocs\u002Fmemory\u002Fconversation-history)，从您的数据源（API、数据库、文件）中 [检索](https:\u002F\u002Fmastra.ai\u002Fdocs\u002Frag\u002Foverview) 数据，并添加类似人类的 [工作记忆](https:\u002F\u002Fmastra.ai\u002Fdocs\u002Fmemory\u002Fworking-memory) 和 [语义记忆](https:\u002F\u002Fmastra.ai\u002Fdocs\u002Fmemory\u002Fsemantic-recall)，使您的智能体行为更加连贯。\n\n- **集成** - 将智能体和工作流打包到现有的 React、Next.js 或 Node.js 应用中，或者将其作为独立的 API 端点发布。在构建 UI 时，可以与 Vercel 的 AI SDK UI 和 CopilotKit 等智能体库集成，从而在网页上实现您的 AI 助手。\n\n- [**MCP 
服务器**](https:\u002F\u002Fmastra.ai\u002Fdocs\u002Ftools-mcp\u002Fmcp-overview) - 开发 Model Context Protocol 服务器，通过 MCP 接口公开智能体、工具和其他结构化资源。这些资源随后可以被任何支持该协议的系统或智能体访问。\n\n- **生产必备** - 发布可靠的智能体需要持续的洞察、评估和迭代。借助内置的 [评估](https:\u002F\u002Fmastra.ai\u002Fdocs\u002Fevals\u002Foverview) 和 [可观测性](https:\u002F\u002Fmastra.ai\u002Fdocs\u002Fobservability\u002Foverview)，Mastra 为您提供持续观察、度量和优化的工具。\n\n## 开始使用\n\n开始使用 Mastra 的 **推荐** 方法是运行以下命令：\n\n```shell\nnpm create mastra@latest\n```\n\n请按照 [安装指南](https:\u002F\u002Fmastra.ai\u002Fdocs\u002Fgetting-started\u002Finstallation) 中的说明，通过 CLI 或手动安装逐步完成设置。\n\n如果您是 AI 智能体的新手，请查看我们的 [模板](https:\u002F\u002Fmastra.ai\u002Fdocs\u002Fgetting-started\u002Ftemplates)、[课程](https:\u002F\u002Fmastra.ai\u002Fcourse) 和 [YouTube 视频](https:\u002F\u002Fyoutube.com\u002F@mastra-ai)，立即开始使用 Mastra 进行开发吧。\n\n## 文档\n\n请访问我们的 [官方文档](https:\u002F\u002Fmastra.ai\u002Fdocs)。\n\n## 使用 AI 构建\n\n按照 [使用 AI 构建指南](https:\u002F\u002Fmastra.ai\u002Fdocs\u002Fgetting-started\u002Fbuild-with-ai)，学习如何让您的智能体成为 Mastra 专家。\n\n## 贡献\n\n希望参与贡献吗？我们欢迎各种形式的帮助，从编码到测试和功能规范。请阅读 [CONTRIBUTING.md](.\u002FCONTRIBUTING.md) 以了解如何参与。\n\n如果您是开发者并希望提交代码，请先打开一个问题讨论，再创建拉取请求。\n\n有关项目设置的信息，请参阅 [开发文档](.\u002FDEVELOPMENT.md)。\n\n## 支持\n\n我们有一个开放的社区 Discord 群组：[discord.gg\u002FBTYqqHKUrf](https:\u002F\u002Fdiscord.gg\u002FBTYqqHKUrf)。欢迎加入并与我们交流，如果您有任何问题或需要帮助启动项目，请随时告诉我们。\n\n此外，如果您能在页面顶部的 [GitHub 仓库](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra) 上给本项目点个赞，将对我们非常有帮助。\n\n## 许可证\n\n本仓库采用双重许可证模式：\n\n- **Apache License 2.0** — 核心框架及本代码库的绝大部分内容均以 Apache-2.0 开源许可发布。\n- **Mastra 企业许可证** — 任何名为 `ee\u002F` 的目录中的代码（例如 `packages\u002Fcore\u002Fsrc\u002Fauth\u002Fee\u002F`）均以 Mastra 企业许可证开源。这些功能在生产环境中使用需要有效的企业许可证，但在开发和测试阶段可以免费使用。\n\n完整的许可证映射请参阅 [LICENSE.md](.\u002FLICENSE.md)，企业许可证条款请参阅 [ee\u002FLICENSE](.\u002Fee\u002FLICENSE)。\n\n## 安全性\n\n我们致力于维护本仓库以及整个 Mastra 项目的安全性。如果您发现任何安全漏洞，请通过 [security@mastra.ai](mailto:security@mastra.ai) 以负责任的方式向我们报告，我们将尽快与您联系。","# Mastra 快速上手指南\n\nMastra 是一个基于现代 TypeScript 
技术栈构建的 AI 应用与智能体（Agent）开发框架。它提供了从原型设计到生产部署所需的全套工具，支持模型路由、自主智能体、工作流编排、人机协作及可观测性等核心功能。\n\n## 环境准备\n\n在开始之前，请确保您的开发环境满足以下要求：\n\n- **操作系统**：macOS, Linux, 或 Windows (推荐 WSL2)\n- **Node.js**：版本 18.0 或更高（推荐使用 LTS 版本）\n- **包管理器**：npm, yarn, pnpm 或 bun\n- **编辑器**：VS Code 或其他支持 TypeScript 的 IDE\n\n> **提示**：国内开发者若遇到 npm 安装缓慢，可临时切换至淘宝镜像源：\n> ```bash\n> npm config set registry https:\u002F\u002Fregistry.npmmirror.com\n> ```\n\n## 安装步骤\n\n推荐使用官方提供的 CLI 工具快速初始化项目，这是最简便的起步方式。\n\n1. **创建新项目**\n   在终端运行以下命令：\n   ```shell\n   npm create mastra@latest\n   ```\n\n2. **跟随向导配置**\n   命令行将引导您完成以下设置：\n   - 输入项目名称\n   - 选择模板（如基础 Agent、工作流等）\n   - 选择包管理器（npm\u002Fpnpm\u002Fyarn\u002Fbun）\n\n3. **进入项目目录并安装依赖**\n   ```shell\n   cd \u003Cyour-project-name>\n   npm install\n   # 或者如果您选择了其他包管理器\n   # pnpm install\n   # yarn install\n   # bun install\n   ```\n\n4. **配置环境变量**\n   在项目根目录找到 `.env` 文件，填入您的 LLM 提供商 API Key（如 OpenAI, Anthropic 等）：\n   ```env\n   OPENAI_API_KEY=sk-your-api-key-here\n   ```\n\n## 基本使用\n\n以下是一个最简单的示例，展示如何定义一个具备工具调用能力的 AI 智能体。\n\n### 1. 定义智能体 (agent.ts)\n\n在 `src\u002Fmastra\u002Fagents` 目录下创建文件（具体路径依模板而定），编写如下代码：\n\n```typescript\nimport { Mastra } from '@mastra\u002Fcore';\nimport { openai } from '@mastra\u002Fopenai';\nimport { calculatorTool } from '.\u002Ftools'; \u002F\u002F 假设已定义的工具\n\nexport const mastra = new Mastra({\n  agents: {\n    mathAssistant: {\n      name: 'Math Assistant',\n      instructions: 'You are a helpful math tutor. Use the calculator tool to solve problems.',\n      model: openai('gpt-4o'),\n      tools: [calculatorTool],\n    },\n  },\n});\n```\n\n### 2. 运行智能体\n\n创建一个入口文件（如 `index.ts`）来启动并调用智能体：\n\n```typescript\nimport { mastra } from '.\u002Fsrc\u002Fmastra\u002Fagents';\n\nasync function main() {\n  const agent = mastra.getAgent('mathAssistant');\n  \n  const response = await agent.generate('What is 123 multiplied by 456?');\n  \n  console.log(response.text);\n}\n\nmain().catch(console.error);\n```\n\n### 3. 
启动开发服务器\n\n如果项目包含服务端功能，通常可以使用以下命令启动本地开发服务器：\n\n```shell\nnpm run dev\n```\n\n现在，您已经成功运行了第一个 Mastra 智能体。您可以继续探索 [Workflows](https:\u002F\u002Fmastra.ai\u002Fdocs\u002Fworkflows\u002Foverview) 进行复杂流程编排，或使用 [Memory](https:\u002F\u002Fmastra.ai\u002Fdocs\u002Fmemory\u002Fconversation-history) 功能为智能体添加长期记忆。","一家电商初创团队正在构建一个能自动处理复杂售后请求（如退货、换货、赔偿协商）的智能客服系统。\n\n### 没有 mastra 时\n- **模型切换成本高**：每当需要测试不同大模型（如从 OpenAI 切换到 Anthropic）以优化回答质量时，必须重写大量底层连接代码，开发效率极低。\n- **流程控制混乱**：处理涉及“核实订单 - 判断政策 - 生成方案 - 人工审批”的多步逻辑时，缺乏统一的状态管理，容易在异步等待中丢失上下文或陷入死循环。\n- **人工介入困难**：当遇到需要人工确认的敏感赔付时，难以优雅地暂停程序并保存现场，导致用户需重新描述问题，体验割裂。\n- **可观测性缺失**：代理决策过程如同黑盒，无法追踪其为何选择特定工具或产生错误回答，导致调试和评估极其耗时。\n\n### 使用 mastra 后\n- **统一模型路由**：利用 mastra 的标准接口，团队只需修改一行配置即可在 40+ 个大模型提供商间无缝切换，快速找到性价比最优的组合。\n- **可视化工作流编排**：通过 `.then()` 和 `.branch()` 等直观语法构建图谱化工作流，清晰定义复杂的售后审批链路，确保每一步执行都可控且状态持久化。\n- **原生支持人机协作**：借助 mastra 的挂起与恢复机制，系统在需人工审批时自动暂停并存储记忆，待管理员批准后精准续接对话，用户体验流畅自然。\n- **内置评估与监控**：利用自带的评测和可观测性模块，团队能实时量化代理表现，快速定位逻辑缺陷并持续迭代优化，确保生产环境稳定可靠。\n\nmastra 让开发团队从繁琐的基础设施搭建中解放出来，专注于业务逻辑，高效交付了生产级的智能客服应用。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmastra-ai_mastra_d9e38202.png","mastra-ai","Mastra","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fmastra-ai_f815a025.png","Build agents with a modern TypeScript stack. 
Mastra is an all-in-one framework for building AI-powered applications and agents.",null,"https:\u002F\u002Fmastra.ai","https:\u002F\u002Fgithub.com\u002Fmastra-ai",[84,88,92,96,99],{"name":85,"color":86,"percentage":87},"TypeScript","#3178c6",99.3,{"name":89,"color":90,"percentage":91},"JavaScript","#f1e05a",0.7,{"name":93,"color":94,"percentage":95},"CSS","#663399",0,{"name":97,"color":98,"percentage":95},"Shell","#89e051",{"name":100,"color":101,"percentage":95},"HTML","#e34c26",22624,1836,"2026-04-03T07:42:05","NOASSERTION","Linux, macOS, Windows","未说明",{"notes":109,"python":107,"dependencies":110},"Mastra 是一个基于 TypeScript 的 AI 应用开发框架，主要通过 npm 安装使用。它不依赖特定的 Python 环境或 GPU 配置，具体资源需求取决于所集成的 AI 模型提供商（如 OpenAI、Anthropic 等）及本地运行模型的情况。支持作为独立服务器部署或集成到现有 Node.js\u002FReact\u002FNext.js 项目中。",[111,112,113,114],"Node.js","npm","React (可选)","Next.js (可选)",[13,55,14,54,26,15],[117,118,119,120,121,122,123,124,125,126,127,128,129],"agents","ai","chatbots","javascript","llm","nextjs","nodejs","reactjs","typescript","workflows","evals","mcp","tts",7,"2026-03-27T02:49:30.150509","2026-04-06T08:45:58.668362",[134,139,144,148,153,158],{"id":135,"question_zh":136,"answer_zh":137,"source_url":138},12270,"在 TypeScript Monorepo 中使用 `mastra dev` 时遇到依赖处理错误或 `.ts` 文件扩展名未知错误怎么办？","建议尝试安装最新的 `@alpha` 版本，许多此类问题已在该版本中修复。如果构建时出现 `Cannot find package '@babel\u002Fpreset-typescript'` 错误，请手动将该包添加到项目的 `devDependencies` 中：\n```json\n\"devDependencies\": {\n  \"@babel\u002Fpreset-typescript\": \"^7.x\"\n}\n```\n此外，官方已将 Monorepo 相关问题汇总到一个 Mega Issue (#6852) 中以便集中修复，如遇新问题可前往该处提供复现步骤。","https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fissues\u002F1996",{"id":140,"question_zh":141,"answer_zh":142,"source_url":143},12271,"如何在 Mastra 中启用 AI SDK v5 格式的流输出？","Mastra 已更新 `Agent.streamVNext()` 和 `Agent.generateVNext()` API 以支持 AI SDK v5。只需在调用时传入 `format: 'aisdk'` 参数即可：\n```ts\nconst stream = await agent.streamVNext(messages, {\n  format: 'aisdk'  \u002F\u002F 启用 AI SDK v5 兼容性\n});\nreturn 
stream.toUIMessageStreamResponse();\n```\n默认情况下输出的是 Mastra 原生流格式，添加该参数仅改变输出格式，输入配置和行为保持一致。","https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fissues\u002F5470",{"id":145,"question_zh":146,"answer_zh":147,"source_url":143},12272,"在使用 AI SDK v5 集成 Mastra 时，涉及记忆（memory）、工具（tools）或推理（reasoning）时报错或无法正常工作怎么办？","当使用记忆、工具或推理功能时，必须先将消息转换为模型兼容的格式。可以使用 `convertToModelMessages` 或 Mastra 提供的转换方法：\n```ts\n\u002F\u002F 转换消息以适配 AI SDK v5 模型\nconst convertedMessages = convertMessages(messages).to(\"AIV5.Model\");\n```\n文档可能尚未完全更新，如果遇到验证失败（如 workflow-start 类型错误），请确保在发送给 Agent 前执行了此转换步骤。",{"id":149,"question_zh":150,"answer_zh":151,"source_url":152},12273,"使用 `@mastra\u002Fai-sdk` beta 版本时，Network Agent 行为不可预测或丢失用户消息是什么原因？","这是在早期 beta 版本（如 beta.1）中已知的问题。维护者确认该问题已在后续的 beta 版本中修复。如果您遇到此类行为不一致或消息丢失的情况，请将 `@mastra\u002Fai-sdk` 升级到最新的 beta 版本即可解决。","https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fissues\u002F10092",{"id":154,"question_zh":155,"answer_zh":156,"source_url":157},12274,"Mastra 的 AI 可观测性（Observability）和追踪（Tracing）功能目前状态如何？","AI 可观测性功能已在 1.0-beta 版本（以及之前的 0.x 版本）中正式发布。之前用于收集反馈和讨论的 Umbrella Thread 已关闭。如果您在使用过程中发现任何 Bug 或有改进建议，请直接在新的 GitHub Issue 中报告。此前报告的回归问题（如 #9272）也已在热修复版本中解决。","https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fissues\u002F6773",{"id":159,"question_zh":160,"answer_zh":161,"source_url":138},12275,"在 Monorepo 环境中构建 Mastra 应用时遇到 `@babel\u002Fpreset-typescript` 相关错误如何解决？","如果在运行 `yarn build` 时遇到 `Failed to analyze Mastra application: Cannot find package '@babel\u002Fpreset-typescript'` 错误，即使 `mastra dev` 正常，也需要显式安装该依赖。请在项目的 `package.json` 的 `devDependencies` 中添加：\n```json\n\"@babel\u002Fpreset-typescript\": \"^7.x\"\n```\n然后重新运行构建命令。同时建议优先尝试 `@alpha` 版本，其中包含了对 Monorepo 场景的更多修复。",[163,168,173,178,183,188,193,198,203,208,213,218,223,228,233,238,243,248,253,258],{"id":164,"version":165,"summary_zh":166,"released_at":167},62643,"@mastra\u002Fcore@1.16.0","## 亮点\n\n### 针对观察记忆的更智能模型选择\n\n`@mastra\u002Fmemory` 现在允许您使用 
`ModelByInputTokens` 根据输入大小将观察者和反射器调用路由到不同的模型。短输入可以发送到快速且经济的模型，而较长的输入则会发送到功能更强的模型——所有这些都通过声明式地配置令牌阈值来实现。跟踪记录会显示选择了哪个模型以及原因。\n\n### 数据集和实验的 MongoDB 支持\n\n`@mastra\u002Fmongodb` 现在存储带有完整项目历史和时间旅行查询的版本化数据集，以及实验结果和 CRUD 操作。如果您已经在使用 `MongoDBStore`，则无需额外设置即可自动生效。\n\n### Okta 身份验证与 RBAC\n\n新的 `@mastra\u002Fauth-okta` 包通过 Okta 提供 SSO 身份验证和基于角色的访问控制。您可以将 Okta 组映射到 Mastra 权限，根据 Okta 的 JWKS 端点验证 JWT，并管理会话；或者将 Okta 的 RBAC 与其他身份提供商（如 Auth0 或 Clerk）结合使用。\n\n### 重大变更\n\n- 此变更日志中未列出任何重大变更。\n\n## 变更日志\n\n### [@mastra\u002Fcore@1.16.0](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fblob\u002F@mastra\u002Fcore@1.16.0\u002F\u002Fprivate\u002Fvar\u002Ffolders\u002Fd4\u002Fmn8gvlx91cz80s_9c4gjr12r0000gn\u002FT\u002Fmastra-mastra-ai-mastra-_mastra_core_1.16.0\u002Fpackages\u002Fcore\u002FCHANGELOG.md)\n\n#### 小幅改进\n\n- 为评估工作流添加了数据集与代理的关联以及实验状态跟踪。（[#14470](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fpull\u002F14470)）\n  - **数据集目标定位**：为数据集添加了 `targetType` 和 `targetIds` 字段，使其能够与代理、评分器或工作流相关联。现在，一个数据集可以关联多个实体。\n  - **实验状态**：在实验结果中添加了 `status` 字段（`'needs-review'`、`'reviewed'`、`'complete'`），用于评审队列工作流。\n  - **数据集实验路由**：新增了从数据集触发实验的 API 端点，支持配置目标类型和目标 ID。\n  - **LLM 数据生成**：新增了使用 LLM 生成数据集条目的端点，可配置数量和提示。\n  - **失败分析**：新增了对实验失败进行聚类并利用 LLM 分析提出标签的端点。\n\n- 为实验添加了代理版本支持。现在，在触发实验时，您可以传递 `agentVersion` 参数来指定要使用的代理版本。该代理版本会与实验一起存储，并在实验响应中返回。（[#14562](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fpull\u002F14562)）\n\n  ```ts\n  const client = new MastraClient();\n\n  await client.triggerDatasetExperiment({\n    datasetId: \"my-dataset\",\n    targetType: \"agent\",\n    targetId: \"my-agent\",\n    version: 3, \u002F\u002F 锁定到数据集版本 3\n    agentVersion: \"ver_abc123\" \u002F\u002F 锁定到特定代理版本\n  });\n  ```\n\n- 为 Harness 添加了工具挂起处理功能。（[#14611](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fpull\u002F14611)）\n\n  当工具在执行过程中调用 `suspend()` 时，Harness 现在会发出 `tool_suspended` 事件，并报告 
`agent_","2026-03-26T10:19:34",{"id":169,"version":170,"summary_zh":171,"released_at":172},62644,"@mastra\u002Fcore@1.14.0","## 亮点\n\n### 智能体循环中的 AI 网关工具支持\n`@mastra\u002Fcore` 现在支持 AI 网关工具（例如 `gateway.tools.perplexitySearch()`）作为由提供商执行的工具：它会推断 `providerExecuted` 属性，将流式返回的提供商结果合并回原始工具调用，并在提供商已返回结果时跳过本地执行。\n\n### 更可靠的观察记忆（缓存稳定性 + “截至”检索）\n通过使用带日期的消息边界分隔符和分块机制，观察记忆的持久性更加稳定；同时，`@mastra\u002Fmemory` 新增了 `getObservationsAsOf()` 方法，用于检索在指定消息时间戳下生效的精确观察集合（这对于回放\u002F调试以及一致性提示非常有用）。\n\n### MCP 客户端诊断与服务器级控制\n`@mastra\u002Fmcp` 增加了针对每个服务器的操作工具——`reconnectServer(serverName)`、`listToolsetsWithErrors()` 和 `getServerStderr(serverName)`——以提升 MCP 标准输入输出\u002F服务器集成的可靠性和调试能力。\n\n### 破坏性变更\n- 本更新日志中未列出任何破坏性变更。\n\n## 更新日志\n\n### [@mastra\u002Fcore@1.14.0](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fblob\u002F@mastra\u002Fcore@1.14.0\u002F\u002Fprivate\u002Fvar\u002Ffolders\u002Fd4\u002Fmn8gvlx91cz80s_9c4gjr12r0000gn\u002FT\u002Fmastra-mastra-ai-mastra-_mastra_core_1.14.0\u002Fpackages\u002Fcore\u002FCHANGELOG.md)\n#### 补丁更新\n\n- 更新提供商注册表和模型文档，加入最新的模型和提供商信息（[`51970b3`](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fcommit\u002F51970b3828494d59a8dd4df143b194d37d31e3f5)）\n\n- 在激活缓冲的观察记录时添加带日期的消息边界分隔符，以提升缓存稳定性。（[#14367](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fpull\u002F14367)）\n\n- 修复了由提供商执行的工具调用在内存回放过程中可能出现的顺序错乱或未保存结果的问题。（修复 #13762）（[#13860](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fpull\u002F13860)）\n\n- 修正了 `generateEmptyFromSchema` 函数，使其既能接受字符串形式的 JSON 模式输入，也能接受已解析的对象格式输入；递归初始化嵌套对象属性，并尊重默认值。同时，更新了 `WorkingMemoryTemplate` 类型，将其改为支持 `Record\u003Cstring, unknown>` 内容的联合类型，以适应 JSON 格式的模板。此外，移除了工作记忆处理器中重复的私有模式生成器，改用共享工具函数。（[#14310](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fpull\u002F14310)）\n\n- 修复了由提供商执行的工具调用（如 Anthropic 的 `web_search`）在被提供商延迟时可能被丢弃或错误保存的问题。现在，工具调用的各个部分会按流式顺序保存，而延迟返回的结果也会正确地合并回原始消息中。（[#14282](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fpull\u002F14282)）\n\n- 修复了 
`replaceString` 工具函数，使其能够正确转义替换字符串中的 `$` 字符。此前，替换文本中类似 `$&` 的模式会被解释为正则表达式反向引用，而非字面文本。（[#14434](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fpull\u002F14434)）\n\n- 修复了工具调用更新逻辑，确保保留 `providerExecuted` 和 `providerMetad","2026-03-19T10:48:28",{"id":174,"version":175,"summary_zh":176,"released_at":177},62645,"@mastra\u002Fcore@1.13.0","## 亮点\n\n### 可观测性存储域（模式 + 内存中实现）\nMastra 现在为所有可观测性信号（分数、日志、反馈、指标、发现）提供了基于 Zod 的存储模式和内存中实现，具备完整的类型推断功能，并包含一个基础的 `ObservabilityStorage` 类，其中包含了默认的方法实现。\n\n### 新的持久化 Workspace 文件系统：`@mastra\u002Fagentfs`\n新的 `AgentFSFilesystem` Workspace 提供者通过 `agentfs-sdk` 为代理跨会话添加了基于 Turso\u002FSQLite 的数据库持久化文件存储。\n\n### 可观测性管道升级：重命名类型 + `EventBuffer` 批量处理\n`@mastra\u002Fobservability` 中的导出器和事件总线已更新，以匹配重命名的核心可观测性类型，并新增了一个 `EventBuffer`，用于对非追踪类信号进行批量处理，支持可配置的刷新间隔。\n\n### 通过 `@mastra\u002Fserver\u002Fschemas` 实现类型安全的服务器路由推断\n新推出的 `@mastra\u002Fserver\u002Fschemas` 导出提供了一系列工具类型（`RouteMap`、`InferPathParams`、`InferBody`、`InferResponse` 等），能够自动从 `SERVER_ROUTES` 中推断出请求和响应的类型，包括通过 `createRoute()` 添加的路由。\n\n### 长时间运行的观测记忆降低 Token 成本\n观测记忆新增了 `observation.previousObserverTokens` 配置项，用于将“先前观测”上下文截断到指定的 Token 预算范围内（或完全禁用截断），从而在长时间对话中减少观察者提示的大小。\n\n### 破坏性变更\n- `MetricType`（`counter`\u002F`gauge`\u002F`histogram`）已弃用——指标现在是原始事件，在查询时再进行聚合。\n- 分数模式现在使用 `scorerId` 而不是 `scorerName` 来标识评分者。\n- `ObservabilityBus` 构造函数现在接受一个配置对象（`cardinalityFilter`、`autoExtractMetrics`）；`setCardinalityFilter()` 和 `enableAutoExtractedMetrics()` 已被移除。\n\n## 更改日志\n\n### [@mastra\u002Fcore@1.13.0](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fblob\u002F@mastra\u002Fcore@1.13.0\u002F\u002Fprivate\u002Fvar\u002Ffolders\u002Fd4\u002Fmn8gvlx91cz80s_9c4gjr12r0000gn\u002FT\u002Fmastra-mastra-ai-mastra-_mastra_core_1.13.0\u002Fpackages\u002Fcore\u002FCHANGELOG.md)\n#### 小版本变更\n\n- **新增可观测性存储域模式及其实现**（[#14214](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fpull\u002F14214)）\n\n  引入了针对所有可观测性信号（分数、日志、反馈、指标、发现）的全面存储模式和内存中实现。所有模式均基于 
Zod，具备完整的类型推断能力。`ObservabilityStorage` 基类包含了所有新方法的默认实现。\n\n  **破坏性变更：**\n  - `MetricType`（`counter`\u002F`gauge`\u002F`histogram`）已弃用——指标现在是原始事件，在查询时再进行聚合。\n  - 分数模式使用 `scorerId` 而不是 `scorerName` 来标识评分者。\n\n#### 补丁版本变更\n\n- 更新提供商注册表和模型文档，加入最新模型和提供商信息（[`ea86967`](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fcommit\u002Fea86967449426e0a3673253bd1c2c052a99d970d)）\n\n- 修复了提供商工具（例如 `openai.tools.webSearch()`）的问题……","2026-03-17T10:20:51",{"id":179,"version":180,"summary_zh":181,"released_at":182},62646,"@mastra\u002Fcore@1.12.0","## 亮点\n\n### Cloudflare Durable Objects 存储适配器\n`@mastra\u002Fcloudflare` 新增了一种基于 Durable Objects 的存储实现（除 KV 外），支持 SQLite 持久化、批量操作以及表\u002F列验证，从而在 Cloudflare 上实现更健壮的有状态存储。\n\n### 工作区文件系统路径解析现与真实文件系统语义一致\n`LocalFilesystem` 不再将绝对路径（如 `\u002Ffile.txt`）视为工作区相对路径；绝对路径现在会解析为真实的文件系统位置（并进行包含性检查），相对路径则以 `basePath` 为基准解析，而 `~\u002F` 会被扩展为用户主目录。\n\n### MCP 工具及 Agent\u002F工作流执行的可观测性提升\nMCP 工具调用现通过专用的 `MCP_TOOL_CALL` span 类型进行追踪，并附带服务器名称和版本元数据。Studio 增加了针对 MCP 的时间线样式，由处理器触发的中止操作可在追踪中完全可见，且工作流的挂起与恢复现在会在原始 span 下保持追踪的连续性。\n\n### 多步骤运行中更可靠的 Agent 循环与令牌预算管理\n修复包括：当 `onIterationComplete` 返回 `continue: true` 时，Agent 循环能够正确继续；并通过在每一步（包括工具调用的续行）都执行基于令牌的消息修剪，防止令牌数量呈指数级增长。\n\n### 通过提供商特定的 getter 方法与字符串 PID 实现沙箱与工作区的可扩展性\n沙箱进程 ID 现在采用字符串类型（`ProcessHandle.pid: string`），以支持跨不同提供商的会话 ID；同时，沙箱和文件系统通过新的提供商特定 getter 方法暴露底层 SDK 实例，例如 `sandbox.daytona`、`sandbox.blaxel`、用于 S3 的 `filesystem.client`，以及用于 GCS 的 `filesystem.storage\u002Fbucket`。\n\n### 重大变更\n- `LocalFilesystem` 不再将绝对路径（如 `\u002Fsrc\u002Findex.ts`）视为相对于 `basePath` 的路径；请更新调用方，在目标为工作区时传递相对路径。\n- `ProcessHandle.pid` 由 `number` 改为 `string`；请更新所有假设 PID 为数字的代码（包括 `processes.get(...)`）。\n\n## 更改日志\n\n### 
[@mastra\u002Fcore@1.12.0](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fblob\u002F@mastra\u002Fcore@1.12.0\u002Fpackages\u002Fcore\u002FCHANGELOG.md)\n#### 小幅变更\n\n- MCP 工具调用在追踪中现使用 `MCP_TOOL_CALL` span 类型，而非 `TOOL_CALL`。`CoreToolBuilder` 会检测工具上的 `mcpMetadata`，并创建带有 MCP 服务器名称、版本及工具描述属性的 span。（[#13274](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fpull\u002F13274)）\n\n- **绝对路径现解析为真实的文件系统位置，不再被视为工作区相对路径。**（[#13804](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fpull\u002F13804)）\n\n  此前，在受控模式下，`LocalFilesystem` 会将绝对路径（如 `\u002Ffile.txt`）视为 `basePath\u002Ffile.txt` 的简写形式（即“虚拟根”约定）。这可能导致路径被静默解析到意外的位置——例如 `\u002Fhome\u002Fuser\u002F.config\u002Ffile.txt`。","2026-03-16T14:36:27",{"id":184,"version":185,"summary_zh":186,"released_at":187},62647,"@mastra\u002Fcore@1.11.0","## 亮点\n\n### 动态模型回退数组（运行时路由）\n代理现在可以使用返回完整回退数组（`ModelWithRetries[]`）的 `model` 函数，从而实现基于上下文的模型路由（如用户等级、地区等），支持嵌套和异步选择，并正确继承 `maxRetries` 配置。\n\n### 标准 Schema + Zod v4 兼容层\nMastra 通过 `@mastra\u002Fschema-compat` 在 Zod v3\u002Fv4、AI SDK Schema 和 JSON Schema 之间添加了标准 Schema 规范化功能（`toStandardSchema`、`standardSchemaToJSONSchema`），统一了 Schema 处理方式，并提升了与严格模式提供者的兼容性。\n\n### 所有服务器适配器中可自定义的请求验证错误处理\n`ServerConfig` 和 `createRoute()` 新增了 `onValidationError` 钩子，允许您控制 Zod 验证失败时的状态码和响应体，并在 Hono\u002FExpress\u002FFastify\u002FKoa 等适配器中得到一致支持。\n\n### 请求上下文端到端支持（追踪 + 数据集\u002F实验 + 存储）\n`requestContext` 现在会被捕获到追踪跨度中（并持久化到 ClickHouse\u002FPG\u002FLibSQL\u002FMSSQL 的跨度表中），同时也在数据集项和实验中得到支持，使得请求范围内的元数据（租户\u002F用户\u002F标志位）能够贯穿评估和可观测性流程。\n\n### 更快、更灵活的存储：语义召回性能提升 + PgVector 索引优化 + 新向量类型\n语义召回在多个适配器上显著提速，尤其是在 Postgres 中处理超大线程时；PgVector 新增了 `metadataIndexes`，支持对过滤后的元数据字段进行 B-tree 索引；此外，`@mastra\u002Fpg` 现在也支持 pgvector 的 `bit` 和 `sparsevec` 向量类型。\n\n### 破坏性变更\n- 最低 Zod 版本现已提升至 `^3.25.0`（针对 v3）或 `^4.0.0`（支持 v4）。\n\n## 更改日志\n\n### 
[@mastra\u002Fcore@1.11.0](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fblob\u002F@mastra\u002Fcore@1.11.0\u002Fpackages\u002Fcore\u002FCHANGELOG.md)\n#### 小幅改进\n\n- 功能：支持返回模型回退数组的动态函数（[#11975](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fpull\u002F11975)）\n\n  代理现在可以使用根据运行时上下文返回完整回退数组的动态函数。这实现了：\n  - 动态选择完整的回退配置\n  - 基于上下文的模型选择及自动回退机制\n  - 根据用户等级、地区或其他因素灵活路由模型\n  - 返回数组中可包含嵌套的动态函数（数组中的每个模型也可以是动态的）\n\n  ## 示例\n\n  ### 基础动态回退数组\n\n  ```typescript\n  const agent = new Agent({\n    model: ({ requestContext }) => {\n      const tier = requestContext.get('tier');\n      if (tier === 'premium') {\n        return [\n          { model: 'openai\u002Fgpt-4', maxRetries: 2 },\n          { model: 'anthropic\u002Fclaude-3-opus', maxRetries: 1 },\n        ];\n      }\n      return [{ model: 'openai\u002Fgpt-3.5-turbo', maxRetries: 1 }];\n    },\n  });\n  ```\n\n  ### 基于地区的路由与嵌套动态\n\n  ```typescript\n  const agent =","2026-03-16T14:35:05",{"id":189,"version":190,"summary_zh":191,"released_at":192},62648,"@mastra\u002Fcore@1.10.0","## 亮点\n\n### 工具 `inputExamples` 提升模型工具调用准确性\n\n工具定义现在可以包含 `inputExamples`，这些示例会传递给支持该功能的模型（例如 Anthropic 的 `input_examples`），以展示有效的输入格式，从而减少错误的工具调用。\n\n### MCP 客户端 fetch 钩子现可接收 `RequestContext`（身份验证\u002FCookie 转发）\n\n`@mastra\u002Fmcp` 为 MCP HTTP 服务器定义中的自定义 `fetch` 函数增加了对 `requestContext` 的支持，这使得在工具执行过程中能够按请求范围转发 Cookie 或 Bearer Token，同时保持与 `(url, init)` fetch 签名的向后兼容性。\n\n### 针对代理、流式传输和内存清理的可靠性与开发体验修复\n\n提供程序的流式错误现在会一致地从 `generate()`\u002F`resumeGenerate()` 中抛出，AI SDK 错误会通过 Mastra 日志记录器并附带结构化上下文进行路由，客户端工具在无状态部署中不再丢失历史记录，而 `memory.deleteThread()`\u002F`deleteMessages()` 现在会自动清理受支持向量存储中孤立的向量嵌入。\n\n## 更改日志\n\n### 
[@mastra\u002Fcore@1.10.0](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fblob\u002F@mastra\u002Fcore@1.10.0\u002Fpackages\u002Fcore\u002FCHANGELOG.md)\n\n#### 小幅变更\n\n- 在工具定义中添加 `inputExamples` 支持，向 AI 模型展示有效的工具输入样例。支持此功能的模型（如 Anthropic 的 `input_examples`）将连同工具 Schema 一起接收到这些示例，从而提升工具调用的准确性。（[#12932](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fpull\u002F12932)）\n  - 向 `ToolAction`、`CoreTool` 和 `Tool` 类添加了可选的 `inputExamples` 字段\n\n  ```ts\n  const weatherTool = createTool({\n    id: \"get-weather\",\n    description: \"获取某地天气\",\n    inputSchema: z.object({\n      city: z.string(),\n      units: z.enum([\"celsius\", \"fahrenheit\"])\n    }),\n    inputExamples: [{ input: { city: \"New York\", units: \"fahrenheit\" } }, { input: { city: \"Tokyo\", units: \"celsius\" } }],\n    execute: async ({ city, units }) => {\n      return await fetchWeather(city, units);\n    }\n  });\n  ```\n\n#### 补丁变更\n\n- 依赖项更新：（[#13209](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fpull\u002F13209)）\n  - 更新依赖项 [`p-map@^7.0.4` ↗︎](https:\u002F\u002Fwww.npmjs.com\u002Fpackage\u002Fp-map\u002Fv\u002F7.0.4)（从 `^7.0.3` 更新至 `dependencies`）\n\n- 依赖项更新：（[#13210](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fpull\u002F13210)）\n  - 更新依赖项 [`p-retry@^7.1.1` ↗︎](https:\u002F\u002Fwww.npmjs.com\u002Fpackage\u002Fp-retry\u002Fv\u002F7.1.1)（从 `^7.1.0` 更新至 `dependencies`）\n\n- 使用最新模型和提供商更新提供商注册表及模型文档（[`33e2fd5`](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fcommit\u002F33e2fd5088f83666df17401e2da68c943dbc0448)）\n\n- 修复了 `execute_command` 工具的超时参数，使其接受秒而不是毫秒，从而防止代理意外设置极短的超时时间（[#13799](https:\u002F","2026-03-09T12:11:39",{"id":194,"version":195,"summary_zh":196,"released_at":197},62649,"@mastra\u002Fcore@1.9.0","## 亮点\n\n### 工作区与沙盒的重大升级（令牌感知输出、挂载、进程控制、LSP 
解析）\n工作区获得了显著的功能增强：工作区工具的输出现在受到令牌数量限制，并会去除 ANSI 控制符以适应模型上下文；同时，它还具备 `.gitignore` 感知能力，从而减少令牌使用量。沙盒命令支持 `abortSignal` 用于取消操作，以及后台进程流式回调（`onStdout`\u002F`onStderr`\u002F`onExit`），并且在 `LocalSandbox` 中支持本地符号链接挂载。通过 `WorkspaceToolConfig.name`，工具可以以自定义名称暴露出来。此外，LSP 二进制文件的解析现在可配置（`binaryOverrides`、`searchPaths`、`packageRunner`），这使得工作区诊断能够在 monorepo、全局安装和自定义设置中可靠运行。\n\n### 服务器、Studio 和提供商之间的端到端身份验证 + RBAC\nMastra 现在提供了一个可插拔的身份验证系统 (`@mastra\u002Fcore\u002Fauth`)，以及服务器端的身份验证路由和基于约定的路由权限强制执行机制 (`@mastra\u002Fserver` + 所有服务器适配器)。新的身份验证提供商包 (`@mastra\u002Fauth-cloud`、`@mastra\u002Fauth-studio`、`@mastra\u002Fauth-workos`) 增加了 OAuth\u002FSSO、会话管理以及 RBAC 功能——Studio UI 也新增了受权限控制的身份验证界面和组件。\n\n### 工作流执行路径跟踪 + 并发安全的工作流快照更新\n工作流结果现在包含 `stepExecutionPath`（在执行过程中也可获取，并且在恢复或重启后仍能保留），同时通过去重有效负载来减小执行日志的大小。存储后端新增了原子性的 `updateWorkflowResults` 和 `updateWorkflowState` 方法，并提供了 `supportsConcurrentUpdates()` 检查——这使得并发工作流更新更加安全（例如，Postgres、LibSQL、MongoDB、DynamoDB、Upstash 等都支持此功能；而 ClickHouse、Cloudflare、Lance 等部分后端则明确不支持）。\n\n### 破坏性变更\n- `harness.sendMessage()` 现在使用 `files` 而不是 `images`（支持任何文件类型，保留文件名，并自动解码文本文件）。\n\n## 更改日志\n\n### [@mastra\u002Fcore@1.9.0](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fblob\u002F@mastra\u002Fcore@1.9.0\u002Fpackages\u002Fcore\u002FCHANGELOG.md)\n#### 小幅改进\n\n- 在 `NetworkOptions` 中添加了 `onStepFinish` 和 `onError` 回调，允许在网络执行过程中监控每个 LLM 步骤的进度并进行自定义错误处理。关闭 #13362。([#13370](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fpull\u002F13370))\n\n  **之前：** 无法观察每一步的进度或在网络执行过程中处理错误。\n\n  ```typescript\n  const stream = await agent.network('研究 AI 趋势', {\n    memory: { thread: 'my-thread', resource: 'my-resource' },\n  });\n  ```\n\n  **之后：** `onStepFinish` 和 `onError` 现已在 `NetworkOptions` 中可用。\n\n  ```typescript\n  const stream = await agent.network('研究 AI 趋势', {\n    onStepFinish: event => {\n      
console.log('步骤完成：', event.finishReason, event.usage);\n    },\n    onError: ({ error }) => {\n      console.error('网络错误：', er","2026-03-04T14:48:52",{"id":199,"version":200,"summary_zh":201,"released_at":202},62650,"@mastra\u002Fcore@1.8.0","## 亮点\n\n### 多智能体协调的监督者模式\n全新监督者模式支持通过 `stream()` 和 `generate()` 协调多个智能体，提供委托钩子、迭代监控、完成度评分、内存隔离、工具审批传播、上下文过滤以及退出机制等功能。\n\n### 仅元数据向量查询（可选 `queryVector`）\n向量查询现支持仅基于元数据的检索，将 `queryVector` 设置为可选参数（要求至少提供 `queryVector` 或 `filter` 中的一个）。`@mastra\u002Fpg` 的 `PgVector.query()` 明确支持仅使用过滤器的查询，而其他向量存储则会在不支持仅元数据查询时抛出明确的 `MastraError`。\n\n### 更灵活的评估：`runEvals` 的目标选项\n`runEvals` 新增了 `targetOptions`，用于将执行\u002F运行选项传递给 `agent.generate()` 或 `workflow.run.start()`；同时，还为每个评估数据项提供了 `startOptions`，以便针对特定工作流进行覆盖配置（如 `initialState`）。\n\n### 工作区编辑后的 LSP 诊断\n工作区编辑工具（`write_file`、`edit_file`、`ast_edit`）现在可以在编辑后立即显示语言服务器协议诊断信息（支持 TypeScript、Python\u002FPyright、Go\u002Fgopls、Rust\u002Frust-analyzer、ESLint），帮助在下一次工具调用之前捕获类型或 lint 错误。\n\n### 新增 Blaxel 云沙盒提供商\n`@mastra\u002Fblaxel` 增加了 Blaxel 云沙盒提供商，扩展了在托管环境中执行工作区工具的部署和运行选项。\n\n### 破坏性变更\n- 本变更日志中未列出任何破坏性变更。\n\n## 变更日志\n\n### [@mastra\u002Fcore@1.8.0](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fblob\u002F@mastra\u002Fcore@1.8.0\u002Fpackages\u002Fcore\u002FCHANGELOG.md)\n#### 小幅变更\n\n- 将 `QueryVectorParams` 接口中的 `queryVector` 设置为可选，以支持仅元数据查询。必须至少提供 `queryVector` 或 `filter` 中的一个。并非所有向量存储后端都支持仅元数据查询，请查阅相应存储的文档以获取详细信息。（[#13286](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fpull\u002F13286)）\n\n  同时修复了文档中将 `query()` 参数错误地命名为 `vector` 而非 `queryVector` 的问题。\n\n- 为 `runEvals` 添加了 `targetOptions` 参数，该参数会直接传递给 `agent.generate()`（现代路径）或 `workflow.run.start()`。此外，还在 `RunEvalsDataItem` 中新增了每项的 `startOptions` 字段，用于为每个评估数据项设置特定的工作流选项，例如 `initialState`。（[#13366](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fpull\u002F13366)）\n\n  
**新功能：`targetOptions`**\n\n  可将智能体执行选项（如 `maxSteps`、`modelSettings`、`instructions`）传递至 `agent.generate()`，或将工作流运行选项（如 `perStep`、`outputOptions`）传递至 `workflow.run.start()`：\n\n  ```ts\n  \u002F\u002F 智能体 - 传递 modelSettings 或 maxSteps\n  await runEvals({\n    data,\n    scorers,\n    target: myAgent,\n    targetOptions: { maxSteps: 5, modelSettings: { temperature: 0 } }","2026-03-02T16:29:27",{"id":204,"version":205,"summary_zh":206,"released_at":207},62651,"@mastra\u002Fcore@1.7.0","## 亮点\n\n### 工作区与沙盒中的后台进程管理\n工作区现在支持启动和管理长时间运行的后台进程（通过 `SandboxProcessManager` \u002F `ProcessHandle`），并提供了新工具，如 `execute_command`（`background: true`）、`get_process_output` 和 `kill_process`，同时改进了流式终端风格的 UI。\n\n### 通过 `Workspace.setToolsConfig()` 动态更新运行时工具配置\n您可以在现有工作区实例上动态启用或禁用工具（包括通过传递 `undefined` 来重新启用所有工具），从而在不重新创建工作区的情况下实现计划\u002F只读等更安全的模式。\n\n### 观察记忆可靠性与自省功能的改进\nCore 添加了 `Harness.getObservationalMemoryRecord()` 方法，用于公开当前线程的完整观察记忆记录；同时，`@mastra\u002Fmemory` 修复了观察记忆系统中的重大稳定性问题（共享分词器以防止 OOM 和内存泄漏，并修复了 PostgreSQL 死锁问题，以及在缺少 `threadId` 时提供更清晰的错误信息）。\n\n## 更改日志\n\n### [@mastra\u002Fcore@1.7.0](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fblob\u002F@mastra\u002Fcore@1.7.0\u002Fpackages\u002Fcore\u002FCHANGELOG.md)\n#### 小幅变更\n\n- 向 `Harness` 类添加了 `getObservationalMemoryRecord()` 方法。修复 #13392。([#13395](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fpull\u002F13395))\n\n  该方法提供了对当前线程完整 `ObservationalMemoryRecord` 的公共访问权限，包括 `activeObservations`、`generationCount` 和 `observationTokenCount`。此前，若要访问原始观察文本，必须绕过 Harness 抽象层，直接操作私有存储内部。\n\n  ```typescript\n  const record = await harness.getObservationalMemoryRecord();\n  if (record) {\n    console.log(record.activeObservations);\n  }\n  ```\n\n- 添加了 `Workspace.setToolsConfig()` 方法，用于在不重新创建工作区实例的情况下，在运行时动态更新各工具的配置。传递 `undefined` 
可以重新启用所有工具。([#13439](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fpull\u002F13439))\n\n  ```ts\n  const workspace = new Workspace({ filesystem, sandbox });\n\n  \u002F\u002F 禁用写入工具（例如在计划\u002F只读模式下）\n  workspace.setToolsConfig({\n    mastra_workspace_write_file: { enabled: false },\n    mastra_workspace_edit_file: { enabled: false },\n  });\n\n  \u002F\u002F 重新启用所有工具\n  workspace.setToolsConfig(undefined);\n  ```\n\n- 添加了 `HarnessDisplayState`，使任何 UI 都可以读取单一的状态快照，而无需处理 35 多个独立事件。([#13427](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fpull\u002F13427))\n\n  **原因：** 之前，每个 UI（TUI、Web、桌面）都需要订阅数十个细粒度的 Harness 事件，并独立重建需要显示的内容。这导致状态跟踪重复，以及不同 UI 实现之间的不一致。现在，Harness 维护一个单一的规范显示状态，任何 UI 都可以读取。\n\n  **以前：** UIs 订阅","2026-02-25T09:16:42",{"id":209,"version":210,"summary_zh":211,"released_at":212},62652,"@mastra\u002Fcore@1.6.0","## 亮点\n\n### 基于 AST 的工作区编辑工具 (`mastra_workspace_ast_edit`)\n\n全新 AST 编辑工具支持智能代码转换（重命名标识符、添加\u002F删除\u002F合并导入、基于模式的带元变量替换），并在安装 `@ast-grep\u002Fnapi` 后自动可用，从而实现超越字符串级编辑的稳健重构。\n\n### Harness UX + 内置功能：流式工具参数预览与任务工具\n\n工具渲染器现可通过部分 JSON 解析，实时流式呈现参数预览（包括编辑操作的差异以及写入操作的流式文件内容）；同时，`task_write` 和 `task_check` 已作为内置 Harness 工具，自动注入到代理调用中，以实现结构化的任务跟踪。\n\n### 观察型记忆连续性改进（建议续写 + 当前任务）\n\n观察型记忆现可在不同激活之间保留 `suggestedContinuation` 和 `currentTask`（需存储适配器支持），从而在消息窗口缩小时提升对话连贯性；此外，激活和优先级处理也得到优化，以更好地达成留存目标并避免观察者输出失控。\n\n## 更改日志\n\n### [@mastra\u002Fcore@1.6.0](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fblob\u002F@mastra\u002Fcore@1.6.0\u002Fpackages\u002Fcore\u002FCHANGELOG.md)\n#### 小幅更新\n\n- 新增 AST 编辑工具 (`workspace_ast_edit`)，用于基于 AST 分析的智能代码转换。支持重命名标识符、添加\u002F删除\u002F合并导入，以及带元变量替换的模式匹配查找与替换。当项目中安装了 `@ast-grep\u002Fnapi` 时，该工具会自动可用。([#13233](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fpull\u002F13233))\n\n  **示例：**\n\n  ```ts\n  const workspace = new Workspace({\n    
filesystem: new LocalFilesystem({ basePath: '\u002Fmy\u002Fproject' }),\n  });\n  const tools = createWorkspaceTools(workspace);\n\n  \u002F\u002F 重命名所有出现的标识符\n  await tools['mastra_workspace_ast_edit'].execute({\n    path: '\u002Fsrc\u002Futils.ts',\n    transform: 'rename',\n    targetName: 'oldName',\n    newName: 'newName',\n  });\n\n  \u002F\u002F 添加一个导入（若已有相同模块的导入，则合并）\n  await tools['mastra_workspace_ast_edit'].execute({\n    path: '\u002Fsrc\u002Fapp.ts',\n    transform: 'add-import',\n    importSpec: { module: 'react', names: ['useState', 'useEffect'] },\n  });\n\n  \u002F\u002F 带元变量的模式替换\n  await tools['mastra_workspace_ast_edit'].execute({\n    path: '\u002Fsrc\u002Fapp.ts',\n    pattern: 'console.log($ARG)',\n    replacement: 'logger.debug($ARG)',\n  });\n  ```\n\n- 在所有工具渲染器中新增流式工具参数预览功能。模型生成工具名称、文件路径和命令时，这些信息会立即显示，而无需等待整个工具调用完成。([#13328](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fpull\u002F13328))\n  - **通用工具**会在参数流式输入时实时显示键值对形式的参数预览。\n  - **编辑工具**重新","2026-02-24T16:55:36",{"id":214,"version":215,"summary_zh":216,"released_at":217},62653,"@mastra\u002Fcore@1.5.0","## Highlights\r\n\r\n### Core Harness: A Reusable Orchestration Layer for Agent Apps\r\n\r\nA new generic `Harness` in `@mastra\u002Fcore` provides a foundation for building agent-powered applications with modes, state management, built-in tools (`ask_user`, `submit_plan`), subagent support, Observational Memory integration, model discovery, permission-aware tool approval, and event-driven\u002Fthread\u002Fheartbeat management.\r\n\r\n### Workspace Capability Upgrades (Security, Discovery, and Search)\r\n\r\nWorkspaces gain **least-privilege filesystem access** via `LocalFilesystem.allowedPaths` (plus runtime updates with `setAllowedPaths()`), expanded **glob-based configuration** for file listing\u002Findexing\u002Fskill discovery, and a new regex search tool `mastra_workspace_grep` to complement semantic search.\r\n\r\n### Better Tool I\u002FO and Streaming Behavior in the 
Agent Loop\r\n\r\nTools can now define `toModelOutput` to transform tool results into model-friendly content while preserving raw outputs in storage, and workspace tools now return raw text (moving structured metadata to `data-workspace-metadata` stream chunks) to reduce token usage. Streaming reliability also improves (correct chunk types for tool errors, onChunk receives raw Mastra chunks, agent loop continues after tool errors).\r\n\r\n## Changelog\r\n\r\n### [@mastra\u002Fcore@1.5.0](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fblob\u002F@mastra\u002Fcore@1.5.0\u002Fpackages\u002Fcore\u002FCHANGELOG.md)\r\n\r\n#### Minor Changes\r\n\r\n- Added `allowedPaths` option to `LocalFilesystem` for granting agents access to specific directories outside `basePath` without disabling containment. ([#13054](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fpull\u002F13054))\r\n\r\n  ```typescript\r\n  const workspace = new Workspace({\r\n    filesystem: new LocalFilesystem({\r\n      basePath: '.\u002Fworkspace',\r\n      allowedPaths: ['\u002Fhome\u002Fuser\u002F.config', '\u002Fhome\u002Fuser\u002Fdocuments'],\r\n    }),\r\n  });\r\n  ```\r\n\r\n  Allowed paths can be updated at runtime using `setAllowedPaths()`:\r\n\r\n  ```typescript\r\n  workspace.filesystem.setAllowedPaths(prev => [...prev, '\u002Fhome\u002Fuser\u002Fnew-dir']);\r\n  ```\r\n\r\n  This is the recommended approach for least-privilege access — agents can only reach the specific directories you allow, while containment stays enabled for everything else.\r\n\r\n- Added generic Harness class for orchestrating agents with modes, state management, built-in tools (ask_user, submit_plan), subagent support, Observational Memory integration, model discovery, and permission-aware tool approval. The Harness provides a reusable foundation for building agent-powered applications with features like thread management, heartbeat monitoring, and event-driven architecture. 
([#13245](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fpull\u002F13245))\r\n\r\n- Added glob pattern support for workspace configuration. The `list_files` tool now accepts a `pattern` parameter for filtering files (e.g., `**\u002F*.ts`, `src\u002F**\u002F*.test.ts`). `autoIndexPaths` accepts glob patterns like `.\u002Fdocs\u002F**\u002F*.md` to selectively index files for BM25 search. Skills paths support globs like `.\u002F**\u002Fskills` to discover skill directories at any depth, including dot-directories like `.agents\u002Fskills`. ([#13023](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fpull\u002F13023))\r\n\r\n  **`list_files` tool with pattern:**\r\n\r\n  ```typescript\r\n  \u002F\u002F Agent can now use glob patterns to filter files\r\n  const result = await workspace.tools.workspace_list_files({\r\n    path: '\u002F',\r\n    pattern: '**\u002F*.test.ts',\r\n  });\r\n  ```\r\n\r\n  **`autoIndexPaths` with globs:**\r\n\r\n  ```typescript\r\n  const workspace = new Workspace({\r\n    filesystem: new LocalFilesystem({ basePath: '.\u002Fproject' }),\r\n    bm25: true,\r\n    \u002F\u002F Only index markdown files under .\u002Fdocs\r\n    autoIndexPaths: ['.\u002Fdocs\u002F**\u002F*.md'],\r\n  });\r\n  ```\r\n\r\n  **Skills paths with globs:**\r\n\r\n  ```typescript\r\n  const workspace = new Workspace({\r\n    filesystem: new LocalFilesystem({ basePath: '.\u002Fproject' }),\r\n    \u002F\u002F Discover any directory named 'skills' within 4 levels of depth\r\n    skills: ['.\u002F**\u002Fskills'],\r\n  });\r\n  ```\r\n\r\n  Note: Skills glob discovery walks up to 4 directory levels deep from the glob's static prefix. 
Use more specific patterns like `.\u002Fsrc\u002F**\u002Fskills` to narrow the search scope for large workspaces.\r\n\r\n- Added direct skill path discovery — you can now pass a path directly to a skill directory or SKILL.md file in the workspace skills configuration (e.g., `skills: ['\u002Fpath\u002Fto\u002Fmy-skill']` or `skills: ['\u002Fpath\u002Fto\u002Fmy-skill\u002FSKILL.md']`). Previously only parent directories were supported. Also improved error handling when a configured skills path is inaccessible (e.g., permission denied), logging a warning instead of breaking discovery for all skills. ([#13031](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fpull\u002F13031))\r\n\r\n- Add optional `instruction` field to ObservationalMemory config types ([#13240](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fpull\u002F13240))\r\n\r\n  Adds `instruction?: string` to `ObservationalMemoryObservationConfig` and `ObservationalMemoryReflectionConfig` interfaces, allowing external consumers to pass custom instructions to observational memory.\r\n\r","2026-02-20T09:07:26",{"id":219,"version":220,"summary_zh":221,"released_at":222},62654,"@mastra\u002Fcore@1.4.0","## Highlights\r\n\r\n### Datasets & Experiments (core + server + Studio UI)\r\n\r\nMastra now has first-class evaluation primitives: versioned Datasets (with JSON Schema validation and SCD-2 item versioning) and Experiments that run agents against datasets with configurable scorers and result tracking. This ships end-to-end across `@mastra\u002Fcore` APIs, new `\u002Fdatasets` REST endpoints in `@mastra\u002Fserver`, and a full Studio UI for managing datasets, triggering experiments, and comparing results.\r\n\r\n### Workspace & Filesystem Lifecycle + Safer Filesystem Introspection\r\n\r\nWorkspace lifecycle interfaces were split into `FilesystemLifecycle` and `SandboxLifecycle`, and `MastraFilesystem` now supports `onInit`\u002F`onDestroy` callbacks. 
Filesystem path resolution and metadata were improved (generic `FilesystemInfo\u003CTMetadata>`, provider-specific metadata, safer instructions for uncontained filesystems) and filesystem info is now exposed via the workspaces API response.\r\n\r\n### Workflow foreach Progress Streaming\r\n\r\nWorkflows now emit a `workflow-step-progress` stream event for `foreach` steps (completed\u002Ftotal, current index, per-iteration status\u002Foutput), supported by both execution engines. Studio renders real-time progress bars, and `@mastra\u002Freact` watch hooks now accumulate `foreachProgress` into step state.\r\n\r\n### Breaking Changes\r\n\r\n- `@mastra\u002Fmemory`: `observe()` now takes a single object parameter (e.g., `observe({ threadId, resourceId })`) instead of positional arguments.\r\n\r\n## Changelog\r\n\r\n### [@mastra\u002Fcore@1.4.0](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fblob\u002F@mastra\u002Fcore@1.4.0\u002Fpackages\u002Fcore\u002FCHANGELOG.md)\r\n\r\n### Minor Changes\r\n\r\n- Added Datasets and Experiments to core. Datasets let you store and version collections of test inputs with JSON Schema validation. Experiments let you run AI outputs against dataset items with configurable scorers to track quality over time. ([#12747](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fpull\u002F12747))\r\n\r\n  **New exports from `@mastra\u002Fcore\u002Fdatasets`:**\r\n  - `DatasetsManager` — orchestrates dataset CRUD, item versioning (SCD-2), and experiment execution\r\n  - `Dataset` — single-dataset handle for adding items and running experiments\r\n\r\n  **New storage domains:**\r\n  - `DatasetsStorage` — abstract base class for dataset persistence (datasets, items, versions)\r\n  - `ExperimentsStorage` — abstract base class for experiment lifecycle and result tracking\r\n\r\n  **Example:**\r\n\r\n  ```ts\r\n  import { Mastra } from \"@mastra\u002Fcore\";\r\n\r\n  const mastra = new Mastra({\r\n    \u002F* ... 
*\u002F\r\n  });\r\n\r\n  const dataset = await mastra.datasets.create({ name: \"my-eval-set\" });\r\n  await dataset.addItems([{ input: { query: \"What is 2+2?\" }, groundTruth: { answer: \"4\" } }]);\r\n\r\n  const result = await dataset.runExperiment({\r\n    targetType: \"agent\",\r\n    targetId: \"my-agent\",\r\n    scorerIds: [\"accuracy\"]\r\n  });\r\n  ```\r\n\r\n- Fix LocalFilesystem.resolvePath handling of absolute paths and improve filesystem info. ([#12971](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fpull\u002F12971))\r\n  - Fix absolute path resolution: paths were incorrectly stripped of leading slashes and resolved relative to basePath, causing PermissionError for valid paths (e.g. skills processor accessing project-local skills directories).\r\n  - Make `FilesystemInfo` generic (`FilesystemInfo\u003CTMetadata>`) so providers can type their metadata.\r\n  - Move provider-specific fields (`basePath`, `contained`) to metadata in LocalFilesystem.getInfo().\r\n  - Update LocalFilesystem.getInstructions() for uncontained filesystems to warn agents against listing \u002F.\r\n  - Use FilesystemInfo type in WorkspaceInfo instead of duplicated inline shape.\r\n\r\n- Add `workflow-step-progress` stream event for foreach workflow steps. Each iteration emits a progress event with `completedCount`, `totalCount`, `currentIndex`, `iterationStatus` (`success` | `failed` | `suspended`), and optional `iterationOutput`. Both the default and evented execution engines emit these events. 
([#12838](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fpull\u002F12838))\r\n\r\n  The Mastra Studio UI now renders a progress bar with an N\u002Ftotal counter on foreach nodes, updating in real time as iterations complete:\r\n\r\n  ```ts\r\n  \u002F\u002F Consuming progress events from the workflow stream\r\n  const run = workflow.createRun();\r\n  const result = await run.start({ inputData });\r\n  const stream = result.stream;\r\n\r\n  for await (const chunk of stream) {\r\n    if (chunk.type === \"workflow-step-progress\") {\r\n      console.log(`${chunk.payload.completedCount}\u002F${chunk.payload.totalCount} - ${chunk.payload.iterationStatus}`);\r\n    }\r\n  }\r\n  ```\r\n\r\n  **MCP Client Storage**\r\n\r\n  New storage domain for persisting MCP client configurations with CRUD operations. Each MCP client can contain multiple servers with independent tool selection:\r\n\r\n  ```ts\r\n  \u002F\u002F Store an MCP client with multiple servers\r\n  await storage.mcpClients.create({\r\n    id: \"my-mcp\",\r\n    name: \"My MCP Client\",\r\n    servers: {\r\n      \"github-server\": { url: \"https:\u002F\u002Fmcp.github.com\u002Fsse\" },\r\n      \"slack-server\": { url: \"h","2026-02-16T16:45:52",{"id":224,"version":225,"summary_zh":226,"released_at":227},62655,"@mastra\u002Fcore@1.3.0","## Highlights\r\n\r\n### Observational Memory Async Buffering (default-on) + New Streaming Events\r\n\r\nObservational memory now buffers background observations\u002Freflections by default to avoid blocking as conversations grow, and introduces structured streaming status\u002Fevents (`data-om-status` plus buffering start\u002Fend\u002Ffailed markers) for better UI\u002Ftelemetry.\r\n\r\n### Workspace Mounts (CompositeFilesystem)\r\n\r\nWorkspaces can now mount multiple filesystem providers (S3\u002FGCS\u002Flocal\u002Fetc.) 
into a single unified directory tree via `CompositeFilesystem`, so agents and tools can access files across backends through one path structure.\r\n\r\n\r\n## Changelog\r\n\r\n### [@mastra\u002Fcore@1.3.0](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fblob\u002F@mastra\u002Fcore@1.3.0\u002Fpackages\u002Fcore\u002FCHANGELOG.md)\r\n\r\n### Minor Changes\r\n\r\n- Added mount support to workspaces, so you can combine multiple storage providers (S3, GCS, local disk, etc.) under a single directory tree. This lets agents access files from different sources through one unified filesystem. ([#12851](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fpull\u002F12851))\r\n\r\n  **Why:** Previously a workspace could only use one filesystem. With mounts, you can organize files from different providers under different paths — for example, S3 data at `\u002Fdata` and GCS models at `\u002Fmodels` — without agents needing to know which provider backs each path.\r\n\r\n  **What's new:**\r\n  - Added `CompositeFilesystem` for combining multiple filesystems under one tree\r\n  - Added descriptive error types for sandbox and mount failures (e.g., `SandboxTimeoutError`, `MountError`)\r\n  - Improved `MastraFilesystem` and `MastraSandbox` base classes with safer concurrent lifecycle handling\r\n\r\n  ```ts\r\n  import { Workspace, CompositeFilesystem } from \"@mastra\u002Fcore\u002Fworkspace\";\r\n\r\n  \u002F\u002F Mount multiple filesystems under one tree\r\n  const composite = new CompositeFilesystem({\r\n    mounts: {\r\n      \"\u002Fdata\": s3Filesystem,\r\n      \"\u002Fmodels\": gcsFilesystem\r\n    }\r\n  });\r\n\r\n  const workspace = new Workspace({\r\n    filesystem: composite,\r\n    sandbox: e2bSandbox\r\n  });\r\n  ```\r\n\r\n Stored agent fields (`tools`, `model`, `workflows`, `agents`, `memory`, `scorers`, `inputProcessors`, `outputProcessors`, `defaultOptions`) can now be configured as conditional variants with rule groups that evaluate against 
request context at runtime. All matching variants accumulate — arrays are concatenated and objects are shallow-merged — so agents dynamically compose their configuration based on the incoming request context.\r\n\r\n  **New `requestContextSchema` field**\r\n\r\n  Stored agents now accept an optional `requestContextSchema` (JSON Schema) that is converted to a Zod schema and passed to the Agent constructor, enabling request context validation.\r\n\r\n  **Conditional field example**\r\n\r\n  ```ts\r\n  await agentsStore.create({\r\n    agent: {\r\n      id: \"my-agent\",\r\n      name: \"My Agent\",\r\n      instructions: \"You are a helpful assistant\",\r\n      model: { provider: \"openai\", name: \"gpt-4\" },\r\n      tools: [\r\n        { value: { \"basic-tool\": {} } },\r\n        {\r\n          value: { \"premium-tool\": {} },\r\n          rules: {\r\n            operator: \"AND\",\r\n            conditions: [{ field: \"tier\", operator: \"equals\", value: \"premium\" }]\r\n          }\r\n        }\r\n      ],\r\n      requestContextSchema: {\r\n        type: \"object\",\r\n        properties: { tier: { type: \"string\" } }\r\n      }\r\n    }\r\n  });\r\n  ```\r\n\r\n- Add native `@ai-sdk\u002Fgroq` support to model router. Groq models now use the official AI SDK package instead of falling back to OpenAI-compatible mode. 
([#12741](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fpull\u002F12741))\r\n\r\n  - Added a new `scorer-definitions` storage domain for storing LLM-as-judge and preset scorer configurations in the database\r\n  - Introduced a `VersionedStorageDomain` generic base class that unifies `AgentsStorage`, `PromptBlocksStorage`, and `ScorerDefinitionsStorage` with shared CRUD methods (`create`, `getById`, `getByIdResolved`, `update`, `delete`, `list`, `listResolved`)\r\n  - Flattened stored scorer type system: replaced nested `preset`\u002F`customLLMJudge` config with top-level `type`, `instructions`, `scoreRange`, and `presetConfig` fields\r\n  - Refactored `MastraEditor` to use a namespace pattern (`editor.agent.*`, `editor.scorer.*`, `editor.prompt.*`) backed by a `CrudEditorNamespace` base class with built-in caching and an `onCacheEvict` hook\r\n  - Added `rawConfig` support to `MastraBase` and `MastraScorer` via `toRawConfig()`, so hydrated primitives carry their stored configuration\r\n  - Added prompt block and scorer registration to the `Mastra` class (`addPromptBlock`, `removePromptBlock`, `addScorer`, `removeScorer`)\r\n\r\n  **Creating a stored scorer (LLM-as-judge):**\r\n\r\n  ```ts\r\n  const scorer = await editor.scorer.create({\r\n    id: \"my-scorer\",\r\n    name: \"Response Quality\",\r\n    type: \"llm-judge\",\r\n    instructions: \"Evaluate the response for accuracy and helpfulness.\",\r\n    model: { provider: \"openai\", name: \"gpt-4o\" },\r\n    scoreRange: { min: 0, max: 1 }\r\n  });\r\n  ```\r\n\r\n  **Ret","2026-02-11T17:24:13",{"id":229,"version":230,"summary_zh":231,"released_at":232},62656,"mastra@1.2.0","## Highlights\r\n\r\n### Observational Memory for long-running agents\r\n\r\nObservational Memory is a new Mastra Memory feature which makes small context windows behave like large ones, while retaining long-term memory. It compresses conversations into dense observations logs (5–40x smaller than raw messages). 
When observations grow too long, they're condensed into reflections. Supports thread and resource scopes. It requires the latest versions of `@mastra\u002Fcore`, `@mastra\u002Fmemory`, and `mastra`, plus one of `@mastra\u002Fpg`, `@mastra\u002Flibsql`, or `@mastra\u002Fmongodb`.\r\n\r\n### Skills.sh ecosystem integration (server + UI + CLI)\r\n\r\n`@mastra\u002Fserver` adds skills.sh proxy endpoints (search\u002Fbrowse\u002Fpreview\u002Finstall\u002Fupdate\u002Fremove), Studio adds an “Add Skill” dialog for browsing\u002Finstalling skills, and the CLI wizard can optionally install Mastra skills during `create-mastra` (with non-interactive `--skills` support).\r\n\r\n### Dynamic tool discovery with `ToolSearchProcessor`\r\n\r\nAdds `ToolSearchProcessor` to let agents search and load tools on demand via built-in `search_tools` and `load_tool` meta-tools, dramatically reducing context usage for large tool libraries (e.g., MCP\u002Fintegration-heavy setups).\r\n\r\n### New `@mastra\u002Feditor`: store, version, and resolve agents from a database\r\n\r\nIntroduces `@mastra\u002Feditor` for persisting complete agent configurations (instructions, models, tools, workflows, nested agents, processors, memory), managing versions\u002Factivation, and instantiating dependencies from the Mastra registry with caching and type-safe serialization.\r\n\r\n### Breaking Changes\r\n\r\n- `@mastra\u002Felasticsearch`: vector document IDs now come from Elasticsearch `_id`; stored `id` fields are no longer written (breaking if you relied on `source.id`).\r\n\r\n## Changelog\r\n\r\n### [@mastra\u002Fcore@1.2.0](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fblob\u002F@mastra\u002Fcore@1.2.0\u002Fpackages\u002Fcore\u002FCHANGELOG.md)\r\n\r\n- Update provider registry and model documentation with latest models and providers\r\n\r\n\tFixes: [e6fc281](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fcommit\u002Fe6fc281896a3584e9e06465b356a44fe7faade65)\r\n\r\n- Fixed processors 
returning `{ tools: {}, toolChoice: 'none' }` being ignored. Previously, when a processor returned empty tools with an explicit `toolChoice: 'none'` to prevent tool calls, the toolChoice was discarded and defaulted to 'auto'. This fix preserves the explicit 'none' value, enabling patterns like ensuring a final text response when `maxSteps` is reached.\r\n\r\n\tFixes: [#12601](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fpull\u002F12601)\r\n\r\n- Internal changes to enable observational memory\r\n\r\n- Internal changes to enable `@mastra\u002Feditor`\r\n\r\n- Fix moonshotai\u002Fkimi-k2.5 multi-step tool calling failing with \"reasoning_content is missing in assistant tool call message\"\r\n\r\n- Changed moonshotai and moonshotai-cn (China version) providers to use Anthropic-compatible API endpoints instead of OpenAI-compatible\r\n  - moonshotai: `https:\u002F\u002Fapi.moonshot.ai\u002Fanthropic\u002Fv1`\r\n  - moonshotai-cn: `https:\u002F\u002Fapi.moonshot.cn\u002Fanthropic\u002Fv1`\r\n\tThis properly handles `reasoning_content` for the kimi-k2.5 model\r\n\r\n\tFixes: [#12530](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fpull\u002F12530)\r\n\r\n- Fixed custom input processors disabling workspace skill tools in generate() and stream(). Custom processors now replace only the processors you configured, while memory and skills remain available. 
Fixes #12612.\r\n\r\n\tFixes: [#12676](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fpull\u002F12676)\r\n\r\n- **Fixed**\r\n  Workspace search index names now use underscores so they work with SQL-based vector stores (PgVector, LibSQL).\r\n\r\n\t**Added**\r\n\tYou can now set a custom index name with `searchIndexName`.\r\n\t\r\n\t**Why**\r\n\tSome SQL vector stores reject hyphens in index names.\r\n\t\r\n\t**Example**\r\n\t\r\n\t```ts\r\n\t\u002F\u002F Before this fix, this call failed with PgVector (the derived index name contained hyphens)\r\n\tnew Workspace({ id: \"my-workspace\", vectorStore, embedder });\r\n\t\r\n\t\u002F\u002F After this fix, the identical code works with all vector stores (underscores are used instead)\r\n\tnew Workspace({ id: \"my-workspace\", vectorStore, embedder });\r\n\t\r\n\t\u002F\u002F Or use a custom index name\r\n\tnew Workspace({ vectorStore, embedder, searchIndexName: \"my_workspace_vectors\" });\r\n\t```\r\n\t\r\n\tFixes: [#12673](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fpull\u002F12673)\r\n\r\n- Added logger support to Workspace filesystem and sandbox providers. Providers extending MastraFilesystem or MastraSandbox now automatically receive the Mastra logger for consistent logging of file operations and command executions.\r\n\r\n\tFixes: [#12606](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fpull\u002F12606)\r\n\r\n- Added ToolSearchProcessor for dynamic tool discovery.\r\n\r\n\tAgents can now discover and load tools on demand instead of having all tools available upfront. 
This reduces context token usage by ~94% when working with large tool libraries.\r\n\t\r\n\t**New API:**\r\n\t\r\n\t```typescript\r\n\timport { ToolSearchProcessor } from \"@mastra\u002Fcore\u002Fprocessors\";\r\n\timport { Agent } from \"@mastra\u002Fcore\";\r\n\t\r\n\t\u002F\u002F Create a processor with searchable tools\r\n\tconst toolSearch = new ToolSearchProcessor({\r\n\t  tools: {\r\n\t    createIssue: githubTools.createIssue,\r\n\t    sendEmail: emailTools.send\r\n\t    \u002F\u002F ..","2026-02-04T23:13:20",{"id":234,"version":235,"summary_zh":236,"released_at":237},62657,"mastra@1.1.0","## Highlights\r\n\r\n### Unified Workspace API (Filesystem + Sandbox Execution + Search + Skills)\r\n\r\nA new `Workspace` capability unifies agent-accessible filesystem operations, sandboxed command\u002Fcode execution, keyword\u002Fsemantic\u002Fhybrid search, and SKILL.md discovery with safety controls (read-only, approval flows, read-before-write guards). The Workspace is exposed end-to-end: core Workspace class (`@mastra\u002Fcore\u002Fworkspace`), server API endpoints (`\u002Fworkspaces\u002F...`), and new `@mastra\u002Fclient-js` workspace client methods (files, skills, references, search).\r\n\r\n### Observability & Streaming Improvements (Trace Status, Better Spans, Tool Approval Tracing)\r\n\r\nTracing is more actionable: `listTraces` now returns a `status` (`success|error|running`), spans are cleaner (inherit entity metadata, remove internal spans, emit model chunk spans for all streaming chunks), and tool approval requests are visible in traces. 
Token accounting for Langfuse\u002FPostHog is corrected (cached tokens separated) and default tracing tags are preserved across exporters.\r\n\r\n### Server Adapters: Serverless MCP Support + Explicit Route Auth Controls\r\n\r\nExpress\u002FFastify\u002FHono\u002FKoa adapters gain `mcpOptions` (notably `serverless: true`) to run MCP HTTP transport statelessly in serverless\u002Fedge environments without overriding response handling. Adapters and `@mastra\u002Fserver` also add explicit `requiresAuth` per route (defaulting to protected), improved custom-route auth enforcement (including path params), and corrected route prefix replacement\u002Fnormalization.\r\n\r\n### requestContextSchema: Runtime-Validated, Type-Safe Request Context Everywhere\r\n\r\nTools, agents, workflows, and steps can now define a `requestContextSchema` (Zod) to validate required context at runtime and get typed access in execution; `RequestContext.all` provides convenient access to all validated values. This also flows into Studio UX (Request Context tab\u002Fforms) and fixes\u002Fimproves propagation through agent networks and nested workflow execution for better observability and analytics.\r\n\r\n### Breaking Changes\r\n\r\n- Google embedding model router removes deprecated `text-embedding-004`; use `google\u002Fgemini-embedding-001` instead.\r\n\r\n## Changelog\r\n\r\n### [@mastra\u002Fcore@1.1.0](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fblob\u002F@mastra\u002Fcore@1.1.0\u002Fpackages\u002Fcore\u002FCHANGELOG.md)\r\n- dependencies updates:\r\n  - Updated dependency [`@isaacs\u002Fttlcache@^2.1.4` ↗︎](https:\u002F\u002Fwww.npmjs.com\u002Fpackage\u002F@isaacs\u002Fttlcache\u002Fv\u002F2.1.4) (from `^1.4.1`, in `dependencies`)\r\n\r\nFixes: [#10184](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fpull\u002F10184)\r\n- Update provider registry and model documentation with latest models and providers\r\n\r\nFixes: 
[1cf5d2e](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fcommit\u002F1cf5d2ea1b085be23e34fb506c80c80a4e6d9c2b)\r\n- Fixed skill loading error caused by Zod version conflicts between v3 and v4. Replaced Zod schemas with plain TypeScript validation functions in skill metadata validation.\r\n\r\nFixes: [#12485](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fpull\u002F12485)\r\n- Restructured stored agents to use a thin metadata record with versioned configuration snapshots.\r\n\r\nThe agent record now only stores metadata fields (id, status, activeVersionId, authorId, metadata, timestamps). All configuration fields (name, instructions, model, tools, etc.) live exclusively in version snapshot rows, enabling full version history and rollback.\r\n\r\n**Key changes:**\r\n\r\n- Stored Agent records are now thin metadata-only (StorageAgentType)\r\n- All config lives in version snapshots (StorageAgentSnapshotType)\r\n- New resolved type (StorageResolvedAgentType) merges agent record + active version config\r\n- Renamed `ownerId` to `authorId` for multi-tenant filtering\r\n- Changed `memory` field type from `string` to `Record\u003Cstring, unknown>`\r\n- Added `status` field ('draft' | 'published') to agent records\r\n- Flattened CreateAgent\u002FUpdateAgent input types (config fields at top level, no nested snapshot)\r\n- Version config columns are top-level in the agent_versions table (no single snapshot jsonb column)\r\n- List endpoints return resolved agents (thin record + active version config)\r\n- Auto-versioning on update with retention limits and race condition handling\r\n\r\nFixes: [#12488](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fpull\u002F12488)\r\n- Fix model router routing providers that use non-default AI SDK packages (e.g. `@ai-sdk\u002Fanthropic`, `@ai-sdk\u002Fopenai`) to their correct SDK instead of falling back to `openai-compatible`. 
Add `cerebras`, `togetherai`, and `deepinfra` as native SDK providers.\r\n\r\nFixes: [#12450](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fpull\u002F12450)\r\n- Make `suspendedToolRunId` nullable so tool input validation no longer fails on null values\r\n\r\nFixes: [#12303](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fpull\u002F12303)\r\n- Fixed agent.network() to properly pass requestContext to workflow runs. Workflow execution now includes user metadata (userId, resourceId) for observability and analytics. (Fixes #12330)\r\n\r\nFixes: [#12379](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fpull\u002F12379)\r\n- Added dynamic agent management wit","2026-01-30T16:26:51",{"id":239,"version":240,"summary_zh":241,"released_at":242},62658,"@mastra\u002Fcore@1.0.0-beta.21","# Changelog\r\n\r\n## [@mastra\u002Fai-sdk@1.0.0-beta.14](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fblob\u002F8edb59796838fbece6868afc558af4c3581c0e9b\u002Fclient-sdks\u002Fai-sdk\u002FCHANGELOG.md)\r\n\r\n### Patch Changes\r\n\r\n- Add structured output support to agent.network() method. Users can now pass a `structuredOutput` option with a Zod schema to get typed results from network execution. ([#11701](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fpull\u002F11701))\r\n\r\n  The stream exposes `.object` (Promise) and `.objectStream` (ReadableStream) getters, and emits `network-object` and `network-object-result` chunk types. 
The structured output is generated after task completion using the provided schema.\r\n\r\n  ```typescript\r\n  const stream = await agent.network('Research AI trends', {\r\n    structuredOutput: {\r\n      schema: z.object({\r\n        summary: z.string(),\r\n        recommendations: z.array(z.string()),\r\n      }),\r\n    },\r\n  });\r\n\r\n  const result = await stream.object;\r\n  \u002F\u002F result is typed: { summary: string; recommendations: string[] }\r\n  ```\r\n\r\n---\r\n\r\n## [@mastra\u002Farize@1.0.0-beta.12](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fblob\u002F8edb59796838fbece6868afc558af4c3581c0e9b\u002Fobservability\u002Farize\u002FCHANGELOG.md)\r\n\r\n### Minor Changes\r\n\r\n- feat(observability): add zero-config environment variable support for all exporters ([#11686](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fpull\u002F11686))\r\n\r\n  All observability exporters now support zero-config setup via environment variables. Set the appropriate environment variables and instantiate exporters with no configuration:\r\n  - **Langfuse**: `LANGFUSE_PUBLIC_KEY`, `LANGFUSE_SECRET_KEY`, `LANGFUSE_BASE_URL`\r\n  - **Braintrust**: `BRAINTRUST_API_KEY`, `BRAINTRUST_ENDPOINT`\r\n  - **PostHog**: `POSTHOG_API_KEY`, `POSTHOG_HOST`\r\n  - **Arize\u002FPhoenix**: `ARIZE_SPACE_ID`, `ARIZE_API_KEY`, `ARIZE_PROJECT_NAME`, `PHOENIX_ENDPOINT`, `PHOENIX_API_KEY`, `PHOENIX_PROJECT_NAME`\r\n  - **OTEL Providers**:\r\n    - Dash0: `DASH0_API_KEY`, `DASH0_ENDPOINT`, `DASH0_DATASET`\r\n    - SigNoz: `SIGNOZ_API_KEY`, `SIGNOZ_REGION`, `SIGNOZ_ENDPOINT`\r\n    - New Relic: `NEW_RELIC_LICENSE_KEY`, `NEW_RELIC_ENDPOINT`\r\n    - Traceloop: `TRACELOOP_API_KEY`, `TRACELOOP_DESTINATION_ID`, `TRACELOOP_ENDPOINT`\r\n    - Laminar: `LMNR_PROJECT_API_KEY`, `LAMINAR_ENDPOINT`, `LAMINAR_TEAM_ID`\r\n\r\n  Example usage:\r\n\r\n  ```typescript\r\n  \u002F\u002F Zero-config - reads from environment variables\r\n  new LangfuseExporter();\r\n  new 
BraintrustExporter();\r\n  new PosthogExporter();\r\n  new ArizeExporter();\r\n  new OtelExporter({ provider: { signoz: {} } });\r\n  ```\r\n\r\n  Explicit configuration still works and takes precedence over environment variables.\r\n\r\n---\r\n\r\n## [@mastra\u002Fbraintrust@1.0.0-beta.12](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fblob\u002F8edb59796838fbece6868afc558af4c3581c0e9b\u002Fobservability\u002Fbraintrust\u002FCHANGELOG.md)\r\n\r\n### Minor Changes\r\n\r\n- feat(observability): add zero-config environment variable support for all exporters ([#11686](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fpull\u002F11686))\r\n\r\n  All observability exporters now support zero-config setup via environment variables. Set the appropriate environment variables and instantiate exporters with no configuration:\r\n  - **Langfuse**: `LANGFUSE_PUBLIC_KEY`, `LANGFUSE_SECRET_KEY`, `LANGFUSE_BASE_URL`\r\n  - **Braintrust**: `BRAINTRUST_API_KEY`, `BRAINTRUST_ENDPOINT`\r\n  - **PostHog**: `POSTHOG_API_KEY`, `POSTHOG_HOST`\r\n  - **Arize\u002FPhoenix**: `ARIZE_SPACE_ID`, `ARIZE_API_KEY`, `ARIZE_PROJECT_NAME`, `PHOENIX_ENDPOINT`, `PHOENIX_API_KEY`, `PHOENIX_PROJECT_NAME`\r\n  - **OTEL Providers**:\r\n    - Dash0: `DASH0_API_KEY`, `DASH0_ENDPOINT`, `DASH0_DATASET`\r\n    - SigNoz: `SIGNOZ_API_KEY`, `SIGNOZ_REGION`, `SIGNOZ_ENDPOINT`\r\n    - New Relic: `NEW_RELIC_LICENSE_KEY`, `NEW_RELIC_ENDPOINT`\r\n    - Traceloop: `TRACELOOP_API_KEY`, `TRACELOOP_DESTINATION_ID`, `TRACELOOP_ENDPOINT`\r\n    - Laminar: `LMNR_PROJECT_API_KEY`, `LAMINAR_ENDPOINT`, `LAMINAR_TEAM_ID`\r\n\r\n  Example usage:\r\n\r\n  ```typescript\r\n  \u002F\u002F Zero-config - reads from environment variables\r\n  new LangfuseExporter();\r\n  new BraintrustExporter();\r\n  new PosthogExporter();\r\n  new ArizeExporter();\r\n  new OtelExporter({ provider: { signoz: {} } });\r\n  ```\r\n\r\n  Explicit configuration still works and takes precedence over environment 
variables.\r\n\r\n---\r\n\r\n## [@mastra\u002Fcore@1.0.0-beta.21](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fblob\u002F8edb59796838fbece6868afc558af4c3581c0e9b\u002Fpackages\u002Fcore\u002FCHANGELOG.md)\r\n\r\n### Minor Changes\r\n\r\n- Add structured output support to agent.network() method. Users can now pass a `structuredOutput` option with a Zod schema to get typed results from network execution. ([#11701](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fpull\u002F11701))\r\n\r\n  The stream exposes `.object` (Promise) and `.objectStream` (ReadableStream) getters, and emits `network-object` and `network-object-result` chunk types. The structured output is generated after task completion using the provided schema.\r\n\r\n  ```typescript\r\n  const stream = await agent.network('Research AI trends', {\r\n    structuredOutput: {\r\n      schema: z.object({\r\n        summary: z.string(","2026-01-10T17:10:20",{"id":244,"version":245,"summary_zh":246,"released_at":247},62659,"@mastra\u002Fcore@1.0.0-beta.20","# Changelog\r\n\r\n## [@mastra\u002Fai-sdk@1.0.0-beta.13](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fblob\u002Fb3e7a74b36001336085d9b02be9bfafca8762f3f\u002Fclient-sdks\u002Fai-sdk\u002FCHANGELOG.md)\r\n\r\n### Patch Changes\r\n\r\n- Add embedded documentation support for Mastra packages ([#11472](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fpull\u002F11472))\r\n\r\n  Mastra packages now include embedded documentation in the published npm package under `dist\u002Fdocs\u002F`. 
This enables coding agents and AI assistants to understand and use the framework by reading documentation directly from `node_modules`.\r\n\r\n  Each package includes:\r\n  - **SKILL.md** - Entry point explaining the package's purpose and capabilities\r\n  - **SOURCE_MAP.json** - Machine-readable index mapping exports to types and implementation files\r\n  - **Topic folders** - Conceptual documentation organized by feature area\r\n\r\n  Documentation is driven by the `packages` frontmatter field in MDX files, which maps docs to their corresponding packages. CI validation ensures all docs include this field.\r\n\r\n- Fixed agent network not returning text response when routing agent handles requests without delegation. ([#11497](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fpull\u002F11497))\r\n\r\n  **What changed:**\r\n  - Agent networks now correctly stream text responses when the routing agent decides to handle a request itself instead of delegating to sub-agents, workflows, or tools\r\n  - Added fallback in transformers to ensure text is always returned even if core events are missing\r\n\r\n  **Why this matters:**\r\n  Previously, when using `toAISdkV5Stream` or `networkRoute()` outside of the Mastra Studio UI, no text content was returned when the routing agent handled requests directly. This fix ensures consistent behavior across all API routes.\r\n\r\n  Fixes #11219\r\n\r\n- Refactor the MessageList class from ~4000 LOC monolith to ~850 LOC with focused, single-responsibility modules. This improves maintainability, testability, and makes the codebase easier to understand. 
([#11658](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fpull\u002F11658))\r\n  - Extract message format adapters (AIV4Adapter, AIV5Adapter) for SDK conversions\r\n  - Extract TypeDetector for centralized message format identification\r\n  - Extract MessageStateManager for tracking message sources and persistence\r\n  - Extract MessageMerger for streaming message merge logic\r\n  - Extract StepContentExtractor for step content extraction\r\n  - Extract CacheKeyGenerator for message deduplication\r\n  - Consolidate provider compatibility utilities (Gemini, Anthropic, OpenAI)\r\n\r\n  ```\r\n  message-list\u002F\r\n  ├── message-list.ts        # Main class (~850 LOC, down from ~4000)\r\n  ├── adapters\u002F              # SDK format conversions\r\n  │   ├── AIV4Adapter.ts     # MastraDBMessage \u003C-> AI SDK V4\r\n  │   └── AIV5Adapter.ts     # MastraDBMessage \u003C-> AI SDK V5\r\n  ├── cache\u002F\r\n  │   └── CacheKeyGenerator.ts  # Deduplication keys\r\n  ├── conversion\u002F\r\n  │   ├── input-converter.ts    # Any format -> MastraDBMessage\r\n  │   ├── output-converter.ts   # MastraDBMessage -> SDK formats\r\n  │   ├── step-content.ts       # Step content extraction\r\n  │   └── to-prompt.ts          # LLM prompt formatting\r\n  ├── detection\u002F\r\n  │   └── TypeDetector.ts       # Format identification\r\n  ├── merge\u002F\r\n  │   └── MessageMerger.ts      # Streaming merge logic\r\n  ├── state\u002F\r\n  │   └── MessageStateManager.ts # Source & persistence tracking\r\n  └── utils\u002F\r\n      └── provider-compat.ts    # Provider-specific fixes\r\n  ```\r\n\r\n- Fix autoresume not working correctly in useChat ([#11486](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fpull\u002F11486))\r\n\r\n---\r\n\r\n## [@mastra\u002Farize@1.0.0-beta.11](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fblob\u002Fb3e7a74b36001336085d9b02be9bfafca8762f3f\u002Fobservability\u002Farize\u002FCHANGELOG.md)\r\n\r\n### Patch Changes\r\n\r\n- 
Add embedded documentation support for Mastra packages ([#11472](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fpull\u002F11472))\r\n\r\n  Mastra packages now include embedded documentation in the published npm package under `dist\u002Fdocs\u002F`. This enables coding agents and AI assistants to understand and use the framework by reading documentation directly from `node_modules`.\r\n\r\n  Each package includes:\r\n  - **SKILL.md** - Entry point explaining the package's purpose and capabilities\r\n  - **SOURCE_MAP.json** - Machine-readable index mapping exports to types and implementation files\r\n  - **Topic folders** - Conceptual documentation organized by feature area\r\n\r\n  Documentation is driven by the `packages` frontmatter field in MDX files, which maps docs to their corresponding packages. CI validation ensures all docs include this field.\r\n\r\n---\r\n\r\n## [@mastra\u002Fastra@1.0.0-beta.3](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fblob\u002Fb3e7a74b36001336085d9b02be9bfafca8762f3f\u002Fstores\u002Fastra\u002FCHANGELOG.md)\r\n\r\n### Patch Changes\r\n\r\n- Add embedded documentation support for Mastra packages ([#11472](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fpull\u002F11472))\r\n\r\n  Mastra packages now include embedded documentation in the published npm package under `dist\u002Fdocs\u002F`. 
This enables coding agents and AI assistants to understand and u","2026-01-10T17:09:29",{"id":249,"version":250,"summary_zh":251,"released_at":252},62660,"@mastra\u002Fcore@1.0.0-beta.19","# Changelog\r\n\r\n## [@mastra\u002Fai-sdk@1.0.0-beta.12](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fblob\u002F483764477e6a596e7809552d59ef380bf04152e7\u002Fclient-sdks\u002Fai-sdk\u002FCHANGELOG.md)\r\n\r\n### Patch Changes\r\n\r\n- Fix data chunk property filtering to only include type, data, and id properties ([#11477](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fpull\u002F11477))\r\n\r\n  Previously, when `isDataChunkType` checks were performed, the entire chunk object was returned, potentially letting extra properties like `from`, `runId`, `metadata`, etc go through. This could cause issues with `useChat` and other UI components.\r\n\r\n  Now, all locations that handle `DataChunkType` properly destructure and return only the allowed properties:\r\n  - `type` (required): The chunk type identifier starting with \"data-\"\r\n  - `data` (required): The actual data payload\r\n  - `id` (optional): An optional identifier for the chunk\r\n\r\n---\r\n\r\n## [@mastra\u002Fcore@1.0.0-beta.19](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fblob\u002F483764477e6a596e7809552d59ef380bf04152e7\u002Fpackages\u002Fcore\u002FCHANGELOG.md)\r\n\r\n### Minor Changes\r\n\r\n- Add embedderOptions support to Memory for AI SDK 5+ provider-specific embedding options ([#11462](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fpull\u002F11462))\r\n\r\n  With AI SDK 5+, embedding models no longer accept options in their constructor. Options like `outputDimensionality` for Google embedding models must now be passed when calling `embed()` or `embedMany()`. 
This change adds `embedderOptions` to Memory configuration to enable passing these provider-specific options.\r\n\r\n  You can now configure embedder options when creating Memory:\r\n\r\n  ```typescript\r\n  import { Memory } from '@mastra\u002Fcore';\r\n  import { google } from '@ai-sdk\u002Fgoogle';\r\n\r\n  \u002F\u002F Before: No way to specify providerOptions\r\n  const memory = new Memory({\r\n    embedder: google.textEmbeddingModel('text-embedding-004'),\r\n  });\r\n\r\n  \u002F\u002F After: Pass embedderOptions with providerOptions\r\n  const memory = new Memory({\r\n    embedder: google.textEmbeddingModel('text-embedding-004'),\r\n    embedderOptions: {\r\n      providerOptions: {\r\n        google: {\r\n          outputDimensionality: 768,\r\n          taskType: 'RETRIEVAL_DOCUMENT',\r\n        },\r\n      },\r\n    },\r\n  });\r\n  ```\r\n\r\n  This is especially important for:\r\n  - Google `text-embedding-004`: Control output dimensions (default 768)\r\n  - Google `gemini-embedding-001`: Reduce from default 3072 dimensions to avoid pgvector's 2000 dimension limit for HNSW indexes\r\n\r\n  Fixes #8248\r\n\r\n### Patch Changes\r\n\r\n- Fix Anthropic API error when tool calls have empty input objects ([#11474](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fpull\u002F11474))\r\n\r\n  Fixes issue #11376 where Anthropic models would fail with error \"messages.17.content.2.tool_use.input: Field required\" when a tool call in a previous step had an empty object `{}` as input.\r\n\r\n  The fix adds proper reconstruction of tool call arguments when converting messages to AIV5 model format. 
Tool-result parts now correctly include the `input` field from the matching tool call, which is required by Anthropic's API validation.\r\n\r\n  Changes:\r\n  - Added `findToolCallArgs()` helper method to search through messages and retrieve original tool call arguments\r\n  - Enhanced `aiV5UIMessagesToAIV5ModelMessages()` to populate the `input` field on tool-result parts\r\n  - Added comprehensive test coverage for empty object inputs, parameterized inputs, and multi-turn conversations\r\n\r\n- Fixed an issue where deprecated Groq models were shown during template creation. The model selection now filters out models marked as deprecated, displaying only active and supported models. ([#11445](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fpull\u002F11445))\r\n\r\n- Fix AI SDK v6 (specificationVersion: \"v3\") model support in sub-agent calls. Previously, when a parent agent invoked a sub-agent with a v3 model through the `agents` property, the version check only matched \"v2\", causing v3 models to incorrectly fall back to legacy streaming methods and throw \"V2 models are not supported for streamLegacy\" error. 
([#11452](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fpull\u002F11452))\r\n\r\n  The fix updates version checks in `listAgentTools` and `llm-mapping-step.ts` to use the centralized `supportedLanguageModelSpecifications` array which includes both v2 and v3.\r\n\r\n  Also adds missing v3 test coverage to tool-handling.test.ts to prevent regression.\r\n\r\n- Fixed \"Transforms cannot be represented in JSON Schema\" error when using Zod v4 with structuredOutput ([#11466](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fpull\u002F11466))\r\n\r\n  When using schemas with `.optional()`, `.nullable()`, `.default()`, or `.nullish().default(\"\")` patterns with `structuredOutput` and Zod v4, users would encounter an error because OpenAI schema compatibility layer adds transforms that Zod v4's native `toJSONSchema()` cannot handle.\r\n\r\n  The fix uses Mastra's transform-safe `zodToJsonSchema` function which gracefully handles transforms by using the `unrepresentable: 'any'` option.\r\n\r\n  Also exported `isZodType` utility from `@mastra\u002Fschema-compat` and upd","2026-01-10T17:06:27",{"id":254,"version":255,"summary_zh":256,"released_at":257},62661,"@mastra\u002Fcore@1.0.0-beta.18","# Changelog\r\n\r\n## [@mastra\u002Fcore@1.0.0-beta.18](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fblob\u002F3d9c9fb3cda8825b2d61a17e78d15f29470d2a28\u002Fpackages\u002Fcore\u002FCHANGELOG.md)\r\n\r\n### Patch Changes\r\n\r\n- Fixed semantic recall fetching all thread messages instead of only matched ones. ([#11435](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fpull\u002F11435))\r\n\r\n  When using `semanticRecall` with `scope: 'thread'`, the processor was incorrectly fetching all messages from the thread instead of just the semantically matched messages with their context. 
This caused memory to return far more messages than expected when `topK` and `messageRange` were set to small values.\r\n\r\n  Fixes #11428\r\n\r\n---\r\n\r\n## [@mastra\u002Fobservability@1.0.0-beta.9](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fblob\u002F3d9c9fb3cda8825b2d61a17e78d15f29470d2a28\u002Fobservability\u002Fmastra\u002FCHANGELOG.md)\r\n\r\n### Patch Changes\r\n\r\n- Fix SensitiveDataFilter destroying Date objects ([#11437](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fpull\u002F11437))\r\n\r\n  The `deepFilter` method now correctly preserves `Date` objects instead of converting them to empty objects `{}`. This fixes issues with downstream exporters like `BraintrustExporter` that rely on `Date` methods like `getTime()`.\r\n\r\n  Previously, `Object.keys(new Date())` returned `[]`, causing Date objects to be incorrectly converted to `{}`. The fix adds an explicit check for `Date` instances before generic object processing.\r\n\r\n---\r\n\r\n\r\n---\r\n\r\n**Full Changelog**: [`3d9c9fb`](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fcommit\u002F3d9c9fb3cda8825b2d61a17e78d15f29470d2a28)\r\n","2026-01-10T17:06:20",{"id":259,"version":260,"summary_zh":261,"released_at":262},62662,"@mastra\u002Fcore@1.0.0-beta.17","# Changelog\r\n\r\n## [@mastra\u002Fcore@1.0.0-beta.17](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fblob\u002F4cbe85007721a517939d67e5967d59ea7a62bc16\u002Fpackages\u002Fcore\u002FCHANGELOG.md)\r\n\r\n### Patch Changes\r\n\r\n- Fix Zod 4 compatibility for storage schema detection ([#11431](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fpull\u002F11431))\r\n\r\n  If you're using Zod 4, `buildStorageSchema` was failing to detect nullable and optional fields correctly. 
This caused `NOT NULL constraint failed` errors when storing observability spans and other data.\r\n\r\n  This fix enables proper schema detection for Zod 4 users, ensuring nullable fields like `parentSpanId` are correctly identified and don't cause database constraint violations.\r\n\r\n---\r\n\r\n## [@mastra\u002Fschema-compat@1.0.0-beta.4](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fblob\u002F4cbe85007721a517939d67e5967d59ea7a62bc16\u002Fpackages\u002Fschema-compat\u002FCHANGELOG.md)\r\n\r\n### Patch Changes\r\n\r\n- Fix OpenAI structured output compatibility for fields with `.default()` values ([#11434](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fpull\u002F11434))\r\n\r\n  When using Zod schemas with `.default()` fields (e.g., `z.number().default(1)`), OpenAI's structured output API was failing with errors like `Missing '\u003Cfield>' in required`. This happened because `zod-to-json-schema` doesn't include fields with defaults in the `required` array, but OpenAI requires all properties to be required.\r\n\r\n  This fix converts `.default()` fields to `.nullable()` with a transform that returns the default value when `null` is received, ensuring compatibility with OpenAI's strict mode while preserving the original default value semantics.\r\n\r\n---\r\n\r\n\r\n---\r\n\r\n**Full Changelog**: [`4cbe850`](https:\u002F\u002Fgithub.com\u002Fmastra-ai\u002Fmastra\u002Fcommit\u002F4cbe85007721a517939d67e5967d59ea7a62bc16)\r\n","2026-01-10T17:06:13"]