[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-ax-llm--ax":3,"tool-ax-llm--ax":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",144730,2,"2026-04-07T23:26:32",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107888,"2026-04-06T11:32:50",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 
pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":10,"last_commit_at":59,"category_tags":60,"status":17},4487,"LLMs-from-scratch","rasbt\u002FLLMs-from-scratch","LLMs-from-scratch 是一个基于 PyTorch 的开源教育项目，旨在引导用户从零开始一步步构建一个类似 ChatGPT 的大型语言模型（LLM）。它不仅是同名技术著作的官方代码库，更提供了一套完整的实践方案，涵盖模型开发、预训练及微调的全过程。\n\n该项目主要解决了大模型领域“黑盒化”的学习痛点。许多开发者虽能调用现成模型，却难以深入理解其内部架构与训练机制。通过亲手编写每一行核心代码，用户能够透彻掌握 Transformer 架构、注意力机制等关键原理，从而真正理解大模型是如何“思考”的。此外，项目还包含了加载大型预训练权重进行微调的代码，帮助用户将理论知识延伸至实际应用。\n\nLLMs-from-scratch 特别适合希望深入底层原理的 AI 开发者、研究人员以及计算机专业的学生。对于不满足于仅使用 API，而是渴望探究模型构建细节的技术人员而言，这是极佳的学习资源。其独特的技术亮点在于“循序渐进”的教学设计：将复杂的系统工程拆解为清晰的步骤，配合详细的图表与示例，让构建一个虽小但功能完备的大模型变得触手可及。无论你是想夯实理论基础，还是为未来研发更大规模的模型做准备",90106,"2026-04-06T11:19:32",[35,15,13,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":76,"owner_email":76,"owner_twitter":76,"owner_website":76,"owner_url":77,"languages":78,"stars":114,"forks":115,"last_commit_at":116,"license":117,"difficulty_score":32,"env_os":118,"env_gpu":119,"env_ram":119,"env_deps":120,"category_tags":126,"github_topics":127,"view_count":32,"oss_zip_url":76,"oss_zip_packed_at":76,"status":17,"created_at":148,"updated_at":149,"faqs":150,"releases":181},5307,"ax-llm\u002Fax","ax","The pretty much \"official\" DSPy framework for Typescript","Ax 是将著名的 DSPy 框架引入 TypeScript 生态的开源工具，旨在帮助开发者轻松构建可靠的 AI 应用。它的核心理念是“只定义输入与输出”，让框架自动处理提示词生成、优化及执行细节，从而告别繁琐且易碎的提示词工程。\n\n在大模型开发中，开发者常面临提示词难以调试、切换模型需重写代码、以及缺乏标准化验证机制等痛点。Ax 通过类型安全的声明式语法解决了这些问题：只需描述数据流转逻辑，它便能自动生成最优提示词，并内置了流式传输、错误重试、数据校验及可观测性等生产级功能。更独特的是，Ax 支持通过示例自动“训练”程序以提升准确率，无需深厚的机器学习背景。\n\nAx 特别适合使用 TypeScript 进行后端或全栈开发的工程师，尤其是那些希望快速将大模型能力集成到产品中，却不愿陷入底层提示词调优陷阱的团队。无论是提取结构化数据、处理复杂嵌套对象，还是构建具备工具调用能力的智能体，Ax 都能让开发过程像编写普通函数一样直观高效，实现“一次编写，多模型运行”。","# Ax - Build Reliable AI Apps in TypeScript with DSPy\n\nAx brings DSPy's approach to TypeScript – describe what you want, and let the framework handle the rest. Production-ready, type-safe, works with all major LLMs.\n\n[![NPM Package](https:\u002F\u002Fimg.shields.io\u002Fnpm\u002Fv\u002F@ax-llm\u002Fax?style=for-the-badge&color=green)](https:\u002F\u002Fwww.npmjs.com\u002Fpackage\u002F@ax-llm\u002Fax)\n[![Twitter](https:\u002F\u002Fimg.shields.io\u002Ftwitter\u002Ffollow\u002Fdosco?style=for-the-badge&color=red)](https:\u002F\u002Ftwitter.com\u002Fdosco)\n[![Discord Chat](https:\u002F\u002Fimg.shields.io\u002Fdiscord\u002F1078454354849304667?style=for-the-badge&color=green)](https:\u002F\u002Fdiscord.gg\u002FDSHg3dU7dW)\n\n## The Problem\n\nBuilding with LLMs is painful. You write prompts, test them, they break. You switch providers, everything needs rewriting. You add validation, error handling, retries – suddenly you're maintaining infrastructure instead of shipping features.\n\n## The Solution\n\nDefine what goes in and what comes out. 
Ax handles the rest.\n\n```typescript\nimport { ai, ax } from \"@ax-llm\u002Fax\";\n\nconst llm = ai({ name: \"openai\", apiKey: process.env.OPENAI_APIKEY });\n\nconst classifier = ax(\n  'review:string -> sentiment:class \"positive, negative, neutral\"',\n);\n\nconst result = await classifier.forward(llm, {\n  review: \"This product is amazing!\",\n});\n\nconsole.log(result.sentiment); \u002F\u002F \"positive\"\n```\n\nNo prompt engineering. No trial and error. Works with GPT-4, Claude, Gemini, or any LLM.\n\n## Why Ax\n\n**Write once, run anywhere.** Switch between OpenAI, Anthropic, Google, or 15+ providers with one line. No rewrites.\n\n**Ship faster.** Stop tweaking prompts. Define inputs and outputs. The framework generates optimal prompts automatically.\n\n**Production-ready.** Built-in streaming, validation, error handling, observability. Used in production handling millions of requests.\n\n**Gets smarter.** Train your programs with examples. Watch accuracy improve automatically. No ML expertise needed.\n\n
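A minimal sketch of the one-line provider switch (the `ANTHROPIC_APIKEY` env var name here is an assumption; use whatever your Anthropic setup provides):\n\n```typescript\nimport { ai, ax } from \"@ax-llm\u002Fax\";\n\n\u002F\u002F Same program, different provider: only this line changes\n\u002F\u002F (ANTHROPIC_APIKEY is an assumed env var name)\nconst llm = ai({ name: \"anthropic\", apiKey: process.env.ANTHROPIC_APIKEY });\n\nconst classifier = ax(\n  'review:string -> sentiment:class \"positive, negative, neutral\"',\n);\n\n\u002F\u002F forward() is called exactly as with OpenAI\nconst result = await classifier.forward(llm, { review: \"This product is amazing!\" });\n```\n\n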
## Examples\n\n### Extract structured data\n\n```typescript\nconst extractor = ax(`\n  customerEmail:string, currentDate:datetime -> \n  priority:class \"high, normal, low\",\n  sentiment:class \"positive, negative, neutral\",\n  ticketNumber?:number,\n  nextSteps:string[],\n  estimatedResponseTime:string\n`);\n\nconst result = await extractor.forward(llm, {\n  customerEmail: \"Order #12345 hasn't arrived. Need this resolved immediately!\",\n  currentDate: new Date(),\n});\n```\n\n### Complex nested objects\n\n```typescript\nimport { f, ax } from \"@ax-llm\u002Fax\";\n\nconst productExtractor = f()\n  .input(\"productPage\", f.string())\n  .output(\"product\", f.object({\n    name: f.string(),\n    price: f.number(),\n    specs: f.object({\n      dimensions: f.object({\n        width: f.number(),\n        height: f.number()\n      }),\n      materials: f.array(f.string())\n    }),\n    reviews: f.array(f.object({\n      rating: f.number(),\n      comment: f.string()\n    }))\n  }))\n  .build();\n\nconst generator = ax(productExtractor);\nconst result = await generator.forward(llm, { productPage: \"...\" });\n\n\u002F\u002F Full TypeScript inference\nconsole.log(result.product.specs.dimensions.width);\nconsole.log(result.product.reviews[0].comment);\n```\n\n### Validation and constraints\n\n```typescript\nconst userRegistration = f()\n  .input(\"userData\", f.string())\n  .output(\"user\", f.object({\n    username: f.string().min(3).max(20),\n    email: f.string().email(),\n    age: f.number().min(18).max(120),\n    password: f.string().min(8).regex(\"^(?=.*[A-Za-z])(?=.*\\\\d)\", \"Must contain letter and digit\"),\n    bio: f.string().max(500).optional(),\n    website: f.string().url().optional(),\n  }))\n  .build();\n```\n\nAvailable constraints: `.min(n)`, `.max(n)`, `.email()`, `.url()`, `.date()`, `.datetime()`, `.regex(pattern, description)`, `.optional()`\n\nValidation runs on both input and output. Automatic retry with corrections on validation errors.\n\n### Agents with tools (ReAct pattern)\n\n```typescript\nconst assistant = ax(\n  \"question:string -> answer:string\",\n  {\n    functions: [\n      { name: \"getCurrentWeather\", func: weatherAPI },\n      { name: \"searchNews\", func: newsAPI },\n    ],\n  },\n);\n\nconst result = await assistant.forward(llm, {\n  question: \"What's the weather in Tokyo and any news about it?\",\n});\n```\n\n### AxAgent + RLM for long context\n\n```typescript\nimport { agent, AxJSRuntime } from \"@ax-llm\u002Fax\";\n\nconst analyzer = agent(\n  \"context:string, query:string -> answer:string, evidence:string[]\",\n  {\n    name: \"documentAnalyzer\",\n    description: \"Analyze very long documents with recursive code + sub-queries\",\n    maxSteps: 20,\n    rlm: {\n      contextFields: [\"context\"],\n      runtime: new AxJSRuntime(),\n      maxSubAgentCalls: 40,\n      maxRuntimeChars: 2_000, \u002F\u002F Shared cap for llmQuery context + interpreter output\n      maxBatchedLlmQueryConcurrency: 6,\n      subModel: \"gpt-4o-mini\",\n    },\n  },\n);\n\nconst result = await analyzer.forward(llm, {\n  context: veryLongDocument,\n  query: \"What are the main arguments and supporting evidence?\",\n});\n```\n\nRLM mode keeps long context out of the root prompt, runs iterative analysis in a persistent runtime session, and uses bounded sub-queries for semantic extraction (typically targeting \u003C=10k chars per sub-call).\n\n### AxJSRuntime\n\n`AxJSRuntime` is the built-in JavaScript runtime used by RLM and tool-style execution.\nIt works across:\n\n- Node.js\u002FBun-style backends (worker_threads runtime path)\n- Deno backends (module worker path)\n- Browser environments (Web Worker path)\n\nIt supports:\n\n- Persistent sessions via `createSession()`\n- Function tool usage via `toFunction()`\n- Sandbox permissions via `AxJSRuntimePermission`\n\n### Multi-modal (images, audio)\n\n```typescript\nconst analyzer = ax(`\n  image:image, question:string ->\n  description:string,\n  mainColors:string[],\n  category:class \"electronics, clothing, food, other\",\n  estimatedPrice:string\n`);\n```\n\n## Install\n\n```bash\nnpm install @ax-llm\u002Fax\n```\n\nAdditional packages:\n\n```bash\n# AWS Bedrock provider\nnpm install @ax-llm\u002Fax-ai-aws-bedrock\n\n# Vercel AI SDK v5 integration\nnpm install @ax-llm\u002Fax-ai-sdk-provider\n\n# Tools: MCP stdio transport, JS runtime\nnpm install @ax-llm\u002Fax-tools\n```\n\n## Features\n\n- **15+ LLM Providers** – OpenAI, Anthropic, Google, Mistral, Ollama, and more\n- **Type-safe** – Full TypeScript support with auto-completion\n- **Streaming** – Real-time responses with validation\n- **Multi-modal** – Images, audio, text in the same signature\n- **Optimization** – Automatic prompt tuning with MiPRO, ACE, GEPA\n- **Observability** – OpenTelemetry tracing built-in\n- **Workflows** – Compose complex pipelines with AxFlow\n- **RAG** – Multi-hop retrieval with quality loops\n- **Agents** – Tools and multi-agent collaboration\n- **RLM in AxAgent** – Long-context analysis with recursive runtime loops\n- **Zero dependencies** – Lightweight, fast, reliable\n\n## Documentation\n\n**Get Started**\n- [Quick Start Guide](https:\u002F\u002Fgithub.com\u002Fax-llm\u002Fax\u002Fblob\u002Fmain\u002Fsrc\u002Fax\u002FREADME.md) – Set up in 5 minutes\n- [Examples Guide](https:\u002F\u002Fgithub.com\u002Fax-llm\u002Fax\u002Fblob\u002Fmain\u002Fsrc\u002Fdocs\u002Fsrc\u002Fcontent\u002Fdocs\u002Fexamples.md) – Comprehensive examples\n- 
[DSPy Concepts](https:\u002F\u002Fgithub.com\u002Fax-llm\u002Fax\u002Fblob\u002Fmain\u002Fsrc\u002Fdocs\u002Fsrc\u002Fcontent\u002Fdocs\u002Fdspy.md) – Understanding the approach\n- [Signatures Guide](https:\u002F\u002Fgithub.com\u002Fax-llm\u002Fax\u002Fblob\u002Fmain\u002Fsrc\u002Fax\u002Fskills\u002Fax-signature.md) – Type-safe signature design\n\n**Deep Dives**\n- [AI Providers](https:\u002F\u002Fgithub.com\u002Fax-llm\u002Fax\u002Fblob\u002Fmain\u002Fsrc\u002Fax\u002Fskills\u002Fax-ai.md) – All providers, AWS Bedrock, Vercel AI SDK\n- [AxFlow Workflows](https:\u002F\u002Fgithub.com\u002Fax-llm\u002Fax\u002Fblob\u002Fmain\u002Fsrc\u002Fax\u002Fskills\u002Fax-flow.md) – Build complex AI systems\n- [Optimization (MiPRO, ACE, GEPA)](https:\u002F\u002Fgithub.com\u002Fax-llm\u002Fax\u002Fblob\u002Fmain\u002Fsrc\u002Fdocs\u002Fsrc\u002Fcontent\u002Fdocs\u002Foptimize.md) – Make programs smarter\n- [AxAgent & RLM](https:\u002F\u002Fgithub.com\u002Fax-llm\u002Fax\u002Fblob\u002Fmain\u002Fsrc\u002Fax\u002Fskills\u002Fax-agent.md) – Agents, child agents, tools, and RLM for long contexts\n- [Advanced RAG](https:\u002F\u002Fgithub.com\u002Fax-llm\u002Fax\u002Fblob\u002Fmain\u002Fsrc\u002Fdocs\u002Fsrc\u002Fcontent\u002Fdocs\u002Faxrag.md) – Production search and retrieval\n\n## Run Examples\n\n```bash\nOPENAI_APIKEY=your-key npm run tsx .\u002Fsrc\u002Fexamples\u002F[example-name].ts\n```\n\nCore examples: `extract.ts`, `react.ts`, `agent.ts`, `streaming1.ts`, `multi-modal.ts`\n\nProduction patterns: `customer-support.ts`, `food-search.ts`, `rlm.ts`, `ace-train-inference.ts`, `ax-flow-enhanced-demo.ts`\n\n[View all 70+ examples](src\u002Fexamples\u002F)\n\n## Community\n\n- [Twitter](https:\u002F\u002Ftwitter.com\u002Fdosco) – Updates\n- [Discord](https:\u002F\u002Fdiscord.gg\u002FDSHg3dU7dW) – Help and discussion\n- [GitHub](https:\u002F\u002Fgithub.com\u002Fax-llm\u002Fax) – Star the project\n- [DeepWiki](https:\u002F\u002Fdeepwiki.com\u002Fax-llm\u002Fax) – AI-powered docs\n\n## Production Ready\n\n- Battle-tested in production\n- Stable minor versions\n- Comprehensive test coverage\n- OpenTelemetry built-in\n- TypeScript first\n\n## Contributors\n\n- Author: [@dosco](https:\u002F\u002Fgithub.com\u002Fdosco)\n- GEPA and ACE optimizers: [@monotykamary](https:\u002F\u002Fgithub.com\u002Fmonotykamary)\n\n## License\n\nApache 2.0\n\n---\n\n```bash\nnpm install @ax-llm\u002Fax\n```\n","# Ax - 使用 DSPy 在 TypeScript 中构建可靠的 AI 应用\n\nAx 将 DSPy 的方法引入 TypeScript——只需描述你想要实现的功能，剩下的工作由框架自动完成。生产就绪、类型安全，兼容所有主流大模型。\n\n[![NPM 包](https:\u002F\u002Fimg.shields.io\u002Fnpm\u002Fv\u002F@ax-llm\u002Fax?style=for-the-badge&color=green)](https:\u002F\u002Fwww.npmjs.com\u002Fpackage\u002F@ax-llm\u002Fax)\n[![Twitter](https:\u002F\u002Fimg.shields.io\u002Ftwitter\u002Ffollow\u002Fdosco?style=for-the-badge&color=red)](https:\u002F\u002Ftwitter.com\u002Fdosco)\n[![Discord 聊天](https:\u002F\u002Fimg.shields.io\u002Fdiscord\u002F1078454354849304667?style=for-the-badge&color=green)](https:\u002F\u002Fdiscord.gg\u002FDSHg3dU7dW)\n\n## 问题所在\n\n使用大模型开发应用非常痛苦。你需要编写提示词、测试它们，但往往很快就会失效。一旦更换服务提供商，所有代码都需要重写。再加上验证、错误处理和重试机制，最终你会发现自己的精力都花在维护基础设施上，而不是交付新功能。\n\n## 解决方案\n\n只需定义输入和输出，其余的一切都交给 Ax 处理。\n\n```typescript\nimport { ai, ax } from \"@ax-llm\u002Fax\";\n\nconst llm = ai({ name: \"openai\", apiKey: process.env.OPENAI_APIKEY });\n\nconst classifier = ax(\n  'review:string -> sentiment:class \"positive, negative, neutral\"',\n);\n\nconst result = await classifier.forward(llm, {\n  review: \"This product is 
amazing!\",\n});\n\nconsole.log(result.sentiment); \u002F\u002F \"positive\"\n```\n\n无需进行提示工程，也无需反复试验。支持 GPT-4、Claude、Gemini 等任意大模型。\n\n## 为什么选择 Ax\n\n**一次编写，随处运行。** 只需一行代码即可在 OpenAI、Anthropic、Google 等超过 15 家服务商之间无缝切换，无需任何修改。\n\n**更快地交付。** 停止不断调整提示词，只需定义输入和输出，框架会自动生成最优的提示内容。\n\n**生产就绪。** 内置流式处理、验证、错误处理和可观ability 功能。已在生产环境中稳定运行，日均处理数百万次请求。\n\n**越用越智能。** 通过示例数据训练你的程序，准确率会自动提升，无需任何机器学习专业知识。\n\n## 示例\n\n### 提取结构化数据\n\n```typescript\nconst extractor = ax(`\n  customerEmail:string, currentDate:datetime -> \n  priority:class \"high, normal, low\",\n  sentiment:class \"positive, negative, neutral\",\n  ticketNumber?:number,\n  nextSteps:string[],\n  estimatedResponseTime:string\n`);\n\nconst result = await extractor.forward(llm, {\n  customerEmail: \"Order #12345 hasn't arrived. Need this resolved immediately!\",\n  currentDate: new Date(),\n});\n```\n\n### 复杂嵌套对象\n\n```typescript\nimport { f, ax } from \"@ax-llm\u002Fax\";\n\nconst productExtractor = f()\n  .input(\"productPage\", f.string())\n  .output(\"product\", f.object({\n    name: f.string(),\n    price: f.number(),\n    specs: f.object({\n      dimensions: f.object({\n        width: f.number(),\n        height: f.number()\n      }),\n      materials: f.array(f.string())\n    }),\n    reviews: f.array(f.object({\n      rating: f.number(),\n      comment: f.string()\n    }))\n  }))\n  .build();\n\nconst generator = ax(productExtractor);\nconst result = await generator.forward(llm, { productPage: \"...\" });\n\n\u002F\u002F 完整的 TypeScript 类型推断\nconsole.log(result.product.specs.dimensions.width);\nconsole.log(result.product.reviews[0].comment);\n```\n\n### 验证与约束\n\n```typescript\nconst userRegistration = f()\n  .input(\"userData\", f.string())\n  .output(\"user\", f.object({\n    username: f.string().min(3).max(20),\n    email: f.string().email(),\n    age: f.number().min(18).max(120),\n    password: f.string().min(8).regex(\"^(?=.*[A-Za-z])(?=.*\\\\d)\", \"必须包含字母和数字\"),\n    bio: f.string().max(500).optional(),\n    website: f.string().url().optional(),\n  }))\n  .build();\n```\n\n可用的约束包括：`.min(n)`、`.max(n)`、`.email()`、`.url()`、`.date()`、`.datetime()`、`.regex(pattern, description)` 和 `.optional()`。\n\n验证会在输入和输出阶段同时进行，并在验证失败时自动重试并修正。\n\n### 带工具的代理（ReAct 模式）\n\n```typescript\nconst assistant = ax(\n  \"question:string -> answer:string\",\n  {\n    functions: [\n      { name: \"getCurrentWeather\", func: weatherAPI },\n      { name: \"searchNews\", func: newsAPI },\n    ],\n  },\n);\n\nconst result = await assistant.forward(llm, {\n  question: \"What's the weather in Tokyo and any news about it?\",\n});\n```\n\n### AxAgent + RLM 处理长上下文\n\n```typescript\nimport { agent, AxJSRuntime } from \"@ax-llm\u002Fax\";\n\nconst analyzer = agent(\n  \"context:string, query:string -> answer:string, evidence:string[]\",\n  {\n    name: \"documentAnalyzer\",\n    description: \"Analyze very long documents with recursive code + sub-queries\",\n    maxSteps: 20,\n    rlm: {\n      contextFields: [\"context\"],\n      runtime: new AxJSRuntime(),\n      maxSubAgentCalls: 40,\n      maxRuntimeChars: 2_000, \u002F\u002F 共享的 LLM 查询上下文与解释器输出的最大字符限制，\n      maxBatchedLlmQueryConcurrency: 6,\n      subModel: \"gpt-4o-mini\",\n    },\n  },\n);\n\nconst result = await analyzer.forward(llm, {\n  context: veryLongDocument,\n  query: \"What are the main arguments and supporting evidence?\",\n});\n```\n\nRLM 模式将长上下文从根提示中分离出来，在持久化的运行时会话中进行迭代分析，并通过有界的子查询来提取语义信息（通常每次子调用不超过 1 万字符）。\n\n### AxJSRuntime\n\n`AxJSRuntime` 是 RLM 和工具式执行所使用的内置 JavaScript 运行时环境。它可在以下环境中运行：\n\n- 
## 示例\n\n### 提取结构化数据\n\n```typescript\nconst extractor = ax(`\n  customerEmail:string, currentDate:datetime -> \n  priority:class \"high, normal, low\",\n  sentiment:class \"positive, negative, neutral\",\n  ticketNumber?:number,\n  nextSteps:string[],\n  estimatedResponseTime:string\n`);\n\nconst result = await extractor.forward(llm, {\n  customerEmail: \"Order #12345 hasn't arrived. Need this resolved immediately!\",\n  currentDate: new Date(),\n});\n```\n\n### 复杂嵌套对象\n\n```typescript\nimport { f, ax } from \"@ax-llm\u002Fax\";\n\nconst productExtractor = f()\n  .input(\"productPage\", f.string())\n  .output(\"product\", f.object({\n    name: f.string(),\n    price: f.number(),\n    specs: f.object({\n      dimensions: f.object({\n        width: f.number(),\n        height: f.number()\n      }),\n      materials: f.array(f.string())\n    }),\n    reviews: f.array(f.object({\n      rating: f.number(),\n      comment: f.string()\n    }))\n  }))\n  .build();\n\nconst generator = ax(productExtractor);\nconst result = await generator.forward(llm, { productPage: \"...\" });\n\n\u002F\u002F 完整的 TypeScript 类型推断\nconsole.log(result.product.specs.dimensions.width);\nconsole.log(result.product.reviews[0].comment);\n```\n\n### 验证与约束\n\n```typescript\nconst userRegistration = f()\n  .input(\"userData\", f.string())\n  .output(\"user\", f.object({\n    username: f.string().min(3).max(20),\n    email: f.string().email(),\n    age: f.number().min(18).max(120),\n    password: f.string().min(8).regex(\"^(?=.*[A-Za-z])(?=.*\\\\d)\", \"必须包含字母和数字\"),\n    bio: f.string().max(500).optional(),\n    website: f.string().url().optional(),\n  }))\n  .build();\n```\n\n可用的约束包括：`.min(n)`、`.max(n)`、`.email()`、`.url()`、`.date()`、`.datetime()`、`.regex(pattern, description)` 和 `.optional()`。\n\n验证会在输入和输出阶段同时进行，并在验证失败时自动重试并修正。\n\n### 带工具的代理（ReAct 模式）\n\n```typescript\nconst assistant = ax(\n  \"question:string -> answer:string\",\n  {\n    functions: [\n      { name: \"getCurrentWeather\", func: weatherAPI },\n      { name: \"searchNews\", func: newsAPI },\n    ],\n  },\n);\n\nconst result = await assistant.forward(llm, {\n  question: \"What's the weather in Tokyo and any news about it?\",\n});\n```\n\n### AxAgent + RLM 处理长上下文\n\n```typescript\nimport { agent, AxJSRuntime } from \"@ax-llm\u002Fax\";\n\nconst analyzer = agent(\n  \"context:string, query:string -> answer:string, evidence:string[]\",\n  {\n    name: \"documentAnalyzer\",\n    description: \"Analyze very long documents with recursive code + sub-queries\",\n    maxSteps: 20,\n    rlm: {\n      contextFields: [\"context\"],\n      runtime: new AxJSRuntime(),\n      maxSubAgentCalls: 40,\n      maxRuntimeChars: 2_000, \u002F\u002F llmQuery 上下文与解释器输出共享的最大字符上限\n      maxBatchedLlmQueryConcurrency: 6,\n      subModel: \"gpt-4o-mini\",\n    },\n  },\n);\n\nconst result = await analyzer.forward(llm, {\n  context: veryLongDocument,\n  query: \"What are the main arguments and supporting evidence?\",\n});\n```\n\nRLM 模式将长上下文从根提示中分离出来，在持久化的运行时会话中进行迭代分析，并通过有界的子查询来提取语义信息（通常每次子调用不超过 1 万字符）。\n\n### AxJSRuntime\n\n`AxJSRuntime` 是 RLM 和工具式执行所使用的内置 JavaScript 运行时环境。它可在以下环境中运行：\n\n- Node.js\u002FBun 样式的后端（worker_threads 运行路径）\n- Deno 后端（模块 worker 路径）\n- 浏览器环境（Web Worker 路径）\n\n它支持：\n\n- 通过 `createSession()` 创建持久化会话\n- 通过 `toFunction()` 使用函数工具\n- 通过 `AxJSRuntimePermission` 设置沙箱权限\n\n### 多模态（图像、音频）\n\n```typescript\nconst analyzer = ax(`\n  image:image, question:string ->\n  description:string,\n  mainColors:string[],\n  category:class \"electronics, clothing, food, other\",\n  estimatedPrice:string\n`);\n```\n\n## 安装\n\n```bash\nnpm install @ax-llm\u002Fax\n```\n\n其他相关包：\n\n```bash\n# AWS Bedrock 提供商\nnpm install @ax-llm\u002Fax-ai-aws-bedrock\n\n# Vercel AI SDK v5 集成\nnpm install @ax-llm\u002Fax-ai-sdk-provider\n\n# 工具：MCP 标准输入输出传输、JS 运行时\nnpm install @ax-llm\u002Fax-tools\n```\n\n## 特性\n\n- **支持 15+ 大模型服务商**——OpenAI、Anthropic、Google、Mistral、Ollama 等\n- **类型安全**——全面支持 TypeScript，提供自动补全功能\n- **流式处理**——实时响应，边生成边验证\n- **多模态**——图像、音频、文本可混合在同一签名中\n- **优化**——自动提示优化，支持 MiPRO、ACE、GEPA 等技术\n- **可观测性**——内置 OpenTelemetry 追踪\n- **工作流**——可通过 AxFlow 组合复杂流程\n- **RAG**——多跳检索与质量循环\n- **代理**——工具集成与多代理协作\n- **AxAgent 中的 RLM**——递归运行时循环实现长上下文分析\n- **零依赖**——轻量级、快速且可靠\n\n## 文档\n\n**入门**\n- [快速入门指南](https:\u002F\u002Fgithub.com\u002Fax-llm\u002Fax\u002Fblob\u002Fmain\u002Fsrc\u002Fax\u002FREADME.md) – 5 分钟内完成设置\n- [示例指南](https:\u002F\u002Fgithub.com\u002Fax-llm\u002Fax\u002Fblob\u002Fmain\u002Fsrc\u002Fdocs\u002Fsrc\u002Fcontent\u002Fdocs\u002Fexamples.md) – 详尽的示例\n- [DSPy 概念](https:\u002F\u002Fgithub.com\u002Fax-llm\u002Fax\u002Fblob\u002Fmain\u002Fsrc\u002Fdocs\u002Fsrc\u002Fcontent\u002Fdocs\u002Fdspy.md) – 理解其方法论\n- [签名指南](https:\u002F\u002Fgithub.com\u002Fax-llm\u002Fax\u002Fblob\u002Fmain\u002Fsrc\u002Fax\u002Fskills\u002Fax-signature.md) – 类型安全的签名设计\n\n**深入探索**\n- [AI 提供商](https:\u002F\u002Fgithub.com\u002Fax-llm\u002Fax\u002Fblob\u002Fmain\u002Fsrc\u002Fax\u002Fskills\u002Fax-ai.md) – 所有提供商、AWS Bedrock、Vercel AI SDK\n- [AxFlow 工作流](https:\u002F\u002Fgithub.com\u002Fax-llm\u002Fax\u002Fblob\u002Fmain\u002Fsrc\u002Fax\u002Fskills\u002Fax-flow.md) – 构建复杂的 AI 系统\n- [优化（MiPRO、ACE、GEPA）](https:\u002F\u002Fgithub.com\u002Fax-llm\u002Fax\u002Fblob\u002Fmain\u002Fsrc\u002Fdocs\u002Fsrc\u002Fcontent\u002Fdocs\u002Foptimize.md) – 让程序更智能\n- [AxAgent & RLM](https:\u002F\u002Fgithub.com\u002Fax-llm\u002Fax\u002Fblob\u002Fmain\u002Fsrc\u002Fax\u002Fskills\u002Fax-agent.md) – 代理、子代理、工具以及用于长上下文的 RLM\n- [高级 RAG](https:\u002F\u002Fgithub.com\u002Fax-llm\u002Fax\u002Fblob\u002Fmain\u002Fsrc\u002Fdocs\u002Fsrc\u002Fcontent\u002Fdocs\u002Faxrag.md) – 生产级搜索与检索\n\n## 运行示例\n\n```bash\nOPENAI_APIKEY=your-key npm run tsx .\u002Fsrc\u002Fexamples\u002F[example-name].ts\n```\n\n核心示例：`extract.ts`、`react.ts`、`agent.ts`、`streaming1.ts`、`multi-modal.ts`\n\n生产模式示例：`customer-support.ts`、`food-search.ts`、`rlm.ts`、`ace-train-inference.ts`、`ax-flow-enhanced-demo.ts`\n\n[查看全部 70 多个示例](src\u002Fexamples\u002F)\n\n## 社区\n\n- [Twitter](https:\u002F\u002Ftwitter.com\u002Fdosco) – 最新动态\n- [Discord](https:\u002F\u002Fdiscord.gg\u002FDSHg3dU7dW) – 帮助与讨论\n- [GitHub](https:\u002F\u002Fgithub.com\u002Fax-llm\u002Fax) – 给项目点个 Star\n- [DeepWiki](https:\u002F\u002Fdeepwiki.com\u002Fax-llm\u002Fax) – AI 驱动的文档\n\n## 生产就绪\n\n- 经过生产环境的严格考验\n- 稳定的小版本发布\n- 全面的测试覆盖率\n- 内置 OpenTelemetry\n- TypeScript 优先\n\n## 贡献者\n\n- 作者：[@dosco](https:\u002F\u002Fgithub.com\u002Fdosco)\n- GEPA 和 ACE 优化器：[@monotykamary](https:\u002F\u002Fgithub.com\u002Fmonotykamary)\n\n## 许可证\n\nApache 2.0\n\n---\n\n```bash\nnpm install @ax-llm\u002Fax\n```","# Ax 快速上手指南\n\nAx 是一个基于 TypeScript 的 AI 应用开发框架，它将 DSPy 的理念引入 JS\u002FTS 生态。你只需定义输入和输出的数据结构，Ax 会自动处理提示词生成、优化、验证及错误重试，支持所有主流大模型。\n\n## 环境准备\n\n- **运行时环境**：Node.js (v18+)、Bun 或 Deno\n- **包管理器**：npm、yarn、pnpm 或 bun\n- **API Key**：已准备好任意主流 LLM 的 API Key（如 OpenAI、Anthropic、Google 等）\n- **语言支持**：项目需使用 TypeScript 以获得完整的类型推断体验\n\n## 安装步骤\n\n使用 npm 安装核心包：\n\n```bash\nnpm install @ax-llm\u002Fax\n```\n\n如需使用特定提供商或高级功能，可选装以下扩展包：\n\n```bash\n# AWS Bedrock 支持\nnpm install @ax-llm\u002Fax-ai-aws-bedrock\n\n# Vercel AI SDK v5 集成\nnpm install @ax-llm\u002Fax-ai-sdk-provider\n\n# 工具链支持（MCP、JS 运行时等）\nnpm install @ax-llm\u002Fax-tools\n```\n\n> **提示**：国内开发者若遇到下载缓慢问题，可配置淘宝镜像源：\n> `npm config set registry https:\u002F\u002Fregistry.npmmirror.com`\n\n## 基本使用\n\n### 1. 初始化 LLM 客户端\n\n首先导入库并配置你的大模型提供商。以下以 OpenAI 为例：\n\n```typescript\nimport { ai, ax } from \"@ax-llm\u002Fax\";\n\nconst llm = ai({ name: \"openai\", apiKey: process.env.OPENAI_APIKEY });\n```\n\n### 2. 定义任务签名 (Signature)\n\n使用自然语言描述输入和输出格式，无需编写具体的 Prompt。Ax 会自动将其转换为最优提示词。\n\n```typescript\n\u002F\u002F 定义：输入 review (字符串) -> 输出 sentiment (分类：positive, negative, neutral)\nconst classifier = ax(\n  'review:string -> sentiment:class \"positive, negative, neutral\"',\n);\n```\n\n### 3. 执行推理\n\n调用 `forward` 方法传入数据，获取结构化结果。\n\n```typescript\nconst result = await classifier.forward(llm, {\n  review: \"This product is amazing!\",\n});\n\nconsole.log(result.sentiment); \n\u002F\u002F 输出: \"positive\"\n```\n\n
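### 可选：流式输出（示意）\n\n按项目 FAQ 的说法，设置 stream: true 后响应是一个异步生成器（AsyncGenerator），可用 for await...of 逐块消费。下面的示意将 stream 选项放在 forward 的第三个参数中，这一传参位置是假设，具体写法请以官方示例 streaming1.ts\u002Fstreaming2.ts 为准：\n\n```typescript\n\u002F\u002F 流式输出极简示意（stream 选项的传参位置为假设）\nconst stream = await classifier.forward(\n  llm,\n  { review: \"This product is amazing!\" },\n  { stream: true },\n);\n\n\u002F\u002F FAQ：响应是 AsyncGenerator，用 for await...of 遍历结果块\nfor await (const chunk of stream) {\n  console.log(chunk);\n}\n```\n\n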
### 进阶：复杂对象提取与验证\n\nAx 支持定义嵌套对象和严格的字段验证规则，自动处理重试逻辑。\n\n```typescript\nimport { f, ax } from \"@ax-llm\u002Fax\";\n\n\u002F\u002F 定义复杂的输出结构及验证规则\nconst userSchema = f()\n  .input(\"userData\", f.string())\n  .output(\"user\", f.object({\n    username: f.string().min(3).max(20),\n    email: f.string().email(),\n    age: f.number().min(18).max(120),\n    password: f.string().min(8).regex(\"^(?=.*[A-Za-z])(?=.*\\\\d)\", \"Must contain letter and digit\"),\n    bio: f.string().max(500).optional(),\n  }))\n  .build();\n\nconst generator = ax(userSchema);\n\nconst result = await generator.forward(llm, {\n  userData: \"Register user: john_doe, john@example.com, 25, pass123\"\n});\n\n\u002F\u002F 结果具有完整的 TypeScript 类型推断\nconsole.log(result.user.email); \n```\n\n### 运行示例\n\n安装完成后，你可以直接运行官方提供的示例代码来测试功能：\n\n```bash\nOPENAI_APIKEY=your-key npm run tsx .\u002Fsrc\u002Fexamples\u002Fextract.ts\n```\n\n更多示例（包括 Agent、流式输出、多模态等）可在 `src\u002Fexamples\u002F` 目录中找到。","某电商初创团队的 TypeScript 后端工程师正在构建一个自动处理用户评论并提取结构化数据的 AI 服务，需同时支持情感分析、优先级判定及关键信息抽取。\n\n### 没有 ax 时\n- **提示词维护噩梦**：每次调整输出格式（如增加“预计回复时间”字段），都需要人工反复微调 Prompt 字符串，测试成本高且容易破坏原有逻辑。\n- **供应商锁定风险**：若因成本或策略需要从 OpenAI 切换至 Claude 或 Gemini，必须重写大量针对特定模型优化的提示词和解析代码。\n- **数据可靠性差**：缺乏内置验证机制，LLM 偶尔返回格式错误的 JSON 或非枚举值（如将情感判为\"good\"而非\"positive\"），导致下游程序频繁崩溃。\n- **类型安全缺失**：TypeScript 无法自动推断复杂的嵌套返回结构，开发者需手动定义接口并编写繁琐的运行时校验代码。\n\n### 使用 ax 后\n- **声明式开发**：只需定义输入输出签名（如 `review:string -> sentiment:class...`），ax 自动生成最优提示词，新增字段仅需修改一行类型定义。\n- **无缝切换模型**：通过 `ai({ name: \"anthropic\" })` 单行配置即可切换底层大模型，业务逻辑代码无需任何改动，真正实现“一次编写，随处运行”。\n- **自愈与验证**：利用 `.email()`、`.min()` 等内置约束，ax 在检测到输出违规时自动触发重试修正，确保返回数据符合预期格式。\n- **端到端类型安全**：基于函数式构建器生成的复杂嵌套对象（如商品规格、评论数组），享受完整的 TypeScript 智能提示与编译期检查，大幅减少运行时错误。\n\nax 让开发者从繁琐的提示词工程中解放出来，专注于业务逻辑，以类型安全的方式快速构建高可靠的生产级 AI 
应用。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fax-llm_ax_a06b9720.png","ax-llm","Ax","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fax-llm_5626fd9f.png","",null,"https:\u002F\u002Fgithub.com\u002Fax-llm",[79,83,87,91,95,99,103,107,111],{"name":80,"color":81,"percentage":82},"TypeScript","#3178c6",95.6,{"name":84,"color":85,"percentage":86},"Python","#3572A5",1.8,{"name":88,"color":89,"percentage":90},"JavaScript","#f1e05a",1,{"name":92,"color":93,"percentage":94},"Astro","#ff5a03",0.6,{"name":96,"color":97,"percentage":98},"HTML","#e34c26",0.5,{"name":100,"color":101,"percentage":102},"CSS","#663399",0.3,{"name":104,"color":105,"percentage":106},"Shell","#89e051",0.1,{"name":108,"color":109,"percentage":110},"Dockerfile","#384d54",0,{"name":112,"color":113,"percentage":110},"MDX","#fcb32c",2516,159,"2026-04-07T01:58:18","Apache-2.0","Linux, macOS, Windows","未说明",{"notes":121,"python":122,"dependencies":123},"该工具是基于 TypeScript 的框架，非 Python 项目。支持在 Node.js、Bun、Deno 及浏览器环境（Web Worker）中运行。无需本地 GPU，通过 API 调用主流大模型（如 OpenAI, Anthropic, Google 等）。可选安装 AWS Bedrock、Vercel AI SDK 或工具包等额外依赖。","不适用 (基于 TypeScript\u002FNode.js)",[124,125],"@ax-llm\u002Fax","Node.js\u002FBun\u002FDeno 运行时",[13,15,35,14],[128,129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144,145,146,147],"ai","cohere","llm","openai","typescript","javascript","nodejs","claude","large-language-models","opensource","anthropic","gemini","google","gpt-4","ollama","rag","vectordb","google-gemini","dspy","webllm","2026-03-27T02:49:30.150509","2026-04-08T10:07:02.956185",[151,156,161,166,171,176],{"id":152,"question_zh":153,"answer_zh":154,"source_url":155},24070,"如何访问和处理流式生成（Streaming）的结果块？","当设置 `stream: true` 时，响应是一个异步生成器（AsyncGenerator）。你可以使用 `for await...of` 循环来遍历结果块。参考项目中的流式示例代码（如 `examples\u002Fstreaming2.ts`），使用类似 `for await (const chunk of result)` 的方式即可捕获和处理流数据。","https:\u002F\u002Fgithub.com\u002Fax-llm\u002Fax\u002Fissues\u002F36",{"id":157,"question_zh":158,"answer_zh":159,"source_url":160},24071,"是否支持 AWS Bedrock 提供商？","是的，AWS Bedrock 支持已发布。你可以通过安装独立的 npm 包 `@ax-llm\u002Fax-ai-aws-bedrock` 来使用该功能。建议查阅相关文档以获取具体的配置和使用方法。","https:\u002F\u002Fgithub.com\u002Fax-llm\u002Fax\u002Fissues\u002F10",{"id":162,"question_zh":163,"answer_zh":164,"source_url":165},24072,"GEPA 优化器为什么总是返回默认指令或初始指令，而不是优化后的指令？","这是一个已知问题。在某些版本（如 5.0.13 到 5.0.18 之间）中，`sig.instruction` 属性访问可能导致回退到默认字符串 \"Follow the task precisely...\" 或仅显示初始指令。维护者指出这可能与特定模型和数据组合有关，有时表现为间歇性故障（Heisenbug）。如果遇到此问题，建议检查使用的 ax 版本，尝试在 5.0.13 到 5.0.17 之间进行测试，并提供具体的测试数据和代码以便进一步排查。","https:\u002F\u002Fgithub.com\u002Fax-llm\u002Fax\u002Fissues\u002F463",{"id":167,"question_zh":168,"answer_zh":169,"source_url":170},24073,"如何在浏览器环境中使用 ax-llm 以避免 crypto 模块错误？","为了在浏览器中兼容，需要避免直接使用 Node.js 的 `crypto` 模块。解决方案是切换到使用浏览器全局 `crypto` 对象，并结合同步实现的 sha256 摘要算法（例如引入 `@noble\u002Fhashes` 库）。此外，可能需要避免导入 `google-auth-library` 并禁用 `AxJSInterpreter`，以减小打包体积并确保在浏览器中正常运行。","https:\u002F\u002Fgithub.com\u002Fax-llm\u002Fax\u002Fissues\u002F263",{"id":172,"question_zh":173,"answer_zh":174,"source_url":175},24074,"如何在 Observable Notebook 中导入和使用 ax-llm？","在 Observable 中直接导入可能会遇到 `exports is not defined` 的错误，这通常与模块如何处理静态字段重写有关。虽然官方调试器显示某些版本（如 9.0.31 的 CJS 版本）可能通过 `require('@ax-llm\u002Fax@9.0.31\u002Fcjs\u002Findex.js')` 工作，但目前在浏览器环境（包括 Observable）中仍存在兼容性问题。建议关注项目后续更新以解决此模块导出问题。","https:\u002F\u002Fgithub.com\u002Fax-llm\u002Fax\u002Fissues\u002F31",{"id":177,"question_zh":178,"answer_zh":179,"source_url":180},24075,"使用 Anthropic 进行函数调用（Function Calling）时遇到 'assistant message pre-fill' 
错误怎么办？","该错误表明 API 请求在最终位置包含了一条 `assistant` 消息，这会预填充助手的响应。当使用工具（Tools）时，Anthropic 不支持预填充助手响应。解决方法是检查发送给 API 的消息历史，确保最后一条消息不是 `assistant` 角色，或者在构建请求时移除导致预填充的逻辑。","https:\u002F\u002Fgithub.com\u002Fax-llm\u002Fax\u002Fissues\u002F27",[182,187,192,197,202,207,212,217,222,227,232,237,242,247,252,257,262,267,272,277],{"id":183,"version":184,"summary_zh":185,"released_at":186},145653,"19.0.42","* 修复：多项修复 d12a683e\n* 修复：多项修复 5d257d5e\n* 新功能：添加 GPT-5.4 模型 + 修复：将 chatReqUpdater 传递给 Azure OpenAI (#505) 6cef1358","2026-04-07T01:58:17",{"id":188,"version":189,"summary_zh":190,"released_at":191},145654,"19.0.41","* 按 API 版本 6a060c31 路由 Vertex Gemini 模型","2026-04-01T20:29:14",{"id":193,"version":194,"summary_zh":195,"released_at":196},145655,"19.0.40","* 在发布前将包的 README 复制到 dist 目录 55d4d142","2026-04-01T19:44:29",{"id":198,"version":199,"summary_zh":200,"released_at":201},145656,"19.0.39","* 修复：Gemini 3.1 Pro Vertex 修复 979383d9","2026-04-01T19:38:40",{"id":203,"version":204,"summary_zh":205,"released_at":206},145657,"19.0.38","* 功能：用于训练数据的聊天记录 874e38f4","2026-03-29T08:09:18",{"id":208,"version":209,"summary_zh":210,"released_at":211},145658,"19.0.37","* 修复：多项修复 f50828cd\n* 修复：在 Gemini 3 上下文缓存路径中保留 thought_signature (#502) 31e2f95b\n* 功能：在 ctx.addFunctions() 后刷新系统提示 \u003Cavailable_functions> (#501) 6d8517c4\n* 修复 README.md 中的链接 (#498) 86d24e7e\n* 功能 (dsp)：为 AxGen 添加 customTemplate 选项 (#499) 63e496ea","2026-03-27T19:11:55",{"id":213,"version":214,"summary_zh":215,"released_at":216},145659,"19.0.36","* 修复：各种修复 05cbc64b","2026-03-27T04:41:28",{"id":218,"version":219,"summary_zh":220,"released_at":221},145660,"19.0.35","* 修复：在 Deno Worker 作用域中处理只读全局属性 f2ae6a87","2026-03-26T18:48:24",{"id":223,"version":224,"summary_zh":225,"released_at":226},145661,"19.0.34","* 功能：向 AxAgentCompletionProtocol 添加 stop() 以及 success()\u002Ffailed() 方法 375e3916\n* 文档：为 RLM 架构添加 axagent-rlm.md 参考文档 b5d20589\n* 功能：在 AxAgent RLM 中添加 agentStatusCallback，并修复 final() 合约 921357f5","2026-03-26T06:56:30",{"id":228,"version":229,"summary_zh":230,"released_at":231},145662,"19.0.33","* 修复：在 JS 运行时 Worker 消息传递中处理 DataCloneError 8f54922d","2026-03-24T06:53:03",{"id":233,"version":234,"summary_zh":235,"released_at":236},145663,"19.0.32","* feat: improvements to the live runtime state system 0ed618d3\r\n* fix: various rlm runtime fixes 929939e5","2026-03-24T03:17:47",{"id":238,"version":239,"summary_zh":240,"released_at":241},145664,"19.0.31","* fix: test failures c8e5cae6\r\n* chore: Update RLM templates and discovery example 7031987a\r\n* Fix: Bubble up AxAgentClarificationError instead of logging in actorLog 7eb3739a","2026-03-23T08:22:11",{"id":243,"version":244,"summary_zh":245,"released_at":246},145665,"19.0.30","* Fix lint errors and add truncate utilities 2d6cc1f5","2026-03-23T05:21:39",{"id":248,"version":249,"summary_zh":250,"released_at":251},145666,"19.0.29","* Add context cache improvements and expanded test coverage 18230e42","2026-03-22T23:30:43",{"id":253,"version":254,"summary_zh":255,"released_at":256},145667,"19.0.28","* Refine AxAgent summarizer and runtime controls 15e72a31","2026-03-22T08:04:34",{"id":258,"version":259,"summary_zh":260,"released_at":261},145668,"19.0.27","* Simplify AxAgent context budgets 3c27abc8\r\n* Refine agent guidance and action log replay ca05836b","2026-03-22T07:13:59",{"id":263,"version":264,"summary_zh":265,"released_at":266},145669,"19.0.26","* Refine prompt templates and actor guidance 
1f4b115b","2026-03-21T05:34:51",{"id":268,"version":269,"summary_zh":270,"released_at":271},145670,"19.0.25","* Fix AxAgent debug system prompt visibility 8c0d27e3\r\n* Refine actor guidance prompt handling 225286b9\r\n* rename ask_clarification to askClarification e17571cc\r\n* Add authenticated guideAgent flow 37f42153\r\n* add namespace-aware actor model policy routing 5cf97b79","2026-03-20T06:31:56",{"id":273,"version":274,"summary_zh":275,"released_at":276},145671,"19.0.24","* fix: agent refactor and other fixes 2018ddc5","2026-03-19T18:54:46",{"id":278,"version":279,"summary_zh":280,"released_at":281},145672,"19.0.23","* feat: automatic model upgrade in axagent d841ed62","2026-03-19T07:43:10"]