[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-Mirascope--mirascope":3,"tool-Mirascope--mirascope":62},[4,18,26,36,46,54],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",160784,2,"2026-04-19T11:32:54",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":42,"last_commit_at":43,"category_tags":44,"status":17},8272,"opencode","anomalyco\u002Fopencode","OpenCode 是一款开源的 AI 编程助手（Coding Agent），旨在像一位智能搭档一样融入您的开发流程。它不仅仅是一个代码补全插件，而是一个能够理解项目上下文、自主规划任务并执行复杂编码操作的智能体。无论是生成全新功能、重构现有代码，还是排查难以定位的 Bug，OpenCode 都能通过自然语言交互高效完成，显著减少开发者在重复性劳动和上下文切换上的时间消耗。\n\n这款工具专为软件开发者、工程师及技术研究人员设计，特别适合希望利用大模型能力来提升编码效率、加速原型开发或处理遗留代码维护的专业人群。其核心亮点在于完全开源的架构，这意味着用户可以审查代码逻辑、自定义行为策略，甚至私有化部署以保障数据安全，彻底打破了传统闭源 AI 助手的“黑盒”限制。\n\n在技术体验上，OpenCode 提供了灵活的终端界面（Terminal UI）和正在测试中的桌面应用程序，支持 macOS、Windows 及 Linux 全平台。它兼容多种包管理工具，安装便捷，并能无缝集成到现有的开发环境中。无论您是追求极致控制权的资深极客，还是渴望提升产出的独立开发者，OpenCode 都提供了一个透明、可信",144296,1,"2026-04-16T14:50:03",[13,45],"插件",{"id":47,"name":48,"github_repo":49,"description_zh":50,"stars":51,"difficulty_score":32,"last_commit_at":52,"category_tags":53,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 
### Comfy-Org/ComfyUI
★ 109,154 · difficulty 2/10 · last commit 2026-04-18 · tags: Dev Framework, Image, Agent

ComfyUI is a powerful, highly modular visual AI engine built for designing and executing complex Stable Diffusion image-generation pipelines. Instead of writing code, you work in an intuitive node-graph interface, connecting functional modules to build a personalized generation pipeline.

This design neatly solves the complexity and inflexibility of configuring advanced AI image workflows. Without any programming background, users can freely combine models, tune parameters, and preview results in real time, handling everything from basic text-to-image up to multi-stage high-res-fix tasks. ComfyUI is broadly compatible: it runs on Windows, macOS, and Linux, supports NVIDIA, AMD, Intel, and Apple Silicon hardware, and was among the first to support frontier models such as SDXL, Flux, and SD3.

Researchers and developers exploring what the algorithms can do, as well as designers and serious AI-art enthusiasts chasing maximum creative freedom, will find strong support here. Its modular architecture lets the community keep extending it, making it one of the most flexible, ecosystem-rich open-source diffusion tools for turning ideas into images.

### google-gemini/gemini-cli
★ 100,752 · difficulty 2/10 · last commit 2026-04-10 · tags: Plugin, Agent, Image, Dev Framework

gemini-cli is Google's open-source AI command-line tool, integrating the Gemini model's capabilities directly into your terminal. For developers who live on the command line, it offers the shortest path from prompt to model response, with no window switching required.

It targets the pain of constant context switching during development: code understanding, generation, debugging, and automated ops tasks all happen inside the terminal you already know. Querying a large codebase, generating an app from a sketch, or running complex Git operations can all be driven by natural-language instructions.

It suits software engineers, DevOps staff, and technical researchers. Highlights include a context window of up to 1 million tokens with strong reasoning ability; built-in tools for Google Search, file operations, and shell command execution; and, notably, MCP (Model Context Protocol) support, letting users flexibly extend custom integrations such as image generation. A personal Google account comes with a free usage quota, and the project is fully open source under Apache 2.0, making it an ideal terminal productivity assistant.

---

## Mirascope/mirascope: The LLM Anti-Framework

Mirascope is a lightweight library for building large language model (LLM) applications, self-described as "the LLM anti-framework". It targets the common pain points of current AI development: bloated frameworks, steep learning curves, and inconsistent model interfaces. Through a concise decorator syntax, Mirascope lets developers call frontier models with minimal code and easily get structured data output and agent tool calling.

It is a good fit for Python and TypeScript developers who want to prototype quickly while keeping their code under control. Unlike traditional heavyweight frameworks, Mirascope does not force a particular workflow or hide the underlying logic; it acts as a thin abstraction layer that preserves flexibility while adding conveniences such as type safety, automatic retries, and streaming. Its "anti-framework" philosophy means it enhances the native development experience rather than taking control away from the developer. Engineers who need to validate ideas fast and researchers who want fine-grained control over model interaction can both use Mirascope to build anything from simple chat to complex agent systems.

Welcome to Mirascope, which allows you to use any frontier LLM with one unified interface.

## Quick Start

Install Mirascope:

```bash
uv add "mirascope[all]"
```
href=\"https:\u002F\u002Fgithub.com\u002FMirascope\u002Fmirascope\u002Fblob\u002Fmain\u002Fpython\u002FLICENSE\" target=\"_blank\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLicense-MIT-blue.svg\" alt=\"License\"\u002F>\u003C\u002Fa>\n\u003C\u002Fp>\n\n---\n\n## Mirascope \n\nWelcome to Mirascope, which allows you to use any frontier LLM with one unified interface.\n\n## Quick Start\n\nInstall Mirascope:\n\n```bash\nuv add \"mirascope[all]\"\n```\n\n### Call LLMs with a Decorator\n\n```python\nfrom mirascope import llm\n\n\n@llm.call(\"anthropic\u002Fclaude-sonnet-4-5\")\ndef recommend_book(genre: str):\n    return f\"Recommend a {genre} book.\"\n\n\nresponse = recommend_book(\"fantasy\")\nprint(response.text())\n```\n\n### Get Structured Output\n\n```python\nfrom pydantic import BaseModel\nfrom mirascope import llm\n\n\nclass Book(BaseModel):\n    title: str\n    author: str\n\n\n@llm.call(\"anthropic\u002Fclaude-sonnet-4-5\", format=Book)\ndef recommend_book(genre: str):\n    return f\"Recommend a {genre} book.\"\n\n\nbook = recommend_book(\"fantasy\").parse()\nprint(f\"{book.title} by {book.author}\")\n```\n\n### Build an Agent with Tools\n\n```python\nfrom pydantic import BaseModel\nfrom mirascope import llm\n\n\nclass Book(BaseModel):\n    title: str\n    author: str\n\n\n@llm.tool\ndef get_available_books(genre: str) -> list[Book]:\n    \"\"\"Get available books in the library by genre.\"\"\"\n    return [Book(title=\"The Name of the Wind\", author=\"Patrick Rothfuss\")]\n\n\n@llm.call(\"anthropic\u002Fclaude-sonnet-4-5\", tools=[get_available_books], format=Book)\ndef librarian(request: str):\n    return f\"You are a librarian. Help the user: {request}\"\n\n\nresponse = librarian(\"I want a fantasy book\")\nwhile response.tool_calls:\n    response = response.resume(response.execute_tools())\nbook = response.parse()\nprint(f\"Recommending: {book.title} by {book.author}\")\n```\n\nFor streaming, async, multi-turn conversations, and more, see the [full documentation](https:\u002F\u002Fmirascope.com\u002Fdocs).\n\n## Monorepo Structure\n\nThis project is structured as a monorepo, that conceptually divides into four parts:\n\n- `python\u002F` contains the Python implementation, and examples (in `python\u002Fexamples`)\n- `typescript\u002F` contains the Typescript implementation, and examples (in `typescript\u002Fexamples`)\n- `website\u002F` contains the marketing website (docs, blog, landing page)\n- `docs\u002F` contains the unified cross-language documentation (in `docs\u002Fcontent`), as well as configuration needed to build the docs\n\nFor detailed information about the codebase structure, architecture, and design decisions, see [`STRUCTURE.md`](STRUCTURE.md).\n\n## Developing the site\n\nUse `bun run website:dev` to launch the dev server.\n\nNote that [Bun](http:\u002F\u002Fbun.sh\u002F) must be installed.\n\n## CI and local testing\n\nWe currently have four CI jobs:\n- codespell: Checks for common misspellings including python, typescript, and docs repos\n- python-lint: Linting and typechecking for Python code\n- typescript-lint: Linting and typechecking for Typescript code\n- cloudflare docs build: Builds and previews the documentation site\n\nYou can run `bun run ci` in the root directory to run all CI checks locally. 
## Quick Start Guide

Mirascope is a unified-interface library that lets developers call a range of frontier large language models (LLMs) with the same code patterns, with support for structured output and tool calling (agents).

### Requirements

- **Operating system**: Linux, macOS, or Windows
- **Python**: 3.9 or later
- **Package manager**: `uv` (a high-performance Python package installer) is recommended; `pip` also works
- **Dependencies**: no special system-level dependencies; make sure your network can reach the model providers' APIs (Anthropic, OpenAI, etc.)

> **Tip**: Developers in mainland China who hit slow installs can configure a domestic mirror.
> - With `uv`: `export UV_INDEX_URL=https://pypi.tuna.tsinghua.edu.cn/simple`
> - With `pip`: `pip install ... -i https://pypi.tuna.tsinghua.edu.cn/simple`

### Installation

`uv` is recommended; it handles dependencies and the environment automatically:

```bash
uv add "mirascope[all]"
```

If you prefer `pip`:

```bash
pip install "mirascope[all]"
```
### Basic Usage

The three Quick Start examples above cover the basics: calling a model through a simple decorator, combining a Pydantic model with the call so the LLM returns formatted JSON directly, and defining tool functions the model invokes on its own to implement simple agent logic.

> **Note**: Before use, make sure the API key for your model provider is configured (usually via an environment variable such as `ANTHROPIC_API_KEY`).

## Use Case

An e-commerce startup is building a smart customer-service system: the AI must query inventory and recommend products, and the returned data must be ready to write straight into the database.

### Without mirascope

- **Heavy vendor lock-in**: the code is full of provider-specific API logic (OpenAI or Anthropic), so switching models or comparing several means rewriting a lot of low-level code.
- **Tedious structured-data parsing**: developers hand-write complex regexes or an extra parsing layer to turn the LLM's natural-language output into usable JSON objects, and format errors easily cause crashes.
- **Fragmented tool-calling logic**: implementing an external capability such as "query inventory" means managing function definitions, argument extraction, execution callbacks, and multi-turn state yourself; the code is verbose and hard to debug.
- **No type safety**: with no unified interface contract, inputs and outputs rely on dynamic typing, and mismatched arguments or missing return fields only surface at runtime.

### With mirascope

- **One interface, free switching**: a simple decorator calls any frontier model, and swapping the underlying LLM is a one-line config change with business logic untouched (see the sketch after this list).
- **Native structured output**: define the output format with a Pydantic model and mirascope handles parsing and validation automatically, so the returned product data naturally satisfies database-write requirements.
- **Declarative agents**: adding the `@llm.tool` decorator to an ordinary Python function wires in tool-calling ability; the framework manages the complex multi-turn interaction and state flow.
- **End-to-end type safety**: from prompt construction to result parsing, the Python type system catches potential errors at coding time, sharply reducing runtime risk.

mirascope hides the complexity of the underlying models behind minimal decorator syntax, letting developers focus on business logic instead of reinventing infrastructure.
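Under the v2 decorator API shown in the README above, that one-line switch looks roughly like this; the `Product` model, the `check_inventory` tool, and the model-ID strings are illustrative, not part of the project's own examples:

```python
from pydantic import BaseModel
from mirascope import llm


class Product(BaseModel):
    name: str
    in_stock: bool


@llm.tool
def check_inventory(name: str) -> Product:
    """Look up a product in the (illustrative) inventory."""
    return Product(name=name, in_stock=True)


# Swapping providers is just a change to this model-ID string,
# e.g. "openai/gpt-5" instead of "anthropic/claude-sonnet-4-5".
@llm.call("anthropic/claude-sonnet-4-5", tools=[check_inventory], format=Product)
def support_agent(question: str):
    return f"You are a store assistant. Help the customer: {question}"


response = support_agent("Is The Name of the Wind in stock?")
while response.tool_calls:
    response = response.resume(response.execute_tools())
print(response.parse())
```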
## Project Info

- **Owner**: Mirascope (https://github.com/Mirascope) · website: https://mirascope.com · email: support@mirascope.com · Twitter: @mirascope_ai
- **Stats**: ★ 1,461 · 116 forks · MIT license · difficulty 1/10 · last commit 2026-04-19
- **Languages**: Python 42.4%, MDX 37.9%, TypeScript 19.4%, CSS 0.2%, Shell <0.1%
- **Environment**: Linux, macOS, Windows · GPU/RAM requirements: not specified · Python 3.9+ · key dependencies: pydantic, uv
- **Notes**: Mirascope is a Python/TypeScript library for calling LLMs (e.g. Anthropic Claude); it does not run models locally, so it has no particular GPU or VRAM requirements. Install with `uv add "mirascope[all]"` (uv recommended). The project is a monorepo with Python and TypeScript implementations; Bun is required to develop the docs site.
- **Tags**: Agent, Dev Framework, LLM
- **GitHub topics**: artificial-intelligence, developer-tools, llm, python, llm-agent, llm-tools, typescript

## FAQ

**Q: How can tool descriptions (docstrings and field descriptions) be templated dynamically in Mirascope?**

A: This is available in v1. The maintainers suggest either using decorators to make the dynamic behavior explicit, or using the `Toolkit` class together with the `@toolkit_tool()` decorator: define a class that inherits from `Toolkit` and, inside its methods, use f-strings or template variables that read instance attributes (such as `self.reading_level`) to generate descriptions dynamically. (Source: https://github.com/Mirascope/mirascope/issues/278)
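A minimal sketch of that pattern; the `BaseToolKit`/`toolkit_tool` import names and the `create_tools()` call are assumptions based on the v1 API, and the class and attribute are illustrative:

```python
from mirascope.core import BaseToolKit, toolkit_tool  # assumed v1 import path


class ReadingTools(BaseToolKit):
    """Illustrative toolkit whose tool descriptions depend on runtime state."""

    __namespace__ = "reading_tools"
    reading_level: str  # runtime value referenced from the docstring template

    @toolkit_tool
    def suggest_book(self, topic: str) -> str:
        """Suggest a {self.reading_level}-level book about the given topic."""
        return f"A {self.reading_level} book about {topic}"


# create_tools() renders the docstring template with the instance state,
# so the tool description the LLM sees embeds "beginner" here.
tools = ReadingTools(reading_level="beginner").create_tools()
```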
**Q: Does Mirascope plan to ship an official `llm.agent` decorator or a finite-state-machine (FSM) class?**

A: Not at present for dedicated `agents` and `graphs` modules. The maintainers' position is that the new v2 interface is already lean enough to build agent logic yourself. The team prioritized shipping the `llm` module and will revisit the concrete shape of an agent interface based on feedback. They suggest discussing specific needs in the Discord community, or watching for possible future registry-based prebuilt components. (Source: https://github.com/Mirascope/mirascope/issues/908)

**Q: I hit a Pydantic validation error about missing argument fields when using tools with OpenAI. How do I fix it?**

A: OpenAI cannot handle `dict`-typed arguments directly when using tools. The fix is to set `json_mode=True` explicitly in the call configuration. This forces the model to output JSON, which satisfies Pydantic's validation requirements. Some other models may not need this setting, but with OpenAI it is required. (Source: https://github.com/Mirascope/mirascope/issues/461)
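A minimal v1-style sketch of the workaround; the model name and schema are illustrative, while `response_model` and `json_mode` follow the v1 call API described in the answer:

```python
from pydantic import BaseModel
from mirascope.core import openai  # assumed v1 import path


class OrderSummary(BaseModel):
    items: dict[str, int]  # dict-typed field that trips OpenAI's tool mode


# json_mode=True makes the provider emit plain JSON instead of a tool call,
# which Pydantic can then validate without the missing-field error.
@openai.call("gpt-4o-mini", response_model=OrderSummary, json_mode=True)
def summarize_order(text: str):
    return f"Extract the ordered items and quantities: {text}"


summary = summarize_order("two lattes and one bagel")
print(summary.items)
```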
**Q: How do I pass multiple arguments to a search tool function? The docs example seems to show only one.**

A: Tool-function arguments are extracted automatically via a Pydantic model. Make sure your tool function's parameters are clearly defined and that the tool is correctly listed in `tools` on the `call`. Validation errors usually mean the model returned arguments in an unexpected format (e.g. OpenAI needs `json_mode=True`). Check that the function signature matches the call arguments the LLM generates and that no required field is missing. (Source: https://github.com/Mirascope/mirascope/issues/461)

**Q: What are the main interface changes between Mirascope v1 and v2?**

A: Mirascope is moving from a class-based design to a more functional interface, since a "call" is not inherently stateful. In the updated interfaces (v1/v2), a basic call is made by decorating a function directly (e.g. with `@openai_call`) rather than defining a class. This design is more functional and cuts boilerplate, so a simple task like recommending a book takes only a few lines. (Source: https://github.com/Mirascope/mirascope/issues/322)

**Q: How does support for tool calling and JSON mode differ across model providers?**

A: Providers differ in which argument types they support. For example, some models accept `dict`-typed tool arguments directly, while OpenAI does not when using tools and requires `json_mode=True`. For `response_model`, setting `json_mode=True` usually resolves compatibility issues. See the docs' list of field-type support across providers for details. (Source: https://github.com/Mirascope/mirascope/issues/461)

**Q: Can tool template variables be updated dynamically at runtime rather than fixed at definition time?**

A: Yes, by generating the schema at runtime. One recommended approach is to create a config object that is passed in and applied at runtime. The `Toolkit` class also lets methods read runtime state through `self` (e.g. `self.reading_level`) to inject dynamic descriptions, which is more flexible than f-strings at class-definition time. (Source: https://github.com/Mirascope/mirascope/issues/278)

## Releases

### v2.4.0 (2026-03-08)

**Breaking news**

Sadly, we have decided to stop maintaining Mirascope Cloud. In this release the `api` module and everything tied to the cloud backend has been removed, so from now on Mirascope exists purely as an LLM SDK.

We will keep building and maintaining the Mirascope Python/TypeScript LLM library, but please find an alternative for your OTEL backend.

**What's changed**
* chore: sunset the cloud service, by @willbakst in https://github.com/Mirascope/mirascope/pull/2856

**Full changelog**: https://github.com/Mirascope/mirascope/compare/v2.3.0...v2.4.0

### v2.3.0 (2026-02-27)

**What's changed**
* fix: span signature styling in dark mode (#2164), by @vnkaralkar in https://github.com/Mirascope/mirascope/pull/2844

**New contributors**
* @sazed-verity made their first contribution in https://github.com/Mirascope/mirascope/pull/2751
* @vnkaralkar made their first contribution in https://github.com/Mirascope/mirascope/pull/2844

**Full changelog**: https://github.com/Mirascope/mirascope/compare/v2.2.2...v2.3.0

### v2.2.2 (2026-02-05)

**What's changed**
* fix(cloud): OTEL intValue should accept numbers, by @willbakst in https://github.com/Mirascope/mirascope/pull/2428

**Full changelog**: https://github.com/Mirascope/mirascope/compare/v2.2.1...v2.2.2

### v2.2.1 (2026-02-05)

**What's changed**
* fix(cloud): `ok` was too narrow; pass through all `2xx` and `3xx` status codes, by @sourishkrout in https://github.com/Mirascope/mirascope/pull/2406
* refactor(content): separate the content manifest from logic, by @sourishkrout in https://github.com/Mirascope/mirascope/pull/2416
* fix(cloud): drop ESLint, use OXLint only, by @sourishkrout in https://github.com/Mirascope/mirascope/pull/2417
* fix(SEO): the Satori renderer could not wrap titles without spaces, by @sourishkrout in https://github.com/Mirascope/mirascope/pull/2421
* chore: generate SDK and bump the patch version, by @willbakst in https://github.com/Mirascope/mirascope/pull/2426

**Full changelog**: https://github.com/Mirascope/mirascope/compare/v2.2.0...v2.2.1
### v2.2.0 (2026-02-04)

**TypeScript SDK**

This release marks a major milestone for the TypeScript SDK, with several significant new features (alpha, at full feature parity with the Python SDK):

New features:

- MCP support: full Model Context Protocol support for building and consuming MCP servers
- Structured outputs: type-safe structured responses with Zod-defined schemas, including streaming support
- Tool streaming: stream tool calls as the model generates them
- Sub-streaming: fine-grained streaming control with incremental partial updates
- Zod-native interface: native Zod schema support for tools and output formats, handled via JSDoc transforms

New providers:

- OpenRouterProvider: access many models through OpenRouter
- OllamaProvider: run local models with Ollama
- TogetherProvider: Together AI inference support
- MirascopeProvider: Mirascope Cloud integration

Other improvements:

- ProviderTool + WebSearchTool: built-in provider tools, including web search
- Anthropic cache control: control caching behavior for Anthropic models
- Closure analysis and versioning: ops integration via compile-time transforms for prompt versioning

**Python SDK**

New features:

- First-class retry support: new llm.retries and RetryConfig for configuring retry behavior, plus RetryCall, RetryPrompt, and their async variants (a sketch follows below)
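A rough sketch of what configuring retries might look like; note that the `max_attempts` field and the `retries=` call parameter are hypothetical, since the notes only name `llm.retries` and `RetryConfig`:

```python
from mirascope import llm

# Hypothetical configuration: the release notes name llm.RetryConfig and a
# retries option, but the exact field shown here (max_attempts) is assumed.
retry_config = llm.RetryConfig(max_attempts=3)


@llm.call("anthropic/claude-sonnet-4-5", retries=retry_config)
def recommend_book(genre: str):
    return f"Recommend a {genre} book."


# On transient provider errors the call would be retried up to the
# configured number of attempts before raising.
print(recommend_book("fantasy").text())
```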
Improvements:

- Prompt and call refactor: Prompt and Call are now proper classes, improving extensibility and type safety

**Cloud**

New features:

- Multi-language code highlighting: code blocks now support TypeScript syntax highlighting in addition to Python

Bug fixes:

- Fixed Markdown rendering of code blocks in user messages
- Fixed the HTTP Basic auth prompt so it only triggers when the staging domain matches
- Fixed asset-serving checks in staging
- Fixed compatibility between Cloudflare asset bindings and Basic auth credentials
- Improved latitude-based sunset-duration calculation

**What's changed**
* fix(cloud): only prompt for HTTP Basic auth credentials when the staging domain matches, by @sourishkrout in https://github.com/Mirascope/mirascope/pull/2333
* chore(cloud): make sure coding agents now use oxlint, by @sourishkrout in https://github.com/Mirascope/mirascope/pull/2344
* fix(cloud): always check whether assets need serving in staging, by @sourishkrout in https://github.com/Mirascope/mirascope/pull/2349
* fix(cloud): CF asset bindings do not support Basic auth credentials in the URL, by @sourishkrout in https://github.com/Mirascope/mirascope/pull/2352
* fix(cloud): use latitude to compute sunset duration, by @sourishkrout in https://github.com/Mirascope/mirascope/pull/2358
* dev(python-sdk): add placeholder llm.retries and llm.RetryConfig, by @teamdandelion in https://github.com/Mirascope/mirascope/pull/1878
* dev(python-sdk): add RetryModel and RetryResponse/AsyncRetryResponse, by @teamdandelion in https://github.com/Mirasc

### v2.1.1 (2026-02-01)

**OpenRouterProvider**

It appears the official `v2.1` release did not include the `OpenRouterProvider` due to a failed merge. This release fixes that.

**What's changed**
* tests(ts): AnthropicProvider error wrapping, by @willbakst in https://github.com/Mirascope/mirascope/pull/2137
* feat(ts): OpenAI + OpenAICompletions support (Responses not yet), by @willbakst in https://github.com/Mirascope/mirascope/pull/2139
* tests(ts): clean up test code, prefer end-to-end tests, by @willbakst in https://github.com/Mirascope/mirascope/pull/2140
* refactor(ts): improve the codegen scripts to lean on TypeScript's type system, by @willbakst in https://github.com/Mirascope/mirascope/pull/2141
* feat(ts): OpenAIResponsesProvider, by @willbakst in https://github.com/Mirascope/mirascope/pull/2142
* feat(ts): llm Thinking, by @willbakst in https://github.com/Mirascope/mirascope/pull/2238
* feat(ts): llm StreamResponse (streaming support), by @willbakst in https://github.com/Mirascope/mirascope/pull/2239
* chore(ts): align the messages namespace with the Python SDK structure, by @willbakst in https://github.com/Mirascope/mirascope/pull/2240
* chore(ts): add the missing `.call` method to prompt and call, by @willbakst in https://github.com/Mirascope/mirascope/pull/2241
* feat(ts): llm Context call/prompt/model, by @willbakst in https://github.com/Mirascope/mirascope/pull/2242
* feat(ts): llm tool definitions and interface, by @willbakst in https://github.com/Mirascope/mirascope/pull/2246
* feat(ts): provider tool args, by @willbakst in https://github.com/Mirascope/mirascope/pull/2250
* feat(ts): llm tools in prompt/call, by @willbakst in https://github.com/Mirascope/mirascope/pull/2251
* feat(llm): add OpenRouterProvider for native OpenRouter support, by @koxudaxi in https://github.com/Mirascope/mirascope/pull/2249
* refactor(llm): introduce the CompletionsModelFeatureInfo dataclass, by @koxudaxi in https://github.com/Mirascope/mirascope/pull/2269
* chore: bump the patch version for 2.1.1, by @willbakst in https://github.com/Mirascope/mirascope/pull/2321

**Full changelog**: https://github.com/Mirascope/mirascope/compare/v2.1.0...v2.1.1
### v2.1.0 (2026-01-30)

**Python SDK**

*OpenRouter provider*

Native support for https://openrouter.ai/, giving you access to 200+ models through a single API. Use it to route requests through OpenRouter, or to reach models other providers do not expose directly:

```python
from mirascope import llm

# Register the provider
llm.register_provider("openrouter", scope="openrouter/")

@llm.call("openrouter/openai/gpt-4o")
def my_prompt() -> str:
    return "Hello!"

# Or route existing model prefixes through OpenRouter
llm.register_provider("openrouter", scope=["openai/", "anthropic/"])

@llm.call("openai/gpt-4o")  # now routed through OpenRouter
def my_prompt() -> str:
    return "Hello!"
```

Set `OPENROUTER_API_KEY` in your environment to get started.

*Web search tool*

You can now give models the ability to search the web using provider-native web search. `llm.WebSearchTool()` supports Anthropic, Google Gemini, and OpenAI (Responses API):

```python
from mirascope import llm

@llm.call("anthropic/claude-sonnet-4-5", tools=[llm.WebSearchTool()])
def search(query: str) -> str:
    return query

response = search("What's the weather in San Francisco right now?")
print(response.text())  # the response includes live data with cited sources
```

Search executes server-side on the provider's infrastructure. The model decides when to search based on the prompt, and results are folded into the response automatically.

*Provider tool usage tracking*

When using server-side tools such as web search, you can now track their usage via the `provider_tool_usage` field on the response object:

```python
response = search("latest news")
if response.usage and response.usage.provider_tool_usage:
    for tool_usage in response.usage.provider_tool_usage:
        print(f"{tool_usage.name}: {tool_usage.call_count} calls")
```

*OpenAI Completions API: reasoning model support*

Reasoning models (o1, o3-mini, etc.) now work with the Completions API. For reasoning models, parameters such as temperature and top_p are excluded automatically, and max_tokens is correctly mapped to max_completion_tokens.

Note: for reasoning models we recommend the Responses API (the `:responses` suffix), which offers richer support for reasoning features.

*Native provider SDK instrumentation (ops)*

You can now add OpenTelemetry tracing directly to provider SDKs, even without `@llm.call`:

```python
from mirascope import ops

ops.instrument_openai()
ops.instrument_anthropic()
ops.instrument_google_genai()
```

This is useful when you want tracing for code that uses provider SDKs directly.

**Mirascope Cloud**

- Server tool cost tracking: web search and other server-side tools are now metered and included in cost calculations.
- Timezone handling: all timestamps are now stored timezone-aware for cross-region consistency.
- Bug fix: project admins and developers can now create API keys (previously blocked).
### v2.0.2 (2026-01-23)

**Key fixes**
* Preserve the original method's metadata (e.g. `__name__`) when decorating
* Update the regex for organization invite email validation to avoid errors on Python 3.12+
* Substantially improved docs styling and structure for better navigation
* Fixed a broken Discord link

**What's changed**
* fix(cloud): mobile menu hamburger icon color, by @sourishkrout in https://github.com/Mirascope/mirascope/pull/2094
* refactor(cloud): redirect logic in a single round trip, by @sourishkrout in https://github.com/Mirascope/mirascope/pull/2093
* fix(docs): dark-mode table styling, homepage, and security-related tweaks, by @willbakst in https://github.com/Mirascope/mirascope/pull/2096
* refactor(docs): move docs content out of the `cloud/` directory, by @sourishkrout in https://github.com/Mirascope/mirascope/pull/2089
* fix(website): make Discord links work again by using full URLs and opening in a new tab, by @willbakst in https://github.com/Mirascope/mirascope/pull/2102

**Full changelog**: https://github.com/Mirascope/mirascope/compare/v2.0.1...v2.0.2

### v2.0.1 (2026-01-22)

Minor bug fix: replaced stale v2.mirascope.com instances with the correct mirascope.com/api/v2.

**What's changed**
* fix: update incorrect base URL instances to the correct one, by @willbakst in https://github.com/Mirascope/mirascope/pull/2092

**Full changelog**: https://github.com/Mirascope/mirascope/compare/v2.0.0...v2.0.1

### v2.0.0 (2026-01-21)

**Docs**

We wrote extensive documentation for Mirascope v2! You can read it all here: https://mirascope.com/docs

**New features**

*Structured output improvements*

- Added support for native format types: you can now pass native types such as `str`, `int`, `list`, or `list[BaseModel]` to the `format=` parameter on models, prompts, and calls.
- Added structured streaming support: iterating `stream_response.structured_stream()` yields partial structured outputs in the order they are generated.
- Added `llm.output_parser` as a format option, letting you write a custom output parser that takes an `llm.AnyResponse` and parses a structured type from it. `llm.AnyResponse` is a new type alias compatible with `llm.Response`, `llm.AsyncResponse`, `llm.StreamResponse`, and so on.
- Improved error handling for `response.parse()`: any error is wrapped in an `llm.ParseError`, and `error.retry_message()` returns a message suitable for sending back to the LLM so it can retry.
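A small sketch of how these pieces can combine, using only the names given above (`format=` with a native `list[BaseModel]`, `parse()`, `llm.ParseError`, and `retry_message()`); the prompt itself is illustrative:

```python
from pydantic import BaseModel
from mirascope import llm


class Book(BaseModel):
    title: str
    author: str


# Native format types: list[BaseModel] is accepted directly.
@llm.call("anthropic/claude-sonnet-4-5", format=list[Book])
def recommend_books(genre: str):
    return f"Recommend three {genre} books."


response = recommend_books("fantasy")
try:
    books = response.parse()
except llm.ParseError as error:
    # retry_message() returns text suitable for passing back to the
    # LLM so it can correct its output; here we just surface it.
    print(error.retry_message())
else:
    for book in books:
        print(f"{book.title} by {book.author}")
```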
*Interface improvements*

- When calling `llm.Model.call` (or `.stream`, etc.) you can now pass user content directly and it is converted to messages automatically, e.g. `llm.Model("anthropic/claude-sonnet-4-5").call("Recommend a fantasy novel")`.
- With `llm.Prompt`, you can now optionally pass a `ModelId` directly, e.g. `prompt.call("openai/gpt-5-mini", ...)`, in which case a model is constructed with default parameters.
- Exposed the `prompt.messages(...)` method as a convenient way to reconstruct the `Sequence[llm.Message]` the prompt will use. It is also reachable on calls via `call.prompt.messages(...)`.
- The `llm.ThinkingConfig` interface was simplified: instead of `include_summaries: bool` it now takes `include_thoughts: bool`, which consistently controls whether the response includes any `llm.Thought` content.
- Improved tool-call error handling: if executing a tool call raises, the error is captured automatically and included in the `ToolOutput` (as `tool_output.error`). If the tool output is passed back to the model, the model sees the error message and can adjust how it uses the tool.
- Added `llm.ProviderError` as the base class for all provider-wrapped exceptions, and renamed `llm.MirascopeLLMError` to `llm.Error`.
- Added `response.text()` (and `stream_response.text_stream()`) as convenience helpers for getting all text content in the response, joined by newlines (or a custom separator).
- With `@llm.tool()`, strict mode is now an optional setting (`bool | None = None`). When `None`, the provider decides based on its support; providers with good strict-output support (such as OpenAI) enable it by default.
- If you have a Mirascope router key and a call would otherwise fail for lack of a key, the MirascopeProvider now activates by default.
### v2.0.0-alpha.6 (2026-01-10)

**Thinking support**

We've re-worked the thinking interface. Rather than simply setting thinking=True, you now provide a ThinkingConfig, like {"level": "medium", "include_summaries": True}. The ThinkingLevel is required, and options include "default", "none", "minimal", "low", "medium", "high", and "max". If include_summaries is True, the model is asked to output summaries of its thoughts. Under the hood, Mirascope converts the thinking level to the provider-specific representation.

**MCP support**

We've added MCP support to Mirascope! You can now connect to an MCP server using functions in the llm.mcp package, and pass the MCP server's tools to your Mirascope call.

Here is a runnable example showing both thinking and MCP. When invoked, Gemini uses the official FastMCP server to learn about FastMCP, and reports back on its findings. (The example below adds the closing parenthesis that was missing from the original snippet.)

```python
import asyncio

from mirascope import llm

async def main():

    # Connect to the FastMCP server and expose its tools to the call.
    async with llm.mcp.streamable_http_client(
        "https://gofastmcp.com/mcp"
    ) as mcp_client:
        tools = await mcp_client.list_tools()

        @llm.call(
            "google/gemini-3-flash-preview",
            thinking={"level": "medium", "include_summaries": True},
            tools=tools,
        )
        async def learn_mcp():
            return "Use the tools to learn about FastMCP, and write a report on the library."

        response = await learn_mcp.stream()

        while True:  # Loop for tool calls
            async for chunk in response.pretty_stream():
                print(chunk, flush=True, end="")
            print()

            if response.tool_calls:
                tool_output = await response.execute_tools()
                response = await response.resume(tool_output)
            else:
                break

if __name__ == "__main__":
    asyncio.run(main())
```

**Other fixes**

- Improve OpenAI thinking support to use GPT-5.2 full thinking levels
- Improve Google thinking support to use thinking levels for Gemini-3 models
- Improve Google thinking budget calculation to use a proportion of max tokens
- Refactor llm.Tool construction to make it easier to construct tools that are not decorated functions
- Future-proof streamed tool calls to support providers that stream multiple tool calls simultaneously
- Consolidate formatting utilities onto the llm.Format dataclass
- Improve missing import handling for optional packages (e.g. specific provider sdks)
- Add the llm.MirascopeProvider for future use with Mirascope Router

**What's Changed**
* feat(db): add migration for traces and spans tables by @koxudaxi in https://github.com/Mirascope/mirascope/pull/1605
* tests(cloud): 100% coverage by @willbakst in https://github.com/Mirascope/mirascope/pull/1804
* feat(db): add TraceService for OTLP ingestion by @koxudaxi in https://github.com/Mirascope/mirascope/pull/1606
* feat(api): add API Key auth to /traces endpoint by @koxudaxi in https://github.com/Mirascope/mirascope/pull/1607
* fix(api): accept null parentSpanId in trace schema by @koxudaxi in https://github.com/Mirascope/mirascope/pull/1609
* chore(api): regenerate Python SDK for traces endpoint by @koxudaxi in https://github.com/Mirascope/mirascope/pull/1608
* feat(db): add functions schema and migration by @koxudaxi in https://github.com/Mirascope/mirascope/pull/1623
* feat(db): add FunctionService for function versioning by @koxudaxi in https://github.com/Mirascope/mirascope/pull/1624
* feat(api): add functions API endpoints by @koxudaxi in https://github.com/Mirascope/mirascope/pull/1625
* feat(db): add annotations schema by @koxudaxi in https://github.com/Mirascope/mirascope/pull/1627
* feat(db): add AnnotationService for span annotations by @koxudaxi in https://github.com/Mirascope/mirascope/pull/1628
* feat(api): add annotations API endpoints by @koxudaxi in https://github.com/Mirascope/mirascope/pull/1629
* chore(api): regenerate Python SDK for functions and annotations by @koxudaxi in https://github.com/Mirascope/mirascope/pull/1631
* chore(examples): add local function versioning example by @koxudaxi in https://github.com/Mirascope/mirascope/pull/1636
* feat(ops): implement Trace.annotate and AsyncTrace.annotate method by @koxudaxi in https://github.com/Mirascope/mirascope/pull/1654
* feat(ops): implement _ensure_registration for VersionedFunction and AsyncVersionedFunction by @koxudaxi in https://github.com/Mirascope/mirascope/pull/1764
* nit: vscode search ignores generated api files by @teamdandelion in https://github.com/Mirascope/mirascope/pull/1822
* feat(cloud): initial router for Anthropic, Google, and OpenAI by @willbakst in https://github.com/Mirascope/mirascope/pull/1800
* fix(llm): missing authorization header for google when base_url is set by @willbakst in https://github.com/Mirascope/mirascope/pull/1801
* feat(llm): MirascopeProvider by @willbakst in https://github.com/Mirascope/mirascope/pull/1802
* de
### v2.0.0-alpha.5 (2025-12-31)

Hello and happy holidays! Here are the short-but-sweet release notes for the latest Mirascope v2 alpha release.

- Improved exception handling: all LLM calls now emit consistent exceptions, such as `llm.RateLimitError` or `llm.AuthenticationError`. You can learn more via our [API docs](https://mirascope.com/docs/mirascope/v2/api/exceptions), or by [reading the source](https://github.com/Mirascope/mirascope/blob/v2/python/mirascope/llm/exceptions.py).
- Improved semantics for `Response.usage`. Specifically, `input_tokens` now includes all input tokens (including cache reads and writes) and `output_tokens` includes all output tokens (including reasoning).
- Added `llm.reset_provider_registry()` for resetting all provider configuration to defaults.
- Removed `llm.load_provider()` and made it an implementation detail of `llm.register_provider()` (which both registers and returns the provider).
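A minimal sketch of provider-agnostic error handling with these exceptions; the call mirrors the alpha-era examples, and the handling branches are illustrative:

```python
from mirascope import llm


@llm.call("anthropic/claude-sonnet-4-5")
def recommend_book(genre: str):
    return f"Recommend a {genre} book."


try:
    print(recommend_book("fantasy").pretty())
except llm.RateLimitError:
    # One backoff path now covers every provider, since rate-limit
    # failures all surface as llm.RateLimitError.
    print("Rate limited; try again later.")
except llm.AuthenticationError:
    print("Check the API key for your provider.")
```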
**What's Changed**
* refactor(cloud): add initial effect/sql-drizzle infra for migration by @willbakst in https://github.com/Mirascope/mirascope/pull/1714
* refactor(cloud): Users (@effect/sql-drizzle) by @willbakst in https://github.com/Mirascope/mirascope/pull/1715
* refactor(cloud): Sessions (@effect/sql-drizzle) by @willbakst in https://github.com/Mirascope/mirascope/pull/1716
* refactor(cloud): OrganizationMemberships (@effect/sql-drizzle) by @willbakst in https://github.com/Mirascope/mirascope/pull/1724
* refactor(cloud): Organizations (@effect/sql-drizzle) by @willbakst in https://github.com/Mirascope/mirascope/pull/1725
* feat(cloud): add desktop header and footer for home and other pages by @sourishkrout in https://github.com/Mirascope/mirascope/pull/1721
* fix(cloud): center app ui properly to avoid it hanging off to the side by @sourishkrout in https://github.com/Mirascope/mirascope/pull/1723
* fix(cloud): use tailwind for homepage buttons background color by @sourishkrout in https://github.com/Mirascope/mirascope/pull/1749
* refactor(cloud): ProjectMemberships (@effect/sql-drizzle) by @willbakst in https://github.com/Mirascope/mirascope/pull/1729
* refactor(cloud): Projects (@effect/sql-drizzle) by @willbakst in https://github.com/Mirascope/mirascope/pull/1730
* tests(cloud): use in-process test container rather than separate postgres (ensures fresh state) by @willbakst in https://github.com/Mirascope/mirascope/pull/1731
* fix(cloud): test warnings on testcontainer setup by @willbakst in https://github.com/Mirascope/mirascope/pull/1733
* refactor(cloud): remove old db services and replace with @effect/drizzle-sql implementation by @willbakst in https://github.com/Mirascope/mirascope/pull/1734
* fix(cloud): use fk violation to enforce org membership requirement on project membership creation by @willbakst in https://github.com/Mirascope/mirascope/pull/1735
* feat(cloud): db/Environments by @willbakst in https://github.com/Mirascope/mirascope/pull/1591
* feat(cloud): add environments API endpoints and tests by @willbakst in https://github.com/Mirascope/mirascope/pull/1592
* feat(cloud): add environments frontend (hooks, context, sidebar update) by @willbakst in https://github.com/Mirascope/mirascope/pull/1593
* feat(cloud): add API keys database schema, service, and tests by @willbakst in https://github.com/Mirascope/mirascope/pull/1594
* feat(cloud): add API keys API endpoints and tests by @willbakst in https://github.com/Mirascope/mirascope/pull/1595
* feat(cloud): add API key-based authentication for v0 routes by @willbakst in https://github.com/Mirascope/mirascope/pull/1596
* feat(cloud): add API keys frontend (hooks and settings section) by @willbakst in https://github.com/Mirascope/mirascope/pull/1597
* feat(cloud): add organization settings page with full management by @willbakst in https://github.com/Mirascope/mirascope/pull/1598
* feat(python-sdk): generate `api` for environments and API keys by @willbakst in https://github.com/Mirascope/mirascope/pull/1599
* fix(cloud): settings content using hooks outside of their context provider by @willbakst in https://github.com/Mirascope/mirascope/pull/1602
* fix(cloud): org deletion failure and api key copy button styling issues by @willbakst in https://github.com/Mirascope/mirascope/pull/1603
* chore: update AGENTS.md by @willbakst in https://github.com/Mirascope/mirascope/pull/1600
* tests(cloud): ignore lines in db client not worth covering by @willbakst in https://github.com/Mirascope/mirascope/pull/1738
* refactor(cloud): clean up auth/oauth.ts to improve readability and documentation by @willbakst in https://github.com/Mirascope/mirascope/pull/1739
* tests(cloud): auth 100% test coverage (ignoring oauth.ts which we test live) by @willbakst in https://github.com/Mirascope/mirascope/pull/1740
* refactor(cloud): remove Effect from naming now that @effect/sql-drizzle refactor is complete by @willbakst in https://github.com/Mirascope/mirascope/pull/1744
* fix(cloud): update makeReady to use Proxy instead of creating a new obj
### v2.0.0-alpha.4 (2025-12-19)

**Response usage tracking**

The `llm.Response` class now has a `.usage` property, which records provider token usage. When available, this is set on both regular responses and stream responses. The `llm.Usage` class includes input tokens, output tokens and, when available with the chosen provider, cache read/write tokens and reasoning tokens.
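A short sketch of reading the new property; the field names `input_tokens` and `output_tokens` follow the v2.0.0-alpha.5 notes above, while cache and reasoning token fields are provider-dependent:

```python
from mirascope import llm


@llm.call("anthropic/claude-sonnet-4-5")
def recommend_book(genre: str):
    return f"Recommend a {genre} book."


response = recommend_book("fantasy")
if response.usage:  # populated when the provider reports token usage
    # Cache read/write and reasoning token counts, when the provider
    # exposes them, live on the same llm.Usage object.
    print(response.usage.input_tokens, response.usage.output_tokens)
```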
**Anthropic strict structured outputs**

We now support strict structured response formatting with Anthropic, for supported models. Since this is still a beta feature on Anthropic's end, it will only be enabled if you manually specify strict mode using the `llm.format(formattable, mode="strict")` function. Otherwise, the Anthropic provider defaults to tool mode.

**Improved Google strict structured outputs**

Recent Google models improve support for using strict mode along with tool calling, so Mirascope now takes advantage of that and uses strict mode by default in more cases.

**Anthropic cache control**

We now automatically add cache control blocks to Anthropic prompts, bringing more consistency with other providers (which automatically configure caching).

**Added support: [Together.ai](http://together.ai/) and Ollama providers**

We now have builtin support for using [Together.ai](http://together.ai/) and Ollama as LLM providers. If you use a model_id starting with "ollama/", it will automatically invoke the `ollama` provider (defaulting to an API endpoint at `http://localhost:11434/v1/`). If you'd like to use Together.ai to run an Ollama model (or any other model supported by Together), you can use register_provider, as follows:

```py
llm.register_provider("ollama", "ollama/")      # e.g. for `ollama/gemma3:4b`
llm.register_provider("together", "together/")  # e.g. for `together/moonshotai/Kimi-K2-Thinking`
```

**What's Changed**
* fix(cloud): add path param aware where clause checks to base service methods by @willbakst in https://github.com/Mirascope/mirascope/pull/1674
* refactor(cloud): change ANNOTATOR role to VIEWER by @willbakst in https://github.com/Mirascope/mirascope/pull/1675
* refactor(cloud): add parent scoped where to findAll in base services + clean up and document base service file by @willbakst in https://github.com/Mirascope/mirascope/pull/1676
* docs(cloud): improve docs for organization service by @willbakst in https://github.com/Mirascope/mirascope/pull/1677
* feat(cloud): OrganizationMembershipService by @willbakst in https://github.com/Mirascope/mirascope/pull/1678
* refactor(cloud): update mocks to use test users in organization service tests where it makes sense by @willbakst in https://github.com/Mirascope/mirascope/pull/1660
* refactor(cloud): rename organization membership path to /members/:memberId by @willbakst in https://github.com/Mirascope/mirascope/pull/1679
* feat(cloud): add organization membership ACL change audit log by @willbakst in https://github.com/Mirascope/mirascope/pull/1691
* feat: use model_features framework to track anthropic model ids by @teamdandelion in https://github.com/Mirascope/mirascope/pull/1693
* feat: bump anthropic to 0.75.0 by @teamdandelion in https://github.com/Mirascope/mirascope/pull/1694
* feat: track strict structured output support for anthropic models by @teamdandelion in https://github.com/Mirascope/mirascope/pull/1695
* refactor: move ensure_additional_properties_false to base utils by @teamdandelion in https://github.com/Mirascope/mirascope/pull/1696
* chore: fixup mlx cassette tests by @teamdandelion in https://github.com/Mirascope/mirascope/pull/1705
* chore: include claude-sonnet-4-0 as a tracked anthropic model by @teamdandelion in https://github.com/Mirascope/mirascope/pull/1706
* chore(cloud): provide prettier default config plus ignore file to avoid vscode/cursor using their own defaults by @sourishkrout in https://github.com/Mirascope/mirascope/pull/1708
* fix(cloud): prettier formatting and exclusion of generated files by @sourishkrout in https://github.com/Mirascope/mirascope/pull/1722
* refactor(cloud): refactor UI code to prepare for incremental port of website features by @sourishkrout in https://github.com/Mirascope/mirascope/pull/1633
* feat(cloud): add shadcn-based ui design system by @sourishkrout in https://github.com/Mirascope/mirascope/pull/1692
* feat(refactor): disable builtin CSS validation to avoid tailwind v4 false positives and skip vendored ui components by @sourishkrout in https://github.com/Mirascope/mirascope/pull/1703
* feat(cloud): add barebones home page with highly local CSS styles by @sourishkrout in https://github.com/Mirascope/mirascope/pull/1704
* feat(cloud): add optimized images vite middleware in development by @sourishkrout in https://github.com/Mirascope/mirascope/pull/1720
* feat: introduce AnthropicBetaProvider for beta API support by @koxudaxi in https://github.com/Mirascope/mirascope/pull/1713
* refactor: reorg the AnthropicBetaProvider by @teamdandelion in https://github.com/Mirascope/m
## Use `llm.Model` as a context manager

You may now use an `llm.Model` as a context manager directly, in which case it will override the default model in your `Call`s:

```python
from mirascope import llm

@llm.call("anthropic/claude-4-5-sonnet")
def recommend_book(genre: str):
    return f"Recommend a {genre} book."

model = llm.Model("openai/gpt-5", temperature=0.42)
with model:
    response = recommend_book("fantasy")
```

## Pass `llm.Model` to `@llm.call`

You can now pass an `llm.Model` directly to the call decorator, in lieu of a model id. This allows you to define a model once with custom params and then pass it into all of your calls.
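A minimal sketch of what that looks like, reusing the `temperature` parameter from the context-manager example above (the full set of supported model params isn't shown here and is an assumption):

```python
from mirascope import llm

# Define the model once, with custom params...
model = llm.Model("openai/gpt-5", temperature=0.42)

# ...then pass it to the decorator in lieu of a model id string.
@llm.call(model)
def recommend_book(genre: str):
    return f"Recommend a {genre} book."

response = recommend_book("fantasy")
print(response.pretty())
```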
*Released 2025-12-13.*

# v2.0.0-alpha.2 (2025-11-15)

## What's Changed
* fix: resolve import error with python 3.13.9 by @teamdandelion in https://github.com/Mirascope/mirascope/pull/1497
* test: ensure tests are run against python 3.13 by @teamdandelion in https://github.com/Mirascope/mirascope/pull/1498
* fix: fix the website build by @teamdandelion in https://github.com/Mirascope/mirascope/pull/1501
* fix: add top level pyrightconfig that points to python/ by @teamdandelion in https://github.com/Mirascope/mirascope/pull/1499
* test: e2e override tests use google as default provider by @teamdandelion in https://github.com/Mirascope/mirascope/pull/1500

**Full Changelog**: https://github.com/Mirascope/mirascope/compare/v2.0.0-alpha.1...v2.0.0-alpha.2

# v1.25.7 (2025-11-08)

## What's Changed
* Fix thinking stream reconstruction for Anthropic by @willbakst in https://github.com/Mirascope/mirascope/pull/1495

**Full Changelog**: https://github.com/Mirascope/mirascope/compare/v1.25.6...v1.25.7

# v2.0.0-alpha.1 (2025-11-06)

## What's Changed
* docs: write alpha intro docs by @teamdandelion in https://github.com/Mirascope/mirascope/pull/1479
* fix: include pydantic and httpx as required deps by @teamdandelion in https://github.com/Mirascope/mirascope/pull/1490
* fix: suppress MCP import errors by @teamdandelion in https://github.com/Mirascope/mirascope/pull/1491
* fix: handle missing provider packages gracefully by @teamdandelion in https://github.com/Mirascope/mirascope/pull/1492
* fix: upgrade deprecated tool.uv.dev-dependencies usage by @teamdandelion in https://github.com/Mirascope/mirascope/pull/1493
* fix: bump version to 2.0.0-alpha.1, add install instructions by @teamdandelion in https://github.com/Mirascope/mirascope/pull/1494

**Full Changelog**: https://github.com/Mirascope/mirascope/compare/v2.0.0-alpha.0...v2.0.0-alpha.1

# v2.0.0-alpha.0 (2025-11-06)

## What's Changed
* Setup documentation site for v2 by @teamdandelion in https://github.com/Mirascope/mirascope/pull/991
* chore: cleanup v2 branch to only contain v2 code by @teamdandelion in https://github.com/Mirascope/mirascope/pull/992
* chore: remove Text content dataclass by @teamdandelion in https://github.com/Mirascope/mirascope/pull/1026
* V2 docs cloudflare by @willbakst in https://github.com/Mirascope/mirascope/pull/1028
* chore: update comment on robots.txt disallow for v2 by @willbakst in https://github.com/Mirascope/mirascope/pull/1032
* Add messages.mdx with examples by @teamdandelion in https://github.com/Mirascope/mirascope/pull/1033
* chore: demote shorthand msg constructors by @teamdandelion in https://github.com/Mirascope/mirascope/pull/1036
* v2 Prompt cleanup by @teamdandelion in https://github.com/Mirascope/mirascope/pull/1071
* feat: initial v2 typescript structure by @willbakst in https://github.com/Mirascope/mirascope/pull/1034
* v2: monorepo reorg by @teamdandelion in https://github.com/Mirascope/mirascope/pull/1078
* v2: fix docs build by @teamdandelion in https://github.com/Mirascope/mirascope/pull/1079
* Mirascope v2 learn/prompts.mdx by @teamdandelion in https://github.com/Mirascope/mirascope/pull/1077
* fix: improve typescript typechecking by @teamdandelion in https://github.com/Mirascope/mirascope/pull/1081
* v2: Set up consistent CI across the monorepo by @teamdandelion in https://github.com/Mirascope/mirascope/pull/1089
* v2: Setup lint-staged precommit hooks by @teamdandelion in https://github.com/Mirascope/mirascope/pull/1090
* TypeScript Prompt Templates Interface & Documentation by @willbakst in https://github.com/Mirascope/mirascope/pull/1092
* fix(v2): issue with response format interface by @willbakst in https://github.com/Mirascope/mirascope/pull/1093
* Add calls.mdx, iterate on Response api by @teamdandelion in https://github.com/Mirascope/mirascope/pull/1094
* Rewrite calls and prompt templates guides by @teamdandelion in https://github.com/Mirascope/mirascope/pull/1096
* V2 dev improvements by @teamdandelion in https://github.com/Mirascope/mirascope/pull/1097
* refactor: wrap all str content in a llm.Text by @teamdandelion in https://github.com/Mirascope/mirascope/pull/1103
* feat: document async calls in calls.mdx by @teamdandelion in https://github.com/Mirascope/mirascope/pull/1101
* refactor: Remove ResponseContent and ContextResponse/ContextStream by @teamdandelion in https://github.com/Mirascope/mirascope/pull/1104
* Stream & ContentChunk refactors by @teamdandelion in https://github.com/Mirascope/mirascope/pull/1105
* add streams.mdx by @teamdandelion in https://github.com/Mirascope/mirascope/pull/1100
* nit: fix various inconsistencies in the calls guide by @teamdandelion in https://github.com/Mirascope/mirascope/pull/1107
* v2: more consistent media representation & llm.ImageUrl by @teamdandelion in https://github.com/Mirascope/mirascope/pull/1108
* refactor: separate llm.Content and llm.Message types by @teamdandelion in https://github.com/Mirascope/mirascope/pull/1109
* v2: Agent takes context wrapping deps, not deps directly by @teamdandelion in https://github.com/Mirascope/mirascope/pull/1112
* v2: add response.to_tool, stream.to_tool, remove ContextTool by @teamdandelion in https://github.com/Mirascope/mirascope/pull/1111
* Merge of PRs 1113-1119 by @teamdandelion in https://github.com/Mirascope/mirascope/pull/1121
* Rework tools interface, add support for async tools by @teamdandelion in https://github.com/Mirascope/mirascope/pull/1122
* refactor: drop `llm.context` ctx manager, use `llm.Context` explicitly instead by @teamdandelion in https://github.com/Mirascope/mirascope/pull/1123
* refactor: make ctx optional for agents by @teamdandelion in https://github.com/Mirascope/mirascope/pull/1124
* refactor: unify BaseAgent/Agent/AsyncAgent into agent.py by @teamdandelion in https://github.com/Mirascope/mirascope/pull/1125
* chore: use `@llm.tool` not `@llm.tool()` by @teamdandelion in https://github.com/Mirascope/mirascope/pull/1126
* feat: add unified examples.mdx for LLM consumption by @teamdandelion in https://github.com/Mirascope/mirascope/pull/1127
* refactor: rename ResponseFormat -> Format by @teamdandelion in https://github.com/Mirascope/mirascope/pull/1128
* feat: exploratory response format interface, with examples by @teamdandelion in https://github.com/Mirascope/mirascope/pull/1129
* refactor: remove ctx from response/stream by @teamdandelion in https://github.com/Mirascope/mirascope/pull/1130
* nit: enforce usage of ruff format by @teamdandelion in https://github.com/Mirascope/mirascope/pull/1131
* feat: add jinja2-based templates for example generation by @teamdandelion in https://github.com/Mirascope/mirascope/pull/1132
* feat: redone agent interface + fresh examples by @teamdandelion in https://github.com/Mirascope/mirascope/pull/1133
* feat: update script for examples.mdx by @teamdandelion

# v1.25.6 (2025-08-14)

## What's Changed
* feat: add Chroma Cloud support by @kylediaz in https://github.com/Mirascope/mirascope/pull/1189
* fix: issue with mistral client when not set at a global level by @willbakst in https://github.com/Mirascope/mirascope/pull/1212
* fix: update fallback to use context so it can work with streaming by @willbakst in https://github.com/Mirascope/mirascope/pull/1211

## New Contributors
* @kylediaz made their first contribution in https://github.com/Mirascope/mirascope/pull/1189

**Full Changelog**: https://github.com/Mirascope/mirascope/compare/v1.25.5...v1.25.6
# v1.25.5 (2025-08-05)

## What's Changed
* Fix _load_media fails with HTTP 403 Forbidden due to missing User-Agent #1172 by @emilioramirez in https://github.com/Mirascope/mirascope/pull/1173

## New Contributors
* @emilioramirez made their first contribution in https://github.com/Mirascope/mirascope/pull/1173

**Full Changelog**: https://github.com/Mirascope/mirascope/compare/v1.25.4...v1.25.5