[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-jackmpcollins--magentic":3,"tool-jackmpcollins--magentic":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",153609,2,"2026-04-13T11:34:59",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108322,"2026-04-10T11:39:34",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 
人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[52,13,15,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":32,"last_commit_at":59,"category_tags":60,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":77,"owner_email":78,"owner_twitter":72,"owner_website":79,"owner_url":80,"languages":81,"stars":90,"forks":91,"last_commit_at":92,"license":93,"difficulty_score":32,"env_os":94,"env_gpu":95,"env_ram":95,"env_deps":96,"category_tags":100,"github_topics":101,"view_count":32,"oss_zip_url":78,"oss_zip_packed_at":78,"status":17,"created_at":114,"updated_at":115,"faqs":116,"releases":141},7155,"jackmpcollins\u002Fmagentic","magentic","Seamlessly integrate LLMs as Python functions","magentic 是一款让开发者能够像调用普通 Python 函数一样无缝集成大语言模型（LLM）的开源库。它主要解决了传统 LLM 开发中提示词管理混乱、输出结果难以结构化以及工具调用流程复杂等痛点，帮助开发者将非确定的模型生成内容转化为可控的代码逻辑。\n\n该工具特别适合 Python 开发者及 AI 应用构建者使用。其核心亮点在于引入了 `@prompt` 和 `@chatprompt` 装饰器，用户只需定义函数签名和返回类型（支持 Pydantic 模型），magentic 即可自动处理提示词模板填充、模型调用及结果解析，直接返回结构化的 Python 对象，无需手动编写繁琐的解析代码。此外，magentic 原生支持流式输出、基于大模型的自动重试机制以提升格式遵循度，并集成了 OpenTelemetry 可观测性方案。它还兼容 OpenAI、Anthropic、Ollama 等多种主流模型提供商，支持异步编程、并行函数调用及多模态输入。通过 magentic，开发者可以高效地组合传统代码与 AI 能力，快速构建复杂的智能体系统。","# magentic\n\nSeamlessly integrate Large Language Models into Python code. Use the `@prompt` and `@chatprompt` decorators to create functions that return structured output from an LLM. Combine LLM queries and tool use with traditional Python code to build complex agentic systems.\n\n## Features\n\n- [Structured Outputs] using pydantic models and built-in python types.\n- [Streaming] of structured outputs and function calls, to use them while being generated.\n- [LLM-Assisted Retries] to improve LLM adherence to complex output schemas.\n- [Observability] using OpenTelemetry, with native [Pydantic Logfire integration].\n- [Type Annotations] to work nicely with linters and IDEs.\n- [Configuration] options for multiple LLM providers including OpenAI, Anthropic, and Ollama.\n- Many more features: [Chat Prompting], [Parallel Function Calling], [Vision], [Formatting], [Asyncio]...\n\n## Installation\n\n```sh\npip install magentic\n```\n\nor using uv\n\n```sh\nuv add magentic\n```\n\nConfigure your OpenAI API key by setting the `OPENAI_API_KEY` environment variable. To configure a different LLM provider see [Configuration] for more.\n\n## Usage\n\n### @prompt\n\nThe `@prompt` decorator allows you to define a template for a Large Language Model (LLM) prompt as a Python function. 
When this function is called, the arguments are inserted into the template, then this prompt is sent to an LLM which generates the function output.\n\n```python\nfrom magentic import prompt\n\n\n@prompt('Add more \"dude\"ness to: {phrase}')\ndef dudeify(phrase: str) -> str: ...  # No function body as this is never executed\n\n\ndudeify(\"Hello, how are you?\")\n# \"Hey, dude! What's up? How's it going, my man?\"\n```\n\nThe `@prompt` decorator will respect the return type annotation of the decorated function. This can be [any type supported by pydantic](https:\u002F\u002Fdocs.pydantic.dev\u002Flatest\u002Fusage\u002Ftypes\u002Ftypes\u002F) including a `pydantic` model.\n\n```python\nfrom magentic import prompt\nfrom pydantic import BaseModel\n\n\nclass Superhero(BaseModel):\n    name: str\n    age: int\n    power: str\n    enemies: list[str]\n\n\n@prompt(\"Create a Superhero named {name}.\")\ndef create_superhero(name: str) -> Superhero: ...\n\n\ncreate_superhero(\"Garden Man\")\n# Superhero(name='Garden Man', age=30, power='Control over plants', enemies=['Pollution Man', 'Concrete Woman'])\n```\n\nSee [Structured Outputs] for more.\n\n### @chatprompt\n\nThe `@chatprompt` decorator works just like `@prompt` but allows you to pass chat messages as a template rather than a single text prompt. This can be used to provide a system message or for few-shot prompting where you provide example responses to guide the model's output. Format fields denoted by curly braces `{example}` will be filled in all messages (except `FunctionResultMessage`).\n\n```python\nfrom magentic import chatprompt, AssistantMessage, SystemMessage, UserMessage\nfrom pydantic import BaseModel\n\n\nclass Quote(BaseModel):\n    quote: str\n    character: str\n\n\n@chatprompt(\n    SystemMessage(\"You are a movie buff.\"),\n    UserMessage(\"What is your favorite quote from Harry Potter?\"),\n    AssistantMessage(\n        Quote(\n            quote=\"It does not do to dwell on dreams and forget to live.\",\n            character=\"Albus Dumbledore\",\n        )\n    ),\n    UserMessage(\"What is your favorite quote from {movie}?\"),\n)\ndef get_movie_quote(movie: str) -> Quote: ...\n\n\nget_movie_quote(\"Iron Man\")\n# Quote(quote='I am Iron Man.', character='Tony Stark')\n```\n\nSee [Chat Prompting] for more.\n\n### FunctionCall\n\nAn LLM can also decide to call functions. 
In this case the `@prompt`-decorated function returns a `FunctionCall` object which can be called to execute the function using the arguments provided by the LLM.\n\n```python\nfrom typing import Literal\n\nfrom magentic import prompt, FunctionCall\n\n\ndef search_twitter(query: str, category: Literal[\"latest\", \"people\"]) -> str:\n    \"\"\"Searches Twitter for a query.\"\"\"\n    print(f\"Searching Twitter for {query!r} in category {category!r}\")\n    return \"\u003Ctwitter results>\"\n\n\ndef search_youtube(query: str, channel: str = \"all\") -> str:\n    \"\"\"Searches YouTube for a query.\"\"\"\n    print(f\"Searching YouTube for {query!r} in channel {channel!r}\")\n    return \"\u003Cyoutube results>\"\n\n\n@prompt(\n    \"Use the appropriate search function to answer: {question}\",\n    functions=[search_twitter, search_youtube],\n)\ndef perform_search(question: str) -> FunctionCall[str]: ...\n\n\noutput = perform_search(\"What is the latest news on LLMs?\")\nprint(output)\n# > FunctionCall(\u003Cfunction search_twitter at 0x10c367d00>, 'LLMs', 'latest')\noutput()\n# > Searching Twitter for 'LLMs' in category 'latest'\n# '\u003Ctwitter results>'\n```\n\nSee [Function Calling] for more.\n\n### @prompt_chain\n\nSometimes the LLM requires making one or more function calls to generate a final answer. The `@prompt_chain` decorator will resolve `FunctionCall` objects automatically and pass the output back to the LLM to continue until the final answer is reached.\n\nIn the following example, when `describe_weather` is called the LLM first calls the `get_current_weather` function, then uses the result of this to formulate its final answer which gets returned.\n\n```python\nfrom magentic import prompt_chain\n\n\ndef get_current_weather(location, unit=\"fahrenheit\"):\n    \"\"\"Get the current weather in a given location\"\"\"\n    # Pretend to query an API\n    return {\"temperature\": \"72\", \"forecast\": [\"sunny\", \"windy\"]}\n\n\n@prompt_chain(\n    \"What's the weather like in {city}?\",\n    functions=[get_current_weather],\n)\ndef describe_weather(city: str) -> str: ...\n\n\ndescribe_weather(\"Boston\")\n# 'The current weather in Boston is 72°F and it is sunny and windy.'\n```\n\nLLM-powered functions created using `@prompt`, `@chatprompt` and `@prompt_chain` can be supplied as `functions` to other `@prompt`\u002F`@prompt_chain` decorators, just like regular Python functions. This enables increasingly complex LLM-powered functionality, while allowing individual components to be tested and improved in isolation.
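\n\nFor example, one prompt-function can be supplied as a tool to another. A minimal sketch, assuming the default OpenAI backend is configured (`summarize` and `answer_question` are illustrative names, not part of the library):\n\n```python\nfrom magentic import prompt, prompt_chain\n\n\n@prompt(\"Summarize this text in one sentence: {text}\")\ndef summarize(text: str) -> str: ...\n\n\n# The LLM may call `summarize` on any long source text it is given,\n# and `@prompt_chain` feeds the result back to produce the final answer\n@prompt_chain(\n    \"Answer this question, summarizing long passages as needed: {question}\",\n    functions=[summarize],\n)\ndef answer_question(question: str) -> str: ...\n```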
\n\n\u003C!-- Links -->\n\n[Structured Outputs]: https:\u002F\u002Fmagentic.dev\u002Fstructured-outputs\n[Chat Prompting]: https:\u002F\u002Fmagentic.dev\u002Fchat-prompting\n[Function Calling]: https:\u002F\u002Fmagentic.dev\u002Ffunction-calling\n[Parallel Function Calling]: https:\u002F\u002Fmagentic.dev\u002Ffunction-calling\u002F#parallelfunctioncall\n[Observability]: https:\u002F\u002Fmagentic.dev\u002Flogging-and-tracing\n[Pydantic Logfire integration]: https:\u002F\u002Flogfire.pydantic.dev\u002Fdocs\u002Fintegrations\u002Fthird-party\u002Fmagentic\u002F\n[Formatting]: https:\u002F\u002Fmagentic.dev\u002Fformatting\n[Asyncio]: https:\u002F\u002Fmagentic.dev\u002Fasyncio\n[Streaming]: https:\u002F\u002Fmagentic.dev\u002Fstreaming\n[Vision]: https:\u002F\u002Fmagentic.dev\u002Fvision\n[LLM-Assisted Retries]: https:\u002F\u002Fmagentic.dev\u002Fretrying.md\n[Configuration]: https:\u002F\u002Fmagentic.dev\u002Fconfiguration\n[Type Annotations]: https:\u002F\u002Fmagentic.dev\u002Ftype-checking\n\n\n### Streaming\n\nThe `StreamedStr` (and `AsyncStreamedStr`) class can be used to stream the output of the LLM. This allows you to process the text while it is being generated, rather than receiving the whole output at once.\n\n```python\nfrom magentic import prompt, StreamedStr\n\n\n@prompt(\"Tell me about {country}\")\ndef describe_country(country: str) -> StreamedStr: ...\n\n\n# Print the chunks while they are being received\nfor chunk in describe_country(\"Brazil\"):\n    print(chunk, end=\"\")\n# 'Brazil, officially known as the Federative Republic of Brazil, is ...'\n```\n\nMultiple `StreamedStr` can be created at the same time to stream LLM outputs concurrently. In the below example, generating the description for multiple countries takes approximately the same amount of time as for a single country.\n\n```python\nfrom time import time\n\ncountries = [\"Australia\", \"Brazil\", \"Chile\"]\n\n\n# Generate the descriptions one at a time\nstart_time = time()\nfor country in countries:\n    # Converting `StreamedStr` to `str` blocks until the LLM output is fully generated\n    description = str(describe_country(country))\n    print(f\"{time() - start_time:.2f}s : {country} - {len(description)} chars\")\n\n# 22.72s : Australia - 2130 chars\n# 41.63s : Brazil - 1884 chars\n# 74.31s : Chile - 2968 chars\n\n\n# Generate the descriptions concurrently by creating the StreamedStrs at the same time\nstart_time = time()\nstreamed_strs = [describe_country(country) for country in countries]\nfor country, streamed_str in zip(countries, streamed_strs):\n    description = str(streamed_str)\n    print(f\"{time() - start_time:.2f}s : {country} - {len(description)} chars\")\n\n# 22.79s : Australia - 2147 chars\n# 23.64s : Brazil - 2202 chars\n# 24.67s : Chile - 2186 chars\n```\n\n### Object Streaming\n\nStructured outputs can also be streamed from the LLM by using the return type annotation `Iterable` (or `AsyncIterable`). 
This allows each item to be processed while the next one is being generated.\n\n```python\nfrom collections.abc import Iterable\nfrom time import time\n\nfrom magentic import prompt\nfrom pydantic import BaseModel\n\n\nclass Superhero(BaseModel):\n    name: str\n    age: int\n    power: str\n    enemies: list[str]\n\n\n@prompt(\"Create a Superhero team named {name}.\")\ndef create_superhero_team(name: str) -> Iterable[Superhero]: ...\n\n\nstart_time = time()\nfor hero in create_superhero_team(\"The Food Dudes\"):\n    print(f\"{time() - start_time:.2f}s : {hero}\")\n\n# 2.23s : name='Pizza Man' age=30 power='Can shoot pizza slices from his hands' enemies=['The Hungry Horde', 'The Junk Food Gang']\n# 4.03s : name='Captain Carrot' age=35 power='Super strength and agility from eating carrots' enemies=['The Sugar Squad', 'The Greasy Gang']\n# 6.05s : name='Ice Cream Girl' age=25 power='Can create ice cream out of thin air' enemies=['The Hot Sauce Squad', 'The Healthy Eaters']\n```\n\nSee [Streaming] for more.\n\n### Asyncio\n\nAsynchronous functions \u002F coroutines can be used to concurrently query the LLM. This can greatly increase the overall speed of generation, and also allow other asynchronous code to run while waiting on LLM output. In the below example, the LLM generates a description for each US president while it is waiting on the next one in the list. Measuring the characters generated per second shows that this example achieves a 7x speedup over serial processing.\n\n```python\nimport asyncio\nfrom time import time\nfrom typing import AsyncIterable\n\nfrom magentic import prompt\n\n\n@prompt(\"List ten presidents of the United States\")\nasync def iter_presidents() -> AsyncIterable[str]: ...\n\n\n@prompt(\"Tell me more about {topic}\")\nasync def tell_me_more_about(topic: str) -> str: ...\n\n\n# For each president listed, generate a description concurrently\nstart_time = time()\ntasks = []\nasync for president in await iter_presidents():\n    # Use asyncio.create_task to schedule the coroutine for execution before awaiting it\n    # This way descriptions will start being generated while the list of presidents is still being generated\n    task = asyncio.create_task(tell_me_more_about(president))\n    tasks.append(task)\n\ndescriptions = await asyncio.gather(*tasks)\n\n# Measure the characters per second\ntotal_chars = sum(len(desc) for desc in descriptions)\ntime_elapsed = time() - start_time\nprint(total_chars, time_elapsed, total_chars \u002F time_elapsed)\n# 24575 28.70 856.07\n\n\n# Measure the characters per second to describe a single president\nstart_time = time()\nout = await tell_me_more_about(\"George Washington\")\ntime_elapsed = time() - start_time\nprint(len(out), time_elapsed, len(out) \u002F time_elapsed)\n# 2206 18.72 117.78\n```\n\nSee [Asyncio] for more.\n\n### Additional Features\n\n- The `functions` argument to `@prompt` can contain async\u002Fcoroutine functions. When the corresponding `FunctionCall` objects are called the result must be awaited.\n- The `Annotated` type annotation can be used to provide descriptions and other metadata for function parameters. See [the pydantic documentation on using `Field` to describe function arguments](https:\u002F\u002Fdocs.pydantic.dev\u002Flatest\u002Fusage\u002Fvalidation_decorator\u002F#using-field-to-describe-function-arguments). A short sketch follows this list.\n- The `@prompt` and `@prompt_chain` decorators also accept a `model` argument. You can pass an instance of `OpenaiChatModel` to use GPT-4 or configure a different temperature. See below.\n- Register other types to use as return type annotations in `@prompt` functions by following [the example notebook for a Pandas DataFrame](examples\u002Fcustom_function_schemas\u002Fregister_dataframe_function_schema.ipynb).
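\n\nA minimal sketch of the `Annotated` usage from the list above, assuming the default OpenAI backend (`get_forecast` and its parameter descriptions are illustrative, not part of the library):\n\n```python\nfrom typing import Annotated\n\nfrom pydantic import Field\n\nfrom magentic import prompt, FunctionCall\n\n\ndef get_forecast(\n    city: Annotated[str, Field(description=\"Name of the city, e.g. Dublin\")],\n    days: Annotated[int, Field(description=\"Days ahead to forecast\", ge=1, le=7)] = 1,\n) -> str:\n    \"\"\"Get the weather forecast for a city.\"\"\"\n    # The Field descriptions become part of the function schema shown to the LLM\n    return f\"Forecast for {city} for the next {days} day(s): sunny\"\n\n\n@prompt(\n    \"Use the forecast tool to answer: {question}\",\n    functions=[get_forecast],\n)\ndef answer_weather_question(question: str) -> FunctionCall[str]: ...\n```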
\n\n## Backend\u002FLLM Configuration\n\nMagentic supports multiple LLM providers or \"backends\". This roughly refers to which Python package is used to interact with the LLM API. The following backends are supported.\n\n### OpenAI\n\nThe default backend. It uses the `openai` Python package and supports all features of magentic.\n\nNo additional installation is required. Just import the `OpenaiChatModel` class from `magentic`.\n\n```python\nfrom magentic import OpenaiChatModel\n\nmodel = OpenaiChatModel(\"gpt-4o\")\n```\n\n#### Ollama via OpenAI\n\nOllama supports an OpenAI-compatible API, which allows you to use Ollama models via the OpenAI backend.\n\nFirst, install Ollama from [ollama.com](https:\u002F\u002Follama.com\u002F). Then, pull the model you want to use.\n\n```sh\nollama pull llama3.2\n```\n\nThen, specify the model name and `base_url` when creating the `OpenaiChatModel` instance.\n\n```python\nfrom magentic import OpenaiChatModel\n\nmodel = OpenaiChatModel(\"llama3.2\", base_url=\"http:\u002F\u002Flocalhost:11434\u002Fv1\u002F\")\n```\n\n#### Other OpenAI-compatible APIs\n\nWhen using the `openai` backend, setting the `MAGENTIC_OPENAI_BASE_URL` environment variable or using `OpenaiChatModel(..., base_url=\"http:\u002F\u002Flocalhost:8080\")` in code allows you to use `magentic` with any OpenAI-compatible API e.g. [Azure OpenAI Service](https:\u002F\u002Flearn.microsoft.com\u002Fen-us\u002Fazure\u002Fai-services\u002Fopenai\u002Fquickstart?tabs=command-line&pivots=programming-language-python#create-a-new-python-application), [LiteLLM OpenAI Proxy Server](https:\u002F\u002Fdocs.litellm.ai\u002Fdocs\u002Fproxy_server), [LocalAI](https:\u002F\u002Flocalai.io\u002Fhowtos\u002Feasy-request-openai\u002F). Note that if the API does not support tool calls then you will not be able to create prompt-functions that return Python objects, but other features of `magentic` will still work.\n\nTo use Azure with the openai backend you will need to set the `MAGENTIC_OPENAI_API_TYPE` environment variable to \"azure\" or use `OpenaiChatModel(..., api_type=\"azure\")`, and also set the environment variables needed by the openai package to access Azure. See https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python#microsoft-azure-openai\n\n### Anthropic\n\nThis uses the `anthropic` Python package and supports all features of magentic.\n\nInstall the `magentic` package with the `anthropic` extra, or install the `anthropic` package directly.\n\n```sh\npip install \"magentic[anthropic]\"\n```\n\nThen import the `AnthropicChatModel` class.\n\n```python\nfrom magentic.chat_model.anthropic_chat_model import AnthropicChatModel\n\nmodel = AnthropicChatModel(\"claude-3-5-sonnet-latest\")\n```
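\n\nThe model can then be passed to a prompt-function through the decorator's `model` argument, as demonstrated under Configure a Backend below; a minimal sketch:\n\n```python\nfrom magentic import prompt\nfrom magentic.chat_model.anthropic_chat_model import AnthropicChatModel\n\n\n@prompt(\n    \"Summarize in one line: {text}\",\n    model=AnthropicChatModel(\"claude-3-5-sonnet-latest\"),\n)\ndef summarize(text: str) -> str: ...\n```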
\n\n### LiteLLM\n\nThis uses the `litellm` Python package to enable querying LLMs from [many different providers](https:\u002F\u002Fdocs.litellm.ai\u002Fdocs\u002Fproviders). Note: some models may not support all features of `magentic` e.g. function calling\u002Fstructured output and streaming.\n\nInstall the `magentic` package with the `litellm` extra, or install the `litellm` package directly.\n\n```sh\npip install \"magentic[litellm]\"\n```\n\nThen import the `LitellmChatModel` class.\n\n```python\nfrom magentic.chat_model.litellm_chat_model import LitellmChatModel\n\nmodel = LitellmChatModel(\"gpt-4o\")\n```\n\n### Mistral\n\nThis uses the `openai` Python package with some small modifications to make the API queries compatible with the Mistral API. It supports all features of magentic. However, tool calls (including structured outputs) are not streamed, so they are received all at once.\n\nNote: a future version of magentic might switch to using the `mistral` Python package.\n\nNo additional installation is required. Just import the `MistralChatModel` class.\n\n```python\nfrom magentic.chat_model.mistral_chat_model import MistralChatModel\n\nmodel = MistralChatModel(\"mistral-large-latest\")\n```\n\n## Configure a Backend\n\nThe default `ChatModel` used by `magentic` (in `@prompt`, `@chatprompt`, etc.) can be configured in several ways. When a prompt-function or chatprompt-function is called, the `ChatModel` to use follows this order of preference:\n\n1. The `ChatModel` instance provided as the `model` argument to the magentic decorator\n1. The current chat model context, created using `with MyChatModel:`\n1. The global `ChatModel` created from environment variables and the default settings in [src\u002Fmagentic\u002Fsettings.py](https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fblob\u002Fmain\u002Fsrc\u002Fmagentic\u002Fsettings.py)\n\nThe following code snippet demonstrates this behavior:\n\n```python\nfrom magentic import OpenaiChatModel, prompt\nfrom magentic.chat_model.anthropic_chat_model import AnthropicChatModel\n\n\n@prompt(\"Say hello\")\ndef say_hello() -> str: ...\n\n\n@prompt(\n    \"Say hello\",\n    model=AnthropicChatModel(\"claude-3-5-sonnet-latest\"),\n)\ndef say_hello_anthropic() -> str: ...\n\n\nsay_hello()  # Uses env vars or default settings\n\nwith OpenaiChatModel(\"gpt-4o-mini\", temperature=1):\n    say_hello()  # Uses openai with gpt-4o-mini and temperature=1 due to context manager\n    say_hello_anthropic()  # Uses Anthropic claude-3-5-sonnet-latest because explicitly configured\n```\n\nThe following environment variables can be set.\n\n| Environment Variable           | Description                              | Example                      |\n| ------------------------------ | ---------------------------------------- | ---------------------------- |\n| MAGENTIC_BACKEND               | The package to use as the LLM backend    | anthropic \u002F openai \u002F litellm |\n| MAGENTIC_ANTHROPIC_MODEL       | Anthropic model                          | claude-3-haiku-20240307      |\n| MAGENTIC_ANTHROPIC_API_KEY     | Anthropic API key to be used by magentic | sk-...                       |
\n| MAGENTIC_ANTHROPIC_BASE_URL    | Base URL for an Anthropic-compatible API | http:\u002F\u002Flocalhost:8080        |\n| MAGENTIC_ANTHROPIC_MAX_TOKENS  | Max number of generated tokens           | 1024                         |\n| MAGENTIC_ANTHROPIC_TEMPERATURE | Temperature                              | 0.5                          |\n| MAGENTIC_LITELLM_MODEL         | LiteLLM model                            | claude-2                     |\n| MAGENTIC_LITELLM_API_BASE      | The base URL to query                    | http:\u002F\u002Flocalhost:11434       |\n| MAGENTIC_LITELLM_MAX_TOKENS    | LiteLLM max number of generated tokens   | 1024                         |\n| MAGENTIC_LITELLM_TEMPERATURE   | LiteLLM temperature                      | 0.5                          |\n| MAGENTIC_MISTRAL_MODEL         | Mistral model                            | mistral-large-latest         |\n| MAGENTIC_MISTRAL_API_KEY       | Mistral API key to be used by magentic   | XEG...                       |\n| MAGENTIC_MISTRAL_BASE_URL      | Base URL for a Mistral-compatible API    | http:\u002F\u002Flocalhost:8080        |\n| MAGENTIC_MISTRAL_MAX_TOKENS    | Max number of generated tokens           | 1024                         |\n| MAGENTIC_MISTRAL_SEED          | Seed for deterministic sampling          | 42                           |\n| MAGENTIC_MISTRAL_TEMPERATURE   | Temperature                              | 0.5                          |\n| MAGENTIC_OPENAI_MODEL          | OpenAI model                             | gpt-4                        |\n| MAGENTIC_OPENAI_API_KEY        | OpenAI API key to be used by magentic    | sk-...                       |\n| MAGENTIC_OPENAI_API_TYPE       | Allowed options: \"openai\", \"azure\"       | azure                        |\n| MAGENTIC_OPENAI_BASE_URL       | Base URL for an OpenAI-compatible API    | http:\u002F\u002Flocalhost:8080        |\n| MAGENTIC_OPENAI_MAX_TOKENS     | OpenAI max number of generated tokens    | 1024                         |\n| MAGENTIC_OPENAI_SEED           | Seed for deterministic sampling          | 42                           |\n| MAGENTIC_OPENAI_TEMPERATURE    | OpenAI temperature                       | 0.5                          |\n\n## Type Checking\n\nMany type checkers will raise warnings or errors for functions with the `@prompt` decorator due to the function having no body or return value. There are several ways to deal with these.\n\n1. Disable the check globally for the type checker. For example in mypy by disabling error code `empty-body`.\n   ```toml\n   # pyproject.toml\n   [tool.mypy]\n   disable_error_code = [\"empty-body\"]\n   ```\n1. Make the function body `...` (this does not satisfy mypy) or `raise`.\n   ```python\n   @prompt(\"Choose a color\")\n   def random_color() -> str: ...\n   ```\n1. Use comment `# type: ignore[empty-body]` on each function. 
In this case you can add a docstring instead of `...`.\n   ```python\n   @prompt(\"Choose a color\")\n   def random_color() -> str:  # type: ignore[empty-body]\n       \"\"\"Returns a random color.\"\"\"\n   ```\n","# magentic\n\n将大型语言模型无缝集成到 Python 代码中。使用 `@prompt` 和 `@chatprompt` 装饰器创建从 LLM 返回结构化输出的函数。将 LLM 查询和工具使用与传统 Python 代码相结合，构建复杂的智能体系统。\n\n## 特性\n\n- 使用 Pydantic 模型和内置 Python 类型实现【结构化输出】。\n- 支持结构化输出和函数调用的【流式传输】，以便在生成过程中实时使用。\n- 【LLM 辅助重试】功能，以提高 LLM 对复杂输出模式的遵循度。\n- 基于 OpenTelemetry 的【可观测性】，并原生集成【Pydantic Logfire】。\n- 【类型注解】，便于与 linter 和 IDE 良好配合。\n- 针对多个 LLM 提供商（包括 OpenAI、Anthropic 和 Ollama）的【配置】选项。\n- 还有更多特性：【聊天提示】、【并行函数调用】、【视觉处理】、【格式化】、【异步支持】等。\n\n## 安装\n\n```sh\npip install magentic\n```\n\n或使用 uv：\n\n```sh\nuv add magentic\n```\n\n通过设置 `OPENAI_API_KEY` 环境变量来配置您的 OpenAI API 密钥。如需配置其他 LLM 提供商，请参阅【配置】部分以获取更多信息。\n\n## 使用方法\n\n### @prompt\n\n`@prompt` 装饰器允许您将大型语言模型 (LLM) 提示模板定义为一个 Python 函数。当此函数被调用时，参数会被插入到模板中，然后该提示会被发送到 LLM，由 LLM 生成函数的输出。\n\n```python\nfrom magentic import prompt\n\n\n@prompt('给以下句子增加“兄弟感”：{phrase}')\ndef dudeify(phrase: str) -> str: ...  # 无需函数体，因为该函数不会被执行\n\n\ndudeify(\"你好，最近怎么样？\")\n# \"嘿，兄弟！咋样啊？老铁，过得咋样？\"\n```\n\n`@prompt` 装饰器会尊重被装饰函数的返回值类型注解。这可以是 [Pydantic 支持的任何类型](https:\u002F\u002Fdocs.pydantic.dev\u002Flatest\u002Fusage\u002Ftypes\u002Ftypes\u002F)，包括一个 `pydantic` 模型。\n\n```python\nfrom magentic import prompt\nfrom pydantic import BaseModel\n\n\nclass Superhero(BaseModel):\n    name: str\n    age: int\n    power: str\n    enemies: list[str]\n\n\n@prompt(\"创建一位名为 {name} 的超级英雄。\")\ndef create_superhero(name: str) -> Superhero: ...\n\n\ncreate_superhero(\"花园侠\")\n# Superhero(name='花园侠', age=30, power='掌控植物', enemies=['污染侠', '水泥女'])\n```\n\n更多内容请参见【结构化输出】。\n\n### @chatprompt\n\n`@chatprompt` 装饰器的工作方式与 `@prompt` 类似，但它允许您将聊天消息作为模板传递，而不是单个文本提示。这可用于提供系统消息，或进行少样本提示，其中您提供示例响应来引导模型的输出。用大括号 `{example}` 标记的格式字段将在所有消息中填充（除了 `FunctionResultMessage`）。\n\n```python\nfrom magentic import chatprompt, AssistantMessage, SystemMessage, UserMessage\nfrom pydantic import BaseModel\n\n\nclass Quote(BaseModel):\n    quote: str\n    character: str\n\n\n@chatprompt(\n    SystemMessage(\"你是个电影迷。\"),\n    UserMessage(\"你最喜欢的《哈利·波特》台词是什么？\"),\n    AssistantMessage(\n        Quote(\n            quote=\"沉溺于梦想而忘记生活是没有意义的。\",\n            character=\"阿不思·邓布利多\",\n        )\n    ),\n    UserMessage(\"你最喜欢的 {movie} 台词是什么？\"),\n)\ndef get_movie_quote(movie: str) -> Quote: ...\n\n\nget_movie_quote(\"钢铁侠\")\n# Quote(quote='我是钢铁侠。', character='托尼·斯塔克')\n```\n\n更多内容请参见【聊天提示】。\n\n### FunctionCall\n\nLLM 也可以决定调用函数。在这种情况下，使用 `@prompt` 装饰器的函数会返回一个 `FunctionCall` 对象，该对象可以被调用以使用 LLM 提供的参数执行函数。\n\n```python\nfrom typing import Literal\n\nfrom magentic import prompt, FunctionCall\n\n\ndef search_twitter(query: str, category: Literal[\"latest\", \"people\"]) -> str:\n    \"\"\"搜索 Twitter 上的查询内容。\"\"\"\n    print(f\"正在 Twitter 上搜索 {query!r}，类别为 {category!r}\")\n    return \"\u003Ctwitter 搜索结果>\"\n\n\ndef search_youtube(query: str, channel: str = \"all\") -> str:\n    \"\"\"搜索 YouTube 上的查询内容。\"\"\"\n    print(f\"正在 YouTube 上搜索 {query!r}，频道为 {channel!r}\")\n    return \"\u003Cyoutube 搜索结果>\"\n\n\n@prompt(\n    \"使用适当的搜索函数回答：{question}\",\n    functions=[search_twitter, search_youtube],\n)\ndef perform_search(question: str) -> FunctionCall[str]: ...\n\n\noutput = perform_search(\"关于 LLM 的最新消息是什么？\")\nprint(output)\n# > FunctionCall(\u003Cfunction search_twitter at 0x10c367d00>, 'LLMs', 'latest')\noutput()\n# > 正在 Twitter 上搜索 'Large Language Models news'，类别为 'latest'\n# '\u003Ctwitter 
搜索结果>'\n```\n\n更多内容请参见【函数调用】。\n\n### @prompt_chain\n\n有时，LLM 需要调用一个或多个函数才能生成最终答案。`@prompt_chain` 装饰器会自动解析 `FunctionCall` 对象，并将输出传递回 LLM，直到获得最终答案为止。\n\n在下面的例子中，当调用 `describe_weather` 时，LLM 首先调用 `get_current_weather` 函数，然后利用其结果形成最终答案并返回。\n\n```python\nfrom magentic import prompt_chain\n\n\ndef get_current_weather(location, unit=\"fahrenheit\"):\n    \"\"\"获取指定地点的当前天气情况\"\"\"\n    # 模拟查询 API\n    return {\"temperature\": \"72\", \"forecast\": [\"晴朗\", \"多风\"]}\n\n\n@prompt_chain(\n    \"波士顿的天气如何？\",\n    functions=[get_current_weather],\n)\ndef describe_weather(city: str) -> str: ...\n\n\ndescribe_weather(\"波士顿\")\n# '波士顿目前的天气是 72°F，晴朗且多风。'\n```\n\n使用 `@prompt`、`@chatprompt` 和 `@prompt_chain` 创建的 LLM 驱动函数，可以像普通 Python 函数一样，作为 `functions` 参数传递给其他 `@prompt`\u002F`@prompt_chain` 装饰器。这使得 LLM 驱动的功能越来越复杂，同时允许单独测试和改进各个组件。\n\n\u003C!-- 链接 -->\n\n[结构化输出]: https:\u002F\u002Fmagentic.dev\u002Fstructured-outputs\n[聊天提示]: https:\u002F\u002Fmagentic.dev\u002Fchat-prompting\n[函数调用]: https:\u002F\u002Fmagentic.dev\u002Ffunction-calling\n[并行函数调用]: https:\u002F\u002Fmagentic.dev\u002Ffunction-calling\u002F#parallelfunctioncall\n[可观测性]: https:\u002F\u002Fmagentic.dev\u002Flogging-and-tracing\n[Pydantic Logfire 集成]: https:\u002F\u002Flogfire.pydantic.dev\u002Fdocs\u002Fintegrations\u002Fthird-party\u002Fmagentic\u002F\n[格式化]: https:\u002F\u002Fmagentic.dev\u002Fformatting\n[异步支持]: https:\u002F\u002Fmagentic.dev\u002Fasyncio\n[流式传输]: https:\u002F\u002Fmagentic.dev\u002Fstreaming\n[视觉处理]: https:\u002F\u002Fmagentic.dev\u002Fvision\n[LLM 辅助重试]: https:\u002F\u002Fmagentic.dev\u002Fretrying.md\n[配置]: https:\u002F\u002Fmagentic.dev\u002Fconfiguration\n[类型注解]: https:\u002F\u002Fmagentic.dev\u002Ftype-checking\n\n### 流式处理\n\n`StreamedStr`（和 `AsyncStreamedStr`）类可用于流式传输 LLM 的输出。这使您能够在文本生成过程中对其进行处理，而不是一次性接收整个输出。\n\n```python\nfrom magentic import prompt, StreamedStr\n\n\n@prompt(\"告诉我关于 {country} 的事情\")\ndef describe_country(country: str) -> StreamedStr: ...\n\n\n# 在接收到每个数据块时立即打印\nfor chunk in describe_country(\"巴西\"):\n    print(chunk, end=\"\")\n# '巴西，正式名称为巴西联邦共和国，是 ...'\n```\n\n可以同时创建多个 `StreamedStr` 对象，以并发地流式传输 LLM 的输出。在下面的示例中，为多个国家生成描述所需的时间与为单个国家生成描述所需的时间大致相同。\n\n```python\nfrom time import time\n\ncountries = [\"澳大利亚\", \"巴西\", \"智利\"]\n\n\n# 依次生成各国的描述\nstart_time = time()\nfor country in countries:\n    # 将 `StreamedStr` 转换为 `str` 会阻塞，直到 LLM 输出完全生成\n    description = str(describe_country(country))\n    print(f\"{time() - start_time:.2f}s : {country} - {len(description)} 字符\")\n\n# 22.72s : 澳大利亚 - 2130 字符\n# 41.63s : 巴西 - 1884 字符\n# 74.31s : 智利 - 2968 字符\n\n\n# 同时创建 `StreamedStr` 对象，以并发方式生成各国的描述\nstart_time = time()\nstreamed_strs = [describe_country(country) for country in countries]\nfor country, streamed_str in zip(countries, streamed_strs):\n    description = str(streamed_str)\n    print(f\"{time() - start_time:.2f}s : {country} - {len(description)} 字符\")\n\n# 22.79s : 澳大利亚 - 2147 字符\n# 23.64s : 巴西 - 2202 字符\n# 24.67s : 智利 - 2186 字符\n```\n\n### 对象流式处理\n\n也可以通过使用返回值类型注解 `Iterable`（或 `AsyncIterable`）从 LLM 流式传输结构化输出。这样可以在生成下一个项目的同时处理当前项目。\n\n```python\nfrom collections.abc import Iterable\nfrom time import time\n\nfrom magentic import prompt\nfrom pydantic import BaseModel\n\n\nclass 超级英雄(BaseModel):\n    名字: str\n    年龄: int\n    能力: str\n    敌人: list[str]\n\n\n@prompt(\"创建一个名为 {name} 的超级英雄团队。\")\ndef 创建超级英雄团队(name: str) -> Iterable[超级英雄]: ...\n\n\nstart_time = time()\nfor hero in 创建超级英雄团队(\"美食小队\"):\n    print(f\"{time() - start_time:.2f}s : {hero}\")\n\n# 2.23s : 名字='披萨侠' 年龄=30 能力='能从手中射出披萨片' 敌人=['饥饿军团', 
'垃圾食品帮']\n# 4.03s : 名字='胡萝卜队长' 年龄=35 能力='吃胡萝卜后拥有超强力量和敏捷性' 敌人=['糖分小队', '油腻帮']\n# 6.05s : 名字='冰淇淋女孩' 年龄=25 能力='能凭空制造冰淇淋' 敌人=['辣酱小队', '健康饮食者']\n```\n\n更多内容请参阅 [流式处理]。\n\n### Asyncio\n\n可以使用异步函数\u002F协程来并发查询 LLM。这不仅能显著提高整体生成速度，还能在等待 LLM 输出时运行其他异步代码。在下面的示例中，LLM 在等待列表中的下一位总统时，会为每位美国总统生成一段描述。通过测量每秒生成的字符数可以看出，该示例相比串行处理实现了 7 倍的加速。\n\n```python\nimport asyncio\nfrom time import time\nfrom typing import AsyncIterable\n\nfrom magentic import prompt\n\n\n@prompt(\"列出美国的十位总统\")\nasync def 遍历总统() -> AsyncIterable[str]: ...\n\n\n@prompt(\"给我讲讲关于 {topic} 的事\")\nasync def 讲讲关于(topic: str) -> str: ...\n\n\n# 为每位总统列出的描述并发生成\nstart_time = time()\ntasks = []\nasync for president in await 遍历总统():\n    # 使用 asyncio.create_task 安排协程执行，然后再等待其完成\n    # 这样在生成总统列表的同时，描述也会开始生成\n    task = asyncio.create_task(讲讲关于(president))\n    tasks.append(task)\n\ndescriptions = await asyncio.gather(*tasks)\n\n# 测量每秒生成的字符数\n总字符数 = sum(len(desc) for desc in descriptions)\n时间流逝 = time() - start_time\nprint(总字符数, 时间流逝, 总字符数 \u002F 时间流逝)\n# 24575 28.70 856.07\n\n\n# 测量描述一位总统时每秒生成的字符数\nstart_time = time()\nout = await 讲讲关于(\"乔治·华盛顿\")\n时间流逝 = time() - start_time\nprint(len(out), 时间流逝, len(out) \u002F 时间流逝)\n# 2206 18.72 117.78\n```\n\n更多内容请参阅 [Asyncio]。\n\n### 其他功能\n\n- `@prompt` 的 `functions` 参数可以包含异步函数\u002F协程。当调用相应的 `FunctionCall` 对象时，必须等待结果。\n- 可以使用 `Annotated` 类型注解为函数参数提供描述和其他元数据。请参阅 [Pydantic 文档中关于使用 `Field` 描述函数参数的内容](https:\u002F\u002Fdocs.pydantic.dev\u002Flatest\u002Fusage\u002Fvalidation_decorator\u002F#using-field-to-describe-function-arguments)。\n- `@prompt` 和 `@prompt_chain` 装饰器还接受 `model` 参数。您可以传入 `OpenaiChatModel` 实例以使用 GPT-4，或配置不同的温度。详情见下文。\n- 按照 [Pandas DataFrame 的示例笔记本](examples\u002Fcustom_function_schemas\u002Fregister_dataframe_function_schema.ipynb)，注册其他类型作为 `@prompt` 函数的返回值类型注解。\n\n## 后端\u002FLLM 配置\n\nMagentic 支持多种 LLM 提供商或“后端”。这大致指用于与 LLM API 交互的 Python 包。目前支持以下后端：\n\n### OpenAI\n\n默认后端，使用 `openai` Python 包，并支持 magentic 的所有功能。\n\n无需额外安装。只需从 `magentic` 中导入 `OpenaiChatModel` 类即可。\n\n```python\nfrom magentic import OpenaiChatModel\n\nmodel = OpenaiChatModel(\"gpt-4o\")\n```\n\n#### 通过 OpenAI 使用 Ollama\n\nOllama 支持与 OpenAI 兼容的 API，因此你可以通过 OpenAI 后端使用 Ollama 模型。\n\n首先，从 [ollama.com](https:\u002F\u002Follama.com\u002F) 安装 Ollama。然后拉取你想要使用的模型。\n\n```sh\nollama pull llama3.2\n```\n\n接着，在创建 `OpenaiChatModel` 实例时，指定模型名称和 `base_url`。\n\n```python\nfrom magentic import OpenaiChatModel\n\nmodel = OpenaiChatModel(\"llama3.2\", base_url=\"http:\u002F\u002Flocalhost:11434\u002Fv1\u002F\")\n```\n\n#### 其他与 OpenAI 兼容的 API\n\n当使用 `openai` 后端时，设置 `MAGENTIC_OPENAI_BASE_URL` 环境变量，或者在代码中使用 `OpenaiChatModel(..., base_url=\"http:\u002F\u002Flocalhost:8080\")`，即可将 `magentic` 与任何 OpenAI 兼容的 API 配合使用，例如 [Azure OpenAI 服务](https:\u002F\u002Flearn.microsoft.com\u002Fen-us\u002Fazure\u002Fai-services\u002Fopenai\u002Fquickstart?tabs=command-line&pivots=programming-language-python#create-a-new-python-application)、[LiteLLM OpenAI 代理服务器](https:\u002F\u002Fdocs.litellm.ai\u002Fdocs\u002Fproxy_server)、[LocalAI](https:\u002F\u002Flocalai.io\u002Fhowtos\u002Feasy-request-openai\u002F) 等。请注意，如果 API 不支持工具调用，则无法创建返回 Python 对象的提示函数，但 `magentic` 的其他功能仍可正常使用。\n\n若要将 Azure 与 openai 后端配合使用，需将 `MAGENTIC_OPENAI_API_TYPE` 环境变量设置为 `\"azure\"`，或在代码中使用 `OpenaiChatModel(..., api_type=\"azure\")`，并同时设置 openai 包访问 Azure 所需的环境变量。详情请参阅：https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python#microsoft-azure-openai\n\n### Anthropic\n\n此后端使用 `anthropic` Python 包，并支持 magentic 的所有功能。\n\n可以使用包含 `anthropic` 附加组件的 `magentic` 包进行安装，或直接安装 `anthropic` 包。\n\n```sh\npip 
install \"magentic[anthropic]\"\n```\n\n然后导入 `AnthropicChatModel` 类。\n\n```python\nfrom magentic.chat_model.anthropic_chat_model import AnthropicChatModel\n\nmodel = AnthropicChatModel(\"claude-3-5-sonnet-latest\")\n```\n\n### LiteLLM\n\n此后端使用 `litellm` Python 包，允许从 [众多不同的提供商](https:\u002F\u002Fdocs.litellm.ai\u002Fdocs\u002Fproviders) 查询大语言模型。注意：部分模型可能不支持 `magentic` 的所有功能，例如函数调用、结构化输出和流式响应。\n\n可以使用包含 `litellm` 附加组件的 `magentic` 包进行安装，或直接安装 `litellm` 包。\n\n```sh\npip install \"magentic[litellm]\"\n```\n\n然后导入 `LitellmChatModel` 类。\n\n```python\nfrom magentic.chat_model.litellm_chat_model import LitellmChatModel\n\nmodel = LitellmChatModel(\"gpt-4o\")\n```\n\n### Mistral\n\n此后端使用 `openai` Python 包，并做了一些小修改以使 API 请求与 Mistral API 兼容。它支持 magentic 的所有功能。然而，工具调用（包括结构化输出）不会进行流式传输，而是会一次性接收全部结果。\n\n注意：magentic 的未来版本可能会改用 `mistral` Python 包。\n\n无需额外安装。只需导入 `MistralChatModel` 类即可。\n\n```python\nfrom magentic.chat_model.mistral_chat_model import MistralChatModel\n\nmodel = MistralChatModel(\"mistral-large-latest\")\n```\n\n## 配置后端\n\n`magentic` 默认使用的 `ChatModel`（在 `@prompt`、`@chatprompt` 等装饰器中）可以通过多种方式配置。当调用 prompt 函数或 chatprompt 函数时，所使用的 `ChatModel` 会按照以下优先级顺序确定：\n\n1. 作为 `model` 参数传递给 magentic 装饰器的 `ChatModel` 实例\n2. 使用 `with MyChatModel:` 创建的当前聊天模型上下文\n3. 根据环境变量和 [src\u002Fmagentic\u002Fsettings.py](https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fblob\u002Fmain\u002Fsrc\u002Fmagentic\u002Fsettings.py) 中的默认设置创建的全局 `ChatModel`\n\n以下代码片段演示了这一行为：\n\n```python\nfrom magentic import OpenaiChatModel, prompt\nfrom magentic.chat_model.anthropic_chat_model import AnthropicChatModel\n\n\n@prompt(\"Say hello\")\ndef say_hello() -> str: ...\n\n\n@prompt(\n    \"Say hello\",\n    model=AnthropicChatModel(\"claude-3-5-sonnet-latest\"),\n)\ndef say_hello_anthropic() -> str: ...\n\n\nsay_hello()  # 使用环境变量或默认设置\n\nwith OpenaiChatModel(\"gpt-4o-mini\", temperature=1):\n    say_hello()  # 由于上下文管理器的存在，使用 openai 的 gpt-4o-mini 并设置 temperature=1\n    say_hello_anthropic()  # 显式配置为 Anthropic claude-3-5-sonnet-latest\n```\n\n可以设置以下环境变量：\n\n| 环境变量           | 描述                              | 示例                      |\n| ------------------ | ---------------------------------------- | ---------------------------- |\n| MAGENTIC_BACKEND   | 作为 LLM 后端使用的包                | anthropic \u002F openai \u002F litellm |\n| MAGENTIC_ANTHROPIC_MODEL       | Anthropic 模型                          | claude-3-haiku-20240307      |\n| MAGENTIC_ANTHROPIC_API_KEY     | magentic 将使用的 Anthropic API 密钥 | sk-...                       |\n| MAGENTIC_ANTHROPIC_BASE_URL    | 兼容 Anthropic 的 API 的基础 URL | http:\u002F\u002Flocalhost:8080        |\n| MAGENTIC_ANTHROPIC_MAX_TOKENS  | 最大生成标记数           | 1024                         |\n| MAGENTIC_ANTHROPIC_TEMPERATURE | 温度                              | 0.5                          |\n| MAGENTIC_LITELLM_MODEL         | LiteLLM 模型                            | claude-2                     |\n| MAGENTIC_LITELLM_API_BASE      | 查询的基础 URL                          | http:\u002F\u002Flocalhost:11434       |\n| MAGENTIC_LITELLM_MAX_TOKENS    | LiteLLM 最大生成标记数                  | 1024                         |\n| MAGENTIC_LITELLM_TEMPERATURE   | LiteLLM 温度                            | 0.5                          |\n| MAGENTIC_MISTRAL_MODEL         | Mistral 模型                            | mistral-large-latest         |\n| MAGENTIC_MISTRAL_API_KEY       | magentic 将使用的 Mistral API 密钥   | XEG...                       
|\n| MAGENTIC_MISTRAL_BASE_URL      | 兼容 Mistral 的 API 的基础 URL          | http:\u002F\u002Flocalhost:8080        |\n| MAGENTIC_MISTRAL_MAX_TOKENS    | 最大生成标记数                          | 1024                         |\n| MAGENTIC_MISTRAL_SEED          | 确定性采样的种子                        | 42                           |\n| MAGENTIC_MISTRAL_TEMPERATURE   | 温度                              | 0.5                          |\n| MAGENTIC_OPENAI_MODEL          | OpenAI 模型                             | gpt-4                        |\n| MAGENTIC_OPENAI_API_KEY        | magentic 将使用的 OpenAI API 密钥    | sk-...                       |\n| MAGENTIC_OPENAI_API_TYPE       | 允许选项：“openai”、“azure”       | azure                        |\n| MAGENTIC_OPENAI_BASE_URL       | 兼容 OpenAI 的 API 的基础 URL          | http:\u002F\u002Flocalhost:8080        |\n| MAGENTIC_OPENAI_MAX_TOKENS     | OpenAI 最大生成标记数                  | 1024                         |\n| MAGENTIC_OPENAI_SEED           | 确定性采样的种子                        | 42                           |\n| MAGENTIC_OPENAI_TEMPERATURE    | OpenAI 温度                            | 0.5                          |\n\n## 类型检查\n\n许多类型检查器会对带有 `@prompt` 装饰器的函数发出警告或错误，因为这些函数没有函数体或返回值。有几种方法可以处理这种情况：\n\n1. 在类型检查器中全局禁用该检查。例如，在 mypy 中通过禁用错误代码 `empty-body` 来实现。\n   ```toml\n   # pyproject.toml\n   [tool.mypy]\n   disable_error_code = [\"empty-body\"]\n   ```\n1. 将函数体写成 `...`（这不符合 mypy 的要求）或 `raise`。\n   ```python\n   @prompt(\"Choose a color\")\n   def random_color() -> str: ...\n   ```\n1. 在每个函数上使用注释 `# type: ignore[empty-body]`。在这种情况下，可以用文档字符串代替 `...`。\n   ```python\n   @prompt(\"Choose a color\")\n   def random_color() -> str:  # type: ignore[empty-body]\n       \"\"\"Returns a random color.\"\"\"\n   ```","# Magentic 快速上手指南\n\nMagentic 是一个用于将大语言模型（LLM）无缝集成到 Python 代码中的开源库。它通过 `@prompt` 和 `@chatprompt` 装饰器，让你能够定义返回结构化数据（如 Pydantic 模型）的函数，并结合传统 Python 代码构建复杂的智能体系统。\n\n## 环境准备\n\n在开始之前，请确保满足以下条件：\n\n*   **Python 版本**：建议安装 Python 3.10 或更高版本。\n*   **API Key**：你需要拥有支持的 LLM 提供商的 API Key（默认支持 OpenAI）。\n    *   请在环境变量中设置 `OPENAI_API_KEY`。\n    *   若使用 Anthropic 或 Ollama 等其他提供商，需在代码中进行相应配置。\n*   **依赖库**：Magentic 底层依赖 `pydantic` 进行数据验证和结构化输出。\n\n## 安装步骤\n\n你可以使用 `pip` 或现代化的包管理工具 `uv` 进行安装。\n\n### 方式一：使用 pip\n\n```sh\npip install magentic\n```\n\n> **国内加速提示**：如果遇到下载速度慢的问题，推荐使用国内镜像源：\n> ```sh\n> pip install magentic -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n> ```\n\n### 方式二：使用 uv (推荐)\n\n```sh\nuv add magentic\n```\n\n## 基本使用\n\nMagentic 的核心是通过装饰器将 Python 函数转换为 LLM 调用。以下是最基础的使用示例。\n\n### 1. 简单的文本生成 (@prompt)\n\n使用 `@prompt` 装饰器定义一个提示词模板。函数参数会自动填入模板，返回值由 LLM 生成。注意：被装饰的函数不需要编写函数体。\n\n```python\nfrom magentic import prompt\n\n\n@prompt('Add more \"dude\"ness to: {phrase}')\ndef dudeify(phrase: str) -> str: ...  # 无需函数体\n\n\nresult = dudeify(\"Hello, how are you?\")\nprint(result)\n# 输出示例：\"Hey, dude! What's up? How's it going, my man?\"\n```\n\n### 2. 结构化输出 (Pydantic Models)\n\nMagentic 的强大之处在于它能直接返回符合类型注解的结构化数据（如 Pydantic 模型），而不仅仅是字符串。\n\n```python\nfrom magentic import prompt\nfrom pydantic import BaseModel\n\n\nclass Superhero(BaseModel):\n    name: str\n    age: int\n    power: str\n    enemies: list[str]\n\n\n@prompt(\"Create a Superhero named {name}.\")\ndef create_superhero(name: str) -> Superhero: ...\n\n\nhero = create_superhero(\"Garden Man\")\nprint(hero)\n# 输出示例：Superhero(name='Garden Man', age=30, power='Control over plants', enemies=['Pollution Man', 'Concrete Woman'])\nprint(hero.power)\n# 可以直接访问属性：Control over plants\n```\n\n### 3. 
多轮对话与上下文 (@chatprompt)\n\n如果需要设置系统角色（System Message）或提供少样本示例（Few-shot prompting），请使用 `@chatprompt`。\n\n```python\nfrom magentic import chatprompt, AssistantMessage, SystemMessage, UserMessage\nfrom pydantic import BaseModel\n\n\nclass Quote(BaseModel):\n    quote: str\n    character: str\n\n\n@chatprompt(\n    SystemMessage(\"You are a movie buff.\"),\n    UserMessage(\"What is your favorite quote from Harry Potter?\"),\n    AssistantMessage(\n        Quote(\n            quote=\"It does not do to dwell on dreams and forget to live.\",\n            character=\"Albus Dumbledore\",\n        )\n    ),\n    UserMessage(\"What is your favorite quote from {movie}?\"),\n)\ndef get_movie_quote(movie: str) -> Quote: ...\n\n\nquote = get_movie_quote(\"Iron Man\")\nprint(quote)\n# 输出示例：Quote(quote='I am Iron Man.', character='Tony Stark')\n```\n\n### 4. 函数调用 (Function Calling)\n\n让 LLM 决定何时调用你的 Python 函数。Magentic 会返回一个 `FunctionCall` 对象，执行该对象即可运行实际逻辑。\n\n```python\nfrom typing import Literal\nfrom magentic import prompt, FunctionCall\n\n\ndef search_twitter(query: str, category: Literal[\"latest\", \"people\"]) -> str:\n    \"\"\"Searches Twitter for a query.\"\"\"\n    return f\"Twitter results for {query}\"\n\n\n@prompt(\n    \"Use the appropriate search function to answer: {question}\",\n    functions=[search_twitter],\n)\ndef perform_search(question: str) -> FunctionCall[str]: ...\n\n\n# 获取函数调用对象\ncall_obj = perform_search(\"What is the latest news on LLMs?\")\n\n# 执行实际的 Python 函数\nresult = call_obj() \nprint(result)\n# 输出：Twitter results for LLMs\n```","某电商数据团队需要构建一个自动化脚本，将每日杂乱的用户评论转化为结构化的产品反馈报告，以便直接存入数据库进行分析。\n\n### 没有 magentic 时\n- 开发者需手动编写复杂的正则表达式或冗长的解析逻辑，试图从 LLM 返回的纯文本中提取 JSON，极易因模型输出格式微调而崩溃。\n- 缺乏类型安全保障，字段缺失或类型错误（如年龄变成字符串）只能在运行时暴露，导致下游数据处理流程频繁中断。\n- 当模型偶尔“幻觉”或不遵守指令时，缺少内置的重试机制，必须额外编写大量样板代码来处理异常和重新请求。\n- 难以实现流式处理结构化数据，用户必须等待整个响应生成完毕才能开始处理，增加了系统延迟。\n- 提示词模板与业务逻辑耦合严重，修改提示语往往需要重构整个函数调用链路，维护成本极高。\n\n### 使用 magentic 后\n- 直接利用 `@prompt` 装饰器配合 Pydantic 模型定义返回结构，magentic 自动确保输出严格符合 Schema，彻底告别手动解析 JSON。\n- 借助原生类型注解，IDE 能提供完整的代码补全和静态检查，在编码阶段即可发现字段定义错误，提升开发体验。\n- 内置的\"LLM 辅助重试”功能会在输出不符合规范时自动引导模型修正，显著提高了复杂场景下的执行成功率。\n- 支持结构化输出的流式传输，允许程序在数据生成的同时即刻处理，大幅优化了实时性要求高的应用场景。\n- 通过简单的函数签名即可管理提示词模板，将自然语言指令直接映射为 Python 函数调用，代码清晰且易于迭代。\n\nmagentic 通过将大模型能力无缝转化为类型安全的 Python 函数，让开发者能像调用普通 API 一样构建稳健的智能代理系统。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjackmpcollins_magentic_fc6ee7cf.png","jackmpcollins","Jack Collins","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fjackmpcollins_27fdb6e6.jpg","Founder\u002FCTO @gradient-ascent-labs ","@gradient-ascent-labs ","California",null,"jackcollins.ie","https:\u002F\u002Fgithub.com\u002Fjackmpcollins",[82,86],{"name":83,"color":84,"percentage":85},"Python","#3572A5",99.4,{"name":87,"color":88,"percentage":89},"Makefile","#427819",0.6,2403,124,"2026-04-11T16:44:32","MIT","","未说明",{"notes":97,"python":95,"dependencies":98},"该工具是一个用于集成大语言模型（LLM）的 Python 库，本身不运行本地模型，而是通过 API 调用外部服务（如 OpenAI、Anthropic、Ollama）。因此无特定 GPU、内存或操作系统限制，主要依赖网络环境和对应 LLM 提供商的 API Key。支持结构化输出、流式传输、函数调用及异步编程等特性。",[99],"pydantic",[13,35,52,14,15],[102,103,104,105,106,107,108,109,110,99,64,111,112,113],"agent","ai","chatbot","chatgpt","gpt","llm","openai","openai-api","prompt","magnetic","agentic","magenta","2026-03-27T02:49:30.150509","2026-04-13T23:54:47.802044",[117,122,127,132,136],{"id":118,"question_zh":119,"answer_zh":120,"source_url":121},32132,"如何在 Magentic 中使用 Ollama 模型进行工具调用（Tool Calls）或结构化输出？","要使用 Ollama 模型，请确保安装 Ollama v0.4.6 或更高版本，因为该版本才开始支持流式响应中的工具调用。建议使用 `OpenaiChatModel` 并指向本地 
Ollama 地址。由于模型可能不完全遵循函数架构，建议配合重试机制或使用 `chatprompt` 提供示例来进行提示工程。\n\n示例代码：\n```python\nfrom magentic import chatprompt, AssistantMessage, OpenaiChatModel, UserMessage\n\n@chatprompt(\n    UserMessage(\"Return a list of fruits.\"),\n    AssistantMessage([\"apple\", \"banana\", \"cherry\"]),\n    UserMessage(\"Return a list of {category}.\"),\n    model=OpenaiChatModel(\"llama3.1\", base_url=\"http:\u002F\u002Flocalhost:11434\u002Fv1\u002F\"),\n)\ndef make_list(category: str) -> list[str]: ...\n\nprint(make_list(\"colors\"))\n# 输出：['red', 'green', 'blue']\n```","https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fissues\u002F207",{"id":123,"question_zh":124,"answer_zh":125,"source_url":126},32133,"如何将 Magentic 与 LangGraph 集成使用？","Magentic 可以通过其 `Chat` 类与 LangGraph 集成。你需要定义一个包含消息列表的状态类型（State），并在 LangGraph 的节点中使用 Magentic 的 `Chat` 类来处理消息和提取结构化数据（如 Pydantic 模型）。\n\n关键步骤：\n1. 定义状态类型，包含 `messages` 列表和其他需要的字段。\n2. 在节点函数中初始化 `Chat` 对象，传入系统消息、历史消息以及期望的输出类型。\n3. 调用 `.submit()` 获取响应，并根据响应类型更新状态。\n\n示例状态定义：\n```python\nfrom magentic import AnyMessage\nfrom pydantic import BaseModel\nfrom typing import List\n\nclass State(BaseModel):\n    messages: List[AnyMessage]\n    # 其他字段...\n```","https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fissues\u002F287",{"id":128,"question_zh":129,"answer_zh":130,"source_url":131},32134,"如何在 `prompt_chain` 装饰器中使用 `SystemMessage`、`UserMessage` 等多角色消息模板？","从 v0.37.0 版本开始，`prompt_chain` 装饰器支持将 `template` 参数设置为消息对象列表（如 `UserMessage`），而不仅仅是字符串模板。这允许你在链式调用中指定不同的消息角色。\n\n示例代码：\n```python\nfrom magentic import prompt_chain, UserMessage\n\ndef get_current_weather(location, unit=\"fahrenheit\"):\n    \"\"\"Get the current weather in a given location\"\"\"\n    return {\"temperature\": \"72\", \"forecast\": [\"sunny\", \"windy\"]}\n\n@prompt_chain(\n    template=[UserMessage(\"What's the weather like in {city}?\")],\n    functions=[get_current_weather],\n)\ndef describe_weather(city: str) -> str: ...\n\ndescribe_weather(\"Boston\")\n# 输出：'The weather in Boston is currently 72°F with sunny and windy conditions.'\n```","https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fissues\u002F389",{"id":133,"question_zh":134,"answer_zh":135,"source_url":131},32135,"如果不想使用装饰器，如何手动实现类似 `prompt_chain` 的多轮函数调用循环？","你可以直接使用 `Chat` 类来手动控制消息循环。初始化 `Chat` 时传入消息列表、可用函数和输出类型（需包含 `FunctionCall`）。然后在一个 `while` 循环中检查最后一条消息是否为函数调用，如果是则执行该函数并提交下一次请求，直到没有函数调用为止。\n\n示例代码：\n```python\nfrom magentic import UserMessage, FunctionCall\nfrom magentic.chat import Chat\n\nchat = Chat(\n    messages=[UserMessage(\"你的初始问题...\")],\n    functions=[my_func],\n    output_types=[str, list[int], FunctionCall],  # 必须包含 FunctionCall\n).submit()\n\nwhile isinstance(chat.last_message.content, FunctionCall):\n    chat = chat.exec_function_call().submit()\n\nresult = chat.last_message.content\n```",{"id":137,"question_zh":138,"answer_zh":139,"source_url":140},32136,"Magentic 是否支持 GPT-4 Vision 或多模态输入（如图片）？","是的，Magentic 旨在支持多模态输入。理想情况下，它应该能够接受 `PIL.Image` 对象、字节数据（bytes）或 HTTPS 图片 URL 作为输入，以便与 `gpt-4-vision-preview` 等模型配合使用。具体实现细节请参考相关版本的发布说明或文档更新。
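\n\n自 v0.35.0 起（见下方版本说明），`UserMessage` 可直接接受 `ImageUrl`、`ImageBytes` 等类型。以下为基于该写法的最小示例（假设 `ImageUrl` 可从顶层导入、所用模型支持视觉，URL 仅作占位）：\n```python\nfrom magentic import chatprompt, ImageUrl, Placeholder, UserMessage\n\n@chatprompt(\n    UserMessage(\n        [\n            \"Describe this image.\",\n            Placeholder(ImageUrl, \"image_url\"),\n        ]\n    ),\n)\ndef describe_image(image_url: str) -> str: ...\n\nprint(describe_image(\"https:\u002F\u002Fexample.com\u002Fphoto.jpg\"))\n```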
","https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fissues\u002F58",[142,147,152,157,162,167,172,177,182,187,192,197,202,207,212,217,222,227,232,237],{"id":143,"version":144,"summary_zh":145,"released_at":146},239374,"v0.41.1","## 变更内容\n* 处理空的流式数据块，并在 OpenRouterChatModel 中移除 `require_parameters`，由 @piiq 在 https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fpull\u002F464 中完成。\n\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fcompare\u002Fv0.41.0...v0.41.1","2026-03-11T13:44:22",{"id":148,"version":149,"summary_zh":150,"released_at":151},239375,"v0.41.0","## 变更内容\n* @piiq 在 https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fpull\u002F457 中为 OpenAI 聊天模型添加了对 `verbosity` 和 `max_completion_tokens` 请求参数的支持\n* @dependabot[bot] 在 https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fpull\u002F458 中将 astral-sh\u002Fsetup-uv 从 6 升级到 7\n* @dependabot[bot] 在 https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fpull\u002F454 中将 actions\u002Fcheckout 从 4 升级到 5\n* @dependabot[bot] 在 https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fpull\u002F455 中将 actions\u002Fsetup-python 从 5 升级到 6\n\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fcompare\u002Fv0.40.0...v0.41.0","2025-10-14T23:19:52",{"id":153,"version":154,"summary_zh":155,"released_at":156},239376,"v0.40.0","## 变更内容\n* 由 @dependabot 在 https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fpull\u002F443 中将 astral-sh\u002Fsetup-uv 从 5 升级到 6\n* 由 @piiq 在 https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fpull\u002F448 中添加 OpenRouter 聊天模型\n* 由 @jackmpcollins 在 https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fpull\u002F451 中为 OpenaiChatModel 添加 reasoning_effort 参数\n\n## 新贡献者\n* @piiq 在 https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fpull\u002F448 中完成了首次贡献\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fcompare\u002Fv0.39.3...v0.40.0","2025-06-22T04:59:05",{"id":158,"version":159,"summary_zh":160,"released_at":161},239377,"v0.39.3","## 变更内容\n* 修复：函数调用解析位置参数时会忽略参数默认值，由 @jackmpcollins 在 https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fpull\u002F439 中完成。\n\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fcompare\u002Fv0.39.2...v0.39.3","2025-04-07T05:58:54",{"id":163,"version":164,"summary_zh":165,"released_at":166},239378,"v0.39.2","## 变更内容\n* 由 @jackmpcollins 在 https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fpull\u002F382 中通过 openai 包添加了对 Gemini 的测试用例\n\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fcompare\u002Fv0.39.1...v0.39.2","2025-03-02T06:00:18",{"id":168,"version":169,"summary_zh":170,"released_at":171},239379,"v0.39.1","## 变更内容\n* 由 @jackmpcollins 在 https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fpull\u002F433 中添加了针对 xAI \u002F Grok 的测试和文档，通过 OpenaiChatModel 实现。\n\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fcompare\u002Fv0.39.0...v0.39.1","2025-03-02T05:35:02",{"id":173,"version":174,"summary_zh":175,"released_at":176},239380,"v0.39.0","## 变更内容\n* 使用 TypeVar 的默认值来移除重载，由 @jackmpcollins 在 https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fpull\u002F411 中完成\n* 在文档中添加缺失的 Field 导入，由 @jackmpcollins 在 https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fpull\u002F428 中完成\n* 功能：支持将 extra_headers 传递给 LitellmChatModel，由 @ashwin153 在 https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fpull\u002F426 中完成\n\n## 新贡献者\n* @ashwin153 在 https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fpull\u002F426 
中完成了首次贡献\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fcompare\u002Fv0.38.1...v0.39.0","2025-02-24T10:13:41",{"id":178,"version":179,"summary_zh":180,"released_at":181},239381,"v0.38.1","## 变更内容\n* 修复 - 在使用 Anthropic API 时，修复了无参数的函数调用问题，由 @ananis25 在 https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fpull\u002F408 中完成。\n\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fcompare\u002Fv0.38.0...v0.38.1","2025-01-29T08:06:59",{"id":183,"version":184,"summary_zh":185,"released_at":186},239382,"v0.38.0","## 变更内容\n* @ananis25 在 https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fpull\u002F405 中实现了 API 消息转换为异步流式响应的功能。\n* @jackmpcollins 在 https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fpull\u002F406 中增加了对 `message_to_X_message` 中 `AsyncParallelFunctionCall` 的支持。\n\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fcompare\u002Fv0.37.1...v0.38.0","2025-01-27T08:10:53",{"id":188,"version":189,"summary_zh":190,"released_at":191},239383,"v0.37.1","## 变更内容\nAnthropic 模型的消息序列化现在在 `AssistantMessage` 中支持 `StreamedResponse`。感谢 @ananis25 🎉\n\n## 拉取请求\n* 由 @ananis25 在 https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fpull\u002F404 中添加了用于流式响应的 msg-to-anthropic-msg 转换器\n\n## 新贡献者\n* @ananis25 在 https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fpull\u002F404 中完成了他们的首次贡献\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fcompare\u002Fv0.37.0...v0.37.1","2025-01-24T05:51:47",{"id":193,"version":194,"summary_zh":195,"released_at":196},239384,"v0.37.0","## What's Changed\r\n\r\nThe `@prompt_chain` decorator can now accept a sequence of `Message` as input, like `@chatprompt`.\r\n\r\n```python\r\nfrom magentic import prompt_chain, UserMessage\r\n\r\ndef get_current_weather(location, unit=\"fahrenheit\"):\r\n    \"\"\"Get the current weather in a given location\"\"\"\r\n    return {\"temperature\": \"72\", \"forecast\": [\"sunny\", \"windy\"]}\r\n\r\n@prompt_chain(\r\n    template=[UserMessage(\"What's the weather like in {city}?\")],\r\n    functions=[get_current_weather],\r\n)\r\ndef describe_weather(city: str) -> str: ...\r\n\r\ndescribe_weather(\"Boston\")\r\n'The weather in Boston is currently 72°F with sunny and windy conditions.'\r\n```\r\n\r\n## PRs\r\n* Allow Messages as input to prompt_chain by @jackmpcollins in https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fpull\u002F403\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fcompare\u002Fv0.36.0...v0.37.0","2025-01-15T07:00:44",{"id":198,"version":199,"summary_zh":200,"released_at":201},239385,"v0.36.0","## What's Changed\r\n\r\nDocument the `Chat` class and make it importable from the top level.\r\ndocs: https:\u002F\u002Fmagentic.dev\u002Fchat\u002F\r\n\r\n```python\r\nfrom magentic import Chat, OpenaiChatModel, UserMessage\r\n\r\n# Create a new Chat instance\r\nchat = Chat(\r\n    messages=[UserMessage(\"Say hello\")],\r\n    model=OpenaiChatModel(\"gpt-4o\"),\r\n)\r\n\r\n# Append a new user message\r\nchat = chat.add_user_message(\"Actually, say goodbye!\")\r\nprint(chat.messages)\r\n# [UserMessage('Say hello'), UserMessage('Actually, say goodbye!')]\r\n\r\n# Submit the chat to the LLM to get a response\r\nchat = chat.submit()\r\nprint(chat.last_message.content)\r\n# 'Hello! 
Just kidding—goodbye!'\r\n```\r\n\r\n## PRs\r\n* Use public import for ChatCompletionStreamState by @jackmpcollins in https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fpull\u002F398\r\n* Make Chat class public and add docs by @jackmpcollins in https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fpull\u002F401\r\n* Remove unused content None from openai messages by @jackmpcollins in https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fpull\u002F402\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fcompare\u002Fv0.35.0...v0.36.0","2025-01-12T06:35:34",{"id":203,"version":204,"summary_zh":205,"released_at":206},239386,"v0.35.0","## What's Changed\r\n\r\n`UserMessage` now accepts image urls, image bytes, and document bytes directly using the `ImageUrl`, `ImageBytes`, and `DocumentBytes` types.\r\n\r\nExample of new `UserMessage` syntax and `DocumentBytes`\r\n\r\n```python\r\nfrom pathlib import Path\r\n\r\nfrom magentic import chatprompt, DocumentBytes, Placeholder, UserMessage\r\nfrom magentic.chat_model.anthropic_chat_model import AnthropicChatModel\r\n\r\n\r\n@chatprompt(\r\n    UserMessage(\r\n        [\r\n            \"Repeat the contents of this document.\",\r\n            Placeholder(DocumentBytes, \"document_bytes\"),\r\n        ]\r\n    ),\r\n    model=AnthropicChatModel(\"claude-3-5-sonnet-20241022\"),\r\n)\r\ndef read_document(document_bytes: bytes) -> str: ...\r\n\r\n\r\ndocument_bytes = Path(\"...\").read_bytes()\r\nread_document(document_bytes)\r\n# 'This is a test PDF.'\r\n```\r\n\r\n## PRs\r\n* Accept Sequence[Message] instead of list for Chat by @alexchandel in https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fpull\u002F390\r\n* Bump astral-sh\u002Fsetup-uv from 4 to 5 by @dependabot in https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fpull\u002F393\r\n* Support images directly in UserMessage by @jackmpcollins in https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fpull\u002F387\r\n* Add DocumentBytes for submitting PDF documents by @jackmpcollins in https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fpull\u002F395\r\n\r\n## New Contributors\r\n* @alexchandel made their first contribution in https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fpull\u002F390\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fcompare\u002Fv0.34.1...v0.35.0","2025-01-06T03:33:04",{"id":208,"version":209,"summary_zh":210,"released_at":211},239387,"v0.34.1","## What's Changed\r\n* Consume LLM output stream via returned objects to allow caching by @jackmpcollins in https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fpull\u002F384\r\n* Improve ruff format\u002Flint rules by @jackmpcollins in https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fpull\u002F385\r\n* Update overview and configuration docs by @jackmpcollins in https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fpull\u002F386\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fcompare\u002Fv0.34.0...v0.34.1","2024-12-01T08:10:25",{"id":213,"version":214,"summary_zh":215,"released_at":216},239388,"v0.34.0","## What's Changed\r\n\r\nAdd `StreamedResponse` and `AsyncStreamedResponse` to enable parsing responses that contain both text _and_ tool calls. 
See PR https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fpull\u002F383 or the new docs (copied below) https:\u002F\u002Fmagentic.dev\u002Fstreaming\u002F#StreamedResponse for more details.\r\n\r\n### ⚡  StreamedResponse\r\n\r\nSome LLMs have the ability to generate text output and make tool calls in the same response. This allows them to perform chain-of-thought reasoning or provide additional context to the user. In magentic, the `StreamedResponse` (or `AsyncStreamedResponse`) class can be used to request this type of output. This object is an iterable of `StreamedStr` (or `AsyncStreamedStr`) and `FunctionCall` instances.\r\n\r\n!!! warning \"Consuming StreamedStr\"\r\n\r\n    The StreamedStr object must be iterated over before the next item in the `StreamedResponse` is processed, otherwise the string output will be lost. This is because the `StreamedResponse` and `StreamedStr` share the same underlying generator, so advancing the `StreamedResponse` iterator skips over the `StreamedStr` items. The `StreamedStr` object has internal caching so after iterating over it once the chunks will remain available.\r\n\r\nIn the example below, we request that the LLM generates a greeting and then calls a function to get the weather for two cities. The `StreamedResponse` object is then iterated over to print the output, and the `StreamedStr` and `FunctionCall` items are processed separately.\r\n\r\n```python\r\nfrom magentic import prompt, FunctionCall, StreamedResponse, StreamedStr\r\n\r\n\r\ndef get_weather(city: str) -> str:\r\n    return f\"The weather in {city} is 20°C.\"\r\n\r\n\r\n@prompt(\r\n    \"Say hello, then get the weather for: {cities}\",\r\n    functions=[get_weather],\r\n)\r\ndef describe_weather(cities: list[str]) -> StreamedResponse: ...\r\n\r\n\r\nresponse = describe_weather([\"Cape Town\", \"San Francisco\"])\r\nfor item in response:\r\n    if isinstance(item, StreamedStr):\r\n        for chunk in item:\r\n            # print the chunks as they are received\r\n            print(chunk, sep=\"\", end=\"\")\r\n        print()\r\n    if isinstance(item, FunctionCall):\r\n        # print the function call, then call it and print the result\r\n        print(item)\r\n        print(item())\r\n\r\n# Hello! I'll get the weather for Cape Town and San Francisco for you.\r\n# FunctionCall(\u003Cfunction get_weather at 0x1109825c0>, 'Cape Town')\r\n# The weather in Cape Town is 20°C.\r\n# FunctionCall(\u003Cfunction get_weather at 0x1109825c0>, 'San Francisco')\r\n# The weather in San Francisco is 20°C.\r\n```\r\n\r\n\r\n## PRs\r\n\r\n* Test Ollama via `OpenaiChatModel` by @jackmpcollins in https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fpull\u002F281\r\n* Rename test to test_openai_chat_model_acomplete_ollama by @jackmpcollins in https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fpull\u002F381\r\n* Add `(Async)StreamedResponse` for multi-part responses by @jackmpcollins in https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fpull\u002F383\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fcompare\u002Fv0.33.0...v0.34.0","2024-11-30T09:34:59",{"id":218,"version":219,"summary_zh":220,"released_at":221},239389,"v0.33.0","## What's Changed\r\n\r\n> [!WARNING]  \r\n> Breaking change: The prompt-function return type and the `output_types` argument to `ChatModel` must now contain `FunctionCall` or `(Async)ParallelFunctionCall` if these return types are desired. 
Previously instances of these types could be returned even if they were not indicated in the output types.\r\n\r\n- Dependency updates\r\n- Improve development workflows\r\n- Big internal refactor to prepare for future features. See PR https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fpull\u002F380 for details.\r\n\r\n## PRs\r\n\r\n* Bump logfire-api from 0.49.0 to 0.52.0 by @dependabot in https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fpull\u002F327\r\n* Bump litellm from 1.41.21 to 1.44.27 by @dependabot in https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fpull\u002F330\r\n* Bump jupyterlab from 4.2.3 to 4.2.5 by @dependabot in https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fpull\u002F322\r\n* Bump anthropic from 0.31.0 to 0.34.2 by @dependabot in https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fpull\u002F328\r\n* Bump pydantic-settings from 2.3.4 to 2.5.2 by @dependabot in https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fpull\u002F332\r\n* Bump notebook from 7.2.1 to 7.2.2 by @dependabot in https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fpull\u002F333\r\n* Bump ruff from 0.5.2 to 0.6.5 by @dependabot in https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fpull\u002F331\r\n* Bump jupyter from 1.0.0 to 1.1.1 by @dependabot in https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fpull\u002F335\r\n* Bump logfire-api from 0.52.0 to 0.53.0 by @dependabot in https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fpull\u002F336\r\n* Bump mkdocs-jupyter from 0.24.8 to 0.25.0 by @dependabot in https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fpull\u002F338\r\n* Bump pytest-asyncio from 0.23.7 to 0.24.0 by @dependabot in https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fpull\u002F337\r\n* Update precommit hooks by @jackmpcollins in https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fpull\u002F339\r\n* Switch to uv from poetry by @jackmpcollins in https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fpull\u002F373\r\n* Bump astral-sh\u002Fsetup-uv from 2 to 3 by @dependabot in https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fpull\u002F374\r\n* Use VCR for tests by @jackmpcollins in https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fpull\u002F375\r\n* Add CONTRIBUTING.md by @jackmpcollins in https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fpull\u002F376\r\n* Make VCR match on request body in tests by @jackmpcollins in https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fpull\u002F377\r\n* Add make help command by @jackmpcollins in https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fpull\u002F378\r\n* Bump astral-sh\u002Fsetup-uv from 3 to 4 by @dependabot in https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fpull\u002F379\r\n* Refactor to reuse stream parsing across ChatModels by @jackmpcollins in https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fpull\u002F380\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fcompare\u002Fv0.32.0...v0.33.0","2024-11-29T08:41:40",{"id":223,"version":224,"summary_zh":225,"released_at":226},239390,"v0.32.0","## What's Changed\r\n\r\nAdd support for OpenAI \"strict\" setting for structured outputs. This guarantees that the generated JSON schema matches that supplied by the user. 
In magentic, this is set via an extension of pydantic's `ConfigDict`, and works for pydantic models as well as functions. See the docs for more info https:\u002F\u002Fmagentic.dev\u002Fstructured-outputs\u002F#configdict\r\n\r\nFor a BaseModel:\r\n\r\n```python\r\nfrom magentic import prompt, ConfigDict\r\nfrom pydantic import BaseModel\r\n\r\n\r\nclass Superhero(BaseModel):\r\n    model_config = ConfigDict(openai_strict=True)\r\n\r\n    name: str\r\n    age: int\r\n    power: str\r\n    enemies: list[str]\r\n\r\n\r\n@prompt(\"Create a Superhero named {name}.\")\r\ndef create_superhero(name: str) -> Superhero: ...\r\n\r\n\r\ncreate_superhero(\"Garden Man\")\r\n```\r\n\r\nFor a function:\r\n\r\n```python\r\nfrom typing import Annotated, Literal\r\n\r\nfrom magentic import ConfigDict, prompt, with_config\r\nfrom pydantic import Field\r\n\r\n\r\n@with_config(ConfigDict(openai_strict=True))\r\ndef activate_oven(\r\n    temperature: Annotated[int, Field(description=\"Temp in Fahrenheit\", lt=500)],\r\n    mode: Literal[\"broil\", \"bake\", \"roast\"],\r\n) -> str:\r\n    \"\"\"Turn the oven on with the provided settings.\"\"\"\r\n    return f\"Preheating to {temperature} F with mode {mode}\"\r\n\r\n\r\n@prompt(\r\n    \"Do some cooking\",\r\n    functions=[\r\n        activate_oven,\r\n        # ...\r\n    ],\r\n)\r\ndef do_cooking() -> str: ...  # hypothetical name added to complete the truncated snippet\r\n```\r\n\r\n\r\n## PRs\r\n\r\n* Add support for OpenAI structured outputs by @jackmpcollins in https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fpull\u002F305\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fcompare\u002Fv0.31.0...v0.32.0","2024-08-18T09:16:25",{"id":228,"version":229,"summary_zh":230,"released_at":231},239391,"v0.31.0","## What's Changed\r\n* Add Anthropic vision by @jackmpcollins in https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fpull\u002F304\r\n    - See https:\u002F\u002Fmagentic.dev\u002Fvision\u002F (a sketch follows below)
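\r\n\r\nA minimal sketch of Anthropic vision, borrowing the `chatprompt` + `Placeholder` pattern shown in the v0.35.0 notes above (the `ImageBytes` type, the model name, and the file path are assumptions for illustration; the docs at magentic.dev\u002Fvision are the authoritative reference):\r\n\r\n```python\r\nfrom pathlib import Path\r\n\r\nfrom magentic import chatprompt, ImageBytes, Placeholder, UserMessage\r\nfrom magentic.chat_model.anthropic_chat_model import AnthropicChatModel\r\n\r\n\r\n@chatprompt(\r\n    UserMessage(\r\n        [\r\n            \"Describe this image in one sentence.\",\r\n            Placeholder(ImageBytes, \"image_bytes\"),\r\n        ]\r\n    ),\r\n    model=AnthropicChatModel(\"claude-3-5-sonnet-20241022\"),\r\n)\r\ndef describe_image(image_bytes: bytes) -> str: ...\r\n\r\n\r\nimage_bytes = Path(\"photo.jpg\").read_bytes()  # hypothetical image file\r\ndescribe_image(image_bytes)\r\n```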
\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fcompare\u002Fv0.30.0...v0.31.0","2024-08-13T07:29:51",{"id":233,"version":234,"summary_zh":235,"released_at":236},239392,"v0.30.0","## What's Changed\r\n\r\n> [!WARNING]  \r\n> Breaking change: `StructuredOutputError` has been replaced by more specific exceptions `StringNotAllowedError` and `ToolSchemaParseError` in PR https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fpull\u002F288\r\n\r\n🤖 ♻️  LLM-assisted retries have been added. When enabled, this sends incorrectly formatted output back to the LLM along with the error message to have the LLM fix its mistakes. This can be used to enforce more complex validation on output schemas using pydantic validators.\r\n\r\nFor example, placing an arbitrary constraint on a string field:\r\n\r\n```python\r\nfrom typing import Annotated\r\n\r\nfrom magentic import prompt\r\nfrom pydantic import AfterValidator, BaseModel\r\n\r\n\r\ndef assert_is_ireland(v: str) -> str:\r\n    if v != \"Ireland\":\r\n        raise ValueError(\"Country must be Ireland\")\r\n    return v\r\n\r\n\r\nclass Country(BaseModel):\r\n    name: Annotated[str, AfterValidator(assert_is_ireland)]\r\n    capital: str\r\n\r\n\r\n@prompt(\r\n    \"Return a country\",\r\n    max_retries=3,\r\n)\r\ndef get_country() -> Country: ...\r\n\r\n\r\nget_country()\r\n# 05:13:55.607 Calling prompt-function get_country\r\n# 05:13:55.622   LLM-assisted retries enabled. Max 3\r\n# 05:13:55.627     Chat Completion with 'gpt-4o' [LLM]\r\n# 05:13:56.309     streaming response from 'gpt-4o' took 0.11s [LLM]\r\n# 05:13:56.310     Retrying Chat Completion. Attempt 1.\r\n# 05:13:56.322     Chat Completion with 'gpt-4o' [LLM]\r\n# 05:13:57.456     streaming response from 'gpt-4o' took 0.00s [LLM]\r\n#\r\n# Country(name='Ireland', capital='Dublin')\r\n```\r\n\r\nSee the [new docs page on Retrying](https:\u002F\u002Fmagentic.dev\u002Fretrying) for more info.\r\n\r\n## PRs\r\n\r\n* Bump aiohttp from 3.9.5 to 3.10.2 by @dependabot in https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fpull\u002F297\r\n* Add LLM-assisted retries by @jackmpcollins in https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fpull\u002F288\r\n* Set logfire OTEL scope to magentic by @jackmpcollins in https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fpull\u002F298\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fcompare\u002Fv0.29.0...v0.30.0","2024-08-12T07:30:41",{"id":238,"version":239,"summary_zh":240,"released_at":241},239393,"v0.29.0","## What's Changed\r\n* Make Message a pydantic model \u002F serializable by @jackmpcollins in https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fpull\u002F294\r\n\r\nThis means `Message` objects can be used anywhere pydantic models can, including in prompt-functions. The new `AnyMessage` type simplifies this. For example:\r\n```python\r\nfrom magentic import AnyMessage, prompt\r\n\r\n@prompt(\"Create an example of few-shot prompting for a chatbot\")\r\ndef make_few_shot_prompt() -> list[AnyMessage]: ...\r\n\r\nmake_few_shot_prompt()\r\n# [SystemMessage('You are a helpful and knowledgeable assistant.'),\r\n#  UserMessage('What’s the weather like today?'),\r\n#  AssistantMessage[Any]('The weather today is sunny with a high of 75°F (24°C).'),\r\n#  UserMessage('Can you explain the theory of relativity in simple terms?'),\r\n#  AssistantMessage[Any]('Sure! The theory of relativity, developed by Albert Einstein,  ...]\r\n```\r\n\r\nDependabot\r\n* Bump logfire-api from 0.46.1 to 0.49.0 by @dependabot in https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fpull\u002F292\r\n* Bump logfire from 0.46.1 to 0.49.0 by @dependabot in https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fpull\u002F293\r\n* Bump pytest from 8.2.2 to 8.3.2 by @dependabot in https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fpull\u002F286\r\n* Bump openai from 1.35.13 to 1.38.0 by @dependabot in https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fpull\u002F290\r\n* Bump mypy from 1.10.1 to 1.11.1 by @dependabot in https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fpull\u002F291\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fjackmpcollins\u002Fmagentic\u002Fcompare\u002Fv0.28.1...v0.29.0","2024-08-08T06:29:43"]