[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-567-labs--instructor":3,"tool-567-labs--instructor":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",140436,2,"2026-04-05T23:32:43",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":75,"owner_avatar_url":76,"owner_bio":77,"owner_company":77,"owner_location":77,"owner_email":77,"owner_twitter":77,"owner_website":77,"owner_url":78,"languages":79,"stars":88,"forks":89,"last_commit_at":90,"license":91,"difficulty_score":92,"env_os":93,"env_gpu":94,"env_ram":94,"env_deps":95,"category_tags":97,"github_topics":98,"view_count":105,"oss_zip_url":77,"oss_zip_packed_at":77,"status":16,"created_at":106,"updated_at":107,"faqs":108,"releases":119},1225,"567-labs\u002Finstructor","instructor","structured outputs for llms ","instructor 是一个开源Python库，专为简化从大语言模型获取结构化数据而设计。它让你无需手动处理JSON解析、错误验证或API适配，只需定义Pydantic数据模型（如User类），就能直接从LLM响应中获取类型安全的结构化结果。\n\n它解决了开发者常遇到的痛点：传统方式需要编写复杂JSON模式、反复处理验证失败和响应解析，导致代码冗长易错。instructor 通过一个简洁接口自动完成所有验证和转换，让数据提取变得像调用普通函数一样简单。\n\n适合需要高效处理LLM输出的开发者，尤其是构建AI应用、数据管道或自动化工具的工程师。技术亮点包括：基于Pydantic的类型安全与自动验证、无缝支持所有主流LLM提供商（OpenAI、Anthropic、Google、Ollama等），以及极简安装（`pip install instructor`）。用它处理文本提取任务，能显著减少代码量和出错率，让开发更专注在核心逻辑上。","# Instructor: Structured Outputs for LLMs\n\nGet reliable JSON from any LLM. 
Built on Pydantic for validation, type safety, and IDE support.\n\n```python\nimport instructor\nfrom pydantic import BaseModel\n\n\n# Define what you want\nclass User(BaseModel):\n    name: str\n    age: int\n\n\n# Extract it from natural language\nclient = instructor.from_provider(\"openai\u002Fgpt-4o-mini\")\nuser = client.chat.completions.create(\n    response_model=User,\n    messages=[{\"role\": \"user\", \"content\": \"John is 25 years old\"}],\n)\n\nprint(user)  # User(name='John', age=25)\n```\n\n**That's it.** No JSON parsing, no error handling, no retries. Just define a model and get structured data.\n\n[![PyPI](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Finstructor?style=flat-square)](https:\u002F\u002Fpypi.org\u002Fproject\u002Finstructor\u002F)\n[![Downloads](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fdm\u002Finstructor?style=flat-square)](https:\u002F\u002Fpypi.org\u002Fproject\u002Finstructor\u002F)\n[![GitHub Stars](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Finstructor-ai\u002Finstructor?style=flat-square)](https:\u002F\u002Fgithub.com\u002Finstructor-ai\u002Finstructor)\n[![Discord](https:\u002F\u002Fimg.shields.io\u002Fdiscord\u002F1192334452110659664?style=flat-square)](https:\u002F\u002Fdiscord.gg\u002FbD9YE9JArw)\n[![Twitter](https:\u002F\u002Fimg.shields.io\u002Ftwitter\u002Ffollow\u002Fjxnlco?style=flat-square)](https:\u002F\u002Ftwitter.com\u002Fjxnlco)\n\n> **Use Instructor for fast extraction, reach for PydanticAI when you need agents.** Instructor keeps schema-first flows simple and cheap. If your app needs richer agent runs, built-in observability, or shareable traces, try [PydanticAI](https:\u002F\u002Fai.pydantic.dev\u002F). PydanticAI is the official agent runtime from the Pydantic team, adding typed tools, replayable datasets, evals, and production dashboards while using the same Pydantic models. 
Dive into the [PydanticAI docs](https:\u002F\u002Fai.pydantic.dev\u002F) to see how it extends Instructor-style workflows.\n\n## Why Instructor?\n\nGetting structured data from LLMs is hard. You need to:\n\n1. Write complex JSON schemas\n2. Handle validation errors  \n3. Retry failed extractions\n4. Parse unstructured responses\n5. Deal with different provider APIs\n\n**Instructor handles all of this with one simple interface:**\n\n\u003Ctable>\n\u003Ctr>\n\u003Ctd>\u003Cb>Without Instructor\u003C\u002Fb>\u003C\u002Ftd>\n\u003Ctd>\u003Cb>With Instructor\u003C\u002Fb>\u003C\u002Ftd>\n\u003C\u002Ftr>\n\u003Ctr>\n\u003Ctd>\n\n```python\nresponse = openai.chat.completions.create(\n    model=\"gpt-4\",\n    messages=[{\"role\": \"user\", \"content\": \"...\"}],\n    tools=[\n        {\n            \"type\": \"function\",\n            \"function\": {\n                \"name\": \"extract_user\",\n                \"parameters\": {\n                    \"type\": \"object\",\n                    \"properties\": {\n                        \"name\": {\"type\": \"string\"},\n                        \"age\": {\"type\": \"integer\"},\n                    },\n                },\n            },\n        }\n    ],\n)\n\n# Parse response\ntool_call = response.choices[0].message.tool_calls[0]\nuser_data = json.loads(tool_call.function.arguments)\n\n# Validate manually\nif \"name\" not in user_data:\n    # Handle error...\n    pass\n```\n\n\u003C\u002Ftd>\n\u003Ctd>\n\n```python\nclient = instructor.from_provider(\"openai\u002Fgpt-4\")\n\nuser = client.chat.completions.create(\n    response_model=User,\n    messages=[{\"role\": \"user\", \"content\": \"...\"}],\n)\n\n# That's it! 
user is validated and typed\n```\n\n\u003C\u002Ftd>\n\u003C\u002Ftr>\n\u003C\u002Ftable>\n\n## Install in seconds\n\n```bash\npip install instructor\n```\n\nOr with your package manager:\n```bash\nuv add instructor\npoetry add instructor\n```\n\n## Works with every major provider\n\nUse the same code with any LLM provider:\n\n```python\n# OpenAI\nclient = instructor.from_provider(\"openai\u002Fgpt-4o\")\n\n# Anthropic\nclient = instructor.from_provider(\"anthropic\u002Fclaude-3-5-sonnet\")\n\n# Google\nclient = instructor.from_provider(\"google\u002Fgemini-pro\")\n\n# Ollama (local)\nclient = instructor.from_provider(\"ollama\u002Fllama3.2\")\n\n# With API keys directly (no environment variables needed)\nclient = instructor.from_provider(\"openai\u002Fgpt-4o\", api_key=\"sk-...\")\nclient = instructor.from_provider(\"anthropic\u002Fclaude-3-5-sonnet\", api_key=\"sk-ant-...\")\nclient = instructor.from_provider(\"groq\u002Fllama-3.1-8b-instant\", api_key=\"gsk_...\")\n\n# All use the same API!\nuser = client.chat.completions.create(\n    response_model=User,\n    messages=[{\"role\": \"user\", \"content\": \"...\"}],\n)\n```\n\n## Production-ready features\n\n### Automatic retries\n\nFailed validations are automatically retried with the error message:\n\n```python\nfrom pydantic import BaseModel, field_validator\n\n\nclass User(BaseModel):\n    name: str\n    age: int\n\n    @field_validator('age')\n    def validate_age(cls, v):\n        if v \u003C 0:\n            raise ValueError('Age must be positive')\n        return v\n\n\n# Instructor automatically retries when validation fails\nuser = client.chat.completions.create(\n    response_model=User,\n    messages=[{\"role\": \"user\", \"content\": \"...\"}],\n    max_retries=3,\n)\n```\n\n### Streaming support\n\nStream partial objects as they're generated:\n\n```python\nfrom instructor import Partial\n\nfor partial_user in client.chat.completions.create(\n    response_model=Partial[User],\n    messages=[{\"role\": 
\"user\", \"content\": \"...\"}],\n    stream=True,\n):\n    print(partial_user)\n    # User(name=None, age=None)\n    # User(name=\"John\", age=None)\n    # User(name=\"John\", age=25)\n```\n\n### Nested objects\n\nExtract complex, nested data structures:\n\n```python\nfrom typing import List\n\n\nclass Address(BaseModel):\n    street: str\n    city: str\n    country: str\n\n\nclass User(BaseModel):\n    name: str\n    age: int\n    addresses: List[Address]\n\n\n# Instructor handles nested objects automatically\nuser = client.chat.completions.create(\n    response_model=User,\n    messages=[{\"role\": \"user\", \"content\": \"...\"}],\n)\n```\n\n## Used in production by\n\nTrusted by over 100,000 developers and companies building AI applications:\n\n- **3M+ monthly downloads**\n- **10K+ GitHub stars**  \n- **1000+ community contributors**\n\nCompanies using Instructor include teams at OpenAI, Google, Microsoft, AWS, and many YC startups.\n\n## Get started\n\n### Basic extraction\n\nExtract structured data from any text:\n\n```python\nfrom pydantic import BaseModel\nimport instructor\n\nclient = instructor.from_provider(\"openai\u002Fgpt-4o-mini\")\n\n\nclass Product(BaseModel):\n    name: str\n    price: float\n    in_stock: bool\n\n\nproduct = client.chat.completions.create(\n    response_model=Product,\n    messages=[{\"role\": \"user\", \"content\": \"iPhone 15 Pro, $999, available now\"}],\n)\n\nprint(product)\n# Product(name='iPhone 15 Pro', price=999.0, in_stock=True)\n```\n\n### Multiple languages\n\nInstructor's simple API is available in many languages:\n\n- [Python](https:\u002F\u002Fpython.useinstructor.com) - The original\n- [TypeScript](https:\u002F\u002Fjs.useinstructor.com) - Full TypeScript support\n- [Ruby](https:\u002F\u002Fruby.useinstructor.com) - Ruby implementation  \n- [Go](https:\u002F\u002Fgo.useinstructor.com) - Go implementation\n- [Elixir](https:\u002F\u002Fhex.pm\u002Fpackages\u002Finstructor) - Elixir implementation\n- 
[Rust](https:\u002F\u002Frust.useinstructor.com) - Rust implementation\n\n### Learn more\n\n- [Documentation](https:\u002F\u002Fpython.useinstructor.com) - Comprehensive guides\n- [Examples](https:\u002F\u002Fpython.useinstructor.com\u002Fexamples\u002F) - Copy-paste recipes  \n- [Blog](https:\u002F\u002Fpython.useinstructor.com\u002Fblog\u002F) - Tutorials and best practices\n- [Discord](https:\u002F\u002Fdiscord.gg\u002FbD9YE9JArw) - Get help from the community\n\n## Why use Instructor over alternatives?\n\n**vs Raw JSON mode**: Instructor provides automatic validation, retries, streaming, and nested object support. No manual schema writing.\n\n**vs LangChain\u002FLlamaIndex**: Instructor is focused on one thing - structured extraction. It's lighter, faster, and easier to debug.\n\n**vs Custom solutions**: Battle-tested by thousands of developers. Handles edge cases you haven't thought of yet.\n\n## Contributing\n\nWe welcome contributions! Check out our [good first issues](https:\u002F\u002Fgithub.com\u002Finstructor-ai\u002Finstructor\u002Flabels\u002Fgood%20first%20issue) to get started.\n\n## License\n\nMIT License - see [LICENSE](https:\u002F\u002Fgithub.com\u002Finstructor-ai\u002Finstructor\u002Fblob\u002Fmain\u002FLICENSE) for details.\n\n---\n\n\u003Cp align=\"center\">\nBuilt by the Instructor community. 
Special thanks to \u003Ca href=\"https:\u002F\u002Ftwitter.com\u002Fjxnlco\">Jason Liu\u003C\u002Fa> and all \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Finstructor-ai\u002Finstructor\u002Fgraphs\u002Fcontributors\">contributors\u003C\u002Fa>.\n\u003C\u002Fp>","# Instructor：用于大语言模型的结构化输出\n\n从任何大语言模型中获取可靠的 JSON 数据。基于 Pydantic 构建，提供验证、类型安全和 IDE 支持。\n\n```python\nimport instructor\nfrom pydantic import BaseModel\n\n\n# 定义你想要的数据结构\nclass User(BaseModel):\n    name: str\n    age: int\n\n\n# 从自然语言中提取数据\nclient = instructor.from_provider(\"openai\u002Fgpt-4o-mini\")\nuser = client.chat.completions.create(\n    response_model=User,\n    messages=[{\"role\": \"user\", \"content\": \"John 是 25 岁\"}],\n)\n\nprint(user)  # User(name='John', age=25)\n```\n\n**就这么简单。** 无需 JSON 解析、错误处理或重试。只需定义一个模型，即可获得结构化数据。\n\n[![PyPI](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Finstructor?style=flat-square)](https:\u002F\u002Fpypi.org\u002Fproject\u002Finstructor\u002F)\n[![下载量](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fdm\u002Finstructor?style=flat-square)](https:\u002F\u002Fpypi.org\u002Fproject\u002Finstructor\u002F)\n[![GitHub 星标](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Finstructor-ai\u002Finstructor?style=flat-square)](https:\u002F\u002Fgithub.com\u002Finstructor-ai\u002Finstructor)\n[![Discord](https:\u002F\u002Fimg.shields.io\u002Fdiscord\u002F1192334452110659664?style=flat-square)](https:\u002F\u002Fdiscord.gg\u002FbD9YE9JArw)\n[![Twitter](https:\u002F\u002Fimg.shields.io\u002Ftwitter\u002Ffollow\u002Fjxnlco?style=flat-square)](https:\u002F\u002Ftwitter.com\u002Fjxnlco)\n\n> **需要快速提取时使用 Instructor，而当您需要智能体时则选择 PydanticAI。** Instructor 让以模式为中心的工作流既简单又经济。如果您的应用需要更丰富的智能体运行、内置可观性或可共享的追踪功能，请尝试 [PydanticAI](https:\u002F\u002Fai.pydantic.dev\u002F)。PydanticAI 是 Pydantic 团队推出的官方智能体运行时，在使用相同 Pydantic 模型的基础上，增加了类型化的工具、可回放的数据集、评估以及生产级仪表盘。请参阅 [PydanticAI 文档](https:\u002F\u002Fai.pydantic.dev\u002F)，了解它如何扩展 Instructor 式的工作流程。\n\n## 为什么选择 
Instructor？\n\n从大语言模型中获取结构化数据并不容易。您通常需要：\n\n1. 编写复杂的 JSON 模式\n2. 处理验证错误\n3. 重试失败的提取操作\n4. 解析非结构化的响应\n5. 应对不同提供商的 API 差异\n\n**而 Instructor 只需一个简单的接口就能解决所有这些问题：**\n\n\u003Ctable>\n\u003Ctr>\n\u003Ctd>\u003Cb>不使用 Instructor\u003C\u002Fb>\u003C\u002Ftd>\n\u003Ctd>\u003Cb>使用 Instructor\u003C\u002Fb>\u003C\u002Ftd>\n\u003C\u002Ftr>\n\u003Ctr>\n\u003Ctd>\n\n```python\nresponse = openai.chat.completions.create(\n    model=\"gpt-4\",\n    messages=[{\"role\": \"user\", \"content\": \"...\"}],\n    tools=[\n        {\n            \"type\": \"function\",\n            \"function\": {\n                \"name\": \"extract_user\",\n                \"parameters\": {\n                    \"type\": \"object\",\n                    \"properties\": {\n                        \"name\": {\"type\": \"string\"},\n                        \"age\": {\"type\": \"integer\"},\n                    },\n                },\n            },\n        }\n    ],\n)\n\n# 解析响应\ntool_call = response.choices[0].message.tool_calls[0]\nuser_data = json.loads(tool_call.function.arguments)\n\n# 手动验证\nif \"name\" not in user_data:\n    # 处理错误...\n    pass\n```\n\n\u003C\u002Ftd>\n\u003Ctd>\n\n```python\nclient = instructor.from_provider(\"openai\u002Fgpt-4\")\n\nuser = client.chat.completions.create(\n    response_model=User,\n    messages=[{\"role\": \"user\", \"content\": \"...\"}],\n)\n\n# 就这样！user 已经过验证并具有明确的类型\n```\n\n\u003C\u002Ftd>\n\u003C\u002Ftr>\n\u003C\u002Ftable>\n\n## 几秒钟内安装\n\n```bash\npip install instructor\n```\n\n或者使用您的包管理器：\n```bash\nuv add instructor\npoetry add instructor\n```\n\n## 兼容各大主流提供商\n\n您可以使用相同的代码与任何大语言模型提供商合作：\n\n```python\n# OpenAI\nclient = instructor.from_provider(\"openai\u002Fgpt-4o\")\n\n# Anthropic\nclient = instructor.from_provider(\"anthropic\u002Fclaude-3-5-sonnet\")\n\n# Google\nclient = instructor.from_provider(\"google\u002Fgemini-pro\")\n\n# Ollama（本地）\nclient = instructor.from_provider(\"ollama\u002Fllama3.2\")\n\n# 直接使用 API 密钥（无需环境变量）\nclient = 
instructor.from_provider(\"openai\u002Fgpt-4o\", api_key=\"sk-...\")\nclient = instructor.from_provider(\"anthropic\u002Fclaude-3-5-sonnet\", api_key=\"sk-ant-...\")\nclient = instructor.from_provider(\"groq\u002Fllama-3.1-8b-instant\", api_key=\"gsk_...\")\n\n# 使用方式完全一致！\nuser = client.chat.completions.create(\n    response_model=User,\n    messages=[{\"role\": \"user\", \"content\": \"...\"}],\n)\n```\n\n## 生产就绪的功能\n\n### 自动重试\n\n当验证失败时，Instructor 会自动根据错误信息进行重试：\n\n```python\nfrom pydantic import BaseModel, field_validator\n\n\nclass User(BaseModel):\n    name: str\n    age: int\n\n    @field_validator('age')\n    def validate_age(cls, v):\n        if v \u003C 0:\n            raise ValueError('年龄必须是正数')\n        return v\n\n\n# 当验证失败时，Instructor 会自动重试\nuser = client.chat.completions.create(\n    response_model=User,\n    messages=[{\"role\": \"user\", \"content\": \"...\"}],\n    max_retries=3,\n)\n```\n\n### 流式支持\n\n在部分对象生成时即可进行流式输出：\n\n```python\nfrom instructor import Partial\n\nfor partial_user in client.chat.completions.create(\n    response_model=Partial[User],\n    messages=[{\"role\": \"user\", \"content\": \"...\"}],\n    stream=True,\n):\n    print(partial_user)\n    # User(name=None, age=None)\n    # User(name=\"John\", age=None)\n    # User(name=\"John\", age=25)\n```\n\n### 嵌套对象\n\n提取复杂的嵌套数据结构：\n\n```python\nfrom typing import List\n\n\nclass Address(BaseModel):\n    street: str\n    city: str\n    country: str\n\n\nclass User(BaseModel):\n    name: str\n    age: int\n    addresses: List[Address]\n\n\n# Instructor 会自动处理嵌套对象\nuser = client.chat.completions.create(\n    response_model=User,\n    messages=[{\"role\": \"user\", \"content\": \"...\"}],\n)\n```\n\n## 已被众多企业采用\n\n受到超过 10 万名开发者和构建 AI 应用的公司的信赖：\n\n- **每月下载量超过 300 万次**\n- **GitHub 星标超过 1 万个**\n- **社区贡献者超过 1000 名**\n\n使用 Instructor 的公司包括 OpenAI、Google、Microsoft、AWS 以及许多 YC 初创公司团队。\n\n## 开始使用\n\n### 基本提取\n\n从任意文本中提取结构化数据：\n\n```python\nfrom pydantic import BaseModel\nimport instructor\n\nclient 
= instructor.from_provider(\"openai\u002Fgpt-4o-mini\")\n\n\nclass Product(BaseModel):\n    name: str\n    price: float\n    in_stock: bool\n\n\nproduct = client.chat.completions.create(\n    response_model=Product,\n    messages=[{\"role\": \"user\", \"content\": \"iPhone 15 Pro，999 美元，现在有货\"}],\n)\n\nprint(product)\n# Product(name='iPhone 15 Pro', price=999.0, in_stock=True)\n```\n\n### 多种语言支持\n\nInstructor 的简洁 API 提供多种语言版本：\n\n- [Python](https:\u002F\u002Fpython.useinstructor.com) - 原生版本\n- [TypeScript](https:\u002F\u002Fjs.useinstructor.com) - 完整的 TypeScript 支持\n- [Ruby](https:\u002F\u002Fruby.useinstructor.com) - Ruby 实现\n- [Go](https:\u002F\u002Fgo.useinstructor.com) - Go 实现\n- [Elixir](https:\u002F\u002Fhex.pm\u002Fpackages\u002Finstructor) - Elixir 实现\n- [Rust](https:\u002F\u002Frust.useinstructor.com) - Rust 实现\n\n### 了解更多\n\n- [文档](https:\u002F\u002Fpython.useinstructor.com) - 全面指南\n- [示例](https:\u002F\u002Fpython.useinstructor.com\u002Fexamples\u002F) - 可直接复制粘贴的代码片段\n- [博客](https:\u002F\u002Fpython.useinstructor.com\u002Fblog\u002F) - 教程和最佳实践\n- [Discord](https:\u002F\u002Fdiscord.gg\u002FbD9YE9JArw) - 从社区获取帮助\n\n## 为什么选择 Instructor 而不是其他替代方案？\n\n**与原始 JSON 模式相比**：Instructor 提供自动验证、重试机制、流式处理以及嵌套对象支持。无需手动编写 Schema。\n\n**与 LangChain\u002FLlamaIndex 相比**：Instructor 专注于一件事——结构化数据提取。它更轻量、更快，且更易于调试。\n\n**与自定义解决方案相比**：经过数千名开发者的实战检验，能够处理你尚未想到的边缘情况。\n\n## 参与贡献\n\n我们欢迎各种形式的贡献！请查看我们的 [good first issue](https:\u002F\u002Fgithub.com\u002Finstructor-ai\u002Finstructor\u002Flabels\u002Fgood%20first%20issue)，开始你的贡献之旅。\n\n## 许可证\n\nMIT 许可证——详情请参阅 [LICENSE](https:\u002F\u002Fgithub.com\u002Finstructor-ai\u002Finstructor\u002Fblob\u002Fmain\u002FLICENSE)。\n\n---\n\n\u003Cp align=\"center\">\n由 Instructor 社区构建。特别感谢 \u003Ca href=\"https:\u002F\u002Ftwitter.com\u002Fjxnlco\">Jason Liu\u003C\u002Fa> 以及所有 \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Finstructor-ai\u002Finstructor\u002Fgraphs\u002Fcontributors\">贡献者\u003C\u002Fa>。\n\u003C\u002Fp>","# Instructor 快速上手指南\n\n## 
环境准备\n- Python 3.7+（推荐 3.8+）\n- 无需额外依赖（`instructor` 会自动安装 `pydantic`）\n\n## 安装步骤\n推荐使用国内镜像加速安装（清华大学源）：\n```bash\npip install instructor -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n其他安装方式：\n```bash\nuv add instructor\npoetry add instructor\n```\n\n## 基本使用\n1. 定义 Pydantic 模型\n2. 创建客户端\n3. 调用 API 获取结构化数据\n\n```python\nimport instructor\nfrom pydantic import BaseModel\n\nclass User(BaseModel):\n    name: str\n    age: int\n\nclient = instructor.from_provider(\"openai\u002Fgpt-4o-mini\")\nuser = client.chat.completions.create(\n    response_model=User,\n    messages=[{\"role\": \"user\", \"content\": \"John is 25 years old\"}],\n)\n\nprint(user)  # User(name='John', age=25)\n```\n\n> 说明：无需 JSON 解析、错误处理或重试，直接获取类型安全的结构化数据。","某电商平台客服系统需从用户自然语言消息中自动提取订单信息（如订单号、商品名称、数量），以快速更新库存和发货状态。\n\n### 没有 instructor 时\n- 为每个提取任务手动编写冗长的JSON Schema，易因格式错误导致解析失败\n- LLM返回的原始字符串需额外解析和验证，常因\"非JSON\"响应引发程序崩溃\n- 代码中充斥try-except块处理异常，逻辑杂乱且维护成本高\n- 切换LLM提供商时需重写API调用逻辑，团队协作效率低下\n\n### 使用 instructor 后\n- 仅需定义Pydantic模型（如OrderModel），直接指定所需字段结构\n- 调用instructor后自动返回验证通过的对象，无需手动解析JSON\n- 内置错误处理机制确保数据可靠性，崩溃率下降90%\n- 代码行数减少60%，支持无缝切换OpenAI\u002FAnthropic等提供商\n\ninstructor将LLM结构化输出的复杂流程简化为一行代码，让数据提取真正可靠高效。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002F567-labs_instructor_a0557d90.png","567-labs","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002F567-labs_08acc0e9.png",null,"https:\u002F\u002Fgithub.com\u002F567-labs",[80,84],{"name":81,"color":82,"percentage":83},"Python","#3572A5",100,{"name":85,"color":86,"percentage":87},"Shell","#89e051",0,12699,1007,"2026-04-05T23:19:28","MIT",1,"","未说明",{"notes":94,"python":94,"dependencies":96},[],[13],[99,100,101,102,103,104],"openai","python","pydantic-v2","openai-functions","validation","openai-function-calli",5,"2026-03-27T02:49:30.150509","2026-04-06T08:18:28.012020",[109,114],{"id":110,"question_zh":111,"answer_zh":112,"source_url":113},5568,"使用 Ollama 等自托管模型时，Instructor 响应缓慢或超时如何解决？","Instructor 不支持 Ollama 
的 JSON 模式（JSON mode），仅依赖 JSON schema 模式进行结构化输出。Ollama 未实现 JSON schema 模式（仅支持 JSON mode），导致 Instructor 无法直接利用其 JSON 输出功能。建议使用支持 JSON schema 的模型提供商（如 OpenAI），或通过提示词方式实现结构化输出（例如在提示中明确要求 JSON 格式）。","https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fissues\u002F445",{"id":115,"question_zh":116,"answer_zh":117,"source_url":118},5569,"使用 create_partial 流式处理时，字段为何始终为 None 直到最后一刻？","此问题已在 Instructor 最新版本中修复。请确保升级 Instructor 到最新版（例如 v2.0.0+），更新后即可在流式处理中实时填充字段。无需额外配置，直接使用 create_partial 方法即可正常工作。","https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fissues\u002F665",[120,125,130,135,140,145,150,155,160,165,170,175,180,185,190,195,200,205,210,215],{"id":121,"version":122,"summary_zh":123,"released_at":124},114854,"v1.14.0","## What's Changed\r\n* Audit and standardize exception handling in instructor library by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1897\r\n* Standardize provider imports in documentation by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1896\r\n* Fix the issue by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1914\r\n* Standardize provider factory methods in codebase by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1898\r\n* Update image base URL in ipnb tutorials by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1922\r\n* docs: comprehensive documentation audit and SEO optimization by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1944\r\n* Update documentation for responses API mode by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1946\r\n* Doc \u002F Removed model reference in client.create of extraction example. 
by @grokthetech-netizen in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1951\r\n* fix(auto_client): stop masking runtime ImportErrors in from_provider by @yurekami in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1975\r\n* fix: OpenAI provider in from_provider ignores base_url kwarg by @gardner in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1971\r\n* fix(genai): allow Union types for Google GenAI structured outputs by @majiayu000 in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1973\r\n* fix(genai): extract thinking_config and other fields from user-provided config object by @majiayu000 in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1974\r\n* fix(genai): extract thinking_config from user-provided config object by @majiayu000 in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1972\r\n* Fix typo in reask_validation.md by @mak2508 in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1956\r\n* Feature\u002Fbedrock document support by @lucagobbi in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1936\r\n* chore(typing): replace pyright with ty by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1978\r\n* Fix Cohere streaming and xAI tools validation by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1983\r\n\r\n## New Contributors\r\n* @grokthetech-netizen made their first contribution in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1951\r\n* @yurekami made their first contribution in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1975\r\n* @gardner made their first contribution in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1971\r\n* @majiayu000 made their first contribution in 
https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1973\r\n* @mak2508 made their first contribution in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1956\r\n* @lucagobbi made their first contribution in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1936\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fcompare\u002Fv1.13.0...v1.14.0","2026-01-08T16:06:01",{"id":126,"version":127,"summary_zh":128,"released_at":129},114862,"1.9.1","## What's Changed\r\n* feat: add Azure OpenAI support to auto_client.py by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1633\r\n* fix: expose exception classes in public API by @ivanleomk in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1613\r\n* Update TaskAction method description for clarity on task creation and… by @eaedk in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1637\r\n* Fix SambaNova capitalization by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1651\r\n* refactor: simplify safety settings configuration for Gemini API by @DaveOkpare in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1659\r\n* Json schema fix by @Canttuchdiz in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1657\r\n\r\n## New Contributors\r\n* @eaedk made their first contribution in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1637\r\n* @Canttuchdiz made their first contribution in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1657\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fcompare\u002F1.9.0...1.9.1","2025-07-07T18:51:42",{"id":131,"version":132,"summary_zh":133,"released_at":134},114852,"v1.14.2","## Fixed\n- Fixed model validators crashing during partial 
streaming by skipping them until streaming completes (#1994)\n- Fixed infinite recursion with self-referential models in Partial (e.g., TreeNode with children: List[\"TreeNode\"]) (#1997)\n\n## Added\n- Added `PartialLiteralMixin` documentation for handling Literal\u002FEnum types during streaming (#1994)\n- Added final validation against original model after streaming completes to enforce required fields (#1994)\n- Added tests for recursive Partial models (#1997)\n\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fcompare\u002Fv1.14.1...v1.14.2","2026-01-13T21:58:38",{"id":136,"version":137,"summary_zh":138,"released_at":139},114853,"v1.14.1","## What's Changed\r\n* fix(genai): Support cached_content for Google context caching by @b-antosik-marcura in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1987\r\n\r\n## New Contributors\r\n* @b-antosik-marcura made their first contribution in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1987\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fcompare\u002Fv1.14.0...v1.14.1","2026-01-08T16:11:07",{"id":141,"version":142,"summary_zh":143,"released_at":144},114847,"v1.15.1","## Security\n- **Bedrock**: Block remote HTTP(S) image URL fetching in `_openai_image_part_to_bedrock` — only `data:` URLs accepted, preventing SSRF via user-controlled image URLs\n- **Bedrock\u002FPDF**: Block remote URL and local file fetching in `PDF.to_bedrock` — only base64 data or `s3:\u002F\u002F` sources supported, preventing SSRF and local file disclosure\n\n## Added\n- **Hooks**: `completion:error` and `completion:last_attempt` handlers now receive `attempt_number`, `max_attempts`, and `is_last_attempt` as keyword arguments. 
Old-style handlers remain fully backward-compatible.\n- **Anthropic**: `from_provider(\"anthropic\u002F...\")` now sets a `User-Agent: instructor\u002F\u003Cversion>` header on the Anthropic client\n\n## Fixed\n- **Anthropic usage**: Initialize usage correctly for `ANTHROPIC_REASONING_TOOLS` and `ANTHROPIC_PARALLEL_TOOLS` modes\n- **OpenRouter**: Use `reask_md_json` for `OPENROUTER_STRUCTURED_OUTPUTS` retries instead of `reask_default` (tool-call format)\n- **Templating**: Return `kwargs` unchanged instead of `None` in `handle_templating` when message list is empty or unrecognized\n- **`from_openai`**: Allow `Mode.JSON_SCHEMA` for the OpenAI provider\n- **Bedrock**: Pass through `cachePoint` dicts in message content unchanged (regression since v1.13.0)\n- **Bedrock**: Allow `Mode.MD_JSON` in `from_bedrock`\n- **Parallel tools**: `ParallelBase` generator consumed into `ListResponse` in both sync and async paths, fixing `AttributeError`\n\n## Dependencies\n- Bump anthropic 0.76.0 → 0.88.0\n- Bump litellm upper bound to ≤1.83.0\n- Bump aiohttp 3.13.3 → 3.13.5","2026-04-03T01:50:59",{"id":146,"version":147,"summary_zh":148,"released_at":149},114848,"v1.15.0","## What's Changed\n\n- **Validation**: Fix `Validator` to require `is_valid` field (#2230)\n- **Gemini**: Handle `GEMINI_TOOLS` in async streaming paths (#2135)\n- **CLI**: Add `--full-id` flag to show complete batch IDs (#2068)\n- **Providers**: Remove opinionated system prompt from JSON mode (#2069)\n- **Mode**: Add missing `GENAI` and `Responses` modes to `tool_modes()` (#2072)\n- **xAI**: Make xai-sdk optional at runtime (#2043, #2094)\n- **Docs**: Add canonical OpenAI starter example (#2117)\n- **Deps**: Bump dependencies (#2042)\n\n**Full diff**: https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fcompare\u002Fv1.14.5...v1.15.0","2026-04-02T22:59:55",{"id":151,"version":152,"summary_zh":153,"released_at":154},114849,"v1.14.5","## Changes\n\n- fix(metadata): populate author field for PyPI 
stats\n\nSeparate author names from emails so hatchling populates the Author metadata field correctly. pypistats.org reads this field and was showing \"None\" because the names were only in author_email.","2026-01-29T14:18:25",{"id":156,"version":157,"summary_zh":158,"released_at":159},114850,"v1.14.4","## What's Changed\r\n* refactor(json_tracker): simplify using sibling heuristic by @thomasnormal in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F2000\r\n* Responses API validation error by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F2002\r\n* GenAI config labels loss by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F2005\r\n* GenAI SafetySettings image content by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F2007\r\n* List object crashes fix by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F2011\r\n* New release preparation by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F2013\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fcompare\u002Fv1.14.3...v1.14.4","2026-01-16T22:43:12",{"id":161,"version":162,"summary_zh":163,"released_at":164},114851,"v1.14.3","## Added\n- Completeness-based validation for Partial streaming - only validates JSON structures that are structurally complete (#1999)\n- New `JsonCompleteness` class in `instructor\u002Fdsl\u002Fjson_tracker.py` for tracking JSON completeness during streaming (#1999)\n\n## Fixed\n- Fixed Stream objects crashing reask handlers when using streaming with `max_retries > 1` (#1992)\n- Field constraints (`min_length`, `max_length`, `ge`, `le`, etc.) 
now work correctly during streaming (#1999)\n\n## Deprecated\n- `PartialLiteralMixin` is now deprecated - completeness-based validation handles Literal\u002FEnum types automatically (#1999)\n\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fcompare\u002Fv1.14.2...v1.14.3","2026-01-13T22:05:53",{"id":166,"version":167,"summary_zh":168,"released_at":169},114855,"v1.13.0","## What's Changed\r\n* fix: Gemini HARM_CATEGORY_JAILBREAK and Anthropic tool_result blocks by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1867\r\n* fix(genai): fix Gemini streaming by @DaveOkpare in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1864\r\n* fix(processing): ensure JSON decode errors are caught by retry; add regression tests for JSON mode (#1856) by @devin-ai-integration[bot] in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1857\r\n* fix: resolve type checking diagnostics by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1854\r\n* fix: update openai dependency version constraints in pyproject.toml and uv.lock to support  v2 by @vishnu-itachi in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1858\r\n* feat: add py.typed marker for type checking by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1868\r\n* feat(Bedrock): add image support to Bedrock by @geekbass in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1874\r\n* chore(deps): bump the poetry group across 1 directory with 162 updates by @dependabot[bot] in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1859\r\n* Fix\u002Fci uv migration by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1886\r\n\r\n## New Contributors\r\n* @vishnu-itachi made their first contribution in 
https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1858\r\n* @geekbass made their first contribution in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1874\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fcompare\u002Fv1.12.0...v1.13.0","2025-11-06T04:19:00",{"id":171,"version":172,"summary_zh":173,"released_at":174},114856,"v1.12.0","## What's Changed\r\n* feat: add mkdocs-llmstxt plugin and llms.txt support by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1795\r\n* Restore multimodal import compatibility by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1797\r\n* feat(retry): add comprehensive tracking of all failed attempts and exceptions by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1802\r\n* feat(hooks): add hook combination and per-call hooks support by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1803\r\n* feat(retry): propagate failed attempts through reask handlers by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1804\r\n* fix(responses): generalize tool call parsing for reasoning models by @sapountzis in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1799\r\n* feat(xai): add streaming support for xAI provider by @jeongyoonm in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1758\r\n* fix(openai): reask functionality broken in JSON mode since v1.9.0 by @pnkvalavala in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1793\r\n* fix(openai): remove duplicate schema from messages in JSON_SCHEMA mode by @pnkvalavala in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1761\r\n* Handle Anthropic tool_use retries on ValidationError by @kelvin-tran in 
https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1810\r\n* Investigate instructor client import errors by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1818\r\n* fix: replace deprecated gpt-3.5-turbo-0613 with gpt-4o-mini by @sergiobayona in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1830\r\n* Update blog post link for LLM validation examples by @Mr-Ruben in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1824\r\n* Debug parse error hook not emitted by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1819\r\n* only use thinking_config in GenerateContentConfig by @jonbuffington in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1751\r\n* fix: Handle Gemini chunk.text ValueError when finish_reason=1 by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1809\r\n* docs: replace deprecated validation_context with context parameter by @devin-ai-integration[bot] in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1831\r\n* docs(validation): add context parameter examples and fix error output by @devin-ai-integration[bot] in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1833\r\n* also add pop thinking_config to handle_genai_tools by @oegedijk in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1834\r\n* update cohere text models. 
by @phlogisticfugu in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1840\r\n* fix(cohere): improve V2 API version detection and add documentation by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1844\r\n* doc(openrouter): use explicit async_client=False by @wongjiahau in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1847\r\n* Fix json parsing by @NicolasPllr1 in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1836\r\n* fix: Bedrock OpenAI models response parsing (reasoning before text) by @len-foss in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1860\r\n* fix: Python 3.13 compatibility and import path corrections by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1866\r\n\r\n## New Contributors\r\n* @sapountzis made their first contribution in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1799\r\n* @jeongyoonm made their first contribution in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1758\r\n* @pnkvalavala made their first contribution in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1793\r\n* @kelvin-tran made their first contribution in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1810\r\n* @sergiobayona made their first contribution in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1830\r\n* @Mr-Ruben made their first contribution in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1824\r\n* @jonbuffington made their first contribution in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1751\r\n* @phlogisticfugu made their first contribution in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1840\r\n* @wongjiahau made their first contribution in 
https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1847\r\n* @NicolasPllr1 made their first contribution in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1836\r\n* @len-foss made their first contribution in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1860\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fcompare\u002F1.11.2...v1.12.0","2025-10-27T18:47:29",{"id":176,"version":177,"summary_zh":178,"released_at":179},114857,"v1.11.3","## What's Changed\r\n* feat: add mkdocs-llmstxt plugin and llms.txt support by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1795\r\n* Restore multimodal import compatibility by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1797\r\n* feat(retry): add comprehensive tracking of all failed attempts and exceptions by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1802\r\n* feat(hooks): add hook combination and per-call hooks support by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1803\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fcompare\u002F1.11.2...v1.11.3","2025-09-09T15:43:58",{"id":181,"version":182,"summary_zh":183,"released_at":184},114858,"1.11.2","## What's Changed\r\n* feat: Add automated bi-weekly scheduled releases by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1787\r\n* feat: Enhanced Google Cloud Storage Support for Multimodal Classes by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1788\r\n* Fix GCS URI Support for PDF and Audio Classes by @DaveOkpare in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1763\r\n* fix(exceptions): restore backwards compatibility for instructor.exceptions imports by @jxnl in 
https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1789\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fcompare\u002Fv1.11.1...1.11.2","2025-08-27T22:20:15",{"id":186,"version":187,"summary_zh":188,"released_at":189},114859,"v1.11.0","## What's Changed\r\n* fix(auto_client): add support for litellm provider in from_provider by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1723\r\n* refactor(utils): complete provider-specific utility reorganization by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1722\r\n* refactor: move provider-specific message conversion to handlers by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1724\r\n* Update contributing docs for provider utilities by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1725\r\n* Add consistent docstrings to utils modules by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1726\r\n* feat: add xAI utils pattern following standard provider structure by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1728\r\n* fix: implement missing hooks (completion:error and completion:last_attempt) by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1729\r\n* Reorganize codebase from flat structure to modular architecture by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1730\r\n* refactor: remove backward compatibility modules by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1731\r\n* feat: Add comprehensive tests for XAI _raw_response functionality by @devin-ai-integration[bot] in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1735\r\n* feat(batch): add in-memory batching support and improve error handling by @jxnl 
in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1746\r\n* chore: update author joschka website by @joschkabraun in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1765\r\n* fix(docs): correct broken tutorials navigation link by @cz3k in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1768\r\n* feat(docs): Truefoundry AI Gateway integration with Instructor by @rishiraj-tf in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1767\r\n* Fix Pydantic v2 deprecation warnings by migrating from class Config to   ConfigDict by @anistark in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1782\r\n* Add OpenRouter provider support to auto_client routing by @devin-ai-integration[bot] in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1783\r\n\r\n## New Contributors\r\n* @cz3k made their first contribution in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1768\r\n* @rishiraj-tf made their first contribution in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1767\r\n* @anistark made their first contribution in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1782\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fcompare\u002F1.10.0...v1.11.0","2025-08-27T20:59:14",{"id":191,"version":192,"summary_zh":193,"released_at":194},114860,"1.10.0","## What's Changed\r\n* Update integrations to from_provider API by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1668\r\n* feat: Add native caching support with AutoCache and RedisCache adapters by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1674\r\n* feat: Enhance GitHub Actions workflow for testing by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1675\r\n* 
Deprecate google-generativeai in favor of google-genai by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1673\r\n* Fix batch request parsing by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1677\r\n* Enhance batch API with multi-provider support and improved CLI by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1678\r\n* split off dev dependencies by @hwong557 in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1685\r\n* Add Claude Code GitHub Workflow by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1688\r\n* fix(genai): handle response_model=None for GenAI modes by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1694\r\n* fix: correct is_simple_type logic for list types with BaseModel contents by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1698\r\n* fix(genai): add automatic Partial wrapping for streaming requests by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1695\r\n* fix(bedrock): add Bedrock-native format conversion to OpenAI format by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1696\r\n* fix(tests): improve prompt for UserExtract in gemini stream test by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1700\r\n* feat(bedrock): improve documentation and auto client support by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1686\r\n* chore(ci): run docs test monthly by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1699\r\n* Enhance logging and docs by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1702\r\n* chore: remove .vscode\u002Fsettings.json from tracking by @jxnl in 
https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1705\r\n* fix(genai): forward thinking_config parameter to Gemini models by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1704\r\n* Fix\u002Fdecimal support genai by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1712\r\n* docs: update provider syntax by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1713\r\n* feat(provider): add deepseek support by @NasonZ in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1715\r\n* feat(auto_client): add comprehensive api_key parameter support for all providers by @johnwlockwood in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1717\r\n* Add Anthropic parallel tool support by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1719\r\n\r\n## New Contributors\r\n* @hwong557 made their first contribution in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1685\r\n* @johnwlockwood made their first contribution in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1717\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fcompare\u002F1.9.2...1.10.0","2025-07-18T15:28:25",{"id":196,"version":197,"summary_zh":198,"released_at":199},114861,"1.9.2","## What's Changed\r\n* Fix docs build path by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1662\r\n* Revert \"refactor: simplify safety settings configuration for Gemini API\" by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1664\r\n* Skip LLM tests without API keys by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1665\r\n* Add xAI provider by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1661\r\n* 
Fix GenAI image harm categories by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1667\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fcompare\u002F1.9.1...1.9.2","2025-07-07T21:16:54",{"id":201,"version":202,"summary_zh":203,"released_at":204},114863,"1.9.0","## What's Changed\r\n* feat: Improve error handling with comprehensive exception hierarchy by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1549\r\n* Remove `enable_prompt_caching` from Anthropic integration since we ha… by @ivanleomk in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1562\r\n* Fix\u002Fdocs by @ivanleomk in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1561\r\n* lock by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1565\r\n* Fix\u002Fgemini config by @ivanleomk in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1563\r\n* feat(deps): allow rich version 14+ by @devin-ai-integration in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1566\r\n* chore(deps): bump the poetry group across 1 directory with 26 updates by @dependabot in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1569\r\n* chore(deps): bump anthropic from 0.52.0 to 0.52.1 in the poetry group by @dependabot in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1571\r\n* Standardize async parameter naming in VertexAI client by @devin-ai-integration in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1555\r\n* Add Claude Code GitHub Workflow by @ivanleomk in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1575\r\n* feat: update README to reflect 3M monthly downloads milestone by @ivanleomk in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1577\r\n* 
fix(deps): add dev and docs to project.optional-dependencies for uv compatibility by @devin-ai-integration in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1581\r\n* docs: add Gemini thought parts filtering explanation to GenAI integration by @devin-ai-integration in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1583\r\n* fix: filter out Gemini thought parts in genai tool parsing by @indigoviolet in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1578\r\n* Fix documentation for dynamic model creation example by @devin-ai-integration in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1567\r\n* chore(deps): bump the poetry group across 1 directory with 11 updates by @dependabot in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1595\r\n* feat: implementation of JSON mode for Writer provider by @yanomaly in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1559\r\n* feat(auto_client): add Ollama provider support by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1602\r\n* fix: respect timeout parameter in retry mechanism for Ollama compatibility by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1603\r\n* fix(reask): handle ThinkingBlock in reask_anthropic_json by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1604\r\n* feat(docs): Add cross-links to blog posts for better navigation by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1605\r\n* docs: improve clarity and consistency across documentation by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1606\r\n* feat: Enable Audio module to work with Windows by @ish-codes-magic in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1619\r\n* fix(deps): relax tenacity 
requirement to support google-genai 1.21.1 by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1625\r\n* fix: resolve pyright TypedDict key access error in dump_message by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1626\r\n* feat(docs): improve SEO for asyncio and tenacity documentation by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1620\r\n* Resolve dependency version conflicts by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1627\r\n* Feat\u002Fadd gemini optional support by @ivanleomk in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1618\r\n\r\n## New Contributors\r\n* @ish-codes-magic made their first contribution in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1619\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fcompare\u002F1.8.3...1.9.0","2025-06-21T05:52:49",{"id":206,"version":207,"summary_zh":208,"released_at":209},114864,"1.8.3","## What's Changed\r\n* docs: improve CLAUDE.md with better architecture description by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1525\r\n* fix(bedrock): minimal working example with from_bedrock client by @dogonthehorizon in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1528\r\n* docs(blog): fix code block formatting in blog post by @workwithpurwarkrishna in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1526\r\n* feat(bedrock): sort of add support for async bedrock client by @dogonthehorizon in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1530\r\n* fix(bedrock): handle default message format for converse endpoint by @dogonthehorizon in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1529\r\n* Add semantic validation documentation by 
@jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1541\r\n* Implementing support for responses by @ivanleomk in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1520\r\n* fix: remove failing test by @ivanleomk in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1544\r\n* fix: bump version by @ivanleomk in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1545\r\n\r\n## New Contributors\r\n* @dogonthehorizon made their first contribution in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1528\r\n* @workwithpurwarkrishna made their first contribution in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1526\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fcompare\u002F1.8.2...1.8.3","2025-05-22T16:43:33",{"id":211,"version":212,"summary_zh":213,"released_at":214},114865,"1.8.2","## What's Changed\r\n* fix: removed print statement by @ivanleomk in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1524\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fcompare\u002F1.8.1...1.8.2","2025-05-15T11:13:41",{"id":216,"version":217,"summary_zh":218,"released_at":219},114866,"1.8.1","## What's Changed\r\n* docs(blog): add Anthropic web search structured data blog post by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1515\r\n* fix: added support for calling streaming from the create method by @ivanleomk in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1502\r\n* Fix\u002Fmkdocs by @ivanleomk in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1517\r\n* docs(blog): announce unified provider interface (from_provider) by @jxnl in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1516\r\n* Fix\u002Fanthropic 
web search by @ivanleomk in https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fpull\u002F1519\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002F567-labs\u002Finstructor\u002Fcompare\u002F1.8.0...1.8.1","2025-05-09T02:45:05"]