[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-openai--openai-python":3,"tool-openai--openai-python":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要满足了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你的专属 AI 助手，OpenClaw 值得一试。",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上下文的开发伙伴。",141543,2,"2026-04-06T11:32:54",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及 Apple Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107888,"2026-04-06T11:32:50",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 
Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器阅读与处理而优化。",93400,"2026-04-06T19:52:38",[52,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":10,"last_commit_at":59,"category_tags":60,"status":17},4487,"LLMs-from-scratch","rasbt\u002FLLMs-from-scratch","LLMs-from-scratch 是一个基于 PyTorch 的开源教育项目，旨在引导用户从零开始一步步构建一个类似 ChatGPT 的大型语言模型（LLM）。它不仅是同名技术著作的官方代码库，更提供了一套完整的实践方案，涵盖模型开发、预训练及微调的全过程。\n\n该项目主要解决了大模型领域“黑盒化”的学习痛点。许多开发者虽能调用现成模型，却难以深入理解其内部架构与训练机制。通过亲手编写每一行核心代码，用户能够透彻掌握 Transformer 架构、注意力机制等关键原理，从而真正理解大模型是如何“思考”的。此外，项目还包含了加载大型预训练权重进行微调的代码，帮助用户将理论知识延伸至实际应用。\n\nLLMs-from-scratch 特别适合希望深入底层原理的 AI 开发者、研究人员以及计算机专业的学生。对于不满足于仅使用 API，而是渴望探究模型构建细节的技术人员而言，这是极佳的学习资源。其独特的技术亮点在于“循序渐进”的教学设计：将复杂的系统工程拆解为清晰的步骤，配合详细的图表与示例，让构建一个虽小但功能完备的大模型变得触手可及。无论你是想夯实理论基础，还是为未来研发更大规模的模型做准备，它都能提供可靠的起点。",90106,"2026-04-06T11:19:32",[35,15,13,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":76,"owner_email":76,"owner_twitter":76,"owner_website":77,"owner_url":78,"languages":79,"stars":95,"forks":96,"last_commit_at":97,"license":98,"difficulty_score":32,"env_os":99,"env_gpu":100,"env_ram":100,"env_deps":101,"category_tags":110,"github_topics":111,"view_count":32,"oss_zip_url":76,"oss_zip_packed_at":76,"status":17,"created_at":113,"updated_at":114,"faqs":115,"releases":144},4624,"openai\u002Fopenai-python","openai-python","The official Python library for the OpenAI API","openai-python 是 OpenAI 官方推出的 Python 客户端库，旨在让开发者能够便捷、高效地在 Python 应用中调用 OpenAI 的强大 API。它解决了手动构建 HTTP 请求、处理复杂参数及解析响应数据的繁琐问题，让集成大模型能力变得像调用普通函数一样简单。\n\n这款工具主要面向 Python 开发者、人工智能研究人员以及希望将智能对话、代码生成或图像理解功能嵌入自身软件的技术团队。无论是构建聊天机器人、开发智能助手，还是进行模型实验，openai-python 都能提供坚实的支持。\n\n其技术亮点在于提供了完整的类型定义，配合现代 IDE 可实现精准的代码自动补全与错误检查，大幅提升开发体验。库内同时内置了同步与异步两种客户端（基于 httpx），既能满足脚本快速运行需求，也能轻松应对高并发生产环境。此外，它全面支持文本生成、多轮对话以及最新的视觉识别功能，允许用户通过图片 URL 或 Base64 编码直接与模型进行“看图说话”式的交互。只需几行代码，配置好 API 密钥，即可开启智能化的应用开发之旅。","# OpenAI Python API library\n\n\u003C!-- prettier-ignore -->\n[![PyPI version](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Fopenai.svg?label=pypi%20(stable))](https:\u002F\u002Fpypi.org\u002Fproject\u002Fopenai\u002F)\n\nThe OpenAI Python library provides convenient access to the OpenAI REST API from any Python 3.9+\napplication. The library includes type definitions for all request params and response fields,\nand offers both synchronous and asynchronous clients powered by [httpx](https:\u002F\u002Fgithub.com\u002Fencode\u002Fhttpx).\n\nIt is generated from our [OpenAPI specification](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-openapi) with [Stainless](https:\u002F\u002Fstainlessapi.com\u002F).\n\n## Documentation\n\nThe REST API documentation can be found on [platform.openai.com](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference). The full API of this library can be found in [api.md](api.md).\n\n## Installation\n\n```sh\n# install from PyPI\npip install openai\n```\n\n## Usage\n\nThe full API of this library can be found in [api.md](api.md).\n\nThe primary API for interacting with OpenAI models is the [Responses API](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fresponses). 
You can generate text from the model with the code below.\n\n```python\nimport os\nfrom openai import OpenAI\n\nclient = OpenAI(\n    # This is the default and can be omitted\n    api_key=os.environ.get(\"OPENAI_API_KEY\"),\n)\n\nresponse = client.responses.create(\n    model=\"gpt-5.2\",\n    instructions=\"You are a coding assistant that talks like a pirate.\",\n    input=\"How do I check if a Python object is an instance of a class?\",\n)\n\nprint(response.output_text)\n```\n\nThe previous standard (supported indefinitely) for generating text is the [Chat Completions API](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fchat). You can use that API to generate text from the model with the code below.\n\n```python\nfrom openai import OpenAI\n\nclient = OpenAI()\n\ncompletion = client.chat.completions.create(\n    model=\"gpt-5.2\",\n    messages=[\n        {\"role\": \"developer\", \"content\": \"Talk like a pirate.\"},\n        {\n            \"role\": \"user\",\n            \"content\": \"How do I check if a Python object is an instance of a class?\",\n        },\n    ],\n)\n\nprint(completion.choices[0].message.content)\n```\n\nWhile you can provide an `api_key` keyword argument,\nwe recommend using [python-dotenv](https:\u002F\u002Fpypi.org\u002Fproject\u002Fpython-dotenv\u002F)\nto add `OPENAI_API_KEY=\"My API Key\"` to your `.env` file\nso that your API key is not stored in source control.\n[Get an API key here](https:\u002F\u002Fplatform.openai.com\u002Fsettings\u002Forganization\u002Fapi-keys).\n\n### Vision\n\nWith an image URL:\n\n```python\nprompt = \"What is in this image?\"\nimg_url = \"https:\u002F\u002Fupload.wikimedia.org\u002Fwikipedia\u002Fcommons\u002Fthumb\u002Fd\u002Fd5\u002F2023_06_08_Raccoon1.jpg\u002F1599px-2023_06_08_Raccoon1.jpg\"\n\nresponse = client.responses.create(\n    model=\"gpt-5.2\",\n    input=[\n        {\n            \"role\": \"user\",\n            \"content\": [\n                {\"type\": \"input_text\", \"text\": prompt},\n                {\"type\": \"input_image\", \"image_url\": f\"{img_url}\"},\n            ],\n        }\n    ],\n)\n```\n\nWith the image as a base64 encoded string:\n\n```python\nimport base64\nfrom openai import OpenAI\n\nclient = OpenAI()\n\nprompt = \"What is in this image?\"\nwith open(\"path\u002Fto\u002Fimage.png\", \"rb\") as image_file:\n    b64_image = base64.b64encode(image_file.read()).decode(\"utf-8\")\n\nresponse = client.responses.create(\n    model=\"gpt-5.2\",\n    input=[\n        {\n            \"role\": \"user\",\n            \"content\": [\n                {\"type\": \"input_text\", \"text\": prompt},\n                {\"type\": \"input_image\", \"image_url\": f\"data:image\u002Fpng;base64,{b64_image}\"},\n            ],\n        }\n    ],\n)\n```\n\n## Async usage\n\nSimply import `AsyncOpenAI` instead of `OpenAI` and use `await` with each API call:\n\n```python\nimport os\nimport asyncio\nfrom openai import AsyncOpenAI\n\nclient = AsyncOpenAI(\n    # This is the default and can be omitted\n    api_key=os.environ.get(\"OPENAI_API_KEY\"),\n)\n\n\nasync def main() -> None:\n    response = await client.responses.create(\n        model=\"gpt-5.2\", input=\"Explain disestablishmentarianism to a smart five year old.\"\n    )\n    print(response.output_text)\n\n\nasyncio.run(main())\n```\n\nFunctionality between the synchronous and asynchronous clients is otherwise identical.\n\n### With aiohttp\n\nBy default, the async client uses `httpx` for HTTP requests. 
However, for improved concurrency performance you may also use `aiohttp` as the HTTP backend.\n\nYou can enable this by installing `aiohttp`:\n\n```sh\n# install from PyPI\npip install openai[aiohttp]\n```\n\nThen enable it by instantiating the client with `http_client=DefaultAioHttpClient()`:\n\n```python\nimport os\nimport asyncio\nfrom openai import DefaultAioHttpClient\nfrom openai import AsyncOpenAI\n\n\nasync def main() -> None:\n    async with AsyncOpenAI(\n        api_key=os.environ.get(\"OPENAI_API_KEY\"),  # This is the default and can be omitted\n        http_client=DefaultAioHttpClient(),\n    ) as client:\n        chat_completion = await client.chat.completions.create(\n            messages=[\n                {\n                    \"role\": \"user\",\n                    \"content\": \"Say this is a test\",\n                }\n            ],\n            model=\"gpt-5.2\",\n        )\n\n\nasyncio.run(main())\n```\n\n## Streaming responses\n\nWe provide support for streaming responses using Server-Sent Events (SSE).\n\n```python\nfrom openai import OpenAI\n\nclient = OpenAI()\n\nstream = client.responses.create(\n    model=\"gpt-5.2\",\n    input=\"Write a one-sentence bedtime story about a unicorn.\",\n    stream=True,\n)\n\nfor event in stream:\n    print(event)\n```\n\nThe async client uses the exact same interface.\n\n```python\nimport asyncio\nfrom openai import AsyncOpenAI\n\nclient = AsyncOpenAI()\n\n\nasync def main():\n    stream = await client.responses.create(\n        model=\"gpt-5.2\",\n        input=\"Write a one-sentence bedtime story about a unicorn.\",\n        stream=True,\n    )\n\n    async for event in stream:\n        print(event)\n\n\nasyncio.run(main())\n```\n\n## Realtime API\n\nThe Realtime API enables you to build low-latency, multi-modal conversational experiences. It currently supports text and audio as both input and output, as well as [function calling](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Ffunction-calling) through a WebSocket connection.\n\nUnder the hood the SDK uses the [`websockets`](https:\u002F\u002Fwebsockets.readthedocs.io\u002Fen\u002Fstable\u002F) library to manage connections.\n\nThe Realtime API works through a combination of client-sent events and server-sent events. Clients can send events to do things like update session configuration or send text and audio inputs. Server events confirm when audio responses have completed, or when a text response from the model has been received. 
A full event reference can be found [here](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Frealtime-client-events) and a guide can be found [here](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Frealtime).\n\nBasic text-based example:\n\n```py\nimport asyncio\nfrom openai import AsyncOpenAI\n\nasync def main():\n    client = AsyncOpenAI()\n\n    async with client.realtime.connect(model=\"gpt-realtime\") as connection:\n        await connection.session.update(\n            session={\"type\": \"realtime\", \"output_modalities\": [\"text\"]}\n        )\n\n        await connection.conversation.item.create(\n            item={\n                \"type\": \"message\",\n                \"role\": \"user\",\n                \"content\": [{\"type\": \"input_text\", \"text\": \"Say hello!\"}],\n            }\n        )\n        await connection.response.create()\n\n        async for event in connection:\n            if event.type == \"response.output_text.delta\":\n                print(event.delta, flush=True, end=\"\")\n\n            elif event.type == \"response.output_text.done\":\n                print()\n\n            elif event.type == \"response.done\":\n                break\n\nasyncio.run(main())\n```\n\nHowever, the real magic of the Realtime API is handling audio inputs \u002F outputs; see this [TUI script](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fblob\u002Fmain\u002Fexamples\u002Frealtime\u002Fpush_to_talk_app.py) for a fully fledged example.\n\n### Realtime error handling\n\nWhenever an error occurs, the Realtime API will send an [`error` event](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Frealtime-model-capabilities#error-handling) and the connection will stay open and remain usable. This means you need to handle it yourself, as _no errors are raised directly_ by the SDK when an `error` event comes in.\n\n```py\nclient = AsyncOpenAI()\n\nasync with client.realtime.connect(model=\"gpt-realtime\") as connection:\n    ...\n    async for event in connection:\n        if event.type == 'error':\n            print(event.error.type)\n            print(event.error.code)\n            print(event.error.event_id)\n            print(event.error.message)\n```\n\n## Using types\n\nNested request parameters are [TypedDicts](https:\u002F\u002Fdocs.python.org\u002F3\u002Flibrary\u002Ftyping.html#typing.TypedDict). Responses are [Pydantic models](https:\u002F\u002Fdocs.pydantic.dev) which also provide helper methods for things like:\n\n- Serializing back into JSON, `model.to_json()`\n- Converting to a dictionary, `model.to_dict()`\n\nTyped requests and responses provide autocomplete and documentation within your editor. 
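For example, here is a minimal sketch that round-trips a typed response:\n\n```python\nfrom openai import OpenAI\n\nclient = OpenAI()\n\ncompletion = client.chat.completions.create(\n    messages=[{\"role\": \"user\", \"content\": \"Say this is a test\"}],\n    model=\"gpt-5.2\",\n)\n\n# Responses are Pydantic models, so the helper methods above are available:\nprint(completion.to_json())  # serialize the response back into a JSON string\nprint(completion.to_dict())  # convert the response into a plain dictionary\n```\n\n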
If you would like to see type errors in VS Code to help catch bugs earlier, set `python.analysis.typeCheckingMode` to `basic`.\n\n## Pagination\n\nList methods in the OpenAI API are paginated.\n\nThis library provides auto-paginating iterators with each list response, so you do not have to request successive pages manually:\n\n```python\nfrom openai import OpenAI\n\nclient = OpenAI()\n\nall_jobs = []\n# Automatically fetches more pages as needed.\nfor job in client.fine_tuning.jobs.list(\n    limit=20,\n):\n    # Do something with job here\n    all_jobs.append(job)\nprint(all_jobs)\n```\n\nOr, asynchronously:\n\n```python\nimport asyncio\nfrom openai import AsyncOpenAI\n\nclient = AsyncOpenAI()\n\n\nasync def main() -> None:\n    all_jobs = []\n    # Iterate through items across all pages, issuing requests as needed.\n    async for job in client.fine_tuning.jobs.list(\n        limit=20,\n    ):\n        all_jobs.append(job)\n    print(all_jobs)\n\n\nasyncio.run(main())\n```\n\nAlternatively, you can use the `.has_next_page()`, `.next_page_info()`, or `.get_next_page()` methods for more granular control when working with pages:\n\n```python\nfirst_page = await client.fine_tuning.jobs.list(\n    limit=20,\n)\nif first_page.has_next_page():\n    print(f\"will fetch next page using these details: {first_page.next_page_info()}\")\n    next_page = await first_page.get_next_page()\n    print(f\"number of items we just fetched: {len(next_page.data)}\")\n\n# Remove `await` for non-async usage.\n```\n\nOr just work directly with the returned data:\n\n```python\nfirst_page = await client.fine_tuning.jobs.list(\n    limit=20,\n)\n\nprint(f\"next page cursor: {first_page.after}\")  # => \"next page cursor: ...\"\nfor job in first_page.data:\n    print(job.id)\n\n# Remove `await` for non-async usage.\n```\n\n## Nested params\n\nNested parameters are dictionaries, typed using `TypedDict`, for example:\n\n```python\nfrom openai import OpenAI\n\nclient = OpenAI()\n\nresponse = client.responses.create(\n    input=[\n        {\n            \"role\": \"user\",\n            \"content\": \"How much ?\",\n        }\n    ],\n    model=\"gpt-5.2\",\n    text={\"format\": {\"type\": \"json_object\"}},\n)\n```\n\n## File uploads\n\nRequest parameters that correspond to file uploads can be passed as `bytes`, a [`PathLike`](https:\u002F\u002Fdocs.python.org\u002F3\u002Flibrary\u002Fos.html#os.PathLike) instance, or a tuple of `(filename, contents, media type)`.\n\n```python\nfrom pathlib import Path\nfrom openai import OpenAI\n\nclient = OpenAI()\n\nclient.files.create(\n    file=Path(\"input.jsonl\"),\n    purpose=\"fine-tune\",\n)\n```\n\nThe async client uses the exact same interface. If you pass a [`PathLike`](https:\u002F\u002Fdocs.python.org\u002F3\u002Flibrary\u002Fos.html#os.PathLike) instance, the file contents will be read asynchronously automatically.\n\n## Webhook Verification\n\nVerifying webhook signatures is _optional but encouraged_.\n\nFor more information about webhooks, see [the API docs](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Fwebhooks).\n\n### Parsing webhook payloads\n\nFor most use cases, you will likely want to verify the webhook and parse the payload at the same time. To achieve this, we provide the method `client.webhooks.unwrap()`, which parses a webhook request and verifies that it was sent by OpenAI. This method will raise an error if the signature is invalid.\n\nNote that the `body` parameter must be the raw JSON string sent from the server (do not parse it first). 
The `.unwrap()` method will parse this JSON for you into an event object after verifying the webhook was sent from OpenAI.\n\n```python\nfrom openai import OpenAI\nfrom flask import Flask, request\n\napp = Flask(__name__)\nclient = OpenAI()  # OPENAI_WEBHOOK_SECRET environment variable is used by default\n\n\n@app.route(\"\u002Fwebhook\", methods=[\"POST\"])\ndef webhook():\n    request_body = request.get_data(as_text=True)\n\n    try:\n        event = client.webhooks.unwrap(request_body, request.headers)\n\n        if event.type == \"response.completed\":\n            print(\"Response completed:\", event.data)\n        elif event.type == \"response.failed\":\n            print(\"Response failed:\", event.data)\n        else:\n            print(\"Unhandled event type:\", event.type)\n\n        return \"ok\"\n    except Exception as e:\n        print(\"Invalid signature:\", e)\n        return \"Invalid signature\", 400\n\n\nif __name__ == \"__main__\":\n    app.run(port=8000)\n```\n\n### Verifying webhook payloads directly\n\nIn some cases, you may want to verify the webhook separately from parsing the payload. If you prefer to handle these steps separately, we provide the method `client.webhooks.verify_signature()` to _only verify_ the signature of a webhook request. Like `.unwrap()`, this method will raise an error if the signature is invalid.\n\nNote that the `body` parameter must be the raw JSON string sent from the server (do not parse it first). You will then need to parse the body after verifying the signature.\n\n```python\nimport json\nfrom openai import OpenAI\nfrom flask import Flask, request\n\napp = Flask(__name__)\nclient = OpenAI()  # OPENAI_WEBHOOK_SECRET environment variable is used by default\n\n\n@app.route(\"\u002Fwebhook\", methods=[\"POST\"])\ndef webhook():\n    request_body = request.get_data(as_text=True)\n\n    try:\n        client.webhooks.verify_signature(request_body, request.headers)\n\n        # Parse the body after verification\n        event = json.loads(request_body)\n        print(\"Verified event:\", event)\n\n        return \"ok\"\n    except Exception as e:\n        print(\"Invalid signature:\", e)\n        return \"Invalid signature\", 400\n\n\nif __name__ == \"__main__\":\n    app.run(port=8000)\n```\n\n## Handling errors\n\nWhen the library is unable to connect to the API (for example, due to network connection problems or a timeout), a subclass of `openai.APIConnectionError` is raised.\n\nWhen the API returns a non-success status code (that is, 4xx or 5xx\nresponse), a subclass of `openai.APIStatusError` is raised, containing `status_code` and `response` properties.\n\nAll errors inherit from `openai.APIError`.\n\n```python\nimport openai\nfrom openai import OpenAI\n\nclient = OpenAI()\n\ntry:\n    client.fine_tuning.jobs.create(\n        model=\"gpt-4o\",\n        training_file=\"file-abc123\",\n    )\nexcept openai.APIConnectionError as e:\n    print(\"The server could not be reached\")\n    print(e.__cause__)  # an underlying Exception, likely raised within httpx.\nexcept openai.RateLimitError as e:\n    print(\"A 429 status code was received; we should back off a bit.\")\nexcept openai.APIStatusError as e:\n    print(\"Another non-200-range status code was received\")\n    print(e.status_code)\n    print(e.response)\n```\n\nError codes are as follows:\n\n| Status Code | Error Type                 |\n| ----------- | -------------------------- |\n| 400         | `BadRequestError`          |\n| 401         | `AuthenticationError`      |\n| 403      
   | `PermissionDeniedError`    |\n| 404         | `NotFoundError`            |\n| 422         | `UnprocessableEntityError` |\n| 429         | `RateLimitError`           |\n| >=500       | `InternalServerError`      |\n| N\u002FA         | `APIConnectionError`       |\n\n## Request IDs\n\n> For more information on debugging requests, see [these docs](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fdebugging-requests)\n\nAll object responses in the SDK provide a `_request_id` property which is added from the `x-request-id` response header so that you can quickly log failing requests and report them back to OpenAI.\n\n```python\nresponse = await client.responses.create(\n    model=\"gpt-5.2\",\n    input=\"Say 'this is a test'.\",\n)\nprint(response._request_id)  # req_123\n```\n\nNote that unlike other properties that use an `_` prefix, the `_request_id` property\n_is_ public. Unless documented otherwise, _all_ other `_` prefix properties,\nmethods and modules are _private_.\n\n> [!IMPORTANT]  \n> If you need to access request IDs for failed requests you must catch the `APIStatusError` exception\n\n```python\nimport openai\n\ntry:\n    completion = await client.chat.completions.create(\n        messages=[{\"role\": \"user\", \"content\": \"Say this is a test\"}], model=\"gpt-5.2\"\n    )\nexcept openai.APIStatusError as exc:\n    print(exc.request_id)  # req_123\n    raise exc\n```\n\n## Retries\n\nCertain errors are automatically retried 2 times by default, with a short exponential backoff.\nConnection errors (for example, due to a network connectivity problem), 408 Request Timeout, 409 Conflict,\n429 Rate Limit, and >=500 Internal errors are all retried by default.\n\nYou can use the `max_retries` option to configure or disable retry settings:\n\n```python\nfrom openai import OpenAI\n\n# Configure the default for all requests:\nclient = OpenAI(\n    # default is 2\n    max_retries=0,\n)\n\n# Or, configure per-request:\nclient.with_options(max_retries=5).chat.completions.create(\n    messages=[\n        {\n            \"role\": \"user\",\n            \"content\": \"How can I get the name of the current day in JavaScript?\",\n        }\n    ],\n    model=\"gpt-5.2\",\n)\n```\n\n## Timeouts\n\nBy default requests time out after 10 minutes. 
You can configure this with a `timeout` option,\nwhich accepts a float or an [`httpx.Timeout`](https:\u002F\u002Fwww.python-httpx.org\u002Fadvanced\u002Ftimeouts\u002F#fine-tuning-the-configuration) object:\n\n```python\nimport httpx\n\nfrom openai import OpenAI\n\n# Configure the default for all requests:\nclient = OpenAI(\n    # 20 seconds (default is 10 minutes)\n    timeout=20.0,\n)\n\n# More granular control:\nclient = OpenAI(\n    timeout=httpx.Timeout(60.0, read=5.0, write=10.0, connect=2.0),\n)\n\n# Override per-request:\nclient.with_options(timeout=5.0).chat.completions.create(\n    messages=[\n        {\n            \"role\": \"user\",\n            \"content\": \"How can I list all files in a directory using Python?\",\n        }\n    ],\n    model=\"gpt-5.2\",\n)\n```\n\nOn timeout, an `APITimeoutError` is thrown.\n\nNote that requests that time out are [retried twice by default](#retries).\n\n## Advanced\n\n### Logging\n\nWe use the standard library [`logging`](https:\u002F\u002Fdocs.python.org\u002F3\u002Flibrary\u002Flogging.html) module.\n\nYou can enable logging by setting the environment variable `OPENAI_LOG` to `info`.\n\n```shell\n$ export OPENAI_LOG=info\n```\n\nOr to `debug` for more verbose logging.\n\n### How to tell whether `None` means `null` or missing\n\nIn an API response, a field may be explicitly `null`, or missing entirely; in either case, its value is `None` in this library. You can differentiate the two cases with `.model_fields_set`:\n\n```py\nif response.my_field is None:\n  if 'my_field' not in response.model_fields_set:\n    print('Got json like {}, without a \"my_field\" key present at all.')\n  else:\n    print('Got json like {\"my_field\": null}.')\n```\n\n### Accessing raw response data (e.g. headers)\n\nThe \"raw\" Response object can be accessed by prefixing `.with_raw_response.` to any HTTP method call, e.g.,\n\n```py\nfrom openai import OpenAI\n\nclient = OpenAI()\nresponse = client.chat.completions.with_raw_response.create(\n    messages=[{\n        \"role\": \"user\",\n        \"content\": \"Say this is a test\",\n    }],\n    model=\"gpt-5.2\",\n)\nprint(response.headers.get('X-My-Header'))\n\ncompletion = response.parse()  # get the object that `chat.completions.create()` would have returned\nprint(completion)\n```\n\nThese methods return a [`LegacyAPIResponse`](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Ftree\u002Fmain\u002Fsrc\u002Fopenai\u002F_legacy_response.py) object. This is a legacy class as we're changing it slightly in the next major version.\n\nFor the sync client this will mostly be the same with the exception\nthat `content` & `text` will be methods instead of properties. In the\nasync client, all methods will be async.\n\nA migration script will be provided & the migration in general should\nbe smooth.\n\n#### `.with_streaming_response`\n\nThe above interface eagerly reads the full response body when you make the request, which may not always be what you want.\n\nTo stream the response body, use `.with_streaming_response` instead, which requires a context manager and only reads the response body once you call `.read()`, `.text()`, `.json()`, `.iter_bytes()`, `.iter_text()`, `.iter_lines()` or `.parse()`. 
In the async client, these are async methods.\n\nAs such, `.with_streaming_response` methods return a different [`APIResponse`](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Ftree\u002Fmain\u002Fsrc\u002Fopenai\u002F_response.py) object, and the async client returns an [`AsyncAPIResponse`](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Ftree\u002Fmain\u002Fsrc\u002Fopenai\u002F_response.py) object.\n\n```python\nwith client.chat.completions.with_streaming_response.create(\n    messages=[\n        {\n            \"role\": \"user\",\n            \"content\": \"Say this is a test\",\n        }\n    ],\n    model=\"gpt-5.2\",\n) as response:\n    print(response.headers.get(\"X-My-Header\"))\n\n    for line in response.iter_lines():\n        print(line)\n```\n\nThe context manager is required so that the response will reliably be closed.\n\n### Making custom\u002Fundocumented requests\n\nThis library is typed for convenient access to the documented API.\n\nIf you need to access undocumented endpoints, params, or response properties, the library can still be used.\n\n#### Undocumented endpoints\n\nTo make requests to undocumented endpoints, you can make requests using `client.get`, `client.post`, and other\nHTTP verbs. Options on the client will be respected (such as retries) when making this request.\n\n```py\nimport httpx\n\nresponse = client.post(\n    \"\u002Ffoo\",\n    cast_to=httpx.Response,\n    body={\"my_param\": True},\n)\n\nprint(response.headers.get(\"x-foo\"))\n```\n\n#### Undocumented request params\n\nIf you want to explicitly send an extra param, you can do so with the `extra_query`, `extra_body`, and `extra_headers` request\noptions.\n\n
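For example, here is a minimal sketch; the header, query, and body keys below are illustrative placeholders that are passed through to the request as-is:\n\n```python\nfrom openai import OpenAI\n\nclient = OpenAI()\n\ncompletion = client.chat.completions.create(\n    messages=[{\"role\": \"user\", \"content\": \"Say this is a test\"}],\n    model=\"gpt-5.2\",\n    extra_headers={\"X-My-Header\": \"value\"},  # sent as an extra HTTP header\n    extra_query={\"my_query_param\": \"value\"},  # appended to the URL query string\n    extra_body={\"my_undocumented_param\": True},  # merged into the JSON body\n)\n```\n\n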
#### Undocumented response properties\n\nTo access undocumented response properties, you can access the extra fields like `response.unknown_prop`. You\ncan also get all the extra fields on the Pydantic model as a dict with\n[`response.model_extra`](https:\u002F\u002Fdocs.pydantic.dev\u002Flatest\u002Fapi\u002Fbase_model\u002F#pydantic.BaseModel.model_extra).\n\n### Configuring the HTTP client\n\nYou can directly override the [httpx client](https:\u002F\u002Fwww.python-httpx.org\u002Fapi\u002F#client) to customize it for your use case, including:\n\n- Support for [proxies](https:\u002F\u002Fwww.python-httpx.org\u002Fadvanced\u002Fproxies\u002F)\n- Custom [transports](https:\u002F\u002Fwww.python-httpx.org\u002Fadvanced\u002Ftransports\u002F)\n- Additional [advanced](https:\u002F\u002Fwww.python-httpx.org\u002Fadvanced\u002Fclients\u002F) functionality\n\n```python\nimport httpx\nfrom openai import OpenAI, DefaultHttpxClient\n\nclient = OpenAI(\n    # Or use the `OPENAI_BASE_URL` env var\n    base_url=\"http:\u002F\u002Fmy.test.server.example.com:8083\u002Fv1\",\n    http_client=DefaultHttpxClient(\n        proxy=\"http:\u002F\u002Fmy.test.proxy.example.com\",\n        transport=httpx.HTTPTransport(local_address=\"0.0.0.0\"),\n    ),\n)\n```\n\nYou can also customize the client on a per-request basis by using `with_options()`:\n\n```python\nclient.with_options(http_client=DefaultHttpxClient(...))\n```\n\n### Managing HTTP resources\n\nBy default the library closes underlying HTTP connections whenever the client is [garbage collected](https:\u002F\u002Fdocs.python.org\u002F3\u002Freference\u002Fdatamodel.html#object.__del__). You can manually close the client using the `.close()` method if desired, or with a context manager that closes when exiting.\n\n```py\nfrom openai import OpenAI\n\nwith OpenAI() as client:\n  # make requests here\n  ...\n\n# HTTP client is now closed\n```\n\n## Microsoft Azure OpenAI\n\nTo use this library with [Azure OpenAI](https:\u002F\u002Flearn.microsoft.com\u002Fazure\u002Fai-services\u002Fopenai\u002Foverview), use the `AzureOpenAI`\nclass instead of the `OpenAI` class.\n\n> [!IMPORTANT]\n> The Azure API shape differs from the core API shape which means that the static types for responses \u002F params\n> won't always be correct.\n\n```py\nfrom openai import AzureOpenAI\n\n# gets the API Key from environment variable AZURE_OPENAI_API_KEY\nclient = AzureOpenAI(\n    # https:\u002F\u002Flearn.microsoft.com\u002Fazure\u002Fai-services\u002Fopenai\u002Freference#rest-api-versioning\n    api_version=\"2023-07-01-preview\",\n    # https:\u002F\u002Flearn.microsoft.com\u002Fazure\u002Fcognitive-services\u002Fopenai\u002Fhow-to\u002Fcreate-resource?pivots=web-portal#create-a-resource\n    azure_endpoint=\"https:\u002F\u002Fexample-endpoint.openai.azure.com\",\n)\n\ncompletion = client.chat.completions.create(\n    model=\"deployment-name\",  # e.g. gpt-35-instant\n    messages=[\n        {\n            \"role\": \"user\",\n            \"content\": \"How do I output all files in a directory using Python?\",\n        },\n    ],\n)\nprint(completion.to_json())\n```\n\nIn addition to the options provided in the base `OpenAI` client, the following options are provided:\n\n- `azure_endpoint` (or the `AZURE_OPENAI_ENDPOINT` environment variable)\n- `azure_deployment`\n- `api_version` (or the `OPENAI_API_VERSION` environment variable)\n- `azure_ad_token` (or the `AZURE_OPENAI_AD_TOKEN` environment variable)\n- `azure_ad_token_provider`\n\nAn example of using the client with Microsoft Entra ID (formerly known as Azure Active Directory) can be found [here](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fblob\u002Fmain\u002Fexamples\u002Fazure_ad.py).\n\n## Versioning\n\nThis package generally follows [SemVer](https:\u002F\u002Fsemver.org\u002Fspec\u002Fv2.0.0.html) conventions, though certain backwards-incompatible changes may be released as minor versions:\n\n1. Changes that only affect static types, without breaking runtime behavior.\n2. Changes to library internals which are technically public but not intended or documented for external use. _(Please open a GitHub issue to let us know if you are relying on such internals.)_\n3. 
Changes that we do not expect to impact the vast majority of users in practice.\n\nWe take backwards-compatibility seriously and work hard to ensure you can rely on a smooth upgrade experience.\n\nWe are keen for your feedback; please open an [issue](https:\u002F\u002Fwww.github.com\u002Fopenai\u002Fopenai-python\u002Fissues) with questions, bugs, or suggestions.\n\n### Determining the installed version\n\nIf you've upgraded to the latest version but aren't seeing any new features you were expecting then your python environment is likely still using an older version.\n\nYou can determine the version that is being used at runtime with:\n\n```py\nimport openai\nprint(openai.__version__)\n```\n\n## Requirements\n\nPython 3.9 or higher.\n\n## Contributing\n\nSee [the contributing documentation](.\u002FCONTRIBUTING.md).\n","# OpenAI Python API 库\n\n\u003C!-- prettier-ignore -->\n[![PyPI version](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Fopenai.svg?label=pypi%20(stable))](https:\u002F\u002Fpypi.org\u002Fproject\u002Fopenai\u002F)\n\nOpenAI Python 库为所有 Python 3.9+ 应用程序提供了便捷的 OpenAI REST API 访问方式。该库包含所有请求参数和响应字段的类型定义，并提供基于 [httpx](https:\u002F\u002Fgithub.com\u002Fencode\u002Fhttpx) 的同步和异步客户端。\n\n它由我们的 [OpenAPI 规范](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-openapi) 使用 [Stainless](https:\u002F\u002Fstainlessapi.com\u002F) 生成。\n\n## 文档\n\nREST API 文档可在 [platform.openai.com](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference) 上找到。本库的完整 API 可在 [api.md](api.md) 中查阅。\n\n## 安装\n\n```sh\n# 从 PyPI 安装\npip install openai\n```\n\n## 使用\n\n本库的完整 API 可在 [api.md](api.md) 中查阅。\n\n与 OpenAI 模型交互的主要 API 是 [Responses API](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fresponses)。您可以通过以下代码从模型生成文本：\n\n```python\nimport os\nfrom openai import OpenAI\n\nclient = OpenAI(\n    # 这是默认值，可以省略\n    api_key=os.environ.get(\"OPENAI_API_KEY\"),\n)\n\nresponse = client.responses.create(\n    model=\"gpt-5.2\",\n    instructions=\"你是一位说话像海盗的编程助手。\",\n    input=\"如何检查一个 Python 对象是否是某个类的实例？\"\n)\n\nprint(response.output_text)\n```\n\n之前的标准（无限期支持）文本生成方法是 [Chat Completions API](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fchat)。您可以使用该 API 通过以下代码从模型生成文本：\n\n```python\nfrom openai import OpenAI\n\nclient = OpenAI()\n\ncompletion = client.chat.completions.create(\n    model=\"gpt-5.2\",\n    messages=[\n        {\"role\": \"developer\", \"content\": \"像海盗一样说话。\"},\n        {\n            \"role\": \"user\",\n            \"content\": \"如何检查一个 Python 对象是否是某个类的实例？\"\n        },\n    ],\n)\n\nprint(completion.choices[0].message.content)\n```\n\n虽然您可以直接传递 `api_key` 关键字参数，但我们建议使用 [python-dotenv](https:\u002F\u002Fpypi.org\u002Fproject\u002Fpython-dotenv\u002F) 将 `OPENAI_API_KEY=\"My API Key\"` 添加到您的 `.env` 文件中，以避免将 API 密钥存储在版本控制系统中。[在此处获取 API 密钥](https:\u002F\u002Fplatform.openai.com\u002Fsettings\u002Forganization\u002Fapi-keys)。\n\n### 视觉功能\n\n使用图片 URL：\n\n```python\nprompt = \"这张图片里有什么？\"\nimg_url = \"https:\u002F\u002Fupload.wikimedia.org\u002Fwikipedia\u002Fcommons\u002Fthumb\u002Fd\u002Fd5\u002F2023_06_08_Raccoon1.jpg\u002F1599px-2023_06_08_Raccoon1.jpg\"\n\nresponse = client.responses.create(\n    model=\"gpt-5.2\",\n    input=[\n        {\n            \"role\": \"user\",\n            \"content\": [\n                {\"type\": \"input_text\", \"text\": prompt},\n                {\"type\": \"input_image\", \"image_url\": f\"{img_url}\"},\n            ],\n        }\n    ],\n)\n```\n\n使用 Base64 编码的图片字符串：\n\n```python\nimport base64\nfrom openai 
import OpenAI\n\nclient = OpenAI()\n\nprompt = \"这张图片里有什么？\"\nwith open(\"path\u002Fto\u002Fimage.png\", \"rb\") as image_file:\n    b64_image = base64.b64encode(image_file.read()).decode(\"utf-8\")\n\nresponse = client.responses.create(\n    model=\"gpt-5.2\",\n    input=[\n        {\n            \"role\": \"user\",\n            \"content\": [\n                {\"type\": \"input_text\", \"text\": prompt},\n                {\"type\": \"input_image\", \"image_url\": f\"data:image\u002Fpng;base64,{b64_image}\"},\n            ],\n        }\n    ],\n)\n```\n\n## 异步使用\n\n只需导入 `AsyncOpenAI` 而不是 `OpenAI`，并在每次 API 调用前加上 `await` 即可：\n\n```python\nimport os\nimport asyncio\nfrom openai import AsyncOpenAI\n\nclient = AsyncOpenAI(\n    # 这是默认值，可以省略\n    api_key=os.environ.get(\"OPENAI_API_KEY\"),\n)\n\n\nasync def main() -> None:\n    response = await client.responses.create(\n        model=\"gpt-5.2\", input=\"给一个聪明的五岁孩子解释一下‘disestablishmentarianism’是什么意思。\"\n    )\n    print(response.output_text)\n\n\nasyncio.run(main())\n```\n\n同步和异步客户端的功能完全相同。\n\n### 使用 aiohttp\n\n默认情况下，异步客户端使用 `httpx` 处理 HTTP 请求。然而，为了提升并发性能，您也可以选择使用 `aiohttp` 作为 HTTP 后端。\n\n您可以通过安装 `aiohttp` 来启用此功能：\n\n```sh\n# 从 PyPI 安装\npip install openai[aiohttp]\n```\n\n然后，在实例化客户端时指定 `http_client=DefaultAioHttpClient()` 即可启用：\n\n```python\nimport os\nimport asyncio\nfrom openai import DefaultAioHttpClient\nfrom openai import AsyncOpenAI\n\n\nasync def main() -> None:\n    async with AsyncOpenAI(\n        api_key=os.environ.get(\"OPENAI_API_KEY\"),  # 这是默认值，可以省略\n        http_client=DefaultAioHttpClient(),\n    ) as client:\n        chat_completion = await client.chat.completions.create(\n            messages=[\n                {\n                    \"role\": \"user\",\n                    \"content\": \"说这是个测试\",\n                }\n            ],\n            model=\"gpt-5.2\",\n        )\n\n\nasyncio.run(main())\n```\n\n## 流式响应\n\n我们支持使用服务器发送事件 (SSE) 实现流式响应。\n\n```python\nfrom openai import OpenAI\n\nclient = OpenAI()\n\nstream = client.responses.create(\n    model=\"gpt-5.2\",\n    input=\"写一个关于独角兽的一句话睡前故事。\",\n    stream=True,\n)\n\nfor event in stream:\n    print(event)\n```\n\n异步客户端使用完全相同的接口：\n\n```python\nimport asyncio\nfrom openai import AsyncOpenAI\n\nclient = AsyncOpenAI()\n\n\nasync def main():\n    stream = await client.responses.create(\n        model=\"gpt-5.2\",\n        input=\"写一个关于独角兽的一句话睡前故事。\",\n        stream=True,\n    )\n\n    async for event in stream:\n        print(event)\n\n\nasyncio.run(main())\n```\n\n## 实时 API\n\n实时 API 使您能够构建低延迟、多模态的对话体验。目前，它支持文本和音频作为输入和输出，并通过 WebSocket 连接实现 [函数调用](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Ffunction-calling)。\n\n在底层，SDK 使用 [`websockets`](https:\u002F\u002Fwebsockets.readthedocs.io\u002Fen\u002Fstable\u002F) 库来管理连接。\n\n实时 API 通过客户端发送事件和服务器发送事件相结合的方式工作。客户端可以发送事件来更新会话配置或发送文本和音频输入。服务器事件则用于确认音频响应已完成，或已收到模型的文本响应。完整的事件参考可以在 [这里](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Frealtime-client-events) 找到，相关指南则可在 [这里](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Frealtime) 查阅。\n\n基于文本的基本示例：\n\n```py\nimport asyncio\nfrom openai import AsyncOpenAI\n\nasync def main():\n    client = AsyncOpenAI()\n\n    async with client.realtime.connect(model=\"gpt-realtime\") as connection:\n        await connection.session.update(\n            session={\"type\": \"realtime\", \"output_modalities\": [\"text\"]}\n        )\n\n        await connection.conversation.item.create(\n            item={\n                \"type\": \"message\",\n     
           \"role\": \"user\",\n                \"content\": [{\"type\": \"input_text\", \"text\": \"Say hello!\"}],\n            }\n        )\n        await connection.response.create()\n\n        async for event in connection:\n            if event.type == \"response.output_text.delta\":\n                print(event.delta, flush=True, end=\"\")\n\n            elif event.type == \"response.output_text.done\":\n                print()\n\n            elif event.type == \"response.done\":\n                break\n\nasyncio.run(main())\n```\n\n然而，实时 API 的真正魔力在于处理音频输入\u002F输出。请参阅此示例 [TUI 脚本](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fblob\u002Fmain\u002Fexamples\u002Frealtime\u002Fpush_to_talk_app.py)，以获取一个完整的示例。\n\n### 实时错误处理\n\n每当发生错误时，实时 API 都会发送一个 [`error` 事件](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Frealtime-model-capabilities#error-handling)，并且连接将保持打开状态，仍可继续使用。这意味着您需要自行处理这些错误，因为当 `error` 事件到来时，SDK 并不会直接抛出任何异常。\n\n```py\nclient = AsyncOpenAI()\n\nasync with client.realtime.connect(model=\"gpt-realtime\") as connection:\n    ...\n    async for event in connection:\n        if event.type == 'error':\n            print(event.error.type)\n            print(event.error.code)\n            print(event.error.event_id)\n            print(event.error.message)\n```\n\n## 使用类型\n\n嵌套请求参数是 [TypedDicts](https:\u002F\u002Fdocs.python.org\u002F3\u002Flibrary\u002Ftyping.html#typing.TypedDict)。响应则是 [Pydantic 模型](https:\u002F\u002Fdocs.pydantic.dev)，它们还提供了诸如以下功能的辅助方法：\n\n- 将模型序列化回 JSON：`model.to_json()`\n- 转换为字典：`model.to_dict()`\n\n使用类型化的请求和响应可以在编辑器中获得自动补全和文档提示。如果您希望在 VS Code 中看到类型错误以帮助更早地捕获 bug，请将 `python.analysis.typeCheckingMode` 设置为 `basic`。\n\n## 分页\n\nOpenAI API 中的列表方法都进行了分页处理。\n\n该库为每个列表响应提供了自动分页迭代器，因此您无需手动请求后续页面：\n\n```python\nfrom openai import OpenAI\n\nclient = OpenAI()\n\nall_jobs = []\n# 根据需要自动获取更多页面。\nfor job in client.fine_tuning.jobs.list(\n    limit=20,\n):\n    # 在此处对 job 做些处理\n    all_jobs.append(job)\nprint(all_jobs)\n```\n\n或者，以异步方式：\n\n```python\nimport asyncio\nfrom openai import AsyncOpenAI\n\nclient = AsyncOpenAI()\n\n\nasync def main() -> None:\n    all_jobs = []\n    # 遍历所有页面中的项目，按需发出请求。\n    async for job in client.fine_tuning.jobs.list(\n        limit=20,\n    ):\n        all_jobs.append(job)\n    print(all_jobs)\n\n\nasyncio.run(main())\n```\n\n此外，您还可以使用 `.has_next_page()`、`.next_page_info()` 或 `.get_next_page()` 方法来更精细地控制分页操作：\n\n```python\nfirst_page = await client.fine_tuning.jobs.list(\n    limit=20,\n)\nif first_page.has_next_page():\n    print(f\"将使用以下信息获取下一页: {first_page.next_page_info()}\")\n    next_page = await first_page.get_next_page()\n    print(f\"我们刚刚获取的项目数量: {len(next_page.data)}\")\n\n# 如果是非异步使用，则移除 `await`。\n```\n\n或者直接操作返回的数据：\n\n```python\nfirst_page = await client.fine_tuning.jobs.list(\n    limit=20,\n)\n\nprint(f\"下一页游标: {first_page.after}\")  # => \"下一页游标: ...\"\nfor job in first_page.data:\n    print(job.id)\n\n# 如果是非异步使用，则移除 `await`。\n```\n\n## 嵌套参数\n\n嵌套参数是使用 `TypedDict` 类型化的字典，例如：\n\n```python\nfrom openai import OpenAI\n\nclient = OpenAI()\n\nresponse = client.responses.create(\n    input=[\n        {\n            \"role\": \"user\",\n            \"content\": \"How much ?\",\n        }\n    ],\n    model=\"gpt-5.2\",\n    text={\"format\": {\"type\": \"json_object\"}},\n)\n```\n\n## 文件上传\n\n与文件上传相对应的请求参数可以以 `bytes` 形式传递，也可以传递一个 [`PathLike`](https:\u002F\u002Fdocs.python.org\u002F3\u002Flibrary\u002Fos.html#os.PathLike) 实例，或一个包含 `(文件名, 内容, 媒体类型)` 的元组。\n\n```python\nfrom pathlib import Path\nfrom openai 
import OpenAI\n\nclient = OpenAI()\n\nclient.files.create(\n    file=Path(\"input.jsonl\"),\n    purpose=\"fine-tune\",\n)\n```\n\n异步客户端使用完全相同的接口。如果您传递一个 [`PathLike`](https:\u002F\u002Fdocs.python.org\u002F3\u002Flibrary\u002Fos.html#os.PathLike) 实例，文件内容将被自动异步读取。\n\n## Webhook 验证\n\n验证 webhook 签名是 _可选但推荐的做法_。\n\n有关 Webhook 的更多信息，请参阅 [API 文档](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Fwebhooks)。\n\n### 解析 Webhook 负载\n\n对于大多数用例，您可能希望同时验证 Webhook 并解析负载。为此，我们提供了 `client.webhooks.unwrap()` 方法，该方法会解析 Webhook 请求并验证其是否由 OpenAI 发送。如果签名无效，此方法将引发错误。\n\n请注意，`body` 参数必须是服务器发送的原始 JSON 字符串（不要先对其进行解析）。`.unwrap()` 方法会在验证 Webhook 确实来自 OpenAI 后，为您将此 JSON 解析为事件对象。\n\n```python\nfrom openai import OpenAI\nfrom flask import Flask, request\n\napp = Flask(__name__)\nclient = OpenAI()  # 默认使用 OPENAI_WEBHOOK_SECRET 环境变量\n\n\n@app.route(\"\u002Fwebhook\", methods=[\"POST\"])\ndef webhook():\n    request_body = request.get_data(as_text=True)\n\n    try:\n        event = client.webhooks.unwrap(request_body, request.headers)\n\n        if event.type == \"response.completed\":\n            print(\"响应已完成:\", event.data)\n        elif event.type == \"response.failed\":\n            print(\"响应失败:\", event.data)\n        else:\n            print(\"未处理的事件类型:\", event.type)\n\n        return \"ok\"\n    except Exception as e:\n        print(\"无效签名:\", e)\n        return \"无效签名\", 400\n\n\nif __name__ == \"__main__\":\n    app.run(port=8000)\n```\n\n### 直接验证 Webhook 负载\n\n在某些情况下，您可能希望将 Webhook 的验证与负载解析分开进行。如果您更倾向于分别处理这些步骤，我们提供了 `client.webhooks.verify_signature()` 方法，用于仅验证 Webhook 请求的签名。与 `.unwrap()` 方法类似，如果签名无效，此方法也会引发错误。\n\n请注意，`body` 参数必须是服务器发送的原始 JSON 字符串（不要先对其进行解析）。验证签名后，您需要自行解析请求体。\n\n```python\nimport json\nfrom openai import OpenAI\nfrom flask import Flask, request\n\napp = Flask(__name__)\nclient = OpenAI()  # 默认使用 OPENAI_WEBHOOK_SECRET 环境变量\n\n\n@app.route(\"\u002Fwebhook\", methods=[\"POST\"])\ndef webhook():\n    request_body = request.get_data(as_text=True)\n\n    try:\n        client.webhooks.verify_signature(request_body, request.headers)\n\n        # 验证后解析请求体\n        event = json.loads(request_body)\n        print(\"已验证的事件:\", event)\n\n        return \"ok\"\n    except Exception as e:\n        print(\"无效签名:\", e)\n        return \"无效签名\", 400\n\n\nif __name__ == \"__main__\":\n    app.run(port=8000)\n```\n\n## 错误处理\n\n当库无法连接到 API 时（例如，由于网络连接问题或超时），会引发 `openai.APIConnectionError` 的子类异常。\n\n当 API 返回非成功状态码（即 4xx 或 5xx 响应）时，会引发 `openai.APIStatusError` 的子类异常，其中包含 `status_code` 和 `response` 属性。\n\n所有错误都继承自 `openai.APIError`。\n\n```python\nimport openai\nfrom openai import OpenAI\n\nclient = OpenAI()\n\ntry:\n    client.fine_tuning.jobs.create(\n        model=\"gpt-4o\",\n        training_file=\"file-abc123\",\n    )\nexcept openai.APIConnectionError as e:\n    print(\"无法连接到服务器\")\n    print(e.__cause__)  # 底层异常，很可能是在 httpx 内部引发的。\nexcept openai.RateLimitError as e:\n    print(\"收到 429 状态码；我们应该稍作退避。\")\nexcept openai.APIStatusError as e:\n    print(\"收到了另一个非 200 范围的状态码\")\n    print(e.status_code)\n    print(e.response)\n```\n\n错误代码如下：\n\n| 状态码 | 错误类型                 |\n| ----------- | -------------------------- |\n| 400         | `BadRequestError`          |\n| 401         | `AuthenticationError`      |\n| 403         | `PermissionDeniedError`    |\n| 404         | `NotFoundError`            |\n| 422         | `UnprocessableEntityError` |\n| 429         | `RateLimitError`           |\n| >=500       | `InternalServerError`      |\n| N\u002FA         | `APIConnectionError`       |\n\n## 请求 
ID\n\n> 更多关于调试请求的信息，请参阅 [这些文档](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fdebugging-requests)\n\nSDK 中的所有对象响应都提供 `_request_id` 属性，该属性来自 `x-request-id` 响应头，以便您可以快速记录失败的请求并将其报告给 OpenAI。\n\n```python\nresponse = await client.responses.create(\n    model=\"gpt-5.2\",\n    input=\"说‘这是一个测试’。\",\n)\nprint(response._request_id)  # req_123\n```\n\n请注意，与其他以 `_` 为前缀的属性不同，`_request_id` 属性是公开的。除非另有说明，否则所有其他以 `_` 为前缀的属性、方法和模块都是私有的。\n\n> [!IMPORTANT]  \n> 如果您需要访问失败请求的请求 ID，必须捕获 `APIStatusError` 异常。\n\n```python\nimport openai\n\ntry:\n    completion = await client.chat.completions.create(\n        messages=[{\"role\": \"user\", \"content\": \"说‘这是一个测试’\"}], model=\"gpt-5.2\"\n    )\nexcept openai.APIStatusError as exc:\n    print(exc.request_id)  # req_123\n    raise exc\n```\n\n## 重试\n\n默认情况下，某些错误会自动重试 2 次，并采用短时间的指数退避策略。连接错误（例如，由于网络连接问题）、408 请求超时、409 冲突、429 速率限制以及 >=500 内部错误都会默认重试。\n\n您可以使用 `max_retries` 选项来配置或禁用重试设置：\n\n```python\nfrom openai import OpenAI\n\n# 配置所有请求的默认值：\nclient = OpenAI(\n    # 默认值为 2\n    max_retries=0,\n)\n\n# 或者按请求配置：\nclient.with_options(max_retries=5).chat.completions.create(\n    messages=[\n        {\n            \"role\": \"user\",\n            \"content\": \"如何用 JavaScript 获取当前日期的名称？\",\n        }\n    ],\n    model=\"gpt-5.2\",\n)\n```\n\n## 超时\n\n默认情况下，请求会在 10 分钟后超时。您可以使用 `timeout` 选项进行配置，该选项接受浮点数或 [`httpx.Timeout`](https:\u002F\u002Fwww.python-httpx.org\u002Fadvanced\u002Ftimeouts\u002F#fine-tuning-the-configuration) 对象：\n\n```python\nimport httpx\n\nfrom openai import OpenAI\n\n# 配置所有请求的默认值：\nclient = OpenAI(\n    # 20 秒（默认为 10 分钟）\n    timeout=20.0,\n)\n\n# 更精细的控制：\nclient = OpenAI(\n    timeout=httpx.Timeout(60.0, read=5.0, write=10.0, connect=2.0),\n)\n\n# 按请求覆盖：\nclient.with_options(timeout=5.0).chat.completions.create(\n    messages=[\n        {\n            \"role\": \"user\",\n            \"content\": \"如何用 Python 列出目录中的所有文件？\",\n        }\n    ],\n    model=\"gpt-5.2\",\n)\n```\n\n超时时，会抛出 `APITimeoutError`。\n\n请注意，超时的请求会默认重试两次（见“重试”部分）。\n\n## 高级\n\n### 日志记录\n\n我们使用标准库中的 [`logging`](https:\u002F\u002Fdocs.python.org\u002F3\u002Flibrary\u002Flogging.html) 模块。\n\n您可以通过将环境变量 `OPENAI_LOG` 设置为 `info` 来启用日志记录。\n\n```shell\n$ export OPENAI_LOG=info\n```\n\n或者将其设置为 `debug` 以获得更详细的日志输出。\n\n### 如何判断 `None` 是表示 `null` 还是缺失\n\n在 API 响应中，某个字段可能显式地为 `null`，也可能完全不存在；无论哪种情况，在本库中其值都为 `None`。您可以通过 `.model_fields_set` 来区分这两种情况：\n\n```py\nif response.my_field is None:\n  if 'my_field' not in response.model_fields_set:\n    print('收到的 JSON 类似于 {}, 根本没有 \"my_field\" 键。')\n  else:\n    print('收到的 JSON 类似于 {\"my_field\": null}。')\n```\n\n### 访问原始响应数据（例如头部信息）\n\n可以通过在任何 HTTP 方法调用前加上 `.with_raw_response.` 来访问“原始”响应对象，例如：\n\n```py\nfrom openai import OpenAI\n\nclient = OpenAI()\nresponse = client.chat.completions.with_raw_response.create(\n    messages=[{\n        \"role\": \"user\",\n        \"content\": \"说这是个测试\",\n    }],\n    model=\"gpt-5.2\",\n)\nprint(response.headers.get('X-My-Header'))\n\ncompletion = response.parse()  # 获取 `chat.completions.create()` 本来会返回的对象\nprint(completion)\n```\n\n这些方法会返回一个 [`LegacyAPIResponse`](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Ftree\u002Fmain\u002Fsrc\u002Fopenai\u002F_legacy_response.py) 对象。这是一个遗留类，因为我们在下一个主要版本中会对其进行小幅改动。\n\n对于同步客户端，这基本上与之前相同，唯一的区别是 `content` 和 `text` 将变为方法而不是属性。而在异步客户端中，所有方法都是异步的。\n\n我们将提供迁移脚本，并且整体迁移过程应该会比较顺利。\n\n#### `.with_streaming_response`\n\n上述接口会在发出请求时立即读取完整的响应体，但这并不总是您所需要的。\n\n如果需要流式传输响应体，请改用 `.with_streaming_response`，它需要使用上下文管理器，并且只有在您调用 `.read()`、`.text()`、`.json()`、`.iter_bytes()`、`.iter_text()`、`.iter_lines()` 或 
`.parse()` 时才会读取响应体。在异步客户端中，这些方法都是异步的。\n\n因此，`.with_streaming_response` 方法会返回不同的 [`APIResponse`](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Ftree\u002Fmain\u002Fsrc\u002Fopenai\u002F_response.py) 对象，而异步客户端则返回 [`AsyncAPIResponse`](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Ftree\u002Fmain\u002Fsrc\u002Fopenai\u002F_response.py) 对象。\n\n```python\nwith client.chat.completions.with_streaming_response.create(\n    messages=[\n        {\n            \"role\": \"user\",\n            \"content\": \"说这是个测试\",\n        }\n    ],\n    model=\"gpt-5.2\",\n) as response:\n    print(response.headers.get(\"X-My-Header\"))\n\n    for line in response.iter_lines():\n        print(line)\n```\n\n必须使用上下文管理器，以确保响应能够可靠地关闭。\n\n### 发起自定义或未文档化的请求\n\n本库经过类型标注，方便用户访问已文档化的 API。\n\n如果您需要访问未文档化的端点、参数或响应属性，仍然可以使用本库。\n\n#### 未文档化的端点\n\n要向未文档化的端点发起请求，您可以使用 `client.get`、`client.post` 等 HTTP 方法。在发起此类请求时，客户端上的选项（如重试次数）仍会生效。\n\n```py\nimport httpx\n\nresponse = client.post(\n    \"\u002Ffoo\",\n    cast_to=httpx.Response,\n    body={\"my_param\": True},\n)\n\nprint(response.headers.get(\"x-foo\"))\n```\n\n#### 未文档化的请求参数\n\n如果您想明确发送额外的参数，可以使用 `extra_query`、`extra_body` 和 `extra_headers` 请求选项来实现。\n\n#### 未文档化的响应属性\n\n要访问未文档化的响应属性，您可以直接访问诸如 `response.unknown_prop` 之类的额外字段。此外，您还可以通过 [`response.model_extra`](https:\u002F\u002Fdocs.pydantic.dev\u002Flatest\u002Fapi\u002Fbase_model\u002F#pydantic.BaseModel.model_extra) 将 Pydantic 模型上的所有额外字段作为字典获取。\n\n### 配置 HTTP 客户端\n\n您可以直接覆盖 [httpx 客户端](https:\u002F\u002Fwww.python-httpx.org\u002Fapi\u002F#client)，以根据您的使用场景进行自定义，包括：\n\n- 支持 [代理](https:\u002F\u002Fwww.python-httpx.org\u002Fadvanced\u002Fproxies\u002F)\n- 自定义 [传输层](https:\u002F\u002Fwww.python-httpx.org\u002Fadvanced\u002Ftransports\u002F)\n- 其他 [高级功能](https:\u002F\u002Fwww.python-httpx.org\u002Fadvanced\u002Fclients\u002F)\n\n```python\nimport httpx\nfrom openai import OpenAI, DefaultHttpxClient\n\nclient = OpenAI(\n    # 或者使用 `OPENAI_BASE_URL` 环境变量\n    base_url=\"http:\u002F\u002Fmy.test.server.example.com:8083\u002Fv1\",\n    http_client=DefaultHttpxClient(\n        proxy=\"http:\u002F\u002Fmy.test.proxy.example.com\",\n        transport=httpx.HTTPTransport(local_address=\"0.0.0.0\"),\n    ),\n)\n```\n\n您也可以通过使用 `with_options()` 在每次请求时自定义客户端：\n\n```python\nclient.with_options(http_client=DefaultHttpxClient(...))\n```\n\n### 管理 HTTP 资源\n\n默认情况下，当客户端被 [垃圾回收](https:\u002F\u002Fdocs.python.org\u002F3\u002Freference\u002Fdatamodel.html#object.__del__) 时，库会自动关闭底层的 HTTP 连接。如果您希望手动关闭客户端，可以使用 `.close()` 方法，或者使用上下文管理器在退出时自动关闭。\n\n```py\nfrom openai import OpenAI\n\nwith OpenAI() as client:\n  # 在这里发起请求\n  ...\n\n# 此时 HTTP 客户端已被关闭\n```\n\n## Microsoft Azure OpenAI\n\n要将本库与 [Azure OpenAI](https:\u002F\u002Flearn.microsoft.com\u002Fazure\u002Fai-services\u002Fopenai\u002Foverview) 一起使用，应使用 `AzureOpenAI` 类，而不是 `OpenAI` 类。\n\n> [!IMPORTANT]\n> Azure API 的结构与核心 API 不同，这意味着响应和参数的静态类型并不总是正确的。\n\n```py\nfrom openai import AzureOpenAI\n\n# 从环境变量 AZURE_OPENAI_API_KEY 中获取 API 密钥\nclient = AzureOpenAI(\n    # https:\u002F\u002Flearn.microsoft.com\u002Fazure\u002Fai-services\u002Fopenai\u002Freference#rest-api-versioning\n    api_version=\"2023-07-01-preview\",\n    # https:\u002F\u002Flearn.microsoft.com\u002Fazure\u002Fcognitive-services\u002Fopenai\u002Fhow-to\u002Fcreate-resource?pivots=web-portal#create-a-resource\n    azure_endpoint=\"https:\u002F\u002Fexample-endpoint.openai.azure.com\",\n)\n\ncompletion = client.chat.completions.create(\n    model=\"deployment-name\",  # 例如 gpt-35-instant\n    messages=[\n    
    {\n            \"role\": \"user\",\n            \"content\": \"如何使用 Python 输出目录中的所有文件？\",\n        },\n    ],\n)\nprint(completion.to_json())\n```\n\n除了基础 `OpenAI` 客户端提供的选项之外，还提供了以下选项：\n\n- `azure_endpoint`（或环境变量 `AZURE_OPENAI_ENDPOINT`）\n- `azure_deployment`\n- `api_version`（或环境变量 `OPENAI_API_VERSION`）\n- `azure_ad_token`（或环境变量 `AZURE_OPENAI_AD_TOKEN`）\n- `azure_ad_token_provider`\n\n使用 Microsoft Entra ID（以前称为 Azure Active Directory）的客户端示例可以在此处找到：[https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fblob\u002Fmain\u002Fexamples\u002Fazure_ad.py](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fblob\u002Fmain\u002Fexamples\u002Fazure_ad.py)。\n\n## 版本控制\n\n本包通常遵循 [SemVer](https:\u002F\u002Fsemver.org\u002Fspec\u002Fv2.0.0.html) 规范，但某些不向后兼容的更改可能会以次要版本发布：\n\n1. 仅影响静态类型而不破坏运行时行为的更改。\n2. 对库内部实现的更改，这些实现虽然在技术上是公开的，但并非设计用于外部使用或未被文档化。 _(如果您依赖于此类内部实现，请通过 GitHub 提交 issue 告知我们。)_\n3. 我们预计在实际应用中不会影响绝大多数用户的更改。\n\n我们非常重视向后兼容性，并努力确保您能够获得顺畅的升级体验。\n\n我们非常欢迎您的反馈，请在 [issue](https:\u002F\u002Fwww.github.com\u002Fopenai\u002Fopenai-python\u002Fissues) 中提出问题、报告 bug 或给出建议。\n\n### 确定已安装的版本\n\n如果您已经升级到最新版本，但仍未看到预期的新功能，则很可能您的 Python 环境仍在使用旧版本。\n\n您可以通过以下方式确定运行时正在使用的版本：\n\n```py\nimport openai\nprint(openai.__version__)\n```\n\n## 系统要求\n\nPython 3.9 或更高版本。\n\n## 贡献\n\n请参阅 [贡献文档](.\u002FCONTRIBUTING.md)。","# OpenAI Python SDK 快速上手指南\n\n## 环境准备\n\n- **Python 版本**：需要 Python 3.9 或更高版本。\n- **API Key**：请提前在 [OpenAI 平台](https:\u002F\u002Fplatform.openai.com\u002Fsettings\u002Forganization\u002Fapi-keys) 获取 API Key。\n- **环境变量**（推荐）：建议安装 `python-dotenv` 并将 Key 存入 `.env` 文件，避免硬编码在代码中。\n  ```sh\n  pip install python-dotenv\n  ```\n  在项目根目录创建 `.env` 文件，内容如下：\n  ```text\n  OPENAI_API_KEY=\"你的 API Key\"\n  ```\n\n## 安装步骤\n\n通过 PyPI 安装官方库：\n\n```sh\npip install openai\n```\n\n> **提示**：如果在国内网络环境下安装较慢，可使用国内镜像源加速：\n> ```sh\n> pip install openai -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n> ```\n\n若需使用异步高性能后端（可选），可安装 aiohttp 支持：\n```sh\npip install openai[aiohttp]\n```\n\n## 基本使用\n\n### 1. 初始化客户端\n\n推荐使用环境变量自动加载 API Key：\n\n```python\nimport os\nfrom openai import OpenAI\n\nclient = OpenAI(\n    # 默认会自动读取 OPENAI_API_KEY 环境变量，也可手动传入\n    api_key=os.environ.get(\"OPENAI_API_KEY\"),\n)\n```\n\n### 2. 调用对话接口 (Chat Completions)\n\n这是最通用的文本生成方式：\n\n```python\ncompletion = client.chat.completions.create(\n    model=\"gpt-4o\",  # 或 gpt-3.5-turbo 等可用模型\n    messages=[\n        {\"role\": \"system\", \"content\": \"你是一个乐于助人的助手。\"},\n        {\"role\": \"user\", \"content\": \"如何用 Python 检查对象是否为类的实例？\"},\n    ],\n)\n\nprint(completion.choices[0].message.content)\n```\n\n### 3. 调用最新 Responses 接口\n\nOpenAI 推出的新一代交互接口，支持更灵活的指令控制：\n\n```python\nresponse = client.responses.create(\n    model=\"gpt-4o\",\n    instructions=\"你是一个像海盗一样说话的编程助手。\",\n    input=\"如何用 Python 检查对象是否为类的实例？\",\n)\n\nprint(response.output_text)\n```\n\n### 4. 
\n\n## 版本控制\n\n本包通常遵循 [SemVer](https:\u002F\u002Fsemver.org\u002Fspec\u002Fv2.0.0.html) 规范，但某些不向后兼容的更改可能会以次要版本发布：\n\n1. 仅影响静态类型而不破坏运行时行为的更改。\n2. 对库内部实现的更改，这些实现虽然在技术上是公开的，但既未写入文档，也不打算供外部使用。 _(如果您依赖于此类内部实现，请通过 GitHub 提交 issue 告知我们。)_\n3. 我们预计在实际应用中不会影响绝大多数用户的更改。\n\n我们非常重视向后兼容性，并努力确保您能够获得顺畅的升级体验。\n\n我们非常欢迎您的反馈，请在 [issue](https:\u002F\u002Fwww.github.com\u002Fopenai\u002Fopenai-python\u002Fissues) 中提出问题、报告 bug 或给出建议。\n\n### 确定已安装的版本\n\n如果您已经升级到最新版本，但仍未看到预期的新功能，则很可能您的 Python 环境仍在使用旧版本。\n\n您可以通过以下方式确定运行时正在使用的版本：\n\n```py\nimport openai\nprint(openai.__version__)\n```\n\n## 系统要求\n\nPython 3.9 或更高版本。\n\n## 贡献\n\n请参阅 [贡献文档](.\u002FCONTRIBUTING.md)。","# OpenAI Python SDK 快速上手指南\n\n## 环境准备\n\n- **Python 版本**：需要 Python 3.9 或更高版本。\n- **API Key**：请提前在 [OpenAI 平台](https:\u002F\u002Fplatform.openai.com\u002Fsettings\u002Forganization\u002Fapi-keys) 获取 API Key。\n- **环境变量**（推荐）：建议安装 `python-dotenv` 并将 Key 存入 `.env` 文件，避免硬编码在代码中；在程序入口处执行 `from dotenv import load_dotenv; load_dotenv()`，Key 即会作为环境变量被载入。\n  ```sh\n  pip install python-dotenv\n  ```\n  在项目根目录创建 `.env` 文件，内容如下：\n  ```text\n  OPENAI_API_KEY=\"你的 API Key\"\n  ```\n\n## 安装步骤\n\n通过 PyPI 安装官方库：\n\n```sh\npip install openai\n```\n\n> **提示**：如果在国内网络环境下安装较慢，可使用国内镜像源加速：\n> ```sh\n> pip install openai -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n> ```\n\n若需使用高性能异步后端（可选），可安装 aiohttp 支持：\n```sh\npip install openai[aiohttp]\n```\n\n## 基本使用\n\n### 1. 初始化客户端\n\n推荐使用环境变量自动加载 API Key：\n\n```python\nimport os\nfrom openai import OpenAI\n\nclient = OpenAI(\n    # 默认会自动读取 OPENAI_API_KEY 环境变量，也可手动传入\n    api_key=os.environ.get(\"OPENAI_API_KEY\"),\n)\n```\n\n### 2. 调用对话接口 (Chat Completions)\n\n这是最通用的文本生成方式：\n\n```python\ncompletion = client.chat.completions.create(\n    model=\"gpt-4o\",  # 或 gpt-3.5-turbo 等可用模型\n    messages=[\n        {\"role\": \"system\", \"content\": \"你是一个乐于助人的助手。\"},\n        {\"role\": \"user\", \"content\": \"如何用 Python 检查对象是否为类的实例？\"},\n    ],\n)\n\nprint(completion.choices[0].message.content)\n```\n\n### 3. 调用最新 Responses 接口\n\nOpenAI 推出的新一代交互接口，支持更灵活的指令控制：\n\n```python\nresponse = client.responses.create(\n    model=\"gpt-4o\",\n    instructions=\"你是一个像海盗一样说话的编程助手。\",\n    input=\"如何用 Python 检查对象是否为类的实例？\",\n)\n\nprint(response.output_text)\n```\n\n### 4. 异步使用 (Async)\n\n如需在高并发场景下使用，可导入 `AsyncOpenAI` 并配合 `async\u002Fawait`：\n\n```python\nimport asyncio\nfrom openai import AsyncOpenAI\n\nclient = AsyncOpenAI()\n\nasync def main():\n    response = await client.responses.create(\n        model=\"gpt-4o\",\n        input=\"向一个聪明的五岁孩子解释去建制主义。\",\n    )\n    print(response.output_text)\n\nasyncio.run(main())\n```","某电商初创公司的后端工程师正在开发一个智能客服系统，需要让 Python 应用实时调用大模型来回答用户关于订单状态和退货政策的复杂咨询。\n\n### 没有 openai-python 时\n- 工程师必须手动编写繁琐的 HTTP 请求代码，自行处理 API 鉴权头、超时重试及连接池管理，极易出错。\n- 缺乏类型提示支持，开发时无法自动补全请求参数（如 `model`、`messages`），导致字段拼写错误频发且难以排查。\n- 处理多模态任务（如用户上传商品破损照片）时，需手动实现图片的 Base64 编码与格式拼接，代码冗余且维护困难。\n- 面对高并发场景，同步阻塞式请求容易拖慢主线程，而手动改造为异步架构工作量巨大且不稳定。\n\n### 使用 openai-python 后\n- 只需实例化 `OpenAI` 客户端并配置环境变量，库内部自动处理鉴权、重试机制及底层网络连接，代码简洁健壮。\n- 完整的类型定义让 IDE 能精准提示所有参数字段，在静态检查阶段即可发现参数错误，显著提升了开发效率与代码质量。\n- 直接通过结构化字典传入文本与图片 URL（或 Base64 数据），openai-python 自动完成多模态内容的格式化封装（见文末示例）。\n- 无缝切换至 `AsyncOpenAI` 客户端配合 `await` 语法，轻松实现高并发非阻塞调用，天然适配现代异步 Web 框架。\n\nopenai-python 将复杂的 API 交互细节封装为直观的 Python 对象，让开发者能专注于业务逻辑而非底层通信协议。
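\n\n下面给出上述多模态调用的一个最小示意（图片地址为虚构示例，仅演示消息结构；`url` 也可以换成 `data:image\u002Fjpeg;base64,...` 形式的 Base64 数据）：\n\n```python\ncompletion = client.chat.completions.create(\n    model=\"gpt-4o\",\n    messages=[\n        {\n            \"role\": \"user\",\n            # content 传入列表即可在一条消息中混合文本与图片等内容块\n            \"content\": [\n                {\"type\": \"text\", \"text\": \"这张商品照片中的物品是否有破损？\"},\n                {\"type\": \"image_url\", \"image_url\": {\"url\": \"https:\u002F\u002Fexample.com\u002Fdamaged-item.jpg\"}},\n            ],\n        }\n    ],\n)\nprint(completion.choices[0].message.content)\n```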
","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopenai_openai-python_46b44785.png","openai","OpenAI","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fopenai_1960bbf4.png","",null,"https:\u002F\u002Fopenai.com\u002F","https:\u002F\u002Fgithub.com\u002Fopenai",[80,84,88,92],{"name":81,"color":82,"percentage":83},"Python","#3572A5",99.9,{"name":85,"color":86,"percentage":87},"Shell","#89e051",0.1,{"name":89,"color":90,"percentage":91},"Dockerfile","#384d54",0,{"name":93,"color":94,"percentage":91},"Ruby","#701516",30368,4692,"2026-04-06T14:42:01","Apache-2.0","Linux, macOS, Windows","未说明",{"notes":102,"python":103,"dependencies":104},"该库是调用 OpenAI REST API 的客户端工具，所有计算均在云端进行，因此本地无需 GPU 或大内存。支持同步和异步调用。使用 Realtime API 时需安装 websockets 库。建议将 API 密钥存储在环境变量中而非代码里。","3.9+",[105,106,107,108,109],"httpx","pydantic","aiohttp (可选)","websockets (用于 Realtime API)","python-dotenv (推荐)",[35,14],[72,112],"python","2026-03-27T02:49:30.150509","2026-04-07T06:12:47.772363",[116,121,126,131,136,140],{"id":117,"question_zh":118,"answer_zh":119,"source_url":120},21020,"使用异步调用（async）时遇到持续超时或请求被取消怎么办？","这通常是因为无意中关闭了客户端或响应对象，或者是与其他异步框架（如 MattermostDriver）冲突导致的。建议检查代码中是否在不该关闭的地方调用了关闭方法。如果问题仅在特定异步环境下出现，尝试在独立的进程中运行以排除干扰。此外，确保没有未处理的 CancelledError 导致程序挂起。","https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fissues\u002F769",{"id":122,"question_zh":123,"answer_zh":124,"source_url":125},21021,"遇到 'openai.error.AuthenticationError: \u003Cempty message>' 错误且消息为空怎么办？","这通常是因为您的 API Key 已被自动撤销。如果您曾将包含 API Key 的代码上传到公共空间（如 GitHub），OpenAI 系统会自动删除该 Key 以保安全。解决方法是：登录 OpenAI 平台重新生成一个新的 API Key，并确保今后不再将其泄露到公共仓库中。","https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fissues\u002F464",{"id":127,"question_zh":128,"answer_zh":129,"source_url":130},21022,"导入 openai 库时报错 'ImportError: cannot import name override from typing_extensions' 如何解决？","这是因为 `typing_extensions` 版本过旧，缺少 `override` 属性。请运行命令 `pip install --upgrade typing_extensions` 升级该依赖包。如果您是在 Databricks 等特定环境中遇到此问题，升级后可能需要运行 `dbutils.library.restartPython()` 重启 Python 环境以生效。","https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fissues\u002F751",{"id":132,"question_zh":133,"answer_zh":134,"source_url":135},21023,"使用 CLI 进行微调时出现 'RuntimeWarning: coroutine ... was never awaited' 警告怎么办？","这是一个已知的异步协程未等待警告，常见于 openai 库的 0.26.0 及更高版本（如 0.27.x）。虽然会出现警告信息，但通常不影响微调任务的正常创建和运行。您可以忽略该警告，或者回退到 0.25.0 版本以消除此警告。","https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fissues\u002F173",{"id":137,"question_zh":138,"answer_zh":139,"source_url":130},21024,"在 Databricks 环境中遇到导入错误或路径冲突该如何处理？","如果在 Databricks 中因默认路径下的 `typing_extensions` 版本不兼容导致导入失败，可以尝试手动调整导入路径。使用 `sys.path.insert(0, os.path.join(os.path.dirname(__file__), \"..\", \"lib\"))` 将正确版本的库路径插入到系统路径最前方，确保导入的是兼容的包而不是环境自带的旧版本。",{"id":141,"question_zh":142,"answer_zh":143,"source_url":125},21025,"为什么我的 API Key 突然失效并导致认证错误？","OpenAI 具有安全机制，一旦检测到 API Key 被提交到公共代码仓库（如 GitHub 公开项目），会自动将该 Key 标记为泄露并立即删除。遇到此情况，必须前往 OpenAI 官网账户设置页面生成新的 Key，并严格检查代码库历史，确保新 Key 不会被再次公开。",[145,150,155,160,165,170,175,180,185,190,195,200,205,210,215,220,225,230,235,240],{"id":146,"version":147,"summary_zh":148,"released_at":149},127082,"v2.30.0","## 2.30.0 (2026-03-25)\n\n完整更新日志：[v2.29.0...v2.30.0](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcompare\u002Fv2.29.0...v2.30.0)\n\n### 功能\n\n* **api:** 在 Click\u002FDoubleClick\u002FDrag\u002FMove\u002FScroll 计算机操作中添加 keys 字段 ([ee1bbed](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002Fee1bbeddbb38dab817557412dc106354409bb950))\n\n\n### 错误修复\n\n* **api:** 将 SDK 响应类型与扩展的项目模式对齐 ([f3f258a](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002Ff3f258a9d4d19db3fb0c6c35e25ad3cedbe71254))\n* 对端点路径参数进行净化处理（sanitization）([89f6698](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002F89f66988fde790c0c83ff8b876d1e1b10d616367))\n* **types:** 使 ResponseInputMessageItem 中的 type 成为必填项 ([cfdb167](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002Fcfdb1676ea0550840330a58f1a31a40a41a0a53f))\n\n\n### 杂项\n\n* **ci:** 跳过仅涉及元数据更改的 lint 检查 ([faa93e1](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002Ffaa93e19a1d5c30c7dd672a08dbbdbb3c0374714))\n* **internal:** 更新 .gitignore 文件 ([c468477](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002Fc468477f1546579618865a726e35a685cffeacd9))\n* **tests:** 将 steady 升级至 v0.19.4 ([f350af8](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002Ff350af86c13ade0237778010d264c55fda443354))\n* **tests:** 将 steady 升级至 v0.19.5 ([5c03401](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002F5c0340128fc1a416e2dfdc6ab4b05f1e954e8482))\n* **tests:** 将 steady 升级至 v0.19.6 ([b6353b8](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002Fb6353b8411d31dcc95875d801ce9e90a21e0fd52))\n* **tests:** 将 steady 升级至 v0.19.7 ([1d654be](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002F1d654bea74ac9c3d43302587f98f33cfff502e48))\n\n\n### 重构\n\n* **tests:** 从 prism 切换到 steady ([4a82035](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002F4a82035669b739d16a0e85d4ded778d51e061948))","2026-03-25T22:08:18",{"id":151,"version":152,"summary_zh":153,"released_at":154},127083,"v2.29.0","## 2.29.0 (2026-03-17)\n\n完整变更日志：[v2.28.0...v2.29.0](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcompare\u002Fv2.28.0...v2.29.0)\n\n### 功能\n\n* **api:** 新增 5.4 nano 和 mini 模型标识符（slug）([3b45666](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002F3b456661f77ca3196aceb5ab3350664a63481114))\n* **api:** 在批次创建方法中添加 `\u002Fv1\u002Fvideos`
端点（[c0e7a16](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002Fc0e7a161a996854021e9eb69ea2a60ca0d08047f)）\n* **api:** 向 `ToolFunction` 添加 `defer_loading` 字段（[3167595](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002F3167595432bdda2f90721901d30ad316db49323e)）\n* **api:** 向 `ComparisonFilter` 类型添加 `in` 和 `nin` 运算符（[664f02b](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002F664f02b051af84e1ca3fa313981ec72fdea269b3)）\n\n\n### 错误修复\n\n* **deps:** 提升 `typing-extensions` 的最低版本号（[a2fb2ca](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002Fa2fb2ca55142c6658a18be7bd1392a01f5a83f35)）\n* **pydantic:** 仅在明确设置时才传递 `by_alias` 参数（[8ebe8fb](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002F8ebe8fbcb011c6a005a715cae50c6400a8596ee0)）\n\n\n### 杂项\n\n* **internal:** 调整CI分支配置（[96ccc3c](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002F96ccc3cca35645fd3140f99b0fc8e55545065212)）","2026-03-17T17:53:05",{"id":156,"version":157,"summary_zh":158,"released_at":159},127084,"v2.28.0","## 2.28.0 (2026-03-13)\n\n完整更新日志：[v2.27.0...v2.28.0](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcompare\u002Fv2.27.0...v2.28.0)\n\n### 功能\n\n* **api:** 自定义语音（[50dc060](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002F50dc060b55767615419219ef567d31210517e613)）","2026-03-13T19:55:50",{"id":161,"version":162,"summary_zh":163,"released_at":164},127085,"v2.27.0","## 2.27.0 (2026-03-13)\n\n完整变更日志：[v2.26.0...v2.27.0](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcompare\u002Fv2.26.0...v2.27.0)\n\n### 功能\n\n* **api:** API 更新 ([60ab24a](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002F60ab24ae722a7fa280eb4b2273da4ded1f930231))\n* **api:** 手动更新 ([b244b09](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002Fb244b0946045aaa0dbfa8c0ce5164b64e1156834))\n* **api:** 手动更新 ([d806635](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002Fd806635081a736cc81344bf1e62b57956a88d093))\n* **api:** Sora API 改进：角色 API、视频扩展\u002F编辑功能，以及更高分辨率的导出选项。([58b70d3](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002F58b70d304a4b2cf70eae4db4b448d439fc8b8ba3))\n\n\n### 错误修复\n\n* **api:** 修复合并视频资源问题 ([742d8ee](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002F742d8ee1f969ee1bbb39ba9d799dcd5c480d8ddb))\n\n\n### 杂项\n\n* **内部:** 与代码生成相关的更新 ([4e6498e](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002F4e6498e2d222dd35d76bb397ba976ff53c852e12))\n* **内部:** 与代码生成相关的更新 ([93af129](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002F93af129e8919de6d3aee19329c8bdef0532bd20a))\n* 将 HTTP 协议与 WS 协议匹配，而非 WSS ([026f9de](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002F026f9de35d2aa74f35c91261eb5ea43d4ab1b8ba))\n* 对 WebSocket 使用正确的大小写 ([a2f9b07](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002Fa2f9b0722597627e8d01aa05c27a52015072726b))","2026-03-13T19:15:57",{"id":166,"version":167,"summary_zh":168,"released_at":169},127086,"v2.26.0","## 2.26.0 (2026-03-05)\n\n完整更新日志：[v2.25.0...v2.26.0](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcompare\u002Fv2.25.0...v2.26.0)\n\n### 功能\n\n* **api:** GA ComputerTool 现在使用 ComputerTool 类。'computer_use_preview' 工具已移至 ComputerUsePreview 
([78f5b3c](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002F78f5b3c287b71ed6fbeb71fb6b5c0366db704cd2))","2026-03-05T23:16:58",{"id":171,"version":172,"summary_zh":173,"released_at":174},127087,"v2.25.0","## 2.25.0 (2026-03-05)\n\n完整变更日志：[v2.24.0...v2.25.0](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcompare\u002Fv2.24.0...v2.25.0)\n\n### 功能\n\n* **api:** gpt-5.4、工具搜索工具以及新的计算机工具 ([6b2043f](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002F6b2043f3d63058f5582eab7a7705b30a3d5536f0))\n* **api:** 从响应中移除 prompt_cache_key 参数，从消息类型中移除 phase 字段 ([44fb382](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002F44fb382698872d98d5f72c880b47846c7b594f4f))\n\n\n### 错误修复\n\n* **api:** 内部模式修复 ([0c0f970](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002F0c0f970cbd164131bf06f7ab38f170bbcb323683))\n* **api:** 手动更新 ([9fc323f](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002F9fc323f4da6cfca9de194e12c1486a3cd1bfa4b5))\n* **api:** 重新添加 phase 字段 ([1b27b5a](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002F1b27b5a834f5cb75f80c597259d0df0352ba83bd))\n\n\n### 杂项\n\n* **内部:** 与代码生成相关的更新 ([bdb837d](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002Fbdb837d2c1d2a161cc4b22ef26e9e8446d5dc2a3))\n* **内部:** 与代码生成相关的更新 ([b1de941](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002Fb1de9419a68fd6fb97a63f415fb3d1e5851582cb))\n* **内部:** 减少警告信息 ([7cdbd06](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002F7cdbd06d3ca41af64d616b4b4bb61226cc38b662))","2026-03-05T18:34:55",{"id":176,"version":177,"summary_zh":178,"released_at":179},127088,"v2.24.0","## 2.24.0 (2026-02-24)\n\n完整变更日志：[v2.23.0...v2.24.0](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcompare\u002Fv2.23.0...v2.24.0)\n\n### 功能\n\n* **api:** 添加 phase ([391deb9](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002F391deb99f6a92e51bffb25efd8dfe367d144bb9d))\n\n\n### 错误修复\n\n* **api:** 修复 phase 枚举 ([42ebf7c](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002F42ebf7c30b7e27a175c0d75fcf42c8dc858e56d6))\n* **api:** phase 文档 ([7ddc61c](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002F7ddc61cd0f7825d5e7f3a10daf809135511d8d20))\n\n\n### 杂项\n\n* **内部:** 使 `test_proxy_environment_variables` 对环境变量更具有鲁棒性 ([65af8fd](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002F65af8fd8550e99236e3f4dcb035312441788157a))\n* **内部:** 重构 SSE 事件解析 ([2344600](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002F23446008f06fb474d8c75d14a1bce26f4c5b95d8))","2026-02-24T20:01:29",{"id":181,"version":182,"summary_zh":183,"released_at":184},127089,"v2.23.0","## 2.23.0 (2026-02-24)\n\n完整变更日志：[v2.22.0...v2.23.0](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcompare\u002Fv2.22.0...v2.23.0)\n\n### 功能\n\n* **api:** 在实时通话中添加 gpt-realtime-1.5 和 gpt-audio-1.5 模型选项 ([3300b61](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002F3300b61e1d5a34c9d28ec9cebbebd0de1fa93aa6))\n\n\n### 维护\n\n* **内部:** 使 `test_proxy_environment_variables` 更加稳定 ([6b441e2](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002F6b441e2c43df60a773f62308e918d76b8eb3c4d3))","2026-02-24T03:19:39",{"id":186,"version":187,"summary_zh":188,"released_at":189},127090,"v2.22.0","## 2.22.0 
(2026-02-23)\n\n完整更新日志：[v2.21.0...v2.22.0](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcompare\u002Fv2.21.0...v2.22.0)\n\n### 功能\n\n* **API:** 响应 API 支持 WebSocket ([c01f6fb](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002Fc01f6fb0d55b7454f73c4904ea7a1954553085dc))\n\n\n### 维护工作\n\n* **内部:** 向 SSE 类添加请求选项 ([cdb4315](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002Fcdb4315ee29d5260bb373625d74cb523b4e3859c))\n* 更新模拟服务器文档 ([91f4da8](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002F91f4da80ec3dba5d3566961560dfd6feb9c2feb0))\n\n\n### 文档\n\n* **API:** 在 file_batches 参数说明中添加批处理大小限制 ([16ae76a](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002F16ae76a20a47f94c91ee2ca0b2ada274633abab3))\n* **API:** 优化音频、聊天、实时、技能、上传、视频等模块的方法说明 ([21f9e5a](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002F21f9e5aaf6ae27f0235fddb3ffa30fe73337f59b))\n* **API:** 更新聊天完成和响应中的 safety_identifier 文档 ([d74bfff](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002Fd74bfff62c1c2b32d4dc88fd47ae7b1b2a962017))","2026-02-23T20:13:52",{"id":191,"version":192,"summary_zh":193,"released_at":194},127091,"v2.21.0","## 2.21.0 (2026-02-13)\n\n完整变更日志：[v2.20.0...v2.21.0](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcompare\u002Fv2.20.0...v2.21.0)\n\n### 功能\n\n* **api:** 容器 network_policy 和 skills ([d19de2e](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002Fd19de2ee5c74413f9dc52684b650df1898dee82b))\n\n\n### 错误修复\n\n* **结构化输出:** 解决解析方法中的内存泄漏问题 ([#2860](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fissues\u002F2860)) ([6dcbe21](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002F6dcbe211f12f8470db542a5cb95724cb933786dd))\n* **webhooks:** 保留方法的可见性以进行兼容性检查 ([44a8936](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002F44a8936d580b770f23fae79659101a27eadafad6))\n\n\n### 杂项\n\n* **内部:** 修复 Python 3.14 上的 lint 错误 ([534f215](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002F534f215941f504443d63509e872409a0b1236452))\n\n\n### 文档\n\n* 按独立资源拆分 `api.md` ([96e41b3](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002F96e41b398a110212ddec71436b2439343bea87d4))\n* 更新注释 ([63def23](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002F63def23b7acd5c6dacf03337fe1bd08439d1dba8))","2026-02-14T00:11:26",{"id":196,"version":197,"summary_zh":198,"released_at":199},127092,"v2.20.0","## 2.20.0 (2026-02-10)\n\nFull Changelog: [v2.19.0...v2.20.0](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcompare\u002Fv2.19.0...v2.20.0)\n\n### Features\n\n* **api:** support for images in batch api ([28edb6e](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002F28edb6e1b7eb30dbb7be49979cee7882e8889264))","2026-02-10T19:02:11",{"id":201,"version":202,"summary_zh":203,"released_at":204},127093,"v2.19.0","## 2.19.0 (2026-02-10)\n\nFull Changelog: [v2.18.0...v2.19.0](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcompare\u002Fv2.18.0...v2.19.0)\n\n### Features\n\n* **api:** skills and hosted shell ([27fdf68](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002F27fdf6820655b5994e3c1eddb3c8d9344a8be744))\n\n\n### Chores\n\n* **internal:** bump dependencies 
([fae10fd](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002Ffae10fd6e936a044f8393a454a39906aa325a893))","2026-02-10T18:20:53",{"id":206,"version":207,"summary_zh":208,"released_at":209},127094,"v2.18.0","## 2.18.0 (2026-02-09)\n\nFull Changelog: [v2.17.0...v2.18.0](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcompare\u002Fv2.17.0...v2.18.0)\n\n### Features\n\n* **api:** add context_management to responses ([137e992](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002F137e992b80956401d1867274fa7a0969edfdba54))\n* **api:** responses context_management ([c3bd017](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002Fc3bd017318347af0a0105a7e975c8d91e22f7941))","2026-02-09T21:41:36",{"id":211,"version":212,"summary_zh":213,"released_at":214},127095,"v2.17.0","## 2.17.0 (2026-02-05)\n\nFull Changelog: [v2.16.0...v2.17.0](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcompare\u002Fv2.16.0...v2.17.0)\n\n### Features\n\n* **api:** add shell_call_output status field ([1bbaf88](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002F1bbaf8865000b338c24c9fdd5e985183feaca10f))\n* **api:** image generation actions for responses; ResponseFunctionCallArgumentsDoneEvent.name ([7d96513](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002F7d965135f93f41b0c3dbf3dc9f01796bd9645b6c))\n* **client:** add custom JSON encoder for extended type support ([9f43c8b](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002F9f43c8b1a1641db2336cc6d0ec0c6dc470a89103))\n\n\n### Bug Fixes\n\n* **client:** undo change to web search Find action ([8f14eb0](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002F8f14eb0a74363fdfc648c5cd5c6d34a85b938d3c))\n* **client:** update type for `find_in_page` action ([ec54dde](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002Fec54ddeb357e49edd81cc3fe53d549c297e59a07))","2026-02-05T16:26:56",{"id":216,"version":217,"summary_zh":218,"released_at":219},127096,"v2.16.0","## 2.16.0 (2026-01-27)\n\nFull Changelog: [v2.15.0...v2.16.0](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcompare\u002Fv2.15.0...v2.16.0)\n\n### Features\n\n* **api:** api update ([b97f9f2](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002Fb97f9f26b9c46ca4519130e60a8bf12ad8d52bf3))\n* **api:** api updates ([9debcc0](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002F9debcc02370f5b76a6a609ded18fbf8dea87b9cb))\n* **client:** add support for binary request streaming ([49561d8](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002F49561d88279628bc400d1b09aa98765b67018ef1))\n\n\n### Bug Fixes\n\n* **api:** mark assistants as deprecated ([0419cbc](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002F0419cbcbf1021131c7492321436ed01ca4337835))\n\n\n### Chores\n\n* **ci:** upgrade `actions\u002Fgithub-script` ([5139f13](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002F5139f13ef35e64dadc65f2ba2bab736977985769))\n* **internal:** update `actions\u002Fcheckout` version ([f276714](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002Ff2767144c11833070c0579063ed33918089b4617))\n\n\n### Documentation\n\n* **examples:** update Azure Realtime sample to use v1 API 
([#2829](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fissues\u002F2829)) ([3b31981](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002F3b319819544d629c5b8c206b8b1f6ec6328c6136))","2026-01-27T23:27:23",{"id":221,"version":222,"summary_zh":223,"released_at":224},127097,"v2.15.0","## 2.15.0 (2026-01-09)\n\nFull Changelog: [v2.14.0...v2.15.0](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcompare\u002Fv2.14.0...v2.15.0)\n\n### Features\n\n* **api:** add new Response completed_at prop ([f077752](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002Ff077752f4a8364a74f784f8fb1cbe31277e1762b))\n\n\n### Chores\n\n* **internal:** codegen related update ([e7daba6](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002Fe7daba6662a3c30f73d991e96cb19d2b54d772e0))","2026-01-09T22:09:31",{"id":226,"version":227,"summary_zh":228,"released_at":229},127098,"v2.14.0","## 2.14.0 (2025-12-19)\n\nFull Changelog: [v2.13.0...v2.14.0](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcompare\u002Fv2.13.0...v2.14.0)\n\n### Features\n\n* **api:** slugs for new audio models; make all `model` params accept strings ([e517792](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002Fe517792b58d1768cfb3432a555a354ae0a9cfa21))\n\n\n### Bug Fixes\n\n* use async_to_httpx_files in patch method ([a6af9ee](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002Fa6af9ee5643197222f328d5e73a80ab3515c32e2))\n\n\n### Chores\n\n* **internal:** add `--fix` argument to lint script ([93107ef](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002F93107ef36abcfd9c6b1419533a1720031f03caec))","2025-12-19T03:28:06",{"id":231,"version":232,"summary_zh":233,"released_at":234},127099,"v2.13.0","## 2.13.0 (2025-12-16)\n\nFull Changelog: [v2.12.0...v2.13.0](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcompare\u002Fv2.12.0...v2.13.0)\n\n### Features\n\n* **api:** gpt-image-1.5 ([1c88f03](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002F1c88f03bb48aa67426744e5b74f6197f30f61c73))\n\n\n### Chores\n\n* **ci:** add CI job to detect breaking changes with the Agents SDK ([#1436](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fissues\u002F1436)) ([237c91e](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002F237c91ee6738b6764b139fd7afa68294d3ee0153))\n* **internal:** add missing files argument to base client ([e6d6fd5](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002Fe6d6fd5989d76358ea5d9abb5949aa87646cbef6))","2025-12-16T18:19:05",{"id":236,"version":237,"summary_zh":238,"released_at":239},127100,"v2.12.0","## 2.12.0 (2025-12-15)\n\nFull Changelog: [v2.11.0...v2.12.0](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcompare\u002Fv2.11.0...v2.12.0)\n\n### Features\n\n* **api:** api update ([a95c4d0](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002Fa95c4d0952ff5eb767206574e687cb029a49a4ab))\n* **api:** fix grader input list, add dated slugs for sora-2 ([b2c389b](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002Fb2c389bf5c3bde50bac2d9f60cce58f4aef44a41))","2025-12-15T16:16:32",{"id":241,"version":242,"summary_zh":243,"released_at":244},127101,"v2.11.0","## 2.11.0 (2025-12-11)\n\nFull Changelog: 
[v2.10.0...v2.11.0](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcompare\u002Fv2.10.0...v2.11.0)\n\n### Features\n\n* **api:** gpt 5.2 ([dd9b8e8](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-python\u002Fcommit\u002Fdd9b8e85cf91fe0d7470143fba10fe950ec740c4))","2025-12-11T18:18:06"]