[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-openai--openai-node":3,"tool-openai--openai-node":64},[4,17,25,39,48,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,14,15],"开发框架","Agent","语言模型","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":10,"last_commit_at":23,"category_tags":24,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,15],{"id":26,"name":27,"github_repo":28,"description_zh":29,"stars":30,"difficulty_score":10,"last_commit_at":31,"category_tags":32,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[33,34,35,36,14,37,15,13,38],"图像","数据工具","视频","插件","其他","音频",{"id":40,"name":41,"github_repo":42,"description_zh":43,"stars":44,"difficulty_score":45,"last_commit_at":46,"category_tags":47,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,3,"2026-04-04T04:44:48",[14,33,13,15,37],{"id":49,"name":50,"github_repo":51,"description_zh":52,"stars":53,"difficulty_score":45,"last_commit_at":54,"category_tags":55,"status":16},519,"PaddleOCR","PaddlePaddle\u002FPaddleOCR","PaddleOCR 是一款基于百度飞桨框架开发的高性能开源光学字符识别工具包。它的核心能力是将图片、PDF 等文档中的文字提取出来，转换成计算机可读取的结构化数据，让机器真正“看懂”图文内容。\n\n面对海量纸质或电子文档，PaddleOCR 解决了人工录入效率低、数字化成本高的问题。尤其在人工智能领域，它扮演着连接图像与大型语言模型（LLM）的桥梁角色，能将视觉信息直接转化为文本输入，助力智能问答、文档分析等应用场景落地。\n\nPaddleOCR 适合开发者、算法研究人员以及有文档自动化需求的普通用户。其技术优势十分明显：不仅支持全球 100 多种语言的识别，还能在 
Windows、Linux、macOS 等多个系统上运行，并灵活适配 CPU、GPU、NPU 等各类硬件。作为一个轻量级且社区活跃的开源项目，PaddleOCR 既能满足快速集成的需求，也能支撑前沿的视觉语言研究，是处理文字识别任务的理想选择。",74913,"2026-04-05T10:44:17",[15,33,13,37],{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":45,"last_commit_at":62,"category_tags":63,"status":16},2181,"OpenHands","OpenHands\u002FOpenHands","OpenHands 是一个专注于 AI 驱动开发的开源平台，旨在让智能体（Agent）像人类开发者一样理解、编写和调试代码。它解决了传统编程中重复性劳动多、环境配置复杂以及人机协作效率低等痛点，通过自动化流程显著提升开发速度。\n\n无论是希望提升编码效率的软件工程师、探索智能体技术的研究人员，还是需要快速原型验证的技术团队，都能从中受益。OpenHands 提供了灵活多样的使用方式：既可以通过命令行（CLI）或本地图形界面在个人电脑上轻松上手，体验类似 Devin 的流畅交互；也能利用其强大的 Python SDK 自定义智能体逻辑，甚至在云端大规模部署上千个智能体并行工作。\n\n其核心技术亮点在于模块化的软件智能体 SDK，这不仅构成了平台的引擎，还支持高度可组合的开发模式。此外，OpenHands 在 SWE-bench 基准测试中取得了 77.6% 的优异成绩，证明了其解决真实世界软件工程问题的能力。平台还具备完善的企业级功能，支持与 Slack、Jira 等工具集成，并提供细粒度的权限管理，适合从个人开发者到大型企业的各类用户场景。",70612,"2026-04-05T11:12:22",[15,14,13,36],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":79,"owner_email":79,"owner_twitter":79,"owner_website":80,"owner_url":81,"languages":82,"stars":102,"forks":103,"last_commit_at":104,"license":105,"difficulty_score":10,"env_os":106,"env_gpu":107,"env_ram":106,"env_deps":108,"category_tags":113,"github_topics":114,"view_count":117,"oss_zip_url":79,"oss_zip_packed_at":79,"status":16,"created_at":118,"updated_at":119,"faqs":120,"releases":150},572,"openai\u002Fopenai-node","openai-node","Official JavaScript \u002F TypeScript library for the OpenAI API","openai-node 是 OpenAI 官方推出的 JavaScript 与 TypeScript 开发库，专为简化接入 OpenAI 大模型服务而设计。它屏蔽了底层网络请求的复杂性，让开发者能够专注于业务逻辑，轻松实现文本生成、对话交互及文件处理等功能。\n\n对于使用 Node.js 或浏览器的开发者来说，openai-node 解决了手动编写 API 请求、管理密钥安全以及处理数据格式的难题。无论是搭建聊天机器人还是为现有应用添加智能功能，仅需少量代码即可完成集成。\n\n它的优势在于同时支持最新的 Responses API 和经典的 Chat Completions API，并提供原生的流式响应支持，使实时对话更加流畅自然。此外，openai-node 在文件上传方面也非常灵活，可适配本地文件、网络资源等多种来源。配合 TypeScript 的类型检查，它能有效减少代码错误，提升开发效率，是构建 AI 应用时的可靠选择。","# OpenAI TypeScript and JavaScript API Library\n\n[![NPM version](\u003Chttps:\u002F\u002Fimg.shields.io\u002Fnpm\u002Fv\u002Fopenai.svg?label=npm%20(stable)>)](https:\u002F\u002Fnpmjs.org\u002Fpackage\u002Fopenai) ![npm bundle size](https:\u002F\u002Fimg.shields.io\u002Fbundlephobia\u002Fminzip\u002Fopenai) [![JSR Version](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopenai_openai-node_readme_d496b688830b.png)](https:\u002F\u002Fjsr.io\u002F@openai\u002Fopenai)\n\nThis library provides convenient access to the OpenAI REST API from TypeScript or JavaScript.\n\nIt is generated from our [OpenAPI specification](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-openapi) with [Stainless](https:\u002F\u002Fstainlessapi.com\u002F).\n\nTo learn how to use the OpenAI API, check out our [API Reference](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference) and [Documentation](https:\u002F\u002Fplatform.openai.com\u002Fdocs).\n\n## Installation\n\n```sh\nnpm install openai\n```\n\n### Installation from JSR\n\n```sh\ndeno add jsr:@openai\u002Fopenai\nnpx jsr add @openai\u002Fopenai\n```\n\nThese commands will make the module importable from the `@openai\u002Fopenai` scope. 
You can also [import directly from JSR](https:\u002F\u002Fjsr.io\u002Fdocs\u002Fusing-packages#importing-with-jsr-specifiers) without an install step if you're using the Deno JavaScript runtime:\n\n```ts\nimport OpenAI from 'jsr:@openai\u002Fopenai';\n```\n\n## Usage\n\nThe full API of this library can be found in [api.md file](api.md) along with many [code examples](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Ftree\u002Fmaster\u002Fexamples).\n\nThe primary API for interacting with OpenAI models is the [Responses API](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fresponses). You can generate text from the model with the code below.\n\n```ts\nimport OpenAI from 'openai';\n\nconst client = new OpenAI({\n  apiKey: process.env['OPENAI_API_KEY'], \u002F\u002F This is the default and can be omitted\n});\n\nconst response = await client.responses.create({\n  model: 'gpt-5.2',\n  instructions: 'You are a coding assistant that talks like a pirate',\n  input: 'Are semicolons optional in JavaScript?',\n});\n\nconsole.log(response.output_text);\n```\n\nThe previous standard (supported indefinitely) for generating text is the [Chat Completions API](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fchat). You can use that API to generate text from the model with the code below.\n\n```ts\nimport OpenAI from 'openai';\n\nconst client = new OpenAI({\n  apiKey: process.env['OPENAI_API_KEY'], \u002F\u002F This is the default and can be omitted\n});\n\nconst completion = await client.chat.completions.create({\n  model: 'gpt-5.2',\n  messages: [\n    { role: 'developer', content: 'Talk like a pirate.' },\n    { role: 'user', content: 'Are semicolons optional in JavaScript?' },\n  ],\n});\n\nconsole.log(completion.choices[0].message.content);\n```\n\n## Streaming responses\n\nWe provide support for streaming responses using Server Sent Events (SSE).\n\n```ts\nimport OpenAI from 'openai';\n\nconst client = new OpenAI();\n\nconst stream = await client.responses.create({\n  model: 'gpt-5.2',\n  input: 'Say \"Sheep sleep deep\" ten times fast!',\n  stream: true,\n});\n\nfor await (const event of stream) {\n  console.log(event);\n}\n```\n\n## File uploads\n\nRequest parameters that correspond to file uploads can be passed in many different forms:\n\n- `File` (or an object with the same structure)\n- a `fetch` `Response` (or an object with the same structure)\n- an `fs.ReadStream`\n- the return value of our `toFile` helper\n\n```ts\nimport fs from 'fs';\nimport OpenAI, { toFile } from 'openai';\n\nconst client = new OpenAI();\n\n\u002F\u002F If you have access to Node `fs` we recommend using `fs.createReadStream()`:\nawait client.files.create({ file: fs.createReadStream('input.jsonl'), purpose: 'fine-tune' });\n\n\u002F\u002F Or if you have the web `File` API you can pass a `File` instance:\nawait client.files.create({ file: new File(['my bytes'], 'input.jsonl'), purpose: 'fine-tune' });\n\n\u002F\u002F You can also pass a `fetch` `Response`:\nawait client.files.create({\n  file: await fetch('https:\u002F\u002Fsomesite\u002Finput.jsonl'),\n  purpose: 'fine-tune',\n});\n\n\u002F\u002F Finally, if none of the above are convenient, you can use our `toFile` helper:\nawait client.files.create({\n  file: await toFile(Buffer.from('my bytes'), 'input.jsonl'),\n  purpose: 'fine-tune',\n});\nawait client.files.create({\n  file: await toFile(new Uint8Array([0, 1, 2]), 'input.jsonl'),\n  purpose: 'fine-tune',\n});\n```\n\n## Webhook Verification\n\nVerifying 
webhook signatures is _optional but encouraged_.\n\nFor more information about webhooks, see [the API docs](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Fwebhooks).\n\n### Parsing webhook payloads\n\nFor most use cases, you will likely want to verify the webhook and parse the payload at the same time. To achieve this, we provide the method `client.webhooks.unwrap()`, which parses a webhook request and verifies that it was sent by OpenAI. This method will throw an error if the signature is invalid.\n\nNote that the `body` parameter must be the raw JSON string sent from the server (do not parse it first). The `.unwrap()` method will parse this JSON for you into an event object after verifying the webhook was sent from OpenAI.\n\n```ts\nimport { headers } from 'next\u002Fheaders';\nimport OpenAI from 'openai';\n\nconst client = new OpenAI({\n  webhookSecret: process.env.OPENAI_WEBHOOK_SECRET, \u002F\u002F env var used by default; explicit here.\n});\n\nexport async function webhook(request: Request) {\n  const headersList = headers();\n  const body = await request.text();\n\n  try {\n    const event = client.webhooks.unwrap(body, headersList);\n\n    switch (event.type) {\n      case 'response.completed':\n        console.log('Response completed:', event.data);\n        break;\n      case 'response.failed':\n        console.log('Response failed:', event.data);\n        break;\n      default:\n        console.log('Unhandled event type:', event.type);\n    }\n\n    return Response.json({ message: 'ok' });\n  } catch (error) {\n    console.error('Invalid webhook signature:', error);\n    return new Response('Invalid signature', { status: 400 });\n  }\n}\n```\n\n### Verifying webhook payloads directly\n\nIn some cases, you may want to verify the webhook separately from parsing the payload. If you prefer to handle these steps separately, we provide the method `client.webhooks.verifySignature()` to _only verify_ the signature of a webhook request. Like `.unwrap()`, this method will throw an error if the signature is invalid.\n\nNote that the `body` parameter must be the raw JSON string sent from the server (do not parse it first). 
You will then need to parse the body after verifying the signature.\n\n```ts\nimport { headers } from 'next\u002Fheaders';\nimport OpenAI from 'openai';\n\nconst client = new OpenAI({\n  webhookSecret: process.env.OPENAI_WEBHOOK_SECRET, \u002F\u002F env var used by default; explicit here.\n});\n\nexport async function webhook(request: Request) {\n  const headersList = headers();\n  const body = await request.text();\n\n  try {\n    client.webhooks.verifySignature(body, headersList);\n\n    \u002F\u002F Parse the body after verification\n    const event = JSON.parse(body);\n    console.log('Verified event:', event);\n\n    return Response.json({ message: 'ok' });\n  } catch (error) {\n    console.error('Invalid webhook signature:', error);\n    return new Response('Invalid signature', { status: 400 });\n  }\n}\n```\n\n## Handling errors\n\nWhen the library is unable to connect to the API,\nor if the API returns a non-success status code (i.e., 4xx or 5xx response),\na subclass of `APIError` will be thrown:\n\n\u003C!-- prettier-ignore -->\n```ts\nconst job = await client.fineTuning.jobs\n  .create({ model: 'gpt-4o', training_file: 'file-abc123' })\n  .catch(async (err) => {\n    if (err instanceof OpenAI.APIError) {\n      console.log(err.request_id);\n      console.log(err.status); \u002F\u002F 400\n      console.log(err.name); \u002F\u002F BadRequestError\n      console.log(err.headers); \u002F\u002F {server: 'nginx', ...}\n    } else {\n      throw err;\n    }\n  });\n```\n\nError codes are as follows:\n\n| Status Code | Error Type                 |\n| ----------- | -------------------------- |\n| 400         | `BadRequestError`          |\n| 401         | `AuthenticationError`      |\n| 403         | `PermissionDeniedError`    |\n| 404         | `NotFoundError`            |\n| 422         | `UnprocessableEntityError` |\n| 429         | `RateLimitError`           |\n| >=500       | `InternalServerError`      |\n| N\u002FA         | `APIConnectionError`       |\n\n### Retries\n\nCertain errors will be automatically retried 2 times by default, with a short exponential backoff.\nConnection errors (for example, due to a network connectivity problem), 408 Request Timeout, 409 Conflict,\n429 Rate Limit, and >=500 Internal errors will all be retried by default.\n\nYou can use the `maxRetries` option to configure or disable this:\n\n\u003C!-- prettier-ignore -->\n```js\n\u002F\u002F Configure the default for all requests:\nconst client = new OpenAI({\n  maxRetries: 0, \u002F\u002F default is 2\n});\n\n\u002F\u002F Or, configure per-request:\nawait client.chat.completions.create({ messages: [{ role: 'user', content: 'How can I get the name of the current day in JavaScript?' }], model: 'gpt-5.2' }, {\n  maxRetries: 5,\n});\n```\n\n### Timeouts\n\nRequests time out after 10 minutes by default. You can configure this with a `timeout` option:\n\n\u003C!-- prettier-ignore -->\n```ts\n\u002F\u002F Configure the default for all requests:\nconst client = new OpenAI({\n  timeout: 20 * 1000, \u002F\u002F 20 seconds (default is 10 minutes)\n});\n\n\u002F\u002F Override per-request:\nawait client.chat.completions.create({ messages: [{ role: 'user', content: 'How can I list all files in a directory using Python?' 
}], model: 'gpt-5.2' }, {\n  timeout: 5 * 1000,\n});\n```\n\nOn timeout, an `APIConnectionTimeoutError` is thrown.\n\nNote that requests which time out will be [retried twice by default](#retries).\n\n## Request IDs\n\n> For more information on debugging requests, see [these docs](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fdebugging-requests)\n\nAll object responses in the SDK provide a `_request_id` property which is added from the `x-request-id` response header so that you can quickly log failing requests and report them back to OpenAI.\n\n```ts\nconst response = await client.responses.create({ model: 'gpt-5.2', input: 'testing 123' });\nconsole.log(response._request_id); \u002F\u002F req_123\n```\n\nYou can also access the Request ID using the `.withResponse()` method:\n\n```ts\nconst { data: stream, request_id } = await openai.responses\n  .create({\n    model: 'gpt-5.2',\n    input: 'Say this is a test',\n    stream: true,\n  })\n  .withResponse();\n```\n\n## Auto-pagination\n\nList methods in the OpenAI API are paginated.\nYou can use the `for await … of` syntax to iterate through items across all pages:\n\n```ts\nasync function fetchAllFineTuningJobs(params) {\n  const allFineTuningJobs = [];\n  \u002F\u002F Automatically fetches more pages as needed.\n  for await (const fineTuningJob of client.fineTuning.jobs.list({ limit: 20 })) {\n    allFineTuningJobs.push(fineTuningJob);\n  }\n  return allFineTuningJobs;\n}\n```\n\nAlternatively, you can request a single page at a time:\n\n```ts\nlet page = await client.fineTuning.jobs.list({ limit: 20 });\nfor (const fineTuningJob of page.data) {\n  console.log(fineTuningJob);\n}\n\n\u002F\u002F Convenience methods are provided for manually paginating:\nwhile (page.hasNextPage()) {\n  page = await page.getNextPage();\n  \u002F\u002F ...\n}\n```\n\n## Realtime API\n\nThe Realtime API enables you to build low-latency, multi-modal conversational experiences. It currently supports text and audio as both input and output, as well as [function calling](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Ffunction-calling) through a `WebSocket` connection.\n\n```ts\nimport { OpenAIRealtimeWebSocket } from 'openai\u002Frealtime\u002Fwebsocket';\n\nconst rt = new OpenAIRealtimeWebSocket({ model: 'gpt-realtime' });\n\nrt.on('response.text.delta', (event) => process.stdout.write(event.delta));\n```\n\nFor more information see [realtime.md](realtime.md).\n\n## Microsoft Azure OpenAI\n\nTo use this library with [Azure OpenAI](https:\u002F\u002Flearn.microsoft.com\u002Fazure\u002Fai-services\u002Fopenai\u002Foverview), use the `AzureOpenAI`\nclass instead of the `OpenAI` class.\n\n> [!IMPORTANT]\n> The Azure API shape slightly differs from the core API shape which means that the static types for responses \u002F params\n> won't always be correct.\n\n```ts\nimport { AzureOpenAI } from 'openai';\nimport { getBearerTokenProvider, DefaultAzureCredential } from '@azure\u002Fidentity';\n\nconst credential = new DefaultAzureCredential();\nconst scope = 'https:\u002F\u002Fcognitiveservices.azure.com\u002F.default';\nconst azureADTokenProvider = getBearerTokenProvider(credential, scope);\n\nconst openai = new AzureOpenAI({\n  azureADTokenProvider,\n  apiVersion: '\u003CThe API version, e.g. 2024-10-01-preview>',\n});\n\nconst result = await openai.chat.completions.create({\n  model: 'gpt-5.2',\n  messages: [{ role: 'user', content: 'Say hello!' 
}],\n});\n\nconsole.log(result.choices[0]!.message?.content);\n```\n\nFor more information on support for the Azure API, see [azure.md](azure.md).\n\n## Advanced Usage\n\n### Accessing raw Response data (e.g., headers)\n\nThe \"raw\" `Response` returned by `fetch()` can be accessed through the `.asResponse()` method on the `APIPromise` type that all methods return.\nThis method returns as soon as the headers for a successful response are received and does not consume the response body, so you are free to write custom parsing or streaming logic.\n\nYou can also use the `.withResponse()` method to get the raw `Response` along with the parsed data.\nUnlike `.asResponse()` this method consumes the body, returning once it is parsed.\n\n\u003C!-- prettier-ignore -->\n```ts\nconst client = new OpenAI();\n\nconst httpResponse = await client.responses\n  .create({ model: 'gpt-5.2', input: 'say this is a test.' })\n  .asResponse();\n\n\u002F\u002F access the underlying web standard Response object\nconsole.log(httpResponse.headers.get('X-My-Header'));\nconsole.log(httpResponse.statusText);\n\nconst { data: modelResponse, response: raw } = await client.responses\n  .create({ model: 'gpt-5.2', input: 'say this is a test.' })\n  .withResponse();\nconsole.log(raw.headers.get('X-My-Header'));\nconsole.log(modelResponse);\n```\n\n### Logging\n\n> [!IMPORTANT]\n> All log messages are intended for debugging only. The format and content of log messages\n> may change between releases.\n\n#### Log levels\n\nThe log level can be configured in two ways:\n\n1. Via the `OPENAI_LOG` environment variable\n2. Using the `logLevel` client option (overrides the environment variable if set)\n\n```ts\nimport OpenAI from 'openai';\n\nconst client = new OpenAI({\n  logLevel: 'debug', \u002F\u002F Show all log messages\n});\n```\n\nAvailable log levels, from most to least verbose:\n\n- `'debug'` - Show debug messages, info, warnings, and errors\n- `'info'` - Show info messages, warnings, and errors\n- `'warn'` - Show warnings and errors (default)\n- `'error'` - Show only errors\n- `'off'` - Disable all logging\n\nAt the `'debug'` level, all HTTP requests and responses are logged, including headers and bodies.\nSome authentication-related headers are redacted, but sensitive data in request and response bodies\nmay still be visible.\n\n#### Custom logger\n\nBy default, this library logs to `globalThis.console`. You can also provide a custom logger.\nMost logging libraries are supported, including [pino](https:\u002F\u002Fwww.npmjs.com\u002Fpackage\u002Fpino), [winston](https:\u002F\u002Fwww.npmjs.com\u002Fpackage\u002Fwinston), [bunyan](https:\u002F\u002Fwww.npmjs.com\u002Fpackage\u002Fbunyan), [consola](https:\u002F\u002Fwww.npmjs.com\u002Fpackage\u002Fconsola), [signale](https:\u002F\u002Fwww.npmjs.com\u002Fpackage\u002Fsignale), and [@std\u002Flog](https:\u002F\u002Fjsr.io\u002F@std\u002Flog). If your logger doesn't work, please open an issue.\n\nWhen providing a custom logger, the `logLevel` option still controls which messages are emitted; messages\nbelow the configured level will not be sent to your logger.\n\n```ts\nimport OpenAI from 'openai';\nimport pino from 'pino';\n\nconst logger = pino();\n\nconst client = new OpenAI({\n  logger: logger.child({ name: 'OpenAI' }),\n  logLevel: 'debug', \u002F\u002F Send all messages to pino, allowing it to filter\n});\n```\n\n### Making custom\u002Fundocumented requests\n\nThis library is typed for convenient access to the documented API. 
If you need to access undocumented\nendpoints, params, or response properties, the library can still be used.\n\n#### Undocumented endpoints\n\nTo make requests to undocumented endpoints, you can use `client.get`, `client.post`, and other HTTP verbs.\nOptions on the client, such as retries, will be respected when making these requests.\n\n```ts\nawait client.post('\u002Fsome\u002Fpath', {\n  body: { some_prop: 'foo' },\n  query: { some_query_arg: 'bar' },\n});\n```\n\n#### Undocumented request params\n\nTo make requests using undocumented parameters, you may use `\u002F\u002F @ts-expect-error` on the undocumented\nparameter. This library doesn't validate at runtime that the request matches the type, so any extra values you\nsend will be sent as-is.\n\n```ts\nclient.chat.completions.create({\n  \u002F\u002F ...\n  \u002F\u002F @ts-expect-error baz is not yet public\n  baz: 'undocumented option',\n});\n```\n\nFor requests with the `GET` verb, any extra params will be sent in the query; all other requests will send the\nextra params in the body.\n\nIf you want to explicitly send an extra argument, you can do so with the `query`, `body`, and `headers` request\noptions.\n\n#### Undocumented response properties\n\nTo access undocumented response properties, you may use a `\u002F\u002F @ts-expect-error` comment on the property access, or cast the response object to the requisite type. Like the request params, we do not\nvalidate or strip extra properties from the response from the API.\n\n
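As a concrete illustration, here is a minimal sketch of both approaches; the `hypothetical_field` property is invented for this example and is not a real API field:\n\n```ts\nconst completion = await client.chat.completions.create({\n  messages: [{ role: 'user', content: 'Say hello!' }],\n  model: 'gpt-5.2',\n});\n\n\u002F\u002F Suppose the API returned an undocumented `hypothetical_field` property.\n\u002F\u002F Option 1: suppress the static type error at the property access:\n\u002F\u002F @ts-expect-error hypothetical_field is not in the SDK types\nconsole.log(completion.hypothetical_field);\n\n\u002F\u002F Option 2: cast to a wider type that declares the extra property:\nconst withExtras = completion as typeof completion & { hypothetical_field?: string };\nconsole.log(withExtras.hypothetical_field);\n```\n\n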
### Customizing the fetch client\n\nIf you want to use a different `fetch` function, you can either polyfill the global:\n\n```ts\nimport fetch from 'my-fetch';\n\nglobalThis.fetch = fetch;\n```\n\nOr pass it to the client:\n\n```ts\nimport OpenAI from 'openai';\nimport fetch from 'my-fetch';\n\nconst client = new OpenAI({ fetch });\n```\n\n### Fetch options\n\nIf you want to set custom `fetch` options without overriding the `fetch` function, you can provide a `fetchOptions` object when instantiating the client or making a request. (Request-specific options override client options.)\n\n```ts\nimport OpenAI from 'openai';\n\nconst client = new OpenAI({\n  fetchOptions: {\n    \u002F\u002F `RequestInit` options\n  },\n});\n```\n\n#### Configuring proxies\n\nTo modify proxy behavior, you can provide custom `fetchOptions` that add runtime-specific proxy\noptions to requests:\n\n\u003Cimg src=\"https:\u002F\u002Fraw.githubusercontent.com\u002Fstainless-api\u002Fsdk-assets\u002Frefs\u002Fheads\u002Fmain\u002Fnode.svg\" align=\"top\" width=\"18\" height=\"21\"> **Node** \u003Csup>[[docs](https:\u002F\u002Fgithub.com\u002Fnodejs\u002Fundici\u002Fblob\u002Fmain\u002Fdocs\u002Fdocs\u002Fapi\u002FProxyAgent.md#example---proxyagent-with-fetch)]\u003C\u002Fsup>\n\n```ts\nimport OpenAI from 'openai';\nimport * as undici from 'undici';\n\nconst proxyAgent = new undici.ProxyAgent('http:\u002F\u002Flocalhost:8888');\nconst client = new OpenAI({\n  fetchOptions: {\n    dispatcher: proxyAgent,\n  },\n});\n```\n\n\u003Cimg src=\"https:\u002F\u002Fraw.githubusercontent.com\u002Fstainless-api\u002Fsdk-assets\u002Frefs\u002Fheads\u002Fmain\u002Fbun.svg\" align=\"top\" width=\"18\" height=\"21\"> **Bun** \u003Csup>[[docs](https:\u002F\u002Fbun.sh\u002Fguides\u002Fhttp\u002Fproxy)]\u003C\u002Fsup>\n\n```ts\nimport OpenAI from 'openai';\n\nconst client = new OpenAI({\n  fetchOptions: {\n    proxy: 'http:\u002F\u002Flocalhost:8888',\n  },\n});\n```\n\n\u003Cimg src=\"https:\u002F\u002Fraw.githubusercontent.com\u002Fstainless-api\u002Fsdk-assets\u002Frefs\u002Fheads\u002Fmain\u002Fdeno.svg\" align=\"top\" width=\"18\" height=\"21\"> **Deno** \u003Csup>[[docs](https:\u002F\u002Fdocs.deno.com\u002Fapi\u002Fdeno\u002F~\u002FDeno.createHttpClient)]\u003C\u002Fsup>\n\n```ts\nimport OpenAI from 'npm:openai';\n\nconst httpClient = Deno.createHttpClient({ proxy: { url: 'http:\u002F\u002Flocalhost:8888' } });\nconst client = new OpenAI({\n  fetchOptions: {\n    client: httpClient,\n  },\n});\n```\n\n## Semantic versioning\n\nThis package generally follows [SemVer](https:\u002F\u002Fsemver.org\u002Fspec\u002Fv2.0.0.html) conventions, though certain backwards-incompatible changes may be released as minor versions:\n\n1. Changes that only affect static types, without breaking runtime behavior.\n2. Changes to library internals which are technically public but not intended or documented for external use. _(Please open a GitHub issue to let us know if you are relying on such internals.)_\n3. Changes that we do not expect to impact the vast majority of users in practice.\n\nWe take backwards-compatibility seriously and work hard to ensure you can rely on a smooth upgrade experience.\n\nWe are keen for your feedback; please open an [issue](https:\u002F\u002Fwww.github.com\u002Fopenai\u002Fopenai-node\u002Fissues) with questions, bugs, or suggestions.\n\n## Requirements\n\nTypeScript >= 4.9 is supported.\n\nThe following runtimes are supported:\n\n- Node.js 20 LTS or later ([non-EOL](https:\u002F\u002Fendoflife.date\u002Fnodejs)) versions.\n- Deno v1.28.0 or higher.\n- Bun 1.0 or later.\n- Cloudflare Workers.\n- Vercel Edge Runtime.\n- Jest 28 or greater with the `\"node\"` environment (`\"jsdom\"` is not supported at this time).\n- Nitro v2.6 or greater.\n- Web browsers: disabled by default to avoid exposing your secret API credentials. 
Enable browser support by explicitly setting `dangerouslyAllowBrowser` to `true`.\n  \u003Cdetails>\n    \u003Csummary>More explanation\u003C\u002Fsummary>\n\n  ### Why is this dangerous?\n\n  Enabling the `dangerouslyAllowBrowser` option can be dangerous because it exposes your secret API credentials in the client-side code. Web browsers are inherently less secure than server environments;\n  any user with access to the browser can potentially inspect, extract, and misuse these credentials. This could lead to unauthorized access using your credentials and potentially compromise sensitive data or functionality.\n\n  ### When might this not be dangerous?\n\n  In certain scenarios, enabling browser support might not pose significant risks:\n\n  - Internal Tools: If the application is used solely within a controlled internal environment where the users are trusted, the risk of credential exposure can be mitigated.\n  - Public APIs with Limited Scope: If your API has very limited scope and the exposed credentials do not grant access to sensitive data or critical operations, the potential impact of exposure is reduced.\n  - Development or debugging purposes: Enabling this feature temporarily might be acceptable, provided the credentials are short-lived, aren't also used in production environments, or are frequently rotated.\n\n\u003C\u002Fdetails>\n\nNote that React Native is not supported at this time.\n\nIf you are interested in other runtime environments, please open or upvote an issue on GitHub.\n\n## Contributing\n\nSee [the contributing documentation](.\u002FCONTRIBUTING.md).\n","# OpenAI TypeScript 和 JavaScript API 库\n\n[![NPM version](\u003Chttps:\u002F\u002Fimg.shields.io\u002Fnpm\u002Fv\u002Fopenai.svg?label=npm%20(stable)>)](https:\u002F\u002Fnpmjs.org\u002Fpackage\u002Fopenai) ![npm bundle size](https:\u002F\u002Fimg.shields.io\u002Fbundlephobia\u002Fminzip\u002Fopenai) [![JSR Version](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopenai_openai-node_readme_d496b688830b.png)](https:\u002F\u002Fjsr.io\u002F@openai\u002Fopenai)\n\n该库提供了从 TypeScript 或 JavaScript 便捷访问 OpenAI REST API 的功能。\n\n它是由我们的 [OpenAPI 规范](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-openapi) 使用 [Stainless](https:\u002F\u002Fstainlessapi.com\u002F) 生成的。\n\n要了解如何使用 OpenAI API，请查看我们的 [API 参考](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference) 和 [文档](https:\u002F\u002Fplatform.openai.com\u002Fdocs)。\n\n## 安装\n\n```sh\nnpm install openai\n```\n\n### 从 JSR 安装\n\n```sh\ndeno add jsr:@openai\u002Fopenai\nnpx jsr add @openai\u002Fopenai\n```\n\n这些命令将使模块可以从 `@openai\u002Fopenai` 范围导入。如果您使用的是 Deno JavaScript 运行时，也可以 [直接从 JSR 导入](https:\u002F\u002Fjsr.io\u002Fdocs\u002Fusing-packages#importing-with-jsr-specifiers)，而无需安装步骤：\n\n```ts\nimport OpenAI from 'jsr:@openai\u002Fopenai';\n```\n\n## 用法\n\n本库的完整 API 可在 [api.md 文件](api.md) 中找到，此外还有大量 [代码示例](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Ftree\u002Fmaster\u002Fexamples)。\n\n与 OpenAI 模型交互的主要 API 是 [Responses API](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fresponses)。您可以使用以下代码从模型生成文本。\n\n```ts\nimport OpenAI from 'openai';\n\nconst client = new OpenAI({\n  apiKey: process.env['OPENAI_API_KEY'], \u002F\u002F This is the default and can be omitted\n});\n\nconst response = await client.responses.create({\n  model: 'gpt-5.2',\n  instructions: 'You are a coding assistant that talks like a pirate',\n  input: 'Are semicolons optional in 
JavaScript?',\n});\n\nconsole.log(response.output_text);\n```\n\n之前用于生成文本的标准（无限期支持）是 [Chat Completions API](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fchat)。您可以使用以下代码通过该 API 从模型生成文本。\n\n```ts\nimport OpenAI from 'openai';\n\nconst client = new OpenAI({\n  apiKey: process.env['OPENAI_API_KEY'], \u002F\u002F This is the default and can be omitted\n});\n\nconst completion = await client.chat.completions.create({\n  model: 'gpt-5.2',\n  messages: [\n    { role: 'developer', content: 'Talk like a pirate.' },\n    { role: 'user', content: 'Are semicolons optional in JavaScript?' },\n  ],\n});\n\nconsole.log(completion.choices[0].message.content);\n```\n\n## 流式响应\n\n我们提供使用服务器发送事件 (SSE) 进行流式响应的支持。\n\n```ts\nimport OpenAI from 'openai';\n\nconst client = new OpenAI();\n\nconst stream = await client.responses.create({\n  model: 'gpt-5.2',\n  input: 'Say \"Sheep sleep deep\" ten times fast!',\n  stream: true,\n});\n\nfor await (const event of stream) {\n  console.log(event);\n}\n```\n\n## 文件上传\n\n对应于文件上传的请求参数可以通过多种不同的形式传递：\n\n- `File`（或具有相同结构的对象）\n- `fetch` `Response`（或具有相同结构的对象）\n- `fs.ReadStream`\n- 我们 `toFile` 辅助函数的返回值\n\n```ts\nimport fs from 'fs';\nimport OpenAI, { toFile } from 'openai';\n\nconst client = new OpenAI();\n\n\u002F\u002F If you have access to Node `fs` we recommend using `fs.createReadStream()`:\nawait client.files.create({ file: fs.createReadStream('input.jsonl'), purpose: 'fine-tune' });\n\n\u002F\u002F Or if you have the web `File` API you can pass a `File` instance:\nawait client.files.create({ file: new File(['my bytes'], 'input.jsonl'), purpose: 'fine-tune' });\n\n\u002F\u002F You can also pass a `fetch` `Response`:\nawait client.files.create({\n  file: await fetch('https:\u002F\u002Fsomesite\u002Finput.jsonl'),\n  purpose: 'fine-tune',\n});\n\n\u002F\u002F Finally, if none of the above are convenient, you can use our `toFile` helper:\nawait client.files.create({\n  file: await toFile(Buffer.from('my bytes'), 'input.jsonl'),\n  purpose: 'fine-tune',\n});\nawait client.files.create({\n  file: await toFile(new Uint8Array([0, 1, 2]), 'input.jsonl'),\n  purpose: 'fine-tune',\n});\n```\n\n## Webhook 验证\n\n验证 Webhook (网络钩子) 签名是 _可选但推荐的_。\n\n有关 Webhook 的更多信息，请参阅 [API 文档](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Fwebhooks)。\n\n### 解析 Webhook 负载\n\n对于大多数用例，您可能希望同时验证 Webhook 并解析 payload (负载)。为了实现这一点，我们提供了方法 `client.webhooks.unwrap()`，该方法解析 Webhook 请求并验证其是否由 OpenAI 发送。如果签名无效，此方法将抛出错误。\n\n请注意，`body` 参数必须是服务器发送的原始 JSON 字符串（不要先解析它）。`.unwrap()` 方法将在验证 Webhook 来自 OpenAI 后为您将此 JSON 解析为事件对象。\n\n```ts\nimport { headers } from 'next\u002Fheaders';\nimport OpenAI from 'openai';\n\nconst client = new OpenAI({\n  webhookSecret: process.env.OPENAI_WEBHOOK_SECRET, \u002F\u002F env var used by default; explicit here.\n});\n\nexport async function webhook(request: Request) {\n  const headersList = headers();\n  const body = await request.text();\n\n  try {\n    const event = client.webhooks.unwrap(body, headersList);\n\n    switch (event.type) {\n      case 'response.completed':\n        console.log('Response completed:', event.data);\n        break;\n      case 'response.failed':\n        console.log('Response failed:', event.data);\n        break;\n      default:\n        console.log('Unhandled event type:', event.type);\n    }\n\n    return Response.json({ message: 'ok' });\n  } catch (error) {\n    console.error('Invalid webhook signature:', error);\n    return new Response('Invalid signature', { status: 400 });\n  }\n}\n```\n\n### 直接验证 
Webhook（网络钩子）负载\n\n在某些情况下，您可能希望将 Webhook 的验证与解析负载分开进行。如果您倾向于单独处理这些步骤，我们提供了 `client.webhooks.verifySignature()` 方法，用于 _仅验证_ Webhook 请求的签名。类似于 `.unwrap()`，如果签名无效，此方法将抛出错误。\n\n请注意，`body` 参数必须是服务器发送的原始 JSON 字符串（不要先解析它）。然后，您需要在验证签名后解析该主体。\n\n```ts\nimport { headers } from 'next\u002Fheaders';\nimport OpenAI from 'openai';\n\nconst client = new OpenAI({\n  webhookSecret: process.env.OPENAI_WEBHOOK_SECRET, \u002F\u002F env var used by default; explicit here.\n});\n\nexport async function webhook(request: Request) {\n  const headersList = headers();\n  const body = await request.text();\n\n  try {\n    client.webhooks.verifySignature(body, headersList);\n\n    \u002F\u002F Parse the body after verification\n    const event = JSON.parse(body);\n    console.log('Verified event:', event);\n\n    return Response.json({ message: 'ok' });\n  } catch (error) {\n    console.error('Invalid webhook signature:', error);\n    return new Response('Invalid signature', { status: 400 });\n  }\n}\n```\n\n## 处理错误\n\n当库无法连接到 API，或者 API 返回非成功状态码（即 4xx 或 5xx 响应）时，将抛出一个 `APIError`（API 错误）的子类：\n\n\u003C!-- prettier-ignore -->\n```ts\nconst job = await client.fineTuning.jobs\n  .create({ model: 'gpt-4o', training_file: 'file-abc123' })\n  .catch(async (err) => {\n    if (err instanceof OpenAI.APIError) {\n      console.log(err.request_id);\n      console.log(err.status); \u002F\u002F 400\n      console.log(err.name); \u002F\u002F BadRequestError\n      console.log(err.headers); \u002F\u002F {server: 'nginx', ...}\n    } else {\n      throw err;\n    }\n  });\n```\n\n错误代码如下：\n\n| 状态码 | 错误类型                 |\n| ----------- | -------------------------- |\n| 400         | `BadRequestError`          |\n| 401         | `AuthenticationError`      |\n| 403         | `PermissionDeniedError`    |\n| 404         | `NotFoundError`            |\n| 422         | `UnprocessableEntityError` |\n| 429         | `RateLimitError`           |\n| >=500       | `InternalServerError`      |\n| N\u002FA         | `APIConnectionError`       |\n\n### 重试\n\n某些错误默认会自动重试 2 次，并带有短暂的指数退避。连接错误（例如由于网络连接问题）、408 请求超时、409 冲突、429 速率限制以及 >=500 内部错误默认都会重试。\n\n您可以使用 `maxRetries` 选项来配置或禁用此功能：\n\n\u003C!-- prettier-ignore -->\n```js\n\u002F\u002F Configure the default for all requests:\nconst client = new OpenAI({\n  maxRetries: 0, \u002F\u002F default is 2\n});\n\n\u002F\u002F Or, configure per-request:\nawait client.chat.completions.create({ messages: [{ role: 'user', content: 'How can I get the name of the current day in JavaScript?' }], model: 'gpt-5.2' }, {\n  maxRetries: 5,\n});\n```\n\n### 超时\n\n请求默认在 10 分钟后超时。您可以使用 `timeout` 选项配置此设置：\n\n\u003C!-- prettier-ignore -->\n```ts\n\u002F\u002F Configure the default for all requests:\nconst client = new OpenAI({\n  timeout: 20 * 1000, \u002F\u002F 20 seconds (default is 10 minutes)\n});\n\n\u002F\u002F Override per-request:\nawait client.chat.completions.create({ messages: [{ role: 'user', content: 'How can I list all files in a directory using Python?' }], model: 'gpt-5.2' }, {\n  timeout: 5 * 1000,\n});\n```\n\n超时时会抛出 `APIConnectionTimeoutError`。\n\n请注意，超时的请求将 [默认重试两次](#retries)。\n\n## 请求 ID\n\n> 有关调试请求的更多信息，请参阅 [这些文档](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fdebugging-requests)\n\nSDK 中的所有对象响应都提供了一个 `_request_id` 属性，该属性来自 `x-request-id` 响应头，以便您可以快速记录失败的请求并向 OpenAI 报告它们。\n\n```ts\nconst response = await client.responses.create({ model: 'gpt-5.2', input: 'testing 123' });\nconsole.log(response._request_id); \u002F\u002F req_123\n```\n\n您还可以使用 `.withResponse()` 方法访问请求 ID：\n\n```ts\nconst { data: stream, request_id } = await openai.responses\n  .create({\n    model: 'gpt-5.2',\n    input: 'Say this is a test',\n    stream: true,\n  })\n  .withResponse();\n```\n\n## 自动分页\n\nOpenAI API 中的列表方法是分页的。\n您可以使用 `for await … of` 语法遍历所有页面的项目：\n\n```ts\nasync function fetchAllFineTuningJobs(params) {\n  const allFineTuningJobs = [];\n  \u002F\u002F Automatically fetches more pages as needed.\n  for await (const fineTuningJob of client.fineTuning.jobs.list({ limit: 20 })) {\n    allFineTuningJobs.push(fineTuningJob);\n  }\n  return allFineTuningJobs;\n}\n```\n\n或者，您可以一次请求单个页面：\n\n```ts\nlet page = await client.fineTuning.jobs.list({ limit: 20 });\nfor (const fineTuningJob of page.data) {\n  console.log(fineTuningJob);\n}\n\n\u002F\u002F Convenience methods are provided for manually paginating:\nwhile (page.hasNextPage()) {\n  page = await page.getNextPage();\n  \u002F\u002F ...\n}\n```\n\n## 实时 API\n\n实时 API 使您能够构建低延迟、多模态的对话体验。它目前支持文本和音频作为输入和输出，以及通过 `WebSocket` 连接进行 [函数调用](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Ffunction-calling)。\n\n```ts\nimport { OpenAIRealtimeWebSocket } from 'openai\u002Frealtime\u002Fwebsocket';\n\nconst rt = new OpenAIRealtimeWebSocket({ model: 'gpt-realtime' });\n\nrt.on('response.text.delta', (event) => process.stdout.write(event.delta));\n```\n\n更多信息请参阅 [realtime.md](realtime.md)。\n\n## Microsoft Azure OpenAI\n\n要与 [Azure OpenAI](https:\u002F\u002Flearn.microsoft.com\u002Fazure\u002Fai-services\u002Fopenai\u002Foverview) 
配合使用此库，请使用 `AzureOpenAI` 类而不是 `OpenAI` 类。\n\n> [!IMPORTANT]\n> The Azure API shape slightly differs from the core API shape which means that the static types for responses \u002F params\n> won't always be correct.\n\n```ts\nimport { AzureOpenAI } from 'openai';\nimport { getBearerTokenProvider, DefaultAzureCredential } from '@azure\u002Fidentity';\n\nconst credential = new DefaultAzureCredential();\nconst scope = 'https:\u002F\u002Fcognitiveservices.azure.com\u002F.default';\nconst azureADTokenProvider = getBearerTokenProvider(credential, scope);\n\nconst openai = new AzureOpenAI({\n  azureADTokenProvider,\n  apiVersion: '\u003CThe API version, e.g. 2024-10-01-preview>',\n});\n\nconst result = await openai.chat.completions.create({\n  model: 'gpt-5.2',\n  messages: [{ role: 'user', content: 'Say hello!' }],\n});\n\nconsole.log(result.choices[0]!.message?.content);\n```\n\n关于 Azure API 支持的更多信息，请参阅 [azure.md](azure.md)。\n\n## 高级用法\n\n### 访问原始 Response (响应) 数据（例如：headers (头部)）\n\n通过所有方法返回的 `APIPromise` 类型上的 `.asResponse()` 方法可以访问 `fetch()` 返回的“原始”`Response (响应)`。\n该方法在接收到成功响应的头部后立即返回，并且不会消耗响应体，因此您可以自由编写自定义解析或流式处理逻辑。\n\n您还可以使用 `.withResponse()` 方法获取原始 `Response (响应)` 以及解析后的数据。\n与 `.asResponse()` 不同，此方法会消耗响应体，并在解析完成后返回。\n\n\u003C!-- prettier-ignore -->\n```ts\nconst client = new OpenAI();\n\nconst httpResponse = await client.responses\n  .create({ model: 'gpt-5.2', input: 'say this is a test.' })\n  .asResponse();\n\n\u002F\u002F access the underlying web standard Response object\nconsole.log(httpResponse.headers.get('X-My-Header'));\nconsole.log(httpResponse.statusText);\n\nconst { data: modelResponse, response: raw } = await client.responses\n  .create({ model: 'gpt-5.2', input: 'say this is a test.' })\n  .withResponse();\nconsole.log(raw.headers.get('X-My-Header'));\nconsole.log(modelResponse);\n```\n\n### 日志记录\n\n> [!IMPORTANT]\n> All log messages are intended for debugging only. The format and content of log messages\n> may change between releases.\n\n#### 日志级别\n\n日志级别可以通过两种方式配置：\n\n1. 通过 `OPENAI_LOG` 环境变量\n2. 
使用 `logLevel` 客户端选项（如果设置则覆盖环境变量）\n\n```ts\nimport OpenAI from 'openai';\n\nconst client = new OpenAI({\n  logLevel: 'debug', \u002F\u002F Show all log messages\n});\n```\n\n可用的日志级别，从最详细到最简略：\n\n- `'debug'` - 显示调试消息、信息、警告和错误\n- `'info'` - 显示信息消息、警告和错误\n- `'warn'` - 显示警告和错误（默认）\n- `'error'` - 仅显示错误\n- `'off'` - 禁用所有日志记录\n\n在 `'debug'` 级别下，所有 HTTP 请求和响应都会被记录，包括头部和主体。\n某些与身份验证相关的头部会被脱敏，但请求和响应主体中的敏感数据可能仍然可见。\n\n#### 自定义日志记录器\n\n默认情况下，此库将日志记录到 `globalThis.console`。您也可以提供自定义日志记录器。\n大多数日志库都受支持，包括 [pino](https:\u002F\u002Fwww.npmjs.com\u002Fpackage\u002Fpino)、[winston](https:\u002F\u002Fwww.npmjs.com\u002Fpackage\u002Fwinston)、[bunyan](https:\u002F\u002Fwww.npmjs.com\u002Fpackage\u002Fbunyan)、[consola](https:\u002F\u002Fwww.npmjs.com\u002Fpackage\u002Fconsola)、[signale](https:\u002F\u002Fwww.npmjs.com\u002Fpackage\u002Fsignale) 和 [@std\u002Flog](https:\u002F\u002Fjsr.io\u002F@std\u002Flog)。如果您的日志记录器无法工作，请提交一个 Issue。\n\n当提供自定义日志记录器时，`logLevel` 选项仍控制哪些消息被发出；低于配置级别的消息将不会发送到您的日志记录器。\n\n```ts\nimport OpenAI from 'openai';\nimport pino from 'pino';\n\nconst logger = pino();\n\nconst client = new OpenAI({\n  logger: logger.child({ name: 'OpenAI' }),\n  logLevel: 'debug', \u002F\u002F Send all messages to pino, allowing it to filter\n});\n```\n\n### 发起自定义\u002F未文档化的请求\n\n此库已针对便捷访问文档化 API 进行了类型定义。如果您需要访问未文档化的端点、参数或响应属性，该库仍然可以使用。\n\n#### 未文档化的端点\n\n要向未文档化的端点发起请求，您可以使用 `client.get`、`client.post` 和其他 HTTP 动词。\n发起这些请求时，客户端上配置的选项（如重试）同样会生效。\n\n```ts\nawait client.post('\u002Fsome\u002Fpath', {\n  body: { some_prop: 'foo' },\n  query: { some_query_arg: 'bar' },\n});\n```\n\n#### 未文档化的请求参数\n\n要使用未文档化的参数发起请求，您可以在未文档化的参数上使用 `\u002F\u002F @ts-expect-error`。\n此库在运行时不验证请求是否与类型匹配，因此您发送的任何额外值都将原样发送。\n\n```ts\nclient.chat.completions.create({\n  \u002F\u002F ...\n  \u002F\u002F @ts-expect-error baz is not yet public\n  baz: 'undocumented option',\n});\n```\n\n对于使用 `GET` 动词的请求，任何额外参数将放在查询字符串中；其他请求则会在请求主体中发送额外参数。\n\n如果您想显式发送额外参数，可以使用 `query`、`body` 和 `headers` 请求选项来实现。\n\n#### 未文档化的响应属性\n\n要访问未文档化的响应属性，您可以在访问该属性处使用 `\u002F\u002F @ts-expect-error` 注释，或将响应对象强制转换为所需类型。\n与请求参数一样，我们不验证或从 API 响应中剥离额外属性。\n\n
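作为直观示例，下面给出一个最小示意（其中的 `hypothetical_field` 属性为本示例虚构，并非 API 的真实字段）：\n\n```ts\nconst completion = await client.chat.completions.create({\n  messages: [{ role: 'user', content: 'Say hello!' }],\n  model: 'gpt-5.2',\n});\n\n\u002F\u002F 假设 API 返回了一个尚未文档化的 `hypothetical_field` 属性。\n\u002F\u002F 方式一：在属性访问处抑制静态类型错误：\n\u002F\u002F @ts-expect-error hypothetical_field 尚未包含在 SDK 类型中\nconsole.log(completion.hypothetical_field);\n\n\u002F\u002F 方式二：强制转换为声明了该额外属性的更宽类型：\nconst withExtras = completion as typeof completion & { hypothetical_field?: string };\nconsole.log(withExtras.hypothetical_field);\n```\n\n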
### 自定义 fetch 客户端\n\n如果你想使用不同的 `fetch` 函数，可以全局进行 polyfill：\n\n```ts\nimport fetch from 'my-fetch';\n\nglobalThis.fetch = fetch;\n```\n\n或者将其传递给客户端：\n\n```ts\nimport OpenAI from 'openai';\nimport fetch from 'my-fetch';\n\nconst client = new OpenAI({ fetch });\n```\n\n### Fetch 选项\n\n如果你想在不覆盖 `fetch` 函数的情况下设置自定义 `fetch` 选项，可以在实例化客户端或发起请求时提供一个 `fetchOptions` 对象。（特定于请求的选项会覆盖客户端选项。）\n\n```ts\nimport OpenAI from 'openai';\n\nconst client = new OpenAI({\n  fetchOptions: {\n    \u002F\u002F `RequestInit` 选项\n  },\n});\n```\n\n#### 配置代理\n\n要修改代理行为，你可以提供自定义的 `fetchOptions`，为请求添加特定运行时的代理选项：\n\n\u003Cimg src=\"https:\u002F\u002Fraw.githubusercontent.com\u002Fstainless-api\u002Fsdk-assets\u002Frefs\u002Fheads\u002Fmain\u002Fnode.svg\" align=\"top\" width=\"18\" height=\"21\"> **Node** \u003Csup>[[文档](https:\u002F\u002Fgithub.com\u002Fnodejs\u002Fundici\u002Fblob\u002Fmain\u002Fdocs\u002Fdocs\u002Fapi\u002FProxyAgent.md#example---proxyagent-with-fetch)]\u003C\u002Fsup>\n\n```ts\nimport OpenAI from 'openai';\nimport * as undici from 'undici';\n\nconst proxyAgent = new undici.ProxyAgent('http:\u002F\u002Flocalhost:8888');\nconst client = new OpenAI({\n  fetchOptions: {\n    dispatcher: proxyAgent,\n  },\n});\n```\n\n\u003Cimg src=\"https:\u002F\u002Fraw.githubusercontent.com\u002Fstainless-api\u002Fsdk-assets\u002Frefs\u002Fheads\u002Fmain\u002Fbun.svg\" align=\"top\" width=\"18\" height=\"21\"> **Bun** \u003Csup>[[文档](https:\u002F\u002Fbun.sh\u002Fguides\u002Fhttp\u002Fproxy)]\u003C\u002Fsup>\n\n```ts\nimport OpenAI from 'openai';\n\nconst client = new OpenAI({\n  fetchOptions: {\n    proxy: 'http:\u002F\u002Flocalhost:8888',\n  },\n});\n```\n\n\u003Cimg src=\"https:\u002F\u002Fraw.githubusercontent.com\u002Fstainless-api\u002Fsdk-assets\u002Frefs\u002Fheads\u002Fmain\u002Fdeno.svg\" align=\"top\" width=\"18\" height=\"21\"> **Deno** \u003Csup>[[文档](https:\u002F\u002Fdocs.deno.com\u002Fapi\u002Fdeno\u002F~\u002FDeno.createHttpClient)]\u003C\u002Fsup>\n\n```ts\nimport OpenAI from 'npm:openai';\n\nconst httpClient = Deno.createHttpClient({ proxy: { url: 'http:\u002F\u002Flocalhost:8888' } });\nconst client = new OpenAI({\n  fetchOptions: {\n    client: httpClient,\n  },\n});\n```\n\n## 语义化版本控制\n\n本包通常遵循 [SemVer（语义化版本控制）](https:\u002F\u002Fsemver.org\u002Fspec\u002Fv2.0.0.html) 规范，尽管某些向后不兼容的更改可能会作为次要版本发布：\n\n1. 仅影响静态类型而不破坏运行时行为的更改。\n2. 库内部更改，技术上虽然是公开的，但不打算或记录供外部使用。_(如果您依赖此类内部功能，请提交 GitHub 问题告知我们。)_\n3. 我们预计在实践中不会影响绝大多数用户的更改。\n\n我们非常重视向后兼容性，并努力确保您拥有顺畅的升级体验。\n\n我们非常期待您的反馈；如有问题、错误或建议，请提交 [问题](https:\u002F\u002Fwww.github.com\u002Fopenai\u002Fopenai-node\u002Fissues)。\n\n## 要求\n\n支持 TypeScript >= 4.9。\n\n支持以下运行时环境：\n\n- Node.js 20 LTS（长期支持版）或更高版本（[非 EOL（生命周期结束）](https:\u002F\u002Fendoflife.date\u002Fnodejs) 版本）。\n- Deno v1.28.0 或更高版本。\n- Bun 1.0 或更高版本。\n- Cloudflare Workers。\n- Vercel Edge Runtime。\n- Jest 28 或更高版本，配合 `\"node\"` 环境（目前不支持 `\"jsdom\"`）。\n- Nitro v2.6 或更高版本。\n- Web 浏览器：默认禁用以避免暴露你的秘密 API 凭证。通过显式设置 `dangerouslyAllowBrowser` 为 `true` 来启用浏览器支持。\n  \u003Cdetails>\n    \u003Csummary>更多说明\u003C\u002Fsummary>\n\n  ### 为什么这很危险？\n\n  启用 `dangerouslyAllowBrowser` 选项可能是危险的，因为它会将你的秘密 API 凭证暴露在客户端代码中。Web 浏览器本质上比服务器环境安全性更低，任何能够访问浏览器的用户都可能检查、提取和滥用这些凭证。这可能导致他人使用你的凭证进行未经授权的访问，并可能危及敏感数据或功能。\n\n  ### 何时这可能不危险？\n\n  在某些场景下，启用浏览器支持可能不会构成重大风险：\n\n  - 内部工具：如果应用程序仅在受控的内部环境中使用，且用户可信，则凭证泄露的风险可以降低。\n  - 范围有限的公共 API：如果你的 API 范围非常有限，且暴露的凭证无法访问敏感数据或关键操作，则泄露的潜在影响会降低。\n  - 开发或调试目的：临时启用此功能可能是可以接受的，前提是凭证是短期的，不在生产环境中使用，或经常轮换。\n\n\u003C\u002Fdetails>\n\n请注意，React Native 目前尚不受支持。\n\n如果你对其它运行时环境感兴趣，请在 GitHub 上提交或点赞一个问题。\n\n## 贡献\n\n请参阅 [贡献文档](.\u002FCONTRIBUTING.md)。","# openai-node 快速上手指南\n\n## 环境准备\n\n在开始之前，请确保您的开发环境满足以下要求：\n\n- **Node.js 环境**：已安装 Node.js 及 npm 包管理器。\n- **API 密钥**：拥有一个有效的 OpenAI API Key（建议通过环境变量 `OPENAI_API_KEY` 存储）。\n\n## 安装步骤\n\n使用 npm 或 yarn 安装官方 SDK：\n\n```sh\nnpm install openai\n```\n\n如果您使用 Deno 或 JSR，也可以使用以下方式：\n\n```sh\ndeno add jsr:@openai\u002Fopenai\nnpx jsr add @openai\u002Fopenai\n```\n\n## 基本使用\n\n初始化客户端并调用 API 生成文本。以下是使用标准的 Chat Completions API 的示例：\n\n```ts\nimport OpenAI from 'openai';\n\nconst client = new OpenAI({\n  apiKey: process.env['OPENAI_API_KEY'], \u002F\u002F This is the default and can be omitted\n});\n\nconst completion = await client.chat.completions.create({\n  model: 'gpt-5.2',\n  messages: [\n    { role: 'developer', content: 'Talk like a pirate.' },\n    { role: 'user', content: 'Are semicolons optional in JavaScript?' 
},\n  ],\n});\n\nconsole.log(completion.choices[0].message.content);\n```\n\n运行上述代码前，请确保已在终端中设置环境变量：\n\n```bash\nexport OPENAI_API_KEY=\"your-api-key-here\"\n```","某 SaaS 平台工程师正在为后台管理系统接入智能数据分析助手，需通过 Node.js 后端调用大模型生成报表摘要。\n\n### 没有 openai-node 时\n- 需手动使用 fetch 或 axios 构造 HTTP 请求，URL 拼接与鉴权头配置繁琐，环境切换易出错\n- 返回的 JSON 数据结构无类型约束，修改模型版本时常因字段变更导致运行时崩溃，调试耗时\n- 实现实时流式输出需自行编写 SSE 解析逻辑，代码量大且难以维护，用户体验卡顿\n- 处理文件上传进行微调时，需手动管理 Buffer 与 ReadStream，不同运行环境兼容性差\n\n### 使用 openai-node 后\n- 官方客户端封装了标准 API 调用，初始化即享安全鉴权，大幅降低网络层复杂度与出错率\n- 基于 OpenAPI 生成的 TypeScript 定义提供完整类型提示，重构与排查更安心，减少低级错误\n- 原生支持流式响应迭代，仅需简单循环即可实现流畅的打字机交互体验，提升用户感知\n- 内置 `toFile` 等辅助工具，轻松对接多种文件源，简化微调数据准备流程，专注业务逻辑\n\nopenai-node 将复杂的 API 交互转化为直观的代码操作，显著缩短 AI 功能落地周期。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopenai_openai-node_763f5010.png","openai","OpenAI","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fopenai_1960bbf4.png","",null,"https:\u002F\u002Fopenai.com\u002F","https:\u002F\u002Fgithub.com\u002Fopenai",[83,87,91,95,99],{"name":84,"color":85,"percentage":86},"TypeScript","#3178c6",98.3,{"name":88,"color":89,"percentage":90},"JavaScript","#f1e05a",1,{"name":92,"color":93,"percentage":94},"Shell","#89e051",0.7,{"name":96,"color":97,"percentage":98},"HTML","#e34c26",0,{"name":100,"color":101,"percentage":98},"Ruby","#701516",10788,1452,"2026-04-04T19:02:23","Apache-2.0","未说明","无需（调用云端 API）",{"notes":109,"python":110,"dependencies":111},"本工具是 OpenAI API 的官方 TypeScript\u002FJavaScript 客户端库，用于通过 REST API 调用云端模型，无需本地算力。需设置 OPENAI_API_KEY 环境变量。支持 Node.js 和 Deno 运行时。包含流式传输、Webhook 验证及 Azure 集成支持。","不适用",[75,112],"@azure\u002Fidentity",[15],[115,75,116],"nodejs","typescript",8,"2026-03-27T02:49:30.150509","2026-04-06T06:44:10.704529",[121,126,131,136,140,145],{"id":122,"question_zh":123,"answer_zh":124,"source_url":125},2331,"如何在 OpenAI Node.js SDK 中正确使用流式响应（stream: true）？","旧版本中直接使用 `res.onmessage` 的语法已不再适用。该问题已在 v4 版本中修复。在 v4 中，应使用异步迭代器来处理流数据，例如通过 `for await (const part of stream)` 循环读取。如果遇到流式处理问题，建议确认是否为 SDK 问题而非底层 API 问题，并查看最新文档。","https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fissues\u002F18",{"id":127,"question_zh":128,"answer_zh":129,"source_url":130},2332,"升级 OpenAI SDK 后 TypeScript 编译报错 'Cannot find name File' 怎么办？","这通常出现在 GPT-3.5 更新后的版本中。解决方法包括：在 `tsconfig.json` 中设置 `\"skipLibCheck\": true`；确保 Node 版本为 18 或以上；检查导入路径是否正确（应使用 `import ... 
from 'openai'` 而非 `from 'openai\u002Fsrc'`）。如果仍失败，可尝试应用特定的 Yarn patch 修复类型定义。","https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fissues\u002F72",{"id":132,"question_zh":133,"answer_zh":134,"source_url":135},2333,"在浏览器环境中调用 chat.completions.create 总是返回 null 如何解决？","即使设置了 `dangerouslyAllowBrowser: true`，在某些环境下仍可能返回 null。建议使用 Promise 包装流式请求来正确处理响应。具体做法是将流对象放入 `new Promise` 中，通过 `for await` 循环累加 `part.choices[0]?.delta?.content`，最后 resolve 完整响应字符串。","https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fissues\u002F232",{"id":137,"question_zh":138,"answer_zh":139,"source_url":135},2334,"在 Angular 项目中使用 OpenAI SDK 遇到 Zone 相关问题怎么办？","Angular 的 Zone 机制可能会干扰 SDK 的异步处理。虽然 LangChainJS 曾尝试修复此类 API 怪异行为，但 Zones 仍可能导致问题。建议参考社区提供的变通方案（workarounds），或尝试在非 Zone 上下文中初始化客户端。如有必要，请提供最小复现代码以便进一步排查。",{"id":141,"question_zh":142,"answer_zh":143,"source_url":144},2335,"在 NestJS + Jest 测试环境中运行 OpenAI SDK 报 'ReferenceError: fetch is not defined' 怎么办？","这是因为测试环境缺少全局 fetch 实现。最有效的解决方法是将所有 Jest 相关库更新到最新版本。此外，确保使用 Node 18 及以上运行时（Node 16 支持即将停止）。在 AWS Lambda 等云环境中，使用 Node 18 runtime 也能避免此问题。","https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fissues\u002F304",{"id":146,"question_zh":147,"answer_zh":148,"source_url":149},2336,"在 Next.js 项目中导入 OpenAI 时报 'fetch is not exported' 错误如何处理？","这通常是模块解析或 TypeScript 配置问题。请检查 `tsconfig.json`，确保 `lib` 包含 `dom` 和 `esnext`，且 `moduleResolution` 设置为 `node` 或更高版本。同时建议开启 `skipLibCheck: true`。如果问题依旧，请创建包含最小复现步骤的仓库提交给维护者排查。","https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fissues\u002F243",[151,156,161,166,171,176,181,186,191,196,201,206,211,216,221,226,231,236,241,246],{"id":152,"version":153,"summary_zh":154,"released_at":155},101885,"v6.33.0","## 6.33.0 (2026-03-25)\n\nFull Changelog: [v6.32.0...v6.33.0](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcompare\u002Fv6.32.0...v6.33.0)\n\n### Features\n\n* **api:** add keys field to computer action types ([27a850e](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcommit\u002F27a850e8a698cde5b7e05da70d8babb1205b2830))\n* **client:** add async iterator and stream() to WebSocket classes ([e1c16ee](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcommit\u002Fe1c16ee35b8ef9db30e9a99a2b3460368f3044d0))\n\n\n### Bug Fixes\n\n* **api:** align SDK response types with expanded item schemas ([491cd52](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcommit\u002F491cd5290c36e6b1de7ff9787e80c73899d8b642))\n* **types:** make type required in ResponseInputMessageItem ([2012293](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcommit\u002F20122931977c2de8630cb03182766fbf6dc37868))\n\n\n### Chores\n\n* **ci:** skip lint on metadata-only changes ([74a917f](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcommit\u002F74a917fd92dd2a1bd3089f3b5f79781bdc0d4ec3))\n* **internal:** refactor imports ([cfe9c60](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcommit\u002Fcfe9c60aa41e9ed53e7d5f9187d31baf4364f8bd))\n* **internal:** update gitignore ([71bd114](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcommit\u002F71bd114f97e24c547660694d03c19b22d62ae961))\n* **tests:** bump steady to v0.19.4 ([f2e9dea](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcommit\u002Ff2e9dea844405f189cc63a1d1493de3eabfcb7e7))\n* **tests:** bump steady to v0.19.5 ([37c6cf4](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcommit\u002F37c6cf495b9a05128572f9e955211b67d01410f3))\n* **tests:** bump 
steady to v0.19.6 ([496b3af](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcommit\u002F496b3af4371cf40f5d14f72d0770e152710b09df))\n* **tests:** bump steady to v0.19.7 ([8491eb6](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcommit\u002F8491eb6d83cf8680bdc9d69e60b8e5d09e2bc8e8))\n\n\n### Refactors\n\n* **tests:** switch from prism to steady ([47c0581](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcommit\u002F47c0581a1923c9e700a619dd6bfa3fb93a188899))","2026-03-25T22:08:36",{"id":157,"version":158,"summary_zh":159,"released_at":160},101886,"v6.32.0","## 6.32.0 (2026-03-17)\n\nFull Changelog: [v6.31.0...v6.32.0](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcompare\u002Fv6.31.0...v6.32.0)\n\n### Features\n\n* **api:** 5.4 nano and mini model slugs ([068df6d](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcommit\u002F068df6d625d7faa76dfac160065f1ca550539ba8))","2026-03-17T17:53:01",{"id":162,"version":163,"summary_zh":164,"released_at":165},101887,"v6.31.0","## 6.31.0 (2026-03-16)\n\nFull Changelog: [v6.30.1...v6.31.0](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcompare\u002Fv6.30.1...v6.31.0)\n\n### Features\n\n* **api:** add in\u002Fnin filter types to ComparisonFilter ([b2eda27](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcommit\u002Fb2eda274418ceb9bbdb3778cb6a5ee28090df8ad))","2026-03-16T22:12:31",{"id":167,"version":168,"summary_zh":169,"released_at":170},101888,"v6.30.1","## 6.30.1 (2026-03-16)\n\nFull Changelog: [v6.30.0...v6.30.1](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcompare\u002Fv6.30.0...v6.30.1)\n\n### Chores\n\n* **internal:** tweak CI branches ([25f5d74](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcommit\u002F25f5d74c1fc16e3303fcb87022f5f0559b052cbf))","2026-03-16T21:16:51",{"id":172,"version":173,"summary_zh":174,"released_at":175},101889,"v6.30.0","## 6.30.0 (2026-03-16)\n\nFull Changelog: [v6.29.0...v6.30.0](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcompare\u002Fv6.29.0...v6.30.0)\n\n### Features\n\n* **api:** add \u002Fv1\u002Fvideos endpoint option to batches ([271d879](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcommit\u002F271d87979f16950900f4253915bdda319b7fe935))\n* **api:** add defer_loading field to NamespaceTool ([7cc8f0a](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcommit\u002F7cc8f0a736ea7ba0aa3e7860b4c30eaaa5795966))\n\n\n### Bug Fixes\n\n* **api:** oidc publishing for npm ([fa50066](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcommit\u002Ffa500666e38379f2241ac43d60e2eb7eef7d39cb))","2026-03-16T14:59:25",{"id":177,"version":178,"summary_zh":179,"released_at":180},101890,"v6.29.0","## 6.29.0 (2026-03-13)\n\nFull Changelog: [v6.28.0...v6.29.0](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcompare\u002Fv6.28.0...v6.29.0)\n\n### Features\n\n* **api:** custom voices ([a11307a](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcommit\u002Fa11307afab49299fdf7e7ed3675d3e277d9b5c60))","2026-03-13T21:06:57",{"id":182,"version":183,"summary_zh":184,"released_at":185},101891,"v6.28.0","## 6.28.0 (2026-03-13)\n\nFull Changelog: [v6.27.0...v6.28.0](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcompare\u002Fv6.27.0...v6.28.0)\n\n### Features\n\n* **api:** manual updates 
([d543959](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcommit\u002Fd54395976aa4c1c1864bb45dbaf81ec1d66b8c6b))\n* **api:** manual updates ([4f87840](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcommit\u002F4f878406e029ae7527201251632e3fa00b800045))\n* **api:** sora api improvements: character api, video extensions\u002Fedits, higher resolution exports. ([262dac2](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcommit\u002F262dac25aec6c9caa561f57a0b9e2a086f47a26a))\n\n\n### Bug Fixes\n\n* **types:** remove detail field from ResponseInputFile and ResponseInputFileContent ([8d6c0cd](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcommit\u002F8d6c0cdbbf08829db08745597e1806661534853f))\n\n\n### Chores\n\n* **internal:** update dependencies to address dependabot vulnerabilities ([f5810ee](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcommit\u002Ff5810ee5f5bf96e81a77f91939f3d56427c46e00))\n* match http protocol with ws protocol instead of wss ([6f4e936](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcommit\u002F6f4e936bc2211da885bf492615b2bf413887576b))\n* **mcp-server:** improve instructions ([aad9ca1](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcommit\u002Faad9ca15ddbb8dbc27ed6b2aa9b242af9bbf7b8f))\n* use proper capitalization for WebSockets ([cb4cf62](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcommit\u002Fcb4cf6297c2a0eb7d3f55f8850e6e8ffc4c7ecc6))","2026-03-13T19:16:17",{"id":187,"version":188,"summary_zh":189,"released_at":190},101892,"v6.27.0","## 6.27.0 (2026-03-05)\n\nFull Changelog: [v6.26.0...v6.27.0](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcompare\u002Fv6.26.0...v6.27.0)\n\n### Features\n\n* **api:** The GA computer tool now uses the ComputerTool class. 
The 'computer_use_preview' tool is moved to ComputerUsePreview ([0206188](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcommit\u002F0206188f760be830738136e37dcf7be6ea0fe20c))\n\n\n### Chores\n\n* **internal:** improve import alias names ([9cc2478](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcommit\u002F9cc24789730a309037ef81f5a30af515d700459a))","2026-03-05T23:19:23",{"id":192,"version":193,"summary_zh":194,"released_at":195},101893,"v6.26.0","## 6.26.0 (2026-03-05)\n\nFull Changelog: [v6.25.0...v6.26.0](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcompare\u002Fv6.25.0...v6.26.0)\n\n### Features\n\n* **api:** gpt-5.4, tool search tool, and new computer tool ([1d1e5a9](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcommit\u002F1d1e5a9b5aeb11b0e940b4532dcd6a3fcc23898a))\n\n\n### Bug Fixes\n\n* **api:** internal schema fixes ([6b401ad](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcommit\u002F6b401ad7d3ff2ead9cfa577daf8381f62ea85b93))\n* **api:** manual updates ([2b54919](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcommit\u002F2b549195c70581022d9d64c443ab08202c6faeb7))\n* **api:** readd phase ([4a0cf29](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcommit\u002F4a0cf2974865519d3b512fb377bc4ba305dce7b7))\n* **api:** remove phase from message types, prompt_cache_key param in responses ([088fca6](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcommit\u002F088fca6a4d5d1a577500acb5579ee403292d8911))\n\n\n### Chores\n\n* **internal:** codegen related update ([6a0aa9e](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcommit\u002F6a0aa9e2ff10e78f8b9afd777174d16537a29c8e))\n* **internal:** codegen related update ([b2a4299](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcommit\u002Fb2a42991cbe83eee45a342f19a5a99ce1d78b36a))\n* **internal:** move stringifyQuery implementation to internal function ([f9f4660](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcommit\u002Ff9f46609cf5c1fc51e437c23251c5a7d0519d55d))\n* **internal:** reduce warnings ([7e19492](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcommit\u002F7e194929156052b0efbda9ca48c3ed6de8c18d2f))","2026-03-05T18:35:44",{"id":197,"version":198,"summary_zh":199,"released_at":200},101894,"v6.25.0","## 6.25.0 (2026-02-24)\n\nFull Changelog: [v6.24.0...v6.25.0](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcompare\u002Fv6.24.0...v6.25.0)\n\n### Features\n\n* **api:** add phase ([e32b853](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcommit\u002Fe32b853c3c57f2d0e4c05b09177b94677aed0e5a))\n\n\n### Bug Fixes\n\n* **api:** fix phase enum ([2ffe1be](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcommit\u002F2ffe1be2600d0154b3355eefa61707470a341a95))\n* **api:** phase docs ([7fdfa38](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcommit\u002F7fdfa38c1fa2bd383e1171510918c6db5f0937d8))\n\n\n### Chores\n\n* **internal:** refactor sse event parsing ([0ea2380](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcommit\u002F0ea238054c0473adc97f4173a0ad5ba8bcfa4e29))","2026-02-24T19:53:38",{"id":202,"version":203,"summary_zh":204,"released_at":205},101895,"v6.24.0","## 6.24.0 (2026-02-24)\n\nFull Changelog: [v6.23.0...v6.24.0](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcompare\u002Fv6.23.0...v6.24.0)\n\n### Features\n\n* **api:** add gpt-realtime-1.5 and gpt-audio-1.5 models 
to realtime ([75875bf](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcommit\u002F75875bfb850c0780878553c566fe8821048ae5e8))","2026-02-24T03:19:42",{"id":207,"version":208,"summary_zh":209,"released_at":210},101896,"v6.23.0","## 6.23.0 (2026-02-23)\n\nFull Changelog: [v6.22.0...v6.23.0](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcompare\u002Fv6.22.0...v6.23.0)\n\n### Features\n\n* **api:** websockets for responses api ([c6b96b8](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcommit\u002Fc6b96b8b8d5f8132e0a4c5f7399a04185302adcc))\n\n\n### Bug Fixes\n\n* **docs\u002Fcontributing:** correct pnpm link command ([8a198a5](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcommit\u002F8a198a5aa60209e26509651cdad110aadf164527))\n* **internal:** skip tests that depend on mock server ([3d88cb0](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcommit\u002F3d88cb061a9a4d187931d4c892a87bd5e5f09c4d))\n\n\n### Chores\n\n* **internal\u002Fclient:** fix form-urlencoded requests ([646cedd](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcommit\u002F646cedd2842716b1768d81705110cc573d6ddc33))\n* update mock server docs ([29f78f3](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcommit\u002F29f78f310b7c336318705c382fd92a324d4b1ea2))\n\n\n### Documentation\n\n* **api:** document 2000 file limit in file-batches create parameters ([ff7bde0](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcommit\u002Fff7bde08d8d02b8bda5f4e50bef65271a8f2a190))\n* **api:** enhance method descriptions across audio\u002Fchat\u002Fskills\u002Fvideos\u002Fresponses ([f5e02a1](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcommit\u002Ff5e02a1dcad492fd3dab2d1a289c12af082cdef4))\n* **api:** update safety_identifier description in chat\u002Fresponses ([a55e0ef](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcommit\u002Fa55e0ef720cfb231e09e598ff0e8e60ef91e9088))","2026-02-23T20:07:18",{"id":212,"version":213,"summary_zh":214,"released_at":215},101897,"v6.22.0","## 6.22.0 (2026-02-14)\n\nFull Changelog: [v6.21.0...v6.22.0](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcompare\u002Fv6.21.0...v6.22.0)\n\n### Features\n\n* **api:** container network_policy and skills ([65c1482](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcommit\u002F65c1482a41f16d39ff6ba26849a72b417b27403e))\n\n\n### Bug Fixes\n\n* **docs:** restore helper methods in API reference ([3a4c189](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcommit\u002F3a4c189712292f280ca34326fe17e202180951bf))\n* **webhooks:** restore webhook type exports ([49bbf46](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcommit\u002F49bbf46f0ed14ce2a050d10baa4ad7a8481a773d))\n\n\n### Chores\n\n* **internal:** avoid type checking errors with ts-reset ([4b0d1f2](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcommit\u002F4b0d1f27207dea6054291707d7bbdeb86dbcf4b2))\n\n\n### Documentation\n\n* split `api.md` by standalone resources ([48e07d6](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcommit\u002F48e07d65894c22b543e669d62fa42a00cc3d0430))\n* update comment ([e3a1ea0](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcommit\u002Fe3a1ea0400b428e0e21666f96e3a9345468678d5))","2026-02-14T00:40:07",{"id":217,"version":218,"summary_zh":219,"released_at":220},101898,"v6.21.0","## 6.21.0 (2026-02-10)\n\nFull Changelog: 
[v6.20.0...v6.21.0](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcompare\u002Fv6.20.0...v6.21.0)\n\n### Features\n\n* **api:** support for images in batch api ([017ba1c](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcommit\u002F017ba1cb5a08428ca59197764cff460c70950e84))","2026-02-10T19:02:04",{"id":222,"version":223,"summary_zh":224,"released_at":225},101899,"v6.20.0","## 6.20.0 (2026-02-10)\n\nFull Changelog: [v6.19.0...v6.20.0](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcompare\u002Fv6.19.0...v6.20.0)\n\n### Features\n\n* **api:** skills and hosted shell ([e4bdd62](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcommit\u002Fe4bdd6205a0225d662ddeb07367f26094eaadbdd))","2026-02-10T18:14:11",{"id":227,"version":228,"summary_zh":229,"released_at":230},101900,"v6.19.0","## 6.19.0 (2026-02-09)\n\nFull Changelog: [v6.18.0...v6.19.0](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcompare\u002Fv6.18.0...v6.19.0)\n\n### Features\n\n* **api:** responses context_management ([40e7671](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcommit\u002F40e7671675159966fe219b3aebfb24b9b03f2c95))","2026-02-09T21:40:15",{"id":232,"version":233,"summary_zh":234,"released_at":235},101901,"v6.18.0","## 6.18.0 (2026-02-05)\n\nFull Changelog: [v6.17.0...v6.18.0](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcompare\u002Fv6.17.0...v6.18.0)\n\n### Features\n\n* **api:** image generation actions for responses; ResponseFunctionCallArgumentsDoneEvent.name ([d373c32](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcommit\u002Fd373c3210d9299381e20520c217167b387b46105))\n\n\n### Bug Fixes\n\n* **client:** avoid memory leak with abort signals ([b449f36](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcommit\u002Fb449f36609b727f3f147fad19e8d064225bc8621))\n* **client:** avoid removing abort listener too early ([1c045f7](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcommit\u002F1c045f701743017ac7b4e2be0dfc8706a3b0213a))\n* **client:** undo change to web search Find action ([8259b45](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcommit\u002F8259b457c6f73c78066af0e1a76be0125caeb1ae))\n* **client:** update type for `find_in_page` action ([9aa8d98](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcommit\u002F9aa8d9822e60afb595c585f7be75087378b724bd))\n\n\n### Chores\n\n* **client:** do not parse responses with empty content-length ([4a118fa](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcommit\u002F4a118fa3e09b0ad2bc4899b2a074fd60103796a0))\n* **client:** restructure abort controller binding ([a4d7151](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcommit\u002Fa4d71518787849ec1f530da3c8550ea0f8746668))\n* **internal:** fix pagination internals not accepting option promises ([6677905](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcommit\u002F667790549f9160ba0cac484a8de09d8966cc13f0))","2026-02-05T16:27:40",{"id":237,"version":238,"summary_zh":239,"released_at":240},101902,"v6.17.0","## 6.17.0 (2026-01-28)\n\nFull Changelog: [v6.16.0...v6.17.0](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcompare\u002Fv6.16.0...v6.17.0)\n\n### Features\n\n* **api:** add shell_call_output status field ([edf9590](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcommit\u002Fedf95904294cce6cdcac521ee75dc8e0a033df4c))\n* **api:** api update 
([6a2eb80](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcommit\u002F6a2eb80f53c21f52ff217faef9b783e1cf9846c1))\n* **api:** api updates ([19ca100](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcommit\u002F19ca100e9ebb2d03983da923c4bf944aa23c1f00))\n\n\n### Bug Fixes\n\n* **api:** mark assistants as deprecated ([3ae2a14](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcommit\u002F3ae2a1439bc30d83c81e30ab30ddd06f91fee61f))\n\n\n### Chores\n\n* **ci:** upgrade `actions\u002Fgithub-script` ([4ea73d3](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcommit\u002F4ea73d389b1b96d88c4c37c1a3a08ea143317c08))\n* **internal:** update `actions\u002Fcheckout` version ([f163b77](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcommit\u002Ff163b77bf2bb127f8049a0a7b1a2795c4f2bae50))\n* **internal:** upgrade babel, qs, js-yaml ([2e2f3c6](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcommit\u002F2e2f3c66ed61c0666e19831b123ea13d42978112))","2026-01-28T22:26:56",{"id":242,"version":243,"summary_zh":244,"released_at":245},101903,"v6.16.0","## 6.16.0 (2026-01-09)\n\nFull Changelog: [v6.15.0...v6.16.0](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcompare\u002Fv6.15.0...v6.16.0)\n\n### Features\n\n* **api:** add new Response completed_at prop ([ca40534](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcommit\u002Fca40534778311def52bc7dbbab043d925cdaf847))\n* **ci:** add breaking change detection workflow ([a6f3dea](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcommit\u002Fa6f3deaf89ea0ef85cc57e1150032bb6b807c3b9))\n\n\n### Chores\n\n* break long lines in snippets into multiline ([80dee2f](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcommit\u002F80dee2fe64d1b13f181bd482b31eb06fd6c5f3f4))\n* **internal:** codegen related update ([b2fac3e](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcommit\u002Fb2fac3ecdc3aecc3303c26304c4c94deda061edb))","2026-01-09T22:12:10",{"id":247,"version":248,"summary_zh":249,"released_at":250},101904,"v6.15.0","## 6.15.0 (2025-12-19)\n\nFull Changelog: [v6.14.0...v6.15.0](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcompare\u002Fv6.14.0...v6.15.0)\n\n### Bug Fixes\n\n* rebuild ([5627b41](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-node\u002Fcommit\u002F5627b4181775981e48991ea246e091afdfdc3caf))","2025-12-19T03:28:18"]
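The async-iterator streaming pattern recommended in FAQs 2331 and 2333 above can be made concrete. Below is a minimal TypeScript sketch, assuming openai-node v4 or later with OPENAI_API_KEY set in the environment; the `streamCompletion` helper, the prompt, and the model slug are illustrative choices, not part of the SDK itself.

```ts
import OpenAI from 'openai';

// The client reads OPENAI_API_KEY from the environment by default.
const client = new OpenAI();

// Accumulate a streamed chat completion into a single string,
// mirroring the Promise-wrapped pattern described in FAQ 2333.
async function streamCompletion(prompt: string): Promise<string> {
  const stream = await client.chat.completions.create({
    model: 'gpt-4o-mini', // illustrative model slug; substitute your own
    messages: [{ role: 'user', content: prompt }],
    stream: true,
  });

  let full = '';
  // v4+ streams are async iterables; each part is one SSE chunk.
  for await (const part of stream) {
    const delta = part.choices[0]?.delta?.content ?? '';
    full += delta;
    process.stdout.write(delta); // typewriter-style incremental output
  }
  return full;
}

streamCompletion('Summarize this quarter in one sentence.').then((text) => {
  console.log('\nfull response:', text);
});
```

Because the function resolves only after the loop drains the stream, callers get the complete text while users still see tokens as they arrive.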
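Similarly, the `tsconfig.json` guidance in FAQs 2332 and 2336 comes down to a handful of compiler options. A minimal sketch of just the relevant fields, assuming a Node 18+ project; any other options in your existing config stay as they are:

```jsonc
{
  "compilerOptions": {
    // Provides the DOM `File` and `fetch` types the SDK references.
    "lib": ["dom", "esnext"],
    // Resolve packages the Node way, as the FAQ suggests.
    "moduleResolution": "node",
    // Skip type-checking of declaration files to avoid upstream type clashes.
    "skipLibCheck": true
  }
}
```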