[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-openai--openai-java":3,"tool-openai--openai-java":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 
代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",160784,2,"2026-04-19T11:32:54",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",109154,"2026-04-18T11:18:24",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 
协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[52,13,15,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":32,"last_commit_at":59,"category_tags":60,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":67,"readme_en":68,"readme_zh":69,"quickstart_zh":70,"use_case_zh":71,"hero_image_url":72,"owner_login":73,"owner_name":74,"owner_avatar_url":75,"owner_bio":76,"owner_company":77,"owner_location":77,"owner_email":77,"owner_twitter":77,"owner_website":78,"owner_url":79,"languages":80,"stars":96,"forks":97,"last_commit_at":98,"license":99,"difficulty_score":32,"env_os":100,"env_gpu":100,"env_ram":100,"env_deps":101,"category_tags":107,"github_topics":77,"view_count":32,"oss_zip_url":77,"oss_zip_packed_at":77,"status":17,"created_at":108,"updated_at":109,"faqs":110,"releases":139},9654,"openai\u002Fopenai-java","openai-java","The official Java library for the OpenAI API","openai-java 是 OpenAI 官方推出的 Java 语言开发库，旨在帮助开发者轻松地在 Java 应用中集成 OpenAI 的强大能力。它封装了复杂的 REST API 调用细节，让程序员无需手动处理 HTTP 请求、参数构建或响应解析，即可直接通过简洁的代码调用 GPT 等模型进行文本生成、对话交互等操作。\n\n对于使用 Java 技术栈的团队或个人开发者而言，openai-java 解决了原生调用接口繁琐、易出错且维护成本高的问题。它支持通过环境变量或系统属性灵活配置密钥与组织信息，并兼容主流的 Gradle 和 Maven 构建工具，安装便捷。此外，该库不仅覆盖了最新的 Responses API，也长期支持经典的 Chat Completions API，确保新旧项目都能平滑接入。\n\n其技术亮点在于类型安全的 API 设计，利用 Java 的强类型特性在编译期即可发现潜在错误，同时提供完整的 Javadoc 
文档和丰富的示例代码，大幅降低学习门槛。无论是需要快速原型验证的研究人员，还是致力于构建企业级 AI 应用的工程师，openai-java 都是连接 Java 生态与前沿大模型的理想桥梁。","openai-java 是 OpenAI 官方推出的 Java 语言开发库，旨在帮助开发者轻松地在 Java 应用中集成 OpenAI 的强大能力。它封装了复杂的 REST API 调用细节，让程序员无需手动处理 HTTP 请求、参数构建或响应解析，即可直接通过简洁的代码调用 GPT 等模型进行文本生成、对话交互等操作。\n\n对于使用 Java 技术栈的团队或个人开发者而言，openai-java 解决了原生调用接口繁琐、易出错且维护成本高的问题。它支持通过环境变量或系统属性灵活配置密钥与组织信息，并兼容主流的 Gradle 和 Maven 构建工具，安装便捷。此外，该库不仅覆盖了最新的 Responses API，也长期支持经典的 Chat Completions API，确保新旧项目都能平滑接入。\n\n其技术亮点在于类型安全的 API 设计，利用 Java 的强类型特性在编译期即可发现潜在错误，同时提供完整的 Javadoc 文档和丰富的示例代码，大幅降低学习门槛。无论是需要快速原型验证的研究人员，还是致力于构建企业级 AI 应用的工程师，openai-java 都是连接 Java 生态与前沿大模型的理想桥梁。","# OpenAI Java API Library\n\n\u003C!-- x-release-please-start-version -->\n\n[![Maven Central](https:\u002F\u002Fimg.shields.io\u002Fmaven-central\u002Fv\u002Fcom.openai\u002Fopenai-java)](https:\u002F\u002Fcentral.sonatype.com\u002Fartifact\u002Fcom.openai\u002Fopenai-java\u002F4.32.0)\n[![javadoc](https:\u002F\u002Fjavadoc.io\u002Fbadge2\u002Fcom.openai\u002Fopenai-java\u002F4.32.0\u002Fjavadoc.svg)](https:\u002F\u002Fjavadoc.io\u002Fdoc\u002Fcom.openai\u002Fopenai-java\u002F4.32.0)\n\n\u003C!-- x-release-please-end -->\n\nThe OpenAI Java SDK provides convenient access to the [OpenAI REST API](https:\u002F\u002Fplatform.openai.com\u002Fdocs) from applications written in Java.\n\n\u003C!-- x-release-please-start-version -->\n\nThe REST API documentation can be found on [platform.openai.com](https:\u002F\u002Fplatform.openai.com\u002Fdocs). 
Javadocs are available on [javadoc.io](https:\u002F\u002Fjavadoc.io\u002Fdoc\u002Fcom.openai\u002Fopenai-java\u002F4.32.0).\n\n\u003C!-- x-release-please-end -->\n\n## Installation\n\n\u003C!-- x-release-please-start-version -->\n\n[_Try `openai-java-spring-boot-starter` if you're using Spring Boot!_](#spring-boot)\n\n### Gradle\n\n```kotlin\nimplementation(\"com.openai:openai-java:4.32.0\")\n```\n\n### Maven\n\n```xml\n\u003Cdependency>\n  \u003CgroupId>com.openai\u003C\u002FgroupId>\n  \u003CartifactId>openai-java\u003C\u002FartifactId>\n  \u003Cversion>4.32.0\u003C\u002Fversion>\n\u003C\u002Fdependency>\n```\n\n\u003C!-- x-release-please-end -->\n\n## Requirements\n\nThis library requires Java 8 or later.\n\n## Usage\n\n> [!TIP]\n> See the [`openai-java-example`](openai-java-example\u002Fsrc\u002Fmain\u002Fjava\u002Fcom\u002Fopenai\u002Fexample) directory for complete and runnable examples!\n\nThe primary API for interacting with OpenAI models is the [Responses API](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fresponses). You can generate text from the model with the code below.\n\n```java\nimport com.openai.client.OpenAIClient;\nimport com.openai.client.okhttp.OpenAIOkHttpClient;\nimport com.openai.models.ChatModel;\nimport com.openai.models.responses.Response;\nimport com.openai.models.responses.ResponseCreateParams;\n\n\u002F\u002F Configures using the `OPENAI_API_KEY`, `OPENAI_ORG_ID` and `OPENAI_PROJECT_ID` environment variables\nOpenAIClient client = OpenAIOkHttpClient.fromEnv();\n\nResponseCreateParams params = ResponseCreateParams.builder()\n        .input(\"Say this is a test\")\n        .model(ChatModel.GPT_5_2)\n        .build();\nResponse response = client.responses().create(params);\n```\n\nThe previous standard (supported indefinitely) for generating text is the [Chat Completions API](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fchat). 
You can use that API to generate text from the model with the code below.\n\n```java\nimport com.openai.client.OpenAIClient;\nimport com.openai.client.okhttp.OpenAIOkHttpClient;\nimport com.openai.models.ChatModel;\nimport com.openai.models.chat.completions.ChatCompletion;\nimport com.openai.models.chat.completions.ChatCompletionCreateParams;\n\n\u002F\u002F Configures using the `openai.apiKey`, `openai.orgId`, `openai.projectId`, `openai.webhookSecret` and `openai.baseUrl` system properties\n\u002F\u002F Or configures using the `OPENAI_API_KEY`, `OPENAI_ORG_ID`, `OPENAI_PROJECT_ID`, `OPENAI_WEBHOOK_SECRET` and `OPENAI_BASE_URL` environment variables\nOpenAIClient client = OpenAIOkHttpClient.fromEnv();\n\nChatCompletionCreateParams params = ChatCompletionCreateParams.builder()\n    .addUserMessage(\"Say this is a test\")\n    .model(ChatModel.GPT_5_2)\n    .build();\nChatCompletion chatCompletion = client.chat().completions().create(params);\n```\n\n## Client configuration\n\nConfigure the client using system properties or environment variables:\n\n```java\nimport com.openai.client.OpenAIClient;\nimport com.openai.client.okhttp.OpenAIOkHttpClient;\n\n\u002F\u002F Configures using the `openai.apiKey`, `openai.orgId`, `openai.projectId`, `openai.webhookSecret` and `openai.baseUrl` system properties\n\u002F\u002F Or configures using the `OPENAI_API_KEY`, `OPENAI_ORG_ID`, `OPENAI_PROJECT_ID`, `OPENAI_WEBHOOK_SECRET` and `OPENAI_BASE_URL` environment variables\nOpenAIClient client = OpenAIOkHttpClient.fromEnv();\n```\n\nOr manually:\n\n```java\nimport com.openai.client.OpenAIClient;\nimport com.openai.client.okhttp.OpenAIOkHttpClient;\n\nOpenAIClient client = OpenAIOkHttpClient.builder()\n    .apiKey(\"My API Key\")\n    .build();\n```\n\nOr using a combination of the two approaches:\n\n```java\nimport com.openai.client.OpenAIClient;\nimport com.openai.client.okhttp.OpenAIOkHttpClient;\n\nOpenAIClient client = OpenAIOkHttpClient.builder()\n    \u002F\u002F Configures 
using the `openai.apiKey`, `openai.orgId`, `openai.projectId`, `openai.webhookSecret` and `openai.baseUrl` system properties\n    \u002F\u002F Or configures using the `OPENAI_API_KEY`, `OPENAI_ORG_ID`, `OPENAI_PROJECT_ID`, `OPENAI_WEBHOOK_SECRET` and `OPENAI_BASE_URL` environment variables\n    .fromEnv()\n    .apiKey(\"My API Key\")\n    .build();\n```\n\nSee this table for the available options:\n\n| Setter          | System property        | Environment variable    | Required | Default value                 |\n| --------------- | ---------------------- | ----------------------- | -------- | ----------------------------- |\n| `apiKey`        | `openai.apiKey`        | `OPENAI_API_KEY`        | true     | -                             |\n| `organization`  | `openai.orgId`         | `OPENAI_ORG_ID`         | false    | -                             |\n| `project`       | `openai.projectId`     | `OPENAI_PROJECT_ID`     | false    | -                             |\n| `webhookSecret` | `openai.webhookSecret` | `OPENAI_WEBHOOK_SECRET` | false    | -                             |\n| `baseUrl`       | `openai.baseUrl`       | `OPENAI_BASE_URL`       | true     | `\"https:\u002F\u002Fapi.openai.com\u002Fv1\"` |\n\nSystem properties take precedence over environment variables.\n\n> [!TIP]\n> Don't create more than one client in the same application. 
Each client has a connection pool and\n> thread pools, which are more efficient to share between requests.\n\n### Modifying configuration\n\nTo temporarily use a modified client configuration, while reusing the same connection and thread pools, call `withOptions()` on any client or service:\n\n```java\nimport com.openai.client.OpenAIClient;\n\nOpenAIClient clientWithOptions = client.withOptions(optionsBuilder -> {\n    optionsBuilder.baseUrl(\"https:\u002F\u002Fexample.com\");\n    optionsBuilder.maxRetries(42);\n});\n```\n\nThe `withOptions()` method does not affect the original client or service.\n\n### Workload identity authentication\n\nWorkload identity authentication allows applications running in cloud environments (Kubernetes, Azure, GCP) to authenticate using short-lived tokens issued by the cloud provider, instead of long-lived API keys.\n\n#### Basic setup\n\n```java\nimport com.openai.auth.*;\nimport com.openai.client.OpenAIClient;\nimport com.openai.client.okhttp.OpenAIOkHttpClient;\n\nSubjectTokenProvider provider = K8sServiceAccountTokenProvider.builder().build();\n\nWorkloadIdentity workloadIdentity = WorkloadIdentity.builder()\n    .clientId(\"your-client-id\")\n    .identityProviderId(\"your-identity-provider-id\")\n    .serviceAccountId(\"your-service-account-id\")\n    .provider(provider)\n    .build();\n\nOpenAIClient client = OpenAIOkHttpClient.builder()\n    .workloadIdentity(workloadIdentity)\n    .build();\n```\n\n#### Kubernetes service account token provider\n\n```java\n\u002F\u002F Use default token path (\u002Fvar\u002Frun\u002Fsecrets\u002Fkubernetes.io\u002Fserviceaccount\u002Ftoken)\nSubjectTokenProvider provider = K8sServiceAccountTokenProvider.builder().build();\n```\n\n```java\n\u002F\u002F Or specify a custom token path\nSubjectTokenProvider provider = K8sServiceAccountTokenProvider.builder()\n    .tokenPath(\"\u002Fcustom\u002Fpath\u002Fto\u002Ftoken\")\n    .build();\n```\n\n#### Azure Managed Identity 
provider\n\n```java\nimport com.openai.auth.*;\n\n\u002F\u002F Use defaults (resource: https:\u002F\u002Fmanagement.azure.com\u002F, api-version: 2018-02-01)\nSubjectTokenProvider provider = AzureManagedIdentityTokenProvider.builder()\n    .build();\n```\n\n```java\nimport com.openai.auth.*;\n\n\u002F\u002F Or customize\nSubjectTokenProvider provider = AzureManagedIdentityTokenProvider.builder()\n    .resource(\"https:\u002F\u002Fmanagement.azure.com\u002F\")\n    .apiVersion(\"2018-02-01\")\n    .build();\n```\n\n#### GCP ID token provider\n\n```java\nimport com.openai.auth.*;\n\nSubjectTokenProvider provider = GcpIdTokenProvider.builder()\n    .build();\n```\n\n```java\nimport com.openai.auth.*;\n\n\u002F\u002F Or customize the audience\nSubjectTokenProvider provider = GcpIdTokenProvider.builder()\n    .audience(\"https:\u002F\u002Fapi.openai.com\u002Fv1\")\n    .build();\n```\n\n## Requests and responses\n\nTo send a request to the OpenAI API, build an instance of some `Params` class and pass it to the corresponding client method. When the response is received, it will be deserialized into an instance of a Java class.\n\nFor example, `client.chat().completions().create(...)` should be called with an instance of `ChatCompletionCreateParams`, and it will return an instance of `ChatCompletion`.\n\n## Immutability\n\nEach class in the SDK has an associated [builder](https:\u002F\u002Fblogs.oracle.com\u002Fjavamagazine\u002Fpost\u002Fexploring-joshua-blochs-builder-design-pattern-in-java) or factory method for constructing it.\n\nEach class is [immutable](https:\u002F\u002Fdocs.oracle.com\u002Fjavase\u002Ftutorial\u002Fessential\u002Fconcurrency\u002Fimmutable.html) once constructed. 
If the class has an associated builder, then it has a `toBuilder()` method, which can be used to convert it back to a builder for making a modified copy.\n\nBecause each class is immutable, builder modification will _never_ affect already built class instances.\n\n## Asynchronous execution\n\nThe default client is synchronous. To switch to asynchronous execution, call the `async()` method:\n\n```java\nimport com.openai.client.OpenAIClient;\nimport com.openai.client.okhttp.OpenAIOkHttpClient;\nimport com.openai.models.ChatModel;\nimport com.openai.models.chat.completions.ChatCompletion;\nimport com.openai.models.chat.completions.ChatCompletionCreateParams;\nimport java.util.concurrent.CompletableFuture;\n\n\u002F\u002F Configures using the `openai.apiKey`, `openai.orgId`, `openai.projectId`, `openai.webhookSecret` and `openai.baseUrl` system properties\n\u002F\u002F Or configures using the `OPENAI_API_KEY`, `OPENAI_ORG_ID`, `OPENAI_PROJECT_ID`, `OPENAI_WEBHOOK_SECRET` and `OPENAI_BASE_URL` environment variables\nOpenAIClient client = OpenAIOkHttpClient.fromEnv();\n\nChatCompletionCreateParams params = ChatCompletionCreateParams.builder()\n    .addUserMessage(\"Say this is a test\")\n    .model(ChatModel.GPT_5_2)\n    .build();\nCompletableFuture\u003CChatCompletion> chatCompletion = client.async().chat().completions().create(params);\n```\n\nOr create an asynchronous client from the beginning:\n\n```java\nimport com.openai.client.OpenAIClientAsync;\nimport com.openai.client.okhttp.OpenAIOkHttpClientAsync;\nimport com.openai.models.ChatModel;\nimport com.openai.models.chat.completions.ChatCompletion;\nimport com.openai.models.chat.completions.ChatCompletionCreateParams;\nimport java.util.concurrent.CompletableFuture;\n\n\u002F\u002F Configures using the `openai.apiKey`, `openai.orgId`, `openai.projectId`, `openai.webhookSecret` and `openai.baseUrl` system properties\n\u002F\u002F Or configures using the `OPENAI_API_KEY`, `OPENAI_ORG_ID`, `OPENAI_PROJECT_ID`, 
`OPENAI_WEBHOOK_SECRET` and `OPENAI_BASE_URL` environment variables\nOpenAIClientAsync client = OpenAIOkHttpClientAsync.fromEnv();\n\nChatCompletionCreateParams params = ChatCompletionCreateParams.builder()\n    .addUserMessage(\"Say this is a test\")\n    .model(ChatModel.GPT_5_2)\n    .build();\nCompletableFuture\u003CChatCompletion> chatCompletion = client.chat().completions().create(params);\n```\n\nThe asynchronous client supports the same options as the synchronous one, except most methods return `CompletableFuture`s.\n\n## Streaming\n\nThe SDK defines methods that return response \"chunk\" streams, where each chunk can be individually processed as soon as it arrives instead of waiting on the full response. Streaming methods generally correspond to [SSE](https:\u002F\u002Fdeveloper.mozilla.org\u002Fen-US\u002Fdocs\u002FWeb\u002FAPI\u002FServer-sent_events) or [JSONL](https:\u002F\u002Fjsonlines.org) responses.\n\nSome of these methods may have streaming and non-streaming variants, but a streaming method will always have a `Streaming` suffix in its name, even if it doesn't have a non-streaming variant.\n\nThese streaming methods return [`StreamResponse`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Fcore\u002Fhttp\u002FStreamResponse.kt) for synchronous clients:\n\n```java\nimport com.openai.core.http.StreamResponse;\nimport com.openai.models.chat.completions.ChatCompletionChunk;\n\ntry (StreamResponse\u003CChatCompletionChunk> streamResponse = client.chat().completions().createStreaming(params)) {\n    streamResponse.stream().forEach(chunk -> {\n        System.out.println(chunk);\n    });\n    System.out.println(\"No more chunks!\");\n}\n```\n\nOr [`AsyncStreamResponse`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Fcore\u002Fhttp\u002FAsyncStreamResponse.kt) for asynchronous clients:\n\n```java\nimport com.openai.core.http.AsyncStreamResponse;\nimport 
com.openai.models.chat.completions.ChatCompletionChunk;\nimport java.util.Optional;\n\nclient.async().chat().completions().createStreaming(params).subscribe(chunk -> {\n    System.out.println(chunk);\n});\n\n\u002F\u002F If you need to handle errors or completion of the stream\n\u002F\u002F (the type argument is spelled out because this library targets Java 8,\n\u002F\u002F where the diamond operator cannot be used with anonymous classes)\nclient.async().chat().completions().createStreaming(params).subscribe(new AsyncStreamResponse.Handler\u003CChatCompletionChunk>() {\n    @Override\n    public void onNext(ChatCompletionChunk chunk) {\n        System.out.println(chunk);\n    }\n\n    @Override\n    public void onComplete(Optional\u003CThrowable> error) {\n        if (error.isPresent()) {\n            System.out.println(\"Something went wrong!\");\n            throw new RuntimeException(error.get());\n        } else {\n            System.out.println(\"No more chunks!\");\n        }\n    }\n});\n\n\u002F\u002F Or use futures\nclient.async().chat().completions().createStreaming(params)\n    .subscribe(chunk -> {\n        System.out.println(chunk);\n    })\n    .onCompleteFuture()\n    .whenComplete((unused, error) -> {\n        if (error != null) {\n            System.out.println(\"Something went wrong!\");\n            throw new RuntimeException(error);\n        } else {\n            System.out.println(\"No more chunks!\");\n        }\n    });\n```\n\nAsync streaming uses a dedicated per-client cached thread pool [`Executor`](https:\u002F\u002Fdocs.oracle.com\u002Fjavase\u002F8\u002Fdocs\u002Fapi\u002Fjava\u002Futil\u002Fconcurrent\u002FExecutor.html) to stream without blocking the current thread. 
This default is suitable for most purposes.\n\nTo use a different `Executor`, configure the subscription using the `executor` parameter:\n\n```java\nimport java.util.concurrent.Executor;\nimport java.util.concurrent.Executors;\n\nExecutor executor = Executors.newFixedThreadPool(4);\nclient.async().chat().completions().createStreaming(params).subscribe(\n    chunk -> System.out.println(chunk), executor\n);\n```\n\nOr configure the client globally using the `streamHandlerExecutor` method:\n\n```java\nimport com.openai.client.OpenAIClient;\nimport com.openai.client.okhttp.OpenAIOkHttpClient;\nimport java.util.concurrent.Executors;\n\nOpenAIClient client = OpenAIOkHttpClient.builder()\n    .fromEnv()\n    .streamHandlerExecutor(Executors.newFixedThreadPool(4))\n    .build();\n```\n\n### Streaming helpers\n\nThe SDK provides conveniences for streamed chat completions. A\n[`ChatCompletionAccumulator`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Fhelpers\u002FChatCompletionAccumulator.kt)\ncan record the stream of chat completion chunks in the response as they are processed and accumulate\na [`ChatCompletion`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Fmodels\u002Fchat\u002Fcompletions\u002FChatCompletion.kt)\nobject similar to that which would have been returned by the non-streaming API.\n\nFor a synchronous response add a\n[`Stream.peek()`](https:\u002F\u002Fdocs.oracle.com\u002Fjavase\u002F8\u002Fdocs\u002Fapi\u002Fjava\u002Futil\u002Fstream\u002FStream.html#peek-java.util.function.Consumer-)\ncall to the stream pipeline to accumulate each chunk:\n\n```java\nimport com.openai.core.http.StreamResponse;\nimport com.openai.helpers.ChatCompletionAccumulator;\nimport com.openai.models.chat.completions.ChatCompletion;\nimport com.openai.models.chat.completions.ChatCompletionChunk;\n\nChatCompletionAccumulator chatCompletionAccumulator = ChatCompletionAccumulator.create();\n\ntry 
(StreamResponse\u003CChatCompletionChunk> streamResponse =\n        client.chat().completions().createStreaming(createParams)) {\n    streamResponse.stream()\n            .peek(chatCompletionAccumulator::accumulate)\n            .flatMap(completion -> completion.choices().stream())\n            .flatMap(choice -> choice.delta().content().stream())\n            .forEach(System.out::print);\n}\n\nChatCompletion chatCompletion = chatCompletionAccumulator.chatCompletion();\n```\n\nFor an asynchronous response, add the `ChatCompletionAccumulator` to the `subscribe()` call:\n\n```java\nimport com.openai.helpers.ChatCompletionAccumulator;\nimport com.openai.models.chat.completions.ChatCompletion;\n\nChatCompletionAccumulator chatCompletionAccumulator = ChatCompletionAccumulator.create();\n\nclient.chat()\n        .completions()\n        .createStreaming(createParams)\n        .subscribe(chunk -> chatCompletionAccumulator.accumulate(chunk).choices().stream()\n                .flatMap(choice -> choice.delta().content().stream())\n                .forEach(System.out::print))\n        .onCompleteFuture()\n        .join();\n\nChatCompletion chatCompletion = chatCompletionAccumulator.chatCompletion();\n```\n\nThe SDK provides conveniences for streamed responses. 
A\n[`ResponseAccumulator`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Fhelpers\u002FResponseAccumulator.kt)\ncan record the stream of response events as they are processed and accumulate a\n[`Response`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Fmodels\u002Fresponses\u002FResponse.kt)\nobject similar to that which would have been returned by the non-streaming API.\n\nFor a synchronous response add a\n[`Stream.peek()`](https:\u002F\u002Fdocs.oracle.com\u002Fjavase\u002F8\u002Fdocs\u002Fapi\u002Fjava\u002Futil\u002Fstream\u002FStream.html#peek-java.util.function.Consumer-)\ncall to the stream pipeline to accumulate each event:\n\n```java\nimport com.openai.core.http.StreamResponse;\nimport com.openai.helpers.ResponseAccumulator;\nimport com.openai.models.responses.Response;\nimport com.openai.models.responses.ResponseStreamEvent;\n\nResponseAccumulator responseAccumulator = ResponseAccumulator.create();\n\ntry (StreamResponse\u003CResponseStreamEvent> streamResponse =\n        client.responses().createStreaming(createParams)) {\n    streamResponse.stream()\n            .peek(responseAccumulator::accumulate)\n            .flatMap(event -> event.outputTextDelta().stream())\n            .forEach(textEvent -> System.out.print(textEvent.delta()));\n}\n\nResponse response = responseAccumulator.response();\n```\n\nFor an asynchronous response, add the `ResponseAccumulator` to the `subscribe()` call:\n\n```java\nimport com.openai.helpers.ResponseAccumulator;\nimport com.openai.models.responses.Response;\n\nResponseAccumulator responseAccumulator = ResponseAccumulator.create();\n\nclient.responses()\n        .createStreaming(createParams)\n        .subscribe(event -> responseAccumulator.accumulate(event)\n                .outputTextDelta().ifPresent(textEvent -> System.out.print(textEvent.delta())))\n        .onCompleteFuture()\n        .join();\n\nResponse response = responseAccumulator.response();\n```\n\n## 
Structured outputs with JSON schemas\n\nOpenAI [Structured Outputs](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Fstructured-outputs?api-mode=chat)\nis a feature that ensures that the model will always generate responses that adhere to a supplied\n[JSON schema](https:\u002F\u002Fjson-schema.org\u002Foverview\u002Fwhat-is-jsonschema).\n\nA JSON schema can be defined by creating a\n[`ResponseFormatJsonSchema`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Fmodels\u002FResponseFormatJsonSchema.kt)\nand setting it on the input parameters. However, for greater convenience, a JSON schema can instead\nbe derived automatically from the structure of an arbitrary Java class. The JSON content from the\nresponse will then be converted automatically to an instance of that Java class. A full, working\nexample of the use of Structured Outputs with arbitrary Java classes can be seen in\n[`StructuredOutputsExample`](openai-java-example\u002Fsrc\u002Fmain\u002Fjava\u002Fcom\u002Fopenai\u002Fexample\u002FStructuredOutputsExample.java).\n\nJava classes can contain fields declared to be instances of other classes and can use collections\n(see [Defining JSON schema properties](#defining-json-schema-properties) for more details):\n\n```java\nclass Person {\n    public String name;\n    public int birthYear;\n}\n\nclass Book {\n    public String title;\n    public Person author;\n    public int publicationYear;\n}\n\nclass BookList {\n    public List\u003CBook> books;\n}\n```\n\nPass the top-level class—`BookList` in this example—to `responseFormat(Class\u003CT>)` when building the\nparameters and then access an instance of `BookList` from the generated message content in the\nresponse:\n\n```java\nimport com.openai.models.ChatModel;\nimport com.openai.models.chat.completions.ChatCompletionCreateParams;\nimport com.openai.models.chat.completions.StructuredChatCompletionCreateParams;\n\nStructuredChatCompletionCreateParams\u003CBookList> 
params = ChatCompletionCreateParams.builder()\n        .addUserMessage(\"List some famous late twentieth century novels.\")\n        .model(ChatModel.GPT_5_2)\n        .responseFormat(BookList.class)\n        .build();\n\nclient.chat().completions().create(params).choices().stream()\n        .flatMap(choice -> choice.message().content().stream())\n        .flatMap(bookList -> bookList.books.stream())\n        .forEach(book -> System.out.println(book.title + \" by \" + book.author.name));\n```\n\nYou can start building the parameters with an instance of\n[`ChatCompletionCreateParams.Builder`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Fmodels\u002Fchat\u002Fcompletions\u002FChatCompletionCreateParams.kt)\nor\n[`StructuredChatCompletionCreateParams.Builder`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Fmodels\u002Fchat\u002Fcompletions\u002FStructuredChatCompletionCreateParams.kt).\nIf you start with the former (which allows for more compact code) the builder type will change to\nthe latter when `ChatCompletionCreateParams.Builder.responseFormat(Class\u003CT>)` is called.\n\nIf a field in a class is optional and does not require a defined value, you can represent this using\nthe [`java.util.Optional`](https:\u002F\u002Fdocs.oracle.com\u002Fjavase\u002F8\u002Fdocs\u002Fapi\u002Fjava\u002Futil\u002FOptional.html) class.\nIt is up to the AI model to decide whether to provide a value for that field or leave it empty.\n\n```java\nimport java.util.Optional;\n\nclass Book {\n    public String title;\n    public Person author;\n    public int publicationYear;\n    public Optional\u003CString> isbn;\n}\n```\n\nGeneric type information for fields is retained in the class's metadata, but _generic type erasure_\napplies in other scopes. 
While, for example, a JSON schema defining an array of books can be derived\nfrom the `BookList.books` field with type `List\u003CBook>`, a valid JSON schema cannot be derived from a\nlocal variable of that same type, so the following will _not_ work:\n\n```java\nList\u003CBook> books = new ArrayList\u003C>();\n\nStructuredChatCompletionCreateParams\u003CList\u003CBook>> params = ChatCompletionCreateParams.builder()\n        .responseFormat(books.getClass())\n        \u002F\u002F ...\n        .build();\n```\n\nIf an error occurs while converting a JSON response to an instance of a Java class, the error\nmessage will include the JSON response to assist in diagnosis. For instance, if the response is\ntruncated, the JSON data will be incomplete and cannot be converted to a class instance. If your\nJSON response may contain sensitive information, avoid logging it directly, or ensure that you\nredact any sensitive details from the error message.\n\n### Local JSON schema validation\n\nStructured Outputs supports a\n[subset](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Fstructured-outputs#supported-schemas) of the JSON\nSchema language. Schemas are generated automatically from classes to align with this subset.\nHowever, due to the inherent structure of the classes, the generated schema may still violate\ncertain OpenAI schema restrictions, such as exceeding the maximum nesting depth or utilizing\nunsupported data types.\n\nTo facilitate compliance, the method `responseFormat(Class\u003CT>)` performs a validation check on the\nschema derived from the specified class. This validation ensures that all restrictions are adhered\nto. If any issues are detected, an exception will be thrown, providing a detailed message outlining\nthe reasons for the validation failure.\n\n- **Local Validation**: The validation process occurs locally, meaning no requests are sent to the\n  remote AI model. 
If the schema passes local validation, it is likely to pass remote validation as\n  well.\n- **Remote Validation**: The remote AI model will conduct its own validation upon receiving the JSON\n  schema in the request.\n- **Version Compatibility**: There may be instances where local validation fails while remote\n  validation succeeds. This can occur if the SDK version is outdated compared to the restrictions\n  enforced by the remote AI model.\n- **Disabling Local Validation**: If you encounter compatibility issues and wish to bypass local\n  validation, you can disable it by passing\n  [`JsonSchemaLocalValidation.NO`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Fcore\u002FJsonSchemaLocalValidation.kt)\n  to the `responseFormat(Class\u003CT>, JsonSchemaLocalValidation)` method when building the parameters.\n  (The default value for this parameter is `JsonSchemaLocalValidation.YES`.)\n\n```java\nimport com.openai.core.JsonSchemaLocalValidation;\nimport com.openai.models.ChatModel;\nimport com.openai.models.chat.completions.ChatCompletionCreateParams;\nimport com.openai.models.chat.completions.StructuredChatCompletionCreateParams;\n\nStructuredChatCompletionCreateParams\u003CBookList> params = ChatCompletionCreateParams.builder()\n        .addUserMessage(\"List some famous late twentieth century novels.\")\n        .model(ChatModel.GPT_5_2)\n        .responseFormat(BookList.class, JsonSchemaLocalValidation.NO)\n        .build();\n```\n\nBy following these guidelines, you can ensure that your structured outputs conform to the necessary\nschema requirements and minimize the risk of remote validation errors.\n\n### Usage with the Responses API\n\n_Structured Outputs_ are also supported for the Responses API. The usage is the same as described\nexcept where the Responses API differs slightly from the Chat Completions API. 
Pass the top-level\nclass to `text(Class\u003CT>)` when building the parameters and then access an instance of the class from\nthe generated message content in the response.\n\nYou can start building the parameters with an instance of\n[`ResponseCreateParams.Builder`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Fmodels\u002Fresponses\u002FResponseCreateParams.kt)\nor\n[`StructuredResponseCreateParams.Builder`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Fmodels\u002Fresponses\u002FStructuredResponseCreateParams.kt).\nIf you start with the former (which allows for more compact code) the builder type will change to\nthe latter when `ResponseCreateParams.Builder.text(Class\u003CT>)` is called.\n\nFor a full example of the usage of _Structured Outputs_ with the Responses API, see\n[`ResponsesStructuredOutputsExample`](openai-java-example\u002Fsrc\u002Fmain\u002Fjava\u002Fcom\u002Fopenai\u002Fexample\u002FResponsesStructuredOutputsExample.java).\n\nInstead of using `ResponseCreateParams.text(Class\u003CT>)`, you can build a\n[`StructuredResponseTextConfig`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Fmodels\u002Fresponses\u002FStructuredResponseTextConfig.kt)\nand set it on the `ResponseCreateParams` using the `text(StructuredResponseTextConfig)` method.\nSimilar to using `ResponseCreateParams`, you can start with a `ResponseTextConfig.Builder` and its\n`format(Class\u003CT>)` method will change it to a `StructuredResponseTextConfig.Builder`. 
This also\nallows you to set the `verbosity` configuration parameter on the text configuration before adding it\nto the `ResponseCreateParams`.\n\nFor a full example of the usage of _Structured Outputs_ with the `ResponseTextConfig` and its\n`verbosity` parameter, see\n[`ResponsesStructuredOutputsVerbosityExample`](openai-java-example\u002Fsrc\u002Fmain\u002Fjava\u002Fcom\u002Fopenai\u002Fexample\u002FResponsesStructuredOutputsVerbosityExample.java).\n\n### Usage with streaming\n\n_Structured Outputs_ can also be used with [Streaming](#streaming) and the Chat Completions API. As\nresponses are returned in \"chunks\", the full response must first be accumulated to concatenate the\nJSON strings that can then be converted into instances of the arbitrary Java class. Normal streaming\noperations can be performed while accumulating the JSON strings.\n\nUse the [`ChatCompletionAccumulator`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Fhelpers\u002FChatCompletionAccumulator.kt)\nas described in the section on [Streaming helpers](#streaming-helpers) to accumulate the JSON\nstrings. 
Once accumulated, use `ChatCompletionAccumulator.chatCompletion(Class<T>)` to convert the
accumulated `ChatCompletion` into a
[`StructuredChatCompletion`](openai-java-core/src/main/kotlin/com/openai/models/chat/completions/StructuredChatCompletion.kt).
The `StructuredChatCompletion` can then automatically deserialize the JSON strings into instances of
your Java class.

For a full example of the usage of _Structured Outputs_ with Streaming and the Chat Completions API,
see
[`StructuredOutputsStreamingExample`](openai-java-example/src/main/java/com/openai/example/StructuredOutputsStreamingExample.java).

With the Responses API, accumulate events while streaming using the
[`ResponseAccumulator`](openai-java-core/src/main/kotlin/com/openai/helpers/ResponseAccumulator.kt).
Once accumulated, use `ResponseAccumulator.response(Class<T>)` to convert the accumulated `Response`
into a
[`StructuredResponse`](openai-java-core/src/main/kotlin/com/openai/models/responses/StructuredResponse.kt).
The `StructuredResponse` can then automatically deserialize the JSON strings into instances of
your Java class.

For a full example of the usage of _Structured Outputs_ with Streaming and the Responses API, see
[`ResponsesStructuredOutputsStreamingExample`](openai-java-example/src/main/java/com/openai/example/ResponsesStructuredOutputsStreamingExample.java).

### Defining JSON schema properties

When a JSON schema is derived from your Java classes, all properties represented by `public` fields
or `public` getter methods are included in the schema by default. Non-`public` fields and getter
methods are _not_ included by default.
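To illustrate these defaults, the following sketch (plain reflection, not the SDK's actual schema generator; the `Movie` class is hypothetical) approximates which members a generator would consider: `public` fields and `public` getters are picked up, while a non-`public` field is only reachable through its getter:

```java
import java.lang.reflect.Field;
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;
import java.util.ArrayList;
import java.util.List;

class Movie {
    public String title;          // public field: included as "title"
    private int releaseYear;      // non-public field: not included directly...
    public int getReleaseYear() { // ...but its public getter yields "releaseYear"
        return releaseYear;
    }
}

public class PropertyNamesDemo {
    /** Approximates the default rule: public fields plus public no-arg "get" methods. */
    static List<String> defaultPropertyNames(Class<?> cls) {
        List<String> names = new ArrayList<>();
        for (Field field : cls.getDeclaredFields()) {
            if (Modifier.isPublic(field.getModifiers())) {
                names.add(field.getName());
            }
        }
        for (Method method : cls.getDeclaredMethods()) {
            if (Modifier.isPublic(method.getModifiers())
                    && method.getParameterCount() == 0
                    && method.getName().startsWith("get")
                    && method.getName().length() > 3) {
                // Strip the "get" prefix and lower-case the first letter: getReleaseYear -> releaseYear
                String name = method.getName().substring(3);
                names.add(Character.toLowerCase(name.charAt(0)) + name.substring(1));
            }
        }
        return names;
    }

    public static void main(String[] args) {
        // Both "title" (public field) and "releaseYear" (public getter) appear;
        // the private field itself contributes nothing.
        System.out.println(defaultPropertyNames(Movie.class));
    }
}
```

A `private` field with no getter and no `@JsonProperty` annotation would simply be absent from the schema.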
You can exclude `public`, or include non-`public` fields or\ngetter methods, by using the `@JsonIgnore` or `@JsonProperty` annotations respectively (see\n[Annotating classes and JSON schemas](#annotating-classes-and-json-schemas) for details).\n\nIf you do not want to define `public` fields, you can define `private` fields and corresponding\n`public` getter methods. For example, a `private` field `myValue` with a `public` getter method\n`getMyValue()` will result in a `\"myValue\"` property being included in the JSON schema. If you\nprefer not to use the conventional Java \"get\" prefix for the name of the getter method, then you\n_must_ annotate the getter method with the `@JsonProperty` annotation and the full method name will\nbe used as the property name. You do not have to define any corresponding setter methods if you do\nnot need them.\n\nEach of your classes _must_ define at least one property to be included in the JSON schema. A\nvalidation error will occur if any class contains no fields or getter methods from which schema\nproperties can be derived. This may occur if, for example:\n\n- There are no fields or getter methods in the class.\n- All fields and getter methods are `public`, but all are annotated with `@JsonIgnore`.\n- All fields and getter methods are non-`public`, but none are annotated with `@JsonProperty`.\n- A field or getter method is declared with a `Map` type. A `Map` is treated like a separate class\n  with no named properties, so it will result in an empty `\"properties\"` field in the JSON schema.\n\n### Annotating classes and JSON schemas\n\nYou can use annotations to add further information to the JSON schema derived from your Java\nclasses, or to control which fields or getter methods will be included in the schema. Details from\nannotations captured in the JSON schema may be used by the AI model to improve its response. 
The SDK\nsupports the use of [Jackson Databind](https:\u002F\u002Fgithub.com\u002FFasterXML\u002Fjackson-databind) annotations.\n\n```java\nimport com.fasterxml.jackson.annotation.JsonClassDescription;\nimport com.fasterxml.jackson.annotation.JsonIgnore;\nimport com.fasterxml.jackson.annotation.JsonPropertyDescription;\n\nclass Person {\n    @JsonPropertyDescription(\"The first name and surname of the person\")\n    public String name;\n    public int birthYear;\n    @JsonPropertyDescription(\"The year the person died, or 'present' if the person is living.\")\n    public String deathYear;\n}\n\n@JsonClassDescription(\"The details of one published book\")\nclass Book {\n    public String title;\n    public Person author;\n    @JsonPropertyDescription(\"The year in which the book was first published.\")\n    public int publicationYear;\n    @JsonIgnore public String genre;\n}\n\nclass BookList {\n    public List\u003CBook> books;\n}\n```\n\n- Use `@JsonClassDescription` to add a detailed description to a class.\n- Use `@JsonPropertyDescription` to add a detailed description to a field or getter method of a\n  class.\n- Use `@JsonIgnore` to exclude a `public` field or getter method of a class from the generated JSON\n  schema.\n- Use `@JsonProperty` to include a non-`public` field or getter method of a class in the generated\n  JSON schema.\n\nIf you use `@JsonProperty(required = false)`, the `false` value will be ignored. 
OpenAI JSON schemas\nmust mark all properties as _required_, so the schema generated from your Java classes will respect\nthat restriction and ignore any annotation that would violate it.\n\nYou can also use [OpenAPI Swagger 2](https:\u002F\u002Fswagger.io\u002Fspecification\u002Fv2\u002F)\n[`@Schema`](https:\u002F\u002Fgithub.com\u002Fswagger-api\u002Fswagger-core\u002Fwiki\u002FSwagger-2.X---Annotations#schema) and\n[`@ArraySchema`](https:\u002F\u002Fgithub.com\u002Fswagger-api\u002Fswagger-core\u002Fwiki\u002FSwagger-2.X---Annotations#arrayschema)\nannotations. These allow type-specific constraints to be added to your schema properties. You can\nlearn more about the supported constraints in the OpenAI documentation on\n[Supported properties](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Fstructured-outputs#supported-properties).\n\n```java\nimport io.swagger.v3.oas.annotations.media.Schema;\nimport io.swagger.v3.oas.annotations.media.ArraySchema;\n\nclass Article {\n    @ArraySchema(minItems = 1, maxItems = 10)\n    public List\u003CString> authors;\n\n    @Schema(pattern = \"^[A-Za-z ]+$\")\n    public String title;\n\n    @Schema(format = \"date\")\n    public String publicationDate;\n\n    @Schema(minimum = \"1\")\n    public int pageCount;\n}\n```\n\nLocal validation will check that you have not used any unsupported constraint keywords. However, the\nvalues of the constraints are _not_ validated locally. For example, if you use a value for the\n`\"format\"` constraint of a string property that is not in the list of\n[supported format names](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Fstructured-outputs#supported-properties),\nthen local validation will pass, but the AI model may report an error.\n\nIf you use both Jackson and Swagger annotations to set the same schema field, the Jackson annotation\nwill take precedence. 
In the following example, the description of `myProperty` will be set to\n\"Jackson description\"; \"Swagger description\" will be ignored:\n\n```java\nimport com.fasterxml.jackson.annotation.JsonPropertyDescription;\nimport io.swagger.v3.oas.annotations.media.Schema;\n\nclass MyObject {\n    @Schema(description = \"Swagger description\")\n    @JsonPropertyDescription(\"Jackson description\")\n    public String myProperty;\n}\n```\n\n## Function calling with JSON schemas\n\nOpenAI [Function Calling](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Ffunction-calling?api-mode=chat)\nlets you integrate external functions directly into the language model's responses. Instead of\nproducing plain text, the model can output instructions (with parameters) for calling a function\nwhen appropriate. You define a [JSON schema](https:\u002F\u002Fjson-schema.org\u002Foverview\u002Fwhat-is-jsonschema)\nfor functions, and the model uses it to decide when and how to trigger these calls, enabling more\ninteractive, data-driven applications.\n\nA JSON schema describing a function's parameters can be defined via the API by building a\n[`ChatCompletionTool`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Fmodels\u002Fchat\u002Fcompletions\u002FChatCompletionTool.kt)\ncontaining a\n[`FunctionDefinition`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Fmodels\u002FFunctionDefinition.kt)\nand then using `addTool` to set it on the input parameters. The response from the AI model may then\ncontain requests to call your functions, detailing the functions' names and their parameter values\nas JSON data that conforms to the JSON schema from the function definition. You can then parse the\nparameter values from this JSON, invoke your functions, and pass your functions' results back to the\nAI model. 
A full, working example of _Function Calling_ using the low-level API can be seen in\n[`FunctionCallingRawExample`](openai-java-example\u002Fsrc\u002Fmain\u002Fjava\u002Fcom\u002Fopenai\u002Fexample\u002FFunctionCallingRawExample.java).\n\nHowever, for greater convenience, the SDK can derive a function and its parameters automatically\nfrom the structure of an arbitrary Java class: the class's name provides the function name, and the\nclass's fields define the function's parameters. When the AI model responds with the parameter\nvalues in JSON form, you can then easily convert that JSON to an instance of your Java class and\nuse the parameter values to invoke your custom function. A full, working example of the use of\n_Function Calling_ with Java classes to define function parameters can be seen in\n[`FunctionCallingExample`](openai-java-example\u002Fsrc\u002Fmain\u002Fjava\u002Fcom\u002Fopenai\u002Fexample\u002FFunctionCallingExample.java).\n\nLike for [Structured Outputs](#structured-outputs-with-json-schemas), Java classes can contain\nfields declared to be instances of other classes and can use collections (see\n[Defining JSON schema properties](#defining-json-schema-properties) for more details). Optionally,\nannotations can be used to set the descriptions of the function (class) and its parameters (fields)\nto assist the AI model in understanding the purpose of the function and the possible values of its\nparameters.\n\n```java\nimport com.fasterxml.jackson.annotation.JsonClassDescription;\nimport com.fasterxml.jackson.annotation.JsonPropertyDescription;\n\n@JsonClassDescription(\"Gets the quality of the given SDK.\")\nstatic class GetSdkQuality {\n    @JsonPropertyDescription(\"The name of the SDK.\")\n    public String name;\n\n    public SdkQuality execute() {\n        return new SdkQuality(\n                name, name.contains(\"OpenAI\") ? 
\"It's robust and polished!\" : \"*shrug*\");\n    }\n}\n\nstatic class SdkQuality {\n    public String quality;\n\n    public SdkQuality(String name, String evaluation) {\n        quality = name + \": \" + evaluation;\n    }\n}\n\n@JsonClassDescription(\"Gets the review score (out of 10) for the named SDK.\")\nstatic class GetSdkScore {\n  public String name;\n\n  public int execute() {\n    return name.contains(\"OpenAI\") ? 10 : 3;\n  }\n}\n```\n\nWhen your functions are defined, add them to the input parameters using `addTool(Class\u003CT>)` and then\ncall them if requested to do so in the AI model's response. `Function.arguments(Class\u003CT>)` can be\nused to parse a function's parameters in JSON form to an instance of your function-defining class.\nThe fields of that instance will be set to the values of the parameters to the function call.\n\nAfter calling the function, use `ChatCompletionToolMessageParam.Builder.contentAsJson(Object)` to\npass the function's result back to the AI model. The method will convert the result to JSON form\nfor consumption by the model. 
The `Object` can be any object, including simple `String` instances\nand boxed primitive types.\n\n```java\nimport com.openai.client.OpenAIClient;\nimport com.openai.client.okhttp.OpenAIOkHttpClient;\nimport com.openai.models.ChatModel;\nimport com.openai.models.chat.completions.*;\nimport java.util.Collection;\n\nOpenAIClient client = OpenAIOkHttpClient.fromEnv();\n\nChatCompletionCreateParams.Builder createParamsBuilder = ChatCompletionCreateParams.builder()\n        .model(ChatModel.GPT_3_5_TURBO)\n        .maxCompletionTokens(2048)\n        .addTool(GetSdkQuality.class)\n        .addTool(GetSdkScore.class)\n        .addUserMessage(\"How good are the following SDKs and what do reviewers say: \"\n                + \"OpenAI Java SDK, Unknown Company SDK.\");\n\nclient.chat().completions().create(createParamsBuilder.build()).choices().stream()\n        .map(ChatCompletion.Choice::message)\n        \u002F\u002F Add each assistant message onto the builder so that we keep track of the\n        \u002F\u002F conversation for asking a follow-up question later.\n        .peek(createParamsBuilder::addMessage)\n        .flatMap(message -> {\n            message.content().ifPresent(System.out::println);\n            return message.toolCalls().stream().flatMap(Collection::stream);\n        })\n        .forEach(toolCall -> {\n            Object result = callFunction(toolCall.function());\n            \u002F\u002F Add the tool call result to the conversation.\n            createParamsBuilder.addMessage(ChatCompletionToolMessageParam.builder()\n                    .toolCallId(toolCall.id())\n                    .contentAsJson(result)\n                    .build());\n        });\n\n\u002F\u002F Ask a follow-up question about the function call result.\ncreateParamsBuilder.addUserMessage(\"Why do you say that?\");\nclient.chat().completions().create(createParamsBuilder.build()).choices().stream()\n        .flatMap(choice -> choice.message().content().stream())\n        
.forEach(System.out::println);

static Object callFunction(ChatCompletionMessageToolCall.Function function) {
  switch (function.name()) {
    case "GetSdkQuality":
      return function.arguments(GetSdkQuality.class).execute();
    case "GetSdkScore":
      return function.arguments(GetSdkScore.class).execute();
    default:
      throw new IllegalArgumentException("Unknown function: " + function.name());
  }
}
```

In the code above, an `execute()` method encapsulates each function's logic. However, there is no
requirement to follow that pattern. You are free to implement your function's logic in any way that
best suits your use case. The pattern above is only intended to _suggest_ a structure that may make
function calling simpler to understand and implement.

### Usage with the Responses API

_Function Calling_ is also supported for the Responses API. The usage is the same as described
except where the Responses API differs slightly from the Chat Completions API. Pass the top-level
class to `addTool(Class<T>)` when building the parameters. In the response, look for
[`ResponseOutputItem`](openai-java-core/src/main/kotlin/com/openai/models/responses/ResponseOutputItem.kt)
instances that are function calls.
Parse the parameters to each function call to an instance of the\nclass using\n[`ResponseFunctionToolCall.arguments(Class\u003CT>)`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Fmodels\u002Fresponses\u002FResponseFunctionToolCall.kt).\nFinally, pass the result of each call back to the model.\n\nFor a full example of the usage of _Function Calling_ with the Responses API using the low-level\nAPI to define and parse function parameters, see\n[`ResponsesFunctionCallingRawExample`](openai-java-example\u002Fsrc\u002Fmain\u002Fjava\u002Fcom\u002Fopenai\u002Fexample\u002FResponsesFunctionCallingRawExample.java).\n\nFor a full example of the usage of _Function Calling_ with the Responses API using Java classes to\ndefine and parse function parameters, see\n[`ResponsesFunctionCallingExample`](openai-java-example\u002Fsrc\u002Fmain\u002Fjava\u002Fcom\u002Fopenai\u002Fexample\u002FResponsesFunctionCallingExample.java).\n\n### Local function JSON schema validation\n\nLike for _Structured Outputs_, you can perform local validation to check that the JSON schema\nderived from your function class respects the restrictions imposed by OpenAI on such schemas. 
Local\nvalidation is enabled by default, but it can be disabled by adding `JsonSchemaLocalValidation.NO` to\nthe call to `addTool`.\n\n```java\nChatCompletionCreateParams.Builder createParamsBuilder = ChatCompletionCreateParams.builder()\n        .model(ChatModel.GPT_3_5_TURBO)\n        .maxCompletionTokens(2048)\n        .addTool(GetSdkQuality.class, JsonSchemaLocalValidation.NO)\n        .addTool(GetSdkScore.class, JsonSchemaLocalValidation.NO)\n        .addUserMessage(\"How good are the following SDKs and what do reviewers say: \"\n                + \"OpenAI Java SDK, Unknown Company SDK.\");\n```\n\nSee [Local JSON schema validation](#local-json-schema-validation) for more details on local schema\nvalidation and under what circumstances you might want to disable it.\n\n### Annotating function classes\n\nYou can use annotations to add further information about functions to the JSON schemas that are\nderived from your function classes, or to control which fields or getter methods will be used as\nparameters to the function. Details from annotations captured in the JSON schema may be used by the\nAI model to improve its response. 
The SDK supports the use of
[Jackson Databind](https://github.com/FasterXML/jackson-databind) annotations.

- Use `@JsonClassDescription` to add a description to a function class detailing when and how to use
  that function.
- Use `@JsonTypeName` to set the function name to something other than the simple name of the class,
  which is used by default.
- Use `@JsonPropertyDescription` to add a detailed description to a function parameter (a field or
  getter method of a function class).
- Use `@JsonIgnore` to exclude a `public` field or getter method of a class from the generated JSON
  schema for a function's parameters.
- Use `@JsonProperty` to include a non-`public` field or getter method of a class in the generated
  JSON schema for a function's parameters.

OpenAI provides some
[Best practices for defining functions](https://platform.openai.com/docs/guides/function-calling#best-practices-for-defining-functions)
that may help you to understand how to use the above annotations effectively for your functions.

See also [Defining JSON schema properties](#defining-json-schema-properties) for more details on how
to use fields and getter methods and combine access modifiers and annotations to define the
parameters of your functions.
The same rules apply to function classes and to the structured output\nclasses described in that section.\n\n## File uploads\n\nThe SDK defines methods that accept files.\n\nTo upload a file, pass a [`Path`](https:\u002F\u002Fdocs.oracle.com\u002Fjavase\u002F8\u002Fdocs\u002Fapi\u002Fjava\u002Fnio\u002Ffile\u002FPath.html):\n\n```java\nimport com.openai.models.files.FileCreateParams;\nimport com.openai.models.files.FileObject;\nimport com.openai.models.files.FilePurpose;\nimport java.nio.file.Paths;\n\nFileCreateParams params = FileCreateParams.builder()\n    .purpose(FilePurpose.FINE_TUNE)\n    .file(Paths.get(\"input.jsonl\"))\n    .build();\nFileObject fileObject = client.files().create(params);\n```\n\nOr an arbitrary [`InputStream`](https:\u002F\u002Fdocs.oracle.com\u002Fjavase\u002F8\u002Fdocs\u002Fapi\u002Fjava\u002Fio\u002FInputStream.html):\n\n```java\nimport com.openai.models.files.FileCreateParams;\nimport com.openai.models.files.FileObject;\nimport com.openai.models.files.FilePurpose;\nimport java.net.URL;\n\nFileCreateParams params = FileCreateParams.builder()\n    .purpose(FilePurpose.FINE_TUNE)\n    .file(new URL(\"https:\u002F\u002Fexample.com\u002Finput.jsonl\").openStream())\n    .build();\nFileObject fileObject = client.files().create(params);\n```\n\nOr a `byte[]` array:\n\n```java\nimport com.openai.models.files.FileCreateParams;\nimport com.openai.models.files.FileObject;\nimport com.openai.models.files.FilePurpose;\n\nFileCreateParams params = FileCreateParams.builder()\n    .purpose(FilePurpose.FINE_TUNE)\n    .file(\"content\".getBytes())\n    .build();\nFileObject fileObject = client.files().create(params);\n```\n\nNote that when passing a non-`Path` its filename is unknown so it will not be included in the request. 
To manually set a filename, pass a [`MultipartField`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Fcore\u002FValues.kt):\n\n```java\nimport com.openai.core.MultipartField;\nimport com.openai.models.files.FileCreateParams;\nimport com.openai.models.files.FileObject;\nimport com.openai.models.files.FilePurpose;\nimport java.io.InputStream;\nimport java.net.URL;\n\nFileCreateParams params = FileCreateParams.builder()\n    .purpose(FilePurpose.FINE_TUNE)\n    .file(MultipartField.\u003CInputStream>builder()\n        .value(new URL(\"https:\u002F\u002Fexample.com\u002Finput.jsonl\").openStream())\n        .filename(\"input.jsonl\")\n        .build())\n    .build();\nFileObject fileObject = client.files().create(params);\n```\n\n## Webhook Verification\n\nVerifying webhook signatures is _optional but encouraged_.\n\nFor more information about webhooks, see [the API docs](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Fwebhooks).\n\n### Parsing webhook payloads\n\nFor most use cases, you will likely want to verify the webhook and parse the payload at the same time. To achieve this, we provide the method `client.webhooks().unwrap()`, which parses a webhook request and verifies that it was sent by OpenAI. This method will throw an exception if the signature is invalid.\n\nNote that the `body` parameter must be the raw JSON string sent from the server (do not parse it first). 
The `.unwrap()` method will parse this JSON for you into an event object after verifying the webhook was sent from OpenAI.

```java
import com.openai.client.OpenAIClient;
import com.openai.client.okhttp.OpenAIOkHttpClient;
import com.openai.core.http.Headers;
import com.openai.models.webhooks.UnwrapWebhookEvent;
import java.util.Map;
import java.util.Optional;

OpenAIClient client = OpenAIOkHttpClient.fromEnv(); // OPENAI_WEBHOOK_SECRET env var used by default

public void handleWebhook(String body, Map<String, String> headers) {
    try {
        Headers headersList = Headers.builder()
                .putAll(headers)
                .build();

        UnwrapWebhookEvent event = client.webhooks().unwrap(body, headersList, Optional.empty());

        if (event.isResponseCompletedWebhookEvent()) {
            System.out.println("Response completed: " + event.asResponseCompletedWebhookEvent().data());
        } else if (event.isResponseFailedWebhookEvent()) {
            System.out.println("Response failed: " + event.asResponseFailedWebhookEvent().data());
        } else {
            System.out.println("Unhandled event type: " + event.getClass().getSimpleName());
        }
    } catch (Exception e) {
        System.err.println("Invalid webhook signature: " + e.getMessage());
        // Handle invalid signature
    }
}
```

### Verifying webhook payloads directly

In some cases, you may want to verify the webhook separately from parsing the payload. If you prefer to handle these steps separately, we provide the method `client.webhooks().verifySignature()` to _only verify_ the signature of a webhook request. Like `.unwrap()`, this method will throw an exception if the signature is invalid.

Note that the `body` parameter must be the raw JSON string sent from the server (do not parse it first).
You will then need to parse the body after verifying the signature.

```java
import com.fasterxml.jackson.databind.ObjectMapper;
import com.openai.client.OpenAIClient;
import com.openai.client.okhttp.OpenAIOkHttpClient;
import com.openai.core.http.Headers;
import com.openai.models.webhooks.WebhookVerificationParams;
import java.util.Map;

OpenAIClient client = OpenAIOkHttpClient.fromEnv(); // OPENAI_WEBHOOK_SECRET env var used by default
ObjectMapper objectMapper = new ObjectMapper();

public void handleWebhook(String body, Map<String, String> headers) {
    try {
        Headers headersList = Headers.builder()
                .putAll(headers)
                .build();

        client.webhooks().verifySignature(
            WebhookVerificationParams.builder()
                .payload(body)
                .headers(headersList)
                .build()
        );

        // Parse the body after verification
        Map<String, Object> event = objectMapper.readValue(body, Map.class);
        System.out.println("Verified event: " + event);
    } catch (Exception e) {
        System.err.println("Invalid webhook signature: " + e.getMessage());
        // Handle invalid signature
    }
}
```

## Binary responses

The SDK defines methods that return binary responses, which are used for API responses that shouldn't necessarily be parsed, like non-JSON data.

These methods return [`HttpResponse`](openai-java-core/src/main/kotlin/com/openai/core/http/HttpResponse.kt):

```java
import com.openai.core.http.HttpResponse;
import com.openai.models.files.FileContentParams;

HttpResponse response = client.files().content("file_id");
```

To save the response content to a file, use the
[`Files.copy(...)`](https:\u002F\u002Fdocs.oracle.com\u002Fjavase\u002F8\u002Fdocs\u002Fapi\u002Fjava\u002Fnio\u002Ffile\u002FFiles.html#copy-java.io.InputStream-java.nio.file.Path-java.nio.file.CopyOption...-) method:\n\n```java\nimport com.openai.core.http.HttpResponse;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\nimport java.nio.file.StandardCopyOption;\n\ntry (HttpResponse response = client.files().content(params)) {\n    Files.copy(\n        response.body(),\n        Paths.get(path),\n        StandardCopyOption.REPLACE_EXISTING\n    );\n} catch (Exception e) {\n    System.out.println(\"Something went wrong!\");\n    throw new RuntimeException(e);\n}\n```\n\nOr transfer the response content to any [`OutputStream`](https:\u002F\u002Fdocs.oracle.com\u002Fjavase\u002F8\u002Fdocs\u002Fapi\u002Fjava\u002Fio\u002FOutputStream.html):\n\n```java\nimport com.openai.core.http.HttpResponse;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\ntry (HttpResponse response = client.files().content(params)) {\n    response.body().transferTo(Files.newOutputStream(Paths.get(path)));\n} catch (Exception e) {\n    System.out.println(\"Something went wrong!\");\n    throw new RuntimeException(e);\n}\n```\n\n## Raw responses\n\nThe SDK defines methods that deserialize responses into instances of Java classes. 
However, these methods don't provide access to the response headers, status code, or the raw response body.\n\nTo access this data, prefix any HTTP method call on a client or service with `withRawResponse()`:\n\n```java\nimport com.openai.core.http.Headers;\nimport com.openai.core.http.HttpResponseFor;\nimport com.openai.models.ChatModel;\nimport com.openai.models.chat.completions.ChatCompletion;\nimport com.openai.models.chat.completions.ChatCompletionCreateParams;\n\nChatCompletionCreateParams params = ChatCompletionCreateParams.builder()\n    .addUserMessage(\"Say this is a test\")\n    .model(ChatModel.GPT_5_2)\n    .build();\nHttpResponseFor\u003CChatCompletion> chatCompletion = client.chat().completions().withRawResponse().create(params);\n\nint statusCode = chatCompletion.statusCode();\nHeaders headers = chatCompletion.headers();\n```\n\nYou can still deserialize the response into an instance of a Java class if needed:\n\n```java\nimport com.openai.models.chat.completions.ChatCompletion;\n\nChatCompletion parsedChatCompletion = chatCompletion.parse();\n```\n\n### Request IDs\n\n> For more information on debugging requests, see [the API docs](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fdebugging-requests).\n\nWhen using raw responses, you can access the `x-request-id` response header using the `requestId()` method:\n\n```java\nimport com.openai.core.http.HttpResponseFor;\nimport com.openai.models.chat.completions.ChatCompletion;\nimport java.util.Optional;\n\nHttpResponseFor\u003CChatCompletion> chatCompletion = client.chat().completions().withRawResponse().create(params);\nOptional\u003CString> requestId = chatCompletion.requestId();\n```\n\nThis can be used to quickly log failing requests and report them back to OpenAI.\n\n## Error handling\n\nThe SDK throws custom unchecked exception types:\n\n- 
[`OpenAIServiceException`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Ferrors\u002FOpenAIServiceException.kt): Base class for HTTP errors. See this table for which exception subclass is thrown for each HTTP status code:\n\n  | Status | Exception                                                                                                              |\n  | ------ | ---------------------------------------------------------------------------------------------------------------------- |\n  | 400    | [`BadRequestException`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Ferrors\u002FBadRequestException.kt)                     |\n  | 401    | [`UnauthorizedException`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Ferrors\u002FUnauthorizedException.kt)                 |\n  | 403    | [`PermissionDeniedException`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Ferrors\u002FPermissionDeniedException.kt)         |\n  | 404    | [`NotFoundException`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Ferrors\u002FNotFoundException.kt)                         |\n  | 422    | [`UnprocessableEntityException`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Ferrors\u002FUnprocessableEntityException.kt)   |\n  | 429    | [`RateLimitException`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Ferrors\u002FRateLimitException.kt)                       |\n  | 5xx    | [`InternalServerException`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Ferrors\u002FInternalServerException.kt)             |\n  | others | [`UnexpectedStatusCodeException`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Ferrors\u002FUnexpectedStatusCodeException.kt) |\n\n  
[`SseException`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Ferrors\u002FSseException.kt) is thrown for errors encountered during [SSE streaming](https:\u002F\u002Fdeveloper.mozilla.org\u002Fen-US\u002Fdocs\u002FWeb\u002FAPI\u002FServer-sent_events) after a successful initial HTTP response.\n\n- [`OpenAIIoException`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Ferrors\u002FOpenAIIoException.kt): I\u002FO networking errors.\n\n- [`OpenAIRetryableException`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Ferrors\u002FOpenAIRetryableException.kt): Generic error indicating a failure that could be retried by the client.\n\n- [`OpenAIInvalidDataException`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Ferrors\u002FOpenAIInvalidDataException.kt): Failure to interpret successfully parsed data. For example, when accessing a property that's supposed to be required, but the API unexpectedly omitted it from the response.\n\n- [`OpenAIException`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Ferrors\u002FOpenAIException.kt): Base class for all exceptions. Most errors will result in one of the previously mentioned ones, but completely generic errors may be thrown using the base class.\n\n## Pagination\n\nThe SDK defines methods that return paginated lists of results. 
It provides convenient ways to access the results either one page at a time or item-by-item across all pages.\n\n### Auto-pagination\n\nTo iterate through all results across all pages, use the `autoPager()` method, which automatically fetches more pages as needed.\n\nWhen using the synchronous client, the method returns an [`Iterable`](https:\u002F\u002Fdocs.oracle.com\u002Fjavase\u002F8\u002Fdocs\u002Fapi\u002Fjava\u002Flang\u002FIterable.html):\n\n```java\nimport com.openai.models.finetuning.jobs.FineTuningJob;\nimport com.openai.models.finetuning.jobs.JobListPage;\n\nJobListPage page = client.fineTuning().jobs().list();\n\n\u002F\u002F Process as an Iterable\nfor (FineTuningJob job : page.autoPager()) {\n    System.out.println(job);\n}\n\n\u002F\u002F Process as a Stream\npage.autoPager()\n    .stream()\n    .limit(50)\n    .forEach(job -> System.out.println(job));\n```\n\nWhen using the asynchronous client, the method returns an [`AsyncStreamResponse`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Fcore\u002Fhttp\u002FAsyncStreamResponse.kt):\n\n```java\nimport com.openai.core.http.AsyncStreamResponse;\nimport com.openai.models.finetuning.jobs.FineTuningJob;\nimport com.openai.models.finetuning.jobs.JobListPageAsync;\nimport java.util.Optional;\nimport java.util.concurrent.CompletableFuture;\n\nCompletableFuture\u003CJobListPageAsync> pageFuture = client.async().fineTuning().jobs().list();\n\npageFuture.thenAccept(page -> page.autoPager().subscribe(job -> {\n    System.out.println(job);\n}));\n\n\u002F\u002F If you need to handle errors or completion of the stream\npageFuture.thenAccept(page -> page.autoPager().subscribe(new AsyncStreamResponse.Handler\u003CFineTuningJob>() {\n    @Override\n    public void onNext(FineTuningJob job) {\n        System.out.println(job);\n    }\n\n    @Override\n    public void onComplete(Optional\u003CThrowable> error) {\n        if (error.isPresent()) {\n            System.out.println(\"Something went wrong!\");\n         
   throw new RuntimeException(error.get());\n        } else {\n            System.out.println(\"No more!\");\n        }\n    }\n}));\n\n\u002F\u002F Or use futures\npageFuture.thenAccept(page -> page.autoPager()\n    .subscribe(job -> {\n        System.out.println(job);\n    })\n    .onCompleteFuture()\n    .whenComplete((unused, error) -> {\n        if (error != null) {\n            System.out.println(\"Something went wrong!\");\n            throw new RuntimeException(error);\n        } else {\n            System.out.println(\"No more!\");\n        }\n    }));\n```\n\n### Manual pagination\n\nTo access individual page items and manually request the next page, use the `items()`,\n`hasNextPage()`, and `nextPage()` methods:\n\n```java\nimport com.openai.models.finetuning.jobs.FineTuningJob;\nimport com.openai.models.finetuning.jobs.JobListPage;\n\nJobListPage page = client.fineTuning().jobs().list();\nwhile (true) {\n    for (FineTuningJob job : page.items()) {\n        System.out.println(job);\n    }\n\n    if (!page.hasNextPage()) {\n        break;\n    }\n\n    page = page.nextPage();\n}\n```\n\n## Logging\n\nThe SDK uses the standard [OkHttp logging interceptor](https:\u002F\u002Fgithub.com\u002Fsquare\u002Fokhttp\u002Ftree\u002Fmaster\u002Fokhttp-logging-interceptor).\n\nEnable logging by setting the `OPENAI_LOG` environment variable to `info`:\n\n```sh\nexport OPENAI_LOG=info\n```\n\nOr to `debug` for more verbose logging:\n\n```sh\nexport OPENAI_LOG=debug\n```\n\n## ProGuard and R8\n\nAlthough the SDK uses reflection, it is still usable with [ProGuard](https:\u002F\u002Fgithub.com\u002FGuardsquare\u002Fproguard) and [R8](https:\u002F\u002Fdeveloper.android.com\u002Ftopic\u002Fperformance\u002Fapp-optimization\u002Fenable-app-optimization) because `openai-java-core` is published with a [configuration file](openai-java-core\u002Fsrc\u002Fmain\u002Fresources\u002FMETA-INF\u002Fproguard\u002Fopenai-java-core.pro) containing [keep 
rules](https:\u002F\u002Fwww.guardsquare.com\u002Fmanual\u002Fconfiguration\u002Fusage).\n\nProGuard and R8 should automatically detect and use the published rules, but you can also manually copy the keep rules if necessary.\n\n## GraalVM\n\nAlthough the SDK uses reflection, it is still usable in [GraalVM](https:\u002F\u002Fwww.graalvm.org) because `openai-java-core` is published with [reachability metadata](https:\u002F\u002Fwww.graalvm.org\u002Flatest\u002Freference-manual\u002Fnative-image\u002Fmetadata\u002F).\n\nGraalVM should automatically detect and use the published metadata, but [manual configuration](https:\u002F\u002Fwww.graalvm.org\u002Fjdk24\u002Freference-manual\u002Fnative-image\u002Foverview\u002FBuildConfiguration\u002F) is also available.\n\n## Spring Boot\n\nIf you're using Spring Boot, then you can use the SDK's [Spring Boot starter](https:\u002F\u002Fdocs.spring.io\u002Fspring-boot\u002Fdocs\u002F2.7.18\u002Freference\u002Fhtmlsingle\u002F#using.build-systems.starters) to simplify configuration and get set up quickly.\n\n### Installation\n\n\u003C!-- x-release-please-start-version -->\n\n#### Gradle\n\n```kotlin\nimplementation(\"com.openai:openai-java-spring-boot-starter:4.32.0\")\n```\n\n#### Maven\n\n```xml\n\u003Cdependency>\n  \u003CgroupId>com.openai\u003C\u002FgroupId>\n  \u003CartifactId>openai-java-spring-boot-starter\u003C\u002FartifactId>\n  \u003Cversion>4.32.0\u003C\u002Fversion>\n\u003C\u002Fdependency>\n```\n\n\u003C!-- x-release-please-end -->\n\n### Configuration\n\nThe [client's environment variable options](#client-configuration) can be configured in [`application.properties` or `application.yml`](https:\u002F\u002Fdocs.spring.io\u002Fspring-boot\u002Fhow-to\u002Fproperties-and-configuration.html).\n\n#### `application.properties`\n\n```properties\nopenai.base-url=https:\u002F\u002Fapi.openai.com\u002Fv1\nopenai.api-key=My API Key\nopenai.org-id=My Organization\nopenai.project-id=My Project\nopenai.webhook-secret=My Webhook 
Secret\n```\n\n#### `application.yml`\n\n```yaml\nopenai:\n  base-url: https:\u002F\u002Fapi.openai.com\u002Fv1\n  api-key: My API Key\n  org-id: My Organization\n  project-id: My Project\n  webhook-secret: My Webhook Secret\n```\n\n#### Other configuration\n\nConfigure any other client option by providing one or more instances of [`OpenAIClientCustomizer`](openai-java-spring-boot-starter\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Fspringboot\u002FOpenAIClientCustomizer.kt). For example, here's how you'd set [`maxRetries`](#retries):\n\n```java\nimport com.openai.springboot.OpenAIClientCustomizer;\nimport org.springframework.context.annotation.Bean;\nimport org.springframework.context.annotation.Configuration;\n\n@Configuration\npublic class OpenAIConfig {\n    @Bean\n    public OpenAIClientCustomizer customizer() {\n        return builder -> builder.maxRetries(3);\n    }\n}\n```\n\n### Usage\n\n[Inject](https:\u002F\u002Fdocs.spring.io\u002Fspring-framework\u002Freference\u002Fcore\u002Fbeans\u002Fdependencies\u002Ffactory-collaborators.html) [`OpenAIClient`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Fclient\u002FOpenAIClient.kt) anywhere and start using it!\n\n## Jackson\n\nThe SDK depends on [Jackson](https:\u002F\u002Fgithub.com\u002FFasterXML\u002Fjackson) for JSON serialization\u002Fdeserialization. It is compatible with version 2.13.4 or higher, but depends on version 2.18.2 by default.\n\nThe SDK throws an exception if it detects an incompatible Jackson version at runtime (e.g. 
if the default version was overridden in your Maven or Gradle config).\n\nIf the SDK threw an exception, but you're _certain_ the version is compatible, then disable the version check using the `checkJacksonVersionCompatibility` option on [`OpenAIOkHttpClient`](openai-java-client-okhttp\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Fclient\u002Fokhttp\u002FOpenAIOkHttpClient.kt) or [`OpenAIOkHttpClientAsync`](openai-java-client-okhttp\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Fclient\u002Fokhttp\u002FOpenAIOkHttpClientAsync.kt).\n\n> [!CAUTION]\n> We make no guarantee that the SDK works correctly when the Jackson version check is disabled.\n\nAlso note that there are bugs in older Jackson versions that can affect the SDK. We don't work around all Jackson bugs ([example](https:\u002F\u002Fgithub.com\u002FFasterXML\u002Fjackson-databind\u002Fissues\u002F3240)) and expect users to upgrade Jackson for those instead.\n\n## Microsoft Azure\n\nTo use this library with [Azure OpenAI](https:\u002F\u002Flearn.microsoft.com\u002Fazure\u002Fai-services\u002Fopenai\u002Foverview), use the same\nOpenAI client builder but with the Azure-specific configuration.\n\n```java\nOpenAIClient client = OpenAIOkHttpClient.builder()\n        \u002F\u002F Gets the API key and endpoint from the `AZURE_OPENAI_KEY` and `OPENAI_BASE_URL` environment variables, respectively\n        .fromEnv()\n        \u002F\u002F Set the Azure Entra ID\n        .credential(BearerTokenCredential.create(AuthenticationUtil.getBearerTokenSupplier(\n                new DefaultAzureCredentialBuilder().build(), \"https:\u002F\u002Fcognitiveservices.azure.com\u002F.default\")))\n        .build();\n```\n\nSee the complete Azure OpenAI example in the [`openai-java-example`](openai-java-example\u002Fsrc\u002Fmain\u002Fjava\u002Fcom\u002Fopenai\u002Fexample\u002FAzureEntraIdExample.java) directory. 
The other examples in the directory also work with Azure as long as the client is configured to use it.\n\n### Optional: URL path mode configuration\n\nThe [`ClientOptions`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Fcore\u002FClientOptions.kt) can be configured to treat Azure OpenAI endpoint URLs differently, depending on your service setup. The default value is [`AzureUrlPathMode.AUTO`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Fazure\u002FAzureUrlPathMode.kt). Each value affects the SDK behavior as follows:\n- `AzureUrlPathMode.LEGACY`: forces the deployment or model name into the path.\n- `AzureUrlPathMode.UNIFIED`: for newer endpoints ending in `\u002Fopenai\u002Fv1`, the service behavior matches OpenAI's, so [`AzureOpenAIServiceVersion`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Fazure\u002FAzureOpenAIServiceVersion.kt) becomes optional and the model is passed in the request object.\n- `AzureUrlPathMode.AUTO`: automatically detects the path mode based on the base URL. 
Default value.\n\n## Network options\n\n### Retries\n\nThe SDK automatically retries 2 times by default, with a short exponential backoff between requests.\n\nOnly the following error types are retried:\n\n- Connection errors (for example, due to a network connectivity problem)\n- 408 Request Timeout\n- 409 Conflict\n- 429 Rate Limit\n- 5xx Internal\n\nThe API may also explicitly instruct the SDK to retry or not retry a request.\n\nTo set a custom number of retries, configure the client using the `maxRetries` method:\n\n```java\nimport com.openai.client.OpenAIClient;\nimport com.openai.client.okhttp.OpenAIOkHttpClient;\n\nOpenAIClient client = OpenAIOkHttpClient.builder()\n    .fromEnv()\n    .maxRetries(4)\n    .build();\n```\n\n### Timeouts\n\nRequests time out after 10 minutes by default.\n\nTo set a custom timeout, configure the method call using the `timeout` method:\n\n```java\nimport com.openai.core.RequestOptions;\nimport com.openai.models.chat.completions.ChatCompletion;\nimport java.time.Duration;\n\nChatCompletion chatCompletion = client.chat().completions().create(\n  params, RequestOptions.builder().timeout(Duration.ofSeconds(30)).build()\n);\n```\n\nOr configure the default for all method calls at the client level:\n\n```java\nimport com.openai.client.OpenAIClient;\nimport com.openai.client.okhttp.OpenAIOkHttpClient;\nimport java.time.Duration;\n\nOpenAIClient client = OpenAIOkHttpClient.builder()\n    .fromEnv()\n    .timeout(Duration.ofSeconds(30))\n    .build();\n```\n\n### Proxies\n\nTo route requests through a proxy, configure the client using the `proxy` method:\n\n```java\nimport com.openai.client.OpenAIClient;\nimport com.openai.client.okhttp.OpenAIOkHttpClient;\nimport java.net.InetSocketAddress;\nimport java.net.Proxy;\n\nOpenAIClient client = OpenAIOkHttpClient.builder()\n    .fromEnv()\n    .proxy(new Proxy(\n      Proxy.Type.HTTP, new InetSocketAddress(\n        \"example.com\", 8080\n      )\n    ))\n    .build();\n```\n\n### Connection pooling\n\nTo customize the underlying OkHttp 
connection pool, configure the client using the `maxIdleConnections` and `keepAliveDuration` methods:\n\n```java\nimport com.openai.client.OpenAIClient;\nimport com.openai.client.okhttp.OpenAIOkHttpClient;\nimport java.time.Duration;\n\nOpenAIClient client = OpenAIOkHttpClient.builder()\n    .fromEnv()\n    \u002F\u002F If `maxIdleConnections` is set, then `keepAliveDuration` must be set, and vice versa.\n    .maxIdleConnections(10)\n    .keepAliveDuration(Duration.ofMinutes(2))\n    .build();\n```\n\nIf both options are unset, OkHttp's default connection pool settings are used.\n\n### HTTPS\n\n> [!NOTE]\n> Most applications should not call these methods, and instead use the system defaults. The defaults include\n> special optimizations that can be lost if the implementations are modified.\n\nTo configure how HTTPS connections are secured, configure the client using the `sslSocketFactory`, `trustManager`, and `hostnameVerifier` methods:\n\n```java\nimport com.openai.client.OpenAIClient;\nimport com.openai.client.okhttp.OpenAIOkHttpClient;\n\nOpenAIClient client = OpenAIOkHttpClient.builder()\n    .fromEnv()\n    \u002F\u002F If `sslSocketFactory` is set, then `trustManager` must be set, and vice versa.\n    .sslSocketFactory(yourSSLSocketFactory)\n    .trustManager(yourTrustManager)\n    .hostnameVerifier(yourHostnameVerifier)\n    .build();\n```\n\n### Custom HTTP client\n\nThe SDK consists of three artifacts:\n\n- `openai-java-core`\n  - Contains core SDK logic\n  - Does not depend on [OkHttp](https:\u002F\u002Fsquare.github.io\u002Fokhttp)\n  - Exposes [`OpenAIClient`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Fclient\u002FOpenAIClient.kt), [`OpenAIClientAsync`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Fclient\u002FOpenAIClientAsync.kt), [`OpenAIClientImpl`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Fclient\u002FOpenAIClientImpl.kt), and 
[`OpenAIClientAsyncImpl`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Fclient\u002FOpenAIClientAsyncImpl.kt), all of which can work with any HTTP client\n- `openai-java-client-okhttp`\n  - Depends on [OkHttp](https:\u002F\u002Fsquare.github.io\u002Fokhttp)\n  - Exposes [`OpenAIOkHttpClient`](openai-java-client-okhttp\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Fclient\u002Fokhttp\u002FOpenAIOkHttpClient.kt) and [`OpenAIOkHttpClientAsync`](openai-java-client-okhttp\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Fclient\u002Fokhttp\u002FOpenAIOkHttpClientAsync.kt), which provide a way to construct [`OpenAIClientImpl`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Fclient\u002FOpenAIClientImpl.kt) and [`OpenAIClientAsyncImpl`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Fclient\u002FOpenAIClientAsyncImpl.kt), respectively, using OkHttp\n- `openai-java`\n  - Depends on and exposes the APIs of both `openai-java-core` and `openai-java-client-okhttp`\n  - Does not have its own logic\n\nThis structure allows replacing the SDK's default HTTP client without pulling in unnecessary dependencies.\n\n#### Customized [`OkHttpClient`](https:\u002F\u002Fsquare.github.io\u002Fokhttp\u002F3.x\u002Fokhttp\u002Fokhttp3\u002FOkHttpClient.html)\n\n> [!TIP]\n> Try the available [network options](#network-options) before replacing the default client.\n\nTo use a customized `OkHttpClient`:\n\n1. Replace your [`openai-java` dependency](#installation) with `openai-java-core`\n2. Copy `openai-java-client-okhttp`'s [`OkHttpClient`](openai-java-client-okhttp\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Fclient\u002Fokhttp\u002FOkHttpClient.kt) class into your code and customize it\n3. 
Construct [`OpenAIClientImpl`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Fclient\u002FOpenAIClientImpl.kt) or [`OpenAIClientAsyncImpl`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Fclient\u002FOpenAIClientAsyncImpl.kt), similarly to [`OpenAIOkHttpClient`](openai-java-client-okhttp\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Fclient\u002Fokhttp\u002FOpenAIOkHttpClient.kt) or [`OpenAIOkHttpClientAsync`](openai-java-client-okhttp\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Fclient\u002Fokhttp\u002FOpenAIOkHttpClientAsync.kt), using your customized client\n\n### Completely custom HTTP client\n\nTo use a completely custom HTTP client:\n\n1. Replace your [`openai-java` dependency](#installation) with `openai-java-core`\n2. Write a class that implements the [`HttpClient`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Fcore\u002Fhttp\u002FHttpClient.kt) interface\n3. Construct [`OpenAIClientImpl`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Fclient\u002FOpenAIClientImpl.kt) or [`OpenAIClientAsyncImpl`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Fclient\u002FOpenAIClientAsyncImpl.kt), similarly to [`OpenAIOkHttpClient`](openai-java-client-okhttp\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Fclient\u002Fokhttp\u002FOpenAIOkHttpClient.kt) or [`OpenAIOkHttpClientAsync`](openai-java-client-okhttp\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Fclient\u002Fokhttp\u002FOpenAIOkHttpClientAsync.kt), using your new client class\n\n## Undocumented API functionality\n\nThe SDK is typed for convenient usage of the documented API. 
However, it also supports working with undocumented or not yet supported parts of the API.\n\n### Parameters\n\nTo set undocumented parameters, call the `putAdditionalHeader`, `putAdditionalQueryParam`, or `putAdditionalBodyProperty` methods on any `Params` class:\n\n```java\nimport com.openai.core.JsonValue;\nimport com.openai.models.chat.completions.ChatCompletionCreateParams;\n\nChatCompletionCreateParams params = ChatCompletionCreateParams.builder()\n    .putAdditionalHeader(\"Secret-Header\", \"42\")\n    .putAdditionalQueryParam(\"secret_query_param\", \"42\")\n    .putAdditionalBodyProperty(\"secretProperty\", JsonValue.from(\"42\"))\n    .build();\n```\n\nThese can be accessed on the built object later using the `_additionalHeaders()`, `_additionalQueryParams()`, and `_additionalBodyProperties()` methods.\n\nTo set undocumented parameters on _nested_ headers, query params, or body classes, call the `putAdditionalProperty` method on the nested class:\n\n```java\nimport com.openai.core.JsonValue;\nimport com.openai.models.chat.completions.ChatCompletionCreateParams;\n\nChatCompletionCreateParams params = ChatCompletionCreateParams.builder()\n    .responseFormat(ChatCompletionCreateParams.ResponseFormat.builder()\n        .putAdditionalProperty(\"secretProperty\", JsonValue.from(\"42\"))\n        .build())\n    .build();\n```\n\nThese properties can be accessed on the nested built object later using the `_additionalProperties()` method.\n\nTo set a documented parameter or property to an undocumented or not yet supported _value_, pass a [`JsonValue`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Fcore\u002FValues.kt) object to its setter:\n\n```java\nimport com.openai.core.JsonValue;\nimport com.openai.models.ChatModel;\nimport com.openai.models.chat.completions.ChatCompletionCreateParams;\n\nChatCompletionCreateParams params = ChatCompletionCreateParams.builder()\n    .messages(JsonValue.from(42))\n    .model(ChatModel.GPT_5_2)\n    
.build();\n```\n\nThe most straightforward way to create a [`JsonValue`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Fcore\u002FValues.kt) is using its `from(...)` method:\n\n```java\nimport com.openai.core.JsonValue;\nimport java.util.List;\nimport java.util.Map;\n\n\u002F\u002F Create primitive JSON values\nJsonValue nullValue = JsonValue.from(null);\nJsonValue booleanValue = JsonValue.from(true);\nJsonValue numberValue = JsonValue.from(42);\nJsonValue stringValue = JsonValue.from(\"Hello World!\");\n\n\u002F\u002F Create a JSON array value equivalent to `[\"Hello\", \"World\"]`\nJsonValue arrayValue = JsonValue.from(List.of(\n  \"Hello\", \"World\"\n));\n\n\u002F\u002F Create a JSON object value equivalent to `{ \"a\": 1, \"b\": 2 }`\nJsonValue objectValue = JsonValue.from(Map.of(\n  \"a\", 1,\n  \"b\", 2\n));\n\n\u002F\u002F Create an arbitrarily nested JSON equivalent to:\n\u002F\u002F {\n\u002F\u002F   \"a\": [1, 2],\n\u002F\u002F   \"b\": [3, 4]\n\u002F\u002F }\nJsonValue complexValue = JsonValue.from(Map.of(\n  \"a\", List.of(\n    1, 2\n  ),\n  \"b\", List.of(\n    3, 4\n  )\n));\n```\n\nNormally a `Builder` class's `build` method will throw [`IllegalStateException`](https:\u002F\u002Fdocs.oracle.com\u002Fjavase\u002F8\u002Fdocs\u002Fapi\u002Fjava\u002Flang\u002FIllegalStateException.html) if any required parameter or property is unset.\n\nTo forcibly omit a required parameter or property, pass [`JsonMissing`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Fcore\u002FValues.kt):\n\n```java\nimport com.openai.core.JsonMissing;\nimport com.openai.models.ChatModel;\nimport com.openai.models.chat.completions.ChatCompletionCreateParams;\n\nChatCompletionCreateParams params = ChatCompletionCreateParams.builder()\n    .model(ChatModel.GPT_5_4)\n    .messages(JsonMissing.of())\n    .build();\n```\n\n### Response properties\n\nTo access undocumented response properties, call the `_additionalProperties()` 
method:\n\n```java\nimport com.openai.core.JsonValue;\nimport java.util.Map;\n\nMap\u003CString, JsonValue> additionalProperties = client.chat().completions().create(params)._additionalProperties();\nJsonValue secretPropertyValue = additionalProperties.get(\"secretProperty\");\n\nString result = secretPropertyValue.accept(new JsonValue.Visitor\u003CString>() {\n    @Override\n    public String visitNull() {\n        return \"It's null!\";\n    }\n\n    @Override\n    public String visitBoolean(boolean value) {\n        return \"It's a boolean!\";\n    }\n\n    @Override\n    public String visitNumber(Number value) {\n        return \"It's a number!\";\n    }\n\n    \u002F\u002F Other methods include `visitMissing`, `visitString`, `visitArray`, and `visitObject`\n    \u002F\u002F The default implementation of each unimplemented method delegates to `visitDefault`, which throws by default, but can also be overridden\n});\n```\n\nTo access a property's raw JSON value, which may be undocumented, call its `_` prefixed method:\n\n```java\nimport com.openai.core.JsonField;\nimport com.openai.models.chat.completions.ChatCompletionMessageParam;\nimport java.util.List;\nimport java.util.Optional;\n\nJsonField\u003CList\u003CChatCompletionMessageParam>> messages = client.chat().completions().create(params)._messages();\n\nif (messages.isMissing()) {\n  \u002F\u002F The property is absent from the JSON response\n} else if (messages.isNull()) {\n  \u002F\u002F The property was set to literal null\n} else {\n  \u002F\u002F Check if value was provided as a string\n  \u002F\u002F Other methods include `asNumber()`, `asBoolean()`, etc.\n  Optional\u003CString> jsonString = messages.asString();\n\n  \u002F\u002F Try to deserialize into a custom type\n  MyClass myObject = messages.asUnknown().orElseThrow().convert(MyClass.class);\n}\n```\n\n### Response validation\n\nIn rare cases, the API may return a response that doesn't match the expected type. 
For example, the SDK may expect a property to contain a `String`, but the API could return something else.\n\nBy default, the SDK will not throw an exception in this case. It will throw [`OpenAIInvalidDataException`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Ferrors\u002FOpenAIInvalidDataException.kt) only if you directly access the property.\n\nIf you would prefer to check that the response is completely well-typed upfront, then either call `validate()`:\n\n```java\nimport com.openai.models.chat.completions.ChatCompletion;\n\nChatCompletion chatCompletion = client.chat().completions().create(params).validate();\n```\n\nOr configure the method call to validate the response using the `responseValidation` method:\n\n```java\nimport com.openai.core.RequestOptions;\nimport com.openai.models.chat.completions.ChatCompletion;\n\nChatCompletion chatCompletion = client.chat().completions().create(\n  params, RequestOptions.builder().responseValidation(true).build()\n);\n```\n\nOr configure the default for all method calls at the client level:\n\n```java\nimport com.openai.client.OpenAIClient;\nimport com.openai.client.okhttp.OpenAIOkHttpClient;\n\nOpenAIClient client = OpenAIOkHttpClient.builder()\n    .fromEnv()\n    .responseValidation(true)\n    .build();\n```\n\n## FAQ\n\n### Why don't you use plain `enum` classes?\n\nJava `enum` classes are not trivially [forwards compatible](https:\u002F\u002Fwww.stainless.com\u002Fblog\u002Fmaking-java-enums-forwards-compatible). 
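To see why, consider a hypothetical sketch (not SDK code) of what happens when a plain `enum` meets a value that didn't exist when the client was compiled:

```java
public class EnumDemo {
    // Hypothetical enum standing in for an API value list frozen at compile time.
    enum Model { GPT_A, GPT_B }

    // Simulates deserializing a server-sent string into the enum.
    static String parse(String apiValue) {
        try {
            return Model.valueOf(apiValue).name();
        } catch (IllegalArgumentException e) {
            // `valueOf` throws for any value added to the API after this
            // client version shipped; a plain enum has no built-in fallback.
            return "unknown";
        }
    }

    public static void main(String[] args) {
        System.out.println(parse("GPT_A")); // known value
        System.out.println(parse("GPT_C")); // newer API value: throws without the catch
    }
}
```

A string-backed class can instead carry the raw value through and let callers decide how to handle values it doesn't recognize.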
Using them in the SDK could cause runtime exceptions if the API is updated to respond with a new enum value.\n\n### Why do you represent fields using `JsonField\u003CT>` instead of just plain `T`?\n\nUsing `JsonField\u003CT>` enables a few features:\n\n- Allowing usage of [undocumented API functionality](#undocumented-api-functionality)\n- Lazily [validating the API response against the expected shape](#response-validation)\n- Representing absent vs explicitly null values\n\n### Why don't you use [`data` classes](https:\u002F\u002Fkotlinlang.org\u002Fdocs\u002Fdata-classes.html)?\n\nIt is not [backwards compatible to add new fields to a data class](https:\u002F\u002Fkotlinlang.org\u002Fdocs\u002Fapi-guidelines-backward-compatibility.html#avoid-using-data-classes-in-your-api) and we don't want to introduce a breaking change every time we add a field to a class.\n\n### Why don't you use checked exceptions?\n\nChecked exceptions are widely considered a mistake in the Java programming language. In fact, they were omitted from Kotlin for this reason.\n\nChecked exceptions:\n\n- Are verbose to handle\n- Encourage error handling at the wrong level of abstraction, where nothing can be done about the error\n- Are tedious to propagate due to the [function coloring problem](https:\u002F\u002Fjournal.stuffwithstuff.com\u002F2015\u002F02\u002F01\u002Fwhat-color-is-your-function)\n- Don't play well with lambdas (also due to the function coloring problem)\n\n## Semantic versioning\n\nThis package generally follows [SemVer](https:\u002F\u002Fsemver.org\u002Fspec\u002Fv2.0.0.html) conventions, though certain backwards-incompatible changes may be released as minor versions:\n\n1. Changes to library internals which are technically public but not intended or documented for external use. _(Please open a GitHub issue to let us know if you are relying on such internals.)_\n2. 
Changes that we do not expect to impact the vast majority of users in practice.\n\nWe take backwards-compatibility seriously and work hard to ensure you can rely on a smooth upgrade experience.\n\nWe are keen for your feedback; please open an [issue](https:\u002F\u002Fwww.github.com\u002Fopenai\u002Fopenai-java\u002Fissues) with questions, bugs, or suggestions.\n","# OpenAI Java API Library\n\n\u003C!-- x-release-please-start-version -->\n\n[![Maven Central](https:\u002F\u002Fimg.shields.io\u002Fmaven-central\u002Fv\u002Fcom.openai\u002Fopenai-java)](https:\u002F\u002Fcentral.sonatype.com\u002Fartifact\u002Fcom.openai\u002Fopenai-java\u002F4.32.0)\n[![javadoc](https:\u002F\u002Fjavadoc.io\u002Fbadge2\u002Fcom.openai\u002Fopenai-java\u002F4.32.0\u002Fjavadoc.svg)](https:\u002F\u002Fjavadoc.io\u002Fdoc\u002Fcom.openai\u002Fopenai-java\u002F4.32.0)\n\n\u003C!-- x-release-please-end -->\n\nThe OpenAI Java SDK provides convenient access to the [OpenAI REST API](https:\u002F\u002Fplatform.openai.com\u002Fdocs) from applications written in Java.\n\n\u003C!-- x-release-please-start-version -->\n\nThe REST API documentation can be found on [platform.openai.com](https:\u002F\u002Fplatform.openai.com\u002Fdocs). Javadocs are available on [javadoc.io](https:\u002F\u002Fjavadoc.io\u002Fdoc\u002Fcom.openai\u002Fopenai-java\u002F4.32.0).\n\n\u003C!-- x-release-please-end -->\n\n## Installation\n\n\u003C!-- x-release-please-start-version -->\n\n[_If you're using Spring Boot, then try `openai-java-spring-boot-starter`!_](#spring-boot)\n\n### Gradle\n\n```kotlin\nimplementation(\"com.openai:openai-java:4.32.0\")\n```\n\n### Maven\n\n```xml\n\u003Cdependency>\n  \u003CgroupId>com.openai\u003C\u002FgroupId>\n  \u003CartifactId>openai-java\u003C\u002FartifactId>\n  \u003Cversion>4.32.0\u003C\u002Fversion>\n\u003C\u002Fdependency>\n```\n\n\u003C!-- x-release-please-end -->\n\n## Requirements\n\nThis library requires Java 8 or later.\n\n## Usage\n\n> [!TIP]\n> See the [`openai-java-example`](openai-java-example\u002Fsrc\u002Fmain\u002Fjava\u002Fcom\u002Fopenai\u002Fexample) directory for complete and runnable examples!\n\nThe primary API for interacting with OpenAI models is the [Responses 
API](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fresponses)。您可以通过以下代码从模型生成文本。\n\n```java\nimport com.openai.client.OpenAIClient;\nimport com.openai.client.okhttp.OpenAIOkHttpClient;\nimport com.openai.models.ChatModel;\nimport com.openai.models.responses.Response;\nimport com.openai.models.responses.ResponseCreateParams;\n\n\u002F\u002F 使用 `OPENAI_API_KEY`、`OPENAI_ORG_ID` 和 `OPENAI_PROJECT_ID` 环境变量进行配置\nOpenAIClient client = OpenAIOkHttpClient.fromEnv();\n\nResponseCreateParams params = ResponseCreateParams.builder()\n        .input(\"Say this is a test\")\n        .model(ChatModel.GPT_5_2)\n        .build();\nResponse response = client.responses().create(params);\n```\n\n之前用于生成文本的标准（并将继续支持）是 [Chat Completions API](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fchat)。您也可以使用该 API 通过以下代码从模型生成文本。\n\n```java\nimport com.openai.client.OpenAIClient;\nimport com.openai.client.okhttp.OpenAIOkHttpClient;\nimport com.openai.models.ChatModel;\nimport com.openai.models.chat.completions.ChatCompletion;\nimport com.openai.models.chat.completions.ChatCompletionCreateParams;\n\n\u002F\u002F 使用 `openai.apiKey`、`openai.orgId`、`openai.projectId`、`openai.webhookSecret` 和 `openai.baseUrl` 系统属性进行配置\n\u002F\u002F 或者使用 `OPENAI_API_KEY`、`OPENAI_ORG_ID`、`OPENAI_PROJECT_ID`、`OPENAI_WEBHOOK_SECRET` 和 `OPENAI_BASE_URL` 环境变量进行配置\nOpenAIClient client = OpenAIOkHttpClient.fromEnv();\n\nChatCompletionCreateParams params = ChatCompletionCreateParams.builder()\n    .addUserMessage(\"Say this is a test\")\n    .model(ChatModel.GPT_5_2)\n    .build();\nChatCompletion chatCompletion = client.chat().completions().create(params);\n```\n\n## 客户端配置\n\n您可以使用系统属性或环境变量来配置客户端：\n\n```java\nimport com.openai.client.OpenAIClient;\nimport com.openai.client.okhttp.OpenAIOkHttpClient;\n\n\u002F\u002F 使用 `openai.apiKey`、`openai.orgId`、`openai.projectId`、`openai.webhookSecret` 和 `openai.baseUrl` 系统属性进行配置\n\u002F\u002F 或者使用 
`OPENAI_API_KEY`、`OPENAI_ORG_ID`、`OPENAI_PROJECT_ID`、`OPENAI_WEBHOOK_SECRET` 和 `OPENAI_BASE_URL` 环境变量进行配置\nOpenAIClient client = OpenAIOkHttpClient.fromEnv();\n```\n\n或者手动配置：\n\n```java\nimport com.openai.client.OpenAIClient;\nimport com.openai.client.okhttp.OpenAIOkHttpClient;\n\nOpenAIClient client = OpenAIOkHttpClient.builder()\n    .apiKey(\"My API Key\")\n    .build();\n```\n\n也可以结合两种方式：\n\n```java\nimport com.openai.client.OpenAIClient;\nimport com.openai.client.okhttp.OpenAIOkHttpClient;\n\nOpenAIClient client = OpenAIOkHttpClient.builder()\n    \u002F\u002F 使用 `openai.apiKey`、`openai.orgId`、`openai.projectId`、`openai.webhookSecret` 和 `openai.baseUrl` 系统属性进行配置\n    \u002F\u002F 或者使用 `OPENAI_API_KEY`、`OPENAI_ORG_ID`、`OPENAI_PROJECT_ID`、`OPENAI_WEBHOOK_SECRET` 和 `OPENAI_BASE_URL` 环境变量进行配置\n    .fromEnv()\n    .apiKey(\"My API Key\")\n    .build();\n```\n\n以下是可用选项的表格：\n\n| 设置方法          | 系统属性        | 环境变量    | 必需 | 默认值                 |\n| --------------- | ---------------------- | ----------------------- | -------- | ----------------------------- |\n| `apiKey`        | `openai.apiKey`        | `OPENAI_API_KEY`        | 是     | -                             |\n| `organization`  | `openai.orgId`         | `OPENAI_ORG_ID`         | 否     | -                             |\n| `project`       | `openai.projectId`     | `OPENAI_PROJECT_ID`     | 否     | -                             |\n| `webhookSecret` | `openai.webhookSecret` | `OPENAI_WEBHOOK_SECRET` | 否     | -                             |\n| `baseUrl`       | `openai.baseUrl`       | `OPENAI_BASE_URL`       | 是     | `\"https:\u002F\u002Fapi.openai.com\u002Fv1\"` |\n\n系统属性优先于环境变量。\n\n> [!TIP]\n> 不要在同一个应用程序中创建多个客户端。每个客户端都有连接池和线程池，共享这些资源可以提高效率。\n\n### 修改配置\n\n如果需要临时使用修改后的客户端配置，同时复用原有的连接池和线程池，可以在任何客户端或服务上调用 `withOptions()` 方法：\n\n```java\nimport com.openai.client.OpenAIClient;\n\nOpenAIClient clientWithOptions = client.withOptions(optionsBuilder -> {\n    
optionsBuilder.baseUrl(\"https:\u002F\u002Fexample.com\");\n    optionsBuilder.maxRetries(42);\n});\n```\n\n`withOptions()` 方法不会影响原始客户端或服务。\n\n### 工作负载身份验证\n\n工作负载身份验证允许在云环境（Kubernetes、Azure、GCP）中运行的应用程序使用由云提供商颁发的短期令牌进行身份验证，而不是长期有效的 API 密钥。\n\n#### 基本设置\n\n```java\nimport com.openai.auth.*;\nimport com.openai.client.OpenAIClient;\nimport com.openai.client.okhttp.OpenAIOkHttpClient;\n\nSubjectTokenProvider provider = K8sServiceAccountTokenProvider.builder().build();\n\nWorkloadIdentity workloadIdentity = WorkloadIdentity.builder()\n    .clientId(\"your-client-id\")\n    .identityProviderId(\"your-identity-provider-id\")\n    .serviceAccountId(\"your-service-account-id\")\n    .provider(provider)\n    .build();\n\nOpenAIClient client = OpenAIOkHttpClient.builder()\n    .workloadIdentity(workloadIdentity)\n    .build();\n```\n\n#### Kubernetes 服务账户令牌提供程序\n\n```java\n\u002F\u002F 使用默认令牌路径 (\u002Fvar\u002Frun\u002Fsecrets\u002Fkubernetes.io\u002Fserviceaccount\u002Ftoken)\nSubjectTokenProvider provider = K8sServiceAccountTokenProvider.builder().build();\n```\n\n```java\n\u002F\u002F 或者指定自定义令牌路径\nSubjectTokenProvider provider = K8sServiceAccountTokenProvider.builder()\n    .tokenPath(\"\u002Fcustom\u002Fpath\u002Fto\u002Ftoken\")\n    .build();\n```\n\n#### Azure 托管标识提供程序\n\n```java\nimport com.openai.auth.*;\n\n\u002F\u002F 使用默认值（资源：https:\u002F\u002Fmanagement.azure.com\u002F，API 版本：2018-02-01）\nSubjectTokenProvider provider = AzureManagedIdentityTokenProvider.builder()\n    .build();\n```\n\n```java\nimport com.openai.auth.*;\n\n\u002F\u002F 或者自定义\nSubjectTokenProvider provider = AzureManagedIdentityTokenProvider.builder()\n    .resource(\"https:\u002F\u002Fmanagement.azure.com\u002F\")\n    .apiVersion(\"2018-02-01\")\n    .build();\n```\n\n#### GCP ID 令牌提供程序\n\n```java\nimport com.openai.auth.*;\n\nSubjectTokenProvider provider = GcpIdTokenProvider.builder()\n    .build();\n```\n\n```java\nimport com.openai.auth.*;\n\n\u002F\u002F 或者自定义受众\nSubjectTokenProvider 
provider = GcpIdTokenProvider.builder()\n    .audience(\"https:\u002F\u002Fapi.openai.com\u002Fv1\")\n    .build();\n```\n\n## 请求与响应\n\n要向 OpenAI API 发送请求，需要构建某个 `Params` 类的实例，并将其传递给相应的客户端方法。当收到响应时，它将被反序列化为 Java 类的一个实例。\n\n例如，`client.chat().completions().create(...)` 应该使用 `ChatCompletionCreateParams` 的实例调用，并返回一个 `ChatCompletion` 的实例。\n\n## 不可变性\n\nSDK 中的每个类都配有对应的 [构建器](https:\u002F\u002Fblogs.oracle.com\u002Fjavamagazine\u002Fpost\u002Fexploring-joshua-blochs-builder-design-pattern-in-java) 或工厂方法来创建其实例。\n\n每个类一旦构建完成就是 [不可变的](https:\u002F\u002Fdocs.oracle.com\u002Fjavase\u002Ftutorial\u002Fessential\u002Fconcurrency\u002Fimmutable.html)。如果类有对应的构建器，则会提供一个 `toBuilder()` 方法，可用于将其转换回构建器，从而创建修改后的副本。\n\n由于每个类都是不可变的，因此对构建器的修改绝不会影响已经构建好的类实例。\n\n## 异步执行\n\n默认客户端是同步的。要切换到异步执行，可以调用 `async()` 方法：\n\n```java\nimport com.openai.client.OpenAIClient;\nimport com.openai.client.okhttp.OpenAIOkHttpClient;\nimport com.openai.models.ChatModel;\nimport com.openai.models.chat.completions.ChatCompletion;\nimport com.openai.models.chat.completions.ChatCompletionCreateParams;\nimport java.util.concurrent.CompletableFuture;\n\n\u002F\u002F 使用系统属性 openai.apiKey、openai.orgId、openai.projectId、openai.webhookSecret 和 openai.baseUrl 进行配置\n\u002F\u002F 或者使用环境变量 OPENAI_API_KEY、OPENAI_ORG_ID、OPENAI_PROJECT_ID、OPENAI_WEBHOOK_SECRET 和 OPENAI_BASE_URL 进行配置\nOpenAIClient client = OpenAIOkHttpClient.fromEnv();\n\nChatCompletionCreateParams params = ChatCompletionCreateParams.builder()\n    .addUserMessage(\"Say this is a test\")\n    .model(ChatModel.GPT_5_2)\n    .build();\nCompletableFuture\u003CChatCompletion> chatCompletion = client.async().chat().completions().create(params);\n```\n\n或者从一开始就创建异步客户端：\n\n```java\nimport com.openai.client.OpenAIClientAsync;\nimport com.openai.client.okhttp.OpenAIOkHttpClientAsync;\nimport com.openai.models.ChatModel;\nimport com.openai.models.chat.completions.ChatCompletion;\nimport com.openai.models.chat.completions.ChatCompletionCreateParams;\nimport 
java.util.concurrent.CompletableFuture;\n\n\u002F\u002F 使用系统属性 openai.apiKey、openai.orgId、openai.projectId、openai.webhookSecret 和 openai.baseUrl 进行配置\n\u002F\u002F 或者使用环境变量 OPENAI_API_KEY、OPENAI_ORG_ID、OPENAI_PROJECT_ID、OPENAI_WEBHOOK_SECRET 和 OPENAI_BASE_URL 进行配置\nOpenAIClientAsync client = OpenAIOkHttpClientAsync.fromEnv();\n\nChatCompletionCreateParams params = ChatCompletionCreateParams.builder()\n    .addUserMessage(\"Say this is a test\")\n    .model(ChatModel.GPT_5_2)\n    .build();\nCompletableFuture\u003CChatCompletion> chatCompletion = client.chat().completions().create(params);\n```\n\n异步客户端支持与同步客户端相同的选项，只是大多数方法会返回 `CompletableFuture` 对象。\n\n## 流式传输\n\nSDK 定义了返回响应“分块”流的方法，每个分块在到达时即可单独处理，而无需等待整个响应完成。流式方法通常对应于 [SSE](https:\u002F\u002Fdeveloper.mozilla.org\u002Fen-US\u002Fdocs\u002FWeb\u002FAPI\u002FServer-sent_events) 或 [JSONL](https:\u002F\u002Fjsonlines.org) 响应。\n\n其中一些方法可能同时提供流式和非流式两种变体，但只要存在流式方法，其名称中就会带有 `Streaming` 后缀，即使没有对应的非流式版本也是如此。\n\n这些流式方法会为同步客户端返回 [`StreamResponse`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Fcore\u002Fhttp\u002FStreamResponse.kt)：\n\n```java\nimport com.openai.core.http.StreamResponse;\nimport com.openai.models.chat.completions.ChatCompletionChunk;\n\ntry (StreamResponse\u003CChatCompletionChunk> streamResponse = client.chat().completions().createStreaming(params)) {\n    streamResponse.stream().forEach(chunk -> {\n        System.out.println(chunk);\n    });\n    System.out.println(\"没有更多分块了！\");\n}\n```\n\n而对于异步客户端，则会返回 [`AsyncStreamResponse`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Fcore\u002Fhttp\u002FAsyncStreamResponse.kt)：\n\n```java\nimport com.openai.core.http.AsyncStreamResponse;\nimport com.openai.models.chat.completions.ChatCompletionChunk;\nimport java.util.Optional;\n\nclient.async().chat().completions().createStreaming(params).subscribe(chunk -> {\n    System.out.println(chunk);\n});\n\n\u002F\u002F 
如果需要处理错误或流的结束\nclient.async().chat().completions().createStreaming(params).subscribe(new AsyncStreamResponse.Handler\u003CChatCompletionChunk>() {\n    @Override\n    public void onNext(ChatCompletionChunk chunk) {\n        System.out.println(chunk);\n    }\n\n    @Override\n    public void onComplete(Optional\u003CThrowable> error) {\n        if (error.isPresent()) {\n            System.out.println(\"出错了！\");\n            throw new RuntimeException(error.get());\n        } else {\n            System.out.println(\"没有更多分块了！\");\n        }\n    }\n});\n\n\u002F\u002F 或者使用 Future\nclient.async().chat().completions().createStreaming(params)\n    .subscribe(chunk -> {\n        System.out.println(chunk);\n    })\n    .onCompleteFuture()\n    .whenComplete((unused, error) -> {\n        if (error != null) {\n            System.out.println(\"出错了！\");\n            throw new RuntimeException(error);\n        } else {\n            System.out.println(\"没有更多分块了！\");\n        }\n    });\n```\n\n异步流式传输使用一个专用于每个客户端的缓存线程池 [`Executor`](https:\u002F\u002Fdocs.oracle.com\u002Fjavase\u002F8\u002Fdocs\u002Fapi\u002Fjava\u002Futil\u002Fconcurrent\u002FExecutor.html)，以便在不阻塞当前线程的情况下进行流式传输。此默认配置适用于大多数场景。\n\n若需使用其他 `Executor`，可通过 `executor` 参数配置订阅：\n\n```java\nimport java.util.concurrent.Executor;\nimport java.util.concurrent.Executors;\n\nExecutor executor = Executors.newFixedThreadPool(4);\nclient.async().chat().completions().createStreaming(params).subscribe(\n    chunk -> System.out.println(chunk), executor\n);\n```\n\n或者也可以通过 `streamHandlerExecutor` 方法全局配置客户端：\n\n```java\nimport com.openai.client.OpenAIClient;\nimport com.openai.client.okhttp.OpenAIOkHttpClient;\nimport java.util.concurrent.Executors;\n\nOpenAIClient client = OpenAIOkHttpClient.builder()\n    .fromEnv()\n    .streamHandlerExecutor(Executors.newFixedThreadPool(4))\n    .build();\n```\n\n### 流式处理辅助工具\n\nSDK 
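提供了一些辅助工具来简化流式响应的处理。

其核心思路是：在正常消费每个分块的同时，顺带把分块累积起来。下面用纯 JDK 的 `Stream.peek()` 给出一个与 SDK 无关的简化示意（`PeekAccumulateSketch` 及其中数据均为为演示而虚构）：

```java
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class PeekAccumulateSketch {
    // 边消费分块边累积：peek() 在流管道中“旁路”记录每个元素，
    // 终端操作（这里是 joining）照常完成对分块的正常处理。
    public static String consumeAndAccumulate(Stream<String> chunks, List<String> sink) {
        return chunks
                .peek(sink::add)                 // 对应 accumulator::accumulate 的位置
                .collect(Collectors.joining());  // 对应对每个分块的“正常处理”
    }

    public static void main(String[] args) {
        List<String> sink = new ArrayList<>();
        String full = consumeAndAccumulate(Stream.of("你好", "，", "世界"), sink);
        System.out.println(full);        // 你好，世界
        System.out.println(sink.size()); // 3
    }
}
```

基于同样的思路，SDK 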
为流式聊天补全提供了便捷工具。一个\n[`ChatCompletionAccumulator`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Fhelpers\u002FChatCompletionAccumulator.kt)\n可以在处理响应中的聊天补全分块时，记录这些分块并累积成一个与非流式 API 所返回的\n[`ChatCompletion`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Fmodels\u002Fchat\u002Fcompletions\u002FChatCompletion.kt)\n对象类似的对象。\n\n对于同步响应，在流管道中添加一个\n[`Stream.peek()`](https:\u002F\u002Fdocs.oracle.com\u002Fjavase\u002F8\u002Fdocs\u002Fapi\u002Fjava\u002Futil\u002Fstream\u002FStream.html#peek-java.util.function.Consumer-)\n调用来累积每个分块：\n\n```java\nimport com.openai.core.http.StreamResponse;\nimport com.openai.helpers.ChatCompletionAccumulator;\nimport com.openai.models.chat.completions.ChatCompletion;\nimport com.openai.models.chat.completions.ChatCompletionChunk;\n\nChatCompletionAccumulator chatCompletionAccumulator = ChatCompletionAccumulator.create();\n\ntry (StreamResponse\u003CChatCompletionChunk> streamResponse =\n        client.chat().completions().createStreaming(createParams)) {\n    streamResponse.stream()\n            .peek(chatCompletionAccumulator::accumulate)\n            .flatMap(completion -> completion.choices().stream())\n            .flatMap(choice -> choice.delta().content().stream())\n            .forEach(System.out::print);\n}\n\nChatCompletion chatCompletion = chatCompletionAccumulator.chatCompletion();\n```\n\n对于异步响应，将 `ChatCompletionAccumulator` 添加到 `subscribe()` 调用中：\n\n```java\nimport com.openai.helpers.ChatCompletionAccumulator;\nimport com.openai.models.chat.completions.ChatCompletion;\n\nChatCompletionAccumulator chatCompletionAccumulator = ChatCompletionAccumulator.create();\n\nclient.chat()\n        .completions()\n        .createStreaming(createParams)\n        .subscribe(chunk -> chatCompletionAccumulator.accumulate(chunk).choices().stream()\n                .flatMap(choice -> choice.delta().content().stream())\n                .forEach(System.out::print))\n        .onCompleteFuture()\n        
.join();\n\nChatCompletion chatCompletion = chatCompletionAccumulator.chatCompletion();\n```\n\nSDK 还为流式响应提供了便捷工具。一个\n[`ResponseAccumulator`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Fhelpers\u002FResponseAccumulator.kt)\n可以在处理响应事件时记录这些事件，并累积成一个与非流式 API 所返回的\n[`Response`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Fmodels\u002Fresponses\u002FResponse.kt)\n对象类似的对象。\n\n对于同步响应，在流管道中添加一个\n[`Stream.peek()`](https:\u002F\u002Fdocs.oracle.com\u002Fjavase\u002F8\u002Fdocs\u002Fapi\u002Fjava\u002Futil\u002Fstream\u002FStream.html#peek-java.util.function.Consumer-)\n调用来累积每个事件：\n\n```java\nimport com.openai.core.http.StreamResponse;\nimport com.openai.helpers.ResponseAccumulator;\nimport com.openai.models.responses.Response;\nimport com.openai.models.responses.ResponseStreamEvent;\n\nResponseAccumulator responseAccumulator = ResponseAccumulator.create();\n\ntry (StreamResponse\u003CResponseStreamEvent> streamResponse =\n        client.responses().createStreaming(createParams)) {\n    streamResponse.stream()\n            .peek(responseAccumulator::accumulate)\n            .flatMap(event -> event.outputTextDelta().stream())\n            .forEach(textEvent -> System.out.print(textEvent.delta()));\n}\n\nResponse response = responseAccumulator.response();\n```\n\n对于异步响应，将 `ResponseAccumulator` 添加到 `subscribe()` 调用中：\n\n```java\nimport com.openai.helpers.ResponseAccumulator;\nimport com.openai.models.responses.Response;\n\nResponseAccumulator responseAccumulator = ResponseAccumulator.create();\n\nclient.responses()\n        .createStreaming(createParams)\n        .subscribe(event -> responseAccumulator.accumulate(event)\n                .outputTextDelta().ifPresent(textEvent -> System.out.print(textEvent.delta())))\n        .onCompleteFuture()\n        .join();\n\nResponse response = responseAccumulator.response();\n```\n\n## 使用 JSON 模式的结构化输出\n\nOpenAI 的 
[结构化输出](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Fstructured-outputs?api-mode=chat) 功能可确保模型始终生成符合所提供 [JSON 模式](https:\u002F\u002Fjson-schema.org\u002Foverview\u002Fwhat-is-jsonschema) 的响应。\n\n可以通过创建一个 [`ResponseFormatJsonSchema`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Fmodels\u002FResponseFormatJsonSchema.kt) 并将其设置到输入参数中来定义 JSON 模式。然而，为了更方便起见，也可以直接从任意 Java 类的结构自动生成 JSON 模式。随后，响应中的 JSON 内容将自动转换为该 Java 类的实例。使用任意 Java 类实现结构化输出的完整示例，请参阅 [`StructuredOutputsExample`](openai-java-example\u002Fsrc\u002Fmain\u002Fjava\u002Fcom\u002Fopenai\u002Fexample\u002FStructuredOutputsExample.java)。\n\nJava 类可以包含声明为其他类实例的字段，并且可以使用集合（有关详细信息，请参阅[定义 JSON 模式属性](#defining-json-schema-properties)）：\n\n```java\nclass Person {\n    public String name;\n    public int birthYear;\n}\n\nclass Book {\n    public String title;\n    public Person author;\n    public int publicationYear;\n}\n\nclass BookList {\n    public List\u003CBook> books;\n}\n```\n\n在构建参数时，将顶层类——本例中为 `BookList`——传递给 `responseFormat(Class\u003CT>)`，然后从响应中生成的消息内容中访问 `BookList` 的实例：\n\n```java\nimport com.openai.models.ChatModel;\nimport com.openai.models.chat.completions.ChatCompletionCreateParams;\nimport com.openai.models.chat.completions.StructuredChatCompletionCreateParams;\n\nStructuredChatCompletionCreateParams\u003CBookList> params = ChatCompletionCreateParams.builder()\n        .addUserMessage(\"列出一些著名的二十世纪末小说。\")\n        .model(ChatModel.GPT_5_2)\n        .responseFormat(BookList.class)\n        .build();\n\nclient.chat().completions().create(params).choices().stream()\n        .flatMap(choice -> choice.message().content().stream())\n        .flatMap(bookList -> bookList.books.stream())\n        .forEach(book -> System.out.println(book.title + \" 作者：\" + book.author.name));\n```\n\n您可以从 
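任意一个构建器入口开始构建这些参数。

作为补充说明，下面用纯 JDK 的反射 API 粗略示意“从 Java 类的 `public` 字段推导 JSON 模式属性名”这一思路。这只是概念演示，并非 SDK 的真实推导逻辑；`SchemaSketch` 为虚构名称，`Person` 对应正文示例：

```java
import java.lang.reflect.Field;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class SchemaSketch {
    // 与正文示例对应的简单数据类：public 字段会成为模式属性
    static class Person {
        public String name;
        public int birthYear;
    }

    // 示意：收集类的 public 字段名，作为 JSON 模式的属性名
    //（仅为概念演示，并非 SDK 的真实实现）
    public static List<String> propertyNames(Class<?> cls) {
        return Arrays.stream(cls.getFields())
                .map(Field::getName)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> props = propertyNames(Person.class);
        System.out.println(props.contains("name") && props.contains("birthYear")); // true
    }
}
```

具体而言，您可以从 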
[`ChatCompletionCreateParams.Builder`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Fmodels\u002Fchat\u002Fcompletions\u002FChatCompletionCreateParams.kt) 或 [`StructuredChatCompletionCreateParams.Builder`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Fmodels\u002Fchat\u002Fcompletions\u002FStructuredChatCompletionCreateParams.kt) 的实例开始构建参数。如果您从前者开始（这种方式代码更为简洁），当调用 `ChatCompletionCreateParams.Builder.responseFormat(Class\u003CT>)` 时，构建器类型会自动切换为后者。\n\n如果类中的某个字段是可选的，且不需要指定值，可以使用 [`java.util.Optional`](https:\u002F\u002Fdocs.oracle.com\u002Fjavase\u002F8\u002Fdocs\u002Fapi\u002Fjava\u002Futil\u002FOptional.html) 类来表示。由 AI 模型决定是否为该字段提供值，或者将其留空。\n\n```java\nimport java.util.Optional;\n\nclass Book {\n    public String title;\n    public Person author;\n    public int publicationYear;\n    public Optional\u003CString> isbn;\n}\n```\n\n字段的泛型类型信息会保留在类的元数据中，但在其他作用域中则会发生“泛型类型擦除”。例如，虽然可以从具有 `List\u003CBook>` 类型的 `BookList.books` 字段推导出描述书籍数组的 JSON 模式，但无法从相同类型的局部变量推导出有效的 JSON 模式，因此以下代码将无法正常工作：\n\n```java\nList\u003CBook> books = new ArrayList\u003C>();\n\nStructuredChatCompletionCreateParams\u003CList\u003CBook>> params = ChatCompletionCreateParams.builder()\n        .responseFormat(books.getClass())\n        \u002F\u002F ...\n        .build();\n```\n\n如果在将 JSON 响应转换为 Java 类实例时发生错误，错误消息中将包含原始的 JSON 响应，以帮助诊断问题。例如，如果响应被截断，JSON 数据就会不完整，从而无法转换为类实例。如果您的 JSON 响应可能包含敏感信息，请避免直接记录它，或确保从错误消息中删除任何敏感细节。\n\n### 本地 JSON 模式验证\n\n结构化输出支持 JSON Schema 语言的一个[子集](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Fstructured-outputs#supported-schemas)。系统会根据类自动生成与该子集兼容的模式。然而，由于类本身的固有结构，生成的模式仍可能违反某些 OpenAI 模式的限制，例如超过最大嵌套深度或使用不受支持的数据类型。\n\n为便于合规性检查，`responseFormat(Class\u003CT>)` 方法会对从指定类推导出的模式进行验证。此验证可确保所有限制均得到遵守。如果检测到任何问题，将抛出异常，并提供详细的失败原因说明。\n\n- **本地验证**：验证过程在本地进行，即不会向远程 AI 模型发送请求。如果模式通过了本地验证，则很可能也能通过远程验证。\n- **远程验证**：远程 AI 模型会在接收到请求中的 JSON 模式时自行进行验证。\n- **版本兼容性**：有时会出现本地验证失败而远程验证成功的情况。这可能是由于 SDK 版本较旧，未能反映远程 AI 模型所执行的最新限制。\n- 
**禁用本地验证**：如果您遇到兼容性问题并希望绕过本地验证，可以在构建参数时将 [`JsonSchemaLocalValidation.NO`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Fcore\u002FJsonSchemaLocalValidation.kt) 传递给 `responseFormat(Class\u003CT>, JsonSchemaLocalValidation)` 方法。（该参数的默认值为 `JsonSchemaLocalValidation.YES`。）\n\n```java\nimport com.openai.core.JsonSchemaLocalValidation;\nimport com.openai.models.ChatModel;\nimport com.openai.models.chat.completions.ChatCompletionCreateParams;\nimport com.openai.models.chat.completions.StructuredChatCompletionCreateParams;\n\nStructuredChatCompletionCreateParams\u003CBookList> params = ChatCompletionCreateParams.builder()\n        .addUserMessage(\"列出一些著名的二十世纪末小说。\")\n        .model(ChatModel.GPT_5_2)\n        .responseFormat(BookList.class, JsonSchemaLocalValidation.NO)\n        .build();\n```\n\n遵循这些指南，您可以确保您的结构化输出符合必要的模式要求，并最大限度地降低远程验证失败的风险。\n\n### 与 Responses API 的用法\n\n_Structured Outputs_ 也支持 Responses API。使用方法与前面所述相同，只是 Responses API 在某些方面与 Chat Completions API 略有不同。构建参数时，将顶层类传递给 `text(Class\u003CT>)`，然后在响应中生成的消息内容中访问该类的实例。\n\n你可以从 [`ResponseCreateParams.Builder`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Fmodels\u002Fresponses\u002FResponseCreateParams.kt) 或\n[`StructuredResponseCreateParams.Builder`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Fmodels\u002Fresponses\u002FStructuredResponseCreateParams.kt) 的实例开始构建参数。如果你先使用前者（这种方式代码更简洁），当调用 `ResponseCreateParams.Builder.text(Class\u003CT>)` 时，构建器类型会自动切换为后者。\n\n有关 _Structured Outputs_ 与 Responses API 结合使用的完整示例，请参阅\n[`ResponsesStructuredOutputsExample`](openai-java-example\u002Fsrc\u002Fmain\u002Fjava\u002Fcom\u002Fopenai\u002Fexample\u002FResponsesStructuredOutputsExample.java)。\n\n除了使用 `ResponseCreateParams.text(Class\u003CT>)` 外，你还可以构建一个\n[`StructuredResponseTextConfig`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Fmodels\u002Fresponses\u002FStructuredResponseTextConfig.kt)，并通过 
`text(StructuredResponseTextConfig)` 方法将其设置到 `ResponseCreateParams` 中。与使用 `ResponseCreateParams` 类似，你可以从 `ResponseTextConfig.Builder` 开始，其 `format(Class\u003CT>)` 方法会将其转换为 `StructuredResponseTextConfig.Builder`。这样你还可以在将文本配置添加到 `ResponseCreateParams` 之前，设置 `verbosity` 配置参数。\n\n有关 _Structured Outputs_ 与 `ResponseTextConfig` 及其 `verbosity` 参数结合使用的完整示例，请参阅\n[`ResponsesStructuredOutputsVerbosityExample`](openai-java-example\u002Fsrc\u002Fmain\u002Fjava\u002Fcom\u002Fopenai\u002Fexample\u002FResponsesStructuredOutputsVerbosityExample.java)。\n\n### 流式传输中的用法\n\n_Structured Outputs_ 也可以与 [Streaming](#streaming) 和 Chat Completions API 一起使用。由于响应是以“块”形式返回的，因此必须先累积完整的响应，以拼接 JSON 字符串，然后再将其转换为任意 Java 类的实例。在累积 JSON 字符串的过程中，可以正常执行流式操作。\n\n按照 [Streaming helpers](#streaming-helpers) 部分的说明，使用 [`ChatCompletionAccumulator`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Fhelpers\u002FChatCompletionAccumulator.kt) 来累积 JSON 字符串。累积完成后，使用 `ChatCompletionAccumulator.chatCompletion(Class\u003CT>)` 将累积的 `ChatCompletion` 转换为\n[`StructuredChatCompletion`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Fmodels\u002Fchat\u002Fcompletions\u002FStructuredChatCompletion.kt)。随后，`StructuredChatCompletion` 可以自动将 JSON 字符串反序列化为你定义的 Java 类的实例。\n\n有关 _Structured Outputs_ 与 Streaming 和 Chat Completions API 结合使用的完整示例，请参阅\n[`StructuredOutputsStreamingExample`](openai-java-example\u002Fsrc\u002Fmain\u002Fjava\u002Fcom\u002Fopenai\u002Fexample\u002FStructuredOutputsStreamingExample.java)。\n\n对于 Responses API，在流式传输过程中使用\n[`ResponseAccumulator`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Fhelpers\u002FResponseAccumulator.kt) 来累积事件。累积完成后，使用 `ResponseAccumulator.response(Class\u003CT>)` 将累积的 `Response` 转换为\n[`StructuredResponse`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Fmodels\u002Fresponses\u002FStructuredResponse.kt)。随后，`StructuredResponse` 可以自动将 JSON 字符串反序列化为你定义的 Java 类的实例。\n\n有关 _Structured Outputs_ 
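的流式用法，关键在于“先累积、后转换”这一顺序约束：必须等流结束、完整的 JSON 字符串拼接完成后，才能将其反序列化为 Java 类实例。

下面用纯 JDK 代码示意这一约束（与 SDK 无关的简化示例，`AccumulateThenConvertSketch` 与其中的 JSON 数据均为虚构）：

```java
import java.util.stream.Stream;

public class AccumulateThenConvertSketch {
    // 累积阶段：把流式到达的 JSON 片段逐块拼接起来。
    // 只有在流结束之后，拼接结果才是一段完整、可反序列化的 JSON。
    public static String accumulateJson(Stream<String> deltas) {
        StringBuilder sb = new StringBuilder();
        deltas.forEach(sb::append);
        return sb.toString();
    }

    public static void main(String[] args) {
        // 模拟分块到达的 JSON（虚构数据）
        String json = accumulateJson(Stream.of("{\"title\":", "\"Dune\"", "}"));
        System.out.println(json); // {"title":"Dune"}
        // 此时才适合执行类似 chatCompletion(Class<T>) / response(Class<T>) 的转换步骤
    }
}
```

同样，有关 _Structured Outputs_ 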
与 Streaming 和 Responses API 结合使用的完整示例，请参阅\n[`ResponsesStructuredOutputsStreamingExample`](openai-java-example\u002Fsrc\u002Fmain\u002Fjava\u002Fcom\u002Fopenai\u002Fexample\u002FResponsesStructuredOutputsStreamingExample.java)。\n\n### 定义 JSON 模式的属性\n\n当 JSON 模式由你的 Java 类派生时，默认情况下，所有由 `public` 字段或 `public` getter 方法表示的属性都会包含在模式中。非 `public` 字段和 getter 方法则默认不会被包含。你可以通过分别使用 `@JsonIgnore` 或 `@JsonProperty` 注解来排除 `public` 属性，或包含非 `public` 属性和 getter 方法（详情请参阅 [注解类与 JSON 模式](#annotating-classes-and-json-schemas)）。\n\n如果你不想定义 `public` 字段，可以定义 `private` 字段并提供相应的 `public` getter 方法。例如，一个名为 `myValue` 的 `private` 字段，搭配一个名为 `getMyValue()` 的 `public` getter 方法，将会在 JSON 模式中生成一个 `\"myValue\"` 属性。如果你不希望使用传统的 Java “get” 前缀作为 getter 方法名，则必须使用 `@JsonProperty` 注解标注该方法，此时将直接使用完整的方法名作为属性名称。如果不需要 setter 方法，也可以不定义它们。\n\n你的每个类都必须至少定义一个要包含在 JSON 模式中的属性。如果某个类没有任何字段或 getter 方法可用于派生模式属性，则会引发验证错误。这种情况可能发生在以下情形：\n\n- 类中既没有字段也没有 getter 方法。\n- 所有字段和 getter 方法都是 `public` 的，但都被标注了 `@JsonIgnore`。\n- 所有字段和 getter 方法都是非 `public` 的，但都没有标注 `@JsonProperty`。\n- 某个字段或 getter 方法声明为 `Map` 类型。由于 `Map` 被视为一个没有命名属性的独立类，因此会在 JSON 模式中生成一个空的 `\"properties\"` 字段。\n\n### 注解类和 JSON 模式\n\n你可以使用注解为从 Java 类派生的 JSON 模式添加更多信息，或者控制哪些字段或 getter 方法将包含在模式中。JSON 模式中捕获的注解细节可能会被 AI 模型用于改进其响应。SDK 支持使用 [Jackson Databind](https:\u002F\u002Fgithub.com\u002FFasterXML\u002Fjackson-databind) 注解。\n\n```java\nimport com.fasterxml.jackson.annotation.JsonClassDescription;\nimport com.fasterxml.jackson.annotation.JsonIgnore;\nimport com.fasterxml.jackson.annotation.JsonPropertyDescription;\n\nclass Person {\n    @JsonPropertyDescription(\"该人的名字和姓氏\")\n    public String name;\n    public int birthYear;\n    @JsonPropertyDescription(\"该人去世的年份，如果仍在世则为 'present'。\")\n    public String deathYear;\n}\n\n@JsonClassDescription(\"一本已出版书籍的详细信息\")\nclass Book {\n    public String title;\n    public Person author;\n    @JsonPropertyDescription(\"该书首次出版的年份。\")\n    public int publicationYear;\n    @JsonIgnore public String genre;\n}\n\nclass BookList {\n   
 public List\u003CBook> books;\n}\n```\n\n- 使用 `@JsonClassDescription` 为类添加详细描述。\n- 使用 `@JsonPropertyDescription` 为类的字段或 getter 方法添加详细描述。\n- 使用 `@JsonIgnore` 将类的 `public` 字段或 getter 方法排除在生成的 JSON 模式之外。\n- 使用 `@JsonProperty` 将类的非 `public` 字段或 getter 方法包含在生成的 JSON 模式中。\n\n如果你使用 `@JsonProperty(required = false)`，`false` 值将被忽略。OpenAI 的 JSON 模式必须将所有属性标记为 _required_，因此从你的 Java 类生成的模式会遵守这一限制，并忽略任何可能违反该限制的注解。\n\n你还可以使用 [OpenAPI Swagger 2](https:\u002F\u002Fswagger.io\u002Fspecification\u002Fv2\u002F) 的 [`@Schema`](https:\u002F\u002Fgithub.com\u002Fswagger-api\u002Fswagger-core\u002Fwiki\u002FSwagger-2.X---Annotations#schema) 和 [`@ArraySchema`](https:\u002F\u002Fgithub.com\u002Fswagger-api\u002Fswagger-core\u002Fwiki\u002FSwagger-2.X---Annotations#arrayschema) 注解。这些注解允许为你的模式属性添加特定于类型的约束。有关支持的约束的更多信息，请参阅 OpenAI 文档中的[支持的属性](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Fstructured-outputs#supported-properties)部分。\n\n```java\nimport io.swagger.v3.oas.annotations.media.Schema;\nimport io.swagger.v3.oas.annotations.media.ArraySchema;\n\nclass Article {\n    @ArraySchema(minItems = 1, maxItems = 10)\n    public List\u003CString> authors;\n\n    @Schema(pattern = \"^[A-Za-z ]+$\")\n    public String title;\n\n    @Schema(format = \"date\")\n    public String publicationDate;\n\n    @Schema(minimum = \"1\")\n    public int pageCount;\n}\n```\n\n本地验证会检查你是否使用了不支持的约束关键字。然而，约束的具体值并不会在本地进行验证。例如，如果你为字符串属性的 `\"format\"` 约束使用了一个不在[支持的格式名称列表](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Fstructured-outputs#supported-properties)中的值，那么本地验证将会通过，但 AI 模型可能会报告错误。\n\n如果你同时使用 Jackson 和 Swagger 注解来设置同一个模式字段，Jackson 注解将优先生效。在下面的示例中，`myProperty` 的描述将被设置为“Jackson 描述”；“Swagger 描述”将被忽略：\n\n```java\nimport com.fasterxml.jackson.annotation.JsonPropertyDescription;\nimport io.swagger.v3.oas.annotations.media.Schema;\n\nclass MyObject {\n    @Schema(description = \"Swagger 描述\")\n    @JsonPropertyDescription(\"Jackson 描述\")\n    public String myProperty;\n}\n```\n\n## 使用 
JSON 模式的函数调用\n\nOpenAI 的 [函数调用](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Ffunction-calling?api-mode=chat) 功能使您能够将外部函数直接集成到语言模型的响应中。与生成纯文本不同，模型可以在适当的时候输出调用函数的指令（以及参数）。您需要为这些函数定义一个 [JSON 模式](https:\u002F\u002Fjson-schema.org\u002Foverview\u002Fwhat-is-jsonschema)，模型会根据该模式决定何时以及如何触发这些调用，从而实现更具交互性、数据驱动的应用程序。\n\n通过 API 定义描述函数参数的 JSON 模式，可以构建一个包含 `FunctionDefinition` 的 `ChatCompletionTool`，然后使用 `addTool` 方法将其设置到输入参数中。随后，AI 模型的响应可能包含调用您函数的请求，详细说明函数名称及其参数值，这些参数以符合函数定义中 JSON 模式的 JSON 数据形式呈现。您可以从该 JSON 中解析出参数值，调用您的函数，并将函数结果返回给 AI 模型。使用低级 API 实现完整功能的示例可在 `FunctionCallingRawExample` 中找到。\n\n然而，为了更方便起见，SDK 可以根据任意 Java 类的结构自动推导出函数及其参数：类名即为函数名称，类的字段则定义了函数的参数。当 AI 模型以 JSON 形式返回参数值时，您可以轻松地将该 JSON 转换为 Java 类的实例，并利用这些参数值来调用自定义函数。使用 Java 类定义函数参数的完整示例可在 `FunctionCallingExample` 中查看。\n\n与 [结构化输出](#structured-outputs-with-json-schemas) 类似，Java 类可以包含声明为其他类实例的字段，并且可以使用集合类型（有关详细信息，请参阅 [定义 JSON 模式属性](#defining-json-schema-properties)）。此外，还可以使用注解为函数（类）及其参数（字段）设置描述，以帮助 AI 模型理解函数的目的以及参数可能的取值范围。\n\n```java\nimport com.fasterxml.jackson.annotation.JsonClassDescription;\nimport com.fasterxml.jackson.annotation.JsonPropertyDescription;\n\n@JsonClassDescription(\"获取给定 SDK 的质量评分。\")\nstatic class GetSdkQuality {\n    @JsonPropertyDescription(\"SDK 的名称。\")\n    public String name;\n\n    public SdkQuality execute() {\n        return new SdkQuality(\n                name, name.contains(\"OpenAI\") ? \"它非常稳健且完善！\" : \"*耸肩*\");\n    }\n}\n\nstatic class SdkQuality {\n    public String quality;\n\n    public SdkQuality(String name, String evaluation) {\n        quality = name + \": \" + evaluation;\n    }\n}\n\n@JsonClassDescription(\"获取指定 SDK 的评分（满分 10 分）。\")\nstatic class GetSdkScore {\n  public String name;\n\n  public int execute() {\n    return name.contains(\"OpenAI\") ? 
10 : 3;\n  }\n}\n```\n\n定义好函数后，使用 `addTool(Class\u003CT>)` 将其添加到输入参数中，然后在 AI 模型的响应中提出调用请求时执行它们。`Function.arguments(Class\u003CT>)` 可用于将 JSON 格式的函数参数解析为定义函数的 Java 类的实例。该实例的字段将被设置为函数调用的参数值。\n\n调用函数后，使用 `ChatCompletionToolMessageParam.Builder.contentAsJson(Object)` 将函数结果返回给 AI 模型。该方法会将结果转换为 JSON 格式，以便模型消费。`Object` 可以是任何对象，包括简单的 `String` 实例和基本类型的包装类。\n\n```java\nimport com.openai.client.OpenAIClient;\nimport com.openai.client.okhttp.OpenAIOkHttpClient;\nimport com.openai.models.ChatModel;\nimport com.openai.models.chat.completions.*;\nimport java.util.Collection;\n\nOpenAIClient client = OpenAIOkHttpClient.fromEnv();\n\nChatCompletionCreateParams.Builder createParamsBuilder = ChatCompletionCreateParams.builder()\n        .model(ChatModel.GPT_3_5_TURBO)\n        .maxCompletionTokens(2048)\n        .addTool(GetSdkQuality.class)\n        .addTool(GetSdkScore.class)\n        .addUserMessage(\"以下 SDK 的质量如何？评论者怎么说：OpenAI Java SDK、未知公司 SDK。\");\n\nclient.chat().completions().create(createParamsBuilder.build()).choices().stream()\n        .map(ChatCompletion.Choice::message)\n        \u002F\u002F 将每条助手消息添加到构建器中，以便跟踪对话，方便稍后提出后续问题。\n        .peek(createParamsBuilder::addMessage)\n        .flatMap(message -> {\n            message.content().ifPresent(System.out::println);\n            return message.toolCalls().stream().flatMap(Collection::stream);\n        })\n        .forEach(toolCall -> {\n            Object result = callFunction(toolCall.function());\n            \u002F\u002F 将工具调用的结果添加到对话中。\n            createParamsBuilder.addMessage(ChatCompletionToolMessageParam.builder()\n                    .toolCallId(toolCall.id())\n                    .contentAsJson(result)\n                    .build());\n        });\n\n\u002F\u002F 提出关于函数调用结果的后续问题。\ncreateParamsBuilder.addUserMessage(\"你为什么这么说呢？\");\nclient.chat().completions().create(createParamsBuilder.build()).choices().stream()\n        .flatMap(choice -> choice.message().content().stream())\n        
.forEach(System.out::println);\n\nstatic Object callFunction(ChatCompletionMessageToolCall.Function function) {\n  switch (function.name()) {\n    case \"GetSdkQuality\":\n      return function.arguments(GetSdkQuality.class).execute();\n    case \"GetSdkScore\":\n      return function.arguments(GetSdkScore.class).execute();\n    default:\n      throw new IllegalArgumentException(\"未知函数: \" + function.name());\n  }\n}\n```\n\n在上述代码中，每个函数的逻辑都封装在 `execute()` 方法中。然而，这并不是强制要求。您可以根据自己的使用场景，以任何最适合的方式实现函数逻辑。上述模式仅是一种建议：采用合适的模式可以使函数调用的过程更易于理解和实现。\n\n### 与 Responses API 的使用\n\n_Function Calling_ 功能同样适用于 Responses API。使用方式与之前描述的相同，只是 Responses API 与 Chat Completions API 存在细微差异。在构建参数时，将顶级类传递给 `addTool(Class\u003CT>)`。在响应中，查找属于函数调用的 [`ResponseOutputItem`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Fmodels\u002Fresponses\u002FResponseOutputItem.kt) 实例。使用\n[`ResponseFunctionToolCall.arguments(Class\u003CT>)`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Fmodels\u002Fresponses\u002FResponseFunctionToolCall.kt)\n将每个函数调用的参数解析为相应类的实例。最后，将每次调用的结果传回模型。\n\n如需查看使用低级 API 定义和解析函数参数的 _Function Calling_ 与 Responses API 结合使用的完整示例，请参阅\n[`ResponsesFunctionCallingRawExample`](openai-java-example\u002Fsrc\u002Fmain\u002Fjava\u002Fcom\u002Fopenai\u002Fexample\u002FResponsesFunctionCallingRawExample.java)。\n\n如需查看使用 Java 类定义和解析函数参数的 _Function Calling_ 与 Responses API 结合使用的完整示例，请参阅\n[`ResponsesFunctionCallingExample`](openai-java-example\u002Fsrc\u002Fmain\u002Fjava\u002Fcom\u002Fopenai\u002Fexample\u002FResponsesFunctionCallingExample.java)。\n\n### 本地函数 JSON 模式验证\n\n与 _Structured Outputs_ 类似，您可以在本地进行验证，以确保从您的函数类派生的 JSON 模式符合 OpenAI 对此类模式施加的限制。本地验证默认启用，但可以通过在调用 `addTool` 时添加 `JsonSchemaLocalValidation.NO` 来禁用。\n\n```java\nChatCompletionCreateParams.Builder createParamsBuilder = ChatCompletionCreateParams.builder()\n        .model(ChatModel.GPT_3_5_TURBO)\n        .maxCompletionTokens(2048)\n        .addTool(GetSdkQuality.class, 
JsonSchemaLocalValidation.NO)\n        .addTool(GetSdkScore.class, JsonSchemaLocalValidation.NO)\n        .addUserMessage(\"以下 SDK 的质量如何？评论者怎么说：OpenAI Java SDK、未知公司 SDK。\");\n```\n\n有关本地模式验证的更多详细信息以及在何种情况下可能需要将其禁用，请参阅 [本地 JSON 模式验证](#local-json-schema-validation)。\n\n### 注解函数类\n\n您可以使用注解为从函数类派生的 JSON 模式添加更多关于函数的信息，或控制哪些字段或 getter 方法将用作函数的参数。JSON 模式中捕获的注解细节可能会被 AI 模型用于改进其响应。SDK 支持使用\n[Jackson Databind](https:\u002F\u002Fgithub.com\u002FFasterXML\u002Fjackson-databind) 注解。\n\n- 使用 `@JsonClassDescription` 为函数类添加描述，详细说明何时以及如何使用该函数。\n- 使用 `@JsonTypeName` 将函数名称设置为不同于类简单名称的内容，默认使用类简单名称。\n- 使用 `@JsonPropertyDescription` 为函数参数（即函数类中的字段或 getter 方法）添加详细描述。\n- 使用 `@JsonIgnore` 将类中的 `public` 字段或 getter 方法排除在函数参数生成的 JSON 模式之外。\n- 使用 `@JsonProperty` 将类中的非 `public` 字段或 getter 方法包含在函数参数生成的 JSON 模式中。\n\nOpenAI 提供了一些\n[定义函数的最佳实践](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Ffunction-calling#best-practices-for-defining-functions)\n，可以帮助您理解如何有效地为您的函数使用上述注解。\n\n另请参阅 [定义 JSON 模式属性](#defining-json-schema-properties)，了解如何使用字段和 getter 方法，并结合访问修饰符和注解来定义函数的参数。同样的规则适用于函数类以及该部分中描述的结构化输出类。\n\n## 文件上传\n\nSDK 定义了接受文件的方法。\n\n要上传文件，可以传递一个 [`Path`](https:\u002F\u002Fdocs.oracle.com\u002Fjavase\u002F8\u002Fdocs\u002Fapi\u002Fjava\u002Fnio\u002Ffile\u002FPath.html)：\n\n```java\nimport com.openai.models.files.FileCreateParams;\nimport com.openai.models.files.FileObject;\nimport com.openai.models.files.FilePurpose;\nimport java.nio.file.Paths;\n\nFileCreateParams params = FileCreateParams.builder()\n    .purpose(FilePurpose.FINE_TUNE)\n    .file(Paths.get(\"input.jsonl\"))\n    .build();\nFileObject fileObject = client.files().create(params);\n```\n\n或者任意类型的 [`InputStream`](https:\u002F\u002Fdocs.oracle.com\u002Fjavase\u002F8\u002Fdocs\u002Fapi\u002Fjava\u002Fio\u002FInputStream.html)：\n\n```java\nimport com.openai.models.files.FileCreateParams;\nimport com.openai.models.files.FileObject;\nimport com.openai.models.files.FilePurpose;\nimport java.net.URL;\n\nFileCreateParams 
params = FileCreateParams.builder()\n    .purpose(FilePurpose.FINE_TUNE)\n    .file(new URL(\"https:\u002F\u002Fexample.com\u002Finput.jsonl\").openStream())\n    .build();\nFileObject fileObject = client.files().create(params);\n```\n\n也可以传递 `byte[]` 数组：\n\n```java\nimport com.openai.models.files.FileCreateParams;\nimport com.openai.models.files.FileObject;\nimport com.openai.models.files.FilePurpose;\n\nFileCreateParams params = FileCreateParams.builder()\n    .purpose(FilePurpose.FINE_TUNE)\n    .file(\"content\".getBytes())\n    .build();\nFileObject fileObject = client.files().create(params);\n```\n\n请注意，当传递非 `Path` 类型时，其文件名是未知的，因此不会包含在请求中。若要手动设置文件名，可以传递一个 [`MultipartField`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Fcore\u002FValues.kt)：\n\n```java\nimport com.openai.core.MultipartField;\nimport com.openai.models.files.FileCreateParams;\nimport com.openai.models.files.FileObject;\nimport com.openai.models.files.FilePurpose;\nimport java.io.InputStream;\nimport java.net.URL;\n\nFileCreateParams params = FileCreateParams.builder()\n    .purpose(FilePurpose.FINE_TUNE)\n    .file(MultipartField.\u003CInputStream>builder()\n        .value(new URL(\"https:\u002F\u002Fexample.com\u002Finput.jsonl\").openStream())\n        .filename(\"input.jsonl\")\n        .build())\n    .build();\nFileObject fileObject = client.files().create(params);\n```\n\n## Webhook 验证\n\n验证 webhook 签名是 _可选但建议执行_ 的操作。\n\n有关 webhook 的更多信息，请参阅 [API 文档](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Fwebhooks)。\n\n### 解析 Webhook 负载\n\n对于大多数用例，您可能希望同时验证 Webhook 并解析负载。为此，我们提供了 `client.webhooks().unwrap()` 方法，该方法会解析 Webhook 请求，并验证其是否由 OpenAI 发送。如果签名无效，此方法将抛出异常。\n\n请注意，`body` 参数必须是服务器发送的原始 JSON 字符串（请勿先对其进行解析）。`.unwrap()` 方法会在验证 Webhook 确实来自 OpenAI 后，为您将此 JSON 解析为事件对象。\n\n```java\nimport com.openai.client.OpenAIClient;\nimport com.openai.client.okhttp.OpenAIOkHttpClient;\nimport com.openai.core.http.Headers;\nimport 
com.openai.models.webhooks.UnwrapWebhookEvent;\nimport java.util.Map;\nimport java.util.Optional;\n\nOpenAIClient client = OpenAIOkHttpClient.fromEnv(); \u002F\u002F 默认使用 OPENAI_WEBHOOK_SECRET 环境变量\n\npublic void handleWebhook(String body, Map\u003CString, String> headers) {\n    try {\n        Headers headersList = Headers.builder()\n                .putAll(headers)\n                .build();\n\n        UnwrapWebhookEvent event = client.webhooks().unwrap(body, headersList, Optional.empty());\n\n        if (event.isResponseCompletedWebhookEvent()) {\n            System.out.println(\"响应已完成：\" + event.asResponseCompletedWebhookEvent().data());\n        } else if (event.isResponseFailedWebhookEvent()) {\n            System.out.println(\"响应失败：\" + event.asResponseFailedWebhookEvent().data());\n        } else {\n            System.out.println(\"未处理的事件类型：\" + event.getClass().getSimpleName());\n        }\n    } catch (Exception e) {\n        System.err.println(\"无效的 Webhook 签名：\" + e.getMessage());\n        \u002F\u002F 处理无效签名\n    }\n}\n```\n\n### 直接验证 Webhook 负载\n\n在某些情况下，您可能希望将 Webhook 的验证与负载解析分开进行。如果您更倾向于分别处理这些步骤，我们提供了 `client.webhooks().verifySignature()` 方法，用于仅验证 Webhook 请求的签名。与 `.unwrap()` 一样，如果签名无效，此方法也会抛出异常。\n\n请注意，`body` 参数必须是服务器发送的原始 JSON 字符串（请勿先对其进行解析）。在验证签名之后，您需要自行解析请求体。\n\n```java\nimport com.fasterxml.jackson.databind.ObjectMapper;\nimport com.openai.client.OpenAIClient;\nimport com.openai.client.okhttp.OpenAIOkHttpClient;\nimport com.openai.core.http.Headers;\nimport com.openai.models.webhooks.WebhookVerificationParams;\nimport java.util.Map;\nimport java.util.Optional;\n\nOpenAIClient client = OpenAIOkHttpClient.fromEnv(); \u002F\u002F 默认使用 OPENAI_WEBHOOK_SECRET 环境变量\nObjectMapper objectMapper = new ObjectMapper();\n\npublic void handleWebhook(String body, Map\u003CString, String> headers) {\n    try {\n        Headers headersList = Headers.builder()\n                .putAll(headers)\n                .build();\n\n        client.webhooks().verifySignature(\n            WebhookVerificationParams.builder()\n                
.payload(body)\n                .headers(headersList)\n                .build()\n        );\n\n        \u002F\u002F 验证后解析请求体\n        Map\u003CString, Object> event = objectMapper.readValue(body, Map.class);\n        System.out.println(\"已验证的事件：\" + event);\n    } catch (Exception e) {\n        System.err.println(\"无效的 Webhook 签名：\" + e.getMessage());\n        \u002F\u002F 处理无效签名\n    }\n}\n```\n\n## 二进制响应\n\nSDK 定义了返回二进制响应的方法，这些方法适用于那些无需解析的 API 响应，例如非 JSON 数据。\n\n这些方法返回 [`HttpResponse`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Fcore\u002Fhttp\u002FHttpResponse.kt)：\n\n```java\nimport com.openai.core.http.HttpResponse;\nimport com.openai.models.files.FileContentParams;\n\nHttpResponse response = client.files().content(\"file_id\");\n```\n\n要将响应内容保存到文件，可以使用 [`Files.copy(...)`](https:\u002F\u002Fdocs.oracle.com\u002Fjavase\u002F8\u002Fdocs\u002Fapi\u002Fjava\u002Fnio\u002Ffile\u002FFiles.html#copy-java.io.InputStream-java.nio.file.Path-java.nio.file.CopyOption...-) 方法：\n\n```java\nimport com.openai.core.http.HttpResponse;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\nimport java.nio.file.StandardCopyOption;\n\ntry (HttpResponse response = client.files().content(params)) {\n    Files.copy(\n        response.body(),\n        Paths.get(path),\n        StandardCopyOption.REPLACE_EXISTING\n    );\n} catch (Exception e) {\n    System.out.println(\"出错了！\");\n    throw new RuntimeException(e);\n}\n```\n\n或者将响应内容传输到任何 [`OutputStream`](https:\u002F\u002Fdocs.oracle.com\u002Fjavase\u002F8\u002Fdocs\u002Fapi\u002Fjava\u002Fio\u002FOutputStream.html)：\n\n```java\nimport com.openai.core.http.HttpResponse;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\ntry (HttpResponse response = client.files().content(params)) {\n    response.body().transferTo(Files.newOutputStream(Paths.get(path)));\n} catch (Exception e) {\n    System.out.println(\"出错了！\");\n    throw new RuntimeException(e);\n}\n```\n\n## 原始响应\n\nSDK 定义了将响应反序列化为 Java 
类实例的方法。然而，这些方法无法访问响应头、状态码或原始响应体。\n\n要访问这些数据，可以在客户端或服务上的任何 HTTP 方法调用前添加 `withRawResponse()`：\n\n```java\nimport com.openai.core.http.Headers;\nimport com.openai.core.http.HttpResponseFor;\nimport com.openai.models.ChatModel;\nimport com.openai.models.chat.completions.ChatCompletion;\nimport com.openai.models.chat.completions.ChatCompletionCreateParams;\n\nChatCompletionCreateParams params = ChatCompletionCreateParams.builder()\n    .addUserMessage(\"说这是个测试\")\n    .model(ChatModel.GPT_5_2)\n    .build();\nHttpResponseFor\u003CChatCompletion> chatCompletion = client.chat().completions().withRawResponse().create(params);\n\nint statusCode = chatCompletion.statusCode();\nHeaders headers = chatCompletion.headers();\n```\n\n如果需要，您仍然可以将响应反序列化为 Java 类的实例：\n\n```java\nimport com.openai.models.chat.completions.ChatCompletion;\n\nChatCompletion parsedChatCompletion = chatCompletion.parse();\n```\n\n### 请求 ID\n\n> 更多关于调试请求的信息，请参阅 [API 文档](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fdebugging-requests)。\n\n在使用原始响应时，您可以使用 `requestId()` 方法访问 `x-request-id` 响应头：\n\n```java\nimport com.openai.core.http.HttpResponseFor;\nimport com.openai.models.chat.completions.ChatCompletion;\nimport java.util.Optional;\n\nHttpResponseFor\u003CChatCompletion> chatCompletion = client.chat().completions().withRawResponse().create(params);\nOptional\u003CString> requestId = chatCompletion.requestId();\n```\n\n这可用于快速记录失败的请求，并将其报告回 OpenAI。\n\n## 错误处理\n\nSDK 会抛出自定义的未检查异常类型：\n\n- [`OpenAIServiceException`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Ferrors\u002FOpenAIServiceException.kt)：HTTP 错误的基类。下表列出了针对每个 HTTP 状态码抛出的具体异常子类：\n\n  | 状态码 | 异常                                                                                                              |\n  | ------ | ---------------------------------------------------------------------------------------------------------------------- |\n  | 400    | 
[`BadRequestException`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Ferrors\u002FBadRequestException.kt)                     |\n  | 401    | [`UnauthorizedException`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Ferrors\u002FUnauthorizedException.kt)                 |\n  | 403    | [`PermissionDeniedException`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Ferrors\u002FPermissionDeniedException.kt)         |\n  | 404    | [`NotFoundException`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Ferrors\u002FNotFoundException.kt)                         |\n  | 422    | [`UnprocessableEntityException`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Ferrors\u002FUnprocessableEntityException.kt)   |\n  | 429    | [`RateLimitException`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Ferrors\u002FRateLimitException.kt)                       |\n  | 5xx    | [`InternalServerException`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Ferrors\u002FInternalServerException.kt)             |\n  | 其他   | [`UnexpectedStatusCodeException`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Ferrors\u002FUnexpectedStatusCodeException.kt) |\n\n  对于在成功完成初始 HTTP 响应后，[SSE 流式传输](https:\u002F\u002Fdeveloper.mozilla.org\u002Fen-US\u002Fdocs\u002FWeb\u002FAPI\u002FServer-sent_events)过程中遇到的错误，会抛出 [`SseException`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Ferrors\u002FSseException.kt)。\n\n- [`OpenAIIoException`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Ferrors\u002FOpenAIIoException.kt)：I\u002FO 网络错误。\n\n- [`OpenAIRetryableException`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Ferrors\u002FOpenAIRetryableException.kt)：通用错误，表示客户端可以重试的失败。\n\n- 
[`OpenAIInvalidDataException`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Ferrors\u002FOpenAIInvalidDataException.kt)：无法正确解读已成功获取的数据。例如，当尝试访问本应必填的属性，但 API 却意外地将其从响应中省略时。\n\n- [`OpenAIException`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Ferrors\u002FOpenAIException.kt)：所有异常的基类。大多数错误都会导致上述异常之一被抛出，但也可能使用基类抛出完全通用的错误。\n\n## 分页\n\nSDK 定义了返回分页结果列表的方法。它提供了便捷的方式，既可以逐页访问结果，也可以跨所有页面逐项访问。\n\n### 自动分页\n\n要遍历所有页面中的全部结果，可以使用 `autoPager()` 方法，该方法会根据需要自动获取更多页面。\n\n在使用同步客户端时，该方法返回一个 [`Iterable`](https:\u002F\u002Fdocs.oracle.com\u002Fjavase\u002F8\u002Fdocs\u002Fapi\u002Fjava\u002Flang\u002FIterable.html)：\n\n```java\nimport com.openai.models.finetuning.jobs.FineTuningJob;\nimport com.openai.models.finetuning.jobs.JobListPage;\n\nJobListPage page = client.fineTuning().jobs().list();\n\n\u002F\u002F 作为 Iterable 处理\nfor (FineTuningJob job : page.autoPager()) {\n    System.out.println(job);\n}\n\n\u002F\u002F 作为 Stream 处理\npage.autoPager()\n    .stream()\n    .limit(50)\n    .forEach(job -> System.out.println(job));\n```\n\n在使用异步客户端时，该方法返回一个 [`AsyncStreamResponse`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Fcore\u002Fhttp\u002FAsyncStreamResponse.kt)：\n\n```java\nimport com.openai.core.http.AsyncStreamResponse;\nimport com.openai.models.finetuning.jobs.FineTuningJob;\nimport com.openai.models.finetuning.jobs.JobListPageAsync;\nimport java.util.Optional;\nimport java.util.concurrent.CompletableFuture;\n\nCompletableFuture\u003CJobListPageAsync> pageFuture = client.async().fineTuning().jobs().list();\n\npageFuture.thenAccept(page -> page.autoPager().subscribe(job -> {\n    System.out.println(job);\n}));\n\n\u002F\u002F 如果需要处理错误或流的完成\npageFuture.thenAccept(page -> page.autoPager().subscribe(new AsyncStreamResponse.Handler\u003CFineTuningJob>() {\n    @Override\n    public void onNext(FineTuningJob job) {\n        System.out.println(job);\n    }\n\n    @Override\n    public void onComplete(Optional\u003CThrowable> 
error) {\n        if (error.isPresent()) {\n            System.out.println(\"出错了！\");\n            throw new RuntimeException(error.get());\n        } else {\n            System.out.println(\"没有更多了！\");\n        }\n    }\n}));\n\n\u002F\u002F 或者使用 Future\npageFuture.thenAccept(page -> page.autoPager()\n    .subscribe(job -> {\n        System.out.println(job);\n    })\n    .onCompleteFuture()\n    .whenComplete((unused, error) -> {\n        if (error != null) {\n            System.out.println(\"出错了！\");\n            throw new RuntimeException(error);\n        } else {\n            System.out.println(\"没有更多了！\");\n        }\n    }));\n```\n\n### 手动分页\n\n要访问单个页面的项目并手动请求下一页，可以使用 `items()`、`hasNextPage()` 和 `nextPage()` 方法：\n\n```java\nimport com.openai.models.finetuning.jobs.FineTuningJob;\nimport com.openai.models.finetuning.jobs.JobListPage;\n\nJobListPage page = client.fineTuning().jobs().list();\nwhile (true) {\n    for (FineTuningJob job : page.items()) {\n        System.out.println(job);\n    }\n\n    if (!page.hasNextPage()) {\n        break;\n    }\n\n    page = page.nextPage();\n}\n```\n\n## 日志记录\n\nSDK 使用标准的 [OkHttp 日志拦截器](https:\u002F\u002Fgithub.com\u002Fsquare\u002Fokhttp\u002Ftree\u002Fmaster\u002Fokhttp-logging-interceptor)。\n\n通过将 `OPENAI_LOG` 环境变量设置为 `info` 来启用日志记录：\n\n```sh\nexport OPENAI_LOG=info\n```\n\n或者设置为 `debug` 以获得更详细的日志记录：\n\n```sh\nexport OPENAI_LOG=debug\n```\n\n## ProGuard 和 R8\n\n尽管 SDK 使用反射，但它仍然可以与 [ProGuard](https:\u002F\u002Fgithub.com\u002FGuardsquare\u002Fproguard) 和 [R8](https:\u002F\u002Fdeveloper.android.com\u002Ftopic\u002Fperformance\u002Fapp-optimization\u002Fenable-app-optimization) 一起使用，因为 `openai-java-core` 已经发布了一个包含 [keep 规则](https:\u002F\u002Fwww.guardsquare.com\u002Fmanual\u002Fconfiguration\u002Fusage)的 [配置文件](openai-java-core\u002Fsrc\u002Fmain\u002Fresources\u002FMETA-INF\u002Fproguard\u002Fopenai-java-core.pro)。\n\nProGuard 和 R8 应该能够自动检测并应用这些规则，但如果需要，您也可以手动复制 keep 规则。\n\n## GraalVM\n\n尽管 SDK 使用了反射，但它仍然可以在 
[GraalVM](https:\u002F\u002Fwww.graalvm.org) 中使用，因为 `openai-java-core` 已经随附了 [可达性元数据](https:\u002F\u002Fwww.graalvm.org\u002Flatest\u002Freference-manual\u002Fnative-image\u002Fmetadata\u002F) 发布。\n\nGraalVM 应该能够自动检测并使用这些元数据，不过也可以进行[手动配置](https:\u002F\u002Fwww.graalvm.org\u002Fjdk24\u002Freference-manual\u002Fnative-image\u002Foverview\u002FBuildConfiguration\u002F)。\n\n## Spring Boot\n\n如果你正在使用 Spring Boot，那么可以利用 SDK 的 [Spring Boot 启动器](https:\u002F\u002Fdocs.spring.io\u002Fspring-boot\u002Fdocs\u002F2.7.18\u002Freference\u002Fhtmlsingle\u002F#using.build-systems.starters)，以简化配置并快速搭建项目。\n\n### 安装\n\n\u003C!-- x-release-please-start-version -->\n\n#### Gradle\n\n```kotlin\nimplementation(\"com.openai:openai-java-spring-boot-starter:4.32.0\")\n```\n\n#### Maven\n\n```xml\n\u003Cdependency>\n  \u003CgroupId>com.openai\u003C\u002FgroupId>\n  \u003CartifactId>openai-java-spring-boot-starter\u003C\u002FartifactId>\n  \u003Cversion>4.32.0\u003C\u002Fversion>\n\u003C\u002Fdependency>\n```\n\n\u003C!-- x-release-please-end -->\n\n### 配置\n\n[客户端的环境变量选项](#client-configuration) 可以在 [`application.properties` 或 `application.yml`](https:\u002F\u002Fdocs.spring.io\u002Fspring-boot\u002Fhow-to\u002Fproperties-and-configuration.html) 中进行配置。\n\n#### `application.properties`\n\n```properties\nopenai.base-url=https:\u002F\u002Fapi.openai.com\u002Fv1\nopenai.api-key=My API Key\nopenai.org-id=My Organization\nopenai.project-id=My Project\nopenai.webhook-secret=My Webhook Secret\n```\n\n#### `application.yml`\n\n```yaml\nopenai:\n  base-url: https:\u002F\u002Fapi.openai.com\u002Fv1\n  api-key: My API Key\n  org-id: My Organization\n  project-id: My Project\n  webhook-secret: My Webhook Secret\n```\n\n#### 其他配置\n\n可以通过提供一个或多个 [`OpenAIClientCustomizer`](openai-java-spring-boot-starter\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Fspringboot\u002FOpenAIClientCustomizer.kt) 实例来配置其他客户端选项。例如，以下是如何设置 [`maxRetries`](#retries) 的示例：\n\n```java\nimport 
com.openai.springboot.OpenAIClientCustomizer;\nimport org.springframework.context.annotation.Bean;\nimport org.springframework.context.annotation.Configuration;\n\n@Configuration\npublic class OpenAIConfig {\n    @Bean\n    public OpenAIClientCustomizer customizer() {\n        return builder -> builder.maxRetries(3);\n    }\n}\n```\n\n### 使用\n\n将 [`OpenAIClient`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Fclient\u002FOpenAIClient.kt) 注入到任何地方，即可开始使用！\n\n## Jackson\n\nSDK 依赖于 [Jackson](https:\u002F\u002Fgithub.com\u002FFasterXML\u002Fjackson) 进行 JSON 的序列化和反序列化。它兼容 2.13.4 或更高版本，但默认依赖于 2.18.2 版本。\n\n如果运行时检测到不兼容的 Jackson 版本（例如，在你的 Maven 或 Gradle 配置中覆盖了默认版本），SDK 将抛出异常。\n\n如果 SDK 抛出了异常，但你_确定_版本是兼容的，则可以使用 [`OpenAIOkHttpClient`](openai-java-client-okhttp\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Fclient\u002Fokhttp\u002FOpenAIOkHttpClient.kt) 或 [`OpenAIOkHttpClientAsync`](openai-java-client-okhttp\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Fclient\u002Fokhttp\u002FOpenAIOkHttpClientAsync.kt) 中的 `checkJacksonVersionCompatibility` 方法来禁用版本检查。\n\n> [!CAUTION]\n> 我们不保证在禁用 Jackson 版本检查的情况下，SDK 能够正常工作。\n\n此外，请注意，较旧版本的 Jackson 存在一些可能影响 SDK 的缺陷。我们并不会针对所有 Jackson 缺陷进行修复（[示例](https:\u002F\u002Fgithub.com\u002FFasterXML\u002Fjackson-databind\u002Fissues\u002F3240)），而是建议用户自行升级 Jackson 来解决这些问题。\n\n## Microsoft Azure\n\n要将此库与 [Azure OpenAI](https:\u002F\u002Flearn.microsoft.com\u002Fazure\u002Fai-services\u002Fopenai\u002Foverview) 一起使用，可以使用相同的 OpenAI 客户端构建器，但需进行 Azure 特有的配置。\n\n```java\nOpenAIClient client = OpenAIOkHttpClient.builder()\n        \u002F\u002F 分别从 `AZURE_OPENAI_KEY` 和 `OPENAI_BASE_URL` 环境变量中获取 API 密钥和端点\n        .fromEnv()\n        \u002F\u002F 设置 Azure Entra ID 凭证\n        .credential(BearerTokenCredential.create(AuthenticationUtil.getBearerTokenSupplier(\n                new DefaultAzureCredentialBuilder().build(), \"https:\u002F\u002Fcognitiveservices.azure.com\u002F.default\")))\n        .build();\n```\n\n完整的 
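Azure 配置就绪后，即可像标准 OpenAI 客户端一样发起调用，通常以 Azure 部署名称作为模型标识。下面是一个示意性片段（其中 `my-deployment` 为假设的部署名称，仅作示例）：\n\n```java\nimport com.openai.models.ChatModel;\nimport com.openai.models.chat.completions.ChatCompletion;\nimport com.openai.models.chat.completions.ChatCompletionCreateParams;\n\n\u002F\u002F 假设 `client` 已按上文完成 Azure 配置\nChatCompletionCreateParams params = ChatCompletionCreateParams.builder()\n    .addUserMessage(\"你好\")\n    .model(ChatModel.of(\"my-deployment\")) \u002F\u002F 假设的 Azure 部署名称\n    .build();\nChatCompletion chatCompletion = client.chat().completions().create(params);\n```\n\n完整的 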
Azure OpenAI 示例可在 [`openai-java-example`](openai-java-example\u002Fsrc\u002Fmain\u002Fjava\u002Fcom\u002Fopenai\u002Fexample\u002FAzureEntraIdExample.java) 目录中找到。该目录中的其他示例只要客户端配置为使用 Azure，同样适用。\n\n### 可选：URL 路径模式配置\n\n[`ClientOptions`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Fcore\u002FClientOptions.kt) 可以根据你的服务设置，对 Azure OpenAI 端点 URL 的处理方式进行配置。默认值为 [`AzureUrlPathMode.AUTO`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Fazure\u002FAzureUrlPathMode.kt)。若要自定义 SDK 行为，各模式的作用如下：\n- `AzureUrlPathMode.LEGACY`：强制将部署名称或模型名称放入路径中。\n- `AzureUrlPathMode.UNIFIED`：对于以 `\u002Fopenai\u002Fv1` 结尾的新版端点，其服务行为与 OpenAI 一致，因此 [`AzureOpenAIServiceVersion`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Fazure\u002FAzureOpenAIServiceVersion.kt) 变得可选，模型将在请求对象中传递。\n- `AzureUrlPathMode.AUTO`：根据基础 URL 自动检测路径模式。此为默认值。\n\n## 网络选项\n\n### 重试次数\n\nSDK 默认会自动重试 2 次，每次请求之间采用短时间的指数退避策略。\n\n仅以下错误类型会被重试：\n- 连接错误（例如，由于网络连接问题）\n- 408 请求超时\n- 409 冲突\n- 429 速率限制\n- 5xx 内部错误\n\nAPI 也可能明确指示 SDK 是否需要重试请求。\n\n要设置自定义的重试次数，可以使用 `maxRetries` 方法配置客户端：\n\n```java\nimport com.openai.client.OpenAIClient;\nimport com.openai.client.okhttp.OpenAIOkHttpClient;\n\nOpenAIClient client = OpenAIOkHttpClient.builder()\n    .fromEnv()\n    .maxRetries(4)\n    .build();\n```\n\n### 超时时间\n\n默认情况下，请求会在 10 分钟后超时。\n\n要设置自定义超时时间，可以在方法调用时使用 `timeout` 方法进行配置：\n\n```java\nimport com.openai.core.RequestOptions;\nimport com.openai.models.chat.completions.ChatCompletion;\nimport java.time.Duration;\n\nChatCompletion chatCompletion = client.chat().completions().create(\n  params, RequestOptions.builder().timeout(Duration.ofSeconds(30)).build()\n);\n```\n\n或者在客户端级别为所有方法调用设置默认超时时间：\n\n```java\nimport com.openai.client.OpenAIClient;\nimport com.openai.client.okhttp.OpenAIOkHttpClient;\nimport java.time.Duration;\n\nOpenAIClient client = OpenAIOkHttpClient.builder()\n    .fromEnv()\n    .timeout(Duration.ofSeconds(30))\n    .build();\n```\n\n### 代理\n\n要通过代理路由请求，请使用 `proxy` 方法配置客户端：\n\n```java\nimport 
com.openai.client.OpenAIClient;\nimport com.openai.client.okhttp.OpenAIOkHttpClient;\nimport java.net.InetSocketAddress;\nimport java.net.Proxy;\n\nOpenAIClient client = OpenAIOkHttpClient.builder()\n    .fromEnv()\n    .proxy(new Proxy(\n      Proxy.Type.HTTP, new InetSocketAddress(\n        \"example.com\", 8080\n      )\n    ))\n    .build();\n```\n\n### 连接池\n\n要自定义底层 OkHttp 连接池，可以使用 `maxIdleConnections` 和 `keepAliveDuration` 方法配置客户端：\n\n```java\nimport com.openai.client.OpenAIClient;\nimport com.openai.client.okhttp.OpenAIOkHttpClient;\nimport java.time.Duration;\n\nOpenAIClient client = OpenAIOkHttpClient.builder()\n    .fromEnv()\n    \u002F\u002F 如果设置了 `maxIdleConnections`，则必须同时设置 `keepAliveDuration`，反之亦然。\n    .maxIdleConnections(10)\n    .keepAliveDuration(Duration.ofMinutes(2))\n    .build();\n```\n\n如果这两个选项均未设置，则将使用 OkHttp 的默认连接池设置。\n\n### HTTPS\n\n> [!NOTE]\n> 大多数应用程序不应调用这些方法，而应使用系统默认设置。默认设置包含特殊的优化措施，若修改实现方式可能会导致这些优化失效。\n\n要配置 HTTPS 连接的安全性，可以使用 `sslSocketFactory`、`trustManager` 和 `hostnameVerifier` 方法配置客户端：\n\n```java\nimport com.openai.client.OpenAIClient;\nimport com.openai.client.okhttp.OpenAIOkHttpClient;\n\nOpenAIClient client = OpenAIOkHttpClient.builder()\n    .fromEnv()\n    \u002F\u002F 如果设置了 `sslSocketFactory`，则必须同时设置 `trustManager`，反之亦然。\n    .sslSocketFactory(yourSSLSocketFactory)\n    .trustManager(yourTrustManager)\n    .hostnameVerifier(yourHostnameVerifier)\n    .build();\n```\n\n### 自定义 HTTP 客户端\n\n该 SDK 包含三个组件：\n\n- `openai-java-core`\n  - 包含 SDK 核心逻辑\n  - 不依赖于 [OkHttp](https:\u002F\u002Fsquare.github.io\u002Fokhttp)\n  - 暴露了 `OpenAIClient`、`OpenAIClientAsync`、`OpenAIClientImpl` 和 `OpenAIClientAsyncImpl`，这些类均可与任何 HTTP 客户端配合使用\n- `openai-java-client-okhttp`\n  - 依赖于 [OkHttp](https:\u002F\u002Fsquare.github.io\u002Fokhttp)\n  - 暴露了 `OpenAIOkHttpClient` 和 `OpenAIOkHttpClientAsync`，它们分别提供了使用 OkHttp 构建 `OpenAIClientImpl` 和 `OpenAIClientAsyncImpl` 的方式\n- `openai-java`\n  - 依赖并暴露了 `openai-java-core` 和 
`openai-java-client-okhttp` 的 API\n  - 不包含自己的逻辑\n\n这种结构允许在不引入不必要的依赖的情况下替换 SDK 的默认 HTTP 客户端。\n\n#### 自定义的 `OkHttpClient`\n\n> [!TIP]\n> 在替换默认客户端之前，请先尝试可用的 [网络选项](#network-options)。\n\n要使用自定义的 `OkHttpClient`：\n\n1. 将您的 `openai-java` 依赖项替换为 `openai-java-core`\n2. 将 `openai-java-client-okhttp` 中的 `OkHttpClient` 类复制到您的代码中并进行自定义\n3. 使用您自定义的客户端，类似 `OpenAIOkHttpClient` 或 `OpenAIOkHttpClientAsync` 的方式，构建 `OpenAIClientImpl` 或 `OpenAIClientAsyncImpl`\n\n#### 完全自定义的 HTTP 客户端\n\n要使用完全自定义的 HTTP 客户端：\n\n1. 将您的 `openai-java` 依赖项替换为 `openai-java-core`\n2. 编写一个实现 `HttpClient` 接口的类\n3. 使用您的新客户端类，类似 `OpenAIOkHttpClient` 或 `OpenAIOkHttpClientAsync` 的方式，构建 `OpenAIClientImpl` 或 `OpenAIClientAsyncImpl`\n\n## 未文档化的 API 功能\n\n该 SDK 经过类型化设计，旨在方便使用已文档化的 API。然而，它也支持与未文档化或尚未正式支持的 API 部分进行交互。\n\n### 参数\n\n要设置未公开的参数，可以在任何 `Params` 类上调用 `putAdditionalHeader`、`putAdditionalQueryParam` 或 `putAdditionalBodyProperty` 方法：\n\n```java\nimport com.openai.core.JsonValue;\nimport com.openai.models.chat.completions.ChatCompletionCreateParams;\n\nChatCompletionCreateParams params = ChatCompletionCreateParams.builder()\n    .putAdditionalHeader(\"Secret-Header\", \"42\")\n    .putAdditionalQueryParam(\"secret_query_param\", \"42\")\n    .putAdditionalBodyProperty(\"secretProperty\", JsonValue.from(\"42\"))\n    .build();\n```\n\n之后可以通过 `_additionalHeaders()`、`_additionalQueryParams()` 和 `_additionalBodyProperties()` 方法在构建好的对象上访问这些参数。\n\n如果需要为嵌套的头部、查询参数或请求体类设置未公开的参数，可以在嵌套类上调用 `putAdditionalProperty` 方法：\n\n```java\nimport com.openai.core.JsonValue;\nimport com.openai.models.chat.completions.ChatCompletionCreateParams;\n\nChatCompletionCreateParams params = ChatCompletionCreateParams.builder()\n    .responseFormat(ChatCompletionCreateParams.ResponseFormat.builder()\n        .putAdditionalProperty(\"secretProperty\", JsonValue.from(\"42\"))\n        .build())\n    .build();\n```\n\n这些属性可以在嵌套的构建对象上通过 `_additionalProperties()` 方法进行访问。\n\n若要将已公开的参数或属性设置为尚未支持或未公开的值，可以将其值包装成一个 
[`JsonValue`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Fcore\u002FValues.kt) 对象并传递给对应的 setter 方法：\n\n```java\nimport com.openai.core.JsonValue;\nimport com.openai.models.ChatModel;\nimport com.openai.models.chat.completions.ChatCompletionCreateParams;\n\nChatCompletionCreateParams params = ChatCompletionCreateParams.builder()\n    .messages(JsonValue.from(42))\n    .model(ChatModel.GPT_5_2)\n    .build();\n```\n\n创建 [`JsonValue`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Fcore\u002FValues.kt) 的最简单方式是使用其 `from(...)` 方法：\n\n```java\nimport com.openai.core.JsonValue;\nimport java.util.List;\nimport java.util.Map;\n\n\u002F\u002F 创建基本 JSON 值\nJsonValue nullValue = JsonValue.from(null);\nJsonValue booleanValue = JsonValue.from(true);\nJsonValue numberValue = JsonValue.from(42);\nJsonValue stringValue = JsonValue.from(\"Hello World!\");\n\n\u002F\u002F 创建等价于 `[\"Hello\", \"World\"]` 的 JSON 数组\nJsonValue arrayValue = JsonValue.from(List.of(\n  \"Hello\", \"World\"\n));\n\n\u002F\u002F 创建等价于 `{ \"a\": 1, \"b\": 2 }` 的 JSON 对象\nJsonValue objectValue = JsonValue.from(Map.of(\n  \"a\", 1,\n  \"b\", 2\n));\n\n\u002F\u002F 创建一个任意嵌套的 JSON，等价于：\n\u002F\u002F {\n\u002F\u002F   \"a\": [1, 2],\n\u002F\u002F   \"b\": [3, 4]\n\u002F\u002F }\nJsonValue complexValue = JsonValue.from(Map.of(\n  \"a\", List.of(\n    1, 2\n  ),\n  \"b\", List.of(\n    3, 4\n  )\n));\n```\n\n通常情况下，`Builder` 类的 `build` 方法会在有任何必填参数或属性未设置时抛出 [`IllegalStateException`](https:\u002F\u002Fdocs.oracle.com\u002Fjavase\u002F8\u002Fdocs\u002Fapi\u002Fjava\u002Flang\u002FIllegalStateException.html)。  \n\n如果需要强制忽略某个必填参数或属性，可以传入 [`JsonMissing`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Fcore\u002FValues.kt)：\n\n```java\nimport com.openai.core.JsonMissing;\nimport com.openai.models.ChatModel;\nimport com.openai.models.chat.completions.ChatCompletionCreateParams;\n\nChatCompletionCreateParams params = 
ChatCompletionCreateParams.builder()\n    .model(ChatModel.GPT_5_4)\n    .messages(JsonMissing.of())\n    .build();\n```\n\n### 响应属性\n\n要访问未公开的响应属性，可以调用 `_additionalProperties()` 方法：\n\n```java\nimport com.openai.core.JsonValue;\nimport java.util.Map;\n\nMap\u003CString, JsonValue> additionalProperties = client.chat().completions().create(params)._additionalProperties();\nJsonValue secretPropertyValue = additionalProperties.get(\"secretProperty\");\n\nString result = secretPropertyValue.accept(new JsonValue.Visitor\u003CString>() {\n    @Override\n    public String visitNull() {\n        return \"It's null!\";\n    }\n\n    @Override\n    public String visitBoolean(boolean value) {\n        return \"It's a boolean!\";\n    }\n\n    @Override\n    public String visitNumber(Number value) {\n        return \"It's a number!\";\n    }\n\n    \u002F\u002F 其他方法包括 `visitMissing`, `visitString`, `visitArray` 和 `visitObject`\n    \u002F\u002F 每个未实现方法的默认行为都会委托给 `visitDefault`，而后者默认会抛出异常，但也可以被重写。\n});\n```\n\n要访问某个属性的原始 JSON 值（可能未公开），可以调用以 `_` 开头的方法：\n\n```java\nimport com.openai.core.JsonField;\nimport com.openai.models.chat.completions.ChatCompletionMessageParam;\nimport java.util.List;\nimport java.util.Optional;\n\nJsonField\u003CList\u003CChatCompletionMessageParam>> messages = client.chat().completions().create(params)._messages();\n\nif (messages.isMissing()) {\n  \u002F\u002F 该属性在 JSON 响应中不存在\n} else if (messages.isNull()) {\n  \u002F\u002F 该属性被设置为字面量 null\n} else {\n  \u002F\u002F 检查值是否以字符串形式提供\n  \u002F\u002F 其他方法包括 `asNumber()`, `asBoolean()` 等\n  Optional\u003CString> jsonString = messages.asString();\n\n  \u002F\u002F 尝试反序列化为自定义类型\n  MyClass myObject = messages.asUnknown().orElseThrow().convert(MyClass.class);\n}\n```\n\n### 响应验证\n\n在极少数情况下，API 可能会返回与预期类型不符的响应。例如，SDK 可能期望某个属性包含一个 `String`，但 API 却返回了其他内容。\n\n默认情况下，SDK 不会在这种情况下抛出异常，只有当你直接访问该属性时才会抛出 
[`OpenAIInvalidDataException`](openai-java-core\u002Fsrc\u002Fmain\u002Fkotlin\u002Fcom\u002Fopenai\u002Ferrors\u002FOpenAIInvalidDataException.kt)。\n\n如果你希望在一开始就完全确认响应的类型是否正确，可以调用 `validate()` 方法：\n\n```java\nimport com.openai.models.chat.completions.ChatCompletion;\n\nChatCompletion chatCompletion = client.chat().completions().create(params).validate();\n```\n\n或者通过 `responseValidation` 方法配置每次调用都进行响应验证：\n\n```java\nimport com.openai.core.RequestOptions;\nimport com.openai.models.chat.completions.ChatCompletion;\n\nChatCompletion chatCompletion = client.chat().completions().create(\n  params, RequestOptions.builder().responseValidation(true).build()\n);\n```\n\n又或者在客户端级别为所有方法调用设置默认验证：\n\n```java\nimport com.openai.client.OpenAIClient;\nimport com.openai.client.okhttp.OpenAIOkHttpClient;\n\nOpenAIClient client = OpenAIOkHttpClient.builder()\n    .fromEnv()\n    .responseValidation(true)\n    .build();\n```\n\n## 常见问题解答\n\n### 为什么你们不使用普通的 `enum` 类？\n\nJava 的 `enum` 类并不容易实现向前兼容性（[参见 Stainless 博客文章](https:\u002F\u002Fwww.stainless.com\u002Fblog\u002Fmaking-java-enums-forwards-compatible)）。如果 SDK 中使用了 `enum`，当 API 更新并返回新的枚举值时，可能会导致运行时异常。\n\n### 为什么你们使用 `JsonField\u003CT>` 来表示字段，而不是直接使用普通的 `T` 呢？\n\n使用 `JsonField\u003CT>` 可以实现以下几点功能：\n\n- 允许使用 [未文档化的 API 功能](#undocumented-api-functionality)\n- 按需对 API 响应是否符合预期结构进行 [验证](#response-validation)\n- 区分缺失值与显式为 `null` 的值\n\n### 为什么不使用 Kotlin 的 [`data` 类](https:\u002F\u002Fkotlinlang.org\u002Fdocs\u002Fdata-classes.html) 呢？\n\n向 `data` 类中添加新字段并不具备 [向后兼容性](https:\u002F\u002Fkotlinlang.org\u002Fdocs\u002Fapi-guidelines-backward-compatibility.html#avoid-using-data-classes-in-your-api)，而我们不希望每次向类中添加字段时都引入破坏性变更。\n\n### 为什么不用受检异常呢？\n\n在 Java 编程语言中，受检异常被广泛认为是一种设计上的失误。事实上，Kotlin 也正因为这个原因而没有引入受检异常。\n\n受检异常存在以下问题：\n\n- 处理起来过于冗长\n- 容易导致错误处理发生在不恰当的抽象层次上，在该层次上往往无法采取任何措施来解决错误\n- 由于 [函数着色问题](https:\u002F\u002Fjournal.stuffwithstuff.com\u002F2015\u002F02\u002F01\u002Fwhat-color-is-your-function)，传播受检异常非常繁琐\n- 与 Lambda 表达式配合不佳（同样是因为函数着色问题）\n\n## 语义化版本控制\n\n本包通常遵循 
[SemVer](https:\u002F\u002Fsemver.org\u002Fspec\u002Fv2.0.0.html) 规范，不过某些不兼容的变更可能会以次版本号的形式发布：\n\n1. 对库内部实现的更改，这些实现虽然技术上是公开的，但并非面向外部使用或未被文档化。（如果您依赖于此类内部实现，请务必提交一个 GitHub 问题告知我们。）\n2. 我们预计在实际应用中不会影响绝大多数用户的变更。\n\n我们非常重视向后兼容性，并会尽力确保您能够获得顺畅的升级体验。\n\n我们非常欢迎您的反馈，请随时在 [GitHub 仓库的问题页面](https:\u002F\u002Fwww.github.com\u002Fopenai\u002Fopenai-java\u002Fissues) 上提交问题、报告 Bug 或提出建议。","# OpenAI Java SDK 快速上手指南\n\n## 环境准备\n\n- **JDK 版本**：Java 8 或更高版本\n- **构建工具**：Maven 或 Gradle\n- **API 密钥**：需提前在 [OpenAI 平台](https:\u002F\u002Fplatform.openai.com) 获取 `OPENAI_API_KEY`\n\n## 安装步骤\n\n### Maven 项目\n\n在 `pom.xml` 中添加依赖：\n\n```xml\n\u003Cdependency>\n  \u003CgroupId>com.openai\u003C\u002FgroupId>\n  \u003CartifactId>openai-java\u003C\u002FartifactId>\n  \u003Cversion>4.32.0\u003C\u002Fversion>\n\u003C\u002Fdependency>\n```\n\n### Gradle 项目\n\n在 `build.gradle.kts` 中添加：\n\n```kotlin\nimplementation(\"com.openai:openai-java:4.32.0\")\n```\n\n> 💡 国内开发者如遇下载缓慢，可配置阿里云 Maven 镜像加速依赖下载。\n\n## 基本使用\n\n### 1. 配置客户端\n\n通过环境变量自动加载配置（推荐）：\n\n```java\nimport com.openai.client.OpenAIClient;\nimport com.openai.client.okhttp.OpenAIOkHttpClient;\n\n\u002F\u002F 需预先设置环境变量：OPENAI_API_KEY\nOpenAIClient client = OpenAIOkHttpClient.fromEnv();\n```\n\n或手动指定 API Key：\n\n```java\nOpenAIClient client = OpenAIOkHttpClient.builder()\n    .apiKey(\"sk-你的 API 密钥\")\n    .build();\n```\n\n### 2. 调用聊天接口\n\n```java\nimport com.openai.models.ChatModel;\nimport com.openai.models.chat.completions.ChatCompletion;\nimport com.openai.models.chat.completions.ChatCompletionCreateParams;\n\nChatCompletionCreateParams params = ChatCompletionCreateParams.builder()\n    .addUserMessage(\"你好，请介绍一下你自己\")\n    .model(ChatModel.GPT_4O)\n    .build();\n\nChatCompletion response = client.chat().completions().create(params);\nSystem.out.println(response.choices().get(0).message().content());\n```\n\n### 3. 
异步调用（可选）\n\n```java\nimport java.util.concurrent.CompletableFuture;\n\nimport com.openai.models.chat.completions.ChatCompletion;\n\nCompletableFuture\u003CChatCompletion> future =\n    client.async().chat().completions().create(params);\n\n\u002F\u002F content() 返回 Optional\u003CString>，使用 ifPresent 输出实际文本\nfuture.thenAccept(resp ->\n    resp.choices().get(0).message().content().ifPresent(System.out::println)\n);\n```\n\n> ⚠️ 提示：建议在应用生命周期内复用同一个客户端实例，以充分利用连接池和线程池资源。","某金融科技公司后端团队正在构建一个基于 Java 的实时智能客服系统，需要让系统能够理解用户复杂的理财咨询并生成专业回复。\n\n### 没有 openai-java 时\n- 开发人员必须手动拼接繁琐的 HTTP REST 请求报文，处理 JSON 序列化与反序列化极易出错，代码冗余且难以维护。\n- 每次调用 API 都需要重复编写鉴权逻辑（如 Header 设置、密钥管理），缺乏统一的安全配置入口，存在密钥泄露风险。\n- 面对 OpenAI 复杂的响应结构（如流式输出、错误码解析），需自行编写大量底层解析代码，导致新功能开发周期被严重拉长。\n- 缺乏类型安全支持，参数传递依赖字符串硬编码，编译期无法发现模型名称或参数类型的错误，线上故障排查困难。\n\n### 使用 openai-java 后\n- 通过简洁的 Builder 模式构建请求（如 `ChatCompletionCreateParams.builder()`），SDK 自动处理底层 HTTP 通信与 JSON 转换，代码量减少 70%。\n- 利用 `OpenAIOkHttpClient.fromEnv()` 一键加载环境变量完成鉴权配置，统一管理 API Key 与组织 ID，显著提升系统安全性。\n- 直接调用强类型的 `client.chat().completions().create()` 方法获取结构化对象，内置完善的异常处理机制，快速实现流式回复功能。\n- 享受完整的 IDE 智能提示与编译期检查，模型版本（如 `ChatModel.GPT_5_2`）和参数校验在编码阶段即可完成，大幅降低运行时错误。\n\nopenai-java 将复杂的 API 交互转化为优雅的 Java 原生体验，让开发团队能专注于业务逻辑创新而非底层通信细节。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopenai_openai-java_823e5c32.png","openai","OpenAI","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fopenai_1960bbf4.png","",null,"https:\u002F\u002Fopenai.com\u002F","https:\u002F\u002Fgithub.com\u002Fopenai",[81,85,89,93],{"name":82,"color":83,"percentage":84},"Kotlin","#A97BFF",99.6,{"name":86,"color":87,"percentage":88},"Java","#b07219",0.3,{"name":90,"color":91,"percentage":92},"Shell","#89e051",0,{"name":94,"color":95,"percentage":92},"Dockerfile","#384d54",1426,219,"2026-04-18T13:00:04","Apache-2.0","未说明",{"notes":102,"python":103,"dependencies":104},"该工具是 Java SDK，非 Python 库，因此无 Python 版本及 GPU 显存需求。运行环境需安装 Java 8 或更高版本。支持通过 Maven 或 Gradle 引入。配置可通过环境变量（如 OPENAI_API_KEY）或系统属性完成。支持同步和异步调用，以及 Kubernetes、Azure、GCP 的工作负载身份认证。","不适用",[105,106],"Java 8+","OkHttp 
(内置)",[35,14],"2026-03-27T02:49:30.150509","2026-04-20T04:04:12.008216",[111,116,120,125,130,135],{"id":112,"question_zh":113,"answer_zh":114,"source_url":115},43373,"为什么 `ChatCompletionResults.choice.message()` 会抛出 `OpenAIInvalidDataException`，提示 message 无效？","这通常是由于项目中使用的 Jackson 版本过旧导致的反序列化问题。该问题已在 openai-java v0.37.0 中修复，该版本增强了对旧版 Jackson（低至 v2.13.4）的兼容性。请升级您的 SDK 版本来解决此异常。","https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fissues\u002F301",{"id":117,"question_zh":118,"answer_zh":119,"source_url":115},43368,"使用旧版本的 Jackson 库时，为什么会出现反序列化失败或 `InvalidDefinitionException` 错误？","从 v0.37.0 版本开始，SDK 已针对 Jackson v2.13.4 进行了编译和测试，以确保兼容旧版本。虽然 SDK 的 POM 文件中声明依赖的是较新的 Jackson v2.18.1（以便未显式依赖 Jackson 的用户能获得最新版本），但内部实现已调整以支持旧版本。如果您遇到此类问题，请确保升级到 openai-java v0.37.0 或更高版本。",{"id":121,"question_zh":122,"answer_zh":123,"source_url":124},43369,"如何传递原始的 JSON Schema 字符串进行函数调用或结构化输出，而不使用注解类或 Builder 模式？","您可以直接将 `JsonValue` 传递给大多数设置器（如 `FunctionDefinition#parameters` 或 `JsonSchema#schema`）。若要从字符串转换，可以使用以下代码：\n`ObjectMappers.jsonMapper().readValue(yourJsonString, JsonValue.class)`。\n此外，v2.5.0 版本已支持自动生成功能调用参数的 JSON Schema。","https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fissues\u002F139",{"id":126,"question_zh":127,"answer_zh":128,"source_url":129},43370,"如何在非 Azure 的 URL 中添加模型名称路径段（例如通过内部 APIM 网关访问）？","您可以使用 `withOptions` 方法来自定义请求的最终 URL。这允许您在构建 HTTP 客户端时动态调整路径，而无需硬编码基础 URL 或使用 Azure 特定的逻辑。如果在配置后仍遇到问题，建议提交新的 Issue 并提供具体细节。","https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fissues\u002F473",{"id":131,"question_zh":132,"answer_zh":133,"source_url":134},43371,"Builder 模式在某些情况下显得过于复杂（例如需要链式调用 `.builder().build()`），是否有简化方法？","在 v0.18.0 版本发布后，Builder 的使用已经变得更加简洁。对于某些特定场景（如 `responseFormat` 有多种类型），目前仍需使用 `.responseFormat(ResponseFormatText.builder().build())` 的形式，因为无法设置单一的常量默认值。维护者正在考虑为常用场景添加简写方法（例如 `ChatCompletionCreateParams.builder().responseFormatText()`），建议查阅最新的 README 
和示例代码以获取最佳实践。","https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fissues\u002F53",{"id":136,"question_zh":137,"answer_zh":138,"source_url":124},43372,"在哪里可以找到关于函数调用（Function Calling）和结构化输出（Structured Outputs）的具体代码示例？","官方仓库中提供了详细的示例代码，您可以参考以下文件：\n1. 函数调用原始示例：`openai-java-example\u002Fsrc\u002Fmain\u002Fjava\u002Fcom\u002Fopenai\u002Fexample\u002FFunctionCallingRawExample.java`\n2. 结构化输出原始示例：`openai-java-example\u002Fsrc\u002Fmain\u002Fjava\u002Fcom\u002Fopenai\u002Fexample\u002FStructuredOutputsRawExample.java`\n这些示例展示了如何手动构建 Schema 以及如何处理相关参数。",[140,145,150,155,160,165,170,175,180,185,190,195,200,205,210,215,220,225,230,235],{"id":141,"version":142,"summary_zh":143,"released_at":144},343019,"v4.32.0","## 4.32.0 (2026-04-16)\n\n完整变更日志：[v4.31.0...v4.32.0](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcompare\u002Fv4.31.0...v4.32.0)\n\n### 功能\n\n* **api:** 为 InputFileContent 添加详细信息 ([0c8418c](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002F0c8418cb584103e273b12f70b1cf535364d44bfd))\n* **api:** 添加 OAuthErrorCode 类型 ([09b41e1](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002F09b41e1d925371b6b6c94b8a147dce6264be6120))\n\n\n### 文档\n\n* 改进示例 ([717a8d5](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002F717a8d5cc331c2746ed1548ef89418f70707d23a))","2026-04-16T19:30:48",{"id":146,"version":147,"summary_zh":148,"released_at":149},343020,"v4.31.0","## 4.31.0 (2026-04-08)\n\n完整变更日志：[v4.30.0...v4.31.0](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcompare\u002Fv4.30.0...v4.31.0)\n\n### 功能\n\n* **api:** 在对话消息中添加 phase 字段 ([e562a17](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002Fe562a1701cfb7c92aa308d80d7bfe8e99c6394c8))\n* **api:** 将 WEB_SEARCH_CALL_RESULTS 添加到 ResponseIncludable 枚举中 ([eda0a61](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002Feda0a61f837af3a0aba2851a23320aadfce82979))\n* **client:** 
增加对短期令牌的支持 ([#1185](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fissues\u002F1185)) ([40e729d](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002F40e729ddc83f37dfe6429d257072cd39396fb104))\n\n\n### 错误修复\n\n* **api:** 从 ResponseIncludable 中移除 web_search_call.results ([936b2ab](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002F936b2ab08e8641bd1d6a1b2432140762185e22eb))\n\n\n### 日常维护\n\n* **internal:** 更新多部分表单数组序列化 ([240ca42](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002F240ca42401366b75cf70b0eb0ff9f63425c9b358))\n* **tests:** 将 steady 升级至 v0.20.1 ([a3c95b0](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002Fa3c95b073d7b10a13ce83865947ab4fbec5a95d0))\n* **tests:** 将 steady 升级至 v0.20.2 ([78b1d56](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002F78b1d5629bf7d31173ec1cec50f3191959cdba9d))\n\n\n### 文档\n\n* **api:** 澄清向量存储文件和文件批次中的 file_batches 使用方法 ([9c56841](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002F9c56841ff2f052a7e54290b36c3f74528be3de19))\n* 修复 README 中的函数参数拼写错误 ([#713](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fissues\u002F713)) ([36c4888](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002F36c48883fa898da1b41ccc03fe1682934079923c))","2026-04-08T21:06:13",{"id":151,"version":152,"summary_zh":153,"released_at":154},343021,"v4.30.0","## 4.30.0 (2026-03-25)\n\n完整变更日志：[v4.29.1...v4.30.0](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcompare\u002Fv4.29.1...v4.30.0)\n\n### 功能\n\n* **api:** 在 ComputerAction\u002FResponseComputerToolCall 中的计算机操作类型中添加 `keys` 字段 ([67e4a24](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002F67e4a24686d73de70335e169baba3c9b5774cf6d))\n\n\n### 错误修复\n\n* **api:** 将 SDK 响应类型与扩展后的项目模式对齐 
([f05a663](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002Ff05a6636cbf87672a6ba31004b8fdcee124994e1))\n\n\n### 杂项\n\n* **ci:** 跳过仅涉及元数据更改的 lint 检查 ([ed9e951](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002Fed9e95194d3cf44c896f44b6d57f310c12fd39b9))\n* **tests:** 将 steady 升级至 v0.19.7 ([924632a](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002F924632a2c973f7fac6e67cd063f906efbf7148f4))","2026-03-25T22:09:44",{"id":156,"version":157,"summary_zh":158,"released_at":159},343022,"v4.29.1","## 4.29.1（2026-03-23）\n\n完整变更日志：[v4.29.0...v4.29.1](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcompare\u002Fv4.29.0...v4.29.1)\n\n### 错误修复\n\n* **client:** 允许在 `toBuilder()` 中更新影响字段的头部\u002F查询参数 ([fd3b67c](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002Ffd3b67cef9c4457506a76b9e994210e512e0181f))\n* **client:** 移除冗余的 API 密钥覆盖 ([8383a7d](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002F8383a7de659aa6f17e1707614f5a246ced127532))\n* **core:** 格式化 StructuredOutputs ([256718e](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002F256718e5ac3f3bd821d97829c5af09b2db6f113d))\n* **types:** 将 ResponseInputMessageItem 中的 type 字段改为常量 ([a8ae57f](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002Fa8ae57f73a04878013869f75bc331fa60f02b979))\n\n\n### 杂项任务\n\n* **internal:** 更新 .gitignore 文件 ([2663595](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002F26635957cbcfb2438bcece6476eea1e206d54115))\n* **tests:** 将 steady 升级至 v0.19.4 ([f0d4ba8](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002Ff0d4ba8685e6376e72a928da9aaf2bdc9e1655e5))\n* **tests:** 将 steady 升级至 v0.19.5 ([cbd424e](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002Fcbd424e5e2220cc8c88522b9850fe1a923a2e523))\n* **tests:** 将 steady 升级至 v0.19.6 
([28a4c27](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002F28a4c278d4f81e61f732d1576e09b91e85e48ca8))\n\n\n### 重构\n\n* **tests:** 从 prism 切换到 steady ([a8cb9e8](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002Fa8cb9e8c62c492a6aeda6fcdd6e9b09afc4f71fa))","2026-03-23T18:18:57",{"id":161,"version":162,"summary_zh":163,"released_at":164},343023,"v4.29.0","## 4.29.0 (2026-03-17)\n\n完整变更日志：[v4.28.0...v4.29.0](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcompare\u002Fv4.28.0...v4.29.0)\n\n### 功能\n\n* **api:** 新增 5.4 纳米和迷你模型标识符 ([397027a](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002F397027a4117ab49e0a500b8dec8594ad34763011))\n* **api:** 向 NamespaceTool.Tool.Function 添加 defer_loading 字段 ([ff60586](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002Fff60586659f25204a545cb08623fe0a12810cbfa))\n* **api:** 向 ComparisonFilter 添加 IN 和 NIN 过滤类型 ([6d0eac3](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002F6d0eac374f8eab9de341ae1a19c185512614893b))\n* **api:** 在批量创建中新增 v1\u002Fvideos 端点 ([421acd8](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002F421acd884ef736944c5368a7282fda8a890f0aed))\n\n\n### 杂项\n\n* **内部:** 调整 CI 分支配置 ([bfe3f0a](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002Fbfe3f0ac8ac068443a40b4b7b23e7d179e8b4837))\n* **内部:** 更新重试延迟测试 ([dfcccf1](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002Fdfcccf14f4f3a72e85db513955a3ede7bcdb1d6b))","2026-03-17T17:54:14",{"id":166,"version":167,"summary_zh":168,"released_at":169},343024,"v4.28.0","## 4.28.0（2026-03-13）\n\n完整更新日志：[v4.27.0...v4.28.0](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcompare\u002Fv4.27.0...v4.28.0)\n\n### 功能\n\n* **api:** 
自定义语音（[e576130](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002Fe57613014fa561418c77afb9cec87149f012bb67)）","2026-03-13T21:07:36",{"id":171,"version":172,"summary_zh":173,"released_at":174},343025,"v4.27.0","## 4.27.0 (2026-03-13)\n\n完整更新日志：[v4.26.0...v4.27.0](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcompare\u002Fv4.26.0...v4.27.0)\n\n### 功能\n\n* **api:** API 更新 ([ce42160](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002Fce421607a7d839d4477217d7a35abc13ffd1c9fc))\n* **api:** 手动更新 ([39b947b](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002F39b947b75af1dbb9544c6c1db27dba1b995b2b52))\n* **api:** 手动更新 ([7b4fbeb](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002F7b4fbeb15137592fb75fbcd79c1b5f4d28cf7601))\n* **api:** Sora API 改进：角色 API、视频扩展\u002F编辑功能，以及更高分辨率的导出选项。([54bf78f](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002F54bf78f851562794c679eb7126726629ec724314))\n\n\n### 错误修复\n\n* **client:** `Retry-After` 解析错误 ([948b279](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002F948b279b6ccf1e39e201dc193bcfc3b66b4f7359))\n\n\n### 其他工作\n\n* **internal:** 与代码生成相关的更新 ([0ea414d](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002F0ea414d19b78c438fe360f6b73fd42881be37d86))","2026-03-13T19:17:35",{"id":176,"version":177,"summary_zh":178,"released_at":179},343026,"v4.26.0","## 4.26.0 (2026-03-05)\n\n完整变更日志：[v4.25.0...v4.26.0](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcompare\u002Fv4.25.0...v4.26.0)\n\n### 功能\n\n* **api:** GA ComputerTool 现在使用 ComputerTool 类。'computer_use_preview' 工具已被移至 ComputerUsePreview ([a8d8de8](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002Fa8d8de803d9df873eb13ecd912cdea096532a062))\n\n\n### 错误修复\n\n* **api:** 更新结构化响应工具的重载 
([5562fc1](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002F5562fc13a259d567aa03c59f36e74825effa53da))","2026-03-05T23:41:31",{"id":181,"version":182,"summary_zh":183,"released_at":184},343027,"v4.25.0","## 4.25.0 (2026-03-05)\n\n完整变更日志：[v4.24.1...v4.25.0](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcompare\u002Fv4.24.1...v4.25.0)\n\n### 功能\n\n* **api:** gpt-5.4、工具搜索工具以及新的计算机工具 ([18c8870](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002F18c8870b9f16b4fb2d87eb10d73797da3ecc6fdf))\n\n\n### 错误修复\n\n* **api:** 允许响应中包含未知的视频时长参数 ([82d67cb](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002F82d67cb396925dbe221cd9713afc414c0b9406ad))\n\n\n### 杂项\n\n* **内部:** 与代码生成相关的更新 ([42a435d](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002F42a435d6e939b6ecf1dc5866d5fce60f50b8b08b))","2026-03-05T19:07:43",{"id":186,"version":187,"summary_zh":188,"released_at":189},343028,"v4.24.1","## 4.24.1（2026-03-04）\n\n完整变更日志：[v4.24.0...v4.24.1](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcompare\u002Fv4.24.0...v4.24.1)\n\n### 错误修复\n\n* **api:** 内部模式修复 ([b4d6777](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002Fb4d67776443c04d9283e9d349af237d047b9e5c9))\n* **api:** 手动更新 ([95d5732](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002F95d5732a7eaac2ba4708e3d9ab48433f64aa9d13))\n* **java:** 向结构化响应输出包装器添加缺失的阶段委托方法 ([9c61370](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002F9c6137015ca7f95fb1144810986f2097e7f5d0ca))\n\n\n### 杂项任务\n\n* **internal:** 与代码生成相关的更新 ([e5c572f](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002Fe5c572f40fce11d20f24de6afbeef6b69d4bc7bb))\n* **internal:** 减少警告信息 
([445d1ad](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002F445d1ad99e6a7745bf30c4ec4d8f9a9c02e7dab7))","2026-03-04T18:19:58",{"id":191,"version":192,"summary_zh":193,"released_at":194},343029,"v4.24.0","## 4.24.0 (2026-02-24)\n\nFull Changelog: [v4.23.0...v4.24.0](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcompare\u002Fv4.23.0...v4.24.0)\n\n### Features\n\n* **api:** add phase ([28dfb07](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002F28dfb07eea2195b13ab9bf4d47f40e3bbbe386f1))\n* **api:** remove phase from messages and promptCacheKey parameter ([6eb0909](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002F6eb09098f7509d17f83c2b92c86f43ed28d278e9))\n\n\n### Bug Fixes\n\n* **api:** fix phase enum ([9217829](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002F9217829f66334fba3323acd38369a81da39b19d3))\n* **api:** phase docs ([215079e](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002F215079e8e83b527a5480db9ebc7a4c43fbe01490))\n* **api:** readd phase ([9d234b2](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002F9d234b208b5e77b57665589a8881c8b4c8f3372e))\n\n\n### Chores\n\n* **internal:** expand imports ([73adcb9](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002F73adcb9d348bc2e51c98c4535f7f2373fcf782c9))","2026-02-24T20:23:52",{"id":196,"version":197,"summary_zh":198,"released_at":199},343030,"v4.23.0","## 4.23.0 (2026-02-23)\n\nFull Changelog: [v4.22.0...v4.23.0](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcompare\u002Fv4.22.0...v4.23.0)\n\n### Features\n\n* **api:** add gpt-realtime-1.5 and gpt-audio-1.5 models to realtime session ([b00b515](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002Fb00b515f591c8651ab4d5d46606e31a69787eebf))\n\n\n### Bug Fixes\n\n* **api:** manual updates 
([bd83804](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002Fbd83804f76f2be62a2a09e4b9f2ed7832fdb740f))\n* set Accept header in more places ([9e2e714](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002F9e2e7145e357dfe6e8b3f8701df70a42e07a4f63))\n* **tests:** skip unsupported streaming tests ([24f9854](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002F24f9854de6299572a6b836059855218f68f2e412))\n\n\n### Chores\n\n* drop apache dependency ([37805bf](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002F37805bf5dad1f503cec009dfaca6c6b2b21fc279))\n* make `Properties` more resilient to `null` ([0039eb0](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002F0039eb008840c33d9f62d44999e4b4533a3a50d2))\n\n\n### Documentation\n\n* **api:** add batch size limit to file batch create parameters ([764fd86](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002F764fd868ab4d0f9642ae2facb6ca98ef3a491888))\n* **api:** clarify safety_identifier max length in chat completions and responses ([b2735b0](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002Fb2735b002e3591513ea743062e36d6f965f1fd46))","2026-02-23T23:37:02",{"id":201,"version":202,"summary_zh":203,"released_at":204},343031,"v4.22.0","## 4.22.0 (2026-02-19)\n\nFull Changelog: [v4.21.0...v4.22.0](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcompare\u002Fv4.21.0...v4.22.0)\n\n### Features\n\n* **client:** add connection pooling option ([6b5fd77](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002F6b5fd77b364bd494b5630805f0cd62d5a381eceb))\n\n\n### Chores\n\n* **internal:** make `OkHttp` constructor internal ([a1e3ca6](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002Fa1e3ca67621046ae0e2abe41e4a3f17c573c7ac1))\n* **internal:** remove unnecessary base URL 
([360edde](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002F360edde0e5ea5d0ee30c926905a950db3e14b6fa))\n* **internal:** update `TestServerExtension` comment ([41182d1](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002F41182d164e3739af338ad8f6d17955792c3a48a5))\n* **internal:** update CI step name ([eeb51c6](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002Feeb51c61925e9e49ce1b2994f069d89068028e00))\n\n\n### Documentation\n\n* **api:** update docstrings across audio, chatkit, skills, videos, and other endpoints ([f5018f5](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002Ff5018f5bfc839b22397d184c18fc5a359fabd430))","2026-02-19T16:57:46",{"id":206,"version":207,"summary_zh":208,"released_at":209},343032,"v4.21.0","## 4.21.0 (2026-02-13)\n\nFull Changelog: [v4.20.0...v4.21.0](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcompare\u002Fv4.20.0...v4.21.0)\n\n### Features\n\n* **api:** container network_policy and skills ([c248c52](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002Fc248c522e1ef826aa28c1176dee234ba64bc0550))\n\n\n### Documentation\n\n* update comment ([6a1a02b](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002F6a1a02b54a87dc5ad846086f4ca0eda047928720))","2026-02-14T00:35:24",{"id":211,"version":212,"summary_zh":213,"released_at":214},343033,"v4.20.0","## 4.20.0 (2026-02-10)\n\nFull Changelog: [v4.19.0...v4.20.0](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcompare\u002Fv4.19.0...v4.20.0)\n\n### Features\n\n* **api:** skills and hosted shell ([fcd724d](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002Ffcd724d2822b755ef44b9de8f9b5573d82271ce7))\n* **api:** support for images in batch api 
([c73908a](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002Fc73908a19f26fa12350f7a5880401bd260420216))","2026-02-10T19:06:57",{"id":216,"version":217,"summary_zh":218,"released_at":219},343034,"v4.19.0","## 4.19.0 (2026-02-09)\n\nFull Changelog: [v4.18.0...v4.19.0](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcompare\u002Fv4.18.0...v4.19.0)\n\n### Features\n\n* **api:** add context_management to responses ([da0fb59](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002Fda0fb59b0e0362ee68353829c97ac8b2944cd49b))\n* **api:** add webhook signature verification ([1823eca](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002F1823ecab53fd72ef7f7fdc7776e6ecd631307f7c))\n* **api:** responses context_management ([c0f2cd1](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002Fc0f2cd1c854d6fca90c9000de46f042e64a8cbf5))\n\n\n### Chores\n\n* **internal:** upgrade AssertJ ([5c01787](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002F5c017872b071272eff021318dab9a4a774cfcd4c))","2026-02-09T21:40:46",{"id":221,"version":222,"summary_zh":223,"released_at":224},343035,"v4.18.0","## 4.18.0 (2026-02-05)\n\nFull Changelog: [v4.17.0...v4.18.0](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcompare\u002Fv4.17.0...v4.18.0)\n\n### Features\n\n* **api:** image generation actions for responses; ResponseFunctionCallArgumentsDoneEvent.name ([a0cc1d8](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002Fa0cc1d81ca67e8618f51df4359b2877d27cba572))\n\n\n### Bug Fixes\n\n* **client:** undo change to web search Find action ([7b2ebe5](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002F7b2ebe54b58b97fbc9ad1bba5cb54cb346606483))\n* **client:** update type for `find_in_page` action 
([2cde783](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002F2cde783b5d33d7498548074ad45ee04d4d9d7c05))\n\n\n### Chores\n\n* **internal:** allow passing args to `.\u002Fscripts\u002Ftest` ([ce2c0ed](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002Fce2c0edb51c10779f3c7bb68aef8666057d4e400))","2026-02-05T16:28:29",{"id":226,"version":227,"summary_zh":228,"released_at":229},343036,"v4.17.0","## 4.17.0 (2026-01-30)\n\nFull Changelog: [v4.16.1...v4.17.0](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcompare\u002Fv4.16.1...v4.17.0)\n\n### Features\n\n* **api:** add shell_call_output status field ([1e8a078](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002F1e8a0786f54b7e618a93ce1eb7a42a9b851d1ad0))\n* **api:** api updates ([23a49f6](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002F23a49f68f655658c07bdd98f3d603d2aaf8f5812))\n\n\n### Bug Fixes\n\n* **api:** mark assistants as deprecated ([a21625e](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002Fa21625e3378eb635a6045e59746a62ffc846ee6f))\n\n\n### Chores\n\n* **ci:** upgrade `actions\u002Fgithub-script` ([ddbc5a1](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002Fddbc5a161dd5b1760eae9f70e149b4360f1b9203))\n* **client:** improve example values ([f5941b5](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002Ff5941b53a08e457368895735cc3f265fa09415cb))","2026-01-30T15:53:38",{"id":231,"version":232,"summary_zh":233,"released_at":234},343037,"v4.16.1","## 4.16.1 (2026-01-23)\n\nFull Changelog: [v4.16.0...v4.16.1](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcompare\u002Fv4.16.0...v4.16.1)\n\n### Bug Fixes\n\n* **client:** preserve time zone in lenient date-time parsing ([2dcc893](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002F2dcc893a155d1170f361ee7a64f79f75f88a10a1))\n\n\n### Chores\n\n* 
**internal:** correct cache invalidation for `SKIP_MOCK_TESTS` ([4f7b317](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002F4f7b317dcfb66644e58be5d056e058e8c8cb866a))\n\n\n### Documentation\n\n* add comment for arbitrary value fields ([2d87940](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002F2d87940031547ea6fe48a75f266f560297e210b9))","2026-01-23T03:16:33",{"id":236,"version":237,"summary_zh":238,"released_at":239},343038,"v4.16.0","## 4.16.0 (2026-01-21)\n\nFull Changelog: [v4.15.0...v4.16.0](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcompare\u002Fv4.15.0...v4.16.0)\n\n### Features\n\n* **api:** api update ([e5203e2](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002Fe5203e29d3a2c8e74c976ecf3e1db93102953870))\n* **client:** send `X-Stainless-Kotlin-Version` header ([d77a171](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002Fd77a1713511e9b30abd2df66a3ecc95ad811e276))\n\n\n### Bug Fixes\n\n* **client:** disallow coercion from float to int ([4332495](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002F4332495f9d237a854ef0fc64bfa523a342eb7d98))\n* **client:** fully respect max retries ([b2ac5ce](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002Fb2ac5cecb0819e6e5165f727cc76551654c0d680))\n* **client:** send retry count header for max retries 0 ([b2ac5ce](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002Fb2ac5cecb0819e6e5165f727cc76551654c0d680))\n* date time deserialization leniency ([35a4662](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002F35a4662ed2657f9a0b22487d1c5c4da8f27f0f96))\n* make ResponseAccumulator forwards compatible with new event types ([d9dc902](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002Fd9dc902beb83a9b3ec8e6d6fdc6ec59be580ec39))\n\n\n### Chores\n\n* **ci:** upgrade `actions\u002Fsetup-java` 
([d739c6a](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002Fd739c6ab75bc155fdb44f4a49be8cb61a2807888))\n* **internal:** clean up maven repo artifact script and add html documentation to repo root ([763df3f](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002F763df3fcd6c64ad48f8b1c2391fa22010ebfd9ec))\n* **internal:** depend on packages directly in example ([b2ac5ce](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002Fb2ac5cecb0819e6e5165f727cc76551654c0d680))\n* **internal:** improve maven repo docs ([005acfc](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002F005acfc499a7c2d26926ec95a3f9b2a7ad7cff47))\n* **internal:** support uploading Maven repo artifacts to stainless package server ([24dd88f](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002F24dd88f2fba4594b1c317b9840ca24e5aa6379bb))\n* **internal:** update `actions\u002Fcheckout` version ([64b074f](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002F64b074fe39e628ba11b4d36e182b4156e656566f))\n* **internal:** update maven repo doc to include authentication ([c00b703](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002Fc00b70307b9bdd6a9f94cf63551a15352b58352e))\n* test on Jackson 2.14.0 to avoid encountering [FasterXML\u002Fjackson-databind#3240](https:\u002F\u002Fgithub.com\u002FFasterXML\u002Fjackson-databind\u002Fissues\u002F3240) in tests ([35a4662](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-java\u002Fcommit\u002F35a4662ed2657f9a0b22487d1c5c4da8f27f0f96))","2026-01-21T22:29:50"]