[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-MacPaw--OpenAI":3,"tool-MacPaw--OpenAI":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 
多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":79,"owner_email":79,"owner_twitter":80,"owner_website":81,"owner_url":82,"languages":83,"stars":88,"forks":89,"last_commit_at":90,"license":91,"difficulty_score":23,"env_os":92,"env_gpu":93,"env_ram":93,"env_deps":94,"category_tags":97,"github_topics":98,"view_count":10,"oss_zip_url":79,"oss_zip_packed_at":79,"status":16,"created_at":105,"updated_at":106,"faqs":107,"releases":133},765,"MacPaw\u002FOpenAI","OpenAI","Swift community driven package for OpenAI public API","OpenAI 是一款面向 Swift 开发者的开源库，专注于简化人工智能服务的接入流程。它封装了 OpenAI 官方公共 API，让 iOS 和 macOS 工程师无需编写复杂的底层网络代码，即可轻松调用大语言模型、图像生成及语音处理等功能。\n\n这一方案有效解决了原生集成 AI 能力时的兼容性与开发效率难题。通过 Swift Package Manager 即可一键集成，开发者能直接使用类型安全的方法调用聊天补全、函数调用、结构化输出以及最新的 Assistants 接口。特别值得一提的是，它支持多供应商适配，不仅能连接 OpenAI，还能灵活切换至 Gemini、DeepSeek 等其他主流模型服务。\n\n无论是构建智能客服、内容创作应用还是探索 AI 新特性，Swift 开发者都能借助 OpenAI 快速落地想法。社区持续维护确保其紧跟官方文档更新，是 Swift 技术栈中不可或缺的智能化增强组件。","# OpenAI\n\n![logo](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMacPaw_OpenAI_readme_be05e2af6383.png)\n\n___\n\n![Swift Workflow](https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Factions\u002Fworkflows\u002Fswift.yml\u002Fbadge.svg)\n[![](https:\u002F\u002Fimg.shields.io\u002Fendpoint?url=https%3A%2F%2Fswiftpackageindex.com%2Fapi%2Fpackages%2FMacPaw%2FOpenAI%2Fbadge%3Ftype%3Dswift-versions)](https:\u002F\u002Fswiftpackageindex.com\u002FMacPaw\u002FOpenAI)\n[![](https:\u002F\u002Fimg.shields.io\u002Fendpoint?url=https%3A%2F%2Fswiftpackageindex.com%2Fapi%2Fpackages%2FMacPaw%2FOpenAI%2Fbadge%3Ftype%3Dplatforms)](https:\u002F\u002Fswiftpackageindex.com\u002FMacPaw\u002FOpenAI)\n[![Twitter](https:\u002F\u002Fimg.shields.io\u002Fstatic\u002Fv1?label=Twitter&message=@MacPaw&color=CA1F67)](https:\u002F\u002Ftwitter.com\u002FMacPaw)\n\nThis repository contains Swift community-maintained implementation over [OpenAI](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002F) public API. 
\n\n- [Installation](#installation)\n    - [Swift Package Manager](#swift-package-manager)\n- [Usage](#usage)\n    - [Initialization](#initialization)\n    - [Using the SDK for other providers except OpenAI](#using-the-sdk-for-other-providers-except-openai)\n    - [Cancelling requests](#cancelling-requests)\n- [Text and prompting](#text-and-prompting)\n    - [Responses](#responses)\n    - [Chat Completions](#chat-completions)\n- [Function calling](#function-calling)\n- [Tools](#tools)\n    - [MCP (Model Context Protocol)](#mcp-model-context-protocol)\n        - [MCP Tool Integration](#mcp-tool-integration)\n- [Images](#images)\n    - [Create Image](#create-image)\n    - [Create Image Edit](#create-image-edit)\n    - [Create Image Variation](#create-image-variation)\n- [Audio](#audio)\n    - [Audio Create Speech](#audio-create-speech)\n    - [Audio Transcriptions](#audio-transcriptions)\n    - [Audio Translations](#audio-translations)\n- [Structured Outputs](#structured-outputs)\n- [Specialized models](#specialized-models)\n    - [Embeddings](#embeddings)\n    - [Moderations](#moderations)\n- [Assistants (Beta)](#assistants)\n    - [Create Assistant](#create-assistant)\n    - [Modify Assistant](#modify-assistant)\n    - [List Assistants](#list-assistants) \n    - [Threads](#threads)\n        - [Create Thread](#create-thread)\n        - [Create and Run Thread](#create-and-run-thread)\n        - [Get Threads Messages](#get-threads-messages)\n        - [Add Message to Thread](#add-message-to-thread)\n    - [Runs](#runs)\n        - [Create Run](#create-run)\n        - [Retrieve Run](#retrieve-run)\n        - [Retrieve Run Steps](#retrieve-run-steps)\n        - [Submit Tool Outputs for Run](#submit-tool-outputs-for-run)\n    - [Files](#files)\n        - [Upload File](#upload-file)\n- [Other APIs](#other-apis)\n    - [Models](#models)\n        - [List Models](#list-models)\n        - [Retrieve Model](#retrieve-model)\n    - [Utilities](#utilities)\n- [Support for other providers: Gemini, DeepSeek, Perplexity, OpenRouter, etc.](#support-for-other-providers)\n- [Example Project](#example-project)\n- [Contribution Guidelines](#contribution-guidelines)\n- [Links](#links)\n- [License](#license)\n## Documentation\n\nThis library implements it's types and methods in close accordance to the REST API documentation, which can be found on [platform.openai.com](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference).\n\n## Installation\n\n### Swift Package Manager\n\nTo integrate OpenAI into your Xcode project using Swift Package Manager:\n\n1.  In Xcode, go to **File > Add Package Dependencies...**\n2.  Enter the repository URL: `https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI.git`\n3.  Choose your desired dependency rule (e.g., \"Up to Next Major Version\").\n\nAlternatively, you can add it directly to your `Package.swift` file:\n\n```swift\ndependencies: [\n    .package(url: \"https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI.git\", branch: \"main\")\n]\n```\n\n## Usage\n\n### Initialization\n\nTo initialize API instance you need to [obtain](https:\u002F\u002Fplatform.openai.com\u002Faccount\u002Fapi-keys) API token from your Open AI organization.\n\n**Remember that your API key is a secret!** Do not share it with others or expose it in any client-side code (browsers, apps). 
Production requests must be routed through your own backend server where your API key can be securely loaded from an environment variable or key management service.\n\n\u003Cimg width=\"1081\" alt=\"company\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMacPaw_OpenAI_readme_d64faaae5739.png\">\n\nOnce you have a token, you can initialize the `OpenAI` class, which is an entry point to the API.\n\n> ⚠️ OpenAI strongly recommends developers of client-side applications proxy requests through a separate backend service to keep their API key safe. API keys can access and manipulate customer billing, usage, and organizational data, so it's a significant risk to [expose](https:\u002F\u002Fnshipster.com\u002Fsecrets\u002F) them.\n\n```swift\nlet openAI = OpenAI(apiToken: \"YOUR_TOKEN_HERE\")\n```\n\nOptionally, you can initialize `OpenAI` with a token, organization identifier, and timeout interval.\n\n```swift\nlet configuration = OpenAI.Configuration(token: \"YOUR_TOKEN_HERE\", organizationIdentifier: \"YOUR_ORGANIZATION_ID_HERE\", timeoutInterval: 60.0)\nlet openAI = OpenAI(configuration: configuration)\n```\n\nSee `OpenAI.Configuration` for more values that can be passed on init for customization, like: `host`, `basePath`, `port`, `scheme` and `customHeaders`.\n\nOnce you possess the token and the instance is initialized, you are ready to make requests.\n\n### Using the SDK for other providers except OpenAI\n\nThis SDK is primarily focused on working with the OpenAI Platform, but it also works with other providers that expose an OpenAI-compatible API.\n\nUse the `.relaxed` parsing option on `Configuration`, or see more details on the topic [here](#support-for-other-providers).\n\n### Cancelling requests\n\nFor Swift Concurrency calls, you can simply cancel the calling task, and the corresponding underlying `URLSessionDataTask` will be cancelled automatically.\n\n```swift\nlet task = Task {\n    do {\n        let chatResult = try await openAIClient.chats(query: .init(messages: [], model: \"asd\"))\n    } catch {\n        \u002F\u002F Handle cancellation or error\n    }\n}\n            \ntask.cancel()\n```\n\n\u003Cdetails>\n\u003Csummary>Cancelling closure-based API calls\u003C\u002Fsummary>\n\nWhen you call any of the closure-based API methods, it returns a discardable `CancellableRequest`. Hold a reference to it to be able to cancel the request later.\n```swift\nlet cancellableRequest = object.chats(query: query, completion: { _ in })\ncancellableRequest.cancel()\n```\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>Cancelling Combine subscriptions\u003C\u002Fsummary>\nIn Combine, use the default cancellation mechanism: just discard the reference to a subscription, or call `cancel()` on it.\n\n```swift\nlet subscription = openAIClient\n    .images(query: query)\n    .sink(receiveCompletion: { completion in }, receiveValue: { imagesResult in })\n    \nsubscription.cancel()\n```\n\u003C\u002Fdetails>\n\n## Text and prompting\n\n### Responses\n\nUse the `responses` variable on `OpenAIProtocol` to call Responses API methods.\n\n```swift\npublic protocol OpenAIProtocol {\n    \u002F\u002F ...\n    var responses: ResponsesEndpointProtocol { get }\n    \u002F\u002F ...\n}\n```\n\nSpecify params by passing a `CreateModelResponseQuery` to a method. 
Get `ResponseObject` or a stream of `ResponseStreamEvent` events in response.\n\n**Example: Generate text from a simple prompt**\n```swift\nlet client: OpenAIProtocol = \u002F* client initialization code *\u002F\n\nlet query = CreateModelResponseQuery(\n    input: .textInput(\"Write a one-sentence bedtime story about a unicorn.\"),\n    model: .gpt4_1\n)\n\nlet response: ResponseObject = try await client.responses.createResponse(query: query)\n\u002F\u002F ...\n```\n\u003Cdetails>\n\u003Csummary>print(response)\u003C\u002Fsummary>\n\n```\nResponseObject(\n  createdAt: 1752146109,\n  error: nil,\n  id: \"resp_686fa0bd8f588198affbbf5a8089e2d208a5f6e2111e31f5\",\n  incompleteDetails: nil,\n  instructions: nil,\n  maxOutputTokens: nil,\n  metadata: [:],\n  model: \"gpt-4.1-2025-04-14\",\n  object: \"response\",\n  output: [\n    OpenAI.OutputItem.outputMessage(\n      OpenAI.Components.Schemas.OutputMessage(\n        id: \"msg_686fa0bee24881988a4d1588d7f65c0408a5f6e2111e31f5\",\n        _type: OpenAI.Components.Schemas.OutputMessage._TypePayload.message,\n        role: OpenAI.Components.Schemas.OutputMessage.RolePayload.assistant,\n        content: [\n          OpenAI.Components.Schemas.OutputContent.OutputTextContent(\n            OpenAI.Components.Schemas.OutputTextContent(\n              _type: OpenAI.Components.Schemas.OutputTextContent._TypePayload.outputText,\n              text: \"Under a sky full of twinkling stars, a gentle unicorn named Luna danced through fields of stardust, spreading sweet dreams to every sleeping child.\",\n              annotations: [],\n              logprobs: Optional([])\n            )\n          )\n        ],\n        status: OpenAI.Components.Schemas.OutputMessage.StatusPayload.completed\n      )\n    )\n  ],\n  parallelToolCalls: true,\n  previousResponseId: nil,\n  reasoning: Optional(\n    OpenAI.Components.Schemas.Reasoning(\n      effort: nil,\n      summary: nil,\n      generateSummary: nil\n    )\n  ),\n  status: \"completed\",\n  temperature: Optional(1.0),\n  text: OpenAI.Components.Schemas.ResponseProperties.TextPayload(\n    format: Optional(\n      OpenAI.Components.Schemas.TextResponseFormatConfiguration.ResponseFormatText(\n        OpenAI.Components.Schemas.ResponseFormatText(\n          _type: OpenAI.Components.Schemas.ResponseFormatText._TypePayload.text\n        )\n      )\n    ),\n    toolChoice: OpenAI.Components.Schemas.ResponseProperties.ToolChoicePayload.ToolChoiceOptions(\n      OpenAI.Components.Schemas.ToolChoiceOptions.auto\n    ),\n    tools: [],\n    topP: Optional(1.0),\n    truncation: Optional(\"disabled\"),\n    usage: Optional(\n      OpenAI.Components.Schemas.ResponseUsage(\n        inputTokens: 18,\n        inputTokensDetails: OpenAI.Components.Schemas.ResponseUsage.InputTokensDetailsPayload(\n          cachedTokens: 0\n        ),\n        outputTokens: 32,\n        outputTokensDetails: OpenAI.Components.Schemas.ResponseUsage.OutputTokensDetailsPayload(\n          reasoningTokens: 0\n        ),\n        totalTokens: 50\n      )\n    ),\n    user: nil\n  )\n)\n````\n\n\u003C\u002Fdetails>\n\nAn array of content generated by the model is in the `output` property of the response.\n\n> [!NOTE] **The `output` array often has more than one item in it!** It can contain tool calls, data about reasoning tokens generated by reasoning models, and other items. 
It is not safe to assume that the model's text output is present at `output[0].content[0].text`.\n\nBecause of the note above, to safely and fully read the response, we'd need to switch over both the messages and their contents, like this:\n\n```swift\n\u002F\u002F ...\nfor output in response.output {\n    switch output {\n    case .outputMessage(let outputMessage):\n        for content in outputMessage.content {\n            switch content {\n            case .OutputTextContent(let textContent):\n                print(textContent.text)\n            case .RefusalContent(let refusalContent):\n                print(refusalContent.refusal)\n            }\n        }\n    default:\n        \u002F\u002F Unhandled output items. Handle or throw an error.\n        break\n    }\n}\n```\n\n### Chat Completions\n\nUse `ChatQuery` with `func chats(query:)` and `func chatsStream(query:)` methods on `OpenAIProtocol` to generate text using the Chat Completions API. Get `ChatResult` or `ChatStreamResult` in response.\n\n**Example: Generate text from a simple prompt**\n\n```swift\nlet query = ChatQuery(\n    messages: [\n        .user(.init(content: .string(\"Who are you?\")))\n    ],\n    model: .gpt4_o\n)\n\nlet result = try await openAI.chats(query: query)\n\nprint(result.choices.first?.message.content ?? \"\")\n\u002F\u002F printed to console:\n\u002F\u002F I'm an AI language model created by OpenAI, designed to assist with a wide range of questions and tasks. How can I help you today?\n```\n\n\u003Cdetails>\n\u003Csummary>po result\u003C\u002Fsummary>\n\n```\n(lldb) po result\n▿ ChatResult\n  - id : \"chatcmpl-BgWJTzbVczdJDusTqVpnR6AQ2w6Fd\"\n  - created : 1749473687\n  - model : \"gpt-4o-2024-08-06\"\n  - object : \"chat.completion\"\n  ▿ serviceTier : Optional\u003CServiceTier>\n    - some : OpenAI.ServiceTier.defaultTier\n  ▿ systemFingerprint : Optional\u003CString>\n    - some : \"fp_07871e2ad8\"\n  ▿ choices : 1 element\n    ▿ 0 : Choice\n      - index : 0\n      - logprobs : nil\n      ▿ message : Message\n        ▿ content : Optional\u003CString>\n          - some : \"I am an AI language model created by OpenAI, known as ChatGPT. I\\'m here to assist with answering questions, providing explanations, and engaging in conversation on a wide range of topics. If you have any questions or need assistance, feel free to ask!\"\n        - refusal : nil\n        - role : \"assistant\"\n        ▿ annotations : Optional\u003CArray\u003CAnnotation>>\n          - some : 0 elements\n        - audio : nil\n        - toolCalls : nil\n        - _reasoning : nil\n        - _reasoningContent : nil\n      - finishReason : \"stop\"\n  ▿ usage : Optional\u003CCompletionUsage>\n    ▿ some : CompletionUsage\n      - completionTokens : 52\n      - promptTokens : 11\n      - totalTokens : 63\n      ▿ promptTokensDetails : Optional\u003CPromptTokensDetails>\n        ▿ some : PromptTokensDetails\n          - audioTokens : 0\n          - cachedTokens : 0\n  - citations : nil\n```\n\n\u003C\u002Fdetails>\n\n
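A minimal streaming sketch using the `chatsStream(query:)` variant named above; it reuses the same `query` and yields `ChatStreamResult` chunks (how you unpack each chunk's deltas is up to your app):\n\n```swift\n\u002F\u002F Sketch: each iteration yields a partial ChatStreamResult chunk.\nfor try await streamResult in openAI.chatsStream(query: query) {\n    \u002F\u002F Inspect streamResult.choices for the incremental delta content.\n    print(streamResult)\n}\n```\n\n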
\u003C!-- ## Images and vision\n\n## Audio and speech\n\n## Structured Outputs -->\n\n## Function calling\n\nSee [OpenAI Platform Guide: Function calling](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Ffunction-calling?api-mode=responses) for more details.\n\n\u003Cdetails>\n\n\u003Csummary>Chat Completions API Examples\u003C\u002Fsummary>\n\n### Function calling with get_weather function\n\n```swift\nlet openAI = OpenAI(apiToken: \"...\")\n\u002F\u002F Declare functions which the model might decide to call.\nlet functions = [\n    ChatQuery.ChatCompletionToolParam.FunctionDefinition(\n        name: \"get_weather\",\n        description: \"Get current temperature for a given location.\",\n        parameters: .init(fields: [\n            .type(.object),\n            .properties([\n                \"location\": .init(fields: [\n                    .type(.string),\n                    .description(\"City and country e.g. Bogotá, Colombia\")\n                ])\n            ]),\n            .required([\"location\"]),\n            .additionalProperties(.boolean(false))\n        ])\n    )\n]\nlet query = ChatQuery(\n    messages: [\n        .user(.init(content: .string(\"What is the weather like in Paris today?\")))\n    ],\n    model: .gpt4_1,\n    tools: functions.map { .init(function: $0) }\n)\nlet result = try await openAI.chats(query: query)\nprint(result.choices[0].message.toolCalls)\n```\n\nResult will be (serialized as JSON here for readability):\n```json\n{\n  \"id\": \"chatcmpl-1234\",\n  \"object\": \"chat.completion\",\n  \"created\": 1686000000,\n  \"model\": \"gpt-4.1\",\n  \"choices\": [\n    {\n      \"index\": 0,\n      \"message\": {\n        \"role\": \"assistant\",\n        \"tool_calls\": [\n          {\n            \"id\": \"call-0\",\n            \"type\": \"function\",\n            \"function\": {\n              \"name\": \"get_weather\",\n              \"arguments\": \"{\\n  \\\"location\\\": \\\"Paris, France\\\"\\n}\"\n            }\n          }\n        ]\n      },\n      \"finish_reason\": \"tool_calls\"\n    }\n  ],\n  \"usage\": { \"total_tokens\": 100, \"completion_tokens\": 18, \"prompt_tokens\": 82 }\n}\n\n```\n\n\u003C\u002Fdetails>\n\n## Images\n\nGiven a prompt and\u002For an input image, the model will generate a new image.\n\nAs Artificial Intelligence continues to develop, so too does the intriguing concept of Dall-E. Developed by OpenAI, an artificial intelligence research lab, Dall-E has been classified as an AI system that can generate images based on descriptions provided by humans. 
With its potential applications spanning from animation and illustration to design and engineering - not to mention the endless possibilities in between - it's easy to see why there is such excitement over this new technology.\n\n### Create Image\n\n**Request**\n\n```swift\nstruct ImagesQuery: Codable {\n    \u002F\u002F\u002F A text description of the desired image(s). The maximum length is 1000 characters.\n    public let prompt: String\n    \u002F\u002F\u002F The number of images to generate. Must be between 1 and 10.\n    public let n: Int?\n    \u002F\u002F\u002F The size of the generated images. Must be one of 256x256, 512x512, or 1024x1024.\n    public let size: String?\n}\n```\n\n**Response**\n\n```swift\nstruct ImagesResult: Codable, Equatable {\n    public struct URLResult: Codable, Equatable {\n        public let url: String\n    }\n    public let created: TimeInterval\n    public let data: [URLResult]\n}\n```\n\n**Example**\n\n```swift\nlet query = ImagesQuery(prompt: \"White cat with heterochromia sitting on the kitchen table\", n: 1, size: \"1024x1024\")\nopenAI.images(query: query) { result in\n  \u002F\u002FHandle result here\n}\n\u002F\u002For\nlet result = try await openAI.images(query: query)\n```\n\n```\n(lldb) po result\n▿ ImagesResult\n  - created : 1671453505.0\n  ▿ data : 1 element\n    ▿ 0 : URLResult\n      - url : \"https:\u002F\u002Foaidalleapiprodscus.blob.core.windows.net\u002Fprivate\u002Forg-CWjU5cDIzgCcVjq10pp5yX5Q\u002Fuser-GoBXgChvLBqLHdBiMJBUbPqF\u002Fimg-WZVUK2dOD4HKbKwW1NeMJHBd.png?st=2022-12-19T11%3A38%3A25Z&se=2022-12-19T13%3A38%3A25Z&sp=r&sv=2021-08-06&sr=b&rscd=inline&rsct=image\u002Fpng&skoid=6aaadede-4fb3-4698-a8f6-684d7786b067&sktid=a48cca56-e6da-484e-a814-9c849652bcb3&skt=2022-12-19T09%3A35%3A16Z&ske=2022-12-20T09%3A35%3A16Z&sks=b&skv=2021-08-06&sig=mh52rmtbQ8CXArv5bMaU6lhgZHFBZz\u002FePr4y%2BJwLKOc%3D\"\n ```\n\n**Generated image**\n\n![Generated Image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMacPaw_OpenAI_readme_a23a3183ffdd.png)\n\n### Create Image Edit\n\nCreates an edited or extended image given an original image and a prompt.\n\n**Request**\n\n```swift\npublic struct ImageEditsQuery: Codable {\n    \u002F\u002F\u002F The image to edit. Must be a valid PNG file, less than 4MB, and square. If mask is not provided, image must have transparency, which will be used as the mask.\n    public let image: Data\n    public let fileName: String\n    \u002F\u002F\u002F An additional image whose fully transparent areas (e.g. where alpha is zero) indicate where image should be edited. Must be a valid PNG file, less than 4MB, and have the same dimensions as image.\n    public let mask: Data?\n    public let maskFileName: String?\n    \u002F\u002F\u002F A text description of the desired image(s). The maximum length is 1000 characters.\n    public let prompt: String\n    \u002F\u002F\u002F The number of images to generate. Must be between 1 and 10.\n    public let n: Int?\n    \u002F\u002F\u002F The size of the generated images. 
Must be one of 256x256, 512x512, or 1024x1024.\n    public let size: String?\n}\n```\n\n**Response**\n\nUses the ImagesResult response similarly to ImagesQuery.\n\n**Example**\n\n```swift\nlet data = image.pngData()\nlet query = ImageEditQuery(image: data, fileName: \"whitecat.png\", prompt: \"White cat with heterochromia sitting on the kitchen table with a bowl of food\", n: 1, size: \"1024x1024\")\nopenAI.imageEdits(query: query) { result in\n  \u002F\u002FHandle result here\n}\n\u002F\u002For\nlet result = try await openAI.imageEdits(query: query)\n```\n\n### Create Image Variation\n\nCreates a variation of a given image.\n\n**Request**\n\n```swift\npublic struct ImageVariationsQuery: Codable {\n    \u002F\u002F\u002F The image to edit. Must be a valid PNG file, less than 4MB, and square. If mask is not provided, image must have transparency, which will be used as the mask.\n    public let image: Data\n    public let fileName: String\n    \u002F\u002F\u002F The number of images to generate. Must be between 1 and 10.\n    public let n: Int?\n    \u002F\u002F\u002F The size of the generated images. Must be one of 256x256, 512x512, or 1024x1024.\n    public let size: String?\n}\n```\n\n**Response**\n\nUses the ImagesResult response similarly to ImagesQuery.\n\n**Example**\n\n```swift\nlet data = image.pngData()\nlet query = ImageVariationQuery(image: data, fileName: \"whitecat.png\", n: 1, size: \"1024x1024\")\nopenAI.imageVariations(query: query) { result in\n  \u002F\u002FHandle result here\n}\n\u002F\u002For\nlet result = try await openAI.imageVariations(query: query)\n```\n\nReview [Images Documentation](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fimages) for more info.\n\n## Audio\n\nThe speech to text API provides two endpoints, transcriptions and translations, based on our state-of-the-art open source large-v2 [Whisper model](https:\u002F\u002Fopenai.com\u002Fresearch\u002Fwhisper). They can be used to:\n\nTranscribe audio into whatever language the audio is in.\nTranslate and transcribe the audio into english.\nFile uploads are currently limited to 25 MB and the following input file types are supported: mp3, mp4, mpeg, mpga, m4a, wav, and webm.\n\n### Audio Create Speech\n\nThis function sends an `AudioSpeechQuery` to the OpenAI API to create audio speech from text using a specific voice and format. \n\n[Learn more about voices.](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Ftext-to-speech\u002Fvoice-options)  \n[Learn more about models.](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fmodels\u002Ftts)\n\n**Request:**  \n\n```swift\npublic struct AudioSpeechQuery: Codable, Equatable {\n    \u002F\u002F...\n    public let model: Model \u002F\u002F tts-1 or tts-1-hd  \n    public let input: String\n    public let voice: AudioSpeechVoice\n    public let responseFormat: AudioSpeechResponseFormat\n    public let speed: String? 
\u002F\u002F Initializes with Double?\n    \u002F\u002F...\n}\n```\n\n**Response:**\n\n```swift\n\u002F\u002F\u002F Audio data for one of the following formats: `mp3`, `opus`, `aac`, `flac`, `pcm`\npublic let audioData: Data?\n```\n\n**Example:**\n\n```swift\nlet query = AudioSpeechQuery(model: .tts_1, input: \"Hello, world!\", voice: .alloy, responseFormat: .mp3, speed: 1.0)\n\nopenAI.audioCreateSpeech(query: query) { result in\n    \u002F\u002F Handle response here\n}\n\u002F\u002For\nlet result = try await openAI.audioCreateSpeech(query: query)\n```\n[OpenAI Create Speech – Documentation](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Faudio\u002FcreateSpeech)\n\n### Audio Create Speech Streaming\n\nStreaming speech creation is available via the `audioCreateSpeechStream` function. Audio chunks will be sent one-by-one.\n\n**Closures**\n```swift\nopenAI.audioCreateSpeechStream(query: query) { partialResult in\n    switch partialResult {\n    case .success(let result):\n        print(result.audio)\n    case .failure(let error):\n        \u002F\u002FHandle chunk error here\n    }\n} completion: { error in\n    \u002F\u002FHandle streaming error here\n}\n```\n\n**Combine**\n\n```swift\nopenAI\n    .audioCreateSpeechStream(query: query)\n    .sink { completion in\n        \u002F\u002FHandle completion result here\n    } receiveValue: { result in\n        \u002F\u002FHandle chunk here\n    }.store(in: &cancellables)\n```\n\n**Structured concurrency**\n```swift\nfor try await result in openAI.audioCreateSpeechStream(query: query) {\n   \u002F\u002FHandle result here\n}\n```\n\n### Audio Transcriptions\n\nTranscribes audio into the input language.\n\n**Request**\n\n```swift\npublic struct AudioTranscriptionQuery: Codable, Equatable {\n    \n    public let file: Data\n    public let fileName: String\n    public let model: Model\n    \n    public let prompt: String?\n    public let temperature: Double?\n    public let language: String?\n}\n```\n\n**Response**\n\n```swift\npublic struct AudioTranscriptionResult: Codable, Equatable {\n    \n    public let text: String\n}\n```\n\n**Example**\n\n```swift\nlet data = try Data(contentsOf: url)\nlet query = AudioTranscriptionQuery(file: data, fileName: \"audio.m4a\", model: .whisper_1)\n\nopenAI.audioTranscriptions(query: query) { result in\n    \u002F\u002FHandle result here\n}\n\u002F\u002For\nlet result = try await openAI.audioTranscriptions(query: query)\n```\n\n### Audio Translations\n\nTranslates audio into English.\n\n**Request**\n\n```swift\npublic struct AudioTranslationQuery: Codable, Equatable {\n    \n    public let file: Data\n    public let fileName: String\n    public let model: Model\n    \n    public let prompt: String?\n    public let temperature: Double?\n}    \n```\n\n**Response**\n\n```swift\npublic struct AudioTranslationResult: Codable, Equatable {\n    \n    public let text: String\n}\n```\n\n**Example**\n\n```swift\nlet data = try Data(contentsOf: url)\nlet query = AudioTranslationQuery(file: data, fileName: \"audio.m4a\", model: .whisper_1)\n\nopenAI.audioTranslations(query: query) { result in\n    \u002F\u002FHandle result here\n}\n\u002F\u002For\nlet result = try await openAI.audioTranslations(query: query)\n```\n\nReview [Audio Documentation](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Faudio) for more info.\n\n## Structured Outputs\n\n> [!NOTE] This section focuses on non-function calling use cases in the Responses and Chat Completions APIs. 
To learn more about how to use Structured Outputs with function calling, check out the [Function Calling](#function-calling) section.\n\nTo configure structured outputs, define a JSON Schema and pass it to a query.\n\nThis SDK supports multiple ways to define a schema; choose the one you prefer.\n\n\u003Cdetails>\n\n\u003Csummary>JSONSchemaDefinition.jsonSchema\u003C\u002Fsummary>\n\n### Build a schema by specifying fields\n\nThis definition accepts a `JSONSchema`, which is either a `boolean` or an `object` JSON document.\n\nInstead of providing the schema yourself, you can build one in a type-safe manner using initializers that accept `[JSONSchemaField]`, as shown in the example below.\n\nWhile this method of defining a schema is direct, it can be verbose. For alternative ways to define a schema, see the options below.\n\n### Example\n\n```swift\nlet query = CreateModelResponseQuery(\n    input: .textInput(\"Return structured output\"),\n    model: .gpt4_o,\n    text: .jsonSchema(.init(\n        name: \"research_paper_extraction\",\n        schema: .jsonSchema(.init(\n            .type(.object),\n            .properties([\n                \"title\": Schema.buildBlock(\n                    .type(.string)\n                ),\n                \"authors\": .init(\n                    .type(.array),\n                    .items(.init(\n                        .type(.string)\n                    ))\n                ),\n                \"abstract\": .init(\n                    .type(.string)\n                ),\n                \"keywords\": .init(\n                    .type(.array),\n                    .items(.init(\n                        .type(.string))\n                    )\n                )\n            ]),\n            .required([\"title\", \"authors\", \"abstract\", \"keywords\"]),\n            .additionalProperties(.boolean(false))\n        )),\n        description: \"desc\",\n        strict: false\n    ))\n)\n\nlet response = try await openAIClient.responses.createResponse(query: query)\nfor output in response.output {\n    switch output {\n    case .outputMessage(let message):\n        for content in message.content {\n            switch content {\n            case .OutputTextContent(let textContent):\n                print(\"json output structured by the schema: \", textContent.text)\n            case .RefusalContent(let refusal):\n                \u002F\u002F Handle refusal\n                break\n            }\n        }\n    default:\n        \u002F\u002F Handle other OutputItems\n        break\n    }\n}\n```\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\n\u003Csummary>JSONSchemaDefinition.derivedJsonSchema\u003C\u002Fsummary>\n\n### Implement a type that describes a schema\n\nDefine schemas in the style of [Pydantic](https:\u002F\u002Fdocs.pydantic.dev\u002Flatest\u002F) or [Zod](https:\u002F\u002Fzod.dev).\n\n- Use the `derivedJsonSchema(_ type:)` response format when creating a `ChatQuery` or `CreateModelResponseQuery`\n- Provide a type that conforms to `JSONSchemaConvertible` and generates an instance as an example\n- Make sure all enum types within the provided type conform to `JSONSchemaEnumConvertible` and generate an array of names for all cases\n\n### Example\n\n```swift\nstruct MovieInfo: JSONSchemaConvertible {\n    \n    let title: String\n    let director: String\n    let release: Date\n    let genres: [MovieGenre]\n    let cast: [String]\n    \n    static let example: Self = {\n        .init(\n            title: \"Earth\",\n            director: \"Alexander Dovzhenko\",\n            release: 
Calendar.current.date(from: DateComponents(year: 1930, month: 4, day: 1))!,\n            genres: [.drama],\n            cast: [\"Stepan Shkurat\", \"Semyon Svashenko\", \"Yuliya Solntseva\"]\n        )\n    }()\n}\nenum MovieGenre: String, Codable, JSONSchemaEnumConvertible {\n    case action, drama, comedy, scifi\n    \n    var caseNames: [String] { Self.allCases.map { $0.rawValue } }\n}\nlet query = ChatQuery(\n    messages: [\n        .system(\n            .init(content: .textContent(\"Best Picture winner at the 2011 Oscars\"))\n        )\n    ],\n    model: .gpt4_o,\n    responseFormat: .jsonSchema(\n        .init(\n            name: \"movie-info\",\n            description: nil,\n            schema: .derivedJsonSchema(MovieInfo.self),\n            strict: true\n        )\n    )\n)\nlet result = try await openAI.chats(query: query)\n```\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\n\u003Csummary>JSONSchemaDefinition.dynamicJsonSchema\u003C\u002Fsummary>\n\n### Define a schema with an instance of any type that conforms to Encodable\n\nDefine your JSON schema using simple Dictionaries, or specify JSON schema with a library like https:\u002F\u002Fgithub.com\u002Fkevinhermawan\u002Fswift-json-schema.\n\n### Example\n\n```swift\nstruct AnyEncodable: Encodable {\n    private let _encode: (Encoder) throws -> Void\n    public init\u003CT: Encodable>(_ wrapped: T) {\n        _encode = wrapped.encode\n    }\n    func encode(to encoder: Encoder) throws {\n        try _encode(encoder)\n    }\n}\nlet schema = [\n    \"type\": AnyEncodable(\"object\"),\n    \"properties\": AnyEncodable([\n        \"title\": AnyEncodable([\n            \"type\": \"string\"\n        ]),\n        \"director\": AnyEncodable([\n            \"type\": \"string\"\n        ]),\n        \"release\": AnyEncodable([\n            \"type\": \"string\"\n        ]),\n        \"genres\": AnyEncodable([\n            \"type\": AnyEncodable(\"array\"),\n            \"items\": AnyEncodable([\n                \"type\": AnyEncodable(\"string\"),\n                \"enum\": AnyEncodable([\"action\", \"drama\", \"comedy\", \"scifi\"])\n            ])\n        ]),\n        \"cast\": AnyEncodable([\n            \"type\": AnyEncodable(\"array\"),\n            \"items\": AnyEncodable([\n                \"type\": \"string\"\n            ])\n        ])\n    ]),\n    \"additionalProperties\": AnyEncodable(false)\n]\nlet query = ChatQuery(\n    messages: [.system(.init(content: .textContent(\"Return a structured response.\")))],\n    model: .gpt4_o,\n    responseFormat: .jsonSchema(.init(name: \"movie-info\", schema: .dynamicJsonSchema(schema)))\n)\nlet result = try await openAI.chats(query: query)\n```\n\n\u003C\u002Fdetails>\n\nReview [Structured Output Documentation](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Fstructured-outputs) for more info.\n\n## Tools\n### Remote MCP (Model Context Protocol)\n\nThe Model Context Protocol (MCP) enables AI models to securely connect to external data sources and tools through standardized server connections. 
This OpenAI Swift library supports MCP integration, allowing you to extend model capabilities with remote tools and services.\n\nYou can use the [MCP Swift library](https:\u002F\u002Fgithub.com\u002Fmodelcontextprotocol\u002Fswift-sdk) to connect to MCP servers and discover available tools, then integrate those tools with OpenAI's chat completions.\n\n#### MCP Tool Integration\n\n**Request**\n\n```swift\n\u002F\u002F Create an MCP tool for connecting to a remote server\nlet mcpTool = Tool.mcpTool(\n    .init(\n        _type: .mcp,\n        serverLabel: \"GitHub_MCP_Server\",\n        serverUrl: \"https:\u002F\u002Fapi.githubcopilot.com\u002Fmcp\u002F\",\n        headers: .init(additionalProperties: [\n            \"Authorization\": \"Bearer YOUR_TOKEN_HERE\"\n        ]),\n        allowedTools: .case1([\"search_repositories\", \"get_file_contents\"]),\n        requireApproval: .case2(.always)\n    )\n)\n\nlet query = ChatQuery(\n    messages: [\n        .user(.init(content: .string(\"Search for Swift repositories on GitHub\")))\n    ],\n    model: .gpt4_o,\n    tools: [mcpTool]\n)\n```\n\n**MCP Tool Properties**\n\n- `serverLabel`: A unique identifier for the MCP server\n- `serverUrl`: The URL endpoint of the MCP server\n- `headers`: Authentication headers and other HTTP headers required by the server\n- `allowedTools`: Specific tools to enable from the server (optional - if not specified, all tools are available)\n- `requireApproval`: Whether tool calls require user approval (`.always`, `.never`, or conditional)\n\n**Example with MCP Swift Library**\n\n```swift\nimport MCP\nimport OpenAI\n\n\u002F\u002F Connect to MCP server using the MCP Swift library\nlet mcpClient = MCP.Client(name: \"MyApp\", version: \"1.0.0\")\n\nlet transport = HTTPClientTransport(\n    endpoint: URL(string: \"https:\u002F\u002Fapi.githubcopilot.com\u002Fmcp\u002F\")!,\n    configuration: URLSessionConfiguration.default\n)\n\nlet result = try await mcpClient.connect(transport: transport)\nlet toolsResponse = try await mcpClient.listTools()\n\n\u002F\u002F Create OpenAI MCP tool with discovered tools\nlet enabledToolNames = toolsResponse.tools.map { $0.name }\nlet mcpTool = Tool.mcpTool(\n    .init(\n        _type: .mcp,\n        serverLabel: \"GitHub_MCP_Server\",\n        serverUrl: \"https:\u002F\u002Fapi.githubcopilot.com\u002Fmcp\u002F\",\n        headers: .init(additionalProperties: authHeaders),\n        allowedTools: .case1(enabledToolNames),\n        requireApproval: .case2(.always)\n    )\n)\n\n\u002F\u002F Use in chat completion\nlet query = ChatQuery(\n    messages: [.user(.init(content: .string(\"Help me search GitHub repositories\")))],\n    model: .gpt4_o,\n    tools: [mcpTool]\n)\n\nlet chatResult = try await openAI.chats(query: query)\n```\n\n**MCP Tool Call Handling**\n\nWhen using MCP tools, the model may generate tool calls that are executed on the remote MCP server. 
Handle MCP-specific output items in your response processing:\n\n```swift\n\u002F\u002F Handle MCP tool calls in streaming responses\nfor try await result in openAI.chatsStream(query: query) {\n    for choice in result.choices {\n        if let outputItem = choice.delta.content {\n            switch outputItem {\n            case .mcpToolCall(let mcpCall):\n                print(\"MCP tool call: \\(mcpCall.name)\")\n                if let output = mcpCall.output {\n                    print(\"Result: \\(output)\")\n                }\n            case .mcpApprovalRequest(let approvalRequest):\n                \u002F\u002F Handle approval request if requireApproval is enabled\n                print(\"MCP tool requires approval: \\(approvalRequest)\")\n            default:\n                \u002F\u002F Handle other output types\n                break\n            }\n        }\n    }\n}\n```\n\n## Specialized models\n\n### Embeddings\n\nGet a vector representation of a given input that can be easily consumed by machine learning models and algorithms.\n\n**Request**\n\n```swift\nstruct EmbeddingsQuery: Codable {\n    \u002F\u002F\u002F ID of the model to use.\n    public let model: Model\n    \u002F\u002F\u002F Input text to get embeddings for\n    public let input: String\n}\n```\n\n**Response**\n\n```swift\nstruct EmbeddingsResult: Codable, Equatable {\n\n    public struct Embedding: Codable, Equatable {\n\n        public let object: String\n        public let embedding: [Double]\n        public let index: Int\n    }\n    public let data: [Embedding]\n    public let usage: Usage\n}\n```\n\n**Example**\n\n```swift\nlet query = EmbeddingsQuery(model: .textSearchBabbageDoc, input: \"The food was delicious and the waiter...\")\nopenAI.embeddings(query: query) { result in\n  \u002F\u002FHandle response here\n}\n\u002F\u002For\nlet result = try await openAI.embeddings(query: query)\n```\n\n```\n(lldb) po result\n▿ EmbeddingsResult\n  ▿ data : 1 element\n    ▿ 0 : Embedding\n      - object : \"embedding\"\n      ▿ embedding : 2048 elements\n        - 0 : 0.0010535449\n        - 1 : 0.024234328\n        - 2 : -0.0084999\n        - 3 : 0.008647452\n    .......\n        - 2044 : 0.017536353\n        - 2045 : -0.005897616\n        - 2046 : -0.026559394\n        - 2047 : -0.016633155\n      - index : 0\n\n(lldb)\n```\n\nReview [Embeddings Documentation](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fembeddings) for more info.\n\n### Moderations \n\nGiven a input text, outputs if the model classifies it as violating OpenAI's content policy.\n\n**Request**\n\n```swift\npublic struct ModerationsQuery: Codable {\n    \n    public let input: String\n    public let model: Model?\n}    \n```\n\n**Response**\n\n```swift\npublic struct ModerationsResult: Codable, Equatable {\n\n    public let id: String\n    public let model: Model\n    public let results: [CategoryResult]\n}\n```\n\n**Example**\n\n```swift\nlet query = ModerationsQuery(input: \"I want to kill them.\")\nopenAI.moderations(query: query) { result in\n  \u002F\u002FHandle result here\n}\n\u002F\u002For\nlet result = try await openAI.moderations(query: query)\n```\n\nReview [Moderations Documentation](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fmoderations) for more info.\n\n## Other APIs\n\n### Models \n\nModels are represented as a typealias `typealias Model = String`.\n\n```swift\npublic extension Model {\n    static let gpt5_1 = \"gpt-5.1\"\n    static let gpt5_1_chat_latest = 
\"gpt-5.1-chat-latest\"\n\n    static let gpt5 = \"gpt-5\"\n    static let gpt5_mini = \"gpt-5-mini\"\n    static let gpt5_nano = \"gpt-5-nano\"\n    static let gpt5_chat = \"gpt-5-chat\"\n\n    static let gpt4_1 = \"gpt-4.1\"\n    static let gpt4_1_mini = \"gpt-4.1-mini\"\n    static let gpt4_1_nano = \"gpt-4.1-nano\"\n\n    static let gpt4_turbo_preview = \"gpt-4-turbo-preview\"\n    static let gpt4_vision_preview = \"gpt-4-vision-preview\"\n    static let gpt4_0125_preview = \"gpt-4-0125-preview\"\n    static let gpt4_1106_preview = \"gpt-4-1106-preview\"\n    static let gpt4 = \"gpt-4\"\n    static let gpt4_0613 = \"gpt-4-0613\"\n    static let gpt4_0314 = \"gpt-4-0314\"\n    static let gpt4_32k = \"gpt-4-32k\"\n    static let gpt4_32k_0613 = \"gpt-4-32k-0613\"\n    static let gpt4_32k_0314 = \"gpt-4-32k-0314\"\n    \n    static let gpt3_5Turbo = \"gpt-3.5-turbo\"\n    static let gpt3_5Turbo_0125 = \"gpt-3.5-turbo-0125\"\n    static let gpt3_5Turbo_1106 = \"gpt-3.5-turbo-1106\"\n    static let gpt3_5Turbo_0613 = \"gpt-3.5-turbo-0613\"\n    static let gpt3_5Turbo_0301 = \"gpt-3.5-turbo-0301\"\n    static let gpt3_5Turbo_16k = \"gpt-3.5-turbo-16k\"\n    static let gpt3_5Turbo_16k_0613 = \"gpt-3.5-turbo-16k-0613\"\n    \n    static let textDavinci_003 = \"text-davinci-003\"\n    static let textDavinci_002 = \"text-davinci-002\"\n    static let textCurie = \"text-curie-001\"\n    static let textBabbage = \"text-babbage-001\"\n    static let textAda = \"text-ada-001\"\n    \n    static let textDavinci_001 = \"text-davinci-001\"\n    static let codeDavinciEdit_001 = \"code-davinci-edit-001\"\n    \n    static let tts_1 = \"tts-1\"\n    static let tts_1_hd = \"tts-1-hd\"\n    \n    static let whisper_1 = \"whisper-1\"\n\n    static let dall_e_2 = \"dall-e-2\"\n    static let dall_e_3 = \"dall-e-3\"\n    \n    static let davinci = \"davinci\"\n    static let curie = \"curie\"\n    static let babbage = \"babbage\"\n    static let ada = \"ada\"\n    \n    static let textEmbeddingAda = \"text-embedding-ada-002\"\n    static let textSearchAda = \"text-search-ada-doc-001\"\n    static let textSearchBabbageDoc = \"text-search-babbage-doc-001\"\n    static let textSearchBabbageQuery001 = \"text-search-babbage-query-001\"\n    static let textEmbedding3 = \"text-embedding-3-small\"\n    static let textEmbedding3Large = \"text-embedding-3-large\"\n    \n    static let textModerationStable = \"text-moderation-stable\"\n    static let textModerationLatest = \"text-moderation-latest\"\n    static let moderation = \"text-moderation-007\"\n}\n```\n\nGPT-4 models are supported. \n\nAs an example: To use the `gpt-4-turbo-preview` model, pass `.gpt4_turbo_preview` as the parameter to the `ChatQuery` init.\n\n```swift\nlet query = ChatQuery(model: .gpt4_turbo_preview, messages: [\n    .init(role: .system, content: \"You are Librarian-GPT. 
You know everything about the books.\"),\n    .init(role: .user, content: \"Who wrote Harry Potter?\")\n])\nlet result = try await openAI.chats(query: query)\nXCTAssertFalse(result.choices.isEmpty)\n```\n\nYou can also pass a custom string if you need to use some model, that is not represented above.\n\n#### List Models\n\nLists the currently available models.\n\n**Response**\n\n```swift\npublic struct ModelsResult: Codable, Equatable {\n    \n    public let data: [ModelResult]\n    public let object: String\n}\n\n```\n**Example**\n\n```swift\nopenAI.models() { result in\n  \u002F\u002FHandle result here\n}\n\u002F\u002For\nlet result = try await openAI.models()\n```\n\n#### Retrieve Model\n\nRetrieves a model instance, providing ownership information.\n\n**Request**\n\n```swift\npublic struct ModelQuery: Codable, Equatable {\n    \n    public let model: Model\n}    \n```\n\n**Response**\n\n```swift\npublic struct ModelResult: Codable, Equatable {\n\n    public let id: Model\n    public let object: String\n    public let ownedBy: String\n}\n```\n\n**Example**\n\n```swift\nlet query = ModelQuery(model: .gpt4)\nopenAI.model(query: query) { result in\n  \u002F\u002FHandle result here\n}\n\u002F\u002For\nlet result = try await openAI.model(query: query)\n```\n\nReview [Models Documentation](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fmodels) for more info.\n\n### Utilities\n\nThe component comes with several handy utility functions to work with the vectors.\n\n```swift\npublic struct Vector {\n\n    \u002F\u002F\u002F Returns the similarity between two vectors\n    \u002F\u002F\u002F\n    \u002F\u002F\u002F - Parameters:\n    \u002F\u002F\u002F     - a: The first vector\n    \u002F\u002F\u002F     - b: The second vector\n    public static func cosineSimilarity(a: [Double], b: [Double]) -> Double {\n        return dot(a, b) \u002F (mag(a) * mag(b))\n    }\n\n    \u002F\u002F\u002F Returns the difference between two vectors. 
Cosine distance is defined as `1 - cosineSimilarity(a, b)`\n    \u002F\u002F\u002F\n    \u002F\u002F\u002F - Parameters:\n    \u002F\u002F\u002F     - a: The first vector\n    \u002F\u002F\u002F     - b: The second vector\n    public func cosineDifference(a: [Double], b: [Double]) -> Double {\n        return 1 - Self.cosineSimilarity(a: a, b: b)\n    }\n}\n```\n\n**Example**\n\n```swift\nlet vector1 = [0.213123, 0.3214124, 0.421412, 0.3214521251, 0.412412, 0.3214124, 0.1414124, 0.3214521251, 0.213123, 0.3214124, 0.1414124, 0.4214214, 0.213123, 0.3214124, 0.1414124, 0.3214521251, 0.213123, 0.3214124, 0.1414124, 0.3214521251]\nlet vector2 = [0.213123, 0.3214124, 0.1414124, 0.3214521251, 0.213123, 0.3214124, 0.1414124, 0.3214521251, 0.213123, 0.511515, 0.1414124, 0.3214521251, 0.213123, 0.3214124, 0.1414124, 0.3214521251, 0.213123, 0.3214124, 0.1414124, 0.3213213]\nlet similarity = Vector.cosineSimilarity(a: vector1, b: vector2)\nprint(similarity) \u002F\u002F0.9510201910206734\n```\n> In data analysis, cosine similarity is a measure of similarity between two sequences of numbers.\n\n\u003Cimg width=\"574\" alt=\"Screenshot 2022-12-19 at 6 00 33 PM\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMacPaw_OpenAI_readme_a577b81fa86c.png\">\n\nRead more about Cosine Similarity [here](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FCosine_similarity).\n\n## Assistants\n\nReview [Assistants Documentation](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fassistants) for more info.\n\n### Create Assistant\n\nExample: Create Assistant\n```swift\nlet query = AssistantsQuery(model: Model.gpt4_o_mini, name: name, description: description, instructions: instructions, tools: tools, toolResources: toolResources)\nopenAI.assistantCreate(query: query) { result in\n   \u002F\u002FHandle response here\n}\n```\n\n### Modify Assistant\n\nExample: Modify Assistant\n```swift\nlet query = AssistantsQuery(model: Model.gpt4_o_mini, name: name, description: description, instructions: instructions, tools: tools, toolResources: toolResources)\nopenAI.assistantModify(query: query, assistantId: \"asst_1234\") { result in\n    \u002F\u002FHandle response here\n}\n```\n\n### List Assistants\n\nExample: List Assistants\n```swift\nopenAI.assistants() { result in\n   \u002F\u002FHandle response here\n}\n```\n\n### Threads\n\nReview [Threads Documentation](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fthreads) for more info.\n\n#### Create Thread\n\nExample: Create Thread\n```swift\nlet threadsQuery = ThreadsQuery(messages: [Chat(role: message.role, content: message.content)])\nopenAI.threads(query: threadsQuery) { result in\n  \u002F\u002FHandle response here\n}\n```\n\n#### Create and Run Thread\n\nExample: Create and Run Thread\n```swift\nlet threadsQuery = ThreadQuery(messages: [Chat(role: message.role, content: message.content)])\nlet threadRunQuery = ThreadRunQuery(assistantId: \"asst_1234\", thread: threadsQuery)\nopenAI.threadRun(query: threadRunQuery) { result in\n  \u002F\u002FHandle response here\n}\n```\n\n#### Get Threads Messages\n\nReview [Messages Documentation](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fmessages) for more info.\n\nExample: Get Threads Messages\n```swift\nopenAI.threadsMessages(threadId: currentThreadId) { result in\n  \u002F\u002FHandle response here\n}\n```\n\n#### Add Message to Thread\n\nExample: Add Message to Thread\n```swift\nlet query = MessageQuery(role: message.role.rawValue, content: 
message.content)\nopenAI.threadsAddMessage(threadId: currentThreadId, query: query) { result in\n  \u002F\u002FHandle response here\n}\n```\n\n### Runs\n\nReview [Runs Documentation](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fruns) for more info.\n\n#### Create Run\n\nExample: Create Run\n```swift\nlet runsQuery = RunsQuery(assistantId: currentAssistantId)\nopenAI.runs(threadId: threadsResult.id, query: runsQuery) { result in\n  \u002F\u002FHandle response here\n}\n```\n\n#### Retrieve Run\n\nExample: Retrieve Run\n```swift\nopenAI.runRetrieve(threadId: currentThreadId, runId: currentRunId) { result in\n  \u002F\u002FHandle response here\n}\n```\n\n#### Retrieve Run Steps\n\nExample: Retrieve Run Steps\n```swift\nopenAI.runRetrieveSteps(threadId: currentThreadId, runId: currentRunId) { result in\n  \u002F\u002FHandle response here\n}\n```\n\n#### Submit Tool Outputs for Run\n\nExample: Submit Tool Outputs for Run\n```swift\nlet output = RunToolOutputsQuery.ToolOutput(toolCallId: \"call123\", output: \"Success\")\nlet query = RunToolOutputsQuery(toolOutputs: [output])\nopenAI.runSubmitToolOutputs(threadId: currentThreadId, runId: currentRunId, query: query) { result in\n  \u002F\u002FHandle response here\n}\n```\n\n### Files\n\nReview [Files Documentation](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Ffiles) for more info.\n\n#### Upload file\n\nExample: Upload file\n```swift\nlet query = FilesQuery(purpose: \"assistants\", file: fileData, fileName: url.lastPathComponent, contentType: \"application\u002Fpdf\")\nopenAI.files(query: query) { result in\n  \u002F\u002FHandle response here\n}\n```\n\n## Support for other providers\n\n> TL;DR: Use the `.relaxed` parsing option on `Configuration`\n\nThis SDK has limited support for other providers such as Gemini, Perplexity, etc.\n\nThe top priority of this SDK is OpenAI, and the main rule is for all the main types to be fully compatible with [OpenAI's API Reference](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fintroduction). If it says a field should be optional, it must be optional in the main subset of Query\u002FResult types of this SDK. The same goes for other info declared in the reference, like default values.\n\nThat said, we still want to support other providers.\n\n### Option 1: Use `.relaxed` parsing option\nThe `.relaxed` parsing option handles both missing and additional keys\u002Fvalues in responses. It should be sufficient for most use-cases. Let us know if it doesn't cover any case you need.\n\n
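For example, a minimal sketch, assuming the same `parsingOptions` configuration parameter shown in Option 2 below:\n\n```swift\n\u002F\u002F Relaxed parsing: tolerate both missing and additional keys\u002Fvalues in responses.\nlet configuration = OpenAI.Configuration(token: \"YOUR_TOKEN_HERE\", parsingOptions: .relaxed)\nlet openAI = OpenAI(configuration: configuration)\n```\n\n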
### Option 2: Specify parsing options separately\n#### Handle missing keys in responses\nSome providers return responses that don't completely satisfy OpenAI's scheme. For example, the Gemini chat completion response omits the `id` field, which is a required field in OpenAI's API Reference.\n\nIn such cases, use the `fillRequiredFieldIfKeyNotFound` parsing option, like this:\n```swift\nlet configuration = OpenAI.Configuration(token: \"\", parsingOptions: .fillRequiredFieldIfKeyNotFound)\n```\n\n#### Handle missing values in responses\nSome fields are required to be present (non-optional) by OpenAI, but other providers may return `null` for them.\n\nUse `.fillRequiredFieldIfValueNotFound` to handle missing values.\n\n#### What if a provider returns additional fields?\nCurrently we handle such cases by simply adding the additional fields to the main model set. This is possible because optional fields wouldn't break or conflict with OpenAI's scheme. At the moment, such additional fields are added:\n\n`ChatResult`\n\n* `citations` [Perplexity](https:\u002F\u002Fdocs.perplexity.ai\u002Fapi-reference\u002Fchat-completions#response-citations)\n\n`ChatResult.Choice.Message`\n\n* `reasoningContent` [Grok](https:\u002F\u002Fdocs.x.ai\u002Fdocs\u002Fapi-reference#chat-completions), [DeepSeek](https:\u002F\u002Fapi-docs.deepseek.com\u002Fapi\u002Fcreate-chat-completion#responses)\n* `reasoning` [OpenRouter](https:\u002F\u002Fopenrouter.ai\u002Fdocs\u002Fuse-cases\u002Freasoning-tokens#basic-usage-with-reasoning-tokens)\n\n## Example Project\n\nYou can find an example iOS application in the [Demo](\u002FDemo) folder.\n\n![mockuuups-iphone-13-pro-mockup-perspective-right](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMacPaw_OpenAI_readme_b7bebf847e02.png)\n\n## Contribution Guidelines\nMake your Pull Requests clear and obvious to anyone viewing them.  \nSet `main` as your target branch.\n\n#### Use [Conventional Commits](https:\u002F\u002Fwww.conventionalcommits.org\u002Fen\u002Fv1.0.0\u002F) principles in naming PRs and branches:\n\n- `Feat: ...` for new features and new functionality implementations.\n- `Bug: ...` for bug fixes.\n- `Fix: ...` for fixing minor issues, like typos or inaccuracies in code.\n- `Chore: ...` for boring stuff like code polishing, refactoring, deprecation fixing, etc.\n\nPR naming example: `Feat: Add Threads API handling` or `Bug: Fix message result duplication`\n\nBranch naming example: `feat\u002Fadd-threads-API-handling` or `bug\u002Ffix-message-result-duplication`\n\n#### Write descriptions for pull requests in the following format:\n- What\n\n  ...\n- Why\n  \n  ...\n- Affected Areas\n\n  ...\n- More Info\n\n  ...\n\nWe'd appreciate it if you include tests with your code where needed and possible. ❤️\n\n## Links\n\n- [OpenAI Documentation](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fintroduction)\n- [OpenAI Playground](https:\u002F\u002Fplatform.openai.com\u002Fplayground)\n- [OpenAI Examples](https:\u002F\u002Fplatform.openai.com\u002Fexamples)\n- [Dall-E](https:\u002F\u002Flabs.openai.com\u002F)\n- [Cosine Similarity](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FCosine_similarity)\n\n## License\n\n```\nMIT License\n\nCopyright (c) 2023 MacPaw Inc.\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and\u002For sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n```\n","# OpenAI\n\n![logo](https:\u002F\u002Fuser.githubusercontent.com\u002F1411778\u002F218319355-f56b6bd4-961a-4d8f-82cd-6dbd43111d7f.png)\n\n___\n\n![Swift Workflow](https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Factions\u002Fworkflows\u002Fswift.yml\u002Fbadge.svg)\n[![](https:\u002F\u002Fimg.shields.io\u002Fendpoint?url=https%3A%2F%2Fswiftpackageindex.com%2Fapi%2Fpackages%2FMacPaw%2FOpenAI%2Fbadge%3Ftype%3Dswift-versions)](https:\u002F\u002Fswiftpackageindex.com\u002FMacPaw\u002FOpenAI)\n[![](https:\u002F\u002Fimg.shields.io\u002Fendpoint?url=https%3A%2F%2Fswiftpackageindex.com%2Fapi%2Fpackages%2FMacPaw%2FOpenAI%2Fbadge%3Ftype%3Dplatforms)](https:\u002F\u002Fswiftpackageindex.com\u002FMacPaw\u002FOpenAI)\n[![Twitter](https:\u002F\u002Fimg.shields.io\u002Fstatic\u002Fv1?label=Twitter&message=@MacPaw&color=CA1F67)](https:\u002F\u002Ftwitter.com\u002FMacPaw)\n\n此仓库包含由 Swift 社区维护的 [OpenAI](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002F) 公共 API 实现。 \n\n- [安装](#installation)\n    - [Swift 包管理器](#swift-package-manager)\n- [使用](#usage)\n    - [初始化](#initialization)\n    - [除 OpenAI 外的其他提供商使用 SDK](#using-the-sdk-for-other-providers-except-openai)\n    - [取消请求](#cancelling-requests)\n- [文本与提示](#text-and-prompting)\n    - [响应](#responses)\n    - [聊天补全](#chat-completions)\n- [函数调用](#function-calling)\n- [工具](#tools)\n    - [MCP (模型上下文协议)](#mcp-model-context-protocol)\n        - [MCP 工具集成](#mcp-tool-integration)\n- [图像](#images)\n    - [创建图像](#create-image)\n    - [创建图像编辑](#create-image-edit)\n    - [创建图像变体](#create-image-variation)\n- [音频](#audio)\n    - [音频创建语音](#audio-create-speech)\n    - [音频转录](#audio-transcriptions)\n    - [音频翻译](#audio-translations)\n- [结构化输出](#structured-outputs)\n- [专用模型](#specialized-models)\n    - [嵌入](#embeddings)\n    - [审核](#moderations)\n- [助手（测试版）](#assistants)\n    - [创建助手](#create-assistant)\n    - [修改助手](#modify-assistant)\n    - [列出助手](#list-assistants) \n    - [线程](#threads)\n        - [创建线程](#create-thread)\n        - [创建并运行线程](#create-and-run-thread)\n        - [获取线程消息](#get-threads-messages)\n        - [添加消息到线程](#add-message-to-thread)\n    - [运行](#runs)\n        - [创建运行](#create-run)\n        - [检索运行](#retrieve-run)\n        - [检索运行步骤](#retrieve-run-steps)\n        - [提交运行工具输出](#submit-tool-outputs-for-run)\n    - [文件](#files)\n        - [上传文件](#upload-file)\n- [其他 API](#other-apis)\n    - [模型](#models)\n        - [列出模型](#list-models)\n        - [检索模型](#retrieve-model)\n    - [工具类](#utilities)\n- [支持其他提供商：Gemini, DeepSeek, Perplexity, OpenRouter 等](#support-for-other-providers)\n- [示例项目](#example-project)\n- [贡献指南](#contribution-guidelines)\n- [链接](#links)\n- [许可证](#license)\n\n## 文档\n\n本库的类型和方法的实现与 REST API 文档紧密一致，可在 [platform.openai.com](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference) 找到。\n\n## 安装\n\n### Swift 包管理器\n\n要使用 Swift Package Manager 将 OpenAI 集成到您的 Xcode 项目中：\n\n1.  在 Xcode 中，前往 **File > Add Package Dependencies...**\n2.  输入仓库 URL：`https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI.git`\n3.  
选择您想要的依赖规则（例如，“Up to Next Major Version\"）。\n\n或者，您可以直接将其添加到您的 `Package.swift` 文件中：\n\n```swift\ndependencies: [\n    .package(url: \"https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI.git\", branch: \"main\")\n]\n```\n\n## 使用\n\n### 初始化\n\n要初始化 API 实例，您需要从您的 OpenAI 组织 [获取](https:\u002F\u002Fplatform.openai.com\u002Faccount\u002Fapi-keys) API 令牌。\n\n**请记住您的 API 密钥是机密！** 不要与他人分享它，也不要将其暴露在任何客户端代码（浏览器、应用）中。生产请求必须通过您自己的后端服务器路由，在那里您可以安全地从环境变量或密钥管理服务加载您的 API 密钥。\n\n\u003Cimg width=\"1081\" alt=\"company\" src=\"https:\u002F\u002Fuser.githubusercontent.com\u002F1411778\u002F213204726-0772373e-14db-4d5d-9a58-bc249bac4c57.png\">\n\n一旦您有了令牌，您就可以初始化 `OpenAI` 类，这是访问 API 的入口点。\n\n> ⚠️ OpenAI 强烈建议客户端应用程序的开发人员通过单独的后台服务代理请求，以保护其 API 密钥安全。API 密钥可以访问和操作客户计费、用量和组织数据，因此 [暴露](https:\u002F\u002Fnshipster.com\u002Fsecrets\u002F) 它们存在重大风险。\n\n```swift\nlet openAI = OpenAI(apiToken: \"YOUR_TOKEN_HERE\")\n```\n\n可选地，您可以使用令牌、组织标识符和超时时间间隔初始化 `OpenAI`。\n\n```swift\nlet configuration = OpenAI.Configuration(token: \"YOUR_TOKEN_HERE\", organizationIdentifier: \"YOUR_ORGANIZATION_ID_HERE\", timeoutInterval: 60.0)\nlet openAI = OpenAI(configuration: configuration)\n```\n\n有关可以在初始化时传递以进行自定义的更多值，请参见 `OpenAI.Configuration`，例如：`host`, `basePath`, `port`, `scheme` 和 `customHeaders`。\n\n一旦您拥有令牌并初始化了实例，您就可以发起请求了。\n\n### 除 OpenAI 外的其他提供商使用 SDK\n\n此 SDK 更专注于与 OpenAI Platform 配合使用，但也支持与 OpenAI 兼容 API 的其他提供商。\n\n在 Configuration 中使用 `.relaxed` 解析选项，或在此处查看更多详细信息 [#支持其他提供商](#support-for-other-providers)。\n\n### 取消请求\n\n对于 Swift Concurrency 调用，您可以简单地取消调用任务，相应的底层 `URLSessionDataTask` 会自动取消。\n\n```swift\nlet task = Task {\n    do {\n        let chatResult = try await openAIClient.chats(query: .init(messages: [], model: \"asd\"))\n    } catch {\n        \u002F\u002F Handle cancellation or error\n    }\n}\n            \ntask.cancel()\n```\n\n\u003Cdetails>\n\u003Csummary>取消基于闭包的 API 调用\u003C\u002Fsummary>\n\n当您调用任何基于闭包的 API 方法时，它会返回一个可丢弃的 `CancellableRequest`。保存对它的引用以便稍后取消请求。\n```swift\nlet cancellableRequest = object.chats(query: query, completion: { _ in })\ncancellableReques\n```\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>取消 Combine 订阅\u003C\u002Fsummary>\n在 Combine 中，使用默认的取消机制。只需丢弃对订阅的引用，或在其上调用 `cancel()`。\n\n```swift\nlet subscription = openAIClient\n    .images(query: query)\n    .sink(receiveCompletion: { completion in }, receiveValue: { imagesResult in })\n    \nsubscription.cancel()\n```\n\u003C\u002Fdetails>\n\n## 文本与提示\n\n### Responses\n\n在 `OpenAIProtocol` 上使用 `responses` 变量来调用 Responses API 方法。\n\n```swift\npublic protocol OpenAIProtocol {\n    \u002F\u002F ...\n    var responses: ResponsesEndpointProtocol { get }\n    \u002F\u002F ...\n}\n```\n\n通过向方法传递 `CreateModelResponseQuery` 来指定参数。获取 `ResponseObject` 或 `ResponseStreamEvent` 事件流作为响应。\n\n**示例：从简单提示生成文本**\n```swift\nlet client: OpenAIProtocol = \u002F* client initialization code *\u002F\n\nlet query = CreateModelResponseQuery(\n    input: .textInput(\"Write a one-sentence bedtime story about a unicorn.\"),\n    model: .gpt4_1\n)\n\nlet response: ResponseObject = try await client.responses.createResponse(query: query)\n\u002F\u002F ...\n```\n\u003Cdetails>\n\u003Csummary>print(response)\u003C\u002Fsummary>\n\n```\nResponseObject(\n  createdAt: 1752146109,\n  error: nil,\n  id: \"resp_686fa0bd8f588198affbbf5a8089e2d208a5f6e2111e31f5\",\n  incompleteDetails: nil,\n  instructions: nil,\n  maxOutputTokens: nil,\n  metadata: [:],\n  model: \"gpt-4.1-2025-04-14\",\n  object: \"response\",\n  output: [\n    OpenAI.OutputItem.outputMessage(\n      
OpenAI.Components.Schemas.OutputMessage(\n        id: \"msg_686fa0bee24881988a4d1588d7f65c0408a5f6e2111e31f5\",\n        _type: OpenAI.Components.Schemas.OutputMessage._TypePayload.message,\n        role: OpenAI.Components.Schemas.OutputMessage.RolePayload.assistant,\n        content: [\n          OpenAI.Components.Schemas.OutputContent.OutputTextContent(\n            OpenAI.Components.Schemas.OutputTextContent(\n              _type: OpenAI.Components.Schemas.OutputTextContent._TypePayload.outputText,\n              text: \"Under a sky full of twinkling stars, a gentle unicorn named Luna danced through fields of stardust, spreading sweet dreams to every sleeping child.\",\n              annotations: [],\n              logprobs: Optional([])\n            )\n          )\n        ],\n        status: OpenAI.Components.Schemas.OutputMessage.StatusPayload.completed\n      )\n    )\n  ],\n  parallelToolCalls: true,\n  previousResponseId: nil,\n  reasoning: Optional(\n    OpenAI.Components.Schemas.Reasoning(\n      effort: nil,\n      summary: nil,\n      generateSummary: nil\n    )\n  ),\n  status: \"completed\",\n  temperature: Optional(1.0),\n  text: OpenAI.Components.Schemas.ResponseProperties.TextPayload(\n    format: Optional(\n      OpenAI.Components.Schemas.TextResponseFormatConfiguration.ResponseFormatText(\n        OpenAI.Components.Schemas.ResponseFormatText(\n          _type: OpenAI.Components.Schemas.ResponseFormatText._TypePayload.text\n        )\n      )\n    ),\n    toolChoice: OpenAI.Components.Schemas.ResponseProperties.ToolChoicePayload.ToolChoiceOptions(\n      OpenAI.Components.Schemas.ToolChoiceOptions.auto\n    ),\n    tools: [],\n    topP: Optional(1.0),\n    truncation: Optional(\"disabled\"),\n    usage: Optional(\n      OpenAI.Components.Schemas.ResponseUsage(\n        inputTokens: 18,\n        inputTokensDetails: OpenAI.Components.Schemas.ResponseUsage.InputTokensDetailsPayload(\n          cachedTokens: 0\n        ),\n        outputTokens: 32,\n        outputTokensDetails: OpenAI.Components.Schemas.ResponseUsage.OutputTokensDetailsPayload(\n          reasoningTokens: 0\n        ),\n        totalTokens: 50\n      )\n    ),\n    user: nil\n  )\n)\n````\n\n\u003C\u002Fdetails>\n\n模型生成的内容数组位于响应的 `output` 属性中。\n\n> [!NOTE] **`output` 数组通常包含多个项！** 它可能包含工具调用、推理模型生成的推理 token 数据以及其他项。假设模型的文本输出存在于 `output[0].content[0].text` 是不安全的。\n\n由于上述说明，为了安全且完整地读取响应，我们需要对消息及其内容进行切换处理，如下所示：\n\n```swift\n\u002F\u002F ...\nfor output in response.output {\n    switch output {\n    case .outputMessage(let outputMessage):\n        for content in outputMessage.content {\n            switch content {\n            case .OutputTextContent(let textContent):\n                print(textContent.text)\n            case .RefusalContent(let refusalContent):\n                print(refusalContent.refusal)\n            }\n        }\n    default:\n        \u002F\u002F Unhandled output items. Handle or throw an error.\n    }\n}\n```\n\n### 聊天补全\n\n使用 `OpenAIProtocol` 上的 `func chats(query:)` 和 `func chatsStream(query:)` 方法与 `ChatQuery` 配合，利用 Chat Completions API 生成文本。获取 `ChatResult` 或 `ChatStreamResult` 作为响应。\n\n**示例：从简单提示生成文本**\n\n```swift\nlet query = ChatQuery(\n    messages: [\n        .user(.init(content: .string(\"Who are you?\")))\n    ],\n    model: .gpt4_o\n)\n\nlet result = try await openAI.chats(query: query)\n\nprint(result.choices.first?.message.content ?? 
\"\")\n\u002F\u002F printed to console:\n\u002F\u002F I'm an AI language model created by OpenAI, designed to assist with a wide range of questions and tasks. How can I help you today?\n```\n\n\u003Cdetails>\n\u003Csummary>po result\u003C\u002Fsummary>\n\n```\n(lldb) po result\n▿ ChatResult\n  - id : \"chatcmpl-BgWJTzbVczdJDusTqVpnR6AQ2w6Fd\"\n  - created : 1749473687\n  - model : \"gpt-4o-2024-08-06\"\n  - object : \"chat.completion\"\n  ▿ serviceTier : Optional\u003CServiceTier>\n    - some : OpenAI.ServiceTier.defaultTier\n  ▿ systemFingerprint : Optional\u003CString>\n    - some : \"fp_07871e2ad8\"\n  ▿ choices : 1 element\n    ▿ 0 : Choice\n      - index : 0\n      - logprobs : nil\n      ▿ message : Message\n        ▿ content : Optional\u003CString>\n          - some : \"I am an AI language model created by OpenAI, known as ChatGPT. I\\'m here to assist with answering questions, providing explanations, and engaging in conversation on a wide range of topics. If you have any questions or need assistance, feel free to ask!\"\n        - refusal : nil\n        - role : \"assistant\"\n        ▿ annotations : Optional\u003CArray\u003CAnnotation>>\n          - some : 0 elements\n        - audio : nil\n        - toolCalls : nil\n        - _reasoning : nil\n        - _reasoningContent : nil\n      - finishReason : \"stop\"\n  ▿ usage : Optional\u003CCompletionUsage>\n    ▿ some : CompletionUsage\n      - completionTokens : 52\n      - promptTokens : 11\n      - totalTokens : 63\n      ▿ promptTokensDetails : Optional\u003CPromptTokensDetails>\n        ▿ some : PromptTokensDetails\n          - audioTokens : 0\n          - cachedTokens : 0\n  - citations : nil\n```\n\n\u003C\u002Fdetails>\n\n\u003C!-- ## 图像与视觉\n\n## 音频与语音\n\n## 结构化输出 -->\n\n## 函数调用\n\n有关更多详细信息，请参阅 [OpenAI 平台指南：函数调用](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Ffunction-calling?api-mode=responses)。\n\n\u003Cdetails>\n\n\u003Csummary>聊天补全 API 示例\u003C\u002Fsummary>\n\n### 使用 get_weather 函数进行函数调用\n\n```swift\nlet openAI = OpenAI(apiToken: \"...\")\n\u002F\u002F Declare functions which model might decide to call.\nlet functions = [\n    ChatQuery.ChatCompletionToolParam.FunctionDefinition(\n        name: \"get_weather\",\n        description: \"Get current temperature for a given location.\",\n        parameters: .init(fields: [\n            .type(.object),\n            .properties([\n                \"location\": .init(fields: [\n                    .type(.string),\n                    .description(\"City and country e.g. 
Bogotá, Colombia\")\n                ])\n            ]),\n            .required([\"location\"]),\n            .additionalProperties(.boolean(false))\n        ])\n    )\n]\nlet query = ChatQuery(\n    messages: [\n        .user(.init(content: .string(\"What is the weather like in Paris today?\"\n    ],\n    model: .gpt4_1,\n    tools: functions.map { .init(function: $0) }\n)\nlet result = try await openAI.chats(query: query)\nprint(result.choices[0].message.toolCalls)\n```\n\n结果将是（此处序列化为 JSON 以便阅读）：\n```json\n{\n  \"id\": \"chatcmpl-1234\",\n  \"object\": \"chat.completion\",\n  \"created\": 1686000000,\n  \"model\": \"gpt-3.5-turbo-0613\",\n  \"choices\": [\n    {\n      \"index\": 0,\n      \"message\": {\n        \"role\": \"assistant\",\n        \"tool_calls\": [\n          {\n            \"id\": \"call-0\",\n            \"type\": \"function\",\n            \"function\": {\n              \"name\": \"get_current_weather\",\n              \"arguments\": \"{\\n  \\\"location\\\": \\\"Boston, MA\\\"\\n}\"\n            }\n          }\n        ]\n      },\n      \"finish_reason\": \"function_call\"\n    }\n  ],\n  \"usage\": { \"total_tokens\": 100, \"completion_tokens\": 18, \"prompt_tokens\": 82 }\n}\n\n```\n\n\u003C\u002Fdetails>\n\n## 图像\n\n给定提示词 (prompt) 和\u002F或输入图像，模型将生成新图像。\n\n随着人工智能 (Artificial Intelligence) 的持续发展，Dall-E 这一引人入胜的概念也在不断演进。由专注于人工智能研究的 OpenAI 实验室开发，Dall-E 被归类为一种能够根据人类提供的描述生成图像的 AI 系统。其潜在应用涵盖动画、插画、设计和工程等领域——更不用说中间无数的可能性——不难理解为何这项新技术会引起如此大的兴奋。\n\n### 创建图像\n\n**请求**\n\n```swift\nstruct ImagesQuery: Codable {\n    \u002F\u002F\u002F A text description of the desired image(s). The maximum length is 1000 characters.\n    public let prompt: String\n    \u002F\u002F\u002F The number of images to generate. Must be between 1 and 10.\n    public let n: Int?\n    \u002F\u002F\u002F The size of the generated images. Must be one of 256x256, 512x512, or 1024x1024.\n    public let size: String?\n}\n```\n\n**响应**\n\n```swift\nstruct ImagesResult: Codable, Equatable {\n    public struct URLResult: Codable, Equatable {\n        public let url: String\n    }\n    public let created: TimeInterval\n    public let data: [URLResult]\n}\n```\n\n**示例**\n\n```swift\nlet query = ImagesQuery(prompt: \"White cat with heterochromia sitting on the kitchen table\", n: 1, size: \"1024x1024\")\nopenAI.images(query: query) { result in\n  \u002F\u002FHandle result here\n}\n\u002F\u002For\nlet result = try await openAI.images(query: query)\n```\n\n```\n(lldb) po result\n▿ ImagesResult\n  - created : 1671453505.0\n  ▿ data : 1 element\n    ▿ 0 : URLResult\n      - url : \"https:\u002F\u002Foaidalleapiprodscus.blob.core.windows.net\u002Fprivate\u002Forg-CWjU5cDIzgCcVjq10pp5yX5Q\u002Fuser-GoBXgChvLBqLHdBiMJBUbPqF\u002Fimg-WZVUK2dOD4HKbKwW1NeMJHBd.png?st=2022-12-19T11%3A38%3A25Z&se=2022-12-19T13%3A38%3A25Z&sp=r&sv=2021-08-06&sr=b&rscd=inline&rsct=image\u002Fpng&skoid=6aaadede-4fb3-4698-a8f6-684d7786b067&sktid=a48cca56-e6da-484e-a814-9c849652bcb3&skt=2022-12-19T09%3A35%3A16Z&ske=2022-12-20T09%3A35%3A16Z&sks=b&skv=2021-08-06&sig=mh52rmtbQ8CXArv5bMaU6lhgZHFBZz\u002FePr4y%2BJwLKOc%3D\"\n ```\n\n**生成的图像**\n\n![Generated Image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMacPaw_OpenAI_readme_a23a3183ffdd.png)\n\n### 创建图像编辑\n\n给定原始图像和提示词，创建编辑或扩展后的图像。\n\n**请求**\n\n```swift\npublic struct ImageEditsQuery: Codable {\n    \u002F\u002F\u002F The image to edit. Must be a valid PNG file, less than 4MB, and square. 
If mask is not provided, image must have transparency, which will be used as the mask.\n    public let image: Data\n    public let fileName: String\n    \u002F\u002F\u002F An additional image whose fully transparent areas (e.g. where alpha is zero) indicate where image should be edited. Must be a valid PNG file, less than 4MB, and have the same dimensions as image.\n    public let mask: Data?\n    public let maskFileName: String?\n    \u002F\u002F\u002F A text description of the desired image(s). The maximum length is 1000 characters.\n    public let prompt: String\n    \u002F\u002F\u002F The number of images to generate. Must be between 1 and 10.\n    public let n: Int?\n    \u002F\u002F\u002F The size of the generated images. Must be one of 256x256, 512x512, or 1024x1024.\n    public let size: String?\n}\n```\n\n**响应**\n\n与 ImagesQuery 类似，使用 ImagesResult 响应。\n\n**示例**\n\n```swift\nlet data = image.pngData()\nlet query = ImageEditQuery(image: data, fileName: \"whitecat.png\", prompt: \"White cat with heterochromia sitting on the kitchen table with a bowl of food\", n: 1, size: \"1024x1024\")\nopenAI.imageEdits(query: query) { result in\n  \u002F\u002FHandle result here\n}\n\u002F\u002For\nlet result = try await openAI.imageEdits(query: query)\n```\n\n### 创建图像变体\n\n创建给定图像的变体。\n\n**请求**\n\n```swift\npublic struct ImageVariationsQuery: Codable {\n    \u002F\u002F\u002F The image to edit. Must be a valid PNG file, less than 4MB, and square. If mask is not provided, image must have transparency, which will be used as the mask.\n    public let image: Data\n    public let fileName: String\n    \u002F\u002F\u002F The number of images to generate. Must be between 1 and 10.\n    public let n: Int?\n    \u002F\u002F\u002F The size of the generated images. Must be one of 256x256, 512x512, or 1024x1024.\n    public let size: String?\n}\n```\n\n**响应**\n\n与 ImagesQuery 类似，使用 ImagesResult 响应。\n\n**示例**\n\n```swift\nlet data = image.pngData()\nlet query = ImageVariationQuery(image: data, fileName: \"whitecat.png\", n: 1, size: \"1024x1024\")\nopenAI.imageVariations(query: query) { result in\n  \u002F\u002FHandle result here\n}\n\u002F\u002For\nlet result = try await openAI.imageVariations(query: query)\n```\n\n查看 [图像文档](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fimages) 以获取更多信息。\n\n## 音频\n\n语音转文本 API 提供两个端点 (endpoints)：转录 (transcriptions) 和翻译 (translations)，基于我们最先进的开源 large-v2 [Whisper 模型](https:\u002F\u002Fopenai.com\u002Fresearch\u002Fwhisper)。它们可用于：\n\n将音频转录为音频所在的任何语言。\n将音频翻译并转录为英语。\n文件上传目前限制为 25 MB，支持以下输入文件类型：mp3, mp4, mpeg, mpga, m4a, wav, 和 webm。\n\n### 音频创建语音\n\n此函数向 OpenAI API 发送 `AudioSpeechQuery`，使用特定的语音和格式从文本创建音频语音。 \n\n[了解有关语音的更多信息。](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Ftext-to-speech\u002Fvoice-options)  \n[了解有关模型的更多信息。](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fmodels\u002Ftts)\n\n**请求：**  \n\n```swift\npublic struct AudioSpeechQuery: Codable, Equatable {\n    \u002F\u002F...\n    public let model: Model \u002F\u002F tts-1 or tts-1-hd  \n    public let input: String\n    public let voice: AudioSpeechVoice\n    public let responseFormat: AudioSpeechResponseFormat\n    public let speed: String? 
\u002F\u002F Initializes with Double?\n    \u002F\u002F...\n}\n```\n\n**响应：**\n\n```swift\n\u002F\u002F\u002F Audio data for one of the following formats :`mp3`, `opus`, `aac`, `flac`, `pcm`\npublic let audioData: Data?\n```\n\n**示例：**   \n\n```swift\nlet query = AudioSpeechQuery(model: .tts_1, input: \"Hello, world!\", voice: .alloy, responseFormat: .mp3, speed: 1.0)\n\nopenAI.audioCreateSpeech(query: query) { result in\n    \u002F\u002F Handle response here\n}\n\u002F\u002For\nlet result = try await openAI.audioCreateSpeech(query: query)\n```\n[OpenAI 创建语音 – 文档](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Faudio\u002FcreateSpeech)\n\n### 音频创建语音流式传输\n\n可以使用 `audioCreateSpeechStream` 函数进行音频创建语音。Token 将逐个发送。\n\n**闭包（Closures）**\n```swift\nopenAI.audioCreateSpeechStream(query: query) { partialResult in\n    switch partialResult {\n    case .success(let result):\n        print(result.audio)\n    case .failure(let error):\n        \u002F\u002FHandle chunk error here\n    }\n} completion: { error in\n    \u002F\u002FHandle streaming error here\n}\n```\n\n**Combine**\n\n```swift\nopenAI\n    .audioCreateSpeechStream(query: query)\n    .sink { completion in\n        \u002F\u002FHandle completion result here\n    } receiveValue: { result in\n        \u002F\u002FHandle chunk here\n    }.store(in: &cancellables)\n```\n\n**结构化并发（Structured Concurrency）**\n```swift\nfor try await result in openAI.audioCreateSpeechStream(query: query) {\n   \u002F\u002FHandle result here\n}\n```\n\n### 音频转录\n\n将音频转录为输入语言。\n\n**请求**\n\n```swift\npublic struct AudioTranscriptionQuery: Codable, Equatable {\n    \n    public let file: Data\n    public let fileName: String\n    public let model: Model\n    \n    public let prompt: String?\n    public let temperature: Double?\n    public let language: String?\n}\n```\n\n**响应**\n\n```swift\npublic struct AudioTranscriptionResult: Codable, Equatable {\n    \n    public let text: String\n}\n```\n\n**示例**\n\n```swift\nlet data = Data(contentsOfURL:...)\nlet query = AudioTranscriptionQuery(file: data, fileName: \"audio.m4a\", model: .whisper_1)        \n\nopenAI.audioTranscriptions(query: query) { result in\n    \u002F\u002FHandle result here\n}\n\u002F\u002For\nlet result = try await openAI.audioTranscriptions(query: query)\n```\n\n### 音频翻译\n\n将音频翻译为英语。\n\n**请求**\n\n```swift\npublic struct AudioTranslationQuery: Codable, Equatable {\n    \n    public let file: Data\n    public let fileName: String\n    public let model: Model\n    \n    public let prompt: String?\n    public let temperature: Double?\n}    \n```\n\n**响应**\n\n```swift\npublic struct AudioTranslationResult: Codable, Equatable {\n    \n    public let text: String\n}\n```\n\n**示例**\n\n```swift\nlet data = Data(contentsOfURL:...)\nlet query = AudioTranslationQuery(file: data, fileName: \"audio.m4a\", model: .whisper_1)  \n\nopenAI.audioTranslations(query: query) { result in\n    \u002F\u002FHandle result here\n}\n\u002F\u002For\nlet result = try await openAI.audioTranslations(query: query)\n```\n\n查看 [音频文档](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Faudio) 以获取更多信息。\n\n## 结构化输出（Structured Outputs）\n\n> [!NOTE] 本节侧重于 Responses 和 Chat Completions API 中不使用函数调用（Function Calling）的用例。要了解如何使用结构化输出（Structured Outputs）配合函数调用，请查看 [函数调用（Function Calling）](#function-calling)。\n\n要配置结构化输出（Structured Outputs），您需要定义一个 JSON Schema（JSON 模式）并将其传递给查询。\n\n此 SDK 支持多种定义模式的方法；请选择您喜欢的一种。\n\n\u003Cdetails>\n\n\u003Csummary>JSONSchemaDefinition.jsonSchema\u003C\u002Fsummary>\n\n### 
通过指定字段构建模式\n\n此定义接受 `JSONSchema`，它可以是 `boolean` 或 `object` JSON 文档。\n\n与其自己提供模式，不如使用接受 `[JSONSchemaField]` 的初始化器以类型安全的方式构建模式，如下面的示例所示。\n\n虽然这种定义模式的方法很直接，但它可能比较冗长。关于定义模式的替代方法，请参阅以下选项。\n\n### 示例\n\n```swift\nlet query = CreateModelResponseQuery(\n    input: .textInput(\"Return structured output\"),\n    model: .gpt4_o,\n    text: .jsonSchema(.init(\n        name: \"research_paper_extraction\",\n        schema: .jsonSchema(.init(\n            .type(.object),\n            .properties([\n                \"title\": Schema.buildBlock(\n                    .type(.string)\n                ),\n                \"authors\": .init(\n                    .type(.array),\n                    .items(.init(\n                        .type(.string)\n                    ))\n                ),\n                \"abstract\": .init(\n                    .type(.string)\n                ),\n                \"keywords\": .init(\n                    .type(.array),\n                    .items(.init(\n                        .type(.string))\n                    )\n                )\n            ]),\n            .required([\"title\", \"authors\", \"abstract\", \"keywords\"]),\n            .additionalProperties(.boolean(false))\n        )),\n        description: \"desc\",\n        strict: false\n    ))\n)\n\nlet response = try await openAIClient.responses.createResponse(query: query)\nfor output in response.output {\n    switch output {\n    case .outputMessage(let message):\n        for content in message.content {\n            switch content {\n            case .OutputTextContent(let textContent):\n                print(\"json output structured by the schema: \", textContent.text)\n            case .RefusalContent(let refusal):\n                \u002F\u002F Handle refusal\n                break\n            }\n        }\n    default:\n        \u002F\u002F Handle other OutputItems\n        break\n    }\n}\n```\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\n\u003Csummary>JSONSchemaDefinition.derivedJsonSchema\u003C\u002Fsummary>\n\n### 实现描述模式的类型\n\n以 [Pydantic](https:\u002F\u002Fdocs.pydantic.dev\u002Flatest\u002F) 或 [Zod](https:\u002F\u002Fzod.dev) 的风格定义模式。\n\n- 在创建 `ChatQuery` 或 `CreateModelResponseQuery` 时使用 `derivedJsonSchema(_ type:)` 响应格式\n- 提供一个符合 `JSONSchemaConvertible` 的类型并生成一个实例作为示例\n- 确保所提供类型中的所有枚举类型都符合 `JSONSchemaEnumConvertible` 并为所有情况生成名称数组\n\n### 示例\n\n```swift\nstruct MovieInfo: JSONSchemaConvertible {\n    \n    let title: String\n    let director: String\n    let release: Date\n    let genres: [MovieGenre]\n    let cast: [String]\n    \n    static let example: Self = {\n        .init(\n            title: \"Earth\",\n            director: \"Alexander Dovzhenko\",\n            release: Calendar.current.date(from: DateComponents(year: 1930, month: 4, day: 1))!,\n            genres: [.drama],\n            cast: [\"Stepan Shkurat\", \"Semyon Svashenko\", \"Yuliya Solntseva\"]\n        )\n    }()\n}\nenum MovieGenre: String, Codable, JSONSchemaEnumConvertible {\n    case action, drama, comedy, scifi\n    \n    var caseNames: [String] { Self.allCases.map { $0.rawValue } }\n}\nlet query = ChatQuery(\n    messages: [\n        .system(\n            .init(content: .textContent(\"Best Picture winner at the 2011 Oscars\"))\n        )\n    ],\n    model: .gpt4_o,\n    responseFormat: .jsonSchema(\n        .init(\n            name: \"movie-info\",\n            description: nil,\n            schema: .derivedJsonSchema(MovieInfo.self),\n            strict: true\n        )\n    )\n)\nlet result = try await openAI.chats(query: 
query)\n```\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\n\u003Csummary>JSONSchemaDefinition.dynamicJsonSchema\u003C\u002Fsummary>\n\n### 使用符合 Encodable 协议的任意类型的实例定义 Schema\n\n使用简单的字典定义您的 JSON Schema，或者像 https:\u002F\u002Fgithub.com\u002Fkevinhermawan\u002Fswift-json-schema 这样的库来指定 JSON Schema。\n\n### 示例\n\n```swift\nstruct AnyEncodable: Encodable {\n    private let _encode: (Encoder) throws -> Void\n    public init\u003CT: Encodable>(_ wrapped: T) {\n        _encode = wrapped.encode\n    }\n    func encode(to encoder: Encoder) throws {\n        try _encode(encoder)\n    }\n}\nlet schema = [\n    \"type\": AnyEncodable(\"object\"),\n    \"properties\": AnyEncodable([\n        \"title\": AnyEncodable([\n            \"type\": \"string\"\n        ]),\n        \"director\": AnyEncodable([\n            \"type\": \"string\"\n        ]),\n        \"release\": AnyEncodable([\n            \"type\": \"string\"\n        ]),\n        \"genres\": AnyEncodable([\n            \"type\": AnyEncodable(\"array\"),\n            \"items\": AnyEncodable([\n                \"type\": AnyEncodable(\"string\"),\n                \"enum\": AnyEncodable([\"action\", \"drama\", \"comedy\", \"scifi\"])\n            ])\n        ]),\n        \"cast\": AnyEncodable([\n            \"type\": AnyEncodable(\"array\"),\n            \"items\": AnyEncodable([\n                \"type\": \"string\"\n            ])\n        ])\n    ]),\n    \"additionalProperties\": AnyEncodable(false)\n]\nlet query = ChatQuery(\n    messages: [.system(.init(content: .textContent(\"Return a structured response.\")))],\n    model: .gpt4_o,\n    responseFormat: .jsonSchema(.init(name: \"movie-info\", schema: .dynamicJsonSchema(schema)))\n)\nlet result = try await openAI.chats(query: query)\n```\n\n\u003C\u002Fdetails>\n\n有关更多信息，请查看 [结构化输出文档](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Fstructured-outputs)。\n\n## 工具\n### 远程 MCP（模型上下文协议）\n\n模型上下文协议（MCP）使 AI 模型能够通过标准化的服务器连接安全地连接到外部数据源和工具。此 OpenAI Swift 库支持 MCP 集成，允许您通过远程工具和扩展模型能力。\n\n您可以使用 [MCP Swift 库](https:\u002F\u002Fgithub.com\u002Fmodelcontextprotocol\u002Fswift-sdk) 连接到 MCP 服务器并发现可用工具，然后将这些工具与 OpenAI 的聊天补全功能集成。\n\n#### MCP 工具集成\n\n**请求**\n\n```swift\n\u002F\u002F Create an MCP tool for connecting to a remote server\nlet mcpTool = Tool.mcpTool(\n    .init(\n        _type: .mcp,\n        serverLabel: \"GitHub_MCP_Server\",\n        serverUrl: \"https:\u002F\u002Fapi.githubcopilot.com\u002Fmcp\u002F\",\n        headers: .init(additionalProperties: [\n            \"Authorization\": \"Bearer YOUR_TOKEN_HERE\"\n        ]),\n        allowedTools: .case1([\"search_repositories\", \"get_file_contents\"]),\n        requireApproval: .case2(.always)\n    )\n)\n\nlet query = ChatQuery(\n    messages: [\n        .user(.init(content: .string(\"Search for Swift repositories on GitHub\")))\n    ],\n    model: .gpt4_o,\n    tools: [mcpTool]\n)\n```\n\n**MCP 工具属性**\n\n- `serverLabel`: MCP 服务器的唯一标识符\n- `serverUrl`: MCP 服务器的 URL 端点\n- `headers`: 服务器所需的身份验证头和其他 HTTP 头\n- `allowedTools`: 要启用的特定工具（可选 - 如果未指定，则所有工具均可用）\n- `requireApproval`: 工具调用是否需要用户批准（`.always`、`.never` 或条件性）\n\n**使用 MCP Swift 库的示例**\n\n```swift\nimport MCP\nimport OpenAI\n\n\u002F\u002F Connect to MCP server using the MCP Swift library\nlet mcpClient = MCP.Client(name: \"MyApp\", version: \"1.0.0\")\n\nlet transport = HTTPClientTransport(\n    endpoint: URL(string: \"https:\u002F\u002Fapi.githubcopilot.com\u002Fmcp\u002F\")!,\n    configuration: URLSessionConfiguration.default\n)\n\nlet result = try await mcpClient.connect(transport: 
transport)\nlet toolsResponse = try await mcpClient.listTools()\n\n\u002F\u002F Create OpenAI MCP tool with discovered tools\nlet enabledToolNames = toolsResponse.tools.map { $0.name }\nlet mcpTool = Tool.mcpTool(\n    .init(\n        _type: .mcp,\n        serverLabel: \"GitHub_MCP_Server\",\n        serverUrl: \"https:\u002F\u002Fapi.githubcopilot.com\u002Fmcp\u002F\",\n        headers: .init(additionalProperties: authHeaders),\n        allowedTools: .case1(enabledToolNames),\n        requireApproval: .case2(.always)\n    )\n)\n\n\u002F\u002F Use in chat completion\nlet query = ChatQuery(\n    messages: [.user(.init(content: .string(\"Help me search GitHub repositories\")))],\n    model: .gpt4_o,\n    tools: [mcpTool]\n)\n\nlet chatResult = try await openAI.chats(query: query)\n```\n\n**MCP 工具调用处理**\n\n使用 MCP 工具时，模型可能会生成在远程 MCP 服务器上执行的工具调用。请在响应处理中处理特定的 MCP 输出项：\n\n```swift\n\u002F\u002F Handle MCP tool calls in streaming responses\nfor try await result in openAI.chatsStream(query: query) {\n    for choice in result.choices {\n        if let outputItem = choice.delta.content {\n            switch outputItem {\n            case .mcpToolCall(let mcpCall):\n                print(\"MCP tool call: \\(mcpCall.name)\")\n                if let output = mcpCall.output {\n                    print(\"Result: \\(output)\")\n                }\n            case .mcpApprovalRequest(let approvalRequest):\n                \u002F\u002F Handle approval request if requireApproval is enabled\n                print(\"MCP tool requires approval: \\(approvalRequest)\")\n            default:\n                \u002F\u002F Handle other output types\n                break\n            }\n        }\n    }\n}\n```\n\n## 专用模型\n\n### Embeddings (嵌入向量)\n\n获取给定输入的向量表示，该表示可被机器学习模型和算法轻松消费。\n\n**请求 (Request)**\n\n```swift\nstruct EmbeddingsQuery: Codable {\n    \u002F\u002F\u002F ID of the model to use.\n    public let model: Model\n    \u002F\u002F\u002F Input text to get embeddings for\n    public let input: String\n}\n```\n\n**响应 (Response)**\n\n```swift\nstruct EmbeddingsResult: Codable, Equatable {\n\n    public struct Embedding: Codable, Equatable {\n\n        public let object: String\n        public let embedding: [Double]\n        public let index: Int\n    }\n    public let data: [Embedding]\n    public let usage: Usage\n}\n```\n\n**示例 (Example)**\n\n```swift\nlet query = EmbeddingsQuery(model: .textSearchBabbageDoc, input: \"The food was delicious and the waiter...\")\nopenAI.embeddings(query: query) { result in\n  \u002F\u002FHandle response here\n}\n\u002F\u002For\nlet result = try await openAI.embeddings(query: query)\n```\n\n```\n(lldb) po result\n▿ EmbeddingsResult\n  ▿ data : 1 element\n    ▿ 0 : Embedding\n      - object : \"embedding\"\n      ▿ embedding : 2048 elements\n        - 0 : 0.0010535449\n        - 1 : 0.024234328\n        - 2 : -0.0084999\n        - 3 : 0.008647452\n    .......\n        - 2044 : 0.017536353\n        - 2045 : -0.005897616\n        - 2046 : -0.026559394\n        - 2047 : -0.016633155\n      - index : 0\n\n(lldb)\n```\n\n查阅 [Embeddings 文档](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fembeddings) 以获取更多信息。\n\n### Moderations (内容审核)\n\n给定输入文本，输出模型是否将其分类为违反 OpenAI 的内容政策。\n\n**请求 (Request)**\n\n```swift\npublic struct ModerationsQuery: Codable {\n    \n    public let input: String\n    public let model: Model?\n}    \n```\n\n**响应 (Response)**\n\n```swift\npublic struct ModerationsResult: Codable, Equatable {\n\n    public let id: String\n    public let model: 
Model\n    public let results: [CategoryResult]\n}\n```\n\n**示例 (Example)**\n\n```swift\nlet query = ModerationsQuery(input: \"I want to kill them.\")\nopenAI.moderations(query: query) { result in\n  \u002F\u002FHandle result here\n}\n\u002F\u002For\nlet result = try await openAI.moderations(query: query)\n```\n\n查阅 [Moderations 文档](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fmoderations) 以获取更多信息。\n\n## 其他 API\n\n### Models (模型)\n\n模型表示为类型别名 `typealias Model = String`。\n\n```swift\npublic extension Model {\n    static let gpt5_1 = \"gpt-5.1\"\n    static let gpt5_1_chat_latest = \"gpt-5.1-chat-latest\"\n\n    static let gpt5 = \"gpt-5\"\n    static let gpt5_mini = \"gpt-5-mini\"\n    static let gpt5_nano = \"gpt-5-nano\"\n    static let gpt5_chat = \"gpt-5-chat\"\n\n    static let gpt4_1 = \"gpt-4.1\"\n    static let gpt4_1_mini = \"gpt-4.1-mini\"\n    static let gpt4_1_nano = \"gpt-4.1-nano\"\n\n    static let gpt4_turbo_preview = \"gpt-4-turbo-preview\"\n    static let gpt4_vision_preview = \"gpt-4-vision-preview\"\n    static let gpt4_0125_preview = \"gpt-4-0125-preview\"\n    static let gpt4_1106_preview = \"gpt-4-1106-preview\"\n    static let gpt4 = \"gpt-4\"\n    static let gpt4_0613 = \"gpt-4-0613\"\n    static let gpt4_0314 = \"gpt-4-0314\"\n    static let gpt4_32k = \"gpt-4-32k\"\n    static let gpt4_32k_0613 = \"gpt-4-32k-0613\"\n    static let gpt4_32k_0314 = \"gpt-4-32k-0314\"\n    \n    static let gpt3_5Turbo = \"gpt-3.5-turbo\"\n    static let gpt3_5Turbo_0125 = \"gpt-3.5-turbo-0125\"\n    static let gpt3_5Turbo_1106 = \"gpt-3.5-turbo-1106\"\n    static let gpt3_5Turbo_0613 = \"gpt-3.5-turbo-0613\"\n    static let gpt3_5Turbo_0301 = \"gpt-3.5-turbo-0301\"\n    static let gpt3_5Turbo_16k = \"gpt-3.5-turbo-16k\"\n    static let gpt3_5Turbo_16k_0613 = \"gpt-3.5-turbo-16k-0613\"\n    \n    static let textDavinci_003 = \"text-davinci-003\"\n    static let textDavinci_002 = \"text-davinci-002\"\n    static let textCurie = \"text-curie-001\"\n    static let textBabbage = \"text-babbage-001\"\n    static let textAda = \"text-ada-001\"\n    \n    static let textDavinci_001 = \"text-davinci-001\"\n    static let codeDavinciEdit_001 = \"code-davinci-edit-001\"\n    \n    static let tts_1 = \"tts-1\"\n    static let tts_1_hd = \"tts-1-hd\"\n    \n    static let whisper_1 = \"whisper-1\"\n\n    static let dall_e_2 = \"dall-e-2\"\n    static let dall_e_3 = \"dall-e-3\"\n    \n    static let davinci = \"davinci\"\n    static let curie = \"curie\"\n    static let babbage = \"babbage\"\n    static let ada = \"ada\"\n    \n    static let textEmbeddingAda = \"text-embedding-ada-002\"\n    static let textSearchAda = \"text-search-ada-doc-001\"\n    static let textSearchBabbageDoc = \"text-search-babbage-doc-001\"\n    static let textSearchBabbageQuery001 = \"text-search-babbage-query-001\"\n    static let textEmbedding3 = \"text-embedding-3-small\"\n    static let textEmbedding3Large = \"text-embedding-3-large\"\n    \n    static let textModerationStable = \"text-moderation-stable\"\n    static let textModerationLatest = \"text-moderation-latest\"\n    static let moderation = \"text-moderation-007\"\n}\n```\n\n支持 GPT-4 模型。\n\n例如：要使用 `gpt-4-turbo-preview` 模型，请将 `.gpt4_turbo_preview` 作为参数传递给 `ChatQuery` 的初始化方法。\n\n```swift\nlet query = ChatQuery(model: .gpt4_turbo_preview, messages: [\n    .init(role: .system, content: \"You are Librarian-GPT. 
You know everything about the books.\"),\n    .init(role: .user, content: \"Who wrote Harry Potter?\")\n])\nlet result = try await openAI.chats(query: query)\nXCTAssertFalse(result.choices.isEmpty)\n```\n\n如果您需要使用上述未表示的某个模型，也可以传递自定义字符串。\n\n#### List Models (列出模型)\n\n列出当前可用的模型。\n\n**响应 (Response)**\n\n```swift\npublic struct ModelsResult: Codable, Equatable {\n    \n    public let data: [ModelResult]\n    public let object: String\n}\n\n```\n**示例 (Example)**\n\n```swift\nopenAI.models() { result in\n  \u002F\u002FHandle result here\n}\n\u002F\u002For\nlet result = try await openAI.models()\n```\n\n#### Retrieve Model (检索模型)\n\n检索模型实例，提供所有权信息。\n\n**请求 (Request)**\n\n```swift\npublic struct ModelQuery: Codable, Equatable {\n    \n    public let model: Model\n}    \n```\n\n**响应 (Response)**\n\n```swift\npublic struct ModelResult: Codable, Equatable {\n\n    public let id: Model\n    public let object: String\n    public let ownedBy: String\n}\n```\n\n**示例 (Example)**\n\n```swift\nlet query = ModelQuery(model: .gpt4)\nopenAI.model(query: query) { result in\n  \u002F\u002FHandle result here\n}\n\u002F\u002For\nlet result = try await openAI.model(query: query)\n```\n\n查阅 [Models 文档](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fmodels) 以获取更多信息。\n\n### 工具函数\n\n该组件提供了一些便捷的实用函数来处理向量。\n\n```swift\npublic struct Vector {\n\n    \u002F\u002F\u002F Returns the similarity between two vectors\n    \u002F\u002F\u002F\n    \u002F\u002F\u002F - Parameters:\n    \u002F\u002F\u002F     - a: The first vector\n    \u002F\u002F\u002F     - b: The second vector\n    public static func cosineSimilarity(a: [Double], b: [Double]) -> Double {\n        return dot(a, b) \u002F (mag(a) * mag(b))\n    }\n\n    \u002F\u002F\u002F Returns the difference between two vectors. 
Cosine distance is defined as `1 - cosineSimilarity(a, b)`\n    \u002F\u002F\u002F\n    \u002F\u002F\u002F - Parameters:\n    \u002F\u002F\u002F     - a: The first vector\n    \u002F\u002F\u002F     - b: The second vector\n    public func cosineDifference(a: [Double], b: [Double]) -> Double {\n        return 1 - Self.cosineSimilarity(a: a, b: b)\n    }\n}\n```\n\n**示例**\n\n```swift\nlet vector1 = [0.213123, 0.3214124, 0.421412, 0.3214521251, 0.412412, 0.3214124, 0.1414124, 0.3214521251, 0.213123, 0.3214124, 0.1414124, 0.4214214, 0.213123, 0.3214124, 0.1414124, 0.3214521251, 0.213123, 0.3214124, 0.1414124, 0.3214521251]\nlet vector2 = [0.213123, 0.3214124, 0.1414124, 0.3214521251, 0.213123, 0.3214124, 0.1414124, 0.3214521251, 0.213123, 0.511515, 0.1414124, 0.3214521251, 0.213123, 0.3214124, 0.1414124, 0.3214521251, 0.213123, 0.3214124, 0.1414124, 0.3213213]\nlet similarity = Vector.cosineSimilarity(a: vector1, b: vector2)\nprint(similarity) \u002F\u002F0.9510201910206734\n```\n>在数据分析中，余弦相似度 (Cosine Similarity) 是衡量两个数字序列之间相似程度的指标。\n\n\u003Cimg width=\"574\" alt=\"Screenshot 2022-12-19 at 6 00 33 PM\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMacPaw_OpenAI_readme_a577b81fa86c.png\">\n\n关于余弦相似度 (Cosine Similarity) 的更多信息请[点击这里](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FCosine_similarity)。\n\n## 助手\n\n查看 [助手文档](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fassistants) 以获取更多信息。\n\n### 创建助手\n\n示例：创建助手\n```swift\nlet query = AssistantsQuery(model: Model.gpt4_o_mini, name: name, description: description, instructions: instructions, tools: tools, toolResources: toolResources)\nopenAI.assistantCreate(query: query) { result in\n   \u002F\u002FHandle response here\n}\n```\n\n### 修改助手\n\n示例：修改助手\n```swift\nlet query = AssistantsQuery(model: Model.gpt4_o_mini, name: name, description: description, instructions: instructions, tools: tools, toolResources: toolResources)\nopenAI.assistantModify(query: query, assistantId: \"asst_1234\") { result in\n    \u002F\u002FHandle response here\n}\n```\n\n### 列出助手\n\n示例：列出助手\n```swift\nopenAI.assistants() { result in\n   \u002F\u002FHandle response here\n}\n```\n\n### 线程\n\n查看 [线程文档](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fthreads) 以获取更多信息。\n\n#### 创建线程\n\n示例：创建线程\n```swift\nlet threadsQuery = ThreadsQuery(messages: [Chat(role: message.role, content: message.content)])\nopenAI.threads(query: threadsQuery) { result in\n  \u002F\u002FHandle response here\n}\n```\n\n#### 创建并运行线程\n\n示例：创建并运行线程\n```swift\nlet threadsQuery = ThreadQuery(messages: [Chat(role: message.role, content: message.content)])\nlet threadRunQuery = ThreadRunQuery(assistantId: \"asst_1234\"  thread: threadsQuery)\nopenAI.threadRun(query: threadRunQuery) { result in\n  \u002F\u002FHandle response here\n}\n```\n\n#### 获取线程消息\n\n查看 [消息文档](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fmessages) 以获取更多信息。\n\n示例：获取线程消息\n```swift\nopenAI.threadsMessages(threadId: currentThreadId) { result in\n  \u002F\u002FHandle response here\n}\n```\n\n#### 向线程添加消息\n\n示例：向线程添加消息\n```swift\nlet query = MessageQuery(role: message.role.rawValue, content: message.content)\nopenAI.threadsAddMessage(threadId: currentThreadId, query: query) { result in\n  \u002F\u002FHandle response here\n}\n```\n\n### 运行\n\n查看 [运行文档](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fruns) 以获取更多信息。\n\n#### 创建运行\n\n示例：创建运行\n```swift\nlet runsQuery = RunsQuery(assistantId:  currentAssistantId)\nopenAI.runs(threadId: 
threadsResult.id, query: runsQuery) { result in\n  \u002F\u002FHandle response here\n}\n```\n\n#### 检索运行\n\n示例：检索运行\n```swift\nopenAI.runRetrieve(threadId: currentThreadId, runId: currentRunId) { result in\n  \u002F\u002FHandle response here\n}\n```\n\n#### 检索运行步骤\n\n示例：检索运行步骤\n```swift\nopenAI.runRetrieveSteps(threadId: currentThreadId, runId: currentRunId) { result in\n  \u002F\u002FHandle response here\n}\n```\n\n#### 提交运行的工具输出\n\n示例：提交运行的工具输出\n```swift\nlet output = RunToolOutputsQuery.ToolOutput(toolCallId: \"call123\", output: \"Success\")\nlet query = RunToolOutputsQuery(toolOutputs: [output])\nopenAI.runSubmitToolOutputs(threadId: currentThreadId, runId: currentRunId, query: query) { result in\n  \u002F\u002FHandle response here\n}\n```\n\n### 文件\n\n查看 [文件文档](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Ffiles) 以获取更多信息。\n\n#### 上传文件\n\n示例：上传文件\n```swift\nlet query = FilesQuery(purpose: \"assistants\", file: fileData, fileName: url.lastPathComponent, contentType: \"application\u002Fpdf\")\nopenAI.files(query: query) { result in\n  \u002F\u002FHandle response here\n}\n```\n\n## 支持其他提供商\n\n> TL;DR 在配置 (Configuration) 中使用 `.relaxed` 解析选项\n\n此 SDK (软件开发工具包) 对 Gemini、Perplexity 等其他提供商的支持有限。\n\n此 SDK 的首要目标是 OpenAI，主要规则是所有主要类型必须完全兼容 [OpenAI 的 API 参考](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fintroduction)。如果说明某个字段应为可选，那么在此 SDK 的主要查询\u002F结果 (Query\u002FResult) 类型子集中，该字段也必须是可选的。参考中声明的其他信息（如默认值）也是如此。\n\n尽管如此，我们仍希望为其他提供商提供支持。\n\n### 选项 1：使用 `.relaxed` 解析选项\n\n`.relaxed` 解析选项可以处理响应中缺失和额外的键\u002F值。它应该足以满足大多数用例。如果有任何未覆盖的情况，请告知我们。\n\n### 选项 2：单独指定解析选项\n#### 处理响应中缺失的键\n某些提供商返回的响应并不完全符合 OpenAI 的方案。例如，Gemini 聊天完成响应省略了 `id` 字段，而该字段是 OpenAI API 参考 (API Reference) 文档中的必填字段。\n\n在这种情况下，请使用 `fillRequiredFieldIfKeyNotFound` 解析选项 (Parsing Option)，如下所示：\n```swift\nlet configuration = OpenAI.Configuration(token: \"\", parsingOptions: .fillRequiredFieldIfKeyNotFound)\n```\n\n#### 处理响应中缺失的值\n某些字段在 OpenAI 中要求必须存在（非可选），但其他提供商可能会为它们返回 `null`。\n\n使用 `.fillRequiredFieldIfValueNotFound` 来处理缺失的值。\n\n#### 如果提供商返回了额外的字段怎么办？\n目前我们通过将额外字段添加到主模型集来简单处理此类情况。这是可行的，因为可选字段不会破坏或与 OpenAI 的方案冲突。目前添加了以下额外字段：\n\n`ChatResult`\n\n* `citations` [Perplexity](https:\u002F\u002Fdocs.perplexity.ai\u002Fapi-reference\u002Fchat-completions#response-citations)\n\n`ChatResult.Choice.Message`\n\n* `reasoningContent` [Grok](https:\u002F\u002Fdocs.x.ai\u002Fdocs\u002Fapi-reference#chat-completions), [DeepSeek](https:\u002F\u002Fapi-docs.deepseek.com\u002Fapi\u002Fcreate-chat-completion#responses)\n* `reasoning` [OpenRouter](https:\u002F\u002Fopenrouter.ai\u002Fdocs\u002Fuse-cases\u002Freasoning-tokens#basic-usage-with-reasoning-tokens)\n\n## 示例项目\n\n你可以在 [Demo](\u002FDemo) 文件夹中找到示例 iOS 应用程序。\n\n![mockuuups-iphone-13-pro-mockup-perspective-right](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMacPaw_OpenAI_readme_b7bebf847e02.png)\n\n## 贡献指南\n请确保你的拉取请求 (Pull Requests) 对任何查看者来说都清晰明了。  \n将 `main` 设置为目标分支 (Branch)。\n\n#### 在命名 PR 和分支时使用 [约定式提交 (Conventional Commits)](https:\u002F\u002Fwww.conventionalcommits.org\u002Fen\u002Fv1.0.0\u002F) 原则：\n\n- `Feat: ...` 用于新功能和新功能实现。\n- `Bug: ...` 用于错误修复。\n- `Fix: ...` 用于小问题修复，如代码中的拼写错误或不准确之处。\n- `Chore: ...` 用于枯燥的工作，如代码润色、重构、废弃修复等。\n\nPR 命名示例：`Feat: Add Threads API handling` 或 `Bug: Fix message result duplication`\n\n分支命名示例：`feat\u002Fadd-threads-API-handling` 或 `bug\u002Ffix-message-result-duplication`\n\n#### 按照以下格式编写拉取请求的描述：\n- 内容 (What)\n\n  ...\n- 原因 (Why)\n  \n  ...\n- 受影响区域 (Affected Areas)\n\n  ...\n- 更多信息 (More Info)\n\n  
...\n\n如果需要且可能，我们很感激你在代码中包含测试。❤️\n\n## 链接\n\n- [OpenAI 文档](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fintroduction)\n- [OpenAI Playground](https:\u002F\u002Fplatform.openai.com\u002Fplayground)\n- [OpenAI 示例](https:\u002F\u002Fplatform.openai.com\u002Fexamples)\n- [Dall-E](https:\u002F\u002Flabs.openai.com\u002F)\n- [余弦相似度 (Cosine Similarity)](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FCosine_similarity)\n\n## 许可证\n\n```\nMIT License\n\nCopyright (c) 2023 MacPaw Inc.\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and\u002For sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n```","# OpenAI (Swift SDK) 快速上手指南\n\n本指南基于 MacPaw 维护的 Swift 社区版 OpenAI SDK。该库实现了与 OpenAI 公共 API 兼容的接口，支持在 iOS、macOS 等 Apple 平台上调用大模型服务。\n\n## 环境准备\n\n- **开发语言**: Swift 5+\n- **开发环境**: Xcode (推荐) 或支持 Swift Package Manager 的编辑器\n- **系统要求**: iOS 13.0+ \u002F macOS 10.15+ \u002F tvOS 13.0+ \u002F watchOS 6.0+\n- **前置依赖**: 无外部依赖，通过 Swift Package Manager 管理\n\n> ⚠️ **安全提示**: API Token 是敏感信息。请勿将其暴露在任何客户端代码（如浏览器、App）中。生产环境请求应通过您的后端服务器中转，从环境变量或密钥管理服务中安全加载 API Key。\n\n## 安装步骤\n\n推荐使用 **Swift Package Manager** 将库集成到 Xcode 项目中。\n\n### 方法一：Xcode 界面操作\n\n1. 打开 Xcode 项目。\n2. 点击菜单栏 **File > Add Package Dependencies...**。\n3. 输入仓库地址：`https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI.git`。\n4. 选择依赖规则（例如：\"Up to Next Major Version\"）。\n5. 点击 **Add Package**。\n\n### 方法二：Package.swift 文件配置\n\n如果您使用命令行构建，可在 `Package.swift` 中添加以下依赖：\n\n```swift\ndependencies: [\n    .package(url: \"https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI.git\", branch: \"main\")\n]\n```\n\n## 基本使用\n\n初始化 `OpenAI` 类是调用 API 的入口。您需要先获取 OpenAI 组织的 API Token。\n\n### 1. 初始化客户端\n\n最简单的初始化方式如下：\n\n```swift\nlet openAI = OpenAI(apiToken: \"YOUR_TOKEN_HERE\")\n```\n\n如需自定义配置（如组织 ID、超时时间），可使用 Configuration：\n\n```swift\nlet configuration = OpenAI.Configuration(token: \"YOUR_TOKEN_HERE\", organizationIdentifier: \"YOUR_ORGANIZATION_ID_HERE\", timeoutInterval: 60.0)\nlet openAI = OpenAI(configuration: configuration)\n```\n\n### 2. 发送聊天请求\n\n使用 `chats` 方法生成文本。以下是完整的异步调用示例：\n\n```swift\nlet query = ChatQuery(\n    messages: [\n        .user(.init(content: .string(\"Who are you?\")))\n    ],\n    model: .gpt4_o\n)\n\nlet result = try await openAI.chats(query: query)\n\nprint(result.choices.first?.message.content ?? \"\")\n\u002F\u002F 控制台输出：\n\u002F\u002F I'm an AI language model created by OpenAI, designed to assist with a wide range of questions and tasks. How can you help you today?\n```\n\n### 3. 
取消请求\n\n对于 Swift Concurrency 调用，直接取消任务即可自动取消底层网络请求：\n\n```swift\nlet task = Task {\n    do {\n        let chatResult = try await openAIClient.chats(query: .init(messages: [], model: \"asd\"))\n    } catch {\n        \u002F\u002F Handle cancellation or error\n    }\n}\n            \ntask.cancel()\n```","某电商 App 的 iOS 开发团队正在为客服模块集成智能问答功能，需要稳定调用大语言模型处理用户咨询并返回结构化数据。\n\n### 没有 OpenAI 时\n- 开发者需手动封装 URLSession 发送 HTTP 请求，代码量大且容易遗漏关键参数。\n- JSON 反序列化过程繁琐，一旦字段不匹配会导致运行时崩溃，调试困难。\n- 实现流式对话体验需要自行处理 SSE 事件解析，逻辑复杂且占用大量内存。\n- 若后续想切换至其他模型提供商，必须重写整个网络通信层的适配代码。\n\n### 使用 OpenAI 后\n- 通过 OpenAI 提供的 SDK 初始化实例，仅需几行代码即可完成认证与连接配置。\n- 内置强类型结构体自动映射 API 响应，编译期即可捕获错误，大幅减少崩溃风险。\n- 原生支持流式输出接口，轻松实现打字机效果，无需额外编写 SSE 解析逻辑。\n- 兼容多种模型及第三方提供商，扩展新功能时无需重构网络层，维护效率显著提升。\n\nOpenAI 显著降低了 Swift 应用接入大模型的技术门槛，让开发者能专注于业务逻辑而非底层网络细节。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMacPaw_OpenAI_b7bebf84.png","MacPaw","MacPaw Inc.","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FMacPaw_cb572034.png","Befriend people with technology",null,"macpaw","https:\u002F\u002Fmacpaw.com","https:\u002F\u002Fgithub.com\u002FMacPaw",[84],{"name":85,"color":86,"percentage":87},"Swift","#F05138",100,2886,511,"2026-04-03T19:17:34","MIT","macOS","未说明",{"notes":95,"python":93,"dependencies":96},"该工具为 Swift 语言编写的 API 客户端 SDK，用于调用 OpenAI 公共接口，无需本地计算资源。开发环境需安装 Xcode 及 Swift Package Manager。必须配置 OpenAI API Token，且出于安全考虑，生产环境应通过后端代理请求，避免在前端代码中暴露密钥。",[93],[53,13,14,15],[99,100,101,102,103,104],"ai","openai","openai-api","spm","swift","swiftpackagemanager","2026-03-27T02:49:30.150509","2026-04-06T05:16:13.038648",[108,113,118,123,128],{"id":109,"question_zh":110,"answer_zh":111,"source_url":112},3281,"流式传输（Streaming）调用时报错 'Invalid value around line 1' 是什么原因？","这是早期版本的已知 Bug，导致流式数据解析失败。修复方案是更新至支持流式的版本（参考 PR #57）。使用方法上，需采用回调模式处理结果：`openAI.chats(query: ..., stream: true) { result in ... 
}`。在回调的 `.success` 分支中，通过 `res.choices.first?.delta?.content` 获取每次推送的增量内容。","https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fissues\u002F14",{"id":114,"question_zh":115,"answer_zh":116,"source_url":117},3282,"向 Gemini 模型发送超过 2 张图片进行识别时为什么会失败？","失败原因通常是请求大小超过了限制或中间服务器转发问题。Google Gemini API 对内联图片数据的总请求大小有限制（文本、系统指令和内联字节总和不超过 20MB）。建议检查代码中 `imageData.length` 确认单张图片实际大小。如果通过自建服务器转发，可能是接口限制了大小。对于大请求，建议使用 File API 上传文件而非直接内联图片数据。","https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fissues\u002F363",{"id":119,"question_zh":120,"answer_zh":121,"source_url":122},3283,"使用 Gemini Flash 模型时流式响应解码失败怎么办？","这通常是因为 Gemini 返回的流式数据省略了部分标准字段（如 id, service_tier），导致解析器不兼容。排查建议：在流接收处添加打印日志查看原始数据（raw data）及解析步骤。尝试使用简短提示词（如“只说 3 个字”）复现，以便减少输出量并快速定位漏斗中哪一步出错。同时检查配置中的 host 和 basePath 是否正确。","https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fissues\u002F283",{"id":124,"question_zh":125,"answer_zh":126,"source_url":127},3284,"该库是否支持实时音频流式传输（Text-to-Speech）？","已支持。相关功能已在 PR #189 中合并。关于音频播放，官方建议结合第三方库如 `swift-chunked-audio-player` 来实现音频块的连续播放。开发者可以参考该库的文档或在 Demo 应用中查找相关示例集成。","https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fissues\u002F185",{"id":129,"question_zh":130,"answer_zh":131,"source_url":132},3285,"Demo 应用构建失败提示缺少 OpenAI 库如何解决？","这是因为 Xcode 项目依赖设置中未正确包含 OpenAI 库。解决方法是在 Xcode 项目的依赖设置（Dependency Settings）中手动添加该库（点击加号图标添加），确保项目能正确链接库文件后即可成功构建并运行 App。","https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fissues\u002F58",[134,139,144,149,154,159,164,169,174,179,184,189,194,199,204,209,214,219,224,229],{"id":135,"version":136,"summary_zh":137,"released_at":138},102829,"0.3.7","## What's Changed\r\n* chore: update codeql actions versions by @art-tykh in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F274\r\n* Add gpt-4.5 by @Krivoblotsky in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F275\r\n* feat:Support reasoningContent in StreamResult by @LimChihi in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F279\r\n* Add create audio speech stream support by @batanus in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F189\r\n* Make token optional by @nezhyborets in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F280\r\n* Fix:The speed parameter type for the audioCreatSpeech request by @qaz1991815 in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F206\r\n* Add support for Perplexity-style citations by @SplittyDev in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F262\r\n* Add struct for Chat Completion Response Message by @nezhyborets in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F284\r\n* Add Strict Concurrency support by @nezhyborets in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F277\r\n* Add reasoning and reasoningContent to Message by @nezhyborets in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F289\r\n* Chore: adding `required` to ChatCompletionFunctionCallOptionParam enum by @elliottburris in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F215\r\n* Chore: Added new voice options added by OpenAI to TTS models. 
by @jyothishjohnson in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F290\r\n* Handle HTTP status errors in streaming sessions by @SplittyDev in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F287\r\n* Improve function call handling and update default model to GPT4-O-Mini by @stiiveo in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F285\r\n* Feat: support gpt-4o-mini-tts model and new speech params by @sakeven in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F291\r\n* Support for verbose_json for audio transcriptions by @azzever in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F199\r\n* Add SSLDelegateProtocol to StreamingSession by @batanus in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F191\r\n\r\n## New Contributors\r\n* @batanus made their first contribution in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F189\r\n* @qaz1991815 made their first contribution in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F206\r\n* @elliottburris made their first contribution in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F215\r\n* @jyothishjohnson made their first contribution in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F290\r\n* @stiiveo made their first contribution in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F285\r\n* @sakeven made their first contribution in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F291\r\n* @azzever made their first contribution in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F199\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fcompare\u002F0.3.6...0.3.7","2025-03-21T18:38:27",{"id":140,"version":141,"summary_zh":142,"released_at":143},102830,"0.3.6","## What's Changed\r\n* Bug\u002Fcreate speech response decoding by @nezhyborets in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F273\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fcompare\u002F0.3.5...0.3.6","2025-02-26T10:12:19",{"id":145,"version":146,"summary_zh":147,"released_at":148},102831,"0.3.5","## What's Changed\r\n* feat: Add custom api version support by @LimChihi in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F268\r\n* feat:add stream_options to query by @LimChihi in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F269\r\n\r\n## New Contributors\r\n* @LimChihi made their first contribution in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F268\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fcompare\u002F0.3.4...0.3.5","2025-02-23T05:10:32",{"id":150,"version":151,"summary_zh":152,"released_at":153},102832,"0.3.4","## What's Changed\r\n* Add customHeaders to configuration by @nezhyborets in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F263\r\n* StreamableQuery parity by @kalafus in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F157\r\n* Chore: Add initializer for `FunctionCall` by @thekoc in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F186\r\n* Bug: Fix uploading file not working in Assistants, sometimes even causing crashes by @irons163 in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F264\r\n* Bug: Fix error parsing in StreamInterpreter by 
@nezhyborets in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F266\r\n\r\n## New Contributors\r\n* @thekoc made their first contribution in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F186\r\n* @irons163 made their first contribution in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F264\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fcompare\u002F0.3.3...0.3.4","2025-02-17T19:32:55",{"id":155,"version":156,"summary_zh":157,"released_at":158},102833,"0.3.3","## What's Changed\r\n* Support comments in streaming response by @SplittyDev in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F259\r\n* Implement cancellation by utilizing native URLSession functionality by @nezhyborets in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F251\r\n* Add language ids for more swift code blocks in readme by @niclego in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F254\r\n* Add reasoningEffort parameter to ChatQuery by @dounan in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F255\r\n* Feat: Add o3-mini to Model list by @yukiny0811 in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F256\r\n\r\n## New Contributors\r\n* @niclego made their first contribution in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F254\r\n* @yukiny0811 made their first contribution in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F256\r\n* @SplittyDev made their first contribution in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F259\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fcompare\u002F0.3.2...0.3.3","2025-02-11T15:13:01",{"id":160,"version":161,"summary_zh":162,"released_at":163},102834,"0.3.2","## What's Changed\r\n* Feat: Structured Outputs by @andgordio in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F225\r\n* Feat: Add developer role by @dounan in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F247\r\n* Bug: Add initializer for ToolCallParam.FunctionCall by @dounan in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F248\r\n* Update AssistantsQuery in Readme by @nezhyborets in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F246\r\n\r\n## New Contributors\r\n* @andgordio made their first contribution in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F225\r\n* @dounan made their first contribution in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F247\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fcompare\u002F0.3.1...0.3.2","2025-01-31T10:02:52",{"id":165,"version":166,"summary_zh":167,"released_at":168},102818,"0.4.8","## What's Changed\r\n* Add none reasoning effort, add gpt-5.1 and gpt-5.1-chat-latest models by @neelvirdy in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F393\r\n* Add verbosity support for Responses API by @neelvirdy in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F395\r\n* Added priority to service tier by @tzdesign in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F405\r\n\r\n## New Contributors\r\n* @tzdesign made their first contribution in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F405\r\n\r\n**Full Changelog**: 
https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fcompare\u002F0.4.7...0.4.8","2026-03-30T14:26:58",{"id":170,"version":171,"summary_zh":172,"released_at":173},102819,"0.4.7","## What's Changed\r\n* Add 'minimal' reasoning effort by @zats in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F384\r\n* Update LinkPreview.swift by @mehmetbaykar in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F377\r\n* Fix: Return after decoding ResponseStreamEvent successfully by @neelvirdy in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F383\r\n* Make ChoiceDeltaToolCall.index optional by @batanus in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F380\r\n* Fix: Consider event type in data payload when processing model response streams by @JaredConover in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F387\r\n* Add Double\u002FInt overloads for JSON schema numeric constraints by @mi12-root in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F391\r\n\r\n## New Contributors\r\n* @zats made their first contribution in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F384\r\n* @mehmetbaykar made their first contribution in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F377\r\n* @JaredConover made their first contribution in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F387\r\n* @mi12-root made their first contribution in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F391\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fcompare\u002F0.4.6...0.4.7","2025-10-26T22:03:23",{"id":175,"version":176,"summary_zh":177,"released_at":178},102820,"0.4.6","## What's Changed\r\n* Fix derived schema example in Readme by @nezhyborets in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F372\r\n* Add ServiceTier on_demand case by @batanus in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F373\r\n* Add GPT-5 models by @piscue in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F374\r\n\r\n## New Contributors\r\n* @piscue made their first contribution in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F374\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fcompare\u002F0.4.5...0.4.6","2025-08-09T17:51:58",{"id":180,"version":181,"summary_zh":182,"released_at":183},102821,"0.4.5","## What's Changed\r\n* 351 add json schema builder for response api structured output return format by @nezhyborets in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F362\r\n* Update Readme by @nezhyborets in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F368\r\n* chore: Put Sendable to URLSessionDataDelegateProtocol by @nezhyborets in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F360\r\n* chore: Fix and test request interception by @nezhyborets in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F359\r\n* chore: Rewrite ModelResponseEventsStreamInterpreterTests by @nezhyborets in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F358\r\n* Make ID optional in ToolCallParam by @batanus in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F366\r\n* Add dimensions field to EmbeddingsQuery by @Priva28 in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F367\r\n* Remove AnyJSONSchema. 
Make JSONSchema enum by @nezhyborets in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F365\r\n* 138 timestamps from whisper api by @nezhyborets in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F369\r\n\r\n## New Contributors\r\n* @Priva28 made their first contribution in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F367\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fcompare\u002F0.4.4...0.4.5","2025-07-11T10:08:41",{"id":185,"version":186,"summary_zh":187,"released_at":188},102822,"0.4.4","## What's Changed\r\n* Add verse voice by @batanus in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F334\r\n* Make system_fingerprint optional in ChatResult by @uuneo in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F333\r\n* Fix SSE parser by @nezhyborets in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F336\r\n* Handle empty usage in ChatStreamResult by @nezhyborets in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F339\r\n* Feat\u002Fupdate chat completion types by @nezhyborets in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F337\r\n* Issue #345 fix by @maxgribov in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F346\r\n* Make SSLDelegateProtocol completionHandler parameter Sendable by @batanus in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F344\r\n* Update ChatQuery and Result examples by @nezhyborets in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F349\r\n* Extract response handling by @nezhyborets in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F350\r\n* Append Contributing with Code Generation by @nezhyborets in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F352\r\n* Update Responses API with new tools by @nezhyborets in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F355\r\n* Support MCP with Demo by @atom2ueki in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F357\r\n* Support OpenRouter models endpoint by @SplittyDev in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F356\r\n\r\n## New Contributors\r\n* @uuneo made their first contribution in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F333\r\n* @maxgribov made their first contribution in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F346\r\n* @atom2ueki made their first contribution in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F357\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fcompare\u002F0.4.3...0.4.4","2025-06-26T18:25:11",{"id":190,"version":191,"summary_zh":192,"released_at":193},102823,"0.4.3","## What's Changed\r\n* Responses API by @nezhyborets in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F319\r\n* 327 support none value for chatqueryreasoningeffort by @nezhyborets in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F328\r\n* Audio transcription streaming by @lhr0909 in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F329\r\n* feat: image edits for gpt-image-1 by @yyjim in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F330\r\n* Make system_fingerprint optional in ChatResult by @nezhyborets in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F332\r\n\r\n## New 
Contributors\r\n* @lhr0909 made their first contribution in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F329\r\n* @yyjim made their first contribution in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F330\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fcompare\u002F0.4.2...0.4.3","2025-05-16T07:23:20",{"id":195,"version":196,"summary_zh":197,"released_at":198},102824,"0.4.2","## What's Changed\r\n* Implement all specd fields in JSONSchema by @nezhyborets in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F322\r\n* Add GPT-4.1 models by @batanus in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F320\r\n* Fix role is getting lost in Chat Stream demo by @nezhyborets in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F323\r\n* Add comments on relaxed parsing options by @nezhyborets in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F325\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fcompare\u002F0.4.1...0.4.2","2025-04-28T20:32:59",{"id":200,"version":201,"summary_zh":202,"released_at":203},102825,"0.4.1","## What's Changed\r\n* Fix streaming by @nezhyborets in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F318\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fcompare\u002F0.4.0...0.4.1","2025-04-14T09:15:54",{"id":205,"version":206,"summary_zh":207,"released_at":208},102826,"0.4.0","## What's Changed\r\n* Audio stream fix. by @rkvanadea in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F303\r\n* Fix call order and write more tests by @nezhyborets in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F304\r\n* Add init for audio options. by @rkvanadea in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F305\r\n* Add gpt_4o_transcribe and gpt_4o_mini_transcribe models. by @rkvanadea in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F306\r\n* Public init for assistant audio message. 
by @rkvanadea in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F307\r\n* Add parsing options for valueNotFound case by @nezhyborets in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F310\r\n* Support `reasoning` in ChatStreamResult deltas by @SplittyDev in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F311\r\n* Add `error` finish reason by @SplittyDev in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F312\r\n* Unify `reasoning` and `reasoningContent` fields by @SplittyDev in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F313\r\n* Support dynamic json schema response format by @dounan in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F314\r\n* Rewrite the parser by more closely following spec by @nezhyborets in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F316\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fcompare\u002F0.3.9...0.4.0","2025-04-13T12:16:31",{"id":210,"version":211,"summary_zh":212,"released_at":213},102827,"0.3.9","## What's Changed\r\n* Add Middlewares by @batanus in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F295\r\n* Added support for audio-preview models, including updates to query pa… by @rkvanadea in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F301\r\n* Add ParsingOptions and use for ChatStreamResult by @nezhyborets in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F299\r\n* Make `created: TimeInterval` optional by @JackuXL in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F282\r\n* Feat: Get token usage data for streamed chat completion response. by @sdimka in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F211\r\n\r\n## New Contributors\r\n* @rkvanadea made their first contribution in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F301\r\n* @JackuXL made their first contribution in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F282\r\n* @sdimka made their first contribution in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F211\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fcompare\u002F0.3.8...0.3.9","2025-03-31T14:01:56",{"id":215,"version":216,"summary_zh":217,"released_at":218},102828,"0.3.8","## What's Changed\r\n* Make toolCalls and annotations optional by @nezhyborets in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F294\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fcompare\u002F0.3.7...0.3.8","2025-03-24T17:59:57",{"id":220,"version":221,"summary_zh":222,"released_at":223},102835,"0.3.1","## What's Changed\r\n* fixed verboseJson encoding by @Frank-Buss in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F236\r\n* Fix: Rename redundant type names by @shu223 in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F209\r\n* PCM support for text-to-speech by @vilcsak in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F183\r\n* feat(basePath): enable base path for configuration by @metrue in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F69\r\n* remove obsolete, inactive 'edits' endpoint by @kalafus in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F159\r\n* remove legacy completions endpoint by @kalafus in 
https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F160\r\n* feat: Assistants API Beta Implemented by @cdillard in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F140\r\n* Feat\u002Fo1 by @nezhyborets in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F245\r\n\r\n## New Contributors\r\n* @Frank-Buss made their first contribution in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F236\r\n* @shu223 made their first contribution in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F209\r\n* @vilcsak made their first contribution in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F183\r\n* @metrue made their first contribution in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F69\r\n* @cdillard made their first contribution in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F140\r\n* @nezhyborets made their first contribution in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F245\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fcompare\u002F0.3.0...0.3.1","2025-01-24T18:33:24",{"id":225,"version":226,"summary_zh":227,"released_at":228},102836,"0.3.0","## What's Changed\r\n* chore: add gpt-4o-mini support by @xAstralMars in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F219\r\n\r\n## New Contributors\r\n* @xAstralMars made their first contribution in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F219\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fcompare\u002F0.2.9...0.3.0","2024-07-18T19:39:11",{"id":230,"version":231,"summary_zh":232,"released_at":233},102837,"0.2.9","## What's Changed\r\n* Adds gpt-4o by @kelvinlauKL in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F207\r\n\r\n## New Contributors\r\n* @kelvinlauKL made their first contribution in https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fpull\u002F207\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FMacPaw\u002FOpenAI\u002Fcompare\u002F0.2.8...0.2.9","2024-05-15T07:33:36"]
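
下面是一个配合上文 FAQ 中 `chatsStream` 回调用法的最小示意代码：在 `.success` 分支里读取 `choices.first?.delta.content` 并自行拼接成完整回复。注意这只是草图，`ChatQuery` 的消息构造方式、模型常量以及 `delta` 字段是否为可选类型（部分版本需写作 `delta?.content`）均可能随库版本变化，请以所用版本的接口为准。

```swift
import OpenAI

// 示意代码：流式回调中读取增量内容，字段与构造方式以所用版本为准
let openAI = OpenAI(apiToken: "YOUR_TOKEN")

let query = ChatQuery(
    messages: [.user(.init(content: .string("用一句话介绍 Swift")))],
    model: "gpt-4o-mini" // Model 为 String 别名，这里直接用字符串示意
)

var fullText = ""
openAI.chatsStream(query: query) { partialResult in
    switch partialResult {
    case .success(let res):
        // 每次推送只包含增量内容，需要自行累加
        if let delta = res.choices.first?.delta.content {
            fullText += delta
            print("增量：\(delta)")
        }
    case .failure(let error):
        print("流式请求出错：\(error)")
    }
} completion: { error in
    if error == nil {
        print("完整回复：\(fullText)")
    }
}
```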
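
针对上文 FAQ 提到的 Gemini 内联图片约 20MB 的请求限制，可以在发送前粗略估算请求体大小，超限时再改用 File API 上传。下面的辅助函数仅为演示思路的假设实现（阈值、函数名与估算方式均为示意，实际限制以官方文档为准）：

```swift
import Foundation

// 示意代码：发送前粗略估算内联数据是否可能超过 Gemini 约 20MB 的请求限制
let inlineRequestLimit = 20 * 1024 * 1024 // 20MB，假设阈值

func exceedsInlineLimit(images: [Data], promptText: String) -> Bool {
    // 内联图片通常需 base64 编码，体积约为原始字节的 4/3（向上取整到 4 的倍数）
    let encodedImageBytes = images.reduce(0) { $0 + (($1.count + 2) / 3) * 4 }
    let textBytes = promptText.utf8.count
    return encodedImageBytes + textBytes > inlineRequestLimit
}

// 用法示例：超限时改用 File API 上传，而非直接内联图片数据
// if exceedsInlineLimit(images: [img1, img2, img3], promptText: prompt) { /* 走 File API */ }
```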
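
排查上文 FAQ 中 Gemini Flash 流式解码失败的问题时，可以先确认自定义的 host 与 basePath 配置无误，再用极短的提示词复现以减少输出量、逐步定位解码失败的位置。下面的配置仅为示意：端点地址、模型名以及 `OpenAI.Configuration` 的参数名与默认值均属假设，请以实际服务与库版本为准。

```swift
import OpenAI

// 示意代码：为兼容 OpenAI 协议的第三方端点配置自定义 host 与 basePath
// 端点与模型名为假设示例，请替换为实际使用的服务
let configuration = OpenAI.Configuration(
    token: "YOUR_TOKEN",
    host: "generativelanguage.googleapis.com",
    basePath: "/v1beta/openai"
)
let client = OpenAI(configuration: configuration)

// 用“只说 3 个字”这类短提示词复现，便于定位是哪一步解码失败
let query = ChatQuery(
    messages: [.user(.init(content: .string("只说 3 个字")))],
    model: "gemini-2.0-flash"
)

client.chatsStream(query: query) { partialResult in
    switch partialResult {
    case .success(let res):
        print("收到增量：\(res.choices.first?.delta.content ?? "")")
    case .failure(let error):
        // 打印完整错误，便于确认是哪个字段解码失败
        print("解码或请求失败：\(error)")
    }
} completion: { error in
    print("流结束，error = \(String(describing: error))")
}
```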