[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-jamesrochabrun--SwiftOpenAI":3,"tool-jamesrochabrun--SwiftOpenAI":64},[4,17,27,35,44,52],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",154349,2,"2026-04-13T23:32:16",[13,14,15],"开发框架","Agent","语言模型","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":10,"last_commit_at":23,"category_tags":24,"status":16},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[25,14,26,13],"插件","图像",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":10,"last_commit_at":33,"category_tags":34,"status":16},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 
链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[25,13],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":41,"last_commit_at":42,"category_tags":43,"status":16},4487,"LLMs-from-scratch","rasbt\u002FLLMs-from-scratch","LLMs-from-scratch 是一个基于 PyTorch 的开源教育项目，旨在引导用户从零开始一步步构建一个类似 ChatGPT 的大型语言模型（LLM）。它不仅是同名技术著作的官方代码库，更提供了一套完整的实践方案，涵盖模型开发、预训练及微调的全过程。\n\n该项目主要解决了大模型领域“黑盒化”的学习痛点。许多开发者虽能调用现成模型，却难以深入理解其内部架构与训练机制。通过亲手编写每一行核心代码，用户能够透彻掌握 Transformer 架构、注意力机制等关键原理，从而真正理解大模型是如何“思考”的。此外，项目还包含了加载大型预训练权重进行微调的代码，帮助用户将理论知识延伸至实际应用。\n\nLLMs-from-scratch 特别适合希望深入底层原理的 AI 开发者、研究人员以及计算机专业的学生。对于不满足于仅使用 API，而是渴望探究模型构建细节的技术人员而言，这是极佳的学习资源。其独特的技术亮点在于“循序渐进”的教学设计：将复杂的系统工程拆解为清晰的步骤，配合详细的图表与示例，让构建一个虽小但功能完备的大模型变得触手可及。无论你是想夯实理论基础，还是为未来研发更大规模的模型做准备",90106,3,"2026-04-06T11:19:32",[15,26,14,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":10,"last_commit_at":50,"category_tags":51,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 
提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,15],{"id":53,"name":54,"github_repo":55,"description_zh":56,"stars":57,"difficulty_score":10,"last_commit_at":58,"category_tags":59,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",85092,"2026-04-10T11:13:16",[26,60,61,25,14,62,15,13,63],"数据工具","视频","其他","音频",{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":70,"readme_en":71,"readme_zh":72,"quickstart_zh":73,"use_case_zh":74,"hero_image_url":75,"owner_login":76,"owner_name":77,"owner_avatar_url":78,"owner_bio":79,"owner_company":80,"owner_location":81,"owner_email":80,"owner_twitter":76,"owner_website":82,"owner_url":83,"languages":84,"stars":89,"forks":90,"last_commit_at":91,"license":92,"difficulty_score":10,"env_os":93,"env_gpu":94,"env_ram":94,"env_deps":95,"category_tags":102,"github_topics":103,"view_count":10,"oss_zip_url":80,"oss_zip_packed_at":80,"status":16,"created_at":111,"updated_at":112,"faqs":113,"releases":144},7301,"jamesrochabrun\u002FSwiftOpenAI","SwiftOpenAI","The most complete open-source Swift package for interacting with OpenAI's public API.","SwiftOpenAI 是一款专为 Swift 开发者打造的开源工具包，旨在让苹果生态下的应用轻松对接 OpenAI 的强大能力。它全面覆盖了 OpenAI 公共 API 的所有核心功能，从基础的文本对话、图像生成、语音转录，到高级的函数调用、结构化输出及微调训练，开发者无需重复造轮子，即可在 iOS、macOS、watchOS 甚至 Linux 平台上快速集成人工智能特性。\n\n这款工具主要解决了原生 Swift 项目调用外部 AI 接口时代码繁琐、适配困难的问题。通过提供简洁统一的接口，它显著降低了开发门槛，让开发者能专注于业务逻辑而非底层网络请求的处理。特别值得一提的是，SwiftOpenAI 不仅支持 Azure 和 AIProxy 等替代服务，还率先适配了最新的低延迟双向语音“实时 
API”以及助手流式传输功能，为构建即时互动的语音助手或复杂智能体应用提供了坚实的技术支撑。\n\n无论是希望为 App 增添智能对话功能的独立开发者，还是需要构建原型的研究人员，SwiftOpenAI 都是理想的选择。它遵循 MIT 协议，拥有活跃的社区支持，并兼容 Swift 5.9 及 Swift","SwiftOpenAI 是一款专为 Swift 开发者打造的开源工具包，旨在让苹果生态下的应用轻松对接 OpenAI 的强大能力。它全面覆盖了 OpenAI 公共 API 的所有核心功能，从基础的文本对话、图像生成、语音转录，到高级的函数调用、结构化输出及微调训练，开发者无需重复造轮子，即可在 iOS、macOS、watchOS 甚至 Linux 平台上快速集成人工智能特性。\n\n这款工具主要解决了原生 Swift 项目调用外部 AI 接口时代码繁琐、适配困难的问题。通过提供简洁统一的接口，它显著降低了开发门槛，让开发者能专注于业务逻辑而非底层网络请求的处理。特别值得一提的是，SwiftOpenAI 不仅支持 Azure 和 AIProxy 等替代服务，还率先适配了最新的低延迟双向语音“实时 API”以及助手流式传输功能，为构建即时互动的语音助手或复杂智能体应用提供了坚实的技术支撑。\n\n无论是希望为 App 增添智能对话功能的独立开发者，还是需要构建原型的研究人员，SwiftOpenAI 都是理想的选择。它遵循 MIT 协议，拥有活跃的社区支持，并兼容 Swift 5.9 及 SwiftUI，帮助各类技术背景的用户高效地将前沿 AI 模型融入自己的创意之中。","# SwiftOpenAI\n\u003Cimg width=\"1090\" alt=\"repoOpenAI\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjamesrochabrun_SwiftOpenAI_readme_6b9497efef51.png\">\n\n![iOS 15+](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FiOS-15%2B-blue.svg)\n![macOS 13+](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FmacOS-13%2B-blue.svg)\n![watchOS 9+](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FwatchOS-9%2B-blue.svg)\n![Linux](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLinux-blue.svg)\n[![MIT 
license](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLicense-MIT-blue.svg)](https:\u002F\u002Flbesson.mit-license.org\u002F)\n[![swift-version](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fswift-5.9-brightgreen.svg)](https:\u002F\u002Fgithub.com\u002Fapple\u002Fswift)\n[![swiftui-version](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fswiftui-brightgreen)](https:\u002F\u002Fdeveloper.apple.com\u002Fdocumentation\u002Fswiftui)\n[![xcode-version](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fxcode-15%20-brightgreen)](https:\u002F\u002Fdeveloper.apple.com\u002Fxcode\u002F)\n[![swift-package-manager](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fpackage%20manager-compatible-brightgreen.svg?logo=data:image\u002Fsvg+xml;base64,PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0iVVRGLTgiPz4KPHN2ZyB3aWR0aD0iNjJweCIgaGVpZ2h0PSI0OXB4IiB2aWV3Qm94PSIwIDAgNjIgNDkiIHZlcnNpb249IjEuMSIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB4bWxuczp4bGluaz0iaHR0cDovL3d3dy53My5vcmcvMTk5OS94bGluayI+CiAgICA8IS0tIEdlbmVyYXRvcjogU2tldGNoIDYzLjEgKDkyNDUyKSAtIGh0dHBzOi8vc2tldGNoLmNvbSAtLT4KICAgIDx0aXRsZT5Hcm91cDwvdGl0bGU+CiAgICA8ZGVzYz5DcmVhdGVkIHdpdGggU2tldGNoLjwvZGVzYz4KICAgIDxnIGlkPSJQYWdlLTEiIHN0cm9rZT0ibm9uZSIgc3Ryb2tlLXdpZHRoPSIxIiBmaWxsPSJub25lIiBmaWxsLXJ1bGU9ImV2ZW5vZGQiPgogICAgICAgIDxnIGlkPSJHcm91cCIgZmlsbC1ydWxlPSJub256ZXJvIj4KICAgICAgICAgICAgPHBvbHlnb24gaWQ9IlBhdGgiIGZpbGw9IiNEQkI1NTEiIHBvaW50cz0iNTEuMzEwMzQ0OCAwIDEwLjY4OTY1NTIgMCAwIDEzLjUxNzI0MTQgMCA0OSA2MiA0OSA2MiAxMy41MTcyNDE0Ij48L3BvbHlnb24+CiAgICAgICAgICAgIDxwb2x5Z29uIGlkPSJQYXRoIiBmaWxsPSIjRjdFM0FGIiBwb2ludHM9IjI3IDI1IDMxIDI1IDM1IDI1IDM3IDI1IDM3IDE0IDI1IDE0IDI1IDI1Ij48L3BvbHlnb24+CiAgICAgICAgICAgIDxwb2x5Z29uIGlkPSJQYXRoIiBmaWxsPSIjRUZDNzVFIiBwb2ludHM9IjEwLjY4OTY1NTIgMCAwIDE0IDYyIDE0IDUxLjMxMDM0NDggMCI+PC9wb2x5Z29uPgogICAgICAgICAgICA8cG9seWdvbiBpZD0iUmVjdGFuZ2xlIiBmaWxsPSIjRjdFM0FGIiBwb2ludHM9IjI3IDAgMzUgMCAzNyAxNCAyNSAxNCI+PC9wb2x5Z29uPgogICAgICAgIDwvZz4KICAgIDwvZz4KPC9zdmc+)](https:\u002F\u002Fgithub.com\u002Fapple\u002Fswif
t-package-manager)\n[![Buy me a coffee](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FBuy%20me%20a%20coffee-048754?logo=buymeacoffee)](https:\u002F\u002Fbuymeacoffee.com\u002Fjamesrochabrun)\n\nAn open-source Swift package designed for effortless interaction with OpenAI's public API. \n\n🚀 Now also available as [CLI](https:\u002F\u002Fgithub.com\u002Fjamesrochabrun\u002FSwiftOpenAICLI) and also as [MCP](https:\u002F\u002Fgithub.com\u002Fjamesrochabrun\u002FSwiftOpenAIMCP)\n\n## Table of Contents\n- [Description](#description)\n- [Getting an API Key](#getting-an-api-key)\n- [Installation](#installation)\n- [Compatibility](#compatibility)\n- [Usage](#usage)\n- [Collaboration](#collaboration)\n\n## Description\n\n`SwiftOpenAI` is an open-source Swift package that streamlines interactions with **all** OpenAI's API endpoints, now with added support for Azure, AIProxy, Assistant stream APIs, and the new **Realtime API** for low-latency bidirectional voice conversations.\n\n### OpenAI ENDPOINTS\n\n- [Audio](#audio)\n   - [Transcriptions](#audio-transcriptions)\n   - [Translations](#audio-translations)\n   - [Speech](#audio-Speech)\n   - [Realtime](#audio-realtime)\n- [Chat](#chat)\n   - [Function Calling](#function-calling)\n   - [Structured Outputs](#structured-outputs)\n   - [Vision](#vision)\n- [Response](#response)\n   - [Streaming Responses](#streaming-responses)\n- [Embeddings](#embeddings)\n- [Fine-tuning](#fine-tuning)\n- [Batch](#batch)\n- [Files](#files)\n- [Images](#images)\n- [Models](#models)\n- [Moderations](#moderations)\n\n### **BETA**\n- [Assistants](#assistants)\n   - [Assistants File Object](#assistants-file-object)\n- [Threads](#threads)\n- [Messages](#messages)\n   - [Message File Object](#message-file-object)\n- [Runs](#runs)\n   - [Run Step object](#run-step-object)\n   - [Run Step details](#run-step-details)\n- [Assistants Streaming](#assistants-streaming)\n   - [Message Delta Object](#message-delta-object)\n   - [Run Step Delta 
Object](#run-step-delta-object)\n- [Vector Stores](#vector-stores)\n   - [Vector store File](#vector-store-file)\n   - [Vector store File Batch](#vector-store-file-batch)\n\n## Getting an API Key\n\n⚠️ **Important**\n\nTo interact with OpenAI services, you'll need an API key. Follow these steps to obtain one:\n\n1. Visit [OpenAI](https:\u002F\u002Fwww.openai.com\u002F).\n2. Sign up for an [account](https:\u002F\u002Fplatform.openai.com\u002Fsignup) or [log in](https:\u002F\u002Fplatform.openai.com\u002Flogin) if you already have one.\n3. Navigate to the [API key page](https:\u002F\u002Fplatform.openai.com\u002Faccount\u002Fapi-keys) and follow the instructions to generate a new API key.\n\nFor more information, consult OpenAI's [official documentation](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002F).\n\n⚠️  Please take precautions to keep your API key secure per [OpenAI's guidance](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fauthentication):\n\n> Remember that your API key is a secret! Do not share it with others or expose\n> it in any client-side code (browsers, apps). Production requests must be\n> routed through your backend server where your API key can be securely\n> loaded from an environment variable or key management service.\n\nSwiftOpenAI has built-in support for AIProxy, which is a backend for AI apps, to satisfy this requirement.\nTo configure AIProxy, see the instructions [here](#aiproxy).\n\n\n## Installation\n\n### Swift Package Manager\n\n1. Open your Swift project in Xcode.\n2. Go to `File` ->  `Add Package Dependency`.\n3. In the search bar, enter [this URL](https:\u002F\u002Fgithub.com\u002Fjamesrochabrun\u002FSwiftOpenAI).\n4. Choose the version you'd like to install (see the note below).\n5. Click `Add Package`.\n\nNote: Xcode has a quirk where it defaults an SPM package's upper limit to 2.0.0. This package is beyond that\nlimit, so you should not accept the defaults that Xcode proposes. 
Instead, enter the lower bound of the\n[release version](https:\u002F\u002Fgithub.com\u002Fjamesrochabrun\u002FSwiftOpenAI\u002Freleases) that you'd like to support, and then\ntab out of the input box for Xcode to adjust the upper bound. Alternatively, you may select `branch` -> `main`\nto stay on the bleeding edge.\n\n## Compatibility\n\n### Platform Support\n\nSwiftOpenAI supports both Apple platforms and Linux.\n- **Apple platforms** include iOS 15+, macOS 13+, and watchOS 9+.\n- **Linux**: SwiftOpenAI on Linux uses AsyncHTTPClient to work around URLSession bugs in Apple's Foundation framework, and can be used with the [Vapor](https:\u002F\u002Fvapor.codes\u002F) server framework.\n\n### OpenAI-Compatible Providers\n\nSwiftOpenAI supports various providers that are OpenAI-compatible, including but not limited to:\n\n- [Azure OpenAI](#azure-openai)\n- [Anthropic](#anthropic)\n- [Gemini](#gemini)\n- [Ollama](#ollama)\n- [Groq](#groq)\n- [xAI](#xai)\n- [OpenRouter](#openRouter)\n- [DeepSeek](#deepseek)\n- [AIProxy](#aiproxy)\n\nCheck OpenAIServiceFactory for convenience initializers that you can use to provide custom URLs.\n\n## Usage\n\nTo use SwiftOpenAI in your project, first import the package:\n\n```swift\nimport SwiftOpenAI\n```\n\nThen, initialize the service using your OpenAI API key:\n\n```swift\nlet apiKey = \"your_openai_api_key_here\"\nlet service = OpenAIServiceFactory.service(apiKey: apiKey)\n```\n\nYou can optionally specify an organization ID if needed.\n\n```swift\nlet apiKey = \"your_openai_api_key_here\"\nlet organizationID = \"your_organization_id\"\nlet service = OpenAIServiceFactory.service(apiKey: apiKey, organizationID: organizationID)\n```\n\nFor reasoning models, ensure that you extend the [timeoutIntervalForRequest](https:\u002F\u002Fdeveloper.apple.com\u002Fdocumentation\u002Ffoundation\u002Fnsurlsessionconfiguration\u002F1408259-timeoutintervalforrequest) in the URL session configuration to a higher value. 
The default is 60 seconds, which may be insufficient, as requests to reasoning models can take longer to process and respond.\n\nTo configure it, create a session with a custom configuration (note that the configuration of `URLSession.shared` is read-only, so the timeout must be set on a new session):\n\n```swift\nlet apiKey = \"your_openai_api_key_here\"\nlet organizationID = \"your_organization_id\"\nlet configuration = URLSessionConfiguration.default\nconfiguration.timeoutIntervalForRequest = 360 \u002F\u002F e.g., 360 seconds or more.\nlet session = URLSession(configuration: configuration)\nlet httpClient = URLSessionHTTPClientAdapter(urlSession: session)\nlet service = OpenAIServiceFactory.service(apiKey: apiKey, organizationID: organizationID, httpClient: httpClient)\n```\n\nThat's all you need to begin accessing the full range of OpenAI endpoints.\n\n### How to get the status code of network errors\n\nYou may want to build UI around the type of error that the API returns.\nFor example, a `429` means that your requests are being rate limited.\nThe `APIError` type has a case `responseUnsuccessful` with two associated values: a `description` and a `statusCode`.\nHere is a usage example using the chat completion API:\n\n```swift\nlet service = OpenAIServiceFactory.service(apiKey: apiKey)\nlet parameters = ChatCompletionParameters(messages: [.init(role: .user, content: .text(\"hello world\"))],\n                                          model: .gpt4o)\ndo {\n   let choices = try await service.startChat(parameters: parameters).choices\n   \u002F\u002F Work with choices\n} catch APIError.responseUnsuccessful(let description, let statusCode) {\n   print(\"Network error with status code: \\(statusCode) and description: \\(description)\")\n} catch {\n   print(error.localizedDescription)\n}\n```\n\n\n### Audio\n\n### Audio Transcriptions\nParameters\n```swift\npublic struct AudioTranscriptionParameters: Encodable {\n   \n   \u002F\u002F\u002F The name of the file asset is not documented in OpenAI's official documentation; however, it is essential for constructing the multipart request.\n   let fileName: String\n   \u002F\u002F\u002F The audio file object (not the file name) to transcribe, in one of 
these formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, or webm.\n   let file: Data\n   \u002F\u002F\u002F ID of the model to use. Only whisper-1 is currently available.\n   let model: String\n   \u002F\u002F\u002F The language of the input audio. Supplying the input language in [ISO-639-1](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FList_of_ISO_639-1_codes) format will improve accuracy and latency.\n   let language: String?\n   \u002F\u002F\u002F An optional text to guide the model's style or continue a previous audio segment. The [prompt](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Fspeech-to-text\u002Fprompting) should match the audio language.\n   let prompt: String?\n   \u002F\u002F\u002F The format of the transcript output, in one of these options: json, text, srt, verbose_json, or vtt. Defaults to json\n   let responseFormat: String?\n   \u002F\u002F\u002F The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. If set to 0, the model will use [log probability](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FLog_probability) to automatically increase the temperature until certain thresholds are hit. Defaults to 0\n   let temperature: Double?\n   \n   public enum Model {\n      case whisperOne \n      case custom(model: String)\n   }\n   \n   public init(\n      fileName: String,\n      file: Data,\n      model: Model = .whisperOne,\n      prompt: String? = nil,\n      responseFormat: String? = nil,\n      temperature: Double? = nil,\n      language: String? 
= nil)\n   {\n      self.fileName = fileName\n      self.file = file\n      self.model = model.rawValue\n      self.prompt = prompt\n      self.responseFormat = responseFormat\n      self.temperature = temperature\n      self.language = language\n   }\n}\n```\n\nResponse\n```swift\npublic struct AudioObject: Decodable {\n   \n   \u002F\u002F\u002F The transcribed text if the request uses the `transcriptions` API, or the translated text if the request uses the `translations` endpoint.\n   public let text: String\n}\n```\n\nUsage\n```swift\nlet fileName = \"narcos.m4a\"\nlet fileURL = URL(fileURLWithPath: fileName)\nlet data = try Data(contentsOf: fileURL) \u002F\u002F Data retrieved from the file named \"narcos.m4a\".\nlet parameters = AudioTranscriptionParameters(fileName: fileName, file: data) \u002F\u002F **Important**: in the file name always provide the file extension.\nlet audioObject = try await service.createTranscription(parameters: parameters)\n```\n### Audio Translations\nParameters\n```swift\npublic struct AudioTranslationParameters: Encodable {\n   \n   \u002F\u002F\u002F The name of the file asset is not documented in OpenAI's official documentation; however, it is essential for constructing the multipart request.\n   let fileName: String\n   \u002F\u002F\u002F The audio file object (not the file name) to translate, in one of these formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, or webm.\n   let file: Data\n   \u002F\u002F\u002F ID of the model to use. Only whisper-1 is currently available.\n   let model: String\n   \u002F\u002F\u002F An optional text to guide the model's style or continue a previous audio segment. The [prompt](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Fspeech-to-text\u002Fprompting) should match the audio language.\n   let prompt: String?\n   \u002F\u002F\u002F The format of the transcript output, in one of these options: json, text, srt, verbose_json, or vtt. 
Defaults to json\n   let responseFormat: String?\n   \u002F\u002F\u002F The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. If set to 0, the model will use [log probability](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FLog_probability) to automatically increase the temperature until certain thresholds are hit. Defaults to 0\n   let temperature: Double?\n   \n   public enum Model {\n      case whisperOne \n      case custom(model: String)\n   }\n   \n   public init(\n      fileName: String,\n      file: Data,\n      model: Model = .whisperOne,\n      prompt: String? = nil,\n      responseFormat: String? = nil,\n      temperature: Double? = nil)\n   {\n      self.fileName = fileName\n      self.file = file\n      self.model = model.rawValue\n      self.prompt = prompt\n      self.responseFormat = responseFormat\n      self.temperature = temperature\n   }\n}\n```\n\nResponse\n```swift\npublic struct AudioObject: Decodable {\n   \n   \u002F\u002F\u002F The transcribed text if the request uses the `transcriptions` API, or the translated text if the request uses the `translations` endpoint.\n   public let text: String\n}\n```\n\nUsage\n```swift\nlet fileName = \"german.m4a\"\nlet fileURL = URL(fileURLWithPath: fileName)\nlet data = try Data(contentsOf: fileURL) \u002F\u002F Data retrieved from the file named \"german.m4a\".\nlet parameters = AudioTranslationParameters(fileName: fileName, file: data) \u002F\u002F **Important**: in the file name always provide the file extension.\nlet audioObject = try await service.createTranslation(parameters: parameters)\n```\n\n### Audio Speech\nParameters\n```swift\n\u002F\u002F\u002F [Generates audio from the input text.](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Faudio\u002FcreateSpeech)\npublic struct AudioSpeechParameters: Encodable {\n\n   \u002F\u002F\u002F One of the available [TTS 
models](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fmodels\u002Ftts): tts-1 or tts-1-hd\n   let model: String\n   \u002F\u002F\u002F The text to generate audio for. The maximum length is 4096 characters.\n   let input: String\n   \u002F\u002F\u002F The voice to use when generating the audio. Supported voices are alloy, echo, fable, onyx, nova, and shimmer. Previews of the voices are available in the [Text to speech guide.](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Ftext-to-speech\u002Fvoice-options)\n   let voice: String\n   \u002F\u002F\u002F The format of the output audio. Supported formats are mp3, opus, aac, and flac. Defaults to mp3.\n   let responseFormat: String?\n   \u002F\u002F\u002F The speed of the generated audio. Select a value from 0.25 to 4.0. Defaults to 1.0.\n   let speed: Double?\n\n   public enum TTSModel: String {\n      case tts1 = \"tts-1\"\n      case tts1HD = \"tts-1-hd\"\n   }\n\n   public enum Voice: String {\n      case alloy\n      case echo\n      case fable\n      case onyx\n      case nova\n      case shimmer\n   }\n\n   public enum ResponseFormat: String {\n      case mp3\n      case opus\n      case aac\n      case flac\n   }\n   \n   public init(\n      model: TTSModel,\n      input: String,\n      voice: Voice,\n      responseFormat: ResponseFormat? = nil,\n      speed: Double? 
= nil)\n   {\n       self.model = model.rawValue\n       self.input = input\n       self.voice = voice.rawValue\n       self.responseFormat = responseFormat?.rawValue\n       self.speed = speed\n   }\n}\n```\n\nResponse\n```swift\n\u002F\u002F\u002F The [audio speech](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Faudio\u002FcreateSpeech) response.\npublic struct AudioSpeechObject: Decodable {\n\n   \u002F\u002F\u002F The audio file content data.\n   public let output: Data\n}\n```\n\nUsage\n```swift\nlet prompt = \"Hello, how are you today?\"\nlet parameters = AudioSpeechParameters(model: .tts1, input: prompt, voice: .shimmer)\nlet audioObjectData = try await service.createSpeech(parameters: parameters).output\nplayAudio(from: audioObjectData)\n\n\u002F\u002F Play data\n private func playAudio(from data: Data) {\n       do {\n           \u002F\u002F Initialize the audio player with the data\n           audioPlayer = try AVAudioPlayer(data: data)\n           audioPlayer?.prepareToPlay()\n           audioPlayer?.play()\n       } catch {\n           \u002F\u002F Handle errors\n           print(\"Error playing audio: \\(error.localizedDescription)\")\n       }\n   }\n```\n\n### Audio Realtime\n\nThe [Realtime API](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Frealtime) enables bidirectional voice conversations with OpenAI's models using WebSockets and low-latency audio streaming. The API supports both audio-to-audio and text-to-audio interactions with built-in voice activity detection, transcription, and function calling.\n\n**Platform Requirements:** iOS 15+, macOS 13+, watchOS 9+. 
Requires AVFoundation (not available on Linux).\n\n**Permissions Required:**\n- Add `NSMicrophoneUsageDescription` to your Info.plist\n- On macOS: Enable sandbox entitlements for microphone access and outgoing network connections\n\nParameters\n```swift\n\u002F\u002F\u002F Configuration for creating a realtime session\npublic struct OpenAIRealtimeSessionConfiguration: Encodable, Sendable {\n\n   \u002F\u002F\u002F The input audio format. Options: .pcm16, .g711_ulaw, .g711_alaw. Default is .pcm16\n   let inputAudioFormat: AudioFormat?\n   \u002F\u002F\u002F Configuration for input audio transcription using Whisper\n   let inputAudioTranscription: InputAudioTranscription?\n   \u002F\u002F\u002F System instructions for the model. Recommended default provided\n   let instructions: String?\n   \u002F\u002F\u002F Maximum tokens for response output. Can be .value(Int) or .infinite\n   let maxResponseOutputTokens: MaxResponseOutputTokens?\n   \u002F\u002F\u002F Output modalities: [.audio, .text] or [.text] only. Default is [.audio, .text]\n   let modalities: [Modality]?\n   \u002F\u002F\u002F The output audio format. Options: .pcm16, .g711_ulaw, .g711_alaw. Default is .pcm16\n   let outputAudioFormat: AudioFormat?\n   \u002F\u002F\u002F Audio playback speed. Range: 0.25 to 4.0. Default is 1.0\n   let speed: Double?\n   \u002F\u002F\u002F Sampling temperature for model responses. Range: 0.6 to 1.2. Default is 0.8\n   let temperature: Double?\n   \u002F\u002F\u002F Array of tools\u002Ffunctions available for the model to call\n   let tools: [Tool]?\n   \u002F\u002F\u002F Tool selection mode: .none, .auto, .required, or .specific(functionName: String)\n   let toolChoice: ToolChoice?\n   \u002F\u002F\u002F Voice activity detection configuration. Options: .serverVAD or .semanticVAD\n   let turnDetection: TurnDetection?\n   \u002F\u002F\u002F The voice to use. 
Options: \"alloy\", \"ash\", \"ballad\", \"coral\", \"echo\", \"sage\", \"shimmer\", \"verse\"\n   let voice: String?\n\n   \u002F\u002F\u002F Available audio formats\n   public enum AudioFormat: String, Encodable, Sendable {\n      case pcm16\n      case g711_ulaw = \"g711-ulaw\"\n      case g711_alaw = \"g711-alaw\"\n   }\n\n   \u002F\u002F\u002F Output modalities\n   public enum Modality: String, Encodable, Sendable {\n      case audio\n      case text\n   }\n\n   \u002F\u002F\u002F Turn detection configuration\n   public struct TurnDetection: Encodable, Sendable {\n      \u002F\u002F\u002F Server-based VAD with customizable timing\n      public static func serverVAD(\n         prefixPaddingMs: Int = 300,\n         silenceDurationMs: Int = 500,\n         threshold: Double = 0.5\n      ) -> TurnDetection\n\n      \u002F\u002F\u002F Semantic VAD with eagerness level\n      public static func semanticVAD(eagerness: Eagerness = .medium) -> TurnDetection\n\n      public enum Eagerness: String, Encodable, Sendable {\n         case low, medium, high\n      }\n   }\n}\n```\n\nResponse\n```swift\n\u002F\u002F\u002F Messages received from the realtime API\npublic enum OpenAIRealtimeMessage: Sendable {\n   case error(String?)                    
\u002F\u002F Error occurred\n   case sessionCreated                    \u002F\u002F Session successfully created\n   case sessionUpdated                    \u002F\u002F Configuration updated\n   case responseCreated                   \u002F\u002F Model started generating response\n   case responseAudioDelta(String)        \u002F\u002F Audio chunk (base64 PCM16)\n   case inputAudioBufferSpeechStarted     \u002F\u002F User started speaking (VAD detected)\n   case responseFunctionCallArgumentsDone(name: String, arguments: String, callId: String)\n   case responseTranscriptDelta(String)   \u002F\u002F Partial AI transcript\n   case responseTranscriptDone(String)    \u002F\u002F Complete AI transcript\n   case inputAudioBufferTranscript(String)           \u002F\u002F User audio transcript\n   case inputAudioTranscriptionDelta(String)         \u002F\u002F Partial user transcription\n   case inputAudioTranscriptionCompleted(String)     \u002F\u002F Complete user transcription\n}\n```\n\nSupporting Types\n```swift\n\u002F\u002F\u002F Manages microphone input and audio playback for realtime conversations.\n\u002F\u002F\u002F Audio played through AudioController does not interfere with mic input (the model won't hear itself).\n@RealtimeActor\npublic final class AudioController {\n\n   \u002F\u002F\u002F Initialize with specified modes\n   \u002F\u002F\u002F - Parameter modes: Array of .record (for microphone) and\u002For .playback (for audio output)\n   public init(modes: [Mode]) async throws\n\n   public enum Mode {\n      case record   \u002F\u002F Enable microphone streaming\n      case playback \u002F\u002F Enable audio playback\n   }\n\n   \u002F\u002F\u002F Returns an AsyncStream of microphone audio buffers\n   \u002F\u002F\u002F - Throws: OpenAIError if .record mode wasn't enabled during initialization\n   public func micStream() throws -> AsyncStream\u003CAVAudioPCMBuffer>\n\n   \u002F\u002F\u002F Plays base64-encoded PCM16 audio from the model\n   \u002F\u002F\u002F 
- Parameter base64String: Base64-encoded PCM16 audio data\n   public func playPCM16Audio(base64String: String)\n\n   \u002F\u002F\u002F Interrupts current audio playback (useful when user starts speaking)\n   public func interruptPlayback()\n\n   \u002F\u002F\u002F Stops all audio operations\n   public func stop()\n}\n\n\u002F\u002F\u002F Utility for encoding audio buffers to base64\npublic enum AudioUtils {\n   \u002F\u002F\u002F Converts AVAudioPCMBuffer to base64 string for transmission\n   public static func base64EncodeAudioPCMBuffer(from buffer: AVAudioPCMBuffer) -> String?\n\n   \u002F\u002F\u002F Checks if headphones are connected\n   public static var headphonesConnected: Bool\n}\n```\n\nUsage\n```swift\n\u002F\u002F 1. Create session configuration\nlet configuration = OpenAIRealtimeSessionConfiguration(\n   voice: \"alloy\",\n   instructions: \"You are a helpful AI assistant. Be concise and friendly.\",\n   turnDetection: .serverVAD(\n      prefixPaddingMs: 300,\n      silenceDurationMs: 500,\n      threshold: 0.5\n   ),\n   inputAudioTranscription: .init(model: \"whisper-1\")\n)\n\n\u002F\u002F 2. Create realtime session\nlet session = try await service.realtimeSession(\n   model: \"gpt-4o-mini-realtime-preview-2024-12-17\",\n   configuration: configuration\n)\n\n\u002F\u002F 3. Initialize audio controller for recording and playback\nlet audioController = try await AudioController(modes: [.record, .playback])\n\n\u002F\u002F 4. 
Handle incoming messages from OpenAI\nTask {\n   for await message in session.receiver {\n      switch message {\n      case .responseAudioDelta(let audio):\n         \u002F\u002F Play audio from the model\n         audioController.playPCM16Audio(base64String: audio)\n\n      case .inputAudioBufferSpeechStarted:\n         \u002F\u002F User started speaking - interrupt model's audio\n         audioController.interruptPlayback()\n\n      case .responseTranscriptDelta(let text):\n         \u002F\u002F Display partial model transcript\n         print(\"Model (partial): \\(text)\")\n\n      case .responseTranscriptDone(let text):\n         \u002F\u002F Display complete model transcript\n         print(\"Model: \\(text)\")\n\n      case .inputAudioTranscriptionCompleted(let text):\n         \u002F\u002F Display user's transcribed speech\n         print(\"User: \\(text)\")\n\n      case .responseFunctionCallArgumentsDone(let name, let args, let callId):\n         \u002F\u002F Handle function call from model\n         print(\"Function call: \\(name) with args: \\(args)\")\n         \u002F\u002F Execute function and send result back\n\n      case .error(let error):\n         print(\"Error: \\(error ?? \"Unknown error\")\")\n\n      default:\n         break\n      }\n   }\n}\n\n\u002F\u002F 5. Stream microphone audio to OpenAI\nTask {\n   do {\n      for try await buffer in audioController.micStream() {\n         \u002F\u002F Encode audio buffer to base64\n         guard let base64Audio = AudioUtils.base64EncodeAudioPCMBuffer(from: buffer) else {\n            continue\n         }\n\n         \u002F\u002F Send audio to OpenAI\n         try await session.sendMessage(\n            OpenAIRealtimeInputAudioBufferAppend(audio: base64Audio)\n         )\n      }\n   } catch {\n      print(\"Microphone error: \\(error)\")\n   }\n}\n\n\u002F\u002F 6. 
Manually trigger a response (optional - usually VAD handles this)\ntry await session.sendMessage(\n   OpenAIRealtimeResponseCreate()\n)\n\n\u002F\u002F 7. Update session configuration mid-conversation (optional)\nlet newConfig = OpenAIRealtimeSessionConfiguration(\n   voice: \"shimmer\",\n   temperature: 0.9\n)\ntry await session.sendMessage(\n   OpenAIRealtimeSessionUpdate(sessionConfig: newConfig)\n)\n\n\u002F\u002F 8. Cleanup when done\naudioController.stop()\nsession.disconnect()\n```\n\nFunction Calling\n```swift\n\u002F\u002F Define tools in configuration\nlet tools: [OpenAIRealtimeSessionConfiguration.Tool] = [\n   .init(\n      name: \"get_weather\",\n      description: \"Get the current weather in a location\",\n      parameters: [\n         \"type\": \"object\",\n         \"properties\": [\n            \"location\": [\n               \"type\": \"string\",\n               \"description\": \"City name, e.g. San Francisco\"\n            ]\n         ],\n         \"required\": [\"location\"]\n      ]\n   )\n]\n\nlet config = OpenAIRealtimeSessionConfiguration(\n   voice: \"alloy\",\n   tools: tools,\n   toolChoice: .auto\n)\n\n\u002F\u002F Handle function calls in message receiver\ncase .responseFunctionCallArgumentsDone(let name, let args, let callId):\n   if name == \"get_weather\" {\n      \u002F\u002F Parse arguments and execute function\n      let result = getWeather(arguments: args)\n\n      \u002F\u002F Send result back to model\n      try await session.sendMessage(\n         OpenAIRealtimeConversationItemCreate(\n            item: .functionCallOutput(\n               callId: callId,\n               output: result\n            )\n         )\n      )\n   }\n```\n\nAdvanced Features\n- **Voice Activity Detection (VAD):** Choose between server-based VAD (with configurable timing) or semantic VAD (with eagerness levels)\n- **Transcription:** Enable Whisper transcription for both user input and model output\n- **Session Updates:** Change voice, instructions, 
or tools mid-conversation without reconnecting\n- **Response Triggers:** Manually trigger model responses or rely on automatic VAD\n- **Platform-Specific Behavior:** Automatically selects optimal audio API based on platform and headphone connection\n\nFor a complete implementation example, see `Examples\u002FRealtimeExample\u002FRealtimeExample.swift` in the repository.\n\n### Chat\nParameters\n```swift\npublic struct ChatCompletionParameters: Encodable {\n   \n   \u002F\u002F\u002F A list of messages comprising the conversation so far. [Example Python code](https:\u002F\u002Fcookbook.openai.com\u002Fexamples\u002Fhow_to_format_inputs_to_chatgpt_models)\n   public var messages: [Message]\n   \u002F\u002F\u002F ID of the model to use. See the [model endpoint compatibility](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fmodels\u002Fhow-we-use-your-data) table for details on which models work with the Chat API.\n   \u002F\u002F\u002F Supports GPT-4, GPT-4o, GPT-5, and other models. For GPT-5 family: .gpt5, .gpt5Mini, .gpt5Nano\n   public var model: String\n   \u002F\u002F\u002F Whether or not to store the output of this chat completion request for use in our [model distillation](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Fdistillation) or [evals](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Fevals) products.\n   \u002F\u002F\u002F Defaults to false\n   public var store: Bool?\n   \u002F\u002F\u002F Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. Defaults to 0\n   \u002F\u002F\u002F [See more information about frequency and presence penalties.](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Fgpt\u002Fparameter-details)\n   public var frequencyPenalty: Double?\n   \u002F\u002F\u002F Controls how the model responds to function calls. 
none means the model does not call a function, and responds to the end-user. auto means the model can pick between an end-user or calling a function. Specifying a particular function via {\"name\": \"my_function\"} forces the model to call that function. none is the default when no functions are present. auto is the default if functions are present.\n   @available(*, deprecated, message: \"Deprecated in favor of tool_choice.\")\n   public var functionCall: FunctionCall?\n   \u002F\u002F\u002F Controls which (if any) function is called by the model. none means the model will not call a function and instead generates a message. \n   \u002F\u002F\u002F auto means the model can pick between generating a message or calling a function. Specifying a particular function via `{\"type\": \"function\", \"function\": {\"name\": \"my_function\"}}` forces the model to call that function.\n   \u002F\u002F\u002F `none` is the default when no functions are present. auto is the default if functions are present.\n   public var toolChoice: ToolChoice?\n   \u002F\u002F\u002F A list of functions the model may generate JSON inputs for.\n   @available(*, deprecated, message: \"Deprecated in favor of tools.\")\n   public var functions: [ChatFunction]?\n   \u002F\u002F\u002F A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for.\n   public var tools: [Tool]?\n   \u002F\u002F\u002F Whether to enable parallel function calling during tool use. Defaults to true.\n   public var parallelToolCalls: Bool?\n   \u002F\u002F\u002F Modify the likelihood of specified tokens appearing in the completion.\n   \u002F\u002F\u002F Accepts a json object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. 
The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token. Defaults to null.\n   public var logitBias: [Int: Double]?\n   \u002F\u002F\u002F Whether to return log probabilities of the output tokens or not. If true, returns the log probabilities of each output token returned in the content of message. This option is currently not available on the gpt-4-vision-preview model. Defaults to false.\n   public var logprobs: Bool?\n   \u002F\u002F\u002F An integer between 0 and 5 specifying the number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to true if this parameter is used.\n   public var topLogprobs: Int?\n   \u002F\u002F\u002F The maximum number of [tokens](https:\u002F\u002Fplatform.openai.com\u002Ftokenizer) that can be generated in the chat completion. This value can be used to control [costs](https:\u002F\u002Fopenai.com\u002Fapi\u002Fpricing\u002F) for text generated via API.\n   \u002F\u002F\u002F This value is now deprecated in favor of max_completion_tokens, and is not compatible with [o1 series models](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Freasoning)\n   public var maxTokens: Int?\n   \u002F\u002F\u002F An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and [reasoning tokens](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Freasoning)\n   public var maCompletionTokens: Int?\n   \u002F\u002F\u002F How many chat completion choices to generate for each input message. Defaults to 1.\n   public var n: Int?\n   \u002F\u002F\u002F Output types that you would like the model to generate for this request. 
Most models are capable of generating text, which is the default:\n   \u002F\u002F\u002F [\"text\"]\n   \u002F\u002F\u002F The gpt-4o-audio-preview model can also be used to [generate audio](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Faudio). To request that this model generate both text and audio responses, you can use:\n   \u002F\u002F\u002F [\"text\", \"audio\"]\n   public var modalities: [String]?\n   \u002F\u002F\u002F Parameters for audio output. Required when audio output is requested with modalities: [\"audio\"]. [Learn more.](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Faudio)\n   public var audio: Audio?\n   \u002F\u002F\u002F Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. Defaults to 0\n   \u002F\u002F\u002F [See more information about frequency and presence penalties.](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Fgpt\u002Fparameter-details)\n   public var presencePenalty: Double?\n   \u002F\u002F\u002F An object specifying the format that the model must output. Used to enable JSON mode.\n   \u002F\u002F\u002F Setting to `{ type: \"json_object\" }` enables `JSON` mode, which guarantees the message the model generates is valid JSON.\n   \u002F\u002F\u002F Important: when using `JSON` mode you must still instruct the model to produce `JSON` yourself via some conversation message, for example via your system message. If you don't do this, the model may generate an unending stream of whitespace until the generation reaches the token limit, which may take a lot of time and give the appearance of a \"stuck\" request. Also note that the message content may be partial (i.e. 
cut off) if `finish_reason=\"length\"`, which indicates the generation exceeded `max_tokens` or the conversation exceeded the max context length.\n   public var responseFormat: ResponseFormat?\n   \u002F\u002F\u002F Specifies the latency tier to use for processing the request. This parameter is relevant for customers subscribed to the scale tier service:\n   \u002F\u002F\u002F If set to 'auto', the system will utilize scale tier credits until they are exhausted.\n   \u002F\u002F\u002F If set to 'default', the request will be processed in the shared cluster.\n   \u002F\u002F\u002F When this parameter is set, the response body will include the service_tier utilized.\n   public var serviceTier: String?\n   \u002F\u002F\u002F This feature is in `Beta`. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same `seed` and parameters should return the same result.\n   \u002F\u002F\u002F Determinism is not guaranteed, and you should refer to the `system_fingerprint` response parameter to monitor changes in the backend.\n   public var seed: Int?\n   \u002F\u002F\u002F Up to 4 sequences where the API will stop generating further tokens. Defaults to null.\n   public var stop: [String]?\n   \u002F\u002F\u002F If set, partial message deltas will be sent, like in ChatGPT. Tokens will be sent as data-only [server-sent events](https:\u002F\u002Fdeveloper.mozilla.org\u002Fen-US\u002Fdocs\u002FWeb\u002FAPI\u002FServer-sent_events\u002FUsing_server-sent_events#event_stream_format) as they become available, with the stream terminated by a data: [DONE] message. [Example Python code](https:\u002F\u002Fcookbook.openai.com\u002Fexamples\u002Fhow_to_stream_completions ).\n   \u002F\u002F\u002F Defaults to false.\n   var stream: Bool? = nil\n   \u002F\u002F\u002F Options for streaming response. 
Only set this when you set stream: true\n   var streamOptions: StreamOptions?\n   \u002F\u002F\u002F What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.\n   \u002F\u002F\u002F We generally recommend altering this or `top_p` but not both. Defaults to 1.\n   public var temperature: Double?\n   \u002F\u002F\u002F An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n   \u002F\u002F\u002F We generally recommend altering this or `temperature` but not both. Defaults to 1\n   public var topP: Double?\n   \u002F\u002F\u002F A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse.\n   \u002F\u002F\u002F [Learn more](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Fsafety-best-practices\u002Fend-user-ids).\n   public var user: String?\n   \n   public struct Message: Encodable {\n      \n      \u002F\u002F\u002F The role of the messages author. One of system, user, assistant, or tool message.\n      let role: String\n      \u002F\u002F\u002F The contents of the message. content is required for all messages, and may be null for assistant messages with function calls.\n      let content: ContentType\n      \u002F\u002F\u002F The name of the author of this message. name is required if role is function, and it should be the name of the function whose response is in the content. 
May contain a-z, A-Z, 0-9, and underscores, with a maximum length of 64 characters.\n      let name: String?\n      \u002F\u002F\u002F The name and arguments of a function that should be called, as generated by the model.\n      @available(*, deprecated, message: \"Deprecated and replaced by `tool_calls`\")\n      let functionCall: FunctionCall?\n      \u002F\u002F\u002F The tool calls generated by the model, such as function calls.\n      let toolCalls: [ToolCall]?\n      \u002F\u002F\u002F Tool call that this message is responding to.\n      let toolCallID: String?\n      \n      public enum ContentType: Encodable {\n         \n         case text(String)\n         case contentArray([MessageContent])\n         \n         public func encode(to encoder: Encoder) throws {\n            var container = encoder.singleValueContainer()\n            switch self {\n            case .text(let text):\n               try container.encode(text)\n            case .contentArray(let contentArray):\n               try container.encode(contentArray)\n            }\n         }\n         \n         public enum MessageContent: Encodable, Equatable, Hashable {\n            \n            case text(String)\n            case imageUrl(ImageDetail)\n            \n            public struct ImageDetail: Encodable, Equatable, Hashable {\n               \n               public let url: URL\n               public let detail: String?\n               \n               enum CodingKeys: String, CodingKey {\n                  case url\n                  case detail\n               }\n               \n               public func encode(to encoder: Encoder) throws {\n                  var container = encoder.container(keyedBy: CodingKeys.self)\n                  try container.encode(url, forKey: .url)\n                  try container.encode(detail, forKey: .detail)\n               }\n               \n               public init(url: URL, detail: String? 
= nil) {\n                  self.url = url\n                  self.detail = detail\n               }\n            }\n            \n            enum CodingKeys: String, CodingKey {\n               case type\n               case text\n               case imageUrl = \"image_url\"\n            }\n            \n            public func encode(to encoder: Encoder) throws {\n               var container = encoder.container(keyedBy: CodingKeys.self)\n               switch self {\n               case .text(let text):\n                  try container.encode(\"text\", forKey: .type)\n                  try container.encode(text, forKey: .text)\n               case .imageUrl(let imageDetail):\n                  try container.encode(\"image_url\", forKey: .type)\n                  try container.encode(imageDetail, forKey: .imageUrl)\n               }\n            }\n            \n            public func hash(into hasher: inout Hasher) {\n               switch self {\n               case .text(let string):\n                  hasher.combine(string)\n               case .imageUrl(let imageDetail):\n                  hasher.combine(imageDetail)\n               }\n            }\n            \n            public static func ==(lhs: MessageContent, rhs: MessageContent) -> Bool {\n               switch (lhs, rhs) {\n               case let (.text(a), .text(b)):\n                  return a == b\n               case let (.imageUrl(a), .imageUrl(b)):\n                  return a == b\n               default:\n                  return false\n               }\n            }\n         }\n      }\n      \n      public enum Role: String {\n         case system \u002F\u002F content, role\n         case user \u002F\u002F content, role\n         case assistant \u002F\u002F content, role, tool_calls\n         case tool \u002F\u002F content, role, tool_call_id\n      }\n      \n      enum CodingKeys: String, CodingKey {\n         case role\n         case content\n         case name\n         case 
functionCall = \"function_call\"\n         case toolCalls = \"tool_calls\"\n         case toolCallID = \"tool_call_id\"\n      }\n      \n      public init(\n         role: Role,\n         content: ContentType,\n         name: String? = nil,\n         functionCall: FunctionCall? = nil,\n         toolCalls: [ToolCall]? = nil,\n         toolCallID: String? = nil)\n      {\n         self.role = role.rawValue\n         self.content = content\n         self.name = name\n         self.functionCall = functionCall\n         self.toolCalls = toolCalls\n         self.toolCallID = toolCallID\n      }\n   }\n   \n   @available(*, deprecated, message: \"Deprecated in favor of ToolChoice.\")\n   public enum FunctionCall: Encodable, Equatable {\n      case none\n      case auto\n      case function(String)\n      \n      enum CodingKeys: String, CodingKey {\n         case none = \"none\"\n         case auto = \"auto\"\n         case function = \"name\"\n      }\n      \n      public func encode(to encoder: Encoder) throws {\n         switch self {\n         case .none:\n            var container = encoder.singleValueContainer()\n            try container.encode(CodingKeys.none.rawValue)\n         case .auto:\n            var container = encoder.singleValueContainer()\n            try container.encode(CodingKeys.auto.rawValue)\n         case .function(let name):\n            var container = encoder.container(keyedBy: CodingKeys.self)\n            try container.encode(name, forKey: .function)\n         }\n      }\n   }\n   \n   \u002F\u002F\u002F [Documentation](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fchat\u002Fcreate#chat-create-tools)\n   public struct Tool: Encodable {\n      \n      \u002F\u002F\u002F The type of the tool. 
Currently, only `function` is supported.\n      let type: String\n      \u002F\u002F\u002F object\n      let function: ChatFunction\n      \n      public init(\n         type: String = \"function\",\n         function: ChatFunction)\n      {\n         self.type = type\n         self.function = function\n      }\n   }\n   \n   public struct ChatFunction: Codable, Equatable {\n      \n      \u002F\u002F\u002F The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.\n      let name: String\n      \u002F\u002F\u002F A description of what the function does, used by the model to choose when and how to call the function.\n      let description: String?\n      \u002F\u002F\u002F The parameters the function accepts, described as a JSON Schema object. See the [guide](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Fgpt\u002Ffunction-calling) for examples, and the [JSON Schema reference](https:\u002F\u002Fjson-schema.org\u002Funderstanding-json-schema) for documentation about the format.\n      \u002F\u002F\u002F Omitting parameters defines a function with an empty parameter list.\n      let parameters: JSONSchema?\n      \u002F\u002F\u002F Defaults to false. Whether to enable strict schema adherence when generating the function call. If set to true, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is true. 
Learn more about Structured Outputs in the [function calling guide](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Ffunction-calling).\n      let strict: Bool?\n      \n      public init(\n         name: String,\n         strict: Bool?,\n         description: String?,\n         parameters: JSONSchema?)\n      {\n         self.name = name\n         self.strict = strict\n         self.description = description\n         self.parameters = parameters\n      }\n   }\n   \n   public enum ServiceTier: String, Encodable {\n      \u002F\u002F\u002F Specifies the latency tier to use for processing the request. This parameter is relevant for customers subscribed to the scale tier service:\n      \u002F\u002F\u002F If set to 'auto', the system will utilize scale tier credits until they are exhausted.\n      \u002F\u002F\u002F If set to 'default', the request will be processed in the shared cluster.\n      \u002F\u002F\u002F When this parameter is set, the response body will include the service_tier utilized.\n      case auto\n      case `default`\n   }\n   \n   public struct StreamOptions: Encodable {\n      \u002F\u002F\u002F If set, an additional chunk will be streamed before the data: [DONE] message.\n      \u002F\u002F\u002F The usage field on this chunk shows the token usage statistics for the entire request,\n      \u002F\u002F\u002F and the choices field will always be an empty array. All other chunks will also include\n      \u002F\u002F\u002F a usage field, but with a null value.\n      let includeUsage: Bool\n\n      enum CodingKeys: String, CodingKey {\n          case includeUsage = \"include_usage\"\n      }\n   }\n   \n   \u002F\u002F\u002F Parameters for audio output. 
Required when audio output is requested with modalities: [\"audio\"]\n   \u002F\u002F\u002F [Learn more.](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Faudio)\n   public struct Audio: Encodable {\n      \u002F\u002F\u002F Specifies the voice type. Supported voices are alloy, echo, fable, onyx, nova, and shimmer.\n      public let voice: String\n      \u002F\u002F\u002F Specifies the output audio format. Must be one of wav, mp3, flac, opus, or pcm16.\n      public let format: String\n      \n      public init(\n         voice: String,\n         format: String)\n      {\n         self.voice = voice\n         self.format = format\n      }\n   }\n\n   enum CodingKeys: String, CodingKey {\n      case messages\n      case model\n      case store\n      case frequencyPenalty = \"frequency_penalty\"\n      case toolChoice = \"tool_choice\"\n      case functionCall = \"function_call\"\n      case tools\n      case parallelToolCalls = \"parallel_tool_calls\"\n      case functions\n      case logitBias = \"logit_bias\"\n      case logprobs\n      case topLogprobs = \"top_logprobs\"\n      case maxTokens = \"max_tokens\"\n      case maCompletionTokens = \"max_completion_tokens\"\n      case n\n      case modalities\n      case audio\n      case responseFormat = \"response_format\"\n      case presencePenalty = \"presence_penalty\"\n      case seed\n      case serviceTier = \"service_tier\"\n      case stop\n      case stream\n      case streamOptions = \"stream_options\"\n      case temperature\n      case topP = \"top_p\"\n      case user\n   }\n   \n   public init(\n      messages: [Message],\n      model: Model,\n      store: Bool? = nil,\n      frequencyPenalty: Double? = nil,\n      functionCall: FunctionCall? = nil,\n      toolChoice: ToolChoice? = nil,\n      functions: [ChatFunction]? = nil,\n      tools: [Tool]? = nil,\n      parallelToolCalls: Bool? = nil,\n      logitBias: [Int: Double]? = nil,\n      logProbs: Bool? 
= nil,\n      topLogprobs: Int? = nil,\n      maxTokens: Int? = nil,\n      n: Int? = nil,\n      modalities: [String]? = nil,\n      audio: Audio? = nil,\n      responseFormat: ResponseFormat? = nil,\n      presencePenalty: Double? = nil,\n      serviceTier: ServiceTier? = nil,\n      seed: Int? = nil,\n      stop: [String]? = nil,\n      temperature: Double? = nil,\n      topProbability: Double? = nil,\n      user: String? = nil)\n   {\n      self.messages = messages\n      self.model = model.value\n      self.store = store\n      self.frequencyPenalty = frequencyPenalty\n      self.functionCall = functionCall\n      self.toolChoice = toolChoice\n      self.functions = functions\n      self.tools = tools\n      self.parallelToolCalls = parallelToolCalls\n      self.logitBias = logitBias\n      self.logprobs = logProbs\n      self.topLogprobs = topLogprobs\n      self.maxTokens = maxTokens\n      self.n = n\n      self.modalities = modalities\n      self.audio = audio\n      self.responseFormat = responseFormat\n      self.presencePenalty = presencePenalty\n      self.serviceTier = serviceTier?.rawValue\n      self.seed = seed\n      self.stop = stop\n      self.temperature = temperature\n      self.topP = topProbability\n      self.user = user\n   }\n}\n```\n\nResponse\n### Chat completion object\n```swift\n\u002F\u002F\u002F Represents a chat [completion](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fchat\u002Fobject) response returned by model, based on the provided input.\npublic struct ChatCompletionObject: Decodable {\n   \n   \u002F\u002F\u002F A unique identifier for the chat completion.\n   public let id: String\n   \u002F\u002F\u002F A list of chat completion choices. 
Can be more than one if n is greater than 1.\n   public let choices: [ChatChoice]\n   \u002F\u002F\u002F The Unix timestamp (in seconds) of when the chat completion was created.\n   public let created: Int\n   \u002F\u002F\u002F The model used for the chat completion.\n   public let model: String\n   \u002F\u002F\u002F The service tier used for processing the request. This field is only included if the service_tier parameter is specified in the request.\n   public let serviceTier: String?\n   \u002F\u002F\u002F This fingerprint represents the backend configuration that the model runs with.\n   \u002F\u002F\u002F Can be used in conjunction with the seed request parameter to understand when backend changes have been made that might impact determinism.\n   public let systemFingerprint: String?\n   \u002F\u002F\u002F The object type, which is always chat.completion.\n   public let object: String\n   \u002F\u002F\u002F Usage statistics for the completion request.\n   public let usage: ChatUsage\n   \n   public struct ChatChoice: Decodable {\n      \n      \u002F\u002F\u002F The reason the model stopped generating tokens. 
This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool, or function_call (deprecated) if the model called a function.\n      public let finishReason: IntOrStringValue?\n      \u002F\u002F\u002F The index of the choice in the list of choices.\n      public let index: Int\n      \u002F\u002F\u002F A chat completion message generated by the model.\n      public let message: ChatMessage   \n      \u002F\u002F\u002F Log probability information for the choice.\n      public let logprobs: LogProb?\n      \n      public struct ChatMessage: Decodable {\n         \n         \u002F\u002F\u002F The contents of the message.\n         public let content: String?\n         \u002F\u002F\u002F The tool calls generated by the model, such as function calls.\n         public let toolCalls: [ToolCall]?\n         \u002F\u002F\u002F The name and arguments of a function that should be called, as generated by the model.\n         @available(*, deprecated, message: \"Deprecated and replaced by `tool_calls`\")\n         public let functionCall: FunctionCall?\n         \u002F\u002F\u002F The role of the author of this message.\n         public let role: String\n         \u002F\u002F\u002F Provided by the Vision API.\n         public let finishDetails: FinishDetails?\n         \u002F\u002F\u002F The refusal message generated by the model.\n         public let refusal: String?\n         \u002F\u002F\u002F If the audio output modality is requested, this object contains data about the audio response from the model. 
[Learn more](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Faudio).\n         public let audio: Audio?\n         \n         \u002F\u002F\u002F Provided by the Vision API.\n         public struct FinishDetails: Decodable {\n            let type: String\n         }\n         \n         public struct Audio: Decodable {\n            \u002F\u002F\u002F Unique identifier for this audio response.\n            public let id: String\n            \u002F\u002F\u002F The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations.\n            public let expiresAt: Int\n            \u002F\u002F\u002F Base64 encoded audio bytes generated by the model, in the format specified in the request.\n            public let data: String\n            \u002F\u002F\u002F Transcript of the audio generated by the model.\n            public let transcript: String\n            \n            enum CodingKeys: String, CodingKey {\n               case id\n               case expiresAt = \"expires_at\"\n               case data\n               case transcript\n            }\n         }\n      }\n      \n      public struct LogProb: Decodable {\n         \u002F\u002F\u002F A list of message content tokens with log probability information.\n         let content: [TokenDetail]\n      }\n      \n      public struct TokenDetail: Decodable {\n         \u002F\u002F\u002F The token.\n         let token: String\n         \u002F\u002F\u002F The log probability of this token.\n         let logprob: Double\n         \u002F\u002F\u002F A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. 
Can be null if there is no bytes representation for the token.\n         let bytes: [Int]?\n         \u002F\u002F\u002F List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned.\n         let topLogprobs: [TopLogProb]\n         \n         enum CodingKeys: String, CodingKey {\n            case token, logprob, bytes\n            case topLogprobs = \"top_logprobs\"\n         }\n         \n         struct TopLogProb: Decodable {\n            \u002F\u002F\u002F The token.\n            let token: String\n            \u002F\u002F\u002F The log probability of this token.\n            let logprob: Double\n            \u002F\u002F\u002F A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token.\n            let bytes: [Int]?\n         }\n      }\n   }\n   \n   public struct ChatUsage: Decodable {\n      \n      \u002F\u002F\u002F Number of tokens in the generated completion.\n      public let completionTokens: Int\n      \u002F\u002F\u002F Number of tokens in the prompt.\n      public let promptTokens: Int\n      \u002F\u002F\u002F Total number of tokens used in the request (prompt + completion).\n      public let totalTokens: Int\n   }\n}\n```\n\nUsage\n```swift\nlet prompt = \"Tell me a joke\"\nlet parameters = ChatCompletionParameters(messages: [.init(role: .user, content: .text(prompt))], model: .gpt4o)\nlet chatCompletionObject = try await service.startChat(parameters: parameters)\n```\n\nResponse\n### Chat completion chunk object\n```swift\n\u002F\u002F\u002F Represents a [streamed](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fchat\u002Fstreaming) chunk of a chat completion response returned by model, 
based on the provided input.\npublic struct ChatCompletionChunkObject: Decodable {\n   \n   \u002F\u002F\u002F A unique identifier for the chat completion chunk.\n   public let id: String\n   \u002F\u002F\u002F A list of chat completion choices. Can be more than one if n is greater than 1.\n   public let choices: [ChatChoice]\n   \u002F\u002F\u002F The Unix timestamp (in seconds) of when the chat completion chunk was created.\n   public let created: Int\n   \u002F\u002F\u002F The model to generate the completion.\n   public let model: String\n   \u002F\u002F\u002F The service tier used for processing the request. This field is only included if the service_tier parameter is specified in the request.\n   public let serviceTier: String?\n   \u002F\u002F\u002F This fingerprint represents the backend configuration that the model runs with.\n   \u002F\u002F\u002F Can be used in conjunction with the seed request parameter to understand when backend changes have been made that might impact determinism.\n   public let systemFingerprint: String?\n   \u002F\u002F\u002F The object type, which is always chat.completion.chunk.\n   public let object: String\n   \n   public struct ChatChoice: Decodable {\n      \n      \u002F\u002F\u002F A chat completion delta generated by streamed model responses.\n      public let delta: Delta\n      \u002F\u002F\u002F The reason the model stopped generating tokens. 
This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool, or function_call (deprecated) if the model called a function.\n      public let finishReason: IntOrStringValue?\n      \u002F\u002F\u002F The index of the choice in the list of choices.\n      public let index: Int\n      \u002F\u002F\u002F Provided by the Vision API.\n      public let finishDetails: FinishDetails?\n      \n      public struct Delta: Decodable {\n         \n         \u002F\u002F\u002F The contents of the chunk message.\n         public let content: String?\n         \u002F\u002F\u002F The tool calls generated by the model, such as function calls.\n         public let toolCalls: [ToolCall]?\n         \u002F\u002F\u002F The name and arguments of a function that should be called, as generated by the model.\n         @available(*, deprecated, message: \"Deprecated and replaced by `tool_calls`\")\n         public let functionCall: FunctionCall?\n         \u002F\u002F\u002F The role of the author of this message.\n         public let role: String?\n      }\n      \n      public struct LogProb: Decodable {\n         \u002F\u002F\u002F A list of message content tokens with log probability information.\n         let content: [TokenDetail]\n      }\n      \n      public struct TokenDetail: Decodable {\n         \u002F\u002F\u002F The token.\n         let token: String\n         \u002F\u002F\u002F The log probability of this token.\n         let logprob: Double\n         \u002F\u002F\u002F A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. 
Can be null if there is no bytes representation for the token.\n         let bytes: [Int]?\n         \u002F\u002F\u002F List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned.\n         let topLogprobs: [TopLogProb]\n         \n         enum CodingKeys: String, CodingKey {\n            case token, logprob, bytes\n            case topLogprobs = \"top_logprobs\"\n         }\n         \n         struct TopLogProb: Decodable {\n            \u002F\u002F\u002F The token.\n            let token: String\n            \u002F\u002F\u002F The log probability of this token.\n            let logprob: Double\n            \u002F\u002F\u002F A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token.\n            let bytes: [Int]?\n         }\n      }\n      \n      \u002F\u002F\u002F Provided by the Vision API.\n      public struct FinishDetails: Decodable {\n         let type: String\n      }\n   }\n}\n```\nUsage\n```swift\nlet prompt = \"Tell me a joke\"\nlet parameters = ChatCompletionParameters(messages: [.init(role: .user, content: .text(prompt))], model: .gpt4o)\nlet chatCompletionObject = try await service.startStreamedChat(parameters: parameters)\n```\n\n### Function Calling\n\nChat Completion also supports [Function Calling](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Ffunction-calling) and [Parallel Function Calling](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Ffunction-calling\u002Fparallel-function-calling). 
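When the model responds with a tool call, the `arguments` field of the returned `FunctionCall` is a JSON-encoded string that may contain invalid JSON or hallucinated parameters, so it is worth decoding it into a typed value before invoking your function. A minimal sketch (the payload and argument names are hypothetical, standing in for whatever your function schema defines):

```swift
import Foundation

// Hypothetical raw `arguments` string, as a model might produce for an
// image-generation function. The model returns arguments as a JSON-encoded
// string, not a structured object, so it must be decoded before use.
let rawArguments = #"{"prompt": "a unicorn eating ice cream", "count": 1}"#

// Mirror the function's parameter schema with a Decodable type so malformed
// or hallucinated arguments fail at decode time rather than deep in your code.
struct CreateImageArgs: Decodable {
   let prompt: String
   let count: Int
}

// Force-try for brevity; in production, catch the decoding error and return
// a failure message to the model so it can retry with corrected arguments.
let args = try! JSONDecoder().decode(CreateImageArgs.self, from: Data(rawArguments.utf8))
print("Generate \(args.count) image(s) for: \(args.prompt)")
```

A failed decode is a good signal to send an error back as the tool result rather than crashing.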
`functions` has been deprecated in favor of `tools`; check the [OpenAI Documentation](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fchat\u002Fcreate) for more details.\n\n```swift\npublic struct ToolCall: Codable {\n\n   public let index: Int\n   \u002F\u002F\u002F The ID of the tool call.\n   public let id: String?\n   \u002F\u002F\u002F The type of the tool. Currently, only `function` is supported.\n   public let type: String?\n   \u002F\u002F\u002F The function that the model called.\n   public let function: FunctionCall\n\n   public init(\n      index: Int,\n      id: String,\n      type: String = \"function\",\n      function: FunctionCall)\n   {\n      self.index = index\n      self.id = id\n      self.type = type\n      self.function = function\n   }\n}\n\npublic struct FunctionCall: Codable {\n\n   \u002F\u002F\u002F The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function.\n   let arguments: String\n   \u002F\u002F\u002F The name of the function to call.\n   let name: String\n\n   public init(\n      arguments: String,\n      name: String)\n   {\n      self.arguments = arguments\n      self.name = name\n   }\n}\n```\n\nUsage\n```swift\n\u002F\u002F\u002F Define a `ChatCompletionParameters.Tool`\nvar tool: ChatCompletionParameters.Tool {\n   .init(\n      type: \"function\", \u002F\u002F The type of the tool. 
Currently, only \"function\" is supported.\n      function: .init(\n         name: \"create_image\",\n         description: \"Call this function if the request asks to generate an image\",\n         parameters: .init(\n            type: .object,\n            properties: [\n               \"prompt\": .init(type: .string, description: \"The exact prompt passed in.\"),\n               \"count\": .init(type: .integer, description: \"The number of images requested\")\n            ],\n            required: [\"prompt\", \"count\"])))\n}\n\nlet prompt = \"Show me an image of a unicorn eating ice cream\"\nlet content: ChatCompletionParameters.Message.ContentType = .text(prompt)\nlet parameters = ChatCompletionParameters(messages: [.init(role: .user, content: content)], model: .gpt41106Preview, tools: [tool])\nlet chatCompletionObject = try await service.startStreamedChat(parameters: parameters)\n```\nFor more details on uploading base64-encoded images on iOS, check the [ChatFunctionsCall](https:\u002F\u002Fgithub.com\u002Fjamesrochabrun\u002FSwiftOpenAI\u002Ftree\u002Fmain\u002FExamples\u002FSwiftOpenAIExample\u002FSwiftOpenAIExample\u002FChatFunctionsCall) demo in the Examples section of this package.\n\n### Structured Outputs\n\n#### Documentation:\n\n- [Structured Outputs Guides](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Fstructured-outputs\u002Fstructured-outputs)\n- [Examples](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Fstructured-outputs\u002Fexamples)\n- [How to use](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Fstructured-outputs\u002Fhow-to-use)\n- [Supported schemas](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Fstructured-outputs\u002Fsupported-schemas)\n\nMust-knows:\n\n- [All fields must be required](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Fstructured-outputs\u002Fall-fields-must-be-required): to use Structured Outputs, all fields or 
function parameters must be specified as required.\n- Although all fields must be required (and the model will return a value for each parameter), it is possible to emulate an optional parameter by using a union type with null.\n- [Objects have limitations on nesting depth and size](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Fstructured-outputs\u002Fobjects-have-limitations-on-nesting-depth-and-size): a schema may have up to 100 object properties total, with up to 5 levels of nesting.\n- [additionalProperties: false must always be set in objects](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Fstructured-outputs\u002Fadditionalproperties-false-must-always-be-set-in-objects): additionalProperties controls whether an object may contain additional keys \u002F values that were not defined in the JSON Schema. Structured Outputs only supports generating specified keys \u002F values, so developers must set additionalProperties: false to opt into Structured Outputs.\n- [Key ordering](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Fstructured-outputs\u002Fkey-ordering): when using Structured Outputs, outputs will be produced in the same order as the ordering of keys in the schema.\n- [Recursive schemas are supported](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Fstructured-outputs\u002Frecursive-schemas-are-supported)\n\n#### How to use Structured Outputs in SwiftOpenAI\n\n1. Function calling: Structured Outputs via tools is available by setting strict: true within your function definition. This feature works with all models that support tools, including gpt-4-0613 and gpt-3.5-turbo-0613 and later. 
When Structured Outputs are enabled, model outputs will match the supplied tool definition.\n\nUsing this schema:\n\n```json\n{\n  \"schema\": {\n    \"type\": \"object\",\n    \"properties\": {\n      \"steps\": {\n        \"type\": \"array\",\n        \"items\": {\n          \"type\": \"object\",\n          \"properties\": {\n            \"explanation\": {\n              \"type\": \"string\"\n            },\n            \"output\": {\n              \"type\": \"string\"\n            }\n          },\n          \"required\": [\"explanation\", \"output\"],\n          \"additionalProperties\": false\n        }\n      },\n      \"final_answer\": {\n        \"type\": \"string\"\n      }\n    },\n    \"required\": [\"steps\", \"final_answer\"],\n    \"additionalProperties\": false\n  }\n}\n```\n\nYou can use the convenient `JSONSchema` object like this:\n\n```swift\n\u002F\u002F 1: Define the Step schema object\n\nlet stepSchema = JSONSchema(\n   type: .object,\n   properties: [\n      \"explanation\": JSONSchema(type: .string),\n      \"output\": JSONSchema(\n         type: .string)\n   ],\n   required: [\"explanation\", \"output\"],\n   additionalProperties: false\n)\n\n\u002F\u002F 2. Define the steps Array schema.\n\nlet stepsArraySchema = JSONSchema(type: .array, items: stepSchema)\n\n\u002F\u002F 3. Define the final Answer schema.\n\nlet finalAnswerSchema = JSONSchema(type: .string)\n\n\u002F\u002F 4. 
Define the math response JSON schema.\n\nlet mathResponseSchema = JSONSchema(\n      type: .object,\n      properties: [\n         \"steps\": stepsArraySchema,\n         \"final_answer\": finalAnswerSchema\n      ],\n      required: [\"steps\", \"final_answer\"],\n      additionalProperties: false\n)\n\nlet tool = ChatCompletionParameters.Tool(\n            function: .init(\n               name: \"math_response\",\n               strict: true,\n               parameters: mathResponseSchema))\n\nlet prompt = \"solve 8x + 31 = 2\"\nlet systemMessage = ChatCompletionParameters.Message(role: .system, content: .text(\"You are a math tutor\"))\nlet userMessage = ChatCompletionParameters.Message(role: .user, content: .text(prompt))\nlet parameters = ChatCompletionParameters(\n   messages: [systemMessage, userMessage],\n   model: .gpt4o20240806,\n   tools: [tool])\n\nlet chat = try await service.startChat(parameters: parameters)\n```\n\n2. Response format: developers can supply a JSON Schema via `json_schema`, a new option for the `response_format` parameter. This is useful when the model is not calling a tool, but rather responding to the user in a structured way. This feature works with the newest GPT-4o models: `gpt-4o-2024-08-06` and `gpt-4o-mini-2024-07-18`. When a response_format is supplied with strict: true, model outputs will match the supplied schema.\n\nUsing the previous schema, this is how you can implement it as a JSON schema using the convenient `JSONSchemaResponseFormat` object:\n\n```swift\n\u002F\u002F 1: Define the Step schema object\n\nlet stepSchema = JSONSchema(\n   type: .object,\n   properties: [\n      \"explanation\": JSONSchema(type: .string),\n      \"output\": JSONSchema(\n         type: .string)\n   ],\n   required: [\"explanation\", \"output\"],\n   additionalProperties: false\n)\n\n\u002F\u002F 2. 
Define the steps Array schema.\n\nlet stepsArraySchema = JSONSchema(type: .array, items: stepSchema)\n\n\u002F\u002F 3. Define the final Answer schema.\n\nlet finalAnswerSchema = JSONSchema(type: .string)\n\n\u002F\u002F 4. Define the response format JSON schema.\n\nlet responseFormatSchema = JSONSchemaResponseFormat(\n   name: \"math_response\",\n   strict: true,\n   schema: JSONSchema(\n      type: .object,\n      properties: [\n         \"steps\": stepsArraySchema,\n         \"final_answer\": finalAnswerSchema\n      ],\n      required: [\"steps\", \"final_answer\"],\n      additionalProperties: false\n   )\n)\n\nlet prompt = \"solve 8x + 31 = 2\"\nlet systemMessage = ChatCompletionParameters.Message(role: .system, content: .text(\"You are a math tutor\"))\nlet userMessage = ChatCompletionParameters.Message(role: .user, content: .text(prompt))\nlet parameters = ChatCompletionParameters(\n   messages: [systemMessage, userMessage],\n   model: .gpt4o20240806,\n   responseFormat: .jsonSchema(responseFormatSchema))\n```\n\nSwiftOpenAI Structured Outputs supports:\n\n- [x] Tools Structured Output.\n- [x] Response format Structured Output.\n- [x] Recursive Schema.\n- [x] Optional values Schema.\n- [ ] Pydantic models.\n\nPydantic models are not supported; users need to manually create schemas using the `JSONSchema` or `JSONSchemaResponseFormat` objects.\n\nPro tip 🔥 Use the [iosAICodeAssistant GPT](https:\u002F\u002Fchatgpt.com\u002Fg\u002Fg-qj7RuW7PY-iosai-code-assistant) to construct SwiftOpenAI schemas. 
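As an aside on the recursive-schema support listed above: in raw JSON Schema form, root recursion is expressed with `$ref` pointing back to the schema root. A sketch with hypothetical names (a tree of labeled nodes):

```json
{
  "name": "ui_tree",
  "strict": true,
  "schema": {
    "type": "object",
    "properties": {
      "label": { "type": "string" },
      "children": {
        "type": "array",
        "items": { "$ref": "#" }
      }
    },
    "required": ["label", "children"],
    "additionalProperties": false
  }
}
```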
Just paste your JSON schema and ask the GPT to create SwiftOpenAI schemas for tools and response format.\n\nFor more details visit the Demo project for [tools](https:\u002F\u002Fgithub.com\u002Fjamesrochabrun\u002FSwiftOpenAI\u002Ftree\u002Fmain\u002FExamples\u002FSwiftOpenAIExample\u002FSwiftOpenAIExample\u002FChatStructureOutputTool) and [response format](https:\u002F\u002Fgithub.com\u002Fjamesrochabrun\u002FSwiftOpenAI\u002Ftree\u002Fmain\u002FExamples\u002FSwiftOpenAIExample\u002FSwiftOpenAIExample\u002FChatStructuredOutputs).\n\n### Vision\n\nThe [Vision](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Fvision) API is accessed through the chat completions API, specifically using the gpt-4-vision-preview or gpt-4o model. Any other model will not provide an image description.\n\nUsage\n```swift\nlet imageURL = \"https:\u002F\u002Fupload.wikimedia.org\u002Fwikipedia\u002Fcommons\u002Fthumb\u002Fd\u002Fdd\u002FGfp-wisconsin-madison-the-nature-boardwalk.jpg\u002F2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg\"\nlet prompt = \"What is this?\"\nlet messageContent: [ChatCompletionParameters.Message.ContentType.MessageContent] = [.text(prompt), .imageUrl(.init(url: imageURL))] \u002F\u002F Add as many `.imageUrl` entries as needed.\nlet parameters = ChatCompletionParameters(messages: [.init(role: .user, content: .contentArray(messageContent))], model: .gpt4o)\nlet chatCompletionObject = try await service.startStreamedChat(parameters: parameters)\n```\n\n![Simulator Screen Recording - iPhone 15 - 2023-11-09 at 17 12 06](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjamesrochabrun_SwiftOpenAI_readme_d0e5b5b85133.png)\n\nFor more details on uploading base64-encoded images on iOS, check the [ChatVision](https:\u002F\u002Fgithub.com\u002Fjamesrochabrun\u002FSwiftOpenAI\u002Ftree\u002Fmain\u002FExamples\u002FSwiftOpenAIExample\u002FSwiftOpenAIExample\u002FVision) demo in 
the Examples section of this package.\n\n### Response\n\nOpenAI's most advanced interface for generating model responses. Supports text and image inputs, and text outputs. Create stateful interactions with the model, using the output of previous responses as input. Extend the model's capabilities with built-in tools for file search, web search, computer use, and more. Allow the model access to external systems and data using function calling.\n\n- Full streaming support with `responseCreateStream` method\n- Comprehensive `ResponseStreamEvent` enum covering 40+ event types\n- Enhanced `InputMessage` with `id` field for response ID tracking\n- Improved conversation state management with `previousResponseId`\n- Real-time text streaming, function calls, and tool usage events\n- Support for reasoning summaries, web search, file search, and image generation events\n- **NEW**: Support for GPT-5 models (gpt-5, gpt-5-mini, gpt-5-nano)\n- **NEW**: Verbosity parameter for controlling response detail level\n\n#### ModelResponseParameter\n\nThe `ModelResponseParameter` provides a comprehensive interface for creating model responses:\n\n```swift\nlet parameters = ModelResponseParameter(\n    input: .text(\"What is the answer to life, the universe, and everything?\"),\n    model: .gpt5,  \u002F\u002F Support for GPT-5, GPT-5-mini, GPT-5-nano\n    text: TextConfiguration(\n        format: .text,\n        verbosity: \"low\"  \u002F\u002F NEW: Control response verbosity (\"low\", \"medium\", \"high\")\n    ),\n    temperature: 0.7\n)\n\nlet response = try await service.responseCreate(parameters)\n```\n\n#### Available GPT-5 Models\n\n```swift\npublic enum Model {\n    case gpt5        \u002F\u002F Complex reasoning, broad world knowledge, and code-heavy or multi-step agentic tasks\n    case gpt5Mini    \u002F\u002F Cost-optimized reasoning and chat; balances speed, cost, and capability\n    case gpt5Nano    \u002F\u002F High-throughput tasks, especially simple instruction-following 
or classification\n    \u002F\u002F ... other models\n}\n```\n\n#### TextConfiguration with Verbosity\n\n```swift\n\u002F\u002F Create a text configuration with verbosity control\nlet textConfig = TextConfiguration(\n    format: .text,       \u002F\u002F Can be .text, .jsonObject, or .jsonSchema\n    verbosity: \"medium\"  \u002F\u002F Controls response detail level\n)\n```\n\nRelated guides:\n\n- [Quickstart](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fquickstart?api-mode=responses)\n- [Text inputs and outputs](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Ftext?api-mode=responses)\n- [Image inputs](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Fimages?api-mode=responses)\n- [Structured Outputs](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Fstructured-outputs?api-mode=responses)\n- [Function calling](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Ffunction-calling?api-mode=responses)\n- [Conversation state](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Fconversation-state?api-mode=responses)\n- [Extend the models with tools](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Ftools?api-mode=responses)\n\nParameters\n```swift\n\u002F\u002F\u002F [Creates a model response.](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fresponses\u002Fcreate)\npublic struct ModelResponseParameter: Codable {\n\n   \u002F\u002F\u002F Text, image, or file inputs to the model, used to generate a response.\n   \u002F\u002F\u002F A text input to the model, equivalent to a text input with the user role.\n   \u002F\u002F\u002F A list of one or many input items to the model, containing different content types.\n   public var input: InputType\n\n   \u002F\u002F\u002F Model ID used to generate the response, like gpt-4o or o1. 
OpenAI offers a wide range of models with\n   \u002F\u002F\u002F different capabilities, performance characteristics, and price points.\n   \u002F\u002F\u002F Refer to the model guide to browse and compare available models.\n   public var model: String\n\n   \u002F\u002F\u002F Specify additional output data to include in the model response. Currently supported values are:\n   \u002F\u002F\u002F file_search_call.results : Include the search results of the file search tool call.\n   \u002F\u002F\u002F message.input_image.image_url : Include image urls from the input message.\n   \u002F\u002F\u002F computer_call_output.output.image_url : Include image urls from the computer call output.\n   public var include: [String]?\n\n   \u002F\u002F\u002F Inserts a system (or developer) message as the first item in the model's context.\n   \u002F\u002F\u002F When used along with previous_response_id, the instructions from a previous response will not be\n   \u002F\u002F\u002F carried over to the next response. This makes it simple to swap out system (or developer) messages in new responses.\n   public var instructions: String?\n\n   \u002F\u002F\u002F An upper bound for the number of tokens that can be generated for a response, including visible output tokens\n   \u002F\u002F\u002F and reasoning tokens.\n   public var maxOutputTokens: Int?\n\n   \u002F\u002F\u002F Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information\n   \u002F\u002F\u002F about the object in a structured format, and querying for objects via API or the dashboard.\n   \u002F\u002F\u002F Keys are strings with a maximum length of 64 characters. 
Values are strings with a maximum length of 512 characters.\n   public var metadata: [String: String]?\n\n   \u002F\u002F\u002F Whether to allow the model to run tool calls in parallel.\n   \u002F\u002F\u002F Defaults to true\n   public var parallelToolCalls: Bool?\n\n   \u002F\u002F\u002F The unique ID of the previous response to the model. Use this to create multi-turn conversations.\n   \u002F\u002F\u002F Learn more about conversation state.\n   public var previousResponseId: String?\n\n   \u002F\u002F\u002F o-series models only\n   \u002F\u002F\u002F Configuration options for reasoning models.\n   public var reasoning: Reasoning?\n\n   \u002F\u002F\u002F Whether to store the generated model response for later retrieval via API.\n   \u002F\u002F\u002F Defaults to true\n   public var store: Bool?\n\n   \u002F\u002F\u002F If set to true, the model response data will be streamed to the client as it is generated using server-sent events.\n   public var stream: Bool?\n\n   \u002F\u002F\u002F What sampling temperature to use, between 0 and 2.\n   \u002F\u002F\u002F Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.\n   \u002F\u002F\u002F We generally recommend altering this or top_p but not both.\n   \u002F\u002F\u002F Defaults to 1\n   public var temperature: Double?\n\n   \u002F\u002F\u002F Configuration options for a text response from the model. Can be plain text or structured JSON data.\n   public var text: TextConfiguration?\n\n   \u002F\u002F\u002F How the model should select which tool (or tools) to use when generating a response.\n   \u002F\u002F\u002F See the tools parameter to see how to specify which tools the model can call.\n   public var toolChoice: ToolChoiceMode?\n\n   \u002F\u002F\u002F An array of tools the model may call while generating a response. 
You can specify which tool to use by setting the tool_choice parameter.\n   public var tools: [Tool]?\n\n   \u002F\u002F\u002F An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass.\n   \u002F\u002F\u002F So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n   \u002F\u002F\u002F We generally recommend altering this or temperature but not both.\n   \u002F\u002F\u002F Defaults to 1\n   public var topP: Double?\n\n   \u002F\u002F\u002F The truncation strategy to use for the model response.\n   \u002F\u002F\u002F Defaults to disabled\n   public var truncation: String?\n\n   \u002F\u002F\u002F A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse.\n   public var user: String?\n}\n```\n\n[The Response object](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fresponses\u002Fobject)\n\n```swift\n\u002F\u002F\u002F The Response object returned when retrieving a model response\npublic struct ResponseModel: Decodable {\n\n   \u002F\u002F\u002F Unix timestamp (in seconds) of when this Response was created.\n   public let createdAt: Int\n\n   \u002F\u002F\u002F An error object returned when the model fails to generate a Response.\n   public let error: ErrorObject?\n\n   \u002F\u002F\u002F Unique identifier for this Response.\n   public let id: String\n\n   \u002F\u002F\u002F Details about why the response is incomplete.\n   public let incompleteDetails: IncompleteDetails?\n\n   \u002F\u002F\u002F Inserts a system (or developer) message as the first item in the model's context.\n   public let instructions: String?\n\n   \u002F\u002F\u002F An upper bound for the number of tokens that can be generated for a response, including visible output tokens\n   \u002F\u002F\u002F and reasoning tokens.\n   public let maxOutputTokens: Int?\n\n   \u002F\u002F\u002F Set of 16 key-value pairs that can 
be attached to an object.\n   public let metadata: [String: String]\n\n   \u002F\u002F\u002F Model ID used to generate the response, like gpt-4o or o1.\n   public let model: String\n\n   \u002F\u002F\u002F The object type of this resource - always set to response.\n   public let object: String\n\n   \u002F\u002F\u002F An array of content items generated by the model.\n   public let output: [OutputItem]\n\n   \u002F\u002F\u002F Whether to allow the model to run tool calls in parallel.\n   public let parallelToolCalls: Bool\n\n   \u002F\u002F\u002F The unique ID of the previous response to the model. Use this to create multi-turn conversations.\n   public let previousResponseId: String?\n\n   \u002F\u002F\u002F Configuration options for reasoning models.\n   public let reasoning: Reasoning?\n\n   \u002F\u002F\u002F The status of the response generation. One of completed, failed, in_progress, or incomplete.\n   public let status: String\n\n   \u002F\u002F\u002F What sampling temperature to use, between 0 and 2.\n   public let temperature: Double?\n\n   \u002F\u002F\u002F Configuration options for a text response from the model.\n   public let text: TextConfiguration\n\n   \u002F\u002F\u002F How the model should select which tool (or tools) to use when generating a response.\n   public let toolChoice: ToolChoiceMode\n\n   \u002F\u002F\u002F An array of tools the model may call while generating a response.\n   public let tools: [Tool]\n\n   \u002F\u002F\u002F An alternative to sampling with temperature, called nucleus sampling.\n   public let topP: Double?\n\n   \u002F\u002F\u002F The truncation strategy to use for the model response.\n   public let truncation: String?\n\n   \u002F\u002F\u002F Represents token usage details.\n   public let usage: Usage?\n\n   \u002F\u002F\u002F A unique identifier representing your end-user.\n   public let user: String?\n   \n   \u002F\u002F\u002F Convenience property that aggregates all text output from output_text items in the output 
array.\n   \u002F\u002F\u002F Similar to the outputText property in Python and JavaScript SDKs.\n   public var outputText: String? \n}\n```\n\nInput Types\n```swift\n\u002F\u002F InputType represents the input to the Response API\npublic enum InputType: Codable {\n    case string(String)  \u002F\u002F Simple text input\n    case array([InputItem])  \u002F\u002F Array of input items for complex conversations\n}\n\n\u002F\u002F InputItem represents different types of input\npublic enum InputItem: Codable {\n    case message(InputMessage)  \u002F\u002F User, assistant, system messages\n    case functionToolCall(FunctionToolCall)  \u002F\u002F Function calls\n    case functionToolCallOutput(FunctionToolCallOutput)  \u002F\u002F Function outputs\n    \u002F\u002F ... other input types\n}\n\n\u002F\u002F InputMessage structure with support for response IDs\npublic struct InputMessage: Codable {\n    public let role: String  \u002F\u002F \"user\", \"assistant\", \"system\"\n    public let content: MessageContent\n    public let type: String?  \u002F\u002F Always \"message\"\n    public let status: String?  \u002F\u002F \"completed\" for assistant messages\n    public let id: String?  
\u002F\u002F Response ID for assistant messages\n}\n\n\u002F\u002F MessageContent can be text or array of content items\npublic enum MessageContent: Codable {\n    case text(String)\n    case array([ContentItem])  \u002F\u002F For multimodal content\n}\n```\n\nUsage\n\nSimple text input\n```swift\nlet prompt = \"What is the capital of France?\"\nlet parameters = ModelResponseParameter(input: .string(prompt), model: .gpt4o)\nlet response = try await service.responseCreate(parameters)\n```\n\nText input with reasoning\n```swift\nlet prompt = \"How much wood would a woodchuck chuck?\"\nlet parameters = ModelResponseParameter(\n    input: .string(prompt),\n    model: .o3Mini,\n    reasoning: Reasoning(effort: \"high\")\n)\nlet response = try await service.responseCreate(parameters)\n```\n\nImage input\n```swift\nlet textPrompt = \"What is in this image?\"\nlet imageUrl = \"https:\u002F\u002Fexample.com\u002Fpath\u002Fto\u002Fimage.jpg\"\nlet imageContent = ContentItem.imageUrl(ImageUrlContent(imageUrl: imageUrl))\nlet textContent = ContentItem.text(TextContent(text: textPrompt))\nlet message = InputItem(role: \"user\", content: [textContent, imageContent])\nlet parameters = ModelResponseParameter(input: .array([message]), model: .gpt4o)\nlet response = try await service.responseCreate(parameters)\n```\n\nUsing tools (web search)\n```swift\nlet prompt = \"What was a positive news story from today?\"\nlet parameters = ModelResponseParameter(\n    input: .string(prompt),\n    model: .gpt4o,\n    tools: [Tool(type: \"web_search_preview\", function: nil)]\n)\nlet response = try await service.responseCreate(parameters)\n```\n\nUsing tools (file search)\n```swift\nlet prompt = \"What are the key points in the document?\"\nlet parameters = ModelResponseParameter(\n    input: .string(prompt),\n    model: .gpt4o,\n    tools: [\n        Tool(\n            type: \"file_search\",\n            function: ChatCompletionParameters.ChatFunction(\n                name: \"file_search\",\n 
               strict: false,\n                description: \"Search through files\",\n                parameters: JSONSchema(\n                    type: .object,\n                    properties: [\n                        \"vector_store_ids\": JSONSchema(\n                            type: .array,\n                            items: JSONSchema(type: .string)\n                        ),\n                        \"max_num_results\": JSONSchema(type: .integer)\n                    ],\n                    required: [\"vector_store_ids\"],\n                    additionalProperties: false\n                )\n            )\n        )\n    ]\n)\nlet response = try await service.responseCreate(parameters)\n```\n\nFunction calling\n```swift\nlet prompt = \"What is the weather like in Boston today?\"\nlet parameters = ModelResponseParameter(\n    input: .string(prompt),\n    model: .gpt4o,\n    tools: [\n        Tool(\n            type: \"function\",\n            function: ChatCompletionParameters.ChatFunction(\n                name: \"get_current_weather\",\n                strict: false,\n                description: \"Get the current weather in a given location\",\n                parameters: JSONSchema(\n                    type: .object,\n                    properties: [\n                        \"location\": JSONSchema(\n                            type: .string,\n                            description: \"The city and state, e.g. 
San Francisco, CA\"\n                        ),\n                        \"unit\": JSONSchema(\n                            type: .string,\n                            enum: [\"celsius\", \"fahrenheit\"]\n                        )\n                    ],\n                    required: [\"location\", \"unit\"],\n                    additionalProperties: false\n                )\n            )\n        )\n    ],\n    toolChoice: .auto\n)\nlet response = try await service.responseCreate(parameters)\n```\n\nRetrieving a response\n```swift\nlet responseId = \"resp_abc123\"\nlet response = try await service.responseModel(id: responseId)\n```\n\n#### Streaming Responses\n\nThe Response API supports streaming responses using Server-Sent Events (SSE). This allows you to receive partial responses as they are generated, enabling real-time UI updates and better user experience.\n\nStream Events\n```swift\n\u002F\u002F The ResponseStreamEvent enum represents all possible streaming events\npublic enum ResponseStreamEvent: Decodable {\n  case responseCreated(ResponseCreatedEvent)\n  case responseInProgress(ResponseInProgressEvent)\n  case responseCompleted(ResponseCompletedEvent)\n  case responseFailed(ResponseFailedEvent)\n  case outputItemAdded(OutputItemAddedEvent)\n  case outputTextDelta(OutputTextDeltaEvent)\n  case outputTextDone(OutputTextDoneEvent)\n  case functionCallArgumentsDelta(FunctionCallArgumentsDeltaEvent)\n  case reasoningSummaryTextDelta(ReasoningSummaryTextDeltaEvent)\n  case error(ErrorEvent)\n  \u002F\u002F ... 
and many more event types\n}\n```\n\nBasic Streaming Example\n```swift\n\u002F\u002F Enable streaming by setting stream: true\nlet parameters = ModelResponseParameter(\n    input: .string(\"Tell me a story\"),\n    model: .gpt4o,\n    stream: true\n)\n\n\u002F\u002F Create a stream\nlet stream = try await service.responseCreateStream(parameters)\n\n\u002F\u002F Process events as they arrive\nfor try await event in stream {\n    switch event {\n    case .outputTextDelta(let delta):\n        \u002F\u002F Append text chunk to your UI\n        print(delta.delta, terminator: \"\")\n        \n    case .responseCompleted(let completed):\n        \u002F\u002F Response is complete\n        print(\"\\nResponse ID: \\(completed.response.id)\")\n        \n    case .error(let error):\n        \u002F\u002F Handle errors\n        print(\"Error: \\(error.message)\")\n        \n    default:\n        \u002F\u002F Handle other events as needed\n        break\n    }\n}\n```\n\nStreaming with Conversation State\n```swift\n\u002F\u002F Maintain conversation continuity with previousResponseId\nvar previousResponseId: String? 
= nil\nvar messages: [(role: String, content: String)] = []\n\n\u002F\u002F First message\nlet firstParams = ModelResponseParameter(\n    input: .string(\"Hello!\"),\n    model: .gpt4o,\n    stream: true\n)\n\nlet firstStream = try await service.responseCreateStream(firstParams)\nvar firstResponse = \"\"\n\nfor try await event in firstStream {\n    switch event {\n    case .outputTextDelta(let delta):\n        firstResponse += delta.delta\n        \n    case .responseCompleted(let completed):\n        previousResponseId = completed.response.id\n        messages.append((role: \"user\", content: \"Hello!\"))\n        messages.append((role: \"assistant\", content: firstResponse))\n        \n    default:\n        break\n    }\n}\n\n\u002F\u002F Follow-up message with conversation context\nvar inputArray: [InputItem] = []\n\n\u002F\u002F Add conversation history\nfor message in messages {\n    inputArray.append(.message(InputMessage(\n        role: message.role,\n        content: .text(message.content)\n    )))\n}\n\n\u002F\u002F Add new user message\ninputArray.append(.message(InputMessage(\n    role: \"user\",\n    content: .text(\"How are you?\")\n)))\n\nlet followUpParams = ModelResponseParameter(\n    input: .array(inputArray),\n    model: .gpt4o,\n    previousResponseId: previousResponseId,\n    stream: true\n)\n\nlet followUpStream = try await service.responseCreateStream(followUpParams)\n\u002F\u002F Process the follow-up stream...\n```\n\nStreaming with Tools and Function Calling\n```swift\nlet parameters = ModelResponseParameter(\n    input: .string(\"What's the weather in San Francisco?\"),\n    model: .gpt4o,\n    tools: [\n        Tool(\n            type: \"function\",\n            function: ChatCompletionParameters.ChatFunction(\n                name: \"get_weather\",\n                description: \"Get current weather\",\n                parameters: JSONSchema(\n                    type: .object,\n                    properties: [\n                        
\"location\": JSONSchema(type: .string)\n                    ],\n                    required: [\"location\"]\n                )\n            )\n        )\n    ],\n    stream: true\n)\n\nlet stream = try await service.responseCreateStream(parameters)\nvar functionCallArguments = \"\"\n\nfor try await event in stream {\n    switch event {\n    case .functionCallArgumentsDelta(let delta):\n        \u002F\u002F Accumulate function call arguments\n        functionCallArguments += delta.delta\n        \n    case .functionCallArgumentsDone(let done):\n        \u002F\u002F Function call is complete\n        print(\"Function: \\(done.name)\")\n        print(\"Arguments: \\(functionCallArguments)\")\n        \n    case .outputTextDelta(let delta):\n        \u002F\u002F Regular text output\n        print(delta.delta, terminator: \"\")\n        \n    default:\n        break\n    }\n}\n```\n\nCanceling a Stream\n```swift\n\u002F\u002F Streams can be canceled using Swift's task cancellation\nlet streamTask = Task {\n    let stream = try await service.responseCreateStream(parameters)\n    \n    for try await event in stream {\n        \u002F\u002F Check if task is cancelled\n        if Task.isCancelled {\n            break\n        }\n        \n        \u002F\u002F Process events...\n    }\n}\n\n\u002F\u002F Cancel the stream when needed\nstreamTask.cancel()\n```\n\nComplete Streaming Implementation Example\n```swift\n@MainActor\n@Observable\nclass ResponseStreamProvider {\n    var messages: [Message] = []\n    var isStreaming = false\n    var error: String?\n    \n    private let service: OpenAIService\n    private var previousResponseId: String?\n    private var streamTask: Task\u003CVoid, Never>?\n    \n    init(service: OpenAIService) {\n        self.service = service\n    }\n    \n    func sendMessage(_ text: String) {\n        streamTask?.cancel()\n        \n        \u002F\u002F Add user message\n        messages.append(Message(role: .user, content: text))\n        \n      
  \u002F\u002F Start streaming\n        streamTask = Task {\n            await streamResponse(for: text)\n        }\n    }\n    \n    private func streamResponse(for userInput: String) async {\n        isStreaming = true\n        error = nil\n        \n        \u002F\u002F Create streaming message placeholder\n        let streamingMessage = Message(role: .assistant, content: \"\", isStreaming: true)\n        messages.append(streamingMessage)\n        \n        do {\n            \u002F\u002F Build conversation history\n            var inputArray: [InputItem] = []\n            for message in messages.dropLast(2) {\n                inputArray.append(.message(InputMessage(\n                    role: message.role.rawValue,\n                    content: .text(message.content)\n                )))\n            }\n            inputArray.append(.message(InputMessage(\n                role: \"user\",\n                content: .text(userInput)\n            )))\n            \n            let parameters = ModelResponseParameter(\n                input: .array(inputArray),\n                model: .gpt4o,\n                previousResponseId: previousResponseId,\n                stream: true\n            )\n            \n            let stream = try await service.responseCreateStream(parameters)\n            var accumulatedText = \"\"\n            \n            for try await event in stream {\n                guard !Task.isCancelled else { break }\n                \n                switch event {\n                case .outputTextDelta(let delta):\n                    accumulatedText += delta.delta\n                    updateStreamingMessage(with: accumulatedText)\n                    \n                case .responseCompleted(let completed):\n                    previousResponseId = completed.response.id\n                    finalizeStreamingMessage(with: accumulatedText, responseId: completed.response.id)\n                    \n                case .error(let errorEvent):\n        
            throw APIError.requestFailed(description: errorEvent.message)\n                    \n                default:\n                    break\n                }\n            }\n        } catch {\n            self.error = error.localizedDescription\n            messages.removeLast() \u002F\u002F Remove streaming message on error\n        }\n        \n        isStreaming = false\n    }\n    \n    private func updateStreamingMessage(with content: String) {\n        if let index = messages.lastIndex(where: { $0.isStreaming }) {\n            messages[index].content = content\n        }\n    }\n    \n    private func finalizeStreamingMessage(with content: String, responseId: String) {\n        if let index = messages.lastIndex(where: { $0.isStreaming }) {\n            messages[index].content = content\n            messages[index].isStreaming = false\n            messages[index].responseId = responseId\n        }\n    }\n}\n```\n\n### Embeddings\nParameters\n```swift\n\u002F\u002F\u002F [Creates](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fembeddings\u002Fcreate) an embedding vector representing the input text.\npublic struct EmbeddingParameter: Encodable {\n   \n   \u002F\u002F\u002F ID of the model to use. You can use the List models API to see all of your available models, or see our [Model overview ](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fmodels\u002Foverview) for descriptions of them.\n   let model: String\n   \u002F\u002F\u002F Input text to embed, encoded as a string or array of tokens. To embed multiple inputs in a single request, pass an array of strings or an array of token arrays. Each input must not exceed the max input tokens for the model (8191 tokens for text-embedding-ada-002) and cannot be an empty string. 
[How to Count Tokens with `tiktoken`](https:\u002F\u002Fcookbook.openai.com\u002Fexamples\u002Fhow_to_count_tokens_with_tiktoken)\n   let input: String\n   \n   \u002F\u002F\u002F A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. [Learn more.](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Fsafety-best-practices\u002Fend-user-ids)\n   let user: String?\n   \n   public enum Model: String {\n      case textEmbeddingAda002 = \"text-embedding-ada-002\"\n   }\n   \n   public init(\n      model: Model = .textEmbeddingAda002,\n      input: String,\n      user: String? = nil)\n   {\n      self.model = model.rawValue\n      self.input = input\n      self.user = user\n   }\n}\n```\nResponse\n```swift\n\u002F\u002F\u002F [Represents an embedding vector returned by embedding endpoint.](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fembeddings\u002Fobject)\npublic struct EmbeddingObject: Decodable {\n   \n   \u002F\u002F\u002F The object type, which is always \"embedding\".\n   public let object: String\n   \u002F\u002F\u002F The embedding vector, which is a list of floats. 
The length of the vector depends on the model, as listed in the [embedding guide](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Fembeddings).\n   public let embedding: [Float]\n   \u002F\u002F\u002F The index of the embedding in the list of embeddings.\n   public let index: Int\n}\n```\n\nUsage\n```swift\nlet prompt = \"Hello world.\"\nlet parameters = EmbeddingParameter(input: prompt)\nlet embeddingObjects = try await service.createEmbeddings(parameters: parameters).data\n```\n\n### Fine-tuning\nParameters\n```swift\n\u002F\u002F\u002F [Creates a job](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Ffine-tuning\u002Fcreate) that fine-tunes a specified model from a given dataset.\n\u002F\u002F\u002F Response includes details of the enqueued job including job status and the name of the fine-tuned models once complete.\npublic struct FineTuningJobParameters: Encodable {\n   \n   \u002F\u002F\u002F The name of the model to fine-tune. You can select one of the [supported models](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fmodels\u002Foverview).\n   let model: String\n   \u002F\u002F\u002F The ID of an uploaded file that contains training data.\n   \u002F\u002F\u002F See [upload file](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Ffiles\u002Fupload) for how to upload a file.\n   \u002F\u002F\u002F Your dataset must be formatted as a JSONL file. 
Additionally, you must upload your file with the purpose fine-tune.\n   \u002F\u002F\u002F See the [fine-tuning guide](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Ffine-tuning) for more details.\n   let trainingFile: String\n   \u002F\u002F\u002F The hyperparameters used for the fine-tuning job.\n   let hyperparameters: HyperParameters?\n   \u002F\u002F\u002F A string of up to 18 characters that will be added to your fine-tuned model name.\n   \u002F\u002F\u002F For example, a suffix of \"custom-model-name\" would produce a model name like ft:gpt-3.5-turbo:openai:custom-model-name:7p4lURel.\n   \u002F\u002F\u002F Defaults to null.\n   let suffix: String?\n   \u002F\u002F\u002F The ID of an uploaded file that contains validation data.\n   \u002F\u002F\u002F If you provide this file, the data is used to generate validation metrics periodically during fine-tuning. These metrics can be viewed in the fine-tuning results file. The same data should not be present in both train and validation files.\n   \u002F\u002F\u002F Your dataset must be formatted as a JSONL file. You must upload your file with the purpose fine-tune.\n   \u002F\u002F\u002F See the [fine-tuning guide](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Ffine-tuning) for more details.\n   let validationFile: String?\n   \u002F\u002F\u002F A list of integrations to enable for your fine-tuning job.\n   let integrations: [Integration]?\n   \u002F\u002F\u002F The seed controls the reproducibility of the job. Passing in the same seed and job parameters should produce the same results, but may differ in rare cases. 
If a seed is not specified, one will be generated for you.\n   let seed: Int?\n   \n   \u002F\u002F\u002F Fine-tuning is [currently available](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Ffine-tuning\u002Fwhat-models-can-be-fine-tuned) for the following models:\n   \u002F\u002F\u002F gpt-3.5-turbo-0613 (recommended)\n   \u002F\u002F\u002F babbage-002\n   \u002F\u002F\u002F davinci-002\n   \u002F\u002F\u002F OpenAI expects gpt-3.5-turbo to be the right model for most users in terms of results and ease of use, unless you are migrating a legacy fine-tuned model.\n   public enum Model: String {\n      case gpt35 = \"gpt-3.5-turbo-0613\" \u002F\u002F\u002F recommended\n      case babbage002 = \"babbage-002\"\n      case davinci002 = \"davinci-002\"\n   }\n   \n   public struct HyperParameters: Encodable {\n      \u002F\u002F\u002F The number of epochs to train the model for. An epoch refers to one full cycle through the training dataset.\n      \u002F\u002F\u002F Defaults to auto.\n      let nEpochs: Int?\n      \n      public init(\n         nEpochs: Int?)\n      {\n         self.nEpochs = nEpochs\n      }\n   }\n   \n   public init(\n      model: Model,\n      trainingFile: String,\n      hyperparameters: HyperParameters? = nil,\n      suffix: String? = nil,\n      validationFile: String? = nil,\n      integrations: [Integration]? = nil,\n      seed: Int? = nil)\n   {\n      self.model = model.rawValue\n      self.trainingFile = trainingFile\n      self.hyperparameters = hyperparameters\n      self.suffix = suffix\n      self.validationFile = validationFile\n      self.integrations = integrations\n      self.seed = seed\n   }\n}\n```\nResponse\n```swift\n\u002F\u002F\u002F The fine_tuning.job object represents a [fine-tuning job](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Ffine-tuning\u002Fobject) that has been created through the API.\npublic struct FineTuningJobObject: Decodable {\n   \n   \u002F\u002F\u002F The object identifier, which can be referenced in the API endpoints.\n   public let id: String\n   \u002F\u002F\u002F The Unix timestamp (in seconds) for when the fine-tuning job was created.\n   public let createdAt: Int\n   \u002F\u002F\u002F For fine-tuning jobs that have failed, this will contain more information on the cause of the failure.\n   public let error: OpenAIErrorResponse.Error?\n   \u002F\u002F\u002F The name of the fine-tuned model that is being created. The value will be null if the fine-tuning job is still running.\n   public let fineTunedModel: String?\n   \u002F\u002F\u002F The Unix timestamp (in seconds) for when the fine-tuning job was finished. The value will be null if the fine-tuning job is still running.\n   public let finishedAt: Int?\n   \u002F\u002F\u002F The hyperparameters used for the fine-tuning job. See the [fine-tuning guide](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Ffine-tuning) for more details.\n   public let hyperparameters: HyperParameters\n   \u002F\u002F\u002F The base model that is being fine-tuned.\n   public let model: String\n   \u002F\u002F\u002F The object type, which is always \"fine_tuning.job\".\n   public let object: String\n   \u002F\u002F\u002F The organization that owns the fine-tuning job.\n   public let organizationId: String\n   \u002F\u002F\u002F The compiled results file ID(s) for the fine-tuning job. 
You can retrieve the results with the [Files API](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Ffiles\u002Fretrieve-contents).\n   public let resultFiles: [String]\n   \u002F\u002F\u002F The current status of the fine-tuning job, which can be either `validating_files`, `queued`, `running`, `succeeded`, `failed`, or `cancelled`.\n   public let status: String\n   \u002F\u002F\u002F The total number of billable tokens processed by this fine-tuning job. The value will be null if the fine-tuning job is still running.\n   public let trainedTokens: Int?\n   \n   \u002F\u002F\u002F The file ID used for training. You can retrieve the training data with the [Files API](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Ffiles\u002Fretrieve-contents).\n   public let trainingFile: String\n   \u002F\u002F\u002F The file ID used for validation. You can retrieve the validation results with the [Files API](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Ffiles\u002Fretrieve-contents).\n   public let validationFile: String?\n   \n   public enum Status: String {\n      case validatingFiles = \"validating_files\"\n      case queued\n      case running\n      case succeeded\n      case failed\n      case cancelled\n   }\n   \n   public struct HyperParameters: Decodable {\n      \u002F\u002F\u002F The number of epochs to train the model for. An epoch refers to one full cycle through the training dataset. \"auto\" decides the optimal number of epochs based on the size of the dataset. If setting the number manually, we support any number between 1 and 50 epochs.\n      public let nEpochs: IntOrStringValue\n   }\n}\n```\n\nUsage\nList fine-tuning jobs\n```swift\nlet fineTuningJobs = try await service.listFineTuningJobs()\n```\nCreate fine-tuning job\n```swift\nlet trainingFileID = \"file-Atc9okK0MOuQwQzDJCZXnrh6\" \u002F\u002F The id of the file that has been uploaded using the `Files` API. 
https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Ffine-tuning\u002Fcreate#fine-tuning\u002Fcreate-training_file\nlet parameters = FineTuningJobParameters(model: .gpt35, trainingFile: trainingFileID)\nlet fineTuningJob = try await service.createFineTuningJob(parameters: parameters)\n```\nRetrieve fine-tuning job\n```swift\nlet fineTuningJobID = \"ftjob-abc123\"\nlet fineTuningJob = try await service.retrieveFineTuningJob(id: fineTuningJobID)\n```\nCancel fine-tuning job\n```swift\nlet fineTuningJobID = \"ftjob-abc123\"\nlet canceledFineTuningJob = try await service.cancelFineTuningJobWith(id: fineTuningJobID)\n```\n#### Fine-tuning job event object\nResponse\n```swift\n\u002F\u002F\u002F [Fine-tuning job event object](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Ffine-tuning\u002Fevent-object)\npublic struct FineTuningJobEventObject: Decodable {\n   \n   public let id: String\n   \n   public let createdAt: Int\n   \n   public let level: String\n   \n   public let message: String\n   \n   public let object: String\n   \n   public let type: String?\n   \n   public let data: Data?\n   \n   public struct Data: Decodable {\n      public let step: Int\n      public let trainLoss: Double\n      public let trainMeanTokenAccuracy: Double\n   }\n}\n```\nUsage\n```swift\nlet fineTuningJobID = \"ftjob-abc123\"\nlet jobEvents = try await service.listFineTuningEventsForJobWith(id: fineTuningJobID, after: nil, limit: nil).data\n```\n\n### Batch\nParameters\n```swift\npublic struct BatchParameter: Encodable {\n   \n   \u002F\u002F\u002F The ID of an uploaded file that contains requests for the new batch.\n   \u002F\u002F\u002F See [upload file](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Ffiles\u002Fcreate) for how to upload a file.\n   \u002F\u002F\u002F Your input file must be formatted as a [JSONL file](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fbatch\u002FrequestInput), and must be 
uploaded with the purpose batch.\n   let inputFileID: String\n   \u002F\u002F\u002F The endpoint to be used for all requests in the batch. Currently only \u002Fv1\u002Fchat\u002Fcompletions is supported.\n   let endpoint: String\n   \u002F\u002F\u002F The time frame within which the batch should be processed. Currently only 24h is supported.\n   let completionWindow: String\n   \u002F\u002F\u002F Optional custom metadata for the batch.\n   let metadata: [String: String]?\n   \n   enum CodingKeys: String, CodingKey {\n      case inputFileID = \"input_file_id\"\n      case endpoint\n      case completionWindow = \"completion_window\"\n      case metadata\n   }\n}\n```\nResponse\n```swift\npublic struct BatchObject: Decodable {\n   \n   let id: String\n   \u002F\u002F\u002F The object type, which is always batch.\n   let object: String\n   \u002F\u002F\u002F The OpenAI API endpoint used by the batch.\n   let endpoint: String\n   \n   let errors: Error\n   \u002F\u002F\u002F The ID of the input file for the batch.\n   let inputFileID: String\n   \u002F\u002F\u002F The time frame within which the batch should be processed.\n   let completionWindow: String\n   \u002F\u002F\u002F The current status of the batch.\n   let status: String\n   \u002F\u002F\u002F The ID of the file containing the outputs of successfully executed requests.\n   let outputFileID: String\n   \u002F\u002F\u002F The ID of the file containing the outputs of requests with errors.\n   let errorFileID: String\n   \u002F\u002F\u002F The Unix timestamp (in seconds) for when the batch was created.\n   let createdAt: Int\n   \u002F\u002F\u002F The Unix timestamp (in seconds) for when the batch started processing.\n   let inProgressAt: Int\n   \u002F\u002F\u002F The Unix timestamp (in seconds) for when the batch will expire.\n   let expiresAt: Int\n   \u002F\u002F\u002F The Unix timestamp (in seconds) for when the batch started finalizing.\n   let finalizingAt: Int\n   \u002F\u002F\u002F The Unix timestamp 
(in seconds) for when the batch was completed.\n   let completedAt: Int\n   \u002F\u002F\u002F The Unix timestamp (in seconds) for when the batch failed.\n   let failedAt: Int\n   \u002F\u002F\u002F The Unix timestamp (in seconds) for when the batch expired.\n   let expiredAt: Int\n   \u002F\u002F\u002F The Unix timestamp (in seconds) for when the batch started cancelling.\n   let cancellingAt: Int\n   \u002F\u002F\u002F The Unix timestamp (in seconds) for when the batch was cancelled.\n   let cancelledAt: Int\n   \u002F\u002F\u002F The request counts for different statuses within the batch.\n   let requestCounts: RequestCount\n   \u002F\u002F\u002F Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.\n   let metadata: [String: String]\n   \n   public struct Error: Decodable {\n      \n      let object: String\n      let data: [Data]\n\n      public struct Data: Decodable {\n         \n         \u002F\u002F\u002F An error code identifying the error type.\n         let code: String\n         \u002F\u002F\u002F A human-readable message providing more details about the error.\n         let message: String\n         \u002F\u002F\u002F The name of the parameter that caused the error, if applicable.\n         let param: String?\n         \u002F\u002F\u002F The line number of the input file where the error occurred, if applicable.\n         let line: Int?\n      }\n   }\n   \n   public struct RequestCount: Decodable {\n      \n      \u002F\u002F\u002F Total number of requests in the batch.\n      let total: Int\n      \u002F\u002F\u002F Number of requests that have been completed successfully.\n      let completed: Int\n      \u002F\u002F\u002F Number of requests that have failed.\n      let failed: Int\n   }\n}\n```\nUsage\n\nCreate batch\n```swift\nlet inputFileID = 
\"file-abc123\"\nlet endpoint = \"\u002Fv1\u002Fchat\u002Fcompletions\"\nlet completionWindow = \"24h\"\nlet parameters = BatchParameter(inputFileID: inputFileID, endpoint: endpoint, completionWindow: completionWindow, metadata: nil)\nlet batch = try await service.createBatch(parameters: parameters)\n```\n\nRetrieve batch\n```swift\nlet batchID = \"batch_abc123\"\nlet batch = try await service.retrieveBatch(id: batchID)\n```\n\nCancel batch\n```swift\nlet batchID = \"batch_abc123\"\nlet batch = try await service.cancelBatch(id: batchID)\n```\n\nList batch\n```swift\nlet batches = try await service.listBatch(after: nil, limit: nil)\n```\n\n### Files\nParameters\n```swift\n\u002F\u002F\u002F [Upload a file](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Ffiles\u002Fcreate) that can be used across various endpoints\u002Ffeatures. Currently, the size of all the files uploaded by one organization can be up to 1 GB. Please contact us if you need to increase the storage limit.\npublic struct FileParameters: Encodable {\n   \n   \u002F\u002F\u002F The name of the file asset is not documented in OpenAI's official documentation; however, it is essential for constructing the multipart request.\n   let fileName: String\n   \u002F\u002F\u002F The file object (not file name) to be uploaded.\n   \u002F\u002F\u002F If the purpose is set to \"fine-tune\", the file will be used for fine-tuning.\n   let file: Data\n   \u002F\u002F\u002F The intended purpose of the uploaded file.\n   \u002F\u002F\u002F Use \"fine-tune\" for [fine-tuning](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Ffine-tuning). 
This allows us to validate the format of the uploaded file is correct for fine-tuning.\n   let purpose: String\n   \n   public init(\n      fileName: String,\n      file: Data,\n      purpose: String)\n   {\n      self.fileName = fileName\n      self.file = file\n      self.purpose = purpose\n   }\n}\n```\nResponse\n```swift\n\u002F\u002F\u002F The [File object](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Ffiles\u002Fobject) represents a document that has been uploaded to OpenAI.\npublic struct FileObject: Decodable {\n   \n   \u002F\u002F\u002F The file identifier, which can be referenced in the API endpoints.\n   public let id: String\n   \u002F\u002F\u002F The size of the file in bytes.\n   public let bytes: Int\n   \u002F\u002F\u002F The Unix timestamp (in seconds) for when the file was created.\n   public let createdAt: Int\n   \u002F\u002F\u002F The name of the file.\n   public let filename: String\n   \u002F\u002F\u002F The object type, which is always \"file\".\n   public let object: String\n   \u002F\u002F\u002F The intended purpose of the file. Currently, only \"fine-tune\" is supported.\n   public let purpose: String\n   \u002F\u002F\u002F The current status of the file, which can be either uploaded, processed, pending, error, deleting or deleted.\n   public let status: String\n   \u002F\u002F\u002F Additional details about the status of the file. 
If the file is in the error state, this will include a message describing the error.\n   public let statusDetails: String?\n   \n   public enum Status: String {\n      case uploaded\n      case processed\n      case pending\n      case error\n      case deleting\n      case deleted\n   }\n\n   public init(\n      id: String,\n      bytes: Int,\n      createdAt: Int,\n      filename: String,\n      object: String,\n      purpose: String,\n      status: Status,\n      statusDetails: String?)\n   {\n      self.id = id\n      self.bytes = bytes\n      self.createdAt = createdAt\n      self.filename = filename\n      self.object = object\n      self.purpose = purpose\n      self.status = status.rawValue\n      self.statusDetails = statusDetails\n   }\n}\n```\nUsage\nList files\n```swift\nlet files = try await service.listFiles().data\n```\nUpload file\n```swift\nlet fileName = \"worldCupData.jsonl\"\nlet fileURL = URL(fileURLWithPath: fileName)\nlet data = try Data(contentsOf: fileURL) \u002F\u002F Data retrieved from the file named \"worldCupData.jsonl\".\nlet parameters = FileParameters(fileName: fileName, file: data, purpose: \"fine-tune\") \u002F\u002F Important: make sure to provide a file name.\nlet uploadedFile = try await service.uploadFile(parameters: parameters)\n```\nDelete file\n```swift\nlet fileID = \"file-abc123\"\nlet deletedStatus = try await service.deleteFileWith(id: fileID)\n```\nRetrieve file\n```swift\nlet fileID = \"file-abc123\"\nlet retrievedFile = try await service.retrieveFileWith(id: fileID)\n```\nRetrieve file content\n```swift\nlet fileID = \"file-abc123\"\nlet fileContent = try await service.retrieveContentForFileWith(id: fileID)\n```\n\n### Images\n\nThis library supports the latest OpenAI image generation API.\n\n- Parameters Create\n\n```swift\n\u002F\u002F\u002F 'Create Image':\n\u002F\u002F\u002F https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fimages\u002Fcreate\npublic struct CreateImageParameters: Encodable {\n   \n   \u002F\u002F\u002F A text description 
of the desired image(s).\n   \u002F\u002F\u002F The maximum length is 32000 characters for `gpt-image-1`, 1000 characters for `dall-e-2` and 4000 characters for `dall-e-3`.\n   public let prompt: String\n   \n   \u002F\u002F MARK: - Optional properties\n   \n   \u002F\u002F\u002F Allows to set transparency for the background of the generated image(s).\n   \u002F\u002F\u002F This parameter is only supported for `gpt-image-1`.\n   \u002F\u002F\u002F Must be one of `transparent`, `opaque` or `auto` (default value).\n   \u002F\u002F\u002F When `auto` is used, the model will automatically determine the best background for the image.\n   \u002F\u002F\u002F If `transparent`, the output format needs to support transparency, so it should be set to either `png` (default value) or `webp`.\n   public let background: Background?\n   \n   \u002F\u002F\u002F The model to use for image generation. One of `dall-e-2`, `dall-e-3`, or `gpt-image-1`.\n   \u002F\u002F\u002F Defaults to `dall-e-2` unless a parameter specific to `gpt-image-1` is used.\n   public let model: Model?\n   \n   \u002F\u002F\u002F Control the content-moderation level for images generated by `gpt-image-1`.\n   \u002F\u002F\u002F Must be either low for less restrictive filtering or auto (default value).\n   public let moderation: Moderation?\n   \n   \u002F\u002F\u002F The number of images to generate. Must be between 1 and 10. 
For `dall-e-3`, only `n=1` is supported.\n   \u002F\u002F\u002F Defaults to `1`\n   public let n: Int?\n   \n   \u002F\u002F\u002F The compression level (0-100%) for the generated images.\n   \u002F\u002F\u002F This parameter is only supported for `gpt-image-1` with the `webp` or `jpeg` output formats, and defaults to 100.\n   public let outputCompression: Int?\n   \n   \u002F\u002F\u002F The format in which the generated images are returned.\n   \u002F\u002F\u002F This parameter is only supported for `gpt-image-1`.\n   \u002F\u002F\u002F Must be one of `png`, `jpeg`, or `webp`.\n   public let outputFormat: OutputFormat?\n   \n   \u002F\u002F\u002F The quality of the image that will be generated.\n   \u002F\u002F\u002F - `auto` (default value) will automatically select the best quality for the given model.\n   \u002F\u002F\u002F - `high`, `medium` and `low` are supported for gpt-image-1.\n   \u002F\u002F\u002F - `hd` and `standard` are supported for dall-e-3.\n   \u002F\u002F\u002F - `standard` is the only option for dall-e-2.\n   public let quality: Quality?\n   \n   \u002F\u002F\u002F The format in which generated images with dall-e-2 and dall-e-3 are returned.\n   \u002F\u002F\u002F Must be one of `url` or `b64_json`.\n   \u002F\u002F\u002F URLs are only valid for 60 minutes after the image has been generated.\n   \u002F\u002F\u002F This parameter isn't supported for `gpt-image-1` which will always return base64-encoded images.\n   public let responseFormat: ResponseFormat?\n   \n   \u002F\u002F\u002F The size of the generated images.\n   \u002F\u002F\u002F - For gpt-image-1, one of `1024x1024`, `1536x1024` (landscape), `1024x1536` (portrait), or `auto` (default value)\n   \u002F\u002F\u002F - For dall-e-3, one of `1024x1024`, `1792x1024`, or `1024x1792`\n   \u002F\u002F\u002F - For dall-e-2, one of `256x256`, `512x512`, or `1024x1024`\n   public let size: String?\n   \n   \u002F\u002F\u002F The style of the generated images.\n   \u002F\u002F\u002F This 
parameter is only supported for `dall-e-3`.\n   \u002F\u002F\u002F Must be one of `vivid` or `natural`.\n   \u002F\u002F\u002F Vivid causes the model to lean towards generating hyper-real and dramatic images.\n   \u002F\u002F\u002F Natural causes the model to produce more natural, less hyper-real looking images.\n   public let style: Style?\n   \n   \u002F\u002F\u002F A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse.\n   public let user: String?\n}\n```\n\n- Parameters Edit\n\n```swift\n\u002F\u002F\u002F Creates an edited or extended image given one or more source images and a prompt.\n\u002F\u002F\u002F This endpoint only supports `gpt-image-1` and `dall-e-2`.\npublic struct CreateImageEditParameters: Encodable {\n   \n   \u002F\u002F\u002F The image(s) to edit.\n   \u002F\u002F\u002F For `gpt-image-1`, each image should be a `png`, `webp`, or `jpg` file less than 25MB.\n   \u002F\u002F\u002F For `dall-e-2`, you can only provide one image, and it should be a square `png` file less than 4MB.\n   let image: [Data]\n   \n   \u002F\u002F\u002F A text description of the desired image(s).\n   \u002F\u002F\u002F The maximum length is 1000 characters for `dall-e-2`, and 32000 characters for `gpt-image-1`.\n   let prompt: String\n   \n   \u002F\u002F\u002F An additional image whose fully transparent areas indicate where `image` should be edited.\n   \u002F\u002F\u002F If there are multiple images provided, the mask will be applied on the first image.\n   \u002F\u002F\u002F Must be a valid PNG file, less than 4MB, and have the same dimensions as `image`.\n   let mask: Data?\n   \n   \u002F\u002F\u002F The model to use for image generation. Only `dall-e-2` and `gpt-image-1` are supported.\n   \u002F\u002F\u002F Defaults to `dall-e-2` unless a parameter specific to `gpt-image-1` is used.\n   let model: String?\n   \n   \u002F\u002F\u002F The number of images to generate. 
Must be between 1 and 10.\n   \u002F\u002F\u002F Defaults to 1.\n   let n: Int?\n   \n   \u002F\u002F\u002F The quality of the image that will be generated.\n   \u002F\u002F\u002F `high`, `medium` and `low` are only supported for `gpt-image-1`.\n   \u002F\u002F\u002F `dall-e-2` only supports `standard` quality.\n   \u002F\u002F\u002F Defaults to `auto`.\n   let quality: String?\n   \n   \u002F\u002F\u002F The format in which the generated images are returned.\n   \u002F\u002F\u002F Must be one of `url` or `b64_json`.\n   \u002F\u002F\u002F URLs are only valid for 60 minutes after the image has been generated.\n   \u002F\u002F\u002F This parameter is only supported for `dall-e-2`, as `gpt-image-1` will always return base64-encoded images.\n   let responseFormat: String?\n   \n   \u002F\u002F\u002F The size of the generated images.\n   \u002F\u002F\u002F Must be one of `1024x1024`, `1536x1024` (landscape), `1024x1536` (portrait), or `auto` (default value) for `gpt-image-1`,\n   \u002F\u002F\u002F and one of `256x256`, `512x512`, or `1024x1024` for `dall-e-2`.\n   let size: String?\n   \n   \u002F\u002F\u002F A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse.\n   let user: String?\n}\n```\n\n- Parameters Variations\n\n```swift\n\u002F\u002F\u002F Creates a variation of a given image.\n\u002F\u002F\u002F This endpoint only supports `dall-e-2`.\npublic struct CreateImageVariationParameters: Encodable {\n   \n   \u002F\u002F\u002F The image to use as the basis for the variation(s).\n   \u002F\u002F\u002F Must be a valid PNG file, less than 4MB, and square.\n   let image: Data\n   \n   \u002F\u002F\u002F The model to use for image generation. Only `dall-e-2` is supported at this time.\n   \u002F\u002F\u002F Defaults to `dall-e-2`.\n   let model: String?\n   \n   \u002F\u002F\u002F The number of images to generate. 
Must be between 1 and 10.\n   \u002F\u002F\u002F Defaults to 1.\n   let n: Int?\n   \n   \u002F\u002F\u002F The format in which the generated images are returned.\n   \u002F\u002F\u002F Must be one of `url` or `b64_json`.\n   \u002F\u002F\u002F URLs are only valid for 60 minutes after the image has been generated.\n   \u002F\u002F\u002F Defaults to `url`.\n   let responseFormat: String?\n   \n   \u002F\u002F\u002F The size of the generated images.\n   \u002F\u002F\u002F Must be one of `256x256`, `512x512`, or `1024x1024`.\n   \u002F\u002F\u002F Defaults to `1024x1024`.\n   let size: String?\n   \n   \u002F\u002F\u002F A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse.\n   let user: String?\n}\n```\n\n- Request example\n\n```swift\nimport SwiftOpenAI\n\nlet service = OpenAIServiceFactory.service(apiKey: \"\u003CYOUR_KEY>\")\n\n\u002F\u002F ❶ Describe the image you want\nlet prompt = \"A watercolor dragon-unicorn hybrid flying above snowy mountains\"\n\n\u002F\u002F ❷ Build parameters with the brand-new types (commit 880a15c)\nlet params = CreateImageParameters(\n    prompt: prompt,\n    model:  .gptImage1,      \u002F\u002F .dallE3 \u002F .dallE2 also valid\n    n:      1,               \u002F\u002F 1-10  (only 1 for DALL-E 3)\n    quality: .high,          \u002F\u002F .hd \u002F .standard for DALL-E 3\n    size:   \"1024x1024\"      \u002F\u002F use \"1792x1024\" or \"1024x1792\" for wide \u002F tall\n)\n\ndo {\n    \u002F\u002F ❸ Fire the request – returns a `CreateImageResponse`\n    let result = try await service.createImages(parameters: params)\n    let url    = result.data?.first?.url          \u002F\u002F or `b64Json` for base-64\n    print(\"Image URL:\", url ?? 
\"none\")\n} catch {\n    print(\"Generation failed:\", error)\n}\n```\n\nFor a sample app example, go to the `Examples\u002FSwiftOpenAIExample` project in this repo.\n\n⚠️ This library also keeps compatibility with the previous image generation API.\n\nFor handling image sizes, we utilize the `Dalle` model. An enum with associated values has been defined to represent its size constraints accurately.\n\n [DALL·E](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fmodels\u002Fdall-e)\n \n DALL·E is an AI system that can create realistic images and art from a description in natural language. DALL·E 3 currently supports the ability, given a prompt, to create a new image with a specific size. DALL·E 2 also supports the ability to edit an existing image or create variations of a user-provided image.\n \n DALL·E 3 is available through our Images API along with DALL·E 2. You can try DALL·E 3 through ChatGPT Plus.\n \n \n | MODEL     | DESCRIPTION                                                  |\n |-----------|--------------------------------------------------------------|\n | dall-e-3  | DALL·E 3 New                                                 |\n |           | The latest DALL·E model released in Nov 2023. Learn more.    |\n | dall-e-2  | The previous DALL·E model released in Nov 2022.              |\n |           | The 2nd iteration of DALL·E with more realistic, accurate,   |\n |           | and 4x greater resolution images than the original model.    
|\n\n```swift\npublic enum Dalle {\n   \n   case dalle2(Dalle2ImageSize)\n   case dalle3(Dalle3ImageSize)\n   \n   public enum Dalle2ImageSize: String {\n      case small = \"256x256\"\n      case medium = \"512x512\"\n      case large = \"1024x1024\"\n   }\n   \n   public enum Dalle3ImageSize: String {\n      case largeSquare = \"1024x1024\"\n      case landscape = \"1792x1024\"\n      case portrait = \"1024x1792\"\n   }\n   \n   var model: String {\n      switch self {\n      case .dalle2: return Model.dalle2.rawValue\n      case .dalle3: return Model.dalle3.rawValue\n      }\n   }\n   \n   var size: String {\n      switch self {\n      case .dalle2(let dalle2ImageSize):\n         return dalle2ImageSize.rawValue\n      case .dalle3(let dalle3ImageSize):\n         return dalle3ImageSize.rawValue\n      }\n   }\n}\n```\n\n#### Image create\nParameters\n```swift\npublic struct ImageCreateParameters: Encodable {\n   \n   \u002F\u002F\u002F A text description of the desired image(s). The maximum length is 1000 characters for dall-e-2 and 4000 characters for dall-e-3.\n   let prompt: String\n   \u002F\u002F\u002F The model to use for image generation. Defaults to dall-e-2\n   let model: String?\n   \u002F\u002F\u002F The number of images to generate. Must be between 1 and 10. For dall-e-3, only n=1 is supported.\n   let n: Int?\n   \u002F\u002F\u002F The quality of the image that will be generated. hd creates images with finer details and greater consistency across the image. This param is only supported for dall-e-3. Defaults to standard\n   let quality: String?\n   \u002F\u002F\u002F The format in which the generated images are returned. Must be one of url or b64_json. Defaults to url\n   let responseFormat: String?\n   \u002F\u002F\u002F The size of the generated images. Must be one of 256x256, 512x512, or 1024x1024 for dall-e-2. Must be one of 1024x1024, 1792x1024, or 1024x1792 for dall-e-3 models. 
Defaults to 1024x1024\n   let size: String?\n   \u002F\u002F\u002F The style of the generated images. Must be one of vivid or natural. Vivid causes the model to lean towards generating hyper-real and dramatic images. Natural causes the model to produce more natural, less hyper-real looking images. This param is only supported for dall-e-3. Defaults to vivid\n   let style: String?\n   \u002F\u002F\u002F A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. [Learn more](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Fsafety-best-practices)\n   let user: String?\n   \n   public init(\n      prompt: String,\n      model: Dalle,\n      numberOfImages: Int = 1,\n      quality: String? = nil,\n      responseFormat: ImageResponseFormat? = nil,\n      style: String? = nil,\n      user: String? = nil)\n   {\n      self.prompt = prompt\n      self.model = model.model\n      self.n = numberOfImages\n      self.quality = quality\n      self.responseFormat = responseFormat?.rawValue\n      self.size = model.size\n      self.style = style\n      self.user = user\n   }\n}\n```\n#### Image edit\nParameters\n```swift\n\u002F\u002F\u002F [Creates an edited or extended image given an original image and a prompt.](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fimages\u002FcreateEdit)\npublic struct ImageEditParameters: Encodable {\n   \n   \u002F\u002F\u002F The image to edit. Must be a valid PNG file, less than 4MB, and square. If mask is not provided, image must have transparency, which will be used as the mask.\n   let image: Data\n   \u002F\u002F\u002F A text description of the desired image(s). The maximum length is 1000 characters.\n   let prompt: String\n   \u002F\u002F\u002F An additional image whose fully transparent areas (e.g. where alpha is zero) indicate where image should be edited. 
Must be a valid PNG file, less than 4MB, and have the same dimensions as image.\n   let mask: Data?\n   \u002F\u002F\u002F The model to use for image generation. Only dall-e-2 is supported at this time. Defaults to dall-e-2\n   let model: String?\n   \u002F\u002F\u002F The number of images to generate. Must be between 1 and 10. Defaults to 1\n   let n: Int?\n   \u002F\u002F\u002F The size of the generated images. Must be one of 256x256, 512x512, or 1024x1024. Defaults to 1024x1024\n   let size: String?\n   \u002F\u002F\u002F The format in which the generated images are returned. Must be one of url or b64_json. Defaults to url\n   let responseFormat: String?\n   \u002F\u002F\u002F A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. [Learn more](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Fsafety-best-practices)\n   let user: String?\n   \n   public init(\n      image: UIImage,\n      model: Dalle? = nil,\n      mask: UIImage? = nil,\n      prompt: String,\n      numberOfImages: Int? = nil,\n      responseFormat: ImageResponseFormat? = nil,\n      user: String? 
= nil)\n   {\n      if (image.pngData() == nil) {\n         assertionFailure(\"Failed to get PNG data from image\")\n      }\n      if let mask, mask.pngData() == nil {\n         assertionFailure(\"Failed to get PNG data from mask\")\n      }\n      if let model, model.model != Model.dalle2.rawValue {\n         assertionFailure(\"Only dall-e-2 is supported at this time [https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fimages\u002FcreateEdit]\")\n      }\n      self.image = image.pngData()!\n      self.model = model?.model\n      self.mask = mask?.pngData()\n      self.prompt = prompt\n      self.n = numberOfImages\n      self.size = model?.size\n      self.responseFormat = responseFormat?.rawValue\n      self.user = user\n   }\n}\n```\n#### Image variation\nParameters\n```swift\n\u002F\u002F\u002F [Creates a variation of a given image.](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fimages\u002FcreateVariation)\npublic struct ImageVariationParameters: Encodable {\n   \n   \u002F\u002F\u002F The image to use as the basis for the variation(s). Must be a valid PNG file, less than 4MB, and square.\n   let image: Data\n   \u002F\u002F\u002F The model to use for image generation. Only dall-e-2 is supported at this time. Defaults to dall-e-2\n   let model: String?\n   \u002F\u002F\u002F The number of images to generate. Must be between 1 and 10. Defaults to 1\n   let n: Int?\n   \u002F\u002F\u002F The format in which the generated images are returned. Must be one of url or b64_json. Defaults to url\n   let responseFormat: String?\n   \u002F\u002F\u002F The size of the generated images. Must be one of 256x256, 512x512, or 1024x1024. Defaults to 1024x1024\n   let size: String?\n   \u002F\u002F\u002F A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. 
[Learn more](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Fsafety-best-practices)\n   let user: String?\n   \n   public init(\n      image: UIImage,\n      model: Dalle? = nil,\n      numberOfImages: Int? = nil,\n      responseFormat: ImageResponseFormat? = nil,\n      user: String? = nil)\n   {\n      if let model, model.model != Model.dalle2.rawValue {\n         assertionFailure(\"Only dall-e-2 is supported at this time [https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fimages\u002FcreateVariation]\")\n      }\n      self.image = image.pngData()!\n      self.n = numberOfImages\n      self.model = model?.model\n      self.size = model?.size\n      self.responseFormat = responseFormat?.rawValue\n      self.user = user\n   }\n}\n```\nResponse\n```swift\n\u002F\u002F\u002F [Represents the url or the content of an image generated by the OpenAI API.](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fimages\u002Fobject)\npublic struct ImageObject: Decodable {\n   \u002F\u002F\u002F The URL of the generated image, if response_format is url (default).\n   public let url: URL?\n   \u002F\u002F\u002F The base64-encoded JSON of the generated image, if response_format is b64_json.\n   public let b64Json: String?\n   \u002F\u002F\u002F The prompt that was used to generate the image, if there was any revision to the prompt.\n   public let revisedPrompt: String?\n}\n```\n\nUsage\n```swift\n\u002F\u002F\u002F Create image\nlet prompt = \"A mix of a dragon and a unicorn\"\nlet createParameters = ImageCreateParameters(prompt: prompt, model: .dalle3(.largeSquare))\nlet imageURLS = try await service.legacyCreateImages(parameters: createParameters).data.map(\\.url)\n```\n```swift\n\u002F\u002F\u002F Edit image\nlet data = try Data(contentsOf: imageFileURL) \u002F\u002F the data from an image file; `imageFileURL` is the URL of your image.\nlet image = UIImage(data: data)\nlet prompt = \"Add a background filled with pink balloons.\"\nlet editParameters = ImageEditParameters(image: 
image, prompt: prompt, numberOfImages: 4)\nlet imageURLS = try await service.legacyEditImage(parameters: editParameters).data.map(\\.url)\n```\n```swift\n\u002F\u002F\u002F Image variations\nlet data = try Data(contentsOf: imageFileURL) \u002F\u002F the data from an image file; `imageFileURL` is the URL of your image.\nlet image = UIImage(data: data)\nlet variationParameters = ImageVariationParameters(image: image, numberOfImages: 4)\nlet imageURLS = try await service.legacyCreateImageVariations(parameters: variationParameters).data.map(\\.url)\n```\n\n### Models\nResponse\n```swift\n\n\u002F\u002F\u002F Describes an OpenAI [model](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fmodels\u002Fobject) offering that can be used with the API.\npublic struct ModelObject: Decodable {\n   \n   \u002F\u002F\u002F The model identifier, which can be referenced in the API endpoints.\n   public let id: String\n   \u002F\u002F\u002F The Unix timestamp (in seconds) when the model was created.\n   public let created: Int\n   \u002F\u002F\u002F The object type, which is always \"model\".\n   public let object: String\n   \u002F\u002F\u002F The organization that owns the model.\n   public let ownedBy: String\n   \u002F\u002F\u002F An array representing the current permissions of a model. Each element in the array corresponds to a specific permission setting. 
If there are no permissions or if the data is unavailable, the array may be nil.\n   public let permission: [Permission]?\n   \n   public struct Permission: Decodable {\n      public let id: String?\n      public let object: String?\n      public let created: Int?\n      public let allowCreateEngine: Bool?\n      public let allowSampling: Bool?\n      public let allowLogprobs: Bool?\n      public let allowSearchIndices: Bool?\n      public let allowView: Bool?\n      public let allowFineTuning: Bool?\n      public let organization: String?\n      public let group: String?\n      public let isBlocking: Bool?\n   }\n   \n   \u002F\u002F\u002F Represents the response from the [delete](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fmodels\u002Fdelete) fine-tuning API\n   public struct DeletionStatus: Decodable {\n      \n      public let id: String\n      public let object: String\n      public let deleted: Bool\n   }\n}\n```\nUsage\n```swift\n\u002F\u002F\u002F List models\nlet models = try await service.listModels().data\n```\n```swift\n\u002F\u002F\u002F Retrieve model\nlet modelID = \"gpt-3.5-turbo-instruct\"\nlet retrievedModel = try await service.retrieveModelWith(id: modelID)\n```\n```swift\n\u002F\u002F\u002F Delete fine tuned model\nlet modelID = \"fine-tune-model-id\"\nlet deletionStatus = try await service.deleteFineTuneModelWith(id: modelID)\n```\n### Moderations\nParameters\n```swift\n\u002F\u002F\u002F [Classifies if text violates OpenAI's Content Policy.](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fmoderations\u002Fcreate)\npublic struct ModerationParameter\u003CInput: Encodable>: Encodable {\n   \n   \u002F\u002F\u002F The input text to classify, string or array.\n   let input: Input\n   \u002F\u002F\u002F Two content moderations models are available: text-moderation-stable and text-moderation-latest.\n   \u002F\u002F\u002F The default is text-moderation-latest which will be automatically upgraded 
over time. This ensures you are always using our most accurate model. If you use text-moderation-stable, we will provide advance notice before updating the model. Accuracy of text-moderation-stable may be slightly lower than for text-moderation-latest.\n   let model: String?\n   \n   enum Model: String {\n      case stable = \"text-moderation-stable\"\n      case latest = \"text-moderation-latest\"\n   }\n   \n   init(\n      input: Input,\n      model: Model? = nil)\n   {\n      self.input = input\n      self.model = model?.rawValue\n   }\n}\n```\nResponse\n```swift\n\u002F\u002F\u002F The [moderation object](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fmoderations\u002Fobject). Represents a policy compliance report generated by OpenAI's content moderation model against a given input.\npublic struct ModerationObject: Decodable {\n   \n   \u002F\u002F\u002F The unique identifier for the moderation request.\n   public let id: String\n   \u002F\u002F\u002F The model used to generate the moderation results.\n   public let model: String\n   \u002F\u002F\u002F A list of moderation objects.\n   public let results: [Moderation]\n   \n   public struct Moderation: Decodable {\n      \n      \u002F\u002F\u002F Whether the content violates OpenAI's usage policies.\n      public let flagged: Bool\n      \u002F\u002F\u002F A list of the categories, and whether they are flagged or not.\n      public let categories: Category\u003CBool>\n      \u002F\u002F\u002F A list of the categories along with their scores as predicted by the model.\n      public let categoryScores: Category\u003CDouble>\n      \n      public struct Category\u003CT: Decodable>: Decodable {\n         \n         \u002F\u002F\u002F Content that expresses, incites, or promotes hate based on race, gender, ethnicity, religion, nationality, sexual orientation, disability status, or caste. 
Hateful content aimed at non-protected groups (e.g., chess players) is harassment.\n         public let hate: T\n         \u002F\u002F\u002F Hateful content that also includes violence or serious harm towards the targeted group based on race, gender, ethnicity, religion, nationality, sexual orientation, disability status, or caste.\n         public let hateThreatening: T\n         \u002F\u002F\u002F Content that expresses, incites, or promotes harassing language towards any target.\n         public let harassment: T\n         \u002F\u002F\u002F Harassment content that also includes violence or serious harm towards any target.\n         public let harassmentThreatening: T\n         \u002F\u002F\u002F Content that promotes, encourages, or depicts acts of self-harm, such as suicide, cutting, and eating disorders.\n         public let selfHarm: T\n         \u002F\u002F\u002F Content where the speaker expresses that they are engaging or intend to engage in acts of self-harm, such as suicide, cutting, and eating disorders.\n         public let selfHarmIntent: T\n         \u002F\u002F\u002F Content that encourages performing acts of self-harm, such as suicide, cutting, and eating disorders, or that gives instructions or advice on how to commit such acts.\n         public let selfHarmInstructions: T\n         \u002F\u002F\u002F Content meant to arouse sexual excitement, such as the description of sexual activity, or that promotes sexual services (excluding sex education and wellness).\n         public let sexual: T\n         \u002F\u002F\u002F Sexual content that includes an individual who is under 18 years old.\n         public let sexualMinors: T\n         \u002F\u002F\u002F Content that depicts death, violence, or physical injury.\n         public let violence: T\n         \u002F\u002F\u002F Content that depicts death, violence, or physical injury in graphic detail.\n         public let violenceGraphic: T\n      }\n   }\n}\n```\nUsage\n```swift\n\u002F\u002F\u002F 
Single prompt\nlet prompt = \"I am going to kill him\"\nlet parameters = ModerationParameter(input: prompt)\nlet isFlagged = try await service.createModerationFromText(parameters: parameters)\n```\n```swift\n\u002F\u002F\u002F Multiple prompts\nlet prompts = [\"I am going to kill him\", \"I am going to die\"]\nlet parameters = ModerationParameter(input: prompts)\nlet isFlagged = try await service.createModerationFromTexts(parameters: parameters)\n```\n\n### **BETA**\n### Assistants\nParameters\n```swift\n\u002F\u002F\u002F Creates an [assistant](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fassistants\u002FcreateAssistant) with a model and instructions.\n\u002F\u002F\u002F Modifies an [assistant](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fassistants\u002FmodifyAssistant).\npublic struct AssistantParameters: Encodable {\n   \n   \u002F\u002F\u002F ID of the model to use. You can use the [List models](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fmodels\u002Flist) API to see all of your available models, or see our [Model overview](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fmodels\u002Foverview) for descriptions of them.\n   public var model: String?\n   \u002F\u002F\u002F The name of the assistant. The maximum length is 256 characters.\n   public var name: String?\n   \u002F\u002F\u002F The description of the assistant. The maximum length is 512 characters.\n   public var description: String?\n   \u002F\u002F\u002F The system instructions that the assistant uses. The maximum length is 32768 characters.\n   public var instructions: String?\n   \u002F\u002F\u002F A list of tools enabled on the assistant. There can be a maximum of 128 tools per assistant. Tools can be of types code_interpreter, retrieval, or function. Defaults to []\n   public var tools: [AssistantObject.Tool] = []\n   \u002F\u002F\u002F Set of 16 key-value pairs that can be attached to an object. 
This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maxium of 512 characters long.\n   public var metadata: [String: String]?\n   \u002F\u002F\u002F What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.\n   \u002F\u002F\u002F Defaults to 1\n   public var temperature: Double?\n   \u002F\u002F\u002F An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n  \u002F\u002F\u002F We generally recommend altering this or temperature but not both.\n   \u002F\u002F\u002F Defaults to 1\n   public var topP: Double?\n   \u002F\u002F\u002F Specifies the format that the model must output. Compatible with GPT-4 Turbo and all GPT-3.5 Turbo models since gpt-3.5-turbo-1106.\n   \u002F\u002F\u002F Setting to { \"type\": \"json_object\" } enables JSON mode, which guarantees the message the model generates is valid JSON.\n   \u002F\u002F\u002F Important: when using JSON mode, you must also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly \"stuck\" request. Also note that the message content may be partially cut off if finish_reason=\"length\", which indicates the generation exceeded max_tokens or the conversation exceeded the max context length.\n   \u002F\u002F\u002F Defaults to `auto`\n   public var responseFormat: ResponseFormat?\n   \n   public enum Action {\n      case create(model: String) \u002F\u002F model is required on creation of assistant.\n      case modify(model: String?) 
\u002F\u002F model is optional on modification of assistant.\n      \n      var model: String? {\n         switch self {\n         case .create(let model): return model\n         case .modify(let model): return model\n         }\n      }\n   }\n}\n```\nResponse\n```swift\n\u002F\u002F\u002F Represents an [assistant](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fassistants) that can call the model and use tools.\npublic struct AssistantObject: Decodable {\n   \n   \u002F\u002F\u002F The identifier, which can be referenced in API endpoints.\n   public let id: String\n   \u002F\u002F\u002F The object type, which is always \"assistant\".\n   public let object: String\n   \u002F\u002F\u002F The Unix timestamp (in seconds) for when the assistant was created.\n   public let createdAt: Int\n   \u002F\u002F\u002F The name of the assistant. The maximum length is 256 characters.\n   public let name: String?\n   \u002F\u002F\u002F The description of the assistant. The maximum length is 512 characters.\n   public let description: String?\n   \u002F\u002F\u002F ID of the model to use. You can use the [List models](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fmodels\u002Flist) API to see all of your available models, or see our [Model overview](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fmodels\u002Foverview) for descriptions of them.\n   public let model: String\n   \u002F\u002F\u002F The system instructions that the assistant uses. The maximum length is 32768 characters.\n   public let instructions: String?\n   \u002F\u002F\u002F A list of tools enabled on the assistant. There can be a maximum of 128 tools per assistant. Tools can be of types code_interpreter, retrieval, or function.\n   public let tools: [Tool]\n   \u002F\u002F\u002F A list of [file](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Ffiles) IDs attached to this assistant. 
There can be a maximum of 20 files attached to the assistant. Files are ordered by their creation date in ascending order.\n   \u002F\u002F\u002F A set of resources that are used by the assistant's tools. The resources are specific to the type of tool. For example, the code_interpreter tool requires a list of file IDs, while the file_search tool requires a list of vector store IDs.\n   public let toolResources: ToolResources?\n   \u002F\u002F\u002F Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maxium of 512 characters long.\n   public let metadata: [String: String]?\n   \u002F\u002F\u002F What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.\n   \u002F\u002F\u002F Defaults to 1\n   public var temperature: Double?\n   \u002F\u002F\u002F An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n  \u002F\u002F\u002F We generally recommend altering this or temperature but not both.\n   \u002F\u002F\u002F Defaults to 1\n   public var topP: Double?\n   \u002F\u002F\u002F Specifies the format that the model must output. Compatible with GPT-4 Turbo and all GPT-3.5 Turbo models since gpt-3.5-turbo-1106.\n   \u002F\u002F\u002F Setting to { \"type\": \"json_object\" } enables JSON mode, which guarantees the message the model generates is valid JSON.\n   \u002F\u002F\u002F Important: when using JSON mode, you must also instruct the model to produce JSON yourself via a system or user message. 
Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly \"stuck\" request. Also note that the message content may be partially cut off if finish_reason=\"length\", which indicates the generation exceeded max_tokens or the conversation exceeded the max context length.\n   \u002F\u002F\u002F Defaults to `auto`\n   public var responseFormat: ResponseFormat?\n\n   public struct Tool: Codable {\n      \n      \u002F\u002F\u002F The type of tool being defined.\n      public let type: String\n      public let function: ChatCompletionParameters.ChatFunction?\n      \n      public enum ToolType: String, CaseIterable {\n         case codeInterpreter = \"code_interpreter\"\n         case fileSearch = \"file_search\"\n         case function\n      }\n      \n      \u002F\u002F\u002F Helper.\n      public var displayToolType: ToolType? { .init(rawValue: type) }\n      \n      public init(\n         type: ToolType,\n         function: ChatCompletionParameters.ChatFunction? 
= nil)\n      {\n         self.type = type.rawValue\n         self.function = function\n      }\n   }\n   \n   public struct DeletionStatus: Decodable {\n      public let id: String\n      public let object: String\n      public let deleted: Bool\n   }\n}\n```\n\nUsage\n\nCreate Assistant\n```swift\nlet parameters = AssistantParameters(action: .create(model: Model.gpt41106Preview.rawValue), name: \"Math tutor\")\nlet assistant = try await service.createAssistant(parameters: parameters)\n```\nRetrieve Assistant\n```swift\nlet assistantID = \"asst_abc123\"\nlet assistant = try await service.retrieveAssistant(id: assistantID)\n```\nModify Assistant\n```swift\nlet assistantID = \"asst_abc123\"\nlet parameters = AssistantParameters(action: .modify, name: \"Math tutor for kids\")\nlet assistant = try await service.modifyAssistant(id: assistantID, parameters: parameters)\n```\nDelete Assistant\n```swift\nlet assistantID = \"asst_abc123\"\nlet deletionStatus = try await service.deleteAssistant(id: assistantID)\n```\nList Assistants\n```swift\nlet assistants = try await service.listAssistants()\n```\n\n### Threads\nParameters\n```swift\n\u002F\u002F\u002F Create a [Thread](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fthreads\u002FcreateThread)\npublic struct CreateThreadParameters: Encodable {\n   \n   \u002F\u002F\u002F A list of [messages](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fmessages) to start the thread with.\n   public var messages: [MessageObject]?\n      \u002F\u002F\u002F A set of resources that are used by the assistant's tools. The resources are specific to the type of tool. For example, the code_interpreter tool requires a list of file IDs, while the file_search tool requires a list of vector store IDs.\n   public var toolResources: ToolResources?\n   \u002F\u002F\u002F Set of 16 key-value pairs that can be attached to an object. 
This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.\n   public var metadata: [String: String]?\n}\n```\nResponse\n```swift\n\u002F\u002F\u002F A [thread object](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fthreads) represents a thread that contains [messages](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fmessages).\npublic struct ThreadObject: Decodable {\n   \n   \u002F\u002F\u002F The identifier, which can be referenced in API endpoints.\n   public let id: String\n   \u002F\u002F\u002F The object type, which is always thread.\n   public let object: String\n   \u002F\u002F\u002F The Unix timestamp (in seconds) for when the thread was created.\n   public let createdAt: Int\n   \u002F\u002F\u002F A set of resources that are used by the assistant's tools. The resources are specific to the type of tool. For example, the code_interpreter tool requires a list of file IDs, while the file_search tool requires a list of vector store IDs.\n   public var toolResources: ToolResources?\n   \u002F\u002F\u002F Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. 
Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.\n   public let metadata: [String: String]\n   \n}\n```\n\nUsage\n\nCreate thread.\n```swift\nlet parameters = CreateThreadParameters()\nlet thread = try await service.createThread(parameters: parameters)\n```\nRetrieve thread.\n```swift\nlet threadID = \"thread_abc123\"\nlet thread = try await service.retrieveThread(id: threadID)\n```\nModify thread.\n```swift\nlet threadID = \"thread_abc123\"\nlet parameters = CreateThreadParameters(metadata: [\"modified\": \"true\", \"user\": \"abc123\"])\nlet thread = try await service.modifyThread(id: threadID, parameters: parameters)\n```\nDelete thread.\n```swift\nlet threadID = \"thread_abc123\"\nlet deletionStatus = try await service.deleteThread(id: threadID)\n```\n\n### Messages\nParameters\n[Create a Message](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fmessages\u002FcreateMessage)\n```swift\npublic struct MessageParameter: Encodable {\n   \n   \u002F\u002F\u002F The role of the entity that is creating the message. Allowed values include:\n   \u002F\u002F\u002F user: Indicates the message is sent by an actual user and should be used in most cases to represent user-generated messages.\n   \u002F\u002F\u002F assistant: Indicates the message is generated by the assistant. Use this value to insert messages from the assistant into the conversation.\n   let role: String\n   \u002F\u002F\u002F The content of the message, which can be a string or an array of content parts (text, image URL, image file).\n   let content: Content\n   \u002F\u002F\u002F A list of files attached to the message, and the tools they should be added to.\n   let attachments: [MessageAttachment]?\n   \u002F\u002F\u002F Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. 
Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.\n   let metadata: [String: String]?\n}\n```\n[Modify a Message](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fmessages\u002FmodifyMessage)\n```swift\npublic struct ModifyMessageParameters: Encodable {\n   \n   \u002F\u002F\u002F Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.\n   public var metadata: [String: String]\n}\n```\nResponse\n```swift\n\u002F\u002F\u002F Represents a [message](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fmessages) within a [thread](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fthreads).\npublic struct MessageObject: Codable {\n   \n   \u002F\u002F\u002F The identifier, which can be referenced in API endpoints.\n   public let id: String\n   \u002F\u002F\u002F The object type, which is always thread.message.\n   public let object: String\n   \u002F\u002F\u002F The Unix timestamp (in seconds) for when the message was created.\n   public let createdAt: Int\n   \u002F\u002F\u002F The [thread](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fthreads) ID that this message belongs to.\n   public let threadID: String\n   \u002F\u002F\u002F The status of the message, which can be either in_progress, incomplete, or completed.\n   public let status: String\n   \u002F\u002F\u002F On an incomplete message, details about why the message is incomplete.\n   public let incompleteDetails: IncompleteDetails?\n   \u002F\u002F\u002F The Unix timestamp (in seconds) for when the message was completed.\n   public let completedAt: Int?\n   \u002F\u002F\u002F The entity that produced the message. 
One of user or assistant.\n   public let role: String\n   \u002F\u002F\u002F The content of the message in an array of text and\u002For images.\n   public let content: [MessageContent]\n   \u002F\u002F\u002F If applicable, the ID of the [assistant](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fassistants) that authored this message.\n   public let assistantID: String?\n   \u002F\u002F\u002F If applicable, the ID of the [run](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fruns) associated with the authoring of this message.\n   public let runID: String?\n   \u002F\u002F\u002F A list of files attached to the message, and the tools they were added to.\n   public let attachments: [MessageAttachment]?\n   \u002F\u002F\u002F Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.\n   public let metadata: [String: String]?\n   \n   enum Role: String {\n      case user\n      case assistant\n   }\n}\n\n\u002F\u002F MARK: MessageContent\n\npublic enum MessageContent: Codable {\n   \n   case imageFile(ImageFile)\n   case text(Text)\n}\n\n\u002F\u002F MARK: Image File\n\npublic struct ImageFile: Codable {\n   \u002F\u002F\u002F Always image_file.\n   public let type: String\n   \n   \u002F\u002F\u002F References an image [File](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Ffiles) in the content of a message.\n   public let imageFile: ImageFileContent\n   \n   public struct ImageFileContent: Codable {\n      \n      \u002F\u002F\u002F The [File](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Ffiles) ID of the image in the message content.\n      public let fileID: String\n   }\n}\n\n\u002F\u002F MARK: Text\n\npublic struct Text: Codable {\n   \n   \u002F\u002F\u002F Always text.\n   public 
let type: String\n   \u002F\u002F\u002F The text content that is part of a message.\n   public let text: TextContent\n   \n   public struct TextContent: Codable {\n      \u002F\u002F\u002F The data that makes up the text.\n      public let value: String\n      \n      public let annotations: [Annotation]\n   }\n}\n\n\u002F\u002F MARK: Annotation\n\npublic enum Annotation: Codable {\n   \n   case fileCitation(FileCitation)\n   case filePath(FilePath)\n}\n\n\u002F\u002F MARK: FileCitation\n\n\u002F\u002F\u002F A citation within the message that points to a specific quote from a specific File associated with the assistant or the message. Generated when the assistant uses the \"file_search\" tool to search files.\npublic struct FileCitation: Codable {\n   \n   \u002F\u002F\u002F Always file_citation.\n   public let type: String\n   \u002F\u002F\u002F The text in the message content that needs to be replaced.\n   public let text: String\n   public let fileCitation: FileCitation\n   public let startIndex: Int\n   public let endIndex: Int\n   \n   public struct FileCitation: Codable {\n      \n      \u002F\u002F\u002F The ID of the specific File the citation is from.\n      public let fileID: String\n      \u002F\u002F\u002F The specific quote in the file.\n      public let quote: String\n\n   }\n}\n\n\u002F\u002F MARK: FilePath\n\n\u002F\u002F\u002F A URL for the file that's generated when the assistant uses the code_interpreter tool to generate a file.\npublic struct FilePath: Codable {\n   \n   \u002F\u002F\u002F Always file_path\n   public let type: String\n   \u002F\u002F\u002F The text in the message content that needs to be replaced.\n   public let text: String\n   public let filePath: FilePath\n   public let startIndex: Int\n   public let endIndex: Int\n   \n   public struct FilePath: Codable {\n      \u002F\u002F\u002F The ID of the file that was generated.\n      public let fileID: String\n   }\n}\n```\n\nUsage\n\nCreate Message.\n```swift\nlet threadID = 
\"thread_abc123\"\nlet prompt = \"Give me some ideas for a birthday party.\"\nlet parameters = MessageParameter(role: \"user\", content: .stringContent(prompt)\")\nlet message = try await service.createMessage(threadID: threadID, parameters: parameters)\n```\n\nRetrieve Message.\n```swift\nlet threadID = \"thread_abc123\"\nlet messageID = \"msg_abc123\"\nlet message = try await service.retrieveMessage(threadID: threadID, messageID: messageID)\n```\n\nModify Message.\n```swift\nlet threadID = \"thread_abc123\"\nlet messageID = \"msg_abc123\"\nlet parameters = ModifyMessageParameters(metadata: [\"modified\": \"true\", \"user\": \"abc123\"]\nlet message = try await service.modifyMessage(threadID: threadID, messageID: messageID, parameters: parameters)\n```\n\nList Messages\n```swift\nlet threadID = \"thread_abc123\"\nlet messages = try await service.listMessages(threadID: threadID, limit: nil, order: nil, after: nil, before: nil) \n```\n\n### Runs\nParameters\n\n[Create a run](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fruns\u002FcreateRun)\n```swift\npublic struct RunParameter: Encodable {\n   \n   \u002F\u002F\u002F The ID of the [assistant](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fassistants) to use to execute this run.\n    let assistantID: String\n   \u002F\u002F\u002F The ID of the [Model](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fmodels) to be used to execute this run. If a value is provided here, it will override the model associated with the assistant. If not, the model associated with the assistant will be used.\n   let model: String?\n   \u002F\u002F\u002F Override the default system message of the assistant. This is useful for modifying the behavior on a per-run basis.\n   let instructions: String?\n   \u002F\u002F\u002F Appends additional instructions at the end of the instructions for the run. 
This is useful for modifying the behavior on a per-run basis without overriding other instructions.\n   let additionalInstructions: String?\n   \u002F\u002F\u002F Adds additional messages to the thread before creating the run.\n   let additionalMessages: [MessageParameter]?\n   \u002F\u002F\u002F Override the tools the assistant can use for this run. This is useful for modifying the behavior on a per-run basis.\n   let tools: [AssistantObject.Tool]?\n   \u002F\u002F\u002F Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.\n   let metadata: [String: String]?\n   \u002F\u002F\u002F What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.\n   \u002F\u002F\u002F Optional. Defaults to 1\n   let temperature: Double?\n   \u002F\u002F\u002F If true, returns a stream of events that happen during the Run as server-sent events, terminating when the Run enters a terminal state with a data: [DONE] message.\n   var stream: Bool\n   \u002F\u002F\u002F The maximum number of prompt tokens that may be used over the course of the run. The run will make a best effort to use only the number of prompt tokens specified, across multiple turns of the run. If the run exceeds the number of prompt tokens specified, the run will end with status incomplete. See incomplete_details for more info.\n   let maxPromptTokens: Int?\n   \u002F\u002F\u002F The maximum number of completion tokens that may be used over the course of the run. The run will make a best effort to use only the number of completion tokens specified, across multiple turns of the run. If the run exceeds the number of completion tokens specified, the run will end with status incomplete. 
See incomplete_details for more info.\n   let maxCompletionTokens: Int?\n   \u002F\u002F\u002F Controls for how a thread will be truncated prior to the run. Use this to control the initial context window of the run.\n   let truncationStrategy: TruncationStrategy?\n   \u002F\u002F\u002F Controls which (if any) tool is called by the model. none means the model will not call any tools and instead generates a message. auto is the default value and means the model can pick between generating a message or calling a tool. Specifying a particular tool like {\"type\": \"file_search\"} or {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n   let toolChoice: ToolChoice?\n   \u002F\u002F\u002F Specifies the format that the model must output. Compatible with GPT-4 Turbo and all GPT-3.5 Turbo models newer than gpt-3.5-turbo-1106.\n   \u002F\u002F\u002F Setting to { \"type\": \"json_object\" } enables JSON mode, which guarantees the message the model generates is valid JSON.\n   \u002F\u002F\u002F Important: when using JSON mode, you must also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly \"stuck\" request. Also note that the message content may be partially cut off if finish_reason=\"length\", which indicates the generation exceeded max_tokens or the conversation exceeded the max context length.\n   let responseFormat: ResponseFormat?\n}\n```\n[Modify a Run](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fruns\u002FmodifyRun)\n```swift\npublic struct ModifyRunParameters: Encodable {\n   \n   \u002F\u002F\u002F Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. 
Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.\n   public var metadata: [String: String]\n   \n   public init(\n      metadata: [String : String])\n   {\n      self.metadata = metadata\n   }\n}\n```\n[Create a Thread and Run.](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fruns\u002FcreateThreadAndRun)\n```swift\npublic struct CreateThreadAndRunParameter: Encodable {\n   \n   \u002F\u002F\u002F The ID of the [assistant](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fassistants) to use to execute this run.\n   let assistantId: String\n   \u002F\u002F\u002F A thread to create.\n   let thread: CreateThreadParameters?\n   \u002F\u002F\u002F The ID of the [Model](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fmodels) to be used to execute this run. If a value is provided here, it will override the model associated with the assistant. If not, the model associated with the assistant will be used.\n   let model: String?\n   \u002F\u002F\u002F Override the default system message of the assistant. This is useful for modifying the behavior on a per-run basis.\n   let instructions: String?\n   \u002F\u002F\u002F Override the tools the assistant can use for this run. This is useful for modifying the behavior on a per-run basis.\n   let tools: [AssistantObject.Tool]?\n   \u002F\u002F\u002F Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.\n   let metadata: [String: String]?\n   \u002F\u002F\u002F What sampling temperature to use, between 0 and 2. 
Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.\n   \u002F\u002F\u002F Defaults to 1\n   let temperature: Double?\n   \u002F\u002F\u002F An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n   \u002F\u002F\u002F We generally recommend altering this or temperature but not both.\n   let topP: Double?\n   \u002F\u002F\u002F If true, returns a stream of events that happen during the Run as server-sent events, terminating when the Run enters a terminal state with a data: [DONE] message.\n   var stream: Bool = false\n   \u002F\u002F\u002F The maximum number of prompt tokens that may be used over the course of the run. The run will make a best effort to use only the number of prompt tokens specified, across multiple turns of the run. If the run exceeds the number of prompt tokens specified, the run will end with status incomplete. See incomplete_details for more info.\n   let maxPromptTokens: Int?\n   \u002F\u002F\u002F The maximum number of completion tokens that may be used over the course of the run. The run will make a best effort to use only the number of completion tokens specified, across multiple turns of the run. If the run exceeds the number of completion tokens specified, the run will end with status incomplete. See incomplete_details for more info.\n   let maxCompletionTokens: Int?\n   \u002F\u002F\u002F Controls for how a thread will be truncated prior to the run. Use this to control the initial context window of the run.\n   let truncationStrategy: TruncationStrategy?\n   \u002F\u002F\u002F Controls which (if any) tool is called by the model. none means the model will not call any tools and instead generates a message. 
auto is the default value and means the model can pick between generating a message or calling a tool. Specifying a particular tool like {\"type\": \"file_search\"} or {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n   let toolChoice: ToolChoice?\n   \u002F\u002F\u002F Specifies the format that the model must output. Compatible with GPT-4 Turbo and all GPT-3.5 Turbo models newer than gpt-3.5-turbo-1106.\n   \u002F\u002F\u002F Setting to { \"type\": \"json_object\" } enables JSON mode, which guarantees the message the model generates is valid JSON.\n   \u002F\u002F\u002F Important: when using JSON mode, you must also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly \"stuck\" request. Also note that the message content may be partially cut off if finish_reason=\"length\", which indicates the generation exceeded max_tokens or the conversation exceeded the max context length.\n   let responseFormat: ResponseFormat?\n}\n```\n[Submit tool outputs to run](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fruns\u002FsubmitToolOutputs)\n```swift\npublic struct RunToolsOutputParameter: Encodable {\n   \n   \u002F\u002F\u002F A list of tools for which the outputs are being submitted.\n   public let toolOutputs: [ToolOutput]\n   \u002F\u002F\u002F If true, returns a stream of events that happen during the Run as server-sent events, terminating when the Run enters a terminal state with a data: [DONE] message.\n   public let stream: Bool\n}\n```\n   \nResponse\n```swift\npublic struct RunObject: Decodable {\n   \n   \u002F\u002F\u002F The identifier, which can be referenced in API endpoints.\n   public let id: String\n   \u002F\u002F\u002F The object type, which is always thread.run.\n   public let object: String\n   
\u002F\u002F\u002F The Unix timestamp (in seconds) for when the run was created.\n   public let createdAt: Int?\n   \u002F\u002F\u002F The ID of the [thread](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fthreads) that was executed on as a part of this run.\n   public let threadID: String\n   \u002F\u002F\u002F The ID of the [assistant](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fassistants) used for execution of this run.\n   public let assistantID: String\n   \u002F\u002F\u002F The status of the run, which can be either queued, in_progress, requires_action, cancelling, cancelled, failed, completed, or expired.\n   public let status: String\n   \u002F\u002F\u002F Details on the action required to continue the run. Will be null if no action is required.\n   public let requiredAction: RequiredAction?\n   \u002F\u002F\u002F The last error associated with this run. Will be null if there are no errors.\n   public let lastError: LastError?\n   \u002F\u002F\u002F The Unix timestamp (in seconds) for when the run will expire.\n   public let expiresAt: Int?\n   \u002F\u002F\u002F The Unix timestamp (in seconds) for when the run was started.\n   public let startedAt: Int?\n   \u002F\u002F\u002F The Unix timestamp (in seconds) for when the run was cancelled.\n   public let cancelledAt: Int?\n   \u002F\u002F\u002F The Unix timestamp (in seconds) for when the run failed.\n   public let failedAt: Int?\n   \u002F\u002F\u002F The Unix timestamp (in seconds) for when the run was completed.\n   public let completedAt: Int?\n   \u002F\u002F\u002F Details on why the run is incomplete. 
Will be null if the run is not incomplete.\n   public let incompleteDetails: IncompleteDetails?\n   \u002F\u002F\u002F The model that the [assistant](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fassistants) used for this run.\n   public let model: String\n   \u002F\u002F\u002F The instructions that the [assistant](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fassistants) used for this run.\n   public let instructions: String?\n   \u002F\u002F\u002F The list of tools that the [assistant](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fassistants) used for this run.\n   public let tools: [AssistantObject.Tool]\n   \u002F\u002F\u002F Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.\n   public let metadata: [String: String]\n   \u002F\u002F\u002F Usage statistics related to the run. This value will be null if the run is not in a terminal state (i.e. in_progress, queued, etc.).\n   public let usage: Usage?\n   \u002F\u002F\u002F The sampling temperature used for this run. If not set, defaults to 1.\n   public let temperature: Double?\n   \u002F\u002F\u002F The nucleus sampling value used for this run. If not set, defaults to 1.\n   public let topP: Double?\n   \u002F\u002F\u002F The maximum number of prompt tokens specified to have been used over the course of the run.\n   public let maxPromptTokens: Int?\n   \u002F\u002F\u002F The maximum number of completion tokens specified to have been used over the course of the run.\n   public let maxCompletionTokens: Int?\n   \u002F\u002F\u002F Controls for how a thread will be truncated prior to the run. 
Use this to control the initial context window of the run.\n   public let truncationStrategy: TruncationStrategy?\n   \u002F\u002F\u002F Controls which (if any) tool is called by the model. none means the model will not call any tools and instead generates a message. auto is the default value and means the model can pick between generating a message or calling a tool. Specifying a particular tool like {\"type\": \"TOOL_TYPE\"} or {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n   public let toolChoice: ToolChoice?\n   \u002F\u002F\u002F Specifies the format that the model must output. Compatible with GPT-4 Turbo and all GPT-3.5 Turbo models newer than gpt-3.5-turbo-1106.\n   \u002F\u002F\u002F Setting to { \"type\": \"json_object\" } enables JSON mode, which guarantees the message the model generates is valid JSON.\n   \u002F\u002F\u002F Important: when using JSON mode, you must also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly \"stuck\" request. 
Also note that the message content may be partially cut off if finish_reason=\"length\", which indicates the generation exceeded max_tokens or the conversation exceeded the max context length.\n   public let responseFormat: ResponseFormat?\n}\n```\nUsage\n\nCreate a Run\n```swift\nlet threadID = \"thread_abc123\"\nlet assistantID = \"asst_abc123\"\nlet parameters = RunParameter(assistantID: assistantID)\nlet run = try await service.createRun(threadID: threadID, parameters: parameters)\n```\nRetrieve a Run\n```swift\nlet threadID = \"thread_abc123\"\nlet runID = \"run_abc123\"\nlet run = try await service.retrieveRun(threadID: threadID, runID: runID)\n```\nModify a Run\n```swift\nlet threadID = \"thread_abc123\"\nlet runID = \"run_abc123\"\nlet parameters = ModifyRunParameters(metadata: [\"modified\": \"true\", \"user\": \"abc123\"])\nlet run = try await service.modifyRun(threadID: threadID, runID: runID, parameters: parameters)\n```\nList runs\n```swift\nlet threadID = \"thread_abc123\"\nlet runs = try await service.listRuns(threadID: threadID, limit: nil, order: nil, after: nil, before: nil) \n```\nSubmit tool outputs to Run\n```swift\nlet threadID = \"thread_abc123\"\nlet runID = \"run_abc123\"\nlet toolCallID = \"call_abc123\"\nlet output = \"28C\"\nlet parameters = RunToolsOutputParameter(toolOutputs: [.init(toolCallId: toolCallID, output: output)])\nlet run = try await service.submitToolOutputsToRun(threadID: threadID, runID: runID, parameters: parameters)\n```\nCancel a Run\n```swift\n\u002F\u002F\u002F Cancels a run that is in_progress.\nlet threadID = \"thread_abc123\"\nlet runID = \"run_abc123\"\nlet run = try await service.cancelRun(threadID: threadID, runID: runID)\n```\nCreate thread and Run\n```swift\nlet assistantID = \"asst_abc123\"\nlet parameters = CreateThreadAndRunParameter(assistantId: assistantID)\nlet run = try await service.createThreadAndRun(parameters: parameters)\n```\n\n### Run Step Object\nRepresents a 
[step](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fruns\u002Fstep-object) in execution of a run.\nResponse\n```swift\npublic struct RunStepObject: Decodable {\n   \n   \u002F\u002F\u002F The identifier of the run step, which can be referenced in API endpoints.\n   public let id: String\n   \u002F\u002F\u002F The object type, which is always `thread.run.step`.\n   public let object: String\n   \u002F\u002F\u002F The Unix timestamp (in seconds) for when the run step was created.\n   public let createdAt: Int\n   \u002F\u002F\u002F The ID of the [assistant](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fassistants) associated with the run step.\n   public let assistantId: String\n   \u002F\u002F\u002F The ID of the [thread](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fthreads) that was run.\n   public let threadId: String\n   \u002F\u002F\u002F The ID of the [run](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fruns) that this run step is a part of.\n   public let runId: String\n   \u002F\u002F\u002F The type of run step, which can be either message_creation or tool_calls.\n   public let type: String\n   \u002F\u002F\u002F The status of the run step, which can be either in_progress, cancelled, failed, completed, or expired.\n   public let status: String\n   \u002F\u002F\u002F The details of the run step.\n   public let stepDetails: RunStepDetails\n   \u002F\u002F\u002F The last error associated with this run step. Will be null if there are no errors.\n   public let lastError: RunObject.LastError?\n   \u002F\u002F\u002F The Unix timestamp (in seconds) for when the run step expired. 
A step is considered expired if the parent run is expired.
   public let expiredAt: Int?
   /// The Unix timestamp (in seconds) for when the run step was cancelled.
   public let cancelledAt: Int?
   /// The Unix timestamp (in seconds) for when the run step failed.
   public let failedAt: Int?
   /// The Unix timestamp (in seconds) for when the run step completed.
   public let completedAt: Int?
   /// Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.
   public let metadata: [String: String]?
   /// Usage statistics related to the run step. This value will be null while the run step's status is in_progress.
   public let usage: Usage?
}
```
Usage
Retrieve a Run step
```swift
let threadID = "thread_abc123"
let runID = "run_abc123"
let stepID = "step_abc123"
let runStep = try await service.retrieveRunstep(threadID: threadID, runID: runID, stepID: stepID)
```
List run steps
```swift
let threadID = "thread_abc123"
let runID = "run_abc123"
let runSteps = try await service.listRunSteps(threadID: threadID, runID: runID, limit: nil, order: nil, after: nil, before: nil)
```

### Run Step Detail

The details of the run step.

```swift
public struct RunStepDetails: Codable {
   
   /// `message_creation` or `tool_calls`
   public let type: String
   /// Details of the message creation by the run step.
   public let messageCreation: MessageCreation?
   /// Details of the tool call.
   public let toolCalls: [ToolCall]?
}
```

### Assistants Streaming

Assistants API [streaming.](https://platform.openai.com/docs/api-reference/assistants-streaming)

Stream the result of executing a 
Run or resuming a Run after submitting tool outputs.

You can stream events from the [Create Thread and Run](https://platform.openai.com/docs/api-reference/runs/createThreadAndRun), [Create Run](https://platform.openai.com/docs/api-reference/runs/createRun), and [Submit Tool Outputs](https://platform.openai.com/docs/api-reference/runs/submitToolOutputs) endpoints by passing "stream": true. The response will be a Server-Sent Events stream.

See also the [OpenAI Python tutorial](https://platform.openai.com/docs/assistants/overview?context=with-streaming).

### Message Delta Object

[MessageDeltaObject](https://platform.openai.com/docs/api-reference/assistants-streaming/message-delta-object) represents a message delta, i.e. any changed fields on a message during streaming.

```swift
public struct MessageDeltaObject: Decodable {
   
   /// The identifier of the message, which can be referenced in API endpoints.
   public let id: String
   /// The object type, which is always thread.message.delta.
   public let object: String
   /// The delta containing the fields that have changed on the Message.
   public let delta: Delta
   
   public struct Delta: Decodable {
      
      /// The entity that produced the message. One of user or assistant.
      public let role: String
      /// The content of the message in an array of text and/or images.
      public let content: [MessageContent]
   }
}
```

### Run Step Delta Object

Represents a [run step delta](https://platform.openai.com/docs/api-reference/assistants-streaming/run-step-delta-object), i.e. 
any changed fields on a run step during streaming.

```swift
public struct RunStepDeltaObject: Decodable {
   
   /// The identifier of the run step, which can be referenced in API endpoints.
   public let id: String
   /// The object type, which is always thread.run.step.delta.
   public let object: String
   /// The delta containing the fields that have changed on the run step.
   public let delta: Delta
   
   public struct Delta: Decodable {
      
      /// The details of the run step.
      public let stepDetails: RunStepDetails
      
      private enum CodingKeys: String, CodingKey {
         case stepDetails = "step_details"
      }
   }
}
```

⚠️ To utilize `createRunAndStreamMessage`, first create an assistant and initiate a thread.

Usage
[Create Run](https://platform.openai.com/docs/api-reference/runs/createRun) with stream.

`createRunAndStreamMessage` streams [events](https://platform.openai.com/docs/api-reference/assistants-streaming/events). You can decide which ones you need for your implementation. 
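Since the streaming call requires an existing assistant and thread, here is a minimal sketch of those prerequisites. The `AssistantParameter` and `CreateThreadParameters` initializers shown are illustrative and may differ slightly between library versions, so check the Assistants and Threads sections of this README for the exact shapes.

```swift
import SwiftOpenAI

let service = OpenAIServiceFactory.service(apiKey: "your_api_key")

// 1. Create an assistant (one-time setup; reuse its ID afterwards).
let assistant = try await service.createAssistant(
   parameters: AssistantParameter(
      action: .create(model: "gpt-4o"),
      name: "Math tutor",
      instructions: "You answer math questions."))

// 2. Create a thread to hold the conversation.
let thread = try await service.createThread(parameters: CreateThreadParameters())

// Use assistant.id and thread.id with createRunAndStreamMessage.
```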
For example, this is how you can access the message delta and run step delta objects:

```swift
let assistantID = "asst_abc123"
let threadID = "thread_abc123"
let messageParameter = MessageParameter(role: .user, content: "Tell me the square root of 1235")
let message = try await service.createMessage(threadID: threadID, parameters: messageParameter)
let runParameters = RunParameter(assistantID: assistantID)
let stream = try await service.createRunAndStreamMessage(threadID: threadID, parameters: runParameters)

for try await result in stream {
   switch result {
   case .threadMessageDelta(let messageDelta):
      let content = messageDelta.delta.content.first
      switch content {
      case .imageFile, nil:
         break
      case .text(let textContent):
         print(textContent.text.value) // this will print the streamed response for a message.
      }
      
   case .threadRunStepDelta(let runStepDelta):
      if let toolCall = runStepDelta.delta.stepDetails.toolCalls?.first?.toolCall {
         switch toolCall {
         case .codeInterpreterToolCall(let toolCall):
            print(toolCall.input ?? "") // this will print the streamed response for a code interpreter tool call.
         case .fileSearchToolCall:
            print("File search tool call")
         case .functionToolCall:
            print("Function tool call")
         case nil:
            break
         }
      }
   }
}
```

You can go to the [Examples folder](https://github.com/jamesrochabrun/SwiftOpenAI/tree/main/Examples/SwiftOpenAIExample/SwiftOpenAIExample) in this package, navigate to the 'Configure Assistants' tab, create an assistant, and follow the subsequent steps.

### Stream support has also been added to:

[Create Thread and Run](https://platform.openai.com/docs/api-reference/runs/createThreadAndRun):

```swift
   /// Creates a thread and run with stream enabled.
   ///
   /// - Parameter parameters: The parameters needed to create a thread and run.
   /// - Returns: An AsyncThrowingStream of [AssistantStreamEvent](https://platform.openai.com/docs/api-reference/assistants-streaming/events) objects.
   /// - Throws: An error if the request fails.
   ///
   /// For more information, refer to [OpenAI's Run API documentation](https://platform.openai.com/docs/api-reference/runs/createThreadAndRun).
   func createThreadAndRunStream(
      parameters: CreateThreadAndRunParameter)
   async throws -> AsyncThrowingStream<AssistantStreamEvent, Error>
```

[Submit Tool Outputs](https://platform.openai.com/docs/api-reference/runs/submitToolOutputs):

```swift
   /// When a run has the status: "requires_action" and required_action.type is 
submit_tool_outputs, this endpoint can be used to submit the outputs from the tool calls once they're all completed. All outputs must be submitted in a single request. Stream enabled.
   ///
   /// - Parameter threadID: The ID of the [thread](https://platform.openai.com/docs/api-reference/threads) to which this run belongs.
   /// - Parameter runID: The ID of the run that requires the tool output submission.
   /// - Parameter parameters: The parameters needed for the run tools output.
   /// - Returns: An AsyncThrowingStream of [AssistantStreamEvent](https://platform.openai.com/docs/api-reference/assistants-streaming/events) objects.
   /// - Throws: An error if the request fails.
   ///
   /// For more information, refer to [OpenAI's Run API documentation](https://platform.openai.com/docs/api-reference/runs/submitToolOutputs).
   func submitToolOutputsToRunStream(
      threadID: String,
      runID: String,
      parameters: RunToolsOutputParameter)
   async throws -> AsyncThrowingStream<AssistantStreamEvent, Error>
```

### Vector Stores
Parameters
```swift
public struct VectorStoreParameter: Encodable {
   
   /// A list of [File](https://platform.openai.com/docs/api-reference/files) IDs that the vector store should use. Useful for tools like file_search that can access files.
   let fileIDS: [String]?
   /// The name of the vector store.
   let name: String?
   /// The expiration policy for a vector store.
   let expiresAfter: ExpirationPolicy?
   /// Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. 
Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.
   let metadata: [String: String]?
}
```
Response
```swift
public struct VectorStoreObject: Decodable {
   
   /// The identifier, which can be referenced in API endpoints.
   let id: String
   /// The object type, which is always vector_store.
   let object: String
   /// The Unix timestamp (in seconds) for when the vector store was created.
   let createdAt: Int
   /// The name of the vector store.
   let name: String
   /// The total number of bytes used by the files in the vector store.
   let usageBytes: Int
   
   let fileCounts: FileCount
   /// The status of the vector store, which can be either expired, in_progress, or completed. A status of completed indicates that the vector store is ready for use.
   let status: String
   /// The expiration policy for a vector store.
   let expiresAfter: ExpirationPolicy?
   /// The Unix timestamp (in seconds) for when the vector store will expire.
   let expiresAt: Int?
   /// The Unix timestamp (in seconds) for when the vector store was last active.
   let lastActiveAt: Int?
   /// Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.
   let metadata: [String: String]
   
   public struct FileCount: Decodable {
      
      /// The number of files that are currently being processed.
      let inProgress: Int
      /// The number of files that have been successfully processed.
      let completed: Int
      /// The number of files that have failed to process.
      let failed: Int
      /// The number of files that were cancelled.
      let cancelled: Int
      /// The total number of files.
      let total: Int
   }
}
```
Usage
[Create vector Store](https://platform.openai.com/docs/api-reference/vector-stores/create)
```swift
let name = "Support FAQ"
let parameters = VectorStoreParameter(name: name)
let vectorStore = try await service.createVectorStore(parameters: parameters)
```

[List Vector stores](https://platform.openai.com/docs/api-reference/vector-stores/list)
```swift
let vectorStores = try await service.listVectorStores(limit: nil, order: nil, after: nil, before: nil)
```

[Retrieve Vector store](https://platform.openai.com/docs/api-reference/vector-stores/retrieve)
```swift
let vectorStoreID = "vs_abc123"
let vectorStore = try await service.retrieveVectorStore(id: vectorStoreID)
```

[Modify Vector store](https://platform.openai.com/docs/api-reference/vector-stores/modify)
```swift
let vectorStoreID = "vs_abc123"
let vectorStore = try await service.modifyVectorStore(id: vectorStoreID)
```

[Delete Vector store](https://platform.openai.com/docs/api-reference/vector-stores/delete)
```swift
let vectorStoreID = "vs_abc123"
let deletionStatus = try await service.deleteVectorStore(id: vectorStoreID)
```

### Vector Store 
File\nParameters\n```swift\npublic struct VectorStoreFileParameter: Encodable {\n   \n   \u002F\u002F\u002F A [File](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Ffiles) ID that the vector store should use. Useful for tools like file_search that can access files.\n   let fileID: String\n}\n```\nResponse\n```swift\npublic struct VectorStoreFileObject: Decodable {\n   \n   \u002F\u002F\u002F The identifier, which can be referenced in API endpoints.\n   let id: String\n   \u002F\u002F\u002F The object type, which is always vector_store.file.\n   let object: String\n   \u002F\u002F\u002F The total vector store usage in bytes. Note that this may be different from the original file size.\n   let usageBytes: Int\n   \u002F\u002F\u002F The Unix timestamp (in seconds) for when the vector store file was created.\n   let createdAt: Int\n   \u002F\u002F\u002F The ID of the [vector store](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fvector-stores\u002Fobject) that the [File](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Ffiles) is attached to.\n   let vectorStoreID: String\n   \u002F\u002F\u002F The status of the vector store file, which can be either in_progress, completed, cancelled, or failed. The status completed indicates that the vector store file is ready for use.\n   let status: String\n   \u002F\u002F\u002F The last error associated with this vector store file. 
Will be null if there are no errors.
   let lastError: LastError?
}
```

Usage
[Create vector store file](https://platform.openai.com/docs/api-reference/vector-stores-files/createFile)
```swift
let vectorStoreID = "vs_abc123"
let fileID = "file-abc123"
let parameters = VectorStoreFileParameter(fileID: fileID)
let vectorStoreFile = try await service.createVectorStoreFile(vectorStoreID: vectorStoreID, parameters: parameters)
```

[List vector store files](https://platform.openai.com/docs/api-reference/vector-stores-files/listFiles)
```swift
let vectorStoreID = "vs_abc123"
let vectorStoreFiles = try await service.listVectorStoreFiles(vectorStoreID: vectorStoreID, limit: nil, order: nil, after: nil, before: nil, filter: nil)
```

[Retrieve vector store file](https://platform.openai.com/docs/api-reference/vector-stores-files/getFile)
```swift
let vectorStoreID = "vs_abc123"
let fileID = "file-abc123"
let vectorStoreFile = try await service.retrieveVectorStoreFile(vectorStoreID: vectorStoreID, fileID: fileID)
```

[Delete vector store file](https://platform.openai.com/docs/api-reference/vector-stores-files/deleteFile)
```swift
let vectorStoreID = "vs_abc123"
let fileID = "file-abc123"
let deletionStatus = try await service.deleteVectorStoreFile(vectorStoreID: vectorStoreID, fileID: fileID)
```

### Vector Store File Batch
Parameters
```swift
public struct VectorStoreFileBatchParameter: Encodable {
   
   /// A list of [File](https://platform.openai.com/docs/api-reference/files) IDs that the vector store should use. 
Useful for tools like file_search that can access files.\n   let fileIDS: [String]\n}\n```\nResponse\n```swift\npublic struct VectorStoreFileBatchObject: Decodable {\n   \n   \u002F\u002F\u002F The identifier, which can be referenced in API endpoints.\n   let id: String\n   \u002F\u002F\u002F The object type, which is always vector_store.file_batch.\n   let object: String\n   \u002F\u002F\u002F The Unix timestamp (in seconds) for when the vector store files batch was created.\n   let createdAt: Int\n   \u002F\u002F\u002F The ID of the [vector store](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fvector-stores\u002Fobject) that the [File](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Ffiles) is attached to.\n   let vectorStoreID: String\n   \u002F\u002F\u002F The status of the vector store files batch, which can be either in_progress, completed, cancelled or failed.\n   let status: String\n   \n   let fileCounts: FileCount\n}\n```\nUsage\n\n[Create vector store file batch](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fvector-stores-file-batches\u002FcreateBatch)\n```swift\nlet vectorStoreID = \"vs_abc123\"\nlet fileIDS = [\"file-abc123\", \"file-abc456\"]\nlet parameters = VectorStoreFileBatchParameter(fileIDS: fileIDS)\nlet vectorStoreFileBatch = try await service.\n   createVectorStoreFileBatch(vectorStoreID: vectorStoreID, parameters: parameters)\n```\n\n[Retrieve vector store file batch](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fvector-stores-file-batches\u002FgetBatch)\n```swift\nlet vectorStoreID = \"vs_abc123\"\nlet batchID = \"vsfb_abc123\"\nlet vectorStoreFileBatch = try await service.retrieveVectorStoreFileBatch(vectorStoreID: vectorStoreID, batchID: batchID)\n```\n\n[Cancel vector store file batch](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fvector-stores-file-batches\u002FcancelBatch)\n```swift\nlet vectorStoreID = 
"vs_abc123"
let batchID = "vsfb_abc123"
let vectorStoreFileBatch = try await service.cancelVectorStoreFileBatch(vectorStoreID: vectorStoreID, batchID: batchID)
```

[List vector store files in a batch](https://platform.openai.com/docs/api-reference/vector-stores-file-batches/listBatchFiles)
```swift
let vectorStoreID = "vs_abc123"
let batchID = "vsfb_abc123"
let vectorStoreFiles = try await service.listVectorStoreFilesInABatch(vectorStoreID: vectorStoreID, batchID: batchID)
```

⚠️ We currently support only Assistants Beta 2. If you need support for Assistants V1, you can access it in the jroch-supported-branch-for-assistants-v1 branch or in the v2.3 release. [Check OpenAI Documentation for details on migration.](https://platform.openai.com/docs/assistants/migration)

## Anthropic

Anthropic provides OpenAI compatibility; for more, visit the [documentation](https://docs.anthropic.com/en/api/openai-sdk#getting-started-with-the-openai-sdk).

To use Claude models with `SwiftOpenAI`:

```swift
let anthropicApiKey = ""
let openAIService = OpenAIServiceFactory.service(apiKey: anthropicApiKey, 
                     overrideBaseURL: "https://api.anthropic.com", 
                     overrideVersion: "v1")
```

Now you can create the completion parameters like this:

```swift
let parameters = ChatCompletionParameters(
   messages: [.init(
   role: .user,
   content: "Are you Claude?")],
   model: .custom("claude-3-7-sonnet-20250219"))
```

For a more complete Anthropic Swift Package, you can use [SwiftAnthropic](https://github.com/jamesrochabrun/SwiftAnthropic).

## Azure OpenAI

This library provides support for both chat completions and chat stream completions through Azure OpenAI. 
Currently, `DefaultOpenAIAzureService` supports chat completions, including both streamed and non-streamed options.

For more information about Azure configuration refer to the [documentation.](https://learn.microsoft.com/en-us/azure/ai-services/openai/reference)

To instantiate `DefaultOpenAIAzureService` you need to provide an `AzureOpenAIConfiguration`:

```swift
let azureConfiguration = AzureOpenAIConfiguration(
                           resourceName: "YOUR_RESOURCE_NAME", 
                           openAIAPIKey: .apiKey("YOUR_OPENAI_APIKEY"), 
                           apiVersion: "THE_API_VERSION")
                           
let service = OpenAIServiceFactory.service(azureConfiguration: azureConfiguration)
```

Supported API versions can be found in the Azure [documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/reference#completions).

Currently supported versions:

```2022-12-01```
```2023-03-15-preview```
```2023-05-15```
```2023-06-01-preview```
```2023-07-01-preview```
```2023-08-01-preview```
```2023-09-01-preview```

### Usage on [Chat completions](https://learn.microsoft.com/en-us/azure/ai-services/openai/reference#chat-completions):

```swift
let parameters = ChatCompletionParameters(
                     messages: [.init(role: .user, content: .text(prompt))], 
                     model: .custom("DEPLOYMENT_NAME") /// The deployment name you chose when you deployed the model. 
e.g. "gpt-35-turbo-0613"
                     )
let completionObject = try await service.startChat(parameters: parameters)
```

## AIProxy

### What is it?

[AIProxy](https://www.aiproxy.pro) is a backend for iOS apps that proxies requests from your app to OpenAI.
Using a proxy keeps your OpenAI key secret, protecting you from unexpectedly high bills due to key theft.
Requests are only proxied if they pass your defined rate limits and Apple's [DeviceCheck](https://developer.apple.com/documentation/devicecheck) verification.
We offer AIProxy support so you can safely distribute apps built with SwiftOpenAI.

### How does my SwiftOpenAI code change?

Proxy requests through AIProxy with two changes to your Xcode project:

1. Instead of initializing `service` with:

        let apiKey = "your_openai_api_key_here"
        let service = OpenAIServiceFactory.service(apiKey: apiKey)

Use:

        let service = OpenAIServiceFactory.service(
            aiproxyPartialKey: "your_partial_key_goes_here",
            aiproxyServiceURL: "your_service_url_goes_here"
        )

The `aiproxyPartialKey` and `aiproxyServiceURL` values are provided to you on the [AIProxy developer dashboard](https://developer.aiproxy.pro)

2. Add an `AIPROXY_DEVICE_CHECK_BYPASS` env variable to Xcode. This token is provided to you in the AIProxy
   developer dashboard, and is necessary for the iOS simulator to communicate with the AIProxy backend.
    - Type `cmd shift ,` to open up the "Edit Schemes" menu in Xcode
    - Select `Run` in the sidebar
    - Select `Arguments` from the top nav
    - Add to the "Environment Variables" section (not the "Arguments Passed on Launch" section) an env
      variable with name `AIPROXY_DEVICE_CHECK_BYPASS` and the value that we provided you in the AIProxy dashboard


⚠️  The `AIPROXY_DEVICE_CHECK_BYPASS` is intended for the simulator only. 
Do not let it leak into
a distribution build of your app (including a TestFlight distribution). If you follow the steps above,
then the constant won't leak because env variables are not packaged into the app bundle.

#### What is the `AIPROXY_DEVICE_CHECK_BYPASS` constant?

AIProxy uses Apple's [DeviceCheck](https://developer.apple.com/documentation/devicecheck) to ensure
that requests received by the backend originated from your app on a legitimate Apple device.
However, the iOS simulator cannot produce DeviceCheck tokens. Rather than requiring you to
constantly build and run on device during development, AIProxy provides a way to skip the
DeviceCheck integrity check. The token is intended for use by developers only. If an attacker gets
the token, they can make requests to your AIProxy project without including a DeviceCheck token, and
thus remove one level of protection.

#### What is the `aiproxyPartialKey` constant?

This constant is safe to include in distributed versions of your app. It is one part of an
encrypted representation of your real secret key. The other part resides on AIProxy's backend.
As your app makes requests to AIProxy, the two encrypted parts are paired, decrypted, and used
to fulfill the request to OpenAI.

#### How to set up my project on AIProxy?

Please see the [AIProxy integration guide](https://www.aiproxy.pro/docs/integration-guide.html)


### ⚠️  Disclaimer

Contributors of SwiftOpenAI shall not be liable for any damages or losses caused by third parties.
Contributors of this library provide third party integrations as a convenience. 
Any use of a third
party's services is assumed at your own risk.


## Ollama

Ollama now has built-in compatibility with the OpenAI [Chat Completions API](https://github.com/ollama/ollama/blob/main/docs/openai.md), making it possible to use more tooling and applications with Ollama locally.

<img width="783" alt="Screenshot 2024-06-24 at 11 52 35 PM" src="https://oss.gittoolsai.com/images/jamesrochabrun_SwiftOpenAI_readme_503972a4ed73.png">

### ⚠️ Important

Remember that these models run locally, so you need to download them. If you want to use llama3, you can open the terminal and run the following command:

```shell
ollama pull llama3
```

You can follow the [Ollama documentation](https://github.com/ollama/ollama/blob/main/docs/openai.md) for more.

### How to use these models locally using SwiftOpenAI?

To use local models with an `OpenAIService` in your application, you need to provide a URL. 


```swift
let service = OpenAIServiceFactory.service(baseURL: "http://localhost:11434")
```

Then you can use the completions API as follows:

```swift
let prompt = "Tell me a joke"
let parameters = ChatCompletionParameters(messages: [.init(role: .user, content: .text(prompt))], model: .custom("llama3"))
let chatCompletionObject = try await service.startChat(parameters: parameters)
```

⚠️ Note: You can probably use `OpenAIServiceFactory.service(apiKey:overrideBaseURL:proxyPath:)` for any OpenAI-compatible service.

### Resources:

[Ollama OpenAI compatibility docs.](https://github.com/ollama/ollama/blob/main/docs/openai.md)
[Ollama OpenAI compatibility blog post.](https://ollama.com/blog/openai-compatibility)

### Notes

You can also use this service constructor to provide any URL or apiKey if you need.

```swift
let service = OpenAIServiceFactory.service(apiKey: "YOUR_API_KEY", baseURL: "http://localhost:11434")
```

## Groq

<img width="792" alt="Screenshot 2024-10-11 at 11 49 04 PM" src="https://oss.gittoolsai.com/images/jamesrochabrun_SwiftOpenAI_readme_41e5d77e3e6f.png">

The Groq API is mostly compatible with OpenAI client libraries like `SwiftOpenAI`. To use Groq with this library, you just need to create an instance of `OpenAIService` like this:

```swift
let apiKey = "your_api_key"
let service = OpenAIServiceFactory.service(apiKey: apiKey, overrideBaseURL: "https://api.groq.com/", proxyPath: "openai")
```

For supported APIs using Groq, visit its [documentation](https://console.groq.com/docs/openai).

## xAI

<img width="792" alt="xAI Grok" src="https://oss.gittoolsai.com/images/jamesrochabrun_SwiftOpenAI_readme_8afe515a0845.png">

xAI provides an OpenAI-compatible completion API to its Grok models. 
You can use the OpenAI SDK to access these models.

```swift
let apiKey = "your_api_xai_key"
let service = OpenAIServiceFactory.service(apiKey: apiKey, overrideBaseURL: "https://api.x.ai", overrideVersion: "v1")
```

For more information about the `xAI` API visit its [documentation](https://docs.x.ai/docs/overview).

## OpenRouter

<img width="734" alt="Image" src="https://oss.gittoolsai.com/images/jamesrochabrun_SwiftOpenAI_readme_911c7d71f944.png" />

[OpenRouter](https://openrouter.ai/docs/quick-start) provides an OpenAI-compatible completion API to 314 models & providers that you can call directly, or using the OpenAI SDK. Additionally, some third-party SDKs are available.

```swift

// Creating the service

let apiKey = "your_api_key"
let service = OpenAIServiceFactory.service(apiKey: apiKey, 
   overrideBaseURL: "https://openrouter.ai", 
   proxyPath: "api",
   extraHeaders: [
      "HTTP-Referer": "<YOUR_SITE_URL>", // Optional. Site URL for rankings on openrouter.ai.
      "X-Title": "<YOUR_SITE_NAME>"  // Optional. Site title for rankings on openrouter.ai.
   ])

// Making a request

let prompt = "What is the Manhattan project?"
let parameters = ChatCompletionParameters(messages: [.init(role: .user, content: .text(prompt))], model: .custom("deepseek/deepseek-r1:free"))
let stream = try await service.startStreamedChat(parameters: parameters)
```

For more information about the `OpenRouter` API visit its [documentation](https://openrouter.ai/docs/quick-start).

## DeepSeek

![Image](https://oss.gittoolsai.com/images/jamesrochabrun_SwiftOpenAI_readme_49091355c784.png)

The [DeepSeek](https://api-docs.deepseek.com/) API uses an API format compatible with OpenAI. 
By modifying the configuration, you can use SwiftOpenAI to access the DeepSeek API.\n\nCreating the service\n\n```swift\n\nlet apiKey = \"your_api_key\"\nlet service = OpenAIServiceFactory.service(\n   apiKey: apiKey,\n   overrideBaseURL: \"https:\u002F\u002Fapi.deepseek.com\")\n```\n\nNon-Streaming Example\n\n```swift\nlet prompt = \"What is the Manhattan project?\"\nlet parameters = ChatCompletionParameters(\n    messages: [.init(role: .user, content: .text(prompt))],\n    model: .custom(\"deepseek-reasoner\")\n)\n\ndo {\n    let result = try await service.chat(parameters: parameters)\n    \n    \u002F\u002F Access the response content\n    if let content = result.choices.first?.message.content {\n        print(\"Response: \\(content)\")\n    }\n    \n    \u002F\u002F Access reasoning content if available\n    if let reasoning = result.choices.first?.message.reasoningContent {\n        print(\"Reasoning: \\(reasoning)\")\n    }\n} catch {\n    print(\"Error: \\(error)\")\n}\n```\n\nStreaming Example\n\n```swift\nlet prompt = \"What is the Manhattan project?\"\nlet parameters = ChatCompletionParameters(\n    messages: [.init(role: .user, content: .text(prompt))],\n    model: .custom(\"deepseek-reasoner\")\n)\n\n\u002F\u002F Start the stream\ndo {\n    let stream = try await service.startStreamedChat(parameters: parameters)\n    for try await result in stream {\n        let content = result.choices.first?.delta.content ?? 
\"\"\n        self.message += content\n        \n        \u002F\u002F Optional: Handle reasoning content if available\n        if let reasoning = result.choices.first?.delta.reasoningContent {\n            self.reasoningMessage += reasoning\n        }\n    }\n} catch APIError.responseUnsuccessful(let description, let statusCode) {\n    self.errorMessage = \"Network error with status code: \\(statusCode) and description: \\(description)\"\n} catch {\n    self.errorMessage = error.localizedDescription\n}\n```\n\nNotes\n\n- The DeepSeek API is compatible with OpenAI's format but uses different model names\n- Use .custom(\"deepseek-reasoner\") to specify the DeepSeek model\n- The `reasoningContent` field is optional and specific to DeepSeek's API\n- Error handling follows the same pattern as standard OpenAI requests.\n\nFor more inofrmation about the `DeepSeek` api visit its [documentation](https:\u002F\u002Fapi-docs.deepseek.com).\n\n## Gemini\n\n\u003Cimg width=\"982\" alt=\"Screenshot 2024-11-12 at 10 53 43 AM\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjamesrochabrun_SwiftOpenAI_readme_df26ca69e2cf.png\">\n\nGemini is now accessible from the OpenAI Library. Announcement .\n`SwiftOpenAI` support all OpenAI endpoints, however Please refer to Gemini documentation to understand which API's are currently compatible' \n\nGemini is now accessible through the OpenAI Library. See the announcement [here](https:\u002F\u002Fdevelopers.googleblog.com\u002Fen\u002Fgemini-is-now-accessible-from-the-openai-library\u002F).\nSwiftOpenAI supports all OpenAI endpoints. 
However, please refer to the [Gemini documentation](https://ai.google.dev/gemini-api/docs/openai) to understand which APIs are currently compatible.

You can instantiate an `OpenAIService` using your Gemini token like this...

```swift
let geminiAPIKey = "your_api_key"
let baseURL = "https://generativelanguage.googleapis.com"
let version = "v1beta"

let service = OpenAIServiceFactory.service(
   apiKey: geminiAPIKey, 
   overrideBaseURL: baseURL, 
   overrideVersion: version)
```

You can now create a chat request using the `.custom` model parameter and pass the model name as a string.

```swift
let parameters = ChatCompletionParameters(
      messages: [.init(
      role: .user,
      content: content)],
      model: .custom("gemini-1.5-flash"))

let stream = try await service.startStreamedChat(parameters: parameters)
```

## Collaboration
Open a PR for any proposed change pointing it to the `main` branch. Unit tests are highly appreciated ❤️

# SwiftOpenAI
<img width="1090" alt="repoOpenAI" src="https://oss.gittoolsai.com/images/jamesrochabrun_SwiftOpenAI_readme_6b9497efef51.png">

![iOS 15+](https://img.shields.io/badge/iOS-15%2B-blue.svg)
![macOS 13+](https://img.shields.io/badge/macOS-13%2B-blue.svg)
![watchOS 9+](https://img.shields.io/badge/watchOS-9%2B-blue.svg)
![Linux](https://img.shields.io/badge/Linux-blue.svg)
[![MIT 
license](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLicense-MIT-blue.svg)](https:\u002F\u002Flbesson.mit-license.org\u002F)\n[![swift-version](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fswift-5.9-brightgreen.svg)](https:\u002F\u002Fgithub.com\u002Fapple\u002Fswift)\n[![swiftui-version](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fswiftui-brightgreen)](https:\u002F\u002Fdeveloper.apple.com\u002Fdocumentation\u002Fswiftui)\n[![xcode-version](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fxcode-15%20-brightgreen)](https:\u002F\u002Fdeveloper.apple.com\u002Fxcode\u002F)\n[![swift-package-manager](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fpackage%20manager-compatible-brightgreen.svg?logo=data:image\u002Fsvg+xml;base64,PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0iVVRGLTgiPz4KPHN2ZyB3aWR0aD0iNjJweCIgaGVpZ2h0PSI0OXB4IiB2aWV3Qm94PSIwIDAgNjIgNDkiIHZlcnNpb249IjEuMSIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB4bWxuczp4bGluaz0iaHR0cDovL3d3dy53My5vcmcvMTk5OS94bGluayI+CiAgICA8IS0tIEdlbmVyYXRvcjogU2tldGNoIDYzLjEgKDkyNDUyKSAtIGh0dHBzOi8vc2tldGNoLmNvbSAtLT4KICAgIDx0aXRsZT5Hcm91cDwvdGl0bGU+CiAgICA8ZGVzYz5DcmVhdGVkIHdpdGggU2tldGNoLjwvZGVzYz4KICAgIDxnIGlkPSJQYWdlLTEiIHN0cm9rZT0ibm9uZSIgc3Ryb2tlLXdpZHRoPSIxIiBmaWxsPSJub25lIiBmaWxsLXJ1bGU9ImV2ZW5vZGQiPgogICAgICAgIDxnIGlkPSJHcm91cCIgZmlsbC1ydWxlPSJub256ZXJvIj4KICAgICAgICAgICAgPHBvbHlnb24gaWQ9IlBhdGgiIGZpbGw9IiNEQkI1NTEiIHBvaW50cz0iNTEuMzEwMzQ0OCAwIDEwLjY4OTY1NTIgMCAwIDEzLjUxNzI0MTQgMCA0OSA2MiA0OSA2MiAxMy41MTcyNDE0Ij48L3BvbHlnb24+CiAgICAgICAgICAgIDxwb2x5Z29uIGlkPSJQYXRoIiBmaWxsPSIjRjdFM0FGIiBwb2ludHM9IjI3IDI1IDMxIDI1IDM1IDI1IDM3IDI1IDM3IDE0IDI1IDE0IDI1IDI1Ij48L3BvbHlnb24+CiAgICAgICAgICAgIDxwb2x5Z29uIGlkPSJQYXRoIiBmaWxsPSIjRUZDNzVFIiBwb2ludHM9IjEwLjY4OTY1NTIgMCAwIDE0IDYyIDE0IDUxLjMxMDM0NDggMCI+PC9wb2x5Z29uPgogICAgICAgICAgICA8cG9seWdvbiBpZD0iUmVjdGFuZ2xlIiBmaWxsPSIjRjdFM0FGIiBwb2ludHM9IjI3IDAgMzUgMCAzNyAxNCAyNSAxNCI+PC9wb2x5Z29uPgogICAgICAgIDwvZz4KICAgIDwvZz4KPC9zdmc+)](https:\u002F\u002Fgithub.com\u002Fapple\u002Fswif
t-package-manager)
[![Buy me a coffee](https://img.shields.io/badge/Buy%20me%20a%20coffee-048754?logo=buymeacoffee)](https://buymeacoffee.com/jamesrochabrun)

An open-source Swift package to easily interact with OpenAI's public API.

🚀 Now also available as a [CLI](https://github.com/jamesrochabrun/SwiftOpenAICLI) and an [MCP](https://github.com/jamesrochabrun/SwiftOpenAIMCP) server.

## Table of Contents
- [Description](#description)
- [Getting an API Key](#getting-an-api-key)
- [Installation](#installation)
- [Compatibility](#compatibility)
- [Usage](#usage)
- [Collaboration](#collaboration)

## Description

`SwiftOpenAI` is an open-source Swift package that streamlines interactions with **all** OpenAI API endpoints, now with added support for Azure, AIProxy, the Assistants streaming API, and the new **Realtime API** for low-latency, bidirectional voice conversations.

### OpenAI Endpoints

- [Audio](#audio)
   - [Transcriptions](#audio-transcriptions)
   - [Translations](#audio-translations)
   - [Speech](#audio-Speech)
   - [Realtime](#audio-realtime)
- [Chat](#chat)
   - [Function Calling](#function-calling)
   - [Structured Outputs](#structured-outputs)
   - [Vision](#vision)
- [Response](#response)
   - [Streaming Responses](#streaming-responses)
- [Embeddings](#embeddings)
- [Fine-tuning](#fine-tuning)
- [Batch](#batch)
- [Files](#files)
- [Images](#images)
- [Models](#models)
- [Moderations](#moderations)

### **BETA**
- [Assistants](#assistants)
   - [Assistants File Object](#assistants-file-object)
- [Threads](#threads)
- [Messages](#messages)
   - [Message File Object](#message-file-object)
- [Runs](#runs)
   - [Run Step Object](#run-step-object)
   - [Run Step Details](#run-step-details)
- [Assistants Streaming](#assistants-streaming)
   - [Message Delta Object](#message-delta-object)
   - [Run Step Delta Object](#run-step-delta-object)
- [Vector Stores](#vector-stores)
   - [Vector Store File](#vector-store-file)
   - [Vector Store File Batch](#vector-store-file-batch)

## Getting an API Key

⚠️ **Important**

To interact with OpenAI services, you need an API key. Follow these steps to obtain one:

1. Visit [OpenAI](https://www.openai.com/).
2. Sign up for an [account](https://platform.openai.com/signup) or [log in](https://platform.openai.com/login) if you already have one.
3. 
Go to the [API keys page](https://platform.openai.com/account/api-keys) and follow the instructions to generate a new API key.

For more information, consult OpenAI's [official documentation](https://platform.openai.com/docs/).

⚠️ Be sure to keep your API key secure, following [OpenAI's guidance](https://platform.openai.com/docs/api-reference/authentication):

> Remember that your API key is a secret! Do not share it with others or expose it in any client-side code (browsers, apps). Production requests must be routed through your own backend server, where your API key can be securely loaded from an environment variable or a key management service.

SwiftOpenAI has built-in support for AIProxy, a backend service for AI applications that satisfies this requirement. To configure AIProxy, see the instructions [here](#aiproxy).

## Installation

### Swift Package Manager

1. Open your Swift project in Xcode.
2. Go to `File` -> `Add Package Dependency`.
3. Paste [this URL](https://github.com/jamesrochabrun/SwiftOpenAI) into the search bar.
4. Choose the version you'd like to install (see the note below).
5. Click `Add Package`.

Note: Xcode has a quirk where it defaults an SPM package's upper limit to 2.0.0. This package has versions above that limit, so you should not accept Xcode's default. Instead, manually enter the lower bound of the [release](https://github.com/jamesrochabrun/SwiftOpenAI/releases) you'd like to support, then click outside the input box so Xcode adjusts the upper bound automatically. Alternatively, select `branch` -> `main` to stay up to date.

## Compatibility

### Platform Support

SwiftOpenAI supports both Apple platforms and Linux.
- **Apple platforms** include iOS 15+, macOS 13+, and watchOS 9+.
- **Linux**: On Linux, SwiftOpenAI uses AsyncHTTPClient to work around URLSession bugs in Apple's Foundation framework, and it can be used with the [Vapor](https://vapor.codes/) server framework.

### OpenAI-Compatible Providers

SwiftOpenAI supports a number of OpenAI-compatible providers, including but not limited to:

- [Azure OpenAI](#azure-openai)
- [Anthropic](#anthropic)
- [Gemini](#gemini)
- [Ollama](#ollama)
- [Groq](#groq)
- [xAI](#xai)
- [OpenRouter](#openRouter)
- [DeepSeek](#deepseek)
- [AIProxy](#aiproxy)

You can check OpenAIServiceFactory for convenience initializers that accept custom URLs.

## Usage

To use SwiftOpenAI in your project, first import the package:

```swift
import SwiftOpenAI
```

Then initialize the service with your OpenAI API key:

```swift
let apiKey = "your_openai_api_key_here"
let service = OpenAIServiceFactory.service(apiKey: apiKey)
```

If needed, you can also specify an organization:

```swift
let apiKey = "your_openai_api_key_here"
let organizationID = "your_organization_id"
let service = 
OpenAIServiceFactory.service(apiKey: apiKey, organizationID: organizationID)
```

For reasoning models, make sure to set [`timeoutIntervalForRequest`](https://developer.apple.com/documentation/foundation/nsurlsessionconfiguration/1408259-timeoutintervalforrequest) in your URL session configuration to a higher value. The default is 60 seconds, which may not be enough, since requests to reasoning models can take longer to process and respond.

To configure it:

```swift
let apiKey = "your_openai_api_key_here"
let organizationID = "your_organization_id"
// Note: `URLSession.shared` hands out a copy of its configuration, so mutating
// that copy has no effect. Create a session with a custom configuration instead.
let configuration = URLSessionConfiguration.default
configuration.timeoutIntervalForRequest = 360 // e.g. 360 seconds or more.
let session = URLSession(configuration: configuration)
let httpClient = URLSessionHTTPClientAdapter(urlSession: session)
let service = OpenAIServiceFactory.service(apiKey: apiKey, organizationID: organizationID, httpClient: httpClient)
```

You are now ready to access all of OpenAI's endpoints.

### How to get the status code of a network error

You may want to drive your UI based on the kind of error the API returns. For example, `429` means your request was rate limited. The `APIError` type has a `.responseUnsuccessful` case with two associated values: `description` and `statusCode`. Here is an example using the chat completions API:

```swift
let service = OpenAIServiceFactory.service(apiKey: apiKey)
let parameters = ChatCompletionParameters(messages: [.init(role: .user, content: .text("hello world"))],
                                          model: .gpt4o)
do {
   let choices = try await service.startChat(parameters: parameters).choices
   // Handle the choices
} catch APIError.responseUnsuccessful(let description, let statusCode) {
   print("Network error with status code: \(statusCode) and description: \(description)")
} catch {
   print(error.localizedDescription)
}
```


### Audio

### Audio Transcriptions

Parameters
```swift
public struct AudioTranscriptionParameters: Encodable {
   
   /// The name of the file asset is not documented in OpenAI's official documentation; however, it is required to construct the multipart request.
   let fileName: String
   /// The audio file object (not the file name), in one of these formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, or webm.
   let file: Data
   /// ID of the model to use. Only whisper-1 is currently available.
   let model: String
   /// The language of the input audio. Supplying the input language in [ISO-639-1](https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes) format will improve accuracy and latency.
 
  let language: String?\n   \u002F\u002F\u002F 可选的文本提示，用于指导模型的风格或延续之前的音频片段。[提示](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Fspeech-to-text\u002Fprompting) 应与音频语言一致。\n   let prompt: String?\n   \u002F\u002F\u002F 转录输出的格式，可选 json、text、srt、verbose_json 或 vtt。默认为 json。\n   let responseFormat: String?\n   \u002F\u002F\u002F 采样温度，范围在 0 到 1 之间。较高的值（如 0.8）会使输出更具随机性，而较低的值（如 0.2）则会使输出更加专注和确定性。如果设置为 0，模型将使用 [对数概率](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FLog_probability) 自动调整温度，直到达到某些阈值。默认为 0。\n   let temperature: Double?\n   \n   public enum Model {\n      case whisperOne \n      case custom(model: String)\n   }\n   \n   public init(\n      fileName: String,\n      file: Data,\n      model: Model = .whisperOne,\n      prompt: String? = nil,\n      responseFormat: String? = nil,\n      temperature: Double? = nil,\n      language: String? = nil)\n   {\n      self.fileName = fileName\n      self.file = file\n      self.model = model.rawValue\n      self.prompt = prompt\n      self.responseFormat = responseFormat\n      self.temperature = temperature\n      self.language = language\n   }\n}\n```\n\n响应\n```swift\npublic struct AudioObject: Decodable {\n   \n   \u002F\u002F\u002F 如果请求使用 `transcriptions` API，则为转录文本；如果请求使用 `translations` 端点，则为翻译文本。\n   public let text: String\n}\n```\n\n使用方法\n```swift\nlet fileName = \"narcos.m4a\"\nlet data = Data(contentsOfURL:_) \u002F\u002F 从名为 \"narcos.m4a\" 的文件中读取的数据。\nlet parameters = AudioTranscriptionParameters(fileName: fileName, file: data) \u002F\u002F **重要**：文件名中务必包含文件扩展名。\nlet audioObject =  try await service.createTranscription(parameters: parameters)\n```\n\n### 音频翻译\n参数\n```swift\npublic struct AudioTranslationParameters: Encodable {\n   \n   \u002F\u002F\u002F 文件资源的名称并未在 OpenAI 官方文档中提及；然而，它对于构建多部分请求至关重要。\n   let fileName: String\n   \u002F\u002F\u002F 音频文件对象（不是文件名），需要进行翻译，格式可以是 flac、mp3、mp4、mpeg、mpga、m4a、ogg、wav 或 webm 中的一种。\n   let file: Data\n   \u002F\u002F\u002F 要使用的模型 ID。目前仅支持 whisper-1。\n   let 
model: String\n   \u002F\u002F\u002F 一个可选的文本，用于指导模型的风格或延续之前的音频片段。该 [提示](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Fspeech-to-text\u002Fprompting) 应与音频语言一致。\n   let prompt: String?\n   \u002F\u002F\u002F 转录输出的格式，可选择 json、text、srt、verbose_json 或 vtt 中的一种。默认为 json。\n   let responseFormat: String?\n   \u002F\u002F\u002F 采样温度，范围在 0 到 1 之间。较高的值（如 0.8）会使输出更具随机性，而较低的值（如 0.2）则会使其更加专注和确定性。如果设置为 0，模型将使用 [对数概率](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FLog_probability) 自动提高温度，直到达到某些阈值为止。默认为 0。\n   let temperature: Double?\n   \n   public enum Model {\n      case whisperOne \n      case custom(model: String)\n   }\n   \n   public init(\n      fileName: String,\n      file: Data,\n      model: Model = .whisperOne,\n      prompt: String? = nil,\n      responseFormat: String? = nil,\n      temperature: Double? = nil)\n   {\n      self.fileName = fileName\n      self.file = file\n      self.model = model.rawValue\n      self.prompt = prompt\n      self.responseFormat = responseFormat\n      self.temperature = temperature\n   }\n}\n```\n\n响应\n```swift\npublic struct AudioObject: Decodable {\n   \n   \u002F\u002F\u002F 如果请求使用 `transcriptions` API，则为转录文本；如果请求使用 `translations` 端点，则为翻译文本。\n   public let text: String\n}\n```\n\n用法\n```swift\nlet fileName = \"german.m4a\"\nlet data = Data(contentsOfURL:_) \u002F\u002F 从名为 \"german.m4a\" 的文件中获取的数据。\nlet parameters = AudioTranslationParameters(fileName: fileName, file: data) \u002F\u002F **重要提示**：文件名中务必包含文件扩展名。\nlet audioObject = try await service.createTranslation(parameters: parameters)\n```\n\n### 音频语音\n参数\n```swift\n\u002F\u002F\u002F [根据输入文本生成音频。](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Faudio\u002FcreateSpeech)\npublic struct AudioSpeechParameters: Encodable {\n\n   \u002F\u002F\u002F 可用的 [TTS 模型](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fmodels\u002Ftts) 之一：tts-1 或 tts-1-hd\n   let model: String\n   \u002F\u002F\u002F 要生成音频的文本。最大长度为 4096 个字符。\n   let input: 
String
   /// The voice to use when generating the audio. Supported voices are alloy, echo, fable, onyx, nova, and shimmer. Previews of the voices are available in the [Text to speech guide](https://platform.openai.com/docs/guides/text-to-speech/voice-options).
   let voice: String
   /// Defaults to mp3. The format of the audio output. Supported formats are mp3, opus, aac, and flac.
   let responseFormat: String?
   /// Defaults to 1. The speed of the generated audio. Select a value from 0.25 to 4.0. 1.0 is the default.
   let speed: Double?

   public enum TTSModel: String {
      case tts1 = "tts-1"
      case tts1HD = "tts-1-hd"
   }

   public enum Voice: String {
      case alloy
      case echo
      case fable
      case onyx
      case nova
      case shimmer
   }

   public enum ResponseFormat: String {
      case mp3
      case opus
      case aac
      case flac
   }
   
   public init(
      model: TTSModel,
      input: String,
      voice: Voice,
      responseFormat: ResponseFormat? = nil,
      speed: Double? = nil)
   {
       self.model = model.rawValue
       self.input = input
       self.voice = voice.rawValue
       self.responseFormat = responseFormat?.rawValue
       self.speed = speed
   }
}
```

Response
```swift
/// The response for the [Audio Speech](https://platform.openai.com/docs/api-reference/audio/createSpeech) endpoint.
public struct AudioSpeechObject: Decodable {

   /// The audio file content data.
   public let output: Data
}
```

Usage
```swift
let prompt = "Hello, how are you today?"
let parameters = AudioSpeechParameters(model: .tts1, input: prompt, voice: .shimmer)
let audioObjectData = try await service.createSpeech(parameters: parameters).output
playAudio(from: audioObjectData)

// Playing the data
 private func playAudio(from data: Data) {
       do {
           // Initialize the audio player with the data
           audioPlayer = try AVAudioPlayer(data: data)
           audioPlayer?.prepareToPlay()
           audioPlayer?.play()
       } catch {
           // Handle the error
           
print(\"播放音频时出错：\\(error.localizedDescription)\")\n       }\n   }\n```\n\n### 音频实时\n\n[实时 API](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Frealtime) 允许通过 WebSockets 和低延迟音频流与 OpenAI 的模型进行双向语音对话。该 API 支持音频到音频以及文本到音频的交互，并内置语音活动检测、转录和函数调用功能。\n\n**平台要求：** iOS 15+、macOS 13+、watchOS 9+。需要 AVFoundation（Linux 上不可用）。\n\n**所需权限：**\n- 在 Info.plist 中添加 `NSMicrophoneUsageDescription`\n- 在 macOS 上：启用沙盒权限以访问麦克风和允许出站网络连接\n\n参数\n```swift\n\u002F\u002F\u002F 用于创建实时会话的配置\npublic struct OpenAIRealtimeSessionConfiguration: Encodable，Sendable {\n\n\u002F\u002F\u002F 输入音频格式。选项：.pcm16、.g711_ulaw、.g711_alaw。默认为 .pcm16\n   let inputAudioFormat: AudioFormat?\n   \u002F\u002F\u002F 使用 Whisper 进行输入音频转录的配置\n   let inputAudioTranscription: InputAudioTranscription?\n   \u002F\u002F\u002F 模型的系统指令。提供了推荐的默认值\n   let instructions: String?\n   \u002F\u002F\u002F 响应输出的最大 token 数量。可以是 .value(Int) 或 .infinite\n   let maxResponseOutputTokens: MaxResponseOutputTokens?\n   \u002F\u002F\u002F 输出模态：[.audio, .text] 或仅 [.text]。默认为 [.audio, .text]\n   let modalities: [Modality]?\n   \u002F\u002F\u002F 输出音频格式。选项：.pcm16、.g711_ulaw、.g711_alaw。默认为 .pcm16\n   let outputAudioFormat: AudioFormat?\n   \u002F\u002F\u002F 音频播放速度。范围：0.25 至 4.0。默认为 1.0\n   let speed: Double?\n   \u002F\u002F\u002F 模型响应的采样温度。范围：0.6 至 1.2。默认为 0.8\n   let temperature: Double?\n   \u002F\u002F\u002F 模型可调用的工具\u002F函数数组\n   let tools: [Tool]?\n   \u002F\u002F\u002F 工具选择模式：.none、.auto、.required 或 .specific(functionName: String)\n   let toolChoice: ToolChoice?\n   \u002F\u002F\u002F 语音活动检测配置。选项：.serverVAD 或 .semanticVAD\n   let turnDetection: TurnDetection?\n   \u002F\u002F\u002F 要使用的语音。选项：“alloy”、“ash”、“ballad”、“coral”、“echo”、“sage”、“shimmer”、“verse”\n   let voice: String?\n\n   \u002F\u002F\u002F 可用的音频格式\n   public enum AudioFormat: String, Encodable, Sendable {\n      case pcm16\n      case g711_ulaw = \"g711-ulaw\"\n      case g711_alaw = \"g711-alaw\"\n   }\n\n   \u002F\u002F\u002F 输出模态\n   public enum Modality: 
String, Encodable, Sendable {\n      case audio\n      case text\n   }\n\n   \u002F\u002F\u002F 轮次检测配置\n   public struct TurnDetection: Encodable, Sendable {\n      \u002F\u002F\u002F 基于服务器的 VAD，具有可自定义的时间参数\n      public static func serverVAD(\n         prefixPaddingMs: Int = 300,\n         silenceDurationMs: Int = 500,\n         threshold: Double = 0.5\n      ) -> TurnDetection\n\n      \u002F\u002F\u002F 语义 VAD，带有急切程度设置\n      public static func semanticVAD(eagerness: Eagerness = .medium) -> TurnDetection\n\n      public enum Eagerness: String, Encodable, Sendable {\n         case low, medium, high\n      }\n   }\n}\n```\n\n响应\n```swift\n\u002F\u002F\u002F 从实时 API 接收到的消息\npublic enum OpenAIRealtimeMessage: Sendable {\n   case error(String?)                    \u002F\u002F 发生错误\n   case sessionCreated                    \u002F\u002F 会话成功创建\n   case sessionUpdated                    \u002F\u002F 配置已更新\n   case responseCreated                   \u002F\u002F 模型开始生成响应\n   case responseAudioDelta(String)        \u002F\u002F 音频片段（base64 编码的 PCM16）\n   case inputAudioBufferSpeechStarted     \u002F\u002F 用户开始说话（VAD 检测到）\n   case responseFunctionCallArgumentsDone(name: String, arguments: String, callId: String)\n   case responseTranscriptDelta(String)   \u002F\u002F 部分 AI 转录文本\n   case responseTranscriptDone(String)    \u002F\u002F 完整的 AI 转录文本\n   case inputAudioBufferTranscript(String)           \u002F\u002F 用户音频转录文本\n   case inputAudioTranscriptionDelta(String)         \u002F\u002F 部分用户转录文本\n   case inputAudioTranscriptionCompleted(String)     \u002F\u002F 完整的用户转录文本\n}\n```\n\n辅助类型\n```swift\n\u002F\u002F\u002F 管理实时对话中的麦克风输入和音频播放。\n\u002F\u002F\u002F 通过 AudioController 播放的音频不会干扰麦克风输入（模型不会听到自己的声音）。\n@RealtimeActor\npublic final class AudioController {\n\n   \u002F\u002F\u002F 使用指定模式初始化\n   \u002F\u002F\u002F - 参数 modes：包含 .record（用于麦克风）和\u002F或 .playback（用于音频输出）的数组\n   public init(modes: [Mode]) async throws\n\n   public enum Mode {\n      case record   \u002F\u002F 
启用麦克风流式传输\n      case playback \u002F\u002F 启用音频播放\n   }\n\n   \u002F\u002F\u002F 返回麦克风音频缓冲区的 AsyncStream\n   \u002F\u002F\u002F - 抛出：OpenAIError 如果初始化时未启用 .record 模式\n   public func micStream() throws -> AsyncStream\u003CAVAudioPCMBuffer>\n\n   \u002F\u002F\u002F 播放来自模型的 base64 编码的 PCM16 音频\n   \u002F\u002F\u002F - 参数 base64String：Base64 编码的 PCM16 音频数据\n   public func playPCM16Audio(base64String: String)\n\n   \u002F\u002F\u002F 中断当前音频播放（当用户开始说话时很有用）\n   public func interruptPlayback()\n\n   \u002F\u002F\u002F 停止所有音频操作\n   public func stop()\n}\n\n\u002F\u002F\u002F 用于将音频缓冲区编码为 base64 的工具\npublic enum AudioUtils {\n   \u002F\u002F\u002F 将 AVAudioPCMBuffer 转换为可用于传输的 base64 字符串\n   public static func base64EncodeAudioPCMBuffer(from buffer: AVAudioPCMBuffer) -> String?\n\n   \u002F\u002F\u002F 检查是否已连接耳机\n   public static var headphonesConnected: Bool\n}\n```\n\n使用示例\n```swift\n\u002F\u002F 1. 创建会话配置\nlet configuration = OpenAIRealtimeSessionConfiguration(\n   voice: \"alloy\",\n   instructions: \"你是一位乐于助人的 AI 助手。请简明扼要且友好地回答。\",\n   turnDetection: .serverVAD(\n      prefixPaddingMs: 300,\n      silenceDurationMs: 500,\n      threshold: 0.5\n   ),\n   inputAudioTranscription: .init(model: \"whisper-1\")\n)\n\n\u002F\u002F 2. 创建实时会话\nlet session = try await service.realtimeSession(\n   model: \"gpt-4o-mini-realtime-preview-2024-12-17\",\n   configuration: configuration\n)\n\n\u002F\u002F 3. 初始化音频控制器以进行录音和播放\nlet audioController = try await AudioController(modes: [.record, .playback])\n\n\u002F\u002F 4. 
Handle incoming messages from OpenAI
Task {
   for await message in session.receiver {
      switch message {
      case .responseAudioDelta(let audio):
         // Play the audio from the model
         audioController.playPCM16Audio(base64String: audio)

      case .inputAudioBufferSpeechStarted:
         // The user started speaking -- interrupt the model's audio playback
         audioController.interruptPlayback()

      case .responseTranscriptDelta(let text):
         // Show the partial model transcript
         print("Model (partial): \(text)")

      case .responseTranscriptDone(let text):
         // Show the full model transcript
         print("Model: \(text)")

      case .inputAudioTranscriptionCompleted(let text):
         // Show the user's transcribed speech
         print("User: \(text)")

      case .responseFunctionCallArgumentsDone(let name, let args, let callId):
         // Handle a function call from the model
         print("Function call: \(name), arguments: \(args)")
         // Execute the function and send back the result

      case .error(let error):
         print("Error: \(error ?? "unknown error")")

      default:
         break
      }
   }
}

// 5. Stream microphone audio to OpenAI
Task {
   do {
      for try await buffer in audioController.micStream() {
         // Encode the audio buffer as base64
         guard let base64Audio = AudioUtils.base64EncodeAudioPCMBuffer(from: buffer) else {
            continue
         }

         // Send the audio to OpenAI
         try await session.sendMessage(
            OpenAIRealtimeInputAudioBufferAppend(audio: base64Audio)
         )
      }
   } catch {
      print("Microphone error: \(error)")
   }
}

// 6. Manually trigger a response (optional -- usually handled by VAD)
try await session.sendMessage(
   OpenAIRealtimeResponseCreate()
)

// 7. Update the session configuration mid-conversation (optional)
let newConfig = OpenAIRealtimeSessionConfiguration(
   voice: "shimmer",
   temperature: 0.9
)
try await session.sendMessage(
   OpenAIRealtimeSessionUpdate(sessionConfig: newConfig)
)

// 8. 
完成后进行清理\naudioController.stop()\nsession.disconnect()\n```\n\n函数调用\n```swift\n\u002F\u002F 在配置中定义工具\nlet tools: [OpenAIRealtimeSessionConfiguration.Tool] = [\n   .init(\n      name: \"get_weather\",\n      description: \"获取某个地点的当前天气\",\n      parameters: [\n         \"type\": \"object\",\n         \"properties\": [\n            \"location\": [\n               \"type\": \"string\",\n               \"description\": \"城市名称，例如旧金山\"\n            ]\n         ],\n         \"required\": [\"location\"]\n      ]\n   )\n]\n\nlet config = OpenAIRealtimeSessionConfiguration(\n   voice: \"alloy\",\n   tools: tools,\n   toolChoice: .auto\n)\n\n\u002F\u002F 在消息接收器中处理函数调用\ncase .responseFunctionCallArgumentsDone(let name, let args, let callId):\n   if name == \"get_weather\" {\n      \u002F\u002F 解析参数并执行函数\n      let result = getWeather(arguments: args)\n\n      \u002F\u002F 将结果返回给模型\n      try await session.sendMessage(\n         OpenAIRealtimeConversationItemCreate(\n            item: .functionCallOutput(\n               callId: callId,\n               output: result\n            )\n         )\n      )\n   }\n```\n\n高级功能\n- **语音活动检测 (VAD)：** 可选择基于服务器的 VAD（支持自定义时序）或语义 VAD（支持不同积极性级别）\n- **转录：** 为用户输入和模型输出同时启用 Whisper 转录\n- **会话更新：** 在不重新连接的情况下，可在对话过程中更改语音、指令或工具\n- **响应触发：** 可手动触发模型响应，也可依赖自动 VAD\n- **平台特定行为：** 根据平台和耳机连接情况，自动选择最优的音频 API\n\n有关完整实现示例，请参阅仓库中的 `Examples\u002FRealtimeExample\u002FRealtimeExample.swift` 文件。\n\n### 聊天\n参数\n```swift\npublic struct ChatCompletionParameters: Encodable {\n   \n   \u002F\u002F\u002F 由迄今为止对话组成的消息列表。[示例 Python 代码](https:\u002F\u002Fcookbook.openai.com\u002Fexamples\u002Fhow_to_format_inputs_to_chatgpt_models)\n   public var messages: [Message]\n   \u002F\u002F\u002F 要使用的模型 ID。有关哪些模型适用于聊天 API 的详细信息，请参阅 [模型端点兼容性](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fmodels\u002Fhow-we-use-your-data) 表。\n   \u002F\u002F\u002F 支持 GPT-4、GPT-4o、GPT-5 等模型。对于 GPT-5 系列：.gpt5、.gpt5Mini、.gpt5Nano\n   public var model: String\n   \u002F\u002F\u002F 
是否存储此聊天完成请求的输出，以供我们用于 [模型蒸馏](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Fdistillation) 或 [评估](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Fevals) 产品。\n   \u002F\u002F\u002F 默认为 false\n   public var store: Bool?\n   \u002F\u002F\u002F 介于 -2.0 和 2.0 之间的数字。正值会根据当前文本中某个标记的出现频率对其施加惩罚，从而降低模型逐字重复同一句话的可能性。默认为 0\n   \u002F\u002F\u002F [有关频率和存在惩罚的更多信息。](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Fgpt\u002Fparameter-details)\n   public var frequencyPenalty: Double?\n   \u002F\u002F\u002F 控制模型对函数调用的响应方式。none 表示模型不会调用函数，而是直接回复用户；auto 表示模型可以自行决定是回复用户还是调用函数；通过 {\"name\": \"my_function\"} 指定特定函数，则强制模型调用该函数。如果没有函数定义，默认为 none；如果有函数定义，则默认为 auto。\n   @available(*, deprecated, message: \"已弃用，建议使用 tool_choice。\")\n   public var functionCall: FunctionCall?\n   \u002F\u002F\u002F 控制模型将调用哪个（如果有的话）函数。none 表示模型不会调用函数，而是生成一条消息。\n   \u002F\u002F\u002F auto 表示模型可以自行决定是生成消息还是调用函数。通过 `{\"type: \"function\", \"function\": {\"name\": \"my_function\"}}` 指定特定函数，则强制模型调用该函数。\n   \u002F\u002F\u002F 如果没有函数定义，默认为 none；如果有函数定义，默认为 auto。\n   public var toolChoice: ToolChoice?\n   \u002F\u002F\u002F 模型可能为其生成 JSON 输入的函数列表。\n   @available(*, deprecated, message: \"已弃用，建议使用 tools。\")\n   public var functions: [ChatFunction]?\n   \u002F\u002F\u002F 模型可能调用的工具列表。目前仅支持函数作为工具。使用此参数可提供模型可能为其生成 JSON 输入的函数列表。\n   public var tools: [Tool]?\n   \u002F\u002F\u002F 在使用工具时是否启用并行函数调用。默认为 true。\n   public var parallelToolCalls: Bool?\n   \u002F\u002F\u002F 修改指定标记出现在完成结果中的可能性。\n   \u002F\u002F\u002F 接受一个 JSON 对象，将标记（由分词器中的标记 ID 指定）映射到 -100 至 100 之间的相关偏置值。从数学上讲，该偏置会在采样之前添加到模型生成的 logits 中。具体效果因模型而异，但介于 -1 和 1 之间的值应会降低或增加被选中的可能性；而接近 -100 或 100 的值则可能导致禁止或独占选择相关标记。默认为 null。\n   public var logitBias: [Int: Double]?\n   \u002F\u002F\u002F 是否返回输出标记的对数概率。如果为真，则返回消息内容中每个输出标记的对数概率。此选项目前不适用于 gpt-4-vision-preview 模型。默认为 false。\n   public var logprobs: Bool?\n   \u002F\u002F\u002F 一个介于 0 和 5 之间的整数，指定在每个标记位置返回最有可能的标记数量，并附带相应的对数概率。如果使用此参数，则必须将 logprobs 设置为 true。\n   public var 
topLogprobs: Int?\n   \u002F\u002F\u002F 聊天完成中最多可生成的 [标记](https:\u002F\u002Fplatform.openai.com\u002Ftokenizer) 数量。此值可用于控制通过 API 生成文本的 [成本](https:\u002F\u002Fopenai.com\u002Fapi\u002Fpricing\u002F)。\n   \u002F\u002F\u002F 此值现已弃用，建议使用 max_completion_tokens，且与 [o1 系列模型](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Freasoning) 不兼容。\n   public var maxTokens: Int?\n   \u002F\u002F\u002F 完成结果中可生成标记数目的上限，包括可见输出标记和 [推理标记](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Freasoning)。\n   public var maxCompletionTokens: Int?\n   \u002F\u002F\u002F 为每条输入消息生成多少个聊天完成选项。默认为 1。\n   public var n: Int?\n   \u002F\u002F\u002F 您希望模型为此请求生成的输出类型。大多数模型能够生成文本，这也是默认设置：\n   \u002F\u002F\u002F [\"text\"]\n   \u002F\u002F\u002F gpt-4o-audio-preview 模型也可用于 [生成音频](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Faudio)。要请求该模型同时生成文本和音频响应，可以使用：\n   \u002F\u002F\u002F [\"text\", \"audio\"]\n   public var modalities: [String]?\n   \u002F\u002F\u002F 音频输出参数。当 modalities 设置为 [\"audio\"] 时需要提供。[了解更多信息。](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Faudio)\n   public var audio: Audio?\n   \u002F\u002F\u002F 介于 -2.0 和 2.0 之间的数字。正值会根据标记是否已在当前文本中出现来对其进行惩罚，从而提高模型谈论新话题的可能性。默认为 0\n   \u002F\u002F\u002F [有关频率和存在惩罚的更多信息。](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Fgpt\u002Fparameter-details)\n   public var presencePenalty: Double?\n   \u002F\u002F\u002F 一个对象，指定模型必须输出的格式。用于启用 JSON 模式。\n   \u002F\u002F\u002F 设置为 `{ type: \"json_object\" }` 可启用 `JSON` 模式，确保模型生成的消息是有效的 JSON。\n   \u002F\u002F\u002F 重要提示：在使用 `JSON` 模式时，您仍需通过对话中的某条消息（例如系统消息）明确指示模型生成 `JSON`。否则，模型可能会持续生成空白内容，直到达到标记限制，这可能耗费大量时间，并表现为“卡住”的请求。此外，如果 `finish_reason=\"length\"`，则消息内容可能是不完整的（即被截断），这表明生成的标记数超过了 `max_tokens`，或者对话上下文长度超出了限制。\n   public var responseFormat: ResponseFormat?\n   \u002F\u002F\u002F 指定处理请求时使用的延迟等级。此参数适用于订阅了 scale tier 服务的客户：\n   \u002F\u002F\u002F 如果设置为 'auto'，系统将使用 scale tier 积分，直至用尽。\n   \u002F\u002F\u002F 如果设置为 'default'，请求将在共享集群中处理。\n   
\u002F\u002F\u002F 设置此参数后，响应体中将包含所使用的 service_tier。\n   public var serviceTier: String?\n   \u002F\u002F\u002F 此功能处于 `Beta` 阶段。如果指定，我们的系统将尽力进行确定性采样，使得使用相同 `seed` 和参数的重复请求应返回相同的结果。\n   \u002F\u002F\u002F 确定性无法保证，您应参考 `system_fingerprint` 响应参数来监控后端的变化。\n   public var seed: Int?\n   \u002F\u002F\u002F 最多 4 个序列，API 将在此停止生成更多标记。默认为 null。\n   public var stop: [String]?\n   \u002F\u002F\u002F 如果设置，将发送部分消息增量，类似于 ChatGPT。标记会以纯数据形式按 [服务器发送事件](https:\u002F\u002Fdeveloper.mozilla.org\u002Fen-US\u002Fdocs\u002FWeb\u002FAPI\u002FServer-sent_events\u002FUsing_server-sent_events#event_stream_format) 的方式陆续发送，直到收到 data: [DONE] 消息为止。[示例 Python 代码](https:\u002F\u002Fcookbook.openai.com\u002Fexamples\u002Fhow_to_stream_completions )。\n   \u002F\u002F\u002F 默认为 false。\n   var stream: Bool? = nil\n   \u002F\u002F\u002F 流式响应选项。仅在设置 stream: true 时才应设置此参数。\n   var streamOptions: StreamOptions?\n   \u002F\u002F\u002F 使用的采样温度，范围为 0 到 2。较高的值（如 0.8）会使输出更具随机性，而较低的值（如 0.2）则使其更加专注和确定性更强。\n   \u002F\u002F\u002F 我们通常建议调整此参数或 `top_p` 参数，但不要同时调整两者。默认为 1。\n   public var temperature: Double?\n   \u002F\u002F\u002F 一种替代温度采样的方法，称为核采样，在这种方法中，模型会考虑具有 top_p 概率质量的标记结果。因此，0.1 表示仅考虑构成前 10% 概率质量的标记。\n   \u002F\u002F\u002F 我们通常建议调整此参数或 `temperature` 参数，但不要同时调整两者。默认为 1。\n   public var topP: Double?\n   \u002F\u002F\u002F 代表您的最终用户的唯一标识符，有助于 OpenAI 监控和检测滥用行为。\n   \u002F\u002F\u002F [了解更多信息](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Fsafety-best-practices\u002Fend-user-ids)。\n   public var user: String?\n   \n   public struct Message: Encodable {\n      \n      \u002F\u002F\u002F 消息作者的角色。可以是 system、user、assistant 或 tool message。\n      let role: String\n      \u002F\u002F\u002F 消息的内容。所有消息都需要 content，但对于带有函数调用的 assistant 消息，content 可以为空。\n      let content: ContentType\n      \u002F\u002F\u002F 此消息作者的名称。如果角色是 function，则必须填写 name，且应为内容中响应的函数名称。名称可以包含 a-z、A-Z、0-9 和下划线，最大长度为 64 个字符。\n      let name: String?\n      \u002F\u002F\u002F 模型生成的要调用的函数名称和参数。\n      @available(*, deprecated, message: \"已弃用，由 
`tool_calls` 取代\")\n      let functionCall: FunctionCall?\n      \u002F\u002F\u002F 模型生成的工具调用，例如函数调用。\n      let toolCalls: [ToolCall]?\n      \u002F\u002F\u002F 本消息所回应的工具调用 ID。\n      let toolCallID: String?\n      \n      public enum ContentType: Encodable {\n         \n         case text(String)\n         case contentArray([MessageContent])\n         \n         public func encode(to encoder: Encoder) throws {\n            var container = encoder.singleValueContainer()\n            switch self {\n            case .text(let text):\n               try container.encode(text)\n            case .contentArray(let contentArray):\n               try container.encode(contentArray)\n            }\n         }\n         \n         public enum MessageContent: Encodable, Equatable, Hashable {\n            \n            case text(String)\n            case imageUrl(ImageDetail)\n            \n            public struct ImageDetail: Encodable, Equatable, Hashable {\n               \n               public let url: URL\n               public let detail: String?\n               \n               enum CodingKeys: String, CodingKey {\n                  case url\n                  case detail\n               }\n               \n               public func encode(to encoder: Encoder) throws {\n                  var container = encoder.container(keyedBy: CodingKeys.self)\n                  try container.encode(url, forKey: .url)\n                  try container.encode(detail, forKey: .detail)\n               }\n               \n               public init(url: URL, detail: String? 
= nil) {\n                  self.url = url\n                  self.detail = detail\n               }\n            }\n            \n            enum CodingKeys: String, CodingKey {\n               case type\n               case text\n               case imageUrl = \"image_url\"\n            }\n            \n            public func encode(to encoder: Encoder) throws {\n               var container = encoder.container(keyedBy: CodingKeys.self)\n              ...\n\nenum CodingKeys: String, CodingKey {\n          case includeUsage = \"include_usage\"\n      }\n   }\n   \n   \u002F\u002F\u002F 音频输出参数。当调用模态为 [\"audio\"] 时，此参数为必填。\n   \u002F\u002F\u002F [了解更多。](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Faudio)\n   public struct Audio: Encodable {\n      \u002F\u002F\u002F 指定语音类型。支持的语音包括 alloy、echo、fable、onyx、nova 和 shimmer。\n      public let voice: String\n      \u002F\u002F\u002F 指定输出音频格式。必须是 wav、mp3、flac、opus 或 pcm16 中的一种。\n      public let format: String\n      \n      public init(\n         voice: String,\n         format: String)\n      {\n         self.voice = voice\n         self.format = format\n      }\n   }\n\n   enum CodingKeys: String, CodingKey {\n      case messages\n      case model\n      case store\n      case frequencyPenalty = \"frequency_penalty\"\n      case toolChoice = \"tool_choice\"\n      case functionCall = \"function_call\"\n      case tools\n      case parallelToolCalls = \"parallel_tool_calls\"\n      case functions\n      case logitBias = \"logit_bias\"\n      case logprobs\n      case topLogprobs = \"top_logprobs\"\n      case maxTokens = \"max_tokens\"\n      case maxCompletionTokens = \"max_completion_tokens\"\n      case n\n      case modalities\n      case audio\n      case responseFormat = \"response_format\"\n      case presencePenalty = \"presence_penalty\"\n      case seed\n      case serviceTier = \"service_tier\"\n      case stop\n      case stream\n      case streamOptions = \"stream_options\"\n      case temperature\n      case topP = \"top_p\"\n      case user\n   }\n   \n   public init(\n      messages: [Message],\n      model: Model,\n      store: Bool? = nil,\n      frequencyPenalty: Double? = nil,\n      functionCall: FunctionCall? = nil,\n      toolChoice: ToolChoice? = nil,\n      functions: [ChatFunction]? = nil,\n      tools: [Tool]? = nil,\n      parallelToolCalls: Bool? = nil,\n      logitBias: [Int: Double]? = nil,\n      logProbs: Bool? = nil,\n      topLogprobs: Int? = nil,\n      maxTokens: Int? = nil,\n      n: Int? = nil,\n      modalities: [String]? = nil,\n      audio: Audio? = nil,\n      responseFormat: ResponseFormat? = nil,\n      presencePenalty: Double? = nil,\n      serviceTier: ServiceTier? = nil,\n      seed: Int? = nil,\n      stop: [String]? = nil,\n      temperature: Double? = nil,\n      topProbability: Double? = nil,\n      user: String? = nil)\n   {\n      self.messages = messages\n      self.model = model.value\n      self.store = store\n      self.frequencyPenalty = frequencyPenalty\n      self.functionCall = functionCall\n      self.toolChoice = toolChoice\n      self.functions = functions\n      self.tools = tools\n      self.parallelToolCalls = parallelToolCalls\n      self.logitBias = logitBias\n      self.logprobs = logProbs\n      self.topLogprobs = topLogprobs\n      self.maxTokens = maxTokens\n      self.n = n\n      self.modalities = modalities\n      self.audio = audio\n      self.responseFormat = responseFormat\n      self.presencePenalty = presencePenalty\n      self.serviceTier = serviceTier?.rawValue\n      self.seed = seed\n      self.stop = stop\n      self.temperature = temperature\n      self.topP = topProbability\n      self.user = user\n   }\n}\n```\n\n### 聊天完成对象\n```swift\n\u002F\u002F\u002F 表示根据提供的输入由模型返回的聊天[完成](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fchat\u002Fobject)响应。\npublic struct ChatCompletionObject: Decodable {\n   \n   \u002F\u002F\u002F 聊天完成的唯一标识符。\n   public let id: String\n   \u002F\u002F\u002F 聊天完成选项列表。如果 n 大于 1，则可以有多个选项。\n   public let choices: [ChatChoice]\n   \u002F\u002F\u002F 聊天完成创建时的 Unix 时间戳（以秒为单位）。\n   public let created: Int\n   \u002F\u002F\u002F 用于聊天完成的模型。\n   public let model: String\n   \u002F\u002F\u002F 用于处理请求的服务层级。仅当请求中指定了 service_tier 参数时，才会包含此字段。\n   public let serviceTier: String?\n   \u002F\u002F\u002F 此指纹代表模型运行所依据的后端配置。\n   
\u002F\u002F\u002F 可与 seed 请求参数结合使用，以了解何时进行了可能影响确定性的后端更改。\n   public let systemFingerprint: String?\n   \u002F\u002F\u002F 对象类型，始终为 chat.completion。\n   public let object: String\n   \u002F\u002F\u002F 完成请求的使用统计信息。\n   public let usage: ChatUsage\n   \n   public struct ChatChoice: Decodable {\n      \n      \u002F\u002F\u002F 模型停止生成标记的原因。如果是自然停止点或提供的停止序列，则为 stop；如果达到请求中指定的最大标记数，则为 length；如果由于内容过滤器的标记而省略了内容，则为 content_filter；如果模型调用了工具，则为 tool_calls；如果模型调用了函数，则为 function_call（已弃用）。\n      public let finishReason: IntOrStringValue?\n      \u002F\u002F\u002F 选项在选项列表中的索引。\n      public let index: Int\n      \u002F\u002F\u002F 模型生成的聊天完成消息。\n      public let message: ChatMessage   \n      \u002F\u002F\u002F 该选项的日志概率信息。\n      public let logprobs: LogProb?\n      \n      public struct ChatMessage: Decodable {\n         \n         \u002F\u002F\u002F 消息的内容。\n         public let content: String?\n         \u002F\u002F\u002F 模型生成的工具调用，例如函数调用。\n         public let toolCalls: [ToolCall]?\n         \u002F\u002F\u002F 模型生成的应被调用的函数名称和参数。\n         @available(*, deprecated, message: \"已弃用并由 `tool_calls` 替代\")\n         public let functionCall: FunctionCall?\n         \u002F\u002F\u002F 此消息作者的角色。\n         public let role: String\n         \u002F\u002F\u002F 由 Vision API 提供。\n         public let finishDetails: FinishDetails?\n         \u002F\u002F\u002F 模型生成的拒绝消息。\n         public let refusal: String?\n         \u002F\u002F\u002F 如果请求了音频输出模态，此对象包含来自模型的音频响应数据。[了解更多](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Faudio)。\n         public let audio: Audio?\n         \n         \u002F\u002F\u002F 由 Vision API 提供。\n         public struct FinishDetails: Decodable {\n            let type: String\n         }\n         \n         public struct Audio: Decodable {\n            \u002F\u002F\u002F 此音频响应的唯一标识符。\n            public let id: String\n            \u002F\u002F\u002F 此音频响应将在服务器上不再可用以用于多轮对话的 Unix 时间戳（以秒为单位）。\n            public let expiresAt: Int\n            
\u002F\u002F\u002F 模型生成的 Base64 编码音频字节，格式按请求指定。\n            public let data: String\n            \u002F\u002F\u002F 模型生成的音频转录文本。\n            public let transcript: String\n            \n            enum CodingKeys: String, CodingKey {\n               case id\n               case expiresAt = \"expires_at\"\n               case data\n               case transcript\n            }\n         }\n      }\n      \n      public struct LogProb: Decodable {\n         \u002F\u002F\u002F 包含日志概率信息的消息内容标记列表。\n         let content: [TokenDetail]\n      }\n      \n      public struct TokenDetail: Decodable {\n         \u002F\u002F\u002F 标记。\n         let token: String\n         \u002F\u002F\u002F 该标记的日志概率。\n         let logprob: Double\n         \u002F\u002F\u002F 表示标记 UTF-8 字节表示的整数列表。在字符由多个标记表示且必须组合其字节表示才能生成正确文本表示的情况下很有用。如果标记没有字节表示，则可能为 null。\n         let bytes: [Int]?\n         \u002F\u002F\u002F 在该标记位置最有可能出现的标记及其日志概率列表。在极少数情况下，返回的 top_logprobs 数量可能会少于请求的数量。\n         let topLogprobs: [TopLogProb]\n         \n         enum CodingKeys: String, CodingKey {\n            case token, logprob, bytes\n            case topLogprobs = \"top_logprobs\"\n         }\n         \n         struct TopLogProb: Decodable {\n            \u002F\u002F\u002F 标记。\n            let token: String\n            \u002F\u002F\u002F 该标记的日志概率。\n            let logprob: Double\n            \u002F\u002F\u002F 表示标记 UTF-8 字节表示的整数列表。在字符由多个标记表示且必须组合其字节表示才能生成正确文本表示的情况下很有用。如果标记没有字节表示，则可能为 null。\n            let bytes: [Int]?\n         }\n      }\n   }\n   \n   public struct ChatUsage: Decodable {\n      \n      \u002F\u002F\u002F 生成的完成内容中的标记数量。\n      public let completionTokens: Int\n      \u002F\u002F\u002F 提示中的标记数量。\n      public let promptTokens: Int\n      \u002F\u002F\u002F 请求中使用的总标记数量（提示 + 完成）。\n      public let totalTokens: Int\n   }\n}\n```\n\n使用方法\n```swift\nlet prompt = \"给我讲个笑话\"\nlet parameters = ChatCompletionParameters(messages: [.init(role: .user, content: .text(prompt))], model: .gpt4o)\nlet 
chatCompletionObject = try await service.startChat(parameters: parameters)\n```\n\n响应\n\n### 聊天完成块对象\n```swift\n\u002F\u002F\u002F 表示根据提供的输入，由模型返回的聊天完成响应的[流式](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fchat\u002Fstreaming)块。\npublic struct ChatCompletionChunkObject: Decodable {\n   \n   \u002F\u002F\u002F 聊天完成块的唯一标识符。\n   public let id: String\n   \u002F\u002F\u002F 聊天完成选项列表。如果 n 大于 1，则可以有多个选项。\n   public let choices: [ChatChoice]\n   \u002F\u002F\u002F 聊天完成块创建时的 Unix 时间戳（以秒为单位）。\n   public let created: Int\n   \u002F\u002F\u002F 用于生成完成内容的模型。\n   public let model: String\n   \u002F\u002F\u002F 用于处理请求的服务层级。仅当请求中指定了 service_tier 参数时，才会包含此字段。\n   public let serviceTier: String?\n   \u002F\u002F\u002F 此指纹代表模型运行所依据的后端配置。\n   \u002F\u002F\u002F 可与 seed 请求参数结合使用，以了解何时发生了可能影响确定性的后端更改。\n   public let systemFingerprint: String?\n   \u002F\u002F\u002F 对象类型，始终为 chat.completion.chunk。\n   public let object: String\n   \n   public struct ChatChoice: Decodable {\n      \n      \u002F\u002F\u002F 由流式模型响应生成的聊天完成增量。\n      public let delta: Delta\n      \u002F\u002F\u002F 模型停止生成标记的原因。如果是模型到达自然停止点或提供的停止序列，则为 stop；如果达到请求中指定的最大标记数，则为 length；如果由于内容过滤器的标记而省略了内容，则为 content_filter；如果模型调用了工具，则为 tool_calls；如果模型调用了函数，则为 function_call（已弃用）。\n      public let finishReason: IntOrStringValue?\n      \u002F\u002F\u002F 选项在选项列表中的索引。\n      public let index: Int\n      \u002F\u002F\u002F 由 Vision API 提供。\n      public let finishDetails: FinishDetails?\n      \n      public struct Delta: Decodable {\n         \n         \u002F\u002F\u002F 块消息的内容。\n         public let content: String?\n         \u002F\u002F\u002F 模型生成的工具调用，例如函数调用。\n         public let toolCalls: [ToolCall]?\n         \u002F\u002F\u002F 模型生成的应被调用的函数名称和参数。\n         @available(*, deprecated, message: \"已弃用并由 `tool_calls` 替代\")\n         public let functionCall: FunctionCall?\n         \u002F\u002F\u002F 此消息作者的角色。\n         public let role: String?\n      }\n      \n      public struct LogProb: Decodable {\n         
\u002F\u002F\u002F 包含对数概率信息的消息内容标记列表。\n         let content: [TokenDetail]\n      }\n      \n      public struct TokenDetail: Decodable {\n         \u002F\u002F\u002F 标记。\n         let token: String\n         \u002F\u002F\u002F 该标记的对数概率。\n         let logprob: Double\n         \u002F\u002F\u002F 表示标记 UTF-8 字节表示的整数列表。在字符由多个标记表示且必须将它们的字节表示组合起来才能生成正确文本表示的情况下很有用。如果标记没有字节表示，则可能为 null。\n         let bytes: [Int]?\n         \u002F\u002F\u002F 在该标记位置上最有可能的标记及其对数概率列表。在极少数情况下，返回的 top_logprobs 数量可能会少于请求的数量。\n         let topLogprobs: [TopLogProb]\n         \n         enum CodingKeys: String, CodingKey {\n            case token, logprob, bytes\n            case topLogprobs = \"top_logprobs\"\n         }\n         \n         struct TopLogProb: Decodable {\n            \u002F\u002F\u002F 标记。\n            let token: String\n            \u002F\u002F\u002F 该标记的对数概率。\n            let logprob: Double\n            \u002F\u002F\u002F 表示标记 UTF-8 字节表示的整数列表。在字符由多个标记表示且必须将它们的字节表示组合起来才能生成正确文本表示的情况下很有用。如果标记没有字节表示，则可能为 null。\n            let bytes: [Int]?\n         }\n      }\n      \n      \u002F\u002F\u002F 由 Vision API 提供。\n      public struct FinishDetails: Decodable {\n         let type: String\n      }\n   }\n}\n```\n使用示例\n```swift\nlet prompt = \"给我讲个笑话\"\nlet parameters = ChatCompletionParameters(messages: [.init(role: .user, content: .text(prompt))], model: .gpt4o)\nlet chatCompletionObject = try await service.startStreamedChat(parameters: parameters)\n```\n\n### 函数调用\n\n聊天完成功能还支持[函数调用](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Ffunction-calling)和[并行函数调用](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Ffunction-calling\u002Fparallel-function-calling)。`functions` 已被弃用，取而代之的是 `tools`，更多详情请参阅 [OpenAI 文档](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fchat\u002Fcreate)。\n\n```swift\npublic struct ToolCall: Codable {\n\n   public let index: Int\n   \u002F\u002F\u002F 工具调用的 ID。\n   public let id: String?\n   \u002F\u002F\u002F 
工具的类型。目前仅支持 `function`。\n   public let type: String?\n   \u002F\u002F\u002F 模型调用的函数。\n   public let function: FunctionCall\n\n   public init(\n      index: Int,\n      id: String,\n      type: String = \"function\",\n      function: FunctionCall)\n   {\n      self.index = index\n      self.id = id\n      self.type = type\n      self.function = function\n   }\n}\n\npublic struct FunctionCall: Codable {\n\n   \u002F\u002F\u002F 用于调用函数的参数，由模型以 JSON 格式生成。请注意，模型并不总是生成有效的 JSON，有时可能会“幻觉”出未在您的函数模式中定义的参数。请在调用函数之前，在代码中验证这些参数。\n   let arguments: String\n   \u002F\u002F\u002F 要调用的函数名称。\n   let name: String\n\n   public init(\n      arguments: String,\n      name: String)\n   {\n      self.arguments = arguments\n      self.name = name\n   }\n}\n```\n\n使用方法\n```swift\n\u002F\u002F\u002F 定义一个 `ChatCompletionParameters.Tool`\nvar tool: ChatCompletionParameters.Tool {\n   .init(\n      type: \"function\", \u002F\u002F 工具的类型。目前仅支持 \"function\"。\n      function: .init(\n         name: \"create_image\",\n         description: \"如果请求要求生成一张图片，则调用此函数\",\n         parameters: .init(\n            type: .object,\n            properties: [\n               \"prompt\": .init(type: .string, description: \"传入的具体提示词\"),\n               \"count\": .init(type: .integer, description: \"请求的图片数量\")\n            ],\n            required: [\"prompt\", \"count\"])))\n}\n\nlet prompt = \"给我看一张独角兽吃冰淇淋的图片\"\nlet content: ChatCompletionParameters.Message.ContentType = .text(prompt)\nlet parameters = ChatCompletionParameters(messages: [.init(role: .user, content: content)], model: .gpt41106Preview, tools: [tool])\nlet chatCompletionObject = try await service.startStreamedChat(parameters: parameters)\n```\n有关如何在 iOS 中上传 Base64 编码的图片的更多详细信息，请查看本包 Examples 部分中的 [ChatFunctionsCallDemo](https:\u002F\u002Fgithub.com\u002Fjamesrochabrun\u002FSwiftOpenAI\u002Ftree\u002Fmain\u002FExamples\u002FSwiftOpenAIExample\u002FSwiftOpenAIExample\u002FChatFunctionsCall) 示例。\n\n### 结构化输出\n\n#### 文档：\n\n- 
[结构化输出指南](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Fstructured-outputs\u002Fstructured-outputs)\n- [示例](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Fstructured-outputs\u002Fexamples)\n- [使用方法](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Fstructured-outputs\u002Fhow-to-use)\n- [支持的模式](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Fstructured-outputs\u002Fsupported-schemas)\n\n需要了解的要点：\n\n- [所有字段必须是必填的](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Fstructured-outputs\u002Fall-fields-must-be-required)：要使用结构化输出，所有字段或函数参数都必须指定为必填。\n- 尽管所有字段必须是必填的（并且模型会为每个参数返回值），但可以通过使用包含空值的联合类型来模拟可选参数。\n- [对象在嵌套深度和大小上有限制](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Fstructured-outputs\u002Fobjects-have-limitations-on-nesting-depth-and-size)：一个模式最多可以有 100 个对象属性，嵌套层次最多为 5 层。\n- [additionalProperties](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Fstructured-outputs\u002Fadditionalproperties-false-must-always-be-set-in-objects)：始终需要将对象的 additionalProperties 设置为 false。additionalProperties 控制是否允许对象包含未在 JSON Schema 中定义的额外键\u002F值。\n  结构化输出只支持生成指定的键\u002F值，因此我们要求开发者将 additionalProperties 设置为 false 才能启用结构化输出。\n- [键的顺序](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Fstructured-outputs\u002Fkey-ordering)：使用结构化输出时，输出将按照模式中键的顺序产生。\n- [支持递归模式](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Fstructured-outputs\u002Frecursive-schemas-are-supported)\n\n#### 如何在 SwiftOpenAI 中使用结构化输出\n\n1. 
函数调用：通过工具实现的结构化输出可以通过在函数定义中设置 strict: true 来启用。此功能适用于所有支持工具的模型，包括所有 gpt-4-0613 和 gpt-3.5-turbo-0613 及更高版本的模型。当启用结构化输出时，模型的输出将与提供的工具定义完全一致。\n\n使用以下模式：\n\n```json\n{\n  \"schema\": {\n    \"type\": \"object\",\n    \"properties\": {\n      \"steps\": {\n        \"type\": \"array\",\n        \"items\": {\n          \"type\": \"object\",\n          \"properties\": {\n            \"explanation\": {\n              \"type\": \"string\"\n            },\n            \"output\": {\n              \"type\": \"string\"\n            }\n          },\n          \"required\": [\"explanation\", \"output\"],\n          \"additionalProperties\": false\n        }\n      },\n      \"final_answer\": {\n        \"type\": \"string\"\n      }\n    },\n    \"required\": [\"steps\", \"final_answer\"],\n    \"additionalProperties\": false\n  }\n}\n```\n\n您可以像这样使用便捷的 `JSONSchema` 对象：\n\n```swift\n\u002F\u002F 1: 定义 Step 模式的对象\n\nlet stepSchema = JSONSchema(\n   type: .object,\n   properties: [\n      \"explanation\": JSONSchema(type: .string),\n      \"output\": JSONSchema(\n         type: .string)\n   ],\n   required: [\"explanation\", \"output\"],\n   additionalProperties: false\n)\n\n\u002F\u002F 2. 定义 steps 数组模式。\n\nlet stepsArraySchema = JSONSchema(type: .array, items: stepSchema)\n\n\u002F\u002F 3. 定义 final Answer 模式。\n\nlet finalAnswerSchema = JSONSchema(type: .string)\n\n\u002F\u002F 4. 
定义数学回答 JSON 模式。\n\nlet mathResponseSchema = JSONSchema(\n      type: .object,\n      properties: [\n         \"steps\": stepsArraySchema,\n         \"final_answer\": finalAnswerSchema\n      ],\n      required: [\"steps\", \"final_answer\"],\n      additionalProperties: false\n)\n\nlet tool = ChatCompletionParameters.Tool(\n            function: .init(\n               name: \"math_response\",\n               strict: true,\n               parameters: mathResponseSchema))\n\nlet prompt = \"解方程 8x + 31 = 2\"\nlet systemMessage = ChatCompletionParameters.Message(role: .system, content: .text(\"你是一位数学辅导老师\"))\nlet userMessage = ChatCompletionParameters.Message(role: .user, content: .text(prompt))\nlet parameters = ChatCompletionParameters(\n   messages: [systemMessage, userMessage],\n   model: .gpt4o20240806,\n   tools: [tool])\n\nlet chat = try await service.startChat(parameters: parameters)\n```\n\n2. `response_format` 参数的新选项：开发者现在可以通过 `json_schema` 提供 JSON Schema，这是 `response_format` 参数的一个新选项。当模型不调用工具，而是以结构化方式响应用户时，此功能非常有用。该特性适用于我们最新的 GPT-4o 模型：今天发布的 `gpt-4o-2024-08-06` 和 `gpt-4o-mini-2024-07-18`。当为 `response_format` 提供 `strict: true` 时，模型的输出将与提供的模式匹配。\n\n使用之前的模式，您可以通过便捷的 `JSONSchemaResponseFormat` 对象将其实现为 JSON 模式：\n\n```swift\n\u002F\u002F 1: 定义 Step 模式对象\n\nlet stepSchema = JSONSchema(\n   type: .object,\n   properties: [\n      \"explanation\": JSONSchema(type: .string),\n      \"output\": JSONSchema(\n         type: .string)\n   ],\n   required: [\"explanation\", \"output\"],\n   additionalProperties: false\n)\n\n\u002F\u002F 2. 定义 steps 数组模式。\n\nlet stepsArraySchema = JSONSchema(type: .array, items: stepSchema)\n\n\u002F\u002F 3. 定义最终答案模式。\n\nlet finalAnswerSchema = JSONSchema(type: .string)\n\n\u002F\u002F 4. 
定义响应格式 JSON 模式。\n\nlet responseFormatSchema = JSONSchemaResponseFormat(\n   name: \"math_response\",\n   strict: true,\n   schema: JSONSchema(\n      type: .object,\n      properties: [\n         \"steps\": stepsArraySchema,\n         \"final_answer\": finalAnswerSchema\n      ],\n      required: [\"steps\", \"final_answer\"],\n      additionalProperties: false\n   )\n)\n\nlet prompt = \"解方程 8x + 31 = 2\"\nlet systemMessage = ChatCompletionParameters.Message(role: .system, content: .text(\"你是一位数学辅导老师\"))\nlet userMessage = ChatCompletionParameters.Message(role: .user, content: .text(prompt))\nlet parameters = ChatCompletionParameters(\n   messages: [systemMessage, userMessage],\n   model: .gpt4o20240806,\n   responseFormat: .jsonSchema(responseFormatSchema))\n```\n\nSwiftOpenAI 结构化输出支持：\n\n- [x] 工具结构化输出。\n- [x] 响应格式结构化输出。\n- [x] 递归模式。\n- [x] 可选值模式。\n- [ ] Pydantic 模型。\n\n我们目前不支持 Pydantic 模型，用户需要手动使用 `JSONSchema` 或 `JSONSchemaResponseFormat` 对象创建模式。\n\n实用小贴士 🔥 使用 [iosAICodeAssistant GPT](https:\u002F\u002Fchatgpt.com\u002Fg\u002Fg-qj7RuW7PY-iosai-code-assistant) 来构建 SwiftOpenAI 模式。只需粘贴您的 JSON 模式，并让 GPT 为您创建用于工具和响应格式的 SwiftOpenAI 模式。\n\n更多详情请访问示例项目中的[工具](https:\u002F\u002Fgithub.com\u002Fjamesrochabrun\u002FSwiftOpenAI\u002Ftree\u002Fmain\u002FExamples\u002FSwiftOpenAIExample\u002FSwiftOpenAIExample\u002FChatStructureOutputTool)和[响应格式](https:\u002F\u002Fgithub.com\u002Fjamesrochabrun\u002FSwiftOpenAI\u002Ftree\u002Fmain\u002FExamples\u002FSwiftOpenAIExample\u002FSwiftOpenAIExample\u002FChatStructuredOutputs)部分。\n\n\n\n### 视觉\n\n[Vision](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Fvision) API 现已可用；开发者必须通过聊天完成 API 访问它，特别是使用 gpt-4-vision-preview 模型或 gpt-4o 模型。使用其他任何模型都无法提供图像描述。\n\n用法\n```swift\nlet imageURL = \"https:\u002F\u002Fupload.wikimedia.org\u002Fwikipedia\u002Fcommons\u002Fthumb\u002Fd\u002Fdd\u002FGfp-wisconsin-madison-the-nature-boardwalk.jpg\u002F2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg\"\nlet prompt = \"这是什么？\"\nlet 
messageContent: [ChatCompletionParameters.Message.ContentType.MessageContent] = [.text(prompt), .imageUrl(.init(url: URL(string: imageURL)!))] \u002F\u002F 用户可以向服务添加任意数量的 `.imageUrl` 实例。\nlet parameters = ChatCompletionParameters(messages: [.init(role: .user, content: .contentArray(messageContent))], model: .gpt4o)\nlet chatCompletionObject = try await service.startStreamedChat(parameters: parameters)\n```\n\n![Simulator Screen Recording - iPhone 15 - 2023-11-09 at 17 12 06](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjamesrochabrun_SwiftOpenAI_readme_d0e5b5b85133.png)\n\n有关如何在 iOS 中上传 Base64 编码图像的更多详细信息，请查看本包 Examples 部分中的 [ChatVision](https:\u002F\u002Fgithub.com\u002Fjamesrochabrun\u002FSwiftOpenAI\u002Ftree\u002Fmain\u002FExamples\u002FSwiftOpenAIExample\u002FSwiftOpenAIExample\u002FVision) 示例。\n\n### 响应\n\nOpenAI 最先进的生成模型响应的接口。支持文本和图像输入，以及文本输出。利用先前响应的输出作为输入，与模型建立有状态的交互。通过内置的文件搜索、网络搜索、计算机使用等工具扩展模型的能力。允许模型通过函数调用访问外部系统和数据。\n\n- 全面的流式支持，通过 `responseCreateStream` 方法\n- 全面的 `ResponseStreamEvent` 枚举，涵盖 40 多种事件类型\n- 增强的 `InputMessage` 包含用于跟踪响应 ID 的 `id` 字段\n- 改进的对话状态管理，通过 `previousResponseId`\n- 实时文本流、函数调用和工具使用事件\n- 支持推理摘要、网络搜索、文件搜索和图像生成事件\n- **新增**：支持 GPT-5 模型（gpt-5、gpt-5-mini、gpt-5-nano）\n- **新增**：verbosity 参数用于控制响应的详细程度\n\n#### ModelResponseParameter\n\n`ModelResponseParameter` 提供了一个全面的接口来创建模型响应：\n\n```swift\nlet parameters = ModelResponseParameter(\n    input: .text(\"生命、宇宙以及一切的终极答案是什么？\"),\n    model: .gpt5,  \u002F\u002F 支持 GPT-5、GPT-5-mini、GPT-5-nano\n    text: TextConfiguration(\n        format: .text,\n        verbosity: \"low\"  \u002F\u002F 新增：控制响应的详细程度（low、medium、high）\n    ),\n    temperature: 0.7\n)\n\nlet response = try await service.responseCreate(parameters)\n```\n\n#### 可用的 GPT-5 模型\n\n```swift\npublic enum Model {\n    case gpt5        \u002F\u002F 复杂推理、广泛的世界知识，以及代码密集型或多步骤代理任务\n    case gpt5Mini    \u002F\u002F 成本优化的推理和聊天；平衡速度、成本和能力\n    case gpt5Nano    \u002F\u002F 高吞吐量任务，尤其是简单的指令遵循或分类任务\n    \u002F\u002F ... 
其他模型\n}\n```\n\n#### 带有 verbosity 的 TextConfiguration\n\n```swift\n\u002F\u002F 创建带有 verbosity 控制的文本配置\nlet textConfig = TextConfiguration(\n    format: .text,       \u002F\u002F 可以是 .text、.jsonObject 或 .jsonSchema\n    verbosity: \"medium\"  \u002F\u002F 控制响应的详细程度\n)\n```\n\n相关指南：\n\n- [快速入门](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fquickstart?api-mode=responses)\n- [文本输入和输出](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Ftext?api-mode=responses)\n- [图像输入](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Fimages?api-mode=responses)\n- [结构化输出](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Fstructured-outputs?api-mode=responses)\n- [函数调用](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Ffunction-calling?api-mode=responses)\n- [对话状态](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Fconversation-state?api-mode=responses)\n- [使用工具扩展模型](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Ftools?api-mode=responses)\n\n参数\n```swift\n\u002F\u002F\u002F [创建模型响应。](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fresponses\u002Fcreate)\npublic struct ModelResponseParameter: Codable {\n\n   \u002F\u002F\u002F 模型的文本、图像或文件输入，用于生成响应。\n   \u002F\u002F\u002F 作为用户角色的文本输入。\n   \u002F\u002F\u002F 包含不同类型内容的一个或多个输入项列表。\n   public var input: InputType\n\n   \u002F\u002F\u002F 用于生成响应的模型ID，例如gpt-4o或o1。OpenAI提供了多种具有不同能力、性能特征和价格定位的模型。\n   \u002F\u002F\u002F 请参阅模型指南以浏览和比较可用模型。\n   public var model: String\n\n   \u002F\u002F\u002F 指定要包含在模型响应中的额外输出数据。当前支持的值包括：\n   \u002F\u002F\u002F file_search_call.results：包含文件搜索工具调用的搜索结果。\n   \u002F\u002F\u002F message.input_image.image_url：包含来自输入消息的图像URL。\n   \u002F\u002F\u002F computer_call_output.output.image_url：包含来自计算机调用输出的图像URL。\n   public var include: [String]?\n\n   \u002F\u002F\u002F 将系统（或开发者）消息作为模型上下文中的第一条消息插入。\n   \u002F\u002F\u002F 当与previous_response_id一起使用时，先前响应中的指令将不会传递到下一次响应中。这使得在新响应中轻松替换系统（或开发者）消息成为可能。\n   public 
var instructions: String?\n\n   \u002F\u002F\u002F 响应中可生成的最大标记数上限，包括可见输出标记和推理标记。\n   public var maxOutputTokens: Int?\n\n   \u002F\u002F\u002F 可附加到对象的一组16个键值对。这有助于以结构化格式存储有关该对象的附加信息，并通过API或仪表板查询对象。\n   \u002F\u002F\u002F 键是最大长度为64个字符的字符串。值是最大长度为512个字符的字符串。\n   public var metadata: [String: String]?\n\n   \u002F\u002F\u002F 是否允许模型并行运行工具调用。\n   \u002F\u002F\u002F 默认为true\n   public var parallelToolCalls: Bool?\n\n   \u002F\u002F\u002F 模型之前响应的唯一ID。使用此ID可以创建多轮对话。\n   \u002F\u002F\u002F 了解有关对话状态的更多信息。\n   public var previousResponseId: String?\n\n   \u002F\u002F\u002F 仅限o系列模型\n   \u002F\u002F\u002F 推理模型的配置选项。\n   public var reasoning: Reasoning?\n\n   \u002F\u002F\u002F 是否将生成的模型响应存储起来，以便稍后通过API检索。\n   \u002F\u002F\u002F 默认为true\n   public var store: Bool?\n\n   \u002F\u002F\u002F 如果设置为true，模型响应数据将在生成过程中通过服务器发送事件流式传输到客户端。\n   public var stream: Bool?\n\n   \u002F\u002F\u002F 使用的采样温度，范围在0到2之间。\n   \u002F\u002F\u002F 较高的值（如0.8）会使输出更具随机性，而较低的值（如0.2）则会使其更加专注和确定性。\n   \u002F\u002F\u002F 我们通常建议调整温度或top_p，但不要同时调整两者。\n   \u002F\u002F\u002F 默认为1\n   public var temperature: Double?\n\n   \u002F\u002F\u002F 模型文本响应的配置选项。可以是纯文本或结构化的JSON数据。\n   public var text: TextConfiguration?\n\n   \u002F\u002F\u002F 模型在生成响应时应如何选择使用哪个（或哪些）工具。\n   \u002F\u002F\u002F 请参阅tools参数，了解如何指定模型可以调用的工具。\n   public var toolChoice: ToolChoiceMode?\n\n   \u002F\u002F\u002F 模型在生成响应时可能调用的工具数组。可以通过设置tool_choice参数来指定要使用的工具。\n   public var tools: [Tool]?\n\n   \u002F\u002F\u002F 一种替代温度采样的方法，称为核采样，在这种方法中，模型会考虑具有top_p概率质量的标记结果。\n   \u002F\u002F\u002F 因此，0.1表示只考虑构成前10%概率质量的标记。\n   \u002F\u002F\u002F 我们通常建议调整top_p或温度，但不要同时调整两者。\n   \u002F\u002F\u002F 默认为1\n   public var topP: Double?\n\n   \u002F\u002F\u002F 模型响应所采用的截断策略。\n   \u002F\u002F\u002F 默认为禁用\n   public var truncation: String?\n\n   \u002F\u002F\u002F 代表您最终用户的唯一标识符，有助于OpenAI监控和检测滥用行为。\n   public var user: String?\n}\n```\n\n[响应对象](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fresponses\u002Fobject)\n\n```swift\n\u002F\u002F\u002F 
在检索模型响应时返回的响应对象\npublic struct ResponseModel: Decodable {\n\n   \u002F\u002F\u002F 创建此响应的Unix时间戳（以秒为单位）。\n   public let createdAt: Int\n\n   \u002F\u002F\u002F 当模型无法生成响应时返回的错误对象。\n   public let error: ErrorObject?\n\n   \u002F\u002F\u002F 此响应的唯一标识符。\n   public let id: String\n\n   \u002F\u002F\u002F 关于响应不完整的原因的详细信息。\n   public let incompleteDetails: IncompleteDetails?\n\n   \u002F\u002F\u002F 将系统（或开发者）消息作为模型上下文中的第一条消息插入。\n   public let instructions: String?\n\n   \u002F\u002F\u002F 响应中可生成的最大标记数上限，包括可见输出标记和推理标记。\n   public let maxOutputTokens: Int?\n\n   \u002F\u002F\u002F 可附加到对象的一组16个键值对。\n   public let metadata: [String: String]\n\n   \u002F\u002F\u002F 用于生成响应的模型ID，例如gpt-4o或o1。\n   public let model: String\n\n   \u002F\u002F\u002F 此资源的对象类型——始终设置为response。\n   public let object: String\n\n   \u002F\u002F\u002F 模型生成的一组内容项。\n   public let output: [OutputItem]\n\n   \u002F\u002F\u002F 是否允许模型并行运行工具调用。\n   public let parallelToolCalls: Bool\n\n   \u002F\u002F\u002F 模型之前响应的唯一ID。使用此ID可以创建多轮对话。\n   public let previousResponseId: String?\n\n   \u002F\u002F\u002F 推理模型的配置选项。\n   public let reasoning: Reasoning?\n\n   \u002F\u002F\u002F 响应生成的状态。可能是已完成、失败、进行中或不完整。\n   public let status: String\n\n   \u002F\u002F\u002F 使用的采样温度，范围在0到2之间。\n   public let temperature: Double?\n\n   \u002F\u002F\u002F 模型文本响应的配置选项。\n   public let text: TextConfiguration\n\n\u002F\u002F\u002F 模型在生成响应时应如何选择使用哪种工具（或哪些工具）。\n   public let toolChoice: ToolChoiceMode\n\n   \u002F\u002F\u002F 模型在生成响应时可能调用的一组工具。\n   public let tools: [Tool]\n\n   \u002F\u002F\u002F 一种替代温度采样的方法，称为核采样。\n   public let topP: Double?\n\n   \u002F\u002F\u002F 模型响应所采用的截断策略。\n   public let truncation: String?\n\n   \u002F\u002F\u002F 表示令牌使用情况的详细信息。\n   public let usage: Usage?\n\n   \u002F\u002F\u002F 代表您的最终用户的唯一标识符。\n   public let user: String?\n   \n   \u002F\u002F\u002F 一个便捷属性，用于汇总输出数组中 output_text 项的所有文本输出。\n   \u002F\u002F\u002F 类似于 Python 和 JavaScript SDK 中的 outputText 属性。\n   public var outputText: String? 
\n}\n```\n\n输入类型\n```swift\n\u002F\u002F InputType 表示 Response API 的输入\npublic enum InputType: Codable {\n    case string(String)  \u002F\u002F 简单的文本输入\n    case array([InputItem])  \u002F\u002F 复杂对话的输入项数组\n}\n\n\u002F\u002F InputItem 表示不同类型的输入\npublic enum InputItem: Codable {\n    case message(InputMessage)  \u002F\u002F 用户、助手、系统消息\n    case functionToolCall(FunctionToolCall)  \u002F\u002F 函数调用\n    case functionToolCallOutput(FunctionToolCallOutput)  \u002F\u002F 函数输出\n    \u002F\u002F ... 其他输入类型\n}\n\n\u002F\u002F InputMessage 结构支持响应 ID\npublic struct InputMessage: Codable {\n    public let role: String  \u002F\u002F \"user\"、\"assistant\"、\"system\"\n    public let content: MessageContent\n    public let type: String?  \u002F\u002F 始终为 \"message\"\n    public let status: String?  \u002F\u002F 助手消息为 \"completed\"\n    public let id: String?  \u002F\u002F 助手消息的响应 ID\n}\n\n\u002F\u002F MessageContent 可以是文本或内容项数组\npublic enum MessageContent: Codable {\n    case text(String)\n    case array([ContentItem])  \u002F\u002F 用于多模态内容\n}\n```\n\n使用示例\n\n简单文本输入\n```swift\nlet prompt = \"法国的首都是哪里？\"\nlet parameters = ModelResponseParameter(input: .string(prompt), model: .gpt4o)\nlet response = try await service.responseCreate(parameters)\n```\n\n带推理的文本输入\n```swift\nlet prompt = \"一只土拨鼠能刨多少木头呢？\"\nlet parameters = ModelResponseParameter(\n    input: .string(prompt),\n    model: .o3Mini,\n    reasoning: Reasoning(effort: \"high\")\n)\nlet response = try await service.responseCreate(parameters)\n```\n\n图像输入\n```swift\nlet textPrompt = \"这张图片里有什么？\"\nlet imageUrl = \"https:\u002F\u002Fexample.com\u002Fpath\u002Fto\u002Fimage.jpg\"\nlet imageContent = ContentItem.imageUrl(ImageUrlContent(imageUrl: imageUrl))\nlet textContent = ContentItem.text(TextContent(text: textPrompt))\nlet message = InputItem(role: \"user\", content: [textContent, imageContent])\nlet parameters = ModelResponseParameter(input: .array([message]), model: .gpt4o)\nlet response = try await 
service.responseCreate(parameters)\n```\n\n使用工具（网络搜索）\n```swift\nlet prompt = \"今天有哪些积极的新闻报道？\"\nlet parameters = ModelResponseParameter(\n    input: .string(prompt),\n    model: .gpt4o,\n    tools: [Tool(type: \"web_search_preview\", function: nil)]\n)\nlet response = try await service.responseCreate(parameters)\n```\n\n使用工具（文件搜索）\n```swift\nlet prompt = \"这份文档的主要要点是什么？\"\nlet parameters = ModelResponseParameter(\n    input: .string(prompt),\n    model: .gpt4o,\n    tools: [\n        Tool(\n            type: \"file_search\",\n            function: ChatCompletionParameters.ChatFunction(\n                name: \"file_search\",\n                strict: false,\n                description: \"搜索文件\",\n                parameters: JSONSchema(\n                    type: .object,\n                    properties: [\n                        \"vector_store_ids\": JSONSchema(\n                            type: .array,\n                            items: JSONSchema(type: .string)\n                        ),\n                        \"max_num_results\": JSONSchema(type: .integer)\n                    ],\n                    required: [\"vector_store_ids\"],\n                    additionalProperties: false\n                )\n            )\n        )\n    ]\n)\nlet response = try await service.responseCreate(parameters)\n```\n\n函数调用\n```swift\nlet prompt = \"波士顿今天的天气怎么样？\"\nlet parameters = ModelResponseParameter(\n    input: .string(prompt),\n    model: .gpt4o,\n    tools: [\n        Tool(\n            type: \"function\",\n            function: ChatCompletionParameters.ChatFunction(\n                name: \"get_current_weather\",\n                strict: false,\n                description: \"获取指定地点的当前天气\",\n                parameters: JSONSchema(\n                    type: .object,\n                    properties: [\n                        \"location\": JSONSchema(\n                            type: .string,\n                            description: \"城市和州，例如旧金山，加利福尼亚州\"\n     
                   ),\n                        \"unit\": JSONSchema(\n                            type: .string,\n                            enum: [\"celsius\", \"fahrenheit\"]\n                        )\n                    ],\n                    required: [\"location\", \"unit\"],\n                    additionalProperties: false\n                )\n            )\n        )\n    ],\n    toolChoice: .auto\n)\nlet response = try await service.responseCreate(parameters)\n```\n\n获取响应\n```swift\nlet responseId = \"resp_abc123\"\nlet response = try await service.responseModel(id: responseId)\n```\n\n#### 流式响应\n\nResponse API 支持使用服务器发送事件（SSE）进行流式响应。这使您能够在部分响应生成时立即接收它们，从而实现实时 UI 更新并提升用户体验。\n\n流式事件\n```swift\n\u002F\u002F ResponseStreamEvent 枚举表示所有可能的流式事件\npublic enum ResponseStreamEvent: Decodable {\n  case responseCreated(ResponseCreatedEvent)\n  case responseInProgress(ResponseInProgressEvent)\n  case responseCompleted(ResponseCompletedEvent)\n  case responseFailed(ResponseFailedEvent)\n  case outputItemAdded(OutputItemAddedEvent)\n  case outputTextDelta(OutputTextDeltaEvent)\n  case outputTextDone(OutputTextDoneEvent)\n  case functionCallArgumentsDelta(FunctionCallArgumentsDeltaEvent)\n  case reasoningSummaryTextDelta(ReasoningSummaryTextDeltaEvent)\n  case error(ErrorEvent)\n  \u002F\u002F ... 
还有许多其他类型的事件\n}\n```\n\n基本流式示例\n```swift\n\u002F\u002F 通过设置 stream: true 启用流式传输\nlet parameters = ModelResponseParameter(\n    input: .string(\"给我讲个故事吧\"),\n    model: .gpt4o,\n    stream: true\n)\n\n\u002F\u002F 创建流\nlet stream = try await service.responseCreateStream(parameters)\n\n\u002F\u002F 按事件到达顺序处理\nfor try await event in stream {\n    switch event {\n    case .outputTextDelta(let delta):\n        \u002F\u002F 将文本片段追加到您的 UI\n        print(delta.delta, terminator: \"\")\n        \n    case .responseCompleted(let completed):\n        \u002F\u002F 响应已完成\n        print(\"\\n响应 ID: \\(completed.response.id)\")\n        \n    case .error(let error):\n        \u002F\u002F 处理错误\n        print(\"错误: \\(error.message)\")\n        \n    default:\n        \u002F\u002F 根据需要处理其他事件\n        break\n    }\n}\n```\n\n使用对话状态进行流式处理\n```swift\n\u002F\u002F 使用 previousResponseId 保持对话连续性\nvar previousResponseId: String? = nil\nvar messages: [(role: String, content: String)] = []\n\n\u002F\u002F 第一条消息\nlet firstParams = ModelResponseParameter(\n    input: .string(\"你好！\"),\n    model: .gpt4o,\n    stream: true\n)\n\nlet firstStream = try await service.responseCreateStream(firstParams)\nvar firstResponse = \"\"\n\nfor try await event in firstStream {\n    switch event {\n    case .outputTextDelta(let delta):\n        firstResponse += delta.delta\n        \n    case .responseCompleted(let completed):\n        previousResponseId = completed.response.id\n        messages.append((role: \"user\", content: \"你好！\"))\n        messages.append((role: \"assistant\", content: firstResponse))\n        \n    default:\n        break\n    }\n}\n\n\u002F\u002F 带有对话上下文的后续消息\nvar inputArray: [InputItem] = []\n\n\u002F\u002F 添加对话历史\nfor message in messages {\n    inputArray.append(.message(InputMessage(\n        role: message.role,\n        content: .text(message.content)\n    )))\n}\n\n\u002F\u002F 添加新的用户消息\ninputArray.append(.message(InputMessage(\n    role: \"user\",\n    content: 
.text(\"你好吗？\")\n)))\n\nlet followUpParams = ModelResponseParameter(\n    input: .array(inputArray),\n    model: .gpt4o,\n    previousResponseId: previousResponseId,\n    stream: true\n)\n\nlet followUpStream = try await service.responseCreateStream(followUpParams)\n\u002F\u002F 处理后续流…\n```\n\n使用工具和函数调用的流式处理\n```swift\nlet parameters = ModelResponseParameter(\n    input: .string(\"旧金山的天气如何？\"),\n    model: .gpt4o,\n    tools: [\n        Tool(\n            type: \"function\",\n            function: ChatCompletionParameters.ChatFunction(\n                name: \"get_weather\",\n                description: \"获取当前天气\",\n                parameters: JSONSchema(\n                    type: .object,\n                    properties: [\n                        \"location\": JSONSchema(type: .string)\n                    ],\n                    required: [\"location\"]\n                )\n            )\n        )\n    ],\n    stream: true\n)\n\nlet stream = try await service.responseCreateStream(parameters)\nvar functionCallArguments = \"\"\n\nfor try await event in stream {\n    switch event {\n    case .functionCallArgumentsDelta(let delta):\n        \u002F\u002F 累积函数调用参数\n        functionCallArguments += delta.delta\n        \n    case .functionCallArgumentsDone(let done):\n        \u002F\u002F 函数调用已完成\n        print(\"函数: \\(done.name)\")\n        print(\"参数: \\(functionCallArguments)\")\n        \n    case .outputTextDelta(let delta):\n        \u002F\u002F 普通文本输出\n        print(delta.delta, terminator: \"\")\n        \n    default:\n        break\n    }\n}\n```\n\n取消流式处理\n```swift\n\u002F\u002F 可以使用 Swift 的任务取消机制来取消流\nlet streamTask = Task {\n    let stream = try await service.responseCreateStream(parameters)\n    \n    for try await event in stream {\n        \u002F\u002F 检查任务是否被取消\n        if Task.isCancelled {\n            break\n        }\n        \n        \u002F\u002F 处理事件…\n    }\n}\n\n\u002F\u002F 
在需要时取消流\nstreamTask.cancel()\n```\n\n完整的流式处理实现示例\n```swift\n@MainActor\n@Observable\nclass ResponseStreamProvider {\n    var messages: [Message] = []\n    var isStreaming = false\n    var error: String?\n    \n    private let service: OpenAIService\n    private var previousResponseId: String?\n    private var streamTask: Task\u003CVoid, Never>?\n    \n    init(service: OpenAIService) {\n        self.service = service\n    }\n    \n    func sendMessage(_ text: String) {\n        streamTask?.cancel()\n        \n        \u002F\u002F 添加用户消息\n        messages.append(Message(role: .user, content: text))\n        \n        \u002F\u002F 开始流式响应\n        streamTask = Task {\n            await streamResponse(for: text)\n        }\n    }\n    \n    private func streamResponse(for userInput: String) async {\n        isStreaming = true\n        error = nil\n        \n        \u002F\u002F 创建流式消息占位符\n        let streamingMessage = Message(role: .assistant, content: \"\", isStreaming: true)\n        messages.append(streamingMessage)\n        \n        do {\n            \u002F\u002F 构建对话历史\n            var inputArray: [InputItem] = []\n            for message in messages.dropLast(2) {\n                inputArray.append(.message(InputMessage(\n                    role: message.role.rawValue,\n                    content: .text(message.content)\n                )))\n            }\n            inputArray.append(.message(InputMessage(\n                role: \"user\",\n                content: .text(userInput)\n            )))\n            \n            let parameters = ModelResponseParameter(\n                input: .array(inputArray),\n                model: .gpt4o,\n                previousResponseId: previousResponseId,\n                stream: true\n            )\n            \n            let stream = try await service.responseCreateStream(parameters)\n            var accumulatedText = \"\"\n            \n            for try await event in stream {\n                guard 
!Task.isCancelled else { break }\n                \n                switch event {\n                case .outputTextDelta(let delta):\n                    accumulatedText += delta.delta\n                    updateStreamingMessage(with: accumulatedText)\n                    \n                case .responseCompleted(let completed):\n                    previousResponseId = completed.response.id\n                    finalizeStreamingMessage(with: accumulatedText, responseId: completed.response.id)\n                    \n                case .error(let errorEvent):\n                    throw APIError.requestFailed(description: errorEvent.message)\n                    \n                default:\n                    break\n                }\n            }\n        } catch {\n            self.error = error.localizedDescription\n            messages.removeLast() \u002F\u002F 发生错误时移除流式消息\n        }\n        \n        isStreaming = false\n    }\n    \n    private func updateStreamingMessage(with content: String) {\n        if let index = messages.lastIndex(where: { $0.isStreaming }) {\n            messages[index].content = content\n        }\n    }\n    \n    private func finalizeStreamingMessage(with content: String, responseId: String) {\n        if let index = messages.lastIndex(where: { $0.isStreaming }) {\n            messages[index].content = content\n            messages[index].isStreaming = false\n            messages[index].responseId = responseId\n        }\n    }\n}\n```\n\n\n\n### 嵌入\n参数\n```swift\n\u002F\u002F\u002F [创建](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fembeddings\u002Fcreate) 一个表示输入文本的嵌入向量。\npublic struct EmbeddingParameter: Encodable {\n   \n   \u002F\u002F\u002F 要使用的模型ID。您可以使用“列出模型”API查看所有可用的模型，或参阅我们的[模型概述](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fmodels\u002Foverview)，以了解各模型的说明。\n   let model: String\n   \u002F\u002F\u002F 
要嵌入的输入文本，编码为字符串或标记数组。要在单个请求中嵌入多个输入，请传递字符串数组或标记数组的数组。每个输入不得超过该模型的最大输入标记数（text-embedding-ada-002 为 8191 个标记），且不能是空字符串。[如何使用 `tiktoken` 计算标记数](https:\u002F\u002Fcookbook.openai.com\u002Fexamples\u002Fhow_to_count_tokens_with_tiktoken)\n   let input: String\n   \n   \u002F\u002F\u002F 代表您的最终用户的唯一标识符，有助于 OpenAI 监控和检测滥用行为。[了解更多。](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Fsafety-best-practices\u002Fend-user-ids)\n   let user: String?\n   \n   public enum Model: String {\n      case textEmbeddingAda002 = \"text-embedding-ada-002\"\n   }\n   \n   public init(\n      model: Model = .textEmbeddingAda002,\n      input: String,\n      user: String? = nil)\n   {\n      self.model = model.rawValue\n      self.input = input\n      self.user = user\n   }\n}\n```\n响应\n```swift\n\u002F\u002F\u002F [表示由嵌入端点返回的嵌入向量。](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fembeddings\u002Fobject)\npublic struct EmbeddingObject: Decodable {\n   \n   \u002F\u002F\u002F 对象类型，始终为“embedding”。\n   public let object: String\n   \u002F\u002F\u002F 嵌入向量，是一个浮点数列表。向量的长度取决于模型，具体请参阅嵌入指南。[https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Fembeddings]\n   public let embedding: [Float]\n   \u002F\u002F\u002F 嵌入在嵌入列表中的索引。\n   public let index: Int\n}\n```\n\n使用方法\n```swift\nlet prompt = \"Hello world.\"\nlet parameters = EmbeddingParameter(input: prompt)\nlet embeddingObjects = try await service.createEmbeddings(parameters: parameters).data\n```\n\n### 微调\n参数\n```swift\n\u002F\u002F\u002F [创建作业](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Ffine-tuning\u002Fcreate) 以从给定数据集微调指定模型。\n\u002F\u002F\u002F 响应包括已加入队列的作业详细信息，包括作业状态以及完成后的微调模型名称。\npublic struct FineTuningJobParameters: Encodable {\n   \n   \u002F\u002F\u002F 要微调的模型名称。您可以选择[支持的模型](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fmodels\u002Foverview)之一。\n   let model: String\n   \u002F\u002F\u002F 包含训练数据的已上传文件的 ID。\n   \u002F\u002F\u002F 
有关如何上传文件，请参阅[上传文件](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Ffiles\u002Fupload)。\n   \u002F\u002F\u002F 您的数据集必须格式化为 JSONL 文件。此外，您必须以 fine-tune 的用途上传文件。\n   \u002F\u002F\u002F 更多详情请参阅[微调指南](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Ffine-tuning)。\n   let trainingFile: String\n   \u002F\u002F\u002F 用于微调作业的超参数。\n   let hyperparameters: HyperParameters?\n   \u002F\u002F\u002F 一个最多 18 个字符的字符串，将被添加到您的微调模型名称中。\n   \u002F\u002F\u002F 例如，后缀为 \"custom-model-name\" 将生成类似 ft:gpt-3.5-turbo:openai:custom-model-name:7p4lURel 的模型名称。\n   \u002F\u002F\u002F 默认为 null。\n   let suffix: String?\n   \u002F\u002F\u002F 包含验证数据的已上传文件的 ID。\n   \u002F\u002F\u002F 如果您提供此文件，则该数据将在微调过程中定期用于生成验证指标。这些指标可以在微调结果文件中查看。训练数据和验证数据不应同时出现在同一文件中。\n   \u002F\u002F\u002F 您的数据集必须格式化为 JSONL 文件。您必须以 fine-tune 的用途上传文件。\n   \u002F\u002F\u002F 更多详情请参阅[微调指南](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Ffine-tuning)。\n   let validationFile: String?\n   \u002F\u002F\u002F 为您的微调作业启用的一组集成。\n   let integrations: [Integration]?\n   \u002F\u002F\u002F 种子控制作业的可重复性。使用相同的种子和作业参数应产生相同的结果，但在极少数情况下可能会有所不同。如果未指定种子，系统将为您生成一个。\n   let seed: Int?\n   \n   \u002F\u002F\u002F 目前，以下模型支持[微调](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Ffine-tuning\u002Fwhat-models-can-be-fine-tuned)：\n   \u002F\u002F\u002F gpt-3.5-turbo-0613（推荐）\n   \u002F\u002F\u002F babbage-002\n   \u002F\u002F\u002F davinci-002\n   \u002F\u002F\u002F OpenAI 预计 gpt-3.5-turbo 是大多数用户在效果和易用性方面最合适的选择，除非您正在迁移旧版微调模型。\n   public enum Model: String {\n      case gpt35 = \"gpt-3.5-turbo-0613\" \u002F\u002F\u002F 推荐\n      case babbage002 = \"babbage-002\"\n      case davinci002 = \"davinci-002\"\n   }\n   \n   public struct HyperParameters: Encodable {\n      \u002F\u002F\u002F 训练模型的轮数。一轮指完整遍历一次训练数据集。\n      \u002F\u002F\u002F 默认为 auto。\n      let nEpochs: Int?\n      \n      public init(\n         nEpochs: Int?)\n      {\n         self.nEpochs = nEpochs\n      }\n   }\n   \n   public 
init(\n      model: Model,\n      trainingFile: String,\n      hyperparameters: HyperParameters? = nil,\n      suffix: String? = nil,\n      validationFile: String? = nil)\n   {\n      self.model = model.rawValue\n      self.trainingFile = trainingFile\n      self.hyperparameters = hyperparameters\n      self.suffix = suffix\n      self.validationFile = validationFile\n   }\n}\n```\n响应\n```swift\n\u002F\u002F\u002F fine_tuning.job 对象表示通过 API 创建的[微调作业](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Ffine-tuning\u002Fobject)。\npublic struct FineTuningJobObject: Decodable {\n   \n   \u002F\u002F\u002F 对象标识符，可在 API 端点中引用。\n   public let id: String\n   \u002F\u002F\u002F 微调作业创建时的 Unix 时间戳（以秒为单位）。\n   public let createdAt: Int\n  \u002F\u002F\u002F 对于失败的微调作业，此处将包含有关失败原因的更多信息。\n   public let error: OpenAIErrorResponse.Error?\n   \u002F\u002F\u002F 正在创建的微调模型名称。如果微调作业仍在运行，则该值为 null。\n   public let fineTunedModel: String?\n   \u002F\u002F\u002F 微调作业完成时的 Unix 时间戳（以秒为单位）。如果微调作业仍在运行，则该值为 null。\n   public let finishedAt: Int?\n   \u002F\u002F\u002F 用于微调作业的超参数。更多详情请参阅[微调指南](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Ffine-tuning)。\n   public let hyperparameters: HyperParameters\n   \u002F\u002F\u002F 正在微调的基础模型。\n   public let model: String\n   \u002F\u002F\u002F 对象类型，始终为 \"fine_tuning.job\"。\n   public let object: String\n   \u002F\u002F\u002F 拥有微调作业的组织。\n   public let organizationId: String\n   \u002F\u002F\u002F 微调作业的编译结果文件 ID。您可以通过[Files API](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Ffiles\u002Fretrieve-contents)检索结果。\n   public let resultFiles: [String]\n   \u002F\u002F\u002F 微调作业的当前状态，可能为 `validating_files`、`queued`、`running`、`succeeded`、`failed` 或 `cancelled`。\n   public let status: String\n   \u002F\u002F\u002F 此微调作业处理的可计费总 token 数。如果微调作业仍在运行，则该值为 null。\n   public let trainedTokens: Int?\n   \n   \u002F\u002F\u002F 用于训练的文件 ID。您可以通过[Files 
API](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Ffiles\u002Fretrieve-contents)检索训练数据。\n   public let trainingFile: String\n   \u002F\u002F\u002F 用于验证的文件 ID。您可以通过[Files API](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Ffiles\u002Fretrieve-contents)检索验证结果。\n   public let validationFile: String?\n   \n   public enum Status: String {\n      case validatingFiles = \"validating_files\"\n      case queued\n      case running\n      case succeeded\n      case failed\n      case cancelled\n   }\n   \n   public struct HyperParameters: Decodable {\n      \u002F\u002F\u002F 训练模型的轮数。一轮指完整遍历一次训练数据集。\"auto\" 会根据数据集大小决定最佳轮数。如果手动设置轮数，我们支持 1 到 50 轮之间的任何数值。\n      public let nEpochs: IntOrStringValue\n   }\n}\n```\n\n用法\n列出微调任务\n```swift\nlet fineTuningJobs = try await service.listFineTuningJobs()\n```\n创建微调任务\n```swift\nlet trainingFileID = \"file-Atc9okK0MOuQwQzDJCZXnrh6\" \u002F\u002F 使用 `Files` API 上传的文件 ID。https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Ffine-tuning\u002Fcreate#fine-tuning\u002Fcreate-training_file\nlet parameters = FineTuningJobParameters(model: .gpt35, trainingFile: trainingFileID)\nlet fineTuningJob = try await service.createFineTuningJob(parameters: parameters)\n```\n获取微调任务\n```swift\nlet fineTuningJobID = \"ftjob-abc123\"\nlet fineTuningJob = try await service.retrieveFineTuningJob(id: fineTuningJobID)\n```\n取消微调任务\n```swift\nlet fineTuningJobID = \"ftjob-abc123\"\nlet canceledFineTuningJob = try await service.cancelFineTuningJobWith(id: fineTuningJobID)\n```\n#### 微调任务事件对象\n响应\n```swift\n\u002F\u002F\u002F [微调任务事件对象](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Ffine-tuning\u002Fevent-object)\npublic struct FineTuningJobEventObject: Decodable {\n   \n   public let id: String\n   \n   public let createdAt: Int\n   \n   public let level: String\n   \n   public let message: String\n   \n   public let object: String\n   \n   public let type: String?\n   \n   
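\u002F\u002F 用法示意（假设性示例；`jobEvents` 为下方“用法”一节中 listFineTuningEventsForJobWith 的返回数据）：\n   \u002F\u002F\n   \u002F\u002F   for event in jobEvents where event.level == "error" {\n   \u002F\u002F       print(event.message)\n   \u002F\u002F   }\n   \n   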
public let data: Data?\n   \n   public struct Data: Decodable {\n      public let step: Int\n      public let trainLoss: Double\n      public let trainMeanTokenAccuracy: Double\n   }\n}\n```\n用法\n```swift\nlet fineTuningJobID = \"ftjob-abc123\"\nlet jobEvents = try await service.listFineTuningEventsForJobWith(id: fineTuningJobID, after: nil, limit: nil).data\n```\n\n\n\n### 批处理\n参数\n```swift\npublic struct BatchParameter: Encodable {\n   \n   \u002F\u002F\u002F 包含新批处理请求的已上传文件的 ID。\n   \u002F\u002F\u002F 请参阅 [上传文件](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Ffiles\u002Fcreate) 了解如何上传文件。\n   \u002F\u002F\u002F 您的输入文件必须格式化为 [JSONL 文件](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fbatch\u002FrequestInput)，并且必须以批处理目的上传。\n   let inputFileID: String\n   \u002F\u002F\u002F 批处理中所有请求将使用的端点。目前仅支持 \u002Fv1\u002Fchat\u002Fcompletions。\n   let endpoint: String\n   \u002F\u002F\u002F 批处理应在其中完成的时间范围。目前仅支持 24 小时。\n   let completionWindow: String\n   \u002F\u002F\u002F 批处理的可选自定义元数据。\n   let metadata: [String: String]?\n   \n   enum CodingKeys: String, CodingKey {\n      case inputFileID = \"input_file_id\"\n      case endpoint\n      case completionWindow = \"completion_window\"\n      case metadata\n   }\n}\n```\n响应\n```swift\npublic struct BatchObject: Decodable {\n   \n   let id: String\n   \u002F\u002F\u002F 对象类型，始终为 batch。\n   let object: String\n   \u002F\u002F\u002F 批处理使用的 OpenAI API 端点。\n   let endpoint: String\n   \n   let errors: Error\n   \u002F\u002F\u002F 批处理输入文件的 ID。\n   let inputFileID: String\n   \u002F\u002F\u002F 批处理应在其中完成的时间范围。\n   let completionWindow: String\n   \u002F\u002F\u002F 批处理的当前状态。\n   let status: String\n   \u002F\u002F\u002F 包含成功执行请求输出的文件 ID。\n   let outputFileID: String\n   \u002F\u002F\u002F 包含出错请求输出的文件 ID。\n   let errorFileID: String\n   \u002F\u002F\u002F 批处理创建时的 Unix 时间戳（以秒为单位）。\n   let createdAt: Int\n   \u002F\u002F\u002F 批处理开始处理时的 Unix 时间戳（以秒为单位）。\n   let inProgressAt: Int\n   \u002F\u002F\u002F 批处理到期时的 
Unix 时间戳（以秒为单位）。\n   let expiresAt: Int\n   \u002F\u002F\u002F 批处理开始收尾时的 Unix 时间戳（以秒为单位）。\n   let finalizingAt: Int\n   \u002F\u002F\u002F 批处理完成时的 Unix 时间戳（以秒为单位）。\n   let completedAt: Int\n   \u002F\u002F\u002F 批处理失败时的 Unix 时间戳（以秒为单位）。\n   let failedAt: Int\n   \u002F\u002F\u002F 批处理已过期时的 Unix 时间戳（以秒为单位）。\n   let expiredAt: Int\n   \u002F\u002F\u002F 批处理开始取消时的 Unix 时间戳（以秒为单位）。\n   let cancellingAt: Int\n   \u002F\u002F\u002F 批处理被取消时的 Unix 时间戳（以秒为单位）。\n   let cancelledAt: Int\n   \u002F\u002F\u002F 批处理中不同状态的请求数量。\n   let requestCounts: RequestCount\n   \u002F\u002F\u002F 可附加到对象上的键值对（最多 16 组）。这有助于以结构化方式存储有关对象的额外信息。键的最大长度为 64 个字符，值的最大长度为 512 个字符。\n   let metadata: [String: String]\n   \n   public struct Error: Decodable {\n      \n      let object: String\n      let data: [Data]\n\n      public struct Data: Decodable {\n         \n         \u002F\u002F\u002F 用于识别错误类型的错误代码。\n         let code: String\n         \u002F\u002F\u002F 提供更多关于错误细节的人类可读消息。\n         let message: String\n         \u002F\u002F\u002F 如果适用，导致错误的参数名称。\n         let param: String?\n         \u002F\u002F\u002F 如果适用，错误发生在输入文件中的行号。\n         let line: Int?\n      }\n   }\n   \n   public struct RequestCount: Decodable {\n      \n      \u002F\u002F\u002F 批处理中的总请求数。\n      let total: Int\n      \u002F\u002F\u002F 已成功完成的请求数。\n      let completed: Int\n      \u002F\u002F\u002F 失败的请求数。\n      let failed: Int\n   }\n}\n```\n用法\n\n创建批处理\n```swift\nlet inputFileID = \"file-abc123\"\nlet endpoint = \"\u002Fv1\u002Fchat\u002Fcompletions\"\nlet completionWindow = \"24h\"\nlet parameters = BatchParameter(inputFileID: inputFileID, endpoint: endpoint, completionWindow: completionWindow, metadata: nil)\nlet batch = try await service.createBatch(parameters: parameters)\n```\n\n获取批处理\n```swift\nlet batchID = \"batch_abc123\"\nlet batch = try await service.retrieveBatch(id: batchID)\n```\n\n取消批处理\n```swift\nlet batchID = \"batch_abc123\"\nlet batch = try await service.cancelBatch(id: batchID)\n```\n\n列出批处理\n```swift\nlet 
batches = try await service.listBatch(after: nil, limit: nil)\n```\n\n### 文件\n参数\n```swift\n\u002F\u002F\u002F [上传文件](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Ffiles\u002Fcreate) 可用于各种端点\u002F功能。目前，单个组织上传的所有文件大小总和最多为 1 GB。如需提高存储限制，请联系我们。\npublic struct FileParameters: Encodable {\n   \n   \u002F\u002F\u002F 文件资产的名称未在 OpenAI 官方文档中记录；然而，它对于构建多部分请求至关重要。\n   let fileName: String\n   \u002F\u002F\u002F 要上传的文件对象（而非文件名）。\n   \u002F\u002F\u002F 如果 purpose 设置为 \"fine-tune\"，该文件将用于微调。\n   let file: Data\n   \u002F\u002F\u002F 上传文件的预期用途。\n   \u002F\u002F\u002F 对于[微调](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Ffine-tuning)，请使用 \"fine-tune\"。这使我们能够验证上传文件的格式是否适用于微调。\n   let purpose: String\n   \n   public init(\n      fileName: String,\n      file: Data,\n      purpose: String)\n   {\n      self.fileName = fileName\n      self.file = file\n      self.purpose = purpose\n   }\n}\n```\n响应\n```swift\n\u002F\u002F\u002F [文件对象](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Ffiles\u002Fobject) 表示已上传到 OpenAI 的文档。\npublic struct FileObject: Decodable {\n   \n   \u002F\u002F\u002F 文件标识符，可在 API 端点中引用。\n   public let id: String\n   \u002F\u002F\u002F 文件大小，以字节为单位。\n   public let bytes: Int\n   \u002F\u002F\u002F 文件创建时的 Unix 时间戳（以秒为单位）。\n   public let createdAt: Int\n   \u002F\u002F\u002F 文件名。\n   public let filename: String\n   \u002F\u002F\u002F 对象类型，始终为 \"file\"。\n   public let object: String\n   \u002F\u002F\u002F 文件的预期用途。目前仅支持 \"fine-tune\"。\n   public let purpose: String\n   \u002F\u002F\u002F 文件的当前状态，可能为 uploaded、processed、pending、error、deleting 或 deleted。\n   public let status: String\n   \u002F\u002F\u002F 关于文件状态的附加信息。如果文件处于 error 状态，此处将包含描述错误的消息。\n   public let statusDetails: String?\n   \n   public enum Status: String {\n      case uploaded\n      case processed\n      case pending\n      case error\n      case deleting\n      case deleted\n   }\n\n   public init(\n      id: String,\n      bytes: 
Int,\n      createdAt: Int,\n      filename: String,\n      object: String,\n      purpose: String,\n      status: Status,\n      statusDetails: String?)\n   {\n      self.id = id\n      self.bytes = bytes\n      self.createdAt = createdAt\n      self.filename = filename\n      self.object = object\n      self.purpose = purpose\n      self.status = status.rawValue\n      self.statusDetails = statusDetails\n   }\n}\n```\n用法\n列出文件\n```swift\nlet files = try await service.listFiles().data\n```\n上传文件\n```swift\nlet fileName = \"worldCupData.jsonl\"\nlet data = try Data(contentsOf: URL(fileURLWithPath: fileName)) \u002F\u002F 从名为 \"worldCupData.jsonl\" 的文件中读取的数据。\nlet parameters = FileParameters(fileName: \"WorldCupData\", file: data, purpose: \"fine-tune\") \u002F\u002F 重要提示：务必提供文件名。\nlet uploadedFile = try await service.uploadFile(parameters: parameters) \n```\n删除文件\n```swift\nlet fileID = \"file-abc123\"\nlet deletedStatus = try await service.deleteFileWith(id: fileID)\n```\n获取文件\n```swift\nlet fileID = \"file-abc123\"\nlet retrievedFile = try await service.retrieveFileWith(id: fileID)\n```\n获取文件内容\n```swift\nlet fileID = \"file-abc123\"\nlet fileContent = try await service.retrieveContentForFileWith(id: fileID)\n```\n\n### 图像\n\n本库支持最新的 OpenAI 图像生成功能。\n\n- 参数 创建\n\n```swift\n\u002F\u002F\u002F '创建图像':\n\u002F\u002F\u002F https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fimages\u002Fcreate\npublic struct CreateImageParameters: Encodable {\n   \n   \u002F\u002F\u002F 对所需图像的文本描述。\n   \u002F\u002F\u002F `gpt-image-1` 的最大长度为 32000 字符，`dall-e-2` 为 1000 字符，`dall-e-3` 为 4000 字符。\n   public let prompt: String\n   \n   \u002F\u002F MARK: - 可选属性\n   \n   \u002F\u002F\u002F 允许设置生成图像背景的透明度。\n   \u002F\u002F\u002F 此参数仅适用于 `gpt-image-1`。\n   \u002F\u002F\u002F 必须是 `transparent`、`opaque` 或 `auto`（默认值）之一。\n   \u002F\u002F\u002F 当使用 `auto` 时，模型会自动确定最适合该图像的背景。\n   \u002F\u002F\u002F 如果选择 `transparent`，输出格式需要支持透明度，因此应设置为 `png`（默认值）或 `webp`。\n   public let background: Background?\n   \n   \u002F\u002F\u002F 
用于图像生成的模型。可以是 `dall-e-2`、`dall-e-3` 或 `gpt-image-1`。\n   \u002F\u002F\u002F 默认为 `dall-e-2`，除非使用了 `gpt-image-1` 特有的参数。\n   public let model: Model?\n   \n   \u002F\u002F\u002F 控制由 `gpt-image-1` 生成图像的内容审核级别。\n   \u002F\u002F\u002F 必须是 `low`（过滤较宽松）或 `auto`（默认值）。\n   public let moderation: Moderation?\n   \n   \u002F\u002F\u002F 要生成的图像数量。必须介于 1 到 10 之间。对于 `dall-e-3`，仅支持 `n=1`。\n   \u002F\u002F\u002F 默认值为 `1`\n   public let n: Int?\n   \n   \u002F\u002F\u002F 生成图像的压缩级别（0-100%）。\n   \u002F\u002F\u002F 该参数仅适用于使用 `webp` 或 `jpeg` 输出格式的 `gpt-image-1`，默认值为 100。\n   public let outputCompression: Int?\n   \n   \u002F\u002F\u002F 生成图像返回的格式。\n   \u002F\u002F\u002F 该参数仅适用于 `gpt-image-1`。\n   \u002F\u002F\u002F 必须是 `png`、`jpeg` 或 `webp` 中的一个。\n   public let outputFormat: OutputFormat?\n   \n   \u002F\u002F\u002F 将要生成的图像质量。\n   \u002F\u002F\u002F - `auto`（默认值）会自动为给定模型选择最佳质量。\n   \u002F\u002F\u002F - `high`、`medium` 和 `low` 适用于 gpt-image-1。\n   \u002F\u002F\u002F - `hd` 和 `standard` 适用于 dall-e-3。\n   \u002F\u002F\u002F - `standard` 是 dall-e-2 唯一的选择。\n   public let quality: Quality?\n   \n   \u002F\u002F\u002F 使用 dall-e-2 和 dall-e-3 生成的图像返回的格式。\n   \u002F\u002F\u002F 必须是 `url` 或 `b64_json` 中的一个。\n   \u002F\u002F\u002F URL 在图像生成后仅有效 60 分钟。\n   \u002F\u002F\u002F 此参数不适用于 `gpt-image-1`，后者始终返回 base64 编码的图像。\n   public let responseFormat: ResponseFormat?\n   \n   \u002F\u002F\u002F 生成图像的尺寸。\n   \u002F\u002F\u002F - 对于 gpt-image-1，可以选择 `1024x1024`、`1536x1024`（横版）、`1024x1536`（竖版）或 `auto`（默认值）。\n   \u002F\u002F\u002F - 对于 dall-e-3，可以选择 `1024x1024`、`1792x1024` 或 `1024x1792`。\n   \u002F\u002F\u002F - 对于 dall-e-2，可以选择 `256x256`、`512x512` 或 `1024x1024`。\n   public let size: String?\n   \n   \u002F\u002F\u002F 生成图像的风格。\n   \u002F\u002F\u002F 该参数仅适用于 `dall-e-3`。\n   \u002F\u002F\u002F 必须是 `vivid` 或 `natural` 中的一个。\n   \u002F\u002F\u002F `vivid` 会使模型倾向于生成超现实且戏剧性的图像。\n   \u002F\u002F\u002F `natural` 会使模型产生更自然、不那么超现实的图像。\n   public let style: Style?\n   \n   \u002F\u002F\u002F 代表您的最终用户的唯一标识符，有助于 OpenAI 监控和检测滥用行为。\n   public let user: 
String?\n}\n```\n\n- 参数 编辑\n\n```swift\n\u002F\u002F\u002F 根据一个或多个源图像和提示词，创建编辑或扩展后的图像。\n\u002F\u002F\u002F 此端点仅支持 `gpt-image-1` 和 `dall-e-2`。\npublic struct CreateImageEditParameters: Encodable {\n   \n   \u002F\u002F\u002F 要编辑的图像。\n   \u002F\u002F\u002F 对于 `gpt-image-1`，每张图像应为小于 25MB 的 `png`、`webp` 或 `jpg` 文件。\n   \u002F\u002F\u002F 对于 `dall-e-2`，只能提供一张图像，且必须是小于 4MB 的正方形 `png` 文件。\n   let image: [Data]\n   \n   \u002F\u002F\u002F 对所需图像的文本描述。\n   \u002F\u002F\u002F `dall-e-2` 的最大长度为 1000 字符，`gpt-image-1` 为 32000 字符。\n   let prompt: String\n   \n   \u002F\u002F\u002F 一张额外的图像，其完全透明的区域指示应如何编辑 `image`。\n   \u002F\u002F\u002F 如果提供了多张图像，则遮罩将应用于第一张图像。\n   \u002F\u002F\u002F 必须是有效的 PNG 文件，小于 4MB，并且与 `image` 具有相同的尺寸。\n   let mask: Data?\n   \n   \u002F\u002F\u002F 用于图像生成的模型。仅支持 `dall-e-2` 和 `gpt-image-1`。\n   \u002F\u002F\u002F 默认为 `dall-e-2`，除非使用了 `gpt-image-1` 特有的参数。\n   let model: String?\n   \n   \u002F\u002F\u002F 要生成的图像数量。必须介于 1 到 10 之间。\n   \u002F\u002F\u002F 默认值为 1。\n   let n: Int?\n   \n   \u002F\u002F\u002F 将要生成的图像质量。\n   \u002F\u002F\u002F `high`、`medium` 和 `low` 仅适用于 `gpt-image-1`。\n   \u002F\u002F\u002F `dall-e-2` 仅支持 `standard` 质量。\n   \u002F\u002F\u002F 默认值为 `auto`。\n   let quality: String?\n   \n   \u002F\u002F\u002F 生成图像返回的格式。\n   \u002F\u002F\u002F 必须是 `url` 或 `b64_json` 中的一个。\n   \u002F\u002F\u002F URL 在图像生成后仅有效 60 分钟。\n   \u002F\u002F\u002F 此参数仅适用于 `dall-e-2`，因为 `gpt-image-1` 始终会返回 base64 编码的图像。\n   let responseFormat: String?\n   \n   \u002F\u002F\u002F 生成图像的尺寸。\n   \u002F\u002F\u002F 对于 `gpt-image-1`，必须是 `1024x1024`、`1536x1024`（横版）、`1024x1536`（竖版）或 `auto`（默认值）之一；\n   \u002F\u002F\u002F 对于 `dall-e-2`，则必须是 `256x256`、`512x512` 或 `1024x1024` 之一。\n   let size: String?\n   \n   \u002F\u002F\u002F 代表您的最终用户的唯一标识符，有助于 OpenAI 监控和检测滥用行为。\n   let user: String?\n}\n```\n\n- 参数 变体\n\n```swift\n\u002F\u002F\u002F 根据给定图像创建变体。\n\u002F\u002F\u002F 此端点仅支持 `dall-e-2`。\npublic struct CreateImageVariationParameters: Encodable {\n   \n   \u002F\u002F\u002F 用作变体基础的图像。\n   \u002F\u002F\u002F 必须是有效的 PNG 文件，小于 
4MB，且为正方形。\n   let image: Data\n   \n   \u002F\u002F\u002F 用于图像生成的模型。目前仅支持 `dall-e-2`。\n   \u002F\u002F\u002F 默认为 `dall-e-2`。\n   let model: String?\n   \n   \u002F\u002F\u002F 要生成的图像数量。必须介于 1 到 10 之间。\n   \u002F\u002F\u002F 默认值为 1。\n   let n: Int?\n   \n   \u002F\u002F\u002F 生成图像返回的格式。\n   \u002F\u002F\u002F 必须是 `url` 或 `b64_json` 中的一个。\n   \u002F\u002F\u002F URL 在图像生成后仅有效 60 分钟。\n   \u002F\u002F\u002F 默认值为 `url`。\n   let responseFormat: String?\n   \n   \u002F\u002F\u002F 生成图像的尺寸。\n   \u002F\u002F\u002F 必须是 `256x256`、`512x512` 或 `1024x1024` 之一。\n   \u002F\u002F\u002F 默认值为 `1024x1024`。\n   let size: String?\n   \n   \u002F\u002F\u002F 代表您的最终用户的唯一标识符，有助于 OpenAI 监控和检测滥用行为。\n   let user: String?\n}\n```\n\n- 请求示例\n\n```swift\nimport SwiftOpenAI\n\nlet service = OpenAIServiceFactory.service(apiKey: \"\u003CYOUR_KEY>\")\n\n\u002F\u002F ❶ 描述你想要的图像\nlet prompt = \"一幅水彩风格的龙独角兽混合体在雪山之上飞翔\"\n\n\u002F\u002F ❷ 使用全新的类型构建参数（提交 880a15c）\nlet params = CreateImageParameters(\n    prompt: prompt,\n    model:  .gptImage1,      \u002F\u002F .dallE3 \u002F .dallE2 也同样有效\n    n:      1,               \u002F\u002F 1-10  (DALL-E 3 只支持 1)\n    quality: .high,          \u002F\u002F DALL-E 3 支持 .hd \u002F .standard\n    size:   \"1024x1024\"      \u002F\u002F 对于宽屏或长屏图像，可使用 \"1792x1024\" 或 \"1024x1792\"\n)\n\ndo {\n    \u002F\u002F ❸ 发送请求——返回 `CreateImageResponse`\n    let result = try await service.createImages(parameters: params)\n    let url    = result.data?.first?.url          \u002F\u002F 或者使用 base-64 编码的 `b64Json`\n    print(\"图像 URL:\", url ?? 
\"无\")\n} catch {\n    print(\"生成失败:\", error)\n}\n```\n\n如需查看示例应用，请前往本仓库中的 `Examples\u002FSwiftOpenAIExample` 项目。\n\n⚠️ 此库同时保持与先前图像生成功能的兼容性。\n\n\n为了处理各模型的图像尺寸限制，库定义了带关联值的 `Dalle` 枚举，在类型层面准确表达每个模型所支持的尺寸。\n\n[DALL·E](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fmodels\u002Fdall-e)\n\nDALL·E 是一种能够根据自然语言描述生成逼真图像和艺术作品的人工智能系统。目前，DALL·E 3 支持根据提示创建特定尺寸的新图像。而 DALL·E 2 则支持编辑现有图像或基于用户提供的图像生成变体。\n\nDALL·E 3 和 DALL·E 2 均可通过我们的 Images API 使用。您也可以通过 ChatGPT Plus 尝试 DALL·E 3。\n\n\n| 模型     | 描述                                                  |\n|-----------|--------------------------------------------------------------|\n| dall-e-3  | 最新的 DALL·E 模型，于 2023 年 11 月发布。 |\n| dall-e-2  | 上一代 DALL·E 模型，于 2022 年 11 月发布。作为 DALL·E 的第二次迭代，其生成的图像更加逼真、准确，且分辨率是原始模型的四倍。 |\n\n```swift\npublic enum Dalle {\n   \n   case dalle2(Dalle2ImageSize)\n   case dalle3(Dalle3ImageSize)\n   \n   public enum Dalle2ImageSize: String {\n      case small = \"256x256\"\n      case medium = \"512x512\"\n      case large = \"1024x1024\"\n   }\n   \n   public enum Dalle3ImageSize: String {\n      case largeSquare = \"1024x1024\"\n      case landscape = \"1792x1024\"\n      case portrait = \"1024x1792\"\n   }\n   \n   var model: String {\n      switch self {\n      case .dalle2: return Model.dalle2.rawValue\n      case .dalle3: return Model.dalle3.rawValue\n      }\n   }\n   \n   var size: String {\n      switch self {\n      case .dalle2(let dalle2ImageSize):\n         return dalle2ImageSize.rawValue\n      case .dalle3(let dalle3ImageSize):\n         return dalle3ImageSize.rawValue\n      }\n   }\n}\n```\n\n#### 图像创建\n参数\n```swift\npublic struct ImageCreateParameters: Encodable {\n   \n   \u002F\u002F\u002F 对所需图像的文本描述。对于 dall-e-2，最大长度为1000个字符；对于 dall-e-3，最大长度为4000个字符。\n   let prompt: String\n   \u002F\u002F\u002F 用于生成图像的模型。默认为 dall-e-2。\n   let model: String?\n   \u002F\u002F\u002F 要生成的图像数量。必须介于1到10之间。对于 
dall-e-3，仅支持 n=1。\n   let n: Int?\n   \u002F\u002F\u002F 将要生成的图像质量。hd 会生成细节更丰富、整幅图像一致性更高的图像。此参数仅适用于 dall-e-3。默认为 standard。\n   let quality: String?\n   \u002F\u002F\u002F 生成图像的返回格式。必须是 url 或 b64_json 之一。默认为 url。\n   let responseFormat: String?\n   \u002F\u002F\u002F 生成图像的尺寸。对于 dall-e-2，必须是 256x256、512x512 或 1024x1024 之一。对于 dall-e-3 模型，必须是 1024x1024、1792x1024 或 1024x1792 之一。默认为 1024x1024。\n   let size: String?\n   \u002F\u002F\u002F 生成图像的风格。必须是 vivid 或 natural 之一。vivid 会使模型倾向于生成超写实且富有戏剧性的图像。natural 则会使模型生成更自然、不那么超写实的图像。此参数仅适用于 dall-e-3。默认为 vivid。\n   let style: String?\n   \u002F\u002F\u002F 代表您的最终用户的唯一标识符，有助于 OpenAI 监控和检测滥用行为。[了解详情](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Fsafety-best-practices)\n   let user: String?\n   \n   public init(\n      prompt: String,\n      model: Dalle,\n      numberOfImages: Int = 1,\n      quality: String? = nil,\n      responseFormat: ImageResponseFormat? = nil,\n      style: String? = nil,\n      user: String? = nil)\n   {\n   self.prompt = prompt\n   self.model = model.model\n   self.n = numberOfImages\n   self.quality = quality\n   self.responseFormat = responseFormat?.rawValue\n   self.size = model.size\n   self.style = style\n   self.user = user\n   }   \n}\n```\n#### 图像编辑\n参数\n```swift\n\u002F\u002F\u002F [根据原始图像和提示创建编辑或扩展后的图像。](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fimages\u002FcreateEdit)\npublic struct ImageEditParameters: Encodable {\n   \n   \u002F\u002F\u002F 要编辑的图像。必须是有效的 PNG 文件，小于 4MB，且为正方形。如果未提供 mask，则图像必须具有透明区域，该透明区域将用作 mask。\n   let image: Data\n   \u002F\u002F\u002F 对所需图像的文本描述。最大长度为1000个字符。\n   let prompt: String\n   \u002F\u002F\u002F 附加图像，其完全透明的区域（例如 alpha 为零的地方）指示应编辑图像的位置。必须是有效的 PNG 文件，小于 4MB，且与 image 具有相同的尺寸。\n   let mask: Data?\n   \u002F\u002F\u002F 用于生成图像的模型。目前仅支持 dall-e-2。默认为 dall-e-2。\n   let model: String?\n   \u002F\u002F\u002F 要生成的图像数量。必须介于1到10之间。默认为1。\n   let n: Int?\n   \u002F\u002F\u002F 生成图像的尺寸。必须是 256x256、512x512 或 1024x1024 之一。默认为 
1024x1024。\n   let size: String?\n   \u002F\u002F\u002F 生成图像的返回格式。必须是 url 或 b64_json 之一。默认为 url。\n   let responseFormat: String?\n   \u002F\u002F\u002F 代表您的最终用户的唯一标识符，有助于 OpenAI 监控和检测滥用行为。[了解详情](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Fsafety-best-practices)\n   let user: String?\n   \n   public init(\n      image: UIImage,\n      model: Dalle? = nil,\n      mask: UIImage? = nil,\n      prompt: String,\n      numberOfImages: Int? = nil,\n      responseFormat: ImageResponseFormat? = nil,\n      user: String? = nil)\n   {\n      if (image.pngData() == nil) {\n         assertionFailure(\"无法从图像获取 PNG 数据\")\n      }\n      if let mask, mask.pngData() == nil {\n         assertionFailure(\"无法从 mask 获取 PNG 数据\")\n      }\n      if let model, model.model != Model.dalle2.rawValue {\n         assertionFailure(\"目前仅支持 dall-e-2 [https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fimages\u002FcreateEdit]\")\n      }\n      self.image = image.pngData()!\n      self.model = model?.model\n      self.mask = mask?.pngData()\n      self.prompt = prompt\n      self.n = numberOfImages\n      self.size = model?.size\n      self.responseFormat = responseFormat?.rawValue\n      self.user = user\n   }\n}\n```\n#### 图像变体\n参数\n```swift\n\u002F\u002F\u002F [根据给定图像创建变体。](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fimages\u002FcreateVariation)\npublic struct ImageVariationParameters: Encodable {\n   \n   \u002F\u002F\u002F 作为变体基础的图像。必须是有效的 PNG 文件，小于 4MB，且为正方形。\n   let image: Data\n   \u002F\u002F\u002F 用于生成图像的模型。目前仅支持 dall-e-2。默认为 dall-e-2。\n   let model: String?\n   \u002F\u002F\u002F 要生成的图像数量。必须介于1到10之间。默认为1。\n   let n: Int?\n   \u002F\u002F\u002F 生成图像的返回格式。必须是 url 或 b64_json 之一。默认为 url。\n   let responseFormat: String?\n   \u002F\u002F\u002F 生成图像的尺寸。必须是 256x256、512x512 或 1024x1024 之一。默认为 1024x1024。\n   let size: String?\n   \u002F\u002F\u002F 代表您的最终用户的唯一标识符，有助于 OpenAI 
监控和检测滥用行为。[了解详情](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Fsafety-best-practices)\n   let user: String?\n   \n   public init(\n      image: UIImage,\n      model: Dalle? = nil,\n      numberOfImages: Int? = nil,\n      responseFormat: ImageResponseFormat? = nil,\n      user: String? = nil)\n   {\n      if let model, model.model != Model.dalle2.rawValue {\n         assertionFailure(\"目前仅支持 dall-e-2 [https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fimages\u002FcreateEdit]\")\n      }\n      self.image = image.pngData()!\n      self.n = numberOfImages\n      self.model = model?.model\n      self.size = model?.size\n      self.responseFormat = responseFormat?.rawValue\n      self.user = user\n   }\n}\n```\n响应\n```swift\n\u002F\u002F\u002F [表示由 OpenAI API 生成的图像的 URL 或内容。](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fimages\u002Fobject)\npublic struct ImageObject: Decodable {\n   \u002F\u002F\u002F 如果 response_format 为 url（默认），则为生成图像的 URL。\n   public let url: URL?\n   \u002F\u002F\u002F 如果 response_format 为 b64_json，则为生成图像的 Base64 编码 JSON。\n   public let b64Json: String?\n   \u002F\u002F\u002F 用于生成图像的提示，如果有对提示的修订，则显示修订后的提示。\n   public let revisedPrompt: String?\n}\n```\n\n使用方法\n```swift\n\u002F\u002F\u002F 创建图像\nlet prompt = \"龙和独角兽的混合体\"\nlet createParameters = ImageCreateParameters(prompt: prompt, model: .dalle3(.largeSquare))\nlet imageURLS = try await service.legacyCreateImages(parameters: createParameters).data.map(\\.url)\n```\n```swift\n\u002F\u002F\u002F 编辑图像\nlet data = try Data(contentsOf: imageURL) \u002F\u002F imageURL 指向要编辑的本地图片文件。\nlet image = UIImage(data: data)!\nlet prompt = \"添加一个充满粉红色气球的背景。\"\nlet editParameters = ImageEditParameters(image: image, prompt: prompt, numberOfImages: 4)  \nlet imageURLS = try await service.legacyEditImage(parameters: editParameters).data.map(\\.url)\n```\n```swift\n\u002F\u002F\u002F 图像变体\nlet data = try Data(contentsOf: imageURL) \u002F\u002F imageURL 指向作为变体基础的本地图片文件。\nlet image = UIImage(data: 
data)!\nlet variationParameters = ImageVariationParameters(image: image, numberOfImages: 4)\nlet imageURLS = try await service.legacyCreateImageVariations(parameters: variationParameters).data.map(\\.url)\n```\n\n\n\n### 模型\n响应\n```swift\n\n\u002F\u002F\u002F 描述了一个 OpenAI [模型](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fmodels\u002Fobject)，该模型可以与 API 一起使用。\npublic struct ModelObject: Decodable {\n   \n   \u002F\u002F\u002F 模型标识符，可在 API 端点中引用。\n   public let id: String\n   \u002F\u002F\u002F 模型创建的 Unix 时间戳（以秒为单位）。\n   public let created: Int\n   \u002F\u002F\u002F 对象类型，始终为 \"model\"。\n   public let object: String\n   \u002F\u002F\u002F 拥有该模型的组织。\n   public let ownedBy: String\n   \u002F\u002F\u002F 表示当前模型权限的数组。数组中的每个元素对应特定的权限设置。如果没有权限或数据不可用，该数组可能为 nil。\n   public let permission: [Permission]?\n   \n   public struct Permission: Decodable {\n      public let id: String?\n      public let object: String?\n      public let created: Int?\n      public let allowCreateEngine: Bool?\n      public let allowSampling: Bool?\n      public let allowLogprobs: Bool?\n      public let allowSearchIndices: Bool?\n      public let allowView: Bool?\n      public let allowFineTuning: Bool?\n      public let organization: String?\n      public let group: String?\n      public let isBlocking: Bool?\n   }\n   \n   \u002F\u002F\u002F 表示来自 [delete](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fmodels\u002Fdelete) 微调 API 的响应。\n   public struct DeletionStatus: Decodable {\n      \n      public let id: String\n      public let object: String\n      public let deleted: Bool\n   }\n}\n```\n使用方法\n```swift\n\u002F\u002F\u002F 列出模型\nlet models = try await service.listModels().data\n```\n```swift\n\u002F\u002F\u002F 获取模型\nlet modelID = \"gpt-3.5-turbo-instruct\"\nlet retrievedModel = try await service.retrieveModelWith(id: modelID)\n```\n```swift\n\u002F\u002F\u002F 删除微调后的模型\nlet modelID = \"fine-tune-model-id\"\nlet deletionStatus = try await 
service.deleteFineTuneModelWith(id: modelID)\n```\n\n### 内容审核\n参数\n```swift\n\u002F\u002F\u002F [用于分类文本是否违反 OpenAI 的内容政策。](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fmoderations\u002Fcreate)\npublic struct ModerationParameter\u003CInput: Encodable>: Encodable {\n   \n   \u002F\u002F\u002F 要分类的输入文本，可以是字符串或数组。\n   let input: Input\n   \u002F\u002F\u002F 目前有两种内容审核模型可供选择：text-moderation-stable 和 text-moderation-latest。\n   \u002F\u002F\u002F 默认使用 text-moderation-latest 模型，该模型会随时间自动升级，以确保您始终使用我们最准确的模型。如果您选择 text-moderation-stable 模型，我们在更新该模型之前会提前通知您。text-moderation-stable 模型的准确性可能会略低于 text-moderation-latest 模型。\n   let model: String?\n   \n   enum Model: String {\n      case stable = \"text-moderation-stable\"\n      case latest = \"text-moderation-latest\"\n   }\n   \n   init(\n      input: Input,\n      model: Model? = nil)\n   {\n      self.input = input\n      self.model = model?.rawValue\n   }\n}\n```\n响应\n```swift\n\u002F\u002F\u002F [审核对象](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fmoderations\u002Fobject)。表示 OpenAI 的内容审核模型针对给定输入生成的政策合规报告。\npublic struct ModerationObject: Decodable {\n   \n   \u002F\u002F\u002F 审核请求的唯一标识符。\n   public let id: String\n   \u002F\u002F\u002F 用于生成审核结果的模型。\n   public let model: String\n   \u002F\u002F\u002F 审核结果列表。\n   public let results: [Moderation]\n   \n   public struct Moderation: Decodable {\n      \n      \u002F\u002F\u002F 内容是否违反 OpenAI 的使用政策。\n      public let flagged: Bool\n      \u002F\u002F\u002F 各类别的列表，以及这些类别是否被标记。\n      public let categories: Category\u003CBool>\n      \u002F\u002F\u002F 各类别的列表及其由模型预测的得分。\n      public let categoryScores: Category\u003CDouble>\n      \n      public struct Category\u003CT: Decodable>: Decodable {\n         \n         \u002F\u002F\u002F 表达、煽动或宣扬基于种族、性别、民族、宗教、国籍、性取向、残疾状况或种姓的仇恨内容。针对非受保护群体（例如国际象棋棋手）的仇恨内容被视为骚扰。\n         public let hate: T\n         \u002F\u002F\u002F 包含暴力或对目标群体造成严重伤害的仇恨内容，这些群体基于种族、性别、民族、宗教、国籍、性取向、残疾状况或种姓。\n         
public let hateThreatening: T\n         \u002F\u002F\u002F 表达、煽动或宣扬针对任何目标的骚扰性语言的内容。\n         public let harassment: T\n         \u002F\u002F\u002F 包含暴力或对任何目标造成严重伤害的骚扰内容。\n         public let harassmentThreatening: T\n         \u002F\u002F\u002F 宣传、鼓励或描绘自残行为的内容，例如自杀、割腕和饮食失调等。\n         public let selfHarm: T\n         \u002F\u002F\u002F 发言者表示正在实施或打算实施自残行为的内容，例如自杀、割腕和饮食失调等。\n         public let selfHarmIntent: T\n         \u002F\u002F\u002F 鼓励实施自残行为，例如自杀、割腕和饮食失调，或提供如何实施此类行为的指导或建议的内容。\n         public let selfHarmInstructions: T\n         \u002F\u002F\u002F 旨在引起性兴奋的内容，例如对性活动的描述，或宣传性服务的内容（不包括性教育和健康相关内容）。\n         public let sexual: T\n         \u002F\u002F\u002F 包含未满 18 岁个人的色情内容。\n         public let sexualMinors: T\n         \u002F\u002F\u002F 描绘死亡、暴力或身体伤害的内容。\n         public let violence: T\n         \u002F\u002F\u002F 以极度写实方式描绘死亡、暴力或身体伤害的内容。\n         public let violenceGraphic: T\n      }\n   }\n}\n```\n用法\n```swift\n\u002F\u002F\u002F 单个提示\nlet prompt = \"我要杀了他\"\nlet parameters = ModerationParameter(input: prompt)\nlet isFlagged = try await service.createModerationFromText(parameters: parameters)\n```\n```swift\n\u002F\u002F\u002F 多个提示\nlet prompts = [\"我要杀了他\", \"我要去死\"]\nlet parameters = ModerationParameter(input: prompts)\nlet isFlagged = try await service.createModerationFromTexts(parameters: parameters)\n```\n\n### **测试版**\n\n### 助手\n参数\n```swift\n\u002F\u002F\u002F 使用模型和指令创建一个[助手](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fassistants\u002FcreateAssistant)。\n\u002F\u002F\u002F 修改一个[助手](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fassistants\u002FmodifyAssistant)。\npublic struct AssistantParameters: Encodable {\n   \n   \u002F\u002F\u002F 要使用的模型ID。您可以使用[列出模型](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fmodels\u002Flist)API查看所有可用的模型，或参阅我们的[模型概述](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fmodels\u002Foverview)以获取它们的描述。\n   public var model: String?\n   
\u002F\u002F\u002F 助手的名称。最大长度为256个字符。\n   public var name: String?\n   \u002F\u002F\u002F 助手的描述。最大长度为512个字符。\n   public var description: String?\n   \u002F\u002F\u002F 助手所使用的系统指令。最大长度为32768个字符。\n   public var instructions: String?\n   \u002F\u002F\u002F 助手上启用的工具列表。每个助手最多可有128种工具。工具类型可以是代码解释器、检索或函数。默认值为[]\n   public var tools: [AssistantObject.Tool] = []\n   \u002F\u002F\u002F 可附加到对象上的16组键值对。这有助于以结构化格式存储有关该对象的附加信息。键的最大长度为64个字符，值的最大长度为512个字符。\n   public var metadata: [String: String]?\n   \u002F\u002F\u002F 采样温度范围在0到2之间。较高的值（如0.8）会使输出更随机，而较低的值（如0.2）则会使输出更专注且确定性更强。\n   \u002F\u002F\u002F 默认值为1\n   public var temperature: Double?\n   \u002F\u002F\u002F 一种替代温度采样的方法，称为核采样，在这种方法中，模型会考虑具有top_p概率质量的标记结果。例如，0.1表示仅考虑构成前10%概率质量的标记。\n   \u002F\u002F\u002F 我们通常建议调整此参数或温度，但不要同时调整两者。\n   \u002F\u002F\u002F 默认值为1\n   public var topP: Double?\n   \u002F\u002F\u002F 指定模型必须输出的格式。与GPT-4 Turbo以及自gpt-3.5-turbo-1106以来的所有GPT-3.5 Turbo模型兼容。\n   \u002F\u002F\u002F 设置为{ \"type\": \"json_object\" }可启用JSON模式，从而保证模型生成的消息是有效的JSON。\n   \u002F\u002F\u002F 重要提示：使用JSON模式时，您还必须通过系统消息或用户消息指示模型生成JSON。否则，模型可能会生成无尽的空白内容，直到达到令牌限制，从而导致请求长时间运行并看似“卡住”。此外，如果finish_reason=\"length\"，则消息内容可能会被部分截断，这表明生成已超过max_tokens或对话已超过最大上下文长度。\n   \u002F\u002F\u002F 默认值为`auto`\n   public var responseFormat: ResponseFormat?\n   \n   public enum Action {\n      case create(model: String) \u002F\u002F 创建助手时需要指定模型。\n      case modify(model: String?) \u002F\u002F 修改助手时模型为可选。\n      \n      var model: String? 
{\n         switch self {\n         case .create(let model): return model\n         case .modify(let model): return model\n         }\n      }\n   }\n}\n```\n响应\n```swift\n\u002F\u002F\u002F 表示一个可以调用模型并使用工具的[助手](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fassistants)。\npublic struct AssistantObject: Decodable {\n   \n   \u002F\u002F\u002F 标识符，可在API端点中引用。\n   public let id: String\n   \u002F\u002F\u002F 对象类型，始终为“assistant”。\n   public let object: String\n   \u002F\u002F\u002F 助手创建时的Unix时间戳（以秒为单位）。\n   public let createdAt: Int\n   \u002F\u002F\u002F 助手的名称。最大长度为256个字符。\n   public let name: String?\n   \u002F\u002F\u002F 助手的描述。最大长度为512个字符。\n   public let description: String?\n   \u002F\u002F\u002F 要使用的模型ID。您可以使用[列出模型](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fmodels\u002Flist)API查看所有可用的模型，或参阅我们的[模型概述](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fmodels\u002Foverview)以获取它们的描述。\n   public let model: String\n   \u002F\u002F\u002F 助手所使用的系统指令。最大长度为32768个字符。\n   public let instructions: String?\n   \u002F\u002F\u002F 助手上启用的工具列表。每个助手最多可有128种工具。工具类型可以是代码解释器、检索或函数。\n   public let tools: [Tool]\n   \u002F\u002F\u002F 与此助手关联的[文件](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Ffiles)ID列表。每个助手最多可关联20个文件。文件按创建日期升序排列。\n   \u002F\u002F\u002F 助手工具所使用的资源集合。这些资源因工具类型而异。例如，代码解释器工具需要文件ID列表，而文件搜索工具则需要向量存储ID列表。\n   public let toolResources: ToolResources?\n   \u002F\u002F\u002F 可附加到对象上的16组键值对。这有助于以结构化格式存储有关该对象的附加信息。键的最大长度为64个字符，值的最大长度为512个字符。\n   public let metadata: [String: String]?\n   \u002F\u002F\u002F 采样温度范围在0到2之间。较高的值（如0.8）会使输出更随机，而较低的值（如0.2）则会使输出更专注且确定性更强。\n   \u002F\u002F\u002F 默认值为1\n   public var temperature: Double?\n   \u002F\u002F\u002F 一种替代温度采样的方法，称为核采样，在这种方法中，模型会考虑具有top_p概率质量的标记结果。例如，0.1表示仅考虑构成前10%概率质量的标记。\n   \u002F\u002F\u002F 我们通常建议调整此参数或温度，但不要同时调整两者。\n   \u002F\u002F\u002F 默认值为1\n   public var topP: Double?\n   \u002F\u002F\u002F 指定模型必须输出的格式。与GPT-4 
Turbo以及自gpt-3.5-turbo-1106以来的所有GPT-3.5 Turbo模型兼容。\n   \u002F\u002F\u002F 设置为{ \"type\": \"json_object\" }可启用JSON模式，从而保证模型生成的消息是有效的JSON。\n   \u002F\u002F\u002F 重要提示：使用JSON模式时，您还必须通过系统消息或用户消息指示模型生成JSON。否则，模型可能会生成无尽的空白内容，直到达到令牌限制，从而导致请求长时间运行并看似“卡住”。此外，如果finish_reason=\"length\"，则消息内容可能会被部分截断，这表明生成已超过max_tokens或对话已超过最大上下文长度。\n   \u002F\u002F\u002F 默认值为`auto`\n   public var responseFormat: ResponseFormat?\n   \n   public struct Tool: Codable {\n      \n      \u002F\u002F\u002F 定义的工具类型。\n      public let type: String\n      public let function: ChatCompletionParameters.ChatFunction?\n      \n      public enum ToolType: String, CaseIterable {\n         case codeInterpreter = \"code_interpreter\"\n         case fileSearch = \"file_search\"\n         case function\n      }\n      \n      \u002F\u002F\u002F 辅助属性，用于显示工具类型。\n      public var displayToolType: ToolType? { .init(rawValue: type) }\n      \n      public init(\n         type: ToolType,\n         function: ChatCompletionParameters.ChatFunction? 
= nil)\n      {\n         self.type = type.rawValue\n         self.function = function\n      }\n   }\n   \n   public struct DeletionStatus: Decodable {\n      public let id: String\n      public let object: String\n      public let deleted: Bool\n   }\n}\n```\n\n使用方法\n\n创建助手\n```swift\nlet parameters = AssistantParameters(action: .create(model: Model.gpt41106Preview.rawValue), name: \"数学家教\")\nlet assistant = try await service.createAssistant(parameters: parameters)\n```\n获取助手\n```swift\nlet assistantID = \"asst_abc123\"\nlet assistant = try await service.retrieveAssistant(id: assistantID)\n```\n修改助手\n```swift\nlet assistantID = \"asst_abc123\"\nlet parameters = AssistantParameters(action: .modify(model: nil), name: \"儿童数学家教\")\nlet assistant = try await service.modifyAssistant(id: assistantID, parameters: parameters)\n```\n删除助手\n```swift\nlet assistantID = \"asst_abc123\"\nlet deletionStatus = try await service.deleteAssistant(id: assistantID)\n```\n列出助手\n```swift\nlet assistants = try await service.listAssistants()\n```\n\n\n\n### 线程\n参数\n```swift\n\u002F\u002F\u002F 创建一个[线程](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fthreads\u002FcreateThread)\npublic struct CreateThreadParameters: Encodable {\n   \n   \u002F\u002F\u002F 用于启动线程的消息列表。\n   public var messages: [MessageObject]?\n   \u002F\u002F\u002F 助手工具使用的资源集合。这些资源因工具类型而异。例如，code_interpreter 工具需要文件 ID 列表，而 file_search 工具则需要向量存储 ID 列表。\n   public var toolResources: ToolResources?\n   \u002F\u002F\u002F 可附加到对象上的 16 组键值对。这有助于以结构化方式存储关于对象的额外信息。键的最大长度为 64 个字符，值的最大长度为 512 个字符。\n   public var metadata: [String: String]?\n}\n```\n响应\n```swift\n\u002F\u002F\u002F [线程对象](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fthreads) 表示包含[消息](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fmessages)的线程。\npublic struct ThreadObject: Decodable {\n   \n   \u002F\u002F\u002F 标识符，可在 API 端点中引用。\n   public let id: String\n   \u002F\u002F\u002F 对象类型，始终为 thread。\n   
public let object: String\n   \u002F\u002F\u002F 线程创建时的 Unix 时间戳（以秒为单位）。\n   public let createdAt: Int\n   \u002F\u002F\u002F 助手工具使用的资源集合。这些资源因工具类型而异。例如，code_interpreter 工具需要文件 ID 列表，而 file_search 工具则需要向量存储 ID 列表。\n   public var toolResources: ToolResources?\n   \u002F\u002F\u002F 可附加到对象上的 16 组键值对。这有助于以结构化方式存储关于对象的额外信息。键的最大长度为 64 个字符，值的最大长度为 512 个字符。\n   public let metadata: [String: String]\n   \n}\n```\n\n使用方法\n\n创建线程。\n```swift\nlet parameters = CreateThreadParameters()\nlet thread = try await service.createThread(parameters: parameters)\n```\n获取线程。\n```swift\nlet threadID = \"thread_abc123\"\nlet thread = try await service.retrieveThread(id: threadID)\n```\n修改线程。\n```swift\nlet threadID = \"thread_abc123\"\nlet parameters = CreateThreadParameters(metadata: [\"modified\": \"true\", \"user\": \"abc123\"])\nlet thread = try await service.modifyThread(id: threadID, parameters: parameters)\n```\n删除线程。\n```swift\nlet threadID = \"thread_abc123\"\nlet thread = try await service.deleteThread(id: threadID)\n```\n\n### 消息\n参数\n[创建消息](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fmessages\u002FcreateMessage)\n```swift\npublic struct MessageParameter: Encodable {\n   \n   \u002F\u002F\u002F 发送消息的实体的角色。允许的值包括：\n   \u002F\u002F\u002F user：表示消息由实际用户发送，在大多数情况下应使用此值来代表用户生成的消息。\n   \u002F\u002F\u002F assistant：表示消息由助手生成。使用此值可将助手的消息插入对话中。\n   let role: String\n   \u002F\u002F\u002F 消息的内容，可以是字符串或内容片段数组（文本、图像URL、图像文件）。\n   let content: Content\n   \u002F\u002F\u002F 附加到消息的文件列表，以及这些文件应被添加到的工具。\n   let attachments: [MessageAttachment]?\n   \u002F\u002F\u002F 可附加到对象的一组16个键值对。这有助于以结构化格式存储关于对象的额外信息。键的最大长度为64个字符，值的最大长度为512个字符。\n   let metadata: [String: String]?\n}\n```\n[修改消息](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fmessages\u002FmodifyMessage)\n```swift\npublic struct ModifyMessageParameters: Encodable {\n   \n   \u002F\u002F\u002F 可附加到对象的一组16个键值对。这有助于以结构化格式存储关于对象的额外信息。键的最大长度为64个字符，值的最大长度为512个字符。\n   public var metadata: 
[String: String]\n}\n```\n响应\n```swift\n\u002F\u002F\u002F 表示[线程](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fthreads)中的一个[消息](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fmessages)。\npublic struct MessageObject: Codable {\n   \n   \u002F\u002F\u002F 标识符，可在API端点中引用。\n   public let id: String\n   \u002F\u002F\u002F 对象类型，始终为thread.message。\n   public let object: String\n   \u002F\u002F\u002F 消息创建时的Unix时间戳（以秒为单位）。\n   public let createdAt: Int\n   \u002F\u002F\u002F 此消息所属的[线程](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fthreads)ID。\n   public let threadID: String\n   \u002F\u002F\u002F 消息的状态，可以是in_progress、incomplete或completed。\n   public let status: String\n   \u002F\u002F\u002F 对于未完成的消息，说明其未完成的原因。\n   public let incompleteDetails: IncompleteDetails?\n   \u002F\u002F\u002F 消息完成时的Unix时间戳（以秒为单位）。\n   public let completedAt: Int\n   \u002F\u002F\u002F 产生消息的实体。可以是user或assistant。\n   public let role: String\n   \u002F\u002F\u002F 消息的内容，以文本和\u002F或图像的数组形式呈现。\n   public let content: [MessageContent]\n   \u002F\u002F\u002F 如果适用，撰写此消息的[助手](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fassistants)ID。\n   public let assistantID: String?\n   \u002F\u002F\u002F 如果适用，与撰写此消息相关的[运行](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fruns)ID。\n   public let runID: String?\n   \u002F\u002F\u002F 附加到消息的文件列表，以及这些文件被添加到的工具。\n   public let attachments: [MessageAttachment]?\n   \u002F\u002F\u002F 可附加到对象的一组16个键值对。这有助于以结构化格式存储关于对象的额外信息。键的最大长度为64个字符，值的最大长度为512个字符。\n   public let metadata: [String: String]?\n   \n   enum Role: String {\n      case user\n      case assistant\n   }\n}\n\n\u002F\u002F MARK: MessageContent\n\npublic enum MessageContent: Codable {\n   \n   case imageFile(ImageFile)\n   case text(Text)\n}\n\n\u002F\u002F MARK: Image File\n\npublic struct ImageFile: Codable {\n   \u002F\u002F\u002F 始终为image_file。\n   public let type: String\n   \n   
\u002F\u002F\u002F 在消息内容中引用一张[文件](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Ffiles)。\n   public let imageFile: ImageFileContent\n   \n   public struct ImageFileContent: Codable {\n      \n      \u002F\u002F\u002F 消息内容中图像所对应的[文件](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Ffiles)ID。\n      public let fileID: String\n   }\n}\n\n\u002F\u002F MARK: Text\n\npublic struct Text: Codable {\n   \n   \u002F\u002F\u002F 始终为text。\n   public let type: String\n   \u002F\u002F\u002F 作为消息一部分的文本内容。\n   public let text: TextContent\n   \n   public struct TextContent: Codable {\n      \u002F\u002F 构成文本的数据。\n      public let value: String\n      \n      public let annotations: [Annotation]\n   }\n}\n\n\u002F\u002F MARK: Annotation\n\npublic enum Annotation: Codable {\n   \n   case fileCitation(FileCitation)\n   case filePath(FilePath)\n}\n\n\u002F\u002F MARK: FileCitation\n\n\u002F\u002F\u002F 消息中指向与助手或消息相关联的特定文件中某一具体引文的引用。当助手使用“retrieval”工具搜索文件时生成。\npublic struct FileCitation: Codable {\n   \n   \u002F\u002F\u002F 始终为file_citation。\n   public let type: String\n   \u002F\u002F\u002F 需要替换的消息内容中的文本。\n   public let text: String\n   public let fileCitation: FileCitation\n   public  let startIndex: Int\n   public let endIndex: Int\n   \n   public struct FileCitation: Codable {\n      \n      \u002F\u002F\u002F 引文来自的具体文件ID。\n      public let fileID: String\n      \u002F\u002F\u002F 文件中的具体引文。\n      public let quote: String\n\n   }\n}\n\n\u002F\u002F MARK: FilePath\n\n\u002F\u002F\u002F 当助手使用code_interpreter工具生成文件时产生的文件路径URL。\npublic struct FilePath: Codable {\n   \n   \u002F\u002F\u002F 始终为file_path。\n   public let type: String\n   \u002F\u002F\u002F 需要替换的消息内容中的文本。\n   public let text: String\n   public let filePath: FilePath\n   public let startIndex: Int\n   public let endIndex: Int\n   \n   public struct FilePath: Codable {\n      \u002F\u002F\u002F 所生成文件的ID。\n      public let fileID: String\n   
}\n}\n```\n\n用法\n\n创建消息。\n```swift\nlet threadID = \"thread_abc123\"\nlet prompt = \"给我一些生日派对的创意。\"\nlet parameters = MessageParameter(role: \"user\", content: .stringContent(prompt))\nlet message = try await service.createMessage(threadID: threadID, parameters: parameters)\n```\n\n获取消息。\n```swift\nlet threadID = \"thread_abc123\"\nlet messageID = \"msg_abc123\"\nlet message = try await service.retrieveMessage(threadID: threadID, messageID: messageID)\n```\n\n修改消息。\n```swift\nlet threadID = \"thread_abc123\"\nlet messageID = \"msg_abc123\"\nlet parameters = ModifyMessageParameters(metadata: [\"modified\": \"true\", \"user\": \"abc123\"])\nlet message = try await service.modifyMessage(threadID: threadID, messageID: messageID, parameters: parameters)\n```\n\n列出消息\n```swift\nlet threadID = \"thread_abc123\"\nlet messages = try await service.listMessages(threadID: threadID, limit: nil, order: nil, after: nil, before: nil) \n```\n\n### 运行\n参数\n\n[创建运行](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fruns\u002FcreateRun)\n```swift\npublic struct RunParameter: Encodable {\n   \n   \u002F\u002F\u002F 用于执行此运行的[助手](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fassistants)的ID。\n   let assistantID: String\n   \u002F\u002F\u002F 用于执行此运行的[模型](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fmodels)的ID。如果在此处提供值，则会覆盖与助手关联的模型。否则，将使用与助手关联的模型。\n   let model: String?\n   \u002F\u002F\u002F 覆盖助手的默认系统消息。这在每次运行时修改行为时非常有用。\n   let instructions: String?\n   \u002F\u002F\u002F 在运行的指令末尾附加额外的指令。这在不覆盖其他指令的情况下，按每次运行修改行为时很有用。\n   let additionalInstructions: String?\n   \u002F\u002F\u002F 在创建运行之前，向线程添加额外的消息。\n   let additionalMessages: [MessageParameter]?\n   \u002F\u002F\u002F 覆盖助手在此运行中可以使用的工具。这在每次运行时修改行为时很有用。\n   let tools: [AssistantObject.Tool]?\n   \u002F\u002F\u002F 可以附加到对象上的16个键值对集合。这有助于以结构化格式存储有关对象的更多信息。键的最大长度为64个字符，值的最大长度为512个字符。\n   let metadata: [String: String]?\n   \u002F\u002F\u002F 
使用的采样温度范围为0到2。较高的值（如0.8）会使输出更随机，而较低的值（如0.2）则会使输出更集中和确定性。\n   \u002F\u002F\u002F 可选，默认为1\n   let temperature: Double?\n   \u002F\u002F\u002F 如果为真，则以服务器发送事件的形式返回运行期间发生的事件流，当运行进入终止状态并发出data: [DONE]消息时结束。\n   var stream: Bool\n   \u002F\u002F\u002F 运行过程中可能使用的最大提示令牌数。运行将尽力在多次回合中仅使用指定数量的提示令牌。如果运行超过指定的提示令牌数，则将以“未完成”状态结束。更多信息请参见incomplete_details。\n   let maxPromptTokens: Int?\n   \u002F\u002F\u002F 运行过程中可能使用的最大完成令牌数。运行将尽力在多次回合中仅使用指定数量的完成令牌。如果运行超过指定的完成令牌数，则将以“未完成”状态结束。更多信息请参见incomplete_details。\n   let maxCompletionTokens: Int?\n   \u002F\u002F\u002F 控制线程在运行前如何截断。可用于控制运行的初始上下文窗口。\n   let truncationStrategy: TruncationStrategy?\n   \u002F\u002F\u002F 控制模型调用哪种工具（如果有的话）。none表示模型不会调用任何工具，而是生成一条消息。auto是默认值，表示模型可以在生成消息或调用工具之间进行选择。指定特定工具，例如{\"type\": \"file_search\"}或{\"type\": \"function\", \"function\": {\"name\": \"my_function\"}}，则强制模型调用该工具。\n   let toolChoice: ToolChoice?\n   \u002F\u002F\u002F 指定模型必须输出的格式。与GPT-4 Turbo以及所有比gpt-3.5-turbo-1106更新的GPT-3.5 Turbo模型兼容。\n   \u002F\u002F\u002F 设置为{ \"type\": \"json_object\" }可启用JSON模式，从而保证模型生成的消息是有效的JSON。\n   \u002F\u002F\u002F 重要提示：使用JSON模式时，您还必须通过系统消息或用户消息指示模型自行生成JSON。否则，模型可能会生成无休止的空白内容，直到达到令牌限制，从而导致长时间运行且看似“卡住”的请求。此外，如果finish_reason=\"length\"，则消息内容可能会被部分截断，这表明生成已超过max_tokens或对话已超过最大上下文长度。\n   let responseFormat: ResponseFormat?\n}\n```\n[修改运行](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fruns\u002FmodifyRun)\n```swift\npublic struct ModifyRunParameters: Encodable {\n   \n   \u002F\u002F\u002F 可以附加到对象上的16个键值对集合。这有助于以结构化格式存储有关对象的更多信息。键的最大长度为64个字符，值的最大长度为512个字符。\n   public var metadata: [String: String]\n   \n   public init(\n      metadata: [String : String])\n   {\n      self.metadata = metadata\n   }\n}\n```\n[创建线程并运行](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fruns\u002FcreateThreadAndRun)\n```swift\npublic struct CreateThreadAndRunParameter: Encodable {\n   \n   \u002F\u002F\u002F 
用于执行此运行的[助手](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fassistants)的ID。\n   let assistantId: String\n   \u002F\u002F\u002F 要创建的线程。\n   let thread: CreateThreadParameters?\n   \u002F\u002F\u002F 用于执行此运行的[模型](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fmodels)的ID。如果在此处提供值，则会覆盖与助手关联的模型。否则，将使用与助手关联的模型。\n   let model: String?\n   \u002F\u002F\u002F 覆盖助手的默认系统消息。这在每次运行时修改行为时很有用。\n   let instructions: String?\n   \u002F\u002F\u002F 覆盖助手在此运行中可以使用的工具。这在每次运行时修改行为时很有用。\n   let tools: [AssistantObject.Tool]?\n   \u002F\u002F\u002F 可以附加到对象上的16个键值对集合。这有助于以结构化格式存储有关对象的更多信息。键的最大长度为64个字符，值的最大长度为512个字符。\n   let metadata: [String: String]?\n   \u002F\u002F\u002F 使用的采样温度范围为0到2。较高的值（如0.8）会使输出更随机，而较低的值（如0.2）则会使输出更集中和确定性。\n   \u002F\u002F\u002F 默认为1\n   let temperature: Double?\n   \u002F\u002F\u002F 一种替代温度采样的方法，称为核采样，在这种方法中，模型会考虑具有top_p概率质量的标记结果。因此，0.1表示仅考虑构成顶部10%概率质量的标记。\n   \u002F\u002F\u002F 我们通常建议调整此参数或温度，但不要同时调整两者。\n   let topP: Double?\n   \u002F\u002F\u002F 如果为真，则以服务器发送事件的形式返回运行期间发生的事件流，当运行进入终止状态并发出data: [DONE]消息时结束。\n   var stream: Bool = false\n   \u002F\u002F\u002F 运行过程中可能使用的最大提示令牌数。运行将尽力在多次回合中仅使用指定数量的提示令牌。如果运行超过指定的提示令牌数，则将以“未完成”状态结束。更多信息请参见incomplete_details。\n   let maxPromptTokens: Int?\n   \u002F\u002F\u002F 运行过程中可能使用的最大完成令牌数。运行将尽力在多次回合中仅使用指定数量的完成令牌。如果运行超过指定的完成令牌数，则将以“未完成”状态结束。更多信息请参见incomplete_details。\n   let maxCompletionTokens: Int?\n   \u002F\u002F\u002F 控制线程在运行前如何截断。可用于控制运行的初始上下文窗口。\n   let truncationStrategy: TruncationStrategy?\n   \u002F\u002F\u002F 控制模型调用哪种工具（如果有的话）。none表示模型不会调用任何工具，而是生成一条消息。auto是默认值，表示模型可以在生成消息或调用工具之间进行选择。指定特定工具，例如{\"type\": \"file_search\"}或{\"type\": \"function\", \"function\": {\"name\": \"my_function\"}}，则强制模型调用该工具。\n   let toolChoice: ToolChoice?\n   \u002F\u002F\u002F 指定模型必须输出的格式。与GPT-4 Turbo以及所有比gpt-3.5-turbo-1106更新的GPT-3.5 Turbo模型兼容。\n   \u002F\u002F\u002F 设置为{ \"type\": \"json_object\" }可启用JSON模式，从而保证模型生成的消息是有效的JSON。\n   \u002F\u002F\u002F 
重要提示：使用JSON模式时，您还必须通过系统消息或用户消息指示模型自行生成JSON。否则，模型可能会生成无尽的空白内容，直到达到令牌限制，从而导致长时间运行且看似“卡住”的请求。此外，如果finish_reason=\"length\"，则消息内容可能会被部分截断，这表明生成已超过max_tokens或对话已超过最大上下文长度。\n   let responseFormat: ResponseFormat?\n}\n```\n[提交工具输出到运行](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fruns\u002FsubmitToolOutputs)\n```swift\npublic struct RunToolsOutputParameter: Encodable {\n   \n   \u002F\u002F\u002F 提交输出的工具列表。\n   public let toolOutputs: [ToolOutput]\n   \u002F\u002F\u002F 如果为真，则以服务器发送事件的形式返回运行期间发生的事件流，当运行进入终止状态并发出data: [DONE]消息时结束。\n   public let stream: Bool\n}\n```\n   \n响应\n```swift\npublic struct RunObject: Decodable {\n   \n   \u002F\u002F\u002F 可以在API端点中引用的标识符。\n   public let id: String\n   \u002F\u002F\u002F 对象类型，始终为thread.run。\n   public let object: String\n   \u002F\u002F\u002F 运行创建时的Unix时间戳（以秒为单位）。\n   public let createdAt: Int?\n   \u002F\u002F\u002F 作为此运行的一部分执行的[线程](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fthreads)的ID。\n   public let threadID: String\n   \u002F\u002F\u002F 用于执行此运行的[助手](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fassistants)的ID。\n   public let assistantID: String\n   \u002F\u002F\u002F 运行的状态，可能是queued、in_progress、requires_action、cancelling、cancelled、failed、completed或expired。\n   public let status: String\n   \u002F\u002F\u002F 继续运行所需的行动详情。如果没有所需行动，则为null。\n   public let requiredAction: RequiredAction?\n   \u002F\u002F\u002F 与此运行相关的最后一个错误。如果没有错误，则为null。\n   public let lastError: LastError?\n   \u002F\u002F\u002F 运行到期时的Unix时间戳（以秒为单位）。\n   public let expiresAt: Int?\n   \u002F\u002F\u002F 运行开始时的Unix时间戳（以秒为单位）。\n   public let startedAt: Int?\n   \u002F\u002F\u002F 运行取消时的Unix时间戳（以秒为单位）。\n   public let cancelledAt: Int?\n   \u002F\u002F\u002F 运行失败时的Unix时间戳（以秒为单位）。\n   public let failedAt: Int?\n   \u002F\u002F\u002F 运行完成时的Unix时间戳（以秒为单位）。\n   public let completedAt: Int?\n   \u002F\u002F\u002F 运行未完成的原因详情。如果运行未完成，则为null。\n   public let incompleteDetails: 
IncompleteDetails?\n   \u002F\u002F\u002F 此运行所使用的[助手](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fassistants)的模型。\n   public let model: String\n   \u002F\u002F\u002F 此运行所使用的[助手](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fassistants)的指令。\n   public let instructions: String?\n   \u002F\u002F\u002F 此运行所使用的[助手](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fassistants)的工具列表。\n   public let tools: [AssistantObject.Tool]\n   \u002F\u002F\u002F 可以附加到对象上的16个键值对集合。这有助于以结构化格式存储有关对象的更多信息。键的最大长度为64个字符，值的最大长度为512个字符。\n   public let metadata: [String: String]\n   \u002F\u002F\u002F 与运行相关的使用统计信息。如果运行未处于终端状态（即in_progress、queued等），则此值为null。\n   public let usage: Usage?\n   \u002F\u002F\u002F 为此运行使用的采样温度。如果未设置，则默认为1。\n   public let temperature: Double?\n   \u002F\u002F\u002F 为此运行使用的核采样值。如果未设置，则默认为1。\n   public let topP: Double?\n   \u002F\u002F\u002F 运行过程中指定使用的最大提示令牌数。\n   public let maxPromptTokens: Int?\n   \u002F\u002F\u002F 运行过程中指定使用的最大完成令牌数。\n   public let maxCompletionTokens: Int?\n   \u002F\u002F\u002F 控制线程在运行前如何截断。可用于控制运行的初始上下文窗口。\n   public let truncationStrategy: TruncationStrategy?\n   \u002F\u002F\u002F 控制模型调用哪种工具（如果有的话）。none表示模型不会调用任何工具，而是生成一条消息。auto是默认值，表示模型可以在生成消息或调用工具之间进行选择。指定特定工具，例如{\"type\": \"TOOL_TYPE\"}或{\"type\": \"function\", \"function\": {\"name\": \"my_function\"}}，则强制模型调用该工具。\n   public let toolChoice: ToolChoice?\n   \u002F\u002F\u002F 指定模型必须输出的格式。与GPT-4 Turbo以及所有比gpt-3.5-turbo-1106更新的GPT-3.5 Turbo模型兼容。\n   \u002F\u002F\u002F 设置为{ \"type\": \"json_object\" }可启用JSON模式，从而保证模型生成的消息是有效的JSON。\n   \u002F\u002F\u002F 重要提示：使用JSON模式时，您还必须通过系统消息或用户消息指示模型自行生成JSON。否则，模型可能会生成无尽的空白内容，直到达到令牌限制，从而导致长时间运行且看似“卡住”的请求。此外，如果finish_reason=\"length\"，则消息内容可能会被部分截断，这表明生成已超过max_tokens或对话已超过最大上下文长度。\n   public let responseFormat: ResponseFormat?\n}\n```\n用法\n\n创建一次运行\n```swift\nlet assistantID = \"asst_abc123\"\nlet parameters = RunParameter(assistantID: assistantID)\nlet run = try await 
service.createRun(threadID: threadID, parameters: parameters)\n```\n获取一次运行\n```swift\nlet threadID = \"thread_abc123\"\nlet runID = \"run_abc123\"\nlet run = try await service.retrieveRun(threadID: threadID, runID: runID)\n```\n修改一次运行\n```swift\nlet threadID = \"thread_abc123\"\nlet runID = \"run_abc123\"\nlet parameters = ModifyRunParameters(metadata: [\"modified\": \"true\", \"user\": \"abc123\"])\nlet run = try await service.modifyRun(threadID: threadID, runID: runID, parameters: parameters)\n```\n列出所有运行\n```swift\nlet threadID = \"thread_abc123\"\nlet runs = try await service.listRuns(threadID: threadID, limit: nil, order: nil, after: nil, before: nil)\n```\n向运行提交工具输出\n```swift\nlet threadID = \"thread_abc123\"\nlet runID = \"run_abc123\"\nlet toolCallID = \"call_abc123\"\nlet output = \"28C\"\nlet parameters = RunToolsOutputParameter(toolOutputs: [.init(toolCallId: toolCallID, output: output)])\nlet run = try await service.submitToolOutputsToRun(threadID: threadID, runID: runID, parameters: parameters)\n```\n取消一次运行\n```swift\n\u002F\u002F 取消一个正在进行中的运行。\nlet threadID = \"thread_abc123\"\nlet runID = \"run_abc123\"\nlet run = try await service.cancelRun(threadID: threadID, runID: runID)\n```\n创建线程和运行\n```swift\nlet assistantID = \"asst_abc123\"\nlet parameters = CreateThreadAndRunParameter(assistantID: assistantID)\nlet run = try await service.createThreadAndRun(parameters: parameters)\n```\n\n\n\n### 运行步骤对象\n表示运行执行过程中的一个[步骤](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fruns\u002Fstep-object)。\n响应\n```swift\npublic struct RunStepObject: Decodable {\n   \n   \u002F\u002F\u002F 运行步骤的标识符，可在 API 端点中引用。\n   public let id: String\n   \u002F\u002F\u002F 对象类型，始终为 `thread.run.step`。\n   public let object: String\n   \u002F\u002F\u002F 创建该运行步骤时的 Unix 时间戳（以秒为单位）。\n   public let createdAt: Int\n   \u002F\u002F\u002F 与该运行步骤关联的[助手](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fassistants)的 ID。\n   public let 
assistantId: String\n   \u002F\u002F\u002F 被运行的[线程](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fthreads)的 ID。\n   public let threadId: String\n   \u002F\u002F\u002F 该运行步骤所属的[运行](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fruns)的 ID。\n   public let runId: String\n   \u002F\u002F\u002F 运行步骤的类型，可以是 message_creation 或 tool_calls。\n   public let type: String\n   \u002F\u002F\u002F 运行步骤的状态，可以是 in_progress、cancelled、failed、completed 或 expired。\n   public let status: String\n   \u002F\u002F\u002F 运行步骤的详细信息。\n   public let stepDetails: RunStepDetails\n   \u002F\u002F\u002F 与此运行步骤相关的最后一个错误。如果没有错误，则为 null。\n   public let lastError: RunObject.LastError?\n   \u002F\u002F\u002F 运行步骤过期时的 Unix 时间戳（以秒为单位）。如果父级运行已过期，则该步骤也被视为已过期。\n   public let expiredAt: Int?\n   \u002F\u002F\u002F 运行步骤被取消时的 Unix 时间戳（以秒为单位）。\n   public let cancelledAt: Int?\n   \u002F\u002F\u002F 运行步骤失败时的 Unix 时间戳（以秒为单位）。\n   public let failedAt: Int?\n   \u002F\u002F\u002F 运行步骤完成时的 Unix 时间戳（以秒为单位）。\n   public let completedAt: Int?\n   \u002F\u002F\u002F 一组可附加到对象上的 16 个键值对。这有助于以结构化格式存储关于对象的额外信息。键的最大长度为 64 个字符，值的最大长度为 512 个字符。\n   public let metadata: [String: String]?\n   \u002F\u002F\u002F 与运行步骤相关的使用统计信息。当运行步骤状态为 in_progress 时，此值为 null。\n   public let usage: Usage?\n}\n```\n用法\n获取一个运行步骤\n```swift\nlet threadID = \"thread_abc123\"\nlet runID = \"run_abc123\"\nlet stepID = \"step_abc123\"\nlet runStep = try await service.retrieveRunstep(threadID: threadID, runID: runID, stepID: stepID)\n```\n列出所有运行步骤\n```swift\nlet threadID = \"thread_abc123\"\nlet runID = \"run_abc123\"\nlet runSteps = try await service.listRunSteps(threadID: threadID, runID: runID, limit: nil, order: nil, after: nil, before: nil)\n```\n\n### 运行步骤详情\n\n运行步骤的详细信息。\n\n```swift\npublic struct RunStepDetails: Codable {\n   \n   \u002F\u002F\u002F `message_creation` 或 `tool_calls`\n   public let type: String\n   \u002F\u002F\u002F 运行步骤创建消息的详细信息。\n   public let messageCreation: MessageCreation?\n   
\u002F\u002F\u002F 工具调用的详细信息。\n   public let toolCalls: [ToolCall]?\n}\n```\n\n### 助手流式传输\n\n助手 API 的[流式传输。](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fassistants-streaming)\n\n您可以流式传输执行运行或在提交工具输出后恢复运行的结果。\n\n通过将参数设置为 \"stream\": true，您可以从 [创建线程和运行](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fruns\u002FcreateThreadAndRun)、[创建运行](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fruns\u002FcreateRun) 和 [提交工具输出](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fruns\u002FsubmitToolOutputs) 端点流式传输事件。响应将是一个服务器发送事件流。\n\nOpenAI Python 教程（https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fassistants\u002Foverview?context=with-streaming）\n\n### 消息增量对象\n\n[MessageDeltaObject](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fassistants-streaming\u002Fmessage-delta-object) 表示消息的增量，即在流式传输过程中消息中任何发生变化的字段。\n\n```swift\npublic struct MessageDeltaObject: Decodable {\n   \n   \u002F\u002F\u002F 消息的标识符，可在 API 端点中引用。\n   public let id: String\n   \u002F\u002F\u002F 对象类型，始终为 thread.message.delta。\n   public let object: String\n   \u002F\u002F\u002F 包含消息中已更改字段的增量。\n   public let delta: Delta\n   \n   public struct Delta: Decodable {\n      \n      \u002F\u002F\u002F 生成消息的实体，可以是 user 或 assistant。\n      public let role: String\n      \u002F\u002F\u002F 消息内容，由文本和\u002F或图像组成的数组。\n      public let content: [MessageContent]\n   }\n}\n```\n\n### 运行步骤增量对象\n\n表示一个 [运行步骤增量](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fassistants-streaming\u002Frun-step-delta-object)，即在流式传输过程中运行步骤上任何已更改的字段。\n\n```swift\npublic struct RunStepDeltaObject: Decodable {\n   \n   \u002F\u002F\u002F 运行步骤的标识符，可在 API 端点中引用。\n   public let id: String\n   \u002F\u002F\u002F 对象类型，始终为 thread.run.step.delta。\n   public let object: String\n   \u002F\u002F\u002F 包含运行步骤上已更改字段的增量。\n   public let delta: Delta\n   \n   public struct Delta: Decodable {\n      \n      
\u002F\u002F\u002F 运行步骤的详细信息。\n      public let stepDetails: RunStepDetails\n      \n      private enum CodingKeys: String, CodingKey {\n         case stepDetails = \"step_details\"\n      }\n   }\n}\n```\n\n⚠️ 要使用 `createRunAndStreamMessage`，请先创建助手并启动线程。\n\n用法\n通过流式传输创建 [运行](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fruns\u002FcreateRun)。\n\n`createRunAndStreamMessage` 会流式传输 [事件](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fassistants-streaming\u002Fevents)，您可以根据自己的实现需求选择所需的事件类型。例如，以下是如何访问消息增量和运行步骤增量对象的方法：\n\n```swift\nlet assistantID = \"asst_abc123\"\nlet threadID = \"thread_abc123\"\nlet messageParameter = MessageParameter(role: .user, content: \"告诉我 1235 的平方根\")\nlet message = try await service.createMessage(threadID: threadID, parameters: messageParameter)\nlet runParameters = RunParameter(assistantID: assistantID)\nlet stream = try await service.createRunAndStreamMessage(threadID: threadID, parameters: runParameters)\n\n         for try await result in stream {\n            switch result {\n            case .threadMessageDelta(let messageDelta):\n               let content = messageDelta.delta.content.first\n               switch content {\n               case .imageFile, nil:\n                  break\n               case .text(let textContent):\n                  print(textContent.text.value) \u002F\u002F 这将打印出消息的流式响应。\n               }\n               \n            case .threadRunStepDelta(let runStepDelta):\n               if let toolCall = runStepDelta.delta.stepDetails.toolCalls?.first?.toolCall {\n                  switch toolCall {\n                  case .codeInterpreterToolCall(let toolCall):\n                     print(toolCall.input ?? 
\"\") \u002F\u002F 这将打印出代码解释器工具调用的流式响应。\n                  case .fileSearchToolCall(let toolCall):\n                     print(\"文件搜索工具调用\")\n                  case .functionToolCall(let toolCall):\n                     print(\"函数工具调用\")\n                  case nil:\n                     break\n                  }\n               }\n            }\n         }\n```\n\n您可以在本包中的 [Examples 文件夹](https:\u002F\u002Fgithub.com\u002Fjamesrochabrun\u002FSwiftOpenAI\u002Ftree\u002Fmain\u002FExamples\u002FSwiftOpenAIExample\u002FSwiftOpenAIExample) 中，导航到“配置助手”选项卡，创建一个助手，并按照后续步骤操作。\n\n### 流式支持也已添加到：\n\n[创建线程和运行](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fruns\u002FcreateThreadAndRun)：\n\n```swift\n   \u002F\u002F\u002F 创建启用流式传输的线程和运行。\n   \u002F\u002F\u002F\n   \u002F\u002F\u002F - 参数：创建线程和运行所需的参数。\n   \u002F\u002F\u002F - 返回：[AssistantStreamEvent](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fassistants-streaming\u002Fevents) 对象的异步抛异常流。\n   \u002F\u002F\u002F - 抛出：如果请求失败，则抛出错误。\n   \u002F\u002F\u002F\n   \u002F\u002F\u002F 更多信息，请参阅 [OpenAI 的 Run API 文档](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fruns\u002FcreateThreadAndRun)。\n   func createThreadAndRunStream(\n      parameters: CreateThreadAndRunParameter)\n   async throws -> AsyncThrowingStream\u003CAssistantStreamEvent, Error>\n```\n\n[提交工具输出](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fruns\u002FsubmitToolOutputs)：\n\n```swift\n   \u002F\u002F\u002F 当运行状态为“requires_action”且 required_action.type 为 submit_tool_outputs 时，此端点可用于在所有工具调用完成后提交工具输出。所有输出必须在单个请求中提交。启用流式传输。\n   \u002F\u002F\u002F\n   \u002F\u002F\u002F - 参数：该运行所属的 [线程](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fthreads) ID。\n   \u002F\u002F\u002F - 参数：需要提交工具输出的运行 ID。\n   \u002F\u002F\u002F - 参数：运行工具输出所需的参数。\n   \u002F\u002F\u002F - 
返回：[AssistantStreamEvent](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fassistants-streaming\u002Fevents) 对象的异步抛异常流。\n   \u002F\u002F\u002F - 抛出：如果请求失败，则抛出错误。\n   \u002F\u002F\u002F\n   \u002F\u002F\u002F 更多信息，请参阅 [OpenAI 的 Run API 文档](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fruns\u002FsubmitToolOutputs)。\n   func submitToolOutputsToRunStream(\n      threadID: String,\n      runID: String,\n      parameters: RunToolsOutputParameter)\n   async throws -> AsyncThrowingStream\u003CAssistantStreamEvent, Error>\n```\n\n### 向量存储\n参数\n```swift\npublic struct VectorStoreParameter: Encodable {\n   \n   \u002F\u002F\u002F 向量存储应使用的[文件](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Ffiles) ID 列表。对于 file_search 等可以访问文件的工具非常有用。\n   let fileIDS: [String]?\n   \u002F\u002F\u002F 向量存储的名称。\n   let name: String?\n   \u002F\u002F\u002F 向量存储的过期策略。\n   let expiresAfter: ExpirationPolicy?\n   \u002F\u002F\u002F 可附加到对象的一组 16 个键值对。这有助于以结构化格式存储关于对象的附加信息。键的最大长度为 64 个字符，值的最大长度为 512 个字符。\n   let metadata: [String: String]?\n}\n```\n响应\n```swift\npublic struct VectorStoreObject: Decodable {\n   \n   \u002F\u002F\u002F 标识符，可在 API 端点中引用。\n   let id: String\n   \u002F\u002F\u002F 对象类型，始终为 vector_store。\n   let object: String\n   \u002F\u002F\u002F 创建向量存储时的 Unix 时间戳（以秒为单位）。\n   let createdAt: Int\n   \u002F\u002F\u002F 向量存储的名称。\n   let name: String\n   \u002F\u002F\u002F 向量存储中文件所占用的总字节数。\n   let usageBytes: Int\n   \n   let fileCounts: FileCount\n   \u002F\u002F\u002F 向量存储的状态，可能为 expired、in_progress 或 completed。状态为 completed 表示向量存储已准备好使用。\n   let status: String\n   \u002F\u002F\u002F 向量存储的过期策略。\n   let expiresAfter: ExpirationPolicy?\n   \u002F\u002F\u002F 向量存储到期的 Unix 时间戳（以秒为单位）。\n   let expiresAt: Int?\n   \u002F\u002F\u002F 向量存储上次活跃的 Unix 时间戳（以秒为单位）。\n   let lastActiveAt: Int?\n   \u002F\u002F\u002F 可附加到对象的一组 16 个键值对。这有助于以结构化格式存储关于对象的附加信息。键的最大长度为 64 个字符，值的最大长度为 512 个字符。\n   let metadata: [String: String]\n   \n   
public struct FileCount: Decodable {\n      \n      \u002F\u002F\u002F 当前正在处理的文件数量。\n      let inProgress: Int\n      \u002F\u002F\u002F 已成功处理的文件数量。\n      let completed: Int\n      \u002F\u002F\u002F 处理失败的文件数量。\n      let failed: Int\n      \u002F\u002F\u002F 被取消的文件数量。\n      let cancelled: Int\n      \u002F\u002F\u002F 文件总数。\n      let total: Int\n   }\n}\n```\n用法\n[创建向量存储](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fvector-stores\u002Fcreate)\n```swift\nlet name = \"Support FAQ\"\nlet parameters = VectorStoreParameter(name: name)\nlet vectorStore = try await service.createVectorStore(parameters: parameters)\n```\n\n[列出向量存储](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fvector-stores\u002Flist)\n```swift\nlet vectorStores = try await service.listVectorStores(limit: nil, order: nil, after: nil, before: nil)\n```\n\n[检索向量存储](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fvector-stores\u002Fretrieve)\n```swift\nlet vectorStoreID = \"vs_abc123\"\nlet vectorStore = try await service.retrieveVectorStore(id: vectorStoreID)\n```\n\n[修改向量存储](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fvector-stores\u002Fmodify)\n```swift\nlet vectorStoreID = \"vs_abc123\"\nlet vectorStore = try await service.modifyVectorStore(id: vectorStoreID)\n```\n\n[删除向量存储](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fvector-stores\u002Fdelete)\n```swift\nlet vectorStoreID = \"vs_abc123\"\nlet deletionStatus = try await service.deleteVectorStore(id: vectorStoreID)\n```\n\n### 向量存储文件\n参数\n```swift\npublic struct VectorStoreFileParameter: Encodable {\n   \n   \u002F\u002F\u002F 向量存储应使用的[文件](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Ffiles) ID。对于 file_search 等可以访问文件的工具非常有用。\n   let fileID: String\n}\n```\n响应\n```swift\npublic struct VectorStoreFileObject: Decodable {\n   \n   \u002F\u002F\u002F 标识符，可在 API 端点中引用。\n   let id: String\n   
\u002F\u002F\u002F 对象类型，始终为 vector_store.file。\n   let object: String\n   \u002F\u002F\u002F 向量存储使用的总字节数。请注意，这可能与原始文件大小不同。\n   let usageBytes: Int\n   \u002F\u002F\u002F 创建向量存储文件时的 Unix 时间戳（以秒为单位）。\n   let createdAt: Int\n   \u002F\u002F\u002F [向量存储](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fvector-stores\u002Fobject) 的 ID，该[文件](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Ffiles) 附属于此向量存储。\n   let vectorStoreID: String\n   \u002F\u002F\u002F 向量存储文件的状态，可能为 in_progress、completed、cancelled 或 failed。状态为 completed 表示向量存储文件已准备好使用。\n   let status: String\n   \u002F\u002F\u002F 与此向量存储文件相关的最后一次错误。如果没有错误，则为 null。\n   let lastError: LastError?\n}\n```\n用法\n[创建向量存储文件](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fvector-stores-files\u002FcreateFile)\n```swift\nlet vectorStoreID = \"vs_abc123\"\nlet fileID = \"file-abc123\"\nlet parameters = VectorStoreFileParameter(fileID: fileID)\nlet vectorStoreFile = try await service.createVectorStoreFile(vectorStoreID: vectorStoreID, parameters: parameters)\n```\n\n[列出向量存储文件](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fvector-stores-files\u002FlistFiles)\n```swift\nlet vectorStoreID = \"vs_abc123\"\nlet vectorStoreFiles = try await service.listVectorStoreFiles(vectorStoreID: vectorStoreID, limit: nil, order: nil, after: nil, before: nil, filter: nil)\n```\n\n[检索向量存储文件](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fvector-stores-files\u002FgetFile)\n```swift\nlet vectorStoreID = \"vs_abc123\"\nlet fileID = \"file-abc123\"\nlet vectorStoreFile = try await service.retrieveVectorStoreFile(vectorStoreID: vectorStoreID, fileID: fileID)\n```\n\n[删除向量存储文件](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fvector-stores-files\u002FdeleteFile)\n```swift\nlet vectorStoreID = \"vs_abc123\"\nlet fileID = \"file-abc123\"\nlet deletionStatus = try await 
service.deleteVectorStoreFile(vectorStoreID: vectorStoreID, fileID: fileID)\n```\n\n### 向量存储文件批次\n参数\n```swift\npublic struct VectorStoreFileBatchParameter: Encodable {\n   \n   \u002F\u002F\u002F 向量存储应使用的[文件](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Ffiles) ID 列表。对于可以访问文件的工具（如 file_search）非常有用。\n   let fileIDS: [String]\n}\n```\n响应\n```swift\npublic struct VectorStoreFileBatchObject: Decodable {\n   \n   \u002F\u002F\u002F 标识符，可在 API 端点中引用。\n   let id: String\n   \u002F\u002F\u002F 对象类型，始终为 vector_store.file_batch。\n   let object: String\n   \u002F\u002F\u002F 创建向量存储文件批次时的 Unix 时间戳（以秒为单位）。\n   let createdAt: Int\n   \u002F\u002F\u002F [向量存储](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fvector-stores\u002Fobject) 的 ID，该[文件](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Ffiles)已附加到此向量存储。\n   let vectorStoreID: String\n   \u002F\u002F\u002F 向量存储文件批次的状态，可能为 in_progress、completed、cancelled 或 failed。\n   let status: String\n   \n   let fileCounts: FileCount\n}\n```\n用法\n\n[创建向量存储文件批次](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fvector-stores-file-batches\u002FcreateBatch)\n```swift\nlet vectorStoreID = \"vs_abc123\"\nlet fileIDS = [\"file-abc123\", \"file-abc456\"]\nlet parameters = VectorStoreFileBatchParameter(fileIDS: fileIDS)\nlet vectorStoreFileBatch = try await service.\n   createVectorStoreFileBatch(vectorStoreID: vectorStoreID, parameters: parameters)\n```\n\n[检索向量存储文件批次](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fvector-stores-file-batches\u002FgetBatch)\n```swift\nlet vectorStoreID = \"vs_abc123\"\nlet batchID = \"vsfb_abc123\"\nlet vectorStoreFileBatch = try await service.retrieveVectorStoreFileBatch(vectorStoreID: vectorStoreID, batchID: batchID)\n```\n\n[取消向量存储文件批次](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fvector-stores-file-batches\u002FcancelBatch)\n```swift\nlet vectorStoreID = 
\"vs_abc123\"\nlet batchID = \"vsfb_abc123\"\nlet vectorStoreFileBatch = try await service.cancelVectorStoreFileBatch(vectorStoreID: vectorStoreID, batchID: batchID)\n```\n\n[列出批次中的向量存储文件](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fvector-stores-file-batches\u002FlistBatchFiles)\n```swift\nlet vectorStoreID = \"vs_abc123\"\nlet batchID = \"vsfb_abc123\"\nlet vectorStoreFiles = try await service.listVectorStoreFilesInABatch(vectorStoreID: vectorStoreID, batchID: batchID)\n```\n\n⚠️ 我们目前仅支持 Assistants Beta 2。如果您需要 Assistants V1 的支持，可以在 jroch-supported-branch-for-assistants-v1 分支或 v2.3 版本中获取。[请参阅 OpenAI 文档了解迁移详情。](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fassistants\u002Fmigration)\n\n## Anthropic\n\nAnthropic 提供与 OpenAI 的兼容性，更多信息请访问[文档](https:\u002F\u002Fdocs.anthropic.com\u002Fen\u002Fapi\u002Fopenai-sdk#getting-started-with-the-openai-sdk)。\n\n要使用 `SwiftOpenAI` 调用 Claude 模型，您可以这样做：\n\n```swift\nlet anthropicApiKey = \"\"\nlet openAIService = OpenAIServiceFactory.service(apiKey: anthropicApiKey, \n                     overrideBaseURL: \"https:\u002F\u002Fapi.anthropic.com\", \n                     overrideVersion: \"v1\")\n```\n\n现在您可以这样创建完成参数：\n\n```swift\nlet parameters = ChatCompletionParameters(\n   messages: [.init(\n   role: .user,\n   content: \"你是 Claude 吗？\")],\n   model: .custom(\"claude-3-7-sonnet-20250219\"))\n```\n\n如需更完整的 Anthropic Swift 包，您可以使用 [SwiftAnthropic](https:\u002F\u002Fgithub.com\u002Fjamesrochabrun\u002FSwiftAnthropic)。\n\n## Azure OpenAI\n\n本库通过 Azure OpenAI 提供对聊天完成和聊天流式完成的支持。目前，`DefaultOpenAIAzureService` 支持聊天完成，包括流式和非流式选项。\n\n有关 Azure 配置的更多信息，请参阅[文档](https:\u002F\u002Flearn.microsoft.com\u002Fen-us\u002Fazure\u002Fai-services\u002Fopenai\u002Freference)。\n\n要实例化 `DefaultOpenAIAzureService`，您需要提供一个 `AzureOpenAIConfiguration`：\n\n```swift\nlet azureConfiguration = AzureOpenAIConfiguration(\n                           resourceName: \"YOUR_RESOURCE_NAME\", \n                           openAIAPIKey: 
.apiKey(\"YOUR_OPENAI_APIKEY\"), \n                           apiVersion: \"THE_API_VERSION\")\n                           \nlet service = OpenAIServiceFactory.service(azureConfiguration: azureConfiguration)           \n```\n\n支持的 API 版本可在 Azure [文档](https:\u002F\u002Flearn.microsoft.com\u002Fen-us\u002Fazure\u002Fai-services\u002Fopenai\u002Freference#completions) 中找到。\n\n当前支持的版本：\n\n```2022-12-01```\n```2023-03-15-preview```\n```2023-05-15```\n```2023-06-01-preview```\n```2023-07-01-preview```\n```2023-08-01-preview```\n```2023-09-01-preview```\n\n### 使用 [聊天完成](https:\u002F\u002Flearn.microsoft.com\u002Fen-us\u002Fazure\u002Fai-services\u002Fopenai\u002Freference#chat-completions)：\n\n```swift\nlet parameters = ChatCompletionParameters(\n                     messages: [.init(role: .user, content: .text(prompt))], \n                     model: .custom(\"DEPLOYMENT_NAME\")) \u002F\u002F 您部署模型时选择的部署名称。例如：“gpt-35-turbo-0613”\nlet completionObject = try await service.startChat(parameters: parameters)\n```\n\n## AIProxy\n\n### 是什么？\n\n[AIProxy](https:\u002F\u002Fwww.aiproxy.pro) 是一款用于 iOS 应用程序的后端服务，可将您的应用程序请求代理至 OpenAI。\n使用代理可以保护您的 OpenAI 密钥不被泄露，从而避免因密钥被盗而导致意外高额账单。\n只有在请求通过您设定的速率限制以及 Apple 的 [DeviceCheck](https:\u002F\u002Fdeveloper.apple.com\u002Fdocumentation\u002Fdevicecheck) 验证后，才会进行代理。\n我们提供 AIProxy 支持，以便您能够安全地分发使用 SwiftOpenAI 构建的应用程序。\n\n### 我的 SwiftOpenAI 代码需要做哪些改动？\n\n通过 AIProxy 代理请求时，只需对你的 Xcode 项目进行两项更改：\n\n1. 不再使用以下方式初始化 `service`：\n\n        let apiKey = \"your_openai_api_key_here\"\n        let service = OpenAIServiceFactory.service(apiKey: apiKey)\n\n而是使用：\n\n        let service = OpenAIServiceFactory.service(\n            aiproxyPartialKey: \"your_partial_key_goes_here\",\n            aiproxyServiceURL: \"your_service_url_goes_here\"\n        )\n\n`aiproxyPartialKey` 和 `aiproxyServiceURL` 的值会在 [AIProxy 开发者仪表板](https:\u002F\u002Fdeveloper.aiproxy.pro) 上提供给你。\n\n2. 
在 Xcode 中添加一个名为 `AIPROXY_DEVICE_CHECK_BYPASS` 的环境变量。该令牌同样在 AIProxy 开发者仪表板中提供，是 iOS 模拟器与 AIProxy 后端通信所必需的。\n    - 按下 `cmd + shift + ,` 打开 Xcode 的“编辑方案”菜单。\n    - 在侧边栏中选择“运行”。\n    - 从顶部导航栏中选择“参数”。\n    - 在“环境变量”部分（而非“启动时传递的参数”部分）添加一个名为 `AIPROXY_DEVICE_CHECK_BYPASS` 的环境变量，并将其值设置为我们在 AIProxy 仪表板中提供的令牌。\n\n\n⚠️  `AIPROXY_DEVICE_CHECK_BYPASS` 仅适用于模拟器。请勿将其泄露到应用的发布版本中（包括 TestFlight 分发）。如果你按照上述步骤操作，该常量不会泄露，因为环境变量不会被打包进应用安装包中。\n\n#### 什么是 `AIPROXY_DEVICE_CHECK_BYPASS` 常量？\n\nAIProxy 使用 Apple 的 [DeviceCheck](https:\u002F\u002Fdeveloper.apple.com\u002Fdocumentation\u002Fdevicecheck) 来确保后端接收到的请求确实是由你的应用在合法的 Apple 设备上发出的。然而，iOS 模拟器无法生成 DeviceCheck 令牌。为了避免你在开发过程中必须频繁地在真机上构建和运行，AIProxy 提供了一种跳过 DeviceCheck 完整性检查的方法。此令牌仅供开发者使用。如果攻击者获取了该令牌，他们就可以在不包含 DeviceCheck 令牌的情况下向你的 AIProxy 项目发起请求，从而绕过一层保护机制。\n\n#### 什么是 `aiproxyPartialKey` 常量？\n\n该常量可以安全地包含在应用的发布版本中。它是你真实密钥的加密表示的一部分，另一部分则存储在 AIProxy 的后端。当你的应用向 AIProxy 发送请求时，这两部分加密数据会配对、解密，并用于完成对 OpenAI 的请求。\n\n#### 如何在 AIProxy 上设置我的项目？\n\n请参阅 [AIProxy 集成指南](https:\u002F\u002Fwww.aiproxy.pro\u002Fdocs\u002Fintegration-guide.html)。\n\n\n### ⚠️ 免责声明\n\nSwiftOpenAI 的贡献者不对任何由第三方造成的损害或损失承担责任。本库的贡献者提供第三方集成服务仅为方便起见。使用任何第三方服务的风险均由您自行承担。\n\n\n## Ollama\n\nOllama 现已内置与 OpenAI [Chat Completions API](https:\u002F\u002Fgithub.com\u002Follama\u002Follama\u002Fblob\u002Fmain\u002Fdocs\u002Fopenai.md) 的兼容性，这使得你可以更方便地在本地使用各种工具和应用程序来操作 Ollama。\n\n\u003Cimg width=\"783\" alt=\"Screenshot 2024-06-24 at 11 52 35 PM\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjamesrochabrun_SwiftOpenAI_readme_503972a4ed73.png\">\n\n### ⚠️ 重要提示\n\n请记住，这些模型是在本地运行的，因此你需要先下载它们。如果你想使用 llama3，可以在终端中运行以下命令：\n\n```bash\nollama pull llama3\n```\n\n更多详细信息，请参考 [Ollama 文档](https:\u002F\u002Fgithub.com\u002Follama\u002Follama\u002Fblob\u002Fmain\u002Fdocs\u002Fopenai.md)。\n\n### 如何使用 SwiftOpenAI 在本地调用这些模型？\n\n要在你的应用中使用本地模型并结合 `OpenAIService`，你需要提供一个 URL。\n\n```swift\nlet service = OpenAIServiceFactory.service(baseURL: \"http:\u002F\u002Flocalhost:11434\")\n```\n\n然后你可以按如下方式使用 
completions API：\n\n```swift\nlet prompt = \"给我讲个笑话\"\nlet parameters = ChatCompletionParameters(messages: [.init(role: .user, content: .text(prompt))], model: .custom(\"llama3\"))\nlet stream = try await service.startStreamedChat(parameters: parameters)\n```\n\n⚠️ 注意：你也可以使用 `OpenAIServiceFactory.service(apiKey:overrideBaseURL:proxyPath)` 来适配任何兼容 OpenAI 的服务。\n\n### 参考资料：\n\n[Ollama OpenAI 兼容性文档](https:\u002F\u002Fgithub.com\u002Follama\u002Follama\u002Fblob\u002Fmain\u002Fdocs\u002Fopenai.md)\n[Ollama OpenAI 兼容性博客文章](https:\u002F\u002Follama.com\u002Fblog\u002Fopenai-compatibility)\n\n### 备注\n\n你还可以使用这种服务构造函数来提供任意 URL 或 API 密钥，以满足需求。\n\n```swift\nlet service = OpenAIServiceFactory.service(apiKey: \"YOUR_API_KEY\", baseURL: \"http:\u002F\u002Flocalhost:11434\")\n```\n\n## Groq\n\n\u003Cimg width=\"792\" alt=\"Screenshot 2024-10-11 at 11 49 04 PM\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjamesrochabrun_SwiftOpenAI_readme_41e5d77e3e6f.png\">\n\nGroq API 大体上与 OpenAI 的客户端库兼容，例如 `SwiftOpenAI`。要使用该库与 Groq 集成，你只需创建一个 `OpenAIService` 实例，如下所示：\n\n```swift\nlet apiKey = \"your_api_key\"\nlet service = OpenAIServiceFactory.service(apiKey: apiKey, overrideBaseURL: \"https:\u002F\u002Fapi.groq.com\u002F\", proxyPath: \"openai\")\n```\n\n有关 Groq 支持的 API，请参阅其 [官方文档](https:\u002F\u002Fconsole.groq.com\u002Fdocs\u002Fopenai)。\n\n## xAI\n\n\u003Cimg width=\"792\" alt=\"xAI Grok\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjamesrochabrun_SwiftOpenAI_readme_8afe515a0845.png\">\n\nxAI 为其 Grok 模型提供了兼容 OpenAI 的 completion API。你可以使用 OpenAI SDK 来访问这些模型。\n\n```swift\nlet apiKey = \"your_api_xai_key\"\nlet service = OpenAIServiceFactory.service(apiKey: apiKey, overrideBaseURL: \"https:\u002F\u002Fapi.x.ai\", overrideVersion: \"v1\")\n```\n\n有关 xAI API 的更多信息，请参阅其 [官方文档](https:\u002F\u002Fdocs.x.ai\u002Fdocs\u002Foverview)。\n\n## OpenRouter\n\n\u003Cimg width=\"734\" alt=\"Image\" 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjamesrochabrun_SwiftOpenAI_readme_911c7d71f944.png\" \u002F>\n\n[OpenRouter](https:\u002F\u002Fopenrouter.ai\u002Fdocs\u002Fquick-start) 提供了一个兼容 OpenAI 的 completion API，支持 314 种模型和提供商，你可以直接调用，也可以使用 OpenAI SDK 进行调用。此外，还有一些第三方 SDK 可供使用。\n\n```swift\n\u002F\u002F 创建服务\n\nlet apiKey = \"your_api_key\"\nlet service = OpenAIServiceFactory.service(apiKey: apiKey, \n   overrideBaseURL: \"https:\u002F\u002Fopenrouter.ai\", \n   proxyPath: \"api\",\n   extraHeaders: [\n      \"HTTP-Referer\": \"\u003CYOUR_SITE_URL>\", \u002F\u002F 可选。用于 openrouter.ai 排名的网站 URL。\n      \"X-Title\": \"\u003CYOUR_SITE_NAME>\"  \u002F\u002F 可选。用于 openrouter.ai 排名的网站标题。\n   ])\n\n\u002F\u002F 发起请求\n\nlet prompt = \"曼哈顿计划是什么？\"\nlet parameters = ChatCompletionParameters(messages: [.init(role: .user, content: .text(prompt))], model: .custom(\"deepseek\u002Fdeepseek-r1:free\"))\nlet stream = try await service.startStreamedChat(parameters: parameters)\n```\n\n有关 OpenRouter API 的更多信息，请参阅其 [快速入门文档](https:\u002F\u002Fopenrouter.ai\u002Fdocs\u002Fquick-start)。\n\n## DeepSeek\n\n![图片](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjamesrochabrun_SwiftOpenAI_readme_49091355c784.png)\n\n[DeepSeek](https:\u002F\u002Fapi-docs.deepseek.com\u002F) API 使用与 OpenAI 兼容的 API 格式。通过修改配置，您可以使用 SwiftOpenAI 访问 DeepSeek API。\n\n创建服务\n\n```swift\nlet apiKey = \"your_api_key\"\nlet service = OpenAIServiceFactory.service(\n   apiKey: apiKey,\n   overrideBaseURL: \"https:\u002F\u002Fapi.deepseek.com\")\n```\n\n非流式示例\n\n```swift\nlet prompt = \"什么是曼哈顿计划？\"\nlet parameters = ChatCompletionParameters(\n    messages: [.init(role: .user, content: .text(prompt))],\n    model: .custom(\"deepseek-reasoner\")\n)\n\ndo {\n    let result = try await service.chat(parameters: parameters)\n    \n    \u002F\u002F 访问响应内容\n    if let content = result.choices.first?.message.content {\n        print(\"响应: \\(content)\")\n    }\n    \n    \u002F\u002F 如果有推理内容，访问推理内容\n    if let 
reasoning = result.choices.first?.message.reasoningContent {\n        print(\"推理: \\(reasoning)\")\n    }\n} catch {\n    print(\"错误: \\(error)\")\n}\n```\n\n流式示例\n\n```swift\nlet prompt = \"什么是曼哈顿计划？\"\nlet parameters = ChatCompletionParameters(\n    messages: [.init(role: .user, content: .text(prompt))],\n    model: .custom(\"deepseek-reasoner\")\n)\n\n\u002F\u002F 开始流式处理\ndo {\n    let stream = try await service.startStreamedChat(parameters: parameters)\n    for try await result in stream {\n        let content = result.choices.first?.delta.content ?? \"\"\n        self.message += content\n        \n        \u002F\u002F 可选：如果存在推理内容，进行处理\n        if let reasoning = result.choices.first?.delta.reasoningContent {\n            self.reasoningMessage += reasoning\n        }\n    }\n} catch APIError.responseUnsuccessful(let description, let statusCode) {\n    self.errorMessage = \"网络错误，状态码：\\(statusCode)，描述：\\(description)\"\n} catch {\n    self.errorMessage = error.localizedDescription\n}\n```\n\n注意事项\n\n- DeepSeek API 与 OpenAI 的格式兼容，但使用的模型名称不同。\n- 使用 `.custom(\"deepseek-reasoner\")` 来指定 DeepSeek 模型。\n- `reasoningContent` 字段是可选的，仅适用于 DeepSeek 的 API。\n- 错误处理遵循与标准 OpenAI 请求相同的模式。\n\n有关 `DeepSeek` API 的更多信息，请参阅其 [文档](https:\u002F\u002Fapi-docs.deepseek.com)。\n\n## Gemini\n\n\u003Cimg width=\"982\" alt=\"截图 2024-11-12 上午10:53:43\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjamesrochabrun_SwiftOpenAI_readme_df26ca69e2cf.png\">\n\nGemini 现在可以通过 OpenAI 库访问。请参阅公告 [这里](https:\u002F\u002Fdevelopers.googleblog.com\u002Fen\u002Fgemini-is-now-accessible-from-the-openai-library\u002F)。\n`SwiftOpenAI` 支持所有 OpenAI 端点。然而，请参考 [Gemini 文档](https:\u002F\u002Fai.google.dev\u002Fgemini-api\u002Fdocs\u002Fopenai) 以了解当前哪些 API 是兼容的。\n\n您可以通过您的 Gemini 令牌实例化一个 `OpenAIService`，如下所示...\n\n```swift\nlet geminiAPIKey = \"your_api_key\"\nlet baseURL = 
\"https:\u002F\u002Fgenerativelanguage.googleapis.com\"\nlet version = \"v1beta\"\n\nlet service = OpenAIServiceFactory.service(\n   apiKey: geminiAPIKey,\n   overrideBaseURL: baseURL,\n   overrideVersion: version)\n```\n\n现在您可以使用 `.custom` 模型参数创建聊天请求，并将模型名称作为字符串传递。\n\n```swift\nlet parameters = ChatCompletionParameters(\n   messages: [.init(\n      role: .user,\n      content: .text(\"你好，Gemini！\"))],\n   model: .custom(\"gemini-1.5-flash\"))\n\nlet stream = try await service.startStreamedChat(parameters: parameters)\n```\n\n## 合作\n对于任何拟议的更改，请打开一个指向 `main` 分支的 PR。非常欢迎提供单元测试 ❤️","# SwiftOpenAI 快速上手指南\n\nSwiftOpenAI 是一个开源的 Swift 包，旨在简化与 OpenAI API（及兼容服务如 Azure、Ollama、Groq 等）的交互。它支持 iOS、macOS、watchOS 和 Linux 平台，涵盖聊天、语音、图像、嵌入及最新的 Realtime API 等功能。\n\n## 环境准备\n\n在开始之前，请确保您的开发环境满足以下要求：\n\n*   **操作系统**:\n    *   iOS 15+\n    *   macOS 13+\n    *   watchOS 9+\n    *   Linux (需配合 Vapor 或 AsyncHTTPClient)\n*   **开发工具**: Xcode 15+\n*   **语言版本**: Swift 5.9+\n*   **前置依赖**:\n    *   有效的 OpenAI API Key（或其他兼容服务商的 Key）。\n    *   **注意**：请勿将 API Key 硬编码在客户端代码中。生产环境建议通过后端服务器（如使用本库支持的 AIProxy）转发请求，或在本地测试时妥善管理密钥。\n\n## 安装步骤\n\nSwiftOpenAI 通过 **Swift Package Manager (SPM)** 进行安装。\n\n1.  打开你的 Swift 项目（Xcode）。\n2.  点击菜单栏 `File` -> `Add Package Dependency...`。\n3.  在搜索框中输入以下仓库地址：\n    ```text\n    https:\u002F\u002Fgithub.com\u002Fjamesrochabrun\u002FSwiftOpenAI\n    ```\n4.  **重要版本选择提示**：\n    *   Xcode 默认会将版本上限设为 `2.0.0`，但这可能低于该库的实际最新版本。\n    *   **操作方法**：在版本输入框中，手动输入你希望支持的最低版本号（例如 `1.0.0` 或查看 [Releases](https:\u002F\u002Fgithub.com\u002Fjamesrochabrun\u002FSwiftOpenAI\u002Freleases) 获取最新版），然后按 `Tab` 键让 Xcode 自动调整上限；或者直接选择 `Branch` -> `main` 以使用最新开发版。\n5.  点击 `Add Package` 完成安装。\n\n## 基本使用\n\n### 1. 
导入与初始化\n\n在你的代码文件中导入模块并使用 API Key 初始化服务：\n\n```swift\nimport SwiftOpenAI\n\n\u002F\u002F 替换为你的真实 API Key\nlet apiKey = \"your_openai_api_key_here\"\n\n\u002F\u002F 初始化服务\nlet service = OpenAIServiceFactory.service(apiKey: apiKey)\n```\n\n如果需要指定组织 ID：\n\n```swift\nlet organizationID = \"your_organization_id\"\nlet service = OpenAIServiceFactory.service(apiKey: apiKey, organizationID: organizationID)\n```\n\n> **提示**：对于推理模型（Reasoning Models），请求耗时可能较长。建议延长请求超时时间。注意 `URLSession.shared` 的配置无法在创建后修改，需要基于自定义的 `URLSessionConfiguration` 新建会话：\n> ```swift\n> let configuration = URLSessionConfiguration.default\n> configuration.timeoutIntervalForRequest = 360 \u002F\u002F 设置为 360 秒或更长\n> let session = URLSession(configuration: configuration)\n> let httpClient = URLSessionHTTPClientAdapter(urlSession: session)\n> let service = OpenAIServiceFactory.service(apiKey: apiKey, httpClient: httpClient)\n> ```\n\n### 2. 调用聊天接口示例\n\n以下是最基础的文本聊天调用示例：\n\n```swift\nlet parameters = ChatCompletionParameters(\n    messages: [.init(role: .user, content: .text(\"你好，SwiftOpenAI！\"))],\n    model: .gpt4o\n)\n\ndo {\n    let response = try await service.startChat(parameters: parameters)\n    if let content = response.choices.first?.message.content {\n        print(\"AI 回复：\\(content)\")\n    }\n} catch APIError.responseUnsuccessful(let description, let statusCode) {\n    \u002F\u002F 处理特定的网络错误状态码（如 429 限流）\n    print(\"网络错误 - 状态码：\\(statusCode), 描述：\\(description)\")\n} catch {\n    \u002F\u002F 处理其他错误\n    print(\"发生错误：\\(error.localizedDescription)\")\n}\n```\n\n### 3. 
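流式输出示例\n\n上面的聊天接口会一次性返回完整回复；如果希望像打字机一样逐字渲染，可以改用流式接口。下面是一个最小示意（假设：`startStreamedChat` 的用法与本文 DeepSeek 小节中的示例一致，`message` 为你自己定义的用于累积文本的变量，实际项目中通常放在视图模型里）：\n\n```swift\nlet parameters = ChatCompletionParameters(\n    messages: [.init(role: .user, content: .text(\"给我讲个笑话\"))],\n    model: .gpt4o\n)\n\nvar message = \"\"\n\ndo {\n    \u002F\u002F 返回一个异步序列，服务端每生成一段文本就产出一个分块\n    let stream = try await service.startStreamedChat(parameters: parameters)\n    for try await result in stream {\n        \u002F\u002F 每个分块的增量文本位于 delta.content，累加后即可实时刷新 UI\n        let content = result.choices.first?.delta.content ?? \"\"\n        message += content\n    }\n    print(\"完整回复：\\(message)\")\n} catch {\n    print(\"流式请求失败：\\(error)\")\n}\n```\n\n### 4. 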
语音转录示例\n\n```swift\n\u002F\u002F 假设 yourAudioURL 指向一个本地音频文件\nlet fileName = \"recording.m4a\"\n\ndo {\n    let fileData = try Data(contentsOf: yourAudioURL)\n    let parameters = AudioTranscriptionParameters(\n        fileName: fileName,\n        file: fileData,\n        model: .whisperOne\n    )\n    let result = try await service.createTranscription(parameters: parameters)\n    print(\"转录文本：\\(result.text)\")\n} catch {\n    print(\"转录失败：\\(error)\")\n}\n```\n\n现在你可以开始探索 SwiftOpenAI 支持的其他功能，如图像生成、Embeddings、Function Calling 及 Assistants API 等。","一位 iOS 开发者正在为一款面向视障用户的辅助应用构建实时语音交互功能，需要实现低延迟的双向对话及多模态内容处理。\n\n### 没有 SwiftOpenAI 时\n- 开发者需手动封装复杂的 HTTP 请求与 WebSocket 连接代码，尤其在处理 OpenAI 最新的 Realtime API 双向音频流时，极易出现断连或延迟过高问题。\n- 集成语音转文字、图像识别及结构化输出等功能时，需要分别编写大量重复的解析逻辑，导致代码库臃肿且难以维护。\n- 缺乏对 Assistants API 和线程管理的原生支持，处理多轮对话上下文时需自行设计状态机，开发周期被大幅拉长。\n- 跨平台适配（如 macOS 或 watchOS）时，需反复调整网络层代码以兼容不同系统的并发机制，测试成本极高。\n\n### 使用 SwiftOpenAI 后\n- 直接调用封装好的 Realtime API 接口，轻松实现毫秒级低延迟语音对话，底层连接稳定性由库自动保障。\n- 通过统一的 Swift 原生接口一键访问音频转录、视觉分析及结构化数据输出，代码量减少约 70%，逻辑清晰易读。\n- 内置完整的 Assistants、Threads 及 Runs 对象管理，多轮对话上下文自动维护，让复杂交互流程像编写普通函数一样简单。\n- 凭借对 iOS、macOS、watchOS 及 Linux 的全平台兼容特性，同一套代码即可无缝部署到所有苹果生态设备，无需额外适配。\n\nSwiftOpenAI 将繁琐的 API 对接转化为简洁的 Swift 原生体验，让开发者能专注于创造卓越的智能交互功能而非底层通信细节。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjamesrochabrun_SwiftOpenAI_d0e5b5b8.gif","jamesrochabrun","James Rochabrun","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fjamesrochabrun_9141f167.jpg","Senior iOS Applications Engineer   ",null,"San Francisco","https:\u002F\u002Fmedium.com\u002F@jamesrochabrun","https:\u002F\u002Fgithub.com\u002Fjamesrochabrun",[85],{"name":86,"color":87,"percentage":88},"Swift","#F05138",100,651,124,"2026-04-11T14:34:47","MIT","Linux, macOS, iOS, watchOS","未说明",{"notes":96,"python":97,"dependencies":98},"这是一个 Swift 包，主要用于 Apple 平台 (iOS 15+, macOS 13+, watchOS 9+) 和 Linux。在 Linux 上需要使用 AsyncHTTPClient 来替代 URLSession。需要 OpenAI API 密钥才能运行。不支持 Windows。","不适用 (基于 Swift)",[99,100,101],"Swift 5.9+","Xcode 
15+","AsyncHTTPClient (Linux)",[25,15],[104,105,106,107,108,109,110],"chatgpt-api","ios","openai","openai-api","spm","swift","swiftpackage","2026-03-27T02:49:30.150509","2026-04-14T12:26:56.668691",[114,119,124,129,134,139],{"id":115,"question_zh":116,"answer_zh":117,"source_url":118},32772,"如何配置以使用与 OpenAI API 兼容的自定义服务（如 Ollama 或其他本地模型）？","可以使用 `OpenAIServiceFactory` 提供的构造函数来指定自定义的基础 URL。虽然早期版本可能命名为 `ollama`，但它适用于任何兼容 OpenAI 协议的 URL。使用方法如下：\n\n```swift\nlet service = OpenAIServiceFactory.ollama(baseURL: \"your_local_host_url\")\n\u002F\u002F 或者在更新后的版本中可能支持更通用的命名：\n\u002F\u002F let service = OpenAIServiceFactory.service(baseURL: \"your_local_host_url\")\n```\n\n如果你需要同时传递 API Key 和自定义 baseURL，请确保使用的构造函数支持这两个参数，例如：\n```swift\nOpenAIServiceFactory.service(apiKey: apiKey, baseURL: URL(string: \"http:\u002F\u002Flocalhost:11434\")!)\n```\n如果当前版本不支持，建议检查最新文档或等待维护者更新以公开 `DefaultOpenAIService` 或直接扩展 `.service()` 方法。","https:\u002F\u002Fgithub.com\u002Fjamesrochabrun\u002FSwiftOpenAI\u002Fissues\u002F51",{"id":120,"question_zh":121,"answer_zh":122,"source_url":123},32773,"为什么捕获网络错误时无法获取 HTTP 状态码（如 429）？","在旧版本中，`APIError.responseUnsuccessful` 枚举案例确实缺少关联的状态码值，导致无法直接提取状态码。该问题已被社区发现并修复。维护者已接受补丁，现在的实现允许你在 catch 块中获取状态码。请确保你使用的是包含此修复的最新版本。修复后的用法示例：\n\n```swift\ndo {\n   let choices = try await service.startChat(parameters: parameters).choices\n} catch APIError.responseUnsuccessful(let description, let statusCode) {\n   print(\"网络错误，状态码：\\(statusCode)，描述：\\(description)\")\n} catch {\n   print(error.localizedDescription)\n}\n```\n如果仍无法获取，请升级库到最新版本。","https:\u002F\u002Fgithub.com\u002Fjamesrochabrun\u002FSwiftOpenAI\u002Fissues\u002F60",{"id":125,"question_zh":126,"answer_zh":127,"source_url":128},32774,"如何在 Swift Server 或 Linux 环境下使用此库（避免在客户端暴露 API Key）？","该项目现已支持 Linux 和服务端环境。社区贡献者通过将底层的 `URLSession` 替换为 `async-http-client` 包中的 `HTTPClient` 实现了跨平台支持。这意味着你现在可以在 Swift Server 项目中使用该库，从而安全地在服务端管理 API Key。\n\n此外，维护者提到现在你可以在 CLI 
工具中使用任何模型提供商：\nhttps:\u002F\u002Fgithub.com\u002Fjamesrochabrun\u002FSwiftOpenAICLI\n\n如果你需要手动集成，请确保你的项目依赖了支持 Linux 的网络库版本，或者直接使用已合并了 Linux 支持的主分支版本。","https:\u002F\u002Fgithub.com\u002Fjamesrochabrun\u002FSwiftOpenAI\u002Fissues\u002F89",{"id":130,"question_zh":131,"answer_zh":132,"source_url":133},32775,"在哪里可以找到关于 Assistant（助手）和代码解释器功能的详细使用示例？","维护者已在项目中添加了更多关于 Assistant 和代码检索（Code Retrieval）的示例。此外，还修复了一个与文件引用解码相关的错误。\n\n如果你在 Demo 项目中只能配置但无法使用代码解释器，请拉取最新的代码或查看仓库中的 Example 目录。如果遇到具体的解码错误，维护者建议提供 JSON 响应以便进一步验证和解码逻辑的修正。","https:\u002F\u002Fgithub.com\u002Fjamesrochabrun\u002FSwiftOpenAI\u002Fissues\u002F11",{"id":135,"question_zh":136,"answer_zh":137,"source_url":138},32776,"Xcode 显示的版本号混乱（例如 v3.4.0 比 v3.6.2 新），导致无法获取最新功能怎么办？","这是一个发布版本号标记的失误。维护者确认了版本号排序错误的问题（例如 v3.4.0 被误认为是比 v3.6.2 更新的版本），并已发布修正版本（如 v3.7.0）。\n\n如果你在 Xcode 中遇到包解析错误或发现缺少某些功能（如 `ResponseFormat.jsonSchema`），请尝试以下操作：\n1. 手动指定依赖版本为最新的正确版本号（例如 3.7.0 或更高）。\n2. 清理 Xcode 的派生数据（Derived Data）并重新解析包。\n3. 关注官方 Release 页面以获取正确的版本标签。","https:\u002F\u002Fgithub.com\u002Fjamesrochabrun\u002FSwiftOpenAI\u002Fissues\u002F83",{"id":140,"question_zh":141,"answer_zh":142,"source_url":143},32777,"在 Assistant V2 API 流式传输中，如何获取函数调用输出（functionCallOutput）的具体参数？","在处理 Assistant V2 API 的流式响应时，你可以直接访问 `provider.functionCallOutput` 对象来获取参数。根据用户的代码示例，可以通过访问 `.argument` 属性来获取特定参数值：\n\n```swift\nif !provider.functionCallOutput.isEmpty {\n    Text(\"函数调用\")\n    \u002F\u002F 获取具体参数\n    Text(provider.functionCallOutput.argument) \n        .font(.title3)\n        .foregroundColor(.pink)\n        .fontDesign(.monospaced)\n        .bold()\n    Text(provider.functionCallOutput)\n        .font(.body)\n}\n```\n\n请确保你使用的库版本已正确解析了流式数据中的函数字段。如果遇到问题，建议检查返回的数据结构是否符合预期。","https:\u002F\u002Fgithub.com\u002Fjamesrochabrun\u002FSwiftOpenAI\u002Fissues\u002F59",[145,150,155,160,165,170,175,180,185,190,195,200,205,210,215,220,225,230,235,240],{"id":146,"version":147,"summary_zh":148,"released_at":149},247507,"4.4.9","## 变更内容\n* 处理实时 MCP 事件响应 
`mcp_call.completed`，由 @Panha-Sim 在 https:\u002F\u002Fgithub.com\u002Fjamesrochabrun\u002FSwiftOpenAI\u002Fpull\u002F193 中完成\n\n## 新贡献者\n* @Panha-Sim 在 https:\u002F\u002Fgithub.com\u002Fjamesrochabrun\u002FSwiftOpenAI\u002Fpull\u002F193 中完成了首次贡献\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002Fjamesrochabrun\u002FSwiftOpenAI\u002Fcompare\u002F4.4.8...4.4.9","2026-04-02T07:21:22",{"id":151,"version":152,"summary_zh":153,"released_at":154},247508,"4.4.8","## 变更内容\n* 由 @jamesrochabrun 在 https:\u002F\u002Fgithub.com\u002Fjamesrochabrun\u002FSwiftOpenAI\u002Fpull\u002F189 中修复：当 `name` 字段缺失时，流式调用解码错误。\n* 由 @jamesrochabrun 在 https:\u002F\u002Fgithub.com\u002Fjamesrochabrun\u002FSwiftOpenAI\u002Fpull\u002F190 中修复：可重用提示的 `InstructionsType` 解码问题（问题 #187）。\n\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002Fjamesrochabrun\u002FSwiftOpenAI\u002Fcompare\u002F4.4.7...4.4.8","2025-12-27T05:58:18",{"id":156,"version":157,"summary_zh":158,"released_at":159},247509,"4.4.7","## 变更内容\n* 由 @jamesrochabrun 在 https:\u002F\u002Fgithub.com\u002Fjamesrochabrun\u002FSwiftOpenAI\u002Fpull\u002F186 中添加了语言参数\n\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002Fjamesrochabrun\u002FSwiftOpenAI\u002Fcompare\u002F4.4.6...4.4.7","2025-12-09T20:48:41",{"id":161,"version":162,"summary_zh":163,"released_at":164},247510,"4.4.6","## 变更内容\n* 功能新增：@jayvenn 在 https:\u002F\u002Fgithub.com\u002Fjamesrochabrun\u002FSwiftOpenAI\u002Fpull\u002F185 中添加了对 gpt-image-1-mini 模型的支持\n\n## 新贡献者\n* @jayvenn 在 https:\u002F\u002Fgithub.com\u002Fjamesrochabrun\u002FSwiftOpenAI\u002Fpull\u002F185 中完成了首次贡献\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Fjamesrochabrun\u002FSwiftOpenAI\u002Fcompare\u002Fv4.4.5...4.4.6","2025-12-07T08:51:35",{"id":166,"version":167,"summary_zh":168,"released_at":169},247511,"v4.4.5","## 变更内容\n* 由 @jamesrochabrun 在 https:\u002F\u002Fgithub.com\u002Fjamesrochabrun\u002FSwiftOpenAI\u002Fpull\u002F183 中修复了实时 API 问题和音频引擎崩溃。\n\n\n**完整变更日志**: 
https:\u002F\u002Fgithub.com\u002Fjamesrochabrun\u002FSwiftOpenAI\u002Fcompare\u002Fv4.4.4...v4.4.5","2025-11-26T07:53:45",{"id":171,"version":172,"summary_zh":173,"released_at":174},247512,"v4.4.4","## 变更内容\n* 添加了 `response.done` 消息处理，用于实时 API 错误报告，由 @jamesrochabrun 在 https:\u002F\u002Fgithub.com\u002Fjamesrochabrun\u002FSwiftOpenAI\u002Fpull\u002F181 中实现。\n\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002Fjamesrochabrun\u002FSwiftOpenAI\u002Fcompare\u002Fv4.4.3...v4.4.4","2025-11-17T07:17:17",{"id":176,"version":177,"summary_zh":178,"released_at":179},247513,"v4.4.3","## 变更内容\n* 由 @jamesrochabrun 在 https:\u002F\u002Fgithub.com\u002Fjamesrochabrun\u002FSwiftOpenAI\u002Fpull\u002F180 中添加了对 MCP 服务器的支持以及实时 API 的图像输入功能。\n\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002Fjamesrochabrun\u002FSwiftOpenAI\u002Fcompare\u002Fv4.4.2...v4.4.3","2025-11-15T00:37:31",{"id":181,"version":182,"summary_zh":183,"released_at":184},247514,"v4.4.2","## 变更内容\n* 由 @jamesrochabrun 在 https:\u002F\u002Fgithub.com\u002Fjamesrochabrun\u002FSwiftOpenAI\u002Fpull\u002F179 中修复了 Azure OpenAI 基础 URL 的构建问题\n\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002Fjamesrochabrun\u002FSwiftOpenAI\u002Fcompare\u002Fv4.4.1...v4.4.2","2025-11-14T06:26:30",{"id":186,"version":187,"summary_zh":188,"released_at":189},247515,"v4.4.1","## 变更内容\n* 由 @jamesrochabrun 在 https:\u002F\u002Fgithub.com\u002Fjamesrochabrun\u002FSwiftOpenAI\u002Fpull\u002F178 中修复的 Azure 实时 API 问题\n\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002Fjamesrochabrun\u002FSwiftOpenAI\u002Fcompare\u002Fv4.4.0...v4.4.1","2025-11-14T06:06:20",{"id":191,"version":192,"summary_zh":193,"released_at":194},247516,"v4.4.0","## 变更内容\n* 修复键路径值类型为 `String` 无法转换为上下文类型 `String?` 的问题，由 @mergesort 在 https:\u002F\u002Fgithub.com\u002Fjamesrochabrun\u002FSwiftOpenAI\u002Fpull\u002F176 中完成\n* 将应用更新至最新的 OpenAI API，由 @jamesrochabrun 在 https:\u002F\u002Fgithub.com\u002Fjamesrochabrun\u002FSwiftOpenAI\u002Fpull\u002F177 中完成\n**感谢 @lzell 提供的 OpenAI 实时 API 
支持**\n\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002Fjamesrochabrun\u002FSwiftOpenAI\u002Fcompare\u002Fv4.3.4...v4.4.0","2025-11-12T08:41:50",{"id":196,"version":197,"summary_zh":198,"released_at":199},247517,"v4.3.4","## What's Changed\r\n* Model Selector demo by @jamesrochabrun in https:\u002F\u002Fgithub.com\u002Fjamesrochabrun\u002FSwiftOpenAI\u002Fpull\u002F172\r\n* Conversations API by @jamesrochabrun in https:\u002F\u002Fgithub.com\u002Fjamesrochabrun\u002FSwiftOpenAI\u002Fpull\u002F173\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fjamesrochabrun\u002FSwiftOpenAI\u002Fcompare\u002Fv4.3.3...v4.3.4","2025-10-06T18:19:27",{"id":201,"version":202,"summary_zh":203,"released_at":204},247518,"v4.3.3","## What's Changed\r\n* Fix model listing issues for OpenRouter by @longseespace in https:\u002F\u002Fgithub.com\u002Fjamesrochabrun\u002FSwiftOpenAI\u002Fpull\u002F162\r\n* Add AIProxy public key for dot com TLD by @lzell in https:\u002F\u002Fgithub.com\u002Fjamesrochabrun\u002FSwiftOpenAI\u002Fpull\u002F161\r\n* Do not throw error for unknown event types by @longseespace in https:\u002F\u002Fgithub.com\u002Fjamesrochabrun\u002FSwiftOpenAI\u002Fpull\u002F165\r\n* Add custom tool support to Response API by @jamesrochabrun in https:\u002F\u002Fgithub.com\u002Fjamesrochabrun\u002FSwiftOpenAI\u002Fpull\u002F171\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fjamesrochabrun\u002FSwiftOpenAI\u002Fcompare\u002Fv4.3.2...v4.3.3","2025-09-30T07:47:32",{"id":206,"version":207,"summary_zh":208,"released_at":209},247519,"v4.3.2","## What's Changed\r\n* Api error details by @longseespace in https:\u002F\u002Fgithub.com\u002Fjamesrochabrun\u002FSwiftOpenAI\u002Fpull\u002F160\r\n* Support all OpenAI Response API output item types by @longseespace in https:\u002F\u002Fgithub.com\u002Fjamesrochabrun\u002FSwiftOpenAI\u002Fpull\u002F159\r\n* Update README with proper timeout interval example by @timimahoney in 
https:\u002F\u002Fgithub.com\u002Fjamesrochabrun\u002FSwiftOpenAI\u002Fpull\u002F158\r\n* Update README.md by @jamesrochabrun in https:\u002F\u002Fgithub.com\u002Fjamesrochabrun\u002FSwiftOpenAI\u002Fpull\u002F157\r\n* Add GPT-5 models support and verbosity parameter by @jamesrochabrun in https:\u002F\u002Fgithub.com\u002Fjamesrochabrun\u002FSwiftOpenAI\u002Fpull\u002F164\r\n\r\n## New Contributors\r\n* @longseespace made their first contribution in https:\u002F\u002Fgithub.com\u002Fjamesrochabrun\u002FSwiftOpenAI\u002Fpull\u002F160\r\n* @timimahoney made their first contribution in https:\u002F\u002Fgithub.com\u002Fjamesrochabrun\u002FSwiftOpenAI\u002Fpull\u002F158\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fjamesrochabrun\u002FSwiftOpenAI\u002Fcompare\u002Fv4.3.1...v4.3.2","2025-08-10T05:52:07",{"id":211,"version":212,"summary_zh":213,"released_at":214},247520,"v4.3.1","## What's Changed\r\n* Fixed: Add 'name' encoding for .jsonSchema in FormatType (Responses API) by @Mickael-tinytap in https:\u002F\u002Fgithub.com\u002Fjamesrochabrun\u002FSwiftOpenAI\u002Fpull\u002F150\r\n* Fix SwiftFormat lint issue in TextConfiguration by @jamesrochabrun in https:\u002F\u002Fgithub.com\u002Fjamesrochabrun\u002FSwiftOpenAI\u002Fpull\u002F153\r\n* Fix the access control of ChatCompletionParameters.StreamOptions by @AreroKetahi in https:\u002F\u002Fgithub.com\u002Fjamesrochabrun\u002FSwiftOpenAI\u002Fpull\u002F156\r\n\r\n## New Contributors\r\n* @Mickael-tinytap made their first contribution in https:\u002F\u002Fgithub.com\u002Fjamesrochabrun\u002FSwiftOpenAI\u002Fpull\u002F150\r\n* @AreroKetahi made their first contribution in https:\u002F\u002Fgithub.com\u002Fjamesrochabrun\u002FSwiftOpenAI\u002Fpull\u002F156\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fjamesrochabrun\u002FSwiftOpenAI\u002Fcompare\u002Fv4.3.0...v4.3.1","2025-07-07T18:41:25",{"id":216,"version":217,"summary_zh":218,"released_at":219},247521,"v4.3.0","## What's Changed\r\n* Adds 
xAI information to README.md by @AlekseyPleshkov in https:\u002F\u002Fgithub.com\u002Fjamesrochabrun\u002FSwiftOpenAI\u002Fpull\u002F149\r\n* Implementing cross-platform support by supporting Linux-friendly HTTP clients by @mergesort in https:\u002F\u002Fgithub.com\u002Fjamesrochabrun\u002FSwiftOpenAI\u002Fpull\u002F143\r\n\r\n## New Contributors\r\n* @AlekseyPleshkov made their first contribution in https:\u002F\u002Fgithub.com\u002Fjamesrochabrun\u002FSwiftOpenAI\u002Fpull\u002F149\r\n* @mergesort made their first contribution in https:\u002F\u002Fgithub.com\u002Fjamesrochabrun\u002FSwiftOpenAI\u002Fpull\u002F143\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fjamesrochabrun\u002FSwiftOpenAI\u002Fcompare\u002Fv4.2.0...v4.3.0","2025-06-17T22:01:36",{"id":221,"version":222,"summary_zh":223,"released_at":224},247522,"v4.2.0","## What's Changed\r\n\r\n⏺ Response API Streaming Support - Summary of Changes\r\n\r\n#### Streaming Responses\r\n\r\nThe Response API supports streaming responses using Server-Sent Events (SSE). This allows you to receive partial responses as they are generated, enabling real-time UI updates and better user experience.\r\n\r\nStream Events\r\n```swift\r\n\u002F\u002F The ResponseStreamEvent enum represents all possible streaming events\r\npublic enum ResponseStreamEvent: Decodable {\r\n  case responseCreated(ResponseCreatedEvent)\r\n  case responseInProgress(ResponseInProgressEvent)\r\n  case responseCompleted(ResponseCompletedEvent)\r\n  case responseFailed(ResponseFailedEvent)\r\n  case outputItemAdded(OutputItemAddedEvent)\r\n  case outputTextDelta(OutputTextDeltaEvent)\r\n  case outputTextDone(OutputTextDoneEvent)\r\n  case functionCallArgumentsDelta(FunctionCallArgumentsDeltaEvent)\r\n  case reasoningSummaryTextDelta(ReasoningSummaryTextDeltaEvent)\r\n  case error(ErrorEvent)\r\n  \u002F\u002F ... 
and many more event types\r\n}\r\n```\r\n\r\nBasic Streaming Example\r\n```swift\r\n\u002F\u002F Enable streaming by setting stream: true\r\nlet parameters = ModelResponseParameter(\r\n    input: .string(\"Tell me a story\"),\r\n    model: .gpt4o,\r\n    stream: true\r\n)\r\n\r\n\u002F\u002F Create a stream\r\nlet stream = try await service.responseCreateStream(parameters)\r\n\r\n\u002F\u002F Process events as they arrive\r\nfor try await event in stream {\r\n    switch event {\r\n    case .outputTextDelta(let delta):\r\n        \u002F\u002F Append text chunk to your UI\r\n        print(delta.delta, terminator: \"\")\r\n        \r\n    case .responseCompleted(let completed):\r\n        \u002F\u002F Response is complete\r\n        print(\"\\nResponse ID: \\(completed.response.id)\")\r\n        \r\n    case .error(let error):\r\n        \u002F\u002F Handle errors\r\n        print(\"Error: \\(error.message)\")\r\n        \r\n    default:\r\n        \u002F\u002F Handle other events as needed\r\n        break\r\n    }\r\n}\r\n```\r\n\r\nStreaming with Conversation State\r\n```swift\r\n\u002F\u002F Maintain conversation continuity with previousResponseId\r\nvar previousResponseId: String? 
= nil\r\nvar messages: [(role: String, content: String)] = []\r\n\r\n\u002F\u002F First message\r\nlet firstParams = ModelResponseParameter(\r\n    input: .string(\"Hello!\"),\r\n    model: .gpt4o,\r\n    stream: true\r\n)\r\n\r\nlet firstStream = try await service.responseCreateStream(firstParams)\r\nvar firstResponse = \"\"\r\n\r\nfor try await event in firstStream {\r\n    switch event {\r\n    case .outputTextDelta(let delta):\r\n        firstResponse += delta.delta\r\n        \r\n    case .responseCompleted(let completed):\r\n        previousResponseId = completed.response.id\r\n        messages.append((role: \"user\", content: \"Hello!\"))\r\n        messages.append((role: \"assistant\", content: firstResponse))\r\n        \r\n    default:\r\n        break\r\n    }\r\n}\r\n\r\n\u002F\u002F Follow-up message with conversation context\r\nvar inputArray: [InputItem] = []\r\n\r\n\u002F\u002F Add conversation history\r\nfor message in messages {\r\n    inputArray.append(.message(InputMessage(\r\n        role: message.role,\r\n        content: .text(message.content)\r\n    )))\r\n}\r\n\r\n\u002F\u002F Add new user message\r\ninputArray.append(.message(InputMessage(\r\n    role: \"user\",\r\n    content: .text(\"How are you?\")\r\n)))\r\n\r\nlet followUpParams = ModelResponseParameter(\r\n    input: .array(inputArray),\r\n    model: .gpt4o,\r\n    previousResponseId: previousResponseId,\r\n    stream: true\r\n)\r\n\r\nlet followUpStream = try await service.responseCreateStream(followUpParams)\r\n\u002F\u002F Process the follow-up stream...\r\n```\r\n\r\nStreaming with Tools and Function Calling\r\n```swift\r\nlet parameters = ModelResponseParameter(\r\n    input: .string(\"What's the weather in San Francisco?\"),\r\n    model: .gpt4o,\r\n    tools: [\r\n        Tool(\r\n            type: \"function\",\r\n            function: ChatCompletionParameters.ChatFunction(\r\n                name: \"get_weather\",\r\n                description: \"Get current weather\",\r\n  
              parameters: JSONSchema(\r\n                    type: .object,\r\n                    properties: [\r\n                        \"location\": JSONSchema(type: .string)\r\n                    ],\r\n                    required: [\"location\"]\r\n                )\r\n            )\r\n        )\r\n    ],\r\n    stream: true\r\n)\r\n\r\nlet stream = try await service.responseCreateStream(parameters)\r\nvar functionCallArguments = \"\"\r\n\r\nfor try await event in stream {\r\n    switch event {\r\n    case .functionCallArgumentsDelta(let delta):\r\n        \u002F\u002F Accumulate function call arguments\r\n        functionCallArguments += delta.delta\r\n        \r\n    case .functionCallArgumentsDone(let done):\r\n        \u002F\u002F Function call is complete\r\n        print(\"Function: \\(done.name)\")\r\n        print(\"Arguments: \\(functionCallArguments)\")\r\n        \r\n    case .outputTextDelta(let delta):\r\n        \u002F\u002F Regular text output\r\n        print(delta.delta, terminator: \"\")\r\n        \r\n    default:\r\n        break\r\n    }\r\n}\r\n```\r\n\r\nCanceling a Stream\r\n```swift\r\n\u002F\u002F Streams can be canceled using Swift's task cancellation\r\nlet streamTask = Task {\r\n    let stream = try await service.responseCreateStream(parameters)\r\n    \r\n    for try await event in stream {\r\n        \u002F\u002F ","2025-06-07T22:31:41",{"id":226,"version":227,"summary_zh":228,"released_at":229},247523,"v4.1.1","Update Read me for Anthropic OpenAI compatibility\r\n\r\n## Anthropic\r\n\r\nAnthropic provides OpenAI compatibility, for more, visit the [documentation](https:\u002F\u002Fdocs.anthropic.com\u002Fen\u002Fapi\u002Fopenai-sdk#getting-started-with-the-openai-sdk)\r\n\r\nTo use Claude models with `SwiftOpenAI` you can.\r\n\r\n```swift\r\nlet anthropicApiKey = \"\"\r\nlet openAIService = OpenAIServiceFactory.service(apiKey: anthropicApiKey, \r\n                     overrideBaseURL: \"https:\u002F\u002Fapi.anthropic.com\", 
\r\n                     overrideVersion: \"v1\")\r\n```\r\n\r\nNow you can create the completion parameters like this:\r\n\r\n```swift\r\nlet parameters = ChatCompletionParameters(\r\n   messages: [.init(\r\n   role: .user,\r\n   content: \"Are you Claude?\")],\r\n   model: .custom(\"claude-3-7-sonnet-20250219\"))\r\n```\r\n\r\n## What else\r\n* Allow custom speech model by @hoaknoppix in https:\u002F\u002Fgithub.com\u002Fjamesrochabrun\u002FSwiftOpenAI\u002Fpull\u002F138\r\n* Fix for Doubao API by ByteDance by @flexih in https:\u002F\u002Fgithub.com\u002Fjamesrochabrun\u002FSwiftOpenAI\u002Fpull\u002F133\r\n\r\n## New Contributors\r\n* @hoaknoppix made their first contribution in https:\u002F\u002Fgithub.com\u002Fjamesrochabrun\u002FSwiftOpenAI\u002Fpull\u002F138\r\n* @flexih made their first contribution in https:\u002F\u002Fgithub.com\u002Fjamesrochabrun\u002FSwiftOpenAI\u002Fpull\u002F133\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fjamesrochabrun\u002FSwiftOpenAI\u002Fcompare\u002Fv4.1.0...v4.1.1","2025-05-13T06:47:29",{"id":231,"version":232,"summary_zh":233,"released_at":234},247524,"v4.1.0","This library supports latest OpenAI Image generation\r\n \r\n - Parameters Create\r\n \r\n ```swift\r\n \u002F\u002F\u002F 'Create Image':\r\n \u002F\u002F\u002F https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fapi-reference\u002Fimages\u002Fcreate\r\n public struct CreateImageParameters: Encodable {\r\n    \r\n    \u002F\u002F\u002F A text description of the desired image(s).\r\n    \u002F\u002F\u002F The maximum length is 32000 characters for `gpt-image-1`, 1000 characters for `dall-e-2` and 4000 characters for `dall-e-3`.\r\n    public let prompt: String\r\n    \r\n    \u002F\u002F MARK: - Optional properties\r\n    \r\n    \u002F\u002F\u002F Allows to set transparency for the background of the generated image(s).\r\n    \u002F\u002F\u002F This parameter is only supported for `gpt-image-1`.\r\n    \u002F\u002F\u002F Must be one of `transparent`, 
`opaque` or `auto` (default value).\r\n    \u002F\u002F\u002F When `auto` is used, the model will automatically determine the best background for the image.\r\n    \u002F\u002F\u002F If `transparent`, the output format needs to support transparency, so it should be set to either `png` (default value) or `webp`.\r\n    public let background: Background?\r\n    \r\n    \u002F\u002F\u002F The model to use for image generation. One of `dall-e-2`, `dall-e-3`, or `gpt-image-1`.\r\n    \u002F\u002F\u002F Defaults to `dall-e-2` unless a parameter specific to `gpt-image-1` is used.\r\n    public let model: Model?\r\n    \r\n    \u002F\u002F\u002F Control the content-moderation level for images generated by `gpt-image-1`.\r\n    \u002F\u002F\u002F Must be either low for less restrictive filtering or auto (default value).\r\n    public let moderation: Moderation?\r\n    \r\n    \u002F\u002F\u002F The number of images to generate. Must be between 1 and 10. For `dall-e-3`, only `n=1` is supported.\r\n    \u002F\u002F\u002F Defaults to `1`\r\n    public let n: Int?\r\n    \r\n    \u002F\u002F\u002F The compression level (0-100%) for the generated images.\r\n    \u002F\u002F\u002F This parameter is only supported for `gpt-image-1` with the `webp` or `jpeg` output formats, and defaults to 100.\r\n    public let outputCompression: Int?\r\n    \r\n    \u002F\u002F\u002F The format in which the generated images are returned.\r\n    \u002F\u002F\u002F This parameter is only supported for `gpt-image-1`.\r\n    \u002F\u002F\u002F Must be one of `png`, `jpeg`, or `webp`.\r\n    public let outputFormat: OutputFormat?\r\n    \r\n    \u002F\u002F\u002F The quality of the image that will be generated.\r\n    \u002F\u002F\u002F - `auto` (default value) will automatically select the best quality for the given model.\r\n    \u002F\u002F\u002F - `high`, `medium` and `low` are supported for gpt-image-1.\r\n    \u002F\u002F\u002F - `hd` and `standard` are supported for dall-e-3.\r\n    
\u002F\u002F\u002F - `standard` is the only option for dall-e-2.\r\n    public let quality: Quality?\r\n    \r\n    \u002F\u002F\u002F The format in which generated images with dall-e-2 and dall-e-3 are returned.\r\n    \u002F\u002F\u002F Must be one of `url` or `b64_json`.\r\n    \u002F\u002F\u002F URLs are only valid for 60 minutes after the image has been generated.\r\n    \u002F\u002F\u002F This parameter isn't supported for `gpt-image-1` which will always return base64-encoded images.\r\n    public let responseFormat: ResponseFormat?\r\n    \r\n    \u002F\u002F\u002F The size of the generated images.\r\n    \u002F\u002F\u002F - For gpt-image-1, one of `1024x1024`, `1536x1024` (landscape), `1024x1536` (portrait), or `auto` (default value)\r\n    \u002F\u002F\u002F - For dall-e-3, one of `1024x1024`, `1792x1024`, or `1024x1792`\r\n    \u002F\u002F\u002F - For dall-e-2, one of `256x256`, `512x512`, or `1024x1024`\r\n    public let size: String?\r\n    \r\n    \u002F\u002F\u002F The style of the generated images.\r\n    \u002F\u002F\u002F This parameter is only supported for `dall-e-3`.\r\n    \u002F\u002F\u002F Must be one of `vivid` or `natural`.\r\n    \u002F\u002F\u002F Vivid causes the model to lean towards generating hyper-real and dramatic images.\r\n    \u002F\u002F\u002F Natural causes the model to produce more natural, less hyper-real looking images.\r\n    public let style: Style?\r\n    \r\n    \u002F\u002F\u002F A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse.\r\n    public let user: String?\r\n }\r\n ```\r\n \r\n - Parameters Edit\r\n \r\n ```swift\r\n \u002F\u002F\u002F Creates an edited or extended image given one or more source images and a prompt.\r\n \u002F\u002F\u002F This endpoint only supports `gpt-image-1` and `dall-e-2`.\r\n public struct CreateImageEditParameters: Encodable {\r\n    \r\n    \u002F\u002F\u002F The image(s) to edit.\r\n    \u002F\u002F\u002F For `gpt-image-1`, each image 
should be a `png`, `webp`, or `jpg` file less than 25MB.\r\n    \u002F\u002F\u002F For `dall-e-2`, you can only provide one image, and it should be a square `png` file less than 4MB.\r\n    let image: [Data]\r\n    \r\n    \u002F\u002F\u002F A text description of the desired image(s).\r\n    \u002F\u002F\u002F The maximum length is 1000 characters for `dall-e-2`, and 32000 characters for `gpt-image-1`.\r\n    let prompt: String\r\n    \r\n    \u002F\u002F\u002F An additional image whose fully transparent areas indicate where `image` should be edited.\r\n    \u002F\u002F\u002F If there are multiple images provided, the mask will be applied on the first image.\r\n    \u002F\u002F\u002F Must be a valid PNG file, less than 4MB, and have the same dimensions as `image`.\r\n    let mask: Data?\r\n    \r\n    \u002F\u002F\u002F The model to use for image generation. Only `dall-e-2` and `gpt-image-1` are supported.\r\n    \u002F\u002F\u002F Defaults to `dall-e-2` unless a parameter specific to `gpt-image-1` is used.\r\n    let model: String?\r\n    \r\n    \u002F\u002F\u002F The number of images to generate. 
Must be between 1 and 10.\r\n    \u002F\u002F\u002F Defaults to 1.\r\n  ","2025-04-25T21:53:40",{"id":236,"version":237,"summary_zh":238,"released_at":239},247525,"v4.0.7","## What's Changed\r\n* Codable updates by @jamesrochabrun in https:\u002F\u002Fgithub.com\u002Fjamesrochabrun\u002FSwiftOpenAI\u002Fpull\u002F134\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fjamesrochabrun\u002FSwiftOpenAI\u002Fcompare\u002Fv4.0.6...v4.0.7","2025-04-14T06:36:04",{"id":241,"version":242,"summary_zh":243,"released_at":244},247526,"v4.0.6","\r\nAdding convenient property in `ResponseModel` addressing https:\u002F\u002Fgithub.com\u002Fjamesrochabrun\u002FSwiftOpenAI\u002Fissues\u002F129\r\n\r\n```swift\r\n   \u002F\u002F\u002F Convenience property that aggregates all text output from output_text items in the output array.\r\n   \u002F\u002F\u002F Similar to the outputText property in Python and JavaScript SDKs.\r\n   public var outputText: String? {\r\n      let outputTextItems = output.compactMap { outputItem -> String? in\r\n         switch outputItem {\r\n         case .message(let message):\r\n            return message.content.compactMap { contentItem -> String? in\r\n               switch contentItem {\r\n               case .outputText(let outputText):\r\n                  return outputText.text\r\n               }\r\n            }.joined()\r\n         default:\r\n            return nil\r\n         }\r\n      }\r\n      \r\n      return outputTextItems.isEmpty ? nil : outputTextItems.joined()\r\n   }\r\n```","2025-03-17T07:08:40"]