[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"tool-openai--openai-dotnet":3,"similar-openai--openai-dotnet":195},{"id":4,"github_repo":5,"name":6,"description_en":7,"description_zh":8,"ai_summary_zh":9,"readme_en":10,"readme_zh":11,"quickstart_zh":12,"use_case_zh":13,"hero_image_url":14,"owner_login":15,"owner_name":16,"owner_avatar_url":17,"owner_bio":18,"owner_company":19,"owner_location":19,"owner_email":19,"owner_twitter":19,"owner_website":20,"owner_url":21,"languages":22,"stars":39,"forks":40,"last_commit_at":41,"license":42,"difficulty_score":43,"env_os":44,"env_gpu":45,"env_ram":45,"env_deps":46,"category_tags":52,"github_topics":58,"view_count":43,"oss_zip_url":19,"oss_zip_packed_at":19,"status":61,"created_at":62,"updated_at":63,"faqs":64,"releases":94},7271,"openai\u002Fopenai-dotnet","openai-dotnet","The official .NET library for the OpenAI API","openai-dotnet 是 OpenAI 官方推出的 .NET 开发库，旨在帮助开发者轻松地在 .NET 应用程序中集成 OpenAI 的强大功能。它封装了复杂的 REST API 调用细节，让程序员无需处理繁琐的 HTTP 请求和 JSON 解析，即可直接通过简洁的代码调用聊天补全、图像生成、语音转录、文本嵌入以及智能助手等核心服务。\n\n对于使用 C# 或 F# 进行开发的工程师而言，openai-dotnet 解决了手动对接 API 时容易出现的认证管理、错误重试及数据序列化等痛点。该库由 OpenAI 与微软合作，基于标准的 OpenAPI 规范自动生成，确保了接口更新的及时性与稳定性。其技术亮点包括原生支持异步编程模型、流式输出（Streaming）、函数调用（Function Calling）以及结构化数据输出，同时完美兼容依赖注入模式，便于构建可测试、高可用的企业级应用。此外，它还特别优化了对 Azure OpenAI 服务的支持，方便云原生架构的平滑迁移。无论是初创团队的快速原型开发，还是大型企业系统的智能化升级，openai-dotnet 都是 .NET 生态中接入","openai-dotnet 是 OpenAI 官方推出的 .NET 开发库，旨在帮助开发者轻松地在 .NET 应用程序中集成 OpenAI 的强大功能。它封装了复杂的 REST API 调用细节，让程序员无需处理繁琐的 HTTP 请求和 JSON 解析，即可直接通过简洁的代码调用聊天补全、图像生成、语音转录、文本嵌入以及智能助手等核心服务。\n\n对于使用 C# 或 F# 进行开发的工程师而言，openai-dotnet 解决了手动对接 API 时容易出现的认证管理、错误重试及数据序列化等痛点。该库由 OpenAI 与微软合作，基于标准的 OpenAPI 规范自动生成，确保了接口更新的及时性与稳定性。其技术亮点包括原生支持异步编程模型、流式输出（Streaming）、函数调用（Function Calling）以及结构化数据输出，同时完美兼容依赖注入模式，便于构建可测试、高可用的企业级应用。此外，它还特别优化了对 Azure OpenAI 服务的支持，方便云原生架构的平滑迁移。无论是初创团队的快速原型开发，还是大型企业系统的智能化升级，openai-dotnet 都是 .NET 生态中接入大模型能力的首选桥梁。","# OpenAI .NET API library\n\n[![NuGet stable 
version](https:\u002F\u002Fimg.shields.io\u002Fnuget\u002Fv\u002Fopenai.svg)](https:\u002F\u002Fwww.nuget.org\u002Fpackages\u002FOpenAI)\n\nThe OpenAI .NET library provides convenient access to the OpenAI REST API from .NET applications.\n\nIt is generated from our [OpenAPI specification](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-openapi) in collaboration with Microsoft.\n\n## Table of Contents\n\n- [Getting started](#getting-started)\n  - [Prerequisites](#prerequisites)\n  - [Install the NuGet package](#install-the-nuget-package)\n- [Using the client library](#using-the-client-library)\n  - [Namespace organization](#namespace-organization)\n  - [Using the async API](#using-the-async-api)\n  - [Using the `OpenAIClient` class](#using-the-openaiclient-class)\n- [How to use dependency injection](#how-to-use-dependency-injection)\n- [How to use chat completions with streaming](#how-to-use-chat-completions-with-streaming)\n- [How to use chat completions with tools and function calling](#how-to-use-chat-completions-with-tools-and-function-calling)\n- [How to use chat completions with structured outputs](#how-to-use-chat-completions-with-structured-outputs)\n- [How to use chat completions with audio](#how-to-use-chat-completions-with-audio)\n- [How to use responses with streaming and reasoning](#how-to-use-responses-with-streaming-and-reasoning)\n- [How to use responses with file search](#how-to-use-responses-with-file-search)\n- [How to use responses with web search](#how-to-use-responses-with-web-search)\n- [How to generate text embeddings](#how-to-generate-text-embeddings)\n- [How to generate images](#how-to-generate-images)\n- [How to transcribe audio](#how-to-transcribe-audio)\n- [How to use assistants with retrieval augmented generation (RAG)](#how-to-use-assistants-with-retrieval-augmented-generation-rag)\n- [How to use assistants with streaming and vision](#how-to-use-assistants-with-streaming-and-vision)\n- [How to work with Azure 
OpenAI](#how-to-work-with-azure-openai)\n- [Advanced scenarios](#advanced-scenarios)\n  - [Using protocol methods](#using-protocol-methods)\n  - [Mock a client for testing](#mock-a-client-for-testing)\n  - [Automatically retrying errors](#automatically-retrying-errors)\n  - [Observability](#observability)\n\n## Getting started\n\n### Prerequisites\n\nTo call the OpenAI REST API, you will need an API key. To obtain one, first [create a new OpenAI account](https:\u002F\u002Fplatform.openai.com\u002Fsignup) or [log in](https:\u002F\u002Fplatform.openai.com\u002Flogin). Next, navigate to the [API key page](https:\u002F\u002Fplatform.openai.com\u002Faccount\u002Fapi-keys) and select \"Create new secret key\", optionally naming the key. Make sure to save your API key somewhere safe and do not share it with anyone.\n\n### Install the NuGet package\n\nAdd the client library to your .NET project by installing the [NuGet](https:\u002F\u002Fwww.nuget.org\u002F) package via your IDE or by running the following command in the .NET CLI:\n\n```cli\ndotnet add package OpenAI\n```\n\nNote that the code examples included below were written using [.NET 10](https:\u002F\u002Fdotnet.microsoft.com\u002Fdownload\u002Fdotnet\u002F10.0). The OpenAI .NET library is compatible with all .NET Standard 2.0 applications, but the syntax used in some of the code examples in this document may depend on newer language features.\n\n## Using the client library\n\nThe full API of this library can be found in the [OpenAI.netstandard2.0.cs](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fblob\u002Fmain\u002Fapi\u002FOpenAI.netstandard2.0.cs) file, and there are many [code examples](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Ftree\u002Fmain\u002Fexamples) to help. 
For instance, the following snippet illustrates the basic use of the chat completions API:\n\n```C# Snippet:ReadMe_ChatCompletion_Basic\nChatClient client = new(model: \"gpt-5.1\", apiKey: Environment.GetEnvironmentVariable(\"OPENAI_API_KEY\"));\n\nChatCompletion completion = client.CompleteChat(\"Say 'this is a test.'\");\nConsole.WriteLine($\"[ASSISTANT]: {completion.Content[0].Text}\");\n```\n\nWhile you can pass your API key directly as a string, it is highly recommended that you keep it in a secure location and instead access it via an environment variable or configuration file as shown above to avoid storing it in source control.\n\n### Using a custom base URL and API key\n\nIf you need to connect to an alternative API endpoint (for example, a proxy or self-hosted OpenAI-compatible LLM), you can specify a custom base URL and API key using the `ApiKeyCredential` and `OpenAIClientOptions`:\n\n```C# Snippet:ReadMe_CustomUrl\nChatClient client = new(\n    model: \"MODEL_NAME\",\n    credential: new ApiKeyCredential(Environment.GetEnvironmentVariable(\"OPENAI_API_KEY\")),\n    options: new OpenAIClientOptions()\n    {\n        Endpoint = new Uri(\"https:\u002F\u002FYOUR_BASE_URL\")\n    });\n```\n\nReplace `MODEL_NAME` with your model name and `YOUR_BASE_URL` with your endpoint URI. This is useful when working with OpenAI-compatible APIs or custom deployments.\n\n### Namespace organization\n\nThe library is organized into namespaces by feature areas in the OpenAI REST API. 
Each namespace contains a corresponding client class.\n\n| Namespace                     | Client class                 |\n| ------------------------------|------------------------------|\n| `OpenAI.Assistants`           | `AssistantClient`            |\n| `OpenAI.Audio`                | `AudioClient`                |\n| `OpenAI.Batch`                | `BatchClient`                |\n| `OpenAI.Chat`                 | `ChatClient`                 |\n| `OpenAI.Embeddings`           | `EmbeddingClient`            |\n| `OpenAI.Evals`                | `EvaluationClient`           |\n| `OpenAI.FineTuning`           | `FineTuningClient`           |\n| `OpenAI.Files`                | `OpenAIFileClient`           |\n| `OpenAI.Images`               | `ImageClient`                |\n| `OpenAI.Models`               | `OpenAIModelClient`          |\n| `OpenAI.Moderations`          | `ModerationClient`           |\n| `OpenAI.Realtime`             | `RealtimeClient`             |\n| `OpenAI.Responses`            | `ResponsesClient`            |\n| `OpenAI.VectorStores`         | `VectorStoreClient`          |\n\n### Using the async API\n\nEvery client method that performs a synchronous API call has an asynchronous variant in the same client class. For instance, the asynchronous variant of the `ChatClient`'s `CompleteChat` method is `CompleteChatAsync`. To rewrite the call above using the asynchronous counterpart, simply `await` the call to the corresponding async variant:\n\n```C# Snippet:ReadMe_ChatCompletion_Async\nChatCompletion completion = await client.CompleteChatAsync(\"Say 'this is a test.'\");\n```\n\n### Using the `OpenAIClient` class\n\nIn addition to the namespaces mentioned above, there is also the parent `OpenAI` namespace itself:\n\n```csharp\nusing OpenAI;\n```\n\nThis namespace contains the `OpenAIClient` class, which offers certain conveniences when you need to work with multiple feature area clients. 
Specifically, you can use an instance of this class to create instances of the other clients and have them share the same implementation details, which might be more efficient.\n\nYou can create an `OpenAIClient` by specifying the API key that all clients will use for authentication:\n\n```C# Snippet:ReadMe_OpenAIClient_Create\nOpenAIClient client = new(Environment.GetEnvironmentVariable(\"OPENAI_API_KEY\"));\n```\n\nNext, to create an instance of an `AudioClient`, for example, you can call the `OpenAIClient`'s `GetAudioClient` method by passing the OpenAI model that the `AudioClient` will use, just as if you were using the `AudioClient` constructor directly. If necessary, you can create additional clients of the same type to target different models.\n\n```C# Snippet:ReadMe_OpenAIClient_GetAudioClient\nAudioClient ttsClient = client.GetAudioClient(\"tts-1\");\nAudioClient whisperClient = client.GetAudioClient(\"whisper-1\");\n```\n\n## How to use dependency injection\n\nThe OpenAI clients are **thread-safe** and can be safely registered as **singletons** in ASP.NET Core's Dependency Injection container. 
This maximizes resource efficiency and HTTP connection reuse.\n\nRegister the `ChatClient` as a singleton in your `Program.cs`:\n\n```C# Snippet:ReadMe_DependencyInjection_Register\nbuilder.Services.AddSingleton\u003CChatClient>(serviceProvider =>\n{\n    var apiKey = Environment.GetEnvironmentVariable(\"OPENAI_API_KEY\");\n    var model = \"gpt-5.1\";\n\n    return new ChatClient(model, apiKey);\n});\n```\n\nThen inject and use the client in your controllers or services:\n\n```C# Snippet:ReadMe_DependencyInjection_Controller\n[ApiController]\n[Route(\"api\u002F[controller]\")]\npublic class ChatController : ControllerBase\n{\n    private readonly ChatClient _chatClient;\n\n    public ChatController(ChatClient chatClient)\n    {\n        _chatClient = chatClient;\n    }\n\n    [HttpPost(\"complete\")]\n    public async Task\u003CIActionResult> CompleteChat([FromBody] string message)\n    {\n        ChatCompletion completion = await _chatClient.CompleteChatAsync(message);\n        return Ok(new { response = completion.Content[0].Text });\n    }\n}\n```\n\n## How to use chat completions with streaming\n\nWhen you request a chat completion, the default behavior is for the server to generate it in its entirety before sending it back in a single response. Consequently, long chat completions can require waiting for several seconds before hearing back from the server. To mitigate this, the OpenAI REST API supports the ability to stream partial results back as they are being generated, allowing you to start processing the beginning of the completion before it is finished.\n\nThe client library offers a convenient approach to working with streaming chat completions. 
If you wanted to re-write the example from the previous section using streaming, rather than calling the `ChatClient`'s `CompleteChat` method, you would call its `CompleteChatStreaming` method instead:\n\n```C# Snippet:ReadMe_Streaming_Sync\nCollectionResult\u003CStreamingChatCompletionUpdate> completionUpdates = client.CompleteChatStreaming(\"Say 'this is a test.'\");\n```\n\nNotice that the returned value is a `CollectionResult\u003CStreamingChatCompletionUpdate>` instance, which can be enumerated to process the streaming response chunks as they arrive:\n\n```C# Snippet:ReadMe_Streaming_Enumerate\nCollectionResult\u003CStreamingChatCompletionUpdate> completionUpdates = client.CompleteChatStreaming(\"Say 'this is a test.'\");\n\nConsole.Write($\"[ASSISTANT]: \");\nforeach (StreamingChatCompletionUpdate completionUpdate in completionUpdates)\n{\n    if (completionUpdate.ContentUpdate.Count > 0)\n    {\n        Console.Write(completionUpdate.ContentUpdate[0].Text);\n    }\n}\n```\n\nAlternatively, you can do this asynchronously by calling the `CompleteChatStreamingAsync` method to get an `AsyncCollectionResult\u003CStreamingChatCompletionUpdate>` and enumerate it using `await foreach`:\n\n```C# Snippet:ReadMe_Streaming_Async\nAsyncCollectionResult\u003CStreamingChatCompletionUpdate> completionUpdates = client.CompleteChatStreamingAsync(\"Say 'this is a test.'\");\n\nConsole.Write($\"[ASSISTANT]: \");\nawait foreach (StreamingChatCompletionUpdate completionUpdate in completionUpdates)\n{\n    if (completionUpdate.ContentUpdate.Count > 0)\n    {\n        Console.Write(completionUpdate.ContentUpdate[0].Text);\n    }\n}\n```\n\n## How to use chat completions with tools and function calling\n\nIn this example, you have two functions. 
The first function can retrieve a user's current geographic location (e.g., by polling the location service APIs of the user's device), while the second function can query the weather in a given location (e.g., by making an API call to some third-party weather service). You want the model to be able to call these functions if it deems it necessary to have this information in order to respond to a user request as part of generating a chat completion. For illustrative purposes, consider the following:\n\n```C# Snippet:ReadMe_Tools_Functions\nstatic string GetCurrentLocation()\n{\n    \u002F\u002F Call the location API here.\n    return \"San Francisco\";\n}\n\nstatic string GetCurrentWeather(string location, string unit = \"celsius\")\n{\n    \u002F\u002F Call the weather API here.\n    return $\"31 {unit}\";\n}\n```\n\nStart by creating two `ChatTool` instances using the static `CreateFunctionTool` method to describe each function:\n\n```C# Snippet:ReadMe_Tools_Definitions\nChatTool getCurrentLocationTool = ChatTool.CreateFunctionTool(\n    functionName: nameof(GetCurrentLocation),\n    functionDescription: \"Get the user's current location\"\n);\n\nChatTool getCurrentWeatherTool = ChatTool.CreateFunctionTool(\n    functionName: nameof(GetCurrentWeather),\n    functionDescription: \"Get the current weather in a given location\",\n    functionParameters: BinaryData.FromBytes(\"\"\"\n        {\n            \"type\": \"object\",\n            \"properties\": {\n                \"location\": {\n                    \"type\": \"string\",\n                    \"description\": \"The city and state, e.g. Boston, MA\"\n                },\n                \"unit\": {\n                    \"type\": \"string\",\n                    \"enum\": [ \"celsius\", \"fahrenheit\" ],\n                    \"description\": \"The temperature unit to use. 
Infer this from the specified location.\"\n                }\n            },\n            \"required\": [ \"location\" ]\n        }\n        \"\"\"u8.ToArray())\n);\n```\n\nNext, create a `ChatCompletionOptions` instance and add both tools to its `Tools` property. You will pass the `ChatCompletionOptions` as an argument in your calls to the `ChatClient`'s `CompleteChat` method.\n\n```C# Snippet:ReadMe_Tools_Options\nList\u003CChatMessage> messages =\n[\n    new UserChatMessage(\"What's the weather like today?\"),\n];\n\nChatCompletionOptions options = new()\n{\n    Tools = { getCurrentLocationTool, getCurrentWeatherTool },\n};\n```\n\nWhen the resulting `ChatCompletion` has a `FinishReason` property equal to `ChatFinishReason.ToolCalls`, it means that the model has determined that one or more tools must be called before the assistant can respond appropriately. In those cases, you must first call the function specified in the `ChatCompletion`'s `ToolCalls` and then call the `ChatClient`'s `CompleteChat` method again while passing the function's result as an additional `ToolChatMessage`. 
Repeat this process as needed.\n\n```C# Snippet:ReadMe_Tools_Loop\nbool requiresAction;\n\ndo\n{\n    requiresAction = false;\n    ChatCompletion completion = client.CompleteChat(messages, options);\n\n    switch (completion.FinishReason)\n    {\n        case ChatFinishReason.Stop:\n        {\n            \u002F\u002F Add the assistant message to the conversation history.\n            messages.Add(new AssistantChatMessage(completion));\n            break;\n        }\n\n        case ChatFinishReason.ToolCalls:\n        {\n            \u002F\u002F First, add the assistant message with tool calls to the conversation history.\n            messages.Add(new AssistantChatMessage(completion));\n\n            \u002F\u002F Then, add a new tool message for each tool call that is resolved.\n            foreach (ChatToolCall toolCall in completion.ToolCalls)\n            {\n                switch (toolCall.FunctionName)\n                {\n                    case nameof(GetCurrentLocation):\n                        {\n                            string toolResult = GetCurrentLocation();\n                            messages.Add(new ToolChatMessage(toolCall.Id, toolResult));\n                            break;\n                        }\n\n                    case nameof(GetCurrentWeather):\n                        {\n                            \u002F\u002F The arguments that the model wants to use to call the function are specified as a\n                            \u002F\u002F stringified JSON object based on the schema defined in the tool definition. Note that\n                            \u002F\u002F the model may hallucinate arguments too. 
Consequently, it is important to do the\n                            \u002F\u002F appropriate parsing and validation before calling the function.\n                            using JsonDocument argumentsJson = JsonDocument.Parse(toolCall.FunctionArguments);\n                            bool hasLocation = argumentsJson.RootElement.TryGetProperty(\"location\", out JsonElement location);\n                            bool hasUnit = argumentsJson.RootElement.TryGetProperty(\"unit\", out JsonElement unit);\n\n                            if (!hasLocation)\n                            {\n                                throw new ArgumentNullException(nameof(location), \"The location argument is required.\");\n                            }\n\n                            string toolResult = hasUnit\n                                ? GetCurrentWeather(location.GetString(), unit.GetString())\n                                : GetCurrentWeather(location.GetString());\n                            messages.Add(new ToolChatMessage(toolCall.Id, toolResult));\n                            break;\n                        }\n\n                    default:\n                        {\n                            \u002F\u002F Handle other unexpected calls.\n                            throw new NotImplementedException();\n                        }\n                }\n            }\n\n            requiresAction = true;\n            break;\n        }\n\n        case ChatFinishReason.Length:\n            throw new NotImplementedException(\"Incomplete model output due to MaxTokens parameter or token limit exceeded.\");\n\n        case ChatFinishReason.ContentFilter:\n            throw new NotImplementedException(\"Omitted content due to a content filter flag.\");\n\n        case ChatFinishReason.FunctionCall:\n            throw new NotImplementedException(\"Deprecated in favor of tool calls.\");\n\n        default:\n            throw new 
NotImplementedException(completion.FinishReason.ToString());\n    }\n} while (requiresAction);\n```\n\n## How to use chat completions with structured outputs\n\nBeginning with the `gpt-4o-mini`, `gpt-4o-mini-2024-07-18`, and `gpt-4o-2024-08-06` model snapshots, structured outputs are available for both top-level response content and tool calls in the chat completion and assistants APIs. For information about the feature, see [the Structured Outputs guide](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Fstructured-outputs).\n\nTo use structured outputs to constrain chat completion content, set an appropriate `ChatResponseFormat` as in the following example:\n\n```C# Snippet:ReadMe_StructuredOutputs\nList\u003CChatMessage> messages =\n[\n    new UserChatMessage(\"How can I solve 8x + 7 = -23?\"),\n];\n\nChatCompletionOptions options = new()\n{\n    ResponseFormat = ChatResponseFormat.CreateJsonSchemaFormat(\n        jsonSchemaFormatName: \"math_reasoning\",\n        jsonSchema: BinaryData.FromBytes(\"\"\"\n            {\n                \"type\": \"object\",\n                \"properties\": {\n                    \"steps\": {\n                        \"type\": \"array\",\n                        \"items\": {\n                            \"type\": \"object\",\n                            \"properties\": {\n                                \"explanation\": { \"type\": \"string\" },\n                                \"output\": { \"type\": \"string\" }\n                            },\n                            \"required\": [\"explanation\", \"output\"],\n                            \"additionalProperties\": false\n                        }\n                    },\n                    \"final_answer\": { \"type\": \"string\" }\n                },\n                \"required\": [\"steps\", \"final_answer\"],\n                \"additionalProperties\": false\n            }\n            \"\"\"u8.ToArray()),\n        jsonSchemaIsStrict: 
true)\n};\n\nChatCompletion completion = client.CompleteChat(messages, options);\n\nusing JsonDocument structuredJson = JsonDocument.Parse(completion.Content[0].Text);\n\nConsole.WriteLine($\"Final answer: {structuredJson.RootElement.GetProperty(\"final_answer\")}\");\nConsole.WriteLine(\"Reasoning steps:\");\n\nforeach (JsonElement stepElement in structuredJson.RootElement.GetProperty(\"steps\").EnumerateArray())\n{\n    Console.WriteLine($\"  - Explanation: {stepElement.GetProperty(\"explanation\")}\");\n    Console.WriteLine($\"    Output: {stepElement.GetProperty(\"output\")}\");\n}\n```\n\n## How to use chat completions with audio\n\nStarting with the `gpt-4o-audio-preview` model, chat completions can process audio input and output.\n\nThis example demonstrates:\n  1. Configuring the client with the supported `gpt-4o-audio-preview` model\n  1. Supplying user audio input on a chat completion request\n  1. Requesting model audio output from the chat completion operation\n  1. Retrieving audio output from a `ChatCompletion` instance\n  1. 
Using past audio output as `ChatMessage` conversation history\n\n```C# Snippet:ReadMe_ChatAudio\n\u002F\u002F Chat audio input and output is only supported on specific models, beginning with gpt-4o-audio-preview\nChatClient client = new(\"gpt-4o-audio-preview\", Environment.GetEnvironmentVariable(\"OPENAI_API_KEY\"));\n\n\u002F\u002F Input audio is provided to a request by adding an audio content part to a user message\nstring audioFilePath = Path.Combine(\"Assets\", \"realtime_whats_the_weather_pcm16_24khz_mono.wav\");\nbyte[] audioFileRawBytes = File.ReadAllBytes(audioFilePath);\nBinaryData audioData = BinaryData.FromBytes(audioFileRawBytes);\n\nList\u003CChatMessage> messages =\n[\n    new UserChatMessage(ChatMessageContentPart.CreateInputAudioPart(audioData, ChatInputAudioFormat.Wav)),\n];\n\n\u002F\u002F Output audio is requested by configuring ChatCompletionOptions to include the appropriate\n\u002F\u002F ResponseModalities values and corresponding AudioOptions.\nChatCompletionOptions options = new()\n{\n    ResponseModalities = ChatResponseModalities.Text | ChatResponseModalities.Audio,\n    AudioOptions = new(ChatOutputAudioVoice.Alloy, ChatOutputAudioFormat.Mp3),\n};\n\nChatCompletion completion = client.CompleteChat(messages, options);\n\nvoid PrintAudioContent()\n{\n    if (completion.OutputAudio is ChatOutputAudio outputAudio)\n    {\n        Console.WriteLine($\"Response audio transcript: {outputAudio.Transcript}\");\n        string outputFilePath = $\"{outputAudio.Id}.mp3\";\n        using (FileStream outputFileStream = File.OpenWrite(outputFilePath))\n        {\n            outputFileStream.Write(outputAudio.AudioBytes);\n        }\n\n        Console.WriteLine($\"Response audio written to file: {outputFilePath}\");\n        Console.WriteLine($\"Valid on followup requests until: {outputAudio.ExpiresAt}\");\n    }\n}\n\nPrintAudioContent();\n\n\u002F\u002F To refer to past audio output, create an assistant message from the earlier ChatCompletion, use the 
earlier\n\u002F\u002F response content part, or use ChatMessageContentPart.CreateAudioPart(string) to manually instantiate a part.\nmessages.Add(new AssistantChatMessage(completion));\nmessages.Add(\"Can you say that like a pirate?\");\n\ncompletion = client.CompleteChat(messages, options);\n\nPrintAudioContent();\n```\n\nStreaming is highly parallel: `StreamingChatCompletionUpdate` instances can include an `OutputAudioUpdate` that may\ncontain any of:\n\n- The `Id` of the streamed audio content, which can be referenced by subsequent `AssistantChatMessage` instances via `ChatAudioReference` once the streaming response is complete; this may appear across multiple `StreamingChatCompletionUpdate` instances but will always be the same value when present\n- The `ExpiresAt` value that describes when the `Id` will no longer be valid for use with `ChatAudioReference` in subsequent requests; this typically appears once and only once, in the final `StreamingOutputAudioUpdate`\n- Incremental `TranscriptUpdate` and\u002For `AudioBytesUpdate` values, which can be incrementally consumed and, when concatenated, form the complete audio transcript and audio output for the overall response; many of these typically appear\n\n## How to use responses with streaming and reasoning\n\n```C# Snippet:ReadMe_ResponsesStreaming\nResponsesClient client = new(\n    apiKey: Environment.GetEnvironmentVariable(\"OPENAI_API_KEY\"));\n\nCreateResponseOptions options = new()\n{\n    Model = \"gpt-5.1\",\n    ReasoningOptions = new ResponseReasoningOptions()\n    {\n        ReasoningEffortLevel = ResponseReasoningEffortLevel.High,\n    },\n};\n\noptions.InputItems.Add(ResponseItem.CreateUserMessageItem(\"What's the optimal strategy to win at poker?\"));\nResponseResult response = await client.CreateResponseAsync(options);\n\nCreateResponseOptions streamingOptions = new()\n{\n    Model = \"gpt-5.1\",\n    ReasoningOptions = new ResponseReasoningOptions()\n    {\n        ReasoningEffortLevel = 
ResponseReasoningEffortLevel.High,\n    },\n    StreamingEnabled = true,\n};\n\nstreamingOptions.InputItems.Add(ResponseItem.CreateUserMessageItem(\"What's the optimal strategy to win at poker?\"));\n\nawait foreach (StreamingResponseUpdate update\n    in client.CreateResponseStreamingAsync(streamingOptions))\n{\n    if (update is StreamingResponseOutputItemAddedUpdate itemUpdate\n        && itemUpdate.Item is ReasoningResponseItem reasoningItem)\n    {\n        Console.WriteLine($\"[Reasoning] ({reasoningItem.Status})\");\n    }\n    else if (update is StreamingResponseOutputItemAddedUpdate itemDone\n        && itemDone.Item is ReasoningResponseItem reasoningDone)\n    {\n        Console.WriteLine($\"[Reasoning DONE] ({reasoningDone.Status})\");\n    }\n    else if (update is StreamingResponseOutputTextDeltaUpdate delta)\n    {\n        Console.Write(delta.Delta);\n    }\n}\n```\n\n## How to use responses with file search\n\n```C# Snippet:ReadMe_ResponsesFileSearch\nResponsesClient client = new(Environment.GetEnvironmentVariable(\"OPENAI_API_KEY\"));\nstring vectorStoreId = \"vs-123\";\n\nResponseTool fileSearchTool\n    = ResponseTool.CreateFileSearchTool(vectorStoreIds: [vectorStoreId]);\n\nCreateResponseOptions options = new()\n{\n    Model = \"gpt-5.1\",\n    Tools = { fileSearchTool }\n};\n\noptions.InputItems.Add(ResponseItem.CreateUserMessageItem(\"According to available files, what's the secret number?\"));\nResponseResult response = await client.CreateResponseAsync(options);\n\nforeach (ResponseItem outputItem in response.OutputItems)\n{\n    if (outputItem is FileSearchCallResponseItem fileSearchCall)\n    {\n        Console.WriteLine($\"[file_search] ({fileSearchCall.Status}): {fileSearchCall.Id}\");\n        foreach (string query in fileSearchCall.Queries)\n        {\n            Console.WriteLine($\"  - {query}\");\n        }\n    }\n    else if (outputItem is MessageResponseItem message)\n    {\n        Console.WriteLine($\"[{message.Role}] 
{message.Content.FirstOrDefault()?.Text}\");\n    }\n}\n```\n\n## How to use responses with web search\n\n```C# Snippet:ReadMe_ResponsesWebSearch\nResponsesClient client = new(Environment.GetEnvironmentVariable(\"OPENAI_API_KEY\"));\n\nCreateResponseOptions options = new()\n{\n    Model = \"gpt-5.1\",\n    Tools = { ResponseTool.CreateWebSearchTool() },\n};\n\noptions.InputItems.Add(ResponseItem.CreateUserMessageItem(\"What's a happy news headline from today?\"));\nResponseResult response = await client.CreateResponseAsync(options);\n\nforeach (ResponseItem item in response.OutputItems)\n{\n    if (item is WebSearchCallResponseItem webSearchCall)\n    {\n        Console.WriteLine($\"[Web search invoked]({webSearchCall.Status}) {webSearchCall.Id}\");\n    }\n    else if (item is MessageResponseItem message)\n    {\n        Console.WriteLine($\"[{message.Role}] {message.Content?.FirstOrDefault()?.Text}\");\n    }\n}\n```\n\n## How to generate text embeddings\n\nIn this example, you want to create a trip-planning website that allows customers to write a prompt describing the kind of hotel that they are looking for and then offers hotel recommendations that closely match this description. To achieve this, it is possible to use text embeddings to measure the relatedness of text strings. In summary, you can get embeddings of the hotel descriptions, store them in a vector database, and use them to build a search index that you can query using the embedding of a given customer's prompt.\n\nTo generate a text embedding, use `EmbeddingClient` from the `OpenAI.Embeddings` namespace:\n\n```C# Snippet:ReadMe_Embeddings\nstring description = \"Best hotel in town if you like luxury hotels. They have an amazing infinity pool, a spa,\"\n    + \" and a really helpful concierge. The location is perfect -- right downtown, close to all the tourist\"\n    + \" attractions. 
We highly recommend this hotel.\";\n\nOpenAIEmbedding embedding = client.GenerateEmbedding(description);\nReadOnlyMemory\u003Cfloat> vector = embedding.ToFloats();\n```\n\nNotice that the resulting embedding is a list (also called a vector) of floating point numbers represented as an instance of `ReadOnlyMemory\u003Cfloat>`. By default, the length of the embedding vector will be 1536 when using the `text-embedding-3-small` model or 3072 when using the `text-embedding-3-large` model. Generally, larger embeddings perform better, but using them also tends to cost more in terms of compute, memory, and storage. You can reduce the dimensions of the embedding by creating an instance of the `EmbeddingGenerationOptions` class, setting the `Dimensions` property, and passing it as an argument in your call to the `GenerateEmbedding` method:\n\n```C# Snippet:ReadMe_Embeddings_WithDimensions\nstring description = \"Best hotel in town if you like luxury hotels.\";\nEmbeddingGenerationOptions options = new() { Dimensions = 512 };\n\nOpenAIEmbedding embedding = client.GenerateEmbedding(description, options);\n```\n\n## How to generate images\n\nIn this example, you want to build an app to help interior designers prototype new ideas based on the latest design trends. As part of the creative process, an interior designer can use this app to generate images for inspiration simply by describing the scene in their head as a prompt. As expected, high-quality, strikingly dramatic images with finer details deliver the best results for this application.\n\nTo generate an image, use `ImageClient` from the `OpenAI.Images` namespace:\n\n```C# Snippet:ReadMe_Images_CreateClient\nImageClient client = new(\"dall-e-3\", Environment.GetEnvironmentVariable(\"OPENAI_API_KEY\"));\n```\n\nGenerating an image always requires a `prompt` that describes what should be generated. 
To further tailor the image generation to your specific needs, you can create an instance of the `ImageGenerationOptions` class and set the `Quality`, `Size`, and `Style` properties accordingly. Note that you can also set the `ResponseFormat` property of `ImageGenerationOptions` to `GeneratedImageFormat.Bytes` in order to receive the resulting PNG as `BinaryData` (instead of the default remote `Uri`) if this is convenient for your use case.\n\n```C# Snippet:ReadMe_Images_Options\nstring prompt = \"The concept for a living room that blends Scandinavian simplicity with Japanese minimalism for\"\n    + \" a serene and cozy atmosphere. It's a space that invites relaxation and mindfulness, with natural light\"\n    + \" and fresh air. Using neutral tones, including colors like white, beige, gray, and black, that create a\"\n    + \" sense of harmony. Featuring sleek wood furniture with clean lines and subtle curves to add warmth and\"\n    + \" elegance. Plants and flowers in ceramic pots adding color and life to a space. They can serve as focal\"\n    + \" points, creating a connection with nature. Soft textiles and cushions in organic fabrics adding comfort\"\n    + \" and softness to a space. 
They can serve as accents, adding contrast and texture.\";\n\nImageGenerationOptions options = new()\n{\n    Quality = GeneratedImageQuality.High,\n    Size = GeneratedImageSize.W1792xH1024,\n    Style = GeneratedImageStyle.Vivid,\n    ResponseFormat = GeneratedImageFormat.Bytes\n};\n```\n\nFinally, call the `ImageClient`'s `GenerateImage` method by passing the prompt and the `ImageGenerationOptions` instance as arguments:\n\n```C# Snippet:ReadMe_Images_Generate\nGeneratedImage image = client.GenerateImage(prompt, options);\nBinaryData bytes = image.ImageBytes;\n```\n\nFor illustrative purposes, you could then save the generated image to local storage:\n\n```C# Snippet:ReadMe_Images_Save\nusing FileStream stream = File.OpenWrite($\"{Guid.NewGuid()}.png\");\nbytes.ToStream().CopyTo(stream);\n```\n\n## How to transcribe audio\n\nIn this example, an audio file is transcribed using the Whisper speech-to-text model, including both word- and audio-segment-level timestamp information.\n\n```C# Snippet:ReadMe_Audio_Transcribe\nAudioClient client = new(\"whisper-1\", Environment.GetEnvironmentVariable(\"OPENAI_API_KEY\"));\nstring audioFilePath = Path.Combine(\"Assets\", \"audio_houseplant_care.mp3\");\n\nAudioTranscriptionOptions options = new()\n{\n    ResponseFormat = AudioTranscriptionFormat.Verbose,\n    TimestampGranularities = AudioTimestampGranularities.Word | AudioTimestampGranularities.Segment,\n};\n\nAudioTranscription transcription = client.TranscribeAudio(audioFilePath, options);\n\nConsole.WriteLine(\"Transcription:\");\nConsole.WriteLine($\"{transcription.Text}\");\nConsole.WriteLine();\nConsole.WriteLine($\"Words:\");\n\nforeach (TranscribedWord word in transcription.Words)\n{\n    Console.WriteLine($\"  {word.Word,15} : {word.StartTime.TotalMilliseconds,5:0} - {word.EndTime.TotalMilliseconds,5:0}\");\n}\n\nConsole.WriteLine();\nConsole.WriteLine($\"Segments:\");\nforeach (TranscribedSegment segment in transcription.Segments)\n{\n    
Console.WriteLine($\"  {segment.Text,90} : {segment.StartTime.TotalMilliseconds,5:0} - {segment.EndTime.TotalMilliseconds,5:0}\");\n}\n```\n\n## How to use assistants with retrieval augmented generation (RAG)\n\nIn this example, you have a JSON document with the monthly sales information of different products, and you want to build an assistant capable of analyzing it and answering questions about it.\n\nTo achieve this, use both `OpenAIFileClient` from the `OpenAI.Files` namespace and `AssistantClient` from the `OpenAI.Assistants` namespace.\n\nImportant: The Assistants REST API is currently in beta. As such, the details are subject to change, and correspondingly the `AssistantClient` is attributed as `[Experimental]`. To use it, you must suppress the `OPENAI001` warning first.\n\n```C# Snippet:ReadMe_Assistants_CreateClients\nOpenAIClient openAIClient = new(Environment.GetEnvironmentVariable(\"OPENAI_API_KEY\"));\nOpenAIFileClient fileClient = openAIClient.GetOpenAIFileClient();\nAssistantClient assistantClient = openAIClient.GetAssistantClient();\n```\n\nHere is an example of what the JSON document might look like:\n\n```C# Snippet:ReadMe_Assistants_Document\nStream document = BinaryData.FromBytes(\"\"\"\n    {\n        \"description\": \"This document contains the sale history data for Contoso products.\",\n        \"sales\": [\n            {\n                \"month\": \"January\",\n                \"by_product\": {\n                    \"113043\": 15,\n                    \"113045\": 12,\n                    \"113049\": 2\n                }\n            },\n            {\n                \"month\": \"February\",\n                \"by_product\": {\n                    \"113045\": 22\n                }\n            },\n            {\n                \"month\": \"March\",\n                \"by_product\": {\n                    \"113045\": 16,\n                    \"113055\": 5\n                }\n            }\n        ]\n    }\n    
\"\"\"u8.ToArray()).ToStream();\n```\n\nUpload this document to OpenAI using the `OpenAIFileClient`'s `UploadFile` method, ensuring that you use `FileUploadPurpose.Assistants` to allow your assistant to access it later:\n\n```C# Snippet:ReadMe_Assistants_UploadFile\nOpenAIFile salesFile = fileClient.UploadFile(\n    document,\n    \"monthly_sales.json\",\n    FileUploadPurpose.Assistants);\n```\n\nCreate a new assistant using an instance of the `AssistantCreationOptions` class to customize it. Here, we use:\n\n- A friendly `Name` for the assistant, as will display in the Playground\n- Tool definition instances for the tools that the assistant should have access to; here, we use `FileSearchToolDefinition` to process the sales document we just uploaded and `CodeInterpreterToolDefinition` so we can analyze and visualize the numeric data\n- Resources for the assistant to use with its tools, here using the `VectorStoreCreationHelper` type to automatically make a new vector store that indexes the sales file; alternatively, you could use `VectorStoreClient` to manage the vector store separately\n\n```C# Snippet:ReadMe_Assistants_CreateAssistant\nAssistantCreationOptions assistantOptions = new()\n{\n    Name = \"Example: Contoso sales RAG\",\n    Instructions =\n        \"You are an assistant that looks up sales data and helps visualize the information based\"\n        + \" on user queries. When asked to generate a graph, chart, or other visualization, use\"\n        + \" the code interpreter tool to do so.\",\n    Tools =\n    {\n        new FileSearchToolDefinition(),\n        new CodeInterpreterToolDefinition(),\n    },\n    ToolResources = new()\n    {\n        FileSearch = new()\n        {\n            NewVectorStores =\n            {\n                new VectorStoreCreationHelper([salesFile.Id]),\n            }\n        }\n    },\n};\n\nAssistant assistant = assistantClient.CreateAssistant(\"gpt-5.1\", assistantOptions);\n```\n\nNext, create a new thread. 
For illustrative purposes, you could include an initial user message asking about the sales information of a given product and then use the `AssistantClient`'s `CreateThreadAndRun` method to get it started:\n\n```C# Snippet:ReadMe_Assistants_CreateThreadAndRun\nThreadCreationOptions threadOptions = new()\n{\n    InitialMessages = { \"How well did product 113045 sell in February? Graph its trend over time.\" }\n};\n\nThreadRun threadRun = assistantClient.CreateThreadAndRun(assistant.Id, threadOptions);\n```\n\nPoll the status of the run until it is no longer queued or in progress:\n\n```C# Snippet:ReadMe_Assistants_Poll\ndo\n{\n    Thread.Sleep(TimeSpan.FromSeconds(1));\n    threadRun = assistantClient.GetRun(threadRun.ThreadId, threadRun.Id);\n} while (!threadRun.Status.IsTerminal);\n```\n\nIf everything went well, the terminal status of the run will be `RunStatus.Completed`.\n\nFinally, you can use the `AssistantClient`'s `GetMessages` method to retrieve the messages associated with this thread, which now include the responses from the assistant to the initial user message.\n\nFor illustrative purposes, you could print the messages to the console and also save any images produced by the assistant to local storage:\n\n```C# Snippet:ReadMe_Assistants_GetMessages\nCollectionResult\u003CThreadMessage> messages\n    = assistantClient.GetMessages(threadRun.ThreadId, new MessageCollectionOptions() { Order = MessageCollectionOrder.Ascending });\n\nforeach (ThreadMessage message in messages)\n{\n    Console.Write($\"[{message.Role.ToString().ToUpper()}]: \");\n    foreach (MessageContent contentItem in message.Content)\n    {\n        if (!string.IsNullOrEmpty(contentItem.Text))\n        {\n            Console.WriteLine($\"{contentItem.Text}\");\n\n            if (contentItem.TextAnnotations.Count > 0)\n            {\n                Console.WriteLine();\n            }\n\n            \u002F\u002F Include annotations, if any.\n            foreach (TextAnnotation annotation 
in contentItem.TextAnnotations)\n            {\n                if (!string.IsNullOrEmpty(annotation.InputFileId))\n                {\n                    Console.WriteLine($\"* File citation, file ID: {annotation.InputFileId}\");\n                }\n                if (!string.IsNullOrEmpty(annotation.OutputFileId))\n                {\n                    Console.WriteLine($\"* File output, new file ID: {annotation.OutputFileId}\");\n                }\n            }\n        }\n        if (!string.IsNullOrEmpty(contentItem.ImageFileId))\n        {\n            OpenAIFile imageInfo = fileClient.GetFile(contentItem.ImageFileId);\n            BinaryData imageBytes = fileClient.DownloadFile(contentItem.ImageFileId);\n            using FileStream stream = File.OpenWrite($\"{imageInfo.Filename}.png\");\n            imageBytes.ToStream().CopyTo(stream);\n\n            Console.WriteLine($\"\u003Cimage: {imageInfo.Filename}.png>\");\n        }\n    }\n    Console.WriteLine();\n}\n```\n\nAnd it would yield something like this:\n\n```text\n[USER]: How well did product 113045 sell in February? 
Graph its trend over time.\n\n[ASSISTANT]: Product 113045 sold 22 units in February【4:0†monthly_sales.json】.\n\nNow, I will generate a graph to show its sales trend over time.\n\n* File citation, file ID: file-hGOiwGNftMgOsjbynBpMCPFn\n\n[ASSISTANT]: \u003Cimage: 015d8e43-17fe-47de-af40-280f25452280.png>\nThe sales trend for Product 113045 over the past three months shows that:\n\n- In January, 12 units were sold.\n- In February, 22 units were sold, indicating significant growth.\n- In March, sales dropped slightly to 16 units.\n\nThe graph above visualizes this trend, showing a peak in sales during February.\n```\n\n## How to use assistants with streaming and vision\n\nThis example shows how to use the v2 Assistants API to provide image data to an assistant and then stream the run's response.\n\nAs before, you will use an `OpenAIFileClient` and an `AssistantClient`:\n\n```C# Snippet:ReadMe_Assistants_Vision_CreateClients\nOpenAIClient openAIClient = new(Environment.GetEnvironmentVariable(\"OPENAI_API_KEY\"));\nOpenAIFileClient fileClient = openAIClient.GetOpenAIFileClient();\nAssistantClient assistantClient = openAIClient.GetAssistantClient();\n```\n\nFor this example, we will use both image data from a local file and an image located at a URL. 
For the local data, we upload the file with the `Vision` upload purpose, which would also allow it to be downloaded and retrieved later.\n\n```C# Snippet:ReadMe_Assistants_Vision_UploadImage\nOpenAIFile pictureOfAppleFile = fileClient.UploadFile(\n    Path.Combine(\"Assets\", \"images_apple.png\"),\n    FileUploadPurpose.Vision);\n\nUri linkToPictureOfOrange = new(\"https:\u002F\u002Fraw.githubusercontent.com\u002Fopenai\u002Fopenai-dotnet\u002Frefs\u002Fheads\u002Fmain\u002Fexamples\u002FAssets\u002Fimages_orange.png\");\n```\n\nNext, create a new assistant with a vision-capable model like `gpt-4o` and a thread with the image information referenced:\n\n```C# Snippet:ReadMe_Assistants_Vision_CreateAssistantAndThread\nAssistant assistant = assistantClient.CreateAssistant(\n    \"gpt-5.1\",\n    new AssistantCreationOptions()\n    {\n        Instructions = \"When asked a question, attempt to answer very concisely. \"\n            + \"Prefer one-sentence answers whenever feasible.\"\n    });\n\nAssistantThread thread = assistantClient.CreateThread(new ThreadCreationOptions()\n{\n    InitialMessages =\n        {\n            new ThreadInitializationMessage(\n                OpenAI.Assistants.MessageRole.User,\n                [\n                    \"Hello, assistant! Please compare these two images for me:\",\n                    MessageContent.FromImageFileId(pictureOfAppleFile.Id),\n                    MessageContent.FromImageUri(linkToPictureOfOrange),\n                ]),\n        }\n});\n```\n\nWith the assistant and thread prepared, use the `CreateRunStreaming` method to get an enumerable `CollectionResult\u003CStreamingUpdate>`. You can then iterate over this collection with `foreach`. For async calling patterns, use `CreateRunStreamingAsync` and iterate over the `AsyncCollectionResult\u003CStreamingUpdate>` with `await foreach`, instead. 
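\n\nFor example, the async variant might be consumed like this (a sketch, assuming the `thread` and `assistant` created above):\n\n```csharp\nAsyncCollectionResult\u003CStreamingUpdate> streamingUpdates = assistantClient.CreateRunStreamingAsync(\n    thread.Id,\n    assistant.Id);\n\nawait foreach (StreamingUpdate streamingUpdate in streamingUpdates)\n{\n    \u002F\u002F Each update reports which streaming event it represents.\n    Console.WriteLine(streamingUpdate.UpdateKind);\n}\n```\n\n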
Note that streaming variants also exist for `CreateThreadAndRunStreaming` and `SubmitToolOutputsToRunStreaming`.\n\n```C# Snippet:ReadMe_Assistants_Vision_CreateRunStreaming\nCollectionResult\u003CStreamingUpdate> streamingUpdates = assistantClient.CreateRunStreaming(\n    thread.Id,\n    assistant.Id,\n    new RunCreationOptions()\n    {\n        AdditionalInstructions = \"When possible, try to sneak in puns if you're asked to compare things.\",\n    });\n```\n\nFinally, to handle the `StreamingUpdates` as they arrive, you can use the `UpdateKind` property on the base `StreamingUpdate` and\u002For downcast to a specifically desired update type, like `MessageContentUpdate` for `thread.message.delta` events or `RequiredActionUpdate` for streaming tool calls.\n\n```C# Snippet:ReadMe_Assistants_Vision_HandleStreamingUpdates\nforeach (StreamingUpdate streamingUpdate in streamingUpdates)\n{\n    if (streamingUpdate.UpdateKind == StreamingUpdateReason.RunCreated)\n    {\n        Console.WriteLine($\"--- Run started! ---\");\n    }\n    if (streamingUpdate is MessageContentUpdate contentUpdate)\n    {\n        Console.Write(contentUpdate.Text);\n    }\n}\n```\n\nThis will yield streamed output from the run like the following:\n\n```text\n--- Run started! ---\nThe first image depicts a multicolored apple with a blend of red and green hues, while the second image shows an orange with a bright, textured orange peel; one might say it’s comparing apples to oranges!\n```\n## How to work with Azure OpenAI\n\nSwitching from OpenAI to Azure OpenAI is simple, and in most cases requires little to no code changes. To get started quickly, check out the starter kit at https:\u002F\u002Faka.ms\u002Fopenai\u002Fstart. 
If you want to understand how endpoint switching works, you can also read: https:\u002F\u002Faka.ms\u002Fopenai\u002Fswitch.\n\n### Secure Access with Microsoft Entra ID (No API Keys)\nThe starter kit includes examples showing how to call Azure OpenAI securely using Microsoft Entra ID instead of API keys. This is the recommended approach for production scenarios. Here’s a direct link to the .NET sample using Entra ID in the starter kit: https:\u002F\u002Fgithub.com\u002FAzure-Samples\u002Fazure-openai-starter\u002Fblob\u002Fmain\u002Fsrc\u002Fdotnet\u002Fresponses_example_entra.cs\n\nBelow is the core pattern using the OpenAI SDK for .NET with Azure OpenAI + Entra ID:\n\n```C# Snippet:ReadMe_AzureOpenAI\nvar endpoint = Environment.GetEnvironmentVariable(\"AZURE_OPENAI_ENDPOINT\")\n    ?? throw new InvalidOperationException(\"AZURE_OPENAI_ENDPOINT is required.\");\n\nvar client = new ResponsesClient(\n    new BearerTokenPolicy(new DefaultAzureCredential(), \"https:\u002F\u002Fai.azure.com\u002F.default\"),\n    new OpenAIClientOptions { Endpoint = new Uri($\"{endpoint}\u002Fopenai\u002Fv1\u002F\") }\n);\n\nvar response = await client.CreateResponseAsync(\"gpt-5-mini\", \"Hello world!\");\nConsole.WriteLine(response.Value.GetOutputText());\n```\n\n### Why this works\n\n- One OpenAI SDK: You use the official OpenAI SDK for .NET. 
Azure OpenAI is just a different endpoint you point the client library to.\n- Unified \u002Fopenai\u002Fv1\u002F endpoint: Azure OpenAI uses the same path shape as OpenAI, so most client code can stay unchanged.\n- Enterprise-ready auth: Azure Identity SDK with Microsoft Entra ID lets you access Azure OpenAI without storing secrets.\n- Drop‑in model switching: Swap \"gpt-5-mini\" or any other model as long as the Azure model deployment has the same name as the model.\n\n## Advanced scenarios\n\n### Using protocol methods\n\nIn addition to the client methods that use strongly-typed request and response objects, the .NET library also provides _protocol methods_ that enable more direct access to the REST API. Protocol methods are \"binary in, binary out\" accepting `BinaryContent` as request bodies and providing `BinaryData` as response bodies.\n\nFor example, to use the protocol method variant of the `ChatClient`'s `CompleteChat` method, pass the request body as `BinaryContent`:\n\n```C# Snippet:ReadMe_ProtocolMethods\nBinaryData input = BinaryData.FromBytes(\"\"\"\n    {\n       \"model\": \"gpt-5.1\",\n       \"messages\": [\n           {\n               \"role\": \"user\",\n               \"content\": \"Say 'this is a test.'\"\n           }\n       ]\n    }\n    \"\"\"u8.ToArray());\n\nusing BinaryContent content = BinaryContent.Create(input);\nClientResult result = client.CompleteChat(content);\nBinaryData output = result.GetRawResponse().Content;\n\nusing JsonDocument outputAsJson = JsonDocument.Parse(output.ToString());\nstring message = outputAsJson.RootElement\n    .GetProperty(\"choices\")[0]\n    .GetProperty(\"message\")\n    .GetProperty(\"content\")\n    .GetString();\n\nConsole.WriteLine($\"[ASSISTANT]: {message}\");\n```\n\nNotice how you can then call the resulting `ClientResult`'s `GetRawResponse` method and retrieve the response body as `BinaryData` via the `PipelineResponse`'s `Content` property.\n\n### Mock a client for testing\n\nThe OpenAI .NET 
library has been designed to support mocking, providing key features such as:\n\n- Client methods made virtual to allow overriding.\n- Model factories to assist in instantiating API output models that lack public constructors.\n\nTo illustrate how mocking works, suppose you want to validate the behavior of the following method using the [Moq](https:\u002F\u002Fgithub.com\u002Fdevlooped\u002Fmoq) library. Given the path to an audio file, it determines whether it contains a specified secret word:\n\n```C# Snippet:ReadMe_Mocking_MethodUnderTest\nbool ContainsSecretWord(AudioClient client, string audioFilePath, string secretWord)\n{\n    AudioTranscription transcription = client.TranscribeAudio(audioFilePath);\n    return transcription.Text.Contains(secretWord);\n}\n```\n\nCreate mocks of `AudioClient` and `ClientResult\u003CAudioTranscription>`, set up methods and properties that will be invoked, then test the behavior of the `ContainsSecretWord` method. Since the `AudioTranscription` class does not provide public constructors, it must be instantiated by the `OpenAIAudioModelFactory` static class:\n\n```C# Snippet:ReadMe_Mocking_Test\n\u002F\u002F Instantiate mocks and the AudioTranscription object.\nMock\u003CAudioClient> mockClient = new();\nAudioTranscription transcription = OpenAIAudioModelFactory.AudioTranscription(text: \"I swear I saw an apple flying yesterday!\");\n\n\u002F\u002F Set up mocks' properties and methods.\nmockClient\n    .Setup(client => client.TranscribeAudio(It.IsAny\u003Cstring>()))\n    .Returns(ClientResult.FromValue(transcription, Mock.Of\u003CPipelineResponse>()));\n\n\u002F\u002F Perform validation.\nAudioClient client = mockClient.Object;\nbool containsSecretWord = ContainsSecretWord(client, \"\u003CaudioFilePath>\", \"apple\");\n\nAssert.That(containsSecretWord, Is.True);\n```\n\nAll namespaces have their corresponding model factory to support mocking with the exception of the `OpenAI.Assistants` and `OpenAI.VectorStores` namespaces, for 
which model factories are coming soon.\n\n### Automatically retrying errors\n\nBy default, the client classes will automatically retry the following errors up to three additional times using exponential backoff:\n\n- 408 Request Timeout\n- 429 Too Many Requests\n- 500 Internal Server Error\n- 502 Bad Gateway\n- 503 Service Unavailable\n- 504 Gateway Timeout\n\n### Observability\n\nOpenAI .NET library supports experimental distributed tracing and metrics with OpenTelemetry. Check out [Observability with OpenTelemetry](.\u002Fdocs\u002FObservability.md) for more details.\n","# OpenAI .NET API 库\n\n[![NuGet 稳定版本](https:\u002F\u002Fimg.shields.io\u002Fnuget\u002Fv\u002Fopenai.svg)](https:\u002F\u002Fwww.nuget.org\u002Fpackages\u002FOpenAI)\n\nOpenAI .NET 库为 .NET 应用程序提供了便捷的 OpenAI REST API 访问接口。该库是根据我们的 [OpenAPI 规范](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-openapi)，并与 Microsoft 合作生成的。\n\n## 目录\n\n- [快速入门](#快速入门)\n  - [先决条件](#先决条件)\n  - [安装 NuGet 包](#安装-nuget-包)\n- [使用客户端库](#使用客户端库)\n  - [命名空间组织](#命名空间组织)\n  - [使用异步 API](#使用异步-api)\n  - [使用 `OpenAIClient` 类](#使用-the-openaiclient-class)\n- [如何使用依赖注入](#如何使用依赖注入)\n- [如何使用带有流式传输的聊天完成](#如何使用-chat-completions-with-streaming)\n- [如何使用带有工具和函数调用的聊天完成](#如何-use-chat-completions-with-tools-and-function-calling)\n- [如何使用带有结构化输出的聊天完成](#如何-use-chat-completions-with-structured-outputs)\n- [如何使用带有音频的聊天完成](#如何-use-chat-completions-with-audio)\n- [如何使用带有流式传输和推理的响应](#如何-use-responses-with-streaming-and-reasoning)\n- [如何使用带有文件搜索的响应](#如何-use-responses-with-file-search)\n- [如何使用带有网络搜索的响应](#如何-use-responses-with-web-search)\n- [如何生成文本嵌入](#如何-generate-text-embeddings)\n- [如何生成图像](#如何-generate-images)\n- [如何转录音频](#如何-transcribe-audio)\n- [如何使用带有检索增强生成 (RAG) 的助手](#how-to-use-assistants-with-retrieval-augmented-generation-rag)\n- [如何使用带有流式传输和视觉功能的助手](#how-to-use-assistants-with-streaming-and-vision)\n- [如何与 Azure OpenAI 集成](#how-to-work-with-azure-openai)\n- [高级场景](#advanced-scenarios)\n  - [使用协议方法](#using-protocol-methods)\n  - 
[为测试模拟客户端](#mock-a-client-for-testing)\n  - [自动重试错误](#automatically-retrying-errors)\n  - [可观测性](#observability)\n\n## 快速入门\n\n### 先决条件\n\n要调用 OpenAI REST API，您需要一个 API 密钥。要获取 API 密钥，请先 [创建一个新的 OpenAI 账户](https:\u002F\u002Fplatform.openai.com\u002Fsignup) 或 [登录](https:\u002F\u002Fplatform.openai.com\u002Flogin)。然后，前往 [API 密钥页面](https:\u002F\u002Fplatform.openai.com\u002Faccount\u002Fapi-keys)，选择“创建新密钥”，并可为密钥命名。请务必将您的 API 密钥保存在安全的地方，并且不要与任何人共享。\n\n### 安装 NuGet 包\n\n通过 IDE 或在 .NET CLI 中运行以下命令，将客户端库添加到您的 .NET 项目中：\n\n```cli\ndotnet add package OpenAI\n```\n\n请注意，下面包含的代码示例是使用 [.NET 10](https:\u002F\u002Fdotnet.microsoft.com\u002Fdownload\u002Fdotnet\u002F10.0) 编写的。OpenAI .NET 库兼容所有 .NET Standard 2.0 应用程序，但本文档中部分代码示例所使用的语法可能依赖于较新的语言特性。\n\n## 使用客户端库\n\n该库的完整 API 可以在 [OpenAI.netstandard2.0.cs](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fblob\u002Fmain\u002Fapi\u002FOpenAI.netstandard2.0.cs) 文件中找到，并且有许多 [代码示例](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Ftree\u002Fmain\u002Fexamples) 可供参考。例如，以下代码片段展示了聊天完成 API 的基本用法：\n\n```C# Snippet:ReadMe_ChatCompletion_Basic\nChatClient client = new(model: \"gpt-5.1\", apiKey: Environment.GetEnvironmentVariable(\"OPENAI_API_KEY\"));\n\nChatCompletion completion = client.CompleteChat(\"说‘这是一个测试。’\");\nConsole.WriteLine($\"[助理]: {completion.Content[0].Text}\");\n```\n\n虽然您可以直接将 API 密钥作为字符串传递，但我们强烈建议将其存储在安全位置，并像上面示例那样通过环境变量或配置文件来访问，以避免将其存储在源代码控制中。\n\n### 使用自定义基础 URL 和 API 密钥\n\n如果您需要连接到其他 API 端点（例如代理或自托管的 OpenAI 兼容大模型），可以使用 `ApiKeyCredential` 和 `OpenAIClientOptions` 指定自定义的基础 URL 和 API 密钥：\n\n```C# Snippet:ReadMe_CustomUrl\nChatClient client = new(\n    model: \"MODEL_NAME\",\n    credential: new ApiKeyCredential(Environment.GetEnvironmentVariable(\"OPENAI_API_KEY\")),\n    options: new OpenAIClientOptions()\n    {\n        Endpoint = new Uri(\"https:\u002F\u002FYOUR_BASE_URL\")\n    });\n```\n\n将 `MODEL_NAME` 替换为您的模型名称，将 `YOUR_BASE_URL` 替换为您的端点 URI。这在使用 OpenAI 兼容 API 或自定义部署时非常有用。\n\n### 命名空间组织\n\n该库按照 OpenAI REST 
API 中的功能区域划分为不同的命名空间。每个命名空间都包含相应的客户端类。\n\n| 命名空间                     | 客户端类                 |\n| ------------------------------|------------------------------|\n| `OpenAI.Assistants`           | `AssistantClient`            |\n| `OpenAI.Audio`                | `AudioClient`                |\n| `OpenAI.Batch`                | `BatchClient`                |\n| `OpenAI.Chat`                 | `ChatClient`                 |\n| `OpenAI.Embeddings`           | `EmbeddingClient`            |\n| `OpenAI.Evals`                | `EvaluationClient`           |\n| `OpenAI.FineTuning`           | `FineTuningClient`           |\n| `OpenAI.Files`                | `OpenAIFileClient`           |\n| `OpenAI.Images`               | `ImageClient`                |\n| `OpenAI.Models`               | `OpenAIModelClient`          |\n| `OpenAI.Moderations`          | `ModerationClient`           |\n| `OpenAI.Realtime`             | `RealtimeClient`             |\n| `OpenAI.Responses`            | `ResponsesClient`            |\n| `OpenAI.VectorStores`         | `VectorStoreClient`          |\n\n### 使用异步 API\n\n每个执行同步 API 调用的客户端方法，在同一客户端类中都有对应的异步变体。例如，`ChatClient` 的 `CompleteChat` 方法的异步变体是 `CompleteChatAsync`。要使用异步版本重写上述调用，只需对相应的异步方法使用 `await` 即可：\n\n```C# Snippet:ReadMe_ChatCompletion_Async\nChatCompletion completion = await client.CompleteChatAsync(\"说‘这是一个测试。’\");\n```\n\n### 使用 `OpenAIClient` 类\n\n除了上述命名空间之外，还有一个父级的 `OpenAI` 命名空间：\n\n```csharp\nusing OpenAI;\n```\n\n该命名空间包含 `OpenAIClient` 类，当您需要同时使用多个功能模块的客户端时，它能提供一些便利。具体来说，您可以使用此客户端的实例来创建其他客户端，并让它们共享相同的实现细节，这样可能会更高效。\n\n您可以通过指定所有客户端都将用于身份验证的 API 密钥来创建一个 `OpenAIClient`：\n\n```C# Snippet:ReadMe_OpenAIClient_Create\nOpenAIClient client = new(Environment.GetEnvironmentVariable(\"OPENAI_API_KEY\"));\n```\n\n接下来，例如要创建一个 `AudioClient` 实例，您可以调用 `OpenAIClient` 的 `GetAudioClient` 方法，并传入 `AudioClient` 将使用的 OpenAI 模型，就像直接使用 `AudioClient` 构造函数一样。如果需要，您还可以创建更多相同类型的客户端，以针对不同的模型。\n\n```C# Snippet:ReadMe_OpenAIClient_GetAudioClient\nAudioClient ttsClient = 
client.GetAudioClient(\"tts-1\");\nAudioClient whisperClient = client.GetAudioClient(\"whisper-1\");\n```\n\n## 如何使用依赖注入\n\nOpenAI 客户端是线程安全的，可以安全地作为单例注册到 ASP.NET Core 的依赖注入容器中。这样做可以最大化资源效率并重用 HTTP 连接。\n\n在您的 `Program.cs` 中将 `ChatClient` 注册为单例：\n\n```C# Snippet:ReadMe_DependencyInjection_Register\nbuilder.Services.AddSingleton\u003CChatClient>(serviceProvider =>\n{\n    var apiKey = Environment.GetEnvironmentVariable(\"OPENAI_API_KEY\");\n    var model = \"gpt-5.1\";\n\n    return new ChatClient(model, apiKey);\n});\n```\n\n然后在您的控制器或服务中注入并使用该客户端：\n\n```C# Snippet:ReadMe_DependencyInjection_Controller\n[ApiController]\n[Route(\"api\u002F[controller]\")]\npublic class ChatController : ControllerBase\n{\n    private readonly ChatClient _chatClient;\n\n    public ChatController(ChatClient chatClient)\n    {\n        _chatClient = chatClient;\n    }\n\n    [HttpPost(\"complete\")]\n    public async Task\u003CIActionResult> CompleteChat([FromBody] string message)\n    {\n        ChatCompletion completion = await _chatClient.CompleteChatAsync(message);\n        return Ok(new { response = completion.Content[0].Text });\n    }\n}\n```\n\n## 如何使用流式聊天补全\n\n当您请求聊天补全时，默认行为是服务器会先完整生成结果，然后再将其作为一个响应一次性返回。因此，对于较长的聊天补全，可能需要等待几秒钟才能收到服务器的回复。为了解决这个问题，OpenAI REST API 支持在生成过程中逐步流式返回部分结果，这样您可以在补全尚未完成时就开始处理其开头部分。\n\n客户端库提供了一种便捷的方式来处理流式聊天补全。如果您想用流式方式重写上一节中的示例，请不要调用 `ChatClient` 的 `CompleteChat` 方法，而是改用 `CompleteChatStreaming` 方法：\n\n```C# Snippet:ReadMe_Streaming_Sync\nCollectionResult\u003CStreamingChatCompletionUpdate> completionUpdates = client.CompleteChatStreaming(\"Say 'this is a test.'\");\n```\n\n请注意，返回值是一个 `CollectionResult\u003CStreamingChatCompletionUpdate>` 实例，您可以对其进行枚举，以便在流式响应块到达时逐个处理：\n\n```C# Snippet:ReadMe_Streaming_Enumerate\nCollectionResult\u003CStreamingChatCompletionUpdate> completionUpdates = client.CompleteChatStreaming(\"Say 'this is a test.'\");\n\nConsole.Write($\"[ASSISTANT]: \");\nforeach (StreamingChatCompletionUpdate completionUpdate in completionUpdates)\n{\n   
 if (completionUpdate.ContentUpdate.Count > 0)\n    {\n        Console.Write(completionUpdate.ContentUpdate[0].Text);\n    }\n}\n```\n\n或者，您也可以异步地执行此操作：调用 `CompleteChatStreamingAsync` 方法获取一个 `AsyncCollectionResult\u003CStreamingChatCompletionUpdate>`，然后使用 `await foreach` 来对其进行枚举：\n\n```C# Snippet:ReadMe_Streaming_Async\nAsyncCollectionResult\u003CStreamingChatCompletionUpdate> completionUpdates = client.CompleteChatStreamingAsync(\"Say 'this is a test.'\");\n\nConsole.Write($\"[ASSISTANT]: \");\nawait foreach (StreamingChatCompletionUpdate completionUpdate in completionUpdates)\n{\n    if (completionUpdate.ContentUpdate.Count > 0)\n    {\n        Console.Write(completionUpdate.ContentUpdate[0].Text);\n    }\n}\n```\n\n## 如何在聊天补全中使用工具和函数调用\n\n在本示例中，您有两个函数。第一个函数可以获取用户的当前地理位置（例如，通过轮询用户设备的位置服务 API），而第二个函数可以根据给定位置查询天气情况（例如，通过调用某个第三方天气服务的 API）。您希望模型能够在认为有必要获取这些信息以响应用户请求时，调用这些函数，从而生成聊天补全。为了便于说明，我们考虑以下代码：\n\n```C# Snippet:ReadMe_Tools_Functions\nstatic string GetCurrentLocation()\n{\n    \u002F\u002F 在此处调用位置 API。\n    return \"旧金山\";\n}\n\nstatic string GetCurrentWeather(string location, string unit = \"celsius\")\n{\n    \u002F\u002F 在此处调用天气 API。\n    return $\"31 {unit}\";\n}\n```\n\n首先，使用静态的 `CreateFunctionTool` 方法创建两个 `ChatTool` 实例来描述每个函数：\n\n```C# Snippet:ReadMe_Tools_Definitions\nChatTool getCurrentLocationTool = ChatTool.CreateFunctionTool(\n    functionName: nameof(GetCurrentLocation),\n    functionDescription: \"获取用户的当前位置\"\n);\n\nChatTool getCurrentWeatherTool = ChatTool.CreateFunctionTool(\n    functionName: nameof(GetCurrentWeather),\n    functionDescription: \"获取给定位置的当前天气\",\n    functionParameters: BinaryData.FromBytes(\"\"\"\n        {\n            \"type\": \"object\",\n            \"properties\": {\n                \"location\": {\n                    \"type\": \"string\",\n                    \"description\": \"城市和州，例如波士顿，马萨诸塞州\"\n                },\n                \"unit\": {\n                    \"type\": \"string\",\n                    \"enum\": [ 
\"celsius\", \"fahrenheit\" ],\n                    \"description\": \"要使用的温度单位。可根据指定位置推断。\"\n                }\n            },\n            \"required\": [ \"location\" ]\n        }\n        \"\"\"u8.ToArray())\n);\n```\n\n接下来，创建一个 `ChatCompletionOptions` 实例，并将这两个工具添加到其 `Tools` 属性中。您将在调用 `ChatClient` 的 `CompleteChat` 方法时，将 `ChatCompletionOptions` 作为参数传递。\n\n```C# Snippet:ReadMe_Tools_Options\nList\u003CChatMessage> messages =\n[\n    new UserChatMessage(\"今天天气怎么样？\"),\n];\n\nChatCompletionOptions options = new()\n{\n    Tools = { getCurrentLocationTool, getCurrentWeatherTool },\n};\n```\n\n当生成的 `ChatCompletion` 的 `FinishReason` 属性等于 `ChatFinishReason.ToolCalls` 时，这意味着模型已经确定在助手能够做出适当回应之前，必须调用一个或多个工具。在这种情况下，您需要先调用 `ChatCompletion` 中 `ToolCalls` 指定的函数，然后再次调用 `ChatClient` 的 `CompleteChat` 方法，并将函数的结果作为附加的 `ToolChatMessage` 传递。根据需要重复此过程。\n\n```C# Snippet:ReadMe_Tools_Loop\nbool requiresAction;\n\ndo\n{\n    requiresAction = false;\n    ChatCompletion completion = client.CompleteChat(messages, options);\n\n    switch (completion.FinishReason)\n    {\n        case ChatFinishReason.Stop:\n        {\n            \u002F\u002F 将助手消息添加到对话历史中。\n            messages.Add(new AssistantChatMessage(completion));\n            break;\n        }\n\n        case ChatFinishReason.ToolCalls:\n        {\n            \u002F\u002F 首先，将包含工具调用的助手消息添加到对话历史中。\n            messages.Add(new AssistantChatMessage(completion));\n\n            \u002F\u002F 然后，为每个已解析的工具调用添加一条新的工具消息。\n            foreach (ChatToolCall toolCall in completion.ToolCalls)\n            {\n                switch (toolCall.FunctionName)\n                {\n                    case nameof(GetCurrentLocation):\n                        {\n                            string toolResult = GetCurrentLocation();\n                            messages.Add(new ToolChatMessage(toolCall.Id, toolResult));\n                            break;\n                        }\n\n                    case nameof(GetCurrentWeather):\n            
            {\n                            \u002F\u002F 模型希望用于调用函数的参数是以字符串形式表示的 JSON 对象，基于工具定义中指定的模式。需要注意的是，模型可能会“幻觉”出无效的参数。因此，在调用函数之前，务必进行适当的解析和验证。\n                            using JsonDocument argumentsJson = JsonDocument.Parse(toolCall.FunctionArguments);\n                            bool hasLocation = argumentsJson.RootElement.TryGetProperty(\"location\", out JsonElement location);\n                            bool hasUnit = argumentsJson.RootElement.TryGetProperty(\"unit\", out JsonElement unit);\n\n                            if (!hasLocation)\n                            {\n                                throw new ArgumentNullException(nameof(location), \"缺少位置参数。\");\n                            }\n\n                            string toolResult = hasUnit\n                                ? GetCurrentWeather(location.GetString(), unit.GetString())\n                                : GetCurrentWeather(location.GetString());\n                            messages.Add(new ToolChatMessage(toolCall.Id, toolResult));\n                            break;\n                        }\n\n                    default:\n                        {\n                            \u002F\u002F 处理其他未预期的调用。\n                            throw new NotImplementedException();\n                        }\n                }\n            }\n\n            requiresAction = true;\n            break;\n        }\n\n        case ChatFinishReason.Length:\n            throw new NotImplementedException(\"由于 MaxTokens 参数或令牌限制超出而导致模型输出不完整。\");\n\n        case ChatFinishReason.ContentFilter:\n            throw new NotImplementedException(\"因内容过滤器标记而省略了部分内容。\");\n\n        case ChatFinishReason.FunctionCall:\n            throw new NotImplementedException(\"已被工具调用取代，现已弃用。\");\n\n        default:\n            throw new NotImplementedException(completion.FinishReason.ToString());\n    }\n} while (requiresAction);\n```\n\n## 如何在聊天补全中使用结构化输出\n\n自 `gpt-4o-mini`、`gpt-4o-mini-2024-07-18` 和 `gpt-4o-2024-08-06` 
模型快照起，聊天补全和助手 API 中的顶级响应内容以及工具调用均支持结构化输出。有关该功能的信息，请参阅 [结构化输出指南](https:\u002F\u002Fplatform.openai.com\u002Fdocs\u002Fguides\u002Fstructured-outputs)。\n\n要使用结构化输出来约束聊天补全文本内容，请按照以下示例设置适当的 `ChatResponseFormat`：\n\n```C# Snippet:ReadMe_StructuredOutputs\nList\u003CChatMessage> messages =\n[\n    new UserChatMessage(\"如何解方程 8x + 7 = -23？\"),\n];\n\nChatCompletionOptions options = new()\n{\n    ResponseFormat = ChatResponseFormat.CreateJsonSchemaFormat(\n        jsonSchemaFormatName: \"math_reasoning\",\n        jsonSchema: BinaryData.FromBytes(\"\"\"\n            {\n                \"type\": \"object\",\n                \"properties\": {\n                    \"steps\": {\n                        \"type\": \"array\",\n                        \"items\": {\n                            \"type\": \"object\",\n                            \"properties\": {\n                                \"explanation\": { \"type\": \"string\" },\n                                \"output\": { \"type\": \"string\" }\n                            },\n                            \"required\": [\"explanation\", \"output\"],\n                            \"additionalProperties\": false\n                        }\n                    },\n                    \"final_answer\": { \"type\": \"string\" }\n                },\n                \"required\": [\"steps\", \"final_answer\"],\n                \"additionalProperties\": false\n            }\n            \"\"\"u8.ToArray()),\n        jsonSchemaIsStrict: true)\n};\n\nChatCompletion completion = client.CompleteChat(messages, options);\n\nusing JsonDocument structuredJson = JsonDocument.Parse(completion.Content[0].Text);\n\nConsole.WriteLine($\"最终答案：{structuredJson.RootElement.GetProperty(\"final_answer\")}\");\nConsole.WriteLine(\"推理步骤：\");\n\nforeach (JsonElement stepElement in structuredJson.RootElement.GetProperty(\"steps\").EnumerateArray())\n{\n    Console.WriteLine($\"  - 解释：{stepElement.GetProperty(\"explanation\")}\");\n    
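\u002F\u002F 根据上方 JSON 模式，explanation 与 output 均为必填（required）字段，此处可直接读取。\n    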
Console.WriteLine($\"    结果：{stepElement.GetProperty(\"output\")}\");\n}\n```\n\n## 如何在聊天补全中使用音频\n\n自 `gpt-4o-audio-preview` 模型起，聊天补全可以处理音频输入和输出。\n\n此示例演示：\n  1. 使用支持的 `gpt-4o-audio-preview` 模型配置客户端\n  1. 在聊天补全请求中提供用户音频输入\n  1. 请求聊天补全操作返回模型音频输出\n  1. 从 `ChatCompletion` 实例中获取音频输出\n  1. 将之前的音频输出用作对话历史中的 `ChatMessage`\n\n```C# Snippet:ReadMe_ChatAudio\n\u002F\u002F 聊天音频输入和输出仅在特定模型上受支持，自 gpt-4o-audio-preview 开始\nChatClient client = new(\"gpt-4o-audio-preview\", Environment.GetEnvironmentVariable(\"OPENAI_API_KEY\"));\n\n\u002F\u002F 通过将音频内容部分添加到用户消息中，可向请求提供输入音频\nstring audioFilePath = Path.Combine(\"Assets\", \"realtime_whats_the_weather_pcm16_24khz_mono.wav\");\nbyte[] audioFileRawBytes = File.ReadAllBytes(audioFilePath);\nBinaryData audioData = BinaryData.FromBytes(audioFileRawBytes);\n\nList\u003CChatMessage> messages =\n[\n    new UserChatMessage(ChatMessageContentPart.CreateInputAudioPart(audioData, ChatInputAudioFormat.Wav)),\n];\n\n\u002F\u002F 通过配置 `ChatCompletionOptions` 以包含适当的 `ResponseModalities` 值及相应的 `AudioOptions`，即可请求输出音频。\nChatCompletionOptions options = new()\n{\n    ResponseModalities = ChatResponseModalities.Text | ChatResponseModalities.Audio,\n    AudioOptions = new(ChatOutputAudioVoice.Alloy, ChatOutputAudioFormat.Mp3),\n};\n\nChatCompletion completion = client.CompleteChat(messages, options);\n\nvoid PrintAudioContent()\n{\n    if (completion.OutputAudio is ChatOutputAudio outputAudio)\n    {\n        Console.WriteLine($\"响应音频转录：{outputAudio.Transcript}\");\n        string outputFilePath = $\"{outputAudio.Id}.mp3\";\n        using (FileStream outputFileStream = File.OpenWrite(outputFilePath))\n        {\n            outputFileStream.Write(outputAudio.AudioBytes);\n        }\n\n        Console.WriteLine($\"响应音频已写入文件：{outputFilePath}\");\n        Console.WriteLine($\"可在后续请求中有效至：{outputAudio.ExpiresAt}\");\n    }\n}\n\nPrintAudioContent();\n\n\u002F\u002F 若要引用之前的音频输出，可从先前的 `ChatCompletion` 创建助手消息，使用先前的响应内容部分，或使用 `ChatMessageContentPart.CreateAudioPart(string)` 
手动实例化一个部分。\nmessages.Add(new AssistantChatMessage(completion));\nmessages.Add(\"你能用海盗口吻再说一遍吗？\");\n\ncompletion = client.CompleteChat(messages, options);\n\nPrintAudioContent();\n```\n\n流式传输的工作方式与此高度类似：`StreamingChatCompletionUpdate` 实例可能包含一个 `OutputAudioUpdate`，其中可能包括以下内容：\n\n- 流式音频内容的 `Id`，一旦流式响应完成，后续的 `AssistantChatMessage` 实例可通过 `ChatAudioReference` 引用该 `Id`；此值可能出现在多个 `StreamingChatCompletionUpdate` 实例中，但只要存在，其值始终相同。\n- `ExpiresAt` 值，用于描述 `Id` 在后续请求中不再适用于 `ChatAudioReference` 的时间；此值通常只出现一次，在最后一个 `StreamingOutputAudioUpdate` 中。\n- 递增的 `TranscriptUpdate` 和\u002F或 `AudioBytesUpdate` 值，这些值可以逐步接收，拼接后即可形成整个响应的完整音频转录和音频输出；此类更新通常会多次出现。\n\n## 如何结合流式传输和推理使用响应\n\n```C# Snippet:ReadMe_ResponsesStreaming\nResponsesClient client = new(\n    apiKey: Environment.GetEnvironmentVariable(\"OPENAI_API_KEY\"));\n\nCreateResponseOptions options = new()\n{\n    Model = \"gpt-5.1\",\n    ReasoningOptions = new ResponseReasoningOptions()\n    {\n        ReasoningEffortLevel = ResponseReasoningEffortLevel.High,\n    },\n};\n\noptions.InputItems.Add(ResponseItem.CreateUserMessageItem(\"在扑克游戏中，获胜的最佳策略是什么？\"));\nResponseResult response = await client.CreateResponseAsync(options);\n\nCreateResponseOptions streamingOptions = new()\n{\n    Model = \"gpt-5.1\",\n    ReasoningOptions = new ResponseReasoningOptions()\n    {\n        ReasoningEffortLevel = ResponseReasoningEffortLevel.High,\n    },\n    StreamingEnabled = true,\n};\n\nstreamingOptions.InputItems.Add(ResponseItem.CreateUserMessageItem(\"在扑克游戏中，获胜的最佳策略是什么？\"));\n\nawait foreach (StreamingResponseUpdate update\n    in client.CreateResponseStreamingAsync(streamingOptions))\n{\n    if (update is StreamingResponseOutputItemAddedUpdate itemUpdate\n        && itemUpdate.Item is ReasoningResponseItem reasoningItem)\n    {\n        Console.WriteLine($\"[推理] ({reasoningItem.Status})\");\n    }\n    else if (update is StreamingResponseOutputItemDoneUpdate itemDone\n        && itemDone.Item is ReasoningResponseItem reasoningDone)\n    {\n        
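\u002F\u002F 注意：推理摘要内容是否可用取决于模型与请求配置，此示例仅打印推理项的状态。\n        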
Console.WriteLine($\"[推理完成] ({reasoningDone.Status})\");\n    }\n    else if (update is StreamingResponseOutputTextDeltaUpdate delta)\n    {\n        Console.Write(delta.Delta);\n    }\n}\n```\n\n## 如何结合文件搜索使用响应\n\n```C# Snippet:ReadMe_ResponsesFileSearch\nResponsesClient client = new(Environment.GetEnvironmentVariable(\"OPENAI_API_KEY\"));\nstring vectorStoreId = \"vs-123\";\n\nResponseTool fileSearchTool\n    = ResponseTool.CreateFileSearchTool(vectorStoreIds: [vectorStoreId]);\n\nCreateResponseOptions options = new()\n{\n    Model = \"gpt-5.1\",\n    Tools = { fileSearchTool }\n};\n\noptions.InputItems.Add(ResponseItem.CreateUserMessageItem(\"根据现有文件，秘密数字是多少？\"));\nResponseResult response = await client.CreateResponseAsync(options);\n\nforeach (ResponseItem outputItem in response.OutputItems)\n{\n    if (outputItem is FileSearchCallResponseItem fileSearchCall)\n    {\n        Console.WriteLine($\"[文件搜索] ({fileSearchCall.Status}): {fileSearchCall.Id}\");\n        foreach (string query in fileSearchCall.Queries)\n        {\n            Console.WriteLine($\"  - {query}\");\n        }\n    }\n    else if (outputItem is MessageResponseItem message)\n    {\n        Console.WriteLine($\"[{message.Role}] {message.Content.FirstOrDefault()?.Text}\");\n    }\n}\n```\n\n## 如何结合网络搜索使用响应\n\n```C# Snippet:ReadMe_ResponsesWebSearch\nResponsesClient client = new(Environment.GetEnvironmentVariable(\"OPENAI_API_KEY\"));\n\nCreateResponseOptions options = new()\n{\n    Model = \"gpt-5.1\",\n    Tools = { ResponseTool.CreateWebSearchTool() },\n};\n\noptions.InputItems.Add(ResponseItem.CreateUserMessageItem(\"今天有什么令人开心的新闻标题？\"));\nResponseResult response = await client.CreateResponseAsync(options);\n\nforeach (ResponseItem item in response.OutputItems)\n{\n    if (item is WebSearchCallResponseItem webSearchCall)\n    {\n        Console.WriteLine($\"[调用网络搜索]({webSearchCall.Status}) {webSearchCall.Id}\");\n    }\n    else if (item is MessageResponseItem message)\n    {\n        
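\u002F\u002F 消息内容可能为空，此处通过 FirstOrDefault 与 ?. 运算符安全读取首个内容片段的文本。\n        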
Console.WriteLine($\"[{message.Role}] {message.Content?.FirstOrDefault()?.Text}\");\n    }\n}\n```\n\n## 如何生成文本嵌入\n在这个示例中，您希望创建一个旅行规划网站，允许客户输入一段描述他们理想酒店的文字提示，然后系统会提供与该描述高度匹配的酒店推荐。为了实现这一点，可以使用文本嵌入来衡量文本字符串之间的相关性。具体来说，您可以为每家酒店的描述生成嵌入，并将其存储到向量数据库中，从而构建一个可查询的索引，通过客户输入的提示嵌入进行检索。\n\n要生成文本嵌入，请使用 `OpenAI.Embeddings` 命名空间中的 `EmbeddingClient`：\n\n```C# Snippet:ReadMe_Embeddings\nstring description = \"如果您喜欢豪华酒店，这是镇上最好的酒店。这里有一座令人惊叹的无边泳池、水疗中心\"\n    + \"以及非常乐于助人的礼宾服务。地理位置也十分优越——位于市中心，靠近所有旅游景点。我们强烈推荐这家酒店。\";\n\nOpenAIEmbedding embedding = client.GenerateEmbedding(description);\nReadOnlyMemory\u003Cfloat> vector = embedding.ToFloats();\n```\n\n请注意，生成的嵌入是一个浮点数列表（也称为向量），以 `ReadOnlyMemory\u003Cfloat>` 的形式表示。默认情况下，使用 `text-embedding-3-small` 模型时，嵌入向量的长度为 1536；而使用 `text-embedding-3-large` 模型时，则为 3072。一般来说，较大的嵌入效果更好，但计算、内存和存储成本也会更高。您可以通过创建 `EmbeddingGenerationOptions` 类的实例，设置 `Dimensions` 属性，并将其作为参数传递给 `GenerateEmbedding` 方法，来降低嵌入的维度：\n\n```C# Snippet:ReadMe_Embeddings_WithDimensions\nstring description = \"如果您喜欢豪华酒店，这是镇上最好的酒店。\";\nEmbeddingGenerationOptions options = new() { Dimensions = 512 };\n\nOpenAIEmbedding embedding = client.GenerateEmbedding(description, options);\n```\n\n## 如何生成图像\n\n在本示例中，您希望构建一个应用程序，帮助室内设计师基于最新的设计趋势快速制作新想法的原型。作为创作过程的一部分，室内设计师只需用提示描述脑海中的场景，即可使用此应用生成用于获取灵感的图像。正如预期，高质量、极具视觉冲击力且细节丰富的图像能为该应用场景带来最佳效果。\n\n要生成图像，可以使用 `OpenAI.Images` 命名空间中的 `ImageClient`：\n\n```C# Snippet:ReadMe_Images_CreateClient\nImageClient client = new(\"dall-e-3\", Environment.GetEnvironmentVariable(\"OPENAI_API_KEY\"));\n```\n\n生成图像始终需要一个描述生成内容的 `prompt`。为了进一步根据您的具体需求定制图像生成，您可以创建 `ImageGenerationOptions` 类的实例，并相应地设置 `Quality`、`Size` 和 `Style` 属性。请注意，您还可以将 `ImageGenerationOptions` 的 `ResponseFormat` 属性设置为 `GeneratedImageFormat.Bytes`，以便以 `BinaryData` 格式接收生成的 PNG 图像（而不是默认的远程 `Uri`），如果这对您的用例更方便的话。\n\n```C# Snippet:ReadMe_Images_Options\nstring prompt = 
\"客厅的设计理念融合了斯堪的纳维亚的简约风格与日本的极简主义，营造出宁静而温馨的氛围。这是一个邀请人们放松身心的空间，充满自然光和新鲜空气。采用中性色调，包括白色、米色、灰色和黑色等颜色，以创造和谐感。家具选用线条简洁、略带弧度的光滑木质家具，增添温暖与优雅。陶瓷花盆中的绿植和鲜花为空间注入色彩与生机，它们可以作为视觉焦点，拉近人与自然的距离。柔软的纺织品和靠垫采用天然面料，为空间带来舒适与柔和，同时也能起到点缀作用，增加对比与质感。\";\n\nImageGenerationOptions options = new()\n{\n    Quality = GeneratedImageQuality.High,\n    Size = GeneratedImageSize.W1792xH1024,\n    Style = GeneratedImageStyle.Vivid,\n    ResponseFormat = GeneratedImageFormat.Bytes\n};\n```\n\n最后，通过将提示和 `ImageGenerationOptions` 实例作为参数传递给 `ImageClient` 的 `GenerateImage` 方法来调用它：\n\n```C# Snippet:ReadMe_Images_Generate\nGeneratedImage image = client.GenerateImage(prompt, options);\nBinaryData bytes = image.ImageBytes;\n```\n\n出于演示目的，您可以将生成的图像保存到本地存储：\n\n```C# Snippet:ReadMe_Images_Save\nusing FileStream stream = File.OpenWrite($\"{Guid.NewGuid()}.png\");\nbytes.ToStream().CopyTo(stream);\n```\n\n## 如何转录音频\n\n在本示例中，使用 Whisper 语音转文本模型对音频文件进行转录，同时包含单词级别和音频片段级别的时间戳信息。\n\n```C# Snippet:ReadMe_Audio_Transcribe\nAudioClient client = new(\"whisper-1\", Environment.GetEnvironmentVariable(\"OPENAI_API_KEY\"));\nstring audioFilePath = Path.Combine(\"Assets\", \"audio_houseplant_care.mp3\");\n\nAudioTranscriptionOptions options = new()\n{\n    ResponseFormat = AudioTranscriptionFormat.Verbose,\n    TimestampGranularities = AudioTimestampGranularities.Word | AudioTimestampGranularities.Segment,\n};\n\nAudioTranscription transcription = client.TranscribeAudio(audioFilePath, options);\n\nConsole.WriteLine(\"转录结果：\");\nConsole.WriteLine($\"{transcription.Text}\");\nConsole.WriteLine();\nConsole.WriteLine($\"单词：\");\n\nforeach (TranscribedWord word in transcription.Words)\n{\n    Console.WriteLine($\"  {word.Word,15} : {word.StartTime.TotalMilliseconds,5:0} - {word.EndTime.TotalMilliseconds,5:0}\");\n}\n\nConsole.WriteLine();\nConsole.WriteLine($\"片段：\");\nforeach (TranscribedSegment segment in transcription.Segments)\n{\n    Console.WriteLine($\"  {segment.Text,90} : {segment.StartTime.TotalMilliseconds,5:0} - 
{segment.EndTime.TotalMilliseconds,5:0}\");\n}\n```\n\n## 如何使用带有检索增强生成（RAG）的助手\n\n在本示例中，您有一个包含不同产品月度销售信息的 JSON 文档，希望构建一个能够分析该文档并回答相关问题的助手。\n\n为此，您可以同时使用 `OpenAI.Files` 命名空间中的 `OpenAIFileClient` 和 `OpenAI.Assistants` 命名空间中的 `AssistantClient`。\n\n重要提示：助手 REST API 目前处于测试阶段。因此，其细节可能会发生变化，相应的 `AssistantClient` 被标记为 `[Experimental]`。要使用它，您必须先禁用 `OPENAI001` 警告。\n\n```C# Snippet:ReadMe_Assistants_CreateClients\nOpenAIClient openAIClient = new(Environment.GetEnvironmentVariable(\"OPENAI_API_KEY\"));\nOpenAIFileClient fileClient = openAIClient.GetOpenAIFileClient();\nAssistantClient assistantClient = openAIClient.GetAssistantClient();\n```\n\n以下是该 JSON 文档可能的样子：\n\n```C# Snippet:ReadMe_Assistants_Document\nStream document = BinaryData.FromBytes(\"\"\"\n    {\n        \"description\": \"本文档包含 Contoso 公司产品的销售历史数据。\",\n        \"sales\": [\n            {\n                \"month\": \"一月\",\n                \"by_product\": {\n                    \"113043\": 15,\n                    \"113045\": 12,\n                    \"113049\": 2\n                }\n            },\n            {\n                \"month\": \"二月\",\n                \"by_product\": {\n                    \"113045\": 22\n                }\n            },\n            {\n                \"month\": \"三月\",\n                \"by_product\": {\n                    \"113045\": 16,\n                    \"113055\": 5\n                }\n            }\n        ]\n    }\n    \"\"\"u8.ToArray()).ToStream();\n```\n\n使用 `OpenAIFileClient` 的 `UploadFile` 方法将此文档上传至 OpenAI，并确保使用 `FileUploadPurpose.Assistants` 目的，以便您的助手日后可以访问该文件：\n\n```C# Snippet:ReadMe_Assistants_UploadFile\nOpenAIFile salesFile = fileClient.UploadFile(\n    document,\n    \"monthly_sales.json\",\n    FileUploadPurpose.Assistants);\n```\n\n使用 `AssistantCreationOptions` 类的实例创建一个新的助手，并对其进行自定义。在此示例中，我们设置了以下内容：\n\n- 助手的友好名称，将在 Playground 中显示；\n- 助手应具备的工具定义实例；这里我们使用 `FileSearchToolDefinition` 来处理刚刚上传的销售文档，以及 `CodeInterpreterToolDefinition` 以便对数值数据进行分析和可视化；\n- 
助手与其工具一起使用的资源；此处我们使用 `VectorStoreCreationHelper` 类型自动创建一个新的向量存储，用于索引销售文件；或者您也可以使用 `VectorStoreClient` 单独管理向量存储。\n\n```C# Snippet:ReadMe_Assistants_CreateAssistant\nAssistantCreationOptions assistantOptions = new()\n{\n    Name = \"示例：Contoso 销售 RAG\",\n    Instructions =\n        \"您是一位助手，负责查找销售数据，并根据用户查询帮助可视化相关信息。当被要求生成图表或其他可视化内容时，请使用代码解释器工具来完成。\",\n    Tools =\n    {\n        new FileSearchToolDefinition(),\n        new CodeInterpreterToolDefinition(),\n    },\n    ToolResources = new()\n    {\n        FileSearch = new()\n        {\n            NewVectorStores =\n            {\n                new VectorStoreCreationHelper([salesFile.Id]),\n            }\n        }\n    },\n};\n\nAssistant assistant = assistantClient.CreateAssistant(\"gpt-5.1\", assistantOptions);\n```\n\n接下来，创建一个新的线程。为了演示目的，您可以包含一条初始用户消息，询问某个产品的销售信息，然后使用 `AssistantClient` 的 `CreateThreadAndRun` 方法启动它：\n\n```C# Snippet:ReadMe_Assistants_CreateThreadAndRun\nThreadCreationOptions threadOptions = new()\n{\n    InitialMessages = { \"产品 113045 在二月份的销售情况如何？请绘制其随时间的变化趋势图。\" }\n};\n\nThreadRun threadRun = assistantClient.CreateThreadAndRun(assistant.Id, threadOptions);\n```\n\n轮询运行状态，直到其不再处于排队或进行中状态：\n\n```C# Snippet:ReadMe_Assistants_Poll\ndo\n{\n    Thread.Sleep(TimeSpan.FromSeconds(1));\n    threadRun = assistantClient.GetRun(threadRun.ThreadId, threadRun.Id);\n} while (!threadRun.Status.IsTerminal);\n```\n\n如果一切顺利，运行的最终状态将是 `RunStatus.Completed`。\n\n最后，您可以使用 `AssistantClient` 的 `GetMessages` 方法获取与该线程相关的消息，其中现在包含了助手对初始用户消息的响应。\n\n为了演示目的，您可以将这些消息打印到控制台，并将助手生成的任何图像保存到本地存储中：\n\n```C# Snippet:ReadMe_Assistants_GetMessages\nCollectionResult\u003CThreadMessage> messages\n    = assistantClient.GetMessages(threadRun.ThreadId, new MessageCollectionOptions() { Order = MessageCollectionOrder.Ascending });\n\nforeach (ThreadMessage message in messages)\n{\n    Console.Write($\"[{message.Role.ToString().ToUpper()}]: \");\n    foreach (MessageContent contentItem in message.Content)\n    {\n        if 
(!string.IsNullOrEmpty(contentItem.Text))\n        {\n            Console.WriteLine($\"{contentItem.Text}\");\n\n            if (contentItem.TextAnnotations.Count > 0)\n            {\n                Console.WriteLine();\n            }\n\n            \u002F\u002F 如果有注释，则一并显示。\n            foreach (TextAnnotation annotation in contentItem.TextAnnotations)\n            {\n                if (!string.IsNullOrEmpty(annotation.InputFileId))\n                {\n                    Console.WriteLine($\"* 文件引用，文件 ID：{annotation.InputFileId}\");\n                }\n                if (!string.IsNullOrEmpty(annotation.OutputFileId))\n                {\n                    Console.WriteLine($\"* 文件输出，新文件 ID：{annotation.OutputFileId}\");\n                }\n            }\n        }\n        if (!string.IsNullOrEmpty(contentItem.ImageFileId))\n        {\n            OpenAIFile imageInfo = fileClient.GetFile(contentItem.ImageFileId);\n            BinaryData imageBytes = fileClient.DownloadFile(contentItem.ImageFileId);\n            using FileStream stream = File.OpenWrite($\"{imageInfo.Filename}.png\");\n            imageBytes.ToStream().CopyTo(stream);\n\n            Console.WriteLine($\"\u003C图片：{imageInfo.Filename}.png>\");\n        }\n    }\n    Console.WriteLine();\n}\n```\n\n输出结果可能如下所示：\n\n```text\n[USER]：产品 113045 在二月份的销售情况如何？请绘制其随时间的变化趋势图。\n\n[ASSISTANT]：产品 113045 在二月份售出了 22 件【4:0†monthly_sales.json】。\n\n现在，我将生成一张图表来展示其销售趋势。\n\n* 文件引用，文件 ID：file-hGOiwGNftMgOsjbynBpMCPFn\n\n[ASSISTANT]：\u003C图片：015d8e43-17fe-47de-af40-280f25452280.png>\n过去三个月内，产品 113045 的销售趋势显示：\n\n- 一月份售出 12 件。\n- 二月份售出 22 件，表明销售显著增长。\n- 三月份销量略有下降，为 16 件。\n\n上图直观地展示了这一趋势，显示出二月份的销售峰值。\n```\n\n## 如何使用带有流式传输和视觉功能的助手\n\n此示例展示了如何使用 v2 助手 API 向助手提供图像数据，然后流式传输运行的响应。\n\n与之前一样，您将使用 `OpenAIFileClient` 和 `AssistantClient`：\n\n```C# Snippet:ReadMe_Assistants_Vision_CreateClients\nOpenAIClient openAIClient = new(Environment.GetEnvironmentVariable(\"OPENAI_API_KEY\"));\nOpenAIFileClient fileClient = 
openAIClient.GetOpenAIFileClient();\nAssistantClient assistantClient = openAIClient.GetAssistantClient();\n```\n\n对于本示例，我们将同时使用来自本地文件的图像数据以及位于 URL 的图像。对于本地数据，我们以 `Vision` 上传目的上传文件，这也将允许稍后下载和检索该文件。\n\n```C# Snippet:ReadMe_Assistants_Vision_UploadImage\nOpenAIFile pictureOfAppleFile = fileClient.UploadFile(\n    Path.Combine(\"Assets\", \"images_apple.png\"),\n    FileUploadPurpose.Vision);\n\nUri linkToPictureOfOrange = new(\"https:\u002F\u002Fraw.githubusercontent.com\u002Fopenai\u002Fopenai-dotnet\u002Frefs\u002Fheads\u002Fmain\u002Fexamples\u002FAssets\u002Fimages_orange.png\");\n```\n\n接下来，创建一个具有视觉能力的模型（如 `gpt-4o`）的新助手，并创建一个引用了图像信息的线程：\n\n```C# Snippet:ReadMe_Assistants_Vision_CreateAssistantAndThread\nAssistant assistant = assistantClient.CreateAssistant(\n    \"gpt-5.1\",\n    new AssistantCreationOptions()\n    {\n        Instructions = \"当被问到问题时，尽量用非常简洁的方式回答。\"\n            + \"只要可行，就优先使用一句话来回答。\"\n    });\n\nAssistantThread thread = assistantClient.CreateThread(new ThreadCreationOptions()\n{\n    InitialMessages =\n        {\n            new ThreadInitializationMessage(\n                OpenAI.Assistants.MessageRole.User,\n                [\n                    \"你好，助手！请帮我比较这两张图片：\",\n                    MessageContent.FromImageFileId(pictureOfAppleFile.Id),\n                    MessageContent.FromImageUri(linkToPictureOfOrange),\n                ]),\n        }\n});\n```\n\n在助手和线程准备就绪后，使用 `CreateRunStreaming` 方法获取可枚举的 `CollectionResult\u003CStreamingUpdate>`。然后您可以使用 `foreach` 遍历此集合。对于异步调用模式，请使用 `CreateRunStreamingAsync`，并使用 `await foreach` 遍历 `AsyncCollectionResult\u003CStreamingUpdate>`。请注意，`CreateThreadAndRunStreaming` 和 `SubmitToolOutputsToRunStreaming` 也有流式传输版本。\n\n```C# Snippet:ReadMe_Assistants_Vision_CreateRunStreaming\nCollectionResult\u003CStreamingUpdate> streamingUpdates = assistantClient.CreateRunStreaming(\n    thread.Id,\n    assistant.Id,\n    new RunCreationOptions()\n    {\n        AdditionalInstructions = \"如果可能的话，在比较事物时试着加入一些双关语。\",\n    
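    \u002F\u002F AdditionalInstructions 仅作用于本次运行，会附加在助手已有指令之后。\n    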
});\n```\n\n最后，为了处理到达的 `StreamingUpdates`，您可以使用基类 `StreamingUpdate` 上的 `UpdateKind` 属性，或者将其向下转换为特定所需的更新类型，例如用于 `thread.message.delta` 事件的 `MessageContentUpdate` 或用于流式工具调用的 `RequiredActionUpdate`。\n\n```C# Snippet:ReadMe_Assistants_Vision_HandleStreamingUpdates\nforeach (StreamingUpdate streamingUpdate in streamingUpdates)\n{\n    if (streamingUpdate.UpdateKind == StreamingUpdateReason.RunCreated)\n    {\n        Console.WriteLine($\"--- 运行已开始！---\");\n    }\n    if (streamingUpdate is MessageContentUpdate contentUpdate)\n    {\n        Console.Write(contentUpdate.Text);\n    }\n}\n```\n\n这将产生如下所示的流式输出：\n\n```text\n--- 运行已开始！---\n第一张图片描绘了一颗红绿相间的多彩苹果，而第二张图片则展示了一个有着明亮、纹理丰富的橙色果皮的橙子；可以说这是在比较苹果和橙子呢！\n```\n## 如何使用 Azure OpenAI\n\n从 OpenAI 切换到 Azure OpenAI 很简单，在大多数情况下几乎不需要更改代码。要快速上手，请访问 https:\u002F\u002Faka.ms\u002Fopenai\u002Fstart 查看入门套件。如果您想了解端点切换的工作原理，也可以阅读：https:\u002F\u002Faka.ms\u002Fopenai\u002Fswitch。\n\n### 使用 Microsoft Entra ID 安全访问（无需 API 密钥）\n入门套件包含示例，演示如何使用 Microsoft Entra ID 而不是 API 密钥安全地调用 Azure OpenAI。这是生产场景中推荐的方法。以下是入门套件中使用 Entra ID 的 .NET 示例的直接链接：https:\u002F\u002Fgithub.com\u002FAzure-Samples\u002Fazure-openai-starter\u002Fblob\u002Fmain\u002Fsrc\u002Fdotnet\u002Fresponses_example_entra.cs\n\n以下是使用 .NET 的 OpenAI SDK 结合 Azure OpenAI 和 Entra ID 的核心模式：\n\n```C# Snippet:ReadMe_AzureOpenAI\nvar endpoint = Environment.GetEnvironmentVariable(\"AZURE_OPENAI_ENDPOINT\")\n    ?? 
throw new InvalidOperationException(\"AZURE_OPENAI_ENDPOINT 是必需的。\");\n\nvar client = new ResponsesClient(\n    new BearerTokenPolicy(new DefaultAzureCredential(), \"https:\u002F\u002Fai.azure.com\u002F.default\"),\n    new OpenAIClientOptions { Endpoint = new Uri($\"{endpoint}\u002Fopenai\u002Fv1\u002F\") }\n);\n\nvar response = await client.CreateResponseAsync(\"gpt-5-mini\", \"Hello world!\");\nConsole.WriteLine(response.Value.GetOutputText());\n```\n\n### 为什么这样可行\n\n- 使用一个 OpenAI SDK：您使用官方的 .NET 版 OpenAI SDK。Azure OpenAI 只是您让客户端库指向的一个不同端点。\n- 统一的 \u002Fopenai\u002Fv1\u002F 端点：Azure OpenAI 使用与 OpenAI 相同的路径结构，因此大多数客户端代码可以保持不变。\n- 企业级身份验证：结合 Microsoft Entra ID 的 Azure Identity SDK 允许您在不存储密钥的情况下访问 Azure OpenAI。\n- 即插即用的模型切换：只要 Azure 模型部署与模型名称相同，就可以随意更换“gpt-5-mini”或其他任何模型。\n\n## 高级场景\n\n### 使用协议方法\n\n除了使用强类型请求和响应对象的客户端方法外，.NET 库还提供了_协议方法_，允许更直接地访问 REST API。协议方法采用“二进制输入、二进制输出”的方式，接受 `BinaryContent` 作为请求体，并以 `BinaryData` 作为响应体。\n\n例如，要使用 `ChatClient` 的 `CompleteChat` 方法的协议方法变体，可以将请求体作为 `BinaryContent` 传递：\n\n```C# Snippet:ReadMe_ProtocolMethods\nBinaryData input = BinaryData.FromBytes(\"\"\"\n    {\n       \"model\": \"gpt-5.1\",\n       \"messages\": [\n           {\n               \"role\": \"user\",\n               \"content\": \"说‘这是一个测试。’\"\n           }\n       ]\n    }\n    \"\"\"u8.ToArray());\n\nusing BinaryContent content = BinaryContent.Create(input);\nClientResult result = client.CompleteChat(content);\nBinaryData output = result.GetRawResponse().Content;\n\nusing JsonDocument outputAsJson = JsonDocument.Parse(output.ToString());\nstring message = outputAsJson.RootElement\n    .GetProperty(\"choices\")[0]\n    .GetProperty(\"message\")\n    .GetProperty(\"content\")\n    .GetString();\n\nConsole.WriteLine($\"[ASSISTANT]: {message}\");\n```\n\n请注意，您可以调用结果 `ClientResult` 的 `GetRawResponse` 方法，并通过 `PipelineResponse` 的 `Content` 属性获取响应体 `BinaryData`。\n\n### 模拟客户端进行测试\n\nOpenAI .NET 库的设计支持模拟，提供了以下关键特性：\n\n- 客户端方法被声明为虚方法，以便可以被重写。\n- 提供模型工厂，用于实例化缺少公共构造函数的 API 
输出模型。\n\n为了说明模拟的工作原理，假设您想使用 [Moq](https:\u002F\u002Fgithub.com\u002Fdevlooped\u002Fmoq) 库验证以下方法的行为：给定一个音频文件路径，判断该文件是否包含指定的秘密单词：\n\n```C# Snippet:ReadMe_Mocking_MethodUnderTest\nbool ContainsSecretWord(AudioClient client, string audioFilePath, string secretWord)\n{\n    AudioTranscription transcription = client.TranscribeAudio(audioFilePath);\n    return transcription.Text.Contains(secretWord);\n}\n```\n\n创建 `AudioClient` 和 `ClientResult\u003CAudioTranscription>` 的模拟对象，设置将被调用的方法和属性，然后测试 `ContainsSecretWord` 方法的行为。由于 `AudioTranscription` 类没有公共构造函数，必须通过 `OpenAIAudioModelFactory` 静态类来实例化：\n\n```C# Snippet:ReadMe_Mocking_Test\n\u002F\u002F 实例化模拟对象和 AudioTranscription 对象。\nMock\u003CAudioClient> mockClient = new();\nAudioTranscription transcription = OpenAIAudioModelFactory.AudioTranscription(text: \"我发誓昨天真的看到一颗苹果在飞！\");\n\n\u002F\u002F 设置模拟对象的属性和方法。\nmockClient\n    .Setup(client => client.TranscribeAudio(It.IsAny\u003Cstring>()))\n    .Returns(ClientResult.FromValue(transcription, Mock.Of\u003CPipelineResponse>()));\n\n\u002F\u002F 执行验证。\nAudioClient client = mockClient.Object;\nbool containsSecretWord = ContainsSecretWord(client, \"\u003CaudioFilePath>\", \"apple\");\n\nAssert.That(containsSecretWord, Is.True);\n```\n\n除 `OpenAI.Assistants` 和 `OpenAI.VectorStores` 命名空间外，所有命名空间都有对应的模型工厂，以支持模拟；这两个命名空间的模型工厂也将很快推出。\n\n### 自动重试错误\n\n默认情况下，客户端类会使用指数退避策略，对以下错误最多自动重试三次：\n\n- 408 请求超时\n- 429 请求过多\n- 500 服务器内部错误\n- 502 网关错误\n- 503 服务不可用\n- 504 网关超时\n\n### 可观测性\n\nOpenAI .NET 库支持基于 OpenTelemetry 的实验性分布式追踪和指标功能。有关详细信息，请参阅 [使用 OpenTelemetry 进行可观测性](.\u002Fdocs\u002FObservability.md)。","# OpenAI .NET 快速上手指南\n\n## 环境准备\n\n在开始之前，请确保满足以下前置条件：\n\n*   **.NET 环境**：支持所有 .NET Standard 2.0 及以上版本的应用程序（示例代码基于 .NET 10 编写）。\n*   **API Key**：您需要一个 OpenAI API 密钥。\n    1. 访问 [OpenAI 平台](https:\u002F\u002Fplatform.openai.com\u002Fsignup) 注册或登录账号。\n    2. 进入 [API 密钥页面](https:\u002F\u002Fplatform.openai.com\u002Faccount\u002Fapi-keys)。\n    3. 点击 \"Create new secret key\" 创建新密钥。\n    4. 
**重要**：请妥善保存密钥，切勿将其硬编码在源代码中或泄露给他人。\n\n## 安装步骤\n\n通过 .NET CLI 将 `OpenAI` NuGet 包添加到您的项目中：\n\n```cli\ndotnet add package OpenAI\n```\n\n> **提示**：如果下载速度较慢，可以先在 `NuGet.config` 文件中配置可用的镜像源，再通过 `--source` 参数指定软件源进行安装（下例使用的是官方源地址，可按需替换为镜像源地址）：\n> ```cli\n> dotnet add package OpenAI --source https:\u002F\u002Fapi.nuget.org\u002Fv3\u002Findex.json\n> ```\n\n## 基本使用\n\n### 1. 初始化客户端\n\n推荐通过环境变量读取 API 密钥，以确保安全性。以下是最基础的聊天补全（Chat Completions）示例：\n\n```C#\nusing OpenAI.Chat;\n\n\u002F\u002F 从环境变量获取 API Key\nstring apiKey = Environment.GetEnvironmentVariable(\"OPENAI_API_KEY\");\n\n\u002F\u002F 初始化 ChatClient，指定模型（例如 gpt-4o 或 gpt-5.1）\nChatClient client = new(model: \"gpt-4o\", apiKey: apiKey);\n\n\u002F\u002F 发送消息并获取响应\nChatCompletion completion = client.CompleteChat(\"Say 'this is a test.'\");\n\n\u002F\u002F 输出结果\nConsole.WriteLine($\"[ASSISTANT]: {completion.Content[0].Text}\");\n```\n\n### 2. 异步调用\n\n所有同步方法都有对应的异步版本（后缀为 `Async`），建议在 Web 应用中使用异步模式以避免阻塞线程：\n\n```C#\nChatCompletion completion = await client.CompleteChatAsync(\"Say 'this is a test.'\");\nConsole.WriteLine($\"[ASSISTANT]: {completion.Content[0].Text}\");\n```\n\n### 3. 流式输出 (Streaming)\n\n如果您希望像打字机一样实时接收生成的内容，可以使用流式接口：\n\n```C#\nAsyncCollectionResult\u003CStreamingChatCompletionUpdate> updates = client.CompleteChatStreamingAsync(\"Count from 1 to 5.\");\n\nConsole.Write(\"[ASSISTANT]: \");\nawait foreach (StreamingChatCompletionUpdate update in updates)\n{\n    if (update.ContentUpdate.Count > 0)\n    {\n        Console.Write(update.ContentUpdate[0].Text);\n    }\n}\n```\n\n### 4. 
自定义端点 (可选)\n\n如果您使用的是兼容 OpenAI 接口的第三方服务或私有部署，可以自定义 Base URL：\n\n```C#\nChatClient client = new(\n    model: \"YOUR_MODEL_NAME\",\n    credential: new ApiKeyCredential(Environment.GetEnvironmentVariable(\"OPENAI_API_KEY\")),\n    options: new OpenAIClientOptions()\n    {\n        Endpoint = new Uri(\"https:\u002F\u002FYOUR_CUSTOM_BASE_URL\")\n    });\n```","某电商企业的 .NET 开发团队正在构建一个智能客服系统，需要让后端服务实时调用大模型处理用户咨询并生成个性化回复。\n\n### 没有 openai-dotnet 时\n- 开发人员必须手动编写复杂的 HTTP 请求代码来处理认证头、序列化 JSON 负载及解析响应，极易因字段拼写错误导致运行时异常。\n- 实现流式输出（Streaming）以改善用户体验时，需底层处理 SSE 事件流解析，代码冗长且难以维护异步状态。\n- 缺乏内置的重试机制和错误分类，网络波动或限流时服务容易直接崩溃，稳定性完全依赖人工兜底。\n- 集成函数调用（Function Calling）或结构化输出时，需自行定义复杂的 Schema 映射逻辑，开发周期被大幅拉长。\n- 单元测试困难，无法轻松模拟 API 客户端行为，导致每次修改都需消耗真实的 API 额度进行验证。\n\n### 使用 openai-dotnet 后\n- 通过强类型的 `OpenAIClient` 和 `ChatClient` 类，仅需几行代码即可完成认证与对话调用，彻底告别手写 HTTP 底层逻辑。\n- 原生支持异步流式接口，开发者可直接遍历响应内容块，轻松实现打字机效果的实时回复，代码简洁清晰。\n- 库内建自动重试策略与详细的异常体系，能智能识别瞬态故障并恢复，显著提升了生产环境的鲁棒性。\n- 利用泛型约束和内置辅助方法，轻松定义函数参数与结构化输出格式，将复杂的功能集成时间从数天缩短至数小时。\n- 提供完善的 Mock 支持，允许在测试环境中注入假客户端，既节省了 API 成本又实现了高效的自动化测试。\n\nopenai-dotnet 将繁琐的 API 交互转化为直观的 .NET 对象操作，让开发者能专注于业务逻辑而非通信细节，极大加速了 AI 应用在微软技术栈中的落地。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fopenai_openai-dotnet_d3157d7a.png","openai","OpenAI","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fopenai_1960bbf4.png","",null,"https:\u002F\u002Fopenai.com\u002F","https:\u002F\u002Fgithub.com\u002Fopenai",[23,27,31,35],{"name":24,"color":25,"percentage":26},"C#","#178600",91.8,{"name":28,"color":29,"percentage":30},"TypeSpec","#4A3665",7.9,{"name":32,"color":33,"percentage":34},"PowerShell","#012456",0.3,{"name":36,"color":37,"percentage":38},"TypeScript","#3178c6",0,2561,387,"2026-04-13T04:54:15","MIT",2,"Windows, macOS, Linux","未说明",{"notes":47,"python":48,"dependencies":49},"该工具是 .NET 客户端库，非本地运行的 AI 模型，因此无需 GPU 或特定内存配置。主要依赖 .NET 运行环境（兼容 .NET Standard 2.0 及以上，示例基于 .NET 10）。使用时需配置 OpenAI API Key，支持通过环境变量注入。若用于 ASP.NET Core，建议将客户端注册为单例以优化连接复用。","不适用",[50,51],".NET Standard 
2.0+","NuGet package: OpenAI",[53,54,55,56,57],"语言模型","图像","音频","开发框架","Agent",[59,60,15],"csharp","dotnet","ready","2026-03-27T02:49:30.150509","2026-04-14T12:28:00.554105",[65,70,75,80,85,90],{"id":66,"question_zh":67,"answer_zh":68,"source_url":69},32648,"为什么无法直接访问 ChatCompletionOptions 中的 Messages、Stream 或 Model 等属性？","这些属性被设计为内部（internal）属性，不推荐直接作为 DTO 使用。对于需要映射 OpenAI 请求格式的高级场景（如代理开发），官方建议使用“协议重载（protocol overloads）”方法。虽然目前通过反射可以临时解决，但这并非长期方案。团队正在探索一种结合“协议与模型”的新表面接口，以更好地桥接原始协议和便捷用法。","https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fissues\u002F306",{"id":71,"question_zh":72,"answer_zh":73,"source_url":74},32649,"如何在 .NET 库中序列化并重新构建（rehydrate）OpenAI 响应对象（如 ReasoningResponseItem）？","对于需要持久化或代理响应的场景，应使用 ModelReaderWriter 类进行序列化和反序列化，而不是直接设置属性。具体代码如下：\n1. 序列化：var data = ModelReaderWriter.Write(item);\n2. 反序列化：var restoredItem = ModelReaderWriter.Read\u003CReasoningResponseItem>(data);\n这样可以完整保留对象状态，且符合库的设计原则（即 ID 等属性由服务端生成，客户端不应随意设置）。","https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fissues\u002F643",{"id":76,"question_zh":77,"answer_zh":78,"source_url":79},32650,"遇到 System.InvalidOperationException 提示 JsonPatch& 类型无法序列化怎么办？","这是一个已知问题，已在版本 2.7.0 中修复。修复方式是在相关属性上添加了 [JsonIgnore] 特性以避免序列化错误。请升级您的 OpenAI .NET 库至 2.7.0 或更高版本即可解决该问题。","https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fissues\u002F817",{"id":81,"question_zh":82,"answer_zh":83,"source_url":84},32651,"如何区分不同类型的 API 错误（如部署未找到、速率限制等）？","当前库统一抛出 ClientResultException 异常，不针对不同错误码提供独立异常类型。要获取具体错误信息，需从异常的原始响应中解析 JSON 内容。示例代码如下：\ntry {\n    \u002F\u002F 调用 API\n} catch (ClientResultException ex) {\n    var errorJson = ex.GetRawResponse().Content.ToString();\n    \u002F\u002F 解析 errorJson 获取 code 和 message 字段\n}\n虽然需要手动解析 JSON，但这是目前推荐的错误处理方式。","https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fissues\u002F663",{"id":86,"question_zh":87,"answer_zh":88,"source_url":89},32652,"当 finish_reason 为空或未知时，为什么会抛出异常？","在流式推理场景中，finish_reason 
通常在流结束时才确定。当前实现若遇到空值会抛出异常，这被视为设计缺陷。官方承认该问题对流式推断造成干扰，但尚未引入 Unknown 枚举值。建议开发者在捕获异常后检查流是否已正常结束，或等待后续版本修复。","https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fissues\u002F342",{"id":91,"question_zh":92,"answer_zh":93,"source_url":74},32653,"是否支持获取流式推理模型中的思考过程（reasoning content）？","目前尚无公开的可靠方式直接提取流式推理内容（如 thinking\u002Freasoning delta）。官方指出此类场景属于高级用例，不建议通过反射或额外序列化实现。未来可能会通过新的“协议与模型”接口提供支持，现阶段可关注相关 PR（如 #778 和 #828）的进展。",[95,100,105,110,115,120,125,130,135,140,145,150,155,160,165,170,175,180,185,190],{"id":96,"version":97,"summary_zh":98,"released_at":99},247386,"OpenAI_2.10.0","查看完整更新日志：https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fblob\u002FOpenAI_2.10.0\u002FCHANGELOG.md","2026-04-04T00:04:49",{"id":101,"version":102,"summary_zh":103,"released_at":104},247387,"OpenAI_2.9.1","查看完整更新日志：https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fblob\u002FOpenAI_2.9.1\u002FCHANGELOG.md","2026-03-02T23:43:24",{"id":106,"version":107,"summary_zh":108,"released_at":109},247388,"OpenAI_2.9.0","查看完整更新日志：https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fblob\u002FOpenAI_2.9.0\u002FCHANGELOG.md","2026-02-27T23:51:28",{"id":111,"version":112,"summary_zh":113,"released_at":114},247389,"OpenAI_2.8.0","查看完整更新日志：https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fblob\u002FOpenAI_2.8.0\u002FCHANGELOG.md","2025-12-11T21:20:46",{"id":116,"version":117,"summary_zh":118,"released_at":119},247390,"OpenAI_2.7.0","查看完整更新日志：https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fblob\u002FOpenAI_2.7.0\u002FCHANGELOG.md","2025-11-13T22:37:39",{"id":121,"version":122,"summary_zh":123,"released_at":124},247391,"OpenAI_2.6.0","查看完整更新日志：https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fblob\u002FOpenAI_2.6.0\u002FCHANGELOG.md","2025-10-31T22:16:14",{"id":126,"version":127,"summary_zh":128,"released_at":129},247392,"OpenAI_2.5.0","查看完整更新日志：https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fblob
\u002FOpenAI_2.5.0\u002FCHANGELOG.md","2025-09-24T01:42:47",{"id":131,"version":132,"summary_zh":133,"released_at":134},247393,"OpenAI_2.4.0","查看完整更新日志：https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fblob\u002FOpenAI_2.4.0\u002FCHANGELOG.md","2025-09-06T00:23:01",{"id":136,"version":137,"summary_zh":138,"released_at":139},247394,"OpenAI_2.3.0","查看完整更新日志：https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fblob\u002FOpenAI_2.3.0\u002FCHANGELOG.md","2025-08-04T16:11:40",{"id":141,"version":142,"summary_zh":143,"released_at":144},247395,"OpenAI_2.2.0","## 变更内容\n* @christothes 在 https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fpull\u002F422 中添加了 ChatTools 和 ResponseTools 辅助类\n* @KrzysztofCwalina 在 https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fpull\u002F464 中为 CreateBatchOperation.Rehydrate 方法添加了 string 类型的 batchId 重载\n* @thompson-tomo 在 https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fpull\u002F443 中优化了依赖项，解决了 #348 问题\n* @gromer 在 https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fpull\u002F359 中修复了 README 中结构化输出部分的一些代码格式问题\n* @stephentoub 在 https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fpull\u002F429 中修复了 call id 参数验证问题\n* @joseharriaga 在 https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fpull\u002F468 中添加了 CONTRIBUTING.md 文件\n* @Petermarcu 在 https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fpull\u002F467 中向 README.md 添加了使用自定义基础 URL 和 API 密钥的说明\n* @shargon 在 https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fpull\u002F459 中返回共享的 ArrayPool\n* @shargon 在 https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fpull\u002F437 中修复了 bool 类型相关问题\n* @jsquire 在 https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fpull\u002F477 中更新了议题模板和配置\n* @christothes 在 https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fpull\u002F476 中改进了 AsyncWebsocketMessageResultEnumerator 的释放逻辑，以防止多次释放\n* @jsquire 在 
https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fpull\u002F481 中为新创建的议题添加了“needs-triage”工作流标签\n* @KrzysztofCwalina 在 https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fpull\u002F485 中添加了 ChatMessageContent.ToString 方法的重写\n* @KrzysztofCwalina 在 https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fpull\u002F489 中提供了初始示例\n* @achandmsft 在 https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fpull\u002F497 中修复了示例文件中 .NET 10 属性指令的格式问题\n* @KrzysztofCwalina 在 https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fpull\u002F490 中提供了 mcp 使用示例\n* @joseharriaga 在 https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fpull\u002F502 中准备了 2.2.0 版本的发布（第 1 部分）\n* @joseharriaga 在 https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fpull\u002F503 中准备了 2.2.0 版本的发布（第 2 部分）\n* @joseharriaga 在 https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fpull\u002F504 中准备了 2.2.0 版本的发布（第 3 部分）\n\n## 新贡献者\n* @christothes 在 https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fpull\u002F422 中完成了首次贡献\n* @thompson-tomo 在 https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fpull\u002F443 中完成了首次贡献\n* @Petermarcu 在 https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fpull\u002F467 中完成了首次贡献\n* @shargon 在 https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fpull\u002F459 中完成了首次贡献\n* @jsquire 在 https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fpull\u002F477 中完成了首次贡献\n* @achandmsft 在 https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fpull\u002F497 中完成了首次贡献\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fcompare\u002FOpenAI_2.2.0-beta.4...OpenAI_2.2.0","2025-07-03T00:06:15",{"id":146,"version":147,"summary_zh":148,"released_at":149},247396,"OpenAI_2.2.0-beta.4","## What's Changed\r\n* Add a few quick readme examples for new responses support by @trrwilson in 
https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fpull\u002F371\r\n* Prepare 2.2.0-beta.4 release (Part 1) by @joseharriaga in https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fpull\u002F378\r\n* Prepare 2.2.0-beta.4 release (Part 2) by @joseharriaga in https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fpull\u002F379\r\n* Prepare 2.2.0-beta.4 release (Part 3) by @joseharriaga in https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fpull\u002F380\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fcompare\u002FOpenAI_2.2.0-beta.3...OpenAI_2.2.0-beta.4","2025-03-19T01:30:36",{"id":151,"version":152,"summary_zh":153,"released_at":154},247397,"OpenAI_2.2.0-beta.3","## What's Changed\r\n* chore: set license in nuget package by @e-i-n-s in https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fpull\u002F333\r\n* Fixed misspelling of instantiates. by @gromer in https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fpull\u002F321\r\n* * GenericActionPipelinePolicy - ConfigureAwait(false) for ProcessAsync method by @vanek021 in https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fpull\u002F353\r\n* Prepare 2.2.0-beta.3 release (Part 1) by @joseharriaga in https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fpull\u002F368\r\n* Fix for FilePurpose by @joseharriaga in https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fpull\u002F369\r\n* Prepare 2.2.0-beta.3 release (Part 2) by @joseharriaga in https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fpull\u002F370\r\n\r\n## New Contributors\r\n* @e-i-n-s made their first contribution in https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fpull\u002F333\r\n* @gromer made their first contribution in https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fpull\u002F321\r\n* @vanek021 made their first contribution in 
https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fpull\u002F353\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fcompare\u002FOpenAI_2.2.0-beta.2...OpenAI_2.2.0-beta.3","2025-03-12T02:29:16",{"id":156,"version":157,"summary_zh":158,"released_at":159},247398,"OpenAI_2.2.0-beta.2","## What's Changed\r\n* Prepare 2.2.0-beta.2 release (Part 1) by @joseharriaga in https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fpull\u002F349\r\n* Prepare 2.2.0-beta.2 release (Part 2) by @joseharriaga in https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fpull\u002F351\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fcompare\u002FOpenAI_2.2.0-beta.1...OpenAI_2.2.0-beta.2","2025-02-18T22:02:56",{"id":161,"version":162,"summary_zh":163,"released_at":164},247399,"OpenAI_2.2.0-beta.1","## What's Changed\r\n* Update and refactor generated code by @joseharriaga in https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fpull\u002F326\r\n* Added net8.0 as a supported target framework for OpenAI #272 by @armanossiloko in https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fpull\u002F277\r\n* feat: Added Trimming\u002FNativeAOT support. 
by @HavenDV in https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fpull\u002F21\r\n* Prepare 2.2.0-beta.1 release (Part 1) by @joseharriaga in https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fpull\u002F337\r\n* Prepare 2.2.0-beta.1 release (Part 2) by @joseharriaga in https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fpull\u002F339\r\n\r\n## New Contributors\r\n* @armanossiloko made their first contribution in https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fpull\u002F277\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fcompare\u002FOpenAI_2.1.0...OpenAI_2.2.0-beta.1","2025-02-07T22:15:47",{"id":166,"version":167,"summary_zh":168,"released_at":169},247400,"OpenAI_2.1.0","## What's Changed\r\n* Assistant - Fix null-reference exception when accessing `RunStepDetailsUpdate.FunctionName` by @crickman in https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fpull\u002F293\r\n* Add support for retrieving File Search result content in Run Steps and other fixes by @joseharriaga in https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fpull\u002F294\r\n* Prepare 2.1.0 release (Part 1) by @joseharriaga in https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fpull\u002F303\r\n* Prepare 2.1.0 release (Part 2) by @joseharriaga in https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fpull\u002F302\r\n\r\n## New Contributors\r\n* @crickman made their first contribution in https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fpull\u002F293\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fcompare\u002FOpenAI_2.1.0-beta.2...OpenAI_2.1.0","2024-12-04T21:32:59",{"id":171,"version":172,"summary_zh":173,"released_at":174},247401,"OpenAI_2.1.0-beta.2","## What's Changed\r\n* Add CODEOWNERS file by @scottaddie in 
https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fpull\u002F253\r\n* Simplify structured outputs sample code by @scottaddie in https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fpull\u002F236\r\n* Prepare 2.1.0-beta.2 release (Part 1) by @joseharriaga in https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fpull\u002F278\r\n* fix：Fix parameter spelling errors by @ZhaoYis in https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fpull\u002F247\r\n* docs: update nuget badge by @WeihanLi in https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fpull\u002F241\r\n* [realtime] Address serialization issue with ConversationToolChoice by @trrwilson in https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fpull\u002F282\r\n* Prepare 2.1.0-beta.2 release (Part 2) by @joseharriaga in https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fpull\u002F283\r\n\r\n## New Contributors\r\n* @scottaddie made their first contribution in https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fpull\u002F253\r\n* @ZhaoYis made their first contribution in https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fpull\u002F247\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fcompare\u002FOpenAI_2.1.0-beta.1...OpenAI_2.1.0-beta.2","2024-11-04T21:53:23",{"id":176,"version":177,"summary_zh":178,"released_at":179},247402,"OpenAI_2.1.0-beta.1","## What's Changed\r\n* Remove prerelease switch from NuGet instructions by @joseharriaga in https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fpull\u002F235\r\n* 2.1.0-beta.1 staging: RealtimeConversationClient by @trrwilson in https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fpull\u002F238\r\n* 2.1.0-beta.1: CHANGELOG and release snap by @trrwilson in https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fpull\u002F239\r\n\r\n\r\n**Full Changelog**: 
https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fcompare\u002FOpenAI_2.0.0...OpenAI_2.1.0-beta.1","2024-10-01T21:24:33",{"id":181,"version":182,"summary_zh":183,"released_at":184},247403,"OpenAI_2.0.0","## What's Changed\r\n* Prepare 2.0.0 release (Part 1) by @joseharriaga in https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fpull\u002F233\r\n* Prepare 2.0.0 release (Part 2) by @joseharriaga in https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fpull\u002F234\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fcompare\u002FOpenAI_2.0.0-beta.13...OpenAI_2.0.0","2024-09-30T23:07:00",{"id":186,"version":187,"summary_zh":188,"released_at":189},247404,"OpenAI_2.0.0-beta.13","## What's Changed\r\n* Add serialization\u002Fdeserialization example with chat completions by @joseharriaga in https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fpull\u002F124\r\n* Refactor and rename types and properties for consistency and clarity by @ShivangiReja in https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fpull\u002F225\r\n* Remove the virtual keyword from the Pipeline property across all clients by @ShivangiReja in https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fpull\u002F227\r\n* Prepare 2.0.0-beta.13 release (Part 1) by @joseharriaga in https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fpull\u002F229\r\n* [Examples] Updating orange picture links by @kinelski in https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fpull\u002F231\r\n* Prepare 2.0.0-beta.13 release (Part 2) by @joseharriaga in https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fpull\u002F230\r\n\r\n\r\n**Full Changelog**: 
https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fcompare\u002FOpenAI_2.0.0-beta.12...OpenAI_2.0.0-beta.13","2024-09-27T22:37:58",{"id":191,"version":192,"summary_zh":193,"released_at":194},247405,"OpenAI_2.0.0-beta.12","## What's Changed\r\n* Added the NuGet tags openai-dotnet, ChatGPT, and Dall-E by @AngelosP in https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fpull\u002F173\r\n* Prepare 2.0.0-beta.12 release (Part 1) by @joseharriaga in https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fpull\u002F216\r\n* Prepare 2.0.0-beta.12 release (Part 2) by @joseharriaga in https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fpull\u002F220\r\n* Prepare 2.0.0-beta.12 release (Part 3) by @joseharriaga in https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fpull\u002F221\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-dotnet\u002Fcompare\u002FOpenAI_2.0.0-beta.11...OpenAI_2.0.0-beta.12","2024-09-20T20:33:26",[196,206,214,222,230,239],{"id":197,"name":198,"github_repo":199,"description_zh":200,"stars":201,"difficulty_score":202,"last_commit_at":203,"category_tags":204,"status":61},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows 
WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[57,56,54,205],"数据工具",{"id":207,"name":208,"github_repo":209,"description_zh":210,"stars":211,"difficulty_score":202,"last_commit_at":212,"category_tags":213,"status":61},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[56,54,57],{"id":215,"name":216,"github_repo":217,"description_zh":218,"stars":219,"difficulty_score":43,"last_commit_at":220,"category_tags":221,"status":61},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",154349,"2026-04-13T23:32:16",[56,57,53],{"id":223,"name":224,"github_repo":225,"description_zh":226,"stars":227,"difficulty_score":43,"last_commit_at":228,"category_tags":229,"status":61},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 
Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108322,"2026-04-10T11:39:34",[56,54,57],{"id":231,"name":232,"github_repo":233,"description_zh":234,"stars":235,"difficulty_score":43,"last_commit_at":236,"category_tags":237,"status":61},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[238,57,54,56],"插件",{"id":240,"name":241,"github_repo":242,"description_zh":243,"stars":244,"difficulty_score":43,"last_commit_at":245,"category_tags":246,"status":61},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[238,56]]