[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-lofcz--LLMTornado":3,"tool-lofcz--LLMTornado":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",148568,2,"2026-04-09T23:34:24",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108111,"2026-04-08T11:23:26",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 
格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":10,"last_commit_at":59,"category_tags":60,"status":17},4487,"LLMs-from-scratch","rasbt\u002FLLMs-from-scratch","LLMs-from-scratch 是一个基于 PyTorch 的开源教育项目，旨在引导用户从零开始一步步构建一个类似 ChatGPT 的大型语言模型（LLM）。它不仅是同名技术著作的官方代码库，更提供了一套完整的实践方案，涵盖模型开发、预训练及微调的全过程。\n\n该项目主要解决了大模型领域“黑盒化”的学习痛点。许多开发者虽能调用现成模型，却难以深入理解其内部架构与训练机制。通过亲手编写每一行核心代码，用户能够透彻掌握 Transformer 架构、注意力机制等关键原理，从而真正理解大模型是如何“思考”的。此外，项目还包含了加载大型预训练权重进行微调的代码，帮助用户将理论知识延伸至实际应用。\n\nLLMs-from-scratch 特别适合希望深入底层原理的 AI 开发者、研究人员以及计算机专业的学生。对于不满足于仅使用 API，而是渴望探究模型构建细节的技术人员而言，这是极佳的学习资源。其独特的技术亮点在于“循序渐进”的教学设计：将复杂的系统工程拆解为清晰的步骤，配合详细的图表与示例，让构建一个虽小但功能完备的大模型变得触手可及。无论你是想夯实理论基础，还是为未来研发更大规模的模型做准备",90106,"2026-04-06T11:19:32",[35,15,13,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":77,"owner_email":78,"owner_twitter":79,"owner_website":80,"owner_url":81,"languages":82,"stars":114,"forks":115,"last_commit_at":116,"license":117,"difficulty_score":118,"env_os":119,"env_gpu":120,"env_ram":121,"env_deps":122,"category_tags":131,"github_topics":132,"view_count":32,"oss_zip_url":79,"oss_zip_packed_at":79,"status":17,"created_at":153,"updated_at":154,"faqs":155,"releases":185},6039,"lofcz\u002FLLMTornado","LLMTornado","The .NET library to build AI agents with 30+ built-in connectors.","LLM Tornado 是一款专为 .NET 开发者打造的开源 SDK，旨在帮助用户快速构建、编排和部署 AI 智能体及工作流。它有效解决了开发过程中面临的模型接入繁琐、不同厂商 API 差异大以及本地部署复杂等痛点，让开发者无需依赖各厂商专用的重型 SDK，即可通过统一的代码接口轻松集成全球 30 多家主流 AI 服务商（如 OpenAI、Anthropic、Google、阿里通义千问等）及向量数据库。\n\n该工具特别适合熟悉 C# 和 .NET 生态的软件工程师与架构师，无论是想快速搭建一个简单的聊天机器人，还是设计能够自主执行复杂任务的编码智能体，都能从中获益。LLM Tornado 的核心亮点在于其“提供商无关”的设计理念：开发者只需编写一次业务逻辑，通过切换模型名称即可在不同 AI 后端间无缝迁移。此外，它还内置了强大的智能体编排引擎，支持基于图结构的任务协调、并行执行及可视化导出，并原生兼容 vLLM、Ollama 等本地部署方案。凭借强类型的代码支持和对最新 AI 特性的即时跟进，LLM Tornado 让 .NET 开发者能以极高的效率将创意转化为落地的 AI 应用。","\u003Cdiv align=\"center\">\n\n\u003Cimg width=\"600\" alt=\"LLM Tornado\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Flofcz_LLMTornado_readme_f58356788d0c.png\" \u002F>\n\n# LLM Tornado\n\n**Build AI agents and workflows in minutes with one toolkit and built-in connectors to 30+ API Providers & Vector Databases.**    \n\n**[Official 
Website](https:\u002F\u002Fllmtornado.ai)**\n\n[![LlmTornado](https:\u002F\u002Fshields.io\u002Fnuget\u002Fv\u002FLlmTornado?v=302&icon=nuget&label=LlmTornado)](https:\u002F\u002Fwww.nuget.org\u002Fpackages\u002FLlmTornado)\n[![LlmTornado.Agents](https:\u002F\u002Fshields.io\u002Fnuget\u002Fv\u002FLlmTornado.Agents?v=303&icon=nuget&label=LlmTornado.Agents)](https:\u002F\u002Fwww.nuget.org\u002Fpackages\u002FLlmTornado.Agents)\n[![LlmTornado.Mcp](https:\u002F\u002Fshields.io\u002Fnuget\u002Fv\u002FLlmTornado.Mcp?v=302&icon=nuget&label=LlmTornado.MCP)](https:\u002F\u002Fwww.nuget.org\u002Fpackages\u002FLlmTornado.Mcp)\n[![LlmTornado.A2A](https:\u002F\u002Fshields.io\u002Fnuget\u002Fv\u002FLlmTornado.A2A?v=302&icon=nuget&label=LlmTornado.A2A)](https:\u002F\u002Fwww.nuget.org\u002Fpackages\u002FLlmTornado.A2A)\n[![License:MIT](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLicense-MIT-34D058.svg)](https:\u002F\u002Fopensource.org\u002Flicense\u002Fmit)\n\nLLM Tornado is a .NET provider-agnostic SDK that empowers developers to build, orchestrate, and deploy AI agents and workflows. Whether you're building a simple chatbot or an autonomous coding agent, LLM Tornado provides the tools you need with unparalleled integration into the AI ecosystem.\n\n\u003Ca href=\"https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=h7yTai0cRtE\">\n \u003Cimg alt=\"LLM Tornado in .NET AI Community Standup by Microsoft\" width=\"768\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Flofcz_LLMTornado_readme_66cb1b9a7b86.png\" \u002F>\n\u003C\u002Fa>\n \n\u003C\u002Fdiv>\n\n## ✨ Features\n-  **Use Any Provider**: Built-in connectors to: [Alibaba](https:\u002F\u002Fmodelstudio.console.alibabacloud.com), [Anthropic](https:\u002F\u002Fdocs.anthropic.com\u002Fen\u002Fdocs\u002Fintro), [Azure](https:\u002F\u002Fazure.microsoft.com\u002Fen-us\u002Fproducts\u002Fai-services\u002Fopenai-service), [Blablador](https:\u002F\u002Fsdlaml.pages.jsc.fz-juelich.de\u002Fai\u002Fguides\u002Fblablador_api_access), [Cohere](https:\u002F\u002Fdocs.cohere.com\u002Fchangelog), [DeepInfra](https:\u002F\u002Fdeepinfra.com\u002Fdocs\u002F), [DeepSeek](https:\u002F\u002Fapi-docs.deepseek.com\u002F), [Google](https:\u002F\u002Fai.google.dev\u002Fgemini-api\u002Fdocs), [Groq](https:\u002F\u002Fconsole.groq.com\u002Fdocs\u002Foverview), [MiniMax](https:\u002F\u002Fplatform.minimax.io\u002Fdocs\u002Fguides\u002Fmodels-intro), [Mistral](https:\u002F\u002Fdocs.mistral.ai\u002Fgetting-started), [MoonshotAI](https:\u002F\u002Fplatform.moonshot.ai\u002Fdocs\u002Foverview), [OpenAI](https:\u002F\u002Fplatform.openai.com\u002Fdocs), [OpenRouter](https:\u002F\u002Fopenrouter.ai\u002Fdocs\u002Fquickstart), [Perplexity](https:\u002F\u002Fdocs.perplexity.ai\u002Fhome), [Requesty](https:\u002F\u002Fwww.requesty.ai), [Upstage](https:\u002F\u002Fwww.upstage.ai), [Voyage](https:\u002F\u002Fwww.voyageai.com\u002F), [xAI](https:\u002F\u002Fdocs.x.ai\u002Fdocs), [Z.ai](https:\u002F\u002Fdocs.z.ai\u002Fguides\u002Foverview\u002Fquick-start), and more. Connectors expose all niche\u002Funique features via strongly typed code and are up-to-date with the latest AI development. No dependencies on first-party SDKs. 
[Feature Matrix](https:\u002F\u002Fgithub.com\u002Flofcz\u002FLlmTornado\u002Fblob\u002Fmaster\u002FFeatureMatrix.md) tracks detailed endpoint support.\n- **First-class Local Deployments**: Run with [vLLM](https:\u002F\u002Fdocs.vllm.ai\u002Fen\u002Flatest), [Ollama](https:\u002F\u002Follama.com\u002F), or [LocalAI](https:\u002F\u002Flocalai.io\u002F) with integrated support for request transformations.\n- **Agents Orchestration**: [Coordinate specialist agents](https:\u002F\u002Fllmtornado.ai\u002Fagents\u002Fgetting-started) that can autonomously perform complex tasks with three core concepts: `Orchestrator` (graph), `Runner` (node), and `Advancer` (edge). Comes with [handoffs](https:\u002F\u002Fllmtornado.ai\u002Fagents\u002Fagent-orchestration\u002Fbasics), [parallel execution](https:\u002F\u002Fllmtornado.ai\u002Fagents\u002Fchat-runtime), [Mermaid](https:\u002F\u002Fmermaid.js.org) export, and [builder pattern](https:\u002F\u002Fllmtornado.ai\u002Fagents\u002Fagent-orchestration\u002Forchestration#using-the-builder) to keep it simple.\n- **Rapid Development**: Write pipelines once, execute with any Provider by changing the model's name. Connect your editor to [Context7](https:\u002F\u002Fcontext7.com\u002Flofcz\u002Fllmtornado) or [FSKB](https:\u002F\u002Fgithub.com\u002Flofcz\u002FLlmTornado\u002Ftree\u002Fmaster\u002Fsrc\u002FLlmTornado.FsKb\u002FLlmTornado.FsKb) to accelerate coding with instant access to vectorized documentation.\n- **Fully Multimodal**: Text, images, videos, documents, URLs, and audio inputs\u002Foutputs are supported.\n- **Cutting Edge Protocols:**\n  - [**MCP**](https:\u002F\u002Fllmtornado.ai\u002Fmpc\u002Fmpc): Connect agents to data sources, tools, and workflows via Model Context Protocol with `LlmTornado.Mcp`.\n  - [**A2A**](https:\u002F\u002Fllmtornado.ai\u002Fa2a\u002Fgetting-started): Enable seamless collaboration between AI agents across different platforms with `LlmTornado.A2A`.\n  - [**Skills**](https:\u002F\u002Fllmtornado.ai\u002Fllmtornado\u002Fanthropic-specific\u002Fskills): Dynamically load folders of instructions, scripts, and resources to improve performance on specialized tasks.\n- **Vector Databases**: Built-in connectors to [Chroma](https:\u002F\u002Fwww.trychroma.com), [PgVector](https:\u002F\u002Fgithub.com\u002Fpgvector\u002Fpgvector), [Pinecone](https:\u002F\u002Fwww.pinecone.io), [Faiss](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Ffaiss), and [QDrant](https:\u002F\u002Fqdrant.tech).\n- **Integrated**: Interoperability with [Microsoft.Extensions.AI](https:\u002F\u002Flearn.microsoft.com\u002Fen-us\u002Fdotnet\u002Fai\u002Fmicrosoft-extensions-ai) enables plugging Tornado in [Semantic Kernel](https:\u002F\u002Fgithub.com\u002Flofcz\u002FLlmTornado\u002Fblob\u002Fmaster\u002Fsrc\u002FLlmTornado.Demo\u002FMicrosoftExtensionsAiDemo.cs) applications with `LlmTornado.Microsoft.Extensions.AI`.\n- **Enterprise Ready**: [Guardrails](https:\u002F\u002Fllmtornado.ai\u002Fagents\u002Ftornado-agent\u002Fguardrails) framework, [preview and transform any request](https:\u002F\u002Fgithub.com\u002Flofcz\u002FLlmTornado\u002Fblob\u002F2fd75e9cdf551724d59d91aebbcb74caea8ae7b2\u002Fsrc\u002FLlmTornado.Demo\u002FChatDemo2.cs#L233-L256) before committing to it. [Open Telemetry](https:\u002F\u002Fopentelemetry.io) support. 
Stable APIs.\n\n\u003Cdiv align=\"center\">\n \u003Ch4>➡️ Get started in minutes – \u003Ca href=\"https:\u002F\u002Fllmtornado.ai\u002Fgetting-started\">Quickstart\u003C\u002Fa> ⬅️\u003C\u002Fh4>\n\u003C\u002Fdiv>\n\n## 🔥 News 2026\n- 26\u002F02 - [MiniMax](https:\u002F\u002Fwww.minimax.io) connector is implemented. A compaction endpoint is added and can be used alongside the compaction built into LLM Tornado. Started work on ACP.\n- 26\u002F01 - [Ivy Framework](https:\u002F\u002Fivy.app) uses `LlmTornado.Agents` to build their [AI Components](https:\u002F\u002Fgithub.com\u002FIvy-Interactive\u002FIvy-Examples\u002Fpull\u002F362). `\u002Focr` endpoint is implemented for Mistral. `\u002Ffiles` endpoint support is extended to all supported providers.\n\n## News 2025\n- 25\u002F12 - [Upstage](https:\u002F\u002Fwww.upstage.ai) connector is implemented. `\u002Fbatch` endpoint is implemented for OpenAI, Anthropic, and Google. `\u002Fvideos` endpoint now supports OpenAI\u002FSora.\n- 25\u002F11 - [Flowbite Blazor](https:\u002F\u002Fgithub.com\u002Fthemesberg\u002Fflowbite-blazor) uses LLM Tornado to build their [AI Chat](https:\u002F\u002Fflowbite-blazor.org\u002Fdocs\u002Fai\u002Fchat) WASM component. [Requesty](https:\u002F\u002Fwww.requesty.ai) connector is implemented. `\u002Ftokenize` endpoint is implemented for all Providers supporting the feature.\n- 25\u002F10 - LLM Tornado is featured in [dotInsights](https:\u002F\u002Fblog.jetbrains.com\u002Fdotnet\u002F2025\u002F10\u002F06\u002Fdotinsights-october-2025) by [JetBrains](https:\u002F\u002Fwww.jetbrains.com). [Microsoft](https:\u002F\u002Fwww.microsoft.com) uses LLM Tornado in [Generative AI for Beginners .NET](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FGenerative-AI-for-beginners-dotnet). Interoperability with [Microsoft.Extensions.AI](https:\u002F\u002Fgithub.com\u002Fdotnet\u002Fextensions) is launched. [Skills](https:\u002F\u002Fllmtornado.ai\u002Fllmtornado\u002Fanthropic-specific\u002Fskills) protocol is implemented.\n- 25\u002F09 - Maintainers [Matěj Štágl](https:\u002F\u002Fgithub.com\u002Flofcz) and [John Lomba](https:\u002F\u002Fgithub.com\u002FJohnny2x2) talk about LLM Tornado in [.NET AI Community Standup](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=h7yTai0cRtE). [PgVector](https:\u002F\u002Fllmtornado.ai\u002Fvectordatabases\u002Fpgvector) and [ChromaDb](https:\u002F\u002Fllmtornado.ai\u002Fvectordatabases\u002Fchromadb) connectors are implemented.\n- 25\u002F08 - [ProseFlow](https:\u002F\u002Flsxprime.github.io\u002Fproseflow-web) is built with LLM Tornado. [Sciobot](https:\u002F\u002Fsciobot.org) – an AI platform for educators built with LLM Tornado – is accepted into the [Cohere Labs Catalyst Grant Program](https:\u002F\u002Fcohere.com\u002Fresearch\u002Fgrants). [A2A](https:\u002F\u002Fllmtornado.ai\u002Fa2a\u002Fgetting-started) protocol is implemented.\n- 25\u002F07 - Contributor [Shaltiel Shmidman](https:\u002F\u002Fgithub.com\u002Fshaltielshmid) talks about LLM Tornado in [ASP.NET Community Standup](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=RaZc-2tfh9k). 
New connectors to [Z.ai](https:\u002F\u002Fdocs.z.ai\u002Fguides\u002Foverview\u002Fquick-start), [Alibaba](https:\u002F\u002Fmodelstudio.console.alibabacloud.com), and [Blablador](https:\u002F\u002Fsdlaml.pages.jsc.fz-juelich.de\u002Fai\u002Fguides\u002Fblablador_api_access) are implemented.\n- 25\u002F06 - [C# delegates as Agent tools](https:\u002F\u002Fllmtornado.ai\u002Fagents\u002Ftornado-agent\u002Ftools\u002Ffunction-tools) system is added, freeing applications from authoring JSON schema manually. [MCP](https:\u002F\u002Fllmtornado.ai\u002Fmpc\u002Fmpc) protocol is implemented.\n- 25\u002F05 - [Chat to Responses](https:\u002F\u002Fllmtornado.ai\u002Fagents\u002Ftornado-agent\u002Ftools\u002Fresponse-tools) system is added, allowing usage of `\u002Fresponses` exclusive models from `\u002Fchat` endpoint.\n- 25\u002F04 - New connectors to [DeepInfra](https:\u002F\u002Fdeepinfra.com\u002Fdocs\u002F), [DeepSeek](https:\u002F\u002Fapi-docs.deepseek.com\u002F), and [Perplexity](https:\u002F\u002Fdocs.perplexity.ai\u002Fhome) are added.\n- 25\u002F03 - [Assistants](https:\u002F\u002Fllmtornado.ai\u002Fllmtornado\u002Fassistants\u002Fbasics) are implemented.\n- 25\u002F02 - New connectors to [OpenRouter](https:\u002F\u002Fopenrouter.ai\u002Fdocs\u002Fquickstart), [Voyage](https:\u002F\u002Fwww.voyageai.com\u002F), and [xAI](https:\u002F\u002Fdocs.x.ai\u002Fdocs) are added.\n- 25\u002F01 - Strict JSON mode is implemented, [Groq](https:\u002F\u002Fconsole.groq.com\u002Fdocs\u002Foverview) and [Mistral](https:\u002F\u002Fdocs.mistral.ai\u002Fgetting-started) connectors are added.\n\n## ⭐ Samples\n- [Chat with your documents](https:\u002F\u002Fgithub.com\u002Flofcz\u002FLlmTornado\u002Fblob\u002F61d2a4732c88c45d4a8c053204ecdef807c34652\u002FLlmTornado.Demo\u002FChatDemo.cs#L722-L757)\n- [Make multiple-speaker podcasts](https:\u002F\u002Fgithub.com\u002Flofcz\u002FLlmTornado\u002Fblob\u002Fd1042281082ea5ff1de9dcb438a847d4cd9c416b\u002FLlmTornado.Demo\u002FChatDemo2.cs#L332-L374)\n- [Voice call with AI using your microphone](https:\u002F\u002Fgithub.com\u002Flofcz\u002FLlmTornado\u002Fblob\u002F61d2a4732c88c45d4a8c053204ecdef807c34652\u002FLlmTornado.Demo\u002FChatDemo.cs#L905-L968)\n- [Orchestrate Assistants](https:\u002F\u002Fgithub.com\u002Flofcz\u002FLlmTornado\u002Fblob\u002F61d2a4732c88c45d4a8c053204ecdef807c34652\u002FLlmTornado.Demo\u002FThreadsDemo.cs#L331-L429)\n- [Generate images](https:\u002F\u002Fgithub.com\u002Flofcz\u002FLlmTornado\u002Fblob\u002F61d2a4732c88c45d4a8c053204ecdef807c34652\u002FLlmTornado.Demo\u002FImagesDemo.cs#L10-L13)\n- [Summarize a video (local file \u002F YouTube)](https:\u002F\u002Fgithub.com\u002Flofcz\u002FLlmTornado\u002Fblob\u002Fcfd47f915584728d9a2365fc9d38d158673da68a\u002FLlmTornado.Demo\u002FChatDemo2.cs#L119)\n- [Turn text & images into high quality embeddings](https:\u002F\u002Fgithub.com\u002Flofcz\u002FLlmTornado\u002Fblob\u002F61d2a4732c88c45d4a8c053204ecdef807c34652\u002FLlmTornado.Demo\u002FEmbeddingDemo.cs#L50-L75)\n- [Transcribe audio in real time](https:\u002F\u002Fgithub.com\u002Flofcz\u002FLlmTornado\u002Fblob\u002Fe592a2fc0a37dbd0e754dac7b1655703367369df\u002FLlmTornado.Demo\u002FAudioDemo.cs#L29)\n\n## ⚡Getting Started\n\nInstall LLM Tornado via NuGet:\n\n```bash\ndotnet add package LlmTornado\n```\n\nOptional addons:\n\n```bash\ndotnet add package LlmTornado.Agents # Agentic framework, higher-level abstractions\ndotnet add package LlmTornado.Mcp # Model Context Protocol (MCP) integration\ndotnet add package LlmTornado.A2A # Agent2Agent 
(A2A) integration\ndotnet add package LlmTornado.Microsoft.Extensions.AI # Semantic Kernel interoperability\ndotnet add package LlmTornado.Contrib # productivity, quality of life enhancements\n```\n\n## 🪄 Quick Inference\n\nInferencing across multiple providers is as easy as changing the `ChatModel` argument. A Tornado instance can be constructed with multiple API keys; the correct key is then selected automatically based on the model:\n\n```csharp\nTornadoApi api = new TornadoApi([\n    \u002F\u002F note: delete lines with providers you won't be using\n    new (LLmProviders.OpenAi, \"OPEN_AI_KEY\"),\n    new (LLmProviders.Anthropic, \"ANTHROPIC_KEY\"),\n    new (LLmProviders.Cohere, \"COHERE_KEY\"),\n    new (LLmProviders.Google, \"GOOGLE_KEY\"),\n    new (LLmProviders.Groq, \"GROQ_KEY\"),\n    new (LLmProviders.DeepSeek, \"DEEP_SEEK_KEY\"),\n    new (LLmProviders.Mistral, \"MISTRAL_KEY\"),\n    new (LLmProviders.XAi, \"XAI_KEY\"),\n    new (LLmProviders.Perplexity, \"PERPLEXITY_KEY\"),\n    new (LLmProviders.Voyage, \"VOYAGE_KEY\"),\n    new (LLmProviders.DeepInfra, \"DEEP_INFRA_KEY\"),\n    new (LLmProviders.OpenRouter, \"OPEN_ROUTER_KEY\")\n]);\n\n\u002F\u002F this sample iterates over several models, gives each the same task, and prints the results.\nList\u003CChatModel> models = [\n    ChatModel.OpenAi.O3.Mini, ChatModel.Anthropic.Claude37.Sonnet,\n    ChatModel.Cohere.Command.RPlus, ChatModel.Google.Gemini.Gemini2Flash001,\n    ChatModel.Groq.Meta.Llama370B, ChatModel.DeepSeek.Models.Chat,\n    ChatModel.Mistral.Premier.MistralLarge, ChatModel.XAi.Grok.Grok2241212,\n    ChatModel.Perplexity.Sonar.Default\n];\n\nforeach (ChatModel model in models)\n{\n    string? response = await api.Chat.CreateConversation(model)\n        .AppendSystemMessage(\"You are a fortune teller.\")\n        .AppendUserInput(\"What will my future bring?\")\n        .GetResponse();\n\n    Console.WriteLine(response);\n}\n```\n\n💡 Instead of passing a strongly typed model, you can pass a string: `await api.Chat.CreateConversation(\"gpt-5-mini\")`; Tornado will automatically resolve the provider.\n\n## ❄️ Vendor Extensions\n\nTornado has a powerful concept of `VendorExtensions`, which can be applied to various endpoints and are strongly typed. Many Providers offer unique\u002Fniche APIs, often enabling use cases otherwise unavailable. 
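For orientation, here is the plain baseline without any extensions; this is a minimal sketch that reuses only the `ChatRequest` and `Conversation` surface shown elsewhere in this README, and assumes an initialized `TornadoApi api` as in Quick Inference:\n\n```cs\n\u002F\u002F Baseline sketch: a provider-agnostic request with no vendor-specific settings.\nConversation plain = api.Chat.CreateConversation(new ChatRequest\n{\n    Model = ChatModel.Anthropic.Claude37.Sonnet\n});\n\nplain.AppendUserInput(\"Explain how to solve differential equations.\");\nConsole.WriteLine(await plain.GetResponse());\n```\n\n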
For example, let's set a reasoning budget for Anthropic's Claude 3.7:\n\n```cs\npublic static async Task AnthropicSonnet37Thinking()\n{\n    Conversation chat = Program.Connect(LLmProviders.Anthropic).Chat.CreateConversation(new ChatRequest\n    {\n        Model = ChatModel.Anthropic.Claude37.Sonnet,\n        VendorExtensions = new ChatRequestVendorExtensions(new ChatRequestVendorAnthropicExtensions\n        {\n            Thinking = new AnthropicThinkingSettings\n            {\n                BudgetTokens = 2_000,\n                Enabled = true\n            }\n        })\n    });\n    \n    chat.AppendUserInput(\"Explain how to solve differential equations.\");\n\n    ChatRichResponse blocks = await chat.GetResponseRich();\n\n    if (blocks.Blocks is not null)\n    {\n        foreach (ChatRichResponseBlock reasoning in blocks.Blocks.Where(x => x.Type is ChatRichResponseBlockTypes.Reasoning))\n        {\n            Console.ForegroundColor = ConsoleColor.DarkGray;\n            Console.WriteLine(reasoning.Reasoning?.Content);\n            Console.ResetColor();\n        }\n\n        foreach (ChatRichResponseBlock reasoning in blocks.Blocks.Where(x => x.Type is ChatRichResponseBlockTypes.Message))\n        {\n            Console.WriteLine(reasoning.Message);\n        }\n    }\n}\n```\n\n## 🔮 Self-Hosted\u002FCustom Providers\n\nInstead of consuming commercial APIs, one can easily roll their own inference servers with [a plethora](https:\u002F\u002Fgithub.com\u002Fjanhq\u002Fawesome-local-ai) of available tools. Here is a simple demo of streaming a response from Ollama, but the same approach can be used for any custom provider:\n\n```cs\npublic static async Task OllamaStreaming()\n{\n    TornadoApi api = new TornadoApi(new Uri(\"http:\u002F\u002Flocalhost:11434\")); \u002F\u002F default Ollama port, API key can be passed in the second argument if needed\n    \n    await api.Chat.CreateConversation(new ChatModel(\"falcon3:1b\")) \u002F\u002F \u003C-- replace with your model\n        .AppendUserInput(\"Why is the sky blue?\")\n        .StreamResponse(Console.Write);\n}\n```\n\nIf you need more control over requests, for example, custom headers, you can create an instance of a built-in Provider. This is useful for custom deployments like Amazon Bedrock, Vertex AI, etc.\n\n```cs\nTornadoApi tornadoApi = new TornadoApi(new AnthropicEndpointProvider\n{\n    Auth = new ProviderAuthentication(\"ANTHROPIC_API_KEY\"),\n    \u002F\u002F {0} = endpoint, {1} = action, {2} = model's name\n    UrlResolver = (endpoint, url, ctx) => \"https:\u002F\u002Fapi.anthropic.com\u002Fv1\u002F{0}{1}\",\n    RequestResolver = (request, data, streaming) =>\n    {\n        \u002F\u002F by default, providing a custom request resolver omits beta headers\n        \u002F\u002F request is HttpRequestMessage, data contains the payload\n    },\n    RequestSerializer = (data, ctx) =>\n    {\n       \u002F\u002F data is JObject, which can be modified before\n       \u002F\u002F being serialized into a string.\n    }\n});\n```\n\nhttps:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002Fde62f0fe-93e0-448c-81d0-8ab7447ad780\n\n## 🔎 Advanced Inference\n\n### Streaming\n\nTornado offers three levels of abstraction; each higher level exposes more detail at the cost of more complexity. 
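As a hedged sketch of what each level returns, using only calls demonstrated in this README (the safe variant's result is left as `var`, since this README describes its behavior but not its type name):\n\n```cs\n\u002F\u002F Illustrative only: each call below performs its own request.\nConversation chat = api.Chat.CreateConversation(ChatModel.Anthropic.Claude3.Sonnet)\n    .AppendUserInput(\"What will my future bring?\");\n\n\u002F\u002F Level 1: plaintext.\nstring? text = await chat.GetResponse();\n\n\u002F\u002F Level 2: rich blocks with tools, modalities, and metadata such as usage.\nChatRichResponse rich = await chat.GetResponseRich();\n\n\u002F\u002F Level 3: level 2 semantics, guaranteed not to throw on network-level errors.\nvar safe = await chat.GetResponseRichSafe();\n```\n\n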
The simple use cases where only plaintext is needed can be represented in a terse format:\n\n```cs\nawait api.Chat.CreateConversation(ChatModel.Anthropic.Claude3.Sonnet)\n    .AppendSystemMessage(\"You are a fortune teller.\")\n    .AppendUserInput(\"What will my future bring?\")\n    .StreamResponse(Console.Write);\n```  \n  \nThe levels of abstraction are:\n- `Response` (`string` for chat, `float[]` for embeddings, etc.)\n- `ResponseRich` (tools, modalities, metadata such as usage)\n- `ResponseRichSafe` (same as level 2, guaranteed not to throw on network level, for example, if the provider returns an internal error or doesn't respond at all)\n\n### Streaming with Rich content (tools, images, audio..)\n\nWhen plaintext is insufficient, switch to `StreamResponseRich` or `GetResponseRich()` APIs. Tools requested by the model can be resolved later and never returned to the model. This is useful in scenarios where we use the tools without intending to continue the conversation:\n\n```cs\n\u002F\u002FAsk the model to generate two images, and stream the result:\npublic static async Task GoogleStreamImages()\n{\n    Conversation chat = api.Chat.CreateConversation(new ChatRequest\n    {\n        Model = ChatModel.Google.GeminiExperimental.Gemini2FlashImageGeneration,\n        Modalities = [ ChatModelModalities.Text, ChatModelModalities.Image ]\n    });\n    \n    chat.AppendUserInput([\n        new ChatMessagePart(\"Generate two images: a lion and a squirrel\")\n    ]);\n    \n    await chat.StreamResponseRich(new ChatStreamEventHandler\n    {\n        MessagePartHandler = async (part) =>\n        {\n            if (part.Text is not null)\n            {\n                Console.Write(part.Text);\n                return;\n            }\n\n            if (part.Image is not null)\n            {\n                \u002F\u002F In our tests this executes Chafa to turn the raw base64 data into Sixels\n                await DisplayImage(part.Image.Url);\n            }\n        },\n        BlockFinishedHandler = (block) =>\n        {\n            Console.WriteLine();\n            return ValueTask.CompletedTask;\n        },\n        OnUsageReceived = (usage) =>\n        {\n            Console.WriteLine();\n            Console.WriteLine(usage);\n            return ValueTask.CompletedTask;\n        }\n    });\n}\n```\n\n### Tools with immediate resolve\n\nTools requested by the model can be resolved and the results returned immediately. 
This has the benefit of automatically continuing the conversation:\n\n```cs\nConversation chat = api.Chat.CreateConversation(new ChatRequest\n{\n    Model = ChatModel.OpenAi.Gpt4.O,\n    Tools =\n    [\n        new Tool(new ToolFunction(\"get_weather\", \"gets the current weather\", new\n        {\n            type = \"object\",\n            properties = new\n            {\n                location = new\n                {\n                    type = \"string\",\n                    description = \"The location for which the weather information is required.\"\n                }\n            },\n            required = new List\u003Cstring> { \"location\" }\n        }))\n    ]\n})\n.AppendSystemMessage(\"You are a helpful assistant\")\n.AppendUserInput(\"What is the weather like today in Prague?\");\n\nChatStreamEventHandler handler = new ChatStreamEventHandler\n{\n  MessageTokenHandler = (x) =>\n  {\n      Console.Write(x);\n      return Task.CompletedTask;\n  },\n  FunctionCallHandler = (calls) =>\n  {\n      calls.ForEach(x => x.Result = new FunctionResult(x, \"A mild rain is expected around noon.\", null));\n      return Task.CompletedTask;\n  },\n  AfterFunctionCallsResolvedHandler = async (results, handler) => { await chat.StreamResponseRich(handler); }\n};\n\nawait chat.StreamResponseRich(handler);\n```\n\n### Tools with deferred resolve\n\nInstead of resolving the tool call, we can postpone\u002Fquit the conversation. This is useful for extractive tasks, where we care only for the tool call:\n\n```cs\nConversation chat = api.Chat.CreateConversation(new ChatRequest\n{\n    Model = ChatModel.OpenAi.Gpt4.Turbo,\n    Tools = new List\u003CTool>\n    {\n        new Tool\n        {\n            Function = new ToolFunction(\"get_weather\", \"gets the current weather\")\n        }\n    },\n    ToolChoice = new OutboundToolChoice(OutboundToolChoiceModes.Required)\n});\n\nchat.AppendUserInput(\"Who are you?\"); \u002F\u002F user asks something unrelated, but we force the model to use the tool\nChatRichResponse response = await chat.GetResponseRich(); \u002F\u002F the response contains one block of type Function\n```\n\n_`GetResponseRichSafe()` API is also available, which is guaranteed not to throw on the network level. The response is wrapped in a network-level wrapper, containing additional information. For production use cases, either use `try {} catch {}` on all the HTTP request-producing Tornado APIs, or use the safe APIs._\n\n## 🌐 MCP\nTo use the Model Context Protocol, install the `LlmTornado.Mcp` adapter. After that, new interop methods will become available on the `ModelContextProtocol` types. The following example uses the `GetForecast` tool defined on an [example MCP server](https:\u002F\u002Fmodelcontextprotocol.io\u002Fquickstart\u002Fserver#c%23):\n```cs\n[McpServerToolType]\npublic sealed class WeatherTools\n{\n    [McpServerTool, Description(\"Get weather forecast for a location.\")]\n    public static async Task\u003Cstring> GetForecast(\n        HttpClient client,\n        [Description(\"Latitude of the location.\")] double latitude,\n        [Description(\"Longitude of the location.\")] double longitude)\n    {\n        var pointUrl = string.Create(CultureInfo.InvariantCulture, $\"\u002Fpoints\u002F{latitude},{longitude}\");\n        using var jsonDocument = await client.ReadJsonDocumentAsync(pointUrl);\n        var forecastUrl = jsonDocument.RootElement.GetProperty(\"properties\").GetProperty(\"forecast\").GetString()\n            ?? 
throw new Exception($\"No forecast URL provided by {client.BaseAddress}points\u002F{latitude},{longitude}\");\n\n        using var forecastDocument = await client.ReadJsonDocumentAsync(forecastUrl);\n        var periods = forecastDocument.RootElement.GetProperty(\"properties\").GetProperty(\"periods\").EnumerateArray();\n\n        return string.Join(\"\\n---\\n\", periods.Select(period => $\"\"\"\n                {period.GetProperty(\"name\").GetString()}\n                Temperature: {period.GetProperty(\"temperature\").GetInt32()}°F\n                Wind: {period.GetProperty(\"windSpeed\").GetString()} {period.GetProperty(\"windDirection\").GetString()}\n                Forecast: {period.GetProperty(\"detailedForecast\").GetString()}\n                \"\"\"));\n    }\n}\n```\n\nThe following is done by the client:\n```cs\n\u002F\u002F your clientTransport, for example StdioClientTransport\nawait using IMcpClient mcpClient = await McpClientFactory.CreateAsync(clientTransport);\n\n\u002F\u002F 1. fetch tools\nList\u003CTool> tools = await mcpClient.ListTornadoToolsAsync();\n\n\u002F\u002F 2. create a conversation, pass available tools\nTornadoApi api = new TornadoApi(LLmProviders.OpenAi, apiKeys.OpenAi);\nConversation conversation = api.Chat.CreateConversation(new ChatRequest\n{\n    Model = ChatModel.OpenAi.Gpt41.V41,\n    Tools = tools,\n    \u002F\u002F force any of the available tools to be used (use new OutboundToolChoice(\"toolName\") to specify which if needed)\n    ToolChoice = OutboundToolChoice.Required\n});\n\n\u002F\u002F 3. let the model call the tool and infer arguments\nawait conversation\n    .AddSystemMessage(\"You are a helpful assistant\")\n    .AddUserMessage(\"What is the weather like in Dallas?\")\n    .GetResponseRich(async calls =>\n    {\n        foreach (FunctionCall call in calls)\n        {\n            \u002F\u002F retrieve arguments inferred by the model\n            double latitude = call.GetOrDefault\u003Cdouble>(\"latitude\");\n            double longitude = call.GetOrDefault\u003Cdouble>(\"longitude\");\n            \n            \u002F\u002F call the tool on the MCP server, pass args\n            await call.ResolveRemote(new\n            {\n                latitude = latitude,\n                longitude = longitude\n            });\n\n            \u002F\u002F extract the tool result and pass it back to the model\n            if (call.Result?.RemoteContent is McpContent mcpContent)\n            {\n                foreach (IMcpContentBlock block in mcpContent.McpContentBlocks)\n                {\n                    if (block is McpContentBlockText textBlock)\n                    {\n                        call.Result.Content = textBlock.Text;\n                    }\n                }\n            }\n        }\n    });\n\n\u002F\u002F stop forcing the client to call the tool\nconversation.RequestParameters.ToolChoice = null;\n\n\u002F\u002F 4. stream final response\nawait conversation.StreamResponse(Console.Write);\n```\n\nA complete example is available here: [client](https:\u002F\u002Fgithub.com\u002Flofcz\u002FLlmTornado\u002Fblob\u002Fmaster\u002Fsrc\u002FLlmTornado.Mcp.Sample.Server\u002FWeatherTools.cs), [server](https:\u002F\u002Fgithub.com\u002Flofcz\u002FLlmTornado\u002Fblob\u002Fmaster\u002Fsrc\u002FLlmTornado.Mcp.Sample\u002FProgram.cs).\n\n## 🧰 Toolkit\n\nTornado includes powerful abstractions in the `LlmTornado.Toolkit` package, allowing rapid development of applications, while avoiding many design pitfalls. 
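The toolkit ships as its own addon package; a minimal install sketch, assuming the NuGet package id matches the package name above:\n\n```bash\n# assumed package id, mirroring the addon pattern from Getting Started\ndotnet add package LlmTornado.Toolkit\n```\n\n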
Scalability and tuning-friendly code design are at the core of these abstractions.\n\n### ToolkitChat\n\n`ToolkitChat` is a primitive for graph-based workflows, where edges move data and nodes execute functions. ToolkitChat supports streaming, rich responses, and chaining tool calls. Tool calls are provided via `ChatFunction` or `ChatPlugin` (an envelope with multiple tools). Many overloads accept a primary and a secondary model acting as a backup; this zig-zag strategy overcomes temporary downtime in APIs better than simply retrying the same model. All tool calls are strongly typed and `strict` by default. For providers where a strict JSON schema is not supported (Anthropic, for example), a `{` prefill is used as a fallback. A call can be marked as non-strict by simply changing a parameter.\n\n```cs\nclass DemoAggregatedItem\n{\n    public string Name { get; set; }\n    public string KnownName { get; set; }\n    public int Quantity { get; set; }\n}\n\nstring sysPrompt = \"aggregate items by type\";\nstring userPrompt = \"three apples, one cherry, two apples, one orange, one orange\";\n\nawait ToolkitChat.GetSingleResponse(Program.Connect(), ChatModel.Google.Gemini.Gemini25Flash, ChatModel.OpenAi.Gpt41.V41Mini, sysPrompt, new ChatFunction([\n    new ToolParam(\"items\", new ToolParamList(\"aggregated items\", [\n        new ToolParam(\"name\", \"name of the item\", ToolParamAtomicTypes.String),\n        new ToolParam(\"quantity\", \"aggregated quantity\", ToolParamAtomicTypes.Int),\n        new ToolParam(\"known_name\", new ToolParamEnum(\"known name of the item\", [ \"apple\", \"cherry\", \"orange\", \"other\" ]))\n    ]))\n], async (args, ctx) =>\n{\n    if (!args.ParamTryGet(\"items\", out List\u003CDemoAggregatedItem>? items) || items is null)\n    {\n        return new ChatFunctionCallResult(ChatFunctionCallResultParameterErrors.MissingRequiredParameter, \"items\");\n    }\n    \n    Console.WriteLine(\"Aggregated items:\");\n\n    foreach (DemoAggregatedItem item in items)\n    {\n        Console.WriteLine($\"{item.Name}: {item.Quantity}\");\n    }\n    \n    return new ChatFunctionCallResult();\n}), userPrompt); \u002F\u002F temp defaults to 0, output length to 8k\n\n\u002F*\nAggregated items:\napple: 5\ncherry: 1\norange: 2\n*\u002F\n```\n\n## 👉 Why Tornado?\n\n- 100,000+ installs on [NuGet](https:\u002F\u002Fwww.nuget.org\u002Fpackages\u002FLlmTornado).\n- Used in [award-winning](https:\u002F\u002Fwww-aiawards-cz.translate.goog\u002F?_x_tr_sl=cs&_x_tr_tl=en&_x_tr_hl=cs) commercial projects, processing > 100B tokens monthly.\n- Covered by 500+ tests.\n- Great performance.\n- The license will never change.\n\n## 📢 Built With Tornado\n- [ScioBot](https:\u002F\u002Fsciobot.org\u002F) - AI For Educators, 100k+ users.\n- [ProseFlow](https:\u002F\u002Fgithub.com\u002FLSXPrime\u002FProseFlow) - Your universal AI text processor, powered by local and cloud LLMs. Edit, refactor, and transform text in any application on Windows, macOS, and Linux.\n- [NotT3Chat](https:\u002F\u002Fgithub.com\u002Fshaltielshmid\u002FNotT3Chat) - The C# Answer to the T3 Stack.\n- [ClaudeCodeProxy](https:\u002F\u002Fgithub.com\u002Fsalty-flower\u002FClaudeCodeProxy) - Provider multiplexing proxy.\n- [Semantic Search](https:\u002F\u002Fgithub.com\u002Fprimaryobjects\u002Fsemantic-search) - AI semantic search where a query is matched by context and meaning.\n\n_Have you built something with Tornado? 
Let us know about it in the issues to get a spotlight!_\n\n## 🤝 Partners\n\n### Sponsored by\n\n\u003Ca href=\"https:\u002F\u002Fwww.scio.cz\u002Fprace-u-nas\" target=\"_blank\">\n    \u003Cfigure>\n        \u003Cimg alt=\"Scio\" width=\"300\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Flofcz_LLMTornado_readme_a5bcd079d0f1.png\" \u002F>\n    \u003C\u002Ffigure>\n\u003C\u002Fa>\n\n### Powered by\n[![JetBrains logo.](https:\u002F\u002Fresources.jetbrains.com\u002Fstorage\u002Fproducts\u002Fcompany\u002Fbrand\u002Flogos\u002Fjetbrains.svg)](https:\u002F\u002Fjb.gg\u002FOpenSource)\n\n## 📚 Contributing\n\nPRs are welcome! We are accepting new Provider implementations, contributions towards a 100% green [Feature Matrix](https:\u002F\u002Fgithub.com\u002Flofcz\u002FLlmTornado\u002Fblob\u002Fmaster\u002FFeatureMatrix.md), and, after public discussion, new abstractions.\n\n## Star History\n\n[![Star History Chart](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Flofcz_LLMTornado_readme_8cf0bfdf6448.png)](https:\u002F\u002Fwww.star-history.com\u002F#lofcz\u002Fllmtornado&type=date&legend=top-left)\n\n## License\n\nThis library is licensed under the [MIT](https:\u002F\u002Fgithub.com\u002Flofcz\u002FLlmTornado\u002Fblob\u002Fmaster\u002FLICENSE) license. 💜\n","\u003Cdiv align=\"center\">\n\n\u003Cimg width=\"600\" alt=\"LLM Tornado\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Flofcz_LLMTornado_readme_f58356788d0c.png\" \u002F>\n\n# LLM Tornado\n\n**只需一个工具包和内置的30多家API提供商及向量数据库连接器，即可在几分钟内构建AI智能体和工作流。**\n\n**[官方网站](https:\u002F\u002Fllmtornado.ai)**\n\n[![LlmTornado](https:\u002F\u002Fshields.io\u002Fnuget\u002Fv\u002FLlmTornado?v=302&icon=nuget&label=LlmTornado)](https:\u002F\u002Fwww.nuget.org\u002Fpackages\u002FLlmTornado)\n[![LlmTornado.Agents](https:\u002F\u002Fshields.io\u002Fnuget\u002Fv\u002FLlmTornado.Agents?v=303&icon=nuget&label=LlmTornado.Agents)](https:\u002F\u002Fwww.nuget.org\u002Fpackages\u002FLlmTornado.Agents)\n[![LlmTornado.Mcp](https:\u002F\u002Fshields.io\u002Fnuget\u002Fv\u002FLlmTornado.Mcp?v=302&icon=nuget&label=LlmTornado.MCP)](https:\u002F\u002Fwww.nuget.org\u002Fpackages\u002FLlmTornado.Mcp)\n[![LlmTornado.A2A](https:\u002F\u002Fshields.io\u002Fnuget\u002Fv\u002FLlmTornado.A2A?v=302&icon=nuget&label=LlmTornado.A2A)](https:\u002F\u002Fwww.nuget.org\u002Fpackages\u002FLlmTornado.A2A)\n[![License:MIT](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLicense-MIT-34D058.svg)](https:\u002F\u002Fopensource.org\u002Flicense\u002Fmit)\n\nLLM Tornado是一款与供应商无关的.NET SDK，可帮助开发者构建、编排并部署AI智能体和工作流。无论您是在构建简单的聊天机器人，还是自主编码智能体，LLM Tornado都能为您提供所需的工具，并与AI生态系统实现无与伦比的集成。\n\n\u003Ca href=\"https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=h7yTai0cRtE\">\n \u003Cimg alt=\"LLM Tornado在微软.NET AI社区站会上的介绍\" width=\"768\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Flofcz_LLMTornado_readme_66cb1b9a7b86.png\" \u002F>\n\u003C\u002Fa>\n \n\u003C\u002Fdiv>\n\n## ✨ 特性\n-  
**使用任意提供商**：内置连接器支持：[阿里巴巴](https:\u002F\u002Fmodelstudio.console.alibabacloud.com)、[Anthropic](https:\u002F\u002Fdocs.anthropic.com\u002Fen\u002Fdocs\u002Fintro)、[Azure](https:\u002F\u002Fazure.microsoft.com\u002Fen-us\u002Fproducts\u002Fai-services\u002Fopenai-service)、[Blablador](https:\u002F\u002Fsdlaml.pages.jsc.fz-juelich.de\u002Fai\u002Fguides\u002Fblablador_api_access)、[Cohere](https:\u002F\u002Fdocs.cohere.com\u002Fchangelog)、[DeepInfra](https:\u002F\u002Fdeepinfra.com\u002Fdocs\u002F)、[DeepSeek](https:\u002F\u002Fapi-docs.deepseek.com\u002F)、[Google](https:\u002F\u002Fai.google.dev\u002Fgemini-api\u002Fdocs)、[Groq](https:\u002F\u002Fconsole.groq.com\u002Fdocs\u002Foverview)、[MiniMax](https:\u002F\u002Fplatform.minimax.io\u002Fdocs\u002Fguides\u002Fmodels-intro)、[Mistral](https:\u002F\u002Fdocs.mistral.ai\u002Fgetting-started)、[MoonshotAI](https:\u002F\u002Fplatform.moonshot.ai\u002Fdocs\u002Foverview)、[OpenAI](https:\u002F\u002Fplatform.openai.com\u002Fdocs)、[OpenRouter](https:\u002F\u002Fopenrouter.ai\u002Fdocs\u002Fquickstart)、[Perplexity](https:\u002F\u002Fdocs.perplexity.ai\u002Fhome)、[Requesty](https:\u002F\u002Fwww.requesty.ai)、[Upstage](https:\u002F\u002Fwww.upstage.ai)、[Voyage](https:\u002F\u002Fwww.voyageai.com\u002F)、[xAI](https:\u002F\u002Fdocs.x.ai\u002Fdocs)、[Z.ai](https:\u002F\u002Fdocs.z.ai\u002Fguides\u002Foverview\u002Fquick-start)等。这些连接器通过强类型代码暴露所有细分和独特的功能，并始终紧跟最新的AI发展。无需依赖任何第一方SDK。[特性矩阵](https:\u002F\u002Fgithub.com\u002Flofcz\u002FLlmTornado\u002Fblob\u002Fmaster\u002FFeatureMatrix.md)详细记录了对各个端点的支持情况。\n- **一流的本地部署**：可通过集成的请求转换支持，与[vLLM](https:\u002F\u002Fdocs.vllm.ai\u002Fen\u002Flatest)、[Ollama](https:\u002F\u002Follama.com\u002F)或[LocalAI](https:\u002F\u002Flocalai.io\u002F)一起运行。\n- **智能体编排**：借助三个核心概念——`Orchestrator`（图）、`Runner`（节点）和`Advancer`（边）——[协调专业智能体](https:\u002F\u002Fllmtornado.ai\u002Fagents\u002Fgetting-started)，使其能够自主执行复杂任务。提供[交接机制](https:\u002F\u002Fllmtornado.ai\u002Fagents\u002Fagent-orchestration\u002Fbasics)、[并行执行](https:\u002F\u002Fllmtornado.ai\u002Fagents\u002Fchat-runtime)、[Mermaid](https:\u002F\u002Fmermaid.js.org)导出以及[构建者模式](https:\u002F\u002Fllmtornado.ai\u002Fagents\u002Fagent-orchestration\u002Forchestration#using-the-builder)，让操作更加简单。\n- **快速开发**：只需编写一次管道，通过更改模型名称即可在任何提供商上执行。将您的编辑器连接到[Context7](https:\u002F\u002Fcontext7.com\u002Flofcz\u002Fllmtornado)或[FSKB](https:\u002F\u002Fgithub.com\u002Flofcz\u002FLlmTornado\u002Ftree\u002Fmaster\u002Fsrc\u002FLlmTornado.FsKb\u002FLlmTornado.FsKb)，即可通过即时访问向量化文档来加速编码。\n- **完全多模态**：支持文本、图像、视频、文档、URL和音频的输入与输出。\n- **前沿协议：**\n  - [**MCP**](https:\u002F\u002Fllmtornado.ai\u002Fmpc\u002Fmpc)：通过`LlmTornado.Mcp`，利用模型上下文协议将智能体连接到数据源、工具和工作流。\n  - [**A2A**](https:\u002F\u002Fllmtornado.ai\u002Fa2a\u002Fgetting-started)：借助`LlmTornado.A2A`，实现跨不同平台的AI智能体之间的无缝协作。\n  - [**技能**](https:\u002F\u002Fllmtornado.ai\u002Fllmtornado\u002Fanthropic-specific\u002Fskills)：动态加载指令、脚本和资源文件夹，以提升特定任务的性能。\n- **向量数据库**：内置连接器支持[Chroma](https:\u002F\u002Fwww.trychroma.com)、[PgVector](https:\u002F\u002Fgithub.com\u002Fpgvector\u002Fpgvector)、[Pinecone](https:\u002F\u002Fwww.pinecone.io)、[Faiss](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Ffaiss)和[QDrant](https:\u002F\u002Fqdrant.tech)。\n- **集成性**：与[Microsoft.Extensions.AI](https:\u002F\u002Flearn.microsoft.com\u002Fen-us\u002Fdotnet\u002Fai\u002Fmicrosoft-extensions-ai)的互操作性使Tornado能够通过`LlmTornado.Microsoft.Extensions.AI`插入到[Semantic 
Kernel](https:\u002F\u002Fgithub.com\u002Flofcz\u002FLlmTornado\u002Fblob\u002Fmaster\u002Fsrc\u002FLlmTornado.Demo\u002FMicrosoftExtensionsAiDemo.cs)应用中。\n- **企业级就绪**：配备[护栏](https:\u002F\u002Fllmtornado.ai\u002Fagents\u002Ftornado-agent\u002Fguardrails)框架，在提交请求之前可对其进行预览和转换。支持[Open Telemetry](https:\u002F\u002Fopentelemetry.io)。稳定的API。\n\n\u003Cdiv align=\"center\">\n \u003Ch4>➡️ 几分钟内开始使用 – \u003Ca href=\"https:\u002F\u002Fllmtornado.ai\u002Fgetting-started\">快速入门\u003C\u002Fa> ⬅️\u003C\u002Fh4>\n\u003C\u002Fdiv>\n\n## 🔥 2026年新闻\n- 26\u002F02 - 实现了[MiniMax](https:\u002F\u002Fwww.minimax.io)连接器。新增压缩端点，可与LLM Tornado内置的压缩功能协同使用。开始着手开发ACP。\n- 26\u002F01 - [Ivy Framework](https:\u002F\u002Fivy.app)使用`LlmTornado.Agents`构建其[AI组件](https:\u002F\u002Fgithub.com\u002FIvy-Interactive\u002FIvy-Examples\u002Fpull\u002F362)。为Mistral实现了`\u002Focr`端点。对所有支持的提供商扩展了`\u002Ffiles`端点支持。\n\n## 新闻 2025\n- 25\u002F12 - 实现了 [Upstage](https:\u002F\u002Fwww.upstage.ai) 连接器。为 OpenAI、Anthropic 和 Google 实现了 `\u002Fbatch` 端点。`\u002Fvideos` 端点现在支持 OpenAI\u002FSora。\n- 25\u002F11 - [Flowbite Blazor](https:\u002F\u002Fgithub.com\u002Fthemesberg\u002Fflowbite-blazor) 使用 LLM Tornado 构建了他们的 [AI 聊天](https:\u002F\u002Fflowbite-blazor.org\u002Fdocs\u002Fai\u002Fchat) WASM 组件。实现了 [Requesty](https:\u002F\u002Fwww.requesty.ai) 连接器。为所有支持该功能的提供商实现了 `\u002Ftokenize` 端点。\n- 25\u002F10 - LLM Tornado 被 [JetBrains](https:\u002F\u002Fwww.jetbrains.com) 收录于 [dotInsights](https:\u002F\u002Fblog.jetbrains.com\u002Fdotnet\u002F2025\u002F10\u002F06\u002Fdotinsights-october-2025)。[Microsoft](https:\u002F\u002Fwww.microsoft.com) 在 [面向初学者的 .NET 生成式 AI](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FGenerative-AI-for-beginners-dotnet) 中使用了 LLM Tornado。与 [Microsoft.Extensions.AI](https:\u002F\u002Fgithub.com\u002Fdotnet\u002Fextensions) 的互操作性正式推出。实现了 [Skills](https:\u002F\u002Fllmtornado.ai\u002Fllmtornado\u002Fanthropic-specific\u002Fskills) 协议。\n- 25\u002F09 - 维护者 [Matěj Štágl](https:\u002F\u002Fgithub.com\u002Flofcz) 和 [John Lomba](https:\u002F\u002Fgithub.com\u002FJohnny2x2) 在 [.NET AI 社区站会](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=h7yTai0cRtE) 中讨论了 LLM Tornado。实现了 [PgVector](https:\u002F\u002Fllmtornado.ai\u002Fvectordatabases\u002Fpgvector) 和 [ChromaDb](https:\u002F\u002Fllmtornado.ai\u002Fvectordatabases\u002Fchromadb) 连接器。\n- 25\u002F08 - [ProseFlow](https:\u002F\u002Flsxprime.github.io\u002Fproseflow-web) 是用 LLM Tornado 构建的。[Sciobot](https:\u002F\u002Fsciobot.org)——一个由 LLM Tornado 构建的面向教育者的 AI 平台，已被 [Cohere Labs Catalyst 资助计划](https:\u002F\u002Fcohere.com\u002Fresearch\u002Fgrants) 接受。实现了 [A2A](https:\u002F\u002Fllmtornado.ai\u002Fa2a\u002Fgetting-started) 协议。\n- 25\u002F07 - 贡献者 [Shaltiel Shmidman](https:\u002F\u002Fgithub.com\u002Fshaltielshmid) 在 [ASP.NET 社区站会](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=RaZc-2tfh9k) 中谈到了 LLM Tornado。新增了对 [Z.ai](https:\u002F\u002Fdocs.z.ai\u002Fguides\u002Foverview\u002Fquick-start)、[阿里巴巴](https:\u002F\u002Fmodelstudio.console.alibabacloud.com) 和 [Blablador](https:\u002F\u002Fsdlaml.pages.jsc.fz-juelich.de\u002Fai\u002Fguides\u002Fblablador_api_access) 的连接器。\n- 25\u002F06 - 添加了 [C# 委托作为代理工具](https:\u002F\u002Fllmtornado.ai\u002Fagents\u002Ftornado-agent\u002Ftools\u002Ffunction-tools) 系统，使应用程序无需手动编写 JSON 模式。实现了 [MCP](https:\u002F\u002Fllmtornado.ai\u002Fmpc\u002Fmpc) 协议。\n- 25\u002F05 - 添加了 [聊天转响应](https:\u002F\u002Fllmtornado.ai\u002Fagents\u002Ftornado-agent\u002Ftools\u002Fresponse-tools) 系统，允许从 `\u002Fchat` 端点使用 `\u002Fresponses` 专属模型。\n- 25\u002F04 - 新增了对 
[DeepInfra](https:\u002F\u002Fdeepinfra.com\u002Fdocs\u002F)、[DeepSeek](https:\u002F\u002Fapi-docs.deepseek.com\u002F) 和 [Perplexity](https:\u002F\u002Fdocs.perplexity.ai\u002Fhome) 的连接器。\n- 25\u002F03 - 实现了 [助手](https:\u002F\u002Fllmtornado.ai\u002Fllmtornado\u002Fassistants\u002Fbasics) 功能。\n- 25\u002F02 - 新增了对 [OpenRouter](https:\u002F\u002Fopenrouter.ai\u002Fdocs\u002Fquickstart)、[Voyage](https:\u002F\u002Fwww.voyageai.com\u002F) 和 [xAI](https:\u002F\u002Fdocs.x.ai\u002Fdocs) 的连接器。\n- 25\u002F01 - 实现了严格 JSON 模式，新增了 [Groq](https:\u002F\u002Fconsole.groq.com\u002Fdocs\u002Foverview) 和 [Mistral](https:\u002F\u002Fdocs.mistral.ai\u002Fgetting-started) 连接器。\n\n## ⭐ 示例\n- [与你的文档聊天](https:\u002F\u002Fgithub.com\u002Flofcz\u002FLlmTornado\u002Fblob\u002F61d2a4732c88c45d4a8c053204ecdef807c34652\u002FLlmTornado.Demo\u002FChatDemo.cs#L722-L757)\n- [制作多说话人播客](https:\u002F\u002Fgithub.com\u002Flofcz\u002FLlmTornado\u002Fblob\u002Fd1042281082ea5ff1de9dcb438a847d4cd9c416b\u002FLlmTornado.Demo\u002FChatDemo2.cs#L332-L374)\n- [使用麦克风与 AI 进行语音通话](https:\u002F\u002Fgithub.com\u002Flofcz\u002FLlmTornado\u002Fblob\u002F61d2a4732c88c45d4a8c053204ecdef807c34652\u002FLlmTornado.Demo\u002FChatDemo.cs#L905-L968)\n- [编排助手](https:\u002F\u002Fgithub.com\u002Flofcz\u002FLlmTornado\u002Fblob\u002F61d2a4732c88c45d4a8c053204ecdef807c34652\u002FLlmTornado.Demo\u002FThreadsDemo.cs#L331-L429)\n- [生成图片](https:\u002F\u002Fgithub.com\u002Flofcz\u002FLlmTornado\u002Fblob\u002F61d2a4732c88c45d4a8c053204ecdef807c34652\u002FLlmTornado.Demo\u002FImagesDemo.cs#L10-L13)\n- [总结视频（本地文件 \u002F YouTube）](https:\u002F\u002Fgithub.com\u002Flofcz\u002FLlmTornado\u002Fblob\u002Fcfd47f915584728d9a2365fc9d38d158673da68a\u002FLlmTornado.Demo\u002FChatDemo2.cs#L119)\n- [将文本和图片转换为高质量嵌入](https:\u002F\u002Fgithub.com\u002Flofcz\u002FLlmTornado\u002Fblob\u002F61d2a4732c88c45d4a8c053204ecdef807c34652\u002FLlmTornado.Demo\u002FEmbeddingDemo.cs#L50-L75)\n- [实时转录音频](https:\u002F\u002Fgithub.com\u002Flofcz\u002FLlmTornado\u002Fblob\u002Fe592a2fc0a37dbd0e754dac7b1655703367369df\u002FLlmTornado.Demo\u002FAudioDemo.cs#L29)\n\n## ⚡ 入门指南\n\n通过 NuGet 安装 LLM Tornado：\n\n```bash\ndotnet add package LlmTornado\n```\n\n可选插件：\n\n```bash\ndotnet add package LlmTornado.Agents # 代理框架，更高层次的抽象\ndotnet add package LlmTornado.Mcp # 模型上下文协议 (MCP) 集成\ndotnet add package LlmTornado.A2A # 代理间交互 (A2A) 集成\ndotnet add package LlmTornado.Microsoft.Extensions.AI # 语义核互操作性\ndotnet add package LlmTornado.Contrib # 提高生产力和生活质量的增强功能\n```\n\n## 🪄 快速推理\n\n在多个提供商之间进行推理非常简单，只需更改 `ChatModel` 参数即可。Tornado 实例可以使用多个 API 密钥构建，系统会根据模型自动选择正确的密钥：\n\n```csharp\nTornadoApi api = new TornadoApi([\n    \u002F\u002F 注意：删除你不会使用的提供商对应的行\n    new (LLmProviders.OpenAi, \"OPEN_AI_KEY\"),\n    new (LLmProviders.Anthropic, \"ANTHROPIC_KEY\"),\n    new (LLmProviders.Cohere, \"COHERE_KEY\"),\n    new (LLmProviders.Google, \"GOOGLE_KEY\"),\n    new (LLmProviders.Groq, \"GROQ_KEY\"),\n    new (LLmProviders.DeepSeek, \"DEEP_SEEK_KEY\"),\n    new (LLmProviders.Mistral, \"MISTRAL_KEY\"),\n    new (LLmProviders.XAi, \"XAI_KEY\"),\n    new (LLmProviders.Perplexity, \"PERPLEXITY_KEY\"),\n    new (LLmProviders.Voyage, \"VOYAGE_KEY\"),\n    new (LLmProviders.DeepInfra, \"DEEP_INFRA_KEY\"),\n    new (LLmProviders.OpenRouter, \"OPEN_ROUTER_KEY\")\n]);\n\n\u002F\u002F 此示例遍历多个模型，给每个模型相同的任务，并打印结果。\nList\u003CChatModel> models = [\n    ChatModel.OpenAi.O3.Mini, ChatModel.Anthropic.Claude37.Sonnet,\n    ChatModel.Cohere.Command.RPlus, ChatModel.Google.Gemini.Gemini2Flash001,\n    ChatModel.Groq.Meta.Llama370B, 
ChatModel.DeepSeek.Models.Chat,\n    ChatModel.Mistral.Premier.MistralLarge, ChatModel.XAi.Grok.Grok2241212,\n    ChatModel.Perplexity.Sonar.Default\n];\n\nforeach (ChatModel model in models)\n{\n    string? response = await api.Chat.CreateConversation(model)\n        .AppendSystemMessage(\"你是一位算命师。\")\n        .AppendUserInput(\"我的未来会怎样？\")\n        .GetResponse();\n\n    Console.WriteLine(response);\n}\n```\n\n💡 除了传递强类型模型外，你也可以直接传入字符串：`await api.Chat.CreateConversation(\"gpt-5-mini\")`，Tornado 会自动解析提供商。\n\n## ❄️ 供应商扩展\n\nTornado 拥有一个强大的 `VendorExtensions` 概念，可以应用于各种端点，并且是强类型的。许多提供商都提供独特或小众的 API，通常能够实现其他方式无法实现的用例。例如，我们可以为 Anthropic 的 Claude 3.7 设置推理预算：\n\n```cs\npublic static async Task AnthropicSonnet37Thinking()\n{\n    Conversation chat = Program.Connect(LLmProviders.Anthropic).Chat.CreateConversation(new ChatRequest\n    {\n        Model = ChatModel.Anthropic.Claude37.Sonnet,\n        VendorExtensions = new ChatRequestVendorExtensions(new ChatRequestVendorAnthropicExtensions\n        {\n            Thinking = new AnthropicThinkingSettings\n            {\n                BudgetTokens = 2_000,\n                Enabled = true\n            }\n        })\n    });\n    \n    chat.AppendUserInput(\"解释如何解微分方程。\");\n\n    ChatRichResponse blocks = await chat.GetResponseRich();\n\n    if (blocks.Blocks is not null)\n    {\n        foreach (ChatRichResponseBlock reasoning in blocks.Blocks.Where(x => x.Type is ChatRichResponseBlockTypes.Reasoning))\n        {\n            Console.ForegroundColor = ConsoleColor.DarkGray;\n            Console.WriteLine(reasoning.Reasoning?.Content);\n            Console.ResetColor();\n        }\n\n        foreach (ChatRichResponseBlock reasoning in blocks.Blocks.Where(x => x.Type is ChatRichResponseBlockTypes.Message))\n        {\n            Console.WriteLine(reasoning.Message);\n        }\n    }\n}\n```\n\n## 🔮 自托管\u002F自定义提供商\n\n除了使用商业 API 外，用户还可以借助 [大量](https:\u002F\u002Fgithub.com\u002Fjanhq\u002Fawesome-local-ai)可用工具轻松搭建自己的推理服务器。以下是一个使用 Ollama 进行流式响应的简单示例，但同样的方法也可用于任何自定义提供商：\n\n```cs\npublic static async Task OllamaStreaming()\n{\n    TornadoApi api = new TornadoApi(new Uri(\"http:\u002F\u002Flocalhost:11434\")); \u002F\u002F 默认 Ollama 端口，如有需要可在第二个参数中传入 API 密钥\n    \n    await api.Chat.CreateConversation(new ChatModel(\"falcon3:1b\")) \u002F\u002F \u003C-- 替换为您使用的模型\n        .AppendUserInput(\"为什么天空是蓝色的？\")\n        .StreamResponse(Console.Write);\n}\n```\n\n如果需要对请求有更多控制，比如添加自定义头部信息，可以创建一个内置提供商的实例。这对于 Amazon Bedrock、Vertex AI 等自定义部署非常有用。\n\n```cs\nTornadoApi tornadoApi = new TornadoApi(new AnthropicEndpointProvider\n{\n    Auth = new ProviderAuthentication(\"ANTHROPIC_API_KEY\"),\n    \u002F\u002F {0} = 端点, {1} = 操作, {2} = 模型名称\n    UrlResolver = (endpoint, url, ctx) => \"https:\u002F\u002Fapi.anthropic.com\u002Fv1\u002F{0}{1}\",\n    RequestResolver = (request, data, streaming) =>\n    {\n        \u002F\u002F 默认情况下，提供自定义请求解析器会忽略 beta 头部\n        \u002F\u002F request 是 HttpRequestMessage，data 包含有效载荷\n    },\n    RequestSerializer = (data, ctx) =>\n    {\n       \u002F\u002F data 是 JObject，可以在序列化为字符串之前进行修改。\n    }\n});\n```\n\nhttps:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002Fde62f0fe-93e0-448c-81d0-8ab7447ad780\n\n## 🔎 高级推理\n\n### 流式处理\n\nTornado 提供了三个抽象层次，细节越多，复杂性也越高。对于只需要纯文本的简单场景，可以用简洁的格式表示：\n\n```cs\nawait api.Chat.CreateConversation(ChatModel.Anthropic.Claude3.Sonnet)\n    .AppendSystemMessage(\"你是一位算命师。\")\n    .AppendUserInput(\"我的未来会怎样？\")\n    .StreamResponse(Console.Write);\n```  \n\n抽象层次如下：\n- `Response`（聊天为 `string`，嵌入为 `float[]` 
等）\n- `ResponseRich`（工具、模态、用量等元数据）\n- `ResponseRichSafe`（与第 2 层相同，但在网络层面不会抛出异常，例如当提供商返回内部错误或完全无响应时）\n\n### 带丰富内容的流式处理（工具、图像、音频等）\n\n当纯文本不足以满足需求时，可以切换到 `StreamResponseRich` 或 `GetResponseRich()` API。模型请求的工具可以在稍后解析，而无需再返回给模型。这在我们使用工具但不打算继续对话的情况下非常有用：\n\n```cs\n\u002F\u002F 请求模型生成两张图片，并流式输出结果：\npublic static async Task GoogleStreamImages()\n{\n    Conversation chat = api.Chat.CreateConversation(new ChatRequest\n    {\n        Model = ChatModel.Google.GeminiExperimental.Gemini2FlashImageGeneration,\n        Modalities = [ ChatModelModalities.Text, ChatModelModalities.Image ]\n    });\n    \n    chat.AppendUserInput([\n        new ChatMessagePart(\"生成两张图片：一头狮子和一只松鼠\")\n    ]);\n    \n    await chat.StreamResponseRich(new ChatStreamEventHandler\n    {\n        MessagePartHandler = async (part) =>\n        {\n            if (part.Text is not null)\n            {\n                Console.Write(part.Text);\n                return;\n            }\n\n            if (part.Image is not null)\n            {\n                \u002F\u002F 在我们的测试中，这里会调用 Chafa 将原始 base64 数据转换为 Sixels 格式\n                await DisplayImage(part.Image.Url);\n            }\n        },\n        BlockFinishedHandler = (block) =>\n        {\n            Console.WriteLine();\n            return ValueTask.CompletedTask;\n        },\n        OnUsageReceived = (usage) =>\n        {\n            Console.WriteLine();\n            Console.WriteLine(usage);\n            return ValueTask.CompletedTask;\n        }\n    });\n}\n```\n\n### 工具即时解析\n\n模型请求的工具可以立即解析并返回结果。这样做的好处是可以自动继续对话：\n\n```cs\nConversation chat = api.Chat.CreateConversation(new ChatRequest\n{\n    Model = ChatModel.OpenAi.Gpt4.O,\n    Tools =\n    [\n        new Tool(new ToolFunction(\"get_weather\", \"获取当前天气\", new\n        {\n            type = \"object\",\n            properties = new\n            {\n                location = new\n                {\n                    type = \"string\",\n                    description = \"需要查询天气信息的位置。\"\n                }\n            },\n            required = new List\u003Cstring> { \"location\" }\n        }))\n    ]\n})\n.AppendSystemMessage(\"你是一位乐于助人的助手\")\n.AppendUserInput(\"今天布拉格的天气怎么样？\");\n\nChatStreamEventHandler handler = new ChatStreamEventHandler\n{\n  MessageTokenHandler = (x) =>\n  {\n      Console.Write(x);\n      return Task.CompletedTask;\n  },\n  FunctionCallHandler = (calls) =>\n  {\n      calls.ForEach(x => x.Result = new FunctionResult(x, \"预计中午前后会有小雨。\", null));\n      return Task.CompletedTask;\n  },\n  AfterFunctionCallsResolvedHandler = async (results, handler) => { await chat.StreamResponseRich(handler); }\n};\n\nawait chat.StreamResponseRich(handler);\n```\n\n### 延迟解析的工具\n\n我们也可以不解析工具调用，而是推迟或结束对话。这对于提取类任务非常有用，因为我们只关心工具调用：\n\n```cs\nConversation chat = api.Chat.CreateConversation(new ChatRequest\n{\n    Model = ChatModel.OpenAi.Gpt4.Turbo,\n    Tools = new List\u003CTool>\n    {\n        new Tool\n        {\n            Function = new ToolFunction(\"get_weather\", \"获取当前天气\")\n        }\n    },\n    ToolChoice = new OutboundToolChoice(OutboundToolChoiceModes.Required)\n});\n\nchat.AppendUserInput(\"你是谁？\"); \u002F\u002F 用户提出了无关问题，但我们强制模型使用工具\nChatRichResponse response = await chat.GetResponseRich(); \u002F\u002F 响应包含一个函数类型的块\n```\n\n_`GetResponseRichSafe()` API 也可用，它保证不会在网络层面抛出异常。响应被封装在一个包含额外信息的网络层包装器中。对于生产环境中的用例，可以在所有产生 HTTP 请求的 Tornado API 上使用 `try {} catch {}` 结构，或者直接使用安全的 API。_\n\n## 🌐 MCP\n要使用模型上下文协议，需安装 `LlmTornado.Mcp` 适配器。安装后，`ModelContextProtocol` 类型上将提供新的互操作方法。以下示例使用了在 [示例 MCP 
\n\n## 🌐 MCP\n\n要使用模型上下文协议（MCP），需安装 `LlmTornado.Mcp` 适配器。安装后，`ModelContextProtocol` 的相关类型上会提供新的互操作方法。以下示例使用了在[示例 MCP 服务器](https:\u002F\u002Fmodelcontextprotocol.io\u002Fquickstart\u002Fserver#c%23)上定义的 `GetForecast` 工具：\n\n```cs\n[McpServerToolType]\npublic sealed class WeatherTools\n{\n    [McpServerTool, Description(\"获取某个地点的天气预报。\")]\n    public static async Task\u003Cstring> GetForecast(\n        HttpClient client,\n        [Description(\"地点的纬度。\")] double latitude,\n        [Description(\"地点的经度。\")] double longitude)\n    {\n        var pointUrl = string.Create(CultureInfo.InvariantCulture, $\"\u002Fpoints\u002F{latitude},{longitude}\");\n        using var jsonDocument = await client.ReadJsonDocumentAsync(pointUrl);\n        var forecastUrl = jsonDocument.RootElement.GetProperty(\"properties\").GetProperty(\"forecast\").GetString()\n            ?? throw new Exception($\"No forecast URL provided by {client.BaseAddress}points\u002F{latitude},{longitude}\");\n\n        using var forecastDocument = await client.ReadJsonDocumentAsync(forecastUrl);\n        var periods = forecastDocument.RootElement.GetProperty(\"properties\").GetProperty(\"periods\").EnumerateArray();\n\n        return string.Join(\"\\n---\\n\", periods.Select(period => $\"\"\"\n                {period.GetProperty(\"name\").GetString()}\n                温度: {period.GetProperty(\"temperature\").GetInt32()}°F\n                风速: {period.GetProperty(\"windSpeed\").GetString()} {period.GetProperty(\"windDirection\").GetString()}\n                天气预报: {period.GetProperty(\"detailedForecast\").GetString()}\n                \"\"\"));\n    }\n}\n```\n\n客户端需要执行以下操作：\n\n```cs\n\u002F\u002F 您的客户端传输方式，例如 StdioClientTransport\nawait using IMcpClient mcpClient = await McpClientFactory.CreateAsync(clientTransport);\n\n\u002F\u002F 1. 获取工具列表\nList\u003CTool> tools = await mcpClient.ListTornadoToolsAsync();\n\n\u002F\u002F 2. 创建对话，并传递可用工具\nTornadoApi api = new TornadoApi(LLmProviders.OpenAi, apiKeys.OpenAi);\nConversation conversation = api.Chat.CreateConversation(new ChatRequest\n{\n    Model = ChatModel.OpenAi.Gpt41.V41,\n    Tools = tools,\n    \u002F\u002F 强制使用任意一个可用工具（如需指定具体工具，可使用 new OutboundToolChoice(\"toolName\")）\n    ToolChoice = OutboundToolChoice.Required\n});\n\n\u002F\u002F 3. 让模型调用工具并推断参数\nawait conversation\n    .AddSystemMessage(\"您是一位有用的助手\")\n    .AddUserMessage(\"达拉斯现在的天气如何？\")\n    .GetResponseRich(async calls =>\n    {\n        foreach (FunctionCall call in calls)\n        {\n            \u002F\u002F 获取模型推断出的参数\n            double latitude = call.GetOrDefault\u003Cdouble>(\"latitude\");\n            double longitude = call.GetOrDefault\u003Cdouble>(\"longitude\");\n            \n            \u002F\u002F 在 MCP 服务器上调用工具，并传递参数\n            await call.ResolveRemote(new\n            {\n                latitude = latitude,\n                longitude = longitude\n            });\n\n            \u002F\u002F 提取工具返回的结果，并将其传递回模型\n            if (call.Result?.RemoteContent is McpContent mcpContent)\n            {\n                foreach (IMcpContentBlock block in mcpContent.McpContentBlocks)\n                {\n                    if (block is McpContentBlockText textBlock)\n                    {\n                        call.Result.Content = textBlock.Text;\n                    }\n                }\n            }\n        }\n    });\n\n\u002F\u002F 取消对工具调用的强制要求，否则模型会再次被迫调用工具\nconversation.RequestParameters.ToolChoice = null;\n\n\u002F\u002F 4. 流式输出最终响应\nawait conversation.StreamResponse(Console.Write);\n```\n\n完整的示例可在以下位置找到：[客户端](https:\u002F\u002Fgithub.com\u002Flofcz\u002FLlmTornado\u002Fblob\u002Fmaster\u002Fsrc\u002FLlmTornado.Mcp.Sample\u002FProgram.cs)、[服务器](https:\u002F\u002Fgithub.com\u002Flofcz\u002FLlmTornado\u002Fblob\u002Fmaster\u002Fsrc\u002FLlmTornado.Mcp.Sample.Server\u002FWeatherTools.cs)。
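\n\n如果需要同时接入多个本地 MCP 服务器，思路相同：为每个服务器各建一个客户端，合并工具列表后一并交给模型（示意代码，`transportA`、`transportB` 为假设的传输配置，其余 API 与上文一致）：\n\n```cs\nawait using IMcpClient clientA = await McpClientFactory.CreateAsync(transportA);\nawait using IMcpClient clientB = await McpClientFactory.CreateAsync(transportB);\n\n\u002F\u002F 合并两个服务器暴露的工具\nList\u003CTool> tools =\n[\n    .. await clientA.ListTornadoToolsAsync(),\n    .. await clientB.ListTornadoToolsAsync()\n];\n\n\u002F\u002F 将 tools 传入 ChatRequest 后，模型选中某个工具时，\n\u002F\u002F 对该 FunctionCall 调用 ResolveRemote() 即可自动路由到对应的服务器。\n```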
\n\n## 🧰 工具包\n\nTornado 在 `LlmTornado.Toolkit` 包中提供了强大的抽象层，能够帮助开发者快速构建应用，同时避免许多设计陷阱。这些抽象的核心在于代码的可扩展性和易于调优的设计。\n\n### ToolkitChat\n\n`ToolkitChat` 是一种基于图的工作流原语，其中边负责传递数据，节点则执行函数。ToolkitChat 支持流式处理、丰富响应以及工具调用的链式操作。工具调用可以通过 `ChatFunction` 或 `ChatPlugin`（一个包含多个工具的封装）来提供。许多重载方法同时接受主模型和备用模型，这种交错策略比简单地重试同一模型更能有效应对 API 的临时中断。所有工具调用都是强类型的，默认为“严格”模式；对于不支持严格 JSON 模式的提供商（例如 Anthropic），会使用 `{` 预填充作为回退方案。只需更改参数即可将调用标记为“非严格”。\n\n```cs\nclass DemoAggregatedItem\n{\n    public string Name { get; set; }\n    public string KnownName { get; set; }\n    public int Quantity { get; set; }\n}\n\nstring sysPrompt = \"按类型汇总物品\";\nstring userPrompt = \"三个苹果，一个樱桃，两个苹果，一个橙子，一个橙子\";\n\nawait ToolkitChat.GetSingleResponse(Program.Connect(), ChatModel.Google.Gemini.Gemini25Flash, ChatModel.OpenAi.Gpt41.V41Mini, sysPrompt, new ChatFunction([\n    new ToolParam(\"items\", new ToolParamList(\"汇总后的物品\", [\n        new ToolParam(\"name\", \"物品名称\", ToolParamAtomicTypes.String),\n        new ToolParam(\"quantity\", \"汇总数量\", ToolParamAtomicTypes.Int),\n        new ToolParam(\"known_name\", new ToolParamEnum(\"物品已知名称\", [ \"苹果\", \"樱桃\", \"橙子\", \"其他\" ]))\n    ]))\n], async (args, ctx) =>\n{\n    if (!args.ParamTryGet(\"items\", out List\u003CDemoAggregatedItem>? items) || items is null)\n    {\n        return new ChatFunctionCallResult(ChatFunctionCallResultParameterErrors.MissingRequiredParameter, \"items\");\n    }\n    \n    Console.WriteLine(\"汇总后的物品：\");\n\n    foreach (DemoAggregatedItem item in items)\n    {\n        Console.WriteLine($\"{item.Name}: {item.Quantity}\");\n    }\n    \n    return new ChatFunctionCallResult();\n}), userPrompt); \u002F\u002F 温度默认为 0，输出长度上限为 8k\n\n\u002F*\n汇总后的物品：\n苹果：5\n樱桃：1\n橙子：2\n*\u002F\n```\n\n## 👉 为什么选择 Tornado？\n\n- 在 [NuGet](https:\u002F\u002Fwww.nuget.org\u002Fpackages\u002FLlmTornado) 上拥有超过 10 万次安装。\n- 被用于[获奖](https:\u002F\u002Fwww-aiawards-cz.translate.goog\u002F?_x_tr_sl=cs&_x_tr_tl=en&_x_tr_hl=cs)的商业项目中，每月处理超过 1000 亿个 token。\n- 由 500 多项测试覆盖。\n- 性能优异。\n- 许可证永远不会改变。\n\n## 📢 使用 Tornado 构建的项目\n\n- [ScioBot](https:\u002F\u002Fsciobot.org\u002F) - 面向教育者的 AI 工具，用户超过 10 万人。\n- [ProseFlow](https:\u002F\u002Fgithub.com\u002FLSXPrime\u002FProseFlow) - 您的通用 AI 文本处理器，由本地和云端 LLM 提供支持。可在 Windows、macOS 和 Linux 上的任何应用程序中编辑、重构和转换文本。\n- [NotT3Chat](https:\u002F\u002Fgithub.com\u002Fshaltielshmid\u002FNotT3Chat) - T3 技术栈的 C# 实现。\n- [ClaudeCodeProxy](https:\u002F\u002Fgithub.com\u002Fsalty-flower\u002FClaudeCodeProxy) - 提供商多路复用代理。\n- [Semantic Search](https:\u002F\u002Fgithub.com\u002Fprimaryobjects\u002Fsemantic-search) - 基于上下文和语义匹配查询的 AI 语义搜索。\n\n_您是否使用 Tornado 构建过项目？请在 Issues 中告诉我们，我们将为您展示！_\n\n## 🤝 合作伙伴\n\n### 赞助方\n\n\u003Ca href=\"https:\u002F\u002Fwww.scio.cz\u002Fprace-u-nas\" target=\"_blank\">\n    \u003Cfigure>\n        \u003Cimg alt=\"Scio\" width=\"300\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Flofcz_LLMTornado_readme_a5bcd079d0f1.png\" \u002F>\n    \u003C\u002Ffigure>\n\u003C\u002Fa>\n\n### 技术支持\n\n[![JetBrains logo.](https:\u002F\u002Fresources.jetbrains.com\u002Fstorage\u002Fproducts\u002Fcompany\u002Fbrand\u002Flogos\u002Fjetbrains.svg)](https:\u002F\u002Fjb.gg\u002FOpenSource)\n\n## 📚 
贡献\n\n我们欢迎 PR！目前接受的方向包括：新的提供商实现、推动[功能矩阵](https:\u002F\u002Fgithub.com\u002Flofcz\u002FLlmTornado\u002Fblob\u002Fmaster\u002FFeatureMatrix.md)全面通过（100% 绿色）的贡献，以及经公开讨论后确定的新抽象。\n\n## 星标历史\n\n[![星标历史图表](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Flofcz_LLMTornado_readme_8cf0bfdf6448.png)](https:\u002F\u002Fwww.star-history.com\u002F#lofcz\u002Fllmtornado&type=date&legend=top-left)\n\n## 许可证\n\n本库采用 [MIT](https:\u002F\u002Fgithub.com\u002Flofcz\u002FLlmTornado\u002Fblob\u002Fmaster\u002FLICENSE) 许可证授权。💜","# LLMTornado 快速上手指南\n\nLLM Tornado 是一个专为 .NET 开发者设计的提供商无关 SDK，助你在几分钟内构建、编排和部署 AI 智能体（Agents）与工作流。它内置了 30+ 主流 AI 模型提供商（包括阿里云、DeepSeek、月之暗面等）和向量数据库的连接器，无需依赖各家的官方 SDK 即可使用其最新特性。\n\n## 环境准备\n\n在开始之前，请确保你的开发环境满足以下要求：\n\n*   **操作系统**: Windows, Linux, 或 macOS\n*   **.NET SDK**: **.NET 8.0** 或更高版本\n*   **开发工具**: Visual Studio 2022, JetBrains Rider, 或 VS Code (推荐安装 C# Dev Kit 扩展)\n*   **API 密钥**: 至少拥有一个支持的 AI 服务提供商的 API Key（如 OpenAI, Azure OpenAI, 阿里云百炼, DeepSeek 等）\n\n## 安装步骤\n\n通过 NuGet 包管理器将 LLMTornado 添加到你的 .NET 项目中。\n\n### 1. 安装核心库\n\n打开终端或包管理器控制台，运行以下命令安装基础包：\n\n```bash\ndotnet add package LlmTornado\n```\n\n### 2. 可选扩展包\n\n根据项目需求，你可以选择安装以下功能扩展包：\n\n```bash\n# 智能体编排框架（高阶抽象，推荐用于复杂任务）\ndotnet add package LlmTornado.Agents\n\n# Model Context Protocol (MCP) 集成，用于连接数据源和工具\ndotnet add package LlmTornado.Mcp\n\n# Agent2Agent (A2A) 集成，实现跨平台智能体协作\ndotnet add package LlmTornado.A2A\n\n# 与 Microsoft.Extensions.AI 及 Semantic Kernel 互操作\ndotnet add package LlmTornado.Microsoft.Extensions.AI\n\n# 生产力增强工具\ndotnet add package LlmTornado.Contrib\n```\n\n> **提示**：若 NuGet 还原缓慢，可通过 `--source` 参数显式指定包源（官方源为 `https:\u002F\u002Fapi.nuget.org\u002Fv3\u002Findex.json`）。国内开发者可换用所在网络可达的 NuGet 镜像源，具体地址以镜像提供方的文档为准。\n\n## 基本使用\n\nLLM Tornado 的核心优势在于**统一接口**。你只需初始化一次 `TornadoApi` 实例并配置多个提供商的密钥，后续只需更改模型名称即可切换底层提供商，无需修改业务逻辑代码。\n\n### 最简单的推理示例\n\n以下示例展示了如何配置多个提供商并进行一次简单的聊天调用：\n\n```csharp\nusing LlmTornado;\nusing LlmTornado.Chat;\n\n\u002F\u002F 1. 初始化 API 客户端\n\u002F\u002F 支持同时传入多个提供商的 Key，系统会根据调用的模型自动匹配正确的 Key\nTornadoApi api = new TornadoApi([\n    new (LLmProviders.OpenAi, \"YOUR_OPENAI_KEY\"),\n    new (LLmProviders.Anthropic, \"YOUR_ANTHROPIC_KEY\"),\n    new (LLmProviders.DeepSeek, \"YOUR_DEEPSEEK_KEY\"), \u002F\u002F 支持国产模型\n    new (LLmProviders.AliCloud, \"YOUR_ALICLOUD_KEY\"), \u002F\u002F 支持阿里云\n    \u002F\u002F 添加更多你需要的提供商...\n]);\n\n\u002F\u002F 2. 执行聊天请求\n\u002F\u002F 只需更改 model 参数即可切换提供商，例如从 \"gpt-4o\" 切换到 \"deepseek-chat\"\nvar response = await api.Chat.GetChatCompletion(new ChatRequest()\n{\n    Model = \"gpt-4o\", \u002F\u002F 尝试改为 \"deepseek-chat\" 或 \"qwen-plus\" 测试不同模型\n    Messages = [\n        new(ChatMessageRole.System, \"你是一个乐于助人的 AI 助手。\"),\n        new(ChatMessageRole.User, \"你好，请介绍一下你自己。\")\n    ]\n});\n\n\u002F\u002F 3. 
输出结果\nConsole.WriteLine(response.FirstChoice.Message.Content);\n```\n\n### 关键特性说明\n\n*   **自动路由**: `TornadoApi` 会自动识别 `Model` 字符串所属的提供商，并调用对应的 API 端点和密钥。\n*   **强类型支持**: 所有提供商的特有功能（如 JSON Mode, Vision, Function Calling）均通过强类型代码暴露，享受完整的 IntelliSense 支持。\n*   **本地部署**: 同样支持连接本地的 vLLM, Ollama 或 LocalAI，只需在初始化时配置相应的 Base URL 即可。\n\n现在你可以开始探索更高级的功能，如智能体编排（Agents Orchestration）、RAG 向量检索或多模态处理了。","某 .NET 开发团队正致力于构建一个能自动处理客户工单、查询知识库并生成回复的智能客服系统。\n\n### 没有 LLMTornado 时\n- **多模型适配成本高**：若要切换或对比 OpenAI、Azure 与国产大模型（如 Moonshot），需分别引入各厂商独立的 SDK，导致项目依赖臃肿且代码耦合严重。\n- **智能体编排复杂**：实现“意图识别→知识检索→回复生成”的多步工作流时，需手动编写大量状态管理代码来协调不同任务节点，开发周期长达数周。\n- **本地部署困难**：尝试将模型迁移至本地 Ollama 或 vLLM 以保护数据隐私时，因请求格式差异需重写底层通信逻辑，维护难度极大。\n- **功能迭代缓慢**：每当需要新增向量数据库连接或调整 Agent 交互逻辑，都涉及到底层基础设施的改动，难以快速响应业务需求。\n\n### 使用 LLMTornado 后\n- **统一接口屏蔽差异**：借助内置的 30+ 连接器，仅需修改配置即可在 Alibaba、Google 或本地模型间无缝切换，彻底摆脱对厂商专属 SDK 的依赖。\n- **可视化编排工作流**：利用 Orchestrator 和 Runner 核心概念，通过简洁的构建器模式即可定义复杂的 Agent 协作图，将原本数周的编排工作缩短至几分钟。\n- **原生支持本地化**：内置对 vLLM 和 Ollama 的请求转换支持，无需额外代码即可将云端流程平滑迁移至本地私有环境，保障数据安全。\n- **敏捷开发与扩展**：基于 Provider-agnostic 设计，新增向量库连接或调整 Agent 手递手（handoffs）逻辑只需几行代码，显著提升迭代效率。\n\nLLMTornado 通过统一的抽象层和强大的编排能力，让 .NET 开发者能以最低成本构建灵活、可移植的企业级 AI 应用。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Flofcz_LLMTornado_f5835678.png","lofcz","Matěj Štágl","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Flofcz_219e657a.jpg","Intel ISEF & EUCYS alumni, AI Awards laureate. Married to .NET, casually dating Supabase & React. Compilers, ML, 3AM debugging.","@sciocz","Somewhere in Nevada","stagl@wattlescript.org",null,"https:\u002F\u002Fllmtornado.ai","https:\u002F\u002Fgithub.com\u002Flofcz",[83,87,91,95,98,102,106,110],{"name":84,"color":85,"percentage":86},"C#","#178600",90.1,{"name":88,"color":89,"percentage":90},"Python","#3572A5",7.1,{"name":92,"color":93,"percentage":94},"HTML","#e34c26",0.7,{"name":96,"color":97,"percentage":94},"Shell","#89e051",{"name":99,"color":100,"percentage":101},"Vue","#41b883",0.6,{"name":103,"color":104,"percentage":105},"JavaScript","#f1e05a",0.4,{"name":107,"color":108,"percentage":109},"CSS","#663399",0.3,{"name":111,"color":112,"percentage":113},"Dockerfile","#384d54",0,590,98,"2026-04-08T16:44:35","MIT",1,"Windows, Linux, macOS","未说明 (作为 .NET SDK，GPU 需求取决于所选的后端模型服务，如本地部署的 vLLM、Ollama 或云端 API)","未说明",{"notes":123,"python":124,"dependencies":125},"这是一个 .NET SDK 而非独立的 Python 应用程序。它主要用于连接外部 AI 提供商（如 OpenAI, Anthropic 等）或本地模型服务（如 vLLM, Ollama, LocalAI）。因此，硬件资源需求主要取决于您选择连接的模型服务本身，而非此库的运行。若使用本地部署（First-class Local Deployments），需自行配置相应的模型运行环境。","不需要 (基于 .NET 平台)",[126,127,128,129,130],".NET 8.0+ (推断)","LlmTornado (NuGet 包)","LlmTornado.Agents (可选)","LlmTornado.Mcp (可选)","LlmTornado.A2A (可选)",[13,15,14],[133,134,135,136,137,138,139,140,141,142,143,144,145,146,147,148,149,150,151,152],"agent-framework","agent-orchestration","orchestration","multi-agent","agent","workflow","dotnet","agents","dotnet-core","agents-sdk","a2a","agentic-ai","mcp","agent2agent","agentic-workflow","agentic-rag","rag","ai","artificial-intelligence","sdk","2026-03-27T02:49:30.150509","2026-04-10T07:46:35.362010",[156,161,166,170,175,180],{"id":157,"question_zh":158,"answer_zh":159,"source_url":160},27352,"LlmTornado 是否支持 Microsoft.Extensions.AI 抽象层和批量（Batch）处理？","是的，LlmTornado 现已同时支持 Microsoft.Extensions.AI 抽象层和 Anthropic\u002FOpenAI 的 `\u002Fbatch` 
端点。您可以参考官方演示代码来了解如何使用批量功能：https:\u002F\u002Fgithub.com\u002Flofcz\u002FLlmTornado\u002Fblob\u002Fmaster\u002Fsrc\u002FLlmTornado.Demo\u002FBatchDemo.cs","https:\u002F\u002Fgithub.com\u002Flofcz\u002FLLMTornado\u002Fissues\u002F41",{"id":162,"question_zh":163,"answer_zh":164,"source_url":165},27353,"在使用 MCP 工具时遇到 'require_approval' 类型错误（期望对象或字符串，却得到整数）如何解决？","这是一个已知问题，已在版本 `3.7.14` 中修复。请升级您的 LlmTornado 库到 3.7.14 或更高版本。该版本还首次支持了 Delegate（委托）功能。示例代码可参考：https:\u002F\u002Fgithub.com\u002Flofcz\u002FLlmTornado\u002Fblob\u002Fmaster\u002Fsrc\u002FLlmTornado.Demo\u002FInfraDemo.cs","https:\u002F\u002Fgithub.com\u002Flofcz\u002FLLMTornado\u002Fissues\u002F70",{"id":167,"question_zh":168,"answer_zh":169,"source_url":165},27354,"如何在 LlmTornado 中配置和使用多个本地 MCP 服务器？","目前可以通过以下步骤实现：\n1. 为每个本地服务器创建一个 `IMcpClient` 实例。\n2. 调用每个客户端的 `ListTornadoToolsAsync()` 获取可用工具。\n3. 将所有工具合并为一个大的 `List\u003CTool>`。\n4. 当模型决定调用某个工具时，对该工具调用 `ResolveRemote()` 方法，这将自动处理与相应服务器的通信。",{"id":171,"question_zh":172,"answer_zh":173,"source_url":174},27355,"使用 Anthropic 提供商时，为什么返回结果中的 `Content` 字段为空，但 `Parts` 中有数据？","这是早期版本的一个转换缺失问题，已在版本 `3.4.15` 中修复。如果您仍在使用旧版本（如 3.4.12），请升级到 3.4.15 或更高版本。升级后，`GetResponseRich`、`ChatEndpoint.CreateChatCompletion` 以及其他聊天 API 返回的 `Content` 字段将正常包含数据。","https:\u002F\u002Fgithub.com\u002Flofcz\u002FLLMTornado\u002Fissues\u002F38",{"id":176,"question_zh":177,"answer_zh":178,"source_url":179},27356,"向 Gemini 发送请求时失败，提示请求体中包含空的默认工具（empty tool），如何解决？","这是由于默认设置中序列化了一个空工具导致的。虽然可以通过手动修改请求体字符串来移除（如调试时替换掉包含空 name\u002Fdescription 的 tools 部分），但这通常是库内部逻辑问题。建议检查是否无意中启用了默认工具配置，或者等待\u002F联系维护者修复该默认行为，确保在未显式设置工具时不发送空的 `tools` 数组。","https:\u002F\u002Fgithub.com\u002Flofcz\u002FLLMTornado\u002Fissues\u002F67",{"id":181,"question_zh":182,"answer_zh":183,"source_url":184},27357,"如何在使用 LlmTornado 构建 Agent 时绕过自动工具调用，以便自行处理函数执行？","如果您希望自行解析和执行函数工具（遵循 OpenAI 标准流程），而不是让库自动调用，您需要处理“未解析函数”的消息逻辑。最新的更新已经提供了更好的解决方案来处理并发状态和转换。您可以通过设置状态的并行过渡（parallel transition）选项，允许状态机在满足条件时过渡到任何状态，从而更灵活地控制工具调用的流程，而不是依赖默认的自动调用机制。","https:\u002F\u002Fgithub.com\u002Flofcz\u002FLLMTornado\u002Fissues\u002F54",[186,191,196,201,206,211,216,221,226,231,236,241,246,251,256,261,266,271,276,281],{"id":187,"version":188,"summary_zh":189,"released_at":190},180482,"v3.8.54","## 变更内容\n- 修复基于令牌的分块测试设置\n- 添加对 GPT-5.4 和 GPT-5.3-Codex 模型的支持\n- 新增通义千问 3.5 Plus 模型（暂未包含在最新版本中）\n- 将项目版本更新至 v3.8.53\n\n## 已更新的项目\nLlmTornado.A2A.csproj -> 1.0.41\nLlmTornado.Microsoft.Extensions.AI.csproj -> 1.1.51\nLlmTornado.Agents.Samples.csproj -> 1.0.43\nLlmTornado.Mcp.csproj -> 1.1.51\nLlmTornado.Agents.csproj -> 1.0.50\nLlmTornado.csproj -> 3.8.54\nLlmTornado.Tests.csproj -> 0.0.1","2026-03-07T16:02:10",{"id":192,"version":193,"summary_zh":194,"released_at":195},180483,"v3.8.53","## 变更内容\n- tokenize：接受 chatrequest 和 responsesrequest\n- 自动忽略 gpt-image 模型的 ResponseFormat\n- google：gemini-3.1-flash-image-preview\n- 将项目版本更新至 v3.8.52\n\n## 已更新的项目\nLlmTornado.A2A.csproj -> 1.0.40\nLlmTornado.Microsoft.Extensions.AI.csproj -> 1.1.50\nLlmTornado.Agents.Samples.csproj -> 1.0.42\nLlmTornado.Mcp.csproj -> 1.1.50\nLlmTornado.Agents.csproj -> 1.0.49\nLlmTornado.Demo.csproj -> 0.0.1\nLlmTornado.csproj -> 3.8.53","2026-02-27T18:03:29",{"id":197,"version":198,"summary_zh":199,"released_at":200},180484,"v3.8.52","## 变更内容\n- 更新 ResponsesDemo.cs\n- responses：令牌计数\n- oai：新的 responses 属性\n- 将项目版本更新至 v3.8.51\n\n## 已更新的项目\nLlmTornado.A2A.csproj -> 1.0.39\nLlmTornado.Microsoft.Extensions.AI.csproj -> 1.1.49\nLlmTornado.Agents.Samples.csproj -> 1.0.41\nLlmTornado.Mcp.csproj -> 1.1.49\nLlmTornado.Agents.csproj 
-> 1.0.48\nLlmTornado.Demo.csproj -> 0.0.1\nLlmTornado.csproj -> 3.8.52","2026-02-26T17:19:48",{"id":202,"version":203,"summary_zh":204,"released_at":205},180485,"v3.8.51","## 变更内容\n- 向量数据库：放宽依赖\n- a2a：0.3.3\n- mcp：1.0\n- 将项目版本更新至 v3.8.50\n\n## 更新的项目\nLlmTornado.A2A.csproj -> 1.0.38\nLlmTornado.Microsoft.Extensions.AI.csproj -> 1.1.48\nLlmTornado.Agents.Samples.csproj -> 1.0.40\nLlmTornado.VectorDatabases.csproj -> 0.0.1\nLlmTornado.VectorDatabases.Qdrant.csproj -> 0.0.1\nLlmTornado.Mcp.csproj -> 1.1.48\nLlmTornado.Agents.csproj -> 1.0.47\nLlmTornado.csproj -> 3.8.51\nLlmTornado.Mcp.Sample.Server.csproj -> 0.0.1","2026-02-25T19:20:46",{"id":207,"version":208,"summary_zh":209,"released_at":210},180486,"v3.8.50","## 变更内容\n- oai: gpt-5.3-codex\n- 为 Microsoft.Extensions.AI 集成添加对推理内容的支持\n- 将项目版本更新至 v3.8.49\n\n## 更新的项目\nLlmTornado.A2A.csproj -> 1.0.37\nLlmTornado.Microsoft.Extensions.AI.csproj -> 1.1.47\nLlmTornado.Agents.Samples.csproj -> 1.0.39\nLlmTornado.Mcp.csproj -> 1.1.47\nLlmTornado.Agents.csproj -> 1.0.46\nLlmTornado.csproj -> 3.8.50","2026-02-25T18:34:47",{"id":212,"version":213,"summary_zh":214,"released_at":215},180487,"v3.8.49","## 变更内容\n- anthropic：自动缓存，移除已废弃模型\n- gemini：优化 reasoning_effort 路由\n- google：gemini-3.1-pro-preview\n- 将项目版本更新至 v3.8.48\n\n## 更新的项目\nLlmTornado.Agents.Samples.csproj -> 1.0.38\nLlmTornado.Agents.csproj -> 1.0.45\nLlmTornado.Microsoft.Extensions.AI.csproj -> 1.1.46\nLlmTornado.Mcp.csproj -> 1.1.46\nLlmTornado.csproj -> 3.8.49\nLlmTornado.Demo.csproj -> 0.0.1\nLlmTornado.A2A.csproj -> 1.0.36","2026-02-20T11:40:58",{"id":217,"version":218,"summary_zh":219,"released_at":220},180488,"v3.8.48","## 变更内容\n- anthropic：sonnet 4.6，新增内置工具\n- 修复 #152 问题\n- 将项目版本更新至 v3.8.47\n\n## 已更新的项目\nLlmTornado.A2A.csproj -> 1.0.35\nLlmTornado.Microsoft.Extensions.AI.csproj -> 1.1.45\nLlmTornado.Agents.Samples.csproj -> 1.0.37\nLlmTornado.Mcp.csproj -> 1.1.45\nLlmTornado.Agents.csproj -> 1.0.44\nLlmTornado.csproj -> 3.8.48","2026-02-18T08:15:48",{"id":222,"version":223,"summary_zh":224,"released_at":225},180489,"v3.8.47","## 变更内容\n- 提供商：minimax\n- 将测试覆盖率从350个测试用例提升至500个\n- 更新README中的API提供商数量\n- 添加MiniMax连接器并更新功能\n- zai：glm5\n- openai：批量图片、响应\u002F压缩\n- anthropic：速度模式\n- 传播停止原因\n- 修复延迟流式工具调用问题\n- 将项目版本更新至v3.8.46\n\n## 已更新的项目\nLlmTornado.A2A.csproj -> 1.0.34\nLlmTornado.Microsoft.Extensions.AI.csproj -> 1.1.44\nLlmTornado.Agents.Samples.csproj -> 1.0.36\nLlmTornado.Mcp.csproj -> 1.1.44\nLlmTornado.Agents.csproj -> 1.0.43\nLlmTornado.Demo.csproj -> 0.0.1\nLlmTornado.csproj -> 3.8.47","2026-02-14T11:27:41",{"id":227,"version":228,"summary_zh":229,"released_at":230},180490,"v3.8.46","## 变更内容\n- anthropic：opus 4.6、压缩、自适应思维、推理地理定位、最大推理\n- zai：交错思维、xai：上一条响应ID、使用加密内容\n- 将项目版本更新至v3.8.45\n\n## 已更新的项目\nLlmTornado.Agents.Samples.csproj -> 1.0.35\nLlmTornado.Agents.csproj -> 1.0.42\nLlmTornado.Microsoft.Extensions.AI.csproj -> 1.1.43\nLlmTornado.Mcp.csproj -> 1.1.43\nLlmTornado.csproj -> 3.8.46\nLlmTornado.Demo.csproj -> 0.0.1\nLlmTornado.A2A.csproj -> 1.0.33","2026-02-06T06:39:00",{"id":232,"version":233,"summary_zh":234,"released_at":235},180491,"v3.8.45","## 变更内容\n- groq：文件、音频\n- xai：视频生成\u002F编辑、图像编辑；moonshot：kimi k2.5；zai：音频、视频生成\n- 同步 Google 模型\n- 将 Anthropic 的结构化响应升级至 GA 级别\n- 将项目版本更新至 v3.8.44\n\n## 更新的项目\nLlmTornado.Agents.Samples.csproj -> 1.0.34\nLlmTornado.Agents.csproj -> 1.0.41\nLlmTornado.Microsoft.Extensions.AI.csproj -> 1.1.42\nLlmTornado.Mcp.csproj -> 1.1.42\nLlmTornado.Mcp.Sample.Server.csproj -> 0.0.1\nLlmTornado.csproj -> 3.8.45\nLlmTornado.Demo.csproj -> 
0.0.1\nLlmTornado.A2A.csproj -> 1.0.32","2026-02-02T23:33:55",{"id":237,"version":238,"summary_zh":239,"released_at":240},180492,"v3.8.44","## What's Changed\n- update to usable gemini models, and upgrade to Tool utility Thanks @JuergenGutsch for the suggestion\n- Add sponsorship and powering sections to README\n- up mcp version\n- Update project versions to v3.8.43\n\n## Updated Projects\nLlmTornado.Agents.Samples.csproj -> 1.0.33\nLlmTornado.Agents.csproj -> 1.0.40\nLlmTornado.Microsoft.Extensions.AI.csproj -> 1.1.41\nLlmTornado.Mcp.csproj -> 1.1.41\nLlmTornado.Mcp.Sample.Server.csproj -> 0.0.1\nLlmTornado.csproj -> 3.8.44\nLlmTornado.Demo.csproj -> 0.0.1\nLlmTornado.A2A.csproj -> 1.0.31","2026-01-31T21:20:47",{"id":242,"version":243,"summary_zh":244,"released_at":245},180493,"v3.8.43","## What's Changed\r\n- fix to tool runner optional parameters\r\n- Update project versions to v3.8.42\r\n\r\n## Updated Projects\r\nLlmTornado.Agents.Samples.csproj -> 1.0.32\r\nLlmTornado.Agents.csproj -> 1.0.39\r\nLlmTornado.Microsoft.Extensions.AI.csproj -> 1.1.40\r\nLlmTornado.Mcp.csproj -> 1.1.40\r\nLlmTornado.Tests.csproj -> 0.0.1\r\nLlmTornado.csproj -> 3.8.43\r\nLlmTornado.Demo.csproj -> 0.0.1\r\nLlmTornado.A2A.csproj -> 1.0.30","2026-01-18T20:01:25",{"id":247,"version":248,"summary_zh":249,"released_at":250},180494,"v3.8.42","## What's Changed\n- Update FeatureMatrix.md\n- new endpoint: \u002Focr\n- regenerate openrouter, requesty\n- voyage: voyage-4-large, voyage-4, voyage-4-lite, voyage-4-nano, voyage-multimodal-3.5\n- openai: gpt-5.2-codex\n- fix to handoff and moving system message to end\n- Update install count on NuGet from 70,000 to 100,000\n- adding new tool resolved information properties\n- Update project versions to v3.8.41\n\n## Updated Projects\nLlmTornado.Agents.Samples.csproj -> 1.0.31\nLlmTornado.Agents.csproj -> 1.0.38\nLlmTornado.Microsoft.Extensions.AI.csproj -> 1.1.39\nLlmTornado.Mcp.csproj -> 1.1.39\nLlmTornado.csproj -> 3.8.42\nLlmTornado.Demo.csproj -> 0.0.1\nLlmTornado.A2A.csproj -> 1.0.29","2026-01-18T03:00:43",{"id":252,"version":253,"summary_zh":254,"released_at":255},180495,"v3.8.41","## What's Changed\n- Improve tool permission request handling\n- Update project versions to v3.8.40\n\n## Updated Projects\nLlmTornado.Agents.Samples.csproj -> 1.0.30\nLlmTornado.Agents.csproj -> 1.0.37\nLlmTornado.Microsoft.Extensions.AI.csproj -> 1.1.38\nLlmTornado.Mcp.csproj -> 1.1.38\nLlmTornado.csproj -> 3.8.41\nLlmTornado.A2A.csproj -> 1.0.28","2026-01-10T20:46:24",{"id":257,"version":258,"summary_zh":259,"released_at":260},180496,"v3.8.40","## What's Changed\n- refactor embedding, image, audio models luts\n- allow setting UA per api instance\n- Update project versions to v3.8.39\n\n## Updated Projects\nLlmTornado.Agents.Samples.csproj -> 1.0.29\nLlmTornado.Agents.csproj -> 1.0.36\nLlmTornado.Microsoft.Extensions.AI.csproj -> 1.1.37\nLlmTornado.Mcp.csproj -> 1.1.37\nLlmTornado.csproj -> 3.8.40\nLlmTornado.A2A.csproj -> 1.0.27","2026-01-05T23:07:06",{"id":262,"version":263,"summary_zh":264,"released_at":265},180497,"v3.8.39","## What's Changed\n- Update news section in README.md\n- \u002Fvideos: openai support\n- \u002Fbatch: google\n- Update project versions to v3.8.38\n\n## Updated Projects\nLlmTornado.Agents.Samples.csproj -> 1.0.28\nLlmTornado.Agents.csproj -> 1.0.35\nLlmTornado.Microsoft.Extensions.AI.csproj -> 1.1.36\nLlmTornado.Mcp.csproj -> 1.1.36\nLlmTornado.csproj -> 3.8.39\nLlmTornado.Demo.csproj -> 0.0.1\nLlmTornado.A2A.csproj -> 
1.0.26","2026-01-03T01:08:27",{"id":267,"version":268,"summary_zh":269,"released_at":270},180498,"v3.8.38","## What's Changed\n- \u002Fbatch: oai, anthropic\n- update feature matrix\n- Update README with new provider 'Upstage'\n- new connector: upstage\n- Update project versions to v3.8.37\n\n## Updated Projects\nLlmTornado.Agents.Samples.csproj -> 1.0.27\nLlmTornado.Agents.csproj -> 1.0.34\nLlmTornado.Microsoft.Extensions.AI.csproj -> 1.1.35\nLlmTornado.Mcp.csproj -> 1.1.35\nLlmTornado.Tests.csproj -> 0.0.1\nLlmTornado.csproj -> 3.8.38\nLlmTornado.Demo.csproj -> 0.0.1\nLlmTornado.A2A.csproj -> 1.0.25","2026-01-02T15:29:31",{"id":272,"version":273,"summary_zh":274,"released_at":275},180499,"v3.8.37","## What's Changed\n- fix\n- Fix duplicated final chunk when streaming with IChatClient\n- Initial plan\n- Update project versions to v3.8.36\n\n## Updated Projects\nLlmTornado.Agents.Samples.csproj -> 1.0.26\nLlmTornado.Agents.csproj -> 1.0.33\nLlmTornado.Microsoft.Extensions.AI.csproj -> 1.1.34\nLlmTornado.Mcp.csproj -> 1.1.34\nLlmTornado.csproj -> 3.8.37\nLlmTornado.A2A.csproj -> 1.0.24","2026-01-02T00:22:12",{"id":277,"version":278,"summary_zh":279,"released_at":280},180500,"v3.8.36","## What's Changed\n- anthropic: fix inline tool calls, zai: glm-4.7, improve model aliases, sync autogen\n- chat: improve refusal reason parsing and availablity\n- Update project versions to v3.8.35\n\n## Updated Projects\nLlmTornado.Agents.Samples.csproj -> 1.0.25\nLlmTornado.Agents.csproj -> 1.0.32\nLlmTornado.Microsoft.Extensions.AI.csproj -> 1.1.33\nLlmTornado.Mcp.csproj -> 1.1.33\nLlmTornado.Tests.csproj -> 0.0.1\nLlmTornado.csproj -> 3.8.36\nLlmTornado.Demo.csproj -> 0.0.1\nLlmTornado.A2A.csproj -> 1.0.23","2025-12-23T05:15:57",{"id":282,"version":283,"summary_zh":284,"released_at":285},180501,"v3.8.35","## What's Changed\n- change to ReasoningTokens property\n- fix reasoning tokens not handled for ollama qwen3:4b\n- Update project versions to v3.8.34\n\n## Updated Projects\nLlmTornado.Agents.Samples.csproj -> 1.0.24\nLlmTornado.Agents.csproj -> 1.0.31\nLlmTornado.Microsoft.Extensions.AI.csproj -> 1.1.32\nLlmTornado.Mcp.csproj -> 1.1.32\nLlmTornado.csproj -> 3.8.35\nLlmTornado.A2A.csproj -> 1.0.22","2025-12-19T14:13:06"]