[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-kaito-project--kaito":3,"tool-kaito-project--kaito":62},[4,18,26,36,46,54],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",160784,2,"2026-04-19T11:32:54",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":42,"last_commit_at":43,"category_tags":44,"status":17},8272,"opencode","anomalyco\u002Fopencode","OpenCode 是一款开源的 AI 编程助手（Coding Agent），旨在像一位智能搭档一样融入您的开发流程。它不仅仅是一个代码补全插件，而是一个能够理解项目上下文、自主规划任务并执行复杂编码操作的智能体。无论是生成全新功能、重构现有代码，还是排查难以定位的 Bug，OpenCode 都能通过自然语言交互高效完成，显著减少开发者在重复性劳动和上下文切换上的时间消耗。\n\n这款工具专为软件开发者、工程师及技术研究人员设计，特别适合希望利用大模型能力来提升编码效率、加速原型开发或处理遗留代码维护的专业人群。其核心亮点在于完全开源的架构，这意味着用户可以审查代码逻辑、自定义行为策略，甚至私有化部署以保障数据安全，彻底打破了传统闭源 AI 助手的“黑盒”限制。\n\n在技术体验上，OpenCode 提供了灵活的终端界面（Terminal UI）和正在测试中的桌面应用程序，支持 macOS、Windows 及 Linux 全平台。它兼容多种包管理工具，安装便捷，并能无缝集成到现有的开发环境中。无论您是追求极致控制权的资深极客，还是渴望提升产出的独立开发者，OpenCode 都提供了一个透明、可信",144296,1,"2026-04-16T14:50:03",[13,45],"插件",{"id":47,"name":48,"github_repo":49,"description_zh":50,"stars":51,"difficulty_score":32,"last_commit_at":52,"category_tags":53,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 
全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",109154,"2026-04-18T11:18:24",[14,15,13],{"id":55,"name":56,"github_repo":57,"description_zh":58,"stars":59,"difficulty_score":32,"last_commit_at":60,"category_tags":61,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[45,13,15,14],{"id":63,"github_repo":64,"name":65,"description_en":66,"description_zh":67,"ai_summary_zh":67,"readme_en":68,"readme_zh":69,"quickstart_zh":70,"use_case_zh":71,"hero_image_url":72,"owner_login":73,"owner_name":74,"owner_avatar_url":75,"owner_bio":66,"owner_company":76,"owner_location":76,"owner_email":76,"owner_twitter":76,"owner_website":76,"owner_url":77,"languages":78,"stars":117,"forks":118,"last_commit_at":119,"license":120,"difficulty_score":121,"env_os":122,"env_gpu":123,"env_ram":124,"env_deps":125,"category_tags":138,"github_topics":139,"view_count":32,"oss_zip_url":76,"oss_zip_packed_at":76,"status":17,"created_at":144,"updated_at":145,"faqs":146,"releases":177},9813,"kaito-project\u002Fkaito","kaito","Kubernetes AI Toolchain Operator","KAITO 是一款专为 Kubernetes 集群设计的 AI 工具链操作员，旨在自动化大语言模型（LLM）的推理、微调及检索增强生成（RAG）引擎的部署流程。它主要解决了在分布式环境中配置大模型时面临的复杂性难题，用户无需手动调整流水线并行、数据并行等繁琐参数，也无需额外配置存储资源。\n\nKAITO 特别适合需要在云原生架构下高效管理 AI 工作负载的开发者和运维工程师。其核心亮点在于极简的 API 设计与智能的资源调度：用户只需指定 GPU 实例类型和模型 ID，KAITO 即可自动估算显存需求并计算所需节点数量。通过集成节点自动供应器（NAP），它能精准调配 GPU 资源以实现最优的分布式推理。此外，KAITO 创新性地利用 GPU 节点内置的本地 NVMe 作为模型存储，省去了外部存储依赖，并全面支持所有 vLLM 兼容的 HuggingFace 模型。结合 KEDA 实现的自动扩缩容能力，KAITO 让大模型在 Kubernetes 上的运行变得更加轻松、高效且成本可控。","# Kubernetes AI Toolchain Operator (KAITO)\n\n![GitHub Release](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fv\u002Frelease\u002Fkaito-project\u002Fkaito)\n[![Go Report Card](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fkaito-project_kaito_readme_2b4a70945b89.png)](https:\u002F\u002Fgoreportcard.com\u002Freport\u002Fgithub.com\u002Fkaito-project\u002Fkaito)\n![GitHub go.mod Go version](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fgo-mod\u002Fgo-version\u002Fkaito-project\u002Fkaito)\n[![codecov](https:\u002F\u002Fcodecov.io\u002Fgh\u002Fkaito-project\u002Fkaito\u002Fgraph\u002Fbadge.svg?token=XAQLLPB2AR)](https:\u002F\u002Fcodecov.io\u002Fgh\u002Fkaito-project\u002Fkaito)\n[![FOSSA Status](https:\u002F\u002Fapp.fossa.com\u002Fapi\u002Fprojects\u002Fgit%2Bgithub.com%2Fkaito-project%2Fkaito.svg?type=shield)](https:\u002F\u002Fapp.fossa.com\u002Fprojects\u002Fgit%2Bgithub.com%2Fkaito-project%2Fkaito?ref=badge_shield)\n\n| ![notification](website\u002Fstatic\u002Fimg\u002Fbell.svg) What is NEW!                                                                                                                                                                                                
|\n| ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| ALL vLLM supported models can be run in KAITO now. |\n| Latest Release: Apr 15th, 2026. KAITO v0.10.0. |\n| First Release: Nov 15th, 2023. KAITO v0.1.0. |\n\nKAITO is an operator suite that automates LLM model inference, fine-tuning, and RAG (Retrieval Augmented Generation) engine deployment in a Kubernetes cluster.\nKAITO has the following key differentiations compared to other inference model deployment methodologies:\n\n- Simplify the CRD API by removing detailed deployment parameters. The controller provides optimized preset configurations for key inference engine scheduling parameters such as pipeline parallelism (PP), data parallelism (DP), tensor parallelism (TP), max model length, etc.\n- Use node auto provisioner (NAP) to provision GPU resources with accurate model memory estimation, enabling the controller to pick the optimal node count for distributed inference.\n- Leverage GPU node built-in local NVMe as model storage — no extra storage is required for inference.\n- Support any [vLLM](https:\u002F\u002Fgithub.com\u002Fvllm-project\u002Fvllm)-supported HuggingFace models.\n\n## Architecture\n\nKAITO follows the classic Kubernetes Custom Resource Definition (CRD)\u002Fcontroller design pattern for workload orchestration and integrates with [Gateway API Inference Extension](https:\u002F\u002Fgateway-api-inference-extension.sigs.k8s.io\u002F) to support LLM-based routing.\n\u003Cdiv align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fkaito-project_kaito_readme_e39f81e995a0.png\" width=100% title=\"KAITO architecture\" alt=\"KAITO architecture\">\n\u003C\u002Fdiv>\n\n- **Workspace**: The CRD that serves as the basic building block for managing LLM inference\u002Ftuning workloads. The API provides a largely simplified experience for deploying an LLM model in Kubernetes - the user provides the GPU instance type and the HuggingFace model ID, the controller will:\n  - Estimate the GPU memory requirement based on the GPU instance type and model metadata, and calculate the required GPU count;\n  - Trigger GPU node auto-provisioning by integrating with Karpenter APIs ([NodePool](https:\u002F\u002Fkarpenter.sh\u002Fdocs\u002Fconcepts\u002Fnodepools\u002F));\n  - Configure the inference engine parameters for single node\u002Fmultiple nodes inference with optimized scheduling based on the GPU hardware topology.\n\n  Currently, only the **vLLM** engine is supported. LoRA adapters are supported. KVCache offloading is enabled by default.\n- **InferenceSet**: The CRD designed for managing the number of replicas of workspace instances for the same model. It is primarily used to autoscale the workspace based on inference request load. It reacts to scale-up\u002Fdown actions determined by a KEDA autoscaler that uses vLLM metrics collected by a [KEDA plugin](https:\u002F\u002Fgithub.com\u002Fkaito-project\u002Fkeda-kaito-scaler).\n- **InferencePool**: KAITO integrates [Gateway API Inference Extension](https:\u002F\u002Fgateway-api-inference-extension.sigs.k8s.io\u002F) by creating corresponding InferencePool object and EPP (Endpoint Picker, which enables KVCache-aware routing) per InferenceSet. 
It can work with any external gateway that supports the inference extension.\n\n\n> Note: In this repo, an open-source [gpu-provisioner](https:\u002F\u002Fgithub.com\u002FAzure\u002Fgpu-provisioner) is used in the E2E test and is referred to in various documents. KAITO can work with any other node provisioners that support the [Karpenter-core](https:\u002F\u002Fsigs.k8s.io\u002Fkarpenter) APIs.\n\nKAITO also supports a **RAGEngine** operator. It streamlines the process of managing a Retrieval Augmented Generation (RAG) service.\n\u003Cdiv align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fkaito-project_kaito_readme_dd3bba7fed24.png\" width=90% title=\"KAITO RAGEngine architecture\" alt=\"KAITO RAGEngine architecture\">\n\u003C\u002Fdiv>\n\n  - **RAGEngine**: The CRD that defines the components of a RAG service, including the LLM endpoint (optional), the embedding service and the vector DB. The controller will create all required components.\n  - **Vector database**: Supports a built-in [FAISS](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Ffaiss) in-memory vector database (default), and Qdrant\u002FMilvus persistent databases if specified.\n  - **Embedding**: Supports both local and remote embedding services to embed documents in the vector database.\n  - **RAGService**: The core service that leverages the [LlamaIndex](https:\u002F\u002Fgithub.com\u002Frun-llama\u002Fllama_index) orchestration. It supports commonly used APIs such as `\u002Findex` for indexing documents, `\u002Fv1\u002Fchat\u002Fcompletion` for intercepting LLM calls to append retrieved context automatically, and `\u002Fretrieve` for integrating with MCP servers. The `\u002Fretrieve` API uses the Reciprocal Rank Fusion (RRF) hybrid search algorithm to combine the results from both BM25 sparse retrieval and vector dense retrieval.\n  \nThe details of the service APIs can be found in this [document](https:\u002F\u002Fkaito-project.github.io\u002Fkaito\u002Fdocs\u002Frag).\n\n\n## Getting Started \n- **Installation**: Please check the guidance [here](https:\u002F\u002Fkaito-project.github.io\u002Fkaito\u002Fdocs\u002Finstallation) for installing core components (Workspace, InferenceSet) using helm and [here](https:\u002F\u002Fgithub.com\u002Fkaito-project\u002Fkaito\u002Fblob\u002Fmain\u002Fterraform\u002FREADME.md) for installation using Terraform.\n- **Quick Start**: Please check the quick start guidance [here](https:\u002F\u002Fkaito-project.github.io\u002Fkaito\u002Fdocs\u002Fquick-start) for running your first model using KAITO!\n- **AutoScaling**: Please check this [doc](https:\u002F\u002Fkaito-project.github.io\u002Fkaito\u002Fdocs\u002Fkeda-autoscaler-inference) for configuring KAITO and KEDA to enable autoscaling inference workload.\n- **BYO models using HuggingFace runtime**: If you plan to run any BYO models using the HuggingFace runtime, check this [doc](https:\u002F\u002Fkaito-project.github.io\u002Fkaito\u002Fdocs\u002Fcustom-model). 
Note: KAITO only supports BYO models hosted in HuggingFace.\n- **CPU models**: Please check this [doc](https:\u002F\u002Fkaito-project.github.io\u002Fkaito\u002Fdocs\u002Faikit) for running CPU models using [aikit](https:\u002F\u002Fgithub.com\u002Fkaito-project\u002Faikit\u002F).\n- **RAGEngine**: Please check the installation guidance and usage documents [here](https:\u002F\u002Fkaito-project.github.io\u002Fkaito\u002Fdocs\u002Frag).\n\n\n## Contributing\n\n[Read more](https:\u002F\u002Fkaito-project.github.io\u002Fkaito\u002Fdocs\u002Fcontributing)\n\u003C!-- markdown-link-check-disable -->\nThis project welcomes contributions and suggestions. The contributions require you to agree to a\nContributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us\nthe rights to use your contribution. For details, visit [CLAs for CNCF](https:\u002F\u002Fgithub.com\u002Fcncf\u002Fcla?tab=readme-ov-file).\n\nWhen you submit a pull request, a CLA bot will automatically determine whether you need to provide\na CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions\nprovided by the bot. You will only need to do this once across all repos using our CLA.\n\nThis project has adopted the CLAs for CNCF, please electronically sign the CLA via\nhttps:\u002F\u002Feasycla.lfx.linuxfoundation.org. If you encounter issues, you can submit a ticket with the\nLinux Foundation ID group through the [Linux Foundation Support website](https:\u002F\u002Fjira.linuxfoundation.org\u002Fplugins\u002Fservlet\u002Fdesk\u002Fportal\u002F4\u002Fcreate\u002F143).\n\n## Get Involved!\n\n- Visit [#KAITO channel in CNCF Slack](https:\u002F\u002Fcloud-native.slack.com\u002Farchives\u002FC09B4EWCZ5M) to discuss features in development and proposals.\n- We host a weekly community meeting for contributors on Tuesdays at 4:00pm PST. Please join here: [meeting link](https:\u002F\u002Fzoom-lfx.platform.linuxfoundation.org\u002Fmeeting\u002F99948431028?password=05912bb9-53fb-4b22-a634-ab5f8261e94c).\n- Reference the weekly meeting notes in our [KAITO community calls doc](https:\u002F\u002Fdocs.google.com\u002Fdocument\u002Fd\u002F1OEC-WUQ2wn0TDQPsU09shMoXn5cW3dSrdu-M43Q79dA\u002Fedit?usp=sharing)!\n\n## License\n\nSee [Apache License 2.0](LICENSE).\n\n[![FOSSA Status](https:\u002F\u002Fapp.fossa.com\u002Fapi\u002Fprojects\u002Fgit%2Bgithub.com%2Fkaito-project%2Fkaito.svg?type=large)](https:\u002F\u002Fapp.fossa.com\u002Fprojects\u002Fgit%2Bgithub.com%2Fkaito-project%2Fkaito?ref=badge_large)\n\n## Code of Conduct\n\nKAITO has adopted the [Cloud Native Compute Foundation Code of Conduct](https:\u002F\u002Fgithub.com\u002Fcncf\u002Ffoundation\u002Fblob\u002Fmain\u002Fcode-of-conduct.md). 
For more information see the [KAITO Code of Conduct](CODE_OF_CONDUCT.md).\n\n\u003C!-- markdown-link-check-enable -->\n## Contact\n\n- Please send emails to \"KAITO devs\" \u003Ckaito-dev@microsoft.com> for any issues.\n","# Kubernetes AI 工具链运营商（KAITO）\n\n![GitHub 发布](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fv\u002Frelease\u002Fkaito-project\u002Fkaito)\n[![Go Report Card](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fkaito-project_kaito_readme_2b4a70945b89.png)](https:\u002F\u002Fgoreportcard.com\u002Freport\u002Fgithub.com\u002Fkaito-project\u002Fkaito)\n![GitHub go.mod Go 版本](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fgo-mod\u002Fgo-version\u002Fkaito-project\u002Fkaito)\n[![codecov](https:\u002F\u002Fcodecov.io\u002Fgh\u002Fkaito-project\u002Fkaito\u002Fgraph\u002Fbadge.svg?token=XAQLLPB2AR)](https:\u002F\u002Fcodecov.io\u002Fgh\u002Fkaito-project\u002Fkaito)\n[![FOSSA 状态](https:\u002F\u002Fapp.fossa.com\u002Fapi\u002Fprojects\u002Fgit%2Bgithub.com%2Fkaito-project%2Fkaito.svg?type=shield)](https:\u002F\u002Fapp.fossa.com\u002Fprojects\u002Fgit%2Bgithub.com%2Fkaito-project%2Fkaito?ref=badge_shield)\n\n| ![notification](website\u002Fstatic\u002Fimg\u002Fbell.svg) 有什么新内容！                                                                                                                                                                                                |\n| ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| 现在 KAITO 中可以运行所有 vLLM 支持的模型。 |\n| 最新版本：2026年4月15日。KAITO v0.10.0。 |\n| 首次发布：2023年11月15日。KAITO v0.1.0。 |\n\nKAITO 是一套运营商工具集，可在 Kubernetes 集群中自动化部署 LLM 模型推理、微调以及 RAG（检索增强生成）引擎。\n与其他推理模型部署方法相比，KAITO 具有以下关键优势：\n\n- 简化 CRD API，移除详细的部署参数。控制器为关键推理引擎调度参数提供优化的预设配置，例如流水线并行度 (PP)、数据并行度 (DP)、张量并行度 (TP)、最大模型长度等。\n- 使用节点自动供应器 (NAP) 根据准确的模型内存估算来供应 GPU 资源，使控制器能够选择分布式推理的最佳节点数量。\n- 利用 GPU 节点内置的本地 NVMe 作为模型存储——推理无需额外存储。\n- 支持任何 [vLLM](https:\u002F\u002Fgithub.com\u002Fvllm-project\u002Fvllm) 支持的 HuggingFace 模型。\n\n## 架构\n\nKAITO 遵循经典的 Kubernetes 自定义资源定义 (CRD)\u002F控制器设计模式来进行工作负载编排，并与 [Gateway API Inference Extension](https:\u002F\u002Fgateway-api-inference-extension.sigs.k8s.io\u002F) 集成以支持基于 LLM 的路由。\n\u003Cdiv align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fkaito-project_kaito_readme_e39f81e995a0.png\" width=100% title=\"KAITO 架构\" alt=\"KAITO 架构\">\n\u003C\u002Fdiv>\n\n- **Workspace**：作为管理 LLM 推理\u002F微调工作负载的基本构建块的 CRD。该 API 提供了一种大大简化的体验，用于在 Kubernetes 中部署 LLM 模型——用户只需提供 GPU 实例类型和 HuggingFace 模型 ID，控制器将：\n  - 根据 GPU 实例类型和模型元数据估算 GPU 内存需求，并计算所需的 GPU 数量；\n  - 通过集成 Karpenter API（[NodePool](https:\u002F\u002Fkarpenter.sh\u002Fdocs\u002Fconcepts\u002Fnodepools\u002F)）触发 GPU 节点自动供应；\n  - 根据 GPU 硬件拓扑优化调度，为单节点\u002F多节点推理配置推理引擎参数。\n\n目前仅支持 **vLLM** 引擎。支持 LoRA 适配器。默认启用 KVCache 卸载。\n- **InferenceSet**：专为管理同一模型的 Workspace 实例副本数量而设计的 CRD。主要用于根据推理请求负载对 Workspace 进行自动扩展。它会响应由 KEDA 自动伸缩器决定的扩容\u002F缩容操作，该自动伸缩器使用由 [KEDA 插件](https:\u002F\u002Fgithub.com\u002Fkaito-project\u002Fkeda-kaito-scaler)收集的 vLLM 指标。\n- **InferencePool**：KAITO 通过为每个 InferenceSet 创建相应的 InferencePool 对象和 EPP（端点选择器，可实现 KVCache 感知路由），集成 [Gateway API Inference Extension](https:\u002F\u002Fgateway-api-inference-extension.sigs.k8s.io\u002F)。它可以与任何支持推理扩展的外部网关协同工作。\n\n\n> 注意：在此仓库中，开源的 [gpu-provisioner](https:\u002F\u002Fgithub.com\u002FAzure\u002Fgpu-provisioner) 被用于端到端测试，并在各种文档中被提及。KAITO 
可以与任何其他支持 [Karpenter-core](https:\u002F\u002Fsigs.k8s.io\u002Fkarpenter) API 的节点供应器一起使用。\n\nKAITO 还支持 **RAGEngine** 运营商。它简化了管理检索增强生成 (RAG) 服务的流程。\n\u003Cdiv align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fkaito-project_kaito_readme_dd3bba7fed24.png\" width=90% title=\"KAITO RAGEngine 架构\" alt=\"KAITO RAGEngine 架构\">\n\u003C\u002Fdiv>\n\n  - **RAGEngine**：定义 RAG 服务组件的 CRD，包括 LLM 端点（可选）、嵌入服务和向量数据库。控制器将创建所有必需的组件。\n  - **向量数据库**：支持内置的 [FAISS](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Ffaiss) 内存向量数据库（默认），也可指定 Qdrant\u002FMilvus 持久化数据库。\n  - **Embedding**：支持本地和远程嵌入服务，用于将文档嵌入到向量数据库中。\n  - **RAGService**：利用 [LlamaIndex](https:\u002F\u002Fgithub.com\u002Frun-llama\u002Fllama_index) 编排的核心服务。它支持常用的 API，例如 `\u002Findex` 用于索引文档，`\u002Fv1\u002Fchat\u002Fcompletion` 用于拦截 LLM 调用并自动附加检索到的上下文，以及 `\u002Fretrieve` 用于与 MCP 服务器集成。`\u002Fretrieve` API 使用倒数排名融合 (RRF) 混合搜索算法，结合 BM25 稀疏检索和向量密集检索的结果。\n  \n有关服务 API 的详细信息，请参阅此 [文档](https:\u002F\u002Fkaito-project.github.io\u002Fkaito\u002Fdocs\u002Frag)。\n\n## 入门指南\n- **安装**：请参阅[此处](https:\u002F\u002Fkaito-project.github.io\u002Fkaito\u002Fdocs\u002Finstallation)的指南，了解如何使用 Helm 安装核心组件（Workspace、InferenceSet），以及[此处](https:\u002F\u002Fgithub.com\u002Fkaito-project\u002Fkaito\u002Fblob\u002Fmain\u002Fterraform\u002FREADME.md)的指南，了解如何使用 Terraform 进行安装。\n- **快速入门**：请参阅[此处](https:\u002F\u002Fkaito-project.github.io\u002Fkaito\u002Fdocs\u002Fquick-start)的快速入门指南，了解如何使用 KAITO 运行您的第一个模型！\n- **自动伸缩**：请参阅此[文档](https:\u002F\u002Fkaito-project.github.io\u002Fkaito\u002Fdocs\u002Fkeda-autoscaler-inference)，了解如何配置 KAITO 和 KEDA 以启用推理工作负载的自动伸缩功能。\n- **使用 HuggingFace 运行自定义模型**：如果您计划使用 HuggingFace 运行任何自定义模型，请参阅此[文档](https:\u002F\u002Fkaito-project.github.io\u002Fkaito\u002Fdocs\u002Fcustom-model)。请注意：KAITO 仅支持托管在 HuggingFace 上的自定义模型。\n- **CPU 模型**：请参阅此[文档](https:\u002F\u002Fkaito-project.github.io\u002Fkaito\u002Fdocs\u002Faikit)，了解如何使用 [aikit](https:\u002F\u002Fgithub.com\u002Fkaito-project\u002Faikit\u002F) 运行 CPU 模型。\n- **RAGEngine**：请参阅[此处](https:\u002F\u002Fkaito-project.github.io\u002Fkaito\u002Fdocs\u002Frag)的安装指南和使用文档。\n\n## 贡献\n[了解更多](https:\u002F\u002Fkaito-project.github.io\u002Fkaito\u002Fdocs\u002Fcontributing)\n\u003C!-- markdown-link-check-disable -->\n本项目欢迎贡献与建议。为确保我们能够合法使用您的贡献，所有贡献者需签署贡献者许可协议（CLA），声明您有权且确实授予我们使用您贡献的权利。有关详情，请访问 [CNCF 的 CLA](https:\u002F\u002Fgithub.com\u002Fcncf\u002Fcla?tab=readme-ov-file)。\n\n当您提交拉取请求时，CLA 机器人将自动判断您是否需要提供 CLA，并相应地标记您的 PR（例如添加状态检查或评论）。您只需按照机器人提供的指示操作即可。对于使用我们 CLA 的所有仓库，您只需完成一次此流程。\n\n本项目已采用 CNCF 的 CLA，请通过 https:\u002F\u002Feasycla.lfx.linuxfoundation.org 在线签署 CLA。如遇问题，您可通过 [Linux Foundation 支持网站](https:\u002F\u002Fjira.linuxfoundation.org\u002Fplugins\u002Fservlet\u002Fdesk\u002Fportal\u002F4\u002Fcreate\u002F143)向 Linux Foundation ID 团队提交工单。\n\n## 参与方式！\n\n- 访问 CNCF Slack 中的 [#KAITO 频道](https:\u002F\u002Fcloud-native.slack.com\u002Farchives\u002FC09B4EWCZ5M)，讨论正在进行的功能开发及提案。\n- 我们每周二下午 4:00 PST 为社区贡献者举办线上会议。欢迎加入：[会议链接](https:\u002F\u002Fzoom-lfx.platform.linuxfoundation.org\u002Fmeeting\u002F99948431028?password=05912bb9-53fb-4b22-a634-ab5f8261e94c)。\n- 请参考我们的[KAITO 社区会议记录文档](https:\u002F\u002Fdocs.google.com\u002Fdocument\u002Fd\u002F1OEC-WUQ2wn0TDQPsU09shMoXn5cW3dSrdu-M43Q79dA\u002Fedit?usp=sharing)，获取每周会议的详细内容！\n\n## 许可证\n参见 [Apache License 2.0](LICENSE)。\n\n[![FOSSA 
状态](https:\u002F\u002Fapp.fossa.com\u002Fapi\u002Fprojects\u002Fgit%2Bgithub.com%2Fkaito-project%2Fkaito.svg?type=large)](https:\u002F\u002Fapp.fossa.com\u002Fprojects\u002Fgit%2Bgithub.com%2Fkaito-project%2Fkaito?ref=badge_large)\n\n## 行为准则\nKAITO 已采纳 [云原生计算基金会行为准则](https:\u002F\u002Fgithub.com\u002Fcncf\u002Ffoundation\u002Fblob\u002Fmain\u002Fcode-of-conduct.md)。更多信息请参阅 [KAITO 行为准则](CODE_OF_CONDUCT.md)。\n\n\u003C!-- markdown-link-check-enable -->\n## 联系方式\n\n- 如有任何问题，请发送邮件至“KAITO 开发团队”\u003Ckaito-dev@microsoft.com>。","# KAITO 快速上手指南\n\nKAITO (Kubernetes AI Toolchain Operator) 是一个用于在 Kubernetes 集群中自动化大语言模型（LLM）推理、微调和 RAG（检索增强生成）引擎部署的算子套件。它通过简化的 CRD API 和自动节点配置，让开发者只需关注模型 ID 和 GPU 类型，即可轻松运行 vLLM 支持的 HuggingFace 模型。\n\n## 环境准备\n\n在开始之前，请确保您的环境满足以下要求：\n\n*   **Kubernetes 集群**：需要一个正在运行的 Kubernetes 集群（建议版本 1.23+）。\n*   **GPU 资源**：集群需具备 NVIDIA GPU 节点支持。\n*   **节点自动配置器 (Node Auto-Provisioner)**：\n    *   KAITO 依赖 **Karpenter** 进行 GPU 节点的自动扩缩容。\n    *   需安装支持 Karpenter-core API 的 GPU 配置器（如 `gpu-provisioner` 或其他兼容实现）。\n*   **命令行工具**：\n    *   `kubectl`：已配置好集群访问权限。\n    *   `helm`：v3.0+ 版本（用于安装 KAITO 核心组件）。\n*   **网络访问**：节点需要能够访问 HuggingFace Hub 以下载模型权重，以及访问必要的容器镜像仓库。\n\n> **注意**：KAITO 默认利用 GPU 节点内置的本地 NVMe 存储作为模型存储，无需额外配置持久化存储卷用于推理。\n\n## 安装步骤\n\n推荐使用 Helm 安装 KAITO 的核心组件（Workspace, InferenceSet 等）。\n\n### 1. 添加 Helm Chart 仓库\n\n```bash\nhelm repo add kaito https:\u002F\u002Fkaito-project.github.io\u002Fkaito-chart\nhelm repo update\n```\n\n### 2. 创建命名空间\n\n```bash\nkubectl create namespace kaito-system\n```\n\n### 3. 安装 KAITO Operator\n\n执行以下命令安装最新版本的 KAITO：\n\n```bash\nhelm install kaito kaito\u002Fkaito \\\n  --namespace kaito-system \\\n  --create-namespace \\\n  --wait\n```\n\n### 4. 验证安装\n\n检查 Pod 状态，确保所有组件处于 `Running` 状态：\n\n```bash\nkubectl get pods -n kaito-system\n```\n\n## 基本使用\n\nKAITO 的核心是通过 `Workspace` 自定义资源来部署模型。以下示例展示如何以最简单的方式部署一个 HuggingFace 模型（例如 `microsoft\u002Fphi-2`）。\n\n### 1. 定义 Workspace 资源\n\n创建一个名为 `workspace.yaml` 的文件。您只需指定目标 GPU 实例类型和 HuggingFace 模型 ID，KAITO 会自动计算所需的显存、GPU 数量并触发节点自动配置。\n\n```yaml\napiVersion: kaito.sh\u002Fv1alpha1\nkind: Workspace\nmetadata:\n  name: phi2-workspace\n  namespace: default\nresource:\n  labelSelector:\n    matchLabels:\n      app: phi2-inference\n  instanceType: \"Standard_NC6s_v3\" # 替换为您云厂商对应的 GPU 实例类型 (例如 AWS: g5.xlarge, Azure: Standard_NC6s_v3)\n  count: 1\ninference:\n  preset:\n    name: \"phi-2\" # 使用内置预设，或直接用 modelID\n  model:\n    name: \"microsoft\u002Fphi-2\" # HuggingFace Model ID\n  engine:\n    name: vllm\n```\n\n> **提示**：`instanceType` 需根据您的云服务商实际支持的 GPU 机型填写。KAITO 会结合该机型和模型元数据自动估算显存需求。\n\n### 2. 部署模型\n\n应用配置文件：\n\n```bash\nkubectl apply -f workspace.yaml\n```\n\n### 3. 监控部署进度\n\nKAITO 控制器将执行以下操作：\n1.  估算显存并计算所需 GPU 数量。\n2.  调用 Karpenter API 自动 provision 新的 GPU 节点。\n3.  在新节点上启动 vLLM 推理引擎并加载模型。\n\n查看状态：\n\n```bash\nkubectl get workspace phi2-workspace -w\n```\n\n当状态显示为 `Ready` 时，表示模型已就绪。\n\n### 4. 
访问模型服务\n\n部署完成后，KAITO 会创建一个 Service。您可以通过端口转发或集群内服务发现来访问 vLLM 的标准 API 接口（兼容 OpenAI 格式）。\n\n**本地测试示例：**\n\n```bash\n# 获取服务名称并端口转发\nexport SVC_NAME=$(kubectl get svc -l kaito.sh\u002Fworkspace-name=phi2-workspace -o jsonpath='{.items[0].metadata.name}')\nkubectl port-forward svc\u002F$SVC_NAME 8080:80\n\n# 在另一个终端发送请求\ncurl http:\u002F\u002Flocalhost:8080\u002Fv1\u002Fcompletions \\\n  -H \"Content-Type: application\u002Fjson\" \\\n  -d '{\n    \"model\": \"microsoft\u002Fphi-2\",\n    \"prompt\": \"Hello, how are you?\",\n    \"max_tokens\": 50\n  }'\n```\n\n---\n\n**下一步探索：**\n*   **自动扩缩容**：结合 KEDA 和 `InferenceSet` CRD 实现基于负载的自动伸缩。\n*   **RAG 引擎**：使用 `RAGEngine` CRD 快速搭建包含向量数据库和 Embedding 服务的 RAG 应用。\n*   **自定义模型**：参考官方文档加载任意 vLLM 支持的 HuggingFace 模型。","某电商公司的算法团队需要在 Kubernetes 集群中快速部署并弹性伸缩一个基于 Llama 3 的智能客服模型，以应对大促期间的流量洪峰。\n\n### 没有 kaito 时\n- **资源估算困难**：工程师需手动计算模型显存占用，反复尝试才能确定所需的 GPU 数量和型号，常因配置不当导致 OOM（内存溢出）或资源浪费。\n- **部署流程繁琐**：编写复杂的 YAML 文件来配置 vLLM 的张量并行（TP）、流水线并行（PP）等参数，稍有错误即导致推理服务启动失败。\n- **扩缩容滞后**：面对突发流量，缺乏基于推理指标（如请求队列长度）的自动扩缩容机制，人工介入调整副本数往往来不及响应，造成用户请求超时。\n- **存储成本高**：需要预先配置昂贵的共享存储（如云盘）来存放模型权重，增加了基础设施成本和管理复杂度。\n\n### 使用 kaito 后\n- **智能资源调度**：只需指定模型 ID 和 GPU 类型，kaito 自动估算显存并计算最佳节点数，一键完成分布式推理环境的构建。\n- **配置极简自动化**：屏蔽了底层复杂的并行策略参数，kaito 根据硬件拓扑自动优化 vLLM 配置，大幅降低部署门槛和出错率。\n- **精准弹性伸缩**：结合 InferenceSet 与 KEDA，kaito 能实时监控 vLLM 指标并自动增减副本，轻松扛住大促流量峰值，闲时自动释放资源。\n- **本地存储优化**：直接利用 GPU 节点的本地 NVMe 缓存模型，无需额外挂载共享存储，既提升了加载速度又节省了成本。\n\nkaito 将原本需要数天调优的 LLM 部署工作缩短至分钟级，让团队能专注于业务逻辑而非基础设施运维。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fkaito-project_kaito_cdee2e1c.png","kaito-project","Kaito","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fkaito-project_28046849.png",null,"https:\u002F\u002Fgithub.com\u002Fkaito-project",[79,83,87,91,95,99,103,107,111,115],{"name":80,"color":81,"percentage":82},"Go","#00ADD8",70.5,{"name":84,"color":85,"percentage":86},"Python","#3572A5",22.8,{"name":88,"color":89,"percentage":90},"Jinja","#a52a22",1.5,{"name":92,"color":93,"percentage":94},"MDX","#fcb32c",1.4,{"name":96,"color":97,"percentage":98},"Makefile","#427819",1.2,{"name":100,"color":101,"percentage":102},"Shell","#89e051",0.9,{"name":104,"color":105,"percentage":106},"Dockerfile","#384d54",0.5,{"name":108,"color":109,"percentage":110},"HCL","#844FBA",0.4,{"name":112,"color":113,"percentage":114},"JavaScript","#f1e05a",0.3,{"name":116,"color":81,"percentage":114},"Go Template",919,169,"2026-04-18T07:33:40","NOASSERTION",5,"Linux","必需。支持 NVIDIA GPU，具体型号和显存大小由控制器根据模型元数据和实例类型自动估算；利用节点内置本地 NVMe 作为模型存储，无需额外存储。","未说明（由控制器根据模型自动计算并触发节点自动配置）",{"notes":126,"python":127,"dependencies":128},"1. 核心运行环境为 Kubernetes 集群，需集成 Karpenter 进行 GPU 节点自动配置（NAP）。2. 推理引擎目前仅支持 vLLM，默认启用 KVCache 卸载。3. 支持通过 HuggingFace ID 部署模型，控制器会自动优化并行策略（PP\u002FDP\u002FTP）。4. RAG 功能支持内置 FAISS 内存数据库或外部 Qdrant\u002FMilvus。5. 
不支持直接在本地操作系统运行，必须部署在 K8s 环境中。","未说明",[129,130,131,132,133,134,135,136,137],"Kubernetes","Karpenter","vLLM","Gateway API Inference Extension","KEDA","LlamaIndex","FAISS","Helm","Terraform (可选)",[13,14,15],[140,141,142,143],"ai","gpu","kubernetes","operator","2026-03-27T02:49:30.150509","2026-04-20T07:16:12.980865",[147,152,157,162,167,172],{"id":148,"question_zh":149,"answer_zh":150,"source_url":151},44073,"在 Azure Kubernetes Service (AKS) 中部署 Phi-3 模型时遇到 'failed calling webhook' 和 'EOF' 错误怎么办？","这是一个已知问题，代码修复已合并并将包含在下一个 Kaito 版本中。目前的临时解决方案是：不要修改 Terraform 脚本，而是直接使用 kubectl 针对 live k8s 集群，手动修改由 kaito controller 创建的 `deployment` 对象，并更新其中的 pod template。建议暂时尝试部署 Phi-3 Mini 模型，或等待下一个版本发布以获取正式修复。","https:\u002F\u002Fgithub.com\u002Fkaito-project\u002Fkaito\u002Fissues\u002F523",{"id":153,"question_zh":154,"answer_zh":155,"source_url":156},44074,"按照官方文档创建新集群并启用 OIDC issuer 后，应用 Workspace YAML 文件失败且状态显示 'the object has been modified' 错误，如何解决？","该错误通常发生在资源状态并发更新时。根据社区反馈，如果从 main 分支拉取代码遇到问题，可以尝试回退到稳定的发布版本（例如 0.4.4）。请确保使用的 YAML 示例版本与安装的 Kaito 控制器版本相匹配。如果问题依旧，请检查节点日志确认是否有 'InvalidDiskCapacity' 等底层资源问题，并尝试重新应用配置。","https:\u002F\u002Fgithub.com\u002Fkaito-project\u002Fkaito\u002Fissues\u002F1315",{"id":158,"question_zh":159,"answer_zh":160,"source_url":161},44075,"项目中的 Mariner\u002FAzure Linux 2 基础镜像即将停止支持（EOL），应该如何处理？","需要将 workspace 和 ragengine 的基础镜像更新为非 EOL 版本。目前社区正在处理此问题，计划将 `mcr.microsoft.com\u002Fcbl-mariner\u002Fdistroless\u002Fminimal:2.0` 替换为受支持的新版本镜像。贡献者可以关注相关 PR 或直接参与将基础镜像迁移到最新稳定版的工作。","https:\u002F\u002Fgithub.com\u002Fkaito-project\u002Fkaito\u002Fissues\u002F1215",{"id":163,"question_zh":164,"answer_zh":165,"source_url":166},44076,"在使用 `\u002Frag_edit` 命令时遇到 '403 Resource not accessible by integration' 错误是什么原因？","此错误表示 GitHub Actions 或集成工具没有足够的权限访问仓库资源以创建或更新文件内容。通常需要检查仓库的 Settings -> Actions -> General 设置，确保 'Workflow permissions' 被设置为 'Read and write permissions'，并且允许工作流创建 Pull Request。此外，需确认使用的 GitHub Token 具有相应的 scope。","https:\u002F\u002Fgithub.com\u002Fkaito-project\u002Fkaito\u002Fissues\u002F1546",{"id":168,"question_zh":169,"answer_zh":170,"source_url":171},44077,"执行 `\u002Frag_edit` 测试时出现 '404 Branch ... 
not found' 错误该如何解决？","该错误表明自动化代理试图推送到的分支不存在或无法被识别。这通常是由于本地环境与远程仓库状态不同步，或者自动化脚本在创建临时分支时失败。解决方法包括：确保当前分支是最新的（git pull），检查是否有权限创建新分支，或者手动创建对应的分支后再重试操作。如果是 CI\u002FCD 环境问题，可能需要检查运行器的网络连通性和仓库克隆状态。","https:\u002F\u002Fgithub.com\u002Fkaito-project\u002Fkaito\u002Fissues\u002F1556",{"id":173,"question_zh":174,"answer_zh":175,"source_url":176},44078,"如何防止在设置全局客户端（SetGlobalClient）时因传入 nil 值而导致程序恐慌（panic）？","需要在 `pkg\u002Fk8sclient\u002Fclient.go` 文件的 `SetGlobalClient` 函数中增加空值检查（nil check）和验证逻辑。在访问客户端对象或其属性之前，必须先判断输入参数是否为 nil。如果检测到 nil，应返回明确的错误信息而不是继续执行，从而避免运行时恐慌。这是增强代码健壮性的标准做法。","https:\u002F\u002Fgithub.com\u002Fkaito-project\u002Fkaito\u002Fissues\u002F1554",[178,183,188,193,198,203,208,213,218,223,228,233,238,243,248,253,258,263,268,273],{"id":179,"version":180,"summary_zh":181,"released_at":182},351598,"v0.10.0","## v0.10.0 - 2026-04-15\n\n**适用于 RAG 的 Qdrant 向量存储**\nRAG 引擎现支持将 Qdrant 作为向量后端，包括混合搜索功能。这为 RAG 工作负载扩展了存储灵活性和检索质量选项。\n\n**vLLM v0.17.1 带智能运行时调优**\nKAITO 升级至 vLLM v0.17.1，并引入了一种三层并行策略，可自动配置张量并行、流水线并行和数据并行。\n运行时的数据类型现在会根据 GPU 的计算能力动态选择。\nkaito.sh\u002Fperformance-mode 注解（交互性、平衡、吞吐量）提供了基于意图的调优功能，并直接映射到 vLLM 的性能模式控制。\n\n**内置 MT-Bench 评估**\n通过集成 MT-Bench，系统现可开箱即用地进行对话质量评估，并提供自动化的基准测试流程。\n\n**Azure GPU 抢占式实例支持**\nKAITO 现在支持 Azure GPU 抢占式实例用于节点池，有助于降低推理基础设施成本。\n\n**改进的推理可观测性**\nTPM 基准指标现在会被写入工作区和推理集的状态中。\n同时新增了 PVC 指标和预设使用情况跟踪，以提升运维可见性。\n\n**LoRA 适配器卷**\nLoRA 适配器现在可以直接从 PVC 卷挂载，为镜像打包的适配器提供了一种灵活的替代方案。\n\n**VS Code Copilot 插件**\n全新的 kaito-workspace Copilot 插件增加了推理技能，使 Copilot 工作流更加丰富。\n\n\n## 更改日志\n### 功能 🌈\n* 736b5bc2af60c5b62206675c0d0a281efb5e3521 feat: 支持使用 mt-bench 评估模型的对话能力 (#1937)\n* 30b146ac59f4cc39d0fbfed3279fd8e4838a73b9 feat: 将 phi-4 与 microsoft\u002Fphi-4 关联 (#1920)\n* 4dc9815c3cc7c6f314a730ddf717453f89f6e9e2 feat: 为 Azure 节点池添加 GPU 抢占式实例支持 #1837 (#1870)\n* bb8b0611ac1421afb310ca1a7f304630b7a6bb09 feat: 初始模型目录实现 (#1901)\n* 157f763129c48e56ffa60434c39f689aae0f914c feat: 为 InferenceSetTemplate 添加元数据 (#1881)\n* 235f73370f48606a7d33ab6637cddee875319e84 feat: 支持将卷用作适配器来源 (#1877)\n* 915e8b5b0c2610fc6672c5a652f5dcf2b5017b0c feat: 根据 GPU 计算能力动态确定数据类型 (#1866)\n* 5bb157d94e148dee8da2633de7ee06ac0b80c321 feat: 添加提案模板文档 (#1874)\n* 547890caab9c96d365299a89a9be189b32053382 feat: 支持为 vLLM 工作负载调优添加 performance-mode 注解 (#1865)\n* dacef87b60435e3130417e79c404c9158caf405d feat: 为 vLLM 运行时添加三层并行策略 (#1867)\n* cc5ff14d55fcb048037789b6e6f9dcca95e6b2a2 feat: 将 TPM 写入工作区和推理集 (#1864)\n* 1e3ba6cdd88046865913aef0aa2566ae176d719c feat: 添加 PVC 指标 (#1869)\n* 6d94fc5551a71477372d601f689f176025744f50 feat: 添加 Qdrant 原生 CRUD 替代、nest_asyncio 修复以及端到端测试 (#1845)\n* 67a450fdad3a002154a27b9089a66591e9c23e02 feat: 添加带有推理技能的 kaito-workspace Copilot 插件 (#1855)\n* d864f0184679a0407d74da969c34cd8b23f735ba feat: 添加模型 TPM 基准测试和日志记录 (#","2026-04-15T11:22:36",{"id":184,"version":185,"summary_zh":186,"released_at":187},351599,"v0.9.3","## v0.9.3 - 2026-03-19\n\n## 更改日志\n### 错误修复 🐞\n* 9ae7e7b281a57c47d5619980854b411242c99079 修复：移除 CVE-2026-25537 的 Trivy 忽略项，并从镜像中卸载 uv (#1854)\n* c41696596a9152d2f3a3c7f0d4c1a658f0187c50 修复：在多节点推理中禁用 kvcache CPU 卸载 (#1851)\n* ba911bf6d299f20aba3495335370f78bb969b7b0 修复：修复 usr\u002Flocal\u002Fbin\u002Fuv 中的 CVE 漏洞 (#1847)\n### 维护 🔧\n* 4ce9f875ea32d21a146889bdc50b859384564c79 杂项：将 google.golang.org\u002Fgrpc 从 1.78.0 升级到 1.79.3 (#1856)\n\n","2026-03-19T09:34:26",{"id":189,"version":190,"summary_zh":191,"released_at":192},351600,"v0.9.2","## v0.9.2 - 2026-03-16\n\n## 更改日志\n### 维护 🔧\n* a2b42a130201a0917d55519ea943ef26c419f709 杂项：将 NVIDIA K8s 设备插件升级至 v0.18.2-1 (#1843) 
(#1844)\n\n","2026-03-16T11:52:53",{"id":194,"version":195,"summary_zh":196,"released_at":197},351601,"v0.9.1","## v0.9.1 - 2026-03-10\n\n## 更改日志\n### 错误修复 🐞\n* 54ebf1ee4ba0034f779b17ff327c3bc3232d3953 修复：防止在自动生成的、GPU 参数为空的 vLLM 预设中出现 Webhook panic (#1824) (#1834)\n### 持续集成 💜\n* 4c52d476fcaad5b23c93ac8041ba7f9ec36109f4 ci：在 trivy 工作流中将 trivy 二进制版本固定为 v0.69.2 (#1817)\n### 维护 🔧\n* ec2e6168b17251d0b14566d8bf29162bf6242bed 杂项：升级到 go1.25 以修复 workspace 控制器的 CVE (#1832)\n* 07318c1ff4454f90ac48849c291ff0957ba38bb2 杂项：制作 kaito Helm 模板的 CRD (#1826) (#1833)\n\n","2026-03-10T22:21:49",{"id":199,"version":200,"summary_zh":201,"released_at":202},351602,"v0.9.0","## v0.9.0 - 2026-02-27\n**vLLM 运行时：运行任何兼容 vLLM 的模型**\nKAITO 现在支持运行任意兼容 vLLM 的模型——您只需提供一个 HuggingFace 仓库 ID 即可。GPU 显存、节点数量、存储以及代理配置（工具调用解析器、推理解析器）都会自动确定，无需手动调优。捆绑的 vLLM 版本已升级至 v0.14.1。\n\n**Transformers 运行时：兼容 OpenAI 的 API**\n基于 Transformers 的服务引擎现在公开了与 OpenAI 兼容的 API，同时继续支持 HuggingFace 生态中的所有模型。\n\n**新增 WorkspaceStatus.State 字段**\nWorkspaceStatus 中新增了一个状态字段，使用户能够一目了然地了解 Workspace 资源的当前生命周期状态（例如：正在部署、已就绪、失败）。\n\n**Azure Linux 节点支持**\nKAITO 现在支持 Azure Linux 节点池，从而扩展了可用于 GPU 工作负载的 AKS 配置范围。\n\n**RAG 服务的检索 API**\nRAG 引擎新增了一个 \u002Fretrieve API，允许调用方直接获取检索到的文档片段——这使得构建更灵活、更具代理能力的 RAG 流程成为可能，而无需完整的生成步骤。\n\n## 更改日志\n### 功能 🌈\n* dc99157e172abb6c89566ce48fca03ab7ef365b8 feat: 改进工作区控制器的日志记录 (#1802)\n* 0896cbb66616b62e8cbe3bc1af0b3ea3b1767387 feat: 将 kaito 的 gpu-provisioner 版本更新至 v0.4.1\n* 9b1bdc2fbd74a5ceca360d0714290abac9ec84df feat: 支持 Azure Linux 节点 (#1784)\n* 353cc69fa593c28d79b7a04e258f6ccbd2114210 feat: 生成 vLLM v0.14.1 支持的模型架构列表 (#1791)\n* d8c2bf6349fb4c495774f763d32464aa0748ea93 feat: 为 Azure Linux 场景添加端到端流水线 (#1792)\n* 43f39594607cac50d47c61a1b0ad388aed32f9ac feat: 将主镜像发布到 GHCR (#1776)\n* cd5a67c39544828162121a940f6b09c4c28192ae feat: 集成 Transformers 的 OpenAI 兼容服务引擎 (#1384) (#1765)\n* 714207cba2073fdcda4910a1520854fbdb8cffb7 feat: 更新工作区状态字段 (#1758)\n* bf977bc40b106323c148214635049e923fa6ce5e feat: 为更多精细化模型支持工具调用解析器和推理解析器 (#1766)\n* 5221259bb6a750aa136d941403033937e9959780 feat: 为 RAG 服务添加检索 API (#1732)\n* 9fa77e16b0b17c7b9202b2e14b48100c04802b24 feat: 在 WorkspaceStatus 中添加状态字段，用于展示工作区的当前状态。(#1745)\n* 29a58f4991800fa15643858f3d2d2544a45cee89 feat: 支持通用的 HuggingFace vLLM 推理模型 (#1727)\n* 18d7e852ffd5775cd488e51fa9195cc58553daf3 feat: 使用 Go 语言实现预设生成器 (#1726)\n* e42bb0418d25fe6cbe1330bb278990c764645327 feat: 简化 vLLM 推理模型的支持流程 (#1713)\n* 319419b5d3ce7c96bca4212e6e676166256d7bdb feat: 通过功能门控将 NVIDIA 设备插件的部署设置为可选 (#1707)\n### 错误修复 🐞\n* 9150e1ac90b07c4b48f0029ef5c6047ccf9e1c5c fix: 移除 ec2nodeclasss CRD (#1803)\n* 88061dee9f62b09689e58be67eb2baf365d9a1d1 fix: GHCR 服务镜像名称\n* 22fc27b4fae1649ef4a4ff60df47c","2026-02-27T12:07:23",{"id":204,"version":205,"summary_zh":206,"released_at":207},351603,"v0.8.1","## v0.8.1 - 2026-01-24\n\n## 更改日志\n### 功能特性 🌈\n* ea7c46d7ba2a75f3fb39748ac64525944d3664ac 功能：通过功能门控使 NVIDIA 设备插件部署成为可选 (#1707) [release-0.8] (#1739)\n### 错误修复 🐞\n* e50f715f837b61e18fcb3a49f3afbf4f7b02ef50 修复：在不稳定的单元测试中存在 Python 依赖冲突 (#1741)\n* 98d8eb7e56f65694563b2f83e8bd9697d330dac6 修复：BYO 节点模式下无法删除现有 Workspace (#1719) [release-0.8] (#1736)\n* a543a902effc5e4d532a80b72c24e49c4e7251f8 修复：当启用 isableNodeAutoProvisioning 且缺少 Karpenter CRD 时，控制器会崩溃 (#1725) [release-0.8] (#1729)\n### 维护 🔧\n* e9a05ae284fd9a68d52ff74eaf9052b8ba162b12 杂项：向 Helm Chart 添加更多参数 (#1724) (#1728)\n* e3c364078a2b2662b1e8a4e352dcde74d1888985 杂项：将 Helm Chart 发布为 OCI 工件 (#1717) (#1718)\n### 测试 💚\n* 1725b513565df6a442f2d6fa4f87710cb766887d 测试：为 BYO 模式添加升级兼容性测试 (#1730) [release-0.8] 
(#1740)\n\n","2026-01-24T01:17:32",{"id":209,"version":210,"summary_zh":211,"released_at":212},351604,"v0.8.0","## v0.8.0 - 2025-12-20\n\n本次发布引入了一项**破坏性变更**，即推理工作负载将统一为 StatefulSet。现有工作空间创建的 Deployment 资源将由控制器自动移除，并替换为新的 StatefulSet 资源。此迁移无需手动操作，预计推理服务因 Pod 重建会短暂中断。\n\n## 更改日志\n### 破坏性变更 💥\n* 3ab3f3d55a47d8ebb8cd26440c9f7732c259017a feat: [BREAKING] 将所有工作空间的工作负载改为使用 StatefulSet (#1523)\n\n### 功能特性 🌈\n* b9664848f4a36a0454147909f2f2dcc78d45640c feat: 更新 kaito 的 gpu-provisioner 版本至 v0.3.8 (#1698)\n* 91819b9e8141c8062c6b2bdb8f6b9ee2dcf8eb65 feat: 预设生成器支持通用模型格式和注意力机制架构 (#1690)\n\n### 错误修复 🐞\n* 1366f9a7d290453d01c10a800c8202b59bb8c6bb fix: 将 imagePullPolicy 设置为 Always (#1702)\n* 8945b5b74fdeb40d8a9e5f1cb05895b021faa764 fix: 修复 ragengine 端到端测试中的工作负载类型问题 (#1697)\n* dffd5f342d9a980f5ea6b8c5481004d693d6f357 fix: 修复 artifacthub 链接中的无效缩进问题 (#1683)\n* e5d77e5c0e34556add8542780f0976212d6c48b2 fix: 当最新版本为 perrelease 时取消该版本发布 (#1680)\n* e813c46021ac723a33bfd3fd61242e38889a2593 fix: 修复发布标签验证规则 (#1677)\n\n### 代码重构 💎\n* 47fcd2e81abebe5be281445f9dcf2af076e48926 refactor: 将 sku 计算模块化为通用预设生成器 (#1689)\n\n### 文档更新 📘\n* 318bf01fcb3c3ed293c4fddd0e53b10fda209b1c docs: 修复 keda-kaito-scaler 中命名空间文档的问题 (#1699)\n* 87c9c32cb9826bdfbb5d346f89cb5162df4b0179 docs: 在 keda 安装说明中使用 kaito-workspace (#1694)\n* eefd2b812866c0cf80c8a90a30d95cab51eaec59 docs: 在文档中添加 keda-autoscaler-inference 的伸缩示例 (#1682)\n* bbe61d70d6c4e9f5ca7fd45258b5859f266879af docs: 优化文档和示例中的命名 (#1681)\n* c78d68bf1bf6ca957c6f9fef8333454c59822982 docs: 添加 keda-autoscaler-inference 的文档 (#1679)\n\n### 维护与升级 🔧\n* 67deec5216d81015525b0b9c02c6ff9303febb92 chore: 将 Golang 升级至 1.24.11 (#1695)\n* 89aba34717840d8b892bdea4ba31eaf3c83e6166 chore: 使用 localcsi manager 提供的 PV 清理工具 (#1687)\n* 7911b002217ad03ba8a0a50edf957d0e2ec33378 chore: 修复 preset_generator 中 huggingface_hub 的版本问题 (#1693)\n* 0fabc5cd0e92a8bfb2b030aa803abae458bd8ce7 chore: 将 Ray 升级至 0.25.1 (#1684)\n* 3d33b893d6fec49c4e55417158eccfad8d3fd467 chore: 将 \u002Fwebsite 中的 js-yaml 从 3.14.1 升级至 3.14.2 (#1647)\n* 601ad7b0934acc88bbf24c3bd732c7adae0e3d43 chore: 将 \u002Fwebsite 中的 mdast-util-to-hast 从 13.2.0 升级至 13.2.1 (#1657)\n* e1efaa84aa0ce52ced281c87c8e485763f051661 chore: 为 RAG 引擎服务的 PV 支持进行端到端测试 (#1671)\n\n### 测试 💚\n* b1b8db290244d748482929af52c37e71038de295 test: 添加升级兼容性测试 (#1696)\n* 6eda95e2d1590da9592873c4fca407f16512da6a test: 在升级兼容性测试中使用 Helm v3 (#1692)\n* ff3a61d2ed411c5beed4419a1729d9769c2f6429 test: 添加 Chart 升级兼容性测试 (#1691)","2025-12-20T06:36:25",{"id":214,"version":215,"summary_zh":216,"released_at":217},351605,"v0.8.0-rc.0","## v0.8.0-rc.0 - 2025年12月8日\n\n## 更改日志\n### 破坏性变更 💥\n* 57feef8b7c1261de0ffc0ef41c3cc8826350ee39 chore: [破坏性] 弃用 phi-2 模型 (#1667)\n* 78a76deeafecfa92eda6b5deff1d0249c970d152 feat: [破坏性] 移除 \u002Fquery API 调用，并添加 FastAPI 信息和标签 (#1621)\n### 功能特性 🌈\n* 475f94e3424f466ed23a9463b3e737855f1578c3 feat: 支持小版本号格式 (x.y.z-rc.w) (#1675)\n* f1cba23d82f5fbfe7b563623eb06a74f9d44d662 feat: 添加 mistral3 系列模型 (#1668)\n* 295068f48866c6741484c1cd8e78729c43d165bb feat: 为 RAG 服务添加 PV 支持 (#1660)\n* 5a8b5804dd0ae8a92453249e2b8c9b3d2bd9c99e feat: 利用 AIKit 进行预设图像打包 (#1649)\n* a97f28d09246502958acf1d420160d60d14a7730 feat: 添加对使用 NVIDIA GPU 特征发现的通用 BYO 节点的支持 (#1536)\n* 0a6ee7761c9de27333835cc505f6a96b3236ab7d feat: 使用 skopeo mcr 镜像 (#1630)\n* 9b1f6bd69e6674afe2075c9e034c18d4a9b37cf9 feat: 基于文档的 RAG 基准测试 (#1615)\n* 38f6846046892b9689da36415967aa19eeeb1090 feat: 将版本信息添加到 ua 和 cmd 中 (#1633)\n* 7933a2cb8519050da042b7344a9540138ae02cb3 feat: 为 RAG oai 客户端添加用户代理头 (#1622)\n* f73f9b748a8f0c23d47071eb3978833dc5de0b4e 
feat: 为使用 GPU 特征发现的 BYO 节点添加 Webhook 验证 (#1587)\n* a68bb82525655a851d75661729ddab14ebb93f30 feat: 在 RAG 服务中提供令牌使用情况 (#1605)\n* c96327283000368950645bfefca0983d8cba37b8 feat: 将 gpu-provisioner 版本更新至 v0.3.7，用于 kaito (#1604)\n* 251d9a2b4e87c686bee6017042fe3545b738872c Revert “feat: 支持 arm64 容器镜像” (#1603)\n* 198408ad8cf681214fdf1eabd6e3079bd3ab061b feat: 支持 arm64 容器镜像 (#1585)\n* 22afc334e21f5162e25f5fd5c254acbda9bd944a feat: 添加新的 InferenceSet CRD 和控制器，以自动扩展推理工作负载 (#1522)\n* ff7dd8d39be1c27b0f1acac67286dd5044b33d52 feat: 添加 NVIDIA GPU 特征发现 Helm 图表 (#1586)\n* 45f85fd112ff93f6f8a77a76f53ef9dd61efc99e feat: 添加 gemma-3 4B 和 27B 模型 (#1572)\n* 42331a4c70bb30d8b876a3356437d4b27e1e394a feat: 在 RAG 列表文档响应中添加 total_items (#1578)\n### 错误修复 🐞\n* fe21280d0aba9f6bfe56b219c8bdd91902bbb25c fix: 修正 csi-local-node ds 标签 (#1672)\n* cef240d4ded8828f258b635d103785964904c005 fix: 将 GatewayAPIInferenceExtension 移至 InferenceSet 控制器 (#1656)\n* 670d30bbfa2161b9be9c9390fd52160802e90667 fix: 在 Helm 图表配置中添加 enableInferenceSetController (#1651)\n* 6437e6c4fef3e642a8ccdf957f77e1403b040669 fix: 为 InferenceSet 生成的推理 Pod 添加新标签 (#1645)\n* 45d68fd4e86607a9e56233160d29b18837b11d10 fix: 在图表中添加缺失的 inferenceset CRD (#1643)\n* cd30f25b14a9372fb593e44bafdfd736ba7921ef fix: 将 findutils 添加为 skopeo 镜像的运行时依赖项 (#1632)\n* 3eeb3725965c4bc2144c816b5f2233cb8f607c18 fix: 将 kaito-base 镜像中的 pip 升级至 25.3 (#1629)\n* fe73c4b6dbbe587aa46c6ebad2db1130c0114ac3 fix: 向 skopeo 工作流中添加缺失步骤 (#1614)\n* e8d798baa125f44c9706f3a4f8","2025-12-08T11:07:52",{"id":219,"version":220,"summary_zh":221,"released_at":222},351606,"v0.7.2","## v0.7.2 - 2025-11-01\n\n## 更改日志\n### 功能 🌈\n* 1d3e45516cc5a6dc58bf3ba930e94906aaeb7a2a 特性：使用 skopeo mcr 镜像 [release-0.7] (#1635)\n### 错误修复 🐞\n* 763b41337622eb58a4d10fd0aaf809256e79af3b 修复：在 kaito-base 和 ragservice 镜像中将 pip 升级至 25.3 [release-0.7] (#1631)\n### 维护 🔧\n* c0480274250788887788d1b704b7866e6dc12b00 杂项：使 local-csi-driver 成为 Helm 的依赖项 (#1483) [release-0.7] (#1634)\n\n","2025-11-01T06:29:30",{"id":224,"version":225,"summary_zh":226,"released_at":227},351607,"v0.7.1","## v0.7.1 - 2025-10-09\n\n## 更改日志\n### 错误修复 🐞\n* 87db0e6363eca6cbe98dc21c122fdffa950b0aba 修复：当未找到上下文时，直接传递给 LLM (#1542) [release-0.7]  (#1550)\n* c9b1a0fb137229254993a419ac93b05ecbf4e7d2 修复：BYO 的 ResourceReady 条件永远不会被设置为 true [release-0.7]  (#1549)\n\n","2025-10-09T20:53:27",{"id":229,"version":230,"summary_zh":231,"released_at":232},351608,"v0.7.0","## v0.7.0 - 2025-09-24\n\n## Changelog\n### Breaking Changes 💥\n* dc15b1644b198683353b7ac2d3ea290c6fccc3ab feat: [BREAKING] adding context size window to rag spec (#1392)\n### Features 🌈\n* 3285259c6c0c1621130c1f02f94462d17fb582fc feat: update gpu-provisioner version to v0.3.6 for kaito (#1512)\n* dd8031855547b6a40e56a4762ee5922f0d4a85c2 feat: refactor workspace controller to support NodeEstimator and Workspace.Status.TargetNodeCount (#1477)\n* d2424134e408962a68471284335c48d324c6be33 feat: Max token calculator (#1507)\n* fedf505cb321f6aa4f0761922db412f9b548a216 feat: node calculator (#1435)\n* 509f7acd0ff722163d4b5ea6559be1d88ab52a08 feat: add support for GPT-OSS 20B & 120B models (#1442)\n* 434e8eec391c859d1de5b3db4ae4e7c4dbd19043 feat: configure workspace replicas, perReplicaNodeCount and TargetNodeCount (#1473)\n* 852b86395a9b9394138f0bb406ce0770f721cfa4 feat: add node manager for ensuring device plugin and accelerator label (#1475)\n* e8c84e745157fb4ffe47f17718bf5b2106affd80 feat: separate EnsureNodeClaims into ScaleUpNodeClaims and ScaleDownNodeClaims (#1461)\n* 2e8e8d9a98b1398527d9c63576e0ca74c4931305 feat: only run 
vector search on latest user message (#1427)\n* b12801d61247dc5f8f2acc1c107a9899a226374b feat: adding breaking change handling into goreleaser (#1450)\n* 8cbf9be2a89f9b47154dd6430dc91528cce8ebc2 feat: add nodeclaim manager for creating\u002Fdeleting nodeclaims of workspace (#1417)\n* f7d2142c0321e5dc9fef4f8f928a95fc0381a6e7 feat: add node estimator for calculating PerReplicaNodeCount (#1414)\n* 4f91b0c345eff70c87ed286c2f5f4ef758db56b8 feat: Enhanced Token Management and Context Selection for RAGEngine (#1404)\n* f46c916d626e801a324868f24adf311ac7e5a279 feat: update workspace CRD for supporting scale subresource API (#1349)\n* 24cea27a10bf017e3d355bf83f4c020ad0e8d80d feat: Add Gateway API Inference Extension support (#1252)\n* d8f24e4b6457d90189fffa6c4dc322e509d4d507 feat: add a feature gate to disable node auto-provisioning and validate the preferredNodes (#1337)\n* 3a92e9870833bd5bbec1ede3a17c6e0ce11b343f feat: offload kv cache to cpu RAM on vllm v1 (#1326)\n* 154e3ff7e7e6af07baa811a0764bc79b6d3ff18c feat: add Flux Helm controller as optional dependency in Helm chart (#1363)\n### Bug Fixes 🐞\n* ceeda0386220a1fbb31d006854fbfee663ca0240 fix: incompatible liveness probe check (#1505)\n* 0b1f9b6ae05cdd6e1423bf3d51b5ad0b97f8df02 fix: fixed failing rag e2e test (#1502)\n* 1db155cc6563fb63573ef046d06e5a815184215e fix: helm upgrade --install needs release name (#1510)\n* df55167fbeaf4f422ae29b61bc08f2ac448d2c14 fix: ensure a non-empty volumnMount is appended in puller containers (#1480)\n* c55c457f0c3c5c5170fa7833256b633505b4d605 fix: add presets\u002Fragengine to rag e2e test workflow (#1462)\n* 2355ccb88533b4d447c719fc77dec780e9f5f8ce fix: add nodeSelector on Linux to avoid crash on Windows node (#1446)\n* 9d57e99366f1e00a683ac427cf0ff278df500c16 fix: add nodeaffinity for tuning job (#1429)\n* fe7648fd4cacd3bbef9c6ceefe90734383dc10ae Revert \"fix: add missing pynvml package dependency\" (#1426)\n* 909dcd6cdc4dc2ee48e9f0214c787dcc5eef374a fix: add missing pynvml package dependency (#1424)\n* 20bd85ae0961ad55f2f058a4b01ed38f500a4b59 fix: fix chat template for phi4mini (#1422)\n* 3a911902893a212926e82cc65b13c05b0b61756f fix: cannot read kv-cache-cpu-memory-utilization from ConfigMap (#1415)\n### Code Refactoring 💎\n* 3ffe0d0fe89b3d812dd738d8a437b7b3bfb641ce refactor: move image building step out of PRs (#1423)\n### Continuous Integration 💜\n* 8f63cb20e5508d24713396603db5706044317e6b ci: add workflow_dispatch options to publish pipelines (#1490)\n* 54dab791f0bd0103ff4a5f0402739e67c960fa84 ci: update .github\u002Fdependabot.yaml with the right file paths (#1433)\n* 9e20ccec0b454ab9445ab206871b58c92e0ceae6 ci: add pipeline to generate versioned docs for each minor version (#1333)\n* 3182f2593ea498baab98a16c82f4a171efe564ac ci: aikit test (#1395)\n### Documentation 📘\n* 9cbe558f60b42672b6d0cfdf02be78e32e718f60 docs: Fix KAITO acronym alignment (#1501)\n* c174bdb94da0d3b60fe7c39b62d51449f69a8213 docs: install helm charts via helm repository instead of using tarballs (#1491)\n* 2cb4f126b1a3502bf9ddc6af1da6269945812753 docs: add implementation strategy for BYO nodes proposal (#1474)\n* 6a550e1ecf612b4185b6a5b8ea2c7e9ef820b723 docs: update preset onboarding doc with examples (#1471)\n* c6704173f1777e00e20f89cf4b577679a1fc93fc docs: add BYO nodes redesign proposal (#1412)\n* 9b18fe8ccce9a8beffdeaaa86cc603e74d37cfc8 docs: add links and note to GPU benchmarks doc (#1466)\n* b0fca845df4a9f2287b4ee3a3da00e84045292ff docs: update istio environment variable in installation step (#1464)\n* 
14342418f1339a0e18a3b1e7c6905289d7350e73 docs: fix bits of spellings and revise code of conduct for clarity (#1460)\n* 6a9747721095152d193d007031a3ae2de05bc393 docs: update slack community links to cncf workspace (#1459)\n* 6db2ce6fa6b54bc45b5b18d0ab15f89b0a8c24b9 docs: add docs for Gateway API Inference Extension (#1434)\n* eebfeffe5b5b15e3781bb472bf42de507f27f425 docs: update aikit docs to fix ","2025-09-24T03:25:29",{"id":234,"version":235,"summary_zh":236,"released_at":237},351609,"v0.6.2","## v0.6.2 - 2025-09-11\n\n## Changelog\n### Bug Fixes 🐞\n* ac8baa5206fa8decbf76d659e7df642a8b1ce577 fix: ensure a non-empty volumnMount is appended in puller containers (#1487)\n\n","2025-09-11T03:40:41",{"id":239,"version":240,"summary_zh":241,"released_at":242},351610,"v0.6.1","## v0.6.1 - 2025-09-03\n\n## Changelog\n### Maintenance 🔧\n* 6c01aace6adc567e0bd51fe13a20efa140568630 chore: bump local-csi-driver v0.2.3 -> v0.2.4 -> v0.2.5  (#1468)\n\n","2025-09-03T22:44:29",{"id":244,"version":245,"summary_zh":246,"released_at":247},351611,"v0.6.0","## v0.6.0 - 2025-08-08\r\n\r\nThis release includes these major changes:\r\n\r\n- Added support for DeepSeek-R1\u002FDeepSeek-V3 models.\r\n- Added \u002Fv1\u002Fchat\u002Fcompletions API for RAGEngine.\r\n- Provided better UX for preferred nodes and cpu nodes.\r\n- Updated documentation with new features, integrations.\r\n- Added NVIDIA A10 GPU to the supported SKUs.\r\n\r\n## Changelog\r\n### Features 🌈\r\n* 253d6aad0218c514ffb861c83830dfaad85ecf3d feat: add \u002Fv1\u002Fchat\u002Fcompletions API for RAGEngine (#1277)\r\n* dc464188acd42c9fdbae38a3a8fa1792a0a22375 feat: add deepseek-r1\u002Fdeepseek-v3 model (#1251)\r\n* bfd271af78827c004625c82d18b01443e4be4523 feat: adding RAGEngine CRD shortName and ServiceReady status column (#1336)\r\n* 45cbbf1638fac2eab54dcdc9d4902433047f1a21 feat: support Preferred node in RAG (#1327)\r\n* ee0c3d0b8d48a404b735e3f45273498648692890 feat: add `make help` target to Makefile (#1248)\r\n\r\n### Bug Fixes 🐞\r\n* e39f82f1db07260324cf3a3aad5b552ac35b1d5d fix: pin phi2 to vllm v0 (#1369)\r\n* 8f4fa750a5a8a0d195964a70dca20ec04e76e023 fix: fix bug where fetch GPU count was failing and defaulting (#1338)\r\n* 06f4cbda11d145d958be269d3649bb8ecd7c19fa fix: resolve pydantic deprecation warnings (#1317)\r\n* 78ef22d014d32001e9ae9b5bbf118a41364ee2b5 fix: image link error in scaler proposals (#1318)\r\n* 2163923dbc573e55ca1cb8789ab5ca1877ace369 fix: get gpu config from status if preferred nodes provided (#1308)\r\n* 76e099e1a95e6108587a994286c869186f52f19f fix: avoid extra node creation on informercache delay (#1311)\r\n\r\n### Code Refactoring 💎\r\n* d2ac05950809d90746eeeb04b684993295dde489 refactor: adopt generator pattern in fine-tuning part (#1292)\r\n* aea0a4240a8a280dfdc301449099c6c9b3089a4f refactor: introduce manifest generator -- part 1 (#1284)\r\n### Continuous Integration 💜\r\n* 461f75cd536b9ede07667b44bcc3e31ff7d6f13a ci: update release branch prefix to 'release-' (#1371)\r\n* a42040ddbb64b8002172b0389a948eda85bbe177 ci: Expand trivy scanning to other images (#1161)\r\n### Documentation 📘\r\n* 5fe9a44dbfd68560479effa7696b3fad2db1ff96 docs: add release management (#1360)\r\n* ca4b6e2b1a11f4764ab7d7d74a536ae05ce0a7fa docs: add chat\u002Fcompletions rag docs and split install\u002Fapi docs (#1334)\r\n* d50af9dc7504d860c099967ca74b0eb6315ebf65 docs: Document Model-As-OCI-Artifacts feature (#1359)\r\n* 3ea17ff0caea92a8664da1844f0b74063368bac0 docs: fix ConfigMap creation sequence in example docs (#1348)\r\n* 
fa41991bd6bc7a2558432a1e17f2071d6b9a04ae docs: add aikit to integrations (#1303)\r\n* 111267355e2a3d55a09dd9ca6b6c6cd2d04451a2 docs: Kaito kubectl cli proposal (#1230)\r\n* 5e750b86af160b1917c502b911eb5e3cf33a07a0 docs: Update documentation to use chat completions API instead of deprecated completions API (#1340)\r\n* 4a75426ad1f7dc2e8a642004b9f7b55b8833fcf5 docs: verify docs site with Algolia (#1328)\r\n* 336d7be2a854ae72c85ca623f0a14a3873bbb841 docs: add Headlamp-KAITO to documentation (#1314)\r\n* 03d8665f10826de68571a8de961208481c4877c0 docs: add search to website using Algolia (#1302)\r\n* ff9342626a6c83b0e900284f32ca3331db554c5a docs: update installation docs to support different cloud providers (#1247)\r\n* fca729158e43e0d6598115e6cd8fb74f370b3739 docs: publish v0.5.1 docs (#1289)\r\n\r\n### Maintenance 🔧\r\n* 4ad29f87f8794cfa05ab6cdaadffc14f175e5eff chore: bump base image to 0.0.5 (#1364)\r\n* 91b38b1db6511e4e100881c0f00463cfc27883ff chore: bump actions\u002Fsetup-go from 5.4.0 to 5.5.0 (#1345)\r\n* 5acd3c7c9e743db218a9e01af80c4b83e37e6c43 chore: rename Go files to use underscore file naming convention (#1361)\r\n* 03898555a1016b18073a415667b8ef7dac5ba668 chore: fix references to yaml files and rename bugs (#1347)\r\n* 152c0e80b58a6c3c3271f5d065815c577d624b78 chore: rename .yml to .yaml extension in GH actions for consistency (#1339)\r\n* f00cad7d563d3be0550b3ad06c255882551626a4 chore: bump actions\u002Fcache from 4.2.2 to 4.2.3 (#1344)\r\n* db679e8430e0baa49175dff949be01becc96b013 chore: Revert the model used in the rag e2e test back to Phi-3 (#1342)\r\n* ea6216f35ea43a0ac313deb9227e517689b45850 chore: update node sku for e2e tests (#1341)\r\n* af43eb5c6353645e3ea9fd2ab6c87d76771fe0ff chore: remove unnecessary sleep in test (#1332)\r\n* fc4a847cff0203555d23c402c9ff0be116dc2fba chore: bump actions\u002Fsetup-node from 4.3.0 to 4.4.0 (#1324)\r\n* 8c7ac49e31fa4181d91a6487abb30470f6990028 chore: reduce verbose logs and unnecessary reconcile (#1312)\r\n* 725da5436ff674bf266698a7411767e383d0b7ee chore: bump starlette from 0.40.0 to 0.47.2 in \u002Fpresets\u002Fworkspace\u002Fdependencies (#1290)\r\n* 3f85a58e46943e0521f13e41cf4508f7c69147ea chore: bump step-security\u002Fharden-runner from 2.12.2 to 2.13.0 (#1287)\r\n\r\n\r\n","2025-08-08T07:51:59",{"id":249,"version":250,"summary_zh":251,"released_at":252},351612,"v0.5.1","## v0.5.1 - 2025-07-21\r\n\r\n## Changelog\r\n### Features 🌈\r\n* 10532a7782e5714d5b2a62c1476837e7f0465653 feat: expose kaito supported models via configmap (#1265)\r\n\r\n### Bug Fixes 🐞\r\n* b71e7d39c02e99637bae7a80f93cfb676583a1f7 fix: check nv plugin only for known gpu sku (#1275)\r\n* 2ed2d2396ba89ed5ad61fafbdef56bcb4f8e659f fix: fix deploy-docs.yml (#1259)\r\n* dc8576a48af55d953096f48a523589d92e500d84 fix: helm: StorageClass upgrades & rollbacks idempotent (#1242)\r\n* 0a7b40ee9ef25e48eeb4e3de9da01746f40742f4 fix: correct kaito-rag-service mcr image registry\r\n* 03037fe12c4cdef8754c3ca3cd8d1daf2bcc50a9 fix: set individual charts path for helm chart release\r\n* ae6edb7c47399710adcef87a3b86838159e80e0e fix: specify helm charts target in pipeline (#1235)\r\n\r\n### Documentation 📘\r\n* 7db5a6ba230ed3a8bc5a13ae0905dd8538122697 docs: add versioned docs (#1281)\r\n* d8fa8c52a7f5737a875f682ff7fd0e2868e48839 docs: add proposal for Gateway API Inference Extension (#1274)\r\n* db62cadf7ead5bcaeead9db2f715cf4a38eace54 docs: add comprehensive multi-node inference documentation (#1270)\r\n* 48dd41e4b13017452649e1189688b0b99116b5b2 docs: Update README.md with meeting notes 
link (#1272)\r\n* 8bd1ce69402e481ab4a6b7a9940f44dc0897aaa9 docs: update README.md (#1258)\r\n* 57b6cf18855037e19c2f68d1944c575eb90d2a89 docs: updage RAG Docs to remove rerank options (#1255)\r\n* d40ab2c7a4be035e2cdd20bce683055c1bb6cd5e docs: update community meeting info (#1254)\r\n* 5f3e80de924b9ec0e273df5741fa5ce37c427c8c docs: add keda scaler for kaito workloads proposal (#1237)\r\n* cefc33c5ef24cdfce7aabfa8e5fab2eac8d3d202 docs: update owners (#1250)\r\n* 03332f9ddd85a2e6012b8b6c145816bf01bc0cd2 docs: add tool calling and MCP examples (#1246)\r\n* 0eeef8187f61fb900734a71d482e65e3a3388e18 docs: add explanation for max-model-len requirement (#1239)\r\n\r\n### Maintenance 🔧\r\n* 62de3ec15569f83587a4a51e55e4d89683166673 chore: skip flaky test in ragengine e2e\r\n* e9159565fa9b0e8b5416b861ab7ad6eece4022b5 chore: bump local-csi-driver to v0.2.3 (#1285)\r\n* e78b0fc490df849d35d72db288ccaccdecfb2119 chore: bump on-headers and compression in \u002Fwebsite (#1283)\r\n* 589cd14795ebbe39ea803b7c496ccd40fdd89ce1 chore: use tool-enabled chat templates when possible (#1278)\r\n* 57341e1a7f73902cc89b29f8b76f4e92f6609b6e chore: bump requests from 2.32.3 to 2.32.4 in \u002Fpresets\u002Fworkspace\u002Fdependencies (#1174)\r\n* 4309784a8468fa7bfe261be9067001d7b09027f9 chore: bump llama-index from 0.12.38 to 0.12.41 in \u002Fpresets\u002Fragengine (#1253)\r\n* 12a1ff455df19621cbe9596508c35c6c5a653dff chore: bump step-security\u002Fharden-runner from 2.11.0 to 2.12.2 (#1229)\r\n* 3490ce758a457d3cd6dccd540f270de9ad27e473 chore: switch project license to Apache 2 (#1260)\r\n* cb21ff3f967a3d5e73bc9c110856a6318fc5614f chore: bump terraform provider versions and add kaito-ragengine chart deployment (#1240)\r\n* 55607c80244903ef1e4db586ef0ca9407210935b chore: add ArtifactHub annotations to Chart.yaml (#1219)\r\n* c2767690bcff6e343cd7367978d7124718ee348b chore: bump k8s.io\u002F* deps -> v0.33.2, controller-runtime -> v0.21.0 (#1227)\r\n\r\n### Performance Improvements 🚀\r\n* beeaa7c7ac20e035c082a63af10922b780083abf perf: improve multi nodeclaim provision time (#1243)\r\n\r\n","2025-07-21T18:45:15",{"id":254,"version":255,"summary_zh":256,"released_at":257},351613,"v0.5.0","## v0.5.0 - 2025-07-03\r\n\r\nThis release includes these major changes:\r\n\r\n- First official release of Retrieval Augmented Generation (RAG) support.\r\n- Added vLLM-based distributed inference support, with more popular large models coming soon.\r\n- Leveraged local-csi-driver plugin for NVMe disk support.\r\n- Used OCI artifact to distribute preset model images, achieving 50% faster image pulls.\r\n\r\n## Changelog\r\n### Features 🌈\r\n* 0cba25871d62972479a27e6c917581fdb90c16a5 feat: Update Workflows for RAG Release (#1216)\r\n* 8fac718dac144ecddceadcb57a4f391379528986 feat: add local cache for model files (#1203)\r\n* 715437e61e33eed838e09f67f0dcf1bcf3608155 feat: Add RagEngine README.md (#1204)\r\n* 6f62b72c774e10e9a304b3949f716c2a61a93f5c feat: pull model artifacts lively (#1188)\r\n* 010e136ef7a1908f2bb1b39c637d0d9e8c999900  feat: update model image to 0.2.0 oci artifacts (#1181)\r\n* 21a00aca5253e75f7dd9ffe3d5643c6832f3e9e4 feat: adding CURL E2E tests for update\u002Fdelete doc and delete index (#1182)\r\n* f17722603be8610d115a75631fc993f6715aa103 feat: update rag engine image url to be build from env (#1175)\r\n* 8c03221e0df9e3e0d3c574872584510acfb9e213 feat: add doc_id to \u002Fquery result nodes (#1170)\r\n* 0b1fa382e07e45f1324c79fa854f5f27dee4adff feat: RAG metrics part 2 (#1164)\r\n* 776f180164be9bf9cb2d0ae57b614117b6969764 
feat: Adding delete index api (#1165)\r\n* 2ddb2a7c053c35fa20bd320b7316b776bd0a9e78 feat: support adapters for StatefulSet workloads (#1158)\r\n* e6213e46384652b37c9f7825fc89af2117138f7f feat: add proposal for Llama-3.3-70B-Instruct model support (#1159)\r\n* c2ebdb0c1b8e7158ced50fa244b1ccd79f5b2710 feat: support volume for tuning (#1118)\r\n* 18e5c17a3393d437ca93ffb4ae012ae71c2325a7 feat: RAG metrics part 1 (#988)\r\n* 9976cda7abb7ecc5f3c1ed785c8a975b480aeb5b feat: bump base image to 0.0.3 (#1138)\r\n* 1f30f0ffe0c9ed39545c9ebca18302df1845ebea feat: support multi-node distributed inference for vLLM (#949)\r\n* e90e366120fb434236940f7ed42a05706923bce3 feat: adding metadata filtering on list documents api (#1114)\r\n* 3415ac7e2f46d0c4bdc63f6fd40e68c386c5f9cf feat: Add CustomTransformer for Code Splitting support (#1131)\r\n* 140a5f97f24be9f20c1490f7bc6860731f0b7098 feat: update gpu-provisioner version to v0.3.5 for kaito (#1128)\r\n### Bug Fixes 🐞\r\n* f9ab07c92fb3e9482372308e18bfeeb63e30fa18 fix: specify helm charts target in pipeline\r\n* 8f6f312d4f6fe7ededd33b40025dcb5019c58060 fix: adapt to vllm 0.9.0 (#1187)\r\n* 25f99ab89dc168a7f5f2885659c7ba8212b21402 fix: require max-model-len for distributed inference (#1177)\r\n* 0f521665bfbb37fe4ecda76bb8c1c94ecd64d4cf fix: Pin requirements to prevent versioning errors (#1156)\r\n* 1c72a5ad76e39522aaac5815ed1ac1c144c0585f fix: pin the pytest-asyncio package (#1153)\r\n* cbd06900c445b11b735f144eb77d84c3642ae386 fix: customize gpu-provisioner deployment name for e2e test (#1144)\r\n* 76497d1186d4c7ef4854593cc3c585744cbff926 fix: revert dependabot.yml changes (#1141)\r\n### Documentation 📘\r\n* b138086c6900fe053871fbffb640c74ffa76f2e7 docs: adding Docs for RAGEngine (#1226)\r\n* b1ae54622201f6c4718404256f29e631477e8353 docs: fix broken img link (#1220)\r\n* 2f5f7d8e9bed2f73d37db6c9e9e63f0e8a886789 docs: add proposal for supporting scale subresource api of workspace (#1184)\r\n* 12dfacdf1467ae0aedc1af44a6e98d8f9e716b36 docs: change machine to nodeclaim (#1193)\r\n* 0d9fa9074bbc09d8244e1df84c053269eaacdb99 docs: fix broken links for presets (#1196)\r\n* d5648ee649df8028008174372ac007f7abe10e4f docs: add website and move docs (#1183)\r\n* 7f2a036032ada1a90df369f91d1332b7baf57f10 docs: add proposal for Distributing LLM Model Files as OCI Artifacts (#1169)\r\n### Maintenance 🔧\r\n* 9326178d6ab658c259d7fec09149c452d8dbef03 chore: skip flaky test in ragengine e2e\r\n* d0cb21fc9809ea8cda7a29907db79c45a9366097 chore: bump local-csi-driver to 0.2.0 (#1224)\r\n* e7485ba68acb6668da21bc90e7a91249c2ca2feb chore: converge Errors and update tests (#1194)\r\n* 89c585065f5284835d84eccd6681c0922e2c5356 chore: use FIPS compliance image for oras tool (#1217)\r\n* 13540bcf36cc5827c81185b821295fe89ae779fb chore: bump vllm from 0.8.5 to 0.9.0 in \u002Fpresets\u002Fworkspace\u002Fdependencies (#1154)\r\n* 268b12891126e7137cc698fe59561c816436365d chore: Create kaito_configmap_tuning_phi_3.yaml (#1146)\r\n* db334a2710eb44b7b1fec22ee0622fbc2cb53835 chore: bump codecov\u002Fcodecov-action from 5.4.2 to 5.4.3 in the all-action group (#1134)\r\n* dbcfbb83834d5f5fe223d6e50aba25912641e028 chore: merge dependabot pr into groups (#1087)\r\n* 1dee27784d03f4acb407f463c39f30693c3564b3 chore: bump actions\u002Fdependency-review-action from 4.6.0 to 4.7.0 (#1119)\r\n### Performance Improvements 🚀\r\n* d1e50498d55047ebae91d1631f129cc786154a8b perf: distribute model as oci artifacts (#1173)\r\n### Testing 💚\r\n* fb5536946eab302a1e7a257f7965a53aa28d7b53 test: adapt preset test to oci 
artifacts (#1186)\r\n* 18afc28592978411a0be28dbf55d2829fe9d9e18 test: add e2e test case for multi-node distributed inference (#1163)\r\n* 16f9a27031ba40ccf0b1ffefa79d86aba5d0c57f test: add support volume for tuning e2e (#1139)\r\n* 6f60f5991b33e837716","2025-07-03T12:55:16",{"id":259,"version":260,"summary_zh":261,"released_at":262},351614,"v0.4.6","## v0.4.6 - 2025-05-14\r\n\r\nThis release includes these major changes:\r\n\r\n- Added support for Llama 3 models.\r\n- Added support for tool calling.\r\n- Deprecate Llama 2 Models.\r\n- Improved Faiss-based RAG engine with update\u002Fdelete APIs.\r\n- Fixed node plugin issues and corrected memory\u002Fstorage settings for GPUs.\r\n\r\n## Changelog\r\n### Features 🌈\r\n* 45ba61ef8d25f038625616c2aac6475c003b2b40 feat: bump service image to 0.1.1 (#1117)\r\n* a16fd08b1f297bc8f86c33e7c2a6f3319d997f7c feat: add update\u002Fdelete document api's and handling for faiss (#1090)\r\n* 382f476a8d9634aeb5ecd8e9c8db34a8aae7debf feat: update GPU provisioner version and model OS disk storage (#1097)\r\n* 50d321aae21c0e4acd7c19e2068738758acefad5 feat: Ensure shm volume added (#1102)\r\n* 5d431a8fce6d79b378b828446378bf3a744db964 feat: Cleanup Deprecated ImageAccess Field (#1100)\r\n* 1b0a214d80eb11055fb8793f4d5d79e0563750e9 feat: Update pipeline to handle downloadAtRuntime models (#1095)\r\n* 3668eb08e89779e5025e6451872e1b67923fb13d feat: strip managed fields for controller informer (#1086)\r\n* 346e6a247b62007a638c3ca8219cd02a5e438b63 feat: Deprecate Llama 2 Models and Add Llama3 (#1091)\r\n* 0512a5d49de0339090c7ef3af19a0f5b0a6e68c6 feat: support preset model weight downloads (#1035)\r\n* 6890fc0ee77c8d5a088450141fb9d973afa783bb feat: Ensure latest security patches applied on debian packages (#1045)\r\n* 73693bb772448b219fcac94d63531ee8c98184b3 feat: parse supported_models.yaml in plugin pkg before starting manager (#1044)\r\n* 7adf0d78ff31959d91b9a497a172300596c9ec50 feat: Add more fine-tuning examples (#1046)\r\n* bb4635cfc63a78529bf35653e0e051f48b739aad feat: add base image build target in Dockerfile (#1032)\r\n* 2454918367ffeb98654ec954d607a52681c904e3 feat: Update README.md (#1038)\r\n* 0e6f2f131a84b5387edbf443187111d84364c145 feat: Update API version from v1alpha1 to v1beta1 (#1036)\r\n### Bug Fixes 🐞\r\n* 80bb0dca17b7b201846a7f6adcce29e54f1ef224 fix: move nodeclaim check to right place (#1062)\r\n* c6abfb1db1a576ae70944fe70e25d80fe24193d3 fix: remove PYTORCH_CUDA_ALLOC_CONF environment variable (#1120)\r\n* f6a1f152266589281446150e6fce5f16f7bae07a fix: update arc GPUMem to 32 and add more examples for arc (#1111)\r\n* 8ed0ec51e3bb3ee5afcebe11d125f67ddec4d46f fix: Troubleshooting Target Modules Error (#1096)\r\n* b8f501de4460e64af829723d8b721d84a3441fa1 fix: update GPUMemGB for Standard_ND96amsr_A100_v4 (#1098)\r\n* ad90c4816c51d1e72a6de0b641dc146e5b57044d fix: change ephemeral storage to resource storage in nodeclaim (#1094)\r\n* f7240240b6999c7263dddcf39b6b9e60fae83d0c fix: delete unused pod indexer (#1089)\r\n* 0201daf68c8d53d5655f4fad6cc74fea46f89a6c fix: ensureNodePlugins does not work (#1070)\r\n* 9798552aa880be23399ac5dbefc1d9807e72bfa4 fix: guard nodeclass op by featuregate (#1079)\r\n* 80268c06cbfdc28139162734fc7093ef4163d2af fix: document id's on list document responses are the same as document id's on create document responses (#1080)\r\n* 797ffdd10ef9b466339dce05cdff2c50f73fea02 fix: ragengine index document returns wrong ids (#1071)\r\n* 34d14d304de3aeb462fcbb494460dc6904a8d75f fix: should not check preset model in sync loop (#1058)\r\n* 
76771c6ed9a96e5a47481d4c43aa07133a95835f fix: only add finalizer when the object is not in deletion state (#1057)\r\n* 1efa0a712a56874e523b004b534d299fc71876c1 fix: Dots not allowed in workspace name\r\n* a00d73ef7b0c1131b84a3631e7098e1b1351b263 fix: add registry into release pipeline (#1026) (#1028)\r\n### Code Refactoring 💎\r\n* 56c01bb08ba1f32c728079372ff5ec0dc1c9fad6 refactor: remove fields from PresetParam in favor of embedded Metadata (#1074)\r\n### Documentation 📘\r\n* 9bbc70703d7fee3fec2f4eeb85b1c2012d91afde docs: add tool calling example (#1125)\r\n* 782f4a758faa281f8a638ff4b4099579d96f6100 docs: expand on container probes in 20250325-distributed-inference.md (#1105)\r\n* 039437a8ccc2158666a421fe55a9d6d8be7f77b2 docs: add proposal for distributed inference (#1092)\r\n* 9edff8d4babeca0cdc159b7d5ee41e67e78902eb docs: Update README for new release (#1033)\r\n### Maintenance 🔧\r\n* ad79ab5ddc4e000479e42cc3616a4490aa85c8d8 chore: update tool chat templates (#1116)\r\n* eba517f8db8371e1044feeaf8c4f3a3ee9132bab chore: bump vllm from 0.8.2 to 0.8.5 in \u002Fpresets\u002Fworkspace\u002Fdependencies (#1081)\r\n* b7b5b194e00c88c2014e0e17241ff6512ce8ada3 chore: remove torchrun and distributed inference references (#1108)\r\n* 515ab2608fbfb2142f85f5600a73690c0c511715 chore: bump step-security\u002Fharden-runner from 2.11.1 to 2.12.0 (#1072)\r\n* 9fe6ac377efa6f2fc819274de41128abc1f3ed92 chore: bump goreleaser\u002Fgoreleaser-action from 6.2.1 to 6.3.0 (#965)\r\n* 7eb1deff8ae2849f7c13fa9b2c9a6ff6622c9f67 chore: bump codecov\u002Fcodecov-action from 5.4.0 to 5.4.2 (#1031)\r\n* 10af1f42eddbacb9e335b06bebbb4088cad7b629 chore: simplify the manifests generation code and add ut (#1048)\r\n* 03dd1d7a539d1b4ee0de1471bead11708c12075e chore: add license header check for Python files (#1075)\r\n* 7da6e3b964d3cfa5dd395cb6af7bfe51789218b5 chore: bump golang.org\u002Fx\u002Fnet from 0.36.0 to 0.38.0 (#1022)\r\n* eff7a7017b5b904979bc662e4fde89d01f5f9cf3 chore","2025-05-14T23:15:55",{"id":264,"version":265,"summary_zh":266,"released_at":267},351615,"v0.4.5","## v0.4.5 - 2025-04-18\r\n\r\nThis release includes these major changes:\r\n- Bump workspace API to v1beta1.\r\n- Added workload count metrics for KAITO workspace.\r\n- Added better support to prevent OOM.\r\n- Added Phi-4 and Qwen 32B models.\r\n- Reimplemented the pull and push mechanisms for images, enhancing reliability.\r\n\r\n\r\n## Changelog\r\n### Features 🌈\r\n* 9cb9a9c97c8f8a47bf6c1988d55e9e97e042ea76 feat: Add workload count metrics for kaito workspace (#1020)\r\n* 53a5aa27cb6d29d911a85278ac26c5ec771813a4 Revert \"feat: Provide default chat templates for Falcon and Phi-2 fine-tuning\" (#1017)\r\n* 35d2e4897498837d30ad36a7901b055cc7a007e0 feat: Provide default chat templates for Falcon and Phi-2 fine-tuning (#1015)\r\n* 937fa4a51ab80cf2f9c7116a7dc366f2e7b5961f feat: add vllm adapter strength validation (#1009)\r\n* 08cd1006fbd6ca38520db3f8e73c2f5dd16dfc8d feat: enable reasoning output for deepseek model (#1003)\r\n* a33b8d53ce53673db807eac167b5ce1c2b2e56d7 feat: reserve enough vram buffer for vllm service (#990)\r\n* 8f36cc86d2ed70d19025e061a0910d8256c49ef8 feat: Enhance vLLM Configuration Validation (#979)\r\n* a5e53828549f0ea815693f366eda927d86022a74 feat: Add controller tags (#996)\r\n* 81cd242825a733426365c1a5a655aa4bc6155eb5 feat: check lora support availability (#997)\r\n* 8730b2cc0d7a1362143f164f14df1787345dcfe2 feat: bump service image tag (#970)\r\n* 85e032671c9e60f38fbf09f44f28161f2fda324c feat: Add Annotation Bypass GPU Mem check 
(#986)\r\n* 1bd176645570fa1a359146591c5825f6c093bbb9 feat: reimplement pull and push of images with skopeo and oras (#950)\r\n* 7570d538d89b22a979fb7192613965844ef6ef95 feat: Add Pytorch Expandable segments envvar (#980)\r\n* 423f2c759ef07a8c86cfe2a46c9c134b00d1dcc2 feat: Add Phi-4 and Qwen 32B models (#978)\r\n* 4a0e4de9bc45c5eacb33460c0824565d82f9bd5e Revert \"feat: Update precommit hooks\"\r\n* 0094722bfaffa9afc7907a46d9aa0576ceca2f2a Revert \"feat: Add github action\"\r\n* 359309f28a5e8bc6764015c7c7019bbd39a270c5 feat: Add github action\r\n* 87b2437d82fa675e7446c77ad4e498f5007de456 feat: Update precommit hooks\r\n* d4a2cdd5d588a40079e7b4bbf6311a1293555743 feat: update gpu-provisioner version to v0.3.3 for kaito (#972)\r\n* 416e3306aa6387eb6b22a68fdc1f3487e10376b0 feat: skip nodeclaim.Status.LastPodEventTime change event (#963)\r\n* 0cc1ab4b701a80bc1d6581dcdf9ebdc408a3800e feat: more rapid local development with Tilt (#952)\r\n* e83f3b0ef52a3cb879a4c99757f4ed05481f6007 feat: onboard arc instance type (#948)\r\n* 94e0ea97fcdfaeeb4546a98bcec0facbd33ebaef feat: add probes to template sample (#941)\r\n* 480632207d43d8b3e593821065a09d03d9c2f6d2 feat: Updated Custom Deployment Template and Minor Nits (#935)\r\n* ad077e3da7421bb555babc4e8434558b257bd50d feat: update gpu-provisioner version to v0.3.2 for kaito (#917)\r\n* 4aaab3d030c9340b16e55d3d864a3b977077d715 feat: Add load and persist endpoints tests to RAGEngine E2E (#913)\r\n* 1815428804593eaa94de0d6f78d82b53e85d0137 feat: port preset name validation to v1beta1 webhook (#914)\r\n* fc6c008dd124f0025c00b837f2d848c171ff9e92 feat: RAG remote service secret patch (#862)\r\n* 241f876304217704b661ae38523fa58ad0316008 feat: Offload Async Coroutine Execution to Separate Thread to Avoid Nested Event Loop Error (#906)\r\n* 381f5396007e796f938ab298cf13b769cf054c51 feat: Introduce workspace v1beta1 API (#904)\r\n* 80365b07ce09b5a20cd1f029cfff0e6951807227 feat: add featuregate for ensureNodeClass (#900)\r\n* 0bbd6ef8e6abeccf7f04733c5e50c9bea55ebeb2 feat: Add overwrite to load index (#897)\r\n* e84cd9bd7a5bac02fb997c1778f98846e0404439 feat: Generalize Embedding Model Class & Add \u002Fload Endpoint for Index Management (#892)\r\n* d02fb8957fcd0ac546d7bd5991a781f22a7a64a2 feat: Add Persist RWLock (#891)\r\n* 08817b4b0dfe02ab5880aa3f2141822b71c1151d feat: Add Persist Index Endpoint (#889)\r\n* 950b6ed9fb5a8d435c639ab2ae29b03c326e1aed feat: Add Async RWLock for Safe Concurrent Index Operations in Select Vector Stores (#888)\r\n* 8737f528d3908b626b376c55c9556f9190f2ea73 feat: Make Inference Class Fully Async with HTTPX Requests (#880)\r\n* 2a9b5c552602cb739a3521a2d6a8ec3f29362862 feat: Add E2E Endpoint Checks (#864)\r\n* 6cb1bee184711f013939b2630383f34bdbc62681 feat: Paginate & Truncate List Documents, Add RAG FastAPI Docstrings, Optimize Multi-Document Retrieval with Async Gather (#847)\r\n### Bug Fixes 🐞\r\n* 592c4604ef27c4659187efbf816e93024bb5fef8 fix: allow backoff for tuning job (#1008)\r\n* a8867fbc925fd7bcb91ae412ad1a4b4e0b329b3b fix: Pin disable chunked prefill on V100 Arch (#971)\r\n* 35e210fe198d97ec5bca45df7a83190061c50de0 fix: add dependencies for vllm runtime (#964)\r\n* fec20c4a74989bd0a9a4e600af6162ea774601f9 fix: Fine-Tuning DataCollator Parsing + BnB Support (#940)\r\n* c8f2b0992a98117cea8352300706a868d7b09d39 fix: Add yq dependency to install instructions (#927)\r\n* a8c25fb53ceda6ea0603219b01520b85cb5ea464 fix: Adding Preset name validation in validateCreateWithInference (#905)\r\n* 45e6882749b357172bee7686f5721f9fa325dc23 fix: Excluding 
Metadata to LLM during Response Synthesis & Updating default top_k (#896)\r\n* c85116da1878e2c23fa4","2025-04-18T18:01:09",{"id":269,"version":270,"summary_zh":271,"released_at":272},351616,"v0.4.4","## v0.4.4 - 2025-01-31\r\n\r\n## Changelog\r\n### Bug Fixes 🐞\r\n* ff240fee19b19cf72ef0bc7c49b0da232679db54 fix: DeepSeek Model Naming (#859)\r\n### Maintenance 🔧\r\n* 6c118fe9f423604d61af51ccc3276267340880ea chore: bump gpu-provisioner in terraform sample (#860)\r\n* 07077f81019e871752004c3536dfba9d65c12cde chore: bumping tf provider versions and kaito workspace to 0.4.3 (#858)\r\n","2025-01-31T06:39:59",{"id":274,"version":275,"summary_zh":276,"released_at":277},351617,"v0.4.3","## v0.4.3 - 2025-01-30\n\n## Changelog\n### Features 🌈\n* e333f2a651a383251fa2f0cdc1f18bcfcc7320f4 feat: Add DeepSeek READMEs and Example (#851)\n* 31bdf477d02df63524310eccf2578a962f8a54b3 feat: Add DeepSeek Model for E2E (#850)\n* 0ed89f29f180be0c10cd9ef545d3c1dbce6f4d66 feat: Add Deepseek Model (#848)\n* 254dec6dc7120a991193007a4abd5b4fe51953c1 feat: Add DeepSeek Model Plugin (#849)\n* 979b739236ffc4309c3409dcfd81bc1f3a158bc4 feat: RAG API Server to use Async\u002FAwait (#835)\n### Bug Fixes 🐞\n* 872d1a6a1fb7ef5ebba3f709e8ac7c7129d9b321 fix: use ghcr image for e2e test during release\n* 8d3a1e13cf438d3826c7d594f2a01e7ff99ff07b fix: Add DeepSeek Qwen E2E (#852)\n* f7ee0ec720567c56ea279ff50d2e551742d09b1a fix: Prevent blocking healthcheck during Inference (#837)\n### Code Refactoring 💎\n* 9fe5b86115152e0843a093c94ab8f121538a8680 refactor: add inference manifest template and generation script (#833)\n### Maintenance 🔧\n* 2dadc2b918513325168877a42cce24967282a237 chore: RAGEngine e2e pipeline (#832)\n* 59583af3973be57670184ec4adc061c47a333d0e chore: bump nvidia\u002Fk8s-device-plugin to 0.17.0 (#836)\n* c96fb2d7b1a178b74ea1f4303462adecbb2f75d3 chore: Reorg e2e test code to be reused in RAG e2e tests (#834)\n\n","2025-01-30T16:47:01"]