[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-NVIDIA--nvidia-container-toolkit":3,"tool-NVIDIA--nvidia-container-toolkit":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":70,"readme_en":71,"readme_zh":72,"quickstart_zh":73,"use_case_zh":74,"hero_image_url":75,"owner_login":76,"owner_name":77,"owner_avatar_url":78,"owner_bio":79,"owner_company":80,"owner_location":80,"owner_email":80,"owner_twitter":80,"owner_website":81,"owner_url":82,"languages":83,"stars":108,"forks":109,"last_commit_at":110,"license":111,"difficulty_score":10,"env_os":112,"env_gpu":113,"env_ram":114,"env_deps":115,"category_tags":121,"github_topics":80,"view_count":23,"oss_zip_url":80,"oss_zip_packed_at":80,"status":16,"created_at":122,"updated_at":123,"faqs":124,"releases":153},3018,"NVIDIA\u002Fnvidia-container-toolkit","nvidia-container-toolkit","Build and run containers leveraging NVIDIA GPUs","nvidia-container-toolkit 是一款专为 Linux 系统设计的开源工具，旨在让用户能够轻松构建和运行利用 NVIDIA GPU 进行加速的容器化应用。在人工智能和深度学习领域，许多计算任务需要强大的图形处理器支持，而传统容器技术往往难以直接调用宿主机的 GPU 资源。nvidia-container-toolkit 正是为了解决这一痛点而生，它通过提供特定的容器运行时库和实用程序，自动配置容器环境，使其能够无缝识别并使用宿主机上的 NVIDIA 显卡，无需在容器内部重复安装庞大的驱动程序或 CUDA 工具包。\n\n这款工具特别适合 AI 开发者、数据科学家以及研究人员使用。无论是训练复杂的神经网络模型，还是部署高性能推理服务，用户只需借助 Docker 等主流容器平台，即可快速启动具备 GPU 加速能力的环境，极大简化了开发流程并提升了资源利用率。其核心技术亮点在于与 NVIDIA 驱动的深度集成，能够在保证安全隔离的前提下，高效透传硬件算力。对于希望在不牺牲灵活性的前提下最大化发挥 GPU 性能的团队而言，nvidia-container-toolkit 是构建现","nvidia-container-toolkit 是一款专为 Linux 系统设计的开源工具，旨在让用户能够轻松构建和运行利用 NVIDIA GPU 进行加速的容器化应用。在人工智能和深度学习领域，许多计算任务需要强大的图形处理器支持，而传统容器技术往往难以直接调用宿主机的 GPU 资源。nvidia-container-toolkit 
正是为了解决这一痛点而生，它通过提供特定的容器运行时库和实用程序，自动配置容器环境，使其能够无缝识别并使用宿主机上的 NVIDIA 显卡，无需在容器内部重复安装庞大的驱动程序或 CUDA 工具包。\n\n这款工具特别适合 AI 开发者、数据科学家以及研究人员使用。无论是训练复杂的神经网络模型，还是部署高性能推理服务，用户只需借助 Docker 等主流容器平台，即可快速启动具备 GPU 加速能力的环境，极大简化了开发流程并提升了资源利用率。其核心技术亮点在于与 NVIDIA 驱动的深度集成，能够在保证安全隔离的前提下，高效透传硬件算力。对于希望在不牺牲灵活性的前提下最大化发挥 GPU 性能的团队而言，nvidia-container-toolkit 是构建现代化云原生 AI 工作流不可或缺的基础设施。","# NVIDIA Container Toolkit\n\n[![GitHub license](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flicense\u002FNVIDIA\u002Fnvidia-container-toolkit?style=flat-square)](https:\u002F\u002Fraw.githubusercontent.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fmain\u002FLICENSE)\n[![Documentation](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fdocumentation-wiki-blue.svg?style=flat-square)](https:\u002F\u002Fdocs.nvidia.com\u002Fdatacenter\u002Fcloud-native\u002Fcontainer-toolkit\u002Foverview.html)\n[![Package repository](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fpackages-repository-b956e8.svg?style=flat-square)](https:\u002F\u002Fnvidia.github.io\u002Flibnvidia-container)\n\n![nvidia-container-stack](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FNVIDIA_nvidia-container-toolkit_readme_d14822f24718.png)\n\n## Introduction\n\nThe NVIDIA Container Toolkit allows users to build and run GPU-accelerated containers. 
The toolkit includes a container runtime [library](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Flibnvidia-container) and utilities to automatically configure containers to leverage NVIDIA GPUs.\n\nProduct documentation including an architecture overview, platform support, and installation and usage guides can be found in the [documentation repository](https:\u002F\u002Fdocs.nvidia.com\u002Fdatacenter\u002Fcloud-native\u002Fcontainer-toolkit\u002Foverview.html).\n\n## Getting Started\n\n**Make sure you have installed the [NVIDIA driver](https:\u002F\u002Fdocs.nvidia.com\u002Fdatacenter\u002Fcloud-native\u002Fcontainer-toolkit\u002Finstall-guide.html#nvidia-drivers) for your Linux Distribution**\n**Note that you do not need to install the CUDA Toolkit on the host system, but the NVIDIA driver needs to be installed**\n\nFor instructions on getting started with the NVIDIA Container Toolkit, refer to the [installation guide](https:\u002F\u002Fdocs.nvidia.com\u002Fdatacenter\u002Fcloud-native\u002Fcontainer-toolkit\u002Finstall-guide.html#installation-guide).\n\n## Usage\n\nThe [user guide](https:\u002F\u002Fdocs.nvidia.com\u002Fdatacenter\u002Fcloud-native\u002Fcontainer-toolkit\u002Fuser-guide.html) provides information on the configuration and command line options available when running GPU containers with Docker.\n\n## Issues and Contributing\n\n[Checkout the Contributing document!](CONTRIBUTING.md)\n\n* Please let us know by [filing a new issue](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fissues\u002Fnew)\n* You can contribute by creating a [pull request](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fcompare) to our public GitHub repository\n","# NVIDIA 容器工具包\n\n[![GitHub 
许可证](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flicense\u002FNVIDIA\u002Fnvidia-container-toolkit?style=flat-square)](https:\u002F\u002Fraw.githubusercontent.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fmain\u002FLICENSE)\n[![文档](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fdocumentation-wiki-blue.svg?style=flat-square)](https:\u002F\u002Fdocs.nvidia.com\u002Fdatacenter\u002Fcloud-native\u002Fcontainer-toolkit\u002Foverview.html)\n[![软件包仓库](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fpackages-repository-b956e8.svg?style=flat-square)](https:\u002F\u002Fnvidia.github.io\u002Flibnvidia-container)\n\n![nvidia-container-stack](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FNVIDIA_nvidia-container-toolkit_readme_d14822f24718.png)\n\n## 简介\n\nNVIDIA 容器工具包使用户能够构建和运行由 GPU 加速的容器。该工具包包含一个容器运行时 [库](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Flibnvidia-container)，以及用于自动配置容器以利用 NVIDIA GPU 的实用工具。\n\n产品文档，包括架构概述、平台支持以及安装和使用指南，可在 [文档仓库](https:\u002F\u002Fdocs.nvidia.com\u002Fdatacenter\u002Fcloud-native\u002Fcontainer-toolkit\u002Foverview.html) 中找到。\n\n## 入门\n\n**请确保已为您的 Linux 发行版安装了 [NVIDIA 驱动程序](https:\u002F\u002Fdocs.nvidia.com\u002Fdatacenter\u002Fcloud-native\u002Fcontainer-toolkit\u002Finstall-guide.html#nvidia-drivers)**  \n**请注意，您无需在主机系统上安装 CUDA 工具包，但必须安装 NVIDIA 驱动程序。**\n\n有关如何开始使用 NVIDIA 容器工具包的说明，请参阅 [安装指南](https:\u002F\u002Fdocs.nvidia.com\u002Fdatacenter\u002Fcloud-native\u002Fcontainer-toolkit\u002Finstall-guide.html#installation-guide)。\n\n## 使用\n\n[用户指南](https:\u002F\u002Fdocs.nvidia.com\u002Fdatacenter\u002Fcloud-native\u002Fcontainer-toolkit\u002Fuser-guide.html) 提供了使用 Docker 运行 GPU 容器时可用的配置和命令行选项的相关信息。\n\n## 问题与贡献\n\n[查看贡献文档！](CONTRIBUTING.md)\n\n* 请通过 [提交新问题](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fissues\u002Fnew) 告知我们。\n* 您可以通过向我们的公共 GitHub 仓库 [创建拉取请求](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fcompare) 来做出贡献。","# NVIDIA Container Toolkit 快速上手指南\n\n## 
环境准备\n\n在开始之前，请确保您的 Linux 主机满足以下要求：\n\n*   **操作系统**：支持的主流 Linux 发行版（如 Ubuntu, CentOS, Debian 等）。\n*   **NVIDIA 驱动**：必须已安装与您的 GPU 兼容的 **NVIDIA 驱动程序**。\n    *   *注意：主机上无需安装完整的 CUDA Toolkit，但必须安装显卡驱动。*\n*   **容器运行时**：已安装 Docker 或 containerd。\n\n> **国内用户提示**：如果您在中国大陆地区，建议在配置软件源时使用清华大学或阿里云的镜像加速，以提高下载速度。\n\n## 安装步骤\n\n以下以 Ubuntu\u002FDebian 系统为例（其他发行版请参考官方文档调整包管理器命令）：\n\n### 1. 配置软件源\n\n添加 NVIDIA 容器工具包的仓库密钥和源列表：\n\n```bash\ndistribution=$(. \u002Fetc\u002Fos-release;echo $ID$VERSION_ID) \\\n      && curl -fsSL https:\u002F\u002Fnvidia.github.io\u002Flibnvidia-container\u002Fgpgkey | sudo gpg --dearmor -o \u002Fusr\u002Fshare\u002Fkeyrings\u002Fnvidia-container-toolkit.gpg \\\n      && curl -s -L https:\u002F\u002Fnvidia.github.io\u002Flibnvidia-container\u002F$distribution\u002Flibnvidia-container.list | \\\n            sed 's#deb https:\u002F\u002F#deb [signed-by=\u002Fusr\u002Fshare\u002Fkeyrings\u002Fnvidia-container-toolkit.gpg] https:\u002F\u002F#g' | \\\n            sudo tee \u002Fetc\u002Fapt\u002Fsources.list.d\u002Fnvidia-container-toolkit.list\n```\n\n*(可选) 国内加速方案：若上述源访问缓慢，可手动编辑 `\u002Fetc\u002Fapt\u002Fsources.list.d\u002Fnvidia-container-toolkit.list`，将 `https:\u002F\u002Fnvidia.github.io` 替换为国内镜像代理地址（如有），或直接下载 deb 包安装。*\n\n### 2. 更新并安装工具包\n\n```bash\nsudo apt-get update\nsudo apt-get install -y nvidia-container-toolkit\n```\n\n### 3. 配置 Docker 运行时\n\n生成默认配置并重启 Docker 服务，使 Docker 能够识别 NVIDIA GPU：\n\n```bash\nsudo nvidia-ctk runtime configure --runtime=docker\nsudo systemctl restart docker\n```\n\n### 4. 
验证安装\n\n运行一个简单的测试容器来验证 GPU 是否可用：\n\n```bash\ndocker run --rm --gpus all nvcr.io\u002Fnvidia\u002Fcuda:12.0.0-base-ubuntu22.04 nvidia-smi\n```\n\n如果输出显示了 GPU 的状态信息，则说明安装成功。\n\n## 基本使用\n\n安装完成后，您只需在运行容器时添加 `--gpus` 参数即可调用 GPU 资源。\n\n### 最简单的使用示例\n\n启动一个带有 GPU 支持的 CUDA 容器并进入交互模式：\n\n```bash\ndocker run -it --gpus all nvcr.io\u002Fnvidia\u002Fcuda:12.0.0-base-ubuntu22.04 bash\n```\n\n在容器内部，您可以直接运行 `nvidia-smi` 或使用编译好的 CUDA 程序，它们将直接使用宿主机的 GPU 进行加速。\n\n### 指定特定 GPU\n\n如果您有多张显卡，可以指定使用特定的 GPU（例如只使用第一张）：\n\n```bash\ndocker run -it --gpus '\"device=0\"' nvcr.io\u002Fnvidia\u002Fcuda:12.0.0-base-ubuntu22.04 bash\n```","某自动驾驶研发团队需要在多节点集群上快速部署并迭代基于 PyTorch 的深度学习训练环境，以加速模型收敛。\n\n### 没有 nvidia-container-toolkit 时\n- **环境配置繁琐**：开发人员必须在每台物理宿主机上手动安装特定版本的 CUDA Toolkit、cuDNN 及驱动，不同项目间的依赖冲突频发，维护成本极高。\n- **资源隔离困难**：无法在容器内直接识别 GPU 硬件，导致多个训练任务难以在同一台服务器上安全隔离运行，容易造成显存争抢或任务崩溃。\n- **迁移部署低效**：将开发好的模型从本地工作站迁移到云端集群时，需重新构建包含庞大 CUDA 库的镜像，不仅镜像体积巨大，且极易因底层驱动不匹配而启动失败。\n- **弹性扩展受阻**：由于缺乏标准化的 GPU 容器运行时支持，无法利用 Kubernetes 等编排工具实现训练任务的自动扩缩容，严重拖慢实验迭代速度。\n\n### 使用 nvidia-container-toolkit 后\n- **即开即用**：只需在宿主机安装 NVIDIA 驱动，nvidia-container-toolkit 即可自动拦截容器请求并注入必要的 GPU 库文件，彻底解耦了容器内部环境与宿主机软件栈。\n- **原生硬件访问**：容器启动时通过 `--gpus` 参数即可直接调用物理 GPU，实现了真正的硬件级资源隔离与按需分配，多租户共享服务器变得安全稳定。\n- **镜像轻量化与便携**：开发者可构建仅含应用代码和框架的精简镜像，无需捆绑 CUDA 运行时，确保同一镜像在任何安装了该工具包的节点上都能无缝运行。\n- **云原生集成**：完美支持 Docker 和 Kubernetes 生态，团队可轻松编写标准化 YAML 文件，实现大规模分布式训练任务的自动化调度与弹性伸缩。\n\nnvidia-container-toolkit 通过屏蔽底层硬件差异，让 GPU 算力像 CPU 一样成为云原生应用中可随意调度的标准资源，极大提升了 AI 工程的交付效率。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FNVIDIA_nvidia-container-toolkit_1832570a.png","NVIDIA","NVIDIA 
Corporation","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FNVIDIA_7dcf6000.png","",null,"https:\u002F\u002Fnvidia.com","https:\u002F\u002Fgithub.com\u002FNVIDIA",[84,88,92,96,100,104],{"name":85,"color":86,"percentage":87},"Go","#00ADD8",90.7,{"name":89,"color":90,"percentage":91},"Shell","#89e051",4.7,{"name":93,"color":94,"percentage":95},"C","#555555",2.6,{"name":97,"color":98,"percentage":99},"Makefile","#427819",1.3,{"name":101,"color":102,"percentage":103},"Dockerfile","#384d54",0.6,{"name":105,"color":106,"percentage":107},"SmPL","#c94949",0.1,4222,504,"2026-04-03T15:09:29","Apache-2.0","Linux","必需 NVIDIA GPU，需安装对应 Linux 发行版的 NVIDIA 驱动；无需在宿主机安装 CUDA Toolkit；具体显卡型号、显存大小及 CUDA 版本取决于容器内应用需求，README 未限定","未说明",{"notes":116,"python":114,"dependencies":117},"该工具包用于构建和运行 GPU 加速容器。必须在宿主机上安装适用于当前 Linux 发行版的 NVIDIA 驱动程序，但无需在宿主机安装完整的 CUDA 工具包。容器内的 CUDA 版本由容器镜像决定。详细架构、平台支持及安装指南请参阅官方文档链接。",[118,119,120],"NVIDIA Driver","libnvidia-container","Docker (或兼容的容器运行时)",[13],"2026-03-27T02:49:30.150509","2026-04-06T06:52:13.569335",[125,130,135,140,145,149],{"id":126,"question_zh":127,"answer_zh":128,"source_url":129},13915,"遇到 GPG 密钥错误（NO_PUBKEY）导致无法更新 NVIDIA CUDA 仓库怎么办？","此问题通常与 CUDA 下载仓库的密钥轮换有关，而非容器工具包本身的问题。可以通过在 Dockerfile 或系统中重新安装密钥环来解决。具体步骤如下：\n1. 删除旧的源列表和密钥：\n   rm \u002Fetc\u002Fapt\u002Fsources.list.d\u002Fcuda.list\n   rm \u002Fetc\u002Fapt\u002Fsources.list.d\u002Fnvidia-ml.list\n   apt-key del 7fa2af80\n2. 更新并安装 wget：\n   apt-get update && apt-get install -y --no-install-recommends wget\n3. 下载并安装新的密钥环包（以 Ubuntu 20.04 为例）：\n   wget https:\u002F\u002Fdeveloper.download.nvidia.com\u002Fcompute\u002Fcuda\u002Frepos\u002Fubuntu2004\u002Fx86_64\u002Fcuda-keyring_1.0-1_all.deb\n   dpkg -i cuda-keyring_1.0-1_all.deb\n4. 
再次更新 apt：\n   apt-get update","https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fissues\u002F258",{"id":131,"question_zh":132,"answer_zh":133,"source_url":134},13916,"运行 Docker 容器时报错 'libnvidia-ml.so.1: cannot open shared object file' 如何解决？","该错误常见于通过 Snap 安装的 Docker 版本，因为 Snap 版本的 Docker 目前不受 NVIDIA Container Toolkit 支持。解决方法是移除 Snap 版的 Docker 并改用官方 Docker CE：\n1. 移除 Snap 版 Docker：\n   sudo snap remove docker --purge\n2. 重启 Docker 服务以恢复 socket（如果已安装 Docker CE）：\n   sudo systemctl stop docker docker.socket containerd\n   sudo systemctl start docker\n之后再次尝试运行容器即可。","https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fissues\u002F154",{"id":136,"question_zh":137,"answer_zh":138,"source_url":139},13917,"使用 nvidia-ctk cdi generate 命令时提示找不到驱动对应的库文件（failed to locate libcuda.so）怎么办？","在 v1.13+ 版本中，库文件的发现逻辑有所变化。如果自动检测失败，需要显式指定库搜索路径。请同时设置环境变量 LD_LIBRARY_PATH 和使用 --library-search-path 参数。示例命令如下：\nLD_LIBRARY_PATH=\u002Fusr\u002Flocal\u002Fnvidia\u002Flib:\u002Fusr\u002Flocal\u002Fnvidia\u002Flib64 nvidia-ctk cdi generate --library-search-path=\u002Fusr\u002Flocal\u002Fnvidia\u002Flib:\u002Fusr\u002Flocal\u002Fnvidia\u002Flib64 --output=\u002Fvar\u002Frun\u002Fcdi\u002Fnvidia.yaml\n确保路径中包含正确的 libcuda.so 版本文件。","https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fissues\u002F82",{"id":141,"question_zh":142,"answer_zh":143,"source_url":144},13918,"在 WSL2 环境下运行容器时报错 'libnvidia-ml.so.1: file exists' 是什么原因？","此错误通常发生在自定义镜像中已经包含了 NVIDIA 驱动库文件（如 libnvidia-ml.so.1），而容器启动时 NVIDIA 工具包试图再次挂载该文件导致冲突。解决方法是确保基础镜像中不要预装任何 nvidia-driver 或 libnvidia-* 相关的包。仅安装 CUDA Toolkit 相关包（如 cuda-compat, cuda-cudart 等）即可，驱动程序应由宿主机通过挂载提供。","https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fissues\u002F289",{"id":146,"question_zh":147,"answer_zh":148,"source_url":134},13919,"Docker Desktop on Linux 是否支持 NVIDIA GPU 加速？","目前不支持。NVIDIA Container Toolkit 明确说明不支持在 Linux 上使用 Docker Desktop，也不支持通过 Snap 安装的 Docker 
版本。建议在 Linux 上直接使用官方文档推荐的 Docker CE 安装方式，以确保 GPU 功能正常工作。",{"id":150,"question_zh":151,"answer_zh":152,"source_url":134},13920,"如何验证 nvidia-container-cli 的运行环境和加载的库信息？","可以使用调试模式运行 nvidia-container-cli 来查看详细初始化和库加载信息。执行以下命令：\nnvidia-container-cli -k -d \u002Fdev\u002Ftty info\n该命令会输出当前使用的根目录、ldcache 路径、用户权限以及尝试加载的驱动库（如 dxcore, libnvidia-ml 等）的详细日志，有助于排查“初始化失败”或“找不到库”类的问题。",[154,159,164,169,174,179,184,189,194,199,204,209,214,219,224,229,234,239,244,249],{"id":155,"version":156,"summary_zh":157,"released_at":158},73540,"v1.18.0-rc.4","**NOTE:** This release is a unified release of the NVIDIA Container Toolkit that consists of the following packages:\r\n* [`libnvidia-container 1.18.0-rc.4`](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Flibnvidia-container\u002Freleases\u002Ftag\u002Fv1.18.0-rc.4)\r\n* [`nvidia-container-toolkit 1.18.0-rc.4`](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Freleases\u002Ftag\u002Fv1.18.0-rc.4)\r\n\r\nThe packages for this release are published to the [`libnvidia-container` package repositories](https:\u002F\u002Fnvidia.github.io\u002Flibnvidia-container\u002F).\r\n\r\n## What's Changed\r\n\r\n- Add drop-in file support for containerd and crio\r\n- Add support for IMEX_CHANNELS to jit-cdi mode\r\n- Refactor IMEX channel requests from image\r\n- Remove redundant CDI annotations\r\n- support running and degraded systemd state during install\r\n- Don't inject enable-cuda-compat hook in CSV mode\r\n- Cleanup default runtime in runtime config when setAsDefault=false\r\n\r\n### Changes in the Toolkit Container\r\n\r\n- Bump nvidia\u002Fdistroless\u002Fgo to v3.1.12-dev in \u002Fdeployments\u002Fcontainer\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fcompare\u002Fv1.18.0-rc.3...v1.18.0-rc.4\r\n\r\n","2025-09-22T09:04:54",{"id":160,"version":161,"summary_zh":162,"released_at":163},73542,"v1.18.0-rc.2","**NOTE:** This release is a unified release of the NVIDIA 
Container Toolkit that consists of the following packages:\r\n* [`libnvidia-container 1.18.0-rc.2`](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Flibnvidia-container\u002Freleases\u002Ftag\u002Fv1.18.0-rc.2)\r\n* [`nvidia-container-toolkit 1.18.0-rc.2`](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Freleases\u002Ftag\u002Fv1.18.0-rc.2)\r\n\r\nThe packages for this release are published to the [`libnvidia-container` package repositories](https:\u002F\u002Fnvidia.github.io\u002Flibnvidia-container\u002F).\r\n\r\n## What's Changed\r\n- Ensure that .so symlinks are created for driver libraries in the container\r\n- Load settings from config.toml file during CDI generation\r\n- Use securejoin to resolve \u002Fproc\r\n- Refactor nvml CDI spec generation for consistency\r\n- Simplify nvcdi interface\r\n- Add SpecGenerator interface\r\n- Ensure that modified params file mount does not leak to host\r\n- Add test for leaking mounts with shared mount propagation\r\n\r\n### Changes in the Toolkit Container\r\n- Bump nvidia\u002Fdistroless\u002Fgo to v3.1.10-dev in \u002Fdeployments\u002Fcontainer\r\n- Bump nvidia\u002Fcuda to 12.9.1-base-ubi9 in \u002Fdeployments\u002Fcontainer\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fcompare\u002Fv1.18.0-rc.1...v1.18.0-rc.2\r\n\r\n","2025-07-31T09:22:17",{"id":165,"version":166,"summary_zh":167,"released_at":168},73543,"v1.18.0-rc.1","**NOTE:** This release is a unified release of the NVIDIA Container Toolkit that consists of the following packages:\r\n* [`libnvidia-container 1.18.0-rc.1`](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Flibnvidia-container\u002Freleases\u002Ftag\u002Fv1.18.0-rc.1)\r\n* [`nvidia-container-toolkit 1.18.0-rc.1`](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Freleases\u002Ftag\u002Fv1.18.0-rc.1)\r\n\r\nThe packages for this release are published to the [`libnvidia-container` package 
repositories](https:\u002F\u002Fnvidia.github.io\u002Flibnvidia-container\u002F).\r\n\r\n## Known Issues\r\n\r\nThe systemd units for keeping CDI specifications for NVIDIA GPUs up to date are not properly started when upgrading the NVIDIA Container Toolkit packages. To ensure that these are properly started run the following after installing the `nvidia-container-toolkit-base` package:\r\n```\r\nsudo systemctl enable --now nvidia-cdi-refresh.service nvidia-cdi-refresh.path\r\n```\r\n\r\nTo confirm that CDI specifications were generated, run:\r\n```\r\nnvidia-ctk cdi list\r\n```\r\n\r\n## What's Changed\r\n\r\n- Add create-soname-symlinks hook\r\n- Require matching version of libnvidia-container-tools\r\n- Add envvar for libcuda.so parent dir to CDI spec\r\n- Add EnvVar to Discover interface\r\n- Resolve to legacy by default in nvidia-container-runtime-hook\r\n- Default to jit-cdi mode in the nvidia runtime\r\n- Use functional options to construct runtime mode resolver\r\n- Add NVIDIA_CTK_CONFIG_FILE_PATH envvar\r\n- Switch to cuda ubi9 base image\r\n- Use single version tag for image\r\n- BUGFIX: modifier: respect GPU volume-mount device requests\r\n- Ensure consistent sorting of annotation devices\r\n- Extract deb and rpm packages to single image\r\n- Remove docker-run as default runtime candidate\r\n- Return annotation devices from VisibleDevices\r\n- Make CDI device requests consistent with other methods\r\n- Construct container info once\r\n- Add logic to extract annotation device requests to image type\r\n- Add IsPrivileged function to CUDA container type\r\n- Add device IDs to nvcdi.GetSpec API\r\n- Refactor extracting requested devices from the container image\r\n- Add EnvVars option for all nvidia-ctk cdi commands\r\n- Add nvidia-cdi-refresh service\r\n- Add discovery of arch-specific vulkan ICD\r\n- Add disabled-device-node-modification hook to CDI spec\r\n- Add a hook to disable device node creation in a container\r\n- Remove redundant deduplication of 
search paths for WSL\r\n- Added ability to disable specific (or all) CDI hooks\r\n- Consolidate HookName functionality on internal\u002Fdiscover pkg\r\n- Add envvar to control debug logging in CDI hooks\r\n- Add FeatureFlags to the nvcdi API\r\n- Reenable nvsandboxutils for driver discovery\r\n- Edit discover.mounts to have a deterministic output\r\n- Refactor the way we create CDI Hooks\r\n- Issue warning on unsupported CDI hook\r\n- Run update-ldcache in isolated namespaces\r\n- Add cuda-compat-mode config option\r\n- Fix mode detection on Thor-based systems\r\n- Add rprivate to CDI mount options\r\n- Skip nil discoverers in merge\r\n- bump runc go dep to v1.3.0\r\n- Fix resolution of libs in LDCache on ARM\r\n- Updated .release:staging to stage images in nvstaging\r\n- Refactor toolkit installer\r\n- Allow container runtime executable path to be specified\r\n- Add support for building ubuntu22.04 on arm64\r\n- Fix race condition in mounts cache\r\n- Add support for building ubuntu22.04 on amd64\r\n- Fix update-ldcache arguments\r\n- Remove positional arguments from nvidia-ctk-installer\r\n- Remove deprecated --runtime-args from nvidia-ctk-installer\r\n- Add version info to nvidia-ctk-installer\r\n- Update nvidia-ctk-installer app name to match binary name\r\n- Allow nvidia-ctk config --set to accept comma-separated lists\r\n- Disable enable-cuda-compat hook for management containers\r\n- Allow enable-cuda-compat hook to be disabled in CDI spec generation\r\n- Add disable-cuda-compat-lib-hook feature flag\r\n- Add basic integration tests for forward compat\r\n- Ensure that mode hook is executed last\r\n- Add enable-cuda-compat hook to CDI spec generation\r\n- Add ldconfig hook in legacy mode\r\n- Add enable-cuda-compat hook if required\r\n- Add enable-cuda-compat hook to allow compat libs to be discovered\r\n- Use libcontainer execseal to run ldconfig\r\n- Add ignore-imex-channel-requests feature flag\r\n- Disable nvsandboxutils in nvcdi API\r\n- Allow cdi mode 
to work with --gpus flag\r\n- Add E2E GitHub Action for Container Toolkit\r\n- Add remote-test option for E2E\r\n- Enable CDI in runtime if CDI_ENABLED is set\r\n- Fix overwriting docker feature flags\r\n- Add option in toolkit container to enable CDI in runtime\r\n- Remove Set from engine config API\r\n- Add EnableCDI() method to engine.Interface\r\n- Add IMEX binaries to CDI discovery\r\n- Rename test folder to tests\r\n- Add allow-cuda-compat-libs-from-container feature flag\r\n- Disable mounting of compat libs from container\r\n- Skip graphics modifier in CSV mode\r\n- Move nvidia-toolkit to nvidia-ctk-installer\r\n- Automated regression testing for the NVIDIA Container Toolkit\r\n- Add support for containerd version 3 config\r\n- Remove watch option from create-dev-char-syml","2025-06-25T12:56:08",{"id":170,"version":171,"summary_zh":172,"released_at":173},73526,"v1.19.0","## 变更内容\n\n**注意：** 本版本是 NVIDIA 容器工具包的统一发布版，包含以下软件包：\n* [`libnvidia-container 1.19.0`](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Flibnvidia-container\u002Freleases\u002Ftag\u002Fv1.19.0)\n* [`nvidia-container-toolkit 1.19.0`](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Freleases\u002Ftag\u002Fv1.19.0)\n\n该版本的软件包已发布到 [`libnvidia-container` 软件源](https:\u002F\u002Fnvidia.github.io\u002Flibnvidia-container\u002F)。\n\n这是一个功能增强版本，主要包括以下高层次变更：\n- 当检测到未知的 OCI 运行时规范字段时报告错误。\n- 增加对基于 IGX 2.0 Thor 的系统的支持，包括安装了 dGPU 的系统。\n- 在基于 Tegra 的系统上增加了 CUDA 向前兼容性支持。在基于 Orin 的系统上，这需要容器中包含特定的兼容库。\n- 支持以可能没有显式设备节点访问权限的用户身份运行容器，而无需显式指定额外的用户组。\n- 改进了用于确保 CDI 规范保持最新的 systemd 服务触发机制。\n- 增加对只读根文件系统（例如 initramfs 上的文件系统）的支持。\n\n## 自 v1.19.0-rc.7 以来的变更\n* 由 elezar 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1722 中为 v1.19.0 版本提升版本号。\n\n## v1.19.0-rc.7\n* build(deps): 由 dependabot[bot] 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1683 中将 actions\u002Fdownload-artifact 从 7 升级到 8。\n* build(deps): 由 
dependabot[bot] 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1684 中将 actions\u002Fupload-artifact 从 6 升级到 7。\n* 由 rahulait 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1675 中停止将 PR 标记为过时。\n* 由 elezar 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1690 中修复：确保默认设置 CUDA 兼容容器路径。\n* 由 elezar 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1681 中修复：更新 versions.mk 文件时使用正确的版本号。\n* 由 elezar 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1692 中进行杂项操作：对 mock 文件运行 goimports 工具。\n* 由 elezar 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1674 中进行杂项操作：将 isIntegratedGPUID 函数重命名为 isOrinGPUID。\n* 由 elezar 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1667 中使用自动 CDI 规范生成功能为其他修改器生成 CDI 规范。\n* 由 elezar 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1697 中修复：在兼容性检查中不再使用驱动程序版本作为 ELF 头信息。\n* 由 elezar 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1666 中修复：在 CDI 中复用已实例化的 editsFactory。\n* 由 elezar 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1698 中为 v1.19.0-rc.7 版本提升版本号。\n\n## v1.19.0-rc.6\n* 由 elezar 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1679 中为所有 CSV 兼容性检查使用主机 CUDA 版本。\n* 提升 v1.19.0-rc.6 版本号。","2026-03-12T15:34:23",{"id":175,"version":176,"summary_zh":177,"released_at":178},73527,"v1.19.0-rc.7","## 变更内容\n* 构建（依赖）：将 actions\u002Fdownload-artifact 从 7 升级到 8，由 @dependabot[bot] 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1683 中完成\n* 构建（依赖）：将 actions\u002Fupload-artifact 从 6 升级到 7，由 @dependabot[bot] 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1684 
中完成\n* 停止将 PR 标记为过时，由 @rahulait 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1675 中完成\n* 修复：确保默认设置 CUDA Compat 容器路径，由 @elezar 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1690 中完成\n* 修复：在更新 versions.mk 文件时使用正确版本，由 @elezar 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1681 中完成\n* 杂项：对 mock 文件运行 goimports，由 @elezar 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1692 中完成\n* 杂项：将 isIntegratedGPUID 函数重命名为 isOrinGPUID，由 @elezar 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1674 中完成\n* 使用自动 CDI 规范生成工具为其他修饰符生成 CDI 规范，由 @elezar 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1667 中完成\n* 修复：在兼容性检查中不要使用 ELF 头中的驱动程序版本，由 @elezar 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1697 中完成\n* 修复：在 CDI 中复用已实例化的 editsFactory，由 @elezar 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1666 中完成\n* 为 v1.19.0-rc.7 版本升级版本号，由 @elezar 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1698 中完成\n\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fcompare\u002Fv1.19.0-rc.6...v1.19.0-rc.7","2026-03-05T21:19:13",{"id":180,"version":181,"summary_zh":182,"released_at":183},73528,"v1.19.0-rc.6","## 变更内容\n* 使用主机 CUDA 版本来进行所有 CSV 兼容性检查，由 @elezar 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1679 中实现\n* 为 v1.19.0-rc.6 版本提升版本号，由 @elezar 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1680 
中完成\n\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fcompare\u002Fv1.19.0-rc.5...v1.19.0-rc.6","2026-02-25T15:49:16",{"id":185,"version":186,"summary_zh":187,"released_at":188},73529,"v1.19.0-rc.5","修复了 v1.19.0-rc.4 中 CDI 规范生成的一个严重错误。\n\n## 变更内容\n* 将 nvcdi 构建选项与运行时选项分离，由 elezar 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1659 中完成\n* build(deps)：将 \u002Fdeployments\u002Fcontainer 中的 nvidia\u002Fdistroless\u002Fgo 从 v4.0.1-dev 升级至 v4.0.2-dev，由 dependabot[bot] 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1672 中完成\n* chore：修复发布工具以支持带注释的标签，由 elezar 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1669 中完成\n* 允许删除过时的缓存，由 rahulait 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1670 中完成\n* 移除 dlopen 定位器，由 elezar 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1676 中完成\n* 为 v1.19.0-rc.5 版本提升版本号，由 elezar 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1678 中完成\n\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fcompare\u002Fv1.19.0-rc.4...v1.19.0-rc.5","2026-02-25T13:52:46",{"id":190,"version":191,"summary_zh":192,"released_at":193},73530,"v1.19.0-rc.4","## 变更内容\n* 构建（依赖）：在 \u002Fdeployments\u002Fdevel 中，由 @dependabot[bot] 将 Go 从 1.25.6 升级至 1.25.7，详见 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1640\n* 构建（依赖）：由 @dependabot[bot] 将 golang.org\u002Fx\u002Fmod 从 0.32.0 升级至 0.33.0，详见 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1642\n* 构建（依赖）：在 \u002Ftests 中，由 @dependabot[bot] 将 github.com\u002Fonsi\u002Fginkgo\u002Fv2 从 2.27.5 升级至 2.28.1，详见 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1619\n* 构建（依赖）：在 \u002Fdeployments\u002Fdevel 中，由 
@dependabot[bot] 将 Go 从 1.25.7 升级至 1.26.0，详见 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1647\n* 构建（依赖）：由 @dependabot[bot] 将 golang.org\u002Fx\u002Fcrypto 从 0.47.0 升级至 0.48.0，详见 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1644\n* 构建（依赖）：由 @dependabot[bot] 将 golang.org\u002Fx\u002Fsys 从 0.40.0 升级至 0.41.0，详见 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1641\n* 由 @elezar 对设备节点测试进行重构，详见 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1637\n* 由 @jactor-sue 为 ctk-installer 添加调试级别日志选项，详见 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1551\n* 构建（依赖）：在 \u002Ftests 中，由 @dependabot[bot] 将 golang.org\u002Fx\u002Fmod 从 0.32.0 升级至 0.33.0，详见 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1643\n* 修复：由 @elezar 更正 Orin 的容器兼容路径，详见 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1649\n* 由 @elezar 对 ldcache 定位器进行重构，详见 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1648\n* 测试：由 @elezar 将 Ptr 函数添加到包中，详见 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1653\n* 杂项：由 @elezar 将管理接收器的名称从 m 改为 l，详见 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1654\n* 测试：由 @elezar 不再使用修饰符来添加测试运行时钩子，详见 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1656\n* 重构：由 @elezar 将 update-ldcache 参数处理移至钩子创建者处，详见 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1651\n* 由 @elezar 添加修饰符工厂，详见 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1655\n* 重构：由 @elezar 在可能的情况下优先使用 cdilib 方法而非函数，详见 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1658\n* 由 @elezar 修复 golangci-lint 错误，详见 
https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1662\n* 修复：由 @elezar 修复对 nvswitch 模式的支持，详见 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1661\n* 重构：由 @elezar 将 NormalizeSearchPaths 移至 lookup 包中，详见 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1660\n* 由 @elezar 允许指定 Orin CUDA 前向兼容根目录，详见 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1614\n* 修复：由 @elezar 修复因合并冲突导致的拼写错误，详见 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1664\n* 重构：由 @elezar 为 lookup.Locator 添加 AsOptional 包装器，详见 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit","2026-02-20T16:42:59",{"id":195,"version":196,"summary_zh":197,"released_at":198},73531,"v1.19.0-rc.3","## 变更内容\n* [CI]：由 @rahulait 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1613 中添加 GitHub 问题模板\n* 测试：由 @elezar 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1615 中调整针对 Docker 29.2.0 的端到端测试\n* 使用 rpmrebuild 替代 fpm 重新构建 RPM 包，由 @elezar 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1612 中完成\n* 构建（依赖项）：将 third_party\u002Flibnvidia-container 从 `a83ddc0` 升级至 `fe0d8e5`，由 @dependabot[bot] 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1622 中完成\n* 构建（依赖项）：将 nvidia\u002Fdistroless\u002Fgo 从 v4.0.0-dev 升级至 v4.0.1-dev，在 \u002Fdeployments\u002Fcontainer 中进行，由 @dependabot[bot] 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1618 中完成\n* 构建（依赖项）：将 third_party\u002Flibnvidia-container 从 `fe0d8e5` 升级至 `7585946`，由 @dependabot[bot] 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1623 中完成\n* 构建（依赖项）：将 github.com\u002Fonsi\u002Fgomega 从 1.39.0 升级至 1.39.1，在 \u002Ftests 中进行，由 @dependabot[bot] 在 
https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1620 中完成\n* 修复：允许将配置选项设置为默认值，由 @elezar 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1629 中完成\n* 向外部公开内部包，由 @elezar 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1577 中完成\n* 修复：记录实际的 CDI 规范版本，由 @elezar 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1633 中完成\n* 修复：在 CDI 规范中设置设备节点的 GID，由 @elezar 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1631 中完成\n* 为 v1.19.0-rc.3 版本发布提升版本号，由 @elezar 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1636 中完成\n* 从 nvidia-cdi-refresh.service 中移除重启逻辑，由 @elezar 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1638 中完成\n\n## 新贡献者\n* @rahulait 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1613 中做出了首次贡献\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fcompare\u002Fv1.19.0-rc.2...v1.19.0-rc.3","2026-02-06T21:14:04",{"id":200,"version":201,"summary_zh":202,"released_at":203},73532,"v1.19.0-rc.2","## 变更内容\n* [无变更日志] @elezar 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1529 中修复了发布脚本中的拼写错误。\n* @dependabot[bot] 在 \u002Ftests 中将 github.com\u002Fonsi\u002Fgomega 从 1.38.2 升级至 1.38.3，详见 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1533。\n* @tariq1890 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1535 中改进了解析逗号分隔配置源时去除多余空白字符的逻辑。\n* @dependabot[bot] 在 \u002Fdeployments\u002Fcontainer 中将 nvidia\u002Fdistroless\u002Fgo 从 v3.2.1-dev 升级至 v3.2.2-dev，详见 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1547。\n* @elezar 在 
https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1538 中修复了 JIT CDI 规范生成失败时不返回错误的问题。\n* @dependabot[bot] 将 tags.cncf.io\u002Fcontainer-device-interface 从 1.0.2-0.20251114135136-1b24d969689f 升级至 1.1.0，详见 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1536。\n* @dependabot[bot] 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1504 中将 github.com\u002Fopencontainers\u002Frunc 从 1.3.3 升级至 1.4.0。\n* @dependabot[bot] 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1549 中将 actions\u002Fdownload-artifact 从 6 升级至 7。\n* @dependabot[bot] 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1548 中将 actions\u002Fupload-artifact 从 5 升级至 6。\n* @elezar 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1530 中修复了基于 musl 的容器中驱动库解析的问题。\n* @dependabot[bot] 在 \u002Ftests 中将 github.com\u002Fonsi\u002Fgomega 从 1.38.3 升级至 1.39.0，详见 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1566。\n* @dependabot[bot] 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1564 中将 golang.org\u002Fx\u002Fsys 从 0.39.0 升级至 0.40.0。\n* @elezar 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1563 中修复了创建 DRM 设备符号链接时参数传递的问题。\n* @dependabot[bot] 在 \u002Ftests 中将 github.com\u002Fonsi\u002Fginkgo\u002Fv2 从 2.27.3 升级至 2.27.4，详见 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1565。\n* @elezar 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1562 中修复了当 NVIDIA_VISIBLE_DEVICES=none 时仍注入设备节点的问题。\n* @dependabot[bot] 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1572 中将 golang.org\u002Fx\u002Fmod 从 0.31.0 升级至 0.32.0。\n* @dependabot[bot] 在 \u002Ftests 中将 
golang.org\u002Fx\u002Fcrypto 从 0.46.0 升级至 0.47.0，详见 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1575。\n* @dependabot[bot] 在 \u002Ftests 中将 github.com\u002Fonsi\u002Fginkgo\u002Fv2 从 2.27.4 升级至 2.27.5，详见 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1574。\n* @dependabot[bot] 在 \u002Fdeployments\u002Fcontainer 中将 nvidia\u002Fdistroless\u002Fgo 从 v3.2.2-dev 升级至 v4.0.0-dev，详见 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1576。\n* @tariq1890 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1498 中实现了 NRI 插件服务器，用于注入管理用的 CDI 设备。\n* @dependabot[bot] 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fn 中将 github.com\u002Furfave\u002Fcli\u002Fv3 从 3.6.1 升级至 3.6.2。","2026-01-23T19:34:54",{"id":205,"version":206,"summary_zh":207,"released_at":208},73541,"v1.18.0-rc.3","**NOTE:** This release is a unified release of the NVIDIA Container Toolkit that consists of the following packages:\r\n* [`libnvidia-container 1.18.0-rc.3`](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Flibnvidia-container\u002Freleases\u002Ftag\u002Fv1.18.0-rc.3)\r\n* [`nvidia-container-toolkit 1.18.0-rc.3`](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Freleases\u002Ftag\u002Fv1.18.0-rc.3)\r\n\r\nThe packages for this release are published to the [`libnvidia-container` package repositories](https:\u002F\u002Fnvidia.github.io\u002Flibnvidia-container\u002F).\r\n\r\n## What's Changed\r\n\r\n- Generate separate specs for coherent and noncoherent devices\r\n- Disable chmod hook by default\r\n- Add support for gated modifications jit-cdi mode\r\n- Add support for nvswitch mode to nvcdi API\r\n- Add support for gdrcopy mode to nvcdi API\r\n- Consolidate CDI spec generation of gated modes\r\n- Add missing imex mode to Valid modes\r\n- Add explicitLibs list to libs discovery\r\n- Consolidate logic to determine driver version\r\n- 
Fix: Enable local YUM repo by default in CentOS8 and Fedora35 entrypoints\r\n\r\n### Changes in the Toolkit Container\r\n\r\n- Bump nvidia\u002Fdistroless\u002Fgo to v3.1.11-dev in \u002Fdeployments\u002Fcontainer\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fcompare\u002Fv1.18.0-rc.2...v1.18.0-rc.3\r\n\r\n","2025-08-29T11:01:00",{"id":210,"version":211,"summary_zh":212,"released_at":213},73533,"v1.18.2","**注意**：此版本是 NVIDIA 容器工具包的统一发布版，包含以下软件包：\n* [`libnvidia-container 1.18.2`](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Flibnvidia-container\u002Freleases\u002Ftag\u002Fv1.18.2)\n* [`nvidia-container-toolkit 1.18.2`](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Freleases\u002Ftag\u002Fv1.18.2)\n\n该版本的软件包已发布到 [`libnvidia-container` 软件源](https:\u002F\u002Fnvidia.github.io\u002Flibnvidia-container\u002F)。\n\n## 变更内容\n* 修复 CDI 刷新服务的触发机制，以正确处理压缩内核。\n* 当 JIT CDI 规范生成失败时返回错误。这使得容器无法启动的原因更加清晰，而不是报告一个无法解析的 CDI 设备。\n* 允许在基于 `musl` 的容器中正确找到驱动库。\n* 正确构造用于创建 DRM 设备符号链接的挂钩参数。这修复了一个 bug：当 `NVIDIA_DRIVER_CAPABILITIES` 包含 `graphics` 时，在 `legacy` 模式下容器将无法启动。\n* 修复了一个 bug：当指定 `NVIDIA_VISIBLE_DEVICES=none` 时，所有 GPU 都会暴露给容器。\n* 为 CDI 刷新服务添加重启逻辑，以应对驱动程序在系统启动时尚未就绪的情况。\n* 使用 CDI 时，不再以只读方式挂载 IPC 套接字。这修复了在某些嵌套场景（例如 K8s 上的 Slurm）中的崩溃问题。\n\n### 工具包容器中的变更\n\n- 将 \u002Fdeployments\u002Fcontainer 中的 nvidia\u002Fdistroless\u002Fgo 版本升级至 v3.2.2-dev。\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fcompare\u002Fv1.18.1...v1.18.2","2026-01-23T13:16:23",{"id":215,"version":216,"summary_zh":217,"released_at":218},73534,"v1.19.0-rc.1","## 变更内容\n* 由 @elezar 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1373 中修复了 create-dev-char-symlinks 命令中的 bug\n* 由 @elezar 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1377 中修改为不从运行时模式中读取 CDI 生成模式\n* 由 @elezar 在 
https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1382 中修复了 containerd 对现有导入的处理问题\n* 由 @cdesiniotis 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1400 中将日志消息重定向到 stderr\n* 由 @elezar 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1407 中修复了 containerd 的 drop-in 配置路径问题\n* 由 @elezar 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1432 中启用了针对 arm64 架构的容器工具包测试\n* 由 @elezar 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1411 中更新了 RPM 包，使其包含 256 位摘要\n* 由 @elezar 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1409 中修复了 CDI 刷新服务的触发问题\n* 由 @elezar 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1419 中允许为 `jit-cdi` 模式设置 nvcdi 功能标志\n* 由 @elezar 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1431 中修复了 jit-cdi 模式下 CDI 规范重复生成的问题\n* 由 @elezar 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1403 中从 ldcache 更新中过滤掉已跟踪的目录\n* 由 @elezar 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1464 中在生成 CSV CDI 规范时使用请求的设备\n* 由 @jfroy 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1459 中实现 ldconfig：如果缺少 ld.so.conf 文件，则创建该文件\n* 由 @waltforme 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1257 中修复了 nvidia-container-runtime 的 README 中的拼写错误\n* 由 @elezar 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1451 中补充考虑 libnvidia-ml.so 以提取驱动程序版本\n* 由 @elezar 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1471 中当 ldcache 不存在时使用 enable-cuda-compat 钩子\n* 由 @elezar 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1444 
中修复了主机与容器发行版不匹配时 ldcache 更新的问题\n* 由 @elezar 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F960 中尽可能从主机路径中提取 FileMode\n* 由 @elezar 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1174 中允许 update-ldcache 钩子在不支持 pivot-root 的情况下正常工作\n* 由 @elezar 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1491 中默认采用严格的 OCI 运行时规范解码\n* 由 @EdSwarthout 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1518 中修复了 CDI 刷新服务的触发问题\n* 由 @elezar 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1511 中为 nvidia-ctk cdi generate 添加了 --no-all-device 选项\n* 由 @elezar 在 https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fpull\u002F1512 中为 nvidia-ctk cdi generate 命令添加了 --device-id 标志\n* 处理 C 中的多 GPU","2025-12-10T12:24:08",{"id":220,"version":221,"summary_zh":222,"released_at":223},73535,"v1.18.1","**注意**：此版本是 NVIDIA 容器工具包的统一发布版，包含以下软件包：\n* [`libnvidia-container 1.18.1`](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Flibnvidia-container\u002Freleases\u002Ftag\u002Fv1.18.1)\n* [`nvidia-container-toolkit 1.18.1`](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Freleases\u002Ftag\u002Fv1.18.1)\n\n该版本的软件包已发布到 [`libnvidia-container` 软件源](https:\u002F\u002Fnvidia.github.io\u002Flibnvidia-container\u002F)。\n\n## 变更内容\n- 修复主机与容器发行版不匹配时 ldcache 的更新问题\n- 当 ldcache 不存在时使用 enable-cuda-compat 钩子\n- 修复容器中缺少 ld.so.conf 文件时 ldcache 的更新问题\n- 切换到 go 1.25 的 os.Root\n- 在 ldcache 更新时过滤已跟踪的目录\n- 修复 jit-cdi 模式下的重复 spec 问题\n- 修正 nvsandboxutils 功能标志中的拼写错误\n- 允许为 jit-cdi 模式配置 nvcdi FeatureFlags\n- 修复 CDI 刷新服务的触发问题\n- 更新 rpm 包以使用 256 位摘要\n- 修复 containerd drop-in 配置路径问题\n- 在 nvidia 运行时封装脚本中将日志消息重定向至 stderr\n- 修复 containerd 中现有导入的处理问题\n- 不再从运行时模式中读取 cdi 生成模式\n- 修复 create-dev-char-symlinks 命令中的 bug\n\n### 工具包容器中的变更\n- 将 \u002Fdeployments\u002Fcontainer 中的 nvidia\u002Fdistroless\u002Fgo 
版本升级至 v3.2.1-dev\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fcompare\u002Fv1.18.0...v1.18.1","2025-12-02T10:14:22",{"id":225,"version":226,"summary_zh":227,"released_at":228},73536,"v1.18.0","## What's Changed\r\n\r\n**NOTE:** This release is a unified release of the NVIDIA Container Toolkit that consists of the following packages:\r\n* [`libnvidia-container 1.18.0`](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Flibnvidia-container\u002Freleases\u002Ftag\u002Fv1.18.0)\r\n* [`nvidia-container-toolkit 1.18.0`](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Freleases\u002Ftag\u002Fv1.18.0)\r\n\r\nThe packages for this release are published to the [`libnvidia-container` package repositories](https:\u002F\u002Fnvidia.github.io\u002Flibnvidia-container\u002F).\r\n\r\nThis is a major release and includes the following high-level changes:\r\n- The default mode of the NVIDIA Container Runtime has been updated to make use\r\n  of a just-in-time-generated CDI specification instead of defaulting to the `legacy` mode.\r\n- Added a systemd unit to generate CDI specifications for available devices automatically. 
This allows\r\n  native CDI support in container engines such as Docker and Podman to be used without additional steps.\r\n\r\n### Changes since v1.18.0-rc.6\r\n- Fix bug in device selection in jit-cdi mode\r\n- Make list of explicit driver libraries opt-in\r\n\r\n#### Changes in the Toolkit Container\r\n- Invoke the actual default low-level runtime in the nvidia-ctk wrapper script\r\n- Remove default_runtime from cri-o config on cleanup\r\n- Do not remove cri-o drop-in file on shutdown.\r\n\r\n### v1.18.0-rc.6\r\n\r\n- Remove ppc64le artifacts from build\r\n- Add support for building artifacts with custom GOPROXY\r\n- Always update the ldcache in the container.\r\n\r\n#### Changes in the Toolkit Container\r\n\r\n- Bump nvidia\u002Fdistroless\u002Fgo to v3.1.13-dev in \u002Fdeployments\u002Fcontainer\r\n- Allow config sources to be specified for containerd and crio\r\n- Allow file for config source to be specified explicitly\r\n\r\n#### Changes in libnvidia-container\r\n- Add clock_gettime to the set of allowed syscalls under seccomp.\r\n\r\n### v1.18.0-rc.5\r\n\r\n- Fix handling of unrecognised hooks\r\n- Disable generation of coherent CDI specs by default\r\n- Update go-nvlib to restrict nvidia.com\u002Fgpu.coherent devices to devices with an ATS addressing mode.\r\n- Deprecate the hook config mode for cri-o\r\n\r\n#### Changes in the Toolkit Container\r\n- Deprecate the hook config mode for cri-o\r\n- Add CRI plugin config from source containerd config to drop-in file\r\n- Add support for drop-in config files in a container\r\n\r\n### v1.18.0-rc.4\r\n\r\n- Add drop-in file support for containerd and crio\r\n- Add support for IMEX_CHANNELS to jit-cdi mode\r\n- Refactor IMEX channel requests from image\r\n- Remove redundant CDI annotations\r\n- support running and degraded systemd state during install\r\n- Don't inject enable-cuda-compat hook in CSV mode\r\n- Cleanup default runtime in runtime config when setAsDefault=false\r\n\r\n#### Changes in the Toolkit 
Container\r\n- Bump nvidia\u002Fdistroless\u002Fgo to v3.1.12-dev in \u002Fdeployments\u002Fcontainer\r\n\r\n### v1.18.0-rc.3\r\n- Generate separate specs for coherent and noncoherent devices\r\n- Disable chmod hook by default\r\n- Add support for gated modifications jit-cdi mode\r\n- Add support for nvswitch mode to nvcdi API\r\n- Add support for gdrcopy mode to nvcdi API\r\n- Consolidate CDI spec generation of gated modes\r\n- Add missing imex mode to Valid modes\r\n- Add explicitLibs list to libs discovery\r\n- Consolidate logic to determine driver version\r\n- Fix: Enable local YUM repo by default in CentOS8 and Fedora35 entrypoints\r\n\r\n#### Changes in the Toolkit Container\r\n\r\n- Bump nvidia\u002Fdistroless\u002Fgo to v3.1.11-dev in \u002Fdeployments\u002Fcontainer\r\n\r\n### v1.18.0-rc.2\r\n- Ensure that .so symlinks are created for driver libraries in the container\r\n- Load settings from config.toml file during CDI generation\r\n- Use securejoin to resolve \u002Fproc\r\n- Refactor nvml CDI spec generation for consistency\r\n- Simplify nvcdi interface\r\n- Add SpecGenerator interface\r\n- Ensure that modified params file mount does not leak to host\r\n- Add test for leaking mounts with shared mount propagation\r\n\r\n#### Changes in the Toolkit Container\r\n- Bump nvidia\u002Fdistroless\u002Fgo to v3.1.10-dev in \u002Fdeployments\u002Fcontainer\r\n- Bump nvidia\u002Fcuda to 12.9.1-base-ubi9 in \u002Fdeployments\u002Fcontainer\r\n\r\n### v1.18.0-rc.1\r\n\r\n- Add create-soname-symlinks hook\r\n- Require matching version of libnvidia-container-tools\r\n- Add envvar for libcuda.so parent dir to CDI spec\r\n- Add EnvVar to Discover interface\r\n- Resolve to legacy by default in nvidia-container-runtime-hook\r\n- Default to jit-cdi mode in the nvidia runtime\r\n- Use functional options to construct runtime mode resolver\r\n- Add NVIDIA_CTK_CONFIG_FILE_PATH envvar\r\n- Switch to cuda ubi9 base image\r\n- Use single version tag for image\r\n- BUGFIX: modifier: 
respect GPU volume-mount device requests\r\n- Ensure consistent sorting of annotation devices\r\n- Extract deb and rpm packages to single image\r\n- Remove docker-run as default runtime candidate\r\n- Return annotation devices from VisibleDevices\r\n- Make CDI device requests consistent with other methods\r\n- Construct container info once\r\n- Add logic to extract annotation device requests to image type\r\n- Add IsPrivileged function to CUDA container type\r\n- Add device","2025-10-21T12:13:16",{"id":230,"version":231,"summary_zh":232,"released_at":233},73537,"v1.18.0-rc.6","**NOTE:** This release is a unified release of the NVIDIA Container Toolkit that consists of the following packages:\r\n* [`libnvidia-container 1.18.0-rc.6`](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Flibnvidia-container\u002Freleases\u002Ftag\u002Fv1.18.0-rc.6)\r\n* [`nvidia-container-toolkit 1.18.0-rc.6`](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Freleases\u002Ftag\u002Fv1.18.0-rc.6)\r\n\r\nThe packages for this release are published to the [`libnvidia-container` package repositories](https:\u002F\u002Fnvidia.github.io\u002Flibnvidia-container\u002F).\r\n\r\n## What's Changed\r\n- Remove ppc64le artifacts from build\r\n- Add support for building artifacts with custom GOPROXY\r\n- Always update the ldcache in the container.\r\n\r\n### Changes in the Toolkit Container\r\n\r\n- Bump nvidia\u002Fdistroless\u002Fgo to v3.1.13-dev in \u002Fdeployments\u002Fcontainer\r\n- Allow config sources to be specified for containerd and crio\r\n- Allow file for config source to be specified explicitly\r\n\r\n## Changes in libnvidia-container\r\n- Add clock_gettime to the set of allowed syscalls under seccomp.\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fcompare\u002Fv1.18.0-rc.5...v1.18.0-rc.6\r\n\r\n","2025-10-09T11:55:37",{"id":235,"version":236,"summary_zh":237,"released_at":238},73538,"v1.17.9","**NOTE:** This 
release is a unified release of the NVIDIA Container Toolkit that consists of the following packages:\r\n* `libnvidia-container-tools` and `libnvidia-container1` [`v1.17.9`](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Flibnvidia-container\u002Freleases\u002Ftag\u002Fv1.17.9)\r\n* `nvidia-container-toolkit` and `nvidia-container-toolkit-base` [`v1.17.9`](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Freleases\u002Ftag\u002Fv1.17.9)\r\n\r\nThe packages for this release are published to the [`libnvidia-container` package repositories](https:\u002F\u002Fnvidia.github.io\u002Flibnvidia-container\u002F).\r\n\r\n## What's Changed\r\n\r\n- Don't inject enable-cuda-compat hook in CSV mode\r\n- Use securejoin to resolve \u002Fproc\r\n\r\n### Changes in the Toolkit Container\r\n\r\n- Bump nvidia\u002Fcuda to 12.9.1 in \u002Fdeployments\u002Fcontainer\r\n\r\n### Changes in libnvidia-container\r\n- Add clock_gettime to allowed syscalls\r\n- Add libnvidia-gpucomp.so to the list of compute libs\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fcompare\u002Fv1.17.8...v1.17.9\r\n\r\n","2025-10-15T12:30:27",{"id":240,"version":241,"summary_zh":242,"released_at":243},73539,"v1.18.0-rc.5","**NOTE:** This release is a unified release of the NVIDIA Container Toolkit that consists of the following packages:\r\n* [`libnvidia-container 1.18.0-rc.5`](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Flibnvidia-container\u002Freleases\u002Ftag\u002Fv1.18.0-rc.5)\r\n* [`nvidia-container-toolkit 1.18.0-rc.5`](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Freleases\u002Ftag\u002Fv1.18.0-rc.5)\r\n\r\nThe packages for this release are published to the [`libnvidia-container` package repositories](https:\u002F\u002Fnvidia.github.io\u002Flibnvidia-container\u002F).\r\n\r\n## What's Changed\r\n\r\n- Fix handling of unrecognised hooks\r\n- Disable generation of coherent CDI specs by default\r\n- 
Update go-nvlib to restrict nvidia.com\u002Fgpu.coherent devices to devices with an ATS addressing mode.\r\n- Deprecate the hook config mode for cri-o\r\n\r\n### Changes in the Toolkit Container\r\n- Deprecate the hook config mode for cri-o\r\n- Add CRI plugin config from source containerd config to drop-in file\r\n- Add support for drop-in config files in a container\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fcompare\u002Fv1.18.0-rc.4...v1.18.0-rc.5\r\n\r\n","2025-09-29T09:15:33",{"id":245,"version":246,"summary_zh":247,"released_at":248},73544,"v1.17.8","**NOTE:** This release is a unified release of the NVIDIA Container Toolkit that consists of the following packages:\r\n* `libnvidia-container-tools` and `libnvidia-container1` [`v1.17.8`](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Flibnvidia-container\u002Freleases\u002Ftag\u002Fv1.17.8)\r\n* `nvidia-container-toolkit` and `nvidia-container-toolkit-base` [`v1.17.8`](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Freleases\u002Ftag\u002Fv1.17.8)\r\n\r\nThe packages for this release are published to the [`libnvidia-container` package repositories](https:\u002F\u002Fnvidia.github.io\u002Flibnvidia-container\u002F).\r\n\r\n## What's Changed\r\n\r\n- Updated the ordering of Mounts in CDI to have a deterministic output. This makes testing more consistent.\r\n- Added `NVIDIA_CTK_DEBUG` envvar to hooks as a placeholder for enabling debugging output.\r\n\r\n### Changes in libnvidia-container\r\n- Fixed bug in setting default for `--cuda-compat-mode` flag. This caused failures in use cases invoking the `nvidia-container-cli` directly or when the `v1.17.7` version of the `nvidia-container-cli` was used with an older `nvidia-container-runtime-hook`.\r\n- Added additional logging to the `nvidia-container-cli`.\r\n- Fixed variable initialisation when updating the ldcache. 
This caused failures on Arch linux or other platforms where the `nvidia-container-cli` was built from source.\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fcompare\u002Fv1.17.7...v1.17.8\r\n\r\n","2025-05-30T16:35:09",{"id":250,"version":251,"summary_zh":252,"released_at":253},73545,"v1.17.7","**NOTE:** This release is a unified release of the NVIDIA Container Toolkit that consists of the following packages:\r\n* [`libnvidia-container 1.17.7`](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Flibnvidia-container\u002Freleases\u002Ftag\u002Fv1.17.7)\r\n* [`nvidia-container-toolkit 1.17.7`](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Freleases\u002Ftag\u002Fv1.17.7)\r\n\r\nThe packages for this release are published to the [`libnvidia-container` package repositories](https:\u002F\u002Fnvidia.github.io\u002Flibnvidia-container\u002F).\r\n\r\n## What's Changed\r\n\r\n- Fix mode detection on Thor-based systems. This correctly resolves `auto` mode to `csv`.\r\n- Fix resolution of libs in LDCache on ARM. This fixes CDI spec generation on ARM-based systems using NVML.\r\n- Added a `nvidia-container-runtime-modes.legacy.cuda-compat-mode` option to provide finer control of how CUDA Forward Compatibility is handled. The default value (`ldconfig`) fixes CUDA Compatibility Support in cases where only the NVIDIA Container Runtime Hook is used (e.g. the Docker `--gpus` command line flag).\r\n- Run update-ldcache hook in isolated namespaces.\r\n\r\n### Changes in the Toolkit Container\r\n\r\n- Bump CUDA base image version to 12.9.0\r\n\r\n### Changes in libnvidia-container\r\n\r\n- Add `--cuda-compat-mode` flag to the `nvidia-container-cli configure` command.\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-container-toolkit\u002Fcompare\u002Fv1.17.6...v1.17.7\r\n\r\n","2025-05-16T15:43:02"]