[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-NVIDIA--TensorRT":3,"tool-NVIDIA--TensorRT":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",142651,2,"2026-04-06T23:34:12",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107888,"2026-04-06T11:32:50",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 
格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":10,"last_commit_at":59,"category_tags":60,"status":17},4487,"LLMs-from-scratch","rasbt\u002FLLMs-from-scratch","LLMs-from-scratch 是一个基于 PyTorch 的开源教育项目，旨在引导用户从零开始一步步构建一个类似 ChatGPT 的大型语言模型（LLM）。它不仅是同名技术著作的官方代码库，更提供了一套完整的实践方案，涵盖模型开发、预训练及微调的全过程。\n\n该项目主要解决了大模型领域“黑盒化”的学习痛点。许多开发者虽能调用现成模型，却难以深入理解其内部架构与训练机制。通过亲手编写每一行核心代码，用户能够透彻掌握 Transformer 架构、注意力机制等关键原理，从而真正理解大模型是如何“思考”的。此外，项目还包含了加载大型预训练权重进行微调的代码，帮助用户将理论知识延伸至实际应用。\n\nLLMs-from-scratch 特别适合希望深入底层原理的 AI 开发者、研究人员以及计算机专业的学生。对于不满足于仅使用 API，而是渴望探究模型构建细节的技术人员而言，这是极佳的学习资源。其独特的技术亮点在于“循序渐进”的教学设计：将复杂的系统工程拆解为清晰的步骤，配合详细的图表与示例，让构建一个虽小但功能完备的大模型变得触手可及。无论你是想夯实理论基础，还是为未来研发更大规模的模型做准备",90106,"2026-04-06T11:19:32",[35,15,13,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":76,"owner_email":76,"owner_twitter":76,"owner_website":77,"owner_url":78,"languages":79,"stars":115,"forks":116,"last_commit_at":117,"license":118,"difficulty_score":10,"env_os":119,"env_gpu":120,"env_ram":121,"env_deps":122,"category_tags":136,"github_topics":137,"view_count":32,"oss_zip_url":76,"oss_zip_packed_at":76,"status":17,"created_at":143,"updated_at":144,"faqs":145,"releases":176},4819,"NVIDIA\u002FTensorRT","TensorRT","NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.","TensorRT 是 NVIDIA 推出的一款高性能深度学习推理 SDK，专为在 NVIDIA GPU 上加速 AI 模型部署而设计。它主要解决了深度学习模型从训练环境迁移到生产环境时面临的推理速度慢、资源消耗大等痛点，通过层融合、精度校准（如 INT8 量化）和内核自动调优等技术，显著提升推理吞吐量并降低延迟。\n\n这款工具非常适合 AI 开发者、算法工程师以及需要优化模型性能的研究人员使用。无论是希望将复杂的神经网络高效部署到服务器还是边缘设备，TensorRT 都能提供强大的支持。其开源组件包含了插件源码、ONNX 解析器及丰富的示例应用，方便用户进行自定义扩展和二次开发。\n\n技术亮点方面，TensorRT 不仅支持显式量化和强类型网络等先进特性以提升精度与效率，还持续演进其插件架构（如从 IPluginV2 升级至 IPluginV3），确保生态的兼容性与前瞻性。此外，它提供了便捷的 Python 安装包，让开发者能快速上手体验。对于追求极致推理性能的企业用户，TensorRT 更是构建高效 AI 服务不可或缺的核心引擎。","[![License](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLicense-Apache%202.0-blue.svg)](https:\u002F\u002Fopensource.org\u002Flicenses\u002FApache-2.0) [![Documentation](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FTensorRT-documentation-brightgreen.svg)](https:\u002F\u002Fdocs.nvidia.com\u002Fdeeplearning\u002Fsdk\u002Ftensorrt-developer-guide\u002Findex.html) [![Roadmap](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FRoadmap-Q1_2026-brightgreen.svg)](documents\u002Ftensorrt_roadmap_2026q1.pdf)\n\n# :mega::mega: Announcement :mega::mega:\n\nTensorRT 11.0 is coming soon in 2026 Q2 with powerful new capabilities designed to accelerate your AI inference workflows. 
With this major version bump, TensorRT's API will be streamlined and a few legacy features will be removed.\n\nWe recommend migrating early for the following features:\n- Weakly-typed networks and related APIs will be removed, replaced by [Strongly Typed Networks](https:\u002F\u002Fdocs.nvidia.com\u002Fdeeplearning\u002Ftensorrt\u002Flatest\u002Finference-library\u002Fadvanced.html#strongly-typed-networks).\n- Implicit quantization and related APIs will be removed, replaced by [Explicit Quantization](https:\u002F\u002Fdocs.nvidia.com\u002Fdeeplearning\u002Ftensorrt\u002Flatest\u002Finference-library\u002Fwork-quantized-types.html#explicit-quantization)\n- IPluginV2 and related APIs will be removed, replaced by [IPluginV3](https:\u002F\u002Fdocs.nvidia.com\u002Fdeeplearning\u002Ftensorrt\u002Flatest\u002Finference-library\u002Fextending-custom-layers.html#migrating-v2-plugins-to-ipluginv3)\n- TREX tool will be removed, replaced by [Nsight Deep Learning Designer](https:\u002F\u002Fdocs.nvidia.com\u002Fnsight-dl-designer\u002FUserGuide\u002Findex.html#visualizing-a-tensorrt-engine)\n- Python bindings for Python 3.9 and older versions will be removed starting TensorRT 10.16. RPM packages for RHEL\u002FRocky Linux 8 and RHEL\u002FRocky Linux 9 now depend on Python 3.12.\n\n# TensorRT Open Source Software\n\nThis repository contains the Open Source Software (OSS) components of NVIDIA TensorRT. It includes the sources for TensorRT plugins and ONNX parser, as well as sample applications demonstrating usage and capabilities of the TensorRT platform. These open source software components are a subset of the TensorRT General Availability (GA) release with some extensions and bug-fixes.\n\n- For code contributions to TensorRT-OSS, please see our [Contribution Guide](CONTRIBUTING.md) and [Coding Guidelines](CODING-GUIDELINES.md).\n- For a summary of new additions and updates shipped with TensorRT-OSS releases, please refer to the [Changelog](CHANGELOG.md).\n- For business inquiries, please contact [researchinquiries@nvidia.com](mailto:researchinquiries@nvidia.com)\n- For press and other inquiries, please contact Hector Marinez at [hmarinez@nvidia.com](mailto:hmarinez@nvidia.com)\n\nNeed enterprise support? NVIDIA global support is available for TensorRT with the [NVIDIA AI Enterprise software suite](https:\u002F\u002Fwww.nvidia.com\u002Fen-us\u002Fdata-center\u002Fproducts\u002Fai-enterprise\u002F). Check out [NVIDIA LaunchPad](https:\u002F\u002Fwww.nvidia.com\u002Fen-us\u002Flaunchpad\u002Fai\u002Fai-enterprise\u002F) for free access to a set of hands-on labs with TensorRT hosted on NVIDIA infrastructure.\n\nJoin the [TensorRT and Triton community](https:\u002F\u002Fwww.nvidia.com\u002Fen-us\u002Fdeep-learning-ai\u002Ftriton-tensorrt-newsletter\u002F) and stay current on the latest product updates, bug fixes, content, best practices, and more.\n\n# Prebuilt TensorRT Python Package\n\nWe provide the TensorRT Python package for an easy installation. 
\\\nTo install:\n\n```bash\npip install tensorrt\n```\n\nYou can skip the **Build** section to enjoy TensorRT with Python.\n\n# Build\n\n## Prerequisites\n\nTo build the TensorRT-OSS components, you will first need the following software packages.\n\n**TensorRT GA build**\n\n- TensorRT v10.16.0.72\n  - Available from direct download links listed below\n\n**System Packages**\n\n- [CUDA](https:\u002F\u002Fdeveloper.nvidia.com\u002Fcuda-toolkit)\n  - Recommended versions:\n  - cuda-13.2.0\n  - cuda-12.9.0\n- [CUDNN (optional)](https:\u002F\u002Fdeveloper.nvidia.com\u002Fcudnn)\n  - cuDNN 8.9\n- [GNU make](https:\u002F\u002Fftp.gnu.org\u002Fgnu\u002Fmake\u002F) >= v4.1\n- [cmake](https:\u002F\u002Fgithub.com\u002FKitware\u002FCMake\u002Freleases) >= v3.31\n- [python](https:\u002F\u002Fwww.python.org\u002Fdownloads\u002F) >= v3.10, \u003C= v3.13.x\n- [pip](https:\u002F\u002Fpypi.org\u002Fproject\u002Fpip\u002F#history) >= v19.0\n- Essential utilities\n  - [git](https:\u002F\u002Fgit-scm.com\u002Fdownloads), [pkg-config](https:\u002F\u002Fwww.freedesktop.org\u002Fwiki\u002FSoftware\u002Fpkg-config\u002F), [wget](https:\u002F\u002Fwww.gnu.org\u002Fsoftware\u002Fwget\u002Ffaq.html#download)\n\n**Optional Packages**\n\n- [NCCL](https:\u002F\u002Fdeveloper.nvidia.com\u002Fnccl\u002Fnccl-download) >= v2.19, \u003C v3.0 — only when building with multi-device support (`-DTRT_BUILD_ENABLE_MULTIDEVICE=ON`) for the `sampleDistCollective` sample.\n- Containerized build\n  - [Docker](https:\u002F\u002Fdocs.docker.com\u002Finstall\u002F) >= 19.03\n  - [NVIDIA Container Toolkit](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-docker)\n- PyPI packages (for demo applications\u002Ftests)\n  - [onnx](https:\u002F\u002Fpypi.org\u002Fproject\u002Fonnx\u002F)\n  - [onnxruntime](https:\u002F\u002Fpypi.org\u002Fproject\u002Fonnxruntime\u002F)\n  - [tensorflow-gpu](https:\u002F\u002Fpypi.org\u002Fproject\u002Ftensorflow\u002F) >= 2.5.1\n  - [Pillow](https:\u002F\u002Fpypi.org\u002Fproject\u002FPillow\u002F) >= 9.0.1\n  - [pycuda](https:\u002F\u002Fpypi.org\u002Fproject\u002Fpycuda\u002F) \u003C 2021.1\n  - [numpy](https:\u002F\u002Fpypi.org\u002Fproject\u002Fnumpy\u002F)\n  - [pytest](https:\u002F\u002Fpypi.org\u002Fproject\u002Fpytest\u002F)\n- Code formatting tools (for contributors)\n\n  - [Clang-format](https:\u002F\u002Fclang.llvm.org\u002Fdocs\u002FClangFormat.html)\n  - [Git-clang-format](https:\u002F\u002Fgithub.com\u002Fllvm-mirror\u002Fclang\u002Fblob\u002Fmaster\u002Ftools\u002Fclang-format\u002Fgit-clang-format)\n\n  > NOTE: [onnx-tensorrt](https:\u002F\u002Fgithub.com\u002Fonnx\u002Fonnx-tensorrt), [cub](http:\u002F\u002Fnvlabs.github.io\u002Fcub\u002F), and [protobuf](https:\u002F\u002Fgithub.com\u002Fprotocolbuffers\u002Fprotobuf.git) packages are downloaded along with TensorRT OSS, and not required to be installed.\n\n## Downloading TensorRT Build\n\n1. #### Download TensorRT OSS\n\n   ```bash\n   git clone -b main https:\u002F\u002Fgithub.com\u002Fnvidia\u002FTensorRT TensorRT\n   cd TensorRT\n   git submodule update --init --recursive\n   ```\n\n2. 
#### (Optional - if not using TensorRT container) Specify the TensorRT GA release build path\n\n   If using the TensorRT OSS build container, TensorRT libraries are preinstalled under `\u002Fusr\u002Flib\u002Fx86_64-linux-gnu` and you may skip this step.\n\n   Else download and extract the TensorRT GA build from [NVIDIA Developer Zone](https:\u002F\u002Fdeveloper.nvidia.com) with the direct links below:\n\n   - [TensorRT 10.16.0.72 for CUDA 13.2, Linux x86_64](https:\u002F\u002Fdeveloper.nvidia.com\u002Fdownloads\u002Fcompute\u002Fmachine-learning\u002Ftensorrt\u002F10.16.0\u002Ftars\u002FTensorRT-10.16.0.72.Linux.x86_64-gnu.cuda-13.2.tar.gz)\n   - [TensorRT 10.16.0.72 for CUDA 12.9, Linux x86_64](https:\u002F\u002Fdeveloper.nvidia.com\u002Fdownloads\u002Fcompute\u002Fmachine-learning\u002Ftensorrt\u002F10.16.0\u002Ftars\u002FTensorRT-10.16.0.72.Linux.x86_64-gnu.cuda-12.9.tar.gz)\n   - [TensorRT 10.16.0.72 for CUDA 13.2, Windows x86_64](https:\u002F\u002Fdeveloper.nvidia.com\u002Fdownloads\u002Fcompute\u002Fmachine-learning\u002Ftensorrt\u002F10.16.0\u002Fzip\u002FTensorRT-10.16.0.72.Windows.win10.cuda-13.2.zip)\n   - [TensorRT 10.16.0.72 for CUDA 12.9, Windows x86_64](https:\u002F\u002Fdeveloper.nvidia.com\u002Fdownloads\u002Fcompute\u002Fmachine-learning\u002Ftensorrt\u002F10.16.0\u002Fzip\u002FTensorRT-10.16.0.72.Windows.win10.cuda-12.9.zip)\n\n   **Example: Ubuntu 22.04 on x86-64 with cuda-13.2**\n\n   ```bash\n   cd ~\u002FDownloads\n   tar -xvzf TensorRT-10.16.0.72.Linux.x86_64-gnu.cuda-13.2.tar.gz\n   export TRT_LIBPATH=`pwd`\u002FTensorRT-10.16.0.72\u002Flib\n   ```\n\n   **Example: Windows on x86-64 with cuda-12.9**\n\n   ```powershell\n   Expand-Archive -Path TensorRT-10.16.0.72.Windows.win10.cuda-12.9.zip\n   $env:TRT_LIBPATH=\"$pwd\\TensorRT-10.16.0.72\\lib\"\n   ```\n\n## Setting Up The Build Environment\n\nFor Linux platforms, we recommend that you generate a docker container for building TensorRT OSS as described below. For native builds, please install the [prerequisite](#prerequisites) _System Packages_.\n\n1. #### Generate the TensorRT-OSS build container.\n\n   **Example: Ubuntu 24.04 on x86-64 with cuda-13.2 (default)**\n\n   ```bash\n   .\u002Fdocker\u002Fbuild.sh --file docker\u002Fubuntu-24.04.Dockerfile --tag tensorrt-ubuntu24.04-cuda13.2\n   ```\n\n   **Example: Rockylinux8 on x86-64 with cuda-13.2**\n\n   ```bash\n   .\u002Fdocker\u002Fbuild.sh --file docker\u002Frockylinux8.Dockerfile --tag tensorrt-rockylinux8-cuda13.2\n   ```\n\n   **Example: Ubuntu 24.04 cross-compile for Jetson (aarch64) with cuda-13.2 (JetPack SDK)**\n\n   ```bash\n   .\u002Fdocker\u002Fbuild.sh --file docker\u002Fubuntu-cross-aarch64.Dockerfile --tag tensorrt-jetpack-cuda13.2\n   ```\n\n   **Example: Ubuntu 24.04 on aarch64 with cuda-13.2**\n\n   ```bash\n   .\u002Fdocker\u002Fbuild.sh --file docker\u002Fubuntu-24.04-aarch64.Dockerfile --tag tensorrt-aarch64-ubuntu24.04-cuda13.2\n   ```\n\n2. #### Launch the TensorRT-OSS build container.\n   **Example: Ubuntu 24.04 build container**\n   ```bash\n   .\u002Fdocker\u002Flaunch.sh --tag tensorrt-ubuntu24.04-cuda13.2 --gpus all\n   ```\n   > NOTE:\n   > \u003Cbr> 1. Use the `--tag` corresponding to build container generated in Step 1.\n   > \u003Cbr> 2. [NVIDIA Container Toolkit](#prerequisites) is required for GPU access (running TensorRT applications) inside the build container.\n   > \u003Cbr> 3. `sudo` password for Ubuntu build containers is 'nvidia'.\n   > \u003Cbr> 4. 
Specify port number using `--jupyter \u003Cport>` for launching Jupyter notebooks.\n   > \u003Cbr> 5. Write permission to this folder is required as this folder will be mounted inside the docker container for uid:gid of 1000:1000.\n\n## Building TensorRT-OSS\n\n- Generate Makefiles and build\n\n  **Example: Linux (x86-64) build with default cuda-13.2**\n\n  ```bash\n  cd $TRT_OSSPATH\n  mkdir -p build && cd build\n  cmake .. -DTRT_LIB_DIR=$TRT_LIBPATH -DTRT_OUT_DIR=`pwd`\u002Fout\n  make -j$(nproc)\n  ```\n\n  **Example: Linux (aarch64) build with default cuda-13.2**\n\n  ```bash\n  cd $TRT_OSSPATH\n  mkdir -p build && cd build\n  cmake .. -DTRT_LIB_DIR=$TRT_LIBPATH -DTRT_OUT_DIR=`pwd`\u002Fout -DCMAKE_TOOLCHAIN_FILE=$TRT_OSSPATH\u002Fcmake\u002Ftoolchains\u002Fcmake_aarch64-native.toolchain\n  make -j$(nproc)\n  ```\n\n  **Example: Native build on Jetson Thor (aarch64) with cuda-13.2**\n\n  ```bash\n  cd $TRT_OSSPATH\n  mkdir -p build && cd build\n  cmake .. -DTRT_LIB_DIR=$TRT_LIBPATH -DTRT_OUT_DIR=`pwd`\u002Fout -DTRT_PLATFORM_ID=aarch64\n  CC=\u002Fusr\u002Fbin\u002Fgcc make -j$(nproc)\n  ```\n\n  > NOTE: C compiler must be explicitly specified via CC= for native aarch64 builds of protobuf.\n\n  **Example: Ubuntu 24.04 Cross-Compile for Jetson Thor (aarch64) with cuda-13.2 (JetPack)**\n\n  ```bash\n  cd $TRT_OSSPATH\n  mkdir -p build && cd build\n  cmake .. -DTRT_LIB_DIR=$TRT_LIBPATH -DCMAKE_TOOLCHAIN_FILE=$TRT_OSSPATH\u002Fcmake\u002Ftoolchains\u002Fcmake_aarch64_cross.toolchain\n  make -j$(nproc)\n  ```\n\n  **Example: Ubuntu 24.04 Cross-Compile for DriveOS (aarch64) with cuda-13.2**\n\n  ```bash\n  cd $TRT_OSSPATH\n  mkdir -p build && cd build\n  cmake .. -DTRT_LIB_DIR=$TRT_LIBPATH -DCMAKE_TOOLCHAIN_FILE=$TRT_OSSPATH\u002Fcmake\u002Ftoolchains\u002Fcmake_aarch64_dos_cross.toolchain\n  make -j$(nproc)\n  ```\n\n  **Example: Native builds on Windows (x86) with cuda-13.2**\n\n  ```bash\n  cd $TRT_OSSPATH\n  New-Item -ItemType Directory -Path build\n  cd build\n  cmake .. -DTRT_LIB_DIR=\"$env:TRT_LIBPATH\" -DTRT_OUT_DIR=\"$pwd\\\\out\"\n  msbuild TensorRT.sln \u002Fproperty:Configuration=Release -m:$env:NUMBER_OF_PROCESSORS\n  ```\n\n  > NOTE: The default CUDA version used by CMake is 13.2. To override this, for example to 12.9, append `-DCUDA_VERSION=12.9` to the cmake command.\n\n- Required CMake build arguments are:\n  - `TRT_LIB_DIR`: Path to the TensorRT installation directory containing libraries.\n  - `TRT_OUT_DIR`: Output directory where generated build artifacts will be copied.\n- Optional CMake build arguments:\n  - `CMAKE_BUILD_TYPE`: Specify if binaries generated are for release or debug (contain debug symbols). Values consists of [`Release`] | `Debug`\n  - `CUDA_VERSION`: The version of CUDA to target, for example [`12.9.9`].\n  - `CUDNN_VERSION`: The version of cuDNN to target, for example [`8.9`].\n  - `PROTOBUF_VERSION`: The version of Protobuf to use, for example [`3.20.1`]. Note: Changing this will not configure CMake to use a system version of Protobuf, it will configure CMake to download and try building that version.\n  - `CMAKE_TOOLCHAIN_FILE`: The path to a toolchain file for cross compilation.\n  - `BUILD_PARSERS`: Specify if the parsers should be built, for example [`ON`] | `OFF`. If turned OFF, CMake will try to find precompiled versions of the parser libraries to use in compiling samples. First in `${TRT_LIB_DIR}`, then on the system. 
If the build type is Debug, then it will prefer debug builds of the libraries before release versions if available.\n  - `BUILD_PLUGINS`: Specify if the plugins should be built, for example [`ON`] | `OFF`. If turned OFF, CMake will try to find a precompiled version of the plugin library to use in compiling samples. First in `${TRT_LIB_DIR}`, then on the system. If the build type is Debug, then it will prefer debug builds of the libraries before release versions if available.\n  - `BUILD_SAMPLES`: Specify if the samples should be built, for example [`ON`] | `OFF`.\n  - `BUILD_SAFE_SAMPLES`: Specify if safety samples should be built, for example [`ON`] | `OFF`.\n  - `TRT_SAFETY_INFERENCE_ONLY`: Specify if only build the safety inference components, for example [`ON`] | `OFF`. If turned ON, all other components will be turned OFF except `BUILD_SAFE_SAMPLES`.\n  - `GPU_ARCHS`: GPU (SM) architectures to target. By default we generate CUDA code for all major SMs. Specific SM versions can be specified here as a quoted space-separated list to reduce compilation time and binary size. Table of compute capabilities of NVIDIA GPUs can be found [here](https:\u002F\u002Fdeveloper.nvidia.com\u002Fcuda-gpus). Examples: - NVidia A100: `-DGPU_ARCHS=\"80\"` - RTX 50 series: `-DGPU_ARCHS=\"120\"` - Multiple SMs: `-DGPU_ARCHS=\"80 120\"`\n  - `TRT_PLATFORM_ID`: Bare-metal build (unlike containerized cross-compilation). Currently supported options: `x86_64` (default).\n  - `TRT_BUILD_ENABLE_MULTIDEVICE`: Enable the multi-device sample (`sampleDistCollective`). Use `-DTRT_BUILD_ENABLE_MULTIDEVICE=ON` to build it; requires [NCCL](https:\u002F\u002Fdeveloper.nvidia.com\u002Fnccl\u002Fnccl-download) >= v2.19, \u003C v3.0.\n\n## Building TensorRT DriveOS Samples\n\n- Generate Makefiles and build\n\n  **Example: Cross-Compile for DOS7 Linux (aarch64)**\n\n  ```bash\n  cd $TRT_OSSPATH\n  mkdir -p build && cd build\n  cmake .. -DBUILD_SAMPLES=ON -DBUILD_PLUGINS=OFF -DBUILD_PARSERS=OFF -DTRT_OUT_DIR=`pwd`\u002Fbin_dynamic_cross -DTRT_LIB_DIR=$TRT_LIBPATH -DCMAKE_TOOLCHAIN_FILE=$TRT_OSSPATH\u002Fcmake\u002Ftoolchains\u002Fcmake_aarch64_dos_cross.toolchain\n  make -j$(nproc)\n  ```\n\n  **Example: Cross-Compile for DOS6.5 Linux (aarch64)**\n\n  ```bash\n  cd $TRT_OSSPATH\n  mkdir -p build && cd build\n  cmake .. -DBUILD_SAMPLES=ON -DBUILD_PLUGINS=OFF -DBUILD_PARSERS=OFF -DTRT_OUT_DIR=`pwd`\u002Fbin_dynamic_cross -DTRT_LIB_DIR=$TRT_LIBPATH -DCMAKE_TOOLCHAIN_FILE=$TRT_OSSPATH\u002Fcmake\u002Ftoolchains\u002Fcmake_aarch64_dos_cross.toolchain -DCUDA_VERSION=11.4 -DGPU_ARCHS=87\n  make -j$(nproc)\n  ```\n\n  **Example: Native build for DOS6.5 and DOS7 Linux (aarch64)**\n\n  ```bash\n  cd $TRT_OSSPATH\n  mkdir -p build && cd build\n  cmake .. -DTRT_LIB_DIR=$TRT_LIBPATH -DTRT_OUT_DIR=`pwd`\u002Fout -DCMAKE_TOOLCHAIN_FILE=$TRT_OSSPATH\u002Fcmake\u002Ftoolchains\u002Fcmake_aarch64-native.toolchain -DBUILD_SAMPLES=ON -DBUILD_PLUGINS=OFF -DBUILD_PARSERS=OFF\n  make -j$(nproc)\n  ```\n\n  **Example: Cross-Compile for DOS6.5 QNX (aarch64)**\n\n  ```bash\n  cd $TRT_OSSPATH\n  mkdir -p build && cd build\n  export CUDA_VERSION=11.4\n  export CUDA=cuda-$CUDA_VERSION\n  export CUDA_ROOT=\u002Fusr\u002Flocal\u002Fcuda-safe-$CUDA_VERSION\n  export QNX_BASE=\u002Fdrive\u002Ftoolchains\u002Fqnx_toolchain  # Set to your QNX toolchain installation path\n  export QNX_HOST=$QNX_BASE\u002Fhost\u002Flinux\u002Fx86_64\u002F\n  export QNX_TARGET=$QNX_BASE\u002Ftarget\u002Fqnx7\u002F\n  export PATH=$PATH:$QNX_HOST\u002Fusr\u002Fbin\n  cmake .. 
-DBUILD_SAMPLES=ON -DBUILD_PLUGINS=OFF -DBUILD_PARSERS=OFF -DBUILD_SAFE_SAMPLES=OFF -DCMAKE_CUDA_COMPILER=$CUDA_ROOT\u002Fbin\u002Fnvcc -DTRT_OUT_DIR=`pwd`\u002Fbin_dynamic_cross -DTRT_LIB_DIR=$TRT_LIBPATH -DCMAKE_TOOLCHAIN_FILE=$TRT_OSSPATH\u002Fcmake\u002Ftoolchains\u002Fcmake_qnx.toolchain -DCUDA_VERSION=$CUDA_VERSION -DGPU_ARCHS=87\n  make -j$(nproc)\n  ```\n\n  > NOTE: Set `QNX_BASE` to your QNX toolchain installation path.\n  > If your CUDA version is not the same as in the example, set `CUDA_VERSION` (for examples that use it in multiple places) or add `-DCUDA_VERSION=\u003Cversion>` to the cmake command.\n\n  **Example: Cross-Compile for DOS6.5 QNX Safety (aarch64)**\n\n  ```bash\n  cd $TRT_OSSPATH\n  mkdir -p build && cd build\n  export CUDA_VERSION=11.4\n  export QNX_BASE=\u002Fdrive\u002Ftoolchains\u002Fqnx_toolchain  # Set to your QNX toolchain installation path\n  export QNX_HOST=$QNX_BASE\u002Fhost\u002Flinux\u002Fx86_64\u002F\n  export QNX_TARGET=$QNX_BASE\u002Ftarget\u002Fqnx7\u002F\n  export PATH=$PATH:$QNX_HOST\u002Fusr\u002Fbin\n  export CUDA=cuda-$CUDA_VERSION\n  export CUDA_ROOT=\u002Fusr\u002Flocal\u002Fcuda-safe-$CUDA_VERSION\n  cmake .. -DBUILD_SAMPLES=OFF -DBUILD_SAFE_SAMPLES=ON -DBUILD_PLUGINS=OFF -DBUILD_PARSERS=OFF -DTRT_SAFETY_INFERENCE_ONLY=ON -DTRT_OUT_DIR=`pwd`\u002Fbin_dynamic_cross -DTRT_LIB_DIR=$TRT_LIBPATH -DCMAKE_TOOLCHAIN_FILE=$TRT_OSSPATH\u002Fcmake\u002Ftoolchains\u002Fcmake_qnx_safe.toolchain -DCUDA_VERSION=$CUDA_VERSION -DCMAKE_CUDA_COMPILER=$CUDA_ROOT\u002Fbin\u002Fnvcc -DGPU_ARCHS=87\n  make -j$(nproc)\n  ```\n\n  > NOTE: Set `QNX_BASE` to your QNX toolchain installation path.\n  > If your CUDA version is not the same as in the example, set `CUDA_VERSION` (for examples that use it in multiple places) or add `-DCUDA_VERSION=\u003Cversion>` to the cmake command.\n\n  **Example: Cross-Compile for DOS7 QNX (aarch64)**\n\n  ```bash\n  cd $TRT_OSSPATH\n  mkdir -p build && cd build\n  export CUDA_VERSION=13.2\n  export CUDA=cuda-$CUDA_VERSION\n  export CUDA_ROOT=\u002Fusr\u002Flocal\u002Fcuda-safe-$CUDA_VERSION\n  export QNX_BASE=\u002Fdrive\u002Ftoolchains\u002Fqnx_toolchain  # Set to your QNX toolchain installation path\n  export QNX_HOST=$QNX_BASE\u002Fhost\u002Flinux\u002Fx86_64\u002F\n  export QNX_TARGET=$QNX_BASE\u002Ftarget\u002Fqnx\u002F\n  export PATH=$PATH:$QNX_HOST\u002Fusr\u002Fbin\n  cmake .. 
-DBUILD_SAMPLES=ON -DBUILD_PLUGINS=OFF -DBUILD_PARSERS=OFF -DBUILD_SAFE_SAMPLES=OFF -DCMAKE_CUDA_COMPILER=$CUDA_ROOT\u002Fbin\u002Fnvcc -DTRT_OUT_DIR=`pwd`\u002Fbin_dynamic_cross -DTRT_LIB_DIR=$TRT_LIBPATH -DCMAKE_TOOLCHAIN_FILE=$TRT_OSSPATH\u002Fcmake\u002Ftoolchains\u002Fcmake_qnx.toolchain -DCUDA_VERSION=$CUDA_VERSION -DGPU_ARCHS=110\n  make -j$(nproc)\n  ```\n\n  > NOTE: Set `QNX_BASE` to your QNX toolchain installation path.\n  > If your CUDA version is not the same as in the example, set `CUDA_VERSION` (for examples that use it in multiple places) or add `-DCUDA_VERSION=\u003Cversion>` to the cmake command.\n\n# References\n\n## TensorRT Resources\n\n- [TensorRT Developer Home](https:\u002F\u002Fdeveloper.nvidia.com\u002Ftensorrt)\n- [TensorRT QuickStart Guide](https:\u002F\u002Fdocs.nvidia.com\u002Fdeeplearning\u002Ftensorrt\u002Fquick-start-guide\u002Findex.html)\n- [TensorRT Developer Guide](https:\u002F\u002Fdocs.nvidia.com\u002Fdeeplearning\u002Ftensorrt\u002Fdeveloper-guide\u002Findex.html)\n- [TensorRT Sample Support Guide](https:\u002F\u002Fdocs.nvidia.com\u002Fdeeplearning\u002Ftensorrt\u002Fsample-support-guide\u002Findex.html)\n- [TensorRT ONNX Tools](https:\u002F\u002Fdocs.nvidia.com\u002Fdeeplearning\u002Ftensorrt\u002Findex.html#tools)\n- [TensorRT Discussion Forums](https:\u002F\u002Fdevtalk.nvidia.com\u002Fdefault\u002Fboard\u002F304\u002Ftensorrt\u002F)\n- [TensorRT Release Notes](https:\u002F\u002Fdocs.nvidia.com\u002Fdeeplearning\u002Ftensorrt\u002Frelease-notes\u002Findex.html)\n\n## Known Issues\n\n- Please refer to [TensorRT Release Notes](https:\u002F\u002Fdocs.nvidia.com\u002Fdeeplearning\u002Ftensorrt\u002Frelease-notes)\n","[![License](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLicense-Apache%202.0-blue.svg)](https:\u002F\u002Fopensource.org\u002Flicenses\u002FApache-2.0) [![Documentation](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FTensorRT-documentation-brightgreen.svg)](https:\u002F\u002Fdocs.nvidia.com\u002Fdeeplearning\u002Fsdk\u002Ftensorrt-developer-guide\u002Findex.html) [![Roadmap](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FRoadmap-Q1_2026-brightgreen.svg)](documents\u002Ftensorrt_roadmap_2026q1.pdf)\n\n# :mega::mega: 公告 :mega::mega:\n\nTensorRT 11.0 将于 2026 年第二季度正式发布，带来强大的新功能，旨在加速您的 AI 推理工作流。随着这一重大版本更新，TensorRT 的 API 将得到简化，并移除部分遗留功能。\n\n我们建议您尽早迁移以下功能：\n- 弱类型网络及相关 API 将被移除，取而代之的是 [强类型网络](https:\u002F\u002Fdocs.nvidia.com\u002Fdeeplearning\u002Ftensorrt\u002Flatest\u002Finference-library\u002Fadvanced.html#strongly-typed-networks)。\n- 隐式量化及相关 API 将被移除，取而代之的是 [显式量化](https:\u002F\u002Fdocs.nvidia.com\u002Fdeeplearning\u002Ftensorrt\u002Flatest\u002Finference-library\u002Fwork-quantized-types.html#explicit-quantization)。\n- IPluginV2 及相关 API 将被移除，取而代之的是 [IPluginV3](https:\u002F\u002Fdocs.nvidia.com\u002Fdeeplearning\u002Ftensorrt\u002Flatest\u002Finference-library\u002Fextending-custom-layers.html#migrating-v2-plugins-to-ipluginv3)。\n- TREX 工具将被移除，取而代之的是 [Nsight Deep Learning Designer](https:\u002F\u002Fdocs.nvidia.com\u002Fnsight-dl-designer\u002FUserGuide\u002Findex.html#visualizing-a-tensorrt-engine)。\n- 从 TensorRT 10.16 开始，将不再支持 Python 3.9 及更早版本的 Python 绑定。RHEL\u002FRocky Linux 8 和 RHEL\u002FRocky Linux 9 的 RPM 包现依赖于 Python 3.12。\n\n# TensorRT 开源软件\n\n本仓库包含 NVIDIA TensorRT 的开源软件（OSS）组件。它包含了 TensorRT 插件和 ONNX 解析器的源代码，以及演示 TensorRT 平台使用方法和功能的示例应用。这些开源软件组件是 TensorRT 正式发布版（GA）的一部分，附带了一些扩展和错误修复。\n\n- 如需为 TensorRT-OSS 贡献代码，请参阅我们的 [贡献指南](CONTRIBUTING.md) 和 [编码规范](CODING-GUIDELINES.md)。\n- 关于 TensorRT-OSS 版本中新增内容及更新的摘要，请参考 
[变更日志](CHANGELOG.md)。\n- 如有业务咨询，请联系 [researchinquiries@nvidia.com](mailto:researchinquiries@nvidia.com)。\n- 如有媒体或其他咨询，请联系 Hector Marinez，邮箱：[hmarinez@nvidia.com](mailto:hmarinez@nvidia.com)。\n\n需要企业级支持吗？NVIDIA 全球技术支持可为 TensorRT 提供服务，配合 [NVIDIA AI Enterprise 软件套件](https:\u002F\u002Fwww.nvidia.com\u002Fen-us\u002Fdata-center\u002Fproducts\u002Fai-enterprise\u002F) 使用。访问 [NVIDIA LaunchPad](https:\u002F\u002Fwww.nvidia.com\u002Fen-us\u002Flaunchpad\u002Fai\u002Fai-enterprise\u002F) 即可免费体验一系列基于 NVIDIA 基础设施、使用 TensorRT 的实践实验室。\n\n加入 [TensorRT 和 Triton 社区](https:\u002F\u002Fwww.nvidia.com\u002Fen-us\u002Fdeep-learning-ai\u002Ftriton-tensorrt-newsletter\u002F) ，及时了解最新产品更新、漏洞修复、内容、最佳实践等信息。\n\n# 预编译 TensorRT Python 包\n\n我们提供了易于安装的 TensorRT Python 包。 \\\n安装命令如下：\n\n```bash\npip install tensorrt\n```\n\n您可以跳过“构建”部分，直接使用 Python 版本的 TensorRT。\n\n# 构建\n\n## 前置条件\n\n要构建 TensorRT-OSS 组件，您首先需要安装以下软件包。\n\n**TensorRT GA 构建**\n\n- TensorRT v10.16.0.72\n  - 可通过下方提供的直接下载链接获取\n\n**系统软件包**\n\n- [CUDA](https:\u002F\u002Fdeveloper.nvidia.com\u002Fcuda-toolkit)\n  - 推荐版本：\n  - cuda-13.2.0\n  - cuda-12.9.0\n- [CUDNN（可选）](https:\u002F\u002Fdeveloper.nvidia.com\u002Fcudnn)\n  - cuDNN 8.9\n- [GNU make](https:\u002F\u002Fftp.gnu.org\u002Fgnu\u002Fmake\u002F) ≥ v4.1\n- [cmake](https:\u002F\u002Fgithub.com\u002FKitware\u002FCMake\u002Freleases) ≥ v3.31\n- [python](https:\u002F\u002Fwww.python.org\u002Fdownloads\u002F) ≥ v3.10, ≤ v3.13.x\n- [pip](https:\u002F\u002Fpypi.org\u002Fproject\u002Fpip\u002F#history) ≥ v19.0\n- 必要的实用工具\n  - [git](https:\u002F\u002Fgit-scm.com\u002Fdownloads)、[pkg-config](https:\u002F\u002Fwww.freedesktop.org\u002Fwiki\u002FSoftware\u002Fpkg-config\u002F)、[wget](https:\u002F\u002Fwww.gnu.org\u002Fsoftware\u002Fwget\u002Ffaq.html#download)\n\n**可选软件包**\n\n- [NCCL](https:\u002F\u002Fdeveloper.nvidia.com\u002Fnccl\u002Fnccl-download) ≥ v2.19, \u003C v3.0 — 仅在启用多设备支持（`-DTRT_BUILD_ENABLE_MULTIDEVICE=ON`）并构建 `sampleDistCollective` 示例时需要。\n- 容器化构建\n  - [Docker](https:\u002F\u002Fdocs.docker.com\u002Finstall\u002F) ≥ 19.03\n  - [NVIDIA Container Toolkit](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-docker)\n- PyPI 包（用于演示应用\u002F测试）\n  - [onnx](https:\u002F\u002Fpypi.org\u002Fproject\u002Fonnx\u002F)\n  - [onnxruntime](https:\u002F\u002Fpypi.org\u002Fproject\u002Fonnxruntime\u002F)\n  - [tensorflow-gpu](https:\u002F\u002Fpypi.org\u002Fproject\u002Ftensorflow\u002F) ≥ 2.5.1\n  - [Pillow](https:\u002F\u002Fpypi.org\u002Fproject\u002FPillow\u002F) ≥ 9.0.1\n  - [pycuda](https:\u002F\u002Fpypi.org\u002Fproject\u002Fpycuda\u002F) \u003C 2021.1\n  - [numpy](https:\u002F\u002Fpypi.org\u002Fproject\u002Fnumpy\u002F)\n  - [pytest](https:\u002F\u002Fpypi.org\u002Fproject\u002Fpytest\u002F)\n- 代码格式化工具（适用于贡献者）\n\n  - [Clang-format](https:\u002F\u002Fclang.llvm.org\u002Fdocs\u002FClangFormat.html)\n  - [Git-clang-format](https:\u002F\u002Fgithub.com\u002Fllvm-mirror\u002Fclang\u002Fblob\u002Fmaster\u002Ftools\u002Fclang-format\u002Fgit-clang-format)\n\n  > 注意：[onnx-tensorrt](https:\u002F\u002Fgithub.com\u002Fonnx\u002Fonnx-tensorrt)、[cub](http:\u002F\u002Fnvlabs.github.io\u002Fcub\u002F) 和 [protobuf](https:\u002F\u002Fgithub.com\u002Fprotocolbuffers\u002Fprotobuf.git) 等库会随 TensorRT OSS 一同下载，无需单独安装。\n\n## 下载 TensorRT 构建\n\n1. #### 下载 TensorRT 开源项目\n\n   ```bash\n   git clone -b main https:\u002F\u002Fgithub.com\u002Fnvidia\u002FTensorRT TensorRT\n   cd TensorRT\n   git submodule update --init --recursive\n   ```\n\n2. 
#### （可选——如果不使用 TensorRT 容器）指定 TensorRT GA 版本的构建路径\n\n   如果使用 TensorRT 开源项目的构建容器，TensorRT 库已预安装在 `\u002Fusr\u002Flib\u002Fx86_64-linux-gnu` 目录下，您可以跳过此步骤。\n\n   否则，请从 [NVIDIA 开发者专区](https:\u002F\u002Fdeveloper.nvidia.com) 下载并解压 TensorRT GA 版本的构建包，下载链接如下：\n\n   - [适用于 CUDA 13.2、Linux x86_64 的 TensorRT 10.16.0.72](https:\u002F\u002Fdeveloper.nvidia.com\u002Fdownloads\u002Fcompute\u002Fmachine-learning\u002Ftensorrt\u002F10.16.0\u002Ftars\u002FTensorRT-10.16.0.72.Linux.x86_64-gnu.cuda-13.2.tar.gz)\n   - [适用于 CUDA 12.9、Linux x86_64 的 TensorRT 10.16.0.72](https:\u002F\u002Fdeveloper.nvidia.com\u002Fdownloads\u002Fcompute\u002Fmachine-learning\u002Ftensorrt\u002F10.16.0\u002Ftars\u002FTensorRT-10.16.0.72.Linux.x86_64-gnu.cuda-12.9.tar.gz)\n   - [适用于 CUDA 13.2、Windows x86_64 的 TensorRT 10.16.0.72](https:\u002F\u002Fdeveloper.nvidia.com\u002Fdownloads\u002Fcompute\u002Fmachine-learning\u002Ftensorrt\u002F10.16.0\u002Fzip\u002FTensorRT-10.16.0.72.Windows.win10.cuda-13.2.zip)\n   - [适用于 CUDA 12.9、Windows x86_64 的 TensorRT 10.16.0.72](https:\u002F\u002Fdeveloper.nvidia.com\u002Fdownloads\u002Fcompute\u002Fmachine-learning\u002Ftensorrt\u002F10.16.0\u002Fzip\u002FTensorRT-10.16.0.72.Windows.win10.cuda-12.9.zip)\n\n   **示例：Ubuntu 22.04（x86_64），CUDA 13.2**\n\n   ```bash\n   cd ~\u002FDownloads\n   tar -xvzf TensorRT-10.16.0.72.Linux.x86_64-gnu.cuda-13.2.tar.gz\n   export TRT_LIBPATH=`pwd`\u002FTensorRT-10.16.0.72\u002Flib\n   ```\n\n   **示例：Windows（x86_64），CUDA 12.9**\n\n   ```powershell\n   Expand-Archive -Path TensorRT-10.16.0.72.Windows.win10.cuda-12.9.zip\n   $env:TRT_LIBPATH=\"$pwd\\TensorRT-10.16.0.72\\lib\"\n   ```\n\n## 设置构建环境\n\n对于 Linux 平台，我们建议按照以下说明生成一个用于构建 TensorRT 开源项目的 Docker 容器。对于原生构建，请先安装[先决条件](#prerequisites)中的“系统软件包”。\n\n1. #### 生成 TensorRT 开源项目的构建容器。\n\n   **示例：Ubuntu 24.04（x86_64），CUDA 13.2（默认）**\n\n   ```bash\n   .\u002Fdocker\u002Fbuild.sh --file docker\u002Fubuntu-24.04.Dockerfile --tag tensorrt-ubuntu24.04-cuda13.2\n   ```\n\n   **示例：Rockylinux8（x86_64），CUDA 13.2**\n\n   ```bash\n   .\u002Fdocker\u002Fbuild.sh --file docker\u002Frockylinux8.Dockerfile --tag tensorrt-rockylinux8-cuda13.2\n   ```\n\n   **示例：Ubuntu 24.04 交叉编译用于 Jetson（aarch64），CUDA 13.2（JetPack SDK）**\n\n   ```bash\n   .\u002Fdocker\u002Fbuild.sh --file docker\u002Fubuntu-cross-aarch64.Dockerfile --tag tensorrt-jetpack-cuda13.2\n   ```\n\n   **示例：Ubuntu 24.04（aarch64），CUDA 13.2**\n\n   ```bash\n   .\u002Fdocker\u002Fbuild.sh --file docker\u002Fubuntu-24.04-aarch64.Dockerfile --tag tensorrt-aarch64-ubuntu24.04-cuda13.2\n   ```\n\n2. #### 启动 TensorRT 开源项目的构建容器。\n   **示例：Ubuntu 24.04 构建容器**\n   ```bash\n   .\u002Fdocker\u002Flaunch.sh --tag tensorrt-ubuntu24.04-cuda13.2 --gpus all\n   ```\n   > 注意：\n   > \u003Cbr> 1. 请使用第 1 步中生成的构建容器对应的 `--tag`。\n   > \u003Cbr> 2. 要在构建容器内访问 GPU（运行 TensorRT 应用程序），需要安装 [NVIDIA Container Toolkit](#prerequisites)。\n   > \u003Cbr> 3. Ubuntu 构建容器的 `sudo` 密码为 'nvidia'。\n   > \u003Cbr> 4. 使用 `--jupyter \u003C端口>` 指定端口号以启动 Jupyter Notebook。\n   > \u003Cbr> 5. 需要对此文件夹具有写入权限，因为该文件夹将以 uid:gid 为 1000:1000 的方式挂载到 Docker 容器中。\n\n## 构建 TensorRT-OSS\n\n- 生成 Makefile 并构建\n\n  **示例：使用默认 CUDA 13.2 的 Linux (x86-64) 构建**\n\n  ```bash\n  cd $TRT_OSSPATH\n  mkdir -p build && cd build\n  cmake .. -DTRT_LIB_DIR=$TRT_LIBPATH -DTRT_OUT_DIR=`pwd`\u002Fout\n  make -j$(nproc)\n  ```\n\n  **示例：使用默认 CUDA 13.2 的 Linux (aarch64) 构建**\n\n  ```bash\n  cd $TRT_OSSPATH\n  mkdir -p build && cd build\n  cmake .. 
-DTRT_LIB_DIR=$TRT_LIBPATH -DTRT_OUT_DIR=`pwd`\u002Fout -DCMAKE_TOOLCHAIN_FILE=$TRT_OSSPATH\u002Fcmake\u002Ftoolchains\u002Fcmake_aarch64-native.toolchain\n  make -j$(nproc)\n  ```\n\n  **示例：在 Jetson Thor (aarch64) 上使用 CUDA 13.2 进行原生构建**\n\n  ```bash\n  cd $TRT_OSSPATH\n  mkdir -p build && cd build\n  cmake .. -DTRT_LIB_DIR=$TRT_LIBPATH -DTRT_OUT_DIR=`pwd`\u002Fout -DTRT_PLATFORM_ID=aarch64\n  CC=\u002Fusr\u002Fbin\u002Fgcc make -j$(nproc)\n  ```\n\n  > 注意：对于原生 aarch64 构建的 Protobuf，必须通过 `CC=` 显式指定 C 编译器。\n\n  **示例：在 Ubuntu 24.04 上针对 Jetson Thor (aarch64) 使用 CUDA 13.2（JetPack）进行交叉编译**\n\n  ```bash\n  cd $TRT_OSSPATH\n  mkdir -p build && cd build\n  cmake .. -DTRT_LIB_DIR=$TRT_LIBPATH -DCMAKE_TOOLCHAIN_FILE=$TRT_OSSPATH\u002Fcmake\u002Ftoolchains\u002Fcmake_aarch64_cross.toolchain\n  make -j$(nproc)\n  ```\n\n  **示例：在 Ubuntu 24.04 上针对 DriveOS (aarch64) 使用 CUDA 13.2 进行交叉编译**\n\n  ```bash\n  cd $TRT_OSSPATH\n  mkdir -p build && cd build\n  cmake .. -DTRT_LIB_DIR=$TRT_LIBPATH -DCMAKE_TOOLCHAIN_FILE=$TRT_OSSPATH\u002Fcmake\u002Ftoolchains\u002Fcmake_aarch64_dos_cross.toolchain\n  make -j$(nproc)\n  ```\n\n  **示例：在 Windows (x86) 上使用 CUDA 13.2 进行原生构建**\n\n  ```bash\n  cd $TRT_OSSPATH\n  New-Item -ItemType Directory -Path build\n  cd build\n  cmake .. -DTRT_LIB_DIR=\"$env:TRT_LIBPATH\" -DTRT_OUT_DIR=\"$pwd\\\\out\"\n  msbuild TensorRT.sln \u002Fproperty:Configuration=Release -m:$env:NUMBER_OF_PROCESSORS\n  ```\n\n  > 注意：CMake 默认使用的 CUDA 版本是 13.2。若需覆盖此设置，例如改为 12.9，可在 cmake 命令后追加 `-DCUDA_VERSION=12.9`。\n\n- 必需的 CMake 构建参数包括：\n  - `TRT_LIB_DIR`：包含库文件的 TensorRT 安装目录路径。\n  - `TRT_OUT_DIR`：用于存放生成的构建产物的输出目录。\n- 可选的 CMake 构建参数包括：\n  - `CMAKE_BUILD_TYPE`：指定生成的二进制文件是发布版还是调试版（包含调试符号）。可选值为 [`Release`] 或 `Debug`。\n  - `CUDA_VERSION`：目标 CUDA 版本，例如 [`12.9.9`]。\n  - `CUDNN_VERSION`：目标 cuDNN 版本，例如 [`8.9`]。\n  - `PROTOBUF_VERSION`：使用的 Protobuf 版本，例如 [`3.20.1`]。注意：更改此参数不会使 CMake 使用系统已安装的 Protobuf 版本，而是会配置 CMake 下载并尝试构建该版本。\n  - `CMAKE_TOOLCHAIN_FILE`：用于交叉编译的工具链文件路径。\n  - `BUILD_PARSERS`：指定是否构建解析器，例如 [`ON`] 或 `OFF`。若设置为 OFF，CMake 将尝试查找预编译的解析器库版本以用于编译示例。优先从 `${TRT_LIB_DIR}` 中查找，其次在系统中查找。如果构建类型为 Debug，则会优先使用调试版本的库，而非发布版本。\n  - `BUILD_PLUGINS`：指定是否构建插件，例如 [`ON`] 或 `OFF`。若设置为 OFF，CMake 将尝试查找预编译的插件库版本以用于编译示例。优先从 `${TRT_LIB_DIR}` 中查找，其次在系统中查找。如果构建类型为 Debug，则会优先使用调试版本的库，而非发布版本。\n  - `BUILD_SAMPLES`：指定是否构建示例，例如 [`ON`] 或 `OFF`。\n  - `BUILD_SAFE_SAMPLES`：指定是否构建安全示例，例如 [`ON`] 或 `OFF`。\n  - `TRT_SAFETY_INFERENCE_ONLY`：指定是否仅构建安全推理组件，例如 [`ON`] 或 `OFF`。若设置为 ON，则除 `BUILD_SAFE_SAMPLES` 外，其他所有组件将被关闭。\n  - `GPU_ARCHS`：目标 GPU（SM）架构。默认情况下，我们会为所有主要 SM 生成 CUDA 代码。此处可以指定具体的 SM 版本，以缩短编译时间和减小二进制文件大小。NVIDIA GPU 的计算能力表可在 [这里](https:\u002F\u002Fdeveloper.nvidia.com\u002Fcuda-gpus) 查阅。示例：- NVIDIA A100：`-DGPU_ARCHS=\"80\"`；- RTX 50 系列：`-DGPU_ARCHS=\"120\"`；- 多个 SM：`-DGPU_ARCHS=\"80 120\"`。\n  - `TRT_PLATFORM_ID`：裸机构建（不同于容器化的交叉编译）。当前支持的选项为 `x86_64`（默认）。\n  - `TRT_BUILD_ENABLE_MULTIDEVICE`：启用多设备示例（sampleDistCollective）。使用 `-DTRT_BUILD_ENABLE_MULTIDEVICE=ON` 来构建它；需要 [NCCL](https:\u002F\u002Fdeveloper.nvidia.com\u002Fnccl\u002Fnccl-download) ≥ v2.19，\u003C v3.0。\n\n## 构建 TensorRT DriveOS 示例\n\n- 生成 Makefile 并编译\n\n  **示例：为 DOS7 Linux（aarch64）交叉编译**\n\n  ```bash\n  cd $TRT_OSSPATH\n  mkdir -p build && cd build\n  cmake .. 
-DBUILD_SAMPLES=ON -DBUILD_PLUGINS=OFF -DBUILD_PARSERS=OFF -DTRT_OUT_DIR=`pwd`\u002Fbin_dynamic_cross -DTRT_LIB_DIR=$TRT_LIBPATH -DCMAKE_TOOLCHAIN_FILE=$TRT_OSSPATH\u002Fcmake\u002Ftoolchains\u002Fcmake_aarch64_dos_cross.toolchain\n  make -j$(nproc)\n  ```\n\n  **示例：为 DOS6.5 Linux（aarch64）交叉编译**\n\n  ```bash\n  cd $TRT_OSSPATH\n  mkdir -p build && cd build\n  cmake .. -DBUILD_SAMPLES=ON -DBUILD_PLUGINS=OFF -DBUILD_PARSERS=OFF -DTRT_OUT_DIR=`pwd`\u002Fbin_dynamic_cross -DTRT_LIB_DIR=$TRT_LIBPATH -DCMAKE_TOOLCHAIN_FILE=$TRT_OSSPATH\u002Fcmake\u002Ftoolchains\u002Fcmake_aarch64_dos_cross.toolchain -DCUDA_VERSION=11.4 -DGPU_ARCHS=87\n  make -j$(nproc)\n  ```\n\n  **示例：为 DOS6.5 和 DOS7 Linux（aarch64）进行原生构建**\n\n  ```bash\n  cd $TRT_OSSPATH\n  mkdir -p build && cd build\n  cmake .. -DTRT_LIB_DIR=$TRT_LIBPATH -DTRT_OUT_DIR=`pwd`\u002Fout -DCMAKE_TOOLCHAIN_FILE=$TRT_OSSPATH\u002Fcmake\u002Ftoolchains\u002Fcmake_aarch64-native.toolchain -DBUILD_SAMPLES=ON -DBUILD_PLUGINS=OFF -DBUILD_PARSERS=OFF\n  make -j$(nproc)\n  ```\n\n  **示例：为 DOS6.5 QNX（aarch64）交叉编译**\n\n  ```bash\n  cd $TRT_OSSPATH\n  mkdir -p build && cd build\n  export CUDA_VERSION=11.4\n  export CUDA=cuda-$CUDA_VERSION\n  export CUDA_ROOT=\u002Fusr\u002Flocal\u002Fcuda-safe-$CUDA_VERSION\n  export QNX_BASE=\u002Fdrive\u002Ftoolchains\u002Fqnx_toolchain  # 设置为您的 QNX 工具链安装路径\n  export QNX_HOST=$QNX_BASE\u002Fhost\u002Flinux\u002Fx86_64\u002F\n  export QNX_TARGET=$QNX_BASE\u002Ftarget\u002Fqnx7\u002F\n  export PATH=$PATH:$QNX_HOST\u002Fusr\u002Fbin\n  cmake .. -DBUILD_SAMPLES=ON -DBUILD_PLUGINS=OFF -DBUILD_PARSERS=OFF -DBUILD_SAFE_SAMPLES=OFF -DCMAKE_CUDA_COMPILER=$CUDA_ROOT\u002Fbin\u002Fnvcc -DTRT_OUT_DIR=`pwd`\u002Fbin_dynamic_cross -DTRT_LIB_DIR=$TRT_LIBPATH -DCMAKE_TOOLCHAIN_FILE=$TRT_OSSPATH\u002Fcmake\u002Ftoolchains\u002Fcmake_qnx.toolchain -DCUDA_VERSION=$CUDA_VERSION -DGPU_ARCHS=87\n  make -j$(nproc)\n  ```\n\n  > 注意：请将 `QNX_BASE` 设置为您 QNX 工具链的安装路径。\n  > 如果您的 CUDA 版本与示例不同，请设置 `CUDA_VERSION`（对于在多个地方使用该变量的示例）或在 cmake 命令中添加 `-DCUDA_VERSION=\u003C版本>`。\n\n  **示例：为 DOS6.5 QNX Safety（aarch64）交叉编译**\n\n  ```bash\n  cd $TRT_OSSPATH\n  mkdir -p build && cd build\n  export CUDA_VERSION=11.4\n  export QNX_BASE=\u002Fdrive\u002Ftoolchains\u002Fqnx_toolchain  # 设置为您的 QNX 工具链安装路径\n  export QNX_HOST=$QNX_BASE\u002Fhost\u002Flinux\u002Fx86_64\u002F\n  export QNX_TARGET=$QNX_BASE\u002Ftarget\u002Fqnx7\u002F\n  export PATH=$PATH:$QNX_HOST\u002Fusr\u002Fbin\n  export CUDA=cuda-$CUDA_VERSION\n  export CUDA_ROOT=\u002Fusr\u002Flocal\u002Fcuda-safe-$CUDA_VERSION\n  cmake .. 
-DBUILD_SAMPLES=OFF -DBUILD_SAFE_SAMPLES=ON -DBUILD_PLUGINS=OFF -DBUILD_PARSERS=OFF -DTRT_SAFETY_INFERENCE_ONLY=ON -DTRT_OUT_DIR=`pwd`\u002Fbin_dynamic_cross -DTRT_LIB_DIR=$TRT_LIBPATH -DCMAKE_TOOLCHAIN_FILE=$TRT_OSSPATH\u002Fcmake\u002Ftoolchains\u002Fcmake_qnx_safe.toolchain -DCUDA_VERSION=$CUDA_VERSION -DCMAKE_CUDA_COMPILER=$CUDA_ROOT\u002Fbin\u002Fnvcc -DGPU_ARCHS=87\n  make -j$(nproc)\n  ```\n\n  > 注意：请将 `QNX_BASE` 设置为您 QNX 工具链的安装路径。\n  > 如果您的 CUDA 版本与示例不同，请设置 `CUDA_VERSION`（对于在多个地方使用该变量的示例）或在 cmake 命令中添加 `-DCUDA_VERSION=\u003C版本>`。\n\n  **示例：为 DOS7 QNX（aarch64）交叉编译**\n\n  ```bash\n  cd $TRT_OSSPATH\n  mkdir -p build && cd build\n  export CUDA_VERSION=13.2\n  export CUDA=cuda-$CUDA_VERSION\n  export CUDA_ROOT=\u002Fusr\u002Flocal\u002Fcuda-safe-$CUDA_VERSION\n  export QNX_BASE=\u002Fdrive\u002Ftoolchains\u002Fqnx_toolchain  # 设置为您的 QNX 工具链安装路径\n  export QNX_HOST=$QNX_BASE\u002Fhost\u002Flinux\u002Fx86_64\u002F\n  export QNX_TARGET=$QNX_BASE\u002Ftarget\u002Fqnx\u002F\n  export PATH=$PATH:$QNX_HOST\u002Fusr\u002Fbin\n  cmake .. -DBUILD_SAMPLES=ON -DBUILD_PLUGINS=OFF -DBUILD_PARSERS=OFF -DBUILD_SAFE_SAMPLES=OFF -DCMAKE_CUDA_COMPILER=$CUDA_ROOT\u002Fbin\u002Fnvcc -DTRT_OUT_DIR=`pwd`\u002Fbin_dynamic_cross -DTRT_LIB_DIR=$TRT_LIBPATH -DCMAKE_TOOLCHAIN_FILE=$TRT_OSSPATH\u002Fcmake\u002Ftoolchains\u002Fcmake_qnx.toolchain -DCUDA_VERSION=$CUDA_VERSION -DGPU_ARCHS=110\n  make -j$(nproc)\n  ```\n\n  > 注意：请将 `QNX_BASE` 设置为您 QNX 工具链的安装路径。\n  > 如果您的 CUDA 版本与示例不同，请设置 `CUDA_VERSION`（对于在多个地方使用该变量的示例）或在 cmake 命令中添加 `-DCUDA_VERSION=\u003C版本>`。\n\n# 参考资料\n\n## TensorRT 资源\n\n- [TensorRT 开发者主页](https:\u002F\u002Fdeveloper.nvidia.com\u002Ftensorrt)\n- [TensorRT 快速入门指南](https:\u002F\u002Fdocs.nvidia.com\u002Fdeeplearning\u002Ftensorrt\u002Fquick-start-guide\u002Findex.html)\n- [TensorRT 开发者指南](https:\u002F\u002Fdocs.nvidia.com\u002Fdeeplearning\u002Ftensorrt\u002Fdeveloper-guide\u002Findex.html)\n- [TensorRT 示例支持指南](https:\u002F\u002Fdocs.nvidia.com\u002Fdeeplearning\u002Ftensorrt\u002Fsample-support-guide\u002Findex.html)\n- [TensorRT ONNX 工具](https:\u002F\u002Fdocs.nvidia.com\u002Fdeeplearning\u002Ftensorrt\u002Findex.html#tools)\n- [TensorRT 讨论论坛](https:\u002F\u002Fdevtalk.nvidia.com\u002Fdefault\u002Fboard\u002F304\u002Ftensorrt\u002F)\n- [TensorRT 发行说明](https:\u002F\u002Fdocs.nvidia.com\u002Fdeeplearning\u002Ftensorrt\u002Frelease-notes\u002Findex.html)\n\n## 已知问题\n\n- 请参阅 [TensorRT 发行说明](https:\u002F\u002Fdocs.nvidia.com\u002Fdeeplearning\u002Ftensorrt\u002Frelease-notes)","# TensorRT 快速上手指南\n\nTensorRT 是 NVIDIA 推出的高性能深度学习推理优化器和运行时引擎。本指南基于 TensorRT 开源软件（OSS）组件，帮助开发者快速完成环境搭建与基础使用。\n\n## 1. 环境准备\n\n在开始构建或安装前，请确保您的系统满足以下要求。\n\n### 系统要求\n- **操作系统**: Linux (Ubuntu 22.04\u002F24.04, Rocky Linux 8\u002F9) 或 Windows 10\u002F11 (x86_64)\n- **架构**: x86_64 或 aarch64 (Jetson\u002FThor)\n- **Python 版本**: >= 3.10, \u003C= 3.13.x (注意：Python 3.9 及更低版本的支持将在未来移除)\n\n### 前置依赖\n您需要安装以下核心软件包：\n\n- **CUDA Toolkit**: 推荐版本 `cuda-13.2.0` 或 `cuda-12.9.0`\n- **cuDNN** (可选): 版本 `8.9`\n- **构建工具**:\n  - GNU make >= v4.1\n  - cmake >= v3.31\n  - git, pkg-config, wget\n- **TensorRT GA 包**: 需预先下载对应版本的 TensorRT 二进制包（用于链接库文件），版本需匹配（如 `v10.16.0.72`）。\n\n> **提示**: 如果使用 Docker 容器化构建，只需宿主机安装 [NVIDIA Container Toolkit](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-docker) 和 Docker >= 19.03 即可，无需在宿主机安装上述所有依赖。\n\n## 2. 
安装步骤\n\n您可以选择直接安装 Python 包（推荐用于快速体验）或从源码构建（推荐用于开发插件或定制功能）。\n\n### 方案 A：直接安装 Python 包（最简单）\n\n如果您仅需在 Python 中使用 TensorRT 进行推理，可直接通过 pip 安装预编译包：\n\n```bash\npip install tensorrt\n```\n\n安装完成后即可跳过源码构建步骤，直接使用。\n\n### 方案 B：从源码构建 (Linux 示例)\n\n如需开发自定义插件或使用最新 OSS 特性，请按以下步骤构建：\n\n#### 第一步：获取源码\n```bash\ngit clone -b main https:\u002F\u002Fgithub.com\u002Fnvidia\u002FTensorRT TensorRT\ncd TensorRT\ngit submodule update --init --recursive\n```\n> **国内加速建议**: 如果克隆速度慢，可配置 Git 代理或使用国内镜像源（如 Gitee 镜像，若有）。\n\n#### 第二步：准备 TensorRT 二进制库\n下载并解压对应 CUDA 版本的 TensorRT GA 包（以 Ubuntu + CUDA 13.2 为例）：\n```bash\n# 假设已下载 tar.gz 包到 ~\u002FDownloads\ncd ~\u002FDownloads\ntar -xvzf TensorRT-10.16.0.72.Linux.x86_64-gnu.cuda-13.2.tar.gz\nexport TRT_LIBPATH=`pwd`\u002FTensorRT-10.16.0.72\u002Flib\n```\n\n#### 第三步：创建构建容器（推荐）\n使用官方提供的脚本生成 Docker 构建环境：\n```bash\n.\u002Fdocker\u002Fbuild.sh --file docker\u002Fubuntu-24.04.Dockerfile --tag tensorrt-ubuntu24.04-cuda13.2\n```\n\n启动容器：\n```bash\n.\u002Fdocker\u002Flaunch.sh --tag tensorrt-ubuntu24.04-cuda13.2 --gpus all\n```\n> 容器内默认用户密码为 `nvidia`。\n\n#### 第四步：编译构建\n在容器内执行编译命令：\n```bash\ncd $TRT_OSSPATH\nmkdir -p build && cd build\ncmake .. -DTRT_LIB_DIR=$TRT_LIBPATH -DTRT_OUT_DIR=`pwd`\u002Fout\nmake -j$(nproc)\n```\n编译产物将输出至 `build\u002Fout` 目录。\n\n## 3. 基本使用\n\n以下是一个最简单的 Python 使用示例，演示如何加载引擎并进行推理（适用于 TensorRT 10.x 的张量名称 API）。\n\n### 简单推理示例\n\n假设您已经拥有一个序列化好的 TensorRT 引擎文件 (`model.plan`)。\n\n```python\nimport tensorrt as trt\nimport pycuda.driver as cuda\nimport pycuda.autoinit\nimport numpy as np\n\n# 1. 初始化日志记录器\nlogger = trt.Logger(trt.Logger.WARNING)\n\n# 2. 反序列化引擎\nwith open(\"model.plan\", \"rb\") as f, trt.Runtime(logger) as runtime:\n    engine = runtime.deserialize_cuda_engine(f.read())\n\n# 3. 创建执行上下文\ncontext = engine.create_execution_context()\n\n# 4. 按张量名称分配输入输出内存 (简化示例，假设引擎只有一个输入和一个输出张量)\ninput_name = engine.get_tensor_name(0)\noutput_name = engine.get_tensor_name(1)\ninput_shape = (1, 3, 224, 224)\nh_input = np.random.randn(*input_shape).astype(np.float32)\nh_output = np.empty(tuple(engine.get_tensor_shape(output_name)), dtype=np.float32)\n\nd_input = cuda.mem_alloc(h_input.nbytes)\nd_output = cuda.mem_alloc(h_output.nbytes)\n\nstream = cuda.Stream()\ncuda.memcpy_htod_async(d_input, h_input, stream)\n\n# 5. 执行推理 (TensorRT 10 起需按张量名称绑定地址并调用 execute_async_v3)\ncontext.set_tensor_address(input_name, int(d_input))\ncontext.set_tensor_address(output_name, int(d_output))\ncontext.execute_async_v3(stream_handle=stream.handle)\ncuda.memcpy_dtoh_async(h_output, d_output, stream)\nstream.synchronize()\n\nprint(\"Inference result shape:\", h_output.shape)\n```\n\n### 关键迁移提示 (针对新版本)\n如果您是从旧版本迁移，请注意 TensorRT 11.0 (预计 2026 Q2) 的重大变更：\n- **强类型网络 (Strongly Typed Networks)** 将取代弱类型网络 API。\n- **显式量化 (Explicit Quantization)** 将取代隐式量化 API。\n- **IPluginV3** 将取代 IPluginV2 接口。\n- 建议使用 **Nsight Deep Learning Designer** 替代旧的 TREX 工具。",
"一家自动驾驶初创公司的算法团队正致力于将训练好的高精度目标检测模型部署到搭载 NVIDIA Orin 芯片的量产车辆上，以满足实时路况分析需求。\n\n### 没有 TensorRT 时\n- **推理延迟过高**：直接使用 PyTorch 或 TensorFlow 原生框架进行推理，单帧图像处理耗时超过 80 毫秒，无法满足自动驾驶系统要求的 30 FPS 实时响应标准。\n- **显存占用巨大**：未优化的模型在车载 GPU 上运行时显存占用极高，导致无法在同一块芯片上并行运行路径规划或语音交互等其他关键任务。\n- **算力浪费严重**：通用计算图包含大量冗余算子和低精度不必要的浮点运算，未能充分利用 NVIDIA GPU 特有的 Tensor Core 加速能力。\n- **部署成本高昂**：为了弥补软件效率的不足，团队被迫考虑升级更昂贵的硬件方案或增加车辆端的计算单元数量，大幅推高了 BOM 成本。\n\n### 使用 TensorRT 后\n- **极致低延迟**：TensorRT 通过层融合、内核自动调优及显存优化，将单帧推理时间压缩至 15 毫秒以内，轻松实现 60+ FPS 的流畅检测效果。\n- **资源利用率提升**：借助 INT8 量化技术，模型体积缩小 4 倍且显存占用大幅降低，使得单一 SoC 即可承载多模态感知任务，释放了宝贵的硬件资源。\n- **硬件性能满血释放**：TensorRT 针对特定 GPU 架构生成高度优化的推理引擎，完美调用 Tensor Core 进行混合精度计算，吞吐量相比原生框架提升 3-5 倍。\n- **落地成本显著下降**：凭借软件层面的极致优化，团队成功在现有硬件配置下达成性能指标，避免了额外的硬件迭代投入，加速了车型量产进程。\n\nTensorRT 通过将深度学习模型转化为针对特定硬件深度定制的高效推理引擎，彻底打通了从算法训练到边缘端实时落地的“最后一公里”。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FNVIDIA_TensorRT_144b5611.png","NVIDIA","NVIDIA Corporation","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FNVIDIA_7dcf6000.png","",null,"https:\u002F\u002Fnvidia.com","https:\u002F\u002Fgithub.com\u002FNVIDIA",[80,84,88,92,95,99,103,106,109,112],{"name":81,"color":82,"percentage":83},"C++","#f34b7d",98,{"name":85,"color":86,"percentage":87},"Python","#3572A5",1.4,{"name":89,"color":90,"percentage":91},"Cuda","#3A4E3A",0.3,{"name":93,"color":94,"percentage":91},"Jupyter Notebook","#DA5B0B",{"name":96,"color":97,"percentage":98},"CMake","#DA3434",0.1,{"name":100,"color":101,"percentage":102},"Dockerfile","#384d54",0,{"name":104,"color":105,"percentage":102},"Shell","#89e051",{"name":107,"color":108,"percentage":102},"C","#555555",{"name":110,"color":111,"percentage":102},"Makefile","#427819",{"name":113,"color":114,"percentage":102},"Batchfile","#C1F12E",12866,2339,"2026-04-06T16:39:21","Apache-2.0","Linux, Windows","必需 NVIDIA GPU。支持 CUDA 12.9 或 13.2。具体型号未说明，但需兼容对应 CUDA 版本。针对 Jetson (aarch64) 和 DriveOS 平台有特定构建配置。","未说明",{"notes":123,"python":124,"dependencies":125},"1. 编译源码前需先下载并安装对应 CUDA 版本的 TensorRT GA (通用发布版) 二进制包 (v10.16.0.72)。\n2. 强烈建议使用提供的 Docker 脚本在 Linux 上构建环境，需安装 NVIDIA Container Toolkit 以在容器内使用 GPU。\n3. 支持跨平台编译，包括为 Jetson (aarch64) 和 DriveOS 设备进行交叉编译。\n4. Windows 原生构建需使用 MSBuild。\n5. 
ONNX-TensorRT、cub 和 protobuf 会在构建时自动下载，无需手动安装。","3.10 - 3.13.x (Python 3.9 及更早版本的支持将在 TensorRT 10.16 起移除)",[126,127,128,129,130,131,132,133,134,135],"CUDA >= 12.9 或 13.2","cuDNN 8.9 (可选)","CMake >= 3.31","GNU Make >= 4.1","pip >= 19.0","git","pkg-config","wget","NCCL >= 2.19 (仅多设备构建时需要)","Docker >= 19.03 (容器化构建推荐)",[14],[138,139,140,141,142],"tensorrt","nvidia","deep-learning","inference","gpu-acceleration","2026-03-27T02:49:30.150509","2026-04-07T11:35:03.621087",[146,151,156,161,166,171],{"id":147,"question_zh":148,"answer_zh":149,"source_url":150},21903,"如何在将 PyTorch 模型转换为 ONNX 再转为 TensorRT 后添加 NMS（非极大值抑制）？","如果您使用的是动态形状（dynamic shapes），应使用 `BatchedNMSDynamic_TRT` 插件，该插件在 TensorRT 7.2 及以上版本中受支持。如果您使用的是不支持此插件的设备（如某些 NVIDIA Jetsons），可以将模型冻结为固定形状（fixed shape）来解决兼容性问题。此外，也可以参考社区项目（如 https:\u002F\u002Fgithub.com\u002Fttanzhiqiang\u002Fonnx_tensorrt_project）获取纯 Python 实现的示例。","https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FTensorRT\u002Fissues\u002F795",{"id":152,"question_zh":153,"answer_zh":154,"source_url":155},21904,"加载 ONNX 模型时遇到 'Resize scales must be an initializer' 错误如何解决？","该错误通常发生在 Resize 操作使用了非常量的缩放因子时。解决方法是使用 ONNX Simplifier 工具对模型进行简化处理。另外，如果您使用的是 TensorRT 7 (TRT7)，可以尝试构建并使用开源组件（OSS components）版本的 ONNX 解析器，因为上游解析器已修复了此问题。可以通过运行官方 Docker 容器并执行 build_OSS.sh 脚本来构建 OSS 组件。","https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FTensorRT\u002Fissues\u002F386",{"id":157,"question_zh":158,"answer_zh":159,"source_url":160},21905,"将 Mask RCNN 模型从 h5 转换为 UFF 格式时遇到 'Unsupported operation _AddV2' 错误怎么办？","此问题通常源于模型转换过程中的 TensorFlow 版本差异。尝试使用 TensorFlow 1.15-gpu 版本进行从 h5 到 UFF 的转换，该版本通常能自动将 `_AddV2` 操作转换为 TensorRT 支持的二进制操作，从而避免报错。","https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FTensorRT\u002Fissues\u002F125",{"id":162,"question_zh":163,"answer_zh":164,"source_url":165},21906,"在使用 tf2onnx 和 TensorRT 时遇到 'Could Not Parse Model' 错误或版本不匹配问题如何处理？","这类问题通常由 Docker 容器、TensorRT、CUDA 和 TensorFlow 之间的版本不兼容引起。例如，TensorFlow 2.0.0 可能需要 CUDA 10.0，而较新的 TensorRT Docker 镜像（如 20.11\u002F20.12）可能预装了更高版本的 CUDA。建议检查各组件的版本兼容性矩阵，必要时使用旧版 Docker 容器（如 19.02）或手动安装匹配的 CUDA\u002FcuDNN 版本。如果问题持续，请确认是否使用了正确的 TensorRT EA（早期访问）版本。","https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FTensorRT\u002Fissues\u002F964",{"id":167,"question_zh":168,"answer_zh":169,"source_url":170},21907,"如何创建一个 TensorRT 引擎以服务于多个输入源（如多路摄像头）而不混淆输出？","可以在初始化阶段反序列化创建一个共享的 TensorRT 引擎（engine），然后在每个线程中通过调用 `create_execution_context()`（或代码中的 `create_new_context`）为该线程创建独立的执行上下文（context）。确保每个线程使用自己独立的 context 进行推理，这样即使共享同一个 engine，也不会出现输出混淆的问题。注意需要正确管理 CUDA 上下文。","https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FTensorRT\u002Fissues\u002F1121",{"id":172,"question_zh":173,"answer_zh":174,"source_url":175},21908,"为什么 SDXL 模型的 INT8 量化版本推理速度比 FP16 版本慢很多？","虽然提供的 Issue 数据中该问题被截断且缺乏具体解决方案评论，但通常情况下，INT8 推理速度慢于 FP16 可能是因为：1. 硬件对该特定模型结构的 INT8 优化不足；2. 量化校准过程未正确执行导致回退到 FP16 计算；3. 
算子不支持 INT8 导致部分网络在 FP16 下运行增加了开销。建议检查 TensorRT 日志确认是否有算子回退（fallback），并确保使用了针对该模型架构优化的校准数据集和参数。","https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FTensorRT\u002Fissues\u002F3724",[177,182,187,192,197,202,207,212,217,222,227,232,237,242,247,252,257,262,267,272],{"id":178,"version":179,"summary_zh":180,"released_at":181},128855,"v10.0.0","Key Features and Updates:\r\n\r\n\r\n- Samples changes\r\n    - Added a [sample](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FTensorRT\u002Fblob\u002Frelease\u002F10.0\u002Fsamples\u002Fpython\u002Fsample_weight_stripping) showcasing weight-stripped engines.\r\n    - Added a [sample](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FTensorRT\u002Fblob\u002Frelease\u002F10.0\u002Fsamples\u002Fpython\u002Fpython_plugin\u002Fcirc_pad_plugin_multi_tactic.py) demonstrating the use of custom tactics with IPluginV3.\r\n    - Added a [sample](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FTensorRT\u002Fblob\u002Frelease\u002F10.0\u002Fsamples\u002FsampleNonZeroPlugin) to showcase plugins with data-dependent output shapes, using IPluginV3.\r\n- Parser changes\r\n    - Added a new class IParserRefitter that can be used to refit a TensorRT engine with the weights of an ONNX model.\r\n    - kNATIVE_INSTANCENORM is now set to ON by default.\r\n    - Added support for IPluginV3 interfaces from TensorRT.\r\n    - Added support for INT4 quantization.\r\n    - Added support for the reduction attribute in ScatterElements.\r\n    - Added support for wrap padding mode in Pad\r\n- Plugin changes\r\n    - A [new plugin](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FTensorRT\u002Fblob\u002Frelease\u002F10.0\u002Fplugin\u002FscatterElementsPlugin) has been added in compliance with [ONNX ScatterElements](https:\u002F\u002Fgithub.com\u002Fonnx\u002Fonnx\u002Fblob\u002Fmain\u002Fdocs\u002FOperators.md#ScatterElements).\r\n    - The TensorRT plugin library no longer has a load-time link dependency on cuBLAS or cuDNN libraries.\r\n    - All plugins which relied on cuBLAS\u002FcuDNN handles passed through IPluginV2Ext::attachToContext() have moved to use cuBLAS\u002FcuDNN resources initialized by the plugin library itself. This works by dynamically loading the required cuBLAS\u002FcuDNN library. Additionally, plugins which independently initialized their cuBLAS\u002FcuDNN resources have also moved to dynamically loading the required library. If the respective library is not discoverable through the library path(s), these plugins will not work.\r\n    - bertQKVToContextPlugin: Version 2 of this plugin now supports head sizes less than or equal to 32.\r\n    - reorgPlugin: Added a version 2 which implements IPluginV2DynamicExt.\r\n    - disentangledAttentionPlugin: Fixed a kernel bug.\r\n- Demo changes\r\n    - HuggingFace demos have been removed. For all users using TensorRT to accelerate Large Language Model inference, please use [TensorRT-LLM](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FTensorRT-LLM\u002F).\r\n- Updated tooling\r\n    - Polygraphy v0.49.9\r\n    - ONNX-GraphSurgeon v0.5.1\r\n    - TensorRT Engine Explorer v0.1.8\r\n- Build Containers\r\n    - RedHat\u002FCentOS 7.x are no longer officially supported starting with TensorRT 10.0. 
The corresponding container has been removed from TensorRT-OSS.","2024-04-03T21:45:30",{"id":183,"version":184,"summary_zh":185,"released_at":186},128836,"v10.16","更多信息，请参阅 [TensorRT 10.16 版本说明](https:\u002F\u002Fdocs.nvidia.com\u002Fdeeplearning\u002Ftensorrt\u002Flatest\u002Fgetting-started\u002Frelease-notes-10\u002F10.16.0.html)\n\n## 一般\n- 默认 CUDA 版本已更新至 CUDA 13.2。\n\n## 示例\n- 新增 sampleDistCollective 示例，用于演示 TensorRT 中的多设备执行。\n\n## 解析器\n- 添加了 kADJUST_FOR_DLA 标志，用于调整 ONNX 模型的解析行为，使其更适用于 DLA 硬件执行。\n- 增加了 DistCollective 算子支持，以实现 TensorRT 中的多设备执行。","2026-03-25T23:22:51",{"id":188,"version":189,"summary_zh":190,"released_at":191},128837,"v10.15","更多信息，请参阅 [TensorRT 10.15 发行说明](https:\u002F\u002Fdocs.nvidia.com\u002Fdeeplearning\u002Ftensorrt\u002Flatest\u002Fgetting-started\u002Frelease-notes-10\u002F10.15.1.html)：\n\n# 示例变更\n\n- 新增了两个安全示例：sampleSafeMNIST 和 sampleSafePluginV3，用于演示如何在安全工作流中使用 TensorRT。\n- 随安全工作流的发布一同新增了 trtSafeExec 工具。\n- 新增了 python\u002Fstream_writer 示例，展示如何使用 IStreamWriter 接口将 TensorRT 引擎直接序列化到自定义流中，而非写入文件或连续内存缓冲区。\n- 新增了 python\u002Fstrongly_type_autocast 示例，演示如何使用 ModelOpt 的 AutoCast 工具将 FP32 ONNX 模型转换为混合精度（FP32-FP16），并随后以 TensorRT 的强类型模式构建引擎。\n- 新增了 sampleCudla 示例，演示如何利用 cuDLA API 在深度学习加速器（DLA）硬件上运行 TensorRT 引擎，该硬件适用于 NVIDIA Jetson 和 DRIVE 平台。\n- 废弃了 sampleCharRNN 示例。\n\n# 插件变更\n- 废弃了 bertQKVToContextPlugin，并计划在未来的版本中将其移除。暂无替代方案。\n\n# 解析器变更\n\n- 增加了对 RotaryEmbedding、RMSNormalization 和 TensorScatter 的支持，以更好地支持 LLM 模型。\n- 为通过 TensorRT ModelOptimizer 量化后的模型添加了更多专用的量化运算符。\n- 添加了 kREPORT_CAPABILITY_DLA 标志，以便在通过 TensorRT 构建 DLA 引擎时进行逐节点验证。\n- 添加了 kENABLE_PLUGIN_OVERRIDE 标志，以允许对与用户插件同名的节点启用 TensorRT 插件覆盖。\n- 改进了包含多个子图（如 Loop 或 Scan 节点）的模型的错误报告功能。\n\n# 示例程序变更\n\n- demoDiffusion：Stable Diffusion 1.5、2.0 和 2.1 的流水线已被废弃并移除。","2026-02-03T22:22:41",{"id":193,"version":194,"summary_zh":195,"released_at":196},128838,"v10.14","## 10.14 GA - 2025年11月7日\n\n- 示例变更\n  - 将所有 PyCUDA 的用法替换为 cuda-python API\n  - 移除了 EfficientNet 示例\n  - 废弃了 tensorflow_object_detection 和 efficientdet 示例\n  - 示例将不再随软件包发布。TensorRT 的 GitHub 仓库将成为唯一的来源。\n\n\n- 解析器：\n  - 增加了对 `Attention` 算子的支持\n  - 改进了 `ConstantOfShape` 节点的重新校准功能\n\n- 演示\n  - demoDiffusion：\n    - 增加了对 Cosmos-Predict2 文本生成图像和视频生成世界管道的支持","2025-11-08T00:50:44",{"id":198,"version":199,"summary_zh":200,"released_at":201},128839,"v10.13.3","更多信息请参阅 [TensorRT 10.13.3 发行说明](https:\u002F\u002Fdocs.nvidia.com\u002Fdeeplearning\u002Ftensorrt\u002Flatest\u002Fgetting-started\u002Frelease-notes.html)。\n\n* 新增对 TensorRT API 捕获与回放功能的支持，更多信息请参阅 [开发者指南](https:\u002F\u002Fdocs.nvidia.com\u002Fdeeplearning\u002Ftensorrt\u002Flatest\u002Finference-library\u002Fadvanced.html)。\n\n演示变更\n* 新增对 Flux Kontext 流水线的支持。","2025-09-09T00:16:35",{"id":203,"version":204,"summary_zh":205,"released_at":206},128840,"v10.13.2","## 10.13.2 GA - 2025年8月18日\n\n更多信息，请参阅 [10.13.2 版本说明](https:\u002F\u002Fdocs.nvidia.com\u002Fdeeplearning\u002Ftensorrt\u002F10.13.2\u002Fgetting-started\u002Frelease-notes.html)。\n\n- 新增对 CUDA 13.0 的支持，不再支持 CUDA 11.X\n- 不再支持 Ubuntu 20.04\n- 样例和演示程序不再支持 Python 3.10 以下版本\n\n","2025-08-19T16:44:08",{"id":208,"version":209,"summary_zh":210,"released_at":211},128841,"v10.13.0","\r\n## 10.13.0 GA - 2025年7月24日\r\n- 插件变更\r\n  - 修复了在省略偏置时 geluPlugin 中出现的除零错误。\r\n  - 完成了从标准插件中使用静态插件字段\u002F属性成员变量的过渡。目前无需再使用此类变量，因为 TRT 在插件创建者被销毁（从插件注册表中注销）后不会访问字段信息，也不会在没有创建者实例的情况下访问这些信息。\r\n- 示例变更\r\n  - 因 YOLO 权重的 URL 不稳定，已弃用 `yolov3_onnx` 示例。\r\n  - 更新了 `1_run_onnx_with_tensorrt` 和 `2_construct_network_with_layer_apis` 示例，改用 `cuda-python` 替代 `PyCUDA`，以获得对最新 GPU\u002FCUDA 的支持。\r\n- 
解析器变更\r\n  - 降低了导入包含外部权重模型时的内存占用。\r\n  - 为 IParser 添加了 `loadModelProto`、`loadInitializer` 和 `parseModelProto` API。这些 API 用于在解析 ONNX 模型时加载用户自定义的初始化器。\r\n  - 为 IParserRefitter 添加了 `loadModelProto`、`loadInitializer` 和 `refitModelProto` API。这些 API 用于在重新拟合 ONNX 模型时加载用户自定义的初始化器。\r\n  - 已弃用 `IParser::parseWithWeightDescriptors`。","2025-07-24T22:00:51",{"id":213,"version":214,"summary_zh":215,"released_at":216},128842,"v10.12.0","## 10.12.0 GA - 2025年6月10日\r\n主要特性与更新：\n- 插件变更\n  - 将 `cropAndResizeDynamic` 的版本 1（继承自 `IPluginV2`）迁移至版本 2，该版本实现了 `IPluginV3`。\n  - 注意：较新版本保留了对应旧版插件的属性和输入输出。旧版插件已被弃用，将在未来的版本中移除。\n  - 下列插件的指定版本已被弃用：\n    - `DecodeBbox3DPlugin`（版本 1）\n    - `DetectionLayer_TRT`（版本 1）\n    - `EfficientNMS_TRT`（版本 1）\n    - `FlattenConcat_TRT`（版本 1）\n    - `GenerateDetection_TRT`（版本 1）\n    - `GridAnchor_TRT`（版本 1）\n    - `GroupNormalizationPlugin`（版本 1）\n    - `InstanceNormalization_TRT`（版本 2）\n    - `ModulatedDeformConv2d`（版本 1）\n    - `MultilevelCropAndResize_TRT`（版本 1）\n    - `MultilevelProposeROI_TRT`（版本 1）\n    - `RPROI_TRT`（版本 1）\n    - `PillarScatterPlugin`（版本 1）\n    - `PriorBox_TRT`（版本 1）\n    - `ProposalLayer_TRT`（版本 1）\n    - `ProposalDynamic`（版本 1）\n    - `Region_TRT`（版本 1）\n    - `Reorg_TRT`（版本 2）\n    - `ResizeNearest_TRT`（版本 1）\n    - `ScatterND`（版本 1）\n    - `VoxelGeneratorPlugin`（版本 1）\n- 演示变更\n  - 为 Stable Diffusion v3.5-large ControlNet 模型新增了 [图像到图像](demo\u002FDiffusion#generate-an-image-with-stable-diffusion-v35-large-with-controlnet-guided-by-an-image-and-a-text-prompt) 支持。\n  - 启用了 Stable Diffusion v3.5-large 流水线的 [预导出 ONNX 模型](https:\u002F\u002Fhuggingface.co\u002Fstabilityai\u002Fstable-diffusion-3.5-large-tensorrt) 下载功能。\n- 示例变更\n  - 新增了两个重构后的 Python 示例：[1_run_onnx_with_tensorrt](samples\u002Fpython\u002Frefactored\u002F1_run_onnx_with_tensorrt) 和 [2_construct_network_with_layer_apis](samples\u002Fpython\u002Frefactored\u002F2_construct_network_with_layer_apis)。\n- 解析器变更\n  - 为 `Pow` 运算增加了对整数类型基张量的支持。\n  - 增加了对自定义 `MXFP8` 量化运算的支持。\n  - 在 `Einsum` 运算中增加了对省略号、对角线及广播操作的支持。","2025-06-18T21:41:29",{"id":218,"version":219,"summary_zh":220,"released_at":221},128843,"v10.11","## 10.11.0 GA - 2025年5月21日\n\n主要特性与更新：\n\n- 插件变更\n  - 将 `modulatedDeformConvPlugin` 的版本 1（继承自 `IPluginV2`）迁移至版本 2，该版本实现了 `IPluginV3`。\n  - 将 `DisentangledAttention_TRT` 的版本 1（继承自 `IPluginV2`）迁移至版本 2，该版本实现了 `IPluginV3`。\n  - 将 `MultiscaleDeformableAttnPlugin_TRT` 的版本 1（继承自 `IPluginV2`）迁移至版本 2，该版本实现了 `IPluginV3`。\n  - 注意：较新版本保留了对应旧版插件的属性和输入输出。旧版插件已被弃用，将在未来的版本中移除。\n- 示例程序变更\n  - demoDiffusion\n    - 新增对 Stable Diffusion 3.5-medium 和 3.5-large 流水线在 BF16 和 FP16 精度下的支持。\n- 解析器变更\n  - 添加了 `kENABLE_UINT8_AND_ASYMMETRIC_QUANTIZATION_DLA` 解析器标志，以在目标为 DLA 的引擎上启用 UINT8 非对称量化。\n  - 移除了对 `RandomNormalLike` 和 `RandomUniformLike` 输入必须为张量的限制。\n  - 明确了 `Loop` 节点扫描输出的限制。","2025-05-21T22:59:38",{"id":223,"version":224,"summary_zh":225,"released_at":226},128844,"v10.10.0","# 10.10.0 GA\n更多信息，请参阅 [TensorRT 10.10.0 发行说明](https:\u002F\u002Fdocs.nvidia.com\u002Fdeeplearning\u002Ftensorrt\u002Flatest\u002Fgetting-started\u002Frelease-notes.html#tensorrt-10-10-0)。\n\n主要特性与更新：\n\n- 示例变更\n  - demoDiffusion\n    - 为 demoDiffusion 的 SDXL 和 FLUX 流水线新增了 fp16 和 fp8 LoRA 支持。\n    - 为 demoDiffusion 的 SDXL 流水线新增了 fp16 ControlNet 支持。\n- 插件变更\n  - 废弃了枚举类 [PluginVersion](https:\u002F\u002Fdocs.nvidia.com\u002Fdeeplearning\u002Ftensorrt\u002Flatest\u002F_static\u002Fc-api\u002Fnamespacenvinfer1.html#a6fb3932a2896d82a94c8783e640afb34) 和 
[PluginCreatorVersion](https:\u002F\u002Fdocs.nvidia.com\u002Fdeeplearning\u002Ftensorrt\u002Flatest\u002F_static\u002Fc-api\u002Fnamespacenvinfer1.html#a43c4159a19c23f74234f3c34124ea0c5)。PluginVersion 和 PluginCreatorVersion 仅用于与 IPluginV2 派生的插件接口相关，而这些接口均已废弃。\n  - 新增以下 API，允许用户以层次化方式获取注册到 TensorRT 插件注册表中所有插件创建者的列表（[C++](https:\u002F\u002Fdocs.nvidia.com\u002Fdeeplearning\u002Ftensorrt\u002Flatest\u002F_static\u002Fc-api\u002Fclassnvinfer1_1_1_i_plugin_registry.html)、[Python](https:\u002F\u002Fdocs.nvidia.com\u002Fdeeplearning\u002Ftensorrt\u002Flatest\u002F_static\u002Fpython-api\u002Finfer\u002FPlugin\u002FIPluginRegistry.html)）：\n    - C++ API：IPluginRegistry::getAllCreatorsRecursive()\n    - Python API：IPluginRegistry.all_creators_recursive\n- 解析器变更\n  - 清理了当 ONNX 网络同时包含插件和本地函数时的日志冗余信息。\n  - UINT8 常量现在能够正确导入到 QuantizeLinear 和 DequantizeLinear 节点中。\n  - 插件回退导入器现在也会从节点的 domain 字段中读取其命名空间。\n- 样例变更\n  - 为 [python_plugin 样例](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FTensorRT\u002Ftree\u002Frelease\u002F10.9\u002Fsamples\u002Fpython\u002Fpython_plugin) 添加了编译目标 Blackwell 的支持。","2025-05-09T22:27:41",{"id":228,"version":229,"summary_zh":230,"released_at":231},128845,"v10.9.0","# 10.9.0 GA\n更多信息，请参阅 [TensorRT 10.9.0 发行说明](https:\u002F\u002Fdocs.nvidia.com\u002Fdeeplearning\u002Ftensorrt\u002Flatest\u002Fgetting-started\u002Frelease-notes.html#tensorrt-10-9-0)。\n\n主要特性与更新：\n\n- 示例变更\n  - demoDiffusion\n    - 为 SDXL 流水线添加了 Canny ControlNet 支持\n- 插件变更\n  - 为 GroupNormalization 插件 (`GroupNormalizationPlugin`) 添加了自述文件 - [4314](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FTensorRT\u002Fissues\u002F4314)\n  - 修复了 `CustomQKVToContextPluginDynamic` 版本 3 中的一个 bug，该版本未将 SM 100 视为受支持的平台。\n- 解析器变更\n  - 添加了对 Python AOT 插件的支持\n  - 添加了对 opset 21 GroupNorm 的支持 - [4336](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FTensorRT\u002Fissues\u002F4336)\n  - 修复了对 opset 18 及以上版本 ScatterND 的支持问题\n- 示例变更\n  - 新增了一个示例 `dds_faster_rcnn`，演示如何使用 `IOutputAllocator` 处理依赖于数据的形状输出。\n- 已修复的问题：\n  - 修复了 streamReaderV2 Python API 的性能问题 - [4327](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FTensorRT\u002Fissues\u002F4327)","2025-03-11T22:00:06",{"id":233,"version":234,"summary_zh":235,"released_at":236},128846,"v10.8.0","# 10.8.0 GA\r\nFor more information, see the [TensorRT 10.8.0 release notes](https:\u002F\u002Fdocs.nvidia.com\u002Fdeeplearning\u002Ftensorrt\u002Frelease-notes\u002Findex.html#rel-10-8-0).\r\n\r\nKey Features and Updates:\r\n\r\n- Demo changes\r\n  - demoDiffusion\r\n    - Added [Image-to-Image](demo\u002FDiffusion#generate-an-image-guided-by-an-initial-image-and-a-text-prompt-using-flux) support for Flux-1.dev and Flux.1-schnell pipelines.\r\n    - Added [ControlNet](demo\u002FDiffusion#generate-an-image-guided-by-a-text-prompt-and-a-control-image-using-flux-controlnet) support for [FLUX.1-Canny-dev](https:\u002F\u002Fhuggingface.co\u002Fblack-forest-labs\u002FFLUX.1-Canny-dev) and [FLUX.1-Depth-dev](https:\u002F\u002Fhuggingface.co\u002Fblack-forest-labs\u002FFLUX.1-Depth-dev) pipelines. Native FP8 quantization is also supported for these pipelines.\r\n    - Added support for ONNX model export only mode. 
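Tangential to the 10.10.0 item above that adds IPluginRegistry::getAllCreatorsRecursive() / IPluginRegistry.all_creators_recursive: a short Python sketch of walking the registry with that property. The attribute names on each creator are the usual IPluginCreator fields and may need adjusting for a given TensorRT version.

```python
# Sketch: enumerate every plugin creator known to the TensorRT plugin registry,
# using the all_creators_recursive property cited in the 10.10.0 notes.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
trt.init_libnvinfer_plugins(logger, "")           # register the standard TensorRT plugins
registry = trt.get_plugin_registry()

for creator in registry.all_creators_recursive:   # hierarchical (recursive) listing
    namespace = creator.plugin_namespace or "<default>"
    print(f"{namespace}::{creator.name} (version {creator.plugin_version})")
```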
See [--onnx-export-only](demo\u002FDiffusion#use-separate-directories-for-individual-onnx-models).\r\n    - Added FP16, BF16, FP8, and FP4 support for all Flux Pipelines.\r\n- Plugin changes\r\n  - Added SM 100 and SM 120 support to bertQKVToContextPlugin. This enables demo\u002FBERT on Blackwell GPUs.\r\n- Sample changes\r\n  - Added a new `sampleEditableTimingCache` to demonstrate how to build an engine with the desired tactics by modifying the timing cache.\r\n  - Deleted the `sampleAlgorithmSelector` sample.\r\n  - Fixed `sampleOnnxMNIST` by updating the correct INT8 dynamic range.\r\n- Parser changes\r\n  - Added support for `FLOAT4E2M1` types for quantized networks.\r\n  - Added support for dynamic axes and improved performance of `CumSum` operations.\r\n  - Fixed the import of local functions when their input tensor names aliased one from an outside scope.\r\n  - Added support for `Pow` ops with integer-typed exponent values.\r\n- Fixed issues\r\n  - Fixed segmentation of boolean constant nodes - [4224](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FTensorRT\u002Fissues\u002F4224).\r\n  - Fixed accuracy issue when multiple optimization profiles were defined [4250](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FTensorRT\u002Fissues\u002F4250).","2025-02-01T01:09:15",{"id":238,"version":239,"summary_zh":240,"released_at":241},128847,"v10.7.0","# 10.7.0 GA\r\nFor more information, see the [TensorRT 10.7.0 release notes](https:\u002F\u002Fdocs.nvidia.com\u002Fdeeplearning\u002Ftensorrt\u002Frelease-notes\u002Findex.html#rel-10-7-0).\r\n\r\nKey Features and Updates:\r\n\r\n- Demo Changes\r\n  - demoDiffusion\r\n    - Enabled low-vram for the Flux pipeline. Users can now run the pipelines on systems with 32GB VRAM.\r\n    - Added support for [FLUX.1-schnell](https:\u002F\u002Fhuggingface.co\u002Fblack-forest-labs\u002FFLUX.1-schnell) pipeline.\r\n    - Enabled weight streaming mode for Flux pipeline.\r\n\r\n- Plugin Changes\r\n  - On Blackwell and later platforms, TensorRT will drop cuDNN support on the following categories of plugins\r\n    - User-written `IPluginV2Ext`, `IPluginV2DynamicExt`, and `IPluginV2IOExt` plugins that are dependent on cuDNN handles provided by TensorRT (via the `attachToContext()` API).\r\n    - TensorRT standard plugins that use cuDNN, specifically:\r\n      - `InstanceNormalization_TRT` (version: 1, 2, and 3) present in `plugin\u002FinstanceNormalizationPlugin\u002F`.\r\n      - `GroupNormalizationPlugin` (version: 1) present in `plugin\u002FgroupNormalizationPlugin\u002F`.\r\n      - Note: These normalization plugins are superseded by TensorRT’s native `INormalizationLayer` ([C++](https:\u002F\u002Fdocs.nvidia.com\u002Fdeeplearning\u002Ftensorrt\u002Fapi\u002Fc_api\u002Fclassnvinfer1_1_1_i_normalization_layer.html), [Python](https:\u002F\u002Fdocs.nvidia.com\u002Fdeeplearning\u002Ftensorrt\u002Foperators\u002Fdocs\u002FNormalization.html)). 
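Relating to `sampleEditableTimingCache` in the 10.8.0 notes above: even without editing tactics, persisting a timing cache and feeding it back into later builds is the usual way to keep tactic selection (and build time) stable. A rough sketch under the assumption that the builder, network, and config were created as in the earlier example; the cache path is a placeholder.

```python
# Sketch: reuse a serialized timing cache across engine builds.
import os
import tensorrt as trt

def build_with_timing_cache(builder, network, config, cache_path="timing.cache"):
    blob = b""
    if os.path.exists(cache_path):
        with open(cache_path, "rb") as f:
            blob = f.read()
    cache = config.create_timing_cache(blob)      # empty blob -> start a fresh cache
    config.set_timing_cache(cache, False)         # False: reject mismatched caches

    plan = builder.build_serialized_network(network, config)

    with open(cache_path, "wb") as f:             # persist tactics measured in this build
        f.write(config.get_timing_cache().serialize())
    return plan
```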
TensorRT support for cuDNN-dependent plugins remain unchanged on pre-Blackwell platforms.\r\n\r\n- Parser Changes\r\n  - Now prioritizes using plugins over local functions when a corresponding plugin is available in the registry.\r\n  - Added dynamic axes support for `Squeeze` and `Unsqueeze` operations.\r\n  - Added support for parsing mixed-precision `BatchNormalization` nodes in strongly-typed mode.\r\n\r\n- Addressed Issues\r\n  - Fixed [4113](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FTensorRT\u002Fissues\u002F4113).","2024-12-05T21:16:44",{"id":243,"version":244,"summary_zh":245,"released_at":246},128848,"v10.6.0","## 10.6.0 GA \r\nFor more information, see the [TensorRT 10.6.0 release notes](https:\u002F\u002Fdocs.nvidia.com\u002Fdeeplearning\u002Ftensorrt\u002Frelease-notes\u002Findex.html#rel-10-6-0).\r\n\r\nKey Feature and Updates:\r\n- Demo Changes\r\n  - demoBERT: The use of `fcPlugin` in demoBERT has been removed.\r\n  - demoBERT: All TensorRT plugins now used in demoBERT (`CustomEmbLayerNormDynamic`, `CustomSkipLayerNormDynamic`, and `CustomQKVToContextDynamic`) now have versions that inherit from IPluginV3 interface classes. The user can opt-in to use these V3 plugins by specifying `--use-v3-plugins` to the builder scripts.\r\n    - Opting-in to use V3 plugins does not affect performance, I\u002FO, or plugin attributes. \r\n    - There is a known issue in the V3 (version 4) of `CustomQKVToContextDynamic` plugin from TensorRT 10.6.0, causing an internal assertion error if either the batch or sequence dimensions differ at runtime from the ones used to serialize the engine. See the “known issues” section of the [TensorRT-10.6.0 release notes](https:\u002F\u002Fdocs.nvidia.com\u002Fdeeplearning\u002Ftensorrt\u002Frelease-notes\u002Findex.html#rel-10-6-0).\r\n    - For smoother migration, the default behavior is still using the  deprecated `IPluginV2DynamicExt`-derived plugins, when the flag: `--use-v3-plugins` isn't specified in the builder scripts. The flag `--use-deprecated-plugins` was added as an explicit way to enforce the default behavior, and is mutually exclusive with `--use-v3-plugins`.\r\n  - demoDiffusion\r\n    - Introduced BF16 and FP8 support for the [Flux.1-dev](demo\u002FDiffusion#generate-an-image-guided-by-a-text-prompt-using-flux) pipeline.\r\n    - Expanded FP8 support on Ada platforms.\r\n    - Enabled LoRA adapter compatibility for SDv1.5, SDv2.1, and SDXL pipelines using Diffusers version 0.30.3.\r\n\r\n- Sample Changes\r\n  - Added the Python sample [quickly_deployable_plugins](samples\u002Fpython\u002Fquickly_deployable_plugins), which demonstrates quickly deployable Python-based plugin definitions (QDPs) in TensorRT. QDPs are a simple and intuitive decorator-based approach to defining TensorRT plugins, requiring drastically less code.\r\n\r\n- Plugin Changes\r\n  - The `fcPlugin` has been deprecated. 
Its functionality has been superseded by the [IMatrixMultiplyLayer](https:\u002F\u002Fdocs.nvidia.com\u002Fdeeplearning\u002Ftensorrt\u002Fapi\u002Fc_api\u002Fclassnvinfer1_1_1_i_matrix_multiply_layer.html) that is natively provided by TensorRT.\r\n  - Migrated `IPluginV2`-descendent version 1 of `CustomEmbLayerNormDynamic`, to version 6, which implements `IPluginV3`.\r\n    - The newer versions preserve the attributes and I\u002FO of the corresponding older plugin version.\r\n    - The older plugin versions are deprecated and will be removed in a future release.\r\n\r\n- Parser Changes\r\n  - Updated ONNX submodule version to 1.17.0.\r\n  - Fixed issue where conditional layers were incorrectly being added.\r\n  - Updated local function metadata to contain more information.\r\n  - Added support for parsing nodes with Quickly Deployable Plugins.\r\n  - Fixed handling of optional outputs.\r\n\r\n- Tool Updates\r\n  - ONNX-Graphsurgeon updated to version 0.5.3\r\n  - Polygraphy updated to 0.49.14.","2024-11-05T21:55:05",{"id":248,"version":249,"summary_zh":250,"released_at":251},128849,"v10.5.0","## Release 10.5-GA\r\nKey Features and Updates:\r\n\r\n- Demo changes\r\n    - Added [Flux.1-dev](demo\u002FDiffusion) pipeline\r\n- Sample changes\r\n    - None\r\n- Plugin changes\r\n    - Migrated `IPluginV2`-descendent versions of `bertQKVToContextPlugin` (1, 2, 3) to newer versions (4, 5, 6 respectively) which implement `IPluginV3`.\r\n    - Note:\r\n        - The newer versions preserve the attributes and I\u002FO of the corresponding older plugin version\r\n        - The older plugin versions are deprecated and will be removed in a future release\r\n- Quickstart guide\r\n    - None\r\n- Parser changes\r\n    - Added support for real-valued `STFT` operations\r\n    - Improved error handling in `IParser`\r\n\r\nKnown issues:\r\n\r\n- Demos:\r\n    - TensorRT engine might not be build successfully when using `--fp8` flag on H100 GPUs.","2024-10-10T19:47:48",{"id":253,"version":254,"summary_zh":255,"released_at":256},128850,"v10.4.0","## 10.4.0 GA - 2024-09-11\r\nKey Features and Updates:\r\n\r\n- Demo changes\r\n    - Added [Stable Cascade](demo\u002FDiffusion) pipeline.\r\n    - Enabled INT8 and FP8 quantization for Stable Diffusion v1.5, v2.0 and v2.1 pipelines.\r\n    - Enabled FP8 quantization for Stable Diffusion XL pipeline.\r\n- Sample changes\r\n    - Add a new python sample `aliased_io_plugin` which demonstrates how in-place updates to plugin inputs can be achieved through I\u002FO aliasing.\r\n- Plugin changes\r\n    - Migrated IPluginV2-descendent versions (a) of the following plugins to newer versions (b) which implement IPluginV3 (a->b):\r\n        - scatterElementsPlugin (1->2)\r\n        - skipLayerNormPlugin (1->5, 2->6, 3->7, 4->8)\r\n        - embLayerNormPlugin (2->4, 3->5)\r\n        - bertQKVToContextPlugin (1->4, 2->5, 3->6)\r\n    - Note\r\n        - The newer versions preserve the corresponding attributes and I\u002FO of the corresponding older plugin version.\r\n        - The older plugin versions are deprecated and will be removed in a future release.\r\n\r\n- Quickstart guide\r\n    - Updated deploy_to_triton guide and removed legacy APIs.\r\n    - Removed legacy TF-TRT code as the project is no longer supported.\r\n    - Removed quantization_tutorial as pytorch_quantization has been deprecated. Check out https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FTensorRT-Model-Optimizer for the latest quantization support. 
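The 10.6.0 notes above point to the native IMatrixMultiplyLayer as the replacement for the deprecated fcPlugin. A minimal sketch of expressing a fully connected layer that way with the Python network-definition API; the shapes and weight values are illustrative only.

```python
# Sketch: a fully connected (GEMM) layer built from native TensorRT layers
# instead of the deprecated fcPlugin.
import numpy as np
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(0)

x = network.add_input("x", trt.float32, (-1, 512))        # [batch, in_features]
w = np.zeros((512, 256), dtype=np.float32)                # [in_features, out_features] placeholder
w_const = network.add_constant(w.shape, trt.Weights(w))

fc = network.add_matrix_multiply(
    x, trt.MatrixOperation.NONE,
    w_const.get_output(0), trt.MatrixOperation.NONE,
)
network.mark_output(fc.get_output(0))                     # add bias/activation layers as needed
```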
Check [Stable Diffusion XL (Base\u002FTurbo) and Stable Diffusion 1.5 Quantization with Model Optimizer](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FTensorRT-Model-Optimizer\u002Ftree\u002Fmain\u002Fdiffusers\u002Fquantization) for integration with TensorRT.\r\n- Parser changes\r\n    - Added support for tensor `axes` for `Pad` operations.\r\n    - Added support for `BlackmanWindow`, `HammingWindow`, and `HannWindow` operations.\r\n    - Improved error handling in `IParserRefitter`.\r\n    - Fixed kernel shape inference in multi-input convolutions.\r\n\r\n- Updated tooling\r\n    - polygraphy-extension-trtexec v0.0.9","2024-09-12T00:59:43",{"id":258,"version":259,"summary_zh":260,"released_at":261},128851,"v10.3.0","## 10.3.0 GA\r\n\r\nKey Features and Updates:\r\n\r\n - Demo changes\r\n   - Added [Stable Video Diffusion](demo\u002FDiffusion)(`SVD`) pipeline.\r\n - Plugin changes\r\n   - Deprecated Version 1 of [ScatterElements plugin](plugin\u002FscatterElementsPlugin). It is superseded by Version 2, which implements the `IPluginV3` interface.\r\n - Quickstart guide\r\n   - Updated the [SemanticSegmentation](quickstart\u002FSemanticSegmentation) guide with latest APIs.\r\n - Parser changes\r\n   - Added support for tensor `axes` inputs for `Slice` node.\r\n   - Updated `ScatterElements` importer to use Version 2 of [ScatterElements plugin](plugin\u002FscatterElementsPlugin), which implements the `IPluginV3` interface.\r\n - Updated tooling\r\n   - Polygraphy v0.49.13\r\n  ","2024-08-08T23:23:49",{"id":263,"version":264,"summary_zh":265,"released_at":266},128852,"v10.2.0","Key Features and Updates:\r\n\r\n - Demo changes\r\n   - Added [Stable Diffusion 3 demo](demo\u002FDiffusion).\r\n - Plugin changes\r\n   - Version 3 of the [InstanceNormalization plugin](plugin\u002FinstanceNormalizationPlugin\u002F) (`InstanceNormalization_TRT`) has been added. This version is based on the `IPluginV3` interface and is used by the TensorRT ONNX parser when native `InstanceNormalization` is disabled.\r\n - Tooling changes\r\n   - Pytorch Quantization development has transitioned to [TensorRT Model Optimizer](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002FTensorRT-Model-Optimizer). All developers are encouraged to use TensorRT Model Optimizer to benefit from the latest advancements on quantization and compression.\r\n - Build containers\r\n   - Updated default cuda versions to `12.5.0`.","2024-07-15T16:16:49",{"id":268,"version":269,"summary_zh":270,"released_at":271},128853,"v10.1.0","Key Features and Updates:\r\n\r\n - Parser changes\r\n   - Added `supportsModelV2` API\r\n   - Added support for `DeformConv` operation\r\n   - Added support for `PluginV3` TensorRT Plugins\r\n   - Marked all IParser and IParserRefitter APIs as `noexcept`\r\n - Plugin changes\r\n   - Added version 2 of ROIAlign_TRT plugin, which implements the IPluginV3 plugin interface. 
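Both the 10.0.0 notes (kNATIVE_INSTANCENORM on by default) and the 10.2.0 notes above (version 3 of InstanceNormalization_TRT used when native InstanceNormalization is disabled) concern how the ONNX parser lowers InstanceNormalization. A hedged sketch of toggling that behaviour from Python; the flag and method names follow the OnnxParser API and should be checked against your TensorRT version.

```python
# Sketch: choose between native InstanceNormalization and the plugin-based import.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(0)
parser = trt.OnnxParser(network, logger)

parser.set_flag(trt.OnnxParserFlag.NATIVE_INSTANCENORM)     # default since TensorRT 10.0
# parser.clear_flag(trt.OnnxParserFlag.NATIVE_INSTANCENORM) # fall back to the InstanceNormalization_TRT plugin

with open("model.onnx", "rb") as f:                         # placeholder model path
    ok = parser.parse(f.read())
```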
When importing an ONNX model with the RoiAlign op, this new version of the plugin will be inserted to the TRT network.\r\n - Samples changes\r\n   - Added a new sample [non_zero_plugin](samples\u002Fpython\u002Fnon_zero_plugin), which is a Python version of the C++ sample [sampleNonZeroPlugin](samples\u002FsampleNonZeroPlugin).\r\n - Updated tooling\r\n   - Polygraphy v0.49.12\r\n   - ONNX-GraphSurgeon v0.5.3","2024-06-18T00:26:12",{"id":273,"version":274,"summary_zh":275,"released_at":276},128854,"v10.0.1","Key Features and Updates:\r\n\r\n - Parser changes\r\n   - Added support for building with `protobuf-lite`.\r\n   - Fixed issue when parsing and refitting models with nested `BatchNormalization` nodes.\r\n   - Added support for empty inputs in custom plugin nodes.\r\n - Demo changes\r\n   - The following demos have been removed: Jasper, Tacotron2, HuggingFace Diffusers notebook\r\n - Updated tooling\r\n   - Polygraphy v0.49.10\r\n   - ONNX-GraphSurgeon v0.5.2\r\n - Build Containers\r\n   - Updated default cuda versions to `12.4.0`.\r\n   - Added Rocky Linux 8 and Rocky Linux 9 build containers","2024-04-30T18:05:03"]
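Several of the 10.0.x items above (weight-stripped engines, IParserRefitter, refit fixes for nested BatchNormalization) revolve around engine refit. A closing sketch of the plain Refitter flow; the engine path, weight name, and replacement array are placeholders, and the engine must have been built with the REFIT builder flag.

```python
# Sketch: refit an existing engine with new weights instead of rebuilding it.
import numpy as np
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)

with open("model.engine", "rb") as f:                        # placeholder engine path
    engine = trt.Runtime(logger).deserialize_cuda_engine(f.read())

refitter = trt.Refitter(engine, logger)
new_w = np.zeros((512, 256), dtype=np.float32)               # replacement weights (placeholder)
refitter.set_named_weights("fc1.weight", trt.Weights(new_w)) # name must match the built network

missing = refitter.get_missing_weights()                     # weights that still need values
assert not missing, f"still missing: {missing}"
assert refitter.refit_cuda_engine()                          # apply the update in place
```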