[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-marcoslucianops--DeepStream-Yolo":3,"tool-marcoslucianops--DeepStream-Yolo":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",140436,2,"2026-04-05T23:32:43",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":70,"readme_en":71,"readme_zh":72,"quickstart_zh":73,"use_case_zh":74,"hero_image_url":75,"owner_login":76,"owner_name":77,"owner_avatar_url":78,"owner_bio":79,"owner_company":80,"owner_location":81,"owner_email":80,"owner_twitter":80,"owner_website":82,"owner_url":83,"languages":84,"stars":101,"forks":102,"last_commit_at":103,"license":104,"difficulty_score":10,"env_os":105,"env_gpu":106,"env_ram":107,"env_deps":108,"category_tags":115,"github_topics":116,"view_count":10,"oss_zip_url":80,"oss_zip_packed_at":80,"status":16,"created_at":132,"updated_at":133,"faqs":134,"releases":164},481,"marcoslucianops\u002FDeepStream-Yolo","DeepStream-Yolo","NVIDIA DeepStream SDK 8.0 \u002F 7.1 \u002F 7.0 \u002F 6.4 \u002F 6.3 \u002F 6.2 \u002F 6.1.1 \u002F 6.1 \u002F 6.0.1 \u002F 6.0 \u002F 5.1 implementation for YOLO models","DeepStream-Yolo 是一个专为 NVIDIA DeepStream SDK 设计的开源工具，旨在简化 YOLO 系列目标检测模型在视频分析场景中的部署流程。它通过提供预配置的模型转换方案和优化参数，帮助用户快速将 YOLO 模型（包括 YOLOv5 至 YOLOv13、YOLO-NAS、RT-DETR 等 30+ 变体）集成到 DeepStream 的实时视频处理管线中。\n\n该工具主要解决了 YOLO 模型在 DeepStream 中部署时的兼容性与性能优化难题。传统流程需要手动处理 ONNX 格式转换、TensorRT 引擎生成及配置文件适配，而 DeepStream-Yolo 提供了标准化的转换脚本和 GPU 加速的后处理模块，显著降低了部署门槛。特别支持 INT8 量化校准、非正方形输入模型、动态批处理等特性，可提升边缘设备的推理效率。\n\n适合具备基础深度学习和 NVIDIA 工具链使用经验的开发者、算法工程师及研究人员。尤其适用于需要在 Jetson 或数据中心 GPU 上构建实时视频分析系统（如智能安防、工业质检）的团队。其技术亮点包括对 Darknet 原生模型","DeepStream-Yolo 是一个专为 NVIDIA DeepStream SDK 设计的开源工具，旨在简化 
YOLO 系列目标检测模型在视频分析场景中的部署流程。它通过提供预配置的模型转换方案和优化参数，帮助用户快速将 YOLO 模型（包括 YOLOv5 至 YOLOv13、YOLO-NAS、RT-DETR 等 30+ 变体）集成到 DeepStream 的实时视频处理管线中。\n\n该工具主要解决了 YOLO 模型在 DeepStream 中部署时的兼容性与性能优化难题。传统流程需要手动处理 ONNX 格式转换、TensorRT 引擎生成及配置文件适配，而 DeepStream-Yolo 提供了标准化的转换脚本和 GPU 加速的后处理模块，显著降低了部署门槛。特别支持 INT8 量化校准、非正方形输入模型、动态批处理等特性，可提升边缘设备的推理效率。\n\n适合具备基础深度学习和 NVIDIA 工具链使用经验的开发者、算法工程师及研究人员。尤其适用于需要在 Jetson 或数据中心 GPU 上构建实时视频分析系统（如智能安防、工业质检）的团队。其技术亮点包括对 Darknet 原生模型的自动转换支持、多版本 DeepStream SDK（5.1-8.0）兼容性，以及针对不同 YOLO 变体的定制化配置模板。用户需准备 Ubuntu 22.04\u002F24.04 系统环境及 NVIDIA 显卡驱动，通过 Docker 或源码方式部署。","# DeepStream-Yolo\n\nNVIDIA DeepStream SDK 8.0 \u002F 7.1 \u002F 7.0 \u002F 6.4 \u002F 6.3 \u002F 6.2 \u002F 6.1.1 \u002F 6.1 \u002F 6.0.1 \u002F 6.0 \u002F 5.1  configuration for YOLO models\n\n--------------------------------------------------------------------------------------------------\n### For now, I am limited for some updates. Thank you for understanding.\n--------------------------------------------------------------------------------------------------\n### YOLO-Pose: https:\u002F\u002Fgithub.com\u002Fmarcoslucianops\u002FDeepStream-Yolo-Pose\n### YOLO-Seg: https:\u002F\u002Fgithub.com\u002Fmarcoslucianops\u002FDeepStream-Yolo-Seg\n### YOLO-Face: https:\u002F\u002Fgithub.com\u002Fmarcoslucianops\u002FDeepStream-Yolo-Face\n--------------------------------------------------------------------------------------------------\n### Important: please export the ONNX model with the new export file, generate the TensorRT engine again with the updated files, and use the new config_infer_primary file according to your model\n--------------------------------------------------------------------------------------------------\n\n### Improvements on this repository\n\n* Support for INT8 calibration\n* Support for non square models\n* Models benchmarks\n* Support for Darknet models (YOLOv4, etc) using cfg and weights conversion with GPU post-processing\n* Support for YOLO-Master, YOLO26, RF-DETR, D-FINE, 
RT-DETR, CO-DETR (MMDetection), YOLO-NAS, PPYOLOE+, PPYOLOE, DAMO-YOLO, Gold-YOLO, RTMDet (MMYOLO), YOLOX, YOLOR, YOLOv13, YOLOv12, YOLO11, YOLOv10, YOLOv9, YOLOv8, YOLOv7, YOLOv6, YOLOv5u and YOLOv5 using ONNX conversion with GPU post-processing\n* GPU bbox parser\n* Custom ONNX model parser\n* Dynamic batch-size\n* INT8 calibration (PTQ) for Darknet and ONNX exported models\n\n##\n\n### Getting started\n\n* [Requirements](#requirements)\n* [Supported models](#supported-models)\n* [Benchmarks](docs\u002Fbenchmarks.md)\n* [dGPU installation](docs\u002FdGPUInstalation.md)\n* [Basic usage](#basic-usage)\n* [Docker usage](#docker-usage)\n* [NMS configuration](#nms-configuration)\n* [Notes](#notes)\n* [INT8 calibration](docs\u002FINT8Calibration.md)\n* [YOLOv5 usage](docs\u002FYOLOv5.md)\n* [YOLOv5u usage](docs\u002FYOLOv5u.md)\n* [YOLOv6 usage](docs\u002FYOLOv6.md)\n* [YOLOv7 usage](docs\u002FYOLOv7.md)\n* [YOLOv8 usage](docs\u002FYOLOv8.md)\n* [YOLOv9 usage](docs\u002FYOLOv9.md)\n* [YOLOv10 usage](docs\u002FYOLOv10.md)\n* [YOLO11 usage](docs\u002FYOLO11.md)\n* [YOLOv12 usage](docs\u002FYOLOv12.md)\n* [YOLOv13 usage](docs\u002FYOLOv13.md)\n* [YOLOR usage](docs\u002FYOLOR.md)\n* [YOLOX usage](docs\u002FYOLOX.md)\n* [RTMDet (MMYOLO) usage](docs\u002FRTMDet.md)\n* [Gold-YOLO usage](docs\u002FGoldYOLO.md)\n* [DAMO-YOLO usage](docs\u002FDAMOYOLO.md)\n* [PP-YOLOE \u002F PP-YOLOE+ usage](docs\u002FPPYOLOE.md)\n* [YOLO-NAS usage](docs\u002FYOLONAS.md)\n* [CO-DETR (MMDetection) usage](docs\u002FCODETR.md)\n* [RT-DETR PyTorch usage](docs\u002FRTDETR_PyTorch.md)\n* [RT-DETR Paddle usage](docs\u002FRTDETR_Paddle.md)\n* [RT-DETR Ultralytics usage](docs\u002FRTDETR_Ultralytics.md)\n* [D-FINE usage](docs\u002FDFINE.md)\n* [RF-DETR usage](docs\u002FRFDETR.md)\n* [YOLO26 usage](docs\u002FYOLO26.md)\n* [YOLO-Master usage](docs\u002FYOLOMaster.md)\n* [Using your custom model](docs\u002FcustomModels.md)\n* [Multiple YOLO GIEs](docs\u002FmultipleGIEs.md)\n\n##\n\n### Requirements\n\n#### 
DeepStream 8.0 on x86 platform\n\n* [Ubuntu 24.04](https:\u002F\u002Freleases.ubuntu.com\u002F24.04\u002F)\n* [CUDA 12.8 Update 1](https:\u002F\u002Fdeveloper.nvidia.com\u002Fcuda-12-8-1-download-archive?target_os=Linux&target_arch=x86_64&Distribution=Ubuntu&target_version=22.04&target_type=runfile_local)\n* [TensorRT 10.9 GA (10.9.0.34)](https:\u002F\u002Fdeveloper.nvidia.com\u002Fnvidia-tensorrt-10x-download)\n* [NVIDIA Driver 570.195.03 (Data center \u002F Tesla series) \u002F 570.133.20 (TITAN, GeForce RTX \u002F GTX series and RTX \u002F Quadro series)](https:\u002F\u002Fwww.nvidia.com\u002FDownload\u002Findex.aspx)\n* [NVIDIA DeepStream SDK 8.0](https:\u002F\u002Fcatalog.ngc.nvidia.com\u002Forgs\u002Fnvidia\u002Fresources\u002Fdeepstream\u002Ffiles?version=8.0)\n* [GStreamer 1.24.2](https:\u002F\u002Fgstreamer.freedesktop.org\u002F)\n* [DeepStream-Yolo](https:\u002F\u002Fgithub.com\u002Fmarcoslucianops\u002FDeepStream-Yolo)\n\n#### DeepStream 7.1 on x86 platform\n\n* [Ubuntu 22.04](https:\u002F\u002Freleases.ubuntu.com\u002F22.04\u002F)\n* [CUDA 12.6 Update 3](https:\u002F\u002Fdeveloper.nvidia.com\u002Fcuda-12-6-3-download-archive?target_os=Linux&target_arch=x86_64&Distribution=Ubuntu&target_version=22.04&target_type=runfile_local)\n* [TensorRT 10.4 GA (10.4.0.26)](https:\u002F\u002Fdeveloper.nvidia.com\u002Fnvidia-tensorrt-10x-download)\n* [NVIDIA Driver 535.183.06 (Data center \u002F Tesla series) \u002F 560.35.03 (TITAN, GeForce RTX \u002F GTX series and RTX \u002F Quadro series)](https:\u002F\u002Fwww.nvidia.com\u002FDownload\u002Findex.aspx)\n* [NVIDIA DeepStream SDK 7.1](https:\u002F\u002Fcatalog.ngc.nvidia.com\u002Forgs\u002Fnvidia\u002Fresources\u002Fdeepstream\u002Ffiles?version=7.1)\n* [GStreamer 1.20.3](https:\u002F\u002Fgstreamer.freedesktop.org\u002F)\n* [DeepStream-Yolo](https:\u002F\u002Fgithub.com\u002Fmarcoslucianops\u002FDeepStream-Yolo)\n\n#### DeepStream 7.0 on x86 platform\n\n* [Ubuntu 
22.04](https:\u002F\u002Freleases.ubuntu.com\u002F22.04\u002F)\n* [CUDA 12.2 Update 2](https:\u002F\u002Fdeveloper.nvidia.com\u002Fcuda-12-2-2-download-archive?target_os=Linux&target_arch=x86_64&Distribution=Ubuntu&target_version=22.04&target_type=runfile_local)\n* [TensorRT 8.6 GA (8.6.1.6)](https:\u002F\u002Fdeveloper.nvidia.com\u002Fnvidia-tensorrt-8x-download)\n* [NVIDIA Driver 535 (>= 535.161.08)](https:\u002F\u002Fwww.nvidia.com\u002FDownload\u002Findex.aspx)\n* [NVIDIA DeepStream SDK 7.0](https:\u002F\u002Fcatalog.ngc.nvidia.com\u002Forgs\u002Fnvidia\u002Fresources\u002Fdeepstream\u002Ffiles?version=7.0)\n* [GStreamer 1.20.3](https:\u002F\u002Fgstreamer.freedesktop.org\u002F)\n* [DeepStream-Yolo](https:\u002F\u002Fgithub.com\u002Fmarcoslucianops\u002FDeepStream-Yolo)\n\n#### DeepStream 6.4 on x86 platform\n\n* [Ubuntu 22.04](https:\u002F\u002Freleases.ubuntu.com\u002F22.04\u002F)\n* [CUDA 12.2 Update 2](https:\u002F\u002Fdeveloper.nvidia.com\u002Fcuda-12-2-2-download-archive?target_os=Linux&target_arch=x86_64&Distribution=Ubuntu&target_version=22.04&target_type=runfile_local)\n* [TensorRT 8.6 GA (8.6.1.6)](https:\u002F\u002Fdeveloper.nvidia.com\u002Fnvidia-tensorrt-8x-download)\n* [NVIDIA Driver 535 (>= 535.104.12)](https:\u002F\u002Fwww.nvidia.com\u002FDownload\u002Findex.aspx)\n* [NVIDIA DeepStream SDK 6.4](https:\u002F\u002Fcatalog.ngc.nvidia.com\u002Forgs\u002Fnvidia\u002Fresources\u002Fdeepstream\u002Ffiles?version=6.4)\n* [GStreamer 1.20.3](https:\u002F\u002Fgstreamer.freedesktop.org\u002F)\n* [DeepStream-Yolo](https:\u002F\u002Fgithub.com\u002Fmarcoslucianops\u002FDeepStream-Yolo)\n\n#### DeepStream 6.3 on x86 platform\n\n* [Ubuntu 20.04](https:\u002F\u002Freleases.ubuntu.com\u002F20.04\u002F)\n* [CUDA 12.1 Update 1](https:\u002F\u002Fdeveloper.nvidia.com\u002Fcuda-12-1-1-download-archive?target_os=Linux&target_arch=x86_64&Distribution=Ubuntu&target_version=20.04&target_type=runfile_local)\n* [TensorRT 8.5 GA Update 2 
(8.5.3.1)](https:\u002F\u002Fdeveloper.nvidia.com\u002Fnvidia-tensorrt-8x-download)\n* [NVIDIA Driver 525 (>= 525.125.06)](https:\u002F\u002Fwww.nvidia.com\u002FDownload\u002Findex.aspx)\n* [NVIDIA DeepStream SDK 6.3](https:\u002F\u002Fcatalog.ngc.nvidia.com\u002Forgs\u002Fnvidia\u002Fresources\u002Fdeepstream\u002Ffiles?version=6.3)\n* [GStreamer 1.16.3](https:\u002F\u002Fgstreamer.freedesktop.org\u002F)\n* [DeepStream-Yolo](https:\u002F\u002Fgithub.com\u002Fmarcoslucianops\u002FDeepStream-Yolo)\n\n#### DeepStream 6.2 on x86 platform\n\n* [Ubuntu 20.04](https:\u002F\u002Freleases.ubuntu.com\u002F20.04\u002F)\n* [CUDA 11.8](https:\u002F\u002Fdeveloper.nvidia.com\u002Fcuda-11-8-0-download-archive?target_os=Linux&target_arch=x86_64&Distribution=Ubuntu&target_version=20.04&target_type=runfile_local)\n* [TensorRT 8.5 GA Update 1 (8.5.2.2)](https:\u002F\u002Fdeveloper.nvidia.com\u002Fnvidia-tensorrt-8x-download)\n* [NVIDIA Driver 525 (>= 525.85.12)](https:\u002F\u002Fwww.nvidia.com\u002FDownload\u002Findex.aspx)\n* [NVIDIA DeepStream SDK 6.2](https:\u002F\u002Fdeveloper.nvidia.com\u002Fdeepstream-sdk-download-tesla-archived)\n* [GStreamer 1.16.3](https:\u002F\u002Fgstreamer.freedesktop.org\u002F)\n* [DeepStream-Yolo](https:\u002F\u002Fgithub.com\u002Fmarcoslucianops\u002FDeepStream-Yolo)\n\n#### DeepStream 6.1.1 on x86 platform\n\n* [Ubuntu 20.04](https:\u002F\u002Freleases.ubuntu.com\u002F20.04\u002F)\n* [CUDA 11.7 Update 1](https:\u002F\u002Fdeveloper.nvidia.com\u002Fcuda-11-7-1-download-archive?target_os=Linux&target_arch=x86_64&Distribution=Ubuntu&target_version=20.04&target_type=runfile_local)\n* [TensorRT 8.4 GA (8.4.1.5)](https:\u002F\u002Fdeveloper.nvidia.com\u002Fnvidia-tensorrt-8x-download)\n* [NVIDIA Driver 515.65.01](https:\u002F\u002Fwww.nvidia.com\u002FDownload\u002Findex.aspx)\n* [NVIDIA DeepStream SDK 6.1.1](https:\u002F\u002Fdeveloper.nvidia.com\u002Fdeepstream-sdk-download-tesla-archived)\n* [GStreamer 
1.16.2](https:\u002F\u002Fgstreamer.freedesktop.org\u002F)\n* [DeepStream-Yolo](https:\u002F\u002Fgithub.com\u002Fmarcoslucianops\u002FDeepStream-Yolo)\n\n#### DeepStream 6.1 on x86 platform\n\n* [Ubuntu 20.04](https:\u002F\u002Freleases.ubuntu.com\u002F20.04\u002F)\n* [CUDA 11.6 Update 1](https:\u002F\u002Fdeveloper.nvidia.com\u002Fcuda-11-6-1-download-archive?target_os=Linux&target_arch=x86_64&Distribution=Ubuntu&target_version=20.04&target_type=runfile_local)\n* [TensorRT 8.2 GA Update 4 (8.2.5.1)](https:\u002F\u002Fdeveloper.nvidia.com\u002Fnvidia-tensorrt-8x-download)\n* [NVIDIA Driver 510.47.03](https:\u002F\u002Fwww.nvidia.com\u002FDownload\u002Findex.aspx)\n* [NVIDIA DeepStream SDK 6.1](https:\u002F\u002Fdeveloper.nvidia.com\u002Fdeepstream-sdk-download-tesla-archived)\n* [GStreamer 1.16.2](https:\u002F\u002Fgstreamer.freedesktop.org\u002F)\n* [DeepStream-Yolo](https:\u002F\u002Fgithub.com\u002Fmarcoslucianops\u002FDeepStream-Yolo)\n\n#### DeepStream 6.0.1 \u002F 6.0 on x86 platform\n\n* [Ubuntu 18.04](https:\u002F\u002Freleases.ubuntu.com\u002F18.04.6\u002F)\n* [CUDA 11.4 Update 1](https:\u002F\u002Fdeveloper.nvidia.com\u002Fcuda-11-4-1-download-archive?target_os=Linux&target_arch=x86_64&Distribution=Ubuntu&target_version=18.04&target_type=runfile_local)\n* [TensorRT 8.0 GA (8.0.1)](https:\u002F\u002Fdeveloper.nvidia.com\u002Fnvidia-tensorrt-8x-download)\n* [NVIDIA Driver 470.63.01](https:\u002F\u002Fwww.nvidia.com\u002FDownload\u002Findex.aspx)\n* [NVIDIA DeepStream SDK 6.0.1 \u002F 6.0](https:\u002F\u002Fdeveloper.nvidia.com\u002Fdeepstream-sdk-download-tesla-archived)\n* [GStreamer 1.14.5](https:\u002F\u002Fgstreamer.freedesktop.org\u002F)\n* [DeepStream-Yolo](https:\u002F\u002Fgithub.com\u002Fmarcoslucianops\u002FDeepStream-Yolo)\n\n#### DeepStream 5.1 on x86 platform\n\n* [Ubuntu 18.04](https:\u002F\u002Freleases.ubuntu.com\u002F18.04.6\u002F)\n* [CUDA 
11.1](https:\u002F\u002Fdeveloper.nvidia.com\u002Fcuda-11.1.0-download-archive?target_os=Linux&target_arch=x86_64&target_distro=Ubuntu&target_version=1804&target_type=runfilelocal)\n* [TensorRT 7.2.2](https:\u002F\u002Fdeveloper.nvidia.com\u002Fnvidia-tensorrt-7x-download)\n* [NVIDIA Driver 460.32.03](https:\u002F\u002Fwww.nvidia.com\u002FDownload\u002Findex.aspx)\n* [NVIDIA DeepStream SDK 5.1](https:\u002F\u002Fdeveloper.nvidia.com\u002Fdeepstream-sdk-download-tesla-archived)\n* [GStreamer 1.14.5](https:\u002F\u002Fgstreamer.freedesktop.org\u002F)\n* [DeepStream-Yolo](https:\u002F\u002Fgithub.com\u002Fmarcoslucianops\u002FDeepStream-Yolo)\n\n#### DeepStream 8.0 on Jetson platform\n\n* [JetPack 7.0](https:\u002F\u002Fdeveloper.nvidia.com\u002Fembedded\u002Fjetpack\u002Fdownloads)\n* [NVIDIA DeepStream SDK 8.0](https:\u002F\u002Fcatalog.ngc.nvidia.com\u002Forgs\u002Fnvidia\u002Fresources\u002Fdeepstream\u002Ffiles?version=8.0)\n* [DeepStream-Yolo](https:\u002F\u002Fgithub.com\u002Fmarcoslucianops\u002FDeepStream-Yolo)\n\n#### DeepStream 7.1 on Jetson platform\n\n* [JetPack 6.2.1](https:\u002F\u002Fdeveloper.nvidia.com\u002Fembedded\u002Fjetpack-sdk-621) \u002F [JetPack 6.2](https:\u002F\u002Fdeveloper.nvidia.com\u002Fembedded\u002Fjetpack-sdk-62) \u002F [JetPack 6.1](https:\u002F\u002Fdeveloper.nvidia.com\u002Fembedded\u002Fjetpack-sdk-61)\n* [NVIDIA DeepStream SDK 7.1](https:\u002F\u002Fcatalog.ngc.nvidia.com\u002Forgs\u002Fnvidia\u002Fresources\u002Fdeepstream\u002Ffiles?version=7.1)\n* [DeepStream-Yolo](https:\u002F\u002Fgithub.com\u002Fmarcoslucianops\u002FDeepStream-Yolo)\n\n#### DeepStream 7.0 on Jetson platform\n\n* [JetPack 6.0](https:\u002F\u002Fdeveloper.nvidia.com\u002Fembedded\u002Fjetpack-sdk-60)\n* [NVIDIA DeepStream SDK 7.0](https:\u002F\u002Fcatalog.ngc.nvidia.com\u002Forgs\u002Fnvidia\u002Fresources\u002Fdeepstream\u002Ffiles?version=7.0)\n* [DeepStream-Yolo](https:\u002F\u002Fgithub.com\u002Fmarcoslucianops\u002FDeepStream-Yolo)\n\n#### DeepStream 
6.4 on Jetson platform\n\n* [JetPack 6.0 DP](https:\u002F\u002Fdeveloper.nvidia.com\u002Fembedded\u002Fjetpack-sdk-60dp)\n* [NVIDIA DeepStream SDK 6.4](https:\u002F\u002Fcatalog.ngc.nvidia.com\u002Forgs\u002Fnvidia\u002Fresources\u002Fdeepstream\u002Ffiles?version=6.4)\n* [DeepStream-Yolo](https:\u002F\u002Fgithub.com\u002Fmarcoslucianops\u002FDeepStream-Yolo)\n\n#### DeepStream 6.3 on Jetson platform\n\n* JetPack [5.1.3](https:\u002F\u002Fdeveloper.nvidia.com\u002Fembedded\u002Fjetpack-sdk-513) \u002F [5.1.2](https:\u002F\u002Fdeveloper.nvidia.com\u002Fembedded\u002Fjetpack-sdk-512)\n* [NVIDIA DeepStream SDK 6.3](https:\u002F\u002Fcatalog.ngc.nvidia.com\u002Forgs\u002Fnvidia\u002Fresources\u002Fdeepstream\u002Ffiles?version=6.3)\n* [DeepStream-Yolo](https:\u002F\u002Fgithub.com\u002Fmarcoslucianops\u002FDeepStream-Yolo)\n\n#### DeepStream 6.2 on Jetson platform\n\n* JetPack [5.1.3](https:\u002F\u002Fdeveloper.nvidia.com\u002Fembedded\u002Fjetpack-sdk-513) \u002F [5.1.2](https:\u002F\u002Fdeveloper.nvidia.com\u002Fembedded\u002Fjetpack-sdk-512) \u002F [5.1.1](https:\u002F\u002Fdeveloper.nvidia.com\u002Fembedded\u002Fjetpack-sdk-511) \u002F [5.1](https:\u002F\u002Fdeveloper.nvidia.com\u002Fembedded\u002Fjetpack-sdk-51)\n* [NVIDIA DeepStream SDK 6.2](https:\u002F\u002Fdeveloper.nvidia.com\u002Fembedded\u002Fdeepstream-on-jetson-downloads-archived)\n* [DeepStream-Yolo](https:\u002F\u002Fgithub.com\u002Fmarcoslucianops\u002FDeepStream-Yolo)\n\n#### DeepStream 6.1.1 on Jetson platform\n\n* [JetPack 5.0.2](https:\u002F\u002Fdeveloper.nvidia.com\u002Fembedded\u002Fjetpack-sdk-502)\n* [NVIDIA DeepStream SDK 6.1.1](https:\u002F\u002Fdeveloper.nvidia.com\u002Fembedded\u002Fdeepstream-on-jetson-downloads-archived)\n* [DeepStream-Yolo](https:\u002F\u002Fgithub.com\u002Fmarcoslucianops\u002FDeepStream-Yolo)\n\n#### DeepStream 6.1 on Jetson platform\n\n* [JetPack 5.0.1 DP](https:\u002F\u002Fdeveloper.nvidia.com\u002Fembedded\u002Fjetpack-sdk-501dp)\n* [NVIDIA DeepStream SDK 
6.1](https:\u002F\u002Fdeveloper.nvidia.com\u002Fembedded\u002Fdeepstream-on-jetson-downloads-archived)\n* [DeepStream-Yolo](https:\u002F\u002Fgithub.com\u002Fmarcoslucianops\u002FDeepStream-Yolo)\n\n#### DeepStream 6.0.1 \u002F 6.0 on Jetson platform\n\n* [JetPack 4.6.4](https:\u002F\u002Fdeveloper.nvidia.com\u002Fjetpack-sdk-464)\n* [NVIDIA DeepStream SDK 6.0.1 \u002F 6.0](https:\u002F\u002Fdeveloper.nvidia.com\u002Fembedded\u002Fdeepstream-on-jetson-downloads-archived)\n* [DeepStream-Yolo](https:\u002F\u002Fgithub.com\u002Fmarcoslucianops\u002FDeepStream-Yolo)\n\n#### DeepStream 5.1 on Jetson platform\n\n* [JetPack 4.5.1](https:\u002F\u002Fdeveloper.nvidia.com\u002Fembedded\u002Fjetpack-sdk-451-archive)\n* [NVIDIA DeepStream SDK 5.1](https:\u002F\u002Fdeveloper.nvidia.com\u002Fembedded\u002Fdeepstream-on-jetson-downloads-archived)\n* [DeepStream-Yolo](https:\u002F\u002Fgithub.com\u002Fmarcoslucianops\u002FDeepStream-Yolo)\n\n##\n\n### Supported models\n\n* [Darknet](https:\u002F\u002Fgithub.com\u002FAlexeyAB\u002Fdarknet)\n* [MobileNet-YOLO](https:\u002F\u002Fgithub.com\u002Fdog-qiuqiu\u002FMobileNet-Yolo)\n* [YOLO-Fastest](https:\u002F\u002Fgithub.com\u002Fdog-qiuqiu\u002FYolo-Fastest)\n* [YOLOv5](https:\u002F\u002Fgithub.com\u002Fultralytics\u002Fyolov5)\n* [YOLOv5u](https:\u002F\u002Fgithub.com\u002Fultralytics\u002Fultralytics)\n* [YOLOv6](https:\u002F\u002Fgithub.com\u002Fmeituan\u002FYOLOv6)\n* [YOLOv7](https:\u002F\u002Fgithub.com\u002FWongKinYiu\u002Fyolov7)\n* [YOLOv8](https:\u002F\u002Fgithub.com\u002Fultralytics\u002Fultralytics)\n* [YOLOv9](https:\u002F\u002Fgithub.com\u002FWongKinYiu\u002Fyolov9)\n* [YOLOv10](https:\u002F\u002Fgithub.com\u002FTHU-MIG\u002Fyolov10)\n* [YOLO11](https:\u002F\u002Fgithub.com\u002Fultralytics\u002Fultralytics)\n* [YOLOv12](https:\u002F\u002Fgithub.com\u002Fsunsmarterjie\u002Fyolov12)\n* [YOLOv13](https:\u002F\u002Fgithub.com\u002FiMoonLab\u002Fyolov13)\n* 
[YOLOR](https:\u002F\u002Fgithub.com\u002FWongKinYiu\u002Fyolor)\n* [YOLOX](https:\u002F\u002Fgithub.com\u002FMegvii-BaseDetection\u002FYOLOX)\n* [RTMDet (MMYOLO)](https:\u002F\u002Fgithub.com\u002Fopen-mmlab\u002Fmmyolo\u002Ftree\u002Fmain\u002Fconfigs\u002Frtmdet)\n* [Gold-YOLO](https:\u002F\u002Fgithub.com\u002Fhuawei-noah\u002FEfficient-Computing\u002Ftree\u002Fmaster\u002FDetection\u002FGold-YOLO)\n* [DAMO-YOLO](https:\u002F\u002Fgithub.com\u002Ftinyvision\u002FDAMO-YOLO)\n* [PP-YOLOE \u002F PP-YOLOE+](https:\u002F\u002Fgithub.com\u002FPaddlePaddle\u002FPaddleDetection\u002Ftree\u002Frelease\u002F2.8\u002Fconfigs\u002Fppyoloe)\n* [YOLO-NAS](https:\u002F\u002Fgithub.com\u002FDeci-AI\u002Fsuper-gradients\u002Fblob\u002Fmaster\u002FYOLONAS.md)\n* [CO-DETR (MMDetection)](https:\u002F\u002Fgithub.com\u002Fopen-mmlab\u002Fmmdetection\u002Ftree\u002Fmain\u002Fprojects\u002FCO-DETR)\n* [RT-DETR](https:\u002F\u002Fgithub.com\u002Flyuwenyu\u002FRT-DETR)\n* [D-FINE](https:\u002F\u002Fgithub.com\u002FPeterande\u002FD-FINE)\n* [RF-DETR](https:\u002F\u002Fgithub.com\u002Froboflow\u002Frf-detr)\n* [YOLO26](https:\u002F\u002Fgithub.com\u002Fultralytics\u002Fultralytics)\n* [YOLO-Master](https:\u002F\u002Fgithub.com\u002FTencent\u002FYOLO-Master)\n\n##\n\n### Basic usage\n\n#### 1. Download the repo\n\n```\ngit clone https:\u002F\u002Fgithub.com\u002Fmarcoslucianops\u002FDeepStream-Yolo.git\ncd DeepStream-Yolo\n```\n\n#### 2. Download the `cfg` and `weights` files from [Darknet](https:\u002F\u002Fgithub.com\u002FAlexeyAB\u002Fdarknet) repo to the DeepStream-Yolo folder\n\n#### 3. Compile the lib\n\n3.1. 
Set the `CUDA_VER` according to your DeepStream version\n\n```\nexport CUDA_VER=XY.Z\n```\n\n* x86 platform\n\n  ```\n  DeepStream 8.0 = 12.8\n  DeepStream 7.1 = 12.6\n  DeepStream 7.0 \u002F 6.4 = 12.2\n  DeepStream 6.3 = 12.1\n  DeepStream 6.2 = 11.8\n  DeepStream 6.1.1 = 11.7\n  DeepStream 6.1 = 11.6\n  DeepStream 6.0.1 \u002F 6.0 = 11.4\n  DeepStream 5.1 = 11.1\n  ```\n\n* Jetson platform\n\n  ```\n  DeepStream 8.0 = 13.0\n  DeepStream 7.1 = 12.6\n  DeepStream 7.0 \u002F 6.4 = 12.2\n  DeepStream 6.3 \u002F 6.2 \u002F 6.1.1 \u002F 6.1 = 11.4\n  DeepStream 6.0.1 \u002F 6.0 \u002F 5.1 = 10.2\n  ```\n\n3.2. Make the lib\n\n```\nmake -C nvdsinfer_custom_impl_Yolo clean && make -C nvdsinfer_custom_impl_Yolo\n```\n\n#### 4. Edit the `config_infer_primary.txt` file according to your model (example for YOLOv4)\n\n```\n[property]\n...\ncustom-network-config=yolov4.cfg\nmodel-file=yolov4.weights\n...\n```\n\n**NOTE**: For **Darknet** models, by default, the dynamic batch-size is set. To use static batch-size, uncomment the line\n\n```\n...\nforce-implicit-batch-dim=1\n...\n```\n\n#### 5. 
Run\n\n```\ndeepstream-app -c deepstream_app_config.txt\n```\n\n**NOTE**: The TensorRT engine file may take a very long time to generate (sometimes more than 10 minutes).\n\n**NOTE**: If you want to use YOLOv2 or YOLOv2-Tiny models, change the `deepstream_app_config.txt` file before running it\n\n```\n...\n[primary-gie]\n...\nconfig-file=config_infer_primary_yoloV2.txt\n...\n```\n\n##\n\n### Docker usage\n\n* x86 platform\n\n  ```\n  nvcr.io\u002Fnvidia\u002Fdeepstream:8.0-gc-triton-devel\n  nvcr.io\u002Fnvidia\u002Fdeepstream:8.0-triton-multiarch\n  ```\n\n* Jetson platform\n\n  ```\n  nvcr.io\u002Fnvidia\u002Fdeepstream:8.0-triton-multiarch\n  ```\n\n**NOTE**: To compile `nvdsinfer_custom_impl_Yolo`, you need to install g++ inside the container\n\n```\napt-get install build-essential\n```\n\n**NOTE**: With DeepStream 8.0, the docker containers do not package libraries necessary for certain multimedia operations like audio data parsing, CPU decode, and CPU encode. This change could affect the processing of certain video streams\u002Ffiles, such as mp4 files that include an audio track. Please run the script below inside the docker image to install the additional packages that might be necessary to use all of the DeepStream SDK features:\n\n```\n\u002Fopt\u002Fnvidia\u002Fdeepstream\u002Fdeepstream\u002Fuser_additional_install.sh\n```\n\n##\n\n### NMS Configuration\n\nTo change the `nms-iou-threshold`, `pre-cluster-threshold` and `topk` values, modify the config_infer file\n\n```\n[class-attrs-all]\nnms-iou-threshold=0.45\npre-cluster-threshold=0.25\ntopk=300\n```\n\n**NOTE**: Make sure to set `cluster-mode=2` in the config_infer file.\n\n##\n\n### Notes\n\n1. Sometimes while running a GStreamer pipeline or the sample apps, users can encounter the error: `GLib (gthread-posix.c): Unexpected error from C library during 'pthread_setspecific': Invalid argument.  Aborting.`. The issue is caused by a bug in the `glib 2.0-2.72` version that comes with Ubuntu 22.04 by default. 
The issue is addressed in `glib 2.76`, and installing it is required to fix the issue (https:\u002F\u002Fgithub.com\u002FGNOME\u002Fglib\u002Ftree\u002F2.76.6).\n\n    - Migrate `glib` to the newer version\n\n      ```\n      pip3 install meson\n      pip3 install ninja\n      ```\n\n      **NOTE**: It is recommended to use a Python virtualenv.\n\n      ```\n      git clone https:\u002F\u002Fgithub.com\u002FGNOME\u002Fglib.git\n      cd glib\n      git checkout 2.76.6\n      meson build --prefix=\u002Fusr\n      ninja -C build\u002F\n      cd build\u002F\n      ninja install\n      ```\n\n    - Check and confirm the newly installed glib version:\n\n      ```\n      pkg-config --modversion glib-2.0\n      ```\n\n2. Sometimes with RTSP streams the application gets stuck on reaching EOS. This is caused by an issue in the rtpjitterbuffer component. To fix it, a script is provided to update the gstrtpmanager library:\n\n    ```\n    \u002Fopt\u002Fnvidia\u002Fdeepstream\u002Fdeepstream\u002Fupdate_rtpmanager.sh\n    ```\n\n##\n\n### Extract metadata\n\nYou can get metadata from DeepStream using Python and C\u002FC++. For C\u002FC++, you can edit the `deepstream-app` or `deepstream-test` code. 
For Python, you can install and edit [deepstream_python_apps](https:\u002F\u002Fgithub.com\u002FNVIDIA-AI-IOT\u002Fdeepstream_python_apps).\n\nBasically, you need to manipulate the `NvDsObjectMeta` ([Python](https:\u002F\u002Fdocs.nvidia.com\u002Fmetropolis\u002Fdeepstream\u002Fdev-guide\u002Fpython-api\u002FPYTHON_API\u002FNvDsMeta\u002FNvDsObjectMeta.html) \u002F [C\u002FC++](https:\u002F\u002Fdocs.nvidia.com\u002Fmetropolis\u002Fdeepstream\u002Fdev-guide\u002Fsdk-api\u002Fstruct__NvDsObjectMeta.html)) and `NvDsFrameMeta` ([Python](https:\u002F\u002Fdocs.nvidia.com\u002Fmetropolis\u002Fdeepstream\u002Fdev-guide\u002Fpython-api\u002FPYTHON_API\u002FNvDsMeta\u002FNvDsFrameMeta.html) \u002F [C\u002FC++](https:\u002F\u002Fdocs.nvidia.com\u002Fmetropolis\u002Fdeepstream\u002Fdev-guide\u002Fsdk-api\u002Fstruct__NvDsFrameMeta.html)) to get the label, position, etc. of the bboxes.\n\n##\n\nMy projects: https:\u002F\u002Fwww.youtube.com\u002FMarcosLucianoTV\n","# DeepStream-Yolo\n\nNVIDIA DeepStream SDK 8.0 \u002F 7.1 \u002F 7.0 \u002F 6.4 \u002F 6.3 \u002F 6.2 \u002F 6.1.1 \u002F 6.1 \u002F 6.0.1 \u002F 6.0 \u002F 5.1 的 YOLO 模型配置（YOLO model configuration）\n\n--------------------------------------------------------------------------------------------------\n### 目前由于某些限制无法进行更新，感谢您的理解。\n--------------------------------------------------------------------------------------------------\n### YOLO-Pose: https:\u002F\u002Fgithub.com\u002Fmarcoslucianops\u002FDeepStream-Yolo-Pose\n### YOLO-Seg: https:\u002F\u002Fgithub.com\u002Fmarcoslucianops\u002FDeepStream-Yolo-Seg\n### YOLO-Face: https:\u002F\u002Fgithub.com\u002Fmarcoslucianops\u002FDeepStream-Yolo-Face\n--------------------------------------------------------------------------------------------------\n### 重要提示：请使用新的导出文件导出 ONNX 模型（Open Neural Network Exchange），使用更新后的文件重新生成 TensorRT 引擎（NVIDIA TensorRT 的优化模型），并根据您的模型使用新的 config_infer_primary 
配置文件\n--------------------------------------------------------------------------------------------------\n\n### 本仓库的改进功能\n\n* 支持 INT8 校准（PTQ，Post Training Quantization）\n* 支持非正方形模型（non square models）\n* 模型性能基准测试\n* 使用 cfg 和 weights 转换并结合 GPU 后处理（GPU post-processing）支持 Darknet 模型（YOLOv4 等）\n* 支持 YOLO-Master, YOLO26, RF-DETR, D-FINE, RT-DETR, CO-DETR (MMDetection), YOLO-NAS, PPYOLOE+, PPYOLOE, DAMO-YOLO, Gold-YOLO, RTMDet (MMYOLO), YOLOX, YOLOR, YOLOv13, YOLOv12, YOLO11, YOLOv10, YOLOv9, YOLOv8, YOLOv7, YOLOv6, YOLOv5u 和 YOLOv5 等模型，通过 ONNX 转换结合 GPU 后处理\n* GPU 边界框解析器（GPU bbox parser）\n* 自定义 ONNX 模型解析器\n* 动态批处理大小（Dynamic batch-size）\n* 对 Darknet 和 ONNX 导出模型的 INT8 校准（PTQ）\n\n##\n\n### 快速入门\n\n* [要求](#requirements)\n* [支持的模型](#supported-models)\n* [基准测试](docs\u002Fbenchmarks.md)\n* [dGPU 安装](docs\u002FdGPUInstalation.md)\n* [基础用法](#basic-usage)\n* [Docker 用法](#docker-usage)\n* [NMS 配置](#nms-configuration)\n* [注意事项](#notes)\n* [INT8 校准](docs\u002FINT8Calibration.md)\n* [YOLOv5 用法](docs\u002FYOLOv5.md)\n* [YOLOv5u 用法](docs\u002FYOLOv5u.md)\n* [YOLOv6 用法](docs\u002FYOLOv6.md)\n* [YOLOv7 用法](docs\u002FYOLOv7.md)\n* [YOLOv8 用法](docs\u002FYOLOv8.md)\n* [YOLOv9 用法](docs\u002FYOLOv9.md)\n* [YOLOv10 用法](docs\u002FYOLOv10.md)\n* [YOLO11 用法](docs\u002FYOLO11.md)\n* [YOLOv12 用法](docs\u002FYOLOv12.md)\n* [YOLOv13 用法](docs\u002FYOLOv13.md)\n* [YOLOR 用法](docs\u002FYOLOR.md)\n* [YOLOX 用法](docs\u002FYOLOX.md)\n* [RTMDet (MMYOLO) 用法](docs\u002FRTMDet.md)\n* [Gold-YOLO 用法](docs\u002FGoldYOLO.md)\n* [DAMO-YOLO 用法](docs\u002FDAMOYOLO.md)\n* [PP-YOLOE \u002F PP-YOLOE+ 用法](docs\u002FPPYOLOE.md)\n* [YOLO-NAS 用法](docs\u002FYOLONAS.md)\n* [CO-DETR (MMDetection) 用法](docs\u002FCODETR.md)\n* [RT-DETR PyTorch 用法](docs\u002FRTDETR_PyTorch.md)\n* [RT-DETR Paddle 用法](docs\u002FRTDETR_Paddle.md)\n* [RT-DETR Ultralytics 用法](docs\u002FRTDETR_Ultralytics.md)\n* [D-FINE 用法](docs\u002FDFINE.md)\n* [RF-DETR 用法](docs\u002FRFDETR.md)\n* [YOLO26 用法](docs\u002FYOLO26.md)\n* [YOLO-Master 
用法](docs\u002FYOLOMaster.md)\n* [使用自定义模型](docs\u002FcustomModels.md)\n* [多个 YOLO GIEs](docs\u002FmultipleGIEs.md)\n\n##\n\n### 要求\n\n#### x86 平台上的 DeepStream 8.0\n\n* [Ubuntu 24.04](https:\u002F\u002Freleases.ubuntu.com\u002F24.04\u002F)\n* [CUDA 12.8 Update 1](https:\u002F\u002Fdeveloper.nvidia.com\u002Fcuda-12-8-1-download-archive?target_os=Linux&target_arch=x86_64&Distribution=Ubuntu&target_version=22.04&target_type=runfile_local)\n* [TensorRT 10.9 GA (10.9.0.34)](https:\u002F\u002Fdeveloper.nvidia.com\u002Fnvidia-tensorrt-10x-download)\n* [NVIDIA 驱动 570.195.03（数据中心\u002FTesla 系列）\u002F 570.133.20（TITAN, GeForce RTX\u002FGTX 系列和 RTX\u002FQuadro 系列）](https:\u002F\u002Fwww.nvidia.com\u002FDownload\u002Findex.aspx)\n* [NVIDIA DeepStream SDK 8.0](https:\u002F\u002Fcatalog.ngc.nvidia.com\u002Forgs\u002Fnvidia\u002Fresources\u002Fdeepstream\u002Ffiles?version=8.0)\n* [GStreamer 1.24.2](https:\u002F\u002Fgstreamer.freedesktop.org\u002F)\n* [DeepStream-Yolo](https:\u002F\u002Fgithub.com\u002Fmarcoslucianops\u002FDeepStream-Yolo)\n\n#### x86 平台上的 DeepStream 7.1\n\n* [Ubuntu 22.04](https:\u002F\u002Freleases.ubuntu.com\u002F22.04\u002F)\n* [CUDA 12.6 Update 3](https:\u002F\u002Fdeveloper.nvidia.com\u002Fcuda-12-6-3-download-archive?target_os=Linux&target_arch=x86_64&Distribution=Ubuntu&target_version=22.04&target_type=runfile_local)\n* [TensorRT 10.4 GA (10.4.0.26)](https:\u002F\u002Fdeveloper.nvidia.com\u002Fnvidia-tensorrt-10x-download)\n* [NVIDIA 驱动 535.183.06（数据中心\u002FTesla 系列）\u002F 560.35.03（TITAN, GeForce RTX\u002FGTX 系列和 RTX\u002FQuadro 系列）](https:\u002F\u002Fwww.nvidia.com\u002FDownload\u002Findex.aspx)\n* [NVIDIA DeepStream SDK 7.1](https:\u002F\u002Fcatalog.ngc.nvidia.com\u002Forgs\u002Fnvidia\u002Fresources\u002Fdeepstream\u002Ffiles?version=7.1)\n* [GStreamer 1.20.3](https:\u002F\u002Fgstreamer.freedesktop.org\u002F)\n* [DeepStream-Yolo](https:\u002F\u002Fgithub.com\u002Fmarcoslucianops\u002FDeepStream-Yolo)\n\n#### x86 平台上的 DeepStream 7.0\n\n* [Ubuntu 
22.04](https:\u002F\u002Freleases.ubuntu.com\u002F22.04\u002F)\n* [CUDA 12.2 Update 2](https:\u002F\u002Fdeveloper.nvidia.com\u002Fcuda-12-2-2-download-archive?target_os=Linux&target_arch=x86_64&Distribution=Ubuntu&target_version=22.04&target_type=runfile_local)\n* [TensorRT 8.6 GA (8.6.1.6)](https:\u002F\u002Fdeveloper.nvidia.com\u002Fnvidia-tensorrt-8x-download)\n* [NVIDIA 驱动 535（>= 535.161.08）](https:\u002F\u002Fwww.nvidia.com\u002FDownload\u002Findex.aspx)\n* [NVIDIA DeepStream SDK 7.0](https:\u002F\u002Fcatalog.ngc.nvidia.com\u002Forgs\u002Fnvidia\u002Fresources\u002Fdeepstream\u002Ffiles?version=7.0)\n* [GStreamer 1.20.3](https:\u002F\u002Fgstreamer.freedesktop.org\u002F)\n* [DeepStream-Yolo](https:\u002F\u002Fgithub.com\u002Fmarcoslucianops\u002FDeepStream-Yolo)\n\n#### x86 平台上的 DeepStream 6.4\n\n* [Ubuntu 22.04](https:\u002F\u002Freleases.ubuntu.com\u002F22.04\u002F)\n* [CUDA 12.2 Update 2](https:\u002F\u002Fdeveloper.nvidia.com\u002Fcuda-12-2-2-download-archive?target_os=Linux&target_arch=x86_64&Distribution=Ubuntu&target_version=22.04&target_type=runfile_local)\n* [TensorRT 8.6 GA (8.6.1.6)](https:\u002F\u002Fdeveloper.nvidia.com\u002Fnvidia-tensorrt-8x-download)\n* [NVIDIA 驱动 535（>= 535.104.12）](https:\u002F\u002Fwww.nvidia.com\u002FDownload\u002Findex.aspx)\n* [NVIDIA DeepStream SDK 6.4](https:\u002F\u002Fcatalog.ngc.nvidia.com\u002Forgs\u002Fnvidia\u002Fresources\u002Fdeepstream\u002Ffiles?version=6.4)\n* [GStreamer 1.20.3](https:\u002F\u002Fgstreamer.freedesktop.org\u002F)\n* [DeepStream-Yolo](https:\u002F\u002Fgithub.com\u002Fmarcoslucianops\u002FDeepStream-Yolo)\n\n#### x86 平台上的 DeepStream 6.3\n\n* [Ubuntu 20.04](https:\u002F\u002Freleases.ubuntu.com\u002F20.04\u002F)\n* [CUDA 12.1 Update 1](https:\u002F\u002Fdeveloper.nvidia.com\u002Fcuda-12-1-1-download-archive?target_os=Linux&target_arch=x86_64&Distribution=Ubuntu&target_version=20.04&target_type=runfile_local)\n* [TensorRT 8.5 GA Update 2 
(8.5.3.1)](https:\u002F\u002Fdeveloper.nvidia.com\u002Fnvidia-tensorrt-8x-download)\n* [NVIDIA 驱动 525（>= 525.125.06）](https:\u002F\u002Fwww.nvidia.com\u002FDownload\u002Findex.aspx)\n* [NVIDIA DeepStream SDK 6.3](https:\u002F\u002Fcatalog.ngc.nvidia.com\u002Forgs\u002Fnvidia\u002Fresources\u002Fdeepstream\u002Ffiles?version=6.3)\n* [GStreamer 1.16.3](https:\u002F\u002Fgstreamer.freedesktop.org\u002F)\n* [DeepStream-Yolo](https:\u002F\u002Fgithub.com\u002Fmarcoslucianops\u002FDeepStream-Yolo)\n\n#### x86 平台上的 DeepStream 6.2\n\n* [Ubuntu 20.04](https:\u002F\u002Freleases.ubuntu.com\u002F20.04\u002F)\n* [CUDA（统一计算架构） 11.8](https:\u002F\u002Fdeveloper.nvidia.com\u002Fcuda-11-8-0-download-archive?target_os=Linux&target_arch=x86_64&Distribution=Ubuntu&target_version=20.04&target_type=runfile_local)\n* [TensorRT（张量推理引擎） 8.5 GA Update 1 (8.5.2.2)](https:\u002F\u002Fdeveloper.nvidia.com\u002Fnvidia-tensorrt-8x-download)\n* [NVIDIA Driver（驱动程序） 525 (>= 525.85.12)](https:\u002F\u002Fwww.nvidia.com\u002FDownload\u002Findex.aspx)\n* [NVIDIA DeepStream SDK 6.2](https:\u002F\u002Fdeveloper.nvidia.com\u002Fdeepstream-sdk-download-tesla-archived)\n* [GStreamer 1.16.3](https:\u002F\u002Fgstreamer.freedesktop.org\u002F)\n* [DeepStream-Yolo](https:\u002F\u002Fgithub.com\u002Fmarcoslucianops\u002FDeepStream-Yolo)\n\n#### DeepStream 6.1.1 on x86 platform（x86 平台）\n\n* [Ubuntu 20.04](https:\u002F\u002Freleases.ubuntu.com\u002F20.04\u002F)\n* [CUDA 11.7 Update 1](https:\u002F\u002Fdeveloper.nvidia.com\u002Fcuda-11-7-1-download-archive?target_os=Linux&target_arch=x86_64&Distribution=Ubuntu&target_version=20.04&target_type=runfile_local)\n* [TensorRT 8.4 GA (8.4.1.5)](https:\u002F\u002Fdeveloper.nvidia.com\u002Fnvidia-tensorrt-8x-download)\n* [NVIDIA Driver 515.65.01](https:\u002F\u002Fwww.nvidia.com\u002FDownload\u002Findex.aspx)\n* [NVIDIA DeepStream SDK 6.1.1](https:\u002F\u002Fdeveloper.nvidia.com\u002Fdeepstream-sdk-download-tesla-archived)\n* [GStreamer 
1.16.2](https:\u002F\u002Fgstreamer.freedesktop.org\u002F)\n* [DeepStream-Yolo](https:\u002F\u002Fgithub.com\u002Fmarcoslucianops\u002FDeepStream-Yolo)\n\n#### DeepStream 6.1 on x86 platform\n\n* [Ubuntu 20.04](https:\u002F\u002Freleases.ubuntu.com\u002F20.04\u002F)\n* [CUDA 11.6 Update 1](https:\u002F\u002Fdeveloper.nvidia.com\u002Fcuda-11-6-1-download-archive?target_os=Linux&target_arch=x86_64&Distribution=Ubuntu&target_version=20.04&target_type=runfile_local)\n* [TensorRT 8.2 GA Update 4 (8.2.5.1)](https:\u002F\u002Fdeveloper.nvidia.com\u002Fnvidia-tensorrt-8x-download)\n* [NVIDIA Driver 510.47.03](https:\u002F\u002Fwww.nvidia.com\u002FDownload\u002Findex.aspx)\n* [NVIDIA DeepStream SDK 6.1](https:\u002F\u002Fdeveloper.nvidia.com\u002Fdeepstream-sdk-download-tesla-archived)\n* [GStreamer 1.16.2](https:\u002F\u002Fgstreamer.freedesktop.org\u002F)\n* [DeepStream-Yolo](https:\u002F\u002Fgithub.com\u002Fmarcoslucianops\u002FDeepStream-Yolo)\n\n#### DeepStream 6.0.1 \u002F 6.0 on x86 platform\n\n* [Ubuntu 18.04](https:\u002F\u002Freleases.ubuntu.com\u002F18.04.6\u002F)\n* [CUDA 11.4 Update 1](https:\u002F\u002Fdeveloper.nvidia.com\u002Fcuda-11-4-1-download-archive?target_os=Linux&target_arch=x86_64&Distribution=Ubuntu&target_version=18.04&target_type=runfile_local)\n* [TensorRT 8.0 GA (8.0.1)](https:\u002F\u002Fdeveloper.nvidia.com\u002Fnvidia-tensorrt-8x-download)\n* [NVIDIA Driver 470.63.01](https:\u002F\u002Fwww.nvidia.com\u002FDownload\u002Findex.aspx)\n* [NVIDIA DeepStream SDK 6.0.1 \u002F 6.0](https:\u002F\u002Fdeveloper.nvidia.com\u002Fdeepstream-sdk-download-tesla-archived)\n* [GStreamer 1.14.5](https:\u002F\u002Fgstreamer.freedesktop.org\u002F)\n* [DeepStream-Yolo](https:\u002F\u002Fgithub.com\u002Fmarcoslucianops\u002FDeepStream-Yolo)\n\n#### DeepStream 5.1 on x86 platform\n\n* [Ubuntu 18.04](https:\u002F\u002Freleases.ubuntu.com\u002F18.04.6\u002F)\n* [CUDA 
11.1](https:\u002F\u002Fdeveloper.nvidia.com\u002Fcuda-11.1.0-download-archive?target_os=Linux&target_arch=x86_64&target_distro=Ubuntu&target_version=1804&target_type=runfilelocal)\n* [TensorRT 7.2.2](https:\u002F\u002Fdeveloper.nvidia.com\u002Fnvidia-tensorrt-7x-download)\n* [NVIDIA Driver 460.32.03](https:\u002F\u002Fwww.nvidia.com\u002FDownload\u002Findex.aspx)\n* [NVIDIA DeepStream SDK 5.1](https:\u002F\u002Fdeveloper.nvidia.com\u002Fdeepstream-sdk-download-tesla-archived)\n* [GStreamer 1.14.5](https:\u002F\u002Fgstreamer.freedesktop.org\u002F)\n* [DeepStream-Yolo](https:\u002F\u002Fgithub.com\u002Fmarcoslucianops\u002FDeepStream-Yolo)\n\n#### DeepStream 8.0 on Jetson platform（Jetson 平台）\n\n* [JetPack 7.0](https:\u002F\u002Fdeveloper.nvidia.com\u002Fembedded\u002Fjetpack\u002Fdownloads)\n* [NVIDIA DeepStream SDK 8.0](https:\u002F\u002Fcatalog.ngc.nvidia.com\u002Forgs\u002Fnvidia\u002Fresources\u002Fdeepstream\u002Ffiles?version=8.0)\n* [DeepStream-Yolo](https:\u002F\u002Fgithub.com\u002Fmarcoslucianops\u002FDeepStream-Yolo)\n\n#### DeepStream 7.1 on Jetson platform\n\n* [JetPack 6.2.1](https:\u002F\u002Fdeveloper.nvidia.com\u002Fembedded\u002Fjetpack-sdk-621) \u002F [JetPack 6.2](https:\u002F\u002Fdeveloper.nvidia.com\u002Fembedded\u002Fjetpack-sdk-62) \u002F [JetPack 6.1](https:\u002F\u002Fdeveloper.nvidia.com\u002Fembedded\u002Fjetpack-sdk-61)\n* [NVIDIA DeepStream SDK 7.1](https:\u002F\u002Fcatalog.ngc.nvidia.com\u002Forgs\u002Fnvidia\u002Fresources\u002Fdeepstream\u002Ffiles?version=7.1)\n* [DeepStream-Yolo](https:\u002F\u002Fgithub.com\u002Fmarcoslucianops\u002FDeepStream-Yolo)\n\n#### DeepStream 7.0 on Jetson platform\n\n* [JetPack 6.0](https:\u002F\u002Fdeveloper.nvidia.com\u002Fembedded\u002Fjetpack-sdk-60)\n* [NVIDIA DeepStream SDK 7.0](https:\u002F\u002Fcatalog.ngc.nvidia.com\u002Forgs\u002Fnvidia\u002Fresources\u002Fdeepstream\u002Ffiles?version=7.0)\n* [DeepStream-Yolo](https:\u002F\u002Fgithub.com\u002Fmarcoslucianops\u002FDeepStream-Yolo)\n\n#### 
DeepStream 6.4 on Jetson platform\n\n* [JetPack 6.0 DP](https:\u002F\u002Fdeveloper.nvidia.com\u002Fembedded\u002Fjetpack-sdk-60dp)\n* [NVIDIA DeepStream SDK 6.4](https:\u002F\u002Fcatalog.ngc.nvidia.com\u002Forgs\u002Fnvidia\u002Fresources\u002Fdeepstream\u002Ffiles?version=6.4)\n* [DeepStream-Yolo](https:\u002F\u002Fgithub.com\u002Fmarcoslucianops\u002FDeepStream-Yolo)\n\n#### DeepStream 6.3 on Jetson platform\n\n* JetPack [5.1.3](https:\u002F\u002Fdeveloper.nvidia.com\u002Fembedded\u002Fjetpack-sdk-513) \u002F [5.1.2](https:\u002F\u002Fdeveloper.nvidia.com\u002Fembedded\u002Fjetpack-sdk-512)\n* [NVIDIA DeepStream SDK 6.3](https:\u002F\u002Fcatalog.ngc.nvidia.com\u002Forgs\u002Fnvidia\u002Fresources\u002Fdeepstream\u002Ffiles?version=6.3)\n* [DeepStream-Yolo](https:\u002F\u002Fgithub.com\u002Fmarcoslucianops\u002FDeepStream-Yolo)\n\n#### DeepStream 6.2 on Jetson platform\n\n* JetPack [5.1.3](https:\u002F\u002Fdeveloper.nvidia.com\u002Fembedded\u002Fjetpack-sdk-513) \u002F [5.1.2](https:\u002F\u002Fdeveloper.nvidia.com\u002Fembedded\u002Fjetpack-sdk-512) \u002F [5.1.1](https:\u002F\u002Fdeveloper.nvidia.com\u002Fembedded\u002Fjetpack-sdk-511) \u002F [5.1](https:\u002F\u002Fdeveloper.nvidia.com\u002Fembedded\u002Fjetpack-sdk-51)\n* [NVIDIA DeepStream SDK 6.2](https:\u002F\u002Fdeveloper.nvidia.com\u002Fembedded\u002Fdeepstream-on-jetson-downloads-archived)\n* [DeepStream-Yolo](https:\u002F\u002Fgithub.com\u002Fmarcoslucianops\u002FDeepStream-Yolo)\n\n#### DeepStream 6.1.1 on Jetson platform\n\n* [JetPack 5.0.2](https:\u002F\u002Fdeveloper.nvidia.com\u002Fembedded\u002Fjetpack-sdk-502)\n* [NVIDIA DeepStream SDK 6.1.1](https:\u002F\u002Fdeveloper.nvidia.com\u002Fembedded\u002Fdeepstream-on-jetson-downloads-archived)\n* [DeepStream-Yolo](https:\u002F\u002Fgithub.com\u002Fmarcoslucianops\u002FDeepStream-Yolo)\n\n#### DeepStream 6.1 on Jetson platform\n\n* [JetPack 5.0.1 DP](https:\u002F\u002Fdeveloper.nvidia.com\u002Fembedded\u002Fjetpack-sdk-501dp)\n* [NVIDIA 
DeepStream SDK 6.1](https:\u002F\u002Fdeveloper.nvidia.com\u002Fembedded\u002Fdeepstream-on-jetson-downloads-archived)\n* [DeepStream-Yolo](https:\u002F\u002Fgithub.com\u002Fmarcoslucianops\u002FDeepStream-Yolo)\n\n#### DeepStream 6.0.1 \u002F 6.0 on Jetson platform\n\n* [JetPack 4.6.4](https:\u002F\u002Fdeveloper.nvidia.com\u002Fjetpack-sdk-464)\n* [NVIDIA DeepStream SDK 6.0.1 \u002F 6.0](https:\u002F\u002Fdeveloper.nvidia.com\u002Fembedded\u002Fdeepstream-on-jetson-downloads-archived)\n* [DeepStream-Yolo](https:\u002F\u002Fgithub.com\u002Fmarcoslucianops\u002FDeepStream-Yolo)\n\n#### DeepStream 5.1 on Jetson platform\n\n* [JetPack 4.5.1](https:\u002F\u002Fdeveloper.nvidia.com\u002Fembedded\u002Fjetpack-sdk-451-archive)\n* [NVIDIA DeepStream SDK 5.1](https:\u002F\u002Fdeveloper.nvidia.com\u002Fembedded\u002Fdeepstream-on-jetson-downloads-archived)\n* [DeepStream-Yolo](https:\u002F\u002Fgithub.com\u002Fmarcoslucianops\u002FDeepStream-Yolo)\n\n### 支持的模型\n\n* [Darknet](https:\u002F\u002Fgithub.com\u002FAlexeyAB\u002Fdarknet)（一种流行的开源深度学习框架）\n* [MobileNet-YOLO](https:\u002F\u002Fgithub.com\u002Fdog-qiuqiu\u002FMobileNet-Yolo)\n* [YOLO-Fastest](https:\u002F\u002Fgithub.com\u002Fdog-qiuqiu\u002FYolo-Fastest)\n* [YOLOv5](https:\u002F\u002Fgithub.com\u002Fultralytics\u002Fyolov5)\n* [YOLOv5u](https:\u002F\u002Fgithub.com\u002Fultralytics\u002Fultralytics)\n* [YOLOv6](https:\u002F\u002Fgithub.com\u002Fmeituan\u002FYOLOv6)\n* [YOLOv7](https:\u002F\u002Fgithub.com\u002FWongKinYiu\u002Fyolov7)\n* [YOLOv8](https:\u002F\u002Fgithub.com\u002Fultralytics\u002Fultralytics)\n* [YOLOv9](https:\u002F\u002Fgithub.com\u002FWongKinYiu\u002Fyolov9)\n* [YOLOv10](https:\u002F\u002Fgithub.com\u002FTHU-MIG\u002Fyolov10)\n* [YOLO11](https:\u002F\u002Fgithub.com\u002Fultralytics\u002Fultralytics)\n* [YOLOv12](https:\u002F\u002Fgithub.com\u002Fsunsmarterjie\u002Fyolov12)\n* [YOLOv13](https:\u002F\u002Fgithub.com\u002FiMoonLab\u002Fyolov13)\n* 
[YOLOR](https:\u002F\u002Fgithub.com\u002FWongKinYiu\u002Fyolor)\n* [YOLOX](https:\u002F\u002Fgithub.com\u002FMegvii-BaseDetection\u002FYOLOX)\n* [RTMDet (MMYOLO)](https:\u002F\u002Fgithub.com\u002Fopen-mmlab\u002Fmmyolo\u002Ftree\u002Fmain\u002Fconfigs\u002Frtmdet)\n* [Gold-YOLO](https:\u002F\u002Fgithub.com\u002Fhuawei-noah\u002FEfficient-Computing\u002Ftree\u002Fmaster\u002FDetection\u002FGold-YOLO)\n* [DAMO-YOLO](https:\u002F\u002Fgithub.com\u002Ftinyvision\u002FDAMO-YOLO)\n* [PP-YOLOE \u002F PP-YOLOE+](https:\u002F\u002Fgithub.com\u002FPaddlePaddle\u002FPaddleDetection\u002Ftree\u002Frelease\u002F2.8\u002Fconfigs\u002Fppyoloe)\n* [YOLO-NAS](https:\u002F\u002Fgithub.com\u002FDeci-AI\u002Fsuper-gradients\u002Fblob\u002Fmaster\u002FYOLONAS.md)\n* [CO-DETR (MMDetection)](https:\u002F\u002Fgithub.com\u002Fopen-mmlab\u002Fmmdetection\u002Ftree\u002Fmain\u002Fprojects\u002FCO-DETR)\n* [RT-DETR](https:\u002F\u002Fgithub.com\u002Flyuwenyu\u002FRT-DETR)\n* [D-FINE](https:\u002F\u002Fgithub.com\u002FPeterande\u002FD-FINE)\n* [RF-DETR](https:\u002F\u002Fgithub.com\u002Froboflow\u002Frf-detr)\n* [YOLO26](https:\u002F\u002Fgithub.com\u002Fultralytics\u002Fultralytics)\n* [YOLO-Master](https:\u002F\u002Fgithub.com\u002FTencent\u002FYOLO-Master)\n\n##\n\n### 基本用法\n\n#### 1. 下载仓库\n\n```\ngit clone https:\u002F\u002Fgithub.com\u002Fmarcoslucianops\u002FDeepStream-Yolo.git\ncd DeepStream-Yolo\n```\n\n#### 2. 从 [Darknet](https:\u002F\u002Fgithub.com\u002FAlexeyAB\u002Fdarknet) 仓库下载 `cfg` 和 `weights` 文件到 DeepStream-Yolo 文件夹\n\n#### 3. 编译库\n\n3.1. 
根据你的 DeepStream 版本设置 `CUDA_VER`（CUDA版本）\n\n```\nexport CUDA_VER=XY.Z\n```\n\n* x86 平台\n\n  ```\n  DeepStream 8.0 = 12.8\n  DeepStream 7.1 = 12.6\n  DeepStream 7.0 \u002F 6.4 = 12.2\n  DeepStream 6.3 = 12.1\n  DeepStream 6.2 = 11.8\n  DeepStream 6.1.1 = 11.7\n  DeepStream 6.1 = 11.6\n  DeepStream 6.0.1 \u002F 6.0 = 11.4\n  DeepStream 5.1 = 11.1\n  ```\n\n* Jetson 平台\n\n  ```\n  DeepStream 8.0 = 13.0\n  DeepStream 7.1 = 12.6\n  DeepStream 7.0 \u002F 6.4 = 12.2\n  DeepStream 6.3 \u002F 6.2 \u002F 6.1.1 \u002F 6.1 = 11.4\n  DeepStream 6.0.1 \u002F 6.0 \u002F 5.1 = 10.2\n  ```\n\n3.2. 构建库\n\n```\nmake -C nvdsinfer_custom_impl_Yolo clean && make -C nvdsinfer_custom_impl_Yolo\n```\n\n#### 4. 根据你的模型编辑 `config_infer_primary.txt` 文件（YOLOv4 示例）\n\n```\n[property]\n...\ncustom-network-config=yolov4.cfg\nmodel-file=yolov4.weights\n...\n```\n\n**注意**：对于 **Darknet** 模型，默认启用了动态批处理大小。若要使用静态批处理大小，请取消注释以下行：\n\n```\n...\nforce-implicit-batch-dim=1\n...\n```\n\n#### 5. 运行\n\n```\ndeepstream-app -c deepstream_app_config.txt\n```\n\n**注意**：TensorRT 引擎文件可能需要非常长的时间生成（有时超过10分钟）。\n\n**注意**：如果要使用 YOLOv2 或 YOLOv2-Tiny 模型，请在运行前修改 `deepstream_app_config.txt` 文件：\n\n```\n...\n[primary-gie]\n...\nconfig-file=config_infer_primary_yoloV2.txt\n...\n```\n\n##\n\n### Docker 使用\n\n* x86 平台\n\n  ```\n  nvcr.io\u002Fnvidia\u002Fdeepstream:8.0-gc-triton-devel\n  nvcr.io\u002Fnvidia\u002Fdeepstream:8.0-triton-multiarch\n  ```\n\n* Jetson 平台\n\n  ```\n  nvcr.io\u002Fnvidia\u002Fdeepstream:8.0-triton-multiarch\n  ```\n\n**注意**：要编译 `nvdsinfer_custom_impl_Yolo`，需要在容器内安装 g++：\n\n```\napt-get install build-essential\n```\n\n**注意**：在 DeepStream 8.0 中，docker 容器未打包某些多媒体操作所需的库（如音频解析、CPU 解码和编码）。这可能会影响处理包含音频轨道的视频流\u002F文件（如 mp4）。请在 docker 镜像内运行以下脚本安装必要的附加包：\n\n```\n\u002Fopt\u002Fnvidia\u002Fdeepstream\u002Fdeepstream\u002Fuser_additional_install.sh\n```\n\n##\n\n### NMS 配置\n\n要修改 `nms-iou-threshold`、`pre-cluster-threshold` 和 `topk` 值，请编辑 config_infer 
文件：\n\n```\n[class-attrs-all]\nnms-iou-threshold=0.45\npre-cluster-threshold=0.25\ntopk=300\n```\n\n**注意**：请确保在 config_infer 文件中设置 `cluster-mode=2`。\n\n##\n\n### 注意事项\n\n1. 在运行 gstreamer 管道或示例应用时，用户可能会遇到错误：`GLib (gthread-posix.c): Unexpected error from C library during 'pthread_setspecific': Invalid argument.  Aborting.`。该问题由 Ubuntu 22.04 默认的 glib 2.0-2.72 版本中的 bug 引起。在 glib 2.76 中已修复该问题（https:\u002F\u002Fgithub.com\u002FGNOME\u002Fglib\u002Ftree\u002F2.76.6）。\n\n    - 升级 glib 到新版本：\n\n      ```\n      pip3 install meson\n      pip3 install ninja\n      ```\n\n      **注意**：建议使用 Python 虚拟环境。\n\n      ```\n      git clone https:\u002F\u002Fgithub.com\u002FGNOME\u002Fglib.git\n      cd glib\n      git checkout 2.76.6\n      meson build --prefix=\u002Fusr\n      ninja -C build\u002F\n      cd build\u002F\n      ninja install\n      ```\n\n    - 检查并确认新安装的 glib 版本：\n\n      ```\n      pkg-config --modversion glib-2.0\n      ```\n\n2. 使用 RTSP 流时，应用程序可能在到达 EOS 时卡住。这是由于 rtpjitterbuffer 组件的问题。要解决此问题，提供了一个脚本用于更新 gstrtpmanager 库：\n\n    ```\n    \u002Fopt\u002Fnvidia\u002Fdeepstream\u002Fdeepstream\u002Fupdate_rtpmanager.sh\n    ```\n\n##\n\n### 提取元数据\n\n你可以通过 Python 和 C\u002FC++ 从 DeepStream 提取元数据。对于 C\u002FC++，可以编辑 `deepstream-app` 或 `deepstream-test` 代码。对于 Python，可以安装并编辑 [deepstream_python_apps](https:\u002F\u002Fgithub.com\u002FNVIDIA-AI-IOT\u002Fdeepstream_python_apps)。\n\n基本上，你需要操作 `NvDsObjectMeta` ([Python](https:\u002F\u002Fdocs.nvidia.com\u002Fmetropolis\u002Fdeepstream\u002Fdev-guide\u002Fpython-api\u002FPYTHON_API\u002FNvDsMeta\u002FNvDsObjectMeta.html) \u002F [C\u002FC++](https:\u002F\u002Fdocs.nvidia.com\u002Fmetropolis\u002Fdeepstream\u002Fdev-guide\u002Fsdk-api\u002Fstruct__NvDsObjectMeta.html)) 和 `NvDsFrameMeta` ([Python](https:\u002F\u002Fdocs.nvidia.com\u002Fmetropolis\u002Fdeepstream\u002Fdev-guide\u002Fpython-api\u002FPYTHON_API\u002FNvDsMeta\u002FNvDsFrameMeta.html) \u002F 
[C\u002FC++](https:\u002F\u002Fdocs.nvidia.com\u002Fmetropolis\u002Fdeepstream\u002Fdev-guide\u002Fsdk-api\u002Fstruct__NvDsFrameMeta.html)) 来获取边界框的标签、位置等信息。\n\n##\n\n我的项目：https:\u002F\u002Fwww.youtube.com\u002FMarcosLucianoTV","# DeepStream-Yolo 快速上手指南\n\n---\n\n## 环境准备\n\n### 系统要求（以 DeepStream 8.0 为例）\n- **操作系统**：Ubuntu 24.04（推荐使用阿里云\u002F清华源加速安装）\n- **CUDA**：12.8 Update 1\n- **TensorRT**：10.9 GA (10.9.0.34)\n- **NVIDIA 驱动**：570.195.03（数据中心\u002FTesla）或 570.133.20（消费级显卡）\n- **DeepStream SDK**：8.0（从 NGC 下载）\n- **GStreamer**：1.24.2\n\n> **国内镜像加速建议**：\n> - Ubuntu 软件源替换为 [清华源](https:\u002F\u002Fmirrors.tuna.tsinghua.edu.cn\u002Fhelp\u002Fubuntu\u002F) 或 [阿里云源](https:\u002F\u002Fmirrors.aliyun.com\u002Fubuntu\u002F)\n> - CUDA\u002FTensorRT 安装包可通过 [NVIDIA 官网](https:\u002F\u002Fdeveloper.nvidia.com\u002F) 使用国内网络加速下载\n\n---\n\n## 安装步骤\n\n1. **安装依赖**\n   ```bash\n   sudo apt update && sudo apt install -y git build-essential libgstreamer1.0-dev gstreamer1.0-plugins-base gstreamer1.0-plugins-good\n   ```\n\n2. **克隆项目**\n   ```bash\n   git clone https:\u002F\u002Fgithub.com\u002Fmarcoslucianops\u002FDeepStream-Yolo.git\n   cd DeepStream-Yolo\n   ```\n\n3. **设置 CUDA_VER（DeepStream 8.0 的 x86 平台对应 12.8）**\n   ```bash\n   export CUDA_VER=12.8\n   ```\n\n4. **编译库**\n   ```bash\n   make -C nvdsinfer_custom_impl_Yolo clean && make -C nvdsinfer_custom_impl_Yolo\n   ```\n\n---\n\n## 基本使用\n\n### 示例：YOLOv8 模型推理\n\n1. **导出 ONNX 模型**\n   ```bash\n   # 以 YOLOv8n 为例（需使用本仓库的导出脚本，而非 ultralytics 自带导出，详见 docs\u002FYOLOv8.md）\n   pip3 install ultralytics onnx onnxslim onnxruntime\n   python3 utils\u002Fexport_yoloV8.py -w yolov8n.pt --dynamic\n   ```\n\n2. **生成 TensorRT 引擎**\n   无需手动转换：在 config_infer 文件中通过 `onnx-file` 指定导出的 ONNX 路径后，DeepStream 首次运行时会自动生成并缓存 TensorRT 引擎（生成可能需要较长时间，有时超过 10 分钟）。\n\n3. 
**运行推理**\n   ```bash\n   deepstream-app -c deepstream_app_config.txt\n   ```\n\n> **配置文件说明**：\n> - 在 `deepstream_app_config.txt` 的 `[primary-gie]` 中通过 `config-file` 指定 `config_infer_primary_yolov8.txt`\n> - 修改 `config_infer_primary_yolov8.txt` 中的 `onnx-file` 和 `model-engine-file` 路径\n> - 支持 INT8 量化（需提前校准，详见 `docs\u002FINT8Calibration.md`）\n\n---\n\n**注意**：不同 DeepStream 版本需对应 README 中的依赖表调整环境配置，完整文档请参考项目 `docs\u002F` 目录。","某城市交通管理部门需要实时监控主干道十字路口的车辆和行人流量，通过AI识别违规行为（如闯红灯、逆行）并生成预警。系统需同时处理8路1080p视频流，对检测精度和实时性要求较高。\n\n### 没有 DeepStream-Yolo 时\n- **模型部署复杂**：需手动将YOLOv7模型转换为ONNX格式，再通过TensorRT优化，过程中常因输入尺寸不匹配导致推理失败\n- **性能瓶颈明显**：单路视频流处理延迟达300ms，8路并发时GPU利用率仅65%，无法满足实时预警需求\n- **多模型管理困难**：同时部署车辆检测和行人识别两个模型时，需重复配置DeepStream管道，内存占用增加40%\n- **精度与速度难平衡**：关闭INT8量化可提升检测精度2.3%，但帧率从25fps降至12fps，无法满足实际需求\n\n### 使用 DeepStream-Yolo 后\n- **一键式模型集成**：通过预置的YOLOv7配置文件，30分钟内完成从PyTorch模型到TensorRT引擎的全流程转换，支持非方形输入\n- **吞吐量提升3倍**：动态批处理技术使8路视频流并发处理延迟降至85ms，GPU利用率提升至92%\n- **统一多任务处理**：通过配置文件切换不同检测任务，单个DeepStream管道即可同时运行车辆\u002F行人检测，内存占用降低35%\n- **INT8量化无损优化**：采用PTQ校准后，在保持98.7%原始精度的同时，检测速度提升2.1倍（25fps→52fps）\n\n核心价值：DeepStream-Yolo通过深度整合NVIDIA硬件生态，为复杂视频分析场景提供端到端的模型优化方案，使开发者能以最低成本实现高精度、低延迟的多模型并发推理。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmarcoslucianops_DeepStream-Yolo_a675a896.png","marcoslucianops","Marcos Luciano","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fmarcoslucianops_5573fd7e.png","Sr. 
Computer Vision Engineer | Control and Automation, Mechatronics Engineer",null,"Brazil","https:\u002F\u002Fwww.linkedin.com\u002Fin\u002Fmarcoslucianops","https:\u002F\u002Fgithub.com\u002Fmarcoslucianops",[85,89,93,97],{"name":86,"color":87,"percentage":88},"Python","#3572A5",48.4,{"name":90,"color":91,"percentage":92},"C++","#f34b7d",43.9,{"name":94,"color":95,"percentage":96},"Cuda","#3A4E3A",6.4,{"name":98,"color":99,"percentage":100},"Makefile","#427819",1.2,1988,455,"2026-04-05T07:36:08","MIT","Linux","需要 NVIDIA GPU，CUDA 版本根据 DeepStream 版本不同（11.4-12.8），显存需求未明确说明","未说明",{"notes":109,"python":107,"dependencies":110},"需根据模型导出 ONNX 文件并生成 TensorRT 引擎，使用对应的 config_infer_primary 配置文件。不同 DeepStream 版本需匹配特定 SDK 和依赖库版本（如 Ubuntu 18.04\u002F20.04\u002F22.04\u002F24.04 及 JetPack 系统）。",[111,112,113,114],"NVIDIA DeepStream SDK","TensorRT","CUDA Toolkit","GStreamer",[14,13],[117,118,119,120,121,122,123,124,125,126,127,128,129,130,131],"nvidia-deepstream-sdk","deepstream","object-detection","yolo","darknet","tensorrt","pytorch","nvidia","ultralytics","ppyoloe","paddle","mmyolo","rtdetr","rtmdet","cuda","2026-03-27T02:49:30.150509","2026-04-06T08:40:09.094476",[135,140,145,150,155,160],{"id":136,"question_zh":137,"answer_zh":138,"source_url":139},1896,"如何解决 'Assertion `0' failed.' 错误？","该错误通常由模型文件路径错误或配置文件不匹配导致。请检查以下步骤：1. 确保 `model_b1_gpu0_fp32.engine` 文件路径正确且可访问；2. 使用与模型版本匹配的配置文件（如 YOLOv5s6 的 `yolov5s6.yaml` 而非 `yolov5s.yaml`）；3. 重新生成 `.engine` 文件。参考用户 H19012 的解决方案：替换为正确的 YAML 文件后问题解决。","https:\u002F\u002Fgithub.com\u002Fmarcoslucianops\u002FDeepStream-Yolo\u002Fissues\u002F136",{"id":141,"question_zh":142,"answer_zh":143,"source_url":144},1897,"视频推理时没有检测框输出怎么办？","请确保使用预先导出的 ONNX 模型而非动态导出。具体步骤：1. 通过命令行手动运行 `python3 export_yoloV5.py` 导出 ONNX 模型；2. 在 DeepStream 配置中指定导出的 ONNX 路径；3. 
避免在应用启动时动态导出模型（可能导致资源冲突）。参考用户 HeeebsInc 的分析：动态导出会引发段错误，需分离导出与推理流程。","https:\u002F\u002Fgithub.com\u002Fmarcoslucianops\u002FDeepStream-Yolo\u002Fissues\u002F441",{"id":146,"question_zh":147,"answer_zh":148,"source_url":149},1898,"如何处理 'NVDSINFER CONFIG FAILED' 配置错误？","检查配置文件路径和模型兼容性：1. 确认 `config_infer_primary_yolov8.txt` 中的路径无拼写错误；2. 使用 PyTorch 1.14 及以上版本；3. 确保模型已正确导出为 ONNX 格式。维护者 MarcosLucianops 提示：YOLOv7\u002Fv8 需通过本仓库的 `export.py` 导出模型，直接使用原始模型文件可能导致输出结构不匹配。","https:\u002F\u002Fgithub.com\u002Fmarcoslucianops\u002FDeepStream-Yolo\u002Fissues\u002F395",{"id":151,"question_zh":152,"answer_zh":153,"source_url":154},1899,"Tesla T4 上生成 FP16 引擎失败如何解决？","该问题可能由 GPU 架构差异导致。解决方案：1. 检查 TensorRT 版本与 Tesla T4 的兼容性（推荐使用 TensorRT 8.4.2+）；2. 尝试降低精度模式（FP32 优先）；3. 重新编译 `libnvdsinfer_custom_impl_Yolo.so` 动态库。用户总结的关键点：RTX 系列与 T4 的驱动\u002F环境配置存在差异，需统一 CUDA\u002FcuDNN 版本。","https:\u002F\u002Fgithub.com\u002Fmarcoslucianops\u002FDeepStream-Yolo\u002Fissues\u002F353",{"id":156,"question_zh":157,"answer_zh":158,"source_url":159},1900,"自定义模型转换为 .wts 格式失败怎么办？","转换失败通常由模型结构或权重问题导致。请执行：1. 确保训练时使用与官方一致的 `yolov5` 代码版本；2. 检查 `nc` 类别数是否在所有配置文件中同步修改；3. 使用 `convert.py` 脚本时添加 `--no-strict` 参数跳过严格验证。参考 Issue 57 中维护者建议：转换失败可能因 ONNX 输出结构变化，需对比官方预训练模型的导出流程。","https:\u002F\u002Fgithub.com\u002Fmarcoslucianops\u002FDeepStream-Yolo\u002Fissues\u002F57",{"id":161,"question_zh":162,"answer_zh":163,"source_url":139},1901,"如何避免 DeepStream 启动时报 'deserialize engine failed' 错误？","该错误表明引擎文件损坏或不兼容。解决方法：1. 删除旧 `.engine` 文件并重新生成；2. 检查生成引擎时的 GPU 显存占用（关闭其他进程）；3. 确保 DeepStream 版本与 TensorRT 版本匹配。用户日志显示问题源于文件无法打开，需验证路径权限及文件完整性。",[]]
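上文"NMS 配置"一节设置了 `nms-iou-threshold=0.45`、`pre-cluster-threshold=0.25` 和 `topk=300`。下面用一个纯 Python 示意这三个参数在 IoU 抑制流程中各自的作用（这不是本仓库 GPU 解析器的实际代码，框格式 `(x1, y1, x2, y2)` 与函数名均为示意）：

```python
def iou(a, b):
    # 计算两个框的交并比；框格式 (x1, y1, x2, y2)，仅为示意
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def nms(dets, iou_thr=0.45, score_thr=0.25, topk=300):
    # dets: [(score, box), ...]
    # 流程：先按 pre-cluster-threshold 过滤低分框，再按分数降序，
    # 与已保留框 IoU 超过 nms-iou-threshold 的被抑制，最后按 topk 截断
    dets = sorted((d for d in dets if d[0] >= score_thr), key=lambda d: -d[0])
    keep = []
    for score, box in dets:
        if all(iou(box, kb) < iou_thr for _, kb in keep):
            keep.append((score, box))
        if len(keep) >= topk:
            break
    return keep
```

调大 `nms-iou-threshold` 会保留更多重叠框，调高 `pre-cluster-threshold` 会在聚类前过滤更多低置信度框。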
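上文"基本用法"第 4 步要求按模型编辑 config_infer 文件。该文件是 INI 风格的键值配置，可以用 Python 标准库 `configparser` 以编程方式修改，下面是一个示意（片段内容取自上文 README 的 YOLOv4 示例与 NMS 配置节，具体取值仅为演示）：

```python
import configparser

# 构造一个与 README 示例同结构的 config_infer 片段（内容为示意）
text = """\
[property]
custom-network-config=yolov4.cfg
model-file=yolov4.weights
cluster-mode=2

[class-attrs-all]
nms-iou-threshold=0.45
pre-cluster-threshold=0.25
topk=300
"""

cfg = configparser.ConfigParser()
cfg.read_string(text)

# 按需调整 NMS 阈值；README 要求 cluster-mode 必须为 2
cfg["class-attrs-all"]["nms-iou-threshold"] = "0.5"
```

实际使用时应读写磁盘上的 config_infer 文件（`cfg.read(path)` \u002F `cfg.write(f)`），此处为保持自包含而内联了文本。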
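上文 3.1 节给出了 x86 平台上 DeepStream 版本与 `CUDA_VER` 的对照表。编译前选错 `CUDA_VER` 是常见错误，把该表写成一个小查表函数便于脚本化（数据直接摘自上文表格，函数本身为示意）：

```python
# x86 平台上 DeepStream 版本到 CUDA_VER 的对照（摘自 3.1 节的 x86 表格）
X86_CUDA_VER = {
    "8.0": "12.8",
    "7.1": "12.6",
    "7.0": "12.2", "6.4": "12.2",
    "6.3": "12.1",
    "6.2": "11.8",
    "6.1.1": "11.7",
    "6.1": "11.6",
    "6.0.1": "11.4", "6.0": "11.4",
    "5.1": "11.1",
}

def cuda_ver(deepstream_version: str) -> str:
    # 返回编译 nvdsinfer_custom_impl_Yolo 前应当 export 的 CUDA_VER 值
    return X86_CUDA_VER[deepstream_version]
```

注意 Jetson 平台的对照关系不同（见 3.1 节的 Jetson 表格），不能复用此表。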