[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-RizwanMunawar--yolov7-object-tracking":3,"tool-RizwanMunawar--yolov7-object-tracking":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 
代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",145895,2,"2026-04-08T11:32:59",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108111,"2026-04-08T11:23:26",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 
助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":10,"last_commit_at":59,"category_tags":60,"status":17},4487,"LLMs-from-scratch","rasbt\u002FLLMs-from-scratch","LLMs-from-scratch 是一个基于 PyTorch 的开源教育项目，旨在引导用户从零开始一步步构建一个类似 ChatGPT 的大型语言模型（LLM）。它不仅是同名技术著作的官方代码库，更提供了一套完整的实践方案，涵盖模型开发、预训练及微调的全过程。\n\n该项目主要解决了大模型领域“黑盒化”的学习痛点。许多开发者虽能调用现成模型，却难以深入理解其内部架构与训练机制。通过亲手编写每一行核心代码，用户能够透彻掌握 Transformer 架构、注意力机制等关键原理，从而真正理解大模型是如何“思考”的。此外，项目还包含了加载大型预训练权重进行微调的代码，帮助用户将理论知识延伸至实际应用。\n\nLLMs-from-scratch 特别适合希望深入底层原理的 AI 开发者、研究人员以及计算机专业的学生。对于不满足于仅使用 API，而是渴望探究模型构建细节的技术人员而言，这是极佳的学习资源。其独特的技术亮点在于“循序渐进”的教学设计：将复杂的系统工程拆解为清晰的步骤，配合详细的图表与示例，让构建一个虽小但功能完备的大模型变得触手可及。无论你是想夯实理论基础，还是为未来研发更大规模的模型做准备",90106,"2026-04-06T11:19:32",[35,15,13,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":77,"owner_email":78,"owner_twitter":79,"owner_website":80,"owner_url":81,"languages":82,"stars":90,"forks":91,"last_commit_at":92,"license":93,"difficulty_score":32,"env_os":94,"env_gpu":95,"env_ram":96,"env_deps":97,"category_tags":106,"github_topics":107,"view_count":32,"oss_zip_url":115,"oss_zip_packed_at":115,"status":17,"created_at":116,"updated_at":117,"faqs":118,"releases":149},5513,"RizwanMunawar\u002Fyolov7-object-tracking","yolov7-object-tracking","YOLOv7 Object Tracking Using PyTorch, OpenCV and Sort Tracking","yolov7-object-tracking 是一个基于 PyTorch、OpenCV 和 SORT 算法的开源项目，旨在实现高效实时的视频目标检测与跟踪。它核心解决了在动态视频流中“不仅识别物体是什么，还能持续锁定并追踪其运动轨迹”的技术难题，有效避免了目标在移动或短暂遮挡时身份丢失的问题。\n\n该项目特别适合计算机视觉开发者、AI 研究人员以及需要构建智能监控、行为分析或自动驾驶原型的技术人员使用。对于希望快速验证想法的学生和工程师，它也提供了极大的便利，支持直接在 Google Colab、Kaggle 等云端环境中一键运行，无需繁琐的本地配置。\n\n其技术亮点在于深度融合了 YOLOv7 强大的实时检测能力与 SORT 
跟踪算法的稳定性，确保在复杂场景下依然保持流畅的运行效率。此外，项目具有出色的扩展性，目前已新增对 Ultralytics YOLOv8 的支持，并计划兼容 YOLOv9 至 YOLOv13 等最新架构，让用户能轻松利用前沿模型提升任务表现。无论是进行学术研究还是开发实际应用，yolov7-object-tracking 都提供了一个简洁、可靠且易于上手的代码基准。","## YOLOv7 Object Tracking 🚀\n\n[![CI](https:\u002F\u002Fgithub.com\u002FRizwanMunawar\u002Fyolov7-object-tracking\u002Factions\u002Fworkflows\u002Fci.yml\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002FRizwanMunawar\u002Fyolov7-object-tracking\u002Factions\u002Fworkflows\u002Fci.yml) ![Visitors](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FRizwanMunawar_yolov7-object-tracking_readme_1aaabdb88056.png)\n[![Open in Colab](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002FRizwanMunawar\u002Fyolov7-object-tracking\u002Fblob\u002Fmain\u002Fnotebooks\u002Fyolov7-object-tracking.ipynb) [![Open in Kaggle](https:\u002F\u002Fkaggle.com\u002Fstatic\u002Fimages\u002Fopen-in-kaggle.svg)](https:\u002F\u002Fkaggle.com\u002Fkernels\u002Fwelcome?src=https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002FRizwanMunawar\u002Fyolov7-object-tracking\u002Fblob\u002Fmain\u002Fnotebooks\u002Fyolov7-object-tracking.ipynb) [![Open in SageMaker Studio Lab](https:\u002F\u002Fstudiolab.sagemaker.aws\u002Fstudiolab.svg)](https:\u002F\u002Fstudiolab.sagemaker.aws\u002Fimport\u002Fgithub\u002FRizwanMunawar\u002Fyolov7-object-tracking\u002Fblob\u002Fmain\u002Fnotebooks\u002Fyolov7-object-tracking.ipynb) \u003Ca href=\"https:\u002F\u002Fdeepwiki.com\u002FRizwanMunawar\u002Fyolov7-object-tracking\">\u003Cimg 
src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FRepo-DeepWiki-blue.svg?logo=data:image\u002Fpng;base64,iVBORw0KGgoAAAANSUhEUgAAACwAAAAyCAYAAAAnWDnqAAAAAXNSR0IArs4c6QAAA05JREFUaEPtmUtyEzEQhtWTQyQLHNak2AB7ZnyXZMEjXMGeK\u002FAIi+QuHrMnbChYY7MIh8g01fJoopFb0uhhEqqcbWTp06\u002Fuv1saEDv4O3n3dV60RfP947Mm9\u002FSQc0ICFQgzfc4CYZoTPAswgSJCCUJUnAAoRHOAUOcATwbmVLWdGoH\u002F\u002FPB8mnKqScAhsD0kYP3j\u002FYt5LPQe2KvcXmGvRHcDnpxfL2zOYJ1mFwrryWTz0advv1Ut4CJgf5uhDuDj5eUcAUoahrdY\u002F56ebRWeraTjMt\u002F00Sh3UDtjgHtQNHwcRGOC98BJEAEymycmYcWwOprTgcB6VZ5JK5TAJ+fXGLBm3FDAmn6oPPjR4rKCAoJCal2eAiQp2x0vxTPB3ALO2CRkwmDy5WohzBDwSEFKRwPbknEggCPB\u002FimwrycgxX2NzoMCHhPkDwqYMr9tRcP5qNrMZHkVnOjRMWwLCcr8ohBVb1OMjxLwGCvjTikrsBOiA6fNyCrm8V1rP93iVPpwaE+gO0SsWmPiXB+jikdf6SizrT5qKasx5j8ABbHpFTx+vFXp9EnYQmLx02h1QTTrl6eDqxLnGjporxl3NL3agEvXdT0WmEost648sQOYAeJS9Q7bfUVoMGnjo4AZdUMQku50McDcMWcBPvr0SzbTAFDfvJqwLzgxwATnCgnp4wDl6Aa+Ax283gghmj+vj7feE2KBBRMW3FzOpLOADl0Isb5587h\u002FU4gGvkt5v60Z1VLG8BhYjbzRwyQZemwAd6cCR5\u002FXFWLYZRIMpX39AR0tjaGGiGzLVyhse5C9RKC6ai42ppWPKiBagOvaYk8lO7DajerabOZP46Lby5wKjw1HCRx7p9sVMOWGzb\u002FvA1hwiWc6jm3MvQDTogQkiqIhJV0nBQBTU+3okKCFDy9WwferkHjtxib7t3xIUQtHxnIwtx4mpg26\u002FHfwVNVDb4oI9RHmx5WGelRVlrtiw43zboCLaxv46AZeB3IlTkwouebTr1y2NjSpHz68WNFjHvupy3q8TFn3Hos2IAk4Ju5dCo8B3wP7VPr\u002FFGaKiG+T+v+TQqIrOqMTL1VdWV1DdmcbO8KXBz6esmYWYKPwDL5b5FA1a0hwapHiom0r\u002FcKaoqr+27\u002FXcrS5UwSMbQAAAABJRU5ErkJggg==\" alt=\"YOLOv7-object-tracking DeepWiki\">\u003C\u002Fa>\n[![codecov](https:\u002F\u002Fcodecov.io\u002Fgh\u002FRizwanMunawar\u002Fyolov7-object-tracking\u002Fgraph\u002Fbadge.svg?token=GE4Z0BS8V9)](https:\u002F\u002Fcodecov.io\u002Fgh\u002FRizwanMunawar\u002Fyolov7-object-tracking)\n\n💥 **Ultralytics YOLOv8** support added `python detect.py --weights yolov8n.pt`\n\n 🚀 **YOLOv9, YOLOv10, YOLO11, YOLO12, YOLO13** support coming soon :)\n\n### How to Run the Code\n\n1. 
Clone the repository:\n   \n    ```bash\n    git clone https:\u002F\u002Fgithub.com\u002FRizwanMunawar\u002Fyolov7-object-tracking.git\n    ```\n   \n2. Navigate to the cloned folder:\n    ```bash\n    cd yolov7-object-tracking\n    ```\n\n3. Create a virtual environment (recommended to avoid conflicts):\n\n    **For Anaconda**:\n    ```bash\n    conda create -n yolov7objtracking python=3.12\n    conda activate yolov7objtracking\n    ```\n\n    **For Linux**:\n    ```bash\n    python3 -m venv yolov7objtracking\n    source yolov7objtracking\u002Fbin\u002Factivate\n    ```\n\n    **For Windows**:\n    ```bash\n    python -m venv yolov7objtracking\n    yolov7objtracking\\Scripts\\activate\n    ```\n\n    **For macOS**:\n    ```bash\n    python3 -m venv yolov7objtracking\n    source yolov7objtracking\u002Fbin\u002Factivate\n    ```\n\n4. Update pip and install dependencies:\n    ```bash\n    pip install --upgrade pip\n    pip install -r requirements.txt\n    ```\n\n5. Run the script:\n\n    Select the appropriate command based on your requirements. 
Pretrained [yolov7](https:\u002F\u002Fgithub.com\u002FWongKinYiu\u002Fyolov7\u002Freleases\u002Fdownload\u002Fv0.1\u002Fyolov7.pt) weights will be downloaded automatically if needed.\n\n    - **Detection only**:\n      ```bash\n      python detect.py --weights yolov7.pt\n  \n      # If you want to use your own videos.\n      python detect.py --weights yolov7.pt --source \"your video.mp4\"\n      \n      # For Inference with YOLOv8\n      python detect.py --weights yolov8n.pt\n      ```\n\n    - **Object tracking**:\n      ```bash\n      python detect_and_track.py --weights yolov7.pt\n      \n      # If you want to use your own videos.\n      python detect_and_track.py --weights yolov7.pt --source \"your video.mp4\"\n      ```\n\n    - **Webcam**:\n      ```bash\n      python detect_and_track.py --weights yolov7.pt --source 0\n      ```\n\n    - **External Camera**:\n      ```bash\n      python detect_and_track.py --weights yolov7.pt --source 1\n      ```\n\n    - **IP Camera Stream**:\n      ```bash\n      python detect_and_track.py --source \"your IP Camera Stream URL\" --device 0\n      ```\n\n    - **Specific class tracking (e.g., person)**:\n      ```bash\n      python detect_and_track.py --weights yolov7.pt --source \"your video.mp4\" --classes 0\n      ```\n\n    - **Colored tracks**:\n      ```bash\n      python detect_and_track.py --weights yolov7.pt --source \"your video.mp4\" --colored-trk\n      ```\n\n    - **Save track centroids, IDs, and bounding box coordinates**:\n      ```bash\n      python detect_and_track.py --weights yolov7.pt --source \"your video.mp4\" --save-txt --save-bbox-dim\n      ```\n\n6. 
**Output files** will be saved in `runs\u002Fdetect\u002Fobj-tracking` with the original filename.\n\n### Arguments details 🚀\n\n| Argument                | Type        | Default                       | Description                                                         |\n|-------------------------|-------------|-------------------------------|---------------------------------------------------------------------|\n| `--weights`             | `str`       | `yolov7.pt`                   | Path(s) to model weights (`.pt` file).                              |\n| `--download`            | `flag`      | `False`                       | Download model weights automatically.                               |\n| `--source`              | `str`       | `None`                        | Source for inference (file, folder, or `0` for webcam).             |\n| `--img-size`            | `int`       | `640`                         | Inference image size in pixels.                                     |\n| `--conf-thres`          | `float`     | `0.25`                        | Object confidence threshold.                                        |\n| `--iou-thres`           | `float`     | `0.45`                        | Intersection over Union (IoU) threshold for NMS.                    |\n| `--device`              | `str`       | `''`                          | CUDA device (e.g., `0` or `0,1,2,3`) or `cpu`.                      |\n| `--view-img`            | `flag`      | `False`                       | Display results during inference.                                   |\n| `--save-txt`            | `flag`      | `False`                       | Save results to `.txt` files.                                       |\n| `--save-conf`           | `flag`      | `False`                       | Save confidence scores in `.txt` labels.                            |\n| `--nosave`              | `flag`      | `False`                       | Do not save images or videos.                            
           |\n| `--classes`             | `list[int]` | `None`                        | Filter results by class (e.g., `--classes 0` or `--classes 0 2 3`). |\n| `--agnostic-nms`        | `flag`      | `False`                       | Use class-agnostic Non-Maximum Suppression (NMS).                   |\n| `--augment`             | `flag`      | `False`                       | Enable augmented inference.                                         |\n| `--update`              | `flag`      | `False`                       | Update all models.                                                  |\n| `--project`             | `str`       | `runs\u002Fdetect`                 | Directory to save results (`project\u002Fname`).                         |\n| `--name`                | `str`       | `runs\u002Fdetect\u002Fobject_tracking` | Name of the results folder inside the project directory.            |\n| `--exist-ok`            | `flag`      | `False`                       | Allow existing project\u002Fname without incrementing.                   |\n| `--no-trace`            | `flag`      | `False`                       | Do not trace the model during export.                               |\n| `--colored-trk`         | `flag`      | `False`                       | Assign a unique color to each track for visualization.              |\n| `--save-bbox-dim`       | `flag`      | `False`                       | Save bounding box dimensions in `.txt` tracks.                      |\n| `--save-with-object-id` | `flag`      | `False`                       | Save results with object ID in `.txt` files.                        
|\n\n### Results 📊\n\u003Ctable>\n  \u003Ctr>\n    \u003Ctd>YOLOv7 Detection Only\u003C\u002Ftd>\n    \u003Ctd>YOLOv7 Object Tracking with ID\u003C\u002Ftd>\n    \u003Ctd>YOLOv7 Object Tracking with ID and Label\u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr>\n    \u003Ctd>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FRizwanMunawar_yolov7-object-tracking_readme_793268c71b37.png\">\u003C\u002Ftd>\n    \u003Ctd>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FRizwanMunawar_yolov7-object-tracking_readme_225b0b1b52eb.png\">\u003C\u002Ftd>\n    \u003Ctd>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FRizwanMunawar_yolov7-object-tracking_readme_844b0153d17b.png\">\u003C\u002Ftd>\n  \u003C\u002Ftr>\n\u003C\u002Ftable>\n\n### Star History\n\n[![Star History Chart](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FRizwanMunawar_yolov7-object-tracking_readme_e13c716fbd7a.png)](https:\u002F\u002Fwww.star-history.com\u002F#RizwanMunawar\u002Fyolov7-object-tracking&type=date&legend=top-left)\n\n### References\n\n- [YOLOv7 GitHub](https:\u002F\u002Fgithub.com\u002FWongKinYiu\u002Fyolov7)\n- [SORT GitHub](https:\u002F\u002Fgithub.com\u002Fabewley\u002Fsort)\n\n**Some of my articles\u002Fresearch papers | computer vision awesome resources for learning | How do I appear to the world? 
🚀**\n\n[Ultralytics YOLO11: Object Detection and Instance Segmentation🤯](https:\u002F\u002Fmuhammadrizwanmunawar.medium.com\u002Fultralytics-yolo11-object-detection-and-instance-segmentation-88ef0239a811) ![Published Date](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fpublished_Date-2024--10--27-brightgreen)\n\n[Parking Management using Ultralytics YOLO11](https:\u002F\u002Fmuhammadrizwanmunawar.medium.com\u002Fparking-management-using-ultralytics-yolo11-fba4c6bc62bc) ![Published Date](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fpublished_Date-2024--11--10-brightgreen)\n\n[My 🖐️Computer Vision Hobby Projects that Yielded Earnings](https:\u002F\u002Fmuhammadrizwanmunawar.medium.com\u002Fmy-️computer-vision-hobby-projects-that-yielded-earnings-7923c9b9eead) ![Published Date](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fpublished_Date-2023--09--10-brightgreen)\n\n[Best Resources to Learn Computer Vision](https:\u002F\u002Fmuhammadrizwanmunawar.medium.com\u002Fbest-resources-to-learn-computer-vision-311352ed0833) ![Published Date](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fpublished_Date-2023--06--30-brightgreen)\n\n[Roadmap for Computer Vision Engineer](https:\u002F\u002Fmedium.com\u002Faugmented-startups\u002Froadmap-for-computer-vision-engineer-45167b94518c)  ![Published Date](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fpublished_Date-2022--08--07-brightgreen)\n\n[How did I spend 2022 in the Computer Vision Field](https:\u002F\u002Fwww.linkedin.com\u002Fpulse\u002Fhow-did-i-spend-2022-computer-vision-field-muhammad-rizwan-munawar) ![Published Date](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fpublished_Date-2022--12--20-brightgreen)\n\n[Domain Feature Mapping with YOLOv7 for Automated Edge-Based Pallet Racking Inspections](https:\u002F\u002Fwww.mdpi.com\u002F1424-8220\u002F22\u002F18\u002F6927) ![Published Date](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fpublished_Date-2022--09--13-brightgreen)\n\n[Exudate Regeneration for Automated 
Exudate Detection in Retinal Fundus Images](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9885192) ![Published Date](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fpublished_Date-2022--09--12-brightgreen)\n\n[Feature Mapping for Rice Leaf Defect Detection Based on a Custom Convolutional Architecture](https:\u002F\u002Fwww.mdpi.com\u002F2304-8158\u002F11\u002F23\u002F3914) ![Published Date](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fpublished_Date-2022--12--04-brightgreen)\n\n[Yolov5, Yolo-x, Yolo-r, Yolov7 Performance Comparison: A Survey](https:\u002F\u002Faircconline.com\u002Fcsit\u002Fpapers\u002Fvol12\u002Fcsit121602.pdf)  ![Published Date](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fpublished_Date-2022--09--24-brightgreen)\n\n[Explainable AI in Drug Sensitivity Prediction on Cancer Cell Lines](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9922931)  ![Published Date](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fpublished_Date-2022--09--23-brightgreen)\n\n[Train YOLOv8 on Custom Data](https:\u002F\u002Fmedium.com\u002Faugmented-startups\u002Ftrain-yolov8-on-custom-data-6d28cd348262)  ![Published Date](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fpublished_Date-2022--09--23-brightgreen)\n\n[Session with Ultralytics Team about Computer Vision Journey](https:\u002F\u002Fwww.ultralytics.com\u002Fblog\u002Fbecoming-a-computer-vision-engineer)  ![Published Date](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fpublished_Date-2022--11--15-brightgreen)\n\n### Contributors 🤝\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002FRizwanMunawar\u002Fyolov7-object-tracking\u002Fgraphs\u002Fcontributors\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FRizwanMunawar_yolov7-object-tracking_readme_86f244fea017.png\" \u002F>\n\u003C\u002Fa>\n","## YOLOv7 目标跟踪 
🚀\n\n[![CI](https:\u002F\u002Fgithub.com\u002FRizwanMunawar\u002Fyolov7-object-tracking\u002Factions\u002Fworkflows\u002Fci.yml\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002FRizwanMunawar\u002Fyolov7-object-tracking\u002Factions\u002Fworkflows\u002Fci.yml) ![访问者](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FRizwanMunawar_yolov7-object-tracking_readme_1aaabdb88056.png)\n[![在 Colab 中打开](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002FRizwanMunawar\u002Fyolov7-object-tracking\u002Fblob\u002Fmain\u002Fnotebooks\u002Fyolov7-object-tracking.ipynb) [![在 Kaggle 中打开](https:\u002F\u002Fkaggle.com\u002Fstatic\u002Fimages\u002Fopen-in-kaggle.svg)](https:\u002F\u002Fkaggle.com\u002Fkernels\u002Fwelcome?src=https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002FRizwanMunawar\u002Fyolov7-object-tracking\u002Fblob\u002Fmain\u002Fnotebooks\u002Fyolov7-object-tracking.ipynb) [![在 SageMaker Studio Lab 中打开](https:\u002F\u002Fstudiolab.sagemaker.aws\u002Fstudiolab.svg)](https:\u002F\u002Fstudiolab.sagemaker.aws\u002Fimport\u002Fgithub\u002FRizwanMunawar\u002Fyolov7-object-tracking\u002Fblob\u002Fmain\u002Fnotebooks\u002Fyolov7-object-tracking.ipynb) \u003Ca href=\"https:\u002F\u002Fdeepwiki.com\u002FRizwanMunawar\u002Fyolov7-object-tracking\">\u003Cimg 
src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FRepo-DeepWiki-blue.svg?logo=data:image\u002Fpng;base64,iVBORw0KGgoAAAANSUhEUgAAACwAAAAyCAYAAAAnWDnqAAAAAXNSR0IArs4c6QAAA05JREFUaEPtmUtyEzEQhtWTQyQLHNak2AB7ZnyXZMEjXMGeK\u002FAIi+QuHrMnbChYY7MIh8g01fJoopFb0uhhEqqcbWTp06\u002Fuv1saEDv4O3n3dV60RfP947Mm9\u002FSQc0ICFQgzfc4CYZoTPAswgSJCCUJUnAAoRHOAUOcATwbmVLWdGoH\u002F\u002FPB8mnKqScAhsD0kYP3j\u002FYt5LPQe2KvcXmGvRHcDnpxfL2zOYJ1mFwrryWTz0advv1Ut4CJgf5uhDuDj5eUcAUoahrdY\u002F56ebRWeraTjMt\u002F00Sh3UDtjgHtQNHwcRGOC98BJEAEymycmYcWwOprTgcB6VZ5JK5TAJ+fXGLBm3FDAmn6oPPjR4rKCAoJCal2eAiQp2x0vxTPB3ALO2CRkwmDy5WohzBDwSEFKRwPbknEggCPB\u002FimwrycgxX2NzoMCHhPkDwqYMr9tRcP5qNrMZHkVnOjRMWwLCcr8ohBVb1OMjxLwGCvjTikrsBOiA6fNyCrm8V1rP93iVPpwaE+gO0SsWmPiXB+jikdf6SizrT5qKasx5j8ABbHpFTx+vFXp9EnYQmLx02h1QTTrl6eDqxLnGjporxl3NL3agEvXdT0WmEost648sQOYAeJS9Q7bfUVoMGnjo4AZdUMQku50McDcMWcBPvr0SzbTAFDfvJqwLzgxwATnCgnp4wDl6Aa+Ax283gghmj+vj7feE2KBBRMW3FzOpLOADl0Isb5587h\u002FU4gGvkt5v60Z1VLG8BhYjbzRwyQZemwAd6cCR5\u002FXFWLYZRIMpX39AR0tjaGGiGzQVrtiw43zboCLaxv46AZeB3IlTkwouebTr1y2NjSpHz68WNFjHvupy3q8TFn3Hos2IAk4Ju5dCo8B3wP7VPr\u002FFGaKiG+T+v+TcrS5UwSMbQAAAABJRU5ErkJggg==\" alt=\"YOLOv7-object-tracking DeepWiki\">\u003C\u002Fa>\n[![codecov](https:\u002F\u002Fcodecov.io\u002Fgh\u002FRizwanMunawar\u002Fyolov7-object-tracking\u002Fgraph\u002Fbadge.svg?token=GE4Z0BS8V9)](https:\u002F\u002Fcodecov.io\u002Fgh\u002FRizwanMunawar\u002Fyolov7-object-tracking)\n\n💥 **Ultralytics YOLOv8** 支持已添加 `python detect.py --weights yolov8n.pt`\n\n🚀 **YOLOv9、YOLOv10、YOLO11、YOLO12、YOLO13** 支持即将推出 :)\n\n### 如何运行代码\n\n1. 克隆仓库：\n   \n    ```bash\n    git clone https:\u002F\u002Fgithub.com\u002FRizwanMunawar\u002Fyolov7-object-tracking.git\n    ```\n   \n2. 进入克隆的文件夹：\n    ```bash\n    cd yolov7-object-tracking\n    ```\n\n3. 
创建虚拟环境（推荐以避免冲突）：\n\n    **对于 Anaconda**：\n    ```bash\n    conda create -n yolov7objtracking python=3.12\n    conda activate yolov7objtracking\n    ```\n\n    **对于 Linux**：\n    ```bash\n    python3 -m venv yolov7objtracking\n    source yolov7objtracking\u002Fbin\u002Factivate\n    ```\n\n    **对于 Windows**：\n    ```bash\n    python -m venv yolov7objtracking\n    yolov7objtracking\\Scripts\\activate\n    ```\n\n    **对于 macOS**：\n    ```bash\n    python3 -m venv yolov7objtracking\n    source yolov7objtracking\u002Fbin\u002Factivate\n    ```\n\n4. 更新 pip 并安装依赖项：\n    ```bash\n    pip install --upgrade pip\n    pip install -r requirements.txt\n    ```\n\n5. 运行脚本：\n\n    根据您的需求选择合适的命令。如果需要，会自动下载预训练的 [yolov7](https:\u002F\u002Fgithub.com\u002FWongKinYiu\u002Fyolov7\u002Freleases\u002Fdownload\u002Fv0.1\u002Fyolov7.pt) 权重。\n\n    - **仅检测**：\n      ```bash\n      python detect.py --weights yolov7.pt\n  \n      # 如果您想使用自己的视频。\n      python detect.py --weights yolov7.pt --source \"your video.mp4\"\n      \n      # 使用 YOLOv8 进行推理\n      python detect.py --weights yolov8n.pt\n      ```\n\n    - **目标跟踪**：\n      ```bash\n      python detect_and_track.py --weights yolov7.pt\n      \n      # 如果您想使用自己的视频。\n      python detect_and_track.py --weights yolov7.pt --source \"your video.mp4\"\n      ```\n\n    - **摄像头**：\n      ```bash\n      python detect_and_track.py --weights yolov7.pt --source 0\n      ```\n\n    - **外接相机**：\n      ```bash\n      python detect_and_track.py --weights yolov7.pt --source 1\n      ```\n\n    - **IP 摄像头流**：\n      ```bash\n      python detect_and_track.py --source \"your IP Camera Stream URL\" --device 0\n      ```\n\n    - **特定类别跟踪（例如人）**：\n      ```bash\n      python detect_and_track.py --weights yolov7.pt --source \"your video.mp4\" --classes 0\n      ```\n\n    - **彩色轨迹**：\n      ```bash\n      python detect_and_track.py --weights yolov7.pt --source \"your video.mp4\" --colored-trk\n      ```\n\n    - **保存轨迹质心、ID 和边界框坐标**：\n      
```bash\n      python detect_and_track.py --weights yolov7.pt --source \"your video.mp4\" --save-txt --save-bbox-dim\n      ```\n\n6. **输出文件** 将保存在 `runs\u002Fdetect\u002Fobj-tracking` 文件夹中，并保留原始文件名。\n\n### 参数详情 🚀\n\n| 参数                | 类型        | 默认值                       | 描述                                                         |\n|-------------------------|-------------|-------------------------------|---------------------------------------------------------------------|\n| `--weights`             | `str`       | `yolov7.pt`                   | 模型权重文件（`.pt` 文件）的路径。                              |\n| `--download`            | `flag`      | `False`                       | 自动下载模型权重。                                               |\n| `--source`              | `str`       | `None`                        | 推理的输入源（文件、文件夹，或 `0` 表示网络摄像头）。             |\n| `--img-size`            | `int`       | `640`                         | 推理时使用的图像尺寸，单位为像素。                             |\n| `--conf-thres`          | `float`     | `0.25`                        | 目标置信度阈值。                                                 |\n| `--iou-thres`           | `float`     | `0.45`                        | 用于非极大值抑制（NMS）的交并比（IoU）阈值。                     |\n| `--device`              | `str`       | `''`                          | CUDA 设备（例如 `0` 或 `0,1,2,3`）或 `cpu`。                      |\n| `--view-img`            | `flag`      | `False`                       | 在推理过程中显示结果。                                           |\n| `--save-txt`            | `flag`      | `False`                       | 将结果保存为 `.txt` 文件。                                       |\n| `--save-conf`           | `flag`      | `False`                       | 将置信度分数保存到 `.txt` 标签文件中。                           |\n| `--nosave`              | `flag`      | `False`                       | 不保存图像或视频。                                               |\n| `--classes`             | `list[int]` | `None`                        | 按类别筛选结果（例如 `--classes 
0` 或 `--classes 0 2 3`）。       |\n| `--agnostic-nms`        | `flag`      | `False`                       | 使用类别无关的非极大值抑制（NMS）。                               |\n| `--augment`             | `flag`      | `False`                       | 启用增强推理。                                                   |\n| `--update`              | `flag`      | `False`                       | 更新所有模型。                                                   |\n| `--project`             | `str`       | `runs\u002Fdetect`                 | 保存结果的目录（`project\u002Fname`）。                                |\n| `--name`                | `str`       | `runs\u002Fdetect\u002Fobject_tracking` | 项目目录内结果文件夹的名称。                                     |\n| `--exist-ok`            | `flag`      | `False`                       | 允许使用已存在的项目\u002F名称而不递增编号。                          |\n| `--no-trace`            | `flag`      | `False`                       | 导出模型时不进行追踪。                                           |\n| `--colored-trk`         | `flag`      | `False`                       | 为每个轨迹分配唯一颜色以便可视化。                             |\n| `--save-bbox-dim`       | `flag`      | `False`                       | 将边界框的尺寸保存到 `.txt` 轨迹文件中。                        |\n| `--save-with-object-id` | `flag`      | `False`                       | 将带有目标 ID 的结果保存到 `.txt` 文件中。                       |\n\n### 结果 📊\n\u003Ctable>\n  \u003Ctr>\n    \u003Ctd>仅 YOLOv7 检测\u003C\u002Ftd>\n    \u003Ctd>带 ID 的 YOLOv7 目标跟踪\u003C\u002Ftd>\n    \u003Ctd>带 ID 和标签的 YOLOv7 目标跟踪\u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr>\n    \u003Ctd>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FRizwanMunawar_yolov7-object-tracking_readme_793268c71b37.png\">\u003C\u002Ftd>\n    \u003Ctd>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FRizwanMunawar_yolov7-object-tracking_readme_225b0b1b52eb.png\">\u003C\u002Ftd>\n    \u003Ctd>\u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FRizwanMunawar_yolov7-object-tracking_readme_844b0153d17b.png\">\u003C\u002Ftd>\n  \u003C\u002Ftr>\n\u003C\u002Ftable>\n\n### 点赞历史\n\n[![点赞历史图表](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FRizwanMunawar_yolov7-object-tracking_readme_e13c716fbd7a.png)](https:\u002F\u002Fwww.star-history.com\u002F#RizwanMunawar\u002Fyolov7-object-tracking&type=date&legend=top-left)\n\n### 参考文献\n\n- [YOLOv7 GitHub](https:\u002F\u002Fgithub.com\u002FWongKinYiu\u002Fyolov7)\n- [SORT GitHub](https:\u002F\u002Fgithub.com\u002Fabewley\u002Fsort)\n\n**我撰写的一些文章\u002F研究论文 | 计算机视觉学习的优质资源 | 我在世人眼中是什么样子？🚀**\n\n[Ultralytics YOLO11：目标检测与实例分割🤯](https:\u002F\u002Fmuhammadrizwanmunawar.medium.com\u002Fultralytics-yolo11-object-detection-and-instance-segmentation-88ef0239a811) ![发布日期](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fpublished_Date-2024--10--27-brightgreen)\n\n[使用 Ultralytics YOLO11 的停车场管理](https:\u002F\u002Fmuhammadrizwanmunawar.medium.com\u002Fparking-management-using-ultralytics-yolo11-fba4c6bc62bc) ![发布日期](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fpublished_Date-2024--11--10-brightgreen)\n\n[我的 🖐️能带来收益的计算机视觉兴趣项目](https:\u002F\u002Fmuhammadrizwanmunawar.medium.com\u002Fmy-️computer-vision-hobby-projects-that-yielded-earnings-7923c9b9eead) ![发布日期](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fpublished_Date-2023--09--10-brightgreen)\n\n[学习计算机视觉的最佳资源](https:\u002F\u002Fmuhammadrizwanmunawar.medium.com\u002Fbest-resources-to-learn-computer-vision-311352ed0833) ![发布日期](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fpublished_Date-2023--06--30-brightgreen)\n\n[计算机视觉工程师的职业发展路线图](https:\u002F\u002Fmedium.com\u002Faugmented-startups\u002Froadmap-for-computer-vision-engineer-45167b94518c)  
![发布日期](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fpublished_Date-2022--08--07-brightgreen)\n\n[我在2022年如何深耕计算机视觉领域](https:\u002F\u002Fwww.linkedin.com\u002Fpulse\u002Fhow-did-i-spend-2022-computer-vision-field-muhammad-rizwan-munawar) ![发布日期](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fpublished_Date-2022--12--20-brightgreen)\n\n[基于 YOLOv7 的域特征映射用于自动化边缘式托盘货架检测](https:\u002F\u002Fwww.mdpi.com\u002F1424-8220\u002F22\u002F18\u002F6927) ![发布日期](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fpublished_Date-2022--09--13-brightgreen)\n\n[视网膜眼底图像中渗出物自动检测的渗出物再生技术](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9885192) ![发布日期](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fpublished_Date-2022--09--12-brightgreen)\n\n[基于自定义卷积架构的大米叶片缺陷检测特征映射](https:\u002F\u002Fwww.mdpi.com\u002F2304-8158\u002F11\u002F23\u002F3914) ![发布日期](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fpublished_Date-2022--12--04-brightgreen)\n\n[Yolov5、Yolo-x、Yolo-r、Yolov7 性能对比：一项综述](https:\u002F\u002Faircconline.com\u002Fcsit\u002Fpapers\u002Fvol12\u002Fcsit121602.pdf)  ![发布日期](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fpublished_Date-2022--09--24-brightgreen)\n\n[可解释人工智能在癌细胞系药物敏感性预测中的应用](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9922931)  ![发布日期](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fpublished_Date-2022--09--23-brightgreen)\n\n[在自定义数据上训练 YOLOv8](https:\u002F\u002Fmedium.com\u002Faugmented-startups\u002Ftrain-yolov8-on-custom-data-6d28cd348262)  ![发布日期](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fpublished_Date-2022--09--23-brightgreen)\n\n[与 Ultralytics 团队关于计算机视觉职业旅程的交流](https:\u002F\u002Fwww.ultralytics.com\u002Fblog\u002Fbecoming-a-computer-vision-engineer)  ![发布日期](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fpublished_Date-2022--11--15-brightgreen)\n\n### 贡献者 🤝\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002FRizwanMunawar\u002Fyolov7-object-tracking\u002Fgraphs\u002Fcontributors\">\n  \u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FRizwanMunawar_yolov7-object-tracking_readme_86f244fea017.png\" \u002F>\n\u003C\u002Fa>","# YOLOv7 目标跟踪快速上手指南\n\n## 环境准备\n\n在开始之前，请确保您的开发环境满足以下要求：\n\n*   **操作系统**：Linux、Windows 或 macOS\n*   **Python 版本**：推荐 Python 3.8 - 3.12\n*   **硬件加速**（可选但推荐）：NVIDIA GPU 及对应的 CUDA 驱动（用于加速推理）\n*   **前置依赖**：已安装 Git 和 pip\n\n> **💡 国内开发者提示**：建议在安装依赖前配置 pip 国内镜像源（如清华源），以显著提升下载速度：\n> ```bash\n> pip config set global.index-url https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n> ```\n\n## 安装步骤\n\n### 1. 克隆项目代码\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002FRizwanMunawar\u002Fyolov7-object-tracking.git\ncd yolov7-object-tracking\n```\n\n### 2. 创建并激活虚拟环境\n为避免依赖冲突，强烈建议使用虚拟环境。\n\n**Anaconda 用户：**\n```bash\nconda create -n yolov7objtracking python=3.12\nconda activate yolov7objtracking\n```\n\n**原生 Python (Linux\u002FMac)：**\n```bash\npython3 -m venv yolov7objtracking\nsource yolov7objtracking\u002Fbin\u002Factivate\n```\n\n**原生 Python (Windows，通常使用 `python` 而非 `python3`)：**\n```bash\npython -m venv yolov7objtracking\nyolov7objtracking\\Scripts\\activate\n```\n\n### 3. 
安装依赖库\n更新 pip 并安装项目所需依赖：\n```bash\npip install --upgrade pip\npip install -r requirements.txt\n```\n*(注：若未配置国内源且下载缓慢，可添加 `-i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple` 参数)*\n\n## 基本使用\n\n本项目支持纯检测模式和目标跟踪模式。首次运行时，脚本会自动下载预训练的 `yolov7.pt` 权重文件。\n\n### 场景一：运行目标跟踪（推荐）\n对视频或摄像头输入进行实时目标跟踪，并在画面中标记 ID。\n\n*   **使用默认测试视频\u002F图片：**\n    ```bash\n    python detect_and_track.py --weights yolov7.pt\n    ```\n\n*   **跟踪本地视频文件：**\n    ```bash\n    python detect_and_track.py --weights yolov7.pt --source \"your_video.mp4\"\n    ```\n\n*   **调用电脑摄像头 (Webcam)：**\n    ```bash\n    python detect_and_track.py --weights yolov7.pt --source 0\n    ```\n\n*   **调用外接摄像头：**\n    ```bash\n    python detect_and_track.py --weights yolov7.pt --source 1\n    ```\n\n### 场景二：仅运行目标检测\n如果不需跟踪 ID，仅需检测框：\n\n```bash\npython detect.py --weights yolov7.pt --source \"your_video.mp4\"\n```\n\n### 常用高级参数示例\n\n*   **仅跟踪特定类别**（例如只跟踪“人”，COCO 数据集中人的类别 ID 为 0）：\n    ```bash\n    python detect_and_track.py --weights yolov7.pt --source \"your_video.mp4\" --classes 0\n    ```\n\n*   **为每个轨迹分配独特颜色**：\n    ```bash\n    python detect_and_track.py --weights yolov7.pt --source \"your_video.mp4\" --colored-trk\n    ```\n\n*   **保存检测结果**（包含坐标、ID 的 txt 文件）：\n    ```bash\n    python detect_and_track.py --weights yolov7.pt --source \"your_video.mp4\" --save-txt --save-bbox-dim\n    ```\n\n### 结果查看\n运行结束后，处理后的视频、图片及标注文件将保存在 `runs\u002Fdetect\u002Fobj-tracking` 目录下。\n\n> **注意**：本项目现已支持 **YOLOv8** 模型，只需将 `--weights` 参数改为 `yolov8n.pt` 即可直接使用。","某智慧园区的安防团队需要利用现有摄像头，对进出货运车辆进行全天候自动计数与轨迹分析，以优化物流调度。\n\n### 没有 yolov7-object-tracking 时\n- **目标身份丢失**：当两辆货车在画面中交汇或短暂遮挡时，传统检测算法会将同一辆车误判为两个新目标，导致计数严重虚高。\n- **轨迹断裂碎片化**：车辆行驶过程中若经过光影变化区，检测框会频繁跳变或消失，无法生成完整的行车路线，难以分析违停行为。\n- **人工复核成本高**：由于自动数据不可信，安保人员必须每小时人工回看监控录像来核对车流量，耗费大量人力且效率低下。\n- **实时性差**：原有的多阶段处理流程延迟高，无法在车辆违规驶入禁行区的瞬间触发警报，往往事后才能发现。\n\n### 使用 yolov7-object-tracking 后\n- **ID 持续稳定锁定**：借助 SORT 追踪算法，即使货车在转弯或被其他物体遮挡时，yolov7-object-tracking 也能保持唯一 ID 不变，确保计数准确率提升至 98% 以上。\n- 
**完整轨迹可视化**：系统能流畅绘制每辆车的连续运动路径，清晰展示其在园区内的停留时长与行驶热点，为动线优化提供直观数据。\n- **自动化报表生成**：无需人工干预，系统自动输出分时段的车流统计报表，让管理人员能即时掌握物流高峰时段。\n- **毫秒级异常预警**：依托 YOLOv7 的高效检测与追踪联动，一旦车辆闯入设定电子围栏，系统可在毫秒级内推送报警信息，实现主动防御。\n\nyolov7-object-tracking 通过将高精度检测与稳定追踪深度融合，把原本模糊的视频流转化为可量化、可追溯的智能物流数据资产。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FRizwanMunawar_yolov7-object-tracking_793268c7.png","RizwanMunawar","Muhammad Rizwan Munawar","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FRizwanMunawar_5bbd0362.jpg","Solving real-world problems using computer vision | Influencer | Open source contributor | Technical writer VisionAI | Computer vision engineer | LLMs | Open FC","@ultralytics","Islamabad Pakistan","muhammadrizwanmunawar123@gmail.com","muhammdrizwanmr","https:\u002F\u002Fvisionusecases.com\u002F","https:\u002F\u002Fgithub.com\u002FRizwanMunawar",[83,87],{"name":84,"color":85,"percentage":86},"Python","#3572A5",97,{"name":88,"color":89,"percentage":10},"Jupyter Notebook","#DA5B0B",646,175,"2026-04-07T15:49:07","AGPL-3.0","Linux, macOS, Windows","可选（支持 CUDA 设备，如 --device 0），具体型号和显存未说明；也可使用 CPU 运行","未说明",{"notes":98,"python":99,"dependencies":100},"建议使用虚拟环境（conda 或 venv）以避免依赖冲突。首次运行时若未指定本地权重，会自动下载 YOLOv7 或 YOLOv8 的预训练模型文件。支持多种输入源（视频文件、 webcam、IP 摄像头流）。","3.12 (README 示例中使用，建议版本)",[101,102,103,104,105],"torch","ultralytics","opencv-python","numpy","pandas",[15,14],[108,109,103,110,111,112,102,113,114],"deep-learning","object-detection","tracking-algorithm","yolov7","computer-vision","ultralytics-yolo","yolov8",null,"2026-03-27T02:49:30.150509","2026-04-08T20:33:45.757390",[119,124,129,134,139,144],{"id":120,"question_zh":121,"answer_zh":122,"source_url":123},25003,"在推理（检测与跟踪）阶段，是否应该使用比验证阶段更高的置信度阈值（conf-thres）？","是的，您可以测试不同的阈值并观察结果差异。为了获得更准确的跟踪结果，通常建议在 detect_and_track.py 中尝试提高 conf-thres 
的值。您可以根据实际效果调整阈值，直到获得最佳结果。","https:\u002F\u002Fgithub.com\u002FRizwanMunawar\u002Fyolov7-object-tracking\u002Fissues\u002F18",{"id":125,"question_zh":126,"answer_zh":127,"source_url":128},25004,"加载自定义训练的 YOLOv7 模型权重时出现 AttributeError 或 pickle 相关错误怎么办？","这通常是由于 PyTorch 版本更新导致的安全限制（新版本默认不自动加载 pickle 文件中的非权重数据）。解决方法是找到代码中所有使用 `torch.load(w, map_location=map_location)` 的地方，将其修改为 `torch.load(w, map_location=map_location, weights_only=False)`。这通常出现在 detect_and_track.py 或 sort.py 等脚本中。","https:\u002F\u002Fgithub.com\u002FRizwanMunawar\u002Fyolov7-object-tracking\u002Fissues\u002F69",{"id":130,"question_zh":131,"answer_zh":132,"source_url":133},25005,"直接运行 sort.py 时报错 'NameError: name seq_dets_fn is not defined' 是什么原因？","这是一个用法错误。sort.py 文件设计为被其他脚本调用，不应直接作为主程序运行。请改用以下命令进行目标检测和跟踪：\n`python detect_and_track.py --weights yolov7.pt --source \"your_video.mp4\"`\n这样即可避免缩进或未定义变量的问题。","https:\u002F\u002Fgithub.com\u002FRizwanMunawar\u002Fyolov7-object-tracking\u002Fissues\u002F53",{"id":135,"question_zh":136,"answer_zh":137,"source_url":138},25006,"本地已存在模型权重文件（如 yolov7.pt），但程序仍然尝试重新下载，如何解决？","可以在运行命令时添加 `--no-download` 参数来禁止自动下载。例如：\n`python detect_and_track.py --weights yolov7.pt --source ~\u002FVideos\u002Fvideo.mp4 --no-download`\n或者：\n`python detect.py --weights yolov7.pt --source ~\u002FVideos\u002Fvideo.mp4 --no-download`","https:\u002F\u002Fgithub.com\u002FRizwanMunawar\u002Fyolov7-object-tracking\u002Fissues\u002F14",{"id":140,"question_zh":141,"answer_zh":142,"source_url":143},25007,"如何使用自定义数据集训练 YOLOv7 模型？","该项目主要支持目标检测和跟踪功能。若需使用自定义数据集训练 YOLOv7，建议参考作者提供的详细教程文章：https:\u002F\u002Fmedium.com\u002Fpixelmindx\u002Fyolov7-training-on-custom-data-b86d23e6623。该文章涵盖了数据准备、配置文件修改及训练启动的具体步骤。","https:\u002F\u002Fgithub.com\u002FRizwanMunawar\u002Fyolov7-object-tracking\u002Fissues\u002F66",{"id":145,"question_zh":146,"answer_zh":147,"source_url":148},25008,"如何将输出坐标从归一化格式转换为 MOT 兼容的经典格式（像素坐标）？","代码中输出的归一化坐标可以通过乘以图像宽高转换回像素坐标。如果需要符合 MOT 挑战赛的格式（class, x, y, w, h 等），可以参考 StackOverflow 
上的转换方法（https:\u002F\u002Fstackoverflow.com\u002Fa\u002F67097124），将 YOLO 格式的中心点坐标和宽高转换为左上角坐标格式。具体而言，需将归一化值乘以 `im0.shape[1]` (宽) 和 `im0.shape[0]` (高) 还原为像素值。","https:\u002F\u002Fgithub.com\u002FRizwanMunawar\u002Fyolov7-object-tracking\u002Fissues\u002F15",[150],{"id":151,"version":64,"summary_zh":152,"released_at":153},154424,"使用 PyTorch、SORT 算法和 OpenCV 的 YOLOv7 目标跟踪","2022-08-21T19:42:09"]
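针对 FAQ 中加载自定义权重时的 pickle 报错，其修法（`weights_only=False`）可以用下面这个可独立运行的小例子复现与验证。`DummyModel` 是示意类，仅用来代替检查点里被 pickle 的完整模型对象，并非项目代码；只对来源可信的权重文件关闭该保护。

```python
import os
import tempfile

import torch

class DummyModel:
    """示意类：模拟 yolov7.pt 中被 pickle 的完整模型对象。"""
    def __init__(self, ver):
        self.ver = ver

path = os.path.join(tempfile.mkdtemp(), "ckpt.pt")
torch.save({"model": DummyModel(7)}, path)

# 新版 PyTorch 默认 weights_only=True，会拒绝反序列化任意 pickle 对象；
# 对可信的检查点显式传入 weights_only=False 即可恢复旧行为，
# 这正是 FAQ 中建议对 detect_and_track.py / sort.py 所做的修改。
ckpt = torch.load(path, map_location="cpu", weights_only=False)
print(ckpt["model"].ver)  # 7
```

如果检查点只包含纯权重（state_dict），保持默认的 `weights_only=True` 更安全。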
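最后一条 FAQ 中从 YOLO 归一化格式到 MOT 像素坐标的换算，可以用下面的 Python 片段示意。`yolo_to_mot` 为示意性函数名，并非项目内置 API；`img_w`、`img_h` 对应答案中的 `im0.shape[1]`（宽）与 `im0.shape[0]`（高）。

```python
def yolo_to_mot(x_c, y_c, w, h, img_w, img_h):
    """将 YOLO 归一化中心点框 (x_c, y_c, w, h) 转换为
    MOT 风格的左上角像素坐标 (x, y, w, h)。"""
    bw = w * img_w            # 归一化宽度还原为像素
    bh = h * img_h            # 归一化高度还原为像素
    x = x_c * img_w - bw / 2  # 中心点 x -> 左上角 x
    y = y_c * img_h - bh / 2  # 中心点 y -> 左上角 y
    return x, y, bw, bh

# 示例：640x480 图像上居中、占一半宽高的框
print(yolo_to_mot(0.5, 0.5, 0.5, 0.5, 640, 480))  # (160.0, 120.0, 320.0, 240.0)
```

如需完全符合 MOTChallenge 的行格式，再在此坐标前补上帧号与跟踪 ID 即可。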