[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"tool-mikel-brostrom--boxmot":3,"similar-mikel-brostrom--boxmot":186},{"id":4,"github_repo":5,"name":6,"description_en":7,"description_zh":8,"ai_summary_zh":8,"readme_en":9,"readme_zh":10,"quickstart_zh":11,"use_case_zh":12,"hero_image_url":13,"owner_login":14,"owner_name":15,"owner_avatar_url":16,"owner_bio":17,"owner_company":18,"owner_location":19,"owner_email":18,"owner_twitter":18,"owner_website":18,"owner_url":20,"languages":21,"stars":30,"forks":31,"last_commit_at":32,"license":33,"difficulty_score":34,"env_os":35,"env_gpu":36,"env_ram":35,"env_deps":37,"category_tags":42,"github_topics":47,"view_count":66,"oss_zip_url":18,"oss_zip_packed_at":18,"status":67,"created_at":68,"updated_at":69,"faqs":70,"releases":86},1002,"mikel-brostrom\u002Fboxmot","boxmot","BoxMOT: Pluggable SOTA multi-object tracking modules with support for axis-aligned and oriented bounding boxes","BoxMOT 是一个开源的多目标跟踪工具箱，为开发者提供统一的命令行和 Python 接口，让你无需重写检测代码就能快速切换不同的跟踪算法。它整合了 8 种主流跟踪器（如 ByteTrack、BotSort、StrongSORT 等），同时支持常规矩形框和旋转框检测，适用于目标检测、分割和姿态估计等多种任务。\n\nBoxMOT 解决了多目标跟踪实验中的两大痛点：一是不同跟踪器接口各异导致集成困难，二是重复运行耗时过长。通过复用缓存的检测结果和特征嵌入，你可以大幅缩短评估和调优时间。内置的基准测试功能支持在 MOT17、MOT20 等标准数据集上快速验证性能。\n\n这个工具特别适合计算机视觉研究者和算法工程师，无论是做学术研究还是开发实际应用，都能帮你高效完成跟踪模型的选型、评估和部署。性能方面，BoxMOT 在保持高准确率（HOTA 最高达 69.4）的同时，部分跟踪器可实现 6000 FPS 的惊人速度。","\u003Cdiv align=\"center\" markdown=\"1\">\n\n  \u003Cimg width=\"640\"\n       src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmikel-brostrom_boxmot_readme_7d3436691fdf.gif\"\n       alt=\"BoxMOT demo\">\n  \u003Cbr>\n\n  \u003Ca href=\"https:\u002F\u002Ftrendshift.io\u002Frepositories\u002F13239\" target=\"_blank\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmikel-brostrom_boxmot_readme_0b0cf02be0f5.png\" alt=\"mikel-brostrom%2Fboxmot | Trendshift\" style=\"width: 250px; height: 55px;\" width=\"250\" height=\"55\">\u003C\u002Fa>\n\n  [![CI](https:\u002F\u002Fgithub.com\u002Fmikel-brostrom\u002Fboxmot\u002Factions\u002Fworkflows\u002Fci.yml\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fmikel-brostrom\u002Fboxmot\u002Factions\u002Fworkflows\u002Fci.yml)\n  [![PyPI version](https:\u002F\u002Fbadge.fury.io\u002Fpy\u002Fboxmot.svg)](https:\u002F\u002Fbadge.fury.io\u002Fpy\u002Fboxmot)\n  [![downloads](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmikel-brostrom_boxmot_readme_cfa2ada7258b.png)](https:\u002F\u002Fpepy.tech\u002Fproject\u002Fboxmot)\n  [![license](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Flicense-AGPL%203.0-blue)](https:\u002F\u002Fgithub.com\u002Fmikel-brostrom\u002Fboxmot\u002Fblob\u002Fmaster\u002FLICENSE)\n  [![python-version](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fpyversions\u002Fboxmot)](https:\u002F\u002Fbadge.fury.io\u002Fpy\u002Fboxmot)\n  [![colab](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F18nIqkBr68TkK8dHdarxTco6svHUJGggY?usp=sharing)\n  [![DOI](https:\u002F\u002Fzenodo.org\u002Fbadge\u002FDOI\u002F10.5281\u002Fzenodo.8132989.svg)](https:\u002F\u002Fdoi.org\u002F10.5281\u002Fzenodo.8132989)\n  [![docker pulls](https:\u002F\u002Fimg.shields.io\u002Fdocker\u002Fpulls\u002Fboxmot\u002Fboxmot?logo=docker)](https:\u002F\u002Fhub.docker.com\u002Fr\u002Fboxmot\u002Fboxmot)\n  [![discord](https:\u002F\u002Fimg.shields.io\u002Fdiscord\u002F1377565354326495283?logo=discord&label=discord&labelColor=fff&color=5865f2)](https:\u002F\u002Fdiscord.gg\u002FtUmFEcYU4q)\n  [![Ask 
DeepWiki](https:\u002F\u002Fdeepwiki.com\u002Fbadge.svg)](https:\u002F\u002Fdeepwiki.com\u002Fmikel-brostrom\u002Fboxmot)\n\n\u003C\u002Fdiv>\n\nBoxMOT gives you one CLI and one Python API for running, evaluating, tuning, and exporting modern multi-object tracking pipelines. Swap trackers without rewriting your detector stack, reuse cached detections and embeddings across experiments, and benchmark locally on MOT-style datasets.\n\n\u003Cdiv align=\"center\" markdown=\"1\">\n\n[Installation](#installation) • [Metrics](#benchmark-results-mot17-ablation-split) • [CLI](#cli) • [Python API](#python-api) • [Detection Layouts](#detection-layouts) • [Examples](#examples) • [Contributing](#contributing)\n\n\u003C\u002Fdiv>\n\n## Why BoxMOT\n\n- One interface for `track`, `generate`, `eval`, `tune`, and `export`.\n- Works with detection, segmentation, and pose models as long as they emit boxes.\n- Supports both motion-only trackers and motion + appearance trackers.\n- Reuses saved detections and embeddings to speed up repeated evaluation and tuning.\n- Handles both AABB and OBB detection layouts natively.\n- Includes local benchmarking workflows for MOT17, MOT20, and DanceTrack ablation splits.\n\n## Installation\n\nBoxMOT supports Python `3.9` through `3.12`.\n\n```bash\npip install boxmot\nboxmot --help\n```\n\n## Benchmark Results (MOT17 ablation split)\n\n\u003Cdiv align=\"center\" markdown=\"1\">\n\n\u003C!-- START TRACKER TABLE -->\n| Tracker | Status  | OBB | HOTA↑ | MOTA↑ | IDF1↑ | FPS |\n| :-----: | :-----: | :-: | :---: | :---: | :---: | :---: |\n| [botsort](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.14651) | ✅ | ✅ | 69.418 | 78.232 | 81.812 | 12 |\n| [boosttrack](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.13003) | ✅ | ❌ | 69.253 | 75.914 | 83.206 | 13 |\n| [strongsort](https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.13514) | ✅ | ❌ | 68.05 | 76.185 | 80.763 | 11 |\n| [deepocsort](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.11813) | ✅ | ❌ | 67.796 | 75.868 | 80.514 | 12 |\n| [bytetrack](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.06864) | ✅ | ✅ | 67.68 | 78.039 | 79.157 | 720 |\n| [hybridsort](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.00783) | ✅ | ❌ | 67.39 | 74.127 | 79.105 | 25 |\n| [ocsort](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.14360) | ✅ | ✅ | 66.441 | 74.548 | 77.899 | 890 |\n| [sfsort](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2404.07553) | ✅ | ✅ | 62.653 | 76.87 | 69.184 | 6000 |\n\u003C!-- END TRACKER TABLE -->\n\n\u003Csub>Evaluation was run on the second half of the MOT17 training set because the validation split is not public and the ablation detector was trained on the first half. 
Results used [pre-generated detections and embeddings](https:\u002F\u002Fgithub.com\u002Fmikel-brostrom\u002Fboxmot\u002Freleases\u002Fdownload\u002Fv11.0.9\u002Fruns2.zip) with each tracker configured from its default repository settings.\u003C\u002Fsub>\n\n\u003C\u002Fdiv>\n\n## CLI\n\nBoxMOT provides a unified CLI with a simple syntax:\n\n```bash\nboxmot MODE [OPTIONS] [DETECTOR] [REID] [TRACKER]\n```\n\nModes:\n\n```text\ntrack      run detector + tracker on webcam, images, videos, directories, or streams\ngenerate   precompute detections and embeddings for later reuse\neval       benchmark on MOT-style datasets and apply optional postprocessing\ntune       optimize tracker hyperparameters with multi-objective search\nexport     export ReID models to deployment formats\n```\n\nUse `boxmot MODE --help` for mode-specific flags.\n\nUse `--detector`, `--reid`, and `--tracker` for explicit component selection. Legacy aliases such as `--yolo-model`, `--reid-model`, and `--tracking-method` are not supported.\n\nQuick examples:\n\n```bash\n# Track a webcam feed\nboxmot track --detector yolov8n --reid osnet_x0_25_msmt17 --tracker deepocsort --source 0 --show\n\n# Track a video, draw trajectories, and save the result\nboxmot track --detector yolov8n --reid osnet_x0_25_msmt17 --tracker botsort --source video.mp4 --show-trajectories --save\n\n# Evaluate on the MOT17 ablation split with GBRC postprocessing\nboxmot eval --benchmark mot17-ablation --tracker boosttrack --postprocessing gbrc --verbose\n\n# Generate reusable detections and embeddings for a benchmark\nboxmot generate --benchmark mot17-ablation\n\n# Tune tracker hyperparameters on a benchmark\nboxmot tune --benchmark mot17-ablation --tracker ocsort --n-trials 10\n\n# Export a ReID model to ONNX and TensorRT with dynamic input\nboxmot export --weights osnet_x0_25_msmt17.pt --include onnx --include engine --dynamic\n```\n\nCommon `--source` values for `track` and direct-source `generate` runs include `0`, `img.jpg`, `video.mp4`, `path\u002F`, `path\u002F*.jpg`, YouTube URLs, and RTSP \u002F RTMP \u002F HTTP streams.\n\nFor config-driven `generate`, `eval`, and `tune` runs:\n\n- `--benchmark \u003Cbenchmark>` selects a benchmark config from `boxmot\u002Fconfigs\u002Fbenchmarks\u002F`\n- the benchmark config selects its associated dataset config from `boxmot\u002Fconfigs\u002Fdatasets\u002F`\n- the benchmark config selects its associated detector profile from `boxmot\u002Fconfigs\u002Fdetectors\u002F`\n- the benchmark config selects its associated ReID profile from `boxmot\u002Fconfigs\u002Freid\u002F`\n- `--tracker \u003Cname>` selects the tracker and loads `boxmot\u002Fconfigs\u002Ftrackers\u002F\u003Cname>.yaml`\n\nExample:\n\n```bash\nboxmot eval --benchmark mot17-ablation --tracker boosttrack\n```\n\nThe benchmark config's associated dataset, detector, and ReID profiles are used automatically.\n\nTo override the benchmark's detector and ReID defaults explicitly:\n\n```bash\nboxmot eval --benchmark mot17-ablation --detector yolo11s_obb --reid lmbn_n_duke --tracker boosttrack\n```\n\nIf you want to track only selected classes, pass a comma-separated list:\n\n```bash\nboxmot track --detector yolov8s --source 0 --classes 16,17\n```\n\n## Python API\n\nIf you already have detections from your own model, call `tracker.update(...)` once per frame inside your video loop:\n\n```python\nfrom pathlib import Path\n\nimport cv2\nimport numpy as np\nfrom boxmot import BotSort\n\ntracker = BotSort(\n    reid_weights=Path(\"osnet_x0_25_msmt17.pt\"),\n   
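 # Added for clarity: BotSort is a motion + appearance tracker, so it needs ReID weights.\n    # reid_weights: path to the ReID checkpoint; device: \"cpu\" or a CUDA index;\n    # half: toggle half-precision (fp16) inference where supported.\n   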
 device=\"cpu\",\n    half=False,\n)\n\ncap = cv2.VideoCapture(\"video.mp4\")\n\nwhile True:\n    ok, frame = cap.read()\n    if not ok:\n        break\n\n    # Replace this with your detector output for the current frame.\n    # AABB input: (N, 6) = (x1, y1, x2, y2, conf, cls)\n    # OBB input: (N, 7) = (cx, cy, w, h, angle, conf, cls)\n    detections = np.empty((0, 6), dtype=np.float32)\n    # detections = your_detector(frame)\n\n    tracks = tracker.update(detections, frame)\n    tracker.plot_results(frame, show_trajectories=True)\n\n    print(tracks)\n    # AABB output: (N, 8) = (x1, y1, x2, y2, id, conf, cls, det_ind)\n    # OBB output: (N, 9) = (cx, cy, w, h, angle, id, conf, cls, det_ind)\n    # Use det_ind to map a track back to the detector output\n\n    cv2.imshow(\"BoxMOT\", frame)\n    if cv2.waitKey(1) & 0xFF == ord(\"q\"):\n        break\n\ncap.release()\ncv2.destroyAllWindows()\n```\n\nFor end-to-end detector integrations, see the notebooks in [examples](examples).\n\n## Detection Layouts\n\nBoxMOT switches tracking mode from the detection tensor shape:\n\n| Geometry | Input detections | Output tracks |\n| --- | --- | --- |\n| AABB | `(N, 6)` = `(x1, y1, x2, y2, conf, cls)` | `(N, 8)` = `(x1, y1, x2, y2, id, conf, cls, det_ind)` |\n| OBB | `(N, 7)` = `(cx, cy, w, h, angle, conf, cls)` | `(N, 9)` = `(cx, cy, w, h, angle, id, conf, cls, det_ind)` |\n\nOBB-specific tracking paths are enabled automatically when OBB detections are provided. Current OBB-capable trackers: `bytetrack`, `botsort`, `ocsort`, and `sfsort`.\n\n## Examples\n\nThe short commands above are enough to get started. The sections below keep the longer recipe list available without turning the README into a wall of commands.\n\n\u003Cdetails>\n\u003Csummary>\u003Cstrong>Tracking recipes\u003C\u002Fstrong>\u003C\u002Fsummary>\n\nTrack from common sources:\n\n```bash\n# Webcam\nboxmot track --detector yolov8n --reid osnet_x0_25_msmt17 --tracker deepocsort --source 0 --show\n\n# Video file\nboxmot track --detector yolov8n --reid osnet_x0_25_msmt17 --tracker botsort --source video.mp4 --save\n\n# Image directory\nboxmot track --detector yolov8n --reid osnet_x0_25_msmt17 --tracker bytetrack --source path\u002Fto\u002Fimages --save\n\n# Stream or URL\nboxmot track --detector yolov8n --reid osnet_x0_25_msmt17 --tracker ocsort --source 'rtsp:\u002F\u002Fexample.com\u002Fmedia.mp4'\n\n# YouTube\nboxmot track --detector yolov8n --reid osnet_x0_25_msmt17 --tracker boosttrack --source 'https:\u002F\u002Fyoutu.be\u002FZgi9g1ksQHc'\n```\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cstrong>Detector backends\u003C\u002Fstrong>\u003C\u002Fsummary>\n\nSwap detectors without changing the overall CLI:\n\n```bash\n# Ultralytics detection\nboxmot track --detector yolov8n\nboxmot track --detector yolo11n\n\n# Segmentation and pose variants\nboxmot track --detector yolov8n-seg\nboxmot track --detector yolov8n-pose\n\n# YOLOX\nboxmot track --detector yolox_s\n\n# RF-DETR\nboxmot track --detector rf-detr-base\n```\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cstrong>Tracker swaps\u003C\u002Fstrong>\u003C\u002Fsummary>\n\nUse the same detector and ReID model while changing only the tracker:\n\n```bash\nboxmot track --detector yolov8n --reid osnet_x0_25_msmt17 --tracker deepocsort\nboxmot track --detector yolov8n --reid osnet_x0_25_msmt17 --tracker strongsort\nboxmot track --detector yolov8n --reid osnet_x0_25_msmt17 --tracker botsort\nboxmot track --detector yolov8n --reid osnet_x0_25_msmt17 --tracker 
boosttrack\nboxmot track --detector yolov8n --reid osnet_x0_25_msmt17 --tracker hybridsort\n\n# Motion-only trackers\nboxmot track --detector yolov8n --reid osnet_x0_25_msmt17 --tracker bytetrack\nboxmot track --detector yolov8n --reid osnet_x0_25_msmt17 --tracker ocsort\nboxmot track --detector yolov8n --reid osnet_x0_25_msmt17 --tracker sfsort\n```\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cstrong>Filtering and visualization\u003C\u002Fstrong>\u003C\u002Fsummary>\n\nUseful flags for inspection and debugging:\n\n```bash\n# Draw trajectories and show Kalman filter predictions when a track is lost\nboxmot track --detector yolov8n --reid osnet_x0_25_msmt17 --tracker botsort --source video.mp4 --show-trajectories --show-kf-preds --save\n\n# Track only selected classes\nboxmot track --detector yolov8s --source 0 --classes 16,17\n\n# Track each class independently\nboxmot track --detector yolov8n --source video.mp4 --per-class --save\n\n# Highlight one target ID\nboxmot track --detector yolov8n --reid osnet_x0_25_msmt17 --tracker deepocsort --source video.mp4 --target-id 7 --show\n```\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cstrong>Evaluation and tuning\u003C\u002Fstrong>\u003C\u002Fsummary>\n\nBenchmark on built-in MOT-style dataset shortcuts:\n\n```bash\n# Reproduce README-style MOT17 results\nboxmot eval --benchmark mot17-ablation --tracker boosttrack --verbose\n\n# MOT20 ablation split\nboxmot eval --benchmark mot20-ablation --tracker boosttrack --verbose\n\n# DanceTrack ablation split\nboxmot eval --benchmark dancetrack-ablation --tracker boosttrack --verbose\n\n# VisDrone ablation split\nboxmot eval --benchmark visdrone-ablation --tracker botsort --verbose\n\n# Apply postprocessing\nboxmot eval --benchmark mot17-ablation --tracker boosttrack --postprocessing gsi\nboxmot eval --benchmark mot17-ablation --tracker boosttrack --postprocessing gbrc\n\n# Generate detections and embeddings once for a benchmark\nboxmot generate --benchmark mot17-ablation\n\n# Generate detections and embeddings for a direct dataset path\nboxmot generate --detector yolov8n --reid osnet_x0_25_msmt17 --source .\u002Fassets\u002FMOT17-mini\u002Ftrain\n\n# Tune on a built-in benchmark config\nboxmot tune --benchmark mot17-ablation --tracker boosttrack --n-trials 9\n\n# Tune a tracker with explicit detector\u002FReID overrides\nboxmot tune --benchmark mot17-ablation --detector yolo11s_obb --reid lmbn_n_duke --tracker botsort --n-trials 9\n```\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cstrong>Export and OBB\u003C\u002Fstrong>\u003C\u002Fsummary>\n\nDeployment and oriented-box examples:\n\n```bash\n# Export to ONNX\nboxmot export --weights osnet_x0_25_msmt17.pt --include onnx --device cpu\n\n# Export to OpenVINO\nboxmot export --weights osnet_x0_25_msmt17.pt --include openvino --device cpu\n\n# Export to TensorRT with dynamic input\nboxmot export --weights osnet_x0_25_msmt17.pt --include engine --device 0 --dynamic\n```\n\nOBB references:\n\n- Notebook: [examples\u002Fdet\u002Fobb.ipynb](examples\u002Fdet\u002Fobb.ipynb)\n- OBB-capable trackers: `bytetrack`, `botsort`, `ocsort`, `sfsort`\n\n\u003C\u002Fdetails>\n\n## Contributing\n\nIf you want to contribute, start with [CONTRIBUTING.md](CONTRIBUTING.md).\n\n## Contributors\n\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fmikel-brostrom\u002Fboxmot\u002Fgraphs\u002Fcontributors\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmikel-brostrom_boxmot_readme_102cac8deb2c.png\" 
alt=\"BoxMOT contributors\">\n\u003C\u002Fa>\n\n## Support and Citation\n\n- Bugs and feature requests: [GitHub Issues](https:\u002F\u002Fgithub.com\u002Fmikel-brostrom\u002Fboxmot\u002Fissues)\n- Questions and discussion: [GitHub Discussions](https:\u002F\u002Fgithub.com\u002Fmikel-brostrom\u002Fboxmot\u002Fdiscussions) or [Discord](https:\u002F\u002Fdiscord.gg\u002FtUmFEcYU4q)\n- Citation metadata: [CITATION.cff](CITATION.cff)\n- Commercial support: `box-mot@outlook.com`\n","\u003Cdiv align=\"center\" markdown=\"1\">\n\n  \u003Cimg width=\"640\"\n       src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmikel-brostrom_boxmot_readme_7d3436691fdf.gif\"\n       alt=\"BoxMOT demo\">\n  \u003Cbr>\n\n  \u003Ca href=\"https:\u002F\u002Ftrendshift.io\u002Frepositories\u002F13239\" target=\"_blank\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmikel-brostrom_boxmot_readme_0b0cf02be0f5.png\" alt=\"mikel-brostrom%2Fboxmot | Trendshift\" style=\"width: 250px; height: 55px;\" width=\"250\" height=\"55\">\u003C\u002Fa>\n\n  [![CI](https:\u002F\u002Fgithub.com\u002Fmikel-brostrom\u002Fboxmot\u002Factions\u002Fworkflows\u002Fci.yml\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fmikel-brostrom\u002Fboxmot\u002Factions\u002Fworkflows\u002Fci.yml)\n  [![PyPI version](https:\u002F\u002Fbadge.fury.io\u002Fpy\u002Fboxmot.svg)](https:\u002F\u002Fbadge.fury.io\u002Fpy\u002Fboxmot)\n  [![downloads](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmikel-brostrom_boxmot_readme_cfa2ada7258b.png)](https:\u002F\u002Fpepy.tech\u002Fproject\u002Fboxmot)\n  [![license](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Flicense-AGPL%203.0-blue)](https:\u002F\u002Fgithub.com\u002Fmikel-brostrom\u002Fboxmot\u002Fblob\u002Fmaster\u002FLICENSE)\n  [![python-version](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fpyversions\u002Fboxmot)](https:\u002F\u002Fbadge.fury.io\u002Fpy\u002Fboxmot)\n  [![colab](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F18nIqkBr68TkK8dHdarxTco6svHUJGggY?usp=sharing)\n  [![DOI](https:\u002F\u002Fzenodo.org\u002Fbadge\u002FDOI\u002F10.5281\u002Fzenodo.8132989.svg)](https:\u002F\u002Fdoi.org\u002F10.5281\u002Fzenodo.8132989)\n  [![docker pulls](https:\u002F\u002Fimg.shields.io\u002Fdocker\u002Fpulls\u002Fboxmot\u002Fboxmot?logo=docker)](https:\u002F\u002Fhub.docker.com\u002Fr\u002Fboxmot\u002Fboxmot)\n  [![discord](https:\u002F\u002Fimg.shields.io\u002Fdiscord\u002F1377565354326495283?logo=discord&label=discord&labelColor=fff&color=5865f2)](https:\u002F\u002Fdiscord.gg\u002FtUmFEcYU4q)\n  [![Ask DeepWiki](https:\u002F\u002Fdeepwiki.com\u002Fbadge.svg)](https:\u002F\u002Fdeepwiki.com\u002Fmikel-brostrom\u002Fboxmot)\n\n\u003C\u002Fdiv>\n\nBoxMOT 为您提供统一的 CLI 和 Python API，用于运行、评估、调优和导出现代多目标跟踪（multi-object tracking）流水线。无需重写检测器（detector）堆栈即可切换跟踪器（tracker），在实验中重复使用缓存的检测结果和嵌入（embeddings），并在本地对 MOT 风格的数据集进行基准测试。\n\n\u003Cdiv align=\"center\" markdown=\"1\">\n\n[安装](#installation) • [指标](#benchmark-results-mot17-ablation-split) • [命令行界面](#cli) • [Python API](#python-api) • [检测布局](#detection-layouts) • [示例](#examples) • [贡献](#contributing)\n\n\u003C\u002Fdiv>\n\n## 为何选择 BoxMOT\n\n- 统一接口支持 `track`（跟踪）、`generate`（生成）、`eval`（评估）、`tune`（调优）和 `export`（导出）。\n- 兼容检测、分割和姿态模型，只要它们输出边界框（boxes）。\n- 支持仅运动（motion-only）跟踪器以及运动+外观（motion + appearance）跟踪器。\n- 重复使用保存的检测结果和嵌入（embeddings）以加速重复评估和调优。\n- 原生支持 AABB（轴对齐边界框）和 OBB（定向边界框）检测布局。\n- 包含针对 MOT17、MOT20 和 DanceTrack 
消融（ablation）分割的本地基准测试工作流。\n\n## 安装\n\nBoxMOT 支持 Python `3.9` 至 `3.12`。\n\n```bash\npip install boxmot\nboxmot --help\n```\n\n## 基准测试结果（MOT17 消融分割）\n\n\u003Cdiv align=\"center\" markdown=\"1\">\n\n\u003C!-- START TRACKER TABLE -->\n| 跟踪器 | 状态 | OBB | HOTA↑ | MOTA↑ | IDF1↑ | FPS |\n| :-----: | :-----: | :-: | :---: | :---: | :---: | :---: |\n| [botsort](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.14651) | ✅ | ✅ | 69.418 | 78.232 | 81.812 | 12 |\n| [boosttrack](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.13003) | ✅ | ❌ | 69.253 | 75.914 | 83.206 | 13 |\n| [strongsort](https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.13514) | ✅ | ❌ | 68.05 | 76.185 | 80.763 | 11 |\n| [deepocsort](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.11813) | ✅ | ❌ | 67.796 | 75.868 | 80.514 | 12 |\n| [bytetrack](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.06864) | ✅ | ✅ | 67.68 | 78.039 | 79.157 | 720 |\n| [hybridsort](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.00783) | ✅ | ❌ | 67.39 | 74.127 | 79.105 | 25 |\n| [ocsort](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.14360) | ✅ | ✅ | 66.441 | 74.548 | 77.899 | 890 |\n| [sfsort](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2404.07553) | ✅ | ✅ | 62.653 | 76.87 | 69.184 | 6000 |\n\u003C!-- END TRACKER TABLE -->\n\n\u003Csub>评估在 MOT17 训练集的后半部分上运行，因为验证集分割未公开，且消融（ablation）检测器在前半部分上训练。结果使用了[预生成的检测结果和嵌入](https:\u002F\u002Fgithub.com\u002Fmikel-brostrom\u002Fboxmot\u002Freleases\u002Fdownload\u002Fv11.0.9\u002Fruns2.zip)，每个跟踪器均按其默认仓库设置进行配置。\u003C\u002Fsub>\n\n\u003C\u002Fdiv>\n\n## 命令行界面\n\nBoxMOT 提供统一的命令行界面（CLI），语法简洁：\n\n```bash\nboxmot MODE [OPTIONS] [DETECTOR] [REID] [TRACKER]\n```\n\n模式：\n\n```text\ntrack      在摄像头、图像、视频、目录或流上运行检测器 + 跟踪器\ngenerate   预计算检测结果和嵌入（embeddings）以供后续重复使用\neval       在 MOT 风格的数据集上进行基准测试并应用可选的后处理（postprocessing）\ntune       通过多目标搜索优化跟踪器超参数（hyperparameters）\nexport     将 ReID 模型导出为部署格式\n```\n\n使用 `boxmot MODE --help` 查看特定模式的标志。\n\n使用 `--detector`、`--reid` 和 `--tracker` 进行显式的组件选择。不支持旧版别名，如 `--yolo-model`、`--reid-model` 和 `--tracking-method`。\n\n快速示例：\n\n```bash\n# 跟踪摄像头输入\nboxmot track --detector yolov8n --reid osnet_x0_25_msmt17 --tracker deepocsort --source 0 --show\n\n# 跟踪视频，绘制轨迹并保存结果\nboxmot track --detector yolov8n --reid osnet_x0_25_msmt17 --tracker botsort --source video.mp4 --show-trajectories --save\n\n# 在 MOT17 消融（ablation）分割上评估，使用 GBRC 后处理\nboxmot eval --benchmark mot17-ablation --tracker boosttrack --postprocessing gbrc --verbose\n\n# 为基准测试生成可复用的检测结果和嵌入（embeddings）\nboxmot generate --benchmark mot17-ablation\n\n# 在基准测试上调优跟踪器超参数（hyperparameters）\nboxmot tune --benchmark mot17-ablation --tracker ocsort --n-trials 10\n\n# 将 ReID 模型导出为 ONNX 和 TensorRT 格式，支持动态输入\nboxmot export --weights osnet_x0_25_msmt17.pt --include onnx --include engine --dynamic\n```\n\n`track` 和直接源 `generate` 运行中常见的 `--source` 值包括 `0`、`img.jpg`、`video.mp4`、`path\u002F`、`path\u002F*.jpg`、YouTube 链接以及 RTSP \u002F RTMP \u002F HTTP 流。\n\n对于配置驱动的 `generate`、`eval` 和 `tune` 运行：\n\n- `--benchmark \u003Cbenchmark>` 从 `boxmot\u002Fconfigs\u002Fbenchmarks\u002F` 中选择基准测试配置\n- 基准测试配置会自动选择 `boxmot\u002Fconfigs\u002Fdatasets\u002F` 中关联的数据集配置\n- 基准测试配置会自动选择 `boxmot\u002Fconfigs\u002Fdetectors\u002F` 中关联的检测器配置文件\n- 基准测试配置会自动选择 `boxmot\u002Fconfigs\u002Freid\u002F` 中关联的 ReID 配置文件\n- `--tracker \u003Cname>` 选择跟踪器并加载 `boxmot\u002Fconfigs\u002Ftrackers\u002F\u003Cname>.yaml`\n\n示例：\n\n```bash\nboxmot eval --benchmark mot17-ablation --tracker boosttrack\n```\n\n基准测试配置会自动使用其关联的数据集、检测器和 ReID 配置文件。\n\n要显式覆盖基准测试的检测器和 ReID 默认设置：\n\n```bash\nboxmot eval --benchmark mot17-ablation --detector 
yolo11s_obb --reid lmbn_n_duke --tracker boosttrack\n```\n\n如需仅跟踪指定类别，请传入逗号分隔的列表：\n\n```bash\nboxmot track --detector yolov8s --source 0 --classes 16,17\n```\n\n## Python API\n\n如果您已经拥有自己的模型检测结果，请在视频循环中每帧调用一次 tracker.update(...) 方法：\n\n```python\nfrom pathlib import Path\n\nimport cv2\nimport numpy as np\nfrom boxmot import BotSort\n\ntracker = BotSort(\n    reid_weights=Path(\"osnet_x0_25_msmt17.pt\"),\n    device=\"cpu\",\n    half=False,\n)\n\ncap = cv2.VideoCapture(\"video.mp4\")\n\nwhile True:\n    ok, frame = cap.read()\n    if not ok:\n        break\n\n    # Replace this with your detector output for the current frame.\n    # AABB input: (N, 6) = (x1, y1, x2, y2, conf, cls)\n    # OBB input: (N, 7) = (cx, cy, w, h, angle, conf, cls)\n    detections = np.empty((0, 6), dtype=np.float32)\n    # detections = your_detector(frame)\n\n    tracks = tracker.update(detections, frame)\n    tracker.plot_results(frame, show_trajectories=True)\n\n    print(tracks)\n    # AABB output: (N, 8) = (x1, y1, x2, y2, id, conf, cls, det_ind)\n    # OBB output: (N, 9) = (cx, cy, w, h, angle, id, conf, cls, det_ind)\n    # Use det_ind to map a track back to the detector output\n\n    cv2.imshow(\"BoxMOT\", frame)\n    if cv2.waitKey(1) & 0xFF == ord(\"q\"):\n        break\n\ncap.release()\ncv2.destroyAllWindows()\n```\n\n对于端到端的检测器集成，请查看 [examples](examples) 中的 notebooks。\n\n## 检测布局\n\nBoxMOT 根据检测张量形状切换跟踪模式：\n\n| 几何类型 | 输入检测 | 输出跟踪 |\n| --- | --- | --- |\n| AABB (轴对齐边界框) | `(N, 6)` = `(x1, y1, x2, y2, conf, cls)` | `(N, 8)` = `(x1, y1, x2, y2, id, conf, cls, det_ind)` |\n| OBB (定向边界框) | `(N, 7)` = `(cx, cy, w, h, angle, conf, cls)` | `(N, 9)` = `(cx, cy, w, h, angle, id, conf, cls, det_ind)` |\n\n当提供 OBB (定向边界框) 检测结果时，OBB 专用跟踪路径会自动启用。当前支持 OBB 的跟踪器有：`bytetrack`、`botsort`、`ocsort` 和 `sfsort`。\n\n## 示例\n\n上面的简短命令足以开始使用。以下部分保留了更长的示例列表，而不会让 README 变成命令墙。\n\n\u003Cdetails>\n\u003Csummary>\u003Cstrong>跟踪示例\u003C\u002Fstrong>\u003C\u002Fsummary>\n\n从常见来源进行跟踪：\n\n```bash\n# 摄像头\nboxmot track --detector yolov8n --reid osnet_x0_25_msmt17 --tracker deepocsort --source 0 --show\n\n# 视频文件\nboxmot track --detector yolov8n --reid osnet_x0_25_msmt17 --tracker botsort --source video.mp4 --save\n\n# 图像目录\nboxmot track --detector yolov8n --reid osnet_x0_25_msmt17 --tracker bytetrack --source path\u002Fto\u002Fimages --save\n\n# 视频流或 URL\nboxmot track --detector yolov8n --reid osnet_x0_25_msmt17 --tracker ocsort --source 'rtsp:\u002F\u002Fexample.com\u002Fmedia.mp4'\n\n# YouTube\nboxmot track --detector yolov8n --reid osnet_x0_25_msmt17 --tracker boosttrack --source 'https:\u002F\u002Fyoutu.be\u002FZgi9g1ksQHc'\n```\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cstrong>检测器后端\u003C\u002Fstrong>\u003C\u002Fsummary>\n\n在不改变整体 CLI (命令行界面) 的情况下切换检测器：\n\n```bash\n# Ultralytics 检测\nboxmot track --detector yolov8n\nboxmot track --detector yolo11n\n\n# 分割和姿态变体\nboxmot track --detector yolov8n-seg\nboxmot track --detector yolov8n-pose\n\n# YOLOX\nboxmot track --detector yolox_s\n\n# RF-DETR\nboxmot track --detector rf-detr-base\n```\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cstrong>跟踪器切换\u003C\u002Fstrong>\u003C\u002Fsummary>\n\n使用相同的检测器和 ReID (重识别) 模型，仅切换跟踪器：\n\n```bash\nboxmot track --detector yolov8n --reid osnet_x0_25_msmt17 --tracker deepocsort\nboxmot track --detector yolov8n --reid osnet_x0_25_msmt17 --tracker strongsort\nboxmot track --detector yolov8n --reid osnet_x0_25_msmt17 --tracker botsort\nboxmot track --detector yolov8n --reid osnet_x0_25_msmt17 --tracker boosttrack\nboxmot track 
--detector yolov8n --reid osnet_x0_25_msmt17 --tracker hybridsort\n\n# 仅运动跟踪器\nboxmot track --detector yolov8n --reid osnet_x0_25_msmt17 --tracker bytetrack\nboxmot track --detector yolov8n --reid osnet_x0_25_msmt17 --tracker ocsort\nboxmot track --detector yolov8n --reid osnet_x0_25_msmt17 --tracker sfsort\n```\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cstrong>过滤与可视化\u003C\u002Fstrong>\u003C\u002Fsummary>\n\n用于检查和调试的有用标志：\n\n```bash\n# 绘制轨迹并在跟踪丢失时显示卡尔曼滤波预测\nboxmot track --detector yolov8n --reid osnet_x0_25_msmt17 --tracker botsort --source video.mp4 --show-trajectories --show-kf-preds --save\n\n# 仅跟踪选定类别\nboxmot track --detector yolov8s --source 0 --classes 16,17\n\n# 独立跟踪每个类别\nboxmot track --detector yolov8n --source video.mp4 --per-class --save\n\n# 高亮显示特定目标 ID\nboxmot track --detector yolov8n --reid osnet_x0_25_msmt17 --tracker deepocsort --source video.mp4 --target-id 7 --show\n```\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cstrong>评估与调优\u003C\u002Fstrong>\u003C\u002Fsummary>\n\n在内置的 MOT (多目标跟踪) 风格数据集快捷方式上进行基准测试：\n\n```bash\n# 复现 README 风格的 MOT17 结果\nboxmot eval --benchmark mot17-ablation --tracker boosttrack --verbose\n\n# MOT20 消融分割\nboxmot eval --benchmark mot20-ablation --tracker boosttrack --verbose\n\n# DanceTrack 消融分割\nboxmot eval --benchmark dancetrack-ablation --tracker boosttrack --verbose\n\n# VisDrone 消融分割\nboxmot eval --benchmark visdrone-ablation --tracker botsort --verbose\n\n# 应用后处理\nboxmot eval --benchmark mot17-ablation --tracker boosttrack --postprocessing gsi\nboxmot eval --benchmark mot17-ablation --tracker boosttrack --postprocessing gbrc\n\n# 为基准测试生成检测和嵌入\nboxmot generate --benchmark mot17-ablation\n\n# 为直接数据集路径生成检测和嵌入\nboxmot generate --detector yolov8n --reid osnet_x0_25_msmt17 --source .\u002Fassets\u002FMOT17-mini\u002Ftrain\n\n# 在内置基准配置上调优\nboxmot tune --benchmark mot17-ablation --tracker boosttrack --n-trials 9\n\n# 使用显式的检测器\u002FReID 覆盖调优跟踪器\nboxmot tune --benchmark mot17-ablation --detector yolo11s_obb --reid lmbn_n_duke --tracker botsort --n-trials 9\n```\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>\u003Cstrong>导出与 OBB\u003C\u002Fstrong>\u003C\u002Fsummary>\n\n部署和定向框示例：\n\n```bash\n# 导出为 ONNX\nboxmot export --weights osnet_x0_25_msmt17.pt --include onnx --device cpu\n\n# 导出为 OpenVINO\nboxmot export --weights osnet_x0_25_msmt17.pt --include openvino --device cpu\n\n# 导出为支持动态输入的 TensorRT\nboxmot export --weights osnet_x0_25_msmt17.pt --include engine --device 0 --dynamic\n```\n\nOBB 参考：\n\n- Notebook: [examples\u002Fdet\u002Fobb.ipynb](examples\u002Fdet\u002Fobb.ipynb)\n- 支持 OBB 的跟踪器：`bytetrack`、`botsort`、`ocsort`、`sfsort`\n\n\u003C\u002Fdetails>\n\n## 贡献\n\n如果您想贡献代码，请从 [CONTRIBUTING.md](CONTRIBUTING.md) 开始。\n\n## 贡献者\n\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fmikel-brostrom\u002Fboxmot\u002Fgraphs\u002Fcontributors\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmikel-brostrom_boxmot_readme_102cac8deb2c.png\" alt=\"BoxMOT contributors\">\n\u003C\u002Fa>\n\n## 支持与引用\n\n- 错误报告和功能请求：[GitHub Issues](https:\u002F\u002Fgithub.com\u002Fmikel-brostrom\u002Fboxmot\u002Fissues)（GitHub 问题追踪）\n- 问题与讨论：[GitHub Discussions](https:\u002F\u002Fgithub.com\u002Fmikel-brostrom\u002Fboxmot\u002Fdiscussions)（GitHub 讨论区）或 [Discord](https:\u002F\u002Fdiscord.gg\u002FtUmFEcYU4q)（Discord 聊天平台）\n- 引用元数据：[CITATION.cff](CITATION.cff)\n- 商业支持：`box-mot@outlook.com`","# BoxMOT 快速上手指南\n\n## 环境准备\n\n- **Python 版本**：3.9 - 3.12\n- **操作系统**：Linux、Windows、macOS 均支持\n- **硬件要求**：支持 CPU 和 GPU（推荐 NVIDIA 
GPU 加速）\n\n## 安装步骤\n\n```bash\npip install boxmot\nboxmot --help\n```\n\n## 基本使用\n\n### 方式一：命令行工具（CLI）\n\n**实时追踪摄像头画面：**\n```bash\nboxmot track --detector yolov8n --reid osnet_x0_25_msmt17 --tracker deepocsort --source 0 --show\n```\n\n**追踪视频文件并保存结果：**\n```bash\nboxmot track --detector yolov8n --reid osnet_x0_25_msmt17 --tracker botsort --source video.mp4 --save\n```\n\n### 方式二：Python API\n\n在代码中集成追踪功能：\n\n```python\nfrom pathlib import Path\nimport cv2\nimport numpy as np\nfrom boxmot import BotSort\n\n# 初始化追踪器\ntracker = BotSort(\n    reid_weights=Path(\"osnet_x0_25_msmt17.pt\"),\n    device=\"cpu\",\n    half=False,\n)\n\n# 读取视频\ncap = cv2.VideoCapture(\"video.mp4\")\n\nwhile True:\n    ok, frame = cap.read()\n    if not ok:\n        break\n\n    # 用你的检测器获取检测结果\n    # 格式: (N, 6) = (x1, y1, x2, y2, conf, cls)\n    detections = np.empty((0, 6), dtype=np.float32)\n    # detections = your_detector(frame)\n\n    # 更新追踪器\n    tracks = tracker.update(detections, frame)\n    \n    # 可视化结果\n    tracker.plot_results(frame, show_trajectories=True)\n    \n    cv2.imshow(\"BoxMOT\", frame)\n    if cv2.waitKey(1) & 0xFF == ord(\"q\"):\n        break\n\ncap.release()\ncv2.destroyAllWindows()\n```\n\n**提示**：更多使用示例和高级功能（如模型评估、参数调优）请参考官方文档。","某AI初创公司正在开发智慧零售分析系统，需要在超市监控视频中实时跟踪顾客移动轨迹，分析货架停留时间和区域热度，但店内货架密集、顾客遮挡严重，且算法迭代频繁。\n\n### 没有 boxmot 时\n\n- **算法切换像\"拆墙重装\"**：测试Bytetrack、Botsort等不同跟踪器时，每个算法的接口和数据格式都不一样，需要重写大量胶水代码，折腾一周才能对比效果\n- **重复计算浪费生命**：每次调整跟踪参数都要重新跑YOLO检测和ReID特征提取，8小时的视频数据要处理一整天，风扇狂转却只是在重复劳动\n- **评估全靠\"肉眼观测\"**：没有标准化的MOT评估工具，只能盯着视频数ID切换次数，无法量化HOTA、IDF1等指标，优化方向全靠猜\n- **倾斜货架成盲区**：货架呈45度角摆放时，轴对齐边界框重叠严重，跟踪器频繁丢失目标，只能手动写旋转框后处理，bug层出不穷\n\n### 使用 boxmot 后\n\n- **一行命令切换算法**：`boxmot track botsort`改成`boxmot track ocsort`即可对比性能，统一的Python API让算法替换变成5分钟的事，快速找到最适合拥挤场景的OCSort\n- **缓存机制省90%时间**：首次运行自动保存检测结果和ReID特征，后续调参直接加载缓存，8小时视频二次处理仅需20分钟，GPU资源专注跟踪本身\n- **内置标准化评估**：`boxmot eval`一键生成MOT17格式的指标报告，HOTA、MOTA、IDF1一目了然，优化方向清晰可量化，周报有数据支撑\n- **原生支持旋转框**：OBB布局直接传入，倾斜货架场景下ID切换率降低60%，无需手写复杂后处理，代码量减少一半\n\nboxmot让团队从繁琐的算法适配中解放出来，专注优化零售场景的业务逻辑，两周就上线了原本需要两个月才能完成的MVP。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fmikel-brostrom_boxmot_e6f05243.png","mikel-brostrom","Mike","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fmikel-brostrom_6a4b3a81.png","Applied and R&D ML",null,"Sweden  ⇄ Spain","https:\u002F\u002Fgithub.com\u002Fmikel-brostrom",[22,26],{"name":23,"color":24,"percentage":25},"Python","#3572A5",99.9,{"name":27,"color":28,"percentage":29},"Dockerfile","#384d54",0.1,8087,1891,"2026-04-04T23:58:37","AGPL-3.0",3,"未说明","推荐NVIDIA GPU以提升性能，但支持CPU运行",{"notes":38,"python":39,"dependencies":40},"支持8种跟踪器（含运动\u002F外观模型），兼容AABB\u002FOBB检测框格式；提供CLI和Python API；支持检测\u002FReID特征缓存以加速实验；内置MOT17\u002F20等数据集评测流程；提供Docker镜像和Colab示例","3.9, 3.10, 3.11, 3.12",[41],"未在README中明确列出",[43,44,45,46],"音频","视频","开发框架","图像",[48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65],"strongsort","bytetrack","ocsort","deep-learning","segmentation","tensorrt","tracking-by-detection","yolo","botsort","deepocsort","multi-object-tracking","mot","mots","multi-object-tracking-segmentation","improvedassociation","boosttrack","oriented-bounding-box-tracking","machine-learning",9,"ready","2026-03-27T02:49:30.150509","2026-04-06T07:13:55.564522",[71,76,81],{"id":72,"question_zh":73,"answer_zh":74,"source_url":75},4452,"在自定义数据集上使用 StrongSORT 时，很多在 detect.py 中能检测到的目标在 track.py 中丢失了，如何解决？","这个问题通常是由于跟踪脚本中的检测阈值设置与检测脚本不同导致的。维护者建议尝试使用 v10.0.50 版本，该版本针对此类问题进行了优化。建议检查 track.py 中的检测置信度阈值参数，确保其与 detect.py 
中的设置一致。具体版本信息见：https:\u002F\u002Fgithub.com\u002Fmikel-brostrom\u002Fyolo_tracking\u002Freleases\u002Ftag\u002Fv10.0.50","https:\u002F\u002Fgithub.com\u002Fmikel-brostrom\u002Fboxmot\u002Fissues\u002F669",{"id":77,"question_zh":78,"answer_zh":79,"source_url":80},4453,"运行 val.py 评估脚本时出现 KeyError: 'HOTA' 错误，如何正确运行自定义数据集的评估？","这个错误通常是因为评估配置不正确导致的。可以通过自定义评估参数来解决。以下是自定义评估的示例代码：\n\n```python\n# run_mot_challenge.py 示例\n\"\"\" run_mot_challenge.py\n\nRun example:\nrun_mot_challenge.py --USE_PARALLEL False --METRICS Hota --TRACKERS_TO_EVAL Lif_T\n\nCommand Line Arguments: Defaults, # Comments\n    Eval arguments:\n        'USE_PARALLEL': False,\n        'NUM_PARALLEL_CORES': 8,\n        'BREAK_ON_ERROR': True,\n        'PRINT_RESULTS': True,\n        'PRINT_ONLY_COMBINED': False,\n        'PRINT_CONFIG': True,\n        'TIME_PROGRESS': True,\n        'OUTPUT_SUMMARY': True,\n        'OUTPUT_DETAILED': True,\n        'PLOT_CURVES': True,\n    Dataset arguments:\n        'GT_FOLDER': os.path.join(code_path, 'data\u002Fgt\u002Fmot_challenge\u002F'),  # Location of GT data\n        'TRACKERS_FOLDER': os.path.join(code_path, 'data\u002Ftrackers\u002Fmot_challenge\u002F'),  # Trackers location\n        'OUTPUT_FOLDER': None,  # Where to save eval results (if None, same as TRACKERS_FOLDER)\n        'TRACKERS_TO_EVAL': None,  # Filenames of trackers to eval (if None, all in folder)\n```\n\n需要修改 mot_challenge_2d_box.py 以适应你的类别，并确保正确设置评估参数。","https:\u002F\u002Fgithub.com\u002Fmikel-brostrom\u002Fboxmot\u002Fissues\u002F883",{"id":82,"question_zh":83,"answer_zh":84,"source_url":85},4454,"在 MOT17 数据集上测试时，OCSORT 的性能比 DeepOCSORT 更好，这是否正常？如何优化 DeepOCSORT？","这是参数配置问题，不是算法本身的问题。DeepOCSORT 需要正确的参数配置才能发挥最佳性能。根据维护者的分析，需要进行以下调整：\n\n1. 修改 deepocsort.yaml 中的参数：\n   - 将某个参数值设置为 3 以匹配原始实现：\n   https:\u002F\u002Fgithub.com\u002Fmikel-brostrom\u002Fyolo_tracking\u002Fblob\u002F774e13322f31931e42ace0f39e572f6c77046a00\u002Fboxmot\u002Fconfigs\u002Fdeepocsort.yaml#L15\n\n2. 
更改关联函数：\r\n   - 将关联函数从 giou 改为 iou\r\n   - OCSORT 配置：https:\u002F\u002Fgithub.com\u002Fmikel-brostrom\u002Fyolo_tracking\u002Fblob\u002F774e13322f31931e42ace0f39e572f6c77046a00\u002Fboxmot\u002Fconfigs\u002Focsort.yaml#L3\r\n   - 参考原始实现：https:\u002F\u002Fgithub.com\u002FGerardMaggiolino\u002FDeep-OC-SORT\u002Fblob\u002F6bb51d027b137233f5c520b6fcc4f2ae387a6ba9\u002Ftrackers\u002Focsort_tracker\u002Focsort.py#L205\r\n\r\n这些参数需要根据具体数据集和子集进行调整，不同数据集的最优参数可能不同。","https:\u002F\u002Fgithub.com\u002Fmikel-brostrom\u002Fboxmot\u002Fissues\u002F1015",[87,92,97,102,107,112,117,122,127,132,137,142,147,152,157,162,167,171,176,181],{"id":88,"version":89,"summary_zh":90,"released_at":91},103908,"v17.0.0","## What's Changed\r\n* Fix engine export by @mikel-brostrom in https:\u002F\u002Fgithub.com\u002Fmikel-brostrom\u002Fboxmot\u002Fpull\u002F2235\r\n* Enable evaluation on FastTracker dataset by @Fleyderer in https:\u002F\u002Fgithub.com\u002Fmikel-brostrom\u002Fboxmot\u002Fpull\u002F2236\r\n* Fix xysr_kf bug by @amaizr in https:\u002F\u002Fgithub.com\u002Fmikel-brostrom\u002Fboxmot\u002Fpull\u002F2241\r\n* Extend all trackers in order to enable OBB tracking by @mikel-brostrom in https:\u002F\u002Fgithub.com\u002Fmikel-brostrom\u002Fboxmot\u002Fpull\u002F2242\r\n* Add toy OBB dataset by @mikel-brostrom in https:\u002F\u002Fgithub.com\u002Fmikel-brostrom\u002Fboxmot\u002Fpull\u002F2249\r\n* Easier cpu\u002Fgpu installation by @mikel-brostrom in https:\u002F\u002Fgithub.com\u002Fmikel-brostrom\u002Fboxmot\u002Fpull\u002F2250\r\n* Fix tflite export by @mikel-brostrom in https:\u002F\u002Fgithub.com\u002Fmikel-brostrom\u002Fboxmot\u002Fpull\u002F2253 https:\u002F\u002Fgithub.com\u002Fmikel-brostrom\u002Fboxmot\u002Fpull\u002F2252\r\n* Oriented bounding box evaluation by @mikel-brostrom in https:\u002F\u002Fgithub.com\u002Fmikel-brostrom\u002Fboxmot\u002Fpull\u002F2256\r\n* Move tracker responsibilities from visualization to base tracker by @mikel-brostrom in https:\u002F\u002Fgithub.com\u002Fmikel-brostrom\u002Fboxmot\u002Fpull\u002F2257\r\n* Extended tuning functionality, default configs for ultralytics YOLO by @Fleyderer in https:\u002F\u002Fgithub.com\u002Fmikel-brostrom\u002Fboxmot\u002Fpull\u002F2259\r\n\r\n## New Contributors\r\n* @amaizr made their first contribution in https:\u002F\u002Fgithub.com\u002Fmikel-brostrom\u002Fboxmot\u002Fpull\u002F2241\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fmikel-brostrom\u002Fboxmot\u002Fcompare\u002Fv16.0.11...v17.0.0","2026-03-30T19:27:53",{"id":93,"version":94,"summary_zh":95,"released_at":96},103909,"v16.0.11","- Integrated SFSORT\r\n- Added SFSORT to CI and Benchmark for automatic metric generation","2026-02-01T22:52:15",{"id":98,"version":99,"summary_zh":100,"released_at":101},103910,"v16.0.10","* single process, batched, multi-sequence detections and embeddings generation. 
Instead of multi-process, one per sequence\r\n* resume enabled for detection and embeddings generation\r\n* enable tuning for visdrone dataset\r\n* unified inference engine for detection and embeddings generation\r\n  * unified inference engine used by both tracking and evaluation\r\n  * which enables `boxmot track` timers in `boxmot eval`\r\n","2026-01-27T22:07:44",{"id":103,"version":104,"summary_zh":105,"released_at":106},103911,"v16.0.9","- Fix visdrone evaluation bug (https:\u002F\u002Fgithub.com\u002Fmikel-brostrom\u002Fboxmot\u002Fissues\u002F2214)\r\n- Improve custom dataset evaluation flow by centralizing classes to eval and distractor classes in configs","2026-01-19T20:22:20",{"id":108,"version":109,"summary_zh":110,"released_at":111},103912,"v16.0.8","Add RTDetr from transformers (https:\u002F\u002Fhuggingface.co\u002FPekingU\u002Frtdetr_v2_r50vd) to detectors","2026-01-06T01:23:28",{"id":113,"version":114,"summary_zh":115,"released_at":116},103913,"v16.0.7","- BugFix: GBRC postprocessing","2026-01-05T11:03:58",{"id":118,"version":119,"summary_zh":120,"released_at":121},103914,"v16.0.6","Fix on-the-fly installs in cloned and pip installed repos\r\n","2025-12-25T17:15:49",{"id":123,"version":124,"summary_zh":125,"released_at":126},103915,"v16.0.5","- Fix fatal evaluation bug\r\n- Fix fatal tuning bug","2025-12-24T11:53:01",{"id":128,"version":129,"summary_zh":130,"released_at":131},103916,"v16.0.4","- Refactor lost tracks visualization class into several classes:\r\n   - trackers that only expose active tracks\r\n   - trackers that maintain explicit lists for lost and removed tracks\r\n- Activate the lost tracks visualization by `--show-lost`\r\n- Improved quick examples in `README`\r\n","2025-12-24T00:08:59",{"id":133,"version":134,"summary_zh":135,"released_at":136},103917,"v16.0.3","Add lost tracks display for Kalman Filter debugging. Try it by:\r\n\r\n`boxmot track yolo12x lmbn_n_duke hybridsort --source 0 --classes 0 --verbose --save --show --show-trajectories`\r\n\r\nhttps:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002F1cec0495-5e22-4ef5-b4d0-65a7cca17276","2025-12-23T22:04:58",{"id":138,"version":139,"summary_zh":140,"released_at":141},103918,"v16.0.2","- Fully integrated multi-class evaluation. 
You have two options:\r\n\r\n    - Create a file called `custom.yaml` under `boxmot\u002Fconfigs\u002Fdatasets\u002F`\r\n\r\n      ```yaml\r\n      # https:\u002F\u002Fmotchallenge.net\u002Fdata\u002FMOT17\u002F\r\n      download:\r\n        runs_url: \"\"\r\n        dataset_url: \"\"\r\n      \r\n      benchmark:\r\n        source: \"assets\u002FMOT17-mini\"      # relative dataset location to boxmot's root\r\n        split: \"train\"                   # set the split \r\n        classes: \"person car bicycle\"    # set the classes to evaluate on\r\n      ```\r\n\r\n    - Run this command with the classes to evaluate on, for example `--classes 0,2`:\r\n\r\n      `boxmot eval --yolo-model yolox_x_MOT17_ablation.pt --reid-model lmbn_n_duke.pt --tracking-method botsort --source MOT17-ablation --classes 0,2`\r\n\r\n- Fully integrated multi-class tuning (by optimizing for averaged metrics over all classes)\r\n\r\nBig thanks to @blaisefinix for providing guidelines to solve this (https:\u002F\u002Fgithub.com\u002Fmikel-brostrom\u002Fboxmot\u002Fissues\u002F2118)","2025-12-23T13:26:01",{"id":143,"version":144,"summary_zh":145,"released_at":146},103919,"v16.0.1","- Fix `botsort` embedding normalization bug\r\n- Increase multiprocess stability when evaluating by avoiding race condition when loading models\r\n- [GBRC postprocessing](https:\u002F\u002Fgithub.com\u002Fvukasin-stanojevic\u002FBoostTrack\u002Fblob\u002Fmaster\u002Ftracker\u002FGBI.py) implemented, use `--postprocessing gbrc` in order to activate it. Example:\r\n\r\n`boxmot eval --yolo-model yolox_x_MOT17_ablation.pt --reid-model lmbn_n_duke.pt --tracking-method botsort --source MOT17-ablation --classes 0 --postprocessing gbrc`\r\n\r\n# GBRC\r\n\r\nTracker | HOTA | MOTA | IDF1 | AssA | AssRe | IDSW | IDs\r\n-- | -- | -- | -- | -- | -- | -- | --\r\nByteTrack (with GBRC) | 68.97% (+1.29) | 79.82% (+1.78) | 80.12% (+0.96) | 70.28% (+1.14) | 76.70% (+1.67) | 167 (-31) | 409 (+0)\r\nByteTrack (without) | 67.68% | 78.04% | 79.16% | 69.14% | 75.03% | 198 | 409\r\nBotSORT (with GBRC) | 71.05% (+1.60) | 80.66% (+2.42) | 83.13% (+1.19) | 73.69% (+1.35) | 79.44% (+1.86) | 105 (-32) | 367 (+0)\r\nBotSORT (without) | 69.45% | 78.24% | 81.94% | 72.34% | 77.58% | 137 | 367\r\n\r\n# GSI\r\n\r\nTracker | HOTA | MOTA | IDF1 | AssA | AssRe | IDSW | IDs\r\n-- | -- | -- | -- | -- | -- | -- | --\r\nByteTrack (with GSI) | 68.51% (+0.83) | 79.67% (+1.63) | 80.05% (+0.89) | 70.00% (+0.86) | 76.51% (+1.48) | 174 (-24) | 409 (+0)\r\nByteTrack (without) | 67.68% | 78.04% | 79.16% | 69.14% | 75.03% | 198 | 409\r\nBotSORT (with GSI) | 70.52% (+1.07) | 80.68% (+2.44) | 83.13% (+1.19) | 73.21% (+0.87) | 79.11% (+1.53) | 102 (-35) | 365 (-2)\r\nBotSORT (without) | 69.45% | 78.24% | 81.94% | 72.34% | 77.58% | 137 | 367","2025-12-22T00:27:49",{"id":148,"version":149,"summary_zh":150,"released_at":151},103920,"v16.0.0","- Improved CLI\r\n- Improved logging\r\n- Refactor\r\n  - Better file organization\r\n  - Better separation of concerns\r\n  - All core functionality now centralized under `boxmot\u002Fengine`\r\n- Repo file history pruned (from +500MB to 16MB)\r\n- Faster repo cloning due to file history pruning\r\n- Timing summary added when running `boxmot track ...`","2025-12-19T00:22:12",{"id":153,"version":154,"summary_zh":155,"released_at":156},103921,"v15.0.10","- Input to CLIP model now set by configuring `from boxmot.appearance.backbones.clip.config.defaults import _C as cfg`\r\n- `mean=[0.5, 0.5, 0.5]` and `std=[0.5, 0.5, 0.5]` instead of `[0.485, 0.456, 0.406]` 
and `[0.229, 0.224, 0.225]` is now used for clip model\r\ncrops \r\n- resize to `[256, 256]` only if models are trained on veri or vehicleid, `[256,128]` otherwise","2025-11-11T03:16:53",{"id":158,"version":159,"summary_zh":160,"released_at":161},103922,"v15.0.9","Fix trajectory plotting crash bug","2025-10-15T09:49:54",{"id":163,"version":164,"summary_zh":165,"released_at":166},103923,"v15.0.8","Auto install evolve dependencies when running:\r\n\r\n`boxmot evolve ...`","2025-10-10T22:10:41",{"id":168,"version":169,"summary_zh":165,"released_at":170},103924,"v15.0.7","2025-10-10T21:59:29",{"id":172,"version":173,"summary_zh":174,"released_at":175},103925,"v15.0.6","Added export command to CLI\r\n\r\n`boxmot export --include openvino --include onnx`","2025-10-10T21:21:46",{"id":177,"version":178,"summary_zh":179,"released_at":180},103926,"v15.0.5","- HybridSort extended in order to support multi-class tracking\r\n- HybridSort trajectory plotting fix","2025-10-06T22:05:33",{"id":182,"version":183,"summary_zh":184,"released_at":185},103927,"v15.0.4","Added HybridSort back to master. By @horsto and @mikel-brostrom ","2025-10-06T20:27:27",[187,196,206,214,222,233],{"id":188,"name":189,"github_repo":190,"description_zh":191,"stars":192,"difficulty_score":34,"last_commit_at":193,"category_tags":194,"status":67},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[45,46,195],"Agent",{"id":197,"name":198,"github_repo":199,"description_zh":200,"stars":201,"difficulty_score":202,"last_commit_at":203,"category_tags":204,"status":67},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[45,195,205],"语言模型",{"id":207,"name":208,"github_repo":209,"description_zh":210,"stars":211,"difficulty_score":202,"last_commit_at":212,"category_tags":213,"status":67},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 
都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[45,46,195],{"id":215,"name":216,"github_repo":217,"description_zh":218,"stars":219,"difficulty_score":202,"last_commit_at":220,"category_tags":221,"status":67},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[45,205],{"id":223,"name":224,"github_repo":225,"description_zh":226,"stars":227,"difficulty_score":202,"last_commit_at":228,"category_tags":229,"status":67},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[46,230,44,231,195,232,205,45,43],"数据工具","插件","其他",{"id":234,"name":235,"github_repo":236,"description_zh":237,"stars":238,"difficulty_score":34,"last_commit_at":239,"category_tags":240,"status":67},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[195,46,45,205,232]]
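
Appendix: a minimal OBB sketch complementing the AABB loop in the embedded README above. It assumes, per the Detection Layouts table, that passing an `(N, 7)` array switches an OBB-capable tracker such as `botsort` into oriented-box mode (BoxMOT is documented to pick the tracking mode from the detection tensor shape). The hard-coded detection, its angle value, and the blank frame are illustrative placeholders only; see `examples/det/obb.ipynb` for the real pipeline.

```python
from pathlib import Path

import numpy as np
from boxmot import BotSort

# botsort is listed as OBB-capable; it still needs ReID weights for appearance cues.
tracker = BotSort(
    reid_weights=Path("osnet_x0_25_msmt17.pt"),
    device="cpu",
    half=False,
)

# One illustrative oriented box: (cx, cy, w, h, angle, conf, cls).
# The (N, 7) shape is what enables the OBB tracking path.
detections = np.array(
    [[320.0, 240.0, 80.0, 40.0, 0.3, 0.9, 0.0]],  # angle units per your detector
    dtype=np.float32,
)

frame = np.zeros((480, 640, 3), dtype=np.uint8)  # placeholder frame
tracks = tracker.update(detections, frame)
# Expected layout: (N, 9) = (cx, cy, w, h, angle, id, conf, cls, det_ind)
print(tracks)
```

As with the AABB loop, the `det_ind` column maps each track back to the row of the detector output it came from.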