[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-GeekAlexis--FastMOT":3,"tool-GeekAlexis--FastMOT":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 
# FastMOT

High-performance multiple object tracking based on YOLO, Deep SORT, and KLT 🚀

[![Hits](https://hits.seeyoufarm.com/api/count/incr/badge.svg?url=https%3A%2F%2Fgithub.com%2FGeekAlexis%2FFastMOT&count_bg=%2379C83D&title_bg=%23555555&icon=&icon_color=%23E7E7E7&title=hits&edge_flat=false)](https://hits.seeyoufarm.com) [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](LICENSE) [![DOI](https://zenodo.org/badge/237143671.svg)](https://zenodo.org/badge/latestdoi/237143671)

<img src="https://oss.gittoolsai.com/images/GeekAlexis_FastMOT_readme_e7ddc9d03f1e.gif" width="400"/> <img src="https://oss.gittoolsai.com/images/GeekAlexis_FastMOT_readme_622110848c42.gif" width="400"/>

FastMOT is an efficient multiple object tracking system built for real-time video analytics. It combines YOLO detection, Deep SORT tracking, and KLT optical-flow interpolation to raise throughput substantially while keeping accuracy high. Traditional trackers are slow and lose targets in hard scenes (moving cameras, dense crowds); by detecting only every N frames and filling the intermediate frames with optical flow, FastMOT cuts the compute load enough to run stably even on embedded devices such as Jetson. Camera motion compensation handles aerial and moving-camera footage that many comparable tools cannot. Inference is accelerated with TensorRT and the core algorithms are optimized with Numba; YOLOv4 and SSD detectors are supported, along with multi-class tracking. It suits computer-vision developers and researchers in robotics and security, especially projects deploying real-time tracking on edge devices: end users would not run it directly, but its output can feed applications such as smart surveillance and traffic analysis. It reaches up to 42 FPS on a Jetson Xavier NX and 50-150 FPS on desktop hardware, a practical balance of accuracy and efficiency.

## News
  - (2021.8.17) Support multi-class tracking
  - (2021.7.4) Support yolov4-p5 and yolov4-p6
  - (2021.2.13) Support Scaled-YOLOv4 (i.e. yolov4-csp/yolov4x-mish/yolov4-csp-swish)
  - (2021.1.3) Add DIoU-NMS for postprocessing
  - (2020.11.28) Docker container provided for x86 Ubuntu
## Description
FastMOT is a custom multiple object tracker that implements:
  - YOLO detector
  - SSD detector
  - Deep SORT + OSNet ReID
  - KLT tracker
  - Camera motion compensation

Two-stage trackers like Deep SORT run detection and feature extraction sequentially, which often becomes a bottleneck. FastMOT significantly speeds up the entire system to run in **real-time** even on Jetson. Motion compensation improves tracking for scenes with a moving camera, where Deep SORT and FairMOT fail.

To achieve faster processing, FastMOT only runs the detector and feature extractor every N frames, while KLT fills in the gaps efficiently. FastMOT also re-identifies objects that moved out of frame to keep the same IDs.

YOLOv4 was trained on CrowdHuman (82% mAP@0.5), and the SSDs are pretrained COCO models from TensorFlow. Both detection and feature extraction use the **TensorRT** backend and perform asynchronous inference. In addition, most algorithms, including KLT, the Kalman filter, and data association, are optimized using Numba.
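A minimal sketch of the skip-frame scheduling described above: run detection and feature extraction every N frames and propagate boxes with optical flow in between. The callables `detect`, `extract_features`, `klt_propagate`, and `associate` are illustrative stand-ins, not FastMOT's actual API.

```python
import cv2

DETECTOR_FRAME_SKIP = 5  # "N": run the expensive stages every N frames

def track_video(frames, detect, extract_features, klt_propagate, associate):
    """Illustrative skip-frame loop: full detection + ReID every N frames,
    cheap KLT optical-flow propagation on the frames in between."""
    tracks = []
    prev_gray = None
    for idx, frame in enumerate(frames):
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if idx % DETECTOR_FRAME_SKIP == 0:
            detections = detect(frame)                        # e.g. YOLOv4 via TensorRT
            embeddings = extract_features(frame, detections)  # e.g. OSNet ReID
            tracks = associate(tracks, detections, embeddings)
        elif prev_gray is not None:
            # Between detections, only shift each track's box with KLT flow
            tracks = klt_propagate(prev_gray, gray, tracks)
        prev_gray = gray
    return tracks
```

Larger N lowers the compute per frame at the cost of some association accuracy, which matches the MOT20 table below (N = 5 loses about 1.7 MOTA versus N = 1).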
\"arch=compute_75,code=sm_75\"\n  docker build -t fastmot:latest .\n  \n  # Run xhost local:root first if you cannot visualize inside the container\n  docker run --gpus all --rm -it -v $(pwd):\u002Fusr\u002Fsrc\u002Fapp\u002FFastMOT -v \u002Ftmp\u002F.X11-unix:\u002Ftmp\u002F.X11-unix -e DISPLAY=unix$DISPLAY -e TZ=$(cat \u002Fetc\u002Ftimezone) fastmot:latest\n  ```\n### Install for Jetson Nano\u002FTX2\u002FXavier NX\u002FXavier\nMake sure to have [JetPack >= 4.4](https:\u002F\u002Fdeveloper.nvidia.com\u002Fembedded\u002Fjetpack) installed and run the script:\n  ```bash\n  .\u002Fscripts\u002Finstall_jetson.sh\n  ```\n### Download models\nPretrained OSNet, SSD, and my YOLOv4 ONNX model are included.\n  ```bash\n  .\u002Fscripts\u002Fdownload_models.sh\n  ```\n### Build YOLOv4 TensorRT plugin\n  ```bash\n  cd fastmot\u002Fplugins\n  make\n  ```\n### Download VOC dataset for INT8 calibration\nOnly required for SSD (not supported on Ubuntu 20.04)\n  ```bash\n  .\u002Fscripts\u002Fdownload_data.sh\n  ```\n\n## Usage\n```bash\n  python3 app.py --input-uri ... --mot\n```\n- Image sequence: `--input-uri %06d.jpg`\n- Video file: `--input-uri file.mp4`\n- USB webcam: `--input-uri \u002Fdev\u002Fvideo0`\n- MIPI CSI camera: `--input-uri csi:\u002F\u002F0`\n- RTSP stream: `--input-uri rtsp:\u002F\u002F\u003Cuser>:\u003Cpassword>@\u003Cip>:\u003Cport>\u002F\u003Cpath>`\n- HTTP stream: `--input-uri http:\u002F\u002F\u003Cuser>:\u003Cpassword>@\u003Cip>:\u003Cport>\u002F\u003Cpath>`\n\nUse `--show` to visualize, `--output-uri` to save output, and `--txt` for MOT compliant results.\n\nShow help message for all options:\n```bash\n  python3 app.py -h\n```\nNote that the first run will be slow due to Numba compilation. To use the FFMPEG backend on x86, set `WITH_GSTREAMER = False` [here](https:\u002F\u002Fgithub.com\u002FGeekAlexis\u002FFastMOT\u002Fblob\u002F3a4cad87743c226cf603a70b3f15961b9baf6873\u002Ffastmot\u002Fvideoio.py#L11)\n\u003Cdetails>\n\u003Csummary> More options can be configured in cfg\u002Fmot.json \u003C\u002Fsummary>\n\n  - Set `resolution` and `frame_rate` that corresponds to the source data or camera configuration (optional). They are required for image sequence, camera sources, and saving txt results. List all configurations for a USB\u002FCSI camera:\n    ```bash\n    v4l2-ctl -d \u002Fdev\u002Fvideo0 --list-formats-ext\n    ```\n  - To swap network, modify `model` under a detector. For example, you can choose from `SSDInceptionV2`, `SSDMobileNetV1`, or `SSDMobileNetV2` for SSD.\n  - If more accuracy is desired and FPS is not an issue, lower `detector_frame_skip`. Similarly, raise `detector_frame_skip` to speed up tracking at the cost of accuracy. You may also want to change `max_age` such that `max_age` × `detector_frame_skip` ≈ 30\n  - Modify `visualizer_cfg` to toggle drawing options.\n  - All parameters are documented in the API.\n\n\u003C\u002Fdetails>\n\n ## Track custom classes\nFastMOT can be easily extended to a custom class (e.g. vehicle). You need to train both YOLO and a ReID network on your object class. Check [Darknet](https:\u002F\u002Fgithub.com\u002FAlexeyAB\u002Fdarknet) for training YOLO and [fast-reid](https:\u002F\u002Fgithub.com\u002FJDAI-CV\u002Ffast-reid) for training ReID. After training, convert weights to ONNX format. The TensorRT plugin adapted from [tensorrt_demos](https:\u002F\u002Fgithub.com\u002Fjkjung-avt\u002Ftensorrt_demos\u002F) is only compatible with Darknet.\n\nFastMOT also supports multi-class tracking. 
## Track custom classes
FastMOT can be easily extended to a custom class (e.g. vehicle). You need to train both YOLO and a ReID network on your object class. Check [Darknet](https://github.com/AlexeyAB/darknet) for training YOLO and [fast-reid](https://github.com/JDAI-CV/fast-reid) for training ReID. After training, convert the weights to ONNX format. The TensorRT plugin adapted from [tensorrt_demos](https://github.com/jkjung-avt/tensorrt_demos/) is only compatible with Darknet.

FastMOT also supports multi-class tracking. It is recommended to train a ReID network for each class to extract features separately.
### Convert YOLO to ONNX
1. Install ONNX version 1.4.1 (not the latest version):
    ```bash
    pip3 install onnx==1.4.1
    ```
2. Convert using your custom cfg and weights:
    ```bash
    ./scripts/yolo2onnx.py --config yolov4.cfg --weights yolov4.weights
    ```
### Add custom YOLOv3/v4
1. Subclass `fastmot.models.YOLO` like here: https://github.com/GeekAlexis/FastMOT/blob/32c217a7d289f15a3bb0c1820982df947c82a650/fastmot/models/yolo.py#L100-L109
    ```
    ENGINE_PATH : Path
        Path to the TensorRT engine.
        If not found, the TensorRT engine will be converted from the ONNX model
        at runtime and cached for later use.
    MODEL_PATH : Path
        Path to the ONNX model.
    NUM_CLASSES : int
        Total number of trained classes.
    LETTERBOX : bool
        Keep aspect ratio when resizing.
    NEW_COORDS : bool
        new_coords Darknet parameter for each yolo layer.
    INPUT_SHAPE : tuple
        Input size in the format `(channel, height, width)`.
    LAYER_FACTORS : List[int]
        Scale factors with respect to the input size for each yolo layer.
    SCALES : List[float]
        scale_x_y Darknet parameter for each yolo layer.
    ANCHORS : List[List[int]]
        Anchors grouped by each yolo layer.
    ```
    Note that the anchors may not follow the same order as in the Darknet cfg file. You need to mask out the anchors for each yolo layer using the indices in `mask` in the Darknet cfg. Unlike YOLOv4, the anchors are usually in reverse order for YOLOv3 and YOLOv3/v4-tiny.
2. Set the class labels to your object classes with `fastmot.models.set_label_map`.
3. Modify cfg/mot.json: set `model` in `yolo_detector_cfg` to the added Python class name and set the `class_ids` of interest. You may want to tune `conf_thresh` based on model performance. A hedged subclass sketch follows below.
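A sketch of steps 1-2: the attribute names come from the docstring above, but every value (paths, class count, factors, scales, anchors) is a placeholder that must be taken from your own Darknet cfg, and the argument format of `set_label_map` is an assumption.

```python
from pathlib import Path
import fastmot.models

class CustomYOLOv4(fastmot.models.YOLO):
    """Sketch of a custom detector class; all values are placeholders."""
    ENGINE_PATH = Path("fastmot/models/custom_yolov4.trt")   # built and cached at runtime
    MODEL_PATH = Path("fastmot/models/custom_yolov4.onnx")
    NUM_CLASSES = 2                       # e.g. car + truck
    LETTERBOX = False
    NEW_COORDS = False
    INPUT_SHAPE = (3, 416, 416)           # (channel, height, width)
    LAYER_FACTORS = [8, 16, 32]           # one per yolo layer
    SCALES = [1.2, 1.1, 1.05]             # scale_x_y per yolo layer
    ANCHORS = [[12, 16, 19, 36, 40, 28],          # grouped per yolo layer via
               [36, 75, 76, 55, 72, 146],         # the `mask` indices in the cfg
               [142, 110, 192, 243, 459, 401]]

# Step 2 (hypothetical label-map format):
fastmot.models.set_label_map({0: "car", 1: "truck"})
```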
### Add custom ReID
1. Subclass `fastmot.models.ReID` like here: https://github.com/GeekAlexis/FastMOT/blob/32c217a7d289f15a3bb0c1820982df947c82a650/fastmot/models/reid.py#L50-L55
    ```
    ENGINE_PATH : Path
        Path to the TensorRT engine.
        If not found, the TensorRT engine will be converted from the ONNX model
        at runtime and cached for later use.
    MODEL_PATH : Path
        Path to the ONNX model.
    INPUT_SHAPE : tuple
        Input size in the format `(channel, height, width)`.
    OUTPUT_LAYOUT : int
        Feature dimension output by the model.
    METRIC : {'euclidean', 'cosine'}
        Distance metric used to match features.
    ```
2. Modify cfg/mot.json: set `model` in `feature_extractor_cfgs` to the added Python class name. For more than one class, add more feature extractor configurations to the `feature_extractor_cfgs` list. You may want to tune `max_assoc_cost` and `max_reid_cost` based on model performance. A matching sketch follows below.
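A matching sketch for step 1 of the ReID side: the attribute names and the allowed `METRIC` values are from the docstring above; the paths and feature dimension are placeholders.

```python
from pathlib import Path
import fastmot.models

class CustomReID(fastmot.models.ReID):
    """Sketch of a custom feature extractor; all values are placeholders."""
    ENGINE_PATH = Path("fastmot/models/custom_reid.trt")  # built and cached at runtime
    MODEL_PATH = Path("fastmot/models/custom_reid.onnx")
    INPUT_SHAPE = (3, 256, 128)   # (channel, height, width) of crops fed to the model
    OUTPUT_LAYOUT = 512           # feature dimension output by the model
    METRIC = "euclidean"          # or "cosine"
```

With both classes defined, point `model` in `yolo_detector_cfg` and in `feature_extractor_cfgs` at the new class names in cfg/mot.json, as described in the steps above.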
## Citation
If you find this repo useful in your project or research, please star it and consider citing:
```bibtex
@software{yukai_yang_2020_4294717,
  author       = {Yukai Yang},
  title        = {{FastMOT: High-Performance Multiple Object Tracking Based on Deep SORT and KLT}},
  month        = nov,
  year         = 2020,
  publisher    = {Zenodo},
  version      = {v1.0.0},
  doi          = {10.5281/zenodo.4294717},
  url          = {https://doi.org/10.5281/zenodo.4294717}
}
```
# FastMOT Quickstart Guide

## Environment

### System requirements
- CUDA ≥ 10
- cuDNN ≥ 7
- TensorRT ≥ 7
- OpenCV ≥ 3.3
- Python 3.6+

### Prerequisites
```bash
pip3 install "numpy>=1.17" "scipy>=1.5" numba==0.48 cupy==9.2
# SSD support requires TensorFlow < 2.0
pip3 install "tensorflow<2.0"
```

> **Jetson users**: make sure [JetPack ≥ 4.4](https://developer.nvidia.com/embedded/jetpack) is installed

## Installation

### 1. Clone the repository
```bash
git clone https://github.com/GeekAlexis/FastMOT.git
cd FastMOT
```

### 2. Download pretrained models
```bash
./scripts/download_models.sh
```

### 3. Build the YOLOv4 TensorRT plugin
```bash
cd fastmot/plugins
make
cd ../..
```

### 4. (Optional) Download the VOC dataset (SSD only)
```bash
./scripts/download_data.sh
```

### 5. One-step install on Jetson devices (recommended)
```bash
./scripts/install_jetson.sh
```

> **x86 users**: to use Docker instead, build the image (the `TRT_IMAGE_VERSION=21.05` build arg targets Ubuntu 20.04):
> ```bash
> docker build --build-arg TRT_IMAGE_VERSION=21.05 -t fastmot:latest .
> ```

## Basic usage

### Minimal run (track a video file)
```bash
python3 app.py --input-uri file.mp4 --mot --show
```

### Supported input sources
| Type | Example |
|------|------|
| Video file | `--input-uri file.mp4` |
| Image sequence | `--input-uri %06d.jpg` |
| USB camera | `--input-uri /dev/video0` |
| RTSP stream | `--input-uri rtsp://<user>:<password>@<ip>:<port>/<path>` |

### Saving results
```bash
python3 app.py --input-uri file.mp4 --mot --show --output-uri output.mp4 --txt
```

> The first run compiles Numba code and is slow; subsequent runs are much faster.
> All parameters can be adjusted in `cfg/mot.json`.
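`--txt` writes tracks in the MOTChallenge text format, one comma-separated row per box: frame, track ID, left, top, width, height, confidence, and trailing fields. A minimal reader for downstream analysis; the file name `results.txt` is only an example:

```python
import csv
from collections import defaultdict

def load_mot_txt(path):
    """Group MOTChallenge-format rows into per-track box lists keyed by ID."""
    tracks = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.reader(f):
            frame, track_id = int(row[0]), int(row[1])
            left, top, width, height = map(float, row[2:6])
            tracks[track_id].append((frame, left, top, width, height))
    return tracks

tracks = load_mot_txt("results.txt")
print(f"loaded {len(tracks)} tracks")
```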
## Example use case

The operations team of a smart logistics park is deploying an automated inspection system that must track transfer robots, forklifts, and people in real time, in order to optimize path planning and prevent collisions. The system runs on a Jetson Xavier NX edge device and must operate around the clock.

### Without FastMOT
- The previous Deep SORT-based pipeline handled only 8-10 FPS, too slow for real-time monitoring, and frequently lost targets.
- Camera shake caused by the park's AGVs could not be compensated by traditional trackers, so trajectories drifted badly.
- Running YOLO detection on every frame kept GPU load high; the device overheated and needed 2-3 restarts per day.
- IDs switched frequently when targets crossed paths; operators manually checked anomalous trajectories, handling over 30 false alarms a day.
- End-to-end latency reached 1.5 s; when a robot changed direction suddenly, the warning system could not react in time, and two minor collisions occurred.

### With FastMOT
- With KLT interpolation and detection every 5 frames, the system holds a steady 18-22 FPS with latency under 0.3 s.
- Built-in camera motion compensation removes the image shift caused by AGV vibration; trajectory smoothness improved by 70%.
- TensorRT acceleration and asynchronous inference cut GPU load by 40%; the device ran 72 hours straight without an overheating shutdown.
- ID switches dropped by 85%; targets are re-identified correctly after occlusion, and false alarms fell below 5 per week.
- Real-time trajectory data feeds the scheduling system directly, automatically optimizing robot paths; collisions dropped to zero.

FastMOT upgrades edge-side multi-object tracking from barely usable to dependable: high accuracy, low latency, and no downtime for industrial real-time monitoring.

## Project facts
- Author: [Yukai Yang (Alexis)](https://github.com/GeekAlexis), "Passionate about generative AI, NLP, and their efficiency and interpretability."
- License: MIT · 1210 stars · 255 forks
- Languages: Python 88.1%, Cuda 6.4%, C++ 2.4%, Dockerfile 1.5%, Shell 1.1%, Makefile 0.5%
- Environment: Linux; NVIDIA GPU required, 8 GB+ VRAM recommended, CUDA >= 10; RAM requirement not specified
- Python 3.x dependencies: `opencv-python>=3.3`, `numpy>=1.17`, `scipy>=1.5`, `numba==0.48`, `cupy==9.2`, `tensorflow<2.0`, `onnx==1.4.1`
- Deployment notes: NVIDIA JetPack 4.4+ is recommended on Jetson devices; initial setup downloads a few GB of model files and builds the TensorRT plugin; Docker is the recommended route on x86; NVIDIA driver >= 450 (Ubuntu 18.04) or >= 465.19.01 (Ubuntu 20.04) is required; SSD models need the VOC dataset for INT8 calibration (not supported on Ubuntu 20.04); the first run is slow due to Numba compilation.
- GitHub topics: jetson, multi-object-tracking, tensorrt, real-time, reid, object-detection, deep-sort, computer-vision, yolov4, ssd, yolov3, people-counter, scaledyolov4, edge-computing, lucas-kanade, video-analysis, deep-learning

## FAQ

**How do I run FastMOT on a Jetson Xavier NX and improve inference speed?**
Make sure the models are optimized with TensorRT and avoid outdated TBB versions. If you hit Numba errors, install numba from source and upgrade TBB to 2019.5 or newer (TBB_INTERFACE_VERSION >= 11005). Using the Docker image and the FFmpeg backend instead of GStreamer also improves the stability of stream reading. ([#34](https://github.com/GeekAlexis/FastMOT/issues/34))

**Does FastMOT support an RTX 3090 GPU and Ubuntu 20.04?**
Yes. Install numpy 1.21.0 or newer and specify compute capability 86 when building the plugin (e.g. set `computes=86`). Also avoid input videos with frequent scene cuts, which can break motion estimation. ([#105](https://github.com/GeekAlexis/FastMOT/issues/105))

**How do I fix the "Unable to read video stream" error when running app.py?**
Build the Docker image with the FFmpeg backend instead of GStreamer (by modifying the Dockerfile) and set the environment variable `-e NO_AT_BRIDGE=1` to work around GTK-related stream-reading issues. Also confirm that the input video path and camera device permissions are correct. ([#11](https://github.com/GeekAlexis/FastMOT/issues/11))

**How do I track several classes (e.g. people and vehicles) with different ReID models?**
Split the detections by class ID with a custom `find_split_indices()` function in mot.py, for example separating vehicle classes (car, truck, bus, etc.) from person, and use a separate ReID model for each group (see the sketch after this FAQ list). Remove the class-consistency assertion in mot.py (lines 84-85) so that different classes are never associated with each other. ([#91](https://github.com/GeekAlexis/FastMOT/issues/91))

**Does FastMOT support YOLOv4-tiny + Deep SORT?**
Yes, the maintainer has confirmed it. Convert the model to a TensorRT engine and configure the detector and tracker parameters accordingly; see the model-replacement notes in the project documentation for custom deployments. ([#1](https://github.com/GeekAlexis/FastMOT/issues/1))

**How do I reduce frequent ID switches on small, fast-moving targets?**
Lower `detector_frame_skip` (e.g. to 1) to detect more often, use a more robust ReID model (such as OSNet), and tune the motion-model parameters. If tracking is still unstable, configure separate ReID models for vehicles and pedestrians via `find_split_indices()`. ([#91](https://github.com/GeekAlexis/FastMOT/issues/91))

**Can I track custom classes with a custom-trained YOLOv4 model?**
Yes. Convert the custom-trained YOLOv4 model to a TensorRT engine and update the detector model path and class labels in cfg/mot.json. Make sure the class order matches training and the class count matches the model output. ([#1](https://github.com/GeekAlexis/FastMOT/issues/1))

**How should I set the stream `buffer_size` for a 1280x720@24fps stream?**
In the video I/O section of cfg/mot.json, set `buffer_size` between 100 and 200 (e.g. 150) to balance latency against stability. If you use GStreamer, switching to the FFmpeg backend gives better compatibility. ([#11](https://github.com/GeekAlexis/FastMOT/issues/11))

**How do I fix a "TBB version too low" error on a Jetson device?**
Build TBB 2019.5 or newer from source: `git clone https://github.com/wjakob/tbb.git && cd tbb/build && cmake .. && make -j && sudo make install`, then reinstall numba (the issue thread suggests `numba==0.51.2`) instead of using the prebuilt pip wheel. ([#34](https://github.com/GeekAlexis/FastMOT/issues/34))

**Does FastMOT support tracking multiple classes at once?**
Yes, but it never associates targets across classes; detection boxes of different classes do not share IDs. To use a different ReID model per class, split the detections by class ID with a custom `find_split_indices()` and load the corresponding model for each group. ([#91](https://github.com/GeekAlexis/FastMOT/issues/91))
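The multi-class FAQs above describe splitting detections by class ID so each class gets its own ReID model. Issue #91 names `find_split_indices()` in mot.py but does not show its signature, so the sketch below only illustrates the splitting idea on a class-sorted detection list; all names and class IDs are hypothetical.

```python
import numpy as np

def find_split_indices(class_ids):
    """Illustrative only: indices where a class-sorted array changes class,
    usable with np.split to form one detection group per class."""
    class_ids = np.asarray(class_ids)
    return np.flatnonzero(np.diff(class_ids)) + 1

# Detections sorted by class ID; run one ReID model per resulting group.
class_ids = [0, 0, 0, 2, 2, 7]  # e.g. person=0, car=2, truck=7
groups = np.split(np.arange(len(class_ids)), find_split_indices(class_ids))
print(groups)  # [array([0, 1, 2]), array([3, 4]), array([5])]
```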
## Releases

### v2.0.0 (2021-08-11)
- Add faster Numba functions for matching and distance metrics
- More robust tracking
  - Average feature
  - Invalidate feature during occlusion
  - Cascaded association with age
  - Duplicate-track merging and removal
  - Greedy matching for ReID
- Restructure API
  - Add a buffer to save past bounding boxes in the Track class
  - Add frame count properties in the Track class
  - Allow subclassing base models outside FastMOT for custom models
  - Documentation for all parameters
  - Visualization options in the mot.json config
  - Rename a few parameters to be more intuitive
- Remove the cython-bbox dependency
- YOLO plugin TensorRT 8 support

### v1.0.0 (2020-11-28)
- All dependencies are included in the Docker image for Ubuntu 18.04