[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-kcg2015--Vehicle-Detection-and-Tracking":3,"tool-kcg2015--Vehicle-Detection-and-Tracking":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 
道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":78,"owner_location":79,"owner_email":80,"owner_twitter":78,"owner_website":78,"owner_url":81,"languages":82,"stars":87,"forks":88,"last_commit_at":89,"license":78,"difficulty_score":10,"env_os":90,"env_gpu":90,"env_ram":90,"env_deps":91,"category_tags":97,"github_topics":98,"view_count":10,"oss_zip_url":78,"oss_zip_packed_at":78,"status":16,"created_at":113,"updated_at":114,"faqs":115,"releases":146},858,"kcg2015\u002FVehicle-Detection-and-Tracking","Vehicle-Detection-and-Tracking","Computer vision based vehicle detection and tracking using Tensorflow Object Detection API and Kalman-filtering","Vehicle-Detection-and-Tracking 是一个基于计算机视觉的开源项目，专注于在视频流中实时检测并追踪多辆车辆。它能模拟车载摄像头视角，自动识别画面中的汽车、公交车和卡车，并在每一帧上精准标注目标位置。这一方案主要解决了传统检测仅关注单帧图像、难以维持目标连续性的痛点，通过算法让系统不仅能“看见”车辆，还能理解其运动轨迹，实现稳定跟踪。\n\n项目代码结构清晰且注重可读性，非常适合自动驾驶开发者、计算机视觉研究人员以及希望快速搭建原型的工程师使用。在这里，用户可以轻松迭代不同的检测器和追踪算法。技术层面，它集成了 TensorFlow Object Detection API 与轻量级 SSD MobileNet 模型，在检测精度与推理速度之间取得了良好平衡；同时引入卡尔曼滤波算法优化追踪逻辑，有效提升了多目标跟踪的鲁棒性。整体流程从初始化到结果展示简单直接，为相关领域的学习与实践提供了便捷的框架支持。","# Vehicle Detection and Tracking\n\n\n## Overview\nThis repo illustrates the detection and tracking of multiple vehicles using a camera mounted inside a self-driving car. The aim here is to provide developers, researchers, and engineers with a simple framework to quickly iterate different detectors and tracking algorithms. In the process, I focus on simplicity and readability of the code. The detection and tracking pipeline is relatively straightforward. It first initializes a detector and a tracker. Next, the detector localizes the vehicles in each video frame. The tracker is then updated with the detection results. Finally, the tracking results are annotated and displayed in the video frame.\n\n
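The loop below is a minimal sketch of that flow; the method and variable names (e.g., ```get_localization```, ```video_frames```) are illustrative stand-ins, and the real pipeline lives in main.py, which the sections below walk through.\n\n```\n# Hypothetical sketch of the per-frame pipeline described above.\ndet = CarDetector()        # wraps the pre-trained detection model (detector.py)\ntracker_list = []          # live Kalman-filter tracks, managed in main.py\n\nfor frame in video_frames:                     # stand-in for the input video\n    z_box = det.get_localization(frame)        # detect vehicle boxes in this frame\n    x_box = [trk.box for trk in tracker_list]  # boxes carried by the current tracks\n    matched, unmatched_dets, unmatched_trks \\\n        = assign_detections_to_trackers(x_box, z_box, iou_thrd = 0.3)\n    # update matched tracks, create tracks for new detections,\n    # age and delete unmatched tracks, then annotate the frame\n```\n\n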
## Key files in this repo\n  \n  \n  * detector.py -- implements the ```CarDetector``` class to output car detection results\n  * tracker.py -- implements Kalman Filter-based prediction and update for tracking\n  * main.py -- implements the detection and tracking pipeline, including detection-track assignment and track management\n  * helpers.py -- helper functions\n  * ssd_mobilenet_v1_coco_11_06_2017\u002Ffrozen_inference_graph.pb -- pre-trained mobilenet-coco model\n\n## Detection\nIn the pipeline, vehicle (car) detection takes a captured image as input and produces bounding boxes as the output. We use the TensorFlow Object Detection API, an open source framework built on top of TensorFlow to construct, train, and deploy object detection models. The Object Detection API also comes with a collection of detection models pre-trained on the COCO dataset that are well suited for fast prototyping. Specifically, we use a lightweight model, ssd\_mobilenet\_v1\_coco, which is based on the Single Shot Multibox Detection (SSD) framework with minimal modification. Though this is a general-purpose detection model (not specifically optimized for vehicle detection), we find that it achieves a good balance between bounding box accuracy and inference time.\n\nThe detector is implemented in the ```CarDetector``` class in detector.py. The output is the coordinates of the bounding boxes (in the format [y\_up, x\_left, y\_down, x\_right]) of all the detected vehicles.\n\nThe COCO dataset contains images of 90 classes, with the first 14 classes all related to transportation, including bicycle, car, and bus. The ID for car is 3.\n\n```\ncategory_index={1: {'id': 1, 'name': u'person'},\n                        2: {'id': 2, 'name': u'bicycle'},\n                        3: {'id': 3, 'name': u'car'},\n                        4: {'id': 4, 'name': u'motorcycle'},\n                        5: {'id': 5, 'name': u'airplane'},\n                        6: {'id': 6, 'name': u'bus'},\n                        7: {'id': 7, 'name': u'train'},\n                        8: {'id': 8, 'name': u'truck'},\n                        9: {'id': 9, 'name': u'boat'},\n                        10: {'id': 10, 'name': u'traffic light'},\n                        11: {'id': 11, 'name': u'fire hydrant'},\n                        13: {'id': 13, 'name': u'stop sign'},\n                        14: {'id': 14, 'name': u'parking meter'}} \n```\nThe following code snippet implements the actual detection using the TensorFlow API.\n\n```\n(boxes, scores, classes, num_detections) = self.sess.run(\n                  [self.boxes, self.scores, self.classes, self.num_detections],\n                  feed_dict={self.image_tensor: image_expanded})\n```\nHere ```boxes```, ```scores```, and ```classes``` represent the bounding box, confidence level, and class name corresponding to each detection, respectively. Next, we select the detections that are cars and have a confidence greater than a threshold (e.g., 0.3 in this case).
\n```\nidx_vec = [i for i, v in enumerate(cls) if ((v==3) and (scores[i]>0.3))]\n```\nTo detect all kinds of vehicles, we also include the indices for bus and truck.\n```\nidx_vec = [i for i, v in enumerate(cls) if (((v==3) or (v==6) or (v==8)) and (scores[i]>0.3))]\n```\nTo further reduce possible false positives, we include thresholds for bounding box width, height, and height-to-width ratio.\n\n```\nif ((ratio \u003C 0.8) and (box_h>20) and (box_w>20)):\n    tmp_car_boxes.append(box)\n    print(box, ', confidence: ', scores[idx], 'ratio:', ratio)\nelse:\n    print('wrong ratio or wrong size, ', box, ', confidence: ', scores[idx], 'ratio:', ratio)\n```\n\n## Kalman Filter for Bounding Box Measurement\n\nWe use a Kalman filter to track objects. The Kalman filter has the following important features that tracking can benefit from:\n\n* Prediction of an object's future location\n* Correction of the prediction based on new measurements\n* Reduction of noise introduced by inaccurate detections\n* Facilitating the association of multiple objects with their tracks\n\nThe Kalman filter consists of two steps: prediction and update. The first step uses previous states to predict the current state. The second step uses the current measurement, such as the detection bounding box location, to correct the state. The formulas are provided in the following:\n\n### Kalman Filter Equations:\n#### Prediction phase: notations\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fkcg2015_Vehicle-Detection-and-Tracking_readme_a398e5f31c4f.gif\" alt=\"Drawing\" style=\"width: 250px;\"\u002F>\n#### Prediction phase: equations\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fkcg2015_Vehicle-Detection-and-Tracking_readme_649fc3cc91c4.gif\" alt=\"Drawing\" style=\"width: 125px;\"\u002F>\n#### Update phase: notations\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fkcg2015_Vehicle-Detection-and-Tracking_readme_2ae1b7ca42b1.gif\" alt=\"Drawing\" style=\"width: 250px;\"\u002F>\n#### Update phase: equations\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fkcg2015_Vehicle-Detection-and-Tracking_readme_55f510a22e40.gif\" alt=\"Drawing\" style=\"width: 200px;\"\u002F>\n\n### Kalman Filter Implementation\nIn this section, we describe the implementation of the Kalman filter in detail.\n\nThe state vector has eight elements as follows:\n```\n[up, up_dot, left, left_dot, down, down_dot, right, right_dot]\n```\nThat is, we use the coordinates of the upper-left and lower-right corners of the bounding box, together with their first-order derivatives.\n\nThe process matrix, assuming constant velocity (thus no acceleration), is:\n\n```\nself.F = np.array([[1, self.dt, 0,  0,  0,  0,  0, 0],\n                    [0, 1,  0,  0,  0,  0,  0, 0],\n                    [0, 0,  1,  self.dt, 0,  0,  0, 0],\n                    [0, 0,  0,  1,  0,  0,  0, 0],\n                    [0, 0,  0,  0,  1,  self.dt, 0, 0],\n                    [0, 0,  0,  0,  0,  1,  0, 0],\n                    [0, 0,  0,  0,  0,  0,  1, self.dt],\n                    [0, 0,  0,  0,  0,  0,  0,  1]])\n```\nThe measurement matrix, given that the detector only outputs the coordinates (not the velocity), is:\n\n```\nself.H = np.array([[1, 0, 0, 0, 0, 0, 0, 0],\n                   [0, 0, 1, 0, 0, 0, 0, 0],\n                   [0, 0, 0, 0, 1, 0, 0, 0], \n                   [0, 0, 0, 0, 0, 0, 1, 0]])\n```\nThe state, process, and measurement noise covariances are initialized as:\n\n```\n# Initialize the state covariance\nself.L = 100.0\nself.P = np.diag(self.L*np.ones(8))\n\n# Initialize the process covariance\nself.Q_comp_mat = np.array([[self.dt**4\u002F2., self.dt**3\u002F2.],\n                            [self.dt**3\u002F2., self.dt**2]])\nself.Q = block_diag(self.Q_comp_mat, self.Q_comp_mat,\n                    self.Q_comp_mat, self.Q_comp_mat)\n\n# Initialize the measurement covariance\nself.R_scaler = 1.0\u002F16.0\nself.R_diag_array = self.R_scaler * np.array([self.L, self.L, self.L, self.L])\nself.R = np.diag(self.R_diag_array)\n```\n
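For concreteness, the two phases above translate into NumPy as follows. This is a minimal, self-contained sketch built around the matrices just defined, not a verbatim excerpt from tracker.py:\n\n```\nimport numpy as np\n\n# Minimal sketch of one Kalman predict\u002Fupdate cycle with the matrices above.\n# x is the 8x1 state vector; z is the 4x1 measured box [up, left, down, right].\ndef predict(x, P, F, Q):\n    x = F.dot(x)                # project the state ahead\n    P = F.dot(P).dot(F.T) + Q   # project the state covariance ahead\n    return x, P\n\ndef update(x, P, z, H, R):\n    y = z - H.dot(x)                      # innovation: measurement residual\n    S = H.dot(P).dot(H.T) + R             # innovation covariance\n    K = P.dot(H.T).dot(np.linalg.inv(S))  # Kalman gain\n    x = x + K.dot(y)                      # corrected state\n    P = (np.eye(8) - K.dot(H)).dot(P)     # corrected state covariance\n    return x, P\n```\n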
In the covariance initialization, ```self.R_scaler``` represents the \"magnitude\" of the measurement noise relative to the state noise. A low ```self.R_scaler``` indicates a more reliable measurement. The following figures visualize the impact of measurement noise on the Kalman filter process. The green bounding box represents the prediction (initial) state. The red bounding box represents the measurement.\nIf measurement noise is low, the updated state (aqua-colored bounding box) is very close to the measurement (the aqua bounding box completely overlaps the red bounding box).\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fkcg2015_Vehicle-Detection-and-Tracking_readme_fdad8c4e27fc.png\" alt=\"Drawing\" style=\"width: 300px;\"\u002F>\n\nIn contrast, if measurement noise is high, the updated state is very close to the initial prediction (the aqua bounding box completely overlaps the green bounding box).\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fkcg2015_Vehicle-Detection-and-Tracking_readme_ce2728cf6bed.png\" alt=\"Drawing\" style=\"width: 300px;\"\u002F>\n\n## Detection-to-Tracker Assignment\n\nThe module ```assign_detections_to_trackers(trackers, detections, iou_thrd = 0.3)``` takes the current list of trackers and the new detections, and outputs matched detections, unmatched trackers, and unmatched detections.\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fkcg2015_Vehicle-Detection-and-Tracking_readme_e3b51806b788.png\" alt=\"Drawing\" style=\"width: 300px;\"\u002F>\n\n### Linear Assignment and Hungarian (Munkres) algorithm\n\nIf there are multiple detections, we need to match (assign) each of them to a tracker. We use the intersection over union (IOU) of a tracker bounding box and a detection bounding box as the metric. We solve the assignment problem of maximizing the sum of IOU using the Hungarian algorithm (also known as the Munkres algorithm). The machine learning package scikit-learn has a built-in utility function that implements the Hungarian algorithm (see the note at the end of this section for newer versions).\n\n```\nmatched_idx = linear_assignment(-IOU_mat)\n```\nNote that ```linear_assignment``` by default minimizes an objective function, so we need to reverse the sign of ```IOU_mat``` for maximization.\n\n### Unmatched detections and trackers\n\nBased on the linear assignment results, we keep two lists for unmatched detections and unmatched trackers, respectively. When a car enters a frame and is first detected, it is not matched with any existing track; this particular detection is referred to as an unmatched detection, as shown in the following figure. In addition, any matching with an overlap less than ```iou_thrd``` signifies the existence of an untracked object. When a car leaves the frame, the previously established track has no more detections to associate with; in this scenario, the track is referred to as an unmatched track. Thus, the tracker and the detection associated in such a matching are added to the lists of unmatched trackers and unmatched detections, respectively.\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fkcg2015_Vehicle-Detection-and-Tracking_readme_837879b7889f.png\" alt=\"Drawing\" style=\"width: 300px;\"\u002F>\n\n
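A practical note on the matching step above: recent versions of scikit-learn no longer ship ```linear_assignment```, and ```scipy.optimize.linear_sum_assignment``` solves the same problem. The following sketch shows the whole matching step under that assumption; ```box_iou``` is a hypothetical helper standing in for the repo's IOU computation, and ```trackers``` and ```detections``` are assumed to be lists of boxes in the detector's [y_up, x_left, y_down, x_right] format:\n\n```\nimport numpy as np\nfrom scipy.optimize import linear_sum_assignment\n\ndef box_iou(a, b):\n    # Hypothetical helper: intersection over union of two boxes.\n    y1, x1 = max(a[0], b[0]), max(a[1], b[1])\n    y2, x2 = min(a[2], b[2]), min(a[3], b[3])\n    inter = max(0.0, y2 - y1) * max(0.0, x2 - x1)\n    area_a = (a[2] - a[0]) * (a[3] - a[1])\n    area_b = (b[2] - b[0]) * (b[3] - b[1])\n    return inter \u002F (area_a + area_b - inter)\n\nIOU_mat = np.zeros((len(trackers), len(detections)), dtype=np.float32)\nfor t, trk in enumerate(trackers):\n    for d, det in enumerate(detections):\n        IOU_mat[t, d] = box_iou(trk, det)\n\n# linear_sum_assignment also minimizes, so the sign flip stays the same.\nrow_ind, col_ind = linear_sum_assignment(-IOU_mat)\nmatched_idx = np.stack([row_ind, col_ind], axis=1)\n```\n\n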
## Pipeline\n\nWe include two important design parameters, ```min_hits``` and ```max_age```, in the pipeline. The parameter ```min_hits``` is the number of consecutive matches needed to establish a track. The parameter ```max_age``` is the number of consecutive frames a track can go unmatched before it is deleted. Both parameters need to be tuned to improve the tracking and detection performance.\n\nThe pipeline deals with matched detections, unmatched detections, and unmatched trackers sequentially. We annotate the tracks that meet the ```min_hits``` and ```max_age``` conditions. Proper bookkeeping is also needed to delete stale tracks.\n\nThe following examples show the process of the pipeline. When the car is first detected in the first video frame, running the following line of code returns an empty list, a one-element list, and an empty list for ```matched```, ```unmatched_dets```, and ```unmatched_trks```, respectively.\n\n```\nmatched, unmatched_dets, unmatched_trks \\\n    = assign_detections_to_trackers(x_box, z_box, iou_thrd = 0.3)\n```\nWe thus have a situation of unmatched detections. Unmatched detections are processed by the following code block:\n\n```\nif len(unmatched_dets)>0:\n        for idx in unmatched_dets:\n            z = z_box[idx]\n            z = np.expand_dims(z, axis=0).T\n            tmp_trk = Tracker() # Create a new tracker\n            x = np.array([[z[0], 0, z[1], 0, z[2], 0, z[3], 0]]).T\n            tmp_trk.x_state = x\n            tmp_trk.predict_only()\n            xx = tmp_trk.x_state\n            xx = xx.T[0].tolist()\n            xx = [xx[0], xx[2], xx[4], xx[6]]\n            tmp_trk.box = xx\n            tmp_trk.id = track_id_list.popleft() # assign an ID for the tracker\n            tracker_list.append(tmp_trk)\n            x_box.append(xx)\n```\nThis code block carries out two important tasks: 1) creating a new tracker ```tmp_trk``` for the detection, and 2) carrying out the Kalman filter's predict stage via ```tmp_trk.predict_only()```. Note that this newly created track is still in its probation period, i.e., ```trk.hits = 0```, so the track is not yet established at the end of the pipeline. The output image is the same as the input image - the detection bounding box is not annotated.\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fkcg2015_Vehicle-Detection-and-Tracking_readme_dd0db250bd6d.png\" alt=\"Drawing\" style=\"width: 150px;\"\u002F>\n\n
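Before walking through the second frame, here is a simplified sketch of the bookkeeping described at the start of this section (annotating established tracks and deleting stale ones). The attribute names (```hits```, ```no_losses```, ```box```, ```id```) follow the snippets in this README; treat the exact signature of ```draw_box_label``` (a helper from helpers.py) as an assumption:\n\n```\n# Simplified sketch of the track-management step at the end of each frame.\nfor trk_idx in unmatched_trks:\n    tracker_list[trk_idx].no_losses += 1   # one more frame without a detection\n\n# annotate only established tracks: enough hits and not too many misses\nfor trk in tracker_list:\n    if (trk.hits >= min_hits) and (trk.no_losses \u003C= max_age):\n        frame = draw_box_label(frame, trk.box)\n\n# drop stale tracks and recycle their IDs\nfor trk in tracker_list:\n    if trk.no_losses > max_age:\n        track_id_list.append(trk.id)\ntracker_list = [trk for trk in tracker_list if trk.no_losses \u003C= max_age]\n```\n\n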
When the car is detected again in the second video frame, running ```assign_detections_to_trackers``` returns a one-element list, an empty list, and an empty list for ```matched```, ```unmatched_dets```, and ```unmatched_trks```, respectively. As shown in the following figure, we have a matched detection, which will be processed by the following code block:\n\n```\nif matched.size > 0:\n        for trk_idx, det_idx in matched:\n            z = z_box[det_idx]\n            z = np.expand_dims(z, axis=0).T\n            tmp_trk = tracker_list[trk_idx]\n            tmp_trk.kalman_filter(z)\n            xx = tmp_trk.x_state.T[0].tolist()\n            xx = [xx[0], xx[2], xx[4], xx[6]]\n            x_box[trk_idx] = xx\n            tmp_trk.box = xx\n            tmp_trk.hits += 1\n```\nThis code block carries out two important tasks: 1) carrying out the Kalman filter's prediction and update stages via ```tmp_trk.kalman_filter()```, and 2) increasing the hit count of the track by one via ```tmp_trk.hits += 1```. With this update, the condition ```(trk.hits >= min_hits) and (trk.no_losses \u003C= max_age)``` is satisfied, so the track is fully established. As a result, the bounding box is annotated in the output image, as shown in the figure below.\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fkcg2015_Vehicle-Detection-and-Tracking_readme_685ea3affa02.png\" alt=\"Drawing\" style=\"width: 150px;\"\u002F>\n\n## Issues\n\nThe main issue is occlusion. For example, when one car is passing another car, the two cars can be very close to each other. This can fool the detector into outputting a single (and possibly bigger) bounding box, instead of two separate bounding boxes. In addition, the tracking algorithm may treat this detection as a new detection and set up a new track. The tracking algorithm may fail again when one of the passing cars moves away from the other.\n\n\n","# 车辆检测与跟踪\n\n## 概述\n本仓库展示了如何使用安装在自动驾驶汽车内的摄像头进行多辆车的检测和跟踪。旨在为开发者、研究人员和工程师提供一个简单的框架，以便快速迭代不同的检测器和跟踪算法。在此过程中，我注重代码的简洁性和可读性。检测和跟踪流程相对直接。它首先初始化一个检测器和一个跟踪器。接下来，检测器定位每一帧视频中的车辆。然后使用检测结果更新跟踪器。最后将跟踪结果标注并显示在视频帧中。\n\n## 本仓库中的关键文件\n  \n  \n  * detector.py -- 实现 `CarDetector` 类以输出车辆检测结果\n  * tracker.py -- 实现基于卡尔曼滤波器（Kalman Filter）的预测和更新用于跟踪\n  * main.py -- 实现检测和跟踪流程，包括检测 - 跟踪分配和跟踪管理\n  * helpers.py -- 辅助函数\n  * ssd_mobilenet_v1_coco_11_06_2017\u002Ffrozen_inference_graph.pb -- 预训练的 mobilenet-coco 模型\n\n## 检测\n在流程中，车辆（汽车）检测接收捕获的图像作为输入，并产生边界框（bounding boxes）作为输出。我们使用 TensorFlow Object Detection API（TensorFlow 目标检测 API），这是一个构建在 TensorFlow 之上的开源框架，用于构建、训练和部署目标检测模型。Object Detection API 还附带了一系列在 COCO 数据集上预训练的检测模型集合，非常适合快速原型设计。具体来说，我们使用一个轻量级模型：ssd_mobilenet_v1_coco，它基于单次多框检测（Single Shot Multibox Detection, SSD）框架，修改极少。虽然这是一个通用检测模型（并非专门针对车辆检测优化），但我们发现该模型在边界框精度和推理时间（inference time）之间取得了平衡。\n\n检测器实现在 detector.py 中的 CarDetector 类中。输出是所有检测到的车辆的边界框坐标（格式为 [y_up, x_left, y_down, x_right]）。\n\nCOCO 数据集包含 90 个类别的图像，前 14 个类别都与交通相关，包括自行车、汽车和公共汽车等。汽车的 ID 是 3。\n\n```\ncategory_index={1: {'id': 1, 'name': u'person'},\n                        2: {'id': 2, 'name': u'bicycle'},\n                        3: {'id': 3, 'name': u'car'},\n                        4: {'id': 4, 'name': u'motorcycle'},\n                        5: {'id': 5, 'name': u'airplane'},\n                        6: {'id': 6, 'name': u'bus'},\n                        7: {'id': 7, 'name': u'train'},\n                        8: {'id': 8, 'name': u'truck'},\n                        9: {'id': 9, 'name': u'boat'},\n                        10: {'id': 10, 'name': u'traffic light'},\n                        11: {'id': 11, 'name': u'fire hydrant'},\n                        13: {'id': 13, 'name': u'stop sign'},\n                        14: {'id': 14, 'name': u'parking meter'}} \n```\n以下代码片段实现了使用 TensorFlow API 的实际检测。\n\n
```\n(boxes, scores, classes, num_detections) = self.sess.run(\n                  [self.boxes, self.scores, self.classes, self.num_detections],\n                  feed_dict={self.image_tensor: image_expanded})\n```\n这里 `boxes`、`scores` 和 `classes` 分别代表每个检测对应的边界框、置信度和类别名称。接下来，我们选择那些是汽车且置信度大于阈值（例如本例中的 0.3）的检测。\n```\nidx_vec = [i for i, v in enumerate(cls) if ((v==3) and (scores[i]>0.3))]\n```\n为了检测各种类型的车辆，我们还包含了公共汽车和卡车的索引。\n```\nidx_vec = [i for i, v in enumerate(cls) if (((v==3) or (v==6) or (v==8)) and (scores[i]>0.3))]\n```\n为了进一步减少可能的误报（false positives），我们包括了边界框宽度、高度和高宽比的阈值。\n\n```\nif ((ratio \u003C 0.8) and (box_h>20) and (box_w>20)):\n    tmp_car_boxes.append(box)\n    print(box, ', confidence: ', scores[idx], 'ratio:', ratio)\nelse:\n    print('wrong ratio or wrong size, ', box, ', confidence: ', scores[idx], 'ratio:', ratio)\n```\n\n## 用于边界框测量的卡尔曼滤波器\n\n我们使用卡尔曼滤波器（Kalman filter）来跟踪对象。卡尔曼滤波器具有以下对跟踪十分有益的重要特性：\n\n* 预测对象的未来位置\n* 根据新测量值修正预测\n* 减少由不准确检测引入的噪声\n* 促进多个对象与其轨迹关联的过程\n\n卡尔曼滤波器由两个步骤组成：预测和更新。第一步使用之前的状态来预测当前状态。第二步使用当前的测量值，例如检测边界框位置，来修正状态。公式如下所示：\n\n### 卡尔曼滤波器方程：\n#### 预测阶段：符号说明\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fkcg2015_Vehicle-Detection-and-Tracking_readme_a398e5f31c4f.gif\" alt=\"Drawing\" style=\"width: 250px;\"\u002F>\n#### 预测阶段：方程\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fkcg2015_Vehicle-Detection-and-Tracking_readme_649fc3cc91c4.gif\" alt=\"Drawing\" style=\"width: 125px;\"\u002F>\n#### 更新阶段：符号说明\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fkcg2015_Vehicle-Detection-and-Tracking_readme_2ae1b7ca42b1.gif\" alt=\"Drawing\" style=\"width: 250px;\"\u002F>\n#### 更新阶段：方程\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fkcg2015_Vehicle-Detection-and-Tracking_readme_55f510a22e40.gif\" alt=\"Drawing\" style=\"width: 200px;\"\u002F>\n\n### 卡尔曼滤波器实现\n在本节中，我们将详细描述卡尔曼滤波器（Kalman Filter）的实现。\n\n状态向量包含以下八个元素：\n```\n[up, up_dot, left, left_dot, down, down_dot, right, right_dot]\n```\n即，我们使用边界框（Bounding Box）左上角和右下角的坐标及其一阶导数。\n\n过程矩阵（Process Matrix），假设速度恒定（因此无加速度），如下所示：\n\n```\nself.F = np.array([[1, self.dt, 0,  0,  0,  0,  0, 0],\n                    [0, 1,  0,  0,  0,  0,  0, 0],\n                    [0, 0,  1,  self.dt, 0,  0,  0, 0],\n                    [0, 0,  0,  1,  0,  0,  0, 0],\n                    [0, 0,  0,  0,  1,  self.dt, 0, 0],\n                    [0, 0,  0,  0,  0,  1,  0, 0],\n                    [0, 0,  0,  0,  0,  0,  1, self.dt],\n                    [0, 0,  0,  0,  0,  0,  0,  1]])\n```\n测量矩阵（Measurement Matrix），鉴于检测器仅输出坐标（而非速度），如下所示：\n\n```\nself.H = np.array([[1, 0, 0, 0, 0, 0, 0, 0],\n                   [0, 0, 1, 0, 0, 0, 0, 0],\n                   [0, 0, 0, 0, 1, 0, 0, 0], \n                   [0, 0, 0, 0, 0, 0, 1, 0]])\n```\n状态、过程和测量噪声定义如下：\n\n```\n# Initialize the state covariance\nself.L = 100.0\nself.P = np.diag(self.L*np.ones(8))\n\n# Initialize the process covariance\nself.Q_comp_mat = np.array([[self.dt**4\u002F2., self.dt**3\u002F2.],\n                            [self.dt**3\u002F2., self.dt**2]])\nself.Q = block_diag(self.Q_comp_mat, self.Q_comp_mat,\n                    self.Q_comp_mat, self.Q_comp_mat)\n\n# Initialize the measurement covariance\nself.R_scaler = 1.0\u002F16.0\nself.R_diag_array = self.R_scaler * np.array([self.L, self.L, self.L, self.L])\nself.R = np.diag(self.R_diag_array)\n```\n此处 `self.R_scaler` 表示相对于状态噪声的测量噪声“幅度”。较低的 `self.R_scaler` 
表示测量更可靠。下图可视化了测量噪声对卡尔曼滤波过程的影响。绿色边界框代表预测（初始）状态。红色边界框代表测量值。\n如果测量噪声较低，更新后的状态（青色边界框）非常接近测量值（青色边界框完全覆盖在红色边界框上）。\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fkcg2015_Vehicle-Detection-and-Tracking_readme_fdad8c4e27fc.png\" alt=\"Drawing\" style=\"width: 300px;\"\u002F>\n\n相反，如果测量噪声较高，更新后的状态非常接近初始预测（青色边界框完全覆盖在绿色边界框上）。\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fkcg2015_Vehicle-Detection-and-Tracking_readme_ce2728cf6bed.png\" alt=\"Drawing\" style=\"width: 300px;\"\u002F>\n\n## 检测与跟踪器分配\n\n模块 `assign_detections_to_trackers(trackers, detections, iou_thrd = 0.3)` 接收当前的跟踪器列表和新检测，输出匹配的检测、未匹配的跟踪器以及未匹配的检测。\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fkcg2015_Vehicle-Detection-and-Tracking_readme_e3b51806b788.png\" alt=\"Drawing\" style=\"width: 300px;\"\u002F>\n\n### 线性分配与匈牙利（Munkres）算法\n\n如果有多个检测，我们需要将每个检测匹配（分配）到一个跟踪器。我们使用跟踪器边界框和检测边界框的交并比（Intersection Over Union, IOU）作为度量标准。我们使用匈牙利算法（也称为 Munkres 算法）来解决最大化 IOU 分配总和的问题。机器学习库 scikit-learn 有一个内置的工具函数实现了匈牙利算法。\n\n```\nmatched_idx = linear_assignment(-IOU_mat)   \n```\n注意，`linear_assignment` 默认最小化目标函数。因此，为了最大化，我们需要反转 `IOU_mat` 的符号。\n\n### 未匹配的检测与跟踪器\n\n基于线性分配结果，我们分别保留两个列表用于存储未匹配的检测和未匹配的跟踪器。当一辆车进入画面并被首次检测到时，它不会与任何现有轨迹匹配，因此这种特定的检测被称为未匹配检测，如下图所示。此外，任何重叠小于 `iou_thrd` 的匹配都意味着存在未被跟踪的对象。当车辆离开画面时，之前建立的轨迹不再有可关联的检测。在这种情况下，该轨迹被称为未匹配轨迹。因此，匹配中关联的跟踪器和检测分别被添加到未匹配跟踪器列表和未匹配检测列表中。\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fkcg2015_Vehicle-Detection-and-Tracking_readme_837879b7889f.png\" alt=\"Drawing\" style=\"width: 300px;\"\u002F>\n\n## 流水线\n\n我们在流水线中包含了两个重要的设计参数，即 ```min_hits``` 和 ```max_age```。参数 ```min_hits``` 是建立一条轨迹 (track) 所需的连续匹配次数。参数 ```max_age``` 是在删除一条轨迹之前允许的连续未匹配检测 (detection) 的数量。这两个参数都需要进行调整，以提高跟踪 (tracking) 和检测 (detection) 性能。\n\n流水线依次处理匹配的检测 (matched detection)、未匹配的检测 (unmatched detection) 以及未匹配的跟踪器 (unmatched trackers)。我们标注满足 ```min_hits``` 和 ```max_age``` 条件的轨迹 (tracks)。还需要适当的维护工作来删除过时的轨迹。\n\n以下示例展示了流水线的过程。当汽车在第一个视频帧中首次被检测到时，运行以下代码行将分别返回空列表、一个元素的列表和空列表，对应于 ```matched```、```unmatched_dets``` 和 ```unmatched_trks```。\n\n```\nmatched, unmatched_dets, unmatched_trks \\\n    = assign_detections_to_trackers(x_box, z_box, iou_thrd = 0.3) \n```\n因此我们面临的是未匹配检测的情况。未匹配的检测由以下代码块处理：\n\n```\nif len(unmatched_dets)>0:\n        for idx in unmatched_dets:\n            z = z_box[idx]\n            z = np.expand_dims(z, axis=0).T\n            tmp_trk = Tracker() # Create a new tracker\n            x = np.array([[z[0], 0, z[1], 0, z[2], 0, z[3], 0]]).T\n            tmp_trk.x_state = x\n            tmp_trk.predict_only()\n            xx = tmp_trk.x_state\n            xx = xx.T[0].tolist()\n            xx =[xx[0], xx[2], xx[4], xx[6]]\n            tmp_trk.box = xx\n            tmp_trk.id = track_id_list.popleft() # assign an ID for the tracker\n            tracker_list.append(tmp_trk)\n            x_box.append(xx)\n```\n此代码块执行两项重要任务：1) 为检测创建一个新的跟踪器 (tracker) ```tmp_trk```；2) 执行卡尔曼滤波器 (Kalman filter) 的预测阶段 ```tmp_trk.predict_only()```。请注意，这个新创建的轨迹仍处于试用期，即 ```trk.hits =0```，因此在流水线结束时该轨迹尚未确立。输出图像与输入图像相同——检测边界框 (bounding box) 未被标注。\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fkcg2015_Vehicle-Detection-and-Tracking_readme_dd0db250bd6d.png\" alt=\"Drawing\" style=\"width: 150px;\"\u002F>\n\n当汽车在第二个视频帧中再次被检测到时，运行以下 ```assign_detections_to_trackers``` 将分别返回一个元素的列表、空列表和空列表，对应于 matched、unmatched_dets 和 unmatched_trks。如下图所示，我们有一个匹配的检测，它将由以下代码块处理：\n\n```\nif matched.size >0:\n        for trk_idx, det_idx in matched:\n            z = z_box[det_idx]\n     
       z = np.expand_dims(z, axis=0).T\n            tmp_trk= tracker_list[trk_idx]\n            tmp_trk.kalman_filter(z)\n            xx = tmp_trk.x_state.T[0].tolist()\n            xx =[xx[0], xx[2], xx[4], xx[6]]\n            x_box[trk_idx] = xx\n            tmp_trk.box =xx\n            tmp_trk.hits += 1\n```\n此代码块执行两项重要任务：1) 执行卡尔曼滤波器 (Kalman filter) 的预测和更新阶段 ```tmp_trk.kalman_filter()```；2) 将轨迹的命中次数加一 ```tmp_trk.hits +=1```。通过此更新，条件 ```if ((trk.hits >= min_hits) and (trk.no_losses \u003C=max_age)) ``` 得到满足，因此该轨迹完全确立。结果，边界框 (bounding box) 在输出图像中被标注，如下图所示。\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fkcg2015_Vehicle-Detection-and-Tracking_readme_685ea3affa02.png\" alt=\"Drawing\" style=\"width: 150px;\"\u002F>\n## 问题\n\n主要问题是遮挡 (occlusion)。例如，当一辆车经过另一辆车时，两辆车可能非常接近。这可能会欺骗检测器输出单个（且可能更大的边界）框，而不是两个独立的边界框 (bounding box)。此外，跟踪算法可能会将此检测视为新的检测并建立新的轨迹。当其中一辆经过的车驶离另一辆车时，跟踪算法可能会再次失败。","# Vehicle Detection and Tracking 快速上手指南\n\n## 环境准备\n*   **操作系统**: Linux \u002F Windows \u002F macOS\n*   **Python 版本**: 3.x\n*   **核心依赖库**:\n    *   `tensorflow` (用于 Object Detection API)\n    *   `scikit-learn` (用于匈牙利算法)\n    *   `numpy`, `opencv-python`\n*   **模型文件**: 确保目录 `ssd_mobilenet_v1_coco_11_06_2017\u002F` 下存在 `frozen_inference_graph.pb` 预训练模型。\n\n## 安装步骤\n1.  **克隆项目**\n    ```bash\n    git clone \u003Crepository_url>\n    cd Vehicle-Detection-and-Tracking\n    ```\n\n2.  **安装依赖**\n    建议使用国内镜像源以提高下载速度：\n    ```bash\n    pip install tensorflow scikit-learn numpy opencv-python -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n    ```\n    *(注：若需完整使用 TensorFlow Object Detection API，请确保已正确配置编译环境)*\n\n3.  **验证模型路径**\n    确认以下文件路径有效：\n    ```text\n    ssd_mobilenet_v1_coco_11_06_2017\u002Ffrozen_inference_graph.pb\n    ```\n\n## 基本使用\n运行主程序启动检测与跟踪管道：\n```bash\npython main.py\n```\n\n### 关键功能说明\n*   **检测对象**: 默认识别汽车 (Car, ID: 3)、公交车 (Bus, ID: 6) 和卡车 (Truck, ID: 8)。\n*   **置信度过滤**: 仅保留 `scores[i] > 0.3` 的结果。\n*   **尺寸过滤**: 排除宽高比异常或过小（如 `\u003C 20px`）的边界框以减少误检。\n*   **跟踪管理**:\n    *   `min_hits`: 建立新轨迹所需的最小连续匹配次数。\n    *   `max_age`: 轨迹在无匹配检测后可保留的最大帧数。\n\n### 核心代码逻辑\n**1. 车辆检测 (`detector.py`)**\n使用 TensorFlow API 获取边界框、置信度和类别：\n```python\n(boxes, scores, classes, num_detections) = self.sess.run(\n              [self.boxes, self.scores, self.classes, self.num_detections],\n              feed_dict={self.image_tensor: image_expanded})\n```\n筛选符合条件的车辆索引：\n```python\nidx_vec = [i for i, v in enumerate(cls) if (((v==3) or (v==6) or (v==8)) and (scores[i]>0.3))]\n```\n\n**2. 卡尔曼滤波跟踪 (`tracker.py`)**\n状态向量包含边界框坐标及其一阶导数：\n```python\n# State vector: [up, up_dot, left, left_dot, down, down_dot, right, right_dot]\n```\n过程矩阵假设恒定速度（无加速度）：\n```python\nself.F = np.array([[1, self.dt, 0,  0,  0,  0,  0, 0],\n                    [0, 1,  0,  0,  0,  0,  0, 0],\n                    [0, 0,  1,  self.dt, 0,  0,  0, 0],\n                    [0, 0,  0,  1,  0,  0,  0, 0],\n                    [0, 0,  0,  0,  1,  self.dt, 0, 0],\n                    [0, 0,  0,  0,  0,  1,  0, 0],\n                    [0, 0,  0,  0,  0,  0,  1, self.dt],\n                    [0, 0,  0,  0,  0,  0,  0,  1]])\n```\n\n**3. 
检测与跟踪关联 (`assign_detections_to_trackers`)**\n使用匈牙利算法最大化 IoU 匹配：\n```python\nmatched_idx = linear_assignment(-IOU_mat)   \n```","某智慧交通研发团队正在开发路口车流量监测系统，需要实时统计经过摄像头的车辆数量及轨迹。\n\n### 没有 Vehicle-Detection-and-Tracking 时\n- 依赖人工逐帧标注视频数据，效率极低且容易产生漏检。\n- 传统 OpenCV 算法难以处理车辆被遮挡时的跟踪丢失问题。\n- 从零搭建检测与跟踪框架，代码耦合度高，调试周期漫长。\n- 通用大模型推理速度慢，无法满足实时路口的监控延迟要求。\n\n### 使用 Vehicle-Detection-and-Tracking 后\n- 直接调用内置的 SSD MobileNet 预训练模型，秒级完成车辆定位与分类。\n- 集成卡尔曼滤波算法，有效解决车辆短暂遮挡后的轨迹连续性问题。\n- 模块化代码结构清晰，开发者可快速替换不同检测器进行性能对比测试。\n- 轻量化设计平衡了精度与速度，确保系统在普通硬件上也能流畅运行。\n\nVehicle-Detection-and-Tracking 通过简化视觉流水线，显著降低了智能交通系统的开发门槛并提升了实时监测的稳定性。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fkcg2015_Vehicle-Detection-and-Tracking_fdad8c4e.png","kcg2015","Kyle Guan","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fkcg2015_d29ad836.jpg",null,"New York , NY","kcguan@gmail.com","https:\u002F\u002Fgithub.com\u002Fkcg2015",[83],{"name":84,"color":85,"percentage":86},"Python","#3572A5",100,556,192,"2026-03-24T11:54:31","未说明",{"notes":92,"python":90,"dependencies":93},"基于 TensorFlow Object Detection API 构建，使用预训练的 SSD MobileNet V1 模型（COCO 数据集）；追踪算法采用卡尔曼滤波；检测与追踪匹配使用匈牙利算法（scikit-learn）；需准备 frozen_inference_graph.pb 模型文件；代码设计注重简洁性和可读性。",[94,95,96],"tensorflow","numpy","scikit-learn",[53,13,14],[99,100,101,102,103,104,105,106,107,108,109,110,111,112],"detection","tracking","kalman-filtering","object-detection","keras","hungarian-algorithm","tensorflow-object-detection-api","single-shot-multibox-detector","mobilenet-ssd","linear-assignment-problem","occlusion","computer-vision","bounding-boxes","bayesian-filter","2026-03-27T02:49:30.150509","2026-04-06T05:37:46.840804",[116,121,126,131,136,141],{"id":117,"question_zh":118,"answer_zh":119,"source_url":120},3696,"运行 main.py 处理视频时出现 'box_iou2' 未定义的错误如何解决？","请将第 13 行修改为 `from helper import *`，或者将第 42 行修改为 `IOU_mat[t, d] = helper.box_iou2(trk, det)`。","https:\u002F\u002Fgithub.com\u002Fkcg2015\u002FVehicle-Detection-and-Tracking\u002Fissues\u002F12",{"id":122,"question_zh":123,"answer_zh":124,"source_url":125},3697,"运行 main.py 时报错 'CarDetector' 或 'draw_box_label' 未定义怎么办？","需要在 tracker.py 和 main.py 中导入或添加 `helpers.draw_box_label` 函数及相关逻辑，确保 helper 模块被正确引用。","https:\u002F\u002Fgithub.com\u002Fkcg2015\u002FVehicle-Detection-and-Tracking\u002Fissues\u002F2",{"id":127,"question_zh":128,"answer_zh":129,"source_url":130},3698,"使用自定义视频进行检测时出现误检或漏检，如何调整参数？","需要根据检测器在特定图像上的表现来调整阈值参数。如果值太低会产生误检（false-positives），太高则会产生漏检（miss-detections）。建议根据测试结果微调（例如示例中使用了 0.3）。","https:\u002F\u002Fgithub.com\u002Fkcg2015\u002FVehicle-Detection-and-Tracking\u002Fissues\u002F14",{"id":132,"question_zh":133,"answer_zh":134,"source_url":135},3699,"在匹配检测后是否应该重置 `tmp_trk.no_losses` 为 0？","是的，需要重置。因为存在 `tmp_trk.no_losses > 0` 的可能性，如果不重置，同一个检测可能会被重复匹配。应在 `tmp_trk.hits += 1` 后执行 `tmp_trk.no_losses = 0`。","https:\u002F\u002Fgithub.com\u002Fkcg2015\u002FVehicle-Detection-and-Tracking\u002Fissues\u002F4",{"id":137,"question_zh":138,"answer_zh":139,"source_url":140},3700,"想将此项目用于行人跟踪或不同场景的视频，有什么建议？","建议尝试不同的目标检测器（如 YOLO、RCNN 等）看是否有改进。注意本仓库使用的 SSD MobileNet 是针对彩色图像训练的通用检测器，如果是灰度视频可能会影响检测精度。","https:\u002F\u002Fgithub.com\u002Fkcg2015\u002FVehicle-Detection-and-Tracking\u002Fissues\u002F24",{"id":142,"question_zh":143,"answer_zh":144,"source_url":145},3701,"SSD Mobilenet 性能不足或在树莓派上运行遇到问题怎么办？","如果 SSD Mobilenet 性能无法满足任务需求，建议在 TensorFlow Object Detection API 中更换其他模型。对于树莓派部署，由于涉及 dlib tracker 的具体细节，建议查阅相关博客或资源自行探索。","https:\u002F\u002Fgithub.com\u002Fkcg2015\u002FVehicle-Detection-and-Tracking\u002Fissues\u002F1",[]]