[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-theAIGuysCode--yolov4-custom-functions":3,"tool-theAIGuysCode--yolov4-custom-functions":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 
道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":79,"owner_email":79,"owner_twitter":79,"owner_website":80,"owner_url":81,"languages":82,"stars":87,"forks":88,"last_commit_at":89,"license":90,"difficulty_score":10,"env_os":91,"env_gpu":92,"env_ram":93,"env_deps":94,"category_tags":102,"github_topics":103,"view_count":23,"oss_zip_url":79,"oss_zip_packed_at":79,"status":16,"created_at":113,"updated_at":114,"faqs":115,"releases":146},1293,"theAIGuysCode\u002Fyolov4-custom-functions","yolov4-custom-functions","A Wide Range of Custom Functions for YOLOv4, YOLOv4-tiny, YOLOv3, and YOLOv3-tiny Implemented in TensorFlow, TFLite, and TensorRT.","yolov4-custom-functions 是一个为 YOLOv4、YOLOv4-tiny、YOLOv3 和 YOLOv3-tiny 提供多种自定义功能的开源工具，支持 TensorFlow、TFLite 和 TensorRT 平台。它允许用户在目标检测任务中添加丰富的扩展功能，如物体计数、检测信息打印、图像裁剪、车牌识别与 OCR 文本提取等，满足不同场景下的个性化需求。\n\n这个工具解决了传统 YOLO 模型功能单一的问题，让用户可以灵活地根据实际应用场景进行功能拓展和定制。例如，开发者可以通过它实现更智能的监控系统或自动化识别流程，研究人员则可以利用其提供的功能进行算法测试与创新实验。\n\n适合有一定编程基础的开发者和研究人员使用，尤其是对目标检测有深入需求的用户。对于希望快速实现复杂功能但又不想从零开始编写代码的用户来说，这是一个非常实用的选择。\n\n其独特之处在于提供了多种可直接使用的自定义函数，并且支持多种深度学习框架，方便用户在不同环境下部署和优化模型。","# yolov4-custom-functions\r\n[![license](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flicense\u002Fmashape\u002Fapistatus.svg)](LICENSE)\r\n\r\nA wide range of custom functions for YOLOv4, YOLOv4-tiny, YOLOv3, and YOLOv3-tiny implemented in TensorFlow, TFLite and TensorRT.\r\n\r\nDISCLAIMER: This repository is very similar to my repository: [tensorflow-yolov4-tflite](https:\u002F\u002Fgithub.com\u002FtheAIGuysCode\u002Ftensorflow-yolov4-tflite). I created this repository to explore coding custom functions to be implemented with YOLOv4, and they may worsen the overal speed of the application and make it not optimized in respect to time complexity. So if you want to run the most optimal YOLOv4 code with TensorFlow than head over to my other repository. 
This one is to explore cool customizations and applications that can be created using YOLOv4!

### Demo of Object Counter Custom Function in Action!
<p align="center"><img src="https://oss.gittoolsai.com/images/theAIGuysCode_yolov4-custom-functions_readme_9bccc17af411.gif"></p>

## Currently Supported Custom Functions and Flags
* [x] [Counting Objects (total objects and per class)](#counting)
* [x] [Print Info About Each Detection (class, confidence, bounding box coordinates)](#info)
* [x] [Crop Detections and Save as New Image](#crop)
* [x] [License Plate Recognition Using Tesseract OCR](#license)
* [x] [Apply Tesseract OCR to Detections to Extract Text](#ocr)

If there is a custom function you want to see created, create an issue in the issues tab and suggest it! If enough people suggest the same custom function, I will add it quickly!

## Getting Started
### Conda (Recommended)

```bash
# TensorFlow CPU
conda env create -f conda-cpu.yml
conda activate yolov4-cpu

# TensorFlow GPU
conda env create -f conda-gpu.yml
conda activate yolov4-gpu
```

### Pip
```bash
# TensorFlow CPU
pip install -r requirements.txt

# TensorFlow GPU
pip install -r requirements-gpu.txt
```
### Nvidia Driver (for GPU, if you are not using the Conda environment and haven't set up CUDA yet)
Make sure to use CUDA Toolkit version 10.1, as it is the proper version for the TensorFlow version used in this repository.
https://developer.nvidia.com/cuda-10.1-download-archive-update2

## Downloading Official Pre-trained Weights
YOLOv4 comes pre-trained and able to detect 80 classes. For easy demo purposes we will use the pre-trained weights.
Download the pre-trained yolov4.weights file: https://drive.google.com/open?id=1cewMfusmPjYWbrnuJRuKhPMwRe_b9PaT

Copy and paste yolov4.weights from your downloads folder into the 'data' folder of this repository.

If you want to use yolov4-tiny.weights, a smaller model that is faster at running detections but less accurate, download the file here: https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v4_pre/yolov4-tiny.weights

## Using Custom Trained YOLOv4 Weights
<strong>Learn how to train custom YOLOv4 weights here: https://www.youtube.com/watch?v=mmj3nxGT2YQ</strong>

<strong>Watch me walk through using a custom model in TensorFlow: https://www.youtube.com/watch?v=nOIVxi5yurE</strong>

USE MY LICENSE PLATE TRAINED CUSTOM WEIGHTS: https://drive.google.com/file/d/1EUPtbtdF0bjRtNjGv436vDY28EN5DXDH/view?usp=sharing

Copy and paste your custom .weights file into the 'data' folder, and copy and paste your custom .names into the 'data/classes/' folder.

The only change within the code you need to make in order for your custom model to work is on line 14 of the 'core/config.py' file.
Update the code to point at your custom .names file as seen below. (My custom .names file is called custom.names, but yours might be named differently.)
<p align="center"><img src="https://oss.gittoolsai.com/images/theAIGuysCode_yolov4-custom-functions_readme_d2044729f04d.png" width="640"></p>

<strong>Note:</strong> If you are using the pre-trained yolov4 then make sure that line 14 remains <strong>coco.names</strong>.
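For reference, here is a minimal sketch of what that edit might look like. The attribute layout below (an easydict-based config with a `__C.YOLO.CLASSES` entry) is an assumption about core/config.py; verify the exact variable name on line 14 of your checkout before editing.

```python
# Hypothetical sketch of the relevant part of core/config.py -- the real file
# may differ slightly; only the classes path is the change being described.
from easydict import EasyDict as edict

__C = edict()
cfg = __C
__C.YOLO = edict()

# Line 14: classes file used by the model.
# Pre-trained COCO model (default):
#   __C.YOLO.CLASSES = "./data/classes/coco.names"
# Custom model -- point this at your own .names file instead:
__C.YOLO.CLASSES = "./data/classes/custom.names"
```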
## YOLOv4 Using TensorFlow (tf, .pb model)
To implement YOLOv4 using TensorFlow, first we convert the .weights into the corresponding TensorFlow model files and then run the model.
```bash
# Convert darknet weights to tensorflow
## yolov4
python save_model.py --weights ./data/yolov4.weights --output ./checkpoints/yolov4-416 --input_size 416 --model yolov4

# Run yolov4 tensorflow model
python detect.py --weights ./checkpoints/yolov4-416 --size 416 --model yolov4 --images ./data/images/kite.jpg

# Run yolov4 on video
python detect_video.py --weights ./checkpoints/yolov4-416 --size 416 --model yolov4 --video ./data/video/video.mp4 --output ./detections/results.avi

# Run yolov4 on webcam
python detect_video.py --weights ./checkpoints/yolov4-416 --size 416 --model yolov4 --video 0 --output ./detections/results.avi
```
If you want to run yolov3 or yolov3-tiny, change ``--model yolov3`` and the .weights file in the above commands.

<strong>Note:</strong> You can also run the detector on multiple images at once by changing the --images flag like so: ``--images "./data/images/kite.jpg, ./data/images/dog.jpg"``

### Result Image(s) (Regular TensorFlow)
You can find the outputted image(s) showing the detections saved within the 'detections' folder.
#### Pre-trained YOLOv4 Model Example
<p align="center"><img src="https://oss.gittoolsai.com/images/theAIGuysCode_yolov4-custom-functions_readme_7c0ef0bdc099.png" width="640"></p>

### Result Video
The video saves wherever you point the --output flag to. If you don't set the flag, your video will not be saved with detections on it.
<p align="center"><img src="https://oss.gittoolsai.com/images/theAIGuysCode_yolov4-custom-functions_readme_b368dcb82c4b.gif"></p>

## YOLOv4-Tiny Using TensorFlow
The following commands will allow you to run the yolov4-tiny model.
```bash
# yolov4-tiny
python save_model.py --weights ./data/yolov4-tiny.weights --output ./checkpoints/yolov4-tiny-416 --input_size 416 --model yolov4 --tiny

# Run yolov4-tiny tensorflow model
python detect.py --weights ./checkpoints/yolov4-tiny-416 --size 416 --model yolov4 --images ./data/images/kite.jpg --tiny
```
<a name="custom"/>

## Custom YOLOv4 Using TensorFlow
The following commands will allow you to run your custom yolov4 model. (Video and webcam commands work as well.)
```bash
# custom yolov4
python save_model.py --weights ./data/custom.weights --output ./checkpoints/custom-416 --input_size 416 --model yolov4

# Run custom yolov4 tensorflow model
python detect.py --weights ./checkpoints/custom-416 --size 416 --model yolov4 --images ./data/images/car.jpg
```

#### Custom YOLOv4 Model Example (see video link above to train this model)
<p align="center"><img src="https://oss.gittoolsai.com/images/theAIGuysCode_yolov4-custom-functions_readme_62d17164b766.png" width="640"></p>
## Custom Functions and Flags
Here is how to use all the currently supported custom functions and flags that I have created.

<a name="counting"/>

### Counting Objects (total objects or per class)
I have created a custom function within the file [core/functions.py](https://github.com/theAIGuysCode/yolov4-custom-functions/blob/master/core/functions.py) that can be used to count and keep track of the number of objects detected at a given moment within each image or video. It can be used to count the total objects found, or to count the number of objects detected per class.

#### Count Total Objects
To count total objects, all that is needed is to add the custom flag "--count" to your detect.py or detect_video.py command.
```bash
# Run yolov4 model while counting total objects detected
python detect.py --weights ./checkpoints/yolov4-416 --size 416 --model yolov4 --images ./data/images/dog.jpg --count
```
Running the above command will count the total number of objects detected and output it to your command prompt or shell, as well as on the saved detection, like so:
<p align="center"><img src="https://oss.gittoolsai.com/images/theAIGuysCode_yolov4-custom-functions_readme_50421643cb0e.png" width="640"></p>

#### Count Objects Per Class
To count the number of objects for each individual class of your object detector, you need to add the custom flag "--count" as well as change one line in the detect.py or detect_video.py script. By default the count_objects function has a parameter called <strong>by_class</strong> that is set to False. If you change this parameter to <strong>True</strong>, it will count per class instead.

To count per class, make detect.py or detect_video.py look like this:
<p align="center"><img src="https://oss.gittoolsai.com/images/theAIGuysCode_yolov4-custom-functions_readme_dac84fd74653.png" width="640"></p>

Then run the same command as above:
```bash
# Run yolov4 model while counting objects per class
python detect.py --weights ./checkpoints/yolov4-416 --size 416 --model yolov4 --images ./data/images/dog.jpg --count
```
Running the above command will count the number of objects detected per class and output it to your command prompt or shell, as well as on the saved detection, like so:
<p align="center"><img src="https://oss.gittoolsai.com/images/theAIGuysCode_yolov4-custom-functions_readme_f57de613a1ed.png" width="640"></p>

<strong>Note:</strong> You can add the --count flag to detect_video.py commands as well!
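As a rough sketch of the by_class edit described above (not a copy of the exact code in detect.py), the call might be changed like this; `pred_bbox` stands in for whatever the script already passes to the function at that point.

```python
# Sketch of the relevant call inside detect.py / detect_video.py.
from core.functions import count_objects

# Default behaviour: a single total across all classes.
# counted_classes = count_objects(pred_bbox, by_class=False)

# Per-class counting: flip by_class to True to get a breakdown per class
# (e.g. {'person': 2, 'dog': 1}) instead of one total.
counted_classes = count_objects(pred_bbox, by_class=True)
print(counted_classes)
```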
<a name="info"/>

### Print Detailed Info About Each Detection (class, confidence, bounding box coordinates)
I have created a custom flag called <strong>INFO</strong> that can be added to any detect.py or detect_video.py command in order to print detailed information about each detection made by the object detector. To print the detailed information to your command prompt, just add the flag `--info` to any of your commands. The information on each detection includes the class, the confidence in the detection, and the bounding box coordinates of the detection in xmin, ymin, xmax, ymax format.

If you want to edit what information gets printed, you can edit the <strong>draw_bbox</strong> function found within the [core/utils.py](https://github.com/theAIGuysCode/yolov4-custom-functions/blob/master/core/utils.py) file. The line that prints the information looks as follows:
<p align="center"><img src="https://oss.gittoolsai.com/images/theAIGuysCode_yolov4-custom-functions_readme_cd3eaf7ca5a1.png" height="50"></p>

Example of the info flag added to a command:
```bash
python detect.py --weights ./checkpoints/yolov4-416 --size 416 --model yolov4 --images ./data/images/dog.jpg --info
```
Resulting output within your shell or terminal:
<p align="center"><img src="https://oss.gittoolsai.com/images/theAIGuysCode_yolov4-custom-functions_readme_c4287dd09a63.png" height="100"></p>

<strong>Note:</strong> You can add the --info flag to detect_video.py commands as well!
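As a hedged sketch of the kind of line draw_bbox prints (the guard and variable names below are assumptions; the README only states that the class, confidence, and box coordinates are printed):

```python
# Sketch of the info print inside draw_bbox (core/utils.py).
if info:  # corresponds to the --info flag
    print("Object found: {}, Confidence: {:.2f}, "
          "BBox Coords (xmin, ymin, xmax, ymax): {}, {}, {}, {}".format(
              class_name, score, xmin, ymin, xmax, ymax))
```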
<a name="crop"/>

### Crop Detections and Save Them as New Images
I have created a custom function within the file [core/functions.py](https://github.com/theAIGuysCode/yolov4-custom-functions/blob/master/core/functions.py) that can be applied to any detect.py or detect_video.py command in order to crop the YOLOv4 detections and save them each as their own new image. To crop detections, all you need to do is add the `--crop` flag to any command. The resulting cropped images will be saved within the <strong>detections/crop/</strong> folder.

Example of the crop flag added to a command:
```bash
python detect.py --weights ./checkpoints/yolov4-416 --size 416 --model yolov4 --images ./data/images/dog.jpg --crop
```
Here is an example of one of the resulting cropped detections from the above command.
<p align="center"><img src="https://oss.gittoolsai.com/images/theAIGuysCode_yolov4-custom-functions_readme_76676b3b53f9.png" height="250"></p>

<a name="license"/>

## License Plate Recognition Using Tesseract OCR
I have created a custom function to feed Tesseract OCR the bounding box regions of license plates found by my custom YOLOv4 model in order to read and extract the license plate numbers. Thorough preprocessing is done on the license plate in order to correctly extract the license plate number from the image. The function in charge of the preprocessing and text extraction is called <strong>recognize_plate</strong> and can be found in the file [core/utils.py](https://github.com/theAIGuysCode/yolov4-custom-functions/blob/master/core/utils.py).

<strong>Disclaimer: In order to run Tesseract OCR you must first download the binary files and set them up on your local machine. Please do so before proceeding, or the commands will not run as expected!</strong>

Official Tesseract OCR GitHub repo: [tesseract-ocr/tessdoc](https://github.com/tesseract-ocr/tessdoc)

Great article on how to install Tesseract on Mac or Linux machines: [PyImageSearch Article](https://www.pyimagesearch.com/2017/07/03/installing-tesseract-for-ocr/)

For Windows I recommend: [Windows Install](https://github.com/UB-Mannheim/tesseract/wiki)

Once you have Tesseract properly installed you can move onwards. If you don't have a trained YOLOv4 model to detect license plates, feel free to use one that I have trained. It is not perfect, but it works well. [Download the license plate detector model and learn how to save and run it with TensorFlow here](#custom)

### Running License Plate Recognition on Images (video example below)
The license plate recognition works wonders on images. All you need to do is add the `--plate` flag on top of the command to run the custom YOLOv4 model.

Try it out on this image in the repository!
```bash
# Run License Plate Recognition
python detect.py --weights ./checkpoints/custom-416 --size 416 --model yolov4 --images ./https://oss.gittoolsai.com/images/theAIGuysCode_yolov4-custom-functions_readme_22c1b2608d4b.jpg --plate
```

### Resulting Image Example
The output from the above command should print any license plate numbers found to your command terminal, as well as output and save the following image to the `detections` folder.
<p align="center"><img src="https://oss.gittoolsai.com/images/theAIGuysCode_yolov4-custom-functions_readme_257bfc01d202.png" width="640"></p>

You should be able to see the license plate number printed on the screen above the bounding box found by YOLOv4.

### Behind the Scenes
This section will highlight the steps I took in order to implement the license plate recognition with YOLOv4, and potential areas to be worked on further.

This demo will show the step-by-step workflow on the following original image.
<p align="center"><img src="https://oss.gittoolsai.com/images/theAIGuysCode_yolov4-custom-functions_readme_22c1b2608d4b.jpg" width="640"></p>

The first step of the process is taking the bounding box coordinates from YOLOv4 and simply taking the subimage region within the bounds of the box. Since this subimage is usually very small, we use cv2.resize() to blow the image up to 3x its original size.
<p align="center"><img src="https://oss.gittoolsai.com/images/theAIGuysCode_yolov4-custom-functions_readme_d51d4afa12ef.png" width="400"></p>

Then we convert the image to grayscale and apply a small Gaussian blur to smooth it out.
<p align="center"><img src="https://oss.gittoolsai.com/images/theAIGuysCode_yolov4-custom-functions_readme_3636c45b7926.png" width="400"></p>

Following this, the image is thresholded to white text on a black background, with Otsu's method also applied. The white text on a black background helps with finding the contours of the image.
<p align="center"><img src="https://oss.gittoolsai.com/images/theAIGuysCode_yolov4-custom-functions_readme_070636637f76.png" width="400"></p>

The image is then dilated using OpenCV in order to make the contours more visible so they can be picked up in a later step.
<p align="center"><img src="https://oss.gittoolsai.com/images/theAIGuysCode_yolov4-custom-functions_readme_591e110666a4.png" width="400"></p>

Next we use OpenCV to find all the rectangular-shaped contours in the image and sort them left to right.
<p align="center"><img src="https://oss.gittoolsai.com/images/theAIGuysCode_yolov4-custom-functions_readme_8f1bf72d6ca6.png" width="400"></p>

As you can see, this causes many contours to be found other than just the contours of each character within the license plate number. In order to filter out the unwanted regions, we apply a couple of criteria that a contour must meet to be accepted. These criteria are height and width ratios (i.e. the height of the region must be at least 1/6th of the total height of the image).
A couple of other criteria on the area of the region, etc., are also applied; check out the code to see the exact details. This filtering leaves us with:
<p align="center"><img src="https://oss.gittoolsai.com/images/theAIGuysCode_yolov4-custom-functions_readme_78d4867bc54c.png" width="400"></p>

The individual characters of the license plate number are now the only regions of interest left. We segment each subimage and apply a bitwise_not mask to flip the image to black text on a white background, which Tesseract is more accurate with. The final step is applying a small median blur to the image, and then it is passed to Tesseract to get the letter or number from it. Here is an example of how the letters look when they go to Tesseract.
<p align="center"><img src="https://oss.gittoolsai.com/images/theAIGuysCode_yolov4-custom-functions_readme_e4edbea005df.png" width="650"></p>

Each letter or number is then appended into a string, and at the end you get the full license plate that is recognized! BOOM!
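To make the walkthrough above concrete, here is a compressed, self-contained sketch of the same preprocessing chain (resize, grayscale, blur, Otsu threshold, dilation, contour filtering, per-character OCR). It approximates what recognize_plate in core/utils.py does rather than copying it; the thresholds, kernel sizes, and filter ratios are illustrative assumptions, and the OpenCV 4.x return signature of findContours is assumed.

```python
import cv2
import pytesseract

def recognize_plate_sketch(plate_bgr):
    """Approximate the preprocessing chain described in 'Behind the Scenes'."""
    # 1. Blow the small subimage up to ~3x its original size.
    img = cv2.resize(plate_bgr, None, fx=3, fy=3, interpolation=cv2.INTER_CUBIC)
    # 2. Grayscale + light Gaussian blur.
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, (5, 5), 0)
    # 3. Otsu threshold to white text on a black background.
    _, thresh = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)
    # 4. Dilate so character contours are easier to pick up.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    dilated = cv2.dilate(thresh, kernel, iterations=1)
    # 5. Find contours and sort them left to right (OpenCV 4.x signature).
    contours, _ = cv2.findContours(dilated, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contours = sorted(contours, key=lambda c: cv2.boundingRect(c)[0])

    plate_text = ""
    img_h, img_w = gray.shape
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        # 6. Filter out non-character regions with rough size ratios
        #    (illustrative values; the repository uses its own criteria).
        if h < img_h / 6 or w > img_w / 2 or h < w:
            continue
        # 7. Segment the character, flip to black text on white, median blur.
        char = cv2.bitwise_not(thresh[y:y + h, x:x + w])
        char = cv2.medianBlur(char, 5)
        # 8. Ask Tesseract for a single character (psm 10 = single character).
        text = pytesseract.image_to_string(
            char,
            config="-c tessedit_char_whitelist=ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789 --psm 10")
        plate_text += text.strip()
    return plate_text
```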
### Running License Plate Recognition on Video
Running the license plate recognition straight on video, at the same time as the YOLOv4 object detections, causes a few issues. Tesseract OCR is fairly expensive in terms of time complexity and slows the processing of the video down to a snail's pace. It can still be accomplished by adding the `--plate` command line flag to any detect_video.py command.

However, I believe the best route to go is to run the video detections without the plate flag and instead run them with the `--crop` flag, which crops the objects found on screen and saves them as new images. [See how it works here](#crop). Once the video is done processing at a higher FPS, all the license plate images will be cropped and saved within the [detections/crop](https://github.com/theAIGuysCode/yolov4-custom-functions/blob/master/detections/crop/) folder. I have added an easy script within the repository called [license_plate_recognizer.py](https://github.com/theAIGuysCode/yolov4-custom-functions/blob/master/license_plate_recognizer.py) that you can run in order to recognize the license plates. Plus, this allows you to easily customize the script to further enhance any recognitions. I will be working on linking this functionality automatically in future commits to the repository.

Running license plate recognition with detect_video.py is done with the following command.
```bash
python detect_video.py --weights ./checkpoints/custom-416 --size 416 --model yolov4 --video ./data/video/license_plate.mp4 --output ./detections/recognition.avi --plate
```

The route I recommend as more efficient is using this command; customize the rate at which detections are cropped within the code itself.
```bash
python detect_video.py --weights ./checkpoints/custom-416 --size 416 --model yolov4 --video ./data/video/license_plate.mp4 --output ./detections/recognition.avi --crop
```

Now play around with [license_plate_recognizer.py](https://github.com/theAIGuysCode/yolov4-custom-functions/blob/master/license_plate_recognizer.py) and have some fun!
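As a hedged illustration of the crop-then-recognize workflow this enables (the directory layout is taken from the README; the preprocessing here is deliberately minimal and is not the repository's own license_plate_recognizer.py), a loop over the saved crops could look like:

```python
import os
import cv2
import pytesseract

# Folder that detect_video.py --crop writes cropped detections into.
CROP_DIR = "./detections/crop"

for root, _, files in os.walk(CROP_DIR):
    for name in sorted(files):
        if not name.lower().endswith((".png", ".jpg", ".jpeg")):
            continue
        path = os.path.join(root, name)
        crop = cv2.imread(path)
        if crop is None:
            continue
        # Minimal preprocessing; the repository's recognize_plate applies the
        # fuller chain described in "Behind the Scenes".
        gray = cv2.cvtColor(crop, cv2.COLOR_BGR2GRAY)
        _, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
        text = pytesseract.image_to_string(thresh, config="--psm 7").strip()
        if text:
            print(f"{path}: {text}")
```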
<a name="ocr"/>

## Running Tesseract OCR on Any Detections
I have also implemented a generic use of Tesseract OCR with YOLOv4. By enabling the flag `--ocr` with any detect.py image command, you can search detections for text and extract what is found. Generic preprocessing is applied to the subimage that makes up the inside of the detection bounding box. However, many lighting or color issues require more advanced preprocessing, so this function is by no means perfect. You will also need to install Tesseract on your local machine prior to running this flag (see the links and suggestions in the section above).

Example command (note this image doesn't have text, so it will not output anything; it is just meant to show how the command is structured):
```bash
python detect.py --weights ./checkpoints/yolov4-416 --size 416 --model yolov4 --images ./data/images/dog.jpg --ocr
```

## YOLOv4 Using TensorFlow Lite (.tflite model)
You can also implement YOLOv4 using TensorFlow Lite. TensorFlow Lite produces a much smaller model and is perfect for mobile or edge devices (Raspberry Pi, etc.).
```bash
# Save tf model for tflite converting
python save_model.py --weights ./data/yolov4.weights --output ./checkpoints/yolov4-416 --input_size 416 --model yolov4 --framework tflite

# yolov4
python convert_tflite.py --weights ./checkpoints/yolov4-416 --output ./checkpoints/yolov4-416.tflite

# yolov4 quantize float16
python convert_tflite.py --weights ./checkpoints/yolov4-416 --output ./checkpoints/yolov4-416-fp16.tflite --quantize_mode float16

# yolov4 quantize int8
python convert_tflite.py --weights ./checkpoints/yolov4-416 --output ./checkpoints/yolov4-416-int8.tflite --quantize_mode int8 --dataset ./coco_dataset/coco/val207.txt

# Run tflite model
python detect.py --weights ./checkpoints/yolov4-416.tflite --size 416 --model yolov4 --images ./data/images/kite.jpg --framework tflite
```
### Result Image (TensorFlow Lite)
You can find the outputted image(s) showing the detections saved within the 'detections' folder.
#### TensorFlow Lite int8 Example
<p align="center"><img src="https://oss.gittoolsai.com/images/theAIGuysCode_yolov4-custom-functions_readme_2e1a638043b8.png" width="640"></p>

YOLOv4 and YOLOv4-tiny int8 quantization have some issues; I will try to fix them. You can try YOLOv3 and YOLOv3-tiny int8 quantization in the meantime.

## YOLOv4 Using TensorRT
You can also implement YOLOv4 using TensorFlow's TensorRT integration. TensorRT is a high-performance inference optimizer and runtime that can be used to perform inference in lower precision (FP16 and INT8) on GPUs. TensorRT can allow up to 8x higher performance than regular TensorFlow.
```bash
# yolov3
python save_model.py --weights ./data/yolov3.weights --output ./checkpoints/yolov3.tf --input_size 416 --model yolov3
python convert_trt.py --weights ./checkpoints/yolov3.tf --quantize_mode float16 --output ./checkpoints/yolov3-trt-fp16-416

# yolov3-tiny
python save_model.py --weights ./data/yolov3-tiny.weights --output ./checkpoints/yolov3-tiny.tf --input_size 416 --tiny
python convert_trt.py --weights ./checkpoints/yolov3-tiny.tf --quantize_mode float16 --output ./checkpoints/yolov3-tiny-trt-fp16-416

# yolov4
python save_model.py --weights ./data/yolov4.weights --output ./checkpoints/yolov4.tf --input_size 416 --model yolov4
python convert_trt.py --weights ./checkpoints/yolov4.tf --quantize_mode float16 --output ./checkpoints/yolov4-trt-fp16-416
python detect.py --weights ./checkpoints/yolov4-trt-fp16-416 --model yolov4 --images ./data/images/kite.jpg --framework trt
```

## Command Line Args Reference

```bash
save_model.py:
  --weights: path to weights file
    (default: './data/yolov4.weights')
  --output: path to output
    (default: './checkpoints/yolov4-416')
  --[no]tiny: yolov4 or yolov4-tiny
    (default: 'False')
  --input_size: define input size of export model
    (default: 416)
  --framework: what framework to use (tf, trt, tflite)
    (default: tf)
  --model: yolov3 or yolov4
    (default: yolov4)

detect.py:
  --images: path to input images as a string with images separated by ","
    (default: './data/images/kite.jpg')
  --output: path to output folder
    (default: './detections/')
  --[no]tiny: yolov4 or yolov4-tiny
    (default: 'False')
  --weights: path to weights file
    (default: './checkpoints/yolov4-416')
  --framework: what framework to use (tf, trt, tflite)
    (default: tf)
  --model: yolov3 or yolov4
    (default: yolov4)
  --size: resize images to
    (default: 416)
  --iou: iou threshold
    (default: 0.45)
  --score: confidence threshold
    (default: 0.25)
  --count: count objects within images
    (default: False)
  --dont_show: don't show image output
    (default: False)
  --info: print info on detections
    (default: False)
  --crop: crop detections and save as new images
    (default: False)

detect_video.py:
  --video: path to input video (use 0 for webcam)
    (default: './data/video/video.mp4')
  --output: path to output video (remember to set the right codec for the given format, e.g. XVID for .avi)
    (default: None)
  --output_format: codec used in VideoWriter when saving video to file
    (default: 'XVID')
  --[no]tiny: yolov4 or yolov4-tiny
    (default: 'False')
  --weights: path to weights file
    (default: './checkpoints/yolov4-416')
  --framework: what framework to use (tf, trt, tflite)
    (default: tf)
  --model: yolov3 or yolov4
    (default: yolov4)
  --size: resize images to
    (default: 416)
  --iou: iou threshold
    (default: 0.45)
  --score: confidence threshold
    (default: 0.25)
  --count: count objects within video
    (default: False)
  --dont_show: don't show video output
    (default: False)
  --info: print info on detections
    (default: False)
  --crop: crop detections and save as new images
    (default: False)
```

### References

Huge shoutout goes to hunglc007 for creating the backbone of this repository:
  * [tensorflow-yolov4-tflite](https://github.com/hunglc007/tensorflow-yolov4-tflite)
# yolov4-custom-functions Quickstart Guide

## Environment Preparation

### System Requirements
- Operating system: Linux, macOS, or Windows (Linux or macOS recommended)
- Python: 3.6+ (3.8 recommended)
- TensorFlow: 2.x (choose the CPU or GPU build depending on whether you use a GPU)

### Prerequisites
- Install Conda or pip (Conda is recommended for managing the environment)
- If using a GPU:
  - Install the NVIDIA driver
  - Install CUDA Toolkit 10.1 (compatible with the TensorFlow version used here)
  - Install cuDNN (the version matching CUDA 10.1)

## Installation

### With Conda (recommended)

```bash
# TensorFlow CPU
conda env create -f conda-cpu.yml
conda activate yolov4-cpu

# TensorFlow GPU
conda env create -f conda-gpu.yml
conda activate yolov4-gpu
```
### With Pip

```bash
# TensorFlow CPU
pip install -r requirements.txt

# TensorFlow GPU
pip install -r requirements-gpu.txt
```

> Note: on networks inside mainland China, the Tsinghua mirror can speed up installation:
> ```bash
> pip install -i https://pypi.tuna.tsinghua.edu.cn/simple -r requirements.txt
> ```

## Downloading Pre-trained Weights

YOLOv4 detects 80 classes out of the box. For demo purposes we use the official pre-trained weights.

- Download the YOLOv4 pre-trained weights: [yolov4.weights](https://drive.google.com/open?id=1cewMfusmPjYWbrnuJRuKhPMwRe_b9PaT)
- Download the YOLOv4-tiny pre-trained weights (smaller and faster): [yolov4-tiny.weights](https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v4_pre/yolov4-tiny.weights)

Copy the downloaded `.weights` files into the `data` folder of the project directory.

## Basic Usage

### Object detection with the pre-trained model

The following commands run detection on an image with the pre-trained YOLOv4 model:

```bash
# Convert the .weights into a TensorFlow model
python save_model.py --weights ./data/yolov4.weights --output ./checkpoints/yolov4-416 --input_size 416 --model yolov4

# Run detection on a single image
python detect.py --weights ./checkpoints/yolov4-416 --size 416 --model yolov4 --images ./data/images/kite.jpg
```

### Using a custom model

If you have your own trained `.weights` and `.names` files, put them into the `data` and `data/classes/` directories respectively, and change line 14 of `core/config.py` to point at your `.names` file.

Example commands:

```bash
# Convert the custom .weights into a TensorFlow model
python save_model.py --weights ./data/custom.weights --output ./checkpoints/custom-416 --input_size 416 --model yolov4

# Run detection on an image
python detect.py --weights ./checkpoints/custom-416 --size 416 --model yolov4 --images ./data/images/car.jpg
```

### Using the custom functions (object counting as an example)

To enable object counting, just add the `--count` flag to the command:

```bash
python detect.py --weights ./checkpoints/yolov4-416 --size 416 --model yolov4 --images ./data/images/dog.jpg --count
```

To count objects per class, also set the parameter `by_class=True` in `detect.py`.

### Extracting text with Tesseract OCR

Once [Tesseract OCR](https://github.com/tesseract-ocr/tessdoc) is installed and its environment variables are configured, the OCR functions can be used to extract any text found inside detections.

That is the quickstart for `yolov4-custom-functions`; from here you can explore the other custom functions as needed!

# Example Use Case

A smart parking management company is building a video-surveillance-based vehicle recognition and counting system that counts vehicles entering and leaving the lot in real time and reads license plates for automated billing.

### Without yolov4-custom-functions
- Object detection, counting, and plate recognition all have to be written by hand, which means a long development cycle and plenty of room for error.
- There is no ready-made support for object counting, OCR, and similar functions, so extending the system is difficult.
- Vehicle detection and plate recognition accuracy is limited, hurting the system in practice.
- Switching between models (such as YOLOv4 and YOLOv4-tiny) requires rewriting code, which increases maintenance cost.
- Pre-trained models cannot be used directly for demos or tests; models have to be trained from scratch.

### With yolov4-custom-functions
- A ready-made object counting function can be called directly to count vehicles in real time, improving development efficiency.
- The built-in Tesseract OCR integration reads license plates automatically, simplifying the recognition pipeline and improving accuracy.
- Multiple YOLO models (YOLOv4, YOLOv4-tiny, etc.) are supported, so the model can be swapped based on performance requirements without re-implementation.
- A rich set of custom function hooks makes it easy to add features later (printing detection info, cropping detection regions, and so on).
- The official pre-trained weights can be used directly for demos, lowering the training barrier and speeding up testing and deployment.

Core value: yolov4-custom-functions greatly simplifies applying YOLO models in real projects and significantly improves development efficiency and the flexibility of the resulting system.
# Repository Metadata

- Repository: theAIGuysCode/yolov4-custom-functions · Author: The AI Guy (https://github.com/theAIGuysCode) — "I love making tutorials for all things machine learning and AI!" · YouTube: https://www.youtube.com/channel/UCrydcKaojc44XnuXrfhlV8Q
- Language: Python (100%) · Stars: 610 · Forks: 362 · License: MIT · Last commit: 2026-04-04
- Supported OS: Linux, macOS, Windows · GPU: NVIDIA GPU with CUDA 10.1 (if not using the Conda environment and CUDA is not yet set up) · RAM: not specified
- Dependencies: TensorFlow, TFLite, TensorRT, OpenCV, Tesseract OCR
- Environment notes: managing the environment with conda is recommended; the first run requires downloading roughly 5 GB of model files. If you use a GPU, install CUDA Toolkit 10.1.
- Categories: image, development framework · GitHub topics: yolov4, yolov3, object-detection, tensorflow, tflite, custom-yolov4, yolov4-tiny, tf2, tensorrt

# FAQ

**How can I run detect.py on a batch of images?**
You can pass a folder path instead of a single image path, e.g. `--images ./data/images/`, and every image in that directory will be detected. (Source: https://github.com/theAIGuysCode/yolov4-custom-functions/issues/98)

**How can I run detection on a live video stream from an IP camera?**
Use a command like `python detect_video.py --weights ./checkpoints/yolov4-416 --size 416 --model yolov4 --video http://192.168.0.80:8080/video?dummy=param.mjpg`, replacing the address with your actual IP camera address. (Source: https://github.com/theAIGuysCode/yolov4-custom-functions/issues/6)

**How do I fix the '_src.empty()' assertion failure in OpenCV's 'cvtColor'?**
Make sure the input image is not empty and is in the correct format. If the image is loaded with OpenCV's imread, it is stored in BGR order and needs to be converted to RGB before further processing. (Source: https://github.com/theAIGuysCode/yolov4-custom-functions/issues/96)

**How do I fix an argument error in the 'rectangle' function?**
Check that the coordinate arguments you pass are sequences of length 4, e.g. `(x1, y1, x2, y2)`. Make sure `c1` and `c3` are valid coordinate points rather than some other type of data. (Source: https://github.com/theAIGuysCode/yolov4-custom-functions/issues/81)

**Why does saving custom weights to saved_model.pb fail?**
Make sure the contents of `custom.names` are correct, with one class name per line (e.g. `car`), and double-check the model configuration and paths. (Source: https://github.com/theAIGuysCode/yolov4-custom-functions/issues/22)

**Why is only half of the text shown during license plate recognition?**
This is usually caused by low contrast or a skewed image. Deskewing and contrast enhancement during preprocessing are recommended to improve the OCR result. (Source: https://github.com/theAIGuysCode/yolov4-custom-functions/issues/16)
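Regarding the cvtColor / `_src.empty()` question above, a minimal sketch of the defensive load-and-convert pattern (the image path below is only a placeholder):

```python
import cv2

image_path = "./data/images/kite.jpg"  # placeholder path
img = cv2.imread(image_path)           # returns None if the path is wrong
if img is None:
    raise FileNotFoundError(f"Could not read image: {image_path}")

# OpenCV loads images in BGR order; convert to RGB before feeding the model.
rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
```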