[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-EdjeElectronics--TensorFlow-Lite-Object-Detection-on-Android-and-Raspberry-Pi":3,"tool-EdjeElectronics--TensorFlow-Lite-Object-Detection-on-Android-and-Raspberry-Pi":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 
accompanying quizzes, covering the complete journey from basic concepts to practical application and solving the beginner's problem of facing a huge body of knowledge with no idea where to start and no structured guidance.\n\nDevelopers hoping to change tracks, researchers who need to fill in an algorithms background, and hobbyists curious about AI can all benefit. The course pairs clear theoretical explanations with an emphasis on hands-on practice, letting learners build a solid skill base step by step. A distinctive highlight is its strong multilingual support: an automated pipeline provides versions in more than 50 languages, including Simplified Chinese, greatly lowering the barrier for learners of different backgrounds worldwide. The project also runs as an open-source collaboration with an active community and continuously updated content, ensuring learners get current and accurate material. If you are looking for a clear, friendly, and professional path into machine learning, ML-For-Beginners is an ideal starting point.",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"Data Tools","Video","Plugin","Other","Audio",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow is a leading open-source retrieval-augmented generation (RAG) engine that builds a more accurate, reliable context layer for large language models. It combines state-of-the-art RAG techniques with agent capabilities, not only extracting knowledge efficiently from all kinds of documents but also letting models reason and execute tasks on top of that knowledge.\n\nHallucination and stale knowledge are common pain points in LLM applications. By deeply parsing complex document structures (tables, charts, and mixed layouts), RAGFlow markedly improves retrieval accuracy, reducing fabricated answers and keeping responses both grounded and current. Its built-in agent mechanism goes further: the system can not only answer questions but also plan its own steps to solve complex problems.\n\nThe tool is particularly suited to developers, enterprise engineering teams, and AI researchers. Whether you want to quickly stand up a private knowledge-base Q&A system or you are an innovator bringing LLMs into vertical domains, RAGFlow can help. It offers a visual workflow-orchestration interface and flexible APIs, lowering the barrier for users without an algorithms background while meeting professional developers' needs for deep customization. Released under the Apache 2.0 license, it is becoming an important bridge between general-purpose LLMs and domain-specific knowledge.",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":80,"owner_email":81,"owner_twitter":75,"owner_website":82,"owner_url":83,"languages":84,"stars":97,"forks":98,"last_commit_at":99,"license":100,"difficulty_score":10,"env_os":101,"env_gpu":102,"env_ram":103,"env_deps":104,"category_tags":109,"github_topics":110,"view_count":10,"oss_zip_url":110,"oss_zip_packed_at":110,"status":16,"created_at":111,"updated_at":112,"faqs":113,"releases":141},849,"EdjeElectronics\u002FTensorFlow-Lite-Object-Detection-on-Android-and-Raspberry-Pi","TensorFlow-Lite-Object-Detection-on-Android-and-Raspberry-Pi","A tutorial showing how to train, convert, and run TensorFlow Lite object detection models on Android devices, the Raspberry Pi, and more!","TensorFlow-Lite-Object-Detection-on-Android-and-Raspberry-Pi helps developers deploy lightweight deep learning models on resource-constrained edge devices. With this open-source tutorial, users can train a custom object detection model, convert it to the right format, and run it on a Raspberry Pi or Android phone.\n\nStandard TensorFlow models are often too large to run in real time on low-power devices such as phones or the Raspberry Pi. This project uses the TensorFlow Lite framework to speed up model inference and reduce compute requirements, addressing the difficulty of deploying AI at the edge. Notably, thanks to the integrated Google Colab training workflow, users need no local GPU setup: just upload a dataset to quickly produce a deployable .tflite model.\n\nThe tutorial is well suited to embedded developers, makers, and AI beginners. Whether you want smart monitoring on a Raspberry Pi 3\u002F4 or a vision app on an Android phone, it provides detailed step-by-step guides and ready-made Python code. From image recognition to video-stream processing, it makes edge computing approachable and is an excellent hands-on resource for learning mobile AI deployment.","# TensorFlow Lite Object Detection on Android and Raspberry Pi\nTrain your own TensorFlow Lite object detection models and run them on the Raspberry Pi, Android phones, and other edge devices! 
\n\n\u003Cp align=\"center\">\n   \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FEdjeElectronics_TensorFlow-Lite-Object-Detection-on-Android-and-Raspberry-Pi_readme_caa364544b7b.gif\">\n\u003C\u002Fp>\n\nGet started with training on Google Colab by clicking the icon below, or [click here to go straight to the YouTube video that provides step-by-step instructions](https:\u002F\u002Fyoutu.be\u002FXZ7FYAMCc4M).\n\n\u003Ca href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002FEdjeElectronics\u002FTensorFlow-Lite-Object-Detection-on-Android-and-Raspberry-Pi\u002Fblob\u002Fmaster\u002FTrain_TFLite2_Object_Detction_Model.ipynb\" target=\"_parent\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa>\n\n## Introduction\nTensorFlow Lite is an optimized framework for deploying lightweight deep learning models on resource-constrained edge devices. TensorFlow Lite models have faster inference time and require less processing power than regular TensorFlow models, so they can be used to obtain faster performance in realtime applications. \n\nThis guide provides step-by-step instructions for how to train a custom TensorFlow Object Detection model, convert it into an optimized format that can be used by TensorFlow Lite, and run it on edge devices like the Raspberry Pi. It also provides Python code for running TensorFlow Lite models to perform detection on images, videos, web streams, or webcam feeds.\n\n## Step 1. Train TensorFlow Lite Models\n### Using Google Colab (recommended)\n\nThe easiest way to train, convert, and export a TensorFlow Lite model is using Google Colab. Colab provides you with a free GPU-enabled virtual machine on Google's servers that comes pre-installed with the libraries and packages needed for training.\n\nI wrote a [Google Colab notebook](.\u002FTrain_TFLite2_Object_Detction_Model.ipynb) that can be used to train custom TensorFlow Lite models. It goes through the process of preparing data, configuring a model for training, training the model, running it on test images, and exporting it to a downloadable TFLite format so you can deploy it to your own device. It makes training a custom TFLite model as easy as uploading an image dataset and clicking Play on a few blocks of code!\n\n\u003Ca href=\"https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002FEdjeElectronics\u002FTensorFlow-Lite-Object-Detection-on-Android-and-Raspberry-Pi\u002Fblob\u002Fmaster\u002FTrain_TFLite2_Object_Detction_Model.ipynb\" target=\"_parent\">\u003Cimg src=\"https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg\" alt=\"Open In Colab\"\u002F>\u003C\u002Fa>\n\nOpen the Colab notebook in your browser by clicking the icon above. Work through the instructions in the notebook to start training your own model. Once it's trained and exported, visit the [Setup TFLite Runtime Environment](#step-2-setup-tflite-runtime-environment-on-your-device) section to learn how to deploy it on your PC, Raspberry Pi, Android phone, or other edge devices.\n\n### Using a Local PC\nThe old version of this guide shows how to set up a TensorFlow training environment locally on your PC. Be warned: it's a lot of work, and the guide is outdated. [Here's a link to the local training guide.](doc\u002Flocal_training_guide.md)\n\n## Step 2. Setup TFLite Runtime Environment on Your Device\nOnce you have a trained `.tflite` model, the next step is to deploy it on a device like a computer, Raspberry Pi, or Android phone. To run the model, you'll need to install TensorFlow or the TensorFlow Lite Runtime on your device and set up the Python environment and directory structure to run your application in. The [deploy_guides](deploy_guides) folder in this repository has step-by-step guides showing how to set up a TensorFlow environment on several different devices. Links to the guides are given below.
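\nAs a rough sketch of what the install step looks like on a Linux-based device such as the Raspberry Pi (the device-specific deploy guides below are the authoritative reference), the standalone interpreter can typically be installed with pip on platforms where Google publishes wheels:\n\n```\n# Option 1: the lightweight standalone interpreter package (Linux\u002FRaspberry Pi wheels)\npython3 -m pip install tflite-runtime\n\n# Option 2: a full TensorFlow install also bundles an interpreter (tf.lite.Interpreter)\npython3 -m pip install tensorflow\n\n# Quick check that the runtime can be imported\npython3 -c \"from tflite_runtime.interpreter import Interpreter; print('TFLite runtime OK')\"\n```\n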
\n\n### Raspberry Pi\nFollow the [Raspberry Pi setup guide](deploy_guides\u002FRaspberry_Pi_Guide.md) to install TFLite Runtime on a Raspberry Pi 3 or 4 and run a TensorFlow Lite model. This guide also shows how to use the Google Coral USB Accelerator to greatly increase the speed of quantized models on the Raspberry Pi.\n\n### Windows\nFollow the instructions in the [Windows TFLite guide](deploy_guides\u002FWindows_TFLite_Guide.md) to set up TFLite Runtime on your Windows PC using Anaconda!\n\n### macOS\nStill to come!\n\n### Linux\nStill to come!\n\n### Android\nStill to come!\n\n### Embedded Devices\nStill to come!\n\n## Step 3. Run TensorFlow Lite Models!\nThere are four Python scripts to run the TensorFlow Lite object detection model on an image, video, web stream, or webcam feed. The scripts are based on the label_image.py example given in the [TensorFlow Lite examples GitHub repository](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Ftensorflow\u002Fblob\u002Fmaster\u002Ftensorflow\u002Flite\u002Fexamples\u002Fpython\u002Flabel_image.py).\n\n* [TFLite_detection_image.py](TFLite_detection_image.py)\n* [TFLite_detection_video.py](TFLite_detection_video.py)\n* [TFLite_detection_stream.py](TFLite_detection_stream.py)\n* [TFLite_detection_webcam.py](TFLite_detection_webcam.py)\n\nThe following instructions show how to run the scripts. They assume your .tflite model file and labelmap.txt file are in the `TFLite_model` folder in your `tflite1` directory, as per the instructions given in the [Setup TFLite Runtime Environment](#step-2-setup-tflite-runtime-environment-on-your-device) guide.\n\n\u003Cp align=\"center\">\n   \u003Cimg width=\"500\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FEdjeElectronics_TensorFlow-Lite-Object-Detection-on-Android-and-Raspberry-Pi_readme_a704013863a4.png\">\n\u003C\u002Fp>\n\nIf you'd like to try the sample TFLite object detection model provided by Google, simply download it [here](https:\u002F\u002Fstorage.googleapis.com\u002Fdownload.tensorflow.org\u002Fmodels\u002Ftflite\u002Fcoco_ssd_mobilenet_v1_1.0_quant_2018_06_29.zip) and unzip it to the `tflite1` folder. Then, use `--modeldir=coco_ssd_mobilenet_v1_1.0_quant_2018_06_29` rather than `--modeldir=TFLite_model` when running the script. \n\n\u003Cdetails>\n   \u003Csummary>Webcam\u003C\u002Fsummary>\nMake sure you have a USB webcam plugged into your computer. If you're on a laptop with a built-in camera, you don't need to plug in a USB webcam. \n\nFrom the `tflite1` directory, issue: \n\n```\npython TFLite_detection_webcam.py --modeldir=TFLite_model \n```\n\nAfter a few moments of initializing, a window will appear showing the webcam feed. Detected objects will have bounding boxes and labels displayed on them in real time.\n\u003C\u002Fdetails>\n
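\nUnder the hood, all four scripts share the same basic interpreter loop. The condensed sketch below is illustrative only, not a drop-in replacement for the scripts: the model filename `detect.tflite` and the TF1-style output ordering (boxes, classes, scores) are assumptions, so check `get_output_details()` for your own export.\n\n```\n# Condensed sketch of the inference loop the detection scripts are built around\nimport cv2\nimport numpy as np\nfrom tflite_runtime.interpreter import Interpreter  # a full TensorFlow install provides tf.lite.Interpreter instead\n\ninterpreter = Interpreter(model_path=\"TFLite_model\u002Fdetect.tflite\")  # assumed filename\ninterpreter.allocate_tensors()\ninput_details = interpreter.get_input_details()[0]\noutput_details = interpreter.get_output_details()\n\nimage = cv2.imread(\"test1.jpg\")\nimH, imW = image.shape[:2]\n_, height, width, _ = input_details[\"shape\"]\nresized = cv2.resize(cv2.cvtColor(image, cv2.COLOR_BGR2RGB), (width, height))\ninput_data = np.expand_dims(resized, axis=0)\nif input_details[\"dtype\"] == np.float32:\n    # Floating-point models expect normalized input; quantized models take raw uint8\n    input_data = (np.float32(input_data) - 127.5) \u002F 127.5\n\ninterpreter.set_tensor(input_details[\"index\"], input_data)\ninterpreter.invoke()\n\nboxes = interpreter.get_tensor(output_details[0][\"index\"])[0]   # [ymin, xmin, ymax, xmax], normalized to [0, 1]\nscores = interpreter.get_tensor(output_details[2][\"index\"])[0]  # confidence per detection\n# output_details[1] holds class indices into labelmap.txt (unused in this sketch)\n\nfor box, score in zip(boxes, scores):\n    if score > 0.5:\n        ymin, xmin, ymax, xmax = box\n        cv2.rectangle(image, (int(xmin * imW), int(ymin * imH)),\n                      (int(xmax * imW), int(ymax * imH)), (10, 255, 0), 2)\ncv2.imwrite(\"test1_labeled.jpg\", image)\n```\n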
\n\u003Cdetails>\n   \u003Csummary>Video\u003C\u002Fsummary>\nTo run the video detection script, issue:\n\n```\npython TFLite_detection_video.py --modeldir=TFLite_model\n```\n\nA window will appear showing consecutive frames from the video, with each object in the frame labeled. Press 'q' to close the window and end the script. By default, the video detection script will open a video named 'test.mp4'. To open a specific video file, use the `--video` option:\n\n```\npython TFLite_detection_video.py --modeldir=TFLite_model --video='birdy.mp4'\n```\n\nNote: Video detection will run at a slower FPS than realtime webcam detection. This is mainly because loading a frame from a video file requires more processor I\u002FO than receiving a frame from a webcam.\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n   \u003Csummary>Web stream\u003C\u002Fsummary>\nTo run the script to detect objects in a video stream (e.g. a remote security camera), issue: \n\n```\npython TFLite_detection_stream.py --modeldir=TFLite_model --streamurl=\"http:\u002F\u002Fipaddress:port\u002Fstream\u002Fvideo.mjpeg\" \n```\n\nAfter a few moments of initializing, a window will appear showing the video stream. Detected objects will have bounding boxes and labels displayed on them in real time.\n\nMake sure to update the URL parameter to the one used by your security camera. If the stream is secured, the URL must include the authentication information.\n\nIf the bounding boxes don't line up with the detected objects, the stream resolution probably wasn't detected correctly. In this case you can set it explicitly with the `--resolution` parameter:\n\n```\npython TFLite_detection_stream.py --modeldir=TFLite_model --streamurl=\"http:\u002F\u002Fipaddress:port\u002Fstream\u002Fvideo.mjpeg\" --resolution=1920x1080\n```\n
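\nIf you aren't sure what resolution OpenCV is reporting for your stream, a quick check (a hypothetical snippet, not part of this repo's scripts) is to open the stream yourself and print the frame size; a width or height of 0 means detection failed and `--resolution` should be set explicitly:\n\n```\n# Probe the resolution OpenCV reports for a stream (illustrative)\nimport cv2\ncap = cv2.VideoCapture(\"http:\u002F\u002Fipaddress:port\u002Fstream\u002Fvideo.mjpeg\")\nprint(cap.get(cv2.CAP_PROP_FRAME_WIDTH), cap.get(cv2.CAP_PROP_FRAME_HEIGHT))\ncap.release()\n```\n\u003C\u002Fdetails>\n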
\n\u003Cdetails>\n   \u003Csummary>Image\u003C\u002Fsummary>\nTo run the image detection script, issue:\n\n```\npython TFLite_detection_image.py --modeldir=TFLite_model\n```\n\nThe image will appear with all objects labeled. Press 'q' to close the image and end the script. By default, the image detection script will open an image named 'test1.jpg'. To open a specific image file, use the `--image` option:\n\n```\npython TFLite_detection_image.py --modeldir=TFLite_model --image=squirrel.jpg\n```\n\nIt can also open an entire folder full of images and perform detection on each one. The folder must contain only image files, or errors will occur. To specify which folder to run detection on, use the `--imagedir` option:\n\n```\npython TFLite_detection_image.py --modeldir=TFLite_model --imagedir=squirrels\n```\n\nPress any key (other than 'q') to advance to the next image. Do not use both the `--image` option and the `--imagedir` option when running the script, or it will throw an error.\n\nTo save labeled images and a text file with detection results for each image, use the `--save_results` option. The results will be saved to a folder named `\u003Cimagedir>_results`. This works well if you want to check your model's performance on a folder of images and use the results to calculate mAP with the [calculate_map_catchuro.py](.\u002Futil_scripts) script. For example:\n\n```\npython TFLite_detection_image.py --modeldir=TFLite_model --imagedir=squirrels --save_results\n```\n\nThe `--noshow_results` option will stop the program from displaying images.\n\u003C\u002Fdetails>\n\n**See all command options**\n\nFor more information on options that can be used while running the scripts, use the `-h` option when calling them. For example:\n\n```\npython TFLite_detection_image.py -h\n```\n\nIf you encounter errors, please check the [FAQ section](https:\u002F\u002Fgithub.com\u002FEdjeElectronics\u002FTensorFlow-Lite-Object-Detection-on-Android-and-Raspberry-Pi#FAQs) of this guide. It has a list of common errors and their solutions. If you can successfully run the script but your object isn't detected, it is most likely because your model isn't accurate enough. The FAQ has further discussion on how to resolve this.\n\n## Examples\n(Still to come!) Please see the [examples](examples) folder for examples of how to use your TFLite model in basic vision applications.\n\n## FAQs\n\u003Cdetails>\n\u003Csummary>What's the difference between the TensorFlow Object Detection API and TFLite Model Maker?\u003C\u002Fsummary>\n\u003Cbr>\nGoogle provides a set of Colab notebooks for training TFLite models called [TFLite Model Maker](https:\u002F\u002Fwww.tensorflow.org\u002Flite\u002Fmodels\u002Fmodify\u002Fmodel_maker). While their object detection notebook is straightforward and easy to follow, using the [TensorFlow Object Detection API](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Ftree\u002Fmaster\u002Fresearch\u002Fobject_detection) for creating models provides several benefits:\n\n* TFLite Model Maker only supports EfficientDet models, which aren't as fast as SSD-MobileNet models.\n* Training models with the Object Detection API generally results in better model accuracy.\n* The Object Detection API provides significantly more flexibility in model and training configuration (training steps, learning rate, model depth and resolution, etc).\n* Google still [recommends using the Object Detection API](https:\u002F\u002Fwww.tensorflow.org\u002Flite\u002Fexamples\u002Fobject_detection\u002Foverview#fine-tuning_models_on_custom_data) as the formal method for training models with large datasets.\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>What's the difference between training, transfer learning, and fine-tuning?\u003C\u002Fsummary>\n\u003Cbr>\nUsing correct terminology is important in a complicated field like machine learning. In this notebook, I use the word \"training\" to describe the process of teaching a model to recognize custom objects, but what we're actually doing is \"fine-tuning\". The Keras documentation gives a [good example notebook](https:\u002F\u002Fkeras.io\u002Fguides\u002Ftransfer_learning\u002F) explaining the difference between each term.\n\nHere's my attempt at defining the terms (a short code sketch follows the list):\n\n* **Training**: The process of taking a full neural network with randomly initialized weights, passing in image data, calculating the resulting loss from its predictions on those images, and using backpropagation to adjust the weights in every node of the network and reduce its loss. In this process, the network learns how to extract features of interest from images and correlate those features to classes. Training a model from scratch typically takes millions of training steps and a large dataset of 100,000+ images (such as ImageNet or COCO). Let's leave actual training to companies like Google and Microsoft!\n* **Transfer learning**: Taking a model that has already been trained, unfreezing the last layer of the model (i.e. making it so only the last layer's weights can be modified), and retraining the last layer with a new dataset so it can learn to identify new classes. Transfer learning takes advantage of the feature extraction capabilities that have already been learned in the deep layers of the trained model. It takes the extracted features and recategorizes them to predict new classes.\n* **Fine-tuning**: Fine-tuning is similar to transfer learning, except more layers are unfrozen and retrained. Instead of just unfreezing the last layer, a significant number of layers (such as the last 20% to 50% of layers) are unfrozen. This allows the model to modify some of its feature extraction layers so it can extract features that are more relevant to the classes it's trying to identify. This notebook (and the TensorFlow Object Detection API) uses fine-tuning.\n\nIn general, I like to use the word \"training\" instead of \"fine-tuning\", because it's more intuitive and understandable to new users.\n
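\nTo make the distinction concrete, here is a minimal Keras sketch (illustrative only; this repo actually trains through the TensorFlow Object Detection API, and the class count and layer fraction are arbitrary) showing the one knob that separates the two approaches: how much of the pretrained backbone is left trainable.\n\n```\n# Transfer learning vs. fine-tuning, in Keras terms (not this repo's training code)\nimport tensorflow as tf\n\nbase = tf.keras.applications.MobileNetV2(include_top=False, weights=\"imagenet\", pooling=\"avg\")\nhead = tf.keras.layers.Dense(3, activation=\"softmax\")  # e.g. bird \u002F squirrel \u002F raccoon\nmodel = tf.keras.Sequential([base, head])\n\nFINE_TUNE = True  # False = transfer learning: train only the new head\n\nbase.trainable = FINE_TUNE\nif FINE_TUNE:\n    # Fine-tuning: keep the earliest ~70% of backbone layers frozen, let the rest adapt\n    for layer in base.layers[: int(len(base.layers) * 0.7)]:\n        layer.trainable = False\n\n# A low learning rate avoids destroying the pretrained weights\nmodel.compile(optimizer=tf.keras.optimizers.Adam(1e-5), loss=\"sparse_categorical_crossentropy\")\n```\n\u003C\u002Fdetails>\n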
\n\u003Cdetails>\n\u003Csummary>Should I get a Google Colab Pro subscription?\u003C\u002Fsummary>\n\u003Cbr>\nIf you plan to use Colab frequently for training models, I recommend getting a Colab Pro subscription. It provides several benefits:\n\n* Idle Colab sessions remain connected for longer before timing out and disconnecting\n* Allows for running multiple Colab sessions at once\n* Priority access to TPU and GPU-enabled virtual machines\n* Virtual machines have more RAM\n\nColab keeps track of how much GPU time you use, and cuts you off from using GPU-enabled instances once you reach a certain use time. If you get the message telling you you're cut off from GPU instances, that's a good indicator that you use Colab enough to justify paying for a Pro subscription.\n\u003C\u002Fdetails>\n",""
,"# TensorFlow-Lite-Object-Detection-on-Android-and-Raspberry-Pi Quick Start Guide\n\nThis project helps you train a custom TensorFlow Lite object detection model and run real-time detection on resource-constrained edge devices such as the Raspberry Pi, Android phones, and PCs.\n\n## 1. Environment Preparation\n\n### System requirements\n*   **Training**: **Google Colab** is recommended (no local GPU setup required); a local PC also works but needs a more involved environment setup.\n*   **Deployment**: Windows, Raspberry Pi (3\u002F4), Linux, Android, and more.\n*   **Dependencies**: Python 3.x, OpenCV, TensorFlow \u002F TensorFlow Lite Runtime.\n\n### Suggested prerequisites\nIf you need to run the detection scripts in a local Python environment, you can speed up installation with a regional mirror (the Tsinghua PyPI mirror is shown here):\n```bash\npip install -r requirements.txt -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n*(Note: see requirements.txt in the repository root, or the deploy guides, for the exact dependencies.)*\n\n---\n\n## 2. Installation and Configuration\n\n### Step 1: Clone the project\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002FEdjeElectronics\u002FTensorFlow-Lite-Object-Detection-on-Android-and-Raspberry-Pi.git\ncd TensorFlow-Lite-Object-Detection-on-Android-and-Raspberry-Pi\n```\n\n### Step 2: Train a model (recommended path)\nThe most convenient option is cloud training with Google Colab, which requires no local deep-learning environment.\n1.  Click the [Open In Colab](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002FEdjeElectronics\u002FTensorFlow-Lite-Object-Detection-on-Android-and-Raspberry-Pi\u002Fblob\u002Fmaster\u002FTrain_TFLite2_Object_Detction_Model.ipynb) badge to open the notebook.\n2.  Follow the instructions in the notebook to upload your dataset and run the code cells.\n3.  When training completes, download the generated `.tflite` model file and `labelmap.txt`.\n\n### Step 3: Set up the deployment environment\nPlace the trained model files into the expected directory layout. Depending on your target device, follow the guides in the repository's `deploy_guides` folder to set up the runtime environment:\n*   **Windows**: see `deploy_guides\u002FWindows_TFLite_Guide.md`\n*   **Raspberry Pi**: see `deploy_guides\u002FRaspberry_Pi_Guide.md`\n\nMake sure the directory layout looks like this (for the default scripts):\n```text\ntflite1\u002F\n└── TFLite_model\u002F\n    ├── model.tflite\n    └── labelmap.txt\n```
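\n\nAs a quick smoke test that the exported model loads before wiring up a camera (a hypothetical snippet, not from this repo; the path follows the layout above):\n\n```python\n# Confirm the exported model loads and inspect its expected input shape\nfrom tflite_runtime.interpreter import Interpreter\n\ninterp = Interpreter(model_path=\"tflite1\u002FTFLite_model\u002Fmodel.tflite\")\ninterp.allocate_tensors()\nprint(interp.get_input_details()[0][\"shape\"])  # e.g. [1 300 300 3] for an SSD-MobileNet export\n```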
\n\n---\n\n## 3. Basic Usage Examples\n\nThe commands below assume your model files are in the `TFLite_model` folder.\n\n### 3.1 Real-time webcam detection\nFor a USB webcam, or a laptop's built-in camera.\n```bash\npython TFLite_detection_webcam.py --modeldir=TFLite_model \n```\n*Press 'q' to close the window and exit.*\n\n### 3.2 Video file detection\nReads `test.mp4` from the current directory by default; a different video file can be specified.\n```bash\npython TFLite_detection_video.py --modeldir=TFLite_model\n# Specify a particular video file\npython TFLite_detection_video.py --modeldir=TFLite_model --video='birdy.mp4'\n```\n\n### 3.3 Batch image detection\nReads `test1.jpg` by default; you can specify a single image or a whole folder, and optionally save the detection results.\n```bash\n# Detect a single image\npython TFLite_detection_image.py --modeldir=TFLite_model --image=squirrel.jpg\n\n# Detect every image in a folder\npython TFLite_detection_image.py --modeldir=TFLite_model --imagedir=squirrels\n\n# Save detection results to files (useful for calculating mAP)\npython TFLite_detection_image.py --modeldir=TFLite_model --imagedir=squirrels --save_results\n```\n\n### 3.4 Web stream detection\nFor remote surveillance cameras (replace the placeholder with your camera's actual IP address).\n```bash\npython TFLite_detection_stream.py --modeldir=TFLite_model --streamurl=\"http:\u002F\u002Fipaddress:port\u002Fstream\u002Fvideo.mjpeg\"\n# If the resolution is not detected automatically, set it manually\npython TFLite_detection_stream.py --modeldir=TFLite_model --streamurl=\"http:\u002F\u002Fipaddress:port\u002Fstream\u002Fvideo.mjpeg\" --resolution=1920x1080\n```\n\n### Viewing help\nTo see all available parameters, add `-h` to a command:\n```bash\npython TFLite_detection_image.py -h\n```
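\n\nIf the scripts run but no boxes appear, the FAQ below suggests lowering the confidence threshold (for example to 0.01). Assuming the scripts expose this as a `--threshold` flag (confirm the exact flag name in the `-h` output), that would look like:\n```bash\npython TFLite_detection_image.py --modeldir=TFLite_model --threshold=0.01\n```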
Notebook","#DA5B0B",83.8,{"name":90,"color":91,"percentage":92},"Python","#3572A5",15.8,{"name":94,"color":95,"percentage":96},"Shell","#89e051",0.5,1594,697,"2026-04-01T17:00:04","Apache-2.0","Windows, Raspberry Pi","推理无需 GPU，训练推荐使用 Google Colab 免费 GPU","未说明",{"notes":105,"python":103,"dependencies":106},"训练推荐使用 Google Colab；本地训练指南已过时；仅提供 Windows 和树莓派部署指南；macOS\u002FLinux\u002FAndroid 部署指南尚未发布；需准备.tflite 模型文件和 labelmap.txt",[107,108],"tensorflow","tensorflow-lite",[14,13],null,"2026-03-27T02:49:30.150509","2026-04-06T06:44:06.590640",[114,119,124,128,132,137],{"id":115,"question_zh":116,"answer_zh":117,"source_url":118},3656,"如何修复 .proto 文件编译失败的问题？","如果在编译 *.proto 文件时遇到困难，建议直接使用预编译好的 *.py 文件替代。可以从 Google Drive 下载 compiled_protos.zip，并将其复制到 Colab 环境中的指定目录。\n具体操作步骤如下：\n1. 进入目标目录：%cd \u002Fcontent\u002Ftensorflow1\u002Fmodels\u002Fresearch\u002Fobject_detection\u002Fprotos\u002F\n2. 复制压缩包：!cp '\u002Fcontent\u002Fdrive\u002FMy Drive\u002Fcompiled_protos.zip' .\n3. 解压文件：!unzip compiled_protos.zip -d .\u002F （在覆盖提示时选择 No）","https:\u002F\u002Fgithub.com\u002FEdjeElectronics\u002FTensorFlow-Lite-Object-Detection-on-Android-and-Raspberry-Pi\u002Fissues\u002F40",{"id":120,"question_zh":121,"answer_zh":122,"source_url":123},3657,"树莓派上运行 TFLite 脚本时报错 'STRIDED_SLICE' 版本不匹配怎么办？","这是 TensorFlow v1.14 及以下版本在 Raspbian Stretch 上的已知问题。解决方法是卸载标准 TensorFlow 并安装 Google 提供的 TensorFlow Lite 运行时。\n请在终端执行以下命令：\n```\ncd tflite1\nsource tflite1-env\u002Fbin\u002Factivate\npip3 uninstall tensorflow\nwget https:\u002F\u002Fdl.google.com\u002Fcoral\u002Fpython\u002Ftflite_runtime-1.14.0-cp35-cp35m-linux_armv7l.whl\npip3 install tflite_runtime-1.14.0-cp35-cp35m-linux_armv7l.whl\n```\n之后即可正常运行 TFLite_detection 脚本。","https:\u002F\u002Fgithub.com\u002FEdjeElectronics\u002FTensorFlow-Lite-Object-Detection-on-Android-and-Raspberry-Pi\u002Fissues\u002F13",{"id":125,"question_zh":126,"answer_zh":127,"source_url":123},3658,"树莓派上检测到的边界框位置偏移如何修正？","如果检测框相对于视频物体发生位移，可以通过调整坐标系数来解决。建议在代码中强制将坐标限制在图像尺寸内，并使用特定因子进行缩放。\n参考修改方式：\n```python\n# 获取边界框坐标并绘制框\nymin = int(max(1, (boxes[i][0] * imH * 0.7)))\nxmin = int(max(1, (boxes[i][1] * imW * 0.5)))\nymax = int(min(imH, (boxes[i][2] * imH * 0.7)))\nxmax = int(min(imW, (boxes[i][3] * imW * 0.5)))\n```\n其中 Y 坐标使用 0.7 因子，X 坐标使用 0.5 因子。",{"id":129,"question_zh":130,"answer_zh":131,"source_url":123},3659,"运行检测脚本时出现无尽报错信息如何处理？","当脚本运行过程中产生大量无关紧要的错误日志（如 corrupt JPEG data）时，可以将输出重定向到空设备来隐藏这些信息。\n执行命令示例：\n```bash\npython3 TFLite_detection_webcam.py --modeldir=Sample_TFLite_model > \u002Fdev\u002Fnull 2>&1\n```",{"id":133,"question_zh":134,"answer_zh":135,"source_url":136},3660,"训练后的 TFLite 模型无法显示任何检测框怎么办？","如果训练了足够步数（如 10000 步）但仍无检测框出现，可能是置信度阈值设置过高。尝试降低阈值参数，例如将其设置为 0.01，通常可以解决此问题并显示更多检测结果。","https:\u002F\u002Fgithub.com\u002FEdjeElectronics\u002FTensorFlow-Lite-Object-Detection-on-Android-and-Raspberry-Pi\u002Fissues\u002F135",{"id":138,"question_zh":139,"answer_zh":140,"source_url":136},3661,"在哪里可以获取用于测试的样本数据集？","仓库维护者提供了一个包含 900 张图像的 'bird, squirrel, raccoon' 数据集供用户测试使用。可以在 Dropbox 下载该 ZIP 包，并在 Colab 笔记本的步骤 3 中选择下载选项。\n下载地址：https:\u002F\u002Fwww.dropbox.com\u002Fs\u002Fen33x280e4z3wbt\u002FBSR2.zip?dl=0",[]]