[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-EdjeElectronics--TensorFlow-Object-Detection-on-the-Raspberry-Pi":3,"tool-EdjeElectronics--TensorFlow-Object-Detection-on-the-Raspberry-Pi":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 
等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",159636,2,"2026-04-17T23:33:34",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108322,"2026-04-10T11:39:34",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 
协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[52,13,15,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":32,"last_commit_at":59,"category_tags":60,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":77,"owner_email":78,"owner_twitter":72,"owner_website":79,"owner_url":80,"languages":81,"stars":86,"forks":87,"last_commit_at":88,"license":89,"difficulty_score":90,"env_os":91,"env_gpu":92,"env_ram":93,"env_deps":94,"category_tags":108,"github_topics":109,"view_count":32,"oss_zip_url":109,"oss_zip_packed_at":109,"status":17,"created_at":110,"updated_at":111,"faqs":112,"releases":146},8690,"EdjeElectronics\u002FTensorFlow-Object-Detection-on-the-Raspberry-Pi","TensorFlow-Object-Detection-on-the-Raspberry-Pi","A tutorial showing how to set up TensorFlow's Object Detection API on the Raspberry Pi","TensorFlow-Object-Detection-on-the-Raspberry-Pi 是一份详尽的实战教程，旨在指导用户如何在树莓派这一低成本硬件上部署谷歌的 TensorFlow 对象检测 API。它主要解决了在资源受限的边缘设备上配置复杂深度学习环境的技术难题，让用户能够轻松利用树莓派配合摄像头，实现对实时视频流中特定物体的自动识别与追踪。\n\n这份指南非常适合嵌入式开发者、AI 
爱好者以及希望将智能视觉功能融入物联网项目的研究人员。无论是想监控花园里的动物、检测停车场空位，还是构建个性化的宠物监测器，用户都能从中找到落地的解决方案。其独特的技术亮点在于紧跟技术迭代，简化了原本繁琐的安装流程：现在只需通过简单的 pip 命令即可安装 TensorFlow，并利用系统包管理器直接安装 Protobuf，大幅降低了入门门槛。此外，项目还附带了实用的示例代码（如宠物检测脚本），展示了从环境搭建到实际应用的完整闭环，帮助用户快速将创意转化为现实。","# Tutorial to set up TensorFlow Object Detection API on the Raspberry Pi\n\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FEdjeElectronics_TensorFlow-Object-Detection-on-the-Raspberry-Pi_readme_f655384b8c2b.png\">\n\u003C\u002Fp>\n\n*Update 10\u002F13\u002F19:* Setting up the TensorFlow Object Detection API on the Pi is much easier now! Two major updates: 1) TensorFlow can be installed simply using \"pip3 install tensorflow\". 2) The protobuf compiler (protoc) can be installed using \"sudo apt-get install protobuf-compiler\". I have updated Step 2 and Step 4 to reflect these changes.\n\nBonus: I made a Pet Detector program (Pet_detector.py) that sends me a text when it detects that my cat wants to be let outside! It runs on the Raspberry Pi and uses the TensorFlow Object Detection API. You can use the code as an example for your own object detection applications. More info is available at [the bottom of this readme](https:\u002F\u002Fgithub.com\u002FEdjeElectronics\u002FTensorFlow-Object-Detection-on-the-Raspberry-Pi#bonus-pet-detector).\n\n## Introduction\nThis guide provides step-by-step instructions for how to set up TensorFlow’s Object Detection API on the Raspberry Pi. By following the steps in this guide, you will be able to use your Raspberry Pi to perform object detection on live video feeds from a Picamera or USB webcam. 
Combine this guide with my \u003Clink> tutorial on how to train your own neural network to identify specific objects\u003C\u002Flink>, and you can use your Pi for unique detection applications such as:\n\n* Detecting if bunnies are in your garden eating your precious vegetables\n* Telling you if there are any parking spaces available in front of your apartment building\n* [Beehive bee counter](http:\u002F\u002Fmatpalm.com\u002Fblog\u002Fcounting_bees\u002F)\n* [Counting cards at the blackjack table](https:\u002F\u002Fhackaday.io\u002Fproject\u002F27639-rainman-20-blackjack-robot)\n* And anything else you can think of!\n\nHere's a YouTube video I made that walks through this guide!\n\n[![Link to my YouTube video!](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FEdjeElectronics_TensorFlow-Object-Detection-on-the-Raspberry-Pi_readme_bf40157dcc5a.png)](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=npZ-8Nj1YwY)\n\nThe guide walks through the following steps:\n1. [Update the Raspberry Pi](https:\u002F\u002Fgithub.com\u002FEdjeElectronics\u002FTensorFlow-Object-Detection-on-the-Raspberry-Pi#1-update-the-raspberry-pi)\n2. [Install TensorFlow](https:\u002F\u002Fgithub.com\u002FEdjeElectronics\u002FTensorFlow-Object-Detection-on-the-Raspberry-Pi#2-install-tensorflow)\n3. [Install OpenCV](https:\u002F\u002Fgithub.com\u002FEdjeElectronics\u002FTensorFlow-Object-Detection-on-the-Raspberry-Pi#3-install-opencv)\n4. [Compile and install Protobuf](https:\u002F\u002Fgithub.com\u002FEdjeElectronics\u002FTensorFlow-Object-Detection-on-the-Raspberry-Pi#4-compile-and-install-protobuf)\n5. [Set up TensorFlow directory structure and the PYTHONPATH variable](https:\u002F\u002Fgithub.com\u002FEdjeElectronics\u002FTensorFlow-Object-Detection-on-the-Raspberry-Pi#5-set-up-tensorflow-directory-structure-and-pythonpath-variable)\n6. [Detect objects!](https:\u002F\u002Fgithub.com\u002FEdjeElectronics\u002FTensorFlow-Object-Detection-on-the-Raspberry-Pi#6-detect-objects)\n7. 
[Bonus: Pet detector!](https:\u002F\u002Fgithub.com\u002FEdjeElectronics\u002FTensorFlow-Object-Detection-on-the-Raspberry-Pi#bonus-pet-detector)\n\nThe repository also includes the Object_detection_picamera.py script, which is a Python script that loads an object detection model in TensorFlow and uses it to detect objects in a Picamera video feed. The guide was written for TensorFlow v1.8.0 on a Raspberry Pi Model 3B running Raspbian Stretch v9. It will likely work for newer versions of TensorFlow.\n\n## Steps\n### 1. Update the Raspberry Pi\nFirst, the Raspberry Pi needs to be fully updated. Open a terminal and issue:\n```\nsudo apt-get update\nsudo apt-get dist-upgrade\n```\nDepending on how long it’s been since you’ve updated your Pi, the upgrade could take anywhere between a minute and an hour.\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FEdjeElectronics_TensorFlow-Object-Detection-on-the-Raspberry-Pi_readme_1db8011fd02e.png\">\n\u003C\u002Fp>\n\n### 2. Install TensorFlow\n*Update 10\u002F13\u002F19: Changed instructions to just use \"pip3 install tensorflow\" rather than getting it from lhelontra's repository. The old instructions have been moved to this guide's appendix.*\n\nNext, we’ll install TensorFlow. The download is rather large (over 100MB), so it may take a while. Issue the following command:\n\n```\npip3 install tensorflow\n```\n\nTensorFlow also needs the LibAtlas package. Install it by issuing the following command. (If this command doesn't work, issue \"sudo apt-get update\" and then try again).\n```\nsudo apt-get install libatlas-base-dev\n```\nWhile we’re at it, let’s install other dependencies that will be used by the TensorFlow Object Detection API. 
These are listed on the [installation instructions](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Fblob\u002Fmaster\u002Fresearch\u002Fobject_detection\u002Fg3doc\u002Finstallation.md) in TensorFlow’s Object Detection GitHub repository. Issue:\n```\nsudo pip3 install pillow lxml jupyter matplotlib cython\nsudo apt-get install python-tk\n```\nAlright, that’s everything we need for TensorFlow! Next up: OpenCV.\n\n### 3. Install OpenCV\nTensorFlow’s object detection examples typically use matplotlib to display images, but I prefer to use OpenCV because it’s easier to work with and less error prone. The object detection scripts in this guide’s GitHub repository use OpenCV. So, we need to install OpenCV.\n\nTo get OpenCV working on the Raspberry Pi, there’s quite a few dependencies that need to be installed through apt-get. If any of the following commands don’t work, issue “sudo apt-get update” and then try again. Issue:\n```\nsudo apt-get install libjpeg-dev libtiff5-dev libjasper-dev libpng12-dev\nsudo apt-get install libavcodec-dev libavformat-dev libswscale-dev libv4l-dev\nsudo apt-get install libxvidcore-dev libx264-dev\nsudo apt-get install qt4-dev-tools libatlas-base-dev\n```\nNow that we’ve got all those installed, we can install OpenCV. Issue:\n```\nsudo pip3 install opencv-python\n```\nAlright, now OpenCV is installed!\n\n### 4. Compile and Install Protobuf\nThe TensorFlow object detection API uses Protobuf, a package that implements Google’s Protocol Buffer data format. You used to need to compile this from source, but now it's an easy install! I moved the old instructions for compiling and installing it from source to the appendix of this guide.\n\n```sudo apt-get install protobuf-compiler```\n\nRun `protoc --version` once that's done to verify it is installed. You should get a response of `libprotoc 3.6.1` or similar.\n\n### 5. 
Set up TensorFlow Directory Structure and PYTHONPATH Variable\nNow that we’ve installed all the packages, we need to set up the TensorFlow directory. Move back to your home directory, then make a directory called “tensorflow1”, and cd into it.\n```\nmkdir tensorflow1\ncd tensorflow1\n```\nDownload the tensorflow repository from GitHub by issuing:\n```\ngit clone --depth 1 https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels.git\n```\nNext, we need to modify the PYTHONPATH environment variable to point at some directories inside the TensorFlow repository we just downloaded. We want PYTHONPATH to be set every time we open a terminal, so we have to modify the .bashrc file. Open it by issuing:\n```\nsudo nano ~\u002F.bashrc\n```\nMove to the end of the file, and on the last line, add:\n```\nexport PYTHONPATH=$PYTHONPATH:\u002Fhome\u002Fpi\u002Ftensorflow1\u002Fmodels\u002Fresearch:\u002Fhome\u002Fpi\u002Ftensorflow1\u002Fmodels\u002Fresearch\u002Fslim\n```\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FEdjeElectronics_TensorFlow-Object-Detection-on-the-Raspberry-Pi_readme_dab48cd04c0e.png\">\n\u003C\u002Fp>\n\nThen, save and exit the file. This makes it so the “export PYTHONPATH” command is called every time you open a new terminal, so the PYTHONPATH variable will always be set appropriately. Close and then re-open the terminal.\n\nNow, we need to use Protoc to compile the Protocol Buffer (.proto) files used by the Object Detection API. The .proto files are located in \u002Fresearch\u002Fobject_detection\u002Fprotos, but we need to execute the command from the \u002Fresearch directory. Issue:\n```\ncd \u002Fhome\u002Fpi\u002Ftensorflow1\u002Fmodels\u002Fresearch\nprotoc object_detection\u002Fprotos\u002F*.proto --python_out=.\n```\nThis command converts all the \"name\".proto files to \"name_pb2\".py files. 
Next, move into the object_detection directory:\n```\ncd \u002Fhome\u002Fpi\u002Ftensorflow1\u002Fmodels\u002Fresearch\u002Fobject_detection\n```\nNow, we’ll download the SSD_Lite model from the [TensorFlow detection model zoo](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Fblob\u002Fmaster\u002Fresearch\u002Fobject_detection\u002Fg3doc\u002Fdetection_model_zoo.md). The model zoo is Google’s collection of pre-trained object detection models that have various levels of speed and accuracy. The Raspberry Pi has a weak processor, so we need to use a model that takes less processing power. Though the model will run faster, it comes at a tradeoff of having lower accuracy. For this tutorial, we’ll use SSDLite-MobileNet, which is the fastest model available. \n\nGoogle is continuously releasing models with improved speed and performance, so check back at the model zoo often to see if there are any better models.\n\nDownload the SSDLite-MobileNet model and unpack it by issuing:\n```\nwget http:\u002F\u002Fdownload.tensorflow.org\u002Fmodels\u002Fobject_detection\u002Fssdlite_mobilenet_v2_coco_2018_05_09.tar.gz\ntar -xzvf ssdlite_mobilenet_v2_coco_2018_05_09.tar.gz\n```\nNow the model is in the object_detection directory and ready to be used.\n\n### 6. Detect Objects!\nOkay, now everything is set up for performing object detection on the Pi! The Python script in this repository, Object_detection_picamera.py, detects objects in live feeds from a Picamera or USB webcam. Basically, the script sets paths to the model and label map, loads the model into memory, initializes the Picamera, and then begins performing object detection on each video frame from the Picamera. 
\n\nIf you’re using a Picamera, make sure it is enabled in the Raspberry Pi configuration menu.\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FEdjeElectronics_TensorFlow-Object-Detection-on-the-Raspberry-Pi_readme_6ce741125b4f.png\">\n\u003C\u002Fp>\n\nDownload the Object_detection_picamera.py file into the object_detection directory by issuing:\n```\nwget https:\u002F\u002Fraw.githubusercontent.com\u002FEdjeElectronics\u002FTensorFlow-Object-Detection-on-the-Raspberry-Pi\u002Fmaster\u002FObject_detection_picamera.py\n```\nRun the script by issuing: \n```\npython3 Object_detection_picamera.py \n```\nThe script defaults to using an attached Picamera. If you have a USB webcam instead, add --usbcam to the end of the command:\n```\npython3 Object_detection_picamera.py --usbcam\n```\nOnce the script initializes (which can take up to 30 seconds), you will see a window showing a live view from your camera. Common objects inside the view will be identified and have a rectangle drawn around them. \n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FEdjeElectronics_TensorFlow-Object-Detection-on-the-Raspberry-Pi_readme_7eaae3dfc7a1.png\">\n\u003C\u002Fp>\n\nWith the SSDLite model, the Raspberry Pi 3 performs fairly well, achieving a frame rate higher than 1FPS. This is fast enough for most real-time object detection applications.\n\nYou can also use a model you trained yourself [(here's a guide that shows you how to train your own model)](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=Rgpfk6eYxJA) by adding the frozen inference graph into the object_detection directory and changing the model path in the script. You can test this out using my playing card detector model (transferred from ssd_mobilenet_v2 model and trained on TensorFlow v1.5) located at [this dropbox link](https:\u002F\u002Fwww.dropbox.com\u002Fs\u002F27avwicywbq68tx\u002Fcard_model.zip?dl=0). 
Once you’ve downloaded and extracted the model, or if you have your own model, place the model folder into the object_detection directory. Place the label_map.pbtxt file into the object_detection\u002Fdata directory.\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FEdjeElectronics_TensorFlow-Object-Detection-on-the-Raspberry-Pi_readme_626353650c5a.png\">\n\u003C\u002Fp>\n\nThen, open the Object_detection_picamera.py script in a text editor. Go to the line where MODEL_NAME is set and change the string to match the name of the new model folder. Then, on the line where PATH_TO_LABELS is set, change the name of the labelmap file to match the new label map. Change the NUM_CLASSES variable to the number of classes your model can identify.\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FEdjeElectronics_TensorFlow-Object-Detection-on-the-Raspberry-Pi_readme_657ef755235f.png\">\n\u003C\u002Fp>\n\nNow, when you run the script, it will use your model rather than the SSDLite_MobileNet model. If you’re using my model, it will detect and identify any playing cards dealt in front of the camera.\n\n**Note: If you plan to run this on the Pi for extended periods of time (greater than 5 minutes), make sure to have a heatsink installed on the Pi's main CPU! All the processing causes the CPU to run hot. Without a heatsink, it will shut down due to high temperature.**\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FEdjeElectronics_TensorFlow-Object-Detection-on-the-Raspberry-Pi_readme_ea82ea79a079.png\">\n\u003C\u002Fp>\n\nThanks for following through this guide, I hope you found it useful. 
Good luck with your object detection applications on the Raspberry Pi!\n\n## Bonus: Pet Detector!\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FEdjeElectronics_TensorFlow-Object-Detection-on-the-Raspberry-Pi_readme_86a1281f1d61.png\">\n\u003C\u002Fp>\n\n### Description\nThe Pet_detector.py script is an example application of using object detection on the Pi to alert users when a certain object is detected. I have two indoor-outdoor pets at my parents' home: a cat and a dog. They frequently stand at the door and wait patiently to be let inside or outside. This pet detector uses the TensorFlow MobileNet-SSD model to detect when they are near the door. It defines two regions in the image, an \"inside\" region and an \"outside\" region. If the pet is detected in either region for at least 10 consecutive frames, the script uses Twilio to send my phone a text message.\n\nHere's a YouTube video demonstrating the pet detector and explaining how it works!\n\n[![Link to my YouTube video!](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FEdjeElectronics_TensorFlow-Object-Detection-on-the-Raspberry-Pi_readme_2680e7a2be69.png)](https:\u002F\u002Fyoutu.be\u002FgGqVNuYol6o)\n\n### Usage\nRun the pet detector by downloading Pet_detector.py to your \u002Fobject_detection directory and issuing:\n```\npython3 Pet_detector.py\n```\n\nUsing the Pet_detector.py program requires having a Twilio account set up [(see tutorial here)](https:\u002F\u002Fwww.twilio.com\u002Fdocs\u002Fsms\u002Fquickstart\u002Fpython). It also uses four environment variables that have to be set before running the program: TWILIO_ACCOUNT_SID, TWILIO_AUTH_TOKEN, MY_DIGITS, and TWILIO_DIGITS. These can be set using the \"export\" command, as shown below. 
More information on setting environment variables for Twilio is given [here](https:\u002F\u002Fwww.twilio.com\u002Fblog\u002F2017\u002F01\u002Fhow-to-set-environment-variables.html).\n```\nexport TWILIO_ACCOUNT_SID=[sid_value]\nexport TWILIO_AUTH_TOKEN=[auth_token]\nexport MY_DIGITS=[your cell phone number]\nexport TWILIO_DIGITS=[phone number of the Twilio account]\n```\nThe sid_value, auth_token, and phone number of the Twilio account values are all provided when a Twilio account is set up.\n\nIf you don't want to bother with setting up Twilio so the pet detector can send you texts, you can just comment out the lines in the code that use the Twilio library. The detector will still display a message on the screen when your pet wants inside or outside.\n\nAlso, you can move the locations of the \"inside\" and \"outside\" boxes by adjusting the TL_inside, BR_inside, TL_outside, and BR_outside variables.\n\n## Appendix\n\n### Old instructions for installing TensorFlow\nThese instructions show how to install TensorFlow using lhelontra's repository. They were replaced in my 10\u002F13\u002F19 update of this guide. I am keeping them here, because these are the instructions used in my [video](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=npZ-8Nj1YwY).\n\nIn the \u002Fhome\u002Fpi directory, create a folder called ‘tf’, which will be used to hold all the installation files for TensorFlow and Protobuf, and cd into it:\n```\nmkdir tf\ncd tf\n```\nA pre-built, Raspberry Pi-compatible wheel file for installing the latest version of TensorFlow is available in the [“TensorFlow for ARM” GitHub repository](https:\u002F\u002Fgithub.com\u002Flhelontra\u002Ftensorflow-on-arm\u002Freleases). GitHub user lhelontra updates the repository with pre-compiled installation packages each time a new TensorFlow version is released. Thanks lhelontra!  
Download the wheel file by issuing:\n```\nwget https:\u002F\u002Fgithub.com\u002Flhelontra\u002Ftensorflow-on-arm\u002Freleases\u002Fdownload\u002Fv1.8.0\u002Ftensorflow-1.8.0-cp35-none-linux_armv7l.whl\n```\nAt the time this tutorial was written, the most recent version of TensorFlow was version 1.8.0. If a more recent version is available on the repository, you can download it rather than version 1.8.0.\n\nAlternatively, if the owner of the GitHub repository stops releasing new builds, or if you want some experience compiling Python packages from source code, you can check out my video guide: [How to Install TensorFlow from Source on the Raspberry Pi](https:\u002F\u002Fyoutu.be\u002FWqCnW_2XDw8), which shows you how to build and install TensorFlow from source on the Raspberry Pi.\n\n[![Link to TensorFlow installation video!](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FEdjeElectronics_TensorFlow-Object-Detection-on-the-Raspberry-Pi_readme_d9f8512604a0.jpg)](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=WqCnW_2XDw8)\n\nNow that we’ve got the file, install TensorFlow by issuing:\n```\nsudo pip3 install \u002Fhome\u002Fpi\u002Ftf\u002Ftensorflow-1.8.0-cp35-none-linux_armv7l.whl\n```\nTensorFlow also needs the LibAtlas package. Install it by issuing (if this command doesn't work, issue \"sudo apt-get update\" and then try again):\n```\nsudo apt-get install libatlas-base-dev\n```\nWhile we’re at it, let’s install other dependencies that will be used by the TensorFlow Object Detection API. These are listed on the [installation instructions](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Fblob\u002Fmaster\u002Fresearch\u002Fobject_detection\u002Fg3doc\u002Finstallation.md) in TensorFlow’s Object Detection GitHub repository. 
Issue:\n```\nsudo pip3 install pillow lxml jupyter matplotlib cython\nsudo apt-get install python-tk\n```\nTensorFlow is now installed and ready to go!\n\n### Old instructions for compiling and installing Protobuf from source\nThese are the old instructions from Step 4 showing how to compile and install Protobuf from source. These were replaced in the 10\u002F13\u002F19 update of this guide.\n\nThe TensorFlow object detection API uses Protobuf, a package that implements Google’s Protocol Buffer data format. Unfortunately, there’s currently no easy way to install Protobuf on the Raspberry Pi. We have to compile it from source ourselves and then install it. Fortunately, a [guide](http:\u002F\u002Fosdevlab.blogspot.com\u002F2016\u002F03\u002Fhow-to-install-google-protocol-buffers.html) has already been written on how to compile and install Protobuf on the Pi. Thanks OSDevLab for writing the guide!\n\nFirst, get the packages needed to compile Protobuf from source. Issue:\n```\nsudo apt-get install autoconf automake libtool curl\n```\nThen download the protobuf release from its GitHub repository by issuing:\n```\nwget https:\u002F\u002Fgithub.com\u002Fgoogle\u002Fprotobuf\u002Freleases\u002Fdownload\u002Fv3.5.1\u002Fprotobuf-all-3.5.1.tar.gz\n```\nIf a more recent version of protobuf is available, download that instead. Unpack the file and cd into the folder:\n```\ntar -zxvf protobuf-all-3.5.1.tar.gz\ncd protobuf-3.5.1\n```\nConfigure the build by issuing the following command (it takes about 2 minutes):\n```\n.\u002Fconfigure\n```\nBuild the package by issuing:\n```\nmake\n```\nThe build process took 61 minutes on my Raspberry Pi. When it’s finished, issue:\n```\nmake check \n```\nThis process takes even longer, clocking in at 107 minutes on my Pi. According to other guides I’ve seen, this command may exit out with errors, but Protobuf will still work. If you see errors, you can ignore them for now. 
Now that it’s built, install it by issuing:\n```\nsudo make install\n```\nThen move into the python directory and export the library path:\n```\ncd python\nexport LD_LIBRARY_PATH=..\u002Fsrc\u002F.libs\n```\nNext, issue:\n```\npython3 setup.py build --cpp_implementation \npython3 setup.py test --cpp_implementation\nsudo python3 setup.py install --cpp_implementation\n```\nThen issue the following path commands:\n```\nexport PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=cpp\nexport PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION_VERSION=3\n```\nFinally, issue:\n```\nsudo ldconfig\n```\nThat’s it! Now Protobuf is installed on the Pi. Verify it’s installed correctly by issuing the command below and making sure it puts out the default help text.\n```\nprotoc\n```\nFor some reason, the Raspberry Pi needs to be restarted after this process, or TensorFlow will not work. Go ahead and reboot the Pi by issuing:\n```\nsudo reboot now\n```\n\nProtobuf should now be installed!\n","# 在树莓派上设置 TensorFlow 对象检测 API 的教程\n\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FEdjeElectronics_TensorFlow-Object-Detection-on-the-Raspberry-Pi_readme_f655384b8c2b.png\">\n\u003C\u002Fp>\n\n*更新日期：2019年10月13日：* 现在在树莓派上设置 TensorFlow 对象检测 API 已经容易得多！有两个主要更新：1) 可以直接使用“pip3 install tensorflow”来安装 TensorFlow。2) 可以使用“sudo apt-get install protobuf-compiler”来安装 Protocol Buffers 编译器（protoc）。我已经更新了步骤 3 和步骤 4，以反映这些变化。\n\n额外福利：我编写了一个宠物检测程序（Pet_detector.py），当它检测到我的猫想出去时，会给我发送一条短信！该程序运行在树莓派上，并使用 TensorFlow 对象检测 API。你可以将此代码作为自己对象检测应用的示例。更多信息请参见[本自述文件的底部](https:\u002F\u002Fgithub.com\u002FEdjeElectronics\u002FTensorFlow-Object-Detection-on-the-Raspberry-Pi#bonus-pet-detector)。\n\n## 简介\n本指南提供了在树莓派上设置 TensorFlow 对象检测 API 的分步说明。按照本指南中的步骤操作，你将能够使用树莓派对来自 Picamera 或 USB 网络摄像头的实时视频流进行对象检测。将本指南与我的\u003Clink>关于如何训练自己的神经网络来识别特定物体的教程\u003C\u002Flink>结合使用，你就可以利用树莓派实现各种独特的检测应用，例如：\n\n* 检测花园里是否有兔子正在吃你珍贵的蔬菜\n* 告诉你公寓楼前是否还有空闲停车位\n* 
[蜂箱蜜蜂计数器](http:\u002F\u002Fmatpalm.com\u002Fblog\u002Fcounting_bees\u002F)\n* [二十一点牌桌上的算牌机器人](https:\u002F\u002Fhackaday.io\u002Fproject\u002F27639-rainman-20-blackjack-robot)\n* 以及你能想到的任何其他用途！\n\n这里有一段我制作的 YouTube 视频，详细介绍了本指南的内容！\n\n[![我的 YouTube 视频链接！](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FEdjeElectronics_TensorFlow-Object-Detection-on-the-Raspberry-Pi_readme_bf40157dcc5a.png)](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=npZ-8Nj1YwY)\n\n本指南将逐步介绍以下步骤：\n1. [更新树莓派](https:\u002F\u002Fgithub.com\u002FEdjeElectronics\u002FTensorFlow-Object-Detection-on-the-Raspberry-Pi#1-update-the-raspberry-pi)\n2. [安装 TensorFlow](https:\u002F\u002Fgithub.com\u002FEdjeElectronics\u002FTensorFlow-Object-Detection-on-the-Raspberry-Pi#2-install-tensorflow)\n3. [安装 OpenCV](https:\u002F\u002Fgithub.com\u002FEdjeElectronics\u002FTensorFlow-Object-Detection-on-the-Raspberry-Pi#3-install-opencv)\n4. [编译并安装 Protocol Buffers](https:\u002F\u002Fgithub.com\u002FEdjeElectronics\u002FTensorFlow-Object-Detection-on-the-Raspberry-Pi#4-compile-and-install-protobuf)\n5. [设置 TensorFlow 目录结构和 PYTHONPATH 环境变量](https:\u002F\u002Fgithub.com\u002FEdjeElectronics\u002FTensorFlow-Object-Detection-on-the-Raspberry-Pi#5-set-up-tensorflow-directory-structure-and-pythonpath-variable)\n6. [开始检测对象！](https:\u002F\u002Fgithub.com\u002FEdjeElectronics\u002FTensorFlow-Object-Detection-on-the-Raspberry-Pi#6-detect-objects)\n7. [额外福利：宠物检测器！](https:\u002F\u002Fgithub.com\u002FEdjeElectronics\u002FTensorFlow-Object-Detection-on-the-Raspberry-Pi#bonus-pet-detector)\n\n该仓库还包含 Object_detection_picamera.py 脚本，这是一个 Python 脚本，用于加载 TensorFlow 中的对象检测模型，并利用它在 Picamera 视频流中检测对象。本指南是基于 TensorFlow v1.8.0，在运行 Raspbian Stretch v9 的 Raspberry Pi Model 3B 上编写的。它很可能也适用于较新版本的 TensorFlow。\n\n## 步骤\n### 1. 
更新树莓派\n首先，需要将树莓派完全更新。打开终端并执行以下命令：\n```\nsudo apt-get update\nsudo apt-get dist-upgrade\n```\n根据你上次更新树莓派的时间长短，升级过程可能需要一分钟到一小时不等。\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FEdjeElectronics_TensorFlow-Object-Detection-on-the-Raspberry-Pi_readme_1db8011fd02e.png\">\n\u003C\u002Fp>\n\n### 2. 安装 TensorFlow\n*更新日期：2019年10月13日：说明已更改为直接使用“pip3 install tensorflow”，而不是从 lhelontra 的仓库获取。旧的说明已被移至本指南的附录。*\n\n接下来，我们将安装 TensorFlow。下载文件较大（超过 100MB），因此可能需要一些时间。执行以下命令：\n```\npip3 install tensorflow\n```\n\nTensorFlow 还需要 LibAtlas 包。通过执行以下命令来安装它。（如果此命令不起作用，请先执行“sudo apt-get update”，然后再试一次）。\n```\nsudo apt-get install libatlas-base-dev\n```\n\n顺便说一下，我们还需要安装 TensorFlow 对象检测 API 所需的其他依赖项。这些依赖项列在 TensorFlow 对象检测 GitHub 仓库中的[安装说明](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Fblob\u002Fmaster\u002Fresearch\u002Fobject_detection\u002Fg3doc\u002Finstallation.md)中。执行以下命令：\n```\nsudo pip3 install pillow lxml jupyter matplotlib cython\nsudo apt-get install python-tk\n```\n\n好了，TensorFlow 所需的一切都准备好了！接下来是 OpenCV。\n\n### 3. 安装 OpenCV\nTensorFlow 的对象检测示例通常使用 matplotlib 来显示图像，但我更喜欢使用 OpenCV，因为它更容易使用且出错的可能性更小。本指南 GitHub 仓库中的对象检测脚本就使用了 OpenCV。因此，我们需要安装 OpenCV。\n\n要在树莓派上运行 OpenCV，需要通过 apt-get 安装许多依赖项。如果以下任何一条命令无法执行，请先执行“sudo apt-get update”，然后再试一次。执行以下命令：\n```\nsudo apt-get install libjpeg-dev libtiff5-dev libjasper-dev libpng12-dev\nsudo apt-get install libavcodec-dev libavformat-dev libswscale-dev libv4l-dev\nsudo apt-get install libxvidcore-dev libx264-dev\nsudo apt-get install qt4-dev-tools libatlas-base-dev\n```\n\n现在所有依赖项都已安装完毕，我们可以安装 OpenCV。执行以下命令：\n```\nsudo pip3 install opencv-python\n```\n\n好了，OpenCV 现已安装！\n\n### 4. 编译并安装 Protocol Buffers\nTensorFlow 对象检测 API 使用 Protocol Buffers，这是一种实现 Google Protocol Buffer 数据格式的软件包。过去你需要从源代码编译它，但现在可以直接轻松安装！我已将以前从源代码编译和安装的说明移到了本指南的附录中。\n\n```\nsudo apt-get install protobuf-compiler\n```\n\n完成后，运行 `protoc --version` 来验证是否已成功安装。你应该会看到类似 `libprotoc 3.6.1` 的输出。\n\n### 5. 
### 5. Set Up the TensorFlow Directory Structure and the PYTHONPATH Variable
Now that all the packages are installed, we need to set up the TensorFlow directory. Move back to your home directory, then make a directory called "tensorflow1" and cd into it:
```
mkdir tensorflow1
cd tensorflow1
```
Download the TensorFlow repository from GitHub by issuing:
```
git clone --depth 1 https://github.com/tensorflow/models.git
```
Next, we need to modify the PYTHONPATH environment variable to point at some directories inside the TensorFlow repository we just downloaded. We want PYTHONPATH to be set every time we open a terminal, so we have to edit the .bashrc file. Open it with:
```
sudo nano ~/.bashrc
```
Move to the end of the file, and on the last line, add:
```
export PYTHONPATH=$PYTHONPATH:/home/pi/tensorflow1/models/research:/home/pi/tensorflow1/models/research/slim
```

<p align="center">
  <img src="https://oss.gittoolsai.com/images/EdjeElectronics_TensorFlow-Object-Detection-on-the-Raspberry-Pi_readme_dab48cd04c0e.png">
</p>

Save and exit the file. The "export PYTHONPATH" command will now run every time you open a new terminal, so the PYTHONPATH variable will always be set correctly. Close and then re-open the terminal.

Now, we need to use Protoc to compile the Protocol Buffer (.proto) files used by the Object Detection API. The .proto files are located in /research/object_detection/protos, but the command must be issued from the /research directory. Issue:
```
cd /home/pi/tensorflow1/models/research
protoc object_detection/protos/*.proto --python_out=.
```
This command converts all the "name".proto files to "name_pb2".py files. Next, move into the object_detection directory:
```
cd /home/pi/tensorflow1/models/research/object_detection
```
Now, we'll download the SSD_Lite model from the [TensorFlow detection model zoo](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md). The model zoo is Google's collection of pre-trained object detection models at various levels of speed and accuracy. Since the Raspberry Pi has a weak processor, we need a model that takes less processing power. Such a model runs faster, though its accuracy may suffer. For this tutorial, we'll use SSDLite-MobileNet, the fastest model currently available.

Google keeps releasing new models with better speed and performance, so check back at the model zoo often to see if a better model is available.

Download and unpack the SSDLite-MobileNet model by issuing:
```
wget http://download.tensorflow.org/models/object_detection/ssdlite_mobilenet_v2_coco_2018_05_09.tar.gz
tar -xzvf ssdlite_mobilenet_v2_coco_2018_05_09.tar.gz
```
The model is now located in the object_detection directory and is ready to be used.

### 6. Detect Objects!
Okay, now everything is set up for performing object detection on the Pi! The Python script in this repository, Object_detection_picamera.py, detects objects in live feeds from a Picamera or USB webcam. Basically, the script sets paths to the model and label map files, loads the model into memory, initializes the Picamera, and then begins performing object detection on each video frame captured by the Picamera.

If you're using a Picamera, make sure it is enabled in the Raspberry Pi configuration menu.

<p align="center">
  <img src="https://oss.gittoolsai.com/images/EdjeElectronics_TensorFlow-Object-Detection-on-the-Raspberry-Pi_readme_6ce741125b4f.png">
</p>

Download the Object_detection_picamera.py file into the object_detection directory by issuing:
```
wget https://raw.githubusercontent.com/EdjeElectronics/TensorFlow-Object-Detection-on-the-Raspberry-Pi/master/Object_detection_picamera.py
```
Then run the script:
```
python3 Object_detection_picamera.py
```
The script defaults to using an attached Picamera. If you have a USB webcam instead, add the --usbcam argument to the end of the command:
```
python3 Object_detection_picamera.py --usbcam
```
Once the script initializes (which can take up to 30 seconds), you will see a window showing a live view from your camera. Common objects inside the view will be identified and have a rectangle drawn around them.

<p align="center">
  <img src="https://oss.gittoolsai.com/images/EdjeElectronics_TensorFlow-Object-Detection-on-the-Raspberry-Pi_readme_7eaae3dfc7a1.png">
</p>

With the SSDLite model, the Raspberry Pi 3 performs fairly well, achieving a frame rate higher than 1 FPS. This is fast enough for most real-time object detection applications.

You can also use a model you've trained yourself [(here's a guide that shows how to train your own model)](https://www.youtube.com/watch?v=Rgpfk6eYxJA) by placing the frozen inference graph into the object_detection directory and changing the model path in the script. You can test this out with my playing card detection model, which was trained from the ssd_mobilenet_v2 model using TensorFlow v1.5 and is available from [this Dropbox link](https://www.dropbox.com/s/27avwicywbq68tx/card_model.zip?dl=0). Once you've downloaded and extracted the model, put its folder into the object_detection directory and place the label_map.pbtxt file into the object_detection/data directory.

<p align="center">
  <img src="https://oss.gittoolsai.com/images/EdjeElectronics_TensorFlow-Object-Detection-on-the-Raspberry-Pi_readme_626353650c5a.png">
</p>
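After swapping in a custom model, the script's configuration variables end up looking something like this. These are illustrative values assuming the card model described above; the exact variable layout inside Object_detection_picamera.py may differ slightly:

```python
import os

# Illustrative settings for a custom model (values here assume the card model
# described above; adjust them to your own model folder and label map).
CWD_PATH = os.getcwd()

MODEL_NAME = 'card_model'  # folder that holds frozen_inference_graph.pb
PATH_TO_CKPT = os.path.join(CWD_PATH, MODEL_NAME, 'frozen_inference_graph.pb')
PATH_TO_LABELS = os.path.join(CWD_PATH, 'data', 'label_map.pbtxt')
NUM_CLASSES = 6  # set this to however many classes your model detects
```
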
Then, open the Object_detection_picamera.py script in a text editor. Find the line that sets MODEL_NAME and change the string to the name of your new model folder. Find the line that sets PATH_TO_LABELS and change the label map filename to your new label file's name. Finally, change the NUM_CLASSES variable to the number of classes your model can identify.

<p align="center">
  <img src="https://oss.gittoolsai.com/images/EdjeElectronics_TensorFlow-Object-Detection-on-the-Raspberry-Pi_readme_657ef755235f.png">
</p>

Now, when you run the script, it will use your model rather than the SSDLite_MobileNet model. If you're using my model, it will detect and identify any playing cards that appear in front of the camera.

**Note: if you plan to run this program for an extended period of time (greater than 5 minutes), make sure to have a heatsink installed on the Pi's main CPU! All the processing generates lots of heat, and without a heatsink the CPU will overheat and shut itself down.**

<p align="center">
  <img src="https://oss.gittoolsai.com/images/EdjeElectronics_TensorFlow-Object-Detection-on-the-Raspberry-Pi_readme_ea82ea79a079.png">
</p>

Thanks for following through this guide — I hope you found it useful. Good luck with your object detection projects on the Raspberry Pi!

## Bonus: Pet Detector!

<p align="center">
  <img src="https://oss.gittoolsai.com/images/EdjeElectronics_TensorFlow-Object-Detection-on-the-Raspberry-Pi_readme_86a1281f1d61.png">
</p>

### Description
The Pet_detector.py script is an example application that uses the object detection API to alert the user when a certain object is detected. My parents have two indoor-outdoor pets — a cat and a dog — that often stand at the door waiting patiently to be let inside or outside. The pet detector uses TensorFlow's MobileNet-SSD model to detect when a pet is near the door. It defines two regions in the image: an "inside" region and an "outside" region. If a pet is detected in either region for at least 10 consecutive frames, the script sends a text message to my phone through Twilio.

Here's a YouTube video demonstrating the pet detector and explaining how it works!

[![Link to my YouTube video!](https://oss.gittoolsai.com/images/EdjeElectronics_TensorFlow-Object-Detection-on-the-Raspberry-Pi_readme_2680e7a2be69.png)](https://youtu.be/gGqVNuYol6o)

### Usage
To run the pet detector, download Pet_detector.py into your /object_detection directory and issue:
```
python3 Pet_detector.py
```

Using the Pet_detector.py program requires signing up for a Twilio account [(see the tutorial here)](https://www.twilio.com/docs/sms/quickstart/python). It also requires setting four environment variables before running: TWILIO_ACCOUNT_SID, TWILIO_AUTH_TOKEN, MY_DIGITS, and TWILIO_DIGITS. These can be set with the "export" command, as shown below. For more information on setting environment variables for Twilio, see [here](https://www.twilio.com/blog/2017/01/how-to-set-environment-variables.html).
```
export TWILIO_ACCOUNT_SID=[sid_value]
export TWILIO_AUTH_TOKEN=[auth_token]
export MY_DIGITS=[your mobile number]
export TWILIO_DIGITS=[your Twilio account's phone number]
```

The sid_value, auth_token, and Twilio account phone number are all provided when you sign up for a Twilio account.

If you don't want to bother setting up Twilio so the pet detector can text you, you can simply comment out the parts of the code that use the Twilio library. The detector will still display a message on the screen when your pet wants inside or outside.

You can also move the locations of the "inside" and "outside" boxes by adjusting the TL_inside, BR_inside, TL_outside, and BR_outside variables.

## Appendix

### Old instructions for installing TensorFlow
These instructions show how to install TensorFlow using lhelontra's repository. They were replaced in the 10/13/19 update of this guide. I am keeping them here because these are the steps used in my [video](https://www.youtube.com/watch?v=npZ-8Nj1YwY).

In the /home/pi directory, create a folder called 'tf', which will hold all the installation files for TensorFlow and Protobuf, and cd into it:
```
mkdir tf
cd tf
```

A pre-built, Pi-optimized wheel file for installing the latest version of TensorFlow is available in the ["TensorFlow for ARM" GitHub repository](https://github.com/lhelontra/tensorflow-on-arm/releases). GitHub user lhelontra updates the repository with pre-compiled installation packages each time a new TensorFlow release comes out. Thanks lhelontra! Download the wheel file by issuing:
```
wget https://github.com/lhelontra/tensorflow-on-arm/releases/download/v1.8.0/tensorflow-1.8.0-cp35-none-linux_armv7l.whl
```

At the time this tutorial was written, the latest TensorFlow version was 1.8.0. If a newer version is available in the repository, you can download it rather than 1.8.0.

Alternatively, if you'd like hands-on experience compiling a Python package from source, you can watch my video guide, [How to Install TensorFlow from Source on the Raspberry Pi](https://youtu.be/WqCnW_2XDw8), which walks through building and installing TensorFlow from source on the Raspberry Pi.

[![Link to TensorFlow installation video!](https://oss.gittoolsai.com/images/EdjeElectronics_TensorFlow-Object-Detection-on-the-Raspberry-Pi_readme_d9f8512604a0.jpg)](https://www.youtube.com/watch?v=WqCnW_2XDw8)

Now that we've got the file, install TensorFlow by issuing:
```
sudo pip3 install /home/pi/tf/tensorflow-1.8.0-cp35-none-linux_armv7l.whl
```

TensorFlow also needs the LibAtlas package. Install it by issuing (if this command doesn't work, issue "sudo apt-get update" and then try again):
```
sudo apt-get install libatlas-base-dev
```

While we're at it, let's also install the other dependencies that will be used by the TensorFlow Object Detection API. These are listed in the [installation instructions](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/installation.md) in the TensorFlow Object Detection GitHub repository. Issue:
```
sudo pip3 install pillow lxml jupyter matplotlib cython
sudo apt-get install python-tk
```

TensorFlow is now installed and ready to go!

### Old instructions for compiling and installing Protobuf from source
Here are the old instructions from Step 4 showing how to compile and install Protobuf from source. These were replaced in the 10/13/19 update of this guide.

The TensorFlow object detection API uses Protobuf, a package that implements Google's Protocol Buffer data format. Unfortunately, there's currently no easy way to install Protobuf on the Raspberry Pi. We have to compile it from source ourselves and then install it. Fortunately, a [guide](http://osdevlab.blogspot.com/2016/03/how-to-install-google-protocol-buffers.html) has already been written on how to compile and install Protobuf on the Pi. Thanks OSDevLab for writing it!

First, get the packages needed to compile Protobuf from source. Issue:
```
sudo apt-get install autoconf automake libtool curl
```

Then download the Protobuf release from its GitHub repository:
```
wget https://github.com/google/protobuf/releases/download/v3.5.1/protobuf-all-3.5.1.tar.gz
```

If a more recent version of Protobuf is available, download that instead. Unpack the file and cd into the folder:
```
tar -zxvf protobuf-all-3.5.1.tar.gz
cd protobuf-3.5.1
```

Configure the build by issuing the following command (it takes about 2 minutes):
```
./configure
```

Then build the package:
```
make
```

The build process took about 61 minutes on my Raspberry Pi. When it's finished, issue:
```
make check
```

This process takes even longer, clocking in at 107 minutes on my Pi. According to other guides I've seen, this command may exit out with errors, but Protobuf will still work. If you see errors, you can ignore them for now. When it's finished, install Protobuf by issuing:
```
sudo make install
```

Then move into the python directory and export the library path:
```
cd python
export LD_LIBRARY_PATH=../src/.libs
```

Next, issue the following commands in order:
```
python3 setup.py build --cpp_implementation
python3 setup.py test --cpp_implementation
sudo python3 setup.py install --cpp_implementation
```

Then set the following environment variables:
```
export PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=cpp
export PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION_VERSION=3
```

Finally, issue:
```
sudo ldconfig
```

That's it! Protobuf is now installed on the Pi. Verify it's installed correctly by issuing the command below and making sure it prints the default help text:
```
protoc
```

For some reason, the Raspberry Pi needs to be restarted after this process, or TensorFlow won't work. Reboot the Pi by issuing:
```
sudo reboot now
```

Protobuf should now be installed!

---
# TensorFlow Object Detection on the Raspberry Pi: Quick Start Guide

This guide helps you quickly deploy the TensorFlow Object Detection API on a Raspberry Pi and recognize objects in a live camera feed.

## Prerequisites

*   **Hardware**: Raspberry Pi 3B or later (4B recommended), with a Picamera module or USB webcam.
*   **OS**: Raspbian Stretch (v9) or newer (e.g. Buster, Bullseye).
*   **Network**: internet access to download models and packages.
    *   *Mirror tip*: if `pip` or `apt` downloads are slow, you can temporarily switch to a regional mirror (for users in China, e.g. the Tsinghua or Aliyun mirrors).
*   **Background**: basic familiarity with the Linux terminal.

## Installation

Run the following commands in a terminal on the Pi, in order.

### 1. Update the system
First, make sure the system packages are up to date:
```bash
sudo apt-get update
sudo apt-get dist-upgrade -y
```

### 2. Install TensorFlow and base dependencies
Install TensorFlow directly via pip, along with the required supporting libraries:
```bash
pip3 install tensorflow
sudo apt-get install libatlas-base-dev -y
sudo pip3 install pillow lxml jupyter matplotlib cython
sudo apt-get install python3-tk -y
```
*(Note: if pip downloads slowly, you can add the `-i https://pypi.tuna.tsinghua.edu.cn/simple` option)*

### 3. Install OpenCV
This guide displays images with OpenCV. Install its system dependencies first, then the Python package:
```bash
sudo apt-get install libjpeg-dev libtiff5-dev libjasper-dev libpng12-dev -y
sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev libv4l-dev -y
sudo apt-get install libxvidcore-dev libx264-dev -y
sudo apt-get install qt4-dev-tools libatlas-base-dev -y
sudo pip3 install opencv-python
```

### 4. Install the Protobuf compiler
The TensorFlow Object Detection API requires Protobuf support:
```bash
sudo apt-get install protobuf-compiler -y
protoc --version
# Expected output: libprotoc 3.x.x or similar
```
### 5. Set Up Directories and Environment Variables
Create a working directory and clone the TensorFlow Models repository:
```bash
cd ~
mkdir tensorflow1
cd tensorflow1
git clone --depth 1 https://github.com/tensorflow/models.git
```

Make the `PYTHONPATH` environment variable permanent:
```bash
sudo nano ~/.bashrc
```
Add the following at the end of the file (adjust the paths if your directory differs):
```bash
export PYTHONPATH=$PYTHONPATH:/home/pi/tensorflow1/models/research:/home/pi/tensorflow1/models/research/slim
```
Save and exit (Ctrl+O, Enter, Ctrl+X), then restart the terminal or run `source ~/.bashrc`.

Compile the Protobuf files:
```bash
cd /home/pi/tensorflow1/models/research
protoc object_detection/protos/*.proto --python_out=.
```

### 6. Download a Pre-Trained Model
Since the Pi's compute power is limited, the lightweight **SSDLite-MobileNet** model is recommended:
```bash
cd /home/pi/tensorflow1/models/research/object_detection
wget http://download.tensorflow.org/models/object_detection/ssdlite_mobilenet_v2_coco_2018_05_09.tar.gz
tar -xzvf ssdlite_mobilenet_v2_coco_2018_05_09.tar.gz
```

### 7. Get the Detection Script
Download the Pi-optimized detection script:
```bash
wget https://raw.githubusercontent.com/EdjeElectronics/TensorFlow-Object-Detection-on-the-Raspberry-Pi/master/Object_detection_picamera.py
```

## Basic Usage

### Enable the Camera
If you're using the Raspberry Pi camera module (Picamera), enable it in the system configuration first:
1. Run `sudo raspi-config`
2. Select **Interface Options** -> **Camera** -> **Yes**
3. Reboot the Raspberry Pi.
### Run Detection
Move into the script's directory and run:

**With a Picamera:**
```bash
python3 Object_detection_picamera.py
```

**With a USB webcam:**
```bash
python3 Object_detection_picamera.py --usbcam
```

### Expected Results
*   Initialization may take 10-30 seconds.
*   A window then opens showing the live camera feed.
*   Common objects in view (people, cats, cups, etc.) are identified and boxed, with the class name and confidence score shown above each box.
*   On a Raspberry Pi 3B/4B, this model usually runs above 1 FPS, which is enough for basic real-time detection.

### Custom Models (Optional)
To detect specific objects (playing cards, bees, etc.), place your trained model folder (containing `frozen_inference_graph.pb`) into the `object_detection` directory and its label file `label_map.pbtxt` into the `data` subdirectory. Then edit `Object_detection_picamera.py` and change:
*   `MODEL_NAME`: the name of your model folder.
*   `PATH_TO_LABELS`: the path to your label file.
*   `NUM_CLASSES`: the number of classes your model was trained on.

## Example Use Case
A community garden manager wants to stop rabbits from eating the vegetables without keeping watch around the clock.

Without a guide like this:
- The manager has to patrol every few hours, which is tiring and leaves nights uncovered.
- Commercial smart cameras are expensive and hard to customize for one specific target such as "rabbit".
- Configuring TensorFlow and compiling its dependencies on a Raspberry Pi from scratch is complex, and version conflicts can easily stall the project.
- Even with a model running, unoptimized code often yields a frame rate too low to catch fast-moving animals.
- Intrusions are usually discovered only after the fact, when the vegetables are already eaten.

With it:
- A real-time detection system runs on a low-cost Raspberry Pi, giving 24-hour unattended monitoring.
- Using the custom-training approach the tutorial links to, the system can recognize rabbits specifically instead of false-alarming on grass moving in the wind.
- The simplified install steps (such as installing TensorFlow directly with pip) cut setup down to a few hours.
- The camera feed is analyzed in real time, and an alert or phone notification fires as soon as a rabbit appears.
- The manager can intervene remotely in time, turning passive losses into active defense.

The core value: it turns high-barrier computer vision into a low-cost, easily deployed Raspberry Pi solution, so ordinary users can build a customized monitoring system.

**Environment summary:** Python project, Apache-2.0 license. Runs on Linux (Raspbian Stretch v9 or newer); no GPU is required, since it is designed to run on the Pi's CPU. No hard minimum is stated; a Raspberry Pi 3B or later (typically 1GB of RAM) is recommended.
### Notes
This tutorial is tuned for the Raspberry Pi (such as the Model 3B) and uses the lightweight SSDLite-MobileNet model for real-time detection. A Picamera must be enabled, or a USB webcam connected. The first run downloads a pre-trained model file. Newer TensorFlow versions may work, but the original guide was written against TensorFlow v1.8.0.

*   **Python:** Python 3 (packages installed via pip3)
*   **Dependencies:** tensorflow, libatlas-base-dev, pillow, lxml, jupyter, matplotlib, cython, python-tk, opencv-python, protobuf-compiler

## FAQ

**Q: What do I do about `ImportError: libImath-2_2.so.12: cannot open shared object file` when running the script?**

A: A dependency library is missing. Install the missing packages in order:
1. Install libilmbase23: `sudo apt install libilmbase23`
2. If a libIlmImf-related error follows, install libopenexr-dev: `sudo apt install libopenexr-dev`

The script should then run normally. ([Source](https://github.com/EdjeElectronics/TensorFlow-Object-Detection-on-the-Raspberry-Pi/issues/18))

**Q: How do I fix `ValueError: cannot set WRITEABLE flag to True of this array` when running Object_detection_picamera.py?**

A: The array grabbed from the camera is read-only by default. Change the frame-grabbing code around line 136 (the exact line number may vary by version) from `frame = frame1.array` to `frame = np.copy(frame1.array)`. This slightly increases processing time and lowers the FPS, but it fixes the writeable-flag error. ([Source](https://github.com/EdjeElectronics/TensorFlow-Object-Detection-on-the-Raspberry-Pi/issues/30))

**Q: How can I reduce the download size and time when cloning tensorflow/models?**

A: You don't need the whole repository. Add the `--depth 1` option to the git clone command and drop the `--recurse-submodules` option; this cuts the download from roughly 900MB to about 370MB with no loss of functionality. ([Source](https://github.com/EdjeElectronics/TensorFlow-Object-Detection-on-the-Raspberry-Pi/issues/62))

**Q: On a Raspberry Pi 4 (Python 3.7), `pip3 install opencv-python` fails with "no matching version found" — what now?**

A: At the time, the piwheels repository did not provide a pre-built OpenCV package for Python 3.7. Instead of installing with pip, build OpenCV from source: you can follow a relevant guide (such as the one on pyimagesearch) to build OpenCV 4 from source on the Raspberry Pi.
**Note:** avoid `sudo apt-get install python-opencv`, since that typically installs the Python 2.7 build. ([Source](https://github.com/EdjeElectronics/TensorFlow-Object-Detection-on-the-Raspberry-Pi/issues/53))

**Q: The frames the code grabs are BGR, but TensorFlow models usually expect RGB — do I need to convert?**

A: Yes, usually. Some pre-trained models are not very sensitive to channel order, but for accuracy — especially if you plan to fine-tune your own model — the input format must match between training and inference. Convert BGR to RGB in the code, for example with `frame = frame[:, :, [2, 1, 0]]` or similar logic. The maintainer added a BGR-to-RGB conversion in a later update. ([Source](https://github.com/EdjeElectronics/TensorFlow-Object-Detection-on-the-Raspberry-Pi/issues/42))

**Q: The `protoc` command is reported as invalid or not found when compiling the .proto files — what's wrong?**

A: This usually means the Protobuf library is installed but the `protoc` command-line tool isn't on your PATH, or that only the library, not the compiler binary, was installed. Make sure the `protobuf-compiler` package is installed and its binary is on the system PATH. On the Raspberry Pi, `sudo apt-get install protobuf-compiler` normally takes care of it; verify by running `protoc --version` in a terminal. ([Source](https://github.com/EdjeElectronics/TensorFlow-Object-Detection-on-the-Raspberry-Pi/issues/53))

**Q: FPS drops sharply after I raise the display resolution — how can I optimize?**

A: Higher-resolution frames (e.g. 1600x1080) require far more computation per frame. Lower the camera's input resolution to match what you actually need (e.g. 800x480 or below); setting a lower resolution when initializing the camera reduces the data processed per frame and raises the real-time frame rate. ([Source](https://github.com/EdjeElectronics/TensorFlow-Object-Detection-on-the-Raspberry-Pi/issues/30))
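Two of the fixes above — copying the frame so it becomes writable, and reordering BGR channels to RGB — can be combined into one small helper. This is a sketch assuming NumPy is installed; the function name is illustrative, not from the original script:

```python
import numpy as np

def prepare_frame(raw):
    """Return a writable RGB copy of a BGR camera frame."""
    frame = np.copy(raw)            # avoids "cannot set WRITEABLE flag" on Picamera arrays
    return frame[:, :, [2, 1, 0]]   # reorder channels BGR -> RGB

# Tiny demonstration: a 1x1 "blue" pixel in BGR layout...
bgr = np.array([[[255, 0, 0]]], dtype=np.uint8)
rgb = prepare_frame(bgr)
# ...has its 255 moved to the last channel (blue in RGB order) after conversion.
```

Dropping this into the frame-grabbing step of the detection loop addresses both issues at once, at the cost of a small per-frame copy.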