[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-shaqian--flutter_tflite":3,"tool-shaqian--flutter_tflite":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 
道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":70,"readme_en":71,"readme_zh":72,"quickstart_zh":73,"use_case_zh":74,"hero_image_url":75,"owner_login":76,"owner_name":77,"owner_avatar_url":78,"owner_bio":79,"owner_company":79,"owner_location":79,"owner_email":80,"owner_twitter":79,"owner_website":81,"owner_url":82,"languages":83,"stars":108,"forks":109,"last_commit_at":110,"license":111,"difficulty_score":10,"env_os":112,"env_gpu":113,"env_ram":114,"env_deps":115,"category_tags":121,"github_topics":122,"view_count":10,"oss_zip_url":79,"oss_zip_packed_at":79,"status":16,"created_at":127,"updated_at":128,"faqs":129,"releases":159},285,"shaqian\u002Fflutter_tflite","flutter_tflite","Flutter plugin for TensorFlow Lite","flutter_tflite 是一个 Flutter 插件，专门用于在移动应用中运行 TensorFlow Lite 机器学习模型。它为开发者提供了简洁的 Dart API，无需编写复杂的原生代码，就能轻松在 iOS 和 Android 应用中集成各种 AI 能力。\n\n这个插件支持多种常见的计算机视觉任务，包括图像分类、目标检测（如 SSD MobileNet 和 Tiny-YOLOv2）、图像分割（Deeplab）、图像风格迁移（Pix2Pix）以及人体姿态估计（PoseNet）。开发者只需几行代码就能加载模型并获取预测结果，大大降低了移动端机器学习的应用门槛。\n\nflutter_tflite 特别适合需要在 Flutter 项目中快速添加 AI 功能的移动开发者使用。无论是想做一个简单的图片识别应用，还是需要实时目标检测的复杂场景，这个插件都能提供便捷的解决方案。它支持 GPU 加速以提升推理性能，同时也兼容纯 CPU 模式，可以根据实际需求灵活选择。\n\n简单来说，如果你使用 Flutter 开发移动应用，又想加入图像识别、目标检测等智能功能，flutter_tflite 是一个值","flutter_tflite 是一个 Flutter 插件，专门用于在移动应用中运行 TensorFlow Lite 机器学习模型。它为开发者提供了简洁的 Dart API，无需编写复杂的原生代码，就能轻松在 iOS 和 Android 应用中集成各种 AI 能力。\n\n这个插件支持多种常见的计算机视觉任务，包括图像分类、目标检测（如 SSD MobileNet 和 Tiny-YOLOv2）、图像分割（Deeplab）、图像风格迁移（Pix2Pix）以及人体姿态估计（PoseNet）。开发者只需几行代码就能加载模型并获取预测结果，大大降低了移动端机器学习的应用门槛。\n\nflutter_tflite 特别适合需要在 Flutter 项目中快速添加 AI 功能的移动开发者使用。无论是想做一个简单的图片识别应用，还是需要实时目标检测的复杂场景，这个插件都能提供便捷的解决方案。它支持 GPU 加速以提升推理性能，同时也兼容纯 CPU 模式，可以根据实际需求灵活选择。\n\n简单来说，如果你使用 Flutter 开发移动应用，又想加入图像识别、目标检测等智能功能，flutter_tflite 是一个值得尝试的选择。","# tflite\n\nA Flutter plugin for accessing TensorFlow Lite API. 
Supports image classification, object detection ([SSD](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Ftree\u002Fmaster\u002Fresearch\u002Fobject_detection) and [YOLO](https:\u002F\u002Fpjreddie.com\u002Fdarknet\u002Fyolov2\u002F)), [Pix2Pix](https:\u002F\u002Fphillipi.github.io\u002Fpix2pix\u002F), [Deeplab](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Ftree\u002Fmaster\u002Fresearch\u002Fdeeplab) and [PoseNet](https:\u002F\u002Fwww.tensorflow.org\u002Flite\u002Fmodels\u002Fpose_estimation\u002Foverview) on both iOS and Android.\n\n### Table of Contents\n\n- [Installation](#Installation)\n- [Usage](#Usage)\n    - [Image Classification](#Image-Classification)\n    - [Object Detection](#Object-Detection)\n      - [SSD MobileNet](#SSD-MobileNet)\n      - [YOLO](#Tiny-YOLOv2)\n    - [Pix2Pix](#Pix2Pix)\n    - [Deeplab](#Deeplab)\n    - [PoseNet](#PoseNet)\n- [Example](#Example)\n    - [Prediction in Static Images](#Prediction-in-Static-Images)\n    - [Real-time Detection](#Real-time-Detection)\n\n### Breaking changes\n\n#### Since 1.1.0:\n\n1. The iOS TensorFlow Lite library is upgraded from TensorFlowLite 1.x to TensorFlowLiteObjC 2.x. Changes to native code are denoted with `TFLITE2`.\n\n#### Since 1.0.0:\n\n1. Updated to TensorFlow Lite API v1.12.0.\n2. No longer accepts the parameters `inputSize` and `numChannels`; they are retrieved from the input tensor.\n3. `numThreads` is moved to `Tflite.loadModel`.\n\n## Installation\n\nAdd `tflite` as a [dependency in your pubspec.yaml file](https:\u002F\u002Fflutter.io\u002Fusing-packages\u002F).\n\n### Android\n\nIn `android\u002Fapp\u002Fbuild.gradle`, add the following setting in the `android` block.\n\n```\n    aaptOptions {\n        noCompress 'tflite'\n        noCompress 'lite'\n    }\n```\n\n### iOS\n\nSolutions to build errors on iOS:\n\n* 'vector' file not found\n\n  Open `ios\u002FRunner.xcworkspace` in Xcode, click Runner > Targets > Runner > Build Settings, search `Compile Sources As`, and change the value to `Objective-C++`.\n\n* 'tensorflow\u002Flite\u002Fkernels\u002Fregister.h' file not found\n\n  The plugin assumes the tensorflow header files are located in the path \"tensorflow\u002Flite\u002Fkernels\".\n\n  However, for early versions of tensorflow the header path is \"tensorflow\u002Fcontrib\u002Flite\u002Fkernels\".\n\n  Use `CONTRIB_PATH` to toggle the path. Uncomment `\u002F\u002F#define CONTRIB_PATH` here:\n  https:\u002F\u002Fgithub.com\u002Fshaqian\u002Fflutter_tflite\u002Fblob\u002Fmaster\u002Fios\u002FClasses\u002FTflitePlugin.mm#L1\n\n## Usage\n\n1. Create an `assets` folder and place your label file and model file in it. In `pubspec.yaml` add:\n\n```\n  assets:\n   - assets\u002Flabels.txt\n   - assets\u002Fmobilenet_v1_1.0_224.tflite\n```\n\n2. Import the library:\n\n```dart\nimport 'package:tflite\u002Ftflite.dart';\n```\n\n3. Load the model and labels:\n\n```dart\nString res = await Tflite.loadModel(\n  model: \"assets\u002Fmobilenet_v1_1.0_224.tflite\",\n  labels: \"assets\u002Flabels.txt\",\n  numThreads: 1, \u002F\u002F defaults to 1\n  isAsset: true, \u002F\u002F defaults to true, set to false to load resources outside assets\n  useGpuDelegate: false \u002F\u002F defaults to false, set to true to use GPU delegate\n);\n```\n\n4. See the section for the respective model below.\n\n5. 
Release resources:\n\n```\nawait Tflite.close();\n```\n\n### GPU Delegate\n\nWhen using GPU delegate, refer to [this step](https:\u002F\u002Fwww.tensorflow.org\u002Flite\u002Fperformance\u002Fgpu#step_5_release_mode) for release mode setting to get better performance. \n\n### Image Classification\n\n- Output format:\n```\n{\n  index: 0,\n  label: \"person\",\n  confidence: 0.629\n}\n```\n\n- Run on image:\n\n```dart\nvar recognitions = await Tflite.runModelOnImage(\n  path: filepath,   \u002F\u002F required\n  imageMean: 0.0,   \u002F\u002F defaults to 117.0\n  imageStd: 255.0,  \u002F\u002F defaults to 1.0\n  numResults: 2,    \u002F\u002F defaults to 5\n  threshold: 0.2,   \u002F\u002F defaults to 0.1\n  asynch: true      \u002F\u002F defaults to true\n);\n```\n\n- Run on binary:\n\n```dart\nvar recognitions = await Tflite.runModelOnBinary(\n  binary: imageToByteListFloat32(image, 224, 127.5, 127.5),\u002F\u002F required\n  numResults: 6,    \u002F\u002F defaults to 5\n  threshold: 0.05,  \u002F\u002F defaults to 0.1\n  asynch: true      \u002F\u002F defaults to true\n);\n\nUint8List imageToByteListFloat32(\n    img.Image image, int inputSize, double mean, double std) {\n  var convertedBytes = Float32List(1 * inputSize * inputSize * 3);\n  var buffer = Float32List.view(convertedBytes.buffer);\n  int pixelIndex = 0;\n  for (var i = 0; i \u003C inputSize; i++) {\n    for (var j = 0; j \u003C inputSize; j++) {\n      var pixel = image.getPixel(j, i);\n      buffer[pixelIndex++] = (img.getRed(pixel) - mean) \u002F std;\n      buffer[pixelIndex++] = (img.getGreen(pixel) - mean) \u002F std;\n      buffer[pixelIndex++] = (img.getBlue(pixel) - mean) \u002F std;\n    }\n  }\n  return convertedBytes.buffer.asUint8List();\n}\n\nUint8List imageToByteListUint8(img.Image image, int inputSize) {\n  var convertedBytes = Uint8List(1 * inputSize * inputSize * 3);\n  var buffer = Uint8List.view(convertedBytes.buffer);\n  int pixelIndex = 0;\n  for (var i = 0; i \u003C inputSize; i++) {\n    for (var j = 0; j \u003C inputSize; j++) {\n      var pixel = image.getPixel(j, i);\n      buffer[pixelIndex++] = img.getRed(pixel);\n      buffer[pixelIndex++] = img.getGreen(pixel);\n      buffer[pixelIndex++] = img.getBlue(pixel);\n    }\n  }\n  return convertedBytes.buffer.asUint8List();\n}\n```\n\n- Run on image stream (video frame):\n\n> Works with [camera plugin 4.0.0](https:\u002F\u002Fpub.dartlang.org\u002Fpackages\u002Fcamera). Video format: (iOS) kCVPixelFormatType_32BGRA, (Android) YUV_420_888.\n\n```dart\nvar recognitions = await Tflite.runModelOnFrame(\n  bytesList: img.planes.map((plane) {return plane.bytes;}).toList(),\u002F\u002F required\n  imageHeight: img.height,\n  imageWidth: img.width,\n  imageMean: 127.5,   \u002F\u002F defaults to 127.5\n  imageStd: 127.5,    \u002F\u002F defaults to 127.5\n  rotation: 90,       \u002F\u002F defaults to 90, Android only\n  numResults: 2,      \u002F\u002F defaults to 5\n  threshold: 0.1,     \u002F\u002F defaults to 0.1\n  asynch: true        \u002F\u002F defaults to true\n);\n```\n\n### Object Detection\n\n- Output format:\n\n`x, y, w, h` are between [0, 1]. 
You can scale `x, w` by the width and `y, h` by the height of the image.\n\n```\n{\n  detectedClass: \"hot dog\",\n  confidenceInClass: 0.123,\n  rect: {\n    x: 0.15,\n    y: 0.33,\n    w: 0.80,\n    h: 0.27\n  }\n}\n```\n\n#### SSD MobileNet:\n\n- Run on image:\n\n```dart\nvar recognitions = await Tflite.detectObjectOnImage(\n  path: filepath,       \u002F\u002F required\n  model: \"SSDMobileNet\",\n  imageMean: 127.5,     \n  imageStd: 127.5,      \n  threshold: 0.4,       \u002F\u002F defaults to 0.1\n  numResultsPerClass: 2,\u002F\u002F defaults to 5\n  asynch: true          \u002F\u002F defaults to true\n);\n```\n\n- Run on binary:\n\n```dart\nvar recognitions = await Tflite.detectObjectOnBinary(\n  binary: imageToByteListUint8(resizedImage, 300), \u002F\u002F required\n  model: \"SSDMobileNet\",  \n  threshold: 0.4,                                  \u002F\u002F defaults to 0.1\n  numResultsPerClass: 2,                           \u002F\u002F defaults to 5\n  asynch: true                                     \u002F\u002F defaults to true\n);\n```\n\n- Run on image stream (video frame):\n\n> Works with [camera plugin 4.0.0](https:\u002F\u002Fpub.dartlang.org\u002Fpackages\u002Fcamera). Video format: (iOS) kCVPixelFormatType_32BGRA, (Android) YUV_420_888.\n\n```dart\nvar recognitions = await Tflite.detectObjectOnFrame(\n  bytesList: img.planes.map((plane) {return plane.bytes;}).toList(),\u002F\u002F required\n  model: \"SSDMobileNet\",  \n  imageHeight: img.height,\n  imageWidth: img.width,\n  imageMean: 127.5,   \u002F\u002F defaults to 127.5\n  imageStd: 127.5,    \u002F\u002F defaults to 127.5\n  rotation: 90,       \u002F\u002F defaults to 90, Android only\n  numResults: 2,      \u002F\u002F defaults to 5\n  threshold: 0.1,     \u002F\u002F defaults to 0.1\n  asynch: true        \u002F\u002F defaults to true\n);\n```\n\n#### Tiny YOLOv2:\n\n- Run on image:\n\n```dart\nvar recognitions = await Tflite.detectObjectOnImage(\n  path: filepath,       \u002F\u002F required\n  model: \"YOLO\",      \n  imageMean: 0.0,       \n  imageStd: 255.0,      \n  threshold: 0.3,       \u002F\u002F defaults to 0.1\n  numResultsPerClass: 2,\u002F\u002F defaults to 5\n  anchors: anchors,     \u002F\u002F defaults to [0.57273,0.677385,1.87446,2.06253,3.33843,5.47434,7.88282,3.52778,9.77052,9.16828]\n  blockSize: 32,        \u002F\u002F defaults to 32\n  numBoxesPerBlock: 5,  \u002F\u002F defaults to 5\n  asynch: true          \u002F\u002F defaults to true\n);\n```\n\n- Run on binary:\n\n```dart\nvar recognitions = await Tflite.detectObjectOnBinary(\n  binary: imageToByteListFloat32(resizedImage, 416, 0.0, 255.0), \u002F\u002F required\n  model: \"YOLO\",  \n  threshold: 0.3,       \u002F\u002F defaults to 0.1\n  numResultsPerClass: 2,\u002F\u002F defaults to 5\n  anchors: anchors,     \u002F\u002F defaults to [0.57273,0.677385,1.87446,2.06253,3.33843,5.47434,7.88282,3.52778,9.77052,9.16828]\n  blockSize: 32,        \u002F\u002F defaults to 32\n  numBoxesPerBlock: 5,  \u002F\u002F defaults to 5\n  asynch: true          \u002F\u002F defaults to true\n);\n```\n\n- Run on image stream (video frame):\n\n> Works with [camera plugin 4.0.0](https:\u002F\u002Fpub.dartlang.org\u002Fpackages\u002Fcamera). 
Video format: (iOS) kCVPixelFormatType_32BGRA, (Android) YUV_420_888.\n\n```dart\nvar recognitions = await Tflite.detectObjectOnFrame(\n  bytesList: img.planes.map((plane) {return plane.bytes;}).toList(),\u002F\u002F required\n  model: \"YOLO\",  \n  imageHeight: img.height,\n  imageWidth: img.width,\n  imageMean: 0.0,       \u002F\u002F defaults to 127.5\n  imageStd: 255.0,      \u002F\u002F defaults to 127.5\n  numResults: 2,        \u002F\u002F defaults to 5\n  threshold: 0.1,       \u002F\u002F defaults to 0.1\n  numResultsPerClass: 2,\u002F\u002F defaults to 5\n  anchors: anchors,     \u002F\u002F defaults to [0.57273,0.677385,1.87446,2.06253,3.33843,5.47434,7.88282,3.52778,9.77052,9.16828]\n  blockSize: 32,        \u002F\u002F defaults to 32\n  numBoxesPerBlock: 5,  \u002F\u002F defaults to 5\n  asynch: true          \u002F\u002F defaults to true\n);\n```\n\n
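As the output format above notes, every `rect` comes back normalized to [0, 1]. A minimal sketch of mapping one detection onto pixel coordinates (`toPixelRect` is a hypothetical helper written for this README, not part of the plugin API; `Rect` comes from `dart:ui`):\n\n```dart\nimport 'dart:ui';\n\n\u002F\u002F Convert one detection's normalized rect ([0, 1] range) into pixel\n\u002F\u002F coordinates for an image of the given size: x and w scale with the\n\u002F\u002F width, y and h with the height.\nRect toPixelRect(Map re, double imageWidth, double imageHeight) {\n  final box = re[\"rect\"];\n  return Rect.fromLTWH(\n    box[\"x\"] * imageWidth,\n    box[\"y\"] * imageHeight,\n    box[\"w\"] * imageWidth,\n    box[\"h\"] * imageHeight,\n  );\n}\n```\n\n### Pix2Pix\n\n> Thanks to [PR](https:\u002F\u002Fgithub.com\u002Fshaqian\u002Fflutter_tflite\u002Fpull\u002F18) from [Green Appers](https:\u002F\u002Fgithub.com\u002FGreenAppers)\n\n- Output format:\n  \n  The output of Pix2Pix inference is Uint8List type. 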
Depending on the `outputType` used, the output is:\n\n  - (if outputType is png) byte array of a png image\n\n  - (otherwise) byte array of r, g, b, a values of the pixels\n\n- Run on image:\n\n```dart\nvar result = await runSegmentationOnImage(\n  path: filepath,     \u002F\u002F required\n  imageMean: 0.0,     \u002F\u002F defaults to 0.0\n  imageStd: 255.0,    \u002F\u002F defaults to 255.0\n  labelColors: [...], \u002F\u002F defaults to https:\u002F\u002Fgithub.com\u002Fshaqian\u002Fflutter_tflite\u002Fblob\u002Fmaster\u002Flib\u002Ftflite.dart#L219\n  outputType: \"png\",  \u002F\u002F defaults to \"png\"\n  asynch: true        \u002F\u002F defaults to true\n);\n```\n\n- Run on binary:\n\n```dart\nvar result = await runSegmentationOnBinary(\n  binary: binary,     \u002F\u002F required\n  labelColors: [...], \u002F\u002F defaults to https:\u002F\u002Fgithub.com\u002Fshaqian\u002Fflutter_tflite\u002Fblob\u002Fmaster\u002Flib\u002Ftflite.dart#L219\n  outputType: \"png\",  \u002F\u002F defaults to \"png\"\n  asynch: true        \u002F\u002F defaults to true\n);\n```\n\n- Run on image stream (video frame):\n\n```dart\nvar result = await runSegmentationOnFrame(\n  bytesList: img.planes.map((plane) {return plane.bytes;}).toList(),\u002F\u002F required\n  imageHeight: img.height, \u002F\u002F defaults to 1280\n  imageWidth: img.width,   \u002F\u002F defaults to 720\n  imageMean: 127.5,        \u002F\u002F defaults to 0.0\n  imageStd: 127.5,         \u002F\u002F defaults to 255.0\n  rotation: 90,            \u002F\u002F defaults to 90, Android only\n  labelColors: [...],      \u002F\u002F defaults to https:\u002F\u002Fgithub.com\u002Fshaqian\u002Fflutter_tflite\u002Fblob\u002Fmaster\u002Flib\u002Ftflite.dart#L219\n  outputType: \"png\",       \u002F\u002F defaults to \"png\"\n  asynch: true             \u002F\u002F defaults to true\n);\n```\n\n
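Because `outputType` defaults to \"png\", the bytes returned by the calls above are already an encoded PNG and can be drawn without touching raw pixels. A minimal sketch (`segmentationOverlay` is a hypothetical helper for this README; `Image.memory` is the standard Flutter widget):\n\n```dart\nimport 'dart:typed_data';\n\nimport 'package:flutter\u002Fmaterial.dart';\n\n\u002F\u002F Render the PNG bytes returned by runSegmentationOnImage (or the\n\u002F\u002F Pix2Pix equivalents). gaplessPlayback keeps the previous frame on\n\u002F\u002F screen while new bytes arrive, which avoids flicker on streams.\nWidget segmentationOverlay(Uint8List pngBytes) {\n  return Image.memory(pngBytes, fit: BoxFit.cover, gaplessPlayback: true);\n}\n```\n\n### PoseNet\n\n> Model is from [StackOverflow thread](https:\u002F\u002Fstackoverflow.com\u002Fa\u002F55288616).\n\n- Output format:\n\n`x, y` are between [0, 1]. 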
You can scale `x` by the width and `y` by the height of the image.\n\n```\n[ \u002F\u002F array of poses\u002Fpersons\n  { \u002F\u002F pose #1\n    score: 0.6324902,\n    keypoints: {\n      0: {\n        x: 0.250,\n        y: 0.125,\n        part: nose,\n        score: 0.9971070\n      },\n      1: {\n        x: 0.230,\n        y: 0.105,\n        part: leftEye,\n        score: 0.9978438\n      }\n      ......\n    }\n  },\n  { \u002F\u002F pose #2\n    score: 0.32534285,\n    keypoints: {\n      0: {\n        x: 0.402,\n        y: 0.538,\n        part: nose,\n        score: 0.8798978\n      },\n      1: {\n        x: 0.380,\n        y: 0.513,\n        part: leftEye,\n        score: 0.7090239\n      }\n      ......\n    }\n  },\n  ......\n]\n```\n\n- Run on image:\n\n```dart\nvar result = await runPoseNetOnImage(\n  path: filepath,     \u002F\u002F required\n  imageMean: 125.0,   \u002F\u002F defaults to 125.0\n  imageStd: 125.0,    \u002F\u002F defaults to 125.0\n  numResults: 2,      \u002F\u002F defaults to 5\n  threshold: 0.7,     \u002F\u002F defaults to 0.5\n  nmsRadius: 10,      \u002F\u002F defaults to 20\n  asynch: true        \u002F\u002F defaults to true\n);\n```\n\n- Run on binary:\n\n```dart\nvar result = await runPoseNetOnBinary(\n  binary: binary,     \u002F\u002F required\n  numResults: 2,      \u002F\u002F defaults to 5\n  threshold: 0.7,     \u002F\u002F defaults to 0.5\n  nmsRadius: 10,      \u002F\u002F defaults to 20\n  asynch: true        \u002F\u002F defaults to true\n);\n```\n\n- Run on image stream (video frame):\n\n```dart\nvar result = await runPoseNetOnFrame(\n  bytesList: img.planes.map((plane) {return plane.bytes;}).toList(),\u002F\u002F required\n  imageHeight: img.height, \u002F\u002F defaults to 1280\n  imageWidth: img.width,   \u002F\u002F defaults to 720\n  imageMean: 125.0,        \u002F\u002F defaults to 125.0\n  imageStd: 125.0,         \u002F\u002F defaults to 125.0\n  rotation: 90,            \u002F\u002F defaults to 90, Android only\n  numResults: 2,           \u002F\u002F defaults to 5\n  threshold: 0.7,          \u002F\u002F defaults to 0.5\n  nmsRadius: 10,           \u002F\u002F defaults to 20\n  asynch: true             \u002F\u002F defaults to true\n);\n```\n\n## Example\n\n### Prediction in Static Images\n\n  Refer to the [example](https:\u002F\u002Fgithub.com\u002Fshaqian\u002Fflutter_tflite\u002Ftree\u002Fmaster\u002Fexample).\n\n### Real-time Detection\n\n  Refer to [flutter_realtime_detection](https:\u002F\u002Fgithub.com\u002Fshaqian\u002Fflutter_realtime_detection).\n\n## Run test cases\n\n`flutter test test\u002Ftflite_test.dart`","# tflite\n\n一个用于访问 TensorFlow Lite API 的 Flutter 插件。支持图像分类、目标检测（[SSD](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Ftree\u002Fmaster\u002Fresearch\u002Fobject_detection) 和 [YOLO](https:\u002F\u002Fpjreddie.com\u002Fdarknet\u002Fyolov2\u002F)）、[Pix2Pix](https:\u002F\u002Fphillipi.github.io\u002Fpix2pix\u002F)、[Deeplab](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Ftree\u002Fmaster\u002Fresearch\u002Fdeeplab) 和 [PoseNet](https:\u002F\u002Fwww.tensorflow.org\u002Flite\u002Fmodels\u002Fpose_estimation\u002Foverview)，适用于 iOS 和 Android 平台。\n\n### 目录\n\n- [安装](#Installation)\n- [使用](#Usage)\n    - [图像分类](#Image-Classification)\n    - [目标检测](#Object-Detection)\n      - [SSD MobileNet](#SSD-MobileNet)\n      - [YOLO](#Tiny-YOLOv2)\n    - [Pix2Pix](#Pix2Pix)\n    - [Deeplab](#Deeplab)\n    - [PoseNet](#PoseNet)\n- [示例](#Example)\n    - [静态图像预测](#Prediction-in-Static-Images)\n    - 
[实时检测](#Real-time-Detection)\n\n### 重大变更\n\n#### 自 1.1.0 版本起：\n\n1. iOS TensorFlow Lite 库已从 TensorFlowLite 1.x 升级到 TensorFlowLiteObjC 2.x。本地代码的变更用 `TFLITE2` 标记。\n\n#### 自 1.0.0 版本起：\n\n1. 更新至 TensorFlow Lite API v1.12.0。\n2. 不再接受 `inputSize` 和 `numChannels` 参数。这些参数将从输入张量中自动获取。\n3. `numThreads` 已移至 `Tflite.loadModel`。\n\n## 安装\n\n在 pubspec.yaml 文件中添加 `tflite` 作为[依赖项](https:\u002F\u002Fflutter.io\u002Fusing-packages\u002F)。\n\n### Android\n\n在 `android\u002Fapp\u002Fbuild.gradle` 的 `android` 块中添加以下配置。\n\n```\n    aaptOptions {\n        noCompress 'tflite'\n        noCompress 'lite'\n    }\n```\n\n### iOS\n\niOS 构建错误的解决方案：\n\n* \"vector\" 文件未找到\n\n  在 Xcode 中打开 `ios\u002FRunner.xcworkspace`，点击 Runner > Targets > Runner > Build Settings，搜索 `Compile Sources As`，将值改为 `Objective-C++`\n\n* \"tensorflow\u002Flite\u002Fkernels\u002Fregister.h\" 文件未找到\n\n  插件假设 tensorflow 头文件位于路径 \"tensorflow\u002Flite\u002Fkernels\"。\n\n  然而，早期版本的 tensorflow 头文件路径是 \"tensorflow\u002Fcontrib\u002Flite\u002Fkernels\"。\n\n  使用 `CONTRIB_PATH` 来切换路径。从此处取消注释 `\u002F\u002F#define CONTRIB_PATH`：\n  https:\u002F\u002Fgithub.com\u002Fshaqian\u002Fflutter_tflite\u002Fblob\u002Fmaster\u002Fios\u002FClasses\u002FTflitePlugin.mm#L1\n\n## 使用\n\n1. 创建 `assets` 文件夹，将标签文件和模型文件放入其中。在 `pubspec.yaml` 中添加：\n\n```\n  assets:\n   - assets\u002Flabels.txt\n   - assets\u002Fmobilenet_v1_1.0_224.tflite\n```\n\n2. 导入库：\n\n```dart\nimport 'package:tflite\u002Ftflite.dart';\n```\n\n3. 加载模型和标签：\n\n```dart\nString res = await Tflite.loadModel(\n  model: \"assets\u002Fmobilenet_v1_1.0_224.tflite\",\n  labels: \"assets\u002Flabels.txt\",\n  numThreads: 1, \u002F\u002F defaults to 1\n  isAsset: true, \u002F\u002F defaults to true, set to false to load resources outside assets\n  useGpuDelegate: false \u002F\u002F defaults to false, set to true to use GPU delegate\n);\n```\n\n4. 请参阅下文相应模型的章节。\n\n5. 
释放资源：\n\n```\nawait Tflite.close();\n```\n\n### GPU 委托\n\n使用 GPU 委托时，请参考[此步骤](https:\u002F\u002Fwww.tensorflow.org\u002Flite\u002Fperformance\u002Fgpu#step_5_release_mode)进行发布模式设置，以获得更好的性能。\n\n### 图像分类\n\n- 输出格式：\n```\n{\n  index: 0,\n  label: \"person\",\n  confidence: 0.629\n}\n```\n\n- 在图像上运行：\n\n```dart\nvar recognitions = await Tflite.runModelOnImage(\n  path: filepath,   \u002F\u002F required\n  imageMean: 0.0,   \u002F\u002F defaults to 117.0\n  imageStd: 255.0,  \u002F\u002F defaults to 1.0\n  numResults: 2,    \u002F\u002F defaults to 5\n  threshold: 0.2,   \u002F\u002F defaults to 0.1\n  asynch: true      \u002F\u002F defaults to true\n);\n```\n\n- 在二进制数据上运行：\n\n```dart\nvar recognitions = await Tflite.runModelOnBinary(\n  binary: imageToByteListFloat32(image, 224, 127.5, 127.5),\u002F\u002F required\n  numResults: 6,    \u002F\u002F defaults to 5\n  threshold: 0.05,  \u002F\u002F defaults to 0.1\n  asynch: true      \u002F\u002F defaults to true\n);\n\nUint8List imageToByteListFloat32(\n    img.Image image, int inputSize, double mean, double std) {\n  var convertedBytes = Float32List(1 * inputSize * inputSize * 3);\n  var buffer = Float32List.view(convertedBytes.buffer);\n  int pixelIndex = 0;\n  for (var i = 0; i \u003C inputSize; i++) {\n    for (var j = 0; j \u003C inputSize; j++) {\n      var pixel = image.getPixel(j, i);\n      buffer[pixelIndex++] = (img.getRed(pixel) - mean) \u002F std;\n      buffer[pixelIndex++] = (img.getGreen(pixel) - mean) \u002F std;\n      buffer[pixelIndex++] = (img.getBlue(pixel) - mean) \u002F std;\n    }\n  }\n  return convertedBytes.buffer.asUint8List();\n}\n\nUint8List imageToByteListUint8(img.Image image, int inputSize) {\n  var convertedBytes = Uint8List(1 * inputSize * inputSize * 3);\n  var buffer = Uint8List.view(convertedBytes.buffer);\n  int pixelIndex = 0;\n  for (var i = 0; i \u003C inputSize; i++) {\n    for (var j = 0; j \u003C inputSize; j++) {\n      var pixel = image.getPixel(j, i);\n      buffer[pixelIndex++] = img.getRed(pixel);\n      buffer[pixelIndex++] = img.getGreen(pixel);\n      buffer[pixelIndex++] = img.getBlue(pixel);\n    }\n  }\n  return convertedBytes.buffer.asUint8List();\n}\n```\n\n- 在图像流（视频帧）上运行：\n\n> 适用于 [camera 插件 4.0.0](https:\u002F\u002Fpub.dartlang.org\u002Fpackages\u002Fcamera)。视频格式：(iOS) kCVPixelFormatType_32BGRA，(Android) YUV_420_888。\n\n```dart\nvar recognitions = await Tflite.runModelOnFrame(\n  bytesList: img.planes.map((plane) {return plane.bytes;}).toList(),\u002F\u002F required\n  imageHeight: img.height,\n  imageWidth: img.width,\n  imageMean: 127.5,   \u002F\u002F defaults to 127.5\n  imageStd: 127.5,    \u002F\u002F defaults to 127.5\n  rotation: 90,       \u002F\u002F defaults to 90, Android only\n  numResults: 2,      \u002F\u002F defaults to 5\n  threshold: 0.1,     \u002F\u002F defaults to 0.1\n  asynch: true        \u002F\u002F defaults to true\n);\n```\n\n### 目标检测\n\n- 输出格式：\n\n`x, y, w, h` 的取值范围为 [0, 1]。你可以将 `x`、`w` 乘以图像宽度，将 `y`、`h` 乘以图像高度来获取实际坐标。\n\n```\n{\n  detectedClass: \"hot dog\",\n  confidenceInClass: 0.123,\n  rect: {\n    x: 0.15,\n    y: 0.33,\n    w: 0.80,\n    h: 0.27\n  }\n}\n```\n\n#### SSD MobileNet：\n\n- 在图像上运行：\n\n```dart\nvar recognitions = await Tflite.detectObjectOnImage(\n  path: filepath,       \u002F\u002F required\n  model: \"SSDMobileNet\",\n  imageMean: 127.5,     \n  imageStd: 127.5,      \n  threshold: 0.4,       \u002F\u002F defaults to 0.1\n  numResultsPerClass: 2,\u002F\u002F defaults to 5\n  asynch: true          \u002F\u002F defaults to 
true\n);\n```\n\n- 在二进制数据上运行：\n\n```dart\nvar recognitions = await Tflite.detectObjectOnBinary(\n  binary: imageToByteListUint8(resizedImage, 300), \u002F\u002F required\n  model: \"SSDMobileNet\",  \n  threshold: 0.4,                                  \u002F\u002F defaults to 0.1\n  numResultsPerClass: 2,                           \u002F\u002F defaults to 5\n  asynch: true                                     \u002F\u002F defaults to true\n);\n```\n\n- 在图像流（视频帧）上运行：\n\n> 适用于 [camera plugin 4.0.0](https:\u002F\u002Fpub.dartlang.org\u002Fpackages\u002Fcamera)。视频格式：(iOS) kCVPixelFormatType_32BGRA，(Android) YUV_420_888。\n\n```dart\nvar recognitions = await Tflite.detectObjectOnFrame(\n  bytesList: img.planes.map((plane) {return plane.bytes;}).toList(),\u002F\u002F required\n  model: \"SSDMobileNet\",  \n  imageHeight: img.height,\n  imageWidth: img.width,\n  imageMean: 127.5,   \u002F\u002F defaults to 127.5\n  imageStd: 127.5,    \u002F\u002F defaults to 127.5\n  rotation: 90,       \u002F\u002F defaults to 90, Android only\n  numResults: 2,      \u002F\u002F defaults to 5\n  threshold: 0.1,     \u002F\u002F defaults to 0.1\n  asynch: true        \u002F\u002F defaults to true\n);\n```\n\n#### Tiny YOLOv2：\n\n- 在图像上运行：\n\n```dart\nvar recognitions = await Tflite.detectObjectOnImage(\n  path: filepath,       \u002F\u002F required\n  model: \"YOLO\",      \n  imageMean: 0.0,       \n  imageStd: 255.0,      \n  threshold: 0.3,       \u002F\u002F defaults to 0.1\n  numResultsPerClass: 2,\u002F\u002F defaults to 5\n  anchors: anchors,     \u002F\u002F defaults to [0.57273,0.677385,1.87446,2.06253,3.33843,5.47434,7.88282,3.52778,9.77052,9.16828]\n  blockSize: 32,        \u002F\u002F defaults to 32\n  numBoxesPerBlock: 5,  \u002F\u002F defaults to 5\n  asynch: true          \u002F\u002F defaults to true\n);\n```\n\n- 在二进制数据上运行：\n\n```dart\nvar recognitions = await Tflite.detectObjectOnBinary(\n  binary: imageToByteListFloat32(resizedImage, 416, 0.0, 255.0), \u002F\u002F required\n  model: \"YOLO\",  \n  threshold: 0.3,       \u002F\u002F defaults to 0.1\n  numResultsPerClass: 2,\u002F\u002F defaults to 5\n  anchors: anchors,     \u002F\u002F defaults to [0.57273,0.677385,1.87446,2.06253,3.33843,5.47434,7.88282,3.52778,9.77052,9.16828]\n  blockSize: 32,        \u002F\u002F defaults to 32\n  numBoxesPerBlock: 5,  \u002F\u002F defaults to 5\n  asynch: true          \u002F\u002F defaults to true\n);\n```\n\n- 在图像流（视频帧）上运行：\n\n> 适用于 [camera plugin 4.0.0](https:\u002F\u002Fpub.dartlang.org\u002Fpackages\u002Fcamera)。视频格式：(iOS) kCVPixelFormatType_32BGRA，(Android) YUV_420_888。\n\n```dart\nvar recognitions = await Tflite.detectObjectOnFrame(\n  bytesList: img.planes.map((plane) {return plane.bytes;}).toList(),\u002F\u002F required\n  model: \"YOLO\",  \n  imageHeight: img.height,\n  imageWidth: img.width,\n  imageMean: 0.0,       \u002F\u002F defaults to 127.5\n  imageStd: 255.0,      \u002F\u002F defaults to 127.5\n  numResults: 2,        \u002F\u002F defaults to 5\n  threshold: 0.1,       \u002F\u002F defaults to 0.1\n  numResultsPerClass: 2,\u002F\u002F defaults to 5\n  anchors: anchors,     \u002F\u002F defaults to [0.57273,0.677385,1.87446,2.06253,3.33843,5.47434,7.88282,3.52778,9.77052,9.16828]\n  blockSize: 32,        \u002F\u002F defaults to 32\n  numBoxesPerBlock: 5,  \u002F\u002F defaults to 5\n  asynch: true          \u002F\u002F defaults to true\n);\n```\n\n
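上文输出格式中提到，`rect` 为归一化坐标，换算成像素只需按宽高各乘一次。以下为最小示意（`toPixelRect` 是本文虚构的辅助函数，并非插件 API；`Rect` 来自 `dart:ui`）：\n\n```dart\nimport 'dart:ui';\n\n\u002F\u002F 将归一化检测框（[0, 1] 区间）换算为像素坐标：\n\u002F\u002F x、w 乘以图像宽度，y、h 乘以图像高度。\nRect toPixelRect(Map re, double imageWidth, double imageHeight) {\n  final box = re[\"rect\"];\n  return Rect.fromLTWH(\n    box[\"x\"] * imageWidth,\n    box[\"y\"] * imageHeight,\n    box[\"w\"] * imageWidth,\n    box[\"h\"] * imageHeight,\n  );\n}\n```\n\n### Pix2Pix\n\n> 感谢来自 [Green Appers](https:\u002F\u002Fgithub.com\u002FGreenAppers) 的 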
[PR](https:\u002F\u002Fgithub.com\u002Fshaqian\u002Fflutter_tflite\u002Fpull\u002F18)\n\n- 输出格式：\n  \n  Pix2Pix 推理的输出是 Uint8List 类型。根据使用的 `outputType` 不同，输出为：\n\n  -（如果 outputType 为 png）png 图像的字节数组\n\n  -（否则）原始输出的字节数组\n\n- 在图像上运行：\n\n```dart\nvar result = await runPix2PixOnImage(\n  path: filepath,       \u002F\u002F required\n  imageMean: 0.0,       \u002F\u002F defaults to 0.0\n  imageStd: 255.0,      \u002F\u002F defaults to 255.0\n  asynch: true      \u002F\u002F defaults to true\n);\n```\n\n- 在二进制数据上运行：\n\n```dart\nvar result = await runPix2PixOnBinary(\n  binary: binary,       \u002F\u002F required\n  asynch: true      \u002F\u002F defaults to true\n);\n```\n\n- 在图像流（视频帧）上运行：\n\n```dart\nvar result = await runPix2PixOnFrame(\n  bytesList: img.planes.map((plane) {return plane.bytes;}).toList(),\u002F\u002F required\n  imageHeight: img.height, \u002F\u002F defaults to 1280\n  imageWidth: img.width,   \u002F\u002F defaults to 720\n  imageMean: 127.5,   \u002F\u002F defaults to 0.0\n  imageStd: 127.5,    \u002F\u002F defaults to 255.0\n  rotation: 90,       \u002F\u002F defaults to 90, Android only\n  asynch: true        \u002F\u002F defaults to true\n);\n```\n\n### Deeplab\n\n> 感谢来自 [see--](https:\u002F\u002Fgithub.com\u002Fsee--) 的 [PR](https:\u002F\u002Fgithub.com\u002Fshaqian\u002Fflutter_tflite\u002Fpull\u002F22) 提供的 Android 实现。\n\n- 输出格式：\n  \n  Deeplab 推理的输出是 Uint8List 类型。根据使用的 `outputType` 不同，输出为：\n\n  -（如果 outputType 为 png）png 图像的字节数组\n\n  -（否则）像素的 r、g、b、a 值的字节数组\n\n- 在图像上运行：\n\n```dart\nvar result = await runSegmentationOnImage(\n  path: filepath,     \u002F\u002F required\n  imageMean: 0.0,     \u002F\u002F defaults to 0.0\n  imageStd: 255.0,    \u002F\u002F defaults to 255.0\n  labelColors: [...], \u002F\u002F defaults to https:\u002F\u002Fgithub.com\u002Fshaqian\u002Fflutter_tflite\u002Fblob\u002Fmaster\u002Flib\u002Ftflite.dart#L219\n  outputType: \"png\",  \u002F\u002F defaults to \"png\"\n  asynch: true        \u002F\u002F defaults to true\n);\n```\n\n- 在二进制数据上运行：\n\n```dart\nvar result = await runSegmentationOnBinary(\n  binary: binary,     \u002F\u002F required\n  labelColors: [...], \u002F\u002F defaults to https:\u002F\u002Fgithub.com\u002Fshaqian\u002Fflutter_tflite\u002Fblob\u002Fmaster\u002Flib\u002Ftflite.dart#L219\n  outputType: \"png\",  \u002F\u002F defaults to \"png\"\n  asynch: true        \u002F\u002F defaults to true\n);\n```\n\n- 在图像流（视频帧）上运行：\n\n```dart\nvar result = await runSegmentationOnFrame(\n  bytesList: img.planes.map((plane) {return plane.bytes;}).toList(),\u002F\u002F required\n  imageHeight: img.height, \u002F\u002F defaults to 1280\n  imageWidth: img.width,   \u002F\u002F defaults to 720\n  imageMean: 127.5,        \u002F\u002F defaults to 0.0\n  imageStd: 127.5,         \u002F\u002F defaults to 255.0\n  rotation: 90,            \u002F\u002F defaults to 90, Android only\n  labelColors: [...],      \u002F\u002F defaults to https:\u002F\u002Fgithub.com\u002Fshaqian\u002Fflutter_tflite\u002Fblob\u002Fmaster\u002Flib\u002Ftflite.dart#L219\n  outputType: \"png\",       \u002F\u002F defaults to \"png\"\n  asynch: true             \u002F\u002F defaults to true\n);\n```\n\n
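由于 `outputType` 默认为 \"png\"，上述调用返回的字节已是编码好的 PNG，可直接绘制而无需处理原始像素。以下为最小示意（`segmentationOverlay` 为本文虚构的辅助函数；`Image.memory` 是 Flutter 标准组件）：\n\n```dart\nimport 'dart:typed_data';\n\nimport 'package:flutter\u002Fmaterial.dart';\n\n\u002F\u002F 渲染 runSegmentationOnImage（或 Pix2Pix 系列调用）返回的 PNG 字节。\n\u002F\u002F gaplessPlayback 在新字节到达前保留上一帧，避免视频流场景下闪烁。\nWidget segmentationOverlay(Uint8List pngBytes) {\n  return Image.memory(pngBytes, fit: BoxFit.cover, gaplessPlayback: true);\n}\n```\n\n### PoseNet\n\n> 模型来源于 [StackOverflow 帖子](https:\u002F\u002Fstackoverflow.com\u002Fa\u002F55288616)。\n\n- 输出格式：\n\n`x, y` 的取值范围为 [0, 1]。您可以将 `x` 乘以图像宽度、`y` 乘以图像高度来进行缩放。\n\n```\n[ \u002F\u002F 姿态\u002F人物的数组\n  { \u002F\u002F 姿态 #1\n    score: 0.6324902,\n    keypoints: {\n      0: {\n        x: 0.250,\n        y: 0.125,\n        part: nose,\n        score: 0.9971070\n      },\n      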
1: {\n        x: 0.230,\n        y: 0.105,\n        part: leftEye,\n        score: 0.9978438\n      }\n      ......\n    }\n  },\n  { \u002F\u002F 姿态 #2\n    score: 0.32534285,\n    keypoints: {\n      0: {\n        x: 0.402,\n        y: 0.538,\n        part: nose,\n        score: 0.8798978\n      },\n      1: {\n        x: 0.380,\n        y: 0.513,\n        part: leftEye,\n        score: 0.7090239\n      }\n      ......\n    }\n  },\n  ......\n]\n```\n\n- 在图像上运行：\n\n```dart\nvar result = await runPoseNetOnImage(\n  path: filepath,     \u002F\u002F required\n  imageMean: 125.0,   \u002F\u002F defaults to 125.0\n  imageStd: 125.0,    \u002F\u002F defaults to 125.0\n  numResults: 2,      \u002F\u002F defaults to 5\n  threshold: 0.7,     \u002F\u002F defaults to 0.5\n  nmsRadius: 10,      \u002F\u002F defaults to 20\n  asynch: true        \u002F\u002F defaults to true\n);\n```\n\n- 在二进制数据上运行：\n\n```dart\nvar result = await runPoseNetOnBinary(\n  binary: binary,     \u002F\u002F required\n  numResults: 2,      \u002F\u002F defaults to 5\n  threshold: 0.7,     \u002F\u002F defaults to 0.5\n  nmsRadius: 10,      \u002F\u002F defaults to 20\n  asynch: true        \u002F\u002F defaults to true\n);\n```\n\n- 在图像流（视频帧）上运行：\n\n```dart\nvar result = await runPoseNetOnFrame(\n  bytesList: img.planes.map((plane) {return plane.bytes;}).toList(),\u002F\u002F required\n  imageHeight: img.height, \u002F\u002F defaults to 1280\n  imageWidth: img.width,   \u002F\u002F defaults to 720\n  imageMean: 125.0,        \u002F\u002F defaults to 125.0\n  imageStd: 125.0,         \u002F\u002F defaults to 125.0\n  rotation: 90,            \u002F\u002F defaults to 90, Android only\n  numResults: 2,           \u002F\u002F defaults to 5\n  threshold: 0.7,          \u002F\u002F defaults to 0.5\n  nmsRadius: 10,           \u002F\u002F defaults to 20\n  asynch: true             \u002F\u002F defaults to true\n);\n```\n\n## 示例\n\n### 静态图像预测\n\n请参考 [示例](https:\u002F\u002Fgithub.com\u002Fshaqian\u002Fflutter_tflite\u002Ftree\u002Fmaster\u002Fexample)。\n\n### 实时检测\n\n请参考 [flutter_realtime_detection](https:\u002F\u002Fgithub.com\u002Fshaqian\u002Fflutter_realtime_detection)。\n\n## 运行测试用例\n\n`flutter test test\u002Ftflite_test.dart`","# flutter_tflite 快速上手指南\n\nflutter_tflite 是一个 Flutter 插件，用于在移动端访问 TensorFlow Lite API，支持图像分类、对象检测、图像分割等功能。\n\n## 环境准备\n\n- Flutter SDK 1.12.0 或更高版本\n- Dart SDK 2.0.0 或更高版本\n- Android SDK（Android 5.0+）\n- Xcode（iOS 开发）\n\n## 安装步骤\n\n### 1. 添加依赖\n\n在项目根目录的 `pubspec.yaml` 文件中添加：\n\n```yaml\ndependencies:\n  flutter:\n    sdk: flutter\n  tflite: ^1.1.2\n```\n\n然后执行：\n\n```bash\nflutter pub get\n```\n\n### 2. Android 配置\n\n在 `android\u002Fapp\u002Fbuild.gradle` 的 `android` 块中添加：\n\n```gradle\nandroid {\n    aaptOptions {\n        noCompress 'tflite'\n        noCompress 'lite'\n    }\n}\n```\n\n### 3. iOS 配置\n\n用 Xcode 打开 `ios\u002FRunner.xcworkspace`，依次点击 Runner > Targets > Runner > Build Settings，搜索 **Compile Sources As**，将值改为 **Objective-C++**。\n\n### 4. 准备模型文件\n\n创建 `assets` 文件夹，放入模型文件和标签文件：\n\n```yaml\n# pubspec.yaml\nflutter:\n  assets:\n    - assets\u002Flabels.txt\n    - assets\u002Fmobilenet_v1_1.0_224.tflite\n```\n\n## 基本使用\n\n### 1. 导入库\n\n```dart\nimport 'package:tflite\u002Ftflite.dart';\n```\n\n### 2. 加载模型\n\n```dart\nString res = await Tflite.loadModel(\n  model: \"assets\u002Fmobilenet_v1_1.0_224.tflite\",\n  labels: \"assets\u002Flabels.txt\",\n  numThreads: 1,\n  isAsset: true,\n  useGpuDelegate: false,\n);\n```\n\n
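`loadModel` 的返回值可用于确认加载是否成功。以下为带假设的最小防御性写法（`safeLoadModel` 是本文虚构的辅助函数；成功标志以插件实际返回值为准）：\n\n```dart\n\u002F\u002F 加载模型并返回是否成功；部分平台在失败时会抛出异常，\n\u002F\u002F 因此用 try\u002Fcatch 兜底。\nFuture\u003Cbool> safeLoadModel() async {\n  try {\n    final res = await Tflite.loadModel(\n      model: \"assets\u002Fmobilenet_v1_1.0_224.tflite\",\n      labels: \"assets\u002Flabels.txt\",\n    );\n    return res == \"success\"; \u002F\u002F 假设：成功时返回 \"success\"\n  } catch (e) {\n    print('模型加载失败: $e');\n    return false;\n  }\n}\n```\n\n### 3. 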
图像分类示例\n\n```dart\nvar recognitions = await Tflite.runModelOnImage(\n  path: filepath,\n  imageMean: 0.0,\n  imageStd: 255.0,\n  numResults: 2,\n  threshold: 0.2,\n  asynch: true,\n);\n```\n\n返回结果格式：\n```dart\n{\n  index: 0,\n  label: \"person\",\n  confidence: 0.629\n}\n```\n\n### 4. 对象检测示例（SSD MobileNet）\n\n```dart\nvar recognitions = await Tflite.detectObjectOnImage(\n  path: filepath,\n  model: \"SSDMobileNet\",\n  imageMean: 127.5,\n  imageStd: 127.5,\n  threshold: 0.4,\n  numResultsPerClass: 2,\n  asynch: true,\n);\n```\n\n返回结果格式：\n```dart\n{\n  detectedClass: \"hot dog\",\n  confidenceInClass: 0.123,\n  rect: {\n    x: 0.15,\n    y: 0.33,\n    w: 0.80,\n    h: 0.27\n  }\n}\n```\n\n### 5. 释放资源\n\n```dart\nawait Tflite.close();\n```\n\n## 快速运行示例\n\n```dart\nimport 'package:flutter\u002Fmaterial.dart';\nimport 'package:tflite\u002Ftflite.dart';\nimport 'package:image_picker\u002Fimage_picker.dart';\n\nvoid main() => runApp(MyApp());\n\nclass MyApp extends StatelessWidget {\n  @override\n  Widget build(BuildContext context) {\n    return MaterialApp(\n      home: Home(),\n    );\n  }\n}\n\nclass Home extends StatefulWidget {\n  @override\n  _HomeState createState() => _HomeState();\n}\n\nclass _HomeState extends State\u003CHome> {\n  String _result = \"\";\n\n  @override\n  void initState() {\n    super.initState();\n    loadModel();\n  }\n\n  Future\u003Cvoid> loadModel() async {\n    await Tflite.loadModel(\n      model: \"assets\u002Fmobilenet_v1_1.0_224.tflite\",\n      labels: \"assets\u002Flabels.txt\",\n    );\n  }\n\n  Future\u003Cvoid> classifyImage() async {\n    final picker = ImagePicker();\n    final image = await picker.pickImage(source: ImageSource.gallery);\n    \n    if (image == null) return;\n\n    var recognitions = await Tflite.runModelOnImage(\n      path: image.path,\n      numResults: 3,\n      threshold: 0.2,\n    );\n\n    setState(() {\n      _result = recognitions.toString();\n    });\n  }\n\n  @override\n  Widget build(BuildContext context) {\n    return Scaffold(\n      appBar: AppBar(title: Text('TFLite Demo')),\n      body: Center(\n        child: Column(\n          mainAxisAlignment: MainAxisAlignment.center,\n          children: [\n            ElevatedButton(\n              onPressed: classifyImage,\n              child: Text('选择图片并分类'),\n            ),\n            SizedBox(height: 20),\n            Text(_result),\n          ],\n        ),\n      ),\n    );\n  }\n}\n```\n\n## 支持的模型类型\n\n| 模型 | 用途 |\n|------|------|\n| MobileNet | 图像分类 |\n| SSD MobileNet | 对象检测 |\n| Tiny YOLOv2 | 对象检测 |\n| Pix2Pix | 图像转换 |\n| Deeplab | 图像分割 |\n| PoseNet | 姿态估计 |","一位社区居民在扔垃圾时不确定手中的塑料瓶属于可回收还是其他垃圾，希望用手机拍照快速识别垃圾类别并获得正确的投放指导。\n\n### 没有 flutter_tflite 时\n\n- 居民只能手动翻阅垃圾分类手册或上网搜索，耗时且容易出错，投放错误还要被罚款\n- 开发团队需要分别编写 iOS 和 Android 原生代码来调用 TensorFlow Lite，跨平台开发成本高\n- 团队缺乏移动端机器学习模型部署经验，模型加载和推理性能优化困难，导致 APP 响应缓慢\n- 模型文件较大，用户每次识别都需要联网下载模型，流量消耗大且识别速度慢\n- 复杂的原生集成导致 APP 包体积增大，安装包超过 100MB，用户下载意愿低\n\n### 使用 flutter_tflite 后\n\n- 居民拍照即可在 1 秒内获得垃圾类别识别结果和投放指导，整个过程不到 3 秒\n- 团队使用 Flutter 统一代码库，一次开发同时支持 iOS 和 Android，开发效率提升近一倍\n- flutter_tflite 封装了 TensorFlow Lite 的复杂接口，模型加载、GPU 加速、线程配置等操作一行代码搞定\n- 模型直接打包在 APP 中，支持离线识别，无需网络连接，识别速度快、用户体验好\n- 通过 GPU 加速和模型优化，APP 包体积控制在 30MB 以内，用户下载率显著提升\n\n居民从\"翻手册找不到、查手机太麻烦\"变成了\"拍照即识别、投放更精准\"，垃圾分类 APP 
的开发也变得更简单高效。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fshaqian_flutter_tflite_a8646d6a.png","shaqian","sha","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fshaqian_499bf088.jpg",null,"x_ash@qq.com","https:\u002F\u002Fmedium.com\u002F@shaqian629","https:\u002F\u002Fgithub.com\u002Fshaqian",[84,88,92,96,100,104],{"name":85,"color":86,"percentage":87},"Objective-C++","#6866fb",37.4,{"name":89,"color":90,"percentage":91},"Java","#b07219",36.4,{"name":93,"color":94,"percentage":95},"Dart","#00B4AB",23.9,{"name":97,"color":98,"percentage":99},"Ruby","#701516",1.6,{"name":101,"color":102,"percentage":103},"Objective-C","#438eff",0.5,{"name":105,"color":106,"percentage":107},"C++","#f34b7d",0.2,641,443,"2026-03-27T08:58:40","MIT","Android, iOS","支持 GPU 加速（可选），具体型号和显存要求未说明","未说明",{"notes":116,"python":117,"dependencies":118},"这是一个 Flutter 插件，用于在移动端（iOS\u002FAndroid）运行 TensorFlow Lite 模型。Android 需在 build.gradle 中配置 aaptOptions；iOS 需将 Compile Sources As 改为 Objective-C++；使用 GPU 加速需参考 TensorFlow 官方文档配置 release mode","非 Python 库（Flutter\u002FDart 插件）",[119,120],"TensorFlow Lite","camera plugin 4.0.0",[13],[123,124,125,126],"flutter","tensorflow","tensorflowlite","dart","2026-03-27T02:49:30.150509","2026-04-06T05:32:19.933483",[130,135,140,145,150,155],{"id":131,"question_zh":132,"answer_zh":133,"source_url":134},944,"如何将数组而不是图片文件输入到 TensorFlow Lite 模型？","可以使用 `runModelOnBinary` 方法直接输入二进制数组。在 0.0.5 版本中添加了这个功能。示例代码参考：https:\u002F\u002Fgithub.com\u002Fshaqian\u002Fflutter_tflite\u002Fblob\u002Fmaster\u002Fexample\u002Flib\u002Fmain.dart#L79-L92。注意：在 iOS 上使用 image package 解码图片会比较慢。","https:\u002F\u002Fgithub.com\u002Fshaqian\u002Fflutter_tflite\u002Fissues\u002F8",{"id":136,"question_zh":137,"answer_zh":138,"source_url":139},945,"遇到 \"Cannot convert between a TensorFlowLite tensor with type UINT8 and a Java object of type [[F\" 错误如何解决？","这是因为模型的量化类型（UINT8）与代码输入的数据类型（FLOAT32）不匹配。解决方案是导出模型时使用 float16 量化配置：\n```python\nconfig = QuantizationConfig.for_float16()\nmodel.export(export_dir='.', tflite_filename='model_fp16.tflite', quantization_config=config)\n```\n或者在代码中将输出从 float 改为 byte 类型：\n```java\n\u002F\u002F 改为 byte\nbyte[][] labelProb = new byte[1][labels.size()];\nfor (int i = 0; i \u003C labels.size(); ++i) {\n    float confidence = (float)labelProb[0][i];\n}\n```","https:\u002F\u002Fgithub.com\u002Fshaqian\u002Fflutter_tflite\u002Fissues\u002F53",{"id":141,"question_zh":142,"answer_zh":143,"source_url":144},946,"遇到 \"Failed to mmap model\" 错误如何解决？","这个错误通常是因为模型文件没有正确放置或 pubspec.yaml 配置不正确。确保：1) 在 pubspec.yaml 中正确配置 assets 文件夹；2) 模型文件放在正确的 assets 目录下；3) 使用正确的路径加载模型。配置示例：\nassets:\n  - assets\u002Fmobilenet_v1_1.0_224.tflite\n  - assets\u002Fmobilenet_v1_1.0_224.txt","https:\u002F\u002Fgithub.com\u002Fshaqian\u002Fflutter_tflite\u002Fissues\u002F13",{"id":146,"question_zh":147,"answer_zh":148,"source_url":149},947,"TensorFlow Lite 张量形状不匹配错误（shape [1, x] 与 shape [1, y, z]）如何解决？","这个错误通常是因为模型输出层配置与代码预期不符。常见原因：1) 模型输出维度与代码中 numResults 参数不匹配；2) 使用了不支持的自定义模型（如带有 Flatten 和 Dense 层的 MobileNetV2）；3) 模型训练时的输出格式问题。建议检查：模型输出形状是否正确、是否使用了兼容的模型结构、numResults 参数是否与模型输出维度匹配。","https:\u002F\u002Fgithub.com\u002Fshaqian\u002Fflutter_tflite\u002Fissues\u002F10",{"id":151,"question_zh":152,"answer_zh":153,"source_url":154},948,"遇到 \"Interpreter busy\" 解释器繁忙错误如何解决？","这个错误通常发生在多次调用模型时没有正确释放资源。可能的解决方案：1) 在使用完模型后调用 Tflite.close() 释放资源；2) 确保每次模型调用前 previous model 已被正确关闭；3) 
避免在短时间内重复加载模型。如果问题持续，可能需要检查代码中模型的生命周期管理。","https:\u002F\u002Fgithub.com\u002Fshaqian\u002Fflutter_tflite\u002Fissues\u002F47",{"id":156,"question_zh":157,"answer_zh":158,"source_url":139},949,"模型输入字节大小不匹配错误（602112 bytes vs 150528 bytes）如何解决？","这个错误表明输入数据的尺寸与模型期望的输入尺寸不一致。需要检查：1) 模型预期的输入尺寸（可以通过模型元数据查看）；2) 代码中实际传入的图片尺寸；3) 确保 resize 后的图片尺寸与模型输入要求完全匹配。例如，如果模型需要 224x224 的输入，确保图片被正确 resize 到这个尺寸。",[]]