[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-tucan9389--awesome-ml-demos-with-ios":3,"tool-tucan9389--awesome-ml-demos-with-ios":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",153609,2,"2026-04-13T11:34:59",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108322,"2026-04-10T11:39:34",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 
都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[52,13,15,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":32,"last_commit_at":59,"category_tags":60,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":72,"owner_avatar_url":73,"owner_bio":74,"owner_company":75,"owner_location":76,"owner_email":77,"owner_twitter":77,"owner_website":78,"owner_url":79,"languages":80,"stars":85,"forks":86,"last_commit_at":87,"license":88,"difficulty_score":32,"env_os":89,"env_gpu":90,"env_ram":91,"env_deps":92,"category_tags":102,"github_topics":103,"view_count":32,"oss_zip_url":77,"oss_zip_packed_at":77,"status":17,"created_at":113,"updated_at":114,"faqs":115,"releases":116},7160,"tucan9389\u002Fawesome-ml-demos-with-ios","awesome-ml-demos-with-ios","The challenge projects for Inferencing machine learning models on iOS","awesome-ml-demos-with-ios 是一个专为 iOS 开发者打造的机器学习实战项目合集，旨在降低在移动端部署和运行 AI 模型的门槛。它集中解决了开发者在 iOS 平台上集成机器学习模型时面临的常见挑战，特别是如何高效利用 Core ML、ML Kit（TensorFlow Lite）等框架进行模型推理。\n\n该资源库通过提供一系列结构清晰的基线项目和完整的应用案例，覆盖了图像分类、物体检测与识别、图像估算及语义分割等核心视觉任务。其独特的技术亮点在于不仅展示了如何调用内置模型，还详细演示了如何将 TensorFlow 等外部训练的自定义模型转换为 iOS 兼容格式，并手动实现必要的前后处理流程。此外，项目还包含了性能测量模块和 Create ML 的实践指南，帮助开发者优化应用表现。\n\n无论是希望快速上手移动端 AI 的初级工程师，还是寻求最佳实践参考的资深研究人员，都能从中获益。通过直观的代码示例和动态演示（GIF），awesome-ml-demos-with-ios 让复杂的模型部署过程变得透明且易于理解，是构建智能 iOS 应用不可或缺的实用指南。","\u003Cp align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ftucan9389_awesome-ml-demos-with-ios_readme_312feae9a00f.png\" width=\"187\" height=\"174\"\u002F>\n\u003C\u002Fp>\n\n\n[![Awesome](https:\u002F\u002Fcdn.rawgit.com\u002Fsindresorhus\u002Fawesome\u002Fd7305f38d29fed78fa85652e3a63e154dd8e8829\u002Fmedia\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fmotlabs\u002Fawesome-ml-demos-with-ios) ![Hits](https:\u002F\u002Fhitcounter.pythonanywhere.com\u002Fcount\u002Ftag.svg?url=https%3A%2F%2Fgithub.com%2Fmotlabs%2Fawesome-ml-demos-with-ios) [![PRs Welcome](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FPRs-welcome-brightgreen.svg?style=flat-square)](http:\u002F\u002Fmakeapullrequest.com) [![GIF PRs More Welcome](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FGIF--PRs-WELCOME!-brightgreen.svg?style=flat-square)](http:\u002F\u002Fmakeapullrequest.com) \n\n> This repo was moved from [@motlabs](https:\u002F\u002Fgithub.com\u002Fmotlabs) group. 
Thanks to [@jwkanggist](https:\u002F\u002Fgithub.com\u002Fjwkanggist), the leader of the motlabs community.\n\n# Awesome Machine Learning DEMOs with iOS\n\nWe tackle the challenge of using machine learning models on iOS via Core ML and ML Kit (TensorFlow Lite).\n\n[한국어 README](https:\u002F\u002Fgithub.com\u002Fmotlabs\u002FiOS-Proejcts-with-ML-Models\u002Fblob\u002Fmaster\u002FREADME_kr.md)\n\n## Contents\n- [Machine Learning Framework for iOS](#machine-learning-framework-for-ios)\n  - [Flow of Model When Using Core ML](#Flow-of-Model-When-Using-Core-ML)\n  - [Flow of Model When Using Create ML](#Flow-of-Model-When-Using-Create-ML)\n- [Baseline Projects](#Baseline-Projects)\n  - [Image Classification](#Image-Classification)\n  - [Object Detection & Recognition](#Object-Detection--Recognition)\n  - [Image Estimation](#Image-Estimation)\n  - [Semantic Segmentation](#Semantic-Segmentation)\n- [Application Projects](#Application-Projects)\n  - [Annotation Tool](#Annotation-Tool)\n- [Create ML Projects](#Create-ML-Projects)\n- [Performance](#Performance)\n  - [📏Measure module](#measure-module)\n  - [Implements](#Implements)\n- [See also](#See-also)\n\n## Machine Learning Framework for iOS\n\n- [Core ML](https:\u002F\u002Fdeveloper.apple.com\u002Fdocumentation\u002Fcoreml)\n- [TensorFlow Lite](https:\u002F\u002Fwww.tensorflow.org\u002Flite)\n- [Pytorch Mobile](https:\u002F\u002Fpytorch.org\u002Fmobile\u002Fhome\u002F)\n- [fritz](https:\u002F\u002Fwww.fritz.ai\u002F)\n- etc. (~~[Tensorflow Mobile](https:\u002F\u002Fwww.tensorflow.org\u002Fmobile\u002F)~~ `DEPRECATED`)\n\n\n### Flow of Model When Using Core ML\n\n[![Flow of Model When Using Core ML](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ftucan9389_awesome-ml-demos-with-ios_readme_d31d7fc20459.png)](https:\u002F\u002Fdocs.google.com\u002Fpresentation\u002Fd\u002F1wA_PAjllpLLcFPuZcERYbQlPe1Ipb-bzIZinZg3zXkg\u002Fedit?usp=sharing)\n\nThe overall flow is very similar for most ML frameworks. Each framework has its own compatible model format. We need to take the model created in TensorFlow and **convert it into the appropriate format for each mobile ML framework**.\n\nOnce the compatible model is prepared, you can run the inference using the ML framework. 
Note that you must perform **pre\u002Fpostprocessing** manually.\n\n> If you want more explanation, check [this slide(Korean)](https:\u002F\u002Fdocs.google.com\u002Fpresentation\u002Fd\u002F1wA_PAjllpLLcFPuZcERYbQlPe1Ipb-bzIZinZg3zXkg\u002Fedit?usp=sharing).\n\n### Flow of Model When Using Create ML\n\n![playground-createml-validation-001](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ftucan9389_awesome-ml-demos-with-ios_readme_ebf2c35d4ebd.png)\n\n## Baseline Projects\n\n#### DONE\n\n- Using built-in model with Core ML\n\n- Using built-in on-device model with ML Kit\n- Using custom model for Vision with Core ML and ML Kit\n- Object Detection with Core ML\n\n#### TODO\n\n- Object Detection with ML Kit\n- Using built-in cloud model on ML Kit\n  - Landmark recognition\n- Using custom model for NLP with Core ML and ML Kit\n- Using custom model for Audio with Core ML and ML Kit\n  - Audio recognition\n  - Speech recognition\n  - TTS\n\n\n\n### Image Classification\n\n| Name | DEMO | Note |\n| ---- | ---- | ---- |\n| [ImageClassification-CoreML](https:\u002F\u002Fgithub.com\u002Ftucan9389\u002FImageClassification-CoreML) | \u003Cp align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ftucan9389_awesome-ml-demos-with-ios_readme_6aaed2314a3a.gif\" width=\"200\"\u002F>\u003C\u002Fp> | - |\n| [MobileNet-MLKit](https:\u002F\u002Fgithub.com\u002Ftucan9389\u002FMobileNet-MLKit) | \u003Cp align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ftucan9389_awesome-ml-demos-with-ios_readme_a418d7e08db9.gif\" width=\"200\"\u002F>\u003C\u002Fp> | - |\n\n### Object Detection & Recognition\n\n| Name | DEMO | Note |\n| ---- | ---- | ---- |\n| [ObjectDetection-CoreML](https:\u002F\u002Fgithub.com\u002Ftucan9389\u002FObjectDetection-CoreML) | \u003Cp align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ftucan9389_awesome-ml-demos-with-ios_readme_351269d65020.gif\" width=\"200\"\u002F>\u003C\u002Fp> | - |\n| [TextDetection-CoreML](https:\u002F\u002Fgithub.com\u002Ftucan9389\u002FTextDetection-CoreML) | \u003Cp align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ftucan9389_awesome-ml-demos-with-ios_readme_25e53582f488.gif\" width=\"200\"\u002F>\u003C\u002Fp> | - |\n| [TextRecognition-MLKit](https:\u002F\u002Fgithub.com\u002Ftucan9389\u002FTextRecognition-MLKit) | \u003Cp align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ftucan9389_awesome-ml-demos-with-ios_readme_e8bc98b45e4e.gif\" width=\"200\"\u002F>\u003C\u002Fp> | - |\n| [FaceDetection-MLKit](https:\u002F\u002Fgithub.com\u002Ftucan9389\u002FFaceDetection-MLKit) | \u003Cp align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ftucan9389_awesome-ml-demos-with-ios_readme_f8165295031e.gif\" width=\"200\"\u002F>\u003C\u002Fp> | - |\n\n### Pose Estimation\n\n| Name | DEMO | Note |\n| ---- | :--- | ---- |\n| [PoseEstimation-CoreML](https:\u002F\u002Fgithub.com\u002Ftucan9389\u002FPoseEstimation-CoreML) | \u003Cp align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ftucan9389_awesome-ml-demos-with-ios_readme_165476bb42dd.gif\" width=\"200\"\u002F>\u003C\u002Fp> | - |\n| [PoseEstimation-TFLiteSwift](https:\u002F\u002Fgithub.com\u002Ftucan9389\u002FPoseEstimation-TFLiteSwift) | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ftucan9389_awesome-ml-demos-with-ios_readme_db85b861d5ac.gif\" width=200px>\u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ftucan9389_awesome-ml-demos-with-ios_readme_754b22a996a7.gif\" width=200px> | -    |\n| [PoseEstimation-MLKit](https:\u002F\u002Fgithub.com\u002Ftucan9389\u002FPoseEstimation-MLKit) | \u003Cp align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ftucan9389_awesome-ml-demos-with-ios_readme_328bb2d09933.gif\" width=\"200\"\u002F>\u003C\u002Fp> | -    |\n| [FingertipEstimation-CoreML](https:\u002F\u002Fgithub.com\u002Ftucan9389\u002FFingertipEstimation-CoreML) | \u003Cp align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ftucan9389_awesome-ml-demos-with-ios_readme_25d123138e51.gif\" width=\"200\"\u002F>\u003C\u002Fp> | - |\n\n### Depth Prediction\n\n|                                                              |                                                              |      |\n| ------------------------------------------------------------ | ------------------------------------------------------------ | ---- |\n| [DepthPrediction-CoreML](https:\u002F\u002Fgithub.com\u002Ftucan9389\u002FDepthPrediction-CoreML) | \u003Cp align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ftucan9389_awesome-ml-demos-with-ios_readme_b93a2b7fa5a5.gif\" width=\"200\"\u002F>\u003C\u002Fp> | -    |\n\n### Semantic Segmentation\n\n| Name | DEMO | Note |\n| ---- | ---- | ---- |\n| [SemanticSegmentation-CoreML](https:\u002F\u002Fgithub.com\u002Ftucan9389\u002FSemanticSegmentation-CoreML) | \u003Cp align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ftucan9389_awesome-ml-demos-with-ios_readme_020420609c24.gif\" width=\"200\"\u002F>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ftucan9389_awesome-ml-demos-with-ios_readme_1eda519f174b.gif\" width=200px>\u003C\u002Fp> | - |\n\n## Application Projects\n\n| Name | DEMO | Note |\n| ---- | ---- | ---- |\n| [dont-be-turtle-ios](https:\u002F\u002Fgithub.com\u002Fmotlabs\u002Fdont-be-turtle-ios) | \u003Cp align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ftucan9389_awesome-ml-demos-with-ios_readme_592f2d0a0565.gif\" width=\"200\"\u002F>\u003C\u002Fp> | - |\n| [WordRecognition-CoreML-MLKit](https:\u002F\u002Fgithub.com\u002Ftucan9389\u002FWordRecognition-CoreML-MLKit)(preparing...) | \u003Cp align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ftucan9389_awesome-ml-demos-with-ios_readme_0ed309619462.gif\" width=\"200\"\u002F>\u003C\u002Fp> | Detect character, find a word what I point and then recognize the word using Core ML and ML Kit. 
|\n\n### Annotation Tool\n\n| Name | DEMO | Note |\n| ---- | ---- | ---- |\n| [KeypointAnnotation](https:\u002F\u002Fgithub.com\u002Ftucan9389\u002FKeypointAnnotation) | \u003Cp align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ftucan9389_awesome-ml-demos-with-ios_readme_3a9de42eb5d0.gif\" width=\"200\"\u002F>\u003C\u002Fp> | Annotation tool for your own custom estimation dataset |\n\n## Create ML Projects\n\n| Name | Create ML DEMO | Core ML DEMO | Note |\n| ------ | ------------------------------------------------------------ | ---------------------------------- | ------ |\n| [SimpleClassification-CreateML-CoreML](https:\u002F\u002Fgithub.com\u002Ftucan9389\u002FSimpleClassification-CreateML-CoreML) | ![IMG_0436](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ftucan9389_awesome-ml-demos-with-ios_readme_bc3ee604b66e.png) | ![IMG_0436](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ftucan9389_awesome-ml-demos-with-ios_readme_9258d2f4b858.png) | A Simple Classification Using Create ML and Core ML |\n\n## Performance\n\nExecution Time = Inference Time + Postprocessing Time\n\n|              (with iPhone X) | Inference Time(ms) | Execution Time(ms) |   FPS   |\n| ---------------------------: | :----------------: | :----------------: | :-----: |\n|   ImageClassification-CoreML |         40         |         40         |   23    |\n|              MobileNet-MLKit |        120         |        130         |    6    |\n|       ObjectDetection-CoreML |  100 ~ 120         |    110 ~ 130       |    5    |\n|         TextDetection-CoreML |         12         |         13         | 30(max) |\n|        TextRecognition-MLKit |       35~200       |       40~200       |  5~20   |\n|        PoseEstimation-CoreML |         51         |         65         |   14    |\n|         PoseEstimation-MLKit |        200         |        217         |    3    |\n|       DepthPrediction-CoreML |        624         |        640         |    1    |\n|    SemanticSegmentation-CoreML |        178         |        509         |    1    |\n| WordRecognition-CoreML-MLKit |         23         |         30         |   14    |\n| FaceDetection-MLKit          |         -          |          -         |   -     |\n\n### 📏Measure module\n\nThe measured latency (inference or execution time) and FPS are shown at the top of the screen.\n\n> If you have a more elegant method for measuring performance, please suggest it in an issue!\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ftucan9389_awesome-ml-demos-with-ios_readme_1fd3fece75cd.jpeg\" width=\"320\"\u002F>\n\n### Implements\n\n|                            | Measure📏 | Unit Test | Bunch Test |\n| -------------------------: | :-------: | :-------: | :--------: |\n| ImageClassification-CoreML |    O      |     X     |     X      |\n|            MobileNet-MLKit |    O      |     X     |     X      |\n|     ObjectDetection-CoreML |    O      |     O     |     X      |\n|       TextDetection-CoreML |    O      |     X     |     X      |\n|      TextRecognition-MLKit |    O      |     X     |     X      |\n|      PoseEstimation-CoreML |    O      |     O     |     X      |\n|       PoseEstimation-MLKit |    O      |     X     |     X      |\n|     DepthPrediction-CoreML |    O      |     X     |     X      |\n|  SemanticSegmentation-CoreML |    O      |     X     |     X      |\n\n## See also\n\n- [Core ML | Apple Developer Documentation](https:\u002F\u002Fdeveloper.apple.com\u002Fdocumentation\u002Fcoreml)\n- [Machine Learning - 
Apple Developer](https:\u002F\u002Fdeveloper.apple.com\u002Fmachine-learning\u002F)\n- [ML Kit - Firebase](https:\u002F\u002Fdevelopers.google.com\u002Fml-kit\u002F)\n- [Apple's Core ML 2 vs. Google's ML Kit: What's the difference?](https:\u002F\u002Fventurebeat.com\u002F2018\u002F06\u002F05\u002Fapples-core-ml-2-vs-googles-ml-kit-whats-the-difference\u002F)\n- [iOS에서 머신러닝 슬라이드 자료](https:\u002F\u002Fdocs.google.com\u002Fpresentation\u002Fd\u002F1wA_PAjllpLLcFPuZcERYbQlPe1Ipb-bzIZinZg3zXkg\u002Fedit?usp=sharing)\n- [MoT Labs Blog](https:\u002F\u002Fmotlabs.github.io\u002F)\n\n### WWDC\n\n#### Core ML\n\n- WWDC2020\n  - [WWDC2020 10152 Session - Use model deployment and security with Core ML](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2020\u002F10152\u002F)\n  - [WWDC2020 10153 Session - Get models on device using Core ML Converters](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2020\u002F10153\u002F)\n  - Vision\n    - [WWDC2020 10673 Session - Explore Computer Vision APIs](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2020\u002F10673\u002F)\n    - [WWDC2020 10099 Session - Explore the Action & Vision app](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2020\u002F10099\u002F)\n    - [WWDC2020 10653 Session - Detect Body and Hand Pose with Vision](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2020\u002F10653\u002F)\n    - [TECH-TALKS 206 Session - QR Code Recognition on iOS 11](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Ftech-talks\u002F206\u002F)\n  - NLP\n    - [WWDC2020 10657 Session - Make apps smarter with Natural Language](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2020\u002F10657\u002F)\n\n- WWDC2019\n  - [WWDC2019 256 Session - Advances in Speech Recognition](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2019\u002F256\u002F)\n  - [WWDC2019 704 Session - Core ML 3 Framework](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2019\u002F704\u002F)\n  - [WWDC2019 228 Session - Creating Great Apps Using Core ML and ARKit](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2019\u002F228\u002F)\n  - [WWDC2019 232 Session - Advances in Natural Language Framework](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2019\u002F232\u002F)\n  - [WWDC2019 222 Session - Understanding Images in Vision Framework](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2019\u002F222\u002F)\n  - [WWDC2019 234 Session - Text Recognition in Vision Framework](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2019\u002F234\u002F)\n- WWDC2018\n  - [WWDC2018 708 Session - What’s New in Core ML, Part 1](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2018\u002F708\u002F)\n  - [WWDC2018 716 Session - Object Tracking in Vision](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2018\u002F716\u002F)\n  - [WWDC2018 717 Session - Vision with Core ML](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2018\u002F717\u002F)\n  - [WWDC2018 709 Session - What’s New in Core ML, Part 2](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2018\u002F709\u002F)\n  - [WWDC2018 713 Session - Introducing Natural Language Framework](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2018\u002F713\u002F)\n- WWDC2017\n  - [WWDC2017 710 Session - Core ML in 
depth](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2017\u002F710\u002F)\n  - [WWDC2017 208 Session - Natural Language Processing and your Apps](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2017\u002F208\u002F)\n  - [WWDC2017 510 Session - Advances in Core Image: Filters, Metal, Vision, and More](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2017\u002F510\u002F)\n  - [WWDC2017 506 Session - Vision Framework: Building on Core ML](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2017\u002F506\u002F)\n  - [WWDC2017 703 Session - Introducing Core ML](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2017\u002F703\u002F)\n\n#### Create ML and Turi Create\n\n- WWDC2020\n  - [WWDC2020 10642 Session - Build Image and Video Style Transfer models in Create ML](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2020\u002F10642\u002F)\n  - [WWDC2020 10156 Session - Control training in Create ML with Swift](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2020\u002F10156\u002F)\n  - [WWDC2020 10043 Session - Build an Action Classifier with Create ML](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2020\u002F10043\u002F)\n- WWDC2019\n  - [WWDC2019 424 Session - Training Object Detection Models in Create ML](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2019\u002F424\u002F)\n  - [WWDC2019 426 Session - Building Activity Classification Models in Create ML](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2019\u002F426\u002F)\n  - [WWDC2019 420 Session - Drawing Classification and One-Shot Object Detection in Turi Create](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2019\u002F420\u002F)\n  - [WWDC2019 425 Session - Training Sound Classification Models in Create ML](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2019\u002F425\u002F)\n  - [WWDC2019 428 Session - Training Text Classifiers in Create ML](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2019\u002F428\u002F)\n  - [WWDC2019 427 Session - Training Recommendation Models in Create ML](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2019\u002F427\u002F)\n  - [WWDC2019 430 Session - Introducing the Create ML App](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2019\u002F430\u002F)\n- WWDC2018\n  - [WWDC2018 712 Session - A Guide to Turi Create](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2018\u002F712\u002F)\n  - [WWDC2018 703 Session - Introducing Create ML](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2018\u002F703\u002F)\n\n#### Common ML\n\n- WWDC2020\n  - [WWDC2020 10677 Session - Build customized ML models with the Metal Performance Shaders Graph](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2020\u002F10677\u002F)\n- WWDC2019\n  - [WWDC2019 803 Session - Designing Great ML Experiences](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2019\u002F803\u002F)\n  - [WWDC2019 614 Session - Metal for Machine Learning](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2019\u002F614\u002F)\n  - [WWDC2019 209 Session - What's New in Machine Learning](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2019\u002F209\u002F)\n- WWDC2018\n  - [WWDC2018 609 Session - Metal for Accelerating Machine 
Learning](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2018\u002F609\u002F)\n- WWDC2016\n  - [WWDC2016 715 Session - Neural Networks and Accelerate](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2016\u002F715\u002F)\n  - [WWDC2016 605 Session - What's New in Metal, Part 2](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2016\u002F605\u002F)  \n\n### Metal\n\n- WWDC2020\n  - [WWDC2020 10632 Session - Optimize Metal Performance for Apple Silicon Macs](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2020\u002F10632\u002F)\n  - [WWDC2020 10603 Session - Optimize Metal apps and games with GPU counters](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2020\u002F10603\u002F)\n  - [TECH-TALKS 606 Session - Metal 2 on A11 - Imageblock Sample Coverage Control](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Ftech-talks\u002F606\u002F)\n  - [TECH-TALKS 603 Session - Metal 2 on A11 - Imageblocks](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Ftech-talks\u002F603\u002F)\n  - [TECH-TALKS 602 Session - Metal 2 on A11 - Overview](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Ftech-talks\u002F602\u002F)\n  - [TECH-TALKS 605 Session - Metal 2 on A11 - Raster Order Groups](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Ftech-talks\u002F605\u002F)\n  - [TECH-TALKS 604 Session - Metal 2 on A11 - Tile Shading](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Ftech-talks\u002F604\u002F)\n  - [TECH-TALKS 608 Session - Metal Enhancements for A13 Bionic](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Ftech-talks\u002F608\u002F)\n  - [WWDC2020 10631 Session - Bring your Metal app to Apple Silicon Macs](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2020\u002F10631\u002F)\n  - [WWDC2020 10197 Session - Broaden your reach with Siri Event Suggestions](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2020\u002F10197\u002F)\n  - [WWDC2020 10615 Session - Build GPU binaries with Metal](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2020\u002F10615\u002F)\n  - [WWDC2020 10021 Session - Build Metal-based Core Image kernels with Xcode](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2020\u002F10021\u002F)\n  - [WWDC2020 10616 Session - Debug GPU-side errors in Metal](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2020\u002F10616\u002F)\n  - [WWDC2020 10012 Session - Discover ray tracing with Metal](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2020\u002F10012\u002F)\n  - [WWDC2020 10013 Session - Get to know Metal function pointers](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2020\u002F10013\u002F)\n  - [WWDC2020 10605 Session - Gain insights into your Metal app with Xcode 12](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2020\u002F10605\u002F)\n  - [WWDC2020 10602 Session - Harness Apple GPUs with Metal](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2020\u002F10602\u002F)\n\n### AR\n\n- WWDC2020\n  - [TECH-TALKS 609 Session - Advanced Scene Understanding in AR](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Ftech-talks\u002F609\u002F)\n  - [TECH-TALKS 601 Session - Face Tracking with ARKit](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Ftech-talks\u002F601\u002F)\n  - [WWDC2020 10611 
Session - Explore ARKit 4](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2020\u002F10611\u002F)\n  - [WWDC2020 10604 Session - Shop online with AR Quick Look](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2020\u002F10604\u002F)\n  - [WWDC2020 10601 Session - The artist’s AR toolkit](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2020\u002F10601\u002F)\n  - [WWDC2020 10613 Session - What's new in USD](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2020\u002F10613\u002F)\n\n### Examples\n\n- Training\n  - Keras examples: https:\u002F\u002Fkeras.io\u002Fexamples\u002F\n  - Pytorch examples: https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fexamples\n- Inference\n  - TFLite examples: https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fexamples\u002Ftree\u002Fmaster\u002Flite\n  - Pytorch Mobile iOS example: https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fios-demo-app\n  - FritzLabs examples: https:\u002F\u002Fgithub.com\u002Ffritzlabs\u002Ffritz-examples\n- Models\n  - TensorFlow & TFLite models: https:\u002F\u002Ftfhub.dev\u002F\n  - Pytorch models: https:\u002F\u002Fpytorch.org\u002Fhub\u002F\n  - CoreML official models: https:\u002F\u002Fdeveloper.apple.com\u002Fmachine-learning\u002Fmodels\u002F\n","\u003Cp align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ftucan9389_awesome-ml-demos-with-ios_readme_312feae9a00f.png\" width=\"187\" height=\"174\"\u002F>\n\u003C\u002Fp>\n\n\n[![Awesome](https:\u002F\u002Fcdn.rawgit.com\u002Fsindresorhus\u002Fawesome\u002Fd7305f38d29fed78fa85652e3a63e154dd8e8829\u002Fmedia\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fmotlabs\u002Fawesome-ml-demos-with-ios) ![Hits](https:\u002F\u002Fhitcounter.pythonanywhere.com\u002Fcount\u002Ftag.svg?url=https%3A%2F%2Fgithub.com%2Fmotlabs%2Fawesome-ml-demos-with-ios) [![PRs Welcome](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FPRs-welcome-brightgreen.svg?style=flat-square)](http:\u002F\u002Fmakeapullrequest.com) [![GIF PRs More Welcome](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FGIF--PRs-WELCOME!-brightgreen.svg?style=flat-square)](http:\u002F\u002Fmakeapullrequest.com) \n\n> 此仓库已从[@motlabs](https:\u002F\u002Fgithub.com\u002Fmotlabs)组织迁移而来。感谢[@jwkanggist](https:\u002F\u002Fgithub.com\u002Fjwkanggist)，他是motlabs社区的领导者。\n\n# 适用于 iOS 的优秀机器学习 DEMO\n\n我们通过 Core ML 和 ML Kit（TensorFlow Lite）来解决在 iOS 上使用机器学习模型的挑战。\n\n[韩语 README](https:\u002F\u002Fgithub.com\u002Fmotlabs\u002FiOS-Proejcts-with-ML-Models\u002Fblob\u002Fmaster\u002FREADME_kr.md)\n\n## 目录\n- [iOS 机器学习框架](#machine-learning-framework-for-ios)\n  - [使用 Core ML 时的模型流程](#Flow-of-Model-When-Using-Core-ML)\n  - [使用 Create ML 时的模型流程](#Flow-of-Model-When-Using-Create-ML)\n- [基准项目](#Baseline-Projects)\n  - [图像分类](#Image-Classification)\n  - [目标检测与识别](#Object-Detection--Recognition)\n  - [图像估计](#Image-Estimation)\n  - [语义分割](#Semantic-Segmentation)\n- [应用项目](#Application-Projects)\n  - [标注工具](#Annotation-Tool)\n- [Create ML 项目](#Create-ML-Projects)\n- [性能](#Performance)\n  - [📏测量模块](#measure-module)\n  - [实现](#Implements)\n- [相关资源](#See-also)\n\n## iOS 机器学习框架\n\n- [Core ML](https:\u002F\u002Fdeveloper.apple.com\u002Fdocumentation\u002Fcoreml)\n- [TensorFlow Lite](https:\u002F\u002Fwww.tensorflow.org\u002Flite)\n- [Pytorch Mobile](https:\u002F\u002Fpytorch.org\u002Fmobile\u002Fhome\u002F)\n- [fritz](https:\u002F\u002Fwww.fritz.ai\u002F)\n- 等等。~~[Tensorflow Mobile](https:\u002F\u002Fwww.tensorflow.org\u002Fmobile\u002F)~~`已弃用`\n\n\n### 使用 Core ML 
时的模型流程\n\n[![使用 Core ML 时的模型流程](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ftucan9389_awesome-ml-demos-with-ios_readme_d31d7fc20459.png)](https:\u002F\u002Fdocs.google.com\u002Fpresentation\u002Fd\u002F1wA_PAjllpLLcFPuZcERYbQlPe1Ipb-bzIZinZg3zXkg\u002Fedit?usp=sharing)\n\n总体流程在大多数机器学习框架中都非常相似。每个框架都有其兼容的模型格式。我们需要将 TensorFlow 中创建的模型，**转换为适合各个移动端机器学习框架的格式**。\n\n一旦准备好了兼容的模型，就可以使用相应的机器学习框架进行推理。需要注意的是，你必须手动执行**预处理和后处理**。\n\n> 如果你想了解更多说明，请查看[这张幻灯片（韩语）](https:\u002F\u002Fdocs.google.com\u002Fpresentation\u002Fd\u002F1wA_PAjllpLLcFPuZcERYbQlPe1Ipb-bzIZinZg3zXkg\u002Fedit?usp=sharing)。\n\n### 使用 Create ML 时的模型流程\n\n![playground-createml-validation-001](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ftucan9389_awesome-ml-demos-with-ios_readme_ebf2c35d4ebd.png)\n\n## 基准项目\n\n#### 已完成\n\n- 使用 Core ML 内置模型\n- 使用 ML Kit 内置设备端模型\n- 使用自定义模型进行视觉任务（Core ML 和 ML Kit）\n- 使用 Core ML 进行目标检测\n\n#### 待完成\n\n- 使用 ML Kit 进行目标检测\n- 使用 ML Kit 内置云端模型\n  - 地标识别\n- 使用 Core ML 和 ML Kit 进行自然语言处理的自定义模型\n- 使用 Core ML 和 ML Kit 进行音频处理的自定义模型\n  - 音频识别\n  - 语音识别\n  - TTS\n\n\n\n### 图像分类\n\n| 名称 | 演示 | 备注 |\n| ---- | ---- | ---- |\n| [ImageClassification-CoreML](https:\u002F\u002Fgithub.com\u002Ftucan9389\u002FImageClassification-CoreML) | \u003Cp align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ftucan9389_awesome-ml-demos-with-ios_readme_6aaed2314a3a.gif\" width=\"200\"\u002F>\u003C\u002Fp> | - |\n| [MobileNet-MLKit](https:\u002F\u002Fgithub.com\u002Ftucan9389\u002FMobileNet-MLKit) | \u003Cp align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ftucan9389_awesome-ml-demos-with-ios_readme_a418d7e08db9.gif\" width=\"200\"\u002F>\u003C\u002Fp> | - |\n\n### 目标检测与识别\n\n| 名称 | 演示 | 备注 |\n| ---- | ---- | ---- |\n| [ObjectDetection-CoreML](https:\u002F\u002Fgithub.com\u002Ftucan9389\u002FObjectDetection-CoreML) | \u003Cp align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ftucan9389_awesome-ml-demos-with-ios_readme_351269d65020.gif\" width=\"200\"\u002F>\u003C\u002Fp> | - |\n| [TextDetection-CoreML](https:\u002F\u002Fgithub.com\u002Ftucan9389\u002FTextDetection-CoreML) | \u003Cp align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ftucan9389_awesome-ml-demos-with-ios_readme_25e53582f488.gif\" width=\"200\"\u002F>\u003C\u002Fp> | - |\n| [TextRecognition-MLKit](https:\u002F\u002Fgithub.com\u002Ftucan9389\u002FTextRecognition-MLKit) | \u003Cp align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ftucan9389_awesome-ml-demos-with-ios_readme_e8bc98b45e4e.gif\" width=\"200\"\u002F>\u003C\u002Fp> | - |\n| [FaceDetection-MLKit](https:\u002F\u002Fgithub.com\u002Ftucan9389\u002FFaceDetection-MLKit) | \u003Cp align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ftucan9389_awesome-ml-demos-with-ios_readme_f8165295031e.gif\" width=\"200\"\u002F>\u003C\u002Fp> | - |\n\n### 姿态估计\n\n| 名称 | 演示 | 备注 |\n| ---- | :--- | ---- |\n| [PoseEstimation-CoreML](https:\u002F\u002Fgithub.com\u002Ftucan9389\u002FPoseEstimation-CoreML) | \u003Cp align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ftucan9389_awesome-ml-demos-with-ios_readme_165476bb42dd.gif\" width=\"200\"\u002F>\u003C\u002Fp> | - |\n| [PoseEstimation-TFLiteSwift](https:\u002F\u002Fgithub.com\u002Ftucan9389\u002FPoseEstimation-TFLiteSwift) | \u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ftucan9389_awesome-ml-demos-with-ios_readme_db85b861d5ac.gif\" width=200px>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ftucan9389_awesome-ml-demos-with-ios_readme_754b22a996a7.gif\" width=200px> | -    |\n| [PoseEstimation-MLKit](https:\u002F\u002Fgithub.com\u002Ftucan9389\u002FPoseEstimation-MLKit) | \u003Cp align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ftucan9389_awesome-ml-demos-with-ios_readme_328bb2d09933.gif\" width=\"200\"\u002F>\u003C\u002Fp> | -    |\n| [FingertipEstimation-CoreML](https:\u002F\u002Fgithub.com\u002Ftucan9389\u002FFingertipEstimation-CoreML) | \u003Cp align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ftucan9389_awesome-ml-demos-with-ios_readme_25d123138e51.gif\" width=\"200\"\u002F>\u003C\u002Fp> | - |\n\n### 深度预测\n\n|                                                              |                                                              |      |\n| ------------------------------------------------------------ | ------------------------------------------------------------ | ---- |\n| [DepthPrediction-CoreML](https:\u002F\u002Fgithub.com\u002Ftucan9389\u002FDepthPrediction-CoreML) | \u003Cp align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ftucan9389_awesome-ml-demos-with-ios_readme_b93a2b7fa5a5.gif\" width=\"200\"\u002F>\u003C\u002Fp> | -    |\n\n### 语义分割\n\n| 名称 | 演示 | 备注 |\n| ---- | ---- | ---- |\n| [SemanticSegmentation-CoreML](https:\u002F\u002Fgithub.com\u002Ftucan9389\u002FSemanticSegmentation-CoreML) | \u003Cp align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ftucan9389_awesome-ml-demos-with-ios_readme_020420609c24.gif\" width=\"200\"\u002F>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ftucan9389_awesome-ml-demos-with-ios_readme_1eda519f174b.gif\" width=200px>\u003C\u002Fp> | - |\n\n## 应用项目\n\n| 名称 | 演示 | 备注 |\n| ---- | ---- | ---- |\n| [dont-be-turtle-ios](https:\u002F\u002Fgithub.com\u002Fmotlabs\u002Fdont-be-turtle-ios) | \u003Cp align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ftucan9389_awesome-ml-demos-with-ios_readme_592f2d0a0565.gif\" width=\"200\"\u002F>\u003C\u002Fp> | - |\n| [WordRecognition-CoreML-MLKit](https:\u002F\u002Fgithub.com\u002Ftucan9389\u002FWordRecognition-CoreML-MLKit)(准备中...) 
| \u003Cp align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ftucan9389_awesome-ml-demos-with-ios_readme_0ed309619462.gif\" width=\"200\"\u002F>\u003C\u002Fp> | 检测字符，找到我指向的单词，然后使用 Core ML 和 ML Kit 识别该单词。 |\n\n### 标注工具\n\n| 名称 | 演示 | 备注 |\n| ---- | ---- | ---- |\n| [KeypointAnnotation](https:\u002F\u002Fgithub.com\u002Ftucan9389\u002FKeypointAnnotation) | \u003Cp align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ftucan9389_awesome-ml-demos-with-ios_readme_3a9de42eb5d0.gif\" width=\"200\"\u002F>\u003C\u002Fp> | 用于自定义估计数据集的标注工具 |\n\n## Create ML 项目\n\n| 名称 | Create ML 演示 | Core ML 演示 | 备注 |\n| ------ | ------------------------------------------------------------ | ---------------------------------- | ------ |\n| [SimpleClassification-CreateML-CoreML](https:\u002F\u002Fgithub.com\u002Ftucan9389\u002FSimpleClassification-CreateML-CoreML) | ![IMG_0436](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ftucan9389_awesome-ml-demos-with-ios_readme_bc3ee604b66e.png) | ![IMG_0436](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ftucan9389_awesome-ml-demos-with-ios_readme_9258d2f4b858.png) | 使用 Create ML 和 Core ML 的简单分类 |\n\n## 性能\n\n执行时间：推理时间 + 后处理时间\n\n|              (使用 iPhone X) | 推理时间(ms) | 执行时间(ms) |   FPS   |\n| ---------------------------: | :----------------: | :----------------: | :-----: |\n|   ImageClassification-CoreML |         40         |         40         |   23    |\n|              MobileNet-MLKit |        120         |        130         |    6    |\n|       ObjectDetection-CoreML |  100 ~ 120         |    110 ~ 130       |    5    |\n|         TextDetection-CoreML |         12         |         13         | 30(最大) |\n|        TextRecognition-MLKit |       35~200       |       40~200       |  5~20   |\n|        PoseEstimation-CoreML |         51         |         65         |   14    |\n|         PoseEstimation-MLKit |        200         |        217         |    3    |\n|       DepthPrediction-CoreML |        624         |        640         |    1    |\n|    SemanticSegmentation-CoreML |        178         |        509         |    1    |\n| WordRecognition-CoreML-MLKit |         23         |         30         |   14    |\n| FaceDetection-MLKit          |         -          |          -         |   -     |\n\n### 📏测量模块\n\n您可以在屏幕顶部看到推理或执行的测量延迟时间以及 FPS。\n\n> 如果您有更优雅的性能测量方法，请在 issue 中提出建议！\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ftucan9389_awesome-ml-demos-with-ios_readme_1fd3fece75cd.jpeg\" width=\"320\"\u002F>\n\n### 实现情况\n\n|                            | 测量📏 | 单元测试 | 集成测试 |\n| -------------------------: | :-------: | :-------: | :--------: |\n| ImageClassification-CoreML |    O      |     X     |     X      |\n|            MobileNet-MLKit |    O      |     X     |     X      |\n|     ObjectDetection-CoreML |    O      |     O     |     X      |\n|       TextDetection-CoreML |    O      |     X     |     X      |\n|      TextRecognition-MLKit |    O      |     X     |     X      |\n|      PoseEstimation-CoreML |    O      |     O     |     X      |\n|       PoseEstimation-MLKit |    O      |     X     |     X      |\n|     DepthPrediction-CoreML |    O      |     X     |     X      |\n|  SemanticSegmentation-CoreML |    O      |     X     |     X      |\n\n## 参见\n\n- [Core ML | 苹果开发者文档](https:\u002F\u002Fdeveloper.apple.com\u002Fdocumentation\u002Fcoreml)\n- [机器学习 - 苹果开发者](https:\u002F\u002Fdeveloper.apple.com\u002Fmachine-learning\u002F)\n- [ML Kit - 
Firebase](https:\u002F\u002Fdevelopers.google.com\u002Fml-kit\u002F)\n- [苹果的 Core ML 2 对比谷歌的 ML Kit：有什么区别？](https:\u002F\u002Fventurebeat.com\u002F2018\u002F06\u002F05\u002Fapples-core-ml-2-vs-googles-ml-kit-whats-the-difference\u002F)\n- [iOS 中的机器学习幻灯片资料](https:\u002F\u002Fdocs.google.com\u002Fpresentation\u002Fd\u002F1wA_PAjllpLLcFPuZcERYbQlPe1Ipb-bzIZinZg3zXkg\u002Fedit?usp=sharing)\n- [MoT Labs 博客](https:\u002F\u002Fmotlabs.github.io\u002F)\n\n### WWDC\n\n#### Core ML\n\n- WWDC2020\n  - [WWDC2020 10152 会话 - 使用 Core ML 进行模型部署和安全](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2020\u002F10152\u002F)\n  - [WWDC2020 10153 会话 - 使用 Core ML 转换器将模型部署到设备上](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2020\u002F10153\u002F)\n  - Vision\n    - [WWDC2020 10673 会话 - 探索计算机视觉 API](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2020\u002F10673\u002F)\n    - [WWDC2020 10099 会话 - 探索 Action & Vision 应用](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2020\u002F10099\u002F)\n    - [WWDC2020 10653 会话 - 使用 Vision 检测身体和手部姿态](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2020\u002F10653\u002F)\n    - [TECH-TALKS 206 会话 - iOS 11 上的 QR 码识别](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Ftech-talks\u002F206\u002F)\n  - NLP\n    - [WWDC2020 10657 会话 - 使用自然语言使应用更智能](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2020\u002F10657\u002F)\n\n- WWDC2019\n  - [WWDC2019 256 会话 - 语音识别的进展](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2019\u002F256\u002F)\n  - [WWDC2019 704 会话 - Core ML 3 框架](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2019\u002F704\u002F)\n  - [WWDC2019 228 会话 - 使用 Core ML 和 ARKit 打造优秀应用](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2019\u002F228\u002F)\n  - [WWDC2019 232 会话 - 自然语言框架的进展](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2019\u002F232\u002F)\n  - [WWDC2019 222 会话 - 理解 Vision 框架中的图像](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2019\u002F222\u002F)\n  - [WWDC2019 234 会话 - Vision 框架中的文本识别](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2019\u002F234\u002F)\n\n- WWDC2018\n  - [WWDC2018 708 会话 - Core ML 新功能，第一部分](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2018\u002F708\u002F)\n  - [WWDC2018 716 会话 - Vision 中的对象跟踪](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2018\u002F716\u002F)\n  - [WWDC2018 717 会话 - 使用 Core ML 的 Vision](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2018\u002F717\u002F)\n  - [WWDC2018 709 会话 - Core ML 新功能，第二部分](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2018\u002F709\u002F)\n  - [WWDC2018 713 会话 - 引入自然语言框架](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2018\u002F713\u002F)\n\n- WWDC2017\n  - [WWDC2017 710 会话 - 深入了解 Core ML](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2017\u002F710\u002F)\n  - [WWDC2017 208 会话 - 自然语言处理与你的应用](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2017\u002F208\u002F)\n  - [WWDC2017 510 会话 - Core Image 的进展：滤镜、Metal、Vision 等](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2017\u002F510\u002F)\n  - [WWDC2017 506 会话 - Vision 框架：基于 Core ML 构建](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2017\u002F506\u002F)\n  - [WWDC2017 703 会话 - 引入 Core 
ML](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2017\u002F703\u002F)\n\n#### Create ML 和 Turi Create\n\n- WWDC2020\n  - [WWDC2020 10642 会话 - 在 Create ML 中构建图像和视频风格迁移模型](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2020\u002F10642\u002F)\n  - [WWDC2020 10156 会话 - 使用 Swift 在 Create ML 中控制训练](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2020\u002F10156\u002F)\n  - [WWDC2020 10043 会话 - 使用 Create ML 构建动作分类器](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2020\u002F10043\u002F)\n\n- WWDC2019\n  - [WWDC2019 424 会话 - 在 Create ML 中训练目标检测模型](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2019\u002F424\u002F)\n  - [WWDC2019 426 会话 - 在 Create ML 中构建活动分类模型](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2019\u002F426\u002F)\n  - [WWDC2019 420 会话 - Turi Create 中的绘画分类和单样本目标检测](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2019\u002F420\u002F)\n  - [WWDC2019 425 会话 - 在 Create ML 中训练声音分类模型](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2019\u002F425\u002F)\n  - [WWDC2019 428 会话 - 在 Create ML 中训练文本分类器](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2019\u002F428\u002F)\n  - [WWDC2019 427 会话 - 在 Create ML 中训练推荐模型](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2019\u002F427\u002F)\n  - [WWDC2019 430 会话 - 引入 Create ML 应用](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2019\u002F430\u002F)\n\n- WWDC2018\n  - [WWDC2018 712 会话 - Turi Create 使用指南](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2018\u002F712\u002F)\n  - [WWDC2018 703 会话 - 引入 Create ML](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2018\u002F703\u002F)\n\n#### Common ML\n\n- WWDC2020\n  - [WWDC2020 10677 会话 - 使用 Metal Performance Shaders Graph 构建自定义 ML 模型](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2020\u002F10677\u002F)\n\n- WWDC2019\n  - [WWDC2019 803 会话 - 设计出色的 ML 体验](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2019\u002F803\u002F)\n  - [WWDC2019 614 会话 - Metal 用于机器学习](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2019\u002F614\u002F)\n  - [WWDC2019 209 会话 - 机器学习的新进展](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2019\u002F209\u002F)\n\n- WWDC2018\n  - [WWDC2018 609 会话 - Metal 用于加速机器学习](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2018\u002F609\u002F)\n\n- WWDC2016\n  - [WWDC2016 715 会话 - 神经网络与 Accelerate](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2016\u002F715\u002F)\n  - [WWDC2016 605 会话 - Metal 新功能，第二部分](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2016\u002F605\u002F)\n\n### Metal\n\n- WWDC2020\n  - [WWDC2020 10632 讲座 - 为 Apple Silicon Macs 优化 Metal 性能](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2020\u002F10632\u002F)\n  - [WWDC2020 10603 讲座 - 使用 GPU 计数器优化 Metal 应用和游戏](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2020\u002F10603\u002F)\n  - [TECH-TALKS 606 讲座 - A11 上的 Metal 2 - Imageblock 样本覆盖率控制](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Ftech-talks\u002F606\u002F)\n  - [TECH-TALKS 603 讲座 - A11 上的 Metal 2 - Imageblocks](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Ftech-talks\u002F603\u002F)\n  - [TECH-TALKS 602 讲座 - A11 上的 Metal 2 - 
概述](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Ftech-talks\u002F602\u002F)\n  - [TECH-TALKS 605 讲座 - A11 上的 Metal 2 - 光栅化顺序组](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Ftech-talks\u002F605\u002F)\n  - [TECH-TALKS 604 讲座 - A11 上的 Metal 2 - 图块着色](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Ftech-talks\u002F604\u002F)\n  - [TECH-TALKS 608 讲座 - 针对 A13 仿生芯片的 Metal 增强功能](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Ftech-talks\u002F608\u002F)\n  - [WWDC2020 10631 讲座 - 将您的 Metal 应用带到 Apple Silicon Macs](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2020\u002F10631\u002F)\n  - [WWDC2020 10197 讲座 - 通过 Siri 事件建议扩大您的覆盖范围](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2020\u002F10197\u002F)\n  - [WWDC2020 10615 讲座 - 使用 Metal 构建 GPU 二进制文件](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2020\u002F10615\u002F)\n  - [WWDC2020 10021 讲座 - 使用 Xcode 构建基于 Metal 的 Core Image 内核](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2020\u002F10021\u002F)\n  - [WWDC2020 10616 讲座 - 调试 Metal 中的 GPU 端错误](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2020\u002F10616\u002F)\n  - [WWDC2020 10012 讲座 - 使用 Metal 探索光线追踪](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2020\u002F10012\u002F)\n  - [WWDC2020 10013 讲座 - 了解 Metal 函数指针](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2020\u002F10013\u002F)\n  - [WWDC2020 10605 讲座 - 使用 Xcode 12 深入了解您的 Metal 应用](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2020\u002F10605\u002F)\n  - [WWDC2020 10602 讲座 - 使用 Metal 充分利用 Apple GPU](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2020\u002F10602\u002F)\n\n### AR\n\n- WWDC2020\n  - [TECH-TALKS 609 讲座 - AR 中的高级场景理解](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Ftech-talks\u002F609\u002F)\n  - [TECH-TALKS 601 讲座 - 使用 ARKit 进行面部跟踪](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Ftech-talks\u002F601\u002F)\n  - [WWDC2020 10611 讲座 - 探索 ARKit 4](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2020\u002F10611\u002F)\n  - [WWDC2020 10604 讲座 - 使用 AR Quick Look 在线购物](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2020\u002F10604\u002F)\n  - [WWDC2020 10601 讲座 - 艺术家的 AR 工具包](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2020\u002F10601\u002F)\n  - [WWDC2020 10613 讲座 - USD 的新变化](https:\u002F\u002Fdeveloper.apple.com\u002Fvideos\u002Fplay\u002Fwwdc2020\u002F10613\u002F)\n\n### 示例\n\n- 训练\n  - Keras 示例：https:\u002F\u002Fkeras.io\u002Fexamples\u002F\n  - PyTorch 示例：https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fexamples\n- 推理\n  - TFLite 示例：https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fexamples\u002Ftree\u002Fmaster\u002Flite\n  - PyTorch Mobile iOS 示例：https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fios-demo-app\n  - FritzLabs 示例：https:\u002F\u002Fgithub.com\u002Ffritzlabs\u002Ffritz-examples\n- 模型\n  - TensorFlow 和 TFLite 模型：https:\u002F\u002Ftfhub.dev\u002F\n  - PyTorch 模型：https:\u002F\u002Fpytorch.org\u002Fhub\u002F\n  - CoreML 官方模型：https:\u002F\u002Fdeveloper.apple.com\u002Fmachine-learning\u002Fmodels\u002F","# awesome-ml-demos-with-ios 快速上手指南\n\n本指南旨在帮助开发者快速在 iOS 设备上运行机器学习模型演示，涵盖 Core ML、ML Kit (TensorFlow Lite) 等主流框架。\n\n## 环境准备\n\n在开始之前，请确保您的开发环境满足以下要求：\n\n*   **操作系统**: macOS (推荐最新稳定版)\n*   **开发工具**: Xcode 12.0 或更高版本\n*   **设备要求**: iPhone X 
或更新机型（部分高性能模型如深度预测、语义分割需要较强的算力）\n*   **前置知识**: 熟悉 Swift 编程语言及 iOS 应用开发基础\n*   **依赖框架**:\n    *   Apple Core ML (Xcode 内置)\n    *   Google ML Kit \u002F TensorFlow Lite (通过 Swift Package Manager 或 CocoaPods 集成)\n\n## 安装步骤\n\n该项目是一个集合了多个独立 Demo 的仓库，每个功能模块通常对应一个独立的 GitHub 子项目。以下是获取和运行任意一个 Demo（以图像分类为例）的通用步骤：\n\n1.  **克隆项目**\n    打开终端，选择您感兴趣的具体 Demo 仓库进行克隆。例如，运行基于 Core ML 的图像分类演示：\n    ```bash\n    git clone https:\u002F\u002Fgithub.com\u002Ftucan9389\u002FImageClassification-CoreML.git\n    ```\n    或者运行基于 ML Kit 的演示：\n    ```bash\n    git clone https:\u002F\u002Fgithub.com\u002Ftucan9389\u002FMobileNet-MLKit.git\n    ```\n\n2.  **打开项目**\n    进入项目目录并打开 Xcode 工程文件：\n    ```bash\n    cd ImageClassification-CoreML\n    open ImageClassification-CoreML.xcodeproj\n    ```\n    *(注：如果项目使用 `.xcworkspace`，请使用 `open *.xcworkspace`)*\n\n3.  **配置签名与依赖**\n    *   在 Xcode 中，选择左侧项目导航栏中的 Project Target。\n    *   在 \"Signing & Capabilities\" 标签页中，将 **Team** 设置为您的 Apple ID 或开发团队，以自动处理代码签名。\n    *   如果项目提示缺少依赖，Xcode 通常会自动通过 Swift Package Manager 下载。若使用 CocoaPods，请在终端执行 `pod install` 后打开 `.xcworkspace` 文件。\n\n4.  **构建并运行**\n    连接您的 iOS 真机（推荐）或启动模拟器，点击 Xcode 顶部的运行按钮（▶️）或使用快捷键 `Cmd + R`。\n\n## 基本使用\n\n大多数 Demo 项目在启动后会自动调用摄像头或加载预设图片进行推理。以下以 **ImageClassification-CoreML** 为例说明基本交互流程：\n\n1.  **启动应用**: 应用启动后会请求相机权限，请点击“允许”。\n2.  **实时推理**: 将摄像头对准物体，屏幕上方会实时显示识别结果（类别名称）和置信度。\n    *   界面顶部通常包含性能监控模块（Measure module），显示 **Inference Time** (推理耗时)、**Execution Time** (总耗时) 和 **FPS**。\n3.  **切换模型\u002F模式**: 部分高级 Demo（如姿态估计或物体检测）可能在界面上提供按钮来切换不同的模型（例如从 MobileNetV1 切换到 MobileNetV2）或开启\u002F关闭后处理可视化。\n4.  **查看代码逻辑**:\n    *   **模型加载**: 查看 `ViewController.swift` 或专门的 `ModelHandler` 类，寻找 `MLModel(contentsOf:)` 或 `Interpreter` 初始化代码。\n    *   **预处理**: 注意观察图像如何被调整为模型输入的尺寸（如 224x224）并进行归一化处理。\n    *   **后处理**: 查看如何将模型输出的数组转换为可读的标签和边界框坐标。\n\n**示例代码片段（Core ML 推理核心逻辑，假设使用 Xcode 为 `MobileNet.mlmodel` 自动生成的模型类；`pixelBuffer(width:height:)` 为示意用的 UIImage 辅助扩展，各 Demo 仓库中有类似实现，名称可能不同）:**\n```swift\nimport CoreML\nimport UIKit\n\n\u002F\u002F 加载模型（MobileNet 为 Xcode 自动生成的模型类）\nlet config = MLModelConfiguration()\nguard let model = try? MobileNet(configuration: config) else { return }\n\n\u002F\u002F 准备输入：通过辅助扩展将 UIImage 缩放为 224x224 并转换为 CVPixelBuffer（示意）\nguard let pixelBuffer = image.pixelBuffer(width: 224, height: 224) else { return }\n\n\u002F\u002F 执行推理（生成的模型类以输入特征名作为参数名，此处为 image）\nguard let output = try? model.prediction(image: pixelBuffer) else { return }\n\n\u002F\u002F 获取结果：classLabel 为最高置信度类别，classLabelProbs 为各类别概率\nlet label = output.classLabel\nlet confidence = output.classLabelProbs[label] ?? 0\n```
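\n\n上面的片段需要你自行完成 UIImage 到 CVPixelBuffer 的预处理。作为对照，下面是一段**假设性的补充示意**（非各 Demo 仓库的源码，仅演示另一种常见思路）：改用 Vision 框架驱动同一个 Core ML 模型，由 Vision 自动完成缩放、裁剪等预处理。其中 `VNCoreMLModel`、`VNCoreMLRequest`、`VNImageRequestHandler` 均为 Apple Vision 框架的公开 API，`MobileNet` 仍指 Xcode 自动生成的模型类：\n```swift\nimport CoreML\nimport Vision\n\n\u002F\u002F 假设性示意：用 Vision 驱动 Core ML 分类模型；pixelBuffer 可来自相机帧\nfunc classify(pixelBuffer: CVPixelBuffer) {\n    let config = MLModelConfiguration()\n    guard let coreMLModel = try? MobileNet(configuration: config).model,\n          let visionModel = try? VNCoreMLModel(for: coreMLModel) else { return }\n\n    let request = VNCoreMLRequest(model: visionModel) { request, _ in\n        guard let results = request.results as? [VNClassificationObservation],\n              let top = results.first else { return }\n        \u002F\u002F identifier 为类别名，confidence 为置信度\n        print(top.identifier, top.confidence)\n    }\n    \u002F\u002F Vision 会按模型的输入尺寸自动缩放和裁剪，无需手写预处理\n    request.imageCropAndScaleOption = .centerCrop\n\n    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer)\n    try? handler.perform([request])\n}\n```\n这种写法省去了手写预处理的步骤，但对输入变换的控制粒度较粗，两种方式可按需选用。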
\n\n您可以参考仓库中其他子项目（如 `ObjectDetection-CoreML`, `PoseEstimation-TFLiteSwift`）来探索物体检测、关键点识别、语义分割等更复杂的场景。","某初创团队正致力于开发一款基于 iOS 的实时植物病害识别应用，需要在移动端高效运行自定义机器学习模型。\n\n### 没有 awesome-ml-demos-with-ios 时\n- **框架选型迷茫**：面对 Core ML、TensorFlow Lite 和 PyTorch Mobile 等多种框架，开发者难以快速理清各框架的模型转换格式与兼容性问题，浪费大量调研时间。\n- **预处理逻辑缺失**：官方文档往往只关注推理本身，开发者需从零编写图像缩放、归一化等前后处理代码，极易因数据格式错误导致推理结果偏差。\n- **基线项目匮乏**：缺乏针对物体检测、语义分割等具体任务的完整参考代码，团队不得不反复试错来调试模型在真机上的性能表现。\n- **性能评估困难**：缺少标准化的性能测量模块，难以量化模型在 iOS 设备上的延迟与内存占用，优化工作如同“盲人摸象”。\n\n### 使用 awesome-ml-demos-with-ios 后\n- **技术路径清晰**：通过仓库中详细的流程图与多框架对比，团队迅速确定了适合植物识别任务的 Core ML 技术栈，并掌握了模型转换的关键步骤。\n- **代码复用高效**：直接复用仓库中成熟的图像分类与物体检测基线项目（如 MobileNet 示例），内置完善的预处理逻辑，将核心功能开发周期缩短了一半。\n- **场景落地加速**：参考“应用项目”中的标注工具与实战案例，快速构建了从数据准备到模型部署的完整闭环，显著降低了试错成本。\n- **性能优化有据**：利用仓库提供的性能测量模块，精准定位了推理瓶颈，成功将单张图片的识别延迟优化至毫秒级，确保了流畅的用户体验。\n\nawesome-ml-demos-with-ios 通过提供标准化的全流程演示与基线代码，将 iOS 端机器学习模型的落地门槛从“专家级”降低到了“工程级”。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ftucan9389_awesome-ml-demos-with-ios_bc3ee604.png","tucan9389","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Ftucan9389_35e5e255.jpg","SWE","Google","California, USA",null,"https:\u002F\u002Fsites.google.com\u002Fview\u002Ftucan9389","https:\u002F\u002Fgithub.com\u002Ftucan9389",[81],{"name":82,"color":83,"percentage":84},"Python","#3572A5",100,1283,139,"2026-04-05T00:13:14","MIT","macOS, iOS","未说明 (主要依赖 iOS 设备端神经引擎或 CPU 进行推理)","未说明 (取决于具体 iOS 设备型号)",{"notes":93,"python":94,"dependencies":95},"这是一个专注于在 iOS 设备上运行机器学习模型的演示项目集合，而非传统的服务器端训练环境。主要开发环境为 macOS (需安装 Xcode)。模型需在外部训练后转换为 .mlmodel 格式，或使用 Apple 的 Create ML 工具直接在 macOS 上训练。性能数据基于 iPhone X 等设备测试，不支持 Linux 或 Windows 直接运行演示应用。","未说明",[96,97,98,99,100,101],"Core ML","Create ML","TensorFlow Lite","ML Kit","Xcode","Swift",[14],[104,105,106,107,108,109,110,111,112],"ios","machine-learning","coreml","mlkit","tensorflow","tensorflow-lite","demo","awesome","inference","2026-03-27T02:49:30.150509","2026-04-14T00:14:33.569698",[],[]]