[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-john-rocky--CoreML-Models":3,"tool-john-rocky--CoreML-Models":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",145895,2,"2026-04-08T11:32:59",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108111,"2026-04-08T11:23:26",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 
效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":10,"last_commit_at":59,"category_tags":60,"status":17},4487,"LLMs-from-scratch","rasbt\u002FLLMs-from-scratch","LLMs-from-scratch 是一个基于 PyTorch 的开源教育项目，旨在引导用户从零开始一步步构建一个类似 ChatGPT 的大型语言模型（LLM）。它不仅是同名技术著作的官方代码库，更提供了一套完整的实践方案，涵盖模型开发、预训练及微调的全过程。\n\n该项目主要解决了大模型领域“黑盒化”的学习痛点。许多开发者虽能调用现成模型，却难以深入理解其内部架构与训练机制。通过亲手编写每一行核心代码，用户能够透彻掌握 Transformer 架构、注意力机制等关键原理，从而真正理解大模型是如何“思考”的。此外，项目还包含了加载大型预训练权重进行微调的代码，帮助用户将理论知识延伸至实际应用。\n\nLLMs-from-scratch 特别适合希望深入底层原理的 AI 开发者、研究人员以及计算机专业的学生。对于不满足于仅使用 API，而是渴望探究模型构建细节的技术人员而言，这是极佳的学习资源。其独特的技术亮点在于“循序渐进”的教学设计：将复杂的系统工程拆解为清晰的步骤，配合详细的图表与示例，让构建一个虽小但功能完备的大模型变得触手可及。无论你是想夯实理论基础，还是为未来研发更大规模的模型做准备",90106,"2026-04-06T11:19:32",[35,15,13,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":77,"owner_email":78,"owner_twitter":79,"owner_website":76,"owner_url":80,"languages":81,"stars":90,"forks":91,"last_commit_at":92,"license":76,"difficulty_score":32,"env_os":93,"env_gpu":94,"env_ram":94,"env_deps":95,"category_tags":100,"github_topics":101,"view_count":32,"oss_zip_url":76,"oss_zip_packed_at":76,"status":17,"created_at":116,"updated_at":117,"faqs":118,"releases":119},5554,"john-rocky\u002FCoreML-Models","CoreML-Models","Converted CoreML Model Zoo.","CoreML-Models 是一个专为苹果生态打造的机器学习模型资源库，汇集了众多已转换并优化好的 Core ML 格式模型。它主要解决了开发者在 iOS、macOS 等平台应用机器学习功能时，面临模型格式转换复杂、适配难度大以及缺乏高质量现成模型的痛点。\n\n无论是图像分类、目标检测（涵盖 YOLO 系列）、人像分割，还是超分辨率重建、低光增强、风格迁移乃至 Stable Diffusion 文生图，CoreML-Models 都提供了丰富的预训练模型选择。其独特的技术亮点在于将原本复杂的开源模型直接转换为苹果原生框架支持的格式，让开发者无需自行处理繁琐的转换流程，即可通过简单的下载和拖拽操作，将先进的 AI 能力集成到 Xcode 项目中。\n\n这套资源库非常适合 iOS \u002FmacOS 应用开发者、希望快速验证算法原型的科研人员，以及对移动端 AI 感兴趣的设计师使用。对于普通用户而言，虽然不能直接运行模型，但许多基于此库开发的 App 能带来更智能的拍照、修图及交互体验。如果你希望在苹果设备上高效落地前沿 AI 技术，CoreML-Models 无疑是一个值得信赖的起点。","# CoreML-Models\nConverted Core ML Model Zoo.\n\n\u003Cimg width=\"1280\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_638c7bf7b158.jpeg\">\n\nCore ML is a machine learning framework by Apple.\nIf you are iOS developer, you can easly use machine learning models in your Xcode project. 
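For a quick idea of what that looks like in practice, here is a minimal sketch of running one of the bundled classifiers (for example Efficientnetb0 from the table further below) through Vision. The model name and surrounding app code are illustrative, not part of this repository:

```swift
import CoreML
import UIKit
import Vision

// Minimal sketch: classify a UIImage with a model downloaded from this zoo and
// added to the app target. "Efficientnetb0" is illustrative; Xcode generates a
// wrapper class named after whatever .mlmodel / .mlpackage you bundle.
func classify(_ image: UIImage) throws {
    let coreMLModel = try Efficientnetb0(configuration: MLModelConfiguration()).model
    let visionModel = try VNCoreMLModel(for: coreMLModel)

    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        // Classifier-type models report their labels as classification observations.
        guard let results = request.results as? [VNClassificationObservation] else { return }
        for top in results.prefix(3) {
            print("\(top.identifier): \(top.confidence)")
        }
    }
    request.imageCropAndScaleOption = .centerCrop // match the model's square input

    guard let cgImage = image.cgImage else { return }
    try VNImageRequestHandler(cgImage: cgImage).perform([request])
}
```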
\n\n# How to use\n\nTake a look this model zoo, and if you found the CoreML model you want,\ndownload the model from google drive link and bundle it in your project.\nOr if the model have sample project link, try it and see how to use the model in the project.\nYou are free to do or not.\n\n**If you like this repository, please give me a star so I can do my best.**\n\n# Section Link\n\n- [**Image Classifier**](#image-classifier)\n  - [Efficientnetb0](#efficientnetb0)\n  - [Efficientnetv2](#efficientnetv2)\n  - [VisionTransformer](#visiontransformer)\n  - [Conformer](#conformer)\n  - [DeiT](#deit)\n  - [RepVGG](#repvgg)\n  - [RegNet](#regnet)\n  - [MobileViTv2](#mobilevitv2)\n\n  \n- [**Object Detection**](#object-detection)\n  - [D-FINE](#d-fine)\n  - [RF-DETR](#rf-detr)\n  - [YOLOv5s](#yolov5s)\n  - [YOLOv7](#yolov7)\n  - [YOLOv8](#yolov8)\n  - [YOLOv9](#yolov9)\n  - [YOLOv10](#yolov10)\n  - [YOLO11](#yolo11)\n  - [YOLO26](#yolo26)\n  - [YOLO-World](#yolo-world)\n\n- [**Segmentation**](#segmentation)\n  - [U2Net](#u2net)\n  - [IS-Net](#is-net)\n  - [RMBG1.4](#rmbg14)\n  - [face-parsing](#face-parsing)\n  - [Segformer](#segformer)\n  - [BiseNetv2](#bisenetv2)\n  - [DNL](#dnl)\n  - [ISANet](#isanet)\n  - [FastFCN](#fastfcn)\n  - [GCNet](#gcnet)\n  - [DANet](#danet)\n  - [Semantic FPN](#semantic-fpn)\n  - [cloths_segmentation](#cloths_segmentation)\n  - [easyportrait](#easyportrait)\n  - [MobileSAM](#mobilesam)\n  - [SAM2-Tiny](#sam2-tiny)\n\n- [**Video Matting**](#video-matting)\n  - [MatAnyone](#matanyone)\n\n- [**Super Resolution**](#super-resolution)\n  - [Real ESRGAN](#real-esrgan)\n  - [GFPGAN](#gfpgan)\n  - [BSRGAN](#bsrgan)\n  - [A-ESRGAN](#a-esrgan)\n  - [Beby-GAN](#beby-gan)\n  - [RRDN](#rrdn)\n  - [Fast-SRGAN](#fast-srgan)\n  - [ESRGAN](#esrgan)\n  - [UltraSharp](#ultrasharp)\n  - [SRGAN](#srgan)\n  - [SRResNet](#srresnet)\n  - [LESRCNN](#lesrcnn)\n  - [MMRealSR](#mmrealsr)\n  - [DASR](#dasr)\n  - [SinSR](#sinsr)\n      \n- [**Low Light Enhancement**](#low-light-enhancement)\n  - [StableLLVE](#stablellve)\n  - [Zero-DCE](#zero-dce)\n  - [Retinexformer](#retinexformer)\n\n- [**Image Restoration**](#image-restroration)\n  - [MPRNet](#mprnet)\n  - [MIRNetv2](#mirnetv2)\n  \n- [**Image Generation**](#image-generation)\n  - [MobileStyleGAN](#mobilestylegan)\n  - [DCGAN](#dcgan)\n\n- [**Image2Image**](#image2image)\n  - [Anime2Sketch](#anime2sketch)\n  - [AnimeGAN2Face_Paint_512_v2](#animegan2face_paint_512_v2)\n  - [Photo2Cartoon](#photo2cartoon)\n  - [AnimeGANv2_Hayao](#animeGANv2_hayao)\n  - [AnimeGANv2_Paprika](#animeGANv2_paprika)\n  - [WarpGAN Caricature](#warpgancaricature)\n  - [UGATIT_selfie2anime](#ugatit_selfie2anime)\n  - [Fast-Neural-Style-Transfer](#fast-neural-style-transfer)\n  - [White_box_Cartoonization](#white_box_cartoonization)\n  - [FacialCartoonization](#facialcartoonization)\n\n- [**Inpainting**](#inpainting)\n  - [AOT-GAN-for-Inpainting](#aot-gan-for-inpainting)\n  - [Lama](#lama)\n\n- [**Monocular Depth Estimation**](#monocular-depth-estimation)\n  - [MiDaS](#midas)\n  \n- [**Stable Diffusion**](#stable-diffusion) **:text2image**\n  - [Hyper-SD](#hyper-sd)\n  - [stable-diffusion-v1-5](#stable-diffusion-v1-5)\n  - [pastel-mix](#pastel-mix)\n  - [Orange Mix](#orange-mix)\n  - [Counterfeit-V2.5](#counterfeit)\n  - [anything-v4.5](#anything-v4)\n  - [Openjourney](#openjourney)\n  - [dreamlike-photoreal-2.0](#dreamlike-photoreal-2)\n\n- [**Image Colorization**](#image-colorization)\n  - [DDColor Tiny](#ddcolor-tiny)\n\n- [**Face Recognition**](#face-recognition)\n  
- [AdaFace IR-18](#adaface-ir-18)\n\n- [**3D Face Pose Estimation**](#3d-face-pose-estimation)\n  - [3DDFA_V2](#3ddfa_v2)\n\n- [**Speaker Diarization**](#speaker-diarization)\n  - [pyannote segmentation-3.0](#pyannote-segmentation-30)\n\n- [**Voice Conversion**](#voice-conversion)\n  - [OpenVoice V2](#openvoice-v2)\n\n- [**Text-to-Speech**](#text-to-speech)\n  - [Kokoro-82M](#kokoro-82m)\n\n- [**Text-to-Music Generation**](#text-to-music-generation)\n  - [Stable Audio Open Small](#stable-audio-open-small)\n\n- [**Audio Source Separation**](#audio-source-separation)\n  - [HTDemucs](#htdemucs)\n\n- [**Vision-Language**](#vision-language)\n  - [Florence-2-base](#florence-2-base)\n\n- [**Zero-Shot Image Classification**](#zero-shot-image-classification)\n  - [SigLIP ViT-B\u002F16](#siglip-vit-b16)\n\n- [**Anomaly Detection**](#anomaly-detection)\n  - [EfficientAD](#efficientad)\n\n- [**Music Transcription**](#music-transcription)\n  - [Basic Pitch](#basic-pitch)\n\n# How to get the model\nYou can get the model converted to CoreML format from the link of Google drive.\nSee the section below for how to use it in Xcode.\nThe license for each model conforms to the license for the original project.\n\n# Image Classifier\n\n### Efficientnet\n\n\u003Cimg width=\"400\" alt=\"スクリーンショット 2021-12-27 6 34 43\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_218f024c1aac.png\">\n\n| Google Drive Link | Size | Dataset |Original Project | License |\n| ------------- | ------------- | ------------- |------------- |------------- |\n| [Efficientnetb0](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1mJq8SMuDaCQHW77ui3fAfe5o3Qu2GKMi\u002Fview?usp=sharing) | 22.7 MB | ImageNet | [TensorFlowHub](https:\u002F\u002Ftfhub.dev\u002Ftensorflow\u002Fefficientnet\u002Fb0\u002Fclassification\u002F1)  |[Apache2.0](https:\u002F\u002Fopensource.org\u002Flicenses\u002FApache-2.0)|\n\n\n### Efficientnetv2\n\n\u003Cimg width=\"400\" alt=\"スクリーンショット 2021-12-31 4 30 22\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_19ceaac75008.png\">\n\n| Google Drive Link | Size | Dataset |Original Project | License | Year|\n| ------------- | ------------- | ------------- |------------- |------------- |------------- |\n| [Efficientnetv2](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F12JiGwXh8pX3yjoG_GsJOKAnPd3lbVrrn\u002Fview?usp=sharing) | 85.8 MB | ImageNet | [Google\u002FautoML](https:\u002F\u002Fgithub.com\u002Fgoogle\u002Fautoml\u002Ftree\u002Fmaster\u002Fefficientnetv2)  | [Apache2.0](https:\u002F\u002Fgithub.com\u002Fgoogle\u002Fautoml\u002Fblob\u002Fmaster\u002FLICENSE)|2021|\n\n### VisionTransformer\n\nAn Image is Worth 16x16 Words: Transformers for Image Recognition at Scale.\n\n\u003Cimg width=\"400\" alt=\"スクリーンショット 2022-01-07 10 37 05\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_f43de3ca2ca5.png\">\n\n| Google Drive Link | Size | Dataset |Original Project | License |Year|\n| ------------- | ------------- | ------------- |------------- |------------- |------------- |\n| [VisionTransformer-B16](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1VPo8Cjv7dyicM4lcJ6TgxnD4AN3ldMQp\u002Fview?usp=sharing) | 347.5 MB | ImageNet | [google-research\u002Fvision_transformer](https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Fvision_transformer)  | [Apache2.0](https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Fvision_transformer\u002Fblob\u002Fmain\u002FLICENSE)|2021|\n\n### 
Conformer\n\nLocal Features Coupling Global Representations for Visual Recognition.\n\n\u003Cimg width=\"400\" alt=\"スクリーンショット 2022-01-07 11 34 33\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_28411770ea64.png\">\n\n| Google Drive Link | Size | Dataset |Original Project | License |Year|\n| ------------- | ------------- | ------------- |------------- |------------- |------------- |\n| [Conformer-tiny-p16](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1-4qVbuTYr4r4o08656iGtV8KKblAVVyr\u002Fview?usp=sharing) | 94.1 MB | ImageNet | [pengzhiliang\u002FConformer](https:\u002F\u002Fgithub.com\u002Fpengzhiliang\u002FConformer)  | [Apache2.0](https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Fvision_transformer\u002Fblob\u002Fmain\u002FLICENSE)|2021|\n\n### DeiT\n\nData-efficient Image Transformers\n\n\u003Cimg width=\"400\" alt=\"スクリーンショット 2022-01-07 11 50 25\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_d0a931e0237e.png\">\n\n| Google Drive Link | Size | Dataset |Original Project | License |Year|\n| ------------- | ------------- | ------------- |------------- |------------- |------------- |\n| [DeiT-base384](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1-7J-b0fTjmZi2VDPrDCWKBsCYGxYP5yW\u002Fview?usp=sharing) | 350.5 MB | ImageNet | [facebookresearch\u002Fdeit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fdeit)  | [Apache2.0](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fdeit\u002Fblob\u002Fmain\u002FLICENSE)|2021|\n\n### RepVGG\n\nMaking VGG-style ConvNets Great Again\n\n\u003Cimg width=\"400\" alt=\"スクリーンショット 2022-01-08 5 00 53\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_591c2d0eaad2.png\">\n\n| Google Drive Link | Size | Dataset |Original Project | License |Year|\n| ------------- | ------------- | ------------- |------------- |------------- |------------- |\n| [RepVGG-A0](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1i8mDvRGn2_OjzIG9ioVJyQrefVliKsh_\u002Fview?usp=sharing) | 33.3 MB | ImageNet | [DingXiaoH\u002FRepVGG](https:\u002F\u002Fgithub.com\u002FDingXiaoH\u002FRepVGG)  | [MIT](https:\u002F\u002Fgithub.com\u002FDingXiaoH\u002FRepVGG\u002Fblob\u002Fmain\u002FLICENSE)|2021|\n\n### RegNet\n\nDesigning Network Design Spaces\n\n\u003Cimg width=\"400\" alt=\"スクリーンショット 2022-02-23 7 38 23\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_3edff6cd7f19.png\">\n\n| Google Drive Link | Size | Dataset |Original Project | License |Year|\n| ------------- | ------------- | ------------- |------------- |------------- |------------- |\n| [regnet_y_400mf](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F16jbUJ4gHSzdxxbYb99rOQe0FiKCuLyDB\u002Fview?usp=sharing) | 16.5 MB | ImageNet | [TORCHVISION.MODELS](https:\u002F\u002Fpytorch.org\u002Fvision\u002Fstable\u002Fmodels.html#torchvision-models)  | [MIT](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpycls\u002Fblob\u002Fmain\u002FLICENSE)|2020|\n\n\n### MobileViTv2\n\nCVNets: A library for training computer vision networks\n\n\u003Cimg width=\"400\" alt=\"スクリーンショット 2022-02-23 7 38 23\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_7c134d27875a.png\">\n\n| Google Drive Link | Size | Dataset |Original Project | License |Year|Conversion Script|\n| ------------- | ------------- | ------------- |------------- |------------- |------------- |------------- |\n| 
[MobileViTv2](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1__aG67p6o5-NIchkHpfFJBszCpIhI0uf\u002Fview?usp=share_link) | 18.8 MB | ImageNet | [apple\u002Fml-cvnets](https:\u002F\u002Fgithub.com\u002Fapple\u002Fml-cvnets)  | [apple](https:\u002F\u002Fgithub.com\u002Fapple\u002Fml-cvnets\u002Fblob\u002Fmain\u002FLICENSE)|2022|[![Open In Colab](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)]([https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1QiTlFsN948Xt2e4WgqUB8DnGgwWwtVZS?usp=sharing](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1UQwhFpVP_4Q9I6LXPdBSS0VDhIRdUBQA?usp=sharing)) |\n\n# Object Detection\n\n### D-FINE\n\n\u003Cimg width=\"400\" alt=\"D-FINE iOS Demo\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_b1350cd53f97.png\">\n\n| Download Link | Size | Output | Original Project | License | Note | Sample Project |\n| ------------- | ------------- | ------------- | ------------- |------------- |------------- |------------- |\n|[dfine-n-coco](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002Fpeaceofcake\u002Freleases\u002Fdownload\u002Fv0.2.0\u002Fdfine_n_coco.mlpackage.zip)|13MB| Confidence(MultiArray (Float32 300 × 80)), Coordinates (MultiArray (Float32 300 × 4)) |[Peterande\u002FD-FINE](https:\u002F\u002Fgithub.com\u002FPeterande\u002FD-FINE)|[Apache 2.0](https:\u002F\u002Fgithub.com\u002FPeterande\u002FD-FINE\u002Fblob\u002Fmaster\u002FLICENSE)|Input 640×640. Coordinates are normalized cxcywh. No NMS — filter by confidence threshold.| [peaceofcake DFINEDemo](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002Fpeaceofcake\u002Ftree\u002Fmain\u002FDFINEDemo) |\n\n### RF-DETR\n\n\u003Cimg width=\"400\" alt=\"RF-DETR iOS Demo\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_013eae2c371c.png\">\n\n| Download Link | Size | Output | Original Project | License | Note | Sample Project |\n| ------------- | ------------- | ------------- | ------------- |------------- |------------- |------------- |\n|[rfdetr-n-coco](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002Fpeaceofcake\u002Freleases\u002Fdownload\u002Fv0.2.0\u002Frfdetr_n_coco.mlpackage.zip)|95MB| Confidence(MultiArray (Float32 300 × 91)), Coordinates (MultiArray (Float32 300 × 4)) |[roboflow\u002Frf-detr](https:\u002F\u002Fgithub.com\u002Froboflow\u002Frf-detr)|[Apache 2.0](https:\u002F\u002Fgithub.com\u002Froboflow\u002Frf-detr\u002Fblob\u002Fmain\u002FLICENSE)|Input 384×384. 91 classes (index 0 = background, 1-90 = COCO category IDs). Coordinates are normalized cxcywh. 
No NMS.| [peaceofcake DFINEDemo](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002Fpeaceofcake\u002Ftree\u002Fmain\u002FDFINEDemo) |\n\n### YOLOv5s\n\n\u003Cimg width=\"400\" alt=\"スクリーンショット 2021-12-29 6 17 08\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_bd332537229a.png\">\n\n| Google Drive Link | Size | Output | Original Project | License | Note | Sample Project |\n| ------------- | ------------- | ------------- | ------------- |------------- |------------- |------------- |\n|[YOLOv5s](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1KT-9eKO4F-LYIJVYJg7dy2LEW_hVUq0M\u002Fview?usp=sharing)|29.3MB| Confidence(MultiArray (Double 0 × 80)), Coordinates (MultiArray (Double 0 × 4)) |[ultralytics\u002Fyolov5](https:\u002F\u002Fgithub.com\u002Fultralytics\u002Fyolov5)|[GNU](https:\u002F\u002Fgithub.com\u002Fultralytics\u002Fyolov5\u002Fblob\u002Fmaster\u002FLICENSE)|Non Maximum Suppression has been added.| [CoreML-YOLOv5](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-YOLOv5) |\n\n### YOLOv7\n\n\u003Cimg width=\"400\" alt=\"スクリーンショット 2021-12-29 6 17 08\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_544cb65244bf.png\">\n\n| Google Drive Link | Size | Output | Original Project | License | Note | Sample Project | Conversion Script |\n| ------------- | ------------- | ------------- | ------------- |------------- |------------- |------------- |------------- |\n|[YOLOv7](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1EKBC7tiwP1tDvXUm_ldD1Nq7hW8HofLe\u002Fview?usp=sharing)|147.9MB| Confidence(MultiArray (Double 0 × 80)), Coordinates (MultiArray (Double 0 × 4)) |[WongKinYiu\u002Fyolov7](https:\u002F\u002Fgithub.com\u002FWongKinYiu\u002Fyolov7)|[GNU](https:\u002F\u002Fgithub.com\u002FWongKinYiu\u002Fyolov7\u002Fblob\u002Fmain\u002FLICENSE.md)|Non Maximum Suppression has been added.| [CoreML-YOLOv5](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-YOLOv5) | [![Open In Colab](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1QiTlFsN948Xt2e4WgqUB8DnGgwWwtVZS?usp=sharing) |\n\n### YOLOv8\n\n\u003Cimg width=\"400\" alt=\"スクリーンショット 2021-12-29 6 17 08\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_80972c8d4ce8.png\">\n\n| Google Drive Link | Size | Output | Original Project | License | Note | Sample Project | \n| ------------- | ------------- | ------------- | ------------- |------------- |------------- |------------- |\n|[YOLOv8s](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1pLRh1Y37KLEMpQn3v8qH-A12swakoHbI\u002Fview?usp=share_link)|45.1MB| Confidence(MultiArray (Double 0 × 80)), Coordinates (MultiArray (Double 0 × 4)) |[ultralytics\u002Fultralytics](https:\u002F\u002Fgithub.com\u002Fultralytics\u002Fultralytics)|[GNU](https:\u002F\u002Fgithub.com\u002Fultralytics\u002Fultralytics\u002Fblob\u002Fmain\u002FLICENSE)|Non Maximum Suppression has been added.| [CoreML-YOLOv5](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-YOLOv5) |\n\n### YOLOv9\n\nYOLOv9: Learning What You Want to Learn Using Programmable Gradient Information. 
Uses PGI and GELAN architecture for efficient object detection.\n\n| Download Link | Size | Output | Original Project | License | Year | Note | Sample Project |\n| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |\n| [yolov9s.mlpackage.zip](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-Models\u002Freleases\u002Fdownload\u002Fyolo-models-v1\u002Fyolov9s.mlpackage.zip) | 14 MB | Confidence (MultiArray (Double 0 × 80)), Coordinates (MultiArray (Double 0 × 4)) | [WongKinYiu\u002Fyolov9](https:\u002F\u002Fgithub.com\u002FWongKinYiu\u002Fyolov9) | [GPL-3.0](https:\u002F\u002Fgithub.com\u002FWongKinYiu\u002Fyolov9\u002Fblob\u002Fmain\u002FLICENSE.md) | 2024 | Non Maximum Suppression has been added. | [YOLOv9Demo](sample_apps\u002FYOLOv9Demo) |\n\n### YOLOv10\n\nYOLOv10: Real-Time End-to-End Object Detection. NMS-free architecture using consistent dual assignments — no post-processing needed.\n\n| Download Link | Size | Output | Original Project | License | Year | Note | Sample Project |\n| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |\n| [yolov10s.mlpackage.zip](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-Models\u002Freleases\u002Fdownload\u002Fyolo-models-v1\u002Fyolov10s.mlpackage.zip) | 14 MB | MultiArray (1 × 300 × 6) | [THU-MIG\u002Fyolov10](https:\u002F\u002Fgithub.com\u002FTHU-MIG\u002Fyolov10) | [AGPL-3.0](https:\u002F\u002Fgithub.com\u002FTHU-MIG\u002Fyolov10\u002Fblob\u002Fmain\u002FLICENSE) | 2024 | NMS-free end-to-end detection. | [YOLO26Demo](sample_apps\u002FYOLO26Demo) |\n\n### YOLO11\n\nYOLO11: Ultralytics latest YOLO with improved backbone and neck architecture. 22% fewer parameters than YOLOv8 with higher mAP.\n\n| Download Link | Size | Output | Original Project | License | Year | Note | Sample Project |\n| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |\n| [yolo11s.mlpackage.zip](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-Models\u002Freleases\u002Fdownload\u002Fyolo-models-v1\u002Fyolo11s.mlpackage.zip) | 18 MB | Confidence (MultiArray (Double 0 × 80)), Coordinates (MultiArray (Double 0 × 4)) | [ultralytics\u002Fultralytics](https:\u002F\u002Fgithub.com\u002Fultralytics\u002Fultralytics) | [AGPL-3.0](https:\u002F\u002Fgithub.com\u002Fultralytics\u002Fultralytics\u002Fblob\u002Fmain\u002FLICENSE) | 2024 | Non Maximum Suppression has been added. | [YOLOv9Demo](sample_apps\u002FYOLOv9Demo) |\n\n### YOLO26\n\nYOLO26: Edge-first vision AI with NMS-free end-to-end detection. 
Up to 43% faster CPU inference vs YOLO11 with DFL removal and ProgLoss.\n\n\u003Cimg width=\"300\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_7a13261f79bd.png\">\n\n| Download Link | Size | Output | Original Project | License | Year | Note | Sample Project |\n| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |\n| [yolo26s.mlpackage.zip](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-Models\u002Freleases\u002Fdownload\u002Fyolo-models-v1\u002Fyolo26s.mlpackage.zip) | 18 MB | MultiArray (1 × 300 × 6) | [ultralytics\u002Fultralytics](https:\u002F\u002Fgithub.com\u002Fultralytics\u002Fultralytics) | [AGPL-3.0](https:\u002F\u002Fgithub.com\u002Fultralytics\u002Fultralytics\u002Fblob\u002Fmain\u002FLICENSE) | 2026 | NMS-free end-to-end detection. | [YOLO26Demo](sample_apps\u002FYOLO26Demo) |\n\n### YOLO-World\n\nYOLO-World: Real-Time Open-Vocabulary Object Detection. Type any text query and detect it — no fixed class list. Uses CLIP text encoder for open-vocabulary matching.\n\n\u003Cimg width=\"300\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_e2a5baa26621.png\">\n\n| Download Link | Size | Description | Original Project | License | Year | Sample Project |\n| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |\n| [yoloworld_detector.mlpackage.zip](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-Models\u002Freleases\u002Fdownload\u002Fyolo-models-v1\u002Fyoloworld_detector.mlpackage.zip) | 25 MB | YOLO-World V2-S visual detector | [AILab-CVC\u002FYOLO-World](https:\u002F\u002Fgithub.com\u002FAILab-CVC\u002FYOLO-World) | [GPL-3.0](https:\u002F\u002Fgithub.com\u002FAILab-CVC\u002FYOLO-World\u002Fblob\u002Fmaster\u002FLICENSE) | 2024 | [YOLOWorldDemo](sample_apps\u002FYOLOWorldDemo) |\n| [clip_text_encoder.mlpackage.zip](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-Models\u002Freleases\u002Fdownload\u002Fyolo-models-v1\u002Fclip_text_encoder.mlpackage.zip) | 121 MB | CLIP ViT-B\u002F32 text encoder | [openai\u002FCLIP](https:\u002F\u002Fgithub.com\u002Fopenai\u002FCLIP) | [MIT](https:\u002F\u002Fgithub.com\u002Fopenai\u002FCLIP\u002Fblob\u002Fmain\u002FLICENSE) | 2021 | — |\n| [clip_vocab.json.zip](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-Models\u002Freleases\u002Fdownload\u002Fyolo-models-v1\u002Fclip_vocab.json.zip) | 1.6 MB | BPE vocabulary for tokenizer | — | — | — | — |\n\n# Segmentation\n\n### [U2Net](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1cpm-x12Ih7Cqd_kOjfTvtt4ipGS3BpCx\u002Fview?usp=sharing)\n\u003Cimg width=\"400\" src=\"https:\u002F\u002Fcamo.qiitausercontent.com\u002Fa8e89c72c0950db66d63415b9010d203aae22617\u002F68747470733a2f2f71696974612d696d6167652d73746f72652e73332e61702d6e6f727468656173742d312e616d617a6f6e6177732e636f6d2f302f3233353235392f36303037393162322d633534332d613537652d303639622d3863663130373932643662392e6a706567\"> \u003Cimg width=\"400\" src=\"https:\u002F\u002Fcamo.qiitausercontent.com\u002F4f502487cd9e9e02d150ad63b33683a1446e7516\u002F68747470733a2f2f71696974612d696d6167652d73746f72652e73332e61702d6e6f727468656173742d312e616d617a6f6e6177732e636f6d2f302f3233353235392f39636532633237612d643134322d663136352d343365662d6532373966646337386333382e706e67\">\n\n| Google Drive Link | Size | Output |Original Project | License |\n| ------------- | ------------- | ------------- | 
------------- |------------- |\n| [U2Net](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1cpm-x12Ih7Cqd_kOjfTvtt4ipGS3BpCx\u002Fview?usp=sharing) | 175.9 MB | Image(GRAYSCALE 320 × 320)| [xuebinqin\u002FU-2-Net](https:\u002F\u002Fgithub.com\u002Fxuebinqin)  | [Apache](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-Models\u002Fblob\u002Fmaster\u002FApache-LICENSE)|\n| [U2Netp](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1D-quPGy33PzSEC6A7EBNv7mCyuiBlO08\u002Fview?usp=sharing) | 4.6 MB | Image(GRAYSCALE 320 × 320) | [xuebinqin\u002FU-2-Net](https:\u002F\u002Fgithub.com\u002Fxuebinqin)  |  [Apache](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-Models\u002Fblob\u002Fmaster\u002FApache-LICENSE)|\n\n### [IS-Net](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F13CkOTBCYc3FjGTU26lmCsRYsOkeHnAMA?usp=sharing)\n\u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_b11e5227913a.jpg\"> \u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_c8b04f50a404.jpg\">\n\u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_bd49404efefe.jpeg\"> \u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_10ef6ed7938c.jpg\">\n\n| Google Drive Link | Size | Output |Original Project | License | Year | Conversion Script |\n| ------------- | ------------- | ------------- | ------------- |------------- | ------------- |------------- |\n| [IS-Net](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F13CkOTBCYc3FjGTU26lmCsRYsOkeHnAMA?usp=sharing) | 176.1 MB | Image(GRAYSCALE 1024 × 1024)| [xuebinqin\u002FDIS](https:\u002F\u002Fgithub.com\u002Fxuebinqin\u002FDIS)  | [Apache](https:\u002F\u002Fgithub.com\u002Fxuebinqin\u002FDIS\u002Fblob\u002Fmain\u002FLICENSE.md)| 2022 |[![Open In Colab](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1xWD7LZbI-_09LXmiYMdhA28V2qujvOlZ?usp=sharing)|\n| [IS-Net-General-Use](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1Vglh1zPwTglroMvycnkLdFP6nCHf_GuH\u002Fview?usp=sharing) | 176.1 MB | Image(GRAYSCALE 1024 × 1024)| [xuebinqin\u002FDIS](https:\u002F\u002Fgithub.com\u002Fxuebinqin\u002FDIS)  | [Apache](https:\u002F\u002Fgithub.com\u002Fxuebinqin\u002FDIS\u002Fblob\u002Fmain\u002FLICENSE.md)| 2022 |[![Open In Colab](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1xWD7LZbI-_09LXmiYMdhA28V2qujvOlZ?usp=sharing)|\n\n### RMBG1.4\n\nRMBG1.4 - The IS-Net enhanced with our unique training scheme and proprietary dataset. 
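Like U2Net and IS-Net above, RMBG returns a single-channel foreground mask. A minimal sketch (not taken from this repo's sample apps) of compositing such a mask over the source photo with Core Image; how you obtain the mask as a CVPixelBuffer and its resolution (320x320 for U2Netp, 1024x1024 here) depend on the specific model:

```swift
import CoreImage
import CoreImage.CIFilterBuiltins
import CoreVideo

// Minimal sketch: cut the subject out of `original` using a grayscale mask
// produced by one of the segmentation models (e.g. via a VNCoreMLRequest whose
// result arrives as a VNPixelBufferObservation). Names and sizes are illustrative.
func cutout(original: CIImage, maskBuffer: CVPixelBuffer) -> CIImage {
    var mask = CIImage(cvPixelBuffer: maskBuffer)

    // The mask comes back at the model's fixed resolution; scale it to the photo.
    let sx = original.extent.width / mask.extent.width
    let sy = original.extent.height / mask.extent.height
    mask = mask.transformed(by: CGAffineTransform(scaleX: sx, y: sy))

    // White mask pixels keep the original; black pixels fall through to clear.
    let blend = CIFilter.blendWithMask()
    blend.inputImage = original
    blend.backgroundImage = CIImage(color: .clear).cropped(to: original.extent)
    blend.maskImage = mask
    return blend.outputImage ?? original
}
```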
\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_58dde2e675cc.png\" width=400> \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_66aa0cc958c0.png\" width=400>\n\n| Download Link | Size | Output |Original Project | License | year  | Sample Project | Conversion Script |\n| ------------- | ------------- | ------------- |------------- | ------------- | ------------- |------------- |------------- |\n| [RMBG_1_4.mlpackage.zip](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-Models\u002Freleases\u002Fdownload\u002Frmbg-v1\u002FRMBG_1_4.mlpackage.zip) | 42 MB (INT8) | Alpha mask 1024x1024 |[briaai\u002FRMBG-1.4](https:\u002F\u002Fhuggingface.co\u002Fbriaai\u002FRMBG-1.4) | [Creative Commons](https:\u002F\u002Fhuggingface.co\u002Fbriaai\u002FRMBG-1.4) |2024| [RMBGDemo](sample_apps\u002FRMBGDemo) | [convert_rmbg.py](conversion_scripts\u002Fconvert_rmbg.py) |\n\n### face-Parsing\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_710e4e19e86b.png\" width=400> \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_fa3f9169d3cb.png\" width=400>\n\n| Google Drive Link | Size | Output |Original Project | License | Sample Project |\n| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |\n| [face-Parsing](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1I_cu8x0k6d1AEV_VPLyMu3Pqg3hwmo7g\u002Fview?usp=sharing) | 53.2 MB | MultiArray(1 x 512 × 512)| [zllrunning\u002Fface-parsing.PyTorch](https:\u002F\u002Fgithub.com\u002Fzllrunning\u002Fface-parsing.PyTorch)  | [MIT](https:\u002F\u002Fgithub.com\u002Fzllrunning\u002Fface-parsing.PyTorch\u002Fblob\u002Fmaster\u002FLICENSE)|[CoreML-face-parsing](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-Face-Parsing) |\n\n### Segformer\n\nSimple and Efficient Design for Semantic Segmentation with Transformers\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_07e52ed1596e.png\" width=400> \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_9d4043e68ded.jpg\" width=400>\n\n| Google Drive Link | Size | Output |Original Project | License | year |\n| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |\n| [SegFormer_mit-b0_1024x1024_cityscapes](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1-lcNjJM85DZh5-xQv4jlKL6I1ZMBk2uu\u002Fview?usp=sharing) | 14.9 MB | MultiArray(512 × 1024)| [NVlabs\u002FSegFormer](https:\u002F\u002Fgithub.com\u002FNVlabs\u002FSegFormer)  | [NVIDIA](https:\u002F\u002Fgithub.com\u002FNVlabs\u002FSegFormer\u002Fblob\u002Fmaster\u002FLICENSE)|2021|\n\n### BiSeNetV2\t\n\nBilateral Network with Guided Aggregation for Real-time Semantic Segmentation\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_e0e33605cfe6.jpg\" width=400> \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_b14af6e47030.jpg\" width=400>\n\n| Google Drive Link | Size | Output |Original Project | License | year |\n| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |\n| [BiSeNetV2_1024x1024_cityscapes](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1-20x0-TP8zqXCzDhH06TyL03SJRFYY9n\u002Fview?usp=sharing) 
| 12.8 MB | MultiArray | [ycszen\u002FBiSeNet](https:\u002F\u002Fgithub.com\u002Fycszen\u002FBiSeNet)  | Apache2.0 |2021|\n\n### DNL\n\nDisentangled Non-Local Neural Networks\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_c8561e34faba.jpg\" width=400> \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_a4e56332c2ba.png\" width=400>\n\n| Google Drive Link | Size | Output |Dataset|Original Project | License | year |\n| ------------- | ------------- | ------------- |------------- | ------------- | ------------- | ------------- |\n| [dnl_r50-d8_512x512_80k_ade20k](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1DOnPGocotsjXknBuNqikgpFVpmH6s_E3\u002Fview?usp=sharing) | 190.8 MB | MultiArray[512x512] |ADE20K| [yinmh17\u002FDNL-Semantic-Segmentation](https:\u002F\u002Fgithub.com\u002Fyinmh17\u002FDNL-Semantic-Segmentation)  | [Apache2.0](https:\u002F\u002Fgithub.com\u002Fyinmh17\u002FDNL-Semantic-Segmentation\u002Fblob\u002Fmaster\u002FLICENSE) |2020|\n\n### ISANet\n\nInterlaced Sparse Self-Attention for Semantic Segmentation\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_ff38df48ea91.jpg\" width=400> \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_b99758103347.png\" width=400>\n\n| Google Drive Link | Size | Output |Dataset|Original Project | License | year |\n| ------------- | ------------- | ------------- |------------- | ------------- | ------------- | ------------- |\n| [isanet_r50-d8_512x512_80k_ade20k](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F114ypGU9S1BOT2otl7P_gsmZbA3bCmz5K\u002Fview?usp=sharing) | 141.5 MB | MultiArray[512x512] |ADE20K| [openseg-group\u002Fopenseg.pytorch](https:\u002F\u002Fgithub.com\u002Fopenseg-group\u002Fopenseg.pytorch) | [MIT](https:\u002F\u002Fgithub.com\u002Fopenseg-group\u002Fopenseg.pytorch\u002Fblob\u002Fmaster\u002FLICENSE) |ArXiv'2019\u002FIJCV'2021|\n\n### FastFCN\n\nRethinking Dilated Convolution in the Backbone for Semantic Segmentation\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_83fc626bdda1.jpg\" width=400> \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_015b6b9e07c5.png\" width=400>\n\n| Google Drive Link | Size | Output |Dataset|Original Project | License | year |\n| ------------- | ------------- | ------------- |------------- | ------------- | ------------- | ------------- |\n| [fastfcn_r50-d32_jpu_aspp_512x512_80k_ade20k](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1-2CUR1M-a4xzUxdf5enU_9cUdxONmFbT\u002Fview?usp=sharing) | 326.2 MB | MultiArray[512x512] |ADE20K| [wuhuikai\u002FFastFCN](https:\u002F\u002Fgithub.com\u002Fwuhuikai\u002FFastFCN) | [MIT](https:\u002F\u002Fgithub.com\u002Fwuhuikai\u002FFastFCN\u002Fblob\u002Fmaster\u002FLICENSE) |ArXiv'2019|\n\n### GCNet\n\nNon-local Networks Meet Squeeze-Excitation Networks and Beyond\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_87a6440b6afd.jpg\" width=400> \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_aba9f67cace4.png\" width=400>\n\n| Google Drive Link | Size | Output |Dataset|Original Project | License | year |\n| ------------- | ------------- | ------------- |------------- | ------------- | ------------- | 
------------- |\n| [gcnet_r50-d8_512x512_20k_voc12aug](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1-DfjorbUDFXOVasSPoGk7GP1XC_OnNVT\u002Fview?usp=sharing) | 189 MB | MultiArray[512x512] |PascalVOC| [xvjiarui\u002FGCNet](https:\u002F\u002Fgithub.com\u002Fxvjiarui\u002FGCNet) | [Apache License 2.0](https:\u002F\u002Fgithub.com\u002Fxvjiarui\u002FGCNet\u002Fblob\u002Fmaster\u002FLICENSE) |ICCVW'2019\u002FTPAMI'2020|\n\n### DANet\n\nDual Attention Network for Scene Segmentation(CVPR2019)\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_c53ca04b5a12.jpg\" width=400> \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_cc0495d33f9e.png\" width=400>\n\n| Google Drive Link | Size | Output |Dataset|Original Project | License | year |\n| ------------- | ------------- | ------------- |------------- | ------------- | ------------- | ------------- |\n| [danet_r50-d8_512x1024_40k_cityscapes](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1A45r_725V7edPTSrjA4T-T03rPD6Sj2z\u002Fview?usp=sharing) | 189.7 MB | MultiArray[512x1024] |CityScapes| [junfu1115\u002FDANet](https:\u002F\u002Fgithub.com\u002Fjunfu1115\u002FDANet\u002F) | [MIT](https:\u002F\u002Fgithub.com\u002Fjunfu1115\u002FDANet\u002Fblob\u002Fmaster\u002FLICENSE) |CVPR2019|\n\n### Semantic-FPN\n\nPanoptic Feature Pyramid Networks\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_41f33efe476f.jpg\" width=400> \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_42ae8a2d4720.png\" width=400>\n\n| Google Drive Link | Size | Output |Dataset|Original Project | License | year |\n| ------------- | ------------- | ------------- |------------- | ------------- | ------------- | ------------- |\n| [fpn_r50_512x1024_80k_cityscapes](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1_IVhCnJ--54P7qVGLo8-ks_LRGXJQXht\u002Fview?usp=sharing) | 108.6 MB | MultiArray[512x1024] |CityScapes| [facebookresearch\u002Fdetectron2](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fdetectron2) | [Apache License 2.0](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fdetectron2\u002Fblob\u002Fmain\u002FLICENSE) |2019|\n\n### cloths_segmentation\n\nCode for binary segmentation of various cloths.\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_9eb949b2e400.jpg\" width=400> \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_e69abf79b827.jpg\" width=400>\n\n| Google Drive Link | Size | Output |Dataset|Original Project | License | year |\n| ------------- | ------------- | ------------- |------------- | ------------- | ------------- | ------------- |\n| [clothSegmentation](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1-2AydEgkth6UTD5bu13R0fJYoqZZMG3e\u002Fview?usp=sharing) | 50.1 MB | Image(GrayScale 640x960) |[fashion-2019-FGVC6](https:\u002F\u002Fwww.kaggle.com\u002Fc\u002Fimaterialist-fashion-2019-FGVC6)| [facebookresearch\u002Fdetectron2](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fdetectron2) | [MIT](https:\u002F\u002Fgithub.com\u002Fternaus\u002Fcloths_segmentation\u002Fblob\u002Fmain\u002FLICENSE) |2020|\n\n### easyportrait\n\nEasyPortrait - Face Parsing and Portrait Segmentation Dataset.\n\n\u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_a2863751b649.png\" width=400> \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_4e929fed0f07.png\" width=400>\n\n| Google Drive Link | Size | Output |Original Project | License | year | Swift sample |Conversion Script |\n| ------------- | ------------- | ------------- |------------- | ------------- | ------------- |------------- |------------- |\n| [easyportrait-segformer512-fp](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F13BUhNpQHodAgcj6eJaPbzuSUaFn3JuU-?usp=sharing) | 7.6 MB | Image(GrayScale 512x512) * 9 |[hukenovs\u002Feasyportrait](https:\u002F\u002Fgithub.com\u002Fhukenovs\u002Feasyportrait) | [Creative Commons](https:\u002F\u002Fgithub.com\u002Fhukenovs\u002Feasyportrait\u002Ftree\u002Fmain\u002Flicense) |2023|[easyportrait-coreml](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002Feasyportrait-coreml)|[![Open In Colab](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F11a3XWFA8fa8V0a2zgWFqOMUaZgF4O1qt?usp=sharing)|\n\n### MobileSAM\n\nFaster Segment Anything: Towards Lightweight SAM for Mobile Applications. MobileSAM replaces the heavy ViT-H image encoder with a lightweight ViT-Tiny encoder via decoupled knowledge distillation, making it ~60x smaller and ~40x faster than the original SAM.\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_1ecbf02c0520.png\" width=200>\n| Download Link | Size | Output | Original Project | License | Year | Sample Project |\n| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |\n| [MobileSAM.zip](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FSamKit\u002Freleases\u002Fdownload\u002Fv1.0.0\u002FMobileSAM.zip) | 23 MB (Encoder 13 MB + Decoder 9.8 MB) | Segmentation Mask | [ChaoningZhang\u002FMobileSAM](https:\u002F\u002Fgithub.com\u002FChaoningZhang\u002FMobileSAM) | [Apache 2.0](https:\u002F\u002Fgithub.com\u002FChaoningZhang\u002FMobileSAM\u002Fblob\u002Fmaster\u002FLICENSE) | 2023 | [SamKit](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FSamKit) |\n\n### SAM2-Tiny\n\nSAM 2: Segment Anything in Images and Videos. SAM 2 extends promptable segmentation from images to videos using a streaming architecture with memory. The Tiny variant uses a Hiera-T backbone for efficient on-device inference.\n\n| Download Link | Size | Output | Original Project | License | Year | Sample Project |\n| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |\n| [SAM2Tiny.zip](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FSamKit\u002Freleases\u002Fdownload\u002Fv1.0.0\u002FSAM2Tiny.zip) | 76 MB (ImageEncoder 64 MB + PromptEncoder 2 MB + MaskDecoder 9.8 MB) | Segmentation Mask | [facebookresearch\u002Fsam2](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fsam2) | [Apache 2.0](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fsam2\u002Fblob\u002Fmain\u002FLICENSE) | 2024 | [SamKit](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FSamKit) |\n\n# Video Matting\n\n### MatAnyone\n\n[pq-yang\u002FMatAnyone](https:\u002F\u002Fgithub.com\u002Fpq-yang\u002FMatAnyone) (CVPR 2025) — temporally consistent video matting with object-level memory propagation. 
Given a first-frame mask the network tracks and refines an alpha matte across the whole clip, holding sharp edges (hair, semitransparent regions) much better than per-frame matting baselines. Built on the Cutie video object segmentation backbone with a dedicated mask decoder for matting.\n\nThe CoreML port splits the network into 5 stateless modules so the per-frame memory state machine can live in Swift while CoreML handles the heavy compute. End-to-end alpha matte parity vs the official PyTorch reference: MAE \u003C 2e-4, correlation 0.9999+ across 18 frames including 3 memory cycles.\n\nThe sample app uses Vision's `VNGeneratePersonSegmentationRequest` to bootstrap the first-frame mask automatically — pick a video, tap \"Remove BG\", and it composites the foreground over the chosen background colour.\n\n| Download Link | Size | Input | Output | Original Project | License | Year | Sample Project | Conversion Script |\n| ------------- | ---- | ----- | ------ | ---------------- | ------- | ---- | -------------- | ----------------- |\n| MatAnyone (5 mlpackages, ~111 MB FP16 total) | 111 MB | image [1,3,432,768] (per-frame state in Swift) | alpha matte [1,1,432,768] | [pq-yang\u002FMatAnyone](https:\u002F\u002Fgithub.com\u002Fpq-yang\u002FMatAnyone) | [NTU S-Lab 1.0](https:\u002F\u002Fgithub.com\u002Fpq-yang\u002FMatAnyone\u002Fblob\u002Fmain\u002FLICENSE) | 2025 | [MatAnyoneDemo](sample_apps\u002FMatAnyoneDemo) | [convert_matanyone.py](conversion_scripts\u002Fconvert_matanyone.py) |\n\nSee [`sample_apps\u002FMatAnyoneDemo\u002FREADME.md`](sample_apps\u002FMatAnyoneDemo\u002FREADME.md) for the per-frame state machine, the 5-module split, and conversion details.\n\n# Super Resolution\n\n### [Real ESRGAN](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1cpm-x12Ih7Cqd_kOjfTvtt4ipGS3BpCx\u002Fview?usp=sharing)\n\u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_a80856541913.png\"> \u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_981c37ba515f.jpg\"> \n\n| Google Drive Link | Size | Output |Original Project | License | year |\n| ------------- | ------------- | ------------- | ------------- | ------------- |------------- |\n| [Real ESRGAN4x](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F16JEWh48fgQc8az7avROePOd-PYda0Yi2\u002Fview?usp=sharing) | 66.9 MB | Image(RGB 2048x2048)| [xinntao\u002FReal-ESRGAN](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN)  | [BSD 3-Clause License](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN\u002Fblob\u002Fmaster\u002FLICENSE) |2021|\n| [Real ESRGAN Anime4x](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1qXdLx46Lpqya7Txc5Wvgkd2Dqlnqm3Qm\u002Fview?usp=sharing) | 66.9 MB | Image(RGB 2048x2048)| [xinntao\u002FReal-ESRGAN](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN)  | [BSD 3-Clause License](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN\u002Fblob\u002Fmaster\u002FLICENSE) |2021|\n\n### [GFPGAN](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1-3fF4aPnh8ygUOmKItIrZ318xI9JGmQx\u002Fview?usp=sharing)\n\nTowards Real-World Blind Face Restoration with Generative Facial Prior\n\n\u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_13e4a42fbf51.png\"> \u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_23b02eb01ae6.png\"> \n\n| Google 
Drive Link | Size | Output |Original Project | License |year |\n| ------------- | ------------- | ------------- | ------------- | ------------- |------------- |\n| [GFPGAN](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1-3fF4aPnh8ygUOmKItIrZ318xI9JGmQx\u002Fview?usp=sharing) | 337.4 MB | Image(RGB 512x512)| [TencentARC\u002FGFPGAN](https:\u002F\u002Fgithub.com\u002FTencentARC\u002FGFPGAN)  | [Apache2.0](https:\u002F\u002Fgithub.com\u002FTencentARC\u002FGFPGAN\u002Fblob\u002Fmaster\u002FLICENSE) |2021|\n\n### [BSRGAN](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1-3K89vJZ5OUAh4xdSAifgnL52jbl2fVf\u002Fview?usp=sharing)\n\u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_e2df08237f0c.jpg\"> \u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_06a5ec4d7e2d.jpg\"> \n\n| Google Drive Link | Size | Output |Original Project | License |year |\n| ------------- | ------------- | ------------- | ------------- | ------------- |------------- |\n| [BSRGAN](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1-3K89vJZ5OUAh4xdSAifgnL52jbl2fVf\u002Fview?usp=sharing) | 66.9 MB | Image(RGB 2048x2048)| [cszn\u002FBSRGAN](https:\u002F\u002Fgithub.com\u002Fcszn\u002FBSRGAN)  |  |2021|\n\n### [A-ESRGAN](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1-0rKVQtFXNWfIBIpvyemjuO3O00GZBeb\u002Fview?usp=sharing)\n\u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_7e57c97986de.png\"> \u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_3733426ee7e7.png\"> \n\n| Google Drive Link | Size | Output |Original Project | License |year |Conversion Script|\n| ------------- | ------------- | ------------- | ------------- | ------------- |------------- |------------- |\n| [A-ESRGAN](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1-0rKVQtFXNWfIBIpvyemjuO3O00GZBeb\u002Fview?usp=sharing) | 63.8 MB | Image(RGB 1024x1024)| [aesrgan\u002FA-ESRGANN](https:\u002F\u002Fgithub.com\u002Faesrgan\u002FA-ESRGAN)  | [BSD 3-Clause License](https:\u002F\u002Fgithub.com\u002Faesrgan\u002FA-ESRGAN\u002Fblob\u002Fmain\u002FLICENSE) |2021|[![Open In Colab](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1UxtSXnVYOXEfTVdIeoP7HQEjsyVbqOKa?usp=sharing)|\n\n### [Beby-GAN](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1bJ7_NgR2KXI46JiFk5hH_6IdCHMyhN05\u002Fview?usp=sharing)\n\nBest-Buddy GANs for Highly Detailed Image Super-Resolution\n\n\u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_fc5f6c358c93.jpg\"> \u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_3add270a5491.jpg\"> \n\n| Google Drive Link | Size | Output |Original Project | License |year |\n| ------------- | ------------- | ------------- | ------------- | ------------- |------------- |\n| [Beby-GAN](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1bJ7_NgR2KXI46JiFk5hH_6IdCHMyhN05\u002Fview?usp=sharing) | 66.9 MB | Image(RGB 2048x2048)| [dvlab-research\u002FSimple-SR](https:\u002F\u002Fgithub.com\u002Fdvlab-research\u002FSimple-SR)  | [MIT](https:\u002F\u002Fgithub.com\u002Fdvlab-research\u002FSimple-SR\u002Fblob\u002Fmaster\u002FLICENSE) |2021|\n\n### 
[RRDN](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1-M30vR0xMuYDn2p5O4KZrUnUXy4SNThF\u002Fview?usp=sharing)\n\nThe Residual in Residual Dense Network for image super-scaling.\n\n\u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_7e8d00ce87b1.png\"> \u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_83771c7d6718.png\">\n\n| Google Drive Link | Size | Output |Original Project | License |year |\n| ------------- | ------------- | ------------- | ------------- | ------------- |------------- |\n| [RRDN](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1-M30vR0xMuYDn2p5O4KZrUnUXy4SNThF\u002Fview?usp=sharing) | 16.8 MB | Image(RGB 2048x2048)| [idealo\u002Fimage-super-resolution](https:\u002F\u002Fgithub.com\u002Fidealo\u002Fimage-super-resolution)  | [Apache2.0](https:\u002F\u002Fgithub.com\u002Fidealo\u002Fimage-super-resolution\u002Fblob\u002Fmaster\u002FLICENSE) |2018|\n\n\n### [Fast-SRGAN](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1gYXbhcSUm5rhcCAmwLruonAhu8jvyDL8\u002Fview?usp=sharing)\n\nFast-SRGAN.\n\n\u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_4c296bf372e2.png\"> \u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_6d7f12c28fbf.png\">\n\n| Google Drive Link | Size | Output |Original Project | License |year |\n| ------------- | ------------- | ------------- | ------------- | ------------- |------------- |\n| [Fast-SRGAN](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1gYXbhcSUm5rhcCAmwLruonAhu8jvyDL8\u002Fview?usp=sharing) | 628 KB | Image(RGB 1024x1024)| [HasnainRaz\u002FFast-SRGAN](https:\u002F\u002Fgithub.com\u002FHasnainRaz\u002FFast-SRGAN)  | [MIT](https:\u002F\u002Fgithub.com\u002FHasnainRaz\u002FFast-SRGAN\u002Fblob\u002Fmaster\u002FLICENSE) |2019|\n\n### [ESRGAN](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1fkRbh_gckuFlgr357OIdOrEJK4T_2Xkz\u002Fview?usp=sharing)\n\nEnhanced-SRGAN.\n\n\u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_d3afdd41d8c1.jpg\"> \u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_0913b95fb8f8.jpg\">\n\n| Google Drive Link | Size | Output |Original Project | License |year |\n| ------------- | ------------- | ------------- | ------------- | ------------- |------------- |\n| [ESRGAN](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1fkRbh_gckuFlgr357OIdOrEJK4T_2Xkz\u002Fview?usp=sharing) | 66.9 MB | Image(RGB 2048x2048)| [xinntao\u002FESRGAN](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FESRGAN)  | [Apache 2.0](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FESRGAN\u002Fblob\u002Fmaster\u002FLICENSE) |2018|\n\n### [UltraSharp](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1-Q1SdS8iHWTfTs7FE39pUTEubPks30Ca?usp=drive_link)\n\nPretrained: 4xESRGAN\n\n\u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_0d8247014bde.png\"> \u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_28656aa6ff9a.png\">\n\n| Google Drive Link | Size | Output |Original Project | License |year |\n| ------------- | ------------- | ------------- | ------------- | ------------- |------------- |\n| 
[UltraSharp](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1-Q1SdS8iHWTfTs7FE39pUTEubPks30Ca?usp=drive_link) | 34 MB | Image(RGB 1024x1024)| [Kim2019\u002F](https:\u002F\u002Fopenmodeldb.info\u002Fmodels\u002F4x-UltraSharp)  | [CC-BY-NC-SA-4.0](https:\u002F\u002Fcreativecommons.org\u002Flicenses\u002Fby-nc-sa\u002F4.0\u002Fdeed.ja) |2021|\n\n### [SRGAN](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1-076W2o0wCtoODptikX1eOnlFBx2s3qK\u002Fview?usp=sharing)\n\nPhoto-Realistic Single Image Super-Resolution Using a Generative Adversarial Network.\n\n\u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_418f37159fa9.png\"> \u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_ef0a1344b01d.png\">\n\n| Google Drive Link | Size | Output |Original Project | License |year |\n| ------------- | ------------- | ------------- | ------------- | ------------- |------------- |\n| [SRGAN](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1-076W2o0wCtoODptikX1eOnlFBx2s3qK\u002Fview?usp=sharing) | 6.1 MB | Image(RGB 2048x2048)| [dongheehand\u002FSRGAN-PyTorch](https:\u002F\u002Fgithub.com\u002Fdongheehand\u002FSRGAN-PyTorch)  |  |2017|\n\n### [SRResNet](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1-2kYZgF_Z6vntrRsHmRiwyHJg5TC1PSW\u002Fview?usp=sharing)\n\nPhoto-Realistic Single Image Super-Resolution Using a Generative Adversarial Network.\n\n\u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_b40dff30c0c7.png\"> \u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_be501370880c.jpg\">\n\n| Google Drive Link | Size | Output |Original Project | License |year |\n| ------------- | ------------- | ------------- | ------------- | ------------- |------------- |\n| [SRResNet](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1-2kYZgF_Z6vntrRsHmRiwyHJg5TC1PSW\u002Fview?usp=sharing) | 6.1 MB | Image(RGB 2048x2048)| [dongheehand\u002FSRGAN-PyTorch](https:\u002F\u002Fgithub.com\u002Fdongheehand\u002FSRGAN-PyTorch)  |  |2017|\n\n### [LESRCNN](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1-0zgxURZwqX0mAAVy69K-owE7QP-7NfJ\u002Fview?usp=sharing)\n\nLightweight Image Super-Resolution with Enhanced CNN.\n\n\u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_64773691950b.jpg\"> \u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_45feeefc2f14.jpg\">\n\n| Google Drive Link | Size | Output |Original Project | License |year | Conversion Script |\n| ------------- | ------------- | ------------- | ------------- | ------------- |------------- |------------- |\n| [LESRCNN](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1-0zgxURZwqX0mAAVy69K-owE7QP-7NfJ\u002Fview?usp=sharing) | 4.3 MB | Image(RGB 512x512)| [hellloxiaotian\u002FLESRCNN](https:\u002F\u002Fgithub.com\u002Fhellloxiaotian\u002FLESRCNN)  |  |2020|[![Open In Colab](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1Q6piAJvXSmb-DcdFipcRUEYuHi9fnTm7?usp=sharing)|\n\n### [MMRealSR](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1-HwMLvOy_hHycHNhojob6uT8t6tRyWqb\u002Fview?usp=sharing)\n\nMetric Learning based Interactive Modulation for 
Real-World Super-Resolution\n\n\u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_2212aaa1dafc.png\"> \u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_645ffc1647ee.png\">\n\n| Google Drive Link | Size | Output |Original Project | License |year | Conversion Script |\n| ------------- | ------------- | ------------- | ------------- | ------------- |------------- |------------- |\n| [MMRealSRGAN](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1-HwMLvOy_hHycHNhojob6uT8t6tRyWqb\u002Fview?usp=sharing) | 104.6 MB | Image(RGB 1024x1024)| [TencentARC\u002FMM-RealSR](https:\u002F\u002Fgithub.com\u002FTencentARC\u002FMM-RealSR)  | [BSD 3-Clause](https:\u002F\u002Fgithub.com\u002FTencentARC\u002FMM-RealSR\u002Fblob\u002Fmain\u002FLICENSE) |2022|[![Open In Colab](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1zhUhQhdtP02N2pFIxsO5lin7tDOExZCo?usp=sharing)|\n| [MMRealSRNet](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1-77P8AtHFh5kca2kYZ6X7GaUueoa3el_\u002Fview?usp=sharing) | 104.6 MB | Image(RGB 1024x1024)| [TencentARC\u002FMM-RealSR](https:\u002F\u002Fgithub.com\u002FTencentARC\u002FMM-RealSR)  | [BSD 3-Clause](https:\u002F\u002Fgithub.com\u002FTencentARC\u002FMM-RealSR\u002Fblob\u002Fmain\u002FLICENSE) |2022|[![Open In Colab](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1zhUhQhdtP02N2pFIxsO5lin7tDOExZCo?usp=sharing)|\n\n### [DASR](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F10J2ehHewK2ppS5ToDqmtJ2Ei5k8vcdL0?usp=sharing)\n\nPytorch implementation of \"Unsupervised Degradation Representation Learning for Blind Super-Resolution\", CVPR 2021\n\n\u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_10a34b62819b.png\"> \u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_8e7db7552e9e.png\">\n\n| Google Drive Link | Size | Output |Original Project | License |year|\n| ------------- | ------------- | ------------- | ------------- | ------------- |------------- |\n| [DASR](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F10J2ehHewK2ppS5ToDqmtJ2Ei5k8vcdL0?usp=sharing) | 12.1 MB | Image(RGB 1024x1024)| [The-Learning-And-Vision-Atelier-LAVA\u002FDASR](https:\u002F\u002Fgithub.com\u002FThe-Learning-And-Vision-Atelier-LAVA\u002FDASR)  | [MIT](https:\u002F\u002Fgithub.com\u002FThe-Learning-And-Vision-Atelier-LAVA\u002FDASR\u002Fblob\u002Fmain\u002FLICENSE) |2022|\n\n\n### SinSR\n\n[wyf0912\u002FSinSR](https:\u002F\u002Fgithub.com\u002Fwyf0912\u002FSinSR) — single-step diffusion-based super-resolution (CVPR 2024, ~113M params). Distilled from ResShift for one-step 4x upscaling. 
Uses a Swin Transformer UNet with VQ-VAE latent space.\n\n\u003Cimg width=\"512\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_1304f9efa5cf.png\">\n\n*Left: bicubic 4x upscale, Right: SinSR single-step diffusion SR (128x128 → 512x512)*\n\n3 CoreML models: VQ-VAE encoder, Swin-UNet denoiser (single step), and VQ-VAE decoder with vector quantization.\n\n| Download Link | Size | Input | Output | Original Project | License | Year | Sample Project | Conversion Script |\n| ------------- | ---- | ----- | ------ | ---------------- | ------- | ---- | -------------- | ----------------- |\n| [SinSR_Encoder.mlpackage.zip](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-Models\u002Freleases\u002Fdownload\u002Fsinsr-v1\u002FSinSR_Encoder.mlpackage.zip) | 39 MB | image [1,3,1024,1024] | latent [1,3,256,256] | [wyf0912\u002FSinSR](https:\u002F\u002Fgithub.com\u002Fwyf0912\u002FSinSR) | [S-Lab](https:\u002F\u002Fgithub.com\u002Fwyf0912\u002FSinSR\u002Fblob\u002Fmain\u002FLICENSE) | 2024 | [SinSRDemo](sample_apps\u002FSinSRDemo) | [convert_sinsr.py](conversion_scripts\u002Fconvert_sinsr.py) |\n| [SinSR_Denoiser.mlpackage.zip](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-Models\u002Freleases\u002Fdownload\u002Fsinsr-v1\u002FSinSR_Denoiser.mlpackage.zip) | 420 MB | input [1,6,256,256] | predicted_latent [1,3,256,256] | | | | | |\n| [SinSR_Decoder.mlpackage.zip](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-Models\u002Freleases\u002Fdownload\u002Fsinsr-v1\u002FSinSR_Decoder.mlpackage.zip) | 58 MB | latent [1,3,256,256] | image [1,3,1024,1024] | | | | | |\n\nSee [`sample_apps\u002FSinSRDemo\u002FREADME.md`](sample_apps\u002FSinSRDemo\u002FREADME.md) for the inference pipeline and conversion details.\n\n\n# Low Light Enhancement\n\n### StableLLVE\n\nLearning Temporal Consistency for Low Light Video Enhancement from Single Images.\n\n\u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_b6a7c8c7bfce.jpg\"> \u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_d945b4ce0ea2.jpg\"> \n\n| Google Drive Link | Size | Output |Original Project | License |Year|\n| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |\n| [StableLLVE](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1-9xry7XeCJYsZadxcfTscjGi_Sna5NhM\u002Fview?usp=sharing) | 17.3 MB | Image(RGB 512x512)| [zkawfanx\u002FStableLLVE](https:\u002F\u002Fgithub.com\u002Fzkawfanx\u002FStableLLVE)  | [MIT](https:\u002F\u002Fgithub.com\u002Fzkawfanx\u002FStableLLVE\u002Fblob\u002Fmain\u002FLICENSE) |2021|\n\n### Zero-DCE\n\nZero-Reference Deep Curve Estimation for Low-Light Image Enhancement\n\n\u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_6362ab81ca31.jpg\"> \u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_ef7567af1243.jpg\"> \n\n| Google Drive Link | Size | Output |Original Project | License |Year|Conversion Script|\n| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |\n| [Zero-DCE](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1-0lxlBNFm8E_y9ImhS2wxq0p1ZJlXyoA\u002Fview?usp=sharing) | 320KB | Image(RGB 512x512)| [Li-Chongyi\u002FZero-DCE](https:\u002F\u002Fgithub.com\u002FLi-Chongyi\u002FZero-DCE)  | [See 
Repo](https:\u002F\u002Fgithub.com\u002FLi-Chongyi\u002FZero-DCE) |2021|[![Open In Colab](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1sh3O-4EvYv49Rlm59beH6koHe0sYxc2r?usp=sharing)|\n\n### Retinexformer\n\nRetinexformer: One-stage Retinex-based Transformer for Low-light Image Enhancement\n\n\u003Cimg width=\"256\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_a13d4087f9bb.png\"> \u003Cimg width=\"256\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_c3cc3991aa15.png\"> \n\n| Google Drive Link | Size | Output |Original Project | License |Year|Conversion Script|\n| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |\n| [ZRetinexformer FiveK](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1ea6vBuLG-z4TAK4iU6vrgABAAlHuDdhy?usp=drive_link) | 3.4MB | Image(RGB 512x512)| [caiyuanhao1998\u002FRetinexformer](https:\u002F\u002Fgithub.com\u002Fcaiyuanhao1998\u002FRetinexformer)  | [MIT](https:\u002F\u002Fgithub.com\u002Fcaiyuanhao1998\u002FRetinexformer?tab=MIT-1-ov-file#readme) |2023|[![Open In Colab](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F10PtPI4V72Pp6PQZcrah-vClGzjKLaGGK?usp=sharing)|\n| [ZRetinexformer NTIRE](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F14piyZVwzu4Abpfgwh2HIKoubeeE-3qoq?usp=drive_link) | 3.4MB | Image(RGB 512x512)| [caiyuanhao1998\u002FRetinexformer](https:\u002F\u002Fgithub.com\u002Fcaiyuanhao1998\u002FRetinexformer)  | [MIT](https:\u002F\u002Fgithub.com\u002Fcaiyuanhao1998\u002FRetinexformer?tab=MIT-1-ov-file#readme) |2023|[![Open In Colab](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F10PtPI4V72Pp6PQZcrah-vClGzjKLaGGK?usp=sharing)|\n\n# Image Restoration\n\n### MPRNet\n\nMulti-Stage Progressive Image Restoration.\n\nDebluring\n\n\u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_8150d8db8d47.jpg\"> \u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_c20877dbef5b.png\"> \n\nDenoising\n\n\u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_50b25698d3fe.png\"> \u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_851f370d5b59.png\"> \n\nDeraining\n\n\u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_135fdda71559.jpg\"> \u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_0f656888269b.jpg\"> \n\n| Google Drive Link | Size | Output |Original Project | License |Year|\n| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |\n| [MPRNetDebluring](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1--5L6BxxbyYGY9ey5WCIrl7g1yYBN27U\u002Fview?usp=sharing) | 137.1 MB | Image(RGB 512x512)| [swz30\u002FMPRNet](https:\u002F\u002Fgithub.com\u002Fswz30\u002FMPRNet)  | [MIT](https:\u002F\u002Fgithub.com\u002Fswz30\u002FMPRNet\u002Fblob\u002Fmain\u002FLICENSE.md) |2021|\n| 
[MPRNetDeNoising](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1-04xou-UgoflZb7MqTBycCpuLWKUAj0i\u002Fview?usp=sharing) | 108 MB | Image(RGB 512x512)| [swz30\u002FMPRNet](https:\u002F\u002Fgithub.com\u002Fswz30\u002FMPRNet)  | [MIT](https:\u002F\u002Fgithub.com\u002Fswz30\u002FMPRNet\u002Fblob\u002Fmain\u002FLICENSE.md) |2021|\n| [MPRNetDeraining](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1tGvjj49yaDym24vGdGqr1VKOtGd7ALKB\u002Fview?usp=sharing) | 24.5 MB | Image(RGB 512x512)| [swz30\u002FMPRNet](https:\u002F\u002Fgithub.com\u002Fswz30\u002FMPRNet)  | [MIT](https:\u002F\u002Fgithub.com\u002Fswz30\u002FMPRNet\u002Fblob\u002Fmain\u002FLICENSE.md) |2021|\n\n\n### MIRNetv2\n\nLearning Enriched Features for Fast Image Restoration and Enhancement.\n\nDenoising\n\n\u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_bcd61a753a56.png\"> \u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_ce91cb656dfb.png\"> \n\nSuper Resolution\n\n\u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_20cca9f617d9.jpg\"> \u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_2bb5e7818656.jpg\"> \n\nContrast Enhancement\n\n\u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_c32da7743fb2.jpg\"> \u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_842619f511e4.jpg\"> \n\nLow Light Enhancement\n\n\u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_107021d8aa0c.jpg\"> \u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_6f131ee061f5.jpg\"> \n\n| Google Drive Link | Size | Output |Original Project | License |Year|Conversion Script|\n| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |\n| [MIRNetv2Denoising](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1-HY2AhQV84LUZMadsbIi4TGBhEntAOaF\u002Fview?usp=sharing) | 42.5 MB | Image(RGB 512x512)| [swz30\u002FMIRNetv2](https:\u002F\u002Fgithub.com\u002Fswz30\u002FMIRNetv2)  | [ACADEMIC PUBLIC LICENSE](https:\u002F\u002Fgithub.com\u002Fswz30\u002FMIRNetv2\u002Fblob\u002Fmain\u002FLICENSE.md) |2022|[![Open In Colab](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1lSWCn0et08hdS3sgKc40c7VXUvKcqCSi?usp=sharing)|\n| [MIRNetv2SuperResolution](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1-BLfJj8xK_bw-GsGLfRR9uMvuA2VOqsh\u002Fview?usp=sharing) | 42.5 MB | Image(RGB 512x512)| [swz30\u002FMIRNetv2](https:\u002F\u002Fgithub.com\u002Fswz30\u002FMIRNetv2)  | [ACADEMIC PUBLIC LICENSE](https:\u002F\u002Fgithub.com\u002Fswz30\u002FMIRNetv2\u002Fblob\u002Fmain\u002FLICENSE.md) |2022|[![Open In Colab](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1lSWCn0et08hdS3sgKc40c7VXUvKcqCSi?usp=sharing)|\n| [MIRNetv2ContrastEnhancement](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1--q9Decpy1ZZbSifiE26SkpXstoadpM8\u002Fview?usp=sharing) | 42.5 MB | Image(RGB 512x512)| 
[swz30\u002FMIRNetv2](https:\u002F\u002Fgithub.com\u002Fswz30\u002FMIRNetv2)  | [ACADEMIC PUBLIC LICENSE](https:\u002F\u002Fgithub.com\u002Fswz30\u002FMIRNetv2\u002Fblob\u002Fmain\u002FLICENSE.md) |2022|[![Open In Colab](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1lSWCn0et08hdS3sgKc40c7VXUvKcqCSi?usp=sharing)|\n| [MIRNetv2LowLightEnhancement](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1Yh3FCogRfQ8k7Hh_UIZAnGwwhXHX6k6P\u002Fview?usp=sharing) | 42.5 MB | Image(RGB 512x512)| [swz30\u002FMIRNetv2](https:\u002F\u002Fgithub.com\u002Fswz30\u002FMIRNetv2)  | [ACADEMIC PUBLIC LICENSE](https:\u002F\u002Fgithub.com\u002Fswz30\u002FMIRNetv2\u002Fblob\u002Fmain\u002FLICENSE.md) |2022|[![Open In Colab](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1lSWCn0et08hdS3sgKc40c7VXUvKcqCSi?usp=sharing)|\n\n# Image Generation\n\n### [MobileStyleGAN](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1rUV6AXwp8JhPPmkog-0r0AUGzUvN9DmW?usp=sharing)\n\u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_0a68ebb38c0b.jpg\"> \u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_e0113b9b8c24.jpg\"> \n\n| Google Drive Link | Size | Output | Original Project | License | Sample Project |\n| ------------- | ------------- | ------------- | ------------- |  ------------- |  ------------- | \n| [MobileStyleGAN](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1rUV6AXwp8JhPPmkog-0r0AUGzUvN9DmW?usp=sharing) | 38.6MB  | Image(Color 1024 × 1024)| [bes-dev\u002FMobileStyleGAN.pytorch](https:\u002F\u002Fgithub.com\u002Fbes-dev\u002FMobileStyleGAN.pytorch)  | [Nvidia Source Code License-NC](https:\u002F\u002Fgithub.com\u002Fbes-dev\u002FMobileStyleGAN.pytorch\u002Fblob\u002Fdevelop\u002FLICENSE-NVIDIA) | [CoreML-StyleGAN](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-StyleGAN) |\n\n\n### [DCGAN](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F132GrmmuETSLTml1zWyLUnIksclP-8vGw\u002Fview?usp=sharing)\n\u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_a5bcc6c05781.png\">\n\n| Google Drive Link | Size | Output | Original Project | \n| ------------- | ------------- | ------------- | ------------- | \n| [DCGAN](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F132GrmmuETSLTml1zWyLUnIksclP-8vGw\u002Fview?usp=sharing)　| 9.2MB | MultiArray | [TensorFlowCore](https:\u002F\u002Fwww.tensorflow.org\u002Ftutorials\u002Fgenerative\u002Fdcgan)|\n\n\n# Image2Image\n\n### [Anime2Sketch](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1-52NnZ1kajZI5Rk0tn3DegpU38la_jYk\u002Fview?usp=sharing)\n\u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_95f7c4de617c.jpeg\"> \u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_48e63132d6d4.jpeg\">\n\n| Google Drive Link | Size | Output | Original Project | License | Usage |\n| ------------- | ------------- | ------------- | ------------- | ------------- |  ------------- | \n| [Anime2Sketch](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1-52NnZ1kajZI5Rk0tn3DegpU38la_jYk\u002Fview?usp=sharing) | 217.7MB  | Image(Color 512 × 512)| 
[Mukosame\u002FAnime2Sketch](https:\u002F\u002Fgithub.com\u002FMukosame\u002FAnime2Sketch)  | [MIT](https:\u002F\u002Fgithub.com\u002FMukosame\u002FAnime2Sketch\u002Fblob\u002Fmaster\u002FLICENSE)| Drop an image to preview|\n\n\n### [AnimeGAN2Face_Paint_512_v2](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1phSgcAz3LNbk2v2RoSESmr7PFxTAHcxb\u002Fview?usp=sharing)\n\u003Cimg width=\"400\" src=\"https:\u002F\u002Fcamo.qiitausercontent.com\u002F74a02b6e0b80e52c2ae3af798c93eea9aa3e394d\u002F68747470733a2f2f71696974612d696d6167652d73746f72652e73332e61702d6e6f727468656173742d312e616d617a6f6e6177732e636f6d2f302f3233353235392f30313764616563342d333933312d643664662d303339322d6162313039303237313963642e706e67\"> \u003Cimg width=\"400\" src=\"https:\u002F\u002Fcamo.qiitausercontent.com\u002F311349da47136ff9ce61701d09ce59dc663c95bf\u002F68747470733a2f2f71696974612d696d6167652d73746f72652e73332e61702d6e6f727468656173742d312e616d617a6f6e6177732e636f6d2f302f3233353235392f66633337653936332d383533302d333731312d643163662d3335366266646666316665322e706e67\">\n\n| Google Drive Link | Size | Output | Original Project | Conversion Script |\n| ------------- | ------------- | ------------- | ------------- |  ------------- | \n| [AnimeGAN2Face_Paint_512_v2](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1phSgcAz3LNbk2v2RoSESmr7PFxTAHcxb\u002Fview?usp=sharing) | 8.6MB  | Image(Color 512 × 512)| [bryandlee\u002Fanimegan2-pytorch](https:\u002F\u002Fgithub.com\u002Fbryandlee\u002Fanimegan2-pytorch#additional-model-weights)  |[![Open In Colab](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1WGAxMaikjNIfqdGRndEOmNyeVf33nGNh?usp=sharing) |\n\n\n### [Photo2Cartoon](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1xFWZ9Rf1o_LtwBpmSw2zSwPGk2FY6Wya\u002Fview?usp=sharing)\n\u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_2a14725a3a9c.png\"> \u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_521100713183.png\">\n\n| Google Drive Link | Size | Output | Original Project | License | Note |\n| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | \n| [Photo2Cartoon](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1xFWZ9Rf1o_LtwBpmSw2zSwPGk2FY6Wya\u002Fview?usp=sharing) | 15.2 MB  | Image(Color 256 × 256)| [minivision-ai\u002Fphoto2cartoon](https:\u002F\u002Fgithub.com\u002Fminivision-ai\u002Fphoto2cartoon) | [MIT](https:\u002F\u002Fgithub.com\u002Fminivision-ai\u002Fphoto2cartoon\u002Fblob\u002Fmaster\u002FLICENSE) | The output is little bit different from the original model. It cause some operations were converted replaced　manually. 
|\n\n### [AnimeGANv2_Hayao](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1G53oZ1hiMcLJs1loN_fe_VmBVfegh9ha\u002Fview?usp=sharing)\n\u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_1046f0e8ab73.jpg\"> \u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_33a09eb6b4bf.png\">\n\n| Google Drive Link | Size | Output | Original Project | Sample |\n| ------------- | ------------- | ------------- | ------------- | ------------- |\n| [AnimeGANv2_Hayao](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1G53oZ1hiMcLJs1loN_fe_VmBVfegh9ha\u002Fview?usp=sharing)　| 8.7MB | Image(256 x 256) | [TachibanaYoshino\u002FAnimeGANv2](https:\u002F\u002Fgithub.com\u002FTachibanaYoshino\u002FAnimeGANv2)|[AnimeGANv2-iOS](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FAnimeGANv2-iOS)|\n\n\n### [AnimeGANv2_Paprika](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F10drMcmF67iREUK8NY8ekMHrsyVirs5XT\u002Fview?usp=sharing)\n\u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_4d94744317a1.jpg\"> \u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_f64c12cae7d3.png\">\n\n| Google Drive Link | Size | Output | Original Project | \n| ------------- | ------------- | ------------- | ------------- | \n| [AnimeGANv2_Paprika](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F10drMcmF67iREUK8NY8ekMHrsyVirs5XT\u002Fview?usp=sharing)　| 8.7MB | Image(256 x 256) | [TachibanaYoshino\u002FAnimeGANv2](https:\u002F\u002Fgithub.com\u002FTachibanaYoshino\u002FAnimeGANv2)|\n\n\n### [WarpGAN Caricature](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1HE3qvfjuXZMFelRcmmGsLzoO5dV8lnaQ\u002Fview?usp=sharing)\n\u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_e0113b9b8c24.jpg\"> \u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_3855045543a5.jpg\">\n\n| Google Drive Link | Size | Output | Original Project | \n| ------------- | ------------- | ------------- | ------------- | \n| [WarpGAN Caricature](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1HE3qvfjuXZMFelRcmmGsLzoO5dV8lnaQ\u002Fview?usp=sharing)　| 35.5MB | Image(256 x 256) | [seasonSH\u002FWarpGAN](https:\u002F\u002Fgithub.com\u002FseasonSH\u002FWarpGAN)|\n\n### [UGATIT_selfie2anime](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1o15OO0Kn0tq79fFkmBm3PES93IRQOxB-\u002Fview?usp=sharing)\n\n\u003Cimg width=\"400\" alt=\"スクリーンショット 2021-12-27 8 18 33\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_fcf8404c6599.png\"> \u003Cimg width=\"400\" alt=\"スクリーンショット 2021-12-27 8 28 11\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_84acf8060293.png\">\n\n| Google Drive Link | Size | Output | Original Project | \n| ------------- | ------------- | ------------- | ------------- | \n| [UGATIT_selfie2anime](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1o15OO0Kn0tq79fFkmBm3PES93IRQOxB-\u002Fview?usp=sharing) | 266.2MB(quantized) | Image(256x256) | [taki0112\u002FUGATIT](https:\u002F\u002Fgithub.com\u002Ftaki0112\u002FUGATIT)  |\n\n### CartoonGAN\n\n| Google Drive Link | Size | Output | Original Project | \n| ------------- | ------------- | ------------- | 
------------- | \n| [CartoonGAN_Shinkai](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1j9bvHFBX5yctSeaE8FEvUv-r-hEVvXwi\u002Fview?usp=sharing)　| 44.6MB | MultiArray | [mnicnc404\u002FCartoonGan-tensorflow](https:\u002F\u002Fgithub.com\u002Fmnicnc404\u002FCartoonGan-tensorflow)|\n| [CartoonGAN_Hayao](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1-2dTGge4fza-TTBI9actkg_xp91zYT-F\u002Fview?usp=sharing)　| 44.6MB | MultiArray | [mnicnc404\u002FCartoonGan-tensorflow](https:\u002F\u002Fgithub.com\u002Fmnicnc404\u002FCartoonGan-tensorflow)|\n| [CartoonGAN_Hosoda](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1-5VB1g7kRt0KMe6u37fi_t18l-Zn_wr1\u002Fview?usp=sharing)　| 44.6MB | MultiArray | [mnicnc404\u002FCartoonGan-tensorflow](https:\u002F\u002Fgithub.com\u002Fmnicnc404\u002FCartoonGan-tensorflow)|\n| [CartoonGAN_Paprika](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1-5x3TYugodcnGYiEEDitFqMQPVHsCDs_\u002Fview?usp=sharing)　| 44.6MB | MultiArray | [mnicnc404\u002FCartoonGan-tensorflow](https:\u002F\u002Fgithub.com\u002Fmnicnc404\u002FCartoonGan-tensorflow)|\n\n### [Fast-Neural-Style-Transfer](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1o15OO0Kn0tq79fFkmBm3PES93IRQOxB-\u002Fview?usp=sharing)\n\n\u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_35a6e65ddcad.jpg\"> \u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_6907c35f2126.jpg\">\n\u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_c75ceba3963c.jpg\"> \u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_8383a1c6e58a.jpg\">\n\u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_6a292cfefc31.jpg\"> \u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_fb618762f2d7.jpg\">\n\n| Google Drive Link | Size | Output |Original Project | License |Year|\n| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |\n| [fast-neural-style-transfer-cuphead](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1-LLQF8T6MrcpdiYZkdGZAizkj7c-lJ9e\u002Fview?usp=sharing) | 6.4MB | Image(RGB 960x640)| [eriklindernoren\u002FFast-Neural-Style-Transfer](https:\u002F\u002Fgithub.com\u002Feriklindernoren\u002FFast-Neural-Style-Transfer)  | [MIT](https:\u002F\u002Fgithub.com\u002Feriklindernoren\u002FFast-Neural-Style-Transfer\u002Fblob\u002Fmaster\u002FLICENSE) |2019|\n| [fast-neural-style-transfer-starry-night](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1-HLHIrV_WwZJsEkZ34nTfqnlIHIe04Vy\u002Fview?usp=sharing) |  6.4MB | Image(RGB 960x640)| [eriklindernoren\u002FFast-Neural-Style-Transfer](https:\u002F\u002Fgithub.com\u002Feriklindernoren\u002FFast-Neural-Style-Transfer)  | [MIT](https:\u002F\u002Fgithub.com\u002Feriklindernoren\u002FFast-Neural-Style-Transfer\u002Fblob\u002Fmaster\u002FLICENSE) |2019|\n| [fast-neural-style-transfer-mosaic](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1-GmnewjDz2Cs7-CfXPSFIgOruQvBbK2X\u002Fview?usp=sharing) |  6.4MB | Image(RGB 960x640)| [eriklindernoren\u002FFast-Neural-Style-Transfer](https:\u002F\u002Fgithub.com\u002Feriklindernoren\u002FFast-Neural-Style-Transfer)  | 
[MIT](https:\u002F\u002Fgithub.com\u002Feriklindernoren\u002FFast-Neural-Style-Transfer\u002Fblob\u002Fmaster\u002FLICENSE) |2019|\n\n### [White_box_Cartoonization](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1QGNJzEp0fo6oOryTos1dazEKaS34WzZC\u002Fview?usp=sharing)\n\nLearning to Cartoonize Using White-box Cartoon Representations\n\n\u003Cimg width=\"400\" img src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_14cb075ffbbf.jpg\"> \u003Cimg width=\"400\" img src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_279e77a85a51.jpg\">\n\n| Google Drive Link | Size | Output | Original Project | License |Year|\n| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | \n| [White_box_Cartoonization](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1QGNJzEp0fo6oOryTos1dazEKaS34WzZC\u002Fview?usp=sharing) | 5.9MB | Image(1536x1536) | [SystemErrorWang\u002FWhite-box-Cartoonization](https:\u002F\u002Fgithub.com\u002FSystemErrorWang\u002FWhite-box-Cartoonization)  |[creativecommons](https:\u002F\u002Fcreativecommons.org\u002Flicenses\u002Fby-nc-sa\u002F4.0\u002Flegalcode)|CVPR2020|\n\n### [FacialCartoonization](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1CJH4tuR3ArKvxrmAE_44lbsAwUzjtyXi\u002Fview?usp=sharing)\n\nWhite-box facial image cartoonizaiton\n\n\u003Cimg width=\"400\" img src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_0c484781767d.png\"> \u003Cimg width=\"400\" img src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_4ff722de8f5f.png\">\n\n| Google Drive Link | Size | Output | Original Project | License |Year|\n| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | \n| [FacialCartoonization](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1CJH4tuR3ArKvxrmAE_44lbsAwUzjtyXi\u002Fview?usp=sharing) | 8.4MB | Image(256x256) | [SystemErrorWang\u002FFacialCartoonization](https:\u002F\u002Fgithub.com\u002FSystemErrorWang\u002FFacialCartoonization)  |[creativecommons](https:\u002F\u002Fcreativecommons.org\u002Flicenses\u002Fby-nc-sa\u002F4.0\u002Flegalcode)|2020|\n\n# Inpainting\n\n### AOT-GAN-for-Inpainting\n\n\u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_4ceb1b45eb0f.gif\">\n\n| Google Drive Link | Size | Output | Original Project | License | Note | Sample Project |\n| ------------- | ------------- | ------------- | ------------- |------------- |------------- |------------- |\n|[AOT-GAN-for-Inpainting](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F16rF46DFcDPherlpgjuL60065xcP2N3nv\u002Fview?usp=share_link)|60.8MB| MLMultiArray(3,512,512) |[researchmm\u002FAOT-GAN-for-Inpainting](https:\u002F\u002Fgithub.com\u002Fresearchmm\u002FAOT-GAN-for-Inpainting)|[Apache2.0](https:\u002F\u002Fgithub.com\u002Fopen-mmlab\u002Fmmediting\u002Fblob\u002Fmaster\u002FLICENSE)|To use see sample.| [john-rocky\u002FInpainting-CoreML](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FInpainting-CoreML) |\n\n### [Lama](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1s_uICJQykFFxgVubpBNeLLDL0JsxgdCd?usp=sharing)\n\n\u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_f5aad0a6972d.png\">\n\n| Google Drive Link | Size | Input | Output | Original Project | License | Note | Sample 
Project | Conversion Script |\n| ------------- | ------------- | ------------- | ------------- |------------- |------------- |------------- |------------- |------------- |\n|[Lama](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1s_uICJQykFFxgVubpBNeLLDL0JsxgdCd?usp=sharing)|216.6MB| Image (Color 800 × 800), Image (GrayScale 800 × 800)| Image (Color 800 × 800) |[advimman\u002Flama](https:\u002F\u002Fgithub.com\u002Fadvimman\u002Flama)|[Apache2.0](https:\u002F\u002Fgithub.com\u002Fadvimman\u002Flama\u002Fblob\u002Fmain\u002FLICENSE)|To use see sample.| [john-rocky\u002Flama-cleaner-iOS](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002Flama-cleaner-iOS) | [mallman\u002FCoreMLaMa](https:\u002F\u002Fgithub.com\u002Fmallman\u002FCoreMLaMa)|\n\n# Monocular Depth Estimation\n\n### [MiDaS](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1agGnt5Cq5CGzoNDl9Nb-3u7pB5SrIbN4\u002Fview?usp=share_link)\nTowards Robust Monocular Depth Estimation: Mixing Datasets for Zero-shot Cross-dataset Transfer\n\n\u003Cimg width=\"400\" img src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_ab00f6ac7091.jpg\"> \u003Cimg width=\"400\" img src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_a37f46ea3208.jpeg\">\n\n| Google Drive Link | Size | Output | Original Project | License |Year|Conversion Script |\n| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | \n| [MiDaS_Small](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1agGnt5Cq5CGzoNDl9Nb-3u7pB5SrIbN4\u002Fview?usp=share_link) | 66.3MB | MultiArray(1x256x256) | [isl-org\u002FMiDaS](https:\u002F\u002Fgithub.com\u002Fisl-org\u002FMiDaS)  |[MIT](https:\u002F\u002Fgithub.com\u002Fisl-org\u002FMiDaS\u002Fblob\u002Fmaster\u002FLICENSE)|2022|[![Open In Colab](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F13cVDO6gYdQvbKimcfbgGOfuoQmrTbarU?usp=sharing) |\n\n# Stable Diffusion\n\n### Hyper-SD\n\n[ByteDance\u002FHyper-SD](https:\u002F\u002Fhuggingface.co\u002FByteDance\u002FHyper-SD) — single-step text-to-image distilled from SD1.5 via Trajectory Segmented Consistency Distillation. ByteDance reports user preference 2x over SD-Turbo at 1 step. Combined with Apple's ml-stable-diffusion (Split-Einsum attention, chunked UNet, 6-bit palettization), runs at acceptable speed and quality on iPhone 15+.\n\n\u003Cvideo src=\"https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002Fdd456c13-d778-4a84-8bb2-9dfd78de3070\" width=\"400\">\u003C\u002Fvideo>\n\n\u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_bd3c10d5fc39.png\">\n\n*1-step generations on iPhone, 512×512. Prompts: cat with sunglasses, cyberpunk city, japanese garden, astronaut on horse.*\n\n4 CoreML models (~947 MB total): CLIP text encoder + Swin-style chunked UNet (6-bit palettized) + VAE decoder. 
Uses TCD scheduler for single-step inference.\n\n| Download Link | Size | Input | Output | Original Project | License | Year | Sample Project | Conversion Script |\n| ------------- | ---- | ----- | ------ | ---------------- | ------- | ---- | -------------- | ----------------- |\n| [HyperSDTextEncoder.mlpackage.zip](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-Models\u002Freleases\u002Fdownload\u002Fhypersd-v1\u002FHyperSDTextEncoder.mlpackage.zip) | 235 MB | input_ids [1,77] | encoder_hidden_states [1,77,768] | [ByteDance\u002FHyper-SD](https:\u002F\u002Fhuggingface.co\u002FByteDance\u002FHyper-SD) | [OpenRAIL++](https:\u002F\u002Fhuggingface.co\u002FByteDance\u002FHyper-SD\u002Fblob\u002Fmain\u002FREADME.md) | 2024 | [HyperSDDemo](sample_apps\u002FHyperSDDemo) | [convert_hypersd.py](conversion_scripts\u002Fconvert_hypersd.py) |\n| [HyperSDUnetChunk1.mlpackage.zip](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-Models\u002Freleases\u002Fdownload\u002Fhypersd-v1\u002FHyperSDUnetChunk1.mlpackage.zip) | 318 MB | latent + encoder_hs + timestep | first half intermediates | | | | | |\n| [HyperSDUnetChunk2.mlpackage.zip](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-Models\u002Freleases\u002Fdownload\u002Fhypersd-v1\u002FHyperSDUnetChunk2.mlpackage.zip) | 299 MB | first half outputs + skip connections | noise_pred [2,4,64,64] | | | | | |\n| [HyperSDVAEDecoder.mlpackage.zip](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-Models\u002Freleases\u002Fdownload\u002Fhypersd-v1\u002FHyperSDVAEDecoder.mlpackage.zip) | 95 MB | latent [1,4,64,64] | image [1,3,512,512] | | | | | |\n\nSee [`sample_apps\u002FHyperSDDemo\u002FREADME.md`](sample_apps\u002FHyperSDDemo\u002FREADME.md) for the LoRA fusion, chunked-UNet palettization, and TCD scheduler details.\n\n### [stable-diffusion-v1-5](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1dqYEdhSPi7y0Dgans-Fk7_ViNviUTUJj\u002Fview?usp=share_link)\n\n\u003Cimg width=\"400\" alt=\"スクリーンショット 2023-03-21 18 52 18\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_0d5c61554893.png\">\n\n| Google Drive Link  | Original Model |Original Project | License | Run on mac |Conversion Script |Year|\n| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | \n| [stable-diffusion-v1-5](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1dqYEdhSPi7y0Dgans-Fk7_ViNviUTUJj\u002Fview?usp=share_link) |[runwayml\u002Fstable-diffusion-v1-5](https:\u002F\u002Fhuggingface.co\u002Frunwayml\u002Fstable-diffusion-v1-5)|[runwayml\u002Fstable-diffusion](https:\u002F\u002Fgithub.com\u002Frunwayml\u002Fstable-diffusion)  |[Open RAIL M license](https:\u002F\u002Fhuggingface.co\u002Frunwayml\u002Fstable-diffusion-v1-5)|[godly-devotion\u002FMochiDiffusion](https:\u002F\u002Fgithub.com\u002Fgodly-devotion\u002FMochiDiffusion)|[godly-devotion\u002FMochiDiffusion](https:\u002F\u002Fgithub.com\u002Fgodly-devotion\u002FMochiDiffusion\u002Fwiki\u002FHow-to-convert-Stable-Diffusion-models-to-Core-ML#requirements) |2022|\n\n### [pastel-mix](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1cp3VoF1R-as8_lScWGUoxl-BNVX3d7vb\u002Fview?usp=share_link)\n\nPastel Mix - a stylized latent diffusion model.This model is intended to produce high-quality, highly detailed anime style with just a few prompts.\n\n\u003Cimg width=\"400\" alt=\"スクリーンショット 2023-03-21 19 54 13\" 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_cdd5581fcee8.png\">\n\n| Google Drive Link  | Original Model | License | Run on mac |Conversion Script |Year|\n| ------------- | ------------- | ------------- |  ------------- | ------------- | ------------- | \n| [pastelMixStylizedAnime_pastelMixPrunedFP16](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1cp3VoF1R-as8_lScWGUoxl-BNVX3d7vb\u002Fview?usp=share_link) |[andite\u002Fpastel-mix](https:\u002F\u002Fhuggingface.co\u002Fandite\u002Fpastel-mix)|[Fantasy.ai](https:\u002F\u002Fhuggingface.co\u002Fandite\u002Fpastel-mix)|[godly-devotion\u002FMochiDiffusion](https:\u002F\u002Fgithub.com\u002Fgodly-devotion\u002FMochiDiffusion)|[godly-devotion\u002FMochiDiffusion](https:\u002F\u002Fgithub.com\u002Fgodly-devotion\u002FMochiDiffusion\u002Fwiki\u002FHow-to-convert-Stable-Diffusion-models-to-Core-ML#requirements) |2023|\n\n### [Orange Mix](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1ueU-RuZIsl3b3F7uu_gBa_SfAtGTzTI5\u002Fview?usp=share_link)\n\n\u003Cimg width=\"800\" alt=\"スクリーンショット 2023-03-21 23 34 13\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_b65785c8df21.png\">\n\n| Google Drive Link  | Original Model | License | Run on mac |Conversion Script |Year|\n| ------------- | ------------- | ------------- |  ------------- | ------------- | ------------- | \n| [AOM3_orangemixs](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1ueU-RuZIsl3b3F7uu_gBa_SfAtGTzTI5\u002Fview?usp=share_link) |[WarriorMama777\u002FOrangeMixs](https:\u002F\u002Fhuggingface.co\u002FWarriorMama777\u002FOrangeMixs)|[CreativeML OpenRAIL-M](https:\u002F\u002Fhuggingface.co\u002FWarriorMama777\u002FOrangeMixs)|[godly-devotion\u002FMochiDiffusion](https:\u002F\u002Fgithub.com\u002Fgodly-devotion\u002FMochiDiffusion)|[godly-devotion\u002FMochiDiffusion](https:\u002F\u002Fgithub.com\u002Fgodly-devotion\u002FMochiDiffusion\u002Fwiki\u002FHow-to-convert-Stable-Diffusion-models-to-Core-ML#requirements) |2023|\n\n### [Counterfeit](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1Kt_8hnGUGnJAUnuergLki37GKnWjWOJp\u002Fview?usp=share_link)\n\n\u003Cimg width=\"800\" alt=\"スクリーンショット 2023-03-22 0 47 53\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_884af76c0a40.png\">\n\n| Google Drive Link  | Original Model | License | Run on mac |Conversion Script |Year|\n| ------------- | ------------- | ------------- |  ------------- | ------------- | ------------- | \n| [Counterfeit-V2.5](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1Kt_8hnGUGnJAUnuergLki37GKnWjWOJp\u002Fview?usp=share_link) |[gsdf\u002FCounterfeit-V2.5](https:\u002F\u002Fhuggingface.co\u002Fgsdf\u002FCounterfeit-V2.5)|-|[godly-devotion\u002FMochiDiffusion](https:\u002F\u002Fgithub.com\u002Fgodly-devotion\u002FMochiDiffusion)|[godly-devotion\u002FMochiDiffusion](https:\u002F\u002Fgithub.com\u002Fgodly-devotion\u002FMochiDiffusion\u002Fwiki\u002FHow-to-convert-Stable-Diffusion-models-to-Core-ML#requirements) |2023|\n\n\n### [anything-v4](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1yF55CGy4I3BKom_E70lLkU6N03nSvjDt\u002Fview?usp=share_link)\n\n\u003Cimg width=\"800\" alt=\"スクリーンショット 2023-03-22 0 47 53\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_a11057dee75d.png\">\n\n| Google Drive Link  | Original Model | License | Run on mac |Conversion Script |Year|\n| ------------- | ------------- | 
------------- |  ------------- | ------------- | ------------- | \n| [anything-v4.5](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1yF55CGy4I3BKom_E70lLkU6N03nSvjDt\u002Fview?usp=share_link) |[andite\u002Fanything-v4.0](https:\u002F\u002Fhuggingface.co\u002Fandite\u002Fanything-v4.0)|[Fantasy.ai](https:\u002F\u002Fhuggingface.co\u002Fandite\u002Fanything-v4.0)|[godly-devotion\u002FMochiDiffusion](https:\u002F\u002Fgithub.com\u002Fgodly-devotion\u002FMochiDiffusion)|[godly-devotion\u002FMochiDiffusion](https:\u002F\u002Fgithub.com\u002Fgodly-devotion\u002FMochiDiffusion\u002Fwiki\u002FHow-to-convert-Stable-Diffusion-models-to-Core-ML#requirements) |2023|\n\n### [Openjourney](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1KIhSG7pHjgldg7r2mm1Yuwa85BceFLsk\u002Fview?usp=share_link)\n\n\u003Cimg width=\"800\" alt=\"スクリーンショット 2023-03-22 7 49 39\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_18901c89c817.png\">\n\n| Google Drive Link  | Original Model | License | Run on mac |Conversion Script |Year|\n| ------------- | ------------- | ------------- |  ------------- | ------------- | ------------- | \n| [Openjourney](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1KIhSG7pHjgldg7r2mm1Yuwa85BceFLsk\u002Fview?usp=share_link) |[prompthero\u002Fopenjourney](https:\u002F\u002Fhuggingface.co\u002Fprompthero\u002Fopenjourney)|-|[godly-devotion\u002FMochiDiffusion](https:\u002F\u002Fgithub.com\u002Fgodly-devotion\u002FMochiDiffusion)|[godly-devotion\u002FMochiDiffusion](https:\u002F\u002Fgithub.com\u002Fgodly-devotion\u002FMochiDiffusion\u002Fwiki\u002FHow-to-convert-Stable-Diffusion-models-to-Core-ML#requirements) |2023|\n\n### [dreamlike-photoreal-2](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1D5RXYE52wyXPq6TdCHM8DIkP4dxHafwt\u002Fview?usp=share_link)\n\n\u003Cimg width=\"800\" alt=\"dreamlike\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_8f188482e19b.png\">\n\n| Google Drive Link  | Original Model | License | Run on mac |Conversion Script |Year|\n| ------------- | ------------- | ------------- |  ------------- | ------------- | ------------- | \n| [dreamlike-photoreal-2.0](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1D5RXYE52wyXPq6TdCHM8DIkP4dxHafwt\u002Fview?usp=share_link) |[dreamlike-art\u002Fdreamlike-photoreal-2.0](https:\u002F\u002Fhuggingface.co\u002Fdreamlike-art\u002Fdreamlike-photoreal-2.0)|[CreativeML OpenRAIL-M](https:\u002F\u002Fhuggingface.co\u002Fdreamlike-art\u002Fdreamlike-photoreal-2.0)|[godly-devotion\u002FMochiDiffusion](https:\u002F\u002Fgithub.com\u002Fgodly-devotion\u002FMochiDiffusion)|[godly-devotion\u002FMochiDiffusion](https:\u002F\u002Fgithub.com\u002Fgodly-devotion\u002FMochiDiffusion\u002Fwiki\u002FHow-to-convert-Stable-Diffusion-models-to-Core-ML#requirements) |2023|\n\n# Image Colorization\n\n### DDColor Tiny\n\nDDColor — AI image colorization for grayscale\u002FB&W photos using dual decoders (ICCV 2023).\n\n| Input | Output |\n|---|---|\n| \u003Cimg width=\"300\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_056eb54f0182.png\"> | \u003Cimg width=\"300\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_51b8bf820e8c.png\"> |\n\n| Download Link | Size | Input | Output | Original Project | License | Year | Sample Project | Conversion Script |\n| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- 
| ------------- | ------------- | ------------- |\n| [DDColor_Tiny.mlpackage.zip](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-Models\u002Freleases\u002Fdownload\u002Fddcolor-v1\u002FDDColor_Tiny.mlpackage.zip) | 242 MB | 512×512 RGB | AB channels (LAB) | [piddnad\u002FDDColor](https:\u002F\u002Fgithub.com\u002Fpiddnad\u002FDDColor) | [Apache-2.0](https:\u002F\u002Fgithub.com\u002Fpiddnad\u002FDDColor\u002Fblob\u002Fmaster\u002FLICENSE) | 2023 | [DDColorDemo](sample_apps\u002FDDColorDemo) | [convert_ddcolor.py](conversion_scripts\u002Fconvert_ddcolor.py) |\n\n# Face Recognition\n\n### AdaFace IR-18\n\nAdaFace — Quality-adaptive face recognition. Outputs 512-dim embedding for face verification and identification.\n\n\u003Cimg width=\"300\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_185d2851d0f0.png\">\n\n| Download Link | Size | Input | Output | Original Project | License | Year | Sample Project | Conversion Script |\n| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |\n| [AdaFace_IR18.mlpackage.zip](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-Models\u002Freleases\u002Fdownload\u002Fadaface-v1\u002FAdaFace_IR18.mlpackage.zip) | 48 MB | Image (112×112 face) | 512-dim L2-normalized embedding | [mk-minchul\u002FAdaFace](https:\u002F\u002Fgithub.com\u002Fmk-minchul\u002FAdaFace) | [MIT](https:\u002F\u002Fgithub.com\u002Fmk-minchul\u002FAdaFace\u002Fblob\u002Fmaster\u002FLICENSE) | 2022 | [AdaFaceDemo](sample_apps\u002FAdaFaceDemo) | [convert_adaface.py](conversion_scripts\u002Fconvert_adaface.py) |\n\n# 3D Face Pose Estimation\n\n### 3DDFA_V2\n\n3DDFA_V2 — 3D face reconstruction and head pose estimation (yaw, pitch, roll) from a single face image.\n\n\u003Cimg width=\"300\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_4e01e2d9bd17.png\">\n\n| Download Link | Size | Input | Output | Original Project | License | Year | Sample Project |\n| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |\n| [3DDFA_V2.mlpackage.zip](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-Models\u002Freleases\u002Fdownload\u002Fface3d-v1\u002F3DDFA_V2.mlpackage.zip) | 6.3 MB | Image (120×120 RGB) | 62 params (12 pose + 40 shape + 10 expression) | [cleardusk\u002F3DDFA_V2](https:\u002F\u002Fgithub.com\u002Fcleardusk\u002F3DDFA_V2) | [MIT](https:\u002F\u002Fgithub.com\u002Fcleardusk\u002F3DDFA_V2\u002Fblob\u002Fmaster\u002FLICENSE) | 2020 | [Face3DDemo](sample_apps\u002FFace3DDemo) |\n\n# Speaker Diarization\n\n### pyannote segmentation-3.0\n\npyannote segmentation — Speaker diarization with up to 3 simultaneous speakers. 
Identifies who speaks when, with overlap detection and per-speaker transcription.\n\n| Download Link | Size | Input | Output | Original Project | License | Year | Sample Project | Conversion Script |\n| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |\n| [SpeakerSegmentation.mlpackage.zip](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-Models\u002Freleases\u002Fdownload\u002Fdiarization-v1\u002FSpeakerSegmentation.mlpackage.zip) | 5.8 MB | 10s mono 16kHz [1,1,160000] | [1, 589, 7] speaker logits | [pyannote\u002Fsegmentation-3.0](https:\u002F\u002Fhuggingface.co\u002Fpyannote\u002Fsegmentation-3.0) | [MIT](https:\u002F\u002Fhuggingface.co\u002Fpyannote\u002Fsegmentation-3.0) | 2023 | [DiarizationDemo](sample_apps\u002FDiarizationDemo) | [convert_diarization.py](conversion_scripts\u002Fconvert_diarization.py) |\n\n# Voice Conversion\n\n### OpenVoice V2\n\nOpenVoice — Zero-shot voice conversion. Record source and target voice, convert on-device.\n\n\u003Cvideo src=\"https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002F70078691-14df-4350-846c-9ba1682433ce\" width=\"300\">\u003C\u002Fvideo>\n\n| Download Link | Size | Input | Output | Original Project | License | Year | Sample Project | Conversion Script |\n| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |\n| [OpenVoice_SpeakerEncoder.mlpackage.zip](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-Models\u002Freleases\u002Fdownload\u002Fopenvoice-v1\u002FOpenVoice_SpeakerEncoder.mlpackage.zip) | 1.7 MB | Spectrogram [1, T, 513] | 256-dim speaker embedding | [myshell-ai\u002FOpenVoice](https:\u002F\u002Fgithub.com\u002Fmyshell-ai\u002FOpenVoice) | [MIT](https:\u002F\u002Fgithub.com\u002Fmyshell-ai\u002FOpenVoice\u002Fblob\u002Fmain\u002FLICENSE) | 2024 | [OpenVoiceDemo](sample_apps\u002FOpenVoiceDemo) | [convert_openvoice.py](conversion_scripts\u002Fconvert_openvoice.py) |\n| [OpenVoice_VoiceConverter.mlpackage.zip](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-Models\u002Freleases\u002Fdownload\u002Fopenvoice-v1\u002FOpenVoice_VoiceConverter.mlpackage.zip) | 64 MB | Spectrogram + speaker embeddings | Waveform audio (22050 Hz) | | | | | |\n\n# Audio Source Separation\n\n### HTDemucs\n\nHybrid Transformer Demucs — separates music into 4 stems: drums, bass, vocals, and other instruments.\n\n\u003Cvideo src=\"https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002F98dea359-e557-4e46-af1d-2010503c86ba\" width=\"400\">\u003C\u002Fvideo>\n\n| Download Link | Size | Input | Output | Original Project | License | Year | Sample Project | Conversion Script |\n| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |\n| [HTDemucs_SourceSeparation_F32.mlpackage.zip](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-Models\u002Freleases\u002Fdownload\u002Fdemucs-v1\u002FHTDemucs_SourceSeparation_F32.mlpackage.zip) | 80 MB | Audio Waveform [1, 2, 343980] at 44.1kHz | 4 stems (drums, bass, other, vocals) stereo | [facebookresearch\u002Fdemucs](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fdemucs) | [MIT](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fdemucs\u002Fblob\u002Fmain\u002FLICENSE) | 2022 | [DemucsDemo](sample_apps\u002FDemucsDemo) | 
[convert_htdemucs.py](conversion_scripts\u002Fconvert_htdemucs.py) |\n\n# Vision-Language\n\n### Florence-2-base\n\nMicrosoft Florence-2 — a unified vision-language model supporting image captioning, OCR, and object detection from a single model. Converted as 3 CoreML models (INT8): Vision Encoder (DaViT), Text Encoder (BART), and Decoder with autoregressive generation.\n\n| Download Link | Size | Input | Output | Original Project | License | Year | Sample Project | Conversion Script |\n| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |\n| [Florence2VisionEncoder](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-Models\u002Freleases\u002Fdownload\u002Fflorence2-v1\u002FFlorence2VisionEncoder.mlpackage.zip) \u002F [TextEncoder](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-Models\u002Freleases\u002Fdownload\u002Fflorence2-v1\u002FFlorence2TextEncoder.mlpackage.zip) \u002F [Decoder](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-Models\u002Freleases\u002Fdownload\u002Fflorence2-v1\u002FFlorence2Decoder.mlpackage.zip) | 260 MB (INT8, 3 models total) | 768x768 RGB image + task prompt | Generated text (caption, OCR, etc.) | [microsoft\u002FFlorence-2-base](https:\u002F\u002Fhuggingface.co\u002Fmicrosoft\u002FFlorence-2-base) | [MIT](https:\u002F\u002Fhuggingface.co\u002Fmicrosoft\u002FFlorence-2-base\u002Fblob\u002Fmain\u002FLICENSE) | 2024 | [Florence2Demo](sample_apps\u002FFlorence2Demo) | [convert_florence2.py](conversion_scripts\u002Fconvert_florence2.py) |\n\n# Zero-Shot Image Classification\n\n### SigLIP ViT-B\u002F16\n\nGoogle SigLIP — sigmoid-based contrastive image-text model for zero-shot classification. Type any labels (e.g. \"cat, dog, car\") and get per-label probabilities. Converted as 2 CoreML models (INT8): Image Encoder and Text Encoder.\n\n| Download Link | Size | Input | Output | Original Project | License | Year | Sample Project | Conversion Script |\n| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |\n| [SigLIP_ImageEncoder](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-Models\u002Freleases\u002Fdownload\u002Fsiglip-v2\u002FSigLIP_ImageEncoder.mlpackage.zip) \u002F [TextEncoder](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-Models\u002Freleases\u002Fdownload\u002Fsiglip-v2\u002FSigLIP_TextEncoder.mlpackage.zip) | 386 MB (FP16, 2 models total) | 224x224 RGB image + text labels | Per-label similarity scores (softmax) | [google\u002Fsiglip-base-patch16-224](https:\u002F\u002Fhuggingface.co\u002Fgoogle\u002Fsiglip-base-patch16-224) | [Apache-2.0](https:\u002F\u002Fwww.apache.org\u002Flicenses\u002FLICENSE-2.0) | 2024 | [SigLIPDemo](sample_apps\u002FSigLIPDemo) | [convert_siglip.py](conversion_scripts\u002Fconvert_siglip.py) |\n\n# Text-to-Speech\n\n### Kokoro-82M\n\n[hexgrad\u002FKokoro-82M](https:\u002F\u002Fhuggingface.co\u002Fhexgrad\u002FKokoro-82M) — open-weight 82M-parameter TTS by hexgrad. Style-conditioned StyleTTS2 architecture (BERT + duration predictor + iSTFTNet vocoder) producing 24kHz speech in 9 languages from per-voice style embeddings. 
The first CoreML port with **on-device bilingual (English + Japanese) free-text input** — no MLX, no MeCab, no IPADic, no Python G2P at runtime.\n\n\u003Cvideo src=\"https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002F56eb2ffc-f915-4f8b-b6d3-1021f3d490ca\" width=\"400\">\u003C\u002Fvideo>\n\n2 CoreML models: a flexible-length **Predictor** (BERT + LSTM duration head + text encoder) and **3 fixed-shape Decoder buckets** (128 \u002F 256 \u002F 512 frames). The Swift pipeline picks the smallest bucket that fits the predicted total duration, pads input features with zeros, and trims the output audio.\n\n| Download Link | Size | Input | Output | Original Project | License | Year | Sample Project | Conversion Script |\n| ------------- | ---- | ----- | ------ | ---------------- | ------- | ---- | -------------- | ----------------- |\n| [Kokoro_Predictor.mlpackage.zip](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-Models\u002Freleases\u002Fdownload\u002Fkokoro-v1\u002FKokoro_Predictor.mlpackage.zip) | 75 MB | input_ids [1, T≤256] (int32) + ref_s_style [1, 128] | duration [1, T] + d_for_align [1, 640, T] + t_en [1, 512, T] | [hexgrad\u002FKokoro-82M](https:\u002F\u002Fhuggingface.co\u002Fhexgrad\u002FKokoro-82M) | [Apache-2.0](https:\u002F\u002Fhuggingface.co\u002Fhexgrad\u002FKokoro-82M\u002Fblob\u002Fmain\u002FLICENSE) | 2025 | [KokoroDemo](sample_apps\u002FKokoroDemo) | [convert_kokoro.py](conversion_scripts\u002Fconvert_kokoro.py) |\n| [Kokoro_Decoder_128.mlpackage.zip](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-Models\u002Freleases\u002Fdownload\u002Fkokoro-v1\u002FKokoro_Decoder_128.mlpackage.zip) | 238 MB | en_aligned [1, 640, 128] + asr_aligned [1, 512, 128] + ref_s [1, 256] | audio [1, 76800] @ 24kHz | | | | | |\n| [Kokoro_Decoder_256.mlpackage.zip](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-Models\u002Freleases\u002Fdownload\u002Fkokoro-v1\u002FKokoro_Decoder_256.mlpackage.zip) | 241 MB | en_aligned [1, 640, 256] + asr_aligned [1, 512, 256] + ref_s [1, 256] | audio [1, 153600] @ 24kHz | | | | | |\n| [Kokoro_Decoder_512.mlpackage.zip](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-Models\u002Freleases\u002Fdownload\u002Fkokoro-v1\u002FKokoro_Decoder_512.mlpackage.zip) | 246 MB | en_aligned [1, 640, 512] + asr_aligned [1, 512, 512] + ref_s [1, 256] | audio [1, 307200] @ 24kHz | | | | | |\n\nSee [`sample_apps\u002FKokoroDemo\u002FREADME.md`](sample_apps\u002FKokoroDemo\u002FREADME.md) for the on-device G2P (English + Japanese), bucketed decoder strategy, and conversion details.\n\n# Anomaly Detection\n\n### EfficientAD\n\nEfficientAD (PDN-Small) — lightweight unsupervised anomaly detection for industrial inspection. Wraps teacher, student, and autoencoder networks into a single model that outputs a per-pixel anomaly heatmap and image-level anomaly score. 
Pretrained on MVTec AD bottle category.\n\n| Download Link | Size | Input | Output | Original Project | License | Year | Sample Project | Conversion Script |\n| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |\n| [EfficientAD_Bottle.mlpackage.zip](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-Models\u002Freleases\u002Fdownload\u002Fefficientad-v1\u002FEfficientAD_Bottle.mlpackage.zip) | 15 MB (FP16) | 256x256 RGB image | anomaly_map [1,1,256,256] + anomaly_score [0-1] | [nelson1425\u002FEfficientAD](https:\u002F\u002Fgithub.com\u002Fnelson1425\u002FEfficientAD) | [MIT](https:\u002F\u002Fopensource.org\u002Flicenses\u002FMIT) | 2023 | [EfficientADDemo](sample_apps\u002FEfficientADDemo) | [convert_efficientad.py](conversion_scripts\u002Fconvert_efficientad.py) |\n\n# Music Transcription\n\n### Basic Pitch\n\n[spotify\u002Fbasic-pitch](https:\u002F\u002Fgithub.com\u002Fspotify\u002Fbasic-pitch) — polyphonic Automatic Music Transcription. Converts any audio (any instrument, any voice) into MIDI notes with pitch bend detection. Just **17K parameters \u002F 272 KB** — runs in real time on iPhone with full ANE acceleration.\n\n\u003Cvideo src=\"https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002Fd4e96b51-680f-471c-93d1-7546d5890cd7\" width=\"400\">\u003C\u002Fvideo>\n\nThe first open-source iOS implementation. Loads any audio file, runs the CoreML model in 2-second sliding windows, then runs the full Python `note_creation.py` pipeline natively in Swift (onset inference, greedy backwards-in-time tracking, melodia trick, pitch bend extraction). Detected notes are visualized as a piano roll, exported as a Standard MIDI File, and played back through a built-in additive sine synth so you can A\u002FB compare with the original audio.\n\n| Download Link | Size | Input | Output | Original Project | License | Year | Sample Project |\n| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |\n| [BasicPitch_nmp.mlpackage.zip](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-Models\u002Freleases\u002Fdownload\u002Fbasic-pitch-v1\u002FBasicPitch_nmp.mlpackage.zip) | 272 KB | audio waveform [1, 43844, 1] @ 22050 Hz mono | note [1,172,88] + onset [1,172,88] + contour [1,172,264] | [spotify\u002Fbasic-pitch](https:\u002F\u002Fgithub.com\u002Fspotify\u002Fbasic-pitch) | [Apache-2.0](https:\u002F\u002Fwww.apache.org\u002Flicenses\u002FLICENSE-2.0) | 2022 | [BasicPitchDemo](sample_apps\u002FBasicPitchDemo) |\n\nSee [`sample_apps\u002FBasicPitchDemo\u002FREADME.md`](sample_apps\u002FBasicPitchDemo\u002FREADME.md) for the sliding-window inference, post-processing port, and iOS-specific gotchas.\n\n# Text-to-Music Generation\n\n### Stable Audio Open Small\n\n[stabilityai\u002Fstable-audio-open-small](https:\u002F\u002Fhuggingface.co\u002Fstabilityai\u002Fstable-audio-open-small) — text-to-music generation (497M params). 
Generates up to 11.9 seconds of stereo 44.1kHz audio from text prompts using rectified flow diffusion.\n\n\u003Cvideo src=\"https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002Fea448e41-d5ae-407e-84a6-8312c1108cfd\" width=\"400\">\u003C\u002Fvideo>\n\n4 CoreML models: T5 text encoder, NumberEmbedder (seconds conditioning), DiT (diffusion transformer), and VAE decoder (Oobleck).\n\n| Download Link | Size | Input | Output | Original Project | License | Year | Sample Project | Conversion Script |\n| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |\n| [StableAudioT5Encoder.mlpackage.zip](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-Models\u002Freleases\u002Fdownload\u002Fstable-audio-v1\u002FStableAudioT5Encoder.mlpackage.zip) | 105 MB | input_ids [1, 64] | text_embeddings [1, 64, 768] | [stabilityai\u002Fstable-audio-open-small](https:\u002F\u002Fhuggingface.co\u002Fstabilityai\u002Fstable-audio-open-small) | [Stability AI Community](https:\u002F\u002Fhuggingface.co\u002Fstabilityai\u002Fstable-audio-open-small\u002Fblob\u002Fmain\u002FLICENSE) | 2024 | [StableAudioDemo](sample_apps\u002FStableAudioDemo) | [convert_stable_audio.py](conversion_scripts\u002Fconvert_stable_audio.py) |\n| [StableAudioNumberEmbedder.mlpackage.zip](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-Models\u002Freleases\u002Fdownload\u002Fstable-audio-v1\u002FStableAudioNumberEmbedder.mlpackage.zip) | 396 KB | normalized_seconds [1] | seconds_embedding [1, 768] | | | | | |\n| [StableAudioDiT.mlpackage.zip](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-Models\u002Freleases\u002Fdownload\u002Fstable-audio-v1\u002FStableAudioDiT.mlpackage.zip) | 326 MB | latent [1,64,256] + timestep + conditioning | velocity [1,64,256] | | | | | |\n| [StableAudioDiT_FP32.mlpackage.zip](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-Models\u002Freleases\u002Fdownload\u002Fstable-audio-v1\u002FStableAudioDiT_FP32.mlpackage.zip) | 1.3 GB | latent [1,64,256] + timestep + conditioning | velocity [1,64,256] | | | | | |\n| [StableAudioVAEDecoder.mlpackage.zip](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-Models\u002Freleases\u002Fdownload\u002Fstable-audio-v1\u002FStableAudioVAEDecoder.mlpackage.zip) | 149 MB | latent [1, 64, 256] | stereo audio [1, 2, 524288] at 44.1kHz | | | | | |\n\nSee [`sample_apps\u002FStableAudioDemo\u002FREADME.md`](sample_apps\u002FStableAudioDemo\u002FREADME.md) for INT8 vs FP32 DiT selection and conversion details.\n\n## Models converted by others.\n\n### [Stable Diffusion](https:\u002F\u002Fgithub.com\u002Fapple\u002Fml-stable-diffusion)\n[apple\u002Fml-stable-diffusion](https:\u002F\u002Fgithub.com\u002Fapple\u002Fml-stable-diffusion)\n\n## How to use in an Xcode project.\n\n### Option 1: implement a Vision request.\n\n```swift\nimport Vision\n\n\u002F\u002F Build the Vision request lazily from the auto-generated model class (modelname is a placeholder for your model).\nlazy var coreMLRequest: VNCoreMLRequest = {\n   let model = try! VNCoreMLModel(for: modelname().model)\n   let request = VNCoreMLRequest(model: model, completionHandler: self.coreMLCompletionHandler)\n   return request\n}()\n\n\u002F\u002F Perform the request off the main thread.\nlet handler = VNImageRequestHandler(ciImage: ciimage, options: [:])\nDispatchQueue.global(qos: .userInitiated).async {\n   try? handler.perform([self.coreMLRequest])\n}\n```\n\nIf the model has an Image type output:\n\n```swift\nlet result = request?.results?.first as! VNPixelBufferObservation\nlet uiimage = UIImage(ciImage: CIImage(cvPixelBuffer: result.pixelBuffer))\n```
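\nThe force cast above will crash if the output is not actually an Image type. A minimal hedged variant (not from the original project; `imageView` is a hypothetical property) that reads the same output defensively inside the completion handler:\n\n```swift\n\u002F\u002F Safer version of the snippet above: optional cast instead of as!, then hop back to the main thread.\nguard let result = request?.results?.first as? VNPixelBufferObservation else { return }\nlet uiimage = UIImage(ciImage: CIImage(cvPixelBuffer: result.pixelBuffer))\nDispatchQueue.main.async {\n   imageView.image = uiimage   \u002F\u002F imageView is an assumed UIImageView in your view controller\n}\n```\n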
If the model has a MultiArray type output instead:\n\nFor visualizing a MultiArray as an image, Mr. Hollance’s “CoreML Helpers” are very convenient.\n[CoreML Helpers](https:\u002F\u002Fgithub.com\u002Fhollance\u002FCoreMLHelpers)\n\n[Converting from MultiArray to Image with CoreML Helpers.](https:\u002F\u002Fmedium.com\u002F@rockyshikoku\u002Fconverting-from-multiarray-to-image-with-coreml-helpers-59fdc34d80d8)\n\n```swift\nfunc coreMLCompletionHandler(request: VNRequest?, error: Error?) {\n   let result = coreMLRequest.results?.first as! VNCoreMLFeatureValueObservation\n   let multiArray = result.featureValue.multiArrayValue\n   \u002F\u002F cgImage(min:max:channel:) is provided by CoreML Helpers.\n   let cgimage = multiArray?.cgImage(min: -1, max: 1, channel: nil)\n}\n```\n\n### Option 2: use [CoreGANContainer](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreGANContainer). You can use models by dragging & dropping them into the container project.\n\n# Make the model lighter\nYou can reduce the model size with quantization if you want.\nhttps:\u002F\u002Fcoremltools.readme.io\u002Fdocs\u002Fquantization\n>The lower the number of bits, the more the chances of degrading the model accuracy. The loss in accuracy varies with the model.\n\n```python\nimport coremltools as ct\nfrom coremltools.models.neural_network import quantization_utils\n\n# load the full-precision model\nmodel_fp32 = ct.models.MLModel('model.mlmodel')\n\n# quantize the weights; nbits can be 16 (half-size model), 8 (1\u002F4), 4 (1\u002F8), 2, 1\nmodel_fp16 = quantization_utils.quantize_weights(model_fp32, nbits=16)\n```\n\n##### quantized sample (U2Net)\n\n##### InputImage \u002F nbits=32(original) \u002F nbits=16 \u002F nbits=8 \u002F nbits=4\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_8818bb6e6087.jpeg\" width=200> \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_b1fbf1268e47.jpg\" width=200> \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_b1fbf1268e47.jpg\" width=200> \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_d35dab4fc66b.jpg\" width=200> \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_2d159ef46989.jpg\" width=200>\n\n# Thanks\nThe cover image was taken from Ghibli free images.\n\nFor the YOLOv5 conversion, [dbsystel\u002Fyolov5-coreml-tools](https:\u002F\u002Fgithub.com\u002Fdbsystel\u002Fyolov5-coreml-tools) provided the excellent conversion script.\n\nThanks also to all of the original projects.\n\n# Author\n\nDaisuke Majima\nFreelance engineer. 
iOS\u002FMachineLearning\u002FAR\nI can work on mobile ML projects and AR project.\nFeel free to contact: rockyshikoku@gmail.com\n\n[GitHub](https:\u002F\u002Fgithub.com\u002Fjohn-rocky)\n[Twitter](https:\u002F\u002Ftwitter.com\u002FJackdeS11)\n[Medium](https:\u002F\u002Frockyshikoku.medium.com\u002F)\n\n\n","# CoreML-模型\n转换后的Core ML模型库。\n\n\u003Cimg width=\"1280\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_638c7bf7b158.jpeg\">\n\nCore ML是苹果公司推出的一款机器学习框架。\n如果你是一名iOS开发者，就可以轻松地在你的Xcode项目中使用机器学习模型。\n\n# 使用方法\n\n浏览这个模型库，如果你找到了想要的CoreML模型，\n可以从Google Drive链接下载该模型，并将其打包到你的项目中。\n或者，如果该模型附有示例项目链接，可以尝试运行一下，看看如何在项目中使用这个模型。\n你可以选择是否这样做。\n\n**如果你喜欢这个仓库，请给我点个赞，这样我就能更加努力地维护它了。**\n\n# 章节链接\n\n- [**图像分类器**](#image-classifier)\n  - [Efficientnetb0](#efficientnetb0)\n  - [Efficientnetv2](#efficientnetv2)\n  - [VisionTransformer](#visiontransformer)\n  - [Conformer](#conformer)\n  - [DeiT](#deit)\n  - [RepVGG](#repvgg)\n  - [RegNet](#regnet)\n  - [MobileViTv2](#mobilevitv2)\n\n  \n- [**目标检测**](#object-detection)\n  - [D-FINE](#d-fine)\n  - [RF-DETR](#rf-detr)\n  - [YOLOv5s](#yolov5s)\n  - [YOLOv7](#yolov7)\n  - [YOLOv8](#yolov8)\n  - [YOLOv9](#yolov9)\n  - [YOLOv10](#yolov10)\n  - [YOLO11](#yolo11)\n  - [YOLO26](#yolo26)\n  - [YOLO-World](#yolo-world)\n\n- [**分割**](#segmentation)\n  - [U2Net](#u2net)\n  - [IS-Net](#is-net)\n  - [RMBG1.4](#rmbg14)\n  - [face-parsing](#face-parsing)\n  - [Segformer](#segformer)\n  - [BiseNetv2](#bisenetv2)\n  - [DNL](#dnl)\n  - [ISANet](#isanet)\n  - [FastFCN](#fastfcn)\n  - [GCNet](#gcnet)\n  - [DANet](#danet)\n  - [Semantic FPN](#semantic-fpn)\n  - [cloths_segmentation](#cloths_segmentation)\n  - [easyportrait](#easyportrait)\n  - [MobileSAM](#mobilesam)\n  - [SAM2-Tiny](#sam2-tiny)\n\n- [**视频抠像**](#video-matting)\n  - [MatAnyone](#matanyone)\n\n- [**超分辨率**](#super-resolution)\n  - [Real ESRGAN](#real-esrgan)\n  - [GFPGAN](#gfpgan)\n  - [BSRGAN](#bsrgan)\n  - [A-ESRGAN](#a-esrgan)\n  - [Beby-GAN](#beby-gan)\n  - [RRDN](#rrdn)\n  - [Fast-SRGAN](#fast-srgan)\n  - [ESRGAN](#esrgan)\n  - [UltraSharp](#ultrasharp)\n  - [SRGAN](#srgan)\n  - [SRResNet](#srresnet)\n  - [LESRCNN](#lesrcnn)\n  - [MMRealSR](#mmrealsr)\n  - [DASR](#dasr)\n  - [SinSR](#sinsr)\n      \n- [**低光增强**](#low-light-enhancement)\n  - [StableLLVE](#stablellve)\n  - [Zero-DCE](#zero-dce)\n  - [Retinexformer](#retinexformer)\n\n- [**图像修复**](#image-restroration)\n  - [MPRNet](#mprnet)\n  - [MIRNetv2](#mirnetv2)\n\n- [**图像生成**](#image-generation)\n  - [MobileStyleGAN](#mobilestylegan)\n  - [DCGAN](#dcgan)\n\n- [**图像到图像转换**](#image2image)\n  - [Anime2Sketch](#anime2sketch)\n  - [AnimeGAN2Face_Paint_512_v2](#animegan2face_paint_512_v2)\n  - [Photo2Cartoon](#photo2cartoon)\n  - [AnimeGANv2_Hayao](#animeGANv2_hayao)\n  - [AnimeGANv2_Paprika](#animeGANv2_paprika)\n  - [WarpGAN Caricature](#warpgancaricature)\n  - [UGATIT_selfie2anime](#ugatit_selfie2anime)\n  - [Fast-Neural-Style-Transfer](#fast-neural-style-transfer)\n  - [White_box_Cartoonization](#white_box_cartoonization)\n  - [FacialCartoonization](#facialcartoonization)\n\n- [**图像修复**](#inpainting)\n  - [AOT-GAN-for-Inpainting](#aot-gan-for-inpainting)\n  - [Lama](#lama)\n\n- [**单目深度估计**](#monocular-depth-estimation)\n  - [MiDaS](#midas)\n  \n- [**稳定扩散**](#stable-diffusion) **:文本到图像**\n  - [Hyper-SD](#hyper-sd)\n  - [stable-diffusion-v1-5](#stable-diffusion-v1-5)\n  - [pastel-mix](#pastel-mix)\n  - [Orange Mix](#orange-mix)\n  - [Counterfeit-V2.5](#counterfeit)\n  - [anything-v4.5](#anything-v4)\n  - 
[Openjourney](#openjourney)\n  - [dreamlike-photoreal-2.0](#dreamlike-photoreal-2)\n\n- [**图像上色**](#image-colorization)\n  - [DDColor Tiny](#ddcolor-tiny)\n\n- [**人脸识别**](#face-recognition)\n  - [AdaFace IR-18](#adaface-ir-18)\n\n- [**3D人脸姿态估计**](#3d-face-pose-estimation)\n  - [3DDFA_V2](#3ddfa_v2)\n\n- [**说话人分离**](#speaker-diarization)\n  - [pyannote segmentation-3.0](#pyannote-segmentation-30)\n\n- [**语音转换**](#voice-conversion)\n  - [OpenVoice V2](#openvoice-v2)\n\n- [**文本转语音**](#text-to-speech)\n  - [Kokoro-82M](#kokoro-82m)\n\n- [**文本转音乐生成**](#text-to-music-generation)\n  - [Stable Audio Open Small](#stable-audio-open-small)\n\n- [**音频源分离**](#audio-source-separation)\n  - [HTDemucs](#htdemucs)\n\n- [**视觉-语言模型**](#vision-language)\n  - [Florence-2-base](#florence-2-base)\n\n- [**零样本图像分类**](#zero-shot-image-classification)\n  - [SigLIP ViT-B\u002F16](#siglip-vit-b16)\n\n- [**异常检测**](#anomaly-detection)\n  - [EfficientAD](#efficientad)\n\n- [**音乐转录**](#music-transcription)\n  - [Basic Pitch](#basic-pitch)\n\n# 如何获取模型\n你可以通过Google Drive链接获取已转换为CoreML格式的模型。\n关于如何在Xcode中使用这些模型，请参阅下方章节。\n每个模型的许可证均遵循其原始项目的许可证。\n\n# 图像分类器\n\n### Efficientnet\n\n\u003Cimg width=\"400\" alt=\"截图 2021-12-27 6 34 43\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_218f024c1aac.png\">\n\n| Google Drive链接 | 大小 | 数据集 | 原始项目 | 许可证 |\n| ------------- | ------------- | ------------- |------------- |------------- |\n| [Efficientnetb0](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1mJq8SMuDaCQHW77ui3fAfe5o3Qu2GKMi\u002Fview?usp=sharing) | 22.7 MB | ImageNet | [TensorFlowHub](https:\u002F\u002Ftfhub.dev\u002Ftensorflow\u002Fefficientnet\u002Fb0\u002Fclassification\u002F1)  |[Apache2.0](https:\u002F\u002Fopensource.org\u002Flicenses\u002FApache-2.0)|\n\n\n### Efficientnetv2\n\n\u003Cimg width=\"400\" alt=\"截图 2021-12-31 4 30 22\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_19ceaac75008.png\">\n\n| Google Drive链接 | 大小 | 数据集 | 原始项目 | 许可证 | 年份|\n| ------------- | ------------- | ------------- |------------- |------------- |------------- |\n| [Efficientnetv2](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F12JiGwXh8pX3yjoG_GsJOKAnPd3lbVrrn\u002Fview?usp=sharing) | 85.8 MB | ImageNet | [Google\u002FautoML](https:\u002F\u002Fgithub.com\u002Fgoogle\u002Fautoml\u002Ftree\u002Fmaster\u002Fefficientnetv2)  | [Apache2.0](https:\u002F\u002Fgithub.com\u002Fgoogle\u002Fautoml\u002Fblob\u002Fmaster\u002FLICENSE)|2021|\n\n### VisionTransformer\n\n一张图片胜过16x16个单词：大规模图像识别中的Transformer。\n\n\u003Cimg width=\"400\" alt=\"截图 2022-01-07 10 37 05\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_f43de3ca2ca5.png\">\n\n| Google Drive链接 | 大小 | 数据集 | 原始项目 | 许可证 | 年份|\n| ------------- | ------------- | ------------- |------------- |------------- |------------- |\n| [VisionTransformer-B16](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1VPo8Cjv7dyicM4lcJ6TgxnD4AN3ldMQp\u002Fview?usp=sharing) | 347.5 MB | ImageNet | [google-research\u002Fvision_transformer](https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Fvision_transformer)  | [Apache2.0](https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Fvision_transformer\u002Fblob\u002Fmain\u002FLICENSE)|2021|\n\n### Conformer\n\n局部特征耦合全局表示用于视觉识别。\n\n\u003Cimg width=\"400\" alt=\"截图 2022-01-07 11 34 33\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_28411770ea64.png\">\n\n| Google Drive 链接 | 大小 | 
数据集 | 原项目 | 许可证 | 年份 |\n| ------------- | ------------- | ------------- |------------- |------------- |------------- |\n| [Conformer-tiny-p16](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1-4qVbuTYr4r4o08656iGtV8KKblAVVyr\u002Fview?usp=sharing) | 94.1 MB | ImageNet | [pengzhiliang\u002FConformer](https:\u002F\u002Fgithub.com\u002Fpengzhiliang\u002FConformer)  | [Apache2.0](https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Fvision_transformer\u002Fblob\u002Fmain\u002FLICENSE)|2021|\n\n### DeiT\n\n数据高效的图像Transformer\n\n\u003Cimg width=\"400\" alt=\"截图 2022-01-07 11 50 25\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_d0a931e0237e.png\">\n\n| Google Drive 链接 | 大小 | 数据集 | 原项目 | 许可证 | 年份 |\n| ------------- | ------------- | ------------- |------------- |------------- |------------- |\n| [DeiT-base384](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1-7J-b0fTjmZi2VDPrDCWKBsCYGxYP5yW\u002Fview?usp=sharing) | 350.5 MB | ImageNet | [facebookresearch\u002Fdeit](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fdeit)  | [Apache2.0](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fdeit\u002Fblob\u002Fmain\u002FLICENSE)|2021|\n\n### RepVGG\n\n让VGG风格的卷积神经网络再次伟大\n\n\u003Cimg width=\"400\" alt=\"截图 2022-01-08 5 00 53\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_591c2d0eaad2.png\">\n\n| Google Drive 链接 | 大小 | 数据集 | 原项目 | 许可证 | 年份 |\n| ------------- | ------------- | ------------- |------------- |------------- |------------- |\n| [RepVGG-A0](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1i8mDvRGn2_OjzIG9ioVJyQrefVliKsh_\u002Fview?usp=sharing) | 33.3 MB | ImageNet | [DingXiaoH\u002FRepVGG](https:\u002F\u002Fgithub.com\u002FDingXiaoH\u002FRepVGG)  | [MIT](https:\u002F\u002Fgithub.com\u002FDingXiaoH\u002FRepVGG\u002Fblob\u002Fmain\u002FLICENSE)|2021|\n\n### RegNet\n\n设计网络设计空间\n\n\u003Cimg width=\"400\" alt=\"截图 2022-02-23 7 38 23\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_3edff6cd7f19.png\">\n\n| Google Drive 链接 | 大小 | 数据集 | 原项目 | 许可证 | 年份 |\n| ------------- | ------------- | ------------- |------------- |------------- |------------- |\n| [regnet_y_400mf](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F16jbUJ4gHSzdxxbYb99rOQe0FiKCuLyDB\u002Fview?usp=sharing) | 16.5 MB | ImageNet | [TORCHVISION.MODELS](https:\u002F\u002Fpytorch.org\u002Fvision\u002Fstable\u002Fmodels.html#torchvision-models)  | [MIT](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpycls\u002Fblob\u002Fmain\u002FLICENSE)|2020|\n\n### MobileViTv2\n\nCVNets：用于训练计算机视觉网络的库\n\n\u003Cimg width=\"400\" alt=\"截图 2022-02-23 7 38 23\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_7c134d27875a.png\">\n\n| Google Drive 链接 | 大小 | 数据集 | 原项目 | 许可证 | 年份 | 转换脚本 |\n| ------------- | ------------- | ------------- |------------- |------------- |------------- |------------- |\n| [MobileViTv2](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1__aG67p6o5-NIchkHpfFJBszCpIhI0uf\u002Fview?usp=share_link) | 18.8 MB | ImageNet | [apple\u002Fml-cvnets](https:\u002F\u002Fgithub.com\u002Fapple\u002Fml-cvnets)  | [苹果](https:\u002F\u002Fgithub.com\u002Fapple\u002Fml-cvnets\u002Fblob\u002Fmain\u002FLICENSE)|2022|[![在 Colab 
中打开](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)]([https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1QiTlFsN948Xt2e4WgqUB8DnGgwWwtVZS?usp=sharing](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1UQwhFpVP_4Q9I6LXPdBSS0VDhIRdUBQA?usp=sharing)) |\n\n# 目标检测\n\n### D-FINE\n\n\u003Cimg width=\"400\" alt=\"D-FINE iOS演示\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_b1350cd53f97.png\">\n\n| 下载链接 | 大小 | 输出 | 原项目 | 许可证 | 备注 | 示例项目 |\n| ------------- | ------------- | ------------- | ------------- |------------- |------------- |------------- |\n|[dfine-n-coco](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002Fpeaceofcake\u002Freleases\u002Fdownload\u002Fv0.2.0\u002Fdfine_n_coco.mlpackage.zip)|13MB| 置信度（Float32 300 × 80 的多维数组），坐标（Float32 300 × 4 的多维数组） |[Peterande\u002FD-FINE](https:\u002F\u002Fgithub.com\u002FPeterande\u002FD-FINE)|[Apache 2.0](https:\u002F\u002Fgithub.com\u002FPeterande\u002FD-FINE\u002Fblob\u002Fmaster\u002FLICENSE)|输入为640×640。坐标归一化为cxcywh。无NMS——按置信度阈值筛选。| [peaceofcake DFINEDemo](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002Fpeaceofcake\u002Ftree\u002Fmain\u002FDFINEDemo) |\n\n### RF-DETR\n\n\u003Cimg width=\"400\" alt=\"RF-DETR iOS演示\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_013eae2c371c.png\">\n\n| 下载链接 | 大小 | 输出 | 原项目 | 许可证 | 备注 | 示例项目 |\n| ------------- | ------------- | ------------- | ------------- |------------- |------------- |------------- |\n|[rfdetr-n-coco](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002Fpeaceofcake\u002Freleases\u002Fdownload\u002Fv0.2.0\u002Frfdetr_n_coco.mlpackage.zip)|95MB| 置信度（Float32 300 × 91 的多维数组），坐标（Float32 300 × 4 的多维数组） |[roboflow\u002Frf-detr](https:\u002F\u002Fgithub.com\u002Froboflow\u002Frf-detr)|[Apache 2.0](https:\u002F\u002Fgithub.com\u002Froboflow\u002Frf-detr\u002Fblob\u002Fmain\u002FLICENSE)|输入为384×384。91个类别（索引0为背景，1-90为COCO类别ID）。坐标归一化为cxcywh。无NMS。| [peaceofcake DFINEDemo](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002Fpeaceofcake\u002Ftree\u002Fmain\u002FDFINEDemo) |\n\n### YOLOv5s\n\n\u003Cimg width=\"400\" alt=\"截图 2021-12-29 6 17 08\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_bd332537229a.png\">\n\n| Google Drive 链接 | 大小 | 输出 | 原项目 | 许可证 | 备注 | 示例项目 |\n| ------------- | ------------- | ------------- | ------------- |------------- |------------- |------------- |\n|[YOLOv5s](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1KT-9eKO4F-LYIJVYJg7dy2LEW_hVUq0M\u002Fview?usp=sharing)|29.3MB| 置信度（Double 0 × 80 的多维数组），坐标（Double 0 × 4 的多维数组） |[ultralytics\u002Fyolov5](https:\u002F\u002Fgithub.com\u002Fultralytics\u002Fyolov5)|[GNU](https:\u002F\u002Fgithub.com\u002Fultralytics\u002Fyolov5\u002Fblob\u002Fmaster\u002FLICENSE)|已添加非极大值抑制。| [CoreML-YOLOv5](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-YOLOv5) |\n\n### YOLOv7\n\n\u003Cimg width=\"400\" alt=\"截图 2021-12-29 6 17 08\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_544cb65244bf.png\">\n\n| Google Drive 链接 | 大小 | 输出 | 原始项目 | 许可证 | 备注 | 示例项目 | 转换脚本 |\n| ------------- | ------------- | ------------- | ------------- |------------- |------------- |------------- |------------- |\n|[YOLOv7](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1EKBC7tiwP1tDvXUm_ldD1Nq7hW8HofLe\u002Fview?usp=sharing)|147.9MB| 置信度(多维数组 (Double 0 × 80))，坐标(多维数组 (Double 0 × 4)) 
|[WongKinYiu\u002Fyolov7](https:\u002F\u002Fgithub.com\u002FWongKinYiu\u002Fyolov7)|[GNU](https:\u002F\u002Fgithub.com\u002FWongKinYiu\u002Fyolov7\u002Fblob\u002Fmain\u002FLICENSE.md)|已添加非极大值抑制。| [CoreML-YOLOv5](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-YOLOv5) | [![在 Colab 中打开](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1QiTlFsN948Xt2e4WgqUB8DnGgwWwtVZS?usp=sharing) |\n\n### YOLOv8\n\n\u003Cimg width=\"400\" alt=\"截图 2021-12-29 6 17 08\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_80972c8d4ce8.png\">\n\n| Google Drive 链接 | 大小 | 输出 | 原始项目 | 许可证 | 备注 | 示例项目 |\n| ------------- | ------------- | ------------- | ------------- |------------- |------------- |------------- |\n|[YOLOv8s](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1pLRh1Y37KLEMpQn3v8qH-A12swakoHbI\u002Fview?usp=share_link)|45.1MB| 置信度(多维数组 (Double 0 × 80))，坐标(多维数组 (Double 0 × 4)) |[ultralytics\u002Fultralytics](https:\u002F\u002Fgithub.com\u002Fultralytics\u002Fultralytics)|[GNU](https:\u002F\u002Fgithub.com\u002Fultralytics\u002Fultralytics\u002Fblob\u002Fmain\u002FLICENSE)|已添加非极大值抑制。| [CoreML-YOLOv5](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-YOLOv5) |\n\n### YOLOv9\n\nYOLOv9：使用可编程梯度信息学习你想学的内容。采用 PGI 和 GELAN 架构实现高效的目标检测。\n\n| 下载链接 | 大小 | 输出 | 原始项目 | 许可证 | 年份 | 备注 | 示例项目 |\n| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |\n| [yolov9s.mlpackage.zip](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-Models\u002Freleases\u002Fdownload\u002Fyolo-models-v1\u002Fyolov9s.mlpackage.zip) | 14 MB | 置信度（多维数组（Double 0 × 80）），坐标（多维数组（Double 0 × 4）） | [WongKinYiu\u002Fyolov9](https:\u002F\u002Fgithub.com\u002FWongKinYiu\u002Fyolov9) | [GPL-3.0](https:\u002F\u002Fgithub.com\u002FWongKinYiu\u002Fyolov9\u002Fblob\u002Fmain\u002FLICENSE.md) | 2024 | 已添加非极大值抑制。 | [YOLOv9Demo](sample_apps\u002FYOLOv9Demo) |\n\n### YOLOv10\n\nYOLOv10：实时端到端目标检测。采用一致的双重分配无 NMS 架构——无需后处理。\n\n| 下载链接 | 大小 | 输出 | 原始项目 | 许可证 | 年份 | 备注 | 示例项目 |\n| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |\n| [yolov10s.mlpackage.zip](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-Models\u002Freleases\u002Fdownload\u002Fyolo-models-v1\u002Fyolov10s.mlpackage.zip) | 14 MB | 多维数组（1 × 300 × 6） | [THU-MIG\u002Fyolov10](https:\u002F\u002Fgithub.com\u002FTHU-MIG\u002Fyolov10) | [AGPL-3.0](https:\u002F\u002Fgithub.com\u002FTHU-MIG\u002Fyolov10\u002Fblob\u002Fmain\u002FLICENSE) | 2024 | 无 NMS 的端到端检测。 | [YOLO26Demo](sample_apps\u002FYOLO26Demo) |\n\n### YOLO11\n\nYOLO11：Ultralytics 最新的 YOLO，改进了骨干和颈部架构。参数比 YOLOv8 少 22%，mAP 更高。\n\n| 下载链接 | 大小 | 输出 | 原始项目 | 许可证 | 年份 | 备注 | 示例项目 |\n| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |\n| [yolo11s.mlpackage.zip](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-Models\u002Freleases\u002Fdownload\u002Fyolo-models-v1\u002Fyolo11s.mlpackage.zip) | 18 MB | 置信度（多维数组（Double 0 × 80）），坐标（多维数组（Double 0 × 4）） | [ultralytics\u002Fultralytics](https:\u002F\u002Fgithub.com\u002Fultralytics\u002Fultralytics) | [AGPL-3.0](https:\u002F\u002Fgithub.com\u002Fultralytics\u002Fultralytics\u002Fblob\u002Fmain\u002FLICENSE) | 2024 | 已添加非极大值抑制。 | [YOLOv9Demo](sample_apps\u002FYOLOv9Demo) |\n\n### YOLO26\n\nYOLO26：边缘优先的视觉 AI，具有无 NMS 
的端到端检测功能。与 YOLO11 相比，CPU 推理速度最高快 43%，并移除了 DFL 和 ProgLoss。\n\n\u003Cimg width=\"300\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_7a13261f79bd.png\">\n\n| 下载链接 | 大小 | 输出 | 原始项目 | 许可证 | 年份 | 备注 | 示例项目 |\n| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |\n| [yolo26s.mlpackage.zip](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-Models\u002Freleases\u002Fdownload\u002Fyolo-models-v1\u002Fyolo26s.mlpackage.zip) | 18 MB | 多维数组（1 × 300 × 6） | [ultralytics\u002Fultralytics](https:\u002F\u002Fgithub.com\u002Fultralytics\u002Fultralytics) | [AGPL-3.0](https:\u002F\u002Fgithub.com\u002Fultralytics\u002Fultralytics\u002Fblob\u002Fmain\u002FLICENSE) | 2026 | 无 NMS 的端到端检测。 | [YOLO26Demo](sample_apps\u002FYOLO26Demo) |\n\n### YOLO-World\n\nYOLO-World：实时开放词汇目标检测。输入任意文本查询即可检测，无需固定类别列表。使用 CLIP 文本编码器进行开放词汇匹配。\n\n\u003Cimg width=\"300\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_e2a5baa26621.png\">\n\n| 下载链接 | 大小 | 描述 | 原始项目 | 许可证 | 年份 | 示例项目 |\n| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |\n| [yoloworld_detector.mlpackage.zip](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-Models\u002Freleases\u002Fdownload\u002Fyolo-models-v1\u002Fyoloworld_detector.mlpackage.zip) | 25 MB | YOLO-World V2-S 视觉检测器 | [AILab-CVC\u002FYOLO-World](https:\u002F\u002Fgithub.com\u002FAILab-CVC\u002FYOLO-World) | [GPL-3.0](https:\u002F\u002Fgithub.com\u002FAILab-CVC\u002FYOLO-World\u002Fblob\u002Fmaster\u002FLICENSE) | 2024 | [YOLOWorldDemo](sample_apps\u002FYOLOWorldDemo) |\n| [clip_text_encoder.mlpackage.zip](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-Models\u002Freleases\u002Fdownload\u002Fyolo-models-v1\u002Fclip_text_encoder.mlpackage.zip) | 121 MB | CLIP ViT-B\u002F32 文本编码器 | [openai\u002FCLIP](https:\u002F\u002Fgithub.com\u002Fopenai\u002FCLIP) | [MIT](https:\u002F\u002Fgithub.com\u002Fopenai\u002FCLIP\u002Fblob\u002Fmain\u002FLICENSE) | 2021 | — |\n| [clip_vocab.json.zip](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-Models\u002Freleases\u002Fdownload\u002Fyolo-models-v1\u002Fclip_vocab.json.zip) | 1.6 MB | BPE 词汇表用于分词器 | — | — | — | — |\n\n# 分割\n\n### [U2Net](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1cpm-x12Ih7Cqd_kOjfTvtt4ipGS3BpCx\u002Fview?usp=sharing)\n\u003Cimg width=\"400\" src=\"https:\u002F\u002Fcamo.qiitausercontent.com\u002Fa8e89c72c0950db66d63415b9010d203aae22617\u002F68747470733a2f2f71696974612d696d6167652d73746f72652e73332e61702d6e6f727468656173742d312e616d617a6f6e6177732e636f6d2f302f3233353235392f36303037393162322d633534332d613537652d303639622d38636631073932643662392e6a706567\"> \u003Cimg width=\"400\" src=\"https:\u002F\u002Fcamo.qiitausercontent.com\u002F4f502487cd9e9e02d150ad63b33683a1446e7516\u002F68747470733a2f2f71696974612d696d6167652d73746f72652e73332e61702d6e6f727468656173742d312e616d617a6f6e6177732e636f6d2f302f3233353235392f39636532633237612d643134322d663136352d343365662d65323739646337386333382e706e67\">\n\n| Google Drive 链接 | 大小 | 输出 | 原始项目 | 许可证 |\n| ------------- | ------------- | ------------- | ------------- |------------- |\n| [U2Net](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1cpm-x12Ih7Cqd_kOjfTvtt4ipGS3BpCx\u002Fview?usp=sharing) | 175.9 MB | 图像（灰度，320 × 320）| [xuebinqin\u002FU-2-Net](https:\u002F\u002Fgithub.com\u002Fxuebinqin)  | 
[Apache](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-Models\u002Fblob\u002Fmaster\u002FApache-LICENSE)|\n| [U2Netp](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1D-quPGy33PzSEC6A7EBNv7mCyuiBlO08\u002Fview?usp=sharing) | 4.6 MB | 图像（灰度，320 × 320）| [xuebinqin\u002FU-2-Net](https:\u002F\u002Fgithub.com\u002Fxuebinqin)  |  [Apache](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-Models\u002Fblob\u002Fmaster\u002FApache-LICENSE)|\n\n### [IS-Net](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F13CkOTBCYc3FjGTU26lmCsRYsOkeHnAMA?usp=sharing)\n\u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_b11e5227913a.jpg\"> \u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_c8b04f50a404.jpg\">\n\u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_bd49404efefe.jpeg\"> \u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_10ef6ed7938c.jpg\">\n\n| Google Drive 链接 | 大小 | 输出 | 原始项目 | 许可证 | 年份 | 转换脚本 |\n| ------------- | ------------- | ------------- | ------------- |------------- | ------------- |------------- |\n| [IS-Net](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F13CkOTBCYc3FjGTU26lmCsRYsOkeHnAMA?usp=sharing) | 176.1 MB | 图像（灰度，1024 × 1024）| [xuebinqin\u002FDIS](https:\u002F\u002Fgithub.com\u002Fxuebinqin\u002FDIS)  | [Apache](https:\u002F\u002Fgithub.com\u002Fxuebinqin\u002FDIS\u002Fblob\u002Fmain\u002FLICENSE.md)| 2022 |[![Open In Colab](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1xWD7LZbI-_09LXmiYMdhA28V2qujvOlZ?usp=sharing)|\n| [IS-Net-General-Use](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1Vglh1zPwTglroMvycnkLdFP6nCHf_GuH\u002Fview?usp=sharing) | 176.1 MB | 图像（灰度，1024 × 1024）| [xuebinqin\u002FDIS](https:\u002F\u002Fgithub.com\u002Fxuebinqin\u002FDIS)  | [Apache](https:\u002F\u002Fgithub.com\u002Fxuebinqin\u002FDIS\u002Fblob\u002Fmain\u002FLICENSE.md)| 2022 |[![Open In Colab](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1xWD7LZbI-_09LXmiYMdhA28V2qujvOlZ?usp=sharing)|\n\n### RMBG1.4\n\nRMBG1.4 - 经过我们独特的训练方案和专有数据集增强的 IS-Net。\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_58dde2e675cc.png\" width=400> \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_66aa0cc958c0.png\" width=400>\n\n| 下载链接 | 大小 | 输出 | 原始项目 | 许可证 | 年份  | 示例项目 | 转换脚本 |\n| ------------- | ------------- | ------------- |------------- | ------------- | ------------- |------------- |------------- |\n| [RMBG_1_4.mlpackage.zip](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-Models\u002Freleases\u002Fdownload\u002Frmbg-v1\u002FRMBG_1_4.mlpackage.zip) | 42 MB（INT8） | Alpha 透明度图 1024×1024 |[briaai\u002FRMBG-1.4](https:\u002F\u002Fhuggingface.co\u002Fbriaai\u002FRMBG-1.4) | [知识共享](https:\u002F\u002Fhuggingface.co\u002Fbriaai\u002FRMBG-1.4) |2024| [RMBGDemo](sample_apps\u002FRMBGDemo) | [convert_rmbg.py](conversion_scripts\u002Fconvert_rmbg.py) |\n\n### face-Parsing\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_710e4e19e86b.png\" width=400> \u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_fa3f9169d3cb.png\" width=400>\n\n| Google Drive 链接 | 大小 | 输出 | 原始项目 | 许可证 | 示例项目 |\n| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |\n| [face-Parsing](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1I_cu8x0k6d1AEV_VPLyMu3Pqg3hwmo7g\u002Fview?usp=sharing) | 53.2 MB | 多维数组（1 x 512 × 512）| [zllrunning\u002Fface-parsing.PyTorch](https:\u002F\u002Fgithub.com\u002Fzllrunning\u002Fface-parsing.PyTorch)  | [MIT](https:\u002F\u002Fgithub.com\u002Fzllrunning\u002Fface-parsing.PyTorch\u002Fblob\u002Fmaster\u002FLICENSE)|[CoreML-face-parsing](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-Face-Parsing) |\n\n### Segformer\n\n使用 Transformer 的简单高效语义分割设计\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_07e52ed1596e.png\" width=400> \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_9d4043e68ded.jpg\" width=400>\n\n| Google Drive 链接 | 大小 | 输出 | 原始项目 | 许可证 | 年份 |\n| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |\n| [SegFormer_mit-b0_1024x1024_cityscapes](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1-lcNjJM85DZh5-xQv4jlKL6I1ZMBk2uu\u002Fview?usp=sharing) | 14.9 MB | 多维数组（512 × 1024）| [NVlabs\u002FSegFormer](https:\u002F\u002Fgithub.com\u002FNVlabs\u002FSegFormer)  | [NVIDIA](https:\u002F\u002Fgithub.com\u002FNVlabs\u002FSegFormer\u002Fblob\u002Fmaster\u002FLICENSE)|2021|\n\n### BiSeNetV2\t\n\n用于实时语义分割的引导聚合双边网络\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_e0e33605cfe6.jpg\" width=400> \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_b14af6e47030.jpg\" width=400>\n\n| Google Drive 链接 | 大小 | 输出 | 原始项目 | 许可证 | 年份 |\n| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |\n| [BiSeNetV2_1024x1024_cityscapes](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1-20x0-TP8zqXCzDhH06TyL03SJRFYY9n\u002Fview?usp=sharing) | 12.8 MB | 多维数组 | [ycszen\u002FBiSeNet](https:\u002F\u002Fgithub.com\u002Fycszen\u002FBiSeNet)  | Apache2.0 |2021|\n\n### DNL\n\n解耦非局部神经网络\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_c8561e34faba.jpg\" width=400> \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_a4e56332c2ba.png\" width=400>\n\n| Google Drive 链接 | 大小 | 输出 | 数据集 | 原项目 | 许可证 | 年份 |\n| ------------- | ------------- | ------------- |------------- | ------------- | ------------- | ------------- |\n| [dnl_r50-d8_512x512_80k_ade20k](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1DOnPGocotsjXknBuNqikgpFVpmH6s_E3\u002Fview?usp=sharing) | 190.8 MB | MultiArray[512x512] |ADE20K| [yinmh17\u002FDNL-Semantic-Segmentation](https:\u002F\u002Fgithub.com\u002Fyinmh17\u002FDNL-Semantic-Segmentation)  | [Apache2.0](https:\u002F\u002Fgithub.com\u002Fyinmh17\u002FDNL-Semantic-Segmentation\u002Fblob\u002Fmaster\u002FLICENSE) |2020|\n\n### ISANet\n\n用于语义分割的交错稀疏自注意力机制\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_ff38df48ea91.jpg\" width=400> \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_b99758103347.png\" width=400>\n\n| Google Drive 链接 | 大小 | 
输出 | 数据集 | 原项目 | 许可证 | 年份 |\n| ------------- | ------------- | ------------- |------------- | ------------- | ------------- | ------------- |\n| [isanet_r50-d8_512x512_80k_ade20k](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F114ypGU9S1BOT2otl7P_gsmZbA3bCmz5K\u002Fview?usp=sharing) | 141.5 MB | MultiArray[512x512] |ADE20K| [openseg-group\u002Fopenseg.pytorch](https:\u002F\u002Fgithub.com\u002Fopenseg-group\u002Fopenseg.pytorch) | [MIT](https:\u002F\u002Fgithub.com\u002Fopenseg-group\u002Fopenseg.pytorch\u002Fblob\u002Fmaster\u002FLICENSE) |ArXiv'2019\u002FIJCV'2021|\n\n### FastFCN\n\n重新思考骨干网络中的空洞卷积在语义分割中的应用\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_83fc626bdda1.jpg\" width=400> \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_015b6b9e07c5.png\" width=400>\n\n| Google Drive 链接 | 大小 | 输出 | 数据集 | 原项目 | 许可证 | 年份 |\n| ------------- | ------------- | ------------- |------------- | ------------- | ------------- | ------------- |\n| [fastfcn_r50-d32_jpu_aspp_512x512_80k_ade20k](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1-2CUR1M-a4xzUxdf5enU_9cUdxONmFbT\u002Fview?usp=sharing) | 326.2 MB | MultiArray[512x512] |ADE20K| [wuhuikai\u002FFastFCN](https:\u002F\u002Fgithub.com\u002Fwuhuikai\u002FFastFCN) | [MIT](https:\u002F\u002Fgithub.com\u002Fwuhuikai\u002FFastFCN\u002Fblob\u002Fmaster\u002FLICENSE) |ArXiv'2019|\n\n### GCNet\n\n非局部网络与挤压激励网络的结合及其扩展\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_87a6440b6afd.jpg\" width=400> \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_aba9f67cace4.png\" width=400>\n\n| Google Drive 链接 | 大小 | 输出 | 数据集 | 原项目 | 许可证 | 年份 |\n| ------------- | ------------- | ------------- |------------- | ------------- | ------------- | ------------- |\n| [gcnet_r50-d8_512x512_20k_voc12aug](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1-DfjorbUDFXOVasSPoGk7GP1XC_OnNVT\u002Fview?usp=sharing) | 189 MB | MultiArray[512x512] |PascalVOC| [xvjiarui\u002FGCNet](https:\u002F\u002Fgithub.com\u002Fxvjiarui\u002FGCNet) | [Apache License 2.0](https:\u002F\u002Fgithub.com\u002Fxvjiarui\u002FGCNet\u002Fblob\u002Fmaster\u002FLICENSE) |ICCVW'2019\u002FTPAMI'2020|\n\n### DANet\n\n用于场景分割的双注意力网络（CVPR2019）\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_c53ca04b5a12.jpg\" width=400> \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_cc0495d33f9e.png\" width=400>\n\n| Google Drive 链接 | 大小 | 输出 | 数据集 | 原项目 | 许可证 | 年份 |\n| ------------- | ------------- | ------------- |------------- | ------------- | ------------- | ------------- |\n| [danet_r50-d8_512x1024_40k_cityscapes](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1A45r_725V7edPTSrjA4T-T03rPD6Sj2z\u002Fview?usp=sharing) | 189.7 MB | MultiArray[512x1024] |CityScapes| [junfu1115\u002FDANet](https:\u002F\u002Fgithub.com\u002Fjunfu1115\u002FDANet\u002F) | [MIT](https:\u002F\u002Fgithub.com\u002Fjunfu1115\u002FDANet\u002Fblob\u002Fmaster\u002FLICENSE) |CVPR2019|\n\n### Semantic-FPN\n\n全景特征金字塔网络\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_41f33efe476f.jpg\" width=400> \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_42ae8a2d4720.png\" width=400>\n\n| Google Drive 链接 | 大小 | 
输出 | 数据集 | 原项目 | 许可证 | 年份 |\n| ------------- | ------------- | ------------- |------------- | ------------- | ------------- | ------------- |\n| [fpn_r50_512x1024_80k_cityscapes](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1_IVhCnJ--54P7qVGLo8-ks_LRGXJQXht\u002Fview?usp=sharing) | 108.6 MB | MultiArray[512x1024] |CityScapes| [facebookresearch\u002Fdetectron2](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fdetectron2) | [Apache License 2.0](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fdetectron2\u002Fblob\u002Fmain\u002FLICENSE) |2019|\n\n### cloths_segmentation\n\n用于各种衣物二值分割的代码。\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_9eb949b2e400.jpg\" width=400> \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_e69abf79b827.jpg\" width=400>\n\n| Google Drive 链接 | 大小 | 输出 | 数据集 | 原项目 | 许可证 | 年份 |\n| ------------- | ------------- | ------------- |------------- | ------------- | ------------- | ------------- |\n| [clothSegmentation](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1-2AydEgkth6UTD5bu13R0fJYoqZZMG3e\u002Fview?usp=sharing) | 50.1 MB | 图像（灰度 640x960） |[fashion-2019-FGVC6](https:\u002F\u002Fwww.kaggle.com\u002Fc\u002Fimaterialist-fashion-2019-FGVC6)| [facebookresearch\u002Fdetectron2](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fdetectron2) | [MIT](https:\u002F\u002Fgithub.com\u002Fternaus\u002Fcloths_segmentation\u002Fblob\u002Fmain\u002FLICENSE) |2020|\n\n### easyportrait\n\nEasyPortrait - 人脸解析与人像分割数据集。\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_a2863751b649.png\" width=400> \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_4e929fed0f07.png\" width=400>\n\n| Google Drive 链接 | 大小 | 输出 | 原项目 | 许可证 | 年份 | Swift 示例 | 转换脚本 |\n| ------------- | ------------- | ------------- |------------- | ------------- | ------------- |------------- |------------- |\n| [easyportrait-segformer512-fp](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F13BUhNpQHodAgcj6eJaPbzuSUaFn3JuU-?usp=sharing) | 7.6 MB | 图像（灰度 512x512）* 9 |[hukenovs\u002Feasyportrait](https:\u002F\u002Fgithub.com\u002Fhukenovs\u002Feasyportrait) | [知识共享](https:\u002F\u002Fgithub.com\u002Fhukenovs\u002Feasyportrait\u002Ftree\u002Fmain\u002Flicense) |2023|[easyportrait-coreml](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002Feasyportrait-coreml)|[![Open In Colab](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F11a3XWFA8fa8V0a2zgWFqOMUaZgF4O1qt?usp=sharing)|\n\n### MobileSAM\n\n更快的 Segment Anything：面向移动应用的轻量级 SAM。MobileSAM 通过解耦的知识蒸馏，用轻量级的 ViT-Tiny 编码器替代了沉重的 ViT-H 图像编码器，使其体积缩小约 60 倍，速度提升约 40 倍，相比原始的 SAM。\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_1ecbf02c0520.png\" width=200>\n| 下载链接 | 大小 | 输出 | 原项目 | 许可证 | 年份 | 示例项目 |\n| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |\n| [MobileSAM.zip](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FSamKit\u002Freleases\u002Fdownload\u002Fv1.0.0\u002FMobileSAM.zip) | 23 MB（编码器 13 MB + 解码器 9.8 MB） | 分割掩膜 | [ChaoningZhang\u002FMobileSAM](https:\u002F\u002Fgithub.com\u002FChaoningZhang\u002FMobileSAM) | [Apache 
2.0](https:\u002F\u002Fgithub.com\u002FChaoningZhang\u002FMobileSAM\u002Fblob\u002Fmaster\u002FLICENSE) | 2023 | [SamKit](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FSamKit) |\n\n### SAM2-Tiny\n\nSAM 2：对图像和视频进行任意分割。SAM 2 使用带有记忆功能的流式架构，将可提示分割从图像扩展到视频。Tiny 变体采用 Hiera-T 主干网络，以实现高效的设备端推理。\n\n| 下载链接 | 大小 | 输出 | 原项目 | 许可证 | 年份 | 示例项目 |\n| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |\n| [SAM2Tiny.zip](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FSamKit\u002Freleases\u002Fdownload\u002Fv1.0.0\u002FSAM2Tiny.zip) | 76 MB（图像编码器 64 MB + 提示编码器 2 MB + 掩膜解码器 9.8 MB） | 分割掩膜 | [facebookresearch\u002Fsam2](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fsam2) | [Apache 2.0](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fsam2\u002Fblob\u002Fmain\u002FLICENSE) | 2024 | [SamKit](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FSamKit) |\n\n# 视频抠图\n\n### MatAnyone\n\n[pq-yang\u002FMatAnyone](https:\u002F\u002Fgithub.com\u002Fpq-yang\u002FMatAnyone)（CVPR 2025）—— 具有对象级记忆传播的时序一致视频抠图。给定第一帧的掩膜，该网络会跟踪并细化整段视频中的 Alpha 抠图，能够比逐帧抠图基线更好地保持清晰的边缘（如头发、半透明区域）。它基于 Cutie 视频目标分割主干网络构建，并配备了专门用于抠图的掩膜解码器。\n\nCoreML 版本将网络拆分为 5 个无状态模块，以便每帧的记忆状态机可以在 Swift 中运行，而 CoreML 则负责繁重的计算任务。端到端 Alpha 抠图与官方 PyTorch 参考实现的对比结果显示：MAE \u003C 2e-4，相关系数在 18 帧中超过 0.9999，其中包括 3 个记忆周期。\n\n示例应用程序使用 Vision 的 `VNGeneratePersonSegmentationRequest` 自动生成第一帧的掩膜——选择一段视频，点击“移除背景”，即可将前景合成到选定的背景颜色上。\n\n| 下载链接 | 大小 | 输入 | 输出 | 原项目 | 许可证 | 年份 | 示例项目 | 转换脚本 |\n| ------------- | ---- | ----- | ------ | ---------------- | ------- | ---- | -------------- | ----------------- |\n| MatAnyone（5 个 mlpackage，FP16 总大小约 111 MB） | 111 MB | 图像 [1,3,432,768]（每帧状态由 Swift 维护） | alpha 抠图 [1,1,432,768] | [pq-yang\u002FMatAnyone](https:\u002F\u002Fgithub.com\u002Fpq-yang\u002FMatAnyone) | [NTU S-Lab 1.0](https:\u002F\u002Fgithub.com\u002Fpq-yang\u002FMatAnyone\u002Fblob\u002Fmain\u002FLICENSE) | 2025 | [MatAnyoneDemo](sample_apps\u002FMatAnyoneDemo) | [convert_matanyone.py](conversion_scripts\u002Fconvert_matanyone.py) |\n\n有关每帧状态机、5 模块拆分及转换细节，请参阅 [`sample_apps\u002FMatAnyoneDemo\u002FREADME.md`](sample_apps\u002FMatAnyoneDemo\u002FREADME.md)。\n\n# 超分辨率\n\n### [Real ESRGAN](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1cpm-x12Ih7Cqd_kOjfTvtt4ipGS3BpCx\u002Fview?usp=sharing)\n\u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_a80856541913.png\"> \u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_981c37ba515f.jpg\"> \n\n| Google Drive 链接 | 大小 | 输出 |原项目 | 许可证 | 年份 |\n| ------------- | ------------- | ------------- | ------------- | ------------- |------------- |\n| [Real ESRGAN4x](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F16JEWh48fgQc8az7avROePOd-PYda0Yi2\u002Fview?usp=sharing) | 66.9 MB | 图像（RGB 2048x2048）| [xinntao\u002FReal-ESRGAN](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN)  | [BSD 3-Clause 许可证](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN\u002Fblob\u002Fmaster\u002FLICENSE) |2021|\n| [Real ESRGAN Anime4x](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1qXdLx46Lpqya7Txc5Wvgkd2Dqlnqm3Qm\u002Fview?usp=sharing) | 66.9 MB | 图像（RGB 2048x2048）| [xinntao\u002FReal-ESRGAN](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN)  | [BSD 3-Clause 许可证](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN\u002Fblob\u002Fmaster\u002FLICENSE) |2021|\n\n### 
[GFPGAN](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1-3fF4aPnh8ygUOmKItIrZ318xI9JGmQx\u002Fview?usp=sharing)\n\n利用生成式面部先验实现真实世界的盲态人脸修复\n\n\u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_13e4a42fbf51.png\"> \u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_23b02eb01ae6.png\"> \n\n| Google Drive 链接 | 大小 | 输出 |原项目 | 许可证 |年份 |\n| ------------- | ------------- | ------------- | ------------- | ------------- |------------- |\n| [GFPGAN](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1-3fF4aPnh8ygUOmKItIrZ318xI9JGmQx\u002Fview?usp=sharing) | 337.4 MB | 图像（RGB 512x512）| [TencentARC\u002FGFPGAN](https:\u002F\u002Fgithub.com\u002FTencentARC\u002FGFPGAN)  | [Apache2.0](https:\u002F\u002Fgithub.com\u002FTencentARC\u002FGFPGAN\u002Fblob\u002Fmaster\u002FLICENSE) |2021|\n\n### [BSRGAN](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1-3K89vJZ5OUAh4xdSAifgnL52jbl2fVf\u002Fview?usp=sharing)\n\u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_e2df08237f0c.jpg\"> \u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_06a5ec4d7e2d.jpg\"> \n\n| Google Drive 链接 | 大小 | 输出 | 原项目 | 许可证 | 年份 |\n| ------------- | ------------- | ------------- | ------------- | ------------- |------------- |\n| [BSRGAN](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1-3K89vJZ5OUAh4xdSAifgnL52jbl2fVf\u002Fview?usp=sharing) | 66.9 MB | 图像（RGB 2048x2048）| [cszn\u002FBSRGAN](https:\u002F\u002Fgithub.com\u002Fcszn\u002FBSRGAN)  |  |2021|\n\n### [A-ESRGAN](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1-0rKVQtFXNWfIBIpvyemjuO3O00GZBeb\u002Fview?usp=sharing)\n\u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_7e57c97986de.png\"> \u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_3733426ee7e7.png\"> \n\n| Google Drive 链接 | 大小 | 输出 | 原项目 | 许可证 | 年份 | 转换脚本 |\n| ------------- | ------------- | ------------- | ------------- | ------------- |------------- |------------- |\n| [A-ESRGAN](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1-0rKVQtFXNWfIBIpvyemjuO3O00GZBeb\u002Fview?usp=sharing) | 63.8 MB | 图像（RGB 1024x1024）| [aesrgan\u002FA-ESRGANN](https:\u002F\u002Fgithub.com\u002Faesrgan\u002FA-ESRGAN)  | [BSD 3-Clause 许可证](https:\u002F\u002Fgithub.com\u002Faesrgan\u002FA-ESRGAN\u002Fblob\u002Fmain\u002FLICENSE) |2021|[![Open In Colab](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1UxtSXnVYOXEfTVdIeoP7HQEjsyVbqOKa?usp=sharing)|\n\n### [Beby-GAN](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1bJ7_NgR2KXI46JiFk5hH_6IdCHMyhN05\u002Fview?usp=sharing)\n\n用于高细节图像超分辨率的最佳伙伴GANs\n\n\u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_fc5f6c358c93.jpg\"> \u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_3add270a5491.jpg\"> \n\n| Google Drive 链接 | 大小 | 输出 | 原项目 | 许可证 | 年份 |\n| ------------- | ------------- | ------------- | ------------- | ------------- |------------- |\n| 
[Beby-GAN](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1bJ7_NgR2KXI46JiFk5hH_6IdCHMyhN05\u002Fview?usp=sharing) | 66.9 MB | 图像（RGB 2048x2048）| [dvlab-research\u002FSimple-SR](https:\u002F\u002Fgithub.com\u002Fdvlab-research\u002FSimple-SR)  | [MIT](https:\u002F\u002Fgithub.com\u002Fdvlab-research\u002FSimple-SR\u002Fblob\u002Fmaster\u002FLICENSE) |2021|\n\n### [RRDN](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1-M30vR0xMuYDn2p5O4KZrUnUXy4SNThF\u002Fview?usp=sharing)\n\n用于图像超分辨率的残差级联密集网络。\n\n\u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_7e8d00ce87b1.png\"> \u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_83771c7d6718.png\">\n\n| Google Drive 链接 | 大小 | 输出 | 原项目 | 许可证 | 年份 |\n| ------------- | ------------- | ------------- | ------------- | ------------- |------------- |\n| [RRDN](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1-M30vR0xMuYDn2p5O4KZrUnUXy4SNThF\u002Fview?usp=sharing) | 16.8 MB | 图像（RGB 2048x2048）| [idealo\u002Fimage-super-resolution](https:\u002F\u002Fgithub.com\u002Fidealo\u002Fimage-super-resolution)  | [Apache2.0](https:\u002F\u002Fgithub.com\u002Fidealo\u002Fimage-super-resolution\u002Fblob\u002Fmaster\u002FLICENSE) |2018|\n\n\n### [Fast-SRGAN](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1gYXbhcSUm5rhcCAmwLruonAhu8jvyDL8\u002Fview?usp=sharing)\n\n快速SRGAN。\n\n\u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_4c296bf372e2.png\"> \u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_6d7f12c28fbf.png\">\n\n| Google Drive 链接 | 大小 | 输出 | 原项目 | 许可证 | 年份 |\n| ------------- | ------------- | ------------- | ------------- | ------------- |------------- |\n| [Fast-SRGAN](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1gYXbhcSUm5rhcCAmwLruonAhu8jvyDL8\u002Fview?usp=sharing) | 628 KB | 图像（RGB 1024x1024）| [HasnainRaz\u002FFast-SRGAN](https:\u002F\u002Fgithub.com\u002FHasnainRaz\u002FFast-SRGAN)  | [MIT](https:\u002F\u002Fgithub.com\u002FHasnainRaz\u002FFast-SRGAN\u002Fblob\u002Fmaster\u002FLICENSE) |2019|\n\n### [ESRGAN](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1fkRbh_gckuFlgr357OIdOrEJK4T_2Xkz\u002Fview?usp=sharing)\n\n增强版SRGAN。\n\n\u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_d3afdd41d8c1.jpg\"> \u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_0913b95fb8f8.jpg\">\n\n| Google Drive 链接 | 大小 | 输出 | 原项目 | 许可证 | 年份 |\n| ------------- | ------------- | ------------- | ------------- | ------------- |------------- |\n| [ESRGAN](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1fkRbh_gckuFlgr357OIdOrEJK4T_2Xkz\u002Fview?usp=sharing) | 66.9 MB | 图像（RGB 2048x2048）| [xinntao\u002FESRGAN](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FESRGAN)  | [Apache 2.0](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FESRGAN\u002Fblob\u002Fmaster\u002FLICENSE) |2018|\n\n### [UltraSharp](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1-Q1SdS8iHWTfTs7FE39pUTEubPks30Ca?usp=drive_link)\n\n预训练：4倍ESRGAN\n\n\u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_0d8247014bde.png\"> \u003Cimg width=\"400\" 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_28656aa6ff9a.png\">\n\n| Google Drive 链接 | 大小 | 输出 | 原项目 | 许可证 | 年份 |\n| ------------- | ------------- | ------------- | ------------- | ------------- |------------- |\n| [UltraSharp](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1-Q1SdS8iHWTfTs7FE39pUTEubPks30Ca?usp=drive_link) | 34 MB | 图像（RGB 1024x1024）| [Kim2019\u002F](https:\u002F\u002Fopenmodeldb.info\u002Fmodels\u002F4x-UltraSharp)  | [CC-BY-NC-SA-4.0](https:\u002F\u002Fcreativecommons.org\u002Flicenses\u002Fby-nc-sa\u002F4.0\u002Fdeed.ja) |2021|\n\n### [SRGAN](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1-076W2o0wCtoODptikX1eOnlFBx2s3qK\u002Fview?usp=sharing)\n\n使用生成对抗网络实现照片级真实感单张图像超分辨率。\n\n\u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_418f37159fa9.png\"> \u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_ef0a1344b01d.png\">\n\n| Google Drive 链接 | 大小 | 输出 | 原项目 | 许可证 | 年份 |\n| ------------- | ------------- | ------------- | ------------- | ------------- |------------- |\n| [SRGAN](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1-076W2o0wCtoODptikX1eOnlFBx2s3qK\u002Fview?usp=sharing) | 6.1 MB | 图像（RGB 2048x2048）| [dongheehand\u002FSRGAN-PyTorch](https:\u002F\u002Fgithub.com\u002Fdongheehand\u002FSRGAN-PyTorch)  |  |2017|\n\n### [SRResNet](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1-2kYZgF_Z6vntrRsHmRiwyHJg5TC1PSW\u002Fview?usp=sharing)\n\n基于生成对抗网络的逼真单张图像超分辨率。\n\n\u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_b40dff30c0c7.png\"> \u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_be501370880c.jpg\">\n\n| Google Drive 链接 | 大小 | 输出 | 原始项目 | 许可证 | 年份 |\n| ------------- | ------------- | ------------- | ------------- | ------------- |------------- |\n| [SRResNet](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1-2kYZgF_Z6vntrRsHmRiwyHJg5TC1PSW\u002Fview?usp=sharing) | 6.1 MB | 图像(RGB 2048x2048)| [dongheehand\u002FSRGAN-PyTorch](https:\u002F\u002Fgithub.com\u002Fdongheehand\u002FSRGAN-PyTorch)  |  |2017|\n\n### [LESRCNN](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1-0zgxURZwqX0mAAVy69K-owE7QP-7NfJ\u002Fview?usp=sharing)\n\n基于增强CNN的轻量级图像超分辨率。\n\n\u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_64773691950b.jpg\"> \u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_45feeefc2f14.jpg\">\n\n| Google Drive 链接 | 大小 | 输出 | 原始项目 | 许可证 | 年份 | 转换脚本 |\n| ------------- | ------------- | ------------- | ------------- | ------------- |------------- |------------- |\n| [LESRCNN](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1-0zgxURZwqX0mAAVy69K-owE7QP-7NfJ\u002Fview?usp=sharing) | 4.3 MB | 图像(RGB 512x512)| [hellloxiaotian\u002FLESRCNN](https:\u002F\u002Fgithub.com\u002Fhellloxiaotian\u002FLESRCNN)  |  |2020|[![Open In Colab](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1Q6piAJvXSmb-DcdFipcRUEYuHi9fnTm7?usp=sharing)|\n\n### [MMRealSR](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1-HwMLvOy_hHycHNhojob6uT8t6tRyWqb\u002Fview?usp=sharing)\n\n基于度量学习的真实世界交互式调制超分辨率\n\n\u003Cimg 
width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_2212aaa1dafc.png\"> \u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_645ffc1647ee.png\">\n\n| Google Drive 链接 | 大小 | 输出 | 原始项目 | 许可证 | 年份 | 转换脚本 |\n| ------------- | ------------- | ------------- | ------------- | ------------- |------------- |------------- |\n| [MMRealSRGAN](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1-HwMLvOy_hHycHNhojob6uT8t6tRyWqb\u002Fview?usp=sharing) | 104.6 MB | 图像(RGB 1024x1024)| [TencentARC\u002FMM-RealSR](https:\u002F\u002Fgithub.com\u002FTencentARC\u002FMM-RealSR)  | [BSD 3-Clause](https:\u002F\u002Fgithub.com\u002FTencentARC\u002FMM-RealSR\u002Fblob\u002Fmain\u002FLICENSE) |2022|[![Open In Colab](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1zhUhQhdtP02N2pFIxsO5lin7tDOExZCo?usp=sharing)|\n| [MMRealSRNet](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1-77P8AtHFh5kca2kYZ6X7GaUueoa3el_\u002Fview?usp=sharing) | 104.6 MB | 图像(RGB 1024x1024)| [TencentARC\u002FMM-RealSR](https:\u002F\u002Fgithub.com\u002FTencentARC\u002FMM-RealSR)  | [BSD 3-Clause](https:\u002F\u002Fgithub.com\u002FTencentARC\u002FMM-RealSR\u002Fblob\u002Fmain\u002FLICENSE) |2022|[![Open In Colab](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1zhUhQhdtP02N2pFIxsO5lin7tDOExZCo?usp=sharing)|\n\n### [DASR](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F10J2ehHewK2ppS5ToDqmtJ2Ei5k8vcdL0?usp=sharing)\n\n“用于盲超分辨率的无监督退化表征学习”在 CVPR 2021 中的 PyTorch 实现\n\n\u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_10a34b62819b.png\"> \u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_8e7db7552e9e.png\">\n\n| Google Drive 链接 | 大小 | 输出 | 原始项目 | 许可证 | 年份 |\n| ------------- | ------------- | ------------- | ------------- | ------------- |------------- |\n| [DASR](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F10J2ehHewK2ppS5ToDqmtJ2Ei5k8vcdL0?usp=sharing) | 12.1 MB | 图像(RGB 1024x1024)| [The-Learning-And-Vision-Atelier-LAVA\u002FDASR](https:\u002F\u002Fgithub.com\u002FThe-Learning-And-Vision-Atelier-LAVA\u002FDASR)  | [MIT](https:\u002F\u002Fgithub.com\u002FThe-Learning-And-Vision-Atelier-LAVA\u002FDASR\u002Fblob\u002Fmain\u002FLICENSE) |2022|\n\n\n### SinSR\n\n[wyf0912\u002FSinSR](https:\u002F\u002Fgithub.com\u002Fwyf0912\u002FSinSR) — 单步扩散式超分辨率（CVPR 2024，约1.13亿参数）。从 ResShift 中提炼而来，实现一步4倍放大。采用 Swin Transformer UNet 结合 VQ-VAE 隐空间。\n\n\u003Cimg width=\"512\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_1304f9efa5cf.png\">\n\n*左：双三次4倍放大，右：SinSR单步扩散超分辨率（128x128 → 512x512）*\n\n包含3个 CoreML 模型：VQ-VAE 编码器、Swin-UNet 去噪器（单步）以及带有向量量化功能的 VQ-VAE 解码器。\n\n| 下载链接 | 大小 | 输入 | 输出 | 原始项目 | 许可证 | 年份 | 示例项目 | 转换脚本 |\n| ------------- | ---- | ----- | ------ | ---------------- | ------- | ---- | -------------- | ----------------- |\n| [SinSR_Encoder.mlpackage.zip](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-Models\u002Freleases\u002Fdownload\u002Fsinsr-v1\u002FSinSR_Encoder.mlpackage.zip) | 39 MB | 图像 [1,3,1024,1024] | 隐变量 [1,3,256,256] | [wyf0912\u002FSinSR](https:\u002F\u002Fgithub.com\u002Fwyf0912\u002FSinSR) | 
[S-Lab](https:\u002F\u002Fgithub.com\u002Fwyf0912\u002FSinSR\u002Fblob\u002Fmain\u002FLICENSE) | 2024 | [SinSRDemo](sample_apps\u002FSinSRDemo) | [convert_sinsr.py](conversion_scripts\u002Fconvert_sinsr.py) |\n| [SinSR_Denoiser.mlpackage.zip](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-Models\u002Freleases\u002Fdownload\u002Fsinsr-v1\u002FSinSR_Denoiser.mlpackage.zip) | 420 MB | 输入 [1,6,256,256] | 预测的隐变量 [1,3,256,256] | | | | | |\n| [SinSR_Decoder.mlpackage.zip](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-Models\u002Freleases\u002Fdownload\u002Fsinsr-v1\u002FSinSR_Decoder.mlpackage.zip) | 58 MB | 隐变量 [1,3,256,256] | 图像 [1,3,1024,1024] | | | | | |\n\n推理流程及转换细节请参阅 [`sample_apps\u002FSinSRDemo\u002FREADME.md`](sample_apps\u002FSinSRDemo\u002FREADME.md)。\n\n\n# 低光增强\n\n### StableLLVE\n\n从单张图像中学习时间一致性以进行低光视频增强。\n\n\u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_b6a7c8c7bfce.jpg\"> \u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_d945b4ce0ea2.jpg\"> \n\n| Google Drive 链接 | 大小 | 输出 | 原始项目 | 许可证 | 年份 |\n| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |\n| [StableLLVE](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1-9xry7XeCJYsZadxcfTscjGi_Sna5NhM\u002Fview?usp=sharing) | 17.3 MB | 图像(RGB 512x512)| [zkawfanx\u002FStableLLVE](https:\u002F\u002Fgithub.com\u002Fzkawfanx\u002FStableLLVE)  | [MIT](https:\u002F\u002Fgithub.com\u002Fzkawfanx\u002FStableLLVE\u002Fblob\u002Fmain\u002FLICENSE) |2021|\n\n### Zero-DCE\n\n无参考深度曲线估计用于低光图像增强\n\n\u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_6362ab81ca31.jpg\"> \u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_ef7567af1243.jpg\"> \n\n| Google Drive 链接 | 大小 | 输出 | 原项目 | 许可证 | 年份 | 转换脚本 |\n| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |\n| [Zero-DCE](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1-0lxlBNFm8E_y9ImhS2wxq0p1ZJlXyoA\u002Fview?usp=sharing) | 320KB | 图像（RGB 512x512）| [Li-Chongyi\u002FZero-DCE](https:\u002F\u002Fgithub.com\u002FLi-Chongyi\u002FZero-DCE)  | [查看仓库](https:\u002F\u002Fgithub.com\u002FLi-Chongyi\u002FZero-DCE) |2021|[![Open In Colab](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1sh3O-4EvYv49Rlm59beH6koHe0sYxc2r?usp=sharing)|\n\n### Retinexformer\n\nRetinexformer：基于 Retinex 的单阶段 Transformer 用于低光图像增强\n\n\u003Cimg width=\"256\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_a13d4087f9bb.png\"> \u003Cimg width=\"256\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_c3cc3991aa15.png\"> \n\n| Google Drive 链接 | 大小 | 输出 | 原项目 | 许可证 | 年份 | 转换脚本 |\n| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |\n| [ZRetinexformer FiveK](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1ea6vBuLG-z4TAK4iU6vrgABAAlHuDdhy?usp=drive_link) | 3.4MB | 图像（RGB 512x512）| [caiyuanhao1998\u002FRetinexformer](https:\u002F\u002Fgithub.com\u002Fcaiyuanhao1998\u002FRetinexformer)  | [MIT](https:\u002F\u002Fgithub.com\u002Fcaiyuanhao1998\u002FRetinexformer?tab=MIT-1-ov-file#readme) |2023|[![Open In 
Colab](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F10PtPI4V72Pp6PQZcrah-vClGzjKLaGGK?usp=sharing)|\n| [ZRetinexformer NTIRE](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F14piyZVwzu4Abpfgwh2HIKoubeeE-3qoq?usp=drive_link) | 3.4MB | 图像（RGB 512x512）| [caiyuanhao1998\u002FRetinexformer](https:\u002F\u002Fgithub.com\u002Fcaiyuanhao1998\u002FRetinexformer)  | [MIT](https:\u002F\u002Fgithub.com\u002Fcaiyuanhao1998\u002FRetinexformer?tab=MIT-1-ov-file#readme) |2023|[![Open In Colab](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F10PtPI4V72Pp6PQZcrah-vClGzjKLaGGK?usp=sharing)|\n\n# 图像修复\n\n### MPRNet\n\n多阶段渐进式图像修复。\n\n去模糊\n\n\u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_8150d8db8d47.jpg\"> \u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_c20877dbef5b.png\"> \n\n去噪\n\n\u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_50b25698d3fe.png\"> \u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_851f370d5b59.png\"> \n\n去雨\n\n\u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_135fdda71559.jpg\"> \u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_0f656888269b.jpg\"> \n\n| Google Drive 链接 | 大小 | 输出 | 原项目 | 许可证 | 年份 |\n| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |\n| [MPRNetDebluring](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1--5L6BxxbyYGY9ey5WCIrl7g1yYBN27U\u002Fview?usp=sharing) | 137.1 MB | 图像（RGB 512x512）| [swz30\u002FMPRNet](https:\u002F\u002Fgithub.com\u002Fswz30\u002FMPRNet)  | [MIT](https:\u002F\u002Fgithub.com\u002Fswz30\u002FMPRNet\u002Fblob\u002Fmain\u002FLICENSE.md) |2021|\n| [MPRNetDeNoising](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1-04xou-UgoflZb7MqTBycCpuLWKUAj0i\u002Fview?usp=sharing) | 108 MB | 图像（RGB 512x512）| [swz30\u002FMPRNet](https:\u002F\u002Fgithub.com\u002Fswz30\u002FMPRNet)  | [MIT](https:\u002F\u002Fgithub.com\u002Fswz30\u002FMPRNet\u002Fblob\u002Fmain\u002FLICENSE.md) |2021|\n| [MPRNetDeraining](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1tGvjj49yaDym24vGdGqr1VKOtGd7ALKB\u002Fview?usp=sharing) | 24.5 MB | 图像（RGB 512x512）| [swz30\u002FMPRNet](https:\u002F\u002Fgithub.com\u002Fswz30\u002FMPRNet)  | [MIT](https:\u002F\u002Fgithub.com\u002Fswz30\u002FMPRNet\u002Fblob\u002Fmain\u002FLICENSE.md) |2021|\n\n
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_c32da7743fb2.jpg\"> \u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_842619f511e4.jpg\"> \n\n低光增强\n\n\u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_107021d8aa0c.jpg\"> \u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_6f131ee061f5.jpg\"> \n\n| Google Drive 链接 | 大小 | 输出 | 原项目 | 许可证 | 年份 | 转换脚本 |\n| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |\n| [MIRNetv2Denoising](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1-HY2AhQV84LUZMadsbIi4TGBhEntAOaF\u002Fview?usp=sharing) | 42.5 MB | 图像（RGB 512×512）| [swz30\u002FMIRNetv2](https:\u002F\u002Fgithub.com\u002Fswz30\u002FMIRNetv2)  | [学术公共许可证](https:\u002F\u002Fgithub.com\u002Fswz30\u002FMIRNetv2\u002Fblob\u002Fmain\u002FLICENSE.md) |2022|[![Open In Colab](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1lSWCn0et08hdS3sgKc40c7VXUvKcqCSi?usp=sharing)|\n| [MIRNetv2SuperResolution](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1-BLfJj8xK_bw-GsGLfRR9uMvuA2VOqsh\u002Fview?usp=sharing) | 42.5 MB | 图像（RGB 512×512）| [swz30\u002FMIRNetv2](https:\u002F\u002Fgithub.com\u002Fswz30\u002FMIRNetv2)  | [学术公共许可证](https:\u002F\u002Fgithub.com\u002Fswz30\u002FMIRNetv2\u002Fblob\u002Fmain\u002FLICENSE.md) |2022|[![Open In Colab](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1lSWCn0et08hdS3sgKc40c7VXUvKcqCSi?usp=sharing)|\n| [MIRNetv2ContrastEnhancement](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1--q9Decpy1ZZbSifiE26SkpXstoadpM8\u002Fview?usp=sharing) | 42.5 MB | 图像（RGB 512×512）| [swz30\u002FMIRNetv2](https:\u002F\u002Fgithub.com\u002Fswz30\u002FMIRNetv2)  | [学术公共许可证](https:\u002F\u002Fgithub.com\u002Fswz30\u002FMIRNetv2\u002Fblob\u002Fmain\u002FLICENSE.md) |2022|[![Open In Colab](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1lSWCn0et08hdS3sgKc40c7VXUvKcqCSi?usp=sharing)|\n| [MIRNetv2LowLightEnhancement](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1Yh3FCogRfQ8k7Hh_UIZAnGwwhXHX6k6P\u002Fview?usp=sharing) | 42.5 MB | 图像（RGB 512×512）| [swz30\u002FMIRNetv2](https:\u002F\u002Fgithub.com\u002Fswz30\u002FMIRNetv2)  | [学术公共许可证](https:\u002F\u002Fgithub.com\u002Fswz30\u002FMIRNetv2\u002Fblob\u002Fmain\u002FLICENSE.md) |2022|[![Open In Colab](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1lSWCn0et08hdS3sgKc40c7VXUvKcqCSi?usp=sharing)|\n\n# 图像生成\n\n### [MobileStyleGAN](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1rUV6AXwp8JhPPmkog-0r0AUGzUvN9DmW?usp=sharing)\n\u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_0a68ebb38c0b.jpg\"> \u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_e0113b9b8c24.jpg\"> \n\n| Google Drive 链接 | 大小 | 输出 | 原项目 | 许可证 | 示例项目 |\n| ------------- | ------------- | ------------- | ------------- |  ------------- |  ------------- | \n| 
### MIRNetv2\n\n用于快速图像修复与增强的特征学习。\n\n去噪\n\n\u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_bcd61a753a56.png\"> \u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_ce91cb656dfb.png\"> \n\n超分辨率\n\n\u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_20cca9f617d9.jpg\"> \u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_2bb5e7818656.jpg\"> \n\n对比度增强\n\n\u003Cimg width=\"400\" 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_842619f511e4.jpg\"> \n\n低光增强\n\n\u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_107021d8aa0c.jpg\"> \u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_6f131ee061f5.jpg\"> \n\n| Google Drive 链接 | 大小 | 输出 | 原项目 | 许可证 | 年份 | 转换脚本 |\n| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |\n| [MIRNetv2Denoising](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1-HY2AhQV84LUZMadsbIi4TGBhEntAOaF\u002Fview?usp=sharing) | 42.5 MB | 图像（RGB 512x512）| [swz30\u002FMIRNetv2](https:\u002F\u002Fgithub.com\u002Fswz30\u002FMIRNetv2)  | [学术公共许可证](https:\u002F\u002Fgithub.com\u002Fswz30\u002FMIRNetv2\u002Fblob\u002Fmain\u002FLICENSE.md) |2022|[![在 Colab 中打开](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1lSWCn0et08hdS3sgKc40c7VXUvKcqCSi?usp=sharing)|\n| [MIRNetv2SuperResolution](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1-BLfJj8xK_bw-GsGLfRR9uMvuA2VOqsh\u002Fview?usp=sharing) | 42.5 MB | 图像（RGB 512x512）| [swz30\u002FMIRNetv2](https:\u002F\u002Fgithub.com\u002Fswz30\u002FMIRNetv2)  | [学术公共许可证](https:\u002F\u002Fgithub.com\u002Fswz30\u002FMIRNetv2\u002Fblob\u002Fmain\u002FLICENSE.md) |2022|[![在 Colab 中打开](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1lSWCn0et08hdS3sgKc40c7VXUvKcqCSi?usp=sharing)|\n| [MIRNetv2ContrastEnhancement](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1--q9Decpy1ZZbSifiE26SkpXstoadpM8\u002Fview?usp=sharing) | 42.5 MB | 图像（RGB 512x512）| [swz30\u002FMIRNetv2](https:\u002F\u002Fgithub.com\u002Fswz30\u002FMIRNetv2)  | [学术公共许可证](https:\u002F\u002Fgithub.com\u002Fswz30\u002FMIRNetv2\u002Fblob\u002Fmain\u002FLICENSE.md) |2022|[![在 Colab 中打开](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1lSWCn0et08hdS3sgKc40c7VXUvKcqCSi?usp=sharing)|\n| [MIRNetv2LowLightEnhancement](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1Yh3FCogRfQ8k7Hh_UIZAnGwwhXHX6k6P\u002Fview?usp=sharing) | 42.5 MB | 图像（RGB 512x512）| [swz30\u002FMIRNetv2](https:\u002F\u002Fgithub.com\u002Fswz30\u002FMIRNetv2)  | [学术公共许可证](https:\u002F\u002Fgithub.com\u002Fswz30\u002FMIRNetv2\u002Fblob\u002Fmain\u002FLICENSE.md) |2022|[![在 Colab 中打开](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1lSWCn0et08hdS3sgKc40c7VXUvKcqCSi?usp=sharing)|\n\n# 图像生成\n\n### [MobileStyleGAN](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1rUV6AXwp8JhPPmkog-0r0AUGzUvN9DmW?usp=sharing)\n\u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_0a68ebb38c0b.jpg\"> \u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_e0113b9b8c24.jpg\"> \n\n| Google Drive 链接 | 大小 | 输出 | 原项目 | 许可证 | 示例项目 |\n| ------------- | ------------- | ------------- | ------------- |  ------------- |  ------------- | \n| [MobileStyleGAN](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1rUV6AXwp8JhPPmkog-0r0AUGzUvN9DmW?usp=sharing) | 38.6MB  | 图像（彩色 1024 × 1024）| 
[bes-dev\u002FMobileStyleGAN.pytorch](https:\u002F\u002Fgithub.com\u002Fbes-dev\u002FMobileStyleGAN.pytorch)  | [Nvidia 源代码许可证-NC](https:\u002F\u002Fgithub.com\u002Fbes-dev\u002FMobileStyleGAN.pytorch\u002Fblob\u002Fdevelop\u002FLICENSE-NVIDIA) | [CoreML-StyleGAN](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-StyleGAN) |\n\n\n### [DCGAN](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F132GrmmuETSLTml1zWyLUnIksclP-8vGw\u002Fview?usp=sharing)\n\u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_a5bcc6c05781.png\">\n\n| Google Drive 链接 | 大小 | 输出 | 原项目 | \n| ------------- | ------------- | ------------- | ------------- | \n| [DCGAN](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F132GrmmuETSLTml1zWyLUnIksclP-8vGw\u002Fview?usp=sharing)　| 9.2MB | 多维数组 | [TensorFlowCore](https:\u002F\u002Fwww.tensorflow.org\u002Ftutorials\u002Fgenerative\u002Fdcgan)|\n\n\n# 图像到图像\n\n### [Anime2Sketch](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1-52NnZ1kajZI5Rk0tn3DegpU38la_jYk\u002Fview?usp=sharing)\n\u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_95f7c4de617c.jpeg\"> \u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_48e63132d6d4.jpeg\">\n\n| Google Drive 链接 | 大小 | 输出 | 原项目 | 许可证 | 使用方法 |\n| ------------- | ------------- | ------------- | ------------- | ------------- |  ------------- | \n| [Anime2Sketch](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1-52NnZ1kajZI5Rk0tn3DegpU38la_jYk\u002Fview?usp=sharing) | 217.7MB  | 图像（彩色 512 × 512）| [Mukosame\u002FAnime2Sketch](https:\u002F\u002Fgithub.com\u002FMukosame\u002FAnime2Sketch)  | [MIT](https:\u002F\u002Fgithub.com\u002FMukosame\u002FAnime2Sketch\u002Fblob\u002Fmaster\u002FLICENSE)| 拖放一张图片即可预览|\n\n\n### [AnimeGAN2Face_Paint_512_v2](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1phSgcAz3LNbk2v2RoSESmr7PFxTAHcxb\u002Fview?usp=sharing)\n\u003Cimg width=\"400\" src=\"https:\u002F\u002Fcamo.qiitausercontent.com\u002F74a02b6e0b80e52c2ae3af798c93eea9aa3e394d\u002F68747470733a2f2f71696974612d696d6167652d73746f72652e73332e61702d6e6f72742d312e616d617a6f6e617awas2e636f6d2f302f3233353235392f6663337653936332d383533302d333731312d6d316c662d3335366b666631666e322e706e67\"> \u003Cimg width=\"400\" src=\"https:\u002F\u002Fcamo.qiitausercontent.com\u002F311349da47136ff9ce61701d09ce59dc663c95bf\u002F68747470733a2f2f71696974612d696d6167652d73746f72652e73332e61702d6e6f72742d312e616d617a6f6e617awas2e636f6d2f302f3233353235392f6663337653936332d383533302d333731312d6d316c662d3335366b666631666e322e706e67\">\n\n| Google Drive 链接 | 大小 | 输出 | 原项目 | 转换脚本 |\n| ------------- | ------------- | ------------- | ------------- |  ------------- | \n| [AnimeGAN2Face_Paint_512_v2](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1phSgcAz3LNbk2v2RoSESmr7PFxTAHcxb\u002Fview?usp=sharing) | 8.6MB  | 图像（彩色 512 × 512）| [bryandlee\u002Fanimegan2-pytorch](https:\u002F\u002Fgithub.com\u002Fbryandlee\u002Fanimegan2-pytorch#additional-model-weights)  |[![在 Colab 中打开](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F1WGAxMaikjNIfqdGRndEOmNyeVf33nGNh?usp=sharing) |\n\n### [Photo2Cartoon](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1xFWZ9Rf1o_LtwBpmSw2zSwPGk2FY6Wya\u002Fview?usp=sharing)\n\u003Cimg width=\"400\" 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_2a14725a3a9c.png\"> \u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_521100713183.png\">\n\n| Google Drive 链接 | 大小 | 输出 | 原始项目 | 许可证 | 备注 |\n| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | \n| [Photo2Cartoon](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1xFWZ9Rf1o_LtwBpmSw2zSwPGk2FY6Wya\u002Fview?usp=sharing) | 15.2 MB  | 图像（彩色 256 × 256）| [minivision-ai\u002Fphoto2cartoon](https:\u002F\u002Fgithub.com\u002Fminivision-ai\u002Fphoto2cartoon) | [MIT](https:\u002F\u002Fgithub.com\u002Fminivision-ai\u002Fphoto2cartoon\u002Fblob\u002Fmaster\u002FLICENSE) | 输出与原始模型略有不同，原因是部分操作被手动替换。|\n\n### [AnimeGANv2_Hayao](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1G53oZ1hiMcLJs1loN_fe_VmBVfegh9ha\u002Fview?usp=sharing)\n\u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_1046f0e8ab73.jpg\"> \u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_33a09eb6b4bf.png\">\n\n| Google Drive 链接 | 大小 | 输出 | 原始项目 | 示例 |\n| ------------- | ------------- | ------------- | ------------- | ------------- |\n| [AnimeGANv2_Hayao](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1G53oZ1hiMcLJs1loN_fe_VmBVfegh9ha\u002Fview?usp=sharing)　| 8.7MB | 图像（256 x 256） | [TachibanaYoshino\u002FAnimeGANv2](https:\u002F\u002Fgithub.com\u002FTachibanaYoshino\u002FAnimeGANv2)|[AnimeGANv2-iOS](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FAnimeGANv2-iOS)|\n\n\n### [AnimeGANv2_Paprika](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F10drMcmF67iREUK8NY8ekMHrsyVirs5XT\u002Fview?usp=sharing)\n\u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_4d94744317a1.jpg\"> \u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_f64c12cae7d3.png\">\n\n| Google Drive 链接 | 大小 | 输出 | 原始项目 | \n| ------------- | ------------- | ------------- | ------------- | \n| [AnimeGANv2_Paprika](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F10drMcmF67iREUK8NY8ekMHrsyVirs5XT\u002Fview?usp=sharing)　| 8.7MB | 图像（256 x 256） | [TachibanaYoshino\u002FAnimeGANv2](https:\u002F\u002Fgithub.com\u002FTachibanaYoshino\u002FAnimeGANv2)|\n\n\n### [WarpGAN 卡通化](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1HE3qvfjuXZMFelRcmmGsLzoO5dV8lnaQ\u002Fview?usp=sharing)\n\u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_e0113b9b8c24.jpg\"> \u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_3855045543a5.jpg\">\n\n| Google Drive 链接 | 大小 | 输出 | 原始项目 | \n| ------------- | ------------- | ------------- | ------------- | \n| [WarpGAN 卡通化](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1HE3qvfjuXZMFelRcmmGsLzoO5dV8lnaQ\u002Fview?usp=sharing)　| 35.5MB | 图像（256 x 256） | [seasonSH\u002FWarpGAN](https:\u002F\u002Fgithub.com\u002FseasonSH\u002FWarpGAN)|\n\n### [UGATIT selfie2anime](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1o15OO0Kn0tq79fFkmBm3PES93IRQOxB-\u002Fview?usp=sharing)\n\n\u003Cimg width=\"400\" alt=\"截图 2021-12-27 8 18 33\" 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_fcf8404c6599.png\"> \u003Cimg width=\"400\" alt=\"截图 2021-12-27 8 28 11\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_84acf8060293.png\">\n\n| Google Drive 链接 | 大小 | 输出 | 原始项目 | \n| ------------- | ------------- | ------------- | ------------- | \n| [UGATIT selfie2anime](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1o15OO0Kn0tq79fFkmBm3PES93IRQOxB-\u002Fview?usp=sharing) | 266.2MB（量化版） | 图像（256x256） | [taki0112\u002FUGATIT](https:\u002F\u002Fgithub.com\u002Ftaki0112\u002FUGATIT)  |\n\n### CartoonGAN\n\n| Google Drive 链接 | 大小 | 输出 | 原始项目 | \n| ------------- | ------------- | ------------- | ------------- | \n| [CartoonGAN_Shinkai](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1j9bvHFBX5yctSeaE8FEvUv-r-hEVvXwi\u002Fview?usp=sharing)　| 44.6MB | 多数组 | [mnicnc404\u002FCartoonGan-tensorflow](https:\u002F\u002Fgithub.com\u002Fmnicnc404\u002FCartoonGan-tensorflow)|\n| [CartoonGAN_Hayao](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1-2dTGge4fza-TTBI9actkg_xp91zYT-F\u002Fview?usp=sharing)　| 44.6MB | 多数组 | [mnicnc404\u002FCartoonGan-tensorflow](https:\u002F\u002Fgithub.com\u002Fmnicnc404\u002FCartoonGan-tensorflow)|\n| [CartoonGAN_Hosoda](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1-5VB1g7kRt0KMe6u37fi_t18l-Zn_wr1\u002Fview?usp=sharing)　| 44.6MB | 多数组 | [mnicnc404\u002FCartoonGan-tensorflow](https:\u002F\u002Fgithub.com\u002Fmnicnc404\u002FCartoonGan-tensorflow)|\n| [CartoonGAN_Paprika](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1-5x3TYugodcnGYiEEDitFqMQPVHsCDs_\u002Fview?usp=sharing)　| 44.6MB | 多数组 | [mnicnc404\u002FCartoonGan-tensorflow](https:\u002F\u002Fgithub.com\u002Fmnicnc404\u002FCartoonGan-tensorflow)|\n\n### [快速神经风格迁移](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1o15OO0Kn0tq79fFkmBm3PES93IRQOxB-\u002Fview?usp=sharing)\n\n\u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_35a6e65ddcad.jpg\"> \u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_6907c35f2126.jpg\">\n\u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_c75ceba3963c.jpg\"> \u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_8383a1c6e58a.jpg\">\n\u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_6a292cfefc31.jpg\"> \u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_fb618762f2d7.jpg\">\n\n| Google Drive 链接 | 大小 | 输出 | 原项目 | 许可证 | 年份|\n| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |\n| [fast-neural-style-transfer-cuphead](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1-LLQF8T6MrcpdiYZkdGZAizkj7c-lJ9e\u002Fview?usp=sharing) | 6.4MB | 图像(RGB 960x640)| [eriklindernoren\u002FFast-Neural-Style-Transfer](https:\u002F\u002Fgithub.com\u002Feriklindernoren\u002FFast-Neural-Style-Transfer)  | [MIT](https:\u002F\u002Fgithub.com\u002Feriklindernoren\u002FFast-Neural-Style-Transfer\u002Fblob\u002Fmaster\u002FLICENSE) |2019|\n| [fast-neural-style-transfer-starry-night](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1-HLHIrV_WwZJsEkZ34nTfqnlIHIe04Vy\u002Fview?usp=sharing) |  6.4MB | 
图像(RGB 960x640)| [eriklindernoren\u002FFast-Neural-Style-Transfer](https:\u002F\u002Fgithub.com\u002Feriklindernoren\u002FFast-Neural-Style-Transfer)  | [MIT](https:\u002F\u002Fgithub.com\u002Feriklindernoren\u002FFast-Neural-Style-Transfer\u002Fblob\u002Fmaster\u002FLICENSE) |2019|\n| [fast-neural-style-transfer-mosaic](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1-GmnewjDz2Cs7-CfXPSFIgOruQvBbK2X\u002Fview?usp=sharing) |  6.4MB | 图像(RGB 960x640)| [eriklindernoren\u002FFast-Neural-Style-Transfer](https:\u002F\u002Fgithub.com\u002Feriklindernoren\u002FFast-Neural-Style-Transfer)  | [MIT](https:\u002F\u002Fgithub.com\u002Feriklindernoren\u002FFast-Neural-Style-Transfer\u002Fblob\u002Fmaster\u002FLICENSE) |2019|\n\n### [白盒卡通化](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1QGNJzEp0fo6oOryTos1dazEKaS34WzZC\u002Fview?usp=sharing)\n\n使用白盒卡通表示学习卡通化\n\n\u003Cimg width=\"400\" img src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_14cb075ffbbf.jpg\"> \u003Cimg width=\"400\" img src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_279e77a85a51.jpg\">\n\n| Google Drive 链接 | 大小 | 输出 | 原项目 | 许可证 | 年份|\n| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | \n| [White_box_Cartoonization](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1QGNJzEp0fo6oOryTos1dazEKaS34WzZC\u002Fview?usp=sharing) | 5.9MB | 图像(1536x1536) | [SystemErrorWang\u002FWhite-box-Cartoonization](https:\u002F\u002Fgithub.com\u002FSystemErrorWang\u002FWhite-box-Cartoonization)  |[creativecommons](https:\u002F\u002Fcreativecommons.org\u002Flicenses\u002Fby-nc-sa\u002F4.0\u002Flegalcode)|CVPR2020|\n\n### [人脸卡通化](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1CJH4tuR3ArKvxrmAE_44lbsAwUzjtyXi\u002Fview?usp=sharing)\n\n白盒人脸图像卡通化\n\n\u003Cimg width=\"400\" img src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_0c484781767d.png\"> \u003Cimg width=\"400\" img src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_4ff722de8f5f.png\">\n\n| Google Drive 链接 | 大小 | 输出 | 原项目 | 许可证 | 年份|\n| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | \n| [FacialCartoonization](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1CJH4tuR3ArKvxrmAE_44lbsAwUzjtyXi\u002Fview?usp=sharing) | 8.4MB | 图像(256x256) | [SystemErrorWang\u002FFacialCartoonization](https:\u002F\u002Fgithub.com\u002FSystemErrorWang\u002FFacialCartoonization)  |[creativecommons](https:\u002F\u002Fcreativecommons.org\u002Flicenses\u002Fby-nc-sa\u002F4.0\u002Flegalcode)|2020|\n\n# 图像修复\n\n### AOT-GAN用于图像修复\n\n\u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_4ceb1b45eb0f.gif\">\n\n| Google Drive 链接 | 大小 | 输出 | 原项目 | 许可证 | 备注 | 示例项目 |\n| ------------- | ------------- | ------------- | ------------- |------------- |------------- |------------- |\n|[AOT-GAN-for-Inpainting](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F16rF46DFcDPherlpgjuL60065xcP2N3nv\u002Fview?usp=share_link)|60.8MB| MLMultiArray(3,512,512) |[researchmm\u002FAOT-GAN-for-Inpainting](https:\u002F\u002Fgithub.com\u002Fresearchmm\u002FAOT-GAN-for-Inpainting)|[Apache2.0](https:\u002F\u002Fgithub.com\u002Fopen-mmlab\u002Fmmediting\u002Fblob\u002Fmaster\u002FLICENSE)|使用时请参考示例。| 
[john-rocky\u002FInpainting-CoreML](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FInpainting-CoreML) |\n\n### [Lama](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1s_uICJQykFFxgVubpBNeLLDL0JsxgdCd?usp=sharing)\n\n\u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_f5aad0a6972d.png.jpg\">\n\n| Google Drive 链接 | 大小 | 输入 | 输出 | 原项目 | 许可证 | 备注 | 示例项目 | 转换脚本 |\n| ------------- | ------------- | ------------- | ------------- |------------- |------------- |------------- |------------- |------------- |\n|[Lama](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1s_uICJQykFFxgVubpBNeLLDL0JsxgdCd?usp=sharing)|216.6MB| 图像（彩色 800 × 800），图像（灰度 800 × 800）| 图像（彩色 800 × 800） |[advimman\u002Flama](https:\u002F\u002Fgithub.com\u002Fadvimman\u002Flama)|[Apache2.0](https:\u002F\u002Fgithub.com\u002Fadvimman\u002Flama\u002Fblob\u002Fmain\u002FLICENSE)|使用时请参考示例。| [john-rocky\u002Flama-cleaner-iOS](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002Flama-cleaner-iOS) | [mallman\u002FCoreMLaMa](https:\u002F\u002Fgithub.com\u002Fmallman\u002FCoreMLaMa)|\n\n# 单目深度估计\n\n### [MiDaS](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1agGnt5Cq5CGzoNDl9Nb-3u7pB5SrIbN4\u002Fview?usp=share_link)\n迈向鲁棒的单目深度估计：混合数据集实现零样本跨数据集迁移\n\n\u003Cimg width=\"400\" img src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_ab00f6ac7091.jpg\"> \u003Cimg width=\"400\" img src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_a37f46ea3208.jpeg\">\n\n| Google Drive 链接 | 大小 | 输出 | 原项目 | 许可证 | 年份|转换脚本 |\n| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | \n| [MiDaS_Small](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1agGnt5Cq5CGzoNDl9Nb-3u7pB5SrIbN4\u002Fview?usp=share_link) | 66.3MB | MultiArray(1x256x256) | [isl-org\u002FMiDaS](https:\u002F\u002Fgithub.com\u002Fisl-org\u002FMiDaS)  |[MIT](https:\u002F\u002Fgithub.com\u002Fisl-org\u002FMiDaS\u002Fblob\u002Fmaster\u002FLICENSE)|2022|[![Open In Colab](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fdrive\u002F13cVDO6gYdQvbKimcfbgGOfuoQmrTbarU?usp=sharing) |\n\n# 稳定扩散\n\n### Hyper-SD\n\n[ByteDance\u002FHyper-SD](https:\u002F\u002Fhuggingface.co\u002FByteDance\u002FHyper-SD) — 通过轨迹分段一致性蒸馏从 SD1.5 中提炼出的单步文生图模型。字节跳动报告称，在单步情况下，用户对 Hyper-SD 的偏好是 SD-Turbo 的两倍。结合 Apple 的 ml-stable-diffusion（Split-Einsum 注意力机制、分块 UNet、6 位调色板量化），该模型在 iPhone 15 及更高版本上能够以可接受的速度和质量运行。\n\n\u003Cvideo src=\"https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002Fdd456c13-d778-4a84-8bb2-9dfd78de3070\" width=\"400\">\u003C\u002Fvideo>\n\n\u003Cimg width=\"400\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_bd3c10d5fc39.png\">\n\n*iPhone 上的单步生成，512×512 分辨率。提示词：戴太阳镜的猫、赛博朋克城市、日式庭院、骑马的宇航员。*\n\n包含 4 个 CoreML 模型（总大小约 947 MB）：CLIP 文本编码器 + Swin 风格分块 UNet（6 位调色板量化）+ VAE 解码器。使用 TCD 调度器进行单步推理。\n\n| 下载链接 | 大小 | 输入 | 输出 | 原始项目 | 许可证 | 年份 | 示例项目 | 转换脚本 |\n| ------------- | ---- | ----- | ------ | ---------------- | ------- | ---- | -------------- | ----------------- |\n| [HyperSDTextEncoder.mlpackage.zip](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-Models\u002Freleases\u002Fdownload\u002Fhypersd-v1\u002FHyperSDTextEncoder.mlpackage.zip) | 235 MB | input_ids [1,77] | encoder_hidden_states [1,77,768] | 
[ByteDance\u002FHyper-SD](https:\u002F\u002Fhuggingface.co\u002FByteDance\u002FHyper-SD) | [OpenRAIL++](https:\u002F\u002Fhuggingface.co\u002FByteDance\u002FHyper-SD\u002Fblob\u002Fmain\u002FREADME.md) | 2024 | [HyperSDDemo](sample_apps\u002FHyperSDDemo) | [convert_hypersd.py](conversion_scripts\u002Fconvert_hypersd.py) |\n| [HyperSDUnetChunk1.mlpackage.zip](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-Models\u002Freleases\u002Fdownload\u002Fhypersd-v1\u002FHyperSDUnetChunk1.mlpackage.zip) | 318 MB | latent + encoder_hs + timestep | 第一半中间结果 | | | | | |\n| [HyperSDUnetChunk2.mlpackage.zip](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-Models\u002Freleases\u002Fdownload\u002Fhypersd-v1\u002FHyperSDUnetChunk2.mlpackage.zip) | 299 MB | 第一半输出 + 跳跃连接 | noise_pred [2,4,64,64] | | | | | |\n| [HyperSDVAEDecoder.mlpackage.zip](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-Models\u002Freleases\u002Fdownload\u002Fhypersd-v1\u002FHyperSDVAEDecoder.mlpackage.zip) | 95 MB | latent [1,4,64,64] | image [1,3,512,512] | | | | | |\n\n有关 LoRA 融合、分块 UNet 调色板量化以及 TCD 调度器的详细信息，请参阅 [`sample_apps\u002FHyperSDDemo\u002FREADME.md`](sample_apps\u002FHyperSDDemo\u002FREADME.md)。\n\n### [stable-diffusion-v1-5](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1dqYEdhSPi7y0Dgans-Fk7_ViNviUTUJj\u002Fview?usp=share_link)\n\n\u003Cimg width=\"400\" alt=\"スクリーンショット 2023-03-21 18 52 18\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_0d5c61554893.png\">\n\n| Google Drive 链接  | 原始模型 |原始项目 | 许可证 | 在 Mac 上运行 |转换脚本 |年份|\n| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | \n| [stable-diffusion-v1-5](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1dqYEdhSPi7y0Dgans-Fk7_ViNviUTUJj\u002Fview?usp=share_link) |[runwayml\u002Fstable-diffusion-v1-5](https:\u002F\u002Fhuggingface.co\u002Frunwayml\u002Fstable-diffusion-v1-5)|[runwayml\u002Fstable-diffusion](https:\u002F\u002Fgithub.com\u002Frunwayml\u002Fstable-diffusion)  |[Open RAIL M 许可证](https:\u002F\u002Fhuggingface.co\u002Frunwayml\u002Fstable-diffusion-v1-5)|[godly-devotion\u002FMochiDiffusion](https:\u002F\u002Fgithub.com\u002Fgodly-devotion\u002FMochiDiffusion)|[godly-devotion\u002FMochiDiffusion](https:\u002F\u002Fgithub.com\u002Fgodly-devotion\u002FMochiDiffusion\u002Fwiki\u002FHow-to-convert-Stable-Diffusion-models-to-Core-ML#requirements) |2022|\n\n### [pastel-mix](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1cp3VoF1R-as8_lScWGUoxl-BNVX3d7vb\u002Fview?usp=share_link)\n\nPastel Mix - 一种风格化的潜在扩散模型。该模型旨在仅通过少量提示词就能生成高质量、细节丰富的动漫风格图像。\n\n\u003Cimg width=\"400\" alt=\"スクリーンショット 2023-03-21 19 54 13\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_cdd5581fcee8.png\">\n\n| Google Drive 链接  | 原始模型 | 许可证 | 在 Mac 上运行 |转换脚本 |年份|\n| ------------- | ------------- | ------------- |  ------------- | ------------- | ------------- | \n| [pastelMixStylizedAnime_pastelMixPrunedFP16](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1cp3VoF1R-as8_lScWGUoxl-BNVX3d7vb\u002Fview?usp=share_link) 
|[andite\u002Fpastel-mix](https:\u002F\u002Fhuggingface.co\u002Fandite\u002Fpastel-mix)|[Fantasy.ai](https:\u002F\u002Fhuggingface.co\u002Fandite\u002Fpastel-mix)|[godly-devotion\u002FMochiDiffusion](https:\u002F\u002Fgithub.com\u002Fgodly-devotion\u002FMochiDiffusion)|[godly-devotion\u002FMochiDiffusion](https:\u002F\u002Fgithub.com\u002Fgodly-devotion\u002FMochiDiffusion\u002Fwiki\u002FHow-to-convert-Stable-Diffusion-models-to-Core-ML#requirements) |2023|\n\n### [Orange Mix](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1ueU-RuZIsl3b3F7uu_gBa_SfAtGTzTI5\u002Fview?usp=share_link)\n\n\u003Cimg width=\"800\" alt=\"スクリーンショット 2023-03-21 23 34 13\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_b65785c8df21.png\">\n\n| Google Drive 链接  | 原始模型 | 许可证 | 在 Mac 上运行 |转换脚本 |年份|\n| ------------- | ------------- | ------------- |  ------------- | ------------- | ------------- | \n| [AOM3_orangemixs](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1ueU-RuZIsl3b3F7uu_gBa_SfAtGTzTI5\u002Fview?usp=share_link) |[WarriorMama777\u002FOrangeMixs](https:\u002F\u002Fhuggingface.co\u002FWarriorMama777\u002FOrangeMixs)|[CreativeML OpenRAIL-M](https:\u002F\u002Fhuggingface.co\u002FWarriorMama777\u002FOrangeMixs)|[godly-devotion\u002FMochiDiffusion](https:\u002F\u002Fgithub.com\u002Fgodly-devotion\u002FMochiDiffusion)|[godly-devotion\u002FMochiDiffusion](https:\u002F\u002Fgithub.com\u002Fgodly-devotion\u002FMochiDiffusion\u002Fwiki\u002FHow-to-convert-Stable-Diffusion-models-to-Core-ML#requirements) |2023|\n\n### [Counterfeit](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1Kt_8hnGUGnJAUnuergLki37GKnWjWOJp\u002Fview?usp=share_link)\n\n\u003Cimg width=\"800\" alt=\"スクリーンショット 2023-03-22 0 47 53\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_884af76c0a40.png\">\n\n| Google Drive 链接  | 原始模型 | 许可证 | 在 Mac 上运行 |转换脚本 |年份|\n| ------------- | ------------- | ------------- |  ------------- | ------------- | ------------- | \n| [Counterfeit-V2.5](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1Kt_8hnGUGnJAUnuergLki37GKnWjWOJp\u002Fview?usp=share_link) |[gsdf\u002FCounterfeit-V2.5](https:\u002F\u002Fhuggingface.co\u002Fgsdf\u002FCounterfeit-V2.5)|-|[godly-devotion\u002FMochiDiffusion](https:\u002F\u002Fgithub.com\u002Fgodly-devotion\u002FMochiDiffusion)|[godly-devotion\u002FMochiDiffusion](https:\u002F\u002Fgithub.com\u002Fgodly-devotion\u002FMochiDiffusion\u002Fwiki\u002FHow-to-convert-Stable-Diffusion-models-to-Core-ML#requirements) |2023|\n\n### [anything-v4](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1yF55CGy4I3BKom_E70lLkU6N03nSvjDt\u002Fview?usp=share_link)\n\n\u003Cimg width=\"800\" alt=\"截图 2023-03-22 0 47 53\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_a11057dee75d.png\">\n\n| Google Drive 链接  | 原始模型       | 许可证         | 是否可在 Mac 上运行 | 转换脚本           | 年份 |\n| ------------- | ------------- | ------------- |  ------------- | ------------- | ------------- | \n| [anything-v4.5](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1yF55CGy4I3BKom_E70lLkU6N03nSvjDt\u002Fview?usp=share_link) 
|[andite\u002Fanything-v4.0](https:\u002F\u002Fhuggingface.co\u002Fandite\u002Fanything-v4.0)|[Fantasy.ai](https:\u002F\u002Fhuggingface.co\u002Fandite\u002Fanything-v4.0)|[godly-devotion\u002FMochiDiffusion](https:\u002F\u002Fgithub.com\u002Fgodly-devotion\u002FMochiDiffusion)|[godly-devotion\u002FMochiDiffusion](https:\u002F\u002Fgithub.com\u002Fgodly-devotion\u002FMochiDiffusion\u002Fwiki\u002FHow-to-convert-Stable-Diffusion-models-to-Core-ML#requirements) |2023|\n\n### [Openjourney](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1KIhSG7pHjgldg7r2mm1Yuwa85BceFLsk\u002Fview?usp=share_link)\n\n\u003Cimg width=\"800\" alt=\"截图 2023-03-22 7 49 39\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_18901c89c817.png\">\n\n| Google Drive 链接  | 原始模型       | 许可证         | 是否可在 Mac 上运行 | 转换脚本           | 年份 |\n| ------------- | ------------- | ------------- |  ------------- | ------------- | ------------- | \n| [Openjourney](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1KIhSG7pHjgldg7r2mm1Yuwa85BceFLsk\u002Fview?usp=share_link) |[prompthero\u002Fopenjourney](https:\u002F\u002Fhuggingface.co\u002Fprompthero\u002Fopenjourney)|-|[godly-devotion\u002FMochiDiffusion](https:\u002F\u002Fgithub.com\u002Fgodly-devotion\u002FMochiDiffusion)|[godly-devotion\u002FMochiDiffusion](https:\u002F\u002Fgithub.com\u002Fgodly-devotion\u002FMochiDiffusion\u002Fwiki\u002FHow-to-convert-Stable-Diffusion-models-to-Core-ML#requirements) |2023|\n\n### [dreamlike-photoreal-2](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1D5RXYE52wyXPq6TdCHM8DIkP4dxHafwt\u002Fview?usp=share_link)\n\n\u003Cimg width=\"800\" alt=\"dreamlike\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_8f188482e19b.png\">\n\n| Google Drive 链接  | 原始模型       | 许可证         | 是否可在 Mac 上运行 | 转换脚本           | 年份 |\n| ------------- | ------------- | ------------- |  ------------- | ------------- | ------------- | \n| [dreamlike-photoreal-2.0](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1D5RXYE52wyXPq6TdCHM8DIkP4dxHafwt\u002Fview?usp=share_link) |[dreamlike-art\u002Fdreamlike-photoreal-2.0](https:\u002F\u002Fhuggingface.co\u002Fdreamlike-art\u002Fdreamlike-photoreal-2.0)|[CreativeML OpenRAIL-M](https:\u002F\u002Fhuggingface.co\u002Fdreamlike-art\u002Fdreamlike-photoreal-2.0)|[godly-devotion\u002FMochiDiffusion](https:\u002F\u002Fgithub.com\u002Fgodly-devotion\u002FMochiDiffusion)|[godly-devotion\u002FMochiDiffusion](https:\u002F\u002Fgithub.com\u002Fgodly-devotion\u002FMochiDiffusion\u002Fwiki\u002FHow-to-convert-Stable-Diffusion-models-to-Core-ML#requirements) |2023|\n\n# 图像上色\n\n### DDColor Tiny\n\nDDColor — 使用双解码器对灰度\u002F黑白照片进行 AI 图像上色（ICCV 2023）。\n\n| 输入 | 输出 |\n|---|---|\n| \u003Cimg width=\"300\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_056eb54f0182.png\"> | \u003Cimg width=\"300\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_51b8bf820e8c.png\"> |\n\n| 下载链接 | 大小 | 输入 | 输出 | 原项目 | 许可证 | 年份 | 示例项目 | 转换脚本 |\n| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |\n| [DDColor_Tiny.mlpackage.zip](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-Models\u002Freleases\u002Fdownload\u002Fddcolor-v1\u002FDDColor_Tiny.mlpackage.zip) | 242 MB | 512×512 RGB | AB 通道（LAB） | 
[piddnad\u002FDDColor](https:\u002F\u002Fgithub.com\u002Fpiddnad\u002FDDColor) | [Apache-2.0](https:\u002F\u002Fgithub.com\u002Fpiddnad\u002FDDColor\u002Fblob\u002Fmaster\u002FLICENSE) | 2023 | [DDColorDemo](sample_apps\u002FDDColorDemo) | [convert_ddcolor.py](conversion_scripts\u002Fconvert_ddcolor.py) |\n\n# 人脸识别\n\n### AdaFace IR-18\n\nAdaFace — 质量自适应的人脸识别。输出用于人脸验证和识别的 512 维嵌入向量。\n\n\u003Cimg width=\"300\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_185d2851d0f0.png\">\n\n| 下载链接 | 大小 | 输入 | 输出 | 原项目 | 许可证 | 年份 | 示例项目 | 转换脚本 |\n| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |\n| [AdaFace_IR18.mlpackage.zip](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-Models\u002Freleases\u002Fdownload\u002Fadaface-v1\u002FAdaFace_IR18.mlpackage.zip) | 48 MB | 图像（112×112 的人脸） | 512 维 L2 归一化嵌入向量 | [mk-minchul\u002FAdaFace](https:\u002F\u002Fgithub.com\u002Fmk-minchul\u002FAdaFace) | [MIT](https:\u002F\u002Fgithub.com\u002Fmk-minchul\u002FAdaFace\u002Fblob\u002Fmaster\u002FLICENSE) | 2022 | [AdaFaceDemo](sample_apps\u002FAdaFaceDemo) | [convert_adaface.py](conversion_scripts\u002Fconvert_adaface.py) |\n\n# 3D 人脸姿态估计\n\n### 3DDFA_V2\n\n3DDFA_V2 — 从单张人脸图像中进行 3D 人脸重建和头部姿态估计（偏航、俯仰、滚转）。\n\n\u003Cimg width=\"300\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fjohn-rocky_CoreML-Models_readme_4e01e2d9bd17.png\">\n\n| 下载链接 | 大小 | 输入 | 输出 | 原项目 | 许可证 | 年份 | 示例项目 |\n| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |\n| [3DDFA_V2.mlpackage.zip](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-Models\u002Freleases\u002Fdownload\u002Fface3d-v1\u002F3DDFA_V2.mlpackage.zip) | 6.3 MB | 图像（120×120 RGB） | 62 个参数（12 个姿态 + 40 个形状 + 10 个表情） | [cleardusk\u002F3DDFA_V2](https:\u002F\u002Fgithub.com\u002Fcleardusk\u002F3DDFA_V2) | [MIT](https:\u002F\u002Fgithub.com\u002Fcleardusk\u002F3DDFA_V2\u002Fblob\u002Fmaster\u002FLICENSE) | 2020 | [Face3DDemo](sample_apps\u002FFace3DDemo) |\n\n# 发言人分离\n\n### pyannote segmentation-3.0\n\npyannote 分割 — 最多支持 3 名同时发言者的发言人分离。能够识别谁在何时说话，并具备重叠检测和每位发言人的转录功能。\n\n| 下载链接 | 大小 | 输入 | 输出 | 原项目 | 许可证 | 年份 | 示例项目 | 转换脚本 |\n| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |\n| [SpeakerSegmentation.mlpackage.zip](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-Models\u002Freleases\u002Fdownload\u002Fdiarization-v1\u002FSpeakerSegmentation.mlpackage.zip) | 5.8 MB | 10 秒单声道 16kHz [1,1,160000] | [1, 589, 7] 发言人置信度分数 | [pyannote\u002Fsegmentation-3.0](https:\u002F\u002Fhuggingface.co\u002Fpyannote\u002Fsegmentation-3.0) | [MIT](https:\u002F\u002Fhuggingface.co\u002Fpyannote\u002Fsegmentation-3.0) | 2023 | [DiarizationDemo](sample_apps\u002FDiarizationDemo) | [convert_diarization.py](conversion_scripts\u002Fconvert_diarization.py) |\n\n# 语音转换\n\n### OpenVoice V2\n\nOpenVoice — 零样本语音转换。录制源语音和目标语音，在设备端进行转换。\n\n\u003Cvideo src=\"https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002F70078691-14df-4350-846c-9ba1682433ce\" width=\"300\">\u003C\u002Fvideo>\n\n| 下载链接 | 大小 | 输入 | 输出 | 原项目 | 许可证 | 年份 | 示例项目 | 转换脚本 |\n| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |\n| 
[OpenVoice_SpeakerEncoder.mlpackage.zip](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-Models\u002Freleases\u002Fdownload\u002Fopenvoice-v1\u002FOpenVoice_SpeakerEncoder.mlpackage.zip) | 1.7 MB | 频谱图 [1, T, 513] | 256维说话人嵌入 | [myshell-ai\u002FOpenVoice](https:\u002F\u002Fgithub.com\u002Fmyshell-ai\u002FOpenVoice) | [MIT](https:\u002F\u002Fgithub.com\u002Fmyshell-ai\u002FOpenVoice\u002Fblob\u002Fmain\u002FLICENSE) | 2024 | [OpenVoiceDemo](sample_apps\u002FOpenVoiceDemo) | [convert_openvoice.py](conversion_scripts\u002Fconvert_openvoice.py) |\n| [OpenVoice_VoiceConverter.mlpackage.zip](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-Models\u002Freleases\u002Fdownload\u002Fopenvoice-v1\u002FOpenVoice_VoiceConverter.mlpackage.zip) | 64 MB | 频谱图 + 说话人嵌入 | 波形音频（22050 Hz） | | | | | |\n\n# 音频分离\n\n### HTDemucs\n\n混合Transformer Demucs — 将音乐分离为鼓、贝斯、人声和其他乐器4个音轨。\n\n\u003Cvideo src=\"https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002F98dea359-e557-4e46-af1d-2010503c86ba\" width=\"400\">\u003C\u002Fvideo>\n\n| 下载链接 | 大小 | 输入 | 输出 | 原项目 | 许可证 | 年份 | 示例项目 | 转换脚本 |\n| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |\n| [HTDemucs_SourceSeparation_F32.mlpackage.zip](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-Models\u002Freleases\u002Fdownload\u002Fdemucs-v1\u002FHTDemucs_SourceSeparation_F32.mlpackage.zip) | 80 MB | 音频波形 [1, 2, 343980]，采样率44.1kHz | 4个音轨（鼓、贝斯、其他、人声）立体声 | [facebookresearch\u002Fdemucs](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fdemucs) | [MIT](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fdemucs\u002Fblob\u002Fmain\u002FLICENSE) | 2022 | [DemucsDemo](sample_apps\u002FDemucsDemo) | [convert_htdemucs.py](conversion_scripts\u002Fconvert_htdemucs.py) |\n\n# 视觉-语言模型\n\n### Florence-2-base\n\n微软Florence-2 — 一个统一的视觉-语言模型，支持从单个模型完成图像描述、OCR和目标检测任务。已转换为3个CoreML模型（INT8）：视觉编码器（DaViT）、文本编码器（BART）以及具有自回归生成能力的解码器。\n\n| 下载链接 | 大小 | 输入 | 输出 | 原项目 | 许可证 | 年份 | 示例项目 | 转换脚本 |\n| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |\n| [Florence2VisionEncoder](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-Models\u002Freleases\u002Fdownload\u002Fflorence2-v1\u002FFlorence2VisionEncoder.mlpackage.zip) \u002F [TextEncoder](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-Models\u002Freleases\u002Fdownload\u002Fflorence2-v1\u002FFlorence2TextEncoder.mlpackage.zip) \u002F [Decoder](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-Models\u002Freleases\u002Fdownload\u002Fflorence2-v1\u002FFlorence2Decoder.mlpackage.zip) | 260 MB（INT8，共3个模型） | 768×768 RGB图像 + 任务提示 | 生成的文本（描述、OCR等） | [microsoft\u002FFlorence-2-base](https:\u002F\u002Fhuggingface.co\u002Fmicrosoft\u002FFlorence-2-base) | [MIT](https:\u002F\u002Fhuggingface.co\u002Fmicrosoft\u002FFlorence-2-base\u002Fblob\u002Fmain\u002FLICENSE) | 2024 | [Florence2Demo](sample_apps\u002FFlorence2Demo) | [convert_florence2.py](conversion_scripts\u002Fconvert_florence2.py) |\n\n# 零样本图像分类\n\n### SigLIP ViT-B\u002F16\n\n谷歌SigLIP — 基于sigmoid的对比学习图像-文本模型，用于零样本分类。输入任意标签（如“猫、狗、汽车”），即可获得每个标签的概率。已转换为2个CoreML模型（INT8）：图像编码器和文本编码器。\n\n| 下载链接 | 大小 | 输入 | 输出 | 原项目 | 许可证 | 年份 | 示例项目 | 转换脚本 |\n| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |\n| 
[SigLIP_ImageEncoder](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-Models\u002Freleases\u002Fdownload\u002Fsiglip-v2\u002FSigLIP_ImageEncoder.mlpackage.zip) \u002F [TextEncoder](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-Models\u002Freleases\u002Fdownload\u002Fsiglip-v2\u002FSigLIP_TextEncoder.mlpackage.zip) | 386 MB（FP16，共2个模型） | 224×224 RGB图像 + 文本标签 | 每个标签的相似度分数（softmax） | [google\u002Fsiglip-base-patch16-224](https:\u002F\u002Fhuggingface.co\u002Fgoogle\u002Fsiglip-base-patch16-224) | [Apache-2.0](https:\u002F\u002Fwww.apache.org\u002Flicenses\u002FLICENSE-2.0) | 2024 | [SigLIPDemo](sample_apps\u002FSigLIPDemo) | [convert_siglip.py](conversion_scripts\u002Fconvert_siglip.py) |\n\n# 文本转语音\n\n### 心-82M\n\n[hexgrad\u002FKokoro-82M](https:\u002F\u002Fhuggingface.co\u002Fhexgrad\u002FKokoro-82M) — hexgrad 开源的 8200 万参数 TTS 模型。基于 StyleTTS2 架构（BERT + 长度预测器 + iSTFTNet 声码器），能够根据每种声音的风格嵌入，生成 9 种语言、采样率为 24kHz 的语音。这是首个 CoreML 移植版本，支持 **设备端双语（英语 + 日语）自由文本输入**——运行时无需 MLX、MeCab、IPADic 或 Python G2P。\n\n\u003Cvideo src=\"https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002F56eb2ffc-f915-4f8b-b6d3-1021f3d490ca\" width=\"400\">\u003C\u002Fvideo>\n\n包含两个 CoreML 模型：一个灵活长度的 **预测器**（BERT + LSTM 长度头 + 文本编码器）和 **三个固定形状的解码器桶**（128 \u002F 256 \u002F 512 帧）。Swift 流水线会选取最合适的桶来匹配预测的总时长，对输入特征进行零填充，并裁剪输出音频。\n\n| 下载链接 | 大小 | 输入 | 输出 | 原项目 | 许可证 | 年份 | 示例项目 | 转换脚本 |\n| ---------- | ------ | ---- | ---- | -------- | ------- | ------ | -------- | -------- |\n| [Kokoro_Predictor.mlpackage.zip](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-Models\u002Freleases\u002Fdownload\u002Fkokoro-v1\u002FKokoro_Predictor.mlpackage.zip) | 75 MB | input_ids [1, T≤256] (int32) + ref_s_style [1, 128] | duration [1, T] + d_for_align [1, 640, T] + t_en [1, 512, T] | [hexgrad\u002FKokoro-82M](https:\u002F\u002Fhuggingface.co\u002Fhexgrad\u002FKokoro-82M) | [Apache-2.0](https:\u002F\u002Fhuggingface.co\u002Fhexgrad\u002FKokoro-82M\u002Fblob\u002Fmain\u002FLICENSE) | 2025 | [KokoroDemo](sample_apps\u002FKokoroDemo) | [convert_kokoro.py](conversion_scripts\u002Fconvert_kokoro.py) |\n| [Kokoro_Decoder_128.mlpackage.zip](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-Models\u002Freleases\u002Fdownload\u002Fkokoro-v1\u002FKokoro_Decoder_128.mlpackage.zip) | 238 MB | en_aligned [1, 640, 128] + asr_aligned [1, 512, 128] + ref_s [1, 256] | audio [1, 76800] @ 24kHz | | | | | |\n| [Kokoro_Decoder_256.mlpackage.zip](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-Models\u002Freleases\u002Fdownload\u002Fkokoro-v1\u002FKokoro_Decoder_256.mlpackage.zip) | 241 MB | en_aligned [1, 640, 256] + asr_aligned [1, 512, 256] + ref_s [1, 256] | audio [1, 153600] @ 24kHz | | | | | |\n| [Kokoro_Decoder_512.mlpackage.zip](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-Models\u002Freleases\u002Fdownload\u002Fkokoro-v1\u002FKokoro_Decoder_512.mlpackage.zip) | 246 MB | en_aligned [1, 640, 512] + asr_aligned [1, 512, 512] + ref_s [1, 256] | audio [1, 307200] @ 24kHz | | | | | |\n\n有关设备端 G2P（英语 + 日语）、分桶解码策略及转换细节，请参阅 [`sample_apps\u002FKokoroDemo\u002FREADME.md`](sample_apps\u002FKokoroDemo\u002FREADME.md)。\n\n# 异常检测\n\n### EfficientAD\n\nEfficientAD（PDN-Small）— 一种轻量级的无监督工业质检异常检测模型。它将教师网络、学生网络和自编码器网络封装为单个模型，输出像素级异常热图和图像级别的异常分数。已在 MVTec AD 瓶子类别数据集上预训练。\n\n| 下载链接 | 大小 | 输入 | 输出 | 原项目 | 许可证 | 年份 | 示例项目 | 转换脚本 |\n| ---------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- |\n| 
[EfficientAD_Bottle.mlpackage.zip](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-Models\u002Freleases\u002Fdownload\u002Fefficientad-v1\u002FEfficientAD_Bottle.mlpackage.zip) | 15 MB（FP16） | 256×256 RGB 图像 | anomaly_map [1,1,256,256] + anomaly_score [0-1] | [nelson1425\u002FEfficientAD](https:\u002F\u002Fgithub.com\u002Fnelson1425\u002FEfficientAD) | [MIT](https:\u002F\u002Fopensource.org\u002Flicenses\u002FMIT) | 2023 | [EfficientADDemo](sample_apps\u002FEfficientADDemo) | [convert_efficientad.py](conversion_scripts\u002Fconvert_efficientad.py) |\n\n# 音乐转录\n\n### Basic Pitch\n\n[spotify\u002Fbasic-pitch](https:\u002F\u002Fgithub.com\u002Fspotify\u002Fbasic-pitch) — 一款多声部自动音乐转录工具。它可以将任何音频（任何乐器或人声）转换为带有音高弯曲检测的 MIDI 音符。仅需 **1.7 万个参数 \u002F 272 KB**，即可在 iPhone 上通过 ANE 全速加速实时运行。\n\n\u003Cvideo src=\"https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002Fd4e96b51-680f-471c-93d1-7546d5890cd7\" width=\"400\">\u003C\u002Fvideo>\n\n这是首个开源的 iOS 实现。它可以加载任意音频文件，在 2 秒滑动窗口中运行 CoreML 模型，随后在 Swift 中原生执行完整的 Python `note_creation.py` 流程（起音推断、贪婪逆向追踪、Melodia 技巧、音高弯曲提取）。检测到的音符会以钢琴卷帘的形式可视化，导出为标准 MIDI 文件，并通过内置的加法正弦合成器播放，以便与原始音频进行 A\u002FB 对比。\n\n| 下载链接 | 大小 | 输入 | 输出 | 原项目 | 许可证 | 年份 | 示例项目 |\n| ---------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |\n| [BasicPitch_nmp.mlpackage.zip](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-Models\u002Freleases\u002Fdownload\u002Fbasic-pitch-v1\u002FBasicPitch_nmp.mlpackage.zip) | 272 KB | 音频波形 [1, 43844, 1] @ 22050 Hz 单声道 | note [1,172,88] + onset [1,172,88] + contour [1,172,264] | [spotify\u002Fbasic-pitch](https:\u002F\u002Fgithub.com\u002Fspotify\u002Fbasic-pitch) | [Apache-2.0](https:\u002F\u002Fwww.apache.org\u002Flicenses\u002FLICENSE-2.0) | 2022 | [BasicPitchDemo](sample_apps\u002FBasicPitchDemo) |\n\n有关滑窗推理、后处理移植以及 iOS 特有的注意事项，请参阅 [`sample_apps\u002FBasicPitchDemo\u002FREADME.md`](sample_apps\u002FBasicPitchDemo\u002FREADME.md)。\n\n# 文本到音乐生成\n\n### 稳定音频开放小型模型\n\n[stabilityai\u002Fstable-audio-open-small](https:\u002F\u002Fhuggingface.co\u002Fstabilityai\u002Fstable-audio-open-small) — 文本到音乐生成（4.97亿参数）。该模型使用修正流扩散技术，可根据文本提示生成长达11.9秒的44.1kHz立体声音频。\n\n\u003Cvideo src=\"https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002Fea448e41-d5ae-407e-84a6-8312c1108cfd\" width=\"400\">\u003C\u002Fvideo>\n\n包含4个CoreML模型：T5文本编码器、NumberEmbedder（秒数条件）、DiT（扩散Transformer）以及VAE解码器（Oobleck）。\n\n| 下载链接 | 大小 | 输入 | 输出 | 原项目 | 许可证 | 年份 | 示例项目 | 转换脚本 |\n| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |\n| [StableAudioT5Encoder.mlpackage.zip](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-Models\u002Freleases\u002Fdownload\u002Fstable-audio-v1\u002FStableAudioT5Encoder.mlpackage.zip) | 105 MB | input_ids [1, 64] | text_embeddings [1, 64, 768] | [stabilityai\u002Fstable-audio-open-small](https:\u002F\u002Fhuggingface.co\u002Fstabilityai\u002Fstable-audio-open-small) | [Stability AI Community](https:\u002F\u002Fhuggingface.co\u002Fstabilityai\u002Fstable-audio-open-small\u002Fblob\u002Fmain\u002FLICENSE) | 2024 | [StableAudioDemo](sample_apps\u002FStableAudioDemo) | [convert_stable_audio.py](conversion_scripts\u002Fconvert_stable_audio.py) |\n| [StableAudioNumberEmbedder.mlpackage.zip](https:\u002F\u002Fgithub.com\u002Fjohn-rocky\u002FCoreML-Models\u002Freleases\u002Fdownload\u002Fstable-audio-v1\u002FStableAudioNumberEmbedder.mlpackage.zip) | 396 KB | normalized_seconds [1] 
# Text-to-Music Generation

### Stable Audio Open Small

[stabilityai/stable-audio-open-small](https://huggingface.co/stabilityai/stable-audio-open-small) — text-to-music generation (497M parameters). The model uses rectified-flow diffusion to generate up to 11.9 seconds of 44.1 kHz stereo audio from a text prompt.

<video src="https://github.com/user-attachments/assets/ea448e41-d5ae-407e-84a6-8312c1108cfd" width="400"></video>

It ships as 4 Core ML models: a T5 text encoder, a NumberEmbedder (seconds conditioning), the DiT (diffusion transformer), and the VAE decoder (Oobleck).

| Download | Size | Input | Output | Original project | License | Year | Sample project | Conversion script |
| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |
| [StableAudioT5Encoder.mlpackage.zip](https://github.com/john-rocky/CoreML-Models/releases/download/stable-audio-v1/StableAudioT5Encoder.mlpackage.zip) | 105 MB | input_ids [1, 64] | text_embeddings [1, 64, 768] | [stabilityai/stable-audio-open-small](https://huggingface.co/stabilityai/stable-audio-open-small) | [Stability AI Community](https://huggingface.co/stabilityai/stable-audio-open-small/blob/main/LICENSE) | 2024 | [StableAudioDemo](sample_apps/StableAudioDemo) | [convert_stable_audio.py](conversion_scripts/convert_stable_audio.py) |
| [StableAudioNumberEmbedder.mlpackage.zip](https://github.com/john-rocky/CoreML-Models/releases/download/stable-audio-v1/StableAudioNumberEmbedder.mlpackage.zip) | 396 KB | normalized_seconds [1] | seconds_embedding [1, 768] | | | | | |
| [StableAudioDiT.mlpackage.zip](https://github.com/john-rocky/CoreML-Models/releases/download/stable-audio-v1/StableAudioDiT.mlpackage.zip) | 326 MB | latent [1,64,256] + timestep + conditioning | velocity [1,64,256] | | | | | |
| [StableAudioDiT_FP32.mlpackage.zip](https://github.com/john-rocky/CoreML-Models/releases/download/stable-audio-v1/StableAudioDiT_FP32.mlpackage.zip) | 1.3 GB | latent [1,64,256] + timestep + conditioning | velocity [1,64,256] | | | | | |
| [StableAudioVAEDecoder.mlpackage.zip](https://github.com/john-rocky/CoreML-Models/releases/download/stable-audio-v1/StableAudioVAEDecoder.mlpackage.zip) | 149 MB | latent [1, 64, 256] | stereo audio [1, 2, 524288] at 44.1 kHz | | | | | |

For choosing between the INT8 and FP32 DiT and for conversion details, see [`sample_apps/StableAudioDemo/README.md`](sample_apps/StableAudioDemo/README.md).
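The DiT predicts a velocity for the current latent, so sampling reduces to integrating that velocity field over a handful of steps before handing the final latent to the VAE decoder. Below is a schematic Euler update under the usual rectified-flow convention (noise at t = 1, data at t = 0); the real step count, noise initialization, timestep schedule, and conditioning packing are defined in the StableAudioDemo app, and `runDiT` is a hypothetical wrapper around the StableAudioDiT call.

```swift
import Foundation

// Schematic rectified-flow Euler sampler. `runDiT` is assumed to wrap the
// StableAudioDiT prediction: it takes the flattened [1, 64, 256] latent plus the
// current timestep and returns the predicted velocity in the same layout.
func sampleLatent(steps: Int,
                  runDiT: (_ latent: [Float], _ t: Float) -> [Float]) -> [Float] {
    // Illustrative initialization; the demo app draws proper Gaussian noise.
    var latent = (0..<64 * 256).map { _ in Float.random(in: -1...1) }
    let dt = 1.0 / Float(steps)
    for i in 0..<steps {
        let t = 1.0 - Float(i) * dt                       // integrate from t = 1 (noise) toward t = 0
        let v = runDiT(latent, t)                         // velocity prediction from the DiT
        for j in 0..<latent.count { latent[j] -= v[j] * dt }  // Euler step along the flow
    }
    return latent                                         // decode with StableAudioVAEDecoder
}
```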
<img src="https://oss.gittoolsai.com/images/john-rocky_CoreML-Models_readme_8818bb6e6087.jpeg" width=200> <img src="https://oss.gittoolsai.com/images/john-rocky_CoreML-Models_readme_b1fbf1268e47.jpg" width=200> <img src="https://oss.gittoolsai.com/images/john-rocky_CoreML-Models_readme_b1fbf1268e47.jpg" width=200> <img src="https://oss.gittoolsai.com/images/john-rocky_CoreML-Models_readme_d35dab4fc66b.jpg" width=200> <img src="https://oss.gittoolsai.com/images/john-rocky_CoreML-Models_readme_2d159ef46989.jpg" width=200>

# Acknowledgements
The cover image is taken from Studio Ghibli's free material.

For the YOLOv5 conversion, [dbsystel/yolov5-coreml-tools](https://github.com/dbsystel/yolov5-coreml-tools) provided an extremely smart conversion script.

And thanks to all the owners of the original projects.

# Author

Daisuke Majima
Freelance engineer. iOS / machine learning / AR
Available for mobile ML projects and AR projects.
Contact: rockyshikoku@gmail.com

[GitHub](https://github.com/john-rocky)
[Twitter](https://twitter.com/JackdeS11)
[Medium](https://rockyshikoku.medium.com/)

# CoreML-Models Quick Start Guide

CoreML-Models is a repository of open-source models already converted to Apple's Core ML format. iOS developers can integrate these models directly into Xcode projects to add image classification, object detection, segmentation, super-resolution, generative AI, and more.

## Prerequisites

Before using the models in this repository, make sure your development environment meets the following requirements:

*   **Operating system**: macOS (latest stable release recommended)
*   **Tooling**: Xcode 14.0 or later
*   **System frameworks**:
    *   iOS 15.0+ / macOS 12.0+ (newer models such as YOLOv8/v9/v10 and MobileSAM may require newer OS versions)
    *   the `CoreML` framework
    *   the `Vision` framework (image-handling helpers)
*   **Hardware**: large models (Stable Diffusion, large-parameter Vision Transformers) run slowly in the Simulator; test performance on a real device (iPhone/iPad/Mac).

## Installation

The repository ships pre-built `.mlmodel` / `.mlpackage` files, so there is **no** core library to install via pip or Homebrew. There are two ways to get a model:

### Option 1: Download the model file directly (recommended)

1.  Browse the README for the model category you need (e.g. **YOLOv8** under **Object Detection**).
2.  Click the corresponding **Google Drive Link** or **Download Link**.
    *   *Note: Google Drive downloads can be slow in some regions (e.g. mainland China); use a proxy or a mirror if one is provided.*
3.  Unzip the download (if it is a `.zip`) to obtain the `.mlmodel` or `.mlpackage` file.

### Option 2: Clone a sample project (optional)

If the model entry provides a **Sample Project** link:

```bash
git clone <sample project repository URL>
cd <sample project directory>
```

Open the `.xcodeproj` or `.xcworkspace` inside it to see how the model is called.

## Basic Usage

The minimal steps for integrating and using a model in an Xcode project:

### 1. Add the model to your project

1.  Open your Xcode project.
2.  Drag the downloaded `.mlmodel` or `.mlpackage` file into the Xcode project navigator (usually into a group named `Models`).
3.  In the dialog, make sure **Copy items if needed** is checked and **Add to targets** selects your main app target.
4.  Xcode compiles the model automatically and shows the generated Swift class (e.g. `YOLOv8` or `Efficientnetb0`) in the navigator.

### 2. Write the calling code

Import Core ML and instantiate the model in Swift. Using an image classification model as the example:

```swift
import CoreML
import UIKit

// 1. Initialize the model.
// The class name usually matches the model file name, capitalized.
let modelConfig = MLModelConfiguration()
modelConfig.computeUnits = .all // use all available compute units (CPU, GPU, Neural Engine)

do {
    let classifier = try Efficientnetb0(configuration: modelConfig)

    // 2. Prepare the input image.
    guard let image = UIImage(named: "test_image") else { return }
    guard let ciImage = CIImage(image: image) else { return }

    // Core ML expects a CGImage or CVPixelBuffer.
    let context = CIContext(options: nil)
    guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else { return }

    // Build the model's input object (the exact input type depends on the model).
    let input = try Efficientnetb0Input(imageWith: cgImage)

    // 3. Run the prediction.
    let output = try classifier.prediction(input: input)

    // 4. Read the result.
    print("Predicted label: \(output.classLabel)")
    print("Confidence: \(output.confidence[output.classLabel] ?? 0)")

} catch {
    print("Model loading or prediction failed: \(error)")
}
```

### 3. Handle model-specific outputs

Output formats differ by task; refer to the table for the corresponding model in the README:

*   **Object Detection (e.g. YOLO)**: the output usually contains a `confidence` array and a `coordinates` array. You must write post-processing (such as non-maximum suppression, NMS) to filter the boxes — see the sketch after this list.
*   **Segmentation (e.g. MobileSAM)**: the output is usually a multi-dimensional mask array that you convert into an image for display.
*   **Stable Diffusion**: usually requires multi-step inference; start from the complete pipeline code in the repository's Sample Project.
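For the object-detection case above, a generic greedy NMS over raw `confidence`/`coordinates` outputs might look like the sketch below. The normalized box format, per-class suppression, and thresholds are assumptions to adapt to the specific model.

```swift
import CoreGraphics

// One decoded candidate box. Coordinates are assumed to be a normalized CGRect
// already parsed out of the model's `coordinates` output.
struct Detection {
    let box: CGRect
    let score: Float
    let classIndex: Int
}

// Intersection-over-union of two boxes.
func iou(_ a: CGRect, _ b: CGRect) -> Float {
    let inter = a.intersection(b)
    guard !inter.isNull else { return 0 }
    let interArea = inter.width * inter.height
    let unionArea = a.width * a.height + b.width * b.height - interArea
    return unionArea > 0 ? Float(interArea / unionArea) : 0
}

// Greedy per-class NMS: keep the highest-scoring box, drop overlapping ones.
func nonMaxSuppression(_ detections: [Detection],
                       scoreThreshold: Float = 0.25,
                       iouThreshold: Float = 0.45) -> [Detection] {
    let sorted = detections
        .filter { $0.score >= scoreThreshold }
        .sorted { $0.score > $1.score }
    var kept: [Detection] = []
    for candidate in sorted {
        let overlaps = kept.contains {
            $0.classIndex == candidate.classIndex && iou($0.box, candidate.box) > iouThreshold
        }
        if !overlaps { kept.append(candidate) }
    }
    return kept
}
```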
> **Tip**: double-click the `.mlmodel` file in Xcode to inspect the model's exact input/output names, data types, and shapes. This is essential for writing correct pre- and post-processing code.

# Use Case

An iOS developer is building a real-time smart background blur feature for a travel photography app, and wants users to get professional-looking portrait effects at capture time without relying on a cloud server.

### Without CoreML-Models
- **High conversion barrier**: the developer must find an open-source algorithm (such as MobileSAM or RMBG) and spend days configuring a Python environment for format conversion, which frequently fails due to version incompatibilities.
- **Hard-to-optimize on-device performance**: a directly ported general-purpose model is large, bloats the app bundle, and runs at a low frame rate on older iPhones, making smooth real-time preview impossible.
- **Long integration and debugging cycles**: without Xcode-oriented sample code, the developer writes Core ML inference logic from scratch and spends a long time chasing memory leaks and compute bottlenecks.
- **Limited iteration**: because validating new techniques is so expensive, the team gives up on newer segmentation algorithms and settles for mediocre traditional image processing.

### With CoreML-Models
- **Ready-to-use integration**: download the pre-converted `MobileSAM` or `RMBG1.4` model from the repository and drag it into the Xcode project to call it, skipping the conversion work entirely.
- **Native performance**: the models are optimized for the Apple Neural Engine, keeping accuracy high while cutting latency, and sustaining 30+ fps even on older devices such as the iPhone 11.
- **Sample projects accelerate development**: the repository's Sample Projects show how to call the APIs, compressing a week of integration and debugging into half a day.
- **Modern algorithms land easily**: the team can cheaply try the latest segmentation and generative models and ship creative filters that competitors struggle to match.

By standardizing the model-engineering work, CoreML-Models lets iOS developers focus on product innovation and brings high-end AI capabilities to mobile devices quickly and affordably.

# Repository Info

- **Author**: Daisuke Majima ([john-rocky](https://github.com/john-rocky)), AI / mobile engineer building AI apps on iOS & Android, Japan. Contact: rockyshikoku@gmail.com, Twitter: [JackdeS11](https://twitter.com/JackdeS11)
- **Languages**: Swift 86.8%, Python 13.2%
- **Environment**: macOS / iOS with Xcode and Core ML. The repository contains models already converted to Core ML, so no training environment or Python dependencies are needed: download the `.mlmodel` / `.mlpackage` files and integrate them into an Xcode project to run on iOS, iPadOS, or macOS. See each model's section for its exact input and output formats.
- **Topics**: coreml, coremltools, coreml-framework, ios, swift, machine-learning, gan, gans, deep-learning, object-detection, semantic-segmentation, super-resolution, image-classification, style-gan

# Releases

### moge2-v1 (2026-04-08)

MoGe-2 ViT-B + normals (CVPR 2025 oral) — monocular 3D geometry estimation.

A single Core ML mlpackage (about 200 MB, FP16, fixed 504×504 input). Built on a DINOv2 ViT-B/14 backbone; predicts metric depth, surface normals, and a confidence mask.

**Variant:** `Ruicheng/moge-2-vitb-normal` (104M parameters)

| Output | Shape | Description |
|---|---|---|
| `points` | (1, 504, 504, 3) | 3D point map (affine-transformed, exponentially remapped) |
| `depth` | (1, 504, 504) | raw depth (multiply by `metric_scale` to get meters) |
| `normal` | (1, 504, 504, 3) | surface normals, L2-normalized |
| `mask` | (1, 504, 504) | confidence mask (sigmoid output, threshold 0.5) |
| `metric_scale` | (1,) | scalar that converts raw depth to meters |

For the iOS demo app, see [sample_apps/MoGe2Demo/](https://github.com/john-rocky/CoreML-Models/tree/add-moge2/sample_apps/MoGe2Demo).
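As a small illustration of the output table above, converting the raw `depth` to meters via `metric_scale`, gated by the confidence `mask`, might look like the following sketch. The feature names come from the table; everything else, including reading the arrays with linear indexing, is an assumption.

```swift
import CoreML

// Minimal sketch: raw depth × metric_scale → meters, keeping only pixels the
// confidence mask accepts. Assumes the default contiguous [1, 504, 504] layout.
func metricDepth(depth: MLMultiArray, mask: MLMultiArray, metricScale: MLMultiArray) -> [Float] {
    let scale = metricScale[0].floatValue
    let count = 504 * 504
    var meters = [Float](repeating: .nan, count: count)
    for i in 0..<count where mask[i].floatValue > 0.5 {   // mask threshold from the table
        meters[i] = depth[i].floatValue * scale            // raw depth → meters
    }
    return meters
}
```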
### hypersd-v1 (2026-04-06)

4 Core ML models for Hyper-SD one-step text-to-image (SD1.5 base fused with ByteDance's Hyper-SD LoRA, 512×512 output, about 947 MB total).

- **HyperSDTextEncoder.mlpackage.zip** (216 MB, FP16) — CLIP ViT-L text encoder
- **HyperSDUnetChunk1.mlpackage.zip** (310 MB, 6-bit palettized) — first half of the Hyper-SD UNet (Split-Einsum attention)
- **HyperSDUnetChunk2.mlpackage.zip** (290 MB, 6-bit palettized) — second half of the Hyper-SD UNet
- **HyperSDVAEDecoder.mlpackage.zip** (87 MB, FP16) — Stable Diffusion VAE decoder

Single-step generation via the TCD scheduler. Runs on the Neural Engine on iPhone 15 and later.

For details see [conversion_scripts/convert_hypersd.py](https://github.com/john-rocky/CoreML-Models/blob/add-hyper-sd-demo/conversion_scripts/convert_hypersd.py) and [sample_apps/HyperSDDemo](https://github.com/john-rocky/CoreML-Models/tree/add-hyper-sd-demo/sample_apps/HyperSDDemo).

### kokoro-v1 (2026-04-07)

First Core ML port of hexgrad/Kokoro-82M with on-device bilingual (English + Japanese) free-text input. Includes a predictor (flexible 1–256 phonemes) plus decoder buckets (128/256/512 frames). FP32 precision. Includes the mod-by-1 fix from iSTFTNet's SineGen. See README.md → Text-to-Speech → Kokoro-82M for details.

### efficientad-v1 (2026-04-04)

EfficientAD (PDN-Small) anomaly detection, trained on the bottle category of the MVTec AD dataset. Input is a 256×256 RGB image; output is an anomaly heat map plus a score. FP16 precision, 15 MB.

### sinsr-v1 (2026-04-05)

3 Core ML models for SinSR 4x super-resolution (256×256 → 1024×1024).

- **SinSR_Encoder.mlpackage.zip** (39 MB, FP16) — VQ-VAE encoder
- **SinSR_Denoiser.mlpackage.zip** (420 MB, FP32) — Swin-UNet denoiser
- **SinSR_Decoder.mlpackage.zip** (58 MB, FP16) — VQ-VAE decoder with vector quantization

See [conversion_scripts/convert_sinsr.py](https://github.com/john-rocky/CoreML-Models/blob/add-sinsr-demo/conversion_scripts/convert_sinsr.py) and [sample_apps/SinSRDemo](https://github.com/john-rocky/CoreML-Models/tree/add-sinsr-demo/sample_apps/SinSRDemo).

### stable-audio-v1 (2026-04-04)

Core ML conversion of [stabilityai/stable-audio-open-small](https://huggingface.co/stabilityai/stable-audio-open-small) — text-to-music generation (497M parameters).

Generates up to 11.9 seconds of 44.1 kHz stereo audio from a text prompt.

**Models**

| File | Size | Description |
|------|------|-------------|
| StableAudioT5Encoder | 94 MB | T5-base text encoder (INT8) |
| StableAudioNumberEmbedder | 367 KB | seconds conditioning (FP16) |
| StableAudioDiT | 292 MB | diffusion transformer (INT8, cpuAndGPU) |
| StableAudioDiT_FP32 | 1.2 GB | diffusion transformer (FP32 compute, cpuOnly, best quality) |
| StableAudioVAEDecoder | 138 MB | Oobleck stereo decoder (FP16) |

**Usage**
See the [StableAudioDemo](https://github.com/john-rocky/CoreML-Models/tree/add-stable-audio-demo/sample_apps/StableAudioDemo) sample app and the [convert_stable_audio.py](https://github.com/john-rocky/CoreML-Models/blob/add-stable-audio-demo/conversion_scripts/convert_stable_audio.py) script.

**Conversion notes**
- FP16 DiT weights produce NaNs during attention on the iOS GPU → use INT8 or FP32 compute
- The INT8 T5 occasionally produces NaNs → sanitize them before feeding the DiT
- The FP32 DiT requires `cpuOnly` (GPU background-execution restrictions on iOS)

### siglip-v2 (2026-04-03)

Google SigLIP ViT-B/16 converted into 2 Core ML models (FP16).

**v2: FP16 replaces INT8. The contrastive models need FP16 for reliable similarity scoring.**

| Model | Size | Input | Output |
|-------|------|-------|--------|
| SigLIP_ImageEncoder | 162 MB | 224x224 RGB image | L2-normalized 768-dim embedding |
| SigLIP_TextEncoder | 195 MB | SentencePiece token IDs | L2-normalized 768-dim embedding |

Scoring: compute `softmax(image_emb · text_emb * 117.33)` over the labels.

**Total: 357 MB (zipped). iOS 17 and later.**

License: Apache-2.0
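A minimal Swift sketch of the scoring rule above, assuming the 768-dim L2-normalized embeddings have already been extracted from the two encoders and copied into plain `[Float]` arrays:

```swift
import Foundation

// Zero-shot scoring: softmax(image_emb · text_emb * 117.33) over candidate labels.
// Returns one probability per label embedding, in the same order.
func classify(imageEmbedding: [Float], labelEmbeddings: [[Float]]) -> [Float] {
    let logits = labelEmbeddings.map { text -> Float in
        // Dot product of L2-normalized vectors, scaled by the model's logit scale.
        zip(imageEmbedding, text).reduce(0) { $0 + $1.0 * $1.1 } * 117.33
    }
    let maxLogit = logits.max() ?? 0
    let exps = logits.map { exp(Double($0 - maxLogit)) }   // subtract max for numerical stability
    let sum = exps.reduce(0, +)
    return exps.map { Float($0 / sum) }
}
```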
### rmbg-v1 (2026-04-03)

BRIA RMBG-1.4 background removal converted to Core ML (INT8 quantized).

| Model | Size | Input | Output |
|-------|------|-------|--------|
| RMBG_1_4 | 37 MB (zipped) | 1024×1024 RGB image | alpha mask [1, 1, 1024, 1024] |

**iOS 17 and later. Use the `.cpuOnly` compute unit.**

For a SwiftUI sample app with photo cutout, see [RMBGDemo](sample_apps/RMBGDemo).

License: Creative Commons (same as the original)

### siglip-v1 (2026-04-03)

Google SigLIP ViT-B/16 converted into 2 Core ML models (INT8 quantized).

Zero-shot image classification: supply arbitrary labels and get a probability for each.

| Model | Size | Input | Output |
|-------|------|-------|--------|
| SigLIP_ImageEncoder | 79 MB | 224x224 RGB image | L2-normalized 768-dim embedding |
| SigLIP_TextEncoder | 96 MB | SentencePiece token IDs | L2-normalized 768-dim embedding |

Similarity: `sigmoid(image_emb · text_emb * 117.33 + (-12.93))`

**Total: 175 MB (zipped). iOS 17 and later.**

For the full SwiftUI sample app, see [SigLIPDemo](sample_apps/SigLIPDemo).

License: Apache-2.0

### florence2-v1 (2026-04-03)

Microsoft Florence-2-base converted into 3 Core ML models (INT8 quantized).

Supports on-device image captioning, detailed captioning, and OCR.

| Model | Size | Input | Output |
|-------|------|-------|--------|
| Florence2VisionEncoder | 77 MB | 768×768 RGB image | image features [1, 577, 768] |
| Florence2TextEncoder | 69 MB | image features + task input_ids | encoder hidden states |
| Florence2Decoder | 81 MB | decoder input_ids + encoder hidden states | logits [1, seq, 51289] |

**Total: 227 MB (zipped). iOS 17+. Use the `.cpuOnly` compute unit.**

For the full SwiftUI sample app, see [Florence2Demo](sample_apps/Florence2Demo); the conversion script is [convert_florence2.py](conversion_scripts/convert_florence2.py).

License: MIT (same as the original model)

### diarization-v1 (2026-04-03)

pyannote segmentation-3.0 Core ML model for speaker diarization. Input: 10 s mono 16 kHz audio [1,1,160000]. Output: [1,589,7] speaker-activity logits (powerset encoding: 3 speakers + overlaps). 5.8 MB. License: MIT.
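Turning the powerset logits into per-frame speaker sets can be as simple as an argmax over the 7 classes. The class-to-speaker mapping below (silence, three single speakers, three two-speaker overlaps) is an assumption about the ordering; verify it against the original pyannote model.

```swift
import CoreML

// Assumed powerset classes for 3 speakers with up to 2 simultaneous speakers.
let powersetClasses: [[Int]] = [[], [0], [1], [2], [0, 1], [0, 2], [1, 2]]

// Decode [1, 589, 7] logits into the set of active speakers per frame.
func activeSpeakers(logits: MLMultiArray) -> [[Int]] {
    let frames = logits.shape[1].intValue        // 589 frames over the 10 s window
    var perFrame: [[Int]] = []
    for f in 0..<frames {
        var best = 0
        var bestValue = -Float.greatestFiniteMagnitude
        for c in 0..<7 {
            let v = logits[[0, f, c] as [NSNumber]].floatValue
            if v > bestValue { bestValue = v; best = c }
        }
        perFrame.append(powersetClasses[best])   // speakers active in this frame
    }
    return perFrame
}
```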
### ddcolor-v1 (2026-04-03)

DDColor Tiny (ConvNeXt-T) Core ML model for image colorization. Input: 512x512 RGB (grayscale as pseudo-RGB via LAB). Output: AB channels in LAB color space. 242 MB (F32). License: Apache-2.0.

### openvoice-v1 (2026-04-03)

OpenVoice V2 Core ML models for zero-shot voice conversion. Record source + target voice → convert. SpeakerEncoder (1.7 MB): extracts a 256-dim speaker embedding. VoiceConverter (64 MB): converts the voice using the source/target embeddings. License: MIT.

### adaface-v1 (2026-04-03)

AdaFace IR-18 Core ML model for face recognition/verification. Input: 112x112 face image. Output: 512-dim L2-normalized embedding. License: MIT.

### face3d-v1 (2026-04-02)

3DDFA_V2 Core ML model for 3D face pose estimation (yaw/pitch/roll). Input: 120x120 RGB face image. Output: 62 parameters (12 pose + 40 shape + 10 expression).

### demucs-v1 (2026-04-01)

**HTDemucs source separation (Float32 compute)**

Core ML model for separating audio into 4 stems.

**Model details**
- **Input**: `mix` [1, 2, 343980] stereo audio at 44.1 kHz (~7.8 s segment)
- **Output**: `time_output` [1, 8, 343980] = 4 stems × 2 channels
- **Source order**: drums(0,1), bass(2,3), other(4,5), vocals(6,7)
- **Compute precision**: Float32 (prevents frequency-branch overflow)
- **I/O format**: Float16

**Known issues**
- `freq_output` overflows the Float16 range (±inf) for real STFT data
- Reconstruction currently uses the time-domain output only
- Full freq+time reconstruction would improve separation quality but requires Float32 output tensors

**Usage**
Add `HTDemucs_SourceSeparation_F32.mlpackage` to your Xcode project.
See the `DemucsDemo` app for an implementation reference.

### yolo-models-v1 (2026-03-30)

Core ML models for YOLO detection:

**Detection models**
- **YOLO26s** (18 MB) — NMS-free end-to-end detection (2026)
- **YOLO11s** (18 MB) — with a Core ML NMS pipeline (2024)
- **YOLOv10s** (14 MB) — NMS-free end-to-end detection (2024)
- **YOLOv9s** (14 MB) — with a Core ML NMS pipeline (2024)

**Open-vocabulary detection (YOLO-World)**
- **yoloworld_detector** (25 MB) — YOLO-World V2-S visual detector
- **clip_text_encoder** (121 MB) — CLIP ViT-B/32 text encoder
- **clip_vocab.json** (1.6 MB) — BPE vocabulary for the tokenizer

All detection models are trained on COCO (80 classes), input 640x640.
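For the variants above that ship with a Core ML NMS pipeline (YOLO11s, YOLOv9s), Vision typically returns ready-made `VNRecognizedObjectObservation`s, so no manual box decoding or NMS is needed. A minimal sketch, assuming the Xcode-generated class is named `yolo11s` (it follows the `.mlpackage` file name):

```swift
import Vision
import CoreML

// Run a pipeline-style YOLO model through Vision and print labeled boxes.
func detectObjects(in cgImage: CGImage) throws -> [VNRecognizedObjectObservation] {
    let model = try VNCoreMLModel(for: yolo11s(configuration: MLModelConfiguration()).model)
    let request = VNCoreMLRequest(model: model)
    request.imageCropAndScaleOption = .scaleFill        // match the 640x640 training resolution
    try VNImageRequestHandler(cgImage: cgImage, options: [:]).perform([request])

    let detections = request.results as? [VNRecognizedObjectObservation] ?? []
    for detection in detections {
        // Each observation carries class labels and a normalized bounding box.
        let top = detection.labels.first
        print("\(top?.identifier ?? "?") \(top?.confidence ?? 0) \(detection.boundingBox)")
    }
    return detections
}
```

The NMS-free models (YOLO26s, YOLOv10s) instead emit raw tensors that you decode yourself, similar to the manual post-processing sketched earlier in this document.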