[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-ailia-ai--ailia-models":3,"tool-ailia-ai--ailia-models":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 
代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",145895,2,"2026-04-08T11:32:59",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108111,"2026-04-08T11:23:26",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 
助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":10,"last_commit_at":59,"category_tags":60,"status":17},4487,"LLMs-from-scratch","rasbt\u002FLLMs-from-scratch","LLMs-from-scratch 是一个基于 PyTorch 的开源教育项目，旨在引导用户从零开始一步步构建一个类似 ChatGPT 的大型语言模型（LLM）。它不仅是同名技术著作的官方代码库，更提供了一套完整的实践方案，涵盖模型开发、预训练及微调的全过程。\n\n该项目主要解决了大模型领域“黑盒化”的学习痛点。许多开发者虽能调用现成模型，却难以深入理解其内部架构与训练机制。通过亲手编写每一行核心代码，用户能够透彻掌握 Transformer 架构、注意力机制等关键原理，从而真正理解大模型是如何“思考”的。此外，项目还包含了加载大型预训练权重进行微调的代码，帮助用户将理论知识延伸至实际应用。\n\nLLMs-from-scratch 特别适合希望深入底层原理的 AI 开发者、研究人员以及计算机专业的学生。对于不满足于仅使用 API，而是渴望探究模型构建细节的技术人员而言，这是极佳的学习资源。其独特的技术亮点在于“循序渐进”的教学设计：将复杂的系统工程拆解为清晰的步骤，配合详细的图表与示例，让构建一个虽小但功能完备的大模型变得触手可及。无论你是想夯实理论基础，还是为未来研发更大规模的模型做准备",90106,"2026-04-06T11:19:32",[35,15,13,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":67,"readme_en":68,"readme_zh":69,"quickstart_zh":70,"use_case_zh":71,"hero_image_url":72,"owner_login":73,"owner_name":74,"owner_avatar_url":75,"owner_bio":76,"owner_company":77,"owner_location":77,"owner_email":78,"owner_twitter":77,"owner_website":79,"owner_url":80,"languages":81,"stars":98,"forks":99,"last_commit_at":100,"license":77,"difficulty_score":32,"env_os":101,"env_gpu":102,"env_ram":103,"env_deps":104,"category_tags":111,"github_topics":113,"view_count":32,"oss_zip_url":77,"oss_zip_packed_at":77,"status":17,"created_at":134,"updated_at":135,"faqs":136,"releases":186},5580,"ailia-ai\u002Failia-models","ailia-models","The collection of pre-trained, state-of-the-art AI models for ailia SDK","ailia-models 是一个专为 ailia SDK 打造的预训练 AI 模型库，汇集了超过 400 个涵盖视觉、语音、大语言模型及动作识别等领域的最先进模型。它主要解决了开发者在跨平台部署 AI 时面临的模型适配难、推理速度慢以及缺乏现成示例代码等痛点。\n\n无论是需要在 Windows、macOS、Linux 上运行，还是希望将 AI 功能嵌入 iOS、Android、Jetson 甚至树莓派等边缘设备，ailia-models 都能提供开箱即用的支持。其核心亮点在于利用 Vulkan 和 Metal 技术实现高效的 GPU 加速推理，并针对特定模型进行了深度优化，从而在资源受限的设备上也能获得流畅体验。此外，该库还提供了丰富的示例代码，支持 C++、Python、Unity、Rust 
等多种主流开发语言，大幅降低了从算法验证到产品落地的门槛。\n\n这套工具非常适合希望快速构建高性能 AI 应用的软件工程师、嵌入式开发人员以及算法研究者。如果你正在寻找一个既能保证推理速度，又具备广泛硬件兼容性的模型解决方案，ailia-models 将是一个值得信赖的选择，帮助你轻松将前沿 AI ","ailia-models 是一个专为 ailia SDK 打造的预训练 AI 模型库，汇集了超过 400 个涵盖视觉、语音、大语言模型及动作识别等领域的最先进模型。它主要解决了开发者在跨平台部署 AI 时面临的模型适配难、推理速度慢以及缺乏现成示例代码等痛点。\n\n无论是需要在 Windows、macOS、Linux 上运行，还是希望将 AI 功能嵌入 iOS、Android、Jetson 甚至树莓派等边缘设备，ailia-models 都能提供开箱即用的支持。其核心亮点在于利用 Vulkan 和 Metal 技术实现高效的 GPU 加速推理，并针对特定模型进行了深度优化，从而在资源受限的设备上也能获得流畅体验。此外，该库还提供了丰富的示例代码，支持 C++、Python、Unity、Rust 等多种主流开发语言，大幅降低了从算法验证到产品落地的门槛。\n\n这套工具非常适合希望快速构建高性能 AI 应用的软件工程师、嵌入式开发人员以及算法研究者。如果你正在寻找一个既能保证推理速度，又具备广泛硬件兼容性的模型解决方案，ailia-models 将是一个值得信赖的选择，帮助你轻松将前沿 AI 技术转化为实际生产力。","[\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_8d9da79f7362.png\">](ABOUT_AINYAN.md)\n\nThe collection of pre-trained, state-of-the-art AI models.\n\n# About ailia SDK\n\n[ailia SDK](https:\u002F\u002Failia.ai\u002Fen\u002Fsdk\u002F) is a cross-platform, high-speed inference SDK for AI. It supports Windows, Mac, Linux, iOS, Android, Jetson, and Raspberry Pi with GPU acceleration via Vulkan and Metal. 
Bindings are available for C++, Python, Unity (C#), Kotlin, Rust, and Flutter.\n\n# Why ailia SDK\n\n|  | ailia SDK | ONNX Runtime |\n|:---|:---:|:---:|\n| GPU inference via Vulkan and Metal | ✓ | − |\n| ailia Speech \u002F Voice \u002F LLM \u002F Tokenizer \u002F Tracker | ✓ | − |\n| 400+ verified model library with sample code | ✓ | − |\n| Non-OS \u002F RTOS inference support | ✓ | − |\n| Unity bindings and model collection | ✓ | △ |\n| Model‑specific optimization | ✓ | △ |\n\n△ = Supported but limited due to general-purpose implementation.\n\n# How to use\n\nTry now on [Google Colaboratory](https:\u002F\u002Fwww.ailia.ai\u002Flaunch_to_colab)\n\nIf you would like to try on your computer:\n\n[ailia MODELS tutorial](TUTORIAL.md)\n\n[ailia MODELS tutorial 日本語版](TUTORIAL_jp.md)\n\n# Documentation\n\n[ailia-models wiki](https:\u002F\u002Fdeepwiki.com\u002Failia-ai\u002Failia-models)\n\n# Supported models\n403 models as of March 12, 2026\n\n# Latest update\n- 2026.03.12 Add depth_anything_v3, depth_pro\n- 2026.03.06 Add depth_anything_v2\n- 2026.03.04 Add gpt-sovits-v2-pro, bevformer, uniad\n- 2026.03.02 Add g2pw, gpt-sovits-v1, v2, v3 (chinese)\n- 2026.01.16 Add embeddinggemma\n- 2025.12.30 Add demucs, latentsync\n- 2025.12.26 Add sadtalker\n- 2025.12.25 Add samurai, cotracker3 (ailia SDK 1.6.1)\n- 2025.12.21 Add silerovad v5, v6, v6_2\n- 2025.12.17 Add sensevoice, cosyvoice2\n- 2025.12.01 Add glass, mobilevlm, donut\n\n- More information in our [Wiki](https:\u002F\u002Fgithub.com\u002Failia-ai\u002Failia-models\u002Fwiki)\n\n## Action recognition\n\n| | Model | Reference | Exported From | Supported Ailia Version | Date | Blog |\n|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_9b3cda812675.png\" width=128px>](action_recognition\u002Fva-cnn\u002F) | [va-cnn](\u002Faction_recognition\u002Fva-cnn\u002F) | [View 
Adaptive Neural Networks (VA) for Skeleton-based Human Action Recognition](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FView-Adaptive-Neural-Networks-for-Skeleton-based-Human-Action-Recognition) | Pytorch | 1.2.7 and later | Mar 2017 ||\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_99eb101cc9ee.png\" width=128px>](action_recognition\u002Fst_gcn\u002F) | [st-gcn](\u002Faction_recognition\u002Fst_gcn\u002F) | [ST-GCN](https:\u002F\u002Fgithub.com\u002Fyysijie\u002Fst-gcn) | Pytorch | 1.2.5 and later | Jan 2018 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fst-gcn-a-machine-learning-model-for-detecting-human-actions-from-skeletons-46a95b31b5db) [JP](https:\u002F\u002Ftech.ailia.ai\u002Fst-gcn-%E9%AA%A8%E6%A0%BC%E3%81%8B%E3%82%89%E4%BA%BA%E7%89%A9%E3%81%AE%E3%82%A2%E3%82%AF%E3%82%B7%E3%83%A7%E3%83%B3%E3%82%92%E6%A4%9C%E5%87%BA%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-af3196e38d1f) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_26dd279d25aa.jpg\" width=128px>](action_recognition\u002Fmars\u002F) | [mars](\u002Faction_recognition\u002Fmars\u002F) | [MARS: Motion-Augmented RGB Stream for Action Recognition](https:\u002F\u002Fgithub.com\u002Fcraston\u002FMARS) | Pytorch | 1.2.4 and later | Nov 2018 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fmars-a-machine-learning-model-for-identifying-actions-from-videos-6b93c06ac6a5) [JP](https:\u002F\u002Ftech.ailia.ai\u002Fmars-%E5%8B%95%E7%94%BB%E3%81%8B%E3%82%89%E3%82%A2%E3%82%AF%E3%82%B7%E3%83%A7%E3%83%B3%E3%82%92%E8%AD%98%E5%88%A5%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-c03b0b8804a8) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_4ca9d3a01647.gif\" width=128px>](action_recognition\u002Fax_action_recognition\u002F) | 
[ax_action_recognition](\u002Faction_recognition\u002Fax_action_recognition\u002F) | [Realtime-Action-Recognition](https:\u002F\u002Fgithub.com\u002Ffelixchenfy\u002FRealtime-Action-Recognition) | Pytorch | 1.2.7 and later | Mar 2019 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_8ed805bd7cf5.png\" width=128px>](action_recognition\u002Fdriver-action-recognition-adas\u002F) | [driver-action-recognition-adas](\u002Faction_recognition\u002Fdriver-action-recognition-adas\u002F) | [driver-action-recognition-adas-0002](https:\u002F\u002Fgithub.com\u002Fopenvinotoolkit\u002Fopen_model_zoo\u002Ftree\u002Fmaster\u002Fmodels\u002Fintel\u002Fdriver-action-recognition-adas-0002) | OpenVINO | 1.2.5 and later | Mar 2019 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_ea78e5cc3376.gif\" width=128px>](action_recognition\u002Faction_clip\u002F) | [action_clip](\u002Faction_recognition\u002Faction_clip\u002F) | [ActionCLIP](https:\u002F\u002Fgithub.com\u002Fsallymmx\u002FActionCLIP) | Pytorch | 1.2.7 and later | Sep 2021 | |\n\n## Anomaly detection\n\n| | Model | Reference | Exported From | Supported Ailia Version | Date | Blog |\n|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_f9dfd9afdf47.png\" width=128px>](anomaly_detection\u002Fmahalanobisad\u002F) | [mahalanobisad](\u002Fanomaly_detection\u002Fmahalanobisad\u002F) | [MahalanobisAD-pytorch](https:\u002F\u002Fgithub.com\u002Fbyungjae89\u002FMahalanobisAD-pytorch\u002Ftree\u002Fmaster) | Pytorch | 1.2.9 and later | May 2020 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_a37dde9c1a1c.png\" width=128px>](anomaly_detection\u002Fspade-pytorch\u002F) | 
[spade-pytorch](\u002Fanomaly_detection\u002Fspade-pytorch\u002F) | [Sub-Image Anomaly Detection with Deep Pyramid Correspondences](https:\u002F\u002Fgithub.com\u002Fbyungjae89\u002FSPADE-pytorch) | Pytorch | 1.2.6 and later | May 2020 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_f032db59a299.png\" width=128px>](anomaly_detection\u002Fpadim\u002F) | [padim](\u002Fanomaly_detection\u002Fpadim\u002F) | [PaDiM-Anomaly-Detection-Localization-master](https:\u002F\u002Fgithub.com\u002Fxiahaifeng1995\u002FPaDiM-Anomaly-Detection-Localization-master) | Pytorch | 1.2.6 and later | Nov 2020 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fpadim-a-machine-learning-model-for-detecting-defective-products-without-retraining-5daa6f203377) [JP](https:\u002F\u002Ftech.ailia.ai\u002Fpadim-%E5%86%8D%E5%AD%A6%E7%BF%92%E4%B8%8D%E8%A6%81%E3%81%A7%E4%B8%8D%E8%89%AF%E5%93%81%E6%A4%9C%E7%9F%A5%E3%82%92%E8%A1%8C%E3%81%86%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-69add653fbd3) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_23eb8505880d.png\" width=128px>](anomaly_detection\u002Fpatchcore\u002F) | [patchcore](\u002Fanomaly_detection\u002Fpatchcore\u002F) | [PatchCore_anomaly_detection](https:\u002F\u002Fgithub.com\u002Fhcw-00\u002FPatchCore_anomaly_detection) | Pytorch | 1.2.6 and later | Jun 2021 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_edbea6f18791.png\" width=128px>](anomaly_detection\u002Fglass\u002F) | [glass](\u002Fanomaly_detection\u002Fglass\u002F) | [A Unified Anomaly Synthesis Strategy with Gradient Ascent for Industrial Anomaly Detection and Localization](https:\u002F\u002Fgithub.com\u002Fcqylunlun\u002FGLASS) | Pytorch | 1.2.14 and later | Jul 2024 | |\n\n## Audio Language Model\n\n| Model | Reference | Exported From | Supported Ailia Version | Date | Blog 
|\n|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n|[qwen_audio](\u002Faudio_language_model\u002Fqwen_audio) | [Qwen-Audio](https:\u002F\u002Fgithub.com\u002FQwenLM\u002FQwen-Audio) | Pytorch | 1.5.0 and later | Nov 2023 | [JP](https:\u002F\u002Ftech.ailia.ai\u002Fqwen-audio-%E9%9F%B3%E3%82%92%E5%85%A5%E5%8A%9B%E3%81%97%E3%81%A6%E3%83%86%E3%82%AD%E3%82%B9%E3%83%88%E3%82%92%E7%94%9F%E6%88%90%E5%8F%AF%E8%83%BD%E3%81%AAaudio-language-model-57d3a5c71643) |\n\n## Audio processing\n\n### Audio classification\n\n| Model | Reference | Exported From | Supported Ailia Version | Date | Blog |\n|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [crnn_audio_classification](\u002Faudio_processing\u002Fcrnn_audio_classification\u002F) | [crnn-audio-classification](https:\u002F\u002Fgithub.com\u002Fksanjeevan\u002Fcrnn-audio-classification) | Pytorch | 1.2.5 and later | Mar 2019 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fcrnnsoundclassification-a-machine-learning-model-for-classifying-sound-8e45d1f22fa) [JP](https:\u002F\u002Ftech.ailia.ai\u002Fcrnnsoundclassification-%E9%9F%B3%E5%A3%B0%E3%82%92%E5%88%86%E9%A1%9E%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-2a35564dad42) |\n| [audioset_tagging_cnn](\u002Faudio_processing\u002Faudioset_tagging_cnn\u002F) | [PANNs: Large-Scale Pretrained Audio Neural Networks for Audio Pattern Recognition](https:\u002F\u002Fgithub.com\u002Fqiuqiangkong\u002Faudioset_tagging_cnn) | Pytorch | 1.2.9 and later | Dec 2019 | |\n| [transformer-cnn-emotion-recognition](\u002Faudio_processing\u002Ftransformer-cnn-emotion-recognition\u002F) | [Combining Spatial and Temporal Feature Representions of Speech Emotion by Parallelizing CNNs and Transformer-Encoders](https:\u002F\u002Fgithub.com\u002FIliaZenkov\u002Ftransformer-cnn-emotion-recognition)  | Pytorch | 1.2.5 and later | Oct 2020 | |\n| [microsoft 
clap](\u002Faudio_processing\u002Fmsclap\u002F) | [CLAP](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FCLAP) | Pytorch | 1.2.11 and later | Jun 2022 | |\n| [clap](\u002Faudio_processing\u002Fclap\u002F) | [CLAP](https:\u002F\u002Fgithub.com\u002FLAION-AI\u002FCLAP) | Pytorch | 1.2.6 and later | Nov 2022 | [JP](https:\u002F\u002Ftech.ailia.ai\u002Fclap-%E3%83%86%E3%82%AD%E3%82%B9%E3%83%88%E3%81%8B%E3%82%89%E9%9F%B3%E5%A3%B0%E3%82%92%E6%A4%9C%E7%B4%A2%E5%8F%AF%E8%83%BD%E3%81%AB%E3%81%99%E3%82%8B%E7%89%B9%E5%BE%B4%E6%8A%BD%E5%87%BA%E3%83%A2%E3%83%87%E3%83%AB-f712f9c00dab) |\n\n### Music enhancement\n\n| Model | Reference | Exported From | Supported Ailia Version | Date | Blog |\n|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [hifigan](\u002Faudio_processing\u002Fhifigan\u002F) | [HiFi-GAN](https:\u002F\u002Fgithub.com\u002Fjik876\u002Fhifi-gan) | Pytorch | 1.2.9 and later | Oct 2020 |  |\n| [deep music enhancer](\u002Faudio_processing\u002Fdeep-music-enhancer\u002F) | [On Filter Generalization for Music Bandwidth Extension Using Deep Neural Networks](https:\u002F\u002Fgithub.com\u002Fserkansulun\u002Fdeep-music-enhancer) | Pytorch | 1.2.6 and later | Nov 2020 | |\n\n### Music generation\n\n| Model | Reference | Exported From | Supported Ailia Version | Date | Blog |\n|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [pytorch_wavenet](\u002Faudio_processing\u002Fpytorch_wavenet\u002F) | [pytorch_wavenet](https:\u002F\u002Fgithub.com\u002Fvincentherrmann\u002Fpytorch-wavenet) | Pytorch | 1.2.14 and later | Sep 2016 |  |\n\n### Noise reduction\n\n| Model | Reference | Exported From | Supported Ailia Version | Date | Blog |\n|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [rnnoise](\u002Faudio_processing\u002Frnnoise\u002F) | [rnnoise](https:\u002F\u002Fgithub.com\u002Fxiph\u002Frnnoise) | Keras | 1.2.15 and later | Sep 2017 | |\n| 
[voicefilter](\u002Faudio_processing\u002Fvoicefilter\u002F) | [VoiceFilter](https:\u002F\u002Fgithub.com\u002Fmindslab-ai\u002Fvoicefilter)  | Pytorch | 1.2.7 and later | Oct 2018 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fvoicefilter-targeted-voice-separation-model-6fe6f85309ea) [JP](https:\u002F\u002Ftech.ailia.ai\u002Fvoicefilter-%E4%BB%BB%E6%84%8F%E3%81%AE%E4%BA%BA%E7%89%A9%E3%81%AE%E5%A3%B0%E3%82%92%E6%8A%BD%E5%87%BA%E3%81%A7%E3%81%8D%E3%82%8B%E9%9F%B3%E5%A3%B0%E5%88%86%E9%9B%A2%E3%83%A2%E3%83%87%E3%83%AB-d5b88a8549d9) |\n| [unet_source_separation](\u002Faudio_processing\u002Funet_source_separation\u002F) | [source_separation](https:\u002F\u002Fgithub.com\u002FAppleHolic\u002Fsource_separation)  | Pytorch | 1.2.6 and later | Jul 2019 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Funetsourceseparation-a-machine-learning-model-to-remove-audio-noise-and-extract-voices-5acae8c37291) [JP](https:\u002F\u002Ftech.ailia.ai\u002Funetsourceseparation-%E9%9B%91%E9%9F%B3%E3%82%92%E9%99%A4%E5%8E%BB%E3%81%97%E3%81%A6%E5%A3%B0%E3%81%A0%E3%81%91%E3%82%92%E6%8A%BD%E5%87%BA%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-5d23fd054eac) |\n| [demucs](\u002Faudio_processing\u002Fdemucs\u002F) | [Demucs](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fdemucs) | Pytorch | 1.4.0 and later | Sep 2019 | |\n| [dtln](\u002Faudio_processing\u002Fdtln\u002F) | [Dual-signal Transformation LSTM Network](https:\u002F\u002Fgithub.com\u002Fbreizhn\u002FDTLN) | Tensorflow | 1.3.0 and later | May 2020 |  |\n| [audiosep](\u002Faudio_processing\u002Faudiosep\u002F) | [AudioSep](https:\u002F\u002Fgithub.com\u002FAudio-AGI\u002FAudioSep) | Pytorch | 1.3.0 and later | Aug 2023 | |\n\n### Phoneme alignment\n\n| Model | Reference | Exported From | Supported Ailia Version | Date | Blog |\n|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [narabas](\u002Faudio_processing\u002Fnarabas\u002F) | 
[narabas: Japanese phoneme forced alignment tool](https:\u002F\u002Fgithub.com\u002Fdarashi\u002Fnarabas) | Pytorch | 1.2.11 and later | Mar 2023 | |\n\n### Pitch detection\n\n| Model | Reference | Exported From | Supported Ailia Version | Date | Blog |\n|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [crepe](\u002Faudio_processing\u002Fcrepe\u002F) | [torchcrepe](https:\u002F\u002Fgithub.com\u002Fmaxrmorrison\u002Ftorchcrepe) | Pytorch | 1.2.10 and later | Feb 2018 | [JP](https:\u002F\u002Ftech.ailia.ai\u002Fcrepe-%E9%AB%98%E7%B2%BE%E5%BA%A6%E3%81%AA%E3%83%94%E3%83%83%E3%83%81%E6%8E%A8%E5%AE%9A%E3%82%92%E8%A1%8C%E3%81%86%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-dfb09b5e1f6b) |\n\n\n### Speaker diarization\n\n| Model | Reference | Exported From | Supported Ailia Version | Date | Blog |\n|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [pyannote-audio](\u002Faudio_processing\u002Fpyannote-audio\u002F) | [Pyannote-audio](https:\u002F\u002Fgithub.com\u002Fpyannote\u002Fpyannote-audio) | Pytorch | 1.2.15 and later | Nov 2019 | [JP](https:\u002F\u002Ftech.ailia.ai\u002Fpyannoteaudio-%E8%A9%B1%E8%80%85%E5%88%86%E9%9B%A2%E3%82%92%E8%A1%8C%E3%81%86%E3%81%9F%E3%82%81%E3%81%AE%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-fca61f4ef5d0) |\n| [auto_speech](\u002Faudio_processing\u002Fauto_speech\u002F) | [AutoSpeech: Neural Architecture Search for Speaker Recognition](https:\u002F\u002Fgithub.com\u002FVITA-Group\u002FAutoSpeech)  | Pytorch | 1.2.5 and later | May 2020 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fautospeech-speech-based-person-identification-model-f01822f6d8e5) [JP](https:\u002F\u002Ftech.ailia.ai\u002Fautospeech-%E9%9F%B3%E5%A3%B0%E3%81%AB%E3%82%88%E3%82%8B%E5%80%8B%E4%BA%BA%E8%AD%98%E5%88%A5%E3%83%A2%E3%83%87%E3%83%AB-267a00f26a4a) |\n| [wespeaker](\u002Faudio_processing\u002Fwespeaker\u002F) | 
[WeSpeaker](https:\u002F\u002Fgithub.com\u002Fwenet-e2e\u002Fwespeaker) | Onnxruntime | 1.2.9 and later | Oct 2022 | |\n\n### Speech to text\n\n| Model | Reference | Exported From | Supported Ailia Version | Date | Blog |\n|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [deepspeech2](\u002Faudio_processing\u002Fdeepspeech2\u002F) | [deepspeech.pytorch](https:\u002F\u002Fgithub.com\u002FSeanNaren\u002Fdeepspeech.pytorch) | Pytorch | 1.2.2 and later | Oct 2017 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fdeepspeech2-a-machine-learning-model-for-speech-recognition-d9e64c0d1afc) [JP](https:\u002F\u002Ftech.ailia.ai\u002Fdeepspeech2-%E9%9F%B3%E5%A3%B0%E8%AA%8D%E8%AD%98%E3%82%92%E8%A1%8C%E3%81%86%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-8288c5f53eea) |\n| [whisper](\u002Faudio_processing\u002Fwhisper\u002F) | [Whisper](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fwhisper) | Pytorch | 1.2.10 and later | Dec 2022 | [JP](https:\u002F\u002Ftech.ailia.ai\u002Fwhisper-%E6%97%A5%E6%9C%AC%E8%AA%9E%E3%82%92%E5%90%AB%E3%82%8099%E8%A8%80%E8%AA%9E%E3%82%92%E8%AA%8D%E8%AD%98%E3%81%A7%E3%81%8D%E3%82%8B%E9%9F%B3%E5%A3%B0%E8%AA%8D%E8%AD%98%E3%83%A2%E3%83%87%E3%83%AB-b6e578f55c87) |\n| [reazon_speech](\u002Faudio_processing\u002Freazon_speech\u002F) | [ReazonSpeech](https:\u002F\u002Fresearch.reazon.jp\u002Fprojects\u002FReazonSpeech\u002F) | Pytorch | 1.4.0 and later | Jan 2023 | |\n| [distil-whisper](\u002Faudio_processing\u002Fdistil-whisper\u002F) | [Hugging Face - Distil-Whisper](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fdistil-whisper) | Pytorch | 1.2.16 and later | Nov 2023 | |\n| [sensevoice](\u002Faudio_processing\u002Fsensevoice\u002F) | [SenseVoice](https:\u002F\u002Fgithub.com\u002FFunAudioLLM\u002FSenseVoice) | Pytorch | 1.2.13 and later | July 2024 | 
[JP](https:\u002F\u002Ftech.ailia.ai\u002Fsensevoice-%E6%97%A5%E6%9C%AC%E8%AA%9E%E3%81%AB%E3%82%82%E5%AF%BE%E5%BF%9C%E3%81%97%E3%81%9F%E9%AB%98%E9%80%9F%E3%81%AA%E9%9F%B3%E5%A3%B0%E8%AA%8D%E8%AD%98%E3%83%A2%E3%83%87%E3%83%AB-3721c79e0592) |\n| [reazon_speech2](\u002Faudio_processing\u002Freazon_speech2\u002F) | [ReazonSpeech2](https:\u002F\u002Fresearch.reazon.jp\u002Fprojects\u002FReazonSpeech\u002F) | Pytorch | 1.4.0 and later | Feb 2024 | |\n| [kotoba-whisper](\u002Faudio_processing\u002Fkotoba-whisper\u002F) | [kotoba-whisper](https:\u002F\u002Fhuggingface.co\u002Fkotoba-tech\u002Fkotoba-whisper-v1.0) | Pytorch | 1.2.16 and later | Apr 2024 | |\n\n### Text to speech\n\n| Model | Reference | Exported From | Supported Ailia Version | Date | Blog |\n|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [pytorch-dc-tts](\u002Faudio_processing\u002Fpytorch-dc-tts\u002F) | [Efficiently Trainable Text-to-Speech System Based on Deep Convolutional Networks with Guided Attention](https:\u002F\u002Fgithub.com\u002Ftugstugi\u002Fpytorch-dc-tts) | Pytorch | 1.2.6 and later | Oct 2017 |  [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fpytorchdctts-a-machine-learning-model-for-text-to-speech-synthesis-2273e269b480) [JP](https:\u002F\u002Ftech.ailia.ai\u002Fpytorchdctts-%E3%83%86%E3%82%AD%E3%82%B9%E3%83%88%E3%81%8B%E3%82%89%E9%9F%B3%E5%A3%B0%E5%90%88%E6%88%90%E3%82%92%E8%A1%8C%E3%81%86%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-dcb5eb07c883) |\n| [tacotron2](\u002Faudio_processing\u002Ftacotron2\u002F) | [Tacotron2](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Ftacotron2) | Pytorch | 1.2.15 and later | Feb 2018 | [JP](https:\u002F\u002Ftech.ailia.ai\u002Ftacotron2-%E6%B3%A2%E5%BD%A2%E5%A4%89%E6%8F%9B%E3%82%92ai%E3%81%A7%E8%A1%8C%E3%81%86%E9%AB%98%E5%93%81%E8%B3%AA%E3%81%AA%E9%9F%B3%E5%A3%B0%E5%90%88%E6%88%90%E3%83%A2%E3%83%87%E3%83%AB-bc592217a399) |\n| [vall-e-x](\u002Faudio_processing\u002Fvall-e-x\u002F) | 
[VALL-E-X](https:\u002F\u002Fgithub.com\u002FPlachtaa\u002FVALL-E-X) | Pytorch | 1.2.15 and later | Mar 2023 | [JP](https:\u002F\u002Ftech.ailia.ai\u002Fvall-e-x-%E5%86%8D%E5%AD%A6%E7%BF%92%E4%B8%8D%E8%A6%81%E3%81%A7%E5%A3%B0%E8%B3%AA%E3%82%92%E5%A4%89%E6%9B%B4%E3%81%A7%E3%81%8D%E3%82%8B%E9%9F%B3%E5%A3%B0%E5%90%88%E6%88%90%E3%83%A2%E3%83%87%E3%83%AB-977efc19ac84) |\n| [Bert-VITS2](\u002Faudio_processing\u002Fbert-vits2\u002F) | [Bert-VITS2](https:\u002F\u002Fgithub.com\u002Ffishaudio\u002FBert-VITS2) | Pytorch | 1.2.16 and later | Aug 2023|\n| [gpt-sovits](\u002Faudio_processing\u002Fgpt-sovits\u002F) | [GPT-SoVITS](https:\u002F\u002Fgithub.com\u002FRVC-Boss\u002FGPT-SoVITS) | Pytorch | 1.4.0 and later | Feb 2024 | [JP](https:\u002F\u002Ftech.ailia.ai\u002Fgpt-sovits-%E3%83%95%E3%82%A1%E3%82%A4%E3%83%B3%E3%83%81%E3%83%A5%E3%83%BC%E3%83%8B%E3%83%B3%E3%82%B0%E3%81%A7%E3%81%8D%E3%82%8B0%E3%82%B7%E3%83%A7%E3%83%83%E3%83%88%E3%81%AE%E9%9F%B3%E5%A3%B0%E5%90%88%E6%88%90%E3%83%A2%E3%83%87%E3%83%AB-2212eeb5ad20) |\n| [gpt-sovits-v2](\u002Faudio_processing\u002Fgpt-sovits-v2\u002F) | [GPT-SoVITS](https:\u002F\u002Fgithub.com\u002FRVC-Boss\u002FGPT-SoVITS) | Pytorch | 1.4.0 and later | Aug 2024 |  |\n| [cosyvoice2](\u002Faudio_processing\u002Fcosyvoice2\u002F) | [CosyVoice2](https:\u002F\u002Fgithub.com\u002FFunAudioLLM\u002FCosyVoice\u002Ftree\u002Fmain) | Pytorch | 1.4.0 and later | Dec 2024 |  |\n| [gpt-sovits-v3](\u002Faudio_processing\u002Fgpt-sovits-v3\u002F) | [GPT-SoVITS](https:\u002F\u002Fgithub.com\u002FRVC-Boss\u002FGPT-SoVITS) | Pytorch | 1.4.0 and later | Feb 2025 |  |\n| [gpt-sovits-v2-pro](\u002Faudio_processing\u002Fgpt-sovits-v2-pro\u002F) | [GPT-SoVITS](https:\u002F\u002Fgithub.com\u002FRVC-Boss\u002FGPT-SoVITS) | Pytorch | 1.4.0 and later | Jun 2025 | [JP](https:\u002F\u002Ftech.ailia.ai\u002Fgpt-sovits-v2-pro-%E9%AB%98%E9%80%9F%E3%81%8B%E3%81%A4%E9%AB%98%E7%B2%BE%E5%BA%A6%E3%81%AA%E9%9F%B3%E5%A3%B0%E5%90%88%E6%88%90%E3%83%A2%E3%83%87%E3%83%AB-81f2156366cd) 
|

### Voice activity detection

| Model | Reference | Exported From | Supported Ailia Version | Date | Blog |
|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|
| [silero-vad](/audio_processing/silero-vad/) | [Silero VAD](https://github.com/snakers4/silero-vad) | Pytorch | 1.2.15 and later | Dec 2020 | [JP](https://tech.ailia.ai/silerovad-%E7%99%BA%E8%A9%B1%E5%8C%BA%E9%96%93%E3%82%92%E6%A4%9C%E5%87%BA%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-2ad6cf395703) |

### Voice conversion

| Model | Reference | Exported From | Supported Ailia Version | Date | Blog |
|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|
| [rvc](/audio_processing/rvc/) | [Retrieval-based-Voice-Conversion-WebUI](https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI) | Pytorch | 1.2.12 and later | Mar 2023 | [JP](https://tech.ailia.ai/rvc-ai%E3%82%92%E4%BD%BF%E7%94%A8%E3%81%97%E3%81%9F%E3%83%9C%E3%82%A4%E3%82%B9%E3%83%81%E3%82%A7%E3%83%B3%E3%82%B8%E3%83%A3%E3%83%BC-64a813c7a0c4) |

## Autonomous driving

| Model | Reference | Exported From | Supported Ailia Version | Date | Blog |
|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|
|[bevformer](/autonomous_driving/bevformer/) | [BEVFormer](https://github.com/fundamentalvision/BEVFormer) | Pytorch | 1.6.1 and later | Mar 2022 | [JP](https://tech.ailia.ai/bevformer-%E3%83%9E%E3%83%AB%E3%83%81%E3%82%AB%E3%83%A1%E3%83%A9%E7%94%BB%E5%83%8F%E3%81%8B%E3%82%89bev%E8%A1%A8%E7%8F%BE%E3%82%92%E7%94%9F%E6%88%90%E3%81%99%E3%82%8Bai%E3%83%A2%E3%83%87%E3%83%AB-66ec76dd3a70) |
|[uniad](/autonomous_driving/uniad/) | [UniAD: Unified Driving](https://github.com/OpenDriveLab/UniAD) | Pytorch | 1.6.1 and later | Dec 2022 | [JP](https://tech.ailia.ai/uniad-end2end%E3%81%AE%E8%87%AA%E5%8B%95%E9%81%8B%E8%BB%A2%E3%81%AE%E5%9F%BA%E6%9C%AC%E3%81%A8%E3%81%AA%E3%82%8B%E3%83%A2%E3%83%87%E3%83%AB-39339e6e277b) |

## Background removal

| | Model | Reference | Exported From | Supported Ailia Version | Date | Blog |
|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_d12990de31b9.png" width=128px>](background_removal/deep-image-matting/) | [deep-image-matting](/background_removal/deep-image-matting/) | [Deep Image Matting](https://github.com/foamliu/Deep-Image-Matting)| Keras | 1.2.3 and later | Mar 2017 | [EN](https://medium.com/axinc-ai/deep-image-matting-a-machine-learning-model-to-improve-the-accuracy-of-image-matting-2ff98e0b47d6) [JP](https://tech.ailia.ai/deep-image-matting-%E7%89%A9%E4%BD%93%E3%81%AE%E5%88%87%E3%82%8A%E6%8A%9C%E3%81%8D%E3%82%92%E9%AB%98%E7%B2%BE%E5%BA%A6%E5%8C%96%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-45580882966f) |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_e20a8820514e.png" width=128px>](background_removal/indexnet/) | [indexnet](/background_removal/indexnet/) | [Indices Matter: Learning to Index for Deep Image Matting](https://github.com/open-mmlab/mmediting/tree/master/configs/mattors/indexnet) | Pytorch | 1.2.7 and later | Aug 2019 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_3f16b5db9753.png" width=128px>](background_removal/u2net/) | [U-2-Net](/background_removal/u2net/) | [U^2-Net: Going Deeper with Nested U-Structure for Salient Object Detection](https://github.com/NathanUA/U-2-Net) | Pytorch | 1.2.2 and later | May 2020 | [EN](https://medium.com/axinc-ai/u2net-a-machine-learning-model-that-performs-object-cropping-in-a-single-shot-48adfc158483) [JP](https://tech.ailia.ai/u2net-%E3%82%B7%E3%83%B3%E3%82%B0%E3%83%AB%E3%82%B7%E3%83%A7%E3%83%83%E3%83%88%E3%81%A7%E7%89%A9%E4%BD%93%E3%81%AE%E5%88%87%E3%82%8A%E6%8A%9C%E3%81%8D%E3%82%92%E8%A1%8C%E3%81%86%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-e346f2787cdb) |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_a6eca8477aae.png" width=128px>](background_removal/u2net-portrait-matting/) | [u2net-portrait-matting](/background_removal/u2net-portrait-matting/) | [U^2-Net - Portrait matting](https://github.com/dennisbappert/u-2-net-portrait) | Pytorch | 1.2.7 and later | May 2020 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_168bba1664f9.png" width=128px>](background_removal/u2net-human-seg/) | [u2net-human-seg](/background_removal/u2net-human-seg/) | [U^2-Net - human segmentation](https://github.com/xuebinqin/U-2-Net) | Pytorch | 1.2.4 and later | May 2020 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_3a12ebc063a2.png" width=128px>](background_removal/cascade_psp/) | [cascade_psp](/background_removal/cascade_psp/) | [CascadePSP](https://github.com/hkchengrex/CascadePSP) | Pytorch | 1.2.9 and later | May 2020 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_d60bf7ec8b48.png" width=128px>](background_removal/rembg/) | [rembg](/background_removal/rembg/) | [Rembg](https://github.com/danielgatis/rembg) | Pytorch | 1.2.4 and later | Aug 2020 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_fe290ce5cbba.png" width=128px>](background_removal/gfm/) | [gfm](/background_removal/gfm/) | [Bridging Composite and Real: Towards End-to-end Deep Image Matting](https://github.com/JizhiziLi/GFM) | Pytorch | 1.2.10 and later | Oct 2020 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_8a39946122f6.jpg" width=128px>](background_removal/modnet/) | [modnet](/background_removal/modnet/) | [MODNet: Trimap-Free Portrait Matting in Real Time](https://github.com/ZHKKKe/MODNet) | Pytorch | 1.2.7 and later | Nov 2020 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_41538729bad5.png" width=128px>](background_removal/background_matting_v2/) | [background_matting_v2](/background_removal/background_matting_v2/) | [Real-Time High-Resolution Background Matting](https://github.com/PeterL1n/BackgroundMattingV2) | Pytorch | 1.2.9 and later | Dec 2020 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_53e38ba59dc2.jpg" width=128px>](background_removal/dis_seg/) | [dis_seg](/background_removal/dis_seg/) | [Highly Accurate Dichotomous Image Segmentation](https://github.com/xuebinqin/DIS) | Pytorch | 1.2.10 and later | Mar 2022 | |

## Crowd counting

| | Model | Reference | Exported From | Supported Ailia Version | Date | Blog |
|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_41bd6b5714f8.png" width=256px>](crowd_counting/crowdcount-cascaded-mtl/) | [crowdcount-cascaded-mtl](/crowd_counting/crowdcount-cascaded-mtl) | [CNN-based Cascaded Multi-task Learning of <br/>High-level Prior and Density Estimation for Crowd Counting <br/>(Single Image Crowd Counting)](https://github.com/svishwa/crowdcount-cascaded-mtl) | Pytorch | 1.2.1 and later | Jul 2017 | [EN](https://medium.com/axinc-ai/crowdcounting-a-machine-learning-model-for-counting-people-a7b274a7c2af) [JP](https://tech.ailia.ai/crowdcounting-%E7%94%BB%E5%83%8F%E3%81%8B%E3%82%89%E4%BA%BA%E6%95%B0%E3%82%92%E8%A8%88%E6%B8%AC%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-459e8b3fc184) |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_aca259a55df2.png" width=256px>](crowd_counting/c-3-framework/) | [c-3-framework](/crowd_counting/c-3-framework) | [Crowd Counting Code Framework(C^3-Framework)](https://github.com/gjy3035/C-3-Framework) | Pytorch | 1.2.5 and later | Jul 2019 | |

## Deep fashion

| | Model | Reference | Exported From | Supported Ailia Version | Date | Blog |
|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|
|[<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_f463b4f22ba3.png" width=128px>](deep_fashion/fashionai-key-points-detection/) | [fashionai-key-points-detection](/deep_fashion/fashionai-key-points-detection/) | [A Pytorch Implementation of Cascaded Pyramid Network for FashionAI Key Points Detection](https://github.com/gathierry/FashionAI-KeyPointsDetectionOfApparel) | Pytorch | 1.2.5 and later | Jun 2018 | |
|[<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_8a4d1ce0d207.jpg" width=128px>](deep_fashion/person-attributes-recognition-crossroad/) | [person-attributes-recognition-crossroad](/deep_fashion/person-attributes-recognition-crossroad/) | [person-attributes-recognition-crossroad-0230](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/intel/person-attributes-recognition-crossroad-0230) | Pytorch | 1.2.10 and later | Oct 2018 | |
|[<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_76e6b705b922.png" width=128px>](deep_fashion/clothing-detection/) | [clothing-detection](/deep_fashion/clothing-detection/) | [Clothing-Detection](https://github.com/simaiden/Clothing-Detection) | Pytorch | 1.2.1 and later | Jun 2019 | [EN](https://medium.com/axinc-ai/clothingdetection-a-machine-learning-model-for-detecting-clothing-dab99e1492eb) [JP](https://tech.ailia.ai/clothingdetection-%E6%9C%8D%E8%A3%85%E3%82%92%E6%A4%9C%E5%87%BA%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-e75cc8bc75b7) |
|[<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_b5b00cc9805c.png" width=128px>](deep_fashion/mmfashion/) | [mmfashion](/deep_fashion/mmfashion/) | [MMFashion](https://github.com/open-mmlab/mmfashion) | Pytorch | 1.2.5 and later | Nov 2019 | [EN](https://medium.com/axinc-ai/mmfashion-a-machine-learning-model-for-fashion-segmentation-a043fa972a2a) [JP](https://tech.ailia.ai/mmfashion-%E3%83%95%E3%82%A1%E3%83%83%E3%82%B7%E3%83%A7%E3%83%B3%E3%82%92%E3%82%BB%E3%82%B0%E3%83%A1%E3%83%B3%E3%83%86%E3%83%BC%E3%82%B7%E3%83%A7%E3%83%B3%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-c486af72fdb5) |
|[<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_2943e5d061ed.png" width=128px>](deep_fashion/mmfashion_tryon/) | [mmfashion_tryon](/deep_fashion/mmfashion_tryon/) | [MMFashion virtual try-on](https://github.com/open-mmlab/mmfashion) | Pytorch | 1.2.8 and later | Nov 2019 | |
|[<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_3af7b968bcb1.png" width=128px>](deep_fashion/mmfashion_retrieval/) | [mmfashion_retrieval](/deep_fashion/mmfashion_retrieval/) | [MMFashion In-Shop Clothes Retrieval](https://github.com/open-mmlab/mmfashion) | Pytorch | 1.2.5 and later | Nov 2019 | |

## Depth estimation

| | Model | Reference | Exported From | Supported Ailia Version | Date | Blog |
|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_713372d2fa43.png" width=256px>](depth_estimation/fcrn-depthprediction/) |[fcrn-depthprediction](depth_estimation/fcrn-depthprediction)| [Deeper Depth Prediction with Fully Convolutional Residual Networks](https://github.com/iro-cp/FCRN-DepthPrediction) | TensorFlow | 1.2.6 and later | Jun 2016 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_29aa4ef0ebbe.png" width=256px>](depth_estimation/monodepth2/) | [monodepth2](depth_estimation/monodepth2)| [Monocular depth estimation from a single image](https://github.com/nianticlabs/monodepth2) | Pytorch | 1.2.2 and later | Jun 2018 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_496fbd51a4ad.png" width=256px>](depth_estimation/fast-depth/) |[fast-depth](depth_estimation/fast-depth)| [ICRA 2019 "FastDepth: Fast Monocular Depth Estimation on Embedded Systems"](https://github.com/dwofk/fast-depth) | Pytorch | 1.2.5 and later | Mar 2019 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_3a6ba4d9cd13.png" width=256px>](depth_estimation/midas/) |[midas](depth_estimation/midas)| [Towards Robust Monocular Depth Estimation:<br/> Mixing Datasets for Zero-shot Cross-dataset Transfer](https://github.com/intel-isl/MiDaS) | Pytorch | 1.2.4 and later | Jul 2019 | [EN](https://medium.com/axinc-ai/midas-a-machine-learning-model-for-depth-estimation-e96119cc1a3c) [JP](https://tech.ailia.ai/midas-%E5%A5%A5%E8%A1%8C%E3%81%8D%E3%82%92%E6%8E%A8%E5%AE%9A%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-71e65a041e0f) |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_9572bbb029d8.png" width=256px>](depth_estimation/hitnet/) |[hitnet](depth_estimation/hitnet)| [ONNX-HITNET-Stereo-Depth-estimation](https://github.com/ibaiGorordo/ONNX-HITNET-Stereo-Depth-estimation) | Pytorch | 1.2.9 and later | Jul 2020 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_590eced2e825.png" width=256px>](depth_estimation/lap-depth/) |[lap-depth](depth_estimation/lap-depth)| [LapDepth-release](https://github.com/tjqansthd/LapDepth-release) | Pytorch | 1.2.9 and later | Jan 2021 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_f92a2365765e.png" width=256px>](depth_estimation/mobilestereonet/) |[mobilestereonet](depth_estimation/mobilestereonet)| [MobileStereoNet](https://github.com/cogsys-tuebingen/mobilestereonet) | Pytorch | 1.2.13 and later | Aug 2021 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_7d816aa7397d.png" width=256px>](depth_estimation/crestereo/) |[crestereo](depth_estimation/crestereo)| [ONNX-CREStereo-Depth-Estimation](https://github.com/ibaiGorordo/ONNX-CREStereo-Depth-Estimation) | Pytorch | 1.2.13 and later | Mar 2022 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_3df02805a07f.png" width=256px>](depth_estimation/zoe_depth/) |[zoe_depth](depth_estimation/zoe_depth)| [ZoeDepth](https://github.com/isl-org/ZoeDepth) | Pytorch | 1.3.0 and later | Feb 2023 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_793b658bcb34.png" width=256px>](depth_estimation/depth_anything/) |[depth_anything](depth_estimation/depth_anything)| [DepthAnything](https://github.com/LiheYoung/Depth-Anything) | Pytorch | 1.2.9 and later | Jan 2024 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_ac50dde22e6e.png" width=256px>](depth_estimation/depth_anything_v2/) |[depth_anything_v2](depth_estimation/depth_anything_v2)| [Depth Anything V2](https://github.com/DepthAnything/Depth-Anything-V2) | Pytorch | 1.2.16 and later | Jun 2024 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_e96363a182f9.png" width=256px>](depth_estimation/depth_pro/) |[depth_pro](depth_estimation/depth_pro)| [Depth Pro: Sharp Monocular Metric Depth in Less Than a Second](https://github.com/apple/ml-depth-pro) | Pytorch | 1.2.12 and later | Oct 2024 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_b58e9c07db90.png" width=256px>](depth_estimation/depth_anything_v3/) |[depth_anything_v3](depth_estimation/depth_anything_v3)| [Depth Anything V3](https://github.com/ByteDance-Seed/Depth-Anything-3) | Pytorch | 1.2.16 and later | Nov 2025 | |

## Diffusion

### Text to image

| | Model | Reference | Exported From | Supported Ailia Version | Date | Blog |
|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_c11929149642.png" width=128px>](diffusion/latent-diffusion-txt2img/) | [latent-diffusion-txt2img](/diffusion/latent-diffusion-txt2img/) | [Latent Diffusion - txt2img](https://github.com/CompVis/latent-diffusion) | Pytorch | 1.2.10 and later | Dec 2021 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_b4b220fbdb01.png" width=128px>](diffusion/stable-diffusion-txt2img/) | [stable-diffusion-txt2img](/diffusion/stable-diffusion-txt2img/) | [Stable Diffusion](https://github.com/CompVis/stable-diffusion) | Pytorch | 1.2.14 and later | Aug 2022 | [JP](https://tech.ailia.ai/stablediffusion-%E3%83%86%E3%82%AD%E3%82%B9%E3%83%88%E3%81%8B%E3%82%89%E7%94%BB%E5%83%8F%E3%82%92%E7%94%9F%E6%88%90%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-aa3676787a09) |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_23c1624f9e8e.png" width=128px>](diffusion/anything_v3/) | [anything_v3](/diffusion/anything_v3/) | [Linaqruf/anything-v3.0](https://huggingface.co/Linaqruf/anything-v3.0) | Pytorch | 1.5.0 and later | Nov 2022 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_a4c838ac585d.png" width=128px>](diffusion/control_net/) | [control_net](/diffusion/control_net/) | [ControlNet](https://github.com/lllyasviel/ControlNet) | Pytorch | 1.2.15 and later | Feb 2023 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_155ac42c481a.png" width=128px>](diffusion/latent-consistency-models/) | [latent-consistency-models](/diffusion/latent-consistency-models/) | [latent-consistency-models](https://github.com/luosiallen/latent-consistency-model) | Pytorch | 1.2.16 and later | Oct 2023 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_e131a3c178d2.png" width=128px>](diffusion/sd-turbo/) | [sd-turbo](/diffusion/sd-turbo/) | [Hugging Face - SD-Turbo](https://huggingface.co/stabilityai/sd-turbo) | Pytorch | 1.2.16 and later | Nov 2023 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_3440a8ec0a3f.png" width=128px>](diffusion/sdxl-turbo/) | [sdxl-turbo](/diffusion/sdxl-turbo/) | [Hugging Face - SDXL-Turbo](https://huggingface.co/stabilityai/sdxl-turbo) | Pytorch | 1.2.16 and later | Nov 2023 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_97b582412aa9.png" width=128px>](diffusion/depth_anything_controlnet/) | [depth_anything_controlnet](/diffusion/depth_anything_controlnet/) | [DepthAnything](https://github.com/LiheYoung/Depth-Anything) | Pytorch | 1.2.16 and later | Jan 2024 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_3950d9f12373.png" width=128px>](diffusion/latentsync/) | [latentsync](/diffusion/latentsync/) | [LatentSync](https://github.com/bytedance/LatentSync/tree/main) | Pytorch | 1.4.0 and later | Dec 2024 | |

### Text to audio

| | Model | Reference | Exported From | Supported Ailia Version | Date | Blog |
|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_45b2c60b368e.png" width=128px>](diffusion/riffusion/) | [riffusion](/diffusion/riffusion/) | [Riffusion](https://github.com/riffusion/riffusion) | Pytorch | 1.2.16 and later | Dec 2022 | |

### Others

| | Model | Reference | Exported From | Supported Ailia Version | Date | Blog |
|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_6f464f8a6722.png" width=128px>](diffusion/latent-diffusion-inpainting/) | [latent-diffusion-inpainting](/diffusion/latent-diffusion-inpainting/) | [Latent Diffusion - inpainting](https://github.com/CompVis/latent-diffusion) | Pytorch | 1.2.10 and later | Dec 2021 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_1ab9d320e69a.png" width=128px>](diffusion/latent-diffusion-superresolution/) | [latent-diffusion-superresolution](/diffusion/latent-diffusion-superresolution/) | [Latent Diffusion - Super-resolution](https://github.com/CompVis/latent-diffusion) | Pytorch | 1.2.10 and later | Dec 2021 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_bdec48e6a029.png" width=128px>](diffusion/daclip-sde/) | [DA-CLIP](/diffusion/daclip-sde/) | [DA-CLIP](https://github.com/Algolzw/daclip-uir) | Pytorch | 1.2.16 and later | Oct 2023 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_d06b942b5104.png" width=128px>](diffusion/marigold/) | [marigold](/diffusion/marigold/) | [Marigold: Repurposing Diffusion-Based Image Generators for Monocular Depth Estimation](https://github.com/prs-eth/Marigold) | Pytorch | 1.2.16 and later | Dec 2023 | |

## Face detection

| | Model | Reference | Exported From | Supported Ailia Version | Date | Blog |
|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_b16d8f6630fb.jpg" width=128px>](face_detection/mtcnn/)| [mtcnn](face_detection/mtcnn/) | [mtcnn](https://github.com/ipazc/mtcnn) | Keras | 1.2.10 and later | Apr 2016 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_2dcf2dba802e.png" width=128px>](face_detection/yolov1-face/) | [yolov1-face](/face_detection/yolov1-face/) | [YOLO-Face-detection](https://github.com/dannyblueliu/YOLO-Face-detection/) | Darknet | 1.1.0 and later | Mar 2017 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_96525c230153.png" width=128px>](face_detection/face-detection-adas/)| [face-detection-adas](face_detection/face-detection-adas/) | [face-detection-adas-0001](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/intel/face-detection-adas-0001) | OpenVINO | 1.2.5 and later | Oct 2018 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_e6c2f5cd2677.png" width=128px>](face_detection/retinaface/)| [retinaface](face_detection/retinaface/) | [RetinaFace: Single-stage Dense Face Localisation in the Wild.](https://github.com/biubug6/Pytorch_Retinaface) | Pytorch | 1.2.5 and later | May 2019 |  [JP](https://tech.ailia.ai/retinaface-%E9%AB%98%E8%A7%A3%E5%83%8F%E5%BA%A6%E3%81%AB%E5%AF%BE%E5%BF%9C%E3%81%97%E3%81%9F%E9%A1%94%E6%A4%9C%E5%87%BA%E3%83%A2%E3%83%87%E3%83%AB-37d0807581ce) |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_294fe200eed8.png" width=128px>](face_detection/blazeface/) |[blazeface](/face_detection/blazeface/)| [BlazeFace-PyTorch](https://github.com/hollance/BlazeFace-PyTorch) | Pytorch | 1.2.1 and later | Jul 2019 | [EN](https://medium.com/axinc-ai/blazeface-a-machine-learning-model-for-fast-detection-of-face-positions-and-key-points-5dcfb9429d72) [JP](https://tech.ailia.ai/blazeface-%E9%A1%94%E3%81%AE%E4%BD%8D%E7%BD%AE%E3%81%A8%E3%82%AD%E3%83%BC%E3%83%9D%E3%82%A4%E3%83%B3%E3%83%88%E3%82%92%E9%AB%98%E9%80%9F%E3%81%AB%E6%A4%9C%E5%87%BA%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-e851c348a32b) |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_7470bdd9c2b4.png" width=128px>](face_detection/yolov3-face/) | [yolov3-face](/face_detection/yolov3-face/) | [Face detection using keras-yolov3](https://github.com/ailia-ai/yolov3-face) | Keras | 1.2.1 and later | Dec 2019 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_a91ba6d0dafe.png" width=128px>](face_detection/face-mask-detection/)| [face-mask-detection](/face_detection/face-mask-detection/) | [Face detection using keras-yolov3](https://github.com/ailia-ai/yolov3-face) | Keras | 1.2.1 and later | Dec 2019 | [EN](https://medium.com/axinc-ai/facemaskdetection-a-machine-learning-model-to-determine-if-a-person-is-wearing-a-mask-e5a581ea8af9) [JP](https://tech.ailia.ai/facemaskdetection-%E3%83%9E%E3%82%B9%E3%82%AF%E3%82%92%E4%BB%98%E3%81%91%E3%81%A6%E3%81%84%E3%82%8B%E3%81%8B%E3%82%92%E5%88%A4%E5%AE%9A%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-b06793f79a97) |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_625529d380f1.png" width=128px>](face_detection/dbface/)| [dbface](face_detection/dbface/) | [DBFace : real-time, single-stage detector for face detection, <br/>with faster speed and higher accuracy](https://github.com/dlunion/DBFace) | Pytorch | 1.2.2 and later | Mar 2020 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_46b2882d8f8f.png" width=128px>](face_detection/anime-face-detector/)| [anime-face-detector](face_detection/anime-face-detector/) | [Anime Face Detector](https://github.com/hysts/anime-face-detector) | Pytorch | 1.2.6 and later | Oct 2021 | |

## Face identification

| | Model | Reference | Exported From | Supported Ailia Version | Date | Blog |
|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_87f564476059.jpg" width=128px>](face_identification/facenet_pytorch/) |[facenet_pytorch](/face_identification/facenet_pytorch) | [Face Recognition Using Pytorch](https://github.com/timesler/facenet-pytorch) | Pytorch | 1.2.6 and later | Mar 2015 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_ed81b6200bb0.png" width=128px>](face_identification/insightface/)|[insightface](/face_identification/insightface) | [InsightFace: 2D and 3D Face Analysis Project](https://github.com/deepinsight/insightface) | Pytorch | 1.2.5 and later | Sep 2017 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_10f9b218a959.jpg">](face_identification/vggface2/) |[vggface2](/face_identification/vggface2) | [VGGFace2 Dataset for Face Recognition](https://github.com/ox-vgg/vgg_face2) | Caffe | 1.1.0 and later | Oct 2017 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_98ed28fada81.jpg" width=128px>](face_identification/arcface/)|[arcface](/face_identification/arcface) | [pytorch implement of arcface](https://github.com/ronghuaiyang/arcface-pytorch) | Pytorch | 1.2.1 and later | Jan 2018 | [EN](https://medium.com/axinc-ai/arcface-a-machine-learning-model-for-face-recognition-5f743cdac6fa) [JP](https://tech.ailia.ai/arcface-%E9%A1%94%E8%AA%8D%E8%A8%BC%E3%82%92%E8%A1%8C%E3%81%86%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-cbb0e127bd0a) |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_98ed28fada81.jpg" width=128px>](face_identification/cosface/) |[cosface](/face_identification/cosface) | [Pytorch implementation of CosFace](https://github.com/MuggleWang/CosFace_pytorch) | Pytorch | 1.2.10 and later | Jan 2018 | |

## Face recognition

### Age gender estimation

| | Model | Reference | Exported From | Supported Ailia Version | Date | Blog |
|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_c7d12fb8f3c7.png">](face_recognition/face_classification/) |[face_classification](/face_recognition/face_classification) | [Real-time face detection and emotion/gender classification](https://github.com/oarriaga/face_classification) | Keras | 1.1.0 and later | Oct 2017 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_87ccecff5d87.jpg" width=128px>](face_recognition/age-gender-recognition-retail/) | [age-gender-recognition-retail](/face_recognition/age-gender-recognition-retail/) | [age-gender-recognition-retail-0013](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/intel/age-gender-recognition-retail-0013) | OpenVINO | 1.2.5 and later | May 2018 | [EN](https://medium.com/axinc-ai/agegenderrecognitionretail-a-machine-learning-model-to-identify-age-and-gender-8506510414b) [JP](https://tech.ailia.ai/agegenderrecognitionretail-%E5%B9%B4%E9%BD%A2%E3%81%A8%E6%80%A7%E5%88%A5%E3%82%92%E4%BA%88%E6%B8%AC%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-3632935d19ec) |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_d1bf195813ee.png" width=128px>](face_recognition/mivolo/) | [mivolo](/face_recognition/mivolo/) | [MiVOLO: Multi-input Transformer for Age and Gender Estimation](https://github.com/WildChlamydia/MiVOLO) | Pytorch | 1.2.13 and later | Jul 2023 | [JP](https://tech.ailia.ai/multi-input-transformer-for-age-and-gender-estimation-%E5%B9%B4%E9%BD%A2%E3%81%A8%E6%80%A7%E5%88%A5%E3%82%92%E6%8E%A8%E5%AE%9A%E3%81%99%E3%82%8B%E3%81%9F%E3%82%81%E3%81%AE%E3%83%9E%E3%83%AB%E3%83%81%E3%82%A4%E3%83%B3%E3%83%97%E3%83%83%E3%83%88%E3%83%88%E3%83%A9%E3%83%B3%E3%82%B9%E3%83%95%E3%82%A9%E3%83%BC%E3%83%9E%E3%83%BC%E3%83%A2%E3%83%87%E3%83%AB-8b77aa8c6dbc) |

### Emotion recognition

| | Model | Reference | Exported From | Supported Ailia Version | Date | Blog |
|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_7978054e61c6.png" width=128px>](face_recognition/ferplus/) | [ferplus](/face_recognition/ferplus/) | [FER+](https://github.com/microsoft/FERPlus) | CNTK | 1.2.2 and later | Aug 2016 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_c7d12fb8f3c7.png">](face_recognition/hsemotion/) | [hsemotion](/face_recognition/hsemotion/) | [HSEmotion (High-Speed face Emotion recognition) library](https://github.com/HSE-asavchenko/face-emotion-recognition) | Pytorch | 1.2.5 and later | Mar 2021 | |

### Gaze estimation

| | Model | Reference | Exported From | Supported Ailia Version | Date | Blog |
|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_5a80fdcb8449.png" width=128px>](face_recognition/gazeml/) | [gazeml](/face_recognition/gazeml/) | [A deep learning framework based on Tensorflow <br/>for the training of high performance gaze estimation](https://github.com/swook/GazeML) | TensorFlow | 1.2.0 and later | May 2018 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_12874829cd01.png" width=128px>](face_recognition/mediapipe_iris/) | [mediapipe_iris](/face_recognition/mediapipe_iris/) | [irislandmarks.pytorch](https://github.com/cedriclmenard/irislandmarks.pytorch) | Pytorch | 1.2.2 and later | Jun 2020 | [EN](https://medium.com/axinc-ai/mediapipe-iris-detecting-key-points-in-the-eye-637f5c1e728e) [JP](https://tech.ailia.ai/mediapipe-iris-%E7%9B%AE%E3%81%AE%E3%82%AD%E3%83%BC%E3%83%9D%E3%82%A4%E3%83%B3%E3%83%88%E3%82%92%E6%A4%9C%E5%87%BA%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-a4742f143551) |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_fb093360cfe3.png" width=128px>](face_recognition/gazelle/) | [gazelle](/face_recognition/gazelle/) | [gazelle](https://github.com/fkryan/gazelle) | Pytorch | 1.2.16 and later | Dec 2024 | [JP](https://tech.ailia.ai/gaze-lle-%E5%A4%A7%E8%A6%8F%E6%A8%A1%E3%83%87%E3%83%BC%E3%82%BF%E3%81%A7%E5%AD%A6%E7%BF%92%E3%81%95%E3%82%8C%E3%81%9F%E5%9F%BA%E7%9B%A4%E3%83%A2%E3%83%87%E3%83%AB%E3%81%AB%E3%82%88%E3%82%8B%E5%8A%B9%E7%8E%87%E7%9A%84%E3%81%AA%E8%A6%96%E7%B7%9A%E6%8E%A8%E5%AE%9A%E3%83%A2%E3%83%87%E3%83%AB-7176706a0e4e) |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_101da6797a05.png" width=128px>](face_recognition/ax_gaze_estimation/) | [ax_gaze_estimation](/face_recognition/ax_gaze_estimation/) | ax Gaze Estimation | Pytorch | 1.2.2 and later |  | [EN](https://medium.com/axinc-ai/axgazeestimation-a-machine-learning-model-for-estimating-gaze-c9648042d637) [JP](https://tech.ailia.ai/axgazeestimation-%E8%A6%96%E7%B7%9A%E3%82%92%E6%8E%A8%E5%AE%9A%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-8446a791968) |

### Head pose estimation

| | Model | Reference | Exported From | Supported Ailia Version | Date | Blog |
|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_4e5017cf7593.png" width=128px>](face_recognition/hopenet/) | [hopenet](/face_recognition/hopenet/) | [deep-head-pose](https://github.com/natanielruiz/deep-head-pose) | Pytorch | 1.2.2 and later | Oct 2017 | [EN](https://medium.com/axinc-ai/hope-net-a-machine-learning-model-for-estimating-face-orientation-83d5af26a513) [JP](https://tech.ailia.ai/hope-net-%E9%A1%94%E3%81%AE%E5%90%91%E3%81%8D%E3%82%92%E6%8E%A8%E5%AE%9A%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-6db21979f935) |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_d41ea9246564.png" width=128px>](face_recognition/6d_repnet/) | [6d_repnet](/face_recognition/6d_repnet/) | [6D Rotation Representation for Unconstrained Head Pose Estimation (Pytorch)](https://github.com/thohemp/6DRepNet) | Pytorch | 1.2.6 and later | Feb 2022 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_97e6d902b4ce.png" width=128px>](face_recognition/l2cs_net/) | [L2CS_Net](/face_recognition/l2cs_net/) | [L2CS_Net](https://github.com/Ahmednull/L2CS-Net) | Pytorch | 1.2.9 and later | Mar 2022 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_d41ea9246564.png" width=128px>](face_recognition/6d_repnet_360/) | [6d_repnet_360](/face_recognition/6d_repnet_360/) | [Toward Robust and Unconstrained Full Range of Rotation Head Pose Estimation](https://github.com/thohemp/6DRepNet360) | Pytorch | 1.2.9 and later | Sep 2023 | |

### Keypoint detection

| | Model | Reference | Exported From | Supported Ailia Version | Date | Blog |
|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_2e1eccfa3194.png" width=128px>](face_recognition/face_alignment/) |[face_alignment](/face_recognition/face_alignment/)| [2D and 3D Face alignment library build using pytorch](https://github.com/1adrianb/face-alignment) | Pytorch | 1.2.1 and later | Mar 2017 | [EN](https://medium.com/axinc-ai/facealignment-a-machine-learning-model-for-recognizing-key-points-on-a-face-956f5e796efa)
[JP](https:\u002F\u002Ftech.ailia.ai\u002Ffacealignment-%E9%A1%94%E3%81%AE%E3%82%AD%E3%83%BC%E3%83%9D%E3%82%A4%E3%83%B3%E3%83%88%E3%82%92%E8%AA%8D%E8%AD%98%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-a46654c4da14) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_be2782d43e41.png\" width=128px>](face_recognition\u002Fprnet\u002F) |[prnet](\u002Fface_recognition\u002Fprnet)| [Joint 3D Face Reconstruction and Dense Alignment \u003Cbr\u002F>with Position Map Regression Network](https:\u002F\u002Fgithub.com\u002FYadiraF\u002FPRNet) | TensorFlow | 1.2.2 and later | Mar 2018 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_de46150732f7.png\" width=128px>](face_recognition\u002Ffacemesh\u002F) | [facemesh](\u002Fface_recognition\u002Ffacemesh\u002F) | [facemesh.pytorch](https:\u002F\u002Fgithub.com\u002Fthepowerfuldeez\u002Ffacemesh.pytorch) | Pytorch | 1.2.2 and later | Jul 2019 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Ffacemesh-detecting-key-points-on-faces-in-real-time-977c03f1bab) [JP](https:\u002F\u002Ftech.ailia.ai\u002Ffacemesh-%E3%83%AA%E3%82%A2%E3%83%AB%E3%82%BF%E3%82%A4%E3%83%A0%E3%81%A7%E9%A1%94%E3%81%AE%E3%82%AD%E3%83%BC%E3%83%9D%E3%82%A4%E3%83%B3%E3%83%88%E3%82%92%E6%A4%9C%E5%87%BA%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-bf223a50b7d6) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_27cc434edfb4.png\" width=128px>](face_recognition\u002Ffacial_feature\u002F) |[facial_feature](\u002Fface_recognition\u002Ffacial_feature\u002F)|[kaggle-facial-keypoints](https:\u002F\u002Fgithub.com\u002Failia-ai\u002Fkaggle-facial-keypoints)|Pytorch| 1.2.0 and later | Oct 2019 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_2b66ac8ad04a.png\" 
width=128px>](face_recognition\u002F3ddfa\u002F) | [3ddfa](\u002Fface_recognition\u002F3ddfa\u002F) | [Towards Fast, Accurate and Stable 3D Dense Face Alignment](https:\u002F\u002Fgithub.com\u002Fcleardusk\u002F3DDFA_V2) | Pytorch | 1.2.10 and later | Sep 2020 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_659db680dddb.png\" width=128px>](face_recognition\u002Ffacemesh_v2\u002F) | [facemesh_v2](\u002Fface_recognition\u002Ffacemesh_v2\u002F) | [MediaPipe Face landmark detection](https:\u002F\u002Fdevelopers.google.com\u002Fmediapipe\u002Fsolutions\u002Fvision\u002Fface_landmarker) | Pytorch | 1.2.9 and later | May 2023 | [JP](https:\u002F\u002Ftech.ailia.ai\u002Ffacemeshv2-blendshape%E3%82%82%E8%A8%88%E7%AE%97%E5%8F%AF%E8%83%BD%E3%81%AA%E9%A1%94%E3%81%AE%E3%82%AD%E3%83%BC%E3%83%9D%E3%82%A4%E3%83%B3%E3%83%88%E6%A4%9C%E5%87%BA%E3%83%A2%E3%83%87%E3%83%AB-3198898dccdd) |\n\n### Others\n\n| | Model | Reference | Exported From | Supported Ailia Version | Date | Blog |\n|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_48ed68136552.png\" width=128px>](face_recognition\u002Fface-anti-spoofing\u002F) | [face-anti-spoofing](\u002Fface_recognition\u002Fface-anti-spoofing\u002F) | [Lightweight Face Anti Spoofing](https:\u002F\u002Fgithub.com\u002Fkprokofi\u002Flight-weight-face-anti-spoofing) | Pytorch | 1.2.5 and later | Jul 2020 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Ffaceantispoofing-a-machine-learning-model-to-determine-if-a-face-is-real-b6c30f12abb6) [JP](https:\u002F\u002Ftech.ailia.ai\u002Ffaceantispoofing-%E6%9C%AC%E7%89%A9%E3%81%AE%E9%A1%94%E3%81%8B%E3%81%A9%E3%81%86%E3%81%8B%E3%82%92%E5%88%A4%E5%AE%9A%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-c7092c1dde43) |\n| [\u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_6d41a0d0a2e4.jpg\" width=128px>](face_recognition\u002Fax_facial_features) | [ax_facial_features](\u002Fface_recognition\u002Fax_facial_features\u002F)| ax Facial Features | Pytorch | 1.2.5 and later |  |[EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fax-facial-features-eyelids-eyelashes-and-facial-hair-classification-9b3b12f1d6a1) |\n\n## Face restoration\n\n| | Model | Reference | Exported From | Supported Ailia Version | Date | Blog |\n|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_a1fae1f7e2af.png\" width=128px>](face_restoration\u002Fgfpgan\u002F) | [gfpgan](\u002Fface_restoration\u002Fgfpgan)| [GFP-GAN: Towards Real-World Blind Face Restoration with Generative Facial Prior](https:\u002F\u002Fgithub.com\u002FTencentARC\u002FGFPGAN)| Pytorch | 1.2.10 and later | Jan 2021 | [JP](https:\u002F\u002Ftech.ailia.ai\u002Fgfpgan-%E9%A1%94%E7%94%BB%E5%83%8F%E3%82%92%E9%AB%98%E7%94%BB%E8%B3%AA%E5%8C%96%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-547acd717086) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_a39e49459459.png\" width=128px>](face_restoration\u002Fcodeformer\u002F) | [codeformer](\u002Fface_restoration\u002Fcodeformer\u002F) | [CodeFormer: Towards Robust Blind Face Restoration with Codebook Lookup Transformer](https:\u002F\u002Fgithub.com\u002Fsczhou\u002FCodeFormer) | Pytorch | 1.2.9 and later | Jun 2022 | |\n\n## Face swapping\n\n| | Model | Reference | Exported From | Supported Ailia Version | Date | Blog |\n|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [\u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_3a545679f95c.png\" width=128px>](face_swapping\u002Fdeepfacelive\u002F) | [deepfacelive](\u002Fface_swapping\u002Fdeepfacelive\u002F) | [DeepFaceLive](https:\u002F\u002Fgithub.com\u002Fiperov\u002FDeepFaceLive) | ONNX Runtime | 1.2.10 and later | Dec 2020 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_56272b36e292.png\" width=128px>](face_swapping\u002Fsber-swap\u002F) | [sber-swap](\u002Fface_swapping\u002Fsber-swap\u002F) | [SberSwap](https:\u002F\u002Fgithub.com\u002Fai-forever\u002Fsber-swap) | Pytorch | 1.2.12 and later | Feb 2022 | [JP](https:\u002F\u002Ftech.ailia.ai\u002Fsberswap-ai%E3%81%AB%E3%82%88%E3%82%8B%E9%AB%98%E7%B2%BE%E5%BA%A6%E3%81%AAfaceswap-bddae3b8ff84) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_29a63a621bad.png\" width=128px>](face_swapping\u002Ffacefusion\u002F) | [facefusion](\u002Fface_swapping\u002Ffacefusion\u002F) | [FaceFusion](https:\u002F\u002Fgithub.com\u002Ffacefusion\u002Ffacefusion) | ONNX Runtime | 1.2.10 and later | Aug 2023 | |\n\n## Frame Interpolation\n\n| | Model | Reference | Exported From | Supported Ailia Version | Date | Blog |\n|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_353a9c0b31ad.png\" width=128px>](frame_interpolation\u002Fcain\u002F) | [cain](\u002Fframe_interpolation\u002Fcain\u002F) | [Channel Attention Is All You Need for Video Frame Interpolation](https:\u002F\u002Fgithub.com\u002Fmyungsub\u002FCAIN) | Pytorch | 1.2.5 and later | Nov 2019 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_44dcf4bb670f.png\" width=128px>](frame_interpolation\u002Frife\u002F) | 
[rife](\u002Fframe_interpolation\u002Frife\u002F) | [Real-Time Intermediate Flow Estimation for Video Frame Interpolation](https:\u002F\u002Fgithub.com\u002Fmegvii-research\u002FECCV2022-RIFE) | Pytorch | 1.2.13 and later | Nov 2020 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_9e15526b3ba8.png\" width=128px>](frame_interpolation\u002Fflavr\u002F) | [flavr](\u002Fframe_interpolation\u002Fflavr\u002F) | [FLAVR: Flow-Agnostic Video Representations for Fast Frame Interpolation](https:\u002F\u002Fgithub.com\u002Ftarun005\u002FFLAVR) | Pytorch | 1.2.7 and later | Dec 2020 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fflavr-a-machine-learning-model-to-increase-video-frame-rate-758fe8132818) [JP](https:\u002F\u002Ftech.ailia.ai\u002Fflavr-%E5%8B%95%E7%94%BB%E3%81%AE%E3%83%95%E3%83%AC%E3%83%BC%E3%83%A0%E3%83%AC%E3%83%BC%E3%83%88%E3%82%92%E4%B8%8A%E3%81%92%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-6a18211445da) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_a29846f20d26.png\" width=128px>](frame_interpolation\u002Ffilm\u002F) | [film](\u002Fframe_interpolation\u002Ffilm\u002F) | [FILM: Frame Interpolation for Large Motion](https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Fframe-interpolation) | TensorFlow | 1.2.10 and later | Feb 2022 | |\n\n## Generative adversarial networks\n\n| | Model | Reference | Exported From | Supported Ailia Version | Date | Blog |\n|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_2e6de472d666.png\">](generative_adversarial_networks\u002Fpytorch-gan\u002F) |[pytorch-gan](\u002Fgenerative_adversarial_networks\u002Fpytorch-gan) | [Code repo for the Pytorch GAN Zoo project (used to train this 
model)](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fpytorch_GAN_zoo)| Pytorch | 1.2.4 and later | Oct 2017 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_a9781c4aee30.jpg\" width=128px>](generative_adversarial_networks\u002Flipgan\u002F) | [lipgan](\u002Fgenerative_adversarial_networks\u002Flipgan\u002F) | [LipGAN](https:\u002F\u002Fgithub.com\u002FRudrabha\u002FLipGAN) | Keras | 1.2.15 and later | Oct 2019 | [JP](https:\u002F\u002Ftech.ailia.ai\u002Flipgan-%E3%83%AA%E3%83%83%E3%83%97%E3%82%B7%E3%83%B3%E3%82%AF%E5%8B%95%E7%94%BB%E3%82%92%E7%94%9F%E6%88%90%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-57511508eaff) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_f33f3e1b9b32.png\" width=128px>](generative_adversarial_networks\u002Fcouncil-gan\u002F) | [council-gan](\u002Fgenerative_adversarial_networks\u002Fcouncil-gan)| [Council-GAN](https:\u002F\u002Fgithub.com\u002FOnr\u002FCouncil-GAN)| Pytorch | 1.2.4 and later | Nov 2019 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_fa82f184b3b4.jpg\" width=128px>](generative_adversarial_networks\u002Fsam\u002F) | [sam](\u002Fgenerative_adversarial_networks\u002Fsam)| [Age Transformation Using a Style-Based Regression Model](https:\u002F\u002Fgithub.com\u002Fyuval-alaluf\u002FSAM)| Pytorch | 1.2.9 and later | Feb 2021 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_0570248a7ae6.png\" width=128px>](generative_adversarial_networks\u002Fencoder4editing\u002F) | [encoder4editing](\u002Fgenerative_adversarial_networks\u002Fencoder4editing\u002F) | [Designing an Encoder for StyleGAN Image Manipulation](https:\u002F\u002Fgithub.com\u002Fomertov\u002Fencoder4editing) | Pytorch | 1.2.10 and later | Feb 2021 | |\n| [\u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_6d73f608a3c6.jpg\" width=128px>](generative_adversarial_networks\u002Frestyle-encoder\u002F) | [restyle-encoder](\u002Fgenerative_adversarial_networks\u002Frestyle-encoder)| [ReStyle](https:\u002F\u002Fgithub.com\u002Fyuval-alaluf\u002Frestyle-encoder)| Pytorch | 1.2.9 and later | Apr 2021 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_c7da9f919346.png\" width=128px>](generative_adversarial_networks\u002Fsadtalker\u002F) | [SadTalker](generative_adversarial_networks\u002Fsadtalker\u002F) | [SadTalker](https:\u002F\u002Fgithub.com\u002FOpenTalker\u002FSadTalker) | Pytorch | 1.5.0 and later | Nov 2022 |  |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_5a77d7f7725c.jpg\" width=128px>](generative_adversarial_networks\u002Flive_portrait\u002F) | [live_portrait](\u002Fgenerative_adversarial_networks\u002Flive_portrait)| [LivePortrait](https:\u002F\u002Fgithub.com\u002FKwaiVGI\u002FLivePortrait) | Pytorch | 1.5.0 and later | Jul 2024 | [JP](https:\u002F\u002Ftech.ailia.ai\u002Flive-portrait-1%E6%9E%9A%E3%81%AE%E7%94%BB%E5%83%8F%E3%82%92%E5%8B%95%E3%81%8B%E3%81%9B%E3%82%8Bai%E3%83%A2%E3%83%87%E3%83%AB-8eaa7d3eb683)|\n\n## Hand detection\n\n| | Model | Reference | Exported From | Supported Ailia Version | Date | Blog |\n|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_faa6162271b8.jpg\" width=128px>](hand_detection\u002Fhand_detection_pytorch\u002F) | [hand_detection_pytorch](\u002Fhand_detection\u002Fhand_detection_pytorch\u002F) | [hand-detection.PyTorch](https:\u002F\u002Fgithub.com\u002Fzllrunning\u002Fhand-detection.PyTorch) | Pytorch | 1.2.2 and later | Mar 2019 | |\n| [\u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_fc905059ef3b.png\" width=128px>](hand_detection\u002Fyolov3-hand\u002F) | [yolov3-hand](\u002Fhand_detection\u002Fyolov3-hand\u002F) | [Hand detection branch of Face detection using keras-yolov3](https:\u002F\u002Fgithub.com\u002Failia-ai\u002Fyolov3-face\u002Ftree\u002Fhand_detection) | Keras | 1.2.1 and later | Dec 2019 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_5d809a0ebf02.png\" width=128px>](hand_detection\u002Fblazepalm\u002F) |[blazepalm](\u002Fhand_detection\u002Fblazepalm\u002F) | [MediaPipePyTorch](https:\u002F\u002Fgithub.com\u002Fzmurez\u002FMediaPipePyTorch) | Pytorch | 1.2.5 and later | Jun 2020 | |\n\n## Hand recognition\n\n| | Model | Reference | Exported From | Supported Ailia Version | Date | Blog |\n|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_1de0bf3345ca.png\" width=128px>](hand_recognition\u002Fhand3d\u002F) |[hand3d](\u002Fhand_recognition\u002Fhand3d\u002F) | [ColorHandPose3D network](https:\u002F\u002Fgithub.com\u002Flmb-freiburg\u002Fhand3d) | TensorFlow | 1.2.5 and later | May 2017 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_ce17bf62db89.png\" width=128px>](hand_recognition\u002Fv2v-posenet\u002F) |[v2v-posenet](\u002Fhand_recognition\u002Fv2v-posenet\u002F) | [V2V-PoseNet](https:\u002F\u002Fgithub.com\u002Fmks0601\u002FV2V-PoseNet_RELEASE) | Pytorch | 1.2.6 and later | Nov 2017 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_d2ef872b2106.png\" width=128px>](hand_recognition\u002Fminimal-hand\u002F) |[minimal-hand](\u002Fhand_recognition\u002Fminimal-hand\u002F) | [Minimal 
Hand](https:\u002F\u002Fgithub.com\u002FCalciferZh\u002Fminimal-hand) | TensorFlow | 1.2.8 and later | Mar 2020 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_d739bebccb35.png\" width=128px>](hand_recognition\u002Fblazehand\u002F) |[blazehand](\u002Fhand_recognition\u002Fblazehand\u002F) | [MediaPipePyTorch](https:\u002F\u002Fgithub.com\u002Fzmurez\u002FMediaPipePyTorch) | Pytorch | 1.2.5 and later | Jun 2020 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fblazehand-a-machine-learning-model-for-detecting-hand-key-points-c3943b82739a) [JP](https:\u002F\u002Ftech.ailia.ai\u002Fblazehand-%E6%89%8B%E3%81%AE%E3%82%AD%E3%83%BC%E3%83%9D%E3%82%A4%E3%83%B3%E3%83%88%E3%82%92%E6%A4%9C%E5%87%BA%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-e84e011ef7bc) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_9def894a6b18.png\" width=128px>](hand_recognition\u002Fhands_segmentation_pytorch\u002F) |[hands_segmentation_pytorch](\u002Fhand_recognition\u002Fhands_segmentation_pytorch\u002F) | [hands-segmentation-pytorch](https:\u002F\u002Fgithub.com\u002Fguglielmocamporese\u002Fhands-segmentation-pytorch) | Pytorch | 1.2.10 and later | Apr 2021 | |\n\n## Image captioning\n\n| | Model | Reference | Exported From | Supported Ailia Version | Date | Blog |\n|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_8f0be58afa92.jpg\" width=128px>](image_captioning\u002Fillustration2vec\u002F) | [illustration2vec](\u002Fimage_captioning\u002Fillustration2vec\u002F)|[Illustration2Vec](https:\u002F\u002Fgithub.com\u002Frezoo\u002Fillustration2vec) | Caffe | 1.2.2 and later | Nov 2015 | |\n| [\u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_68bf1f1c9255.jpg\" width=128px>](image_captioning\u002Fimage_captioning_pytorch\u002F) | [image_captioning_pytorch](\u002Fimage_captioning\u002Fimage_captioning_pytorch\u002F)|[Image Captioning pytorch](https:\u002F\u002Fgithub.com\u002Fruotianluo\u002FImageCaptioning.pytorch) | Pytorch | 1.2.5 and later | Dec 2016 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fimage-captioning-pytorch-a-machine-learning-model-for-describing-images-d27562b6f15b) [JP](https:\u002F\u002Ftech.ailia.ai\u002Fimage-captioning-pytorch-%E7%94%BB%E5%83%8F%E3%82%92%E8%AA%AC%E6%98%8E%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-e690982af19) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_88e3648135a0.png\" width=128px>](image_captioning\u002Fblip2\u002F) | [blip2](\u002Fimage_captioning\u002Fblip2\u002F)|[Hugging Face - BLIP-2](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002FSalesforce\u002FBLIP2) | Pytorch | 1.2.16 and later | Jan 2023 | |\n\n## Image classification\n\n### CNN\n\n| | Model | Reference | Exported From | Supported Ailia Version | Date | Blog |\n|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_c4173ff89833.jpg\" width=128px>](image_classification\u002Falexnet\u002F) | [alexnet](\u002Fimage_classification\u002Falexnet\u002F)|[AlexNet PyTorch](https:\u002F\u002Fpytorch.org\u002Fhub\u002Fpytorch_vision_alexnet\u002F)|Pytorch| 1.2.5 and later | Sep 2012 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_c4173ff89833.jpg\" width=128px>](image_classification\u002Fvgg16\u002F) | [vgg16](\u002Fimage_classification\u002Fvgg16\u002F) |[Very Deep Convolutional Networks for Large-Scale Image 
Recognition]( https:\u002F\u002Farxiv.org\u002Fabs\u002F1409.1556 )|Keras| 1.1.0 and later| Sep 2014 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_7141d1762f0f.jpg\" width=128px>](image_classification\u002Fgooglenet\u002F) | [googlenet](\u002Fimage_classification\u002Fgooglenet\u002F) |[Going Deeper with Convolutions]( https:\u002F\u002Farxiv.org\u002Fabs\u002F1409.4842 )|Pytorch| 1.2.0 and later| Sep 2014 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_7141d1762f0f.jpg\" width=128px>](image_classification\u002Fresnet18\u002F) | [resnet18](\u002Fimage_classification\u002Fresnet18\u002F) | [ResNet18]( https:\u002F\u002Fpytorch.org\u002Fvision\u002Fmain\u002Fgenerated\u002Ftorchvision.models.resnet18.html) | Pytorch | 1.2.8 and later | Dec 2015 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_7141d1762f0f.jpg\" width=128px>](image_classification\u002Fresnet50\u002F) | [resnet50](\u002Fimage_classification\u002Fresnet50\u002F) | [Deep Residual Learning for Image Recognition]( https:\u002F\u002Fgithub.com\u002FKaimingHe\u002Fdeep-residual-networks) | Chainer | 1.2.0 and later | Dec 2015 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_c4173ff89833.jpg\" width=128px>](image_classification\u002Finceptionv3\u002F) | [inceptionv3](\u002Fimage_classification\u002Finceptionv3\u002F)|[Rethinking the Inception Architecture for Computer Vision](http:\u002F\u002Farxiv.org\u002Fabs\u002F1512.00567)|Pytorch| 1.2.0 and later | Dec 2015 | [JP](https:\u002F\u002Ftech.ailia.ai\u002Failia-sdk-%E3%83%A2%E3%83%87%E3%83%AB%E7%B4%B9%E4%BB%8B-inceptionv3-b39dd43f285d) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_c4173ff89833.jpg\" width=128px>](image_classification\u002Finceptionv4\u002F) | 
[inceptionv4](\u002Fimage_classification\u002Finceptionv4\u002F)|[Keras Inception-V4](https:\u002F\u002Fgithub.com\u002Fkentsommer\u002Fkeras-inceptionV4)|Keras| 1.2.5 and later | Feb 2016 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_dd0a675c5a0f.jpg\" width=128px>](image_classification\u002Fwide_resnet50\u002F)| [wide_resnet50](\u002Fimage_classification\u002Fwide_resnet50\u002F)|[Wide Resnet](https:\u002F\u002Fpytorch.org\u002Fhub\u002Fpytorch_vision_wide_resnet\u002F)|Pytorch| 1.2.5 and later | May 2016 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_c4173ff89833.jpg\" width=128px>](image_classification\u002Fmobilenetv2\u002F) | [mobilenetv2](\u002Fimage_classification\u002Fmobilenetv2\u002F)|[PyTorch Implementation of MobileNet V2](https:\u002F\u002Fgithub.com\u002Fd-li14\u002Fmobilenetv2.pytorch)|Pytorch| 1.2.0 and later | Jan 2018 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_c4173ff89833.jpg\" width=128px>](image_classification\u002Fmobilenetv3\u002F) | [mobilenetv3](\u002Fimage_classification\u002Fmobilenetv3\u002F)|[PyTorch Implementation of MobileNet V3](https:\u002F\u002Fgithub.com\u002Fd-li14\u002Fmobilenetv3.pytorch)|Pytorch| 1.2.1 and later | May 2019 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_c4173ff89833.jpg\" width=128px>](image_classification\u002Fefficientnet\u002F)| [efficientnet](\u002Fimage_classification\u002Fefficientnet\u002F)|[A PyTorch implementation of EfficientNet]( https:\u002F\u002Fgithub.com\u002Flukemelas\u002FEfficientNet-PyTorch)|Pytorch| 1.2.3 and later | May 2019 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_c4173ff89833.jpg\" width=128px>](image_classification\u002Fefficientnetv2\u002F)| 
[efficientnetv2](\u002Fimage_classification\u002Fefficientnetv2\u002F)|[EfficientNetV2]( https:\u002F\u002Fgithub.com\u002Fgoogle\u002Fautoml\u002Ftree\u002Fmaster\u002Fefficientnetv2 )|Pytorch| 1.2.4 and later | Apr 2021 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_dd0a675c5a0f.jpg\" width=128px>](image_classification\u002Fimagenet21k\u002F) | [imagenet21k](\u002Fimage_classification\u002Fimagenet21k\u002F) | [ImageNet21K](https:\u002F\u002Fgithub.com\u002FAlibaba-MIIL\u002FImageNet21K) | Pytorch | 1.2.11 and later | Apr 2021 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_0405447bf67d.jpg\" width=128px>](image_classification\u002Fmlp_mixer\u002F)| [mlp_mixer](\u002Fimage_classification\u002Fmlp_mixer\u002F)|[MLP-Mixer](https:\u002F\u002Fgithub.com\u002Fjeonsworld\u002FMLP-Mixer-Pytorch)|Pytorch| 1.2.9 and later | May 2021 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_7141d1762f0f.jpg\" width=128px>](image_classification\u002Fvolo\u002F) | [volo](\u002Fimage_classification\u002Fvolo\u002F) | [VOLO: Vision Outlooker for Visual Recognition](https:\u002F\u002Fgithub.com\u002Fsail-sg\u002Fvolo) | Pytorch | 1.2.9 and later | Jun 2021 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_941efb51d361.jpg\" width=128px>](image_classification\u002Fconvnext\u002F) | [convnext](\u002Fimage_classification\u002Fconvnext\u002F)|[A PyTorch implementation of ConvNeXt](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002FConvNeXt) | Pytorch | 1.2.5 and later | Jan 2022 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_c4173ff89833.jpg\" width=128px>](image_classification\u002Fmobileone\u002F) | [mobileone](\u002Fimage_classification\u002Fmobileone\u002F)|[A PyTorch implementation of 
MobileOne](https:\u002F\u002Fgithub.com\u002Fapple\u002Fml-mobileone) | Pytorch | 1.2.1 and later | Jun 2022 | |\n\n### Transformer\n\n| | Model | Reference | Exported From | Supported Ailia Version | Date | Blog |\n|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_92fe8596625f.png\" width=128px>](image_classification\u002Fvit\u002F)| [vit](\u002Fimage_classification\u002Fvit\u002F)|[Pytorch reimplementation of the Vision Transformer (An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale)](https:\u002F\u002Fgithub.com\u002Fjeonsworld\u002FViT-pytorch)|Pytorch| 1.2.7 and later | Oct 2020 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fvision-transformer-state-of-the-art-image-identification-technology-without-convolutional-fd10097ae9c2) [JP](https:\u002F\u002Ftech.ailia.ai\u002Fvision-transformer-%E7%95%B3%E3%81%BF%E8%BE%BC%E3%81%BF%E6%BC%94%E7%AE%97%E3%82%92%E7%94%A8%E3%81%84%E3%81%AA%E3%81%84%E6%9C%80%E6%96%B0%E7%94%BB%E5%83%8F%E8%AD%98%E5%88%A5%E6%8A%80%E8%A1%93-84f06978a17f) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_0f1b4a595049.png\" width=128px>](image_classification\u002Fclip\u002F) | [clip](\u002Fimage_classification\u002Fclip\u002F)|[CLIP](https:\u002F\u002Fgithub.com\u002Fopenai\u002FCLIP) | Pytorch | 1.2.9 and later | Feb 2021 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fclip-learning-transferable-visual-models-from-natural-language-supervision-4508b3f0ea46) 
[JP](https:\u002F\u002Ftech.ailia.ai\u002Fclip-%E8%B6%85%E5%A4%A7%E8%A6%8F%E6%A8%A1%E3%83%87%E3%83%BC%E3%82%BF%E3%82%BB%E3%83%83%E3%83%88%E3%81%A7%E4%BA%8B%E5%89%8D%E5%AD%A6%E7%BF%92%E3%81%95%E3%82%8C-%E5%86%8D%E5%AD%A6%E7%BF%92%E3%81%AA%E3%81%97%E3%81%A7%E4%BB%BB%E6%84%8F%E3%81%AE%E7%89%A9%E4%BD%93%E3%82%92%E8%AD%98%E5%88%A5%E3%81%A7%E3%81%8D%E3%82%8B%E7%89%A9%E4%BD%93%E8%AD%98%E5%88%A5%E3%83%A2%E3%83%87%E3%83%AB-2ebc5c1666f) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_941efb51d361.jpg\" width=128px>](image_classification\u002Fswin-transformer\u002F) | [swin-transformer](\u002Fimage_classification\u002Fswin-transformer\u002F)|[Swin Transformer](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FSwin-Transformer) | Pytorch | 1.2.6 and later | Mar 2021 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_4711fbf0be4f.jpeg\" width=128px>](image_classification\u002Fjapanese-clip\u002F) | [japanese-clip](\u002Fimage_classification\u002Fjapanese-clip\u002F)|[Japanese-CLIP](https:\u002F\u002Fgithub.com\u002Frinnakk\u002Fjapanese-clip) | Pytorch | 1.2.15 and later | May 2022 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_4711fbf0be4f.jpeg\" width=128px>](image_classification\u002Fjapanese-stable-clip-vit-l-16\u002F) | [japanese-stable-clip-vit-l-16](\u002Fimage_classification\u002Fjapanese-stable-clip-vit-l-16\u002F) | [japanese-stable-clip-vit-l-16](https:\u002F\u002Fhuggingface.co\u002Fstabilityai\u002Fjapanese-stable-clip-vit-l-16\u002F) | Pytorch | 1.2.11 and later | Nov 2023 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_2b44dc0f5766.jpeg\" width=128px>](image_classification\u002Fclip-japanese-base\u002F) | 
[clip-japanese-base](\u002Fimage_classification\u002Fclip-japanese-base\u002F)|[line-corporation\u002Fclip-japanese-base](https:\u002F\u002Fhuggingface.co\u002Fline-corporation\u002Fclip-japanese-base) | Pytorch | 1.2.16 and later | Apr 2024 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_ba36126546cc.jpg\" width=128px>](image_classification\u002Fsiglip2\u002F) | [siglip2](\u002Fimage_classification\u002Fsiglip2\u002F)|[Multilingual Vision-Language Encoders with Improved Semantic Understanding, Localization, and Dense Features](https:\u002F\u002Fhuggingface.co\u002Fgoogle\u002Fsiglip2-base-patch16-224) | Pytorch | 1.2.16 and later | Feb 2025 | [JP](https:\u002F\u002Ftech.ailia.ai\u002Fsiglip2-%E6%AC%A1%E4%B8%96%E4%BB%A3%E3%81%AE0%E3%82%B7%E3%83%A7%E3%83%83%E3%83%88%E7%89%A9%E4%BD%93%E8%AD%98%E5%88%A5%E3%83%A2%E3%83%87%E3%83%AB-854e768c3163) |\n\n### Specific task\n\n| | Model | Reference | Exported From | Supported Ailia Version | Date | Blog |\n|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_f76cae2df52e.png\" width=128px>](image_classification\u002Fweather-prediction-from-image\u002F) | [weather-prediction-from-image](\u002Fimage_classification\u002Fweather-prediction-from-image\u002F)|[Weather Prediction From Image - (Warmth Of Image)](https:\u002F\u002Fgithub.com\u002Fberkgulay\u002Fweather-prediction-from-image) | Keras | 1.2.5 and later | Oct 2017 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_040e1e655ed9.jpeg\" width=128px>](image_classification\u002Fpartialconv\u002F) | [partialconv](\u002Fimage_classification\u002Fpartialconv\u002F)|[Partial Convolution Layer for Padding and Image Inpainting](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fpartialconv)|Pytorch| 1.2.0 and later | Nov 
2018 | |\n\n## Image inpainting\n\n| | Model | Reference | Exported From | Supported Ailia Version | Date | Blog |\n|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_9f6eb330853c.png\" width=128px>](image_inpainting\u002Fpytorch-inpainting-with-partial-conv\u002F) | [inpainting-with-partial-conv](\u002Fimage_inpainting\u002Fpytorch-inpainting-with-partial-conv\u002F) | [pytorch-inpainting-with-partial-conv](https:\u002F\u002Fgithub.com\u002Fnaoto0804\u002Fpytorch-inpainting-with-partial-conv) | PyTorch | 1.2.6 and later | Apr 2018 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Finpainting-with-partial-conv-a-machine-learning-model-that-predicts-and-fills-in-missing-parts-of-53c046343a85) [JP](https:\u002F\u002Ftech.ailia.ai\u002Finpainting-with-partial-conv-%E7%94%BB%E5%83%8F%E3%81%AE%E6%AC%A0%E6%90%8D%E9%83%A8%E5%88%86%E3%82%92%E4%BA%88%E6%B8%AC%E3%81%97%E3%81%A6%E5%9F%8B%E3%82%81%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-9746576e6490) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_739673d65451.png\" width=128px>](image_inpainting\u002Fdeepfillv2\u002F) | [deepfillv2](\u002Fimage_inpainting\u002Fdeepfillv2\u002F) | [Free-Form Image Inpainting with Gated Convolution](https:\u002F\u002Fgithub.com\u002Fopen-mmlab\u002Fmmediting\u002Ftree\u002Fmaster\u002Fconfigs\u002Finpainting\u002Fdeepfillv2) | Pytorch | 1.2.9 and later | Jun 2018 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_0bd7bd206fad.png\" width=128px>](image_inpainting\u002Finpainting_gmcnn\u002F) | [inpainting_gmcnn](\u002Fimage_inpainting\u002Finpainting_gmcnn\u002F) | [Image Inpainting via Generative Multi-column Convolutional Neural 
Networks](https:\u002F\u002Fgithub.com\u002Fshepnerd\u002Finpainting_gmcnn) | TensorFlow | 1.2.6 and later | Oct 2018 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_1105c8ef57db.jpg\" width=128px>](image_inpainting\u002F3d-photo-inpainting\u002F) | [3d-photo-inpainting](\u002Fimage_inpainting\u002F3d-photo-inpainting\u002F) | [3D Photography using Context-aware Layered Depth Inpainting](https:\u002F\u002Fgithub.com\u002Fvt-vl-lab\u002F3d-photo-inpainting) | Pytorch | 1.2.7 and later | Apr 2020 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_999e58d1d7cb.png\" width=128px>](image_inpainting\u002Flama\u002F) | [lama](\u002Fimage_inpainting\u002Flama\u002F) | [LaMa: Resolution-robust Large Mask Inpainting with Fourier Convolutions](https:\u002F\u002Fgithub.com\u002Fadvimman\u002Flama) | Pytorch | 1.2.13 and later | Sep 2021 | |\n\n## Image manipulation\n\n| | Model | Reference | Exported From | Supported Ailia Version | Date | Blog |\n|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_d5a8a3ce7dd0.jpg\" width=128px>](image_manipulation\u002Fcolorization\u002F) | [colorization](\u002Fimage_manipulation\u002Fcolorization\u002F) | [Colorful Image Colorization](https:\u002F\u002Fgithub.com\u002Frichzhang\u002Fcolorization) | Pytorch | 1.2.2 and later | Mar 2016 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fcolorization-a-machine-learning-model-for-colorizing-black-and-white-images-829e35e4f91c) [JP](https:\u002F\u002Ftech.ailia.ai\u002Fcolorization-%E7%99%BD%E9%BB%92%E7%94%BB%E5%83%8F%E3%82%92%E3%82%AB%E3%83%A9%E3%83%BC%E5%8C%96%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-177d3fd52e40) |\n| [\u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_d29fef1bee5d.png\" width=128px>](image_manipulation\u002Fcnngeometric_pytorch\u002F) | [cnngeometric_pytorch](\u002Fimage_manipulation\u002Fcnngeometric_pytorch\u002F) | [CNNGeometric PyTorch implementation](https:\u002F\u002Fgithub.com\u002Fignacio-rocco\u002Fcnngeometric_pytorch) | Pytorch | 1.2.7 and later | Mar 2017 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_44bb2f981ab3.png\" width=128px>](image_manipulation\u002Fstyle2paints\u002F) | [style2paints](\u002Fimage_manipulation\u002Fstyle2paints\u002F) | [Style2Paints](https:\u002F\u002Fgithub.com\u002Flllyasviel\u002Fstyle2paints) | TensorFlow | 1.2.6 and later | Jun 2017 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_ec7f82636195.png\" width=128px>](image_manipulation\u002Fdeblur_gan\u002F) | [deblur_gan](\u002Fimage_manipulation\u002Fdeblur_gan\u002F) | [DeblurGAN](https:\u002F\u002Fgithub.com\u002FKupynOrest\u002FDeblurGAN) | Pytorch | 1.2.6 and later | Nov 2017 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_fc0376679536.png\" width=128px>](image_manipulation\u002Fpytorch-superpoint\u002F) | [pytorch-superpoint](\u002Fimage_manipulation\u002Fpytorch-superpoint\u002F) | [pytorch-superpoint : Self-Supervised Interest Point Detection and Description](https:\u002F\u002Fgithub.com\u002Feric-yyjau\u002Fpytorch-superpoint) | Pytorch | 1.2.6 and later | Dec 2017 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_c1b5691c0735.png\" width=128px>](image_manipulation\u002Fnoise2noise\u002F) | [noise2noise](\u002Fimage_manipulation\u002Fnoise2noise\u002F) | [Learning Image Restoration without Clean Data](https:\u002F\u002Fgithub.com\u002Fjoeylitalien\u002Fnoise2noise-pytorch) | Pytorch | 1.2.0 and 
later | Mar 2018 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_fd63d6bde9f0.png\" width=128px>](image_manipulation\u002Fdfe\u002F) | [dfe](\u002Fimage_manipulation\u002Fdfe\u002F) | [Deep Fundamental Matrix Estimation](https:\u002F\u002Fgithub.com\u002Fisl-org\u002FDFE) | Pytorch | 1.2.6 and later | Oct 2018 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_17cbbe2b94d7.png\" width=128px>](image_manipulation\u002Fillnet\u002F) | [illnet](\u002Fimage_manipulation\u002Fillnet\u002F) | [Document Rectification and Illumination Correction using a Patch-based CNN](https:\u002F\u002Fgithub.com\u002Fxiaoyu258\u002FDocProj) | Pytorch | 1.2.2 and later | Sep 2019 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_7fb95d007dbf.png\" width=128px>](image_manipulation\u002Fdewarpnet\u002F) | [dewarpnet](\u002Fimage_manipulation\u002Fdewarpnet) | [DewarpNet: Single-Image Document Unwarping With Stacked 3D and 2D Regression Networks](https:\u002F\u002Fgithub.com\u002Fcvlab-stonybrook\u002FDewarpNet) | Pytorch | 1.2.1 and later | Oct 2019 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_d70aa4925090.png\" width=128px>](image_manipulation\u002Fdeep_white_balance\u002F) | [deep_white_balance](\u002Fimage_manipulation\u002Fdeep_white_balance\u002F) | [Deep White-Balance Editing, CVPR 2020 (Oral)](https:\u002F\u002Fgithub.com\u002Fmahmoudnafifi\u002FDeep_White_Balance) | PyTorch | 1.2.6 and later | Apr 2020 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_81a70b96c669.jpg\" width=128px>](image_manipulation\u002Fu2net_portrait\u002F) | [u2net_portrait](\u002Fimage_manipulation\u002Fu2net_portrait\u002F) | [U^2-Net: Going Deeper with Nested U-Structure for Salient Object 
Detection](https:\u002F\u002Fgithub.com\u002FNathanUA\u002FU-2-Net) | Pytorch | 1.2.2 and later | May 2020 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_1c9773677251.png\" width=128px>](image_manipulation\u002Finvertible_denoising_network\u002F) | [invertible_denoising_network](\u002Fimage_manipulation\u002Finvertible_denoising_network\u002F) | [Invertible Image Denoising](https:\u002F\u002Fgithub.com\u002FYang-Liu1082\u002FInvDN) | Pytorch | 1.2.8 and later | Apr 2021 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_16752e9a3ee1.png\" width=128px>](image_manipulation\u002Fdfm\u002F) | [dfm](\u002Fimage_manipulation\u002Fdfm\u002F) | [Deep Feature Matching](https:\u002F\u002Fgithub.com\u002Fufukefe\u002FDFM) | Pytorch | 1.2.6 and later | Jun 2021 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_25865e9e9a85.png\" width=128px>](image_manipulation\u002Ffbcnn\u002F) | [fbcnn](\u002Fimage_manipulation\u002Ffbcnn\u002F) | [Towards Flexible Blind JPEG Artifacts Removal](https:\u002F\u002Fgithub.com\u002Fjiaxi-jiang\u002FFBCNN)   | Pytorch | 1.2.9 and later | Sep 2021 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_b9a3b3fe56ed.png\" width=128px>](image_manipulation\u002Fdehamer\u002F) | [dehamer](\u002Fimage_manipulation\u002Fdehamer\u002F) | [Image Dehazing Transformer with Transmission-Aware 3D Position Embedding](https:\u002F\u002Fgithub.com\u002FLi-Chongyi\u002FDehamer) | Pytorch | 1.2.13 and later | Jun 2022 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_550d04c008dd.png\" width=128px>](image_manipulation\u002Flightglue\u002F) | [lightglue](\u002Fimage_manipulation\u002Flightglue\u002F) | [LightGlue-ONNX](https:\u002F\u002Fgithub.com\u002Ffabio-sim\u002FLightGlue-ONNX) | 
Pytorch | 1.2.15 and later | Jun 2023 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_37d51c7dc706.png\" width=128px>](image_manipulation\u002Fdocshadow\u002F) | [docshadow](\u002Fimage_manipulation\u002Fdocshadow\u002F) | [DocShadow-ONNX-TensorRT](https:\u002F\u002Fgithub.com\u002Ffabio-sim\u002FDocShadow-ONNX-TensorRT) | Pytorch | 1.2.10 and later | Aug 2023 | |\n\n## Image restoration\n\n| | Model | Reference | Exported From | Supported Ailia Version | Date | Blog |\n|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_7a391a16dcf4.png\" width=128px>](image_restoration\u002Fnafnet\u002F) | [nafnet](\u002Fimage_restoration\u002Fnafnet\u002F) | [NAFNet: Nonlinear Activation Free Network for Image Restoration](https:\u002F\u002Fgithub.com\u002Fmegvii-research\u002Fnafnet) | Pytorch | 1.2.10 and later | Mar 2022 | [JP](https:\u002F\u002Ftech.ailia.ai\u002Fnafnet-%E7%94%BB%E5%83%8F%E3%81%AE%E3%83%96%E3%83%A9%E3%83%BC%E3%82%92%E9%99%A4%E5%8E%BB%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-b8547fd67597) | \n\n## Image segmentation\n\n| | Model | Reference | Exported From | Supported Ailia Version | Date | Blog |\n|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_abe357270d5e.jpg\" width=128px>](image_segmentation\u002Fpytorch-fcn\u002F) | [pytorch-fcn](\u002Fimage_segmentation\u002Fpytorch-fcn\u002F) | [pytorch-fcn](https:\u002F\u002Fgithub.com\u002Fwkentaro\u002Fpytorch-fcn) | Pytorch | 1.3.0 and later | Nov 2014 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_3f45402f933b.png\" 
width=128px>](image_segmentation\u002Fpytorch-enet\u002F) | [pytorch-enet](\u002Fimage_segmentation\u002Fpytorch-enet\u002F) | [PyTorch-ENet](https:\u002F\u002Fgithub.com\u002Fdavidtvs\u002FPyTorch-ENet) | Pytorch | 1.2.8 and later | Jun 2016 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_2f27413addb8.png\" width=128px>](image_segmentation\u002Ftusimple-DUC\u002F) | [tusimple-DUC](\u002Fimage_segmentation\u002Ftusimple-DUC\u002F) | [TuSimple-DUC](https:\u002F\u002Fgithub.com\u002FTuSimple\u002FTuSimple-DUC) | Pytorch | 1.2.10 and later | Feb 2017 |  |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_191850dd1c12.jpg\" width=128px>](image_segmentation\u002Fpytorch-unet\u002F) | [pytorch-unet](\u002Fimage_segmentation\u002Fpytorch-unet\u002F) | [Pytorch-Unet](https:\u002F\u002Fgithub.com\u002Fmilesial\u002FPytorch-UNet) | Pytorch | 1.2.5 and later | Aug 2017 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_87d12e450abd.png\" width=128px>](image_segmentation\u002Fdeeplabv3\u002F) | [deeplabv3](\u002Fimage_segmentation\u002Fdeeplabv3\u002F) | [Xception65 for backbone network of DeepLab v3+](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Ftree\u002Fmaster\u002Fresearch\u002Fdeeplab) | Chainer | 1.2.0 and later | Feb 2018 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_2799f7c622db.png\" width=128px>](image_segmentation\u002Fpspnet-hair-segmentation\u002F) | [pspnet-hair-segmentation](\u002Fimage_segmentation\u002Fpspnet-hair-segmentation\u002F) | [pytorch-hair-segmentation](https:\u002F\u002Fgithub.com\u002FYBIGTA\u002Fpytorch-hair-segmentation) | Pytorch | 1.2.2 and later | Nov 2018 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_3c51b17e3980.png\" 
width=128px>](image_segmentation\u002Fswiftnet\u002F) | [swiftnet](\u002Fimage_segmentation\u002Fswiftnet\u002F) | [SwiftNet](https:\u002F\u002Fgithub.com\u002Forsic\u002Fswiftnet) | Pytorch | 1.2.6 and later | Mar 2019 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_fd93d51bea9f.png\" width=128px>](image_segmentation\u002Fhrnet_segmentation\u002F) | [hrnet_segmentation](\u002Fimage_segmentation\u002Fhrnet_segmentation\u002F) | [High-resolution networks (HRNets) for Semantic Segmentation](https:\u002F\u002Fgithub.com\u002FHRNet\u002FHRNet-Semantic-Segmentation) | Pytorch | 1.2.1 and later | Apr 2019 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_d217cbda81da.png\" width=128px>](image_segmentation\u002Fhair_segmentation\u002F) | [hair_segmentation](\u002Fimage_segmentation\u002Fhair_segmentation\u002F) | [hair segmentation in mobile device](https:\u002F\u002Fgithub.com\u002Fthangtran480\u002Fhair-segmentation) | Keras | 1.2.1 and later | Jul 2019 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_aa49d37f930c.png\" width=128px>](image_segmentation\u002Fpaddleseg\u002F) | [paddleseg](\u002Fimage_segmentation\u002Fpaddleseg\u002F) | [PaddleSeg](https:\u002F\u002Fgithub.com\u002FPaddlePaddle\u002FPaddleSeg\u002Ftree\u002Frelease\u002F2.3\u002Fcontrib\u002FCityscapesSOTA) | Pytorch | 1.2.7 and later | Aug 2019 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fpaddleseg-highly-accurate-segmentation-model-using-hierarchical-attention-18e69363dc2a) [JP](https:\u002F\u002Ftech.ailia.ai\u002Fpaddleseg-%E9%9A%8E%E5%B1%A4%E7%9A%84%E3%81%AA%E3%82%A2%E3%83%86%E3%83%B3%E3%82%B7%E3%83%A7%E3%83%B3%E3%82%92%E4%BD%BF%E7%94%A8%E3%81%97%E3%81%9F%E9%AB%98%E7%B2%BE%E5%BA%A6%E3%81%AA%E3%82%BB%E3%82%B0%E3%83%A1%E3%83%B3%E3%83%86%E3%83%BC%E3%82%B7%E3%83%A7%E3%83%B3%E3%83%A2%E3%83%87%E3%83%AB-acc89bf50423) |\n| 
[\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_844f01ccedaa.png\" width=128px>](image_segmentation\u002Fhuman_part_segmentation\u002F) | [human_part_segmentation](\u002Fimage_segmentation\u002Fhuman_part_segmentation\u002F) | [Self Correction for Human Parsing](https:\u002F\u002Fgithub.com\u002FPeikeLi\u002FSelf-Correction-Human-Parsing) | Pytorch | 1.2.4 and later | Oct 2019 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fhumanpartsegmentation-a-machine-learning-model-for-segmenting-human-parts-cd7e39480714) [JP](https:\u002F\u002Ftech.ailia.ai\u002Fhumanpartsegmentation-%E5%8B%95%E7%94%BB%E3%81%8B%E3%82%89%E4%BD%93%E3%81%AE%E9%83%A8%E4%BD%8D%E3%82%92%E3%82%BB%E3%82%B0%E3%83%A1%E3%83%B3%E3%83%86%E3%83%BC%E3%82%B7%E3%83%A7%E3%83%B3%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-e8a0e405255) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_8834a82b2f2a.png\" width=128px>](image_segmentation\u002Fsemantic-segmentation-mobilenet-v3\u002F) | [semantic-segmentation-mobilenet-v3](\u002Fimage_segmentation\u002Fsemantic-segmentation-mobilenet-v3) | [Semantic segmentation with MobileNetV3](https:\u002F\u002Fgithub.com\u002FOniroAI\u002FSemantic-segmentation-with-MobileNetV3) | TensorFlow | 1.2.5 and later | Nov 2019 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_12c06fdaabd8.jpg\" width=128px>](image_segmentation\u002Fsuim\u002F) | [suim](\u002Fimage_segmentation\u002Fsuim\u002F) | [SUIM](https:\u002F\u002Fgithub.com\u002FIRVLab\u002FSUIM) | Keras | 1.2.6 and later | Apr 2020 |  |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_191e6715f3f0.png\" width=128px>](image_segmentation\u002Fyet-another-anime-segmenter\u002F) | [yet-another-anime-segmenter](\u002Fimage_segmentation\u002Fyet-another-anime-segmenter\u002F) 
| [Yet Another Anime Segmenter](https:\u002F\u002Fgithub.com\u002Fzymk9\u002FYet-Another-Anime-Segmenter) | Pytorch | 1.2.6 and later | Oct 2020 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_08d82935701a.png\" width=128px>](image_segmentation\u002Fdense_prediction_transformers\u002F) | [dense_prediction_transformers](\u002Fimage_segmentation\u002Fdense_prediction_transformers\u002F) | [Vision Transformers for Dense Prediction](https:\u002F\u002Fgithub.com\u002Fintel-isl\u002FDPT)   | Pytorch | 1.2.7 and later | Mar 2021 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fdpt-segmentation-model-using-vision-transformer-b479f3027468) [JP](https:\u002F\u002Ftech.ailia.ai\u002Fdpt-vision-transformer%E3%82%92%E4%BD%BF%E7%94%A8%E3%81%97%E3%81%9F%E3%82%BB%E3%82%B0%E3%83%A1%E3%83%B3%E3%83%86%E3%83%BC%E3%82%B7%E3%83%A7%E3%83%B3%E3%83%A2%E3%83%87%E3%83%AB-88db4842b4a7) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_5f51acdb3300.png\" width=128px>](image_segmentation\u002Fgroup_vit\u002F) | [group_vit](\u002Fimage_segmentation\u002Fgroup_vit\u002F) | [GroupViT](https:\u002F\u002Fgithub.com\u002FNVlabs\u002FGroupViT) | Pytorch | 1.2.10 and later | Feb 2022 |  |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_9811c87f224d.png\" width=128px>](image_segmentation\u002Fpp_liteseg\u002F) | [pp_liteseg](\u002Fimage_segmentation\u002Fpp_liteseg\u002F) | [PP-LiteSeg](https:\u002F\u002Fgithub.com\u002FPaddlePaddle\u002FPaddleSeg\u002Ftree\u002Fdevelop\u002Fconfigs\u002Fpp_liteseg) | Pytorch | 1.2.10 and later | Apr 2022 |  |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_462c099d291a.png\" width=128px>](image_segmentation\u002Fanime-segmentation\u002F) | [anime-segmentation](\u002Fimage_segmentation\u002Fanime-segmentation\u002F) | [Anime 
Segmentation](https:\u002F\u002Fgithub.com\u002FSkyTNT\u002Fanime-segmentation) | Pytorch | 1.2.9 and later | Aug 2022 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_a792f981018a.png\" width=128px>](image_segmentation\u002Fyolov8-seg\u002F) | [yolov8-seg](\u002Fimage_segmentation\u002Fyolov8-seg\u002F) | [YOLOv8](https:\u002F\u002Fgithub.com\u002Fultralytics\u002Fultralytics) | Pytorch | 1.2.14.1 and later | Jan 2023 |  |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_cb9ae91d5327.png\" width=128px>](image_segmentation\u002Fsegment-anything\u002F) | [segment-anything](\u002Fimage_segmentation\u002Fsegment-anything\u002F) | [Segment Anything](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fsegment-anything) | Pytorch | 1.2.16 and later | Apr 2023 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_1baef25228f0.png\" width=128px>](image_segmentation\u002Fgrounded_sam\u002F) | [grounded_sam](\u002Fimage_segmentation\u002Fgrounded_sam\u002F) | [Grounded-SAM](https:\u002F\u002Fgithub.com\u002FIDEA-Research\u002FGrounded-Segment-Anything\u002Ftree\u002Fmain) | Pytorch | 1.2.16 and later | Apr 2023 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_e1090f254ed2.png\" width=128px>](image_segmentation\u002Ffast_sam\u002F) | [fast_sam](\u002Fimage_segmentation\u002Ffast_sam\u002F) | [FastSAM](https:\u002F\u002Fgithub.com\u002FCASIA-IVA-Lab\u002FFastSAM) | Pytorch | 1.2.14 and later | Jun 2023 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_47e08849abf0.png\" width=128px>](image_segmentation\u002Fmobile_sam\u002F) | [mobile_sam](\u002Fimage_segmentation\u002Fmobile_sam\u002F) | [MobileSAM](https:\u002F\u002Fgithub.com\u002FChaoningZhang\u002FMobileSAM) | Pytorch | 1.6.0 and later | Jun 
2023 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_fe0133ff1701.png\" width=128px>](image_segmentation\u002Fedge_sam\u002F) | [edge_sam](\u002Fimage_segmentation\u002Fedge_sam\u002F) | [EdgeSAM](https:\u002F\u002Fgithub.com\u002Fchongzhou96\u002FEdgeSAM) | Pytorch | 1.2.10 and later | Dec 2023 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_a429b166130b.png\" width=128px>](image_segmentation\u002Fsegment-anything-2\u002F) | [segment-anything-2](\u002Fimage_segmentation\u002Fsegment-anything-2\u002F) | [Segment Anything 2](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fsegment-anything-2) | Pytorch | 1.2.16 and later | Jul 2024 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_72c7cb767641.png\" width=128px>](image_segmentation\u002Fyolov11-seg\u002F) | [yolov11-seg](\u002Fimage_segmentation\u002Fyolov11-seg\u002F) | [Ultralytics YOLO11](https:\u002F\u002Fgithub.com\u002Fultralytics\u002Fultralytics) | Pytorch | 1.2.14.1 and later | Sep 2024 |  |\n\n## Landmark classification\n\n| | Model | Reference | Exported From | Supported Ailia Version | Date | Blog |\n|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_77a16a3a0588.jpg\" width=128px>](landmark_classification\u002Fplaces365\u002F) | [places365](\u002Flandmark_classification\u002Fplaces365\u002F)|[Release of Places365-CNNs](https:\u002F\u002Fgithub.com\u002FCSAILVision\u002Fplaces365) | Pytorch | 1.2.5 and later | Oct 2016 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_c5083148522e.jpg\" width=128px>](landmark_classification\u002Flandmarks_classifier_asia\u002F) | 
[landmarks_classifier_asia](\u002Flandmark_classification\u002Flandmarks_classifier_asia\u002F)|[Landmarks classifier_asia_V1.1](https:\u002F\u002Ftfhub.dev\u002Fgoogle\u002Fon_device_vision\u002Fclassifier\u002Flandmarks_classifier_asia_V1\u002F1) | TensorFlow Hub | 1.2.4 and later | Apr 2020 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Flandmarksclassifierasia-a-machine-learning-model-to-identify-japanese-tourist-attractions-9bb9600d2c80) [JP](https:\u002F\u002Ftech.ailia.ai\u002Flandmarksclassifierasia-%E6%97%A5%E6%9C%AC%E3%81%AE%E8%A6%B3%E5%85%89%E5%90%8D%E6%89%80%E3%82%92%E8%AD%98%E5%88%A5%E3%81%A7%E3%81%8D%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-dbe930b5653c)|\n\n## Line segment detection\n\n| | Model | Reference | Exported From | Supported Ailia Version | Date | Blog |\n|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_ba579303251b.png\" width=128px>](line_segment_detection\u002Fdexined\u002F) | [dexined](\u002Fline_segment_detection\u002Fdexined\u002F) | [DexiNed: Dense Extreme Inception Network for Edge Detection](https:\u002F\u002Fgithub.com\u002Fxavysp\u002FDexiNed) | Pytorch | 1.2.5 and later | Sep 2019 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_5f084d327d16.jpg\" width=128px>](line_segment_detection\u002Fmlsd\u002F) | [mlsd](\u002Fline_segment_detection\u002Fmlsd\u002F) | [M-LSD: Towards Light-weight and Real-time Line Segment Detection](https:\u002F\u002Fgithub.com\u002Fnavervision\u002Fmlsd) | TensorFlow | 1.2.8 and later | Jun 2021 |  [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fm-lsd-machine-learning-model-for-detecting-wireframes-ac1b618f459b) 
[JP](https:\u002F\u002Ftech.ailia.ai\u002Fm-lsd-%E3%83%AF%E3%82%A4%E3%83%A4%E3%83%BC%E3%83%95%E3%83%AC%E3%83%BC%E3%83%A0%E3%82%92%E6%A4%9C%E5%87%BA%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-876bef18d908) |\n\n## Low Light Image Enhancement\n\n| | Model | Reference | Exported From | Supported Ailia Version | Date | Blog |\n|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_82d233d3ecc4.png\" width=128px>](low_light_image_enhancement\u002Fagllnet\u002F) | [agllnet](\u002Flow_light_image_enhancement\u002Fagllnet\u002F) | [AGLLNet: Attention Guided Low-light Image Enhancement (IJCV 2021)](https:\u002F\u002Fgithub.com\u002Fyu-li\u002FAGLLNet) | Pytorch | 1.2.9 and later | Aug 2019 |[EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fagllnet-a-machine-learning-model-for-brightening-dark-images-133a0887b5c) [JP](https:\u002F\u002Ftech.ailia.ai\u002Fagllnet-%E6%9A%97%E3%81%84%E7%94%BB%E5%83%8F%E3%82%92%E6%98%8E%E3%82%8B%E3%81%8F%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-d59181ad89a9) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_055b19c188b4.png\" width=128px>](low_light_image_enhancement\u002Fdrbn_skf\u002F) | [drbn_skf](\u002Flow_light_image_enhancement\u002Fdrbn_skf\u002F) | [DRBN SKF](https:\u002F\u002Fgithub.com\u002Flangmanbusi\u002FSemantic-Aware-Low-Light-Image-Enhancement\u002Ftree\u002Fmain\u002FDRBN_SKF) | Pytorch | 1.2.14 and later | Apr 2023 | |\n\n## Natural language processing\n\n### Bert\n\n| Model | Reference | Exported From | Supported Ailia Version | Date | Blog |\n|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n|[bert](\u002Fnatural_language_processing\u002Fbert) | 
[pytorch-pretrained-bert](https:\u002F\u002Fpypi.org\u002Fproject\u002Fpytorch-pretrained-bert\u002F) | Pytorch | 1.2.2 and later | Oct 2018 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fbert-a-machine-learning-model-for-efficient-natural-language-processing-aef3081c24e8) [JP](https:\u002F\u002Ftech.ailia.ai\u002Fbert-%E8%87%AA%E7%84%B6%E8%A8%80%E8%AA%9E%E5%87%A6%E7%90%86%E3%82%92%E5%8A%B9%E7%8E%87%E7%9A%84%E3%81%AB%E5%AD%A6%E7%BF%92%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-3a9c27d78cf8) |\n|[bert_maskedlm](\u002Fnatural_language_processing\u002Fbert_maskedlm) | [huggingface\u002Ftransformers](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Ftransformers) | Pytorch | 1.2.5 and later | Oct 2018 | |\n|[bert_question_answering](\u002Fnatural_language_processing\u002Fbert_question_answering) | [huggingface\u002Ftransformers](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Ftransformers) | Pytorch | 1.2.5 and later | Oct 2018 | |\n\n### Embedding\n\n| Model | Reference | Exported From | Supported Ailia Version | Date | Blog |\n|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n|[sentence_transformers_japanese](\u002Fnatural_language_processing\u002Fsentence_transformers_japanese) | [sentence transformers](https:\u002F\u002Fhuggingface.co\u002Fsentence-transformers\u002Fparaphrase-multilingual-mpnet-base-v2) | Pytorch | 1.2.7 and later | Aug 2019 | [JP](https:\u002F\u002Ftech.ailia.ai\u002Fsentencetransformer-%E3%83%86%E3%82%AD%E3%82%B9%E3%83%88%E3%81%8B%E3%82%89embedding%E3%82%92%E5%8F%96%E5%BE%97%E3%81%99%E3%82%8B%E8%A8%80%E8%AA%9E%E5%87%A6%E7%90%86%E3%83%A2%E3%83%87%E3%83%AB-b7d2a9bb2c31) |\n|[multilingual-e5](\u002Fnatural_language_processing\u002Fmultilingual-e5) | [multilingual-e5-base](https:\u002F\u002Fhuggingface.co\u002Fintfloat\u002Fmultilingual-e5-base) | Pytorch | 1.2.15 and later | Dec 2022 | 
[JP](https:\u002F\u002Ftech.ailia.ai\u002Fmultilingual-e5-%E5%A4%9A%E8%A8%80%E8%AA%9E%E3%81%AE%E3%83%86%E3%82%AD%E3%82%B9%E3%83%88%E3%82%92embedding%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-71f1dec7c4f0) |\n|[glucose](\u002Fnatural_language_processing\u002Fglucose) | [GLuCoSE (General Luke-based Contrastive Sentence Embedding)-base-Japanese](https:\u002F\u002Fhuggingface.co\u002Fpkshatech\u002FGLuCoSE-base-ja) | Pytorch | 1.2.15 and later | Jul 2023 |\n|[ruri-v3](\u002Fnatural_language_processing\u002Fruri-v3) | [ruri-v3-310m ](https:\u002F\u002Fhuggingface.co\u002Fcl-nagoya\u002Fruri-v3-310m) | Pytorch | 1.2.13 and later | Apr 2025 |\n|[embeddinggemma](\u002Fnatural_language_processing\u002Fembeddinggemma) | [EmbeddingGemma](https:\u002F\u002Fai.google.dev\u002Fgemma\u002Fdocs\u002Fembeddinggemma?hl=ja) | Pytorch | 1.2.14 and later | Sep 2025| [JP](https:\u002F\u002Fkyakuno.medium.com\u002Fembedding-gemma-google%E3%81%AE%E9%96%8B%E7%99%BA%E3%81%97%E3%81%9F%E8%BB%BD%E9%87%8F%E3%81%A7%E9%AB%98%E7%B2%BE%E5%BA%A6%E3%81%AAembedding%E3%83%A2%E3%83%87%E3%83%AB-9ec139ddfde9) |\n\n### Error corrector\n\n| Model | Reference | Exported From | Supported Ailia Version | Date | Blog |\n|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n|[bert_insert_punctuation](\u002Fnatural_language_processing\u002Fbert_insert_punctuation) | [bert-japanese](https:\u002F\u002Fgithub.com\u002Fcl-tohoku\u002Fbert-japanese) | Pytorch | 1.2.15 and later | Nov 2019 | |\n|[bertjsc](\u002Fnatural_language_processing\u002Fbertjsc) | [bertjsc](https:\u002F\u002Fgithub.com\u002Fer-ri\u002Fbertjsc) | Pytorch | 1.2.15 and later | Mar 2023 | |\n|[t5_whisper_medical](\u002Fnatural_language_processing\u002Ft5_whisper_medical) | error correction of medical terms using t5 | Pytorch | 1.2.13 and later |  | |\n\n### Grapheme to phoneme\n\n| Model | Reference | Exported From | Supported Ailia Version | Date | Blog 
|\n|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n|[g2p_en](\u002Fnatural_language_processing\u002Fg2p_en) | [g2p_en](https:\u002F\u002Fgithub.com\u002FKyubyong\u002Fg2p) | Pytorch | 1.2.14 and later | Jan 2019 | [JP](https:\u002F\u002Ftech.ailia.ai\u002Fg2p-en-%E8%8B%B1%E8%AA%9E%E3%81%AE%E3%83%86%E3%82%AD%E3%82%B9%E3%83%88%E3%82%92%E9%9F%B3%E7%B4%A0%E3%81%AB%E5%A4%89%E6%8F%9B%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-88947c27b9ea) |\n|[g2pw](\u002Fnatural_language_processing\u002Fg2pw) | [g2pW](https:\u002F\u002Fgithub.com\u002FGitYCC\u002Fg2pW) | Pytorch | 1.2.9 and later | Mar 2022 | |\n|[soundchoice-g2p](\u002Fnatural_language_processing\u002Fsoundchoice-g2p) | [Hugging Face - speechbrain\u002Fsoundchoice-g2p](https:\u002F\u002Fhuggingface.co\u002Fspeechbrain\u002Fsoundchoice-g2p) | Pytorch | 1.2.16 and later | Jul 2022 | |\n\n### Named entity recognition\n\n| Model | Reference | Exported From | Supported Ailia Version | Date | Blog |\n|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n|[bert_ner](\u002Fnatural_language_processing\u002Fbert_ner) | [huggingface\u002Ftransformers](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Ftransformers) | Pytorch | 1.2.5 and later | Oct 2018 | |\n|[t5_base_japanese_ner](\u002Fnatural_language_processing\u002Ft5_base_japanese_ner) |  [t5-japanese](https:\u002F\u002Fgithub.com\u002Fsonoisa\u002Ft5-japanese) | Pytorch | 1.2.13 and later | Mar 2021 | |\n|[bert_ner_japanese](\u002Fnatural_language_processing\u002Fbert_ner_japanese) | [jurabi\u002Fbert-ner-japanese](https:\u002F\u002Fhuggingface.co\u002Fjurabi\u002Fbert-ner-japanese) | Pytorch | 1.2.10 and later | Mar 2023 | |\n\n### Reranker\n\n| Model | Reference | Exported From | Supported Ailia Version | Date | Blog 
|\n|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n|[cross_encoder_mmarco](\u002Fnatural_language_processing\u002Fcross_encoder_mmarco) | [jeffwan\u002Fmmarco-mMiniLMv2-L12-H384-v1](https:\u002F\u002Fhuggingface.co\u002Fjeffwan\u002Fmmarco-mMiniLMv2-L12-H384-v1) | Pytorch | 1.2.10 and later | Sep 2022 | [JP](https:\u002F\u002Ftech.ailia.ai\u002Fcrossencodermmarco-%E8%B3%AA%E5%95%8F%E6%96%87%E3%81%A8%E5%9B%9E%E7%AD%94%E6%96%87%E3%81%AE%E9%A1%9E%E4%BC%BC%E5%BA%A6%E3%82%92%E8%A8%88%E7%AE%97%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-c90b35e9fc09) |\n|[japanese-reranker-cross-encoder](\u002Fnatural_language_processing\u002Fjapanese-reranker-cross-encoder) | [hotchpotch\u002Fjapanese-reranker-cross-encoder-large-v1](https:\u002F\u002Fhuggingface.co\u002Fhotchpotch\u002Fjapanese-reranker-cross-encoder-large-v1) | Pytorch | 1.2.16 and later | Apr 2024 | |\n\n### Sentence generation\n\n| Model | Reference | Exported From | Supported Ailia Version | Date | Blog |\n|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n|[gpt2](\u002Fnatural_language_processing\u002Fgpt2) | [GPT-2](https:\u002F\u002Fgithub.com\u002Fonnx\u002Fmodels\u002Fblob\u002Fmaster\u002Ftext\u002Fmachine_comprehension\u002Fgpt-2\u002FREADME.md) | Pytorch | 1.2.7 and later | Feb 2019 | |\n|[rinna_gpt2](\u002Fnatural_language_processing\u002Frinna_gpt2) | [japanese-pretrained-models](https:\u002F\u002Fgithub.com\u002Frinnakk\u002Fjapanese-pretrained-models) | Pytorch | 1.2.7 and later | Apr 2021 | |\n\n### Sentiment analysis\n\n| Model | Reference | Exported From | Supported Ailia Version | Date | Blog |\n|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n|[bert_sentiment_analysis](\u002Fnatural_language_processing\u002Fbert_sentiment_analysis) | [huggingface\u002Ftransformers](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Ftransformers) | 
Pytorch | 1.2.5 and later | Oct 2018 | |\n|[bert_tweets_sentiment](\u002Fnatural_language_processing\u002Fbert_tweets_sentiment) | [huggingface\u002Ftransformers](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Ftransformers) | Pytorch | 1.2.5 and later | Oct 2018 | |\n\n### Summarize\n\n| Model | Reference | Exported From | Supported Ailia Version | Date | Blog |\n|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n|[bert_sum_ext](\u002Fnatural_language_processing\u002Fbert_sum_ext) | [BERTSUMEXT](https:\u002F\u002Fgithub.com\u002Fdmmiller612\u002Fbert-extractive-summarizer) | Pytorch | 1.2.7 and later | May 2019 | |\n|[presumm](\u002Fnatural_language_processing\u002Fpresumm) | [PreSumm](https:\u002F\u002Fgithub.com\u002Fnlpyang\u002FPreSumm) | Pytorch | 1.2.8 and later | Aug 2019 | |\n|[t5_base_japanese_title_generation](\u002Fnatural_language_processing\u002Ft5_base_japanese_title_generation) | [t5-japanese](https:\u002F\u002Fgithub.com\u002Fsonoisa\u002Ft5-japanese) | Pytorch | 1.2.13 and later | Mar 2021 | [JP](https:\u002F\u002Ftech.ailia.ai\u002Ft5-%E3%83%86%E3%82%AD%E3%82%B9%E3%83%88%E3%81%8B%E3%82%89%E3%83%86%E3%82%AD%E3%82%B9%E3%83%88%E3%82%92%E7%94%9F%E6%88%90%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-602830bdc5b4) |\n|[t5_base_japanese_summarization](\u002Fnatural_language_processing\u002Ft5_base_japanese_summarization) | [t5-japanese](https:\u002F\u002Fgithub.com\u002Fsonoisa\u002Ft5-japanese) | Pytorch | 1.2.13 and later | Mar 2021 | |\n\n### Translation\n\n| Model | Reference | Exported From | Supported Ailia Version | Date | Blog |\n|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n|[fugumt-en-ja](\u002Fnatural_language_processing\u002Ffugumt-en-ja) | [Fugu-Machine Translator](https:\u002F\u002Fgithub.com\u002Fs-taka\u002Ffugumt) | Pytorch | 1.2.9 and later | Nov 2020 | 
[JP](https:\u002F\u002Ftech.ailia.ai\u002Ffugumt-%E8%8B%B1%E8%AA%9E%E3%81%8B%E3%82%89%E6%97%A5%E6%9C%AC%E8%AA%9E%E3%81%B8%E3%81%AE%E7%BF%BB%E8%A8%B3%E3%82%92%E8%A1%8C%E3%81%86%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-46b839c1b4ae) |\n|[fugumt-ja-en](\u002Fnatural_language_processing\u002Ffugumt-ja-en) | [Fugu-Machine Translator](https:\u002F\u002Fgithub.com\u002Fs-taka\u002Ffugumt) | Pytorch | 1.2.10 and later | Nov 2020 | |\n\n### Zero shot classification\n\n| Model | Reference | Exported From | Supported Ailia Version | Date | Blog |\n|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n|[bert_zero_shot_classification](\u002Fnatural_language_processing\u002Fbert_zero_shot_classification) | [huggingface\u002Ftransformers](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Ftransformers) | Pytorch | 1.2.5 and later | Oct 2018 | |\n|[multilingual-minilmv2](\u002Fnatural_language_processing\u002Fmultilingual-minilmv2) | [MoritzLaurer\u002Fmultilingual-MiniLMv2-L12-mnli-xnli](https:\u002F\u002Fhuggingface.co\u002FMoritzLaurer\u002Fmultilingual-MiniLMv2-L12-mnli-xnli) | Pytorch | 1.2.10 and later | Jun 2022 | |\n\n## Network intrusion detection\n\n| Model | Reference | Exported From | Supported Ailia Version | Date | Blog |\n|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [bert-network-packet-flow-header-payload](\u002Fnetwork_intrusion_detection\u002Fbert-network-packet-flow-header-payload\u002F) | [bert-network-packet-flow-header-payload](https:\u002F\u002Fhuggingface.co\u002Frdpahalavan\u002Fbert-network-packet-flow-header-payload) | Pytorch | 1.2.10 and later | Sep 2023 | |\n| [falcon-adapter-network-packet](\u002Fnetwork_intrusion_detection\u002Ffalcon-adapter-network-packet\u002F) | [falcon-adapter-network-packet](https:\u002F\u002Fhuggingface.co\u002Frdpahalavan\u002Ffalcon-adapter-network-packet) | Pytorch | 1.2.10 and later | Sep 2023 | |\n\n## Neural 
Rendering\n\n| | Model | Reference | Exported From | Supported Ailia Version | Date | Blog |\n|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_629561a770aa.png\" width=128px>](neural_rendering\u002Fnerf\u002F) | [nerf](\u002Fneural_rendering\u002Fnerf\u002F) | [NeRF: Neural Radiance Fields](https:\u002F\u002Fgithub.com\u002Fbmild\u002Fnerf) | Tensorflow | 1.2.10 and later | Mar 2020 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fnerf-machine-learning-model-to-generate-and-render-3d-models-from-multiple-viewpoint-images-599631dc2075) [JP](https:\u002F\u002Ftech.ailia.ai\u002Fnerf-%E8%A4%87%E6%95%B0%E3%81%AE%E8%A6%96%E7%82%B9%E3%81%AE%E7%94%BB%E5%83%8F%E3%81%8B%E3%82%893d%E3%83%A2%E3%83%87%E3%83%AB%E3%82%92%E7%94%9F%E6%88%90%E3%81%97%E3%81%A6%E3%83%AC%E3%83%B3%E3%83%80%E3%83%AA%E3%83%B3%E3%82%B0%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-2d6bee7ff22f) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_e4d9a4b35cf7.gif\" width=128px>](neural_rendering\u002Ftripo_sr\u002F) | [TripoSR](\u002Fneural_rendering\u002Ftripo_sr\u002F) | [TripoSR](https:\u002F\u002Fgithub.com\u002FVAST-AI-Research\u002FTripoSR) | Pytorch | 1.2.6 and later | Mar 2024 | |\n\n## NSFW detector\n\n| Model | Reference | Exported From | Supported Ailia Version | Date | Blog |\n|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [clip-based-nsfw-detector](\u002Fnsfw_detector\u002Fclip-based-nsfw-detector\u002F) | [CLIP-based-NSFW-Detector](https:\u002F\u002Fgithub.com\u002FLAION-AI\u002FCLIP-based-NSFW-Detector)| Keras | 1.2.10 and later | Mar 2022 | 
[JP](https:\u002F\u002Ftech.ailia.ai\u002Fclip-based-nsfw-detector-%E4%B8%8D%E9%81%A9%E5%88%87%E7%94%BB%E5%83%8F%E3%82%92%E6%A4%9C%E5%87%BA%E3%81%A7%E3%81%8D%E3%82%8Bai%E3%83%A2%E3%83%87%E3%83%AB-1ea69dbd7c0d) |\n\n## [Object detection](\u002Fobject_detection\u002F)\n\n### CNN\n\n| | Model | Reference | Exported From | Supported Ailia Version | Date | Blog |\n|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_613960ffec15.png\" width=128px>](object_detection\u002Fyolov1-tiny\u002F) | [yolov1-tiny](\u002Fobject_detection\u002Fyolov1-tiny\u002F) | [YOLO: Real-Time Object Detection](https:\u002F\u002Fpjreddie.com\u002Fdarknet\u002Fyolov1\u002F) | Darknet | 1.1.0 and later | Jun 2015 | [JP](https:\u002F\u002Ftech.ailia.ai\u002Fyolov1-you-look-only-once%E9%AB%98%E9%80%9F%E3%81%AA%E7%89%A9%E4%BD%93%E6%A4%9C%E5%87%BA%E3%83%A2%E3%83%87%E3%83%AB-92141aab4b69) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_0f5d8761fbc6.png\" width=128px>](object_detection\u002Fyolov2\u002F) | [yolov2](\u002Fobject_detection\u002Fyolov2\u002F) | [YOLO: Real-Time Object Detection](https:\u002F\u002Fpjreddie.com\u002Fdarknet\u002Fyolo\u002F) | Pytorch | 1.2.0 and later | Dec 2016 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_5fff17f4eb17.png\" width=128px>](object_detection\u002Fyolov2-tiny\u002F) | [yolov2-tiny](\u002Fobject_detection\u002Fyolov2-tiny\u002F) | [YOLO: Real-Time Object Detection](https:\u002F\u002Fpjreddie.com\u002Fdarknet\u002Fyolo\u002F) | Pytorch | 1.2.6 and later | Dec 2016 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_df64af60efe6.png\" width=128px>](object_detection\u002Fmaskrcnn\u002F) | [maskrcnn](\u002Fobject_detection\u002Fmaskrcnn\u002F) 
| [Mask R-CNN: real-time neural network for object instance segmentation](https:\u002F\u002Fgithub.com\u002Fonnx\u002Fmodels\u002Ftree\u002Fmaster\u002Fvision\u002Fobject_detection_segmentation\u002Fmask-rcnn) | Pytorch | 1.2.3 and later | Mar 2017 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_24e777e292da.png\" width=128px>](object_detection\u002Fyolov3\u002F) | [yolov3](\u002Fobject_detection\u002Fyolov3\u002F) | [YOLO: Real-Time Object Detection](https:\u002F\u002Fpjreddie.com\u002Fdarknet\u002Fyolo\u002F) | ONNX Runtime | 1.2.1 and later | Apr 2018 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fyolov3-a-machine-learning-model-to-detect-the-position-and-type-of-an-object-60f1c18f8107) [JP](https:\u002F\u002Ftech.ailia.ai\u002Fyolov3-66c9b998c096) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_9749fbe69a00.png\" width=128px>](object_detection\u002Fyolov3-tiny\u002F) | [yolov3-tiny](\u002Fobject_detection\u002Fyolov3-tiny\u002F) | [YOLO: Real-Time Object Detection](https:\u002F\u002Fpjreddie.com\u002Fdarknet\u002Fyolo\u002F) | ONNX Runtime | 1.2.1 and later | Apr 2018 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_5cf77db3b1bd.png\" width=128px>](object_detection\u002Fmobilenet_ssd\u002F) | [mobilenet_ssd](\u002Fobject_detection\u002Fmobilenet_ssd\u002F) | [MobileNetV1, MobileNetV2, VGG based SSD\u002FSSD-lite implementation in Pytorch](https:\u002F\u002Fgithub.com\u002Fqfgaohao\u002Fpytorch-ssd) | Pytorch | 1.2.1 and later | Aug 2018 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fmobilenetssd-a-machine-learning-model-for-fast-object-detection-37352ce6da7d) 
[JP](https:\u002F\u002Ftech.ailia.ai\u002Fmobilenetssd-%E9%AB%98%E9%80%9F%E3%81%AB%E7%89%A9%E4%BD%93%E6%A4%9C%E5%87%BA%E3%82%92%E8%A1%8C%E3%81%86%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-be3ca37c411) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_9d2ceed49436.png\" width=128px>](object_detection\u002Fm2det\u002F) | [m2det](\u002Fobject_detection\u002Fm2det\u002F) | [M2Det: A Single-Shot Object Detector based on Multi-Level Feature Pyramid Network](https:\u002F\u002Fgithub.com\u002Fqijiezhao\u002FM2Det) | Pytorch | 1.2.3 and later | Nov 2018 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fm2det-highly-accurate-object-detection-model-b5c5bff27970) [JP](https:\u002F\u002Ftech.ailia.ai\u002Fm2det-%E9%AB%98%E7%B2%BE%E5%BA%A6%E3%81%AA%E7%89%A9%E4%BD%93%E6%A4%9C%E5%87%BA%E3%83%A2%E3%83%87%E3%83%AB-bf92a8a3d423) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_d2f0dda17c1f.png\" width=128px>](object_detection\u002Fcenternet\u002F) | [centernet](\u002Fobject_detection\u002Fcenternet\u002F) | [CenterNet : Objects as Points](https:\u002F\u002Fgithub.com\u002Fxingyizhou\u002FCenterNet) | Pytorch | 1.2.1 and later | Apr 2019 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fcenternet-a-machine-learning-model-for-anchorless-object-detection-462c48483cfe) [JP](https:\u002F\u002Ftech.ailia.ai\u002Fcenternet-%E3%82%A2%E3%83%B3%E3%82%AB%E3%83%BC%E3%83%AC%E3%82%B9%E3%81%AA%E7%89%A9%E4%BD%93%E6%A4%9C%E5%87%BA%E3%82%92%E8%A1%8C%E3%81%86%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-9ecbadefd884) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_152d09b646c9.png\" width=128px>](object_detection\u002Fyolact\u002F) | [yolact](\u002Fobject_detection\u002Fyolact\u002F) | [You Only Look At CoefficienTs](https:\u002F\u002Fgithub.com\u002Fdbolya\u002Fyolact) | Pytorch | 1.2.6 
and later | Apr 2019 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_1e28bc4a0422.png\" width=128px>](object_detection\u002Fefficientdet\u002F) | [efficientdet](\u002Fobject_detection\u002Fefficientdet\u002F) | [EfficientDet: Scalable and Efficient Object Detection, in PyTorch](https:\u002F\u002Fgithub.com\u002Ftoandaominh1997\u002FEfficientDet.Pytorch) | Pytorch | 1.2.6 and later | Nov 2019 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_fc66366d6ce8.png\" width=128px>](object_detection\u002Fpedestrian_detection\u002F) | [pedestrian_detection](\u002Fobject_detection\u002Fpedestrian_detection\u002F) | [Pedestrian-Detection-on-YOLOv3_Research-and-APP](https:\u002F\u002Fgithub.com\u002FZyjacya-In-love\u002FPedestrian-Detection-on-YOLOv3_Research-and-APP) | Keras | 1.2.1 and later | Mar 2020 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_84d8d0877f8d.png\" width=128px>](object_detection\u002Fcrowd_det\u002F) | [crowd_det](\u002Fobject_detection\u002Fcrowd_det\u002F) | [Detection in Crowded Scenes](https:\u002F\u002Fgithub.com\u002FPurkialo\u002FCrowdDet) | Pytorch | 1.2.13 and later | Mar 2020 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_c6310f97b27a.png\" width=128px>](object_detection\u002Fyolov4\u002F) | [yolov4](\u002Fobject_detection\u002Fyolov4\u002F) | [Pytorch-YOLOv4](https:\u002F\u002Fgithub.com\u002FTianxiaomo\u002Fpytorch-YOLOv4) | Pytorch | 1.2.4 and later | Apr 2020 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fyolov4-a-machine-learning-model-to-detect-the-position-and-type-of-an-object-4f108ed0507b) [JP](https:\u002F\u002Ftech.ailia.ai\u002Fyolov4-%E7%89%A9%E4%BD%93%E3%82%92%E6%A4%9C%E5%87%BA%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-480f0a635317) |\n| [\u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_043e52156f3a.png\" width=128px>](object_detection\u002Fyolov4-tiny\u002F) | [yolov4-tiny](\u002Fobject_detection\u002Fyolov4-tiny\u002F) | [Pytorch-YOLOv4](https:\u002F\u002Fgithub.com\u002FTianxiaomo\u002Fpytorch-YOLOv4) | Pytorch | 1.2.5 and later | Apr 2020 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_1f8e737995cf.png\" width=128px>](object_detection\u002Fyolov5\u002F) | [yolov5](\u002Fobject_detection\u002Fyolov5\u002F) | [yolov5](https:\u002F\u002Fgithub.com\u002Fultralytics\u002Fyolov5) | Pytorch | 1.2.5 and later | May 2020 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fyolov5-the-latest-model-for-object-detection-b13320ec516b) [JP](https:\u002F\u002Ftech.ailia.ai\u002Fyolov5-%E7%89%A9%E4%BD%93%E6%A4%9C%E5%87%BA%E3%81%AE%E6%9C%80%E6%96%B0%E3%83%A2%E3%83%87%E3%83%AB-5b7316d1e54d) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_7bfe56df7a3a.jpg\" width=128px>](object_detection\u002Fpoly_yolo\u002F) | [poly_yolo](\u002Fobject_detection\u002Fpoly_yolo\u002F) | [Poly YOLO](https:\u002F\u002Fgitlab.com\u002Firafm-ai\u002Fpoly-yolo\u002F) | Keras | 1.2.6 and later | May 2020 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_6e400ae42b82.jpg\" width=128px>](object_detection\u002Fnanodet\u002F) | [nanodet](\u002Fobject_detection\u002Fnanodet\u002F) | [NanoDet](https:\u002F\u002Fgithub.com\u002FRangiLyu\u002Fnanodet) | Pytorch | 1.2.6 and later | Nov 2020 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_d47fc6d37cff.jpg\" width=128px>](object_detection\u002Fyolor\u002F) | [yolor](\u002Fobject_detection\u002Fyolor\u002F) | [yolor](https:\u002F\u002Fgithub.com\u002FWongKinYiu\u002Fyolor\u002Ftree\u002Fpaper) | Pytorch | 1.2.5 and later | May 
2021 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_c8cd491ee18d.jpg\" width=128px>](object_detection\u002Fyolox\u002F) | [yolox](\u002Fobject_detection\u002Fyolox\u002F) | [YOLOX](https:\u002F\u002Fgithub.com\u002FMegvii-BaseDetection\u002FYOLOX) | Pytorch | 1.2.6 and later | Jul 2021 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fyolox-object-detection-model-exceeding-yolov5-d6cea6d3c4bc) [JP](https:\u002F\u002Ftech.ailia.ai\u002Fyolox-yolov5%E3%82%92%E8%B6%85%E3%81%88%E3%82%8B%E7%89%A9%E4%BD%93%E6%A4%9C%E5%87%BA%E3%83%A2%E3%83%87%E3%83%AB-e9706e15fef2) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_49542d46f386.png\" width=128px>](object_detection\u002Fpicodet\u002F) | [picodet](\u002Fobject_detection\u002Fpicodet\u002F) | [PP-PicoDet](https:\u002F\u002Fgithub.com\u002FBo396543018\u002FPicodet_Pytorch) | Pytorch | 1.2.10 and later | Nov 2021 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_c7db8c770c36.jpg\" width=128px>](object_detection\u002Fyolox-ti-lite\u002F) | [yolox-ti-lite](\u002Fobject_detection\u002Fyolox-ti-lite\u002F) | [edgeai-yolox](https:\u002F\u002Fgithub.com\u002FTexasInstruments\u002Fedgeai-yolox) | Pytorch | 1.2.9 and later | Dec 2021 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_9b711cff068d.jpg\" width=128px>](object_detection\u002Fyolov7\u002F) | [yolov7](\u002Fobject_detection\u002Fyolov7\u002F) | [YOLOv7](https:\u002F\u002Fgithub.com\u002FWongKinYiu\u002Fyolov7) | Pytorch | 1.2.7 and later | Jul 2022 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_3fef00f63137.png\" width=128px>](object_detection\u002Ffastest-det\u002F) | [fastest-det](\u002Fobject_detection\u002Ffastest-det\u002F) | 
[FastestDet](https:\u002F\u002Fgithub.com\u002Fdog-qiuqiu\u002FFastestDet) | Pytorch | 1.2.5 and later | Jul 2022 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_f0620e4908be.jpg\" width=128px>](object_detection\u002Fyolov\u002F) | [yolov](\u002Fobject_detection\u002Fyolov\u002F) | [YOLOV](https:\u002F\u002Fgithub.com\u002FYuHengsss\u002FYOLOV) | Pytorch | 1.2.10 and later | Aug 2022 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_ad7e7dd795e0.jpg\" width=128px>](object_detection\u002Fyolov6\u002F) | [yolov6](\u002Fobject_detection\u002Fyolov6\u002F) | [YOLOV6](https:\u002F\u002Fgithub.com\u002Fmeituan\u002FYOLOv6) | Pytorch | 1.2.10 and later | Sep 2022 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_0034020d6795.jpg\" width=128px>](object_detection\u002Fdamo_yolo\u002F) | [damo_yolo](\u002Fobject_detection\u002Fdamo_yolo\u002F) | [DAMO-YOLO](https:\u002F\u002Fgithub.com\u002Ftinyvision\u002FDAMO-YOLO) | Pytorch | 1.2.9 and later | Nov 2022 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_7bfe941ad0e2.png\" width=128px>](object_detection\u002Fyolov8\u002F) | [yolov8](\u002Fobject_detection\u002Fyolov8\u002F) | [YOLOv8](https:\u002F\u002Fgithub.com\u002Fultralytics\u002Fultralytics) | Pytorch | 1.2.14.1 and later | Jan 2023 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_20920880e146.jpg\" width=128px>](object_detection\u002Fyolox_body_head_hand_face\u002F) | [yolox_body_head_hand_face](\u002Fobject_detection\u002Fyolox_body_head_hand_face\u002F) | [YOLOX-Body-Head-Hand-Face](https:\u002F\u002Fgithub.com\u002FPINTO0309\u002FPINTO_model_zoo\u002Ftree\u002Fmain\u002F434_YOLOX-Body-Head-Hand-Face) | Pytorch | 1.2.15 and later | Jan 2024 | |\n| [\u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_26eccdd970c1.png\" width=128px>](object_detection\u002Fyolov9\u002F) | [yolov9](\u002Fobject_detection\u002Fyolov9\u002F) | [YOLOv9](https:\u002F\u002Fgithub.com\u002FWongKinYiu\u002Fyolov9) | Pytorch | 1.2.10 and later | Feb 2024 |  |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_0b963b22c127.png\" width=128px>](object_detection\u002Fyolov10\u002F) | [yolov10](\u002Fobject_detection\u002Fyolov10\u002F) | [YOLOv10](https:\u002F\u002Fgithub.com\u002FTHU-MIG\u002Fyolov10) | Pytorch | 1.2.11 and later | May 2024 |  |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_479d57d1bdc6.png\" width=128px>](object_detection\u002Fyolov11\u002F) | [yolov11](\u002Fobject_detection\u002Fyolov11\u002F) | [YOLOv11](https:\u002F\u002Fgithub.com\u002Fultralytics\u002Fultralytics) | Pytorch | 1.2.14 and later | Sep 2024 |  |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_e9e1265b9f2e.png\" width=128px>](object_detection\u002Fyolov12\u002F) | [yolov12](\u002Fobject_detection\u002Fyolov12\u002F) | [YOLOv12](https:\u002F\u002Fgithub.com\u002Fsunsmarterjie\u002Fyolov12) | Pytorch | 1.2.14 and later | Feb 2025 |  |\n\n### Transformer\n\n| | Model | Reference | Exported From | Supported Ailia Version | Date | Blog |\n|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_36fce29307ad.png\" width=128px>](object_detection\u002Fglip\u002F) | [glip](\u002Fobject_detection\u002Fglip\u002F) | [GLIP](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FGLIP) | Pytorch | 1.2.13 and later | Dec 2021 | |\n| [\u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_9079e824ff3b.jpg\" width=128px>](object_detection\u002Fdab-detr\u002F) | [dab-detr](\u002Fobject_detection\u002Fdab-detr\u002F) | [DAB-DETR](https:\u002F\u002Fgithub.com\u002FIDEA-opensource\u002FDAB-DETR) | Pytorch | 1.2.12 and later | Jan 2022 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_12c7aad7c8e9.png\" width=128px>](object_detection\u002Fdetic\u002F) | [detic](\u002Fobject_detection\u002Fdetic\u002F) | [Detecting Twenty-thousand Classes using Image-level Supervision](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002FDetic) | Pytorch | 1.2.10 and later | Jan 2022 | [EN](https:\u002F\u002Fmedium.com\u002Fp\u002F49cba412b7d4) [JP](https:\u002F\u002Ftech.ailia.ai\u002Fdetic-21k%E3%82%AF%E3%83%A9%E3%82%B9%E3%82%92%E9%AB%98%E7%B2%BE%E5%BA%A6%E3%81%AB%E3%82%BB%E3%82%B0%E3%83%A1%E3%83%B3%E3%83%86%E3%83%BC%E3%82%B7%E3%83%A7%E3%83%B3%E3%81%A7%E3%81%8D%E3%82%8B%E7%89%A9%E4%BD%93%E6%A4%9C%E5%87%BA%E3%83%A2%E3%83%87%E3%83%AB-1b8f777ee89a) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_0a7e0f201171.png\" width=128px>](object_detection\u002Fgroundingdino\u002F) | [groundingdino](\u002Fobject_detection\u002Fgroundingdino\u002F) | [Grounding DINO](https:\u002F\u002Fgithub.com\u002FIDEA-Research\u002FGroundingDINO\u002Ftree\u002Fmain) | Pytorch | 1.2.16 and later | Mar 2023 | [JP](https:\u002F\u002Ftech.ailia.ai\u002Fgrounding-dino-%E4%BB%BB%E6%84%8F%E3%81%AE%E7%89%A9%E4%BD%93%E3%82%92%E6%A4%9C%E5%87%BA%E3%81%A7%E3%81%8D%E3%82%8B%E7%89%A9%E4%BD%93%E6%A4%9C%E5%87%BA%E3%83%A2%E3%83%87%E3%83%AB-3cc87db64f0c) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_e9a200d56399.png\" width=128px>](object_detection\u002Frt-detr-v2\u002F) | [rt-detr-v2](\u002Fobject_detection\u002Frt-detr-v2\u002F) | 
[RT-DETR](https:\u002F\u002Fgithub.com\u002Flyuwenyu\u002FRT-DETR) | Pytorch | 1.2.13 and later | Jul 2024 |[JP](https:\u002F\u002Ftech.ailia.ai\u002Frt-detr-convolution%E3%81%A8transformer%E3%81%AE%E3%83%8F%E3%82%A4%E3%83%96%E3%83%AA%E3%83%83%E3%83%89%E3%81%AA%E7%89%A9%E4%BD%93%E6%A4%9C%E5%87%BA%E3%83%A2%E3%83%87%E3%83%AB-7b73fd6a8de9) |\n\n### Specific target\n\n| | Model | Reference | Exported From | Supported Ailia Version | Date | Blog |\n|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_74b7a26b17a7.png\" width=128px>](object_detection\u002Ftraffic-sign-detection\u002F) | [traffic-sign-detection](\u002Fobject_detection\u002Ftraffic-sign-detection\u002F) | [Traffic Sign Detection](https:\u002F\u002Fgithub.com\u002Faarcosg\u002Ftraffic-sign-detection) | Tensorflow | 1.2.10 and later | Aug 2018 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Ftrafficsigndetection-machine-learning-model-to-detect-road-signs-76d7c175ee01) [JP](https:\u002F\u002Ftech.ailia.ai\u002Ftrafficsigndetection-%E9%81%93%E8%B7%AF%E6%A8%99%E8%AD%98%E3%82%92%E6%A4%9C%E5%87%BA%E3%81%A7%E3%81%8D%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-d1dc1bd5ff5e) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_6d9f0ebb8bd8.png\" width=128px>](object_detection\u002Fsku110k-densedet\u002F) | [sku110k-densedet](\u002Fobject_detection\u002Fsku110k-densedet\u002F) | [SKU110K-DenseDet](https:\u002F\u002Fgithub.com\u002FMedia-Smart\u002FSKU110K-DenseDet) | Pytorch | 1.2.9 and later | Apr 2019 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fsku110k-densedet-a-machine-learning-model-that-can-detect-products-in-a-store-baf9d98cb441) 
[JP](https:\u002F\u002Ftech.ailia.ai\u002Fsku110k-densedet-%E5%BA%97%E8%88%97%E5%86%85%E3%81%AE%E5%95%86%E5%93%81%E3%82%92%E6%A4%9C%E5%87%BA%E3%81%A7%E3%81%8D%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-b775184b5e46) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_ed2bffabb44c.png\" width=128px>](object_detection\u002Ffootandball\u002F) | [footandball](\u002Fobject_detection\u002Ffootandball\u002F) | [FootAndBall: Integrated player and ball detector](https:\u002F\u002Fgithub.com\u002Fjac99\u002FFootAndBall) | Pytorch | 1.2.0 and later | Dec 2019 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_4059fbd388c5.jpg\" width=128px>](object_detection\u002Fqrcode_wechatqrcode\u002F) | [qrcode_wechatqrcode](\u002Fobject_detection\u002Fqrcode_wechatqrcode\u002F) | [qrcode_wechatqrcode](https:\u002F\u002Fgithub.com\u002Fopencv\u002Fopencv_zoo\u002Ftree\u002F4fb591053ba1201c07c68929cc324787d5afaa6c\u002Fmodels\u002Fqrcode_wechatqrcode) | Caffe | 1.2.15 and later | Mar 2021 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_c2fb3be9c829.png\" width=128px>](object_detection\u002Fmobile_object_localizer\u002F) | [mobile_object_localizer](\u002Fobject_detection\u002Fmobile_object_localizer\u002F) | [mobile_object_localizer_v1](https:\u002F\u002Ftfhub.dev\u002Fgoogle\u002Fobject_detection\u002Fmobile_object_localizer_v1\u002F1) | TensorFlow Hub | 1.2.6 and later | Jun 2021 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fmobileobjectlocalizer-class-agnostic-mobile-object-detector-b740c0ceb16c) [JP](https:\u002F\u002Ftech.ailia.ai\u002Fmobileobjectlocalizer-%E4%BB%BB%E6%84%8F%E3%81%AE%E7%89%A9%E4%BD%93%E3%82%92%E6%A4%9C%E5%87%BA%E3%81%A7%E3%81%8D%E3%82%8B%E7%89%A9%E4%BD%93%E6%A4%9C%E5%87%BA%E3%83%A2%E3%83%87%E3%83%AB-595b54cfab26) |\n| [\u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_07a6eca27aa0.jpg\" width=128px>](object_detection\u002Flayout_parsing\u002F) | [layout_parsing](\u002Fobject_detection\u002Flayout_parsing\u002F) | [unstructured-inference](https:\u002F\u002Fgithub.com\u002FUnstructured-IO\u002Funstructured-inference\u002Ftree\u002Fmain) | Pytorch | 1.2.9 and later | Dec 2022 | |\n\n## Object detection 3d\n\n| | Model | Reference | Exported From | Supported Ailia Version | Date | Blog |\n|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_04e1134ff8fe.png\" width=128px>](object_detection_3d\u002F3d_bbox\u002F) | [3d_bbox](\u002Fobject_detection_3d\u002F3d_bbox\u002F) | [3D Bounding Box Estimation Using Deep Learning and Geometry](https:\u002F\u002Fgithub.com\u002Fskhadem\u002F3D-BoundingBox) | Pytorch | 1.2.6 and later | Dec 2016 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_cf8fbfb96413.png\" width=128px>](object_detection_3d\u002Fd4lcn\u002F) | [d4lcn](\u002Fobject_detection_3d\u002Fd4lcn\u002F) | [D4LCN](https:\u002F\u002Fgithub.com\u002Fdingmyu\u002FD4LCN) | Pytorch | 1.2.9 and later | Dec 2019 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_ad52ef9f2a6e.png\" width=128px>](object_detection_3d\u002Fegonet\u002F) | [egonet](\u002Fobject_detection_3d\u002Fegonet\u002F) | [EgoNet](https:\u002F\u002Fgithub.com\u002FNicholasli1995\u002FEgoNet) | Pytorch | 1.2.9 and later | Nov 2020 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_e7432e192cd2.png\" width=128px>](object_detection_3d\u002Fmediapipe_objectron\u002F) | [mediapipe_objectron](\u002Fobject_detection_3d\u002Fmediapipe_objectron\u002F) | [MediaPipe 
Objectron](https:\u002F\u002Fgithub.com\u002Fgoogle\u002Fmediapipe) | TensorFlow Lite | 1.2.5 and later | Dec 2020 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_c4c896c15b14.png\" width=128px>](object_detection_3d\u002F3d-object-detection.pytorch\u002F) | [3d-object-detection.pytorch](\u002Fobject_detection_3d\u002F3d-object-detection.pytorch\u002F) | [3d-object-detection.pytorch](https:\u002F\u002Fgithub.com\u002Fsovrasov\u002F3d-object-detection.pytorch) | Pytorch | 1.2.8 and later | Feb 2021 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002F3dobjectdetectionpytorch-3d-object-detection-model-3e1da4c53f) [JP](https:\u002F\u002Ftech.ailia.ai\u002F3dobjectdetectionpytorch-3d%E3%81%AE%E7%89%A9%E4%BD%93%E6%A4%9C%E5%87%BA%E3%83%A2%E3%83%87%E3%83%AB-8df18b8eb5d1) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_8d432437e89b.png\" width=128px>](object_detection_3d\u002Fdid_m3d\u002F) |[did_m3d](\u002Fobject_detection_3d\u002Fdid_m3d\u002F) | [DID M3D](https:\u002F\u002Fgithub.com\u002FSPengLiang\u002FDID-M3D) | Pytorch | 1.2.11 and later | Jul 2022 | |\n\n## Object tracking\n\n| | Model | Reference | Exported From | Supported Ailia Version | Date | Blog |\n|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_c976f6892738.gif\" width=128px>](object_tracking\u002Fdeepsort\u002F) | [deepsort](\u002Fobject_tracking\u002Fdeepsort\u002F) | [Deep Sort with PyTorch](https:\u002F\u002Fgithub.com\u002FZQPei\u002Fdeep_sort_pytorch) | Pytorch | 1.2.3 and later | Mar 2017 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fdeepsort-a-machine-learning-model-for-tracking-people-1170743b5984) 
[JP](https:\u002F\u002Ftech.ailia.ai\u002Fdeepsort-%E4%BA%BA%E7%89%A9%E3%81%AE%E3%83%88%E3%83%A9%E3%83%83%E3%82%AD%E3%83%B3%E3%82%B0%E3%82%92%E8%A1%8C%E3%81%86%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-e8cb7410457c) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_a2341bcd2fee.png\" width=128px>](object_tracking\u002Fperson_reid_baseline_pytorch\u002F) | [person_reid_baseline_pytorch](\u002Fobject_tracking\u002Fperson_reid_baseline_pytorch\u002F) | [UTS-Person-reID-Practical](https:\u002F\u002Fgithub.com\u002Flayumi\u002FPerson_reID_baseline_pytorch) | Pytorch | 1.2.6 and later | Mar 2019 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_7d355f3eaf14.png\" width=128px>](object_tracking\u002Fabd_net\u002F) | [abd_net](\u002Fobject_tracking\u002Fabd_net\u002F) | [Attentive but Diverse Person Re-Identification](https:\u002F\u002Fgithub.com\u002FVITA-Group\u002FABD-Net) | Pytorch | 1.2.7 and later | Aug 2019 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_cc7c6edeced4.jpg\" width=128px>](object_tracking\u002Fdeepsort_vehicle\u002F) | [deepsort_vehicle](\u002Fobject_tracking\u002Fdeepsort_vehicle\u002F) | [Multi-Camera Live Object Tracking](https:\u002F\u002Fgithub.com\u002FLeonLok\u002FMulti-Camera-Live-Object-Tracking) | Pytorch | 1.2.9 and later | May 2020 |  |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_28d39d764e2d.png\" width=128px>](object_tracking\u002Fqd-3dt\u002F) | [qd-3dt](\u002Fobject_tracking\u002Fqd-3dt\u002F) | [Monocular Quasi-Dense 3D Object Tracking](https:\u002F\u002Fgithub.com\u002FSysCV\u002Fqd-3dt) | Pytorch | 1.2.11 and later | Mar 2021 |　|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_60ad6591ee3c.png\" 
width=128px>](object_tracking\u002Fcentroids-reid\u002F) | [centroids-reid](\u002Fobject_tracking\u002Fcentroids-reid\u002F) | [On the Unreasonable Effectiveness of Centroids in Image Retrieval](https:\u002F\u002Fgithub.com\u002Fmikwieczorek\u002Fcentroids-reidh) | Pytorch | 1.2.9 and later | Apr 2021 |　|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_f165b0304267.png\" width=128px>](object_tracking\u002Fsiam-mot\u002F) | [siam-mot](\u002Fobject_tracking\u002Fsiam-mot\u002F) | [SiamMOT](https:\u002F\u002Fgithub.com\u002Famazon-research\u002Fsiam-mot) | Pytorch | 1.2.9 and later | May 2021 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_6b760d6d86bd.png\" width=128px>](object_tracking\u002Fbytetrack\u002F) | [bytetrack](\u002Fobject_tracking\u002Fbytetrack\u002F) | [ByteTrack](https:\u002F\u002Fgithub.com\u002Fifzhang\u002FByteTrack) | Pytorch | 1.2.5 and later | Oct 2021 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fbytetrack-tracking-model-that-also-considers-low-accuracy-bounding-boxes-17f5ed70e00c) [JP](https:\u002F\u002Ftech.ailia.ai\u002Fbytetrack-%E4%BD%8E%E3%81%84%E7%A2%BA%E5%BA%A6%E3%81%AEboundingbox%E3%82%82%E8%80%83%E6%85%AE%E3%81%99%E3%82%8B%E3%83%88%E3%83%A9%E3%83%83%E3%82%AD%E3%83%B3%E3%82%B0%E3%83%A2%E3%83%87%E3%83%AB-244b994d5afb)　|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_1798b85d116f.png\" width=128px>](object_tracking\u002Fstrong_sort\u002F) | [strong_sort](\u002Fobject_tracking\u002Fstrong_sort\u002F) | [StrongSORT](https:\u002F\u002Fgithub.com\u002FdyhBUPT\u002FStrongSORT) | Pytorch | 1.2.15 and later | Feb 2022 |　|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_637e2ebf59ab.jpg\" width=128px>](object_tracking\u002Fsamurai\u002F) | [samurai](\u002Fobject_tracking\u002Fsamurai\u002F) | [SAMURAI: Adapting 
Segment Anything Model for Zero-Shot Visual Tracking with Motion-Aware Memory](https:\u002F\u002Fgithub.com\u002Fyangchris11\u002Fsamurai) | Pytorch | 1.6.1 and later | Nov 2024 |  |\n\n## Optical Flow Estimation\n\n| | Model | Reference | Exported From | Supported Ailia Version | Date | Blog |\n|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_4c0af4ba532f.png\" width=128px>](optical_flow_estimation\u002Fraft\u002F) | [raft](\u002Foptical_flow_estimation\u002Fraft\u002F) | [RAFT: Recurrent All Pairs Field Transforms for Optical Flow](https:\u002F\u002Fgithub.com\u002Fprinceton-vl\u002FRAFT) | Pytorch | 1.2.6 and later | Mar 2020 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fraft-a-machine-learning-model-for-estimating-optical-flow-6ab6d077e178) [JP](https:\u002F\u002Ftech.ailia.ai\u002Fraft-optical-flow%E3%82%92%E6%8E%A8%E5%AE%9A%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-bf898965de05)　|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_65c4e455301a.gif\" width=128px>](optical_flow_estimation\u002Fcotracker3\u002F) | [cotracker3](\u002Foptical_flow_estimation\u002Fcotracker3\u002F) | [ CoTracker3: Simpler and Better Point Tracking by Pseudo-Labelling Real Videos](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fco-tracker) | Pytorch | 1.6.1 and later | Oct 2024 |  |\n## Point segmentation\n\n| | Model | Reference | Exported From | Supported Ailia Version | Date | Blog |\n|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_9cec894c5de3.png\" width=128px>](point_segmentation\u002Fpointnet_pytorch\u002F) | 
[pointnet_pytorch](\u002Fpoint_segmentation\u002Fpointnet_pytorch\u002F) | [PointNet.pytorch](https:\u002F\u002Fgithub.com\u002Ffxia22\u002Fpointnet.pytorch) | Pytorch | 1.2.6 and later | Dec 2016 | |\n\n## Pose estimation\n\n| | Model | Reference | Exported From | Supported Ailia Version | Date | Blog |\n|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_ceacc19b9bab.png\" width=128px>](pose_estimation\u002Fopenpose\u002F) |[openpose](\u002Fpose_estimation\u002Fopenpose\u002F) | [Code repo for realtime multi-person pose estimation in CVPR'17 (Oral)](https:\u002F\u002Fgithub.com\u002FZheC\u002FRealtime_Multi-Person_Pose_Estimation) | Caffe | 1.2.1 and later | Nov 2016 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_f7b57d156b42.png\" width=128px>](pose_estimation\u002Fposenet\u002F) |[posenet](\u002Fpose_estimation\u002Fposenet\u002F) | [PoseNet Pytorch](https:\u002F\u002Fgithub.com\u002Frwightman\u002Fposenet-pytorch) | Pytorch | 1.2.10 and later | Jan 2017 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_1626dc168e3e.png\" width=128px>](pose_estimation\u002Fpose_resnet\u002F) |[pose_resnet](\u002Fpose_estimation\u002Fpose_resnet\u002F) | [Simple Baselines for Human Pose Estimation and Tracking](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002Fhuman-pose-estimation.pytorch) | Pytorch | 1.2.1 and later | Apr 2018 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fposeresnet-a-top-down-machine-learning-model-for-skeletal-detection-9454f391ae4d) 
[JP](https:\u002F\u002Ftech.ailia.ai\u002Fposeresnet-%E3%83%88%E3%83%83%E3%83%97%E3%83%80%E3%82%A6%E3%83%B3%E3%81%A7%E9%AA%A8%E6%A0%BC%E6%A4%9C%E5%87%BA%E3%82%92%E8%A1%8C%E3%81%86%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-9e0d20396d1e) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_1974f0ff85f6.png\" width=128px>](pose_estimation\u002Flightweight-human-pose-estimation\u002F)  |[lightweight-human-pose-estimation](\u002Fpose_estimation\u002Flightweight-human-pose-estimation\u002F) | [Fast and accurate human pose estimation in PyTorch.\u003Cbr\u002F>Contains implementation of \u003Cbr\u002F>\"Real-time 2D Multi-Person Pose Estimation on CPU: Lightweight OpenPose\" paper.](https:\u002F\u002Fgithub.com\u002FDaniil-Osokin\u002Flightweight-human-pose-estimation.pytorch) | Pytorch | 1.2.1 and later | Nov 2018 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Flightweighthumanpose-a-machine-learning-model-for-fast-multi-person-skeleton-detection-631c042bed50) [JP](https:\u002F\u002Ftech.ailia.ai\u002Flightweighthumanpose-%E9%AB%98%E9%80%9F%E3%81%AB%E8%A4%87%E6%95%B0%E4%BA%BA%E3%81%AE%E9%AA%A8%E6%A0%BC%E3%82%92%E6%A4%9C%E5%87%BA%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-bc34d420e6e2) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_ec06240b4c62.png\" width=128px>](pose_estimation\u002Fanimalpose\u002F) |[animalpose](\u002Fpose_estimation\u002Fanimalpose\u002F) | [MMPose - 2D animal pose estimation](https:\u002F\u002Fgithub.com\u002Fopen-mmlab\u002Fmmpose) | Pytorch | 1.2.7 and later | Aug 2019 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fanimalpose-pose-esimation-for-animals-700603e0dbae) [JP](https:\u002F\u002Ftech.ailia.ai\u002Fanimalpose-%E5%8B%95%E7%89%A9%E3%81%AE%E9%AA%A8%E6%A0%BC%E6%A4%9C%E5%87%BA%E3%83%A2%E3%83%87%E3%83%AB-f7f667c0e69d) |\n| [\u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_4c776f30d8bb.png\" width=128px>](pose_estimation\u002Fefficientpose\u002F) |[efficientpose](\u002Fpose_estimation\u002Fefficientpose\u002F) | [Code repo for EfficientPose](https:\u002F\u002Fgithub.com\u002Fdaniegr\u002FEfficientPose) | TensorFlow | 1.2.6 and later | Apr 2020 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_94c38455f703.png\" width=128px>](pose_estimation\u002Fblazepose\u002F) |[blazepose](\u002Fpose_estimation\u002Fblazepose\u002F) | [MediaPipePyTorch](https:\u002F\u002Fgithub.com\u002Fzmurez\u002FMediaPipePyTorch) | Pytorch | 1.2.5 and later | Jun 2020 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_cc1766399ce0.png\" width=128px>](pose_estimation\u002Fmediapipe_holistic\u002F) |[mediapipe_holistic](\u002Fpose_estimation\u002Fmediapipe_holistic\u002F) | [MediaPipe Holistic](https:\u002F\u002Fgoogle.github.io\u002Fmediapipe\u002Fsolutions\u002Fholistic.html) | TensorFlow | 1.2.9 and later | Dec 2020 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_65dd24c1b393.png\" width=128px>](pose_estimation\u002Fmovenet\u002F) |[movenet](\u002Fpose_estimation\u002Fmovenet\u002F) | [Code repo for movenet](https:\u002F\u002Fwww.tensorflow.org\u002Fhub\u002Ftutorials\u002Fmovenet) | TensorFlow | 1.2.8 and later | May 2021 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fmovenet-pose-estimation-for-video-with-intense-motion-2b92f53f3c8) [JP](https:\u002F\u002Ftech.ailia.ai\u002Fmovenet-%E5%8B%95%E3%81%8D%E3%81%AE%E6%BF%80%E3%81%97%E3%81%84%E5%8B%95%E7%94%BB%E5%90%91%E3%81%91%E9%AA%A8%E6%A0%BC%E6%A4%9C%E5%87%BA%E3%83%A2%E3%83%87%E3%83%AB-d26d9e06126c)|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_3b0d09df782a.png\" 
width=128px>](pose_estimation\u002Fap-10k\u002F) |[ap-10k](\u002Fpose_estimation\u002Fap-10k\u002F) | [AP-10K](https:\u002F\u002Fgithub.com\u002FAlexTheBad\u002FAP-10K)  | Pytorch | 1.2.4 and later | Aug 2021 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_88f316bda885.png\" width=128px>](pose_estimation\u002Fe2pose\u002F) |[e2pose](\u002Fpose_estimation\u002Fe2pose\u002F) | [E2Pose](https:\u002F\u002Fgithub.com\u002FAISIN-TRC\u002FE2Pose)  | Tensorflow | 1.2.5 and later | Oct 2022 | |\n\n## Pose estimation 3d\n\n| | Model | Reference | Exported From | Supported Ailia Version | Date | Blog |\n|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_95b3c01e4b57.png\" width=128px>](pose_estimation_3d\u002Fpose-hg-3d\u002F) |[pose-hg-3d](\u002Fpose_estimation_3d\u002Fpose-hg-3d\u002F) | [Towards 3D Human Pose Estimation in the Wild: a Weakly-supervised Approach](https:\u002F\u002Fgithub.com\u002Fxingyizhou\u002Fpytorch-pose-hg-3d) | Pytorch | 1.2.6 and later | Apr 2017 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_1974f0ff85f6.png\" width=128px>](pose_estimation_3d\u002F3d-pose-baseline\u002F) |[3d-pose-baseline](\u002Fpose_estimation_3d\u002F3d-pose-baseline\u002F) | [A simple baseline for 3d human pose estimation in tensorflow.\u003Cbr\u002F>Presented at ICCV 17.](https:\u002F\u002Fgithub.com\u002Funa-dinosauria\u002F3d-pose-baseline) | TensorFlow | 1.2.3 and later | May 2017 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_a96c926ee23e.png\" width=128px>](pose_estimation_3d\u002Flightweight-human-pose-estimation-3d\u002F) |[lightweight-human-pose-estimation-3d](\u002Fpose_estimation_3d\u002Flightweight-human-pose-estimation-3d\u002F) | 
[Real-time 3D multi-person pose estimation demo in PyTorch.\u003Cbr\u002F>OpenVINO backend can be used for fast inference on CPU.](https:\u002F\u002Fgithub.com\u002FDaniil-Osokin\u002Flightweight-human-pose-estimation-3d-demo.pytorch) | Pytorch | 1.2.1 and later | Dec 2017 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_9793b2b7c8a2.jpg\" width=128px>](pose_estimation_3d\u002F3dmppe_posenet\u002F) |[3dmppe_posenet](\u002Fpose_estimation_3d\u002F3dmppe_posenet\u002F) | [PoseNet of \"Camera Distance-aware Top-down Approach for 3D Multi-person Pose Estimation from a Single RGB Image\"](https:\u002F\u002Fgithub.com\u002Fmks0601\u002F3DMPPE_POSENET_RELEASE) | Pytorch | 1.2.6 and later | Jul 2019 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_347c4d3bda77.png\" width=128px>](pose_estimation_3d\u002Fgast\u002F) |[gast](\u002Fpose_estimation_3d\u002Fgast\u002F) | [A Graph Attention Spatio-temporal Convolutional Networks for 3D Human Pose Estimation in Video (GAST-Net)](https:\u002F\u002Fgithub.com\u002Ffabro66\u002FGAST-Net-3DPoseEstimation) | Pytorch | 1.2.7 and later | Mar 2020 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fgast-a-machine-learning-model-that-predicts-a-3d-skeleton-from-a-2d-skeleton-44449d1ff78d) [JP](https:\u002F\u002Ftech.ailia.ai\u002Fgast-2d%E3%81%AE%E9%AA%A8%E6%A0%BC%E3%81%8B%E3%82%893d%E3%81%AE%E9%AA%A8%E6%A0%BC%E3%82%92%E4%BA%88%E6%B8%AC%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-6c70fc4b3b3a) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_05f7b5d96eb4.png\" width=128px>](pose_estimation_3d\u002Fblazepose-fullbody\u002F) |[blazepose-fullbody](\u002Fpose_estimation_3d\u002Fblazepose-fullbody\u002F) | [MediaPipe](https:\u002F\u002Fgoogle.github.io\u002Fmediapipe\u002Fsolutions\u002Fmodels.html#pose) | TensorFlow Lite | 1.2.5 and 
later | Jun 2020 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fblazepose-a-3d-pose-estimation-model-d8689d06b7c4) [JP](https:\u002F\u002Ftech.ailia.ai\u002Fblazepose-3%E6%AC%A1%E5%85%83%E5%BA%A7%E6%A8%99%E3%82%92%E5%8F%96%E5%BE%97%E5%8F%AF%E8%83%BD%E3%81%AA%E9%AA%A8%E6%A0%BC%E6%A4%9C%E5%87%BA%E3%83%A2%E3%83%87%E3%83%AB-a6c588013009) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_1afd73439d11.png\" width=128px>](pose_estimation_3d\u002Fmediapipe_pose_world_landmarks\u002F) |[mediapipe_pose_world_landmarks](\u002Fpose_estimation_3d\u002Fmediapipe_pose_world_landmarks\u002F) | [MediaPipe Pose real-world 3D coordinates](https:\u002F\u002Fgoogle.github.io\u002Fmediapipe\u002Fsolutions\u002Fpose.html#pose_world_landmarks) | TensorFlow Lite | 1.2.10 and later | Jun 2022 | |\n\n## Road detection\n\n| | Model | Reference | Exported From | Supported Ailia Version | Date | Blog |\n|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_e4ef8b899b64.png\" width=128px>](road_detection\u002Froad-segmentation-adas\u002F) | [road-segmentation-adas](\u002Froad_detection\u002Froad-segmentation-adas\u002F) | [road-segmentation-adas-0001](https:\u002F\u002Fgithub.com\u002Fopenvinotoolkit\u002Fopen_model_zoo\u002Ftree\u002Fmaster\u002Fmodels\u002Fintel\u002Froad-segmentation-adas-0001) | OpenVINO | 1.2.5 and later | Sep 2018 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_9d91f71e6d98.jpg\" width=128px>](road_detection\u002Fcodes-for-lane-detection\u002F) | [codes-for-lane-detection](\u002Froad_detection\u002Fcodes-for-lane-detection\u002F) | [Codes-for-Lane-Detection](https:\u002F\u002Fgithub.com\u002Fcardwing\u002FCodes-for-Lane-Detection) | Pytorch | 1.2.6 and later | Aug 2019 | 
[EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fcodesforlanedetection-a-machine-learning-model-for-detecting-white-lines-on-roads-7bee3aad818d) [JP](https:\u002F\u002Ftech.ailia.ai\u002Fcodesforlanedetection-%E9%81%93%E8%B7%AF%E3%81%AE%E7%99%BD%E7%B7%9A%E3%82%92%E6%A4%9C%E5%87%BA%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-1ffe7c6ccf1e) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_7ab1e03ec7c8.jpg\" width=128px>](road_detection\u002Fultra-fast-lane-detection\u002F) | [ultra-fast-lane-detection](\u002Froad_detection\u002Fultra-fast-lane-detection\u002F) | [Ultra-Fast-Lane-Detection](https:\u002F\u002Fgithub.com\u002Fcfzd\u002FUltra-Fast-Lane-Detection) | Pytorch | 1.2.6 and later | Apr 2020 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_1ff13165abeb.jpg\" width=128px>](road_detection\u002Fpolylanenet\u002F) | [polylanenet](\u002Froad_detection\u002Fpolylanenet\u002F) | [PolyLaneNet](https:\u002F\u002Fgithub.com\u002Flucastabelini\u002FPolyLaneNet) | Pytorch | 1.2.9 and later | Apr 2020 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_a08ce74fee69.jpg\" width=128px>](road_detection\u002Froneld\u002F) | [roneld](\u002Froad_detection\u002Froneld\u002F) | [RONELD-Lane-Detection](https:\u002F\u002Fgithub.com\u002Fczming\u002FRONELD-Lane-Detection) | Pytorch | 1.2.6 and later | Oct 2020 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_63d89f55c10a.jpg\" width=128px>](road_detection\u002Flstr\u002F) | [lstr](\u002Froad_detection\u002Flstr\u002F) | [LSTR](https:\u002F\u002Fgithub.com\u002Fliuruijin17\u002FLSTR) | Pytorch | 1.2.8 and later | Nov 2020 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_cd115fd04565.jpg\" 
width=128px>](road_detection\u002Fyolop\u002F) | [yolop](\u002Froad_detection\u002Fyolop\u002F) | [YOLOP](https:\u002F\u002Fgithub.com\u002Fhustvl\u002FYOLOP) | Pytorch | 1.2.6 and later | Aug 2021 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_d5e2694422fa.png\" width=128px>](road_detection\u002Fcdnet\u002F) | [cdnet](\u002Froad_detection\u002Fcdnet\u002F) | [CDNet](https:\u002F\u002Fgithub.com\u002Fzhangzhengde0225\u002FCDNet) | Pytorch | 1.2.5 and later | Feb 2022 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_7e43011d98e6.jpg\" width=128px>](road_detection\u002Fhybridnets\u002F) | [hybridnets](\u002Froad_detection\u002Fhybridnets\u002F) | [HybridNets](https:\u002F\u002Fgithub.com\u002Fdatvuthanh\u002FHybridNets) | Pytorch | 1.2.6 and later | Mar 2022 | |\n\n## Rotation prediction\n\n| | Model | Reference | Exported From | Supported Ailia Version | Date | Blog |\n|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_577fa52815ca.png\" width=256px>](rotation_prediction\u002Frotnet\u002F) |[rotnet](\u002Frotation_prediction\u002Frotnet) | [CNNs for predicting the rotation angle of an image to correct its orientation](https:\u002F\u002Fgithub.com\u002Fd4nst\u002FRotNet) | Keras | 1.2.1 and later | Mar 2018 | |\n\n## Style transfer\n\n| | Model | Reference | Exported From | Supported Ailia Version | Date | Blog |\n|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_f8eed7b2d044.png\" width=128px>](style_transfer\u002Fadain\u002F) | [adain](\u002Fstyle_transfer\u002Fadain\u002F) | [Arbitrary Style Transfer in Real-time with Adaptive Instance 
Normalization](https:\u002F\u002Fgithub.com\u002Fnaoto0804\u002Fpytorch-AdaIN)| Pytorch | 1.2.1 and later | Mar 2017 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fadain-a-machine-learning-model-for-style-transfer-341b242c554b) [JP](https:\u002F\u002Ftech.ailia.ai\u002Fadain-%E7%94%BB%E5%83%8F%E3%81%AE%E3%82%B9%E3%82%BF%E3%82%A4%E3%83%AB%E3%82%92%E5%A4%89%E6%8F%9B%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-2443feba832b) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_1b95d3f19f75.png\" width=128px>](style_transfer\u002Fpix2pixHD\u002F) | [pix2pixHD](\u002Fstyle_transfer\u002Fpix2pixHD\u002F) | [pix2pixHD: High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fpix2pixHD) | Pytorch | 1.2.6 and later | Nov 2017 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_cf769a8ea4af.png\" width=128px>](style_transfer\u002Fbeauty_gan\u002F) | [beauty_gan](\u002Fstyle_transfer\u002Fbeauty_gan\u002F) | [BeautyGAN](https:\u002F\u002Fgithub.com\u002Fwtjiang98\u002FBeautyGAN_pytorch) | Pytorch | 1.2.7 and later | Jul 2018 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_f122b5b2b44c.png\" width=128px>](style_transfer\u002Fpsgan\u002F) | [psgan](\u002Fstyle_transfer\u002Fpsgan\u002F) | [PSGAN: Pose and Expression Robust Spatial-Aware GAN for Customizable Makeup Transfer](https:\u002F\u002Fgithub.com\u002Fwtjiang98\u002FPSGAN)| Pytorch | 1.2.7 and later | Sep 2019 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_86cc300ac636.png\" width=128px>](style_transfer\u002Fanimeganv2\u002F) | [animeganv2](\u002Fstyle_transfer\u002Fanimeganv2\u002F) | [PyTorch Implementation of 
AnimeGANv2](https:\u002F\u002Fgithub.com\u002Fbryandlee\u002Fanimegan2-pytorch) | Pytorch | 1.2.5 and later | Nov 2020 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_44db83cc45b3.png\" width=128px>](style_transfer\u002Felegant\u002F) | [EleGANt](\u002Fstyle_transfer\u002Felegant\u002F) | [EleGANt: Exquisite and Locally Editable GAN for Makeup Transfer](https:\u002F\u002Fgithub.com\u002FChenyu-Yang-2000\u002FEleGANt) | Pytorch | 1.2.15 and later | Jul 2022 | |\n\n## Super resolution\n\n| | Model | Reference | Exported From | Supported Ailia Version | Date | Blog |\n|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_9327818e43cc.png\" width=128px>](super_resolution\u002Fsrresnet\u002F) | [srresnet](\u002Fsuper_resolution\u002Fsrresnet\u002F) | [Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network](https:\u002F\u002Fgithub.com\u002Ftwtygqyy\u002Fpytorch-SRResNet) | Pytorch | 1.2.0 and later | Sep 2016 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fsrresnet-a-machine-learning-model-to-increase-image-resolution-9efc478f2674) [JP](https:\u002F\u002Ftech.ailia.ai\u002Fsrresnet-%E7%94%BB%E5%83%8F%E3%82%92%E9%AB%98%E5%93%81%E8%B3%AA%E3%81%AB%E6%8B%A1%E5%A4%A7%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-9e35b9a90586) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_41ad4ba3ff92.png\" width=128px>](super_resolution\u002Fedsr\u002F) | [edsr](\u002Fsuper_resolution\u002Fedsr\u002F) | [Enhanced Deep Residual Networks for Single Image Super-Resolution](https:\u002F\u002Fgithub.com\u002Fsanghyun-son\u002FEDSR-PyTorch.git) | Pytorch | 1.2.6 and later | Jul 2017 | 
[EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fedsr-a-machine-learning-model-for-super-resolution-image-processing-9deaf36b24ed) [JP](https:\u002F\u002Ftech.ailia.ai\u002Fedsr-%E7%94%BB%E5%83%8F%E3%81%AE%E8%B6%85%E8%A7%A3%E5%83%8F%E5%87%A6%E7%90%86%E3%82%92%E8%A1%8C%E3%81%86%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-2842b1d244d) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_537aa774fae2.png\" width=128px>](super_resolution\u002Fhan\u002F) | [han](\u002Fsuper_resolution\u002Fhan\u002F) | [Single Image Super-Resolution via a Holistic Attention Network](https:\u002F\u002Fgithub.com\u002FwwlCape\u002FHAN) | Pytorch | 1.2.6 and later | Aug 2020 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_3dffcc4915c9.jpg\" width=128px>](super_resolution\u002Freal-esrgan\u002F) | [real-esrgan](\u002Fsuper_resolution\u002Freal-esrgan\u002F) | [Real-ESRGAN](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN) | Pytorch | 1.2.9 and later | Jul 2021 | [JP](https:\u002F\u002Ftech.ailia.ai\u002Freal-esrgan-%E3%83%87%E3%83%8E%E3%82%A4%E3%82%BA%E3%82%92%E5%BC%B7%E5%8C%96%E3%81%97%E3%81%9F%E8%B6%85%E8%A7%A3%E5%83%8F%E3%83%A2%E3%83%87%E3%83%AB-91a434b3683b)|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_8ad3591e5a9a.png\" width=128px>](super_resolution\u002Fswinir\u002F) | [swinir](\u002Fsuper_resolution\u002Fswinir\u002F) | [SwinIR: Image Restoration Using Swin Transformer](https:\u002F\u002Fgithub.com\u002FJingyunLiang\u002FSwinIR) | Pytorch | 1.2.12 and later | Aug 2021 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_2bf91cf8ba39.png\" width=128px>](super_resolution\u002Frcan-it\u002F) | [rcan-it](\u002Fsuper_resolution\u002Frcan-it\u002F) | [Revisiting RCAN: Improved Training for Image 
Super-Resolution](https:\u002F\u002Fgithub.com\u002Fzudi-lin\u002Frcan-it) | Pytorch | 1.2.10 and later | Jan 2022 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_e560b017bc96.png\" width=128px>](super_resolution\u002Fhat\u002F) | [Hat](\u002Fsuper_resolution\u002Fhat\u002F) | [Hat](https:\u002F\u002Fgithub.com\u002FXPixelGroup\u002FHAT) | Pytorch | 1.2.6 and later | May 2022 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_a11bd60aba65.png\" width=128px>](super_resolution\u002Fspan\u002F) | [SPAN](\u002Fsuper_resolution\u002Fspan\u002F) | [SPAN](https:\u002F\u002Fgithub.com\u002Fhongyuanyu\u002FSPAN) | Pytorch | 1.2.14 and later | Nov 2023 | [JP](https:\u002F\u002Ftech.ailia.ai\u002Fspan-%E3%83%91%E3%83%A9%E3%83%A1%E3%83%BC%E3%82%BF%E3%83%95%E3%83%AA%E3%83%BC%E3%81%AEattention%E3%81%AB%E3%82%88%E3%82%8B%E5%8A%B9%E7%8E%87%E7%9A%84%E3%81%AA%E8%B6%85%E8%A7%A3%E5%83%8F%E3%83%A2%E3%83%87%E3%83%AB-3af731eae44a) | \n\n## Text detection\n\n| | Model | Reference | Exported From | Supported Ailia Version | Date | Blog |\n|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_af5e8d036a04.png\" width=64px>](text_detection\u002Feast\u002F) |[east](\u002Ftext_detection\u002Feast) | [EAST: An Efficient and Accurate Scene Text Detector](https:\u002F\u002Fgithub.com\u002Fargman\u002FEAST) | TensorFlow | 1.2.6 and later | Apr 2017 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_0d65a5546cac.png\" width=64px>](text_detection\u002Fpixel_link\u002F) |[pixel_link](\u002Ftext_detection\u002Fpixel_link) | [Pixel-Link](https:\u002F\u002Fgithub.com\u002FZJULearning\u002Fpixel_link) | TensorFlow | 1.2.6 and later | Jan 2018 | |\n| [\u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_d5e0097c060f.jpg\" width=64px>](text_detection\u002Fcraft_pytorch\u002F) |[craft_pytorch](\u002Ftext_detection\u002Fcraft_pytorch) | [CRAFT: Character-Region Awareness For Text detection](https:\u002F\u002Fgithub.com\u002Fclovaai\u002FCRAFT-pytorch) | Pytorch | 1.2.2 and later | Apr 2019 | |\n\n## Text recognition\n\n| | Model | Reference | Exported From | Supported Ailia Version | Date | Blog |\n|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_9194877b05f1.png\" width=64px>](\u002Ftext_recognition\u002Fetl\u002F) |[etl](\u002Ftext_recognition\u002Fetl) | Japanese Character Classification | Keras | 1.1.0 and later | 1973 | [JP](https:\u002F\u002Ftech.ailia.ai\u002Failia-sdk-%E3%83%A2%E3%83%87%E3%83%AB%E7%B4%B9%E4%BB%8B-etl-412ee389a8ef) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_592d7da6cff6.png\" width=64px>](text_recognition\u002Fcrnn.pytorch\u002F) |[crnn.pytorch](\u002Ftext_recognition\u002Fcrnn.pytorch\u002F) | [Convolutional Recurrent Neural Network](https:\u002F\u002Fgithub.com\u002Fmeijieru\u002Fcrnn.pytorch) | Pytorch | 1.2.6 and later | Jul 2015 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_592d7da6cff6.png\" width=64px>](text_recognition\u002Fdeep-text-recognition-benchmark\u002F) |[deep-text-recognition-benchmark](\u002Ftext_recognition\u002Fdeep-text-recognition-benchmark\u002F) | [deep-text-recognition-benchmark](https:\u002F\u002Fgithub.com\u002Fclovaai\u002Fdeep-text-recognition-benchmark) | Pytorch | 1.2.6 and later | Apr 2019 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_bd0bbb425620.jpg\" 
width=64px>](text_recognition\u002Feasyocr\u002F) |[easyocr](\u002Ftext_recognition\u002Feasyocr\u002F) | [Ready-to-use OCR with 80+ supported languages](https:\u002F\u002Fgithub.com\u002FJaidedAI\u002FEasyOCR) | Pytorch | 1.2.6 and later | Apr 2020 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_59ee08cd9fc7.png\" width=64px>](text_recognition\u002Fpaddleocr\u002F) |[paddleocr](\u002Ftext_recognition\u002Fpaddleocr\u002F) | [PaddleOCR : Awesome multilingual OCR toolkits based on PaddlePaddle](https:\u002F\u002Fgithub.com\u002FPaddlePaddle\u002FPaddleOCR) | Pytorch | 1.2.6 and later | Sep 2020 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fpaddleocr-the-latest-lightweight-ocr-system-a13171d7ea3e) [JP](https:\u002F\u002Ftech.ailia.ai\u002Fpaddleocr-%E6%9C%80%E6%96%B0%E3%81%AE%E8%BB%BD%E9%87%8Focr%E3%82%B7%E3%82%B9%E3%83%86%E3%83%A0-8744205f3703) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_5bed24ab4cf4.png\" width=64px>](text_recognition\u002Fdonut\u002F) |[donut](\u002Ftext_recognition\u002Fdonut\u002F) | [Donut](https:\u002F\u002Fgithub.com\u002Fclovaai\u002Fdonut) | Pytorch | 1.2.16 and later | Nov 2021 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_e53a69c433dd.png\" width=64px>](text_recognition\u002Fndlocr_text_recognition\u002F) |[ndlocr_text_recognition](\u002Ftext_recognition\u002Fndlocr_text_recognition\u002F) | [NDL OCR](https:\u002F\u002Fgithub.com\u002Fndl-lab\u002Ftext_recognition) | Pytorch | 1.2.5 and later | Apr 2022 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_7a1e89cbf4ec.png\" width=64px>](text_recognition\u002Fpaddleocr_v3\u002F) |[paddleocr_v3](\u002Ftext_recognition\u002Fpaddleocr_v3\u002F) | [PaddleOCR : Awesome multilingual OCR toolkits based on 
PaddlePaddle](https:\u002F\u002Fgithub.com\u002FPaddlePaddle\u002FPaddleOCR) | Pytorch | 1.2.17 and later | Jun 2022 | [JP](https:\u002F\u002Ftech.ailia.ai\u002Fpaddleocr-v3-%E6%97%A5%E6%9C%AC%E8%AA%9E%E3%81%8C%E9%AB%98%E7%B2%BE%E5%BA%A6%E5%8C%96%E3%81%97%E3%81%9F%E6%9C%80%E6%96%B0%E3%81%AEocr%E3%83%A2%E3%83%87%E3%83%AB-7dfa93a3dfcd) |\n\n## Time-Series Forecasting\n\n| Model | Reference | Exported From | Supported Ailia Version | Date | Blog |\n|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [informer2020](\u002Ftime_series_forecasting\u002Finformer2020\u002F) | [Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting (AAAI'21 Best Paper)](https:\u002F\u002Fgithub.com\u002Fzhouhaoyi\u002FInformer2020) | Pytorch | 1.2.10 and later | Dec 2020 ||\n| [timesfm](\u002Ftime_series_forecasting\u002Ftimesfm\u002F) | [TimesFM](https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Ftimesfm) | Pytorch | 1.2.16 and later | Oct 2023 | [JP](https:\u002F\u002Ftech.ailia.ai\u002Ftimesfm-%E6%99%82%E7%B3%BB%E5%88%97%E4%BA%88%E6%B8%AC%E3%81%AE%E5%9F%BA%E7%9B%A4%E3%83%A2%E3%83%87%E3%83%AB-0a11fdefa319) |\n\n## Vehicle recognition\n\n| | Model | Reference | Exported From | Supported Ailia Version | Date | Blog |\n|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_169a473979a8.png\" width=64px>](\u002Fvehicle_recognition\u002Fvehicle-attributes-recognition-barrier\u002F) |[vehicle-attributes-recognition-barrier](\u002Fvehicle_recognition\u002Fvehicle-attributes-recognition-barrier) | [vehicle-attributes-recognition-barrier-0042](https:\u002F\u002Fgithub.com\u002Fopenvinotoolkit\u002Fopen_model_zoo\u002Ftree\u002Fmaster\u002Fmodels\u002Fintel\u002Fvehicle-attributes-recognition-barrier-0042) | OpenVINO | 1.2.5 and later | May 2018 | 
[EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fvehicleattributerecognitionbarrier-a-machine-learning-model-for-detecting-car-attributes-fe8fda7649ff) [JP](https:\u002F\u002Ftech.ailia.ai\u002Fvehicleattributerecognitionbarrier-%E8%BB%8A%E3%81%AE%E5%B1%9E%E6%80%A7%E3%82%92%E6%A4%9C%E5%87%BA%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-ee26d1a3e00b) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_cb9de1282dd3.png\" width=128px>](vehicle_recognition\u002Fvehicle-license-plate-detection-barrier\u002F) | [vehicle-license-plate-detection-barrier](\u002Fvehicle_recognition\u002Fvehicle-license-plate-detection-barrier\u002F) | [vehicle-license-plate-detection-barrier-0106](https:\u002F\u002Fgithub.com\u002Fopenvinotoolkit\u002Fopen_model_zoo\u002Ftree\u002Fmaster\u002Fmodels\u002Fintel\u002Fvehicle-license-plate-detection-barrier-0106) | OpenVINO | 1.2.5 and later | May 2018 | |\n\n## Vision Language Model\n\n| | Model | Reference | Exported From | Supported Ailia Version | Date | Blog |\n|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_bb98a6bae630.jpg\" width=128px>](vision_language_model\u002Fllava\u002F) | [llava](\u002Fvision_language_model\u002Fllava) | [LLaVA](https:\u002F\u002Fgithub.com\u002Fhaotian-liu\u002FLLaVA) | Pytorch | 1.2.16 and later | Apr 2023 | [JP](https:\u002F\u002Ftech.ailia.ai\u002Fllava-%E7%94%BB%E5%83%8F%E3%81%AB%E5%AF%BE%E3%81%97%E3%81%A6%E8%B3%AA%E5%95%8F%E3%81%A7%E3%81%8D%E3%82%8B%E5%A4%A7%E8%A6%8F%E6%A8%A1%E8%A8%80%E8%AA%9E%E3%83%A2%E3%83%87%E3%83%AB-6ede836f2bed) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_3e77fa65eea2.jpg\" width=128px>](vision_language_model\u002Fflorence2\u002F) | 
[florence2](vision_language_model\u002Fflorence2) | [Hugging Face - microsoft\u002FFlorence-2-base](https:\u002F\u002Fhuggingface.co\u002Fmicrosoft\u002FFlorence-2-base) | Pytorch | 1.2.16 and later | Nov 2023 | [JP](https:\u002F\u002Ftech.ailia.ai\u002Fflorence2-%E8%BB%BD%E9%87%8F%E3%81%A7%E3%82%A8%E3%83%83%E3%82%B8%E5%AE%9F%E8%A3%85%E5%8F%AF%E8%83%BD%E3%81%AAvision-language-model-71809797a957) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_8b6aaa1ac1ee.jpg\" width=128px>](vision_language_model\u002Fmobilevlm\u002F) | [mobilevlm](vision_language_model\u002Fmobilevlm) | [MobileVLM](https:\u002F\u002Fgithub.com\u002FMeituan-AutoML\u002FMobileVLM) | Pytorch | 1.5.0 and later | Dec 2023 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_e9bdbc6b01d4.jpg\" width=128px>](vision_language_model\u002Fllava-jp\u002F) | [llava-jp](vision_language_model\u002Fllava-jp) | [LLaVA-JP](https:\u002F\u002Fgithub.com\u002Ftosiyuki\u002FLLaVA-JP\u002Ftree\u002Fmain) | Pytorch | 1.5.0 and later | Jan 2024 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_8d128cc43276.jpeg\" width=128px>](vision_language_model\u002Fqwen2_vl\u002F) | [qwen2_vl](vision_language_model\u002Fqwen2_vl) | [Qwen2-VL](https:\u002F\u002Fgithub.com\u002FQwenLM\u002FQwen2-VL) | Pytorch | 1.5.0 and later | Sep 2024 | [JP](https:\u002F\u002Ftech.ailia.ai\u002Fqwen2-vl-%E3%83%AD%E3%83%BC%E3%82%AB%E3%83%AB%E3%81%A7%E5%8B%95%E4%BD%9C%E3%81%99%E3%82%8Bvision-language-model-b6f75fa30a08) |\n\n## Commercial model\n\n| Model | Reference | Exported From | Supported Ailia Version | Date | Blog |\n|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n|[acculus-pose](\u002Fcommercial_model\u002Facculus-pose) | [Acculus, Inc.](https:\u002F\u002Facculus.jp\u002F) | Caffe | 1.2.3 and later | May 2018 | |\n\n# Other 
platforms\n\nPrototype with ailia MODELS (Python), then deploy to production.\n\n- [unity version](https:\u002F\u002Fgithub.com\u002Failia-ai\u002Failia-models-unity)\n- [kotlin version](https:\u002F\u002Fgithub.com\u002Failia-ai\u002Failia-models-kotlin)\n- [c++ version](https:\u002F\u002Fgithub.com\u002Failia-ai\u002Failia-models-cpp)\n- [flutter version](https:\u002F\u002Fgithub.com\u002Failia-ai\u002Failia-models-flutter)\n- [rust version](https:\u002F\u002Fgithub.com\u002Failia-ai\u002Failia-models-rust)\n\n# Contact\n\n- [Contact us](https:\u002F\u002Fwww.ailia.ai\u002Fen-contact-product)\n- [Mail](mailto:contact@ailia.ai)\n","[\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_8d9da79f7362.png\">](ABOUT_AINYAN.md)\n\nA collection of pre-trained, state-of-the-art AI models.\n\n# About ailia SDK\n\n[ailia SDK](https:\u002F\u002Failia.ai\u002Fen\u002Fsdk\u002F) is a cross-platform, high-speed AI inference SDK. It supports Windows, Mac, Linux, iOS, Android, Jetson, and Raspberry Pi, with GPU acceleration via Vulkan and Metal. Bindings are provided for C++, Python, Unity (C#), Kotlin, Rust, and Flutter.\n\n# Why ailia SDK\n\n|  | ailia SDK | ONNX Runtime |\n|:---|:---:|:---:|\n| GPU inference via Vulkan and Metal | ✓ | − |\n| ailia Speech\u002FAudio\u002FLLM\u002FTokenizer\u002FTracker | ✓ | − |\n| Library of 400+ verified models with sample code | ✓ | − |\n| Non-OS\u002FRTOS inference support | ✓ | − |\n| Unity bindings and model collection | ✓ | △ |\n| Model-specific optimizations | ✓ | △ |\n\n△ = Supported, but limited by a generic implementation.\n\n# How to use\n\nTry it now on [Google Colaboratory](https:\u002F\u002Fwww.ailia.ai\u002Flaunch_to_colab).\n\nIf you want to try it on your own PC:\n\n[ailia MODELS tutorial](TUTORIAL.md)\n\n[ailia MODELS tutorial (Japanese)](TUTORIAL_jp.md)\n\n# Documentation\n\n[ailia-models wiki](https:\u002F\u002Fdeepwiki.com\u002Failia-ai\u002Failia-models)\n\n# Supported models\n403 models as of March 12th, 2026.\n\n# Latest update\n- 2026.03.12 Added depth_anything_v3, depth_pro\n- 2026.03.06 Added depth_anything_v2\n- 2026.03.04 Added gpt-sovits-v2-pro, bevformer, uniad\n- 2026.03.02 Added g2pw, gpt-sovits-v1, v2, v3 (Chinese)\n- 2026.01.16 Added embeddinggemma\n- 2025.12.30 Added demucs, latentsync\n- 2025.12.26 Added sadtalker\n- 2025.12.25 Added samurai, cotracker3 (ailia SDK 1.6.1)\n- 2025.12.21 Added silerovad 
v5, v6, v6_2\n- 2025.12.17 Added sensevoice, cosyvoice2\n- 2025.12.01 Added glass, mobilevlm, donut\n\nFor more information, see our [wiki](https:\u002F\u002Fgithub.com\u002Failia-ai\u002Failia-models\u002Fwiki).\n\n## Action recognition\n\n| | Model | Reference | Exported From | Supported Ailia Version | Date | Blog |\n|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_9b3cda812675.png\" width=128px>](action_recognition\u002Fva-cnn\u002F) | [va-cnn](\u002Faction_recognition\u002Fva-cnn\u002F) | [View Adaptive Neural Networks for Skeleton-based Human Action Recognition (VA)](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FView-Adaptive-Neural-Networks-for-Skeleton-based-Human-Action-Recognition) | Pytorch | 1.2.7 and later | Mar 2017 ||\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_99eb101cc9ee.png\" width=128px>](action_recognition\u002Fst_gcn\u002F) | [st-gcn](\u002Faction_recognition\u002Fst_gcn\u002F) | [ST-GCN](https:\u002F\u002Fgithub.com\u002Fyysijie\u002Fst-gcn) | Pytorch | 1.2.5 and later | Jan 2018 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fst-gcn-a-machine-learning-model-for-detecting-human-actions-from-skeletons-46a95b31b5db) [JP](https:\u002F\u002Ftech.ailia.ai\u002Fst-gcn-%E9%AA%A8%E6%A0%BC%E3%81%8B%E3%82%89%E4%BA%BA%E7%89%A9%E3%81%AE%E3%82%A2%E3%82%AF%E3%82%B7%E3%83%A7%E3%83%B3%E3%82%92%E6%A4%9C%E5%87%BA%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-af3196e38d1f) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_26dd279d25aa.jpg\" width=128px>](action_recognition\u002Fmars\u002F) | [mars](\u002Faction_recognition\u002Fmars\u002F) | [MARS: Motion-Augmented RGB Stream for Action Recognition](https:\u002F\u002Fgithub.com\u002Fcraston\u002FMARS) | Pytorch | 1.2.4 and later | Nov 2018 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fmars-a-machine-learning-model-for-identifying-actions-from-videos-6b93c06ac6a5) 
[JP](https:\u002F\u002Ftech.ailia.ai\u002Fmars-%E5%8B%95%E7%94%BB%E3%81%8B%E3%82%89%E3%82%A2%E3%82%AF%E3%82%B7%E3%83%A7%E3%83%B3%E3%82%92%E8%AD%98%E5%88%A5%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-c03b0b8804a8) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_4ca9d3a01647.gif\" width=128px>](action_recognition\u002Fax_action_recognition\u002F) | [ax_action_recognition](\u002Faction_recognition\u002Fax_action_recognition\u002F) | [Realtime Action Recognition](https:\u002F\u002Fgithub.com\u002Ffelixchenfy\u002FRealtime-Action-Recognition) | Pytorch | 1.2.7 and later | Mar 2019 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_8ed805bd7cf5.png\" width=128px>](action_recognition\u002Fdriver-action-recognition-adas\u002F) | [driver-action-recognition-adas](\u002Faction_recognition\u002Fdriver-action-recognition-adas\u002F) | [driver-action-recognition-adas-0002](https:\u002F\u002Fgithub.com\u002Fopenvinotoolkit\u002Fopen_model_zoo\u002Ftree\u002Fmaster\u002Fmodels\u002Fintel\u002Fdriver-action-recognition-adas-0002) | OpenVINO | 1.2.5 and later | Mar 2019 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_ea78e5cc3376.gif\" width=128px>](action_recognition\u002Faction_clip\u002F) | [action_clip](\u002Faction_recognition\u002Faction_clip\u002F) | [ActionCLIP](https:\u002F\u002Fgithub.com\u002Fsallymmx\u002FActionCLIP) | Pytorch | 1.2.7 and later | Sep 2021 | |\n\n## Anomaly detection\n\n| | Model | Reference | Exported From | Supported Ailia Version | Date | Blog |\n|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_f9dfd9afdf47.png\" width=128px>](anomaly_detection\u002Fmahalanobisad\u002F) | [mahalanobisad](\u002Fanomaly_detection\u002Fmahalanobisad\u002F) | 
[MahalanobisAD-pytorch](https:\u002F\u002Fgithub.com\u002Fbyungjae89\u002FMahalanobisAD-pytorch\u002Ftree\u002Fmaster) | Pytorch | 1.2.9 and later | May 2020 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_a37dde9c1a1c.png\" width=128px>](anomaly_detection\u002Fspade-pytorch\u002F) | [spade-pytorch](\u002Fanomaly_detection\u002Fspade-pytorch\u002F) | [Sub-Image Anomaly Detection with Deep Pyramid Correspondences](https:\u002F\u002Fgithub.com\u002Fbyungjae89\u002FSPADE-pytorch) | Pytorch | 1.2.6 and later | May 2020 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_f032db59a299.png\" width=128px>](anomaly_detection\u002Fpadim\u002F) | [padim](\u002Fanomaly_detection\u002Fpadim\u002F) | [PaDiM-Anomaly-Detection-Localization-master](https:\u002F\u002Fgithub.com\u002Fxiahaifeng1995\u002FPaDiM-Anomaly-Detection-Localization-master) | Pytorch | 1.2.6 and later | Nov 2020 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fpadim-a-machine-learning-model-for-detecting-defective-products-without-retraining-5daa6f203377) [JP](https:\u002F\u002Ftech.ailia.ai\u002Fpadim-%E5%86%8D%E5%AD%A6%E7%BF%92%E4%B8%8D%E8%A6%81%E3%81%A7%E4%B8%8D%E8%89%AF%E5%93%81%E6%A4%9C%E7%9F%A5%E3%82%92%E8%A1%8D%E3%81%86%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-69add653fbd3) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_23eb8505880d.png\" width=128px>](anomaly_detection\u002Fpatchcore\u002F) | [patchcore](\u002Fanomaly_detection\u002Fpatchcore\u002F) | [PatchCore_anomaly_detection](https:\u002F\u002Fgithub.com\u002Fhcw-00\u002FPatchCore_anomaly_detection) | Pytorch | 1.2.6 and later | Jun 2021 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_edbea6f18791.png\" width=128px>](anomaly_detection\u002Fglass\u002F) | [glass](\u002Fanomaly_detection\u002Fglass\u002F) | [A Unified Anomaly Synthesis Strategy with Gradient Ascent for Industrial Anomaly Detection and Localization](https:\u002F\u002Fgithub.com\u002Fcqylunlun\u002FGLASS) 
| Pytorch | 1.2.14 and later | Jul 2024 | |\n\n## Audio Language Model\n\n| Model | Reference | Exported From | Supported Ailia Version | Date | Blog |\n|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n|[qwen_audio](\u002Faudio_language_model\u002Fqwen_audio) | [Qwen-Audio](https:\u002F\u002Fgithub.com\u002FQwenLM\u002FQwen-Audio) | Pytorch | 1.5.0 and later | Nov 2023 | [JP](https:\u002F\u002Ftech.ailia.ai\u002Fqwen-audio-%E9%9F%B3%E3%82%92%E5%85%A5%E5%8A%9B%E3%81%97%E3%81%A6%E3%83%86%E3%82%AD%E3%82%B9%E3%83%88%E3%82%92%E7%94%9F%E6%88%90%E5%8F%AF%E8%83%BD%E3%81%AAaudio-language-model-57d3a5c71643) |\n\n## Audio processing\n\n### Audio classification\n\n| Model | Reference | Exported From | Supported Ailia Version | Date | Blog |\n|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [crnn_audio_classification](\u002Faudio_processing\u002Fcrnn_audio_classification\u002F) | [crnn-audio-classification](https:\u002F\u002Fgithub.com\u002Fksanjeevan\u002Fcrnn-audio-classification) | Pytorch | 1.2.5 and later | Mar 2019 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fcrnnsoundclassification-a-machine-learning-model-for-classifying-sound-8e45d1f22fa) [JP](https:\u002F\u002Ftech.ailia.ai\u002Fcrnnsoundclassification-%E9%9F%B3%E5%A3%B0%E3%82%92%E5%88%86%E9%A1%9E%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-2a35564dad42) |\n| [audioset_tagging_cnn](\u002Faudio_processing\u002Faudioset_tagging_cnn\u002F) | [PANNs: Large-Scale Pretrained Audio Neural Networks for Audio Pattern Recognition](https:\u002F\u002Fgithub.com\u002Fqiuqiangkong\u002Faudioset_tagging_cnn) | Pytorch | 1.2.9 and later | Dec 2019 | |\n| [transformer-cnn-emotion-recognition](\u002Faudio_processing\u002Ftransformer-cnn-emotion-recognition\u002F) | [Combining Spatial and Temporal Feature Representations of Speech Emotion by Parallelizing CNNs and Transformer-Encoders](https:\u002F\u002Fgithub.com\u002FIliaZenkov\u002Ftransformer-cnn-emotion-recognition)  | Pytorch | 1.2.5 and later | Oct 2020 | |\n| [microsoft clap](\u002Faudio_processing\u002Fmsclap\u002F) | [CLAP](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FCLAP) | Pytorch | 1.2.11 
and later | Jun 2022 | |\n| [clap](\u002Faudio_processing\u002Fclap\u002F) | [CLAP](https:\u002F\u002Fgithub.com\u002FLAION-AI\u002FCLAP) | Pytorch | 1.2.6 and later | Nov 2022 | [JP](https:\u002F\u002Ftech.ailia.ai\u002Fclap-%E3%83%86%E3%82%AD%E3%82%B9%E3%83%88%E3%81%8B%E3%82%89%E9%9F%B3%E5%A3%B0%E3%82%92%E6%A4%9C%E7%B4%A2%E5%8F%AF%E8%83%BE%E3%81%AB%E3%81%99%E3%82%8B%E7%89%B9%E5%BE%B4%E6%8A%BD%E5%87%BA%E3%83%A2%E3%83%87%E3%83%AB-f712f9c00dab) |\n\n### Music enhancement\n\n| Model | Reference | Exported From | Supported Ailia Version | Date | Blog |\n|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [hifigan](\u002Faudio_processing\u002Fhifigan\u002F) | [HiFi-GAN](https:\u002F\u002Fgithub.com\u002Fjik876\u002Fhifi-gan) | Pytorch | 1.2.9 and later | Oct 2020 |  |\n| [deep music enhancer](\u002Faudio_processing\u002Fdeep-music-enhancer\u002F) | [On Filter Generalization for Music Bandwidth Extension Using Deep Neural Networks](https:\u002F\u002Fgithub.com\u002Fserkansulun\u002Fdeep-music-enhancer) | Pytorch | 1.2.6 and later | Nov 2020 | |\n\n### Music generation\n\n| Model | Reference | Exported From | Supported Ailia Version | Date | Blog |\n|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [pytorch_wavenet](\u002Faudio_processing\u002Fpytorch_wavenet\u002F) | [pytorch_wavenet](https:\u002F\u002Fgithub.com\u002Fvincentherrmann\u002Fpytorch-wavenet) | Pytorch | 1.2.14 and later | Sep 2016 |  |\n\n### Noise reduction\n\n| Model | Reference | Exported From | Supported Ailia Version | Date | Blog |\n|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [rnnoise](\u002Faudio_processing\u002Frnnoise\u002F) | [rnnoise](https:\u002F\u002Fgithub.com\u002Fxiph\u002Frnnoise) | Keras | 1.2.15 and later | Sep 2017 | |\n| [voicefilter](\u002Faudio_processing\u002Fvoicefilter\u002F) | [VoiceFilter](https:\u002F\u002Fgithub.com\u002Fmindslab-ai\u002Fvoicefilter)  | Pytorch | 1.2.7 and later | Oct 2018 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fvoicefilter-targeted-voice-separation-model-6fe6f85309ea) 
[JP](https:\u002F\u002Ftech.ailia.ai\u002Fvoicefilter-%E4%BB%BB%E6%84%8F%E3%81%AE%E4%BA%BA%E7%89%A9%E3%81%AE%E5%A3%B0%E3%82%92%E6%8A%BD%E5%87%BA%E3%81%A7%E3%81%8D%E3%82%8B%E9%9F%B3%E5%A3%B0%E5%88%86%E9%9B%A2%E3%83%A2%E3%83%87%E3%83%AB-d5b88a8549d9) |\n| [unet_source_separation](\u002Faudio_processing\u002Funet_source_separation\u002F) | [source_separation](https:\u002F\u002Fgithub.com\u002FAppleHolic\u002Fsource_separation)  | Pytorch | 1.2.6 and later | Jul 2019 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Funetsourceseparation-a-machine-learning-model-to-remove-audio-noise-and-extract-voices-5acae8c37291) [JP](https:\u002F\u002Ftech.ailia.ai\u002Funetsourceseparation-%E9%9B%91%E9%9F%B3%E3%82%92%E9%99%A4%E5%8E%BB%E3%81%97%E3%81%A6%E5%A3%B0%E3%81%A0%E3%81%91%E3%82%92%E6%8A%BD%E5%87%BA%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-5d23fd054eac) |\n| [demucs](\u002Faudio_processing\u002Fdemucs\u002F) | [Demucs](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fdemucs) | Pytorch | 1.4.0 and later | Sep 2019 | |\n| [dtln](\u002Faudio_processing\u002Fdtln\u002F) | [Dual-Signal Transformation LSTM Network](https:\u002F\u002Fgithub.com\u002Fbreizhn\u002FDTLN) | Tensorflow | 1.3.0 and later | May 2020 |  |\n| [audiosep](\u002Faudio_processing\u002Faudiosep\u002F) | [AudioSep](https:\u002F\u002Fgithub.com\u002FAudio-AGI\u002FAudioSep) | Pytorch | 1.3.0 and later | Aug 2023 | |\n\n### Phoneme alignment\n\n| Model | Reference | Exported From | Supported Ailia Version | Date | Blog |\n|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [narabas](\u002Faudio_processing\u002Fnarabas\u002F) | [narabas: Japanese phoneme forced alignment tool](https:\u002F\u002Fgithub.com\u002Fdarashi\u002Fnarabas) | Pytorch | 1.2.11 and later | Mar 2023 | |\n\n### Pitch detection\n\n| Model | Reference | Exported From | Supported Ailia Version | Date | Blog |\n|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [crepe](\u002Faudio_processing\u002Fcrepe\u002F) | 
[torchcrepe](https:\u002F\u002Fgithub.com\u002Fmaxrmorrison\u002Ftorchcrepe) | Pytorch | 1.2.10 and later | Feb 2018 | [JP](https:\u002F\u002Ftech.ailia.ai\u002Fcrepe-%E9%AB%98%E7%B2%BE%E5%BA%A6%E3%81%AA%E3%83%94%E3%83%83%E3%83%81%E6%8E%A8%E5%AE%9A%E3%82%92%E8%A1%8C%E3%81%86%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-dfb09b5e1f6b) |\n\n### Speaker diarization\n\n| Model | Reference | Exported From | Supported Ailia Version | Date | Blog |\n|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [pyannote-audio](\u002Faudio_processing\u002Fpyannote-audio\u002F) | [Pyannote-audio](https:\u002F\u002Fgithub.com\u002Fpyannote\u002Fpyannote-audio) | Pytorch | 1.2.15 and later | Nov 2019 | [JP](https:\u002F\u002Ftech.ailia.ai\u002Fpyannoteaudio-%E8%A9%B1%E8%80%85%E5%88%86%E9%9B%A2%E3%82%92%E8%A1%8D%E3%81%86%E3%81%9F%E3%82%81%E3%81%AE%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-fca61f4ef5d0) |\n| [auto_speech](\u002Faudio_processing\u002Fauto_speech\u002F) | [AutoSpeech: Neural Architecture Search for Speaker Recognition](https:\u002F\u002Fgithub.com\u002FVITA-Group\u002FAutoSpeech) | Pytorch | 1.2.5 and later | May 2020 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fautospeech-speech-based-person-identification-model-f01822f6d8e5) [JP](https:\u002F\u002Ftech.ailia.ai\u002Fautospeech-%E9%9F%B3%E5%A3%B0%E3%81%AB%E3%82%88%E3%82%8B%E5%80%8B%E4%BA%BA%E8%AD%98%E5%88%A5%E3%83%A2%E3%83%87%E3%83%AB-267a00f26a4a) |\n| [wespeaker](\u002Faudio_processing\u002Fwespeaker\u002F) | [WeSpeaker](https:\u002F\u002Fgithub.com\u002Fwenet-e2e\u002Fwespeaker) | Onnxruntime | 1.2.9 and later | Oct 2022 | |\n\n### Speech to text\n\n| Model | Reference | Exported From | Supported Ailia Version | Date | Blog |\n|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [deepspeech2](\u002Faudio_processing\u002Fdeepspeech2\u002F) | [deepspeech.pytorch](https:\u002F\u002Fgithub.com\u002FSeanNaren\u002Fdeepspeech.pytorch) | Pytorch | 1.2.2 and later | Oct 2017 | 
[EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fdeepspeech2-a-machine-learning-model-for-speech-recognition-d9e64c0d1afc) [JP](https:\u002F\u002Ftech.ailia.ai\u002Fdeepspeech2-%E9%9F%B3%E5%A3%B0%E8%AA%8D%E8%AD%98%E3%82%92%E8%A1%8D%E3%81%86%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-8288c5f53eea) |\n| [whisper](\u002Faudio_processing\u002Fwhisper\u002F) | [Whisper](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fwhisper) | Pytorch | 1.2.10 and later | Dec 2022 | [JP](https:\u002F\u002Ftech.ailia.ai\u002Fwhisper-%E6%97%A5%E6%9C%AC%E8%AA%9E%E3%82%92%E5%90%AB%E3%82%8099%E8%A8%80%E8%AA%9E%E3%82%92%E8%AA%8D%E8%AD%98%E3%81%A7%E3%81%8D%E3%82%8B%E9%9F%B3%E5%A3%B0%E8%AA%8D%E8%AD%98%E3%83%A2%E3%83%87%E3%83%AB-b6e578f55c87) |\n| [reazon_speech](\u002Faudio_processing\u002Freazon_speech\u002F) | [ReazonSpeech](https:\u002F\u002Fresearch.reazon.jp\u002Fprojects\u002FReazonSpeech\u002F) | Pytorch | 1.4.0 and later | Jan 2023 | |\n| [distil-whisper](\u002Faudio_processing\u002Fdistil-whisper\u002F) | [Hugging Face - Distil-Whisper](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fdistil-whisper) | Pytorch | 1.2.16 and later | Nov 2023 | |\n| [sensevoice](\u002Faudio_processing\u002Fsensevoice\u002F) | [SenseVoice](https:\u002F\u002Fgithub.com\u002FFunAudioLLM\u002FSenseVoice) | Pytorch | 1.2.13 and later | Jul 2024 | [JP](https:\u002F\u002Ftech.ailia.ai\u002Fsensevoice-%E6%97%A5%E6%9C%AC%E8%AA%9E%E3%81%AB%E3%82%82%E5%AF%BE%E5%BF%9C%E3%81%97%E3%81%9F%E9%AB%98%E9%80%9F%E3%81%AA%E9%9F%B3%E5%A3%B0%E8%AA%8D%E8%AD%98%E3%83%A2%E3%83%87%E3%83%AB-3721c79e0592) |\n| [reazon_speech2](\u002Faudio_processing\u002Freazon_speech2\u002F) | [ReazonSpeech2](https:\u002F\u002Fresearch.reazon.jp\u002Fprojects\u002FReazonSpeech\u002F) | Pytorch | 1.4.0 and later | Feb 2024 | |\n| [kotoba-whisper](\u002Faudio_processing\u002Fkotoba-whisper\u002F) | [kotoba-whisper](https:\u002F\u002Fhuggingface.co\u002Fkotoba-tech\u002Fkotoba-whisper-v1.0) | Pytorch | 1.2.16 and later | Apr 2024 | |\n\n### Text to speech\n\n| Model | 
Reference | Exported From | Supported Ailia Version | Date | Blog |\n|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [pytorch-dc-tts](\u002Faudio_processing\u002Fpytorch-dc-tts\u002F) | [Efficiently Trainable Text-to-Speech System Based on Deep Convolutional Networks with Guided Attention](https:\u002F\u002Fgithub.com\u002Ftugstugi\u002Fpytorch-dc-tts) | Pytorch | 1.2.6 and later | Oct 2017 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fpytorchdctts-a-machine-learning-model-for-text-to-speech-synthesis-2273e269b480) [JP](https:\u002F\u002Ftech.ailia.ai\u002Fpytorchdctts-%E3%83%86%E3%82%AD%E3%82%B9%E3%83%88%E3%81%8B%E3%82%89%E9%9F%B3%E5%A3%B0%E5%90%88%E6%88%90%E3%82%92%E8%A1%8D%E3%81%86%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-dcb5eb07c883) |\n| [tacotron2](\u002Faudio_processing\u002Ftacotron2\u002F) | [Tacotron2](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Ftacotron2) | Pytorch | 1.2.15 and later | Feb 2018 | [JP](https:\u002F\u002Ftech.ailia.ai\u002Ftacotron2-%E6%B3%A2%E5%BD%A2%E5%A4%89%E6%8F%9B%E3%82%92ai%E3%81%A7%E8%A1%8D%E3%81%86%E9%AB%98%E5%93%81%E8%B3%AA%E3%81%AA%E9%9F%B3%E5%A3%B0%E5%90%88%E6%88%90%E3%83%A2%E3%83%87%E3%83%AB-bc592217a399) |\n| [vall-e-x](\u002Faudio_processing\u002Fvall-e-x\u002F) | [VALL-E-X](https:\u002F\u002Fgithub.com\u002FPlachtaa\u002FVALL-E-X) | Pytorch | 1.2.15 and later | Mar 2023 | [JP](https:\u002F\u002Ftech.ailia.ai\u002Fvall-e-x-%E5%86%8D%E5%AD%A6%E7%BF%92%E4%B8%8D%E8%A6%81%E3%81%A7%E5%A3%B0%E8%B3%AA%E3%82%92%E5%A4%89%E6%9B%B4%E3%81%A7%E3%81%8D%E3%82%8B%E9%9F%B3%E5%A3%B0%E5%90%88%E6%88%90%E3%83%A2%E3%83%87%E3%83%AB-977efc19ac84) |\n| [Bert-VITS2](\u002Faudio_processing\u002Fbert-vits2\u002F) | [Bert-VITS2](https:\u002F\u002Fgithub.com\u002Ffishaudio\u002FBert-VITS2) | Pytorch | 1.2.16 and later | Aug 2023 |\n| [gpt-sovits](\u002Faudio_processing\u002Fgpt-sovits\u002F) | [GPT-SoVITS](https:\u002F\u002Fgithub.com\u002FRVC-Boss\u002FGPT-SoVITS) | Pytorch | 1.4.0 and later | Feb 2024 | 
[JP](https:\u002F\u002Ftech.ailia.ai\u002Fgpt-sovits-%E3%83%95%E3%82%A1%E3%82%A4%E3%83%B3%E3%83%81%E3%83%A5%E3%83%BC%E3%83%8B%E3%83%B3%E3%82%B0%E3%81%A7%E3%81%8D%E3%82%8B0%E3%82%B7%E3%83%A7%E3%83%83%E3%83%88%E3%81%AE%E9%9F%B3%E5%A3%B0%E5%90%88%E6%88%90%E3%83%A2%E3%83%87%E3%83%AB-2212eeb5ad20) |\n| [gpt-sovits-v2](\u002Faudio_processing\u002Fgpt-sovits-v2\u002F) | [GPT-SoVITS](https:\u002F\u002Fgithub.com\u002FRVC-Boss\u002FGPT-SoVITS) | Pytorch | 1.4.0 and later | Aug 2024 |  |\n| [cosyvoice2](\u002Faudio_processing\u002Fcosyvoice2\u002F) | [CosyVoice2](https:\u002F\u002Fgithub.com\u002FFunAudioLLM\u002FCosyVoice\u002Ftree\u002Fmain) | Pytorch | 1.4.0 and later | Dec 2024 |  |\n| [gpt-sovits-v3](\u002Faudio_processing\u002Fgpt-sovits-v3\u002F) | [GPT-SoVITS](https:\u002F\u002Fgithub.com\u002FRVC-Boss\u002FGPT-SoVITS) | Pytorch | 1.4.0 and later | Feb 2025 |  |\n| [gpt-sovits-v2-pro](\u002Faudio_processing\u002Fgpt-sovits-v2-pro\u002F) | [GPT-SoVITS](https:\u002F\u002Fgithub.com\u002FRVC-Boss\u002FGPT-SoVITS) | Pytorch | 1.4.0 and later | Jun 2025 | [JP](https:\u002F\u002Ftech.ailia.ai\u002Fgpt-sovits-v2-pro-%E9%AB%98%E9%80%9F%E3%81%8B%E3%81%A4%E9%AB%98%E7%B2%BE%E5%BA%A6%E3%81%AA%E9%9F%B3%E5%A3%B0%E5%90%88%E6%88%90%E3%83%A2%E3%83%87%E3%83%AB-81f2156366cd) |\n\n### Voice activity detection\n\n| Model | Reference | Exported From | Supported Ailia Version | Date | Blog |\n|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [silero-vad](\u002Faudio_processing\u002Fsilero-vad\u002F) | [Silero VAD](https:\u002F\u002Fgithub.com\u002Fsnakers4\u002Fsilero-vad) | Pytorch | 1.2.15 and later | Dec 2020 | [JP](https:\u002F\u002Ftech.ailia.ai\u002Fsilerovad-%E7%99%BA%E8%A9%B1%E5%8C%BA%E9%96%93%E3%82%92%E6%A4%9C%E5%87%BA%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-2ad6cf395703) |\n\n### Voice conversion\n\n| Model | Reference | Exported From | Supported Ailia Version | Date | Blog |\n|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [rvc](\u002Faudio_processing\u002Frvc\u002F) 
| [Retrieval-based-Voice-Conversion-WebUI](https:\u002F\u002Fgithub.com\u002FRVC-Project\u002FRetrieval-based-Voice-Conversion-WebUI) | Pytorch | 1.2.12 and later | Mar 2023 | [JP](https:\u002F\u002Ftech.ailia.ai\u002Frvc-ai%E3%82%92%E4%BD%BF%E7%94%A8%E3%81%97%E3%81%9F%E3%83%9C%E3%82%A4%E3%82%B9%E3%83%81%E3%82%A7%E3%83%B3%E3%82%B8%E3%83%A5-64a813c7a0c4) |\n\n## Autonomous driving\n\n| Model | Reference | Exported From | Supported Ailia Version | Date | Blog |\n|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n|[bevformer](\u002Fautonomous_driving\u002Fbevformer\u002F) | [BEVFormer](https:\u002F\u002Fgithub.com\u002Ffundamentalvision\u002FBEVFormer) | Pytorch | 1.6.1 and later | Mar 2022 | [JP](https:\u002F\u002Ftech.ailia.ai\u002Fbevformer-%E3%83%9E%E3%83%AB%E3%83%81%E3%82%AB%E3%83%A1%E3%83%A9%E7%94%BB%E5%83%8F%E3%81%8B%E3%82%89bev%E8%A1%A8%E7%8F%BE%E3%82%92%E7%94%9F%E6%88%90%E3%81%99%E3%82%8Bai%E3%83%A2%E3%83%87%E3%83%AB-66ec76dd3a70) |\n|[uniad](\u002Fautonomous_driving\u002Funiad\u002F) | [UniAD: Unified Autonomous Driving](https:\u002F\u002Fgithub.com\u002FOpenDriveLab\u002FUniAD) | Pytorch | 1.6.1 and later | Dec 2022 | [JP](https:\u002F\u002Ftech.ailia.ai\u002Funiad-end2end%E3%81%AE%E8%87%AA%E5%8B%95%E9%81%8B%E8%BB%A2%E3%81%AE%E5%9F%BA%E6%9C%AC%E3%81%A8%E3%81%AA%E3%82%8B%E3%83%A2%E3%83%87%E3%83%AB-39339e6e277b) |\n\n## Background removal\n\n| | Model | Reference | Exported From | Supported Ailia Version | Date | Blog |\n|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_d12990de31b9.png\" width=128px>](background_removal\u002Fdeep-image-matting\u002F) | [deep-image-matting](\u002Fbackground_removal\u002Fdeep-image-matting\u002F) | [Deep Image Matting](https:\u002F\u002Fgithub.com\u002Ffoamliu\u002FDeep-Image-Matting)| Keras | 1.2.3 and later | Mar 2017 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fdeep-image-matting-a-machine-learning-model-to-improve-the-accuracy-of-image-matting-2ff98e0b47d6) 
[JP](https:\u002F\u002Ftech.ailia.ai\u002Fdeep-image-matting-%E7%89%A9%E4%BD%93%E3%81%AE%E5%88%87%E3%82%8A%E6%8A%9C%E3%81%8D%E3%82%92%E9%AB%98%E7%B2%BE%E5%BA%A6%E5%8C%96%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-45580882966f) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_e20a8820514e.png\" width=128px>](background_removal\u002Findexnet\u002F) | [indexnet](\u002Fbackground_removal\u002Findexnet\u002F) | [Indices Matter: Learning to Index for Deep Image Matting](https:\u002F\u002Fgithub.com\u002Fopen-mmlab\u002Fmmediting\u002Ftree\u002Fmaster\u002Fconfigs\u002Fmattors\u002Findexnet) | Pytorch | 1.2.7 and later | Aug 2019 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_3f16b5db9753.png\" width=128px>](background_removal\u002Fu2net\u002F) | [U-2-Net](\u002Fbackground_removal\u002Fu2net\u002F) | [U^2-Net: Going Deeper with Nested U-Structure for Salient Object Detection](https:\u002F\u002Fgithub.com\u002FNathanUA\u002FU-2-Net) | Pytorch | 1.2.2 and later | May 2020 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fu2net-a-machine-learning-model-that-performs-object-cropping-in-a-single-shot-48adfc158483) [JP](https:\u002F\u002Ftech.ailia.ai\u002Fu2net-%E3%82%B7%E3%83%B3%E3%82%B0%E3%83%AB%E3%82%B7%E3%83%A7%E3%83%83%E3%83%88%E3%81%A7%E7%89%A9%E4%BD%93%E3%81%AE%E5%88%87%E3%82%8A%E6%8A%9C%E3%81%8D%E3%82%92%E8%A1%8C%E3%81%86%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-e346f2787cdb) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_a6eca8477aae.png\" width=128px>](background_removal\u002Fu2net-portrait-matting\u002F) | [u2net-portrait-matting](\u002Fbackground_removal\u002Fu2net-portrait-matting\u002F) | [U^2-Net - Portrait matting](https:\u002F\u002Fgithub.com\u002Fdennisbappert\u002Fu-2-net-portrait) | Pytorch | 1.2.7 and later | May 2020 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_168bba1664f9.png\" 
width=128px>](background_removal\u002Fu2net-human-seg\u002F) | [u2net-human-seg](\u002Fbackground_removal\u002Fu2net-human-seg\u002F) | [U^2-Net - 人体分割](https:\u002F\u002Fgithub.com\u002Fxuebinqin\u002FU-2-Net) | Pytorch | 1.2.4及以上 | 2020年5月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_3a12ebc063a2.png\" width=128px>](background_removal\u002Fcascade_psp\u002F) | [cascade_psp](\u002Fbackground_removal\u002Fcascade_psp\u002F) | [CascadePSP](https:\u002F\u002Fgithub.com\u002Fhkchengrex\u002FCascadePSP) | Pytorch | 1.2.9及以上 | 2020年5月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_d60bf7ec8b48.png\" width=128px>](background_removal\u002Frembg\u002F) | [rembg](\u002Fbackground_removal\u002Frembg\u002F) | [Rembg](https:\u002F\u002Fgithub.com\u002Fdanielgatis\u002Frembg) | Pytorch | 1.2.4及以上 | 2020年8月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_fe290ce5cbba.png\" width=128px>](background_removal\u002Fgfm\u002F) | [gfm](\u002Fbackground_removal\u002Fgfm\u002F) | [弥合合成与真实：迈向端到端深度图像抠图](https:\u002F\u002Fgithub.com\u002FJizhiziLi\u002FGFM) | Pytorch | 1.2.10及以上 | 2020年10月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_8a39946122f6.jpg\" width=128px>](background_removal\u002Fmodnet\u002F) | [modnet](\u002Fbackground_removal\u002Fmodnet\u002F) | [MODNet：实时无三通道图人像抠图](https:\u002F\u002Fgithub.com\u002FZHKKKe\u002FMODNet) | Pytorch | 1.2.7及以上 | 2020年11月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_41538729bad5.png\" width=128px>](background_removal\u002Fbackground_matting_v2\u002F) | [background_matting_v2](\u002Fbackground_removal\u002Fbackground_matting_v2\u002F) | [实时高分辨率背景抠图](https:\u002F\u002Fgithub.com\u002FPeterL1n\u002FBackgroundMattingV2) | Pytorch | 1.2.9及以上 | 2020年12月 | 
|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_53e38ba59dc2.jpg\" width=128px>](background_removal\u002Fdis_seg\u002F) | [dis_seg](\u002Fbackground_removal\u002Fdis_seg\u002F) | [高度精确的二分图像分割](https:\u002F\u002Fgithub.com\u002Fxuebinqin\u002FDIS) | Pytorch | 1.2.10及以上 | 2022年3月 | |\n\n## 人群计数\n\n| | 模型 | 参考文献 | 导出自 | 支持的 Ailia 版本 | 日期 | 博客 |\n|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_41bd6b5714f8.png\" width=256px>](crowd_counting\u002Fcrowdcount-cascaded-mtl\u002F) | [crowdcount-cascaded-mtl](\u002Fcrowd_counting\u002Fcrowdcount-cascaded-mtl) | [基于CNN的级联多任务学习：\u003Cbr\u002F>高层先验与密度估计用于人群计数\u003Cbr\u002F>(单张图像人群计数)](https:\u002F\u002Fgithub.com\u002Fsvishwa\u002Fcrowdcount-cascaded-mtl) | Pytorch | 1.2.1及以上 | 2017年7月 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fcrowdcounting-a-machine-learning-model-for-counting-people-a7b274a7c2af) [JP](https:\u002F\u002Ftech.ailia.ai\u002Fcrowdcounting-%E7%94%BB%E5%83%8F%E3%81%8B%E3%82%89%E4%BA%BA%E6%95%B0%E3%82%92%E8%A8%88%E6%B8%AC%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-459e8b3fc184) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_aca259a55df2.png\" width=256px>](crowd_counting\u002Fc-3-framework\u002F) | [c-3-framework](\u002Fcrowd_counting\u002Fc-3-framework) | [人群计数代码框架(C^3-框架)](https:\u002F\u002Fgithub.com\u002Fgjy3035\u002FC-3-Framework) | Pytorch | 1.2.5及以上 | 2019年7月 | |\n\n## 深度时尚\n\n| | 模型 | 参考 | 导出自 | 支持的 Ailia 版本 | 日期 | 博客 |\n|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n|[\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_f463b4f22ba3.png\" 
width=128px>](deep_fashion\u002Ffashionai-key-points-detection\u002F) | [fashionai-key-points-detection](\u002Fdeep_fashion\u002Ffashionai-key-points-detection\u002F) | [A Pytorch Implementation of Cascaded Pyramid Network for FashionAI Key Points Detection](https:\u002F\u002Fgithub.com\u002Fgathierry\u002FFashionAI-KeyPointsDetectionOfApparel) | Pytorch | 1.2.5 及更高版本 | 2018年6月 | |\n|[\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_8a4d1ce0d207.jpg\" width=128px>](deep_fashion\u002Fperson-attributes-recognition-crossroad\u002F) | [person-attributes-recognition-crossroad](\u002Fdeep_fashion\u002Fperson-attributes-recognition-crossroad\u002F) | [person-attributes-recognition-crossroad-0230](https:\u002F\u002Fgithub.com\u002Fopenvinotoolkit\u002Fopen_model_zoo\u002Ftree\u002Fmaster\u002Fmodels\u002Fintel\u002Fperson-attributes-recognition-crossroad-0230) | Pytorch | 1.2.10 及更高版本 | 2018年10月 | |\n|[\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_76e6b705b922.png\" width=128px>](deep_fashion\u002Fclothing-detection\u002F) | [clothing-detection](\u002Fdeep_fashion\u002Fclothing-detection\u002F) | [Clothing-Detection](https:\u002F\u002Fgithub.com\u002Fsimaiden\u002FClothing-Detection) | Pytorch | 1.2.1 及更高版本 | 2019年6月 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fclothingdetection-a-machine-learning-model-for-detecting-clothing-dab99e1492eb) [JP](https:\u002F\u002Ftech.ailia.ai\u002Fclothingdetection-%E6%9C%8D%E8%A3%85%E3%82%92%E6%A4%9C%E5%87%BA%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-e75cc8bc75b7) |\n|[\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_b5b00cc9805c.png\" width=128px>](deep_fashion\u002Fmmfashion\u002F) | [mmfashion](\u002Fdeep_fashion\u002Fmmfashion\u002F) | [MMFashion](https:\u002F\u002Fgithub.com\u002Fopen-mmlab\u002Fmmfashion) | Pytorch | 1.2.5 及更高版本 | 
2019年11月 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fmmfashion-a-machine-learning-model-for-fashion-segmentation-a043fa972a2a) [JP](https:\u002F\u002Ftech.ailia.ai\u002Fmmfashion-%E3%83%95%E3%82%A1%E3%83%83%E3%82%B7%E3%83%A7%E3%83%B3%E3%82%92%E3%82%BB%E3%82%B0%E3%83%A1%E3%83%B3%E3%83%86%E3%83%BC%E3%82%B7%E3%83%A7%E3%83%B3%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-c486af72fdb5) |\n|[\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_2943e5d061ed.png\" width=128px>](deep_fashion\u002Fmmfashion_tryon\u002F) | [mmfashion_tryon](\u002Fdeep_fashion\u002Fmmfashion_tryon\u002F) | [MMFashion virtual try-on](https:\u002F\u002Fgithub.com\u002Fopen-mmlab\u002Fmmfashion) | Pytorch | 1.2.8 及更高版本 | 2019年11月 | |\n|[\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_3af7b968bcb1.png\" width=128px>](deep_fashion\u002Fmmfashion_retrieval\u002F) | [mmfashion_retrieval](\u002Fdeep_fashion\u002Fmmfashion_retrieval\u002F) | [MMFashion In-Shop Clothes Retrieval](https:\u002F\u002Fgithub.com\u002Fopen-mmlab\u002Fmmfashion) | Pytorch | 1.2.5 及更高版本 | 2019年11月 | |\n\n## 深度估计\n\n| | 模型 | 参考 | 导出自 | 支持的 Ailia 版本 | 日期 | 博客 |\n|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_713372d2fa43.png\" width=256px>](depth_estimation\u002Ffcrn-depthprediction\u002F) |[fcrn-depthprediction](depth_estimation\u002Ffcrn-depthprediction)| [Deeper Depth Prediction with Fully Convolutional Residual Networks](https:\u002F\u002Fgithub.com\u002Firo-cp\u002FFCRN-DepthPrediction) | TensorFlow | 1.2.6 及更高版本 | 2016年6月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_29aa4ef0ebbe.png\" width=256px>](depth_estimation\u002Fmonodepth2\u002F) | 
[monodepth2](depth_estimation\u002Fmonodepth2)| [Monocular depth estimation from a single image](https:\u002F\u002Fgithub.com\u002Fnianticlabs\u002Fmonodepth2) | Pytorch | 1.2.2 及更高版本 | 2018年6月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_496fbd51a4ad.png\" width=256px>](depth_estimation\u002Ffast-depth\u002F) |[fast-depth](depth_estimation\u002Ffast-depth)| [ICRA 2019 \"FastDepth: Fast Monocular Depth Estimation on Embedded Systems\"](https:\u002F\u002Fgithub.com\u002Fdwofk\u002Ffast-depth) | Pytorch | 1.2.5 及更高版本 | 2019年3月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_3a6ba4d9cd13.png\" width=256px>](depth_estimation\u002Fmidas\u002F) |[midas](depth_estimation\u002Fmidas)| [Towards Robust Monocular Depth Estimation:\u003Cbr\u002F> Mixing Datasets for Zero-shot Cross-dataset Transfer](https:\u002F\u002Fgithub.com\u002Fintel-isl\u002FMiDaS) | Pytorch | 1.2.4 及更高版本 | 2019年7月 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fmidas-a-machine-learning-model-for-depth-estimation-e96119cc1a3c) [JP](https:\u002F\u002Ftech.ailia.ai\u002Fmidas-%E5%A5%A5%E8%A1%8C%E3%81%8D%E3%82%92%E6%8E%A8%E5%AE%9A%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-71e65a041e0f) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_9572bbb029d8.png\" width=256px>](depth_estimation\u002Fhitnet\u002F) |[hitnet](depth_estimation\u002Fhitnet)| [ONNX-HITNET-Stereo-Depth-estimation](https:\u002F\u002Fgithub.com\u002FibaiGorordo\u002FONNX-HITNET-Stereo-Depth-estimation) | Pytorch | 1.2.9 及更高版本 | 2020年7月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_590eced2e825.png\" width=256px>](depth_estimation\u002Flap-depth\u002F) |[lap-depth](depth_estimation\u002Flap-depth)| 
[LapDepth-release](https:\u002F\u002Fgithub.com\u002Ftjqansthd\u002FLapDepth-release) | Pytorch | 1.2.9 及更高版本 | 2021年1月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_f92a2365765e.png\" width=256px>](depth_estimation\u002Fmobilestereonet\u002F) |[mobilestereonet](depth_estimation\u002Fmobilestereonet)| [MobileStereoNet](https:\u002F\u002Fgithub.com\u002Fcogsys-tuebingen\u002Fmobilestereonet) | Pytorch | 1.2.13 及更高版本 | 2021年8月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_7d816aa7397d.png\" width=256px>](depth_estimation\u002Fcrestereo\u002F) |[crestereo](depth_estimation\u002Fcrestereo)| [ONNX-CREStereo-Depth-Estimation](https:\u002F\u002Fgithub.com\u002FibaiGorordo\u002FONNX-CREStereo-Depth-Estimation) | Pytorch | 1.2.13 及更高版本 | 2022年3月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_3df02805a07f.png\" width=256px>](depth_estimation\u002Fzoe_depth\u002F) |[zoe_depth](depth_estimation\u002Fzoe_depth)| [ZoeDepth](https:\u002F\u002Fgithub.com\u002Fisl-org\u002FZoeDepth) | Pytorch | 1.3.0 及更高版本 | 2023年2月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_793b658bcb34.png\" width=256px>](depth_estimation\u002Fdepth_anything\u002F) |[depth_anything](depth_estimation\u002Fdepth_anything)| [DepthAnything](https:\u002F\u002Fgithub.com\u002FLiheYoung\u002FDepth-Anything) | Pytorch | 1.2.9 及更高版本 | 2024年1月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_ac50dde22e6e.png\" width=256px>](depth_estimation\u002Fdepth_anything_v2\u002F) |[depth_anything_v2](depth_estimation\u002Fdepth_anything_v2)| [Depth Anything V2](https:\u002F\u002Fgithub.com\u002FDepthAnything\u002FDepth-Anything-V2) | Pytorch | 1.2.16 及更高版本 | 2024年6月 | |\n| [\u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_e96363a182f9.png\" width=256px>](depth_estimation\u002Fdepth_pro\u002F) |[depth_pro](depth_estimation\u002Fdepth_pro)| [Depth Pro: Sharp Monocular Metric Depth in Less Than a Second](https:\u002F\u002Fgithub.com\u002Fapple\u002Fml-depth-pro) | Pytorch | 1.2.12 及更高版本 | 2024年10月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_b58e9c07db90.png\" width=256px>](depth_estimation\u002Fdepth_anything_v3\u002F) |[depth_anything_v3](depth_estimation\u002Fdepth_anything_v3)| [Depth Anything V3](https:\u002F\u002Fgithub.com\u002FByteDance-Seed\u002FDepth-Anything-3) | Pytorch | 1.2.16 及更高版本 | 2025年11月 | |\n\n## 扩散\n\n### 文本转图像\n\n| | 模型 | 参考 | 导出自 | 支持的 Ailia 版本 | 日期 | 博客 |\n|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_c11929149642.png\" width=128px>](diffusion\u002Flatent-diffusion-txt2img\u002F) | [latent-diffusion-txt2img](\u002Fdiffusion\u002Flatent-diffusion-txt2img\u002F) | [Latent Diffusion - txt2img](https:\u002F\u002Fgithub.com\u002FCompVis\u002Flatent-diffusion) | Pytorch | 1.2.10 及更高版本 | 2021年12月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_b4b220fbdb01.png\" width=128px>](diffusion\u002Fstable-diffusion-txt2img\u002F) | [stable-diffusion-txt2img](\u002Fdiffusion\u002Fstable-diffusion-txt2img\u002F) | [Stable Diffusion](https:\u002F\u002Fgithub.com\u002FCompVis\u002Fstable-diffusion) | Pytorch | 1.2.14 及更高版本 | 2022年8月 | [JP](https:\u002F\u002Ftech.ailia.ai\u002Fstablediffusion-%E3%83%86%E3%82%AD%E3%82%B9%E3%83%88%E3%81%8B%E3%82%89%E7%94%9F%E6%88%90%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-aa3676787a09) |\n| [\u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_23c1624f9e8e.png\" width=128px>](diffusion\u002Fanything_v3\u002F) | [anything_v3](\u002Fdiffusion\u002Fanything_v3\u002F) | [Linaqruf\u002Fanything-v3.0](https:\u002F\u002Fhuggingface.co\u002FLinaqruf\u002Fanything-v3.0) | Pytorch | 1.5.0 及更高版本 | 2022年11月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_a4c838ac585d.png\" width=128px>](diffusion\u002Fcontrol_net\u002F) | [control_net](\u002Fdiffusion\u002Fcontrol_net\u002F) | [ControlNet](https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FControlNet) | Pytorch | 1.2.15 及更高版本 | 2023年2月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_155ac42c481a.png\" width=128px>](diffusion\u002Flatent-consistency-models\u002F) | [latent-consistency-models](\u002Fdiffusion\u002Flatent-consistency-models\u002F) | [latent-consistency-models](https:\u002F\u002Fgithub.com\u002Fluosiallen\u002Flatent-consistency-model) | Pytorch | 1.2.16 及更高版本 | 2023年10月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_e131a3c178d2.png\" width=128px>](diffusion\u002Fsd-turbo\u002F) | [sd-turbo](\u002Fdiffusion\u002Fsd-turbo\u002F) | [Hugging Face - SD-Turbo](https:\u002F\u002Fhuggingface.co\u002Fstabilityai\u002Fsd-turbo) | Pytorch | 1.2.16 及更高版本 | 2023年11月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_3440a8ec0a3f.png\" width=128px>](diffusion\u002Fsdxl-turbo\u002F) | [sdxl-turbo](\u002Fdiffusion\u002Fsdxl-turbo\u002F) | [Hugging Face - SDXL-Turbo](https:\u002F\u002Fhuggingface.co\u002Fstabilityai\u002Fsdxl-turbo) | Pytorch | 1.2.16 及更高版本 | 2023年11月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_97b582412aa9.png\" width=128px>](diffusion\u002Fdepth_anything_controlnet\u002F) 
| [depth_anything_controlnet](\u002Fdiffusion\u002Fdepth_anything_controlnet\u002F) | [DepthAnything](https:\u002F\u002Fgithub.com\u002FLiheYoung\u002FDepth-Anything) | Pytorch | 1.2.16 及更高版本 | 2024年1月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_3950d9f12373.png\" width=128px>](diffusion\u002Flatentsync\u002F) | [latentsync](\u002Fdiffusion\u002Flatentsync\u002F) | [LatentSync](https:\u002F\u002Fgithub.com\u002Fbytedance\u002FLatentSync\u002Ftree\u002Fmain) | Pytorch | 1.4.0 及更高版本 | 2024年12月 | |\n\n### 文本转音频\n\n| | 模型 | 参考 | 导出自 | 支持的 Ailia 版本 | 日期 | 博客 |\n|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_45b2c60b368e.png\" width=128px>](diffusion\u002Friffusion\u002F) | [riffusion](\u002Fdiffusion\u002Friffusion\u002F) | [Riffusion](https:\u002F\u002Fgithub.com\u002Friffusion\u002Friffusion) | Pytorch | 1.2.16 及更高版本 | 2022年12月 | |\n\n### 其他\n\n| | 模型 | 参考 | 导出自 | 支持的 Ailia 版本 | 日期 | 博客 |\n|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_6f464f8a6722.png\" width=128px>](diffusion\u002Flatent-diffusion-inpainting\u002F) | [latent-diffusion-inpainting](\u002Fdiffusion\u002Flatent-diffusion-inpainting\u002F) | [Latent Diffusion - inpainting](https:\u002F\u002Fgithub.com\u002FCompVis\u002Flatent-diffusion) | Pytorch | 1.2.10 及更高版本 | 2021年12月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_1ab9d320e69a.png\" width=128px>](diffusion\u002Flatent-diffusion-superresolution\u002F) | [latent-diffusion-superresolution](\u002Fdiffusion\u002Flatent-diffusion-superresolution\u002F) | [Latent Diffusion - 
Super-resolution](https:\u002F\u002Fgithub.com\u002FCompVis\u002Flatent-diffusion) | Pytorch | 1.2.10 及更高版本 | 2021年12月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_bdec48e6a029.png\" width=128px>](diffusion\u002Fdaclip-sde\u002F) | [DA-CLIP](\u002Fdiffusion\u002Fdaclip-sde\u002F) | [DA-CLIP](https:\u002F\u002Fgithub.com\u002FAlgolzw\u002Fdaclip-uir) | Pytorch | 1.2.16 及更高版本 | 2023年10月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_d06b942b5104.png\" width=128px>](diffusion\u002Fmarigold\u002F) | [marigold](\u002Fdiffusion\u002Fmarigold\u002F) | [Marigold: Repurposing Diffusion-Based Image Generators for Monocular Depth Estimation](https:\u002F\u002Fgithub.com\u002Fprs-eth\u002FMarigold) | Pytorch | 1.2.16 及更高版本 | 2023年12月 | |\n\n## 人脸检测\n\n| | 模型 | 参考资料 | 导出自 | 支持的 Ailia 版本 | 发布日期 | 博客 |\n|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_b16d8f6630fb.jpg\" width=128px>](face_detection\u002Fmtcnn\u002F)| [mtcnn](face_detection\u002Fmtcnn\u002F) | [mtcnn](https:\u002F\u002Fgithub.com\u002Fipazc\u002Fmtcnn) | Keras | 1.2.10 及更高版本 | 2016年4月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_2dcf2dba802e.png\" width=128px>](face_detection\u002Fyolov1-face\u002F) | [yolov1-face](\u002Fface_detection\u002Fyolov1-face\u002F) | [YOLO-Face-detection](https:\u002F\u002Fgithub.com\u002Fdannyblueliu\u002FYOLO-Face-detection\u002F) | Darknet | 1.1.0 及更高版本 | 2017年3月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_96525c230153.png\" width=128px>](face_detection\u002Fface-detection-adas\u002F)| [face-detection-adas](face_detection\u002Fface-detection-adas\u002F) | 
[face-detection-adas-0001](https:\u002F\u002Fgithub.com\u002Fopenvinotoolkit\u002Fopen_model_zoo\u002Ftree\u002Fmaster\u002Fmodels\u002Fintel\u002Fface-detection-adas-0001) | OpenVINO | 1.2.5 及更高版本 | 2018年10月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_e6c2f5cd2677.png\" width=128px>](face_detection\u002Fretinaface\u002F)| [retinaface](face_detection\u002Fretinaface\u002F) | [RetinaFace: Single-stage Dense Face Localisation in the Wild.](https:\u002F\u002Fgithub.com\u002Fbiubug6\u002FPytorch_Retinaface) | Pytorch | 1.2.5 及更高版本 | 2019年5月 |  [JP](https:\u002F\u002Ftech.ailia.ai\u002Fretinaface-%E9%A1%94%E6%A4%9C%E5%87%BA%E3%83%A2%E3%83%87%E3%83%AB-37d0807581ce) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_294fe200eed8.png\" width=128px>](face_detection\u002Fblazeface\u002F) |[blazeface](\u002Fface_detection\u002Fblazeface\u002F)| [BlazeFace-PyTorch](https:\u002F\u002Fgithub.com\u002Fhollance\u002FBlazeFace-PyTorch) | Pytorch | 1.2.1 及更高版本 | 2019年7月 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fblazeface-a-machine-learning-model-for-fast-detection-of-face-positions-and-key-points-5dcfb9429d72) [JP](https:\u002F\u002Ftech.ailia.ai\u002Fblazeface-%E9%A1%94%E3%81%AE%E4%BD%8D%E7%BD%AE%E3%81%A8%E3%82%AD%E3%83%BC%E3%83%9D%E3%82%A4%E3%83%B3%E3%83%88%E3%82%92%E9%AB%98%E9%80%9F%E3%81%AB%E6%A4%9C%E5%87%BA%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-e851c348a32b) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_7470bdd9c2b4.png\" width=128px>](face_detection\u002Fyolov3-face\u002F) | [yolov3-face](\u002Fface_detection\u002Fyolov3-face\u002F) | [Face detection using keras-yolov3](https:\u002F\u002Fgithub.com\u002Failia-ai\u002Fyolov3-face) | Keras | 1.2.1 及更高版本 | 2019年12月 | |\n| [\u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_a91ba6d0dafe.png\" width=128px>](face_detection\u002Fface-mask-detection\u002F)| [face-mask-detection](\u002Fface_detection\u002Fface-mask-detection\u002F) | [Face detection using keras-yolov3](https:\u002F\u002Fgithub.com\u002Failia-ai\u002Fyolov3-face) | Keras | 1.2.1 及更高版本 | 2019年12月 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Ffacemaskdetection-a-machine-learning-model-to-determine-if-a-person-is-wearing-a-mask-e5a581ea8af9) [JP](https:\u002F\u002Ftech.ailia.ai\u002Ffacemaskdetection-%E3%83%9E%E3%82%B9%E3%82%AF%E3%82%92%E4%BB%98%E3%81%91%E3%81%A6%E3%81%84%E3%82%8B%E3%81%8B%E3%82%92%E5%88%A4%E5%AE%9A%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-b06793f79a97) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_625529d380f1.png\" width=128px>](face_detection\u002Fdbface\u002F)| [dbface](face_detection\u002Fdbface\u002F) | [DBFace : real-time, single-stage detector for face detection,\u003Cbr\u002F>with faster speed and higher accuracy](https:\u002F\u002Fgithub.com\u002Fdlunion\u002FDBFace) | Pytorch | 1.2.2 及更高版本 | 2020年3月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_46b2882d8f8f.png\" width=128px>](face_detection\u002Fanime-face-detector\u002F)| [anime-face-detector](face_detection\u002Fanime-face-detector\u002F) | [Anime Face Detector](https:\u002F\u002Fgithub.com\u002Fhysts\u002Fanime-face-detector) | Pytorch | 1.2.6 及更高版本 | 2021年10月 | |\n\n## 人脸识别\n\n| | 模型 | 参考资料 | 导出自 | 支持的 Ailia 版本 | 发布日期 | 博客 |\n|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_87f564476059.jpg\" width=128px>](face_identification\u002Ffacenet_pytorch\u002F) 
|[facenet_pytorch](\u002Fface_identification\u002Ffacenet_pytorch) | [Face Recognition Using Pytorch](https:\u002F\u002Fgithub.com\u002Ftimesler\u002Ffacenet-pytorch) | Pytorch | 1.2.6 及更高版本 | 2015年3月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_ed81b6200bb0.png\" width=128px>](face_identification\u002Finsightface\u002F)|[insightface](\u002Fface_identification\u002Finsightface) | [InsightFace: 2D and 3D Face Analysis Project](https:\u002F\u002Fgithub.com\u002Fdeepinsight\u002Finsightface) | Pytorch | 1.2.5 及更高版本 | 2017年9月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_10f9b218a959.jpg\">](face_identification\u002Fvggface2\u002F) |[vggface2](\u002Fface_identification\u002Fvggface2) | [VGGFace2 Dataset for Face Recognition](https:\u002F\u002Fgithub.com\u002Fox-vgg\u002Fvgg_face2) | Caffe | 1.1.0 及更高版本 | 2017年10月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_98ed28fada81.jpg\" width=128px>](face_identification\u002Farcface\u002F)|[arcface](\u002Fface_identification\u002Farcface) | [pytorch implement of arcface](https:\u002F\u002Fgithub.com\u002Fronghuaiyang\u002Farcface-pytorch) | Pytorch | 1.2.1 及更高版本 | 2018年1月 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Farcface-a-machine-learning-model-for-face-recognition-5f743cdac6fa) [JP](https:\u002F\u002Ftech.ailia.ai\u002Farcface-%E9%A1%94%E8%AA%8D%E8%A8%BC%E3%82%92%E8%A1%8C%E3%81%86%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-cbb0e127bd0a) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_98ed28fada81.jpg\" width=128px>](face_identification\u002Fcosface\u002F) |[cosface](\u002Fface_identification\u002Fcosface) | [Pytorch implementation of CosFace](https:\u002F\u002Fgithub.com\u002FMuggleWang\u002FCosFace_pytorch) | Pytorch | 1.2.10 及更高版本 | 2018年1月 | |\n\n## 
人脸比对\n\n### 年龄性别估计\n\n| | 模型 | 参考资料 | 导出自 | 支持的 Ailia 版本 | 日期 | 博客 |\n|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_c7d12fb8f3c7.png\">](face_recognition\u002Fface_classification\u002F) |[face_classification](\u002Fface_recognition\u002Fface_classification) | [实时人脸检测与情绪\u002F性别分类](https:\u002F\u002Fgithub.com\u002Foarriaga\u002Fface_classification) | Keras | 1.1.0 及以上 | 2017年10月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_87ccecff5d87.jpg\" width=128px>](face_recognition\u002Fage-gender-recognition-retail\u002F) | [age-gender-recognition-retail](\u002Fface_recognition\u002Fage-gender-recognition-retail\u002F) | [age-gender-recognition-retail-0013](https:\u002F\u002Fgithub.com\u002Fopenvinotoolkit\u002Fopen_model_zoo\u002Ftree\u002Fmaster\u002Fmodels\u002Fintel\u002Fage-gender-recognition-retail-0013) | OpenVINO | 1.2.5 及以上 | 2018年5月 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fagegenderrecognitionretail-a-machine-learning-model-to-identify-age-and-gender-8506510414b) [JP](https:\u002F\u002Ftech.ailia.ai\u002Fagegenderrecognitionretail-%E5%B9%B4%E9%BD%A2%E3%81%A8%E6%80%A7%E5%88%A5%E3%82%92%E4%BA%88%E6%B8%AC%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-3632935d19ec) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_d1bf195813ee.png\" width=128px>](face_recognition\u002Fmivolo\u002F) | [mivolo](\u002Fface_recognition\u002Fmivolo\u002F) | [MiVOLO：用于年龄和性别估计的多输入Transformer](https:\u002F\u002Fgithub.com\u002FWildChlamydia\u002FMiVOLO) | Pytorch | 1.2.13 及以上 | 2023年7月 | 
[JP](https:\u002F\u002Ftech.ailia.ai\u002Fmulti-input-transformer-for-age-and-gender-estimation-%E5%B9%B4%E9%BD%A2%E3%81%A8%E6%80%A7%E5%88%A5%E3%82%92%E6%8E%A8%E5%AE%9A%E3%81%99%E3%82%8B%E3%81%9F%E3%82%81%E3%81%AE%E3%83%9A%E3%83%BC%E3%83%88%E3%83%88%E3%83%8D%E3%83%83%E3%83%88%E3%83%8D%E3%83%AB-8b77aa8c6dbc) |\n\n### 情绪识别\n\n| | 模型 | 参考资料 | 导出自 | 支持的 Ailia 版本 | 日期 | 博客 |\n|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_7978054e61c6.png\" width=128px>](face_recognition\u002Fferplus\u002F) | [ferplus](\u002Fface_recognition\u002Fferplus\u002F) | [FER+](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FFERPlus) | CNTK | 1.2.2 及以上 | 2016年8月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_c7d12fb8f3c7.png\">](face_recognition\u002Fhsemotion\u002F) | [hsemotion](\u002Fface_recognition\u002Fhsemotion\u002F) | [HSEmotion（高速人脸情绪识别）库](https:\u002F\u002Fgithub.com\u002FHSE-asavchenko\u002Fface-emotion-recognition) | Pytorch | 1.2.5 及以上 | 2021年3月 | |\n\n### 凝视估计\n\n| | 模型 | 参考资料 | 导出自 | 支持的 Ailia 版本 | 日期 | 博客 |\n|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_5a80fdcb8449.png\" width=128px>](face_recognition\u002Fgazeml\u002F) | [gazeml](\u002Fface_recognition\u002Fgazeml\u002F) | [基于TensorFlow的深度学习框架，用于训练高性能凝视估计模型](https:\u002F\u002Fgithub.com\u002Fswook\u002FGazeML) | TensorFlow | 1.2.0 及以上 | 2018年5月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_12874829cd01.png\" width=128px>](face_recognition\u002Fmediapipe_iris\u002F) | [mediapipe_iris](\u002Fface_recognition\u002Fmediapipe_iris\u002F) | 
[irislandmarks.pytorch](https://github.com/cedriclmenard/irislandmarks.pytorch) | Pytorch | 1.2.2 and later | Jun 2020 | [EN](https://medium.com/axinc-ai/mediapipe-iris-detecting-key-points-in-the-eye-637f5c1e728e) [JP](https://tech.ailia.ai/mediapipe-iris-%E7%9B%AE%E3%81%AE%E3%82%AD%E3%83%BC%E3%83%9D%E3%82%A4%E3%83%B3%E3%83%88%E3%82%92%E6%A4%9C%E5%87%BA%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-a4742f143551) |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_fb093360cfe3.png" width=128px>](face_recognition/gazelle/) | [gazelle](/face_recognition/gazelle/) | [gazelle](https://github.com/fkryan/gazelle) | Pytorch | 1.2.16 and later | Dec 2024 | [JP](https://tech.ailia.ai/gaze-lle-%E5%A4%A7%E8%A6%8F%E6%A8%A1%E3%83%87%E3%83%BC%E3%82%BF%E3%81%A7%E5%AD%A6%E7%BF%92%E3%81%95%E3%82%8C%E3%81%9F%E5%9F%BA%E7%9B%A4%E3%83%A2%E3%83%87%E3%83%AB%E3%81%AB%E3%82%88%E3%82%8B%E5%8A%B9%E7%8E%87%E7%9A%84%E3%81%AA%E8%A6%96%E7%B7%9A%E6%8E%A8%E5%AE%9A%E3%83%A2%E3%83%87%E3%83%AB-7176706a0e4e) |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_101da6797a05.png" width=128px>](face_recognition/ax_gaze_estimation/) | [ax_gaze_estimation](/face_recognition/ax_gaze_estimation/) | ax Gaze Estimation | Pytorch | 1.2.2 and later |  | [EN](https://medium.com/axinc-ai/axgazeestimation-a-machine-learning-model-for-estimating-gaze-c9648042d637) [JP](https://tech.ailia.ai/axgazeestimation-%E8%A6%96%E7%B7%9A%E3%82%92%E6%8E%A8%E5%AE%9A%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-8446a791968) |

### Head Pose Estimation

| | Model | Reference | Exported From | Supported Ailia Version | Date | Blog |
|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_4e5017cf7593.png" width=128px>](face_recognition/hopenet/) | [hopenet](/face_recognition/hopenet/) | [deep-head-pose](https://github.com/natanielruiz/deep-head-pose) | Pytorch | 1.2.2 and later | Oct 2017 | [EN](https://medium.com/axinc-ai/hope-net-a-machine-learning-model-for-estimating-face-orientation-83d5af26a513) [JP](https://tech.ailia.ai/hope-net-%E9%A1%94%E3%81%AE%E5%90%91%E3%81%8D%E3%82%92%E6%8E%A8%E5%AE%9A%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-6db21979f935) |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_d41ea9246564.png" width=128px>](face_recognition/6d_repnet/) | [6d_repnet](/face_recognition/6d_repnet/) | [6D Rotation Representation for Unconstrained Head Pose Estimation (Pytorch)](https://github.com/thohemp/6DRepNet) | Pytorch | 1.2.6 and later | Feb 2022 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_97e6d902b4ce.png" width=128px>](face_recognition/l2cs_net/) | [L2CS_Net](/face_recognition/l2cs_net/) | [L2CS_Net](https://github.com/Ahmednull/L2CS-Net) | Pytorch | 1.2.9 and later | Mar 2022 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_d41ea9246564.png" width=128px>](face_recognition/6d_repnet_360/) | [6d_repnet_360](/face_recognition/6d_repnet_360/) | [Towards Robust and Unconstrained Full Range of Rotation Head Pose Estimation](https://github.com/thohemp/6DRepNet360) | Pytorch | 1.2.9 and later | Sep 2023 | |

### Keypoint Detection

| | Model | Reference | Exported From | Supported Ailia Version | Date | Blog |
|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_2e1eccfa3194.png" width=128px>](face_recognition/face_alignment/) |[face_alignment](/face_recognition/face_alignment/)| [2D and 3D Face alignment library build using pytorch](https://github.com/1adrianb/face-alignment) | PyTorch | 1.2.1 and later | Mar 2017 | [EN](https://medium.com/axinc-ai/facealignment-a-machine-learning-model-for-recognizing-key-points-on-a-face-956f5e796efa) [JP](https://tech.ailia.ai/facealignment-%E9%A1%94%E3%81%AE%E3%82%AD%E3%83%BC%E3%83%9D%E3%82%A4%E3%83%B3%E3%83%88%E3%82%92%E8%AA%8D%E8%AD%98%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-a46654c4da14) |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_be2782d43e41.png" width=128px>](face_recognition/prnet/) |[prnet](/face_recognition/prnet)| [Joint 3D Face Reconstruction and Dense Alignment with Position Map Regression Network <br/>](https://github.com/YadiraF/PRNet) | TensorFlow | 1.2.2 and later | Mar 2018 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_de46150732f7.png" width=128px>](face_recognition/facemesh/) | [facemesh](/face_recognition/facemesh/) | [facemesh.pytorch](https://github.com/thepowerfuldeez/facemesh.pytorch) | PyTorch | 1.2.2 and later | Jul 2019 | [EN](https://medium.com/axinc-ai/facemesh-detecting-key-points-on-faces-in-real-time-977c03f1bab) [JP](https://tech.ailia.ai/facemesh-%E3%83%AA%E3%82%A2%E3%83%AB%E3%82%BF%E3%82%A4%E3%83%A0%E3%81%A7%E9%A1%94%E3%81%AE%E3%82%AD%E3%83%BC%E3%83%9D%E3%82%A4%E3%83%B3%E3%83%88%E3%82%92%E6%A4%9C%E5%87%BA%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-bf223a50b7d6) |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_27cc434edfb4.png" width=128px>](face_recognition/facial_feature/) |[facial_feature](/face_recognition/facial_feature/)|[kaggle-facial-keypoints](https://github.com/ailia-ai/kaggle-facial-keypoints)|PyTorch| 1.2.0 and later | Oct 2019 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_2b66ac8ad04a.png" width=128px>](face_recognition/3ddfa/) | [3ddfa](/face_recognition/3ddfa/) | [Towards Fast, Accurate and Stable 3D Dense Face Alignment](https://github.com/cleardusk/3DDFA_V2) | PyTorch | 1.2.10 and later | Sep 2020 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_659db680dddb.png" width=128px>](face_recognition/facemesh_v2/) | [facemesh_v2](/face_recognition/facemesh_v2/) | [MediaPipe Face Landmarker](https://developers.google.com/mediapipe/solutions/vision/face_landmarker) | PyTorch | 1.2.9 and later | May 2023 | [JP](https://tech.ailia.ai/facemeshv2-blendshape%E3%82%82%E8%A8%88%E7%AE%97%E5%8F%AF%E8%83%BD%E3%81%AA%E9%A1%94%E3%81%AE%E3%82%AD%E3%83%BC%E3%83%9D%E3%82%A4%E3%83%B3%E3%83%88%E6%A4%9C%E5%87%BA%E3%83%A2%E3%83%87%E3%83%AB-3198898dccdd) |

### Others

| | Model | Reference | Exported From | Supported Ailia Version | Date | Blog |
|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_48ed68136552.png" width=128px>](face_recognition/face-anti-spoofing/) | [face-anti-spoofing](/face_recognition/face-anti-spoofing/) | [Lightweight Face Anti Spoofing](https://github.com/kprokofi/light-weight-face-anti-spoofing) | PyTorch | 1.2.5 and later | Jul 2020 | [EN](https://medium.com/axinc-ai/faceantispoofing-a-machine-learning-model-to-determine-if-a-face-is-real-b6c30f12abb6) [JP](https://tech.ailia.ai/faceantispoofing-%E6%9C%AC%E7%89%A9%E3%81%AE%E9%A1%94%E3%81%8B%E3%81%A9%E3%81%86%E3%81%8B%E3%82%92%E5%88%A4%E5%AE%9A%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-c7092c1dde43) |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_6d41a0d0a2e4.jpg" width=128px>](face_recognition/ax_facial_features) | [ax_facial_features](/face_recognition/ax_facial_features/)| ax Facial Features | PyTorch | 1.2.5 and later |  |[EN](https://medium.com/axinc-ai/ax-facial-features-eyelids-eyelashes-and-facial-hair-classification-9b3b12f1d6a1) |

## Face Restoration

| | Model | Reference | Exported From | Supported Ailia Version | Date | Blog |
|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_a1fae1f7e2af.png" width=128px>](face_restoration/gfpgan/) | [gfpgan](/face_restoration/gfpgan)| [GFP-GAN: Towards Real-World Blind Face Restoration with Generative Facial Prior](https://github.com/TencentARC/GFPGAN)| PyTorch | 1.2.10 and later | Jan 2021 | [JP](https://tech.ailia.ai/gfpgan-%E9%A1%94%E7%94%BB%E5%83%8F%E3%82%92%E9%AB%98%E7%94%BB%E8%B3%AA%E5%8C%96%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-547acd717086) |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_a39e49459459.png" width=128px>](face_restoration/codeformer/) | [codeformer](/face_restoration/codeformer/) | [CodeFormer: Towards Robust Blind Face Restoration with Codebook Lookup Transformer](https://github.com/sczhou/CodeFormer) | PyTorch | 1.2.9 and later | Jun 2022 | |

## Face Swapping

| | Model | Reference | Exported From | Supported Ailia Version | Date | Blog |
|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_3a545679f95c.png" width=128px>](face_swapping/deepfacelive/) | [deepfacelive](/face_swapping/deepfacelive/) | [DeepFaceLive](https://github.com/iperov/DeepFaceLive) | ONNX Runtime | 1.2.10 and later | Dec 2020 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_56272b36e292.png" width=128px>](face_swapping/sber-swap/) | [sber-swap](/face_swapping/sber-swap/) | [SberSwap](https://github.com/ai-forever/sber-swap) | PyTorch | 1.2.12 and later | Feb 2022 | [JP](https://tech.ailia.ai/sberswap-ai%E3%81%AB%E3%82%88%E3%82%8B%E9%AB%98%E7%B2%BE%E5%BA%A6%E3%81%AAfaceswap-bddae3b8ff84) |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_29a63a621bad.png" width=128px>](face_swapping/facefusion/) | [facefusion](/face_swapping/facefusion/) | [FaceFusion](https://github.com/facefusion/facefusion) | ONNX Runtime | 1.2.10 and later | Aug 2023 | |

## Frame Interpolation

| | Model | Reference | Exported From | Supported Ailia Version | Date | Blog |
|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_353a9c0b31ad.png" width=128px>](frame_interpolation/cain/) | [cain](/frame_interpolation/cain/) | [Channel Attention Is All You Need for Video Frame Interpolation](https://github.com/myungsub/CAIN) | Pytorch | 1.2.5 and later | Nov 2019 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_44dcf4bb670f.png" width=128px>](frame_interpolation/rife/) | [rife](/frame_interpolation/rife/) | [Real-Time Intermediate Flow Estimation for Video Frame Interpolation](https://github.com/megvii-research/ECCV2022-RIFE) | Pytorch | 1.2.13 and later | Nov 2020 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_9e15526b3ba8.png" width=128px>](frame_interpolation/flavr/) | [flavr](/frame_interpolation/flavr/) | [FLAVR: Flow-Agnostic Video Representations for Fast Frame Interpolation](https://github.com/tarun005/FLAVR) | Pytorch | 1.2.7 and later | Dec 2020 | [EN](https://medium.com/axinc-ai/flavr-a-machine-learning-model-to-increase-video-frame-rate-758fe8132818) [JP](https://tech.ailia.ai/flavr-%E5%8B%95%E7%94%BB%E3%81%AE%E3%83%95%E3%83%AC%E3%83%BC%E3%83%A0%E3%83%AC%E3%83%BC%E3%83%88%E3%82%92%E4%B8%8A%E3%81%92%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-6a18211445da) |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_a29846f20d26.png" width=128px>](frame_interpolation/film/) | [film](/frame_interpolation/film/) | [FILM: Frame Interpolation for Large Motion](https://github.com/google-research/frame-interpolation) | Tensorflow | 1.2.10 and later | Feb 2022 | |

## Generative Adversarial Networks

| | Model | Reference | Exported From | Supported Ailia Version | Date | Blog |
|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_2e6de472d666.png">](generative_adversarial_networks/pytorch-gan/) |[pytorch-gan](/generative_adversarial_networks/pytorch-gan) | [Code repo for the Pytorch GAN Zoo project (used to train this model)](https://github.com/facebookresearch/pytorch_GAN_zoo)| Pytorch | 1.2.4 and later | Oct 2017 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_a9781c4aee30.jpg" width=128px>](generative_adversarial_networks/lipgan/) | [lipgan](/generative_adversarial_networks/lipgan/) | [LipGAN](https://github.com/Rudrabha/LipGAN) | Keras | 1.2.15 and later | Oct 2019 | [JP](https://tech.ailia.ai/lipgan-%E3%83%AA%E3%83%83%E3%83%97%E3%82%B7%E3%83%B3%E3%82%AF%E5%8B%95%E7%94%BB%E3%82%92%E7%94%9F%E6%88%90%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-57511508eaff) |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_f33f3e1b9b32.png" width=128px>](generative_adversarial_networks/council-gan/) | [council-gan](/generative_adversarial_networks/council-gan)| [Council-GAN](https://github.com/Onr/Council-GAN)| Pytorch | 1.2.4 and later | Nov 2019 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_fa82f184b3b4.jpg" width=128px>](generative_adversarial_networks/sam/) | [sam](/generative_adversarial_networks/sam)| [Age Transformation Using a Style-Based Regression Model](https://github.com/yuval-alaluf/SAM)| Pytorch | 1.2.9 and later | Feb 2021 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_0570248a7ae6.png" width=128px>](generative_adversarial_networks/encoder4editing/) | [encoder4editing](/generative_adversarial_networks/encoder4editing/) | [Designing an Encoder for StyleGAN Image Manipulation](https://github.com/omertov/encoder4editing) | Pytorch | 1.2.10 and later | Feb 2021 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_6d73f608a3c6.jpg" width=128px>](generative_adversarial_networks/restyle-encoder/) | [restyle-encoder](/generative_adversarial_networks/restyle-encoder)| [ReStyle](https://github.com/yuval-alaluf/restyle-encoder)| Pytorch | 1.2.9 and later | Apr 2021 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_c7da9f919346.png" width=128px>](generative_adversarial_networks/sadtalker/) | [SadTalker](generative_adversarial_networks/sadtalker/) | [SadTalker](https://github.com/OpenTalker/SadTalker) | Pytorch | 1.5.0 and later | Nov 2022 |  |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_5a77d7f7725c.jpg" width=128px>](generative_adversarial_networks/live_portrait/) | [live_portrait](/generative_adversarial_networks/live_portrait)| [LivePortrait](https://github.com/KwaiVGI/LivePortrait) | Pytorch | 1.5.0 and later | Jul 2024 | [JP](https://tech.ailia.ai/live-portrait-1%E6%9E%9A%E3%81%AE%E7%94%BB%E5%83%8F%E3%82%92%E5%8B%95%E3%81%8B%E3%81%9B%E3%82%8Bai%E3%83%A2%E3%83%87%E3%83%AB-8eaa7d3eb683)|

## Hand Detection

| | Model | Reference | Exported From | Supported Ailia Version | Date | Blog |
|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_faa6162271b8.jpg" width=128px>](hand_detection/hand_detection_pytorch/) | [hand_detection_pytorch](/hand_detection/hand_detection_pytorch/) | [hand-detection.PyTorch](https://github.com/zllrunning/hand-detection.PyTorch) | Pytorch | 1.2.2 and later | Mar 2019 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_fc905059ef3b.png" width=128px>](hand_detection/yolov3-hand/) | [yolov3-hand](/hand_detection/yolov3-hand/) | [Hand detection branch of Face detection using keras-yolov3](https://github.com/ailia-ai/yolov3-face/tree/hand_detection) | Keras | 1.2.1 and later | Dec 2019 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_5d809a0ebf02.png" width=128px>](hand_detection/blazepalm/) |[blazepalm](/hand_detection/blazepalm/) | [MediaPipePyTorch](https://github.com/zmurez/MediaPipePyTorch) | Pytorch | 1.2.5 and later | Jun 2020 | |

## Hand Recognition

| | Model | Reference | Exported From | Supported Ailia Version | Date | Blog |
|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_1de0bf3345ca.png" width=128px>](hand_recognition/hand3d/) |[hand3d](/hand_recognition/hand3d/) | [ColorHandPose3D network](https://github.com/lmb-freiburg/hand3d) | TensorFlow | 1.2.5 and later | May 2017 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_ce17bf62db89.png" width=128px>](hand_recognition/v2v-posenet/) |[v2v-posenet](/hand_recognition/v2v-posenet/) | [V2V-PoseNet](https://github.com/mks0601/V2V-PoseNet_RELEASE) | Pytorch | 1.2.6 and later | Nov 2017 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_d2ef872b2106.png" width=128px>](hand_recognition/minimal-hand/) |[minimal-hand](/hand_recognition/minimal-hand/) | [Minimal Hand](https://github.com/CalciferZh/minimal-hand) | TensorFlow | 1.2.8 and later | Mar 2020 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_d739bebccb35.png" width=128px>](hand_recognition/blazehand/) |[blazehand](/hand_recognition/blazehand/) | [MediaPipePyTorch](https://github.com/zmurez/MediaPipePyTorch) | Pytorch | 1.2.5 and later | Jun 2020 | [EN](https://medium.com/axinc-ai/blazehand-a-machine-learning-model-for-detecting-hand-key-points-c3943b82739a) [JP](https://tech.ailia.ai/blazehand-%E6%89%8B%E3%81%AE%E3%82%AD%E3%83%BC%E3%83%9D%E3%82%A4%E3%83%B3%E3%83%88%E3%82%92%E6%A4%9C%E5%87%BA%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-e84e011ef7bc) |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_9def894a6b18.png" width=128px>](hand_recognition/hands_segmentation_pytorch/) |[hands_segmentation_pytorch](/hand_recognition/hands_segmentation_pytorch/) | [hands-segmentation-pytorch](https://github.com/guglielmocamporese/hands-segmentation-pytorch) | Pytorch | 1.2.10 and later | Apr 2021 | |

## Image Captioning

| | Model | Reference | Exported From | Supported Ailia Version | Date | Blog |
|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_8f0be58afa92.jpg" width=128px>](image_captioning/illustration2vec/) | [illustration2vec](/image_captioning/illustration2vec/)|[Illustration2Vec](https://github.com/rezoo/illustration2vec) | Caffe | 1.2.2 and later | Nov 2015 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_68bf1f1c9255.jpg" width=128px>](image_captioning/image_captioning_pytorch/) | [image_captioning_pytorch](/image_captioning/image_captioning_pytorch/)|[Image Captioning pytorch](https://github.com/ruotianluo/ImageCaptioning.pytorch) | Pytorch | 1.2.5 and later | Dec 2016 | [EN](https://medium.com/axinc-ai/image-captioning-pytorch-a-machine-learning-model-for-describing-images-d27562b6f15b) [JP](https://tech.ailia.ai/image-captioning-pytorch-%E7%94%BB%E5%83%8F%E3%82%92%E8%AA%AC%E6%98%8E%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-e690982af19) |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_88e3648135a0.png" width=128px>](image_captioning/blip2/) | [blip2](/image_captioning/blip2/)|[Hugging Face - BLIP-2](https://huggingface.co/spaces/Salesforce/BLIP2) | Pytorch | 1.2.16 and later | Jan 2023 | |

## Image Classification

### CNN

| | Model | Reference | Exported From | Supported Ailia Version | Date | Blog |
|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_c4173ff89833.jpg" width=128px>](image_classification/alexnet/) | [alexnet](/image_classification/alexnet/)|[AlexNet PyTorch](https://pytorch.org/hub/pytorch_vision_alexnet/)|Pytorch| 1.2.5 and later | Sep 2012 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_c4173ff89833.jpg" width=128px>](image_classification/vgg16/) | [vgg16](/image_classification/vgg16/) |[Very Deep Convolutional Networks for Large-Scale Image Recognition]( https://arxiv.org/abs/1409.1556 )|Keras| 1.1.0 and later| Sep 2014 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_7141d1762f0f.jpg" width=128px>](image_classification/googlenet/) | [googlenet](/image_classification/googlenet/) |[Going Deeper with Convolutions]( https://arxiv.org/abs/1409.4842 )|Pytorch| 1.2.0 and later| Sep 2014 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_7141d1762f0f.jpg" width=128px>](image_classification/resnet18/) | [resnet18](/image_classification/resnet18/) | [ResNet18]( https://pytorch.org/vision/main/generated/torchvision.models.resnet18.html) | Pytorch | 1.2.8 and later | Dec 2015 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_7141d1762f0f.jpg" width=128px>](image_classification/resnet50/) | [resnet50](/image_classification/resnet50/) | [Deep Residual Learning for Image Recognition]( https://github.com/KaimingHe/deep-residual-networks) | Chainer | 1.2.0 and later | Dec 2015 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_c4173ff89833.jpg" width=128px>](image_classification/inceptionv3/) | [inceptionv3](/image_classification/inceptionv3/)|[Rethinking the Inception Architecture for Computer Vision](http://arxiv.org/abs/1512.00567)|Pytorch| 1.2.0 and later | Dec 2015 | [JP](https://tech.ailia.ai/ailia-sdk-%E3%83%A2%E3%83%87%E3%83%AB%E7%B4%B9%E4%BB%8B-inceptionv3-b39dd43f285d) |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_c4173ff89833.jpg" width=128px>](image_classification/inceptionv4/) | [inceptionv4](/image_classification/inceptionv4/)|[Keras Inception-V4](https://github.com/kentsommer/keras-inceptionV4)|Keras| 1.2.5 and later | Feb 2016 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_dd0a675c5a0f.jpg" width=128px>](image_classification/wide_resnet50/)| [wide_resnet50](/image_classification/wide_resnet50/)|[Wide Resnet](https://pytorch.org/hub/pytorch_vision_wide_resnet/)|Pytorch| 1.2.5 and later | May 2016 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_c4173ff89833.jpg" width=128px>](image_classification/mobilenetv2/) | [mobilenetv2](/image_classification/mobilenetv2/)|[PyTorch Implemention of MobileNet V2](https://github.com/d-li14/mobilenetv2.pytorch)|Pytorch| 1.2.0 and later | Jan 2018 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_c4173ff89833.jpg" width=128px>](image_classification/mobilenetv3/) | [mobilenetv3](/image_classification/mobilenetv3/)|[PyTorch Implemention of MobileNet V3](https://github.com/d-li14/mobilenetv3.pytorch)|Pytorch| 1.2.1 and later | May 2019 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_c4173ff89833.jpg" width=128px>](image_classification/efficientnet/)| [efficientnet](/image_classification/efficientnet/)|[A PyTorch implementation of EfficientNet]( https://github.com/lukemelas/EfficientNet-PyTorch)|Pytorch| 1.2.3 and later | May 2019 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_c4173ff89833.jpg" width=128px>](image_classification/efficientnetv2/)| [efficientnetv2](/image_classification/efficientnetv2/)|[EfficientNetV2]( https://github.com/google/automl/tree/master/efficientnetv2 )|Pytorch| 1.2.4 and later | Apr 2021 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_dd0a675c5a0f.jpg" width=128px>](image_classification/imagenet21k/) | [imagenet21k](/image_classification/imagenet21k/) | [ImageNet21K](https://github.com/Alibaba-MIIL/ImageNet21K) | Pytorch | 1.2.11 and later | Apr 2021 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_0405447bf67d.jpg" width=128px>](image_classification/mlp_mixer/)| [mlp_mixer](/image_classification/mlp_mixer/)|[MLP-Mixer](https://github.com/jeonsworld/MLP-Mixer-Pytorch)|Pytorch| 1.2.9 and later | May 2021 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_7141d1762f0f.jpg" width=128px>](image_classification/volo/) | [volo](/image_classification/volo/) | [VOLO: Vision Outlooker for Visual Recognition](https://github.com/sail-sg/volo) | Pytorch | 1.2.9 and later | Jun 2021 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_941efb51d361.jpg" width=128px>](image_classification/convnext/) | [convnext](/image_classification/convnext/)|[A PyTorch implementation of ConvNeXt](https://github.com/facebookresearch/ConvNeXt) | Pytorch | 1.2.5 and later | Jan 2022 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_c4173ff89833.jpg" width=128px>](image_classification/mobileone/) | [mobileone](/image_classification/mobileone/)|[A PyTorch implementation of MobileOne](https://github.com/apple/ml-mobileone) | Pytorch | 1.2.1 and later | Jun 2022 | |

### Transformer

| | Model | Reference | Exported From | Supported Ailia Version | Date | Blog |
|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_92fe8596625f.png" width=128px>](image_classification/vit/)| [vit](/image_classification/vit/)|[Pytorch reimplementation of the Vision Transformer (An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale)](https://github.com/jeonsworld/ViT-pytorch)|Pytorch| 1.2.7 and later | Oct 2020 | [EN](https://medium.com/axinc-ai/vision-transformer-state-of-the-art-image-identification-technology-without-convolutional-fd10097ae9c2) [JP](https://tech.ailia.ai/vision-transformer-%E7%95%B3%E3%81%BF%E8%BE%BC%E3%81%BF%E6%BC%94%E7%AE%97%E3%82%92%E7%94%A8%E3%81%84%E3%81%AA%E3%81%84%E6%9C%80%E6%96%B0%E7%94%BB%E5%83%8F%E8%AD%98%E5%88%A5%E6%8A%80%E8%A1%93-84f06978a17f) |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_0f1b4a595049.png" width=128px>](image_classification/clip/) | [clip](/image_classification/clip/)|[CLIP](https://github.com/openai/CLIP) | Pytorch | 1.2.9 and later | Feb 2021 | [EN](https://medium.com/axinc-ai/clip-learning-transferable-visual-models-from-natural-language-supervision-4508b3f0ea46) [JP](https://tech.ailia.ai/clip-%E8%B6%85%E5%A4%A7%E8%A6%8F%E6%A8%A1%E3%83%87%E3%83%83%E3%83%88%E3%81%A7%E4%BA%8B%E5%89%8D%E5%AD%A6%E7%BF%92%E3%81%95%E3%82%8C-%E5%86%8D%E5%AD%A6%E7%BF%92%E3%81%AA%E3%81%97%E3%81%A7%E4%BB%BB%E6%84%8F%E3%81%AE%E7%89%A9%E4%BD%93%E3%82%92%E8%AD%98%E5%88%A5%E3%81%A7%E3%81%8D%E3%82%8B%E7%89%A9%E4%BD%93%E8%AD%98%E5%88%A5%E3%83%A2%E3%83%87%E3%83%AB-2ebc5c1666f) |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_941efb51d361.jpg" width=128px>](image_classification/swin-transformer/) | [swin-transformer](/image_classification/swin-transformer/)|[Swin Transformer](https://github.com/microsoft/Swin-Transformer) | Pytorch | 1.2.6 and later | Mar 2021 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_4711fbf0be4f.jpeg" width=128px>](image_classification/japanese-clip/) | [japanese-clip](/image_classification/japanese-clip/)|[Japanese-CLIP](https://github.com/rinnakk/japanese-clip) | Pytorch | 1.2.15 and later | May 2022 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_4711fbf0be4f.jpeg" width=128px>](image_classification/japanese-stable-clip-vit-l-16/) | [japanese-stable-clip-vit-l-16](/image_classification/japanese-stable-clip-vit-l-16/) | [japanese-stable-clip-vit-l-16](https://huggingface.co/stabilityai/japanese-stable-clip-vit-l-16/) | Pytorch | 1.2.11 and later | Nov 2023 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_2b44dc0f5766.jpeg" width=128px>](image_classification/clip-japanese-base/) | [clip-japanese-base](/image_classification/clip-japanese-base/)|[line-corporation/clip-japanese-base](https://huggingface.co/line-corporation/clip-japanese-base) | Pytorch | 1.2.16 and later | Apr 2024 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_ba36126546cc.jpg" width=128px>](image_classification/siglip2/) | [siglip2](/image_classification/siglip2/)|[Multilingual Vision-Language Encoders with Improved Semantic Understanding, Localization, and Dense Features](https://huggingface.co/google/siglip2-base-patch16-224) | Pytorch | 1.2.16 and later | Feb 2025 | [JP](https://tech.ailia.ai/siglip2-%E6%AC%A1%E4%B8%96%E4%BB%A3%E3%81%AE0%E3%82%B7%E3%83%A7%E3%83%83%E3%83%88%E7%89%A9%E4%BD%93%E8%AD%98%E5%88%A5%E3%83%A2%E3%83%87%E3%83%AB-854e768c3163) |

### Specific Tasks

| | Model | Reference | Exported From | Supported Ailia Version | Date | Blog |
|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_f76cae2df52e.png" width=128px>](image_classification/weather-prediction-from-image/) | [weather-prediction-from-image](/image_classification/weather-prediction-from-image/)|[Weather Prediction From Image - (Warmth Of Image)](https://github.com/berkgulay/weather-prediction-from-image) | Keras | 1.2.5 and later | Oct 2017 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_040e1e655ed9.jpeg" width=128px>](image_classification/partialconv/) | [partialconv](/image_classification/partialconv/)|[Partial Convolution Layer for Padding and Image Inpainting](https://github.com/NVIDIA/partialconv)|Pytorch| 1.2.0 and later | Nov 2018 | |

## Image Inpainting

| | Model | Reference | Exported From | Supported Ailia Version | Date | Blog |
|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_9f6eb330853c.png" width=128px>](image_inpainting/pytorch-inpainting-with-partial-conv/) | [inpainting-with-partial-conv](/image_inpainting/pytorch-inpainting-with-partial-conv/) | [pytorch-inpainting-with-partial-conv](https://github.com/naoto0804/pytorch-inpainting-with-partial-conv) | PyTorch | 1.2.6 and later | Apr 2018 | [EN](https://medium.com/axinc-ai/inpainting-with-partial-conv-a-machine-learning-model-that-predicts-and-fills-in-missing-parts-of-53c046343a85) [JP](https://tech.ailia.ai/inpainting-with-partial-conv-%E7%94%BB%E5%83%8F%E3%81%AE%E6%AC%A0%E6%90%8D%E9%83%A8%E5%88%86%E3%82%92%E4%BA%88%E6%B8%AC%E3%81%97%E3%81%A6%E5%9F%8B%E3%82%81%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-9746576e6490) |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_739673d65451.png" width=128px>](image_inpainting/deepfillv2/) | [deepfillv2](/image_inpainting/deepfillv2/) | [Free-Form Image Inpainting with Gated Convolution](https://github.com/open-mmlab/mmediting/tree/master/configs/inpainting/deepfillv2) | Pytorch | 1.2.9 and later | Jun 2018 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_0bd7bd206fad.png" width=128px>](image_inpainting/inpainting_gmcnn/) | [inpainting_gmcnn](/image_inpainting/inpainting_gmcnn/) | [Image Inpainting via Generative Multi-column Convolutional Neural Networks](https://github.com/shepnerd/inpainting_gmcnn) | TensorFlow | 1.2.6 and later | Oct 2018 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_1105c8ef57db.jpg" width=128px>](image_inpainting/3d-photo-inpainting/) | [3d-photo-inpainting](/image_inpainting/3d-photo-inpainting/) | [3D Photography using Context-aware Layered Depth Inpainting](https://github.com/vt-vl-lab/3d-photo-inpainting) | Pytorch | 1.2.7 and later | Apr 2020 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_999e58d1d7cb.png" width=128px>](image_inpainting/lama/) | [lama](/image_inpainting/lama/) | [LaMa: Resolution-robust Large Mask Inpainting with Fourier Convolutions](https://github.com/advimman/lama) | Pytorch | 1.2.13 and later | Sep 2021 | |

## Image Manipulation

| | Model | Reference | Exported From | Supported Ailia Version | Date | Blog |
|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_d5a8a3ce7dd0.jpg" width=128px>](image_manipulation/colorization/) | [colorization](/image_manipulation/colorization/) | [Colorful Image Colorization](https://github.com/richzhang/colorization) | Pytorch | 1.2.2 and later | Mar 2016 | [EN](https://medium.com/axinc-ai/colorization-a-machine-learning-model-for-colorizing-black-and-white-images-829e35e4f91c) [JP](https://tech.ailia.ai/colorization-%E7%99%BD%E9%BB%92%E7%94%BB%E5%83%8F%E3%82%92%E3%82%AB%E3%83%A9%E3%83%BC%E5%8C%96%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-177d3fd52e40) |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_d29fef1bee5d.png" width=128px>](image_manipulation/cnngeometric_pytorch/) | [cnngeometric_pytorch](/image_manipulation/cnngeometric_pytorch/) | [CNNGeometric PyTorch implementation](https://github.com/ignacio-rocco/cnngeometric_pytorch) | Pytorch | 1.2.7 and later | Mar 2017 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_44bb2f981ab3.png" width=128px>](image_manipulation/style2paints/) | [style2paints](/image_manipulation/style2paints/) | [Style2Paints](https://github.com/lllyasviel/style2paints) | TensorFlow | 1.2.6 and later | Jun 2017 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_ec7f82636195.png" width=128px>](image_manipulation/deblur_gan/) | [deblur_gan](/image_manipulation/deblur_gan/) | [DeblurGAN](https://github.com/KupynOrest/DeblurGAN) | Pytorch | 1.2.6 and later | Nov 2017 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_fc0376679536.png" width=128px>](image_manipulation/pytorch-superpoint/) | [pytorch-superpoint](/image_manipulation/pytorch-superpoint/) | [pytorch-superpoint : Self-Supervised Interest Point Detection and Description](https://github.com/eric-yyjau/pytorch-superpoint) | Pytorch | 1.2.6 and later | Dec 2017 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_c1b5691c0735.png" width=128px>](image_manipulation/noise2noise/) | [noise2noise](/image_manipulation/noise2noise/) | [Learning Image Restoration without Clean Data](https://github.com/joeylitalien/noise2noise-pytorch) | Pytorch | 1.2.0 and later | Mar 2018 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_fd63d6bde9f0.png" width=128px>](image_manipulation/dfe/) | [dfe](/image_manipulation/dfe/) | [Deep Fundamental Matrix Estimation](https://github.com/isl-org/DFE) | Pytorch | 1.2.6 and later | Oct 2018 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_17cbbe2b94d7.png" width=128px>](image_manipulation/illnet/) | [illnet](/image_manipulation/illnet/) | [Document Rectification and Illumination Correction using a Patch-based CNN](https://github.com/xiaoyu258/DocProj) | Pytorch | 1.2.2 and later | Sep 2019 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_7fb95d007dbf.png" width=128px>](image_manipulation/dewarpnet/) | [dewarpnet](/image_manipulation/dewarpnet) | [DewarpNet: Single-Image Document Unwarping With Stacked 3D and 2D Regression Networks](https://github.com/cvlab-stonybrook/DewarpNet) | Pytorch | 1.2.1 and later | Oct 2019 | |
| [<img src="https://oss.gittoolsai.com/images/ailia-ai_ailia-models_readme_d70aa4925090.png" width=128px>](image_manipulation/deep_white_balance/) | [deep_white_balance](/image_manipulation/deep_white_balance/) | [Deep White-Balance Editing, CVPR 2020 (Oral)](https://github.com/mahmoudnafifi/Deep_White_Balance) | PyTorch | 1.2.6 and later | Apr 2020 | |
| [<img
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_81a70b96c669.jpg\" width=128px>](image_manipulation\u002Fu2net_portrait\u002F) | [u2net_portrait](\u002Fimage_manipulation\u002Fu2net_portrait\u002F) | [U^2-Net: Going Deeper with Nested U-Structure for Salient Object Detection](https:\u002F\u002Fgithub.com\u002FNathanUA\u002FU-2-Net) | Pytorch | 1.2.2 及以上 | 2020年5月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_1c9773677251.png\" width=128px>](image_manipulation\u002Finvertible_denoising_network\u002F) | [invertible_denoising_network](\u002Fimage_manipulation\u002Finvertible_denoising_network\u002F) | [Invertible Image Denoising](https:\u002F\u002Fgithub.com\u002FYang-Liu1082\u002FInvDN) | Pytorch | 1.2.8 及以上 | 2021年4月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_16752e9a3ee1.png\" width=128px>](image_manipulation\u002Fdfm\u002F) | [dfm](\u002Fimage_manipulation\u002Fdfm\u002F) | [Deep Feature Matching](https:\u002F\u002Fgithub.com\u002Fufukefe\u002FDFM) | Pytorch | 1.2.6 及以上 | 2021年6月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_25865e9e9a85.png\" width=128px>](image_manipulation\u002Ffbcnn\u002F) | [fbcnn](\u002Fimage_manipulation\u002Ffbcnn\u002F) | [Towards Flexible Blind JPEG Artifacts Removal](https:\u002F\u002Fgithub.com\u002Fjiaxi-jiang\u002FFBCNN) | Pytorch | 1.2.9 及以上 | 2021年9月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_b9a3b3fe56ed.png\" width=128px>](image_manipulation\u002Fdehamer\u002F) | [dehamer](\u002Fimage_manipulation\u002Fdehamer\u002F) | [Image Dehazing Transformer with Transmission-Aware 3D Position Embedding](https:\u002F\u002Fgithub.com\u002FLi-Chongyi\u002FDehamer) | Pytorch | 1.2.13 及以上 | 2022年6月 | |\n| [\u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_550d04c008dd.png\" width=128px>](image_manipulation\u002Flightglue\u002F) | [lightglue](\u002Fimage_manipulation\u002Flightglue\u002F) | [LightGlue-ONNX](https:\u002F\u002Fgithub.com\u002Ffabio-sim\u002FLightGlue-ONNX) | Pytorch | 1.2.15 及以上 | 2023年6月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_37d51c7dc706.png\" width=128px>](image_manipulation\u002Fdocshadow\u002F) | [docshadow](\u002Fimage_manipulation\u002Fdocshadow\u002F) | [DocShadow-ONNX-TensorRT](https:\u002F\u002Fgithub.com\u002Ffabio-sim\u002FDocShadow-ONNX-TensorRT) | Pytorch | 1.2.10 及以上 | 2023年8月 | |\n\n## 图像恢复\n\n| | 模型 | 参考文献 | 导出自 | 支持的 Ailia 版本 | 日期 | 博客 |\n|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_7a391a16dcf4.png\" width=128px>](image_restoration\u002Fnafnet\u002F) | [nafnet](\u002Fimage_restoration\u002Fnafnet\u002F) | [NAFNet: Nonlinear Activation Free Network for Image Restoration](https:\u002F\u002Fgithub.com\u002Fmegvii-research\u002Fnafnet) | Pytorch | 1.2.10 及以上 | 2022年3月 | [JP](https:\u002F\u002Ftech.ailia.ai\u002Fnafnet-%E7%94%BB%E5%83%8F%E3%81%AE%E3%83%96%E3%83%A9%E3%82%92%E9%99%A4%E5%8E%BB%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-b8547fd67597) |\n\n## 图像分割\n\n| | 模型 | 参考 | 导出自 | 支持的 Ailia 版本 | 日期 | 博客 |\n|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_abe357270d5e.jpg\" width=128px>](image_segmentation\u002Fpytorch-fcn\u002F) | [pytorch-fcn](\u002Fimage_segmentation\u002Fpytorch-fcn\u002F) | [pytorch-fcn](https:\u002F\u002Fgithub.com\u002Fwkentaro\u002Fpytorch-fcn) | Pytorch | 
1.3.0 及更高版本 | 2014年11月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_3f45402f933b.png\" width=128px>](image_segmentation\u002Fpytorch-enet\u002F) | [pytorch-enet](\u002Fimage_segmentation\u002Fpytorch-enet\u002F) | [PyTorch-ENet](https:\u002F\u002Fgithub.com\u002Fdavidtvs\u002FPyTorch-ENet) | Pytorch | 1.2.8 及更高版本 | 2016年6月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_2f27413addb8.png\" width=128px>](image_segmentation\u002Ftusimple-DUC\u002F) | [tusimple-DUC](\u002Fimage_segmentation\u002Ftusimple-DUC\u002F) | [TuSimple-DUC](https:\u002F\u002Fgithub.com\u002FTuSimple\u002FTuSimple-DUC) | Pytorch | 1.2.10 及更高版本 | 2017年2月 |  |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_191850dd1c12.jpg\" width=128px>](image_segmentation\u002Fpytorch-unet\u002F) | [pytorch-unet](\u002Fimage_segmentation\u002Fpytorch-unet\u002F) | [Pytorch-Unet](https:\u002F\u002Fgithub.com\u002Fmilesial\u002FPytorch-UNet) | Pytorch | 1.2.5 及更高版本 | 2017年8月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_87d12e450abd.png\" width=128px>](image_segmentation\u002Fdeeplabv3\u002F) | [deeplabv3](\u002Fimage_segmentation\u002Fdeeplabv3\u002F) | [Xception65 作为 DeepLab v3+ 的骨干网络](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Ftree\u002Fmaster\u002Fresearch\u002Fdeeplab) | Chainer | 1.2.0 及更高版本 | 2018年2月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_2799f7c622db.png\" width=128px>](image_segmentation\u002Fpspnet-hair-segmentation\u002F) | [pspnet-hair-segmentation](\u002Fimage_segmentation\u002Fpspnet-hair-segmentation\u002F) | [pytorch-hair-segmentation](https:\u002F\u002Fgithub.com\u002FYBIGTA\u002Fpytorch-hair-segmentation) | Pytorch | 1.2.2 及更高版本 | 2018年11月 | |\n| [\u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_3c51b17e3980.png\" width=128px>](image_segmentation\u002Fswiftnet\u002F) | [swiftnet](\u002Fimage_segmentation\u002Fswiftnet\u002F) | [SwiftNet](https:\u002F\u002Fgithub.com\u002Forsic\u002Fswiftnet) | Pytorch | 1.2.6 及更高版本 | 2019年3月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_fd93d51bea9f.png\" width=128px>](image_segmentation\u002Fhrnet_segmentation\u002F) | [hrnet_segmentation](\u002Fimage_segmentation\u002Fhrnet_segmentation\u002F) | [高分辨率网络（HRNets）用于语义分割](https:\u002F\u002Fgithub.com\u002FHRNet\u002FHRNet-Semantic-Segmentation) | Pytorch | 1.2.1 及更高版本 | 2019年4月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_d217cbda81da.png\" width=128px>](image_segmentation\u002Fhair_segmentation\u002F) | [hair_segmentation](\u002Fimage_segmentation\u002Fhair_segmentation\u002F) | [移动端头发分割](https:\u002F\u002Fgithub.com\u002Fthangtran480\u002Fhair-segmentation) | Keras | 1.2.1 及更高版本 | 2019年7月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_aa49d37f930c.png\" width=128px>](image_segmentation\u002Fpaddleseg\u002F) | [paddleseg](\u002Fimage_segmentation\u002Fpaddleseg\u002F) | [PaddleSeg](https:\u002F\u002Fgithub.com\u002FPaddlePaddle\u002FPaddleSeg\u002Ftree\u002Frelease\u002F2.3\u002Fcontrib\u002FCityscapesSOTA) | Pytorch | 1.2.7 及更高版本 | 2019年8月 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fpaddleseg-highly-accurate-segmentation-model-using-hierarchical-attention-18e69363dc2a) 
[JP](https:\u002F\u002Ftech.ailia.ai\u002Fpaddleseg-%E9%9A%8E%E5%B1%A4%E7%9A%84%E3%81%AA%E3%82%A2%E3%83%86%E3%83%B3%E3%82%B7%E3%83%A7%E3%83%B3%E3%82%92%E4%BD%BF%E7%94%A8%E3%81%97%E3%81%9F%E9%AB%98%E7%B2%BE%E5%BA%A6%E3%81%AA%E3%82%BB%E3%82%B0%E3%83%A1%E3%83%B3%E3%83%86%E3%83%BC%E3%82%B7%E3%83%A7%E3%83%B3%E3%83%A2%E3%83%87%E3%83%AB-acc89bf50423) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_844f01ccedaa.png\" width=128px>](image_segmentation\u002Fhuman_part_segmentation\u002F) | [human_part_segmentation](\u002Fimage_segmentation\u002Fhuman_part_segmentation\u002F) | [人体解析的自我修正](https:\u002F\u002Fgithub.com\u002FPeikeLi\u002FSelf-Correction-Human-Parsing) | Pytorch | 1.2.4 及更高版本 | 2019年10月 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fhumanpartsegmentation-a-machine-learning-model-for-segmenting-human-parts-cd7e39480714) [JP](https:\u002F\u002Ftech.ailia.ai\u002Fhumanpartsegmentation-%E5%8B%95%E7%94%BB%E3%81%8B%E3%82%89%E4%BD%93%E3%81%AE%E9%83%A8%E4%BD%8D%E3%82%92%E3%82%BB%E3%82%B0%E3%83%A1%E3%83%B3%E3%83%86%E3%83%BC%E3%82%B7%E3%83%A7%E3%83%B3%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-e8a0e405255) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_8834a82b2f2a.png\" width=128px>](image_segmentation\u002Fsemantic-segmentation-mobilenet-v3\u002F) | [semantic-segmentation-mobilenet-v3](\u002Fimage_segmentation\u002Fsemantic-segmentation-mobilenet-v3) | [使用 MobileNetV3 进行语义分割](https:\u002F\u002Fgithub.com\u002FOniroAI\u002FSemantic-segmentation-with-MobileNetV3) | TensorFlow | 1.2.5 及更高版本 | 2019年11月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_12c06fdaabd8.jpg\" width=128px>](image_segmentation\u002Fsuim\u002F) | [suim](\u002Fimage_segmentation\u002Fsuim\u002F) | [SUIM](https:\u002F\u002Fgithub.com\u002FIRVLab\u002FSUIM) | Keras | 1.2.6 及更高版本 | 2020年4月 
|  |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_191e6715f3f0.png\" width=128px>](image_segmentation\u002Fyet-another-anime-segmenter\u002F) | [yet-another-anime-segmenter](\u002Fimage_segmentation\u002Fyet-another-anime-segmenter\u002F) | [另一个动漫分割器](https:\u002F\u002Fgithub.com\u002Fzymk9\u002FYet-Another-Anime-Segmenter) | Pytorch | 1.2.6 及更高版本 | 2020年10月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_08d82935701a.png\" width=128px>](image_segmentation\u002Fdense_prediction_transformers\u002F) | [dense_prediction_transformers](\u002Fimage_segmentation\u002Fdense_prediction_transformers\u002F) | [用于密集预测的视觉Transformer](https:\u002F\u002Fgithub.com\u002Fintel-isl\u002FDPT) | Pytorch | 1.2.7 及更高版本 | 2021年3月 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fdpt-segmentation-model-using-vision-transformer-b479f3027468) [JP](https:\u002F\u002Ftech.ailia.ai\u002Fdpt-vision-transformer%E3%82%92%E4%BD%BF%E7%94%A8%E3%81%97%E3%81%9F%E3%82%BB%E3%82%B0%E3%83%A1%E3%83%B3%E3%83%86%E3%83%BC%E3%82%B7%E3%83%A7%E3%83%B3%E3%83%A2%E3%83%87%E3%83%AB-88db4842b4a7) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_5f51acdb3300.png\" width=128px>](image_segmentation\u002Fgroup_vit\u002F) | [group_vit](\u002Fimage_segmentation\u002Fgroup_vit\u002F) | [GroupViT](https:\u002F\u002Fgithub.com\u002FNVlabs\u002FGroupViT) | Pytorch | 1.2.10 及更高版本 | 2022年2月 |  |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_9811c87f224d.png\" width=128px>](image_segmentation\u002Fpp_liteseg\u002F) | [pp_liteseg](\u002Fimage_segmentation\u002Fpp_liteseg\u002F) | [PP-LiteSeg](https:\u002F\u002Fgithub.com\u002FPaddlePaddle\u002FPaddleSeg\u002Ftree\u002Fdevelop\u002Fconfigs\u002Fpp_liteseg) | Pytorch | 1.2.10 及更高版本 | 2022年4月 |  |\n| [\u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_462c099d291a.png\" width=128px>](image_segmentation\u002Fanime-segmentation\u002F) | [anime-segmentation](\u002Fimage_segmentation\u002Fanime-segmentation\u002F) | [动漫分割](https:\u002F\u002Fgithub.com\u002FSkyTNT\u002Fanime-segmentation) | Pytorch | 1.2.9 及更高版本 | 2022年8月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_a792f981018a.png\" width=128px>](image_segmentation\u002Fyolov8-seg\u002F) | [yolov8-seg](\u002Fimage_segmentation\u002Fyolov8-seg\u002F) | [YOLOv8](https:\u002F\u002Fgithub.com\u002Fultralytics\u002Fultralytics) | Pytorch | 1.2.14.1 及更高版本 | 2023年1月 |  |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_cb9ae91d5327.png\" width=128px>](image_segmentation\u002Fsegment-anything\u002F) | [segment-anything](\u002Fimage_segmentation\u002Fsegment-anything\u002F) | [Segment Anything](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fsegment-anything) | Pytorch | 1.2.16 及更高版本 | 2023年4月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_1baef25228f0.png\" width=128px>](image_segmentation\u002Fgrounded_sam\u002F) | [grounded_sam](\u002Fimage_segmentation\u002Fgrounded_sam\u002F) | [Grounded-SAM](https:\u002F\u002Fgithub.com\u002FIDEA-Research\u002FGrounded-Segment-Anything\u002Ftree\u002Fmain) | Pytorch | 1.2.16 及更高版本 | 2023年4月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_e1090f254ed2.png\" width=128px>](image_segmentation\u002Ffast_sam\u002F) | [fast_sam](\u002Fimage_segmentation\u002Ffast_sam\u002F) | [FastSAM](https:\u002F\u002Fgithub.com\u002FCASIA-IVA-Lab\u002FFastSAM) | Pytorch | 1.2.14 及更高版本 | 2023年6月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_47e08849abf0.png\" 
width=128px>](image_segmentation\u002Fmobile_sam\u002F) | [mobile_sam](\u002Fimage_segmentation\u002Fmobile_sam\u002F) | [MobileSAM](https:\u002F\u002Fgithub.com\u002FChaoningZhang\u002FMobileSAM) | Pytorch | 1.6.0 及更高版本 | 2023年6月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_fe0133ff1701.png\" width=128px>](image_segmentation\u002Fedge_sam\u002F) | [edge_sam](\u002Fimage_segmentation\u002Fedge_sam\u002F) | [EdgeSAM](https:\u002F\u002Fgithub.com\u002Fchongzhou96\u002FEdgeSAM) | Pytorch | 1.2.10 及更高版本 | 2023年12月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_a429b166130b.png\" width=128px>](image_segmentation\u002Fsegment-anything-2\u002F) | [segment-anything-2](\u002Fimage_segmentation\u002Fsegment-anything-2\u002F) | [Segment Anything 2](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fsegment-anything-2) | Pytorch | 1.2.16 及更高版本 | 2024年7月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_72c7cb767641.png\" width=128px>](image_segmentation\u002Fyolov11-seg\u002F) | [yolov11-seg](\u002Fimage_segmentation\u002Fyolov11-seg\u002F) | [Ultralytics YOLO11](https:\u002F\u002Fgithub.com\u002Fultralytics\u002Fultralytics) | Pytorch | 1.2.14.1 及更高版本 | 2024年9月 |  |\n\n## 地标分类\n\n| | 模型 | 参考资料 | 导出自 | 支持的 Ailia 版本 | 日期 | 博客 |\n|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_77a16a3a0588.jpg\" width=128px>](landmark_classification\u002Fplaces365\u002F) | [places365](\u002Flandmark_classification\u002Fplaces365\u002F)|[Places365-CNNs 发布](https:\u002F\u002Fgithub.com\u002FCSAILVision\u002Fplaces365) | Pytorch | 1.2.5 及以上 | 2016年10月 | |\n| [\u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_c5083148522e.jpg\" width=128px>](landmark_classification\u002Flandmarks_classifier_asia\u002F) | [landmarks_classifier_asia](\u002Flandmark_classification\u002Flandmarks_classifier_asia\u002F)|[Landmarks classifier_asia_V1.1](https:\u002F\u002Ftfhub.dev\u002Fgoogle\u002Fon_device_vision\u002Fclassifier\u002Flandmarks_classifier_asia_V1\u002F1) | TensorFlow Hub | 1.2.4 及以上 | 2020年4月 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Flandmarksclassifierasia-a-machine-learning-model-to-identify-japanese-tourist-attractions-9bb9600d2c80) [JP](https:\u002F\u002Ftech.ailia.ai\u002Flandmarksclassifierasia-%E6%97%A5%E6%9C%AC%E3%81%AE%E8%A6%B3%E5%85%89%E5%90%8D%E6%89%80%E3%82%92%E8%AD%98%E5%88%A5%E3%81%A7%E3%81%8D%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-dbe930b5653c)|\n\n## 线段检测\n\n| | 模型 | 参考资料 | 导出自 | 支持的 Ailia 版本 | 日期 | 博客 |\n|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_ba579303251b.png\" width=128px>](line_segment_detection\u002Fdexined\u002F) | [dexined](\u002Fline_segment_detection\u002Fdexined\u002F) | [DexiNed：用于边缘检测的密集极端 Inception 网络](https:\u002F\u002Fgithub.com\u002Fxavysp\u002FDexiNed) | Pytorch | 1.2.5 及以上 | 2019年9月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_5f084d327d16.jpg\" width=128px>](line_segment_detection\u002Fmlsd\u002F) | [mlsd](\u002Fline_segment_detection\u002Fmlsd\u002F) | [M-LSD：面向轻量级和实时线段检测](https:\u002F\u002Fgithub.com\u002Fnavervision\u002Fmlsd) | TensorFlow | 1.2.8 及以上 | 2021年6月 |  [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fm-lsd-machine-learning-model-for-detecting-wireframes-ac1b618f459b) 
[JP](https:\u002F\u002Ftech.ailia.ai\u002Fm-lsd-%E3%83%AF%E3%82%A4%E3%83%A4%E3%83%BC%E3%83%95%E3%83%AC%E3%83%BC%E3%83%A0%E3%82%92%E6%A4%9C%E5%87%BA%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-876bef18d908) |\n\n## 低光图像增强\n\n| | 模型 | 参考资料 | 导出自 | 支持的 Ailia 版本 | 日期 | 博客 |\n|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_82d233d3ecc4.png\" width=128px>](low_light_image_enhancement\u002Fagllnet\u002F) | [agllnet](\u002Flow_light_image_enhancement\u002Fagllnet\u002F) | [AGLLNet：注意力引导的低光图像增强（IJCV 2021）](https:\u002F\u002Fgithub.com\u002Fyu-li\u002FAGLLNet) | Pytorch | 1.2.9 及以上 | 2019年8月 |[EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fagllnet-a-machine-learning-model-for-brightening-dark-images-133a0887b5c) [JP](https:\u002F\u002Ftech.ailia.ai\u002Fagllnet-%E6%9A%97%E3%81%84%E7%94%BB%E5%83%8F%E3%82%92%E6%98%8E%E3%82%8B%E3%81%8F%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-d59181ad89a9) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_055b19c188b4.png\" width=128px>](low_light_image_enhancement\u002Fdrbn_skf\u002F) | [drbn_skf](\u002Flow_light_image_enhancement\u002Fdrbn_skf\u002F) | [DRBN SKF](https:\u002F\u002Fgithub.com\u002Flangmanbusi\u002FSemantic-Aware-Low-Light-Image-Enhancement\u002Ftree\u002Fmain\u002FDRBN_SKF) | Pytorch | 1.2.14 及以上 | 2023年4月 | |\n\n## 自然语言处理\n\n### Bert\n\n| 模型 | 参考资料 | 导出自 | 支持的 Ailia 版本 | 日期 | 博客 |\n|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n|[bert](\u002Fnatural_language_processing\u002Fbert) | [pytorch-pretrained-bert](https:\u002F\u002Fpypi.org\u002Fproject\u002Fpytorch-pretrained-bert\u002F) | Pytorch | 1.2.2 及以上 | 2018年10月 | 
[EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fbert-a-machine-learning-model-for-efficient-natural-language-processing-aef3081c24e8) [JP](https:\u002F\u002Ftech.ailia.ai\u002Fbert-%E8%87%AA%E7%84%B6%E8%A8%80%E8%AA%9E%E5%87%A6%E7%90%86%E3%82%92%E5%8A%B9%E7%8E%87%E7%9A%84%E3%81%AB%E5%AD%A6%E7%BF%92%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-3a9c27d78cf8) |\n|[bert_maskedlm](\u002Fnatural_language_processing\u002Fbert_maskedlm) | [huggingface\u002Ftransformers](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Ftransformers) | Pytorch | 1.2.5 及以上 | 2018年10月 | |\n|[bert_question_answering](\u002Fnatural_language_processing\u002Fbert_question_answering) | [huggingface\u002Ftransformers](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Ftransformers) | Pytorch | 1.2.5 及以上 | 2018年10月 | |\n\n### 嵌入\n\n| 模型 | 参考资料 | 导出自 | 支持的 Ailia 版本 | 日期 | 博客 |\n|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n|[sentence_transformers_japanese](\u002Fnatural_language_processing\u002Fsentence_transformers_japanese) | [sentence transformers](https:\u002F\u002Fhuggingface.co\u002Fsentence-transformers\u002Fparaphrase-multilingual-mpnet-base-v2) | Pytorch | 1.2.7 及以上 | 2019年8月 | [JP](https:\u002F\u002Ftech.ailia.ai\u002Fsentencetransformer-%E3%83%86%E3%82%AD%E3%82%B9%E3%83%88%E3%81%8B%E3%82%89embedding%E3%82%92%E5%8F%96%E5%BE%97%E3%81%99%E3%82%8B%E8%A8%80%E8%AA%9E%E5%87%A6%E7%90%86%E3%83%A2%E3%83%87%E3%83%AB-b7d2a9bb2c31) |\n|[multilingual-e5](\u002Fnatural_language_processing\u002Fmultilingual-e5) | [multilingual-e5-base](https:\u002F\u002Fhuggingface.co\u002Fintfloat\u002Fmultilingual-e5-base) | Pytorch | 1.2.15 及以上 | 2022年12月 | [JP](https:\u002F\u002Ftech.ailia.ai\u002Fmultilingual-e5-%E5%A4%9A%E8%A8%80%E8%AA%9E%E3%81%AE%E3%83%86%E3%82%AD%E3%82%B9%E3%83%88%E3%82%92embedding%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-71f1dec7c4f0) 
|\n|[glucose](\u002Fnatural_language_processing\u002Fglucose) | [GLuCoSE（基于通用 Luke 的对比句子嵌入）-基础日语版](https:\u002F\u002Fhuggingface.co\u002Fpkshatech\u002FGLuCoSE-base-ja) | Pytorch | 1.2.15 及以上 | 2023年7月 | |\n|[ruri-v3](\u002Fnatural_language_processing\u002Fruri-v3) | [ruri-v3-310m](https:\u002F\u002Fhuggingface.co\u002Fcl-nagoya\u002Fruri-v3-310m) | Pytorch | 1.2.13 及以上 | 2025年4月 | |\n|[embeddinggemma](\u002Fnatural_language_processing\u002Fembeddinggemma) | [EmbeddingGemma](https:\u002F\u002Fai.google.dev\u002Fgemma\u002Fdocs\u002Fembeddinggemma?hl=ja) | Pytorch | 1.2.14 及以上 | 2025年9月 | [JP](https:\u002F\u002Fkyakuno.medium.com\u002Fembedding-gemma-google%E3%81%AE%E9%96%8B%E7%99%BA%E3%81%97%E3%81%9F%E8%BB%BD%E9%87%8F%E3%81%A7%E9%AB%98%E7%B2%BE%E5%BA%A6%E3%81%AAembedding%E3%83%A2%E3%83%87%E3%83%AB-9ec139ddfde9) |\n\n### 错误纠正器\n\n| 模型 | 参考 | 导出自 | 支持的 Ailia 版本 | 日期 | 博客 |\n|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n|[bert_insert_punctuation](\u002Fnatural_language_processing\u002Fbert_insert_punctuation) | [bert-japanese](https:\u002F\u002Fgithub.com\u002Fcl-tohoku\u002Fbert-japanese) | Pytorch | 1.2.15 及更高版本 | 2019年11月 | |\n|[bertjsc](\u002Fnatural_language_processing\u002Fbertjsc) | [bertjsc](https:\u002F\u002Fgithub.com\u002Fer-ri\u002Fbertjsc) | Pytorch | 1.2.15 及更高版本 | 2023年3月 | |\n|[t5_whisper_medical](\u002Fnatural_language_processing\u002Ft5_whisper_medical) | 使用 t5 进行医学术语纠错 | Pytorch | 1.2.13 及更高版本 |  | |\n\n### 字素到音素转换\n\n| 模型 | 参考 | 导出自 | 支持的 Ailia 版本 | 日期 | 博客 |\n|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n|[g2p_en](\u002Fnatural_language_processing\u002Fg2p_en) | [g2p_en](https:\u002F\u002Fgithub.com\u002FKyubyong\u002Fg2p) | Pytorch | 1.2.14 及更高版本 | 2019年1月 | 
[JP](https:\u002F\u002Ftech.ailia.ai\u002Fg2p-en-%E8%8B%B1%E8%AA%9E%E3%81%AE%E3%83%86%E3%82%AD%E3%82%B9%E3%83%88%E3%82%92%E9%9F%B3%E7%B4%A0%E3%81%AB%E5%A4%89%E6%8F%9B%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-88947c27b9ea) |\n|[g2pw](\u002Fnatural_language_processing\u002Fg2pw) | [g2pW](https:\u002F\u002Fgithub.com\u002FGitYCC\u002Fg2pW) | Pytorch | 1.2.9 及更高版本 | 2022年3月 | |\n|[soundchoice-g2p](\u002Fnatural_language_processing\u002Fsoundchoice-g2p) | [Hugging Face - speechbrain\u002Fsoundchoice-g2p](https:\u002F\u002Fhuggingface.co\u002Fspeechbrain\u002Fsoundchoice-g2p) | Pytorch | 1.2.16 及更高版本 | 2022年7月 | |\n\n### 命名实体识别\n\n| 模型 | 参考 | 导出自 | 支持的 Ailia 版本 | 日期 | 博客 |\n|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n|[bert_ner](\u002Fnatural_language_processing\u002Fbert_ner) | [huggingface\u002Ftransformers](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Ftransformers) | Pytorch | 1.2.5 及更高版本 | 2018年10月 | |\n|[t5_base_japanese_ner](\u002Fnatural_language_processing\u002Ft5_base_japanese_ner) | [t5-japanese](https:\u002F\u002Fgithub.com\u002Fsonoisa\u002Ft5-japanese) | Pytorch | 1.2.13 及更高版本 | 2021年3月 | |\n|[bert_ner_japanese](\u002Fnatural_language_processing\u002Fbert_ner_japanese) | [jurabi\u002Fbert-ner-japanese](https:\u002F\u002Fhuggingface.co\u002Fjurabi\u002Fbert-ner-japanese) | Pytorch | 1.2.10 及更高版本 | 2023年3月 | |\n\n### 重排序器\n\n| 模型 | 参考 | 导出自 | 支持的 Ailia 版本 | 日期 | 博客 |\n|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n|[cross_encoder_mmarco](\u002Fnatural_language_processing\u002Fcross_encoder_mmarco) | [jeffwan\u002Fmmarco-mMiniLMv2-L12-H384-v](https:\u002F\u002Fhuggingface.co\u002Fjeffwan\u002Fmmarco-mMiniLMv2-L12-H384-v1) | Pytorch | 1.2.10 及更高版本 | 2022年9月 | 
[JP](https:\u002F\u002Ftech.ailia.ai\u002Fcrossencodermmarco-%E8%B3%AA%E5%95%8F%E6%96%87%E3%81%A8%E5%9B%9E%E7%AD%94%E6%96%87%E3%81%AE%E9%A1%9E%E4%BC%BC%E5%BA%A6%E3%82%92%E8%A8%88%E7%AE%97%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-c90b35e9fc09)|\n|[japanese-reranker-cross-encoder](\u002Fnatural_language_processing\u002Fjapanese-reranker-cross-encoder) | [hotchpotch\u002Fjapanese-reranker-cross-encoder-large-v1](https:\u002F\u002Fhuggingface.co\u002Fhotchpotch\u002Fjapanese-reranker-cross-encoder-large-v1) | Pytorch | 1.2.16 及更高版本 | 2024年4月 | |\n\n### 句子生成\n\n| 模型 | 参考 | 导出自 | 支持的 Ailia 版本 | 日期 | 博客 |\n|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n|[gpt2](\u002Fnatural_language_processing\u002Fgpt2) | [GPT-2](https:\u002F\u002Fgithub.com\u002Fonnx\u002Fmodels\u002Fblob\u002Fmaster\u002Ftext\u002Fmachine_comprehension\u002Fgpt-2\u002FREADME.md) | Pytorch | 1.2.7 及更高版本 | 2019年2月 | |\n|[rinna_gpt2](\u002Fnatural_language_processing\u002Frinna_gpt2) | [japanese-pretrained-models](https:\u002F\u002Fgithub.com\u002Frinnakk\u002Fjapanese-pretrained-models)   | Pytorch | 1.2.7 及更高版本 | 2021年4月 | |\n\n### 情感分析\n\n| 模型 | 参考 | 导出自 | 支持的 Ailia 版本 | 日期 | 博客 |\n|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n|[bert_sentiment_analysis](\u002Fnatural_language_processing\u002Fbert_sentiment_analysis) | [huggingface\u002Ftransformers](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Ftransformers) | Pytorch | 1.2.5 及更高版本 | 2018年10月 | |\n|[bert_tweets_sentiment](\u002Fnatural_language_processing\u002Fbert_tweets_sentiment) | [huggingface\u002Ftransformers](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Ftransformers) | Pytorch | 1.2.5 及更高版本 | 2018年10月 | |\n\n### 摘要生成\n\n| 模型 | 参考 | 导出自 | 支持的 Ailia 版本 | 日期 | 博客 
|\n|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n|[bert_sum_ext](\u002Fnatural_language_processing\u002Fbert_sum_ext) | [BERTSUMEXT](https:\u002F\u002Fgithub.com\u002Fdmmiller612\u002Fbert-extractive-summarizer)   | Pytorch | 1.2.7 及更高版本 | 2019年5月 | |\n|[presumm](\u002Fnatural_language_processing\u002Fpresumm) | [PreSumm](https:\u002F\u002Fgithub.com\u002Fnlpyang\u002FPreSumm)   | Pytorch | 1.2.8 及更高版本| 2019年8月 | |\n|[t5_base_japanese_title_generation](\u002Fnatural_language_processing\u002Ft5_base_japanese_title_generation) | [t5-japanese](https:\u002F\u002Fgithub.com\u002Fsonoisa\u002Ft5-japanese) | Pytorch | 1.2.13 及更高版本 | 2021年3月 | [JP](https:\u002F\u002Ftech.ailia.ai\u002Ft5-%E3%83%86%E3%82%AD%E3%82%B9%E3%83%88%E3%81%8B%E3%82%89%E3%83%86%E3%82%AD%E3%82%B9%E3%83%88%E3%82%92%E7%94%9F%E6%88%90%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-602830bdc5b4) |\n|[t5_base_summarization](\u002Fnatural_language_processing\u002Ft5_base_japanese_summarization) | [t5-japanese](https:\u002F\u002Fgithub.com\u002Fsonoisa\u002Ft5-japanese) | Pytorch | 1.2.13 及更高版本 | 2021年3月 | |\n\n### 翻译\n\n| 模型 | 参考 | 导出自 | 支持的 Ailia 版本 | 日期 | 博客 |\n|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n|[fugumt-en-ja](\u002Fnatural_language_processing\u002Ffugumt-en-ja) | [Fugu-Machine Translator](https:\u002F\u002Fgithub.com\u002Fs-taka\u002Ffugumt)   | Pytorch | 1.2.9 及更高版本 | 2020年11月 | [JP](https:\u002F\u002Ftech.ailia.ai\u002Ffugumt-%E8%8B%B1%E8%AA%9E%E3%81%8B%E3%82%89%E6%97%A5%E6%9C%AC%E8%AA%9E%E3%81%B8%E3%81%AE%E7%BF%BB%E8%A8%B3%E3%82%92%E8%A1%8C%E3%81%86%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-46b839c1b4ae) |\n|[fugumt-ja-en](\u002Fnatural_language_processing\u002Ffugumt-ja-en) | [Fugu-Machine Translator](https:\u002F\u002Fgithub.com\u002Fs-taka\u002Ffugumt)   | Pytorch | 1.2.10 及更高版本 | 2020年11月 | |\n\n### 零样本分类\n\n| 模型 | 参考 | 导出自 | 支持的 Ailia 版本 | 
日期 | 博客 |\n|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n|[bert_zero_shot_classification](\u002Fnatural_language_processing\u002Fbert_zero_shot_classification) | [huggingface\u002Ftransformers](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Ftransformers) | Pytorch | 1.2.5 及更高版本 | 2018年10月 | |\n|[multilingual-minilmv2](\u002Fnatural_language_processing\u002Fmultilingual-minilmv2) | [MoritzLaurer\u002Fmultilingual-MiniLMv2-L12-mnli-xnli](https:\u002F\u002Fhuggingface.co\u002FMoritzLaurer\u002Fmultilingual-MiniLMv2-L12-mnli-xnli) | Pytorch | 1.2.10 及更高版本 | 2022年6月 | |\n\n## 网络入侵检测\n\n| 模型 | 参考 | 导出自 | 支持的 Ailia 版本 | 日期 | 博客 |\n|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [bert-network-packet-flow-header-payload](\u002Fnetwork_intrusion_detection\u002Fbert-network-packet-flow-header-payload\u002F) | [bert-network-packet-flow-header-payload](https:\u002F\u002Fhuggingface.co\u002Frdpahalavan\u002Fbert-network-packet-flow-header-payload)| Pytorch | 1.2.10 及更高版本 | 2023年9月 | |\n| [falcon-adapter-network-packet](\u002Fnetwork_intrusion_detection\u002Ffalcon-adapter-network-packet\u002F) | [falcon-adapter-network-packet](https:\u002F\u002Fhuggingface.co\u002Frdpahalavan\u002Ffalcon-adapter-network-packet)| Pytorch | 1.2.10 及更高版本 | 2023年9月 | |\n\n## 神经渲染\n\n| | 模型 | 参考 | 导出自 | 支持的 Ailia 版本 | 日期 | 博客 |\n|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_629561a770aa.png\" width=128px>](neural_rendering\u002Fnerf\u002F) | [nerf](\u002Fneural_rendering\u002Fnerf\u002F) | [NeRF: 神经辐射场](https:\u002F\u002Fgithub.com\u002Fbmild\u002Fnerf) | Tensorflow | 1.2.10 及更高版本 | 2020年3月 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fnerf-machine-learning-model-to-generate-and-render-3d-models-from-multiple-viewpoint-images-599631dc2075) 
[JP](https:\u002F\u002Ftech.ailia.ai\u002Fnerf-%E8%A4%84%E6%95%B0%E3%81%AE%E8%A6%96%E7%82%B9%E3%81%AE%E7%94%BB%E5%83%8F%E3%81%8B%E3%82%893d%E3%83%A2%E3%83%87%E3%83%AB%E3%82%92%E7%94%9F%E6%88%90%E3%81%97%E3%81%A6%E3%83%AC%E3%83%B3%E3%83%80%E3%83%AA%E3%83%B3%E3%82%B0%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-2d6bee7ff22f) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_e4d9a4b35cf7.gif\" width=128px>](neural_rendering\u002Ftripo_sr\u002F) | [TripoSR](\u002Fneural_rendering\u002Ftripo_sr\u002F) | [TripoSR](https:\u002F\u002Fgithub.com\u002FVAST-AI-Research\u002FTripoSR) | Pytorch | 1.2.6 及更高版本 | 2024年3月 | |\n\n## 不适宜内容检测器\n\n| 模型 | 参考 | 导出自 | 支持的 Ailia 版本 | 日期 | 博客 |\n|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [clip-based-nsfw-detector](\u002Fnsfw_detector\u002Fclip-based-nsfw-detector\u002F) | [CLIP-based-NSFW-Detector](https:\u002F\u002Fgithub.com\u002FLAION-AI\u002FCLIP-based-NSFW-Detector)| Keras | 1.2.10 及更高版本 | 2022年3月 | [JP](https:\u002F\u002Ftech.ailia.ai\u002Fclip-based-nsfw-detector-%E4%B8%8D%E9%81%A9%E5%88%87%E7%94%BB%E5%83%8F%E3%82%92%E6%A4%9C%E5%87%BA%E3%81%A7%E3%81%8D%E3%82%8Bai%E3%83%A2%E3%83%87%E3%83%AB-1ea69dbd7c0d) |\n\n## [目标检测](\u002Fobject_detection\u002F)\n\n### CNN\n\n| | 模型 | 参考资料 | 导出自 | 支持的 Ailia 版本 | 日期 | 博客 |\n|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_613960ffec15.png\" width=128px>](object_detection\u002Fyolov1-tiny\u002F) | [yolov1-tiny](\u002Fobject_detection\u002Fyolov1-tiny\u002F) | [YOLO：实时目标检测](https:\u002F\u002Fpjreddie.com\u002Fdarknet\u002Fyolov1\u002F) | Darknet | 1.1.0 及以上 | 2015年6月 | 
[日文](https:\u002F\u002Ftech.ailia.ai\u002Fyolov1-you-look-only-once%E9%AB%98%E9%80%9F%E3%81%AA%E7%89%A9%E4%BD%93%E6%A4%9C%E5%87%BA%E3%83%A2%E3%83%87%E3%83%AB-92141aab4b69) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_0f5d8761fbc6.png\" width=128px>](object_detection\u002Fyolov2\u002F) | [yolov2](\u002Fobject_detection\u002Fyolov2\u002F) | [YOLO：实时目标检测](https:\u002F\u002Fpjreddie.com\u002Fdarknet\u002Fyolo\u002F) | Pytorch | 1.2.0 及以上 | 2016年12月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_5fff17f4eb17.png\" width=128px>](object_detection\u002Fyolov2-tiny\u002F) | [yolov2-tiny](\u002Fobject_detection\u002Fyolov2-tiny\u002F) | [YOLO：实时目标检测](https:\u002F\u002Fpjreddie.com\u002Fdarknet\u002Fyolo\u002F) | Pytorch | 1.2.6 及以上 | 2016年12月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_df64af60efe6.png\" width=128px>](object_detection\u002Fmaskrcnn\u002F) | [maskrcnn](\u002Fobject_detection\u002Fmaskrcnn\u002F) | [Mask R-CNN：用于实例分割的实时神经网络](https:\u002F\u002Fgithub.com\u002Fonnx\u002Fmodels\u002Ftree\u002Fmaster\u002Fvision\u002Fobject_detection_segmentation\u002Fmask-rcnn) | Pytorch | 1.2.3 及以上 | 2017年3月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_24e777e292da.png\" width=128px>](object_detection\u002Fyolov3\u002F) | [yolov3](\u002Fobject_detection\u002Fyolov3\u002F) | [YOLO：实时目标检测](https:\u002F\u002Fpjreddie.com\u002Fdarknet\u002Fyolo\u002F) | ONNX Runtime | 1.2.1 及以上 | 2018年4月 | [英文](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fyolov3-a-machine-learning-model-to-detect-the-position-and-type-of-an-object-60f1c18f8107) [日文](https:\u002F\u002Ftech.ailia.ai\u002Fyolov3-66c9b998c096) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_9749fbe69a00.png\" 
width=128px>](object_detection\u002Fyolov3-tiny\u002F) | [yolov3-tiny](\u002Fobject_detection\u002Fyolov3-tiny\u002F) | [YOLO：实时目标检测](https:\u002F\u002Fpjreddie.com\u002Fdarknet\u002Fyolo\u002F) | ONNX Runtime | 1.2.1 及以上 | 2018年4月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_5cf77db3b1bd.png\" width=128px>](object_detection\u002Fmobilenet_ssd\u002F) | [mobilenet_ssd](\u002Fobject_detection\u002Fmobilenet_ssd\u002F) | [基于 MobileNetV1、MobileNetV2 和 VGG 的 SSD\u002FSSD-lite PyTorch 实现](https:\u002F\u002Fgithub.com\u002Fqfgaohao\u002Fpytorch-ssd) | Pytorch | 1.2.1 及以上 | 2018年8月 | [英文](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fmobilenetssd-a-machine-learning-model-for-fast-object-detection-37352ce6da7d) [日文](https:\u002F\u002Ftech.ailia.ai\u002Fmobilenetssd-%E9%AB%98%E9%80%9F%E3%81%AB%E7%89%A9%E4%BD%93%E6%A4%9C%E5%87%BA%E3%82%92%E8%A1%8D%E3%81%84%E3%81%9F%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-be3ca37c411) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_9d2ceed49436.png\" width=128px>](object_detection\u002Fm2det\u002F) | [m2det](\u002Fobject_detection\u002Fm2det\u002F) | [M2Det：基于多级特征金字塔网络的单次目标检测器](https:\u002F\u002Fgithub.com\u002Fqijiezhao\u002FM2Det) | Pytorch | 1.2.3 及以上 | 2018年11月 | [英文](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fm2det-highly-accurate-object-detection-model-b5c5bff27970) [日文](https:\u002F\u002Ftech.ailia.ai\u002Fm2det-%E9%AB%98%E7%B2%BE%E5%BA%A6%E3%81%AA%E7%89%A9%E4%BD%93%E6%A4%9C%E5%87%BA%E3%83%A2%E3%83%87%E3%83%AB-bf92a8a3d423) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_d2f0dda17c1f.png\" width=128px>](object_detection\u002Fcenternet\u002F) | [centernet](\u002Fobject_detection\u002Fcenternet\u002F) | [CenterNet：以点表示物体](https:\u002F\u002Fgithub.com\u002Fxingyizhou\u002FCenterNet) | Pytorch | 1.2.1 及以上 | 2019年4月 | 
[英文](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fcenternet-a-machine-learning-model-for-anchorless-object-detection-462c48483cfe) [日文](https:\u002F\u002Ftech.ailia.ai\u002Fcenternet-%E3%82%A2%E3%83%B3%E3%82%AB%E3%83%BC%E3%83%AC%E3%82%B9%E3%81%AA%E7%89%A9%E4%BD%93%E6%A4%9C%E5%87%BA%E3%82%92%E8%A1%8D%E3%81%84%E3%81%9F%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-9ecbadefd884) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_152d09b646c9.png\" width=128px>](object_detection\u002Fyolact\u002F) | [yolact](\u002Fobject_detection\u002Fyolact\u002F) | [You Only Look At CoefficienTs](https:\u002F\u002Fgithub.com\u002Fdbolya\u002Fyolact) | Pytorch | 1.2.6 及以上 | 2019年4月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_1e28bc4a0422.png\" width=128px>](object_detection\u002Fefficientdet\u002F) | [efficientdet](\u002Fobject_detection\u002Fefficientdet\u002F) | [EfficientDet：可扩展且高效的对象检测，基于 PyTorch](https:\u002F\u002Fgithub.com\u002Ftoandaominh1997\u002FEfficientDet.Pytorch) | Pytorch | 1.2.6 及以上 | 2019年11月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_fc66366d6ce8.png\" width=128px>](object_detection\u002Fpedestrian_detection\u002F) | [pedestrian_detection](\u002Fobject_detection\u002Fpedestrian_detection\u002F) | [基于 YOLOv3 的行人检测：研究与应用](https:\u002F\u002Fgithub.com\u002FZyjacya-In-love\u002FPedestrian-Detection-on-YOLOv3_Research-and-APP) | Keras | 1.2.1 及以上 | 2020年3月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_84d8d0877f8d.png\" width=128px>](object_detection\u002Fcrowd_det\u002F) | [crowd_det](\u002Fobject_detection\u002Fcrowd_det\u002F) | [拥挤场景中的目标检测](https:\u002F\u002Fgithub.com\u002FPurkialo\u002FCrowdDet) | Pytorch | 1.2.13 及以上 | 2020年3月 | |\n| [\u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_c6310f97b27a.png\" width=128px>](object_detection\u002Fyolov4\u002F) | [yolov4](\u002Fobject_detection\u002Fyolov4\u002F) | [PyTorch-YOLOv4](https:\u002F\u002Fgithub.com\u002FTianxiaomo\u002Fpytorch-YOLOv4) | Pytorch | 1.2.4 及以上 | 2020年4月 | [英文](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fyolov4-a-machine-learning-model-to-detect-the-position-and-type-of-an-object-4f108ed0507b) [日文](https:\u002F\u002Ftech.ailia.ai\u002Fyolov4-%E7%89%A9%E4%BD%93%E3%82%92%E6%A4%9C%E5%87%BA%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-480f0a635317) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_043e52156f3a.png\" width=128px>](object_detection\u002Fyolov4-tiny\u002F) | [yolov4-tiny](\u002Fobject_detection\u002Fyolov4-tiny\u002F) | [PyTorch-YOLOv4](https:\u002F\u002Fgithub.com\u002FTianxiaomo\u002Fpytorch-YOLOv4) | Pytorch | 1.2.5 及以上 | 2020年4月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_1f8e737995cf.png\" width=128px>](object_detection\u002Fyolov5\u002F) | [yolov5](\u002Fobject_detection\u002Fyolov5\u002F) | [yolov5](https:\u002F\u002Fgithub.com\u002Fultralytics\u002Fyolov5) | Pytorch | 1.2.5 及以上 | 2020年5月 | [英文](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fyolov5-the-latest-model-for-object-detection-b13320ec516b) [日文](https:\u002F\u002Ftech.ailia.ai\u002Fyolov5-%E7%89%A9%E4%BD%93%E6%A4%9C%E5%87%BA%E3%81%AE%E6%9C%80%E6%96%B0%E3%83%A2%E3%83%87%E3%83%AB-5b7316d1e54d) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_7bfe56df7a3a.jpg\" width=128px>](object_detection\u002Fpoly_yolo\u002F) | [poly_yolo](\u002Fobject_detection\u002Fpoly_yolo\u002F) | [Poly YOLO](https:\u002F\u002Fgitlab.com\u002Firafm-ai\u002Fpoly-yolo\u002F) | Keras | 1.2.6 及以上 | 2020年5月 | |\n| [\u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_6e400ae42b82.jpg\" width=128px>](object_detection\u002Fnanodet\u002F) | [nanodet](\u002Fobject_detection\u002Fnanodet\u002F) | [NanoDet](https:\u002F\u002Fgithub.com\u002FRangiLyu\u002Fnanodet) | Pytorch | 1.2.6 及以上 | 2020年11月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_d47fc6d37cff.jpg\" width=128px>](object_detection\u002Fyolor\u002F) | [yolor](\u002Fobject_detection\u002Fyolor\u002F) | [yolor](https:\u002F\u002Fgithub.com\u002FWongKinYiu\u002Fyolor\u002Ftree\u002Fpaper) | Pytorch | 1.2.5 及以上 | 2021年5月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_c8cd491ee18d.jpg\" width=128px>](object_detection\u002Fyolox\u002F) | [yolox](\u002Fobject_detection\u002Fyolox\u002F) | [YOLOX](https:\u002F\u002Fgithub.com\u002FMegvii-BaseDetection\u002FYOLOX) | Pytorch | 1.2.6 及以上 | 2021年7月 | [英文](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fyolox-object-detection-model-exceeding-yolov5-d6cea6d3c4bc) [日文](https:\u002F\u002Ftech.ailia.ai\u002Fyolox-yolov5%E3%82%92%E8%B6%85%E3%81%88%E3%82%8B%E7%89%A9%E4%BD%93%E6%A4%9C%E5%87%BA%E3%83%A2%E3%83%87%E3%83%AB-e9706e15fef2) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_49542d46f386.png\" width=128px>](object_detection\u002Fpicodet\u002F) | [picodet](\u002Fobject_detection\u002Fpicodet\u002F) | [PP-PicoDet](https:\u002F\u002Fgithub.com\u002FBo396543018\u002FPicodet_Pytorch) | Pytorch | 1.2.10 及以上 | 2021年11月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_c7db8c770c36.jpg\" width=128px>](object_detection\u002Fyolox-ti-lite\u002F) | [yolox-ti-lite](\u002Fobject_detection\u002Fyolox-ti-lite\u002F) | [edgeai-yolox](https:\u002F\u002Fgithub.com\u002FTexasInstruments\u002Fedgeai-yolox) | Pytorch | 1.2.9 及以上 | 2021年12月 | 
|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_9b711cff068d.jpg\" width=128px>](object_detection\u002Fyolov7\u002F) | [yolov7](\u002Fobject_detection\u002Fyolov7\u002F) | [YOLOv7](https:\u002F\u002Fgithub.com\u002FWongKinYiu\u002Fyolov7) | Pytorch | 1.2.7 及以上 | 2022年7月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_3fef00f63137.png\" width=128px>](object_detection\u002Ffastest-det\u002F) | [fastest-det](\u002Fobject_detection\u002Ffastest-det\u002F) | [FastestDet](https:\u002F\u002Fgithub.com\u002Fdog-qiuqiu\u002FFastestDet) | Pytorch | 1.2.5 及以上 | 2022年7月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_f0620e4908be.jpg\" width=128px>](object_detection\u002Fyolov\u002F) | [yolov](\u002Fobject_detection\u002Fyolov\u002F) | [YOLOV](https:\u002F\u002Fgithub.com\u002FYuHengsss\u002FYOLOV) | Pytorch | 1.2.10 及以上 | 2022年8月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_ad7e7dd795e0.jpg\" width=128px>](object_detection\u002Fyolov6\u002F) | [yolov6](\u002Fobject_detection\u002Fyolov6\u002F) | [YOLOV6](https:\u002F\u002Fgithub.com\u002Fmeituan\u002FYOLOv6) | Pytorch | 1.2.10 及以上 | 2022年9月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_0034020d6795.jpg\" width=128px>](object_detection\u002Fdamo_yolo\u002F) | [damo_yolo](\u002Fobject_detection\u002Fdamo_yolo\u002F) | [DAMO-YOLO](https:\u002F\u002Fgithub.com\u002Ftinyvision\u002FDAMO-YOLO) | Pytorch | 1.2.9 及以上 | 2022年11月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_7bfe941ad0e2.png\" width=128px>](object_detection\u002Fyolov8\u002F) | [yolov8](\u002Fobject_detection\u002Fyolov8\u002F) | [YOLOv8](https:\u002F\u002Fgithub.com\u002Fultralytics\u002Fultralytics) | Pytorch | 
1.2.14.1 及以上 | 2023年1月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_20920880e146.jpg\" width=128px>](object_detection\u002Fyolox_body_head_hand_face\u002F) | [yolox_body_head_hand_face](\u002Fobject_detection\u002Fyolox_body_head_hand_face\u002F) | [YOLOX-Body-Head-Hand-Face](https:\u002F\u002Fgithub.com\u002FPINTO0309\u002FPINTO_model_zoo\u002Ftree\u002Fmain\u002F434_YOLOX-Body-Head-Hand-Face) | Pytorch | 1.2.15 及以上 | 2024年1月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_26eccdd970c1.png\" width=128px>](object_detection\u002Fyolov9\u002F) | [yolov9](\u002Fobject_detection\u002Fyolov9\u002F) | [YOLOv9](https:\u002F\u002Fgithub.com\u002FWongKinYiu\u002Fyolov9) | Pytorch | 1.2.10 及以上 | 2024年2月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_0b963b22c127.png\" width=128px>](object_detection\u002Fyolov10\u002F) | [yolov10](\u002Fobject_detection\u002Fyolov10\u002F) | [YOLOv10](https:\u002F\u002Fgithub.com\u002FTHU-MIG\u002Fyolov10) | Pytorch | 1.2.11 及以上 | 2024年5月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_479d57d1bdc6.png\" width=128px>](object_detection\u002Fyolov11\u002F) | [yolov11](\u002Fobject_detection\u002Fyolov11\u002F) | [YOLOv11](https:\u002F\u002Fgithub.com\u002Fultralytics\u002Fultralytics) | Pytorch | 1.2.14 及以上 | 2024年9月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_e9e1265b9f2e.png\" width=128px>](object_detection\u002Fyolov12\u002F) | [yolov12](\u002Fobject_detection\u002Fyolov12\u002F) | [YOLOv12](https:\u002F\u002Fgithub.com\u002Fsunsmarterjie\u002Fyolov12) | Pytorch | 1.2.14 及以上 | 2025年2月 | |\n\n### Transformer\n\n| | 模型 | 参考 | 导出自 | 支持的 Ailia 版本 | 日期 | 博客 
|\n|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_36fce29307ad.png\" width=128px>](object_detection\u002Fglip\u002F) | [glip](\u002Fobject_detection\u002Fglip\u002F) | [GLIP](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FGLIP) | Pytorch | 1.2.13 及更高版本 | 2021年12月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_9079e824ff3b.jpg\" width=128px>](object_detection\u002Fdab-detr\u002F) | [dab-detr](\u002Fobject_detection\u002Fdab-detr\u002F) | [DAB-DETR](https:\u002F\u002Fgithub.com\u002FIDEA-opensource\u002FDAB-DETR) | Pytorch | 1.2.12 及更高版本 | 2022年1月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_12c7aad7c8e9.png\" width=128px>](object_detection\u002Fdetic\u002F) | [detic](\u002Fobject_detection\u002Fdetic\u002F) | [使用图像级监督检测两万个类别](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002FDetic) | Pytorch | 1.2.10 及更高版本 | 2022年1月 | [EN](https:\u002F\u002Fmedium.com\u002Fp\u002F49cba412b7d4) [JP](https:\u002F\u002Ftech.ailia.ai\u002Fdetic-21k%E3%82%AF%E3%83%A9%E3%82%B9%E3%82%92%E9%AB%98%E7%B2%BE%E5%BA%A6%E3%81%AB%E3%82%BB%E3%82%B3%E3%83%8B%E3%82%B8%E3%83%A7%E3%83%B3%E3%81%A7%E3%81%8D%E3%82%8B%E7%89%A9%E4%BD%93%E6%A4%9C%E5%87%BA%E3%83%A2%E3%83%87%E3%83%AB-1b8f777ee89a) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_0a7e0f201171.png\" width=128px>](object_detection\u002Fgroundingdino\u002F) | [groundingdino](\u002Fobject_detection\u002Fgroundingdino\u002F) | [Grounding DINO](https:\u002F\u002Fgithub.com\u002FIDEA-Research\u002FGroundingDINO\u002Ftree\u002Fmain) | Pytorch | 1.2.16 及更高版本 | 2023年3月 | 
[JP](https:\u002F\u002Ftech.ailia.ai\u002Fgrounding-dino-%E4%BB%BB%E6%84%8F%E3%81%AE%E7%89%A9%E4%BD%93%E3%82%92%E6%A4%9C%E5%87%BA%E3%81%A7%E3%81%8D%E3%82%8B%E7%89%A9%E4%BD%93%E6%A4%9C%E5%87%BA%E3%83%A2%E3%83%87%E3%83%AB-3cc87db64f0c) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_e9a200d56399.png\" width=128px>](object_detection\u002Frt-detr-v2\u002F) | [rt-detr-v2](\u002Fobject_detection\u002Frt-detr-v2\u002F) | [RT-DETR](https:\u002F\u002Fgithub.com\u002Flyuwenyu\u002FRT-DETR) | Pytorch | 1.2.13 及更高版本 | 2024年7月 |[JP](https:\u002F\u002Ftech.ailia.ai\u002Frt-detr-convolution%E3%81%A8transformer%E3%81%AE%E3%83%8F%E3%82%A4%E3%83%96%E3%83%AA%E3%83%83%E3%83%89%E3%81%AA%E7%89%A9%E4%BD%93%E6%A4%9C%E5%87%BA%E3%83%A2%E3%83%87%E3%83%AB-7b73fd6a8de9) |\n\n### 特定目标\n\n| | 模型 | 参考 | 导出自 | 支持的 Ailia 版本 | 日期 | 博客 |\n|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_74b7a26b17a7.png\" width=128px>](object_detection\u002Ftraffic-sign-detection\u002F) | [traffic-sign-detection](\u002Fobject_detection\u002Ftraffic-sign-detection\u002F) | [交通标志检测](https:\u002F\u002Fgithub.com\u002Faarcosg\u002Ftraffic-sign-detection) | Tensorflow | 1.2.10 及更高版本 | 2018年8月 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Ftrafficsigndetection-machine-learning-model-to-detect-road-signs-76d7c175ee01) [JP](https:\u002F\u002Ftech.ailia.ai\u002Ftrafficsigndetection-%E9%81%93%E8%B7%AF%E6%A8%99%E8%AD%98%E3%82%92%E6%A4%9C%E5%87%BA%E3%81%A7%E3%81%8D%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-d1dc1bd5ff5e) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_6d9f0ebb8bd8.png\" width=128px>](object_detection\u002Fsku110k-densedet\u002F) | [sku110k-densedet](\u002Fobject_detection\u002Fsku110k-densedet\u002F) | 
[SKU110K-DenseDet](https:\u002F\u002Fgithub.com\u002FMedia-Smart\u002FSKU110K-DenseDet) | Pytorch | 1.2.9 及更高版本 | 2019年4月 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fsku110k-densedet-a-machine-learning-model-that-can-detect-products-in-a-store-baf9d98cb441) [JP](https:\u002F\u002Ftech.ailia.ai\u002Fsku110k-densedet-%E5%BA%97%E8%88%97%E5%86%85%E3%81%AE%E5%95%86%E5%93%81%E3%82%92%E6%A4%9C%E5%87%BA%E3%81%A7%E3%81%8D%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-b775184b5e46) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_ed2bffabb44c.png\" width=128px>](object_detection\u002Ffootandball\u002F) | [footandball](\u002Fobject_detection\u002Ffootandball\u002F) | [FootAndBall：集成球员和球检测器](https:\u002F\u002Fgithub.com\u002Fjac99\u002FFootAndBall) | Pytorch | 1.2.0 及更高版本 | 2019年12月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_4059fbd388c5.jpg\" width=128px>](object_detection\u002Fqrcode_wechatqrcode\u002F) | [qrcode_wechatqrcode](\u002Fobject_detection\u002Fqrcode_wechatqrcode\u002F) | [qrcode_wechatqrcode](https:\u002F\u002Fgithub.com\u002Fopencv\u002Fopencv_zoo\u002Ftree\u002F4fb591053ba1201c07c68929cc324787d5afaa6c\u002Fmodels\u002Fqrcode_wechatqrcode) | Caffe | 1.2.15 及更高版本 | 2021年3月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_c2fb3be9c829.png\" width=128px>](object_detection\u002Fmobile_object_localizer\u002F) | [mobile_object_localizer](\u002Fobject_detection\u002Fmobile_object_localizer\u002F) | [mobile_object_localizer_v1](https:\u002F\u002Ftfhub.dev\u002Fgoogle\u002Fobject_detection\u002Fmobile_object_localizer_v1\u002F1) | TensorFlow Hub | 1.2.6 及更高版本 | 2021年6月 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fmobileobjectlocalizer-class-agnostic-mobile-object-detector-b740c0ceb16c) 
[JP](https:\u002F\u002Ftech.ailia.ai\u002Fmobileobjectlocalizer-%E4%BB%BB%E6%84%8F%E3%81%AE%E7%89%A9%E4%BD%93%E3%82%92%E6%A4%9C%E5%87%BA%E3%81%A7%E3%81%8D%E3%82%8B%E7%89%A9%E4%BD%93%E6%A4%9C%E5%87%BA%E3%83%A2%E3%83%87%E3%83%AB-595b54cfab26) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_07a6eca27aa0.jpg\" width=128px>](object_detection\u002Flayout_parsing\u002F) |[layout_parsing](object_detection\u002Flayout_parsing\u002F)  | [unstructured-inference](https:\u002F\u002Fgithub.com\u002FUnstructured-IO\u002Funstructured-inference\u002Ftree\u002Fmain) | Pytorch | 1.2.9 及更高版本 | 2022年12月 | |\n\n## 三维目标检测\n\n| | 模型 | 参考文献 | 导出自 | 支持的 Ailia 版本 | 日期 | 博客 |\n|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_04e1134ff8fe.png\" width=128px>](object_detection_3d\u002Fefficientdet\u002F) | [3d_bbox](\u002Fobject_detection_3d\u002F3d_bbox\u002F) | [利用深度学习和几何学进行 3D 边界框估计](https:\u002F\u002Fgithub.com\u002Fskhadem\u002F3D-BoundingBox) | Pytorch | 1.2.6 及更高版本 | 2016 年 12 月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_cf8fbfb96413.png\" width=128px>](object_detection_3d\u002Fd4lcn\u002F) |[d4lcn](\u002Fobject_detection_3d\u002Fd4lcn\u002F) | [D4LCN](https:\u002F\u002Fgithub.com\u002Fdingmyu\u002FD4LCN) | Pytorch | 1.2.9 及更高版本 | 2019 年 12 月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_ad52ef9f2a6e.png\" width=128px>](object_detection_3d\u002Fegonet\u002F) |[egonet](\u002Fobject_detection_3d\u002Fegonet\u002F) | [EgoNet](https:\u002F\u002Fgithub.com\u002FNicholasli1995\u002FEgoNet) | Pytorch | 1.2.9 及更高版本 | 2020 年 11 月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_e7432e192cd2.png\" 
width=128px>](object_detection_3d\u002Fmediapipe_objectron\u002F) | [mediapipe_objectron](\u002Fobject_detection_3d\u002Fmediapipe_objectron\u002F) | [MediaPipe Objectron](https:\u002F\u002Fgithub.com\u002Fgoogle\u002Fmediapipe) | TensorFlow Lite | 1.2.5 及更高版本 | 2020 年 12 月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_c4c896c15b14.png\" width=128px>](object_detection_3d\u002F3d-object-detection.pytorch\u002F) | [3d-object-detection.pytorch](\u002Fobject_detection_3d\u002F3d-object-detection.pytorch\u002F) | [3d-object-detection.pytorch](https:\u002F\u002Fgithub.com\u002Fsovrasov\u002F3d-object-detection.pytorch) | Pytorch | 1.2.8 及更高版本 | 2021 年 2 月 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002F3dobjectdetectionpytorch-3d-object-detection-model-3e1da4c53f) [JP](https:\u002F\u002Ftech.ailia.ai\u002F3dobjectdetectionpytorch-3d%E3%81%AE%E7%89%A9%E4%BD%93%E6%A4%9C%E5%87%BA%E3%83%A2%E3%83%87%E3%83%AB-8df18b8eb5d1) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_8d432437e89b.png\" width=128px>](object_detection_3d\u002Fdid_m3d\u002F) |[did_m3d](\u002Fobject_detection_3d\u002Fdid_m3d\u002F) | [DID M3D](https:\u002F\u002Fgithub.com\u002FSPengLiang\u002FDID-M3D) | Pytorch | 1.2.11 及更高版本 | 2022 年 7 月 | |\n\n## 目标跟踪\n\n| | 模型 | 参考文献 | 导出自 | 支持的 Ailia 版本 | 日期 | 博客 |\n|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_c976f6892738.gif\" width=128px>](object_tracking\u002Fdeepsort\u002F) | [deepsort](\u002Fobject_tracking\u002Fdeepsort\u002F) | [使用 PyTorch 的 Deep Sort](https:\u002F\u002Fgithub.com\u002FZQPei\u002Fdeep_sort_pytorch) | Pytorch | 1.2.3 及更高版本 | 2017 年 3 月 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fdeepsort-a-machine-learning-model-for-tracking-people-1170743b5984) 
[JP](https:\u002F\u002Ftech.ailia.ai\u002Fdeepsort-%E4%BA%BA%E7%89%A9%E3%81%AE%E3%83%88%E3%83%A9%E3%83%83%E3%82%AD%E3%83%B3%E3%82%B0%E3%82%92%E8%A1%8C%E3%81%86%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-e8cb7410457c) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_a2341bcd2fee.png\" width=128px>](object_tracking\u002Fperson_reid_baseline_pytorch\u002F) | [person_reid_baseline_pytorch](\u002Fobject_tracking\u002Fperson_reid_baseline_pytorch\u002F) | [UTS-Person-reID-Practical](https:\u002F\u002Fgithub.com\u002Flayumi\u002FPerson_reID_baseline_pytorch) | Pytorch | 1.2.6 及更高版本 | 2019 年 3 月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_7d355f3eaf14.png\" width=128px>](object_tracking\u002Fabd_net\u002F) | [abd_net](\u002Fobject_tracking\u002Fabd_net\u002F) | [专注但多样的行人再识别](https:\u002F\u002Fgithub.com\u002FVITA-Group\u002FABD-Net) | Pytorch | 1.2.7 及更高版本 | 2019 年 8 月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_cc7c6edeced4.jpg\" width=128px>](object_tracking\u002Fdeepsort_vehicle\u002F) | [deepsort_vehicle](\u002Fobject_tracking\u002Fdeepsort_vehicle\u002F) | [多摄像头实时目标跟踪](https:\u002F\u002Fgithub.com\u002FLeonLok\u002FMulti-Camera-Live-Object-Tracking) | Pytorch | 1.2.9 及更高版本 | 2020 年 5 月 |  |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_28d39d764e2d.png\" width=128px>](object_tracking\u002Fqd-3dt\u002F) | [qd-3dt](\u002Fobject_tracking\u002Fqd-3dt\u002F) | [单目准密集 3D 对象跟踪](https:\u002F\u002Fgithub.com\u002FSysCV\u002Fqd-3dt) | Pytorch | 1.2.11 及更高版本 | 2021 年 3 月 |　|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_60ad6591ee3c.png\" width=128px>](object_tracking\u002Fcentroids-reid\u002F) | [centroids-reid](\u002Fobject_tracking\u002Fcentroids-reid\u002F) | 
[论质心在图像检索中的不合理有效性](https:\u002F\u002Fgithub.com\u002Fmikwieczorek\u002Fcentroids-reid) | Pytorch | 1.2.9 及更高版本 | 2021 年 4 月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_f165b0304267.png\" width=128px>](object_tracking\u002Fsiam-mot\u002F) | [siam-mot](\u002Fobject_tracking\u002Fsiam-mot\u002F) | [SiamMOT](https:\u002F\u002Fgithub.com\u002Famazon-research\u002Fsiam-mot) | Pytorch | 1.2.9 及更高版本 | 2021 年 5 月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_6b760d6d86bd.png\" width=128px>](object_tracking\u002Fbytetrack\u002F) | [bytetrack](\u002Fobject_tracking\u002Fbytetrack\u002F) | [ByteTrack](https:\u002F\u002Fgithub.com\u002Fifzhang\u002FByteTrack) | Pytorch | 1.2.5 及更高版本 | 2021 年 10 月 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fbytetrack-tracking-model-that-also-considers-low-accuracy-bounding-boxes-17f5ed70e00c) [JP](https:\u002F\u002Ftech.ailia.ai\u002Fbytetrack-%E4%BD%8E%E3%81%84%E7%A2%BA%E5%BA%A6%E3%81%AEboundingbox%E3%82%82%E8%80%83%E6%85%AE%E3%81%99%E3%82%8B%E3%83%88%E3%83%A9%E3%83%83%E3%82%AD%E3%83%B3%E3%82%B0%E3%83%A2%E3%83%87%E3%83%AB-244b994d5afb) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_1798b85d116f.png\" width=128px>](object_tracking\u002Fstrong_sort\u002F) | [strong_sort](\u002Fobject_tracking\u002Fstrong_sort\u002F) | [StrongSORT](https:\u002F\u002Fgithub.com\u002FdyhBUPT\u002FStrongSORT) | Pytorch | 1.2.15 及更高版本 | 2022 年 2 月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_637e2ebf59ab.jpg\" width=128px>](object_tracking\u002Fsamurai\u002F) | [samurai](\u002Fobject_tracking\u002Fsamurai\u002F) | [SAMURAI：通过运动感知记忆将 Segment Anything Model 适配到零样本视觉跟踪](https:\u002F\u002Fgithub.com\u002Fyangchris11\u002Fsamurai) | Pytorch | 1.6.1 及更高版本 | 2024 年 11 月 |  |\n\n## 光流估计\n\n| | 模型 | 参考文献 | 导出自 | 支持的 Ailia 版本 
| 日期 | 博客 |\n|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_4c0af4ba532f.png\" width=128px>](optical_flow_estimation\u002Fraft\u002F) | [raft](\u002Foptical_flow_estimation\u002Fraft\u002F) | [RAFT：用于光流估计的递归全对场变换](https:\u002F\u002Fgithub.com\u002Fprinceton-vl\u002FRAFT) | Pytorch | 1.2.6 及更高版本 | 2020 年 3 月 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fraft-a-machine-learning-model-for-estimating-optical-flow-6ab6d077e178) [JP](https:\u002F\u002Ftech.ailia.ai\u002Fraft-optical-flow%E3%82%92%E6%8E%A8%E5%AE%9A%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-bf898965de05) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_65c4e455301a.gif\" width=128px>](optical_flow_estimation\u002Fcotracker3\u002F) | [cotracker3](\u002Foptical_flow_estimation\u002Fcotracker3\u002F) | [CoTracker3：通过对真实视频进行伪标注实现更简单、更好的点跟踪](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fco-tracker) | Pytorch | 1.6.1 及更高版本 | 2024 年 10 月 |  |\n\n## 点云分割\n\n| | 模型 | 参考资料 | 导出自 | 支持的 Ailia 版本 | 日期 | 博客 |\n|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_9cec894c5de3.png\" width=128px>](point_segmentation\u002Fpointnet_pytorch\u002F) | [pointnet_pytorch](\u002Fpoint_segmentation\u002Fpointnet_pytorch\u002F) | [PointNet.pytorch](https:\u002F\u002Fgithub.com\u002Ffxia22\u002Fpointnet.pytorch) | Pytorch | 1.2.6 及更高版本 | 2016年12月 | |\n\n## 姿态估计\n\n| | 模型 | 参考资料 | 导出自 | 支持的 Ailia 版本 | 日期 | 博客 |\n|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [\u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_ceacc19b9bab.png\" width=128px>](pose_estimation\u002Fopenpose\u002F) |[openpose](\u002Fpose_estimation\u002Fopenpose\u002F) | [CVPR'17 实时多人姿态估计代码库（口头报告）](https:\u002F\u002Fgithub.com\u002FZheC\u002FRealtime_Multi-Person_Pose_Estimation) | Caffe | 1.2.1 及更高版本 | 2016年11月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_f7b57d156b42.png\" width=128px>](pose_estimation\u002Fposenet\u002F) |[posenet](\u002Fpose_estimation\u002Fposenet\u002F) | [PoseNet Pytorch](https:\u002F\u002Fgithub.com\u002Frwightman\u002Fposenet-pytorch) | Pytorch | 1.2.10 及更高版本 | 2017年1月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_1626dc168e3e.png\" width=128px>](pose_estimation\u002Fpose_resnet\u002F) |[pose_resnet](\u002Fpose_estimation\u002Fpose_resnet\u002F) | [人体姿态估计与跟踪的简单基线](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002Fhuman-pose-estimation.pytorch) | Pytorch | 1.2.1 及更高版本 | 2018年4月 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fposeresnet-a-top-down-machine-learning-model-for-skeletal-detection-9454f391ae4d) [JP](https:\u002F\u002Ftech.ailia.ai\u002Fposeresnet-%E3%83%88%E3%83%83%E3%83%97%E3%83%80%E3%82%A6%E3%83%B3%E3%81%A7%E9%AA%A8%E6%A0%BC%E6%A4%9C%E5%87%BA%E3%82%92%E8%A1%8C%E3%81%86%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-9e0d20396d1e) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_1974f0ff85f6.png\" width=128px>](pose_estimation\u002Flightweight-human-pose-estimation\u002F)  |[lightweight-human-pose-estimation](\u002Fpose_estimation\u002Flightweight-human-pose-estimation\u002F) | [在 PyTorch 中快速准确的人体姿态估计。\u003Cbr\u002F>包含\u003Cbr\u002F>“CPU 上的实时 2D 多人姿态估计：轻量级 OpenPose”论文的实现。](https:\u002F\u002Fgithub.com\u002FDaniil-Osokin\u002Flightweight-human-pose-estimation.pytorch) | Pytorch | 1.2.1 
及更高版本 | 2018年11月 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Flightweighthumanpose-a-machine-learning-model-for-fast-multi-person-skeleton-detection-631c042bed50) [JP](https:\u002F\u002Ftech.ailia.ai\u002Flightweighthumanpose-%E9%AB%98%E9%80%9F%E3%81%AB%E8%A4%87%E6%95%B0%E4%BA%BA%E3%81%AE%E9%AA%A8%E6%A0%BC%E3%82%92%E6%A4%9C%E5%87%BA%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-bc34d420e6e2) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_ec06240b4c62.png\" width=128px>](pose_estimation\u002Fanimalpose\u002F) |[animalpose](\u002Fpose_estimation\u002Fanimalpose\u002F) | [MMPose - 2D 动物姿态估计](https:\u002F\u002Fgithub.com\u002Fopen-mmlab\u002Fmmpose) | Pytorch | 1.2.7 及更高版本 | 2019年8月 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fanimalpose-pose-esimation-for-animals-700603e0dbae) [JP](https:\u002F\u002Ftech.ailia.ai\u002Fanimalpose-%E5%8B%95%E7%89%A9%E3%81%AE%E9%AA%A8%E6%A0%BC%E6%A4%9C%E5%87%BA%E3%83%A2%E3%83%87%E3%83%AB-f7f667c0e69d) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_4c776f30d8bb.png\" width=128px>](pose_estimation\u002Fefficientpose\u002F) |[efficientpose](\u002Fpose_estimation\u002Fefficientpose\u002F) | [EfficientPose 的代码库](https:\u002F\u002Fgithub.com\u002Fdaniegr\u002FEfficientPose) | TensorFlow | 1.2.6 及更高版本 | 2020年4月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_94c38455f703.png\" width=128px>](pose_estimation\u002Fblazepose\u002F) |[blazepose](\u002Fpose_estimation\u002Fblazepose\u002F) | [MediaPipePyTorch](https:\u002F\u002Fgithub.com\u002Fzmurez\u002FMediaPipePyTorch) | Pytorch | 1.2.5 及更高版本 | 2020年6月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_cc1766399ce0.png\" width=128px>](pose_estimation\u002Fmediapipe_holistic\u002F) 
|[mediapipe_holistic](\u002Fpose_estimation\u002Fmediapipe_holistic\u002F) | [MediaPipe Holistic](https:\u002F\u002Fgoogle.github.io\u002Fmediapipe\u002Fsolutions\u002Fholistic.html) | TensorFlow | 1.2.9 及更高版本 | 2020年12月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_65dd24c1b393.png\" width=128px>](pose_estimation\u002Fmovenet\u002F) |[movenet](\u002Fpose_estimation\u002Fmovenet\u002F) | [movenet 的代码库](https:\u002F\u002Fwww.tensorflow.org\u002Fhub\u002Ftutorials\u002Fmovenet) | TensorFlow | 1.2.8 及更高版本 | 2021年5月 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fmovenet-pose-estimation-for-video-with-intense-motion-2b92f53f3c8) [JP](https:\u002F\u002Ftech.ailia.ai\u002Fmovenet-%E5%8B%95%E3%81%8D%E3%81%AE%E6%BF%80%E3%81%97%E3%81%84%E5%8B%95%E7%94%BB%E5%90%91%E3%81%91%E9%AA%A8%E6%A0%BC%E6%A4%9C%E5%87%BA%E3%83%A2%E3%83%87%E3%83%AB-d26d9e06126c)|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_3b0d09df782a.png\" width=128px>](pose_estimation\u002Fap-10k\u002F) |[ap-10k](\u002Fpose_estimation\u002Fap-10k\u002F) | [AP-10K](https:\u002F\u002Fgithub.com\u002FAlexTheBad\u002FAP-10K)  | Pytorch | 1.2.4 及更高版本 | 2021年8月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_88f316bda885.png\" width=128px>](pose_estimation\u002Fe2pose\u002F) |[e2pose](\u002Fpose_estimation\u002Fe2pose\u002F) | [E2Pose](https:\u002F\u002Fgithub.com\u002FAISIN-TRC\u002FE2Pose)  | Tensorflow | 1.2.5 及更高版本 | 2022年10月 | |\n\n## 三维姿态估计\n\n| | 模型 | 参考文献 | 导出自 | 支持的 Ailia 版本 | 日期 | 博客 |\n|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_95b3c01e4b57.png\" width=128px>](pose_estimation_3d\u002Fpose-hg-3d\u002F) |[pose-hg-3d](\u002Fpose_estimation_3d\u002Fpose-hg-3d\u002F) | 
[迈向野外环境下的三维人体姿态估计：一种弱监督方法](https:\u002F\u002Fgithub.com\u002Fxingyizhou\u002Fpytorch-pose-hg-3d) | Pytorch | 1.2.6 及以上 | 2017年4月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_1974f0ff85f6.png\" width=128px>](pose_estimation_3d\u002F3d-pose-baseline\u002F) |[3d-pose-baseline](\u002Fpose_estimation_3d\u002F3d-pose-baseline\u002F) | [TensorFlow 中用于三维人体姿态估计的简单基线。\u003Cbr\u002F>在 ICCV 17 上展示。](https:\u002F\u002Fgithub.com\u002Funa-dinosauria\u002F3d-pose-baseline) | TensorFlow | 1.2.3 及以上 | 2017年5月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_a96c926ee23e.png\" width=128px>](pose_estimation_3d\u002Flightweight-human-pose-estimation-3d\u002F) |[lightweight-human-pose-estimation-3d](\u002Fpose_estimation_3d\u002Flightweight-human-pose-estimation-3d\u002F) | [PyTorch 中的实时多人三维姿态估计演示。\u003Cbr\u002F>可使用 OpenVINO 后端在 CPU 上进行快速推理。](https:\u002F\u002Fgithub.com\u002FDaniil-Osokin\u002Flightweight-human-pose-estimation-3d-demo.pytorch) | Pytorch | 1.2.1 及以上 | 2017年12月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_9793b2b7c8a2.jpg\" width=128px>](pose_estimation_3d\u002F3dmppe_posenet\u002F) |[3dmppe_posenet](\u002Fpose_estimation_3d\u002F3dmppe_posenet\u002F) | [“基于相机距离感知的自上而下方法，用于从单张 RGB 图像中进行多人三维姿态估计”的 PoseNet](https:\u002F\u002Fgithub.com\u002Fmks0601\u002F3DMPPE_POSENET_RELEASE) | Pytorch | 1.2.6 及以上 | 2019年7月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_347c4d3bda77.png\" width=128px>](pose_estimation_3d\u002Fgast\u002F) |[gast](\u002Fpose_estimation_3d\u002Fgast\u002F) | [用于视频中三维人体姿态估计的图注意力时空卷积网络 (GAST-Net)](https:\u002F\u002Fgithub.com\u002Ffabro66\u002FGAST-Net-3DPoseEstimation) | Pytorch | 1.2.7 及以上 | 2020年3月 | 
[EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fgast-a-machine-learning-model-that-predicts-a-3d-skeleton-from-a-2d-skeleton-44449d1ff78d) [JP](https:\u002F\u002Ftech.ailia.ai\u002Fgast-2d%E3%81%AE%E9%AA%A8%E6%A0%BC%E3%81%8B%E3%82%893d%E3%81%AE%E9%AA%A8%E6%A0%BC%E3%82%92%E4%BA%88%E6%B8%AC%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-6c70fc4b3b3a) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_05f7b5d96eb4.png\" width=128px>](pose_estimation_3d\u002Fblazepose-fullbody\u002F) |[blazepose-fullbody](\u002Fpose_estimation_3d\u002Fblazepose-fullbody\u002F) | [MediaPipe](https:\u002F\u002Fgoogle.github.io\u002Fmediapipe\u002Fsolutions\u002Fmodels.html#pose) | TensorFlow Lite | 1.2.5 及以上 | 2020年6月 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fblazepose-a-3d-pose-estimation-model-d8689d06b7c4) [JP](https:\u002F\u002Ftech.ailia.ai\u002Fblazepose-3%E6%AC%A1%E5%85%83%E5%BA%A7%E6%A8%99%E3%82%92%E5%8F%96%E5%BE%97%E5%8F%AF%E8%83%BD%E3%81%AA%E9%AA%A8%E6%A0%BC%E6%A4%9C%E5%87%BA%E3%83%A2%E3%83%87%E3%83%AB-a6c588013009) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_1afd73439d11.png\" width=128px>](pose_estimation_3d\u002Fmediapipe_pose_world_landmarks\u002F) |[mediapipe_pose_world_landmarks](\u002Fpose_estimation_3d\u002Fmediapipe_pose_world_landmarks\u002F) | [MediaPipe 姿态的真实世界 3D 坐标](https:\u002F\u002Fgoogle.github.io\u002Fmediapipe\u002Fsolutions\u002Fpose.html#pose_world_landmarks) | TensorFlow Lite | 1.2.10 及以上 | 2022年6月 | |\n\n## 道路检测\n\n| | 模型 | 参考文献 | 导出自 | 支持的 Ailia 版本 | 日期 | 博客 |\n|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_e4ef8b899b64.png\" width=128px>](road_detection\u002Froad-segmentation-adas\u002F) | 
[road-segmentation-adas](\u002Froad_detection\u002Froad-segmentation-adas\u002F) | [road-segmentation-adas-0001](https:\u002F\u002Fgithub.com\u002Fopenvinotoolkit\u002Fopen_model_zoo\u002Ftree\u002Fmaster\u002Fmodels\u002Fintel\u002Froad-segmentation-adas-0001) | OpenVINO | 1.2.5 及以上 | 2018年9月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_9d91f71e6d98.jpg\" width=128px>](road_detection\u002Fcodes-for-lane-detection\u002F) | [codes-for-lane-detection](\u002Froad_detection\u002Fcodes-for-lane-detection\u002F) | [Codes-for-Lane-Detection](https:\u002F\u002Fgithub.com\u002Fcardwing\u002FCodes-for-Lane-Detection) | Pytorch | 1.2.6 及以上 | 2019年8月 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fcodesforlanedetection-a-machine-learning-model-for-detecting-white-lines-on-roads-7bee3aad818d) [JP](https:\u002F\u002Ftech.ailia.ai\u002Fcodesforlanedetection-%E9%81%93%E8%B7%AF%E3%81%AE%E7%99%BD%E7%B7%9A%E3%82%92%E6%A4%9C%E5%87%BA%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-1ffe7c6ccf1e) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_7ab1e03ec7c8.jpg\" width=128px>](road_detection\u002Fultra-fast-lane-detection\u002F) | [ultra-fast-lane-detection](\u002Froad_detection\u002Fultra-fast-lane-detection\u002F) | [Ultra-Fast-Lane-Detection](https:\u002F\u002Fgithub.com\u002Fcfzd\u002FUltra-Fast-Lane-Detection) | Pytorch | 1.2.6 及以上 | 2020年4月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_1ff13165abeb.jpg\" width=128px>](road_detection\u002Fpolylanenet\u002F) | [polylanenet](\u002Froad_detection\u002Fpolylanenet\u002F) | [PolyLaneNet](https:\u002F\u002Fgithub.com\u002Flucastabelini\u002FPolyLaneNet) | Pytorch | 1.2.9 及以上 | 2020年4月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_a08ce74fee69.jpg\" 
width=128px>](road_detection\u002Froneld\u002F) | [roneld](\u002Froad_detection\u002Froneld\u002F) | [RONELD-Lane-Detection](https:\u002F\u002Fgithub.com\u002Fczming\u002FRONELD-Lane-Detection) | Pytorch | 1.2.6 及以上 | 2020年10月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_63d89f55c10a.jpg\" width=128px>](road_detection\u002Flstr\u002F) | [lstr](\u002Froad_detection\u002Flstr\u002F) | [LSTR](https:\u002F\u002Fgithub.com\u002Fliuruijin17\u002FLSTR) | Pytorch | 1.2.8 及以上 | 2020年11月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_cd115fd04565.jpg\" width=128px>](road_detection\u002Fyolop\u002F) | [yolop](\u002Froad_detection\u002Fyolop\u002F) | [YOLOP](https:\u002F\u002Fgithub.com\u002Fhustvl\u002FYOLOP) | Pytorch | 1.2.6 及以上 | 2021年8月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_d5e2694422fa.png\" width=128px>](road_detection\u002Fcdnet\u002F) | [cdnet](\u002Froad_detection\u002Fcdnet\u002F) | [CDNet](https:\u002F\u002Fgithub.com\u002Fzhangzhengde0225\u002FCDNet) | Pytorch | 1.2.5 及以上 | 2022年2月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_7e43011d98e6.jpg\" width=128px>](road_detection\u002Fhybridnets\u002F) | [hybridnets](\u002Froad_detection\u002Fhybridnets\u002F) | [HybridNets](https:\u002F\u002Fgithub.com\u002Fdatvuthanh\u002FHybridNets) | Pytorch | 1.2.6 及以上 | 2022年3月 | |\n\n## 旋转预测\n\n| | 模型 | 参考文献 | 导出自 | 支持的 Ailia 版本 | 日期 | 博客 |\n|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_577fa52815ca.png\" width=256px>](rotation_prediction\u002Frotnet\u002F) |[rotnet](\u002Frotation_prediction\u002Frotnet) | [用于预测图像旋转角度以校正其方向的 
CNN](https:\u002F\u002Fgithub.com\u002Fd4nst\u002FRotNet) | Keras | 1.2.1 及以上 | 2018年3月 | |\n\n## 风格迁移\n\n| | 模型 | 参考文献 | 导出自 | 支持的 Ailia 版本 | 日期 | 博客 |\n|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_f8eed7b2d044.png\" width=128px>](style_transfer\u002Fadain\u002F) | [adain](\u002Fstyle_transfer\u002Fadain\u002F) | [使用自适应实例归一化实现实时任意风格迁移](https:\u002F\u002Fgithub.com\u002Fnaoto0804\u002Fpytorch-AdaIN)| Pytorch | 1.2.1 及更高版本 | 2017年3月 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fadain-a-machine-learning-model-for-style-transfer-341b242c554b) [JP](https:\u002F\u002Ftech.ailia.ai\u002Fadain-%E7%94%BB%E5%83%8F%E3%81%AE%E3%82%B9%E3%82%BF%E3%82%A4%E3%83%AB%E3%82%92%E5%A4%89%E6%8F%9B%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-2443feba832b) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_1b95d3f19f75.png\" width=128px>](style_transfer\u002Fpix2pixHD\u002F) | [pix2pixHD](\u002Fstyle_transfer\u002Fpix2pixHD\u002F) | [pix2pixHD：基于条件 GAN 的高分辨率图像合成与语义操控](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fpix2pixHD) | Pytorch | 1.2.6 及更高版本 | 2017年11月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_cf769a8ea4af.png\" width=128px>](style_transfer\u002Fbeauty_gan\u002F) | [beauty_gan](\u002Fstyle_transfer\u002Fbeauty_gan\u002F) | [BeautyGAN](https:\u002F\u002Fgithub.com\u002Fwtjiang98\u002FBeautyGAN_pytorch) | Pytorch | 1.2.7 及更高版本 | 2018年7月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_f122b5b2b44c.png\" width=128px>](style_transfer\u002Fpsgan\u002F) | [psgan](\u002Fstyle_transfer\u002Fpsgan\u002F) | [PSGAN：鲁棒姿态和表情的空间感知 GAN，用于可定制的化妆迁移](https:\u002F\u002Fgithub.com\u002Fwtjiang98\u002FPSGAN)| Pytorch | 1.2.7 
及更高版本 | 2019年9月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_86cc300ac636.png\" width=128px>](style_transfer\u002Fanimeganv2\u002F) | [animeganv2](\u002Fstyle_transfer\u002Fanimeganv2\u002F) | [AnimeGANv2 的 PyTorch 实现](https:\u002F\u002Fgithub.com\u002Fbryandlee\u002Fanimegan2-pytorch) | Pytorch | 1.2.5 及更高版本 | 2020年11月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_44db83cc45b3.png\" width=128px>](style_transfer\u002Felegant\u002F) | [EleGANt](\u002Fstyle_transfer\u002Felegant\u002F) | [EleGANt：精致且可局部编辑的 GAN，用于化妆迁移](https:\u002F\u002Fgithub.com\u002FChenyu-Yang-2000\u002FEleGANt) | Pytorch | 1.2.15 及更高版本 | 2022年7月 | |\n\n## 超分辨率\n\n| | 模型 | 参考文献 | 导出自 | 支持的 Ailia 版本 | 日期 | 博客 |\n|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_9327818e43cc.png\" width=128px>](super_resolution\u002Fsrresnet\u002F) | [srresnet](\u002Fsuper_resolution\u002Fsrresnet\u002F) | [使用生成对抗网络实现照片级真实感单幅图像超分辨率](https:\u002F\u002Fgithub.com\u002Ftwtygqyy\u002Fpytorch-SRResNet) | Pytorch | 1.2.0 及更高版本 | 2016年9月 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fsrresnet-a-machine-learning-model-to-increase-image-resolution-9efc478f2674) [JP](https:\u002F\u002Ftech.ailia.ai\u002Fsrresnet-%E7%94%BB%E5%83%8F%E3%82%92%E9%AB%98%E5%93%81%E8%B3%AA%E3%81%AB%E6%8B%A1%E5%A4%A7%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-9e35b9a90586) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_41ad4ba3ff92.png\" width=128px>](super_resolution\u002Fedsr\u002F) | [edsr](\u002Fsuper_resolution\u002Fedsr\u002F) | [用于单幅图像超分辨率的增强深度残差网络](https:\u002F\u002Fgithub.com\u002Fsanghyun-son\u002FEDSR-PyTorch.git) | Pytorch | 1.2.6 及更高版本 | 2017年7月 | 
[EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fedsr-a-machine-learning-model-for-super-resolution-image-processing-9deaf36b24ed) [JP](https:\u002F\u002Ftech.ailia.ai\u002Fedsr-%E7%94%BB%E5%83%8F%E3%81%AE%E8%B6%85%E8%A7%A3%E5%83%8F%E5%87%A6%E7%90%86%E3%82%92%E8%A1%8D%E3%81%86%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-2842b1d244d) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_537aa774fae2.png\" width=128px>](super_resolution\u002Fhan\u002F) | [han](\u002Fsuper_resolution\u002Fhan\u002F) | [通过整体注意力网络实现单幅图像超分辨率](https:\u002F\u002Fgithub.com\u002FwwlCape\u002FHAN) | Pytorch | 1.2.6 及更高版本 | 2020年8月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_3dffcc4915c9.jpg\" width=128px>](super_resolution\u002Freal-esrgan\u002F) | [real-esrgan](\u002Fsuper_resolution\u002Freal-esrgan\u002F) | [Real-ESRGAN](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN) | Pytorch | 1.2.9 及更高版本 | 2021年7月 | [JP](https:\u002F\u002Ftech.ailia.ai\u002Freal-esrgan-%E3%83%87%E3%83%8E%E3%82%A4%E3%82%BA%E3%82%92%E5%BC%B7%E5%8C%96%E3%81%97%E3%81%9F%E8%B6%85%E8%A7%A3%E5%83%8F%E3%83%A2%E3%83%87%E3%83%AB-91a434b3683b)|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_8ad3591e5a9a.png\" width=128px>](super_resolution\u002Fswinir\u002F) | [swinir](\u002Fsuper_resolution\u002Fswinir\u002F) | [SwinIR：使用 Swin Transformer 进行图像修复](https:\u002F\u002Fgithub.com\u002FJingyunLiang\u002FSwinIR) | Pytorch | 1.2.12 及更高版本 | 2021年8月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_2bf91cf8ba39.png\" width=128px>](super_resolution\u002Frcan-it\u002F) | [rcan-it](\u002Fsuper_resolution\u002Frcan-it\u002F) | [重新审视 RCAN：改进的图像超分辨率训练方法](https:\u002F\u002Fgithub.com\u002Fzudi-lin\u002Frcan-it) | Pytorch | 1.2.10 及更高版本 | 2022年1月 | |\n| [\u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_e560b017bc96.png\" width=128px>](super_resolution\u002Fhat\u002F) | [Hat](\u002Fsuper_resolution\u002Fhat\u002F) | [Hat](https:\u002F\u002Fgithub.com\u002FXPixelGroup\u002FHAT) | Pytorch | 1.2.6 及更高版本 | 2022年5月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_a11bd60aba65.png\" width=128px>](super_resolution\u002Fspan\u002F) | [SPAN](\u002Fsuper_resolution\u002Fspan\u002F) | [SPAN](https:\u002F\u002Fgithub.com\u002Fhongyuanyu\u002FSPAN) | Pytorch | 1.2.14 及更高版本 | 2023年11月 | [JP](https:\u002F\u002Ftech.ailia.ai\u002Fspan-%E3%83%91%E3%83%A9%E3%83%A1%E3%83%BC%E3%82%BF%E3%83%95%E3%83%AA%E3%83%BC%E3%81%AEattention%E3%81%AB%E3%82%88%E3%82%8B%E5%8A%B9%E7%8E%87%E7%9A%84%E3%81%AA%E8%B6%85%E8%A7%A3%E5%83%8F%E3%83%A2%E3%83%87%E3%83%AB-3af731eae44a) | \n\n## 文本检测\n\n| | 模型 | 参考文献 | 导出自 | 支持的 Ailia 版本 | 日期 | 博客 |\n|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_af5e8d036a04.png\" width=64px>](text_detection\u002Feast\u002F) |[east](\u002Ftext_detection\u002Feast) | [EAST：高效准确的场景文本检测器](https:\u002F\u002Fgithub.com\u002Fargman\u002FEAST) | TensorFlow | 1.2.6 及更高版本 | 2017年4月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_0d65a5546cac.png\" width=64px>](text_detection\u002Fpixel_link\u002F) |[pixel_link](\u002Ftext_detection\u002Fpixel_link) | [Pixel-Link](https:\u002F\u002Fgithub.com\u002FZJULearning\u002Fpixel_link) | TensorFlow | 1.2.6 及更高版本 | 2018年1月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_d5e0097c060f.jpg\" width=64px>](text_detection\u002Fcraft_pytorch\u002F) |[craft_pytorch](\u002Ftext_detection\u002Fcraft_pytorch) | 
[CRAFT：面向文本检测的字符区域感知模型](https:\u002F\u002Fgithub.com\u002Fclovaai\u002FCRAFT-pytorch) | Pytorch | 1.2.2 及更高版本 | 2019年4月 | |\n\n## 文本识别\n\n| | 模型 | 参考资料 | 导出自 | 支持的 Ailia 版本 | 日期 | 博客 |\n|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_9194877b05f1.png\" width=64px>](\u002Ftext_recognition\u002Fetl\u002F) |[etl](\u002Ftext_recognition\u002Fetl) | 日本字符分类 | Keras | 1.1.0 及以上 | 1973年 | [JP](https:\u002F\u002Ftech.ailia.ai\u002Failia-sdk-%E3%83%A2%E3%83%87%E3%83%AB%E7%B4%B9%E4%BB%8B-etl-412ee389a8ef) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_592d7da6cff6.png\" width=64px>](text_recognition\u002Fcrnn.pytorch\u002F) |[crnn.pytorch](\u002Ftext_recognition\u002Fcrnn.pytorch\u002F) | [卷积循环神经网络](https:\u002F\u002Fgithub.com\u002Fmeijieru\u002Fcrnn.pytorch) | Pytorch | 1.2.6 及以上 | 2015年7月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_592d7da6cff6.png\" width=64px>](text_recognition\u002Fdeep-text-recognition-benchmark\u002F) |[deep-text-recognition-benchmark](\u002Ftext_recognition\u002Fdeep-text-recognition-benchmark\u002F) | [深度文本识别基准](https:\u002F\u002Fgithub.com\u002Fclovaai\u002Fdeep-text-recognition-benchmark) | Pytorch | 1.2.6 及以上 | 2019年4月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_bd0bbb425620.jpg\" width=64px>](text_recognition\u002Feasyocr\u002F) |[easyocr](\u002Ftext_recognition\u002Feasyocr\u002F) | [支持80多种语言的即用型OCR](https:\u002F\u002Fgithub.com\u002FJaidedAI\u002FEasyOCR) | Pytorch | 1.2.6 及以上 | 2020年4月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_59ee08cd9fc7.png\" width=64px>](text_recognition\u002Fpaddleocr\u002F) 
|[paddleocr](\u002Ftext_recognition\u002Fpaddleocr\u002F) | [PaddleOCR：基于飞桨的强大多语言OCR工具包](https:\u002F\u002Fgithub.com\u002FPaddlePaddle\u002FPaddleOCR) | Pytorch | 1.2.6 及以上 | 2020年9月 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fpaddleocr-the-latest-lightweight-ocr-system-a13171d7ea3e) [JP](https:\u002F\u002Ftech.ailia.ai\u002Fpaddleocr-%E6%9C%80%E6%96%B0%E3%81%AE%E8%BB%BD%E9%87%8Focr%E3%82%B7%E3%82%B9%E3%83%86%E3%83%A0-8744205f3703) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_5bed24ab4cf4.png\" width=64px>](text_recognition\u002Fdonut\u002F) |[donut](\u002Ftext_recognition\u002Fdonut\u002F) | [Donut](https:\u002F\u002Fgithub.com\u002Fclovaai\u002Fdonut) | Pytorch | 1.2.16 及以上 | 2021年11月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_e53a69c433dd.png\" width=64px>](text_recognition\u002Fndlocr_text_recognition\u002F) |[ndlocr_text_recognition](\u002Ftext_recognition\u002Fndlocr_text_recognition\u002F) | [NDL OCR](https:\u002F\u002Fgithub.com\u002Fndl-lab\u002Ftext_recognition) | Pytorch | 1.2.5 及以上 | 2022年4月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_7a1e89cbf4ec.png\" width=64px>](text_recognition\u002Fpaddleocr_v3\u002F) |[paddleocr_v3](\u002Ftext_recognition\u002Fpaddleocr_v3\u002F) | [PaddleOCR：基于飞桨的强大多语言OCR工具包](https:\u002F\u002Fgithub.com\u002FPaddlePaddle\u002FPaddleOCR) | Pytorch | 1.2.17 及以上 | 2022年6月 | [JP](https:\u002F\u002Ftech.ailia.ai\u002Fpaddleocr-v3-%E6%97%A5%E6%9C%AC%E8%AA%9E%E3%81%8C%E9%AB%98%E7%B2%BE%E5%BA%A6%E5%8C%96%E3%81%97%E3%81%9F%E6%9C%80%E6%96%B0%E3%81%AEocr%E3%83%A2%E3%83%87%E3%83%AB-7dfa93a3dfcd) |\n\n## 时间序列预测\n\n| 模型 | 参考资料 | 导出自 | 支持的 Ailia 版本 | 日期 | 博客 |\n|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [informer2020](\u002Ftime_series_forecasting\u002Finformer2020\u002F) | 
[Informer：超越高效Transformer的长序列时间序列预测（AAAI'21最佳论文）](https:\u002F\u002Fgithub.com\u002Fzhouhaoyi\u002FInformer2020) | Pytorch | 1.2.10 及以上 | 2020年12月 || \n| [timesfm](\u002Ftime_series_forecasting\u002Ftimesfm\u002F) | [TimesFM](https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Ftimesfm) | Pytorch | 1.2.16 及以上 | 2023年10月 | [JP](https:\u002F\u002Ftech.ailia.ai\u002Ftimesfm-%E6%99%82%E7%B3%BB%E5%88%97%E4%BA%88%E6%B8%AC%E3%81%AE%E5%9F%BA%E7%9B%A4%E3%83%A2%E3%83%87%E3%83%AB-0a11fdefa319) |\n\n## 车辆识别\n\n| | 模型 | 参考资料 | 导出自 | 支持的 Ailia 版本 | 日期 | 博客 |\n|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_169a473979a8.png\" width=64px>](\u002Fvehicle_recognition\u002Fvehicle-attributes-recognition-barrier\u002F) |[vehicle-attributes-recognition-barrier](\u002Fvehicle_recognition\u002Fvehicle-attributes-recognition-barrier) | [vehicle-attributes-recognition-barrier-0042](https:\u002F\u002Fgithub.com\u002Fopenvinotoolkit\u002Fopen_model_zoo\u002Ftree\u002Fmaster\u002Fmodels\u002Fintel\u002Fvehicle-attributes-recognition-barrier-0042) | OpenVINO | 1.2.5 及以上 | 2018年5月 | [EN](https:\u002F\u002Fmedium.com\u002Faxinc-ai\u002Fvehicleattributerecognitionbarrier-a-machine-learning-model-for-detecting-car-attributes-fe8fda7649ff) [JP](https:\u002F\u002Ftech.ailia.ai\u002Fvehicleattributerecognitionbarrier-%E8%BB%8A%E3%81%AE%E5%B1%9E%E6%80%A7%E3%82%92%E6%A4%9C%E5%87%BA%E3%81%99%E3%82%8B%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%A2%E3%83%87%E3%83%AB-ee26d1a3e00b) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_cb9de1282dd3.png\" width=128px>](vehicle_recognition\u002Fvehicle-license-plate-detection-barrier\u002F) | [vehicle-license-plate-detection-barrier](\u002Fvehicle_recognition\u002Fvehicle-license-plate-detection-barrier\u002F) | 
[vehicle-license-plate-detection-barrier-0106](https:\u002F\u002Fgithub.com\u002Fopenvinotoolkit\u002Fopen_model_zoo\u002Ftree\u002Fmaster\u002Fmodels\u002Fintel\u002Fvehicle-license-plate-detection-barrier-0106) | OpenVINO | 1.2.5 及以上 | 2018年5月 | |\n\n## 视觉语言模型\n\n| | 模型 | 参考 | 导出自 | 支持的 Ailia 版本 | 日期 | 博客 |\n|:-----------|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_bb98a6bae630.jpg\" width=128px>](vision_language_model\u002Fllava\u002F) | [llava](\u002Fvision_language_model\u002Fllava) | [LLaVA](https:\u002F\u002Fgithub.com\u002Fhaotian-liu\u002FLLaVA) | Pytorch | 1.2.16 及更高版本 | 2023年4月 | [JP](https:\u002F\u002Ftech.ailia.ai\u002Fllava-%E7%94%BB%E5%83%8F%E3%81%B8%E5%AF%BE%E3%81%97%E3%81%A6%E8%B3%AA%E5%95%8F%E3%81%A7%E3%81%8D%E3%82%8B%E5%A4%A7%E8%A6%8F%E6%A8%A1%E8%A8%80%E8%AA%9E%E3%83%A2%E3%83%87%E3%83%AB-6ede836f2bed) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_3e77fa65eea2.jpg\" width=128px>](vision_language_model\u002Fflorence2\u002F) | [florence2](vision_language_model\u002Fflorence2) | [Hugging Face - microsoft\u002FFlorence-2-base](https:\u002F\u002Fhuggingface.co\u002Fmicrosoft\u002FFlorence-2-base) | Pytorch | 1.2.16 及更高版本 | 2023年11月 | [JP](https:\u002F\u002Ftech.ailia.ai\u002Fflorence2-%E8%BB%BD%E9%87%8F%E3%81%A7%E3%82%A8%E3%83%83%E3%82%B8%E5%AE%9F%E8%A3%85%E5%8F%AF%E8%83%BD%E3%81%AAvision-language-model-71809797a957) |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_8b6aaa1ac1ee.jpg\" width=128px>](vision_language_model\u002Fmobilevlm\u002F) | [mobilevlm](vision_language_model\u002Fmobilevlm) | [MobileVLM](https:\u002F\u002Fgithub.com\u002FMeituan-AutoML\u002FMobileVLM) | Pytorch | 1.5.0 及更高版本 | 2023年12月 | |\n| [\u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_e9bdbc6b01d4.jpg\" width=128px>](vision_language_model\u002Fllava-jp\u002F) | [llava-jp](vision_language_model\u002Fllava-jp) | [LLaVA-JP](https:\u002F\u002Fgithub.com\u002Ftosiyuki\u002FLLaVA-JP\u002Ftree\u002Fmain) | Pytorch | 1.5.0 及更高版本 | 2024年1月 | |\n| [\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_readme_8d128cc43276.jpeg\" width=128px>](vision_language_model\u002Fqwen2_vl\u002F) | [qwen2_vl](vision_language_model\u002Fqwen2_vl) | [Qwen2-VL](https:\u002F\u002Fgithub.com\u002FQwenLM\u002FQwen2-VL) | Pytorch | 1.5.0 及更高版本 | 2024年9月 | [JP](https:\u002F\u002Ftech.ailia.ai\u002Fqwen2-vl-%E3%83%AD%E3%83%BC%E3%82%AB%E3%83%AB%E3%81%A7%E5%8B%95%E4%BD%9C%E3%81%99%E3%82%8Bvision-language-model-b6f75fa30a08) |\n\n## 商业模型\n\n| 模型 | 参考 | 导出自 | 支持的 Ailia 版本 | 日期 | 博客 |\n|------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n|[acculus-pose](\u002Fcommercial_model\u002Facculus-pose) | [Acculus, Inc.](https:\u002F\u002Facculus.jp\u002F) | Caffe | 1.2.3 及更高版本 | 2018年5月 | |\n\n# 其他平台\n\n使用 ailia MODELS（Python）进行原型开发，然后部署到生产环境。\n\n- [unity 版本](https:\u002F\u002Fgithub.com\u002Failia-ai\u002Failia-models-unity)\n- [kotlin 版本](https:\u002F\u002Fgithub.com\u002Failia-ai\u002Failia-models-kotlin)\n- [c++ 版本](https:\u002F\u002Fgithub.com\u002Failia-ai\u002Failia-models-cpp)\n- [flutter 版本](https:\u002F\u002Fgithub.com\u002Failia-ai\u002Failia-models-flutter)\n- [rust 版本](https:\u002F\u002Fgithub.com\u002Failia-ai\u002Failia-models-rust)\n\n# 联系方式\n\n- [联系我们](https:\u002F\u002Fwww.ailia.ai\u002Fen-contact-product)\n- [邮件](mailto:contact@ailia.ai)","# ailia-models 快速上手指南\n\nailia-models 是一个预训练的尖端 AI 模型集合，配合 **ailia SDK** 使用，可实现跨平台（Windows, Mac, Linux, Android, iOS, Jetson, Raspberry Pi）的高速推理。支持通过 Vulkan 和 Metal 进行 GPU 加速。\n\n## 环境准备\n\n### 系统要求\n- **操作系统**: Windows, macOS, Linux, Android, iOS, Jetson, Raspberry Pi\n- 
**硬件**: 支持 CPU 推理；推荐使用支持 Vulkan (NVIDIA\u002FAMD\u002FIntel) 或 Metal (Apple) 的 GPU 以获得最佳性能。\n- **编程语言支持**: C++, Python, Unity (C#), Kotlin, Rust, Flutter。\n\n### 前置依赖\n- **Python 版本**: 推荐 Python 3.8 及以上。\n- **编译器**: 若需从源码构建，需安装 CMake 及对应平台的 C++ 编译器。\n- **GPU 驱动**: 确保已安装最新的显卡驱动以支持 Vulkan 或 Metal。\n\n## 安装步骤\n\n### 方法一：使用 pip 安装（推荐 Python 用户）\n\n直接通过 PyPI 安装 ailia SDK 和示例模型依赖：\n\n```bash\npip install ailia\npip install -r requirements.txt\n```\n\n> **注意**：若国内访问 PyPI 较慢，可使用清华镜像源加速：\n> ```bash\n> pip install ailia -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n> pip install -r requirements.txt -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n> ```\n\n### 方法二：下载预编译包\n\n访问 [ailia SDK 官网](https:\u002F\u002Failia.ai\u002Fen\u002Fsdk\u002F) 下载对应平台的二进制包，解压后将 `bin` 目录添加到环境变量。\n\n### 获取模型代码\n\n克隆本仓库以获取 400+ 个模型的示例代码：\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Failia-ai\u002Failia-models.git\ncd ailia-models\n```\n\n## 基本使用\n\n以下以 Python 为例，演示如何运行一个图像分类模型（如 `resnet50`）。\n\n### 1. 进入模型目录\n假设我们要使用 `image_classification\u002Fresnet50` 模型：\n\n```bash\ncd image_classification\u002Fresnet50\n```\n\n### 2. 下载模型文件\n首次运行脚本时会自动下载所需的 `.onnx` 模型文件和权重，无需额外的下载参数：\n\n```bash\npython resnet50.py\n```\n\n### 3. 执行推理\n准备一张测试图片（例如 `cat.jpg`），运行推理：\n\n```bash\npython resnet50.py --input cat.jpg\n```\n\n**输出示例**：\n```text\nPredicted class: tabby, tabby cat\nProbability: 0.89\nInference time: 12.5 ms (GPU accelerated)\n```\n\n### 4. 
启用 GPU 加速（可选）\nailia SDK 会自动检测可用的推理环境。示例脚本启动时会在日志中列出各推理环境及其编号，可通过 `--env_id` 参数指定要使用的设备（如 GPU）：\n\n```bash\npython resnet50.py --input cat.jpg --env_id 1\n```\n\n---\n\n**更多资源**：\n- 完整教程：[ailia MODELS tutorial](TUTORIAL.md)\n- 模型列表与文档：[ailia-models wiki](https:\u002F\u002Fdeepwiki.com\u002Failia-ai\u002Failia-models)\n- 在线体验：[Google Colaboratory](https:\u002F\u002Fwww.ailia.ai\u002Flaunch_to_colab)","一家智能健身镜初创团队需要在低功耗的 Android 设备上实时分析用户的深蹲、拳击等动作，以提供即时纠正反馈。\n\n### 没有 ailia-models 时\n- **模型适配困难**：团队需手动将 PyTorch 训练的 ST-GCN 或 MARS 模型转换为推理格式，常因算子不支持导致导出失败，耗费数周调试。\n- **端侧性能瓶颈**：普通推理引擎无法利用 Android 设备的 Vulkan GPU 加速，导致视频流分析帧率低于 15 FPS，动作识别严重滞后。\n- **缺乏现成示例**：开发者需从零编写数据预处理和后处理代码，难以快速验证算法在真实摄像头画面中的效果。\n- **跨平台部署复杂**：若未来需扩展至 iOS 或 Raspberry Pi 版本，需重新适配整套推理后端，维护成本极高。\n\n### 使用 ailia-models 后\n- **开箱即用模型库**：直接调用已验证优化的 `st-gcn` 和 `mars` 模型，无需关心底层转换细节，半天即可完成集成。\n- **高性能边缘推理**：借助 ailia SDK 的 Vulkan 加速，在同等 Android 设备上实现 60+ FPS 的流畅识别，确保用户动作零延迟反馈。\n- **完整示例指引**：参考官方提供的 Python 和 Kotlin 示例代码，快速打通从摄像头采集到骨骼点分析的完整链路。\n- **一次开发多端运行**：同一套模型代码可无缝部署至 iOS（Metal 加速）或嵌入式开发板，大幅降低多产品线扩展成本。\n\nailia-models 通过提供预优化的高性能模型库，让开发者跳过繁琐的工程化陷阱，专注于核心业务逻辑的快速落地。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Failia-ai_ailia-models_8d9da79f.png","ailia-ai","ailia Inc.","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Failia-ai_5ec0352e.png","AI Extends Possibilities.",null,"contact@ailia.ai","https:\u002F\u002Failia.ai\u002Fen\u002F","https:\u002F\u002Fgithub.com\u002Failia-ai",[82,86,90,94],{"name":83,"color":84,"percentage":85},"Python","#3572A5",95.3,{"name":87,"color":88,"percentage":89},"Jupyter Notebook","#DA5B0B",4.2,{"name":91,"color":92,"percentage":93},"Shell","#89e051",0.4,{"name":95,"color":96,"percentage":97},"C","#555555",0.1,2339,358,"2026-04-07T06:03:51","Windows, macOS, Linux, iOS, Android","非必需。支持通过 Vulkan (NVIDIA\u002FAMD\u002FIntel) 和 Metal (Apple) 进行 GPU 加速。未指定具体显存大小或 CUDA 版本要求（因主要使用 Vulkan\u002FMetal 而非纯 
CUDA）。","未说明",{"notes":105,"python":106,"dependencies":107},"该工具核心为 ailia SDK，是一个跨平台推理引擎。除了常规桌面系统，还支持嵌入式设备 (Jetson, Raspberry Pi) 和无操作系统环境 (Non-OS\u002FRTOS)。支持的语言绑定包括 C++, Python, Unity (C#), Kotlin, Rust 和 Flutter。模型库包含 400+ 个预训练模型。","未说明 (提供 Python 绑定)",[108,109,110],"ailia SDK","Vulkan Runtime (可选)","Metal (macOS\u002FiOS 可选)",[35,14,112,15],"音频",[114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131,132,133],"deep-learning","face-recognition","face-detection","object-detection","object-recognition","hand-detection","fashion-ai","image-classification","pose-estimation","image-segmentation","object-tracking","anomaly-detection","action-recognition","background-removal","audio-processing","crowd-counting","gan","neural-network","embeddings","llm","2026-03-27T02:49:30.150509","2026-04-08T23:47:07.713313",[137,142,147,152,157,161,166,171,176,181],{"id":138,"question_zh":139,"answer_zh":140,"source_url":141},25304,"如何导出支持动态 Batch 维度的 SAM2 (ONNX) 模型以支持多目标？","为了支持多目标，建议将 Batch 维度设置为动态。如果需要导出具有单一内存的 6 维矩阵乘法（matmul），请使用特定的修订版本：https:\u002F\u002Fgithub.com\u002Faxinc-ai\u002Fsegment-anything-2\u002Ftree\u002Ff36169e87ec302c75279fadc60cda1c3763165eb。此外，针对 TFLite 的 4 维矩阵乘法支持，可参考相关 PR：https:\u002F\u002Fgithub.com\u002Faxinc-ai\u002Fsegment-anything-2\u002Fpull\u002F2。","https:\u002F\u002Fgithub.com\u002Failia-ai\u002Failia-models\u002Fissues\u002F1514",{"id":143,"question_zh":144,"answer_zh":145,"source_url":146},25305,"在转换 BERT 模型时遇到 'Folder is not empty, aborting' 错误如何解决？","该错误通常是因为输出目录非空导致的。解决方法是手动清空或删除输出文件夹（例如 `work` 目录），然后重新运行转换脚本即可成功导出模型。","https:\u002F\u002Fgithub.com\u002Failia-ai\u002Failia-models\u002Fissues\u002F1154",{"id":148,"question_zh":149,"answer_zh":150,"source_url":151},25306,"如何修复 RVC Voice Changer 中无法正确获取 ONNX 模型特定层数据的问题？","在 `rvc.py` 文件中存在一个 Bug，导致无法从 ONNX 模型中实际获取指定层（如 `\u002Fencoder\u002FSlice_5_output_0`）。需要检查并修正代码中获取 blob 数据的逻辑，确保正确索引并提取 v2 版本所需的 768 
维特征数据。","https:\u002F\u002Fgithub.com\u002Failia-ai\u002Failia-models\u002Fissues\u002F1096",{"id":153,"question_zh":154,"answer_zh":155,"source_url":156},25307,"安装 VALL-E-X 模型时需要什么版本的 PyTorch？","需要安装 PyTorch 的夜间构建版（nightly build）。可以使用以下命令卸载现有版本并安装指定版本（如 2.2.0.dev20230910）：\npip3 uninstall torch\npip3 install --pre torch torchvision torchaudio --index-url https:\u002F\u002Fdownload.pytorch.org\u002Fwhl\u002Fnightly\u002Fcpu","https:\u002F\u002Fgithub.com\u002Failia-ai\u002Failia-models\u002Fissues\u002F1236",{"id":158,"question_zh":159,"answer_zh":160,"source_url":151},25308,"在 RVC 中 'protect' 参数是如何影响特征向量计算的？","当 'protect' 参数小于 0.5 且存在音高数据时，会对特征进行加权修正。公式如下：\nfeats = feats * pitchff + feats0 * (1 - pitchff)\n其中 `feats` 是应用了 faiss 检索后的特征，`feats0` 是未应用 faiss 的原始特征，`pitchff` 是根据音高和 protect 值计算出的权重系数。",{"id":162,"question_zh":163,"answer_zh":164,"source_url":165},25309,"CREPE 模型输出的置信度（confidence）如何处理？","CREPE 模型的输出中包含置信度值（pd）。处理逻辑是：如果置信度低于 0.1，则将对应的基频（f0）强制设置为 0，以过滤掉低质量的预测结果。","https:\u002F\u002Fgithub.com\u002Failia-ai\u002Failia-models\u002Fissues\u002F1201",{"id":167,"question_zh":168,"answer_zh":169,"source_url":170},25310,"如何解决 g2p_en 导出 ONNX 时序列长度变为常数的问题？","在导出过程中，如果循环（for）结构保留在编码器内部，可能会导致序列长度被固定为常数。解决方案是将循环逻辑移出编码器外部，这样可以保持序列长度的动态性，避免导出后的模型深度因输入不同而变化。","https:\u002F\u002Fgithub.com\u002Failia-ai\u002Failia-models\u002Fissues\u002F1468",{"id":172,"question_zh":173,"answer_zh":174,"source_url":175},25311,"如何导出 Stable Diffusion 的 FP16 模型？","可以使用 Windows PC 环境来完成 FP16 模型的导出。已成功导出的模型包括 diffusion_emb, mid, out 合并模型以及 basilmix 模型，格式为 .opt.onnx。","https:\u002F\u002Fgithub.com\u002Failia-ai\u002Failia-models\u002Fissues\u002F1187",{"id":177,"question_zh":178,"answer_zh":179,"source_url":180},25312,"GPT-SoVITS 中的 g2p 转换是否依赖自定义的 cmudict 逻辑？","不需要。虽然项目中有独自逻辑参照 cmudict，但 `g2p_en` 库内部已经包含了 cmudict。因此，可以直接调用 `g2p_en` 进行转换，无需额外实现自定义的 cmudict 
参照逻辑，结果是一致的。","https:\u002F\u002Fgithub.com\u002Failia-ai\u002Failia-models\u002Fissues\u002F1404",{"id":182,"question_zh":183,"answer_zh":184,"source_url":185},25313,"SenseVoice 模型如何使用 SentencePiece 进行解码？","SenseVoice 使用 SentencePiece 进行分词，因此可以直接使用相应的 tokenizer 库（如 `ailia_tok` 或其他兼容 SentencePiece 的工具）对模型输出进行解码，将其转换为可读文本。","https:\u002F\u002Fgithub.com\u002Failia-ai\u002Failia-models\u002Fissues\u002F1752",[]]