[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-inclusionAI--Ming":3,"tool-inclusionAI--Ming":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",159636,2,"2026-04-17T23:33:34",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108322,"2026-04-10T11:39:34",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 人员及技术研究人员使用。其核心亮点包括支持高达 
100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[52,13,15,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":10,"last_commit_at":59,"category_tags":60,"status":17},4487,"LLMs-from-scratch","rasbt\u002FLLMs-from-scratch","LLMs-from-scratch 是一个基于 PyTorch 的开源教育项目，旨在引导用户从零开始一步步构建一个类似 ChatGPT 的大型语言模型（LLM）。它不仅是同名技术著作的官方代码库，更提供了一套完整的实践方案，涵盖模型开发、预训练及微调的全过程。\n\n该项目主要解决了大模型领域“黑盒化”的学习痛点。许多开发者虽能调用现成模型，却难以深入理解其内部架构与训练机制。通过亲手编写每一行核心代码，用户能够透彻掌握 Transformer 架构、注意力机制等关键原理，从而真正理解大模型是如何“思考”的。此外，项目还包含了加载大型预训练权重进行微调的代码，帮助用户将理论知识延伸至实际应用。\n\nLLMs-from-scratch 特别适合希望深入底层原理的 AI 开发者、研究人员以及计算机专业的学生。对于不满足于仅使用 API，而是渴望探究模型构建细节的技术人员而言，这是极佳的学习资源。其独特的技术亮点在于“循序渐进”的教学设计：将复杂的系统工程拆解为清晰的步骤，配合详细的图表与示例，让构建一个虽小但功能完备的大模型变得触手可及。无论你是想夯实理论基础，还是为未来研发更大规模的模型做准备",90106,"2026-04-06T11:19:32",[35,15,13,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":67,"readme_en":68,"readme_zh":69,"quickstart_zh":70,"use_case_zh":71,"hero_image_url":72,"owner_login":73,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":76,"owner_email":76,"owner_twitter":76,"owner_website":77,"owner_url":78,"languages":79,"stars":95,"forks":96,"last_commit_at":97,"license":98,"difficulty_score":10,"env_os":99,"env_gpu":100,"env_ram":101,"env_deps":102,"category_tags":110,"github_topics":76,"view_count":32,"oss_zip_url":76,"oss_zip_packed_at":76,"status":17,"created_at":113,"updated_at":114,"faqs":115,"releases":145},8669,"inclusionAI\u002FMing","Ming","Ming - facilitating advanced multimodal understanding and generation capabilities built upon the Ling LLM.","Ming-flash-omni 2.0 是一款基于 Ling-2.0 架构打造的开源全能多模态大模型，旨在突破传统 AI 在理解与生成能力上的边界。它有效解决了现有模型在跨模态任务中知识深度不足、语音表达单一以及图像编辑灵活性欠缺等痛点，能够同时处理视觉、听觉和文本信息，实现从精准识别到高质量创作的无缝衔接。\n\n这款工具特别适合开发者、人工智能研究人员以及需要复杂内容创作支持的设计师使用。无论是构建智能交互应用、探索多模态前沿算法，还是进行高动态的视听内容生产，Ming-flash-omni 2.0 都能提供强大的底层支持。\n\n其核心技术亮点在于采用了包含 1000 亿总参数（激活参数 60 亿）的混合专家（MoE）架构。在认知方面，它结合高分辨率视觉与庞大知识库，具备专家级的文物分析与文化识别能力；在音频领域，首创统一的端到端声学生成管线，支持零样本语音克隆及情感、音色等细腻控制，让机器语音更具沉浸感；在图像处理上，原生统一了分割、生成与编辑任务，擅长氛围重建与上下文感知的物体移除，能在保持纹理与空间一致性的前提下完成高难度编辑。作为当前开源领域的佼佼者，Ming-flash-omn","Ming-flash-omni 2.0 是一款基于 Ling-2.0 架构打造的开源全能多模态大模型，旨在突破传统 AI 在理解与生成能力上的边界。它有效解决了现有模型在跨模态任务中知识深度不足、语音表达单一以及图像编辑灵活性欠缺等痛点，能够同时处理视觉、听觉和文本信息，实现从精准识别到高质量创作的无缝衔接。\n\n这款工具特别适合开发者、人工智能研究人员以及需要复杂内容创作支持的设计师使用。无论是构建智能交互应用、探索多模态前沿算法，还是进行高动态的视听内容生产，Ming-flash-omni 2.0 都能提供强大的底层支持。\n\n其核心技术亮点在于采用了包含 1000 亿总参数（激活参数 60 亿）的混合专家（MoE）架构。在认知方面，它结合高分辨率视觉与庞大知识库，具备专家级的文物分析与文化识别能力；在音频领域，首创统一的端到端声学生成管线，支持零样本语音克隆及情感、音色等细腻控制，让机器语音更具沉浸感；在图像处理上，原生统一了分割、生成与编辑任务，擅长氛围重建与上下文感知的物体移除，能在保持纹理与空间一致性的前提下完成高难度编辑。作为当前开源领域的佼佼者，Ming-flash-omni 2.0 为多模态技术的落地应用树立了新的标杆。","# Ming-flash-omni 2.0\n\n\u003Cp align=\"center\">\n    \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FinclusionAI_Ming_readme_7efb30453753.png\" width=\"100\"\u002F>\n\u003Cp>\n\n\u003Cp align=\"center\">📑 \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.09344\">Technical Report\u003C\u002Fa>｜🤗 \u003Ca href=\"https:\u002F\u002Fhuggingface.co\u002FinclusionAI\u002FMing-flash-omni-2.0\">Hugging Face\u003C\u002Fa>｜ 🤖 \u003Ca href=\"https:\u002F\u002Fwww.modelscope.cn\u002Fmodels\u002FinclusionAI\u002FMing-flash-omni-2.0\">ModelScope\u003C\u002Fa>\n\n\n\n## Introduction\n\nThe newly released Ming-flash-omni 2.0 leverages the 
## Introduction

The newly released Ming-flash-omni 2.0 leverages the [Ling-2.0](https://github.com/inclusionAI/Ling-V2) architecture, a Mixture-of-Experts (MoE) framework comprising 100B total and 6B active parameters. Representing a generational advancement over its predecessor, it establishes new state-of-the-art (SOTA) benchmarks among open-source omni-MLLMs. Ming-flash-omni 2.0 effectively synergizes foundational abilities with specialized domain expertise; in particular, it exhibits superior performance in visual encyclopedic knowledge, immersive speech synthesis, and high-dynamic image generation and manipulation.

<p align="center">
    <img src="https://oss.gittoolsai.com/images/inclusionAI_Ming_readme_008bf6dae73d.png" width="800"/>
</p>

## 📌 Updates
* [2026.02.11] 🔥 We release the official version of [Ming-flash-omni 2.0](https://mp.weixin.qq.com/s/hz2fsH1DGpp2zpY-Yngsog), an open-source SOTA omni-MLLM that pushes the boundaries of multimodal understanding and synthesis.
* [2025.10.27] 🔥 We release the preview version of Ming-flash-omni: [Ming-flash-omni Preview](https://github.com/inclusionAI/Ming/tree/main).
* [2025.07.15] 🔥 We release [Ming-lite-omni v1.5](https://github.com/inclusionAI/Ming/tree/v1.5) with significant improvements across all modalities.
* [2025.06.12] 🔥 Our [Technical Report](https://arxiv.org/abs/2506.09344) is publicly available on arXiv.
* [2025.05.28] 🔥 The official version of [Ming-lite-omni v1](https://github.com/inclusionAI/Ming/tree/v1.0) is released, with better performance and image generation support.
* [2025.05.04] 🔥 We release the test version of Ming-lite-omni: [Ming-lite-omni-Preview](https://github.com/inclusionAI/Ming/tree/Ming-Lite-Omni-Preview).

## Key Features
Compared to [Ming-flash-omni Preview](https://github.com/inclusionAI/Ming/tree/Ming-flash-omni-Preview), Ming-flash-omni 2.0 focuses on optimizing capabilities across the following key domains:

- **Expert-level Multimodal Cognition**: It accurately identifies plants and animals, recognizes cultural references (from regional cuisines to global landmarks), and delivers expert-level analysis of artifacts, including era, form, and craftsmanship. By synergizing high-resolution visual capture with a vast knowledge graph, the model achieves "vision-to-knowledge" synthesis, enabling superior knowledge understanding.

- **Immersive and Controllable Unified Acoustic Synthesis**: Ming-flash-omni 2.0 introduces a unified end-to-end acoustic generation pipeline that integrates speech, audio, and music within a single channel. Leveraging continuous autoregression coupled with a Diffusion Transformer (DiT) head, the model enables zero-shot voice cloning and nuanced attribute control (e.g., emotion, timbre, and ambient atmosphere). This architecture facilitates a transition from simple text-to-speech to highly expressive, emotionally resonant, and immersive auditory experiences.

- **High-Dynamic Controllable Image Generation and Manipulation**: Ming-flash-omni 2.0 features a native multi-task architecture that unifies segmentation, generation, and editing, allowing for sophisticated spatiotemporal semantic decoupling. It excels in high-dynamic content creation, including atmospheric reconstruction, seamless scene composition, and context-aware object removal. By maintaining texture coherence and spatial depth consistency, Ming-flash-omni 2.0 achieves state-of-the-art precision in complex image manipulation tasks.

<p align="center">
    <img src="https://oss.gittoolsai.com/images/inclusionAI_Ming_readme_cf4bbcc06707.png" width="800"/>
</p>

## Use Cases

### Enhanced Multimodal Cognition & Free Modality Switching
<video src="https://github.com/user-attachments/assets/147b9594-e492-4beb-a0db-b5c810135663" controls width="50%" height="400" style="object-fit: contain; max-width: 100%;">
    Enhanced Multimodal Cognition & Free Modality Switching
</video>

### Streaming Video Conversation
<video src="https://github.com/user-attachments/assets/b1afb34e-8877-497c-85f3-82cd7cf618db" controls width="50%" height="400" style="object-fit: contain; max-width: 100%;">
    Streaming Video Conversation
</video>

### Controllable Audio Generation
<video src="https://github.com/user-attachments/assets/6b5d504f-86a3-4121-97c9-0aa9ea9abaa4" controls="controls" width="50%" height="auto">
    Audio Context ASR & Dialect ASR
</video>

### Image Generation & Editing
<video src="https://github.com/user-attachments/assets/8d0af9cc-e0dc-440c-9963-b589d6396917" controls="controls" width="50%" height="auto">
    Controllable Image Generation
</video>

## Model Downloads

You can download our latest model from both Hugging Face and ModelScope. For previous versions such as [Ming-flash-omni-Preview](https://github.com/inclusionAI/Ming/tree/Ming-flash-omni-Preview), please refer to this [link](https://github.com/inclusionAI/Ming/tree/Ming-flash-omni-Preview?tab=readme-ov-file#model-downloads).

<div align="center">

| **Model** | **Input modality** | **Output modality** | **Download** |
|:----------|:------------------:|:-------------------:|:------------:|
| Ming-flash-omni 2.0 | Image, text, video, audio | Image, text, audio | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ming-flash-omni-2.0) <br>[🤖 ModelScope](https://www.modelscope.cn/models/inclusionAI/Ming-flash-omni-2.0) |

</div>

If you are in mainland China, we strongly recommend downloading the model from 🤖 <a href="https://www.modelscope.cn/models/inclusionAI/Ming-flash-omni-2.0">ModelScope</a>.

```
pip install modelscope
modelscope download --model inclusionAI/Ming-flash-omni-2.0 --local_dir inclusionAI/Ming-flash-omni-2.0 --revision master
```

Note: this download can take anywhere from several minutes to several hours, depending on your network conditions.

## Environment Preparation

### Installation with pip
```shell
pip install -r requirements.txt
pip install nvidia-cublas-cu12==12.4.5.8  # for H20 GPU
```
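The Model Downloads section above lists both Hugging Face and ModelScope but only shows the ModelScope CLI. For completeness, here is a minimal sketch of the equivalent download via `huggingface_hub`; the `local_dir` simply mirrors the ModelScope command and is an assumption, not an official instruction:

```python
# Hypothetical Hugging Face download equivalent (not from the official README).
# Assumes `pip install huggingface_hub` and enough disk space for the checkpoint.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="inclusionAI/Ming-flash-omni-2.0",    # official HF repo linked in the README
    local_dir="inclusionAI/Ming-flash-omni-2.0",  # same target dir the ModelScope command uses
)
```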
## Example Usage

We provide a step-by-step running example:

Step 1 - Download the source code
```
git clone https://github.com/inclusionAI/Ming.git
cd Ming
```

Step 2 - Download the model weights and create a soft link to the source code directory

Download the model following [Model Downloads](#model-downloads), then:

```shell
mkdir inclusionAI
ln -s /path/to/inclusionAI/Ming-flash-omni-2.0 inclusionAI/Ming-flash-omni-2.0
```

Step 3 - Enter the code directory; you can refer to the following code to run the Ming-flash-omni model.
```shell
jupyter notebook cookbook.ipynb
```

We also provide a simple example of how to use this repo below. For detailed usage, please refer to [cookbook.ipynb](https://github.com/inclusionAI/Ming/blob/main/cookbook.ipynb).

```python
import os
import torch
import warnings
from bisect import bisect_left
warnings.filterwarnings("ignore")

from transformers import AutoProcessor
from modeling_bailingmm2 import BailingMM2NativeForConditionalGeneration

def split_model():
    # Spread the 32 decoder layers across all visible GPUs; the encoders, projections,
    # embeddings, final norm, lm_head, and the last decoder layer stay on GPU 0.
    device_map = {}
    world_size = torch.cuda.device_count()
    num_layers = 32
    layer_per_gpu = num_layers // world_size
    layer_per_gpu = [i * layer_per_gpu for i in range(1, world_size + 1)]  # cumulative layer boundaries
    for i in range(num_layers):
        device_map[f'model.model.layers.{i}'] = bisect_left(layer_per_gpu, i)
    device_map['vision'] = 0
    device_map['audio'] = 0
    device_map['linear_proj'] = 0
    device_map['linear_proj_audio'] = 0
    device_map['model.model.word_embeddings.weight'] = 0
    device_map['model.model.norm.weight'] = 0
    device_map['model.lm_head.weight'] = 0
    device_map['model.model.norm'] = 0
    device_map[f'model.model.layers.{num_layers - 1}'] = 0
    return device_map

# Load the pre-trained model with optimized settings; this will take ~10 minutes
model_path = "inclusionAI/Ming-flash-omni-2.0"
model = BailingMM2NativeForConditionalGeneration.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",
    device_map=split_model(),
    load_image_gen=True,
    load_talker=True,
).to(dtype=torch.bfloat16)

# Initialize the processor for handling multimodal inputs
processor = AutoProcessor.from_pretrained(model_path, trust_remote_code=True)

# Inference pipeline
def generate(messages, processor, model, sys_prompt_exp=None, use_cot_system_prompt=False, max_new_tokens=512):
    text = processor.apply_chat_template(
        messages,
        sys_prompt_exp=sys_prompt_exp,
        use_cot_system_prompt=use_cot_system_prompt
    )
    image_inputs, video_inputs, audio_inputs = processor.process_vision_info(messages)

    inputs = processor(
        text=[text],
        images=image_inputs,
        videos=video_inputs,
        audios=audio_inputs,
        return_tensors="pt",
        audio_kwargs={"use_whisper_encoder": True},
    ).to(model.device)

    # Cast the heavy multimodal feature tensors to bfloat16 to match the model weights
    for k in inputs.keys():
        if k == "pixel_values" or k == "pixel_values_videos" or k == "audio_feats":
            inputs[k] = inputs[k].to(dtype=torch.bfloat16)

    with torch.no_grad():
        generated_ids = model.generate(
            **inputs,
            max_new_tokens=max_new_tokens,
            use_cache=True,
            eos_token_id=processor.gen_terminator,
            num_logits_to_keep=1,
        )

    # Drop the prompt tokens so only the newly generated continuation is decoded
    generated_ids_trimmed = [
        out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
    ]

    output_text = processor.batch_decode(
        generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
    )[0]

    return output_text

# QA example: ask (in Chinese) for a detailed introduction to parrots' living habits
messages = [
    {
        "role": "HUMAN",
        "content": [
            {"type": "text", "text": "请详细介绍鹦鹉的生活习性。"}
        ],
    },
]
output_text = generate(messages, processor=processor, model=model)
print(output_text)
# Output (Chinese, abridged): parrots are popular pet birds known for bright plumage,
# intelligence, and mimicry of human speech. Key habits include:
# 1. Sociality: in the wild they live in flocks, interacting, playing, and foraging
#    together; in a household they need regular interaction with people or other
#    parrots to stay mentally healthy.
# 2. Intelligence: they can learn many skills, including mimicking speech, recognizing
#    objects, and solving problems, which makes them engaging pets.
# ......
```

## Citation

If you find our work helpful, feel free to cite us.

```bibtex
@misc{Mingomni2025,
      title  = {Ming-Omni: A Unified Multimodal Model for Perception and Generation},
      author = {Inclusion AI},
      year = {2025},
      eprint = {2506.09344},
      archivePrefix = {arXiv},
      url = {https://arxiv.org/abs/2506.09344}
}

@article{ai2025ming,
  title={Ming-flash-omni: A sparse, unified architecture for multimodal perception and generation},
  author={Inclusion AI},
  journal={arXiv preprint arXiv:2510.24821},
  year={2025}
}
```
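The example above shards the checkpoint across every visible GPU via `split_model()`. After loading, it can be worth confirming how much memory actually landed on each device (the FAQ section further down reports roughly 55-56 GB allocated on an H800). This is a generic diagnostic sketch using standard PyTorch calls, not part of the official example:

```python
import torch

# Report allocated GPU memory per device after the model has been loaded.
# Purely diagnostic; the numbers vary with GPU count and which modules are loaded.
for i in range(torch.cuda.device_count()):
    gib = torch.cuda.memory_allocated(i) / 1024**3
    print(f"cuda:{i}: {gib:.1f} GiB allocated")
```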
# Ming-flash-omni 2.0 Quick Start Guide

Ming-flash-omni 2.0 is an open-source omni-modal large model built on an MoE (Mixture-of-Experts) architecture, with 100B total parameters and 6B active parameters. It delivers leading results in visual encyclopedic knowledge, immersive speech synthesis, and high-dynamic image generation and editing.

## Environment Requirements

Before you start, make sure your development environment meets the following requirements:

*   **Operating system**: Linux (Ubuntu 20.04+ recommended)
*   **GPU**: NVIDIA GPU with CUDA support; 24 GB+ of VRAM recommended (multiple cards can share the load). An H20 GPU needs an extra dependency.
*   **Python**: 3.10 or newer
*   **Prerequisites**:
    *   PyTorch (a CUDA-compatible build)
    *   Flash Attention 2
    *   Transformers

## Installation

### 1. Get the source code
Clone the official repository and enter the directory:
```bash
git clone https://github.com/inclusionAI/Ming.git
cd Ming
```

### 2. Install the Python dependencies
Install the packages the project needs. If you are using an H20 GPU, additionally install the pinned cuBLAS build.
```bash
pip install -r requirements.txt
pip install nvidia-cublas-cu12==12.4.5.8  # only needed on H20 GPUs
```

### 3. Download the model weights
For developers in mainland China, **ModelScope** is strongly recommended: downloads are faster and more stable.

First install the ModelScope tool:
```bash
pip install modelscope
```

Then run the download command (saving the model to a local directory):
```bash
modelscope download --model inclusionAI/Ming-flash-omni-2.0 --local_dir inclusionAI/Ming-flash-omni-2.0 --revision master
```
*Note: depending on your network, the download can take anywhere from a few minutes to several hours.*

### 4. Create the model symlink
Symlink the downloaded weights into the code root so the scripts can find them:
```bash
mkdir inclusionAI
ln -s /path/to/inclusionAI/Ming-flash-omni-2.0 inclusionAI/Ming-flash-omni-2.0
```
*(Replace `/path/to/` with the absolute path where you actually downloaded the model.)*

## Basic Usage

The following minimal Python example shows how to load the model and run a text Q&A. It balances the load across multiple GPUs automatically.

```python
import os
import torch
import warnings
from bisect import bisect_left
warnings.filterwarnings("ignore")

from transformers import AutoProcessor
from modeling_bailingmm2 import BailingMM2NativeForConditionalGeneration

def split_model():
    """Distribute the model's layers across the available GPUs."""
    device_map = {}
    world_size = torch.cuda.device_count()
    num_layers = 32
    layer_per_gpu = num_layers // world_size
    layer_per_gpu = [i * layer_per_gpu for i in range(1, world_size + 1)]
    for i in range(num_layers):
        device_map[f'model.model.layers.{i}'] = bisect_left(layer_per_gpu, i)
    device_map['vision'] = 0
    device_map['audio'] = 0
    device_map['linear_proj'] = 0
    device_map['linear_proj_audio'] = 0
    device_map['model.model.word_embeddings.weight'] = 0
    device_map['model.model.norm.weight'] = 0
    device_map['model.lm_head.weight'] = 0
    device_map['model.model.norm'] = 0
    device_map[f'model.model.layers.{num_layers - 1}'] = 0
    return device_map

# 1. Load the model (the first load takes roughly 10 minutes)
model_path = "inclusionAI/Ming-flash-omni-2.0"
model = BailingMM2NativeForConditionalGeneration.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",
    device_map=split_model(),
    load_image_gen=True,
    load_talker=True,
).to(dtype=torch.bfloat16)

# 2. Initialize the processor
processor = AutoProcessor.from_pretrained(model_path, trust_remote_code=True)

# 3. Define the inference function
def generate(messages, processor, model, max_new_tokens=512):
    text = processor.apply_chat_template(messages)
    image_inputs, video_inputs, audio_inputs = processor.process_vision_info(messages)

    inputs = processor(
        text=[text],
        images=image_inputs,
        videos=video_inputs,
        audios=audio_inputs,
        return_tensors="pt",
        audio_kwargs={"use_whisper_encoder": True},
    ).to(model.device)

    # Make sure the multimodal feature tensors match the model's dtype
    for k in inputs.keys():
        if k == "pixel_values" or k == "pixel_values_videos" or k == "audio_feats":
            inputs[k] = inputs[k].to(dtype=torch.bfloat16)

    with torch.no_grad():
        generated_ids = model.generate(
            **inputs,
            max_new_tokens=max_new_tokens,
            use_cache=True,
            eos_token_id=processor.gen_terminator,
            num_logits_to_keep=1,
        )

    generated_ids_trimmed = [
        out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
    ]

    output_text = processor.batch_decode(
        generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
    )[0]
    return output_text

# 4. Run a test Q&A (the prompt asks, in Chinese, about parrots' living habits)
messages = [
    {
        "role": "HUMAN",
        "content": [
            {"type": "text", "text": "请详细介绍鹦鹉的生活习性。"}
        ],
    },
]

output_text = generate(messages, processor=processor, model=model)
print(output_text)
```

For more complex multimodal interactions (image generation, voice control, and so on), see `cookbook.ipynb` in the repository.
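Step 4 of the quickstart relies on a symlink pointing at the downloaded weights, and the FAQ below notes that a `config.json` without a `model_type` field (or an incompatible transformers version) is a common cause of "Unrecognized model" errors. The following is a small, hypothetical sanity check of the local model directory before loading; it is not part of the official docs:

```python
import json
import os

model_dir = "inclusionAI/Ming-flash-omni-2.0"  # the symlink created in step 4

# The symlink should resolve to a real directory that contains config.json.
assert os.path.isdir(model_dir), f"{model_dir} does not resolve to a directory"
config_path = os.path.join(model_dir, "config.json")
assert os.path.isfile(config_path), "config.json is missing from the model directory"

with open(config_path) as f:
    config = json.load(f)
# A missing model_type is the symptom called out in the FAQ below.
print("model_type:", config.get("model_type", "<missing>"))
```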
## Use Case: Digital Museum Curation

A digital-museum curator is preparing immersive guide content for an upcoming "Life in the Song Dynasty" online exhibition and needs to process a large batch of high-resolution artifact photos and produce matching narration audio.

### Without Ming
- **Artifact identification is manual**: faced with thousands of close-up photos of ceramics and paintings, the team has to hire several historians to annotate period, craftsmanship, and cultural context by hand, which takes weeks and still leaves knowledge gaps.
- **Stiff, disjointed speech synthesis**: conventional TTS narration has no emotional range and cannot evoke the ambience of a Song-dynasty gathering; producing background music and sound effects separately makes the audio tracks hard to align.
- **Unconvincing image restoration**: when repairing damaged paintings or removing modern distractions (such as glass reflections), existing tools tend to break texture continuity and spatial depth, so the result looks fake.
- **Fragmented multimodal workflow**: visual analysis, audio generation, and image editing happen in different applications with inconsistent data formats, dragging out exhibition preparation.

### With Ming
- **Expert-level automatic cognition**: drawing on its visual encyclopedic knowledge, Ming identifies artifact details instantly and produces in-depth analyses covering dynasty, form, and craftsmanship, with accuracy comparable to a senior expert.
- **Immersive unified acoustic synthesis**: Ming's unified acoustic pipeline generates narration in a chosen emotional register (solemn, lilting) in one pass and blends in ambient sound naturally, with zero-shot voice cloning and atmosphere control.
- **High-dynamic, precise image manipulation**: Ming natively supports high-resolution image editing; when removing display-case reflections or completing damaged paintings it preserves silk textures and ink-brush strokes with full spatial consistency, leaving no visible repair traces.
- **End-to-end workflow**: from image upload to finished audio and video, Ming completes every multimodal task within a single model, cutting a production cycle of weeks down to hours.

By tightly combining visual cognition, controllable acoustic synthesis, and high-dynamic image generation, Ming turns a complex cultural-heritage digitization project into an efficient, precise, and engaging automated production pipeline.

## Project Metadata

- **Owner**: inclusionAI (https://github.com/inclusionAI, https://inclusion-ai.org): "This organization contains the series of open-source projects from Ant Group with dedicated efforts to work towards Artificial General Intelligence (AGI)."
- **Stars / forks**: 647 / 58 · **License**: MIT · **Last commit**: 2026-04-16
- **Languages**: Jupyter Notebook 90.1%, Python 9.9%, Dockerfile, Shell
- **Category tags**: LLM, Image, Audio, Other · **Difficulty score**: 3
- **Operating system**: Linux
- **GPU**: an NVIDIA GPU is required; the examples mention the H20 and require nvidia-cublas-cu12 (CUDA 12); the code forces torch.bfloat16 and flash_attention_2; multi-GPU deployment is recommended (the code includes automatic sharding logic)
- **RAM**: not specified (100B total / 6B active parameters; generous memory is advised for loading the model)
- **Python version**: not specified
- **Key dependencies**: torch, transformers, flash-attn, nvidia-cublas-cu12

Deployment notes:
1. The model uses an MoE architecture (100B total / 6B active parameters); the official example includes multi-GPU auto-sharding logic, and single-GPU runs may be constrained.
2. flash_attention_2 must be installed and passed via attn_implementation.
3. H20 GPUs need a separately installed nvidia-cublas-cu12==12.4.5.8.
4. The first run downloads the model weights; how long that takes depends on your network.
5. Inference requires the image-generation (load_image_gen=True) and speech-synthesis (load_talker=True) modules to be enabled.

## FAQ

**How can I reduce VRAM usage when loading the model and avoid OOM?**
If you do not need image generation, you can delete the code at line 582 of `modeling_bailingmm.py`, saving roughly 1.3 GB of VRAM; if you do not need text-to-speech (TTS), you can delete lines 150-152 of `modeling_bailingmm.py`, saving roughly 19 GB. Trim the modules to fit cards with less memory (such as an A100). (Source: https://github.com/inclusionAI/Ming/issues/13)

**How do I extract audio tokens from an audio file for voice cloning?**
The code and tools for extracting audio features from a prompt clip and running zero-shot TTS have been updated on GitHub and Hugging Face; see the related examples in the README. Code and models for the audio tokenizer will also be added to the repository over time. (Source: https://github.com/inclusionAI/Ming/issues/17)

**The kernel crashes, or the output repeats characters (such as 'TheThe...'), when running cookbook.ipynb or quick_start.ipynb. What should I do?**
This is usually caused by a misconfigured environment. First check that the environment is installed correctly and try running the demo script directly from a terminal. If the problem persists, provide more details about your environment (library versions, hardware) so a specific configuration conflict can be tracked down. (Source: https://github.com/inclusionAI/Ming/issues/57)

**How do I fix 'Unrecognized model' or 'config.json is missing model_type' errors?**
This usually means the installed transformers version is too new or otherwise incompatible, so the custom model configuration is not recognized. Make sure you install the transformers version pinned in the project's `requirements.txt`. If you are loading from a local path, check that `config.json` contains the correct `model_type` field, or try loading the model directly from Hugging Face instead of a local path. (Source: https://github.com/inclusionAI/Ming/issues/47)

**The downloaded tokenizer file (tokenizer.json) is incomplete or corrupted. What should I do?**
If the `qwen2.5 tokenizer.json` on ModelScope is incomplete, it was likely caused by a network issue during upload. The maintainers have re-uploaded the file; refresh and download it again. Alternatively, download the complete tokenizer files directly from the Hugging Face repository (https://huggingface.co/inclusionAI/Ming-Lite-Uni/tree/main/qwen2_5_llm). (Source: https://github.com/inclusionAI/Ming/issues/8)

**Why is VRAM usage so different on different GPUs (for example H800 vs. A100)?**
The difference usually comes from inconsistent environments. Install the environment strictly per the README and check the versions of the key libraries against `requirements.txt`. On an H800 the maintainers measured roughly 55-56 GB allocated using `torch.cuda.memory_allocated()`. If you hit OOM on an A100, beyond the hardware difference, the more likely cause is that unnecessary modules (image generation or TTS) were not trimmed. (Source: https://github.com/inclusionAI/Ming/issues/13)
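The deployment notes and the OOM-related FAQ entries both hinge on how `split_model()` in the usage example spreads the 32 decoder layers across GPUs. The following standalone sketch reproduces the same boundary arithmetic for a hypothetical 4-GPU machine so the placement can be inspected without any GPUs attached; it is an illustration, not an official utility:

```python
from bisect import bisect_left

def preview_device_map(world_size: int, num_layers: int = 32) -> dict:
    # Same boundary arithmetic as split_model() in the example above, with the GPU
    # count passed in explicitly so the placement can be inspected without CUDA.
    boundaries = [i * (num_layers // world_size) for i in range(1, world_size + 1)]
    return {f"model.model.layers.{i}": bisect_left(boundaries, i) for i in range(num_layers)}

# Hypothetical 4-GPU machine: bisect_left puts layers 0-8 on cuda:0, 9-16 on cuda:1,
# 17-24 on cuda:2, and 25-31 on cuda:3 (split_model() then pins the encoders,
# embeddings, and the last decoder layer back onto cuda:0).
for name, device in preview_device_map(world_size=4).items():
    print(f"{name} -> cuda:{device}")
```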