[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-JIA-Lab-research--MGM":3,"tool-JIA-Lab-research--MGM":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",151314,2,"2026-04-11T23:32:58",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108322,"2026-04-10T11:39:34",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 
人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[52,13,15,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":10,"last_commit_at":59,"category_tags":60,"status":17},4487,"LLMs-from-scratch","rasbt\u002FLLMs-from-scratch","LLMs-from-scratch 是一个基于 PyTorch 的开源教育项目，旨在引导用户从零开始一步步构建一个类似 ChatGPT 的大型语言模型（LLM）。它不仅是同名技术著作的官方代码库，更提供了一套完整的实践方案，涵盖模型开发、预训练及微调的全过程。\n\n该项目主要解决了大模型领域“黑盒化”的学习痛点。许多开发者虽能调用现成模型，却难以深入理解其内部架构与训练机制。通过亲手编写每一行核心代码，用户能够透彻掌握 Transformer 架构、注意力机制等关键原理，从而真正理解大模型是如何“思考”的。此外，项目还包含了加载大型预训练权重进行微调的代码，帮助用户将理论知识延伸至实际应用。\n\nLLMs-from-scratch 特别适合希望深入底层原理的 AI 开发者、研究人员以及计算机专业的学生。对于不满足于仅使用 API，而是渴望探究模型构建细节的技术人员而言，这是极佳的学习资源。其独特的技术亮点在于“循序渐进”的教学设计：将复杂的系统工程拆解为清晰的步骤，配合详细的图表与示例，让构建一个虽小但功能完备的大模型变得触手可及。无论你是想夯实理论基础，还是为未来研发更大规模的模型做准备",90106,"2026-04-06T11:19:32",[35,15,13,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":73,"owner_company":75,"owner_location":75,"owner_email":75,"owner_twitter":75,"owner_website":75,"owner_url":76,"languages":77,"stars":98,"forks":99,"last_commit_at":100,"license":101,"difficulty_score":10,"env_os":102,"env_gpu":103,"env_ram":104,"env_deps":105,"category_tags":119,"github_topics":122,"view_count":32,"oss_zip_url":75,"oss_zip_packed_at":75,"status":17,"created_at":126,"updated_at":127,"faqs":128,"releases":158},6797,"JIA-Lab-research\u002FMGM","MGM","Official repo for \"Mini-Gemini: Mining the Potential of Multi-modality Vision Language Models\"","MGM（Mini-Gemini）是一个强大的多模态视觉语言模型开源框架，旨在同时实现高质量的图像理解、逻辑推理与图像生成。它基于 LLaVA 架构构建，能够灵活支持从 2B 到 34B 参数的多种稠密及混合专家（MoE）大语言模型，并已成功适配 LLaMA3 等最新基座模型。\n\n针对传统多模态模型在处理高分辨率图像时细节丢失或计算成本过高的问题，MGM 创新性地采用了双视觉编码器架构。该架构结合低分辨率全局嵌入与高分辨率局部候选区域，并通过独特的“补丁信息挖掘”技术，在高低分辨率视觉特征间建立精细联系。这种设计让模型既能把握图像整体语境，又能敏锐捕捉细微视觉线索，从而在复杂场景下实现更精准的分析与创作。\n\nMGM 非常适合 AI 研究人员、开发者以及需要处理复杂视觉任务的技术团队使用。研究人员可利用其开放的代码、训练数据及多种预训练权重（如 MGM-7B、MGM-13B 等）进行前沿探索；开发者则可基于其成熟的训练与评估流程，快速构建定制化的多模态应用。无论是学术实验还是工程落地，MGM 都提供了一个高效且扩展性强的解决方案。","# Official repo for \"Mini-Gemini: Mining the Potential of Multi-modality Vision Language Models\"\n\n\u003Ca href='https:\u002F\u002Fmini-gemini.github.io\u002F'>\u003Cimg src='https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FProject-Page-Green'>\u003C\u002Fa>\n\u003Ca href='http:\u002F\u002F103.170.5.190:7860\u002F'>\u003Cimg src='https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FProject-Demo-violet'>\u003C\u002Fa>\n\u003Ca href='https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fwcy1122\u002FMGM'>\u003Cimg src='https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F🤗-Open%20In%20Spaces-blue.svg'>\u003C\u002Fa>\n\u003Ca href='https:\u002F\u002Farxiv.org\u002Fpdf\u002F2403.18814.pdf'>\u003Cimg src='https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FPaper-Arxiv-red'>\u003C\u002Fa>\n\u003Ca href='https:\u002F\u002Fhuggingface.co\u002Fcollections\u002FYanweiLi\u002Fmgm-6603c50b9b43d044171d0854'>\u003Cimg src='https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F%F0%9F%A4%97%20Hugging%20Face-Models-blue'>\u003C\u002Fa>\n\u003Ca href='https:\u002F\u002Fhuggingface.co\u002Fcollections\u002FYanweiLi\u002Fmgm-data-660463ea895a01d8f367624e'>\u003Cimg 
src='https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F%F0%9F%A4%97%20Hugging%20Face-Data-green'>\u003C\u002Fa>\n\n\nThe framework supports a series of dense and MoE Large Language Models (LLMs) from 2B to 34B with image understanding, reasoning, and generation simultaneously. We build this repo based on LLaVA.\n\n## Release\n- [05\u002F03] 🔥 We support LLaMA3-based models! Welcome to try them [here](https:\u002F\u002Fhuggingface.co\u002Fcollections\u002FYanweiLi\u002Fmgm-6603c50b9b43d044171d0854).\n- [04\u002F15] 🔥 The [Hugging Face demo](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fwcy1122\u002FMGM) is available. It's a 13B-HD version, welcome to watch and try.\n- [03\u002F28] 🔥 Mini-Gemini is coming! We release the [paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2403.18814.pdf), [demo](http:\u002F\u002F103.170.5.190:7860\u002F), [code](https:\u002F\u002Fgithub.com\u002Fdvlab-research\u002FMGM), [models](https:\u002F\u002Fhuggingface.co\u002Fcollections\u002FYanweiLi\u002Fmgm-6603c50b9b43d044171d0854'), and [data](https:\u002F\u002Fhuggingface.co\u002Fcollections\u002FYanweiLi\u002Fmgm-data-660463ea895a01d8f367624e)!\n\n## Contents\n- [Demo](#demo)\n- [Install](#install)\n- [Model](#model)\n- [Preparation](#preparation)\n- [Train](#train)\n- [Evaluation](#evaluation)\n- [Examples](#examples)\n- [Citation](#citation)\n- [Acknowledgement](#acknowledgement)\n- [License](#license)\n\n## Demo\nWe provide some selected examples in this section. More examples can be found in our [project page](https:\u002F\u002Fmini-gemini.github.io\u002F). Feel free to try our online [demo](http:\u002F\u002F103.170.5.190:7860\u002F)!\n\n\u003Cdiv align=center>\n\u003Cimg width=\"100%\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FJIA-Lab-research_MGM_readme_f2b5858f534d.png\"\u002F>\n\u003C\u002Fdiv>\n\n## Install\nPlease follow the instructions below to install the required packages.\n\nNOTE: If you want to use the 2B version, please ensure to install the latest version Transformers (>=4.38.0).\n\n1. Clone this repository\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fdvlab-research\u002FMGM.git\n```\n\n2. Install Package\n```bash\nconda create -n mgm python=3.10 -y\nconda activate mgm\ncd MGM\npip install --upgrade pip  # enable PEP 660 support\npip install -e .\n```\n\n3. 
Install additional packages for training cases\n```bash\npip install ninja\npip install flash-attn --no-build-isolation\n```\n\n## Model\nThe framework is conceptually simple: dual vision encoders are utilized to provide low-resolution visual embedding and high-resolution candidates;\npatch info mining is proposed to conduct patch-level mining between high-resolution regions and low-resolution visual queries;\nLLM is utilized to marry text with images for both comprehension and generation at the same time.\n\n\u003Cdiv align=center>\n\u003Cimg width=\"98%\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FJIA-Lab-research_MGM_readme_b19a1e52bdd5.png\"\u002F>\n\u003C\u002Fdiv>\n\nWe provide all our fully finetuned models on Stage 1 and 2 data:\n\n| Model | LR | HR | Base LLM | Vision Encoder | Finetuning Data | Finetuning schedule | Download |\n|----------|----------|----------|----------|----------------|---------------|--------------------|------------------|\n| MGM-2B | 336 | 768 | Gemma-2B | CLIP-L | MGM-Instruct | full_ft-1e | [ckpt](https:\u002F\u002Fhuggingface.co\u002FYanweiLi\u002FMGM-2B) |\n| MGM-7B | 336 | 768 | Vicuna-7B-v1.5 | CLIP-L | MGM-Instruct | full_ft-1e | [ckpt](https:\u002F\u002Fhuggingface.co\u002FYanweiLi\u002FMGM-7B) |\n| MGM-13B | 336 | 768 | Vicuna-13B-v1.5 | CLIP-L | MGM-Instruct | full_ft-1e | [ckpt](https:\u002F\u002Fhuggingface.co\u002FYanweiLi\u002FMGM-13B) |\n| MGM-8B | 336 | 768 | LLaMA-3-8B-Instruct | CLIP-L | MGM-Instruct | full_ft-1e | [ckpt](https:\u002F\u002Fhuggingface.co\u002FYanweiLi\u002FMGM-8B) |\n| MGM-8x7B | 336 | 768 | Mixtral-8x7B-Instruct-v0.1 | CLIP-L | MGM-Instruct | full_ft-1e | [ckpt](https:\u002F\u002Fhuggingface.co\u002FYanweiLi\u002FMGM-8x7B) |\n| MGM-34B | 336 | 768 | Nous-Hermes-2-Yi-34B | CLIP-L | MGM-Instruct | full_ft-1e | [ckpt](https:\u002F\u002Fhuggingface.co\u002FYanweiLi\u002FMGM-34B) |\n| MGM-7B-HD | 672 | 1536 | Vicuna-7B-v1.5 | CLIP-L | MGM-Instruct | full_ft-1e | [ckpt](https:\u002F\u002Fhuggingface.co\u002FYanweiLi\u002FMGM-7B-HD) |\n| MGM-13B-HD | 672 | 1536 | Vicuna-13B-v1.5 | CLIP-L | MGM-Instruct | full_ft-1e | [ckpt](https:\u002F\u002Fhuggingface.co\u002FYanweiLi\u002FMGM-13B-HD) |\n| MGM-8B-HD | 672 | 1536 | LLaMA-3-8B-Instruct | CLIP-L | MGM-Instruct | full_ft-1e | [ckpt](https:\u002F\u002Fhuggingface.co\u002FYanweiLi\u002FMGM-8B-HD) |\n| MGM-8x7B-HD | 672 | 1536 | Mixtral-8x7B-Instruct-v0.1 | CLIP-L | MGM-Instruct | full_ft-1e | [ckpt](https:\u002F\u002Fhuggingface.co\u002FYanweiLi\u002FMGM-8x7B-HD) |\n| MGM-34B-HD | 672 | 1536 | Nous-Hermes-2-Yi-34B | CLIP-L | MGM-Instruct | full_ft-1e | [ckpt](https:\u002F\u002Fhuggingface.co\u002FYanweiLi\u002FMGM-34B-HD) |\n\nHere are the pretrained weights on Stage 1 data only:\n| Model | LR | HR | Base LLM | Vision Encoder | Pretrain Data | Finetuning schedule | Download |\n|----------|----------|----------|----------|----------------|---------------|--------------------|------------------|\n| MGM-2B | 336 | 768 | Gemma-2B | CLIP-L | MGM-Pretrain | 1e | [ckpt](https:\u002F\u002Fhuggingface.co\u002FYanweiLi\u002FMGM-Pretrain\u002Ftree\u002Fmain\u002FMGM-2B) |\n| MGM-7B | 336 | 768 | Vicuna-7B-v1.5 | CLIP-L | MGM-Pretrain | 1e | [ckpt](https:\u002F\u002Fhuggingface.co\u002FYanweiLi\u002FMGM-Pretrain\u002Ftree\u002Fmain\u002FMGM-7B) |\n| MGM-13B | 336 | 768 | Vicuna-13B-v1.5 | CLIP-L | MGM-Pretrain | 1e | [ckpt](https:\u002F\u002Fhuggingface.co\u002FYanweiLi\u002FMGM-Pretrain\u002Ftree\u002Fmain\u002FMGM-13B) |\n| MGM-8x7B | 336 | 768 | Mixtral-8x7B-Instruct-v0.1 | 
CLIP-L | MGM-Pretrain | 1e | [ckpt](https:\u002F\u002Fhuggingface.co\u002FYanweiLi\u002FMGM-Pretrain\u002Ftree\u002Fmain\u002FMGM-8x7B) |\n| MGM-34B | 336 | 768 | Nous-Hermes-2-Yi-34B | CLIP-L | MGM-Pretrain | 1e | [ckpt](https:\u002F\u002Fhuggingface.co\u002FYanweiLi\u002FMGM-Pretrain\u002Ftree\u002Fmain\u002FMGM-34B) |\n\n## Preparation\n### Dataset\nWe provide the processed data for model training.\nFor model pretraining, please download the following image-based training data and organize them as:\n\n`->` means put the data in the local folder.\n- [LLaVA Images](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002Fliuhaotian\u002FLLaVA-Pretrain) -> `data\u002FMGM-Pretrain\u002Fimages`, `data\u002FMGM-Finetune\u002Fllava\u002FLLaVA-Pretrain\u002Fimages`\n- [ALLaVA Caption](https:\u002F\u002Fgithub.com\u002FFreedomIntelligence\u002FALLaVA) -> `data\u002FMGM-Pretrain\u002FALLaVA-4V`\n\nFor model finetuning, please download the following instruction data and organize them as:\n\n`->` means put the data in the local folder.\n- [COCO train2017](http:\u002F\u002Fimages.cocodataset.org\u002Fzips\u002Ftrain2017.zip) -> `data\u002FMGM-Finetune\u002Fcoco`\n- [GQA](https:\u002F\u002Fdownloads.cs.stanford.edu\u002Fnlp\u002Fdata\u002Fgqa\u002Fimages.zip) -> `data\u002FMGM-Finetune\u002Fgqa`\n- [OCR-VQA](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1_GYPY5UkUy7HIcR0zq3ZCFgeZN7BAfm_?usp=sharing) (**we save all files as `.jpg`**) -> `data\u002FMGM-Finetune\u002Focr_vqa`\n- [TextVQA](https:\u002F\u002Fdl.fbaipublicfiles.com\u002Ftextvqa\u002Fimages\u002Ftrain_val_images.zip) (not included for training) -> `data\u002FMGM-Finetune\u002Ftextvqa`\n- [VisualGenome part1](https:\u002F\u002Fcs.stanford.edu\u002Fpeople\u002Frak248\u002FVG_100K_2\u002Fimages.zip), [VisualGenome part2](https:\u002F\u002Fcs.stanford.edu\u002Fpeople\u002Frak248\u002FVG_100K_2\u002Fimages2.zip) -> `data\u002FMGM-Finetune\u002Fvg`\n- [ShareGPT4V-100K](https:\u002F\u002Fgithub.com\u002FInternLM\u002FInternLM-XComposer\u002Fblob\u002Fmain\u002Fprojects\u002FShareGPT4V\u002Fdocs\u002FData.md) -> `data\u002FMGM-Finetune\u002Fsam`, `share_textvqa`, `wikiart`, `web-celebrity`, `web-landmark`\n- [LAION GPT4V](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002Flaion\u002Fgpt4v-dataset) -> `data\u002FMGM-Finetune\u002Fgpt4v-dataset`\n- [ALLaVA Instruction](https:\u002F\u002Fgithub.com\u002FFreedomIntelligence\u002FALLaVA) -> `data\u002FMGM-Pretrain\u002FALLaVA-4V`\n- [DocVQA](https:\u002F\u002Fwww.docvqa.org\u002Fdatasets\u002Fdocvqa) -> `data\u002FMGM-Finetune\u002Fdocvqa`\n- [ChartQA](https:\u002F\u002Fgithub.com\u002Fvis-nlp\u002FChartQA) -> `data\u002FMGM-Finetune\u002Fchartqa`\n- [DVQA](https:\u002F\u002Fgithub.com\u002Fkushalkafle\u002FDVQA_dataset) -> `data\u002FMGM-Finetune\u002Fdvqa`\n- [AI2D](https:\u002F\u002Fallenai.org\u002Fdata\u002Fdiagrams) -> `data\u002FMGM-Finetune\u002Fai2d`\n\nFor model evaluation, please follow this [link](https:\u002F\u002Fgithub.com\u002Fhaotian-liu\u002FLLaVA\u002Fblob\u002Fmain\u002Fdocs\u002FEvaluation.md) for preparation. We use some extra benchmarks for evaluation. 
Please download the following image-based benchmark data and organize them as:\n\n`->` means put the data in the local folder.\n- [MMMU](https:\u002F\u002Fmmmu-benchmark.github.io\u002F) -> `data\u002FMGM-Eval\u002FMMMU`\n- [MMB](https:\u002F\u002Fgithub.com\u002Fopen-compass\u002Fmmbench\u002F) -> `data\u002FMGM-Eval\u002FMMB`\n- [MathVista](https:\u002F\u002Fmathvista.github.io\u002F) -> `data\u002FMGM-Eval\u002FMathVista`\n\n\nPlease put the pretrained data, finetuned data, and eval data in the `MGM-Pretrain`, `MGM-Finetune`, and `MGM-Eval` subfolders, following [Structure](#structure).\n\n\nFor meta info, please download the following files and organize them as in [Structure](#structure).\n\n| Data file name | Size |\n| --- | ---: |\n| [mgm_pretrain.json](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002FYanweiLi\u002FMGM-Pretrain) | 1.68 G |\n| [mgm_instruction.json](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002FYanweiLi\u002FMGM-Instruction) | 1.79 G |\n| [mgm_generation_pure_text.json](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002FYanweiLi\u002FMGM-Instruction) | 0.04 G |\n\nIMPORTANT: `mgm_generation_pure_text.json` is a generation-related subset. **DO NOT** merge it with `mgm_instruction.json` as it is already included in it. You may merge this file with your customized LLM\u002FVLM SFT dataset to enable the reasoning generation ability.\n\n\n### Pretrained Weights\nWe recommend that users download the pretrained weights from the following links: [CLIP-Vit-L-336](https:\u002F\u002Fhuggingface.co\u002Fopenai\u002Fclip-vit-large-patch14-336), [OpenCLIP-ConvNeXt-L](https:\u002F\u002Fhuggingface.co\u002Flaion\u002FCLIP-convnext_large_d_320.laion2B-s29B-b131K-ft-soup), [Gemma-2b-it](https:\u002F\u002Fhuggingface.co\u002Fgoogle\u002Fgemma-2b-it), [Vicuna-7b-v1.5](https:\u002F\u002Fhuggingface.co\u002Flmsys\u002Fvicuna-7b-v1.5), [Vicuna-13b-v1.5](https:\u002F\u002Fhuggingface.co\u002Flmsys\u002Fvicuna-13b-v1.5), [Mixtral-8x7B-Instruct-v0.1](https:\u002F\u002Fhuggingface.co\u002Fmistralai\u002FMixtral-8x7B-Instruct-v0.1), and [Nous-Hermes-2-Yi-34B](https:\u002F\u002Fhuggingface.co\u002FNousResearch\u002FNous-Hermes-2-Yi-34B), and put them in `model_zoo` following [Structure](#structure).\n\n\n### Structure\n\nThe folder structure should be organized as follows before training.\n\n```\nMGM\n├── mgm\n├── scripts\n├── work_dirs\n│   ├── MGM\n│   │   ├── MGM-2B\n│   │   ├── ...\n├── model_zoo\n│   ├── LLM\n│   │   ├── gemma\n│   │   │   ├── gemma-2b-it\n│   │   ├── vicuna\n│   │   │   ├── 7B-V1.5\n│   │   │   ├── 13B-V1.5\n│   │   ├── llama-3\n│   │   │   ├── Meta-Llama-3-8B-Instruct\n│   │   │   ├── Meta-Llama-3-70B-Instruct\n│   │   ├── mixtral\n│   │   │   ├── Mixtral-8x7B-Instruct-v0.1\n│   │   ├── Nous-Hermes-2-Yi-34B\n│   ├── OpenAI\n│   │   ├── clip-vit-large-patch14-336\n│   │   ├── openclip-convnext-large-d-320-laion2B-s29B-b131K-ft-soup\n├── data\n│   ├── MGM-Pretrain\n│   │   ├── mgm_pretrain.json\n│   │   ├── images\n│   │   ├── ALLaVA-4V\n│   ├── MGM-Finetune\n│   │   ├── mgm_instruction.json\n│   │   ├── llava\n│   │   ├── coco\n│   │   ├── gqa\n│   │   ├── ocr_vqa\n│   │   ├── textvqa\n│   │   ├── vg\n│   │   ├── gpt4v-dataset\n│   │   ├── sam\n│   │   ├── share_textvqa\n│   │   ├── wikiart\n│   │   ├── web-celebrity\n│   │   ├── web-landmark\n│   │   ├── ALLaVA-4V\n│   │   ├── docvqa\n│   │   ├── chartqa\n│   │   ├── dvqa\n│   │   ├── ai2d\n│   ├── MGM-Eval\n│   │   ├── MMMU\n│   │   ├── MMB\n│   │   ├── MathVista\n│   │   ├── ...\n```\n\n## Train\n\nThe training process 
consists of two stages: (1) feature alignment stage: bridge the vision and language tokens; (2) instruction tuning stage: teach the model to follow multimodal instructions.\n\nOur models are trained on 8 A100 GPUs with 80GB memory. To train on fewer GPUs, you can reduce the `per_device_train_batch_size` and increase the `gradient_accumulation_steps` accordingly. Always keep the global batch size the same: `per_device_train_batch_size` x `gradient_accumulation_steps` x `num_gpus`.\n\nPlease make sure you download and organize the data following [Preparation](#preparation) before training.\n\nNOTE: Please set `hostfile` for 2 machine training and `hostfile_4` for 4 machine training.\n\nIf you want to train and finetune the framework, please run the following command for MGM-7B with image size 336:\n\n```bash\nbash scripts\u002Fllama\u002Ftrain\u002Fstage_1_2_full_v7b_336_hr_768.sh\n```\nor for MGM-13B with image size 336:\n```bash\nbash scripts\u002Fllama\u002Ftrain\u002Fstage_1_2_full_v13b_336_hr_768.sh\n```\nBecause we reuse the pre-trained projecter weights from the MGM-7B, you can directly use the MGM-7B-HD with image size 672 for stage-2 instruction tuning:\n```bash\nbash scripts\u002Fllama\u002Ftrain\u002Fstage_2_full_v7b_672_hr_1536.sh\n```\nPlease find more training scripts of `gemma`, `llama`, `mixtral`, and `yi` in `scripts\u002F`.\n\n\n## Evaluation\nWe perform evaluation on several image-based benchmarks. Please download the evaluation data following [Preparation](#preparation) and organize them as in [Structure](#structure).\n\n| Model | LLM | Res. | Link | TextVQA | MMB | MME | MM-Vet | MMMU_val | MMMU_test | MathVista |\n|----------|----------|----------|-----------|---|---|---|---|---|---|---|\nMGM-2B | Gemma-2B | 336 | [ckpt](https:\u002F\u002Fhuggingface.co\u002FYanweiLi\u002FMGM-2B) | 56.2 | 59.8 | 1341\u002F312 | 31.1 | 31.7 | 29.1 | 29.4\nMGM-7B | Vicuna-7B-v1.5 | 336 | [ckpt](https:\u002F\u002Fhuggingface.co\u002FYanweiLi\u002FMGM-7B) | 65.2 | 69.3 | 1523\u002F316 | 40.8 | 36.1 | 32.8 | 31.4 \nMGM-13B | Vicuna-13B-v1.5 | 336 | [ckpt](https:\u002F\u002Fhuggingface.co\u002FYanweiLi\u002FMGM-13B) | 65.9 | 68.5 | 1565\u002F322 | 46.0 | 38.1 | 33.5 | 37.0\nMGM-8B | LLaMA-3-8B-Instruct | 336 | [ckpt](https:\u002F\u002Fhuggingface.co\u002FYanweiLi\u002FMGM-8B) | 67.6 | 72.7 | 1606\u002F341 | 47.3 | 38.2 | 36.3 | --\nMGM-8x7B | Mixtral-8x7B-Instruct-v0.1 | 336 | [ckpt](https:\u002F\u002Fhuggingface.co\u002FYanweiLi\u002FMGM-8x7B) | 69.2 | 75.6 | 1639\u002F379 | 45.8 | 41.8 | 37.1 | 41.8\nMGM-34B | Nous-Hermes-2-Yi-34B | 336 | [ckpt](https:\u002F\u002Fhuggingface.co\u002FYanweiLi\u002FMGM-34B) | 70.1 | 79.6 | 1666\u002F439 | 53.0 | 48.7 | 43.6 | 38.9\nMGM-7B-HD | Vicuna-7B-v1.5 | 672 | [ckpt](https:\u002F\u002Fhuggingface.co\u002FYanweiLi\u002FMGM-7B-HD) | 68.4 | 65.8 | 1546\u002F319 | 41.3 | 36.8 | 32.9 | 32.2\nMGM-13B-HD | Vicuna-13B-v1.5 | 672 | [ckpt](https:\u002F\u002Fhuggingface.co\u002FYanweiLi\u002FMGM-13B-HD) | 70.2 | 68.6 | 1597\u002F320 | 50.5 | 37.3 | 35.1 | 37.0\nMGM-8B-HD | LLaMA-3-8B-Instruct | 672 | [ckpt](https:\u002F\u002Fhuggingface.co\u002FYanweiLi\u002FMGM-8B-HD) | 71.6 | -- | 1532\u002F357 | -- | 37.0 | -- | --\nMGM-8x7B-HD | Mixtral-8x7B-Instruct-v0.1 | 672 | [ckpt](https:\u002F\u002Fhuggingface.co\u002FYanweiLi\u002FMGM-8x7B-HD) | 71.9 | 74.7 | 1633\u002F356 | 53.5 | 40.0 | 37.0 | 43.1\nMGM-34B-HD | Nous-Hermes-2-Yi-34B | 672 | [ckpt](https:\u002F\u002Fhuggingface.co\u002FYanweiLi\u002FMGM-34B-HD) | 74.1 | 80.6 | 1659\u002F482 | 59.3 | 48.0 | 44.9 | 
43.3\n\n\n\nIf you want to evaluate the model on image-based benchmarks, please use the scripts in `scripts\u002FMODEL_PATH\u002Feval`. \nFor example, run the following command for TextVQA evaluation with MGM-7B-HD:\n```bash\nbash scripts\u002Fllama\u002Feval\u002Ftextvqa.sh\n```\nPlease find more evaluation scripts in `scripts\u002FMODEL_PATH`.\n\n\n### CLI Inference\nChat with images without the need for a Gradio interface. It also supports multiple GPUs, as well as 4-bit and 8-bit quantized inference.\nPlease make sure you have installed [diffusers](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fdiffusers) and [PaddleOCR](https:\u002F\u002Fgithub.com\u002FPaddlePaddle\u002FPaddleOCR\u002Fblob\u002Frelease\u002F2.7\u002FREADME_en.md) (only for a better experience with OCR), and try this for image and generation inference:\n\n```bash\npython -m mgm.serve.cli \\\n    --model-path work_dirs\u002FMGM\u002FMGM-13B-HD \\\n    --image-file \u003Cpath to your image>\n```\n\nor try this for a better experience with OCR (make sure you have installed [PaddleOCR](https:\u002F\u002Fgithub.com\u002FPaddlePaddle\u002FPaddleOCR\u002Fblob\u002Frelease\u002F2.7\u002FREADME_en.md)):\n```bash\npython -m mgm.serve.cli \\\n    --model-path work_dirs\u002FMGM\u002FMGM-13B-HD \\\n    --image-file \u003Cpath to your image> \\\n    --ocr\n```\n\nor try this for inference with generation (make sure you have installed [diffusers](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fdiffusers)):\n```bash\npython -m mgm.serve.cli \\\n    --model-path work_dirs\u002FMGM\u002FMGM-13B-HD \\\n    --image-file \u003Cpath to your image> \\\n    --gen\n```\n\nYou can also try 8-bit or even 4-bit quantization for more efficient inference:\n```bash\npython -m mgm.serve.cli \\\n    --model-path work_dirs\u002FMGM\u002FMGM-13B-HD \\\n    --image-file \u003Cpath to your image> \\\n    --gen \\\n    --load-8bit\n```\n\n### Gradio Web UI\n\nHere, we adopt a Gradio UI similar to that of LLaVA to provide a user-friendly interface for our models.\nTo launch a Gradio demo locally, please run the following commands one by one. If you plan to launch multiple model workers to compare between different checkpoints, you only need to launch the controller and the web server *ONCE*.\n\n#### Launch a controller\n```Shell\npython -m mgm.serve.controller --host 0.0.0.0 --port 10000\n```\n\n#### Launch a Gradio web server\n```Shell\npython -m mgm.serve.gradio_web_server --controller http:\u002F\u002Flocalhost:10000 --model-list-mode reload\n```\nYou just launched the Gradio web interface. Now, you can open the web interface with the URL printed on the screen. You may notice that there is no model in the model list. Do not worry, as we have not launched any model worker yet. It will be automatically updated when you launch a model worker.\n\n#### Launch a model worker\nThis is the actual *worker* that performs the inference on the GPU. Each worker is responsible for a single model specified in `--model-path`.\n\n```Shell\npython -m mgm.serve.model_worker --host 0.0.0.0 --controller http:\u002F\u002Flocalhost:10000 --port 40000 --worker http:\u002F\u002Flocalhost:40000 --model-path work_dirs\u002FMGM\u002FMGM-13B-HD\n```\nWait until the process finishes loading the model and you see \"Uvicorn running on ...\". Now, refresh your Gradio web UI, and you will see the model you just launched in the model list.\n\nYou can launch as many workers as you want, and compare between different models in the same Gradio interface. 
Please keep the `--controller` the same, and modify the `--port` and `--worker` to a different port number for each worker.\n```Shell\npython -m mgm.serve.model_worker --host 0.0.0.0 --controller http:\u002F\u002Flocalhost:10000 --port \u003Cdifferent from 40000, say 40001> --worker http:\u002F\u002Flocalhost:\u003Cchange accordingly, i.e. 40001> --model-path work_dirs\u002FMGM\u002FMGM-34B-HD\n```\n\nIf you are using an Apple device with an M1 or M2 chip, you can specify the mps device by using the `--device` flag: `--device mps`.\n\n#### Launch a model worker (Multiple GPUs, when GPU VRAM \u003C= 24GB)\n\nIf the VRAM of your GPU is less than 24GB (e.g., RTX 3090, RTX 4090, etc.), you may try running it with multiple GPUs. Our latest code base will automatically try to use multiple GPUs if you have more than one GPU. You can specify which GPUs to use with `CUDA_VISIBLE_DEVICES`. Below is an example of running with the first two GPUs.\n\n```Shell\nCUDA_VISIBLE_DEVICES=0,1 python -m mgm.serve.model_worker --host 0.0.0.0 --controller http:\u002F\u002Flocalhost:10000 --port 40000 --worker http:\u002F\u002Flocalhost:40000 --model-path work_dirs\u002FMGM\u002FMGM-13B-HD\n```\n\n#### Launch a model worker (4-bit, 8-bit inference, quantized)\n\nYou can launch the model worker with quantized bits (4-bit, 8-bit), which allows you to run the inference with reduced GPU memory footprint. Note that inference with quantized bits may not be as accurate as the full-precision model. Simply append `--load-4bit` or `--load-8bit` to the **model worker** command that you are executing. Below is an example of running with 4-bit quantization.\n\n```Shell\npython -m mgm.serve.model_worker --host 0.0.0.0 --controller http:\u002F\u002Flocalhost:10000 --port 40000 --worker http:\u002F\u002Flocalhost:40000 --model-path work_dirs\u002FMGM\u002FMGM-13B-HD --load-4bit\n```\n\n## Examples\nWe provide some examples in this section. 
More examples can be found in our [project page](https:\u002F\u002Fmini-gemini.github.io\u002F).\n\n### Hi-Resolution Understanding\n\u003Cdiv align=center>\n\u003Cimg width=\"98%\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FJIA-Lab-research_MGM_readme_64025c0e6fb7.png\"\u002F>\n\u003C\u002Fdiv>\n\n### Generation with Reasoning\n\u003Cdiv align=center>\n\u003Cimg width=\"98%\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FJIA-Lab-research_MGM_readme_ca029eb52965.png\"\u002F>\n\u003C\u002Fdiv>\n\n## Citation\nIf you find this repo useful for your research, please consider citing the paper\n```\n@article{li2024mgm,\n  title={Mini-Gemini: Mining the Potential of Multi-modality Vision Language Models},\n  author={Li, Yanwei and Zhang, Yuechen and Wang, Chengyao and Zhong, Zhisheng and Chen, Yixin and Chu, Ruihang and Liu, Shaoteng and Jia, Jiaya},\n  journal={arXiv:2403.18814},\n  year={2023}\n}\n```\n\n## Acknowledgement\nThis project is not affiliated with Google LLC.\n\nWe would like to thank the following repos for their great work:\n\n- This work is built upon the [LLaVA](https:\u002F\u002Fgithub.com\u002Fhaotian-liu\u002FLLaVA).\n- This work utilizes LLMs from [Gemma](https:\u002F\u002Fhuggingface.co\u002Fgoogle\u002Fgemma-2b-it), [Vicuna](https:\u002F\u002Fgithub.com\u002Flm-sys\u002FFastChat), [Mixtral](https:\u002F\u002Fhuggingface.co\u002Fmistralai\u002FMixtral-8x7B-Instruct-v0.1), and [Nous-Hermes](https:\u002F\u002Fhuggingface.co\u002FNousResearch\u002FNous-Hermes-2-Yi-34B).\n\n## License\n[![Code License](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FCode%20License-Apache_2.0-yellow.svg)](https:\u002F\u002Fgithub.com\u002Fdvlab-research\u002FMGM\u002Fblob\u002Fmain\u002FLICENSE)\n[![Data License](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FData%20License-CC%20By%20NC%204.0-orange.svg)](https:\u002F\u002Fgithub.com\u002Fdvlab-research\u002FMGM\u002Fblob\u002Fmain\u002FDATA_LICENSE)\n[![Weight License](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FWeight%20License-CC%20By%20NC%204.0-red)](https:\u002F\u002Fgithub.com\u002Fdvlab-research\u002FMGM\u002Fblob\u002Fmain\u002FWEIGHT_LICENSE)\n\nThe data and checkpoint is intended and licensed for research use only. They are also restricted to uses that follow the license agreement of LLaVA, LLaMA, Vicuna and GPT-4. 
The dataset is CC BY NC 4.0 (allowing only non-commercial use) and models trained using the dataset should not be used outside of research purposes.\n","# “Mini-Gemini：挖掘多模态视觉语言模型潜力”的官方仓库\n\n\u003Ca href='https:\u002F\u002Fmini-gemini.github.io\u002F'>\u003Cimg src='https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FProject-Page-Green'>\u003C\u002Fa>\n\u003Ca href='http:\u002F\u002F103.170.5.190:7860\u002F'>\u003Cimg src='https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FProject-Demo-violet'>\u003C\u002Fa>\n\u003Ca href='https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fwcy1122\u002FMGM'>\u003Cimg src='https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F🤗-Open%20In%20Spaces-blue.svg'>\u003C\u002Fa>\n\u003Ca href='https:\u002F\u002Farxiv.org\u002Fpdf\u002F2403.18814.pdf'>\u003Cimg src='https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FPaper-Arxiv-red'>\u003C\u002Fa>\n\u003Ca href='https:\u002F\u002Fhuggingface.co\u002Fcollections\u002FYanweiLi\u002Fmgm-6603c50b9b43d044171d0854'>\u003Cimg src='https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F%F0%9F%A4%97%20Hugging%20Face-Models-blue'>\u003C\u002Fa>\n\u003Ca href='https:\u002F\u002Fhuggingface.co\u002Fcollections\u002FYanweiLi\u002Fmgm-data-660463ea895a01d8f367624e'>\u003Cimg src='https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F%F0%9F%A4%97%20Hugging%20 Face-Data-green'>\u003C\u002Fa>\n\n\n该框架支持从2B到34B的一系列密集型和MoE大型语言模型（LLMs），这些模型能够同时进行图像理解、推理和生成。我们基于LLaVA构建了这个仓库。\n\n## 发布\n- [05\u002F03] 🔥 我们现已支持基于LLaMA3的模型！欢迎在此尝试 [这里](https:\u002F\u002Fhuggingface.co\u002Fcollections\u002FYanweiLi\u002Fmgm-6603c50b9b43d044171d0854)。\n- [04\u002F15] 🔥 [Hugging Face演示](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fwcy1122\u002FMGM) 已上线。这是一个13B-HD版本，欢迎大家观看并试用。\n- [03\u002F28] 🔥 Mini-Gemini 来了！我们发布了[论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2403.18814.pdf)、[演示](http:\u002F\u002F103.170.5.190:7860\u002F)、[代码](https:\u002F\u002Fgithub.com\u002Fdvlab-research\u002FMGM)、[模型](https:\u002F\u002Fhuggingface.co\u002Fcollections\u002FYanweiLi\u002Fmgm-6603c50b9b43d044171d0854')以及[数据](https:\u002F\u002Fhuggingface.co\u002Fcollections\u002FYanweiLi\u002Fmgm-data-660463ea895a01d8f367624e)！\n\n## 目录\n- [演示](#demo)\n- [安装](#install)\n- [模型](#model)\n- [准备](#preparation)\n- [训练](#train)\n- [评估](#evaluation)\n- [示例](#examples)\n- [引用](#citation)\n- [致谢](#acknowledgement)\n- [许可证](#license)\n\n## 演示\n我们在本节提供了一些精选示例。更多示例请访问我们的[项目页面](https:\u002F\u002Fmini-gemini.github.io\u002F)。也欢迎您在线试用我们的[演示](http:\u002F\u002F103.170.5.190:7860\u002F)！\n\n\u003Cdiv align=center>\n\u003Cimg width=\"100%\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FJIA-Lab-research_MGM_readme_f2b5858f534d.png\"\u002F>\n\u003C\u002Fdiv>\n\n## 安装\n请按照以下步骤安装所需的软件包。\n\n注意：如果您想使用2B版本，请确保安装最新版本的Transformers库（>=4.38.0）。\n\n1. 克隆本仓库\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fdvlab-research\u002FMGM.git\n```\n\n2. 安装环境\n```bash\nconda create -n mgm python=3.10 -y\nconda activate mgm\ncd MGM\npip install --upgrade pip  # 启用PEP 660支持\npip install -e .\n```\n\n3. 
安装额外的训练相关包\n```bash\npip install ninja\npip install flash-attn --no-build-isolation\n```\n\n## 模型\n该框架的概念非常简单：采用双视觉编码器分别提供低分辨率视觉嵌入和高分辨率候选特征；\n提出了一种补丁信息挖掘方法，在高分辨率区域与低分辨率视觉查询之间进行补丁级别的信息挖掘；\n最后利用大型语言模型将文本与图像结合起来，实现理解和生成的双重功能。\n\n\u003Cdiv align=center>\n\u003Cimg width=\"98%\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FJIA-Lab-research_MGM_readme_b19a1e52bdd5.png\"\u002F>\n\u003C\u002Fdiv>\n\n我们提供了在第一阶段和第二阶段数据上完全微调的所有模型：\n\n| 模型 | 低分辨率 | 高分辨率 | 基础LLM | 视觉编码器 | 微调数据 | 微调计划 | 下载 |\n|----------|----------|----------|----------|----------------|---------------|--------------------|------------------|\n| MGM-2B | 336 | 768 | Gemma-2B | CLIP-L | MGM-Instruct | full_ft-1e | [ckpt](https:\u002F\u002Fhuggingface.co\u002FYanweiLi\u002FMGM-2B) |\n| MGM-7B | 336 | 768 | Vicuna-7B-v1.5 | CLIP-L | MGM-Instruct | full_ft-1e | [ckpt](https:\u002F\u002Fhuggingface.co\u002FYanweiLi\u002FMGM-7B) |\n| MGM-13B | 336 | 768 | Vicuna-13B-v1.5 | CLIP-L | MGM-Instruct | full_ft-1e | [ckpt](https:\u002F\u002Fhuggingface.co\u002FYanweiLi\u002FMGM-13B) |\n| MGM-8B | 336 | 768 | LLaMA-3-8B-Instruct | CLIP-L | MGM-Instruct | full_ft-1e | [ckpt](https:\u002F\u002Fhuggingface.co\u002FYanweiLi\u002FMGM-8B) |\n| MGM-8x7B | 336 | 768 | Mixtral-8x7B-Instruct-v0.1 | CLIP-L | MGM-Instruct | full_ft-1e | [ckpt](https:\u002F\u002Fhuggingface.co\u002FYanweiLi\u002FMGM-8x7B) |\n| MGM-34B | 336 | 768 | Nous-Hermes-2-Yi-34B | CLIP-L | MGM-Instruct | full_ft-1e | [ckpt](https:\u002F\u002Fhuggingface.co\u002FYanweiLi\u002FMGM-34B) |\n| MGM-7B-HD | 672 | 1536 | Vicuna-7B-v1.5 | CLIP-L | MGM-Instruct | full_ft-1e | [ckpt](https:\u002F\u002Fhuggingface.co\u002FYanweiLi\u002FMGM-7B-HD) |\n| MGM-13B-HD | 672 | 1536 | Vicuna-13B-v1.5 | CLIP-L | MGM-Instruct | full_ft-1e | [ckpt](https:\u002F\u002Fhuggingface.co\u002FYanweiLi\u002FMGM-13B-HD) |\n| MGM-8B-HD | 672 | 1536 | LLaMA-3-8B-Instruct | CLIP-L | MGM-Instruct | full_ft-1e | [ckpt](https:\u002F\u002Fhuggingface.co\u002FYanweiLi\u002FMGM-8B-HD) |\n| MGM-8x7B-HD | 672 | 1536 | Mixtral-8x7B-Instruct-v0.1 | CLIP-L | MGM-Instruct | full_ft-1e | [ckpt](https:\u002F\u002Fhuggingface.co\u002FYanweiLi\u002FMGM-8x7B-HD) |\n| MGM-34B-HD | 672 | 1536 | Nous-Hermes-2-Yi-34B | CLIP-L | MGM-Instruct | full_ft-1e | [ckpt](https:\u002F\u002Fhuggingface.co\u002FYanweiLi\u002FMGM-34B-HD) |\n\n以下是仅在第一阶段数据上预训练的权重：\n| 模型 | 低分辨率 | 高分辨率 | 基础LLM | 视觉编码器 | 预训练数据 | 微调计划 | 下载 |\n|----------|----------|----------|----------|----------------|---------------|--------------------|------------------|\n| MGM-2B | 336 | 768 | Gemma-2B | CLIP-L | MGM-Pretrain | 1e | [ckpt](https:\u002F\u002Fhuggingface.co\u002FYanweiLi\u002FMGM-Pretrain\u002Ftree\u002Fmain\u002FMGM-2B) |\n| MGM-7B | 336 | 768 | Vicuna-7B-v1.5 | CLIP-L | MGM-Pretrain | 1e | [ckpt](https:\u002F\u002Fhuggingface.co\u002FYanweiLi\u002FMGM-Pretrain\u002Ftree\u002Fmain\u002FMGM-7B) |\n| MGM-13B | 336 | 768 | Vicuna-13B-v1.5 | CLIP-L | MGM-Pretrain | 1e | [ckpt](https:\u002F\u002Fhuggingface.co\u002FYanweiLi\u002FMGM-Pretrain\u002Ftree\u002Fmain\u002FMGM-13B) |\n| MGM-8x7B | 336 | 768 | Mixtral-8x7B-Instruct-v0.1 | CLIP-L | MGM-Pretrain | 1e | [ckpt](https:\u002F\u002Fhuggingface.co\u002FYanweiLi\u002FMGM-Pretrain\u002Ftree\u002Fmain\u002FMGM-8x7B) |\n| MGM-34B | 336 | 768 | Nous-Hermes-2-Yi-34B | CLIP-L | MGM-Pretrain | 1e | [ckpt](https:\u002F\u002Fhuggingface.co\u002FYanweiLi\u002FMGM-Pretrain\u002Ftree\u002Fmain\u002FMGM-34B) |\n\n## 准备\n\n### 数据集\n我们提供了用于模型训练的处理后的数据。\n对于模型预训练，请下载以下基于图像的训练数据，并按如下方式组织：\n\n`->` 表示将数据放入本地文件夹。\n- [LLaVA 
Images](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002Fliuhaotian\u002FLLaVA-Pretrain) -> `data\u002FMGM-Pretrain\u002Fimages`, `data\u002FMGM-Finetune\u002Fllava\u002FLLaVA-Pretrain\u002Fimages`\n- [ALLaVA Caption](https:\u002F\u002Fgithub.com\u002FFreedomIntelligence\u002FALLaVA) -> `data\u002FMGM-Pretrain\u002FALLaVA-4V`\n\n对于模型微调，请下载以下指令数据，并按如下方式组织：\n\n`->` 表示将数据放入本地文件夹。\n- [COCO train2017](http:\u002F\u002Fimages.cocodataset.org\u002Fzips\u002Ftrain2017.zip) -> `data\u002FMGM-Finetune\u002Fcoco`\n- [GQA](https:\u002F\u002Fdownloads.cs.stanford.edu\u002Fnlp\u002Fdata\u002Fgqa\u002Fimages.zip) -> `data\u002FMGM-Finetune\u002Fgqa`\n- [OCR-VQA](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1_GYPY5UkUy7HIcR0zq3ZCFgeZN7BAfm_?usp=sharing)（**我们将所有文件保存为`.jpg`格式**）-> `data\u002FMGM-Finetune\u002Focr_vqa`\n- [TextVQA](https:\u002F\u002Fdl.fbaipublicfiles.com\u002Ftextvqa\u002Fimages\u002Ftrain_val_images.zip)（不包含在训练中）-> `data\u002FMGM-Finetune\u002Ftextvqa`\n- [VisualGenome part1](https:\u002F\u002Fcs.stanford.edu\u002Fpeople\u002Frak248\u002FVG_100K_2\u002Fimages.zip), [VisualGenome part2](https:\u002F\u002Fcs.stanford.edu\u002Fpeople\u002Frak248\u002FVG_100K_2\u002Fimages2.zip) -> `data\u002FMGM-Finetune\u002Fvg`\n- [ShareGPT4V-100K](https:\u002F\u002Fgithub.com\u002FInternLM\u002FInternLM-XComposer\u002Fblob\u002Fmain\u002Fprojects\u002FShareGPT4V\u002Fdocs\u002FData.md) -> `data\u002FMGM-Finetune\u002Fsam`, `share_textvqa`, `wikiart`, `web-celebrity`, `web-landmark`\n- [LAION GPT4V](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002Flaion\u002Fgpt4v-dataset) -> `data\u002FMGM-Finetune\u002Fgpt4v-dataset`\n- [ALLaVA Instruction](https:\u002F\u002Fgithub.com\u002FFreedomIntelligence\u002FALLaVA) -> `data\u002FMGM-Pretrain\u002FALLaVA-4V`\n- [DocVQA](https:\u002F\u002Fwww.docvqa.org\u002Fdatasets\u002Fdocvqa) -> `data\u002FMGM-Finetune\u002Fdocvqa`\n- [ChartQA](https:\u002F\u002Fgithub.com\u002Fvis-nlp\u002FChartQA) -> `data\u002FMGM-Finetune\u002Fchartqa`\n- [DVQA](https:\u002F\u002Fgithub.com\u002Fkushalkafle\u002FDVQA_dataset) -> `data\u002FMGM-Finetune\u002Fdvqa`\n- [AI2D](https:\u002F\u002Fallenai.org\u002Fdata\u002Fdiagrams) -> `data\u002FMGM-Finetune\u002Fai2d`\n\n对于模型评估，请按照此[链接](https:\u002F\u002Fgithub.com\u002Fhaotian-liu\u002FLLaVA\u002Fblob\u002Fmain\u002Fdocs\u002FEvaluation.md)进行准备。我们使用了一些额外的基准来进行评估。请下载以下基于图像的评测数据，并按如下方式组织：\n\n`->` 表示将数据放入本地文件夹。\n- [MMMU](https:\u002F\u002Fmmmu-benchmark.github.io\u002F) -> `data\u002FMGM-Eval\u002FMMMU`\n- [MMB](https:\u002F\u002Fgithub.com\u002Fopen-compass\u002Fmmbench\u002F) -> `data\u002FMGM-Eval\u002FMMB`\n- [MathVista](https:\u002F\u002Fmathvista.github.io\u002F) -> `data\u002FMGM-Eval\u002FMathVista`\n\n\n请将预训练数据、微调数据和评估数据分别放入 `MGM-Pretrain`、`MGM-Finetune` 和 `MGM-Eval` 子目录中，遵循[结构](#structure)。\n\n关于元信息，请下载以下文件，并按[结构](#structure)进行组织。\n\n| 数据文件名 | 大小 |\n| --- | ---: |\n| [mgm_pretrain.json](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002FYanweiLi\u002FMGM-Pretrain) | 1.68 G |\n| [mgm_instruction.json](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002FYanweiLi\u002FMGM-Instruction) | 1.79 G |\n| [mgm_generation_pure_text.json](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002FYanweiLi\u002FMGM-Instruction) | 0.04 G |\n\n重要提示：`mgm_generation_pure_text.json` 是一个与生成相关的子集。**请勿**将其与 `mgm_instruction.json` 合并，因为它已经包含在其中。您可以将此文件与您自定义的 LLM\u002FVLM SFT 数据集合并，以启用推理生成能力。\n\n\n### 预训练权重\n我们建议用户从以下链接下载预训练权重：[CLIP-Vit-L-336](https:\u002F\u002Fhuggingface.co\u002Fopenai\u002Fclip-vit-large-patch14-336), 
[OpenCLIP-ConvNeXt-L](https:\u002F\u002Fhuggingface.co\u002Flaion\u002FCLIP-convnext_large_d_320.laion2B-s29B-b131K-ft-soup), [Gemma-2b-it](https:\u002F\u002Fhuggingface.co\u002Fgoogle\u002Fgemma-2b-it), [Vicuna-7b-v1.5](https:\u002F\u002Fhuggingface.co\u002Flmsys\u002Fvicuna-7b-v1.5), [Vicuna-13b-v1.5](https:\u002F\u002Fhuggingface.co\u002Flmsys\u002Fvicuna-13b-v1.5), [Mixtral-8x7B-Instruct-v0.1](https:\u002F\u002Fhuggingface.co\u002Fmistralai\u002FMixtral-8x7B-Instruct-v0.1)，以及 [Nous-Hermes-2-Yi-34B](https:\u002F\u002Fhuggingface.co\u002FNousResearch\u002FNous-Hermes-2-Yi-34B)，并将它们放入 `model_zoo` 目录中，遵循[结构](#structure)。\n\n\n### 结构\n\n在开始训练之前，文件夹结构应按如下方式组织：\n\n```\nMGM\n├── mgm\n├── scripts\n├── work_dirs\n│   ├── MGM\n│   │   ├── MGM-2B\n│   │   ├── ...\n├── model_zoo\n│   ├── LLM\n│   │   ├── gemma\n│   │   │   ├── gemma-2b-it\n│   │   ├── vicuna\n│   │   │   ├── 7B-V1.5\n│   │   │   ├── 13B-V1.5\n│   │   ├── llama-3\n│   │   │   ├── Meta-Llama-3-8B-Instruct\n│   │   │   ├── Meta-Llama-3-70B-Instruct\n│   │   ├── mixtral\n│   │   │   ├── Mixtral-8x7B-Instruct-v0.1\n│   │   ├── Nous-Hermes-2-Yi-34B\n│   ├── OpenAI\n│   │   ├── clip-vit-large-patch14-336\n│   │   ├── openclip-convnext-large-d-320-laion2B-s29B-b131K-ft-soup\n├── data\n│   ├── MGM-Pretrain\n│   │   ├── mgm_pretrain.json\n│   │   ├── images\n│   │   ├── ALLaVA-4V\n│   ├── MGM-Finetune\n│   │   ├── mgm_instruction.json\n│   │   ├── llava\n│   │   ├── coco\n│   │   ├── gqa\n│   │   ├── ocr_vqa\n│   │   ├── textvqa\n│   │   ├── vg\n│   │   ├── gpt4v-dataset\n│   │   ├── sam\n│   │   ├── share_textvqa\n│   │   ├── wikiart\n│   │   ├── web-celebrity\n│   │   ├── web-landmark\n│   │   ├── ALLaVA-4V\n│   │   ├── docvqa\n│   │   ├── chartqa\n│   │   ├── dvqa\n│   │   ├── ai2d\n│   ├── MGM-Eval\n│   │   ├── MMMU\n│   │   ├── MMB\n│   │   ├── MathVista\n│   │   ├── ...\n```\n\n## 训练\n\n训练过程分为两个阶段：(1) 特征对齐阶段：连接视觉和语言标记；(2) 指令调优阶段：教会模型遵循多模态指令。\n\n我们的模型是在配备 80GB 显存的 8 张 A100 GPU 上训练的。如果使用较少的 GPU 进行训练，可以相应地减少 `per_device_train_batch_size` 并增加 `gradient_accumulation_steps`。始终保持全局批量大小不变：`per_device_train_batch_size` × `gradient_accumulation_steps` × `num_gpus`。\n\n请确保在训练前按照[准备工作](#preparation)下载并整理好数据。\n\n注意：对于两台机器的训练，请设置 `hostfile`；对于四台机器的训练，请设置 `hostfile_4`。\n\n如果您想训练和微调该框架，请运行以下命令，针对 MGM-7B 使用 336 像素的图像尺寸：\n\n```bash\nbash scripts\u002Fllama\u002Ftrain\u002Fstage_1_2_full_v7b_336_hr_768.sh\n```\n或者针对 MGM-13B 使用 336 像素的图像尺寸：\n```bash\nbash scripts\u002Fllama\u002Ftrain\u002Fstage_1_2_full_v13b_336_hr_768.sh\n```\n由于我们复用了 MGM-7B 的预训练投影器权重，因此可以直接使用 MGM-7B-HD（672 像素）进行第二阶段的指令调优：\n```bash\nbash scripts\u002Fllama\u002Ftrain\u002Fstage_2_full_v7b_672_hr_1536.sh\n```\n更多关于 `gemma`、`llama`、`mixtral` 和 `yi` 的训练脚本，请参阅 `scripts\u002F` 目录。\n\n## 评估\n我们在多个基于图像的基准测试上进行了评估。请按照[准备工作](#preparation)下载评估数据，并按照[结构](#structure)中的说明进行组织。\n\n| 模型 | 大语言模型 | 分辨率 | 链接 | TextVQA | MMB | MME | MM-Vet | MMMU_val | MMMU_test | MathVista |\n|----------|----------|----------|-----------|---|---|---|---|---|---|---|\nMGM-2B | Gemma-2B | 336 | [检查点](https:\u002F\u002Fhuggingface.co\u002FYanweiLi\u002FMGM-2B) | 56.2 | 59.8 | 1341\u002F312 | 31.1 | 31.7 | 29.1 | 29.4\nMGM-7B | Vicuna-7B-v1.5 | 336 | [检查点](https:\u002F\u002Fhuggingface.co\u002FYanweiLi\u002FMGM-7B) | 65.2 | 69.3 | 1523\u002F316 | 40.8 | 36.1 | 32.8 | 31.4 \nMGM-13B | Vicuna-13B-v1.5 | 336 | [检查点](https:\u002F\u002Fhuggingface.co\u002FYanweiLi\u002FMGM-13B) | 65.9 | 68.5 | 1565\u002F322 | 46.0 | 38.1 | 33.5 | 37.0\nMGM-8B | LLaMA-3-8B-Instruct | 336 | 
[检查点](https:\u002F\u002Fhuggingface.co\u002FYanweiLi\u002FMGM-8B) | 67.6 | 72.7 | 1606\u002F341 | 47.3 | 38.2 | 36.3 | --\nMGM-8x7B | Mixtral-8x7B-Instruct-v0.1 | 336 | [检查点](https:\u002F\u002Fhuggingface.co\u002FYanweiLi\u002FMGM-8x7B) | 69.2 | 75.6 | 1639\u002F379 | 45.8 | 41.8 | 37.1 | 41.8\nMGM-34B | Nous-Hermes-2-Yi-34B | 336 | [检查点](https:\u002F\u002Fhuggingface.co\u002FYanweiLi\u002FMGM-34B) | 70.1 | 79.6 | 1666\u002F439 | 53.0 | 48.7 | 43.6 | 38.9\nMGM-7B-HD | Vicuna-7B-v1.5 | 672 | [检查点](https:\u002F\u002Fhuggingface.co\u002FYanweiLi\u002FMGM-7B-HD) | 68.4 | 65.8 | 1546\u002F319 | 41.3 | 36.8 | 32.9 | 32.2\nMGM-13B-HD | Vicuna-13B-v1.5 | 672 | [检查点](https:\u002F\u002Fhuggingface.co\u002FYanweiLi\u002FMGM-13B-HD) | 70.2 | 68.6 | 1597\u002F320 | 50.5 | 37.3 | 35.1 | 37.0\nMGM-8B-HD | LLaMA-3-8B-Instruct | 672 | [检查点](https:\u002F\u002Fhuggingface.co\u002FYanweiLi\u002FMGM-8B-HD) | 71.6 | -- | 1532\u002F357 | -- | 37.0 | -- | --\nMGM-8x7B-HD | Mixtral-8x7B-Instruct-v0.1 | 672 | [检查点](https:\u002F\u002Fhuggingface.co\u002FYanweiLi\u002FMGM-8x7B-HD) | 71.9 | 74.7 | 1633\u002F356 | 53.5 | 40.0 | 37.0 | 43.1\nMGM-34B-HD | Nous-Hermes-2-Yi-34B | 672 | [检查点](https:\u002F\u002Fhuggingface.co\u002FYanweiLi\u002FMGM-34B-HD) | 74.1 | 80.6 | 1659\u002F482 | 59.3 | 48.0 | 44.9 | 43.3\n\n\n\n如果您想在基于图像的基准测试上评估模型，请使用`scripts\u002FMODEL_PATH\u002Feval`中的脚本。\n例如，要使用MGM-7B-HD对TextVQA进行评估，请运行以下命令：\n```bash\nbash scripts\u002Fllama\u002Feval\u002Ftextvqa.sh\n```\n更多评估脚本请参见`scripts\u002FMODEL_PATH`。\n\n### 命令行推理\n无需Gradio界面即可与图像进行对话。它还支持多GPU以及4位和8位量化推理。\n请确保您已安装[diffusers](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fdiffusers)和[PaddleOCR](https:\u002F\u002Fgithub.com\u002FPaddlePaddle\u002FPaddleOCR\u002Fblob\u002Frelease\u002F2.7\u002FREADME_en.md)（仅为了更好的OCR体验），并尝试以下命令进行图像和生成推理：\n\n```bash\npython -m mgm.serve.cli \\\n    --model-path work_dirs\u002FMGM\u002FMGM-13B-HD \\\n    --image-file \u003C您的图像路径>\n```\n\n或者尝试更好的OCR体验（请确保已安装[PaddleOCR](https:\u002F\u002Fgithub.com\u002FPaddlePaddle\u002FPaddleOCR\u002Fblob\u002Frelease\u002F2.7\u002FREADME_en.md)）：\n```bash\npython -m mgm.serve.cli \\\n    --model-path work_dirs\u002FMGM\u002FMGM-13B-HD \\\n    --image-file \u003C您的图像路径> \\\n    --ocr\n```\n\n或者尝试生成推理（请确保已安装[diffusers](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fdiffusers)）：\n```bash\npython -m mgm.serve.cli \\\n    --model-path work_dirs\u002FMGM\u002FMGM-13B-HD \\\n    --image-file \u003C您的图像路径> \\\n    --gen\n```\n\n您还可以尝试8位甚至4位量化以实现高效推理：\n```bash\npython -m mgm.serve.cli \\\n    --model-path work_dirs\u002FMGM\u002FMGM-13B-HD \\\n    --image-file \u003C您的图像路径> \\\n    --gen \\\n    --load-8bit\n```\n\n### Gradio Web UI\n\n在这里，我们采用了类似于LLaVA的Gradio界面，为我们的模型提供了一个用户友好的界面。\n要在本地启动Gradio演示，请依次运行以下命令。如果您计划启动多个模型工作节点以比较不同检查点之间的差异，则只需*一次*启动控制器和Web服务器。\n\n#### 启动控制器\n```Shell\npython -m mgm.serve.controller --host 0.0.0.0 --port 10000\n```\n\n#### 启动Gradio Web服务器\n```Shell\npython -m mgm.serve.gradio_web_server --controller http:\u002F\u002Flocalhost:10000 --model-list-mode reload\n```\n您刚刚启动了Gradio Web界面。现在，您可以使用屏幕上打印的URL打开该界面。您可能会注意到模型列表中没有模型。不用担心，因为我们还没有启动任何模型工作节点。当您启动一个模型工作节点时，它会自动更新。\n\n#### 启动模型工作节点\n这是实际在GPU上执行推理的*工作节点*。每个工作节点负责`--model-path`中指定的单个模型。\n\n```Shell\npython -m mgm.serve.model_worker --host 0.0.0.0 --controller http:\u002F\u002Flocalhost:10000 --port 40000 --worker http:\u002F\u002Flocalhost:40000 --model-path work_dirs\u002FMGM\u002FMGM-13B-HD\n```\n等待进程完成模型加载，直到看到“Uvicorn正在运行...”。现在，刷新您的Gradio 
Web界面，您将看到刚刚启动的模型出现在模型列表中。\n\n您可以根据需要启动任意数量的工作节点，并在同一Gradio界面中比较不同的模型。请保持`--controller`不变，同时为每个工作节点修改`--port`和`--worker`以使用不同的端口号。\n```Shell\npython -m mgm.serve.model_worker --host 0.0.0.0 --controller http:\u002F\u002Flocalhost:10000 --port \u003C不同于40000，比如40001> --worker http:\u002F\u002Flocalhost:\u003C相应地更改，即40001> --model-path work_dirs\u002FMGM\u002FMGM-34B-HD\n```\n\n如果您使用的是配备M1或M2芯片的Apple设备，可以通过使用`--device`标志指定mps设备：`--device mps`。\n\n#### 启动模型工作节点（多GPU，当GPU显存≤24GB时）\n\n如果您的GPU显存小于24GB（例如RTX 3090、RTX 4090等），您可以尝试使用多个GPU运行。我们最新的代码库会在您拥有多个GPU时自动尝试使用多GPU。您可以使用`CUDA_VISIBLE_DEVICES`指定要使用的GPU。下面是一个使用前两个GPU运行的示例。\n\n```Shell\nCUDA_VISIBLE_DEVICES=0,1 python -m mgm.serve.model_worker --host 0.0.0.0 --controller http:\u002F\u002Flocalhost:10000 --port 40000 --worker http:\u002F\u002Flocalhost:40000 --model-path work_dirs\u002FMGM\u002FMGM-13B-HD\n```\n\n#### 启动模型工作节点（4位、8位推理，量化）\n\n您可以启动带有量化位数（4位、8位）的模型工作节点，这可以让您以更小的GPU内存占用运行推理。请注意，使用量化位数进行推理可能不如全精度模型准确。只需在您正在执行的**模型工作节点**命令中添加`--load-4bit`或`--load-8bit`即可。下面是一个使用4位量化运行的示例。\n\n```Shell\npython -m mgm.serve.model_worker --host 0.0.0.0 --controller http:\u002F\u002Flocalhost:10000 --port 40000 --worker http:\u002F\u002Flocalhost:40000 --model-path work_dirs\u002FMGM\u002FMGM-13B-HD --load-4bit\n```\n\n## 示例\n我们在这一部分提供了一些示例。更多示例可以在我们的[项目页面](https:\u002F\u002Fmini-gemini.github.io\u002F)上找到。\n\n### 高分辨率理解\n\u003Cdiv align=center>\n\u003Cimg width=\"98%\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FJIA-Lab-research_MGM_readme_64025c0e6fb7.png\"\u002F>\n\u003C\u002Fdiv>\n\n### 带推理的生成\n\u003Cdiv align=center>\n\u003Cimg width=\"98%\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FJIA-Lab-research_MGM_readme_ca029eb52965.png\"\u002F>\n\u003C\u002Fdiv>\n\n## 引用\n如果您觉得本仓库对您的研究有帮助，请考虑引用以下论文：\n```\n@article{li2024mgm,\n  title={Mini-Gemini: 挖掘多模态视觉语言模型的潜力},\n  author={李彦伟、张悦晨、王成耀、钟志胜、陈义欣、褚瑞航、刘绍腾、贾嘉亚},\n  journal={arXiv:2403.18814},\n  year={2023}\n}\n```\n\n## 致谢\n本项目与谷歌公司无任何关联。\n\n我们衷心感谢以下开源项目及其贡献者：\n\n- 本工作基于 [LLaVA](https:\u002F\u002Fgithub.com\u002Fhaotian-liu\u002FLLaVA) 构建。\n- 本工作使用了来自 [Gemma](https:\u002F\u002Fhuggingface.co\u002Fgoogle\u002Fgemma-2b-it)、[Vicuna](https:\u002F\u002Fgithub.com\u002Flm-sys\u002FFastChat)、[Mixtral](https:\u002F\u002Fhuggingface.co\u002Fmistralai\u002FMixtral-8x7B-Instruct-v0.1) 和 [Nous-Hermes](https:\u002F\u002Fhuggingface.co\u002FNousResearch\u002FNous-Hermes-2-Yi-34B) 的大语言模型。\n\n## 许可证\n[![代码许可证](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FCode%20License-Apache_2.0-yellow.svg)](https:\u002F\u002Fgithub.com\u002Fdvlab-research\u002FMGM\u002Fblob\u002Fmain\u002FLICENSE)\n[![数据许可证](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FData%20License-CC%20By%20NC%204.0-orange.svg)](https:\u002F\u002Fgithub.com\u002Fdvlab-research\u002FMGM\u002Fblob\u002Fmain\u002FDATA_LICENSE)\n[![权重许可证](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FWeight%20License-CC%20By%20NC%204.0-red.svg)](https:\u002F\u002Fgithub.com\u002Fdvlab-research\u002FMGM\u002Fblob\u002Fmain\u002FWEIGHT_LICENSE)\n\n本项目的数据和检查点仅用于科研目的，并受相关许可协议约束。它们同样受到 LLaVA、LLaMA、Vicuna 和 GPT-4 许可协议的限制。数据集采用 CC BY NC 4.0 许可（仅允许非商业用途），使用该数据集训练的模型也不得用于科研以外的场景。","# MGM (Mini-Gemini) 快速上手指南\n\nMGM 是一个支持多模态理解、推理及生成的视觉语言模型框架，基于 LLaVA 构建，支持从 2B 到 34B 的稠密及 MoE 大语言模型（包括 LLaMA3、Mixtral 等）。\n\n## 1. 
环境准备\n\n*   **操作系统**: Linux (推荐 Ubuntu)\n*   **Python 版本**: 3.10\n*   **GPU**: 建议使用 NVIDIA GPU (训练需多卡，推理单卡即可，显存需求视模型大小而定)\n*   **前置依赖**:\n    *   Conda (用于环境管理)\n    *   Git\n    *   CUDA Toolkit (需与 PyTorch 版本匹配)\n    *   **注意**: 若使用 2B 版本模型，请确保 `transformers` 库版本 >= 4.38.0。\n\n## 2. 安装步骤\n\n### 2.1 克隆仓库\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fdvlab-research\u002FMGM.git\ncd MGM\n```\n\n### 2.2 创建并激活虚拟环境\n```bash\nconda create -n mgm python=3.10 -y\nconda activate mgm\n```\n\n### 2.3 安装核心依赖\n启用 PEP 660 支持并安装项目包：\n```bash\npip install --upgrade pip\npip install -e .\n```\n\n### 2.4 安装训练额外依赖\n如需进行模型训练或微调，需安装以下组件（包含 Flash Attention 加速）：\n```bash\npip install ninja\npip install flash-attn --no-build-isolation\n```\n> **提示**: 国内用户若下载 `flash-attn` 较慢，可尝试使用清华源或阿里源镜像，例如：\n> `pip install flash-attn --no-build-isolation -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple`\n\n## 3. 基本使用\n\n### 3.1 模型下载\n在运行前，请从 Hugging Face 下载预训练权重。以下是部分主流模型的下载地址：\n\n| 模型名称 | 基础 LLM | 分辨率类型 | 下载链接 |\n| :--- | :--- | :--- | :--- |\n| **MGM-7B** | Vicuna-7B-v1.5 | 标准 (336\u002F768) | [Hugging Face](https:\u002F\u002Fhuggingface.co\u002FYanweiLi\u002FMGM-7B) |\n| **MGM-13B-HD** | Vicuna-13B-v1.5 | 高清 (672\u002F1536) | [Hugging Face](https:\u002F\u002Fhuggingface.co\u002FYanweiLi\u002FMGM-13B-HD) |\n| **MGM-8B** | LLaMA-3-8B-Instruct | 标准 | [Hugging Face](https:\u002F\u002Fhuggingface.co\u002FYanweiLi\u002FMGM-8B) |\n| **MGM-8x7B** | Mixtral-8x7B | 标准 | [Hugging Face](https:\u002F\u002Fhuggingface.co\u002FYanweiLi\u002FMGM-8x7B) |\n\n同时需要下载对应的视觉编码器权重（如 `CLIP-Vit-L-336`）和基座 LLM 权重，并按项目要求的目录结构存放于 `model_zoo` 文件夹中。\n\n### 3.2 目录结构配置\n确保文件组织如下（以 MGM-7B 为例）：\n```text\nMGM\n├── model_zoo\n│   ├── LLM\n│   │   └── vicuna\n│   │       └── 7B-V1.5  # 放置 Vicuna 权重\n│   ├── OpenAI\n│   │   └── clip-vit-large-patch14-336 # 放置 CLIP 权重\n├── work_dirs\n│   └── MGM\n│       └── MGM-7B       # 用于存放训练输出或临时文件\n```\n\n### 3.3 推理示例\nMGM 提供了脚本用于单图推理。假设你已准备好模型权重，可以使用以下命令进行测试（具体脚本路径请参考 `scripts` 目录下的示例）：\n\n```bash\n# 示例：运行评估或推理脚本\n# 请根据实际下载的模型路径修改 --model-path 和 --image-file 参数\npython mgm\u002Feval\u002Fmodel_vqa.py \\\n    --model-path .\u002Fmodel_zoo\u002FLLM\u002Fvicuna\u002F7B-V1.5 \\\n    --image-file .\u002Fimages\u002Ftest_image.jpg \\\n    --conv-mode vicuna_v1\n```\n\n*注：由于原 README 中具体的推理命令在截断部分，建议参考仓库内 `scripts\u002F` 目录下的 `.sh` 文件获取针对特定模型（如 2B, 7B, 13B-HD 等）的最优启动参数。*\n\n### 3.4 在线体验\n如果本地资源有限，可直接访问官方提供的 Demo 进行体验：\n*   **Hugging Face Spaces**: [MGM Demo](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fwcy1122\u002FMGM)\n*   **项目主页 Demo**: [Mini-Gemini Project Page](http:\u002F\u002F103.170.5.190:7860\u002F)","某电商平台的视觉算法团队正致力于升级其商品详情页的智能助手，旨在让用户能通过上传商品实拍图，直接获取详细的材质分析、搭配建议甚至生成展示海报。\n\n### 没有 MGM 时\n- **细节识别模糊**：传统多模态模型仅支持低分辨率输入，无法看清衣物纹理、标签文字或珠宝刻痕等微小细节，导致回答笼统。\n- **功能割裂严重**：理解图片需要用一个模型，生成营销文案或海报草图又需切换另一个工具，开发链路繁琐且上下文容易丢失。\n- **推理成本高昂**：为了提升精度强行放大输入图像，导致显存占用激增，难以在大规模并发场景下部署大参数模型。\n- **复杂逻辑缺失**：面对“根据这张图的色调推荐三套不同场合的穿搭”这类多步推理任务，模型往往顾此失彼，逻辑连贯性差。\n\n### 使用 MGM 后\n- **高清细节洞察**：MGM 独特的双视觉编码器架构同时利用低分辩率全局嵌入和高分辨率局部候选区，能精准识别面料织法及毫米级瑕疵。\n- **理解生成一体**：依托同一套框架，MGM 既能深度解读图片内容，又能直接基于图像特征生成高质量的营销文案或初步设计图，无需切换模型。\n- **高效弹性部署**：支持从 2B 到 34B 多种规模的模型（包括 LLaMA3 基座），团队可根据业务流量灵活选择小模型保速度或大模型保质量，显著优化算力成本。\n- **深度逻辑推理**：借助补丁级信息挖掘技术，MGM 在处理涉及空间关系、因果推导的复杂指令时表现稳定，能条理清晰地输出多步骤搭配方案。\n\nMGM 通过高分辨率感知与理解生成一体化的突破，让电商视觉助手真正具备了“看得清细节、想得深逻辑、办得全任务”的专业能力。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FJIA-Lab-research_MGM_f2b5858f.png","JIA-Lab-research","JIA 
Lab","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FJIA-Lab-research_a2efb296.png",null,"https:\u002F\u002Fgithub.com\u002FJIA-Lab-research",[78,82,86,90,94],{"name":79,"color":80,"percentage":81},"Python","#3572A5",85,{"name":83,"color":84,"percentage":85},"Shell","#89e051",11.5,{"name":87,"color":88,"percentage":89},"JavaScript","#f1e05a",1.8,{"name":91,"color":92,"percentage":93},"HTML","#e34c26",1.4,{"name":95,"color":96,"percentage":97},"CSS","#663399",0.3,3328,275,"2026-04-08T18:58:22","Apache-2.0","Linux","必需 NVIDIA GPU。官方训练环境为 8x A100 (80GB)。支持 2B-34B 参数模型，显存需求随模型大小增加（2B 模型需较少显存，34B 或 MoE 模型需多卡高显存）。需安装 flash-attn，通常要求 CUDA 11.8+。","未说明（建议 64GB+ 以处理大型数据集和模型）",{"notes":106,"python":107,"dependencies":108},"1. 该项目基于 LLaVA 构建，支持密集型和 MoE 大语言模型（2B 到 34B）。2. 若使用 2B 版本，必须确保 transformers 版本 >=4.38.0。3. 训练需安装 flash-attn 且建议使用 --no-build-isolation 参数。4. 数据准备复杂，需下载并整理多个数据集（预训练、微调、评估）至指定目录结构。5. 显存不足时可通过减小 per_device_train_batch_size 并增加 gradient_accumulation_steps 来调整，但需保持全局 batch size 不变。","3.10",[109,110,111,112,113,114,115,116,117,118],"torch","transformers>=4.38.0 (2B 版本必需)","flash-attn","ninja","peft","accelerate","deepspeed","scikit-learn","shortuuid","gradio",[15,120,121,35],"音频","视频",[123,124,125],"generation","large-language-models","vision-language-model","2026-03-27T02:49:30.150509","2026-04-12T13:58:50.870123",[129,134,139,144,149,154],{"id":130,"question_zh":131,"answer_zh":132,"source_url":133},30651,"如何在推理时解决 'OpenCLIPVisionTower' object has no attribute 'device' 错误？","该错误通常发生在加载自定义微调模型进行推理时。请检查加载模型的路径，确保路径中包含的模型名称里必须有 \"mgm\" 字样。如果是使用官方提供的模型，名称中通常已包含 \"mgm\"；如果是自己训练的模型，请重命名或调整路径以包含该标识。","https:\u002F\u002Fgithub.com\u002FJIA-Lab-research\u002FMGM\u002Fissues\u002F93",{"id":135,"question_zh":136,"answer_zh":137,"source_url":138},30652,"在 ALLaVA 数据集中找不到预训练数据中提到的图片文件名（如 465440.jpeg），原因是什么？","ALLaVA 数据集中的 `id` 条目并非唯一（样本数 505588，唯一 ID 数 484532），且共享相同 ID 的样本内容可能不同。这是因为项目早期尝试了不同的提示策略并重新生成了部分样本。Mini-Gemini 团队和 ALLaVA 团队已经更新了对齐后的数据，请下载最新版本的对齐数据以解决文件名不匹配的问题。","https:\u002F\u002Fgithub.com\u002FJIA-Lab-research\u002FMGM\u002Fissues\u002F20",{"id":140,"question_zh":141,"answer_zh":142,"source_url":143},30653,"基于 mini-gemini-8x7b-HD 微调后，生成的模型文件只有 4 个 safetensors 而不是预期的 20 个，为什么？","这是由于配置中存在一个名为 \"unified model\" 的参数导致的，该参数在 HuggingFace trainer 中并不适用。维护者确认这是一个问题，并建议移除该参数。请检查你的启动脚本或配置文件，删除任何涉及 \"unified model\" 的设置后重新运行微调。","https:\u002F\u002Fgithub.com\u002FJIA-Lab-research\u002FMGM\u002Fissues\u002F18",{"id":145,"question_zh":146,"answer_zh":147,"source_url":148},30654,"如何从官方提供的检查点（Checkpoint）启动模型，而不是从开源大模型权重开始微调？","虽然教程主要展示了从开源权重微调的流程，但你可以直接指定官方检查点的路径作为 `--model_name_or_path` 参数来启动。如果在多节点训练中遇到 SSH 连接错误（如 `subprocess.CalledProcessError`），请尝试删除 `hostfile` 文件或在单节点模式下运行，这通常与 Zero2 优化策略在异构内存环境下的兼容性有关。","https:\u002F\u002Fgithub.com\u002FJIA-Lab-research\u002FMGM\u002Fissues\u002F45",{"id":150,"question_zh":151,"answer_zh":152,"source_url":153},30655,"预训练阶段的 Loss 值非常高（例如 2.5-2.8 左右），这正常吗？","在预训练初期，Loss 值较高是常见现象，特别是当模型正在学习对齐视觉和语言特征时。只要 Loss 随着训练步数增加呈现下降趋势，通常无需担心。如果 Loss 长期不降或震荡，请检查学习率设置、数据预处理是否正确以及批次大小是否合适。","https:\u002F\u002Fgithub.com\u002FJIA-Lab-research\u002FMGM\u002Fissues\u002F10",{"id":155,"question_zh":156,"answer_zh":157,"source_url":153},30656,"当前的图像生成方法难以将输入图像作为参考，有什么解决办法或建议吗？","目前该方法在利用输入图像作为参考生成新图像方面存在局限性。建议关注后续的版本更新或尝试调整提示词（Prompting strategies）以增强模型对输入图像的注意力。社区也在探讨改进方案，但目前主要依赖于模型自身的指令遵循能力。",[]]