[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-Alpha-VLLM--Lumina-DiMOO":3,"tool-Alpha-VLLM--Lumina-DiMOO":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",140436,2,"2026-04-05T23:32:43",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 
道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":75,"owner_avatar_url":76,"owner_bio":77,"owner_company":78,"owner_location":78,"owner_email":78,"owner_twitter":78,"owner_website":78,"owner_url":79,"languages":80,"stars":105,"forks":106,"last_commit_at":107,"license":108,"difficulty_score":10,"env_os":109,"env_gpu":110,"env_ram":111,"env_deps":112,"category_tags":125,"github_topics":126,"view_count":10,"oss_zip_url":78,"oss_zip_packed_at":78,"status":16,"created_at":130,"updated_at":131,"faqs":132,"releases":173},1316,"Alpha-VLLM\u002FLumina-DiMOO","Lumina-DiMOO","Lumina-DiMOO - An Open-Sourced Multi-Modal Large Diffusion Language Model","Lumina-DiMOO 是一个强大的多模态生成与理解模型，能够处理文本、图像等多种类型的数据。它通过统一的离散扩散架构，实现了高效的多模态内容生成和理解，支持从文本生成图像到图像编辑、修复等多种任务。相比以往方法，Lumina-DiMOO 在采样效率上有了显著提升，并在多个基准测试中表现优异。适合开发者、研究人员以及需要进行多模态内容创作的设计师使用。其独特的离散扩散建模和高速采样技术，使其在生成质量和速度上都具有明显优势。","\u003Cp align=\"center\">\n \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAlpha-VLLM_Lumina-DiMOO_readme_c89e75763a4c.png\" width=\"20%\"\u002F>\n\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\n \u003Ch1> Lumina-DiMOO: An Omni Diffusion Large Language Model for Multi-Modal Generation and Understanding \u003C\u002Fh1>\n\n  [[📑 Technical Report ](http:\u002F\u002Farxiv.org\u002Fabs\u002F2510.06308)] &emsp; [[🌐 Project Page (Demo & Benchmark)](https:\u002F\u002Fsynbol.github.io\u002FLumina-DiMOO\u002F)] &emsp; [[🤗 Model ](https:\u002F\u002Fhuggingface.co\u002FAlpha-VLLM\u002FLumina-DiMOO)]\n \n \u003Cb>¹Shanghai Innovation Institute, ²Shanghai AI Laboratory, ³Shanghai Jiao Tong University, ⁴Nanjing University \u003C\u002Fb>\n \n \u003Cb>⁵The University of Sydney, ⁶The Chinese University of Hong Kong, ⁷Tsinghua University\u003C\u002Fb>\n\n \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAlpha-VLLM_Lumina-DiMOO_readme_6cdcfeeedbbb.png\" width=\"95%\"\u002F>\n\u003C\u002Fdiv>\n\n## 📚 Introduction \nWe introduce Lumina-DiMOO, an omni foundational model for seamless multimodal generation and understanding. 
Lumina-DiMOO is distinguished by four key innovations:\n\n - **Unified Discrete Diffusion Architecture:** Lumina-DiMOO sets itself apart from prior unified models by utilizing fully discrete diffusion modeling to handle inputs and outputs across various modalities.\n - **Versatile Multimodal Capabilities:** Lumina-DiMOO supports a broad spectrum of multimodal tasks, including text-to-image generation (including arbitrary and high resolutions), image-to-image generation (e.g., image editing, subject-driven generation, and image inpainting), alongside advanced image understanding.\n\n - **Higher Sampling Efficiency:** Compared to previous AR or hybrid AR-diffusion paradigms, Lumina-DiMOO demonstrates remarkable sampling efficiency. Additionally, we design a bespoke caching method to further speed up sampling by 2x.\n\n - **Superior Performance:** Lumina-DiMOO achieves state-of-the-art performance on multiple benchmarks, surpassing existing open-source unified multimodal models and setting a new standard in the field.\n\n\n   \n \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAlpha-VLLM_Lumina-DiMOO_readme_a1bda7e7b0c7.png\" width=\"100%\"\u002F>\n\n\n## 🔥 News\n- **[2026-02-26]** 🎉 Our dMLLM-TTS has been accepted by CVPR 2026.\n- **[2025-12-23]** We have designed a unique Test-Time Scaling algorithm for Diffusion MLLM. See more details at [ArXiv (dMLLM-TTS)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.19433).\n- **[2025-11-27]** We have released the evaluation code using VLMEvalKit.\n- **[2025-10-24]** 🎉 We have released a guide for those who want to build worlds with the mask paradigm; see more details at [ArXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.20668) and [Github](https:\u002F\u002Fgithub.com\u002FM-E-AGI-Lab\u002FAwesome-World-Models).\n- **[2025-10-21]** 🎉 We’ve added support for [Diffusers](https:\u002F\u002Fgithub.com\u002Fqianyu-dlut\u002Fdiffusers\u002Fblob\u002Fmain\u002FLumina_DiMOO_README.md) and [ComfyUI](https:\u002F\u002Fgithub.com\u002FL-Hugh\u002FComfyUI-Lumina-DiMOO).\n- **[2025-10-06]** Training code is released.\n- **[2025-09-25]** We have released the Technical Report.\n- **[2025-09-20]** 🎉 In the latest [UniGenBench Leaderboard](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002FCodeGoat24\u002FUniGenBench_Leaderboard) (maintained by the Tencent Hunyuan Team), Lumina-DiMOO's generation evaluation ranks 1st 🥇 among all open-source unified models. \n- **[2025-09-12]** We have open-sourced the Image Inpainting & Extrapolation code.\n- **[2025-09-11]** We have open-sourced the Max Logit-based Cache solution, offering a 2x speed improvement for sampling.\n- **[2025-09-10]** 🎉 We release the initial version of **Lumina-DiMOO**, including:\n  - 🎯 Model Checkpoints on [HuggingFace](https:\u002F\u002Fhuggingface.co\u002FAlpha-VLLM\u002FLumina-DiMOO)!\n  - 🎯 Text-to-Image & Image-to-Image Generation Inference code!\n  - 🎯 Image Understanding Inference Code!\n  - 🎯 Website & Demo on [Project Page](https:\u002F\u002Fsynbol.github.io\u002FLumina-DiMOO\u002F)!\n\n## 📝 Open-Source Plan\n - [x] Image Inpainting & Extrapolation Code\n - [x] Fast Sampling with Max Logit-based Cache\n - [x] Diffusers and ComfyUI\n - [x] Benchmark Evaluation Code\n - [x] Fine-Tuning Code\n - [x] Technical Report\n - [ ] Test-Time Scaling
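\n\nConceptually, the unified discrete diffusion architecture above generates a sample by iteratively unmasking tokens in parallel rather than decoding left-to-right. A toy sketch of one such decoding step, for intuition only (our illustration, not the repository's actual sampler):\n```python\nimport torch\n\n# Toy sketch of one mask-based parallel decoding step (illustration only).\n# The most confident masked positions are committed; the rest stay masked\n# and are revisited in later steps.\ndef toy_unmask_step(logits, tokens, mask_id, num_to_unmask):\n    probs = torch.softmax(logits, dim=-1)    # (seq_len, vocab_size)\n    conf, pred = probs.max(dim=-1)           # per-position confidence\n    conf = torch.where(tokens == mask_id, conf, torch.full_like(conf, -1.0))\n    k = min(num_to_unmask, int((tokens == mask_id).sum()))\n    top = conf.topk(k).indices               # most confident masked slots\n    out = tokens.clone()\n    out[top] = pred[top]                     # commit those predictions\n    return out\n```\n\n## 📽️ Qualitative Results\nHere we present some comparative generation results with other models. 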
**For additional visualization results, please see our [Project Page](https:\u002F\u002Fsynbol.github.io\u002FLumina-DiMOO\u002F).**\n\u003Cdetails open>\n  \u003Csummary>Text-to-Image Comparison\u003C\u002Fsummary>\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAlpha-VLLM_Lumina-DiMOO_readme_ed8d41fe4268.png\" width=\"100%\"\u002F>\n\u003C!--   \u003Cdetails open>\n  \u003Csummary>Effects of Max Logit-Based Cache (A800 GPU, 1536x768 resolution)\u003C\u002Fsummary>\n  Without Cache: Latency: 58.2 s; Peak GPU Memory: 38.9 GiB\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAlpha-VLLM_Lumina-DiMOO_readme_ba477de1c067.png\" width=\"80%\"\u002F>\n\n\n  With Cache: Latency: 32.2 s; Peak GPU Memory: 45.9 GiB\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAlpha-VLLM_Lumina-DiMOO_readme_77a124a080e3.png\" width=\"80%\"\u002F>\n\u003C\u002Fdetails> -->\n\u003C\u002Fdetails>\n\n\u003Cdetails close>\n  \u003Csummary>Image Editing Comparison\u003C\u002Fsummary>\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAlpha-VLLM_Lumina-DiMOO_readme_848525051e77.png\" width=\"100%\"\u002F>\n\u003C\u002Fdetails>\n\n\u003Cdetails close>\n  \u003Csummary>Controllable & Subject-Driven Generation Comparison\u003C\u002Fsummary>\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAlpha-VLLM_Lumina-DiMOO_readme_63f23012f900.png\" width=\"100%\"\u002F>\n\u003C\u002Fdetails>\n\n\u003Cdetails close>\n  \u003Csummary>Image Inpainting & Extrapolation\u003C\u002Fsummary>\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAlpha-VLLM_Lumina-DiMOO_readme_6ebd30429402.jpg\" width=\"100%\"\u002F>\n\u003C\u002Fdetails>\n\n\n## 📊 Quantitative Performance\n\u003Cdetails open>\n  \u003Csummary>GenEval Benchmark\u003C\u002Fsummary>\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAlpha-VLLM_Lumina-DiMOO_readme_20092595a235.png\" width=\"100%\"\u002F>\n\u003C\u002Fdetails>\n\n\n\u003Cdetails close>\n  \u003Csummary>DPG Benchmark\u003C\u002Fsummary>\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAlpha-VLLM_Lumina-DiMOO_readme_29cce9d5e904.png\" width=\"100%\"\u002F>\n\u003C\u002Fdetails>\n\n\u003Cdetails close>\n  \u003Csummary>OneIG-EN Benchmark\u003C\u002Fsummary>\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAlpha-VLLM_Lumina-DiMOO_readme_1ac6f7956e12.png\" width=\"100%\"\u002F>\n\u003C\u002Fdetails>\n\n\n\u003Cdetails close>\n  \u003Csummary>TIIF Benchmark\u003C\u002Fsummary>\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAlpha-VLLM_Lumina-DiMOO_readme_ae93dcb3820d.png\" width=\"100%\"\u002F>\n\u003C\u002Fdetails>\n\n\u003Cdetails close>\n  \u003Csummary>Image-to-Image Benchmark\u003C\u002Fsummary>\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAlpha-VLLM_Lumina-DiMOO_readme_be97a304ce19.png\" width=\"100%\"\u002F>\n\u003C\u002Fdetails>\n\n\u003Cdetails close>\n  \u003Csummary>Image Understanding Benchmark\u003C\u002Fsummary>\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAlpha-VLLM_Lumina-DiMOO_readme_c041ecfdb308.png\" width=\"100%\"\u002F>\n\u003C\u002Fdetails>\n\n## 🚀 Sampling Speed Analysis\n- Since text generation is performed in a block-wise manner, unlike image generation which uses a single global decoding step, its speed is influenced by both the number of blocks and the number of steps. 
Therefore, the speed improvement of image understanding is not as significant as that of image generation.\n\n- **Lumina-DiMOO Settings**: For image generation, we sample 64 steps. For image understanding, we set the block length to 256 and the number of sampling steps to 128.\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAlpha-VLLM_Lumina-DiMOO_readme_cafc3a716e0c.png\" width=\"100%\"\u002F>\n\n\n## 📌 Quick Start\n### ⚙️ Installation\n#### 1. Create a conda environment\n```\ngit clone https:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-DiMOO.git && cd Lumina-DiMOO\nconda create -n lumina_dimoo python=3.10 -y\nconda activate lumina_dimoo\n```\n#### 2. Install dependencies\n```\npip install -r requirements.txt\n```\n\n### 🧨 How to Fine-Tune Lumina-DiMOO\n#### Step 1: Pre-extract discrete codes of training images.\nFor the expected data format after preprocessing, refer to the sample JSON files ``assets\u002Fmmu_sample.json`` and ``assets\u002Ft2i_sample.json``.\n```\nbash pre_tokenizer\u002Frun_pre_token.sh\n```\n#### Step 2: Train the Lumina-DiMOO model.\n```\nbash train\u002Ftrain.sh\n```\n\n### 🚗 Text-to-Image Generation Inference\n#### 1. Normal Sampling\n```\npython inference\u002Finference_t2i.py \\\n    --checkpoint Alpha-VLLM\u002FLumina-DiMOO \\\n    --prompt \"A striking photograph of a glass of orange juice on a wooden kitchen table, capturing a playful moment. The orange juice splashes out of the glass and forms the word \\\"Smile\\\" in a whimsical, swirling script just above the glass. The background is softly blurred, revealing a cozy, homely kitchen with warm lighting and a sense of comfort.\" \\\n    --height 768 \\\n    --width 1536 \\\n    --timesteps 64 \\\n    --cfg_scale 4.0 \\\n    --seed 65513 \\\n    --vae_ckpt Alpha-VLLM\u002FLumina-DiMOO \\\n    --output_dir output\u002Fresults_text_to_image\n```\n#### 2. DDP Sampling\nTo support large-scale sampling\u002Ftesting, we provide additional DDP sampling scripts that support multi-GPU parallel sampling.\n```\ntorchrun --nproc_per_node=8 inference\u002Finference_t2i_ddp.py \\\n    --checkpoint Alpha-VLLM\u002FLumina-DiMOO \\\n    --prompt_path \u002Fpath\u002Fto\u002Fprompts.jsonl \\\n    --height 1024 \\\n    --width 1024 \\\n    --timesteps 64 \\\n    --cfg_scale 4.0 \\\n    --vae_ckpt Alpha-VLLM\u002FLumina-DiMOO \\\n    --output_dir output\u002Fresults_image_to_image_ddp \\\n    --output_json output\u002Fresults_image_to_image_ddp\u002Fresults.json\n```\n#### 3. Faster Sampling with Cache\n- Add `--use-cache` to accelerate sampling through the max logit-based cache (ML-Cache). The efficiency-quality tradeoff can be tuned by `cache_ratio` (in `(0,1)`; the higher the faster), `warmup_ratio` (in `[0,1)`; the lower the faster), and `refresh_interval` (in `(1, timesteps-int(warmup_ratio*timesteps)-1]`; the higher the faster). 
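\n- These documented ranges can be encoded as a quick sanity check (a hypothetical helper of ours, not part of the repository):\n```python\n# Hypothetical sanity check for the ML-Cache knobs (ours, not in the repo);\n# it simply encodes the ranges documented above.\ndef check_cache_args(timesteps, cache_ratio, warmup_ratio, refresh_interval):\n    assert 0.0 \u003C cache_ratio \u003C 1.0, \"cache_ratio must lie in (0, 1)\"\n    assert 0.0 \u003C= warmup_ratio \u003C 1.0, \"warmup_ratio must lie in [0, 1)\"\n    upper = timesteps - int(warmup_ratio * timesteps) - 1\n    assert 1 \u003C refresh_interval \u003C= upper, f\"refresh_interval must lie in (1, {upper}]\"\n\n# With the flags from the example below: upper = 64 - 19 - 1 = 44.\ncheck_cache_args(timesteps=64, cache_ratio=0.9, warmup_ratio=0.3, refresh_interval=5)\n```\n- Example command with the cache enabled:\n```\npython inference\u002Finference_t2i.py \\\n    --checkpoint Alpha-VLLM\u002FLumina-DiMOO \\\n    --prompt \"A striking photograph of a glass of orange juice on a wooden kitchen table, capturing a playful moment. The orange juice splashes out of the glass and forms the word \\\"Smile\\\" in a whimsical, swirling script just above the glass. 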
The background is softly blurred, revealing a cozy, homely kitchen with warm lighting and a sense of comfort.\" \\\n    --height 768 \\\n    --width 1536 \\\n    --timesteps 64 \\\n    --cfg_scale 4.0 \\\n    --seed 65513 \\\n    --vae_ckpt Alpha-VLLM\u002FLumina-DiMOO \\\n    --output_dir output\u002Fresults_text_to_image_usecache \\\n    --use-cache \\\n    --cache_ratio 0.9 \\\n    --warmup_ratio 0.3 \\\n    --refresh_interval 5\n```\n\n- We provide the inference time and GPU memory on one A800 (1536x768 resolution) as a reference; at this setting the cache trades about 7 GB of extra memory for a roughly 1.8x wall-clock speedup:\n\n| Method               | Inference Time | Inference GPU Memory |\n|----------------------|--------|----------|\n| Lumina-DiMOO      | 58.2s     | 38.9 GB  |\n| + ML-Cache        | 32.2s     | 45.9 GB  |\n\n### 🌟 Image-to-Image Inference\n \n#### 1. Controllable Generation: \"hed_control\", \"depth_control\", \"openpose_control\", \"subject_driven\".\n\n```\npython inference\u002Finference_i2i.py \\\n    --checkpoint Alpha-VLLM\u002FLumina-DiMOO \\\n    --prompt \"A functional wooden printer stand. Nestled next to a brick wall in a bustling city street, it stands firm as pedestrians hustle by, illuminated by the warm glow of vintage street lamps.\" \\\n    --image_path examples\u002Fexample_2.jpg \\\n    --edit_type depth_control \\\n    --timesteps 64 \\\n    --cfg_scale 2.5 \\\n    --cfg_img 4.0 \\\n    --vae_ckpt Alpha-VLLM\u002FLumina-DiMOO \\\n    --output_dir output\u002Fresults_image_to_image\n```\n\n#### 2. Subject-Driven Generation\n```\npython inference\u002Finference_i2i.py \\\n    --checkpoint Alpha-VLLM\u002FLumina-DiMOO \\\n    --prompt \"A creamy, rich-flavored dark beverage. Captured in a bustling urban street at twilight, this item is placed on an outdoor café table, as city lights begin to twinkle and passersby create a lively atmosphere.\" \\\n    --image_path examples\u002Fexample_3.jpg \\\n    --edit_type subject_driven \\\n    --timesteps 64 \\\n    --cfg_scale 2.5 \\\n    --cfg_img 4.0 \\\n    --vae_ckpt Alpha-VLLM\u002FLumina-DiMOO \\\n    --output_dir output\u002Fresults_image_to_image\n```\n\n#### 3. Image Editing: \"edit_add\", \"edit_remove\", \"edit_replace\", \"edit_background\", \"edit_text_transfer\".\n```\npython inference\u002Finference_i2i.py \\\n    --checkpoint Alpha-VLLM\u002FLumina-DiMOO \\\n    --prompt \"Add a beige shed with brown trim and double doors with a diamond pattern in the center-right, occupying more than a third of the image.\" \\\n    --image_path examples\u002Fexample_4.png \\\n    --edit_type edit_add \\\n    --timesteps 64 \\\n    --cfg_scale 2.5 \\\n    --cfg_img 4.0 \\\n    --vae_ckpt Alpha-VLLM\u002FLumina-DiMOO \\\n    --output_dir output\u002Fresults_image_to_image\n```\n\n#### 4. Style Transfer (An Image as Style Reference)\n```\npython inference\u002Finference_i2i.py \\\n    --checkpoint Alpha-VLLM\u002FLumina-DiMOO \\\n    --prompt \"Transform the current image into the style of the provided image.\" \\\n    --image_path examples\u002Fexample_5.png \\\n    --ref_image_path examples\u002Fexample_5_style.png \\\n    --edit_type image_ref_transfer \\\n    --timesteps 64 \\\n    --cfg_scale 2.5 \\\n    --cfg_img 4.0 \\\n    --vae_ckpt Alpha-VLLM\u002FLumina-DiMOO \\\n    --output_dir output\u002Fresults_image_to_image\n```
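\n\nThe variants above all call the same entry point and differ mainly in `--edit_type`, the prompt, and an optional `--ref_image_path`. A hypothetical batch driver (ours, not shipped with the repository; prompts shortened to placeholders) could chain several of the documented invocations:\n```python\nimport subprocess\n\n# Hypothetical batch driver over inference\u002Finference_i2i.py (ours, not part\n# of the repository); reuses only the command-line flags documented above.\nJOBS = [  # (edit_type, shortened placeholder prompt, input image)\n    (\"depth_control\", \"A functional wooden printer stand. ...\", \"examples\u002Fexample_2.jpg\"),\n    (\"subject_driven\", \"A creamy, rich-flavored dark beverage. ...\", \"examples\u002Fexample_3.jpg\"),\n    (\"edit_add\", \"Add a beige shed with brown trim. ...\", \"examples\u002Fexample_4.png\"),\n]\n\nfor edit_type, prompt, image_path in JOBS:\n    subprocess.run([\n        \"python\", \"inference\u002Finference_i2i.py\",\n        \"--checkpoint\", \"Alpha-VLLM\u002FLumina-DiMOO\",\n        \"--prompt\", prompt,\n        \"--image_path\", image_path,\n        \"--edit_type\", edit_type,\n        \"--timesteps\", \"64\",\n        \"--cfg_scale\", \"2.5\",\n        \"--cfg_img\", \"4.0\",\n        \"--vae_ckpt\", \"Alpha-VLLM\u002FLumina-DiMOO\",\n        \"--output_dir\", \"output\u002Fresults_image_to_image\",\n    ], check=True)\n```\n\n#### 5. 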
Dense Prediction: \"canny_pred\", \"hed_pred\", \"depth_pred\", \"openpose_pred\", \"canny_control\".\n```\npython inference\u002Finference_i2i.py \\\n    --checkpoint Alpha-VLLM\u002FLumina-DiMOO \\\n    --prompt \"Generate a Canny edge map according to the image.\" \\\n    --image_path examples\u002Fexample_1.png \\\n    --edit_type canny_pred \\\n    --timesteps 64 \\\n    --cfg_scale 2.5 \\\n    --cfg_img 4.0 \\\n    --vae_ckpt Alpha-VLLM\u002FLumina-DiMOO \\\n    --output_dir output\u002Fresults_image_to_image\n```\n\n### 🏃 Image Inpainting & Extrapolation Inference\n\n#### 1. Image Inpainting\n```\npython inference\u002Finference_t2i.py \\\n    --checkpoint Alpha-VLLM\u002FLumina-DiMOO \\\n    --prompt \"Porsche showroom. Make there be a Porsche logo on the back wall behind the car.\" \\\n    --painting_mode inpainting \\\n    --painting_image examples\u002Fexample_8.png \\\n    --mask_h_ratio 0.5 \\\n    --mask_w_ratio 0.5 \\\n    --timesteps 64 \\\n    --cfg_scale 4.0 \\\n    --seed 65513 \\\n    --vae_ckpt Alpha-VLLM\u002FLumina-DiMOO \\\n    --output_dir output\u002Fresults_text_to_image\n```\n\n#### 2. Image Extrapolation\n```\npython inference\u002Finference_t2i.py \\\n    --checkpoint Alpha-VLLM\u002FLumina-DiMOO \\\n    --prompt \"A photograph showcasing a pale gold moon, partially veiled by wispy cirrus clouds, dominating a dramatic twilight sky. The moon's soft glow reflects on the tranquil surface of a lake below, creating a shimmering mirror effect, while a small wooden rowboat gently bobs on the water's edge. Dark silhouettes of tall, ancient pine trees encircle the lake, their branches reaching towards the sky like skeletal fingers, as a gentle mist hangs low, diffusing the moonlight and adding a sense of serene mystery. The scene is bathed in soft, cool lighting, creating an ethereal and captivating atmosphere.\" \\\n    --painting_mode outpainting \\\n    --painting_image examples\u002Fexample_7.png \\\n    --mask_h_ratio 1 \\\n    --mask_w_ratio 0.2 \\\n    --timesteps 64 \\\n    --cfg_scale 4.0 \\\n    --seed 65513 \\\n    --vae_ckpt Alpha-VLLM\u002FLumina-DiMOO \\\n    --output_dir output\u002Fresults_text_to_image\n```\n\n\n### ⚡️ Image Understanding Inference\n```\npython inference\u002Finference_mmu.py \\\n    --checkpoint Alpha-VLLM\u002FLumina-DiMOO \\\n    --prompt \"Please describe this image.\" \\\n    --image_path examples\u002Fexample_6.jpg \\\n    --steps 128 \\\n    --gen_length 128 \\\n    --block_length 32 \\\n    --vae_ckpt Alpha-VLLM\u002FLumina-DiMOO \\\n    --output_dir output\u002Foutputs_text_understanding\n```\n\n## 🏆 Benchmark Evaluation\n\nWe utilize the [VLMEvalKit](https:\u002F\u002Fgithub.com\u002Fopen-compass\u002FVLMEvalKit) from OpenCompass to evaluate **Lumina_DiMOO** across multiple benchmarks.\n\n### 1. Preparation\nNavigate to the `VLMEvalKit` directory and install the required dependencies:\n\n```bash\ncd VLMEvalKit\npip install -r requirements.txt\n```\n **⚠️ Important Note:** We utilize an LLM as the judge model for answer matching. Before running the evaluation, you need to edit the `VLMEvalKit\u002F.env` file to fill in your `OPENAI_API_KEY` and `OPENAI_API_BASE`.\n### 2. Supported Benchmarks\nWe support evaluation on the following 5 benchmarks. Please use the corresponding **Data Name** in the command arguments:\n\n| Benchmark | Data Name (`--data`) |\n| :--- | :--- |\n| **POPE** | `POPE` |\n| **MME** | `MME` |\n| **MMBench** | `MMBench_DEV_EN` |\n| **SEEDBench** | `SEEDBench_IMG` |\n| **MMMU** | `MMMU_DEV_VAL` |\n\n
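To sweep all five benchmarks with the single-GPU invocation from step 3 below, a minimal driver script (ours, not part of VLMEvalKit) might look like:\n```python\nimport subprocess\n\n# Minimal sweep over the five supported benchmarks (our convenience script,\n# not part of VLMEvalKit); reuses the run.py interface shown in step 3.\nBENCHMARKS = [\"POPE\", \"MME\", \"MMBench_DEV_EN\", \"SEEDBench_IMG\", \"MMMU_DEV_VAL\"]\n\nfor data in BENCHMARKS:\n    subprocess.run(\n        [\"python3\", \"run.py\", \"--data\", data, \"--model\", \"Lumina_DiMOO\", \"--verbose\"],\n        check=True,\n    )\n```\n\n### 3. 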
Run Evaluation\nYou can perform the evaluation using either a single GPU or multiple GPUs.\n\n**Single GPU Evaluation:**\n```bash\npython3 run.py --data MMMU_DEV_VAL --model Lumina_DiMOO --verbose\n```\n\n**Multi-GPU Evaluation (8 GPUs):**\n```bash\ntorchrun --nproc-per-node=8 --master_port=29500 run.py \\\n    --data MMMU_DEV_VAL \\\n    --model Lumina_DiMOO \\\n    --verbose\n```\n\n## 📜 Acknowledgements\nThis work was also supported and implemented by [MindSpeed MM](https:\u002F\u002Fgitee.com\u002Fascend\u002FMindSpeed-MM), an open-source training framework for large-scale multimodal models designed for distributed training, developed and maintained by Huawei's Computing Product Line. Specifically optimized for Huawei's Ascend AI chips, MindSpeed MM offers comprehensive support for distributed training and is tailored to a wide range of multimodal tasks.\n\n## 📖 BibTeX\n```\n@article{xin2025lumina,\n  title={Lumina-DiMOO: An Omni Diffusion Large Language Model for Multi-Modal Generation and Understanding},\n  author={Xin, Yi and Qin, Qi and Luo, Siqi and Zhu, Kaiwen and Yan, Juncheng and Tai, Yan and Lei, Jiayi and Cao, Yuewen and Wang, Keqi and Wang, Yibin and others},\n  journal={arXiv preprint arXiv:2510.06308},\n  year={2025}\n}\n\n@article{xin2025dmllm,\n  title={dMLLM-TTS: Self-Verified and Efficient Test-Time Scaling for Diffusion Multi-Modal Large Language Models},\n  author={Xin, Yi and Luo, Siqi and Qin, Qi and Chen, Haoxing and Zhu, Kaiwen and Zhang, Zhiwei and He, Yangfan and Zhang, Rongchao and Bai, Jinbin and Cao, Shuo and others},\n  journal={arXiv preprint arXiv:2512.19433},\n  year={2025}\n}\n```\n\n\n\n","\u003Cp align=\"center\">\n \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAlpha-VLLM_Lumina-DiMOO_readme_c89e75763a4c.png\" width=\"20%\"\u002F>\n\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\n \u003Ch1> Lumina-DiMOO: 一种用于多模态生成与理解的全能扩散大型语言模型 \u003C\u002Fh1>\n\n  [[📑 技术报告 ](http:\u002F\u002Farxiv.org\u002Fabs\u002F2510.06308)] &emsp; [[🌐 项目页面（演示与基准测试）](https:\u002F\u002Fsynbol.github.io\u002FLumina-DiMOO\u002F)] &emsp; [[🤗 模型 ](https:\u002F\u002Fhuggingface.co\u002FAlpha-VLLM\u002FLumina-DiMOO)]\n \n \u003Cb>¹上海创新研究院，²上海人工智能实验室，³上海交通大学，⁴南京大学 \u003C\u002Fb>\n \n \u003Cb>⁵悉尼大学，⁶香港中文大学，⁷清华大学\u003C\u002Fb>\n\n \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAlpha-VLLM_Lumina-DiMOO_readme_6cdcfeeedbbb.png\" width=\"95%\"\u002F>\n\u003C\u002Fdiv>\n\n## 📚 引言 \n我们推出了Lumina-DiMOO，这是一种用于无缝多模态生成与理解的全能基础模型。Lumina-DiMOO以四项关键创新而著称：\n\n - **统一的离散扩散架构：** Lumina-DiMOO通过采用完全离散的扩散建模来处理各种模态的输入和输出，从而区别于以往的统一模型。\n - **多功能的多模态能力：** Lumina-DiMOO支持广泛的多模态任务，包括文本到图像的生成（允许任意分辨率和高分辨率）、图像到图像的生成（例如图像编辑、主体驱动的生成和图像修复等），以及先进的图像理解。\n\n - **更高的采样效率：** 与之前的AR或混合AR-扩散范式相比，Lumina-DiMOO展现出卓越的采样效率。此外，我们设计了一种定制的缓存方法，可将采样速度进一步提升2倍。\n - **卓越的性能：** Lumina-DiMOO在多个基准测试中达到了最先进的水平，超越了现有的开源多模态统一模型，为该领域树立了新的标准。\n\n\n   \n \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAlpha-VLLM_Lumina-DiMOO_readme_a1bda7e7b0c7.png\" width=\"100%\"\u002F>\n\n\n## 🔥 新闻\n- **[2026-02-26]** 🎉 我们的dMLLM-TTS已被CVPR 2026接收。\n- **[2025-12-23]** 我们为扩散MLLM设计了一种独特的测试时缩放算法。更多详情请参见[ArXiv (dMLLM-TTS)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2512.19433)。\n- **[2025-11-27]** 我们发布了使用VLMEvalKit的评估代码。\n- **[2025-10-24]** 🎉 我们发布了一份指南，供希望使用掩码范式构建世界的人参考，更多详情请参见[ArXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2510.20668)和[Github](https:\u002F\u002Fgithub.com\u002FM-E-AGI-Lab\u002FAwesome-World-Models)。\n- **[2025-10-21]** 🎉 
我们增加了对[Diffusers](https:\u002F\u002Fgithub.com\u002Fqianyu-dlut\u002Fdiffusers\u002Fblob\u002Fmain\u002FLumina_DiMOO_README.md)和[ComfyUI](https:\u002F\u002Fgithub.com\u002FL-Hugh\u002FComfyUI-Lumina-DiMOO)的支持。\n- **[2025-10-06]** 训练代码已发布。\n- **[2025-09-25]** 我们发布了技术报告。\n- **[2025-09-20]** 🎉 在最新的[UniGenBench排行榜](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002FCodeGoat24\u002FUniGenBench_Leaderboard)（由腾讯混元团队维护）中，Lumina-DiMOO的生成评估在所有开源统一模型中排名第一🥇。\n- **[2025-09-12]** 我们开源了图像修复与外推代码。\n- **[2025-09-11]** 我们开源了基于最大对数似然值的缓存方案，使采样速度提升了2倍。\n- **[2025-09-10]** 🎉 我们发布了**Lumina-DiMOO**的初始版本，其中包括：\n  - 🎯 在[HuggingFace](https:\u002F\u002Fhuggingface.co\u002FAlpha-VLLM\u002FLumina-DiMOO)上的模型检查点！\n  - 🎯 文本到图像及图像到图像生成推理代码！\n  - 🎯 图像理解推理代码！\n  - 🎯 项目页面上的网站与演示！ [项目页面](https:\u002F\u002Fsynbol.github.io\u002FLumina-DiMOO\u002F)\n\n## 📝 开源计划\n - [x] 图像修复与外推代码\n - [x] 基于最大对数似然值的快速采样缓存\n - [x] Diffusers和ComfyUI\n - [x] 基准评估代码\n - [x] 微调代码\n - [x] 技术报告\n - [ ] 测试时缩放\n\n## 📽️ 定性结果\n以下是我们与其他模型的对比生成结果。**更多可视化结果，请参阅我们的[项目页面](https:\u002F\u002Fsynbol.github.io\u002FLumina-DiMOO\u002F)。**\n\u003Cdetails open>\n  \u003Csummary>文本到图像比较\u003C\u002Fsummary>\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAlpha-VLLM_Lumina-DiMOO_readme_ed8d41fe4268.png\" width=\"100%\"\u002F>\n\u003C!--   \u003Cdetails open>\n  \u003Csummary>基于最大对数似然值缓存的效果（A800 GPU，1536x768分辨率）\u003C\u002Fsummary>\n  无缓存：延迟：58.2秒；GPU峰值内存：38.9 GiB\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAlpha-VLLM_Lumina-DiMOO_readme_ba477de1c067.png\" width=\"80%\"\u002F>\n\n\n  有缓存：延迟：32.2秒；GPU峰值内存：45.9 GiB\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAlpha-VLLM_Lumina-DiMOO_readme_77a124a080e3.png\" width=\"80%\"\u002F>\n\u003C\u002Fdetails> -->\n\u003C\u002Fdetails>\n\n\u003Cdetails close>\n  \u003Csummary>图像编辑比较\u003C\u002Fsummary>\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAlpha-VLLM_Lumina-DiMOO_readme_848525051e77.png\" width=\"100%\"\u002F>\n\u003C\u002Fdetails>\n\n\u003Cdetails close>\n  \u003Csummary>可控与主体驱动生成比较\u003C\u002Fsummary>\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAlpha-VLLM_Lumina-DiMOO_readme_63f23012f900.png\" width=\"100%\"\u002F>\n\u003C\u002Fdetails>\n\n\u003Cdetails close>\n  \u003Csummary>图像修复与外推\u003C\u002Fsummary>\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAlpha-VLLM_Lumina-DiMOO_readme_6ebd30429402.jpg\" width=\"100%\"\u002F>\n\u003C\u002Fdetails>\n\n\n## 📊 定量性能\n\u003Cdetails open>\n  \u003Csummary>GenEval基准测试\u003C\u002Fsummary>\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAlpha-VLLM_Lumina-DiMOO_readme_20092595a235.png\" width=\"100%\"\u002F>\n\u003C\u002Fdetails>\n\n\n\u003Cdetails close>\n  \u003Csummary>DPG基准测试\u003C\u002Fsummary>\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAlpha-VLLM_Lumina-DiMOO_readme_29cce9d5e904.png\" width=\"100%\"\u002F>\n\u003C\u002Fdetails>\n\n\u003Cdetails close>\n  \u003Csummary>OneIG-EN基准测试\u003C\u002Fsummary>\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAlpha-VLLM_Lumina-DiMOO_readme_1ac6f7956e12.png\" width=\"100%\"\u002F>\n\u003C\u002Fdetails>\n\n\n\u003Cdetails close>\n  \u003Csummary>TIIF基准测试\u003C\u002Fsummary>\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAlpha-VLLM_Lumina-DiMOO_readme_ae93dcb3820d.png\" width=\"100%\"\u002F>\n\u003C\u002Fdetails>\n\n\u003Cdetails close>\n  \u003Csummary>图像到图像基准测试\u003C\u002Fsummary>\n 
 \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAlpha-VLLM_Lumina-DiMOO_readme_be97a304ce19.png\" width=\"100%\"\u002F>\n\u003C\u002Fdetails>\n\n\u003Cdetails close>\n  \u003Csummary>图像理解基准测试\u003C\u002Fsummary>\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAlpha-VLLM_Lumina-DiMOO_readme_c041ecfdb308.png\" width=\"100%\"\u002F>\n\u003C\u002Fdetails>\n\n## 🚀 采样速度分析\n- 由于文本生成是以块为单位进行的，而图像生成则采用单一的全局解码步骤，因此其速度既受块数影响，也受步数影响。所以，图像理解的速度提升不如图像生成那样显著。\n\n- **Lumina-DiMOO设置：** 对于图像生成，我们采样64步。对于图像理解，我们将块长度设为256，采样步数设为128。\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAlpha-VLLM_Lumina-DiMOO_readme_cafc3a716e0c.png\" width=\"100%\"\u002F>\n\n\n## 📌 快速入门\n### ⚙️ 安装\n#### 1. 创建一个conda环境\n```\ngit clone https:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-DiMOO.git && cd Lumina-DiMOO\nconda create -n lumina_dimoo python=3.10 -y\nconda activate lumina_dimoo\n```\n#### 2. 安装依赖\n```\npip install -r requirements.txt\n```\n\n### 🧨 如何微调Lumina-DiMOO\n#### 第一步：预先提取训练图像的离散编码。\n经过特定处理后的最终格式可参考示例json文件``assets\u002Fmmu_sample.json``和``assets\u002Ft2i_sample.json``。\n```\nbash pre_tokenizer\u002Frun_pre_token.sh\n```\n#### 第二步：训练Lumina-DiMOO模型。\n```\nbash train\u002Ftrain.sh\n```\n\n### 🚗 文本到图像生成推理\n#### 1. 普通采样\n```\npython inference\u002Finference_t2i.py\\\n    --checkpoint Alpha-VLLM\u002FLumina-DiMOO \\\n    --prompt \"一张引人注目的照片，展示木制厨房桌上的一杯橙汁，捕捉了一个俏皮的瞬间。橙汁从杯中飞溅而出，在杯子上方以奇幻、漩涡状的字体拼写出‘Smile’一词。背景柔和地虚化，露出温馨舒适的家常厨房，光线温暖而惬意。\" \\\n    --height 768 \\\n    --width 1536 \\\n    --timesteps 64 \\\n    --cfg_scale 4.0 \\\n    --seed 65513 \\\n    --vae_ckpt Alpha-VLLM\u002FLumina-DiMOO \\\n    --output_dir output\u002Fresults_text_to_image\n```\n#### 2. DDP 采样\n为了支持大规模采样和测试，我们提供了额外的 DDP 采样脚本，支持多 GPU 并行采样。\n```\ntorchrun --nproc_per_node=8 inference\u002Finference_t2i_ddp.py \\\n    --checkpoint Alpha-VLLM\u002FLumina-DiMOO \\\n    --prompt_path \u002Fpath\u002Fto\u002Fprompts.jsonl \\\n    --height 1024 \\\n    --width 1024 \\\n    --timesteps 64 \\\n    --cfg_scale 4.0 \\\n    --vae_ckpt Alpha-VLLM\u002FLumina-DiMOO \\\n    --output_dir output\u002Fresults_image_to_image_ddp \\\n    --output_json output\u002Fresults_image_to_image_ddp\u002Fresults.json\n```\n#### 3. 使用缓存加速采样\n- 添加 `--use-cache` 参数，可通过基于最大对数似然的缓存（ML-Cache）加速采样。效率与质量之间的权衡可以通过以下参数进行调整：`cache_ratio`（取值范围为 (0,1)，值越大速度越快）、`warmup_ratio`（取值范围为 [0,1)，值越小速度越快），以及 `refresh_interval`（取值范围为 (1, timesteps-int(warmup_ratio*timesteps)-1]，值越大速度越快）。\n```\npython inference\u002Finference_t2i.py\\\n    --checkpoint Alpha-VLLM\u002FLumina-DiMOO \\\n    --prompt \"一张引人注目的照片，展示木制厨房桌上的一杯橙汁，捕捉了一个俏皮的瞬间。橙汁从杯中飞溅而出，在杯子上方以奇幻、漩涡状的字体拼写出‘Smile’一词。背景柔和地虚化，露出温馨舒适的家常厨房，光线温暖而惬意。\" \\\n    --height 768 \\\n    --width 1536 \\\n    --timesteps 64 \\\n    --cfg_scale 4.0 \\\n    --seed 65513 \\\n    --vae_ckpt Alpha-VLLM\u002FLumina-DiMOO \\\n    --output_dir output\u002Fresults_text_to_image_usecache \\\n    --use-cache \\\n    --cache_ratio 0.9 \\\n    --warmup_ratio 0.3 \\\n    --refresh_interval 5\n```\n\n- 我们提供了一块 A800 显卡上的推理时间和显存占用作为参考：\n\n| 方法               | 推理时间 | 推理显存 |\n|----------------------|--------|----------|\n| Lumina-DiMOO      | 58.2s     | 38.9 GB  |\n| + ML-Cache        | 32.2s     | 45.9 GB  |\n\n### 🌟 图像到图像推理\n \n#### 1. 
可控生成：“hed_control”、“depth_control”、“openpose_control”、“subject_driven”。\n```\npython inference\u002Finference_i2i.py \\\n    --checkpoint Alpha-VLLM\u002FLumina-DiMOO \\\n    --prompt \"一个功能性的木质打印机支架。它坐落在繁华都市街道的一堵砖墙旁，行人匆匆而过，被复古街灯温暖的光芒照亮。\" \\\n    --image_path examples\u002Fexample_2.jpg \\\n    --edit_type depth_control \\\n    --timesteps 64 \\\n    --cfg_scale 2.5 \\\n    --cfg_img 4.0 \\\n    --vae_ckpt Alpha-VLLM\u002FLumina-DiMOO \\\n    --output_dir output\u002Fresults_image_to_image\n```\n\n#### 2. 主体驱动生成。\n```\npython inference\u002Finference_i2i.py \\\n    --checkpoint Alpha-VLLM\u002FLumina-DiMOO \\\n    --prompt \"一种浓郁香醇的黑色饮品。在黄昏时分的喧嚣都市街头拍摄，该物品放置于露天咖啡馆的桌面上，城市灯光开始闪烁，路人熙熙攘攘，气氛热闹非凡。\" \\\n    --image_path examples\u002Fexample_3.jpg \\\n    --edit_type subject_driven \\\n    --timesteps 64 \\\n    --cfg_scale 2.5 \\\n    --cfg_img 4.0 \\\n    --vae_ckpt Alpha-VLLM\u002FLumina-DiMOO \\\n    --output_dir output\u002Fresults_image_to_image\n```\n\n#### 3. 图像编辑：“edit_add”、“edit_remove”、“edit_replace”、“edit_background”、“edit_text_transfer”。\n```\npython inference\u002Finference_i2i.py \\\n    --checkpoint Alpha-VLLM\u002FLumina-DiMOO \\\n    --prompt \"在图像的右中部添加一座米色小屋，带有棕色装饰条和中央镶嵌菱形图案的双开门，占据画面超过三分之一的空间。\" \\\n    --image_path examples\u002Fexample_4.png \\\n    --edit_type edit_add \\\n    --timesteps 64 \\\n    --cfg_scale 2.5 \\\n    --cfg_img 4.0 \\\n    --vae_ckpt Alpha-VLLM\u002FLumina-DiMOO \\\n    --output_dir output\u002Fresults_image_to_image\n```\n\n#### 4. 风格迁移（以一张图片作为风格参考）\n```\npython inference\u002Finference_i2i.py \\\n    --checkpoint Alpha-VLLM\u002FLumina-DiMOO \\\n    --prompt \"将当前图像转换为所提供图片的风格。\" \\\n    --image_path examples\u002Fexample_5.png \\\n    --ref_image_path examples\u002Fexample_5_style.png \\\n    --edit_type image_ref_transfer \\\n    --timesteps 64 \\\n    --cfg_scale 2.5 \\\n    --cfg_img 4.0 \\\n    --vae_ckpt Alpha-VLLM\u002FLumina-DiMOO \\\n    --output_dir output\u002Fresults_image_to_image\n```\n\n#### 5. 密集预测：“canny_pred”、“hed_pred”、“depth_pred”、“openpose_pred”、“canny_control”。\n```\npython inference\u002Finference_i2i.py \\\n    --checkpoint Alpha-VLLM\u002FLumina-DiMOO \\\n    --prompt \"根据图像生成 Canny 边缘图。\" \\\n    --image_path examples\u002Fexample_1.png \\\n    --edit_type canny_pred \\\n    --timesteps 64 \\\n    --cfg_scale 2.5 \\\n    --cfg_img 4.0 \\\n    --vae_ckpt Alpha-VLLM\u002FLumina-DiMOO \\\n    --output_dir output\u002Fresults_image_to_image\n```\n\n### 🏃 图像修复与扩展推理\n\n#### 1. 图像修复\n```\npython inference\u002Finference_t2i.py\\\n    --checkpoint Alpha-VLLM\u002FLumina-DiMOO \\\n    --prompt \"保时捷展厅。请在车后方的墙上添加一个保时捷标志。\" \\\n    --painting_mode inpainting \\\n    --painting_image examples\u002Fexample_8.png \\\n    --mask_h_ratio 0.5 \\\n    --mask_w_ratio 0.5 \\\n    --timesteps 64 \\\n    --cfg_scale 4.0 \\\n    --seed 65513 \\\n    --vae_ckpt Alpha-VLLM\u002FLumina-DiMOO \\\n    --output_dir output\u002Fresults_text_to_image\n```\n\n#### 2. 
图像扩展\n```\npython inference\u002Finference_t2i.py \\\n    --checkpoint Alpha-VLLM\u002FLumina-DiMOO \\\n    --prompt \"一张照片，展现淡金色的月亮，部分被纤细的卷云遮掩，主宰着壮丽的黄昏天空。月光柔和地洒在下方宁静的湖面上，形成波光粼粼的镜面效果；岸边一艘小巧的木船轻轻摇曳。高大古老的松树环绕着湖泊，枝干如骨骼般伸向天空，低垂的薄雾弥漫其间，柔化了月光，增添了一丝静谧的神秘感。整个场景笼罩在柔和而清冷的光线中，营造出空灵迷人的氛围。\" \\\n    --painting_mode outpainting \\\n    --painting_image examples\u002Fexample_7.png \\\n    --mask_h_ratio 1 \\\n    --mask_w_ratio 0.2 \\\n    --timesteps 64 \\\n    --cfg_scale 4.0 \\\n    --seed 65513 \\\n    --vae_ckpt Alpha-VLLM\u002FLumina-DiMOO \\\n    --output_dir output\u002Fresults_text_to_image\n```\n\n### ⚡️ 图像理解推理\n```\npython inference\u002Finference_mmu.py \\\n    --checkpoint Alpha-VLLM\u002FLumina-DiMOO \\\n    --prompt \"请描述这张图片。\" \\\n    --image_path examples\u002Fexample_6.jpg \\\n    --steps 128 \\\n    --gen_length 128 \\\n    --block_length 32 \\\n    --vae_ckpt Alpha-VLLM\u002FLumina-DiMOO \\\n    --output_dir output\u002Foutputs_text_understanding\n```\n\n## 🏆 基准评测\n\n我们使用 OpenCompass 提供的 [VLMEvalKit](https:\u002F\u002Fgithub.com\u002Fopen-compass\u002FVLMEvalKit) 来对 **Lumina_DiMOO** 进行多基准测试评估。\n\n### 1. 准备工作\n进入 `VLMEvalKit` 目录并安装所需的依赖：\n\n```bash\ncd VLMEvalKit\npip install -r requirements.txt\n```\n **⚠️ 注意事项:** 我们使用一个大语言模型作为评判模型来进行答案匹配。在运行评测之前，您需要编辑 `VLMEvalKit\u002F.env` 文件，填写您的 `OPENAI_API_KEY` 和 `OPENAI_API_BASE`。\n### 2. 支持的基准\n我们支持以下 5 个基准的评测。请在命令参数中使用对应的 **数据名称**：\n\n| 基准 | 数据名称 (`--data`) |\n| :--- | :--- |\n| **POPE** | `POPE` |\n| **MME** | `MME` |\n| **MMBench** | `MMBench_DEV_EN` |\n| **SEEDBench** | `SEEDBench_IMG` |\n| **MMMU** | `MMMU_DEV_VAL` |\n\n### 3. 运行评测\n您可以使用单 GPU 或多 GPU 来进行评测。\n\n**单 GPU 评测:**\n```bash\npython3 run.py --data MMMU_DEV_VAL --model Lumina_DiMOO --verbose\n```\n\n**多 GPU 评测（8 卡）:**\n```bash\ntorchrun --nproc-per-node=8 --master_port=29500 run.py \\\n    --data MMMU_DEV_VAL \\\n    --model Lumina_DiMOO \\\n    --verbose\n```\n\n## 📜 致谢\n本工作还得到了 [MindSpeed MM](https:\u002F\u002Fgitee.com\u002Fascend\u002FMindSpeed-MM) 的支持与实现。MindSpeed MM 是华为计算产品线开发和维护的一个开源大规模多模态模型训练框架，专为分布式训练设计。该框架针对华为 Ascend AI 芯片进行了特别优化，能够全面支持分布式训练，并适用于广泛的多模态任务。\n\n## 📖 BibTeX\n```\n@article{xin2025lumina,\n  title={Lumina-DiMOO: An Omni Diffusion Large Language Model for Multi-Modal Generation and Understanding},\n  author={Xin, Yi and Qin, Qi and Luo, Siqi and Zhu, Kaiwen and Yan, Juncheng and Tai, Yan and Lei, Jiayi and Cao, Yuewen and Wang, Keqi and Wang, Yibin and others},\n  journal={arXiv preprint arXiv:2510.06308},\n  year={2025}\n}\n\n@article{xin2025dmllm,\n  title={dMLLM-TTS: Self-Verified and Efficient Test-Time Scaling for Diffusion Multi-Modal Large Language Models},\n  author={Xin, Yi and Luo, Siqi and Qin, Qi and Chen, Haoxing and Zhu, Kaiwen and Zhang, Zhiwei and He, Yangfan and Zhang, Rongchao and Bai, Jinbin and Cao, Shuo and others},\n  journal={arXiv preprint arXiv:2512.19433},\n  year={2025}\n}\n```","# Lumina-DiMOO 快速上手指南\n\n## 环境准备\n- **系统要求**：支持 Linux 或 macOS 操作系统。\n- **前置依赖**：\n  - Python 3.10\n  - PyTorch（建议使用 2.0+ 版本）\n  - CUDA（根据 GPU 型号选择合适的版本，推荐使用 11.8 或以上）\n\n> 建议使用国内镜像源加速安装，例如：\n> ```bash\n> pip install -r requirements.txt -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n> ```\n\n## 安装步骤\n### 1. 创建 Conda 环境\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-DiMOO.git && cd Lumina-DiMOO\nconda create -n lumina_dimoo python=3.10 -y\nconda activate lumina_dimoo\n```
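\n\n环境创建完成后，可用下面的 Python 片段快速确认 PyTorch 与 CUDA 是否可用（该片段为本指南补充的示例，并非仓库自带脚本）：\n```python\nimport torch\n\n# 快速自检：确认 PyTorch 安装成功并能识别 GPU（补充示例，非仓库代码）。\nprint(\"PyTorch:\", torch.__version__)\nprint(\"CUDA available:\", torch.cuda.is_available())\nif torch.cuda.is_available():\n    print(\"GPU:\", torch.cuda.get_device_name(0))\n```\n\n### 2. 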
安装依赖\n```bash\npip install -r requirements.txt\n```\n\n## 基本使用\n### 文本到图像生成（最简单示例）\n```bash\npython inference\u002Finference_t2i.py \\\n    --checkpoint Alpha-VLLM\u002FLumina-DiMOO \\\n    --prompt \"A striking photograph of a glass of orange juice on a wooden kitchen table, capturing a playful moment.\" \\\n    --height 768 \\\n    --width 1536 \\\n    --timesteps 64 \\\n    --cfg_scale 4.0 \\\n    --seed 65513 \\\n    --vae_ckpt Alpha-VLLM\u002FLumina-DiMOO \\\n    --output_dir output\u002Fresults_text_to_image\n```","某游戏开发团队正在为一款开放世界冒险游戏设计一个动态生成的环境系统，需要根据玩家输入的文本描述自动生成高质量的场景图像，并支持对已有图像进行编辑和扩展。\n\n### 没有 Lumina-DiMOO 时  \n- 需要依赖多个独立模型处理不同任务，如文本到图像、图像编辑、图像补全等，导致流程复杂且效率低下  \n- 图像生成质量不稳定，尤其在高分辨率和复杂场景下容易出现细节缺失或逻辑错误  \n- 图像编辑和补全功能受限，无法实现自然流畅的风格迁移或内容扩展  \n- 模型推理速度慢，影响实时生成和迭代效率  \n\n### 使用 Lumina-DiMOO 后  \n- 通过单一模型完成多模态任务，简化了开发流程并提升了整体一致性  \n- 生成图像质量显著提升，支持高分辨率和复杂场景的准确还原  \n- 支持高效的图像编辑与补全，实现更自然的视觉效果和内容扩展  \n- 采样速度大幅提升，满足实时生成需求，加快开发周期  \n\nLumina-DiMOO 通过统一的多模态能力，显著提升了游戏场景生成的效率与质量。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAlpha-VLLM_Lumina-DiMOO_c89e7576.png","Alpha-VLLM","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FAlpha-VLLM_c381d705.png","A branch of OpenGVLab at Shanghai AI Lab",null,"https:\u002F\u002Fgithub.com\u002FAlpha-VLLM",[81,85,89,93,96,99,102],{"name":82,"color":83,"percentage":84},"Python","#3572A5",99.4,{"name":86,"color":87,"percentage":88},"Jupyter Notebook","#DA5B0B",0.5,{"name":90,"color":91,"percentage":92},"CSS","#663399",0,{"name":94,"color":95,"percentage":92},"Shell","#89e051",{"name":97,"color":98,"percentage":92},"Makefile","#427819",{"name":100,"color":101,"percentage":92},"HTML","#e34c26",{"name":103,"color":104,"percentage":92},"JavaScript","#f1e05a",959,60,"2026-04-05T04:35:28","Apache-2.0","Linux, macOS","需要 NVIDIA GPU，显存 8GB+，CUDA 11.7+","16GB+",{"notes":113,"python":114,"dependencies":115},"建议使用 conda 管理环境，首次运行需下载约 5GB 模型文件。需要安装 CUDA 工具包和 PyTorch。","3.10",[116,117,118,119,120,121,122,123,124],"torch>=2.0","transformers>=4.30","accelerate","diffusers","torchvision","numpy","pillow","scikit-learn","tqdm",[54,26,14],[127,128,129],"diffusion-large-language-model","discrete-diffusion-models","unified-multimodal-understanding-and-generation","2026-03-27T02:49:30.150509","2026-04-06T08:52:29.686707",[133,138,143,148,153,158,163,168],{"id":134,"question_zh":135,"answer_zh":136,"source_url":137},5361,"训练数据集会公开吗？","目前未计划公开训练数据集，但项目中的一些代码和配置可能包含相关细节。建议关注官方文档或联系维护者获取更多信息。","https:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-DiMOO\u002Fissues\u002F10",{"id":139,"question_zh":140,"answer_zh":141,"source_url":142},5362,"文本到图像生成中训练和推理分辨率不一致是否会影响效果？","Lumina-DiMOO 支持任意分辨率训练，论文中提到的 1024×1024 是一个参考值，实际使用中可以是 1024×1024、1536×768 等，只要 H×W 接近即可。推理时使用不同分辨率不会导致质量明显下降。","https:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-DiMOO\u002Fissues\u002F18",{"id":144,"question_zh":145,"answer_zh":146,"source_url":147},5363,"如何复现 Lumina-Dimoo 在多模态理解基准上的结果？","请使用我们更新后的评估代码，并确保使用与论文中相同的解码参数。例如：temperature=0, cfg=0, gen_length=16, block_length=16, gen_steps=16。此外，建议查看官方提供的评估脚本或配置文件。","https:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-DiMOO\u002Fissues\u002F17",{"id":149,"question_zh":150,"answer_zh":151,"source_url":152},5364,"如何测试微调后的模型检查点？","测试微调后的检查点时，需确保保存的检查点目录包含以下文件：special_tokens_map.json、tokenizer.json 和 tokenizer_config.json。如果缺少这些文件，可能会导致加载失败。","https:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-DiMOO\u002Fissues\u002F16",{"id":154,"question_zh":155,"answer_zh":156,"source_url":157},5365,"代码中 LLaDALlamaBlock.forward 是否冗余？","该代码用于方便开发者跳转至 
`LLaDALlamaBlock.forward` 的定义，可以安全地删除。","https:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-DiMOO\u002Fissues\u002F9",{"id":159,"question_zh":160,"answer_zh":161,"source_url":162},5366,"load_image_tokens 中是否存在高度和宽度的混淆？","是的，此处存在一个错误，高度和宽度在预处理阶段被错误地交换了。建议修正为 height = data_pkl[\"height\"] \u002F\u002F 16, width = data_pkl[\"width\"] \u002F\u002F 16。","https:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-DiMOO\u002Fissues\u002F21",{"id":164,"question_zh":165,"answer_zh":166,"source_url":167},5367,"模型是否支持分割和检测任务？","Lumina-DiMOO 不支持分割和对象检测任务。设置 edit_type 为 sam2mask 或 sam2mask_pred 无法生成分割掩码。","https:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-DiMOO\u002Fissues\u002F13",{"id":169,"question_zh":170,"answer_zh":171,"source_url":172},5368,"如何调整 Subject Driven Image Editing 的参数以获得更好的结果？","若生成图像中出现文字扭曲问题，建议减少文本密度。VQGAN 模型在密集文本重建上存在局限性，使用较少文本的主体可能会得到更优结果。","https:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-DiMOO\u002Fissues\u002F7",[]]