[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-Alpha-VLLM--Lumina-T2X":3,"tool-Alpha-VLLM--Lumina-T2X":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 
代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",160015,2,"2026-04-18T11:30:52",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",109154,"2026-04-18T11:18:24",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 
协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[52,13,15,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":10,"last_commit_at":59,"category_tags":60,"status":17},4487,"LLMs-from-scratch","rasbt\u002FLLMs-from-scratch","LLMs-from-scratch 是一个基于 PyTorch 的开源教育项目，旨在引导用户从零开始一步步构建一个类似 ChatGPT 的大型语言模型（LLM）。它不仅是同名技术著作的官方代码库，更提供了一套完整的实践方案，涵盖模型开发、预训练及微调的全过程。\n\n该项目主要解决了大模型领域“黑盒化”的学习痛点。许多开发者虽能调用现成模型，却难以深入理解其内部架构与训练机制。通过亲手编写每一行核心代码，用户能够透彻掌握 Transformer 架构、注意力机制等关键原理，从而真正理解大模型是如何“思考”的。此外，项目还包含了加载大型预训练权重进行微调的代码，帮助用户将理论知识延伸至实际应用。\n\nLLMs-from-scratch 特别适合希望深入底层原理的 AI 开发者、研究人员以及计算机专业的学生。对于不满足于仅使用 API，而是渴望探究模型构建细节的技术人员而言，这是极佳的学习资源。其独特的技术亮点在于“循序渐进”的教学设计：将复杂的系统工程拆解为清晰的步骤，配合详细的图表与示例，让构建一个虽小但功能完备的大模型变得触手可及。无论你是想夯实理论基础，还是为未来研发更大规模的模型做准备",90106,"2026-04-06T11:19:32",[35,15,13,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":67,"readme_en":68,"readme_zh":69,"quickstart_zh":70,"use_case_zh":71,"hero_image_url":72,"owner_login":73,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":76,"owner_email":76,"owner_twitter":76,"owner_website":76,"owner_url":77,"languages":78,"stars":91,"forks":92,"last_commit_at":93,"license":94,"difficulty_score":10,"env_os":95,"env_gpu":96,"env_ram":97,"env_deps":98,"category_tags":104,"github_topics":105,"view_count":32,"oss_zip_url":76,"oss_zip_packed_at":76,"status":17,"created_at":113,"updated_at":114,"faqs":115,"releases":146},9116,"Alpha-VLLM\u002FLumina-T2X","Lumina-T2X","Lumina-T2X is a unified framework for Text to Any Modality Generation","Lumina-T2X 是一个强大的统一生成框架，旨在打破文本到不同媒体形式生成的壁垒。无论是生成高清图像、创作音乐，还是合成视频，用户只需输入一段文字描述，Lumina-T2X 便能将其转化为任意模态、分辨率及时长的内容。\n\n长期以来，AI 生成领域面临的一大痛点是“专款专用”：生成图片需要一套模型，生成音频又需另一套，且往往对输出尺寸或时长有严格限制。Lumina-T2X 通过创新的基于流（Flow-based）的大规模扩散 Transformer 架构，成功解决了这一碎片化问题。它不仅能灵活适应各种生成需求，还支持动态调整输出的分辨率和持续时间，实现了真正的“文生万物”。\n\n这款工具特别适合 AI 研究人员探索多模态生成的前沿技术，也深受开发者青睐，便于其构建灵活的多媒体应用原型。同时，对于数字艺术家和设计师而言，Lumina-T2X 
提供了一个高效的创意辅助手段，让灵感能瞬间跨越文字与视听的界限。作为入选 ICLR 2025 Spotlight 和 NeurIPS 2024 的开源项目，Lumina-T2X 以其卓越的技术架构和广泛的适用性，正推动着生成式 AI 向更通用、更自由","Lumina-T2X 是一个强大的统一生成框架，旨在打破文本到不同媒体形式生成的壁垒。无论是生成高清图像、创作音乐，还是合成视频，用户只需输入一段文字描述，Lumina-T2X 便能将其转化为任意模态、分辨率及时长的内容。\n\n长期以来，AI 生成领域面临的一大痛点是“专款专用”：生成图片需要一套模型，生成音频又需另一套，且往往对输出尺寸或时长有严格限制。Lumina-T2X 通过创新的基于流（Flow-based）的大规模扩散 Transformer 架构，成功解决了这一碎片化问题。它不仅能灵活适应各种生成需求，还支持动态调整输出的分辨率和持续时间，实现了真正的“文生万物”。\n\n这款工具特别适合 AI 研究人员探索多模态生成的前沿技术，也深受开发者青睐，便于其构建灵活的多媒体应用原型。同时，对于数字艺术家和设计师而言，Lumina-T2X 提供了一个高效的创意辅助手段，让灵感能瞬间跨越文字与视听的界限。作为入选 ICLR 2025 Spotlight 和 NeurIPS 2024 的开源项目，Lumina-T2X 以其卓越的技术架构和广泛的适用性，正推动着生成式 AI 向更通用、更自由的方向发展。","\u003C!-- \u003Cp align=\"center\">\n \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAlpha-VLLM_Lumina-T2X_readme_59cd4b982a25.png\" width=\"40%\"\u002F>\n \u003Cbr>\n\u003C\u002Fp> -->\n\n# $\textbf{Lumina-T2X}$: Transforming Text into Any Modality, Resolution, and Duration via Flow-based Large Diffusion Transformers \n\n### \u003Cdiv align=\"center\"> ICLR 2025 Spotlight & NeurIPS 2024 \u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\n\n\u003C!--[![GitHub repo contributors](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fcontributors-anon\u002FAlpha-VLLM\u002FLumina-T2X?style=flat&label=Contributors)](https:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Fgraphs\u002Fcontributors)-->\n\n\u003C!--[![GitHub Commit](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fcommit-activity\u002Fm\u002FAlpha-VLLM\u002FLumina-T2X?label=Commit)](https:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Fcommits\u002Fmain\u002F)-->\n\n\u003C!--[![Pr](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fissues-pr-closed-raw\u002FAlpha-VLLM\u002FLumina-T2X.svg?label=Merged+PRs&color=green)](https:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Fpulls) \u003Cbr>-->\n\n\u003C!--[![GitHub repo 
stars](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002FAlpha-VLLM\u002FLumina-T2X?style=flat&logo=github&logoColor=whitesmoke&label=Stars)](https:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Fstargazers) -->\n\n\u003C!--[![GitHub repo watchers](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fwatchers\u002FAlpha-VLLM\u002FLumina-T2X?style=flat&logo=github&logoColor=whitesmoke&label=Watchers)](https:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Fwatchers) -->\n\n\u003C!--[![GitHub repo size](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Frepo-size\u002FAlpha-VLLM\u002FLumina-T2X?style=flat&logo=github&logoColor=whitesmoke&label=Repo%20Size)](https:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Farchive\u002Frefs\u002Fheads\u002Fmain.zip) -->\n\n[![Lumina-Next](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FPaper-Lumina--Next-2b9348.svg?logo=arXiv)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.18583)&#160;\n[![Lumina-T2X](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FPaper-Lumina--T2X-2b9348.svg?logo=arXiv)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.05945)&#160;\n[![Lumina-mGPT](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FPaper-Lumina--mGPT-2b9348.svg?logo=arXiv)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.02657)&#160;\n\n[![Badge](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F-WeChat@Group-000000?logo=wechat&logoColor=07C160)](http:\u002F\u002Fimagebind-llm.opengvlab.com\u002Fqrcode\u002F)&#160;\n[![weixin](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F-WeChat@机器之心-000000?logo=wechat&logoColor=07C160)](https:\u002F\u002Fmp.weixin.qq.com\u002Fs\u002FNwwbaeRujh-02V6LRs5zMg)&#160;\n[![zhihu](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F-知乎-000000?logo=zhihu&logoColor=0084FF)](https:\u002F\u002Fwww.zhihu.com\u002Forg\u002Fopengvlab)&#160;\n[![zhihu](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F-Twitter@OpenGVLab-black?logo=twitter&logoColor=1D9BF0)](https:\u002F\u002Ftwitter.com\u0
02Fopengvlab\u002Fstatus\u002F1788949243383910804)&#160;\n![Static Badge](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F-MIT-MIT?logoColor=%231082c3&label=Code%20License&link=https%3A%2F%2Fgithub.com%2FAlpha-VLLM%2FLumina-T2X%2Fblob%2Fmain%2FLICENSE)\n\n[![Static Badge](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FVideo%20Introduction%20of%20Lumina--Next-red?logo=youtube)](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=K0-AJa33Rw4)\n[![Static Badge](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FVideo%20Introduction%20of%20Lumina--T2X-pink?logo=youtube)](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=KFtHmS5eUCM)\n\n[![Static Badge](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FOfficial(node1)-6B88E3?logo=youtubegaming&label=Demo%20Lumina-Next-SFT)](http:\u002F\u002F106.14.2.150:10020\u002F)&#160;\n[![Static Badge](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FOfficial(node2)-6B88E3?logo=youtubegaming&label=Demo%20Lumina-Next-SFT)](http:\u002F\u002F106.14.2.150:10021\u002F)&#160;\n[![Static Badge](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FOfficial(node3)-6B88E3?logo=youtubegaming&label=Demo%20Lumina-Next-SFT)](http:\u002F\u002F106.14.2.150:10022\u002F)&#160;\n[![Static Badge](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FOfficial(compositional)-6B88E3?logo=youtubegaming&label=Demo%20Lumina-Next-T2I)](http:\u002F\u002F106.14.2.150:10023\u002F)&#160;\n[![Static Badge](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FOfficial(node1)-violet?logo=youtubegaming&label=Demo%20Lumina-Text2Music)](http:\u002F\u002F139.196.83.164:8000\u002F)&#160;\n[![Static Badge](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLumina--Next--SFT-HF_Space-yellow?logoColor=violet&label=%F0%9F%A4%97%20Demo%20Lumina-Next-SFT)](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002FAlpha-VLLM\u002FLumina-Next-T2I)\n\n[![Static 
Badge](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLumina--Next--SFT%20checkpoints-Model(2B)-purple?logoColor=#571482&label=%F0%9F%A4%97%20Lumina-Next-SFT%20checkpoints)](https:\u002F\u002Fwisemodel.cn\u002Fmodels\u002FAlpha-VLLM\u002FLumina-Next-SFT)\n[![Static Badge](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLumina--Next--T2I%20checkpoints-Model(2B)-purple?logoColor=#571482&label=%F0%9F%A4%97%20Lumina-Next-SFT%20checkpoints)](https:\u002F\u002Fwisemodel.cn\u002Fmodels\u002FAlpha-VLLM\u002FLumina-Next-T2I)\n\n[![Static Badge](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLumina--Next--SFT%20checkpoints-Model(2B)-yellow?logoColor=violet&label=%F0%9F%A4%97%20Lumina-Next-Diffusers%20checkpoints)](https:\u002F\u002Fhuggingface.co\u002FAlpha-VLLM\u002FLumina-Next-SFT-diffusers)\n[![Static Badge](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLumina--Next--SFT%20checkpoints-Model(2B)-yellow?logoColor=violet&label=%F0%9F%A4%97%20Lumina-Next-SFT%20checkpoints)](https:\u002F\u002Fhuggingface.co\u002FAlpha-VLLM\u002FLumina-Next-SFT)\n[![Static Badge](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLumina--Next--T2I%20checkpoints-Model(2B)-yellow?logoColor=violet&label=%F0%9F%A4%97%20Lumina-Next-T2I%20checkpoints)](https:\u002F\u002Fhuggingface.co\u002FAlpha-VLLM\u002FLumina-Next-T2I)\n[![Static Badge](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLumina--T2I%20checkpoints-Model(5B)-yellow?logoColor=violet&label=%F0%9F%A4%97%20Lumina-T2I%20checkpoints)](https:\u002F\u002Fhuggingface.co\u002FAlpha-VLLM\u002FLumina-T2I)\n\n\u003C!-- [![GitHub issues](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fissues\u002FAlpha-VLLM\u002FLumina-T2X?color=critical&label=Issues)]() -->\n\n\u003C!-- [![GitHub closed issues](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fissues-closed\u002FAlpha-VLLM\u002FLumina-T2X?color=success&label=Issues)]() \u003Cbr> -->\n\n\u003C!-- [![GitHub repo 
forks](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fforks\u002FAlpha-VLLM\u002FLumina-T2X?style=flat&logo=github&logoColor=whitesmoke&label=Forks)](https:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Fnetwork)  -->\n\n\u003C!--\n[[📄 Lumina-T2X arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.05945)]\n[[📽️ Video Introduction of Lumina-T2X](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=KFtHmS5eUCM)]\n[👋 join our \u003Ca href=\"http:\u002F\u002Fimagebind-llm.opengvlab.com\u002Fqrcode\u002F\" target=\"_blank\">WeChat\u003C\u002Fa>]\n\n-->\n\n\u003C!-- [[📺 Website](https:\u002F\u002Flumina-t2-x-web.vercel.app\u002F)] -->\n\n\u003C\u002Fdiv>\n\n![intro_large](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAlpha-VLLM_Lumina-T2X_readme_a595094f7406.png)\n\n\u003C!-- [[中文版本]](.\u002FREADME_cn.md) -->\n\n## 📰 News\n\n- **[2024-08-06] 🎉🎉🎉 We have released [Lumina-mGPT](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.02657), the next-generation of generative models in our Lumina family! Lumina-mGPT is an autoregressive transformer capable of photorealistic image generation and other vision-language tasks, e.g., controllable generation, multi-turn dialog, depth\u002Fnormal\u002Fsegmentation map estimation.**\n- **[2024-07-08] 🎉🎉🎉 Lumina-Next is now supported in the [diffusers](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fdiffusers)! Thanks to [@yiyixuxu](https:\u002F\u002Fgithub.com\u002Fyiyixuxu) and [@sayakpaul](https:\u002F\u002Fgithub.com\u002Fsayakpaul)! [HF Model Repo](https:\u002F\u002Fhuggingface.co\u002FAlpha-VLLM\u002FLumina-Next-SFT-diffusers).**\n- [2024-06-26] We have released the inference code for img2img translation using `Lumina-Next-T2I`. 
[CODE](https:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Ftree\u002Fmain\u002Flumina_next_t2i_mini\u002Fscripts\u002Fsample_img2img.sh) [ComfyUI](https:\u002F\u002Fgithub.com\u002Fkijai\u002FComfyUI-LuminaWrapper)\n- [2024-06-21] 🥰🥰🥰 Lumina-Next has a Jupyter notebook for inference, thanks to [camenduru](https:\u002F\u002Fgithub.com\u002Fcamenduru)! [LINK](https:\u002F\u002Fgithub.com\u002Fcamenduru\u002FLumina-Next-jupyter)\n- [2024-06-21] We have uploaded the `Lumina-Next-SFT` and `Lumina-Next-T2I` to [wisemodel.cn](https:\u002F\u002Fwisemodel.cn\u002Fmodels). [wisemodel repo](https:\u002F\u002Fwisemodel.cn\u002Fmodels\u002FAlpha-VLLM\u002FLumina-Next-SFT)\n- [2024-06-19] We have released the `Lumina-T2Audio` (Text-to-Audio) code and model for audio generation. [MODEL](https:\u002F\u002Fhuggingface.co\u002FAlpha-VLLM\u002FLumina-T2Audio)\n- [2024-06-17] 🚀🚀🚀 We now support both inference and training (including Dreambooth) of SD3, implemented in our Lumina framework! [CODE](https:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Ftree\u002Fmain\u002Flumina_next_t2i_mini)\n- **[2024-06-17] 🥰🥰🥰 Lumina-Next supports ComfyUI now, thanks to [Kijai](https:\u002F\u002Fgithub.com\u002Fkijai)! [LINK](https:\u002F\u002Fgithub.com\u002Fkijai\u002FComfyUI-LuminaWrapper)**\n- **[2024-06-08] 🚀🚀🚀 We have released the `Lumina-Next-SFT` model, demonstrating better visual quality! [MODEL](https:\u002F\u002Fhuggingface.co\u002FAlpha-VLLM\u002FLumina-Next-SFT)**\n- [2024-06-07] We have released the `Lumina-T2Music` (Text-to-Music) code and model for music generation. [MODEL](https:\u002F\u002Fhuggingface.co\u002FAlpha-VLLM\u002FLumina-T2Music) [DEMO](http:\u002F\u002F139.196.83.164:8000\u002F)\n- [2024-06-03] We have released the `Compositional Generation` version of `Lumina-Next-T2I`, which enables compositional generation with multiple captions for different regions. [model](https:\u002F\u002Fhuggingface.co\u002FAlpha-VLLM\u002FLumina-Next-T2I). 
[DEMO](http:\u002F\u002F106.14.2.150:10023\u002F)\n- [2024-05-29] We updated the new `Lumina-Next-T2I` [Code](https:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Ftree\u002Fmain\u002Flumina_next_t2i) and [HF Model](https:\u002F\u002Fhuggingface.co\u002FAlpha-VLLM\u002FLumina-Next-T2I), supporting 2K resolution image generation and Time-aware Scaled RoPE.\n- [2024-05-25] We released training scripts for Flag-DiT and Next-DiT, and we have reported the comparison results between Next-DiT and Flag-DiT. [Comparison Results](https:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Fblob\u002Fmain\u002FNext-DiT-ImageNet\u002FREADME.md#results)\n- [2024-05-21] Lumina-Next-T2I supports a higher-order solver. It can generate images in just 10 steps without any distillation. Try our demos [DEMO](http:\u002F\u002F106.14.2.150:10021\u002F).\n- [2024-05-18] We released training scripts for Lumina-T2I 5B. [README](https:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Ftree\u002Fmain\u002Flumina_t2i#training)\n- [2024-05-16] ❗❗❗ We have converted the `.pth` weights to `.safetensors` weights. Please pull the latest code and use `demo.py` for inference.\n- [2024-05-14] Lumina-Next now supports simple **text-to-music** generation ([examples](#text-to-music-generation)), **high-resolution (1024*4096) Panorama** generation conditioned on text ([examples](#panorama-generation)), and **3D point cloud** generation conditioned on labels ([examples](#point-cloud-generation)).\n- [2024-05-13] We give [examples](#multilingual-generation) demonstrating Lumina-T2X's capability to support **multilingual prompts**, and even support prompts containing **emojis**.\n- **[2024-05-12] We excitedly released our `Lumina-Next-T2I` model ([checkpoint](https:\u002F\u002Fhuggingface.co\u002FAlpha-VLLM\u002FLumina-Next-T2I)) which uses a 2B Next-DiT model as the backbone and Gemma-2B as the text encoder. 
Try it out at [demo1](http:\u002F\u002F106.14.2.150:10020\u002F) & [demo2](http:\u002F\u002F106.14.2.150:10021\u002F) & [demo3](http:\u002F\u002F106.14.2.150:10022\u002F). Please refer to the paper [Lumina-Next](assets\u002Flumina-next.pdf) for more details.**\n- [2024-05-10] We released the technical report on [arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.05945).\n- [2024-05-09] We released `Lumina-T2A` (Text-to-Audio) Demos. [Examples](#text-to-audio-generation)\n- [2024-04-29] We released the 5B model [checkpoint](https:\u002F\u002Fhuggingface.co\u002FAlpha-VLLM\u002FLumina-T2I) and demo built upon it for text-to-image generation.\n- [2024-04-25] Support 720P video generation with arbitrary aspect ratio. [Examples](#text-to-video-generation)\n- [2024-04-19] Demo examples released.\n- [2024-04-05] Code released for `Lumina-T2I`.\n- [2024-04-01] We released the initial version of `Lumina-T2I` for text-to-image generation.\n\n## 🚀 Quick Start\n\n> [!Warning]\n> **Since we are updating the code frequently, please pull the latest code:**\n>\n> ```bash\n> git pull origin main\n> ```\n\n### Fast Demo\n\nLumina-Next is now supported in the [diffusers](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fdiffusers) library.\n\n> [!Note]\n> You should install the development version of diffusers (the `main` branch) until diffusers releases a new version:\n> ```bash\n> pip install git+https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fdiffusers\n> ```\n\nThen you can try the code below:\n\n```python\nfrom diffusers import LuminaText2ImgPipeline\nimport torch\n\npipeline = LuminaText2ImgPipeline.from_pretrained(\n    \"Alpha-VLLM\u002FLumina-Next-SFT-diffusers\", torch_dtype=torch.bfloat16\n).to(\"cuda\")\n\nimage = pipeline(prompt=\"Upper body of a young woman in a Victorian-era outfit with brass goggles and leather straps. 
Background shows an industrial revolution cityscape with smoky skies and tall, metal structures\", height=1024, width=768).images[0]\n```\n\nFor more details about training and inference of the Lumina framework, please refer to [Lumina-T2I](.\u002Flumina_t2i\u002FREADME.md#Installation), [Lumina-Next-T2I](.\u002Flumina_next_t2i\u002FREADME.md#Installation), and [Lumina-Next-T2I-Mini](.\u002Flumina_next_t2i_mini\u002FREADME.md#Installation). We highly recommend using **[Lumina-Next-T2I-Mini](.\u002Flumina_next_t2i_mini\u002FREADME.md#Installation)** for training and inference; it is a greatly simplified version of Lumina-Next-T2I with full functionality.\n\n### GUI Demo\n\nTo help you start using our model quickly, we have built several versions of the GUI demo site.\n\n#### Lumina-Next-T2I model demo:\n\nImage Generation: [[node1](http:\u002F\u002F106.14.2.150:10020\u002F)] [[node2](http:\u002F\u002F106.14.2.150:10021\u002F)] [[node3](http:\u002F\u002F106.14.2.150:10022\u002F)]\n\nImage Compositional Generation: [[node1](http:\u002F\u002F106.14.2.150:10023\u002F)]\n\nMusic Generation: [[node1](http:\u002F\u002F139.196.83.164:8000)]\n\n\u003C!-- > [!Warning] -->\n\u003C!-- > **Lumina-T2X employs FSDP for training large diffusion models. FSDP shards parameters, optimizer states, and gradients across GPUs. Thus, at least 8 GPUs are required for full fine-tuning of the Lumina-T2X 5B model. 
Parameter-efficient Finetuning of Lumina-T2X shall be released soon.** -->\n\n### Installation\nTo use `Lumina-T2X` as a library, run the installation command in your environment:\n\n```bash\npip install git+https:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\n```\n\n### Development\nIf you want to contribute to the code, run the commands below to install the `pre-commit` library:\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\n\ncd Lumina-T2X\npip install -e \".[dev]\"\npre-commit install\npre-commit\n```\n\n## 📑 Open-source Plan\n\n- [X] Lumina-Text2Image (Demos✅, Training✅, Inference✅, Checkpoints✅, Diffusers✅)\n- [ ] Lumina-Text2Video (Demos✅)\n- [X] Lumina-Text2Music (Demos✅, Inference✅, Checkpoints✅)\n- [X] Lumina-Text2Audio (Demos✅, Inference✅, Checkpoints✅)\n\n## 📜 Index of Content\n\n- [$\\textbf{Lumina-T2X}$: Transforming Text into Any Modality, Resolution, and Duration via Flow-based Large Diffusion Transformers](#textbflumina-t2x-transforming-text-into-any-modality-resolution-and-duration-via-flow-based-large-diffusion-transformers)\n  - [📰 News](#-news)\n  - [🚀 Quick Start](#-quick-start)\n    - [GUI Demo](#gui-demo)\n      - [Lumina-Next-T2I model demo:](#lumina-next-t2i-model-demo)\n    - [Installation](#installation)\n    - [Development](#development)\n  - [📑 Open-source Plan](#-open-source-plan)\n  - [📜 Index of Content](#-index-of-content)\n  - [Introduction](#introduction)\n  - [📽️ Demo Examples](#️-demo-examples)\n    - [Demos of Lumina-Next-SFT](#demos-of-lumina-next-sft)\n    - [Demos of Lumina-T2I](#demos-of-lumina-t2i)\n      - [Panorama Generation](#panorama-generation)\n    - [Text-to-Video Generation](#text-to-video-generation)\n    - [Text-to-3D Generation](#text-to-3d-generation)\n      - [Point Cloud Generation](#point-cloud-generation)\n    - [Text-to-Audio Generation](#text-to-audio-generation)\n    - [Text-to-music Generation](#text-to-music-generation)\n    - [Multilingual 
Generation](#multilingual-generation)\n  - [⚙️ Diverse Configurations](#️-diverse-configurations)\n  - [Contributors](#contributors)\n  - [📄 Citation](#-citation)\n\n## Introduction\n\nWe introduce the $\textbf{Lumina-T2X}$ family, a series of text-conditioned Diffusion Transformers (DiT) capable of transforming textual descriptions into vivid images, dynamic videos, detailed multi-view 3D images, and synthesized speech. At the core of Lumina-T2X lies the **Flow-based Large Diffusion Transformer (Flag-DiT)**—a robust engine that supports up to **7 billion parameters** and extends sequence lengths to **128,000** tokens. Drawing inspiration from Sora, Lumina-T2X integrates images, videos, multi-views of 3D objects, and speech spectrograms within a spatial-temporal latent token space, and can generate outputs at **any resolution, aspect ratio, and duration**.\n\n🌟 **Features**:\n\n- **Flow-based Large Diffusion Transformer (Flag-DiT)**: Lumina-T2X adopts the **flow matching** formulation and is equipped with many advanced techniques, such as RoPE, RMSNorm, and KQ-norm, **demonstrating faster training convergence, stable training dynamics, and a simplified pipeline**.\n- **Any Modalities, Resolution, and Duration within One Framework**:\n  1. $\textbf{Lumina-T2X}$ can **encode any modality, including images, videos, multi-views of 3D objects, and spectrograms into a unified 1-D token sequence at any resolution, aspect ratio, and temporal duration.**\n  2. By introducing the `[nextline]` and `[nextframe]` tokens, our model can **support resolution extrapolation**, i.e., generating images\u002Fvideos with out-of-domain resolutions **not encountered during training**, such as images from 768x768 to 1792x1792 pixels.\n- **Low Training Resources**: Our empirical observations indicate that employing larger models,\n  high-resolution images, and longer-duration video clips can **significantly accelerate the convergence**\n  **speed** of diffusion transformers. 
Moreover, by employing meticulously curated text-image and text-video pairs featuring high aesthetic quality frames and detailed captions, our $\textbf{Lumina-T2X}$ model learns to generate high-resolution images and coherent videos with minimal computational demands. Remarkably, the default Lumina-T2I configuration, equipped with a 5B Flag-DiT and a 7B LLaMA as the text encoder, **requires only 35% of the computational resources compared to PixArt-**$\alpha$.\n\n![framework](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAlpha-VLLM_Lumina-T2X_readme_62457cda7448.png)\n\n## 📽️ Demo Examples\n\n### Demos of Lumina-Next-SFT\n\n![github_banner](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAlpha-VLLM_Lumina-T2X_readme_542985373cce.png)\n\n### Demos of Visual Anagrams\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAlpha-VLLM_Lumina-T2X_readme_afba4f09af31.png)\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAlpha-VLLM_Lumina-T2X_readme_c89a8ae22c71.png)\n\n### Demos of Lumina-T2I\n\n\u003Cp align=\"center\">\n \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAlpha-VLLM_Lumina-T2X_readme_023467c53236.png\" width=\"90%\"\u002F>\n \u003Cbr>\n\u003C\u002Fp>\n\n#### Panorama Generation\n\n\u003Cp align=\"center\">\n \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAlpha-VLLM_Lumina-T2X_readme_ec393a5ea075.png\" width=\"90%\"\u002F>\n \u003Cbr>\n\u003C\u002Fp>\n\n### Text-to-Video Generation\n\n**720P Videos:**\n\n**Prompt:** The majestic beauty of a waterfall cascading down a cliff into a serene lake.\n\nhttps:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Fassets\u002F54879512\u002F17187de8-7a07-49a8-92f9-fdb8e2f5e64c\n\nhttps:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Fassets\u002F54879512\u002F0a20bb39-f6f7-430f-aaa0-7193a71b256a\n\n**Prompt:** A stylish woman walks down a Tokyo street filled with warm glowing neon and animated city signage. 
She wears a black leather jacket, a long red dress, and black boots, and carries a black purse. She wears sunglasses and red lipstick. She walks confidently and casually. The street is damp and reflective, creating a mirror effect of the colorful lights. Many pedestrians walk about.\n\nhttps:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Fassets\u002F54879512\u002F7bf9ce7e-f454-4430-babe-b14264e0f194\n\n**360P Videos:**\n\nhttps:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Fassets\u002F54879512\u002Fd7fec32c-3655-4fd1-aa14-c0cb3ace3845\n\n### Text-to-3D Generation\n\nhttps:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Fassets\u002F54879512\u002Fcd061b8d-c47b-4c0c-b775-2cbaf8014be9\n\n#### Point Cloud Generation\n\n\u003Cp align=\"center\">\n \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAlpha-VLLM_Lumina-T2X_readme_b5fc9c4e512c.png\" width=\"90%\"\u002F>\n \u003Cbr>\n\u003C\u002Fp>\n\n### Text-to-Audio Generation\n\n> [!Note]\n> **Attention: Mouse over the playbar and click the audio button on the playbar to unmute it.**\n\n\u003C!-- > 🌟🌟🌟 **We recommend visiting the Lumina website to try it out! 
[🌟 visit](https:\u002F\u002Flumina-t2-x-web.vercel.app\u002Fdocs\u002Fdemos\u002Fdemo-of-audio)** -->\n\n**Prompt:** Semiautomatic gunfire occurs with slight echo\n\n**Generated Audio:**\n\nhttps:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Fassets\u002F54879512\u002F25f2a6a8-0386-41e8-ab10-d1303554b944\n\n**Groundtruth:**\n\nhttps:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Fassets\u002F54879512\u002F6722a68a-1a5a-4a44-ba9c-405372dc27ef\n\n**Prompt:** A telephone bell rings\n\n**Generated Audio:**\n\nhttps:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Fassets\u002F54879512\u002F7467dd6d-b163-4436-ac5b-36662d1f9ddf\n\n**Groundtruth:**\n\nhttps:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Fassets\u002F54879512\u002F703ea405-6eb4-4161-b5ff-51a93f81d013\n\n**Prompt:** An engine running followed by the engine revving and tires screeching\n\n**Generated Audio:**\n\nhttps:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Fassets\u002F54879512\u002F5d9dd431-b8b4-41a0-9e78-bb0a234a30b9\n\n**Groundtruth:**\n\nhttps:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Fassets\u002F54879512\u002F9ca4af9e-cee3-4596-b826-d6c25761c3c1\n\n**Prompt:** Birds chirping with insects buzzing and outdoor ambiance\n\n**Generated Audio:**\n\nhttps:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Fassets\u002F54879512\u002Fb776aacb-783b-4f47-bf74-89671a17d38d\n\n**Groundtruth:**\n\nhttps:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Fassets\u002F54879512\u002Fa11333e4-695e-4a8c-8ea1-ee5b83e34682\n\n### Text-to-music Generation\n\n> [!Note]\n> **Attention: Mouse over the playbar and click the audio button on the playbar to unmute it.**\n> For more details check out [this](.\u002Flumina_music\u002FREADME.md)\n\n**Prompt:** An electrifying ska tune with prominent saxophone riffs, energetic e-guitar and acoustic drums, lively percussion, soulful keys, groovy e-bass, and a fast tempo that exudes 
uplifting energy.\n\n**Generated Music:**\n\nhttps:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Fassets\u002F86041420\u002Ffef8f6b9-1e77-457e-bf4b-fb0cccefa0ec\n\n**Prompt:** A high-energy synth rock\u002Fpop song with fast-paced acoustic drums, a triumphant brass\u002Fstring section, and a thrilling synth lead sound that creates an adventurous atmosphere.\n\n**Generated Music:**\n\nhttps:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Fassets\u002F86041420\u002F1f796046-64ab-44ed-a4d8-0ebc0cfc484f\n\n**Prompt:** An uptempo electronic pop song that incorporates digital drums, digital bass and synthpad sounds.\n\n**Generated Music:**\n\nhttps:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Fassets\u002F86041420\u002F4768415e-436a-4d0e-af53-bf7882cb94cd\n\n**Prompt:** A medium-tempo digital keyboard song with a jazzy backing track featuring digital drums, piano, e-bass, trumpet, and acoustic guitar.\n\n**Generated Music:**\n\nhttps:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Fassets\u002F86041420\u002F8994a573-e776-488b-a86c-4398a4362398\n\n**Prompt:** This low-quality folk song features groovy wooden percussion, bass, piano, and flute melodies, as well as sustained strings and shimmering shakers that create a passionate, happy, and joyful atmosphere.\n\n**Generated Music:**\n\nhttps:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Fassets\u002F86041420\u002Fe0b5d197-589c-47d6-954b-b9c1d54feebb\n\n### Multilingual Generation\n\nWe present three multilingual capabilities of Lumina-Next-2B.\n\n**Generating Images conditioned on Chinese poems:**\n\n\u003Cp align=\"center\">\n \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAlpha-VLLM_Lumina-T2X_readme_c197fab47428.png\" width=\"90%\"\u002F>\n \u003Cbr>\n\u003C\u002Fp>\n\n**Generating Images with multilingual prompts:**\n\n\u003Cp align=\"center\">\n \u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAlpha-VLLM_Lumina-T2X_readme_3ab22d6699c2.png\" width=\"90%\"\u002F>\n \u003Cbr>\n\u003C\u002Fp>\n\n\u003Cp align=\"center\">\n \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAlpha-VLLM_Lumina-T2X_readme_a92e8e565263.png\" width=\"90%\"\u002F>\n \u003Cbr>\n\u003C\u002Fp>\n\n**Generating Images with emojis:**\n\n\u003Cp align=\"center\">\n \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAlpha-VLLM_Lumina-T2X_readme_5c9e6b6547be.png\" width=\"90%\"\u002F>\n \u003Cbr>\n\u003C\u002Fp>\n\n\u003C!--\n**Prompt:** Water trickling rapidly and draining\n\n**Generated Audio:**\n\nhttps:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Fassets\u002F54879512\u002F88fcf0e1-b71a-4e94-b9a6-138db6a670f0\n\n**Groundtruth:**\n\nhttps:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Fassets\u002F54879512\u002F6fb9963f-46a5-4020-b160-f9a004528d7e\n\n**Prompt:** Thunderstorm sounds while raining\n\n**Generated Audio:**\n\nhttps:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Fassets\u002F54879512\u002Ffad8baf3-d80b-4915-ba31-aab13db5ce06\n\n**Groundtruth:**\n\nhttps:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Fassets\u002F54879512\u002Fc01a7e6e-3421-4a28-93c5-831523ec061d\n\n**Prompt:** Birds chirping repeatedly\n\n**Generated Audio:**\n\nhttps:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Fassets\u002F54879512\u002F0fa673a3-f9de-487b-8812-1f96a335e913\n\n**Groundtruth:**\n\nhttps:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Fassets\u002F54879512\u002F718289f9-a93e-4ea9-b7db-a14c2b209b28\n\n**Prompt:** Several large bells ring\n\n**Generated 
Audio:**\n\nhttps:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Fassets\u002F54879512\u002F362fde84-e4ae-4152-aeb5-4355155c8719\n\n**Groundtruth:**\n\nhttps:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Fassets\u002F54879512\u002Fda93e13d-6462-48d2-b6dc-af6ff0c4d07d\n\n-->\n\n\u003C!-- For more audio demos visit [lumina website - audio demos](https:\u002F\u002Flumina-t2-x-web.vercel.app\u002Fdocs\u002Fdemos\u002Fdemo-of-audio) -->\n\n\u003C!-- ### More examples -->\n\n\u003C!-- For more demos visit [this website](https:\u002F\u002Flumina-t2-x-web.vercel.app\u002Fdocs\u002Fdemos) -->\n\n\u003C!-- ### High-res. Image Editing\n\n\u003Cp align=\"center\">\n \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAlpha-VLLM_Lumina-T2X_readme_f0f4d58709c9.png\" width=\"90%\"\u002F>\n \u003Cbr>\n \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAlpha-VLLM_Lumina-T2X_readme_ad4ecb2a09bc.png\" width=\"90%\"\u002F>\n\u003C\u002Fp>\n\n### Compositional Generation\n\n\u003Cp align=\"center\">\n \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAlpha-VLLM_Lumina-T2X_readme_1d9f10913ad4.png\" width=\"90%\"\u002F>\n \u003Cbr>\n\u003C\u002Fp>\n\n### Resolution Extrapolation\n\n\u003Cp align=\"center\">\n \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAlpha-VLLM_Lumina-T2X_readme_4c5f9ef6d1da.png\" width=\"90%\"\u002F>\n \u003Cbr>\n \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAlpha-VLLM_Lumina-T2X_readme_0a2beb4d932f.png\" width=\"100%\"\u002F>\n\u003C\u002Fp>\n\n### Consistent-Style Generation\n\n\u003Cp align=\"center\">\n \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAlpha-VLLM_Lumina-T2X_readme_73be17ad5524.png\" width=\"90%\"\u002F>\n \u003Cbr>\n\u003C\u002Fp> -->\n\n## ⚙️ Diverse Configurations\n\nWe support diverse configurations, including text encoders, DiTs of different parameter sizes, inference methods, and VAE 
encoders. Additionally, we offer features such as 1D-RoPE, image enhancement, and more.\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAlpha-VLLM_Lumina-T2X_readme_430ea231d78f.png\" width=\"100%\"\u002F>\n \u003Cbr>\n\u003C\u002Fp>\n\n## Contributors\n\nCore members for code development and maintenance:\n\nDongyang Liu, Le Zhuo, Junlin Xie, Ruoyi Du, Peng Gao\n\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Fgraphs\u002Fcontributors\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAlpha-VLLM_Lumina-T2X_readme_20876116df8b.png\" \u002F>\n\u003C\u002Fa>\n\n## 📄 Citation\n\n```\n@article{gao2024lumina-next,\n  title={Lumina-Next: Making Lumina-T2X Stronger and Faster with Next-DiT},\n  author={Zhuo, Le and Du, Ruoyi and Han, Xiao and Li, Yangguang and Liu, Dongyang and Huang, Rongjie and Liu, Wenze and others},\n  journal={arXiv preprint arXiv:2406.18583},\n  year={2024}\n}\n```\n\n```\n@article{gao2024lumin-t2x,\n  title={Lumina-T2X: Transforming Text into Any Modality, Resolution, and Duration via Flow-based Large Diffusion Transformers},\n  author={Gao, Peng and Zhuo, Le and Liu, Chris and Du, Ruoyi and Luo, Xu and Qiu, Longtian and Zhang, Yuhang and others},\n  journal={arXiv preprint arXiv:2405.05945},\n  year={2024}\n}\n```\n\n\u003C!--\n## Star History\n\n [![Star History Chart](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAlpha-VLLM_Lumina-T2X_readme_f0c581830b36.png)](https:\u002F\u002Fstar-history.com\u002F#Alpha-VLLM\u002FLumina-T2X&Date) -->","\u003C!-- \u003Cp align=\"center\">\n \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAlpha-VLLM_Lumina-T2X_readme_59cd4b982a25.png\" width=\"40%\"\u002F>\n \u003Cbr>\n\u003C\u002Fp> -->\n\n# $\\textbf{Lumina-T2X}$：基于流的大型扩散Transformer，实现文本到任意模态、分辨率和时长的转换\n\n### \u003Cdiv align=\"center\"> ICLR 2025 Spotlight & NeurIPS 2024 \u003C\u002Fdiv>\n\n\u003Cdiv 
align=\"center\">\n\n\u003C!--[![GitHub repo contributors](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fcontributors-anon\u002FAlpha-VLLM\u002FLumina-T2X?style=flat&label=Contributors)](https:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Fgraphs\u002Fcontributors)-->\n\n\u003C!--[![GitHub Commit](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fcommit-activity\u002Fm\u002FAlpha-VLLM\u002FLumina-T2X?label=Commit)](https:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Fcommits\u002Fmain\u002F)-->\n\n\u003C!--[![Pr](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fissues-pr-closed-raw\u002FAlpha-VLLM\u002FLumina-T2X.svg?label=Merged+PRs&color=green)](https:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Fpulls) \u003Cbr>-->\n\n\u003C!--[![GitHub repo stars](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002FAlpha-VLLM\u002FLumina-T2X?style=flat&logo=github&logoColor=whitesmoke&label=Stars)](https:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Fstargazers) -->\n\n\u003C!--[![GitHub repo watchers](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fwatchers\u002FAlpha-VLLM\u002FLumina-T2X?style=flat&logo=github&logoColor=whitesmoke&label=Watchers)](https:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Fwatchers) -->\n\n\u003C!--[![GitHub repo size](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Frepo-size\u002FAlpha-VLLM\u002FLumina-T2X?style=flat&logo=github&logoColor=whitesmoke&label=Repo%20Size)](https:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Farchive\u002Frefs\u002Fheads\u002Fmain.zip) 
-->\n\n[![Lumina-Next](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FPaper-Lumina--Next-2b9348.svg?logo=arXiv)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2406.18583)&#160;\n[![Lumina-T2X](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FPaper-Lumina--T2X-2b9348.svg?logo=arXiv)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.05945)&#160;\n[![Lumina-mGPT](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FPaper-Lumina--mGPT-2b9348.svg?logo=arXiv)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.02657)&#160;\n\n[![Badge](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F-WeChat@Group-000000?logo=wechat&logoColor=07C160)](http:\u002F\u002Fimagebind-llm.opengvlab.com\u002Fqrcode\u002F)&#160;\n[![weixin](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F-WeChat@机器之心-000000?logo=wechat&logoColor=07C160)](https:\u002F\u002Fmp.weixin.qq.com\u002Fs\u002FNwwbaeRujh-02V6LRs5zMg)&#160;\n[![zhihu](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F-知乎-000000?logo=zhihu&logoColor=0084FF)](https:\u002F\u002Fwww.zhihu.com\u002Forg\u002Fopengvlab)&#160;\n[![zhihu](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F-Twitter@OpenGVLab-black?logo=twitter&logoColor=1D9BF0)](https:\u002F\u002Ftwitter.com\u002Fopengvlab\u002Fstatus\u002F1788949243383910804)&#160;\n![Static Badge](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F-MIT-MIT?logoColor=%231082c3&label=Code%20License&link=https%3A%2F%2Fgithub.com%2FAlpha-VLLM%2FLumina-T2X%2Fblob%2Fmain%2FLICENSE)\n\n[![Static Badge](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FVideo%20Introduction%20of%20Lumina--Next-red?logo=youtube)](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=K0-AJa33Rw4)\n[![Static Badge](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FVideo%20Introduction%20of%20Lumina--T2X-pink?logo=youtube)](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=KFtHmS5eUCM)\n\n[![Static 
Badge](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FOfficial(node1)-6B88E3?logo=youtubegaming&label=Demo%20Lumina-Next-SFT)](http:\u002F\u002F106.14.2.150:10020\u002F)&#160;\n[![Static Badge](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FOfficial(node2)-6B88E3?logo=youtubegaming&label=Demo%20Lumina-Next-SFT)](http:\u002F\u002F106.14.2.150:10021\u002F)&#160;\n[![Static Badge](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FOfficial(node3)-6B88E3?logo=youtubegaming&label=Demo%20Lumina-Next-SFT)](http:\u002F\u002F106.14.2.150:10022\u002F)&#160;\n[![Static Badge](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FOfficial(compositional)-6B88E3?logo=youtubegaming&label=Demo%20Lumina-Next-T2I)](http:\u002F\u002F106.14.2.150:10023\u002F)&#160;\n[![Static Badge](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FOfficial(node1)-violet?logo=youtubegaming&label=Demo%20Lumina-Text2Music)](http:\u002F\u002F139.196.83.164:8000\u002F)&#160;\n[![Static Badge](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLumina--Next--SFT-HF_Space-yellow?logoColor=violet&label=%F0%9F%A4%97%20Demo%20Lumina-Next-SFT)](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002FAlpha-VLLM\u002FLumina-Next-T2I)\n\n[![Static Badge](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLumina--Next--SFT%20checkpoints-Model(2B)-purple?logoColor=#571482&label=%F0%9F%A4%97%20Lumina-Next-SFT%20checkpoints)](https:\u002F\u002Fwisemodel.cn\u002Fmodels\u002FAlpha-VLLM\u002FLumina-Next-SFT)\n[![Static Badge](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLumina--Next--T2I%20checkpoints-Model(2B)-purple?logoColor=#571482&label=%F0%9F%A4%97%20Lumina-Next-SFT%20checkpoints)](https:\u002F\u002Fwisemodel.cn\u002Fmodels\u002FAlpha-VLLM\u002FLumina-Next-T2I)\n\n[![Static 
Badge](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLumina--Next--SFT%20checkpoints-Model(2B)-yellow?logoColor=violet&label=%F0%9F%A4%97%20Lumina-Next-Diffusers%20checkpoints)](https:\u002F\u002Fhuggingface.co\u002FAlpha-VLLM\u002FLumina-Next-SFT-diffusers)\n[![Static Badge](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLumina--Next--SFT%20checkpoints-Model(2B)-yellow?logoColor=violet&label=%F0%9F%A4%97%20Lumina-Next-SFT%20checkpoints)](https:\u002F\u002Fhuggingface.co\u002FAlpha-VLLM\u002FLumina-Next-SFT)\n[![Static Badge](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLumina--Next--T2I%20checkpoints-Model(2B)-yellow?logoColor=violet&label=%F0%9F%A4%97%20Lumina-Next-T2I%20checkpoints)](https:\u002F\u002Fhuggingface.co\u002FAlpha-VLLM\u002FLumina-Next-T2I)\n[![Static Badge](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLumina--T2I%20checkpoints-Model(5B)-yellow?logoColor=violet&label=%F0%9F%A4%97%20Lumina-T2I%20checkpoints)](https:\u002F\u002Fhuggingface.co\u002FAlpha-VLLM\u002FLumina-T2I)\n\n\u003C!-- [![GitHub issues](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fissues\u002FAlpha-VLLM\u002FLumina-T2X?color=critical&label=Issues)]() -->\n\n\u003C!-- [![GitHub closed issues](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fissues-closed\u002FAlpha-VLLM\u002FLumina-T2X?color=success&label=Issues)]() \u003Cbr> -->\n\n\u003C!-- [![GitHub repo forks](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fforks\u002FAlpha-VLLM\u002FLumina-T2X?style=flat&logo=github&logoColor=whitesmoke&label=Forks)](https:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Fnetwork)  -->\n\n\u003C!--\n[[📄 Lumina-T2X arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.05945)]\n[[📽️ Video Introduction of Lumina-T2X](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=KFtHmS5eUCM)]\n[👋 join our \u003Ca href=\"http:\u002F\u002Fimagebind-llm.opengvlab.com\u002Fqrcode\u002F\" target=\"_blank\">WeChat\u003C\u002Fa>]\n\n-->\n\n\u003C!-- [[📺 
Website](https:\u002F\u002Flumina-t2-x-web.vercel.app\u002F)] -->\n\n\u003C\u002Fdiv>\n\n![intro_large](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAlpha-VLLM_Lumina-T2X_readme_a595094f7406.png)\n\n\u003C!-- [[中文版本]](.\u002FREADME_cn.md) -->\n\n## 📰 新闻\n\n- **[2024-08-06] 🎉🎉🎉 我们发布了[Lumina-mGPT](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.02657)，这是我们Lumina系列中的下一代生成模型！Lumina-mGPT是一种自回归Transformer模型，能够进行照片级逼真的图像生成以及其他视觉-语言任务，例如可控生成、多轮对话、深度\u002F法线\u002F分割图估计等。**\n- **[2024-07-08] 🎉🎉🎉 Lumina-Next现已在[diffusers](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fdiffusers)中得到支持！感谢[@yiyixuxu](https:\u002F\u002Fgithub.com\u002Fyiyixuxu)和[@sayakpaul](https:\u002F\u002Fgithub.com\u002Fsayakpaul)！[HF模型仓库](https:\u002F\u002Fhuggingface.co\u002FAlpha-VLLM\u002FLumina-Next-SFT-diffusers)。**\n- [2024-06-26] 我们发布了使用`Lumina-Next-T2I`进行img2img转换的推理代码。[代码](https:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Ftree\u002Fmain\u002Flumina_next_t2i_mini\u002Fscripts\u002Fsample_img2img.sh) [ComfyUI](https:\u002F\u002Fgithub.com\u002Fkijai\u002FComfyUI-LuminaWrapper)\n- [2024-06-21] 🥰🥰🥰 Lumina-Next现在有了用于推理的Jupyter Notebook，感谢[camenduru](https:\u002F\u002Fgithub.com\u002Fcamenduru)! [链接](https:\u002F\u002Fgithub.com\u002Fcamenduru\u002FLumina-Next-jupyter)\n- [2024-06-21] 我们已将`Lumina-Next-SFT`和`Lumina-Next-T2I`上传至[wisemodel.cn](https:\u002F\u002Fwisemodel.cn\u002Fmodels)。[wisemodel仓库](https:\u002F\u002Fwisemodel.cn\u002Fmodels\u002FAlpha-VLLM\u002FLumina-Next-SFT)\n- [2024-06-19] 我们发布了用于音频生成的`Lumina-T2Audio`（文本到音频）代码和模型。[模型](https:\u002F\u002Fhuggingface.co\u002FAlpha-VLLM\u002FLumina-T2Audio)\n- [2024-06-17] 🚀🚀🚀 我们已经在Lumina框架中实现了对SD3的推理和训练支持（包括Dreambooth）！[代码](https:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Ftree\u002Fmain\u002Flumina_next_t2i_mini)\n- **[2024-06-17] 🥰🥰🥰 Lumina-Next现已支持ComfyUI，感谢[Kijai](https:\u002F\u002Fgithub.com\u002Fkijai)! 
[链接](https:\u002F\u002Fgithub.com\u002Fkijai\u002FComfyUI-LuminaWrapper)**\n- **[2024-06-08] 🚀🚀🚀 我们发布了`Lumina-Next-SFT`模型，展示了更佳的视觉质量！[模型](https:\u002F\u002Fhuggingface.co\u002FAlpha-VLLM\u002FLumina-Next-SFT)**\n- [2024-06-07] 我们发布了用于音乐生成的`Lumina-T2Music`（文本到音乐）代码和模型。[模型](https:\u002F\u002Fhuggingface.co\u002FAlpha-VLLM\u002FLumina-T2Music) [演示](http:\u002F\u002F139.196.83.164:8000\u002F)\n- [2024-06-03] 我们发布了`Lumina-Next-T2I`的“组合式生成”版本，该版本支持为不同区域提供多个描述性文字以实现组合式生成。[模型](https:\u002F\u002Fhuggingface.co\u002FAlpha-VLLM\u002FLumina-Next-T2I)。[演示](http:\u002F\u002F106.14.2.150:10023\u002F)\n- [2024-05-29] 我们更新了新的`Lumina-Next-T2I`[代码](https:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Ftree\u002Fmain\u002Flumina_next_t2i)和[Hugging Face模型](https:\u002F\u002Fhuggingface.co\u002FAlpha-VLLM\u002FLumina-Next-T2I)。支持2K分辨率图像生成以及时间感知的Scaled RoPE。\n- [2024-05-25] 我们发布了Flag-DiT和Next-DiT的训练脚本，并报告了Next-DiT与Flag-DiT之间的对比结果。[对比结果](https:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Fblob\u002Fmain\u002FNext-DiT-ImageNet\u002FREADME.md#results)\n- [2024-05-21] Lumina-Next-T2I支持更高阶的求解器。它仅需10步即可生成图像，无需任何蒸馏过程。请尝试我们的演示[演示](http:\u002F\u002F106.14.2.150:10021\u002F)。\n- [2024-05-18] 我们发布了Lumina-T2I 5B的训练脚本。[README](https:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Ftree\u002Fmain\u002Flumina_t2i#training)\n- [2024-05-16] ❗❗❗ 我们已将`.pth`权重转换为`.safetensors`格式。请拉取最新代码，并使用`demo.py`进行推理。\n- [2024-05-14] Lumina-Next现在支持简单的**文本到音乐**生成（[示例](#text-to-music-generation)）、基于文本条件的**高分辨率（1024*4096）全景图**生成（[示例](#panorama-generation)），以及基于标签条件的**3D点云**生成（[示例](#point-cloud-generation)）。\n- [2024-05-13] 我们提供了[示例](#multilingual-generation)，展示了Lumina-T2X支持**多语言提示**的能力，甚至可以处理包含**表情符号**的提示。\n- **[2024-05-12] 我们激动地发布了`Lumina-Next-T2I`模型（[检查点](https:\u002F\u002Fhuggingface.co\u002FAlpha-VLLM\u002FLumina-Next-T2I)），该模型以2B 
Next-DiT模型作为骨干网络，并采用Gemma-2B作为文本编码器。请在[demo1](http:\u002F\u002F106.14.2.150:10020\u002F)、[demo2](http:\u002F\u002F106.14.2.150:10021\u002F)和[demo3](http:\u002F\u002F106.14.2.150:10022\u002F)上试用。更多详情请参阅论文[Lumina-Next](assets\u002Flumina-next.pdf)。**\n- [2024-05-10] 我们在[arXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2405.05945)上发布了技术报告。\n- [2024-05-09] 我们发布了`Lumina-T2A`（文本到音频）的演示。[示例](#text-to-audio-generation)\n- [2024-04-29] 我们发布了5B模型的[检查点](https:\u002F\u002Fhuggingface.co\u002FAlpha-VLLM\u002FLumina-T2I)以及基于该模型构建的文本到图像生成演示。\n- [2024-04-25] 支持任意长宽比的720P视频生成。[示例](#text-to-video-generation)\n- [2024-04-19] 发布了演示示例。\n- [2024-04-05] 发布了`Lumina-T2I`的代码。\n- [2024-04-01] 我们发布了用于文本到图像生成的`Lumina-T2I`初始版本。\n\n## 🚀 快速入门\n\n> [!WARNING]\n> **由于我们频繁更新代码，请务必拉取最新代码：**\n>\n> ```bash\n> git pull origin main\n> ```\n\n### 快速演示\n\n我们已在[Hugging Face Diffusers](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fdiffusers)中支持Lumina-Next。\n\n> [!NOTE]\n> 在Diffusers发布新版本之前，您应先安装其开发版（`main`分支）。\n> ```bash\n> pip install git+https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fdiffusers\n> ```\n\n然后您可以尝试以下代码：\n\n```python\nfrom diffusers import LuminaText2ImgPipeline\nimport torch\n\npipeline = LuminaText2ImgPipeline.from_pretrained(\n    \"Alpha-VLLM\u002FLumina-Next-SFT-diffusers\",  # 也可替换为本地权重目录路径\n    torch_dtype=torch.bfloat16,\n).to(\"cuda\")\n\nimage = pipeline(prompt=\"一位年轻女子的上半身，身穿维多利亚时代的服装，戴着黄铜护目镜和皮质绑带。背景是工业革命时期的景象，天空弥漫着烟雾，周围矗立着高大的金属建筑\", height=1024, width=768).images[0]\nimage.save(\"output.png\")\n```\n\n有关Lumina框架的训练和推理的更多详细信息，请参阅[Lumina-T2I](.\u002Flumina_t2i\u002FREADME.md#Installation)、[Lumina-Next-T2I](.\u002Flumina_next_t2i\u002FREADME.md#Installation)和[Lumina-Next-T2I-Mini](.\u002Flumina_next_t2i_mini\u002FREADME.md#Installation)。我们强烈建议您使用**[Lumina-Next-T2I-Mini](.\u002Flumina_next_t2i_mini\u002FREADME.md#Installation)**进行训练和推理，它是功能齐全但极其精简的Lumina-Next-T2I版本。\n\n### GUI 演示\n\n为了让各位快速上手使用我们的模型，我们构建了多个版本的 GUI 演示站点。\n\n#### Lumina-Next-T2I 
模型演示：\n\n图像生成：[[node1](http:\u002F\u002F106.14.2.150:10020\u002F)] [[node2](http:\u002F\u002F106.14.2.150:10021\u002F)] [[node3](http:\u002F\u002F106.14.2.150:10022\u002F)]\n\n图像组合生成：[[node1](http:\u002F\u002F106.14.2.150:10023\u002F)]\n\n音乐生成：[[node1](http:\u002F\u002F139.196.83.164:8000)]\n\n\u003C!-- > [!Warning] -->\n\u003C!-- > **Lumina-T2X 采用 FSDP 来训练大型扩散模型。FSDP 会将参数、优化器状态和梯度分散到各个 GPU 上。因此，对 Lumina-T2X 5B 模型进行全量微调至少需要 8 张 GPU 卡。Lumina-T2X 的参数高效微调功能即将发布。** -->\n\n### 安装\n作为库使用 `Lumina-T2X` 时，请在您的环境中运行以下安装命令：\n\n```bash\npip install git+https:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\n```\n\n### 开发\n如果您想参与代码贡献，应运行以下命令来安装 `pre-commit` 库：\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\n\ncd Lumina-T2X\npip install -e \".[dev]\"\npre-commit install\npre-commit\n```\n\n## 📑 开源计划\n\n- [X] Lumina-Text2Image（演示✅、训练✅、推理✅、检查点✅、Diffusers✅）\n- [ ] Lumina-Text2Video（演示✅）\n- [X] Lumina-Text2Music（演示✅、推理✅、检查点✅）\n- [X] Lumina-Text2Audio（演示✅、推理✅、检查点✅）\n\n## 📜 内容索引\n\n- [$\\textbf{Lumina-T2X}$：通过基于流的大型扩散 Transformer 将文本转换为任意模态、分辨率和时长](#textbflumina-t2x-transforming-text-into-any-modality-resolution-and-duration-via-flow-based-large-diffusion-transformers)\n  - [📰 新闻](#-news)\n  - [🚀 快速入门](#-quick-start)\n    - [GUI 演示](#gui-demo)\n      - [Lumina-Next-T2I 模型演示：](#lumina-next-t2i-model-demo)\n    - [安装](#installation)\n    - [开发](#development)\n  - [📑 开源计划](#-open-source-plan)\n  - [📜 内容索引](#-index-of-content)\n  - [简介](#introduction)\n  - [📽️ 演示示例](#️-demo-examples)\n    - [Lumina-Next-SFT 的演示](#demos-of-lumina-next-sft)\n    - [Lumina-T2I 的演示](#demos-of-lumina-t2i)\n      - [全景生成](#panorama-generation)\n    - [文本转视频生成](#text-to-video-generation)\n    - [文本转 3D 生成](#text-to-3d-generation)\n      - [点云生成](#point-cloud-generation)\n    - [文本转音频生成](#text-to-audio-generation)\n    - [文本转音乐生成](#text-to-music-generation)\n    - [多语言生成](#multilingual-generation)\n  - [⚙️ 多样化配置](#️-diverse-configurations)\n  - [贡献者](#contributors)\n  - 
[📄 引用](#-citation)\n\n## 简介\n\n我们隆重推出 $\\textbf{Lumina-T2X}$ 系列，这是一系列基于文本条件的扩散 Transformer（DiT），能够将文本描述转化为生动的图像、动态视频、精细的多视角 3D 图像以及合成语音。Lumina-T2X 的核心是 **基于流的大型扩散 Transformer（Flag-DiT）**——一个强大的引擎，支持高达 **70 亿参数**，并将序列长度扩展至 **128,000 个标记**。受 Sora 启发，Lumina-T2X 在时空潜在标记空间中整合了图像、视频、3D 对象的多视角视图以及语音频谱图，并且可以生成 **任意分辨率、宽高比和时长** 的输出。\n\n🌟 **特点**：\n\n- **基于流的大型扩散 Transformer（Flag-DiT）**：Lumina-T2X 采用了 **流匹配** 的框架，并配备了 RoPE、RMSNorm 和 KQ-norm 等多项先进技术，**展现出更快的训练收敛速度、稳定的训练动态以及简化的流程**。\n- **单一框架内支持任意模态、分辨率和时长**：\n  1. $\\textbf{Lumina-T2X}$ 可以 **将任何模态，包括图像、视频、3D 对象的多视角视图以及频谱图，编码为统一的 1 维标记序列，且不受分辨率、宽高比和时间长度的限制。**\n  2. 通过引入 `[nextline]` 和 `[nextframe]` 标记，我们的模型能够 **支持分辨率外推**，即生成训练过程中未出现过的域外分辨率的图像或视频，例如从 768×768 像素扩展到 1792×1792 像素。\n- **较低的训练资源需求**：根据我们的经验观察，使用更大规模的模型、更高分辨率的图像以及更长时间的视频片段，可以 **显著加快扩散 Transformer 的收敛速度**。此外，通过精心筛选具有高审美价值的画面和详细描述的文本-图像、文本-视频配对数据，我们的 $\\textbf{Lumina-T2X}$ 模型能够在计算资源消耗较少的情况下，生成高分辨率图像和连贯的视频。值得注意的是，配备 5B Flag-DiT 和 7B LLaMA 作为文本编码器的默认 Lumina-T2I 配置，**仅需 PixArt-$\\alpha$ 所需计算资源的 35%**。\n\n![framework](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAlpha-VLLM_Lumina-T2X_readme_62457cda7448.png)\n\n## 📽️ 演示示例\n\n### Lumina-Next-SFT 的演示\n\n![github_banner](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAlpha-VLLM_Lumina-T2X_readme_542985373cce.png)\n\n### 视觉字谜的演示\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAlpha-VLLM_Lumina-T2X_readme_afba4f09af31.png)\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAlpha-VLLM_Lumina-T2X_readme_c89a8ae22c71.png)\n\n### Lumina-T2I 的演示\n\n\u003Cp align=\"center\">\n \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAlpha-VLLM_Lumina-T2X_readme_023467c53236.png\" width=\"90%\"\u002F>\n \u003Cbr>\n\u003C\u002Fp>\n\n#### 全景生成\n\n\u003Cp align=\"center\">\n \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAlpha-VLLM_Lumina-T2X_readme_ec393a5ea075.png\" width=\"90%\"\u002F>\n \u003Cbr>\n\u003C\u002Fp>\n\n### 文本转视频生成\n\n**720P 
视频：**\n\n**提示**：瀑布从悬崖倾泻而下，注入宁静湖泊的壮丽美景。\n\nhttps:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Fassets\u002F54879512\u002F17187de8-7a07-49a8-92f9-fdb8e2f5e64c\n\nhttps:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Fassets\u002F54879512\u002F0a20bb39-f6f7-430f-aaa0-7193a71b256a\n\n**提示**：一位时尚女性走在东京街头，周围是温暖明亮的霓虹灯和动态的城市招牌。她身穿黑色皮夹克、红色长裙和黑色靴子，手提黑色手袋，戴着太阳镜和红唇膏。她步伐自信而随意。街道湿润且反光，映照出五彩斑斓的灯光。许多行人来来往往。\n\nhttps:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Fassets\u002F54879512\u002F7bf9ce7e-f454-4430-babe-b14264e0f194\n\n**360P 视频：**\n\nhttps:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Fassets\u002F54879512\u002Fd7fec32c-3655-4fd1-aa14-c0cb3ace3845\n\n### 文本到3D生成\n\nhttps:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Fassets\u002F54879512\u002Fcd061b8d-c47b-4c0c-b775-2cbaf8014be9\n\n#### 点云生成\n\n\u003Cp align=\"center\">\n \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAlpha-VLLM_Lumina-T2X_readme_b5fc9c4e512c.png\" width=\"90%\"\u002F>\n \u003Cbr>\n\u003C\u002Fp>\n\n### 文本到音频生成\n\n> [!Note]\n> **注意：将鼠标悬停在播放栏上，点击播放栏上的音频按钮以取消静音。**\n\n\u003C!-- > 🌟🌟🌟 **我们建议访问Lumina网站进行体验！[🌟 访问](https:\u002F\u002Flumina-t2-x-web.vercel.app\u002Fdocs\u002Fdemos\u002Fdemo-of-audio)** -->\n\n**提示词:** 半自动枪声响起，伴有轻微回声\n\n**生成音频:**\n\nhttps:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Fassets\u002F54879512\u002F25f2a6a8-0386-41e8-ab10-d1303554b944\n\n**真实音频:**\n\nhttps:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Fassets\u002F54879512\u002F6722a68a-1a5a-4a44-ba9c-405372dc27ef\n\n**提示词:** 电话铃声响起\n\n**生成音频:**\n\nhttps:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Fassets\u002F54879512\u002F7467dd6d-b163-4436-ac5b-36662d1f9ddf\n\n**真实音频:**\n\nhttps:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Fassets\u002F54879512\u002F703ea405-6eb4-4161-b5ff-51a93f81d013\n\n**提示词:** 
发动机运转后转速升高，轮胎发出尖锐的摩擦声\n\n**生成音频:**\n\nhttps:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Fassets\u002F54879512\u002F5d9dd431-b8b4-41a0-9e78-bb0a234a30b9\n\n**真实音频:**\n\nhttps:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Fassets\u002F54879512\u002F9ca4af9e-cee3-4596-b826-d6c25761c3c1\n\n**提示词:** 鸟鸣声、昆虫嗡嗡声以及户外环境音效\n\n**生成音频:**\n\nhttps:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Fassets\u002F54879512\u002Fb776aacb-783b-4f47-bf74-89671a17d38d\n\n**真实音频:**\n\nhttps:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Fassets\u002F54879512\u002Fa11333e4-695e-4a8c-8ea1-ee5b83e34682\n\n### 文本到音乐生成\n\n> [!Note]\n> **注意：将鼠标悬停在播放栏上，点击播放栏上的音频按钮以取消静音。**\n> 更多详情请参阅[这里](.\u002Flumina_music\u002FREADME.md)\n\n**提示词:** 一首充满电光火石般活力的斯卡曲风，突出萨克斯管即兴演奏、充满能量的电吉他与原声鼓、活泼的打击乐、富有灵魂的键盘声、律动十足的电贝斯，以及快节奏所散发出的振奋人心的能量。\n\n**生成音乐:**\n\nhttps:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Fassets\u002F86041420\u002Ffef8f6b9-1e77-457e-bf4b-fb0cccefa0ec\n\n**提示词:** 一首高能量的合成器摇滚\u002F流行歌曲，配有快速的原声鼓、气势恢宏的铜管乐与弦乐部分，以及令人兴奋的合成器主旋律，营造出一种冒险的氛围。\n\n**生成音乐:**\n\nhttps:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Fassets\u002F86041420\u002F1f796046-64ab-44ed-a4d8-0ebc0cfc484f\n\n**提示词:** 一首快节奏的电子流行歌曲，融合了数字鼓、数字贝斯和合成器铺底音色。\n\n**生成音乐:**\n\nhttps:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Fassets\u002F86041420\u002F4768415e-436a-4d0e-af53-bf7882cb94cd\n\n**提示词:** 一首中等节奏的数字键盘曲，背景伴奏带有爵士风格，包含数字鼓、钢琴、电贝斯、小号和原声吉他。\n\n**生成音乐:**\n\nhttps:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Fassets\u002F86041420\u002F8994a573-e776-488b-a86c-4398a4362398\n\n**提示词:** 这首低质量民谣作品采用了律动十足的木制打击乐、贝斯、钢琴和长笛旋律，同时辅以持续的弦乐和闪烁的沙锤声，营造出热情、欢快且愉悦的氛围。\n\n**生成音乐:**\n\nhttps:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Fassets\u002F86041420\u002Fe0b5d197-589c-47d6-954b-b9c1d54feebb\n\n### 多语言生成\n\n我们展示了Lumina-Next-2B的三项多语言能力。\n\n**根据中文古诗生成图像:**\n\n\u003Cp align=\"center\">\n \u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAlpha-VLLM_Lumina-T2X_readme_c197fab47428.png\" width=\"90%\"\u002F>\n \u003Cbr>\n\u003C\u002Fp>\n\n**使用多语言提示词生成图像:**\n\n\u003Cp align=\"center\">\n \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAlpha-VLLM_Lumina-T2X_readme_3ab22d6699c2.png\" width=\"90%\"\u002F>\n \u003Cbr>\n\u003C\u002Fp>\n\n\u003Cp align=\"center\">\n \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAlpha-VLLM_Lumina-T2X_readme_a92e8e565263.png\" width=\"90%\"\u002F>\n \u003Cbr>\n\u003C\u002Fp>\n\n**使用表情符号生成图像:**\n\n\u003Cp align=\"center\">\n \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAlpha-VLLM_Lumina-T2X_readme_5c9e6b6547be.png\" width=\"90%\"\u002F>\n \u003Cbr>\n\u003C\u002Fp>\n\n\u003C!--\n**提示词:** 水快速流淌并排入下水道\n\n**生成音频:**\n\nhttps:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Fassets\u002F54879512\u002F88fcf0e1-b71a-4e94-b9a6-138db6a670f0\n\n**真实音频:**\n\nhttps:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Fassets\u002F54879512\u002F6fb9963f-46a5-4020-b160-f9a004528d7e\n\n**提示词:** 下雨时的雷暴声\n\n**生成音频:**\n\nhttps:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Fassets\u002F54879512\u002Ffad8baf3-d80b-4915-ba31-aab13db5ce06\n\n**真实音频:**\n\nhttps:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Fassets\u002F54879512\u002Fc01a7e6e-3421-4a28-93c5-831523ec061d\n\n**提示词:** 鸟儿反复鸣叫\n\n**生成音频:**\n\nhttps:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Fassets\u002F54879512\u002F0fa673a3-f9de-487b-8812-1f96a335e913\n\n**真实音频:**\n\nhttps:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Fassets\u002F54879512\u002F718289f9-a93e-4ea9-b7db-a14c2b209b28\n\n**提示词:** 
几口大钟齐鸣\n\n**生成音频:**\n\nhttps:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Fassets\u002F54879512\u002F362fde84-e4ae-4152-aeb5-4355155c8719\n\n**真实音频:**\n\nhttps:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Fassets\u002F54879512\u002Fda93e13d-6462-48d2-b6dc-af6ff0c4d07d\n\n-->\n\n\u003C!-- 更多音频演示请访问[Lumina网站 - 音频演示](https:\u002F\u002Flumina-t2-x-web.vercel.app\u002Fdocs\u002Fdemos\u002Fdemo-of-audio) -->\n\n\u003C!-- ### 更多示例 -->\n\n\u003C!-- 更多演示请访问[这个网站](https:\u002F\u002Flumina-t2-x-web.vercel.app\u002Fdocs\u002Fdemos) -->\n\n\u003C!-- ### 高分辨率图像编辑\n\n\u003Cp align=\"center\">\n \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAlpha-VLLM_Lumina-T2X_readme_f0f4d58709c9.png\" width=\"90%\"\u002F>\n \u003Cbr>\n \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAlpha-VLLM_Lumina-T2X_readme_ad4ecb2a09bc.png\" width=\"90%\"\u002F>\n\u003C\u002Fp>\n\n### 组合式生成\n\n\u003Cp align=\"center\">\n \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAlpha-VLLM_Lumina-T2X_readme_1d9f10913ad4.png\" width=\"90%\"\u002F>\n \u003Cbr>\n\u003C\u002Fp>\n\n### 分辨率外推\n\n\u003Cp align=\"center\">\n \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAlpha-VLLM_Lumina-T2X_readme_4c5f9ef6d1da.png\" width=\"90%\"\u002F>\n \u003Cbr>\n \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAlpha-VLLM_Lumina-T2X_readme_0a2beb4d932f.png\" width=\"100%\"\u002F>\n\u003C\u002Fp>\n\n### 风格一致性生成\n\n\u003Cp align=\"center\">\n \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAlpha-VLLM_Lumina-T2X_readme_73be17ad5524.png\" width=\"90%\"\u002F>\n \u003Cbr>\n\u003C\u002Fp> -->\n\n## ⚙️ 多样化配置\n\n我们支持多种配置，包括文本编码器、不同参数规模的DiT模型、推理方法以及VAE编码器。此外，我们还提供1D-RoPE、图像增强等功能。\n\n\u003Cp align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAlpha-VLLM_Lumina-T2X_readme_430ea231d78f.png\" width=\"100%\"\u002F>\n \u003Cbr>\n\u003C\u002Fp>\n\n## 
贡献者\n\n代码开发与维护的核心成员：\n\nDongyang Liu、Le Zhuo、Junlin Xie、Ruoyi Du、Peng Gao\n\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Fgraphs\u002Fcontributors\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAlpha-VLLM_Lumina-T2X_readme_20876116df8b.png\" \u002F>\n\u003C\u002Fa>\n\n## 📄 引用\n\n```\n@article{gao2024lumina-next,\n  title={Lumina-Next: Making Lumina-T2X Stronger and Faster with Next-DiT},\n  author={Zhuo, Le and Du, Ruoyi and Han, Xiao and Li, Yangguang and Liu, Dongyang and Huang, Rongjie and Liu, Wenze and others},\n  journal={arXiv preprint arXiv:2406.18583},\n  year={2024}\n}\n```\n\n```\n@article{gao2024lumin-t2x,\n  title={Lumina-T2X: Transforming Text into Any Modality, Resolution, and Duration via Flow-based Large Diffusion Transformers},\n  author={Gao, Peng and Zhuo, Le and Liu, Chris and Du, Ruoyi and Luo, Xu and Qiu, Longtian and Zhang, Yuhang and others},\n  journal={arXiv preprint arXiv:2405.05945},\n  year={2024}\n}\n```\n\n\u003C!--\n## 星标历史\n\n [![星标历史图表](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAlpha-VLLM_Lumina-T2X_readme_f0c581830b36.png)](https:\u002F\u002Fstar-history.com\u002F#Alpha-VLLM\u002FLumina-T2X&Date) -->","# Lumina-T2X 快速上手指南\n\nLumina-T2X 是一个基于流匹配（Flow-based）的大型扩散 Transformer 模型，能够将文本转换为任意模态（图像、视频、音频、3D 点云等）、分辨率和时长。本指南将帮助您快速在本地部署并运行该模型。\n\n## 1. 
环境准备\n\n在开始之前，请确保您的开发环境满足以下要求：\n\n*   **操作系统**: Linux (推荐 Ubuntu 20.04+)\n*   **Python**: 3.8 或更高版本\n*   **GPU**: NVIDIA GPU，显存建议 16GB 以上（运行 2B 模型），支持 CUDA 11.8+\n*   **PyTorch**: 2.0+ (需与 CUDA 版本匹配)\n\n### 前置依赖安装\n\n建议使用虚拟环境（如 `conda` 或 `venv`）以避免依赖冲突。\n\n```bash\n# 创建并激活 conda 环境 (示例)\nconda create -n lumina python=3.10 -y\nconda activate lumina\n\n# 安装 PyTorch (请根据实际 CUDA 版本调整，此处以 CUDA 11.8 为例)\npip install torch torchvision torchaudio --index-url https:\u002F\u002Fdownload.pytorch.org\u002Fwhl\u002Fcu118\n\n# 安装其他基础依赖\npip install transformers accelerate safetensors einops timm\n```\n\n> **提示**：国内用户可使用清华源加速安装：\n> `pip install -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple \u003Cpackage_name>`\n\n## 2. 安装步骤\n\nLumina-T2X 现已支持 Hugging Face `diffusers` 库，这是最简便的集成方式。\n\n### 步骤一：克隆最新代码\n由于项目更新频繁，请务必拉取最新代码：\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X.git\ncd Lumina-T2X\ngit pull origin main\n```\n\n### 步骤二：安装 Diffusers 开发版\n目前需要安装 `diffusers` 的 `main` 分支以支持 Lumina-Next 模型：\n\n```bash\npip install git+https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fdiffusers\n```\n\n> **国内加速方案**：如果直接连接 GitHub 较慢，可尝试先下载 diffusers 源码包后本地安装，或使用镜像站配置。\n\n### 步骤三：获取模型权重\n您可以从 Hugging Face 或国内镜像平台（如 WiseModel）下载模型。\n\n**选项 A：Hugging Face (国际)**\n模型会自动下载，或手动下载 `Alpha-VLLM\u002FLumina-Next-SFT-diffusers`。\n\n**选项 B：WiseModel (国内推荐)**\n访问 [WiseModel 模型页](https:\u002F\u002Fwisemodel.cn\u002Fmodels\u002FAlpha-VLLM\u002FLumina-Next-SFT) 下载权重文件至本地目录。\n\n## 3. 
基本使用\n\n以下是最简单的文生图（Text-to-Image）推理示例。\n\n### Python 推理脚本\n\n创建一个名为 `demo.py` 的文件，填入以下代码：\n\n```python\nfrom diffusers import LuminaText2ImgPipeline\nimport torch\n\n# 指定本地模型路径或 Hugging Face 模型 ID\n# 如果使用本地下载的路径，请修改为绝对路径，例如：\"\u002Fpath\u002Fto\u002FLumina-Next-SFT-diffusers\"\nmodel_path = \"Alpha-VLLM\u002FLumina-Next-SFT-diffusers\"\n\n# 加载管道\npipeline = LuminaText2ImgPipeline.from_pretrained(\n    model_path, \n    torch_dtype=torch.bfloat16\n).to(\"cuda\")\n\n# 定义提示词\nprompt = \"Upper body of a young woman in a Victorian-era outfit with brass goggles and leather straps. Background is a steampunk workshop, highly detailed, 8k resolution.\"\n\n# 生成图像\nimage = pipeline(\n    prompt=prompt,\n    num_inference_steps=28, # 默认步数，Lumina-Next 支持高阶求解器，可减少步数\n    guidance_scale=4.5\n).images[0]\n\n# 保存结果\nimage.save(\"output_lumina.png\")\nprint(\"Image generated successfully!\")\n```\n\n### 运行脚本\n\n```bash\npython demo.py\n```\n\n### 进阶功能简述\n*   **高分辨率生成**：Lumina-Next-T2I 支持 2K 分辨率及时间感知 RoPE。\n*   **多模态支持**：同一框架下还支持文生音乐 (`Lumina-T2Music`)、文生音频 (`Lumina-T2Audio`) 及全景图生成，只需切换对应的 Pipeline 和模型权重。\n*   **ComfyUI 支持**：如需图形化界面，可安装 [ComfyUI-LuminaWrapper](https:\u002F\u002Fgithub.com\u002Fkijai\u002FComfyUI-LuminaWrapper) 插件。","某独立游戏开发者正为一款奇幻冒险游戏快速生成多模态资产，包括高清场景概念图、角色动作视频片段及匹配的背景音乐。\n\n### 没有 Lumina-T2X 时\n- **工具链割裂**：需要分别调用文生图、文生视频和文生音频三个不同的模型或平台，切换上下文极其繁琐。\n- **风格难以统一**：不同模型生成的素材在光影、色调和艺术风格上存在显著差异，后期需花费大量时间手动修图和对齐。\n- **分辨率与时长受限**：现有单一模态工具往往限制输出分辨率或视频时长，无法满足高质量游戏过场动画的需求。\n- **迭代成本高昂**：修改一个文本提示词意味着要在三个独立系统中重新运行并再次协调结果，严重拖慢原型设计进度。\n\n### 使用 Lumina-T2X 后\n- **一站式生成**：依托统一的流式扩散 Transformer 架构，仅需一次输入即可同时产出图像、视频和音频，工作流大幅简化。\n- **原生风格一致**：由于所有模态源自同一潜在空间映射，生成的视觉与听觉素材在奇幻风格上天然契合，无需额外调色。\n- **灵活适配需求**：支持任意分辨率和持续时间生成，直接输出符合引擎导入标准的高清长镜头视频与高保真音轨。\n- **高效敏捷迭代**：调整提示词后能瞬间获得全套更新的多模态资源，让开发者能在数分钟内完成多个版本的原型验证。\n\nLumina-T2X 
通过“文本到任意模态”的统一范式，彻底打破了多模态内容生产的壁垒，将游戏资产创作效率提升了数倍。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAlpha-VLLM_Lumina-T2X_a595094f.png","Alpha-VLLM","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FAlpha-VLLM_c381d705.png","A branch of OpenGVLab at Shanghai AI Lab",null,"https:\u002F\u002Fgithub.com\u002FAlpha-VLLM",[79,83,87],{"name":80,"color":81,"percentage":82},"Python","#3572A5",98.8,{"name":84,"color":85,"percentage":86},"Shell","#89e051",1.2,{"name":88,"color":89,"percentage":90},"CSS","#663399",0,2252,95,"2026-04-18T01:32:38","MIT","Linux","必需 NVIDIA GPU，示例代码使用 torch.bfloat16 (建议 Ampere 架构及以上，如 A100\u002FA10\u002F3090\u002F4090)，显存需求未明确说明 (生成高分辨率图像通常需 24GB+)，CUDA 版本未说明 (需支持 bfloat16)","未说明",{"notes":99,"python":97,"dependencies":100},"1. 必须安装 diffusers 的开发版本 (main 分支) 才能运行 Lumina-Next。2. 模型权重已转换为 .safetensors 格式，请使用最新代码。3. 支持多种模态生成 (图像、视频、音频、音乐、3D 点云)。4. 推荐使用 Hugging Face Diffusers 接口进行推理。5. 部分功能 (如 ComfyUI 支持) 需额外安装第三方插件。",[101,102,103],"diffusers (main branch)","torch","transformers",[35,15],[106,107,108,109,110,111,112,103],"aigc","transformer","diffusion-models","diffusion","diffusion-model","diffusion-transformer","generation-models","2026-03-27T02:49:30.150509","2026-04-18T22:34:13.122121",[116,121,126,131,136,141],{"id":117,"question_zh":118,"answer_zh":119,"source_url":120},40901,"为什么图像分辨率增加时，推理时间的增长幅度远超预期（例如从 1024 到 1664）？","这是因为推理时间与序列长度（sequence length）的平方成正比，而序列长度取决于图像的像素面积而非边长。当分辨率从 1024 增加到 1664 时，边长增加了约 1.625 倍，但面积（即 Patch 数量\u002F序列长度）增加了约 2.6 倍。由于 Attention 机制的计算复杂度是序列长度的二次方，因此时间成本会增加约 2.6^2 ≈ 7 倍。这与 UNet 架构不同，DiT 架构的推理时间随分辨率变化更显著。","https:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Fissues\u002F65",{"id":122,"question_zh":123,"answer_zh":124,"source_url":125},40902,"如何为 Lumina Next 模型实现 img2img（图生图）或图像编辑功能？","官方已更新支持 img2img 的代码。实现逻辑与 diffusers 库类似，但需要特别注意起始时间步（start timestep），因为它受时间偏移缩放因子（time-shifting scale）的影响。您可以参考官方提供的脚本 `lumina_next_t2i_mini\u002Fscripts\u002Fsample_img2img.sh` 进行测试，也可以尝试将其集成到 ComfyUI 
中使用。","https:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Fissues\u002F91",{"id":127,"question_zh":128,"answer_zh":129,"source_url":130},40903,"运行 sample.py 时遇到 '_pickle.UnpicklingError: invalid load key, 'v'' 错误怎么办？","该错误通常由以下原因导致：1. 检查点文件（checkpoint files）下载不完整或已损坏，请确认文件夹中是否包含 `model_args.pth` 文件；2. 确认您下载的文件格式是否正确，检查文件名是否以 `.safetensors` 结尾而非其他格式。建议重新从 HuggingFace 仓库下载完整的模型文件。","https:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Fissues\u002F93",{"id":132,"question_zh":133,"answer_zh":134,"source_url":135},40904,"训练 DiT_Llama_5B_patch2 模型时出现 'unexpected keyword argument max_seq_len' 错误或显存不足怎么办？","DiT_Llama_5B_patch2 模型需要使用 FSDP（Fully Sharded Data Parallel）进行多卡训练以减少单卡显存占用。单张 A100 显卡无法运行该模型。请确保使用多 GPU 环境启动训练，FSDP 会将参数和梯度分片，从而随着 GPU 数量增加而降低单卡显存需求。","https:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Fissues\u002F10",{"id":137,"question_zh":138,"answer_zh":139,"source_url":140},40905,"项目是否有计划支持多语言提示词（Multilingual capabilities）或相关的对齐评估？","维护者表示将在未来的技术报告中加入关于大语言模型作为文本编码器时涌现的多语言能力的讨论。同时，支持多语言训练的脚本也计划在近期更新发布。","https:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Fissues\u002F16",{"id":142,"question_zh":143,"answer_zh":144,"source_url":145},40906,"如何在 Hugging Face Spaces 上部署 Lumina-T2X 演示？","您可以参考 Hugging Face 官方提供的逐步指南来构建 Gradio 应用：https:\u002F\u002Fhuggingface.co\u002Fdocs\u002Fhub\u002Fen\u002Fspaces-sdks-gradio。此外，项目可以通过 **ZeroGPU** 计划申请免费的 A100 GPU 资源。目前已有社区成员在 Spaces 上部署了演示（如 Lumina-Next-T2I），但在部署过程中可能需要解决一些特定的环境配置问题。","https:\u002F\u002Fgithub.com\u002FAlpha-VLLM\u002FLumina-T2X\u002Fissues\u002F28",[]]