[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-SandAI-org--MAGI-1":3,"tool-SandAI-org--MAGI-1":64},[4,18,26,35,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107888,2,"2026-04-06T11:32:50",[14,15,13],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":10,"last_commit_at":41,"category_tags":42,"status":17},4487,"LLMs-from-scratch","rasbt\u002FLLMs-from-scratch","LLMs-from-scratch 是一个基于 PyTorch 的开源教育项目，旨在引导用户从零开始一步步构建一个类似 ChatGPT 的大型语言模型（LLM）。它不仅是同名技术著作的官方代码库，更提供了一套完整的实践方案，涵盖模型开发、预训练及微调的全过程。\n\n该项目主要解决了大模型领域“黑盒化”的学习痛点。许多开发者虽能调用现成模型，却难以深入理解其内部架构与训练机制。通过亲手编写每一行核心代码，用户能够透彻掌握 Transformer 架构、注意力机制等关键原理，从而真正理解大模型是如何“思考”的。此外，项目还包含了加载大型预训练权重进行微调的代码，帮助用户将理论知识延伸至实际应用。\n\nLLMs-from-scratch 特别适合希望深入底层原理的 AI 开发者、研究人员以及计算机专业的学生。对于不满足于仅使用 API，而是渴望探究模型构建细节的技术人员而言，这是极佳的学习资源。其独特的技术亮点在于“循序渐进”的教学设计：将复杂的系统工程拆解为清晰的步骤，配合详细的图表与示例，让构建一个虽小但功能完备的大模型变得触手可及。无论你是想夯实理论基础，还是为未来研发更大规模的模型做准备",90106,"2026-04-06T11:19:32",[43,15,13,14],"语言模型",{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":10,"last_commit_at":50,"category_tags":51,"status":17},4292,"Deep-Live-Cam","hacksider\u002FDeep-Live-Cam","Deep-Live-Cam 是一款专注于实时换脸与视频生成的开源工具，用户仅需一张静态照片，即可通过“一键操作”实现摄像头画面的即时变脸或制作深度伪造视频。它有效解决了传统换脸技术流程繁琐、对硬件配置要求极高以及难以实时预览的痛点，让高质量的数字内容创作变得触手可及。\n\n这款工具不仅适合开发者和技术研究人员探索算法边界，更因其极简的操作逻辑（仅需三步：选脸、选摄像头、启动），广泛适用于普通用户、内容创作者、设计师及直播主播。无论是为了动画角色定制、服装展示模特替换，还是制作趣味短视频和直播互动，Deep-Live-Cam 
都能提供流畅的支持。\n\n其核心技术亮点在于强大的实时处理能力，支持口型遮罩（Mouth Mask）以保留使用者原始的嘴部动作，确保表情自然精准；同时具备“人脸映射”功能，可同时对画面中的多个主体应用不同面孔。此外，项目内置了严格的内容安全过滤机制，自动拦截涉及裸露、暴力等不当素材，并倡导用户在获得授权及明确标注的前提下合规使用，体现了技术发展与伦理责任的平衡。",88924,"2026-04-06T03:28:53",[14,15,13,52],"视频",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":32,"last_commit_at":59,"category_tags":60,"status":17},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",85013,"2026-04-06T11:09:19",[15,16,52,61,13,62,43,14,63],"插件","其他","音频",{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":79,"owner_email":79,"owner_twitter":79,"owner_website":80,"owner_url":81,"languages":82,"stars":91,"forks":92,"last_commit_at":93,"license":94,"difficulty_score":10,"env_os":95,"env_gpu":96,"env_ram":95,"env_deps":97,"category_tags":100,"github_topics":101,"view_count":32,"oss_zip_url":79,"oss_zip_packed_at":79,"status":17,"created_at":105,"updated_at":106,"faqs":107,"releases":136},5460,"SandAI-org\u002FMAGI-1","MAGI-1","MAGI-1: Autoregressive Video Generation at Scale","MAGI-1 是一款由 Sand AI 推出的大规模自回归视频生成模型，旨在通过先进的算法实现高质量、长时长的视频内容创作。它主要解决了当前视频生成领域难以兼顾画面连贯性、动作流畅度与生成长度的痛点，能够理解复杂的物理规律并生成细节丰富的动态场景。\n\n作为基于自回归架构的创新成果，MAGI-1 突破了传统扩散模型在时序一致性上的局限。其核心技术亮点在于将视频生成视为序列预测问题，利用大规模数据训练，使模型具备了强大的上下文理解能力，从而在生成长视频时仍能保持角色特征稳定且逻辑自然。\n\n这款工具非常适合人工智能研究人员探索下一代生成范式，同时也为开发者提供了构建视频应用的坚实基础。对于影视创作者、游戏设计师及数字艺术家而言，MAGI-1 能大幅降低高质量视频素材的制作门槛，激发创意灵感。虽然普通用户也可通过其衍生产品体验技术魅力，但其开源特性更侧重于服务专业社群，推动视频生成技术的开放协作与持续进化。","![magi-logo](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSandAI-org_MAGI-1_readme_9d174bcacb2e.png)\n\n\n-----\n\n\u003Cp align=\"center\">\n    \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.13211\">\u003Cimg alt=\"paper\" src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FPaper-arXiv-B31B1B?logo=arxiv\">\u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fsand.ai\">\u003Cimg alt=\"blog\" 
src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FSand%20AI-Homepage-333333.svg?logo=data:image\u002Fsvg%2bxml;base64,PHN2ZyB3aWR0aD0iODAwIiBoZWlnaHQ9IjgwMCIgdmlld0JveD0iMCAwIDgwMCA4MDAiIGZpbGw9Im5vbmUiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyI+CjxwYXRoIGZpbGwtcnVsZT0iZXZlbm9kZCIgY2xpcC1ydWxlPSJldmVub2RkIiBkPSJNMjI3IDIyNS4wODVDMjI3IDIwMi4zMDMgMjI3IDE5MC45MTIgMjMxLjQzNyAxODIuMjExQzIzNS4zMzkgMTc0LjU1NyAyNDEuNTY2IDE2OC4zMzQgMjQ5LjIyNiAxNjQuNDM0QzI1Ny45MzMgMTYwIDI2OS4zMzIgMTYwIDI5Mi4xMjkgMTYwSDUwNy44NzFDNTA5LjI5NSAxNjAgNTEwLjY3NiAxNjAgNTEyLjAxNCAxNjAuMDAxQzUzMi4wODIgMTYwLjAxNyA1NDIuNjExIDE2MC4yNzcgNTUwLjc3NCAxNjQuNDM0QzU1OC40MzQgMTY4LjMzNCA1NjQuNjYxIDE3NC41NTcgNTY4LjU2MyAxODIuMjExQzU3MyAxOTAuOTEyIDU3MyAyMDIuMzAzIDU3MyAyMjUuMDg1VjI1Ni41NThDNTczIDI5MS4zMTkgNTczIDMwOC43IDU2NS4wMzUgMzIzLjI3OUM1NTguNzU2IDMzNC43NzIgNTQzLjU2NSAzNDYuMTEgNTIzLjA3OCAzNTkuNjA1QzUxNC42NzQgMzY1LjE0MSA1MTAuNDcyIDM2Ny45MDkgNTA1LjYzOSAzNjcuOTM2QzUwMC44MDYgMzY3Ljk2NCA0OTYuNTAzIDM2NS4yIDQ4Ny44OTYgMzU5LjY3MUw0ODcuODk2IDM1OS42N0w0NjYuNDY5IDM0NS45MDVDNDU2Ljg3NSAzMzkuNzQyIDQ1Mi4wNzggMzM2LjY2IDQ1Mi4wNzggMzMyLjIxOEM0NTIuMDc4IDMyNy43NzcgNDU2Ljg3NSAzMjQuNjk1IDQ2Ni40NjkgMzE4LjUzMUw1MjYuNzgyIDI3OS43ODVDNTM1LjI5MSAyNzQuMzE5IDU0MC40MzUgMjY0LjkwMyA1NDAuNDM1IDI1NC43OTRDNTQwLjQzNSAyMzguMzg2IDUyNy4xMjUgMjI1LjA4NSA1MTAuNzA1IDIyNS4wODVIMjg5LjI5NUMyNzIuODc1IDIyNS4wODUgMjU5LjU2NSAyMzguMzg2IDI1OS41NjUgMjU0Ljc5NEMyNTkuNTY1IDI2NC45MDMgMjY0LjcwOSAyNzQuMzE5IDI3My4yMTggMjc5Ljc4NUw1MTMuMTggNDMzLjk0MUM1NDIuNDQxIDQ1Mi43MzggNTU3LjA3MSA0NjIuMTM3IDU2NS4wMzUgNDc2LjcxNkM1NzMgNDkxLjI5NCA1NzMgNTA4LjY3NSA1NzMgNTQzLjQzNlY1NzQuOTE1QzU3MyA1OTcuNjk3IDU3MyA2MDkuMDg4IDU2OC41NjMgNjE3Ljc4OUM1NjQuNjYxIDYyNS40NDQgNTU4LjQzNCA2MzEuNjY2IDU1MC43NzQgNjM1LjU2NkM1NDIuMDY3IDY0MCA1MzAuNjY4IDY0MCA1MDcuODcxIDY0MEgyOTIuMTI5QzI2OS4zMzIgNjQwIDI1Ny45MzMgNjQwIDI0OS4yMjYgNjM1LjU2NkMyNDEuNTY2IDYzMS42NjYgMjM1LjMzOSA2MjUuNDQ0IDIzMS40MzcgNjE3Ljc4OUMyMjcgNjA5LjA4OCAyMjcgNTk3LjY5NyAyMjcgNTc0LjkxNVY1NDMuNDM2QzIyNyA1MDguNjc1IDIyNyA0OTEuMjk0IDIzNC45NjUgNDc2LjcxNkMyNDEuMjQ0IDQ2NS4yMjIgMjU2LjQzMyA0NTMuODg2IDI3Ni45MTggNDQwLjM5MkMyODUuMzIyIDQzNC44NTYgMjg5LjUyNSA0MzIuMDg4IDI5NC4zNTcgNDMyLjA2QzI5OS4xOSA0MzIuMDMyIDMwMy40OTQgNDM0Ljc5NyAzMTIuMSA0NDAuMzI2TDMzMy41MjcgNDU0LjA5MUMzNDMuMTIyIDQ2MC4yNTQgMzQ3LjkxOSA0NjMuMzM2IDM0Ny45MTkgNDY3Ljc3OEMzNDcuOTE5IDQ3Mi4yMiAzNDMuMTIyIDQ3NS4zMDEgMzMzLjUyOCA0ODEuNDY1TDMzMy41MjcgNDgxLjQ2NUwyNzMuMjIgNTIwLjIwOEMyNjQuNzA5IDUyNS42NzUgMjU5LjU2NSA1MzUuMDkxIDI1OS41NjUgNTQ1LjIwMkMyNTkuNTY1IDU2MS42MTIgMjcyLjg3NyA1NzQuOTE1IDI4OS4yOTkgNTc0LjkxNUg1MTAuNzAxQzUyNy4xMjMgNTc0LjkxNSA1NDAuNDM1IDU2MS42MTIgNTQwLjQzNSA1NDUuMjAyQzU0MC40MzUgNTM1LjA5MSA1MzUuMjkxIDUyNS42NzUgNTI2Ljc4IDUyMC4yMDhMMjg2LjgyIDM2Ni4wNTNDMjU3LjU2IDM0Ny4yNTYgMjQyLjkyOSAzMzcuODU3IDIzNC45NjUgMzIzLjI3OUMyMjcgMzA4LjcgMjI3IDI5MS4zMTkgMjI3IDI1Ni41NThWMjI1LjA4NVoiIGZpbGw9IiNGRkZGRkYiLz4KPC9zdmc+Cg==\">\u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fmagi.sand.ai\">\u003Cimg alt=\"product\" 
src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FMagi-Product-logo.svg?logo=data:image\u002Fsvg%2bxml;base64,PHN2ZyB3aWR0aD0iODAwIiBoZWlnaHQ9IjgwMCIgdmlld0JveD0iMCAwIDgwMCA4MDAiIGZpbGw9Im5vbmUiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyI+CjxwYXRoIGZpbGwtcnVsZT0iZXZlbm9kZCIgY2xpcC1ydWxlPSJldmVub2RkIiBkPSJNNDY5LjAyNyA1MDcuOTUxVjE4MC4zNjRDNDY5LjAyNyAxNjguNDE2IDQ2OS4wMjcgMTYyLjQ0MiA0NjUuMjQ0IDE2MC41MTlDNDYxLjQ2MSAxNTguNTk2IDQ1Ni42NTkgMTYyLjEzIDQ0Ny4wNTYgMTY5LjE5OEwzNjEuMDQ4IDIzMi40OTZDMzQ2LjI5NiAyNDMuMzUzIDMzOC45MjEgMjQ4Ljc4MSAzMzQuOTQ3IDI1Ni42NUMzMzAuOTczIDI2NC41MTggMzMwLjk3MyAyNzMuNjk1IDMzMC45NzMgMjkyLjA0OVY2MTkuNjM2QzMzMC45NzMgNjMxLjU4NCAzMzAuOTczIDYzNy41NTggMzM0Ljc1NiA2MzkuNDgxQzMzOC41MzkgNjQxLjQwNCAzNDMuMzQxIDYzNy44NyAzNTIuOTQ0IDYzMC44MDJMNDM4Ljk1MiA1NjcuNTA0QzQ1My43MDQgNTU2LjY0OCA0NjEuMDggNTUxLjIxOSA0NjUuMDUzIDU0My4zNUM0NjkuMDI3IDUzNS40ODIgNDY5LjAyNyA1MjYuMzA1IDQ2OS4wMjcgNTA3Ljk1MVpNMjg3LjkwNyA0OTQuMTU1VjIyMS45M0MyODcuOTA3IDIxNC4wMDIgMjg3LjkwNyAyMTAuMDM5IDI4NS4zOTQgMjA4Ljc1NEMyODIuODgxIDIwNy40NyAyNzkuNjg0IDIwOS44MDEgMjczLjI5MiAyMTQuNDYyTDIwOS40MjEgMjYxLjAzMkMxOTguMjYyIDI2OS4xNjggMTkyLjY4MyAyNzMuMjM2IDE4OS42NzUgMjc5LjE2QzE4Ni42NjcgMjg1LjA4NCAxODYuNjY3IDI5Mi4wMDMgMTg2LjY2NyAzMDUuODQxVjU3OC4wNjdDMTg2LjY2NyA1ODUuOTk0IDE4Ni42NjcgNTg5Ljk1OCAxODkuMTggNTkxLjI0MkMxOTEuNjkzIDU5Mi41MjYgMTk0Ljg4OSA1OTAuMTk2IDIwMS4yODIgNTg1LjUzNUwyNjUuMTUyIDUzOC45NjVDMjc2LjMxMSA1MzAuODI5IDI4MS44OSA1MjYuNzYxIDI4NC44OTkgNTIwLjgzN0MyODcuOTA3IDUxNC45MTMgMjg3LjkwNyA1MDcuOTk0IDI4Ny45MDcgNDk0LjE1NVpNNjEzLjMzMyAyMjEuOTNWNDk0LjE1NUM2MTMuMzMzIDUwNy45OTQgNjEzLjMzMyA1MTQuOTEzIDYxMC4zMjUgNTIwLjgzN0M2MDcuMzE3IDUyNi43NjEgNjAxLjczOCA1MzAuODI5IDU5MC41NzkgNTM4Ljk2NUw1MjYuNzA4IDU4NS41MzVDNTIwLjMxNiA1OTAuMTk2IDUxNy4xMTkgNTkyLjUyNiA1MTQuNjA2IDU5MS4yNDJDNTEyLjA5MyA1ODkuOTU4IDUxMi4wOTMgNTg1Ljk5NCA1MTIuMDkzIDU3OC4wNjdWMzA1Ljg0MUM1MTIuMDkzIDI5Mi4wMDMgNTEyLjA5MyAyODUuMDg0IDUxNS4xMDIgMjc5LjE2QzUxOC4xMSAyNzMuMjM2IDUyMy42ODkgMjY5LjE2OCA1MzQuODQ4IDI2MS4wMzJMNTk4LjcxOSAyMTQuNDYyQzYwNS4xMTEgMjA5LjgwMSA2MDguMzA3IDIwNy40NyA2MTAuODIgMjA4Ljc1NEM2MTMuMzMzIDIxMC4wMzkgNjEzLjMzMyAyMTQuMDAyIDYxMy4zMzMgMjIxLjkzWiIgZmlsbD0iI0ZGRkZGRiIgc2hhcGUtcmVuZGVyaW5nPSJjcmlzcEVkZ2VzIi8+Cjwvc3ZnPgo=&color=DCBE7E\">\u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fhuggingface.co\u002Fsand-ai\">\u003Cimg alt=\"Hugging Face\"\n    src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F%F0%9F%A4%97%20Hugging%20Face-Sand AI-ffc107?color=ffc107&logoColor=white\"\u002F>\u003C\u002Fa>\n     \u003Ca href=\"https:\u002F\u002Fx.com\u002FSandAI_HQ\">\u003Cimg alt=\"Twitter Follow\"\n    src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FTwitter-Sand%20AI-white?logo=x&logoColor=white\"\u002F>\u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fdiscord.gg\u002FhgaZ86D7Wv\">\u003Cimg alt=\"Discord\"\n    src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDiscord-Sand%20AI-7289da?logo=discord&logoColor=white&color=7289da\"\u002F>\u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FSandAI-org\u002FMAGI-1\u002FLICENSE\">\u003Cimg alt=\"license\" src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLicense-Apache2.0-green?logo=Apache\">\u003C\u002Fa>\n\u003C\u002Fp>\n\n# MAGI-1: Autoregressive Video Generation at Scale\n\nThis repository contains the code for the MAGI-1 model, pre-trained weights and inference code. You can find more information on our [technical report](https:\u002F\u002Fstatic.magi.world\u002Fstatic\u002Ffiles\u002FMAGI_1.pdf) or directly create magic with MAGI-1 [here](http:\u002F\u002Fsand.ai) . 
🚀✨\n\n\n## 🔥🔥🔥 Latest News\n\n- May 30, 2025: Support for ComfyUI has been added 🎉 — the custom nodes for MAGI-1 are now available. Try them out in your workflows!\n- May 26, 2025: MAGI-1 4.5B distill and distill+quant models have been released 🎉 — we’ve updated the model weights — check it out!\n- May 14, 2025: Added Dify DSL for prompt enhancement 🎉 — import it into Dify to boost prompt quality!\n- Apr 30, 2025: The MAGI-1 4.5B model has been released 🎉. We've updated the model weights — check it out!\n- Apr 21, 2025: MAGI-1 is here 🎉. We've released the model weights and inference code — check it out!\n\n\n## 1. About\n\nWe present MAGI-1, a world model that generates videos by ***autoregressively*** predicting a sequence of video chunks, defined as fixed-length segments of consecutive frames. Trained to denoise per-chunk noise that increases monotonically over time, MAGI-1 enables causal temporal modeling and naturally supports streaming generation. It achieves strong performance on image-to-video (I2V) tasks conditioned on text instructions, providing high temporal consistency and scalability, which are made possible by several algorithmic innovations and a dedicated infrastructure stack. MAGI-1 further supports controllable generation via chunk-wise prompting, enabling smooth scene transitions, long-horizon synthesis, and fine-grained text-driven control. We believe MAGI-1 offers a promising direction for unifying high-fidelity video generation with flexible instruction control and real-time deployment.\n\n\u003Cdiv align=\"center\">\n  \u003Cvideo src=\"https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002F5cfa90e0-f6ed-476b-a194-71f1d309903a\n\" width=\"70%\" poster=\"\"> \u003C\u002Fvideo>\n\u003C\u002Fdiv>\n\n\n## 2. Model Summary\n\n### Transformer-based VAE\n\n- Variational autoencoder (VAE) with a transformer-based architecture, 8x spatial and 4x temporal compression.\n- Fastest average decoding time and highly competitive reconstruction quality.\n\n### Auto-Regressive Denoising Algorithm\n\nMAGI-1 is an autoregressive denoising video generation model that generates videos chunk by chunk instead of as a whole. Each chunk (24 frames) is denoised holistically, and the generation of the next chunk begins as soon as the current one reaches a certain level of denoising. This pipeline design enables concurrent processing of up to four chunks for efficient video generation.\n\n![auto-regressive denoising algorithm](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSandAI-org_MAGI-1_readme_7e7ebeac15bc.png)\n\n### Diffusion Model Architecture\n\nMAGI-1 is built upon the Diffusion Transformer, incorporating several key innovations to enhance training efficiency and stability at scale. These advancements include Block-Causal Attention, Parallel Attention Block, QK-Norm and GQA, Sandwich Normalization in FFN, SwiGLU, and Softcap Modulation. For more details, please refer to the [technical report](https:\u002F\u002Fstatic.magi.world\u002Fstatic\u002Ffiles\u002FMAGI_1.pdf).\n\u003Cdiv align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSandAI-org_MAGI-1_readme_9e339220b590.png\" alt=\"diffusion model architecture\" width=\"500\" \u002F>\n\u003C\u002Fdiv>\n\n### Distillation Algorithm\n\nWe adopt a shortcut distillation approach that trains a single velocity-based model to support variable inference budgets. By enforcing a self-consistency constraint—equating one large step with two smaller steps—the model learns to approximate flow-matching trajectories across multiple step sizes. During training, step sizes are cyclically sampled from {64, 32, 16, 8}, and classifier-free guidance distillation is incorporated to preserve conditional alignment. This enables efficient inference with minimal loss in fidelity.
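\n\nA minimal sketch of this self-consistency target (notation ours, in the spirit of shortcut-style distillation; see the technical report for the exact formulation): a step-size-conditioned velocity model $v_\\theta(x, t, d)$ is trained so that one step of size $2d$ lands where two chained steps of size $d$ do,\n\n$$\nx + 2d\\,v_\\theta(x, t, 2d) \\;\\approx\\; x'', \\qquad x' = x + d\\,v_\\theta(x, t, d), \\quad x'' = x' + d\\,v_\\theta(x', t + d, d),\n$$\n\nwith the step size cycled over $\\{64, 32, 16, 8\\}$ during training.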
\n\n\n## 3. Model Zoo\n\nWe provide the pre-trained weights for MAGI-1, including the 24B and 4.5B models, as well as the corresponding distill and distill+quant models. The model weight links are shown in the table below.\n\n| Model | Link | Recommended Machine |\n| --- | --- | --- |\n| T5 | [T5](https:\u002F\u002Fhuggingface.co\u002Fsand-ai\u002FMAGI-1\u002Ftree\u002Fmain\u002Fckpt\u002Ft5) | - |\n| MAGI-1-VAE | [MAGI-1-VAE](https:\u002F\u002Fhuggingface.co\u002Fsand-ai\u002FMAGI-1\u002Ftree\u002Fmain\u002Fckpt\u002Fvae) | - |\n| MAGI-1-24B | [MAGI-1-24B](https:\u002F\u002Fhuggingface.co\u002Fsand-ai\u002FMAGI-1\u002Ftree\u002Fmain\u002Fckpt\u002Fmagi\u002F24B_base) | H100\u002FH800 × 8 |\n| MAGI-1-24B-distill | [MAGI-1-24B-distill](https:\u002F\u002Fhuggingface.co\u002Fsand-ai\u002FMAGI-1\u002Ftree\u002Fmain\u002Fckpt\u002Fmagi\u002F24B_distill) | H100\u002FH800 × 8 |\n| MAGI-1-24B-distill+fp8_quant | [MAGI-1-24B-distill+quant](https:\u002F\u002Fhuggingface.co\u002Fsand-ai\u002FMAGI-1\u002Ftree\u002Fmain\u002Fckpt\u002Fmagi\u002F24B_distill_quant) | H100\u002FH800 × 4 or RTX 4090 × 8 |\n| MAGI-1-4.5B | [MAGI-1-4.5B](https:\u002F\u002Fhuggingface.co\u002Fsand-ai\u002FMAGI-1\u002Ftree\u002Fmain\u002Fckpt\u002Fmagi\u002F4.5B_base) | RTX 4090 × 1 |\n| MAGI-1-4.5B-distill | [MAGI-1-4.5B-distill](https:\u002F\u002Fhuggingface.co\u002Fsand-ai\u002FMAGI-1\u002Ftree\u002Fmain\u002Fckpt\u002Fmagi\u002F4.5B_distill) | RTX 4090 × 1 |\n| MAGI-1-4.5B-distill+fp8_quant | [MAGI-1-4.5B-distill+quant](https:\u002F\u002Fhuggingface.co\u002Fsand-ai\u002FMAGI-1\u002Ftree\u002Fmain\u002Fckpt\u002Fmagi\u002F4.5B_distill_quant) | RTX 4090 × 1 |\n\n> [!NOTE]\n>\n> For 4.5B models, any machine with at least 24GB of GPU memory is sufficient.\n> If GPU memory is more constrained, you can instead run the 4.5B-distill+fp8_quant model by setting the `window_size` parameter to 1 in the `4.5B_distill_quant_config.json` file. This configuration works on GPUs with at least 12GB of memory.
\n\n## 4. Evaluation\n\n### In-house Human Evaluation\n\nMAGI-1 achieves state-of-the-art performance among open-source models like Wan-2.1 and HunyuanVideo and closed-source models like Hailuo (i2v-01), particularly excelling in instruction following and motion quality, positioning it as a strong potential competitor to closed-source commercial models such as Kling.\n\n![inhouse human evaluation](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSandAI-org_MAGI-1_readme_cbb1a7101782.png)\n\n### Physical Evaluation\n\nThanks to the natural advantages of its autoregressive architecture, Magi achieves far superior precision in predicting physical behavior on the [Physics-IQ benchmark](https:\u002F\u002Fgithub.com\u002Fgoogle-deepmind\u002Fphysics-IQ-benchmark) through video continuation—significantly outperforming all existing models.\n\n| Model | Phys. IQ Score ↑ | Spatial IoU ↑ | Spatio-Temporal IoU ↑ | Weighted Spatial IoU ↑ | MSE ↓ |\n| --- | --- | --- | --- | --- | --- |\n| **V2V Models** | | | | | |\n| **Magi-24B (V2V)** | **56.02** | **0.367** | **0.270** | **0.304** | **0.005** |\n| **Magi-4.5B (V2V)** | **42.44** | **0.234** | **0.285** | **0.188** | **0.007** |\n| VideoPoet (V2V) | 29.50 | 0.204 | 0.164 | 0.137 | 0.010 |\n| **I2V Models** | | | | | |\n| **Magi-24B (I2V)** | **30.23** | **0.203** | **0.151** | **0.154** | **0.012** |\n| Kling1.6 (I2V) | 23.64 | 0.197 | 0.086 | 0.144 | 0.025 |\n| VideoPoet (I2V) | 20.30 | 0.141 | 0.126 | 0.087 | 0.012 |\n| Gen 3 (I2V) | 22.80 | 0.201 | 0.115 | 0.116 | 0.015 |\n| Wan2.1 (I2V) | 20.89 | 0.153 | 0.100 | 0.112 | 0.023 |\n| Sora (I2V) | 10.00 | 0.138 | 0.047 | 0.063 | 0.030 |\n| **GroundTruth** | **100.0** | **0.678** | **0.535** | **0.577** | **0.002** |\n\n\n## 5. How to Run\n\n### Environment Preparation\n\nWe provide two ways to run MAGI-1, with the Docker environment being the recommended option.\n\n**Run with Docker Environment (Recommended)**\n\n```bash\ndocker pull sandai\u002Fmagi:latest\n\ndocker run -it --gpus all --privileged --shm-size=32g --name magi --net=host --ipc=host --ulimit memlock=-1 --ulimit stack=6710886 sandai\u002Fmagi:latest \u002Fbin\u002Fbash\n```\n\n**Run with Source Code**\n\n```bash\n# Create a new environment\nconda create -n magi python==3.10.12\n\n# Install pytorch\nconda install pytorch==2.4.0 torchvision==0.19.0 torchaudio==2.4.0 pytorch-cuda=12.4 -c pytorch -c nvidia\n\n# Install other dependencies\npip install -r requirements.txt\n\n# Install ffmpeg\nconda install -c conda-forge ffmpeg=4.4\n\n# For GPUs based on the Hopper architecture (e.g., H100\u002FH800), it is recommended to install MagiAttention (https:\u002F\u002Fgithub.com\u002FSandAI-org\u002FMagiAttention) for acceleration.
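\n# (Not in the upstream README: on recent NVIDIA drivers, nvidia-smi --query-gpu=compute_cap --format=csv prints the GPU compute capability; Hopper parts report 9.0.)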
\n# For non-Hopper GPUs, installing MagiAttention is not necessary.\ngit clone git@github.com:SandAI-org\u002FMagiAttention.git\ncd MagiAttention\ngit submodule update --init --recursive\npip install --no-build-isolation .\n```\n\n### Inference Command\n\nTo run the `MagiPipeline`, you can control the input and output by modifying the parameters in the `example\u002F24B\u002Frun.sh` or `example\u002F4.5B\u002Frun.sh` script. Below is an explanation of the key parameters:\n\n#### Parameter Descriptions\n\n- `--config_file`: Specifies the path to the configuration file, which contains model configuration parameters, e.g., `example\u002F24B\u002F24B_config.json`.\n- `--mode`: Specifies the mode of operation. Available options are:\n  - `t2v`: Text to Video\n  - `i2v`: Image to Video\n  - `v2v`: Video to Video\n- `--prompt`: The text prompt used for video generation, e.g., `\"Good Boy\"`.\n- `--image_path`: Path to the image file, used only in `i2v` mode.\n- `--prefix_video_path`: Path to the prefix video file, used only in `v2v` mode.\n- `--output_path`: Path where the generated video file will be saved.\n\n#### Bash Script\n\n```bash\n#!\u002Fbin\u002Fbash\n# Run the 24B MAGI-1 model\nbash example\u002F24B\u002Frun.sh\n\n# Run the 4.5B MAGI-1 model\nbash example\u002F4.5B\u002Frun.sh\n```\n\n#### Customizing Parameters\n\nYou can modify the parameters in `run.sh` as needed. For example:\n\n- To use the Image to Video mode (`i2v`), set `--mode` to `i2v` and provide `--image_path`:\n  ```bash\n  --mode i2v \\\n  --image_path example\u002Fassets\u002Fimage.jpeg \\\n  ```\n\n- To use the Video to Video mode (`v2v`), set `--mode` to `v2v` and provide `--prefix_video_path`:\n  ```bash\n  --mode v2v \\\n  --prefix_video_path example\u002Fassets\u002Fprefix_video.mp4 \\\n  ```\n\nBy adjusting these parameters, you can flexibly control the input and output to meet different requirements.\n\n### Some Useful Configs (for config.json)\n\n> [!NOTE]\n>\n> - If you are running the 24B model with RTX 4090 \\* 8, please set `pp_size: 2, cp_size: 4`.\n>\n> - Our model supports arbitrary resolutions. To accelerate the inference process, the default resolution for the 4.5B model is set to 720×720 in `4.5B_config.json`.\n\n| Config | Help |\n| --- | --- |\n| seed | Random seed used for video generation |\n| video_size_h | Height of the video |\n| video_size_w | Width of the video |\n| num_frames | Controls the duration of the generated video |\n| fps | Frames per second; 4 video frames correspond to 1 latent_frame |\n| cfg_number | The base model uses cfg_number=3; distill and quant models use cfg_number=1 |\n| load | Directory containing a model checkpoint |\n| t5_pretrained | Path to the pretrained T5 model |\n| vae_pretrained | Path to the pretrained VAE model |\n\nFor example, with `fps=24` and `num_frames=96`, the model generates 24 latent frames (96 video frames ÷ 4), i.e., four seconds of video produced as four 24-frame chunks.\n\n## 6. Prompt Enhancement\n\nTo improve prompt quality, we provide a [Dify DSL](\u002Fassets\u002Fprompt_enhancement_dify_dsl.yml) file that can be imported directly into [Dify](https:\u002F\u002Fdify.ai\u002F) to set up a prompt enhancement pipeline. 
If you’re new to Dify, see [how to create an app from a DSL file](https:\u002F\u002Fdocs.dify.ai\u002Fen\u002Fguides\u002Fapplication-orchestrate\u002Fcreating-an-application#creating-from-a-dsl-file) to get started.\n\n## 7. License\n\nThis project is licensed under the Apache License 2.0 - see the [LICENSE](LICENSE) file for details.\n\n## 8. Citation\n\nIf you find our code or model useful in your research, please cite:\n\n```bibtex\n@misc{ai2025magi1autoregressivevideogeneration,\n      title={MAGI-1: Autoregressive Video Generation at Scale},\n      author={Sand. ai and Hansi Teng and Hongyu Jia and Lei Sun and Lingzhi Li and Maolin Li and Mingqiu Tang and Shuai Han and Tianning Zhang and W. Q. Zhang and Weifeng Luo and Xiaoyang Kang and Yuchen Sun and Yue Cao and Yunpeng Huang and Yutong Lin and Yuxin Fang and Zewei Tao and Zheng Zhang and Zhongshu Wang and Zixun Liu and Dai Shi and Guoli Su and Hanwen Sun and Hong Pan and Jie Wang and Jiexin Sheng and Min Cui and Min Hu and Ming Yan and Shucheng Yin and Siran Zhang and Tingting Liu and Xianping Yin and Xiaoyu Yang and Xin Song and Xuan Hu and Yankai Zhang and Yuqiao Li},\n      year={2025},\n      eprint={2505.13211},\n      archivePrefix={arXiv},\n      primaryClass={cs.CV},\n      url={https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.13211},\n}\n```\n\n## 9. Contact\n\nIf you have any questions, please feel free to raise an issue or contact us at [research@sand.ai](mailto:research@sand.ai) .\n","![magi-logo](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSandAI-org_MAGI-1_readme_9d174bcacb2e.png)\n\n\n-----\n\n\u003Cp align=\"center\">\n    \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.13211\">\u003Cimg alt=\"paper\" src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FPaper-arXiv-B31B1B?logo=arxiv\">\u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fsand.ai\">\u003Cimg alt=\"blog\" 
src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FSand%20AI-Homepage-333333.svg?logo=data:image\u002Fsvg%2bxml;base64,PHN2ZyB3aWR0aD0iODAwIiBoZWlnaHQ9IjgwMCIgdmlld0JveD0iMCAwIDgwMCA4MDAiIGZpbGw9Im5vbmUiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyI+CjxwYXRoIGZpbGwtcnVsZT0iZXZlbm9kZCIgY2xpcC1ydWxlPSJldmVub2RkIiBkPSJNMjI3IDIyNS4wODVDMjI3IDIwMi4zMDMgMjI3IDE5MC45MTIgMjMxLjQzNyAxODIuMjExQzIzNS4zMzkgMTc0LjU1NyAyNDEuNTY2IDE2OC4zMzQgMjQ5LjIyNiAxNjQuNDM0QzI1Ny45MzMgMTYwIDI2OS4zMzIgMTYwIDI5Mi4xMjkgMTYwSDUwNy44NzFDNTA5LjI5NSAxNjAgNTEwLjY3NiAxNjAgNTEyLjAxNCAxNjAuMDAxQzUzMi4wODIgMTYwLjAxNyA1NDIuNjExIDE2MC4yNzcgNTUwLjc3NCAxNjQuNDM0QzU1OC40MzQgMTY4LjMzNCA1NjQuNjYxIDE3NC41NTcgNTY4LjU2MyAxODIuMjExQzU3MyAxOTAuOTEyIDU3MyAyMDIuMzAzIDU3MyAyMjUuMDg1VjI1Ni41NThDNTczIDI5MS4zMTkgNTczIDMwOC43IDU2NS4wMzUgMzIzLjI3OUM1NTguNzU2IDMzNC43NzIgNTQzLjU2NSAzNDYuMTEgNTIzLjA3OCAzNTkuNjA1QzUxNC42NzQgMzY1LjE0MSA1MTAuNDcyIDM2Ny45MDkgNTA1LjYzOSAzNjcuOTM2QzUwMC44MDYgMzY3Ljk2NCA0OTYuNTAzIDM2NS4yIDQ4Ny44OTYgMzU5LjY3MUw0ODcuODk2IDM1OS42N0w0NjYuNDY5IDM0NS45MDVDNDU2Ljg3NSAzMzkuNzQyIDQ1Mi4wNzggMzM2LjY2IDQ1Mi4wNzggMzMyLjIxOEM0NTIuMDc4IDMyNy43NzcgNDU2Ljg3NSAzMjQuNjk1IDQ2Ni40NjkgMzE4LjUzMUw1MjYuNzgyIDI3OS43ODVDNTM1LjI5MSAyNzQuMzE5IDU0MC40MzUgMjY0LjkwMyA1NDAuNDM1IDI1NC43OTRDNTQwLjQzNSAyMzguMzg2IDUyNy4xMjUgMjI1LjA4NSA1MTAuNzA1IDIyNS4wODVIMjg5LjI5NUMyNzIuODc1IDIyNS4wODUgMjU5LjU2NSAyMzguMzg2IDI1OS41NjUgMjU0Ljc5NEMyNTkuNTY1IDI2NC45MDMgMjY0LjcwOSAyNzQuMzE5IDI3My4yMTggMjc5Ljc4NUw1MTMuMTggNDMzLjk0MUM1NDIuNDQxIDQ1Mi43MzggNTU3LjA3MSA0NjIuMTM3IDU2NS4wMzUgNDc2LjcxNkM1NzMgNDkxLjI5NCA1NzMgNTA4LjY3NSA1NzMgNTQzLjQzNlY1NzQuOTE1QzU3MyA1OTcuNjk3IDU3MyA2MDkuMDg4IDU2OC41NjMgNjE3Ljc4OUM1NjQuNjYxIDYyNS40NDQgNTU4LjQzNCA2MzEuNjY2IDU1MC43NzQgNjM1LjU2NkM1NDIuMDY3IDY0MCA1MzAuNjY4IDY0MCA1MDcuODcxIDY0MEgyOTIuMTI5QzI2OS4zMzIgNjQwIDI1Ny45MzMgNjQwIDI0OS4yMjYgNjM1LjU2NkMyNDEuNTY2IDYzMS42NjYgMjM1LjMzOSA2MjUuNDQ0IDIzMS40MzcgNjE3Ljc4OUMyMjcgNjA5LjA4OCAyMjcgNTk3LjY5NyAyMjcgNTc0LjkxNVY1NDMuNDM2QzIyNyA1MDguNjc1IDIyNyA0OTEuMjk0IDIzNC45NjUgNDc2LjcxNkMyNDEuMjQ0IDQ2NS4yMjIgMjU2LjQzMyA0NTMuODg2IDI3Ni45MTggNDQwLjM5MkMyODUuMzIyIDQzNC44NTYgMjg5LjUyNSA0MzIuMDg4IDI5NC4zNTcgNDMyLjA2QzI5OS4xOSA0MzIuMDMyIDMwMy40OTQgNDM0Ljc5NyAzMTIuMSA0NDAuMzI2TDMzMy41MjcgNDU0LjA5MUMzNDMuMTIyIDQ2MC4yNTQgMzQ3LjkxOSA0NjMuMzM2IDM0Ny45MTkgNDY3Ljc3OEMzNDcuOTE5IDQ3Mi4yMiAzNDMuMTIyIDQ3NS4zMDEgMzMzLjUyOCA0ODEuNDY1TDMzMy41MjcgNDgxLjQ2NUwyNzMuMjIgNTIwLjIwOEMyNjQuNzA5IDUyNS42NzUgMjU5LjU2NSA1MzUuMDkxIDI1OS41NjUgNTQ1LjIwMkMyNTkuNTY1IDU2MS42MTIgMjcyLjg3NyA1NzQuOTE1IDI4OS4yOTkgNTc0LjkxNUg1MTAuNzAxQzUyNy4xMjMgNTc0LjkxNSA1NDAuNDM1IDU2MS42MTIgNTQwLjQzNSA1NDUuMjAyQzU0MC40MzUgNTM1LjA5MSA1MzUuMjkxIDUyNS42NzUgNTI2Ljc4IDUyMC4yMDhMMjg2LjgyIDM2Ni4wNTNDMjU3LjU2IDM0Ny4yNTYgMjQyLjkyOSAzMzcuODU3IDIzNC45NjUgMzIzLjI3OUMyMjcgMzA4LjcgMjI3IDI5MS4zMTkgMjI3IDI1Ni41NThWMjI1LjA4NVoiIGZpbGw9IiNGRkZGRkYiLz4KPC9zdmc+Cg==\">\u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fmagi.sand.ai\">\u003Cimg alt=\"product\" 
src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FMagi-Product-logo.svg?logo=data:image\u002Fsvg%2bxml;base64,PHN2ZyB3aWR0aD0iODAwIiBoZWlnaHQ9IjgwMCIgdmlld0JveD0iMCAwIDgwMCA4MDAiIGZpbGw9Im5vbmUiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyI+CjxwYXRoIGZpbGwtcnVsZT0iZXZlbm9kZCIgY2xpcC1ydWxlPSJldmVub2RkIiBkPSJNNDY5LjAyNyA1MDcuOTUxVjE4MC4zNjRDNDY5LjAyNyAxNjguNDE2IDQ2OS4wMjcgMTYyLjQ0MiA0NjUuMjQ0IDE2MC41MTlDNDYxLjQ2MSAxNTguNTk2IDQ1Ni42NTkgMTYyLjEzIDQ0Ny4wNTYgMTY5LjE5OEwzNjEuMDQ4IDIzMi40OTZDMzQ2LjI5NiAyNDMuMzUzIDMzOC45MjEgMjQ4Ljc4MSAzMzQuOTQ3IDI1Ni42NUMzMzAuOTczIDI2NC41MTggMzMwLjk3MyAyNzMuNjk1IDMzMC45NzMgMjkyLjA0OVY2MTkuNjM2QzMzMC45NzMgNjMxLjU4NCAzMzAuOTczIDYzNy41NTggMzM0Ljc1NiA2MzkuNDgxQzMzOC41MzkgNjQxLjQwNCAzNDMuMzQxIDYzNy44NyAzNTIuOTQ0IDYzMC44MDJMNDM4Ljk1MiA1NjcuNTA0QzQ1My43MDQgNTU2LjY0OCA0NjEuMDggNTUxLjIxOSA0NjUuMDUzIDU0My4zNUM0NjkuMDI3IDUzNS40ODIgNDY5LjAyNyA1MjYuMzA1IDQ2OS4wMjcgNTA3Ljk1MVpNMjg3LjkwNyA0OTQuMTU1VjIyMS45M0MyODcuOTA3IDIxNC4wMDIgMjg3LjkwNyAyMTAuMDM5IDI4NS4zOTQgMjA4Ljc1NEMyODIuODgxIDIwNy40NyAyNzkuNjg0IDIwOS44MDEgMjczLjI5MiAyMTQuNDYyTDIwOS40MjEgMjYxLjAzMkMxOTguMjYyIDI2OS4xNjggMTkyLjY4MyAyNzMuMjM2IDE4OS42NzUgMjc5LjE2QzE4Ni42NjcgMjg1LjA4NCAxODYuNjY3IDI5Mi4wMDMgMTg2LjY2NyAzMDUuODQxVjU3OC4wNjdDMTg2LjY2NyA1ODUuOTk0IDE4Ni42NjcgNTg5Ljk1OCAxODkuMTggNTkxLjI0MkMxOTEuNjkzIDU5Mi41MjYgMTk0Ljg4OSA1OTAuMTk2IDIwMS4yODIgNTg1LjUzNUwyNjUuMTUyIDUzOC45NjVDMjc2LjMxMSA1MzAuODI5IDI4MS44OSA1MjYuNzYxIDI4NC44OTkgNTIwLjgzN0MyODcuOTA3IDUxNC45MTMgMjg3LjkwNyA1MDcuOTk0IDI4Ny45MDcgNDk0LjE1NVpNNjEzLjMzMyAyMjEuOTNWNDk0LjE1NUM2MTMuMzMzIDUwNy45OTQgNjEzLjMzMyA1MTQuOTEzIDYxMC4zMjUgNTIwLjgzN0M2MDcuMzE3IDUyNi43NjEgNjAxLjczOCA1MzAuODI5IDU5MC41NzkgNTM4Ljk2NUw1MjYuNzA4IDU4NS41MzVDNTIwLjMxNiA1OTAuMTk2IDUxNy4xMTkgNTkyLjUyNiA1MTQuNjA2IDU5MS4yNDJDNTEyLjA5MyA1ODkuOTU4IDUxMi4wOTMgNTg1Ljk5NCA1MTIuMDkzIDU3OC4wNjdWMzA1Ljg0MUM1MTIuMDkzIDI5Mi4wMDMgNTEyLjA5MyAyODUuMDg0IDUxNS4xMDIgMjc5LjE2QzUxOC4xMSAyNzMuMjM2IDUyMy42ODkgMjY5LjE2OCA1MzQuODQ4IDI2MS4wMzJMNTk4LjcxOSAyMTQuNDYyQzYwNS4xMTEgMjA5LjgwMSA2MDguMzA3IDIwNy40NyA2MTAuODIgMjA4Ljc1NEM2MTMuMzMzIDIxMC4wMzkgNjEzLjMzMyAyMTQuMDAyIDYxMy4zMzMgMjIxLjkzWiIgZmlsbD0iI0ZGRkZGRiIgc2hhcGUtcmVuZGVyaW5nPSJjcmlzcEVkZ2VzIi8+Cjwvc3ZnPgo=&color=DCBE7E\">\u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fhuggingface.co\u002Fsand-ai\">\u003Cimg alt=\"Hugging Face\"\n    src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F%F0%9F%A4%97%20Hugging%20Face-Sand AI-ffc107?color=ffc107&logoColor=white\"\u002F>\u003C\u002Fa>\n     \u003Ca href=\"https:\u002F\u002Fx.com\u002FSandAI_HQ\">\u003Cimg alt=\"Twitter Follow\"\n    src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FTwitter-Sand%20AI-white?logo=x&logoColor=white\"\u002F>\u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fdiscord.gg\u002FhgaZ86D7Wv\">\u003Cimg alt=\"Discord\"\n    src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDiscord-Sand%20AI-7289da?logo=discord&logoColor=white&color=7289da\"\u002F>\u003C\u002Fa>\n    \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FSandAI-org\u002FMAGI-1\u002FLICENSE\">\u003Cimg alt=\"license\" src=\"https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLicense-Apache2.0-green?logo=Apache\">\u003C\u002Fa>\n\u003C\u002Fp>\n\n# MAGI-1: Autoregressive Video Generation at Scale\n\nThis repository contains the code for the MAGI-1 model, pre-trained weights and inference code. You can find more information on our [technical report](https:\u002F\u002Fstatic.magi.world\u002Fstatic\u002Ffiles\u002FMAGI_1.pdf) or directly create magic with MAGI-1 [here](http:\u002F\u002Fsand.ai) . 
🚀✨\n\n\n## 🔥🔥🔥 Latest News\n\n- May 30, 2025: Support for ComfyUI is added 🎉 — the custom nodes for MAGI-1 are now available. Try them out in your workflows!\n- May 26, 2025: MAGI-1 4.5B distill and distill+quant models has been released 🎉 — we’ve updated the model weights - check it out!\n- May 14, 2025: Added Dify DSL for prompt enhancement 🎉 — import it into Dify to boost prompt quality!\n- Apr 30, 2025: MAGI-1 4.5B model has been released 🎉. We've updated the model weights — check it out!\n- Apr 21, 2025: MAGI-1 is here 🎉. We've released the model weights and inference code — check it out!\n\n## 1. 关于\n\n我们推出了MAGI-1，这是一种世界模型，通过***自回归***方式预测一系列视频块来生成视频。这里的“视频块”是指由连续帧组成的固定长度片段。MAGI-1经过训练，能够逐步去噪随时间单调递增的每块噪声，从而实现因果性的时序建模，并自然支持流式生成。它在基于文本指令的图像到视频（I2V）任务中表现出色，提供了高度的时间一致性与可扩展性。这些优势得益于多项算法创新以及专门构建的基础设施栈。此外，MAGI-1还支持基于块级提示的可控生成，能够实现流畅的场景过渡、长时序合成以及细粒度的文本驱动控制。我们相信，MAGI-1为将高保真视频生成与灵活的指令控制及实时部署相结合提供了一个极具前景的方向。\n\n\u003Cdiv align=\"center\">\n  \u003Cvideo src=\"https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002F5cfa90e0-f6ed-476b-a194-71f1d309903a\n\" width=\"70%\" poster=\"\"> \u003C\u002Fvideo>\n\u003C\u002Fdiv>\n\n\n## 2. 模型概览\n\n### 基于Transformer的VAE\n\n- 基于Transformer架构的变分自编码器（VAE），具有8倍的空间压缩和4倍的 temporal 压缩。\n- 解码速度最快，且重建质量极具竞争力。\n\n### 自回归去噪算法\n\nMAGI-1是一种自回归去噪视频生成模型，它不是一次性生成整段视频，而是逐块生成。每块包含24帧，会在整体上进行去噪处理；当前块达到一定去噪程度后，便会开始生成下一帧。这种流水线设计允许同时处理最多四个视频块，从而高效地生成视频。\n\n![自回归去噪算法](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSandAI-org_MAGI-1_readme_7e7ebeac15bc.png)\n\n### 扩散模型架构\n\nMAGI-1基于扩散Transformer构建，并引入了多项关键创新，以提升大规模训练的效率和稳定性。这些改进包括：块因果注意力、并行注意力块、QK归一化与GQA、FFN中的三明治归一化、SwiGLU以及Softcap调制。更多细节请参阅[技术报告](https:\u002F\u002Fstatic.magi.world\u002Fstatic\u002Ffiles\u002FMAGI_1.pdf)。\n\u003Cdiv align=\"center\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSandAI-org_MAGI-1_readme_9e339220b590.png\" alt=\"扩散模型架构\" width=\"500\" \u002F>\n\u003C\u002Fdiv>\n\n### 蒸馏算法\n\n我们采用了一种快捷蒸馏方法，通过训练一个基于速度的单一模型来支持不同的推理预算。通过强制执行自我一致性约束——即让一步大步等同于两步小步——模型能够学习在不同步长下近似流匹配轨迹。在训练过程中，步长会循环从{64, 32, 16, 8}中采样，并结合无分类器指导蒸馏以保持条件对齐。这使得在不显著损失保真度的情况下，仍能高效地进行推理。\n\n\n## 3. 
模型库\n\n我们提供了MAGI-1的预训练权重，包括24B和4.5B版本，以及对应的蒸馏版和量化蒸馏版。模型权重链接见下表。\n\n| 模型                         | 链接                                                                 | 推荐硬件             |\n| ------------------------------ | -------------------------------------------------------------------- | ------------------------------- |\n| T5                             | [T5](https:\u002F\u002Fhuggingface.co\u002Fsand-ai\u002FMAGI-1\u002Ftree\u002Fmain\u002Fckpt\u002Ft5)        | -                               |\n| MAGI-1-VAE                     | [MAGI-1-VAE](https:\u002F\u002Fhuggingface.co\u002Fsand-ai\u002FMAGI-1\u002Ftree\u002Fmain\u002Fckpt\u002Fvae) | -                               |\n| MAGI-1-24B                     | [MAGI-1-24B](https:\u002F\u002Fhuggingface.co\u002Fsand-ai\u002FMAGI-1\u002Ftree\u002Fmain\u002Fckpt\u002Fmagi\u002F24B_base) | H100\u002FH800 × 8                   |\n| MAGI-1-24B-distill              | [MAGI-1-24B-distill](https:\u002F\u002Fhuggingface.co\u002Fsand-ai\u002FMAGI-1\u002Ftree\u002Fmain\u002Fckpt\u002Fmagi\u002F24B_distill) | H100\u002FH800 × 8                   |\n| MAGI-1-24B-distill+fp8_quant    | [MAGI-1-24B-distill+quant](https:\u002F\u002Fhuggingface.co\u002Fsand-ai\u002FMAGI-1\u002Ftree\u002Fmain\u002Fckpt\u002Fmagi\u002F24B_distill_quant) | H100\u002FH800 × 4 或 RTX 4090 × 8    |\n| MAGI-1-4.5B                    | [MAGI-1-4.5B](https:\u002F\u002Fhuggingface.co\u002Fsand-ai\u002FMAGI-1\u002Ftree\u002Fmain\u002Fckpt\u002Fmagi\u002F4.5B_base) | RTX 4090 × 1                    |\n| MAGI-1-4.5B-distill             | [MAGI-1-4.5B-distill](https:\u002F\u002Fhuggingface.co\u002Fsand-ai\u002FMAGI-1\u002Ftree\u002Fmain\u002Fckpt\u002Fmagi\u002F4.5B_distill) | RTX 4090 × 1                    |\n| MAGI-1-4.5B-distill+fp8_quant   | [MAGI-1-4.5B-distill+quant](https:\u002F\u002Fhuggingface.co\u002Fsand-ai\u002FMAGI-1\u002Ftree\u002Fmain\u002Fckpt\u002Fmagi\u002F4.5B_distill_quant) | RTX 4090 × 1                    |\n\n> [!NOTE]\n>\n> 对于4.5B模型，任何至少拥有24GB显存的设备均可运行。\n> 如果显存较为紧张，可以使用4.5B-distill+fp8_quant模型，只需在`4.5B_distill_quant_config.json`文件中将`window_size`参数设置为1即可。该配置可在至少12GB显存的GPU上运行。\n\n## 4. 评估\n\n### 内部人工评估\n\nMAGI-1在开源模型如Wan-2.1和HunyuanVideo，以及闭源模型如Hailuo (i2v-01) 中均达到了最先进的水平，尤其在指令遵循性和运动质量方面表现突出，使其有望成为闭源商业模型如Kling的强大竞争对手。\n\n![内部人工评估](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSandAI-org_MAGI-1_readme_cbb1a7101782.png)\n\n### 物理行为评估\n\n得益于自回归架构的天然优势，Magi 通过视频续写在 [Physics-IQ 基准测试](https:\u002F\u002Fgithub.com\u002Fgoogle-deepmind\u002Fphysics-IQ-benchmark)上对物理行为的预测精度远超现有模型。\n\n| 模型          | Phys. 
IQ 分数 ↑ | 空间 IoU ↑ | 空间-时间 IoU ↑ | 加权空间 IoU ↑ | MSE ↓ |\n|---|---|---|---|---|---|\n| **V2V 模型** | | | | | |\n| **Magi-24B (V2V)** | **56.02** | **0.367** | **0.270** | **0.304** | **0.005** |\n| **Magi-4.5B (V2V)** | **42.44** | **0.234** | **0.285** | **0.188** | **0.007** |\n| VideoPoet (V2V) | 29.50 | 0.204 | 0.164 | 0.137 | 0.010 |\n| **I2V 模型** | | | | | |\n| **Magi-24B (I2V)** | **30.23** | **0.203** | **0.151** | **0.154** | **0.012** |\n| Kling1.6 (I2V) | 23.64 | 0.197 | 0.086 | 0.144 | 0.025 |\n| VideoPoet (I2V) | 20.30 | 0.141 | 0.126 | 0.087 | 0.012 |\n| Gen 3 (I2V) | 22.80 | 0.201 | 0.115 | 0.116 | 0.015 |\n| Wan2.1 (I2V) | 20.89 | 0.153 | 0.100 | 0.112 | 0.023 |\n| Sora (I2V) | 10.00 | 0.138 | 0.047 | 0.063 | 0.030 |\n| **GroundTruth** | **100.0** | **0.678** | **0.535** | **0.577** | **0.002** |\n\n\n## 5. 如何运行\n\n### 环境准备\n\n我们提供了两种运行 MAGI-1 的方式，推荐使用 Docker 环境。\n\n**使用 Docker 环境运行（推荐）**\n\n```bash\ndocker pull sandai\u002Fmagi:latest\n\ndocker run -it --gpus all --privileged --shm-size=32g --name magi --net=host --ipc=host --ulimit memlock=-1 --ulimit stack=6710886 sandai\u002Fmagi:latest \u002Fbin\u002Fbash\n```\n\n**使用源代码运行**\n\n```bash\n# 创建新环境\nconda create -n magi python==3.10.12\n\n# 安装 PyTorch\nconda install pytorch==2.4.0 torchvision==0.19.0 torchaudio==2.4.0 pytorch-cuda=12.4 -c pytorch -c nvidia\n\n# 安装其他依赖\npip install -r requirements.txt\n\n# 安装 FFmpeg\nconda install -c conda-forge ffmpeg=4.4\n\n# 对于基于 Hopper 架构的 GPU（如 H100\u002FH800），建议安装 MagiAttention（https:\u002F\u002Fgithub.com\u002FSandAI-org\u002FMagiAttention）以加速。对于非 Hopper 架构的 GPU，无需安装 MagiAttention。\ngit clone git@github.com:SandAI-org\u002FMagiAttention.git\ncd MagiAttention\ngit submodule update --init --recursive\npip install --no-build-isolation .\n```\n\n### 推理命令\n\n要运行 `MagiPipeline`，可以通过修改 `example\u002F24B\u002Frun.sh` 或 `example\u002F4.5B\u002Frun.sh` 脚本中的参数来控制输入和输出。以下是关键参数的说明：\n\n#### 参数说明\n\n- `--config_file`: 指定配置文件路径，包含模型配置参数，例如 `example\u002F24B\u002F24B_config.json`。\n- `--mode`: 指定运行模式。可选值包括：\n  - `t2v`: 文本到视频\n  - `i2v`: 图像到视频\n  - `v2v`: 视频到视频\n- `--prompt`: 用于视频生成的文本提示，例如 `\"Good Boy\"`。\n- `--image_path`: 图像文件路径，仅在 `i2v` 模式下使用。\n- `--prefix_video_path`: 前缀视频文件路径，仅在 `v2v` 模式下使用。\n- `--output_path`: 生成的视频文件保存路径。\n\n#### Bash 脚本\n\n```bash\n#!\u002Fbin\u002Fbash\n# 运行 24B MAGI-1 模型\nbash example\u002F24B\u002Frun.sh\n\n# 运行 4.5B MAGI-1 模型\nbash example\u002F4.5B\u002Frun.sh\n```\n\n#### 自定义参数\n\n您可以根据需要修改 `run.sh` 中的参数。例如：\n\n- 如果要使用图像到视频模式 (`i2v`)，将 `--mode` 设置为 `i2v` 并提供 `--image_path`：\n  ```bash\n  --mode i2v \\\n  --image_path example\u002Fassets\u002Fimage.jpeg \\\n  ```\n\n- 如果要使用视频到视频模式 (`v2v`)，将 `--mode` 设置为 `v2v` 并提供 `--prefix_video_path`：\n  ```bash\n  --mode v2v \\\n  --prefix_video_path example\u002Fassets\u002Fprefix_video.mp4 \\\n  ```\n\n通过调整这些参数，您可以灵活地控制输入和输出，以满足不同的需求。
\n\n### 一些有用的配置（针对 config.json）\n\n> [!NOTE]\n>\n> - 如果您使用 RTX 4090 \\* 8 运行 24B 模型，请设置 `pp_size: 2, cp_size: 4`。\n>\n> - 我们的模型支持任意分辨率。为了加速推理过程，4.5B 模型的默认分辨率在 `4.5B_config.json` 中被设置为 720×720。\n\n| 配置 | 说明 |\n| --- | --- |\n| seed | 用于视频生成的随机种子 |\n| video_size_h | 视频的高度 |\n| video_size_w | 视频的宽度 |\n| num_frames | 控制生成视频的时长 |\n| fps | 每秒帧数，4 帧视频对应 1 个潜在帧 |\n| cfg_number | 基础模型使用 cfg_number=3，蒸馏和量化模型使用 cfg_number=1 |\n| load | 包含模型检查点的目录 |\n| t5_pretrained | 预训练 T5 模型的加载路径 |\n| vae_pretrained | 预训练 VAE 模型的加载路径 |\n\n## 6. 提示词增强\n\n为了提升提示词质量，我们提供了一个 [Dify DSL](\u002Fassets\u002Fprompt_enhancement_dify_dsl.yml) 文件，可以直接导入 [Dify](https:\u002F\u002Fdify.ai\u002F) 来设置提示词增强流程。如果您是 Dify 的新手，请参阅 [如何从 DSL 文件创建应用](https:\u002F\u002Fdocs.dify.ai\u002Fen\u002Fguides\u002Fapplication-orchestrate\u002Fcreating-an-application#creating-from-a-dsl-file) 以开始使用。\n\n## 7. 许可证\n\n本项目采用 Apache License 2.0 许可证——详情请参阅 [LICENSE](LICENSE) 文件。\n\n## 8. 引用\n\n如果您在研究中使用了我们的代码或模型，请引用以下文献：\n\n```bibtex\n@misc{ai2025magi1autoregressivevideogeneration,\n      title={MAGI-1: Autoregressive Video Generation at Scale},\n      author={Sand. ai and Hansi Teng and Hongyu Jia and Lei Sun and Lingzhi Li and Maolin Li and Mingqiu Tang and Shuai Han and Tianning Zhang and W. Q. Zhang and Weifeng Luo and Xiaoyang Kang and Yuchen Sun and Yue Cao and Yunpeng Huang and Yutong Lin and Yuxin Fang and Zewei Tao and Zheng Zhang and Zhongshu Wang and Zixun Liu and Dai Shi and Guoli Su and Hanwen Sun and Hong Pan and Jie Wang and Jiexin Sheng and Min Cui and Min Hu and Ming Yan and Shucheng Yin and Siran Zhang and Tingting Liu and Xianping Yin and Xiaoyu Yang and Xin Song and Xuan Hu and Yankai Zhang and Yuqiao Li},\n      year={2025},\n      eprint={2505.13211},\n      archivePrefix={arXiv},\n      primaryClass={cs.CV},\n      url={https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.13211},\n}\n```\n\n## 9. 联系方式\n\n如果您有任何问题，欢迎随时提交 issue 或通过电子邮件 [research@sand.ai](mailto:research@sand.ai) 与我们联系。","# MAGI-1 快速上手指南\n\nMAGI-1 是一款基于自回归架构的大规模视频生成模型，支持文生视频（T2V）和图生视频（I2V），具备出色的时间一致性和流式生成能力。本指南将帮助您快速部署并运行该模型。\n\n## 1. 环境准备\n\n### 系统要求\n- **操作系统**: Linux (推荐 Ubuntu 20.04+)\n- **GPU**: \n  - **24B 模型**: 推荐 8x H100\u002FH800；量化版需 4x H100\u002FH800 或 8x RTX 4090。\n  - **4.5B 模型**: 单张 RTX 4090 (24GB 显存) 即可运行基础版。\n  - **低显存方案**: 若显存低于 24GB (最低 12GB)，请使用 `4.5B-distill+fp8_quant` 模型，并修改配置文件中的 `window_size` 为 1。\n- **CUDA**: 12.1 或更高版本\n- **Python**: 3.10+\n\n### 前置依赖\n确保已安装 Git 和 Conda (推荐使用 Miniconda)。
\n\n## 2. 安装步骤\n\n### 步骤 1: 克隆仓库\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002FSandAI-org\u002FMAGI-1.git\ncd MAGI-1\n```\n\n### 步骤 2: 创建虚拟环境并安装依赖\n```bash\nconda create -n magi python=3.10 -y\nconda activate magi\n\n# 安装 PyTorch (根据 CUDA 版本调整，此处以 CUDA 12.1 为例)\npip install torch torchvision torchaudio --index-url https:\u002F\u002Fdownload.pytorch.org\u002Fwhl\u002Fcu121\n\n# 安装项目依赖\npip install -r requirements.txt\n```\n> **提示**: 国内用户可使用清华源加速 pip 安装：\n> `pip install -r requirements.txt -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple`\n\n### 步骤 3: 下载模型权重\n从 Hugging Face 下载所需模型权重至 `ckpt` 目录。以下以 **4.5B 蒸馏量化版** (适合单卡用户) 为例：\n\n```bash\n# 使用 huggingface-cli 下载 (需安装 huggingface_hub)\npip install huggingface_hub\n# 注意：--local-dir 指向仓库根目录，--include 的路径已含 ckpt\u002F 前缀\nhuggingface-cli download sand-ai\u002FMAGI-1 --local-dir . --include \"ckpt\u002Fmagi\u002F4.5B_distill_quant\u002F*\"\n```\n*注：还需下载 T5 编码器和 VAE 权重，分别位于 `ckpt\u002Ft5` 和 `ckpt\u002Fvae` 目录。*\n\n## 3. 基本使用\n\n以下是一个最简单的命令行推理示例，生成一段视频。\n\n### 配置低显存模式 (可选)\n如果您使用的是 12GB-16GB 显存的显卡运行量化模型，请先修改配置文件：\n编辑 `4.5B_distill_quant_config.json` (具体文件位置请以仓库实际为准)，将 `window_size` 设置为 `1`。\n\n### 运行推理\n官方通过 `example\u002F4.5B\u002Frun.sh` 脚本启动推理（24B 模型对应 `example\u002F24B\u002Frun.sh`）。先按需修改脚本中的参数，例如：\n\n```bash\n--config_file example\u002F4.5B\u002F4.5B_distill_quant_config.json \\\n--mode t2v \\\n--prompt \"A cyberpunk cat walking in the rain, neon lights reflecting on wet streets, cinematic lighting\" \\\n--output_path outputs\u002Fdemo.mp4 \\\n```\n\n然后执行：\n\n```bash\nbash example\u002F4.5B\u002Frun.sh\n```\n\n**参数说明：**\n- `--config_file`: 对应模型的配置文件路径 (具体文件名请以仓库实际为准)。\n- `--mode`: 运行模式，可选 `t2v`、`i2v`、`v2v`。\n- `--prompt`: 视频生成的提示词。\n- `--output_path`: 生成视频的保存路径。\n- 生成帧数与帧率由配置文件中的 `num_frames` 和 `fps` 控制 (MAGI-1 按块生成，每块 24 帧，建议将 `num_frames` 设为 24 的倍数)。\n\n生成完成后，视频文件将保存在 `--output_path` 指定的位置。\n\n---\n*更多高级功能（如 ComfyUI 集成、Dify 工作流、图生视频）请参考官方文档或 GitHub 仓库。*","某短视频广告团队需要在一天内为新款运动鞋生成 50 条不同风格的高质量动态展示视频，以进行多平台 A\u002FB 测试。\n\n### 没有 MAGI-1 时\n- **渲染耗时极长**：传统视频生成模型难以兼顾时长与分辨率，生成一条 5 秒高清视频往往需要数小时，无法在截稿前完成批量产出。\n- **动作连贯性差**：生成的视频中鞋子旋转或跳跃时容易出现画面闪烁、肢体扭曲等伪影，导致素材不可用，需人工逐帧修复。\n- **风格一致性难控**：在不同提示词下生成的视频光影和质感波动大，难以维持品牌统一的视觉调性，增加了后期剪辑的合成难度。\n- **算力成本高昂**：为了尝试不同效果需反复运行小模型，集群资源被低效占用，单次测试的算力开销远超预算。\n\n### 使用 MAGI-1 后\n- **规模化极速生成**：利用 MAGI-1 的自回归架构优势，团队可并行批量生成长序列高清视频，将原本几天的工作量压缩至数小时内完成。\n- **电影级画面稳定**：MAGI-1 显著提升了长视频的时间一致性，鞋面纹理与动态光影流畅自然，彻底消除了闪烁和形变，直出即可投放。\n- **精准风格控制**：通过统一的条件引导，MAGI-1 确保了 50 条视频在色调、景深和运动逻辑上的高度一致，大幅降低了后期统筹成本。\n- **资源效率倍增**：凭借可扩展的生成能力，MAGI-1 在单次推理中即可输出多样本结果，有效降低了单条视频的边际算力成本。\n\nMAGI-1 通过突破性的规模化自回归生成能力，将视频创作从“手工打磨”升级为“工业化量产”，让创意验证不再受限于时间与算力瓶颈。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSandAI-org_MAGI-1_d175d59f.png","SandAI-org","Sand.ai","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FSandAI-org_1dd70c5b.jpg","",null,"https:\u002F\u002Fsand.ai\u002F","https:\u002F\u002Fgithub.com\u002FSandAI-org",[83,87],{"name":84,"color":85,"percentage":86},"Python","#3572A5",99.2,{"name":88,"color":89,"percentage":90},"Shell","#89e051",0.8,3670,235,"2026-04-05T08:46:32","Apache-2.0","未说明","必需 NVIDIA GPU。24B 模型推荐 H100\u002FH800 × 8（量化版需 × 4 或 RTX 4090 × 8）；4.5B 基础\u002F蒸馏版推荐 RTX 4090 × 1（需 24GB+ 显存）；4.5B 量化版在特定配置下可运行于 12GB+ 显存 GPU。",{"notes":98,"python":95,"dependencies":99},"1. 提供多种模型尺寸：24B（需多卡集群）和 4.5B（单卡可用）。2. 4.5B 模型若显存受限（\u003C24GB），可使用 'distill+fp8_quant' 版本并将配置文件中的 'window_size' 设为 1，此时最低显存需求为 12GB。3. 项目包含 T5 文本编码器、VAE 及不同精度的主模型权重。4. 
支持 ComfyUI 自定义节点及 Dify DSL。",[95],[52,15],[102,103,104],"autoregressive","diffusion-models","video-generation","2026-03-27T02:49:30.150509","2026-04-08T17:36:15.258134",[108,113,117,122,127,132],{"id":109,"question_zh":110,"answer_zh":111,"source_url":112},24781,"MagiAttention 支持哪些 GPU 架构？可以在 A100 上运行吗？","目前 MagiAttention 仅支持 Hopper 架构的 GPU（如 H100）。在 A100（Ampere 架构）上运行时可能会收到警告或无法正常工作。开发团队计划在未来更新中扩大支持范围。","https:\u002F\u002Fgithub.com\u002FSandAI-org\u002FMAGI-1\u002Fissues\u002F13",{"id":114,"question_zh":115,"answer_zh":116,"source_url":112},24782,"进行图像到视频（i2v）生成时结果异常，可能是什么原因？","请检查配置中的 `cfg_num` 参数。该值应设置为 3，如果设置为 1 或 2 会导致生成结果奇怪。即使您认为已设置正确，也请再次确认完整配置文件中的该项。",{"id":118,"question_zh":119,"answer_zh":120,"source_url":121},24783,"构建 MagiAttention 时遇到 'ninja -v' 报错或 'configure.py missing' 错误如何解决？","这是一个已知的构建环境问题。即使安装了 GCC\u002FG++ 13 版本仍可能出现此错误。建议尝试以下方法：1. 确保 `ninja` 和 `setuptools` 是最新版本；2. 清理构建缓存（删除 build 目录）后重新运行安装命令；3. 社区用户反馈该问题复现率高，目前官方尚未发布预编译版本，需自行排查编译器环境兼容性。","https:\u002F\u002Fgithub.com\u002FSandAI-org\u002FMAGI-1\u002Fissues\u002F21",{"id":123,"question_zh":124,"answer_zh":125,"source_url":126},24784,"官方提供的 Docker 镜像 (sandai\u002Fmagi:latest) 是否可用？为什么拉取后无法运行源码？","当前的 Docker 镜像是一个包含 magi_attention 的基础 PyTorch 镜像（基于 NVIDIA Release 24.07, PyTorch 2.4.0a0）。如果直接拉取源码无法运行，可能是因为镜像环境与最新源码不匹配。建议使用镜像作为基础环境，并根据当前仓库的最新文档手动配置依赖，或者等待官方更新镜像以匹配最新代码。","https:\u002F\u002Fgithub.com\u002FSandAI-org\u002FMAGI-1\u002Fissues\u002F26",{"id":128,"question_zh":129,"answer_zh":130,"source_url":131},24785,"是否有计划开源模型的训练代码？","目前官方没有近期开源训练代码的计划。原因是训练代码与内部基础设施（数据中心和 GPU 云环境）高度耦合，如果不经过深度清理和重构，外部用户无法直接使用。","https:\u002F\u002Fgithub.com\u002FSandAI-org\u002FMAGI-1\u002Fissues\u002F9",{"id":133,"question_zh":134,"answer_zh":135,"source_url":112},24786,"运行示例时出现奇怪的结果，是否与提示词（prompt）有关？","通常不是提示词本身的问题（如 \"good boy\" 等），更多时候是由于硬件不兼容（如非 Hopper GPU）或关键配置参数（如 `cfg_num` 必须为 3）设置不当导致的。请优先检查硬件支持列表和核心配置项。",[]]