[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-filipstrand--mflux":3,"tool-filipstrand--mflux":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 
多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":70,"readme_en":71,"readme_zh":72,"quickstart_zh":73,"use_case_zh":74,"hero_image_url":75,"owner_login":76,"owner_name":77,"owner_avatar_url":78,"owner_bio":79,"owner_company":80,"owner_location":81,"owner_email":82,"owner_twitter":76,"owner_website":83,"owner_url":84,"languages":85,"stars":94,"forks":95,"last_commit_at":96,"license":97,"difficulty_score":23,"env_os":98,"env_gpu":99,"env_ram":100,"env_deps":101,"category_tags":108,"github_topics":109,"view_count":23,"oss_zip_url":79,"oss_zip_packed_at":79,"status":16,"created_at":121,"updated_at":122,"faqs":123,"releases":154},2223,"filipstrand\u002Fmflux","mflux","MLX native implementations of state-of-the-art generative image models","mflux 是一款专为苹果 Mac 用户打造的开源工具，让你能在本地轻松运行最先进的生成式图像模型。它通过将 Hugging Face 社区中流行的 Diffusers 和 Transformers 库里的顶级模型，逐行重写为原生适配 Apple MLX 框架的代码，解决了以往在 Mac 上运行大型 AI 绘图模型依赖复杂、效率低下或兼容性不佳的痛点。\n\n无论是想要快速体验最新 AI 绘图技术的开发者、需要本地部署模型进行研究的研究人员，还是希望在不依赖云端服务的情况下创作高质量图像的设计师，mflux 都能提供流畅的使用体验。其核心理念是“极简与明确”，代码库保持轻量，仅依赖必要的分词器，其余核心逻辑均从零基于 MLX 构建，不仅提升了运行效率，也方便用户深入理解模型原理。\n\n目前，mflux 已支持包括 Z-Image 和 FLUX.2 在内的多个前沿模型家族，涵盖从高速蒸馏版到高精度基础版的多种选择。用户可以通过简单的命令行指令或 Python 脚本，几行代码即可生成高分辨率图像。配合自动模型下载和量化加速等特性，mflux 让在 Mac 上本地玩转顶级 AI 绘图变得前所未有的简单高效。","mflux 是一款专为苹果 Mac 用户打造的开源工具，让你能在本地轻松运行最先进的生成式图像模型。它通过将 Hugging Face 社区中流行的 Diffusers 和 Transformers 库里的顶级模型，逐行重写为原生适配 Apple MLX 框架的代码，解决了以往在 Mac 上运行大型 AI 绘图模型依赖复杂、效率低下或兼容性不佳的痛点。\n\n无论是想要快速体验最新 AI 绘图技术的开发者、需要本地部署模型进行研究的研究人员，还是希望在不依赖云端服务的情况下创作高质量图像的设计师，mflux 都能提供流畅的使用体验。其核心理念是“极简与明确”，代码库保持轻量，仅依赖必要的分词器，其余核心逻辑均从零基于 MLX 构建，不仅提升了运行效率，也方便用户深入理解模型原理。\n\n目前，mflux 已支持包括 Z-Image 和 FLUX.2 在内的多个前沿模型家族，涵盖从高速蒸馏版到高精度基础版的多种选择。用户可以通过简单的命令行指令或 Python 脚本，几行代码即可生成高分辨率图像。配合自动模型下载和量化加速等特性，mflux 让在 Mac 上本地玩转顶级 AI 绘图变得前所未有的简单高效。","![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffilipstrand_mflux_readme_3234b994f78e.jpg)\n\n[![MFLUX](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Fmflux?label=MFLUX&logo=pypi&logoColor=white)](https:\u002F\u002Fpypi.org\u002Fproject\u002Fmflux\u002F)\n[![MLX](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Fmlx?label=MLX&logo=pypi&logoColor=white)](https:\u002F\u002Fpypi.org\u002Fproject\u002Fmlx\u002F)\n[![CI](https:\u002F\u002Fgithub.com\u002Ffilipstrand\u002Fmflux\u002Factions\u002Fworkflows\u002Ftests.yml\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Ffilipstrand\u002Fmflux\u002Factions\u002Fworkflows\u002Ftests.yml)\n\n### About\n\nRun the latest state-of-the-art generative image models locally on your Mac in native MLX!\n\n### Table of contents\n\n- [💡 Philosophy](#-philosophy)\n- [💿 Installation](#-installation)\n- [🎨 Models](#-models)\n- [✨ Features](#-features)\n- [🌱 Related projects](#related-projects)\n- [🙏 Acknowledgements](#-acknowledgements)\n- [⚖️ License](#%EF%B8%8F-license)\n\n---\n\n### 💡 Philosophy\n\nMFLUX is a line-by-line MLX port of several state-of-the-art generative image models from the [Huggingface Diffusers](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fdiffusers) and [Huggingface Transformers](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Ftransformers) libraries. All models are implemented from scratch in MLX, using only tokenizers from the [Huggingface Transformers](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Ftransformers) library. MFLUX is purposefully kept minimal and explicit, [@karpathy](https:\u002F\u002Fgist.github.com\u002Fawni\u002Fa67d16d50f0f492d94a10418e0592bde?permalink_comment_id=5153531#gistcomment-5153531) style.\n\n---\n\n### 💿 Installation\nIf you haven't already, [install `uv`](https:\u002F\u002Fgithub.com\u002Fastral-sh\u002Fuv?tab=readme-ov-file#installation), then run:\n\n```sh\nuv tool install --upgrade mflux\n```\n\nAfter installation, the following command shows all available MFLUX CLI commands:\n\n```sh\nuv tool list\n```\n\nTo generate your first image using, for example, the z-image-turbo model, run:\n\n```sh\nmflux-generate-z-image-turbo \\\n  --prompt \"A puffin standing on a cliff\" \\\n  --width 1280 \\\n  --height 500 \\\n  --seed 42 \\\n  --steps 9 \\\n  -q 8\n```\n\n![Puffin](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffilipstrand_mflux_readme_7db205cfa361.png)\n\nThe first time you run this, the model will download automatically, which can take some time. See the [model section](#-models) for the different options and features, and the [common README](src\u002Fmflux\u002Fmodels\u002Fcommon\u002FREADME.md) for shared CLI patterns and examples.\n\n\u003Cdetails>\n\u003Csummary>Python API\u003C\u002Fsummary>\n\nCreate a standalone `generate.py` script with inline `uv` dependencies:\n\n```python\n#!\u002Fusr\u002Fbin\u002Fenv -S uv run --script\n# \u002F\u002F\u002F script\n# requires-python = \">=3.10\"\n# dependencies = [\n#   \"mflux\",\n# ]\n# \u002F\u002F\u002F\nfrom mflux.models.z_image import ZImageTurbo\n\nmodel = ZImageTurbo(quantize=8)\nimage = model.generate_image(\n    prompt=\"A puffin standing on a cliff\",\n    seed=42,\n    num_inference_steps=9,\n    width=1280,\n    height=500,\n)\nimage.save(\"puffin.png\")\n```\n\nRun it with:\n\n```sh\nuv run generate.py\n```\n\nFor more Python API inspiration, look at the [CLI entry points](src\u002Fmflux\u002Fmodels\u002Fz_image\u002Fcli\u002Fz_image_turbo_generate.py) for the respective models.\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>⚠️ Troubleshooting: hf_transfer error\u003C\u002Fsummary>\n\nIf you encounter a `ValueError: Fast download using 'hf_transfer' is enabled (HF_HUB_ENABLE_HF_TRANSFER=1) but 'hf_transfer' package is not available`, you can install MFLUX with the `hf_transfer` package included:\n\n```sh\nuv tool install --upgrade mflux --with hf_transfer\n```\n\nThis will enable faster model downloads from Hugging Face.\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>DGX \u002F NVIDIA (uv tool install)\u003C\u002Fsummary>\n\n```sh\nuv tool install --python 3.13 mflux\n```\n\u003C\u002Fdetails>\n\n---\n
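The CLI flags above map one-to-one onto the `generate_image` keyword arguments in the Python snippet. As an illustrative sketch that goes slightly beyond the README (the seed values and output file names below are made up), the following script reuses a single `ZImageTurbo` instance across several seeds, so the one-time model download and weight loading are only paid once:

```python
#!/usr/bin/env -S uv run --script
# /// script
# requires-python = ">=3.10"
# dependencies = [
#   "mflux",
# ]
# ///
# Sketch: amortize model loading by reusing one instance across seeds.
# Uses only the ZImageTurbo API shown in the README above.
from mflux.models.z_image import ZImageTurbo

model = ZImageTurbo(quantize=8)  # 8-bit quantization, same as the CLI's -q 8

for seed in (42, 43, 44):  # illustrative seed values
    image = model.generate_image(
        prompt="A puffin standing on a cliff",
        seed=seed,
        num_inference_steps=9,
        width=1280,
        height=500,
    )
    image.save(f"puffin-{seed}.png")  # hypothetical output naming scheme
```

Holding every other parameter fixed while varying only the seed keeps the runs directly comparable when hunting for a composition you like.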
\n### 🎨 Models\n\nMFLUX supports the following model families. They have different strengths and weaknesses; see each model’s README for full usage details.\n\n| Model | Release date | Size | Type | Training | Description |\n| --- | --- | --- | --- | --- | --- |\n|[Z-Image](src\u002Fmflux\u002Fmodels\u002Fz_image\u002FREADME.md) | Nov 2025 | 6B | Distilled & Base | Yes | Fast, small, very good quality and realism. |\n|[FLUX.2](src\u002Fmflux\u002Fmodels\u002Fflux2\u002FREADME.md) | Jan 2026 | 4B & 9B | Distilled & Base | Yes | Fastest + smallest with very good quality and edit capabilities. |\n|[FIBO](src\u002Fmflux\u002Fmodels\u002Ffibo\u002FREADME.md) | Oct 2025+ | 8B | Distilled & Base | No | Very good JSON-based prompt understanding. Has edit capabilities. |\n|[SeedVR2](src\u002Fmflux\u002Fmodels\u002Fseedvr2\u002FREADME.md) | Jun 2025 | 3B & 7B | — | No | Best upscaling model. |\n|[Qwen Image](src\u002Fmflux\u002Fmodels\u002Fqwen\u002FREADME.md) | Aug 2025+ | 20B | Base | No | Large model (slower); strong prompt understanding and world knowledge. Has edit capabilities. |\n|[Depth Pro](src\u002Fmflux\u002Fmodels\u002Fdepth_pro\u002FREADME.md) | Oct 2024 | — | — | No | Very fast and accurate depth estimation model from Apple. |\n|[FLUX.1](src\u002Fmflux\u002Fmodels\u002Fflux\u002FREADME.md) | Aug 2024 | 12B | Distilled & Base | No (legacy) | Legacy option with decent quality. Has edit capabilities with the 'Kontext' model and upscaling support via ControlNet. |\n\n---\n
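Model size in the table above largely determines memory footprint, and quantization (the `-q` CLI flag, `quantize=` in Python) is the main lever for shrinking it. The sketch below times an 8-bit run against a full-precision run of the same prompt; note the assumption that omitting `quantize` yields full precision, which the README does not state explicitly:

```python
# Sketch: time an 8-bit quantized run against a full-precision run.
# Assumption: constructing ZImageTurbo() without `quantize` loads
# full-precision weights; only ZImageTurbo(quantize=8) appears in the README.
import time

from mflux.models.z_image import ZImageTurbo

def timed_run(model, label: str) -> None:
    start = time.perf_counter()
    image = model.generate_image(
        prompt="A puffin standing on a cliff",
        seed=42,
        num_inference_steps=9,
        width=1280,
        height=500,
    )
    image.save(f"puffin-{label}.png")
    print(f"{label}: {time.perf_counter() - start:.1f}s")

timed_run(ZImageTurbo(quantize=8), "q8")   # smaller memory footprint
timed_run(ZImageTurbo(), "full")           # assumed full-precision default
```

A user benchmark quoted in the FAQ further down observed little speed difference between quantization levels, so treat quantization primarily as a memory-footprint lever and measure on your own hardware.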
\n### ✨ Features\n\n**General**\n- Quantization and local model loading\n- LoRA support (multi-LoRA, scales, library lookup)\n- Metadata export + reuse, plus prompt file support\n\n**Model-specific highlights**\n- Text-to-image and image-to-image generation\n- LoRA finetuning\n- In-context editing, multi-image editing, and virtual try-on\n- ControlNet (Canny), depth conditioning, fill\u002Finpainting, and Redux\n- Upscaling (SeedVR2 and Flux ControlNet)\n- Depth map extraction and FIBO prompt tooling (VLM inspire\u002Frefine)\n\nSee the [common README](src\u002Fmflux\u002Fmodels\u002Fcommon\u002FREADME.md) for detailed usage and examples, and use the model section above to browse specific models and capabilities.\n\n> [!NOTE]\n> As MFLUX supports a wide variety of CLI tools and options, the easiest way to navigate the CLI in 2026 is to use a coding agent (like [Cursor](https:\u002F\u002Fcursor.com), [Claude Code](https:\u002F\u002Fwww.anthropic.com\u002Fclaude-code), or similar). Ask questions like: “Can you help me generate an image using z-image?”\n\n\n---\n
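The feature list above mentions prompt file support on the CLI side; if you would rather stay in Python, a do-it-yourself equivalent is easy to sketch with the documented API. The `prompts.txt` file name and one-prompt-per-line format are assumptions for illustration, not MFLUX's built-in prompt-file mechanism:

```python
# Sketch: a DIY batch loop that feeds prompts from a text file into the
# documented ZImageTurbo API. Not MFLUX's built-in prompt file support;
# the file name and one-prompt-per-line format are assumptions.
from pathlib import Path

from mflux.models.z_image import ZImageTurbo

model = ZImageTurbo(quantize=8)
prompts = [
    line.strip()
    for line in Path("prompts.txt").read_text().splitlines()
    if line.strip()  # ignore blank lines
]
for i, prompt in enumerate(prompts):
    image = model.generate_image(
        prompt=prompt,
        seed=42,  # fixed seed for reproducibility; vary per prompt if desired
        num_inference_steps=9,
        width=1280,
        height=500,
    )
    image.save(f"batch-{i:03d}.png")  # hypothetical output naming
```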
\n\u003Ca id=\"related-projects\">\u003C\u002Fa>\n\n### 🌱 Related projects\n\n- [MindCraft Studio](https:\u002F\u002Fthemindstudio.cc\u002Fmindcraft#models) by [@shaoju](https:\u002F\u002Fgithub.com\u002Fshaoju)\n- [Mflux-ComfyUI](https:\u002F\u002Fgithub.com\u002Fraysers\u002FMflux-ComfyUI) by [@raysers](https:\u002F\u002Fgithub.com\u002Fraysers)\n- [MFLUX-WEBUI](https:\u002F\u002Fgithub.com\u002FCharafChnioune\u002FMFLUX-WEBUI) by [@CharafChnioune](https:\u002F\u002Fgithub.com\u002FCharafChnioune)\n- [mflux-fasthtml](https:\u002F\u002Fgithub.com\u002Fanthonywu\u002Fmflux-fasthtml) by [@anthonywu](https:\u002F\u002Fgithub.com\u002Fanthonywu)\n- [mflux-streamlit](https:\u002F\u002Fgithub.com\u002Felitexp\u002Fmflux-streamlit) by [@elitexp](https:\u002F\u002Fgithub.com\u002Felitexp)\n\n---\n\n### 🙏 Acknowledgements\n\nMFLUX would not be possible without the great work of:\n\n- The MLX Team for [MLX](https:\u002F\u002Fgithub.com\u002Fml-explore\u002Fmlx) and [MLX examples](https:\u002F\u002Fgithub.com\u002Fml-explore\u002Fmlx-examples)\n- Black Forest Labs for the [FLUX project](https:\u002F\u002Fgithub.com\u002Fblack-forest-labs\u002Fflux)\n- Bria for the [FIBO project](https:\u002F\u002Fhuggingface.co\u002Fbriaai\u002FFIBO)\n- Tongyi Lab for the [Z-Image project](https:\u002F\u002Ftongyi-mai.github.io\u002FZ-Image-blog\u002F)\n- Qwen Team for the [Qwen Image project](https:\u002F\u002Fqwen.ai\u002Fblog?id=a6f483777144685d33cd3d2af95136fcbeb57652&from=research.research-list)\n- ByteDance, @numz and @adrientoupet for the [SeedVR2 project](https:\u002F\u002Fgithub.com\u002Fnumz\u002FComfyUI-SeedVR2_VideoUpscaler)\n- Hugging Face for the [Diffusers library implementations](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fdiffusers)\n- Depth Pro authors for the [Depth Pro model](https:\u002F\u002Fgithub.com\u002Fapple\u002Fml-depth-pro?tab=readme-ov-file#citation)\n- The MLX community and all [contributors and testers](https:\u002F\u002Fgithub.com\u002Ffilipstrand\u002Fmflux\u002Fgraphs\u002Fcontributors)\n\n---\n\n### ⚖️ License\n\nThis project is licensed under the [MIT License](LICENSE).\n","![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffilipstrand_mflux_readme_3234b994f78e.jpg)\n\n[![MFLUX](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Fmflux?label=MFLUX&logo=pypi&logoColor=white)](https:\u002F\u002Fpypi.org\u002Fproject\u002Fmflux\u002F)\n[![MLX](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Fmlx?label=MLX&logo=pypi&logoColor=white)](https:\u002F\u002Fpypi.org\u002Fproject\u002Fmlx\u002F)\n[![CI](https:\u002F\u002Fgithub.com\u002Ffilipstrand\u002Fmflux\u002Factions\u002Fworkflows\u002Ftests.yml\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Ffilipstrand\u002Fmflux\u002Factions\u002Fworkflows\u002Ftests.yml)\n\n### 关于\n\n在你的 Mac 上以原生 MLX 本地运行最新、最先进的生成式图像模型！\n\n### 目录\n\n- [💡 理念](#-philosophy)\n- [💿 安装](#-installation)\n- [🎨 模型](#-models)\n- [✨ 特性](#-features)\n- [🌱 相关项目](#related-projects)\n- [🙏 致谢](#-acknowledgements)\n- [⚖️ 许可证](#%EF%B8%8F-license)\n\n---\n\n### 💡 理念\n\nMFLUX 是对来自 [Huggingface Diffusers](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fdiffusers) 和 [Huggingface Transformers](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Ftransformers) 库中若干最先进的生成式图像模型的逐行 MLX 移植。所有模型均在 MLX 中从头实现，仅使用来自 [Huggingface Transformers](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Ftransformers) 库的分词器。MFLUX 有意保持极简和明确，风格类似 
[@karpathy](https:\u002F\u002Fgist.github.com\u002Fawni\u002Fa67d16d50f0f492d94a10418e0592bde?permalink_comment_id=5153531#gistcomment-5153531)。\n\n---\n\n### 💿 安装\n如果你还没有安装，先 [安装 `uv`](https:\u002F\u002Fgithub.com\u002Fastral-sh\u002Fuv?tab=readme-ov-file#installation)，然后运行：\n\n```sh\nuv tool install --upgrade mflux\n```\n\n安装完成后，以下命令会显示所有可用的 MFLUX CLI 命令：\n\n```sh\nuv tool list\n```\n\n例如，要使用 z-image-turbo 模型生成你的第一张图片，运行：\n\n```sh\nmflux-generate-z-image-turbo \\\n  --prompt \"一只海鹦站在悬崖上\" \\\n  --width 1280 \\\n  --height 500 \\\n  --seed 42 \\\n  --steps 9 \\\n  -q 8\n```\n\n![海鹦](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffilipstrand_mflux_readme_7db205cfa361.png)\n\n第一次运行时，模型会自动下载，这可能需要一些时间。请参阅 [模型部分](#-models) 以了解不同的选项和功能，以及 [通用 README](src\u002Fmflux\u002Fmodels\u002Fcommon\u002FREADME.md) 以获取共享的 CLI 模式和示例。\n\n\u003Cdetails>\n\u003Csummary>Python API\u003C\u002Fsummary>\n\n创建一个独立的 `generate.py` 脚本，并内嵌 `uv` 依赖项：\n\n```python\n#!\u002Fusr\u002Fbin\u002Fenv -S uv run --script\n# \u002F\u002F\u002F script\n# requires-python = \">=3.10\"\n# dependencies = [\n#   \"mflux\",\n# ]\n# \u002F\u002F\u002F\nfrom mflux.models.z_image import ZImageTurbo\n\nmodel = ZImageTurbo(quantize=8)\nimage = model.generate_image(\n    prompt=\"一只海鹦站在悬崖上\",\n    seed=42,\n    num_inference_steps=9,\n    width=1280,\n    height=500,\n)\nimage.save(\"puffin.png\")\n```\n\n运行它：\n\n```sh\nuv run generate.py\n```\n\n如需更多 Python API 的灵感，请查看相应模型的 [CLI 入口点](src\u002Fmflux\u002Fmodels\u002Fz_image\u002Fcli\u002Fz_image_turbo_generate.py)。\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>⚠️ 故障排除：hf_transfer 错误\u003C\u002Fsummary>\n\n如果你遇到 `ValueError: Fast download using 'hf_transfer' is enabled (HF_HUB_ENABLE_HF_TRANSFER=1) but 'hf_transfer' package is not available` 错误，你可以安装包含 `hf_transfer` 包的 MFLUX：\n\n```sh\nuv tool install --upgrade mflux --with hf_transfer\n```\n\n这样即可更快地从 Hugging Face 下载模型。\n\n\u003C\u002Fdetails>\n\n\u003Cdetails>\n\u003Csummary>DGX \u002F NVIDIA (uv 工具安装)\u003C\u002Fsummary>\n\n```sh\nuv tool install --python 3.13 mflux\n```\n\u003C\u002Fdetails>\n\n---\n\n### 🎨 模型\n\nMFLUX 支持以下模型系列。它们各有优劣；请参阅各模型的 README 以获取完整的使用说明。\n\n| 模型 | 发布日期 | 大小 | 类型 | 训练 | 描述 |\n| --- | --- | --- | --- | --- | --- |\n|[Z-Image](src\u002Fmflux\u002Fmodels\u002Fz_image\u002FREADME.md) | 2025年11月 | 6B | 蒸馏版 & 基础版 | 是 | 速度快、体积小，质量与逼真度极高。 |\n|[FLUX.2](src\u002Fmflux\u002Fmodels\u002Fflux2\u002FREADME.md) | 2026年1月 | 4B & 9B | 蒸馏版 & 基础版 | 是 | 最快且最小，质量极佳，并具备编辑能力。 |\n|[FIBO](src\u002Fmflux\u002Fmodels\u002Ffibo\u002FREADME.md) | 2025年10月+ | 8B | 蒸馏版 & 基础版 | 否 | 对基于 JSON 的提示理解能力极强。具备编辑能力。 |\n|[SeedVR2](src\u002Fmflux\u002Fmodels\u002Fseedvr2\u002FREADME.md) | 2025年6月 | 3B & 7B | — | 否 | 最佳的超分辨率模型。 |\n|[Qwen Image](src\u002Fmflux\u002Fmodels\u002Fqwen\u002FREADME.md) | 2025年8月+ | 20B | 基础版 | 否 | 大型模型（较慢）；强大的提示理解和世界知识。具备编辑能力。 |\n|[Depth Pro](src\u002Fmflux\u002Fmodels\u002Fdepth_pro\u002FREADME.md) | 2024年10月 | — | — | 否 | 来自 Apple 的非常快速且准确的深度估计模型。 |\n|[FLUX.1](src\u002Fmflux\u002Fmodels\u002Fflux\u002FREADME.md) | 2024年8月 | 12B | 蒸馏版 & 基础版 | 否（旧版） | 旧版选项，质量尚可。通过“Kontext”模型具备编辑能力，并可通过 ControlNet 进行超分辨率放大。 |\n\n---\n\n### ✨ 特性\n\n**通用**\n- 量化与本地模型加载\n- LoRA 支持（多 LoRA、缩放、库查找）\n- 元数据导出 + 重用，以及提示文件支持\n\n**模型特定亮点**\n- 文本到图像及图像到图像生成\n- LoRA 微调\n- 上下文编辑、多图像编辑和虚拟试穿\n- ControlNet（Canny）、深度条件、填充\u002F修复以及 Redux\n- 超分辨率（SeedVR2 和 Flux ControlNet）\n- 深度图提取和 FIBO 提示工具（VLM 启发\u002F优化）\n\n请参阅 [通用 README](src\u002Fmflux\u002Fmodels\u002Fcommon\u002FREADME.md) 以获取详细的使用方法和示例，并利用上面的模型部分浏览具体模型及其功能。\n\n> [!NOTE]\n> 由于 MFLUX 支持多种 CLI 工具和选项，2026 年上手这些 CLI 的最简单方式是使用编码助手（如 
[Cursor](https:\u002F\u002Fcursor.com)、[Claude Code](https:\u002F\u002Fwww.anthropic.com\u002Fclaude-code) 或类似工具）。你可以提问：“你能帮我用 z-image 生成一张图片吗？”\n\n\n---\n\n\u003Ca id=\"related-projects\">\u003C\u002Fa>\n\n### 🌱 相关项目\n\n- [MindCraft Studio](https:\u002F\u002Fthemindstudio.cc\u002Fmindcraft#models) 由 [@shaoju](https:\u002F\u002Fgithub.com\u002Fshaoju) 开发\n- [Mflux-ComfyUI](https:\u002F\u002Fgithub.com\u002Fraysers\u002FMflux-ComfyUI) 由 [@raysers](https:\u002F\u002Fgithub.com\u002Fraysers) 开发\n- [MFLUX-WEBUI](https:\u002F\u002Fgithub.com\u002FCharafChnioune\u002FMFLUX-WEBUI) 由 [@CharafChnioune](https:\u002F\u002Fgithub.com\u002FCharafChnioune) 开发\n- [mflux-fasthtml](https:\u002F\u002Fgithub.com\u002Fanthonywu\u002Fmflux-fasthtml) 由 [@anthonywu](https:\u002F\u002Fgithub.com\u002Fanthonywu) 开发\n- [mflux-streamlit](https:\u002F\u002Fgithub.com\u002Felitexp\u002Fmflux-streamlit) 由 [@elitexp](https:\u002F\u002Fgithub.com\u002Felitexp) 开发\n\n---\n\n### 🙏 致谢\n\nMFLUX 的实现离不开以下团队和个人的杰出工作：\n\n- MLX 团队，感谢他们开发的 [MLX](https:\u002F\u002Fgithub.com\u002Fml-explore\u002Fmlx) 及其 [MLX 示例](https:\u002F\u002Fgithub.com\u002Fml-explore\u002Fmlx-examples)\n- Black Forest Labs，感谢他们发起的 [FLUX 项目](https:\u002F\u002Fgithub.com\u002Fblack-forest-labs\u002Fflux)\n- Bria，感谢他们推出的 [FIBO 项目](https:\u002F\u002Fhuggingface.co\u002Fbriaai\u002FFIBO)\n- Tongyi Lab，感谢他们研发的 [Z-Image 项目](https:\u002F\u002Ftongyi-mai.github.io\u002FZ-Image-blog\u002F)\n- Qwen 团队，感谢他们推出的 [Qwen Image 项目](https:\u002F\u002Fqwen.ai\u002Fblog?id=a6f483777144685d33cd3d2af95136fcbeb57652&from=research.research-list)\n- 字节跳动、@numz 和 @adrientoupet，感谢他们开发的 [SeedVR2 项目](https:\u002F\u002Fgithub.com\u002Fnumz\u002FComfyUI-SeedVR2_VideoUpscaler)\n- Hugging Face，感谢他们提供的 [Diffusers 库实现](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fdiffusers)\n- Depth Pro 的作者们，感谢他们发布的 [Depth Pro 模型](https:\u002F\u002Fgithub.com\u002Fapple\u002Fml-depth-pro?tab=readme-ov-file#citation)\n- MLX 社区以及所有 [贡献者和测试者](https:\u002F\u002Fgithub.com\u002Ffilipstrand\u002Fmflux\u002Fgraphs\u002Fcontributors)\n\n---\n\n### ⚖️ 许可证\n\n本项目采用 [MIT 许可证](LICENSE) 进行授权。","# MFLUX 快速上手指南\n\nMFLUX 是一个基于 Apple MLX 框架的轻量级开源项目，旨在让 Mac 用户能够在本地原生运行最新的顶级生成式图像模型（如 FLUX、Z-Image 等）。它采用极简设计，代码逐行移植自 Hugging Face Diffusers，适合开发者快速体验和集成。\n\n## 环境准备\n\n*   **操作系统**：macOS (推荐 macOS Sonoma 或更高版本)\n*   **硬件要求**：Apple Silicon 芯片 (M1, M2, M3, M4 系列)。虽然支持 NVIDIA DGX，但本项目主要针对 Mac 优化。\n*   **前置依赖**：\n    *   Python >= 3.10\n    *   [`uv`](https:\u002F\u002Fgithub.com\u002Fastral-sh\u002Fuv)：现代化的 Python 包管理器和项目管理器。\n\n> **注意**：请确保已安装 `uv`。如果尚未安装，请访问 [uv 官方安装页面](https:\u002F\u002Fgithub.com\u002Fastral-sh\u002Fuv?tab=readme-ov-file#installation) 进行安装。国内用户若下载缓慢，可尝试配置国内镜像源或使用代理加速。\n\n## 安装步骤\n\n使用 `uv` 工具一键安装 MFLUX：\n\n```sh\nuv tool install --upgrade mflux\n```\n\n安装完成后，可通过以下命令查看可用的 CLI 命令列表：\n\n```sh\nuv tool list \n```\n\n> **提示**：首次运行生成命令时，模型会自动从 Hugging Face 下载。若遇到 `hf_transfer` 相关报错或希望加速下载，可使用以下命令重装以包含加速包：\n> ```sh\n> uv tool install --upgrade mflux --with hf_transfer\n> ```\n\n## 基本使用\n\n### 1. 命令行生成图片 (CLI)\n\n以下示例使用 `z-image-turbo` 模型生成一张图片。首次运行时会自动下载模型文件。\n\n```sh\nmflux-generate-z-image-turbo \\\n  --prompt \"A puffin standing on a cliff\" \\\n  --width 1280 \\\n  --height 500 \\\n  --seed 42 \\\n  --steps 9 \\\n  -q 8\n```\n\n*   `--prompt`: 提示词\n*   `--width` \u002F `--height`: 生成图片的分辨率\n*   `--steps`: 推理步数（数值越小速度越快）\n*   `-q 8`: 启用 8-bit 量化以节省显存并提升速度\n\n### 2. 
Python API 调用\n\n你也可以通过 Python 脚本直接调用。创建一个名为 `generate.py` 的文件，内容如下：\n\n```python\n#!\u002Fusr\u002Fbin\u002Fenv -S uv run --script\n# \u002F\u002F\u002F script\n# requires-python = \">=3.10\"\n# dependencies = [\n#   \"mflux\",\n# ]\n# \u002F\u002F\u002F\nfrom mflux.models.z_image import ZImageTurbo\n\nmodel = ZImageTurbo(quantize=8)\nimage = model.generate_image(\n    prompt=\"A puffin standing on a cliff\",\n    seed=42,\n    num_inference_steps=9,\n    width=1280,\n    height=500,\n)\nimage.save(\"puffin.png\")\n```\n\n运行脚本：\n\n```sh\nuv run generate.py\n```\n\n### 支持的主要模型\n\nMFLUX 支持多种模型家族，可根据需求选择：\n\n| 模型 | 特点 | 适用场景 |\n| :--- | :--- | :--- |\n| **Z-Image** | 速度快、体积小、画质逼真 | 通用高质量生成 |\n| **FLUX.2** | 极速、支持编辑功能 | 快速迭代与图像编辑 |\n| **FIBO** | 基于 JSON 的提示词理解 | 结构化提示词生成 |\n| **SeedVR2** | 最佳超分模型 | 图片高清放大 |\n| **Qwen Image** | 大参数量、强语义理解 | 复杂场景与世界知识生成 |\n\n更多高级功能（如 LoRA 微调、ControlNet、图生图等）请参考各模型对应的详细文档。","一位独立游戏开发者需要在配备 Apple Silicon 芯片的 MacBook Pro 上，快速迭代生成大量高分辨率的场景概念图以确立美术风格。\n\n### 没有 mflux 时\n- **硬件闲置与兼容困境**：Mac 强大的统一内存架构无法被充分利用，开发者被迫依赖昂贵的云端 GPU 实例或缓慢的 CPU 模拟来运行 FLUX.2 等先进模型。\n- **迭代周期漫长**：每次调整提示词测试新风格，都需经历漫长的排队等待和图像上传下载过程，严重打断创作心流。\n- **代码黑盒难定制**：现有的跨平台库封装过重，若想修改模型底层逻辑以适应特定游戏资产需求，往往面临复杂的依赖冲突和难以调试的“黑盒”问题。\n- **隐私与成本顾虑**：将未公开的游戏设定和创意提示词发送至第三方云服务存在泄露风险，且高频调用的云算力账单令人咋舌。\n\n### 使用 mflux 后\n- **原生性能释放**：mflux 基于 MLX 重写，让 Mac 本地直接满血运行 Z-Image 和 FLUX.2 模型，无需任何云端配置即可实现秒级出图。\n- **即时反馈循环**：开发者可在本地终端通过一行命令（如 `mflux-generate-z-image-turbo`）实时调整参数并预览结果，将创意验证时间从小时级压缩至分钟级。\n- **极简透明架构**：mflux 采用类似 Karpathy 风格的极简代码实现，去除了冗余封装，开发者可轻松阅读源码并针对游戏素材特性进行微调。\n- **数据完全私有**：所有生成过程均在本地完成，核心创意数据不出设备，同时彻底消除了云端推理的持续费用。\n\nmflux 通过将顶尖生成式模型原生移植到 Mac 生态，让个人开发者也能在本地享受企业级的 AI 绘图效率与自由度。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffilipstrand_mflux_3234b994.jpg","filipstrand","Filip Strand","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Ffilipstrand_74ec1ba2.jpg",null,"@Bria-AI","Stockholm, Sweden","strand.filip@gmail.com","filipstrand.com","https:\u002F\u002Fgithub.com\u002Ffilipstrand",[86,90],{"name":87,"color":88,"percentage":89},"Python","#3572A5",99.7,{"name":91,"color":92,"percentage":93},"Makefile","#427819",0.3,1962,133,"2026-04-04T11:28:29","MIT","macOS","非必需（依赖 Apple Silicon 芯片的 Mac 原生运行 MLX 框架；文档提及 DGX\u002FNVIDIA 支持但主要优化针对 Mac）","未说明（取决于所选模型大小，如 6B-20B 参数量模型通常需要较大内存）",{"notes":102,"python":103,"dependencies":104},"该工具专为在 Mac 上原生运行设计，基于 Apple 的 MLX 框架。虽然安装部分提到了 DGX\u002FNVIDIA 的安装命令，但核心哲学和主要功能描述均强调在 Mac 本地运行。首次运行会自动下载模型文件。建议使用 uv 工具进行安装和环境管理。",">=3.10",[105,67,106,107],"mlx","transformers","hf_transfer (可选)",[14,13,26,15],[110,111,112,113,114,115,105,106,116,117,118,119,120],"ai","apple-silicon","diffusers","flux","huggingface","ml","qwen","qwen-image","seedvr2","z-image","fibo","2026-03-27T02:49:30.150509","2026-04-06T07:13:34.913553",[124,129,134,139,144,149],{"id":125,"question_zh":126,"answer_zh":127,"source_url":128},10228,"MFLUX 是否支持 ComfyUI？如何集成？","是的，已有社区项目实现了 MFLUX 与 ComfyUI 的集成。您可以安装 [Mflux-ComfyUI](https:\u002F\u002Fgithub.com\u002Fraysers\u002FMflux-ComfyUI) 插件。注意：如果需要使用 0.4.1 版本新增的 img2img 功能，由于插件的 `requirements.txt` 可能未自动更新，您需要在 ComfyUI 的 Python 依赖中手动将 mflux 升级到 0.4.1 版本（例如运行 `pip install mflux==0.4.1`）。","https:\u002F\u002Fgithub.com\u002Ffilipstrand\u002Fmflux\u002Fissues\u002F56",{"id":130,"question_zh":131,"answer_zh":132,"source_url":133},10229,"MFLUX 是否支持 LoRA 训练？如何使用？","是的，LoRA 训练功能已在 v0.5.0 版本中合并到主分支并正式发布。用户可以使用默认的训练配置文件进行训练。训练完成后，生成的 LoRA 适配器文件（如 
`adapter.safetensors`）可以通过标准方式加载使用。如果在训练过程中遇到界面看似冻结的情况，通常是因为详细日志输出较少，实际上后台仍在正常进行训练。","https:\u002F\u002Fgithub.com\u002Ffilipstrand\u002Fmflux\u002Fissues\u002F78",{"id":135,"question_zh":136,"answer_zh":137,"source_url":138},10230,"除了命令行，是否有图形化界面（WebUI）可用？","有的，社区开发了多种图形界面方案。您可以选择：\n1. **Gradio**: 参考社区实现的代码或相关项目。\n2. **Streamlit**: 安装并运行基于 Streamlit 的 Web 应用，命令如下：\n   ```bash\n   pip install mflux-streamlit\n   mflux-streamlit\n   ```\n   这将启动一个本地 Web 服务，提供可视化的操作界面。","https:\u002F\u002Fgithub.com\u002Ffilipstrand\u002Fmflux\u002Fissues\u002F15",{"id":140,"question_zh":141,"answer_zh":142,"source_url":143},10231,"在 M4 Max MacBook Pro 上生成图像的速度大概是多少？","根据用户在 M4 Max (128GB) 环境下的测试，使用 `schnell` 模型、2 步采样、1024x1024 分辨率生成图像，总耗时约为 10.56 秒（其中系统耗时占主要部分）。需要注意的是，在非量化模式下，不同配置的机器表现可能有所差异；而在量化模式下（如 4bit 对比 8bit），目前观察到的速度提升并不明显。","https:\u002F\u002Fgithub.com\u002Ffilipstrand\u002Fmflux\u002Fissues\u002F92",{"id":145,"question_zh":146,"answer_zh":147,"source_url":148},10232,"在 M2 Max 设备上运行 MFLUX 的性能表现如何？","在配备 96GB 内存的 2023 款 M2 Max 设备上，使用 2 步采样（steps 2）生成图像的总耗时大约在 25-26 秒左右。性能可能受硬盘读写速度影响，但开启磁盘加密对加载大模型的时间没有显著的优化效果。","https:\u002F\u002Fgithub.com\u002Ffilipstrand\u002Fmflux\u002Fissues\u002F6",{"id":150,"question_zh":151,"answer_zh":152,"source_url":153},10233,"运行 `mflux-generate-fibo` 时报错缺少 tokenizer 后端怎么办？","该错误通常是因为缺少必要的依赖库导致无法实例化分词器。即使安装了 `protobuf` 和 `sentencepiece` 仍报错时，请检查您的 Python 环境（特别是使用 `uv` 工具时）。确保在运行 `mflux-generate-fibo` 的工具环境中正确安装了 `transformers` 及其依赖的 `tokenizers` 库。如果是通过 `uv tool install` 安装的 mflux，可能需要显式添加这些依赖或等待官方修复该版本的依赖声明。","https:\u002F\u002Fgithub.com\u002Ffilipstrand\u002Fmflux\u002Fissues\u002F388",[155,160,165,170,175,180,185,190,195,200,205,210,215,220,225,230,235,240,245,250],{"id":156,"version":157,"summary_zh":158,"released_at":159},107469,"v.0.17.4","### 🐛 Bug Fixes\n\n- **Z-Image PEFT\u002FModelScope LoRA keys**: Extend the Z-Image LoRA mapping with `.default` tensor name variants so adapters in PEFT\u002FModelScope layouts (for example Tongyi-MAI exports) resolve and apply correctly instead of matching zero weights.\n\n### 👩‍💻 Contributors\n\n- **@filipstrand**\n\n---","2026-03-28T14:44:40",{"id":161,"version":162,"summary_zh":163,"released_at":164},107470,"v.0.17.3","### 🐛 Bug Fixes\n\n- **FLUX.2 edit guidance metadata**: Preserve the requested guidance value for FLUX.2 Klein base image-edit runs so `mflux-info` and saved metadata report the actual guidance used instead of always showing `1.0`.\n\n### 👩‍💻 Contributors\n\n- **@filipstrand**\n\n---","2026-03-27T10:16:05",{"id":166,"version":167,"summary_zh":168,"released_at":169},107471,"v.0.17.2","### 🐛 Bug Fixes\n\n- **Shared tokenizer cache resolution**: Fix Hugging Face tokenizer resolution when a repo is only partially cached locally, preserving offline-first behavior for valid cached layouts while retrying ambiguous cached primaries once before surfacing real load errors.\n\n### 🧰 DX & Maintenance\n\n- **Tokenizer resolution coverage**: Expand shared tokenizer-resolution regression tests to cover root-layout tokenizers, fallback edge cases, and refresh failure handling.\n\n### 👩‍💻 Contributors\n\n- **@filipstrand**\n\n---","2026-03-23T13:08:20",{"id":171,"version":172,"summary_zh":173,"released_at":174},107472,"v.0.17.1","### 🐛 Bug Fixes\n\n- **Hugging Face tokenizer dependencies**: Declare `protobuf` so minimal installs (including `uv tool install mflux`) include packages Transformers may require when loading tokenizers, fixing failures such as `mflux-generate-fibo` when the tokenizer falls back off the fast 
path.\n\n### 👩‍💻 Contributors\n\n- **@filipstrand**\n\n---","2026-03-21T23:32:21",{"id":176,"version":177,"summary_zh":178,"released_at":179},107473,"v.0.17.0","### 🎨 New Model Support\n\n- **FIBO Edit**: Add image-editing support for the FIBO model family.\n- **FIBO Edit remove-background workflow**: Support the dedicated remove-background edit path for FIBO.\n\n### ✨ Improvements\n\n- **Training image scaling**: Scale training images by area rather than longest side for more consistent preprocessing.\n- **MLX 0.31.x**: Allow MLX 0.31.x in dependency ranges.\n- **FLUX.2 LoRA mapping**: Expand LoRA key mapping coverage for FLUX.2.\n\n### 🐛 Bug Fixes\n\n- **Training optimizer state**: Evaluate optimizer state after each training step as intended.\n- **Local tokenizer loading**: Fix loading tokenizers from local paths.\n- **Dynamic-resolution image edit**: Restore correct behavior for image edit when using dynamic resolution.\n\n### 👩‍💻 Contributors\n\n- **@filipstrand**\n- **@icelaglace**\n- **@TheOrsa**\n- **@waldheinz**\n\n---","2026-03-20T16:03:50",{"id":181,"version":182,"summary_zh":183,"released_at":184},107474,"v.0.16.9","### ✨ Improvements\n\n- **Broader LoRA compatibility for FLUX.2 and Z-Image**: Expand LoRA mapping coverage so more adapter key layouts resolve cleanly for FLUX.2 and Z-Image models.\n\n### 👩‍💻 Contributors\n\n- **@filipstrand**\n\n---","2026-03-07T11:54:46",{"id":186,"version":187,"summary_zh":188,"released_at":189},107475,"v.0.16.8","### ✨ Improvements\n\n- **Local-model LoRA training**: Allow LoRA training to work when the base model is supplied from a local path, including the FLUX.2 and Z-Image training adapters.\n\n### 📝 Documentation\n\n- **Distilled-model step defaults**: Clarify CLI guidance so examples prefer model default inference steps unless the user intentionally overrides them.\n\n### 👩‍💻 Contributors\n\n- **@waldheinz**\n\n---","2026-03-06T11:44:43",{"id":191,"version":192,"summary_zh":193,"released_at":194},107476,"v.0.16.7","### 🎨 New Model Support\n\n- **FIBO-Lite support**: Add support for the FIBO-Lite model variant.\n\n### 🐛 Bug Fixes\n\n- **FLUX.2 edit downsampling extents**: Fix downsampling in FLUX.2 edit paths so image extents are preserved.\n\n### 👩‍💻 Contributors\n\n- **@filipstrand**\n\n---","2026-03-02T20:46:40",{"id":196,"version":197,"summary_zh":198,"released_at":199},107477,"v.0.16.6","### ✨ Improvements\n\n- **SeedVR2 7B support**: Add support for the SeedVR2 7B upscaler variant.\n- **Qwen-Image parity with diffusers**: Align Qwen-Image behavior more closely with the diffusers reference implementation.\n- **FIBO scheduler default**: Default FIBO `generate_image` to `flow_match_euler_discrete`.\n\n### 🧰 DX & Maintenance\n\n- **Repo tooling cleanup**: Remove unused Cursor command wrappers from the repository.\n- **SeedVR2 7B test coverage**: Add image test support for the new SeedVR2 7B path.\n\n### 👩‍💻 Contributors\n\n- **@ciaranbor**\n- **@icelaglace**\n- **@filipstrand**\n\n---","2026-02-20T19:45:09",{"id":201,"version":202,"summary_zh":203,"released_at":204},107478,"v.0.16.5","### ✨ Improvements\n\n- **FLUX.2 Klein img2img CLI parity**: Add `--image-path` and `--image-strength` to `mflux-generate-flux2`, enabling init-image driven generation with the same CLI pattern used in other generators.\n- **MLX cache control**: Add `--mlx-cache-limit-gb` to cap MLX cache usage without requiring full `--low-ram` mode.\n\n### 📝 Documentation\n\n- **Common CLI docs**: Document `--mlx-cache-limit-gb` behavior and usage in the shared model 
README.\n\n### 👩‍💻 Contributors\n\n- **@terribilissimo**\n- **@icelaglace**\n\n---","2026-02-17T12:00:11",{"id":206,"version":207,"summary_zh":208,"released_at":209},107479,"v.0.16.4","### 🐛 Bug Fixes\n\n- **Training preview stability**: Always offload optimizer state during preview generation to avoid memory pressure and improve preview reliability.\n- **Apple Silicon compile guard**: Narrow the M1\u002FM2 compile fallback so it excludes Max and Ultra variants, preserving expected optimized behavior on those chips.\n\n---","2026-02-15T21:37:18",{"id":211,"version":212,"summary_zh":213,"released_at":214},107480,"v.0.16.3","### 🐛 Bug Fixes\n\n- **Z-Image training preview guidance**: Fix Z-Image (non-turbo) training previews so they use the configured guidance value instead of defaulting to 0.0, ensuring preview quality matches actual CFG behavior.\n- **FLUX.2 training preview guidance**: Fix FLUX.2 training previews (txt2img and edit) so they use the configured guidance value instead of forcing 1.0.\n\n---","2026-02-14T00:16:56",{"id":216,"version":217,"summary_zh":218,"released_at":219},107481,"v.0.16.2","### 🐛 Bug Fixes\n\n- **Edit training preview fallback**: Fix edit auto-discovery runs (`*_in\u002F*_out`) with monitoring enabled so fallback preview prompts use an available input image instead of requiring explicit `data\u002Fpreview.*` files.\n\n### 📝 Documentation\n\n- **FLUX.2 training guide**: Expand the FLUX.2 LoRA training example documentation with richer guidance and examples.\n\n---","2026-02-12T19:23:43",{"id":221,"version":222,"summary_zh":223,"released_at":224},107482,"v.0.16.1","### 🐛 Performance regression fixes\n\n- **M1\u002FM2 inference performance fallback**: Disable model-level `mx.compile` prediction wrappers for Z-Image and FLUX.2 on Apple M1\u002FM2 to avoid observed 0.16 regressions on older Apple Silicon while preserving compiled paths on newer chips.\n\n---","2026-02-11T18:28:24",{"id":226,"version":227,"summary_zh":228,"released_at":229},107483,"v.0.16.0","### ✨ Improvements\n\n- **Completely rewritten training system**: Rebuild LoRA training end-to-end, replacing the DreamBooth-specific implementation with a new common training stack (dataset, state, optimizer, runner, and statistics) shared across model families.\n- **New base-model support for training and inference**: Add support for `flux2-klein-base-4b`, `flux2-klein-base-9b`, and `z-image` (in addition to `z-image-turbo`) with dedicated FLUX.2 and Z-Image training adapters.\n- **Performance tuning**: Improve core scheduler\u002Fmodel execution paths used by FLUX.2 and Z-Image.\n\n### 🐛 Bug Fixes\n\n- **FLUX.2 Klein 9B text encoder overrides**: Fix override resolution\u002Fapplication in the FLUX.2 initializer\u002Fconfig flow.\n\n### 🧰 DX & Maintenance\n\n- **FLUX.1 legacy cleanup**: Remove legacy FLUX.1 image-generation tests\u002Fresources and retire unused helper tools.\n- **Dependency alignment**: Update install guidance for stable `transformers` 5.0 and refresh lockfile\u002Fdependency metadata.\n\n### 📝 Documentation\n\n- **Training docs refresh**: Expand and update training docs\u002FREADME sections for common training, FLUX.2, and Z-Image.\n- **Install troubleshooting**: Add troubleshooting guidance for `hf_transfer` installation issues.\n\n### 👩‍💻 Contributors\n\n- **Filip Strand (@filipstrand)**\n- **Xin (@q3g)**\n\n---","2026-02-11T15:09:02",{"id":231,"version":232,"summary_zh":233,"released_at":234},107484,"v.0.15.5","### ✨ Improvements\n\n- **SeedVR2 directory input**: Allow passing a folder 
to `--image-path` to upscale all images inside.\n\n### 🧰 DX & Maintenance\n\n- **Model porting guidance**: Require model README entries in the porting workflow.\n\n### 📝 Documentation\n\n- **SeedVR2 usage**: Document directory upscaling with CLI and Python API examples.\n- **CLI docs**: Add Python API sections and improve Z-Image Turbo entry-point links.\n\n---","2026-01-26T12:41:47",{"id":236,"version":237,"summary_zh":238,"released_at":239},107485,"v.0.15.4","### ✨ Improvements\n\n- **Flux2 LoRA aliasing**: Add key aliases for `base_model` prefixes to improve LoRA resolution across configs.\n\n### 📝 Documentation\n\n- **Agent guidance**: Clarify skill references for Cursor agents.\n\n---","2026-01-20T15:39:29",{"id":241,"version":242,"summary_zh":243,"released_at":244},107486,"v.0.15.3","### 🐛 Bug Fixes\n\n- **Flux2 Klein local path**: Fix errors when using a local FLUX.2-klein-9B path in `mflux-save` and `mflux-generate-flux2`.\n\n---","2026-01-19T22:55:01",{"id":246,"version":247,"summary_zh":248,"released_at":249},107487,"v.0.15.2","### 🐛 Bug Fixes\n\n- **Flux2 edit (low-ram)**: Normalize tiled VAE latents to 4D before patchifying to avoid shape errors.\n\n---","2026-01-19T00:09:19",{"id":251,"version":252,"summary_zh":253,"released_at":254},107488,"v.0.15.1","### 🐛 Bug Fixes\n\n- **PyPI metadata**: Removed invalid architecture classifier that blocked uploads (`Architecture :: AArch64`).\n\n---","2026-01-18T23:29:35"]