[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-stepfun-ai--Step1X-Edit":3,"tool-stepfun-ai--Step1X-Edit":64},[4,17,26,40,48,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,2,"2026-04-03T11:11:01",[13,14,15],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":23,"last_commit_at":32,"category_tags":33,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,34,35,36,15,37,38,13,39],"数据工具","视频","插件","其他","语言模型","音频",{"id":41,"name":42,"github_repo":43,"description_zh":44,"stars":45,"difficulty_score":10,"last_commit_at":46,"category_tags":47,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,38,37],{"id":49,"name":50,"github_repo":51,"description_zh":52,"stars":53,"difficulty_score":10,"last_commit_at":54,"category_tags":55,"status":16},519,"PaddleOCR","PaddlePaddle\u002FPaddleOCR","PaddleOCR 是一款基于百度飞桨框架开发的高性能开源光学字符识别工具包。它的核心能力是将图片、PDF 等文档中的文字提取出来，转换成计算机可读取的结构化数据，让机器真正“看懂”图文内容。\n\n面对海量纸质或电子文档，PaddleOCR 解决了人工录入效率低、数字化成本高的问题。尤其在人工智能领域，它扮演着连接图像与大型语言模型（LLM）的桥梁角色，能将视觉信息直接转化为文本输入，助力智能问答、文档分析等应用场景落地。\n\nPaddleOCR 适合开发者、算法研究人员以及有文档自动化需求的普通用户。其技术优势十分明显：不仅支持全球 100 多种语言的识别，还能在 Windows、Linux、macOS 等多个系统上运行，并灵活适配 CPU、GPU、NPU 等各类硬件。作为一个轻量级且社区活跃的开源项目，PaddleOCR 
既能满足快速集成的需求，也能支撑前沿的视觉语言研究，是处理文字识别任务的理想选择。",74913,"2026-04-05T10:44:17",[38,14,13,37],{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":23,"last_commit_at":62,"category_tags":63,"status":16},2471,"tesseract","tesseract-ocr\u002Ftesseract","Tesseract 是一款历史悠久且备受推崇的开源光学字符识别（OCR）引擎，最初由惠普实验室开发，后由 Google 维护，目前由全球社区共同贡献。它的核心功能是将图片中的文字转化为可编辑、可搜索的文本数据，有效解决了从扫描件、照片或 PDF 文档中提取文字信息的难题，是数字化归档和信息自动化的重要基础工具。\n\n在技术层面，Tesseract 展现了强大的适应能力。从版本 4 开始，它引入了基于长短期记忆网络（LSTM）的神经网络 OCR 引擎，显著提升了行识别的准确率；同时，为了兼顾旧有需求，它依然支持传统的字符模式识别引擎。Tesseract 原生支持 UTF-8 编码，开箱即用即可识别超过 100 种语言，并兼容 PNG、JPEG、TIFF 等多种常见图像格式。输出方面，它灵活支持纯文本、hOCR、PDF、TSV 等多种格式，方便后续数据处理。\n\nTesseract 主要面向开发者、研究人员以及需要构建文档处理流程的企业用户。由于它本身是一个命令行工具和库（libtesseract），不包含图形用户界面（GUI），因此最适合具备一定编程能力的技术人员集成到自动化脚本或应用程序中",73286,"2026-04-03T01:56:45",[13,14],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":75,"owner_avatar_url":76,"owner_bio":77,"owner_company":78,"owner_location":78,"owner_email":79,"owner_twitter":78,"owner_website":78,"owner_url":80,"languages":81,"stars":90,"forks":91,"last_commit_at":92,"license":93,"difficulty_score":10,"env_os":77,"env_gpu":94,"env_ram":95,"env_deps":96,"category_tags":103,"github_topics":104,"view_count":108,"oss_zip_url":78,"oss_zip_packed_at":78,"status":16,"created_at":109,"updated_at":110,"faqs":111,"releases":142},171,"stepfun-ai\u002FStep1X-Edit","Step1X-Edit","A SOTA open-source image editing model, which aims to provide comparable performance against the closed-source models like GPT-4o and Gemini 2 Flash.","Step1X-Edit 是一款开源的图像编辑 AI 模型，目标是达到与 GPT-4o、Gemini 2 Flash 等闭源大模型相当的图像理解与编辑能力。它能根据自然语言指令精准修改图片内容，比如“把沙发换成蓝色”或“在背景加一座山”，同时保持画面协调和细节真实。对于当前许多开源工具难以兼顾语义理解与视觉质量的问题，Step1X-Edit 通过引入“推理+反思”机制，在复杂编辑任务中表现更稳定、准确。\n\n适合对图像生成与编辑有进阶需求的设计师、AI 研究者和开发者使用，普通用户也可通过 Hugging Face 或 Replicate 平台轻松体验。最新版本 Step1X-Edit-v1p2 支持“思考式编辑”，能先分析指令意图再执行修改，并在 KRIS-Bench 和 GEdit-Bench 评测中全面超越前代及多个主流模型。此外，社区推出的 RegionE 插件还能让推理速度提升 2.5 倍，几乎不损失精度，只需五行代码即可接入。项目提供完整模型、数据集和在线演示，欢迎加入 Discord 或微信社群交流使用心得。","\u003Cdiv align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstepfun-ai_Step1X-Edit_readme_76e02663483f.png\"  height=100>\n\u003C\u002Fdiv>\n\u003Cdiv align=\"center\">\n  \u003Ca href=\"https:\u002F\u002Fstep1x-edit.github.io\u002F\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fstatic\u002Fv1?label=Project%20Page&message=Web&color=green\">\u003C\u002Fa> &ensp;\n  \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.17761\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fstatic\u002Fv1?label=Step1X-Edit&message=Arxiv&color=red\">\u003C\u002Fa> &ensp;\n  \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.22625\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fstatic\u002Fv1?label=ReasonEdit&message=Arxiv&color=red\">\u003C\u002Fa> &ensp;\n  \u003Ca href=\"assets\u002FWeChat.jpg\">\n  \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fstatic\u002Fv1?label=WeChat&message=Add%20Me&color=green&logo=wechat&logoColor=white\">\n  \u003C\u002Fa>\n  \n  \u003Ca href=\"https:\u002F\u002Fhuggingface.co\u002Fstepfun-ai\u002FStep1X-Edit\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fstatic\u002Fv1?label=Model&message=HuggingFace&color=yellow\">\u003C\u002Fa> &ensp;\n  \u003Ca href=\"https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002Fstepfun-ai\u002FGEdit-Bench\">\u003Cimg 
src=\"https:\u002F\u002Fimg.shields.io\u002Fstatic\u002Fv1?label=GEdit-Bench&message=HuggingFace&color=yellow\">\u003C\u002Fa> &ensp;\n  [![Run on Replicate](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstepfun-ai_Step1X-Edit_readme_7dacf1cc5d87.png)](https:\u002F\u002Freplicate.com\u002Fzsxkib\u002Fstep1x-edit) &ensp;\n  \u003Ca href=\"https:\u002F\u002Fdiscord.gg\u002Fj3qzuAyn\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fstatic\u002Fv1?label=Discord%20Channel&message=Discord&color=purple\">\u003C\u002Fa> &ensp;\n\u003C\u002Fdiv>\n\n\n## 🔥🔥🔥 News!!\n* Dec 29, 2025: 🎉 [RegionE](https:\u002F\u002Fgithub.com\u002FPeyton-Chen\u002FRegionE) delivers a 2.5× speedup for Step1X-Edit inference with no accuracy degradation, achieved with just five lines of code.\n* Nov 26, 2025: 👋 We release [Step1X-Edit-v1p2](https:\u002F\u002Fhuggingface.co\u002Fstepfun-ai\u002FStep1X-Edit-v1p2) (referred to as **ReasonEdit-S** in the paper), a native reasoning edit model with better performance on KRIS-Bench and GEdit-Bench. Technical report can be found [here](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.22625).\n  \u003Ctable>\n  \u003Cthead>\n  \u003Ctr>\n    \u003Cth rowspan=\"2\">Models\u003C\u002Fth>\n    \u003Cth colspan=\"3\"> \u003Cdiv align=\"center\">GEdit-Bench\u003C\u002Fdiv> \u003C\u002Fth>\n    \u003Cth colspan=\"4\"> \u003Cdiv align=\"center\">Kris-Bench\u003C\u002Fdiv> \u003C\u002Fth>\n  \u003C\u002Ftr>\n  \u003Ctr>\n    \u003Cth>G_SC⬆️\u003C\u002Fth> \u003Cth>G_PQ⬆️ \u003C\u002Fth> \u003Cth>G_O⬆️\u003C\u002Fth> \u003Cth>FK⬆️\u003C\u002Fth> \u003Cth>CK⬆️\u003C\u002Fth> \u003Cth>PK⬆️ \u003C\u002Fth> \u003Cth>Overall⬆️\u003C\u002Fth>\n  \u003C\u002Ftr>\n  \u003C\u002Fthead>\n  \u003Ctbody>\n  \u003Ctr>  \n    \u003Ctd>Flux-Kontext-dev \u003C\u002Ftd> \u003Ctd>7.16\u003C\u002Ftd> \u003Ctd>7.37\u003C\u002Ftd> \u003Ctd>6.51\u003C\u002Ftd> \u003Ctd>53.28\u003C\u002Ftd> \u003Ctd>50.36\u003C\u002Ftd> \u003Ctd>42.53\u003C\u002Ftd> \u003Ctd>49.54\u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr>   \n    \u003Ctd>Qwen-Image-Edit-2509 \u003C\u002Ftd> \u003Ctd>8.00\u003C\u002Ftd> \u003Ctd>7.86\u003C\u002Ftd> \u003Ctd>7.56\u003C\u002Ftd> \u003Ctd>61.47\u003C\u002Ftd> \u003Ctd>56.79\u003C\u002Ftd> \u003Ctd>47.07\u003C\u002Ftd> \u003Ctd>56.15\u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr>\n    \u003Ctd>Step1X-Edit v1.1 \u003C\u002Ftd> \u003Ctd>7.66\u003C\u002Ftd> \u003Ctd>7.35\u003C\u002Ftd> \u003Ctd>6.97\u003C\u002Ftd> \u003Ctd>53.05\u003C\u002Ftd> \u003Ctd>54.34\u003C\u002Ftd> \u003Ctd>44.66\u003C\u002Ftd> \u003Ctd>51.59\u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr>\n    \u003Ctd>Step1x-edit-v1p2-preview \u003C\u002Ftd> \u003Ctd>8.14\u003C\u002Ftd> \u003Ctd>7.55\u003C\u002Ftd> \u003Ctd>7.42\u003C\u002Ftd> \u003Ctd>60.49\u003C\u002Ftd> \u003Ctd>58.81\u003C\u002Ftd> \u003Ctd>41.77\u003C\u002Ftd> \u003Ctd>52.51\u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr>\n    \u003Ctd>Step1x-edit-v1p2 (base) \u003C\u002Ftd> \u003Ctd>7.77\u003C\u002Ftd> \u003Ctd>7.65\u003C\u002Ftd> \u003Ctd>7.24\u003C\u002Ftd> \u003Ctd>58.23\u003C\u002Ftd> \u003Ctd>60.55\u003C\u002Ftd> \u003Ctd>46.21\u003C\u002Ftd> \u003Ctd>56.33\u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr>\n    \u003Ctd>Step1x-edit-v1p2 (thinking) \u003C\u002Ftd> \u003Ctd>8.02\u003C\u002Ftd> \u003Ctd>7.64\u003C\u002Ftd> \u003Ctd>7.36\u003C\u002Ftd> \u003Ctd>59.79\u003C\u002Ftd> \u003Ctd>62.76\u003C\u002Ftd> \u003Ctd>49.78\u003C\u002Ftd> \u003Ctd>58.64\u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr>\n    \u003Ctd>Step1x-edit-v1p2 (thinking + reflection) 
\u003C\u002Ftd> \u003Ctd>8.18\u003C\u002Ftd> \u003Ctd>7.85\u003C\u002Ftd> \u003Ctd>7.58\u003C\u002Ftd> \u003Ctd>62.44\u003C\u002Ftd> \u003Ctd>65.72\u003C\u002Ftd> \u003Ctd>50.42\u003C\u002Ftd> \u003Ctd>60.93\u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003C\u002Ftable>\n\n* Sep 08, 2025: 👋 We release [step1x-edit-v1p2-preview](https:\u002F\u002Fhuggingface.co\u002Fstepfun-ai\u002FStep1X-Edit-v1p2-preview), a new version of Step1X-Edit with reasoning edit ability and better performance (report to be released soon), featuring:\n  - Native Reasoning Edit Model: Combines instruction reasoning with reflective correction to handle complex edits more accurately. Performance on KRIS-Bench:\n    |    Models    |   Factual Knowledge ⬆️   |  Conceptual Knowledge ⬆️ | Procedural Knowledge ⬆️   |  Overall ⬆️ | \n    |:------------:|:------------:|:------------:| :------------:|:------------:| \n    | Step1X-Edit v1.1  | 53.05 |  54.34 | 44.66 | 51.59 |   \n    | Step1x-edit-v1p2-preview  | 60.49 | 58.81 | 41.77 | 52.51 | \n    | Step1x-edit-v1p2-preview (thinking)  | 62.24 | 62.25 | 44.43 | 55.21| \n    | Step1x-edit-v1p2-preview (thinking + reflection) | 62.94 |  61.82 |  44.08 |  55.64 | \n  - Improved image editing quality and better instruction-following performance. Performance on GEdit-Bench:\n    |     Models    |     G_SC ⬆️   |  G_PQ ⬆️ | G_O ⬆️   |  Q_SC ⬆️ | Q_PQ ⬆️   |  Q_O ⬆️ |\n    |:------------:|:------------:|:------------:| :------------:|:------------:| :------------:|:------------:|\n    | Step1X-Edit (v1.0)  |    7.13   | 7.00 |   6.44   | 7.39 |    7.28   | 7.07 | \n    | Step1X-Edit (v1.1)  |    7.66   | 7.35 |   6.97   | 7.65 |    7.41   | 7.35 | \n    | Step1x-edit-v1p2-preview  |    8.14   | 7.55 |   7.42   | 7.90 |   7.34   | 7.40   |\n* Jul 09, 2025: 👋 We’ve updated the step1x-edit model and released it as [step1x-edit-v1p1](https:\u002F\u002Fhuggingface.co\u002Fstepfun-ai\u002FStep1X-Edit) (for the diffusers version, see [here](https:\u002F\u002Fhuggingface.co\u002Fstepfun-ai\u002FStep1X-Edit-v1p1-diffusers)), featuring:\n  - Added support for text-to-image (T2I) generation tasks\n  - Improved image editing quality and better instruction-following performance.\n  Quantitative evaluation on GEdit-Bench-EN (Full set). G_SC, G_PQ, and G_O refer to the metrics evaluated by GPT-4.1, while Q_SC, Q_PQ, and Q_O refer to the metrics evaluated by Qwen2.5-VL-72B. To facilitate reproducibility, we have released the [intermediate results](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002FShiyu95\u002Fgedit_results) of our model evaluations.\n    |     Models    |     G_SC ⬆️   |  G_PQ ⬆️ | G_O ⬆️   |  Q_SC ⬆️ | Q_PQ ⬆️   |  Q_O ⬆️ |\n    |:------------:|:------------:|:------------:| :------------:|:------------:| :------------:|:------------:|\n    | Step1X-Edit (v1.0)  |    7.13   | 7.00 |   6.44   | 7.39 |    7.28   | 7.07 | \n    | Step1X-Edit (v1.1)  |    7.66   | 7.35 |   6.97   | 7.65 |    7.41   | 7.35 | \n* Jun 17, 2025: 👋 Support for TeaCache and parallel inference has been added.\n* May 22, 2025: 👋 Step1X-Edit now supports LoRA finetuning on a single 24GB GPU! A hand-fixing LoRA for anime characters has also been released. [Download LoRA](https:\u002F\u002Fhuggingface.co\u002Fstepfun-ai\u002FStep1X-Edit)\n* Apr 30, 2025: 🎉 Step1X-Edit ComfyUI Plugin is available now, thanks for the community contribution! 
[quank123wip\u002FComfyUI-Step1X-Edit](https:\u002F\u002Fgithub.com\u002Fquank123wip\u002FComfyUI-Step1X-Edit) & [raykindle\u002FComfyUI_Step1X-Edit](https:\u002F\u002Fgithub.com\u002Fraykindle\u002FComfyUI_Step1X-Edit).\n* Apr 27, 2025: 🎉 With community support, we update the inference code and model weights of Step1X-Edit-FP8. [meimeilook\u002FStep1X-Edit-FP8](https:\u002F\u002Fhuggingface.co\u002Fmeimeilook\u002FStep1X-Edit-FP8) & [rkfg\u002FStep1X-Edit-FP8](https:\u002F\u002Fhuggingface.co\u002Frkfg\u002FStep1X-Edit-FP8).\n* Apr 26, 2025: 🎉 Step1X-Edit is now live — you can try editing images directly in the online demo! [Online Demo](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fstepfun-ai\u002FStep1X-Edit)\n* Apr 25, 2025: 👋 We release the evaluation code and benchmark data of Step1X-Edit. [Download GEdit-Bench](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002Fstepfun-ai\u002FGEdit-Bench)\n* Apr 25, 2025: 👋 We release the inference code and model weights of Step1X-Edit. [ModelScope](https:\u002F\u002Fwww.modelscope.cn\u002Fmodels\u002Fstepfun-ai\u002FStep1X-Edit) & [HuggingFace](https:\u002F\u002Fhuggingface.co\u002Fstepfun-ai\u002FStep1X-Edit) models.\n* Apr 25, 2025: 👋 We have made our technical report available as open source. [Read](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.17761)\n\n\u003C!-- ## Image Edit Demos -->\n\n\n\u003C!-- ## 📑 Open-source Plan\n- [x] Inference & Checkpoints\n- [x] Online demo (Gradio)\n- [x] Fine-tuning scripts\n- [x] Multi-gpus Sequence Parallel inference\n- [x] FP8 Quantified weight\n- [x] ComfyUI\n- [x] Diffusers -->\n\n\n\n## 📖 Introduction\nWe introduce a state-of-the-art image editing model, **Step1X-Edit**, which aims to provide performance comparable to that of closed-source models such as GPT-4o and Gemini 2 Flash. \nMore specifically, we adopt a multimodal LLM to process the reference image and the user's editing instruction. A latent embedding is extracted and integrated with a diffusion image decoder to obtain the target image. To train the model, we build a data generation pipeline to produce a high-quality dataset. \nFor evaluation, we develop GEdit-Bench, a novel benchmark rooted in real-world user instructions. Experimental results on GEdit-Bench demonstrate that Step1X-Edit outperforms existing open-source baselines by a substantial margin and approaches the performance of leading proprietary models, thereby making significant contributions to the field of image editing. \nFor more details, please refer to our [technical report](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.17761).\n\n\u003Cdiv align=\"center\">\n\u003Cimg width=\"720\" alt=\"demo\" src=\"assets\u002Fimage_edit_demo.gif\">\n\u003Cp>\u003Cb>Step1X-Edit:\u003C\u002Fb> a unified image editing model that performs impressively on various genuine user instructions. \u003C\u002Fp>\n\u003C\u002Fdiv>\n\n\n## ⚡️ Quick Start\n1. Make sure you have `transformers==4.55.0` installed (the version we tested on)\n2. Install the `diffusers` package locally, according to the model version you want to use\n\n\n### Step1X-Edit-v1p2 (v1.2)\nInstall the `diffusers` package using the following command:\n```bash\ngit clone -b step1xedit_v1p2 https:\u002F\u002Fgithub.com\u002FPeyton-Chen\u002Fdiffusers.git\ncd diffusers\npip install -e .\n\npip install RegionE # optional, for faster inference\n```\nHere is an example of using the `Step1X-Edit-v1p2` model to edit images:\n```python\nimport torch\nfrom diffusers import Step1XEditPipelineV1P2\nfrom diffusers.utils import load_image\nfrom RegionE import RegionEHelper\n\npipe = Step1XEditPipelineV1P2.from_pretrained(\"stepfun-ai\u002FStep1X-Edit-v1p2\", torch_dtype=torch.bfloat16)\npipe.to(\"cuda\")\n\n# Wrap the pipeline with the RegionEHelper (optional speedup)\nregionehelper = RegionEHelper(pipe)\nregionehelper.set_params()   # default hyperparameters\nregionehelper.enable()\n\nprint(\"=== processing image ===\")\nimage = load_image(\"examples\u002F0000.jpg\").convert(\"RGB\")\nprompt = \"add a ruby pendant on the girl's neck.\"\nenable_thinking_mode=True\nenable_reflection_mode=True\npipe_output = pipe(\n    image=image,\n    prompt=prompt,\n    num_inference_steps=50,\n    true_cfg_scale=6,\n    generator=torch.Generator().manual_seed(42),\n    enable_thinking_mode=enable_thinking_mode,\n    enable_reflection_mode=enable_reflection_mode,\n)\nif enable_thinking_mode:\n    print(\"Reformat Prompt:\", pipe_output.reformat_prompt)\nfor image_idx in range(len(pipe_output.images)):\n    pipe_output.images[image_idx].save(f\"0001-{image_idx}.jpg\", lossless=True)\n    if enable_reflection_mode:\n        print(pipe_output.think_info[image_idx])\n        print(pipe_output.best_info[image_idx])\npipe_output.final_images[0].save(\"0001-final.jpg\", lossless=True)\n\nregionehelper.disable()\n```\nThe results look like:\n\u003Cdiv align=\"center\">\n\u003Cimg width=\"1080\" alt=\"results\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstepfun-ai_Step1X-Edit_readme_5edf724a689b.jpeg\">\n\u003C\u002Fdiv>\n\n### Step1X-Edit-v1p2-preview (v1.2-preview)\nInstall the `diffusers` package using the following command:\n```bash\ngit clone -b dev\u002FMergeV1-2 https:\u002F\u002Fgithub.com\u002FPeyton-Chen\u002Fdiffusers.git\ncd diffusers\npip install -e .\n```\n\nHere is an example of using the `Step1X-Edit-v1p2-preview` model to edit images:\n\n```python\nimport torch\nfrom diffusers import Step1XEditPipelineV1P2\nfrom diffusers.utils import load_image\npipe = Step1XEditPipelineV1P2.from_pretrained(\"stepfun-ai\u002FStep1X-Edit-v1p2-preview\", torch_dtype=torch.bfloat16)\npipe.to(\"cuda\")\nprint(\"=== processing image ===\")\nimage = load_image(\"examples\u002F0000.jpg\").convert(\"RGB\")\nprompt = \"add a ruby pendant on the girl's neck.\"\nenable_thinking_mode=True\nenable_reflection_mode=True\npipe_output = pipe(\n    image=image,\n    prompt=prompt,\n    num_inference_steps=28,\n    true_cfg_scale=4,\n    generator=torch.Generator().manual_seed(42),\n    enable_thinking_mode=enable_thinking_mode,\n    enable_reflection_mode=enable_reflection_mode,\n)\nif enable_thinking_mode:\n    print(\"Reformat Prompt:\", pipe_output.reformat_prompt)\nfor image_idx in range(len(pipe_output.images)):\n    pipe_output.images[image_idx].save(f\"0001-{image_idx}.jpg\", lossless=True)\n    if enable_reflection_mode:\n        print(pipe_output.think_info[image_idx])\n```\n\n\n### Step1X-Edit-v1p1 (v1.1)\nInstall the `diffusers` package using the following command:\n```bash\ngit clone -b step1xedit 
https:\u002F\u002Fgithub.com\u002FPeyton-Chen\u002Fdiffusers.git\ncd diffusers\npip install -e .\n```\n\nHere is an example of using the `Step1X-Edit-v1p1` model to edit images:\n```python\nimport torch\nfrom diffusers import Step1XEditPipeline\nfrom diffusers.utils import load_image\n\n\npipe = Step1XEditPipeline.from_pretrained(\"stepfun-ai\u002FStep1X-Edit-v1p1-diffusers\", torch_dtype=torch.bfloat16)\npipe.to(\"cuda\")\n\nprint(\"=== processing image ===\")\nimage = load_image(\"examples\u002F0000.jpg\").convert(\"RGB\")\nprompt = \"给这个女生的脖子上戴一个带有红宝石的吊坠。\"\nimage = pipe(\n    image=image,\n    prompt=prompt,\n    num_inference_steps=28,\n    size_level=1024,\n    guidance_scale=6.0,\n    generator=torch.Generator().manual_seed(42),\n).images[0]\nimage.save(\"0000.jpg\")\n```\n\nThe results will look like:\n\u003Cdiv align=\"center\">\n\u003Cimg width=\"1080\" alt=\"results\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstepfun-ai_Step1X-Edit_readme_de80c1eb02e9.png\">\n\u003C\u002Fdiv>\n\n\n## 🌟 Advanced Usage\nWe use the original [Step1X-Edit](https:\u002F\u002Fhuggingface.co\u002Fstepfun-ai\u002FStep1X-Edit) model as an example to demonstrate some advanced uses of the model. Other versions of the model may have different inference processes.\n\n### A1. Requirements\nWe test our model using torch==2.3.1 and torch==2.5.1 with CUDA 12.1.\nInstall the requirements:\n\n```bash\npip install -r requirements.txt\n```\n\nInstall [`flash-attn`](https:\u002F\u002Fgithub.com\u002FDao-AILab\u002Fflash-attention); we provide a script to help you find the pre-built wheel suitable for your system.\n\n```bash\npython scripts\u002Fget_flash_attn.py\n```\n\nThe script will generate a wheel name like `flash_attn-2.7.2.post1+cu12torch2.5cxx11abiFALSE-cp310-cp310-linux_x86_64.whl`, which can be found on [the release page of flash-attn](https:\u002F\u002Fgithub.com\u002FDao-AILab\u002Fflash-attention\u002Freleases).\n\nThen you can download the corresponding pre-built wheel and install it following the instructions in [`flash-attn`](https:\u002F\u002Fgithub.com\u002FDao-AILab\u002Fflash-attention).\n\n\n\n### A2. Reduce GPU Memory Usage\nYou can use the following script to edit images with reduced GPU memory usage.\n\n```bash\nbash scripts\u002Frun_examples.sh\n```\nThe default script runs the inference code with non-quantized weights. If you want to reduce GPU memory usage, you can 1) set the `--quantized` flag in the script, which quantizes the weights to fp8, or 2) set the `--offload` flag in the script to offload some modules to the CPU.\n\nThe following table shows the GPU memory usage and speed for running the Step1X-Edit model (batch size = 1, with cfg) under different configurations:\n\n|     Model    |     Peak GPU Memory (512 \u002F 786 \u002F 1024)  | 28 steps w\u002F flash-attn (512 \u002F 786 \u002F 1024) |\n|:------------:|:------------:|:------------:|\n| Step1X-Edit   |                42.5GB \u002F 46.5GB \u002F 49.8GB  | 5s \u002F 11s \u002F 22s |\n| Step1X-Edit (FP8)   |             31GB \u002F 31.5GB \u002F 34GB     | 6.8s \u002F 13.5s \u002F 25s | \n| Step1X-Edit (offload)   |       25.9GB \u002F 27.3GB \u002F 29.1GB | 49.6s \u002F 54.1s \u002F 63.2s |\n| Step1X-Edit (FP8 + offload)   |   18GB \u002F 18GB \u002F 18GB | 35s \u002F 40s \u002F 51s |\n\n* The model was tested on one H800 GPU.\n* We recommend using GPUs with 80GB of memory for better generation quality and efficiency.\n\n
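For the diffusers-based pipelines shown in the Quick Start, a comparable memory saving is available through CPU offloading. The sketch below is illustrative only, not part of this repository, and assumes the custom pipeline inherits the standard `DiffusionPipeline` offload hooks from `diffusers` (which require the `accelerate` package):\n\n```python\nimport torch\nfrom diffusers import Step1XEditPipeline  # from the forked diffusers installed above\n\npipe = Step1XEditPipeline.from_pretrained(\n    \"stepfun-ai\u002FStep1X-Edit-v1p1-diffusers\", torch_dtype=torch.bfloat16\n)\n# Instead of pipe.to(\"cuda\"): keep modules on the CPU and move each one to the\n# GPU only while it runs, trading inference speed for a lower memory peak.\npipe.enable_model_cpu_offload()\n```\n\n\n### A3. 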
Multi-GPU inference\nFor multi-GPU inference, you can use the following script:\n```bash\nbash scripts\u002Frun_examples_parallel.sh\n```\nYou can change the number of GPUs (`GPU`), the configuration of xDiT (`--ulysses_degree`, `--ring_degree`, or `--cfg_degree`), and whether to enable TeaCache acceleration (`--teacache`) in the script.\nThe table below presents the speedup of several efficient methods on the Step1X-Edit model.\n\n|     Model    |     Peak GPU Memory   |  28 steps |\n|:------------:|:------------:|:------------:|\n| Step1X-Edit + TeaCache     |    49.6GB   | 16.78s | \n| Step1X-Edit + xDiT (GPU=2) |    50.2GB   | 12.81s |\n| Step1X-Edit + xDiT (GPU=4) |    52.9GB   | 8.17s |\n| Step1X-Edit + TeaCache + xDiT (GPU=2)  |  50.7GB    | 8.94s |\n| Step1X-Edit + TeaCache + xDiT (GPU=4)  |  54.2GB |  5.82s |\n\n* The model was tested on H800 series GPUs with a resolution of 1024.\n* TeaCache's default threshold of 0.2 provides a good balance between efficiency and performance.\n* xDiT employs both CFG Parallelism and Ring Attention when using 4 GPUs, but only utilizes CFG Parallelism when operating with 2 GPUs.\n\nThis default script runs the inference code on example inputs. The results will look like:\n\u003Cdiv align=\"center\">\n\u003Cimg width=\"1080\" alt=\"results\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstepfun-ai_Step1X-Edit_readme_4d4c8df7c2c4.png\">\n\u003C\u002Fdiv>\n\n\n\u003C!-- ### 2.4 Gradio Scripts\n\nChange the `model_path` in `gradio_app.py` to the local path of Step1X-Edit. Then run\n\n```bash\npython gradio_app.py\n```\n\nThen the gradio demo will run on `localhost:32800`. -->\n\n\n\n\n\n### A4. Finetuning\n#### LoRA training script\n\nHere is the GPU memory cost during training with a LoRA rank of 64 and a batch size of 1:\n\n|     Precision of DiT    |     bf16 (512 \u002F 786 \u002F 1024)  | fp8 (512 \u002F 786 \u002F 1024) |\n|:------------:|:------------:|:------------:|\n| GPU Memory   |                29.7GB \u002F 31.6GB \u002F 33.8GB  | 19.8GB \u002F 21.3GB \u002F 23.6GB |\n\nThe script `.\u002Fscripts\u002Ffinetuning.sh` shows how to fine-tune the Step1X-Edit model. With our default strategy, it is possible to fine-tune Step1X-Edit at 1024 resolution on a single 24GB GPU. Our fine-tuning script is adapted from [kohya-ss\u002Fsd-scripts](https:\u002F\u002Fgithub.com\u002Fkohya-ss\u002Fsd-scripts).\n\n```bash\nbash .\u002Fscripts\u002Ffinetuning.sh\n```\n\nThe custom dataset is configured by `.\u002Flibrary\u002Fdata_configs\u002Fstep1x_edit.toml`. Here `metadata_file` contains all the training samples, including the absolute paths of the source images, the absolute paths of the target images, and the editing instructions.\n\nThe `metadata_file` should be a JSON file containing a dict as follows:\n\n```\n{\n  \u003Ctarget image path, str>: {\n    'ref_image_path': \u003Csource image path, str>,\n    'caption': \u003Cthe editing instruction, str>\n  },\n  ...\n}\n```\n\n
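As a concrete illustration, such a metadata file can be assembled with a few lines of Python. The sketch below is not part of the repository: the `source\u002F` and `target\u002F` folder layout and the shared caption are hypothetical; only the output schema follows the format above.\n\n```python\nimport json\nfrom pathlib import Path\n\n# Hypothetical layout: paired images live in source\u002F and target\u002F with\n# matching file names, and every pair shares one editing instruction.\nsrc_dir = Path(\"source\").resolve()\ntgt_dir = Path(\"target\").resolve()\nmetadata = {\n    str(tgt_dir \u002F src.name): {  # key: absolute path of the target image\n        \"ref_image_path\": str(src_dir \u002F src.name),  # absolute path of the source image\n        \"caption\": \"fix all the corrupted hands in the image\",\n    }\n    for src in sorted(src_dir.glob(\"*.png\"))\n}\nwith open(\"metadata.json\", \"w\", encoding=\"utf-8\") as fp:\n    json.dump(metadata, fp, ensure_ascii=False, indent=2)\n```\n\n#### Inference with LoRA\nTo run inference with a LoRA, simply add `--lora \u003Cpath to your lora weights>` when using `inference.py`. 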
For example:\n\n```bash\npython inference.py --input_dir .\u002Fexamples \\\n    --model_path \u002Fdata\u002Fwork_dir\u002Fstep1x-edit\u002F \\\n    --json_path .\u002Fexamples\u002Fprompt_cn.json \\\n    --output_dir .\u002Foutput_cn \\\n    --seed 1234 --size_level 1024 \\\n    --lora 20250521_001-lora256-alpha128-fix-hand-per-epoch\u002Fstep1x-edit_test.safetensors\n```\n\nHere is an example using our [pretrained LoRA weights](https:\u002F\u002Fhuggingface.co\u002Fstepfun-ai\u002FStep1X-Edit\u002Ftree\u002Fmain\u002Flora), which are designed for fixing corrupted hands of anime characters.\n\n\u003Cdiv align=\"center\">\n\u003Cimg width=\"1080\" alt=\"results\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstepfun-ai_Step1X-Edit_readme_2f28a498189e.png\">\n\u003C\u002Fdiv>\n\nTo reproduce the cases above, you can run the following script:\n```bash\nbash scripts\u002Frun_examples_fix_hand.sh\n```\n\n\n## 📊 Benchmark\nWe release [GEdit-Bench](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002Fstepfun-ai\u002FGEdit-Bench), a new benchmark grounded in real-world usage, to support more authentic and comprehensive evaluation. Carefully curated to reflect actual user editing needs and a wide range of editing scenarios, it enables a more realistic assessment of image editing models.\nThe evaluation process and related code can be found in [GEdit-Bench\u002FEVAL.md](GEdit-Bench\u002FEVAL.md). Partial results of the benchmark are shown below:\n\u003Cdiv align=\"center\">\n\u003Cimg width=\"1080\" alt=\"results\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstepfun-ai_Step1X-Edit_readme_102517eeeadf.png\">\n\u003C\u002Fdiv>\n\n\n## 🧩 Community Contributions\n\nIf you develop or use Step1X-Edit in your projects, feel free to let us know 🎉.\n\n- A detailed introduction blog of Step1X-Edit: [Step1X-Edit执行流程](https:\u002F\u002Fliwenju0.com\u002Fposts\u002FStep1X-Edit%E6%89%A7%E8%A1%8C%E6%B5%81%E7%A8%8B-%E4%B8%80.html) by [liwenju0](https:\u002F\u002Fliwenju0.com\u002Fabout.html)\n- FP8 model weights: [meimeilook\u002FStep1X-Edit-FP8](https:\u002F\u002Fhuggingface.co\u002Fmeimeilook\u002FStep1X-Edit-FP8) by [meimeilook](https:\u002F\u002Fhuggingface.co\u002Fmeimeilook); [rkfg\u002FStep1X-Edit-FP8](https:\u002F\u002Fhuggingface.co\u002Frkfg\u002FStep1X-Edit-FP8) by [rkfg](https:\u002F\u002Fhuggingface.co\u002Frkfg)\n- Step1X-Edit ComfyUI Plugin: [quank123wip\u002FComfyUI-Step1X-Edit](https:\u002F\u002Fgithub.com\u002Fquank123wip\u002FComfyUI-Step1X-Edit) by [quank123wip](https:\u002F\u002Fgithub.com\u002Fquank123wip); [raykindle\u002FComfyUI_Step1X-Edit](https:\u002F\u002Fgithub.com\u002Fraykindle\u002FComfyUI_Step1X-Edit) by [raykindle](https:\u002F\u002Fgithub.com\u002Fraykindle)\n- Training scripts: [hobart07\u002FStep1X-Edit_train](https:\u002F\u002Fgithub.com\u002Fhobart07\u002FStep1X-Edit_train) by [hobart07](https:\u002F\u002Fgithub.com\u002Fhobart07)\n\n## 📚 Citation\nIf you find the Step1X-Edit series helpful for your research or applications, please consider ⭐ starring the repository and citing our papers.\n```\n@article{yin2025reasonedit,\n  title={ReasonEdit: Towards Reasoning-Enhanced Image Editing Models},\n  author={Fukun Yin and Shiyu Liu and Yucheng Han and Zhibo Wang and Peng Xing and Rui Wang and Wei Cheng and Yingming Wang and Aojie Li and Zixin Yin and Pengtao Chen and Xiangyu Zhang and Daxin Jiang and Xianfang Zeng and Gang Yu},\n  journal={arXiv preprint arXiv:2511.22625},\n  year={2025}\n}\n\n@article{wu2025kris,\n  title={KRIS-Bench: 
Benchmarking Next-Level Intelligent Image Editing Models},\n  author={Wu, Yongliang and Li, Zonghui and Hu, Xinting and Ye, Xinyu and Zeng, Xianfang and Yu, Gang and Zhu, Wenbo and Schiele, Bernt and Yang, Ming-Hsuan and Yang, Xu},\n  journal={arXiv preprint arXiv:2505.16707},\n  year={2025}\n}\n\n@article{liu2025step1x-edit,\n  title={Step1X-Edit: A Practical Framework for General Image Editing}, \n  author={Shiyu Liu and Yucheng Han and Peng Xing and Fukun Yin and Rui Wang and Wei Cheng and Jiaqi Liao and Yingming Wang and Honghao Fu and Chunrui Han and Guopeng Li and Yuang Peng and Quan Sun and Jingwei Wu and Yan Cai and Zheng Ge and Ranchen Ming and Lei Xia and Xianfang Zeng and Yibo Zhu and Binxing Jiao and Xiangyu Zhang and Gang Yu and Daxin Jiang},\n  journal={arXiv preprint arXiv:2504.17761},\n  year={2025}\n}\n\n```\n\n## Acknowledgement\nWe would like to express our sincere thanks to the contributors of [Kohya](https:\u002F\u002Fgithub.com\u002Fkohya-ss\u002Fsd-scripts\u002Ftree\u002Fsd3), [SD3](https:\u002F\u002Fhuggingface.co\u002Fstabilityai\u002Fstable-diffusion-3-medium), [FLUX](https:\u002F\u002Fgithub.com\u002Fblack-forest-labs\u002Fflux), [Qwen](https:\u002F\u002Fgithub.com\u002FQwenLM\u002FQwen2.5), [xDiT](https:\u002F\u002Fgithub.com\u002Fxdit-project\u002FxDiT), [TeaCache](https:\u002F\u002Fgithub.com\u002Fali-vilab\u002FTeaCache), [diffusers](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fdiffusers) and [HuggingFace](https:\u002F\u002Fhuggingface.co) teams, for their open research and exploration.\n\n\n## Disclaimer\nThe results produced by this image editing model are entirely determined by user input and actions. The development team and this open-source project are not responsible for any outcomes or consequences arising from its use.\n\n## LICENSE\nStep1X-Edit is licensed under the Apache License 2.0. 
You can find the license files in the respective github and  HuggingFace repositories.\n","\u003Cdiv align=\"center\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstepfun-ai_Step1X-Edit_readme_76e02663483f.png\"  height=100>\n\u003C\u002Fdiv>\n\u003Cdiv align=\"center\">\n  \u003Ca href=\"https:\u002F\u002Fstep1x-edit.github.io\u002F\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fstatic\u002Fv1?label=项目主页&message=网站&color=green\">\u003C\u002Fa> &ensp;\n  \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.17761\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fstatic\u002Fv1?label=Step1X-Edit&message=Arxiv论文&color=red\">\u003C\u002Fa> &ensp;\n  \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.22625\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fstatic\u002Fv1?label=ReasonEdit&message=Arxiv论文&color=red\">\u003C\u002Fa> &ensp;\n  \u003Ca href=\"assets\u002FWeChat.jpg\">\n  \u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fstatic\u002Fv1?label=微信&message=加我好友&color=green&logo=wechat&logoColor=white\">\n  \u003C\u002Fa>\n  \n  \u003Ca href=\"https:\u002F\u002Fhuggingface.co\u002Fstepfun-ai\u002FStep1X-Edit\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fstatic\u002Fv1?label=模型&message=HuggingFace&color=yellow\">\u003C\u002Fa> &ensp;\n  \u003Ca href=\"https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002Fstepfun-ai\u002FGEdit-Bench\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fstatic\u002Fv1?label=GEdit-Bench数据集&message=HuggingFace&color=yellow\">\u003C\u002Fa> &ensp;\n  [![在 Replicate 上运行](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstepfun-ai_Step1X-Edit_readme_7dacf1cc5d87.png)](https:\u002F\u002Freplicate.com\u002Fzsxkib\u002Fstep1x-edit) &ensp;\n  \u003Ca href=\"https:\u002F\u002Fdiscord.gg\u002Fj3qzuAyn\">\u003Cimg src=\"https:\u002F\u002Fimg.shields.io\u002Fstatic\u002Fv1?label=Discord频道&message=Discord&color=purple\">\u003C\u002Fa> &ensp;\n\u003C\u002Fdiv>\n\n## 🔥🔥🔥 新闻速递！！\n* 2025年12月29日：🎉 [RegionE](https:\u002F\u002Fgithub.com\u002FPeyton-Chen\u002FRegionE) 仅需五行代码，即可在不损失精度的前提下，为 Step1X-Edit 推理带来 2.5 倍加速。\n* 2025年11月26日：👋 我们发布了 [Step1X-Edit-v1p2](https:\u002F\u002Fhuggingface.co\u002Fstepfun-ai\u002FStep1X-Edit-v1p2)（论文中称为 **ReasonEdit-S**），这是一款原生支持推理编辑（reasoning edit）的模型，在 KRIS-Bench 和 GEdit-Bench 上表现更优。技术报告请见 [此处](https:\u002F\u002Farxiv.org\u002Fabs\u002F2511.22625)。\n  \u003Ctable>\n  \u003Cthead>\n  \u003Ctr>\n    \u003Cth rowspan=\"2\">模型\u003C\u002Fth>\n    \u003Cth colspan=\"3\"> \u003Cdiv align=\"center\">GEdit-Bench\u003C\u002Fdiv> \u003C\u002Fth>\n    \u003Cth colspan=\"4\"> \u003Cdiv align=\"center\">Kris-Bench\u003C\u002Fdiv> \u003C\u002Fth>\n  \u003C\u002Ftr>\n  \u003Ctr>\n    \u003Cth>G_SC⬆️\u003C\u002Fth> \u003Cth>G_PQ⬆️ \u003C\u002Fth> \u003Cth>G_O⬆️\u003C\u002Fth> \u003Cth>FK⬆️\u003C\u002Fth> \u003Cth>CK⬆️\u003C\u002Fth> \u003Cth>PK⬆️ \u003C\u002Fth> \u003Cth>Overall⬆️\u003C\u002Fth>\n  \u003C\u002Ftr>\n  \u003C\u002Fthead>\n  \u003Ctbody>\n  \u003Ctr>  \n    \u003Ctd>Flux-Kontext-dev \u003C\u002Ftd> \u003Ctd>7.16\u003C\u002Ftd> \u003Ctd>7.37\u003C\u002Ftd> \u003Ctd>6.51\u003C\u002Ftd> \u003Ctd>53.28\u003C\u002Ftd> \u003Ctd>50.36\u003C\u002Ftd> \u003Ctd>42.53\u003C\u002Ftd> \u003Ctd>49.54\u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr>   \n    \u003Ctd>Qwen-Image-Edit-2509 \u003C\u002Ftd> \u003Ctd>8.00\u003C\u002Ftd> \u003Ctd>7.86\u003C\u002Ftd> \u003Ctd>7.56\u003C\u002Ftd> \u003Ctd>61.47\u003C\u002Ftd> \u003Ctd>56.79\u003C\u002Ftd> \u003Ctd>47.07\u003C\u002Ftd> 
\u003Ctd>56.15\u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr>\n    \u003Ctd>Step1X-Edit v1.1 \u003C\u002Ftd> \u003Ctd>7.66\u003C\u002Ftd> \u003Ctd>7.35\u003C\u002Ftd> \u003Ctd>6.97\u003C\u002Ftd> \u003Ctd>53.05\u003C\u002Ftd> \u003Ctd>54.34\u003C\u002Ftd> \u003Ctd>44.66\u003C\u002Ftd> \u003Ctd>51.59\u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr>\n    \u003Ctd>Step1x-edit-v1p2-preview \u003C\u002Ftd> \u003Ctd>8.14\u003C\u002Ftd> \u003Ctd>7.55\u003C\u002Ftd> \u003Ctd>7.42\u003C\u002Ftd> \u003Ctd>60.49\u003C\u002Ftd> \u003Ctd>58.81\u003C\u002Ftd> \u003Ctd>41.77\u003C\u002Ftd> \u003Ctd>52.51\u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr>\n    \u003Ctd>Step1x-edit-v1p2 (base) \u003C\u002Ftd> \u003Ctd>7.77\u003C\u002Ftd> \u003Ctd>7.65\u003C\u002Ftd> \u003Ctd>7.24\u003C\u002Ftd> \u003Ctd>58.23\u003C\u002Ftd> \u003Ctd>60.55\u003C\u002Ftd> \u003Ctd>46.21\u003C\u002Ftd> \u003Ctd>56.33\u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr>\n    \u003Ctd>Step1x-edit-v1p2 (thinking) \u003C\u002Ftd> \u003Ctd>8.02\u003C\u002Ftd> \u003Ctd>7.64\u003C\u002Ftd> \u003Ctd>7.36\u003C\u002Ftd> \u003Ctd>59.79\u003C\u002Ftd> \u003Ctd>62.76\u003C\u002Ftd> \u003Ctd>49.78\u003C\u002Ftd> \u003Ctd>58.64\u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr>\n    \u003Ctd>Step1x-edit-v1p2 (thinking + reflection) \u003C\u002Ftd> \u003Ctd>8.18\u003C\u002Ftd> \u003Ctd>7.85\u003C\u002Ftd> \u003Ctd>7.58\u003C\u002Ftd> \u003Ctd>62.44\u003C\u002Ftd> \u003Ctd>65.72\u003C\u002Ftd> \u003Ctd>50.42\u003C\u002Ftd> \u003Ctd>60.93\u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003C\u002Ftable>\n\n* 2025年9月8日：👋 我们发布了 [step1x-edit-v1p2-preview](https:\u002F\u002Fhuggingface.co\u002Fstepfun-ai\u002FStep1X-Edit-v1p2-preview)，这是 Step1X-Edit 的新版本，具备推理编辑能力且性能更优（技术报告即将发布），主要特性包括：\n  - 原生推理编辑模型：结合指令推理与反思修正机制，更精准处理复杂编辑任务。KRIS-Bench 表现如下：\n    |    模型    |   事实知识 ⬆️   |  概念知识 ⬆️ | 程序性知识 ⬆️   |  综合得分 ⬆️ | \n    |:------------:|:------------:|:------------:| :------------:|:------------:| \n    | Step1X-Edit v1.1  | 53.05 |  54.34 | 44.66 | 51.59 |   \n    | Step1x-edit-v1p2-preview  | 60.49 | 58.81 | 41.77 | 52.51 | \n    | Step1x-edit-v1p2-preview (thinking)  | 62.24 | 62.25 | 44.43 | 55.21| \n    | Step1x-edit-v1p2-preview (thinking + reflection) | 62.94 |  61.82 |  44.08 |  55.64 | \n  - 提升图像编辑质量与指令遵循能力。GEdit-Bench 表现如下：\n    |     模型    |     G_SC ⬆️   |  G_PQ ⬆️ | G_O ⬆️   |  Q_SC ⬆️ | Q_PQ ⬆️   |  Q_O ⬆️ |\n    |:------------:|:------------:|:------------:| :------------:|:------------:| :------------:|:------------:|\n    | Step1X-Edit (v1.0)  |    7.13   | 7.00 |   6.44   | 7.39 |    7.28   | 7.07 | \n    | Step1X-Edit (v1.1)  |    7.66   | 7.35 |   6.97   | 7.65 |    7.41   | 7.35 | \n    | Step1x-edit-v1p2-preview  |    8.14   | 7.55 |   7.42   | 7.90 |   7.34   | 7.40   |\n* 2025年7月9日：👋 我们更新了 step1x-edit 模型并发布为 [step1x-edit-v1p1](https:\u002F\u002Fhuggingface.co\u002Fstepfun-ai\u002FStep1X-Edit)（diffusers 版本见 [此处](https:\u002F\u002Fhuggingface.co\u002Fstepfun-ai\u002FStep1X-Edit-v1p1-diffusers))，主要特性包括：\n  - 新增支持文生图（T2I, Text-to-Image）生成任务\n  - 提升图像编辑质量与指令遵循能力。\n  GEdit-Bench-EN（完整集）定量评估结果。G_SC、G_PQ、G_O 为 GPT-4.1 评估指标，Q_SC、Q_PQ、Q_O 为 Qwen2.5-VL-72B 评估指标。为便于复现，我们已公开模型评估的[中间结果](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002FShiyu95\u002Fgedit_results)。\n    |     模型    |     G_SC ⬆️   |  G_PQ ⬆️ | G_O ⬆️   |  Q_SC ⬆️ | Q_PQ ⬆️   |  Q_O ⬆️ |\n    |:------------:|:------------:|:------------:| :------------:|:------------:| :------------:|:------------:|\n    | Step1X-Edit (v1.0)  |    7.13   | 7.00 |   6.44   | 7.39 |    7.28   | 7.07 | \n    | Step1X-Edit 
(v1.1)  |    7.66   | 7.35 |   6.97   | 7.65 |    7.41   | 7.35 | \n* 2025年6月17日：👋 新增支持 TeaCache 与并行推理。\n* 2025年5月22日：👋 Step1X-Edit 现已支持在单张 24GB GPU 上进行 LoRA 微调！同时发布了针对动漫角色的手部修复 LoRA。[下载 LoRA](https:\u002F\u002Fhuggingface.co\u002Fstepfun-ai\u002FStep1X-Edit)\n* 2025年4月30日：🎉 Step1X-Edit ComfyUI 插件现已上线，感谢社区贡献！[quank123wip\u002FComfyUI-Step1X-Edit](https:\u002F\u002Fgithub.com\u002Fquank123wip\u002FComfyUI-Step1X-Edit) & [raykindle\u002FComfyUI_Step1X-Edit](https:\u002F\u002Fgithub.com\u002Fraykindle\u002FComfyUI_Step1X-Edit)。\n* 2025年4月27日：🎉 在社区支持下，我们更新了 Step1X-Edit-FP8 的推理代码与模型权重。[meimeilook\u002FStep1X-Edit-FP8](https:\u002F\u002Fhuggingface.co\u002Fmeimeilook\u002FStep1X-Edit-FP8) & [rkfg\u002FStep1X-Edit-FP8](https:\u002F\u002Fhuggingface.co\u002Frkfg\u002FStep1X-Edit-FP8)。\n* 2025年4月26日：🎉 Step1X-Edit 正式上线 —— 您现在可以直接在在线演示中编辑图像！[在线演示](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fstepfun-ai\u002FStep1X-Edit)\n* 2025年4月25日：👋 我们发布了 Step1X-Edit 的评估代码与基准数据。[下载 GEdit-Bench](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002Fstepfun-ai\u002FGEdit-Bench)\n* 2025年4月25日：👋 我们发布了 Step1X-Edit 的推理代码与模型权重。[ModelScope](https:\u002F\u002Fwww.modelscope.cn\u002Fmodels\u002Fstepfun-ai\u002FStep1X-Edit) & [HuggingFace](https:\u002F\u002Fhuggingface.co\u002Fstepfun-ai\u002FStep1X-Edit) 模型。\n* 2025年4月25日：👋 我们已开源技术报告。[阅读报告](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.17761)\n\n\u003C!-- ## 图像编辑演示 -->\n\n\n\u003C!-- ## 📑 开源计划\n- [x] 推理代码与模型检查点\n- [x] 在线演示（Gradio）\n- [x] 微调脚本\n- [x] 多GPU序列并行推理\n- [x] FP8量化权重\n- [x] ComfyUI 支持\n- [x] Diffusers 支持 -->\n\n## 📖 简介\n我们推出了最先进的图像编辑模型 **Step1X-Edit**，旨在提供与闭源模型（如 GPT-4o 和 Gemini 2 Flash）相媲美的性能。  \n具体而言，我们采用多模态大语言模型（Multimodal LLM）处理参考图像和用户的编辑指令，提取潜在嵌入（latent embedding），并与扩散图像解码器（diffusion image decoder）结合以生成目标图像。为训练该模型，我们构建了一套数据生成流水线，用于生产高质量数据集。  \n在评估方面，我们开发了 GEdit-Bench——一个基于真实用户指令的全新基准测试。GEdit-Bench 上的实验结果表明，Step1X-Edit 显著优于现有开源基线模型，并接近领先闭源模型的性能，从而为图像编辑领域做出了重要贡献。  \n更多细节请参阅我们的[技术报告](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.17761)。\n\n\u003Cdiv align=\"center\">\n\u003Cimg width=\"720\" alt=\"demo\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstepfun-ai_Step1X-Edit_readme_cbead97cc655.gif\">\n\u003Cp>\u003Cb>Step1X-Edit：\u003C\u002Fb> 一个统一的图像编辑模型，在多种真实用户指令上表现卓越。\u003C\u002Fp>\n\u003C\u002Fdiv>\n\n\n## ⚡️ 快速开始\n1. 请确保已安装 `transformers==4.55.0`（我们在该版本下测试通过）\n2. 
根据你想使用的模型版本，本地安装 `diffusers` 包\n\n\n### Step1X-Edit-v1p2 (v1.2)\n使用以下命令安装 `diffusers` 包：\n```bash\ngit clone -b step1xedit_v1p2 https:\u002F\u002Fgithub.com\u002FPeyton-Chen\u002Fdiffusers.git\ncd diffusers\npip install -e .\n\npip install RegionE # 可选，用于加速推理\n```\n以下是使用 `Step1X-Edit-v1p2` 模型编辑图像的示例：\n```python\nimport torch\nfrom diffusers import Step1XEditPipelineV1P2\nfrom diffusers.utils import load_image\nfrom RegionE import RegionEHelper\n\npipe = Step1XEditPipelineV1P2.from_pretrained(\"stepfun-ai\u002FStep1X-Edit-v1p2\", torch_dtype=torch.bfloat16)\npipe.to(\"cuda\")\n\n# 导入 RegionEHelper\nregionehelper = RegionEHelper(pipe)\nregionehelper.set_params()   # 默认超参数\nregionehelper.enable()\n\nprint(\"=== 处理图像 ===\")\nimage = load_image(\"examples\u002F0000.jpg\").convert(\"RGB\")\nprompt = \"add a ruby pendant on the girl's neck.\"\nenable_thinking_mode=True\nenable_reflection_mode=True\npipe_output = pipe(\n    image=image,\n    prompt=prompt,\n    num_inference_steps=50,\n    true_cfg_scale=6,\n    generator=torch.Generator().manual_seed(42),\n    enable_thinking_mode=enable_thinking_mode,\n    enable_reflection_mode=enable_reflection_mode,\n)\nif enable_thinking_mode:\n    print(\"Reformat Prompt:\", pipe_output.reformat_prompt)\nfor image_idx in range(len(pipe_output.images)):\n    pipe_output.images[image_idx].save(f\"0001-{image_idx}.jpg\", lossless=True)\n    if enable_reflection_mode:\n        print(pipe_output.think_info[image_idx])\n        print(pipe_output.best_info[image_idx])\npipe_output.final_images[0].save(f\"0001-final.jpg\", lossless=True)\n\nregionehelper.disable()\n```\n结果如下所示：\n\u003Cdiv align=\"center\">\n\u003Cimg width=\"1080\" alt=\"results\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstepfun-ai_Step1X-Edit_readme_5edf724a689b.jpeg\">\n\u003C\u002Fdiv>\n\n### Step1X-Edit-v1p2-preview (v1.2-preview)\n使用以下命令安装 `diffusers` 包：\n```bash\ngit clone -b dev\u002FMergeV1-2 https:\u002F\u002Fgithub.com\u002FPeyton-Chen\u002Fdiffusers.git\ncd diffusers\npip install -e .\n```\n\n以下是使用 `Step1X-Edit-v1p2-preview` 模型编辑图像的示例：\n\n```python\nimport torch\nfrom diffusers import Step1XEditPipelineV1P2\nfrom diffusers.utils import load_image\npipe = Step1XEditPipelineV1P2.from_pretrained(\"stepfun-ai\u002FStep1X-Edit-v1p2-preview\", torch_dtype=torch.bfloat16)\npipe.to(\"cuda\")\nprint(\"=== 处理图像 ===\")\nimage = load_image(\"examples\u002F0000.jpg\").convert(\"RGB\")\nprompt = \"add a ruby ​​pendant on the girl's neck.\"\nenable_thinking_mode=True\nenable_reflection_mode=True\npipe_output = pipe(\n    image=image,\n    prompt=prompt,\n    num_inference_steps=28,\n    true_cfg_scale=4,\n    generator=torch.Generator().manual_seed(42),\n    enable_thinking_mode=enable_thinking_mode,\n    enable_reflection_mode=enable_reflection_mode,\n)\nif enable_thinking_mode:\n    print(\"Reformat Prompt:\", pipe_output.reformat_prompt)\nfor image_idx in range(len(pipe_output.images)):\n    pipe_output.images[image_idx].save(f\"0001-{image_idx}.jpg\", lossless=True)\n    if enable_reflection_mode:\n        print(pipe_output.think_info[image_idx])\n```\n\n\n### Step1X-Edit-v1p1 (v1.1)\n使用以下命令安装 `diffusers` 包：\n```bash\ngit clone -b step1xedit https:\u002F\u002Fgithub.com\u002FPeyton-Chen\u002Fdiffusers.git\ncd diffusers\npip install -e .\n```\n\n以下是使用 `Step1X-Edit-v1p1` 模型编辑图像的示例：\n```python\nimport torch\nfrom diffusers import Step1XEditPipeline\nfrom diffusers.utils import load_image\n\n\npipe = Step1XEditPipeline.from_pretrained(\"stepfun-ai\u002FStep1X-Edit-v1p1-diffusers\", 
torch_dtype=torch.bfloat16)\npipe.to(\"cuda\")\n\nprint(\"=== 处理图像 ===\")\nimage = load_image(\"examples\u002F0000.jpg\").convert(\"RGB\")\nprompt = \"给这个女生的脖子上戴一个带有红宝石的吊坠。\"\nimage = pipe(\n    image=image,\n    prompt=prompt,\n    num_inference_steps=28,\n    size_level=1024,\n    guidance_scale=6.0,\n    generator=torch.Generator().manual_seed(42),\n).images[0]\nimage.save(\"0000.jpg\")\n```\n\n结果将如下所示：\n\u003Cdiv align=\"center\">\n\u003Cimg width=\"1080\" alt=\"results\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstepfun-ai_Step1X-Edit_readme_de80c1eb02e9.png\">\n\u003C\u002Fdiv>\n\n\n## 🌟 高级用法\n我们以原始的 [Step1X-Edit](https:\u002F\u002Fhuggingface.co\u002Fstepfun-ai\u002FStep1X-Edit) 模型为例，展示该模型的一些高级用法。其他版本的模型可能具有不同的推理流程。\n\n### A1. 依赖要求\n我们在 torch==2.3.1 和 torch==2.5.1（搭配 cuda-12.1）环境下测试了本模型。\n安装依赖：\n\n``` bash\npip install -r requirements.txt\n```\n\n安装 [`flash-attn`](https:\u002F\u002Fgithub.com\u002FDao-AILab\u002Fflash-attention)，我们提供了一个脚本帮助你找到适合系统的预编译 wheel 文件。\n    \n```bash\npython scripts\u002Fget_flash_attn.py\n```\n\n脚本将生成类似 `flash_attn-2.7.2.post1+cu12torch2.5cxx11abiFALSE-cp310-cp310-linux_x86_64.whl` 的 wheel 文件名，你可以在 [flash-attn 的发布页面](https:\u002F\u002Fgithub.com\u002FDao-AILab\u002Fflash-attention\u002Freleases) 找到对应文件。\n\n随后你可以下载对应的预编译 wheel 文件，并按照 [`flash-attn`](https:\u002F\u002Fgithub.com\u002FDao-AILab\u002Fflash-attention) 中的说明进行安装。\n\n### A2. 降低 GPU 显存占用\n你可以使用以下脚本来以更低的 GPU 显存占用编辑图像。\n\n```\nbash scripts\u002Frun_examples.sh\n```\n默认脚本使用未量化（non-quantified）权重运行推理代码。若希望节省 GPU 显存，你可以：1）在脚本中设置 `--quantized` 标志，将权重量化为 fp8；或 2）在脚本中设置 `--offload` 标志，将部分模块卸载到 CPU。\n\n下表展示了在不同配置下运行 Step1X-Edit 模型（batch size = 1，启用 cfg）时的 GPU 显存占用与速度：\n\n|     模型    |     峰值 GPU 显存 (512 \u002F 786 \u002F 1024)  | 28 步 + flash-attn(512 \u002F 786 \u002F 1024) |\n|:------------:|:------------:|:------------:|\n| Step1X-Edit   |                42.5GB \u002F 46.5GB \u002F 49.8GB  | 5s \u002F 11s \u002F 22s |\n| Step1X-Edit (FP8)   |             31GB \u002F 31.5GB \u002F 34GB     | 6.8s \u002F 13.5s \u002F 25s | \n| Step1X-Edit (offload)   |       25.9GB \u002F 27.3GB \u002F 29.1GB | 49.6s \u002F 54.1s \u002F 63.2s |\n| Step1X-Edit (FP8 + offload)   |   18GB \u002F 18GB \u002F 18GB | 35s \u002F 40s \u002F 51s |\n\n* 测试环境为单张 H800 GPU。\n* 我们推荐使用 80GB 显存的 GPU 以获得更佳的生成质量与效率。\n\n\n### A3. 多 GPU 推理\n对于多 GPU 推理，可使用以下脚本：\n```\nbash scripts\u002Frun_examples_parallel.sh\n```\n你可以在脚本中修改 GPU 数量（`GPU`）、xDiT 的配置（`--ulysses_degree`、`--ring_degree` 或 `--cfg_degree`），以及是否启用 TeaCache 加速（`--teacache`）。\n下表展示了 Step1X-Edit 模型上几种高效方法的速度提升效果。\n\n|     模型    |     峰值 GPU 显存   |  28 步耗时 |\n|:------------:|:------------:|:------------:|\n| Step1X-Edit + TeaCache     |    49.6GB   | 16.78s | \n| Step1X-Edit + xDiT (GPU=2) |    50.2GB   | 12.81s |\n| Step1X-Edit + xDiT (GPU=4) |    52.9GB   | 8.17s |\n| Step1X-Edit + TeaCache + xDiT (GPU=2)  |  50.7GB    | 8.94s |\n| Step1X-Edit + TeaCache + xDiT (GPU=4)  |  54.2GB |  5.82s |\n\n* 测试环境为 H800 系列 GPU，分辨率为 1024。\n* TeaCache 默认阈值 0.2 在效率与性能之间提供了良好平衡。\n* xDiT 在使用 4 张 GPU 时同时启用了 CFG 并行（CFG Parallelism）和 Ring Attention，而在 2 张 GPU 时仅使用 CFG 并行。\n\n默认脚本将在示例输入上运行推理代码。结果如下所示：\n\u003Cdiv align=\"center\">\n\u003Cimg width=\"1080\" alt=\"results\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstepfun-ai_Step1X-Edit_readme_4d4c8df7c2c4.png\">\n\u003C\u002Fdiv>\n\n\n\u003C!-- ### 2.4 Gradio 脚本\n\n修改 `gradio_app.py` 中的 `model_path` 为你本地的 Step1X-Edit 路径，然后运行：\n\n```bash\npython gradio_app.py\n```\n\nGradio 演示将运行于 `localhost:32800`。 -->\n\n\n\n\n\n### A4. 
微调（Finetuning）\n#### LoRA 训练脚本\n\n以下是当 LoRA 秩（rank）设为 64、batch size 为 1 时训练过程中的 GPU 显存消耗：\n\n|     DiT 精度    |     bf16 (512 \u002F 786 \u002F 1024)  | fp8 (512 \u002F 786 \u002F 1024) |\n|:------------:|:------------:|:------------:|\n| GPU 显存   |                29.7GB \u002F 31.6GB \u002F 33.8GB  | 19.8GB \u002F 21.3GB \u002F 23.6GB |\n\n脚本 `.\u002Fscripts\u002Ffinetuning.sh` 展示了如何微调 Step1X-Edit 模型。采用我们的默认策略，可在单张 24GB GPU 上完成 1024 分辨率的微调。该微调脚本基于 [kohya-ss\u002Fsd-scripts](https:\u002F\u002Fgithub.com\u002Fkohya-ss\u002Fsd-scripts) 修改而来。\n\n```bash\nbash .\u002Fscripts\u002Ffinetuning.sh\n```\n\n自定义数据集由 `.\u002Flibrary\u002Fdata_configs\u002Fstep1x_edit.toml` 组织管理。其中 `metadata_file` 包含所有训练样本，包括源图像绝对路径、目标图像绝对路径及编辑指令。\n\n`metadata_file` 应为一个 JSON 文件，格式如下：\n\n```\n{\n  \u003C目标图像路径, str>: {\n    'ref_image_path': \u003C源图像路径, str>\n    'caption': \u003C编辑指令, str>\n  }, \n  ...\n}\n```\n\n#### 使用 LoRA 进行推理\n在使用 `inference.py` 时，只需添加 `--lora \u003C你的 LoRA 权重路径>` 即可加载 LoRA 权重。例如：\n\n```bash\npython inference.py --input_dir .\u002Fexamples \\\n    --model_path \u002Fdata\u002Fwork_dir\u002Fstep1x-edit\u002F \\\n    --json_path .\u002Fexamples\u002Fprompt_cn.json \\\n    --output_dir .\u002Foutput_cn \\\n    --seed 1234 --size_level 1024 \\\n    --lora 20250521_001-lora256-alpha128-fix-hand-per-epoch\u002Fstep1x-edit_test.safetensors\n```\n\n以下是我们发布的[预训练 LoRA 权重](https:\u002F\u002Fhuggingface.co\u002Fstepfun-ai\u002FStep1X-Edit\u002Ftree\u002Fmain\u002Flora)示例，专用于修复动漫角色中损坏的手部。\n\n\u003Cdiv align=\"center\">\n\u003Cimg width=\"1080\" alt=\"results\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstepfun-ai_Step1X-Edit_readme_2f28a498189e.png\">\n\u003C\u002Fdiv>\n\n如需复现上述案例，可运行以下脚本：\n```bash \nbash scripts\u002Frun_examples_fix_hand.sh\n```\n\n\n## 📊 基准测试（Benchmark）\n我们发布了 [GEdit-Bench](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002Fstepfun-ai\u002FGEdit-Bench)，这是一个基于真实用户场景构建的新基准测试，旨在支持更真实、全面的评估。该基准经过精心设计，覆盖广泛的编辑需求与场景，能够对图像编辑模型进行更贴近实际应用的综合评测。\n评估流程及相关代码请参见 [GEdit-Bench\u002FEVAL.md](GEdit-Bench\u002FEVAL.md)。部分评测结果如下：\n\u003Cdiv align=\"center\">\n\u003Cimg width=\"1080\" alt=\"results\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstepfun-ai_Step1X-Edit_readme_102517eeeadf.png\">\n\u003C\u002Fdiv>\n\n\n## 🧩 社区贡献\n\n如果你在项目中开发或使用了 Step1X-Edit，欢迎告知我们 🎉。\n\n- Step1X-Edit 详细执行流程博客：[Step1X-Edit执行流程](https:\u002F\u002Fliwenju0.com\u002Fposts\u002FStep1X-Edit%E6%89%A7%E8%A1%8C%E6%B5%81%E7%A8%8B-%E4%B8%80.html)，作者 [liwenju0](https:\u002F\u002Fliwenju0.com\u002Fabout.html)\n- FP8 模型权重：[meimeilook\u002FStep1X-Edit-FP8](https:\u002F\u002Fhuggingface.co\u002Fmeimeilook\u002FStep1X-Edit-FP8)，作者 [meimeilook](https:\u002F\u002Fhuggingface.co\u002Fmeimeilook)；[rkfg\u002FStep1X-Edit-FP8](https:\u002F\u002Fhuggingface.co\u002Frkfg\u002FStep1X-Edit-FP8)，作者 [rkfg](https:\u002F\u002Fhuggingface.co\u002Frkfg)\n- Step1X-Edit ComfyUI 插件：[quank123wip\u002FComfyUI-Step1X-Edit](https:\u002F\u002Fgithub.com\u002Fquank123wip\u002FComfyUI-Step1X-Edit)，作者 [quank123wip](https:\u002F\u002Fgithub.com\u002Fquank123wip)；[raykindle\u002FComfyUI_Step1X-Edit](https:\u002F\u002Fgithub.com\u002Fraykindle\u002FComfyUI_Step1X-Edit)，作者 [raykindle](https:\u002F\u002Fgithub.com\u002Fraykindle)\n- 训练脚本：[hobart07\u002FStep1X-Edit_train](https:\u002F\u002Fgithub.com\u002Fhobart07\u002FStep1X-Edit_train)，作者 [hobart07](https:\u002F\u002Fgithub.com\u002Fhobart07)\n\n## 📚 引用\n如果您认为 Step1X-Edit 系列对您的研究或应用有所帮助，请考虑 ⭐ 标星本仓库并引用我们的论文。\n```\n@article{yin2025reasonedit,\n  title={ReasonEdit: Towards Reasoning-Enhanced Image 
Editing Models}, \n  author={Fukun Yin, Shiyu Liu, Yucheng Han, Zhibo Wang, Peng Xing, Rui Wang, Wei Cheng, Yingming Wang, Aojie Li, Zixin Yin, Pengtao Chen, Xiangyu Zhang, Daxin Jiang, Xianfang Zeng, Gang Yu},\n  journal={arXiv preprint arXiv:2511.22625},\n  year={2025}\n}\n\n@article{wu2025kris,\n  title={KRIS-Bench: Benchmarking Next-Level Intelligent Image Editing Models},\n  author={Wu, Yongliang and Li, Zonghui and Hu, Xinting and Ye, Xinyu and Zeng, Xianfang and Yu, Gang and Zhu, Wenbo and Schiele, Bernt and Yang, Ming-Hsuan and Yang, Xu},\n  journal={arXiv preprint arXiv:2505.16707},\n  year={2025}\n}\n\n@article{liu2025step1x-edit,\n  title={Step1X-Edit: A Practical Framework for General Image Editing}, \n  author={Shiyu Liu and Yucheng Han and Peng Xing and Fukun Yin and Rui Wang and Wei Cheng and Jiaqi Liao and Yingming Wang and Honghao Fu and Chunrui Han and Guopeng Li and Yuang Peng and Quan Sun and Jingwei Wu and Yan Cai and Zheng Ge and Ranchen Ming and Lei Xia and Xianfang Zeng and Yibo Zhu and Binxing Jiao and Xiangyu Zhang and Gang Yu and Daxin Jiang},\n  journal={arXiv preprint arXiv:2504.17761},\n  year={2025}\n}\n\n```\n\n## 致谢\n我们衷心感谢 [Kohya](https:\u002F\u002Fgithub.com\u002Fkohya-ss\u002Fsd-scripts\u002Ftree\u002Fsd3)、[SD3](https:\u002F\u002Fhuggingface.co\u002Fstabilityai\u002Fstable-diffusion-3-medium)、[FLUX](https:\u002F\u002Fgithub.com\u002Fblack-forest-labs\u002Fflux)、[Qwen](https:\u002F\u002Fgithub.com\u002FQwenLM\u002FQwen2.5)、[xDiT](https:\u002F\u002Fgithub.com\u002Fxdit-project\u002FxDiT)、[TeaCache](https:\u002F\u002Fgithub.com\u002Fali-vilab\u002FTeaCache)、[diffusers](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fdiffusers) 以及 [HuggingFace](https:\u002F\u002Fhuggingface.co) 团队的贡献者们，感谢他们开放的研究与探索。\n\n## 免责声明\n本图像编辑模型生成的结果完全由用户输入和操作决定。开发团队及本开源项目不对使用过程中产生的任何结果或后果承担责任。\n\n## 许可证\nStep1X-Edit 采用 Apache License 2.0 授权。您可以在相应的 GitHub 和 HuggingFace 仓库中找到许可证文件。","# Step1X-Edit 快速上手指南\n\n## 环境准备\n\n- **Python 版本**：建议使用 Python 3.8+\n- **PyTorch**：需支持 CUDA，推荐 torch >= 2.0\n- **transformers**：必须为 `4.55.0`（经测试稳定版本）\n- **显存要求**：至少 24GB GPU 显存（如 RTX 3090 \u002F 4090 或 A10\u002FA100）\n- **可选加速**：支持 RegionE 加速推理（提速 2.5 倍），支持 FP8 量化、LoRA 微调、ComfyUI 插件\n\n> 国内用户建议使用 HuggingFace 镜像或 ModelScope 下载模型：\n> - [HuggingFace 模型页](https:\u002F\u002Fhuggingface.co\u002Fstepfun-ai\u002FStep1X-Edit)\n> - [ModelScope 模型页](https:\u002F\u002Fwww.modelscope.cn\u002Fmodels\u002Fstepfun-ai\u002FStep1X-Edit)\n\n## 安装步骤\n\n### 1. 安装基础依赖\n\n```bash\npip install torch torchvision torchaudio --index-url https:\u002F\u002Fdownload.pytorch.org\u002Fwhl\u002Fcu121\npip install transformers==4.55.0 accelerate safetensors\n```\n\n### 2. 
安装 diffusers（根据版本选择）\n\n#### 使用 v1.2 正式版（推荐）\n\n```bash\ngit clone -b step1xedit_v1p2 https:\u002F\u002Fgithub.com\u002FPeyton-Chen\u002Fdiffusers.git\ncd diffusers\npip install -e .\n\n# 可选：安装 RegionE 加速推理\npip install RegionE\n```\n\n#### 使用 v1.2-preview 版\n\n```bash\ngit clone -b dev\u002FMergeV1-2 https:\u002F\u002Fgithub.com\u002FPeyton-Chen\u002Fdiffusers.git\ncd diffusers\npip install -e .\n```\n\n## 基本使用\n\n以下为使用 `Step1X-Edit-v1p2` 编辑图像的最小示例：\n\n```python\nimport torch\nfrom diffusers import Step1XEditPipelineV1P2\nfrom diffusers.utils import load_image\n\n# 加载模型（首次运行会自动下载约 10GB 权重）\npipe = Step1XEditPipelineV1P2.from_pretrained(\"stepfun-ai\u002FStep1X-Edit-v1p2\", torch_dtype=torch.bfloat16)\npipe.to(\"cuda\")\n\n# 加载输入图像 + 编辑指令\nimage = load_image(\"examples\u002F0000.jpg\").convert(\"RGB\")\nprompt = \"add a ruby pendant on the girl's neck.\"\n\n# 执行编辑\npipe_output = pipe(\n    image=image,\n    prompt=prompt,\n    num_inference_steps=50,\n    true_cfg_scale=6,\n    generator=torch.Generator().manual_seed(42),\n    enable_thinking_mode=True,      # 启用推理模式\n    enable_reflection_mode=True     # 启用反思修正\n)\n\n# 保存结果\npipe_output.final_images[0].save(\"output.jpg\", lossless=True)\n```\n\n> ✅ 输出将包含最终编辑图像 + 中间思考过程（若启用 reflection 模式）  \n> 🚀 如需加速推理，可在 `pipe` 初始化后添加：\n> ```python\n> from RegionE import RegionEHelper\n> regionehelper = RegionEHelper(pipe)\n> regionehelper.set_params()\n> regionehelper.enable()\n> ```\n\n---\n\n📌 提示：更多模型版本、LoRA 微调、ComfyUI 插件等高级功能请参考项目主页：[https:\u002F\u002Fstep1x-edit.github.io\u002F](https:\u002F\u002Fstep1x-edit.github.io\u002F)","一位独立游戏开发者正在为新作《蒸汽朋克动物园》制作宣传海报，需要快速将原始手绘角色图中的“机械狐狸耳朵”替换成“发光水晶鹿角”，同时保持画面风格与光影一致。\n\n### 没有 Step1X-Edit 时\n- 必须手动在 Photoshop 中逐层绘制替换部件，耗时至少3小时，且容易破坏原图质感。\n- 使用普通AI修图工具（如旧版Stable Diffusion插件）生成结果风格割裂，水晶材质与蒸汽朋克氛围不搭。\n- 多次修改需求（比如“让水晶更透明一点”）需重新开始编辑，沟通成本高、迭代效率低。\n- 无法精确控制局部区域编辑范围，常误改背景齿轮或毛发细节，导致返工。\n- 输出质量不稳定，有时生成结构错乱的角状物，缺乏可控性和专业级精度。\n\n### 使用 Step1X-Edit 后\n- 输入自然语言指令“将狐狸耳朵替换为半透明发光水晶鹿角，保留蒸汽金属纹理”，10秒内获得高质量输出，无需手动重绘。\n- 得益于其推理编辑能力，Step1X-Edit 自动理解“蒸汽朋克+水晶”的风格融合，生成结果材质协调、光影自然。\n- 支持多轮语义修正，只需追加“增强水晶内部光晕效果”，模型即在原基础上优化，无需从头再来。\n- 区域编辑精准，通过简单框选或语义描述即可锁定目标部位，背景与毛发细节毫发无损。\n- 在GEdit-Bench和KRIS-Bench上超越多数闭源模型的表现，确保每次输出都稳定达到商业可用标准。\n\nStep1X-Edit 让创意工作者从繁琐的手工修图中解放出来，用自然语言直接驱动专业级图像编辑，真正实现“所想即所得”。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstepfun-ai_Step1X-Edit_5edf724a.jpg","stepfun-ai","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fstepfun-ai_130ba55d.png","",null,"opensource@stepfun.com","https:\u002F\u002Fgithub.com\u002Fstepfun-ai",[82,86],{"name":83,"color":84,"percentage":85},"Python","#3572A5",99.7,{"name":87,"color":88,"percentage":89},"Shell","#89e051",0.3,2175,95,"2026-04-04T05:56:37","Apache-2.0","需要 NVIDIA GPU，显存 24GB（用于 LoRA 微调），未说明推理最低显存","未说明",{"notes":97,"python":95,"dependencies":98},"需从指定分支安装 diffusers；支持 FP8 和并行推理；LoRA 微调可在单张 24GB GPU 上运行；推荐使用 CUDA 环境。",[99,100,101,102],"transformers==4.55.0","diffusers","torch","RegionE",[14],[105,106,107],"image-editing","reasoning","visual-reasoning",4,"2026-03-27T02:49:30.150509","2026-04-06T06:46:05.534778",[112,117,122,127,132,137],{"id":113,"question_zh":114,"answer_zh":115,"source_url":116},373,"如何在单卡环境下配置 accelerate_config.yaml 进行微调？","原 YAML 文件默认为多卡训练配置，需修改以适配单卡。具体可参考 Issue #66 的解决方案：https:\u002F\u002Fgithub.com\u002Fstepfun-ai\u002FStep1X-Edit\u002Fissues\u002F66，作者已修复该问题。","https:\u002F\u002Fgithub.com\u002Fstepfun-ai\u002FStep1X-Edit\u002Fissues\u002F83",{"id":118,"question_zh":119,"answer_zh":120,"source_url":121},374,"微调后的模型输出结果如何使用？","微调输出的模型文件（如 
xxx.safetensors）应作为 LoRA 使用。在运行推理脚本 inference.py 时，添加参数 `--lora xxx.safetensors` 即可加载。","https:\u002F\u002Fgithub.com\u002Fstepfun-ai\u002FStep1X-Edit\u002Fissues\u002F66",{"id":123,"question_zh":124,"answer_zh":125,"source_url":126},375,"训练数据集的格式是什么？能否提供一个示例？","数据集格式为 JSON，包含目标图像路径、原始图像路径和编辑指令。示例如下：\n{\n  \"\u002Fpath\u002Fto\u002Foutput.png\": {\n    \"caption\": \"fix all the corrupted hands in the image\",\n    \"ref_image_path\": \"\u002Fpath\u002Fto\u002Finput.png\"\n  }\n}\n注意：output.png 应为修复后图像，input.png 为原始图像，且路径必须为绝对路径。","https:\u002F\u002Fgithub.com\u002Fstepfun-ai\u002FStep1X-Edit\u002Fissues\u002F78",{"id":128,"question_zh":129,"answer_zh":130,"source_url":131},376,"是否支持多 GPU 并行推理或训练？","已支持 Teacache 和并行推理功能。对于多 GPU 训练，若遇到 OOM 问题，建议检查显存分配或参考社区讨论调整 batch size 或梯度累积策略。","https:\u002F\u002Fgithub.com\u002Fstepfun-ai\u002FStep1X-Edit\u002Fissues\u002F14",{"id":133,"question_zh":134,"answer_zh":135,"source_url":136},377,"在不支持 bf16 的旧硬件上运行时生成黑图怎么办？","需手动将模型加载为 fp32 格式。当前代码会统一加载模型后再转换 VAE 到 fp32，可能导致问题。建议修改 inference.py，分别加载 DIT+LLM 和 AE 模型，并确保 VAE 始终使用 fp32。","https:\u002F\u002Fgithub.com\u002Fstepfun-ai\u002FStep1X-Edit\u002Fissues\u002F53",{"id":138,"question_zh":139,"answer_zh":140,"source_url":141},378,"本地部署效果比网页版差，可能是什么原因？","本地与网页版使用的模型和参数理论上一致。若效果差异大，建议检查 cfg_guidance 参数，默认值通常适用大多数情况。另可下载官方结果对比图至本地查看：https:\u002F\u002Fraw.githubusercontent.com\u002Fstepfun-ai\u002FStep1X-Edit\u002Frefs\u002Fheads\u002Fmain\u002Fassets\u002Fresults_show.png","https:\u002F\u002Fgithub.com\u002Fstepfun-ai\u002FStep1X-Edit\u002Fissues\u002F25",[]]