[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-bytedance--res-adapter":3,"tool-bytedance--res-adapter":64},[4,17,26,40,48,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,2,"2026-04-03T11:11:01",[13,14,15],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":23,"last_commit_at":32,"category_tags":33,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 
道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,34,35,36,15,37,38,13,39],"数据工具","视频","插件","其他","语言模型","音频",{"id":41,"name":42,"github_repo":43,"description_zh":44,"stars":45,"difficulty_score":10,"last_commit_at":46,"category_tags":47,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,38,37],{"id":49,"name":50,"github_repo":51,"description_zh":52,"stars":53,"difficulty_score":10,"last_commit_at":54,"category_tags":55,"status":16},519,"PaddleOCR","PaddlePaddle\u002FPaddleOCR","PaddleOCR 是一款基于百度飞桨框架开发的高性能开源光学字符识别工具包。它的核心能力是将图片、PDF 等文档中的文字提取出来，转换成计算机可读取的结构化数据，让机器真正“看懂”图文内容。\n\n面对海量纸质或电子文档，PaddleOCR 解决了人工录入效率低、数字化成本高的问题。尤其在人工智能领域，它扮演着连接图像与大型语言模型（LLM）的桥梁角色，能将视觉信息直接转化为文本输入，助力智能问答、文档分析等应用场景落地。\n\nPaddleOCR 适合开发者、算法研究人员以及有文档自动化需求的普通用户。其技术优势十分明显：不仅支持全球 100 多种语言的识别，还能在 Windows、Linux、macOS 等多个系统上运行，并灵活适配 CPU、GPU、NPU 等各类硬件。作为一个轻量级且社区活跃的开源项目，PaddleOCR 既能满足快速集成的需求，也能支撑前沿的视觉语言研究，是处理文字识别任务的理想选择。",74913,"2026-04-05T10:44:17",[38,14,13,37],{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":23,"last_commit_at":62,"category_tags":63,"status":16},2471,"tesseract","tesseract-ocr\u002Ftesseract","Tesseract 
是一款历史悠久且备受推崇的开源光学字符识别（OCR）引擎，最初由惠普实验室开发，后由 Google 维护，目前由全球社区共同贡献。它的核心功能是将图片中的文字转化为可编辑、可搜索的文本数据，有效解决了从扫描件、照片或 PDF 文档中提取文字信息的难题，是数字化归档和信息自动化的重要基础工具。\n\n在技术层面，Tesseract 展现了强大的适应能力。从版本 4 开始，它引入了基于长短期记忆网络（LSTM）的神经网络 OCR 引擎，显著提升了行识别的准确率；同时，为了兼顾旧有需求，它依然支持传统的字符模式识别引擎。Tesseract 原生支持 UTF-8 编码，开箱即用即可识别超过 100 种语言，并兼容 PNG、JPEG、TIFF 等多种常见图像格式。输出方面，它灵活支持纯文本、hOCR、PDF、TSV 等多种格式，方便后续数据处理。\n\nTesseract 主要面向开发者、研究人员以及需要构建文档处理流程的企业用户。由于它本身是一个命令行工具和库（libtesseract），不包含图形用户界面（GUI），因此最适合具备一定编程能力的技术人员集成到自动化脚本或应用程序中",73286,"2026-04-03T01:56:45",[13,14],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":79,"owner_email":79,"owner_twitter":80,"owner_website":81,"owner_url":82,"languages":83,"stars":88,"forks":89,"last_commit_at":90,"license":91,"difficulty_score":10,"env_os":92,"env_gpu":93,"env_ram":92,"env_deps":94,"category_tags":104,"github_topics":79,"view_count":23,"oss_zip_url":79,"oss_zip_packed_at":79,"status":16,"created_at":105,"updated_at":106,"faqs":107,"releases":143},2805,"bytedance\u002Fres-adapter","res-adapter","[AAAI 2025] Official codes of \"ResAdapter: Domain Consistent Resolution Adapter for Diffusion Models\".","ResAdapter 是一款专为扩散模型设计的“即插即用”分辨率适配器，旨在让 AI 绘画模型摆脱固定分辨率的限制。在使用传统扩散模型时，如果生成的图片尺寸与模型训练时的预设不符，往往会出现画面崩坏、风格漂移或细节模糊等问题。ResAdapter 巧妙地将图像分辨率信息作为条件注入模型，确保在任何自定义尺寸下，生成结果都能保持原本的训练风格和内容一致性，无需额外的微调训练或复杂的后期处理。\n\n这项技术的核心亮点在于其高效与便捷：它不需要重新训练模型，也不增加推理步骤，更不依赖风格迁移算法，仅通过加载轻量级的 LoRA 权重即可生效。目前，ResAdapter 已支持 ComfyUI、Hugging Face Spaces 等多种主流平台，并兼容 SDXL 等热门模型。\n\n无论是希望灵活控制输出尺寸的 AI 艺术家和设计师，还是致力于研究多分辨率生成机制的开发者与科研人员，ResAdapter 都能提供极大的便利。它让创作者不再受限于固定的画布比例，能够自由地生成横版、竖版或任意特殊比例的高质量图像，同时完美保留模型原有的艺术风格。","\u003Cdiv align=\"center\">\n\n\u003Ch1> ResAdapter: Domain Consistent Resolution Adapter for Diffusion Models  \u003C\u002Fh1>\n\nJiaxiang Cheng, Pan Xie*, Xin 
Xia, Jiashi Li, Jie Wu, Yuxi Ren, Huixia Li, Xuefeng Xiao, Min Zheng, Lean Fu (*Corresponding author)\n\nByteDance Inc.\n\n⭐ If ResAdapter is helpful to your images or projects, please help star this repo. Thanks! 🤗\n\n\n\u003Ca href='https:\u002F\u002Fres-adapter.github.io\u002F'>\u003Cimg src='https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FProject-Page-green'>\u003C\u002Fa> \n\u003Ca href='https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.02084'>\u003Cimg src='https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F Paper-Arxiv-red'>\u003C\u002Fa> \n\u003Ca href='https:\u002F\u002Fhuggingface.co\u002Fpapers\u002F2403.02084'>\u003Cimg src='https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F Paper-Huggingface-blue'>\u003C\u002Fa> \n![GitHub Org's stars](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fbytedance%2Fres-adapter)\n\n[![Hugging Face](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F%F0%9F%A4%97%20Hugging%20Face-Space-green)](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fjiaxiangc\u002Fres-adapter)\n[![Replicate](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FReplicate-Gradio-red)](https:\u002F\u002Freplicate.com\u002Fbytedance\u002Fres-adapter)\n[![ComfyUI](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FComfyUI-ResAdapter-blue)](https:\u002F\u002Fgithub.com\u002Fjiaxiangc\u002FComfyUI-ResAdapter)\n![visitors](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_ee1c83dc950b.png) \n\n**We propose ResAdapter, a plug-and-play resolution adapter for enabling any diffusion model to generate resolution-free images: no additional training, no additional inference and no style transfer.**\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_a73d4ff305b6.png\" width=\"49.9%\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_4917eeefd981.png\" width=\"50%\">\nComparison examples between resadapter and 
[dreamlike-diffusion-1.0](https:\u002F\u002Fcivitai.com\u002Fmodels\u002F1274\u002Fdreamlike-diffusion-10).\n\n\u003C\u002Fdiv>\n\n\n## Release\n- `[2024\u002F12\u002F10]` 🎉 ResAdapter is accepted by AAAI 2025.\n- `[2024\u002F04\u002F07]` 🔥 We release the official [gradio space](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fjiaxiangc\u002Fres-adapter) in Huggingface.\n- `[2024\u002F04\u002F05]` 🔥 We release the [resadapter_v2 weights](https:\u002F\u002Fhuggingface.co\u002Fjiaxiangc\u002Fres-adapter).\n- `[2024\u002F03\u002F30]` 🔥 We release the [ComfyUI-ResAdapter](https:\u002F\u002Fgithub.com\u002Fjiaxiangc\u002FComfyUI-ResAdapter).\n- `[2024\u002F03\u002F28]` 🔥 We release the [resadapter_v1 weights](https:\u002F\u002Fhuggingface.co\u002Fjiaxiangc\u002Fres-adapter).\n- `[2024\u002F03\u002F04]` 🔥 We release the [arxiv paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.02084).\n\u003C!-- - `[2024\u002F03\u002F12]` Code: 🔥 we release the [inference code](https:\u002F\u002Fgithub.com\u002Fbytedance\u002Fres-adapter\u002Fblob\u002Fmain\u002Fmain.py). -->\n\n\n## Quicktour\n\nWe provide a standalone [example code](quicktour.py) to help you quickly use resadapter with diffusion models.\n\n\u003Cdiv align=center>\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_c19d5335f72b.png\" width=\"100%\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_1fd6afd534cc.png\" width=\"100%\">\n\nComparison examples (640x384) between resadapter and [dreamshaper-xl-1.0](https:\u002F\u002Fhuggingface.co\u002FLykon\u002Fdreamshaper-xl-1-0). Top: with resadapter. 
Bottom: without resadapter.\n\n\u003C\u002Fdiv>\n\n```python\n# pip install diffusers transformers accelerate safetensors huggingface_hub\nimport torch\nfrom torchvision.utils import save_image\nfrom safetensors.torch import load_file\nfrom huggingface_hub import hf_hub_download\nfrom diffusers import AutoPipelineForText2Image, DPMSolverMultistepScheduler\n\ngenerator = torch.manual_seed(0)\nprompt = \"portrait photo of muscular bearded guy in a worn mech suit, light bokeh, intricate, steel metal, elegant, sharp focus, soft lighting, vibrant colors\"\nwidth, height = 640, 384\n\n# Load baseline pipe\nmodel_name = \"lykon-models\u002Fdreamshaper-xl-1-0\"\npipe = AutoPipelineForText2Image.from_pretrained(model_name, torch_dtype=torch.float16, variant=\"fp16\").to(\"cuda\")\npipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config, use_karras_sigmas=True, algorithm_type=\"sde-dpmsolver++\")\n\n# Inference baseline pipe\nimage = pipe(prompt, width=width, height=height, num_inference_steps=25, num_images_per_prompt=4, output_type=\"pt\").images\nsave_image(image, \"image_baseline.png\", normalize=True, padding=0)\n\n# Load resadapter for baseline\nresadapter_model_name = \"resadapter_v1_sdxl\"\npipe.load_lora_weights(\n    hf_hub_download(repo_id=\"jiaxiangc\u002Fres-adapter\", subfolder=resadapter_model_name, filename=\"pytorch_lora_weights.safetensors\"), \n    adapter_name=\"res_adapter\",\n    ) # load lora weights\npipe.set_adapters([\"res_adapter\"], adapter_weights=[1.0])\npipe.unet.load_state_dict(\n    load_file(hf_hub_download(repo_id=\"jiaxiangc\u002Fres-adapter\", subfolder=resadapter_model_name, filename=\"diffusion_pytorch_model.safetensors\")),\n    strict=False,\n    ) # load norm weights\n\n# Inference resadapter pipe\nimage = pipe(prompt, width=width, height=height, num_inference_steps=25, num_images_per_prompt=4, output_type=\"pt\").images\nsave_image(image, \"image_resadapter.png\", normalize=True, padding=0)\n```\n\n## 
Download\n\n### Models\n\nWe have released all resadapter weights, you can download resadapter models from [Huggingface](https:\u002F\u002Fhuggingface.co\u002Fjiaxiangc\u002Fres-adapter\u002Ftree\u002Fmain). The following is our resadapter model card:\n\n|Models  | Parameters | Resolution Range | Ratio Range | Links |\n| --- | --- |--- | --- | --- |\n|resadapter_v2_sd1.5| 0.9M | 128 \u003C= x \u003C= 1024 | 0.28 \u003C= r \u003C= 3.5 | [Download](https:\u002F\u002Fhuggingface.co\u002Fjiaxiangc\u002Fres-adapter\u002Ftree\u002Fmain\u002Fresadapter_v2_sd1.5)|\n|resadapter_v2_sdxl| 0.5M | 256 \u003C= x \u003C= 1536 | 0.28 \u003C= r \u003C= 3.5 | [Download](https:\u002F\u002Fhuggingface.co\u002Fjiaxiangc\u002Fres-adapter\u002Ftree\u002Fmain\u002Fresadapter_v2_sdxl)|\n|resadapter_v1_sd1.5| 0.9M | 128 \u003C= x \u003C= 1024 | 0.5 \u003C= r \u003C= 2 | [Download](https:\u002F\u002Fhuggingface.co\u002Fjiaxiangc\u002Fres-adapter\u002Ftree\u002Fmain\u002Fresadapter_v1_sd1.5)|\n|resadapter_v1_sd1.5_extrapolation| 0.9M | 512 \u003C= x \u003C= 1024 | 0.5 \u003C= r \u003C= 2  | [Download](https:\u002F\u002Fhuggingface.co\u002Fjiaxiangc\u002Fres-adapter\u002Ftree\u002Fmain\u002Fresadapter_v1_sd1.5_extrapolation)|\n|resadapter_v1_sd1.5_interpolation| 0.9M | 128 \u003C= x \u003C= 512 | 0.5 \u003C= r \u003C= 2  | [Download](https:\u002F\u002Fhuggingface.co\u002Fjiaxiangc\u002Fres-adapter\u002Ftree\u002Fmain\u002Fresadapter_v1_sd1.5_interpolation)|\n|resadapter_v1_sdxl| 0.5M | 256 \u003C= x \u003C= 1536 | 0.5 \u003C= r \u003C= 2  | [Download](https:\u002F\u002Fhuggingface.co\u002Fjiaxiangc\u002Fres-adapter\u002Ftree\u002Fmain\u002Fresadapter_v1_sdxl) |\n|resadapter_v1_sdxl_extrapolation| 0.5M | 1024 \u003C= x \u003C= 1536 | 0.5 \u003C= r \u003C= 2  | [Download](https:\u002F\u002Fhuggingface.co\u002Fjiaxiangc\u002Fres-adapter\u002Ftree\u002Fmain\u002Fresadapter_v1_sdxl_extrapolation) |\n|resadapter_v1_sdxl_interpolation| 0.5M | 256 \u003C= x \u003C= 1024 | 0.5 \u003C= r \u003C= 2  | 
[Download](https:\u002F\u002Fhuggingface.co\u002Fjiaxiangc\u002Fres-adapter\u002Ftree\u002Fmain\u002Fresadapter_v1_sdxl_interpolation) |\n\nHint 1: We updated the resadapter name format to follow [controlnet](https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FControlNet-v1-1-nightly).\n\nHint 2: If you want to use resadapter with personalized diffusion models, you should download them from [CivitAI](https:\u002F\u002Fcivitai.com\u002F).\n\nHint 3: If you want to use resadapter with ip-adapter, controlnet and lcm-lora, you should download them from [Huggingface](https:\u002F\u002Fhuggingface.co\u002Fjiaxiangc\u002Fres-adapter).\n\nHint 4: Here is an [installation guide](models\u002FREADME.md) for preparing the environment and downloading models.\n\n## Inference\n\nIf you want to generate images with our inference script, you should install the dependency libraries and download the related models according to the [installation guide](models\u002FREADME.md). After filling in the [example configs](configs), you can run this script directly.\n\n```bash\npython main.py --config \u002Fpath\u002Fto\u002Ffile\n```\n\n### ResAdapter with Personalized Models for Text to Image\n\n\u003Cdiv align=center>\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_93652e8a2eef.jpg\" width=\"25%\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_07cd98fd69f3.jpg\" width=\"25%\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_4e42b4f3b19c.jpg\" width=\"25%\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_53adc50c4779.jpg\" width=\"25%\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_0ef8495f661f.jpg\" width=\"25%\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_09e24a905162.jpg\" width=\"25%\">\u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_4bcd92ad9c97.jpg\" width=\"25%\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_6c962a0cfd8a.jpg\" width=\"25%\">\n\nComparison examples (960x1104) between resadapter and [dreamshaper-7](https:\u002F\u002Fcivitai.com\u002Fmodels\u002F1274\u002Fdreamlike-diffusion-10). Top: with resadapter. Bottom: without resadapter.\n\n\u003C\u002Fdiv>\n\n### ResAdapter with ControlNet for Image to Image\n\n\u003Cdiv align=center>\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_3e035a8386b7.jpg\" width=\"20%\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_26e97bf0917a.jpg\" width=\"20%\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_3e2ecba2b64e.jpg\" width=\"20%\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_0071fbab291e.jpg\" width=\"20%\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_ade857ef25f9.jpg\" width=\"20%\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_3e035a8386b7.jpg\" width=\"20%\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_c0177c5c7547.jpg\" width=\"20%\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_8ef6e46a9a36.jpg\" width=\"20%\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_9c7367721bd7.jpg\" width=\"20%\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_090e89572261.jpg\" width=\"20%\">\n\nComparison examples (840x1264) between resadapter and 
[lllyasviel\u002Fsd-controlnet-canny](https:\u002F\u002Fhuggingface.co\u002Flllyasviel\u002Fsd-controlnet-canny). Top: with resadapter, bottom: without resadapter.\n\n\u003C\u002Fdiv>\n\n### ResAdapter with ControlNet-XL for Image to Image\n\n\u003Cdiv align=center>\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_c599d10d3a14.jpg\" width=\"20%\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_b39cba54186d.jpg\" width=\"20%\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_02696e013ccd.jpg\" width=\"20%\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_a66e80b0527f.jpg\" width=\"20%\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_c430dccef1f4.jpg\" width=\"20%\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_c599d10d3a14.jpg\" width=\"20%\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_cf92b528307e.jpg\" width=\"20%\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_323da33a1943.jpg\" width=\"20%\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_2c5d5a397a10.jpg\" width=\"20%\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_ec64ae9d7425.jpg\" width=\"20%\">\n\nComparison examples (336x504) between resadapter and [diffusers\u002Fcontrolnet-canny-sdxl-1.0](https:\u002F\u002Fhuggingface.co\u002Fdiffusers\u002Fcontrolnet-canny-sdxl-1.0). 
Top: with resadapter, bottom: without resadapter.\n\n\u003C\u002Fdiv>\n\n### ResAdapter with IP-Adapter for Face Variance\n\n\u003Cdiv align=center>\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_6d94b4e7a452.png\" width=\"20%\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_4196b6496b7c.jpg\" width=\"20%\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_dc274ca0d363.jpg\" width=\"20%\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_d015f3065061.jpg\" width=\"20%\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_f77569a89601.jpg\" width=\"20%\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_6d94b4e7a452.png\" width=\"20%\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_e631bc096768.jpg\" width=\"20%\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_c50c19c27fa4.jpg\" width=\"20%\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_e5f5650dd4fd.jpg\" width=\"20%\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_61f3fc34f6d9.jpg\" width=\"20%\">\n\nComparison examples (864x1024) between resadapter and [h94\u002FIP-Adapter](https:\u002F\u002Fhuggingface.co\u002Fh94\u002FIP-Adapter). 
Top: with resadapter, bottom: without resadapter.\n\n\n\u003C\u002Fdiv>\n\n\n### ResAdapter with LCM-LoRA for Speeding up\n\n\u003Cdiv align=center>\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_155d9398346e.jpg\" width=\"20%\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_8a0baf0a37af.jpg\" width=\"20%\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_e457cc72c7fa.jpg\" width=\"20%\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_fa09a265e2c5.jpg\" width=\"20%\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_acb773d1cd73.jpg\" width=\"20%\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_9150eb509a59.jpg\" width=\"20%\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_2d4235ed4b82.jpg\" width=\"20%\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_0a7d3448d3d7.jpg\" width=\"20%\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_cde930169897.jpg\" width=\"20%\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_6ec62d9025d7.jpg\" width=\"20%\">\n\nComparison examples (512x512) between resadapter and [dreamshaper-xl-1.0](https:\u002F\u002Fhuggingface.co\u002FLykon\u002Fdreamshaper-xl-1-0) with [lcm-sdxl-lora](https:\u002F\u002Fhuggingface.co\u002Flatent-consistency\u002Flcm-lora-sdxl). 
Top: with resadapter, bottom: without resadapter.\n\n\n\u003C\u002Fdiv>\n\n## Community Resource\n\n### Gradio\n- Replicate website: [bytedance\u002Fres-adapter](https:\u002F\u002Freplicate.com\u002Fbytedance\u002Fres-adapter) by [@Chenxi](https:\u002F\u002Fgithub.com\u002Fchenxwh)\n- Huggingface space: \n  - [jiaxiangc\u002Fres-adapter](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fjiaxiangc\u002Fres-adapter) (official space)\n  - [ameerazam08\u002FRes-Adapter-GPU-Demo](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fameerazam08\u002FRes-Adapter-GPU-Demo) by [@Ameer Azam](https:\u002F\u002Fgithub.com\u002FAMEERAZAM08)\n\nA text-to-image example of res-adapter in the huggingface space. More information is available in [jiaxiangc\u002Fres-adapter](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fjiaxiangc\u002Fres-adapter).\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_7249e4048d4b.png\">\n  \n### ComfyUI\n- [jiaxiangc\u002FComfyUI-ResAdapter](https:\u002F\u002Fgithub.com\u002Fjiaxiangc\u002FComfyUI-ResAdapter) (official comfyui node)\n- [blepping\u002FComfyUI-ApplyResAdapterUnet](https:\u002F\u002Fgithub.com\u002Fblepping\u002FComfyUI-ApplyResAdapterUnet) by [@blepping](https:\u002F\u002Fgithub.com\u002Fblepping)\n\nA text-to-image example of ComfyUI-ResAdapter. More examples with lcm-lora, controlnet and ipadapter can be found in [ComfyUI-ResAdapter](https:\u002F\u002Fgithub.com\u002Fjiaxiangc\u002FComfyUI-ResAdapter\u002Ftree\u002Fmain).\n\nhttps:\u002F\u002Fgithub.com\u002Fjiaxiangc\u002FComfyUI-ResAdapter\u002Fassets\u002F162297627\u002F82453931-23de-4f72-8a9c-1053c4c8d81a\n\n### WebUI\n\nI am learning how to make a webui extension.\n\n## Local Gradio Demo\n\nRun the following script:\n\n```bash\n# pip install peft gradio httpx==0.23.3\npython app.py\n```\n\n\n\n## Usage Tips\n\n1. If you are not satisfied with interpolation images, try increasing the alpha of resadapter to 1.0.\n2. 
If you are not satisfied with extrapolation images, try choosing an alpha of resadapter in 0.3 ~ 0.7.\n3. If you find style conflicts in the images, try decreasing the alpha of resadapter.\n4. If you find resadapter is not compatible with other acceleration LoRAs, try decreasing the alpha of resadapter to 0.5 ~ 0.7.\n\n## Acknowledgements\n\n- ResAdapter is developed by the AutoML Team at ByteDance Inc., all copyrights reserved.\n- Thanks to the [HuggingFace](https:\u002F\u002Fhuggingface.co\u002F) gradio team for their free GPU support!\n- Thanks to [IP-Adapter](https:\u002F\u002Fhuggingface.co\u002Fh94\u002FIP-Adapter), [ControlNet](https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FControlNet-v1-1-nightly) and [LCM-LoRA](https:\u002F\u002Fhuggingface.co\u002Flatent-consistency\u002Flcm-lora-sdxl) for their nice work.\n- Thanks to [@Chenxi](https:\u002F\u002Fgithub.com\u002Fchenxwh) and [@AMEERAZAM08](https:\u002F\u002Fgithub.com\u002FAMEERAZAM08) for providing gradio demos.\n- Thanks to [@fengyuzzz](https:\u002F\u002Fgithub.com\u002Ffengyuzzz) for supporting video demos in [ComfyUI-ResAdapter](https:\u002F\u002Fgithub.com\u002Fjiaxiangc\u002FComfyUI-ResAdapter\u002Ftree\u002Fmain).\n\n## Star History\n[![Star History Chart](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_eaaa06d34cb0.png)](https:\u002F\u002Fstar-history.com\u002F#bytedance\u002Fres-adapter&Date)\n\n## Citation\nIf you find ResAdapter useful for your research and applications, please cite us using this BibTeX:\n```\n@inproceedings{cheng2025resadapter,\n  title={Resadapter: Domain consistent resolution adapter for diffusion models},\n  author={Cheng, Jiaxiang and Xie, Pan and Xia, Xin and Li, Jiashi and Wu, Jie and Ren, Yuxi and Li, Huixia and Xiao, Xuefeng and Wen, Shilei and Fu, Lean},\n  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},\n  volume={39},\n  number={3},\n  pages={2438--2446},\n  year={2025}\n}\n```\nFor any questions, please feel free to 
contact us via jiaxiangcc@gmail.com or xiepan.01@bytedance.com.\n","\u003Cdiv align=\"center\">\n\n\u003Ch1> ResAdapter：用于扩散模型的领域一致性分辨率适配器 \u003C\u002Fh1>\n\n程家祥、谢攀*、夏欣、李嘉实、吴杰、任宇熙、李慧霞、肖雪峰、郑敏、傅LEAN (*通讯作者)\n\n字节跳动公司\n\n⭐ 如果 ResAdapter 对您的图像或项目有所帮助，请为本仓库点亮星标。谢谢！🤗\n\n\n\u003Ca href='https:\u002F\u002Fres-adapter.github.io\u002F'>\u003Cimg src='https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FProject-Page-green'>\u003C\u002Fa> \n\u003Ca href='https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.02084'>\u003Cimg src='https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F Paper-Arxiv-red'>\u003C\u002Fa> \n\u003Ca href='https:\u002F\u002Fhuggingface.co\u002Fpapers\u002F2403.02084'>\u003Cimg src='https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F Paper-Huggingface-blue'>\u003C\u002Fa> \n![GitHub Org's stars](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fbytedance%2Fres-adapter)\n\n[![Hugging Face](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F%F0%9F%A4%97%20Hugging%20Face-Space-green)](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fjiaxiangc\u002Fres-adapter)\n[![Replicate](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FReplicate-Gradio-red)](https:\u002F\u002Freplicate.com\u002Fbytedance\u002Fres-adapter)\n[![ComfyUI](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FComfyUI-ResAdapter-blue)](https:\u002F\u002Fgithub.com\u002Fjiaxiangc\u002FComfyUI-ResAdapter)\n![visitors](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_ee1c83dc950b.png) \n\n**我们提出 ResAdapter，一种即插即用的分辨率适配器，使任何扩散模型都能生成与分辨率无关的图像：无需额外训练，无需额外推理，也无需风格迁移。**\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_a73d4ff305b6.png\" width=\"49.9%\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_4917eeefd981.png\" width=\"50%\">\nresadapter 与 [dreamlike-diffusion-1.0](https:\u002F\u002Fcivitai.com\u002Fmodels\u002F1274\u002Fdreamlike-diffusion-10) 
的对比示例。\n\n\u003C\u002Fdiv>\n\n\n## 发布\n- `[2024\u002F12\u002F10]` 🎉 ResAdapter 被 AAAI 2025 接收。\n- `[2024\u002F04\u002F07]` 🔥 我们在 Huggingface 上发布了官方 [gradio space](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fjiaxiangc\u002Fres-adapter)。\n- `[2024\u002F04\u002F05]` 🔥 我们发布了 [resadapter_v2 权重](https:\u002F\u002Fhuggingface.co\u002Fjiaxiangc\u002Fres-adapter)。\n- `[2024\u002F03\u002F30]` 🔥 我们发布了 [ComfyUI-ResAdapter](https:\u002F\u002Fgithub.com\u002Fjiaxiangc\u002FComfyUI-ResAdapter)。\n- `[2024\u002F03\u002F28]` 🔥 我们发布了 [resadapter_v1 权重](https:\u002F\u002Fhuggingface.co\u002Fjiaxiangc\u002Fres-adapter)。\n- `[2024\u002F03\u002F04]` 🔥 我们发布了 [arxiv 论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.02084)。\n\u003C!-- - `[2024\u002F03\u002F12]` Code: 🔥 we release the [inference code](https:\u002F\u002Fgithub.com\u002Fbytedance\u002Fres-adapter\u002Fblob\u002Fmain\u002Fmain.py). -->\n\n\n## 快速入门\n\n我们提供了一个独立的 [示例代码](quicktour.py)，帮助您快速使用 resadapter 与扩散模型结合。\n\n\u003Cdiv align=center>\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_c19d5335f72b.png\" width=\"100%\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_1fd6afd534cc.png\" width=\"100%\">\n\nresadapter 与 [dreamshaper-xl-1.0](https:\u002F\u002Fhuggingface.co\u002FLykon\u002Fdreamshaper-xl-1-0) 的对比示例（640x384）。上图：使用 resadapter；下图：未使用 resadapter。\n\n\u003C\u002Fdiv>\n\n```python\n# pip install diffusers, transformers, accelerate, safetensors, huggingface_hub\nimport torch\nfrom torchvision.utils import save_image\nfrom safetensors.torch import load_file\nfrom huggingface_hub import hf_hub_download\nfrom diffusers import AutoPipelineForText2Image, DPMSolverMultistepScheduler\n\ngenerator = torch.manual_seed(0)\nprompt = \"穿着破旧机甲服的肌肉发达、留着胡子的男子人像照片，浅景深散景，细节丰富，钢铁质感，优雅，焦点锐利，柔和光线，色彩鲜艳\"\nwidth, height = 640, 384\n\n# 加载基线管道\nmodel_name = \"lykon-models\u002Fdreamshaper-xl-1-0\"\npipe = 
AutoPipelineForText2Image.from_pretrained(model_name, torch_dtype=torch.float16, variant=\"fp16\").to(\"cuda\")\npipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config, use_karras_sigmas=True, algorithm_type=\"sde-dpmsolver++\")\n\n# 基线管道推理\nimage = pipe(prompt, width=width, height=height, num_inference_steps=25, num_images_per_prompt=4, output_type=\"pt\").images\nsave_image(image, \"image_baseline.png\", normalize=True, padding=0)\n\n# 加载 resadapter 用于基线\nresadapter_model_name = \"resadapter_v1_sdxl\"\npipe.load_lora_weights(\n    hf_hub_download(repo_id=\"jiaxiangc\u002Fres-adapter\", subfolder=resadapter_model_name, filename=\"pytorch_lora_weights.safetensors\"),\n    adapter_name=\"res_adapter\",\n) # 加载 lora 权重\npipe.set_adapters([\"res_adapter\"], adapter_weights=[1.0])\npipe.unet.load_state_dict(\n    load_file(hf_hub_download(repo_id=\"jiaxiangc\u002Fres-adapter\", subfolder=resadapter_model_name, filename=\"diffusion_pytorch_model.safetensors\")),\n    strict=False,\n) # 加载 norm 权重\n\n# resadapter 管道推理\nimage = pipe(prompt, width=width, height=height, num_inference_steps=25, num_images_per_prompt=4, output_type=\"pt\").images\nsave_image(image, \"image_resadapter.png\", normalize=True, padding=0)\n```\n\n## 下载\n\n### 模型\n\n我们已发布所有 resadapter 权重，您可以从 [Huggingface](https:\u002F\u002Fhuggingface.co\u002Fjiaxiangc\u002Fres-adapter\u002Ftree\u002Fmain) 下载 resadapter 模型。以下是我们的 resadapter 模型卡片：\n\n|模型  | 参数 | 分辨率范围 | 宽高比范围 | 链接 |\n| --- | --- |--- | --- | --- |\n|resadapter_v2_sd1.5| 0.9M | 128 \u003C= x \u003C= 1024 | 0.28 \u003C= r \u003C= 3.5 | [下载](https:\u002F\u002Fhuggingface.co\u002Fjiaxiangc\u002Fres-adapter\u002Ftree\u002Fmain\u002Fresadapter_v2_sd1.5)|\n|resadapter_v2_sdxl| 0.5M | 256 \u003C= x \u003C= 1536 | 0.28 \u003C= r \u003C= 3.5 | [下载](https:\u002F\u002Fhuggingface.co\u002Fjiaxiangc\u002Fres-adapter\u002Ftree\u002Fmain\u002Fresadapter_v2_sdxl)|\n|resadapter_v1_sd1.5| 0.9M | 128 \u003C= x \u003C= 1024 | 0.5 \u003C= r \u003C= 2 | 
[下载](https:\u002F\u002Fhuggingface.co\u002Fjiaxiangc\u002Fres-adapter\u002Ftree\u002Fmain\u002Fresadapter_v1_sd1.5)|\n|resadapter_v1_sd1.5_extrapolation| 0.9M | 512 \u003C= x \u003C= 1024 | 0.5 \u003C= r \u003C= 2  | [下载](https:\u002F\u002Fhuggingface.co\u002Fjiaxiangc\u002Fres-adapter\u002Ftree\u002Fmain\u002Fresadapter_v1_sd1.5_extrapolation)|\n|resadapter_v1_sd1.5_interpolation| 0.9M | 128 \u003C= x \u003C= 512 | 0.5 \u003C= r \u003C= 2  | [下载](https:\u002F\u002Fhuggingface.co\u002Fjiaxiangc\u002Fres-adapter\u002Ftree\u002Fmain\u002Fresadapter_v1_sd1.5_interpolation)|\n|resadapter_v1_sdxl| 0.5M | 256 \u003C= x \u003C= 1536 | 0.5 \u003C= r \u003C= 2  | [下载](https:\u002F\u002Fhuggingface.co\u002Fjiaxiangc\u002Fres-adapter\u002Ftree\u002Fmain\u002Fresadapter_v1_sdxl) |\n|resadapter_v1_sdxl_extrapolation| 0.5M | 1024 \u003C= x \u003C= 1536 | 0.5 \u003C= r \u003C= 2  | [下载](https:\u002F\u002Fhuggingface.co\u002Fjiaxiangc\u002Fres-adapter\u002Ftree\u002Fmain\u002Fresadapter_v1_sdxl_extrapolation) |\n|resadapter_v1_sdxl_interpolation| 0.5M | 256 \u003C= x \u003C= 1024 | 0.5 \u003C= r \u003C= 2  | [下载](https:\u002F\u002Fhuggingface.co\u002Fjiaxiangc\u002Fres-adapter\u002Ftree\u002Fmain\u002Fresadapter_v1_sdxl_interpolation) |\n\n提示1：我们根据 [controlnet](https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FControlNet-v1-1-nightly) 更新了 resadapter 的命名格式。\n\n提示2：如果您想将 resadapter 与个性化扩散模型一起使用，应从 [CivitAI](https:\u002F\u002Fcivitai.com\u002F) 下载这些个性化模型。\n\n提示3：如果您想将 resadapter 与 ip-adapter、controlnet 和 lcm-lora 一起使用，应从 [Huggingface](https:\u002F\u002Fhuggingface.co\u002Fjiaxiangc\u002Fres-adapter) 下载相应的模型权重。\n\n提示4：这里有一份 [安装指南](models\u002FREADME.md)，用于准备环境和下载模型。\n\n## 推理\n\n如果您想使用我们的推理脚本生成图像，请先按照[安装指南](models\u002FREADME.md)安装依赖库并下载相关模型。填写完[示例配置文件](configs)后，即可直接运行以下脚本：\n\n```bash\npython main.py --config \u002Fpath\u002Fto\u002Ffile\n```\n\n### 基于ResAdapter与个性化模型的文生图\n\n\u003Cdiv align=center>\n\n\u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_93652e8a2eef.jpg\" width=\"25%\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_07cd98fd69f3.jpg\" width=\"25%\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_4e42b4f3b19c.jpg\" width=\"25%\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_53adc50c4779.jpg\" width=\"25%\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_0ef8495f661f.jpg\" width=\"25%\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_09e24a905162.jpg\" width=\"25%\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_4bcd92ad9c97.jpg\" width=\"25%\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_6c962a0cfd8a.jpg\" width=\"25%\">\n\n对比示例（960×1104），上方为使用ResAdapter的结果，下方为未使用ResAdapter的结果，对比对象为[dreamshaper-7](https:\u002F\u002Fcivitai.com\u002Fmodels\u002F1274\u002Fdreamlike-diffusion-10)。\n\n\u003C\u002Fdiv>\n\n### 基于ResAdapter与ControlNet的图生图\n\n\u003Cdiv align=center>\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_3e035a8386b7.jpg\" width=\"20%\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_26e97bf0917a.jpg\" width=\"20%\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_3e2ecba2b64e.jpg\" width=\"20%\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_0071fbab291e.jpg\" width=\"20%\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_ade857ef25f9.jpg\" width=\"20%\">\n\u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_3e035a8386b7.jpg\" width=\"20%\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_c0177c5c7547.jpg\" width=\"20%\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_8ef6e46a9a36.jpg\" width=\"20%\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_9c7367721bd7.jpg\" width=\"20%\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_090e89572261.jpg\" width=\"20%\">\n\n对比示例（840×1264），上方为使用ResAdapter的结果，下方为未使用ResAdapter的结果，对比对象为[lllyasviel\u002Fsd-controlnet-canny](https:\u002F\u002Fhuggingface.co\u002Flllyasviel\u002Fsd-controlnet-canny)。\n\n\u003C\u002Fdiv>\n\n### 基于ResAdapter与ControlNet-XL的图生图\n\n\u003Cdiv align=center>\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_c599d10d3a14.jpg\" width=\"20%\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_b39cba54186d.jpg\" width=\"20%\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_02696e013ccd.jpg\" width=\"20%\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_a66e80b0527f.jpg\" width=\"20%\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_c430dccef1f4.jpg\" width=\"20%\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_c599d10d3a14.jpg\" width=\"20%\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_cf92b528307e.jpg\" width=\"20%\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_323da33a1943.jpg\" width=\"20%\">\u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_2c5d5a397a10.jpg\" width=\"20%\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_ec64ae9d7425.jpg\" width=\"20%\">\n\n对比示例（336×504），上方为使用ResAdapter的结果，下方为未使用ResAdapter的结果，对比对象为[diffusers\u002Fcontrolnet-canny-sdxl-1.0](https:\u002F\u002Fhuggingface.co\u002Fdiffusers\u002Fcontrolnet-canny-sdxl-1.0)。\n\n\u003C\u002Fdiv>\n\n### 基于ResAdapter与IP-Adapter的人脸风格化\n\n\u003Cdiv align=center>\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_6d94b4e7a452.png\" width=\"20%\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_4196b6496b7c.jpg\" width=\"20%\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_dc274ca0d363.jpg\" width=\"20%\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_d015f3065061.jpg\" width=\"20%\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_f77569a89601.jpg\" width=\"20%\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_6d94b4e7a452.png\" width=\"20%\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_e631bc096768.jpg\" width=\"20%\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_c50c19c27fa4.jpg\" width=\"20%\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_e5f5650dd4fd.jpg\" width=\"20%\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_61f3fc34f6d9.jpg\" 
width=\"20%\">\n\n对比示例（864×1024），上方为使用ResAdapter的结果，下方为未使用ResAdapter的结果，对比对象为[h94\u002FIP-Adapter](https:\u002F\u002Fhuggingface.co\u002Fh94\u002FIP-Adapter)。\n\n\n\u003C\u002Fdiv>\n\n\n### 基于ResAdapter与LCM-LoRA的加速\n\n\u003Cdiv align=center>\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_155d9398346e.jpg\" width=\"20%\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_8a0baf0a37af.jpg\" width=\"20%\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_e457cc72c7fa.jpg\" width=\"20%\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_fa09a265e2c5.jpg\" width=\"20%\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_acb773d1cd73.jpg\" width=\"20%\">\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_9150eb509a59.jpg\" width=\"20%\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_2d4235ed4b82.jpg\" width=\"20%\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_0a7d3448d3d7.jpg\" width=\"20%\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_cde930169897.jpg\" width=\"20%\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_6ec62d9025d7.jpg\" width=\"20%\">\n\n对比示例（512×512），上方为使用ResAdapter的结果，下方为未使用ResAdapter的结果，对比对象为[dreamshaper-xl-1.0](https:\u002F\u002Fhuggingface.co\u002FLykon\u002Fdreamshaper-xl-1-0)，采用[lcm-sdxl-lora](https:\u002F\u002Fhuggingface.co\u002Flatent-consistency\u002Flcm-lora-sdxl)。\n\n\n\u003C\u002Fdiv>\n\n## 社区资源\n\n### Gradio\n- 
Replicate网站：由[@Chenxi](https:\u002F\u002Fgithub.com\u002Fchenxwh)提供的[bytedance\u002Fres-adapter](https:\u002F\u002Freplicate.com\u002Fbytedance\u002Fres-adapter)\n- Huggingface空间：\n  - [jiaxiangc\u002Fres-adapter](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fjiaxiangc\u002Fres-adapter)（官方空间）\n  - [ameerazam08\u002FRes-Adapter-GPU-Demo](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fameerazam08\u002FRes-Adapter-GPU-Demo)，由[@Ameer Azam](https:\u002F\u002Fgithub.com\u002FAMEERAZAM08)提供\n\nHuggingface空间中关于ResAdapter的文生图示例。更多信息请参见[jiaxiangc\u002Fres-adapter](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fjiaxiangc\u002Fres-adapter)。\n\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_7249e4048d4b.png\">\n\n### ComfyUI\n- [jiaxiangc\u002FComfyUI-ResAdapter](https:\u002F\u002Fgithub.com\u002Fjiaxiangc\u002FComfyUI-ResAdapter)（官方ComfyUI节点）\n- [blepping\u002FComfyUI-ApplyResAdapterUnet](https:\u002F\u002Fgithub.com\u002Fblepping\u002FComfyUI-ApplyResAdapterUnet)，由[@blepping](https:\u002F\u002Fgithub.com\u002Fblepping)提供\n\nComfyUI-ResAdapter的文生图示例。更多关于lcm-lora、controlnet和ipadapter的示例可在[ComfyUI-ResAdapter](https:\u002F\u002Fgithub.com\u002Fjiaxiangc\u002FComfyUI-ResAdapter\u002Ftree\u002Fmain)中找到。\n\nhttps:\u002F\u002Fgithub.com\u002Fjiaxiangc\u002FComfyUI-ResAdapter\u002Fassets\u002F162297627\u002F82453931-23de-4f72-8a9c-1053c4c8d81a\n\n### WebUI\n\n我正在学习如何制作WebUI扩展。\n\n## 本地Gradio演示\n\n运行以下脚本：\n\n```bash\n# 需要额外安装 peft、gradio 以及 httpx==0.23.3\npip install peft gradio httpx==0.23.3\npython app.py\n```\n\n## 使用技巧\n\n1. 如果对插值（低于原生分辨率）生成的图像不满意，可尝试将ResAdapter的alpha值调至1.0。\n2. 若对外推（高于原生分辨率）生成的图像不甚满意，建议将ResAdapter的alpha值设为0.3至0.7之间。\n3. 如发现生成图像存在风格冲突，可适当降低ResAdapter的alpha值。\n4. 
若发现ResAdapter与其他加速LoRA不兼容，可将ResAdapter的alpha值调整至0.5至0.7之间。\n\n## 致谢\n\n- ResAdapter 由字节跳动公司的 AutoML 团队开发，版权所有。\n- 感谢 [HuggingFace](https:\u002F\u002Fhuggingface.co\u002F) Gradio 团队提供的免费 GPU 支持！\n- 感谢 [IP-Adapter](https:\u002F\u002Fhuggingface.co\u002Fh94\u002FIP-Adapter)、[ControlNet](https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FControlNet-v1-1-nightly)、[LCM-LoRA](https:\u002F\u002Fhuggingface.co\u002Flatent-consistency\u002Flcm-lora-sdxl) 的优秀工作。\n- 感谢 [@Chenxi](https:\u002F\u002Fgithub.com\u002Fchenxwh) 和 [@AMEERAZAM08](https:\u002F\u002Fgithub.com\u002FAMEERAZAM08) 提供 Gradio 演示。\n- 感谢 [@fengyuzzz](https:\u002F\u002Fgithub.com\u002Ffengyuzzz) 在 [ComfyUI-ResAdapter](https:\u002F\u002Fgithub.com\u002Fjiaxiangc\u002FComfyUI-ResAdapter\u002Ftree\u002Fmain) 中支持视频演示。\n\n## 星标历史\n[![星标历史图表](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_readme_eaaa06d34cb0.png)](https:\u002F\u002Fstar-history.com\u002F#bytedance\u002Fres-adapter&Date)\n\n## 引用\n如果您在研究和应用中发现 ResAdapter 非常有用，请使用以下 BibTeX 格式引用我们：\n```\n@inproceedings{cheng2025resadapter,\n  title={ResAdapter: Domain Consistent Resolution Adapter for Diffusion Models},\n  author={Cheng, Jiaxiang and Xie, Pan and Xia, Xin and Li, Jiashi and Wu, Jie and Ren, Yuxi and Li, Huixia and Xiao, Xuefeng and Wen, Shilei and Fu, Lean},\n  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},\n  volume={39},\n  number={3},\n  pages={2438--2446},\n  year={2025}\n}\n```\n如有任何问题，请随时通过 jiaxiangcc@gmail.com 或 xiepan.01@bytedance.com 与我们联系。","# ResAdapter 快速上手指南\n\nResAdapter 是一个即插即用的分辨率适配器，能够让任何扩散模型（Diffusion Models）生成与分辨率无关的高质量图像。它无需额外训练、无需增加推理步骤，也不会改变原有模型的风格。\n\n## 1. 
环境准备\n\n### 系统要求\n- **操作系统**: Linux \u002F Windows \u002F macOS\n- **GPU**: 推荐 NVIDIA GPU (支持 CUDA)\n- **Python**: 3.8 或更高版本\n- **显存**: 建议 8GB 及以上（取决于基础模型大小）\n\n### 前置依赖\n请确保已安装以下 Python 库：\n```bash\npip install torch torchvision torchaudio --index-url https:\u002F\u002Fdownload.pytorch.org\u002Fwhl\u002Fcu118\npip install diffusers transformers accelerate safetensors huggingface_hub\n```\n> **提示**: 国内用户可使用清华或阿里镜像源加速安装，例如：\n> `pip install -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple diffusers transformers accelerate safetensors huggingface_hub`\n\n## 2. 安装步骤\n\nResAdapter 无需单独安装软件包，主要通过加载预训练权重文件（LoRA 和 UNet 权重）集成到现有的 Diffusers 流程中。\n\n1. **克隆仓库（可选，仅用于获取示例代码）**：\n   ```bash\n   git clone https:\u002F\u002Fgithub.com\u002Fbytedance\u002Fres-adapter.git\n   cd res-adapter\n   ```\n\n2. **模型下载**：\n   程序运行时会自动从 Hugging Face 下载所需权重。若网络受限，可手动下载模型文件至本地。\n   \n   主要模型地址：[jiaxiangc\u002Fres-adapter](https:\u002F\u002Fhuggingface.co\u002Fjiaxiangc\u002Fres-adapter)\n   \n   常用模型选择参考：\n   - **SDXL 模型**: `resadapter_v2_sdxl` (推荐，支持分辨率 256-1536)\n   - **SD 1.5 模型**: `resadapter_v2_sd1.5` (支持分辨率 128-1024)\n\n## 3. 基本使用\n\n以下是最简单的 Python 脚本示例，展示如何在 SDXL 模型上加载并使用 ResAdapter。\n\n### 代码示例\n\n```python\n# pip install diffusers, transformers, accelerate, safetensors, huggingface_hub\nimport torch\nfrom torchvision.utils import save_image\nfrom safetensors.torch import load_file\nfrom huggingface_hub import hf_hub_download\nfrom diffusers import AutoPipelineForText2Image, DPMSolverMultistepScheduler\n\n# 设置随机种子和提示词\ngenerator = torch.manual_seed(0)\nprompt = \"portrait photo of muscular bearded guy in a worn mech suit, light bokeh, intricate, steel metal, elegant, sharp focus, soft lighting, vibrant colors\"\nwidth, height = 640, 384  # 任意非标准分辨率\n\n# 1. 
加载基础模型 (以 DreamShaper XL 为例)\nmodel_name = \"lykon-models\u002Fdreamshaper-xl-1-0\"\npipe = AutoPipelineForText2Image.from_pretrained(model_name, torch_dtype=torch.float16, variant=\"fp16\").to(\"cuda\")\npipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config, use_karras_sigmas=True, algorithm_type=\"sde-dpmsolver++\")\n\n# 2. 加载 ResAdapter 权重\n# 选择对应的模型版本，此处为 SDXL v1 (v2 版本用法类似，只需更改 repo_id 和 subfolder)\nresadapter_model_name = \"resadapter_v1_sdxl\"\n\n# 加载 LoRA 权重\npipe.load_lora_weights(\n    hf_hub_download(repo_id=\"jiaxiangc\u002Fres-adapter\", subfolder=resadapter_model_name, filename=\"pytorch_lora_weights.safetensors\"), \n    adapter_name=\"res_adapter\",\n) \n\n# 激活 Adapter\npipe.set_adapters([\"res_adapter\"], adapter_weights=[1.0])\n\n# 加载 UNet 归一化权重\npipe.unet.load_state_dict(\n    load_file(hf_hub_download(repo_id=\"jiaxiangc\u002Fres-adapter\", subfolder=resadapter_model_name, filename=\"diffusion_pytorch_model.safetensors\")),\n    strict=False,\n)\n\n# 3. 
开始推理\nimage = pipe(prompt, width=width, height=height, num_inference_steps=25, num_images_per_prompt=4, output_type=\"pt\").images\n\n# 保存结果\nsave_image(image, f\"image_resadapter.png\", normalize=True, padding=0)\nprint(\"生成完成！\")\n```\n\n### 使用说明\n- **分辨率自由**: 修改 `width` 和 `height` 为任意数值（需在模型支持的范围内），ResAdapter 会自动适配，避免画面崩坏或风格漂移。\n- **兼容性强**: 该方法同样适用于配合 ControlNet、IP-Adapter 或 LCM-LoRA 使用，只需在加载这些插件后，按上述步骤加载 ResAdapter 即可。\n- **模型版本**: 请根据你使用的基础模型（SD 1.5 或 SDXL）选择对应的 `resadapter_model_name` 文件夹。","某游戏美术团队正在为一款科幻题材手游批量生成角色概念图，需要利用现有的 SDXL 模型快速产出大量非标准分辨率（如手机屏幕比例的 640x384）的高清素材。\n\n### 没有 res-adapter 时\n- **画面结构崩坏**：强行将预训练模型设置为非标准分辨率时，生成的人物肢体比例失调，机械装备出现扭曲或断裂。\n- **风格一致性难保**：为了修复分辨率问题尝试微调模型或使用风格迁移，导致原本统一的“赛博朋克”画风出现偏差，增加了后期修图成本。\n- **算力资源浪费**：不得不先生成标准正方形大图再进行裁剪，不仅浪费了显存和推理时间，还经常因构图被切坏而需要重绘。\n- **工作流繁琐**：设计师需要在不同分辨率间反复测试参数，缺乏一个通用的解决方案来适配各种异形屏幕需求。\n\n### 使用 res-adapter 后\n- **任意分辨率直出**：直接加载 res-adapter 插件，无需重新训练即可在 640x384 等非标准尺寸下生成结构完整、比例协调的角色图像。\n- **完美保持原风格**：作为即插即用模块，res-adapter 在不改变原有模型权重的前提下工作，确保了所有产出素材的风格高度统一。\n- **推理效率倍增**：省去了“先生成大图再裁剪”的冗余步骤，单次推理直接获得可用成品，显著降低了显卡负载和时间成本。\n- **工作流极简升级**：只需在现有 ComfyUI 或 Diffusers 流程中插入 res-adapter 节点，即可让旧模型瞬间具备“分辨率自由”能力，灵活适配各类需求。\n\nres-adapter 的核心价值在于打破了扩散模型对固定分辨率的依赖，让开发者能以零训练成本实现任意尺寸下的高质量、风格一致图像生成。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbytedance_res-adapter_a73d4ff3.png","bytedance","Bytedance Inc.","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fbytedance_7fee2b15.png","",null,"ByteDanceOSS","https:\u002F\u002Fopensource.bytedance.com","https:\u002F\u002Fgithub.com\u002Fbytedance",[84],{"name":85,"color":86,"percentage":87},"Python","#3572A5",100,769,25,"2026-03-12T21:01:57","Apache-2.0","未说明","需要 NVIDIA GPU (代码示例使用 .to('cuda') 和 torch.float16)，显存需求取决于基础模型 (如 SDXL 通常需 8GB+)，CUDA 版本未说明",{"notes":95,"python":92,"dependencies":96},"该工具作为插件适配现有的扩散模型（如 SD1.5, SDXL），无需额外训练。支持通过 LoRA 权重和 UNet 状态字典加载。提供 ComfyUI 节点和 Hugging Face Space 等多种使用方式。模型文件需从 Hugging Face 
单独下载，支持多种分辨率范围（128-1536px）。",[97,98,99,100,101,102,103],"torch","diffusers","transformers","accelerate","safetensors","huggingface_hub","torchvision",[14],"2026-03-27T02:49:30.150509","2026-04-06T07:13:43.198938",[108,113,118,123,128,133,138],{"id":109,"question_zh":110,"answer_zh":111,"source_url":112},12976,"为什么在生成超过 1024px 分辨率的图像时效果不佳或失效？","目前模型主要针对 256px 到 1024px 的分辨率进行了优化。如果分辨率超过 1024px，生成结果可能会出现问题（如人物重复、扭曲）。维护者建议暂时不要生成高于 1024px 的图像，未来会发布支持更高分辨率的模型。对于 SDXL 用户，仅加载 resolution_lora.safetensors 即可支持 256~1024 范围。","https:\u002F\u002Fgithub.com\u002Fbytedance\u002Fres-adapter\u002Fissues\u002F24",{"id":114,"question_zh":115,"answer_zh":116,"source_url":117},12977,"如何正确加载 SD1.5 的 ResAdapter 以生成 1024px 图像？","需要同时使用 resolution_lora 和 resolution_normalization 权重。维护者已更新代码示例（见 model_loader.py），请注意：\n1. 对于文生图任务，尽可能使用个性化模型（Checkpoint），不要使用基础模型（Base Model），因为释放的权重是面向个性化模型的。\n2. 对于 ControlNet、IP-Adapter 等其他任务，可以使用基础模型。\n相关代码地址：https:\u002F\u002Fgithub.com\u002Fbytedance\u002Fres-adapter\u002Fblob\u002Fmain\u002Fresadapter\u002Fmodel_loader.py","https:\u002F\u002Fgithub.com\u002Fbytedance\u002Fres-adapter\u002Fissues\u002F10",{"id":119,"question_zh":120,"answer_zh":121,"source_url":122},12978,"遇到报错 'ValueError: PEFT backend is required for `set_adapters()`' 如何解决？","这是因为缺少 PEFT 库或版本过旧。解决方案是安装或更新 PEFT 库：\n1. 运行命令：pip install peft\n2. 
如果已安装，请尝试升级到最新版本：pip install --upgrade peft\n安装完成后即可正常调用 set_adapters() 方法。","https:\u002F\u002Fgithub.com\u002Fbytedance\u002Fres-adapter\u002Fissues\u002F18",{"id":124,"question_zh":125,"answer_zh":126,"source_url":127},12979,"在 ComfyUI 中安装了插件却找不到 ResAdapter 节点怎么办？","这通常是因为插件导入失败。请检查 ComfyUI 启动时的终端日志，查找是否出现 'ImportFailed' 或 '(IMPORT FAILED) [ResAdapter for ComfyUI]' 的错误提示。根据具体的报错信息（如缺少依赖包）进行修复后，节点即可正常显示。","https:\u002F\u002Fgithub.com\u002Fbytedance\u002Fres-adapter\u002Fissues\u002F23",{"id":129,"question_zh":130,"answer_zh":131,"source_url":132},12980,"使用 SD1.5 Adapter 时遇到处理器数量不匹配的错误（number of processors 0 does not match...）怎么办？","该问题通常由环境版本不一致引起。维护者建议更新运行环境（特别是 diffusers 和相关依赖库）到最新版本。如果问题依旧，建议仅在低分辨率生成时仅使用 resolution_lora，避免在低分辨率下加载 resolution norm 权重。","https:\u002F\u002Fgithub.com\u002Fbytedance\u002Fres-adapter\u002Fissues\u002F5",{"id":134,"question_zh":135,"answer_zh":136,"source_url":137},12981,"SDXL 模型在 512 分辨率下使用 ResAdapter 效果不明显怎么办？","如果在 512 分辨率下差异不明显，可能是采样器（Scheduler）的问题。ResAdapter 在某些采样器下可能无法被激活。建议尝试更换采样器，例如使用 DDIM 或 DPM Solver 等，以发挥 ResAdapter 的最佳效果。","https:\u002F\u002Fgithub.com\u002Fbytedance\u002Fres-adapter\u002Fissues\u002F9",{"id":139,"question_zh":140,"answer_zh":141,"source_url":142},12982,"是否会开源 ResAdapter 的训练代码？","目前暂不考虑开源训练代码，因为这涉及公司利益。团队将继续开源新的模型权重并支持更多生态。如果想了解训练细节，可以阅读官方论文：https:\u002F\u002Farxiv.org\u002Fabs\u002F2403.02084。关于 LoRA 训练可参考 Diffusers 社区的相关讨论。","https:\u002F\u002Fgithub.com\u002Fbytedance\u002Fres-adapter\u002Fissues\u002F3",[]]