[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-luosiallen--latent-consistency-model":3,"tool-luosiallen--latent-consistency-model":64},[4,17,26,40,48,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,2,"2026-04-03T11:11:01",[13,14,15],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":23,"last_commit_at":32,"category_tags":33,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 
道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,34,35,36,15,37,38,13,39],"数据工具","视频","插件","其他","语言模型","音频",{"id":41,"name":42,"github_repo":43,"description_zh":44,"stars":45,"difficulty_score":10,"last_commit_at":46,"category_tags":47,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,38,37],{"id":49,"name":50,"github_repo":51,"description_zh":52,"stars":53,"difficulty_score":10,"last_commit_at":54,"category_tags":55,"status":16},519,"PaddleOCR","PaddlePaddle\u002FPaddleOCR","PaddleOCR 是一款基于百度飞桨框架开发的高性能开源光学字符识别工具包。它的核心能力是将图片、PDF 等文档中的文字提取出来，转换成计算机可读取的结构化数据，让机器真正“看懂”图文内容。\n\n面对海量纸质或电子文档，PaddleOCR 解决了人工录入效率低、数字化成本高的问题。尤其在人工智能领域，它扮演着连接图像与大型语言模型（LLM）的桥梁角色，能将视觉信息直接转化为文本输入，助力智能问答、文档分析等应用场景落地。\n\nPaddleOCR 适合开发者、算法研究人员以及有文档自动化需求的普通用户。其技术优势十分明显：不仅支持全球 100 多种语言的识别，还能在 Windows、Linux、macOS 等多个系统上运行，并灵活适配 CPU、GPU、NPU 等各类硬件。作为一个轻量级且社区活跃的开源项目，PaddleOCR 既能满足快速集成的需求，也能支撑前沿的视觉语言研究，是处理文字识别任务的理想选择。",74939,"2026-04-05T23:16:38",[38,14,13,37],{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":23,"last_commit_at":62,"category_tags":63,"status":16},2471,"tesseract","tesseract-ocr\u002Ftesseract","Tesseract 
是一款历史悠久且备受推崇的开源光学字符识别（OCR）引擎，最初由惠普实验室开发，后由 Google 维护，目前由全球社区共同贡献。它的核心功能是将图片中的文字转化为可编辑、可搜索的文本数据，有效解决了从扫描件、照片或 PDF 文档中提取文字信息的难题，是数字化归档和信息自动化的重要基础工具。\n\n在技术层面，Tesseract 展现了强大的适应能力。从版本 4 开始，它引入了基于长短期记忆网络（LSTM）的神经网络 OCR 引擎，显著提升了行识别的准确率；同时，为了兼顾旧有需求，它依然支持传统的字符模式识别引擎。Tesseract 原生支持 UTF-8 编码，开箱即用即可识别超过 100 种语言，并兼容 PNG、JPEG、TIFF 等多种常见图像格式。输出方面，它灵活支持纯文本、hOCR、PDF、TSV 等多种格式，方便后续数据处理。\n\nTesseract 主要面向开发者、研究人员以及需要构建文档处理流程的企业用户。由于它本身是一个命令行工具和库（libtesseract），不包含图形用户界面（GUI），因此最适合具备一定编程能力的技术人员集成到自动化脚本或应用程序中",73286,"2026-04-03T01:56:45",[13,14],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":80,"owner_email":81,"owner_twitter":81,"owner_website":82,"owner_url":83,"languages":84,"stars":93,"forks":94,"last_commit_at":95,"license":96,"difficulty_score":10,"env_os":97,"env_gpu":98,"env_ram":99,"env_deps":100,"category_tags":108,"github_topics":81,"view_count":10,"oss_zip_url":81,"oss_zip_packed_at":81,"status":16,"created_at":109,"updated_at":110,"faqs":111,"releases":140},853,"luosiallen\u002Flatent-consistency-model","latent-consistency-model","Latent Consistency Models: Synthesizing High-Resolution Images with Few-Step Inference","Latent Consistency Models（简称 LCM）致力于解决扩散模型生成速度慢的问题。它利用一致性蒸馏技术，仅需极少推理步骤即可合成高分辨率图像，将原本需要数十步的迭代过程压缩至 4 步以内，显著提升生成效率。\n\nLCM 拥有多项独特优势。其推出的 LCM-LoRA 模块允许用户在不重新训练的情况下，轻松加速 Stable Diffusion XL、SD 1.5 等主流模型。LCM 生态完善，已集成至 Hugging Face Diffusers 库，并支持 SD-WebUI、ComfyUI 等流行界面，涵盖文生图、图生图及实时交互场景。\n\n无论是希望快速落地的开发者、追求效率的设计师，还是关注算法优化的研究人员，都能从 LCM 中获益。官方提供了丰富的训练脚本与在线 Demo，社区氛围活跃，欢迎各方参与共建。","# Latent Consistency Models\n\nOfficial Repository of the paper: [Latent Consistency Models: Synthesizing High-Resolution Images with Few-Step Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.04378).\n\nOfficial Repository of the paper: 
[LCM-LoRA: A Universal Stable-Diffusion Acceleration Module](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.05556).\n\nProject Page: https:\u002F\u002Flatent-consistency-models.github.io\n\n\n### Try our Demos:\n\n🤗 **Hugging Face Demo**: [![Hugging Face Spaces](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002FSimianLuo\u002FLatent_Consistency_Model) 🔥🔥🔥\n\n**Replicate Demo**: [![Replicate](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fluosiallen_latent-consistency-model_readme_7dacf1cc5d87.png)](https:\u002F\u002Freplicate.com\u002Fcjwbw\u002Flatent-consistency-model) \n\n**OpenXLab Demo**: [![Open in OpenXLab](https:\u002F\u002Fcdn-static.openxlab.org.cn\u002Fapp-center\u002Fopenxlab_app.svg)](https:\u002F\u002Fopenxlab.org.cn\u002Fapps\u002Fdetail\u002FLatent-Consistency-Model\u002FLatent-Consistency-Model)\n\n \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fluosiallen_latent-consistency-model_readme_59c383035af3.png\" width=\"4%\" alt=\"\" \u002F> **LCM Community**: Join our LCM discord channels \u003Ca href=\"https:\u002F\u002Fdiscord.gg\u002FKM6aeW6CgD\" style=\"text-decoration:none;\">\n    \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fluosiallen_latent-consistency-model_readme_6342e5371027.png\" width=\"3%\" alt=\"\" \u002F>\u003C\u002Fa> for discussions. Coders are welcome to contribute.\n\n## Breaking News 🔥🔥!!\n- (🤖New) 2023\u002F12\u002F1  **Pixart-α X LCM** is out, a high quality image generative model. see [here](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002FPixArt-alpha\u002FPixArt-LCM).\n- (❤️New) 2023\u002F11\u002F10 **Training Scripts** are released!! Check [here](https:\u002F\u002Fgithub.com\u002Fluosiallen\u002Flatent-consistency-model\u002Ftree\u002Fmain\u002FLCM_Training_Script\u002Fconsistency_distillation). \n- (🤯New) 2023\u002F11\u002F10 **Training-free acceleration LCM-LoRA** is born! 
See our technical report [here](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.05556) and Hugging Face blog [here](https:\u002F\u002Fhuggingface.co\u002Fblog\u002Flcm_lora).\n- (⚡️New) 2023\u002F11\u002F10 LCM has a major update! We release **3 LCM-LoRA (SD-XL, SSD-1B, SD-V1.5)**, see [here](https:\u002F\u002Fhuggingface.co\u002Flatent-consistency\u002Flcm-lora-sdxl).\n- (🚀New) 2023\u002F11\u002F10 LCM has a major update! We release **2 Full Param-tuned LCM (SD-XL, SSD-1B)**,  see [here](https:\u002F\u002Fhuggingface.co\u002Flatent-consistency\u002Flcm-sdxl).\n\n## News\n- (🔥New) 2023\u002F11\u002F10 We support LCM Inference with C# and ONNX Runtime now! Thanks to [@saddam213](https:\u002F\u002Fgithub.com\u002Fsaddam213)! Check the link [here](https:\u002F\u002Fgithub.com\u002Fsaddam213\u002FOnnxStack).\n- (🔥New) 2023\u002F11\u002F01 **Real-Time Latent Consistency Models** is out!! Github link [here](https:\u002F\u002Fgithub.com\u002Fradames\u002FReal-Time-Latent-Consistency-Model). Thanks [@radames](https:\u002F\u002Fgithub.com\u002Fradames) for the really cool Huggingface🤗 demo [Real-Time Image-to-Image](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fradames\u002FReal-Time-Latent-Consistency-Model), [Real-Time Text-to-Image](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fradames\u002FReal-Time-Latent-Consistency-Model-Text-To-Image). Twitter\u002FX [Link](https:\u002F\u002Fx.com\u002Fradamar\u002Fstatus\u002F1718783886413709542?s=20).\n- (🔥New) 2023\u002F10\u002F28 We support **Img2Img** for LCM! Please refer to \"🔥 Image2Image Demos\".\n- (🔥New) 2023\u002F10\u002F25 We have official [**LCM Pipeline**](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fdiffusers\u002Ftree\u002Fmain\u002Fsrc\u002Fdiffusers\u002Fpipelines\u002Flatent_consistency_models) and [**LCM Scheduler**](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fdiffusers\u002Fblob\u002Fmain\u002Fsrc\u002Fdiffusers\u002Fschedulers\u002Fscheduling_lcm.py) in 🧨 Diffusers library now! 
Check the new \"Usage\".\n- (🔥New) 2023\u002F10\u002F24 Simple **Streamlit UI** for local use: See the [link](https:\u002F\u002Fgithub.com\u002Fakx\u002Flcm_test). Thanks to [@akx](https:\u002F\u002Fgithub.com\u002Fakx).\n- (🔥New) 2023\u002F10\u002F24 We support **SD-Webui** and **ComfyUI** now!! Thanks to [@0xbitches](https:\u002F\u002Fgithub.com\u002F0xbitches). See the links: [SD-Webui](https:\u002F\u002Fgithub.com\u002F0xbitches\u002Fsd-webui-lcm) and [ComfyUI](https:\u002F\u002Fgithub.com\u002F0xbitches\u002FComfyUI-LCM). \n- (🔥New) 2023\u002F10\u002F23 Running on **Windows\u002FLinux CPU** is also supported! Thanks to [@rupeshs](https:\u002F\u002Fgithub.com\u002Frupeshs). See the [link](https:\u002F\u002Fgithub.com\u002Frupeshs\u002Ffastsdcpu).\n- (🔥New) 2023\u002F10\u002F22 **Google Colab** is supported now. Thanks to [@camenduru](https:\u002F\u002Fgithub.com\u002Fcamenduru). See the link: [Colab](https:\u002F\u002Fgithub.com\u002Fcamenduru\u002Flatent-consistency-model-colab)\n- (🔥New) 2023\u002F10\u002F21 We support a **local gradio demo** now. LCM can run locally!! Please refer to the \"**Local gradio Demos**\".\n- (🔥New) 2023\u002F10\u002F19 We provide a demo of LCM in 🤗 Hugging Face Space. Try it [here](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002FSimianLuo\u002FLatent_Consistency_Model).\n- (🔥New) 2023\u002F10\u002F19 We provide the LCM model (Dreamshaper_v7) on 🤗 Hugging Face. Download it [here](https:\u002F\u002Fhuggingface.co\u002FSimianLuo\u002FLCM_Dreamshaper_v7).\n- (🔥New) 2023\u002F10\u002F19 LCM is integrated into the 🧨 Diffusers library. Please refer to the \"Usage\".\n\n\n## 🔥 Image2Image Demos (Image-to-Image):\nWe support **Img2Img** now! 
Try the impressive img2img demos here: [Replicate](https:\u002F\u002Freplicate.com\u002Ffofr\u002Flatent-consistency-model), [SD-webui](https:\u002F\u002Fgithub.com\u002F0xbitches\u002Fsd-webui-lcm), [ComfyUI](https:\u002F\u002Fgithub.com\u002F0xbitches\u002FComfyUI-LCM), [Colab](https:\u002F\u002Fgithub.com\u002Fcamenduru\u002Flatent-consistency-model-colab\u002F)\n\nLocal gradio for img2img is on the way!\n\n\u003Cp align=\"center\">\n    \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fluosiallen_latent-consistency-model_readme_b1d507e42365.png\" width=\"50%\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fluosiallen_latent-consistency-model_readme_69b39021b84f.png\" width=\"49%\">\n\u003C\u002Fp>\n\n## 🔥 Local gradio Demos (Text-to-Image):\n\nTo run the model locally, you can download the \"local_gradio\" folder:\n1. Install PyTorch (CUDA). macOS users can install the \"MPS\" version of PyTorch. Please refer to: [https:\u002F\u002Fpytorch.org](https:\u002F\u002Fpytorch.org). Install [Intel Extension for PyTorch](https:\u002F\u002Fintel.github.io\u002Fintel-extension-for-pytorch\u002Fxpu\u002Flatest\u002F) as well if you're using Intel GPUs.\n2. Install the main library:\n```\npip install diffusers transformers accelerate gradio==3.48.0 \n```\n3. Launch the gradio demo (macOS users need to set `device=\"mps\"` in app.py; Intel GPU users set `device=\"xpu\"` in app.py):\n```\npython app.py\n```\n\n## Demos & Models Released\nOur Hugging Face Demo and Model are released! Latent Consistency Models are supported in 🧨 [diffusers](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fdiffusers). 
\n\n**LCM Model Download**: [LCM_Dreamshaper_v7](https:\u002F\u002Fhuggingface.co\u002FSimianLuo\u002FLCM_Dreamshaper_v7)\n\nLCM模型已上传到始智AI(wisemodel)  中文用户可在此下载，[下载链接](https:\u002F\u002Fwww.wisemodel.cn\u002Forganization\u002FLatent-Consistency-Model).\n\nFor Chinese users, download LCM here: (中文用户可以在此下载LCM模型) [![Open in OpenXLab](https:\u002F\u002Fcdn-static.openxlab.org.cn\u002Fheader\u002Fopenxlab_models.svg)](https:\u002F\u002Fopenxlab.org.cn\u002Fmodels\u002Fdetail\u002FLatent-Consistency-Model\u002FLCM_Dreamshaper_v7_4k.safetensors)\n\nHugging Face Demo: [![Hugging Face Spaces](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002FSimianLuo\u002FLatent_Consistency_Model)\n\nReplicate Demo: [![Replicate](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fluosiallen_latent-consistency-model_readme_7dacf1cc5d87.png)](https:\u002F\u002Freplicate.com\u002Fcjwbw\u002Flatent-consistency-model) \n\nOpenXLab Demo: [![Open in OpenXLab](https:\u002F\u002Fcdn-static.openxlab.org.cn\u002Fapp-center\u002Fopenxlab_app.svg)](https:\u002F\u002Fopenxlab.org.cn\u002Fapps\u002Fdetail\u002FLatent-Consistency-Model\u002FLatent-Consistency-Model)\n\nTungsten Demo: [![Tungsten](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fluosiallen_latent-consistency-model_readme_a0f4b58543a3.png)](https:\u002F\u002Ftungsten.run\u002Fmjpyeon\u002Flcm)\n\nNovita.AI Demo:  [![Novita.AI Latent Consistency Playground](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F%20Novita.AI%20-Demo%20&%20API-blue)](https:\u002F\u002Fnovita.ai\u002Fproduct\u002Flcm-txt2img)\n\n\n\n\u003Cp align=\"center\">\n    \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fluosiallen_latent-consistency-model_readme_6d75aa2fccb5.png\">\n\u003C\u002Fp>\n\nBy distilling classifier-free guidance into the model's input, LCM can generate high-quality images in very short inference time. 
We compare the inference time at the setting of 768 x 768 resolution, CFG scale w=8, batchsize=4, using an A800 GPU. \n\n\u003Cp align=\"center\">\n    \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fluosiallen_latent-consistency-model_readme_0ab44d57298a.png\">\n\u003C\u002Fp>\n\n\n\n## Usage\nWe have official [**LCM Pipeline**](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fdiffusers\u002Ftree\u002Fmain\u002Fsrc\u002Fdiffusers\u002Fpipelines\u002Flatent_consistency_models) and [**LCM Scheduler**](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fdiffusers\u002Fblob\u002Fmain\u002Fsrc\u002Fdiffusers\u002Fschedulers\u002Fscheduling_lcm.py) in the 🧨 Diffusers library now! The older usages will be deprecated.\n\nYou can try out Latent Consistency Models directly on:\n[![Hugging Face Spaces](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002FSimianLuo\u002FLatent_Consistency_Model)\n\nTo run the model yourself, you can leverage the 🧨 Diffusers library:\n1. Install the library:\n```\npip install --upgrade diffusers  # make sure to use at least diffusers >= 0.22\npip install transformers accelerate\n```\n\n2. Run the model:\n```py\nfrom diffusers import DiffusionPipeline\nimport torch\n\npipe = DiffusionPipeline.from_pretrained(\"SimianLuo\u002FLCM_Dreamshaper_v7\")\n\n# To save GPU memory, torch.float16 can be used, but it may compromise image quality.\npipe.to(torch_device=\"cuda\", torch_dtype=torch.float32)\n\nprompt = \"Self-portrait oil painting, a beautiful cyborg with golden hair, 8k\"\n\n# Can be set to 1~50 steps. LCM supports fast inference even with \u003C= 4 steps. 
Recommend: 1~8 steps.\nnum_inference_steps = 4 \n\nimages = pipe(prompt=prompt, num_inference_steps=num_inference_steps, guidance_scale=8.0, lcm_origin_steps=50, output_type=\"pil\").images\n```\n\nFor more information, please have a look at the official docs:\n👉 https:\u002F\u002Fhuggingface.co\u002Fdocs\u002Fdiffusers\u002Fapi\u002Fpipelines\u002Flatent_consistency_models#latent-consistency-models\n\n\n## Usage (Deprecated)\nWe have official [**LCM Pipeline**](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fdiffusers\u002Ftree\u002Fmain\u002Fsrc\u002Fdiffusers\u002Fpipelines\u002Flatent_consistency_models) and [**LCM Scheduler**](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fdiffusers\u002Fblob\u002Fmain\u002Fsrc\u002Fdiffusers\u002Fschedulers\u002Fscheduling_lcm.py) in the 🧨 Diffusers library now! The older usages will be deprecated, but you can still use them by passing ```revision=\"fb9c5d1\"``` to ```from_pretrained(...)```. \n\n\nTo run the model yourself, you can leverage the 🧨 Diffusers library:\n1. Install the library:\n```\npip install diffusers transformers accelerate\n```\n\n2. Run the model:\n```py\nfrom diffusers import DiffusionPipeline\nimport torch\n\npipe = DiffusionPipeline.from_pretrained(\"SimianLuo\u002FLCM_Dreamshaper_v7\", custom_pipeline=\"latent_consistency_txt2img\", custom_revision=\"main\", revision=\"fb9c5d\")\n\n# To save GPU memory, torch.float16 can be used, but it may compromise image quality.\npipe.to(torch_device=\"cuda\", torch_dtype=torch.float32)\n\nprompt = \"Self-portrait oil painting, a beautiful cyborg with golden hair, 8k\"\n\n# Can be set to 1~50 steps. LCM supports fast inference even with \u003C= 4 steps. 
Recommend: 1~8 steps.\nnum_inference_steps = 4 \n\nimages = pipe(prompt=prompt, num_inference_steps=num_inference_steps, guidance_scale=8.0, lcm_origin_steps=50, output_type=\"pil\").images\n```\n\n### Our Contributors :\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fluosiallen\u002Flatent-consistency-model\u002Fgraphs\u002Fcontributors\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fluosiallen_latent-consistency-model_readme_6be1394222f4.png\" \u002F>\n\u003C\u002Fa>\n\n## BibTeX\n\n```bibtex\nLCM:\n@misc{luo2023latent,\n      title={Latent Consistency Models: Synthesizing High-Resolution Images with Few-Step Inference}, \n      author={Simian Luo and Yiqin Tan and Longbo Huang and Jian Li and Hang Zhao},\n      year={2023},\n      eprint={2310.04378},\n      archivePrefix={arXiv},\n      primaryClass={cs.CV}\n}\n\nLCM-LoRA:\n@article{luo2023lcm,\n  title={LCM-LoRA: A Universal Stable-Diffusion Acceleration Module},\n  author={Luo, Simian and Tan, Yiqin and Patil, Suraj and Gu, Daniel and von Platen, Patrick and Passos, Apolin{\\'a}rio and Huang, Longbo and Li, Jian and Zhao, Hang},\n  journal={arXiv preprint arXiv:2311.05556},\n  year={2023}\n}\n```\n","# 潜在一致性模型 (Latent Consistency Models)\n\n论文官方仓库：[潜在一致性模型：使用少步推理合成高分辨率图像](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.04378)。\n\n论文官方仓库：[LCM-LoRA：一种通用的 Stable-Diffusion 加速模块](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.05556)。\n\n项目页面：https:\u002F\u002Flatent-consistency-models.github.io\n\n\n### 尝试我们的演示：\n\n🤗 **Hugging Face 演示**：[![Hugging Face Spaces](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002FSimianLuo\u002FLatent_Consistency_Model) 🔥🔥🔥\n\n**Replicate 演示**：[![Replicate](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fluosiallen_latent-consistency-model_readme_7dacf1cc5d87.png)](https:\u002F\u002Freplicate.com\u002Fcjwbw\u002Flatent-consistency-model) \n\n**OpenXLab 
演示**：[![Open in OpenXLab](https:\u002F\u002Fcdn-static.openxlab.org.cn\u002Fapp-center\u002Fopenxlab_app.svg)](https:\u002F\u002Fopenxlab.org.cn\u002Fapps\u002Fdetail\u002FLatent-Consistency-Model\u002FLatent-Consistency-Model)\n\n \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fluosiallen_latent-consistency-model_readme_59c383035af3.png\" width=\"4%\" alt=\"\" \u002F> **LCM 社区**：加入我们的 LCM Discord 频道 \u003Ca href=\"https:\u002F\u002Fdiscord.gg\u002FKM6aeW6CgD\" style=\"text-decoration:none;\">\n    \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fluosiallen_latent-consistency-model_readme_6342e5371027.png\" width=\"3%\" alt=\"\" \u002F>\u003C\u002Fa> 进行讨论。欢迎开发者贡献代码。\n\n## 重磅新闻 🔥🔥!!\n- (🤖New) 2023\u002F12\u002F1 **Pixart-α X LCM** 已发布，这是一个高质量图像生成模型。请见 [此处](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002FPixArt-alpha\u002FPixArt-LCM)。\n- (❤️New) 2023\u002F11\u002F10 **训练脚本** 已发布！！请查看 [此处](https:\u002F\u002Fgithub.com\u002Fluosiallen\u002Flatent-consistency-model\u002Ftree\u002Fmain\u002FLCM_Training_Script\u002Fconsistency_distillation)。\n- (🤯New) 2023\u002F11\u002F10 **无需训练的加速版 LCM-LoRA** 诞生了！请查看我们的技术报告 [此处](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.05556) 和 Hugging Face 博客 [此处](https:\u002F\u002Fhuggingface.co\u002Fblog\u002Flcm_lora)。*(注：LoRA 即 Low-Rank Adaptation)*\n- (⚡️New) 2023\u002F11\u002F10 LCM 迎来重大更新！我们发布了 **3 个 LCM-LoRA (SD-XL, SSD-1B, SD-V1.5)**，请见 [此处](https:\u002F\u002Fhuggingface.co\u002Flatent-consistency\u002Flcm-lora-sdxl)。\n- (🚀New) 2023\u002F11\u002F10 LCM 迎来重大更新！我们发布了 **2 个全参数微调的 LCM (SD-XL, SSD-1B)**，请见 [此处](https:\u002F\u002Fhuggingface.co\u002Flatent-consistency\u002Flcm-sdxl)。\n\n## 新闻\n- (🔥New) 2023\u002F11\u002F10 我们现在支持使用 C# 和 ONNX Runtime 进行 LCM 推理！感谢 [@saddam213](https:\u002F\u002Fgithub.com\u002Fsaddam213)! 
请查看链接 [此处](https:\u002F\u002Fgithub.com\u002Fsaddam213\u002FOnnxStack)。\n- (🔥New) 2023\u002F11\u002F01 **实时潜在一致性模型 (Real-Time Latent Consistency Models)** 已发布！！Github 链接 [此处](https:\u002F\u002Fgithub.com\u002Fradames\u002FReal-Time-Latent-Consistency-Model)。感谢 [@radames](https:\u002F\u002Fgithub.com\u002Fradames) 提供的非常酷的 Huggingface🤗 演示 [实时图像到图像](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fradames\u002FReal-Time-Latent-Consistency-Model)，[实时文生图](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fradames\u002FReal-Time-Latent-Consistency-Model-Text-To-Image)。Twitter\u002FX [链接](https:\u002F\u002Fx.com\u002Fradamar\u002Fstatus\u002F1718783886413709542?s=20)。\n- (🔥New) 2023\u002F10\u002F28 我们支持 LCM 的 **Img2Img (图像到图像)**！请参阅“🔥 图像到图像演示”。\n- (🔥New) 2023\u002F10\u002F25 我们在 🧨 Diffusers 库中现在有了官方的 [**LCM Pipeline (管道)**](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fdiffusers\u002Ftree\u002Fmain\u002Fsrc\u002Fdiffusers\u002Fpipelines\u002Flatent_consistency_models) 和 [**LCM Scheduler (调度器)**](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fdiffusers\u002Fblob\u002Fmain\u002Fsrc\u002Fdiffusers\u002Fschedulers\u002Fscheduling_lcm.py)！请查看新的“使用方法”。\n- (🔥New) 2023\u002F10\u002F24 简单的 **Streamlit UI** 用于本地使用：请见 [链接](https:\u002F\u002Fgithub.com\u002Fakx\u002Flcm_test)。感谢 [@akx](https:\u002F\u002Fgithub.com\u002Fakx)。\n- (🔥New) 2023\u002F10\u002F24 我们现在支持 **SD-Webui** 和 **ComfyUI**！！感谢 [@0xbitches](https:\u002F\u002Fgithub.com\u002F0xbitches)。请查看链接：[SD-Webui](https:\u002F\u002Fgithub.com\u002F0xbitches\u002Fsd-webui-lcm) 和 [ComfyUI](https:\u002F\u002Fgithub.com\u002F0xbitches\u002FComfyUI-LCM)。 \n- (🔥New) 2023\u002F10\u002F23 也支持在 **Windows\u002FLinux CPU** 上运行！感谢 [@rupeshs](https:\u002F\u002Fgithub.com\u002Frupeshs)。请见 [链接](https:\u002F\u002Fgithub.com\u002Frupeshs\u002Ffastsdcpu)。\n- (🔥New) 2023\u002F10\u002F22 现在支持 **Google Colab**。感谢 [@camenduru](https:\u002F\u002Fgithub.com\u002Fcamenduru)。
请查看链接：[Colab](https:\u002F\u002Fgithub.com\u002Fcamenduru\u002Flatent-consistency-model-colab)\n- (🔥New) 2023\u002F10\u002F21 我们现在支持 **本地 gradio 演示**。LCM 可以在本地运行！！请参阅“**本地 gradio 演示**”。\n- (🔥New) 2023\u002F10\u002F19 我们在 🤗 Hugging Face Space 中提供了 LCM 的演示。请在 [此处](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002FSimianLuo\u002FLatent_Consistency_Model) 尝试。\n- (🔥New) 2023\u002F10\u002F19 我们在 🤗 Hugging Face 上提供了 LCM 模型 (Dreamshaper_v7)。请在 [此处](https:\u002F\u002Fhuggingface.co\u002FSimianLuo\u002FLCM_Dreamshaper_v7) 下载。\n- (🔥New) 2023\u002F10\u002F19 LCM 已集成到 🧨 Diffusers 库中。请参考“使用方法”。\n\n\n## 🔥 图像到图像演示 (Image-to-Image):\n我们现在支持 **Img2Img (图像到图像)**！在这里尝试令人印象深刻的 img2img 演示：[Replicate](https:\u002F\u002Freplicate.com\u002Ffofr\u002Flatent-consistency-model), [SD-webui](https:\u002F\u002Fgithub.com\u002F0xbitches\u002Fsd-webui-lcm), [ComfyUI](https:\u002F\u002Fgithub.com\u002F0xbitches\u002FComfyUI-LCM), [Colab](https:\u002F\u002Fgithub.com\u002Fcamenduru\u002Flatent-consistency-model-colab\u002F)\n\n本地 img2img 的 gradio 正在开发中！\n\n\u003Cp align=\"center\">\n    \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fluosiallen_latent-consistency-model_readme_b1d507e42365.png\" width=\"50%\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fluosiallen_latent-consistency-model_readme_69b39021b84f.png\" width=\"49%\">\n\u003C\u002Fp>\n\n## 🔥 本地 gradio 演示 (文生图):\n\n要在本地运行模型，您可以下载 \"local_gradio\" 文件夹：\n1. 安装 PyTorch (CUDA)。macOS 用户可以下载 PyTorch 的“MPS”版本。请参见：[https:\u002F\u002Fpytorch.org](https:\u002F\u002Fpytorch.org)。如果您使用的是 Intel GPU，也请安装 [Intel Extension for PyTorch](https:\u002F\u002Fintel.github.io\u002Fintel-extension-for-pytorch\u002Fxpu\u002Flatest\u002F)。\n2. 安装主库：\n```\npip install diffusers transformers accelerate gradio==3.48.0 \n```\n3. 
启动 gradio：(对于 MacOS 用户，需要在 app.py 中设置 device=\"mps\"；对于 Intel GPU 用户，在 app.py 中设置 `device=\"xpu\"`)\n```\npython app.py\n```\n\n## 已发布的演示与模型\n我们的 Hugging Face 演示和模型已发布！潜在一致性模型 (Latent Consistency Models) 已在 🧨 [diffusers](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fdiffusers) 库中得到支持。\n\n**LCM 模型下载**: [LCM_Dreamshaper_v7](https:\u002F\u002Fhuggingface.co\u002FSimianLuo\u002FLCM_Dreamshaper_v7)\n\nLCM 模型已上传到始智 AI(wisemodel)，中文用户可在此下载，[下载链接](https:\u002F\u002Fwww.wisemodel.cn\u002Forganization\u002FLatent-Consistency-Model)。\n\n中文用户可在此下载 LCM 模型：[![Open in OpenXLab](https:\u002F\u002Fcdn-static.openxlab.org.cn\u002Fheader\u002Fopenxlab_models.svg)](https:\u002F\u002Fopenxlab.org.cn\u002Fmodels\u002Fdetail\u002FLatent-Consistency-Model\u002FLCM_Dreamshaper_v7_4k.safetensors)\n\nHugging Face 演示：[![Hugging Face Spaces](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002FSimianLuo\u002FLatent_Consistency_Model)\n\nReplicate 演示：[![Replicate](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fluosiallen_latent-consistency-model_readme_7dacf1cc5d87.png)](https:\u002F\u002Freplicate.com\u002Fcjwbw\u002Flatent-consistency-model) \n\nOpenXLab 演示：[![Open in OpenXLab](https:\u002F\u002Fcdn-static.openxlab.org.cn\u002Fapp-center\u002Fopenxlab_app.svg)](https:\u002F\u002Fopenxlab.org.cn\u002Fapps\u002Fdetail\u002FLatent-Consistency-Model\u002FLatent-Consistency-Model)\n\nTungsten 演示：[![Tungsten](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fluosiallen_latent-consistency-model_readme_a0f4b58543a3.png)](https:\u002F\u002Ftungsten.run\u002Fmjpyeon\u002Flcm)\n\nNovita.AI 演示：[![Novita.AI Latent Consistency Playground](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F%20Novita.AI%20-Demo%20&%20API-blue)](https:\u002F\u002Fnovita.ai\u002Fproduct\u002Flcm-txt2img)\n\n\n\n\u003Cp align=\"center\">\n    \u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fluosiallen_latent-consistency-model_readme_6d75aa2fccb5.png\">\n\u003C\u002Fp>\n\n通过将无分类器引导 (classifier-free guidance) 蒸馏到模型的输入中，LCM 可以在极短的推理时间内生成高质量图像。我们在 768 x 768 分辨率、CFG scale w=8、batchsize=4、使用 A800 GPU 的设置下比较了推理时间。\n\n\u003Cp align=\"center\">\n    \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fluosiallen_latent-consistency-model_readme_0ab44d57298a.png\">\n\u003C\u002Fp>\n\n\n\n## 使用方法\n我们现在在 🧨 Diffusers 库中拥有官方的 [**LCM Pipeline**（LCM 管道）](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fdiffusers\u002Ftree\u002Fmain\u002Fsrc\u002Fdiffusers\u002Fpipelines\u002Flatent_consistency_models) 和 [**LCM Scheduler**（LCM 调度器）](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fdiffusers\u002Fblob\u002Fmain\u002Fsrc\u002Fdiffusers\u002Fschedulers\u002Fscheduling_lcm.py)！旧的使用方法将被弃用。\n\n您可以直接在以下平台尝试潜在一致性模型 (Latent Consistency Models)：\n[![Hugging Face Spaces](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002F%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002FSimianLuo\u002FLatent_Consistency_Model)\n\n若要自行运行模型，您可以利用 🧨 Diffusers 库：\n1. 安装库：\n```\npip install --upgrade diffusers  # make sure to use at least diffusers >= 0.22\npip install transformers accelerate\n```\n\n2. 运行模型：\n```py\nfrom diffusers import DiffusionPipeline\nimport torch\n\npipe = DiffusionPipeline.from_pretrained(\"SimianLuo\u002FLCM_Dreamshaper_v7\")\n\n# To save GPU memory, torch.float16 can be used, but it may compromise image quality.\npipe.to(torch_device=\"cuda\", torch_dtype=torch.float32)\n\nprompt = \"Self-portrait oil painting, a beautiful cyborg with golden hair, 8k\"\n\n# Can be set to 1~50 steps. LCM supports fast inference even with \u003C= 4 steps. 
Recommend: 1~8 steps.\nnum_inference_steps = 4 \n\nimages = pipe(prompt=prompt, num_inference_steps=num_inference_steps, guidance_scale=8.0, lcm_origin_steps=50, output_type=\"pil\").images\n```\n\n更多信息，请查看官方文档：\n👉 https:\u002F\u002Fhuggingface.co\u002Fdocs\u002Fdiffusers\u002Fapi\u002Fpipelines\u002Flatent_consistency_models#latent-consistency-models\n\n\n## 使用方法（已弃用）\n我们现在在 🧨 Diffusers 库中拥有官方的 [**LCM Pipeline**（LCM 管道）](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fdiffusers\u002Ftree\u002Fmain\u002Fsrc\u002Fdiffusers\u002Fpipelines\u002Flatent_consistency_models) 和 [**LCM Scheduler**（LCM 调度器）](https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fdiffusers\u002Fblob\u002Fmain\u002Fsrc\u002Fdiffusers\u002Fschedulers\u002Fscheduling_lcm.py)！旧的使用方法将被弃用。但您仍可通过在 `from_pretrained(...)` 中添加 `revision=\"fb9c5d1\"` 来使用旧方法。\n\n若要自行运行模型，您可以利用 🧨 Diffusers 库：\n1. 安装库：\n```\npip install diffusers transformers accelerate\n```\n\n2. 运行模型：\n```py\nfrom diffusers import DiffusionPipeline\nimport torch\n\npipe = DiffusionPipeline.from_pretrained(\"SimianLuo\u002FLCM_Dreamshaper_v7\", custom_pipeline=\"latent_consistency_txt2img\", custom_revision=\"main\", revision=\"fb9c5d\")\n\n# To save GPU memory, torch.float16 can be used, but it may compromise image quality.\npipe.to(torch_device=\"cuda\", torch_dtype=torch.float32)\n\nprompt = \"Self-portrait oil painting, a beautiful cyborg with golden hair, 8k\"\n\n# Can be set to 1~50 steps. LCM supports fast inference even with \u003C= 4 steps. 
Recommend: 1~8 steps.\nnum_inference_steps = 4 \n\nimages = pipe(prompt=prompt, num_inference_steps=num_inference_steps, guidance_scale=8.0, lcm_origin_steps=50, output_type=\"pil\").images\n```\n\n### 我们的贡献者：\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fluosiallen\u002Flatent-consistency-model\u002Fgraphs\u002Fcontributors\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fluosiallen_latent-consistency-model_readme_6be1394222f4.png\" \u002F>\n\u003C\u002Fa>\n\n## BibTeX\n\n```bibtex\nLCM:\n@misc{luo2023latent,\n      title={Latent Consistency Models: Synthesizing High-Resolution Images with Few-Step Inference}, \n      author={Simian Luo and Yiqin Tan and Longbo Huang and Jian Li and Hang Zhao},\n      year={2023},\n      eprint={2310.04378},\n      archivePrefix={arXiv},\n      primaryClass={cs.CV}\n}\n\nLCM-LoRA:\n@article{luo2023lcm,\n  title={LCM-LoRA: A Universal Stable-Diffusion Acceleration Module},\n  author={Luo, Simian and Tan, Yiqin and Patil, Suraj and Gu, Daniel and von Platen, Patrick and Passos, Apolin{\\'a}rio and Huang, Longbo and Li, Jian and Zhao, Hang},\n  journal={arXiv preprint arXiv:2311.05556},\n  year={2023}\n}\n```","# Latent Consistency Models (LCM) 快速上手指南\n\nLatent Consistency Models (LCM) 是一种能够以极少推理步数（Few-Step）合成高分辨率图像的开源模型，显著提升了 Stable Diffusion 等模型的生成速度。\n\n## 环境准备\n\n- **操作系统**：Linux \u002F Windows \u002F macOS\n- **硬件要求**：\n  - 推荐 NVIDIA GPU (CUDA)\n  - macOS 用户可使用 MPS (Metal Performance Shaders)\n  - Intel GPU 用户可使用 XPU\n  - 也支持 CPU 运行（速度较慢）\n- **Python 版本**：建议 Python 3.8+\n\n## 安装步骤\n\n1. **安装核心依赖库**\n   确保使用 `diffusers >= 0.22` 版本以支持官方 Pipeline。\n\n   ```bash\n   pip install --upgrade diffusers\n   pip install transformers accelerate\n   ```\n\n2. **（可选）安装本地 WebUI**\n   如需运行本地 Gradio 界面，请额外安装：\n\n   ```bash\n   pip install gradio==3.48.0\n   ```\n\n3. 
**模型下载（国内加速）**\n   由于 Hugging Face 访问可能受限，建议中国开发者优先从以下国内镜像源下载模型权重：\n   - **始智 AI (Wisemodel)**: [LCM_Dreamshaper_v7](https:\u002F\u002Fwww.wisemodel.cn\u002Forganization\u002FLatent-Consistency-Model)\n   - **OpenXLab**: [LCM_Dreamshaper_v7_4k.safetensors](https:\u002F\u002Fopenxlab.org.cn\u002Fmodels\u002Fdetail\u002FLatent-Consistency-Model\u002FLCM_Dreamshaper_v7_4k.safetensors)\n\n## 基本使用\n\n以下示例展示了如何使用 `diffusers` 库加载模型并进行文本生成图像（Text-to-Image）。\n\n```python\nfrom diffusers import DiffusionPipeline\nimport torch\n\n# 加载模型 (推荐使用官方仓库 ID，国内网络慢时可手动下载 .safetensors 文件后指定路径)\npipe = DiffusionPipeline.from_pretrained(\"SimianLuo\u002FLCM_Dreamshaper_v7\")\n\n# 设置设备与精度 (推荐 cuda，mps 用于 Mac；float16 可节省显存但可能影响画质)\npipe.to(torch_device=\"cuda\", torch_dtype=torch.float32)\n\nprompt = \"Self-portrait oil painting, a beautiful cyborg with golden hair, 8k\"\n\n# 推理步数建议设置在 1~8 步之间，LCM 支持极低步数推理\nnum_inference_steps = 4 \n\n# 注意：diffusers >= 0.22 的官方 Pipeline 中参数名为 original_inference_steps（旧版自定义 Pipeline 中为 lcm_origin_steps）\nimages = pipe(prompt=prompt, num_inference_steps=num_inference_steps, guidance_scale=8.0, original_inference_steps=50, output_type=\"pil\").images\n```\n\n### 关键参数说明\n- `num_inference_steps`: 推理步数，范围 1~50，推荐 1~8 步以获得极速生成。\n- `guidance_scale`: 引导系数，设为 8.0 左右效果较好。\n- `original_inference_steps`: 原始蒸馏步数（旧版自定义 Pipeline 中名为 `lcm_origin_steps`），通常保持 50。\n\n更多详细信息请参考官方文档：[Hugging Face Diffusers Docs](https:\u002F\u002Fhuggingface.co\u002Fdocs\u002Fdiffusers\u002Fapi\u002Fpipelines\u002Flatent_consistency_models#latent-consistency-models)","电商运营团队需要为每日上百款新品快速生成营销海报，传统文生图流程的延迟严重拖慢了上线节奏。\n\n### 没有 latent-consistency-model 时\n- 生成一张高分辨率图片通常需要 20-50 步采样，单张耗时超过 10 秒，无法满足即时需求。\n- 批量处理大量商品图时，GPU 显存占用大且计算队列拥堵，导致整体任务积压。\n- 设计师微调提示词后需长时间等待渲染结果，无法进行快速的视觉风格试错与迭代。\n\n### 使用 latent-consistency-model 后\n- latent-consistency-model 仅需 4-8 步推理即可输出高质量图像，单张生成耗时缩短至 1 秒以内。\n- 兼容现有 Stable Diffusion 工作流，通过 LCM-LoRA 模块直接加速，无需重新训练底层模型。\n- 实现近实时的图文交互体验，设计师调整参数后能立即预览效果，显著缩短创意验证周期。\n\nlatent-consistency-model 
通过少步推理技术将图像生成从“分钟级”提升至“秒级”，极大释放了创意生产力。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fluosiallen_latent-consistency-model_59c38303.png","luosiallen","Simian Luo","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fluosiallen_3c988282.jpg","AI Researcher, IIIS, Tsinghua University.\r\nResearch in Generative AI. \r\nInventor of LCM⚡️& Author of LCM-LoRA🚀","IIIS, Tsinghua University","San Francisco",null,"https:\u002F\u002Fluosiallen.github.io","https:\u002F\u002Fgithub.com\u002Fluosiallen",[85,89],{"name":86,"color":87,"percentage":88},"Python","#3572A5",99.9,{"name":90,"color":91,"percentage":92},"CSS","#663399",0.1,4611,233,"2026-04-05T08:41:36","MIT","Linux, macOS, Windows","非必需 (支持 CPU 运行)，推荐 NVIDIA GPU (CUDA)，MacOS 支持 MPS，Intel GPU 支持 XPU，具体显存要求未明确说明","未说明",{"notes":101,"python":99,"dependencies":102},"1. 本地部署需安装 PyTorch (支持 CUDA\u002FMPS\u002FXPU)；2. MacOS 用户需在 app.py 设置 device=\"mps\"，Intel GPU 用户设置 device=\"xpu\"；3. 需下载预训练模型 (如 LCM_Dreamshaper_v7)；4. 推荐使用 Hugging Face diffusers 官方库；5. 
支持少步数快速推理 (1-8 步)。",[103,104,105,106,107],"torch","diffusers>=0.22","transformers","accelerate","gradio==3.48.0",[14],"2026-03-27T02:49:30.150509","2026-04-06T07:24:46.121592",[112,117,121,126,131,135],{"id":113,"question_zh":114,"answer_zh":115,"source_url":116},3675,"Replicate 版本中 NSFW 过滤器频繁误触且没有负面提示词输入选项怎么办？","实际上不需要负面提示词也能获得高质量。主要好处是更容易控制图像内容（即什么不该出现）。建议尝试空提示词或使用自然语言描述，避免标签堆砌（tag-soup）。增加推理步数（steps）至 25 或 50 也能提升质量。","https:\u002F\u002Fgithub.com\u002Fluosiallen\u002Flatent-consistency-model\u002Fissues\u002F11",{"id":118,"question_zh":119,"answer_zh":120,"source_url":116},3676,"为什么模型生成质量不高，如何调整参数优化？","4 步可能不足以展现最佳质量，部分提示词需要 25 步甚至 50 步。注意提示词灵活性，某些词汇（如'masterpiece'）会导致模型偏向特定风格。如果提示词与模型不匹配，增加步数无效。",{"id":122,"question_zh":123,"answer_zh":124,"source_url":125},3677,"结合 LCM LoRA 和常规 SDXL LoRA 时图像质量低的原因及解决方法？","问题在于 `pipe.load_lora_weights` 加载 LoRA 时未指定 `adapter_name` 参数。请将代码修改为 `pipe.load_lora_weights(lcm_lora_id, adapter_name=\"lora\")` 即可解决。","https:\u002F\u002Fgithub.com\u002Fluosiallen\u002Flatent-consistency-model\u002Fissues\u002F41",{"id":127,"question_zh":128,"answer_zh":129,"source_url":130},3678,"本地运行 app.py 出现 TORCH_USE_CUDA_DSA 错误如何处理？","这可能是因为使用了已弃用的 custom_pipeline。维护者表示忘记更新 pipeline，建议更新相关代码以适配新版本。同时检查 PyTorch 版本是否匹配 CUDA 环境。","https:\u002F\u002Fgithub.com\u002Fluosiallen\u002Flatent-consistency-model\u002Fissues\u002F34",{"id":132,"question_zh":133,"answer_zh":134,"source_url":130},3679,"有没有推荐的 LCM 推理 Python 代码示例？","可以使用以下代码：从 diffusers 导入 DiffusionPipeline，设置 torch_device=\"cuda\"，torch_dtype=torch.float32。num_inference_steps 推荐 1~8 步，guidance_scale=8.0，lcm_origin_steps=50。",{"id":136,"question_zh":137,"answer_zh":138,"source_url":139},3680,"一致性模型中的 c_skip 和 c_out 参数在推理中是否有实际作用？","在采样过程中 timestep 通常远大于 sigma_data(0.5)，导致 c_out 接近 1，c_skip 接近 0。忽略边界条件对结果差异不大。代码实现类似 EDM，默认 sigma_data 为 0.5。","https:\u002F\u002Fgithub.com\u002Fluosiallen\u002Flatent-consistency-model\u002Fissues\u002F82",[]]
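上文多处提到 `num_inference_steps`（推荐 1~8 步）与原始蒸馏步数（`lcm_origin_steps` / `original_inference_steps`，通常为 50）的配合关系。下面是一个仅作示意的简化实现（假设训练总步数为 1000；逻辑参照 `scheduling_lcm.py` 的时间步选取思路，并非 diffusers 库内的真实代码），用于展示 LCM 如何从 50 步的原始离散化网格中等间隔抽取少量推理时间步：

```python
def lcm_timesteps(num_inference_steps: int,
                  original_inference_steps: int = 50,
                  num_train_timesteps: int = 1000) -> list:
    """示意：LCM 少步推理的时间步选取（简化版，非官方实现）。"""
    # LCM 蒸馏时只覆盖 original_inference_steps 个 DDIM 时间步，
    # 推理时再从这份网格中按固定间隔抽取 num_inference_steps 个点。
    k = num_train_timesteps // original_inference_steps                   # 网格间隔，默认 20
    origin = [i * k - 1 for i in range(1, original_inference_steps + 1)]  # [19, 39, ..., 999]
    skip = original_inference_steps // num_inference_steps                # 抽样步距
    return origin[::-1][::skip][:num_inference_steps]

print(lcm_timesteps(4))  # [999, 759, 519, 279]：4 步推理实际访问的时间步
```

由此可见，少步推理并非把 0~1000 均匀切成 4 份，而是在蒸馏所用的 50 步网格上跳步采样，这也是原始蒸馏步数通常保持 50 的原因。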