[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-AUTOMATIC1111--stable-diffusion-webui":3,"tool-AUTOMATIC1111--stable-diffusion-webui":64},[4,17,26,34,47,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,14,15],"开发框架","Agent","语言模型","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":10,"last_commit_at":23,"category_tags":24,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,25,14],"图像",{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":10,"last_commit_at":32,"category_tags":33,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 
都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,15],{"id":35,"name":36,"github_repo":37,"description_zh":38,"stars":39,"difficulty_score":10,"last_commit_at":40,"category_tags":41,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[25,42,43,44,14,45,15,13,46],"数据工具","视频","插件","其他","音频",{"id":48,"name":49,"github_repo":50,"description_zh":51,"stars":52,"difficulty_score":53,"last_commit_at":54,"category_tags":55,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 
协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,3,"2026-04-04T04:44:48",[14,25,13,15,45],{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":53,"last_commit_at":62,"category_tags":63,"status":16},519,"PaddleOCR","PaddlePaddle\u002FPaddleOCR","PaddleOCR 是一款基于百度飞桨框架开发的高性能开源光学字符识别工具包。它的核心能力是将图片、PDF 等文档中的文字提取出来，转换成计算机可读取的结构化数据，让机器真正“看懂”图文内容。\n\n面对海量纸质或电子文档，PaddleOCR 解决了人工录入效率低、数字化成本高的问题。尤其在人工智能领域，它扮演着连接图像与大型语言模型（LLM）的桥梁角色，能将视觉信息直接转化为文本输入，助力智能问答、文档分析等应用场景落地。\n\nPaddleOCR 适合开发者、算法研究人员以及有文档自动化需求的普通用户。其技术优势十分明显：不仅支持全球 100 多种语言的识别，还能在 Windows、Linux、macOS 等多个系统上运行，并灵活适配 CPU、GPU、NPU 等各类硬件。作为一个轻量级且社区活跃的开源项目，PaddleOCR 既能满足快速集成的需求，也能支撑前沿的视觉语言研究，是处理文字识别任务的理想选择。",74913,"2026-04-05T10:44:17",[15,25,13,45],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":76,"owner_company":76,"owner_location":76,"owner_email":76,"owner_twitter":76,"owner_website":76,"owner_url":78,"languages":79,"stars":104,"forks":105,"last_commit_at":106,"license":107,"difficulty_score":53,"env_os":108,"env_gpu":109,"env_ram":110,"env_deps":111,"category_tags":125,"github_topics":126,"view_count":141,"oss_zip_url":76,"oss_zip_packed_at":76,"status":16,"created_at":142,"updated_at":143,"faqs":144,"releases":164},3808,"AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui","Stable Diffusion web UI","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。","# Stable Diffusion web UI\r\nA web 
interface for Stable Diffusion, implemented using Gradio library.\r\n\r\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAUTOMATIC1111_stable-diffusion-webui_readme_7a2ae21c3de3.png)\r\n\r\n## Features\r\n[Detailed feature showcase with images](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fwiki\u002FFeatures):\r\n- Original txt2img and img2img modes\r\n- One click install and run script (but you still must install python and git)\r\n- Outpainting\r\n- Inpainting\r\n- Color Sketch\r\n- Prompt Matrix\r\n- Stable Diffusion Upscale\r\n- Attention, specify parts of text that the model should pay more attention to\r\n    - a man in a `((tuxedo))` - will pay more attention to tuxedo\r\n    - a man in a `(tuxedo:1.21)` - alternative syntax\r\n    - select text and press `Ctrl+Up` or `Ctrl+Down` (or `Command+Up` or `Command+Down` if you're on a MacOS) to automatically adjust attention to selected text (code contributed by anonymous user)\r\n- Loopback, run img2img processing multiple times\r\n- X\u002FY\u002FZ plot, a way to draw a 3 dimensional plot of images with different parameters\r\n- Textual Inversion\r\n    - have as many embeddings as you want and use any names you like for them\r\n    - use multiple embeddings with different numbers of vectors per token\r\n    - works with half precision floating point numbers\r\n    - train embeddings on 8GB (also reports of 6GB working)\r\n- Extras tab with:\r\n    - GFPGAN, neural network that fixes faces\r\n    - CodeFormer, face restoration tool as an alternative to GFPGAN\r\n    - RealESRGAN, neural network upscaler\r\n    - ESRGAN, neural network upscaler with a lot of third party models\r\n    - SwinIR and Swin2SR ([see here](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F2092)), neural network upscalers\r\n    - LDSR, Latent diffusion super resolution upscaling\r\n- Resizing aspect ratio options\r\n- Sampling method selection\r\n    - 
Adjust sampler eta values (noise multiplier)\r\n    - More advanced noise setting options\r\n- Interrupt processing at any time\r\n- 4GB video card support (also reports of 2GB working)\r\n- Correct seeds for batches\r\n- Live prompt token length validation\r\n- Generation parameters\r\n     - parameters you used to generate images are saved with that image\r\n     - in PNG chunks for PNG, in EXIF for JPEG\r\n     - can drag the image to PNG info tab to restore generation parameters and automatically copy them into UI\r\n     - can be disabled in settings\r\n     - drag and drop an image\u002Ftext-parameters to promptbox\r\n- Read Generation Parameters Button, loads parameters in promptbox to UI\r\n- Settings page\r\n- Running arbitrary python code from UI (must run with `--allow-code` to enable)\r\n- Mouseover hints for most UI elements\r\n- Possible to change defaults\u002Fmix\u002Fmax\u002Fstep values for UI elements via text config\r\n- Tiling support, a checkbox to create images that can be tiled like textures\r\n- Progress bar and live image generation preview\r\n    - Can use a separate neural network to produce previews with almost none VRAM or compute requirement\r\n- Negative prompt, an extra text field that allows you to list what you don't want to see in generated image\r\n- Styles, a way to save part of prompt and easily apply them via dropdown later\r\n- Variations, a way to generate same image but with tiny differences\r\n- Seed resizing, a way to generate same image but at slightly different resolution\r\n- CLIP interrogator, a button that tries to guess prompt from an image\r\n- Prompt Editing, a way to change prompt mid-generation, say to start making a watermelon and switch to anime girl midway\r\n- Batch Processing, process a group of files using img2img\r\n- Img2img Alternative, reverse Euler method of cross attention control\r\n- Highres Fix, a convenience option to produce high resolution pictures in one click without usual distortions\r\n- 
Reloading checkpoints on the fly\r\n- Checkpoint Merger, a tab that allows you to merge up to 3 checkpoints into one\r\n- [Custom scripts](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fwiki\u002FCustom-Scripts) with many extensions from community\r\n- [Composable-Diffusion](https:\u002F\u002Fenergy-based-model.github.io\u002FCompositional-Visual-Generation-with-Composable-Diffusion-Models\u002F), a way to use multiple prompts at once\r\n     - separate prompts using uppercase `AND`\r\n     - also supports weights for prompts: `a cat :1.2 AND a dog AND a penguin :2.2`\r\n- No token limit for prompts (original stable diffusion lets you use up to 75 tokens)\r\n- DeepDanbooru integration, creates danbooru style tags for anime prompts\r\n- [xformers](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fwiki\u002FXformers), major speed increase for select cards: (add `--xformers` to commandline args)\r\n- via extension: [History tab](https:\u002F\u002Fgithub.com\u002Fyfszzx\u002Fstable-diffusion-webui-images-browser): view, direct and delete images conveniently within the UI\r\n- Generate forever option\r\n- Training tab\r\n     - hypernetworks and embeddings options\r\n     - Preprocessing images: cropping, mirroring, autotagging using BLIP or deepdanbooru (for anime)\r\n- Clip skip\r\n- Hypernetworks\r\n- Loras (same as Hypernetworks but more pretty)\r\n- A separate UI where you can choose, with preview, which embeddings, hypernetworks or Loras to add to your prompt\r\n- Can select to load a different VAE from settings screen\r\n- Estimated completion time in progress bar\r\n- API\r\n- Support for dedicated [inpainting model](https:\u002F\u002Fgithub.com\u002Frunwayml\u002Fstable-diffusion#inpainting-with-stable-diffusion) by RunwayML\r\n- via extension: [Aesthetic Gradients](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui-aesthetic-gradients), a way to generate images with a 
specific aesthetic by using clip images embeds (implementation of [https:\u002F\u002Fgithub.com\u002Fvicgalle\u002Fstable-diffusion-aesthetic-gradients](https:\u002F\u002Fgithub.com\u002Fvicgalle\u002Fstable-diffusion-aesthetic-gradients))\r\n- [Stable Diffusion 2.0](https:\u002F\u002Fgithub.com\u002FStability-AI\u002Fstablediffusion) support - see [wiki](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fwiki\u002FFeatures#stable-diffusion-20) for instructions\r\n- [Alt-Diffusion](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.06679) support - see [wiki](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fwiki\u002FFeatures#alt-diffusion) for instructions\r\n- Now without any bad letters!\r\n- Load checkpoints in safetensors format\r\n- Eased resolution restriction: generated image's dimensions must be a multiple of 8 rather than 64\r\n- Now with a license!\r\n- Reorder elements in the UI from settings screen\r\n- [Segmind Stable Diffusion](https:\u002F\u002Fhuggingface.co\u002Fsegmind\u002FSSD-1B) support\r\n\r\n## Installation and Running\r\nMake sure the required [dependencies](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fwiki\u002FDependencies) are met and follow the instructions available for:\r\n- [NVidia](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fwiki\u002FInstall-and-Run-on-NVidia-GPUs) (recommended)\r\n- [AMD](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fwiki\u002FInstall-and-Run-on-AMD-GPUs) GPUs.\r\n- [Intel CPUs, Intel GPUs (both integrated and discrete)](https:\u002F\u002Fgithub.com\u002Fopenvinotoolkit\u002Fstable-diffusion-webui\u002Fwiki\u002FInstallation-on-Intel-Silicon) (external wiki page)\r\n- [Ascend NPUs](https:\u002F\u002Fgithub.com\u002Fwangshuai09\u002Fstable-diffusion-webui\u002Fwiki\u002FInstall-and-run-on-Ascend-NPUs) (external wiki page)\r\n\r\nAlternatively, use 
online services (like Google Colab):\r\n\r\n- [List of Online Services](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fwiki\u002FOnline-Services)\r\n\r\n### Installation on Windows 10\u002F11 with NVidia-GPUs using release package\r\n1. Download `sd.webui.zip` from [v1.0.0-pre](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Freleases\u002Ftag\u002Fv1.0.0-pre) and extract its contents.\r\n2. Run `update.bat`.\r\n3. Run `run.bat`.\r\n> For more details see [Install-and-Run-on-NVidia-GPUs](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fwiki\u002FInstall-and-Run-on-NVidia-GPUs)\r\n\r\n### Automatic Installation on Windows\r\n1. Install [Python 3.10.6](https:\u002F\u002Fwww.python.org\u002Fdownloads\u002Frelease\u002Fpython-3106\u002F) (Newer version of Python does not support torch), checking \"Add Python to PATH\".\r\n2. Install [git](https:\u002F\u002Fgit-scm.com\u002Fdownload\u002Fwin).\r\n3. Download the stable-diffusion-webui repository, for example by running `git clone https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui.git`.\r\n4. Run `webui-user.bat` from Windows Explorer as normal, non-administrator, user.\r\n\r\n### Automatic Installation on Linux\r\n1. 
Install the dependencies:\r\n```bash\r\n# Debian-based:\r\nsudo apt install wget git python3 python3-venv libgl1 libglib2.0-0\r\n# Red Hat-based:\r\nsudo dnf install wget git python3 gperftools-libs libglvnd-glx\r\n# openSUSE-based:\r\nsudo zypper install wget git python3 libtcmalloc4 libglvnd\r\n# Arch-based:\r\nsudo pacman -S wget git python3\r\n```\r\nIf your system is very new, you need to install python3.11 or python3.10:\r\n```bash\r\n# Ubuntu 24.04\r\nsudo add-apt-repository ppa:deadsnakes\u002Fppa\r\nsudo apt update\r\nsudo apt install python3.11\r\n\r\n# Manjaro\u002FArch\r\nsudo pacman -S yay\r\nyay -S python311 # do not confuse with python3.11 package\r\n\r\n# Only for 3.11\r\n# Then set up env variable in launch script\r\nexport python_cmd=\"python3.11\"\r\n# or in webui-user.sh\r\npython_cmd=\"python3.11\"\r\n```\r\n2. Navigate to the directory you would like the webui to be installed and execute the following command:\r\n```bash\r\nwget -q https:\u002F\u002Fraw.githubusercontent.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fmaster\u002Fwebui.sh\r\n```\r\nOr just clone the repo wherever you want:\r\n```bash\r\ngit clone https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\r\n```\r\n\r\n3. Run `webui.sh`.\r\n4. 
Check `webui-user.sh` for options.\r\n### Installation on Apple Silicon\r\n\r\nFind the instructions [here](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fwiki\u002FInstallation-on-Apple-Silicon).\r\n\r\n## Contributing\r\nHere's how to add code to this repo: [Contributing](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fwiki\u002FContributing)\r\n\r\n## Documentation\r\n\r\nThe documentation was moved from this README over to the project's [wiki](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fwiki).\r\n\r\nFor the purposes of getting Google and other search engines to crawl the wiki, here's a link to the (not for humans) [crawlable wiki](https:\u002F\u002Fgithub-wiki-see.page\u002Fm\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fwiki).\r\n\r\n## Credits\r\nLicenses for borrowed code can be found in `Settings -> Licenses` screen, and also in `html\u002Flicenses.html` file.\r\n\r\n- Stable Diffusion - https:\u002F\u002Fgithub.com\u002FStability-AI\u002Fstablediffusion, https:\u002F\u002Fgithub.com\u002FCompVis\u002Ftaming-transformers, https:\u002F\u002Fgithub.com\u002Fmcmonkey4eva\u002Fsd3-ref\r\n- k-diffusion - https:\u002F\u002Fgithub.com\u002Fcrowsonkb\u002Fk-diffusion.git\r\n- Spandrel - https:\u002F\u002Fgithub.com\u002FchaiNNer-org\u002Fspandrel implementing\r\n  - GFPGAN - https:\u002F\u002Fgithub.com\u002FTencentARC\u002FGFPGAN.git\r\n  - CodeFormer - https:\u002F\u002Fgithub.com\u002Fsczhou\u002FCodeFormer\r\n  - ESRGAN - https:\u002F\u002Fgithub.com\u002Fxinntao\u002FESRGAN\r\n  - SwinIR - https:\u002F\u002Fgithub.com\u002FJingyunLiang\u002FSwinIR\r\n  - Swin2SR - https:\u002F\u002Fgithub.com\u002Fmv-lab\u002Fswin2sr\r\n- LDSR - https:\u002F\u002Fgithub.com\u002FHafiidz\u002Flatent-diffusion\r\n- MiDaS - https:\u002F\u002Fgithub.com\u002Fisl-org\u002FMiDaS\r\n- Ideas for optimizations - 
https:\u002F\u002Fgithub.com\u002Fbasujindal\u002Fstable-diffusion\r\n- Cross Attention layer optimization - Doggettx - https:\u002F\u002Fgithub.com\u002FDoggettx\u002Fstable-diffusion, original idea for prompt editing.\r\n- Cross Attention layer optimization - InvokeAI, lstein - https:\u002F\u002Fgithub.com\u002Finvoke-ai\u002FInvokeAI (originally http:\u002F\u002Fgithub.com\u002Flstein\u002Fstable-diffusion)\r\n- Sub-quadratic Cross Attention layer optimization - Alex Birch (https:\u002F\u002Fgithub.com\u002FBirch-san\u002Fdiffusers\u002Fpull\u002F1), Amin Rezaei (https:\u002F\u002Fgithub.com\u002FAminRezaei0x443\u002Fmemory-efficient-attention)\r\n- Textual Inversion - Rinon Gal - https:\u002F\u002Fgithub.com\u002Frinongal\u002Ftextual_inversion (we're not using his code, but we are using his ideas).\r\n- Idea for SD upscale - https:\u002F\u002Fgithub.com\u002Fjquesnelle\u002Ftxt2imghd\r\n- Noise generation for outpainting mk2 - https:\u002F\u002Fgithub.com\u002Fparlance-zz\u002Fg-diffuser-bot\r\n- CLIP interrogator idea and borrowing some code - https:\u002F\u002Fgithub.com\u002Fpharmapsychotic\u002Fclip-interrogator\r\n- Idea for Composable Diffusion - https:\u002F\u002Fgithub.com\u002Fenergy-based-model\u002FCompositional-Visual-Generation-with-Composable-Diffusion-Models-PyTorch\r\n- xformers - https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fxformers\r\n- DeepDanbooru - interrogator for anime diffusers https:\u002F\u002Fgithub.com\u002FKichangKim\u002FDeepDanbooru\r\n- Sampling in float32 precision from a float16 UNet - marunine for the idea, Birch-san for the example Diffusers implementation (https:\u002F\u002Fgithub.com\u002FBirch-san\u002Fdiffusers-play\u002Ftree\u002F92feee6)\r\n- Instruct pix2pix - Tim Brooks (star), Aleksander Holynski (star), Alexei A. 
Efros (no star) - https:\u002F\u002Fgithub.com\u002Ftimothybrooks\u002Finstruct-pix2pix\r\n- Security advice - RyotaK\r\n- UniPC sampler - Wenliang Zhao - https:\u002F\u002Fgithub.com\u002Fwl-zhao\u002FUniPC\r\n- TAESD - Ollin Boer Bohan - https:\u002F\u002Fgithub.com\u002Fmadebyollin\u002Ftaesd\r\n- LyCORIS - KohakuBlueleaf\r\n- Restart sampling - lambertae - https:\u002F\u002Fgithub.com\u002FNewbeeer\u002Fdiffusion_restart_sampling\r\n- Hypertile - tfernd - https:\u002F\u002Fgithub.com\u002Ftfernd\u002FHyperTile\r\n- Initial Gradio script - posted on 4chan by an Anonymous user. Thank you Anonymous user.\r\n- (You)\r\n","# Stable Diffusion Web UI\n一个基于 Gradio 库实现的 Stable Diffusion 网页界面。\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAUTOMATIC1111_stable-diffusion-webui_readme_7a2ae21c3de3.png)\n\n## 功能特性\n[附带图片的详细功能展示](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fwiki\u002FFeatures)：\n- 原生的文本到图像和图像到图像模式\n- 一键安装与运行脚本（但仍需自行安装 Python 和 Git）\n- 外扩绘图\n- 局部重绘\n- 色彩草图\n- 提示词矩阵\n- Stable Diffusion 超分辨率放大\n- 注意力机制：可指定模型应更加关注的文本部分\n    - `((燕尾服))` 中的“燕尾服”将获得更多关注\n    - `(燕尾服:1.21)` 是另一种语法\n    - 选中文本后按 `Ctrl+Up` 或 `Ctrl+Down`（Mac 上为 `Command+Up` 或 `Command+Down`）即可自动调整对所选文本的关注度（由匿名用户贡献的代码）\n- 循环处理：多次执行图像到图像处理\n- X\u002FY\u002FZ 图：以三维方式绘制不同参数下的图像\n- 文本反转嵌入\n    - 可拥有任意数量的嵌入，并为其命名\n    - 支持使用不同向量数的多个嵌入\n    - 兼容半精度浮点数\n    - 即使在 8GB 显存上也能训练嵌入，也有 6GB 显存可用的报告\n- “附加”选项卡包含：\n    - GFPGAN：用于修复人脸的神经网络\n    - CodeFormer：GFPGAN 的替代性人脸修复工具\n    - RealESRGAN：神经网络超分辨率放大器\n    - ESRGAN：支持大量第三方模型的神经网络超分辨率放大器\n    - SwinIR 和 Swin2SR（[参见此处](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F2092)）：神经网络超分辨率放大器\n    - LDSR：潜在扩散超分辨率放大\n- 宽高比调整选项\n- 采样方法选择\n    - 可调节采样器的 eta 值（噪声倍增因子）\n    - 更高级的噪声设置选项\n- 随时中断处理过程\n- 支持 4GB 显卡（也有 2GB 显卡可用的报告）\n- 批次生成时种子正确\n- 实时提示词长度验证\n- 生成参数保存\n     - 生成图像时使用的参数会随图像一同保存\n     - PNG 格式保存在 PNG 数据块中，JPEG 格式保存在 EXIF 中\n     - 可将图像拖拽至 PNG 信息标签页以恢复生成参数并自动复制到界面\n     - 
可在设置中禁用此功能\n     - 将图像或文本参数拖拽至提示词框\n- “读取生成参数”按钮：将参数加载到提示词框并显示在界面上\n- 设置页面\n- 可从界面运行任意 Python 代码（需使用 `--allow-code` 参数启用）\n- 大多数界面元素的鼠标悬停提示\n- 可通过文本配置文件修改界面元素的默认值、混合值、最大值和步长\n- 平铺支持：勾选该选项可生成可像纹理一样平铺的图像\n- 进度条与实时图像生成预览\n    - 可使用独立的神经网络生成预览，几乎不占用显存或计算资源\n- 负面提示词：额外的文本字段，用于列出不希望出现在生成图像中的内容\n- 风格：可保存部分提示词，并在后续通过下拉菜单轻松应用\n- 变体：生成相同图像但略有差异的方法\n- 种子缩放：以略微不同的分辨率生成相同图像\n- CLIP 解读器：尝试根据图像猜测提示词的按钮\n- 提示词编辑：可在生成过程中更改提示词，例如先生成西瓜，中途切换为动漫女孩\n- 批量处理：使用图像到图像模式处理一组文件\n- 图像到图像的替代方法：交叉注意力控制的反向欧拉法\n- 高分辨率修复：一键生成高分辨率图片且无常见畸变的便捷选项\n- 动态重新加载检查点\n- 检查点合并：允许将最多 3 个检查点合并为一个\n- [自定义脚本](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fwiki\u002FCustom-Scripts)，社区提供了众多扩展插件\n- [组合扩散](https:\u002F\u002Fenergy-based-model.github.io\u002FCompositional-Visual-Generation-with-Composable-Diffusion-Models\u002F)：可同时使用多个提示词\n     - 使用大写 `AND` 分隔提示词\n     - 也支持为提示词设置权重：`a cat :1.2 AND a dog AND a penguin :2.2`\n- 提示词无字符限制（原版 Stable Diffusion 仅支持最多 75 个字符）\n- DeepDanbooru 集成：为动漫类提示词生成 Danbooru 风格标签\n- [xformers](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fwiki\u002FXformers)：为特定显卡带来显著速度提升（需在命令行参数中添加 `--xformers`）\n- 通过扩展插件：[历史记录选项卡](https:\u002F\u002Fgithub.com\u002Fyfszzx\u002Fstable-diffusion-webui-images-browser)：可在界面内方便地查看、直接操作及删除图像\n- 无限生成选项\n- 训练选项卡\n     - 超网络与嵌入选项\n     - 图像预处理：裁剪、镜像、使用 BLIP 或 Deepdanbooru 自动打标签（适用于动漫图像）\n- Clip 跳过\n- 超网络\n- LoRA（与超网络类似，但更美观）\n- 独立的界面，可预览并选择要添加到提示词中的嵌入、超网络或 LoRA\n- 可在设置界面中选择加载不同的 VAE\n- 进度条显示预计完成时间\n- API\n- 支持 RunwayML 提供的专用局部重绘模型（[参见此处](https:\u002F\u002Fgithub.com\u002Frunwayml\u002Fstable-diffusion#inpainting-with-stable-diffusion)）\n- 通过扩展插件：[美学渐变](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui-aesthetic-gradients)：利用 CLIP 图像嵌入生成具有特定美学风格的图像（基于 [https:\u002F\u002Fgithub.com\u002Fvicgalle\u002Fstable-diffusion-aesthetic-gradients](https:\u002F\u002Fgithub.com\u002Fvicgalle\u002Fstable-diffusion-aesthetic-gradients) 的实现）\n- 支持 [Stable Diffusion 
2.0](https:\u002F\u002Fgithub.com\u002FStability-AI\u002Fstablediffusion)——具体说明请参阅[维基页面](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fwiki\u002FFeatures#stable-diffusion-20)\n- 支持 [Alt-Diffusion](https:\u002F\u002Farxiv.org\u002Fabs\u002F2211.06679)——具体说明请参阅[维基页面](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fwiki\u002FFeatures#alt-diffusion)\n- 现在不再出现任何不良字符！\n- 支持加载 safetensors 格式的检查点\n- 放宽分辨率限制：生成图像的尺寸只需是 8 的倍数，而非 64 的倍数\n- 现在已获得许可！\n- 可在设置界面中重新排列界面元素\n- 支持 [Segmind Stable Diffusion](https:\u002F\u002Fhuggingface.co\u002Fsegmind\u002FSSD-1B)\n\n## 安装与运行\n请确保满足所需的[依赖项](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fwiki\u002FDependencies)，并按照以下平台的说明进行操作：\n- [NVidia](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fwiki\u002FInstall-and-Run-on-NVidia-GPUs)（推荐）\n- [AMD](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fwiki\u002FInstall-and-Run-on-AMD-GPUs) 显卡。\n- [Intel CPU、Intel GPU（集成与独立）](https:\u002F\u002Fgithub.com\u002Fopenvinotoolkit\u002Fstable-diffusion-webui\u002Fwiki\u002FInstallation-on-Intel-Silicon)（外部维基页面）\n- [Ascend NPU](https:\u002F\u002Fgithub.com\u002Fwangshuai09\u002Fstable-diffusion-webui\u002Fwiki\u002FInstall-and-run-on-Ascend-NPUs)（外部维基页面）\n\n或者，您也可以使用在线服务（如 Google Colab）：\n\n- [在线服务列表](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fwiki\u002FOnline-Services)\n\n### 使用发布包在 Windows 10\u002F11 上安装 NVidia 显卡版\n1. 从 [v1.0.0-pre](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Freleases\u002Ftag\u002Fv1.0.0-pre) 下载 `sd.webui.zip` 并解压。\n2. 运行 `update.bat`。\n3. 运行 `run.bat`。\n> 更多详情请参阅 [Install-and-Run-on-NVidia-GPUs](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fwiki\u002FInstall-and-Run-on-NVidia-GPUs)\n\n### Windows 自动安装\n1. 
安装 [Python 3.10.6](https:\u002F\u002Fwww.python.org\u002Fdownloads\u002Frelease\u002Fpython-3106\u002F)（较新版本的 Python 不支持 torch），并勾选“将 Python 添加到 PATH”。\n2. 安装 [git](https:\u002F\u002Fgit-scm.com\u002Fdownload\u002Fwin)。\n3. 克隆 stable-diffusion-webui 仓库，例如运行 `git clone https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui.git`。\n4. 以普通非管理员用户身份，在 Windows 资源管理器中运行 `webui-user.bat`。\n\n### Linux 自动安装\n1. 安装依赖项：\n```bash\n# 基于 Debian 的系统：\nsudo apt install wget git python3 python3-venv libgl1 libglib2.0-0\n# 基于 Red Hat 的系统：\nsudo dnf install wget git python3 gperftools-libs libglvnd-glx\n# 基于 openSUSE 的系统：\nsudo zypper install wget git python3 libtcmalloc4 libglvnd\n# 基于 Arch 的系统：\nsudo pacman -S wget git python3\n```\n如果您的系统非常新，可能需要安装 Python 3.11 或 Python 3.10：\n```bash\n# Ubuntu 24.04\nsudo add-apt-repository ppa:deadsnakes\u002Fppa\nsudo apt update\nsudo apt install python3.11\n\n# Manjaro\u002FArch\nsudo pacman -S yay\nyay -S python311 # 请注意不要与 python3.11 包混淆\n\n# 仅针对 3.11 版本\n# 然后在启动脚本中设置环境变量\nexport python_cmd=\"python3.11\"\n# 或者在 webui-user.sh 中\npython_cmd=\"python3.11\"\n```\n2. 导航到您希望安装 WebUI 的目录，并执行以下命令：\n```bash\nwget -q https:\u002F\u002Fraw.githubusercontent.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fmaster\u002Fwebui.sh\n```\n或者直接克隆仓库到您想要的位置：\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\n```\n\n3. 运行 `webui.sh`。\n4. 
查看 `webui-user.sh` 以了解可用选项。\n\n### Apple Silicon 上的安装\n请参阅[此处](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fwiki\u002FInstallation-on-Apple-Silicon)的说明。\n\n## 贡献\n以下是向此仓库添加代码的方法：[贡献](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fwiki\u002FContributing)\n\n## 文档\n文档已从本 README 移至项目的[维基](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fwiki)。\n\n为了便于 Google 及其他搜索引擎抓取维基，这里提供一个（非人类友好型）可爬取的[维基链接](https:\u002F\u002Fgithub-wiki-see.page\u002Fm\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fwiki)。\n\n## 致谢\n借用代码的许可证可在“设置 -> 许可证”界面以及 `html\u002Flicenses.html` 文件中找到。\n\n- Stable Diffusion - https:\u002F\u002Fgithub.com\u002FStability-AI\u002Fstablediffusion, https:\u002F\u002Fgithub.com\u002FCompVis\u002Ftaming-transformers, https:\u002F\u002Fgithub.com\u002Fmcmonkey4eva\u002Fsd3-ref\n- k-diffusion - https:\u002F\u002Fgithub.com\u002Fcrowsonkb\u002Fk-diffusion.git\n- Spandrel - https:\u002F\u002Fgithub.com\u002FchaiNNer-org\u002Fspandrel 实现了\n  - GFPGAN - https:\u002F\u002Fgithub.com\u002FTencentARC\u002FGFPGAN.git\n  - CodeFormer - https:\u002F\u002Fgithub.com\u002Fsczhou\u002FCodeFormer\n  - ESRGAN - https:\u002F\u002Fgithub.com\u002Fxinntao\u002FESRGAN\n  - SwinIR - https:\u002F\u002Fgithub.com\u002FJingyunLiang\u002FSwinIR\n  - Swin2SR - https:\u002F\u002Fgithub.com\u002Fmv-lab\u002Fswin2sr\n- LDSR - https:\u002F\u002Fgithub.com\u002FHafiidz\u002Flatent-diffusion\n- MiDaS - https:\u002F\u002Fgithub.com\u002Fisl-org\u002FMiDaS\n- 优化思路 - https:\u002F\u002Fgithub.com\u002Fbasujindal\u002Fstable-diffusion\n- Cross Attention 层优化 - Doggettx - https:\u002F\u002Fgithub.com\u002FDoggettx\u002Fstable-diffusion，最初提出了提示词编辑的想法。\n- Cross Attention 层优化 - InvokeAI、lstein - https:\u002F\u002Fgithub.com\u002Finvoke-ai\u002FInvokeAI（原为 http:\u002F\u002Fgithub.com\u002Flstein\u002Fstable-diffusion）\n- 次二次复杂度 Cross Attention 层优化 - Alex 
Birch（https:\u002F\u002Fgithub.com\u002FBirch-san\u002Fdiffusers\u002Fpull\u002F1）、Amin Rezaei（https:\u002F\u002Fgithub.com\u002FAminRezaei0x443\u002Fmemory-efficient-attention）\n- Textual Inversion - Rinon Gal - https:\u002F\u002Fgithub.com\u002Frinongal\u002Ftextual_inversion（我们并未使用他的代码，但借鉴了他的思想）。\n- SD 超分辨率想法 - https:\u002F\u002Fgithub.com\u002Fjquesnelle\u002Ftxt2imghd\n- Outpainting mk2 的噪声生成 - https:\u002F\u002Fgithub.com\u002Fparlance-zz\u002Fg-diffuser-bot\n- CLIP 询问器的想法及部分代码借用 - https:\u002F\u002Fgithub.com\u002Fpharmapsychotic\u002Fclip-interrogator\n- 组合式扩散模型的想法 - https:\u002F\u002Fgithub.com\u002Fenergy-based-model\u002FCompositional-Visual-Generation-with-Composable-Diffusion-Models-PyTorch\n- xformers - https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fxformers\n- DeepDanbooru - 动漫扩散模型的询问器 https:\u002F\u002Fgithub.com\u002FKichangKim\u002FDeepDanbooru\n- 从 float16 UNet 中以 float32 精度采样 - marunine 提出想法，Birch-san 提供了示例实现（https:\u002F\u002Fgithub.com\u002FBirch-san\u002Fdiffusers-play\u002Ftree\u002F92feee6）\n- Instruct pix2pix - Tim Brooks（明星）、Aleksander Holynski（明星）、Alexei A. 
Efros（无明星光环）- https:\u002F\u002Fgithub.com\u002Ftimothybrooks\u002Finstruct-pix2pix\n- 安全建议 - RyotaK\n- UniPC 采样器 - Wenliang Zhao - https:\u002F\u002Fgithub.com\u002Fwl-zhao\u002FUniPC\n- TAESD - Ollin Boer Bohan - https:\u002F\u002Fgithub.com\u002Fmadebyollin\u002Ftaesd\n- LyCORIS - KohakuBlueleaf\n- 重启采样 - lambertae - https:\u002F\u002Fgithub.com\u002FNewbeeer\u002Fdiffusion_restart_sampling\n- Hypertile - tfernd - https:\u002F\u002Fgithub.com\u002Ftfernd\u002FHyperTile\n- 初始 Gradio 脚本 - 由一位匿名用户在 4chan 上发布。感谢这位匿名用户。\n- （您）","# Stable Diffusion WebUI 快速上手指南\n\nStable Diffusion WebUI 是一个基于 Gradio 库开发的 Stable Diffusion 网页操作界面，提供了文生图（txt2img）、图生图（img2img）、局部重绘、高清修复等丰富功能，是目前最流行的 SD 本地部署方案之一。\n\n## 环境准备\n\n在开始安装前，请确保您的系统满足以下基本要求：\n\n### 系统要求\n*   **操作系统**：Windows 10\u002F11, Linux, 或 macOS (Apple Silicon)。\n*   **GPU 推荐**：\n    *   **NVIDIA 显卡**（强烈推荐）：显存建议 4GB 及以上（4GB 可运行，8GB+ 体验更佳）。需安装最新 NVIDIA 驱动。\n    *   **AMD \u002F Intel \u002F Ascend**：支持但配置相对复杂，需参考官方 Wiki 特定教程。\n*   **磁盘空间**：建议预留 10GB 以上空间用于存放程序、模型及生成图片。\n\n### 前置依赖\n无论何种系统，均需预先安装以下基础工具：\n1.  **Python**: 必须安装 **Python 3.10.6**。\n    *   *注意*：新版 Python (3.11+) 可能导致 torch 不兼容，请严格使用 3.10.6 版本。\n    *   安装时务必勾选 **\"Add Python to PATH\"**。\n2.  **Git**: 用于克隆代码仓库。\n\n> **国内加速建议**：\n> *   Python 下载：可使用清华大学或阿里云镜像源。\n> *   Git 克隆：若直接克隆速度慢，可将 `https:\u002F\u002Fgithub.com` 替换为国内镜像地址（如 `https:\u002F\u002Fghp.ci\u002Fhttps:\u002F\u002Fgithub.com` 或使用 Gitee 镜像仓库）。\n> *   Pip 源：安装过程中如需手动 pip 安装包，请配置 `-i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple`。\n\n---\n\n## 安装步骤\n\n根据您的操作系统选择对应的安装方式。\n\n### 方案 A：Windows 一键安装包（推荐新手）\n此方法无需手动配置 Git 和 Python 环境，适合纯新手。\n\n1.  前往 [Releases 页面](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Freleases) 下载 `sd.webui.zip` (查找 v1.0.0-pre 或最新稳定版)。\n2.  解压压缩包到任意目录（路径中不要包含中文或空格）。\n3.  双击运行 `update.bat` 进行初始化更新。\n4.  双击运行 `run.bat` 启动程序。\n\n### 方案 B：Windows 自动安装脚本（标准方式）\n适合需要自定义环境或开发者。\n\n1.  
安装 [Python 3.10.6](https:\u002F\u002Fwww.python.org\u002Fdownloads\u002Frelease\u002Fpython-3106\u002F) 并勾选添加环境变量。\n2.  安装 [Git](https:\u002F\u002Fgit-scm.com\u002Fdownload\u002Fwin)。\n3.  打开命令提示符 (CMD) 或 PowerShell，执行以下命令克隆仓库：\n    ```bash\n    git clone https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui.git\n    ```\n    *(国内用户若速度慢，可使用：`git clone https:\u002F\u002Fghp.ci\u002Fhttps:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui.git`)*\n4.  进入项目目录：\n    ```bash\n    cd stable-diffusion-webui\n    ```\n5.  运行启动脚本：\n    ```bash\n    webui-user.bat\n    ```\n    *首次运行会自动下载依赖和模型，请耐心等待。*\n\n### 方案 C：Linux 安装步骤\n\n1.  **安装系统依赖** (以 Ubuntu\u002FDebian 为例)：\n    ```bash\n    sudo apt install wget git python3 python3-venv libgl1 libglib2.0-0\n    ```\n    *注：若系统较新（如 Ubuntu 24.04），可能需要手动安装 python3.10 或 3.11 并在 `webui-user.sh` 中指定 `python_cmd=\"python3.10\"`。*\n\n2.  **克隆仓库**：\n    ```bash\n    git clone https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui.git\n    cd stable-diffusion-webui\n    ```\n\n3.  **运行启动脚本**：\n    ```bash\n    .\u002Fwebui.sh\n    ```\n    *如需修改参数（如监听端口、显存优化），请编辑 `webui-user.sh` 文件中的 `COMMANDLINE_ARGS`。*\n\n### 方案 D：macOS (Apple Silicon)\nM1\u002FM2\u002FM3 芯片用户请参考官方 Wiki 的 [Installation on Apple Silicon](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fwiki\u002FInstallation-on-Apple-Silicon) 章节，通常需安装 Homebrew 及特定版本的 Python。\n\n---\n\n## 基本使用\n\n启动成功后，终端会显示类似 `Running on local URL: http:\u002F\u002F127.0.0.1:7860` 的信息。在浏览器中打开该地址即可进入界面。\n\n### 1. 准备模型 checkpoint\n首次运行时，程序可能会尝试自动下载基础模型。若失败或需使用特定模型：\n*   下载 `.ckpt` 或 `.safetensors` 格式的模型文件（推荐从 Civitai 或 HuggingFace 下载）。\n*   将模型文件放入项目目录下的 `models\u002FStable-diffusion` 文件夹中。\n*   在 WebUI 左上角点击刷新按钮 🔄，选择刚放入的模型。\n\n### 2. 文生图 (txt2img) 最简单的示例\n这是最核心的功能，通过文字描述生成图片。\n\n1.  点击顶部标签页 **txt2img**。\n2.  
**Prompt (提示词)**：在上方文本框输入英文描述。\n    *   示例：`a beautiful girl with blue eyes, standing in a garden, sunlight, high quality, masterpiece`\n    *   *技巧*：使用 `(keyword:1.2)` 可增加该关键词权重。\n3.  **Negative Prompt (负面提示词)**：在下方文本框输入不希望出现的内容。\n    *   示例：`low quality, worst quality, ugly, deformed, noisy, blurry`\n4.  **参数设置**：\n    *   **Sampling Steps**: 设为 20-30。\n    *   **Width\u002FHeight**: 设为 512x512 或 512x768。\n    *   **Batch count**: 生成几张图，设为 1 即可测试。\n5.  点击 **Generate** 按钮。\n6.  等待进度条完成，生成的图片将显示在下方，可右键保存。\n\n### 3. 图生图 (img2img) 简述\n点击 **img2img** 标签，上传一张本地图片，配合提示词可进行风格转换、局部重绘或高清放大。\n\n### 4. 常用快捷键与技巧\n*   **中断生成**：随时点击红色的 **Interrupt** 按钮停止当前任务。\n*   **提示词权重调整**：选中提示词中的某部分，按 `Ctrl+Up` (Mac: `Cmd+Up`) 增加权重，`Ctrl+Down` 降低权重。\n*   **查看参数**：将生成的图片拖拽至 **PNG Info** 标签页，可自动还原当时的生成参数并填入界面。\n\n---\n*注：本工具功能极其丰富，包含插件扩展、训练模型等高级功能，详细文档请参阅项目官方 Wiki。*","一位独立游戏开发者需要为奇幻 RPG 项目快速生成大量风格统一的角色立绘和无缝拼接的地形纹理素材。\n\n### 没有 stable-diffusion-webui 时\n- 只能依赖命令行手动调整参数，无法实时预览生成进度，每次试错都需等待完整渲染，效率极低。\n- 难以精确控制画面细节（如“只要盔甲不要头盔”），缺乏负向提示词和注意力机制支持，导致废图率极高。\n- 制作可无限平铺的地形贴图时，需借助外部软件后期处理边缘，无法直接生成无缝纹理。\n- 生成的低分辨率图片放大后模糊失真，缺乏内置的高质量修复与超分工具，后续处理流程繁琐。\n- 无法保存和复用特定的艺术风格提示词，导致不同批次生成的角色画风不一致，破坏美术统一性。\n\n### 使用 stable-diffusion-webui 后\n- 通过可视化界面实时调节参数并查看生成预览，利用中断功能随时停止不满意的结果，大幅缩短迭代周期。\n- 运用负向提示词排除多余元素，结合注意力权重语法（如 `(tuxedo:1.2)`）精准强化关键特征，显著提升出图准确率。\n- 勾选\"Tiling\"选项即可一键生成完美无缝的地形纹理，直接应用于游戏引擎，省去后期拼接麻烦。\n- 调用内置的 RealESRGAN 和 GFPGAN 插件，在生成同时完成高清放大与人脸修复，直接输出可用资产。\n- 利用\"Styles\"功能保存专属画风模板，通过下拉菜单随时调用，确保数百张角色立绘保持高度一致的艺术风格。\n\nstable-diffusion-webui 
将复杂的扩散模型转化为直观的生产力流水线，让创作者从繁琐的技术调试中解放，专注于创意实现。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAUTOMATIC1111_stable-diffusion-webui_7a2ae21c.png","AUTOMATIC1111",null,"https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FAUTOMATIC1111_4fab9273.png","https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111",[80,84,88,92,96,100],{"name":81,"color":82,"percentage":83},"Python","#3572A5",87.5,{"name":85,"color":86,"percentage":87},"JavaScript","#f1e05a",8.4,{"name":89,"color":90,"percentage":91},"CSS","#663399",2.1,{"name":93,"color":94,"percentage":95},"HTML","#e34c26",1.3,{"name":97,"color":98,"percentage":99},"Shell","#89e051",0.6,{"name":101,"color":102,"percentage":103},"Batchfile","#C1F12E",0.1,162132,30214,"2026-04-05T11:01:52","AGPL-3.0","Windows, Linux, macOS","NVIDIA GPU 推荐（最低支持 4GB 显存，有 2GB 运行成功的报告；训练嵌入需 6-8GB）；支持 AMD GPU、Intel GPU\u002FCPU 及 Ascend NPU（需参考外部 Wiki 安装指南）","未说明",{"notes":112,"python":113,"dependencies":114},"必须预先安装 Python 和 Git。Windows 用户可使用一键安装包或手动安装；Linux 用户需安装特定系统库（如 libgl1）。支持多种显卡架构但 NVIDIA 体验最佳。首次运行会自动下载模型文件。可通过命令行参数 --xformers 启用加速。支持加载 safetensors 格式的模型以确保安全。","3.10.6 (新版 Python 可能不支持 torch，部分新系统需 3.11)",[115,116,117,118,119,120,121,122,123,124],"torch","git","gradio","xformers (可选，用于加速)","GFPGAN","CodeFormer","RealESRGAN","SwinIR","LDSR","safetensors",[13,25,14],[127,128,129,130,131,132,133,134,135,117,136,137,115,138,139,140],"deep-learning","diffusion","image-generation","image2image","img2img","text2image","txt2img","ai","ai-art","pytorch","stable-diffusion","upscaling","web","unstable",14,"2026-03-27T02:49:30.150509","2026-04-06T05:17:42.911256",[145,150,155,159],{"id":146,"question_zh":147,"answer_zh":148,"source_url":149},17443,"使用深度模型（Depth Models）时出现全 NaN 张量错误或黑屏，即使使用了 --no-half 参数也无法解决，该怎么办？","这通常与模型的精度设置或特定 API 问题有关。如果日志提示 GPU 不支持半精度，虽然您使用的是 RTX 4090，但某些深度模型可能需要特定的精度配置。尝试在启动脚本中设置环境变量 `ATTN_PRECISION=fp16` 来强制使用半精度注意力机制，或者检查模型文件是否完整（有时下载的文件可能显示为\"EMPTY\"或损坏）。此外，确保您的 YAML 
配置文件与模型版本匹配。","https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fissues\u002F6923",{"id":151,"question_zh":152,"answer_zh":153,"source_url":154},17444,"加载 Stable Diffusion 2.0\u002F2.1 新模型（如 768x768 或深度模型）时出现\"size mismatch\"（尺寸不匹配）错误怎么办？","出现此错误通常是因为使用的 YAML 配置文件不正确。SD 2.0 及更高版本需要特定的推理配置。请确保您使用了正确的 YAML 文件（例如 `v2-inference.yaml` 或 `v2-inference-v.yaml`），并且该文件是原始格式，扩展名正确，且文件名与 safetensors\u002Fckpt 模型文件保持一致。错误信息中提到的 `input_blocks` 尺寸不匹配通常意味着模型架构（如 v1 与 v2）与配置文件不兼容。","https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fissues\u002F5011",{"id":156,"question_zh":157,"answer_zh":158,"source_url":154},17445,"使用 SD 2.1 版本生成图像时得到全黑图片，即使安装了 xformers 或使用默认设置也是如此，如何解决？","在未安装 `xformers` 的情况下，SD 2.1 模型的注意力操作默认会以全精度（full precision）运行，这可能导致显存问题或生成黑图。解决方法是在运行脚本前设置环境变量：`ATTN_PRECISION=fp16 python \u003Cscript.py>`。如果您是通过 webui 启动，请确保在命令行参数或启动脚本中应用了此设置。同时，确认虚拟环境激活正确（使用 `source .\u002Fvenv\u002Fbin\u002Factivate` 或 Windows 下的对应命令），并重新安装依赖。",{"id":160,"question_zh":161,"answer_zh":162,"source_url":163},17446,"生成的图像在最后几步采样过程中突然变差、出现伪影或面部扭曲（特别是在使用 LMS 或 Euler a 采样器时），这是 Bug 吗？","这通常不是 WebUI 的 Bug，而是底层 k-diffusion 库或特定采样器算法的特性所致。在某些采样器（如 LMS）的最后几步，算法可能会过度锐化或引入噪点，导致图像看起来像马赛克或失真。建议尝试更换采样器（如 DPM++ 系列），减少采样步数，或者查阅 k-diffusion 相关议题以了解是否为已知行为。这不是模型损坏或系统错误，无需过度担心。","https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fissues\u002F7244",[165,170,175,180,185,190,195,200,205,210,215,220,225,229,234,239,244,249,254,259],{"id":166,"version":167,"summary_zh":168,"released_at":169},106285,"v1.10.1","## 1.10.1\n\n### 错误修复：\n* 修复 CPU 上的图像超分辨率功能（[#16275](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F16275)）","2025-02-09T08:00:10",{"id":171,"version":172,"summary_zh":173,"released_at":174},106286,"v1.10.0","### 功能：\n* 大量性能优化（详见下方“性能”部分）\n* 支持 Stable Diffusion 
3（[#16030](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F16030)、[#16164](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F16164)、[#16212](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F16212)）\n  * 推荐使用 Euler 采样器；DDIM 及其他时间步采样器目前暂不支持\n  * T5 文本模型默认关闭，可在设置中启用\n* 新调度器：\n  * Align Your Steps（[#15751](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F15751)）\n  * KL Optimal（[#15608](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F15608)）\n  * Normal（[#16149](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F16149)）\n  * DDIM（[#16149](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F16149)）\n  * Simple（[#16142](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F16142)）\n  * Beta（[#16235](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F16235)）\n* 新采样器：DDIM CFG++（[#16035](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F16035)）\n\n### 其他小改进：\n* 增加在早期步骤跳过 CFG 的选项（[#15607](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F15607)）\n* 添加 --models-dir 选项（[#15742](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F15742)）\n* 允许移动端用户通过双指长按打开上下文菜单（[#15682](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F15682)）\n* 在信息文本中为捆绑的 Textual Inversion 添加 LoRA 名称作为 TI 哈希值（[#15679](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F15679)）\n* 下载模型后检查其哈希值，以防止下载损坏（[#15602](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F15602)）\n* 
更多扩展标签筛选选项（[#15627](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F15627)）\n* 保存 AVIF 文件时使用 JPEG 的质量设置（[#15610](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F15610)）\n* 添加文件名模式：`[basename]`（[#15978](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F15978)）\n* 增加为 SDXL 上的 CLIP L 模型启用 Clip Skip 的选项（[#15992](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F15992)）\n* 增加防止生成过程中屏幕休眠的选项（[#16001](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F16001)）\n* 图片查看器中新增 ToggleLivePreview 按钮（[#16065](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F16065)）\n* 移除重新加载和快速滚动时界面闪烁现象（[#16153](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F16153)）\n* 增加禁用 save 按钮记录 log.csv 的选项（[#16242](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F16242)）\n\n### 扩展与 API：\n* 添加 process_before_every_sampling 钩子（[#15984](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffus","2024-07-27T03:55:24",{"id":176,"version":177,"summary_zh":178,"released_at":179},106287,"v1.10.0-RC","[如何切换到不同版本的 WebUI](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fwiki\u002FHow-to-switch-to-different-versions-of-WebUI)\n\n### 功能：\n* 大量性能优化（详见下方“性能”部分）\n* 支持 Stable Diffusion 3 ([#16030](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F16030)）\n  * 推荐使用 Euler 采样器；DDIM 及其他时间步采样器目前暂不支持\n  * T5 文本模型默认关闭，请在设置中启用\n* 新调度器：\n  * Align Your Steps ([#15751](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F15751)）\n  * KL Optimal ([#15608](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F15608)）\n  * Normal 
([#16149](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F16149)）\n  * DDIM ([#16149](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F16149)）\n  * Simple ([#16142](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F16142)）\n* 新采样器：DDIM CFG++ ([#16035](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F16035)）\n\n### 其他小改进：\n* 在早期步骤跳过 CFG 的选项 ([#15607](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F15607)）\n* 添加 --models-dir 选项 ([#15742](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F15742)）\n* 允许移动用户通过双指长按打开上下文菜单 ([#15682](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F15682)）\n* 信息文本：为捆绑的 Textual Inversion 添加 LoRA 名称作为 TI 哈希值 ([#15679](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F15679)）\n* 下载模型后检查其哈希值，以防止下载损坏 ([#15602](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F15602)）\n* 更多扩展标签筛选选项 ([#15627](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F15627)）\n* 保存 AVIF 时使用 JPEG 的质量设置 ([#15610](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F15610)）\n* 添加文件名模式：`[basename]` ([#15978](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F15978)）\n* 添加为 SDXL 上的 CLIP L 启用 CLIP 跳过的选项 ([#15992](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F15992)）\n* 防止生成过程中屏幕休眠的选项 ([#16001](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F16001)）\n* 图片查看器中的 ToggleLivePreview 按钮 ([#16065](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F16065)）\n\n### 扩展与 API：\n* 添加 
process_before_every_sampling 钩子 ([#15984](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F15984)）\n* 在无效采样器错误时返回 HTTP 400 而不是 404 ([#16140](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F16140)）\n\n### 性能：\n* [性能 1\u002F6] use_checkpoint = False ([#15803](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F15803)）\n* [性能 2\u002F6] 替换 einops.r","2024-07-06T08:28:39",{"id":181,"version":182,"summary_zh":183,"released_at":184},106288,"v1.9.4","## 1.9.4\n\n### 错误修复：\n* 锁定 setuptools 版本以修复启动错误 ([#15882](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F15882)) \n","2024-05-28T18:21:30",{"id":186,"version":187,"summary_zh":188,"released_at":189},106289,"v1.9.3","## 1.9.3\n\n### 错误修复：\n* 修复 get_crop_region_v2 ([#15594](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F15594))\n\n## 1.9.2\n\n### 扩展和 API：\n* 恢复 1.8.0 风格的脚本命名\n\n## 1.9.1\n\n### 小改进：\n* 添加 AVIF 支持 ([#15582](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F15582))\n* 添加文件名模式：`[sampler_scheduler]` 和 `[scheduler]` ([#15581](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F15581))\n\n### 扩展和 API：\n* 撤销将脚本添加到 sys.modules 的操作\n* 添加调度器 API 端点 ([#15577](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F15577))\n* 移除 API 超分辨率因子限制 ([#15560](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F15560))\n\n### 错误修复：\n* 修复图像不匹配问题 \u002F 坐标“右”小于“左”问题 ([#15534](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F15534))\n* 修复：remove_callbacks_for_function 也应从有序映射中移除回调函数 ([#15533](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F15533))\n* 修复 x1 超分辨率模型 
([#15555](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F15555))\n* 修复扩展脚本中 cls.__module__ 的值 ([#15532](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F15532))\n* 修复函数调用中的拼写错误（eror -> error）([#15531](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F15531))\n\n### 其他：\n* 隐藏“未找到图像数据块”的提示信息 ([#15567](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F15567))\n* 允许 webui.sh 在包含 .git 文件的任意目录下运行 ([#15561](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F15561))\n* 与 Debian 11、Fedora 34+ 和 openSUSE 15.4+ 的兼容性 ([#15544](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F15544))\n* numpy 废弃警告中将 product 替换为 prod ([#15547](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F15547))\n* get_crop_region_v2 ([#15583](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F15583), [#15587](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F15587))\n\n","2024-04-22T15:03:02",{"id":191,"version":192,"summary_zh":193,"released_at":194},106290,"v1.9.0","## 1.9.0\n\n### 功能：\n* 根据模型的 timestep 而不是采样步数来切换 refiner ([#14978](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F14978))\n* 添加选项以使用旧版目录视图而非树形视图；对额外网络排序\u002F搜索控件进行了样式调整\n* 添加用于重新排序回调的 UI，并支持在扩展元数据中指定回调顺序 ([#15205](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F15205))\n* 为 SDXL-Lightning 模型添加 Sgm 均匀调度器 ([#15325](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F15325))\n* 在主界面中添加调度器选择功能 
([#15333](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F15333)、[#15361](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F15361)、[#15394](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F15394))\n\n### 小改进：\n* “打开图片目录”按钮现在会打开实际目录 ([#14947](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F14947))\n* 支持使用 LyCORIS BOFT 网络进行推理 ([#14871](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F14871)、[#14973](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F14973))\n* 默认将额外网络卡片描述设置为纯文本，并提供选项以重新启用之前的 HTML 格式\n* 为额外网络添加大小调整手柄 ([#15041](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F15041))\n* 新增命令行参数：`--unix-filenames-sanitization` 和 `--filenames-max-length` ([#15031](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F15031))\n* 在 HTML 表格中显示额外网络参数，而非原始 JSON 格式 ([#15131](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F15131))\n* 为 LoRA\u002FLoHa\u002FLoKr 添加 DoRA（权重分解）支持 ([#15160](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F15160)、[#15283](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F15283))\n* 添加 `--no-prompt-history` 命令行参数，用于禁用上次生成的提示历史记录 ([#15189](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F15189))\n* 更新“替换预览”上的预览显示 ([#15201](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F15201))\n* 仅拉取已激活 Git 分支的扩展更新 ([#15233](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F15233))\n* 将放大后处理 UI 放入折叠面板中 
([#15223](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F15223))\n* 支持通过拖放 URL 来读取 infotext ([#15262](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F15262))\n* 使用 diskcache 库进行缓存 ([#15287](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F15287)、[#15299](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F15299))\n* 允许在 Extras 选项卡中使用 PNG-RGBA 格式 ([#15334](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F15334))\n* 支持 safetensors 中嵌入的封面图片","2024-04-13T03:40:16",{"id":196,"version":197,"summary_zh":198,"released_at":199},106291,"v1.9.0-RC","## 1.9.0\n\n### 新特性：\n* 根据模型的 timestep 而不是采样步数来切换 refiner（[#14978](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F14978)）\n* 添加选项以使用旧版目录视图而非树形视图；对额外网络的排序和搜索控件进行了样式调整\n* 添加用于重新排序回调的 UI，并支持在扩展元数据中指定回调顺序（[#15205](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F15205)）\n* 为 SDXL-Lightning 模型提供 Sgm 均匀调度器（[#15325](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F15325)）\n* 在主界面中添加调度器选择功能（[#15333](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F15333)、[#15361](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F15361)、[#15394](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F15394)）\n\n### 小改进：\n* “打开图片目录”按钮现在会打开实际的目录（[#14947](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F14947)）\n* 支持使用 LyCORIS BOFT 网络进行推理（[#14871](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F14871)、[#14973](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F14973)）\n* 
默认将额外网络卡片的描述设置为纯文本，并提供选项以重新启用之前的 HTML 格式\n* 为额外网络添加了大小调整手柄（[#15041](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F15041)）\n* 命令行参数：`--unix-filenames-sanitization` 和 `--filenames-max-length`（[#15031](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F15031)）\n* 在 HTML 表格中显示额外网络的参数，而非原始 JSON 格式（[#15131](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F15131)）\n* 为 LoRA\u002FLoHa\u002FLoKr 添加 DoRA（权重分解）支持（[#15160](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F15160)、[#15283](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F15283)）\n* 添加 `--no-prompt-history` 命令行参数，用于禁用上次生成的提示历史记录（[#15189](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F15189)）\n* 更新“替换预览”上的预览内容（[#15201](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F15201)）\n* 仅拉取扩展程序当前激活的 Git 分支的更新（[#15233](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F15233)）\n* 将放大后处理 UI 放入折叠面板中（[#15223](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F15223)）\n* 支持通过拖放 URL 来读取 infotext（[#15262](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F15262)）\n* 使用 diskcache 库进行缓存（[#15287](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F15287)、[#15299](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F15299)）\n* 允许在 Extras 选项卡中使用 PNG-RGBA 格式（[#15334](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F15334)）\n* 支持 safetensors 中嵌入的封面图片","2024-04-06T18:49:00",{"id":201,"version":202,"summary_zh":203,"released_at":204},106292,"v1.8.0","### 功能：\n* 将 PyTorch 更新至 2.1.2 版本\n* 软修复填充 (#14208)\n* FP8 支持 
(#14031, #14327)\n* 支持 SDXL-修复模型 (#14390)\n* 使用 Spandrel 实现超分辨率和人脸修复架构 (#14425, #14467, #14473, #14474, #14477, #14476, #14484, #14500, #14501, #14504, #14524, #14809)\n* 自动向后兼容（当加载包含程序版本信息的旧图像的提示文本时，会自动添加兼容性设置）\n* 实现零终端 SNR 噪声调度选项（**[种子破坏性变更](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fwiki\u002FSeed-breaking-changes#180-dev-170-225-2024-01-01---zero-terminal-snr-noise-schedule-option)**，#14145, #14979）\n* 在图库中为选定图像添加 [✨] 按钮以运行高分辨率修复（借助 #14598, #14626, #14728）\n* [独立的资源仓库](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui-assets)；本地提供字体，而非从 Google 服务器获取\n* 正式支持 LCM 采样器 (#14583)\n* 添加对 DAT 超分辨率模型的支持 (#14690, #15039)\n* 额外网络树形视图 (#14588, #14900)\n* NPU 支持 (#14801)\n* 提示词注释支持\n\n### 细微改进：\n* 允许在宽度\u002F高度输入框中粘贴 WIDTH×HEIGHT 格式的字符串 (#14296)\n* 新增选项：全屏图片查看器中的实时预览 (#14230, #14307)\n* 为生成、跳过和中断操作添加键盘快捷键 (#14269)\n* 在不同平台上更好地支持 TCMALLOC (#14227, #14883, #14910)\n* LoRA 未找到警告 (#14464)\n* 在额外网络中为 LoRA 添加负面提示词 (#14475)\n* xyz_grid：允许在与轴选项不同的轴上调整种子值 (#12180)\n* 将 VAE 转换为 bfloat16 的选项（实现 #9295）\n* 更好地支持 IPEX (#14229, #14353, #14559, #14562, #14597)\n* 选择在当前生成完成后中断，而非立即中断的选项 (#13653, #14659)\n* 全屏预览控制的淡入\u002F关闭功能 (#14291)\n* 更精细的设置冻结控制 (#13789)\n* 提高超分辨率上限 (#14589)\n* 使用快捷键调整画笔大小 (#14638)\n* 保存图像时将检查点信息添加到 CSV 日志文件中 (#14663)\n* 使更多列可调整大小 (#14740, #14884)\n* 为 #14727 添加不叠加原图进行修复的选项\n* 添加 Pad conds v0 选项，以支持在 DDIM 中进行与 1.6.0 之前相同的生成\n* 添加“正在中断…”占位符。\n* 刷新扩展列表按钮 (#14857)\n* 添加在计算强调后禁用归一化的选项 (#14874)\n* 计算 token 数量时，也包括已启用的样式（可在设置中禁用，以恢复先前行为）\n* 图片图库 [📂] 按钮的配置 (#14947)\n* 支持使用 LyCORIS BOFT 网络进行推理 (#14871, #14973)\n* 支持触摸设备（平板电脑）的可调整列宽 (#15002)\n\n### 扩展与 API：\n* 从依赖项中移除了 packages：basicsr、gfpgan、realesrgan；以及它们的依赖包：absl-py、addict、beautifulsoup4、future、gdown、grpcio、importlib-metadata、lmdb、lpips、Markdo","2024-03-02T04:08:18",{"id":206,"version":207,"summary_zh":208,"released_at":209},106293,"v1.8.0-RC","## 1.8.0-RC\n\n### 功能：\n* 更新 PyTorch 至 2.1.2 版本\n* 
软修复填充（[#14208](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F14208)）\n* FP8 支持（[#14031](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F14031), [#14327](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F14327)）\n* 支持 SDXL-Inpaint 模型（[#14390](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F14390)）\n* 使用 Spandrel 库实现超分辨率和人脸修复架构（[#14425](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F14425), [#14467](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F14467), [#14473](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F14473), [#14474](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F14474), [#14477](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F14477), [#14476](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F14476), [#14484](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F14484), [#14500](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F14500), [#14501](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F14501), [#14504](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F14504), [#14524](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F14524), [#14809](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F14809)）\n* 自动向后兼容（当加载包含程序版本信息的旧图像的 infotext 时，会自动添加兼容性设置）\n* 实现零终端 SNR 
噪声调度选项（**[种子破坏性变更](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fwiki\u002FSeed-breaking-changes#180-dev-170-225-2024-01-01---zero-terminal-snr-noise-schedule-option)**, [#14145](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F14145)）\n* 在图库中为选中的图像添加一个 [✨] 按钮，用于运行高分辨率修复（借助 [#14598](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F14598), [#14626](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F14626), [#14728](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F14728) 的帮助）\n* [独立的资源仓库](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui-assets)；改为本地提供字体，而非从 Google 服务器获取\n* 正式支持 LCM 采样器（[#14583](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F14583)）\n* 添加对 DAT 超分辨率模型的支持（[#14690](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F14690)）\n* 额外网络树形视图（[#14588](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F14588), [#14900](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F14900)）\n* NPU 支持（[#14801](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F14801)）\n* 支持提示词注释\n\n### 小改进：\n* 允许在宽度\u002F高度输入框中粘贴 WIDTH×HEIGHT 格式的字符串","2024-02-17T08:50:56",{"id":211,"version":212,"summary_zh":213,"released_at":214},106294,"v1.7.0","### 功能：\n* 重新设计设置选项卡：添加搜索框、分类，将 UI 设置页面拆分为多个部分\n* 添加 altdiffusion-m18 支持（[#13364](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F13364)）\n* 支持使用 LyCORIS GLora 网络进行推理（[#13610](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F13610)）\n* 添加 LoRA-Embedding 
捆绑系统（[#13568](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F13568)）\n* 新增将提示词从顶部行移动到生成参数中的选项\n* 添加对 SSD-1B 的支持（[#13865](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F13865)）\n* 支持使用 OFT 网络进行推理（[#13692](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F13692)）\n* 脚本元数据及 DAG 排序机制（[#13944](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F13944)）\n* 支持 HyperTile 优化（[#13948](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F13948)）\n* 添加对 SD 2.1 Turbo 的支持（[#14170](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F14170)）\n* 移除“训练”->“预处理”选项卡，并将其所有功能整合到“工具”选项卡中\n* 初步支持 Intel Arc GPU 的 IPEX 加速（[#14171](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F14171)）\n\n### 细节改进：\n* 允许在 img2img 批量模式下从图像中读取模型哈希值（[#12767](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F12767)）\n* 添加与 sgm 仓库采样实现对齐的选项（[#12818](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F12818)）\n* 为 LoRA 元数据查看器新增字段 `ss_output_name`（[#12838](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F12838)）\n* 在设置页面中新增计算所有 SD 检查点哈希值的功能（[#12909](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F12909)）\n* 增加将提示词复制到风格编辑器的按钮（[#12975](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F12975)）\n* 添加 --skip-load-model-at-start 选项（[#13253](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F13253)）\n* 将信息文本写入 GIF 图像\n* 从 GIF 图像中读取信息文本（[#13068](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F13068)）\n* 允许在 ui-config.json 中配置 InputAccordion 
的初始状态（[#13189](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F13189)）\n* 允许编辑用于 Ctrl+上\u002F下键提示词编辑的空白分隔符（[#13444](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F13444)）\n* 防止意外关闭弹出式对话框（[#13480](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F13480)）\n* 新增是否播放通知声音的选项（[#13631](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F13631)）\n* 在全屏图片查看器中显示预览图（如果可用）（[#13459](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F13459)）\n* 对 Web 的支持","2023-12-16T07:01:59",{"id":216,"version":217,"summary_zh":218,"released_at":219},106295,"v1.7.0-RC","How to use: [wiki](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fwiki\u002FHow-to-switch-to-different-versions-of-WebUI).\r\n\r\n### Features:\r\n* settings tab rework: add search field, add categories, split UI settings page into many\r\n* add altdiffusion-m18 support ([#13364](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F13364))\r\n* support inference with LyCORIS GLora networks ([#13610](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F13610))\r\n* add lora-embedding bundle system ([#13568](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F13568))\r\n* option to move prompt from top row into generation parameters\r\n* add support for SSD-1B ([#13865](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F13865))\r\n* support inference with OFT networks ([#13692](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F13692))\r\n* script metadata and DAG sorting mechanism ([#13944](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F13944))\r\n* support 
HyperTile optimization ([#13948](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F13948))\r\n* add support for SD 2.1 Turbo ([#14170](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F14170))\r\n* remove Train->Preprocessing tab and put all its functionality into Extras tab\r\n* initial IPEX support for Intel Arc GPU ([#14171](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F14171))\r\n\r\n### Minor:\r\n* allow reading model hash from images in img2img batch mode ([#12767](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F12767))\r\n* add option to align with sgm repo's sampling implementation ([#12818](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F12818))\r\n* extra field for lora metadata viewer: `ss_output_name` ([#12838](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F12838))\r\n* add action in settings page to calculate all SD checkpoint hashes ([#12909](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F12909))\r\n* add button to copy prompt to style editor ([#12975](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F12975))\r\n* add --skip-load-model-at-start option ([#13253](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F13253))\r\n* write infotext to gif images\r\n* read infotext from gif images ([#13068](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F13068))\r\n* allow configuring the initial state of InputAccordion in ui-config.json ([#13189](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F13189))\r\n* allow editing whitespace delimiters for ctrl+up\u002Fctrl+down prompt editing 
([#13444](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F13444))\r\n* prevent accidentally closing popup dialogs ([#13480](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F13480))\r\n* added option to play notification sound or not ([#13631](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F13631))\r\n* show the preview image in the full screen image viewer if available ([#13459](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F13459))\r\n* support for webui.settings.bat ([#13638](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F13638))\r\n* add an option to not print stack traces on ctrl+c\r\n* start\u002Frestart generation by Ctrl (Alt) + Enter ([#13644](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F13644))\r\n* update prompts_from_file script to allow concatenating entries with the general prompt ([#13733](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F13733))\r\n* added a visible checkbox to input accordion\r\n* added an option to hide all txt2img\u002Fimg2img parameters in an accordion ([#13826](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F13826))\r\n* added 'Path' sorting option for Extra network cards ([#13968](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F13968))\r\n* enable prompt hotkeys in style editor ([#13931](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F13931))\r\n* option to show batch img2img results in UI ([#14009](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F14009))\r\n* infotext updates: add option to disregard certain infotext fields, add option to not include VAE in 
infotext, add explanation to infotext settings page, move some options to infotext settings page\r\n* add FP32 fallback support on sd_vae_approx ([#14046](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F14046))\r\n* support XYZ scripts \u002F split hires path from unet ([#14126](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F14126))\r\n* allow use of multiple styles csv files ([#14125](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F14125))\r\n\r\n### Extensions and API:\r\n* update gradio to 3.41.2\r\n* support installed extensions list api ([#12774](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F12774))\r\n* update pnginfo API to return dict with parsed values\r\n* add noisy latent to `ExtraNoiseParams`","2023-12-04T06:40:49",{"id":221,"version":222,"summary_zh":223,"released_at":224},106296,"v1.6.0","### Features:\r\n * refiner support [#12371](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F12371)\r\n * add NV option for Random number generator source setting, which allows generating the same pictures on CPU\u002FAMD\u002FMac as on NVidia videocards\r\n * add style editor dialog\r\n * hires fix: add an option to use a different checkpoint for second pass ([#12181](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F12181))\r\n * option to keep multiple loaded models in memory ([#12227](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F12227))\r\n * new samplers: Restart, DPM++ 2M SDE Exponential, DPM++ 2M SDE Heun, DPM++ 2M SDE Heun Karras, DPM++ 2M SDE Heun Exponential, DPM++ 3M SDE, DPM++ 3M SDE Karras, DPM++ 3M SDE Exponential ([#12300](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F12300), 
[#12519](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F12519), [#12542](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F12542))\r\n * rework DDIM, PLMS, UniPC to use CFG denoiser same as in k-diffusion samplers:\r\n   * makes all of them work with img2img\r\n   * makes prompt composition possible (AND)\r\n   * makes them available for SDXL\r\n * always show extra networks tabs in the UI ([#11808](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F11808))\r\n * use less RAM when creating models ([#11958](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F11958), [#12599](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F12599))\r\n * textual inversion inference support for SDXL\r\n * extra networks UI: show metadata for SD checkpoints\r\n * checkpoint merger: add metadata support \r\n * prompt editing and attention: add support for whitespace after the number ([ red : green : 0.5 ]) (seed breaking change) ([#12177](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F12177))\r\n * VAE: allow selecting own VAE for each checkpoint (in user metadata editor)\r\n * VAE: add selected VAE to infotext\r\n * options in main UI: add own separate setting for txt2img and img2img, correctly read values from pasted infotext, add setting for column count ([#12551](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F12551))\r\n * add resize handle to txt2img and img2img tabs, allowing you to change the amount of horizontal space given to generation parameters and resulting image gallery ([#12687](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F12687), 
[#12723](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F12723))\r\n * change default behavior for batching cond\u002Funcond -- now it's on by default, and is disabled by a UI setting (Optimizations -> Batch cond\u002Funcond) - if you are on lowvram\u002Fmedvram and are getting OOM exceptions, you will need to enable it\r\n * show current position in queue and make it so that requests are processed in the order of arrival ([#12707](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F12707))\r\n * add `--medvram-sdxl` flag that only enables `--medvram` for SDXL models\r\n * prompt editing timeline has separate range for first pass and hires-fix pass (seed breaking change) ([#12457](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F12457))\r\n\r\n### Minor:\r\n * img2img batch: RAM savings, VRAM savings, .tif, .tiff in img2img batch ([#12120](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F12120), [#12514](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F12514), [#12515](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F12515))\r\n * postprocessing\u002Fextras: RAM savings ([#12479](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F12479))\r\n * XYZ: in the axis labels, remove pathnames from model filenames\r\n * XYZ: support hires sampler ([#12298](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F12298))\r\n * XYZ: new option: use text inputs instead of dropdowns ([#12491](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F12491))\r\n * add gradio version warning\r\n * sort list of VAE checkpoints 
([#12297](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F12297))\r\n * use transparent white for mask in inpainting, along with an option to select the color ([#12326](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F12326))\r\n * move some settings to their own section: img2img, VAE\r\n * add checkbox to show\u002Fhide dirs for extra networks\r\n * Add TAESD (or more) options for all VAE encode\u002Fdecode operations ([#12311](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F12311))\r\n * gradio theme cache, new gradio themes, along with an explanation that the user can input their own values ([#12346](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F12346), [#12355](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F12355))\r\n * sampler fixes\u002Ftweaks: s_tmax, s_churn, s_noise, s_tmax ([#12354](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F12354), [#12356](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion","2023-08-31T04:40:36",{"id":226,"version":227,"summary_zh":223,"released_at":228},106297,"v1.6.0-RC","2023-08-24T08:19:18",{"id":230,"version":231,"summary_zh":232,"released_at":233},106298,"v1.5.2","### Bug Fixes:\r\n * fix memory leak when generation fails\r\n * update doggettx cross attention optimization to not use an unreasonable amount of memory in some edge cases -- suggestion by MorkTheOrk\r\n","2023-08-23T12:57:10",{"id":235,"version":236,"summary_zh":237,"released_at":238},106299,"v1.5.1","### Minor:\r\n * support parsing text encoder blocks in some new LoRAs\r\n * delete scale checker script due to user demand\r\n\r\n### Extensions and API:\r\n * add postprocess_batch_list script callback\r\n\r\n### Bug Fixes:\r\n * fix TI training for SD1\r\n * fix reload altclip model error\r\n 
* prepend the pythonpath instead of overriding it\r\n * fix typo in SD_WEBUI_RESTARTING\r\n * if txt2img\u002Fimg2img raises an exception, finally call state.end()\r\n * fix composable diffusion weight parsing\r\n * restyle Startup profile for black users\r\n * fix webui not launching with --nowebui\r\n * catch exception for non git extensions\r\n * fix some options missing from \u002Fsdapi\u002Fv1\u002Foptions\r\n * fix for extension update status always saying \"unknown\"\r\n * fix display of extra network cards that have `\u003C>` in the name\r\n * update lora extension to work with python 3.8\r\n","2023-07-27T06:04:31",{"id":240,"version":241,"summary_zh":242,"released_at":243},106300,"v1.5.1-RC","\r\n### Minor:\r\n * support parsing text encoder blocks in some new LoRAs\r\n\r\n### Extensions and API:\r\n * add postprocess_batch_list script callback\r\n\r\n### Bug Fixes:\r\n * fix reload altclip model error\r\n * prepend the pythonpath instead of overriding it\r\n * fix typo in SD_WEBUI_RESTARTING\r\n * if txt2img\u002Fimg2img raises an exception, finally call state.end()\r\n * fix composable diffusion weight parsing\r\n * restyle Startup profile for black users\r\n * fix webui not launching with --nowebui\r\n * catch exception for non git extensions\r\n\r\n","2023-07-25T13:32:54",{"id":245,"version":246,"summary_zh":247,"released_at":248},106301,"v1.5.0","\r\n### Features:\r\n * SD XL support\r\n \t* Requires `--no-half-vae`. 
See #11757 for more details.\r\n * user metadata system for custom networks\r\n * extended Lora metadata editor: set activation text, default weight, view tags, training info\r\n * Lora extension rework to include other types of networks (all that were previously handled by LyCORIS extension)\r\n * show github stars for extensions\r\n * img2img batch mode can read extra stuff from png info\r\n * img2img batch works with subdirectories\r\n * hotkeys to move prompt elements: alt+left\u002Fright\r\n * restyle time taken\u002FVRAM display\r\n * add textual inversion hashes to infotext\r\n * optimization: cache git extension repo information\r\n * move generate button next to the generated picture for mobile clients\r\n * hide cards for networks of incompatible Stable Diffusion version in Lora extra networks interface\r\n * skip installing packages with pip if they are all already installed - startup speedup of about 2 seconds\r\n\r\n### Minor:\r\n * checkbox to check\u002Funcheck all extensions in the Installed tab\r\n * add gradio user to infotext and to filename patterns\r\n * allow gif for extra network previews\r\n * add options to change colors in grid\r\n * use natural sort for items in extra networks\r\n * Mac: use empty_cache() from torch 2 to clear VRAM\r\n * added automatic support for installing the right libraries for Navi3 (AMD)\r\n * add option SWIN_torch_compile to accelerate SwinIR upscale\r\n * suppress printing TI embedding info at start to console by default\r\n * speedup extra networks listing\r\n * added `[none]` filename token.\r\n * removed thumbs extra networks view mode (use settings tab to change width\u002Fheight\u002Fscale to get thumbs)\r\n * add always_discard_next_to_last_sigma option to XYZ plot\r\n * automatically switch to 32-bit float VAE if the generated picture has NaNs without the need for `--no-half-vae` commandline flag.\r\n \r\n### Extensions and API:\r\n * api endpoints: \u002Fsdapi\u002Fv1\u002Fserver-kill, 
\u002Fsdapi\u002Fv1\u002Fserver-restart, \u002Fsdapi\u002Fv1\u002Fserver-stop\r\n * allow Script to have custom metaclass\r\n * add model exists status check \u002Fsdapi\u002Fv1\u002Foptions\r\n * rename --add-stop-route to --api-server-stop\r\n * add `before_hr` script callback\r\n * add callback `after_extra_networks_activate`\r\n * disable rich exception output in console for API by default, use WEBUI_RICH_EXCEPTIONS env var to enable\r\n * return http 404 when thumb file not found\r\n * allow replacing extensions index with environment variable\r\n \r\n### Bug Fixes:\r\n * catch errors when retrieving extension index #11290\r\n * fix very slow loading speed of .safetensors files when reading from network drives\r\n * API cache cleanup\r\n * fix UnicodeEncodeError when writing to file CLIP Interrogator batch mode\r\n * fix warning of 'has_mps' deprecated from PyTorch\r\n * fix problem with extra network saving images as previews losing generation info\r\n * fix throwing exception when trying to resize image with I;16 mode\r\n * fix for #11534: canvas zoom and pan extension hijacking shortcut keys\r\n * fixed launch script to be runnable from any directory\r\n * don't add \"Seed Resize: -1x-1\" to API image metadata\r\n * correctly remove end parenthesis with ctrl+up\u002Fdown\r\n * fix --subpath on newer gradio version\r\n * fix: check fill size is non-zero when resizing (fixes #11425)\r\n * use submit and blur for quick settings textbox\r\n * save img2img batch with images.save_image()\r\n * prevent running preload.py for disabled extensions\r\n * fix: previously, model name was added together with directory name to infotext and to [model_name] filename pattern; directory name is now not included\r\n","2023-07-25T05:21:57",{"id":250,"version":251,"summary_zh":252,"released_at":253},106302,"v1.5.0-RC","\r\n### Features:\r\n * SD XL support\r\n * user metadata system for custom networks\r\n * extended Lora metadata editor: set activation text, default 
weight, view tags, training info\r\n * Lora extension rework to include other types of networks (all that were previously handled by LyCORIS extension)\r\n   * Lora page now lists all models from both Lora and LyCORIS directories\r\n   * You can still use the LyCORIS extension if you want to\r\n * show github stars for extensions\r\n * img2img batch mode can read extra stuff from png info\r\n * img2img batch works with subdirectories\r\n * hotkeys to move prompt elements: alt+left\u002Fright\r\n * restyle time taken\u002FVRAM display\r\n * add textual inversion hashes to infotext\r\n * optimization: cache git extension repo information\r\n * move generate button next to the generated picture for mobile clients\r\n * hide cards for networks of incompatible Stable Diffusion version in Lora extra networks interface\r\n * skip installing packages with pip if they are all already installed - startup speedup of about 2 seconds\r\n\r\n### Minor:\r\n * checkbox to check\u002Funcheck all extensions in the Installed tab\r\n * add gradio user to infotext and to filename patterns\r\n * allow gif for extra network previews\r\n * add options to change colors in grid\r\n * use natural sort for items in extra networks\r\n * Mac: use empty_cache() from torch 2 to clear VRAM\r\n * added automatic support for installing the right libraries for Navi3 (AMD)\r\n * add option SWIN_torch_compile to accelerate SwinIR upscale\r\n * suppress printing TI embedding info at start to console by default\r\n * speedup extra networks listing\r\n * added `[none]` filename token.\r\n * removed thumbs extra networks view mode (use settings tab to change width\u002Fheight\u002Fscale to get thumbs)\r\n * add always_discard_next_to_last_sigma option to XYZ plot \r\n \r\n### Extensions and API:\r\n * api endpoints: \u002Fsdapi\u002Fv1\u002Fserver-kill, \u002Fsdapi\u002Fv1\u002Fserver-restart, \u002Fsdapi\u002Fv1\u002Fserver-stop\r\n * allow Script to have custom metaclass\r\n * add model exists status 
check \u002Fsdapi\u002Fv1\u002Foptions\r\n * rename --add-stop-route to --api-server-stop\r\n * add `before_hr` script callback\r\n * add callback `after_extra_networks_activate`\r\n * disable rich exception output in console for API by default, use WEBUI_RICH_EXCEPTIONS env var to enable\r\n * return http 404 when thumb file not found\r\n * allow replacing extensions index with environment variable\r\n \r\n### Bug Fixes:\r\n * catch errors when retrieving extension index #11290\r\n * fix very slow loading speed of .safetensors files when reading from network drives\r\n * API cache cleanup\r\n * fix UnicodeEncodeError when writing to file CLIP Interrogator batch mode\r\n * fix warning of 'has_mps' deprecated from PyTorch\r\n * fix problem with extra network saving images as previews losing generation info\r\n * fix throwing exception when trying to resize image with I;16 mode\r\n * fix for #11534: canvas zoom and pan extension hijacking shortcut keys\r\n * fixed launch script to be runnable from any directory\r\n * don't add \"Seed Resize: -1x-1\" to API image metadata\r\n * correctly remove end parenthesis with ctrl+up\u002Fdown\r\n * fix --subpath on newer gradio version\r\n * fix: check fill size is non-zero when resizing (fixes #11425)\r\n * use submit and blur for quick settings textbox\r\n * save img2img batch with images.save_image()\r\n","2023-07-18T15:25:13",{"id":255,"version":256,"summary_zh":257,"released_at":258},106303,"v1.4.0","\r\n### Features:\r\n * zoom controls for inpainting\r\n * run basic torch calculation at startup in parallel to reduce the performance impact of first generation\r\n * option to pad prompt\u002Fneg prompt to be same length\r\n * remove taming_transformers dependency\r\n * custom k-diffusion scheduler settings\r\n * add an option to show selected settings in main txt2img\u002Fimg2img UI\r\n * sysinfo tab in settings\r\n * infer styles from prompts when pasting params into the UI\r\n * an option to control the behavior 
of the above\r\n\r\n### Minor:\r\n * bump Gradio to 3.32.0\r\n * bump xformers to 0.0.20\r\n * Add option to disable token counters\r\n * tooltip fixes & optimizations\r\n * make it possible to configure filename for the zip download\r\n * `[vae_filename]` pattern for filenames\r\n * Revert discarding penultimate sigma for DPM-Solver++(2M) SDE\r\n * change UI reorder setting to multiselect\r\n * read version info from CHANGELOG.md if git version info is not available\r\n * link footer API to Wiki when API is not active\r\n * persistent conds cache (opt-in optimization)\r\n \r\n### Extensions:\r\n * After installing extensions, webui properly restarts the process rather than reloads the UI \r\n * Added VAE listing to web API. Via: \u002Fsdapi\u002Fv1\u002Fsd-vae\r\n * custom unet support\r\n * Add onAfterUiUpdate callback\r\n * refactor EmbeddingDatabase.register_embedding() to allow unregistering\r\n * add before_process callback for scripts\r\n * add ability for alwayson scripts to specify section and let user reorder those sections\r\n \r\n### Bug Fixes:\r\n * Fix dragging text to prompt\r\n * fix incorrect quoting for infotext values with colon in them\r\n * fix \"hires. 
fix\" prompt sharing same labels with txt2img_prompt\r\n * Fix s_min_uncond default type int\r\n * Fix for #10643 (Inpainting mask sometimes not working)\r\n * fix bad styling for thumbs view in extra networks #10639\r\n * fix for empty list of optimizations #10605\r\n * small fixes to prepare_tcmalloc for Debian\u002FUbuntu compatibility\r\n * fix --ui-debug-mode exit\r\n * patch GitPython to not use leaky persistent processes\r\n * fix duplicate Cross attention optimization after UI reload\r\n * torch.cuda.is_available() check for SdOptimizationXformers\r\n * fix hires fix using wrong conds in second pass if using Loras.\r\n * handle exception when parsing generation parameters from png info\r\n * fix upcast attention dtype error\r\n * forcing Torch Version to 1.13.1 for RX 5000 series GPUs\r\n * split mask blur into X and Y components, patch Outpainting MK2 accordingly\r\n * don't die when a LoRA is a broken symlink\r\n * allow activation of Generate Forever during generation\r\n","2023-06-27T05:40:25",{"id":260,"version":261,"summary_zh":262,"released_at":263},106304,"v1.4.0-RC","## 1.4.0\r\n\r\n### Features:\r\n * zoom controls for inpainting\r\n * run basic torch calculation at startup in parallel to reduce the performance impact of first generation\r\n * option to pad prompt\u002Fneg prompt to be same length\r\n * remove taming_transformers dependency\r\n * custom k-diffusion scheduler settings\r\n * add an option to show selected settings in main txt2img\u002Fimg2img UI\r\n * sysinfo tab in settings\r\n * infer styles from prompts when pasting params into the UI\r\n * an option to control the behavior of the above\r\n\r\n### Minor:\r\n * bump Gradio to 3.32.0\r\n * bump xformers to 0.0.20\r\n * Add option to disable token counters\r\n * tooltip fixes & optimizations\r\n * make it possible to configure filename for the zip download\r\n * `[vae_filename]` pattern for filenames\r\n * Revert discarding penultimate sigma for DPM-Solver++(2M) SDE\r\n * change UI 
reorder setting to multiselect\r\n * read version info from CHANGELOG.md if git version info is not available\r\n * link footer API to Wiki when API is not active\r\n * persistent conds cache (opt-in optimization)\r\n \r\n### Extensions:\r\n * After installing extensions, webui properly restarts the process rather than reloads the UI \r\n * Added VAE listing to web API. Via: \u002Fsdapi\u002Fv1\u002Fsd-vae\r\n * custom unet support\r\n * Add onAfterUiUpdate callback\r\n * refactor EmbeddingDatabase.register_embedding() to allow unregistering\r\n * add before_process callback for scripts\r\n * add ability for alwayson scripts to specify section and let user reorder those sections\r\n \r\n### Bug Fixes:\r\n * Fix dragging text to prompt\r\n * fix incorrect quoting for infotext values with colon in them\r\n * fix \"hires. fix\" prompt sharing same labels with txt2img_prompt\r\n * Fix s_min_uncond default type int\r\n * Fix for #10643 (Inpainting mask sometimes not working)\r\n * fix bad styling for thumbs view in extra networks #10639\r\n * fix for empty list of optimizations #10605\r\n * small fixes to prepare_tcmalloc for Debian\u002FUbuntu compatibility\r\n * fix --ui-debug-mode exit\r\n * patch GitPython to not use leaky persistent processes\r\n * fix duplicate Cross attention optimization after UI reload\r\n * torch.cuda.is_available() check for SdOptimizationXformers\r\n * fix hires fix using wrong conds in second pass if using Loras.\r\n * handle exception when parsing generation parameters from png info\r\n * fix upcast attention dtype error\r\n * forcing Torch Version to 1.13.1 for RX 5000 series GPUs\r\n * split mask blur into X and Y components, patch Outpainting MK2 accordingly\r\n * don't die when a LoRA is a broken symlink\r\n * allow activation of Generate Forever during generation\r\n","2023-06-09T19:51:14"]