[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-lllyasviel--Fooocus":3,"tool-lllyasviel--Fooocus":64},[4,17,26,40,48,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,2,"2026-04-03T11:11:01",[13,14,15],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":23,"last_commit_at":32,"category_tags":33,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 
道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,34,35,36,15,37,38,13,39],"数据工具","视频","插件","其他","语言模型","音频",{"id":41,"name":42,"github_repo":43,"description_zh":44,"stars":45,"difficulty_score":10,"last_commit_at":46,"category_tags":47,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,38,37],{"id":49,"name":50,"github_repo":51,"description_zh":52,"stars":53,"difficulty_score":10,"last_commit_at":54,"category_tags":55,"status":16},519,"PaddleOCR","PaddlePaddle\u002FPaddleOCR","PaddleOCR 是一款基于百度飞桨框架开发的高性能开源光学字符识别工具包。它的核心能力是将图片、PDF 等文档中的文字提取出来，转换成计算机可读取的结构化数据，让机器真正“看懂”图文内容。\n\n面对海量纸质或电子文档，PaddleOCR 解决了人工录入效率低、数字化成本高的问题。尤其在人工智能领域，它扮演着连接图像与大型语言模型（LLM）的桥梁角色，能将视觉信息直接转化为文本输入，助力智能问答、文档分析等应用场景落地。\n\nPaddleOCR 适合开发者、算法研究人员以及有文档自动化需求的普通用户。其技术优势十分明显：不仅支持全球 100 多种语言的识别，还能在 Windows、Linux、macOS 等多个系统上运行，并灵活适配 CPU、GPU、NPU 等各类硬件。作为一个轻量级且社区活跃的开源项目，PaddleOCR 既能满足快速集成的需求，也能支撑前沿的视觉语言研究，是处理文字识别任务的理想选择。",74939,"2026-04-05T23:16:38",[38,14,13,37],{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":23,"last_commit_at":62,"category_tags":63,"status":16},2471,"tesseract","tesseract-ocr\u002Ftesseract","Tesseract 
是一款历史悠久且备受推崇的开源光学字符识别（OCR）引擎，最初由惠普实验室开发，后由 Google 维护，目前由全球社区共同贡献。它的核心功能是将图片中的文字转化为可编辑、可搜索的文本数据，有效解决了从扫描件、照片或 PDF 文档中提取文字信息的难题，是数字化归档和信息自动化的重要基础工具。\n\n在技术层面，Tesseract 展现了强大的适应能力。从版本 4 开始，它引入了基于长短期记忆网络（LSTM）的神经网络 OCR 引擎，显著提升了行识别的准确率；同时，为了兼顾旧有需求，它依然支持传统的字符模式识别引擎。Tesseract 原生支持 UTF-8 编码，开箱即用即可识别超过 100 种语言，并兼容 PNG、JPEG、TIFF 等多种常见图像格式。输出方面，它灵活支持纯文本、hOCR、PDF、TSV 等多种格式，方便后续数据处理。\n\nTesseract 主要面向开发者、研究人员以及需要构建文档处理流程的企业用户。由于它本身是一个命令行工具和库（libtesseract），不包含图形用户界面（GUI），因此最适合具备一定编程能力的技术人员集成到自动化脚本或应用程序中",73286,"2026-04-03T01:56:45",[13,14],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":76,"owner_location":76,"owner_email":76,"owner_twitter":76,"owner_website":76,"owner_url":79,"languages":80,"stars":104,"forks":105,"last_commit_at":106,"license":107,"difficulty_score":10,"env_os":108,"env_gpu":109,"env_ram":110,"env_deps":111,"category_tags":116,"github_topics":76,"view_count":117,"oss_zip_url":76,"oss_zip_packed_at":76,"status":16,"created_at":118,"updated_at":119,"faqs":120,"releases":148},809,"lllyasviel\u002FFooocus","Fooocus","Focus on prompting and generating","Fooocus 是一款基于 Stable Diffusion XL 架构的开源图像生成软件。它重新思考了绘图工具的设计逻辑，主张“专注于提示词与生成”。就像 Midjourney 一样，用户无需手动调整复杂参数，只需输入想法即可得到高质量图片，同时保持了离线运行、完全免费和开源的优势。\n\n传统 AI 绘图往往面临安装繁琐、参数调试困难的问题。Fooocus 极大地简化了这一流程，从下载到生成首张图仅需不到三次点击，且最低仅需 4GB 显存（Nvidia）即可流畅运行。这使其非常适合设计师、创作者以及希望体验 AI 绘画但缺乏技术背景的普通用户。\n\n技术上，Fooocus 内置了基于 GPT-2 的提示词处理引擎，能自动优化提示词质量，确保无论输入长短都能获得美观结果。它还采用了自研的图像修复与放大算法，在细节表现上优于许多同类软件。目前项目处于长期维护模式，专注于修复 Bug 而非引入新架构。请注意，网络上存在大量假冒网站，请务必通过 GitHub 官方渠道下载，以保障安全。","\u003Cdiv align=center>\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Flllyasviel_Fooocus_readme_a75fe6cc20b4.png\">\n\u003C\u002Fdiv>\n\n# Fooocus\n\n[>>> Click Here to Install Fooocus \u003C\u003C\u003C](#download)\n\nFooocus is an 
image generating software (based on [Gradio](https:\u002F\u002Fwww.gradio.app\u002F) \u003Ca href='https:\u002F\u002Fgithub.com\u002Fgradio-app\u002Fgradio'>\u003Cimg src='https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fgradio-app\u002Fgradio'>\u003C\u002Fa>).\n\nFooocus presents a rethinking of image generator designs. The software is offline, open source, and free, while at the same time, similar to many online image generators like Midjourney, manual tweaking is not needed, and users only need to focus on the prompts and images. Fooocus has also simplified the installation: between pressing \"download\" and generating the first image, the number of needed mouse clicks is strictly limited to less than 3. The minimal GPU memory requirement is 4GB (Nvidia).\n\n**Recently, many fake websites have appeared on Google when you search “fooocus”. Do not trust those – here is the only official source of Fooocus.**\n\n# Project Status: Limited Long-Term Support (LTS) with Bug Fixes Only\n\nThe Fooocus project, built entirely on the **Stable Diffusion XL** architecture, is now in a state of limited long-term support (LTS) with bug fixes only. As the existing functionality is considered nearly free of programmatic issues (thanks to [mashb1t](https:\u002F\u002Fgithub.com\u002Fmashb1t)'s huge efforts), future updates will focus exclusively on addressing any bugs that may arise.\n\n**There are no current plans to migrate to or incorporate newer model architectures.** However, this may change over time with the development of the open-source community. 
For example, if the community converges on a single dominant method for image generation (which may well happen within half a year to a year, given the current state of things), Fooocus may also migrate to that exact method.\n\nFor those interested in utilizing newer models such as **Flux**, we recommend exploring alternative platforms such as [WebUI Forge](https:\u002F\u002Fgithub.com\u002Flllyasviel\u002Fstable-diffusion-webui-forge) (also from us) or [ComfyUI\u002FSwarmUI](https:\u002F\u002Fgithub.com\u002Fcomfyanonymous\u002FComfyUI). Additionally, several [excellent forks of Fooocus](https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus?tab=readme-ov-file#forks) are available for experimentation.\n\nAgain, many fake websites have recently appeared on Google when you search “fooocus”. Do **NOT** get Fooocus from those websites – this page is the only official source of Fooocus. We have never had any website such as “fooocus.com”, “fooocus.net”, “fooocus.co”, “fooocus.ai”, “fooocus.org”, “fooocus.pro”, or “fooocus.one”. Those websites are ALL FAKE. **They have ABSOLUTELY no relationship to us. Fooocus is 100% non-commercial, offline, open-source software.**\n\n# Features\n\nBelow is a quick list using Midjourney's examples:\n\n| Midjourney | Fooocus |\n| - | - |\n| High-quality text-to-image without needing much prompt engineering or parameter tuning. \u003Cbr> (Unknown method) | High-quality text-to-image without needing much prompt engineering or parameter tuning. 
\u003Cbr> (Fooocus has an offline GPT-2-based prompt processing engine and lots of sampling improvements, so results are always beautiful, whether your prompt is as short as “house in garden” or as long as 1000 words) |\n| V1 V2 V3 V4 | Input Image -> Upscale or Variation -> Vary (Subtle) \u002F Vary (Strong) |\n| U1 U2 U3 U4 | Input Image -> Upscale or Variation -> Upscale (1.5x) \u002F Upscale (2x) |\n| Inpaint \u002F Up \u002F Down \u002F Left \u002F Right (Pan) | Input Image -> Inpaint or Outpaint -> Inpaint \u002F Up \u002F Down \u002F Left \u002F Right \u003Cbr> (Fooocus uses its own inpaint algorithm and inpaint models, so results are more satisfying than all other software that uses the standard SDXL inpaint method\u002Fmodel) |\n| Image Prompt | Input Image -> Image Prompt \u003Cbr> (Fooocus uses its own image prompt algorithm, so result quality and prompt understanding are more satisfying than all other software that uses standard SDXL methods like standard IP-Adapters or Revisions) |\n| --style | Advanced -> Style |\n| --stylize | Advanced -> Advanced -> Guidance |\n| --niji | [Multiple launchers: \"run.bat\", \"run_anime.bat\", and \"run_realistic.bat\".](https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fdiscussions\u002F679) \u003Cbr> Fooocus supports SDXL models on Civitai \u003Cbr> (You can search Google for “Civitai” if you do not know about it) |\n| --quality | Advanced -> Quality |\n| --repeat | Advanced -> Image Number |\n| Multi Prompts (::) | Just use multiple lines of prompts |\n| Prompt Weights | You can use \"I am (happy:1.5)\". \u003Cbr> Fooocus uses A1111's reweighting algorithm, so results are better than ComfyUI's if users directly copy prompts from Civitai. 
(Because if prompts are written in ComfyUI's reweighting, users are less likely to copy prompt texts as they prefer dragging files) \u003Cbr> To use embedding, you can use \"(embedding:file_name:1.1)\" |\n| --no | Advanced -> Negative Prompt |\n| --ar | Advanced -> Aspect Ratios |\n| InsightFace | Input Image -> Image Prompt -> Advanced -> FaceSwap |\n| Describe | Input Image -> Describe |\n\nBelow is a quick list using LeonardoAI's examples:\n\n| LeonardoAI | Fooocus |\n| - | - |\n| Prompt Magic | Advanced -> Style -> Fooocus V2 |\n| Advanced Sampler Parameters (like Contrast\u002FSharpness\u002Fetc) | Advanced -> Advanced -> Sampling Sharpness \u002F etc |\n| User-friendly ControlNets | Input Image -> Image Prompt -> Advanced |\n\nAlso, [click here to browse the advanced features.](https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fdiscussions\u002F117)\n\n# Download\n\n### Windows\n\nYou can directly download Fooocus with:\n\n**[>>> Click here to download \u003C\u003C\u003C](https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Freleases\u002Fdownload\u002Fv2.5.0\u002FFooocus_win64_2-5-0.7z)**\n\nAfter you download the file, please uncompress it and then run the \"run.bat\".\n\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Flllyasviel_Fooocus_readme_e20b8d0a31ef.png)\n\nThe first time you launch the software, it will automatically download models:\n\n1. It will download [default models](#models) to the folder \"Fooocus\\models\\checkpoints\" given different presets. You can download them in advance if you do not want automatic download.\n2. 
Note that the first time you inpaint an image, it will download [Fooocus's own inpaint control model from here](https:\u002F\u002Fhuggingface.co\u002Flllyasviel\u002Ffooocus_inpaint\u002Fresolve\u002Fmain\u002Finpaint_v26.fooocus.patch) as the file \"Fooocus\\models\\inpaint\\inpaint_v26.fooocus.patch\" (the size of this file is 1.28GB).\n\nAfter Fooocus 2.1.60, you will also have `run_anime.bat` and `run_realistic.bat`. They are different model presets (and require different models, but they will be automatically downloaded). [Check here for more details](https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fdiscussions\u002F679).\n\nAfter Fooocus 2.3.0 you can also switch presets directly in the browser. Remember to add these arguments if you want to change the default behavior:\n* Use `--disable-preset-selection` to disable preset selection in the browser.\n* Use `--always-download-new-model` to download missing models on preset switch. By default, Fooocus falls back to the `previous_default_models` defined in the corresponding preset; see also the terminal output.\n\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Flllyasviel_Fooocus_readme_0efb680c1f96.png)\n\nIf you already have these files, you can copy them to the above locations to speed up installation.\n\nNote that if you see **\"MetadataIncompleteBuffer\" or \"PytorchStreamReader\"**, then your model files are corrupted. Please download the models again.\n\nBelow is a test on a relatively low-end laptop with **16GB System RAM** and **6GB VRAM** (Nvidia 3060 laptop). The speed on this machine is about 1.35 seconds per iteration. Pretty impressive – nowadays laptops with a 3060 are usually available at a very acceptable price.\n\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Flllyasviel_Fooocus_readme_219ddcee82ca.png)\n\nBesides, many other software projects have recently reported that Nvidia drivers above 532 are sometimes 10x slower than Nvidia driver 531. 
If your generation time is very long, consider downloading [Nvidia Driver 531 Laptop](https:\u002F\u002Fwww.nvidia.com\u002Fdownload\u002FdriverResults.aspx\u002F199991\u002Fen-us\u002F) or [Nvidia Driver 531 Desktop](https:\u002F\u002Fwww.nvidia.com\u002Fdownload\u002FdriverResults.aspx\u002F199990\u002Fen-us\u002F).\n\nNote that the minimal requirement is **4GB Nvidia GPU memory (4GB VRAM)** and **8GB system memory (8GB RAM)**. This requires using Microsoft’s Virtual Swap technique, which is automatically enabled by your Windows installation in most cases, so you often do not need to do anything about it. However, if you are not sure, or if you manually turned it off (would anyone really do that?), or **if you see any \"RuntimeError: CPUAllocator\"**, you can enable it here:\n\n\u003Cdetails>\n\u003Csummary>Click here to see the image instructions.\u003C\u002Fsummary>\n\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Flllyasviel_Fooocus_readme_c9d80f879d1b.png)\n\n**And make sure that you have at least 40GB of free space on each drive if you still see \"RuntimeError: CPUAllocator\"!**\n\n\u003C\u002Fdetails>\n\nPlease open an issue if you use similar devices but still cannot achieve acceptable performance.\n\nNote that the [minimal requirement](#minimal-requirement) for different platforms is different.\n\nSee also the common problems and troubleshooting [here](troubleshoot.md).\n\n### Colab\n\n(Last tested - 2024 Aug 12 by [mashb1t](https:\u002F\u002Fgithub.com\u002Fmashb1t))\n\n| Colab | Info\n| --- | --- |\n[![Open In Colab](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Flllyasviel\u002FFooocus\u002Fblob\u002Fmain\u002Ffooocus_colab.ipynb) | Fooocus Official\n\nIn Colab, you can modify the last line to `!python entry_with_update.py --share --always-high-vram` or `!python entry_with_update.py --share --always-high-vram --preset anime` or `!python 
entry_with_update.py --share --always-high-vram --preset realistic` for Fooocus Default\u002FAnime\u002FRealistic Edition.\n\nYou can also change the preset in the UI. Please be aware that this may lead to timeouts after 60 seconds. If this is the case, please wait until the download has finished, change the preset back to the initial one and then to the one you've selected, or reload the page.\n\nNote that this Colab will disable the refiner by default because the free tier of Colab has relatively limited resources (and some \"big\" features like image prompt may cause free-tier Colab to disconnect). We make sure that basic text-to-image always works on free-tier Colab.\n\nUsing `--always-high-vram` shifts resource allocation from RAM to VRAM and achieves the overall best balance between performance, flexibility and stability on the default T4 instance. Please find more information [here](https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F1710#issuecomment-1989185346).\n\nThanks to [camenduru](https:\u002F\u002Fgithub.com\u002Fcamenduru) for the template!\n\n### Linux (Using Anaconda)\n\nIf you want to use Anaconda\u002FMiniconda, you can\n\n    git clone https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus.git\n    cd Fooocus\n    conda env create -f environment.yaml\n    conda activate fooocus\n    pip install -r requirements_versions.txt\n\nThen download the models: download [default models](#models) to the folder \"Fooocus\\models\\checkpoints\". 
**Or let Fooocus automatically download the models** using the launcher:\n\n    conda activate fooocus\n    python entry_with_update.py\n\nOr, if you want to open a remote port, use\n\n    conda activate fooocus\n    python entry_with_update.py --listen\n\nUse `python entry_with_update.py --preset anime` or `python entry_with_update.py --preset realistic` for Fooocus Anime\u002FRealistic Edition.\n\n### Linux (Using Python Venv)\n\nYour Linux needs to have **Python 3.10** installed, and let's say your Python can be called with the command **python3** with your venv system working; you can\n\n    git clone https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus.git\n    cd Fooocus\n    python3 -m venv fooocus_env\n    source fooocus_env\u002Fbin\u002Factivate\n    pip install -r requirements_versions.txt\n\nSee the above sections for model downloads. You can launch the software with:\n\n    source fooocus_env\u002Fbin\u002Factivate\n    python entry_with_update.py\n\nOr, if you want to open a remote port, use\n\n    source fooocus_env\u002Fbin\u002Factivate\n    python entry_with_update.py --listen\n\nUse `python entry_with_update.py --preset anime` or `python entry_with_update.py --preset realistic` for Fooocus Anime\u002FRealistic Edition.\n\n### Linux (Using native system Python)\n\nIf you know what you are doing, and your Linux already has **Python 3.10** installed, and your Python can be called with the command **python3** (and Pip with **pip3**), you can\n\n    git clone https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus.git\n    cd Fooocus\n    pip3 install -r requirements_versions.txt\n\nSee the above sections for model downloads. 
You can launch the software with:\n\n    python3 entry_with_update.py\n\nOr, if you want to open a remote port, use\n\n    python3 entry_with_update.py --listen\n\nUse `python entry_with_update.py --preset anime` or `python entry_with_update.py --preset realistic` for Fooocus Anime\u002FRealistic Edition.\n\n### Linux (AMD GPUs)\n\nNote that the [minimal requirement](#minimal-requirement) for different platforms is different.\n\nSame as the above instructions, except that you need to change torch to the AMD version:\n\n    pip uninstall torch torchvision torchaudio torchtext functorch xformers\n    pip install torch torchvision torchaudio --index-url https:\u002F\u002Fdownload.pytorch.org\u002Fwhl\u002Frocm5.6\n\nAMD is not intensively tested, however. The AMD support is in beta.\n\nUse `python entry_with_update.py --preset anime` or `python entry_with_update.py --preset realistic` for Fooocus Anime\u002FRealistic Edition.\n\n### Windows (AMD GPUs)\n\nNote that the [minimal requirement](#minimal-requirement) for different platforms is different.\n\nSame as the regular Windows setup. Download the software and edit the content of `run.bat` as:\n\n    .\\python_embeded\\python.exe -m pip uninstall torch torchvision torchaudio torchtext functorch xformers -y\n    .\\python_embeded\\python.exe -m pip install torch-directml\n    .\\python_embeded\\python.exe -s Fooocus\\entry_with_update.py --directml\n    pause\n\nThen run the `run.bat`.\n\nAMD is not intensively tested, however. The AMD support is in beta.\n\nFor AMD, use `.\\python_embeded\\python.exe Fooocus\\entry_with_update.py --directml --preset anime` or `.\\python_embeded\\python.exe Fooocus\\entry_with_update.py --directml --preset realistic` for Fooocus Anime\u002FRealistic Edition.\n\n### Mac\n\nNote that the [minimal requirement](#minimal-requirement) for different platforms is different.\n\nMac is not intensively tested. Below is an unofficial guideline for using Mac. 
You can discuss problems [here](https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F129).\n\nYou can install Fooocus on Apple Mac silicon (M1 or M2) with macOS 'Catalina' or a newer version. Fooocus runs on Apple silicon computers via [PyTorch](https:\u002F\u002Fpytorch.org\u002Fget-started\u002Flocally\u002F) MPS device acceleration. Mac Silicon computers don't come with a dedicated graphics card, resulting in significantly longer image processing times compared to computers with dedicated graphics cards.\n\n1. Install the conda package manager and pytorch nightly. Read the [Accelerated PyTorch training on Mac](https:\u002F\u002Fdeveloper.apple.com\u002Fmetal\u002Fpytorch\u002F) Apple Developer guide for instructions. Make sure pytorch recognizes your MPS device.\n1. Open the macOS Terminal app and clone this repository with `git clone https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus.git`.\n1. Change to the new Fooocus directory, `cd Fooocus`.\n1. Create a new conda environment, `conda env create -f environment.yaml`.\n1. Activate your new conda environment, `conda activate fooocus`.\n1. Install the packages required by Fooocus, `pip install -r requirements_versions.txt`.\n1. Launch Fooocus by running `python entry_with_update.py`. (Some Mac M2 users may need `python entry_with_update.py --disable-offload-from-vram` to speed up model loading\u002Funloading.) 
The first time you run Fooocus, it will automatically download the Stable Diffusion SDXL models and will take a significant amount of time, depending on your internet connection.\n\nUse `python entry_with_update.py --preset anime` or `python entry_with_update.py --preset realistic` for Fooocus Anime\u002FRealistic Edition.\n\n### Docker\n\nSee [docker.md](docker.md)\n\n### Download Previous Version\n\nSee the guidelines [here](https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fdiscussions\u002F1405).\n\n## Minimal Requirement\n\nBelow is the minimal requirement for running Fooocus locally. If your device capability is lower than this spec, you may not be able to use Fooocus locally. (Please let us know, in any case, if your device capability is lower but Fooocus still works.)\n\n| Operating System  | GPU                          | Minimal GPU Memory           | Minimal System Memory     | [System Swap](troubleshoot.md) | Note                                                                       |\n|-------------------|------------------------------|------------------------------|---------------------------|--------------------------------|----------------------------------------------------------------------------|\n| Windows\u002FLinux     | Nvidia RTX 4XXX              | 4GB                          | 8GB                       | Required                       | fastest                                                                    |\n| Windows\u002FLinux     | Nvidia RTX 3XXX              | 4GB                          | 8GB                       | Required                       | usually faster than RTX 2XXX                                               |\n| Windows\u002FLinux     | Nvidia RTX 2XXX              | 4GB                          | 8GB                       | Required                       | usually faster than GTX 1XXX                                               |\n| Windows\u002FLinux     | Nvidia GTX 1XXX              | 8GB 
(&ast; 6GB uncertain)    | 8GB                       | Required                       | only marginally faster than CPU                                            |\n| Windows\u002FLinux     | Nvidia GTX 9XX               | 8GB                          | 8GB                       | Required                       | faster or slower than CPU                                                  |\n| Windows\u002FLinux     | Nvidia GTX \u003C 9XX             | Not supported                | \u002F                         | \u002F                              | \u002F                                                                          |\n| Windows           | AMD GPU                      | 8GB    (updated 2023 Dec 30) | 8GB                       | Required                       | via DirectML (&ast; ROCm is on hold), about 3x slower than Nvidia RTX 3XXX |\n| Linux             | AMD GPU                      | 8GB                          | 8GB                       | Required                       | via ROCm, about 1.5x slower than Nvidia RTX 3XXX                           |\n| Mac               | M1\u002FM2 MPS                    | Shared                       | Shared                    | Shared                         | about 9x slower than Nvidia RTX 3XXX                                       |\n| Windows\u002FLinux\u002FMac | only use CPU                 | 0GB                          | 32GB                      | Required                       | about 17x slower than Nvidia RTX 3XXX                                      |\n\n&ast; AMD GPU ROCm (on hold): AMD is still working on supporting ROCm on Windows.\n\n&ast; Nvidia GTX 1XXX 6GB uncertain: Some people report 6GB success on GTX 10XX, but some other people report failure cases.\n\n*Note that Fooocus is only for extremely high-quality image generation. 
We will not support smaller models that reduce the requirements at the cost of result quality.*\n\n## Troubleshoot\n\nSee the common problems [here](troubleshoot.md).\n\n## Default Models\n\u003Ca name=\"models\">\u003C\u002Fa>\n\nGiven different goals, the default models and configs of Fooocus are different:\n\n| Task      | Windows | Linux args | Main Model                  | Refiner | Config                                                                         |\n|-----------| --- | --- |-----------------------------| --- |--------------------------------------------------------------------------------|\n| General   | run.bat |  | juggernautXL_v8Rundiffusion | not used | [here](https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fblob\u002Fmain\u002Fpresets\u002Fdefault.json)   |\n| Realistic | run_realistic.bat | --preset realistic | realisticStockPhoto_v20     | not used | [here](https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fblob\u002Fmain\u002Fpresets\u002Frealistic.json) |\n| Anime     | run_anime.bat | --preset anime | animaPencilXL_v500          | not used | [here](https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fblob\u002Fmain\u002Fpresets\u002Fanime.json)     |\n\nNote that the download is **automatic** - you do not need to do anything if the internet connection is okay. However, you can also download the models manually in advance (or move them from somewhere else) if you prefer.\n\n## UI Access and Authentication\nIn addition to running on localhost, Fooocus can also expose its UI in two ways: \n* Local UI listener: use `--listen` (specify a port, e.g. with `--port 8888`). \n* API access: use `--share` (registers an endpoint at `.gradio.live`).\n\nIn both cases, access is unauthenticated by default. 
You can add basic authentication by creating a file called `auth.json` in the main directory, which contains a list of JSON objects with the keys `user` and `pass` (see example in [auth-example.json](.\u002Fauth-example.json)).\n\n## List of \"Hidden\" Tricks\n\u003Ca name=\"tech_list\">\u003C\u002Fa>\n\n\u003Cdetails>\n\u003Csummary>Click to see a list of tricks. Those are based on SDXL and are not very up-to-date with latest models.\u003C\u002Fsummary>\n\n1. GPT2-based [prompt expansion as a dynamic style \"Fooocus V2\".](https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fdiscussions\u002F117#raw) (similar to Midjourney's hidden pre-processing and \"raw\" mode, or the LeonardoAI's Prompt Magic).\n2. Native refiner swap inside one single k-sampler. The advantage is that the refiner model can now reuse the base model's momentum (or ODE's history parameters) collected from k-sampling to achieve more coherent sampling. In Automatic1111's high-res fix and ComfyUI's node system, the base model and refiner use two independent k-samplers, which means the momentum is largely wasted, and the sampling continuity is broken. Fooocus uses its own advanced k-diffusion sampling that ensures seamless, native, and continuous swap in a refiner setup. (Update Aug 13: Actually, I discussed this with Automatic1111 several days ago, and it seems that the “native refiner swap inside one single k-sampler” is [merged]( https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F12371) into the dev branch of webui. Great!)\n3. Negative ADM guidance. Because the highest resolution level of XL Base does not have cross attentions, the positive and negative signals for XL's highest resolution level cannot receive enough contrasts during the CFG sampling, causing the results to look a bit plastic or overly smooth in certain cases. 
Fortunately, since the XL's highest resolution level is still conditioned on image aspect ratios (ADM), we can modify the adm on the positive\u002Fnegative side to compensate for the lack of CFG contrast in the highest resolution level. (Update Aug 16, the IOS App [Draw Things](https:\u002F\u002Fapps.apple.com\u002Fus\u002Fapp\u002Fdraw-things-ai-generation\u002Fid6444050820) will support Negative ADM Guidance. Great!)\n4. We implemented a carefully tuned variation of Section 5.1 of [\"Improving Sample Quality of Diffusion Models Using Self-Attention Guidance\"](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2210.00939.pdf). The weight is set to very low, but this is Fooocus's final guarantee to make sure that the XL will never yield an overly smooth or plastic appearance (examples [here](https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fdiscussions\u002F117#sharpness)). This can almost eliminate all cases for which XL still occasionally produces overly smooth results, even with negative ADM guidance. (Update 2023 Aug 18, the Gaussian kernel of SAG is changed to an anisotropic kernel for better structure preservation and fewer artifacts.)\n5. We modified the style templates a bit and added the \"cinematic-default\".\n6. We tested the \"sd_xl_offset_example-lora_1.0.safetensors\" and it seems that when the lora weight is below 0.5, the results are always better than XL without lora.\n7. The parameters of samplers are carefully tuned.\n8. Because XL uses positional encoding for generation resolution, images generated by several fixed resolutions look a bit better than those from arbitrary resolutions (because the positional encoding is not very good at handling int numbers that are unseen during training). This suggests that the resolutions in UI may be hard coded for best results.\n9. Separated prompts for two different text encoders seem unnecessary. 
Separated prompts for the base model and refiner may work, but the effects are random, and we refrain from implementing this.\n10. The DPM family seems well-suited for XL since XL sometimes generates overly smooth texture, but the DPM family sometimes generates overly dense detail in texture. Their joint effect looks neutral and appealing to human perception.\n11. A carefully designed system for balancing multiple styles as well as prompt expansion.\n12. Using automatic1111's method to normalize prompt emphasis. This significantly improves results when users directly copy prompts from civitai.\n13. The joint swap system of the refiner now also supports img2img and upscale in a seamless way.\n14. CFG Scale and TSNR correction (tuned for SDXL) when CFG is greater than 10.\n\u003C\u002Fdetails>\n\n## Customization\n\nAfter the first time you run Fooocus, a config file will be generated at `Fooocus\\config.txt`. This file can be edited to change the model path or default parameters.\n\nFor example, an edited `Fooocus\\config.txt` (this file will be generated after the first launch) may look like this:\n\n```json\n{\n    \"path_checkpoints\": \"D:\\\\Fooocus\\\\models\\\\checkpoints\",\n    \"path_loras\": \"D:\\\\Fooocus\\\\models\\\\loras\",\n    \"path_embeddings\": \"D:\\\\Fooocus\\\\models\\\\embeddings\",\n    \"path_vae_approx\": \"D:\\\\Fooocus\\\\models\\\\vae_approx\",\n    \"path_upscale_models\": \"D:\\\\Fooocus\\\\models\\\\upscale_models\",\n    \"path_inpaint\": \"D:\\\\Fooocus\\\\models\\\\inpaint\",\n    \"path_controlnet\": \"D:\\\\Fooocus\\\\models\\\\controlnet\",\n    \"path_clip_vision\": \"D:\\\\Fooocus\\\\models\\\\clip_vision\",\n    \"path_fooocus_expansion\": \"D:\\\\Fooocus\\\\models\\\\prompt_expansion\\\\fooocus_expansion\",\n    \"path_outputs\": \"D:\\\\Fooocus\\\\outputs\",\n    \"default_model\": \"realisticStockPhoto_v10.safetensors\",\n    \"default_refiner\": \"\",\n    \"default_loras\": [[\"lora_filename_1.safetensors\", 0.5], 
[\"lora_filename_2.safetensors\", 0.5]],\n    \"default_cfg_scale\": 3.0,\n    \"default_sampler\": \"dpmpp_2m\",\n    \"default_scheduler\": \"karras\",\n    \"default_negative_prompt\": \"low quality\",\n    \"default_positive_prompt\": \"\",\n    \"default_styles\": [\n        \"Fooocus V2\",\n        \"Fooocus Photograph\",\n        \"Fooocus Negative\"\n    ]\n}\n```\n\nMany other keys, formats, and examples are in `Fooocus\\config_modification_tutorial.txt` (this file will be generated after the first launch).\n\nThink twice before changing the config. If you break something, just delete `Fooocus\\config.txt` and Fooocus will revert to its defaults.\n\nA safer approach is simply to try \"run_anime.bat\" or \"run_realistic.bat\" - they should already be well suited to their respective tasks.\n\n~Note that `user_path_config.txt` is deprecated and will be removed soon.~ (Edit: it is already removed.)\n\n### All CMD Flags\n\n```\nentry_with_update.py  [-h] [--listen [IP]] [--port PORT]\n                      [--disable-header-check [ORIGIN]]\n                      [--web-upload-size WEB_UPLOAD_SIZE]\n                      [--hf-mirror HF_MIRROR]\n                      [--external-working-path PATH [PATH ...]]\n                      [--output-path OUTPUT_PATH]\n                      [--temp-path TEMP_PATH] [--cache-path CACHE_PATH]\n                      [--in-browser] [--disable-in-browser]\n                      [--gpu-device-id DEVICE_ID]\n                      [--async-cuda-allocation | --disable-async-cuda-allocation]\n                      [--disable-attention-upcast]\n                      [--all-in-fp32 | --all-in-fp16]\n                      [--unet-in-bf16 | --unet-in-fp16 | --unet-in-fp8-e4m3fn | --unet-in-fp8-e5m2]\n                      [--vae-in-fp16 | --vae-in-fp32 | --vae-in-bf16]\n                      [--vae-in-cpu]\n                      [--clip-in-fp8-e4m3fn | --clip-in-fp8-e5m2 | --clip-in-fp16 | --clip-in-fp32]\n                      
[--directml [DIRECTML_DEVICE]]\n                      [--disable-ipex-hijack]\n                      [--preview-option [none,auto,fast,taesd]]\n                      [--attention-split | --attention-quad | --attention-pytorch]\n                      [--disable-xformers]\n                      [--always-gpu | --always-high-vram | --always-normal-vram | --always-low-vram | --always-no-vram | --always-cpu [CPU_NUM_THREADS]]\n                      [--always-offload-from-vram]\n                      [--pytorch-deterministic] [--disable-server-log]\n                      [--debug-mode] [--is-windows-embedded-python]\n                      [--disable-server-info] [--multi-user] [--share]\n                      [--preset PRESET] [--disable-preset-selection]\n                      [--language LANGUAGE]\n                      [--disable-offload-from-vram] [--theme THEME]\n                      [--disable-image-log] [--disable-analytics]\n                      [--disable-metadata] [--disable-preset-download]\n                      [--disable-enhance-output-sorting]\n                      [--enable-auto-describe-image]\n                      [--always-download-new-model]\n                      [--rebuild-hash-cache [CPU_NUM_THREADS]]\n```\n\n## Inline Prompt Features\n\n### Wildcards\n\nExample prompt: `__color__ flower`\n\nProcessed for positive and negative prompt.\n\nSelects a random wildcard from a predefined list of options, in this case the `wildcards\u002Fcolor.txt` file. \nThe wildcard will be replaced with a random color (randomness based on seed). 
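As a rough illustration, seed-based wildcard substitution can be sketched in a few lines. This is only a sketch of the behavior described above, not Fooocus's actual implementation; the `expand_wildcards` helper is hypothetical, and nesting is omitted for brevity:

```python
import random
import re

def expand_wildcards(prompt: str, wildcard_files: dict, seed: int) -> str:
    """Replace each __name__ token with an entry from the matching wildcard list.

    wildcard_files maps a wildcard name (e.g. "color") to the lines of
    wildcards/<name>.txt. The RNG is seeded, so the same seed always
    produces the same substitution.
    """
    rng = random.Random(seed)

    def pick(match: re.Match) -> str:
        options = wildcard_files.get(match.group(1))
        # Unknown wildcards are left in the prompt untouched.
        return rng.choice(options) if options else match.group(0)

    return re.sub(r"__(\w+)__", pick, prompt)

# `__color__ flower` becomes e.g. `red flower`, depending on the seed.
colors = {"color": ["red", "green", "blue"]}
print(expand_wildcards("__color__ flower", colors, seed=42))
```

With `Read wildcards in order` enabled, the choice would instead walk the wildcard file top to bottom rather than draw from the seeded RNG.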
\nYou can also disable randomness and process a wildcard file from top to bottom by enabling the checkbox `Read wildcards in order` in Developer Debug Mode.\n\nWildcards can be nested and combined, and multiple wildcards can be used in the same prompt (see `wildcards\u002Fcolor_flower.txt` for an example).\n\n### Array Processing\n\nExample prompt: `[[red, green, blue]] flower`\n\nProcessed only for the positive prompt.\n\nProcesses the array from left to right, generating a separate image for each element. In this case, 3 images would be generated, one for each color.\nIncrease the image number to 3 to generate all 3 variants.\n\nArrays cannot be nested, but multiple arrays can be used in the same prompt.\nInline LoRAs are supported as array elements.\n\n### Inline LoRAs\n\nExample prompt: `flower \u003Clora:sunflowers:1.2>`\n\nProcessed only for the positive prompt.\n\nApplies a LoRA to the prompt. The LoRA file must be located in the `models\u002Floras` directory.\n\n## Advanced Features\n\n[Click here to browse the advanced features.](https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fdiscussions\u002F117)\n\n## Forks\n\nBelow are some forks of Fooocus:\n\n| Fooocus' forks |\n| - |\n| [fenneishi\u002FFooocus-Control](https:\u002F\u002Fgithub.com\u002Ffenneishi\u002FFooocus-Control) \u003Cbr>[runew0lf\u002FRuinedFooocus](https:\u002F\u002Fgithub.com\u002Frunew0lf\u002FRuinedFooocus) \u003Cbr> [MoonRide303\u002FFooocus-MRE](https:\u002F\u002Fgithub.com\u002FMoonRide303\u002FFooocus-MRE) \u003Cbr> [mashb1t\u002FFooocus](https:\u002F\u002Fgithub.com\u002Fmashb1t\u002FFooocus) \u003Cbr> and so on ... |\n\n## Thanks\n\nMany thanks to [twri](https:\u002F\u002Fgithub.com\u002Ftwri), [3Diva](https:\u002F\u002Fgithub.com\u002F3Diva), and [Marc K3nt3L](https:\u002F\u002Fgithub.com\u002FK3nt3L) for creating additional SDXL styles available in Fooocus. 
\n\nThe project starts from a mixture of [Stable Diffusion WebUI](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui) and [ComfyUI](https:\u002F\u002Fgithub.com\u002Fcomfyanonymous\u002FComfyUI) codebases.\n\nAlso, thanks [daswer123](https:\u002F\u002Fgithub.com\u002Fdaswer123) for contributing the Canvas Zoom!\n\n## Update Log\n\nThe log is [here](update_log.md).\n\n## Localization\u002FTranslation\u002FI18N\n\nYou can put json files in the `language` folder to translate the user interface.\n\nFor example, below is the content of `Fooocus\u002Flanguage\u002Fexample.json`:\n\n```json\n{\n  \"Generate\": \"生成\",\n  \"Input Image\": \"入力画像\",\n  \"Advanced\": \"고급\",\n  \"SAI 3D Model\": \"SAI 3D Modèle\"\n}\n```\n\nIf you add `--language example` arg, Fooocus will read `Fooocus\u002Flanguage\u002Fexample.json` to translate the UI.\n\nFor example, you can edit the ending line of Windows `run.bat` as\n\n    .\\python_embeded\\python.exe -s Fooocus\\entry_with_update.py --language example\n\nOr `run_anime.bat` as\n\n    .\\python_embeded\\python.exe -s Fooocus\\entry_with_update.py --language example --preset anime\n\nOr `run_realistic.bat` as\n\n    .\\python_embeded\\python.exe -s Fooocus\\entry_with_update.py --language example --preset realistic\n\nFor practical translation, you may create your own file like `Fooocus\u002Flanguage\u002Fjp.json` or `Fooocus\u002Flanguage\u002Fcn.json` and then use flag `--language jp` or `--language cn`. Apparently, these files do not exist now. **We need your help to create these files!**\n\nNote that if no `--language` is given and at the same time `Fooocus\u002Flanguage\u002Fdefault.json` exists, Fooocus will always load `Fooocus\u002Flanguage\u002Fdefault.json` for translation. 
By default, the file `Fooocus\u002Flanguage\u002Fdefault.json` does not exist.\n","\u003Cdiv align=center>\n\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Flllyasviel_Fooocus_readme_a75fe6cc20b4.png\">\n\u003C\u002Fdiv>\n\n# Fooocus\n\n[>>> 点击此处安装 Fooocus \u003C\u003C\u003C](#download)\n\nFooocus 是一款图像生成软件（基于 [Gradio](https:\u002F\u002Fwww.gradio.app\u002F) \u003Ca href='https:\u002F\u002Fgithub.com\u002Fgradio-app\u002Fgradio'>\u003Cimg src='https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fstars\u002Fgradio-app\u002Fgradio'>\u003C\u002Fa>）。\n\nFooocus 重新思考了图像生成器的设计。该软件离线、开源且免费，同时类似于许多在线图像生成器（如 Midjourney），无需手动调整，用户只需关注提示词和图像。Fooocus 还简化了安装过程：从按下“下载”到生成第一张图像，所需的鼠标点击次数严格限制在 3 次以内。最低 GPU 内存要求为 4GB（Nvidia）。\n\n**最近，当您在 Google 上搜索\"fooocus\"时存在许多虚假网站。不要相信它们——这里是 Fooocus 的唯一官方来源。**\n\n# 项目状态：仅限错误修复的有限长期支持 (LTS)\n\nFooocus 项目完全基于 **Stable Diffusion XL** 架构构建，目前处于仅限错误修复的有限长期支持 (LTS) 状态。由于现有功能被认为几乎没有程序性问题（感谢 [mashb1t](https:\u002F\u002Fgithub.com\u002Fmashb1t) 的巨大努力），未来的更新将 exclusively 专注于解决可能出现的任何错误。\n\n**目前没有计划迁移或整合更新的模型架构。** 然而，随着开源社区的发展，这可能会随时间改变。例如，如果社区收敛到一种单一的图像生成主导方法（鉴于当前状况，这在半年或一年内真的可能发生），Fooocus 也可能迁移到该确切方法。\n\n对于那些希望利用更新模型（如 **Flux**）的用户，我们建议探索替代平台，例如 [WebUI Forge](https:\u002F\u002Fgithub.com\u002Flllyasviel\u002Fstable-diffusion-webui-forge)（也是由我们开发）、[ComfyUI\u002FSwarmUI](https:\u002F\u002Fgithub.com\u002Fcomfyanonymous\u002FComfyUI)。此外，还有几个 [优秀的 Fooocus 分支版本](https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus?tab=readme-ov-file#forks) 可供实验。\n\n再次强调，最近当您在 Google 上搜索\"fooocus\"时存在许多虚假网站。请**不要**从这些网站获取 Fooocus——此页面是 Fooocus 的唯一官方来源。我们从未拥有过类似\"fooocus.com\"、\"fooocus.net\"、\"fooocus.co\"、\"fooocus.ai\"、\"fooocus.org\"、\"fooocus.pro\"、\"fooocus.one\"的网站。那些网站全部都是假的。**它们与我们绝对没有任何关系。Fooocus 是 100% 非商业化的离线开源软件。**\n\n# 功能特性\n\n以下是使用 Midjourney 示例的快速列表：\n\n| Midjourney | Fooocus |\n| - | - |\n| 高质量文生图，无需太多提示工程或参数调整。\u003Cbr>（未知方法） | 高质量文生图，无需太多提示工程或参数调整。\u003Cbr>（Fooocus 拥有一个基于 GPT-2 的离线提示处理引擎以及大量采样改进，因此无论您的提示词短至“花园里的房子”还是长达 1000 字，结果总是美丽的） 
|\n| V1 V2 V3 V4 | 输入图像 -> 放大或变体 -> 微调差异 (Vary Subtle) \u002F 强差异 (Vary Strong)|\n| U1 U2 U3 U4 | 输入图像 -> 放大或变体 -> 放大 (1.5x) \u002F 放大 (2x) |\n| 重绘 \u002F 上 \u002F 下 \u002F 左 \u002F 右 (平移 Pan) | 输入图像 -> 重绘或外绘 -> 重绘 \u002F 上 \u002F 下 \u002F 左 \u002F 右\u003Cbr>（Fooocus 使用自己的重绘算法和重绘模型，因此结果比所有使用标准 SDXL 重绘方法\u002F模型的其他软件更令人满意） |\n| 图像提示 | 输入图像 -> 图像提示\u003Cbr>（Fooocus 使用自己的图像提示算法，因此结果质量和提示理解力比所有使用标准 SDXL 方法（如标准 IP-Adapters 或 Revisions）的其他软件更令人满意） |\n| --style | 高级 -> 风格 |\n| --stylize | 高级 -> 高级 -> 引导 |\n| --niji | [多个启动器：\"run.bat\", \"run_anime.bat\", 和 \"run_realistic.bat\".](https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fdiscussions\u002F679)\u003Cbr>Fooocus 支持 Civitai 上的 SDXL 模型\u003Cbr>（如果您不知道它，可以谷歌搜索\"Civitai\"） |\n| --quality | 高级 -> 质量 |\n| --repeat | 高级 -> 图像数量 |\n| 多提示 (::) | 直接使用多行提示即可 |\n| 提示权重 | 您可以使用\"I am (happy:1.5)\"。\u003Cbr>Fooocus 使用 A1111 的重加权算法，因此如果用户直接从 Civitai 复制提示词，结果比 ComfyUI 更好。（因为如果使用 ComfyUI 的重加权编写提示词，用户不太可能复制提示文本，因为他们更喜欢拖拽文件）\u003Cbr>要使用嵌入，您可以使用\"(embedding:file_name:1.1)\" |\n| --no | 高级 -> 负面提示 |\n| --ar | 高级 -> 宽高比 |\n| InsightFace | 输入图像 -> 图像提示 -> 高级 -> 换脸 |\n| Describe | 输入图像 -> 描述 |\n\n以下是使用 LeonardoAI 示例的快速列表：\n\n| LeonardoAI | Fooocus |\n| - | - |\n| 提示魔法 Prompt Magic | 高级 -> 风格 -> Fooocus V2 |\n| 高级采样器参数（如对比度\u002F锐度等） | 高级 -> 高级 -> 采样锐度 \u002F 等 |\n| 用户友好的 ControlNets | 输入图像 -> 图像提示 -> 高级 |\n\n此外，[点击此处浏览高级功能。](https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fdiscussions\u002F117)\n\n# 下载\n\n### Windows\n\n您可以直接下载 Fooocus：\n\n**[>>> Click here to download \u003C\u003C\u003C](https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Freleases\u002Fdownload\u002Fv2.5.0\u002FFooocus_win64_2-5-0.7z)**\n\n下载文件后，请解压它并运行 \"run.bat\"。\n\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Flllyasviel_Fooocus_readme_e20b8d0a31ef.png)\n\n首次启动软件时，它将自动下载模型：\n\n1. 根据预设的不同，它会下载 [默认模型](#models) 到文件夹 \"Fooocus\\models\\checkpoints\"。如果您不希望自动下载，可以提前下载它们。\n2. 
注意，如果您使用修复（inpaint），在您第一次修复图像时，它会从 [此处](https:\u002F\u002Fhuggingface.co\u002Flllyasviel\u002Ffooocus_inpaint\u002Fresolve\u002Fmain\u002Finpaint_v26.fooocus.patch) 下载 Fooocus 自己的修复控制模型，保存为文件 \"Fooocus\\models\\inpaint\\inpaint_v26.fooocus.patch\"（此文件大小为 1.28GB）。\n\nFooocus 2.1.60 版本之后，您还将拥有 `run_anime.bat` 和 `run_realistic.bat`。它们是不同模型的预设（需要不同的模型，但它们会自动下载）。[点击此处了解更多详情](https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fdiscussions\u002F679)。\n\nFooocus 2.3.0 之后，您也可以直接在浏览器中切换预设。请记住，如果您想更改默认行为，请添加以下参数：\n* 使用 `--disable-preset-selection` 禁用浏览器中的预设选择。\n* 使用 `--always-download-new-model` 在切换预设时下载缺失的模型。默认情况下回退到相应预设中定义的 `previous_default_models`，也请参阅终端输出。\n\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Flllyasviel_Fooocus_readme_0efb680c1f96.png)\n\n如果您已经有了这些文件，可以将它们复制到上述位置以加快安装速度。\n\n请注意，如果您看到 **\"MetadataIncompleteBuffer\" 或 \"PytorchStreamReader\"** ，则您的模型文件已损坏。请重新下载模型。\n\n以下是针对一台相对低配笔记本电脑的测试，配备 **16GB 系统内存（RAM）** 和 **6GB 显存（VRAM）** (Nvidia 3060 笔记本)。该机器上的速度约为每次迭代 1.35 秒。相当令人印象深刻——如今配备 3060 的笔记本电脑通常价格非常合理。\n\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Flllyasviel_Fooocus_readme_219ddcee82ca.png)\n\n此外，最近许多其他软件报告称，Nvidia 驱动 532 以上版本有时比 Nvidia 驱动 531 慢 10 倍。如果您的生成时间非常长，请考虑下载 [Nvidia Driver 531 Laptop](https:\u002F\u002Fwww.nvidia.com\u002Fdownload\u002FdriverResults.aspx\u002F199991\u002Fen-us\u002F) 或 [Nvidia Driver 531 Desktop](https:\u002F\u002Fwww.nvidia.com\u002Fdownload\u002FdriverResults.aspx\u002F199990\u002Fen-us\u002F)。\n\n请注意，最低要求是 **4GB Nvidia GPU 显存（4GB VRAM）** 和 **8GB 系统内存（8GB RAM）**。这需要启用 Microsoft 的虚拟交换技术，大多数情况下您的 Windows 安装会自动启用它，因此您通常不需要做任何操作。但是，如果您不确定，或者如果您手动关闭了它（真的有人会这样做吗？），或者 **如果您看到任何 \"RuntimeError: CPUAllocator\"** ，您可以在这里启用它：\n\n\u003Cdetails>\n\u003Csummary>点击此处查看图片说明。 \u003C\u002Fsummary>\n\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Flllyasviel_Fooocus_readme_c9d80f879d1b.png)\n\n**并且确保如果仍然看到 \"RuntimeError: CPUAllocator\"，每个驱动器上至少有 40GB 
可用空间！**\n\n\u003C\u002Fdetails>\n\n如果您使用类似设备但仍无法达到可接受的性能，请提交问题（issue）。\n\n请注意，不同平台的 [最低要求](#minimal-requirement) 是不同的。\n\n另请参阅常见问题和故障排除 [此处](troubleshoot.md)。\n\n### Colab\n\n（最后测试 - 2024 年 8 月 12 日，由 [mashb1t](https:\u002F\u002Fgithub.com\u002Fmashb1t)）\n\n| Colab | 信息\n| --- | --- |\n[![Open In Colab](https:\u002F\u002Fcolab.research.google.com\u002Fassets\u002Fcolab-badge.svg)](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Flllyasviel\u002FFooocus\u002Fblob\u002Fmain\u002Ffooocus_colab.ipynb) | Fooocus 官方\n\n在 Colab 中，您可以将最后一行修改为 `!python entry_with_update.py --share --always-high-vram` 或 `!python entry_with_update.py --share --always-high-vram --preset anime` 或 `!python entry_with_update.py --share --always-high-vram --preset realistic` 以对应 Fooocus 默认版\u002F动漫版\u002F写实版。\n\n您也可以在使用界面（UI）中更改预设。请注意，这可能导致 60 秒后超时。如果是这种情况，请等待下载完成，将预设更改为初始值再改回您选择的预设，或者刷新页面。\n\n请注意，此 Colab 默认将禁用 Refiner（细化器），因为 Colab 免费版的资源相对有限（某些“大型”功能如图像提示可能会导致免费版 Colab 断开连接）。我们确保基本的文生图功能在免费版 Colab 上始终可用。\n\n使用 `--always-high-vram` 将资源分配从 RAM（内存）转移到 VRAM（显存），并在默认的 T4 实例上实现性能、灵活性和稳定性之间的最佳平衡。请在此处 [查找更多信息](https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F1710#issuecomment-1989185346)。\n\n感谢 [camenduru](https:\u002F\u002Fgithub.com\u002Fcamenduru) 提供的模板！\n\n### Linux (使用 Anaconda)\n\n如果您想使用 Anaconda\u002FMiniconda，可以执行以下命令：\n\n    git clone https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus.git\n    cd Fooocus\n    conda env create -f environment.yaml\n    conda activate fooocus\n    pip install -r requirements_versions.txt\n\n然后下载模型：将 [默认模型](#models) 下载到文件夹 \"Fooocus\\models\\checkpoints\"。**或者让 Fooocus 使用启动器自动下载模型**：\n\n    conda activate fooocus\n    python entry_with_update.py\n\n或者，如果您想开放远程端口，请使用：\n\n    conda activate fooocus\n    python entry_with_update.py --listen\n\n对于 Fooocus 动漫版\u002F写实版，请使用 `python entry_with_update.py --preset anime` 或 `python entry_with_update.py --preset realistic`。\n\n### Linux (使用 Python Venv)\n\n您的 Linux 需要安装 **Python 
3.10**，假设您的 Python 可以通过命令 **python3** 调用且您的 venv（虚拟环境）系统正常工作；您可以：\n\n    git clone https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus.git\n    cd Fooocus\n    python3 -m venv fooocus_env\n    source fooocus_env\u002Fbin\u002Factivate\n    pip install -r requirements_versions.txt\n\n关于模型下载，请参考上述部分。您可以使用以下方式启动软件：\n\n    source fooocus_env\u002Fbin\u002Factivate\n    python entry_with_update.py\n\n或者，如果您想开放远程端口，请使用：\n\n    source fooocus_env\u002Fbin\u002Factivate\n    python entry_with_update.py --listen\n\n对于 Fooocus 动漫版\u002F写实版，请使用 `python entry_with_update.py --preset anime` 或 `python entry_with_update.py --preset realistic`。\n\n### Linux（使用原生系统 Python）\n\n如果你知道自己在做什么，并且你的 Linux 已经安装了 **Python 3.10**，且你的 Python 可以通过命令 **python3** 调用（Pip 通过 **pip3** 调用），那么你可以\n\n    git clone https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus.git\n    cd Fooocus\n    pip3 install -r requirements_versions.txt\n\n模型下载请参见上述章节。你可以通过以下方式启动软件：\n\n    python3 entry_with_update.py\n\n或者，如果你想开放远程端口，请使用\n\n    python3 entry_with_update.py --listen\n\n若要使用 Fooocus Anime\u002FRealistic 版本，请使用 `python entry_with_update.py --preset anime` 或 `python entry_with_update.py --preset realistic`。\n\n### Linux（AMD GPU）\n\n请注意，不同平台的 [最低要求](#minimal-requirement) 是不同的。\n\n与上述说明相同。你需要将 torch 更改为 AMD 版本\n\n    pip uninstall torch torchvision torchaudio torchtext functorch xformers \n    pip install torch torchvision torchaudio --index-url https:\u002F\u002Fdownload.pytorch.org\u002Fwhl\u002Frocm5.6\n\n不过，AMD 尚未进行密集测试。AMD 支持处于测试版阶段。\n\n若要使用 Fooocus Anime\u002FRealistic 版本，请使用 `python entry_with_update.py --preset anime` 或 `python entry_with_update.py --preset realistic`。\n\n### Windows（AMD GPU）\n\n请注意，不同平台的 [最低要求](#minimal-requirement) 是不同的。\n\n与 Windows 系统相同。下载软件并编辑 `run.bat` 的内容如下：\n\n    .\\python_embeded\\python.exe -m pip uninstall torch torchvision torchaudio torchtext functorch xformers -y\n    .\\python_embeded\\python.exe -m pip install torch-directml\n    .\\python_embeded\\python.exe 
-s Fooocus\\entry_with_update.py --directml\n    pause\n\n然后运行 `run.bat`。\n\n不过，AMD 尚未进行密集测试。AMD 支持处于测试版阶段。\n\n对于 AMD，若要使用 Fooocus Anime\u002FRealistic 版本，请使用 `.\\python_embeded\\python.exe Fooocus\\entry_with_update.py --directml --preset anime` 或 `.\\python_embeded\\python.exe Fooocus\\entry_with_update.py --directml --preset realistic`。\n\n### Mac\n\n请注意，不同平台的 [最低要求](#minimal-requirement) 是不同的。\n\nMac 尚未进行密集测试。以下是使用 Mac 的非官方指南。你可以在 [此处](https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F129) 讨论问题。\n\n你可以在搭载 macOS 'Catalina' 或更新版本的 Apple Mac 芯片（M1 或 M2）上安装 Fooocus。Fooocus 通过 [PyTorch](https:\u002F\u002Fpytorch.org\u002Fget-started\u002Flocally\u002F)（深度学习框架）MPS（Metal Performance Shaders）设备加速在 Apple 硅基计算机上运行。Mac 硅基计算机没有配备独立显卡，因此与配备独立显卡的计算机相比，图像处理时间会显著延长。\n\n1. 安装 conda（包管理器）和 pytorch nightly 版本。阅读 [Mac 上的加速 PyTorch 训练](https:\u002F\u002Fdeveloper.apple.com\u002Fmetal\u002Fpytorch\u002F) Apple Developer 指南以获取说明。确保 pytorch 识别你的 MPS 设备。\n1. 打开 macOS Terminal 应用程序，并使用 `git clone https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus.git` 克隆此仓库。\n1. 进入新的 Fooocus 目录，执行 `cd Fooocus`。\n1. 创建一个新的 conda 环境，执行 `conda env create -f environment.yaml`。\n1. 激活你的新 conda 环境，执行 `conda activate fooocus`。\n1. 安装 Fooocus 所需的软件包，执行 `pip install -r requirements_versions.txt`。\n1. 
运行 `python entry_with_update.py` 启动 Fooocus。（一些 Mac M2 用户可能需要 `python entry_with_update.py --disable-offload-from-vram` 来加快模型的加载\u002F卸载速度。）第一次运行 Fooocus 时，它会自动下载 Stable Diffusion SDXL（图像生成模型）模型，这可能需要相当长的时间，具体取决于你的网络连接。\n\n若要使用 Fooocus Anime\u002FRealistic 版本，请使用 `python entry_with_update.py --preset anime` 或 `python entry_with_update.py --preset realistic`。\n\n### Docker（容器化平台）\n\n请参阅 [docker.md](docker.md)。\n\n### 下载旧版本\n\n请参阅 [此处](https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fdiscussions\u002F1405) 的指南。\n\n## 最低要求\n\n以下是本地运行 Fooocus 的最低要求。如果您的设备性能低于此规格，您可能无法在本地使用 Fooocus。（无论如何，如果您的设备性能较低但 Fooocus 仍能工作，请告知我们。）\n\n| 操作系统 | GPU | 最小显存 | 最小系统内存 | [系统交换空间](troubleshoot.md) | 备注 |\n|---|---|---|---|---|---|\n| Windows\u002FLinux | Nvidia RTX 4XXX | 4GB | 8GB | 必需 | 最快 |\n| Windows\u002FLinux | Nvidia RTX 3XXX | 4GB | 8GB | 必需 | 通常比 RTX 2XXX 快 |\n| Windows\u002FLinux | Nvidia RTX 2XXX | 4GB | 8GB | 必需 | 通常比 GTX 1XXX 快 |\n| Windows\u002FLinux | Nvidia GTX 1XXX | 8GB (* 6GB 不确定) | 8GB | 必需 | 仅比 CPU 稍快 |\n| Windows\u002FLinux | Nvidia GTX 9XX | 8GB | 8GB | 必需 | 比 CPU 快或慢 |\n| Windows\u002FLinux | Nvidia GTX \u003C 9XX | 不支持 | \u002F | \u002F | \u002F |\n| Windows | AMD GPU | 8GB (更新于 2023 年 12 月 30 日) | 8GB | 必需 | 通过 DirectML (* ROCm 已暂停)，速度约为 Nvidia RTX 3XXX 的 1\u002F3 |\n| Linux | AMD GPU | 8GB | 8GB | 必需 | 通过 ROCm，速度约为 Nvidia RTX 3XXX 的 1\u002F1.5 |\n| Mac | M1\u002FM2 MPS | 共享 | 共享 | 共享 | 速度约为 Nvidia RTX 3XXX 的 1\u002F9 |\n| Windows\u002FLinux\u002FMac | 仅使用 CPU | 0GB | 32GB | 必需 | 速度约为 Nvidia RTX 3XXX 的 1\u002F17 |\n\n* AMD GPU ROCm（暂停中）：AMD 仍在努力支持 Windows 上的 ROCm。\n\n* Nvidia GTX 1XXX 6GB 不确定性：部分用户报告 GTX 10XX 上 6GB 成功，但也有其他用户报告失败案例。\n\n*请注意，Fooocus 仅用于生成极高质量的图像。我们将不支持较小的模型以降低要求并牺牲结果质量。*\n\n## 故障排除\n\n常见问题请参见 [此处](troubleshoot.md)。\n\n## 默认模型\n\u003Ca name=\"models\">\u003C\u002Fa>\n\n根据不同的目标，Fooocus 的默认模型和配置有所不同：\n\n| 任务 | Windows | Linux 参数 | 主模型 | 精炼器 | 配置 |\n|---|---|---|---|---|---|\n| 通用 | run.bat | | juggernautXL_v8Rundiffusion | 未使用 | 
[此处](https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fblob\u002Fmain\u002Fpresets\u002Fdefault.json) |\n| 写实 | run_realistic.bat | --preset realistic | realisticStockPhoto_v20 | 未使用 | [此处](https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fblob\u002Fmain\u002Fpresets\u002Frealistic.json) |\n| 动漫 | run_anime.bat | --preset anime | animaPencilXL_v500 | 未使用 | [此处](https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fblob\u002Fmain\u002Fpresets\u002Fanime.json) |\n\n请注意下载是**自动**的——如果网络连接正常，您无需做任何操作。但是，如果您已有准备（例如从其他地方移动过来），也可以手动下载它们。\n\n## UI 访问与身份验证\n\n除了在本机运行外，Fooocus 还可以通过以下方式暴露其 UI：\n* 本地 UI 监听器：使用 `--listen`（指定端口，例如通过 `--port 8888`）。\n* API 访问：使用 `--share`（在 `.gradio.live` 注册一个端点）。\n\n在这两种方式下，默认情况下访问无需身份验证。您可以通过在主目录中创建一个名为 `auth.json` 的文件来添加基本身份验证，该文件包含带有 `user` 和 `pass` 键的 JSON 对象列表（示例见 [auth-example.json](.\u002Fauth-example.json)）。\n\n## “隐藏”技巧列表\n\u003Ca name=\"tech_list\">\u003C\u002Fa>\n\n\u003Cdetails>\n\u003Csummary>点击查看技巧列表。这些基于 SDXL（Stable Diffusion XL），与最新模型相比可能稍显过时。\u003C\u002Fsummary>\n\n1. 基于 GPT2 的 [提示词扩展作为动态风格 \"Fooocus V2\"。](https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fdiscussions\u002F117#raw)（类似于 Midjourney 的隐藏预处理和\"raw\"模式，或 LeonardoAI 的 Prompt Magic）。\n2. 在单个 k-sampler（k 采样器）内部的原生 refine 器（精修器）切换。优势在于 refine 模型现在可以重用从 k-sampling 中收集的 base 模型（基础模型）的动量（或 ODE（常微分方程）的历史参数），从而实现更连贯的采样。在 Automatic1111 的高清修复和 ComfyUI 的节点系统中，base 模型和 refine 器使用两个独立的 k-sampler，这意味着动量在很大程度上被浪费，且采样的连续性被打破。Fooocus 使用其自身先进的 k-diffusion（k 扩散）采样，确保在 refine 器设置中无缝、原生且连续的切换。（更新 8 月 13 日：实际上，前几天我和 Automatic1111 讨论过这件事，看起来“在单个 k-sampler 内部的原生 refine 器切换”已经 [合并]( https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui\u002Fpull\u002F12371) 到 webui（Web 界面）的 dev 分支了。太棒了！）\n3. 
负向 ADM 引导（宽高比模型引导）。因为 XL Base 的最高分辨率级别没有交叉注意力机制，XL 最高分辨率级别的正负信号在 CFG（分类器自由引导）采样期间无法获得足够的对比度，导致结果在某些情况下看起来有点塑料感或过于平滑。幸运的是，由于 XL 的最高分辨率级别仍然基于图像宽高比（ADM），我们可以修改正\u002F负侧的 adm 来补偿最高分辨率级别中 CFG 对比度的缺失。（更新 8 月 16 日，iOS 应用 [Draw Things](https:\u002F\u002Fapps.apple.com\u002Fus\u002Fapp\u002Fdraw-things-ai-generation\u002Fid6444050820) 将支持负向 ADM 引导。太棒了！）\n4. 我们实现了论文 [\"Improving Sample Quality of Diffusion Models Using Self-Attention Guidance\"](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2210.00939.pdf) 第 5.1 节的一个精心调优的变体（自注意力引导 SAG）。权重设置得非常低，但这是 Fooocus 的最终保障，确保 XL 永远不会产生过于平滑或塑料感的外观（示例 [在此](https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fdiscussions\u002F117#sharpness)）。这几乎消除了所有 XL 即使有负向 ADM 引导仍偶尔产生过于平滑结果的情况。（更新 2023 年 8 月 18 日，SAG 的高斯核已更改为各向异性核，以更好地保留结构并减少伪影。）\n5. 我们稍微修改了风格模板，并添加了\"cinematic-default\"。\n6. 我们测试了\"sd_xl_offset_example-lora_1.0.safetensors\"，似乎当 lora（LoRA）权重低于 0.5 时，结果总是优于不使用 lora 的 XL。\n7. 采样器的参数经过精心调优。\n8. 因为 XL 使用位置编码来处理生成分辨率，由几个固定分辨率生成的图像看起来比任意分辨率生成的图像更好（因为位置编码不太擅长处理训练期间未见过的整数）。这表明 UI 中的分辨率可能是硬编码的以获得最佳效果。\n9. 为两个不同的文本编码器使用分离的提示词似乎没有必要。为 base 模型和 refine 器使用分离的提示词可能有效，但效果是随机的，因此我们避免实现这一点。\n10. DPM 系列似乎非常适合 XL，因为 XL 有时会生成过于平滑的纹理，但 DPM 系列有时会在纹理上生成过于密集的细节。它们的联合效果看起来中性且符合人类感知。\n11. 一个精心设计的系统，用于平衡多种风格以及提示词扩展。\n12. 使用 automatic1111 的方法来规范化提示词强调。当用户直接从 civitai 复制提示词时，这显著改善了结果。\n13. refine 器的联合切换系统现在也以无缝方式支持 img2img（图生图）和 upscale（放大）。\n14. 
当 CFG 大于 10 时的 CFG Scale（分类器自由引导比例）和 TSNR（时间步降噪）校正（针对 SDXL 调优）。\n\u003C\u002Fdetails>\n\n## 自定义\n\n首次运行 Fooocus 后，将在 `Fooocus\\config.txt` 生成配置文件。可以编辑此文件以更改模型路径或默认参数。\n\n例如，编辑后的 `Fooocus\\config.txt`（此文件将在首次启动后生成）可能如下所示：\n\n```json\n{\n    \"path_checkpoints\": \"D:\\\\Fooocus\\\\models\\\\checkpoints\",\n    \"path_loras\": \"D:\\\\Fooocus\\\\models\\\\loras\",\n    \"path_embeddings\": \"D:\\\\Fooocus\\\\models\\\\embeddings\",\n    \"path_vae_approx\": \"D:\\\\Fooocus\\\\models\\\\vae_approx\",\n    \"path_upscale_models\": \"D:\\\\Fooocus\\\\models\\\\upscale_models\",\n    \"path_inpaint\": \"D:\\\\Fooocus\\\\models\\\\inpaint\",\n    \"path_controlnet\": \"D:\\\\Fooocus\\\\models\\\\controlnet\",\n    \"path_clip_vision\": \"D:\\\\Fooocus\\\\models\\\\clip_vision\",\n    \"path_fooocus_expansion\": \"D:\\\\Fooocus\\\\models\\\\prompt_expansion\\\\fooocus_expansion\",\n    \"path_outputs\": \"D:\\\\Fooocus\\\\outputs\",\n    \"default_model\": \"realisticStockPhoto_v10.safetensors\",\n    \"default_refiner\": \"\",\n    \"default_loras\": [[\"lora_filename_1.safetensors\", 0.5], [\"lora_filename_2.safetensors\", 0.5]],\n    \"default_cfg_scale\": 3.0,\n    \"default_sampler\": \"dpmpp_2m\",\n    \"default_scheduler\": \"karras\",\n    \"default_negative_prompt\": \"low quality\",\n    \"default_positive_prompt\": \"\",\n    \"default_styles\": [\n        \"Fooocus V2\",\n        \"Fooocus Photograph\",\n        \"Fooocus Negative\"\n    ]\n}\n```\n\n许多其他键、格式和示例位于 `Fooocus\\config_modification_tutorial.txt`（此文件将在首次启动后生成）。\n\n在真正更改配置前请三思。如果发现弄坏了东西，只需删除 `Fooocus\\config.txt`。Fooocus 将恢复默认设置。\n\n更安全的方法是尝试\"run_anime.bat\"或\"run_realistic.bat\"——它们对于不同任务应该已经足够好了。\n\n~注意 `user_path_config.txt` 已被弃用并将很快移除。~（编辑：它已经被移除了。）\n\n### 所有 CMD 参数\n\n```\nentry_with_update.py  [-h] [--listen [IP]] [--port PORT]\n                      [--disable-header-check [ORIGIN]]\n                      [--web-upload-size WEB_UPLOAD_SIZE]\n                      [--hf-mirror HF_MIRROR]\n      
                [--external-working-path PATH [PATH ...]]\n                      [--output-path OUTPUT_PATH]\n                      [--temp-path TEMP_PATH] [--cache-path CACHE_PATH]\n                      [--in-browser] [--disable-in-browser]\n                      [--gpu-device-id DEVICE_ID]\n                      [--async-cuda-allocation | --disable-async-cuda-allocation]\n                      [--disable-attention-upcast]\n                      [--all-in-fp32 | --all-in-fp16]\n                      [--unet-in-bf16 | --unet-in-fp16 | --unet-in-fp8-e4m3fn | --unet-in-fp8-e5m2]\n                      [--vae-in-fp16 | --vae-in-fp32 | --vae-in-bf16]\n                      [--vae-in-cpu]\n                      [--clip-in-fp8-e4m3fn | --clip-in-fp8-e5m2 | --clip-in-fp16 | --clip-in-fp32]\n                      [--directml [DIRECTML_DEVICE]]\n                      [--disable-ipex-hijack]\n                      [--preview-option [none,auto,fast,taesd]]\n                      [--attention-split | --attention-quad | --attention-pytorch]\n                      [--disable-xformers]\n                      [--always-gpu | --always-high-vram | --always-normal-vram | --always-low-vram | --always-no-vram | --always-cpu [CPU_NUM_THREADS]]\n                      [--always-offload-from-vram]\n                      [--pytorch-deterministic] [--disable-server-log]\n                      [--debug-mode] [--is-windows-embedded-python]\n                      [--disable-server-info] [--multi-user] [--share]\n                      [--preset PRESET] [--disable-preset-selection]\n                      [--language LANGUAGE]\n                      [--disable-offload-from-vram] [--theme THEME]\n                      [--disable-image-log] [--disable-analytics]\n                      [--disable-metadata] [--disable-preset-download]\n                      [--disable-enhance-output-sorting]\n                      [--enable-auto-describe-image]\n                      [--always-download-new-model]\n   
                   [--rebuild-hash-cache [CPU_NUM_THREADS]]\n```\n\n## 内联提示词功能\n\n### 通配符\n\n示例提示词：`__color__ flower`\n\n正负向提示词均会处理。\n\n从预定义选项列表中随机选择一个通配符，此处为 `wildcards\u002Fcolor.txt` 文件。\n通配符将被替换为一个随机颜色（随机性基于种子）。\n你也可以禁用随机性，并通过在开发者调试模式（Developer Debug Mode）中勾选“按顺序读取通配符”（Read wildcards in order）复选框，从上到下处理通配符文件。\n\n通配符可以嵌套和组合，并且可以在同一个提示词中使用多个通配符（示例见 `wildcards\u002Fcolor_flower.txt`）。\n\n### 数组处理\n\n示例提示词：`[[red, green, blue]] flower`\n\n仅处理正向提示词。\n\n从左到右处理数组，为数组中的每个元素生成一张单独的图像。在此情况下将生成 3 张图像，每种颜色一张。\n将图像数量增加到 3 以生成所有 3 种变体。\n\n数组不能嵌套，但可以在同一个提示词中使用多个数组。\n支持将内联 LoRA 作为数组元素！\n\n### 内联 LoRA (低秩自适应)\n\n示例提示词：`flower \u003Clora:sunflowers:1.2>`\n\n仅处理正向提示词。\n\n将 LoRA 应用于提示词。LoRA 文件必须位于 `models\u002Floras` 目录中。\n\n## 高级功能\n\n[点击此处浏览高级功能。](https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fdiscussions\u002F117)\n\n## 分支项目\n\n以下是 Fooocus 的一些分支项目：\n\n| Fooocus 的分支项目 |\n| - |\n| [fenneishi\u002FFooocus-Control](https:\u002F\u002Fgithub.com\u002Ffenneishi\u002FFooocus-Control) \u003C\u002Fbr>[runew0lf\u002FRuinedFooocus](https:\u002F\u002Fgithub.com\u002Frunew0lf\u002FRuinedFooocus) \u003C\u002Fbr> [MoonRide303\u002FFooocus-MRE](https:\u002F\u002Fgithub.com\u002FMoonRide303\u002FFooocus-MRE) \u003C\u002Fbr> [mashb1t\u002FFooocus](https:\u002F\u002Fgithub.com\u002Fmashb1t\u002FFooocus) \u003C\u002Fbr> 以及更多... 
|\n\n## 致谢\n\n非常感谢 [twri](https:\u002F\u002Fgithub.com\u002Ftwri)、[3Diva](https:\u002F\u002Fgithub.com\u002F3Diva) 和 [Marc K3nt3L](https:\u002F\u002Fgithub.com\u002FK3nt3L) 创建了 Fooocus 中可用的额外 SDXL 风格。 \n\n该项目始于 [Stable Diffusion WebUI](https:\u002F\u002Fgithub.com\u002FAUTOMATIC1111\u002Fstable-diffusion-webui) 和 [ComfyUI](https:\u002F\u002Fgithub.com\u002Fcomfyanonymous\u002FComfyUI) 代码库的组合。\n\n此外，感谢 [daswer123](https:\u002F\u002Fgithub.com\u002Fdaswer123) 贡献了画布缩放（Canvas Zoom）功能！\n\n## 更新日志\n\n日志位于 [此处](update_log.md)。\n\n## 本地化\u002F翻译\u002F国际化 (I18N)\n\n你可以将 json 文件放入 `language` 文件夹来翻译用户界面。\n\n例如，以下是 `Fooocus\u002Flanguage\u002Fexample.json` 的内容：\n\n```json\n{\n  \"Generate\": \"生成\",\n  \"Input Image\": \"入力画像\",\n  \"Advanced\": \"고급\",\n  \"SAI 3D Model\": \"SAI 3D Modèle\"\n}\n```\n\n如果你添加 `--language example` 参数，Fooocus 将读取 `Fooocus\u002Flanguage\u002Fexample.json` 来翻译 UI。\n\n例如，你可以编辑 Windows `run.bat` 的末尾行如下\n\n    .\\python_embeded\\python.exe -s Fooocus\\entry_with_update.py --language example\n\n或者 `run_anime.bat` 如下\n\n    .\\python_embeded\\python.exe -s Fooocus\\entry_with_update.py --language example --preset anime\n\n或者 `run_realistic.bat` 如下\n\n    .\\python_embeded\\python.exe -s Fooocus\\entry_with_update.py --language example --preset realistic\n\n对于实际翻译，你可以创建自己的文件如 `Fooocus\u002Flanguage\u002Fjp.json` 或 `Fooocus\u002Flanguage\u002Fcn.json`，然后使用标志 `--language jp` 或 `--language cn`。显然，这些文件目前不存在。**我们需要你的帮助来创建这些文件！**\n\n注意，如果没有给出 `--language` 且同时存在 `Fooocus\u002Flanguage\u002Fdefault.json`，Fooocus 将始终加载 `Fooocus\u002Flanguage\u002Fdefault.json` 进行翻译。默认情况下，文件 `Fooocus\u002Flanguage\u002Fdefault.json` 不存在。","# Fooocus 快速上手指南\n\nFooocus 是一款基于 **Stable Diffusion XL** 架构的离线、开源图像生成软件。它简化了安装与操作流程，无需复杂的参数调整，专注于提示词（Prompt）与图像输入，提供类似 Midjourney 的高质量出图体验。\n\n> **⚠️ 重要安全提示**：Google 搜索中可能存在大量虚假网站。**Fooocus 唯一的官方来源是 GitHub**。请勿从 `fooocus.com`、`fooocus.ai` 等非官方域名下载软件。\n\n---\n\n## 环境准备\n\n### 系统要求\n- **操作系统**：Windows \u002F Linux\n- **显卡 (GPU)**：Nvidia 显卡，显存至少 
**4GB** (推荐 6GB+)\n- **内存 (RAM)**：系统内存至少 **8GB**\n- **磁盘空间**：建议每个驱动器至少保留 **40GB** 可用空间（用于缓存和模型）\n- **网络**：首次运行需自动下载模型，请确保网络连接稳定\n\n### 前置依赖\n- **Windows**：无需额外配置，解压即可运行。\n- **Linux**：需安装 **Python 3.10**，建议使用 Anaconda 或 Python venv 管理环境。\n\n---\n\n## 安装步骤\n\n### 方法一：Windows (推荐)\n1. 访问官方发布页下载压缩包：[>>> 点击下载 Fooocus \u003C\u003C\u003C](https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Freleases\u002Fdownload\u002Fv2.5.0\u002FFooocus_win64_2-5-0.7z)\n2. 解压下载的 `.7z` 文件到任意目录。\n3. 双击运行文件夹内的 **`run.bat`**。\n4. 首次启动时，程序将自动下载默认模型至 `Fooocus\\models\\checkpoints` 目录。\n\n### 方法二：Linux (使用 Anaconda)\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus.git\ncd Fooocus\nconda env create -f environment.yaml\nconda activate fooocus\npip install -r requirements_versions.txt\n```\n\n### 方法三：Linux (使用 Python Venv)\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus.git\ncd Fooocus\npython3 -m venv fooocus_env\nsource fooocus_env\u002Fbin\u002Factivate\npip install -r requirements_versions.txt\n```\n\n> **注**：Linux 下依赖安装完成后，运行 `python entry_with_update.py` 即可启动。所有方法首次运行时均会自动下载所需模型。若遇到 `RuntimeError: CPUAllocator`，请检查并确保已开启虚拟交换空间（Virtual Swap）。\n\n---\n\n## 基本使用\n\n1. **启动界面**：运行脚本后，浏览器将自动打开本地地址（通常为 `http:\u002F\u002F127.0.0.1:7860`）。\n2. **选择预设**：在界面顶部可选择不同模式，如：\n   - `Default`：通用模式\n   - `Anime`：动漫风格 (`run_anime.bat`)\n   - `Realistic`：写实风格 (`run_realistic.bat`)\n3. **生成图像**：\n   - 在 **Prompt** 输入框中输入描述性文字（建议使用英文描述以获得更稳定的效果）。\n   - 点击 **Generate** 按钮开始生成。\n4. 
**进阶功能**：\n   - **图像增强**：输入图片 -> Upscale or Variation。\n   - **局部重绘**：输入图片 -> Inpaint or Outpaint。\n   - **换脸**：输入图片 -> Image Prompt -> Advanced -> FaceSwap。\n\n完成上述步骤后，您即可开始使用 Fooocus 进行创作。","某电商运营人员需要在周五前完成双十一活动的多套主视觉海报设计，时间紧迫且缺乏专业绘图技能。\n\n### 没有 Fooocus 时\n- 本地部署 Stable Diffusion 环境极其繁琐，经常因版本冲突导致无法启动，浪费半天时间\n- 提示词编写门槛高，稍微简单的描述就会生成扭曲的人脸或奇怪的物体，难以控制构图\n- 图片放大和局部修改需要额外加载 ControlNet 等插件，操作逻辑混乱且容易出错\n- 为了追求画质必须手动调整数十个参数，消耗大量精力在技术层面而非创意构思上\n\n### 使用 Fooocus 后\n- 解压即用，无需配置 Python 环境，几分钟内即可开始生成第一张符合预期的图像\n- 内置 GPT-2 提示词优化引擎，输入“咖啡杯”也能自动补全光影细节获得精美效果\n- 集成高清修复与变体功能，直接在原图上操作即可完成四倍放大或风格微调，流程顺畅\n- 默认采用 SDXL 架构，无需调试采样器即可获得媲美商业级 Midjourney 的画质表现\n\nFooocus 通过极简的操作流程，让非技术人员也能轻松获得专业级的图像生成能力。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Flllyasviel_Fooocus_219ddcee.png","lllyasviel",null,"https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Flllyasviel_92d612b9.jpg","Lvmin Zhang (Lyumin Zhang)\r\n\r\n","https:\u002F\u002Fgithub.com\u002Flllyasviel",[81,85,89,93,97,101],{"name":82,"color":83,"percentage":84},"Python","#3572A5",96.7,{"name":86,"color":87,"percentage":88},"JavaScript","#f1e05a",2.8,{"name":90,"color":91,"percentage":92},"CSS","#663399",0.4,{"name":94,"color":95,"percentage":96},"Dockerfile","#384d54",0.1,{"name":98,"color":99,"percentage":100},"Jupyter Notebook","#DA5B0B",0,{"name":102,"color":103,"percentage":100},"Shell","#89e051",48018,7838,"2026-04-05T21:56:27","GPL-3.0","Windows, Linux","需要 NVIDIA GPU，最低 4GB 显存","最低 8GB，推荐 16GB",{"notes":112,"python":113,"dependencies":114},"1. 仅支持 Windows 和 Linux 系统；2. 必须使用 NVIDIA 显卡，最低 4GB 显存；3. 首次运行会自动下载模型文件；4. 低显存设备需确保开启虚拟内存交换；5. 警惕网络上的假冒官网；6. 
推荐使用 Nvidia 531 版驱动","3.10",[115],"未说明",[14],82,"2026-03-27T02:49:30.150509","2026-04-06T08:40:07.451970",[121,126,130,135,140,144],{"id":122,"question_zh":123,"answer_zh":124,"source_url":125},3488,"Fooocus 在 macOS 上是否支持 MPS 后端？","MPS 后端目前处于 Beta 支持状态，但并非所有函数都已优化。部分 PyTorch 算子（如 aten::std_mean.correction_）尚未获得 Apple 的支持。建议查阅官方 README 中关于 MacOS \u002F MPS 后端的安装说明。","https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fissues\u002F690",{"id":127,"question_zh":128,"answer_zh":129,"source_url":125},3489,"遇到 `Device mps:0 does not support the torch.fft functions` 错误怎么办？","这是由于 Apple 尚未支持特定的 PyTorch 算子导致的。`--disable-offload-from-vram` 参数仅能缩短模型加载时间，对此类算子不支持的问题无效。建议等待 Apple 更新或查阅相关技术 FAQ。",{"id":131,"question_zh":132,"answer_zh":133,"source_url":134},3490,"AMD 显卡（如 6700XT）提示显存不足怎么办？","这通常是旧版本的已知问题。请尝试升级到 Fooocus 的最新版本，许多用户反馈新版已修复此问题。如果问题依旧，可尝试回退到旧版本作为临时解决方案。","https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fissues\u002F1278",{"id":136,"question_zh":137,"answer_zh":138,"source_url":139},3491,"Windows AMD 显卡用户如何正确配置运行环境？","需要编辑 run.bat 文件。首先卸载旧的 torch 相关包，然后安装 torch-directml，最后使用 --directml 参数启动应用。具体命令包括：pip uninstall torch torchvision torchaudio torchtext functorch xformers -y，随后 pip install torch-directml。","https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fissues\u002F1078",{"id":141,"question_zh":142,"answer_zh":143,"source_url":139},3492,"遇到 `Could not allocate tensor` 显存分配错误如何修改配置？","可以通过修改代码文件来增加内存限制。在 `Fooocus\\ldm_patched\\modules` 目录下，找到第 95 行左右的 `mem_total = 1024 * 1024 * 1024`，将其修改为 `mem_total = 8192 * 1024 * 1024`（根据实际显存大小调整数值）。",{"id":145,"question_zh":146,"answer_zh":147,"source_url":139},3493,"输入提示词后程序开始加载但没有生成图片，且无报错如何解决？","有时这是临时的运行时错误。有用户反馈重启电脑后可以解决问题。此外，检查是否启用了正确的 DirectML 模式，并确保没有占用显存的后台进程。",[149,154,159,164,169,174,179,184,189,194,199,204,209,214,219,224,229,234,239,244],{"id":150,"version":151,"summary_zh":152,"released_at":153},112796,"v2.5.5","## What's Changed\r\n* fix: resolve colab unsupported image type 
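关于上文 FAQ 中 `mem_total` 的修改：该数值以字节为单位，1024³ 字节即 1GB，可按下面的方式根据实际显存大小换算（示意代码，修改位置以该 FAQ 所述文件为准）：

```python
def mem_total_for(vram_gb):
    """按显存大小（GB）换算 mem_total 应填写的字节数。"""
    return vram_gb * 1024 * 1024 * 1024

# FAQ 中的默认值 1024 * 1024 * 1024 即约 1GB，
# 改为 8192 * 1024 * 1024 即 8GB（8192MB）。
assert mem_total_for(1) == 1024 * 1024 * 1024
assert mem_total_for(8) == 8192 * 1024 * 1024
```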
issue by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F3506\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fcompare\u002Fv2.5.4...v2.5.5","2024-08-12T06:12:31",{"id":155,"version":156,"summary_zh":157,"released_at":158},112797,"v2.5.4","## What's Changed\r\n* fix: add handling for default \"None\" value of default_ip_image_* by @mashb1t in https:\u002F\u002Fgithub.com\u002Fmashb1t\u002FFooocus\u002Fpull\u002F70\r\n* fix: correctly validate default_inpaint_mask_sam_model by @mashb1t in https:\u002F\u002Fgithub.com\u002Fmashb1t\u002FFooocus\u002Fpull\u002F71\r\n* fix: check all dirs instead of only the first one by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F3495\r\n* fix: yield enhance_input_image to correctly preview debug masks by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F3497\r\n* docs: change code ownership from mashb1t to lllyasviel, see https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fdiscussions\u002F3504\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fcompare\u002Fv2.5.3...v2.5.4","2024-08-11T16:51:50",{"id":160,"version":161,"summary_zh":162,"released_at":163},112798,"v2.5.3","## What's Changed\r\n* fix: use weights_only for loading by @kit1980 in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F3427\r\n* fix\u002Ffeat: add checkbox and config to disable updating selected styles when describing an image by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F3430\r\n\r\n## New Contributors\r\n* @kit1980 made their first contribution in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F3427\r\n\r\n**Full Changelog**: 
https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fcompare\u002Fv2.5.2...v2.5.3","2024-08-03T13:24:08",{"id":165,"version":166,"summary_zh":167,"released_at":168},112799,"v2.5.2","## What's Changed\r\n* fix: add positive prompt if styles don't have a prompt placeholder by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F3372\r\n* docs: update numbering of basic debug procedure in issue template by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F3376\r\n* feat: extend config settings for input image by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F3382\r\n* feat: count image count index from 1 by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F3383\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fcompare\u002Fv2.5.1...v2.5.2","2024-07-27T21:30:40",{"id":170,"version":171,"summary_zh":172,"released_at":173},112800,"v2.5.1","## What's Changed\r\n* docs: update download URL in readme\r\n* fix: use type pil for image upload to prevent conversion to png through temp file by @mashb1t in https:\u002F\u002Fgithub.com\u002Fmashb1t\u002FFooocus\u002Fpull\u002F58\r\n* fix: allow reading of metadata from jpeg, jpg and webp again by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F3301\r\n* fix: correctly debug preprocessor again by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F3332\r\n* docs: update attributes and add add inline prompt features section to readme by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F3333\r\n* feat: add checkbox, config and handling for saving only the final enhanced image by @mashb1t in https:\u002F\u002Fgithub.com\u002Fmashb1t\u002FFooocus\u002Fpull\u002F61\r\n* feat: sort enhance images by @mashb1t in 
https:\u002F\u002Fgithub.com\u002Fmashb1t\u002FFooocus\u002Fpull\u002F62\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fcompare\u002Fv2.5.0...v2.5.1","2024-07-25T14:05:39",{"id":175,"version":176,"summary_zh":177,"released_at":178},112801,"v2.5.0","## How to update\r\n\r\nThis version includes various package updates. If the auto-update doesn't work:\r\n1. Open a terminal in the Fooocus folder (go to the location of config.txt, open terminal by clicking in the address bar of the file explorer, type `cmd` and hit enter) and run `git pull`. If you do not have git installed, skip to step 2.\r\n2. Update packages\r\n   - Windows (installation through zip file): run `..\\python_embeded\\python.exe -m pip install -r .\\requirements_versions.txt` (Windows using embedded python, installation method zip file) or download Fooocus again (zip file attached to this release)\r\n   - other: manually update the packages using `python.exe -m pip install -r requirements_versions.txt` or use the docker image\r\n\r\n## What's Changed\r\n\r\n* feat: update python dependencies and add segment_anything\r\n* sync enhance feature and code refactoring from [mashb1t\u002FFooocus](https:\u002F\u002Fgithub.com\u002Fmashb1t\u002FFooocus\u002Ftree\u002Fmain)\r\n* feat: add enhance feature, which offers easy image refinement steps (similar to [adetailer](https:\u002F\u002Fgithub.com\u002FBing-su\u002Fadetailer), but based on dynamic image detection instead of specific mask detection models). 
See https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fdiscussions\u002F3281.\r\n* feat: improve GroundingDINO and SAM image masking\r\n* refactor: rewrite async worker code, make code much more reusable to allow iterations and improve reusability\r\n* refactor: move checkboxes Enable Mask Upload and Invert Mask When Generating from Developer Debug Mode to Inpaint Or Outpaint\r\n* refactor: rename checkbox Enable Mask Upload to Enable Advanced Masking Features\r\n* fix: get upscale model filepath by calling downloading_upscale_model() to ensure the model exists\r\n* i18n: rename tab titles and translations from singular to plural\r\n* i18n: rename document to documentation\r\n* feat: update default models to latest versions\r\n  * animaPencilXL_v400 => animaPencilXL_v500 (https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2943)\r\n  * DreamShaperXL_Turbo_dpmppSdeKarras => DreamShaperXL_Turbo_v2_1\r\n  * SDXL_FILM_PHOTOGRAPHY_STYLE_BetaV0.4 => SDXL_FILM_PHOTOGRAPHY_STYLE_V1\r\n*  feat: add preset for pony_v6 (using ponyDiffusionV6XL, discussion in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fdiscussions\u002F3217)\r\n* feat: add style `Fooocus Pony` (discussion in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fdiscussions\u002F3217)\r\n* feat: remove `by wlop` from style `Fooocus Masterpiece` as it has been causing unintended watermarks\r\n* feat: add restart sampler ([paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.14878))\r\n* feat: add config option for default_inpaint_engine_version, sets inpaint engine for pony_v6 and playground_v2.5 to None for improved results (incompatible with inpaint engine, discussion in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fdiscussions\u002F3217)\r\n* feat: add image editor functionality to mask upload (same as for inpaint, now correctly resizes and allows more detailed mask creation, discussion in 
https:\u002F\u002Fgithub.com\u002Fmashb1t\u002FFooocus\u002Fdiscussions\u002F44)\r\n* feat: add persistent model cache for metadata. Use `--rebuild-hash-cache` to manually rebuild the cache for all non-cached hashes (optional, cache will otherwise be lazy-loaded. Remove `--rebuild-hash-cache` after executing once)\r\n* refactor: rename `--enable-describe-uov-image` to `--enable-auto-describe-image` to better reflect its purpose (now also works for enhance image upload)\r\n* feat: adjust playground_v2.5 preset by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F3136\r\n* fix: correctly identify and remove performance LoRA by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F3150\r\n* apply performance from metadata by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F3153\r\n* hotfix: add missing method in performance enum by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F3154\r\n* feat: add vae to possible preset keys by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F3177\r\n* feat: add restart sampler by @licyk in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F3219\r\n* build(deps): bump docker\u002Fbuild-push-action from 5 to 6 by @dependabot in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F3223\r\n\r\n## New Contributors\r\n* @licyk made their first contribution in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F3219\r\n* @dependabot made their first contribution in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F3223\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fcompare\u002Fv2.4.3...v2.5.0","2024-07-17T10:31:17",{"id":180,"version":181,"summary_zh":182,"released_at":183},112802,"v2.5.0-rc1","## What's Changed\r\n\r\n* feat: update python 
dependencies and add segment_anything\r\n* sync enhance feature and code refactoring from [mashb1t\u002FFooocus](https:\u002F\u002Fgithub.com\u002Fmashb1t\u002FFooocus\u002Ftree\u002Fmain)\r\n* feat: add enhance feature, which offers easy image refinement steps (similar to [adetailer](https:\u002F\u002Fgithub.com\u002FBing-su\u002Fadetailer), but based on dynamic image detection instead of specific mask detection models). See https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fdiscussions\u002F3281.\r\n* feat: improve GroundingDINO and SAM image masking\r\n* refactor: rewrite async worker code, make code much more reusable to allow iterations and improve reusability\r\n* refactor: move checkboxes Enable Mask Upload and Invert Mask When Generating from Developer Debug Mode to Inpaint Or Outpaint\r\n* refactor: rename checkbox Enable Mask Upload to Enable Advanced Masking Features\r\n* fix: get upscale model filepath by calling downloading_upscale_model() to ensure the model exists\r\n* i18n: rename tab titles and translations from singular to plural\r\n* i18n: rename document to documentation\r\n* feat: update default models to latest versions\r\n  * animaPencilXL_v400 => animaPencilXL_v500 (https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2943)\r\n  * DreamShaperXL_Turbo_dpmppSdeKarras => DreamShaperXL_Turbo_v2_1\r\n  * SDXL_FILM_PHOTOGRAPHY_STYLE_BetaV0.4 => SDXL_FILM_PHOTOGRAPHY_STYLE_V1\r\n*  feat: add preset for pony_v6 (using ponyDiffusionV6XL, discussion in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fdiscussions\u002F3217)\r\n* feat: add style `Fooocus Pony` (discussion in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fdiscussions\u002F3217)\r\n* feat: remove `by wlop` from style `Fooocus Masterpiece` as it has been causing unintended watermarks\r\n* feat: add restart sampler ([paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.14878))\r\n* feat: add config option for 
default_inpaint_engine_version, sets inpaint engine for pony_v6 and playground_v2.5 to None for improved results (incompatible with inpaint engine, discussion in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fdiscussions\u002F3217)\r\n* feat: add image editor functionality to mask upload (same as for inpaint, now correctly resizes and allows more detailed mask creation, discussion in https:\u002F\u002Fgithub.com\u002Fmashb1t\u002FFooocus\u002Fdiscussions\u002F44)\r\n* feat: add persistent model cache for metadata. Use `--rebuild-hash-cache` to manually rebuild the cache for all non-cached hashes (optional, cache will otherwise be lazy-loaded. Remove `--rebuild-hash-cache` after executing once)\r\n* refactor: rename `--enable-describe-uov-image` to `--enable-auto-describe-image` to better reflect its purpose (now also works for enhance image upload)\r\n* feat: adjust playground_v2.5 preset by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F3136\r\n* fix: correctly identify and remove performance LoRA by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F3150\r\n* apply performance from metadata by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F3153\r\n* hotfix: add missing method in performance enum by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F3154\r\n* feat: add vae to possible preset keys by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F3177\r\n* feat: add restart sampler by @licyk in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F3219\r\n* build(deps): bump docker\u002Fbuild-push-action from 5 to 6 by @dependabot in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F3223\r\n\r\n## New Contributors\r\n* @licyk made their first contribution in 
https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F3219\r\n* @dependabot made their first contribution in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F3223\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fcompare\u002Fv2.4.3...v2.5.0-rc1","2024-07-14T19:46:33",{"id":185,"version":186,"summary_zh":187,"released_at":188},112803,"v2.4.3","## What's Changed\r\n* fix: correctly set alphas_cumprod by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F3106\r\n* feat: parse env var strings to expected config value types by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F3107\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fcompare\u002Fv2.4.2...v2.4.3","2024-06-06T17:37:20",{"id":190,"version":191,"summary_zh":192,"released_at":193},112804,"v2.4.2","## What's Changed\r\n* feat: add support and preset for playground v2.5 (only works with performance Quality or Speed, use with scheduler edm_playground_v2) by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F3073\r\n* feat: make textboxes (incl. 
positive prompt) resizable by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F3074\r\n* fix: use default vae name instead of None on file refresh by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F3045\r\n* fix: correctly use translation and dynamic label for aspect ratios by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F3046\r\n* feat: optimize performance lora filtering in metadata by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F3048\r\n* feat: rework intermediate image display for restricted performances by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F3050\r\n* fix: turbo scheduler loading issue by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F3065\r\n* fix: chown files directly at copy in Dockerfile by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F3066\r\n* feat: sync cmd args in readme by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F3075\r\n* fix: correct sampling for tcd scheduler when gamma is 0 by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F3093\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fcompare\u002Fv2.4.1...v2.4.2","2024-06-05T19:56:33",{"id":195,"version":196,"summary_zh":197,"released_at":198},112805,"v2.4.1","## What's Changed\r\n* fix: adjust clip skip default value from 1 to 2 by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F3011\r\n* fix: add type check for undefined, use fallback when no translation for aspect ratios was given by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F3025\r\n* feat: build docker image tagged \"edge\" on push to main branch by @mashb1t in 
https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F3026\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fcompare\u002Fv2.4.0...v2.4.1","2024-05-27T23:11:31",{"id":200,"version":201,"summary_zh":202,"released_at":203},112806,"v2.4.0","## What's Changed\r\n* feat: support download huggingface files from a mirror site by @chenxinlong in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2637\r\n* chore: update interposer from v3.1 to v4.0 by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2717\r\n* feat: add button to reconnect UI without having to reload the page by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2727\r\n* feat: add optional model VAE select by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2867\r\n* feat: choose random style by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2855\r\n* feat: update anime from animaPencilXL_v100 to animaPencilXL_v310 by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2454\r\n* refactor: rename label for reconnect button by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2893\r\n* feat: add full raw prompt to history log by @docppp in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F1920\r\n* fix: use correct border radius css property by @khanvilkarvishvesh in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2845\r\n* fix: do not close meta tag in HTML header by @e52fa787 in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2740\r\n* feat: automatically describe image on uov image upload by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F1938\r\n* add nsfw image censoring via config and checkbox by @mashb1t in 
https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F958\r\n* feat: add align your steps scheduler by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2905\r\n* feat: add support for lora inline prompt references by @cantor-set in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2323\r\n* feat: add tcd sampler and discrete distilled tcd scheduler based on sgm_uniform (same as lcm) by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2907\r\n* feat: add performance hyper-sd based on 4step LoRA by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2812\r\n* fix: remove leftover code for hyper-sd testing by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2959\r\n* feat: optimize model management of nsfw image censoring by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2960\r\n* feat: progress bar improvements by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2962\r\n* feat: inline lora optimisations by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2967\r\n* feat: change code owner from @lllyasviel to @mashb1t by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2948\r\n* feat: only use valid inline loras, add subfolder support by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2968\r\n* feature: Reads the size and ratio of the image and gives the recommended size by @xhoxye in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2971\r\n* feat: build and push container image for ghcr.io, update docker.md, and other related fixes  by @xynydev in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2805. 
\r\nSee [available images](https:\u002F\u002Fghcr.io\u002Flllyasviel\u002Ffooocus)\r\n* feat: adjust line ending default config by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2991\r\n* feat: add translation for image size describe by @mashb1t and @xhoxye in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2992\r\n* feat: read value 'CFG Mimicking from TSNR' from presets by @Alexdnk in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2990\r\n* feat: add inpaint brush color picker by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2997\r\n* feat: remove labels from most of the image input fields by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2998\r\n* feat: add clip skip handling by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2999\r\n* feat: make ui settings more compact by @Alexdnk and @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2590\r\n\r\n## New Contributors\r\n* @chenxinlong made their first contribution in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2637\r\n* @docppp made their first contribution in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F1920\r\n* @khanvilkarvishvesh made their first contribution in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2845\r\n* @e52fa787 made their first contribution in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2740\r\n* @cantor-set made their first contribution in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2323\r\n* @xynydev made their first contribution in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2805\r\n* @Alexdnk made their first contribution in 
https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2990\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fcompare\u002F2.3.1...v2.4.0\r\n\r\n**Join the discussion**: https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fdiscussions\u002F3003","2024-05-26T17:24:35",{"id":205,"version":206,"summary_zh":207,"released_at":208},112807,"2.4.0-rc2","## What's Changed\r\n* fix: remove leftover code for hyper-sd testing by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2959\r\n* feat: optimize model management of nsfw image censoring by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2960\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fcompare\u002F2.4.0-rc1...2.4.0-rc2\r\n\r\nDiscussion in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fdiscussions\u002F2955","2024-05-19T16:13:42",{"id":210,"version":211,"summary_zh":212,"released_at":213},112808,"2.4.0-rc1","## What's Changed\r\n* feat: support download huggingface files from a  mirror site by @chenxinlong in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2637\r\n* chore: update interposer from v3.1 to v4.0 by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2717\r\n* feat: add button to reconnect UI without having to reload the page by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2727\r\n* feat: add optional model VAE select by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2867\r\n* feat: choose random style by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2855\r\n* feat: update anime from animaPencilXL_v100 to animaPencilXL_v310 by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2454\r\n* refactor: 
rename label for reconnect button by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2893\r\n* feat: add full raw prompt to history log by @docppp in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F1920\r\n* fix: use correct border radius css property by @khanvilkarvishvesh in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2845\r\n* fix: do not close meta tag in HTML header by @e52fa787 in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2740\r\n* feat: automatically describe image on uov image upload by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F1938\r\n* add nsfw image censoring via config and checkbox by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F958\r\n* feat: add align your steps scheduler by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2905\r\n* feat: add support for lora inline prompt references by @cantor-set in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2323\r\n* feat: add tcd sampler and discrete distilled tcd scheduler based on sgm_uniform (same as lcm) by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2907\r\n* feat: add performance hyper-sd based on 4step LoRA by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2812\r\n\r\n## New Contributors\r\n* @chenxinlong made their first contribution in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2637\r\n* @docppp made their first contribution in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F1920\r\n* @khanvilkarvishvesh made their first contribution in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2845\r\n* @e52fa787 made their first contribution in 
https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2740\r\n* @cantor-set made their first contribution in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2323\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fcompare\u002F2.3.1...2.4.0-rc1","2024-05-19T11:30:57",{"id":215,"version":216,"summary_zh":217,"released_at":218},112809,"2.3.1","## What's Changed\r\n* fix: remove positive prompt from anime prefix by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2571\r\n* fix: add enabled value to LoRA when setting default_max_lora_number by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2576\r\n* fix: correctly set preset config and loras in meta parser by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2588\r\n* fix: correctly load image number from preset \u002F do not reset to 1 anymore by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2611\r\n* fix: use correct base dimensions for outpaint mask padding by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2612\r\n* fix: add compatibility for LoRAs in a1111 metadata scheme by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2615\r\n\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fcompare\u002F2.3.0...2.3.1","2024-03-23T15:57:41",{"id":220,"version":221,"summary_zh":222,"released_at":223},112810,"2.3.0","## What's Changed\r\n* fix: parse width and height as int when applying metadata by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2452\r\n* fix: do not attempt to remove non-existing image grid file by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2456\r\n* feat: add troubleshooting guide to bug 
report template again by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2489\r\n* feat: use jpeg instead of jpg, use enums instead of strings by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2453\r\n* feat: add performance lightning with 4 step LoRA by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2415\r\n* fix: change synthetic refiner switch from 0.5 to 0.8 by @xhoxye in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2165\r\n* feat: add config for temp path and temp path cleanup on launch by @midareashi and @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F1992\r\n* feat: scan wildcard subdirectories by @Cruxial0 and @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2466\r\n* fix: use correct method call for interrupt_current_processing by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2506\r\n* feat: read wildcards in order (wildcard enhancement: read in sequential order) by @xhoxye and @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F1761\r\n* feat: use scrollable 2 column layout for styles by @hswlab in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F1883\r\n* feat: allow to add disabled LoRAs in config by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2507\r\n* fix: prioritize VRAM over RAM in Colab, preventing out of memory issues by @Peppe289 and @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F1710\r\n* fix: update xformers to 0.0.23 by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2517\r\n* feat: update xformers to 0.0.23 in Dockerfile by @josephrocca in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2519\r\n* fix: parse seed as string to display 
correctly in metadata preview by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2536\r\n* feat: allow user add custom preset without block `git pull` by @Zxilly in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2520\r\n* feat: add preset selection to Gradio UI (session based) by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F1570\r\n* fix: add error output for unsupported images by @shaye059 in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2537\r\n* feat: improve anime preset by adding style Fooocus Semi Realistic by @DavidDragonsage in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2492\r\n* feat: add backwards compatibility for presets without disable\u002Fenable LoRA boolean by @mashb1t\r\n\r\n## New Contributors\r\n* @midareashi made their first contribution in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F1992\r\n* @Cruxial0 made their first contribution in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2466\r\n* @hswlab made their first contribution in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F1883\r\n* @Peppe289 made their first contribution in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F1710\r\n* @josephrocca made their first contribution in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2519\r\n* @Zxilly made their first contribution in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2520\r\n* @shaye059 made their first contribution in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2537\r\n* @DavidDragonsage made their first contribution in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2492\r\n\r\n**Full Changelog**: 
https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fcompare\u002F2.2.1...2.3.0","2024-03-18T17:37:26",{"id":225,"version":226,"summary_zh":227,"released_at":228},112811,"2.3.0-rc1","## What's Changed\r\n* fix: parse width and height as int when applying metadata by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2452\r\n* fix: do not attempt to remove non-existing image grid file by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2456\r\n* feat: add troubleshooting guide to bug report template again by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2489\r\n* feat: use jpeg instead of jpg, use enums instead of strings by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2453\r\n* feat: add performance lightning with 4 step LoRA by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2415\r\n* fix: change synthetic refiner switch from 0.5 to 0.8 by @xhoxye in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2165\r\n* feat: add config for temp path and temp path cleanup on launch by @midareashi and @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F1992\r\n* feat: scan wildcard subdirectories by @Cruxial0 and @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2466\r\n* fix: use correct method call for interrupt_current_processing by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2506\r\n* feat: read wildcards in order (wildcard enhancement: read in sequential order) by @xhoxye and @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F1761\r\n* feat: use scrollable 2 column layout for styles by @hswlab in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F1883\r\n* feat: allow to add disabled LoRAs in config by @mashb1t in 
https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2507\r\n* fix: prioritize VRAM over RAM in Colab, preventing out of memory issues by @Peppe289 and @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F1710\r\n* fix: update xformers to 0.0.23 by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2517\r\n* feat: update xformers to 0.0.23 in Dockerfile by @josephrocca in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2519\r\n* fix: parse seed as string to display correctly in metadata preview by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2536\r\n* feat: allow user add custom preset without block `git pull` by @Zxilly in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2520\r\n* feat: add preset selection to Gradio UI (session based) by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F1570\r\n* fix: add error output for unsupported images by @shaye059 in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2537\r\n* feat: improve anime preset by adding style Fooocus Semi Realistic by @DavidDragonsage in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2492\r\n\r\n## New Contributors\r\n* @midareashi made their first contribution in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F1992\r\n* @Cruxial0 made their first contribution in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2466\r\n* @hswlab made their first contribution in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F1883\r\n* @Peppe289 made their first contribution in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F1710\r\n* @josephrocca made their first contribution in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2519\r\n* 
@Zxilly made their first contribution in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2520\r\n* @shaye059 made their first contribution in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2537\r\n* @DavidDragonsage made their first contribution in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2492\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fcompare\u002F2.2.1...2.3.0-rc1","2024-03-16T14:34:57",{"id":230,"version":231,"summary_zh":232,"released_at":233},112812,"2.2.1","## What's Changed\r\n* fix: adjust parameters for upscale fast 2x by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2411\r\n* fix: add handling for filepaths to image grid by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2414\r\n* fix: add fallback value for default_max_lora_number when default_loras is empty by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2430\r\n* feat: add metadata flag and steps override to history log by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2425\r\n* fix: adjust width of lora weight for firefox by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2431\r\n* fix: add hint for png to metadata scheme selection by @eddyizm in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2434\r\n* fix: typo in wildcards\u002Fanimal.txt by @nbs in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2433\r\n* feat: match anything in array syntax, not only words and whitespace by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2438\r\n\r\n## New Contributors\r\n* @nbs made their first contribution in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2433\r\n\r\n**Full 
Changelog**: https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fcompare\u002F2.2.0...2.2.1","2024-03-04T10:38:15",{"id":235,"version":236,"summary_zh":237,"released_at":238},112813,"2.2.0","## What's Changed\r\n* fix: sort with casefold, case insensitive by @mashb1t\r\n* feat: add early return for prompt expansion when no new tokens should be added by @mashb1t\r\n* feat: ignore DS_Store by @charliewilco in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2313\r\n* feat: advanced params refactoring + prevent users from skipping\u002Fstopping other users tasks in queue by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F981\r\n* feat: add list of 100 most popular animals to wildcards by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F985\r\n* feat: add advanced parameter for disable_intermediate_results (progress_gallery) by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F1013\r\n* feat: add ability to load checkpoints and loras from multiple locations by @dooglewoogle in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F1256\r\n* feat: allow users to specify the number of threads when running on CPU by @maxim-saplin in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F1601\r\n* feat: improve bug report and feature request issue templates by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F1631\r\n* fix: correctly create directory for path_outputs if not existing by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F1668\r\n* fix: allow path_outputs to be outside of root dir by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2332\r\n* feat: add button to enable LoRAs by @MindOfMatter in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2210\r\n* feat: 
make lora number editable in config by @MindOfMatter in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2215\r\n* feat: make lora min max weight editable in config by @MindOfMatter in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2216\r\n* feat: add array support on main prompt by @flannerybh in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F1503\r\n* feat: use consistent file name in gradio by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F1932\r\n* feat: add metadata to images by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F1940\r\n* feat: add jpg and webp support, add exif data handling for metadata by @mashb1t and @eddyizm in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F1863\r\n* feat: add docker files by @whitehara and @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F1418\r\n* docs: fix typo in readme by @gteti in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2368\r\n\r\n## New Contributors\r\n* @charliewilco made their first contribution in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2313\r\n* @dooglewoogle made their first contribution in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F1256\r\n* @maxim-saplin made their first contribution in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F1601\r\n* @MindOfMatter made their first contribution in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2210\r\n* @flannerybh made their first contribution in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F1503\r\n* @whitehara made their first contribution in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F1418\r\n* @gteti made their first contribution in 
https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2368\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fcompare\u002F2.1.865...2.2.0","2024-03-02T15:31:52",{"id":240,"version":241,"summary_zh":242,"released_at":243},112814,"2.2.0-rc1","## What's Changed\r\n* fix: sort with casefold, case insensitive by @mashb1t\r\n* feat: add early return for prompt expansion when no new tokens should be added by @mashb1t\r\n* feat: ignore DS_Store by @charliewilco in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2313\r\n* feat: advanced params refactoring + prevent users from skipping\u002Fstopping other users tasks in queue by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F981\r\n* feat: add list of 100 most popular animals to wildcards by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F985\r\n* feat: add advanced parameter for disable_intermediate_results (progress_gallery) by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F1013\r\n* feat: add ability to load checkpoints and loras from multiple locations by @dooglewoogle in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F1256\r\n* feat: allow users to specify the number of threads when running on CPU by @maxim-saplin in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F1601\r\n* feat: improve bug report and feature request issue templates by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F1631\r\n* fix: correctly create directory for path_outputs if not existing by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F1668\r\n* fix: allow path_outputs to be outside of root dir by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2332\r\n* feat: add button to enable LoRAs by 
@MindOfMatter in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2210\r\n* feat: make lora number editable in config by @MindOfMatter in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2215\r\n* feat: make lora min max weight editable in config by @MindOfMatter in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2216\r\n* feat: add array support on main prompt by @flannerybh in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F1503\r\n* feat: use consistent file name in gradio by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F1932\r\n* feat: add metadata to images by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F1940\r\n* feat: add jpg and webp support, add exif data handling for metadata by @mashb1t and @eddyizm in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F1863\r\n* feat: add docker files by @whitehara and @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F1418\r\n\r\n## New Contributors\r\n* @charliewilco made their first contribution in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2313\r\n* @dooglewoogle made their first contribution in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F1256\r\n* @maxim-saplin made their first contribution in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F1601\r\n* @MindOfMatter made their first contribution in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2210\r\n* @flannerybh made their first contribution in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F1503\r\n* @whitehara made their first contribution in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F1418\r\n\r\n**Full Changelog**: 
https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fcompare\u002F2.1.865...2.2.0-rc1","2024-02-26T16:58:16",{"id":245,"version":246,"summary_zh":247,"released_at":248},112815,"2.1.865","## What's Changed\r\n* fix by @lllyasviel in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2069\r\n* chore: replace broken links by @justindhillon in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2217\r\n* fix: delay importing of modules.config by @rsl8 in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2195\r\n* fix: adjust mistakes in HTML generation by @V1sionVerse in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2187\r\n* feat: add auth to --listen and readme in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F2127\r\n* fix: prevents outdated history log link after midnight by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F1979\r\n* fix: do not overwrite $GRADIO_SERVER_PORT if it is already set by @rsl8 in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F1921\r\n* chore: remove unnecessary comments by @Cassini-chris in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F1905\r\n* fix: correctly sort files, display deepest dir level first by @mashb1t in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F1784\r\n* fix: replace regexp to support unicode chars by @PraveenKumarSridhar in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F1424\r\n* chore: replace custom lcm function with math.lcm by @PraveenKumarSridhar in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F1122\r\n* feat: add suffix ordinals by @hisk2323 in https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fpull\u002F845\r\n* chore: fix typos and adjust wording by @kyeno, @AlistairKeiller, @eltociear in #1521, #1644, 
#1691, #1772\r\n* fix: implement output path argument by @eddyizm in #2074\r\n* fix: correctly calculate refiner switch when overwrite_switch is > 0 by @xhoxye in #2165\r\n* docs: update version by @mashb1t in #2229\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FFooocus\u002Fcompare\u002F2.1.864...2.1.865","2024-02-11T14:44:29"]