[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-ltdrdata--ComfyUI-Impact-Pack":3,"tool-ltdrdata--ComfyUI-Impact-Pack":61},[4,18,26,36,45,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},8272,"opencode","anomalyco\u002Fopencode","OpenCode 是一款开源的 AI 编程助手（Coding Agent），旨在像一位智能搭档一样融入您的开发流程。它不仅仅是一个代码补全插件，而是一个能够理解项目上下文、自主规划任务并执行复杂编码操作的智能体。无论是生成全新功能、重构现有代码，还是排查难以定位的 Bug，OpenCode 
都能通过自然语言交互高效完成，显著减少开发者在重复性劳动和上下文切换上的时间消耗。\n\n这款工具专为软件开发者、工程师及技术研究人员设计，特别适合希望利用大模型能力来提升编码效率、加速原型开发或处理遗留代码维护的专业人群。其核心亮点在于完全开源的架构，这意味着用户可以审查代码逻辑、自定义行为策略，甚至私有化部署以保障数据安全，彻底打破了传统闭源 AI 助手的“黑盒”限制。\n\n在技术体验上，OpenCode 提供了灵活的终端界面（Terminal UI）和正在测试中的桌面应用程序，支持 macOS、Windows 及 Linux 全平台。它兼容多种包管理工具，安装便捷，并能无缝集成到现有的开发环境中。无论您是追求极致控制权的资深极客，还是渴望提升产出的独立开发者，OpenCode 都提供了一个透明、可信",144296,1,"2026-04-16T14:50:03",[13,35],"插件",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":42,"last_commit_at":43,"category_tags":44,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108322,2,"2026-04-10T11:39:34",[14,15,13],{"id":46,"name":47,"github_repo":48,"description_zh":49,"stars":50,"difficulty_score":42,"last_commit_at":51,"category_tags":52,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 
协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[35,13,15,14],{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":42,"last_commit_at":59,"category_tags":60,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[35,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":67,"readme_en":68,"readme_zh":69,"quickstart_zh":70,"use_case_zh":71,"hero_image_url":72,"owner_login":73,"owner_name":74,"owner_avatar_url":75,"owner_bio":76,"owner_company":77,"owner_location":76,"owner_email":76,"owner_twitter":76,"owner_website":76,"owner_url":78,"languages":79,"stars":96,"forks":97,"last_commit_at":98,"license":99,"difficulty_score":10,"env_os":100,"env_gpu":101,"env_ram":102,"env_deps":103,"category_tags":115,"github_topics":76,"view_count":42,"oss_zip_url":76,"oss_zip_packed_at":76,"status":17,"created_at":116,"updated_at":117,"faqs":118,"releases":148},8149,"ltdrdata\u002FComfyUI-Impact-Pack","ComfyUI-Impact-Pack","Custom nodes pack for ComfyUI This custom node helps to conveniently enhance images through Detector, Detailer, Upscaler, Pipe, and more.","ComfyUI-Impact-Pack 是专为 ComfyUI 设计的一套强大自定义节点扩展包，旨在简化并增强图像生成与修复的工作流。它通过集成检测器（Detector）、细节修复器（Detailer）、超分辨率放大器（Upscaler）以及高效的管道连接（Pipe）等核心功能，帮助用户轻松实现人脸修复、局部重绘、高清放大及复杂区域控制，有效解决了原生工作流中操作繁琐、难以精准控制局部细节的痛点。\n\n该工具特别适合希望深入定制 AI 绘图流程的设计师、高级爱好者及研究人员使用。对于需要精细调整生成结果（如修复崩坏的手指或面部）的用户，ComfyUI-Impact-Pack 提供了近乎自动化的解决方案。其技术亮点在于支持最新的 
FLUX.1 模型与 SAM2 分割模型，并具备基于执行模型反转的开关逻辑，允许用户在单个工作流中灵活切换不同处理路径。此外，它还完美兼容 AnimateDiff 动态生成与控制网（ControlNet）的高级应用。虽然安装时需注意版本兼容性建议通过 ComfyUI-Manager 进行部署，但一旦配置完成，它将极大提升图像生成的质量与可控性，是构建专业级 Com","ComfyUI-Impact-Pack 是专为 ComfyUI 设计的一套强大自定义节点扩展包，旨在简化并增强图像生成与修复的工作流。它通过集成检测器（Detector）、细节修复器（Detailer）、超分辨率放大器（Upscaler）以及高效的管道连接（Pipe）等核心功能，帮助用户轻松实现人脸修复、局部重绘、高清放大及复杂区域控制，有效解决了原生工作流中操作繁琐、难以精准控制局部细节的痛点。\n\n该工具特别适合希望深入定制 AI 绘图流程的设计师、高级爱好者及研究人员使用。对于需要精细调整生成结果（如修复崩坏的手指或面部）的用户，ComfyUI-Impact-Pack 提供了近乎自动化的解决方案。其技术亮点在于支持最新的 FLUX.1 模型与 SAM2 分割模型，并具备基于执行模型反转的开关逻辑，允许用户在单个工作流中灵活切换不同处理路径。此外，它还完美兼容 AnimateDiff 动态生成与控制网（ControlNet）的高级应用。虽然安装时需注意版本兼容性建议通过 ComfyUI-Manager 进行部署，但一旦配置完成，它将极大提升图像生成的质量与可控性，是构建专业级 ComfyUI 工作流不可或缺的组件。","[![Youtube Badge](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FYoutube-FF0000?style=for-the-badge&logo=Youtube&logoColor=white&link=https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=AccoxDZIg3Y&list=PL_Ej2RDzjQLGfEeizq4GISeY3FtVyFmGP)](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=AccoxDZIg3Y&list=PL_Ej2RDzjQLGfEeizq4GISeY3FtVyFmGP)\n\n# ComfyUI-Impact-Pack\n\n**Custom node pack for ComfyUI**\nThis node pack helps to conveniently enhance images through Detector, Detailer, Upscaler, Pipe, and more.\n\nNOTE: The UltralyticsDetectorProvider node is not part of the ComfyUI-Impact-Pack. To use the UltralyticsDetectorProvider node, please install the ComfyUI-Impact-Subpack separately.\n\n## NOTICE \n* V8.24: This compatibility patch requires ComfyUI version 0.3.63 or higher due to structural changes in DifferentialDiffusion.\n* V8.19: legacy nodes (mmdet and etc.) are removed\n* V8.18: Support [facebookresearch\u002Fsam2](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fsam2) models\n* V8.0: The `Impact Subpack` is no longer installed automatically. To use `UltralyticsDetectorProvider` nodes, please install the `Impact Subpack` separately.\n* V7.6: Automatic installation is no longer supported. 
Please install using ComfyUI-Manager, or manually install requirements.txt and run install.py to complete the installation.\n* V7.0: Supports Switch based on Execution Model Inversion.\n* V6.0: Supports FLUX.1 model in Impact KSampler, Detailers, PreviewBridgeLatent\n* V5.0: It is no longer compatible with versions of ComfyUI before 2024.04.08. \n* V4.87.4: Update to a version of ComfyUI after 2024.04.08 for proper functionality.\n* V4.85: Incompatible with the outdated **ComfyUI IPAdapter Plus**. (A version dated March 24th or later is required.)\n* V4.77: Compatibility patch applied. Requires ComfyUI version (Oct. 8th) or later.\n* V4.73.3: ControlNetApply (SEGS) supports AnimateDiff\n* V4.20.1: Due to the feature update in `RegionalSampler`, the parameter order has changed, causing malfunctions in previously created `RegionalSamplers`. Please adjust the parameters accordingly.\n* V4.12: `MASKS` is changed to `MASK`.\n* V4.7.2 isn't compatible with old versions of `ControlNet Auxiliary Preprocessor`. If you use `MediaPipe FaceMesh to SEGS`, update to the latest version (Sep. 17th).\n* Selection weight syntax is changed (: -> ::) since V3.16. ([tutorial](https:\u002F\u002Fgithub.com\u002Fltdrdata\u002FComfyUI-extension-tutorials\u002Fblob\u002FMain\u002FComfyUI-Impact-Pack\u002Ftutorial\u002FImpactWildcardProcessor.md))\n* Starting from V3.6, requires the latest version (Aug 8, 9ccc965) of ComfyUI.\n* **In versions below V3.3.1, there was an issue with the image quality generated after using the UltralyticsDetectorProvider. Please make sure to upgrade to a newer version.**\n* Starting from V3.0, nodes related to `mmdet` are optional nodes that are activated only based on the configuration settings.\n  - Through ComfyUI-Impact-Subpack, you can utilize UltralyticsDetectorProvider to access various detection models.\n* Between versions 2.22 and 2.21, there is partial compatibility loss regarding the Detailer workflow. 
If you continue to use the existing workflow, errors may occur during execution. An additional output called \"enhanced_alpha_list\" has been added to Detailer-related nodes.\n* The permission error related to cv2 that occurred during the installation of Impact Pack has been patched in version 2.21.4. However, please note that the latest versions of ComfyUI and ComfyUI-Manager are required.\n* The \"PreviewBridge\" feature may not function correctly on ComfyUI versions released before July 1, 2023.\n* Attempting to load the \"ComfyUI-Impact-Pack\" on ComfyUI versions released before June 27, 2023, will result in a failure.\n* With the addition of wildcard support in FaceDetailer, the structure of DETAILER_PIPE-related nodes and Detailer nodes has changed. There may be malfunctions when using the existing workflow.\n\n\n## How To Install\n\n### **Recommended**\n* Install via [ComfyUI-Manager](https:\u002F\u002Fgithub.com\u002Fltdrdata\u002FComfyUI-Manager).\n\n### **Manual**\n* Navigate to `ComfyUI\u002Fcustom_nodes` in your terminal (cmd).\n* Clone the repository under the `custom_nodes` directory using the following command:\n  ```\n  git clone https:\u002F\u002Fgithub.com\u002Fltdrdata\u002FComfyUI-Impact-Pack comfyui-impact-pack\n  cd comfyui-impact-pack\n  ```\n* Install dependencies in your Python environment.\n    * For Windows Portable, run the following command inside `ComfyUI\\custom_nodes\\comfyui-impact-pack`:\n        ```\n        ..\\..\\..\\python_embeded\\python.exe -m pip install -r requirements.txt\n        ```\n    * If using venv or conda, activate your Python environment first, then run:\n        ```\n        pip install -r requirements.txt\n        ```\n\n### Companion Pack\n* If you need the `Ultralytics Detector Provider` to use various YOLO detection models, you should also install [ComfyUI-Impact-Subpack](https:\u002F\u002Fgithub.com\u002Fltdrdata\u002FComfyUI-Impact-Subpack).\n\n\n## Custom Nodes\n### [Detector 
nodes](https:\u002F\u002Fgithub.com\u002Fltdrdata\u002FComfyUI-extension-tutorials\u002Fblob\u002FMain\u002FComfyUI-Impact-Pack\u002Ftutorial\u002Fdetectors.md)\n  * `SAMLoader (Impact)` - Loads the SAM model.\n  * `ONNXDetectorProvider` - Loads the ONNX model to provide BBOX_DETECTOR.\n  * `CLIPSegDetectorProvider` - Wrapper for CLIPSeg to provide BBOX_DETECTOR.\n    * You need to install the ComfyUI-CLIPSeg node extension.\n  * `SEGM Detector (combined)` - Detects segmentation and returns a mask from the input image.\n  * `BBOX Detector (combined)` - Detects bounding boxes and returns a mask from the input image.\n  * `SAMDetector (combined)` - Utilizes the SAM technology to extract the segment at the location indicated by the input SEGS on the input image and outputs it as a unified mask.\n  * `SAMDetector (Segmented)` - It is similar to `SAMDetector (combined)`, but it separates and outputs the detected segments. Multiple segments can be found for the same detected area, and currently, a policy is in place to group them arbitrarily in sets of three. This aspect is expected to be improved in the future.\n    * As a result, it outputs the `combined_mask`, which is a unified mask, and `batch_masks`, which are multiple masks grouped together in batch form.\n    * While `batch_masks` may not be completely separated, it provides functionality to perform some level of segmentation.\n  * `Simple Detector (SEGS)` - Operating primarily with `BBOX_DETECTOR`, and with the additional provision of `SAM_MODEL` or `SEGM_DETECTOR`, this node internally generates improved SEGS through mask operations on both *bbox* and *silhouette*. It serves as a convenient tool to simplify a somewhat intricate workflow.\n  * `Simple Detector for Video (SEGS)` – Performs detection on videos composed of image frames. Instead of using a single mask, it performs detection individually on each image frame and generates a SEGS object with a batch of masks. 
\n  * `SAM2 Video Detector (SEGS)` – Similar to `Simple Detector for Video (SEGS)`, but utilizes SAM2’s video tracking technology to generate a SEGS object with a batch of masks. \n      * To use this node, you must select a SAM2 model in the SAMLoader.\n\n\n### ControlNet, IPAdapter\n  * `ControlNetApply (SEGS)` - To apply ControlNet in SEGS, you need to use the Preprocessor Provider node from the Inspire Pack to utilize this node.\n    * `segs_preprocessor` and `control_image` can be selectively applied. If a `control_image` is given, `segs_preprocessor` will be ignored.\n    * If set to `control_image`, you can preview the cropped cnet image through `SEGSPreview (CNET Image)`. Images generated by `segs_preprocessor` should be verified through the `cnet_images` output of each Detailer.\n    * The `segs_preprocessor` operates by applying preprocessing on-the-fly based on the cropped image during the detailing process, while `control_image` will be cropped and used as input to `ControlNetApply (SEGS)`.\n  * `ControlNetClear (SEGS)` - Clear applied ControlNet in SEGS\n  * `IPAdapterApply (SEGS)` - To apply IPAdapter in SEGS, you need to use the Preprocessor Provider node from the Inspire Pack to utilize this node.\n\n\n### Mask operation\n  * `Pixelwise(SEGS & SEGS)` - Performs a 'pixelwise and' operation between two SEGS.\n  * `Pixelwise(SEGS - SEGS)` - Subtracts one SEGS from another.\n  * `Pixelwise(SEGS & MASK)` - Performs a pixelwise AND operation between SEGS and MASK.\n  * `Pixelwise(SEGS & MASKS ForEach)` - Performs a pixelwise AND operation between SEGS and MASKS.\n    * Please note that this operation is performed with batches of MASKS, not just a single MASK.\n  * `Pixelwise(MASK & MASK)` - Performs a 'pixelwise and' operation between two masks.\n  * `Pixelwise(MASK - MASK)` - Subtracts one mask from another.\n  * `Pixelwise(MASK + MASK)` - Combine two masks.\n  * `SEGM Detector (SEGS)` - Detects segmentation and returns SEGS from the input image.\n  * 
`BBOX Detector (SEGS)` - Detects bounding boxes and returns SEGS from the input image.\n  * `Dilate Mask` - Dilates the mask.\n    * Supports erosion for negative values.\n  * `Gaussian Blur Mask` - Applies Gaussian Blur to the mask. You can utilize this for mask feathering.\n  * `Mask Rect Area` - Creates a rectangular mask defined by percentages, with a preview canvas.\n  * `Mask Rect Area (Advanced)` - Creates a rectangular mask defined by pixels and image size. \n\n\n### [Detailer nodes](https:\u002F\u002Fgithub.com\u002Fltdrdata\u002FComfyUI-extension-tutorials\u002Fblob\u002FMain\u002FComfyUI-Impact-Pack\u002Ftutorial\u002Fdetailers.md)\n  * `Detailer (SEGS)` - Refines the image based on SEGS.\n  * `Detailer (SEGS) with auto retry` - Refines the image based on SEGS and will automatically retry if the patch is all black.\n  * `DetailerDebug (SEGS)` - Refines the image based on SEGS. Additionally, it provides the ability to monitor the cropped image and the refined image of the cropped image.\n    * When using 'external_seed', disable the 'seed random generate' option in the 'Detailer...' node to prevent regeneration issues caused by a seed that does not change every time.\n  * `MASK to SEGS` - Generates SEGS based on the mask.\n  * `MASK to SEGS For Video` - Generates SEGS based on the mask for Video. (Renamed from `MASK to SEGS For AnimateDiff`)\n    * When using a single mask, convert it to SEGS to apply it to the entire frame.\n    * When using a batch mask, the contour fill feature is disabled.\n  * `MediaPipe FaceMesh to SEGS` - Separates each landmark from the mediapipe facemesh image to create labeled SEGS.\n    * Usually, the size of images created through the MediaPipe facemesh preprocessor is downscaled. It resizes the MediaPipe facemesh image to the original size given as reference_image_opt for matching sizes during processing. \n  * `ToBinaryMask` - Separates the mask generated with alpha values between 0 and 255 into 0 and 255. 
The non-zero parts are always set to 255.\n  * `Masks to Mask List` - This node converts the MASKS in batch form to a list of individual masks.\n  * `Mask List to Masks` - This node converts the MASK list to MASK batch form.\n  * `EmptySEGS` - Provides an empty SEGS.\n  * `MaskPainter` - Provides a feature to draw masks.\n  * `FaceDetailer` - Easily detects faces and improves them.\n  * `FaceDetailer (pipe)` - Easily detects faces and improves them (for multipass).\n  * `MaskDetailer (pipe)` - This is a simple inpaint node that applies the Detailer to the mask area.\n\n  * `FromDetailer (SDXL\u002Fpipe)`, `BasicPipe -> DetailerPipe (SDXL)`, `Edit DetailerPipe (SDXL)` - These are pipe functions used in Detailer for utilizing the refiner model of SDXL.\n  * `Any PIPE -> BasicPipe` - Converts the PIPE value of other custom nodes that are not BASIC_PIPE but internally share the BASIC_PIPE structure into BASIC_PIPE. If an incompatible type is applied, it may cause runtime errors.\n\n\n### SEGS Manipulation nodes\n  * `SEGSDetailer` - Performs detailed work on SEGS without pasting it back onto the original image.\n  * `SEGSPaste` - Pastes the results of SEGS onto the original image.\n    * If `ref_image_opt` is present, the images contained within SEGS are ignored. Instead, the image within `ref_image_opt` corresponding to the crop area of SEGS is taken and pasted. The size of the image in `ref_image_opt` should be the same as the original image size.\n    * This node can be used in conjunction with the processing results of AnimateDiff.\n  * `SEGSPreview` - Provides a preview of SEGS.\n     * This option is used to preview the improved image through `SEGSDetailer` before merging it into the original. Prior to going through `SEGSDetailer`, SEGS only contains mask information without image information. If fallback_image_opt is connected to the original image, SEGS without image information will generate a preview using the original image. 
However, if SEGS already contains image information, fallback_image_opt will be ignored.\n     * This node can be used in conjunction with the processing results of AnimateDiff.\n  * `SEGSPreview (CNET Image)` - Show images configured with `ControlNetApply (SEGS)` for debugging purposes.\n  * `SEGSToImageList` - Convert SEGS To Image List\n  * `SEGSToMaskList` - Convert SEGS To Mask List\n  * `SEGS Filter (label)` - This node filters SEGS based on the label of the detected areas. \n  * `SEGS Filter (ordered)` - This node sorts SEGS based on size and position and retrieves SEGs within a certain range. \n  * `SEGS Filter (range)` - This node retrieves only SEGs from SEGS that have a size and position within a certain range.\n  * `SEGS Filter (non max suppression)` - This node filters SEGS by removing those with high overlap based on the Intersection over Union (IoU) threshold, keeping only the most confident detections.\n  * `SEGS Filter (intersection)` - This node filters segs1, keeping only the SEGS that do not significantly overlap with any SEGS in segs2, based on the Intersection over Area (IoA) threshold.\n  * `SEGS Assign (label)` - Assigns labels sequentially to SEGS. This node is useful when used with `[LAB]` of FaceDetailer.\n  * `SEGSConcat` - Concatenates segs1 and segs2. If the source shapes of segs1 and segs2 differ, segs2 will be ignored.\n  * `SEGS Merge` - SEGS contains multiple SEGs. SEGS Merge integrates several SEGs into a single merged SEG. The label is changed to `merged` and the confidence becomes the minimum confidence. The applied controlnet and cropped_image are removed.\n  * `Picker (SEGS)` - Among the input SEGS, you can select a specific SEG through a dialog. If no SEG is selected, it outputs an empty SEGS. Increasing the batch_size of SEGSDetailer can be used for the purpose of selecting from the candidates.\n  * `Set Default Image For SEGS` - Set a default image for SEGS. 
SEGS with images set this way do not need to have a fallback image set. When override is set to false, the original image is preserved.\n  * `Remove Image from SEGS` - Removes the image set for the SEGS that has been configured by \"Set Default Image for SEGS\" or SEGSDetailer. When the image for the SEGS is removed, the Detailer node will operate based on the currently processed image instead of the SEGS. \n  * `Make Tile SEGS` - [experimental] Creates SEGS in the form of tiles from an image to facilitate experiments for Tiled Upscale using the Detailer.\n    * The `filter_in_segs_opt` and `filter_out_segs_opt` are optional inputs. If these inputs are provided, when creating the tiles, the mask for each tile is generated by overlapping with the mask of `filter_in_segs_opt` and excluding the overlap with the mask of `filter_out_segs_opt`. Tiles with an empty mask will not be created as SEGS.\n  * `Dilate Mask (SEGS)` - Dilate\u002FErode the mask in SEGS\n  * `Gaussian Blur Mask (SEGS)` - Apply Gaussian Blur to the mask in SEGS\n  * `SEGS_ELT Manipulation` - experimental nodes\n    * `DecomposeSEGS` - Decompose SEGS to allow for detailed manipulation.\n    * `AssembleSEGS` - Reassemble the decomposed SEGS.\n    * `From SEG_ELT` - Extract detailed information from SEG_ELT.\n    * `Edit SEG_ELT` - Modify some of the information in SEG_ELT.\n    * `Dilate SEG_ELT` - Dilate the mask of SEG_ELT.\n    * `From SEG_ELT` bbox - Extract coordinates from bbox in SEG_ELT\n    * `From SEG_ELT` crop_region - Extract coordinates from crop_region in SEG_ELT\n  * `Count Elt in SEGS` - Number of Elts in SEGS\n \n\n### Pipe nodes\n   * `ToDetailerPipe`, `FromDetailerPipe` - These nodes are used to bundle multiple inputs used in the detailer, such as models and vae, ..., into a single DETAILER_PIPE or extract the elements that are bundled in the DETAILER_PIPE.\n   * `ToBasicPipe`, `FromBasicPipe` - These nodes are used to bundle model, clip, vae, positive conditioning, and negative conditioning 
into a single BASIC_PIPE, or extract each element from the BASIC_PIPE.\n   * `EditBasicPipe`, `EditDetailerPipe` - These nodes are used to replace some elements in BASIC_PIPE or DETAILER_PIPE.\n   * `FromDetailerPipe_v2`, `FromBasicPipe_v2` - It has the same functionality as `FromDetailerPipe` and `FromBasicPipe`, but it has an additional output that directly exports the input pipe. It is useful when editing EditBasicPipe and EditDetailerPipe.\n* `Latent Scale (on Pixel Space)` - This node converts latent to pixel space, upscales it, and then converts it back to latent.\n   * If upscale_model_opt is provided, it uses the model to upscale the pixel and then downscales it using the interpolation method provided in scale_method to the target resolution.\n* `PixelKSampleUpscalerProvider` - An upscaler is provided that converts latent to pixels using VAEDecode, performs upscaling, converts back to latent using VAEEncode, and then performs k-sampling. This upscaler can be attached to nodes such as `Iterative Upscale` for use.\n  * Similar to `Latent Scale (on Pixel Space)`, if upscale_model_opt is provided, it performs pixel upscaling using the model.\n* `PixelTiledKSampleUpscalerProvider` - It is similar to `PixelKSampleUpscalerProvider`, but it uses `ComfyUI_TiledKSampler` and Tiled VAE Decoder\u002FEncoder to avoid GPU VRAM issues at high resolutions.\n  * You need to install the [BlenderNeko\u002FComfyUI_TiledKSampler](https:\u002F\u002Fgithub.com\u002FBlenderNeko\u002FComfyUI_TiledKSampler) node extension.\n\n\n### PK_HOOK\n  * `DenoiseScheduleHookProvider` - IterativeUpscale provides a hook that gradually changes the denoise to target_denoise as the iterative-step progresses.\n  * `CfgScheduleHookProvider` - IterativeUpscale provides a hook that gradually changes the cfg to target_cfg as the iterative-step progresses.\n  * `StepsScheduleHookProvider` - IterativeUpscale provides a hook that gradually changes the sampling-steps to target_steps as the iterative-step 
progresses.\n  * `NoiseInjectionHookProvider` - During each iteration of IterativeUpscale, noise is injected into the latent space while varying the strength according to a schedule.\n    * You need to install the [BlenderNeko\u002FComfyUI_Noise](https:\u002F\u002Fgithub.com\u002FBlenderNeko\u002FComfyUI_Noise) node extension.\n    * The seed serves as the initial value required for generating noise, and it increments by 1 with each iteration as the process unfolds.\n    * The source determines the types of CPU noise and GPU noise to be configured.\n    * Currently, there is only a simple schedule available, where the strength of the noise varies from start_strength to end_strength during the progression of each iteration.\n  * `UnsamplerHookProvider` - Apply Unsampler during each iteration. To use this node, ComfyUI_Noise must be installed.\n  * `PixelKSampleHookCombine` - This is used to connect two PK_HOOKs. hook1 is executed first and then hook2 is executed.\n    * If you want to simultaneously change cfg and denoise, you can combine the PK_HOOKs of CfgScheduleHookProvider and DenoiseScheduleHookProvider using PixelKSampleHookCombine.\n \n\n### DETAILER_HOOK\n  * `NoiseInjectionDetailerHookProvider` - The `detailer_hook` is a hook in the `Detailer` that injects noise during the processing of each SEGS.\n  * `UnsamplerDetailerHookProvider` - Apply Unsampler during each cycle. To use this node, ComfyUI_Noise must be installed.\n  * `DenoiseSchedulerDetailerHookProvider` - During the progress of the cycle, the detailer's denoise is altered up to the `target_denoise`. \n  * `CoreMLDetailerHookProvider` - CoreML supports only 512x512, 512x768, 768x512, 768x768 size sampling. CoreMLDetailerHookProvider precisely fixes the upscale of the crop_region to this size. When using this hook, the selected size will always be used, regardless of the guide_size. However, if the guide_size is too small, skipping will occur.\n  * `DetailerHookCombine` - This is used to connect two DETAILER_HOOKs. 
Similar to PixelKSampleHookCombine.\n  * `SEGSOrderedFilterDetailerHook`, `SEGSRangeFilterDetailerHook`, `SEGSLabelFilterDetailerHook` - These are wrapper nodes that provide the SEGSFilter nodes to be applied in FaceDetailer or Detector by creating a DETAILER_HOOK.\n  * `PreviewDetailerHook` - Connecting this hook node lets you view previews whenever SEGS detailing tasks are completed. When working with a large number of SEGS, such as Make Tile SEGS, it allows for monitoring the situation as improvements progress incrementally.\n    * Since this is the hook applied when pasting onto the original image, it has no effect on nodes like `SEGSDetailer`.\n  * `VariationNoiseDetailerHookProvider` - Applies a variation seed to the detailer. It can be applied in multiple stages through combine.\n  * `CustomSamplerDetailerHookProvider` - Applies a hook that allows you to use a custom sampler in the Detailer nodes. When using `DetailerHookCombine`, the sampler from the first hook is applied.\n  * `LamaRemoverDetailerHookProvider` – Applies Lama Remover to the upscaled image during the detailing stage. If `skip_sampling` is set to True, Lama Remover can be used alone without the detailing stage, allowing it to simply remove detected regions.\n      * Not applicable for **AnimateDiff** detailers. When using `DetailerHookCombine`, `skip_sampling` is only applied if it is set to `True` for all hooks.\n      * To use this node, the node pack at [Layer-norm\u002Fcomfyui-lama-remover](https:\u002F\u002Fgithub.com\u002FLayer-norm\u002Fcomfyui-lama-remover) must be installed.\n\n\n### Iterative Upscale nodes\n  * `Iterative Upscale (Latent\u002Fon Pixel Space)` - The upscaler takes the input upscaler and splits the scale_factor into steps, then iteratively performs upscaling. 
\n  This takes latent as input and outputs latent as the result.\n  * `Iterative Upscale (Image)` - The upscaler takes the input upscaler and splits the scale_factor into steps, then iteratively performs upscaling. This takes image as input and outputs image as the result.\n    * Internally, this node uses 'Iterative Upscale (Latent)'.\n\n\n### TwoSamplers nodes\n* `TwoSamplersForMask` - This node can apply two samplers depending on the mask area. The base_sampler is applied to the area where the mask is 0, while the mask_sampler is applied to the area where the mask is 1.\n  * Note: The latent encoded through VAEEncodeForInpaint cannot be used.\n* `KSamplerProvider` - This is a wrapper that enables KSampler to be used in TwoSamplersForMask and TwoSamplersForMaskUpscalerProvider.\n* `TiledKSamplerProvider` - A wrapper that provides KSAMPLER using ComfyUI_TiledKSampler.\n  * You need to install the [BlenderNeko\u002FComfyUI_TiledKSampler](https:\u002F\u002Fgithub.com\u002FBlenderNeko\u002FComfyUI_TiledKSampler) node extension.\n  \n* `TwoAdvancedSamplersForMask` - TwoSamplersForMask is similar to TwoAdvancedSamplersForMask, but they differ in their operation. TwoSamplersForMask performs sampling in the mask area only after all the samples in the base area are finished. 
On the other hand, TwoAdvancedSamplersForMask performs sampling in both the base area and the mask area sequentially at each step.\n* `KSamplerAdvancedProvider` - This is a wrapper that enables KSampler to be used in TwoAdvancedSamplersForMask and RegionalSampler.\n  * sigma_factor: By multiplying the denoise schedule by the sigma_factor, you can adjust the amount of denoising based on the configured denoise.\n\n* `TwoSamplersForMaskUpscalerProvider` - This is an Upscaler that extends TwoSamplersForMask to be used in Iterative Upscale.\n  * TwoSamplersForMaskUpscalerProviderPipe - pipe version of TwoSamplersForMaskUpscalerProvider.\n\n\n### Image Utils\n  * `PreviewBridge (image)` - This custom node can be used as a bridge for images when using the MaskEditor feature of Clipspace.\n  * `PreviewBridge (latent)` - This custom node can be used as a bridge for latent images when using the MaskEditor feature of Clipspace.\n    * If a latent with a mask is provided as input, it displays the mask. Additionally, the mask output provides the mask set in the latent.\n    * If a latent without a mask is provided as input, it outputs the original latent as is, but the mask output provides an output with the entire region set as a mask.\n    * When a mask is set through MaskEditor, it is applied to the latent, and the output includes the stored mask. 
The same mask is also output as the mask output.\n     * When connected to `vae_opt`, it takes higher priority than the `preview_method`.\n  * `ImageSender`, `ImageReceiver` - The images generated in ImageSender are automatically sent to the ImageReceiver with the same link_id.\n  * `LatentSender`, `LatentReceiver` - The latents generated in LatentSender are automatically sent to the LatentReceiver with the same link_id.\n    * Furthermore, LatentSender is implemented with PreviewLatent, which stores the latent in payload form within the image thumbnail.\n    * Due to the current structure of ComfyUI, it is unable to distinguish between SDXL latent and SD1.5\u002FSD2.1 latent. Therefore, it generates thumbnails by decoding them using the SD1.5 method.\n\n\n### Switch nodes\n  * `Switch (image,mask)`, `Switch (latent)`, `Switch (SEGS)` - Among multiple inputs, it selects the input designated by the selector and outputs it. The first input must be provided, while the others are optional. However, if the input specified by the selector is not connected, an error may occur.\n  * `Switch (Any)` - This is a Switch node that takes an arbitrary number of inputs and produces a single output. 
Its type is determined when connected to any node, and connecting inputs increases the available slots for connections.\n  * `Inversed Switch (Any)` - In contrast to `Switch (Any)`, it takes a single input and outputs one of many.\n  * NOTE: See this [tutorial](https:\u002F\u002Fgithub.com\u002Fltdrdata\u002FComfyUI-extension-tutorials\u002Fblob\u002FMain\u002FComfyUI-Impact-Pack\u002Ftutorial\u002Fswitch.md) \n\n\n### [Wildcards](http:\u002F\u002Fgithub.com\u002Fltdrdata\u002FComfyUI-extension-tutorials\u002Fblob\u002FMain\u002FComfyUI-Impact-Pack\u002Ftutorial\u002FImpactWildcard.md) nodes\n  * These nodes support syntax in the form of `__wildcard-name__` and dynamic prompt syntax like `{a|b|c}`.\n  * Wildcard files can be used by placing `.txt` or `.yaml` files under either the `ComfyUI-Impact-Pack\u002Fwildcards` or `ComfyUI-Impact-Pack\u002Fcustom_wildcards` path.\n    * You can download and use [Wildcard YAML](https:\u002F\u002Fcivitai.com\u002Fmodels\u002F138970\u002Fbillions-of-wildcards-all-in-one) files in this format.\n    * After the first execution, you can change the custom wildcards path in the `custom_wildcards` entry within the generated `ComfyUI-Impact-Pack\u002Fimpact-pack.ini` file.\n  * `ImpactWildcardProcessor` - The text is generated by processing the wildcards in the Text. If the mode is set to \"populate\", a dynamic prompt is generated with each execution and the result is filled into the second textbox. If the mode is set to \"fixed\", the content of the second textbox remains unchanged.\n    * When an image is generated with the \"fixed\" mode, the prompt used for that particular generation is stored in the metadata.\n  * `ImpactWildcardEncode` - Similar to ImpactWildcardProcessor, but this also provides the loading functionality of LoRAs (e.g. `\u003Clora:some_awesome_lora:0.7:1.2>`). 
Populated prompts are encoded using the clip after all the lora loading is done.\n    * If the `Inspire Pack` is installed, you can use **Lora Block Weight** in the form of `LBW=lbw spec;`\n    * `\u003Clora:chunli:1.0:1.0:LBW=B11:0,0,0,0,0,0,0,0,0,0,A,0,0,0,0,0,0;A=0.;>`, `\u003Clora:chunli:1.0:1.0:LBW=0,0,0,0,0,0,0,0,0,0,A,B,0,0,0,0,0;A=0.5;B=0.2;>`, `\u003Clora:chunli:1.0:1.0:LBW=SD-MIDD;>`\n\n\n### Regional Sampling\n  * These nodes offer the capability to divide regions and perform partial sampling using a mask. Unlike TwoSamplersForMask, sampling for each region is applied during each step.\n  * `RegionalPrompt` - This node combines a **mask** for specifying regions and the **sampler** to apply to each region to create `REGIONAL_PROMPTS`.\n  * `CombineRegionalPrompts` - Combine multiple `REGIONAL_PROMPTS` to create a single `REGIONAL_PROMPTS`.\n  * `RegionalSampler` - This node performs sampling using a base sampler and regional prompts. Sampling by the base sampler is executed at each step, while sampling for each region is performed through the sampler bound to each region.\n    * overlap_factor - Specifies the amount of overlap for each region to blend well with the area outside the mask.\n    * restore_latent - When sampling each region, restore the areas outside the mask to the base latent, preventing additional noise from being introduced outside the mask during region sampling.\n  * `RegionalSamplerAdvanced` - This is the Advanced version of the RegionalSampler. You can control it using `step` instead of `denoise`.\n    > NOTE: The `sde` sampler and `uni_pc` sampler introduce additional noise during each step of the sampling process. 
To mitigate this, when sampling each region, the `uni_pc` sampler additionally applies `dpmpp_fast`, and the `sde` sampler additionally applies the `dpmpp_2m` sampler as a countermeasure.\n\n\n### Impact KSampler\n  * These samplers support basic_pipe and the AYS\u002FOSS\u002FGITS schedulers\n  * `KSampler (pipe)` - pipe version of KSampler\n  * `KSampler (advanced\u002Fpipe)` - pipe version of KSamplerAdvanced\n  * When converting the scheduler widget to input, refer to the `Impact Scheduler Adapter` node to resolve compatibility issues.\n  * `GITSScheduler Func Provider` - Provides a scheduler function for GITSScheduler\n  \n\n### Batch\u002FList Util\n  * `Image Batch to Image List` - Convert Image batch to Image List\n    - Lets you handle each image generated in a multi-image batch individually\n  * `Image List to Image Batch` - Convert Image List to Image Batch \n  * `Make Image List` - Convert multiple images into a single image list\n  * `Make Image Batch` - Convert multiple images into a single image batch\n    - The number of image inputs can be increased as needed\n  * `Masks to Mask List`, `Mask List to Masks`, `Make Mask List`, `Make Mask Batch` - These have the same functionality as the nodes above, but use masks as input instead of images.\n  * `Flatten Mask Batch` - Flattens a Mask Batch into a single Mask. Normal operation is not guaranteed for non-binary masks. \n  * `Make List (Any)` - Create a list with arbitrary values.\n  * `Select Nth Item (Any list)` - Selects the Nth item from a list. If the index is out of range, it returns the last item in the list. 
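The out-of-range fallback of `Select Nth Item (Any list)` described above can be sketched in a few lines of Python (an illustrative stand-in, not the pack's actual implementation):

```python
def select_nth_item(items, index):
    """Mimics the documented behavior of `Select Nth Item (Any list)`:
    an out-of-range index falls back to the last item in the list.
    (Illustrative sketch only, not Impact Pack's real code.)"""
    if index >= len(items):
        index = len(items) - 1  # fall back to the last item, as documented
    return items[index]

print(select_nth_item(["a", "b", "c"], 1))   # b
print(select_nth_item(["a", "b", "c"], 10))  # c (index 10 is out of range)
```

Empty lists are out of scope here, matching the node's assumption that at least one item is connected.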
\n\n\n### Logics (experimental) \n  * These nodes are experimental nodes designed to implement the logic for loops and dynamic switching.\n  * `ImpactCompare`, `ImpactConditionalBranch`, `ImpactConditionalBranchSelMode`, `ImpactInt`, `ImpactBoolean`, `ImpactValueSender`, `ImpactValueReceiver`, `ImpactImageInfo`, `ImpactMinMax`, `ImpactNeg`, `ImpactConditionalStopIteration`\n  * `ImpactIsNotEmptySEGS` - This node returns `true` only if the input SEGS is not empty. \n  * `ImpactIfNone` - Returns `true` if any_input is None, and returns `false` if it is not None.\n  * `Queue Trigger` - When this node is executed, it adds a new prompt to the queue to assist with repetitive tasks. It only executes when the signal's status changes.\n  * `Queue Trigger (Countdown)` - Like the Queue Trigger, it adds a prompt to the queue, but only while the count is greater than 1, decrementing the count by one each time it runs.\n  * `Sleep` - Waits for the specified time (in seconds).\n  * `Set Widget Value` - This node sets one of the optional inputs to the specified node's widget. An error may occur if the types do not match.\n  * `Set Mute State` - This node changes the mute state of a specific node.\n  * `Control Bridge` - This node modifies the state of the connected control nodes based on the `mode` and `behavior`. If there are nodes that require a change, the current execution is paused, the mute status is updated, and a new prompt queue is inserted. \n    * When the `mode` is `active`, it makes the connected control nodes active regardless of the behavior. \n    * When the `mode` is `Bypass\u002FMute`, it changes the state of the connected nodes based on whether the behavior is `Bypass` or `Mute`.\n    * **Limitation**: Due to these characteristics, it does not function correctly when the batch count exceeds 1. 
Additionally, it does not guarantee proper operation when the seed is randomized or when the state of nodes is altered by actions such as `Queue Trigger`, `Set Widget Value`, `Set Mute`, before the Control Bridge.\n    * When utilizing this node, please structure the workflow in such a way that `Queue Trigger`, `Set Widget Value`, `Set Mute State`, and similar actions are executed at the end of the workflow.\n    * If you want to change the value of the seed at each iteration, please ensure that Set Widget Value is executed at the end of the workflow instead of using randomization.\n      * It is not a problem if the seed changes due to randomization as long as it occurs after the Control Bridge section.\n  * `Remote Boolean (on prompt)`, `Remote Int (on prompt)` - At the start of the prompt, this node forcibly sets the `widget_value` of `node_id`. It is disregarded if the target widget type is different.\n  * You can find the `node_id` by checking through [ComfyUI-Manager](https:\u002F\u002Fgithub.com\u002Fltdrdata\u002FComfyUI-Manager) using the format `Badge: #ID Nickname`.\n  * Experimental set of nodes for implementing loop functionality (tutorial to be prepared later \u002F [example workflow](test\u002Floop-test.json)).\n\n\n### Limitation\n* Many nodes in the `Impact Pack` use a wildcard type to allow arbitrary input\u002Foutput connections. This approach will be replaced once ComfyUI officially supports **dynamic types**. Until then, while it functions without issues, type validation may still produce error messages.\n\n\n### HuggingFace nodes\n  * These nodes provide functionalities based on HuggingFace repository models.\n  * The path where the HuggingFace model cache is stored can be changed through the `HF_HOME` environment variable.\n  * `HF Transformers Classifier Provider` - This is a node that provides a classifier based on HuggingFace's transformers models.\n    * The 'repo id' parameter should contain HuggingFace's repo id. 
When `preset_repo_id` is set to `Manual repo id`, the manually entered repo id in `manual_repo_id` is used.\n    * e.g. 'rizvandwiki\u002Fgender-classification-2' is a repository that provides a model for gender classification.\n  * `SEGS Classify` - This node utilizes the `TRANSFORMERS_CLASSIFIER` loaded with 'HF Transformers Classifier Provider' to classify `SEGS`.\n    * The 'expr' allows for forms like `label > number`, and in the case of `preset_expr` being `Manual expr`, it uses the expression entered in `manual_expr`.\n    * For example, in the case of `male \u003C= 0.4`, if the score of the `male` label in the classification result is less than or equal to 0.4, it is categorized as `filtered_SEGS`; otherwise, it is categorized as `remained_SEGS`.\n      * For supported labels, please refer to the `config.json` of the respective HuggingFace repository.\n    * `#Female` and `#Male` are symbols that group multiple labels such as `Female, women, woman, ...` for convenience, rather than being single labels.\n\n\n### Etc nodes\n  * `Impact Scheduler Adapter` - With the addition of AYS to the schedulers of the Impact Pack and Inspire Pack, an incompatibility arises when the existing scheduler widget is converted to input. The Impact Scheduler Adapter makes an indirect connection possible.\n  * `StringListToString` - Convert String List to String\n  * `WildcardPromptFromString` - Creates a labeled wildcard for the detailer from a string. \n    * This node works well when used with MakeTileSEGS. [[Link](https:\u002F\u002Fgithub.com\u002Fltdrdata\u002FComfyUI-Impact-Pack\u002Fpull\u002F536#discussion_r1586060779)]\n\n  * `String Selector` - It selects and returns a portion of the string. When `multiline` mode is disabled, it simply returns the string of the line pointed to by the selector. When `multiline` mode is enabled, it divides the string based on lines that start with `#` and returns them. 
If the `select` value is larger than the number of items, it will start counting from the first line again and return accordingly.\n  * `Combine Conditionings` - It takes multiple conditionings as input and combines them into a single conditioning.\n  * `Concat Conditionings` - It takes multiple conditionings as input and concatenates them into a single conditioning.\n  * `Negative Cond Placeholder` - Models like FLUX.1 do not use Negative Conditioning. This is a placeholder node for them. You can use FLUX.1 by replacing the Negative Conditioning used in Impact KSampler, KSampler (Inspire), and Detailer with this node.\n  * `Execution Order Controller` - A helper node that can forcibly control the execution order of nodes.\n    * Connect the output of the node that should be executed first to the signal, and make the input of the node that should be executed later pass through this node.\n  * `List Bridge` - When passing the list output through this node, it collects and organizes the data before forwarding it, which ensures that the previous stage's sub-workflow has been completed.\n\n\n## Feature\n* `Interactive SAM Detector (Clipspace)` - When you right-click on a node that has 'MASK' and 'IMAGE' outputs, a context menu will open. From this menu, you can either open a dialog to create a SAM Mask using 'Open in SAM Detector', or copy the content (likely mask data) using 'Copy (Clipspace)' and generate a mask using 'Impact SAM Detector' from the clipspace menu, and then paste it using 'Paste (Clipspace)'.\n* Detects errors that occur when models and clips from checkpoints such as `SDXL Base`, `SDXL Refiner`, `SD1.x`, `SD2.x` are mixed during sampling, and reports appropriate errors.\n\n\n## How To Install?\n\n### Install via ComfyUI-Manager (Recommended)\n* Search `ComfyUI Impact Pack` in ComfyUI-Manager and click the `Install` button.\n\n### Manual Install (Not Recommended)\n1. `cd custom_nodes`\n2. 
`git clone https:\u002F\u002Fgithub.com\u002Fltdrdata\u002FComfyUI-Impact-Pack`\n3. `cd ComfyUI-Impact-Pack`\n4. `pip install -r requirements.txt`\n    * **IMPORTANT**:\n        * You must install it within the Python environment where ComfyUI is running.\n        * For the portable version, use `\u003Cinstalled path>\\python_embeded\\python.exe -m pip` instead of `pip`. For a `venv`, activate the `venv` first and then use `pip`.\n5. Restart ComfyUI\n\n* NOTE1: If an error occurs during the installation process, please refer to the [Troubleshooting Page](troubleshooting\u002FTROUBLESHOOTING.md) for assistance. \n* NOTE2: You can use this [colab notebook](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fltdrdata\u002FComfyUI-Impact-Pack\u002Fblob\u002FMain\u002Fnotebook\u002Fcomfyui_colab_impact_pack.ipynb) to launch it. This notebook automatically downloads the impact pack to the custom_nodes directory, installs the tested dependencies, and runs it.\n* NOTE3: If you create an empty file named `skip_download_model` in the `ComfyUI\u002Fcustom_nodes\u002F` directory, it will skip the model download step during the installation of the impact pack.\n\n\n## Package Dependencies (if you need to set up manually)\n\n* pip install\n   * segment-anything\n   * scikit-image\n   * piexif \n   * opencv-python\n   * scipy\n   * numpy\u003C2\n   * dill\n   * matplotlib\n   * (optional) onnxruntime\n   * (deprecated) openmim      # for mim\n   * (deprecated) pycocotools  # for mim\n   \n* linux packages (ubuntu)\n  * libgl1-mesa-glx\n  * libglib2.0-0\n\n\n## Config example\n* Once you run the Impact Pack for the first time, an `impact-pack.ini` file will be automatically generated in the Impact Pack directory. 
You can modify this configuration file to customize the default behavior.\n  * `dependency_version` - don't touch this\n  * `sam_editor_cpu` - use cpu for the `SAM editor` instead of gpu\n  * `sam_editor_model` - Specify the SAM model for the SAM editor.\n    * You can download various SAM models using ComfyUI-Manager.\n    * Path to SAM model: `ComfyUI\u002Fmodels\u002Fsams`\n```\n[default]\nsam_editor_cpu = False\nsam_editor_model = sam_vit_b_01ec64.pth\n```\n\n\n## Other Materials (auto-download when installing)\n\n* ComfyUI\u002Fmodels\u002Fsams \u003C= https:\u002F\u002Fdl.fbaipublicfiles.com\u002Fsegment_anything\u002Fsam_vit_b_01ec64.pth\n\n\n## Troubleshooting page\n* [Troubleshooting Page](troubleshooting\u002FTROUBLESHOOTING.md)\n\n\n## How To Use (DDetailer feature)\n\n#### 1. Basic auto face detection and refine example\n![simple](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fltdrdata_ComfyUI-Impact-Pack_readme_5b85282df17f.png)\n* A face that has been damaged due to low resolution is regenerated at high resolution and composited back in order to restore the details.\n* The FaceDetailer node is a combination of a Detector node for face detection and a Detailer node for image enhancement. See the [Advanced Tutorial](https:\u002F\u002Fgithub.com\u002Fltdrdata\u002FComfyUI-extension-tutorials\u002Fraw\u002FMain\u002FComfyUI-Impact-Pack\u002Ftutorial\u002Fadvanced.md) for a more detailed explanation.\n* The MASK output of FaceDetailer provides a visualization of where the detected and enhanced areas are.\n\n![simple-orig](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fltdrdata_ComfyUI-Impact-Pack_readme_6c3cacd86935.png) ![simple-refined](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fltdrdata_ComfyUI-Impact-Pack_readme_a8c316c87db7.png)\n* You can see that the face in the image on the left has increased detail as in the image on the right.\n\n#### 2. 
2Pass refine (restore a severely damaged face)\n![2pass-workflow-example](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fltdrdata_ComfyUI-Impact-Pack_readme_cf788db878ec.png)\n* Although two FaceDetailers can be attached together for a 2-pass configuration, the common inputs used in KSampler can be passed through DETAILER_PIPE, so FaceDetailerPipe makes this easy to configure.\n* In the first pass, only rough outline recovery is needed, so restore at a reasonable resolution with conservative options. However, if you increase the dilation at this stage, not only the face but also the surrounding area is included in the recovery range, which is useful when you need to reshape areas beyond the face itself.\n\n![2pass-example-original](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fltdrdata_ComfyUI-Impact-Pack_readme_b187dc44d9bf.png) ![2pass-example-middle](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fltdrdata_ComfyUI-Impact-Pack_readme_1c12d482aabf.png) ![2pass-example-result](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fltdrdata_ComfyUI-Impact-Pack_readme_f9c354b88b4a.png)\n* In the first stage, the severely damaged face is restored to some extent, and in the second stage, the details are restored.\n\n#### 3. 
Face Bbox (bounding box) + Person silhouette segmentation (prevents distortion of the background)\n![combination-workflow-example](https:\u002F\u002Fgithub.com\u002Fltdrdata\u002FComfyUI-extension-tutorials\u002Fraw\u002FMain\u002FComfyUI-Impact-Pack\u002Fimages\u002Fcombination.jpg)\n![combination-example-original](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fltdrdata_ComfyUI-Impact-Pack_readme_68a3a54a8966.png) ![combination-example-refined](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fltdrdata_ComfyUI-Impact-Pack_readme_462c97d8fb8e.png)\n\n* The detail-emphasizing facial synthesis is delicately aligned with the contours of the face, and you can observe that it does not affect the image outside of the face.\n\n* The BBoxDetectorForEach node is used to detect faces, and the SAMDetectorCombined node is used to find the segment related to the detected face. By using the Segs & Mask node with the two masks obtained in this way, an accurate mask that intersects based on segs can be generated. If this generated mask is input to the DetailerForEach node, only the target area can be created in high resolution from the image and then composited.\n\n#### 4. Iterative Upscale\n![upscale-workflow-example](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fltdrdata_ComfyUI-Impact-Pack_readme_e5480f38326f.png)\n \n* The IterativeUpscale node enlarges an image\u002Flatent by a scale_factor, carrying out the upscale progressively by dividing it into steps.\n* IterativeUpscale takes an Upscaler as an input, similar to a plugin, and uses it during each iteration. PixelKSampleUpscalerProvider is an Upscaler that converts the latent representation to pixel space and applies ksampling.\n  * The upscale_model_opt is an optional parameter that determines whether to use the upscale function of the model base if available. 
Using the upscale function of the model base can significantly reduce the number of iterative steps required. If an x2 upscaler is used, the image\u002Flatent is first upscaled by a factor of 2 and then downscaled to the target scale at each step before further processing is done.\n\n* The following shows a 304x512-pixel image and the same image scaled up to three times its original size using IterativeUpscale.\n\n![combination-example-original](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fltdrdata_ComfyUI-Impact-Pack_readme_c7c305ad43ea.png) ![combination-example-refined](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fltdrdata_ComfyUI-Impact-Pack_readme_04a400b47aef.png)\n\n\n#### 5. Interactive SAM Detector (Clipspace)\n\n* When you right-click on a node that outputs 'MASK' and 'IMAGE', a menu called \"Open in SAM Detector\" appears, as shown in the following picture. Clicking the menu opens a dialog with SAM functionality, allowing you to generate a segment mask.\n![samdetector-menu](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fltdrdata_ComfyUI-Impact-Pack_readme_467502896f5d.png)\n\n* Clicking the left mouse button on a coordinate enters a positive prompt in blue, indicating an area that should be included. Clicking the right mouse button on a coordinate enters a negative prompt in red, indicating an area that should be excluded.\n* You can remove the points that were added by using the \"undo\" button. After selecting the points, pressing the \"detect\" button generates the mask. 
Additionally, you can adjust the fidelity slider to determine the extent to which the mask belongs to the confidence region.\n\n![samdetector-dialog](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fltdrdata_ComfyUI-Impact-Pack_readme_ec6327cfec56.jpg)\n\n* If you opened the dialog through \"Open in SAM Detector\" from the node, you can directly apply the changes by clicking the \"Save to node\" button. However, if you opened the dialog through the \"clipspace\" menu, you can save it to clipspace by clicking the \"Save\" button.\n\n![samdetector-result](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fltdrdata_ComfyUI-Impact-Pack_readme_6730bc3cd11b.jpg)\n\n* When you execute the node with the applied mask, you can observe that the image and mask are displayed separately.\n\n\n## Other Tutorials\n* [ComfyUI-extension-tutorials\u002FComfyUI-Impact-Pack](https:\u002F\u002Fgithub.com\u002Fltdrdata\u002FComfyUI-extension-tutorials\u002Ftree\u002FMain\u002FComfyUI-Impact-Pack) - You can find various tutorials and workflows on this page.\n* [Advanced Tutorial](https:\u002F\u002Fgithub.com\u002Fltdrdata\u002FComfyUI-extension-tutorials\u002Fblob\u002FMain\u002FComfyUI-Impact-Pack\u002Ftutorial\u002Fadvanced.md)\n* [SAM Application](https:\u002F\u002Fgithub.com\u002Fltdrdata\u002FComfyUI-extension-tutorials\u002Fblob\u002FMain\u002FComfyUI-Impact-Pack\u002Ftutorial\u002Fsam.md)\n* [PreviewBridge](https:\u002F\u002Fgithub.com\u002Fltdrdata\u002FComfyUI-extension-tutorials\u002Fblob\u002FMain\u002FComfyUI-Impact-Pack\u002Ftutorial\u002Fpreviewbridge.md)\n* [Mask Pointer](https:\u002F\u002Fgithub.com\u002Fltdrdata\u002FComfyUI-extension-tutorials\u002Fblob\u002FMain\u002FComfyUI-Impact-Pack\u002Ftutorial\u002Fmaskpointer.md)\n* [ONNX Tutorial](https:\u002F\u002Fgithub.com\u002Fltdrdata\u002FComfyUI-extension-tutorials\u002Fblob\u002FMain\u002FComfyUI-Impact-Pack\u002Ftutorial\u002FONNX.md)\n* [CLIPSeg 
Tutorial](https:\u002F\u002Fgithub.com\u002Fltdrdata\u002FComfyUI-extension-tutorials\u002Fblob\u002FMain\u002FComfyUI-Impact-Pack\u002Ftutorial\u002Fclipseg.md)\n* [Extreme Highresolution Upscale](https:\u002F\u002Fgithub.com\u002Fltdrdata\u002FComfyUI-extension-tutorials\u002Fblob\u002FMain\u002FComfyUI-Impact-Pack\u002Ftutorial\u002Fextreme-upscale.md)\n* [TwoSamplersForMask](https:\u002F\u002Fgithub.com\u002Fltdrdata\u002FComfyUI-extension-tutorials\u002Fblob\u002FMain\u002FComfyUI-Impact-Pack\u002Ftutorial\u002FTwoSamplers.md)\n* [TwoAdvancedSamplersForMask](https:\u002F\u002Fgithub.com\u002Fltdrdata\u002FComfyUI-extension-tutorials\u002Fblob\u002FMain\u002FComfyUI-Impact-Pack\u002Ftutorial\u002FTwoAdvancedSamplers.md)\n* [Advanced Iterative Upscale: PK_HOOK](https:\u002F\u002Fgithub.com\u002Fltdrdata\u002FComfyUI-extension-tutorials\u002Fblob\u002FMain\u002FComfyUI-Impact-Pack\u002Ftutorial\u002Fpk_hook.md)\n* [Advanced Iterative Upscale: TwoSamplersForMask Upscale Provider](https:\u002F\u002Fgithub.com\u002Fltdrdata\u002FComfyUI-extension-tutorials\u002Fblob\u002FMain\u002FComfyUI-Impact-Pack\u002Ftutorial\u002FTwoSamplersUpscale.md)\n* [Interactive SAM + PreviewBridge](https:\u002F\u002Fgithub.com\u002Fltdrdata\u002FComfyUI-extension-tutorials\u002Fblob\u002FMain\u002FComfyUI-Impact-Pack\u002Ftutorial\u002Fsam_with_preview_bridge.md)\n* [ImageSender\u002FImageReceiver\u002FLatentSender\u002FLatentReceiver](https:\u002F\u002Fgithub.com\u002Fltdrdata\u002FComfyUI-extension-tutorials\u002Fblob\u002FMain\u002FComfyUI-Impact-Pack\u002Ftutorial\u002Fsender_receiver.md)\n* [ImpactWildcardProcessor](https:\u002F\u002Fgithub.com\u002Fltdrdata\u002FComfyUI-extension-tutorials\u002Fblob\u002FMain\u002FComfyUI-Impact-Pack\u002Ftutorial\u002FImpactWildcardProcessor.md)\n\n\n## Credits\n\nComfyUI\u002F[ComfyUI](https:\u002F\u002Fgithub.com\u002Fcomfyanonymous\u002FComfyUI) - A powerful and modular stable diffusion 
GUI.\n\ndustysys\u002F[ddetailer](https:\u002F\u002Fgithub.com\u002Fdustysys\u002Fddetailer) - DDetailer extension for Stable-diffusion-webUI.\n\nBing-su\u002F[dddetailer](https:\u002F\u002Fgithub.com\u002FBing-su\u002Fdddetailer) - The anime-face-detector used in ddetailer has been updated to be compatible with mmdet 3.0.0, and a patch has also been applied to the pycocotools dependency for Windows environments in ddetailer.\n\nfacebook\u002F[segment-anything](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fsegment-anything) - Segmentation Anything!\n\nhysts\u002F[anime-face-detector](https:\u002F\u002Fgithub.com\u002Fhysts\u002Fanime-face-detector) - Creator of `anime-face_yolov3`, which has impressive performance on a variety of art styles.\n\nopen-mmlab\u002F[mmdetection](https:\u002F\u002Fgithub.com\u002Fopen-mmlab\u002Fmmdetection) - Object detection toolset. `dd-person_mask2former` was trained via transfer learning using their [R-50 Mask2Former instance segmentation model](https:\u002F\u002Fgithub.com\u002Fopen-mmlab\u002Fmmdetection\u002Ftree\u002Fmaster\u002Fconfigs\u002Fmask2former#instance-segmentation) as a base.\n\nbiegert\u002F[ComfyUI-CLIPSeg](https:\u002F\u002Fgithub.com\u002Fbiegert\u002FComfyUI-CLIPSeg) - This is a custom node that enables the use of CLIPSeg technology, which can find segments through prompts, in ComfyUI.\n\nBlenderNeok\u002F[ComfyUI-TiledKSampler](https:\u002F\u002Fgithub.com\u002FBlenderNeko\u002FComfyUI_TiledKSampler) - The tile sampler allows high-resolution sampling even on devices with low GPU VRAM.\n\nBlenderNeok\u002F[ComfyUI_Noise](https:\u002F\u002Fgithub.com\u002FBlenderNeko\u002FComfyUI_Noise) - The noise injection feature relies on this function and slerp code for noise variation.\n\nWASasquatch\u002F[was-node-suite-comfyui](https:\u002F\u002Fgithub.com\u002FWASasquatch\u002Fwas-node-suite-comfyui) - A powerful custom node extension of 
ComfyUI.\n\nTrung0246\u002F[ComfyUI-0246](https:\u002F\u002Fgithub.com\u002FTrung0246\u002FComfyUI-0246) - Nice bypass hack!\n\nLayer-norm\u002F[comfyui-lama-remover](https:\u002F\u002Fgithub.com\u002FLayer-norm\u002Fcomfyui-lama-remover) - Required for using `LamaRemoverDetailerHook`.\n","[![Youtube Badge](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FYoutube-FF0000?style=for-the-badge&logo=Youtube&logoColor=white&link=https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=AccoxDZIg3Y&list=PL_Ej2RDzjQLGfEeizq4GISeY3FtVyFmGP)](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=AccoxDZIg3Y&list=PL_Ej2RDzjQLGfEeizq4GISeY3FtVyFmGP)\n\n# ComfyUI-Impact-Pack\n\n**Custom node pack for ComfyUI**\nThis node pack helps you enhance images conveniently with detectors, detailers, upscalers, pipes, and more.\n\nNOTE: The UltralyticsDetectorProvider node is not part of ComfyUI-Impact-Pack. To use the UltralyticsDetectorProvider node, install ComfyUI-Impact-Subpack separately.\n\n## NOTICE\n* V8.24: Due to structural changes in DifferentialDiffusion, this compatibility patch requires ComfyUI 0.3.63 or later.\n* V8.19: Removed legacy nodes (such as mmdet).\n* V8.18: Supports the [facebookresearch\u002Fsam2](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fsam2) model.\n* V8.0: `Impact Subpack` is no longer installed automatically. To use the `UltralyticsDetectorProvider` node, install `Impact Subpack` separately.\n* V7.6: Automatic installation is no longer supported. Install via ComfyUI-Manager, or install requirements.txt manually and run install.py to complete the installation.\n* V7.0: Supports switch functionality based on inversion of the execution model.\n* V6.0: Supports the FLUX.1 model in Impact KSampler, Detailers, and PreviewBridgeLatent.\n* V5.0: No longer compatible with ComfyUI versions released before Apr 8, 2024.\n* V4.87.4: Please update to a ComfyUI version released after Apr 8, 2024 to ensure correct operation.\n* V4.85: Incompatible with outdated **ComfyUI IPAdapter Plus**. (A version from Mar 24 or later is required.)\n* V4.77: A compatibility patch has been applied. Requires a ComfyUI version from Oct 8 or later.\n* V4.73.3: ControlNetApply (SEGS) supports AnimateDiff.\n* V4.20.1: Due to a feature update of `RegionalSampler`, the parameter order has changed, breaking previously created `RegionalSamplers`. Please adjust the parameters accordingly.\n* V4.12: `MASKS` has been changed to `MASK`.\n* V4.7.2 is incompatible with older versions of `ControlNet Auxiliary Preprocessors`. To use `MediaPipe FaceMesh to SEGS`, update to the latest version (Sep 17).\n* Since V3.16, the select weight syntax has changed (: -> 
::). ([Tutorial](https:\u002F\u002Fgithub.com\u002Fltdrdata\u002FComfyUI-extension-tutorials\u002Fblob\u002FMain\u002FComfyUI-Impact-Pack\u002Ftutorial\u002FImpactWildcardProcessor.md))\n* Since V3.6, the latest version of ComfyUI (Aug 8, 9ccc965) is required.\n* **In versions below V3.3.1, there is a quality issue in images generated after using UltralyticsDetectorProvider. Be sure to upgrade to a newer version.**\n* Since V3.0, `mmdet`-related nodes are optional and are enabled only according to the configuration settings.\n  - Through ComfyUI-Impact-Subpack, you can access a variety of detection models via UltralyticsDetectorProvider.\n* Between versions 2.22 and 2.21, there is partial compatibility loss in Detailer workflows. Executing existing workflows may produce errors. An output named \"enhanced_alpha_list\" has been added to Detailer-related nodes.\n* The cv2 permission error that occurred during Impact Pack installation was fixed in version 2.21.4. Note, however, that the latest versions of ComfyUI and ComfyUI-Manager are required.\n* The \"PreviewBridge\" feature may not work properly on ComfyUI versions released before Jul 1, 2023.\n* Attempting to load \"ComfyUI-Impact-Pack\" on ComfyUI versions released before Jun 27, 2023 will fail.\n* With the addition of wildcard support in FaceDetailer, the structure of DETAILER_PIPE-related nodes and Detailer nodes has changed. Existing workflows may malfunction.\n\n\n## Installation\n\n### Recommended\n* Install via [ComfyUI-Manager](https:\u002F\u002Fgithub.com\u002Fltdrdata\u002FComfyUI-Manager).\n\n### Manual installation\n* In a terminal (cmd), navigate to `ComfyUI\u002Fcustom_nodes`.\n* Clone the repository into the `custom_nodes` directory with the following command:\n  ```\n  git clone https:\u002F\u002Fgithub.com\u002Fltdrdata\u002FComfyUI-Impact-Pack comfyui-impact-pack\n  cd comfyui-impact-pack\n  ```\n* Install the dependencies in your Python environment.\n    * For the Windows portable version, run the following inside `ComfyUI\\custom_nodes\\comfyui-impact-pack`:\n        ```\n        ..\\..\\..\\python_embeded\\python.exe -m pip install -r requirements.txt\n        ```\n    * If you use venv or conda, activate the Python environment first, then run:\n        ```\n        pip install -r requirements.txt\n        ```\n\n### Companion pack\n* To use `Ultralytics Detector Provider` for access to various YOLO detection models, you should also install [ComfyUI-Impact-Subpack](https:\u002F\u002Fgithub.com\u002Fltdrdata\u002FComfyUI-Impact-Subpack).\n\n\n## Custom Nodes\n\n### [Detector nodes](https:\u002F\u002Fgithub.com\u002Fltdrdata\u002FComfyUI-extension-tutorials\u002Fblob\u002FMain\u002FComfyUI-Impact-Pack\u002Ftutorial\u002Fdetectors.md)\n  * `SAMLoader (Impact)` - Loads the SAM model.\n  * `ONNXDetectorProvider` - Loads an ONNX model to provide a BBOX_DETECTOR.\n  * 
`CLIPSegDetectorProvider` - This is a wrapper of CLIPSeg to provide a BBOX_DETECTOR.\n    * Requires the ComfyUI-CLIPSeg node extension to be installed.\n  * `SEGM Detector (combined)` - Detects segmentation and returns a mask from the input image.\n  * `BBOX Detector (combined)` - Detects bounding boxes and returns a mask from the input image.\n  * `SAMDetector (combined)` - Using SAM, it extracts the segments at the locations specified by the input SEGS on the input image and outputs them as a unified mask.\n  * `SAMDetector (Segmented)` - Similar to `SAMDetector (combined)`, but it separates the detected segments and outputs them individually. Multiple segments may be found for the same detection area, and the current strategy is to group them arbitrarily in sets of three. This part is expected to be improved in the future.\n    * As a result, it outputs a unified `combined_mask` as well as multiple masks combined as a batch in `batch_masks`.\n    * Although the `batch_masks` may not be completely separated, they provide a certain degree of segmentation.\n  * `Simple Detector (SEGS)` - Operating primarily with `BBOX_DETECTOR`, and with a `SAM_MODEL` or `SEGM_DETECTOR` also provided, this node internally generates improved SEGS through mask operations on both *bbox* and *silhouette*. It is a convenient tool that simplifies a rather complex workflow.\n  * `Simple Detector for Video (SEGS)` - Performs detection on a video composed of image frames. Instead of using a single mask, it detects each frame individually and generates a SEGS object containing a batch of masks.\n  * `SAM2 Video Detector (SEGS)` - Similar to `Simple Detector for Video (SEGS)`, but it uses SAM2's video tracking technology to generate a SEGS object containing a batch of masks.\n      * When using this node, a SAM2 model must be selected in the SAMLoader.\n\n\n### ControlNet, IPAdapter\n  * `ControlNetApply (SEGS)` - To apply ControlNet in SEGS, use this node together with a Preprocessor Provider node from the Inspire Pack.\n    * Either `segs_preprocessor` or `control_image` can be applied optionally. If `control_image` is provided, `segs_preprocessor` is ignored.\n    * If `control_image` is set, the cropped cnet image can be previewed through `SEGSPreview (CNET Image)`. Images generated by `segs_preprocessor` should be verified through the `cnet_images` output of each Detailer.\n    * `segs_preprocessor` performs preprocessing on the fly based on the cropped image during the detailing process, whereas `control_image` is cropped and then passed as input to `ControlNetApply (SEGS)`.\n  * `ControlNetClear (SEGS)` - Clears the ControlNet applied in SEGS.\n  * `IPAdapterApply (SEGS)` - To apply IPAdapter in SEGS, likewise use this node together with a Preprocessor Provider node from the Inspire Pack.\n\n\n### Mask operations\n  * `Pixelwise(SEGS & SEGS)` - Performs a pixelwise AND operation between two SEGS.\n  * `Pixelwise(SEGS - SEGS)` - Subtracts one SEGS from another.\n  * `Pixelwise(SEGS & MASK)` - Performs a pixelwise AND operation between SEGS and MASK.\n  * `Pixelwise(SEGS & MASKS ForEach)` - Performs a pixelwise AND operation between SEGS and MASKS.\n    * Note that this operation is performed on a batch of MASKS, not a single MASK.\n  * `Pixelwise(MASK & MASK)` - Performs a pixelwise AND operation between two masks.\n  * `Pixelwise(MASK - 
MASK)` - 从一个掩码中减去另一个掩码。\n  * `Pixelwise(MASK + MASK)` - 将两个掩码合并。\n  * `SEGM Detector (SEGS)` - 检测分割并从输入图像中返回 SEGS。\n  * `BBOX Detector (SEGS)` - 检测边界框并从输入图像中返回 SEGS。\n  * `Dilate Mask` - 扩张掩码。\n    * 支持使用负值进行腐蚀。\n  * `Gaussian Blur Mask` - 对掩码应用高斯模糊。可用于掩码羽化。\n  * `Mask Rect Area` - 基于百分比创建矩形掩码，并可在预览画布上查看。\n  * `Mask Rect Area (Advanced)` - 基于像素和图像尺寸创建矩形掩码。\n\n\n### [细化节点](https:\u002F\u002Fgithub.com\u002Fltdrdata\u002FComfyUI-extension-tutorials\u002Fblob\u002FMain\u002FComfyUI-Impact-Pack\u002Ftutorial\u002Fdetailers.md)\n  * `Detailer (SEGS)` - 根据 SEGS 对图像进行细化。\n  * `Detailer (SEGS) with auto retry` - 根据 SEGS 对图像进行细化，如果补丁全部为黑色，则会自动重试。\n  * `DetailerDebug (SEGS)` - 根据 SEGS 对图像进行细化。此外，它还提供了监控裁剪图像以及裁剪图像细化结果的能力。\n    * 为了避免在使用“external_seed”时因每次种子不变而导致的重复生成，请在“Detailer…”节点中关闭“seed random generate”选项。\n  * `MASK to SEGS` - 根据掩码生成 SEGS。\n  * `MASK to SEGS For Video` - 根据视频中的掩码生成 SEGS。（由 `MASK to SEGS For AnimateDiff` 更名而来）\n    * 当使用单个掩码时，将其转换为 SEGS 以应用于整个帧。\n    * 当使用批量掩码时，轮廓填充功能将被禁用。\n  * `MediaPipe FaceMesh to SEGS` - 从 MediaPipe 面部网格图像中分离出每个地标，创建带标签的 SEGS。\n    * 通常，通过 MediaPipe 面部网格预处理器生成的图像会被缩小。它会将 MediaPipe 面部网格图像调整回原始大小，以便在处理过程中与 reference_image_opt 提供的参考图像尺寸匹配。\n  * `ToBinaryMask` - 将使用 0 到 255 之间 alpha 值生成的掩码分离为 0 和 255。非零部分始终设置为 255。\n  * `Masks to Mask List` - 该节点将批量的 MASKS 转换为单个掩码列表。\n  * `Mask List to Masks` - 该节点将掩码列表转换为批量掩码形式。\n  * `EmptySEGS` - 提供一个空的 SEGS。\n  * `MaskPainter` - 提供绘制掩码的功能。\n  * `FaceDetailer` - 轻松检测人脸并进行优化。\n  * `FaceDetailer (pipe)` - 轻松检测人脸并进行优化（适用于多通道）。\n  * `MaskDetailer (pipe)` - 这是一个简单的 inpaint 节点，可将 Detailer 应用于掩码区域。\n\n  * `FromDetailer (SDXL\u002Fpipe)`, `BasicPipe -> DetailerPipe (SDXL)`, `Edit DetailerPipe (SDXL)` - 这些是用于 Detailer 的管道函数，旨在利用 SDXL 的细化模型。\n  * `Any PIPE -> BasicPipe` - 将其他自定义节点中并非 BASIC_PIPE，但内部结构与 BASIC_PIPE 相同的 PIPE 值转换为 BASIC_PIPE。如果应用了不兼容的类型，可能会导致运行时错误。\n\n### SEGS 操作节点\n  * `SEGSDetailer` - 对 SEGS 进行细节处理，但不将其贴回原图。\n  * `SEGSPaste` - 将 SEGS 的处理结果贴回原图。\n    * 如果提供了 `ref_image_opt`，则会忽略 SEGS 中包含的图像。取而代之的是，使用 
`ref_image_opt` 中与 SEGS 裁剪区域对应的部分进行粘贴。`ref_image_opt` 中的图像尺寸应与原图尺寸相同。\n    * 此节点可与 AnimateDiff 的处理结果结合使用。\n  * `SEGSPreview` - 提供 SEGS 的预览。\n    * 该选项用于在将改进后的图像合并回原图之前，通过 `SEGSDetailer` 预览其效果。在经过 `SEGSDetailer` 处理前，SEGS 只包含掩码信息，而不包含图像信息。如果连接了 `fallback_image_opt` 作为原图，则 SEGS 将使用原图生成无图像信息的预览。然而，如果 SEGS 已经包含图像信息，则 `fallback_image_opt` 将被忽略。\n    * 此节点可与 AnimateDiff 的处理结果结合使用。\n  * `SEGSPreview (CNET Image)` - 用于调试目的，显示通过 `ControlNetApply (SEGS)` 配置的图像。\n  * `SEGSToImageList` - 将 SEGS 转换为图像列表。\n  * `SEGSToMaskList` - 将 SEGS 转换为掩码列表。\n  * `SEGS Filter (label)` - 根据检测区域的标签对 SEGS 进行过滤。\n  * `SEGS Filter (ordered)` - 根据大小和位置对 SEGS 进行排序，并提取特定范围内的 SEG。\n  * `SEGS Filter (range)` - 仅从 SEGS 中提取尺寸和位置在特定范围内的 SEG。\n  * `SEGS Filter (non max suppression)` - 根据交并比（IoU）阈值，移除重叠度高的 SEG，仅保留置信度最高的检测结果。\n  * `SEGS Filter (intersection)` - 根据交集面积（IoA）阈值，从 segs1 中筛选出与 segs2 中任何 SEG 均无显著重叠的 SEG。\n  * `SEGS Assign (label)` - 按顺序为 SEGS 分配标签。此节点在与 FaceDetailer 的 `[LAB]` 结合使用时非常有用。\n  * `SEGSConcat` - 将 segs1 和 segs2 拼接在一起。如果 segs1 和 segs2 的源形状不同，则 segs2 将被忽略。\n  * `SEGS Merge` - SEGS 包含多个 SEG。SEGS Merge 可将多个 SEG 整合为一个合并后的 SEG。标签将变为 `merged`，置信度则取最低值。应用的 ControlNet 和裁剪后的图像将被移除。\n  * `Picker (SEGS)` - 在输入的 SEGS 中，可通过对话框选择特定的 SEG。若未选择任何 SEG，则输出空的 SEGS。增加 `SEGSDetailer` 的 batch_size 可用于从候选中进行选择。\n  * `Set Default Image For SEGS` - 为 SEGS 设置默认图像。以这种方式设置图像的 SEGS 不再需要设置后备图像。当 override 设置为 false 时，将保留原图。\n  * `Remove Image from SEGS` - 移除通过 “Set Default Image for SEGS” 或 `SEGSDetailer` 配置的 SEGS 图像。移除 SEGS 的图像后，Detailer 节点将基于当前处理的图像而非 SEGS 运行。\n  * `Make Tile SEGS` - [实验性] 从图像创建瓦片形式的 SEGS，以便于使用 Detailer 进行分块放大实验。\n    * `filter_in_segs_opt` 和 `filter_out_segs_opt` 是可选输入。如果提供了这些输入，在创建瓦片时，每个瓦片的掩码将通过与 `filter_in_segs_opt` 的掩码叠加，并排除与 `filter_out_segs_opt` 的掩码重叠来生成。掩码为空的瓦片将不会被创建为 SEGS。\n  * `Dilate Mask (SEGS)` - 对 SEGS 中的掩码进行膨胀\u002F腐蚀操作。\n  * `Gaussian Blur Mask (SEGS)` - 对 SEGS 中的掩码应用高斯模糊。\n  * `SEGS_ELT Manipulation` - 实验性节点\n    * `DecomposeSEGS` - 将 SEGS 分解，以便进行详细操作。\n    * `AssembleSEGS` - 将分解后的 
SEGS 重新组装。\n    * `From SEG_ELT` - 从 SEG_ELT 中提取详细信息。\n    * `Edit SEG_ELT` - 修改 SEG_ELT 中的部分信息。\n    * `Dilate SEG_ELT` - 膨胀 SEG_ELT 的掩码。\n    * `From SEG_ELT` bbox - 从 SEG_ELT 中的 bbox 提取坐标。\n    * `From SEG_ELT` crop_region - 从 SEG_ELT 中的 crop_region 提取坐标。\n  * `Count Elt in SEGS` - 计算 SEGS 中的元素数量。\n\n### 管道节点\n   * `ToDetailerPipe`, `FromDetailerPipe` - 这些节点用于将 Detailer 中使用的多个输入（如模型、VAE 等）打包成一个 DETAILER_PIPE，或从 DETAILER_PIPE 中提取已打包的元素。\n   * `ToBasicPipe`, `FromBasicPipe` - 这些节点用于将模型、CLIP、VAE、正向条件和反向条件打包成一个 BASIC_PIPE，或从 BASIC_PIPE 中提取各个元素。\n   * `EditBasicPipe`, `EditDetailerPipe` - 这些节点用于替换 BASIC_PIPE 或 DETAILER_PIPE 中的部分元素。\n   * `FromDetailerPipe_v2`, `FromBasicPipe_v2` - 功能与 `FromDetailerPipe` 和 `FromBasicPipe` 相同，但额外提供了一个直接导出输入管道的输出。这在编辑 EditBasicPipe 和 EditDetailerPipe 时非常有用。\n   * `Latent Scale (on Pixel Space)` - 本节点将潜在空间转换为像素空间，对其进行放大，然后再转换回潜在空间。\n     * 如果提供了 upscale_model_opt，则使用该模型对像素进行放大，随后使用 scale_method 中提供的插值方法将其缩小到目标分辨率。\n   * `PixelKSampleUpscalerProvider` - 提供一种放大器，它使用 VAEDecode 将潜在空间转换为像素，执行放大操作，再使用 VAEEncode 将其转换回潜在空间，最后进行 k-sampling。这种放大器可以附加到 `Iterative Upscale` 等节点上使用。\n     * 类似于 `Latent Scale (on Pixel Space)`，如果提供了 upscale_model_opt，则会使用该模型进行像素放大。\n   * `PixelTiledKSampleUpscalerProvider` - 与 `PixelKSampleUpscalerProvider` 类似，但它使用 `ComfyUI_TiledKSampler` 和 Tiled VAE 解码器\u002F编码器，以避免在高分辨率下出现 GPU VRAM 问题。\n     * 需要安装 [BlenderNeko\u002FComfyUI_TiledKSampler](https:\u002F\u002Fgithub.com\u002FBlenderNeko\u002FComfyUI_TiledKSampler) 节点扩展。\n\n### PK_HOOK\n  * `DenoiseScheduleHookProvider` - 为 IterativeUpscale 提供的钩子，随着迭代步骤的推进，逐渐将去噪强度调整为目标去噪强度。\n  * `CfgScheduleHookProvider` - 为 IterativeUpscale 提供的钩子，随着迭代步骤的推进，逐渐将 CFG 强度调整为目标 CFG 强度。\n  * `StepsScheduleHookProvider` - 为 IterativeUpscale 提供的钩子，随着迭代步骤的推进，逐渐将采样步数调整为目标步数。\n  * `NoiseInjectionHookProvider` - 在 IterativeUpscale 的每次迭代过程中，会根据预设的时间表逐步调整噪声强度，并向潜在空间注入噪声。\n    * 需要安装 [BlenderNeko\u002FComfyUI_Noise](https:\u002F\u002Fgithub.com\u002FBlenderNeko\u002FComfyUI_Noise) 节点扩展。\n    * 
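上文 PK_HOOK 中的调度类钩子（如 `DenoiseScheduleHookProvider`、`CfgScheduleHookProvider`）的核心思路是随迭代步骤逐渐逼近目标值。下面用一个最小的线性插值函数示意这一思路；插件实际使用的调整曲线以其实现为准，此处仅为概念演示：

```python
def scheduled_denoise(start: float, target: float, step: int, total_steps: int) -> float:
    """返回第 step 步（1..total_steps）时的去噪强度。
    线性插值示意：从 start 逐步过渡到 target（概念演示，非插件实际实现）。"""
    if total_steps <= 1:
        return target
    t = (step - 1) / (total_steps - 1)
    return start + (target - start) * t
```

例如 `start=0.6`、`target=0.2`、共 5 步时，各步依次得到 0.6、0.5、0.4、0.3、0.2，即每次迭代的去噪量平滑收敛到目标值。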
种子用于生成初始噪声值，每次迭代时种子值会递增 1。\n    * source 参数决定生成 CPU 噪声还是 GPU 噪声。\n    * 目前仅提供一种简单的时间表：在每次迭代过程中，噪声强度从 start_strength 逐渐变化到 end_strength。\n  * `UnsamplerHookProvider` - 在每次迭代中应用 Unsampler。使用此节点前需先安装 ComfyUI_Noise 扩展。\n  * `PixelKSampleHookCombine` - 用于连接两个 PK_HOOK。首先执行 hook1，然后再执行 hook2。\n    * 如果希望同时调整 CFG 和去噪强度，可以通过 PixelKSampleHookCombine 将 CfgScheduleHookProvider 和 DenoiseScheduleHookProvider 组合起来。\n \n\n### DETAILER_HOOK\n  * `NoiseInjectionDetailerHookProvider` - `detailer_hook` 是 `Detailer` 中的一个钩子，在处理每个 SEGS 的过程中注入噪声。\n  * `UnsamplerDetailerHookProvider` - 在每次循环中应用 Unsampler。使用此节点前需先安装 ComfyUI_Noise 扩展。\n  * `DenoiseSchedulerDetailerHookProvider` - 在循环进行的过程中，Detailer 的去噪强度会被调整至 `target_denoise`。\n  * `CoreMLDetailerHookProvider` - CoreML 仅支持 512x512、512x768、768x512、768x768 尺寸的采样。CoreMLDetailerHookProvider 会精确地将裁剪区域的放大尺寸固定为这些规格。使用此钩子时，无论 guide_size 如何，都会始终选择上述尺寸。然而，如果 guide_size 过小，则可能会出现跳过处理的情况。\n  * `DetailerHookCombine` - 用于连接两个 DETAILER_HOOK。与 PixelKSampleHookCombine 类似。\n  * `SEGSOrderedFilterDetailerHook`、`SEGSRangeFilterDetailerHook`、`SEGSLabelFilterDetailerHook` - 这些是包装节点，通过创建 DETAILER_HOOK，为 FaceDetailer 或 Detector 提供 SEGSFilter 节点的功能。\n  * `PreviewDetailerHook` - 连接此钩子节点，有助于在每次完成 SEGS 细化任务后查看预览效果。当处理大量 SEGS 时（例如制作瓦片 SEGS），它可以让用户逐步监控处理进度。\n    * 由于该钩子是在最终粘贴回原始图像时应用的，因此对 `SEGSDetailer` 等节点没有影响。\n  * `VariationNoiseDetailerHookProvider` - 向 Detailer 应用变异种子（variation seed）。可以通过组合方式分多个阶段应用。\n  * `CustomSamplerDetailerHookProvider` - 应用一个允许在 Detailer 节点中使用自定义采样器的钩子。当使用 `DetailerHookCombine` 时，会优先应用第一个钩子中的采样器。\n  * `LamaRemoverDetailerHookProvider` - 在细化阶段，将 Lama Remover 应用于放大后的图像。如果将 `skip_sampling` 设置为 True，则可跳过细化采样阶段，单独使用 Lama Remover 直接移除检测到的区域。\n      * 不适用于 **AnimateDiff** Detailer。使用 `DetailerHookCombine` 时，只有当所有钩子都设置为 `True` 时，`skip_sampling` 才会生效。\n      * 使用此节点前，需安装位于 [Layer-norm\u002Fcomfyui-lama-remover](https:\u002F\u002Fgithub.com\u002FLayer-norm\u002Fcomfyui-lama-remover) 的节点包。\n\n\n### 迭代式放大节点\n  * `Iterative Upscale (Latent\u002Fon Pixel Space)` - 该放大器接收一个输入放大器，将缩放因子拆分为若干步骤，然后逐次进行放大操作。此节点以潜在空间作为输入，输出也为潜在空间。\n  * 
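迭代式放大“将缩放因子拆分为若干步骤、逐次放大”的一种常见做法是按几何级数分摊：每步放大 `scale_factor ** (1/steps)` 倍。下面是一个概念示意（函数名与分摊策略均为假设，非插件的实际实现）：

```python
def iterative_sizes(width: int, height: int, scale_factor: float, steps: int):
    """返回每一步放大之后的 (w, h) 尺寸列表。
    假设按几何级数分摊：每步放大 scale_factor ** (1/steps) 倍（概念示意）。"""
    per_step = scale_factor ** (1.0 / steps)
    sizes = []
    w, h = float(width), float(height)
    for _ in range(steps):
        w *= per_step
        h *= per_step
        sizes.append((round(w), round(h)))
    return sizes
```

以正文后面提到的 304x512 图像放大 3 倍为例，分 3 步时中间尺寸逐步增大，最后一步恰好到达 912x1536。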
`Iterative Upscale (Image)` - 该放大器接收一个输入放大器，将缩放因子拆分为若干步骤，然后逐次进行放大操作。此节点以图像作为输入，输出也为图像。\n    * 内部实现上，该节点使用的是 `Iterative Upscale (Latent)`。\n\n### 双采样器节点\n* `TwoSamplersForMask` - 该节点可以根据掩码区域应用两种不同的采样器。掩码值为 0 的区域使用 base_sampler，而掩码值为 1 的区域则使用 mask_sampler。\n  * 注意：无法使用通过 VAEEncodeForInpaint 编码的潜在空间。\n* `KSamplerProvider` - 这是一个包装器，使 KSampler 能够在 TwoSamplersForMask 和 TwoSamplersForMaskUpscalerProvider 中使用。\n* `TiledKSamplerProvider` - 这是 ComfyUI_TiledKSampler 的包装器，用于提供 KSAMPLER。\n  * 需要安装 [BlenderNeko\u002FComfyUI_TiledKSampler](https:\u002F\u002Fgithub.com\u002FBlenderNeko\u002FComfyUI_TiledKSampler) 节点扩展。\n  \n* `TwoAdvancedSamplersForMask` - 与 TwoSamplersForMask 类似，但二者的操作方式有所不同：TwoSamplersForMask 只有在基础区域的所有采样完成后，才会对掩码区域进行采样；而 TwoAdvancedSamplersForMask 则会在每一步中依次对基础区域和掩码区域进行采样。\n* `KSamplerAdvancedProvider` - 这是一个包装器，使 KSampler 能够在 TwoAdvancedSamplersForMask 和 RegionalSampler 中使用。\n  * sigma_factor：通过将去噪时间表乘以 sigma_factor，可以根据配置的去噪程度调整去噪量。\n\n* `TwoSamplersForMaskUpscalerProvider` - 这是一个放大器，扩展了 TwoSamplersForMask 的功能，使其能够在 Iterative Upscale 中使用。\n  * `TwoSamplersForMaskUpscalerProviderPipe` 是 TwoSamplersForMaskUpscalerProvider 的管道版本。\n\n### 图像工具\n  * `PreviewBridge (image)` - 此自定义节点可在使用 Clipspace 的 MaskEditor 功能时，与图像桥接一起使用。\n  * `PreviewBridge (latent)` - 此自定义节点可在使用 Clipspace 的 MaskEditor 功能时，与潜在图像桥接一起使用。\n    * 如果输入的是带有掩码的潜在变量，则会显示该掩码。此外，掩码输出会提供在潜在变量中设置的掩码。\n    * 如果输入的是不带掩码的潜在变量，则会原样输出原始潜在变量，但掩码输出会将整个区域都视为掩码。\n    * 当通过 MaskEditor 设置掩码时，掩码会被应用到潜在变量上，并且输出中会包含存储的掩码。相同的掩码也会作为掩码输出。\n    * 当连接了 `vae_opt` 时，其优先级高于 `preview_method`。\n  * `ImageSender`, `ImageReceiver` - 在 ImageSender 中生成的图像会自动发送到具有相同 link_id 的 ImageReceiver。\n  * `LatentSender`, `LatentReceiver` - 在 LatentSender 中生成的潜在变量会自动发送到具有相同 link_id 的 LatentReceiver。\n    * 此外，LatentSender 是通过 PreviewLatent 实现的，它会将潜在变量以负载（payload）形式存储在图像缩略图中。\n    * 由于 ComfyUI 当前的结构限制，无法区分 SDXL 潜在变量和 SD1.5\u002FSD2.1 潜在变量。因此，它会使用 SD1.5 方法对潜在变量进行解码并生成缩略图。\n\n\n### 切换节点\n  * `Switch (image,mask)`, `Switch (latent)`, 
`Switch (SEGS)` - 在多个输入中，选择由选择器指定的输入并输出。必须提供第一个输入，其他输入为可选。但是，如果选择器指定的输入未连接，可能会发生错误。\n  * `Switch (Any)` - 这是一个可以接受任意数量输入并产生单个输出的切换节点。其类型会在连接到任何节点时确定，连接更多输入会增加可用的连接槽位。\n  * `Inversed Switch (Any)` - 与 `Switch (Any)` 相反，它接受一个输入并从多个输出中选择一个。\n  * 注意：请参阅此[教程](https:\u002F\u002Fgithub.com\u002Fltdrdata\u002FComfyUI-extension-tutorials\u002Fblob\u002FMain\u002FComfyUI-Impact-Pack\u002Ftutorial\u002Fswitch.md)\n\n\n### [通配符](http:\u002F\u002Fgithub.com\u002Fltdrdata\u002FComfyUI-extension-tutorials\u002Fblob\u002FMain\u002FComfyUI-Impact-Pack\u002Ftutorial\u002FImpactWildcard.md) 节点\n  * 这些节点支持 `__wildcard-name__` 形式的语法以及 `{a|b|c}` 等动态提示语法。\n  * 通配符文件可以通过将 `.txt` 或 `.yaml` 文件放置在 `ComfyUI-Impact-Pack\u002Fwildcards` 或 `ComfyUI-Impact-Pack\u002Fcustom_wildcards` 路径下使用。\n    * 您可以下载并使用这种格式的 [Wildcard YAML](https:\u002F\u002Fcivitai.com\u002Fmodels\u002F138970\u002Fbillions-of-wildcards-all-in-one) 文件。\n    * 首次执行后，您可以在创建的 `ComfyUI-Impact-Pack\u002Fimpact-pack.ini` 文件中的 `custom_wildcards` 条目中更改自定义通配符路径。\n  * `ImpactWildcardProcessor` - 通过处理文本中的通配符来生成文本。如果模式设置为“填充”，则每次执行时都会生成动态提示，并将输入内容填充到第二个文本框中。如果模式设置为“固定”，则第二个文本框的内容保持不变。\n    * 当以“固定”模式生成图像时，用于该特定生成的提示会存储在元数据中。\n  * `ImpactWildcardEncode` - 类似于 ImpactWildcardProcessor，此节点提供了 LoRA 的加载功能（例如 `\u003Clora:some_awesome_lora:0.7:1.2>`）。在所有 LoRA 加载完成后，填充后的提示会使用 Clip 进行编码。\n    * 如果安装了 `Inspire Pack`，您可以使用 **Lora Block Weight**，格式为 `LBW=lbw spec;`\n    * `\u003Clora:chunli:1.0:1.0:LBW=B11:0,0,0,0,0,0,0,0,0,0,A,0,0,0,0,0,0;A=0.;>`, `\u003Clora:chunli:1.0:1.0:LBW=0,0,0,0,0,0,0,0,0,0,A,B,0,0,0,0,0;A=0.5;B=0.2;>`, `\u003Clora:chunli:1.0:1.0:LBW=SD-MIDD;>`\n\n\n### 区域采样\n  * 这些节点能够通过掩码划分区域并进行部分采样。与 TwoSamplersForMask 不同，每个区域的采样会在每一步中应用。\n  * `RegionalPrompt` - 此节点结合用于指定区域的 **掩码** 和应用于每个区域的 **采样器**，以创建 `REGIONAL_PROMPTS`。\n  * `CombineRegionalPrompts` - 将多个 `REGIONAL_PROMPTS` 组合在一起，形成一个单一的 `REGIONAL_PROMPTS`。\n  * `RegionalSampler` - 此节点使用基础采样器和区域提示进行采样。基础采样器的采样会在每一步中执行，而每个区域的采样则通过绑定到该区域的采样器进行。\n    * overlap_factor - 
指定每个区域的重叠量，以便与掩码外部区域更好地融合。\n    * restore_latent - 在对每个区域进行采样时，将掩码外部的区域恢复到基础潜在变量，从而防止在区域采样过程中向掩码外部引入额外噪声。\n  * `RegionalSamplerAdvanced` - 这是 RegionalSampler 的高级版本。您可以使用 `step` 而不是 `denoise` 来控制它。\n    > 注意：`sde` 采样器和 `uni_pc` 采样器在采样的每一步中都会引入额外的噪声。为了缓解这一点，在对每个区域进行采样时，`uni_pc` 采样器会额外应用 `dpmpp_fast`，而 sde 采样器则会额外应用 `dpmpp_2m` 采样器。\n\n\n### Impact KSampler\n  * 这些采样器支持 basic_pipe 以及 AYS\u002FOSS\u002FGITS 调度器。\n  * `KSampler (pipe)` - KSampler 的管道版本。\n  * `KSampler (advanced\u002Fpipe)` - KSampler Advanced 的管道版本。\n  * 当将调度器小部件转换为输入时，请参考 `Impact Scheduler Adapter` 节点以解决兼容性问题。\n  * `GITSScheduler Func Provider` - GITSScheduler 的调度函数提供者。\n\n\n### 批量\u002F列表工具\n  * `Image Batch to Image List` - 将图像批次转换为图像列表。\n    - 您可以使用在多批次中生成的图像来处理它们。\n  * `Image List to Image Batch` - 将图像列表转换为图像批次。\n  * `Make Image List` - 将多张图像转换成一个图像列表。\n  * `Make Image Batch` - 将多张图像转换成一个图像批次。\n    - 图像输入可以根据需要进行扩展。\n  * `Masks to Mask List`, `Mask List to Masks`, `Make Mask List`, `Make Mask Batch` - 这些节点的功能与上述节点相同，只是输入为掩码而非图像。\n  * `Flatten Mask Batch` - 将掩码批次展平为单个掩码。对于非二值掩码，不能保证正常运行。\n  * `Make List (Any)` - 创建包含任意值的列表。\n  * `Select Nth Item (Any list)` - 从列表中选择第 N 项。如果索引超出范围，则返回列表中的最后一项。\n\n### 逻辑节点（实验性）\n  * 这些节点是实验性的，旨在实现循环和动态切换的逻辑。\n  * `ImpactCompare`、`ImpactConditionalBranch`、`ImpactConditionalBranchSelMode`、`ImpactInt`、`ImpactBoolean`、`ImpactValueSender`、`ImpactValueReceiver`、`ImpactImageInfo`、`ImpactMinMax`、`ImpactNeg`、`ImpactConditionalStopIteration`\n  * `ImpactIsNotEmptySEGS` - 该节点仅在输入的SEGS不为空时返回`true`。\n  * `ImpactIfNone` - 如果any_input为None，则返回`true`；否则返回`false`。\n  * `Queue Trigger` - 当此节点执行时，它会添加一个新的队列来协助重复性任务。只有当信号状态发生变化时，才会执行。\n  * `Queue Trigger (Countdown)` - 类似于Queue Trigger，它也会添加一个队列，但仅当计数大于1时才添加，并且每次运行时将计数减1。\n  * `Sleep` - 等待指定的时间（以秒为单位）。\n  * `Set Widget Value` - 此节点将可选输入之一设置为指定节点的控件值。如果类型不匹配，可能会发生错误。\n  * `Set Mute State` - 此节点更改特定节点的静音状态。\n  * `Control Bridge` - 此节点根据`mode`和`behavior`修改连接的控制节点的状态。如果有需要更改的节点，当前执行会被暂停，静音状态会被更新，并插入一个新的提示队列。\n    * 
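上文“批量\u002F列表工具”中批次（Batch）与列表（List）的区别，可以用张量形状来直观理解：批次是单个 `[N, H, W, C]` 数组，列表则是 N 个 `[1, H, W, C]` 数组。下面是一个概念示意（假设图像以 NumPy 数组表示，函数名为自拟，非插件内部实现）：

```python
import numpy as np

def image_batch_to_list(batch: np.ndarray):
    """[N, H, W, C] 的图像批次 -> N 个 [1, H, W, C] 图像组成的列表。"""
    return [batch[i:i + 1] for i in range(batch.shape[0])]

def image_list_to_batch(images):
    """图像列表 -> 单个批次。假设各图像的 H/W/C 一致（概念示意）。"""
    return np.concatenate(images, axis=0)
```

批次中的所有图像会被一次性处理，而列表中的图像会被逐个依次处理，这也是这两类转换节点存在的原因。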
当`mode`为`active`时，无论行为如何，都会使连接的控制节点处于激活状态。\n    * 当`mode`为`Bypass\u002FMute`时，会根据行为是`Bypass`还是`Mute`来改变连接节点的状态。\n    * **局限性**：由于这些特性，当批次数量超过1时，该节点无法正常工作。此外，在Control Bridge之前，如果种子被随机化，或者节点状态被`Queue Trigger`、`Set Widget Value`、`Set Mute`等操作改变，也无法保证其正常运行。\n    * 使用此节点时，请确保将`Queue Trigger`、`Set Widget Value`、`Set Mute State`等操作安排在工作流的最后部分执行。\n    * 如果希望每次迭代都更改种子值，请确保在工作流的最后执行Set Widget Value，而不是使用随机化功能。\n      * 只要种子变化发生在Control Bridge部分之后，就不会有问题。\n  * `Remote Boolean (on prompt)`、`Remote Int (on prompt)` - 在提示开始时，此节点会强制设置`node_id`的`widget_value`。如果目标控件类型不同，则会被忽略。\n  * 您可以通过[ComfyUI-Manager](https:\u002F\u002Fgithub.com\u002Fltdrdata\u002FComfyUI-Manager)以`Badge: #ID Nickname`的格式查看`node_id`。\n  * 用于实现循环功能的实验性节点集合（教程将在稍后提供 \u002F [示例工作流](test\u002Floop-test.json))。\n\n\n### 局限性\n* `Impact Pack`中的许多节点使用通配符类型，以允许任意的输入输出连接。一旦ComfyUI正式支持**动态类型**，这种方法将被取代。在此之前，虽然这些节点可以正常工作，但在类型验证时仍可能出现错误信息。\n\n\n### HuggingFace节点\n  * 这些节点基于HuggingFace仓库中的模型提供功能。\n  * 可以通过`HF_HOME`环境变量更改HuggingFace模型缓存的存储路径。\n  * `HF Transformers Classifier Provider` - 这是一个基于HuggingFace的transformers模型提供分类器的节点。\n    * 参数`repo id`应包含HuggingFace的仓库ID。当`preset_repo_id`设置为`Manual repo id`时，需在`manual_repo_id`中手动输入仓库ID。\n    * 例如，`rizvandwiki\u002Fgender-classification-2`是一个提供性别分类模型的仓库。\n  * `SEGS Classify` - 此节点利用由`HF Transformers Classifier Provider`加载的`TRANSFORMERS_CLASSIFIER`对`SEGS`进行分类。\n    * 参数`expr`允许使用如`label > number`的形式，当`preset_expr`为`Manual expr`时，则使用`manual_expr`中输入的表达式。\n    * 例如，在`male \u003C= 0.4`的情况下，如果分类结果中`male`标签的得分小于或等于0.4，则将其归类为`filtered_SEGS`，否则归类为`remained_SEGS`。\n      * 支持的标签请参考相应HuggingFace仓库的`config.json`文件。\n    * `#Female`和`#Male`是用于方便起见而将多个标签（如`Female, women, woman, ...`）分组的符号，而非单个标签。\n\n\n### 其他节点\n  * `Impact Scheduler Adapter` - 随着AYS加入Impact Pack和Inspire Pack的日程安排器中，在将现有日程安排器控件转换为输入时会出现兼容性问题。Impact Scheduler Adapter允许间接连接。\n  * `StringListToString` - 将字符串列表转换为字符串。\n  * `WildcardPromptFromString` - 从字符串创建用于detailer的带标签通配符。\n    * 
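上文 `SEGS Classify` 使用的 `label > number` 形式表达式，可以用一个最小的解析与求值函数来理解。假设分类结果是“标签到得分”的字典（此处的函数与解析方式均为概念演示，非插件的实际实现）：

```python
import operator

# 表达式形如 "male <= 0.4"：标签、比较运算符、数值，以空格分隔
_OPS = {"<": operator.lt, "<=": operator.le,
        ">": operator.gt, ">=": operator.ge, "=": operator.eq}

def classify(scores: dict, expr: str) -> bool:
    """表达式成立则归入 filtered_SEGS（返回 True），否则归入 remained_SEGS。
    缺失的标签按得分 0.0 处理（示意性假设）。"""
    label, op, number = expr.split()
    return _OPS[op](scores.get(label, 0.0), float(number))
```

例如对 `male <= 0.4`：得分 `{"male": 0.3}` 的 SEG 会被归入 filtered，`{"male": 0.5}` 则归入 remained，与正文描述的行为一致。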
该节点与MakeTileSEGS配合使用效果良好。[[链接](https:\u002F\u002Fgithub.com\u002Fltdrdata\u002FComfyUI-Impact-Pack\u002Fpull\u002F536#discussion_r1586060779)]\n\n  * `String Selector` - 选择并返回字符串的一部分。当`multiline`模式关闭时，它简单地返回选择器指向的那一行的字符串。当`multiline`模式开启时，它会根据以`#`开头的行分割字符串并返回。如果`select`值大于项目总数，它会从第一行重新开始计数并返回相应的结果。\n  * `Combine Conditionings` - 接受多个conditioning作为输入，并将它们合并为一个conditioning。\n  * `Concat Conditionings` - 接受多个conditioning作为输入，并将它们串联成一个conditioning。\n  * `Negative Cond Placeholder` - 像FLUX.1这样的模型不使用Negative Conditioning。这是一个为它们准备的占位符节点。您可以用此节点替换Impact KSampler、KSampler (Inspire)和Detailer中使用的Negative Conditioning，从而使用FLUX.1。\n  * `Execution Order Controller` - 一个辅助节点，可以强制控制节点的执行顺序。\n    * 将应首先执行的节点的输出连接到信号，并使随后执行的节点的输入经过此节点。\n  * `List Bridge` - 当列表输出通过此节点时，它会收集并整理数据后再转发，从而确保前一阶段的子工作流已完成。\n\n## 功能\n* `交互式 SAM 检测器（剪贴板空间）` - 当您右键单击具有 'MASK' 和 'IMAGE' 输出的节点时，会打开一个上下文菜单。从此菜单中，您可以选择使用“在 SAM 检测器中打开”来创建 SAM Mask 的对话框，或者使用“复制（剪贴板空间）”复制内容（很可能是掩码数据），然后从剪贴板空间菜单中使用“Impact SAM 检测器”生成掩码，并使用“粘贴（剪贴板空间）”将其粘贴。\n* 提供检测功能，用于识别在样本执行过程中混合来自 `SDXL Base`、`SDXL Refiner`、`SD1.x`、`SD2.x` 等检查点的模型和片段时出现的错误，并报告相应的错误信息。\n\n\n## 如何安装？\n\n### 通过 ComfyUI-Manager 安装（推荐）\n* 在 ComfyUI-Manager 中搜索 `ComfyUI Impact Pack`，然后点击“安装”按钮。\n\n### 手动安装（不推荐）\n1. `cd custom_nodes`\n2. `git clone https:\u002F\u002Fgithub.com\u002Fltdrdata\u002FComfyUI-Impact-Pack`\n3. `cd ComfyUI-Impact-Pack`\n4. `pip install -r requirements.txt`\n    * **重要提示**：\n        * 必须在运行 ComfyUI 的 Python 环境中进行安装。\n        * 对于便携版，请使用 `\u003Cinstalled path>\\python_embeded\\python.exe -m pip` 而不是 `pip`。对于 `venv`，请先激活 `venv`，然后再使用 `pip`。\n5. 
重启 ComfyUI\n\n* 注意1：如果在安装过程中出现错误，请参阅[故障排除页面](troubleshooting\u002FTROUBLESHOOTING.md)以获取帮助。\n* 注意2：您可以使用此 Colab 笔记本 [colab notebook](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fltdrdata\u002FComfyUI-Impact-Pack\u002Fblob\u002FMain\u002Fnotebook\u002Fcomfyui_colab_impact_pack.ipynb) 来启动它。该笔记本会自动将 Impact Pack 下载到 custom_nodes 目录，安装经过测试的依赖项并运行它。\n* 注意3：如果您在 `ComfyUI\u002Fcustom_nodes\u002F` 目录中创建一个名为 `skip_download_model` 的空文件，那么在安装 Impact Pack 时将会跳过模型下载步骤。\n\n\n## 软件包依赖（如果需要手动设置）\n\n* 使用 pip 安装\n   * segment-anything\n   * scikit-image\n   * piexif \n   * opencv-python\n   * scipy\n   * numpy\u003C2\n   * dill\n   * matplotlib\n   * （可选）onnxruntime\n   * （已弃用）openmim      # 用于 mim\n   * （已弃用）pycocotools  # 用于 mim\n   \n* Linux 软件包（Ubuntu）\n  * libgl1-mesa-glx\n  * libglib2.0-0\n\n\n## 配置示例\n* 当您首次运行 Impact Pack 时，会在 Impact Pack 目录中自动生成一个 `impact-pack.ini` 文件。您可以修改此配置文件以自定义默认行为。\n  * `dependency_version` - 不要修改此选项\n  * `sam_editor_cpu` - 使用 CPU 而不是 GPU 进行 `SAM 编辑器` 操作\n  * sam_editor_model：指定 SAM 编辑器使用的 SAM 模型。\n    * 您可以通过 ComfyUI-Manager 下载各种 SAM 模型。\n    * SAM 模型路径：`ComfyUI\u002Fmodels\u002Fsams`\n```\n[default]\nsam_editor_cpu = False\nsam_editor_model = sam_vit_b_01ec64.pth\n```\n\n\n## 其他资源（安装时自动下载）\n\n* ComfyUI\u002Fmodels\u002Fsams \u003C= https:\u002F\u002Fdl.fbaipublicfiles.com\u002Fsegment_anything\u002Fsam_vit_b_01ec64.pth\n\n\n## 故障排除页面\n* [故障排除页面](troubleshooting\u002FTROUBLESHOOTING.md)\n\n## 使用方法（DDetailer 功能）\n\n#### 1. 
基本的自动人脸检测与细化示例。\n![simple](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fltdrdata_ComfyUI-Impact-Pack_readme_5b85282df17f.png)\n* 由于低分辨率导致损坏的人脸，通过生成和合成高分辨率图像来恢复细节。\n* FaceDetailer 节点结合了用于人脸检测的 Detector 节点和用于图像增强的 Detailer 节点。更详细的说明请参阅[高级教程](https:\u002F\u002Fgithub.com\u002Fltdrdata\u002FComfyUI-extension-tutorials\u002Fraw\u002FMain\u002FComfyUI-Impact-Pack\u002Ftutorial\u002Fadvanced.md)。\n* FaceDetailer 的 MASK 输出提供了检测和增强区域的可视化信息。\n\n![simple-orig](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fltdrdata_ComfyUI-Impact-Pack_readme_6c3cacd86935.png) ![simple-refined](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fltdrdata_ComfyUI-Impact-Pack_readme_a8c316c87db7.png)\n* 可以看到，左侧图像中的人脸细节在右侧图像中得到了显著提升。\n\n#### 2. 两步细化（修复严重损坏的人脸）\n![2pass-workflow-example](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fltdrdata_ComfyUI-Impact-Pack_readme_cf788db878ec.png)\n* 虽然可以将两个 FaceDetailer 节点串联起来实现两步处理，但也可以通过 DETAILER_PIPE 传递 KSampler 中常用的多种输入，因此使用 FaceDetailerPipe 可以更方便地进行配置。\n* 在第一遍中，只需恢复大致轮廓，因此可以使用合理的分辨率和较低的选项进行修复。不过，如果此时增加膨胀值，不仅人脸会被纳入修复范围，周围的区域也会受到影响，因此在需要对脸部以外的部分进行重塑时会很有用。\n\n![2pass-example-original](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fltdrdata_ComfyUI-Impact-Pack_readme_b187dc44d9bf.png) ![2pass-example-middle](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fltdrdata_ComfyUI-Impact-Pack_readme_1c12d482aabf.png) ![2pass-example-result](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fltdrdata_ComfyUI-Impact-Pack_readme_f9c354b88b4a.png)\n* 第一阶段将严重损坏的人脸恢复到一定程度，第二阶段则进一步恢复细节。\n\n#### 3. 
人脸边界框 + 人物轮廓分割（防止背景失真）\n![combination-workflow-example](https:\u002F\u002Fgithub.com\u002Fltdrdata\u002FComfyUI-extension-tutorials\u002Fraw\u002FMain\u002FComfyUI-Impact-Pack\u002Fimages\u002Fcombination.jpg)\n![combination-example-original](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fltdrdata_ComfyUI-Impact-Pack_readme_68a3a54a8966.png) ![combination-example-refined](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fltdrdata_ComfyUI-Impact-Pack_readme_462c97d8fb8e.png)\n\n* 强调细节的人脸合成被精细地对齐到面部轮廓上，可以看出它并未影响到面部以外的图像部分。\n\n* BBoxDetectorForEach 节点用于检测人脸，而 SAMDetectorCombined 节点则用于找到与检测到的人脸相关的分割区域。通过将这两种方式获得的掩码输入到 Segs & Mask 节点中，可以生成基于分割精确相交的掩码。如果将此掩码输入到 DetailerForEach 节点中，则仅能对目标区域进行高分辨率重建并将其合成到原图中。\n\n#### 4. 迭代式放大\n![upscale-workflow-example](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fltdrdata_ComfyUI-Impact-Pack_readme_e5480f38326f.png)\n \n* IterativeUpscale 节点是一个按 scale_factor 放大图像或潜在表示的节点。在此过程中，放大操作会分步骤逐步进行。\n* IterativeUpscale 接受一个类似于插件的 Upscaler 作为输入，并在每次迭代中使用它。PixelKSampleUpscalerProvider 是一种将潜在表示转换为像素空间并应用 ksampling 的放大器。\n  * upscale_model_opt 是一个可选参数，用于决定是否在模型基础具备放大功能时使用该功能。使用模型自带的放大功能可以显著减少所需的迭代次数。例如，如果使用 x2 放大器，图像或潜在表示会先被放大两倍，然后在每一步中再缩小到目标尺寸，之后才会继续进行后续处理。\n\n* 下图是一张 304x512 像素的图像，以及使用 IterativeUpscale 将其放大至原尺寸三倍后的效果。\n\n![combination-example-original](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fltdrdata_ComfyUI-Impact-Pack_readme_c7c305ad43ea.png) ![combination-example-refined](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fltdrdata_ComfyUI-Impact-Pack_readme_04a400b47aef.png)\n\n\n#### 5. 
交互式 SAM 检测器（Clipspace）\n* 当您右键单击输出 'MASK' 和 'IMAGE' 的节点时，会出现一个名为“在 SAM 检测器中打开”的菜单，如图所示。点击该菜单会打开 SAM 功能中的对话框，允许您生成分割掩码。\n![samdetector-menu](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fltdrdata_ComfyUI-Impact-Pack_readme_467502896f5d.png)\n\n* 单击鼠标左键会在坐标处添加蓝色的正面提示，表示应包含的区域；单击鼠标右键则会添加红色的负面提示，表示应排除的区域。正面提示代表应包含的区域，而负面提示代表应排除的区域。\n* 您可以通过“撤销”按钮移除已添加的点。选择好点位后，点击“检测”按钮即可生成掩码。此外，您还可以通过调整保真度滑块来控制掩码属于置信区的程度。\n\n![samdetector-dialog](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fltdrdata_ComfyUI-Impact-Pack_readme_ec6327cfec56.jpg)\n\n* 如果您是通过节点中的“在 SAM 检测器中打开”选项打开对话框的，则可以直接点击“保存到节点”按钮来应用更改。而如果通过“clipspace”菜单打开对话框，则需点击“保存”按钮将其保存到 clipspace 中。\n\n![samdetector-result](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fltdrdata_ComfyUI-Impact-Pack_readme_6730bc3cd11b.jpg)\n\n* 当您使用节点中反映的掩码执行操作时，可以看到图像和掩码会分别显示。\n\n## 其他教程\n* [ComfyUI-extension-tutorials\u002FComfyUI-Impact-Pack](https:\u002F\u002Fgithub.com\u002Fltdrdata\u002FComfyUI-extension-tutorials\u002Ftree\u002FMain\u002FComfyUI-Impact-Pack) - 在此页面上，您可以找到各种教程和工作流。\n* [高级教程](https:\u002F\u002Fgithub.com\u002Fltdrdata\u002FComfyUI-extension-tutorials\u002Fblob\u002FMain\u002FComfyUI-Impact-Pack\u002Ftutorial\u002Fadvanced.md)\n* [SAM 应用](https:\u002F\u002Fgithub.com\u002Fltdrdata\u002FComfyUI-extension-tutorials\u002Fblob\u002FMain\u002FComfyUI-Impact-Pack\u002Ftutorial\u002Fsam.md)\n* [PreviewBridge](https:\u002F\u002Fgithub.com\u002Fltdrdata\u002FComfyUI-extension-tutorials\u002Fblob\u002FMain\u002FComfyUI-Impact-Pack\u002Ftutorial\u002Fpreviewbridge.md)\n* [Mask Pointer](https:\u002F\u002Fgithub.com\u002Fltdrdata\u002FComfyUI-extension-tutorials\u002Fblob\u002FMain\u002FComfyUI-Impact-Pack\u002Ftutorial\u002Fmaskpointer.md)\n* [ONNX 教程](https:\u002F\u002Fgithub.com\u002Fltdrdata\u002FComfyUI-extension-tutorials\u002Fblob\u002FMain\u002FComfyUI-Impact-Pack\u002Ftutorial\u002FONNX.md)\n* [CLIPSeg 
教程](https:\u002F\u002Fgithub.com\u002Fltdrdata\u002FComfyUI-extension-tutorials\u002Fblob\u002FMain\u002FComfyUI-Impact-Pack\u002Ftutorial\u002Fclipseg.md)\n* [极端高分辨率放大](https:\u002F\u002Fgithub.com\u002Fltdrdata\u002FComfyUI-extension-tutorials\u002Fblob\u002FMain\u002FComfyUI-Impact-Pack\u002Ftutorial\u002Fextreme-upscale.md)\n* [TwoSamplersForMask](https:\u002F\u002Fgithub.com\u002Fltdrdata\u002FComfyUI-extension-tutorials\u002Fblob\u002FMain\u002FComfyUI-Impact-Pack\u002Ftutorial\u002FTwoSamplers.md)\n* [TwoAdvancedSamplersForMask](https:\u002F\u002Fgithub.com\u002Fltdrdata\u002FComfyUI-extension-tutorials\u002Fblob\u002FMain\u002FComfyUI-Impact-Pack\u002Ftutorial\u002FTwoAdvancedSamplers.md)\n* [高级迭代放大：PK_HOOK](https:\u002F\u002Fgithub.com\u002Fltdrdata\u002FComfyUI-extension-tutorials\u002Fblob\u002FMain\u002FComfyUI-Impact-Pack\u002Ftutorial\u002Fpk_hook.md)\n* [高级迭代放大：TwoSamplersForMask 放大提供者](https:\u002F\u002Fgithub.com\u002Fltdrdata\u002FComfyUI-extension-tutorials\u002Fblob\u002FMain\u002FComfyUI-Impact-Pack\u002Ftutorial\u002FTwoSamplersUpscale.md)\n* [交互式 SAM + PreviewBridge](https:\u002F\u002Fgithub.com\u002Fltdrdata\u002FComfyUI-extension-tutorials\u002Fblob\u002FMain\u002FComfyUI-Impact-Pack\u002Ftutorial\u002Fsam_with_preview_bridge.md)\n* [ImageSender\u002FImageReceiver\u002FLatentSender\u002FLatentReceiver](https:\u002F\u002Fgithub.com\u002Fltdrdata\u002FComfyUI-extension-tutorials\u002Fblob\u002FMain\u002FComfyUI-Impact-Pack\u002Ftutorial\u002Fsender_receiver.md)\n* [ImpactWildcardProcessor](https:\u002F\u002Fgithub.com\u002Fltdrdata\u002FComfyUI-extension-tutorials\u002Fblob\u002FMain\u002FComfyUI-Impact-Pack\u002Ftutorial\u002FImpactWildcardProcessor.md)\n\n\n## 致谢\n\nComfyUI\u002F[ComfyUI](https:\u002F\u002Fgithub.com\u002Fcomfyanonymous\u002FComfyUI) - 一个功能强大且模块化的稳定扩散 GUI。\n\ndustysys\u002F[ddetailer](https:\u002F\u002Fgithub.com\u002Fdustysys\u002Fddetailer) - Stable-diffusion-webUI 扩展中的 
DDetailer。\n\nBing-su\u002F[dddetailer](https:\u002F\u002Fgithub.com\u002FBing-su\u002Fdddetailer) - DDetailer 中使用的动漫人脸检测器已更新为兼容 mmdet 3.0.0，并且我们还为 DDetailer 的 pycocotools 依赖项在 Windows 环境中打上了补丁。\n\nfacebook\u002F[segment-anything](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fsegment-anything) - 分割一切！\n\nhysts\u002F[anime-face-detector](https:\u002F\u002Fgithub.com\u002Fhysts\u002Fanime-face-detector) - `anime-face_yolov3` 的创建者，在多种艺术风格上表现出色。\n\nopen-mmlab\u002F[mmdetection](https:\u002F\u002Fgithub.com\u002Fopen-mmlab\u002Fmmdetection) - 目标检测工具集。`dd-person_mask2former` 是基于他们的 [R-50 Mask2Former 实例分割模型](https:\u002F\u002Fgithub.com\u002Fopen-mmlab\u002Fmmdetection\u002Ftree\u002Fmaster\u002Fconfigs\u002Fmask2former#instance-segmentation) 进行迁移学习训练的。\n\nbiegert\u002F[ComfyUI-CLIPSeg](https:\u002F\u002Fgithub.com\u002Fbiegert\u002FComfyUI-CLIPSeg) - 这是一个自定义节点，使 CLIPSeg 技术能够在 ComfyUI 中使用，该技术可以通过提示词来查找分割区域。\n\nBlenderNeok\u002F[ComfyUI-TiledKSampler](https:\u002F\u002Fgithub.com\u002FBlenderNeko\u002FComfyUI_TiledKSampler) - 平铺采样器即使在 GPU 显存较低的情况下也能进行高分辨率采样。\n\nBlenderNeok\u002F[ComfyUI_Noise](https:\u002F\u002Fgithub.com\u002FBlenderNeko\u002FComfyUI_Noise) - 噪声注入功能依赖于该函数以及 slerp 代码来实现噪声变化。\n\nWASasquatch\u002F[was-node-suite-comfyui](https:\u002F\u002Fgithub.com\u002FWASasquatch\u002Fwas-node-suite-comfyui) - ComfyUI 功能强大的自定义节点扩展。\n\nTrung0246\u002F[ComfyUI-0246](https:\u002F\u002Fgithub.com\u002FTrung0246\u002FComfyUI-0246) - 一个不错的绕过技巧！\n\nLayer-norm\u002F[comfyui-lama-remover](https:\u002F\u002Fgithub.com\u002FLayer-norm\u002Fcomfyui-lama-remover) - 使用 `LamaRemoverDetailerHook` 所需。","# ComfyUI-Impact-Pack 快速上手指南\n\nComfyUI-Impact-Pack 是 ComfyUI 的核心增强插件包，主要用于图像的细节修复（Detailer）、目标检测（Detector）、局部重绘及高清放大。它是实现“面部修复”、“手部修复”及复杂局部控制工作流的关键组件。\n\n## 环境准备\n\n*   **系统要求**：Windows \u002F Linux \u002F macOS\n*   **核心依赖**：\n    *   **ComfyUI**: 建议更新至最新版本（需 >= 0.3.63 以兼容最新特性）。\n    *   **Python**: 建议使用 Python 3.10 或 3.11。\n    *   **Git**: 用于克隆仓库。\n*   **可选依赖**：\n    *   若需使用 
YOLO 系列检测模型（如 `UltralyticsDetectorProvider`），需额外安装 [ComfyUI-Impact-Subpack](https:\u002F\u002Fgithub.com\u002Fltdrdata\u002FComfyUI-Impact-Subpack)。\n\n## 安装步骤\n\n### 方法一：通过 ComfyUI-Manager 安装（推荐）\n\n这是最简单且不易出错的方式，可自动处理依赖关系。\n\n1.  确保已安装 **ComfyUI-Manager**。\n2.  启动 ComfyUI，点击右侧菜单的 **\"Manager\"** 按钮。\n3.  选择 **\"Install Custom Nodes\"**。\n4.  在搜索框输入 `Impact Pack`。\n5.  找到 **ComfyUI-Impact-Pack**，点击 **Install**。\n6.  安装完成后，重启 ComfyUI。\n\n> **注意**：如需使用 Ultralytics 检测器，请在 Manager 中同样搜索并安装 **ComfyUI-Impact-Subpack**。\n\n### 方法二：手动安装\n\n适用于无法使用 Manager 或需要特定版本的用户。\n\n1.  进入 ComfyUI 的自定义节点目录：\n    ```bash\n    cd ComfyUI\u002Fcustom_nodes\n    ```\n\n2.  克隆仓库：\n    ```bash\n    git clone https:\u002F\u002Fgithub.com\u002Fltdrdata\u002FComfyUI-Impact-Pack comfyui-impact-pack\n    cd comfyui-impact-pack\n    ```\n\n3.  安装依赖：\n\n    *   **Windows 便携版 (Portable)**：\n        ```bash\n        ..\\..\\..\\python_embeded\\python.exe -m pip install -r requirements.txt\n        ```\n    *   **虚拟环境 (venv\u002Fconda)**：\n        先激活你的环境，然后运行：\n        ```bash\n        pip install -r requirements.txt\n        ```\n        *(国内用户若下载慢，可添加清华源：`pip install -r requirements.txt -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple`)*\n\n4.  重启 ComfyUI。\n\n## 基本使用\n\n最经典的使用场景是利用 **FaceDetailer** 节点自动检测并修复人脸。\n\n### 最简单的工作流示例\n\n1.  **加载模型**：添加 `Load Checkpoint` 节点加载主模型。\n2.  **生成底图**：连接 `KSampler` 生成一张包含人物的图片（此时人脸可能模糊或崩坏）。\n3.  **添加 FaceDetailer**：\n    *   在空白处双击搜索 `FaceDetailer` 并添加。\n    *   **输入连接**：\n        *   `image`: 连接上一步生成的图片。\n        *   `model`, `clip`, `vae`, `positive`, `negative`: 通常直接复用主工作流的对应输出（或连接专门的 Detailer Prompt）。\n        *   `bbox_detector`: 选择一个检测器，推荐使用 `BBOX Detector (combined)` 并在其中加载 `face_yolov8n.pt` 等面部模型；或者直接使用该节点内置的默认检测逻辑。\n    *   **参数设置**：\n        *   `guide_size`: 控制重绘区域的大小倍数（默认 512 即可）。\n        *   `steps`, `cfg`: 设置重绘时的步数和引导系数。\n4.  
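上面 `guide_size` 的作用可以用一个简化的尺寸计算来理解：当检测到的裁剪区域小于 guide_size 时，会先将其放大到 guide_size 再重绘，从而让小脸也能以足够的分辨率采样。下面是一个概念示意（放大策略与 `max_size` 上限均为假设，具体以插件实现为准）：

```python
def detail_scale(crop_w: int, crop_h: int, guide_size: int, max_size: int = 1024) -> float:
    """返回裁剪区域在重绘前的放大倍数（概念示意，非插件实际实现）。
    小于 guide_size 的区域被放大到 guide_size，并受 max_size 上限约束。"""
    longest = max(crop_w, crop_h)
    scale = max(1.0, guide_size / longest)
    return min(scale, max_size / longest)
```

例如一张 128x96 的脸部裁剪区域在 `guide_size=512` 下会被放大 4 倍重绘，这就是低分辨率人脸能恢复细节的原因。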
**查看结果**：`FaceDetailer` 的输出端 `image` 即为修复后的人脸合成图。\n\n### 进阶提示\n*   **SEGS 概念**：本插件核心概念为 `SEGS` (Segments)，代表检测到的多个局部区域集合。大多数高级节点（如 `Detailer (SEGS)`）都需要先通过检测器生成 `SEGS`，再传入进行局部重绘。\n*   **视频支持**：若处理视频帧，请使用 `Simple Detector for Video (SEGS)` 或 `SAM2 Video Detector (SEGS)` 以获得时序稳定的遮罩。","一位电商设计师正在批量生成模特展示图，需要确保人物面部清晰且服装细节完美，同时保持高分辨率以用于广告海报。\n\n### 没有 ComfyUI-Impact-Pack 时\n- 生成的人物面部经常模糊或五官扭曲，必须手动重绘数十次才能碰巧得到一张可用的脸。\n- 想要修复手部或饰品细节时，缺乏自动遮罩功能，只能依靠繁琐的手动蒙版绘制或外部 PS 处理。\n- 直接放大图片会导致画面出现伪影和噪点，无法在保持细节的前提下提升分辨率。\n- 工作流节点连线极其复杂，调整一个参数需要断开并重连多条线，调试效率极低。\n- 难以统一控制检测模型与修复模型的参数，导致批量出图时质量参差不齐。\n\n### 使用 ComfyUI-Impact-Pack 后\n- 利用 Detailer 节点自动检测并重绘人脸，无需反复重试，每张图的人物五官都自然清晰。\n- 通过 Detector 自动识别身体、手部或特定衣物区域生成精准遮罩，实现局部细节的自动化修复。\n- 集成专用 Upscaler 流程，在放大的同时智能补充纹理，直接输出可用于印刷的高清大图。\n- 借助 Pipe 系列节点将复杂的模型参数打包传输，大幅简化连线，让工作流整洁且易于维护。\n- 支持一键切换不同的检测模型（如 SAM2），确保在不同姿态下都能稳定锁定目标区域。\n\nComfyUI-Impact-Pack 将原本依赖运气的“抽卡式”生成，转变为可控、高效且高质量的工业化图像生产流程。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fltdrdata_ComfyUI-Impact-Pack_3438e5c8.png","ltdrdata","Dr.Lt.Data","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fltdrdata_ac2c4d3d.jpg",null,"@Comfy-Org","https:\u002F\u002Fgithub.com\u002Fltdrdata",[80,84,88,92],{"name":81,"color":82,"percentage":83},"Python","#3572A5",77.1,{"name":85,"color":86,"percentage":87},"JavaScript","#f1e05a",11.5,{"name":89,"color":90,"percentage":91},"Shell","#89e051",10.8,{"name":93,"color":94,"percentage":95},"Jupyter Notebook","#DA5B0B",0.6,3064,364,"2026-04-15T20:29:03","GPL-3.0","Windows, Linux, macOS","未说明（作为 ComfyUI 插件，依赖宿主环境的 GPU 配置以运行 SAM、YOLO 等检测模型）","未说明",{"notes":104,"python":105,"dependencies":106},"1. 必须安装 ComfyUI 主程序，且版本建议为 0.3.63 或更高以兼容最新功能。\n2. 若需使用 UltralyticsDetectorProvider (YOLO 模型)，必须单独安装配套插件 'ComfyUI-Impact-Subpack'，不再自动安装。\n3. 推荐使用 ComfyUI-Manager 进行安装和管理；手动安装时需运行 install.py 并安装 requirements.txt。\n4. 部分节点（如 CLIPSegDetectorProvider, MediaPipe FaceMesh）依赖其他第三方 ComfyUI 扩展或特定预处理器。\n5. 
支持 Facebook Research 的 SAM2 视频跟踪模型，需在 SAMLoader 中选择对应模型。","3.8+ (基于 ComfyUI 及 torch 依赖推断，README 提及使用 python_embeded 或 venv\u002Fconda)",[107,108,109,110,111,112,113,114],"ComfyUI>=0.3.63","ultralytics (需通过 ComfyUI-Impact-Subpack 安装)","onnxruntime","clipseg","mediapipe","opencv-python (cv2)","segment-anything (SAM)","segment-anything-2 (SAM2)",[15,35],"2026-03-27T02:49:30.150509","2026-04-17T09:53:28.949053",[119,124,129,134,139,144],{"id":120,"question_zh":121,"answer_zh":122,"source_url":123},36451,"为什么在使用 FaceDetailer 节点时会出现 'AttributeError: DifferentialDiffusion object has no attribute apply' 错误？","这是因为 ComfyUI 核心更新后，DifferentialDiffusion 的 API 从 `apply` 方法变更为 `execute` 方法。解决方法是修改 Impact Pack 源代码：\n1. 打开文件 `custom_nodes\\comfyui-impact-pack\\modules\\impact\\impact_pack.py`。\n2. 找到第 304 行左右的代码：`model = nodes_differential_diffusion.DifferentialDiffusion().apply(model)[0]`。\n3. 将其替换为以下代码：\n```python\nresult = nodes_differential_diffusion.DifferentialDiffusion.execute(model, 1.0)\nmodel = result.model if hasattr(result, 'model') else result[0]\n```\n4. 保存文件并重启 ComfyUI。或者直接更新 Impact Pack 到最新版本以获取修复。","https:\u002F\u002Fgithub.com\u002Fltdrdata\u002FComfyUI-Impact-Pack\u002Fissues\u002F1113",{"id":125,"question_zh":126,"answer_zh":127,"source_url":128},36452,"遇到 'Weights only load failed' 错误提示该怎么办？","该错误通常由 PyTorch 2.6+ 版本默认开启 `weights_only=True` 安全策略引起。最有效的解决方法是将 ComfyUI-Impact-Pack 和 ComfyUI-Impact-Subpack 更新到最新的 nightly 版本（开发版）。\n如果自动更新无效，可以尝试手动删除 `custom_nodes` 文件夹中对应的插件目录，然后重新安装最新版。通常不需要手动修改 `model_whitelist.txt` 或源码即可解决。","https:\u002F\u002Fgithub.com\u002Fltdrdata\u002FComfyUI-Impact-Pack\u002Fissues\u002F931",{"id":130,"question_zh":131,"answer_zh":132,"source_url":133},36453,"Preview Bridge 节点中的蒙版编辑器（Mask Editor）无法使用或点击无反应怎么办？","这是由于 ComfyUI 前端版本过低导致的兼容性问题。请确保您的环境满足以下条件之一：\n1. 将 ComfyUI 更新至最新主分支（master），确保前端版本至少为 v1.37.1（推荐 v1.37.11 或更高）。\n2. 或者在启动 ComfyUI 的脚本参数中添加 `--front-end-version Comfy-Org\u002FComfyUI_frontend@latest` 以强制使用最新前端。\n3. 
如果使用官方发布版，请升级至 ComfyUI v0.10.0 或更高版本（内置前端 v1.36.14+ 已修复此问题）。","https:\u002F\u002Fgithub.com\u002Fltdrdata\u002FComfyUI-Impact-Pack\u002Fissues\u002F1157",{"id":135,"question_zh":136,"answer_zh":137,"source_url":138},36454,"Switch 节点连接多个输入时，为什么第二个输入也显示为 'input 1' 且无法连接？","这不是插件的 Bug，而是 ComfyUI 本身的机制限制。在 ComfyUI 中，只有被连接的输入才会被视为执行节点所需的必要输入。如果 Switch 节点的某些输入端口未正确连接或被系统判定为不完整，节点可能不会按预期执行或显示异常。请检查连线逻辑，确保所有需要的输入都已正确连接到上游节点。","https:\u002F\u002Fgithub.com\u002Fltdrdata\u002FComfyUI-Impact-Pack\u002Fissues\u002F123",{"id":140,"question_zh":141,"answer_zh":142,"source_url":143},36455,"Ultralytics 包中被发现恶意挖矿代码，我是通过 ComfyUI-Impact-Pack 安装的，是否受影响？","此次安全事件是由于攻击者入侵了 PyPi 发布工作流，导致通过 `pip install` 安装的 Ultralytics 特定版本（约 8.3.43 至 8.3.46 之间）包含恶意代码。GitHub 仓库源码本身未被篡改。\n建议措施：\n1. 如果您是通过 pip 安装的 ultralytics，请立即卸载并重新从可信源安装，或升级到已修复的版本。\n2. 检查系统中是否有异常的挖矿进程或网络连接（如连接至 connect.consrensys.com）。\n3. ComfyUI-Impact-Pack 维护者确认恶意代码未直接存在于插件仓库中，但依赖包的风险需用户自行排查。","https:\u002F\u002Fgithub.com\u002Fltdrdata\u002FComfyUI-Impact-Pack\u002Fissues\u002F843",{"id":145,"question_zh":146,"answer_zh":147,"source_url":128},36456,"手动更新 ComfyUI 后出现权重加载错误，且 ComfyUI Manager 无法删除相关节点怎么办？","当自动管理器失效时，可以手动清理：\n1. 进入 ComfyUI 安装目录下的 `custom_nodes` 文件夹。\n2. 找到 `comfyui-impact-pack` 和 `comfyui-impact-subpack` 文件夹。\n3. 直接手动删除这两个文件夹。\n4. 重新启动 ComfyUI，然后通过 Manager 重新安装最新版本的插件，或者直接从 GitHub 克隆最新代码到该目录。这通常能解决因版本不匹配导致的 `weights_only` 加载错误。",[]]