[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-chflame163--ComfyUI_LayerStyle":3,"tool-chflame163--ComfyUI_LayerStyle":61},[4,18,26,36,45,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},8272,"opencode","anomalyco\u002Fopencode","OpenCode 是一款开源的 AI 编程助手（Coding Agent），旨在像一位智能搭档一样融入您的开发流程。它不仅仅是一个代码补全插件，而是一个能够理解项目上下文、自主规划任务并执行复杂编码操作的智能体。无论是生成全新功能、重构现有代码，还是排查难以定位的 Bug，OpenCode 
都能通过自然语言交互高效完成，显著减少开发者在重复性劳动和上下文切换上的时间消耗。\n\n这款工具专为软件开发者、工程师及技术研究人员设计，特别适合希望利用大模型能力来提升编码效率、加速原型开发或处理遗留代码维护的专业人群。其核心亮点在于完全开源的架构，这意味着用户可以审查代码逻辑、自定义行为策略，甚至私有化部署以保障数据安全，彻底打破了传统闭源 AI 助手的“黑盒”限制。\n\n在技术体验上，OpenCode 提供了灵活的终端界面（Terminal UI）和正在测试中的桌面应用程序，支持 macOS、Windows 及 Linux 全平台。它兼容多种包管理工具，安装便捷，并能无缝集成到现有的开发环境中。无论您是追求极致控制权的资深极客，还是渴望提升产出的独立开发者，OpenCode 都提供了一个透明、可信",144296,1,"2026-04-16T14:50:03",[13,35],"插件",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":42,"last_commit_at":43,"category_tags":44,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108322,2,"2026-04-10T11:39:34",[14,15,13],{"id":46,"name":47,"github_repo":48,"description_zh":49,"stars":50,"difficulty_score":42,"last_commit_at":51,"category_tags":52,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 
协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[35,13,15,14],{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":42,"last_commit_at":59,"category_tags":60,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[35,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":73,"owner_company":73,"owner_location":75,"owner_email":73,"owner_twitter":73,"owner_website":73,"owner_url":76,"languages":77,"stars":90,"forks":91,"last_commit_at":92,"license":93,"difficulty_score":42,"env_os":94,"env_gpu":95,"env_ram":96,"env_deps":97,"category_tags":109,"github_topics":73,"view_count":10,"oss_zip_url":73,"oss_zip_packed_at":73,"status":17,"created_at":110,"updated_at":111,"faqs":112,"releases":147},8772,"chflame163\u002FComfyUI_LayerStyle","ComfyUI_LayerStyle","A set of nodes for ComfyUI that can composite layer and mask to achieve Photoshop like functionality.","ComfyUI_LayerStyle 是一套专为 ComfyUI 设计的节点插件，旨在将 Photoshop 中经典的图层合成与蒙版处理功能引入 AI 工作流。它解决了用户在生成式 AI 创作过程中频繁切换软件进行后期处理的痛点，让用户无需离开 ComfyUI 环境，即可在一个集中的工作流中完成复杂的图像合成、遮罩编辑及细节调整，显著提升了从生成到成图的效率。\n\n这款工具特别适合希望深化工作流整合的 AI 艺术家、数字设计师以及进阶创作者使用。对于习惯传统图像处理逻辑但想拥抱 AI 生成的用户而言，它提供了极佳的上手体验。其核心亮点在于成功迁移了多项 Photoshop 
基础功能至节点系统，支持灵活的图层混合模式与精细的蒙版控制。值得注意的是，为了保持核心功能的稳定与轻量，开发者已将部分依赖复杂模型的高级功能（如各类超精度分割、智能抠图及二维码工具等）剥离至独立的 ComfyUI_LayerStyle_Advance 仓库，用户可根据实际需求灵活选配，确保工作流的流畅运行。","# ComfyUI Layer Style\r\n\r\n## Important note\r\nSome nodes with dependencies that are prone to problems have been split into the [ComfyUI_LayerStyle_Advance](https:\u002F\u002Fgithub.com\u002Fchflame163\u002FComfyUI_LayerStyle_Advance) repository, including:     \r\nLayerMask: BiRefNetUltra, LayerMask: BiRefNetUltraV2, LayerMask: LoadBiRefNetModel, LayerMask: LoadBiRefNetModelV2,    \r\nLayerMask: EVFSAMUltra, LayerMask: Florence2Ultra, LayerMask: LoadFlorence2Model, LayerUtility: Florence2Image2Prompt,    \r\nLayerUtility: GetColorTone, LayerUtility: GetColorToneV2, LayerMask: HumanPartsUltra,  LayerMask: BenUltra, LayerMask: LoadBenModel,        \r\nLayerUtility: ImageAutoCrop, LayerUtility: ImageAutoCropV2, LayerUtility: ImageAutoCropV3,    \r\nLayerUtility: ImageRewardFilter, LayerUtility: LoadJoyCaption2Model, LayerUtility: JoyCaption2Split,    \r\nLayerUtility: JoyCaption2, LayerUtility: JoyCaption2ExtraOptions, LayerUtility: LaMa,    \r\nLayerUtility: LlamaVision, LayerUtility: LoadPSD, LayerMask: MaskByDifferent, LayerMask: MediapipeFacialSegment,    \r\nLayerMask: BBoxJoin, LayerMask: DrawBBoxMask, LayerMask: ObjectDetectorFL2, LayerMask: ObjectDetectorMask,    \r\nLayerMask: ObjectDetectorYOLO8, LayerMask: ObjectDetectorYOLOWorld, LayerMask: PersonMaskUltra, LayerMask: PersonMaskUltra V2,    \r\nLayerUtility: PhiPrompt, LayerUtility: PromptEmbellish, LayerUtility: PromptTagger, LayerUtility: CreateQRCode, LayerUtility: DecodeQRCode,    \r\nLayerUtility: QWenImage2Prompt, LayerMask: SAM2Ultra, LayerMask: SAM2VideoUltra, LayerUtility: SaveImagePlus, LayerUtility: SD3NegativeConditioning,    \r\nLayerMask: SegmentAnythingUltra, LayerMask: SegmentAnythingUltra V2, LayerMask: TransparentBackgroundUltra,     \r\nLayerUtility: UserPromptGeneratorTxt2ImgPrompt, LayerUtility: 
UserPromptGeneratorTxt2ImgPromptWithReference, LayerUtility: UserPromptGeneratorReplaceWord,    \r\nLayerUtility: AddBlindWaterMark, LayerUtility: ShowBlindWaterMark, LayerMask: YoloV8Detect\r\n\r\nIf there are recent updates, you need to install [ComfyUI_LayerStyle_Advance](https:\u002F\u002Fgithub.com\u002Fchflame163\u002FComfyUI_LayerStyle_Advance) to ensure that previous workflows do not lose nodes. \r\nIf a problem is caused by splitting the repository, please roll back the plugin version to ```3d4a3526a9d1a19671a133e9215077bda520ee5d```.\r\nOpen the terminal in the plugin directory and use the following command to roll back the version:\r\n```\r\ngit reset --hard 3d4a3526a9d1a19671a133e9215077bda520ee5d\r\n```\r\n\r\n\r\n[中文说明点这里](.\u002FREADME_CN.MD)    \r\n\r\n商务合作请联系email [chflame@163.com](mailto:chflame@163.com).\r\n\r\nFor business cooperation, please contact email [chflame@163.com](mailto:chflame@163.com).\r\n\r\nA set of nodes for ComfyUI that can composite layers and masks to achieve Photoshop-like functionality.  \r\n\r\nIt migrates some basic functions of Photoshop to ComfyUI, aiming to centralize the workflow and reduce the frequency of software switching.  \r\n\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_b763c32151b8.jpg)    \r\n\u003Cfont size=\"1\">*This workflow (title_example_workflow.json) is in the workflow directory.  
\u003C\u002Ffont>\u003Cbr \u002F> \r\n\r\n## Example workflow\r\n\r\nSome JSON workflow files are in the ```workflow``` directory; they are examples of how these nodes can be used in ComfyUI.\r\n\r\n## How to install\r\n\r\n(These instructions take the ComfyUI official portable package and the Aki ComfyUI package as examples; please adjust the dependency environment directory for other ComfyUI environments.)\r\n\r\n### Install plugin\r\n\r\n* It is recommended to use ComfyUI Manager for installation.\r\n\r\n* Or open the cmd window in the plugin directory of ComfyUI, like ```ComfyUI\\custom_nodes```, and type    \r\n  \r\n  ```\r\n  git clone https:\u002F\u002Fgithub.com\u002Fchflame163\u002FComfyUI_LayerStyle.git\r\n  ```\r\n\r\n* Or download the zip file, extract it, and copy the resulting folder to ```ComfyUI\\custom_nodes```    \r\n\r\n### Install dependency packages\r\n\r\n* For the ComfyUI official portable package, double-click ```install_requirements.bat``` in the plugin directory; for the Aki ComfyUI package, double-click ```install_requirements_aki.bat``` in the plugin directory, and wait for the installation to complete.\r\n\r\n* Or install the dependency packages manually: open the cmd window in the ComfyUI_LayerStyle plugin directory, like \r\n  ```ComfyUI\\custom_nodes\\ComfyUI_LayerStyle```, and enter the following command.\r\n\r\n&emsp;&emsp;For the ComfyUI official portable package, type:\r\n\r\n```\r\n..\\..\\..\\python_embeded\\python.exe -s -m pip install -r requirements.txt\r\n.\\repair_dependency.bat\r\n```\r\n\r\n&emsp;&emsp;For the Aki ComfyUI package, type:\r\n\r\n```\r\n..\\..\\python\\python.exe -s -m pip install -r requirements.txt\r\n.\\repair_dependency_aki.bat\r\n```\r\n\r\n* Restart ComfyUI.\r\n\r\n### Download Model Files\r\n\r\nChinese domestic users can download from [BaiduNetdisk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1T_uXMX3OKIWOJLPuLijrgA?pwd=1yye) or [QuarkNetdisk](https:\u002F\u002Fpan.quark.cn\u002Fs\u002F4802d6bca7cb), other users from 
[huggingface.co\u002Fchflame163\u002FComfyUI_LayerStyle](https:\u002F\u002Fhuggingface.co\u002Fchflame163\u002FComfyUI_LayerStyle\u002Ftree\u002Fmain).  \r\nDownload all files and copy them to the ```ComfyUI\\models``` folder. This link provides all the model files required for this plugin.\r\nOr download the model files according to the instructions of each node.     \r\nSome nodes named \"Ultra\" use the vitmatte model; download the [vitmatte model](https:\u002F\u002Fhuggingface.co\u002Fhustvl\u002Fvitmatte-small-composition-1k\u002Ftree\u002Fmain) and copy it to the ```ComfyUI\u002Fmodels\u002Fvitmatte``` folder (it is also included in the download link above). \r\n\r\n## Common Issues\r\n\r\nIf the node cannot load properly or there are errors during use, please check the error message in the ComfyUI terminal window. The following are common errors and their solutions.\r\n\r\n### Warning: xxxx.ini not found, use default xxxx..\r\n\r\nThis warning message indicates that the ini file cannot be found and does not affect usage. If you do not want to see these warnings, please rename all ```*.ini.example``` files in the plugin directory to ```*.ini```.\r\n\r\n### Cannot import name 'guidedFilter' from 'cv2.ximgproc'\r\n\r\nThis error is caused by an incorrect version of the ```opencv-contrib-python``` package, or by this package being overwritten by other opencv packages. \r\n\r\n### NameError: name 'guidedFilter' is not defined\r\n\r\nThe reason for the problem is the same as above.\r\n#### For the issues above, please double-click ```repair_dependency.bat``` (for the official ComfyUI portable package) or ```repair_dependency_aki.bat``` (for ComfyUI-aki-v1.x) in the plugin folder to automatically fix them.\r\n\r\n### Cannot import name 'VitMatteImageProcessor' from 'transformers'\r\n\r\nThis error is caused by an outdated version of the ```transformers``` package. \r\n\r\n### insightface loads very slowly\r\n\r\nThis error is caused by an outdated version of the ```protobuf``` package. 
\r\n\r\n\r\n### onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasn't able to be loaded. Please install the correct version of CUDA and cuDNN as mentioned in the GPU requirements page\r\n\r\nSolution:\r\nReinstall the  ```onnxruntime``` dependency package.\r\n\r\n### Error loading model xxx: We couldn't connect to huggingface.co ...\r\n\r\nCheck the network environment. If you cannot access huggingface.co normally in China, try modifying the huggingface_hub package to force the use hf_mirror.\r\n\r\n* Find ```constants.py``` in the directory of ```huggingface_hub``` package (usually ```Lib\u002Fsite packages\u002Fhuggingface_hub``` in the virtual environment path),\r\n  Add a line after ```import os```\r\n  \r\n  ```\r\n  os.environ['HF_ENDPOINT'] = 'https:\u002F\u002Fhf-mirror.com'\r\n  ```\r\n\r\n### ValueError: Trimap did not contain foreground values (xxxx...)\r\n\r\nThis error is caused by the mask area being too large or too small when using the ```PyMatting``` method to handle the mask edges.    \r\n\r\nSolution:\r\n\r\n* Please adjust the parameters to change the effective area of the mask. Or use other methods to handle the edges.\r\n\r\n### Requests.exceptions.ProxyError: HTTPSConnectionPool(xxxx...)\r\n\r\nWhen this error has occurred, please check the network environment.\r\n\r\n\r\n## Update\r\n\r\n\u003Cfont size=\"4\">**If the dependency package error after updating,  please double clicking ```repair_dependency.bat``` (for Official ComfyUI Protable) or  ```repair_dependency_aki.bat``` (for ComfyUI-aki-v1.x) in the plugin folder to reinstall the dependency packages. \u003C\u002Ffont>\u003Cbr \u002F>    \r\n\r\n* Commit [ImageBatchToList](#ImageBatchToList) and [ImageListToBatch](#ImageListToBatch) nodes, Used for converting single batches of images into multiple small batches and vice versa, with option to define the maximum number of images in each small batch. 
\r\n* Commit [DistortDisplace](#DistortDisplace) node, used to generate displacement deformation effects for material images.\r\n* Commit [MaskEdgeUltraDetailV3](#MaskEdgeUltraDetailV3) node; by processing different partitions through an input trimap mask, a more refined overall mask, including translucent parts, is generated. \r\n* Commit [ImageCompositeHandleMask](#ImageCompositeHandleMask) node, used to generate local feathering masks and cropping data.\r\n* Commit [DrawRoundedRectangle](#DrawRoundedRectangle) node, used to generate rounded rectangle masks.\r\n* Commit [FluxKontextImageScale](#FluxKontextImageScale) node, based on the official node with modifications, used to resize the image to one that is more optimal for Flux Kontext. For images with different aspect ratios, the scale is adjusted appropriately to retain all information.\r\n* Commit [MaskBoxExtend](#MaskBoxExtend) node, used to generate a BBOX mask extension range and output it as a Mask.\r\n* Commit [ColorNegative](#ColorNegative) node, used to invert the colors of an image.\r\n* Commit [LoadImagesFromPath](#LoadImagesFromPath) and [ImageTaggerSaveV2](#ImageTaggerSaveV2) nodes, used to load a list of images from a folder and save images and tagger text files with corresponding file names.\r\n* Commit [LoadImageFromPath](#LoadImageFromPath) node; the images in a folder can be loaded and output as an image list, also supporting output of a list of the corresponding file names.\r\n* Commit [SegformerUltraV3](#SegformerUltraV3), [LoadSegformerModel](#LoadSegformerModel), [SegformerClothesSetting](#SegformerClothesSetting) and [SegformerFashionSetting](#SegformerFashionSetting) nodes, separating the loading of models and settings to save resources when using multiple nodes.\r\n* Add multilingual support for 5 languages: Chinese, French, Japanese, Korean and Russian. 
This feature was produced with [ComfyUI-Globalization-Node-Translation](https:\u002F\u002Fgithub.com\u002Fyamanacn\u002FComfyUI-Globalization-Node-Translation), thank you to the original author.\r\n* Commit [HalfTone](#HalfTone) node, used for halftone processing of images.\r\n* Add QuarkNetdisk model download link.\r\n* Support numpy 2.x dependency package.\r\n* Commit [PurgeVRAM V2](#PurgeVRAMV2) node.\r\n* Commit [ChoiceTextPreset](#ChoiceTextPreset) and [TextPreseter](#TextPreseter) nodes, used for presetting text and selecting preset text output.\r\n* [StringCondition](#StringCondition) adds the option of comparing strings to determine if they are the same.\r\n* Commit [NameToColor](#NameToColor) node, outputs colors based on their names.\r\n* Commit [ImageMaskScaleAsV2](#ImageMaskScaleAsV2) node, adds background color settings on the basis of the original node.\r\n* Commit [RoundedRectangle](#RoundedRectangle) node, used to create rounded rectangles and masks.\r\n* Commit [AnyRerouter](#AnyRerouter) node, used to reroute any type of data.\r\n* Commit [ICMask](#ICMask) and [ICMaskCropBack](#ICMaskCropBack) nodes, used for generating In-Context images and masks with automatic crop back. The code is from [lrzjason\u002FComfyui-In-Context-Lora-Utils](https:\u002F\u002Fgithub.com\u002Flrzjason\u002FComfyui-In-Context-Lora-Utils), thanks to the original author @小志Jason.\r\n* Commit [GetMainColorsV2](#GetMainColorsV2) node, adds sorting by color area and outputs color values and proportions in the preview image. This part of the code was improved by @HL, thanks.\r\n* Optimize dependency packages. Optimize some algorithms.\r\n* Split some nodes with dependencies that are prone to problems into the [ComfyUI_LayerStyle_Advance](https:\u002F\u002Fgithub.com\u002Fchflame163\u002FComfyUI_LayerStyle_Advance) repository. 
Including:\r\nLayerMask: BiRefNetUltra, LayerMask: BiRefNetUltraV2, LayerMask: LoadBiRefNetModel, LayerMask: LoadBiRefNetModelV2,    \r\nLayerMask: EVFSAMUltra, LayerMask: Florence2Ultra, LayerMask: LoadFlorence2Model, LayerUtility: Florence2Image2Prompt,    \r\nLayerUtility: GetColorTone, LayerUtility: GetColorToneV2, LayerMask: HumanPartsUltra,    \r\nLayerUtility: ImageAutoCrop, LayerUtility: ImageAutoCropV2, LayerUtility: ImageAutoCropV3,    \r\nLayerUtility: ImageRewardFilter, LayerUtility: LoadJoyCaption2Model, LayerUtility: JoyCaption2Split,    \r\nLayerUtility: JoyCaption2, LayerUtility: JoyCaption2ExtraOptions, LayerUtility: LaMa,    \r\nLayerUtility: LlamaVision, LayerUtility: LoadPSD, LayerMask: MaskByDifferent, LayerMask: MediapipeFacialSegment,    \r\nLayerMask: BBoxJoin, LayerMask: DrawBBoxMask, LayerMask: ObjectDetectorFL2, LayerMask: ObjectDetectorMask,    \r\nLayerMask: ObjectDetectorYOLO8, LayerMask: ObjectDetectorYOLOWorld, LayerMask: PersonMaskUltra, LayerMask: PersonMaskUltra V2,    \r\nLayerUtility: PhiPrompt, LayerUtility: PromptEmbellish, LayerUtility: PromptTagger, LayerUtility: CreateQRCode, LayerUtility: DecodeQRCode,    \r\nLayerUtility: QWenImage2Prompt, LayerMask: SAM2Ultra, LayerMask: SAM2VideoUltra, LayerUtility: SaveImagePlus, LayerUtility: SD3NegativeConditioning,    \r\nLayerMask: SegmentAnythingUltra, LayerMask: SegmentAnythingUltra V2, LayerMask: TransparentBackgroundUltra,     \r\nLayerUtility: UserPromptGeneratorTxt2ImgPrompt, LayerUtility: UserPromptGeneratorTxt2ImgPromptWithReference, LayerUtility: UserPromptGeneratorReplaceWord,    \r\nLayerUtility: AddBlindWaterMark, LayerUtility: ShowBlindWaterMark, LayerMask: YoloV8Detect\r\n\r\n* Merge the PR submitted by [alexisrolland](https:\u002F\u002Fgithub.com\u002Falexisrolland), commit the ```Image Blend Advanced v3``` and ```Drop Shadow v3``` nodes, supporting transparent backgrounds.\r\n* Commit [BenUltra](#BenUltra) and [LoadBenModel](#LoadBenModel) nodes. 
These two nodes are the implementation of the [PramaLLC\u002FBEN](https:\u002F\u002Fhuggingface.co\u002FPramaLLC\u002FBEN) project in ComfyUI.   \r\nDownload ```BEN_Base.pth``` and ```config.json``` from [huggingface](https:\u002F\u002Fhuggingface.co\u002FPramaLLC\u002FBEN\u002Ftree\u002Fmain) or [BaiduNetdisk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F17mdBxfBl_R97mtNHuiHsxQ?pwd=2jn3) and copy them to the ```ComfyUI\u002Fmodels\u002FBEN``` folder.\r\n* Merge the PR submitted by [jimlee2048](https:\u002F\u002Fgithub.com\u002Fjimlee2048), adding the LoadBiRefNetModelV2 node and support for loading RMBG 2.0 models.       \r\nDownload the model files from [huggingface](https:\u002F\u002Fhuggingface.co\u002Fbriaai\u002FRMBG-2.0\u002Ftree\u002Fmain) or [BaiduNetdisk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1viIXlZnpTYTKkm2F-QMj_w?pwd=axr9) and copy them to the ```ComfyUI\u002Fmodels\u002FBiRefNet\u002FRMBG-2.0``` folder.\r\n\r\n* Florence2 nodes support base-PromptGen-v2.0 and large-PromptGen-v2.0. Download the ```base-PromptGen-v2.0``` and ```large-PromptGen-v2.0``` folders from [huggingface](https:\u002F\u002Fhuggingface.co\u002Fchflame163\u002FComfyUI_LayerStyle\u002Ftree\u002Fmain\u002FComfyUI\u002Fmodels\u002Fflorence2) or [BaiduNetdisk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1BVvXt3N7zrBnToyF-GrC_A?pwd=xm0x) and copy them to the ```ComfyUI\u002Fmodels\u002Fflorence2``` folder.\r\n* [SAM2Ultra](#SAM2Ultra) and ObjectDetector nodes support image batches.\r\n* [SAM2Ultra](#SAM2Ultra) and [SAM2VideoUltra](#SAM2VideoUltra) nodes add support for the SAM2.1 model, including [kijai](https:\u002F\u002Fgithub.com\u002Fkijai)'s FP16 model. 
Download model files from [BaiduNetdisk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1xaQYBA6ktxvAxm310HXweQ?pwd=auki) or [huggingface.co\u002FKijai\u002Fsam2-safetensors](https:\u002F\u002Fhuggingface.co\u002FKijai\u002Fsam2-safetensors\u002Ftree\u002Fmain) and copy them to the ```ComfyUI\u002Fmodels\u002Fsam2``` folder.\r\n* Commit [JoyCaption2Split](#JoyCaption2Split) and [LoadJoyCaption2Model](#LoadJoyCaption2Model) nodes; sharing the model across multiple JoyCaption2 nodes improves efficiency.\r\n* [SegmentAnythingUltra](#SegmentAnythingUltra) and [SegmentAnythingUltraV2](#SegmentAnythingUltraV2) add the ```cache_model``` option, making it easy to flexibly manage VRAM usage.\r\n\r\n* Because the [LlamaVision](#LlamaVision) node requires a high version of ```transformers```, which affects the loading of some older third-party plugins, the LayerStyle plugin has lowered the default requirement to 4.43.2. If you need to run LlamaVision, please upgrade to 4.45.0 or above on your own. \r\n\r\n* Commit [JoyCaption2](#JoyCaption2) and [JoyCaption2ExtraOptions](#JoyCaption2ExtraOptions) nodes. New dependency packages need to be installed.\r\nUse the JoyCaption-alpha-two model for local inference. Can be used to generate prompts. 
This node is an implementation of https:\u002F\u002Fhuggingface.co\u002FJohn6666\u002Fjoy-caption-alpha-two-cli-mod in ComfyUI, thank you to the original author.\r\nDownload models from [BaiduNetdisk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1dOjbUEacUOhzFitAQ3uIeQ?pwd=4ypv) and [BaiduNetdisk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1mH1SuW45Dy6Wga7aws5siQ?pwd=w6h5), \r\nor [huggingface\u002FOrenguteng](https:\u002F\u002Fhuggingface.co\u002FOrenguteng\u002FLlama-3.1-8B-Lexi-Uncensored-V2\u002Ftree\u002Fmain) and [huggingface\u002Funsloth](https:\u002F\u002Fhuggingface.co\u002Funsloth\u002FMeta-Llama-3.1-8B-Instruct\u002Ftree\u002Fmain), then copy them to ```ComfyUI\u002Fmodels\u002FLLM```.\r\nDownload models from [BaiduNetdisk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1pkVymOsDcXqL7IdQJ6lMVw?pwd=v8wp) or [huggingface\u002Fgoogle](https:\u002F\u002Fhuggingface.co\u002Fgoogle\u002Fsiglip-so400m-patch14-384\u002Ftree\u002Fmain) and copy them to ```ComfyUI\u002Fmodels\u002Fclip```.\r\nDownload the ```cgrkzexw-599808``` folder from [BaiduNetdisk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F12TDwZAeI68hWT6MgRrrK7Q?pwd=d7dh) or [huggingface\u002FJohn6666](https:\u002F\u002Fhuggingface.co\u002FJohn6666\u002Fjoy-caption-alpha-two-cli-mod\u002Ftree\u002Fmain) and copy it to ```ComfyUI\u002Fmodels\u002FJoy_caption```.\r\n\r\n* Commit [LlamaVision](#LlamaVision) node, use the Llama 3.2 vision model for local inference. Can be used to generate prompts. 
Part of the code for this node comes from [ComfyUI-PixtralLlamaMolmoVision](https:\u002F\u002Fgithub.com\u002FSeanScripts\u002FComfyUI-PixtralLlamaMolmoVision), thank you to the original author.\r\nTo use this node, ```transformers``` needs to be upgraded to 4.45.0 or higher.\r\nDownload models from [BaiduNetdisk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F18oHnTrkNMiwKLMcUVrfFjA?pwd=4g81) or [huggingface\u002FSeanScripts](https:\u002F\u002Fhuggingface.co\u002FSeanScripts\u002FLlama-3.2-11B-Vision-Instruct-nf4\u002Ftree\u002Fmain) and copy them to ```ComfyUI\u002Fmodels\u002FLLM```.\r\n\r\n* Commit [RandomGeneratorV2](#RandomGeneratorV2) node, adds minimum random range and seed options.\r\n* Commit [TextJoinV2](#TextJoinV2) node, adds delimiter options on top of TextJoin.\r\n* Commit [GaussianBlurV2](#GaussianBlurV2) node, the parameter precision has been improved to 0.01.\r\n* Commit [UserPromptGeneratorTxtImgWithReference](#UserPromptGeneratorTxtImgWithReference) node.\r\n* Commit [GrayValue](#GrayValue) node, outputs the grayscale values corresponding to the RGB color values.\r\n* [LUT Apply](#LUT), [TextImageV2](#TextImageV2), [TextImage](#TextImage), [SimpleTextImage](#SimpleTextImage) nodes support defining multiple folders in ```resource-dir.ini```, separated by commas, semicolons, or spaces, and support refreshing for real-time updates.\r\n* [LUT Apply](#LUT), [TextImageV2](#TextImageV2), [TextImage](#TextImage), [SimpleTextImage](#SimpleTextImage) nodes support defining multiple font and LUT folders, and support refreshing and real-time updates.\r\n* Commit [HumanPartsUltra](#HumanPartsUltra) node, used to generate human body part masks. 
It is based on a wrapper of [metal3d\u002FComfyUI_Human_Parts](https:\u002F\u002Fgithub.com\u002Fmetal3d\u002FComfyUI_Human_Parts), thanks to the original author.\r\n  Download the model file from [BaiduNetdisk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1-6uwH6RB0FhIVfa3qO7hhQ?pwd=d862) or [huggingface](https:\u002F\u002Fhuggingface.co\u002FMetal3d\u002Fdeeplabv3p-resnet50-human\u002Ftree\u002Fmain) and copy it to the ```ComfyUI\\models\\onnx\\human-parts``` folder.\r\n* ObjectDetector nodes add a sort-by-confidence option.\r\n* Commit [DrawBBoxMask](#DrawBBoxMask) node, used to convert the BBoxes output by the Object Detector node into a mask.\r\n* Commit [UserPromptGeneratorTxtImg](#UserPromptGeneratorTxtImg) and [UserPromptGeneratorReplaceWord](#UserPromptGeneratorReplaceWord) nodes, used to generate text and image prompts and replace prompt content.\r\n* Commit [PhiPrompt](#PhiPrompt) node, uses Microsoft Phi 3.5 text and vision models for local inference. Can be used to generate prompts, process prompts, or infer prompts from images. Running this model requires at least 16GB of video memory.      \r\n  Download model files from [BaiduNetdisk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1BdTLdaeGC3trh1U3V-6XTA?pwd=29dh) or [huggingface.co\u002Fmicrosoft\u002FPhi-3.5-vision-instruct](https:\u002F\u002Fhuggingface.co\u002Fmicrosoft\u002FPhi-3.5-vision-instruct\u002Ftree\u002Fmain) and [huggingface.co\u002Fmicrosoft\u002FPhi-3.5-mini-instruct](https:\u002F\u002Fhuggingface.co\u002Fmicrosoft\u002FPhi-3.5-mini-instruct\u002Ftree\u002Fmain) and copy them to the ```ComfyUI\\models\\LLM``` folder.\r\n* Commit [GetMainColors](#GetMainColors) node, it can obtain the 5 main colors of an image. 
Commit [ColorName](#ColorName) node, it can obtain the color name of an input color value.\r\n* Duplicate the [Brightness & Contrast](#Brightness) node as [BrightnessContrastV2](#BrightnessContrastV2), the [Color of Shadow & Highlight](#Highlight) node as [ColorofShadowHighlight](#HighlightV2), and [Shadow & Highlight Mask](#Shadow) as [Shadow Highlight Mask V2](#ShadowV2), to avoid errors in ComfyUI workflow parsing caused by the \"&\" character in the node name.\r\n* Commit [VQAPrompt](#VQAPrompt) and [LoadVQAModel](#LoadVQAModel) nodes.      \r\n  Download the model from [BaiduNetdisk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1ILREVgM0eFJlkWaYlKsR0g?pwd=yw75) or [huggingface.co\u002FSalesforce\u002Fblip-vqa-capfilt-large](https:\u002F\u002Fhuggingface.co\u002FSalesforce\u002Fblip-vqa-capfilt-large\u002Ftree\u002Fmain) and [huggingface.co\u002FSalesforce\u002Fblip-vqa-base](https:\u002F\u002Fhuggingface.co\u002FSalesforce\u002Fblip-vqa-base\u002Ftree\u002Fmain) and copy it to the ```ComfyUI\\models\\VQA``` folder.\r\n* [Florence2Ultra](#Florence2Ultra), [Florence2Image2Prompt](#Florence2Image2Prompt) and [LoadFlorence2Model](#LoadFlorence2Model) nodes support the MiaoshouAI\u002FFlorence-2-large-PromptGen-v1.5 and MiaoshouAI\u002FFlorence-2-base-PromptGen-v1.5 models.    
\r\n  Download model files from [BaiduNetdisk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1xOL6x6LijIMSh_3woErjJg?pwd=t3xa) or [huggingface.co\u002FMiaoshouAI\u002FFlorence-2-large-PromptGen-v1.5](https:\u002F\u002Fhuggingface.co\u002FMiaoshouAI\u002FFlorence-2-large-PromptGen-v1.5\u002Ftree\u002Fmain) and [huggingface.co\u002FMiaoshouAI\u002FFlorence-2-base-PromptGen-v1.5](https:\u002F\u002Fhuggingface.co\u002FMiaoshouAI\u002FFlorence-2-base-PromptGen-v1.5\u002Ftree\u002Fmain), and copy them to the ```ComfyUI\\models\\florence2``` folder.\r\n* Commit [BiRefNetUltraV2](#BiRefNetUltraV2) and [LoadBiRefNetModel](#LoadBiRefNetModel) nodes that support the latest BiRefNet model.\r\n  Download the model file named ```BiRefNet-general-epoch_244.pth``` from [BaiduNetdisk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F12z3qUuqag3nqpN2NJ5pSzg?pwd=ek65) or [GoogleDrive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1s2Xe0cjq-2ctnJBR24563yMSCOu4CcxM) and copy it to the ```ComfyUI\u002Fmodels\u002FBiRefNet\u002Fpth``` folder. You can also download more BiRefNet models and put them here.\r\n* [ExtendCanvasV2](#ExtendCanvasV2) node supports negative value input, which means the image will be cropped.\r\n* The default title color of nodes is changed to blue-green, and nodes in LayerStyle, LayerColor, LayerMask, LayerUtility, and LayerFilter are distinguished by different colors.\r\n* The Object Detector nodes added a sort bbox option, which allows sorting from left to right, top to bottom, and large to small, making object selection more intuitive and convenient. The nodes released yesterday have been deprecated; please manually replace them with the new version nodes (sorry).\r\n* Commit [SAM2Ultra](#SAM2Ultra), [SAM2VideoUltra](#SAM2VideoUltra), [ObjectDetectorFL2](#ObjectDetectorFL2), [ObjectDetectorYOLOWorld](#ObjectDetectorYOLOWorld), [ObjectDetectorYOLO8](#ObjectDetectorYOLO8), [ObjectDetectorMask](#ObjectDetectorMask) and [BBoxJoin](#BBoxJoin) nodes. 
\r\n  Download models from [BaiduNetdisk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1xaQYBA6ktxvAxm310HXweQ?pwd=auki) or [huggingface.co\u002FKijai\u002Fsam2-safetensors](https:\u002F\u002Fhuggingface.co\u002FKijai\u002Fsam2-safetensors\u002Ftree\u002Fmain) and copy them to the ```ComfyUI\u002Fmodels\u002Fsam2``` folder,\r\n  and download models from [BaiduNetdisk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1QpjajeTA37vEAU2OQnbDcQ?pwd=nqsk) or [GoogleDrive](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1nrsfq4S-yk9ewJgwrhXAoNVqIFLZ1at7?usp=sharing) and copy them to the ```ComfyUI\u002Fmodels\u002Fyolo-world``` folder.\r\n  This update introduces new dependencies; please reinstall the dependency packages.\r\n* Commit [RandomGenerator](#RandomGenerator) node, used to generate random numbers within a specified range, with outputs of int, float, and boolean, supporting batch generation of different random numbers by image batch.\r\n* Commit [EVF-SAMUltra](#EVFSAMUltra) node, it is an implementation of [EVF-SAM](https:\u002F\u002Fgithub.com\u002Fhustvl\u002FEVF-SAM) in ComfyUI. Please download model files from [BaiduNetdisk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1EvaxgKcCxUpMbYKzLnEx9w?pwd=69bn) or [huggingface\u002FEVF-SAM2](https:\u002F\u002Fhuggingface.co\u002FYxZhang\u002Fevf-sam2\u002Ftree\u002Fmain), [huggingface\u002FEVF-SAM](https:\u002F\u002Fhuggingface.co\u002FYxZhang\u002Fevf-sam\u002Ftree\u002Fmain) to the ```ComfyUI\u002Fmodels\u002FEVF-SAM``` folder (save the models in their respective subdirectories).\r\n  Due to the introduction of new dependency packages, please reinstall the dependencies after upgrading the plugin.\r\n* Commit [ImageTaggerSave](#ImageTaggerSave) and [ImageAutoCropV3](#ImageAutoCropV3) nodes. 
Used to implement the automatic trimming and marking workflow for the training set (the workflow ```image_tagger_save.json``` is located in the workflow directory).\r\n* Commit [CheckMaskV2](#CheckMaskV2) node, Added the ```simple``` method to detect masks more quickly.\r\n* Commit [ImageReel](#ImageReel) and [ImageReelComposite](#ImageReelComposite) nodes to composite multiple images on a canvas.\r\n* [NumberCalculatorV2](#NumberCalculatorV2) and [NumberCalculator](#NumberCalculator) add the  ```min``` and ```max``` method.\r\n* Optimize node loading speed.    \r\n* [Florence2Image2Prompt](#Florence2Image2Prompt) add support for ```thwri\u002FCogFlorence-2-Large-Freeze``` and ```thwri\u002FCogFlorence-2.1-Large``` models. Please download the model files from [BaiduNetdisk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1hzw9-QiU1vB8pMbBgofZIA?pwd=mfl3) or [huggingface\u002FCogFlorence-2-Large-Freeze](https:\u002F\u002Fhuggingface.co\u002Fthwri\u002FCogFlorence-2-Large-Freeze\u002Ftree\u002Fmain) and [huggingface\u002FCogFlorence-2.1-Large](https:\u002F\u002Fhuggingface.co\u002Fthwri\u002FCogFlorence-2.1-Large\u002Ftree\u002Fmain) , then copy it to ```ComfyUI\u002Fmodels\u002Fflorence2``` folder. \r\n* Merge branch from [ClownsharkBatwing](https:\u002F\u002Fgithub.com\u002FClownsharkBatwing) \"Use GPU for color blend mode\", the speed of some layer blends by more than ten times.\r\n* Commit [Florence2Ultra](#Florence2Ultra),  [Florence2Image2Prompt](#Florence2Image2Prompt) and [LoadFlorence2Model](#LoadFlorence2Model) nodes.\r\n* [TransparentBackgroundUltra](#TransparentBackgroundUltra) node add new model support. Please download the model file according to the instructions.\r\n* Commit [SegformerUltraV2](#SegformerUltraV2), [SegfromerFashionPipeline](#SegfromerFashionPipeline) and [SegformerClothesPipeline](#SegformerClothesPipeline) nodes, used for segmentation of clothing. 
please download the model file according to the instructions.\r\n* Commit ```install_requirements.bat``` and ```install_requirements_aki.bat```, One click solution to install dependency packages.\r\n* Commit [TransparentBackgroundUltra](#TransparentBackgroundUltra) node, it remove background based on transparent-background model.\r\n* Change the VitMatte model of the [Ultra](#Ultra) node to a local call. Please download [all files of vitmatte model](https:\u002F\u002Fhuggingface.co\u002Fhustvl\u002Fvitmatte-small-composition-1k\u002Ftree\u002Fmain) to the ```ComfyUI\u002Fmodels\u002Fvitmatte``` folder.\r\n* [GetColorToneV2](#GetColorToneV2) node add the ```mask``` method to the color selection option, which can accurately obtain the main color and average color within the mask.\r\n* [ImageScaleByAspectRatioV2](#ImageScaleByAspectRatioV2) node add the \"background_color\" option.\r\n* [LUT Apply](#LUT) Add the \"strength\" option.\r\n* Commit [AutoAdjustV2](#AutoAdjustV2) node, add optional mask input and support for multiple automatic color adjustment modes.\r\n* Due to the upcoming discontinuation of gemini-pro vision services, [PromptTagger](#PromptTagger) and [PromptEmbellish](#PromptEmbellish) have added the \"gemini-1.5-flash\" API to continue using it.\r\n* [Ultra](#Ultra) nodes added the option to run ```VitMatte``` on the CUDA device, resulting in a 5-fold increase in running speed.\r\n* Commit [QueueStop](#QueueStop) node, used to terminate the queue operation.\r\n* Optimize performance of the ```VitMate``` method for [Ultra](#Ultra) nodes when processing large-size image.\r\n* [CropByMaskV2](#CropByMaskV2) add option to round the cutting size by multiples.\r\n* Commit [CheckMask](#CheckMask) node, it detect whether the mask contains sufficient effective areas. 
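  The kind of effective-area check described for CheckMask can be sketched in a few lines of Python. This is a minimal illustration, not the node's actual implementation; the `white_point` and `min_area_ratio` parameter names are hypothetical:

  ```python
  def mask_has_effective_area(mask, white_point=127, min_area_ratio=0.01):
      """Return True if the share of pixels brighter than `white_point`
      reaches `min_area_ratio` of the whole mask (illustrative sketch;
      the node's real parameters and thresholds may differ)."""
      total = sum(len(row) for row in mask)
      effective = sum(1 for row in mask for px in row if px > white_point)
      return effective / total >= min_area_ratio

  # A 4x4 mask with 2 effective pixels out of 16 (12.5% of the area)
  mask = [[0, 0,   0,   0],
          [0, 255, 200, 0],
          [0, 0,   0,   0],
          [0, 0,   0,   0]]
  print(mask_has_effective_area(mask, min_area_ratio=0.1))  # True: 0.125 >= 0.1
  ```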
* Commit [HSVValue](#HSVValue) node, which converts color values to HSV values.
* [BooleanOperatorV2](#BooleanOperatorV2), [NumberCalculatorV2](#NumberCalculatorV2), [Integer](#Integer), [Float](#Float) and [Boolean](#Boolean) nodes add a string output, so the value can be used as a string with [SwitchCase](#SwitchCase).
* Commit [SwitchCase](#SwitchCase) node, which switches the output based on the matching string. Can be used for any type of data switching.
* Commit [String](#String) node, used to output a string. It is a simplified TextBox node.
* Commit [If](#If) node, which switches the output based on a Boolean conditional input. Can be used for any type of data switching.
* Commit [StringCondition](#StringCondition) node, which determines whether the text contains or does not contain a substring.
* [NumberCalculatorV2](#NumberCalculatorV2) node adds the nth root operation. [BooleanOperatorV2](#BooleanOperatorV2) node adds greater/less than and greater/less than or equal logical judgments. Both nodes accept numeric inputs and also allow entering numeric values within the node. Note: numeric input takes precedence; values entered in the node are ignored when an input is connected.
* Commit [SD3NegativeConditioning](#SD3NegativeConditioning) node, which encapsulates the four Negative Condition nodes of SD3 into a single node.
* [ImageRemoveAlpha](#ImageRemoveAlpha) node adds an optional mask input.
* Commit [HLFrequencyDetailRestore](#HLFrequencyDetailRestore) node, which uses low-frequency filtering and high-frequency preservation to restore image details, giving better fusion.
* Commit [AddGrain](#AddGrain) and [MaskGrain](#MaskGrain) nodes, which add noise to a picture or mask.
* Commit [FilmV2](#FilmV2) node, which adds the fastgrain method on top of the previous version, making noise generation 10 times faster.
* Commit [ImageToMask](#ImageToMask) node, which converts an image into a mask. Supports converting any channel in LAB, RGBA, YUV and HSV modes into masks, with color scale adjustment. Supports an optional mask input to obtain masks that only include the valid parts.
* The blackpoint and whitepoint options in some nodes have been changed to slider adjustment for a more intuitive display, including [MaskEdgeUltraDetailV2](#MaskEdgeUltraDetailV2), [SegmentAnythingUltraV2](#SegmentAnythingUltraV2), [RmBgUltraV2](#RmBgUltraV2), [PersonMaskUltraV2](#PersonMaskUltraV2), [BiRefNetUltra](#BiRefNetUltra), [SegformerB2ClothesUltra](#SegformerB2ClothesUltra), [BlendIfMask](#BlendIfMask) and [Levels](#Levels).
* [ImageScaleRestoreV2](#ImageScaleRestoreV2) and [ImageScaleByAspectRatioV2](#ImageScaleByAspectRatioV2) nodes add the ```total_pixel``` method to scale images.
* Commit [MediapipeFacialSegment](#MediapipeFacialSegment) node, used to segment facial features, including left and right eyebrows, eyes, lips and teeth.
* Commit [BatchSelector](#BatchSelector) node, used to retrieve specified images or masks from batches of images or masks.
* LayerUtility adds new subdirectories such as SystemIO, Data and Prompt; some nodes have been classified into these subdirectories.
* Commit [MaskByColor](#MaskByColor) node, which generates a mask based on the selected color.
* Commit [LoadPSD](#LoadPSD) node, which reads the psd format and outputs layer images. Note that this node requires the ```psd_tools``` dependency package. If an error such as ```ModuleNotFoundError: No module named 'docopt'``` occurs while installing psd_tools, please download [docopt's whl](https://www.piwheels.org/project/docopt/) and install it manually.
* Commit [SegformerB2ClothesUltra](#SegformerB2ClothesUltra) node, used to segment character clothing. The model segmentation code is from [StartHua](https://github.com/StartHua/Comfyui_segformer_b2_clothes); thanks to the original author.
* [SaveImagePlus](#SaveImagePlus) node adds output of the workflow to json, supports ```%date``` and ```%time``` tokens to embed the date or time in the path and filename, and adds a preview switch.
* Commit [SaveImagePlus](#SaveImagePlus) node, which can customize the directory where the picture is saved, add a timestamp to the file name, select the save format, set the image compression rate, choose whether to save the workflow, and optionally add invisible watermarks to the picture.
* Commit [AddBlindWaterMark](#AddBlindWaterMark) and [ShowBlindWaterMark](#ShowBlindWaterMark) nodes to add invisible watermarks to pictures and decode them. Commit [CreateQRCode](#CreateQRCode) and [DecodeQRCode](#DecodeQRCode) nodes to generate QR code pictures and decode QR codes.
* [ImageScaleRestoreV2](#ImageScaleRestoreV2), [ImageScaleByAspectRatioV2](#ImageScaleByAspectRatioV2) and [ImageAutoCropV2](#ImageAutoCropV2) nodes add ```width``` and ```height``` options, which can fix the width or height to specified values.
* Commit [PurgeVRAM](#PurgeVRAM) node to clean up VRAM and RAM.
* Commit [AutoAdjust](#AutoAdjust) node, which can automatically adjust image contrast and white balance.
* Commit [RGBValue](#RGBValue) node to output the color value as single decimal values of R, G and B. This idea is from [vxinhao](https://github.com/vxinhao/color2rgb); thanks.
* Commit [seed](#seed) node to output the seed value.
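  Token substitution of the kind SaveImagePlus uses for ```%date``` and ```%time``` can be sketched as follows. The concrete date and time formats here are assumptions for illustration; the plugin's own formats may differ:

  ```python
  from datetime import datetime

  def expand_filename_tokens(template, now=None):
      """Replace %date / %time tokens in a path or filename template.
      The strftime formats below are assumed for illustration only."""
      now = now or datetime.now()
      return (template
              .replace("%date", now.strftime("%Y-%m-%d"))
              .replace("%time", now.strftime("%H%M%S")))

  stamp = datetime(2024, 5, 1, 9, 30, 0)
  print(expand_filename_tokens("output/%date/image_%time.png", stamp))
  # → output/2024-05-01/image_093000.png
  ```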
* The [ImageMaskScaleAs](#ImageMaskScaleAs), [ImageScaleBySpectRatio](#ImageScaleBySpectRatio), [ImageScaleBySpectRatioV2](#ImageScaleBySpectRatioV2), [ImageScaleRestore](#ImageScaleRestore) and [ImageScaleRestoreV2](#ImageScaleRestoreV2) nodes add ```width``` and ```height``` outputs.
* Commit [Levels](#Levels) node, which achieves the same color levels adjustment as Photoshop. [Sharp & Soft](#Sharp) adds the "None" option.
* Commit [BlendIfMask](#BlendIfMask) node. This node cooperates with ImageBlendV2 or ImageBlendAdvanceV2 to achieve the same Blend If function as Photoshop.
* Commit [ColorTemperature](#ColorTemperature) and [ColorBalance](#ColorBalance) nodes, used to adjust the color temperature and color balance of the picture.
* Add new types of [Blend Mode V2](#BlendModeV2) between images, now supporting up to 30 blend modes. The new blend modes are available in all V2 versions of nodes that support blend modes, including ImageBlend V2, ImageBlendAdvance V2, DropShadow V2, InnerShadow V2, OuterGlow V2, InnerGlow V2, Stroke V2, ColorOverlay V2 and GradientOverlay V2.
  Part of the code for BlendMode V2 is from [Virtuoso Nodes for ComfyUI](https://github.com/chrisfreilich/virtuoso-nodes). Thanks to the original authors.
* Commit [YoloV8Detect](#YoloV8Detect) node.
* Commit [QWenImage2Prompt](#QWenImage2Prompt) node, a repackage of the ```UForm-Gen2 Qwen Node``` from [ComfyUI_VLM_nodes](https://github.com/gokayfem/ComfyUI_VLM_nodes); thanks to the original author.
* Commit [BooleanOperator](#BooleanOperator), [NumberCalculator](#NumberCalculator), [TextBox](#TextBox), [Integer](#Integer), [Float](#Float) and [Boolean](#Boolean) nodes. These nodes can perform mathematical and logical operations.
* Commit [ExtendCanvasV2](#ExtendCanvasV2) node, supporting color value input.
* Commit [AutoBrightness](#AutoBrightness) node, which can automatically adjust the brightness of an image.
* [CreateGradientMask](#CreateGradientMask) node adds a ```center``` option.
* Commit [GetColorToneV2](#GetColorToneV2) node, which can select the main and average colors of the background or the subject.
* Commit [ImageRewardFilter](#ImageRewardFilter) node, which can filter out poor quality pictures.
* [Ultra](#Ultra) nodes add the ```VITMatte(local)``` method. You can choose this method to avoid accessing huggingface.co if you have already downloaded the model.
* Commit [HDR Effect](#HDR) node, which enhances the dynamic range and visual appeal of input images. This node is a repackage of [HDR Effects (SuperBeasts.AI)](https://github.com/SuperBeastsAI/ComfyUI-SuperBeasts).
* Commit [CropBoxResolve](#CropBoxResolve) node.
* Commit [BiRefNetUltra](#BiRefNetUltra) node, which uses the BiRefNet model to remove backgrounds with better recognition ability and ultra-high edge detail.
* Commit [ImageAutoCropV2](#ImageAutoCropV2) node, which can optionally skip background removal, supports mask input, and can scale by the long or short side size.
* Commit [ImageHub](#ImageHub) node, supporting up to 9 sets of Image and Mask switching outputs, plus random output.
* Commit [TextJoin](#TextJoin) node.
* Commit [PromptEmbellish](#PromptEmbellish) node, which outputs polished prompt words and supports inputting images as references.
* [Ultra](#Ultra) nodes have been fully upgraded to V2 versions, adding the VITMatte edge processing method, which is suitable for handling semi-transparent areas. These include the [MaskEdgeUltraDetailV2](#MaskEdgeUltraDetailV2), [SegmentAnythingUltraV2](#SegmentAnythingUltraV2), [RmBgUltraV2](#RmBgUltraV2) and [PersonMaskUltraV2](#PersonMaskUltraV2) nodes.
* Commit [Color of Shadow & Highlight](#Highlight) node, which can adjust the color of the dark and bright parts separately. Commit [Shadow & Highlight Mask](#Shadow) node, which can output masks for the dark and bright areas.
* Commit [CropByMaskV2](#CropByMaskV2) node. On the basis of the original node, it supports ```crop_box``` input, making it convenient to cut layers of the same size.
* Commit [SimpleTextImage](#SimpleTextImage) node, which generates simple typesetting images and masks from text. This node references some functionality and code from [ZHO-ZHO-ZHO/ComfyUI-Text_Image-Composite](https://github.com/ZHO-ZHO-ZHO/ComfyUI-Text_Image-Composite).
* Commit [PromptTagger](#PromptTagger) node, which infers prompts from an image and can replace a keyword in the prompt (requires applying for a Google Studio API key). Upgrade [ColorImageV2](#ColorImageV2) and [GradientImageV2](#GradientImageV2) to support user-customized preset sizes and size_as input.
* Commit [LaMa](#LaMa) node, which can erase objects from the image based on the mask. This node is a repackage of [IOPaint](https://www.iopaint.com).
* Commit [ImageRemoveAlpha](#ImageRemoveAlpha) and [ImageCombineAlpha](#ImageCombineAlpha) nodes; the alpha channel of the image can be removed or merged.
* Commit [ImageScaleRestoreV2](#ImageScaleRestoreV2) and [ImageScaleByAspectRatioV2](#ImageScaleByAspectRatioV2) nodes, supporting scaling images to specified long or short edge sizes.
* Commit [PersonMaskUltra](#PersonMaskUltra) node to generate masks for a portrait's face, hair, body skin, clothing or accessories. The model code for this node comes from [a-person-mask-generator](https://github.com/djbielejeski/a-person-mask-generator).
* Commit [LightLeak](#LightLeak) node, a filter that simulates the light leakage effect of film.
* Commit [Film](#Film) node, a filter that simulates the grain, dark edges and blurred edges of film, and supports an input depth map to simulate defocus. It is a reorganization and encapsulation of [digitaljohn/comfyui-propost](https://github.com/digitaljohn/comfyui-propost).
* Commit [ImageAutoCrop](#ImageAutoCrop) node, designed to generate image materials for training models.
* Commit [ImageScaleByAspectRatio](#ImageScaleByAspectRatio) node, which can scale an image or mask according to frame ratio.
* Fix the color gradation bug in [LUT Apply](#LUT) node rendering; this node now supports the log color space. *Please load a dedicated log LUT file for log color space images.
* Commit [CreateGradientMask](#CreateGradientMask) node. Commit [LayerImageTransform](#LayerImageTransform) and [LayerMaskTransform](#LayerMaskTransform) nodes.
* Commit [MaskEdgeUltraDetail](#MaskEdgeUltraDetail) node, which processes rough masks into ultra-fine edges. Commit [Exposure](#Exposure) node.
* Commit [Sharp & Soft](#Sharp) node, which can enhance or smooth out image details. Commit [MaskByDifferent](#MaskByDifferent) node, which compares two images and outputs a mask. Commit [SegmentAnythingUltra](#SegmentAnythingUltra) node to improve the quality of mask edges. *If SegmentAnything is not installed, you will need to manually download the model.
* All nodes now fully support batch images, providing convenience for video creation.
  (The CropByMask node only supports cuts of the same size; if a batch mask_for_crop is input, the data from the first one will be used.)
* Commit [RemBgUltra](#RemBgUltra) and [PixelSpread](#PixelSpread) nodes with significantly improved mask quality.
  *RemBgUltra requires manual model download.
* Commit [TextImage](#TextImage) node, which generates text images and masks.
* Add new types of [blend mode](#Blend) between images, now supporting up to 19 blend modes: **color_burn, color_dodge, linear_burn, linear_dodge, overlay, soft_light, hard_light, vivid_light, pin_light, linear_light** and **hard_mix** have been added.
  The newly added blend modes apply to all nodes that support blend modes.
* Commit [ColorMap](#ColorMap) filter node to create a pseudo-color heatmap effect.
* Commit [WaterColor](#WaterColor) and [SkinBeauty](#SkinBeauty) nodes. These are image filters that generate watercolor and skin-smoothing effects.
* Commit [ImageShift](#ImageShift) node to shift the image and output a displacement seam mask, making it convenient to create continuous textures.
* Commit [ImageMaskScaleAs](#ImageMaskScaleAs) node to adjust the image or mask size based on a reference image.
* Commit [ImageScaleRestore](#ImageScaleRestore) node to work with CropByMask for local upscale and repair work.
* Commit [CropByMask](#CropByMask) and [RestoreCropBox](#RestoreCropBox) nodes. The combination of these two can partially crop and redraw the image before restoring it.
* Commit [ColorAdapter](#ColorAdapter) node, which can automatically adjust the color tone of an image.
* Commit [MaskStroke](#MaskStroke) node, which can generate mask contour strokes.
* Add the [LayerColor](#LayerColor) node group, used to adjust image color. It includes [LUT Apply](#LUT), [Gamma](#Gamma), [Brightness & Contrast](#Brightness), [RGB](#RGB), [YUV](#YUV), [LAB](#LAB) and [HSV](#HSV).
* Commit [ImageChannelSplit](#ImageChannelSplit) and [ImageChannelMerge](#ImageChannelMerge) nodes.
* Commit [MaskMotionBlur](#MaskMotionBlur) node.
* Commit [SoftLight](#SoftLight) node.
* Commit [ChannelShake](#ChannelShake) node, a filter that can produce a channel dislocation effect similar to the TikTok logo.
* Commit [MaskGradient](#MaskGradient) node, which can create a gradient in the mask.
* Commit [GetColorTone](#GetColorTone) node, which can obtain the main color or average color of an image.
  Commit [MaskGrow](#MaskGrow) and [MaskEdgeShrink](#MaskEdgeShrink) nodes.
* Commit [MaskBoxDetect](#MaskBoxDetect) node, which can automatically detect the position through the mask and output it to a composite node.
  Commit [XY to Percent](#Percent) node to convert absolute coordinates to percent coordinates.
  Commit [GaussianBlur](#GaussianBlur) node.
  Commit [GetImageSize](#GetImageSize) node.
* Commit [ExtendCanvas](#ExtendCanvas) node.
* Commit [ImageBlendAdvance](#ImageBlendAdvance) node. This node allows compositing background images and layers of different sizes, providing a freer compositing experience.
  Commit [PrintInfo](#PrintInfo) node as a workflow debugging aid.
* Commit [ColorImage](#ColorImage) and [GradientImage](#GradientImage) nodes, used to generate solid and gradient color images.
* Commit [GradientOverlay](#GradientOverlay) and [ColorOverlay](#ColorOverlay) nodes.
  Add invalid mask input judgment; an invalid mask input is now ignored.
* Commit [InnerGlow](#InnerGlow), [InnerShadow](#InnerShadow) and [MotionBlur](#MotionBlur) nodes.
* Rename all completed nodes. The nodes are divided into 4 groups: LayerStyle, LayerMask, LayerUtility and LayerFilter. Workflows containing old version nodes need the nodes manually replaced with the new versions.
* [OuterGlow](#OuterGlow) node has undergone significant modifications, adding options for **_brightness_**, **_light_color_** and **_glow_color_**.
* Commit [MaskInvert](#MaskInvert) node.
* Commit [ColorPick](#ColorPick) node.
* Commit [Stroke](#Stroke) node.
* Commit [MaskPreview](#MaskPreview) node.
* Commit [ImageOpacity](#ImageOpacity) node.
* The layer_mask is no longer a mandatory input. Layers and masks with different shapes are allowed, but their sizes must be consistent.
* Commit [ImageBlend](#ImageBlend) node.
* Commit [OuterGlow](#OuterGlow) node.
* Commit [DropShadow](#DropShadow) node.

## Description

Nodes are divided into 5 groups according to their functions: LayerStyle, LayerColor, LayerMask, LayerUtility and LayerFilter.

* [LayerStyle](#LayerStyle) nodes provide layer styles that mimic Adobe Photoshop.
  ![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_60829226ff64.jpg)    
* [LayerColor](#LayerColor) node group provides color adjustment functionality.
  ![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_d18b4c8f6134.jpg)    
* [LayerMask](#LayerMask) nodes provide mask assistance tools.
  ![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_82f1180642fe.jpg)    
* [LayerUtility](#LayerUtility) nodes provide auxiliary tools related to layer compositing and workflows.
  ![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_828f7387092d.jpg)    
* [LayerFilter](#LayerFilter) nodes provide image effect filters.
  ![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_e2a7b19f2468.jpg)    

# <a id="table1">LayerStyle</a>

![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_c5dde66db62f.jpg)    
![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_01f3980b05a5.jpg)    

### <a id="table1">DropShadow</a>

Generate shadow.
![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_8bc27843ade5.jpg)    

Node options:
![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_a962b43ad9db.jpg)    

* background_image<sup>1</sup>: The background image.
* layer_image<sup>1</sup>: Layer image for compositing.
* layer_mask<sup>1,2</sup>: Mask for layer_image; shadows are generated according to its shape.
* invert_mask: Whether to invert the mask.
* blend_mode<sup>3</sup>: Blending mode of the shadow.
* opacity: Opacity of the shadow.
* distance_x: Horizontal offset of the shadow.
* distance_y: Vertical offset of the shadow.
* grow: Shadow expansion amplitude.
* blur: Shadow blur level.
* shadow_color<sup>4</sup>: Shadow color.
* [note](#notes)

### <a id="table1">OuterGlow</a>

Generate outer glow.
![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_9fec8670b374.jpg)    

Node options:
![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_d10e48ac7eea.jpg)    

* background_image<sup>1</sup>: The background image.
* layer_image<sup>1</sup>: Layer image for compositing.
* layer_mask<sup>1,2</sup>: Mask for layer_image; the glow is generated according to its shape.
* invert_mask: Whether to invert the mask.
* blend_mode<sup>3</sup>: Blending mode of the glow.
* opacity: Opacity of the glow.
* brightness: Luminance of the light.
* glow_range: Range of the glow.
* blur: Blur of the glow.
* light_color<sup>4</sup>: Color of the center part of the glow.
* glow_color<sup>4</sup>: Color of the edge part of the glow.
* [note](#notes)

### <a id="table1">InnerShadow</a>

Generate inner shadow.
![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_ee9e54e60ed8.jpg)    

Node options:
![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_98899cf0a91d.jpg)    

* background_image<sup>1</sup>: The background image.
* layer_image<sup>1</sup>: Layer image for compositing.
* layer_mask<sup>1,2</sup>: Mask for layer_image; shadows are generated according to its shape.
* invert_mask: Whether to invert the mask.
* blend_mode<sup>3</sup>: Blending mode of the shadow.
* opacity: Opacity of the shadow.
* distance_x: Horizontal offset of the shadow.
* distance_y: Vertical offset of the shadow.
* grow: Shadow expansion amplitude.
* blur: Shadow blur level.
* shadow_color<sup>4</sup>: Shadow color.
* [note](#notes)

### <a id="table1">InnerGlow</a>

Generate inner glow.
![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_6a562e4fa957.jpg)    

Node options:  
![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_a9b541ae0053.jpg)    

* background_image<sup>1</sup>: The background image.
* layer_image<sup>1</sup>: Layer image for compositing.
* layer_mask<sup>1,2</sup>: Mask for layer_image; the glow is generated according to its shape.
* invert_mask: Whether to invert the mask.
* blend_mode<sup>3</sup>: Blending mode of the glow.
* opacity: Opacity of the glow.
* brightness: Luminance of the light.
* glow_range: Range of the glow.
* blur: Blur of the glow.
* light_color<sup>4</sup>: Color of the center part of the glow.
* glow_color<sup>4</sup>: Color of the edge part of the glow.
* [note](#notes)

### <a id="table1">Stroke</a>

Generate a stroke of a layer.
![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_6de545b99b32.jpg)    

Node options:   
![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_50967a464f62.jpg)    

* background_image<sup>1</sup>: The background image.
* layer_image<sup>1</sup>: Layer image for compositing.
* layer_mask<sup>1,2</sup>: Mask for layer_image; the stroke is generated according to its shape.
* invert_mask: Whether to invert the mask.
* blend_mode<sup>3</sup>: Blending mode of the stroke.
* opacity: Opacity of the stroke.
* stroke_grow: Stroke expansion/contraction amplitude; positive values indicate expansion and negative values indicate contraction.
* stroke_width: Stroke width.
* blur: Blur of the stroke.
* stroke_color<sup>4</sup>: Stroke color, described in hexadecimal RGB format.
* [note](#notes)

### <a id="table1">GradientOverlay</a>

Generate gradient overlay.
![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_8482312b5a76.jpg)    

Node options:   

* background_image<sup>1</sup>: The background image.
* layer_image<sup>1</sup>: Layer image for compositing.
* layer_mask<sup>1,2</sup>: Mask for layer_image.
* invert_mask: Whether to invert the mask.
* blend_mode<sup>3</sup>: Blending mode of the gradient.
* opacity: Opacity of the gradient.
* start_color: Color at the beginning of the gradient.
* start_alpha: Transparency at the beginning of the gradient.
* end_color: Color at the end of the gradient.
* end_alpha: Transparency at the end of the gradient.
* angle: Gradient rotation angle.
* [note](#notes)

### <a id="table1">ColorOverlay</a>

Generate color overlay.
![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_fe3fee659914.jpg)    

Node options:
![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_6502f8c02ae4.jpg)    

* background_image<sup>1</sup>: The background image.
* layer_image<sup>1</sup>: Layer image for compositing.
* layer_mask<sup>1,2</sup>: Mask for layer_image.
* invert_mask: Whether to invert the mask.
* blend_mode<sup>3</sup>: Blending mode of the color.
* opacity: Opacity of the overlay.
* color: Color of the overlay.
* [note](#notes)

# <a id="table1">LayerColor</a>

![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_aaff75deec77.jpg)    
![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_87592d32005d.jpg)    

### <a id="table1">LUT</a> Apply

Apply a LUT to the image. Only supports the .cube format.
![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_7af331f07b6a.jpg)    

Node options:  
![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_483037b8b05f.jpg)    

* LUT<sup>*</sup>: A list of the available .cube files in the LUT folder; the selected LUT file will be applied to the image.
* color_space: For a regular image, select linear; for an image in the log color space, select log.
* strength: Range 0~100, LUT application strength. The larger the value, the greater the difference from the original image; the smaller the value, the closer it is to the original image.

<sup>*</sup><font size="3">The LUT folder is defined in ```resource_dir.ini```. This file is located in the root directory of the plugin, and its default name is ```resource_dir.ini.example```; to use it for the first time, change the file suffix to ```.ini```.
Open it in a text editor, find the line starting with "LUT_dir=", and enter the custom folder path after the "=".
Multiple folders can be defined in ```resource_dir.ini```, separated by commas, semicolons, or spaces.
All .cube files in these folders will be collected and displayed in the node list during ComfyUI initialization.
If the folder set in the ini is invalid, the LUT folder that comes with the plugin will be used.</font>

### <a id="table1">AutoAdjust</a>

Automatically adjust the brightness, contrast, and white balance of the image, with some manual adjustment options to compensate for the shortcomings of automatic adjustment.
![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_7600ac14cedf.jpg)    

Node Options:  
![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_8656b623dbca.jpg)    

* strength: Strength of the adjustment.
The larger the value, the greater the difference from the original image.\r\n* brightness: Manual adjustment of brightness.\r\n* contrast: Manual adjustment of contrast.\r\n* saturation: Manual adjustment of saturation.\r\n* red: Manual adjustment of the red channel.\r\n* green: Manual adjustment of the green channel.\r\n* blue: Manual adjustment of the blue channel.\r\n\r\n### \u003Ca id=\"table1\">AutoAdjustV2\u003C\u002Fa>\r\n\r\nOn the basis of AutoAdjust, add mask input and only calculate the content inside the mask for automatic color adjustment. Add multiple automatic adjustment modes.\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_3adec65ca62e.jpg)    \r\n\r\nThe following changes have been made based on AutoAdjust: \r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_5af0e141df5c.jpg)    \r\n\r\n* mask: Optional mask input.\r\n* mode: Automatic adjustment mode. \"RGB\" automatically adjusts according to the three channels of RGB, \"lum + sat\"automatically adjusts according to luminance and saturation, \"luminance\" automatically adjusts according to luminance, \"saturation\" automatically adjusts according to saturation, and \"mono\" automatically adjusts according to grayscale and outputs monochrome.\r\n\r\n### \u003Ca id=\"table1\">AutoBrightness\u003C\u002Fa>\r\n\r\nAutomatically adjust too dark or too bright image to moderate brightness, and support mask input. When  mask input, only the content of the mask part is used as the data source of the automatic brightness. 
The output is still the whole adjusted image.\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_2e2691b779f9.jpg)    \r\n\r\nNode options:  \r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_b8460536c434.jpg)    \r\n\r\n* strength: Intensity of the automatic brightness adjustment. The larger the value, the more the brightness is pulled toward the middle value and the greater the difference from the original image.\r\n* saturation: Color saturation. Changes in brightness usually cause changes in color saturation; appropriate compensation can be applied here.\r\n\r\n### \u003Ca id=\"table1\">ColorAdapter\u003C\u002Fa>\r\n\r\nAutomatically adjust the color tone of the image to resemble the reference image.\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_600c724d57ae.jpg)    \r\n\r\nNode options:  \r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_ea6bfe5a00ea.jpg)    \r\n\r\n* opacity: The opacity of the image after adjusting its color tone.\r\n\r\n### \u003Ca id=\"table1\">Exposure\u003C\u002Fa>\r\n\r\nChange the exposure of the image.\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_5509a799611c.jpg)    \r\n\r\nNode option:  \r\n\r\n* exposure: Exposure value. Higher values produce a brighter image.\r\n\r\n### Color of Shadow & \u003Ca id=\"table1\">Highlight\u003C\u002Fa>\r\n\r\nAdjust the color of the dark and bright parts of the image.\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_64cc8863e4c2.jpg)    \r\n\r\nNode options:  \r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_7b3702c3b73d.jpg)    \r\n\r\n* image: The input image.\r\n* mask: Optional input. If there is an input, only the colors within the mask range will be adjusted.\r\n* shadow_brightness: The brightness of the dark area.\r\n* shadow_saturation: The color saturation in the dark area.\r\n* shadow_hue: The color hue in the dark area.\r\n* shadow_level_offset: The offset of values in the dark area; larger values bring more areas that are closer to the bright side into the dark area.\r\n* shadow_range: The transitional range of the dark area.\r\n* highlight_brightness: The brightness of the highlight area.\r\n* highlight_saturation: The color saturation in the highlight area.\r\n* highlight_hue: The color hue in the highlight area.\r\n* highlight_level_offset: The offset of values in the highlight area; larger values bring more areas that are closer to the dark side into the highlight area.\r\n* highlight_range: The transitional range of the highlight area.\r\n\r\n### Color of Shadow \u003Ca id=\"table1\">HighlightV2\u003C\u002Fa>\r\n\r\nA replica of the ```Color of Shadow & Highlight``` node, with the \"&\" character removed from the node name to avoid ComfyUI workflow parsing errors.\r\n\r\n### \u003Ca id=\"table1\">ColorTemperature\u003C\u002Fa>\r\n\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_2b37c5b8531f.jpg)    \r\nChange the color temperature of the image.\r\n\r\nNode Options:  \r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_9a83d460085f.jpg)    \r\n\r\n* temperature: Color temperature value. Range between -100 and 100. 
The higher the value, the higher the color temperature (bluer); the lower the value, the lower the color temperature (more yellow).\r\n\r\n### \u003Ca id=\"table1\">Levels\u003C\u002Fa>\r\n\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_5a3271d4ea81.jpg)    \r\nChange the levels of the image.\r\n\r\nNode Options:  \r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_25cdf533d394.jpg)    \r\n\r\n* channel: Select the channel you want to adjust. Available in RGB, red, green, blue.\r\n* black_point\u003Csup>*\u003C\u002Fsup>: Input black point value. Value range 0-255, default 0.\r\n* white_point\u003Csup>*\u003C\u002Fsup>: Input white point value. Value range 0-255, default 255.\r\n* gray_point: Input gray point value. Value range 0.01-9.99, default 1.\r\n* output_black_point\u003Csup>*\u003C\u002Fsup>: Output black point value. Value range 0-255, default 0.\r\n* output_white_point\u003Csup>*\u003C\u002Fsup>: Output white point value. Value range 0-255, default 255.\r\n\r\n\u003Csup>*\u003C\u002Fsup>\u003Cfont size=\"3\">If the black_point or output_black_point value is greater than white_point or output_white_point, the two values are swapped, with the larger value used as the white point and the smaller value used as the black point.\u003C\u002Ffont>\r\n\r\n### \u003Ca id=\"table1\">ColorBalance\u003C\u002Fa>\r\n\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_4978557c9510.jpg)    \r\nChange the color balance of an image.\r\n\r\nNode Options:  \r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_c727b12aeea6.jpg)    \r\n\r\n* cyan_red: Cyan-Red balance. Negative values lean cyan, positive values lean red.\r\n* magenta_green: Magenta-Green balance. 
Negative values lean magenta, positive values lean green.\r\n* yellow_blue: Yellow-Blue balance. Negative values lean yellow, positive values lean blue.\r\n\r\n### \u003Ca id=\"table1\">Gamma\u003C\u002Fa>\r\n\r\nChange the gamma value of the image.\r\n\r\nNode options:  \r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_8ff27eaa7d29.jpg)    \r\n\r\n* gamma: Gamma value.\r\n\r\n### \u003Ca id=\"table1\">Brightness\u003C\u002Fa> & Contrast\r\n\r\nChange the brightness, contrast, and saturation of the image.\r\n\r\nNode options:  \r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_02051cdc00b7.jpg)    \r\n\r\n* brightness: Value of brightness.\r\n* contrast: Value of contrast.\r\n* saturation: Value of saturation.\r\n\r\n### \u003Ca id=\"table1\">BrightnessContrastV2\u003C\u002Fa>\r\n\r\nA replica of the ```Brightness & Contrast``` node, with the \"&\" character removed from the node name to avoid ComfyUI workflow parsing errors.\r\n\r\n### \u003Ca id=\"table1\">RGB\u003C\u002Fa>\r\n\r\nAdjust the RGB channels of the image.\r\n\r\nNode options:  \r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_838c01431979.jpg)    \r\n\r\n* R: R channel.\r\n* G: G channel.\r\n* B: B channel.\r\n\r\n### \u003Ca id=\"table1\">YUV\u003C\u002Fa>\r\n\r\nAdjust the YUV channels of the image.\r\n\r\nNode options:  \r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_359a1fde2b92.jpg)    \r\n\r\n* Y: Y channel.\r\n* U: U channel.\r\n* V: V channel.\r\n\r\n### \u003Ca id=\"table1\">LAB\u003C\u002Fa>\r\n\r\nAdjust the LAB channels of the image.\r\n\r\nNode options:  \r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_852e24c579ff.jpg)    \r\n\r\n* L: L channel.\r\n* A: A channel.\r\n* B: B 
channel.\r\n\r\n### \u003Ca id=\"table1\">HSV\u003C\u002Fa>\r\n\r\nAdjust the HSV channels of the image.\r\n\r\nNode options:  \r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_bb753b6fafc3.jpg)    \r\n\r\n* H: H channel.\r\n* S: S channel.\r\n* V: V channel.\r\n\r\n### \u003Ca id=\"table1\">ColorNegative\u003C\u002Fa>\r\nInvert the colors of the image; you can invert RGB, Mono, or each individual RGB channel.\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_6e93474aaae9.jpg)    \r\n\r\nNode Options:  \r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_08ca91d55ed3.jpg)    \r\n* negative_channel: Select the channel for inversion.\r\n\r\n# \u003Ca id=\"table1\">LayerUtility\u003C\u002Fa>\r\n\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_6c23aba33e67.jpg)    \r\n\r\n### \u003Ca id=\"table1\">ImageBlendAdvance\u003C\u002Fa>\r\n\r\nUsed for compositing layers: layer images of different sizes can be composited onto the background image, with positions and transformations set as needed. Multiple blend modes are available, and transparency can be set.\r\n\r\nThe node provides transformation_methods and anti_aliasing options that help improve the quality of synthesized images.\r\n\r\nThe node provides mask output that can be used for subsequent workflows.\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_68a71f6f3e63.jpg)    \r\n\r\nNode options:   \r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_a7d56307b6a8.jpg)    \r\n\r\n* background_image: The background image.\r\n* layer_image\u003Csup>5\u003C\u002Fsup>: Layer image for composite.\r\n* layer_mask\u003Csup>2,5\u003C\u002Fsup>: Mask for layer_image.\r\n* invert_mask: Whether to reverse the mask.\r\n* blend_mode\u003Csup>3\u003C\u002Fsup>: Blending mode.\r\n* opacity: Opacity of blend.\r\n* x_percent: Horizontal position of the layer on the background image, expressed as a percentage, with 0 on the far left and 100 on the far right. It can be less than 0 or more than 100, indicating that some of the layer's content is outside the screen.\r\n* y_percent: Vertical position of the layer on the background image, expressed as a percentage, with 0 at the top and 100 at the bottom. For example, setting it to 50 indicates vertical center, 20 indicates upper center, and 80 indicates lower center.\r\n* mirror: Mirror flipping. Two flipping modes are provided: horizontal and vertical.\r\n* scale: Layer magnification; 1.0 represents the original size.\r\n* aspect_ratio: Layer aspect ratio. 1.0 is the original ratio, a value greater than this indicates elongation, and a value less than this indicates flattening.\r\n* rotate: Layer rotation degree.\r\n* transformation_methods: Sampling methods for layer scaling and rotation, including lanczos, bicubic, hamming, bilinear, box and nearest. Different sampling methods affect the image quality and processing time of the synthesized image.\r\n* anti_aliasing: Anti-aliasing, ranging from 0 to 16; the larger the value, the less obvious the aliasing. 
An excessively high value will significantly reduce the processing speed of the node.\r\n* [note](#notes)\r\n\r\n### \u003Ca id=\"table1\">ImageCompositeHandleMask\u003C\u002Fa>\r\nUsed to generate local feathering masks and corresponding cropping data. The node provides mask output that can be used for subsequent workflows.\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_56f34c33bddf.jpg)    \r\n\r\nNode Options:   \r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_f7a766b41472.jpg)    \r\n* background_image: The background image.\r\n* layer_image: Layer image for composite.\r\n* layer_mask: Mask for layer_image.\r\n* invert_mask: Whether to reverse the mask.\r\n* opacity: Opacity of the image composite.\r\n* x_percent: Horizontal position of the layer on the background image, expressed as a percentage, with 0 on the far left and 100 on the far right. It can be less than 0 or more than 100, indicating that some of the layer's content is outside the screen.\r\n* y_percent: Vertical position of the layer on the background image, expressed as a percentage, with 0 at the top and 100 at the bottom. For example, setting it to 50 indicates vertical center, 20 indicates upper center, and 80 indicates lower center.\r\n* scale: Layer magnification; 1.0 represents the original size.\r\n* mirror: Mirror flipping. Two flipping modes are provided: horizontal and vertical.\r\n* rotate: Layer rotation degree.\r\n* anti_aliasing: Anti-aliasing, ranging from 0 to 8; the larger the value, the less obvious the aliasing. An excessively high value will significantly reduce the processing speed of the node.\r\n* handle_detect: There are two methods for detecting the position of feathering masks: mask_area and layer-bbox. 
```mask_area``` detects the effective area of the mask for the layer object, while ```layer-bbox``` detects the outer BBox of the layer object.\r\n* top_handle: The amplitude of feathering at the top side of the mask. The value is a percentage of the average edge length of the mask.\r\n* bottom_handle: The amplitude of feathering at the bottom side of the mask. The value is a percentage of the average edge length of the mask.\r\n* left_handle: The amplitude of feathering at the left side of the mask. The value is a percentage of the average edge length of the mask.\r\n* right_handle: The amplitude of feathering at the right side of the mask. The value is a percentage of the average edge length of the mask.\r\n* handle_mask_outradius: Rounded-corner radius of the feather mask.\r\n* top_reserve: Reserved size at the top edge when cropping.\r\n* bottom_reserve: Reserved size at the bottom edge when cropping.\r\n* left_reserve: Reserved size at the left edge when cropping.\r\n* right_reserve: Reserved size at the right edge when cropping.\r\n* round_to_multiple: Round the cropped edge lengths to a multiple of this value. For example, setting it to 8 will force the width and height to be multiples of 8.\r\n\r\nOutputs:\r\n* image: The composited image.\r\n* mask: The composited mask.\r\n* layer_bbox_mask: BBox mask of the composited object.\r\n* handle_mask: The mask after the feathering process.\r\n* handle_crop_bbox: Feather mask cropping data.\r\n* handle_overrange: Whether the feathering mask exceeds the range of the background image. 
The output format is a string including \"top\", \"bottom\", \"left\", and \"right\".\r\n\r\n### \u003Ca id=\"table1\">CropByMask\u003C\u002Fa>\r\n\r\nCrop the image according to the mask range, and set the size of the surrounding borders to be retained.\r\nThis node can be used in conjunction with the [RestoreCropBox](#RestoreCropBox) and [ImageScaleRestore](#ImageScaleRestore) nodes to crop, upscale, and modify parts of an image, and then paste them back in place.\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_addad8df505f.jpg)    \r\n\r\nNode options:   \r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_1d815b1a29a3.jpg)    \r\n\r\n* image\u003Csup>5\u003C\u002Fsup>: The input image.\r\n* mask_for_crop\u003Csup>5\u003C\u002Fsup>: Mask of the image; the image will be cropped automatically according to the mask range.\r\n* invert_mask: Whether to reverse the mask.\r\n* detect: Detection method. ```min_bounding_rect``` is the minimum bounding rectangle of the block shape, ```max_inscribed_rect``` is the maximum inscribed rectangle of the block shape, and ```mask_area``` is the effective area of the mask pixels.\r\n* top_reserve: Reserved size at the top edge when cropping.\r\n* bottom_reserve: Reserved size at the bottom edge when cropping.\r\n* left_reserve: Reserved size at the left edge when cropping.\r\n* right_reserve: Reserved size at the right edge when cropping.\r\n* [note](#notes)\r\n\r\nOutput:\r\n\r\n* croped_image: The cropped image.\r\n* croped_mask: The cropped mask.\r\n* crop_box: Cropping box data, used by the RestoreCropBox node when restoring.\r\n* box_preview: Preview image of the cropping position; red represents the detected range, and green represents the cropping range after adding the reserved border.\r\n\r\n### \u003Ca id=\"table1\">CropByMaskV2\u003C\u002Fa>\r\n\r\nThe V2 upgraded version of CropByMask. 
Supports crop_box input, making it easy to crop layers of the same size.\r\n\r\nThe following changes have been made based on CropByMask:\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_f42eb377caae.jpg)    \r\n\r\n* The input ```mask_for_crop``` is renamed to ```mask```.\r\n* Added an optional ```crop_box``` input. If provided, mask detection is skipped and this data is used directly for cropping.\r\n* Added the ```round_to_multiple``` option to round the cropped edge lengths to a multiple of this value. For example, setting it to 8 will force the width and height to be multiples of 8.\r\n\r\n### \u003Ca id=\"table1\">RestoreCropBox\u003C\u002Fa>\r\n\r\nRestore the image cropped by [CropByMask](#CropByMask) onto the original image.\r\n\r\nNode options:   \r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_66d699cffbd7.jpg)    \r\n\r\n* background_image: The original image before cropping.\r\n* croped_image\u003Csup>5\u003C\u002Fsup>: The cropped image. If it was enlarged in an intermediate step, its size must be restored before pasting back.\r\n* croped_mask\u003Csup>5\u003C\u002Fsup>: The cropped mask.\r\n* crop_box: Box data from the cropping step.\r\n* invert_mask: Whether to reverse the mask.\r\n* [note](#notes)\r\n\r\n### \u003Ca id=\"table1\">CropBoxResolve\u003C\u002Fa>\r\n\r\nParses the ```crop_box``` into ```x```, ```y```, ```width```, and ```height```.\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_1add6a747602.jpg)    \r\n\r\n### \u003Ca id=\"table1\">ImageScaleRestore\u003C\u002Fa>\r\n\r\nImage scaling. 
When this node is used in pairs, the image can be automatically restored to its original size on the second node.\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_1e5a399d06b9.jpg)    \r\n\r\nNode options:   \r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_56c61e9ea55b.jpg)    \r\n\r\n* image\u003Csup>5\u003C\u002Fsup>: The input image.\r\n* mask\u003Csup>2,5\u003C\u002Fsup>: Mask of the image.\r\n* original_size: Optional input, used to restore the image to its original size.\r\n* scale: Scale ratio. When original_size has an input, or scale_by_longest_side is set to True, this setting is ignored.\r\n* scale_by_longest_side: Allow scaling by long edge size.\r\n* longest_side: When scale_by_longest_side is set to True, this value is used as the length of the image's long edge. When original_size has an input, this setting is ignored.\r\n\r\nOutputs:\r\n\r\n* image: The scaled image.\r\n* mask: If there is a mask input, the scaled mask will be output.\r\n* original_size: The original size data of the image, used for recovery by subsequent nodes.\r\n* width: The output image's width.\r\n* height: The output image's height.\r\n\r\n### \u003Ca id=\"table1\">ImageScaleRestoreV2\u003C\u002Fa>\r\n\r\nThe V2 upgraded version of ImageScaleRestore.\r\n\r\nNode options:   \r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_38a311425168.jpg)    \r\nThe following changes have been made based on ImageScaleRestore:\r\n\r\n* scale_by: Allow scaling by a specified dimension: long side, short side, width, height, or total pixels. 
When this option is set to by_scale, the scale value is used; for the other options, the scale_by_length value is used.\r\n* scale_by_length: The value used as the edge length specified by ```scale_by```.\r\n\r\n### \u003Ca id=\"table1\">ImageMaskScaleAs\u003C\u002Fa>\r\n\r\nScale the image or mask to the size of the reference image (or reference mask).\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_c66b6e524782.jpg)    \r\n\r\nNode options:\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_288b96cd91dc.jpg)    \r\n\r\n* scale_as\u003Csup>*\u003C\u002Fsup>: Reference size. It can be an image or a mask.\r\n* image: Image to be scaled. This input is optional; if there is no input, a black image will be output.\r\n* mask: Mask to be scaled. This input is optional; if there is no input, a black mask will be output.\r\n* fit: Scale aspect ratio mode. When the aspect ratio of the original image does not match the scaled size, there are three modes to choose from: \r\n  The _letterbox_ mode retains the complete frame and fills the blank spaces with black; \r\n  The _crop_ mode retains the complete short edge, and any excess of the long edge is cut off;\r\n  The _fill_ mode does not maintain the frame ratio and fills the screen with width and height.\r\n* method: Scaling sampling methods, including lanczos, bicubic, hamming, bilinear, box, and nearest.\r\n\r\n\u003Csup>*\u003C\u002Fsup>Only image and mask inputs are accepted. 
Forcing in other types of input will result in node errors.\r\n\r\nOutputs:\r\n\r\n* image: If there is an image input, the scaled image will be output.\r\n* mask: If there is a mask input, the scaled mask will be output.\r\n* original_size: The original size data of the image, used for recovery by subsequent nodes.\r\n* width: The output image's width.\r\n* height: The output image's height.\r\n\r\n### \u003Ca id=\"table1\">ImageMaskScaleAsV2\u003C\u002Fa>\r\nThe upgraded version of ImageMaskScaleAs, which adds background color settings on top of the original node.\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_d9e3667db27d.jpg)    \r\n\r\nNew Option:   \r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_ef95dba21faa.jpg)    \r\n* background_color: Color used to fill the expanded background.\r\n\r\n\r\n### \u003Ca id=\"table1\">ImageScaleByAspectRatio\u003C\u002Fa>\r\n\r\nScale the image or mask by aspect ratio. The scaled size can be rounded to a multiple of 8 or 16, and scaling to a given long-side size is supported.\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_c9b9ab2b7453.jpg)    \r\n\r\nNode options:   \r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_09e84575d779.jpg)    \r\n\r\n* aspect_ratio: Several common frame ratios are provided; alternatively, choose \"original\" to keep the original ratio, or \"custom\" to customize it.\r\n* proportional_width: Proportional width. If the aspect_ratio option is not \"custom\", this setting is ignored.\r\n* proportional_height: Proportional height. If the aspect_ratio option is not \"custom\", this setting is ignored.\r\n* fit: Scale aspect ratio mode. 
When the aspect ratio of the original image does not match the scaled size, there are three modes to choose from: \r\n  The _letterbox_ mode retains the complete frame and fills the blank spaces with black; \r\n  The _crop_ mode retains the complete short edge, and any excess of the long edge is cut off;\r\n  The _fill_ mode does not maintain the frame ratio and fills the screen with width and height.\r\n* method: Scaling sampling methods, including lanczos, bicubic, hamming, bilinear, box, and nearest.\r\n* round_to_multiple: Round the edge lengths to a multiple of this value. For example, setting it to 8 will force the width and height to be multiples of 8.\r\n* scale_by_longest_side: Allow scaling by long edge size.\r\n* longest_side: When scale_by_longest_side is set to True, this value is used as the length of the image's long edge. When original_size has an input, this setting is ignored.\r\n\r\nOutputs:\r\n\r\n* image: If there is an image input, the scaled image will be output.\r\n* mask: If there is a mask input, the scaled mask will be output.\r\n* original_size: The original size data of the image, used for recovery by subsequent nodes.\r\n* width: The output image's width.\r\n* height: The output image's height.\r\n\r\n### \u003Ca id=\"table1\">ImageScaleByAspectRatioV2\u003C\u002Fa>\r\n\r\nThe V2 upgraded version of ImageScaleByAspectRatio.\r\n\r\nNode options:   \r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_8c349d8198dd.jpg)    \r\nThe following changes have been made based on ImageScaleByAspectRatio:\r\n\r\n* scale_to_side: Allow scaling by a specified dimension: long side, short side, width, height, or total pixels.\r\n* scale_to_length: The value here serves as the length of the edge specified by scale_to_side, or as the total pixels (in kilopixels).\r\n* background_color\u003Csup>4\u003C\u002Fsup>: The color of the background.\r\n\r\n### \u003Ca id=\"table1\">ICMask\u003C\u002Fa>\r\nUsed for generating In-Context image and 
mask. The code is from [lrzjason\u002FComfyui-In-Context-Lora-Utils](https:\u002F\u002Fgithub.com\u002Flrzjason\u002FComfyui-In-Context-Lora-Utils); thanks to the original author @小志Jason.\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_8652fa8cf3ce.jpg)\r\n\r\nNode Options:   \r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_a617339b3544.jpg)    \r\n\r\n* first_image: The image used as the contextual reference.\r\n* first_mask: Optional input; mask for the context reference image.\r\n* second_image: The image to be redrawn.\r\n* second_mask: Mask for the image to be redrawn.\r\n* patch_mode: There are three splicing modes: auto, patch_right, and patch_bottom.\r\n* output_length: The long-side size of the output image.\r\n* patch_color: Fill color.\r\n\r\nOutputs:\r\n* image: The output image.\r\n* mask: The output mask.\r\n* icmask_data: Stitching information of the image, used for automatic cropping by subsequent nodes.\r\n\r\n### \u003Ca id=\"table1\">ICMaskCropBack\u003C\u002Fa>\r\nCrop the inference output of the image generated by ICMask.\r\n\r\nNode Options:   \r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_228540a2dc49.jpg)    \r\n\r\n* image: The input image.\r\n* icmask_data: Splicing information output by the ICMask node.\r\n\r\n### \u003Ca id=\"table1\">FluxKontextImageScale\u003C\u002Fa>\r\nModified from the official node; resizes the image to one that is more optimal for Flux Kontext. 
For images with a different aspect ratio, the scale is adjusted appropriately to preserve all information.\r\nThe following example uses this node to maintain the complete information of a 4K resolution image, changes the background of the image through FluxKontext model inference, and then restores 4K-quality detail through the [HLFrequencyDetailRestore](#HLFrequencyDetailRestore) node.\r\n\u003Cfont size=\"1\">*This workflow (flux_kontext_image_scale_example.json) is in the workflow directory.  \u003C\u002Ffont>\u003Cbr \u002F> \r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_d7a576ecf9b6.jpg)\r\n\r\nNode Options:   \r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_fed9f2f2f3b0.jpg)    \r\n\r\n* image: The input image.\r\n* method: Scaling sampling methods, including lanczos, bicubic, hamming, bilinear, box, and nearest.\r\n\r\nOutputs:\r\n* image: The output image.\r\n\r\n### \u003Ca id=\"table1\">VQAPrompt\u003C\u002Fa>\r\n\r\nUse the blip-vqa model for visual question answering. Part of the code for this node is referenced from [celoron\u002FComfyUI-VisualQueryTemplate](https:\u002F\u002Fgithub.com\u002Fceloron\u002FComfyUI-VisualQueryTemplate), thanks to the original author.   
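The brace-based question format used by this node (described in its options) encloses each question in curly braces and substitutes the model's answer back at the question's original position. A minimal sketch of that substitution step, with a stand-in `answer` lookup instead of a real blip-vqa call:

```python
import re

def fill_template(template, answer):
    """Replace every {question} in `template` with answer(question).
    `answer` is any callable mapping a question string to an answer string;
    here it stands in for the VQA model."""
    return re.sub(r"\{([^{}]+)\}", lambda m: answer(m.group(1)), template)

# Stand-in answers, as a VQA model might return for a photo
# of a red apple on a table.
canned = {"object color": "red", "object": "apple", "scene": "table"}
text = fill_template("{object color} {object} on the {scene}", canned.get)
# text == "red apple on the table"
```

Each brace-enclosed span is asked as its own question, so several questions can be combined into one output sentence.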
\r\n*Download model files from [BaiduNetdisk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1ILREVgM0eFJlkWaYlKsR0g?pwd=yw75) or [huggingface.co\u002FSalesforce\u002Fblip-vqa-capfilt-large](https:\u002F\u002Fhuggingface.co\u002FSalesforce\u002Fblip-vqa-capfilt-large\u002Ftree\u002Fmain) and [huggingface.co\u002FSalesforce\u002Fblip-vqa-base](https:\u002F\u002Fhuggingface.co\u002FSalesforce\u002Fblip-vqa-base\u002Ftree\u002Fmain) and copy them to the ```ComfyUI\\models\\VQA``` folder.\r\n\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_26ee7f6e1333.jpg) \r\n\r\nNode Options:\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_8f24d5ba5774.jpg)\r\n\r\n* image: The image input.\r\n* vqa_model: The VQA model input, from the [LoadVQAModel](#LoadVQAModel) node.\r\n* question: Task text input. A single question is enclosed in curly braces \"{}\", and the answer will replace the question at its original position in the text output. Multiple questions can be defined with curly braces in a single query.\r\n  For example, for a picture of an item placed in a scene, the question could be: \"{object color} {object} on the {scene}\".\r\n\r\n### \u003Ca id=\"table1\">LoadVQAModel\u003C\u002Fa>\r\n\r\nLoad the blip-vqa model.    \r\n\r\nNode Options:\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_fe5c9fda8276.jpg)\r\n\r\n* model: There are currently two models to choose from: \"blip-vqa-base\" and \"blip-vqa-capfilt-large\".\r\n* precision: Model precision, with two options: \"fp16\" and \"fp32\".\r\n* device: Device to run the model on, with two options: \"cuda\" and \"cpu\".\r\n\r\n### \u003Ca id=\"table1\">ImageShift\u003C\u002Fa>\r\n\r\nShift the image. 
This node can output shift-seam masks, making it convenient to create seamless textures.\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_6b39ef79d5f7.jpg)    \r\n\r\nNode options:   \r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_bd188354f0f0.jpg)    \r\n\r\n* image\u003Csup>5\u003C\u002Fsup>: The input image.\r\n* mask\u003Csup>2,5\u003C\u002Fsup>: The mask of the image.\r\n* shift_x: Horizontal shift distance.\r\n* shift_y: Vertical shift distance.\r\n* cyclic: Whether the out-of-bounds part of the shift wraps around.\r\n* background_color\u003Csup>4\u003C\u002Fsup>: Background color. If cyclic is set to False, this setting is used as the background color.\r\n* border_mask_width: Border mask width.\r\n* border_mask_blur: Border mask blur.\r\n* [note](#notes)\r\n\r\n### \u003Ca id=\"table1\">ImageBlend\u003C\u002Fa>\r\n\r\nA simple node for compositing a layer image onto a background image; multiple blend modes are available, and transparency can be set.\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_9ef4ac3780d9.jpg)    \r\n\r\nNode options:\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_c28bafb50f06.jpg)    \r\n\r\n* background_image\u003Csup>1\u003C\u002Fsup>: The background image.\r\n* layer_image\u003Csup>1\u003C\u002Fsup>: Layer image for composite.\r\n* layer_mask\u003Csup>1,2\u003C\u002Fsup>: Mask for layer_image.\r\n* invert_mask: Whether to reverse the mask.\r\n* blend_mode\u003Csup>3\u003C\u002Fsup>: Blending mode.\r\n* opacity: Opacity of blend.\r\n* [note](#notes)\r\n\r\n### \u003Ca id=\"table1\">ImageReel\u003C\u002Fa>\r\n\r\nDisplay multiple images in one reel. Text annotations can be added to each image in the reel. 
By using the [ImageReelComposite](#ImageReelComposite) node, multiple reels can be combined into one image.\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_bee3c2ee1e72.jpg)    \r\n\r\nNode Options:   \r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_2e77dd49a338.jpg)    \r\n\r\n* image1: The first image. This input is required.\r\n* image2: The second image. Optional input.\r\n* image3: The third image. Optional input.\r\n* image4: The fourth image. Optional input.\r\n* image1_text: Text annotation for the first image.\r\n* image2_text: Text annotation for the second image.\r\n* image3_text: Text annotation for the third image.\r\n* image4_text: Text annotation for the fourth image.\r\n* reel_height: The height of the reel.\r\n* border: The border width of the images in the reel.\r\n\r\nOutput:\r\n\r\n* reel: The reel, used as input to the [ImageReelComposite](#ImageReelComposite) node. \r\n\r\n### \u003Ca id=\"table1\">ImageReelComposite\u003C\u002Fa>\r\n\r\nCombine multiple reels into one image.\r\n\r\nNode Options:\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_92d9ef8a0283.jpg)    \r\n\r\n* reel_1: The first reel. This input is required.\r\n* reel_2: The second reel. Optional input.\r\n* reel_3: The third reel. Optional input.\r\n* reel_4: The fourth reel. Optional input.\r\n* font_file\u003Csup>*\u003C\u002Fsup>: Here is a list of available font files in the font folder; the selected font file will be used to generate the image.\r\n* border: The border width of the reel.\r\n* color_theme: Theme color for the output image.            \r\n  \u003Csup>*\u003C\u002Fsup>The font folder is defined in ```resource_dir.ini```. This file is located in the root directory of the plugin, with the default name ```resource_dir.ini.example```. 
\r\n  To use this file for the first time, you need to change the file suffix to ```.ini```.\r\n  Open it with a text editor, find the line starting with \"FONT_dir=\", and enter the custom folder path after the \"=\". \r\n  Multiple folders can be defined in ```resource_dir.ini```, separated by commas, semicolons, or spaces. \r\n  All font files in these folders will be collected and displayed in the node list during ComfyUI initialization.\r\n  If the folder set in the ini file is invalid, the font folder that comes with the plugin will be used.\r\n\r\n### \u003Ca id=\"table1\">ImageOpacity\u003C\u002Fa>\r\n\r\nAdjust image opacity.\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_a917a1ae3a8c.jpg)    \r\n\r\nNode option:   \r\n\r\n* image\u003Csup>5\u003C\u002Fsup>: Image input, supporting RGB and RGBA. If it is RGB, an alpha channel covering the entire image is added automatically.\r\n* mask\u003Csup>2,5\u003C\u002Fsup> : Mask input.\r\n* invert_mask: Whether to reverse the mask.\r\n* opacity: Opacity of the image.\r\n* [note](#notes)\r\n\r\n### \u003Ca id=\"table1\">ColorPicker\u003C\u002Fa>\r\n\r\nA web extension modified from [mtb nodes](https:\u002F\u002Fgithub.com\u002FmelMass\u002Fcomfy_mtb). Select a color on the color palette and output its RGB value; thanks to the original author.\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_a97d8ebbcb00.jpg)    \r\n\r\nNode options:\r\n\r\n* mode: The output format; hexadecimal (HEX) and decimal (DEC) are available.  \r\n\r\nOutput type: \r\n\r\n* value: String format.\r\n\r\n### \u003Ca id=\"table1\">RGBValue\u003C\u002Fa>\r\n\r\nOutput a color value as three separate decimal values for R, G, and B. 
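The HEX/DEC handling shared by these color nodes can be sketched as follows; `color_to_rgb` is a hypothetical helper for illustration, not the plugin's actual code:

```python
def color_to_rgb(color_value):
    """Parse a HEX string like '#FF8800' or a DEC tuple like (255, 136, 0)
    into an (R, G, B) tuple of ints. Other types raise, mirroring the
    documented node behaviour. Hypothetical helper, not the node's code."""
    if isinstance(color_value, str):
        s = color_value.lstrip('#')
        # Two hex digits per channel: RR, GG, BB.
        return tuple(int(s[i:i + 2], 16) for i in (0, 2, 4))
    if isinstance(color_value, (tuple, list)) and len(color_value) == 3:
        return tuple(int(c) for c in color_value)
    raise TypeError("color_value must be a HEX string or an RGB tuple")
```

For example, `color_to_rgb('#FF8800')` yields `(255, 136, 0)`, and a DEC tuple passes through unchanged.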
Supports both the HEX and DEC formats output by the ColorPicker node.\r\n\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_bf90f6a42074.jpg)    \r\n\r\nNode Options:\r\n\r\n* color_value: Supports hexadecimal (HEX) or decimal (DEC) color values, which must be of string or tuple type. Passing other types will result in an error.\r\n\r\n### \u003Ca id=\"table1\">HSVValue\u003C\u002Fa>\r\n\r\nOutput a color value as individual decimal values of H, S, and V (maximum value of 255). Supports both the HEX and DEC formats output by the ColorPicker node.\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_986b8b4d89ea.jpg)    \r\n\r\nNode Options:\r\n\r\n* color_value: Supports hexadecimal (HEX) or decimal (DEC) color values, which must be of string or tuple type. Passing other types will result in an error.\r\n\r\n### \u003Ca id=\"table1\">GrayValue\u003C\u002Fa>\r\n\r\nOutput the grayscale value corresponding to a color value. Supports 256-level and 100-level grayscale outputs.\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_6570e5fe8281.jpg)    \r\n\r\nNode Options:\r\n\r\n* color_value: Supports hexadecimal (HEX) or decimal (DEC) color values, which must be of string or tuple type. Passing other types will result in an error.\r\n\r\nOutputs:\r\n\r\n* gray(256_level): 256-level grayscale value. Integer type, range 0~255.\r\n* gray(100_level): 100-level grayscale value. Integer type, range 0~100.\r\n\r\n\r\n### \u003Ca id=\"table1\">GetMainColors\u003C\u002Fa>\r\n\r\nObtain the main colors of the image. 
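Dominant-color extraction of this kind is typically done with K-Means clustering. Below is a minimal sketch of Lloyd's algorithm over RGB pixels, for illustration only: the node itself uses a library implementation with "lloyd" and "elkan" variants, and real K-Means uses random or k-means++ seeding rather than the deterministic initialization here.

```python
def main_colors(pixels, k=5, iters=10):
    """Cluster RGB pixel tuples into k colors with Lloyd's K-Means and
    return the centroids as HEX strings, largest cluster first.
    Sketch only; assumes the image has at least k distinct colors."""
    # Deterministic init: first k distinct pixel values (a simplification).
    centers = []
    for p in pixels:
        if p not in centers:
            centers.append(p)
        if len(centers) == k:
            break
    for _ in range(iters):
        # Assignment step: each pixel joins its nearest center's bucket.
        buckets = [[] for _ in centers]
        for p in pixels:
            i = min(range(len(centers)),
                    key=lambda j: sum((a - b) ** 2 for a, b in zip(p, centers[j])))
            buckets[i].append(p)
        # Update step: move each center to the mean of its bucket.
        centers = [tuple(sum(ch) / len(b) for ch in zip(*b)) if b else c
                   for b, c in zip(buckets, centers)]
    # Order clusters by size and format centroids as HEX strings.
    order = sorted(range(len(centers)), key=lambda i: -len(buckets[i]))
    return ["#%02X%02X%02X" % tuple(int(round(ch)) for ch in centers[i])
            for i in order]
```

On an image that is two-thirds red and one-third blue, this returns the red centroid first, matching the area-ordering behaviour described for GetMainColorsV2 below.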
You can obtain 5 colors.\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_68c7bf3b293c.jpg)\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_9e03ff9e2190.jpg)\r\n\r\nNode Options:\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_147da81ae016.jpg)    \r\n\r\n* image: The image input.\r\n* k_means_algorithm: K-Means algorithm options. \"lloyd\" is the standard K-Means algorithm, while \"elkan\" uses the triangle inequality and is suitable for larger images. \r\n\r\nOutputs:\r\n\r\n* preview_image: A preview image of the 5 main colors.\r\n* color_1~color_5: Color value outputs, each an RGB string in HEX format.\r\n\r\n### \u003Ca id=\"table1\">GetMainColorsV2\u003C\u002Fa>\r\nAdds sorting by color area to the [GetMainColors](#GetMainColors) node and displays color values and color areas in the preview image. \r\nThis part of the code was improved by @HL, thanks.\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_d8753263d694.jpg)\r\n\r\n\r\n### \u003Ca id=\"table1\">ColorName\u003C\u002Fa>\r\n\r\nOutput the name of the most similar color in the palette based on the color value.\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_70d240125602.jpg)\r\n\r\nNode Options:\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_89b481ea3aa8.jpg)    \r\n\r\n* color: Color value input, as an RGB string in HEX format.\r\n* palette: Color palette. 
There are 6 color mapping tables available: xkcd, wiki_color, flux_sdxl, css4, css3, and html4.\r\n\r\nOutput:\r\n\r\n* color_name: Color name as a string.\r\n\r\n### \u003Ca id=\"table1\">NameToColor\u003C\u002Fa>\r\nOutput a color image and color value from a color name.\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_6e318d9250ac.jpg)\r\n\r\nNode Options:\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_ce73465d94c8.jpg)\r\n* size_as\u003Csup>*\u003C\u002Fsup>: Input an image or mask here to generate the image according to its size. Note that this input takes priority over the other size settings.\r\n* color_name: The color name to look up.\r\n* palette: Color palette. There are 6 color mapping tables available: xkcd, wiki_color, flux_sdxl, css4, css3, and html4.\r\n* in_palette_only: Whether to output only colors from the selected palette. If set to True, search only the current palette and output default_color when there is no matching name.\r\nIf set to False, search all palettes; if there is no matching name in any of them, output the color with the closest name.\r\n* default_color: Default color, output when no matching name is found.\r\n* size\u003Csup>**\u003C\u002Fsup>: Size preset. Presets can be customized by the user. If size_as has an input, this option is ignored.\r\n* custom_width: Image width. Valid only when size is set to \"custom\". If size_as has an input, this option is ignored.\r\n* custom_height: Image height. Valid only when size is set to \"custom\". If size_as has an input, this option is ignored.\r\n\r\n\u003Csup>*\u003C\u002Fsup>This input accepts only images and masks. 
Forcing other input types will result in node errors.\r\n\u003Csup>**\u003C\u002Fsup>The preset sizes are defined in ```custom_size.ini```. This file is located in the root directory of the plug-in, and the default name is ```custom_size.ini.example```; to use it for the first time, change the file suffix to ```.ini```, then open it with a text editor. Each row defines one size: the first value is the width and the second is the height, separated by a lowercase \"x\". To avoid errors, do not enter extra characters.\r\n\r\nOutputs:\r\n* image: The output color image.\r\n* color: Color value output, as an RGB string in HEX format.\r\n\r\n\r\n### \u003Ca id=\"table1\">ExtendCanvas\u003C\u002Fa>\r\n\r\nExtend the canvas.\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_3f08ba522bba.jpg)    \r\n\r\nNode options:\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_890f9a3953a3.jpg)    \r\n\r\n* invert_mask: Whether to reverse the mask.\r\n* top: Top extension value.\r\n* bottom: Bottom extension value.\r\n* left: Left extension value.\r\n* right: Right extension value.\r\n* color: Color of the canvas.\r\n\r\n### \u003Ca id=\"table1\">ExtendCanvasV2\u003C\u002Fa>\r\n\r\nThe V2 upgraded version of ExtendCanvas.\r\n\r\nBased on ExtendCanvas, color is changed to a string type and supports external ```ColorPicker``` input. Negative values are supported, meaning the image will be cropped.\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_13ae8c0d7467.jpg)    \r\n\r\n### XY to \u003Ca id=\"table1\">Percent\u003C\u002Fa>\r\n\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_2834f4b3b690.jpg)    \r\nConvert absolute coordinates to percentage 
coordinates.\r\n\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_bbe1114bcfd8.jpg)    \r\nNode options:\r\n\r\n* x: Value of X.\r\n* y: Value of Y.\r\n\r\n### \u003Ca id=\"table1\">LayerImageTransform\u003C\u002Fa>\r\n\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_434e9bcd3c86.jpg)    \r\nThis node transforms the layer_image separately; it can scale, rotate, change the aspect ratio, and mirror-flip the layer without changing the image size.\r\n\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_6b90fdbd49ab.jpg)    \r\nNode options:\r\n\r\n* x: Value of X.\r\n* y: Value of Y.\r\n* mirror: Mirror flipping. Two modes are provided: horizontal flipping and vertical flipping.\r\n* scale: Layer magnification; 1.0 represents the original size.\r\n* aspect_ratio: Layer aspect ratio. 1.0 is the original ratio; a larger value stretches the layer, and a smaller value flattens it.\r\n* rotate: Layer rotation in degrees.\r\n* Sampling method for layer scaling and rotation, including lanczos, bicubic, hamming, bilinear, box and nearest. The sampling method affects the image quality and processing time of the synthesized image.\r\n* anti_aliasing: Anti-aliasing, ranging from 0 to 16; the larger the value, the less obvious the aliasing. 
An excessively high value will significantly reduce the processing speed of the node.\r\n\r\n### \u003Ca id=\"table1\">LayerMaskTransform\u003C\u002Fa>\r\n\r\nSimilar to the LayerImageTransform node, this node transforms the layer_mask separately; it can scale, rotate, change the aspect ratio, and mirror-flip the mask without changing the mask size.\r\n\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_643f554f6cd2.jpg)    \r\nNode options:\r\n\r\n* x: Value of X.\r\n* y: Value of Y.\r\n* mirror: Mirror flipping. Two modes are provided: horizontal flipping and vertical flipping.\r\n* scale: Layer magnification; 1.0 represents the original size.\r\n* aspect_ratio: Layer aspect ratio. 1.0 is the original ratio; a larger value stretches the layer, and a smaller value flattens it.\r\n* rotate: Layer rotation in degrees.\r\n* Sampling method for layer scaling and rotation, including lanczos, bicubic, hamming, bilinear, box and nearest. The sampling method affects the image quality and processing time of the synthesized image.\r\n* anti_aliasing: Anti-aliasing, ranging from 0 to 16; the larger the value, the less obvious the aliasing. 
An excessively high value will significantly reduce the processing speed of the node.\r\n\r\n### \u003Ca id=\"table1\">ColorImage\u003C\u002Fa>\r\n\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_840e1bb02086.jpg)    \r\nGenerate an image of a specified color and size.\r\n\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_af384cc7b105.jpg)    \r\nNode options:\r\n\r\n* width: Width of the image.\r\n* height: Height of the image.\r\n* color\u003Csup>4\u003C\u002Fsup>: Color of the image.\r\n\r\n### \u003Ca id=\"table1\">ColorImageV2\u003C\u002Fa>\r\n\r\nThe V2 upgraded version of ColorImage.\r\n\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_bfff57fe710a.jpg)    \r\nThe following changes have been made based on ColorImage:\r\n\r\n* size_as\u003Csup>*\u003C\u002Fsup>: Input an image or mask here to generate the image according to its size. Note that this input takes priority over the other size settings.\r\n* size\u003Csup>**\u003C\u002Fsup>: Size preset. Presets can be customized by the user. If size_as has an input, this option is ignored.\r\n* custom_width: Image width. Valid only when size is set to \"custom\". If size_as has an input, this option is ignored.\r\n* custom_height: Image height. Valid only when size is set to \"custom\". If size_as has an input, this option is ignored.\r\n\r\n\u003Csup>*\u003C\u002Fsup>This input accepts only images and masks. Forcing other input types will result in node errors.\r\n\u003Csup>**\u003C\u002Fsup>The preset sizes are defined in ```custom_size.ini```. This file is located in the root directory of the plug-in, and the default name is ```custom_size.ini.example```; to use it for the first time, change the file suffix to ```.ini```, then open it with a text editor. 
Each row defines one size: the first value is the width and the second is the height, separated by a lowercase \"x\". To avoid errors, do not enter extra characters.\r\n\r\n### \u003Ca id=\"table1\">GradientImage\u003C\u002Fa>\r\n\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_cebf2aefd0cd.jpg)    \r\nGenerate an image with a specified size and color gradient.\r\n\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_9c8b9822de6e.jpg)    \r\nNode options:\r\n\r\n* width: Width of the image.\r\n* height: Height of the image.\r\n* angle: Angle of the gradient.\r\n* start_color\u003Csup>4\u003C\u002Fsup>: Color at the beginning of the gradient.\r\n* end_color\u003Csup>4\u003C\u002Fsup>: Color at the end of the gradient.\r\n\r\n### \u003Ca id=\"table1\">GradientImageV2\u003C\u002Fa>\r\n\r\nThe V2 upgraded version of GradientImage.\r\n\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_65750c3f58f0.jpg)    \r\nThe following changes have been made based on GradientImage:\r\n\r\n* size_as\u003Csup>*\u003C\u002Fsup>: Input an image or mask here to generate the image according to its size. Note that this input takes priority over the other size settings.\r\n* size\u003Csup>**\u003C\u002Fsup>: Size preset. Presets can be customized by the user. If size_as has an input, this option is ignored.\r\n* custom_width: Image width. Valid only when size is set to \"custom\". If size_as has an input, this option is ignored.\r\n* custom_height: Image height. Valid only when size is set to \"custom\". If size_as has an input, this option is ignored.\r\n\r\n\u003Csup>*\u003C\u002Fsup>This input accepts only images and masks. 
Forcing other input types will result in node errors.\r\n\u003Csup>**\u003C\u002Fsup>The preset sizes are defined in ```custom_size.ini```. This file is located in the root directory of the plug-in, and the default name is ```custom_size.ini.example```; to use it for the first time, change the file suffix to ```.ini```, then open it with a text editor. Each row defines one size: the first value is the width and the second is the height, separated by a lowercase \"x\". To avoid errors, do not enter extra characters.\r\n\r\n### \u003Ca id=\"table1\">RoundedRectangle\u003C\u002Fa>\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_398038b873cb.jpg)    \r\nGenerate rounded rectangles and masks.\r\n\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_67ff08a9da40.jpg)    \r\nNode Options:\r\n* image: The image to be processed.\r\n* object_mask: Optional input. This mask can generate the rounded rectangular region. If ```crop_box``` has an input, this option is ignored.\r\n* crop_box: Optional input. The rounded rectangular area can be generated from this cropping region.\r\n* rounded_rect_radius: Rounded rectangle radius. The range is 0-100; the larger the value, the more pronounced the rounded corners.\r\n* anti_aliasing: Anti-aliasing, ranging from 0-16; the larger the value, the less obvious the aliasing. Excessive values will significantly reduce the processing speed of the node.\r\n* top: Top margin of the rounded rectangle, as a percentage of the image height; negative values are allowed. If crop_box or object_mask has an input, this option is ignored.\r\n* bottom: Bottom margin of the rounded rectangle, as a percentage of the image height; negative values are allowed. 
If crop_box or object_mask has an input, this option is ignored.\r\n* left: Left margin of the rounded rectangle, as a percentage of the image width; negative values are allowed. If crop_box or object_mask has an input, this option is ignored.\r\n* right: Right margin of the rounded rectangle, as a percentage of the image width; negative values are allowed. If crop_box or object_mask has an input, this option is ignored.\r\n* detect: The method for detecting mask regions when object_mask is input. ```min_bounding_rect``` is the minimum bounding rectangle of the block shape, ```max_inscribed_rect``` is the maximum inscribed rectangle of the block shape, and ```mask_area``` is the effective area of the mask pixels.\r\n* obj_ext_top: When object_mask or crop_box is input, the top of the rounded rectangle area expands outward by this percentage of the area height; negative values are allowed.\r\n* obj_ext_bottom: When object_mask or crop_box is input, the bottom of the rounded rectangle area expands outward by this percentage of the area height; negative values are allowed.\r\n* obj_ext_left: When object_mask or crop_box is input, the left of the rounded rectangle area expands outward by this percentage of the area width; negative values are allowed.\r\n* obj_ext_right: When object_mask or crop_box is input, the right of the rounded rectangle area expands outward by this percentage of the area width; negative values are allowed.\r\n\r\n\r\n### \u003Ca id=\"table1\">SimpleTextImage\u003C\u002Fa>\r\n\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_dcebc13bcbea.jpg)    \r\nGenerate simple typesetting images and masks from text. 
This node references some of the functionality and code of [ZHO-ZHO-ZHO\u002FComfyUI-Text_Image-Composite](https:\u002F\u002Fgithub.com\u002FZHO-ZHO-ZHO\u002FComfyUI-Text_Image-Composite), thanks to the original author.\r\n\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_467d0ada7804.jpg)    \r\nNode options:\r\n\r\n* size_as\u003Csup>*\u003C\u002Fsup>: Input an image or mask here to generate the output image and mask according to its size. This input takes priority over the width and height settings below.\r\n* font_file\u003Csup>**\u003C\u002Fsup>: A list of the available font files in the font folder; the selected font file will be used to generate the image.\r\n* align: Alignment options. There are three options: center, left, and right.\r\n* char_per_line: The number of characters per line; any excess is automatically wrapped.\r\n* leading: The line spacing.\r\n* font_size: The size of the font.\r\n* text_color: The color of the text.\r\n* stroke_width: The width of the stroke.\r\n* stroke_color: The color of the stroke.\r\n* x_offset: The horizontal offset of the text position.\r\n* y_offset: The vertical offset of the text position.\r\n* width: Width of the image. If size_as has an input, this setting is ignored.\r\n* height: Height of the image. If size_as has an input, this setting is ignored.\r\n\r\n\u003Csup>*\u003C\u002Fsup>This input accepts only images and masks. Forcing other input types will result in node errors.\r\n\r\n\u003Csup>**\u003C\u002Fsup>The font folder is defined in ```resource_dir.ini```. This file is located in the root directory of the plug-in, and the default name is ```resource_dir.ini.example```; to use it for the first time, change the file suffix to ```.ini```.\r\nOpen it with a text editor, find the line starting with \"FONT_dir=\", and enter the custom folder path after the \"=\". 
\r\nMultiple folders can be defined in ```resource_dir.ini```, separated by commas, semicolons, or spaces. \r\nAll font files in these folders will be collected and displayed in the node list during ComfyUI initialization.\r\nIf the folder set in the ini file is invalid, the font folder that comes with the plugin will be used.\r\n\r\n### \u003Ca id=\"table1\">TextImage\u003C\u002Fa>\r\n\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_049861627710.jpg)    \r\nGenerate images and masks from text. Supports adjusting word and line spacing, horizontal or vertical layout, and random per-character variations in size and position.\r\n\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_7dd51eb8e0c2.jpg)    \r\nNode options:\r\n\r\n* size_as\u003Csup>*\u003C\u002Fsup>: Input an image or mask here to generate the output image and mask according to its size. This input takes priority over the width and height settings below.\r\n* font_file\u003Csup>**\u003C\u002Fsup>: A list of the available font files in the font folder; the selected font file will be used to generate the image.\r\n* spacing: Word spacing, in pixels.\r\n* leading: Line spacing, in pixels.\r\n* horizontal_border: Side margin. For horizontal text it is the left margin; for vertical text it is the right margin. This value is a percentage; for example, 50 places the starting point at the center between the two sides.\r\n* vertical_border: Top margin. This value is a percentage; for example, 10 places the starting point 10% away from the top.\r\n* scale: The overall size of the text. The initial size is calculated automatically from the image size and text content, with the longest row or column adapting to the image width or height by default. 
Adjusting this value scales the text as a whole. It is a percentage; for example, 60 scales the text to 60%.\r\n* variation_range: The range of random per-character variation. When this value is greater than 0, each character undergoes random changes in size and position; the larger the value, the greater the magnitude of the change.\r\n* variation_seed: The random seed. Fix this value so that the per-character variations remain the same on each run.\r\n* layout: Text layout. There are horizontal and vertical options to choose from.\r\n* width: Width of the image. If size_as has an input, this setting is ignored.\r\n* height: Height of the image. If size_as has an input, this setting is ignored.\r\n* text_color: The color of the text.\r\n* background_color\u003Csup>4\u003C\u002Fsup>: The color of the background.\r\n\r\n\u003Csup>*\u003C\u002Fsup>This input accepts only images and masks. Forcing other input types will result in node errors.\r\n\r\n\u003Csup>**\u003C\u002Fsup>The font folder is defined in ```resource_dir.ini```. This file is located in the root directory of the plug-in, and the default name is ```resource_dir.ini.example```; to use it for the first time, change the file suffix to ```.ini```.\r\nOpen it with a text editor, find the line starting with \"FONT_dir=\", and enter the custom folder path after the \"=\". \r\nMultiple folders can be defined in ```resource_dir.ini```, separated by commas, semicolons, or spaces. 
\r\nAll font files in these folders will be collected and displayed in the node list during ComfyUI initialization.\r\nIf the folder set in the ini file is invalid, the font folder that comes with the plugin will be used.\r\n\r\n### \u003Ca id=\"table1\">TextImageV2\u003C\u002Fa>\r\n\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_ba69500abdbe.jpg)    \r\n\r\nThis node is merged from [heshengtao](https:\u002F\u002Fgithub.com\u002Fheshengtao). The PR modifies the scaling of the text-image node based on the TextImage node: font spacing follows the scaling, and coordinates are based on the center point of the entire line of text rather than its top-left corner. Thanks to the author for the contribution.\r\n\r\n\r\n\r\n### \u003Ca id=\"table1\">ImageChannelSplit\u003C\u002Fa>\r\n\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_408a947d6b0a.jpg)    \r\nSplit the image channels into individual images.\r\n\r\nNode options:\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_bbaba8b94c50.jpg)    \r\n\r\n* mode: Channel mode; options include RGBA, YCbCr, LAB and HSV.\r\n\r\n### \u003Ca id=\"table1\">ImageChannelMerge\u003C\u002Fa>\r\n\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_b39210853240.jpg)    \r\nMerge the individual channel images into one image.\r\n\r\nNode options:\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_44f53d552fd0.jpg)    \r\n\r\n* mode: Channel mode; options include RGBA, YCbCr, LAB and HSV.\r\n\r\n### \u003Ca id=\"table1\">ImageRemoveAlpha\u003C\u002Fa>\r\n\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_d2fecb85712d.jpg)    \r\nRemove the alpha channel from the image and convert it to RGB mode. 
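When the background is filled, each RGBA pixel is composited over the background color with the standard "over" operator. A per-pixel sketch (hypothetical helper for illustration, not the node's actual code, which operates on whole tensors):

```python
def remove_alpha(rgba_pixel, background=(255, 255, 255)):
    """Composite one RGBA pixel over an opaque background color and return
    the resulting RGB pixel. Alpha is in 0-255; this is the standard
    "over" operator. Hypothetical helper, not the node's code."""
    r, g, b, a = rgba_pixel
    # Weighted average of foreground and background by alpha coverage.
    return tuple(round((c * a + bg * (255 - a)) / 255)
                 for c, bg in zip((r, g, b), background))
```

A fully opaque pixel passes through unchanged; a fully transparent one becomes the background color; a half-transparent gray over black comes out roughly half as bright.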
You can choose to fill the background and set the background color.\r\n\r\nNode options:\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_a0e83d0378b6.jpg)    \r\n\r\n* RGBA_image: The input image; supports RGBA or RGB modes.\r\n* mask: Optional input mask. If a mask is input, it is used first, ignoring the alpha that comes with RGBA_image.\r\n* fill_background: Whether to fill the background.\r\n* background_color\u003Csup>4\u003C\u002Fsup>: Color of the background.\r\n\r\n### \u003Ca id=\"table1\">ImageCombineAlpha\u003C\u002Fa>\r\n\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_a68033baa989.jpg)    \r\nMerge the image and mask into an RGBA mode image containing an alpha channel.\r\n\r\n\r\n### \u003Ca id=\"table1\">HLFrequencyDetailRestore\u003C\u002Fa>\r\n\r\nUses low-frequency filtering while retaining high frequencies to restore image details. Compared to [kijai's DetailTransfer](https:\u002F\u002Fgithub.com\u002Fkijai\u002FComfyUI-IC-Light), this node blends better with the environment while retaining details.\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_c28f9e720cb1.jpg)    \r\n\r\nNode Options:  \r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_4ebf3af7d82c.jpg)    \r\n\r\n* image: Background image input.\r\n* detail_image: Detail image input.\r\n* mask: Optional input; if a mask is input, only the details within the mask area are restored.\r\n* keep_high_freq: The range of high frequencies to keep. The larger the value, the richer the retained high-frequency details.\r\n* erase_low_freq: The range of low frequencies to erase. The larger the value, the wider the erased low-frequency range.\r\n* mask_blur: Mask edge blur. 
Valid only when a mask is input.\r\n\r\n### \u003Ca id=\"table1\">GetImageSize\u003C\u002Fa>\r\n\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_52afb6e54124.jpg)    \r\nObtain the width and height of the image.\r\n\r\nOutput:\r\n\r\n* width: The width of the image.\r\n* height: The height of the image.\r\n* original_size: The original size data of the image, used by subsequent nodes to restore the size.\r\n\r\n### \u003Ca id=\"table1\">AnyRerouter\u003C\u002Fa>\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_4f209dbaf028.jpg)    \r\nUsed to reroute any type of data; this node accepts any type of input.\r\n\r\n\r\n### \u003Ca id=\"table1\">ImageHub\u003C\u002Fa>\r\n\r\nSwitch the output among multiple input images and masks, supporting 9 sets of inputs. All inputs are optional. If a set contains only an image or only a mask, the missing item is output as None.\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_7dfe4ad7006b.jpg)    \r\n\r\nNode options:  \r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_5a2e3fecf3bb.jpg)    \r\n\r\n* output: Switches the output; the value is the corresponding input group. When the ```random_output``` option is True, this setting is ignored.\r\n* random_output: When this is True, the ```output``` setting is ignored and a random set is output from among all valid inputs.\r\n\r\n### \u003Ca id=\"table1\">BatchSelector\u003C\u002Fa>\r\n\r\nRetrieve specified images or masks from batch images or masks.\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_7e3d03b63d89.jpg)    \r\n\r\nNode Options:  \r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_a92c07d7deb2.jpg)    \r\n\r\n* images: Batch images input. 
This input is optional.\r\n* masks: Batch masks input. This input is optional.\r\n* select: Select the output image or mask by batch index, where 0 is the first image. Multiple values can be entered, separated by any non-numeric character, including but not limited to commas, periods, semicolons, spaces or letters, and even Chinese characters.\r\n  Note: If a value exceeds the batch size, the last image is output. If there is no corresponding input, an empty 64x64 image or a 64x64 black mask is output.\r\n\r\n### \u003Ca id=\"table1\">ChoiceTextPreset\u003C\u002Fa>\r\nSelect the output from the preset text dictionary.\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_4c36c6bd849c.jpg)    \r\n\r\nNode Options:  \r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_6f3b04b68492.jpg)    \r\n* text_preset: Preset text, as output by the [TextPreseter](#TextPreseter) node.\r\n* choice_title: Select a preset title to output the corresponding text content.\r\n* random_choice: Whether to randomly select a preset.\r\n* default: Index of the default output text; 0 corresponds to the first paragraph, and so on. Note that an index beyond the number of preset paragraphs will result in an error.\r\n* seed: The random seed used for random selection.\r\n* control_after_generate: Whether to change the seed on every run.\r\n\r\nOutputs:\r\n* title: Text paragraph title.\r\n* content: Text paragraph content.\r\n\r\n### \u003Ca id=\"table1\">TextPreseter\u003C\u002Fa>\r\nA preset text dictionary; each node sets one section of text, and multiple nodes can be concatenated.\r\n\r\nNode Options:  \r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_61cd1f29a8ea.jpg)    \r\n* text_preset: Preset text input, optional input. 
Multiple preset text nodes can be concatenated.\r\n* title: Text paragraph title.\r\n* content: Text paragraph content.\r\n\r\n### \u003Ca id=\"table1\">TextJoin\u003C\u002Fa>\r\n\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_ad0ec02b101d.jpg)    \r\nCombine multiple paragraphs of text into one.\r\n\r\n\r\n### \u003Ca id=\"table1\">TextJoinV2\u003C\u002Fa>\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_e9b2fd43377b.jpg)    \r\nAdded delimiter options on the basis of [TextJoin](#TextJoin).\r\n\r\n### \u003Ca id=\"table1\">PrintInfo\u003C\u002Fa>\r\n\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_17607c2bd1c7.jpg)    \r\nUsed to provide assistance for workflow debugging. When running, the properties of any object connected to this node will be printed to the console.\r\n\r\nThis node allows any type of input.\r\n\r\n\r\n### \u003Ca id=\"table1\">TextBox\u003C\u002Fa>\r\n\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_b79932b26e14.jpg)    \r\nOutput a string.\r\n\r\n### \u003Ca id=\"table1\">String\u003C\u002Fa>\r\n\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_4ff23992cc64.jpg)    \r\nOutput a string. 
Same as TextBox.\r\n\r\n### \u003Ca id="table1">Integer\u003C\u002Fa>\r\n\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_709f29bbab2d.jpg)    \r\nOutput an integer value.\r\n\r\n### \u003Ca id="table1">Float\u003C\u002Fa>\r\n\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_906324007f6c.jpg)    \r\nOutput a floating-point value with a precision of 5 decimal places.\r\n\r\n### \u003Ca id="table1">Boolean\u003C\u002Fa>\r\n\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_7491441fa044.jpg)    \r\nOutput a boolean value.\r\n\r\n### \u003Ca id="table1">RandomGenerator\u003C\u002Fa>\r\n\r\nUsed to generate random values within a specified range, with int, float, and boolean outputs. Supports batch and list generation, and supports generating a set of different random number lists based on an image batch.\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_15807506f422.jpg)    \r\n\r\nNode Options:\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_7c2fe7f3e463.jpg)  \r\n\r\n* image: Optional input. When connected, generates a list of random numbers whose length matches the image batch size.\r\n* min_value: Minimum value. Random numbers are drawn between the minimum and the maximum.\r\n* max_value: Maximum value. Random numbers are drawn between the minimum and the maximum.\r\n* float_decimal_places: Precision of the float value.\r\n* fix_seed: Whether the random seed is fixed. 
If enabled, the generated random numbers will always be the same.\r\n\r\nOutputs:\r\n* int: Integer random number.\r\n* float: Float random number.\r\n* bool: Boolean random number.\r\n\r\n### \u003Ca id="table1">RandomGeneratorV2\u003C\u002Fa>\r\nOn the basis of [RandomGenerator](#RandomGenerator), adds the ```least``` random-range and seed options.\r\n\r\nNode Options:\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_c22930bb9a6c.jpg)  \r\n* image: Optional input. When connected, generates a list of random numbers whose length matches the image batch size.\r\n* min_value: Minimum value. Random numbers are drawn between the minimum and the maximum.\r\n* max_value: Maximum value. Random numbers are drawn between the minimum and the maximum.\r\n* least: Minimum random range. Generated random numbers will be at least this value.\r\n* float_decimal_places: Precision of the float value.\r\n* seed: The random number seed.\r\n* control_after_generate: Seed change option. If set to fixed, the generated random numbers will always be the same.\r\n\r\nOutputs:\r\n* int: Integer random number.\r\n* float: Float random number.\r\n* bool: Boolean random number.\r\n\r\n\r\n### \u003Ca id="table1">NumberCalculator\u003C\u002Fa>\r\n\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_10c470bbd282.jpg)    \r\nPerforms mathematical operations on two numeric values and outputs integer and floating point results\u003Csup>*\u003C\u002Fsup>. 
Supported operations include ```+```, ```-```, ```*```, ```\u002F```, ```**```, ```\u002F\u002F```, ```%```.\r\n\r\n\u003Csup>*\u003C\u002Fsup>  The input only supports boolean, integer, and floating point numbers; forcing other data types in will result in an error.\r\n\r\n### \u003Ca id="table1">NumberCalculatorV2\u003C\u002Fa>\r\n\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_9491f9aa990b.jpg)  \r\nThe upgraded version of NumberCalculator adds numerical inputs within the node and a square root operation. The square root operation option is ```nth_root```.\r\nNote: Node inputs take priority; when an input is connected, the values set within the node are ignored.\r\n\r\n### \u003Ca id="table1">BooleanOperator\u003C\u002Fa>\r\n\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_aa8e575bfe09.jpg)    \r\nPerform a Boolean operation on two numeric values and output the result\u003Csup>*\u003C\u002Fsup>. Supported operations include ```==```, ```!=```, ```and```, ```or```, ```xor```, ```not```, ```min```, ```max```.\r\n\r\n\u003Csup>*\u003C\u002Fsup>  The input only supports boolean, integer, and floating point numbers; forcing other data types in will result in an error. 
For numeric values, the ```and``` operation outputs the larger number and the ```or``` operation outputs the smaller number.\r\n\r\n### \u003Ca id="table1">BooleanOperatorV2\u003C\u002Fa>\r\n\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_a96a423fa2a9.jpg)  \r\nThe upgraded version of BooleanOperator adds numerical inputs within the node, as well as greater than, less than, greater than or equal to, and less than or equal to judgments.\r\nNote: Node inputs take priority; when an input is connected, the values set within the node are ignored.\r\n\r\n### \u003Ca id="table1">StringCondition\u003C\u002Fa>\r\n\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_e4d7e5d3fd8b.jpg)    \r\nDetermine whether the text contains or does not contain a substring, and output a Boolean value.\r\n\r\nNode Options:    \r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_7f116eea890c.jpg)  \r\n\r\n* text: Input text.\r\n* condition: Judgment condition. ```include``` determines whether it contains the substring, ```exclude``` determines whether it does not, and ```equal``` determines whether it is equal to the substring.\r\n* sub_string: Substring.\r\n\r\n### \u003Ca id="table1">CheckMask\u003C\u002Fa>\r\n\r\nCheck if the mask contains enough valid area and output a Boolean value.\r\n\r\nNode Options:    \r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_9d8ad3d8e120.jpg)    \r\n\r\n* white_point: The white point threshold for determining valid mask pixels; pixels above this value are considered valid.\r\n* area_percent: The percentage of effective area. 
If the proportion of effective areas exceeds this value, output True.\r\n\r\n### \u003Ca id="table1">CheckMaskV2\u003C\u002Fa>\r\n\r\nOn the basis of CheckMask, the ```method``` option has been added, which allows for the selection of different detection methods. The ```area_percent``` is changed to a floating point number with an accuracy of 2 decimal places, which can detect smaller effective areas.\r\n\r\nNode Options:    \r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_f0fd2086fa23.jpg)    \r\n\r\n* method: There are two detection methods: ```simple``` and ```detect_percent```. The simple method only detects whether the mask is completely black, while the detect_percent method detects the proportion of effective areas.\r\n\r\n### \u003Ca id="table1">If\u003C\u002Fa>\r\n\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_ea6bbc9e9de2.jpg)    \r\nSwitches output based on a Boolean conditional input. It can be used for any type of data switching, including but not limited to numeric values, strings, pictures, masks, models, latent, pipe pipelines, etc.\r\n\r\nNode Options:    \r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_3b66d71aa16e.jpg)    \r\n\r\n* if_condition: Conditional input. Boolean, integer, floating point, and string inputs are supported. For numeric input, 0 is judged to be False; for string input, an empty string is judged to be False.\r\n* when_True: This item is output when the condition is True.\r\n* when_False: This item is output when the condition is False.\r\n\r\n### \u003Ca id="table1">SwitchCase\u003C\u002Fa>\r\n\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_318b768b9315.jpg)    \r\nSwitches the output based on the matching string. 
It can be used for any type of data switching, including but not limited to numeric values, strings, pictures, masks, models, latent, pipe pipelines, etc. Supports up to 3 sets of case switches.\r\nEach case is compared to ```switch_condition```; if they match, the corresponding input is output. If multiple cases match, the earlier one takes priority. If no case matches, the default input is output. \r\nNote that strings are case sensitive, and full-width and half-width characters are treated as different.\r\n\r\nNode Options:    \r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_2693aee5786e.jpg)    \r\n\r\n* input_default: Input entry for the default output. This input is required.\r\n* input_1: Input entry used to match ```case_1```. This input is optional.\r\n* input_2: Input entry used to match ```case_2```. This input is optional.\r\n* input_3: Input entry used to match ```case_3```. This input is optional.\r\n* switch_condition: String compared against each case.\r\n* case_1: case_1 string.\r\n* case_2: case_2 string.\r\n* case_3: case_3 string.\r\n\r\n### \u003Ca id="table1">QueueStop\u003C\u002Fa>\r\n\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_ca1836c3c2fa.jpg)    \r\nStop the current queue. When execution reaches this node, the queue will stop. The workflow diagram above illustrates that if the image is larger than 1 megapixel, the queue will stop executing.\r\n\r\nNode Options:    \r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_96cf787fcb9a.jpg)    \r\n\r\n* mode: Stop mode. If you choose ```stop```, whether to stop is determined by the input condition. If you choose ```continue```, the condition is ignored and the queue continues executing.\r\n* stop: If true, the queue will stop. 
If false, the queue will continue to execute.\r\n\r\n### \u003Ca id="table1">PurgeVRAM\u003C\u002Fa>\r\n\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_696632fc2b8e.jpg)    \r\nClean up GPU VRAM and system RAM. Any type of input can be connected, and when execution reaches this node, the VRAM and garbage objects in RAM will be cleaned up. Usually placed after the node where the inference task is completed, such as the VAE Decode node.\r\n\r\nNode Options:  \r\n\r\n* purge_cache: Clean up the cache.\r\n* purge_models: Unload all loaded models.\r\n\r\n### \u003Ca id="table1">PurgeVRAMV2\u003C\u002Fa>\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_a41fa38e95e4.jpg)    \r\nUnlike PurgeVRAM, this node is not mandatory; it can be attached to any input in the process while maintaining the original output, allowing for flexible cleaning.\r\n\r\n\r\n### \u003Ca id="table1">ImageTaggerSave\u003C\u002Fa>\r\n\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_919cce6d9ea0.jpg)  \r\nThe node used to save training set images and their text labels, where the image files and text label files have the same file name. The save directory, file-name timestamps, save format, and image compression rate are all customizable.\r\n*The workflow image_tagger_save is located in the workflow directory.\r\n\r\nNode Options:\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_2bff755625e0.jpg)      \r\n\r\n* image: The input image.\r\n* tag_text: Text label of the image.\r\n* custom_path\u003Csup>*\u003C\u002Fsup>: User-defined directory; enter the directory name in the correct format. 
If empty, it is saved in the default output directory of ComfyUI.\r\n* filename_prefix\u003Csup>*\u003C\u002Fsup>: The prefix of the file name.\r\n* timestamp: Add a timestamp to the file name; choose from date, time to seconds, or time to milliseconds.\r\n* format: The image save format. Currently available in ```png``` and ```jpg```. Note that only the png format is supported for RGBA mode pictures.\r\n* quality: Image quality, in the range 10-100. The higher the value, the better the picture quality and the larger the file size.\r\n* preview: Preview switch.\r\n\r\n\u003Csup>*\u003C\u002Fsup> Enter ```%date``` for the current date (YY-mm-dd) and ```%time``` for the current time (HH-MM-SS). You can enter ```\u002F``` for subdirectories. For example, ```%date\u002Fname_%time``` will output the image to the ```YY-mm-dd``` folder, with ```name_HH-MM-SS``` as the file name prefix.\r\n\r\n### \u003Ca id="table1">ImageTaggerSaveV2\u003C\u002Fa>\r\nThe upgraded version of the [ImageTaggerSave](#ImageTaggerSave) node. It can be used in conjunction with the [LoadImagesFromPath](#LoadImagesFromPath) node to save text label files for the corresponding images in a folder while maintaining the original file names.\r\n\r\nThe following options have been added to the original node:\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_105851ace2eb.jpg)      \r\n* custom_filename: User-defined file name. 
If there is input here, use it as the save file name; otherwise, use filename_prefix as the file name prefix.\r\n* remove_custom_filename_ext: Whether to remove the extension from the original file name.\r\n\r\n### \u003Ca id="table1">LoadImagesFromPath\u003C\u002Fa>  \r\nLoad images from the specified folder.\r\n\r\nNode Options:\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_3b4ce2019ac2.jpg)      \r\n* path: Folder path.\r\n* image_load_cap: The number of files to output. The default value of 0 means to read all image files in the folder.\r\n* select_every_nth: Loads one image every ```select_every_nth``` images, skipping the others.\r\n\r\nOutputs:\r\n* images: Output image list.\r\n* masks: Output the mask list corresponding to the images.\r\n* file_name: Output a list of file names corresponding to the images.\r\n* frame_count: Output the total number of images.\r\n\r\n\r\n### \u003Ca id="table1">ImageBatchToList\u003C\u002Fa>  \r\nConvert a batch of images into multiple smaller batches, with the option to define the maximum number of images in each small batch.\r\n\r\nNode Options:\r\n![image](image\u002Fimage_batch_to_list(multi)_node.jpg)      \r\n* batch_size: The maximum number of images in each small batch.\r\n\r\n\r\n### \u003Ca id="table1">ImageListToBatch\u003C\u002Fa>  \r\nMerge multiple small batches of images into one large batch.\r\n![image](image\u002Fimage_batch_to_list(multi)_node.jpg)      \r\n\r\n\r\n\r\n# \u003Ca id="table1">LayerMask\u003C\u002Fa>\r\n\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_4b9da73b229c.jpg)    \r\n\r\n### \u003Ca id="table1">BlendIfMask\u003C\u002Fa>\r\n\r\nReproduction of Photoshop's Layer Style - Blend If function. 
This node outputs a mask for layer composition on the ImageBlend or ImageBlendAdvance nodes.\r\n```mask``` is an optional input; if you enter a mask here, it will act on the output.\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_a97185529b5b.jpg)    \r\n\r\nNode Options:  \r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_d38bc6c2ce29.jpg)    \r\n\r\n* invert_mask: Whether to reverse the mask.\r\n* blend_if: Channel selection for Blend If. There are four options: ```gray```, ```red```, ```green```, and ```blue```.\r\n* black_point: Black point value, ranging from 0-255.\r\n* black_range: Dark part transition range. The larger the value, the richer the transition levels of the dark-part mask.\r\n* white_point: White point value, ranging from 0-255.\r\n* white_range: Bright part transition range. The larger the value, the richer the transition levels of the bright-part mask.\r\n\r\n### \u003Ca id="table1">MaskBoxDetect\u003C\u002Fa>\r\n\r\nDetect the area where the mask is located and output its position and size.\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_ee1b2ac380ab.jpg)    \r\n\r\nNode options:  \r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_eb040d450d0d.jpg)    \r\n\r\n* detect: Detection method. ```min_bounding_rect``` is the minimum bounding rectangle of the block shape, ```max_inscribed_rect``` is the maximum inscribed rectangle of the block shape, and ```mask_area``` is the effective area of the mask pixels.\r\n* x_adjust: Adjust the horizontal offset after detection.\r\n* y_adjust: Adjust the vertical offset after detection.\r\n* scale_adjust: Adjust the scaling offset after detection.\r\n\r\nOutput:\r\n\r\n* box_preview: Preview image of detection results. 
Red represents the detected result, and green represents the adjusted output result.\r\n* x_percent: Horizontal position output as a percentage.\r\n* y_percent: Vertical position output as a percentage.\r\n* width: Width.\r\n* height: Height.\r\n* x: The x-coordinate of the top left corner position.\r\n* y: The y-coordinate of the top left corner position.\r\n\r\n### \u003Ca id="table1">MaskBoxExtend\u003C\u002Fa>\r\nExtend the range of a mask's BBOX and output the extended range as a mask. The extension range can be set to positive or negative values, with positive values indicating expansion and negative values indicating contraction.\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_f51ee1deb64a.jpg)    \r\n\r\nNode Options:  \r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_e5181eb2ad60.jpg)    \r\n* mask: The input mask.\r\n* crop_box: The mask BBOX data output by the MaskBoxDetect node.\r\n* top_extend: Top extension range. 100 represents a 100% increase in BBOX height.\r\n* bottom_extend: Bottom extension range. 100 represents a 100% increase in BBOX height.\r\n* left_extend: Left extension range. 100 represents a 100% increase in BBOX width.\r\n* right_extend: Right extension range. 
100 represents a 100% increase in BBOX width.\r\n\r\nOutputs:\r\n* mask: Mask of the BBOX extension.\r\n* x_percent: Horizontal position output as a percentage.\r\n* y_percent: Vertical position output as a percentage.\r\n* width: Width.\r\n* height: Height.\r\n* x: The x-coordinate of the top left corner position.\r\n* y: The y-coordinate of the top left corner position.\r\n\r\n## \u003Ca id="table1">Ultra\u003C\u002Fa> Nodes\r\n\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_8a5e70abe284.jpg)   \r\nNodes that use ultra-fine edge mask processing methods. The latest versions include: SegmentAnythingUltraV2, RmBgUltraV2, BiRefNetUltra, PersonMaskUltraV2, SegformerB2ClothesUltra and MaskEdgeUltraDetailV2.\r\nThere are three edge processing methods for these nodes:\r\n\r\n* ```PyMatting``` optimizes the edges of the mask by applying closed-form matting to the mask trimap.\r\n* ```GuidedFilter``` uses the opencv guidedfilter to feather edges based on color similarity, and performs best when edges have strong color separation.    \r\n  The code for the above two methods comes from the Alpha Matte in spacepxl's [ComfyUI-Image-Filters](https:\u002F\u002Fgithub.com\u002Fspacepxl\u002FComfyUI-Image-Filters), thanks to the original author.\r\n* ```VitMatte``` uses the transformer vit model for high-quality edge processing, preserving edge details and even generating semi-transparent masks.\r\n  Note: When running for the first time, you need to download the vitmatte model file and wait for the automatic download to complete. If the download cannot be completed, you can run the command ```huggingface-cli download hustvl\u002Fvitmatte-small-composition-1k``` to download it manually.\r\n  After successfully downloading the model, you can use ```VITMatte(local)``` without accessing the network.\r\n* VitMatte's options: ```device``` sets whether to use CUDA for vitmatte operations, which is about 5 times faster than CPU. 
```max_megapixels``` sets the maximum image size for vitmatte operation; oversized images will be reduced in size. For 16GB VRAM, it is recommended to set it to 3.\r\n\r\n*Download all model files from [BaiduNetdisk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1xYF-V6QRwcFalEqLS7giWg?pwd=jiyz) or [Huggingface](https:\u002F\u002Fhuggingface.co\u002Fhustvl\u002Fvitmatte-small-composition-1k\u002Ftree\u002Fmain) to the ```ComfyUI\u002Fmodels\u002Fvitmatte``` folder.\r\n\r\nThe following figure is an example of the difference in output between the three methods.\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_1741869a6aef.jpg)   \r\n\r\n\r\n### \u003Ca id="table1">RemBgUltra\u003C\u002Fa>\r\n\r\nRemove background. Compared to similar background removal nodes, this node has ultra-high edge detail.\r\n\r\nThis node combines the Alpha Matte node of Spacepxl's [ComfyUI-Image-Filters](https:\u002F\u002Fgithub.com\u002Fspacepxl\u002FComfyUI-Image-Filters) and the functionality of ZHO-ZHO-ZHO's [ComfyUI-BRIA_AI-RMBG](https:\u002F\u002Fgithub.com\u002FZHO-ZHO-ZHO\u002FComfyUI-BRIA_AI-RMBG), thanks to the original authors.\r\n\r\n*Download model files from [BRIA Background Removal v1.4](https:\u002F\u002Fhuggingface.co\u002Fbriaai\u002FRMBG-1.4) or [BaiduNetdisk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F16PMfjpkXn_35T-cVYEPTZA?pwd=qi6o) to the ```ComfyUI\u002Fmodels\u002Frmbg\u002FRMBG-1.4``` folder. This model can be used for non-commercial purposes.    
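Both PyMatting and VitMatte work from a trimap derived from the rough mask: pixels well inside the mask are definite foreground, pixels well outside are definite background, and a band around the edge is left unknown for the matting step to resolve. A minimal numpy sketch of trimap construction (illustrative only, not these nodes' actual code; function names are hypothetical):

```python
import numpy as np

def dilate3x3(mask):
    """One step of 3x3 binary dilation via padded neighborhood union."""
    p = np.pad(mask, 1)
    return (p[:-2, 1:-1] | p[2:, 1:-1] | p[1:-1, :-2] | p[1:-1, 2:]
            | p[:-2, :-2] | p[:-2, 2:] | p[2:, :-2] | p[2:, 2:] | mask)

def erode3x3(mask):
    """One step of 3x3 binary erosion (dual of dilation)."""
    return ~dilate3x3(~mask)

def make_trimap(mask, erode_steps=2, dilate_steps=2):
    """Rough boolean mask -> trimap: 255 = foreground, 0 = background, 128 = unknown."""
    fg = mask.copy()
    for _ in range(erode_steps):       # shrink inward: definite foreground
        fg = erode3x3(fg)
    outer = mask.copy()
    for _ in range(dilate_steps):      # grow outward: bound of the unknown band
        outer = dilate3x3(outer)
    trimap = np.full(mask.shape, 128, dtype=np.uint8)
    trimap[fg] = 255                   # definitely inside the mask
    trimap[~outer] = 0                 # definitely outside the mask
    return trimap
```

The erode and dilate step counts play the same role as the erode/dilate distances exposed by the V2 nodes' options: more erosion/dilation widens the unknown band and gives the matting method more room to recover soft edges.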
\r\n\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_48f56dc3e857.jpg)    \r\n\r\nNode options:  \r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_da3292c04fd1.jpg)    \r\n\r\n* detail_range: Edge detail range.\r\n* black_point: Edge black sampling threshold.\r\n* white_point: Edge white sampling threshold.\r\n* process_detail: Setting this to false skips edge processing to save runtime.\r\n\r\n### \u003Ca id="table1">RmBgUltraV2\u003C\u002Fa>\r\n\r\nThe V2 upgraded version of RemBgUltra adds the VITMatte edge processing method. (Note: processing images larger than 2K with this method will consume a large amount of memory.) \r\n\r\nOn the basis of RemBgUltra, the following changes have been made: \r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_c2e28b01d4f2.jpg)    \r\n\r\n* detail_method: Edge processing method. Provides VITMatte, VITMatte(local), PyMatting, and GuidedFilter. Once the model has been downloaded after the first use of VITMatte, you can use VITMatte(local) afterwards.\r\n* detail_erode: How far the mask is eroded inward from the edge. The larger the value, the larger the range of inward repair.\r\n* detail_dilate: How far the mask edge is dilated outward. 
The larger the value, the wider the range of outward repair.\r\n* device: Set whether VitMatte uses CUDA.\r\n* max_megapixels: Set the maximum size for VitMatte operations.\r\n\r\n\r\n\r\n### \u003Ca id="table1">SegformerB2ClothesUltra\u003C\u002Fa>\r\n  \r\n  ![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_97b832bbdd92.jpg)   \r\n  Generate masks for characters' faces, hair, arms, legs, and clothing, mainly used for segmenting clothing.\r\n  The model segmentation code is from [StartHua](https:\u002F\u002Fgithub.com\u002FStartHua\u002FComfyui_segformer_b2_clothes), thanks to the original author.\r\n  Compared to comfyui_segformer_b2_clothes, this node has ultra-high edge detail. (Note: generating images exceeding 2K in size with the VITMatte method will consume a lot of memory.)    \r\n\r\n*Download all model files from [huggingface](https:\u002F\u002Fhuggingface.co\u002Fmattmdjaga\u002Fsegformer_b2_clothes\u002Ftree\u002Fmain) or [BaiduNetdisk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1OK-HfCNyZWux5iQFANq9Rw?pwd=haxg) to the ```ComfyUI\u002Fmodels\u002Fsegformer_b2_clothes``` folder.\r\n\r\nNode Options:  \r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_344c871700ad.jpg)    \r\n\r\n* face: Facial recognition switch.\r\n* hair: Hair recognition switch.\r\n* hat: Hat recognition switch.\r\n* sunglass: Sunglasses recognition switch.\r\n* left_arm: Left arm recognition switch.\r\n* right_arm: Right arm recognition switch.\r\n* left_leg: Left leg recognition switch.\r\n* right_leg: Right leg recognition switch.\r\n* skirt: Skirt recognition switch.\r\n* pants: Pants recognition switch.\r\n* dress: Dress recognition switch.\r\n* belt: Belt recognition switch.\r\n* shoe: Shoes recognition switch.\r\n* bag: Bag recognition switch.\r\n* scarf: Scarf recognition switch.\r\n* detail_method: Edge processing method. 
Provides VITMatte, VITMatte(local), PyMatting, and GuidedFilter. Once the model has been downloaded after the first use of VITMatte, you can use VITMatte(local) afterwards.\r\n* detail_erode: How far the mask is eroded inward from the edge. The larger the value, the larger the range of inward repair.\r\n* detail_dilate: How far the mask edge is dilated outward. The larger the value, the wider the range of outward repair.\r\n* black_point: Edge black sampling threshold.\r\n* white_point: Edge white sampling threshold.\r\n* process_detail: Setting this to false skips edge processing to save runtime.\r\n* device: Set whether VitMatte uses CUDA.\r\n* max_megapixels: Set the maximum size for VitMatte operations.\r\n\r\n### \u003Ca id="table1">SegformerUltraV2\u003C\u002Fa>\r\n\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_e4ecd26ae5fa.jpg)   \r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_21f604063f84.jpg)   \r\nUse the segformer model to segment clothing with ultra-high edge detail. Currently supports segformer b2 clothes, segformer b3 clothes and segformer b3 fashion.\r\n\r\n*Download model files from [huggingface](https:\u002F\u002Fhuggingface.co\u002Fmattmdjaga\u002Fsegformer_b2_clothes\u002Ftree\u002Fmain) or [BaiduNetdisk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1OK-HfCNyZWux5iQFANq9Rw?pwd=haxg) to the ```ComfyUI\u002Fmodels\u002Fsegformer_b2_clothes``` folder.         \r\n*Download model files from [huggingface](https:\u002F\u002Fhuggingface.co\u002Fsayeed99\u002Fsegformer_b3_clothes\u002Ftree\u002Fmain) or [BaiduNetdisk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F18KrCqNqUwmoJlqgAGDTw9g?pwd=ap4z) to the ```ComfyUI\u002Fmodels\u002Fsegformer_b3_clothes``` folder.    
\r\n*Download model files from [huggingface](https:\u002F\u002Fhuggingface.co\u002Fsayeed99\u002Fsegformer-b3-fashion\u002Ftree\u002Fmain) or [BaiduNetdisk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F10vd5PmJLFNWXaRVGW6tSvA?pwd=xzqi) to the ```ComfyUI\u002Fmodels\u002Fsegformer_b3_fashion``` folder. \r\n\r\nNode Options:  \r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_51780a46a2ac.jpg)    \r\n\r\n* image: The input image.\r\n* segformer_pipeline: Segformer pipeline input. The pipeline is output by the SegformerClothesPipiline and SegformerFashionPipiline nodes.\r\n* detail_method: Edge processing method. Provides VITMatte, VITMatte(local), PyMatting, and GuidedFilter. Once the model has been downloaded after the first use of VITMatte, you can use VITMatte(local) afterwards.\r\n* detail_erode: How far the mask is eroded inward from the edge. The larger the value, the larger the range of inward repair.\r\n* detail_dilate: How far the mask edge is dilated outward. The larger the value, the wider the range of outward repair.\r\n* black_point: Edge black sampling threshold.\r\n* white_point: Edge white sampling threshold.\r\n* process_detail: Setting this to false skips edge processing to save runtime.\r\n* device: Set whether VitMatte uses CUDA.\r\n* max_megapixels: Set the maximum size for VitMatte operations.\r\n\r\n### \u003Ca id="table1">SegformerClothesPipiline\u003C\u002Fa>\r\n\r\nSelect the segformer clothes model and choose the segmentation content.  \r\n\r\nNode Options:  \r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_de156fbe561c.jpg)    \r\n\r\n* model: Model selection. 
There are currently two models to choose from: segformer b2 clothes and segformer b3 clothes.\r\n* face: Facial recognition switch.\r\n* hair: Hair recognition switch.\r\n* hat: Hat recognition switch.\r\n* sunglass: Sunglasses recognition switch.\r\n* left_arm: Left arm recognition switch.\r\n* right_arm: Right arm recognition switch.\r\n* left_leg: Left leg recognition switch.\r\n* right_leg: Right leg recognition switch.\r\n* left_shoe: Left shoe recognition switch.\r\n* right_shoe: Right shoe recognition switch.\r\n* skirt: Skirt recognition switch.\r\n* pants: Pants recognition switch.\r\n* dress: Dress recognition switch.\r\n* belt: Belt recognition switch.\r\n* bag: Bag recognition switch.\r\n* scarf: Scarf recognition switch.\r\n\r\n### \u003Ca id="table1">SegformerFashionPipiline\u003C\u002Fa>\r\n\r\nSelect the segformer fashion model and choose the segmentation content.\r\n\r\nNode Options:  \r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_e3ce7728eb7e.jpg)    \r\n\r\n* model: Model selection. 
Currently, there is only one model available for selection: segformer b3 fashion.\r\n* shirt: shirt and blouse switch.\r\n* top: top, t-shirt, sweatshirt switch.\r\n* sweater: sweater switch.\r\n* cardigan: cardigan switch.\r\n* jacket: jacket switch.\r\n* vest: vest switch.\r\n* pants: pants switch.\r\n* shorts: shorts switch.\r\n* skirt: skirt switch.\r\n* coat: coat switch.\r\n* dress: dress switch.\r\n* jumpsuit: jumpsuit switch.\r\n* cape: cape switch.\r\n* glasses: glasses switch.\r\n* hat: hat switch.\r\n* hairaccessory: headband, head covering, hair accessory switch.\r\n* tie: tie switch.\r\n* glove: glove switch.\r\n* watch: watch switch.\r\n* belt: belt switch.\r\n* legwarmer: leg warmer switch.\r\n* tights: tights and stockings switch.\r\n* sock: sock switch.\r\n* shoe: shoes switch.\r\n* bagwallet: bag and wallet switch.\r\n* scarf: scarf switch.\r\n* umbrella: umbrella switch.\r\n* hood: hood switch.\r\n* collar: collar switch.\r\n* lapel: lapel switch.\r\n* epaulette: epaulette switch.\r\n* sleeve: sleeve switch.\r\n* pocket: pocket switch.\r\n* neckline: neckline switch.\r\n* buckle: buckle switch.\r\n* zipper: zipper switch.\r\n* applique: applique switch.\r\n* bead: bead switch.\r\n* bow: bow switch.\r\n* flower: flower switch.\r\n* fringe: fringe switch.\r\n* ribbon: ribbon switch.\r\n* rivet: rivet switch.\r\n* ruffle: ruffle switch.\r\n* sequin: sequin switch.\r\n* tassel: tassel switch.\r\n\r\n\r\n### \u003Ca id="table1">SegformerUltraV3\u003C\u002Fa>   \r\nOn the basis of ```SegformerUltraV2```, it has been modified to load the model and settings separately, saving resources when multiple nodes are used. 
Please note that the type set must match the model.\r\n\r\nModified node options:  \r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_1b0791c06165.jpg)\r\n* segformer_model: Segformer model input, the model is loaded by the ```LoadSegformerModel``` node.\r\n* segformer_setting: Segformer label setting input.\r\n\r\n### \u003Ca id=\"table1\">SegformerClothesSetting\u003C\u002Fa>\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_3831daec2658.jpg)\r\nSet up Segformer Clothes nodes in conjunction with ```Segformer Ultra V3```. The model option has been removed from the ```SegformerClothesPipiline``` node.  \r\n\r\n\r\n### \u003Ca id=\"table1\">SegformerFashionSetting\u003C\u002Fa>\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_e3ce7728eb7e.jpg)    \r\nSet up Segformer Fashion nodes in conjunction with ```Segformer Ultra V3```. The model option has been removed from the ```SegformerFashionPipiline``` node.\r\n\r\n### \u003Ca id=\"table1\">LoadSegformerModel\u003C\u002Fa>\r\nModel loading node compatible with ```Segformer Ultra V3```.   
\r\n\r\nNode Options:  \r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_60644d08e588.jpg)    \r\n* model_name: Model selection.\r\n* device: Load device selection.\r\n\r\n### \u003Ca id=\"table1\">MaskEdgeUltraDetail\u003C\u002Fa>\r\n\r\nProcess rough masks to ultra fine edges.\r\nThis node combines the functions of the Alpha Matte and Guided Filter Alpha nodes from Spacepxl's [ComfyUI-Image-Filters](https:\u002F\u002Fgithub.com\u002Fspacepxl\u002FComfyUI-Image-Filters), thanks to the original author.\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_305366bb5872.jpg)    \r\n\r\nNode options:  \r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_1fc85e1c3130.jpg)    \r\n\r\n* method: Provides two methods for edge processing: PyMatting and OpenCV-GuidedFilter. PyMatting has a slower processing speed, but for video it is the recommended method, as it produces smoother mask sequences.\r\n* mask_grow: Mask expansion amplitude. Positive values expand outward, while negative values contract inward. For rougher masks, negative values are usually used to shrink the edges for better results.\r\n* fix_gap: Repair the gaps in the mask. If there are obvious gaps in the mask, increase this value appropriately.\r\n* fix_threshold: The threshold of fix_gap.\r\n* detail_range: Edge detail range.\r\n* black_point: Edge black sampling threshold.\r\n* white_point: Edge white sampling threshold.\r\n\r\n### \u003Ca id=\"table1\">MaskEdgeUltraDetailV2\u003C\u002Fa>\r\n\r\nThe V2 upgraded version of MaskEdgeUltraDetail, which adds the VITMatte edge processing method. (Note: images larger than 2K processed with this method consume a huge amount of memory.)    \r\nThis method is suitable for handling semi-transparent areas. 
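The ```black_point``` and ```white_point``` options above act as a standard levels remap on the edge band of the mask. A minimal numpy sketch of the idea (illustrative only, not the node's actual code; the value swap when black_point exceeds white_point mirrors the footnote convention used elsewhere in this README):

```python
import numpy as np

def levels(mask: np.ndarray, black_point: float, white_point: float) -> np.ndarray:
    """Remap mask values so black_point maps to 0 and white_point maps to 1.

    `mask` is a float array in [0, 1]; results are clipped back into [0, 1].
    """
    if black_point > white_point:
        # Swap out-of-order thresholds, as the footnote below describes.
        black_point, white_point = white_point, black_point
    span = max(white_point - black_point, 1e-6)  # avoid division by zero
    return np.clip((mask - black_point) / span, 0.0, 1.0)

edge = np.array([0.1, 0.3, 0.5, 0.9])
print(levels(edge, 0.25, 0.75))  # values <= 0.25 become 0, >= 0.75 become 1
```

Raising ```black_point``` hardens the transparent side of the edge, while lowering ```white_point``` hardens the opaque side.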
\r\n\r\nOn the basis of MaskEdgeUltraDetail, the following changes have been made: \r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_d2b700477245.jpg)    \r\n\r\n* method: Edge processing method. Provides VITMatte, VITMatte(local), PyMatting, and GuidedFilter. Once the model has been downloaded on the first use of VITMatte, you can use VITMatte(local) afterwards.\r\n* edge_erode: The mask erodes inward from the edge. The larger the value, the larger the range of inward repair.\r\n* edge_dilate: The edge of the mask expands outward. The larger the value, the wider the range of outward repair.\r\n* device: Set whether VITMatte uses CUDA.\r\n* max_megapixels: Set the maximum size for VITMatte operations.\r\n\r\n\r\n### \u003Ca id=\"table1\">MaskEdgeUltraDetailV3\u003C\u002Fa>\r\nThe upgraded version of MaskEdgeUltraDetailV2, which processes different partitions through input trimap masks, generating an overall mask that includes more refined translucent parts.\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_7e6859ddd14d.jpg)  \r\n\r\nOn the basis of MaskEdgeUltraDetailV2, the following changes have been made: \r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_0a9e3f7bb12d.jpg)    \r\n* transparent_trimap: Using different VITMatte parameters within this area can generate a more refined matte mask. It is typically used to handle areas such as translucent objects or hair strands. \r\n* mask_edge_erode: The edge of the mask part erodes inwardly. The larger the value, the greater the range of inward repair.\r\n* mask_edge_dilate: The edge of the mask expands outward. The larger the value, the greater the outward repair range.\r\n* transparent_trimap_edge_erode: The edge of the transparent_trimap mask erodes inwardly. 
The larger the value, the greater the range of inward correction.\r\n* transparent_trimap_edge_dilate: The edge of the transparent_trimap mask expands outward. The larger the value, the greater the outward repair range.\r\n* trimap_blur: The degree of blur at the edge where the trimap mask and the mask are fused.\r\n\r\n### \u003Ca id=\"table1\">MaskByColor\u003C\u002Fa>\r\n\r\nGenerate a mask based on the selected color.\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_0d1a747b9261.jpg)    \r\n\r\nNode Options:  \r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_21f6328fd6d7.jpg)    \r\n\r\n* image: Input image.\r\n* mask: This input is optional; if a mask is provided, only the colors inside the mask are included in the range.\r\n* color: Color selector. Click on the color block to select a color, and you can use the eyedropper on the color picker panel to pick up a color from the screen. Note: When using the eyedropper, maximize the browser window.\r\n* color_in_HEX\u003Csup>4\u003C\u002Fsup>: Enter color values. If this item has input, it will be used first, ignoring the color selected by ```color```.\r\n* threshold: Mask range threshold; the larger the value, the larger the mask range.\r\n* fix_gap: Repair the gaps in the mask. If there are obvious gaps in the mask, increase this value appropriately.\r\n* fix_threshold: The threshold for repairing masks. \r\n* invert_mask: Whether to reverse the mask.\r\n\r\n### \u003Ca id=\"table1\">ImageToMask\u003C\u002Fa>\r\n\r\nConvert the image to a mask. Supports converting any channel in LAB, RGBA, YUV, and HSV modes into masks, while providing color scale adjustment. 
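Conceptually, a channel-to-mask conversion reads one component of a color space as a grayscale mask. A minimal numpy sketch for a few example channels (```channel_to_mask``` is a hypothetical helper for illustration; the node itself supports the full LAB, RGBA, YUV, and HSV channel sets):

```python
import numpy as np

def channel_to_mask(image: np.ndarray, channel: str) -> np.ndarray:
    """Derive a single-channel mask from an RGB float image in [0, 1]."""
    r, g, b = image[..., 0], image[..., 1], image[..., 2]
    if channel == "R":
        return r
    if channel == "V":  # HSV value channel = per-pixel maximum of R, G, B
        return image[..., :3].max(axis=-1)
    if channel == "Y":  # BT.601 luma approximation of the YUV Y channel
        return 0.299 * r + 0.587 * g + 0.114 * b
    raise ValueError(f"channel not sketched here: {channel}")

img = np.zeros((1, 2, 3))
img[0, 0] = [1.0, 0.0, 0.0]  # pure red pixel
img[0, 1] = [0.5, 0.5, 0.5]  # mid-gray pixel
print(channel_to_mask(img, "V"))  # each pixel maps to its brightest component
```

The resulting grayscale map is then passed through the black/white/gray point adjustment described below to obtain the final mask.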
Support mask optional input to obtain masks that only include valid parts.\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_2d47b90bc38d.jpg)    \r\n\r\nNode Options:  \r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_a845a1a0f392.jpg)    \r\n\r\n* image: Input image.\r\n* mask: This input is optional, if there is a mask, only the colors inside the mask are included in the range.\r\n* channel: Channel selection. You can choose any channel of LAB, RGBA, YUV, or HSV modes.\r\n* black_point\u003Csup>*\u003C\u002Fsup>: Black dot value for the mask. The value range is 0-255, with a default value of 0.\r\n* white_point\u003Csup>*\u003C\u002Fsup>: White dot value for the mask. The value range is 0-255, with a default value of 255.\r\n* gray_point: Gray dot values for the mask. The value range is 0.01-9.99, with a default of 1.\r\n* invert_output_mask: Whether to reverse the mask.\r\n\r\n\u003Csup>*\u003C\u002Fsup>\u003Cfont size=\"3\">If the black_point or output_black_point value is greater than white_point or output_white_point, the two values are swapped, with the larger value used as white_point and the smaller value used as black_point.\u003C\u002Ffont>      \r\n\r\n### \u003Ca id=\"table1\">Shadow\u003C\u002Fa> & Highlight Mask\r\n\r\nGenerate masks for the dark and bright parts of the image.\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_eaba0fe17395.jpg)    \r\n\r\nNode options:  \r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_c1c7d59bd312.jpg)    \r\n\r\n* image: The input image.\r\n* mask: Optional input. 
If there is input, only the colors within the mask range will be adjusted.\r\n* shadow_level_offset: The offset of values in the dark area, where larger values bring more areas closer to the bright into the dark area.\r\n* shadow_range: The transitional range of the dark area.\r\n* highlight_level_offset: The offset of values in the highlight area, where larger values bring more areas closer to the dark into the highlight area.\r\n* highlight_range: The transitional range of the highlight area. \r\n\r\n### \u003Ca id=\"table1\">Shadow\u003C\u002Fa> Highlight Mask V2\r\n\r\nA replica of the ```Shadow & Highlight Mask``` node, with the \"&\" character removed from the node name to avoid ComfyUI workflow parsing errors.\r\n\r\n### \u003Ca id=\"table1\">PixelSpread\u003C\u002Fa>\r\n\r\nPixel expansion preprocessing on the masked edge of an image can effectively improve the edges of image compositing.\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_45a78690c548.jpg)    \r\n\r\nNode options:  \r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_6aaed9052bac.jpg)    \r\n\r\n* invert_mask: Whether to reverse the mask.\r\n* mask_grow: Mask expansion amplitude.\r\n\r\n\r\n### \u003Ca id=\"table1\">MaskGrow\u003C\u002Fa>\r\n\r\nGrow and shrink edges and blur the mask.\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_a182afc09a2b.jpg)    \r\n\r\nNode options:\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_0dc634206bb9.jpg)    \r\n\r\n* invert_mask: Whether to reverse the mask.\r\n* grow: Positive values expand outward, while negative values contract inward.\r\n* blur: Blur the edge.\r\n\r\n### \u003Ca id=\"table1\">MaskEdgeShrink\u003C\u002Fa>\r\n\r\nSmooth transition and shrink the mask edges while preserving edge 
details.\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_4845b2d1dc53.jpg)    \r\n\r\nNode options:\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_75e5831f1232.jpg)    \r\n\r\n* invert_mask: Whether to reverse the mask.\r\n* shrink_level: Shrink the smoothness level.\r\n* soft: Smooth amplitude.\r\n* edge_shrink: Edge shrinkage amplitude.\r\n* edge_reserve: Preserve the amplitude of edge details, 100 represents complete preservation, and 0 represents no preservation at all.\r\n\r\nComparison of MaskGrow and MaskEdgeShrink\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_fc2d30ddc3fc.jpg)    \r\n\r\n### \u003Ca id=\"table1\">MaskMotionBlur\u003C\u002Fa>\r\n\r\nCreate motion blur on the mask.\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_ef2934251d2d.jpg)    \r\n\r\nNode options:  \r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_eb80caf22d52.jpg)    \r\n\r\n* invert_mask: Whether to reverse the mask.\r\n* blur: The size of blur.\r\n* angle: The angle of blur.\r\n\r\n### \u003Ca id=\"table1\">MaskGradient\u003C\u002Fa>\r\n\r\nCreate a gradient for the mask from one side. please note the difference between this node and the CreateGradientMask node.\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_2dc1807a2b81.jpg)    \r\n\r\nNode options:\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_8654ea424609.jpg)    \r\n\r\n* invert_mask: Whether to reverse the mask.\r\n* gradient_side: Generate gradient from which edge. There are four directions: top, bottom, left and right.\r\n* gradient_scale: Gradient distance. 
The default value of 100 indicates that one side of the gradient is completely transparent and the other side is completely opaque. The smaller the value, the shorter the distance from transparent to opaque.\r\n* gradient_offset: Gradient position offset.\r\n* opacity: The opacity of the gradient.\r\n\r\n### \u003Ca id=\"table1\">CreateGradientMask\u003C\u002Fa>\r\n\r\nCreate a gradient mask. please note the difference between this node and the MaskGradient node.\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_75ffb7e7a10b.jpg)    \r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_7114551ccdae.jpg)    \r\n\r\nNode options:  \r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_7e2990ff6d3b.jpg)    \r\n\r\n* size_as\u003Csup>*\u003C\u002Fsup>: The input image or mask here will generate the output image and mask according to their size. this input takes priority over the width and height below.\r\n* width: Width of the image. If there is a size_as input, this setting will be ignored.\r\n* height: Height of the image. If there is a size_as input, this setting will be ignored.\r\n* gradient_side: Generate gradient from which edge. There are five directions: top, bottom, left, right and center.\r\n* gradient_scale: Gradient distance. The default value of 100 indicates that one side of the gradient is completely transparent and the other side is completely opaque. The smaller the value, the shorter the distance from transparent to opaque.\r\n* gradient_offset: Gradient position offset. When ```gradient_side``` is center, the size of the gradient area is adjusted here, positive values are smaller, and negative values are enlarged.\r\n* opacity: The opacity of the gradient.\r\n\r\n\u003Csup>*\u003C\u002Fsup>Only limited to input image and mask. 
forcing the integration of other types of inputs will result in node errors.  \r\n\r\n### \u003Ca id=\"table1\">MaskStroke\u003C\u002Fa>\r\n\r\nGenerate mask contour strokes.\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_7e5601137179.jpg)    \r\n\r\nNode options:  \r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_e25337303680.jpg)    \r\n\r\n* invert_mask: Whether to reverse the mask.\r\n* stroke_grow: Stroke expansion\u002Fcontraction amplitude, positive values indicate expansion and negative values indicate contraction.\r\n* stroke_width: Stroke width.\r\n* blur: Blur of stroke.\r\n\r\n### \u003Ca id=\"table1\">MaskGrain\u003C\u002Fa>\r\n\r\nGenerates noise for the mask.\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_f0a339cbfaf0.jpg)    \r\n\r\nNode Options:  \r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_c14b61ec1c1d.jpg)    \r\n\r\n* grain: Noise intensity.\r\n* invert_mask: Whether to reverse the mask.\r\n\r\n### \u003Ca id=\"table1\">DrawRoundedRectangle\u003C\u002Fa>\r\nGenerate rounded rectangular borders for the mask.\r\n\r\nNode Options:  \r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_f8e0a79e26d1.jpg)     \r\n\r\n* size_as\u003Csup>*\u003C\u002Fsup>: Reference size. It can be an image or a mask.\r\n* rounded_rect_radius: Rounded rectangle radius.\r\n* anti_aliasing: Anti aliasing sampling multiple.\r\n* width: Mask width. If size_as is entered, this setting will be ignored.\r\n* height: Mask height. If size_as is entered, this setting will be ignored.\r\n\r\n\u003Csup>*\u003C\u002Fsup>Only limited to input images and masks. 
forcing the integration of other types of inputs will result in node errors.\r\n\r\n### \u003Ca id=\"table1\">MaskPreview\u003C\u002Fa>\r\n\r\nPreview the input mask\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_36b0cdd4871b.jpg)    \r\n\r\n### \u003Ca id=\"table1\">MaskInvert\u003C\u002Fa>\r\n\r\nInvert the mask\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_032dad481ef3.jpg)    \r\n\r\n# \u003Ca id=\"table1\">LayerFilter\u003C\u002Fa>\r\n\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_7948b920fa13.jpg)    \r\n\r\n### \u003Ca id=\"table1\">Sharp\u003C\u002Fa> & Soft\r\n\r\nEnhance or smooth out details for image.\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_01843b0a0286.jpg)    \r\n\r\nNode options:\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_f5c0afba5728.jpg)    \r\n\r\n* enhance: Provide 4 presets, which are very sharp, sharp, soft and very soft. If you choose None, you will not do any processing.\r\n\r\n### \u003Ca id=\"table1\">SkinBeauty\u003C\u002Fa>\r\n\r\nMake the skin look smoother.\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_b30e90bda15a.jpg)    \r\n\r\nNode options:\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_e9998ed795ad.jpg)    \r\n\r\n* smooth: Skin smoothness.\r\n* threshold: Smooth range. 
The smaller the value, the larger the range.\r\n* opacity: The opacity of the smoothness.\r\n\r\n### \u003Ca id=\"table1\">WaterColor\u003C\u002Fa>\r\n\r\nWatercolor painting effect.\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_274b3543c87f.jpg)    \r\n\r\nNode options:\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_116c533ae398.jpg)    \r\n\r\n* line_density: The black line density.\r\n* opacity: The opacity of watercolor effects.\r\n\r\n### \u003Ca id=\"table1\">HalfTone\u003C\u002Fa>\r\nConvert the image to a halftone.\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_7634167a40e4.jpg)    \r\n\r\nNode Options:\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_5a6588e94bff.jpg)    \r\n* image: The input image.\r\n* mask: Optional mask input.\r\n* dot_size: The size of the dots.\r\n* angle: The angle of arrangement of dots.\r\n* shape: The shape of the dots. There are three options: circle, diamond, square.\r\n* dot_color: The color of dots.\r\n* background_color: The color of the background.\r\n* anti_alias: Anti-aliasing intensity. A higher value makes the edges of the dots smoother but increases processing time.\r\n\r\n### \u003Ca id=\"table1\">SoftLight\u003C\u002Fa>\r\n\r\nSoft light effect; the bright highlights on the screen appear blurry.\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_b73126a1f78e.jpg)    \r\n\r\nNode options:\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_dd415b53b4a0.jpg)    \r\n\r\n* soft: Size of soft light.\r\n* threshold: Soft light range. The light appears from the brightest part of the picture. 
The lower the value, the larger the range; the higher the value, the smaller the range.\r\n* opacity: Opacity of the soft light.\r\n\r\n### \u003Ca id=\"table1\">ChannelShake\u003C\u002Fa>\r\n\r\nChannel misalignment, similar to the TikTok logo effect.\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_d88047b534f2.jpg)    \r\n\r\nNode options:\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_069609cdefaf.jpg)    \r\n\r\n* distance: Distance of channel separation.\r\n* angle: Angle of channel separation.\r\n* mode: Channel shift arrangement order.\r\n\r\n### \u003Ca id=\"table1\">HDR\u003C\u002Fa> Effects\r\n\r\nEnhances the dynamic range and visual appeal of input images.\r\nThis node is a reorganization and encapsulation of [HDR Effects (SuperBeasts.AI)](https:\u002F\u002Fgithub.com\u002FSuperBeastsAI\u002FComfyUI-SuperBeasts), thanks to the original author.\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_74280730bf6e.jpg)    \r\n\r\nNode options:\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_ff91eaaf9553.jpg)    \r\n\r\n* hdr_intensity: Range: 0.0 to 5.0. Controls the overall intensity of the HDR effect; higher values result in a more pronounced HDR effect.\r\n* shadow_intensity: Range: 0.0 to 1.0. Adjusts the intensity of shadows in the image; higher values darken the shadows and increase contrast.\r\n* highlight_intensity: Range: 0.0 to 1.0. Adjusts the intensity of highlights in the image; higher values brighten the highlights and increase contrast.\r\n* gamma_intensity: Range: 0.0 to 1.0. Controls the gamma correction applied to the image; higher values increase the overall brightness and contrast.\r\n* contrast: Range: 0.0 to 1.0. Enhances the contrast of the image; higher values result in more pronounced contrast.\r\n* enhance_color: Range: 0.0 to 
1.0. Enhances the color saturation of the image; higher values result in more vibrant colors.\r\n\r\n### \u003Ca id=\"table1\">Film\u003C\u002Fa>\r\n\r\nSimulates the grain, dark edges, and blurred edges of film; supports an input depth map to simulate defocus.    \r\nThis node is a reorganization and encapsulation of [digitaljohn\u002Fcomfyui-propost](https:\u002F\u002Fgithub.com\u002Fdigitaljohn\u002Fcomfyui-propost), thanks to the original author.\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_19334f52865a.jpg)    \r\n\r\nNode options:\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_4af8cc81c8fd.jpg)    \r\n\r\n* image: The input image.\r\n* depth_map: Input depth map to simulate the defocus effect. This input is optional; if absent, radial blur is simulated at the edges of the image.\r\n* center_x: The horizontal axis of the center point position of the dark edge and radial blur, where 0 represents the leftmost side, 1 represents the rightmost side, and 0.5 represents the center.\r\n* center_y: The vertical axis of the center point position of the dark edge and radial blur, where 0 represents the top, 1 represents the bottom, and 0.5 represents the center.\r\n* saturation: Color saturation, 1 is the original value.\r\n* grain_power: Grain intensity. A larger value means more pronounced noise.\r\n* grain_scale: Grain size.\r\n* grain_sat: The color saturation of grain. 0 represents mono noise; the larger the value, the more prominent the color.\r\n* grain_shadows: Grain intensity of the dark part.\r\n* grain_highs: Grain intensity of the light part.\r\n* blur_strength: The strength of blur. A larger value means a blurrier result.\r\n* blur_focus_spread: Focus diffusion range. A larger value means a larger in-focus range.\r\n* focal_depth: Simulate the focal distance of defocus. 
0 indicates the focus is farthest, and 1 indicates it is closest. This setting is only valid when depth_map has input.\r\n\r\n### \u003Ca id=\"table1\">FilmV2\u003C\u002Fa>\r\n\r\nThe upgraded version of the Film node adds the fastgrain method on the basis of the previous one, generating noise 10 times faster. The code for fastgrain is from the [github.com\u002Fspacepxl\u002FComfyUI-Image-Filters](https:\u002F\u002Fgithub.com\u002Fspacepxl\u002FComfyUI-Image-Filters) BetterFilmGrain node, thanks to the original authors.\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_599f8d9cfce7.jpg)    \r\n\r\n### \u003Ca id=\"table1\">LightLeak\u003C\u002Fa>\r\n\r\nSimulates the light leakage effect of film. Please download the model file from [Baidu Netdisk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F18Z0lhsDAejbwlOrCZFMuNg?pwd=o8sz) or [light_leak.pkl (Google Drive)](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1DcH2Zkyj7W3OiAeeGpJk1eaZpdJwdCL-\u002Fview?usp=sharing) and copy it to the ```ComfyUI\u002Fmodels\u002Flayerstyle``` folder.\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_95c7e1dfdbbe.jpg)    \r\n\r\nNode options:\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_1d5da8bcd24a.jpg)    \r\n\r\n* light: 32 types of light spots are provided. 
random is a random selection.\r\n* corner: There are four options for the corner where the light appears: top left, top right, bottom left, and bottom right.\r\n* hue: The hue of the light.\r\n* saturation: The color saturation of the light.\r\n* opacity: The opacity of the light.\r\n\r\n### \u003Ca id=\"table1\">ColorMap\u003C\u002Fa>\r\n\r\nPseudo color heat map effect.\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_d3a5bd690624.jpg)    \r\n\r\nNode options:\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_2f576ae9edbd.jpg)    \r\n\r\n* color_map: Effect type. there are a total of 22 types of effects, as shown in the above figure.\r\n* opacity: The opacity of the color map effect.\r\n\r\n### \u003Ca id=\"table1\">MotionBlur\u003C\u002Fa>\r\n\r\nMake the image motion blur\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_b498766b336a.jpg)    \r\n\r\nNode options:\r\n\r\n* angle: The angle of blur.\r\n* blur: The size of blur.\r\n\r\n### \u003Ca id=\"table1\">GaussianBlur\u003C\u002Fa>\r\n\r\nMake the image gaussian blur\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_f68d8fa25336.jpg)    \r\n\r\nNode options:\r\n\r\n* blur: The size of blur, integer, range 1-999.\r\n\r\n### \u003Ca id=\"table1\">GaussianBlurV2\u003C\u002Fa>\r\n\r\nGaussian blur. 
Change the parameter precision to floating-point number, with a precision of 0.01\r\n\r\nNode options:\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_42719d6dc4ed.jpg)    \r\n\r\n* blur: The size of blur, float, range 0 - 1000.\r\n\r\n### \u003Ca id=\"table1\">AddGrain\u003C\u002Fa>\r\n\r\nAdd noise to the picture.\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_af9f41000a1f.jpg)    \r\n\r\nNode Options:\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_5f6d557d55db.jpg)    \r\n\r\n* grain_power: Noise intensity.\r\n* grain_scale: Noise size.\r\n* grain_sat: Color saturation of noise.\r\n\r\n### \u003Ca id=\"table1\">DistortDisplace\u003C\u002Fa>\r\nGenerate displacement deformation effects for material images.\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_f19746cbde9a.jpg)    \r\n\r\nNode Options:\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_52f7f1002a13.jpg)    \r\n* image: The original image, with material distortion based on the grayscale information of this image.\r\n* material_image: Material image. The size of this image should be consistent with that of the image, otherwise it will be resized forcibly.\r\n* mask: Optional mask input. The output will only include the deformed result of the map in the masked part.\r\n* distort_strength: The strength of distortion.\r\n* smoothness: The smoothness of distortion.\r\n* anit_aliasing: The value of anti-aliasing. 
Higher values will result in a significant decrease in generation speed.\r\n* shadow_blend_mode: The shadow part blending mode.\r\n* shadow_strength: The shadow part blending opacity.\r\n* highlight_blend_mode: The highlight part blending mode.\r\n* highlight_strength: The highlight part blending opacity.\r\n\r\nOutputs:\r\n* image: The output image.\r\n* displaced_material: The deformation result of the material image.\r\n\r\n## Annotation for \u003Ca id=\"table1\">notes\u003C\u002Fa>\r\n\r\n\u003Csup>1\u003C\u002Fsup>  The layer_image, layer_mask and the background_image (if input) must all be of the same size.    \r\n\r\n\u003Csup>2\u003C\u002Fsup>  The mask is not a mandatory input; the alpha channel of the image is used by default. If the input image does not include an alpha channel, an alpha channel covering the entire image is created automatically. If a mask is also input, it overwrites the alpha channel.    \r\n\r\n\u003Csup>3\u003C\u002Fsup>  The \u003Ca id=\"table1\">Blend\u003C\u002Fa> Mode includes **normal, multiply, screen, add, subtract, difference, darker, color_burn, color_dodge, linear_burn, linear_dodge, overlay, soft_light, hard_light, vivid_light, pin_light, linear_light, and hard_mix**, 19 blend modes in total.    \r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_cbbb48dd6093.jpg)    \r\n\u003Cfont size=\"1\">*Preview of the blend mode  \u003C\u002Ffont>\u003Cbr \u002F>     \r\n\r\n\u003Csup>3\u003C\u002Fsup>   The \u003Ca id=\"table1\">BlendModeV2\u003C\u002Fa> includes **normal, dissolve, darken, multiply, color burn, linear burn, darker color, lighten, screen, color dodge, linear dodge(add), lighter color, dodge, overlay, soft light, hard light, vivid light, linear light, pin light, hard mix, difference, exclusion, subtract, divide, hue, saturation, color, luminosity, grain extract, grain merge**, 30 blend modes in total.      
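Most of the separable modes listed above follow standard compositing formulas. A short numpy sketch of a few of them, operating on float images in [0, 1] (textbook formulas, not necessarily this plugin's exact implementation):

```python
import numpy as np

def blend(base: np.ndarray, layer: np.ndarray, mode: str) -> np.ndarray:
    """Blend two float images/masks in [0, 1] using a few standard modes."""
    if mode == "multiply":
        out = base * layer
    elif mode == "screen":
        out = 1.0 - (1.0 - base) * (1.0 - layer)
    elif mode == "linear_dodge":  # also known as "add"
        out = base + layer
    elif mode == "difference":
        out = np.abs(base - layer)
    else:
        raise ValueError(f"mode not sketched here: {mode}")
    return np.clip(out, 0.0, 1.0)

a, b = np.array([0.25, 0.5]), np.array([0.5, 0.5])
print(blend(a, b, "multiply"))  # darkens: 0.125 and 0.25
print(blend(a, b, "screen"))    # lightens: 0.625 and 0.75
```

Multiply can only darken and screen can only lighten, which is why they are commonly paired for shadow and highlight compositing respectively.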
\r\nPart of the code for BlendMode V2 is from [Virtuoso Nodes for ComfyUI](https:\u002F\u002Fgithub.com\u002Fchrisfreilich\u002Fvirtuoso-nodes). Thanks to the original authors.\r\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_644ed5b5d98c.jpg)    \r\n\u003Cfont size=\"1\">*Preview of the Blend Mode V2\u003C\u002Ffont>\u003Cbr \u002F>     \r\n\r\n\u003Csup>4\u003C\u002Fsup>  The RGB color is described in hexadecimal format, like '#FA3D86'.    \r\n\r\n\u003Csup>5\u003C\u002Fsup>  The layer_image and layer_mask must be of the same size.    \r\n\r\n## Stars\r\n\r\n[![Star History Chart](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_6d8a59860009.png)](https:\u002F\u002Fstar-history.com\u002F#chflame163\u002FComfyUI_LayerStyle&Date)\r\n\r\n# Statement\r\n\r\nLayerStyle nodes follow the MIT license. Some of the functional code comes from other open-source projects; thanks to the original authors. 
If used for commercial purposes, please refer to the original project's license for the authorization agreement.\r\n","# ComfyUI LayerStyle\n\n## Important Notice\nSome dependency-prone nodes have been split off into the [ComfyUI_LayerStyle_Advance](https:\u002F\u002Fgithub.com\u002Fchflame163\u002FComfyUI_LayerStyle_Advance) repository, including:\nLayerMask: BiRefNetUltra, LayerMask: BiRefNetUltraV2, LayerMask: LoadBiRefNetModel, LayerMask: LoadBiRefNetModelV2,\nLayerMask: EVFSAMUltra, LayerMask: Florence2Ultra, LayerMask: LoadFlorence2Model, LayerUtility: Florence2Image2Prompt,\nLayerUtility: GetColorTone, LayerUtility: GetColorToneV2, LayerMask: HumanPartsUltra, LayerMask: BenUltra, LayerMask: LoadBenModel,\nLayerUtility: ImageAutoCrop, LayerUtility: ImageAutoCropV2, LayerUtility: ImageAutoCropV3,\nLayerUtility: ImageRewardFilter, LayerUtility: LoadJoyCaption2Model, LayerUtility: JoyCaption2Split,\nLayerUtility: JoyCaption2, LayerUtility: JoyCaption2ExtraOptions, LayerUtility: LaMa,\nLayerUtility: LlamaVision, LayerUtility: LoadPSD, LayerMask: MaskByDifferent, LayerMask: MediapipeFacialSegment,\nLayerMask: BBoxJoin, LayerMask: DrawBBoxMask, LayerMask: ObjectDetectorFL2, LayerMask: ObjectDetectorMask,\nLayerMask: ObjectDetectorYOLO8, LayerMask: ObjectDetectorYOLOWorld, LayerMask: PersonMaskUltra, LayerMask: PersonMaskUltra V2,\nLayerUtility: PhiPrompt, LayerUtility: PromptEmbellish, LayerUtility: PromptTagger, LayerUtility: CreateQRCode, LayerUtility: DecodeQRCode,\nLayerUtility: QWenImage2Prompt, LayerMask: SAM2Ultra, LayerMask: SAM2VideoUltra, LayerUtility: SaveImagePlus, LayerUtility: SD3NegativeConditioning,\nLayerMask: SegmentAnythingUltra, LayerMask: SegmentAnythingUltra V2, LayerMask: TransparentBackgroundUltra,\nLayerUtility: UserPromptGeneratorTxt2ImgPrompt, LayerUtility: UserPromptGeneratorTxt2ImgPromptWithReference, LayerUtility: UserPromptGeneratorReplaceWord,\nLayerUtility: AddBlindWaterMark, LayerUtility: ShowBlindWaterMark, LayerMask: YoloV8Detect\n\nIf there has been a recent update, you need to install 
[ComfyUI_LayerStyle_Advance](https:\u002F\u002Fgithub.com\u002Fchflame163\u002FComfyUI_LayerStyle_Advance)，以确保之前的流程不会丢失节点。\n如果问题是由仓库拆分引起的，请将插件版本回滚到```3d4a3526a9d1a19671a133e9215077bda520ee5d```\n在插件目录下打开终端，并使用以下命令回滚版本：\n```\ngit reset --hard 3d4a3526a9d1a19671a133e9215077bda520ee5d\n```\n\n[中文说明点这里](.\u002FREADME_CN.MD)\n\n商务合作请联系email [chflame@163.com](mailto:chflame@163.com)。\n\nFor business cooperation, please contact email [chflame@163.com](mailto:chflame@163.com)。\n\n一套用于 ComfyUI 的节点，可以合成图层和蒙版，实现类似 Photoshop 的功能。\n\n它将 Photoshop 的一些基本功能迁移到 ComfyUI 中，旨在集中工作流并减少软件切换的频率。\n\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_b763c32151b8.jpg)\n\u003Cfont size=\"1\">*此工作流（title_example_workflow.json）位于工作流目录中。\u003C\u002Ffont>\u003Cbr \u002F>\n\n## 示例工作流\n\n在 ```workflow``` 目录中有一些 JSON 工作流文件，展示了这些节点如何在 ComfyUI 中使用。\n\n## 如何安装\n\n（以 ComfyUI 官方便携包和 Aki ComfyUI 包为例，其他 ComfyUI 环境请相应修改依赖环境目录）\n\n### 安装插件\n\n* 建议使用 ComfyUI Manager 进行安装。\n\n* 或者在 ComfyUI 的插件目录中打开命令提示符窗口，例如 ```ComfyUI\\custom_nodes```，输入：\n\n```\ngit clone https:\u002F\u002Fgithub.com\u002Fchflame163\u002FComfyUI_LayerStyle.git\n```\n\n* 或者下载压缩包并解压，将解压后的文件夹复制到 ```ComfyUI\\custom_nodes``` 目录中。\n\n### 安装依赖包\n\n* 对于 ComfyUI 官方便携包，在插件目录中双击 ```install_requirements.bat```；对于 Aki ComfyUI 包，则双击插件目录中的 ```install_requirements_aki.bat```，等待安装完成。\n\n* 或者手动安装依赖包：在 ComfyUI_LayerStyle 插件目录中打开命令提示符窗口，例如 ```ComfyUI\\custom_nodes\\ComfyUI_LayerStyle```，输入以下命令：\n\n&emsp;&emsp;对于 ComfyUI 官方便携包，输入：\n\n```\n..\\..\\..\\python_embeded\\python.exe -s -m pip install -r requirements.txt\n.\\repair_dependency.bat\n```\n\n&emsp;&emsp;对于 Aki ComfyUI 包，输入：\n\n```\n..\\..\\python\\python.exe -s -m pip install -r requirements.txt\n.\\repair_dependency_aki.bat\n```\n\n* 重启 ComfyUI。\n\n### 下载模型文件\n\n中国国内用户可从 [百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1T_uXMX3OKIWOJLPuLijrgA?pwd=1yye) 或 [夸克网盘](https:\u002F\u002Fpan.quark.cn\u002Fs\u002F4802d6bca7cb) 下载；其他用户则可从 
[huggingface.co\u002Fchflame163\u002FComfyUI_LayerStyle](https:\u002F\u002Fhuggingface.co\u002Fchflame163\u002FComfyUI_LayerStyle\u002Ftree\u002Fmain) 下载所有文件，并将其复制到 ```ComfyUI\\models``` 文件夹中。该链接提供了本插件所需的所有模型文件。\n或者根据各个节点的说明单独下载模型文件。\n部分名为“Ultra”的节点会使用 vitmatte 模型，请下载 [vitmatte 模型](https:\u002F\u002Fhuggingface.co\u002Fhustvl\u002Fvitmatte-small-composition-1k\u002Ftree\u002Fmain) 并复制到 ```ComfyUI\u002Fmodels\u002Fvitmatte``` 文件夹中，上述下载链接中也包含了该模型。\n\n## 常见问题\n\n如果节点无法正常加载或使用过程中出现错误，请查看 ComfyUI 终端窗口中的错误信息。以下是常见错误及其解决方法。\n\n### 警告：xxxx.ini 未找到，使用默认 xxxx..\n\n此警告表示无法找到 ini 文件，但不影响正常使用。如果您不想看到这些警告，请将插件目录中的所有 ```*.ini.example``` 文件重命名为 ```*.ini```。\n\n### 无法从 'cv2.ximgproc' 导入名称 'guidedFilter'\n\n此错误是由于 ```opencv-contrib-python``` 包的版本不正确，或者该包被其他 OpenCV 包覆盖所致。\n\n### NameError: 名称 'guidedFilter' 未定义\n\n问题原因同上。\n#### 针对上述问题，请在插件文件夹中双击 ```repair_dependency.bat```（适用于官方 ComfyUI 便携版）或 ```repair_dependency_aki.bat```（适用于 ComfyUI-aki-v1.x），即可自动修复。\n\n### 无法从 'transformers' 导入名称 'VitMatteImageProcessor'\n\n此错误是由于 ```transformers``` 包版本过低所致。\n\n### insightface 加载非常慢\n\n此错误是由于 ```protobuf``` 包版本过低所致。\n\n\n### onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH 已设置，但无法加载 CUDA。请按照 GPU 要求页面上的说明安装正确版本的 CUDA 和 cuDNN。\n\n解决方案：\n重新安装 ```onnxruntime``` 依赖包。\n\n### 加载模型 xxx 时出错：无法连接到 huggingface.co ...\r\n\r\n请检查网络环境。如果您在中国无法正常访问 huggingface.co，请尝试修改 huggingface_hub 包，强制使用镜像源 hf_mirror。\n\n* 在 huggingface_hub 包的目录中找到 ```constants.py``` 文件（通常位于虚拟环境路径下的 ```Lib\u002Fsite-packages\u002Fhuggingface_hub```），\n  在 ```import os``` 之后添加一行：\n\n  ```python\n  os.environ['HF_ENDPOINT'] = 'https:\u002F\u002Fhf-mirror.com'\n  ```\n\n### ValueError: 三通道遮罩未包含前景像素 (xxxx...)\n\n此错误是由于在使用 ```PyMatting``` 方法处理遮罩边缘时，遮罩区域过大或过小所致。\n\n解决方案：\n\n* 请调整参数以改变遮罩的有效区域。或者使用其他方法来处理边缘。\n\n### Requests.exceptions.ProxyError: HTTPSConnectionPool(xxxx...)\n\n出现此错误时，请检查您的网络环境。\n\n## Update\r\n\r\n\u003Cfont size=\"4\">**If the dependency package error after updating,  please double clicking 
```repair_dependency.bat``` (for the official ComfyUI portable package) or ```repair_dependency_aki.bat``` (for ComfyUI-aki-v1.x) in the plugin folder to reinstall the dependency packages.**</font><br />

* Commit [ImageBatchToList](#ImageBatchToList) and [ImageListToBatch](#ImageListToBatch) nodes, used for converting a single batch of images into multiple small batches and vice versa, with an option to define the maximum number of images in each small batch.
* Commit [DistortDisplace](#DistortDisplace) node, used to generate displacement deformation effects for material images.
* Commit [MaskEdgeUltraDetailV3](#MaskEdgeUltraDetailV3) node. By processing different partitions from an input trimap mask, it generates a more refined overall mask that includes translucent parts.
* Commit [ImageCompositeHandleMask](#ImageCompositeHandleMask) node, used to generate local feathering masks and cropping data.
* Commit [DrawRoundedRectangle](#DrawRoundedRectangle) node, used to generate rounded rectangle masks.
* Commit [FluxKontextImageScale](#FluxKontextImageScale) node, based on modifications of the official node, used to resize the image to one that is more optimal for Flux Kontext. 
For images with a different aspect ratio, the scale is adjusted appropriately to preserve all information.
* Commit [MaskBoxExtend](#MaskBoxExtend) node, used to extend the BBOX range of a mask and output the result as a mask.
* Commit [ColorNegative](#ColorNegative) node, used to invert the colors of an image.
* Commit [LoadImagesFromPath](#LoadImagesFromPath) and [ImageTaggerSaveV2](#ImageTaggerSaveV2) nodes, used to load a list of images from a folder and save images and tagger text files with corresponding file names.
* Commit [LoadImageFromPath](#LoadImageFromPath) node. The images in a folder can be loaded and output as an image list, also supporting output of a list of the corresponding file names.
* Commit [SegformerUltraV3](#SegformerUltraV3), [LoadSegformerModel](#LoadSegformerModel), [SegformerClothesSetting](#SegformerClothesSetting) and [SegformerFashionSetting](#SegformerFashionSetting) nodes, which separate model loading from settings to save resources when using multiple nodes.
* Add multiple languages, increasing support to 5 languages: Chinese, French, Japanese, Korean and Russian. 
This feature was produced with [ComfyUI-Globalization-Node-Translation](https://github.com/yamanacn/ComfyUI-Globalization-Node-Translation); thank you to the original author.
* Commit [HalfTone](#HalfTone) node, used for halftone processing of images.
* Add QuarkNetdisk model download link.
* Support the numpy 2.x dependency package.
* Commit [PurgeVRAM V2](#PurgeVRAMV2) node.
* Commit [ChoiceTextPreset](#ChoiceTextPreset) and [TextPreseter](#TextPreseter) nodes, used for presetting text and selecting preset text output.
* [StringCondition](#StringCondition) adds the option of comparing strings to determine whether they are the same.
* Commit [NameToColor](#NameToColor) node, which outputs colors based on their names.
* Commit [ImageMaskScaleAsV2](#ImageMaskScaleAsV2) node, which adds background color settings on top of the original node.
* Commit [RoundedRectangle](#RoundedRectangle) node, used to create a rounded rectangle and mask.
* Commit [AnyRerouter](#AnyRerouter) node, used to reroute any type of data.
* Commit [ICMask](#ICMask) and [ICMaskCropBack](#ICMaskCropBack) nodes, used for generating In-Context images and masks and automatically cropping back. The code is from [lrzjason/Comfyui-In-Context-Lora-Utils](https://github.com/lrzjason/Comfyui-In-Context-Lora-Utils); thanks to the original author @小志Jason.
* Commit [GetMainColorsV2](#GetMainColorsV2) node, which adds sorting by color area and outputs color values and proportions in the preview image. This part of the code was improved by @HL, thanks.
* Optimize dependency packages. Optimize some algorithms.
* Split some nodes with problem-prone dependencies into the [ComfyUI_LayerStyle_Advance](https://github.com/chflame163/ComfyUI_LayerStyle_Advance) repository. 
Including:
LayerMask: BiRefNetUltra, LayerMask: BiRefNetUltraV2, LayerMask: LoadBiRefNetModel, LayerMask: LoadBiRefNetModelV2,
LayerMask: EVFSAMUltra, LayerMask: Florence2Ultra, LayerMask: LoadFlorence2Model, LayerUtility: Florence2Image2Prompt,
LayerUtility: GetColorTone, LayerUtility: GetColorToneV2, LayerMask: HumanPartsUltra,
LayerUtility: ImageAutoCrop, LayerUtility: ImageAutoCropV2, LayerUtility: ImageAutoCropV3,
LayerUtility: ImageRewardFilter, LayerUtility: LoadJoyCaption2Model, LayerUtility: JoyCaption2Split,
LayerUtility: JoyCaption2, LayerUtility: JoyCaption2ExtraOptions, LayerUtility: LaMa,
LayerUtility: LlamaVision, LayerUtility: LoadPSD, LayerMask: MaskByDifferent, LayerMask: MediapipeFacialSegment,
LayerMask: BBoxJoin, LayerMask: DrawBBoxMask, LayerMask: ObjectDetectorFL2, LayerMask: ObjectDetectorMask,
LayerMask: ObjectDetectorYOLO8, LayerMask: ObjectDetectorYOLOWorld, LayerMask: PersonMaskUltra, LayerMask: PersonMaskUltra V2,
LayerUtility: PhiPrompt, LayerUtility: PromptEmbellish, LayerUtility: PromptTagger, LayerUtility: CreateQRCode, LayerUtility: DecodeQRCode,
LayerUtility: QWenImage2Prompt, LayerMask: SAM2Ultra, LayerMask: SAM2VideoUltra, LayerUtility: SaveImagePlus, LayerUtility: SD3NegativeConditioning,
LayerMask: SegmentAnythingUltra, LayerMask: SegmentAnythingUltra V2, LayerMask: TransparentBackgroundUltra,
LayerUtility: UserPromptGeneratorTxt2ImgPrompt, LayerUtility: UserPromptGeneratorTxt2ImgPromptWithReference, LayerUtility: UserPromptGeneratorReplaceWord,
LayerUtility: AddBlindWaterMark, LayerUtility: ShowBlindWaterMark, LayerMask: YoloV8Detect

* Merge the PR submitted by [alexisrolland](https://github.com/alexisrolland), committing the ```Image Blend Advanced v3``` and ```Drop Shadow v3``` nodes with transparent background support.
* Commit [BenUltra](#BenUltra) and [LoadBenModel](#LoadBenModel) nodes. 
These two nodes are the implementation of the [PramaLLC/BEN](https://huggingface.co/PramaLLC/BEN) project in ComfyUI.
Download ```BEN_Base.pth``` and ```config.json``` from [huggingface](https://huggingface.co/PramaLLC/BEN/tree/main) or [BaiduNetdisk](https://pan.baidu.com/s/17mdBxfBl_R97mtNHuiHsxQ?pwd=2jn3) and copy them to the ```ComfyUI/models/BEN``` folder.
* Merge the PR submitted by [jimlee2048](https://github.com/jimlee2048), adding the LoadBiRefNetModelV2 node and support for loading RMBG 2.0 models.
Download the model files from [huggingface](https://huggingface.co/briaai/RMBG-2.0/tree/main) or [BaiduNetdisk](https://pan.baidu.com/s/1viIXlZnpTYTKkm2F-QMj_w?pwd=axr9) and copy them to the ```ComfyUI/models/BiRefNet/RMBG-2.0``` folder.

* Florence2 nodes support base-PromptGen-v2.0 and large-PromptGen-v2.0. Download the ```base-PromptGen-v2.0``` and ```large-PromptGen-v2.0``` folders from [huggingface](https://huggingface.co/chflame163/ComfyUI_LayerStyle/tree/main/ComfyUI/models/florence2) or [BaiduNetdisk](https://pan.baidu.com/s/1BVvXt3N7zrBnToyF-GrC_A?pwd=xm0x) and copy them to the ```ComfyUI/models/florence2``` folder.
* [SAM2Ultra](#SAM2Ultra) and ObjectDetector nodes support image batches.
* [SAM2Ultra](#SAM2Ultra) and [SAM2VideoUltra](#SAM2VideoUltra) nodes add support for the SAM2.1 model, including [kijai](https://github.com/kijai)'s FP16 model. 
Download the model files from [BaiduNetdisk](https://pan.baidu.com/s/1xaQYBA6ktxvAxm310HXweQ?pwd=auki) or [huggingface.co/Kijai/sam2-safetensors](https://huggingface.co/Kijai/sam2-safetensors/tree/main) and copy them to the ```ComfyUI/models/sam2``` folder.
* Commit [JoyCaption2Split](#JoyCaption2Split) and [LoadJoyCaption2Model](#LoadJoyCaption2Model) nodes. Sharing the model across multiple JoyCaption2 nodes improves efficiency.
* [SegmentAnythingUltra](#SegmentAnythingUltra) and [SegmentAnythingUltraV2](#SegmentAnythingUltraV2) add the ```cache_model``` option, making it easy to flexibly manage VRAM usage.

* Due to the high ```transformers``` version requirement of the [LlamaVision](#LlamaVision) node, which affects the loading of some older third-party plugins, the LayerStyle plugin has lowered its default requirement to 4.43.2. If you need to run LlamaVision, please upgrade to 4.45.0 or above on your own.

* Commit [JoyCaption2](#JoyCaption2) and [JoyCaption2ExtraOptions](#JoyCaption2ExtraOptions) nodes. New dependency packages need to be installed.
Use the JoyCaption-alpha-two model for local inference. It can be used to generate prompt words. 
This node is an implementation of [huggingface.co/John6666/joy-caption-alpha-two-cli-mod](https://huggingface.co/John6666/joy-caption-alpha-two-cli-mod) in ComfyUI; thank you to the original author.
Download the models from [BaiduNetdisk](https://pan.baidu.com/s/1dOjbUEacUOhzFitAQ3uIeQ?pwd=4ypv) and [BaiduNetdisk](https://pan.baidu.com/s/1mH1SuW45Dy6Wga7aws5siQ?pwd=w6h5),
or [huggingface/Orenguteng](https://huggingface.co/Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2/tree/main) and [huggingface/unsloth](https://huggingface.co/unsloth/Meta-Llama-3.1-8B-Instruct/tree/main), then copy them to ```ComfyUI/models/LLM```;
download the models from [BaiduNetdisk](https://pan.baidu.com/s/1pkVymOsDcXqL7IdQJ6lMVw?pwd=v8wp) or [huggingface/google](https://huggingface.co/google/siglip-so400m-patch14-384/tree/main) and copy them to ```ComfyUI/models/clip```;
download the ```cgrkzexw-599808``` folder from [BaiduNetdisk](https://pan.baidu.com/s/12TDwZAeI68hWT6MgRrrK7Q?pwd=d7dh) or [huggingface/John6666](https://huggingface.co/John6666/joy-caption-alpha-two-cli-mod/tree/main) and copy it to ```ComfyUI/models/Joy_caption```.

* Commit [LlamaVision](#LlamaVision) node, using the Llama 3.2 vision model for local inference. It can be used to generate prompt words. 
Part of the code for this node comes from [ComfyUI-PixtralLlamaMolmoVision](https://github.com/SeanScripts/ComfyUI-PixtralLlamaMolmoVision); thank you to the original author.
To use this node, ```transformers``` needs to be upgraded to 4.45.0 or higher.
Download the models from [BaiduNetdisk](https://pan.baidu.com/s/18oHnTrkNMiwKLMcUVrfFjA?pwd=4g81) or [huggingface/SeanScripts](https://huggingface.co/SeanScripts/Llama-3.2-11B-Vision-Instruct-nf4/tree/main) and copy them to ```ComfyUI/models/LLM```.

* Commit [RandomGeneratorV2](#RandomGeneratorV2) node, adding least-random-range and seed options.
* Commit [TextJoinV2](#TextJoinV2) node, adding delimiter options on top of TextJoin.
* Commit [GaussianBlurV2](#GaussianBlurV2) node; the parameter precision has been improved to 0.01.
* Commit [UserPromptGeneratorTxtImgWithReference](#UserPromptGeneratorTxtImgWithReference) node.
* Commit [GrayValue](#GrayValue) node, which outputs the grayscale values corresponding to the RGB color values.
* [LUT Apply](#LUT), [TextImageV2](#TextImageV2), [TextImage](#TextImage) and [SimpleTextImage](#SimpleTextImage) nodes support defining multiple folders in ```resource-dir.ini```, separated by commas, semicolons, or spaces, with support for refreshing for real-time updates.
* [LUT Apply](#LUT), [TextImageV2](#TextImageV2), [TextImage](#TextImage) and [SimpleTextImage](#SimpleTextImage) nodes support defining multi-directory font and LUT folders, and support refreshing for real-time updates.
* Commit [HumanPartsUltra](#HumanPartsUltra) node, used to generate human body part masks. 
It is based on the wrapper of [metal3d/ComfyUI_Human_Parts](https://github.com/metal3d/ComfyUI_Human_Parts); thanks to the original author.
  Download the model file from [BaiduNetdisk](https://pan.baidu.com/s/1-6uwH6RB0FhIVfa3qO7hhQ?pwd=d862) or [huggingface](https://huggingface.co/Metal3d/deeplabv3p-resnet50-human/tree/main) and copy it to the ```ComfyUI\models\onnx\human-parts``` folder.
* ObjectDetector nodes add a sort-by-confidence option.
* Commit [DrawBBoxMask](#DrawBBoxMask) node, used to convert the BBoxes output by the Object Detector nodes into a mask.
* Commit [UserPromptGeneratorTxtImg](#UserPromptGeneratorTxtImg) and [UserPromptGeneratorReplaceWord](#UserPromptGeneratorReplaceWord) nodes, used to generate text and image prompts and replace prompt content.
* Commit [PhiPrompt](#PhiPrompt) node, using Microsoft Phi 3.5 text and vision models for local inference. It can be used to generate prompt words, process prompt words, or infer prompt words from images. Running this model requires at least 16GB of video memory.
  Download the model files from [BaiduNetdisk](https://pan.baidu.com/s/1BdTLdaeGC3trh1U3V-6XTA?pwd=29dh) or [huggingface.co/microsoft/Phi-3.5-vision-instruct](https://huggingface.co/microsoft/Phi-3.5-vision-instruct/tree/main) and [huggingface.co/microsoft/Phi-3.5-mini-instruct](https://huggingface.co/microsoft/Phi-3.5-mini-instruct/tree/main) and copy them to the ```ComfyUI\models\LLM``` folder.
* Commit [GetMainColors](#GetMainColors) node, which can obtain the 5 main colors of an image. 
Commit [ColorName](#ColorName) node, which can obtain the color name of an input color value.
* Duplicate the [Brightness & Contrast](#Brightness) node as [BrightnessContrastV2](#BrightnessContrastV2), the [Color of Shadow & Highlight](#Highlight) node as [ColorofShadowHighlight](#HighlightV2), and [Shadow & Highlight Mask](#Shadow) as [Shadow Highlight Mask V2](#ShadowV2), to avoid errors in ComfyUI workflow parsing caused by the "&" character in node names.
* Commit [VQAPrompt](#VQAPrompt) and [LoadVQAModel](#LoadVQAModel) nodes.
  Download the model from [BaiduNetdisk](https://pan.baidu.com/s/1ILREVgM0eFJlkWaYlKsR0g?pwd=yw75) or [huggingface.co/Salesforce/blip-vqa-capfilt-large](https://huggingface.co/Salesforce/blip-vqa-capfilt-large/tree/main) and [huggingface.co/Salesforce/blip-vqa-base](https://huggingface.co/Salesforce/blip-vqa-base/tree/main) and copy it to the ```ComfyUI\models\VQA``` folder.
* [Florence2Ultra](#Florence2Ultra), [Florence2Image2Prompt](#Florence2Image2Prompt) and [LoadFlorence2Model](#LoadFlorence2Model) nodes support the MiaoshouAI/Florence-2-large-PromptGen-v1.5 and MiaoshouAI/Florence-2-base-PromptGen-v1.5 models.
  Download the model files from [BaiduNetdisk](https://pan.baidu.com/s/1xOL6x6LijIMSh_3woErjJg?pwd=t3xa) or [huggingface.co/MiaoshouAI/Florence-2-large-PromptGen-v1.5](https://huggingface.co/MiaoshouAI/Florence-2-large-PromptGen-v1.5/tree/main) and [huggingface.co/MiaoshouAI/Florence-2-base-PromptGen-v1.5](https://huggingface.co/MiaoshouAI/Florence-2-base-PromptGen-v1.5/tree/main), and copy them to the ```ComfyUI\models\florence2``` folder.
* Commit [BiRefNetUltraV2](#BiRefNetUltraV2) and [LoadBiRefNetModel](#LoadBiRefNetModel) nodes, which support the latest BiRefNet model.
  Download the model file named ```BiRefNet-general-epoch_244.pth``` from [BaiduNetdisk](https://pan.baidu.com/s/12z3qUuqag3nqpN2NJ5pSzg?pwd=ek65) or [GoogleDrive](https://drive.google.com/drive/folders/1s2Xe0cjq-2ctnJBR24563yMSCOu4CcxM) to the ```ComfyUI/models/BiRefNet/pth``` folder. You can also download more BiRefNet models and put them here.
* [ExtendCanvasV2](#ExtendCanvasV2) node supports negative value input, which means the image will be cropped.
* The default title color of nodes is changed to blue-green, and nodes in LayerStyle, LayerColor, LayerMask, LayerUtility, and LayerFilter are distinguished by different colors.
* The Object Detector nodes added a sort-bbox option, which allows sorting from left to right, top to bottom, or large to small, making object selection more intuitive and convenient. The nodes released yesterday have been abandoned; please manually replace them with the new version nodes (sorry).
* Commit [SAM2Ultra](#SAM2Ultra), [SAM2VideoUltra](#SAM2VideoUltra), [ObjectDetectorFL2](#ObjectDetectorFL2), [ObjectDetectorYOLOWorld](#ObjectDetectorYOLOWorld), [ObjectDetectorYOLO8](#ObjectDetectorYOLO8), [ObjectDetectorMask](#ObjectDetectorMask) and [BBoxJoin](#BBoxJoin) nodes. 
  Download the models from [BaiduNetdisk](https://pan.baidu.com/s/1xaQYBA6ktxvAxm310HXweQ?pwd=auki) or [huggingface.co/Kijai/sam2-safetensors](https://huggingface.co/Kijai/sam2-safetensors/tree/main) and copy them to the ```ComfyUI/models/sam2``` folder,
  and download the models from [BaiduNetdisk](https://pan.baidu.com/s/1QpjajeTA37vEAU2OQnbDcQ?pwd=nqsk) or [GoogleDrive](https://drive.google.com/drive/folders/1nrsfq4S-yk9ewJgwrhXAoNVqIFLZ1at7?usp=sharing) and copy them to the ```ComfyUI/models/yolo-world``` folder.
  This update introduces new dependencies; please reinstall the dependency packages.
* Commit [RandomGenerator](#RandomGenerator) node, used to generate random numbers within a specified range, with int, float, and boolean outputs, supporting batch generation of different random numbers per image batch.
* Commit [EVF-SAMUltra](#EVFSAMUltra) node, an implementation of [EVF-SAM](https://github.com/hustvl/EVF-SAM) in ComfyUI. Please download the model files from [BaiduNetdisk](https://pan.baidu.com/s/1EvaxgKcCxUpMbYKzLnEx9w?pwd=69bn) or [huggingface/EVF-SAM2](https://huggingface.co/YxZhang/evf-sam2/tree/main) and [huggingface/EVF-SAM](https://huggingface.co/YxZhang/evf-sam/tree/main) to the ```ComfyUI/models/EVF-SAM``` folder (save the models in their respective subdirectories).
  Because new dependency packages were introduced, please reinstall the dependency packages after upgrading the plugin.
* Commit [ImageTaggerSave](#ImageTaggerSave) and [ImageAutoCropV3](#ImageAutoCropV3) nodes. 
Used to implement an automatic cropping and tagging workflow for training sets (the workflow ```image_tagger_save.json``` is located in the workflow directory).
* Commit [CheckMaskV2](#CheckMaskV2) node, adding the ```simple``` method to detect masks more quickly.
* Commit [ImageReel](#ImageReel) and [ImageReelComposite](#ImageReelComposite) nodes to composite multiple images on a canvas.
* [NumberCalculatorV2](#NumberCalculatorV2) and [NumberCalculator](#NumberCalculator) add the ```min``` and ```max``` methods.
* Optimize node loading speed.
* [Florence2Image2Prompt](#Florence2Image2Prompt) adds support for the ```thwri/CogFlorence-2-Large-Freeze``` and ```thwri/CogFlorence-2.1-Large``` models. Please download the model files from [BaiduNetdisk](https://pan.baidu.com/s/1hzw9-QiU1vB8pMbBgofZIA?pwd=mfl3) or [huggingface/CogFlorence-2-Large-Freeze](https://huggingface.co/thwri/CogFlorence-2-Large-Freeze/tree/main) and [huggingface/CogFlorence-2.1-Large](https://huggingface.co/thwri/CogFlorence-2.1-Large/tree/main), then copy them to the ```ComfyUI/models/florence2``` folder.
* Merge the branch from [ClownsharkBatwing](https://github.com/ClownsharkBatwing), "Use GPU for color blend mode", which speeds up some layer blends by more than ten times.
* Commit [Florence2Ultra](#Florence2Ultra), [Florence2Image2Prompt](#Florence2Image2Prompt) and [LoadFlorence2Model](#LoadFlorence2Model) nodes.
* [TransparentBackgroundUltra](#TransparentBackgroundUltra) node adds new model support. Please download the model file according to the instructions.
* Commit [SegformerUltraV2](#SegformerUltraV2), [SegfromerFashionPipeline](#SegfromerFashionPipeline) and [SegformerClothesPipeline](#SegformerClothesPipeline) nodes, used for segmentation of clothing. 
Please download the model file according to the instructions.
* Commit ```install_requirements.bat``` and ```install_requirements_aki.bat```, a one-click solution for installing dependency packages.
* Commit [TransparentBackgroundUltra](#TransparentBackgroundUltra) node, which removes the background based on the transparent-background model.
* Change the VitMatte model of the [Ultra](#Ultra) nodes to a local call. Please download [all files of the vitmatte model](https://huggingface.co/hustvl/vitmatte-small-composition-1k/tree/main) to the ```ComfyUI/models/vitmatte``` folder.
* [GetColorToneV2](#GetColorToneV2) node adds the ```mask``` method to the color selection option, which can accurately obtain the main color and average color within the mask.
* [ImageScaleByAspectRatioV2](#ImageScaleByAspectRatioV2) node adds the "background_color" option.
* [LUT Apply](#LUT) adds the "strength" option.
* Commit [AutoAdjustV2](#AutoAdjustV2) node, adding optional mask input and support for multiple automatic color adjustment modes.
* Due to the upcoming discontinuation of the gemini-pro vision service, [PromptTagger](#PromptTagger) and [PromptEmbellish](#PromptEmbellish) have added the "gemini-1.5-flash" API to keep working.
* [Ultra](#Ultra) nodes added the option to run ```VitMatte``` on the CUDA device, resulting in a 5-fold increase in running speed.
* Commit [QueueStop](#QueueStop) node, used to terminate the queue operation.
* Optimize the performance of the ```VitMatte``` method for [Ultra](#Ultra) nodes when processing large-size images.
* [CropByMaskV2](#CropByMaskV2) adds an option to round the cutting size to multiples.
* Commit [CheckMask](#CheckMask) node, which detects whether the mask contains sufficient effective area. 
Commit [HSVValue](#HSVValue) node, which converts color values to HSV values.
* [BooleanOperatorV2](#BooleanOperatorV2), [NumberCalculatorV2](#NumberCalculatorV2), [Integer](#Integer), [Float](#Float) and [Boolean](#Boolean) nodes add a string output to output the value as a string for use with [SwitchCase](#SwitchCase).
* Commit [SwitchCase](#SwitchCase) node, which switches the output based on the matching string. It can be used for any type of data switching.
* Commit [String](#String) node, used to output a string. It is the simplified TextBox node.
* Commit [If](#If) node, which switches the output based on a Boolean condition input. It can be used for any type of data switching.
* Commit [StringCondition](#StringCondition) node, which determines whether the text contains or does not contain a substring.
* Commit [NumberCalculatorV2](#NumberCalculatorV2) node, adding the nth-root operation. Commit [BooleanOperatorV2](#BooleanOperatorV2) node, adding greater/less-than and greater/less-than-or-equal logical judgments. The two nodes can accept numeric inputs and can also enter numeric values within the node. Note: numeric input takes precedence; values in the nodes are not valid when there is an input.
* Commit [SD3NegativeConditioning](#SD3NegativeConditioning) node, encapsulating the four Negative Conditioning nodes in SD3 into a single node.
* [ImageRemoveAlpha](#ImageRemoveAlpha) node adds optional mask input.
* Commit [HLFrequencyDetailRestore](#HLFrequencyDetailRestore) node, using low-frequency filtering and high-frequency preservation to restore image details, giving better fusion.
* Commit [AddGrain](#AddGrain) and [MaskGrain](#MaskGrain) nodes, which add noise to a picture or mask.
* Commit [FilmV2](#FilmV2) node; the fastgrain method is added on top of the previous one, and noise generation is 10 times faster.
* Commit [ImageToMask](#ImageToMask) node, which can convert an image into a mask. 
It supports converting any channel in LAB, RGBA, YUV, and HSV modes into masks, and provides color-level adjustment. It supports optional mask input to obtain masks that only include the valid parts.
* The blackpoint and whitepoint options in some nodes have been changed to slider adjustment for a more intuitive display, including [MaskEdgeUltraDetailV2](#MaskEdgeUltraDetailV2), [SegmentAnythingUltraV2](#SegmentAnythingUltraV2), [RmBgUltraV2](#RmBgUltraV2), [PersonMaskUltraV2](#PersonMaskUltraV2), [BiRefNetUltra](#BiRefNetUltra), [SegformerB2ClothesUltra](#SegformerB2ClothesUltra), [BlendIfMask](#BlendIfMask) and [Levels](#Levels).
* [ImageScaleRestoreV2](#ImageScaleRestoreV2) and [ImageScaleByAspectRatioV2](#ImageScaleByAspectRatioV2) nodes add the ```total_pixel``` method to scale images.
* Commit [MediapipeFacialSegment](#MediapipeFacialSegment) node, used to segment facial features, including the left and right eyebrows, eyes, lips, and teeth.
* Commit [BatchSelector](#BatchSelector) node, used to retrieve specified images or masks from batches of images or masks.
* LayerUtility creates new subdirectories such as SystemIO, Data, and Prompt; some nodes are classified into subdirectories.
* Commit [MaskByColor](#MaskByColor) node, which generates a mask based on the selected color.
* Commit [LoadPSD](#LoadPSD) node, which reads the PSD format and outputs layer images. Note that this node requires the installation of the ```psd_tools``` dependency package. If an error such as ```ModuleNotFoundError: No module named 'docopt'``` occurs during the installation of psd_tools, please download [docopt's whl](https://www.piwheels.org/project/docopt/) and install it manually.
* Commit [SegformerB2ClothesUltra](#SegformerB2ClothesUltra) node, used to segment character clothing. 
The model segmentation code is from [StartHua](https://github.com/StartHua/Comfyui_segformer_b2_clothes); thanks to the original author.
* [SaveImagePlus](#SaveImagePlus) node adds a function to output the workflow to JSON, supports ```%date``` and ```%time``` for embedding the date or time in the path and filename, and adds a preview switch.
* Commit [SaveImagePlus](#SaveImagePlus) node. It can customize the directory where the picture is saved, add a timestamp to the file name, select the save format, set the image compression rate, set whether to save the workflow, and optionally add invisible watermarks to the picture.
* Commit [AddBlindWaterMark](#AddBlindWaterMark) and [ShowBlindWaterMark](#ShowBlindWaterMark) nodes, which add an invisible watermark to a picture and decode it. Commit [CreateQRCode](#CreateQRCode) and [DecodeQRCode](#DecodeQRCode) nodes, which generate QR code pictures and decode QR codes.
* [ImageScaleRestoreV2](#ImageScaleRestoreV2), [ImageScaleByAspectRatioV2](#ImageScaleByAspectRatioV2) and [ImageAutoCropV2](#ImageAutoCropV2) nodes add ```width``` and ```height``` options, which can specify the width or height as fixed values.
* Commit [PurgeVRAM](#PurgeVRAM) node, which cleans up VRAM and RAM.
* Commit [AutoAdjust](#AutoAdjust) node, which can automatically adjust image contrast and white balance.
* Commit [RGBValue](#RGBValue) node to output the color value as a single decimal value of R, G, B. This idea is from [vxinhao](https://github.com/vxinhao/color2rgb), thanks.
* Commit [seed](#seed) node to output the seed value. 
The [ImageMaskScaleAs](#ImageMaskScaleAs), [ImageScaleBySpectRatio](#ImageScaleBySpectRatio), [ImageScaleBySpectRatioV2](#ImageScaleBySpectRatioV2), [ImageScaleRestore](#ImageScaleRestore) and [ImageScaleRestoreV2](#ImageScaleRestoreV2) nodes add ```width``` and ```height``` outputs.
* Commit [Levels](#Levels) node, which can achieve the same color-levels adjustment as Photoshop. [Sharp&Soft](#Sharp) adds the "None" option.
* Commit [BlendIfMask](#BlendIfMask) node. This node works with ImageBlendV2 or ImageBlendAdvanceV2 to achieve the same Blend If function as Photoshop.
* Commit [ColorTemperature](#ColorTemperature) and [ColorBalance](#ColorBalance) nodes, used to adjust the color temperature and color balance of the picture.
* Add new types of [Blend Mode V2](#BlendModeV2) between images, now supporting up to 30 blend modes. The new blend modes are available for all V2 nodes that support blend modes, including ImageBlend V2, ImageBlendAdvance V2, DropShadow V2, InnerShadow V2, OuterGlow V2, InnerGlow V2, Stroke V2, ColorOverlay V2 and GradientOverlay V2.
  Part of the code for BlendMode V2 is from [Virtuoso Nodes for ComfyUI](https://github.com/chrisfreilich/virtuoso-nodes). Thanks to the original authors.
* Commit [YoloV8Detect](#YoloV8Detect) node.
* Commit [QWenImage2Prompt](#QWenImage2Prompt) node. This node is a repackage of the ```UForm-Gen2 Qwen Node``` from [ComfyUI_VLM_nodes](https://github.com/gokayfem/ComfyUI_VLM_nodes); thanks to the original author.
* Commit [BooleanOperator](#BooleanOperator), [NumberCalculator](#NumberCalculator), [TextBox](#TextBox), [Integer](#Integer), [Float](#Float) and [Boolean](#Boolean) nodes. 
These nodes can perform mathematical and logical operations.
* Commit [ExtendCanvasV2](#ExtendCanvasV2) node, supporting color value input.
* Commit [AutoBrightness](#AutoBrightness) node, which automatically adjusts the brightness of an image.
* [CreateGradientMask](#CreateGradientMask) node adds a ```center``` option.
* Commit [GetColorToneV2](#GetColorToneV2) node, which can select the main and average colors of the background or body.
* Commit [ImageRewardFilter](#ImageRewardFilter) node, which can filter out poor-quality pictures.
* [Ultra](#Ultra) nodes add a ```VITMatte(local)``` method. Choose this method to avoid accessing huggingface.co if you have already downloaded the model.
* Commit [HDR Effect](#HDR) node, which enhances the dynamic range and visual appeal of input images. This node is a repackage of [HDR Effects (SuperBeasts.AI)](https://github.com/SuperBeastsAI/ComfyUI-SuperBeasts).
* Commit [CropBoxResolve](#CropBoxResolve) node.
* Commit [BiRefNetUltra](#BiRefNetUltra) node. It uses the BiRefNet model to remove backgrounds, with better recognition ability and ultra-high edge detail.
* Commit [ImageAutoCropV2](#ImageAutoCropV2) node. It can optionally skip background removal, supports mask input, and can scale by long or short side size.
* Commit [ImageHub](#ImageHub) node, which supports up to 9 sets of Image and Mask switching outputs and supports random output.
* Commit [TextJoin](#TextJoin) node.
* Commit [PromptEmbellish](#PromptEmbellish) node. It outputs polished prompt words and supports inputting images as references.
* [Ultra](#Ultra) nodes have been fully upgraded to V2, with the addition of the VITMatte edge processing method, which is suitable for handling semi-transparent areas.
This includes the [MaskEdgeUltraDetailV2](#MaskEdgeUltraDetailV2), [SegmentAnythingUltraV2](#SegmentAnythingUltraV2), [RmBgUltraV2](#RmBgUltraV2), and [PersonMaskUltraV2](#PersonMaskUltraV2) nodes.
* Commit [Color of Shadow & Highlight](#Highlight) node, which can adjust the color of the dark and bright parts separately. Commit [Shadow & Highlight Mask](#Shadow) node, which can output masks for the dark and bright areas.
* Commit [CropByMaskV2](#CropByMaskV2) node. On the basis of the original node, it supports ```crop_box``` input, making it convenient to cut layers of the same size.
* Commit [SimpleTextImage](#SimpleTextImage) node, which generates simple typeset images and masks from text. This node references some of the functionality and code of [ZHO-ZHO-ZHO/ComfyUI-Text_Image-Composite](https://github.com/ZHO-ZHO-ZHO/ComfyUI-Text_Image-Composite).
* Commit [PromptTagger](#PromptTagger) node, which infers prompts from an image and can replace keywords in the prompt (requires applying for a Google Studio API key). Upgrade [ColorImageV2](#ColorImageV2) and [GradientImageV2](#GradientImageV2), supporting user-customized preset sizes and size_as input.
* Commit [LaMa](#LaMa) node, which can erase objects from an image based on a mask. This node is a repackage of [IOPaint](https://www.iopaint.com).
* Commit [ImageRemoveAlpha](#ImageRemoveAlpha) and [ImageCombineAlpha](#ImageCombineAlpha) nodes; the alpha channel of an image can be removed or merged.
* Commit [ImageScaleRestoreV2](#ImageScaleRestoreV2) and [ImageScaleByAspectRatioV2](#ImageScaleByAspectRatioV2) nodes, supporting scaling images to a specified long or short edge size.
* Commit [PersonMaskUltra](#PersonMaskUltra) node, which generates masks for a portrait's face, hair, body skin, clothing, or accessories.
The model code for this node comes from [a-person-mask-generator](https://github.com/djbielejeski/a-person-mask-generator).
* Commit [LightLeak](#LightLeak) node. This filter simulates the light leakage effect of film.
* Commit [Film](#Film) node. This filter simulates the grain, dark edges, and blurred edges of film, and supports a depth map input to simulate defocus. It is a reorganization and encapsulation of [digitaljohn/comfyui-propost](https://github.com/digitaljohn/comfyui-propost).
* Commit [ImageAutoCrop](#ImageAutoCrop) node, which is designed to generate image materials for training models.
* Commit [ImageScaleByAspectRatio](#ImageScaleByAspectRatio) node, which can scale an image or mask according to a frame ratio.
* Fix the color gradation bug in [LUT Apply](#LUT) node rendering; this node now supports the log color space. *Please load a dedicated log LUT file for log color space images.
* Commit [CreateGradientMask](#CreateGradientMask) node. Commit [LayerImageTransform](#LayerImageTransform) and [LayerMaskTransform](#LayerMaskTransform) nodes.
* Commit [MaskEdgeUltraDetail](#MaskEdgeUltraDetail) node, which processes rough masks into ultra-fine edges. Commit [Exposure](#Exposure) node.
* Commit [Sharp & Soft](#Sharp) node, which can enhance or smooth image details. Commit [MaskByDifferent](#MaskByDifferent) node, which compares two images and outputs a mask. Commit [SegmentAnythingUltra](#SegmentAnythingUltra) node, which improves the quality of mask edges. *If SegmentAnything is not installed, you will need to download the model manually.
* All nodes now fully support batch images, providing convenience for video creation.
  (The CropByMask node only supports cuts of the same size; if a batch mask_for_crop is input, the data from the first sheet will be used.)
* Commit [RemBgUltra](#RemBgUltra) and [PixelSpread](#PixelSpread) nodes, which significantly improve mask quality.
*RemBgUltra requires a manual model download.
* Commit [TextImage](#TextImage) node, which generates text images and masks.
* Add new [blend mode](#Blend) types between images; up to 19 blend modes are now supported, adding **color_burn, color_dodge, linear_burn, linear_dodge, overlay, soft_light, hard_light, vivid_light, pin_light, linear_light** and **hard_mix**.
  The newly added blend modes apply to all nodes that support blend modes.
* Commit [ColorMap](#ColorMap) filter node to create a pseudo-color heatmap effect.
* Commit [WaterColor](#WaterColor) and [SkinBeauty](#SkinBeauty) nodes. These are image filters that generate watercolor and skin-smoothing effects.
* Commit [ImageShift](#ImageShift) node to shift the image and output a displacement seam mask, making it convenient to create continuous textures.
* Commit [ImageMaskScaleAs](#ImageMaskScaleAs) node to adjust the image or mask size based on a reference image.
* Commit [ImageScaleRestore](#ImageScaleRestore) node to work with CropByMask for local upscale and repair work.
* Commit [CropByMask](#CropByMask) and [RestoreCropBox](#RestoreCropBox) nodes. Combined, they can crop part of the image, redraw it, and restore it to its original place.
* Commit [ColorAdapter](#ColorAdapter) node, which can automatically adjust the color tone of an image.
* Commit [MaskStroke](#MaskStroke) node, which can generate mask contour strokes.
* Add the [LayerColor](#LayerColor) node group, used to adjust image color.
It includes [LUT Apply](#LUT), [Gamma](#Gamma), [Brightness & Contrast](#Brightness), [RGB](#RGB), [YUV](#YUV), [LAB](#LAB) and [HSV](#HSV).
* Commit [ImageChannelSplit](#ImageChannelSplit) and [ImageChannelMerge](#ImageChannelMerge) nodes.
* Commit [MaskMotionBlur](#MaskMotionBlur) node.
* Commit [SoftLight](#SoftLight) node.
* Commit [ChannelShake](#ChannelShake) node, a filter that produces a channel dislocation effect similar to the TikTok logo.
* Commit [MaskGradient](#MaskGradient) node, which can create a gradient in a mask.
* Commit [GetColorTone](#GetColorTone) node, which can obtain the main color or average color of an image.
  Commit [MaskGrow](#MaskGrow) and [MaskEdgeShrink](#MaskEdgeShrink) nodes.
* Commit [MaskBoxDetect](#MaskBoxDetect) node, which can automatically detect the position through the mask and output it to a composite node.
  Commit [XY to Percent](#Percent) node to convert absolute coordinates to percent coordinates.
  Commit [GaussianBlur](#GaussianBlur) node.
  Commit [GetImageSize](#GetImageSize) node.
* Commit [ExtendCanvas](#ExtendCanvas) node.
* Commit [ImageBlendAdvance](#ImageBlendAdvance) node. This node allows compositing background images and layers of different sizes, providing a freer compositing experience.
  Commit [PrintInfo](#PrintInfo) node as a workflow debugging aid.
* Commit [ColorImage](#ColorImage) and [GradientImage](#GradientImage) nodes, used to generate solid and gradient color images.
* Commit [GradientOverlay](#GradientOverlay) and [ColorOverlay](#ColorOverlay) nodes.
  Add invalid mask input detection; an invalid mask input is now ignored.
* Commit [InnerGlow](#InnerGlow), [InnerShadow](#InnerShadow) and [MotionBlur](#MotionBlur) nodes.
* Rename all completed nodes. The nodes are divided into 4 groups: LayerStyle, LayerMask, LayerUtility, LayerFilter.
Workflows containing old-version nodes need them manually replaced with the new-version nodes.
* [OuterGlow](#OuterGlow) node has undergone significant modifications, adding **_brightness_**, **_light_color_**, and **_glow_color_** options.
* Commit [MaskInvert](#MaskInvert) node.
* Commit [ColorPick](#ColorPick) node.
* Commit [Stroke](#Stroke) node.
* Commit [MaskPreview](#MaskPreview) node.
* Commit [ImageOpacity](#ImageOpacity) node.
* The layer_mask is no longer a mandatory input. Layers and masks of different shapes are allowed, but their sizes must be the same.
* Commit [ImageBlend](#ImageBlend) node.
* Commit [OuterGlow](#OuterGlow) node.
* Commit [DropShadow](#DropShadow) node.

## Description

The nodes are divided into 5 groups according to their functions: LayerStyle, LayerColor, LayerMask, LayerUtility, and LayerFilter.

* [LayerStyle](#LayerStyle) nodes provide layer styles that mimic Adobe Photoshop.
  ![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_60829226ff64.jpg)
* The [LayerColor](#LayerColor) node group provides color adjustment functions.
  ![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_d18b4c8f6134.jpg)
* [LayerMask](#LayerMask) nodes provide mask assistance tools.
  ![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_82f1180642fe.jpg)
* [LayerUtility](#LayerUtility) nodes provide auxiliary nodes for layer compositing and workflows.
  ![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_828f7387092d.jpg)
* [LayerFilter](#LayerFilter) nodes provide image effect filters.
  ![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_e2a7b19f2468.jpg)

# <a id="table1">LayerStyle</a>

![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_c5dde66db62f.jpg)
![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_01f3980b05a5.jpg)
### <a id="table1">DropShadow</a>

Generate a drop shadow.
![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_8bc27843ade5.jpg)

Node options:
![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_a962b43ad9db.jpg)

* background_image<sup>1</sup>: The background image.
* layer_image<sup>1</sup>: The layer image to be composited.
* layer_mask<sup>1,2</sup>: Mask for layer_image; the shadow is generated according to its shape.
* invert_mask: Whether to invert the mask.
* blend_mode<sup>3</sup>: Blend mode of the shadow.
* opacity: Opacity of the shadow.
* distance_x: Horizontal offset of the shadow.
* distance_y: Vertical offset of the shadow.
* grow: Expansion amount of the shadow.
* blur: Blur level of the shadow.
* shadow_color<sup>4</sup>: Color of the shadow.
* [note](#notes)

### <a id="table1">OuterGlow</a>

Generate an outer glow.
![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_9fec8670b374.jpg)

Node options:
![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_d10e48ac7eea.jpg)

* background_image<sup>1</sup>: The background image.
* layer_image<sup>1</sup>: The layer image to be composited.
* layer_mask<sup>1,2</sup>: Mask for layer_image; the glow is generated according to its shape.
* invert_mask: Whether to invert the mask.
* blend_mode<sup>3</sup>: Blend mode of the glow.
* opacity: Opacity of the glow.
* brightness: Brightness of the glow.
* glow_range: Range of the glow.
* blur: Blur level of the glow.
* light_color<sup>4</sup>: Color of the center of the glow.
* glow_color<sup>4</sup>: Color of the edge of the glow.
* [note](#notes)

### <a id="table1">InnerShadow</a>

Generate an inner shadow.
![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_ee9e54e60ed8.jpg)

Node options:
![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_98899cf0a91d.jpg)

* background_image<sup>1</sup>: The background image.
* layer_image<sup>1</sup>: The layer image to be composited.
* layer_mask<sup>1,2</sup>:
Mask for layer_image; the shadow is generated according to its shape.
* invert_mask: Whether to invert the mask.
* blend_mode<sup>3</sup>: Blend mode of the shadow.
* opacity: Opacity of the shadow.
* distance_x: Horizontal offset of the shadow.
* distance_y: Vertical offset of the shadow.
* grow: Expansion amount of the shadow.
* blur: Blur level of the shadow.
* shadow_color<sup>4</sup>: Color of the shadow.
* [note](#notes)

### <a id="table1">InnerGlow</a>

Generate an inner glow.
![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_6a562e4fa957.jpg)

Node options:
![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_a9b541ae0053.jpg)

* background_image<sup>1</sup>: The background image.
* layer_image<sup>1</sup>: The layer image to be composited.
* layer_mask<sup>1,2</sup>: Mask for layer_image; the glow is generated according to its shape.
* invert_mask: Whether to invert the mask.
* blend_mode<sup>3</sup>: Blend mode of the glow.
* opacity: Opacity of the glow.
* brightness: Brightness of the glow.
* glow_range: Range of the glow.
* blur: Blur level of the glow.
* light_color<sup>4</sup>: Color of the center of the glow.
* glow_color<sup>4</sup>: Color of the edge of the glow.
* [note](#notes)

### <a id="table1">Stroke</a>

Generate a stroke for the layer.
![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_6de545b99b32.jpg)

Node options:
![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_50967a464f62.jpg)

* background_image<sup>1</sup>: The background image.
* layer_image<sup>1</sup>: The layer image to be composited.
* layer_mask<sup>1,2</sup>: Mask for layer_image; the stroke is generated according to its shape.
* invert_mask: Whether to invert the mask.
* blend_mode<sup>3</sup>: Blend mode of the stroke.
* opacity: Opacity of the stroke.
* stroke_grow: Expansion or contraction of the stroke; positive values expand it, negative values contract it.
* stroke_width: Width of the stroke.
* blur: Blur level of the stroke.
* stroke_color<sup>4</sup>: Color of the stroke, in hexadecimal RGB format.
* [note](#notes)

### <a id="table1">GradientOverlay</a>

Generate a gradient overlay.
![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_8482312b5a76.jpg)

Node options:

* background_image<sup>1</sup>: The background image.
* layer_image<sup>1</sup>: The layer image to be composited.
* layer_mask<sup>1,2</sup>: Mask for layer_image.
* invert_mask: Whether to invert the mask.
* blend_mode<sup>3</sup>: Blend mode of the gradient.
* opacity: Opacity of the overlay.
* start_color: Color at the start of the gradient.
* start_alpha: Transparency at the start of the gradient.
* end_color: Color at the end of the gradient.
* end_alpha: Transparency at the end of the gradient.
* angle: Rotation angle of the gradient.
* [note](#notes)

### <a id="table1">ColorOverlay</a>

Generate a color overlay.
![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_fe3fee659914.jpg)

Node options:
![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_6502f8c02ae4.jpg)

* background_image<sup>1</sup>: The background image.
* layer_image<sup>1</sup>: The layer image to be composited.
* layer_mask<sup>1,2</sup>: Mask for layer_image.
* invert_mask: Whether to invert the mask.
* blend_mode<sup>3</sup>: Blend mode of the color.
* opacity: Opacity of the overlay.
* color: Color of the overlay.
* [note](#notes)

# <a id="table1">LayerColor</a>

![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_aaff75deec77.jpg)
![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_87592d32005d.jpg)

### <a id="table1">LUT</a> Apply

Apply a LUT to the image. Only the .cube format is supported.
![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_7af331f07b6a.jpg)

Node options:
![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_483037b8b05f.jpg)

* LUT<sup>*</sup>: Lists the available .cube files in the LUT folder; the selected LUT file is applied to the image.
* color_space: Select linear for regular images; select log for images in log color space.
* strength: Range 0~100, the strength of the LUT. The higher the value, the more the result differs from the original image; the lower, the closer to the original image.

<sup>*</sup><font size="3">The LUT folder is defined in ```resource_dir.ini```, located in the plugin's root directory; its default name is ```resource_dir.ini.example```. On first use, change the file extension to
```.ini```.
Open it in a text editor, find the line starting with "LUT_dir=", and enter your custom folder path after the "=" sign.
```resource_dir.ini``` supports defining multiple folders, separated by commas, semicolons, or spaces.
When ComfyUI initializes, all the .cube files in these folders are collected and displayed in the node list.
If the folders set in the ini file are invalid, the plugin's bundled LUT folder is enabled instead.</font>

### <a id="table1">AutoAdjust</a>

Automatically adjust the brightness, contrast, and white balance of the image, with some manual adjustment options to compensate for the shortcomings of automatic adjustment.
![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_7600ac14cedf.jpg)

Node options:
![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_8656b623dbca.jpg)

* strength: Adjustment strength. The higher the value, the more the result differs from the original image.
* brightness: Manual brightness adjustment.
* contrast: Manual contrast adjustment.
* saturation: Manual saturation adjustment.
* red: Manual red channel adjustment.
* green: Manual green channel adjustment.
* blue: Manual blue channel adjustment.

### <a id="table1">AutoAdjustV2</a>

On the basis of AutoAdjust, adds mask input so that automatic color adjustment applies only to the content within the mask, and adds several automatic adjustment modes.
![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_3adec65ca62e.jpg)

AutoAdjustV2 makes the following changes on that basis:
![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_5af0e141df5c.jpg)

* mask: Optional mask input.
* mode: Automatic adjustment mode. "RGB" adjusts automatically based on the three RGB channels, "lum + sat" based on luminance and saturation, "luminance" based on luminance, "saturation" based on saturation, and "mono" adjusts based on grayscale and outputs a monochrome image.

### <a id="table1">AutoBrightness</a>

Automatically adjust images that are too dark or too bright to a moderate brightness, with mask input support. When a mask is input, only the masked content is used as the data source for the automatic brightness adjustment, while the output is still the entire adjusted image.
![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_2e2691b779f9.jpg)

Node options:
![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_b8460536c434.jpg)

* strength: Strength of the automatic brightness adjustment. The higher the value, the more the result is biased toward the middle value and the more it differs from the original image.
* saturation: Color saturation. Brightness changes usually cause saturation changes; this parameter can compensate appropriately.

### <a id="table1">ColorAdapter</a>

Automatically adjust the color tone of the image to bring it closer to a reference image.
![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_600c724d57ae.jpg)

Node options:
![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_ea6bfe5a00ea.jpg)

* opacity: Opacity of the tone-adjusted image.

### <a id="table1">Exposure</a>

Change the exposure of the image.
![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_5509a799611c.jpg)

Node options:

* exposure: Exposure value. The higher the value, the brighter the image.

### <a id="table1">Color of Shadow & Highlight</a>

Adjust the color of the dark and bright parts of the image.
![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_64cc8863e4c2.jpg)

Node options:
![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_7b3702c3b73d.jpg)

* image: The input image.
* mask: Optional input. If a mask is provided, only the colors within the mask range are adjusted.
* shadow_brightness: Brightness of the dark areas.
* shadow_saturation: Color saturation of the dark areas.
* shadow_hue: Color hue of the dark areas.
* shadow_level_offset: Offset of the dark-area values; larger values bring more areas from the highlights into the shadows.
* shadow_range: Transition range of the dark areas.
* highlight_brightness: Brightness of the bright areas.
* highlight_saturation: Color saturation of the bright areas.
* highlight_hue: Color hue of the bright areas.
* highlight_level_offset: Offset of the bright-area values; larger values bring more areas from the shadows into the highlights.
* highlight_range: Transition range of the bright areas.

### <a id="table1">Shadow Highlight V2</a>

A duplicate of the ```Color of Shadow & Highlight``` node with the "&" character removed from the node name, to avoid ComfyUI workflow parsing errors.

### <a id="table1">ColorTemperature</a>

![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_2b37c5b8531f.jpg)
Change the color temperature of the image.

Node options:
![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_9a83d460085f.jpg)

* temperature: Color temperature value, ranging from -100 to 100. Higher values give a higher color temperature (bluer); lower values give a lower color temperature (yellower).

### <a id="table1">Levels</a>

![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_5a3271d4ea81.jpg)
Adjust the color levels of the image.

Node options:
![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_25cdf533d394.jpg)

* channel: Select the channel to adjust: RGB, red, green, or blue.
* black_point<sup>*</sup>: Input black point value. Range 0-255, default 0.
* white_point<sup>*</sup>: Input white point value. Range 0-255, default 255.
* gray_point: Input gray point value. Range 0.01-9.99, default 1.
* output_black_point<sup>*</sup>: Output black point value. Range 0-255, default 0.
* output_white_point<sup>*</sup>: Output white point value. Range 0-255, default 255.

<sup>*</sup><font size="3">If black_point or output_black_point is greater than white_point or output_white_point, the two are swapped: the larger value becomes the white point and the smaller value becomes the black point.</font>

### <a id="table1">ColorBalance</a>

![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_4978557c9510.jpg)
Adjust the color balance of the image.

Node options:
![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_c727b12aeea6.jpg)

* cyan_red: Cyan-red balance. Negative values shift toward cyan, positive values toward red.
* magenta_green: Magenta-green balance. Negative values shift toward magenta, positive values toward green.
* yellow_blue: Yellow-blue balance. Negative values shift toward yellow, positive values toward blue.

### <a id="table1">Gamma</a>

Adjust the gamma of the image.

Node options:
![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_8ff27eaa7d29.jpg)

* gamma: Gamma value.

### <a id="table1">Brightness & Contrast</a>

Adjust the brightness, contrast, and saturation of the image.

Node options:
![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_02051cdc00b7.jpg)

* brightness: Brightness value.
* contrast: Contrast value.
* saturation: Color saturation value.

### <a id="table1">BrightnessContrastV2</a>

A duplicate of the "Brightness & Contrast" node with the "&" character removed from the node name, to avoid ComfyUI workflow parsing errors.

### <a id="table1">RGB</a>

Adjust the RGB channels of the image.

Node options:
![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_838c01431979.jpg)

* R: R channel.
* G: G channel.
* B: B channel.

### <a id="table1">YUV</a>

Adjust the YUV channels of the image.

Node options:
![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_359a1fde2b92.jpg)

* Y: Y channel.
* U: U channel.
* V: V channel.
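The Levels node described above is characterized by the conventional Photoshop-style levels formula: clamp and normalize the input between the black and white points, apply the gray-point (midtone gamma) correction, then remap to the output range. A minimal single-value sketch of that mapping, assuming LayerStyle follows the conventional formula (the plugin's exact implementation may differ):

```python
def apply_levels(v: float, black: int = 0, white: int = 255, gray: float = 1.0,
                 out_black: int = 0, out_white: int = 255) -> float:
    """Map one channel value (0-255) through a Photoshop-style levels adjustment.

    Sketch of the conventional formula; not necessarily the plugin's exact code.
    """
    # Swap if a black point exceeds its white point, as the node footnote describes.
    if black > white:
        black, white = white, black
    if out_black > out_white:
        out_black, out_white = out_white, out_black
    # Normalize the input range, apply the gray-point (midtone gamma) correction,
    # then remap to the output range.
    t = min(max((v - black) / max(white - black, 1e-6), 0.0), 1.0)
    t = t ** (1.0 / gray)
    return out_black + t * (out_white - out_black)

assert apply_levels(0) == 0 and apply_levels(255) == 255  # identity at defaults
```

With the default points, the mapping is the identity; a gray point above 1 lifts the midtones while leaving the black and white points fixed.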
### <a id="table1">LAB</a>

Adjust the LAB channels of the image.

Node options:
![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_852e24c579ff.jpg)

* L: L channel.
* A: A channel.
* B: B channel.

### <a id="table1">HSV</a>

Adjust the HSV channels of the image.

Node options:
![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_bb753b6fafc3.jpg)

* H: H channel.
* S: S channel.
* V: V channel.

### <a id="table1">ColorInvert</a>
Invert the image colors to a negative effect. You can invert all RGB channels, invert to monochrome, or invert a single RGB channel.
![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_6e93474aaae9.jpg)

Node options:
![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_08ca91d55ed3.jpg)
* negative_channel: Select the channel to invert.

# <a id="table1">LayerUtility</a>

![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_6c23aba33e67.jpg)

### <a id="table1">ImageBlendAdvance</a>

For layer compositing. Allows compositing layer images of different sizes onto a background image, with position and transform settings. Multiple blend modes are available, and the opacity can be set.

The node provides layer transform methods and anti-aliasing options, which help improve the quality of the composited image.

The node also provides a mask output that can be used in subsequent workflows.
![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_68a71f6f3e63.jpg)

Node options:
![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_a7d56307b6a8.jpg)

* background_image: The background image.
* layer_image<sup>5</sup>: The layer image to be composited.
* layer_mask<sup>2,5</sup>: Mask for the layer image.
* invert_mask: Whether to invert the mask.
* blend_mode<sup>3</sup>: Blend mode.
* opacity: Opacity of the blend.
* x_percent: Horizontal position of the layer on the background image, as a percentage; 0 is the far left and 100 the far right. The value can be less than 0 or greater than 100, meaning part of the layer extends beyond the frame.
* y_percent: Vertical position of the layer on the background image, as a percentage; 0 is the top and 100 the bottom. For example, 50 means vertically centered, 20 upper-middle, 80 lower-middle.
* mirror: Mirror flip, with horizontal and vertical flip modes.
* scale: Layer scale factor; 1.0 is the original size.
* aspect_ratio: Layer aspect ratio. 1.0 is the original ratio; greater than 1.0 stretches it, less than 1.0 squashes it.
* rotate: Layer rotation angle.
* method: Sampling method for layer scaling and rotation, including Lanczos, bicubic, Hamming, bilinear, box, and nearest neighbor. Different sampling methods affect the quality and processing time of the composited image.
* anti_aliasing: Anti-aliasing, ranging from 0 to 16; the higher the value, the less visible the aliasing. Excessively high values significantly slow down the node.
* [note](#notes)

### <a id="table1">ImageCompositeHandleMask</a>
Generates a locally feathered mask and the corresponding crop data. The node provides a mask output that can be used in subsequent workflows.
![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_56f34c33bddf.jpg)

Node options:
![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_f7a766b41472.jpg)
* background_image: The background image.
* layer_image: The layer image to be composited.
* layer_mask: Mask for the layer image.
* invert_mask: Whether to invert the mask.
* opacity: Opacity of the composited image.
* x_percent: Horizontal position of the layer on the background image, as a percentage; 0 is the far left and 100 the far right. The value can be less than 0 or greater than 100, meaning part of the layer extends beyond the frame.
* y_percent: Vertical position of the layer on the background image, as a percentage; 0 is the top and 100 the bottom. For example, 50 means vertically centered, 20 upper-middle, 80 lower-middle.
* scale: Layer scale factor; 1.0 is the original size.
* mirror: Mirror flip, with horizontal and vertical flip modes.
* rotate: Layer rotation angle.
* anti_aliasing: Anti-aliasing level, ranging from 0 to 8; the higher the value, the less visible the aliasing. Excessively high values significantly slow down the node.
* handle_detect: Two detection methods for the feathered mask position: ```mask_area``` detects the valid area of the layer object's mask, while ```layer-bbox``` detects the bounding box of the layer object.
* top_handle: Feather amount at the top of the mask, as a percentage of the mask's average side length.
* bottom_handle: Feather amount at the bottom of the mask, as a percentage of the mask's average side length.
* left_handle: Feather amount on the left of the mask, as a percentage of the mask's average side length.
* right_handle: Feather amount on the right of the mask, as a percentage of the mask's average side length.
* handle_mask_outradius: Corner radius of the feathered mask.
* top_reserve: Size reserved at the top after cropping.
* bottom_reserve: Size reserved at the bottom after cropping.
* left_reserve: Size reserved on the left after cropping.
* right_reserve: Size reserved on the right after cropping.
* round_to_multiple: Round the crop side lengths to a multiple of this value. For example, setting it to 8 forces both width and height to be multiples of 8.

Outputs:
* image: The composited image.
* mask: The composited mask.
* layer_bbox_mask: Bounding-box mask of the composited object.
* handle_mask: The feathered mask.
* handle_crop_bbox: Crop data of the feathered mask.
* handle_overrange: Whether the feathered mask extends beyond the background image, output as a string containing "top", "bottom", "left", and "right".

### <a id="table1">CropByMask</a>

Crop the image according to the mask range, with adjustable border sizes reserved around it.
This node can be used together with the [RestoreCropBox](#RestoreCropBox) and [ImageScaleRestore](#ImageScaleRestore) nodes to crop and upscale part of an image, then paste it back in place.
![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_addad8df505f.jpg)

Node options:
![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_1d815b1a29a3.jpg)

* image<sup>5</sup>: The input image.
* mask_for_crop<sup>5</sup>: Mask of the image; cropping is performed automatically according to the mask range.
* invert_mask: Whether to invert the mask.
* detect: Detection method. ```min_bounding_rect``` is the minimum bounding rectangle, ```max_inscribed_rect``` is the maximum inscribed rectangle, and ```mask_area``` is the valid area of the mask pixels.
* top_reserve: Size reserved at the top after cropping.
* bottom_reserve: Size reserved at the bottom after cropping.
* left_reserve: Size reserved on the left after cropping.
* right_reserve: Size reserved on the right after cropping.
* [note](#notes)

Outputs:
* croped_image: The cropped image.
* croped_mask: The cropped mask.
* crop_box: Bounding-box data from the crop, used by the RestoreCropBox node for restoration.
* box_preview: Preview of the crop position; red is the detected range and green is the crop range after adding the reserved borders.

### <a id="table1">CropByMaskV2</a>

The V2 upgrade of CropByMask. Supports crop_box input, making it convenient to crop layers of the same size.

The following changes are made on the basis of CropByMask:
![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_f42eb377caae.jpg)

* The input ```mask_for_crop``` is renamed to ```mask```.
* An optional ```crop_box``` input is added. If provided, mask detection is skipped and this data is used for cropping directly.
* A new ```round_to_multiple``` option rounds the crop side lengths to a multiple of this value. For example, setting it to 8 forces both width and height to be multiples of 8.

### <a id="table1">RestoreCropBox</a>

Restore an image cropped by [CropByMask](#CropByMask) back onto the original image.

Node options:
![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_66d699cffbd7.jpg)

* background_image: The original image before cropping.
* croped_image<sup>5</sup>: The cropped image. If the part was upscaled in between, restore its size before restoration.
* croped_mask<sup>5</sup>: The cropped mask.
* crop_box: Bounding-box data from the crop.
* invert_mask: Whether to invert the mask.
* [note](#notes)

### <a id="table1">CropBoxResolve</a>

Resolve a ```crop_box``` into ```x```, ```y```, ```width```, and ```height```.
![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_1add6a747602.jpg)
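The CropByMask → RestoreCropBox round trip above hinges on the ```crop_box``` record: it pins the cropped patch back to its original position. A minimal sketch with plain 2D lists standing in for image tensors, assuming a corner-based `(x1, y1, x2, y2)` box layout (the node's actual internal layout is not documented here):

```python
def resolve_crop_box(crop_box):
    """Resolve a (x1, y1, x2, y2) crop box into x, y, width, height,
    in the spirit of the CropBoxResolve node. The corner-based tuple
    layout is an assumption for illustration."""
    x1, y1, x2, y2 = crop_box
    return x1, y1, x2 - x1, y2 - y1

def restore_crop(background, cropped, crop_box):
    """Paste a cropped patch back into the background at its recorded
    position (2D lists stand in for image tensors)."""
    x, y, w, h = resolve_crop_box(crop_box)
    for row in range(h):
        for col in range(w):
            background[y + row][x + col] = cropped[row][col]
    return background

bg = [[0] * 6 for _ in range(4)]
patch = [[1, 1], [1, 1]]
restore_crop(bg, patch, (2, 1, 4, 3))  # patch lands at columns 2-3 of rows 1-2
```

This is why the cropped patch must be returned to its recorded size before restoration: the paste loop writes exactly `width × height` pixels at the recorded offset.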
### <a id="table1">ImageScaleRestore</a>

Image scaling. When this node is used in a pair, the image can be automatically restored to its original size on the second node.
![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_1e5a399d06b9.jpg)

Node options:
![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_56c61e9ea55b.jpg)

* image<sup>5</sup>: The input image.
* mask<sup>2,5</sup>: Mask of the image.
* original_size: Optional input, used to restore the image to its original size.
* scale: Scale factor. Ignored when original_size is provided or scale_by_longest_side is set to True.
* scale_by_longest_side: Allow scaling by the longest side.
* longest_side: When scale_by_longest_side is True, this value is used as the longest side of the image. Ignored if original_size is provided.

Outputs:
* image: The scaled image.
* mask: The scaled mask, if a mask was input.
* original_size: The original size data of the image, used for restoration in subsequent nodes.
* width: Width of the output image.
* height: Height of the output image.

### <a id="table1">ImageScaleRestoreV2</a>

The V2 upgrade of ImageScaleRestore.

Node options:
![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_38a311425168.jpg)
The following changes are made on the basis of ImageScaleRestore:

* scale_by: Allow scaling by the specified long side, short side, width, height, or total pixels. When this option is set to by_scale, the scale value is used; for the other options, the scale_by_length value is used.
* scale_by_length: This value is used as the length of the side specified by ```scale_by```.

### <a id="table1">ImageMaskScaleAs</a>

Scale an image or mask to the size of a reference image (or reference mask).
![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_c66b6e524782.jpg)

Node options:
![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_288b96cd91dc.jpg)

* scale_as<sup>*</sup>: The reference size. It can be an image or a mask.
* image: The image to scale. Optional input; if absent, a black image is output.
* mask: The mask to scale. Optional input; if absent, a black mask is output.
* fit: Aspect-ratio fit mode. When the original aspect ratio does not match the scaled size, three modes are available:
  _letterbox_ keeps the full picture and fills the empty areas with black;
  _crop_ keeps the full short side and crops the excess of the long side;
  _fill_ does not keep the aspect ratio and fills the frame with the given width and height.
* method: Scaling sampling method, including lanczos, bicubic, hamming, bilinear, box, and
nearest.

<sup>*</sup>Only image and mask inputs are allowed. Forcing other input types will cause the node to report an error.

Outputs:

* image: The scaled image, if an image was input.
* mask: The scaled mask, if a mask was input.
* original_size: The original size data of the image, used for restoration in subsequent nodes.
* width: Width of the output image.
* height: Height of the output image.

### <a id="table1">ImageMaskScaleAsV2</a>
The upgraded ImageMaskScaleAs adds a background color setting on the basis of the original node.
![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_d9e3667db27d.jpg)

New option:
![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_ef95dba21faa.jpg)
* background_color: Background color for the extended areas.


### <a id="table1">ImageScaleByAspectRatio</a>

Scale an image or mask by aspect ratio. The scaled size can be rounded to a multiple of 8 or 16, and scaling by the long side is supported.
![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_c9b9ab2b7453.jpg)

Node options:
![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_09e84575d779.jpg)

* aspect_ratio: Several common frame ratios are provided. You can also choose "original" to keep the original ratio, or "custom" to define your own.
* proportional_width: Width of the ratio. Ignored if the aspect_ratio option is not "custom".
* proportional_height: Height of the ratio. Ignored if the aspect_ratio option is not "custom".
* fit: Aspect-ratio fit mode. When the original aspect ratio does not match the scaled size, three modes are available:
  _letterbox_ keeps the full picture and fills the empty areas with black;
  _crop_ keeps the full short side and crops the excess of the long side;
  _fill_ does not keep the aspect ratio and fills the frame with the given width and height.
* method: Scaling sampling method, including lanczos, bicubic, hamming, bilinear, box, and nearest.
* round_to_multiple: Round to a multiple. For example, setting it to 8 forces both width and height to be multiples of 8.
* scale_by_longest_side: Allow scaling by the long side.
* longest_side: When scale_by_longest_side is True, this value is used as the long side of the image. Ignored if an original_size input is provided.

Outputs:

* image: The scaled image, if an image was input.
* mask: The scaled mask, if a mask was input.
* original_size: The original size data of the image, used for restoration in subsequent nodes.
* width: Width of the output image.
* height: Height of the output image.

### <a id="table1">ImageScaleByAspectRatioV2</a>

The V2 upgrade of ImageScaleByAspectRatio.

Node options:
![image](https://oss.gittoolsai.com/images/chflame163_ComfyUI_LayerStyle_readme_8c349d8198dd.jpg)
ImageScaleByAspectRatio 的基础上进行了以下更改：\n\n* scale_to_side：允许按指定的长边、短边、宽度、高度或总像素数进行缩放。\n* scale_to_length：这里的数值用作指定边的长度或 total pixels（千像素）来决定 scale_to_side。\n* background_color\u003Csup>4\u003C\u002Fsup>：背景的颜色。\n\n### \u003Ca id=\"table1\">ICMask\u003C\u002Fa>\n用于生成上下文相关的图像和掩码。代码来自 [lrzjason\u002FComfyui-In-Context-Lora-Utils](https:\u002F\u002Fgithub.com\u002Flrzjason\u002FComfyui-In-Context-Lora-Utils)，感谢原作者 @小志Jason。\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_8652fa8cf3ce.jpg)\n\n节点选项：   \n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_a617339b3544.jpg)    \n\n* first_image：用作上下文参考的图像。\n* first_mask：可选输入，上下文参考图像的掩码。\n* second_image：用于重新绘制图像。\n* second_mask：用于重新绘制图像的掩码。\n* patch_mode：拼接模式有三种：auto、patch_right 和 patch_bottom。\n* output_length：输出图像的长边尺寸。\n* patch_color：填充颜色。\n\n输出：\n* image：输出的图像。\n* mask：输出的掩码。\n* icmask_data：图像的拼接信息，用于后续节点的自动裁剪。\n\n### \u003Ca id=\"table1\">ICMaskCropBack\u003C\u002Fa>\n对 ICMask 生成的图像推理结果进行裁剪。\n\n节点选项：   \n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_228540a2dc49.jpg)    \n\n* image：输入的图像。\n* icmask_data：由 ICMask 节点输出的拼接信息。\n\n### \u003Ca id=\"table1\">FluxKontextImageScale\u003C\u002Fa>\n基于官方节点的修改，用于将图像缩放到更适合 Flux Kontext 的尺寸。对于不同宽高比的图像，缩放比例会相应调整，以确保所有信息都能保留。  \n以下示例使用该节点来完整保留 4K 分辨率图像的所有信息，通过 FluxKontext 模型推理改变图像背景，再利用 [HLFrequencyDetailRestore](#HLFrequencyDetailRestore) 节点实现 4K 图像质量的细节恢复。\n\u003Cfont size=\"1\">*此工作流（flux_kontext_image_scale_example.json）位于工作流目录中。\u003C\u002Ffont>\u003Cbr \u002F> \n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_d7a576ecf9b6.jpg)\n\n节点选项：   \n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_fed9f2f2f3b0.jpg)    \n\n* image：输入图像。\n* method：缩放采样方法，包括 lanczos、bicubic、hamming、bilinear、box 和 nearest。\n\n输出：\n* image：输出图像。\n\n### \u003Ca 
id=\"table1\">VQAPrompt\u003C\u002Fa>\n\n使用 blip-vqa 模型进行视觉问答。该节点的部分代码参考自 [celoron\u002FComfyUI-VisualQueryTemplate](https:\u002F\u002Fgithub.com\u002Fceloron\u002FComfyUI-VisualQueryTemplate)，感谢原作者。   \n*请从 [百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1ILREVgM0eFJlkWaYlKsR0g?pwd=yw75) 或 [huggingface.co\u002FSalesforce\u002Fblip-vqa-capfilt-large](https:\u002F\u002Fhuggingface.co\u002FSalesforce\u002Fblip-vqa-capfilt-large\u002Ftree\u002Fmain) 以及 [huggingface.co\u002FSalesforce\u002Fblip-vqa-base](https:\u002F\u002Fhuggingface.co\u002FSalesforce\u002Fblip-vqa-base\u002Ftree\u002Fmain) 下载模型文件，并将其复制到 ```ComfyUI\\models\\VQA``` 文件夹中。\n\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_26ee7f6e1333.jpg) \n\n节点选项：\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_8f24d5ba5774.jpg)\n\n* image：图像输入。\n* vqa_model：VQA 模型输入，来自 [LoadVQAModel](#LoadVQAModel) 节点。\n* question：任务文本输入。单个问题用大括号“{}”括起来，问题的答案将在输出文本中替换其原始位置。多个问题可以用大括号在一次问答中定义。\n例如，对于一张放置在场景中的物品图片，问题可以是：“{object color} {object} on the {scene}”。\n\n### \u003Ca id=\"table1\">LoadVQAModel\u003C\u002Fa>\n\n加载 blip-vqa 模型。    \n\n节点选项：\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_fe5c9fda8276.jpg)\n\n* model：目前有两个模型可供选择：“blip-vqa-base”和“blip-vqa-capfilt-large”。\n* precision：模型精度有两种选项：“fp16”和“fp32”。\n* device：模型运行设备有两种选项：“cuda”和“cpu”。\n\n### \u003Ca id=\"table1\">ImageShift\u003C\u002Fa>\n\n对图像进行位移。该节点支持输出位移接缝蒙版，便于创建连续纹理。\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_6b39ef79d5f7.jpg)    \n\n节点选项：   \n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_bd188354f0f0.jpg)    \n\n* image\u003Csup>5\u003C\u002Fsup>：输入图像。\n* mask\u003Csup>2,5\u003C\u002Fsup>：图像的蒙版。\n* shift_x：水平位移距离。\n* shift_y：垂直位移距离。\n* cyclic：超出边界的部分是否循环。\n* 
background_color\u003Csup>4\u003C\u002Fsup>：背景颜色。如果 cyclic 设置为 False，则此处设置的背景颜色将被使用。\n* border_mask_width：边框蒙版宽度。\n* border_mask_blur：边框蒙版模糊度。\n* [note](#notes)\n\n### \u003Ca id=\"table1\">ImageBlend\u003C\u002Fa>\n\n一个用于合成图层图像和背景图像的简单节点，提供多种混合模式供选择，并可设置透明度。\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_9ef4ac3780d9.jpg)    \n\n节点选项：\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_c28bafb50f06.jpg)    \n\n* background_image\u003Csup>1\u003C\u002Fsup>：背景图像。\n* layer_image\u003Csup>1\u003C\u002Fsup>：用于合成的图层图像。\n* layer_mask\u003Csup>1,2\u003C\u002Fsup>：图层图像的蒙版。\n* invert_mask：是否反转蒙版。\n* blend_mode\u003Csup>3\u003C\u002Fsup>：混合模式。\n* opacity：混合的透明度。\n* [note](#notes)\n\n### \u003Ca id=\"table1\">ImageReel\u003C\u002Fa>\n\n在一个卷轴中显示多张图像。可以为卷轴中的每张图像添加文字说明。通过使用 [ImageReelComposite](#ImageReelComposite) 节点，可以将多个卷轴合并成一张图像。\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_bee3c2ee1e72.jpg)    \n\n节点选项：   \n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_2e77dd49a338.jpg)    \n\n* image1：第一张图像，必须输入。\n* image2：第二张图像，可选输入。\n* image3：第三张图像，可选输入。\n* image4：第四张图像，可选输入。\n* image1_text：第一张图像的文字说明。\n* image2_text：第二张图像的文字说明。\n* image3_text：第三张图像的文字说明。\n* image4_text：第四张图像的文字说明。\n* reel_height：卷轴的高度。\n* border：卷轴中图像的边框宽度。\n\n输出：\n\n* reel：[ImageReelComposite](#ImageReelComposite) 节点的输入卷轴。\n\n### \u003Ca id=\"table1\">ImageReelComposite\u003C\u002Fa>\n\n将多个卷轴合并成一张图像。\n\n节点选项：\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_92d9ef8a0283.jpg)    \n\n* reel_1：第一个卷轴，必须输入。\n* reel_2：第二个卷轴，可选输入。\n* reel_3：第三个卷轴，可选输入。\n* reel_4：第四个卷轴，可选输入。\n* font_file\u003Csup>**\u003C\u002Fsup>：此处列出字体文件夹中可用的字体文件，所选字体文件将用于生成图像。\n* border：卷轴的边框宽度。\n* color_theme：输出图像的主题颜色。            \n  \u003Csup>**\u003C\u002Fsup>字体文件夹在 ```resource_dir.ini``` 
中定义，该文件位于插件根目录，默认名称为 ```resource_dir.ini.example```。首次使用时，需将文件后缀改为 ```.ini```。\n  打开文本编辑软件，找到以“FONT_dir=”开头的行，在“=”后面输入自定义的文件夹路径名。  \n  支持在 ```resource-dir.ini``` 中定义多个文件夹，用逗号、分号或空格分隔。  \n  插件初始化时，该文件夹中的所有字体文件都将被收集并显示在节点列表中。  \n  如果 ini 中设置的文件夹无效，则启用插件自带的字体文件夹。\n\n### \u003Ca id=\"table1\">ImageOpacity\u003C\u002Fa>\n\n调整图像透明度\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_a917a1ae3a8c.jpg)    \n\n节点选项：   \n\n* image\u003Csup>5\u003C\u002Fsup>：图像输入，支持 RGB 和 RGBA。如果是 RGB，则会自动为整张图像添加 alpha 通道。\n* mask\u003Csup>2,5\u003C\u002Fsup>：蒙版输入。\n* invert_mask：是否反转蒙版。\n* opacity：图像的透明度。\n* [note](#notes)\n\n### \u003Ca id=\"table1\">ColorPicker\u003C\u002Fa>\n\n源自 [mtb nodes](https:\u002F\u002Fgithub.com\u002FmelMass\u002Fcomfy_mtb) 的网页扩展修改版本。可在调色板上选择颜色并输出 RGB 值，感谢原作者。\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_a97d8ebbcb00.jpg)    \n\n节点选项：\n\n* mode：输出格式有十六进制 (HEX) 和十进制 (DEC) 两种。  \n\n输出类型： \n\n* value：字符串格式。\n\n### \u003Ca id=\"table1\">RGBValue\u003C\u002Fa>\n\n将颜色值以单独的 R、G、B 三个十进制数值形式输出。支持 ColorPicker 节点输出的 HEX 和 DEC 格式。\n\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_bf90f6a42074.jpg)    \n\n节点选项：\n\n* color_value：支持十六进制 (HEX) 或十进制 (DEC) 颜色值，应为字符串或元组类型。强制输入其他类型会导致错误。\n\n### \u003Ca id=\"table1\">HSVValue\u003C\u002Fa>\n\n将颜色值以 H、S、V 的单独十进制数值形式输出（最大值为 255）。支持 ColorPicker 节点输出的 HEX 和 DEC 格式。\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_986b8b4d89ea.jpg)    \n\n节点选项：\n\n* color_value：支持十六进制 (HEX) 或十进制 (DEC) 颜色值，应为字符串或元组类型。强制输入其他类型会导致错误。\n\n### \u003Ca id=\"table1\">GrayValue\u003C\u002Fa>\n\n根据颜色值输出灰度值。支持输出 256 级和 100 级灰度值。\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_6570e5fe8281.jpg)    \n\n节点选项：\n\n* color_value：支持十六进制 (HEX) 或十进制 (DEC) 颜色值，应为字符串或元组类型。强制输入其他类型会导致错误。\n\n输出：\n\n* gray(256_level)：256 
级灰度值。整数类型，范围 0~255。\n* gray(100_level)：100 级灰度值。整数类型，范围 0~100。\n\n\n### \u003Ca id=\"table1\">GetMainColors\u003C\u002Fa>\n\n获取图像的主要颜色。可以获取 5 种颜色。\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_68c7bf3b293c.jpg)\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_9e03ff9e2190.jpg)\n\n节点选项：\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_147da81ae016.jpg)    \n\n* image：图像输入。\n* k_means_algorithm：K-Means 算法选项。“lloyd”是标准的 K-Means 算法，“elkan”是三角不等式算法，适用于较大的图像。\n\n输出：\n\n* preview_image：5 种主要颜色的预览图像。\n* color_1~color_5：颜色值输出。以 HEX 格式的 RGB 字符串形式输出。\n\n### \u003Ca id=\"table1\">GetMainColorsV2\u003C\u002Fa>\n在 [GetMainColors](#GetMainColors) 节点的基础上增加按颜色面积排序的功能，并在预览图像中显示颜色值和颜色面积。\n这部分代码由 @ HL 改进，感谢！\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_d8753263d694.jpg)\n\n\n### \u003Ca id=\"table1\">ColorName\u003C\u002Fa>\n\n根据颜色值，在调色板中输出最相似的颜色名称。\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_70d240125602.jpg)\n\n节点选项：\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_89b481ea3aa8.jpg)    \n\n* color：颜色值输入，采用 HEX 格式的 RGB 字符串格式。\n* palette：调色板。提供 6 种颜色映射表，包括 xkcd、wiki_color、flux_sdxl、css4、css3 和 html4。\n\n输出：\n\n* color_name：字符串形式的颜色名称。\n\n### \u003Ca id=\"table1\">NameToColor\u003C\u002Fa>\n根据颜色名称输出颜色图像和颜色值。\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_6e318d9250ac.jpg)\n\n节点选项：\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_ce73465d94c8.jpg)\n* size_as\u003Csup>*\u003C\u002Fsup>：在此输入图像或掩码，以根据其尺寸生成图像。请注意，此输入优先于其他尺寸设置。\n* color_name：要描述的颜色名称。\n* palette：调色板。提供 6 种颜色映射表，包括 xkcd、wiki_color、flux_sdxl、css4、css3 和 html4。\n* in_palette_only：设置为仅从调色板中输出颜色。如果设置为 
True，则仅在当前调色板中搜索。如果没有匹配的名称，则输出 default_color。\n如果设置为 False，则搜索所有调色板。如果在所有调色板中都没有匹配的名称，则输出名称最接近的颜色。\n* default_color：默认颜色。如果没有找到匹配的名称，则输出该颜色。\n* size\u003Csup>**\u003C\u002Fsup>：预设尺寸。用户可以自定义预设尺寸。如果有 size_as 输入，则此选项将被忽略。\n* custom_width：图像宽度。当尺寸设置为“custom”时有效。如果有 size_as 输入，则此选项将被忽略。\n* custom_height：图像高度。当尺寸设置为“custom”时有效。如果有 size_as 输入，则此选项将被忽略。\n\n\u003Csup>*\u003C\u002Fsup>仅限于输入图像和掩码。强制集成其他类型的输入会导致节点错误。\n\u003Csup>**\u003C\u002Fsup>预设尺寸定义在 ```custom_size.ini``` 文件中，该文件位于插件根目录下，默认名为 ```custom_size.ini.example```。首次使用时，需将文件后缀改为 ```.ini```。用文本编辑软件打开。每行代表一个尺寸，第一个值为宽度，第二个值为高度，中间用小写的“x”分隔。为避免错误，请勿输入额外字符。\n\n输出：\n* image：输出的颜色图像。\n* color：颜色值输出，采用 HEX 格式的 RGB 字符串格式。\n\n\n### \u003Ca id=\"table1\">ExtendCanvas\u003C\u002Fa>\n\n扩展画布\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_3f08ba522bba.jpg)    \n\n节点选项：\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_890f9a3953a3.jpg)    \n\n* invert_mask：是否反转掩码。\n* top：顶部扩展值。\n* bottom：底部扩展值。\n* left：左侧扩展值。\n* right：右侧扩展值。\n* color；画布颜色。\n\n### \u003Ca id=\"table1\">ExtendCanvasV2\u003C\u002Fa>\n\nExtendCanvas 的 V2 升级版。\n\n在 ExtendCanvas 的基础上，将颜色修改为字符串类型，并支持外部 ```ColorPicker``` 输入，同时支持负值输入，这意味着图像会被裁剪。\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_13ae8c0d7467.jpg)    \n\n### XY 到 \u003Ca id=\"table1\">Percent\u003C\u002Fa>\n\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_2834f4b3b690.jpg)    \n将绝对坐标转换为百分比坐标。\n\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_bbe1114bcfd8.jpg)    \n节点选项：\n\n* x：X 值。\n* y：Y 值。\n\n### \u003Ca id=\"table1\">LayerImageTransform\u003C\u002Fa>\n\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_434e9bcd3c86.jpg)    
\n该节点用于单独变换图层图像，可以在不改变图像整体大小的情况下调整大小、旋转、宽高比以及镜像翻转。\n\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_6b90fdbd49ab.jpg)    \n节点选项：\n\n* x：X 值。\n* y：Y 值。\n* mirror：镜像翻转。提供水平翻转和垂直翻转两种模式。\n* scale：图层放大倍数，1.0 表示原始大小。\n* aspect_ratio：图层宽高比。1.0 为原始比例，大于 1.0 表示拉伸，小于 1.0 表示压缩。\n* rotate：图层旋转角度。\n* method：图层放大和旋转的采样方法，包括 lanczos、bicubic、hamming、bilinear、box 和 nearest。不同的采样方法会影响合成图像的质量和处理时间。\n* anti_aliasing：抗锯齿设置，范围 0 到 16，数值越大，锯齿越不明显。过高的数值会显著降低节点的处理速度。\n\n### \u003Ca id=\"table1\">LayerMaskTransform\u003C\u002Fa>\n\n与 LayerImageTransform 节点类似，此节点用于单独变换图层蒙版，可以在不改变蒙版尺寸的情况下进行缩放、旋转、更改宽高比以及镜像翻转。\n\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_643f554f6cd2.jpg)    \n节点选项：\n\n* x：X 轴值。\n* y：Y 轴值。\n* mirror：镜像翻转。提供水平翻转和垂直翻转两种模式。\n* scale：图层放大倍数，1.0 表示原始大小。\n* aspect_ratio：图层宽高比。1.0 为原始比例，大于该值表示拉伸，小于该值表示压缩。\n* rotate：图层旋转角度。\n* method：图层放大和旋转的采样方法，包括 lanczos、bicubic、hamming、bilinear、box 和 nearest。不同的采样方法会影响合成图像的质量和处理时间。\n* anti_aliasing：抗锯齿，取值范围为 0 到 16，数值越大，锯齿现象越不明显。但过高的值会显著降低节点的处理速度。\n\n### \u003Ca id=\"table1\">ColorImage\u003C\u002Fa>\n\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_840e1bb02086.jpg)    \n生成指定颜色和尺寸的图像。\n\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_af384cc7b105.jpg)    \n节点选项：\n\n* width：图像宽度。\n* height：图像高度。\n* color\u003Csup>4\u003C\u002Fsup>：图像颜色。\n\n### \u003Ca id=\"table1\">ColorImageV2\u003C\u002Fa>\n\nColorImage 的升级版本 V2。\n\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_bfff57fe710a.jpg)    \n在 ColorImage 的基础上进行了以下改动：\n\n* size_as\u003Csup>*\u003C\u002Fsup>：在此输入图像或蒙版，以根据其尺寸生成图像。请注意，此输入优先于其他尺寸设置。\n* size\u003Csup>**\u003C\u002Fsup>：预设尺寸。用户可以自定义预设。如果已输入 size_as，则此选项将被忽略。\n* custom_width：图像宽度。仅当 size 设置为“custom”时有效。如果已输入 size_as，则此选项将被忽略。\n* custom_height：图像高度。仅当 size 设置为“custom”时有效。如果已输入 
size_as，则此选项将被忽略。\n\n\u003Csup>*\u003C\u002Fsup>仅限于输入图像和蒙版。强行集成其他类型的输入会导致节点报错。\n\u003Csup>**\u003C\u002Fsup>预设尺寸定义在 ```custom_size.ini``` 文件中，该文件位于插件根目录下，默认名为 ```custom_size.ini.example```。首次使用时，需将文件后缀改为 ```.ini```。用文本编辑软件打开，每行代表一个尺寸，第一个值为宽度，第二个值为高度，中间用小写的“x”分隔。为避免错误，请勿输入额外字符。\n\n### \u003Ca id=\"table1\">GradientImage\u003C\u002Fa>\n\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_cebf2aefd0cd.jpg)    \n生成具有指定尺寸和颜色渐变的图像。\n\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_9c8b9822de6e.jpg)    \n节点选项：\n\n* width：图像宽度。\n* height：图像高度。\n* angle：渐变角度。\n* start_color\u003Csup>4\u003C\u002Fsup>：起始颜色。\n* end_color\u003Csup>4\u003C\u002Fsup>：结束颜色。\n\n### \u003Ca id=\"table1\">GradientImageV2\u003C\u002Fa>\n\nGradientImage 的升级版本 V2。\n\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_65750c3f58f0.jpg)    \n在 GradientImage 的基础上进行了以下改动：\n\n* size_as\u003Csup>*\u003C\u002Fsup>：在此输入图像或蒙版，以根据其尺寸生成图像。请注意，此输入优先于其他尺寸设置。\n* size\u003Csup>**\u003C\u002Fsup>：预设尺寸。用户可以自定义预设。如果已输入 size_as，则此选项将被忽略。\n* custom_width：图像宽度。仅当 size 设置为“custom”时有效。如果已输入 size_as，则此选项将被忽略。\n* custom_height：图像高度。仅当 size 设置为“custom”时有效。如果已输入 size_as，则此选项将被忽略。\n\n\u003Csup>*\u003C\u002Fsup>仅限于输入图像和蒙版。强行集成其他类型的输入会导致节点报错。\n\u003Csup>**\u003C\u002Fsup>预设尺寸定义在 ```custom_size.ini``` 文件中，该文件位于插件根目录下，默认名为 ```custom_size.ini.example```。首次使用时，需将文件后缀改为 ```.ini```。用文本编辑软件打开，每行代表一个尺寸，第一个值为宽度，第二个值为高度，中间用小写的“x”分隔。为避免错误，请勿输入额外字符。\n\n### \u003Ca id=\"table1\">圆角矩形\u003C\u002Fa>\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_398038b873cb.jpg)    \n生成圆角矩形和遮罩。\n\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_67ff08a9da40.jpg)    \n节点选项：\n* image：待处理的图像。\n* object_mask：可选输入。该遮罩可用于生成圆角矩形区域。如果已提供 ```crop-box``` 输入，则此选项将被忽略。\n* crop_box：可选输入。可通过裁剪区域生成一个圆角矩形区域。\n* 
rounded_rect_radius：圆角半径。取值范围为0-100，数值越大，圆角越明显。\n* anti_aliasing：抗锯齿效果，取值范围为0-16，数值越大，锯齿现象越不明显。但过高的值会显著降低节点的处理速度。\n* top：圆角矩形顶部边距，以图像高度的百分比表示，允许使用负值。如果已提供 crop_box 或 object_mask 输入，则此选项将被忽略。\n* bottom：圆角矩形底部边距，以图像高度的百分比表示，允许使用负值。如果已提供 crop_box 或 object_mask 输入，则此选项将被忽略。\n* left：圆角矩形左侧边距，以图像宽度的百分比表示，允许使用负值。如果已提供 crop_box 或 object_mask 输入，则此选项将被忽略。\n* right：圆角矩形右侧边距，以图像宽度的百分比表示，允许使用负值。如果已提供 crop_box 或 object_mask 输入，则此选项将被忽略。\n* detect：当输入 object_mask 时，用于检测遮罩区域的方法。```min_bounding_rect``` 表示最小外接矩形（块状），```max_inscribed_rect``` 表示最大内接矩形（块状），```mask-area``` 表示用于遮罩像素的有效区域。\n* obj_ext_top：当输入 object_mask 或 crop-box 时，圆角矩形区域顶部向外扩展，以区域高度的百分比表示，允许使用负值。\n* obj_ext_bottom：当输入 object_mask 或 crop-box 时，圆角矩形区域底部向外扩展，以区域高度的百分比表示，允许使用负值。\n* obj_ext_left：当输入 object_mask 或 crop-box 时，圆角矩形区域左侧向外扩展，以区域宽度的百分比表示，允许使用负值。\n* obj_ext_right：当输入 object_mask 或 crop-box 时，圆角矩形区域右侧向外扩展，以区域宽度的百分比表示，允许使用负值。\n\n\n### \u003Ca id=\"table1\">简单文本图像\u003C\u002Fa>\n\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_dcebc13bcbea.jpg)    \n根据文本生成简单的排版图像和遮罩。该节点参考了 [ZHO-ZHO-ZHO\u002FComfyUI-Text_Image-Composite](https:\u002F\u002Fgithub.com\u002FZHO-ZHO-ZHO\u002FComfyUI-Text_Image-Composite) 的部分功能和代码，感谢原作者。\n\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_467d0ada7804.jpg)    \n节点选项：\n\n* size_as\u003Csup>*\u003C\u002Fsup>：此处的输入图像或遮罩将按照其尺寸生成输出图像和遮罩。此输入优先于下方的宽度和高度设置。\n* font_file\u003Csup>**\u003C\u002Fsup>：此处列出字体文件夹中可用的字体文件，所选字体将用于生成图像。\n* align：对齐方式选项。共有三种：居中、左对齐和右对齐。\n* char_per_line：每行字符数，超出部分将自动换行。\n* leading：行间距。\n* font_size：字体大小。\n* text_color：文本颜色。\n* stroke_width：描边宽度。\n* stroke_color：描边颜色。\n* x_offset：文本位置的水平偏移量。\n* y_offset：文本位置的垂直偏移量。\n* width：图像宽度。如果已提供 size_as 输入，则此设置将被忽略。\n* height：图像高度。如果已提供 size_as 输入，则此设置将被忽略。\n\n\u003Csup>*\u003C\u002Fsup>仅限于输入图像和遮罩。强制集成其他类型的输入会导致节点报错。\n\n\u003Csup>**\u003C\u002Fsup>字体文件夹在 ```resource_dir.ini``` 中定义，该文件位于插件根目录下，默认名称为 
```resource_dir.ini.example```。首次使用时，需将文件后缀改为 ```.ini```。打开文本编辑软件，找到以“FONT_dir=”开头的行，在“=”号后输入自定义的文件夹路径名。  \n支持在 ```resource-dir.ini``` 中定义多个文件夹，各文件夹之间可用逗号、分号或空格分隔。  \nComfyUI 初始化时，该文件夹中的所有字体文件都将被收集并显示在节点列表中。  \n如果 ini 文件中设置的文件夹无效，则将启用插件自带的字体文件夹。\n\n### \u003Ca id=\"table1\">TextImage\u003C\u002Fa>\n\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_049861627710.jpg)    \n根据文本生成图像和掩码。支持调整单词和行之间的间距、水平和垂直方向的调整，还可以为每个字符设置随机变化，包括大小和位置。\n\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_7dd51eb8e0c2.jpg)    \n节点选项：\n\n* size_as\u003Csup>*\u003C\u002Fsup>：此处输入的图像或掩码将根据其尺寸生成输出图像和掩码。此输入优先于下方的宽度和高度。\n* font_file\u003Csup>**\u003C\u002Fsup>：此处列出字体文件夹中可用的字体文件，所选字体文件将用于生成图像。\n* spacing：单词间距。该值以像素为单位。\n* leading：行间距。该值以像素为单位。\n* horizontal_border：侧边距。如果文本是水平的，则为左 margin；如果是垂直的，则为右 margin。该值表示百分比，例如 50 表示起始点位于两侧的中心。\n* vertical_border：上边距。该值表示百分比，例如 10 表示起始点距离顶部 10%。\n* scale：文本的整体大小。文本的初始大小会根据屏幕尺寸和文本内容自动计算，默认情况下最长的行或列会适应图像的宽度或高度。在此处调整数值将整体缩放文本。该值表示百分比，例如 60 表示缩放到 60%。\n* variation_range：字符随机变化的范围。当该值大于 0 时，字符会在大小和位置上发生随机变化，数值越大，变化幅度越大。\n* variation_seed：随机种子。固定此值后，每次生成的单个字符变化将保持不变。\n* layout：文本布局。可选择水平和垂直两种方式。\n* width：图像的宽度。如果有 size_as 输入，则此设置将被忽略。\n* height：图像的高度。如果有 size_as 输入，则此设置将被忽略。\n* text_color：文本颜色。\n* background_color\u003Csup>4\u003C\u002Fsup>：背景颜色。\n\n\u003Csup>*\u003C\u002Fsup>仅限于输入图像和掩码。强制集成其他类型的输入会导致节点错误。\n\n\u003Csup>**\u003C\u002Fsup>字体文件夹在 ```resource_dir.ini``` 中定义，该文件位于插件根目录下，默认名称为 ```resource_dir.ini.example```。首次使用时，需将文件后缀改为 ```.ini```。打开文本编辑软件，找到以 \"FONT_dir=\" 开头的行，在 \"=\" 后输入自定义的文件夹路径名。  \n```resource-dir.ini``` 支持定义多个文件夹，用逗号、分号或空格分隔。  \nComfyUI 初始化时，该文件夹中的所有字体文件会被收集并显示在节点列表中。若 ini 文件中设置的文件夹无效，则会启用插件自带的字体文件夹。\n\n### \u003Ca id=\"table1\">TextImageV2\u003C\u002Fa>\n\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_ba69500abdbe.jpg)    \n\n此节点由 [heshengtao](https:\u002F\u002Fgithub.com\u002Fheshengtao) 
合并而来。该 PR 基于 TextImage 节点修改了图像文本节点的缩放方式。字体间距随缩放调整，坐标也不再以文本的左上角为基准，而是以整行文本的中心点为基准。感谢作者的贡献。\n\n### \u003Ca id=\"table1\">ImageChannelSplit\u003C\u002Fa>\n\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_408a947d6b0a.jpg)    \n将图像通道拆分为单独的图像。\n\n节点选项：\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_bbaba8b94c50.jpg)    \n\n* mode：通道模式，包括 RGBA、YCbCr、LAB 和 HSV。\n\n### \u003Ca id=\"table1\">ImageChannelMerge\u003C\u002Fa>\n\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_b39210853240.jpg)    \n将各个通道图像合并为一张图像。\n\n节点选项：\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_44f53d552fd0.jpg)    \n\n* mode：通道模式，包括 RGBA、YCbCr、LAB 和 HSV。\n\n### \u003Ca id=\"table1\">ImageRemoveAlpha\u003C\u002Fa>\n\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_d2fecb85712d.jpg)    \n移除图像的 alpha 通道并将其转换为 RGB 模式。可以选择填充背景并设置背景颜色。\n\n节点选项：\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_a0e83d0378b6.jpg)    \n\n* RGBA_image：输入图像支持 RGBA 或 RGB 模式。\n* mask：可选的输入掩码。如果有输入掩码，则优先使用掩码，忽略 RGBA_image 自带的 alpha 通道。\n* fill_background：是否填充背景。\n* background_color\u003Csup>4\u003C\u002Fsup>：背景颜色。\n\n### \u003Ca id=\"table1\">ImageCombineAlpha\u003C\u002Fa>\n\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_a68033baa989.jpg)    \n将图像和掩码合并为包含 alpha 通道的 RGBA 模式图像。\n\n### \u003Ca id=\"table1\">HLFrequencyDetailRestore\u003C\u002Fa>\n\n通过低频滤波并保留高频来恢复图像细节。与 [kijai 的 DetailTransfer](https:\u002F\u002Fgithub.com\u002Fkijai\u002FComfyUI-IC-Light) 相比，此节点在保留细节的同时更好地融入环境。\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_c28f9e720cb1.jpg)    \n\n节点选项：  
\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_4ebf3af7d82c.jpg)    \n\n* image：背景图像输入。\n* detail_image：细节图像输入。\n* mask：可选输入，如果有掩码输入，则仅恢复掩码部分的细节。\n* keep_high_freq：保留的高频范围。数值越大，保留的高频细节越丰富。\n* erase_low_freq：擦除的低频范围。数值越大，擦除的低频范围越广。\n* mask_blur：掩码边缘模糊。仅在有掩码输入时有效。\n\n### \u003Ca id=\"table1\">GetImageSize\u003C\u002Fa>\n\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_52afb6e54124.jpg)    \n获取图像的宽度和高度。\n\n输出：\n\n* width：图像的宽度。\n* height：图像的高度。\n* original_size：图像的原始尺寸数据，用于后续节点的恢复。\n\n### \u003Ca id=\"table1\">AnyRerouter\u003C\u002Fa>\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_4f209dbaf028.jpg)    \n用于重新路由任何类型的数据，此节点允许任意类型的输入。\n\n### \u003Ca id=\"table1\">ImageHub\u003C\u002Fa>\n\n从多个输入图像和掩码中切换输出，支持 9 组输入。所有输入项均为可选。如果某组输入中只有图像或掩码，则缺失的部分将以 None 输出。\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_7dfe4ad7006b.jpg)    \n\n节点选项：  \n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_5a2e3fecf3bb.jpg)    \n\n* output：切换输出。该值对应相应的输入组。当 ```random-output``` 选项为 True 时，此设置将被忽略。\n* random_output：当此选项为 True 时，```output``` 设置将被忽略，并在所有有效输入中随机输出一组。\n\n### \u003Ca id=\"table1\">BatchSelector\u003C\u002Fa>\n\n从批量图像或掩码中提取指定的图像或掩码。\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_7e3d03b63d89.jpg)    \n\n节点选项：  \n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_a92c07d7deb2.jpg)    \n\n* images: 批量图像输入。此输入为可选。\n* masks: 批量掩码输入。此输入为可选。\n* select: 选择批次索引值处的输出图像或掩码，其中0表示第一张图像。可以输入多个值，用任何非数字字符分隔，包括但不限于逗号、句点、分号、空格或字母，甚至中文字符。\n  注意：如果值超过批次大小，则会输出最后一张图像。如果没有对应的输入，则会输出一张64x64的空白图像或64x64的黑色掩码。\n\n### \u003Ca 
id=\"table1\">ChoiceTextPreset\u003C\u002Fa>\n从预设文本字典中选择输出。\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_4c36c6bd849c.jpg)    \n\n节点选项：  \n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_6f3b04b68492.jpg)    \n* text_preset: 预设文本。通过[TextPreseter](#TextPreseter)节点设置输出。\n* choice_title: 选择一个预设标题以输出相应的文本内容。\n* random_choice: 是否随机选择一个预设。\n* default: 默认输出文本，0对应第一段，依此类推。请注意，超出预设文本段落数量会导致错误。\n* seed: 用于随机选择的随机种子。\n* control_after_generate: 每次运行时是否更改种子。\n\n输出：\n* title: 文本段落标题。\n* content: 文本段落内容。\n\n### \u003Ca id=\"table1\">TextPreseter\u003C\u002Fa>\n预设文本字典，为每个节点设置一段文本，支持多个节点串联。\n\n节点选项：  \n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_61cd1f29a8ea.jpg)    \n* text_preset: 预设文本输入，可选输入。可以串联多个预设文本节点。\n* title: 文本段落标题。\n* content: 文本段落内容。\n\n### \u003Ca id=\"table1\">TextJoin\u003C\u002Fa>\n\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_ad0ec02b101d.jpg)    \n将多段文本合并为一段。\n\n\n### \u003Ca id=\"table1\">TextJoinV2\u003C\u002Fa>\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_e9b2fd43377b.jpg)    \n在[TextJoin](#TextJoin)的基础上增加了分隔符选项。\n\n### \u003Ca id=\"table1\">PrintInfo\u003C\u002Fa>\n\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_17607c2bd1c7.jpg)    \n用于辅助工作流调试。运行时，连接到该节点的任何对象的属性都会打印到控制台。\n\n该节点允许任何类型的输入。\n\n\n### \u003Ca id=\"table1\">TextBox\u003C\u002Fa>\n\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_b79932b26e14.jpg)    \n输出一个字符串。\n\n### \u003Ca id=\"table1\">String\u003C\u002Fa>\n\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_4ff23992cc64.jpg)    \n输出一个字符串。与TextBox相同。\n\n### \u003Ca 
id=\"table1\">Integer\u003C\u002Fa>\n\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_709f29bbab2d.jpg)    \n输出一个整数值。\n\n### \u003Ca id=\"table1\">Float\u003C\u002Fa>\n\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_906324007f6c.jpg)    \n输出一个浮点数值，精度为小数点后5位。\n\n### \u003Ca id=\"table1\">Boolean\u003C\u002Fa>\n\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_7491441fa044.jpg)    \n输出一个布尔值。\n\n### \u003Ca id=\"table1\">RandomGenerator\u003C\u002Fa>\n\n用于在指定范围内生成随机值，输出类型包括整数、浮点数和布尔值。支持批量和列表生成，并且可以根据图像批次生成一组不同的随机数列表。\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_15807506f422.jpg)    \n\n节点选项：\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_7c2fe7f3e463.jpg)  \n\n* image: 可选输入，根据图像数量按批次生成随机数列表。\n* min_value: 最小值。随机数将在最小值和最大值之间随机取值。\n* max_value: 最大值。随机数将在最小值和最大值之间随机取值。\n* float_decimal_places: 浮点值的精度。\n* fix_seed: 随机数种子是否固定。如果此选项固定，则每次生成的随机数将始终相同。\n\n输出：\nint: 整数随机数。\nfloat: 浮点随机数。\nbool: 布尔随机数。\n\n### \u003Ca id=\"table1\">RandomGeneratorV2\u003C\u002Fa>\n在[RandomGenerator](#RandomGenerator)的基础上，增加了最小随机范围和种子选项。\n\n节点选项：\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_c22930bb9a6c.jpg)  \n* image: 可选输入，根据图像数量按批次生成随机数列表。\n* min_value: 最小值。随机数将在最小值和最大值之间随机取值。\n* max_value: 最大值。随机数将在最小值和最大值之间随机取值。\n* least: 最小随机范围。随机数至少会取这个值。\n* float_decimal_places: 浮点值的精度。\n* seed: 随机数的种子。\n* control_after_generate: 种子更改选项。如果此选项固定，则每次生成的随机数将始终相同。\n\n输出：\nint: 整数随机数。\nfloat: 浮点随机数。\nbool: 布尔随机数。\n\n\n### \u003Ca id=\"table1\">NumberCalculator\u003C\u002Fa>\n\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_10c470bbd282.jpg)    \n对两个数值进行数学运算，并输出整数和浮点结果\u003Csup>*\u003C\u002Fsup>。支持的运算包括```+```, ```-```, ```*```, 
```\u002F```, ```**```, ```\u002F\u002F```, ```%```。\n\n\u003Csup>*\u003C\u002Fsup> 输入仅支持布尔值、整数和浮点数，强制输入其他数据会导致错误。\n\n### \u003Ca id=\"table1\">NumberCalculatorV2\u003C\u002Fa>\n\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_9491f9aa990b.jpg)  \nNumberCalculator的升级版增加了节点内的数值输入和平方根运算。平方根运算选项为```nth_root```。\n注意：输入具有优先级，当有输入时，节点内的数值将无效。\n\n### \u003Ca id=\"table1\">BooleanOperator\u003C\u002Fa>\n\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_aa8e575bfe09.jpg)    \n对两个数值进行布尔运算并输出结果\u003Csup>*\u003C\u002Fsup>。支持的运算包括```==```, ```!=```, ```and```, ```or```, ```xor```, ```not```, ```min```, ```max```。\n\n\u003Csup>*\u003C\u002Fsup> 输入仅支持布尔值、整数和浮点数，强制输入其他数据会导致错误。其中```and```运算会输出较大的数值，而```or```运算则会输出较小的数值。\n\n### \u003Ca id=\"table1\">BooleanOperatorV2\u003C\u002Fa>\n\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_a96a423fa2a9.jpg)  \nBoolean Operator的升级版增加了节点内的数值输入，并新增了大于、小于、大于等于以及小于等于的判断条件。\n注意：输入具有优先级，当有输入时，节点内的数值将无效。\n\n### \u003Ca id=\"table1\">StringCondition\u003C\u002Fa>\n\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_e4d7e5d3fd8b.jpg)    \n判断文本是否包含或不包含子字符串，并输出一个布尔值。\n\n节点选项：    \n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_7f116eea890c.jpg)  \n\n* text: 输入文本。\n* condition: 判断条件。```include```用于判断是否包含子字符串，```exclude```用于判断是否不包含，而```equal```则用于判断是否等于该子字符串。\n* sub_string: 子字符串。\n\n### \u003Ca id=\"table1\">CheckMask\u003C\u002Fa>\n\n检查掩码中是否包含足够多的有效区域，并输出一个布尔值。\n\n节点选项：    \n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_9d8ad3d8e120.jpg)    \n\n* white_point: 用于判断掩码是否有效的白点阈值。若超过此值，则视为有效。\n* area_percent: 有效区域的百分比。若有效区域的比例超过此值，则输出 True。\n\n### \u003Ca id=\"table1\">CheckMaskV2\u003C\u002Fa>\n\n在 CheckMask 的基础上，新增了 ```method``` 选项，允许选择不同的检测方法。同时，将 
```area_percent``` 改为保留两位小数的浮点数，从而能够检测更小的有效区域。\n\n节点选项：    \n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_f0fd2086fa23.jpg)    \n\n* method: 检测方法有两种，分别是 ```simple``` 和 ```detect_percent```。```simple``` 方法仅检测掩码是否完全黑色，而 ```detect_percent``` 方法则会检测有效区域所占的比例。\n\n### \u003Ca id=\"table1\">If\u003C\u002Fa>\n\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_ea6bbc9e9de2.jpg)    \n根据布尔类型的条件输入切换输出。可用于任何类型的数据切换，包括但不限于数值、字符串、图片、掩码、模型、潜在变量、管道流程等。\n\n节点选项：    \n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_3b66d71aa16e.jpg)    \n\n* if_condition: 条件输入。支持布尔值、整数、浮点数和字符串输入。当输入数值时，0 被判定为 False；当输入字符串时，空字符串也被判定为 False。\n* when_True: 当条件为 True 时，输出此项。\n* when_False: 当条件为 False 时，输出此项。\n\n### \u003Ca id=\"table1\">SwitchCase\u003C\u002Fa>\n\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_318b768b9315.jpg)    \n根据匹配的字符串切换输出。可用于任何类型的数据切换，包括但不限于数值、字符串、图片、掩码、模型、潜在变量、管道流程等。最多支持 3 组 case 切换。\n将 case 与 ```switch_condition``` 进行比较，若相同，则输出对应的输入。若有多个相同的 case，则按顺序优先输出。若无匹配的 case，则输出默认输入。\n请注意，字符串区分大小写，且中文和英文的全角与半角也需注意。\n\n节点选项：    \n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_2693aee5786e.jpg)    \n\n* input_default: 默认输出的输入项。此项为必填项。\n* input_1: 用于匹配 ```case_1``` 的输入项。此项为可选项。\n* input_2: 用于匹配 ```case_2``` 的输入项。此项为可选项。\n* input_3: 用于匹配 ```case_3``` 的输入项。此项为可选项。\n* switch_condition: 用于与 case 进行判断的字符串。\n* case_1: case_1 字符串。\n* case_2: case_2 字符串。\n* case_3: case_3 字符串。\n\n### \u003Ca id=\"table1\">QueueStop\u003C\u002Fa>\n\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_ca1836c3c2fa.jpg)    \n停止当前队列。在此节点执行时，队列将停止运行。上述工作流图显示，如果图像大于 1 兆像素，队列将停止执行。\n\n节点选项：    \n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_96cf787fcb9a.jpg)    \n\n* mode: 停止模式。若选择 
```stop```，则会根据输入条件决定是否停止；若选择 ```continue```，则忽略条件继续执行队列。\n* stop: 若为真，队列将停止；若为假，队列将继续执行。\n\n### \u003Ca id=\"table1\">PurgeVRAM\u003C\u002Fa>\n\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_696632fc2b8e.jpg)    \n清理 GPU VRAM 和系统 RAM。可接受任何类型的输入，执行到此节点时，VRAM 及 RAM 中的垃圾对象会被清理。通常放置在推理任务完成后的节点之后，例如 VAE 解码节点。\n\n节点选项：\n\n* purge_cache: 清理缓存。\n* purge_models: 卸载所有已加载的模型。\n\n### \u003Ca id=\"table1\">PurgeVRAMV2\u003C\u002Fa>\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_a41fa38e95e4.jpg)    \n与 PurgeVRAM 不同，此节点并非强制性操作，可在保持原有输出的同时处理流程中的任意输入，从而实现灵活的清理。\n\n### \u003Ca id=\"table1\">ImageTaggerSave\u003C\u002Fa>\n\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_919cce6d9ea0.jpg)  \n用于保存训练集图像及其文本标签的节点，其中图像文件和文本标签文件具有相同的文件名。可自定义保存图像的目录、为文件名添加时间戳、选择保存格式以及设置图像压缩率。\n*工作流 image_tagger_save.json 位于工作流目录中。\n\n节点选项：\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_2bff755625e0.jpg)      \n\n* image: 输入的图像。\n* tag_text: 图像的文本标签。\n* custom_path\u003Csup>*\u003C\u002Fsup>: 用户自定义目录，请以正确格式输入目录名称。若为空，则保存至 ComfyUI 的默认输出目录。\n* filename_prefix\u003Csup>*\u003C\u002Fsup>: 文件名前缀。\n* timestamp: 为文件名添加时间戳，可选择日期、精确到秒的时间以及精确到毫秒的时间。\n* format: 图像保存格式。目前支持 ```png``` 和 ```jpg``` 格式。需要注意的是，RGBA 模式的图片仅支持 png 格式。\n* quality: 图像质量，取值范围为 10-100，数值越高，图片质量越好，但文件体积也会相应增大。\n* preview: 预览开关。\n\n\u003Csup>*\u003C\u002Fsup> 输入```%date``` 表示当前日期（YY-mm-dd），输入```%time``` 表示当前时间（HH-MM-SS）。可用```\u002F```表示子目录。例如，```%date\u002Fname_%time``` 将把图像保存到 ```YY-mm-dd``` 文件夹中，文件名前缀为 ```name_HH-MM-SS```。\n\n### \u003Ca id=\"table1\">ImageTaggerSaveV2\u003C\u002Fa>\n[ImageTaggerSave](#ImageTaggerSave) 节点的升级版，可与 [LoadImagesFromPath](#LoadImagesFromPath) 
节点配合使用，以保存文件夹中对应图像的文本标签文件，同时保持原始文件名不变。\n\n在原有节点的基础上，新增了以下选项：\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_105851ace2eb.jpg)      \n* custom_filename: 用户自定义文件名。若此处有输入，则以此作为保存文件名；否则，仍使用 filename_prefix 作为文件名前缀。\n* remove_custom_filename_ext: 是否移除原始文件名中的扩展名。\n\n### \u003Ca id=\"table1\">从路径加载图片\u003C\u002Fa>  \n从指定文件夹加载图片。\n\n节点选项：\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_3b4ce2019ac2.jpg)      \n* path：文件夹路径。\n* image_load_cap：输出文件的数量。默认值为0，表示读取文件夹中的所有图片文件。\n* select_every_nth：每隔```select_every_nth```张图片加载一张，跳过其余图片。\n\n输出：\n* images：输出的图片列表。\n* masks：与图片对应的掩码列表。\n* file_name：与图片对应的文件名列表。\n* frame_count：图片的总数。\n\n\n### \u003Ca id=\"table1\">图片批次转列表\u003C\u002Fa>  \n将一批图片转换为多个较小的批次，可选择每个小批次的最大图片数量。\n\n节点选项：\n![image](image\u002Fimage_batch_to_list(multi)_node.jpg)      \n* batch_size：每个小批次的最大图片数量。\n\n\n### \u003Ca id=\"table1\">图片列表转批次\u003C\u002Fa>  \n将多个小批次的图片合并为一个大批次。\n![image](image\u002Fimage_batch_to_list(multi)_node.jpg)      \n\n\n\n# \u003Ca id=\"table1\">图层蒙版\u003C\u002Fa>\n\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_4b9da73b229c.jpg)    \n\n### \u003Ca id=\"table1\">Blend If蒙版\u003C\u002Fa>\n\n复现Photoshop图层样式中的“Blend If”功能。该节点会输出用于ImageBlend或ImageBlendAdvance节点进行图层合成的蒙版。\n```mask```为可选输入，若在此处输入蒙版，则会对输出结果产生作用。\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_a97185529b5b.jpg)    \n\n节点选项：  \n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_d38bc6c2ce29.jpg)    \n\n* invert_mask：是否反转蒙版。\n* blend_if：Blend If的通道选择。共有四种选项：```gray```、```red```、```green```和```blue```。\n* black_point：黑点值，范围为0-255。\n* black_range：暗部过渡范围。数值越大，暗部蒙版的过渡层次越丰富。\n* white_point：白点值，范围为0-255。\n* white_range：亮部过渡范围。数值越大，亮部蒙版的过渡层次越丰富。\n\n### \u003Ca 
id=\"table1\">蒙版框检测\u003C\u002Fa>\n\n检测蒙版所在区域，并输出其位置和大小。\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_ee1b2ac380ab.jpg)    \n\n节点选项：  \n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_eb040d450d0d.jpg)    \n\n* detect：检测方法，```min_bounding_rect```为最小外接矩形，```max_inscribed_rect```为最大内接矩形，```mask_area```为蒙版像素的有效区域。\n* x_adjust：检测后的水平偏移调整。\n* y_adjust：检测后的垂直偏移调整。\n* scale_adjust：检测后的缩放偏移调整。\n\n输出：\n\n* box_preview：检测结果预览图。红色表示检测结果，绿色表示调整后的输出结果。\n* x_percent：水平位置百分比输出。\n* y_percent：垂直位置百分比输出。\n* width：宽度。\n* height：高度。\n* x：左上角位置的x坐标。\n* y：左上角位置的y坐标。\n\n### \u003Ca id=\"table1\">蒙版框扩展\u003C\u002Fa>\n生成一个BBOX蒙版以扩大范围，并将其作为Mask输出。扩展范围可以设置为正数或负数，正数表示扩张，负数表示收缩。\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_f51ee1deb64a.jpg)    \n\n节点选项：  \n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_e5181eb2ad60.jpg)    \n* mask：输入的蒙版。\n* crop_box：由MaskBoxDetect节点输出的蒙版BBOX数据。\n* top_extend：顶部扩展范围。100表示BBOX高度增加100%。\n* bottom_extend：底部扩展范围。100表示BBOX高度增加100%。\n* left_extend：左侧扩展范围。100表示BBOX宽度增加100%。\n* right_extend：右侧扩展范围。100表示BBOX宽度增加100%。\n\n输出：\n* mask：BBOX扩展后的蒙版。\n* x_percent：水平位置百分比输出。\n* y_percent：垂直位置百分比输出。\n* width：宽度。\n* height：高度。\n* x：左上角位置的x坐标。\n* y：左上角位置的y坐标。\n\n## \u003Ca id=\"table1\">Ultra\u003C\u002Fa> 节点\n\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_8a5e70abe284.jpg)   \n采用超精细边缘遮罩处理方法的节点，最新版本包括：SegmentAnythingUltraV2、RmBgUltraV2、BiRefNetUltra、PersonMaskUltraV2、SegformerB2ClothesUltra和MaskEdgeUltraDetailV2。\n这些节点提供三种边缘处理方法：\n\n* ```PyMatting```通过闭式抠图（closed-form matting）优化蒙版边缘，适用于处理蒙版的 Trimap。\n* ```GuidedFilter```使用OpenCV的引导滤波器，基于颜色相似性对边缘进行羽化处理，在边缘颜色对比强烈时效果最佳。  \n  上述两种方法的代码均来自 spacepxl 的 [ComfyUI-Image-Filters](https:\u002F\u002Fgithub.com\u002Fspacepxl\u002FComfyUI-Image-Filters) 中的 Alpha Matte，感谢原作者。\n* 
```VitMatte```利用Transformer VIT模型进行高质量的边缘处理，能够保留边缘细节，甚至生成半透明的蒙版。\n  注意：首次运行时，需下载 VitMatte 模型文件并等待自动下载完成。若无法完成下载，可手动执行命令```huggingface-cli download hustvl\u002Fvitmatte-small-composition-1k```进行下载。\n  模型下载成功后，即可使用```VITMatte(local)```而无需联网。\n  VitMatte的选项：```device```用于设置是否使用CUDA加速 VitMatte 运算，速度约为CPU的5倍。```max_megapixels```用于设置 VitMatte 处理的最大图像尺寸，超出限制的图像将会被缩小。对于16G显存，建议将其设置为3。\n\n*请将所有模型文件从[BaiduNetdisk](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1xYF-V6QRwcFalEqLS7giWg?pwd=jiyz)或[Huggingface](https:\u002F\u002Fhuggingface.co\u002Fhustvl\u002Fvitmatte-small-composition-1k\u002Ftree\u002Fmain)下载至```ComfyUI\u002Fmodels\u002Fvitmatte```文件夹。\n\n下图为三种方法输出结果的差异示例。\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_1741869a6aef.jpg)\n\n### \u003Ca id=\"table1\">RemBgUltra\u003C\u002Fa>\n\n移除背景。与同类背景移除节点相比，该节点具有超高的边缘细节。\n\n此节点结合了 Spacepxl 的 [ComfyUI-Image-Filters](https:\u002F\u002Fgithub.com\u002Fspacepxl\u002FComfyUI-Image-Filters) 中的 Alpha Matte 节点以及 ZHO-ZHO-ZHO 的 [ComfyUI-BRIA_AI-RMBG](https:\u002F\u002Fgithub.com\u002FZHO-ZHO-ZHO\u002FComfyUI-BRIA_AI-RMBG) 的功能，感谢原作者。\n\n*请从 [BRIA Background Removal v1.4](https:\u002F\u002Fhuggingface.co\u002Fbriaai\u002FRMBG-1.4) 或 [百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F16PMfjpkXn_35T-cVYEPTZA?pwd=qi6o) 下载模型文件，并将其放置于 ```ComfyUI\u002Fmodels\u002Frmbg\u002FRMBG-1.4``` 文件夹中。该模型可用于非商业用途。    \n\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_48f56dc3e857.jpg)    \n\n节点选项：  \n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_da3292c04fd1.jpg)    \n\n* detail_range：边缘细节范围。\n* black_point：边缘黑色采样阈值。\n* white_point：边缘白色采样阈值。\n* process_detail：此处设置为 false 将跳过边缘处理以节省运行时间。\n\n### \u003Ca id=\"table1\">RmBgUltraV2\u003C\u002Fa>\n\nRemBgUltra 的 V2 升级版本新增了 VITMatte 边缘处理方法。（注：使用此方法处理超过 2K 分辨率的图像会消耗大量内存）\n\n在 RemBgUltra 的基础上，进行了以下更改：  
\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_c2e28b01d4f2.jpg)    \n\n* detail_method：边缘处理方法。提供 VITMatte、VITMatte（局部）、PyMatting 和 GuidedFilter。如果首次使用 VITMatte 后已下载模型，则后续可使用 VITMatte（局部）。\n* detail_erode：从边缘向内侵蚀遮罩范围。数值越大，向内修复的范围越大。\n* detail_dilate：遮罩边缘向外扩张。数值越大，向外修复的范围越广。\n* device：设置是否使用 CUDA 运行 VitMatte。\n* max_megapixels：设置 VitMate 操作的最大尺寸。\n\n\n\n### \u003Ca id=\"table1\">SegformerB2ClothesUltra\u003C\u002Fa>\n  \n  ![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_97b832bbdd92.jpg)   \n  生成人物面部、头发、手臂、腿部和服装的掩码，主要用于服装分割。\n  模型分割代码来自[StartHua](https:\u002F\u002Fgithub.com\u002FStartHua\u002FComfyui_segformer_b2_clothes)，感谢原作者。\n  与 comfyui_segformer_b2_clothes 相比，此节点具有超高的边缘细节。（注：使用 VITMatte 方法生成边缘超过 2K 分辨率的图像会消耗大量内存）    \n\n*请从 [huggingface](https:\u002F\u002Fhuggingface.co\u002Fmattmdjaga\u002Fsegformer_b2_clothes\u002Ftree\u002Fmain) 或 [百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1OK-HfCNyZWux5iQFANq9Rw?pwd=haxg) 下载所有模型文件，并将其放置于 ```ComfyUI\u002Fmodels\u002Fsegformer_b2_clothes``` 文件夹中。\n\n节点选项：  \n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_344c871700ad.jpg)    \n\n* face：面部识别开关。\n* hair：头发识别开关。\n* hat：帽子识别开关。\n* sunglass：太阳镜识别开关。\n* left_arm：左臂识别开关。\n* right_arm：右臂识别开关。\n* left_leg：左腿识别开关。\n* right_leg：右腿识别开关。\n* skirt：裙子识别开关。\n* pants：裤子识别开关。\n* dress：连衣裙识别开关。\n* belt：腰带识别开关。\n* shoe：鞋子识别开关。\n* bag：包包识别开关。\n* scarf：围巾识别开关。\n* detail_method：边缘处理方法。提供 VITMatte、VITMatte（局部）、PyMatting 和 GuidedFilter。如果首次使用 VITMatte 后已下载模型，则后续可使用 VITMatte（局部）。\n* detail_erode：从边缘向内侵蚀遮罩范围。数值越大，向内修复的范围越大。\n* detail_dilate：遮罩边缘向外扩张。数值越大，向外修复的范围越广。\n* black_point：边缘黑色采样阈值。\n* white_point：边缘白色采样阈值。\n* process_detail：此处设置为 false 将跳过边缘处理以节省运行时间。\n* device：设置是否使用 CUDA 运行 VitMatte。\n* max_megapixels：设置 VitMate 操作的最大尺寸。\n\n### \u003Ca 
id=\"table1\">SegformerUltraV2\u003C\u002Fa>\n\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_e4ecd26ae5fa.jpg)   \n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_21f604063f84.jpg)   \n使用 segformer 模型进行服装分割，具有超高的边缘细节。目前支持 segformer b2 衣服、segformer b3 衣服和 segformer b3 时尚。\n\n*请从 [huggingface](https:\u002F\u002Fhuggingface.co\u002Fmattmdjaga\u002Fsegformer_b2_clothes\u002Ftree\u002Fmain) 或 [百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1OK-HfCNyZWux5iQFANq9Rw?pwd=haxg) 下载模型文件，并将其放置于 ```ComfyUI\u002Fmodels\u002Fsegformer_b2_clothes``` 文件夹中。         \n*请从 [huggingface](https:\u002F\u002Fhuggingface.co\u002Fsayeed99\u002Fsegformer_b3_clothes\u002Ftree\u002Fmain) 或 [百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F18KrCqNqUwmoJlqgAGDTw9g?pwd=ap4z) 下载模型文件，并将其放置于 ```ComfyUI\u002Fmodels\u002Fsegformer_b3_clothes``` 文件夹中。    \n*请从 [huggingface](https:\u002F\u002Fhuggingface.co\u002Fsayeed99\u002Fsegformer-b3-fashion\u002Ftree\u002Fmain) 或 [百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F10vd5PmJLFNWXaRVGW6tSvA?pwd=xzqi) 下载模型文件，并将其放置于 ```ComfyUI\u002Fmodels\u002Fsegformer_b3_fashion``` 文件夹中。 \n\n节点选项：  \n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_51780a46a2ac.jpg)    \n\n* image：输入图像。\n* segformer_pipeline：Segformer 流水线输入。该流水线由 SegformerClottesPipeline 和 SegformerFashionPipeline 节点输出。\n* detail_method：边缘处理方法。提供 VITMatte、VITMatte（局部）、PyMatting 和 GuidedFilter。如果首次使用 VITMatte 后已下载模型，则后续可使用 VITMatte（局部）。\n* detail_erode：从边缘向内侵蚀遮罩范围。数值越大，向内修复的范围越大。\n* detail_dilate：遮罩边缘向外扩张。数值越大，向外修复的范围越广。\n* black_point：边缘黑色采样阈值。\n* white_point：边缘白色采样阈值。\n* process_detail：此处设置为 false 将跳过边缘处理以节省运行时间。\n* device：设置是否使用 CUDA 运行 VitMatte。\n* max_megapixels：设置 VitMate 操作的最大尺寸。\n\n### \u003Ca id=\"table1\">SegformerClothesPipiline\u003C\u002Fa>\n\n选择 segformer 服装模型，并选择分割内容。\n\n节点选项：  
\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_de156fbe561c.jpg)    \n\n* model: 模型选择。目前有两个模型可供选择，分别是 segformer b2 服装和 segformer b3 服装。\n* face: 面部识别开关。\n* hair: 头发识别开关。\n* hat: 帽子识别开关。\n* sunglass: 太阳镜识别开关。\n* left_arm: 左臂识别开关。\n* right_arm: 右臂识别开关。\n* left_leg: 左腿识别开关。\n* right_leg: 右腿识别开关。\n* left_shoe: 左鞋识别开关。\n* right_shoe: 右鞋识别开关。\n* skirt: 裙子识别开关。\n* pants: 裤子识别开关。\n* dress: 连衣裙识别开关。\n* belt: 腰带识别开关。\n* bag: 包识别开关。\n* scarf: 围巾识别开关。\n\n### \u003Ca id=\"table1\">SegformerFashionPipiline\u003C\u002Fa>\n\n选择 segformer 时尚模型，并选择分割内容。\n\n节点选项：  \n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_e3ce7728eb7e.jpg)    \n\n* model: 模型选择。目前只有一个模型可供选择，即 segformer b3 时尚。\n* shirt: 衬衫和女式衬衫开关。\n* top: 上衣、T恤、卫衣开关。\n* sweater: 毛衣开关。\n* cardigan: 开衫开关。\n* jacket: 外套开关。\n* vest: 马甲开关。\n* pants: 裤子开关。\n* shorts: 短裤开关。\n* skirt: 裙子开关。\n* coat: 外套开关。\n* dress: 连衣裙开关。\n* jumpsuit: 连体裤开关。\n* cape: 斗篷开关。\n* glasses: 眼镜开关。\n* hat: 帽子开关。\n* hairaccessory: 发带、头饰、发饰开关。\n* tie: 领带开关。\n* glove: 手套开关。\n* watch: 手表开关。\n* belt: 腰带开关。\n* legwarmer: 护腿套开关。\n* tights: 紧身裤和长筒袜开关。\n* sock: 袜子开关。\n* shoe: 鞋子开关。\n* bagwallet: 包和钱包开关。\n* scarf: 围巾开关。\n* umbrella: 雨伞开关。\n* hood: 防风帽开关。\n* collar: 领口开关。\n* lapel: 翻领开关。\n* epaulette: 肩章开关。\n* sleeve: 袖子开关。\n* pocket: 口袋开关。\n* neckline: 领口开关。\n* buckle: 扣件开关。\n* zipper: 拉链开关。\n* applique: 贴花开关。\n* bead: 珠饰开关。\n* bow: 蝴蝶结开关。\n* flower: 花朵开关。\n* fringe: 流苏开关。\n* ribbon: 丝带开关。\n* rivet: 铆钉开关。\n* ruffle: 褶边开关。\n* sequin: 亮片开关。\n* tassel: 流苏开关。\n\n\n### \u003Ca id=\"table1\">SegformerUltraV3\u003C\u002Fa>   \n在 ```SegformerUltraV2``` 的基础上，进行了修改，将模型和设置分开加载，从而在使用多个节点时节省资源。请注意，类型设置必须与模型匹配。\n\n修改后的节点选项：  \n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_1b0791c06165.jpg)\n* segformer_model: Segformer 模型输入，该模型由 ```LoadSegformerModel``` 节点加载。\n* segformer_setting: Segformer 标签设置输入。\n\n### \u003Ca 
id=\"table1\">SegformerClothesSetting\u003C\u002Fa>\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_3831daec2658.jpg)\n与 ```Segformer Ultra V3``` 配合使用的 Segformer 服装设置节点。已从 ```SegformerClothesPipiline``` 节点中移除了模型选项。  \n\n\n### \u003Ca id=\"table1\">SegformerFashionSetting\u003C\u002Fa>\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_e3ce7728eb7e.jpg)    \n与 ```Segformer Ultra V3``` 配合使用的 Segformer 时尚设置节点。已从 ```SegformerFashionPipiline``` 节点中移除了模型选项。\n\n### \u003Ca id=\"table1\">LoadSegformerModel\u003C\u002Fa>\n与 ```Segformer Ultra V3``` 兼容的模型加载节点。   \n\n节点选项：  \n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_60644d08e588.jpg)    \n* model_name: 模型选择。\n* device: 加载设备选择。\n\n### \u003Ca id=\"table1\">MaskEdgeUltraDetail\u003C\u002Fa>\n\n将粗糙的掩码处理为超精细的边缘。\n该节点结合了 Spacepxl 的 [ComfyUI-Image-Filters](https:\u002F\u002Fgithub.com\u002Fspacepxl\u002FComfyUI-Image-Filters) 中的 Alpha Matte 和 Guided Filter Alpha 节点功能，感谢原作者。\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_305366bb5872.jpg)    \n\n节点选项：  \n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_1fc85e1c3130.jpg)    \n\n* method: 提供两种边缘处理方法：PyMatting 和 OpenCV-GuidedFilter。PyMatting 的处理速度较慢，但对于视频来说，建议使用此方法以获得更平滑的掩码序列。\n* mask_grow: 掩码扩展幅度。正值向外扩展，负值向内收缩。对于较为粗糙的掩码，通常使用负值来缩小其边缘，以获得更好的效果。\n* fix_gap: 修复掩码中的缺口。如果掩码中有明显的缺口，请适当增加此值。\n* fix_threshold: 修复缺口的阈值。\n* detail_range: 边缘细节范围。\n* black_point: 边缘黑色采样阈值。\n* white_point: 边缘白色采样阈值。\n\n### \u003Ca id=\"table1\">MaskEdgeUltraDetailV2\u003C\u002Fa>\n\nMaskEdgeUltraDetail 的 V2 升级版本新增了 VITMatte 边缘处理方法。（注：使用此方法处理超过 2K 分辨率的图像会消耗大量内存）    \n这种方法适用于处理半透明区域。 \n\n在 MaskEdgeUltraDetail 的基础上，进行了以下更改：  \n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_d2b700477245.jpg)    \n\n* method: 
边缘处理方法。提供 VITMatte、VITMatte（本地）、PyMatting、GuidedFilter。如果首次使用 VITMatte 后已下载模型，则后续可使用 VITMatte（本地）。\n* edge_erode: 从边缘向内侵蚀掩码范围。数值越大，向内修复的范围越大。\n* edge_dilate: 掩码边缘向外扩张。数值越大，向外修复的范围越广。\n* device: 设置是否使用 CUDA 的 VitMatte。\n* max_megapixels: 设置 VitMate 操作的最大尺寸。\n\n### \u003Ca id=\"table1\">MaskEdgeUltraDetailV3\u003C\u002Fa>\nMaskEdgeUltraDetailV3 是 MaskEdgeUltraDetailV2 的升级版本，通过输入三元图蒙版来处理不同的分区，生成包含更精细和半透明部分的完整蒙版。\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_7e6859ddd14d.jpg)  \n\n在 MaskEdgeUltraDetailV2 的基础上，进行了以下改进：  \n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_0a9e3f7bb12d.jpg)    \n* transparent_trimap：在此区域内使用不同的虚化参数可以生成更精细的抠像蒙版。通常用于处理半透明物体或发丝等区域。  \n* mask_edge_erode：蒙版边缘向内侵蚀。数值越大，向内修复的范围越大。  \n* mask_edge_dilate：蒙版边缘向外扩张。数值越大，向外修复的范围越大。  \n* transparent_trimap_edge_erode：透明三元图蒙版的边缘向内侵蚀。数值越大，向内修正的范围越大。  \n* transparent_trimap_edge_dilate：透明三元图蒙版的边缘向外扩张。数值越大，向外修复的范围越大。  \n* trimap_blur：三元图蒙版与主蒙版融合处边缘的模糊程度。\n\n### \u003Ca id=\"table1\">MaskByColor\u003C\u002Fa>\n\n根据选定的颜色生成蒙版。\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_0d1a747b9261.jpg)    \n\n节点选项：  \n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_21f6328fd6d7.jpg)    \n\n* image：输入图像。  \n* mask：此输入为可选，若有蒙版，则仅包含蒙版内的颜色范围。  \n* color：颜色选择器。点击色块选择颜色，也可使用拾色器面板上的吸管工具吸取屏幕颜色。注意：使用吸管时，请将浏览器窗口最大化。  \n* color_in_HEX\u003Csup>4\u003C\u002Fsup>：输入颜色值。若填写此项，则优先使用该值，忽略通过 ```color``` 选择的颜色。  \n* threshold：蒙版范围阈值，数值越大，蒙版范围越大。  \n* fix_gap：修复蒙版中的空隙。若蒙版存在明显空隙，可适当增大此值。  \n* fix_threshold：修复蒙版的阈值。  \n* invert_mask：是否反转蒙版。\n\n### \u003Ca id=\"table1\">ImageToMask\u003C\u002Fa>\n\n将图像转换为蒙版。支持将 LAB、RGBA、YUV 和 HSV 模式下的任意通道转换为蒙版，并提供色彩尺度调整功能。同时支持可选的蒙版输入，以获得仅包含有效部分的蒙版。\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_2d47b90bc38d.jpg)    \n\n节点选项：  
\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_a845a1a0f392.jpg)    \n\n* image：输入图像。  \n* mask：此输入为可选，若有蒙版，则仅包含蒙版内的颜色范围。  \n* channel：通道选择。可选择 LAB、RGBA、YUV 或 HSV 模式的任意通道。  \n* black_point\u003Csup>*\u003C\u002Fsup>：蒙版的黑点值。取值范围为 0–255，默认值为 0。  \n* white_point\u003Csup>*\u003C\u002Fsup>：蒙版的白点值。取值范围为 0–255，默认值为 255。  \n* gray_point：蒙版的灰点值。取值范围为 0.01–9.99，默认值为 1。  \n* invert_output_mask：是否反转输出的蒙版。\n\n\u003Csup>*\u003C\u002Fsup>\u003Cfont size=\"3\">若 black_point 或 output_black_point 值大于 white_point 或 output_white_point，则两值会互换，较大的值作为 white_point，较小的值作为 black_point。\u003C\u002Ffont>      \n\n### \u003Ca id=\"table1\">Shadow\u003C\u002Fa> & Highlight Mask\n\n生成图像中暗部和亮部的蒙版。\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_eaba0fe17395.jpg)    \n\n节点选项：  \n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_c1c7d59bd312.jpg)    \n\n* image：输入图像。  \n* mask：可选输入。若有输入，则仅调整蒙版范围内的颜色。  \n* shadow_level_offset：暗部数值的偏移量，数值越大，越多靠近亮部的区域会被归入暗部。  \n* shadow_range：暗部的过渡范围。  \n* highlight_level_offset：亮部数值的偏移量，数值越大，越多靠近暗部的区域会被归入亮部。  \n* highlight_range：亮部的过渡范围。\n\n### \u003Ca id=\"table1\">Shadow\u003C\u002Fa> Highlight Mask V2\n\n这是 ```Shadow & Highlight Mask``` 节点的复制品，去掉了节点名称中的“&”字符，以避免 ComfyUI 工作流解析错误。\n\n### \u003Ca id=\"table1\">PixelSpread\u003C\u002Fa>\n\n对图像的蒙版边缘进行像素扩展预处理，可有效改善图像合成的边缘效果。\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_45a78690c548.jpg)    \n\n节点选项：  \n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_6aaed9052bac.jpg)    \n\n* invert_mask：是否反转蒙版。  \n* mask_grow：蒙版扩展幅度。  \n\n### \u003Ca id=\"table1\">MaskGrow\u003C\u002Fa>\n\n扩展、收缩蒙版边缘并对其进行模糊处理。\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_a182afc09a2b.jpg)    \n\n节点选项：  
\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_0dc634206bb9.jpg)    \n\n* invert_mask：是否反转蒙版。  \n* grow：正值表示向外扩展，负值表示向内收缩。  \n* blur：对边缘进行模糊处理。\n\n### \u003Ca id=\"table1\">MaskEdgeShrink\u003C\u002Fa>\n\n在保留边缘细节的同时，平滑并缩小蒙版边缘。\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_4845b2d1dc53.jpg)    \n\n节点选项：  \n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_75e5831f1232.jpg)    \n\n* invert_mask：是否反转蒙版。  \n* shrink_level：收缩的平滑度级别。  \n* soft：平滑幅度。  \n* edge_shrink：边缘收缩幅度。  \n* edge_reserve：保留边缘细节的幅度，100 表示完全保留，0 表示完全不保留。\n\nMaskGrow 与 MaskEdgeShrink 的对比  \n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_fc2d30ddc3fc.jpg)    \n\n### \u003Ca id=\"table1\">MaskMotionBlur\u003C\u002Fa>\n\n在蒙版上创建运动模糊效果。\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_ef2934251d2d.jpg)    \n\n节点选项：  \n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_eb80caf22d52.jpg)    \n\n* invert_mask：是否反转蒙版。  \n* blur：模糊的大小。  \n* angle：模糊的角度。\n\n### \u003Ca id=\"table1\">MaskGradient\u003C\u002Fa>\n\n从蒙版的一侧创建渐变效果。请注意，此节点与 CreateGradientMask 节点有所不同。\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_2dc1807a2b81.jpg)    \n\n节点选项：  \n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_8654ea424609.jpg)    \n\n* invert_mask：是否反转蒙版。  \n* gradient_side：从哪一侧生成渐变。方向包括顶部、底部、左侧和右侧。  \n* gradient_scale：渐变距离。默认值为 100，表示渐变的一侧完全透明，另一侧完全不透明。数值越小，透明到不透明的距离越短。  \n* gradient_offset：渐变位置的偏移量。  \n* opacity：渐变的不透明度。\n\n### \u003Ca 
id=\"table1\">创建渐变蒙版\u003C\u002Fa>\n\n创建渐变蒙版。请注意此节点与“MaskGradient”节点的区别。\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_75ffb7e7a10b.jpg)    \n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_7114551ccdae.jpg)    \n\n节点选项：  \n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_7e2990ff6d3b.jpg)    \n\n* size_as\u003Csup>*\u003C\u002Fsup>：此处的输入图像或蒙版将根据其尺寸生成输出图像和蒙版。该输入优先于下方的宽度和高度设置。\n* width：图像的宽度。如果有 size_as 输入，则此设置将被忽略。\n* height：图像的高度。如果有 size_as 输入，则此设置将被忽略。\n* gradient_side：从哪个边缘生成渐变。共有五个方向：顶部、底部、左侧、右侧和中心。\n* gradient_scale：渐变距离。默认值为100，表示渐变的一侧完全透明，另一侧完全不透明。数值越小，从透明到不透明的距离越短。\n* gradient_offset：渐变位置偏移。当 ```gradient_side``` 设置为“中心”时，此处用于调整渐变区域的大小，正值会使区域缩小，负值则会放大。\n* opacity：渐变的不透明度。\n\n\u003Csup>*\u003C\u002Fsup>仅限于输入图像和蒙版。强制集成其他类型的输入会导致节点报错。  \n\n### \u003Ca id=\"table1\">蒙版描边\u003C\u002Fa>\n\n生成蒙版轮廓描边。\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_7e5601137179.jpg)    \n\n节点选项：  \n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_e25337303680.jpg)    \n\n* invert_mask：是否反转蒙版。\n* stroke_grow：描边的扩展\u002F收缩幅度，正值表示扩展，负值表示收缩。\n* stroke_width：描边的宽度。\n* blur：描边的模糊程度。\n\n### \u003Ca id=\"table1\">蒙版噪点\u003C\u002Fa>\n\n为蒙版生成噪点。\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_f0a339cbfaf0.jpg)    \n\n节点选项：  \n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_c14b61ec1c1d.jpg)    \n\n* grain：噪点强度。\n* invert_mask：是否反转蒙版。\n\n### \u003Ca id=\"table1\">绘制圆角矩形\u003C\u002Fa>\n为蒙版生成圆角矩形边框。\n\n节点选项：  \n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_f8e0a79e26d1.jpg)     \n\n* size_as\u003Csup>*\u003C\u002Fsup>：参考尺寸。可以是图像或蒙版。\n* rounded_rect_radius：圆角矩形的半径。\n* 
anti_aliasing：抗锯齿采样倍数。\n* width：蒙版的宽度。如果输入了 size_as，则此设置将被忽略。\n* height：蒙版的高度。如果输入了 size_as，则此设置将被忽略。\n\n\u003Csup>*\u003C\u002Fsup>仅限于输入图像和蒙版。强制集成其他类型的输入会导致节点报错。\n\n### \u003Ca id=\"table1\">蒙版预览\u003C\u002Fa>\n\n预览输入的蒙版\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_36b0cdd4871b.jpg)    \n\n### \u003Ca id=\"table1\">蒙版反转\u003C\u002Fa>\n\n反转蒙版\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_032dad481ef3.jpg)    \n\n# \u003Ca id=\"table1\">图层滤镜\u003C\u002Fa>\n\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_7948b920fa13.jpg)    \n\n### \u003Ca id=\"table1\">锐化与柔化\u003C\u002Fa>\n\n增强或柔化图像细节。\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_01843b0a0286.jpg)    \n\n节点选项：\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_f5c0afba5728.jpg)    \n\n* enhance：提供4种预设，分别为极锐、锐、柔和极柔。若选择“无”，则不进行任何处理。\n\n### \u003Ca id=\"table1\">美肤\u003C\u002Fa>\n\n使皮肤看起来更光滑。\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_b30e90bda15a.jpg)    \n\n节点选项：\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_e9998ed795ad.jpg)    \n\n* smooth：皮肤平滑度。\n* threshold：平滑范围。数值越大，平滑范围越小。\n* opacity：平滑效果的不透明度。\n\n### \u003Ca id=\"table1\">水彩\u003C\u002Fa>\n\n水彩画效果\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_274b3543c87f.jpg)    \n\n节点选项：\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_116c533ae398.jpg)    \n\n* line_density：黑色线条密度。\n* opacity：水彩效果的不透明度。\n\n### \u003Ca id=\"table1\">半色调\u003C\u002Fa>\n将图像转换为半色调效果。\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_7634167a40e4.jpg)    
\n\n节点选项：\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_5a6588e94bff.jpg)    \n* image：输入图像。\n* mask：可选的蒙版输入。\n* dot_size：网点的大小。\n* angle：网点排列的角度。\n* shape：网点的形状。有三种选项：圆形、菱形和方形。\n* dot_color：网点的颜色。\n* background_color：背景颜色。\n* anti_alias：抗锯齿强度。数值越高，网点边缘越平滑，但处理时间也会增加。\n\n### \u003Ca id=\"table1\">柔光\u003C\u002Fa>\n\n柔光效果，画面中的高光显得模糊。\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_b73126a1f78e.jpg)    \n\n节点选项：\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_dd415b53b4a0.jpg)    \n\n* soft：柔光的大小。\n* threshold：柔光范围。光线从画面中最亮的部分开始显现。数值越低，范围越大；数值越高，范围越小。\n* opacity：柔光的不透明度。\n\n### \u003Ca id=\"table1\">通道抖动\u003C\u002Fa>\n\n通道错位效果，类似于 TikTok 标志的效果。\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_d88047b534f2.jpg)    \n\n节点选项：\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_069609cdefaf.jpg)    \n\n* distance：通道分离的距离。\n* angle：通道分离的角度。\n* mode：通道偏移的排列顺序。\n\n### \u003Ca id=\"table1\">HDR效果\u003C\u002Fa>\n\n提升输入图像的动态范围和视觉吸引力。\n本节点是对 [HDR Effects (SuperBeasts.AI)](https:\u002F\u002Fgithub.com\u002FSuperBeastsAI\u002FComfyUI-SuperBeasts) 的重新组织和封装，感谢原作者。\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_74280730bf6e.jpg)    \n\n节点选项：\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_ff91eaaf9553.jpg)    \n\n* hdr_intensity：范围为0.0至5.0，控制HDR效果的整体强度。数值越高，HDR效果越明显。\n* shadow_intensity：范围为0.0至1.0，调整图像中阴影的强度。数值越高，阴影越深，对比度也越高。\n* highlight_intensity：范围为0.0至1.0，调整图像中高光的强度。数值越高，高光越明亮，对比度也越高。\n* gamma_intensity：范围为0.0至1.0，控制应用于图像的伽马校正。数值越高，整体亮度和对比度越高。\n* contrast：范围为0.0至1.0，增强图像的对比度。数值越高，对比度越明显。\n* enhance_color：范围为0.0至1.0，增强图像的色彩饱和度。数值越高，色彩越鲜艳。\n\n### \u003Ca 
id=\"table1\">胶片\u003C\u002Fa>\n\n模拟胶片的颗粒感、暗角和边缘模糊效果，支持输入深度图以模拟散焦效果。  \n该节点是对 [digitaljohn\u002Fcomfyui-propost](https:\u002F\u002Fgithub.com\u002Fdigitaljohn\u002Fcomfyui-propost) 的重新组织和封装，感谢原作者。  \n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_19334f52865a.jpg)  \n\n节点选项：  \n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_4af8cc81c8fd.jpg)  \n\n* image: 输入图像。  \n* depth_map: 输入深度图以模拟散焦效果。这是一个可选输入。如果没有输入，则会在图像边缘模拟径向模糊。  \n* center_x: 暗角和径向模糊中心点位置的水平坐标，其中 0 表示最左侧，1 表示最右侧，0.5 表示正中间。  \n* center_y: 暗角和径向模糊中心点位置的垂直坐标，定义同上。  \n* saturation: 色彩饱和度，1 为原始值。  \n* grain_power: 颗粒强度。数值越大，噪点越明显。  \n* grain_scale: 颗粒大小。  \n* grain_sat: 颗粒的色彩饱和度。0 表示单色噪点，数值越大，色彩越突出。  \n* grain_shadows: 暗部噪点强度。  \n* grain_highs: 亮部噪点强度。  \n* blur_strength: 模糊强度。数值越大，模糊效果越明显。  \n* blur_focus_spread: 焦点扩散范围。数值越大，清晰区域越大。  \n* focal_depth: 模拟散焦的焦距。0 表示焦点最远，1 表示焦点最近。此设置仅在输入深度图时有效。\n\n### \u003Ca id=\"table1\">胶片V2\u003C\u002Fa>\n\n胶片节点的升级版，在原有基础上增加了 fastgrain 方法，生成噪点的速度提升了 10 倍。fastgrain 的代码来自 [github.com\u002Fspacepxl\u002FComfyUI-Image-Filters](https:\u002F\u002Fgithub.com\u002Fspacepxl\u002FComfyUI-Image-Filters) 中的 BetterFilmGrain 节点，感谢原作者。  \n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_599f8d9cfce7.jpg)  \n\n### \u003Ca id=\"table1\">光晕\u003C\u002Fa>\n\n模拟胶片的漏光效果。请从 [百度网盘](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F18Z0lhsDAejbwlOrCZFMuNg?pwd=o8sz) 或 [Google Drive](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1DcH2Zkyj7W3OiAeeGpJk1eaZpdJwdCL-\u002Fview?usp=sharing) 下载模型文件，并将其复制到 ```ComfyUI\u002Fmodels\u002Flayerstyle``` 文件夹中。  \n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_95c7e1dfdbbe.jpg)  \n\n节点选项：  \n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_1d5da8bcd24a.jpg)  \n\n* light: 提供 32 种光斑类型。random 表示随机选择。  \n* 
corner: 光出现的位置有四个选项：左上角、右上角、左下角和右下角。  \n* hue: 光的颜色色调。  \n* saturation: 光的色彩饱和度。  \n* opacity: 光的透明度。\n\n### \u003Ca id=\"table1\">颜色映射\u003C\u002Fa>\n\n伪彩色热图效果。  \n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_d3a5bd690624.jpg)  \n\n节点选项：  \n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_2f576ae9edbd.jpg)  \n\n* color_map: 效果类型。共有 22 种效果，如上图所示。  \n* opacity: 颜色映射效果的透明度。\n\n### \u003Ca id=\"table1\">运动模糊\u003C\u002Fa>\n\n使图像产生运动模糊效果。  \n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_b498766b336a.jpg)  \n\n节点选项：  \n\n* angle: 模糊的角度。  \n* blur: 模糊的大小。\n\n### \u003Ca id=\"table1\">高斯模糊\u003C\u002Fa>\n\n对图像进行高斯模糊处理。  \n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_f68d8fa25336.jpg)  \n\n节点选项：  \n\n* blur: 模糊的大小，整数，范围 1–999。\n\n### \u003Ca id=\"table1\">高斯模糊V2\u003C\u002Fa>\n\n高斯模糊。将参数精度改为浮点数，精确到 0.01。  \n\n节点选项：  \n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_42719d6dc4ed.jpg)  \n\n* blur: 模糊的大小，浮点数，范围 0–1000。\n\n### \u003Ca id=\"table1\">添加噪点\u003C\u002Fa>\n\n为图片添加噪点。  \n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_af9f41000a1f.jpg)  \n\n节点选项：  \n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_5f6d557d55db.jpg)  \n\n* grain_power: 噪点强度。  \n* grain_scale: 噪点大小。  \n* grain_sat: 噪点的色彩饱和度。\n\n### \u003Ca id=\"table1\">扭曲置换\u003C\u002Fa>\n为材质图像生成位移变形效果。  \n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_f19746cbde9a.jpg)  \n\n节点选项：  \n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_52f7f1002a13.jpg)  \n\n* image: 原始图像，基于该图像的灰度信息进行材质扭曲。  \n* material_image: 材质图像。该图像的尺寸应与原图像一致，否则会被强制调整大小。  \n* mask: 
可选的遮罩输入。输出将仅包含遮罩区域内变形后的结果。  \n* distort_strength: 扭曲强度。  \n* smoothness: 扭曲的平滑度。  \n* anit_aliasing: 抗锯齿值。数值越高，生成速度会显著降低。  \n* shadow_blend_mode: 阴影部分的混合模式。  \n* shadow_strength: 阴影部分的混合透明度。  \n* highlight_blend_mode: 亮部部分的混合模式。  \n* highlight_strength: 亮部部分的混合透明度。\n\n输出：  \n* image: 输出图像。  \n* displaced_material: 材质图像的变形结果。\n\n## 注释用于 \u003Ca id=\"table1\">notes\u003C\u002Fa>\n\n\u003Csup>1\u003C\u002Fsup>  layer_image、layer_mask 以及 background_image（如果已输入），这三项必须具有相同的尺寸。    \n\n\u003Csup>2\u003C\u002Fsup>  mask 并非必填项。默认情况下会使用图像的 Alpha 通道。如果输入的图像不包含 Alpha 通道，则会自动创建整个图像的 Alpha 通道。若同时输入了 mask，则 Alpha 通道将被 mask 覆盖。    \n\n\u003Csup>3\u003C\u002Fsup>  \u003Ca id=\"table1\">混合模式\u003C\u002Fa> 包括 **正常、正片叠底、屏幕、添加、减去、差值、变暗、颜色加深、颜色减淡、线性加深、线性减淡、叠加、柔光、强光、亮光、点光、线性光和硬混**，共计 19 种混合模式。    \n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_cbbb48dd6093.jpg)    \n\u003Cfont size=\"1\">*混合模式预览\u003C\u002Ffont>\u003Cbr \u002F>     \n\n\u003Csup>3\u003C\u002Fsup>   \u003Ca id=\"table1\">BlendModeV2\u003C\u002Fa> 包括 **正常、溶解、变暗、正片叠底、颜色加深、线性加深、深色、变亮、屏幕、颜色减淡、线性减淡（加法）、浅色、滤色、叠加、柔光、强光、亮光、线性光、点光、硬混、差值、排除、减去、除法、色相、饱和度、颜色、明度、颗粒提取、颗粒合并**，共计 30 种混合模式。      \nBlendMode V2 的部分代码来源于 [Virtuoso Nodes for ComfyUI](https:\u002F\u002Fgithub.com\u002Fchrisfreilich\u002Fvirtuoso-nodes)。感谢原作者。\n![image](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_644ed5b5d98c.jpg)    \n\u003Cfont size=\"1\">*Blend Mode V2 预览\u003C\u002Ffont>\u003Cbr \u002F>     \n\n\u003Csup>4\u003C\u002Fsup>  RGB 颜色采用十六进制 RGB 格式表示，例如 '#FA3D86'。    \n\n\u003Csup>5\u003C\u002Fsup>  layer_image 和 layer_mask 必须具有相同的尺寸。    \n\n## 星标\n\n[![星标历史图](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_readme_6d8a59860009.png)](https:\u002F\u002Fstar-history.com\u002F#chflame163\u002FComfyUI_LayerStyle&Date)\n\n# 声明\n\nLayerStyle 节点遵循 MIT 许可证，其部分功能代码来自其他开源项目。感谢原作者。如用于商业用途，请参考原项目的许可证以获取授权协议。","# ComfyUI_LayerStyle 
快速上手指南\n\nComfyUI_LayerStyle 是一套为 ComfyUI 设计的节点集合，旨在将 Photoshop 的基础图层与蒙版合成功能迁移至 ComfyUI，实现工作流集中化，减少软件切换频率。\n\n## 1. 环境准备\n\n*   **系统要求**：Windows \u002F Linux \u002F macOS（需已安装 Python 环境）。\n*   **前置依赖**：\n    *   已安装 **ComfyUI**（推荐使用官方便携版或 Aki 整合包）。\n    *   **重要提示**：部分高级节点（如 BiRefNet, SAM2, YOLO 等）已拆分至独立仓库 **[ComfyUI_LayerStyle_Advance](https:\u002F\u002Fgithub.com\u002Fchflame163\u002FComfyUI_LayerStyle_Advance)**。若需使用完整功能或加载旧工作流，请务必同时安装该进阶插件。\n*   **网络环境**：下载模型时建议配置国内镜像源，以避免连接 HuggingFace 超时。\n\n## 2. 安装步骤\n\n### 第一步：安装插件\n\n推荐通过 **ComfyUI Manager** 搜索 `ComfyUI_LayerStyle` 进行一键安装。\n\n若手动安装，请在终端进入 ComfyUI 的 `custom_nodes` 目录，执行以下命令：\n\n```bash\ncd ComfyUI\u002Fcustom_nodes\ngit clone https:\u002F\u002Fgithub.com\u002Fchflame163\u002FComfyUI_LayerStyle.git\n```\n\n*(可选) 如需完整功能，请同样克隆进阶仓库：*\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fchflame163\u002FComfyUI_LayerStyle_Advance.git\n```\n\n### 第二步：安装依赖包\n\n进入插件目录，根据你的 ComfyUI 版本运行对应的安装脚本：\n\n**方案 A：官方便携版 (Official Portable)**\n```bash\ncd ComfyUI\u002Fcustom_nodes\u002FComfyUI_LayerStyle\n..\\..\\..\\python_embeded\\python.exe -s -m pip install -r requirements.txt\n.\\repair_dependency.bat\n```\n\n**方案 B：Aki 整合包 (Aki Package)**\n```bash\ncd ComfyUI\u002Fcustom_nodes\u002FComfyUI_LayerStyle\n..\\..\\python\\python.exe -s -m pip install -r requirements.txt\n.\\repair_dependency_aki.bat\n```\n\n> **注意**：安装完成后请**重启 ComfyUI**。若更新后出现依赖报错，请重新运行上述 `repair_dependency*.bat` 脚本修复。\n\n### 第三步：下载模型文件\n\n本插件需要额外的模型文件才能运行部分节点（尤其是带有 \"Ultra\" 字样的节点）。\n\n**下载方式（优先推荐国内源）：**\n*   **百度网盘**: [链接](https:\u002F\u002Fpan.baidu.com\u002Fs\u002F1T_uXMX3OKIWOJLPuLijrgA?pwd=1yye) (提取码: `1yye`)\n*   **夸克网盘**: [链接](https:\u002F\u002Fpan.quark.cn\u002Fs\u002F4802d6bca7cb)\n*   *海外用户*: [HuggingFace](https:\u002F\u002Fhuggingface.co\u002Fchflame163\u002FComfyUI_LayerStyle\u002Ftree\u002Fmain)\n\n**部署方法：**\n1.  下载所有文件。\n2.  将文件复制到 ComfyUI 的模型目录：`ComfyUI\u002Fmodels`。\n3.  
**特别注意**：部分节点需要 `vitmatte` 模型，请确保 `vitmatte-small-composition-1k` 相关文件位于 `ComfyUI\u002Fmodels\u002Fvitmatte` 文件夹内（上述打包下载中已包含）。\n\n## 3. 基本使用\n\n安装并重启 ComfyUI 后，你可以在节点菜单中找到 `LayerStyle` 和 `LayerUtility` 分类。\n\n### 最简单的工作流示例：图层合成\n\n以下是一个基础的“图片 + 蒙版”合成流程：\n\n1.  **加载图片**：添加两个 `Load Image` 节点，分别加载**背景图**和**前景图**。\n2.  **创建\u002F获取蒙版**：\n    *   可以使用 `LayerMask` 系列节点（如 `PersonMaskUltra`）自动从前景图中提取人物蒙版。\n    *   或者使用 `Load Image` 加载一张黑白蒙版图片，并通过 `Image To Mask` 转换。\n3.  **图层合成**：\n    *   添加 `LayerComposite` 节点（通常在 `LayerStyle` 菜单下）。\n    *   连接端口：\n        *   `destination`: 连接背景图。\n        *   `source`: 连接前景图。\n        *   `mask`: 连接生成的蒙版。\n    *   调整 `blend_mode`（混合模式）和 `opacity`（不透明度）。\n4.  **预览**：连接 `Save Image` 或直接查看输出预览。\n\n> **提示**：插件作者提供了丰富的预设工作流，位于插件目录下的 `workflow` 文件夹中（例如 `title_example_workflow.json`），可直接拖入 ComfyUI 界面参考学习。","电商设计师小林需要为新款运动鞋批量生成带有复杂光影特效和透明背景的宣传海报，且必须保持图层可编辑性以便后续微调。\n\n### 没有 ComfyUI_LayerStyle 时\n- **软件频繁切换**：必须在 Stable Diffusion 生成底图后，导出文件导入 Photoshop 进行抠图、调色和合成，工作流被强行割裂。\n- **批量处理困难**：面对上百张不同配色的鞋款，无法在 AI 工作流中自动应用统一的图层混合模式（如正片叠底、滤色），只能手动在 PS 中重复操作。\n- **蒙版精度受限**：传统的节点难以实现类似 PS 的精细蒙版运算，导致头发丝或半透明纱网等细节边缘处理生硬，需人工修补。\n- **迭代成本高昂**：一旦客户要求调整光影角度或背景色调，需重新经历“生成 - 导出 - 修图”的全流程，反馈周期长。\n\n### 使用 ComfyUI_LayerStyle 后\n- **全流程闭环**：直接在 ComfyUI 中调用类 PS 节点完成图层叠加与蒙版合成，无需离开界面即可输出成品，实现一站式自动化。\n- **参数化批量生产**：通过节点预设图层混合算法，一键将同一套光影逻辑应用到不同颜色的鞋款生成任务中，效率提升数倍。\n- **像素级精细控制**：利用其强大的蒙版运算节点，轻松实现复杂边缘的无损融合，达到专业修图软件的合成效果。\n- **灵活即时调整**：所有合成步骤均为节点连接，修改光影参数或替换背景仅需调整连线或数值，秒级重新渲染预览。\n\nComfyUI_LayerStyle 将 Photoshop 的核心合成能力融入 AI 
生成流，让创意从“单点生成”进化为“可控的工业化生产”。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fchflame163_ComfyUI_LayerStyle_60829226.jpg","chflame163",null,"https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fchflame163_59afbcb5.png","Beijing","https:\u002F\u002Fgithub.com\u002Fchflame163",[78,82,86],{"name":79,"color":80,"percentage":81},"Python","#3572A5",93.3,{"name":83,"color":84,"percentage":85},"JavaScript","#f1e05a",6.4,{"name":87,"color":88,"percentage":89},"Batchfile","#C1F12E",0.3,2988,192,"2026-04-17T04:36:13","MIT","Windows","未明确说明具体型号，但依赖 onnxruntime 和 CUDA（报错信息提及需安装正确版本的 CUDA 和 cuDNN），部分节点（如 BiRefNet, SAM2, YOLO）通常需要 NVIDIA GPU","未说明",{"notes":98,"python":99,"dependencies":100},"1. 主要支持 Windows 环境（提供 .bat 安装脚本），其他系统需手动配置依赖。2. 部分高级节点已拆分至 ComfyUI_LayerStyle_Advance 仓库，若工作流丢失节点需安装该扩展。3. 需手动下载模型文件（可通过百度网盘、夸克网盘或 HuggingFace 获取）并放入 ComfyUI\u002Fmodels 目录。4. 常见报错涉及 opencv-contrib-python 版本冲突、transformers 版本过低、protobuf 版本过低及网络连接问题（国内用户建议配置 hf-mirror）。5. 更新后若出现依赖错误，需运行 repair_dependency.bat 修复。","未说明（依赖 ComfyUI 官方便携版或 Aki 版的嵌入式 Python 环境）",[101,102,103,104,105,106,107,108],"opencv-contrib-python","transformers","protobuf","onnxruntime","huggingface_hub","numpy (支持 2.x)","insightface","vitmatte",[15,35],"2026-03-27T02:49:30.150509","2026-04-18T11:14:17.805735",[113,118,123,128,133,138,143],{"id":114,"question_zh":115,"answer_zh":116,"source_url":117},39346,"加载工作流时提示找不到节点类型（如 LayerColor: Brightness Contrast），如何解决？","这通常是由于节点命名变更或工作流版本不匹配导致的。解决方法是将更新后的 ComfyUI_LayerStyle 和 ComfyUI_LayerStyle_Advance 节点文件放入 custom_nodes 文件夹后，重新将工作流文件拖入 ComfyUI 页面覆盖一次即可。另外，如果看到 'Brightness & Contrast' 相关报错，可以直接使用 'BrightnessContrastV2' 节点替代。","https:\u002F\u002Fgithub.com\u002Fchflame163\u002FComfyUI_LayerStyle\u002Fissues\u002F142",{"id":119,"question_zh":120,"answer_zh":121,"source_url":122},39347,"RmBgUltra V2 或 PersonMaskUltra V2 节点因网络问题无法自动下载模型，如何手动加载本地模型？","您可以手动下载模型文件并放置到指定目录。对于 vitmatte 方法，请访问 HuggingFace 仓库 
(https:\u002F\u002Fhuggingface.co\u002Fhustvl\u002Fvitmatte-small-composition-1k) 下载模型文件。然后在 ComfyUI 的 models 目录下新建 'vitmatte' 文件夹，将下载的文件拷贝进去即可。确保路径正确，避免报错 'Incorrect path_or_model_id'。","https:\u002F\u002Fgithub.com\u002Fchflame163\u002FComfyUI_LayerStyle\u002Fissues\u002F25",{"id":124,"question_zh":125,"answer_zh":126,"source_url":127},39348,"执行节点时提示 'name 'wget' is not defined' 或模型文件缺失错误，怎么办？","这通常是因为缺少必要的模型文件或环境依赖。请前往项目的 GitHub 页面或相关教程（如 B 站视频 BV1EZ421J751）获取模型文件。具体步骤是：在 ComfyUI 的 models 目录下创建对应的模型文件夹（如 vitmatte），并将下载的模型文件（如 model.pth 等）完整拷贝到该目录中。不要只放空文件夹，必须包含具体的模型权重文件。","https:\u002F\u002Fgithub.com\u002Fchflame163\u002FComfyUI_LayerStyle\u002Fissues\u002F23",{"id":129,"question_zh":130,"answer_zh":131,"source_url":132},39349,"安装插件时遇到 'docopt' 模块缺失或 'psd-tools' 安装失败的错误，如何修复？","这是由于依赖包 docopt 的源码安装失败导致的。解决方法是手动下载 docopt 的预编译 wheel 文件（.whl），例如从 https:\u002F\u002Fwww.piwheels.org\u002Fsimple\u002Fdocopt\u002Fdocopt-0.6.2-py2.py3-none-any.whl 下载。然后将该文件放入 ComfyUI 便携版的 python_embeded 目录（或更新目录），运行命令：`python.exe -s -m pip install docopt-0.6.2-py2.py3-none-any.whl` 进行安装。","https:\u002F\u002Fgithub.com\u002Fchflame163\u002FComfyUI_LayerStyle\u002Fissues\u002F113",{"id":134,"question_zh":135,"answer_zh":136,"source_url":137},39350,"该插件对 OpenCV Python 版本有什么具体要求？","插件不需要安装基础的 opencv-python，而是明确要求安装 `opencv-contrib-python` 库，且版本需为 4.7.0 或更高。请使用 pip 安装：`pip install \"opencv-contrib-python>=4.7.0\"`（版本约束需加引号，以免 shell 将 `>=` 解析为输出重定向）。","https:\u002F\u002Fgithub.com\u002Fchflame163\u002FComfyUI_LayerStyle\u002Fissues\u002F239",{"id":139,"question_zh":140,"answer_zh":141,"source_url":142},39351,"插件是否支持最新的 Segment Anything 2 (SAM2) 模型？","是的，插件已经支持 SAM2。您可以在项目 README 文档中找到关于 'SAM2Ultra' 节点的详细说明和使用方法，无需等待额外更新，直接查看文档即可使用。","https:\u002F\u002Fgithub.com\u002Fchflame163\u002FComfyUI_LayerStyle\u002Fissues\u002F193",{"id":144,"question_zh":145,"answer_zh":146,"source_url":137},39352,"更新插件后无法加载或启动失败，常见原因是什么？","更新后加载失败通常与依赖库版本冲突或缺失有关。首先检查是否满足 OpenCV 的版本要求（需 opencv-contrib-python >= 4.7.0）。其次，如果是便携版用户，确认是否成功安装了 
psd-tools 及其依赖（如 docopt）。如果问题依旧，尝试重启 ComfyUI 或重新应用工作流以刷新节点列表。",[]]
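
附：上文注释 3 所列混合模式（如正片叠底、滤色、叠加）的像素级原理，可以用下面的 Python 草图演示。这只是示意实现，并非 LayerStyle 插件源码；假设输入为归一化到 [0, 1] 的 numpy 数组（或标量），函数名均为自拟。

```python
import numpy as np

def multiply(base, blend):
    # 正片叠底：逐像素相乘，结果不会亮于任一输入；白色 (1.0) 为中性色
    return base * blend

def screen(base, blend):
    # 滤色/屏幕：反相相乘后再反相，结果不会暗于任一输入；黑色 (0.0) 为中性色
    return 1.0 - (1.0 - base) * (1.0 - blend)

def overlay(base, blend):
    # 叠加：以底图 0.5 为界，暗部按正片叠底、亮部按滤色处理
    return np.where(base < 0.5,
                    2.0 * base * blend,
                    1.0 - 2.0 * (1.0 - base) * (1.0 - blend))

def composite(base, blended, opacity=1.0, mask=None):
    # 图层合成：按不透明度（可再乘以蒙版）在底图与混合结果之间线性插值
    alpha = opacity if mask is None else opacity * mask
    return base * (1.0 - alpha) + blended * alpha
```

例如，在图层合成节点中选择正片叠底并把不透明度设为 0.5 时，效果大致相当于 `composite(bg, multiply(bg, fg), opacity=0.5, mask=mask)`。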