[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-Fannovel16--comfyui_controlnet_aux":3,"tool-Fannovel16--comfyui_controlnet_aux":61},[4,18,26,35,44,52],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108322,2,"2026-04-10T11:39:34",[14,15,13],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":32,"last_commit_at":41,"category_tags":42,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[43,13,15,14],"插件",{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 
pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[43,14],{"id":53,"name":54,"github_repo":55,"description_zh":56,"stars":57,"difficulty_score":10,"last_commit_at":58,"category_tags":59,"status":17},4487,"LLMs-from-scratch","rasbt\u002FLLMs-from-scratch","LLMs-from-scratch 是一个基于 PyTorch 的开源教育项目，旨在引导用户从零开始一步步构建一个类似 ChatGPT 的大型语言模型（LLM）。它不仅是同名技术著作的官方代码库，更提供了一套完整的实践方案，涵盖模型开发、预训练及微调的全过程。\n\n该项目主要解决了大模型领域“黑盒化”的学习痛点。许多开发者虽能调用现成模型，却难以深入理解其内部架构与训练机制。通过亲手编写每一行核心代码，用户能够透彻掌握 Transformer 架构、注意力机制等关键原理，从而真正理解大模型是如何“思考”的。此外，项目还包含了加载大型预训练权重进行微调的代码，帮助用户将理论知识延伸至实际应用。\n\nLLMs-from-scratch 特别适合希望深入底层原理的 AI 开发者、研究人员以及计算机专业的学生。对于不满足于仅使用 API，而是渴望探究模型构建细节的技术人员而言，这是极佳的学习资源。其独特的技术亮点在于“循序渐进”的教学设计：将复杂的系统工程拆解为清晰的步骤，配合详细的图表与示例，让构建一个虽小但功能完备的大模型变得触手可及。无论你是想夯实理论基础，还是为未来研发更大规模的模型做准备",90106,"2026-04-06T11:19:32",[60,15,13,14],"语言模型",{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":67,"readme_en":68,"readme_zh":69,"quickstart_zh":70,"use_case_zh":71,"hero_image_url":72,"owner_login":73,"owner_name":74,"owner_avatar_url":75,"owner_bio":76,"owner_company":76,"owner_location":76,"owner_email":76,"owner_twitter":76,"owner_website":76,"owner_url":77,"languages":78,"stars":87,"forks":88,"last_commit_at":89,"license":90,"difficulty_score":32,"env_os":91,"env_gpu":92,"env_ram":93,"env_deps":94,"category_tags":106,"github_topics":76,"view_count":32,"oss_zip_url":76,"oss_zip_packed_at":76,"status":17,"created_at":107,"updated_at":108,"faqs":109,"releases":138},4928,"Fannovel16\u002Fcomfyui_controlnet_aux","comfyui_controlnet_aux","ComfyUI's ControlNet Auxiliary Preprocessors","comfyui_controlnet_aux 是专为 ComfyUI 设计的一套即插即用节点集合，旨在帮助用户快速生成 ControlNet 所需的“提示图像”（hint images）。在利用 AI 进行图像创作时，用户往往需要精确控制画面的线条、边缘或结构，而原始图片通常无法直接满足这一需求。comfyui_controlnet_aux 通过集成多种先进的预处理算法（如 Canny 边缘检测、HED 软边缘提取、各类线稿风格转换及涂鸦识别等），能自动将普通图片转化为适合 ControlNet 理解的结构图，从而让 AI 更精准地遵循用户的构图意图。\n\n这套工具特别适合使用 ComfyUI 进行创作的数字艺术家、设计师以及 AI 绘画爱好者。无论是想将照片转为动漫线稿，还是从草图生成精细成品，它都能大幅降低手动处理素材的门槛。其核心亮点在于高度的集成化：除了需要微调阈值的特殊场景外，绝大多数功能都被整合在一个\"AIO Aux Preprocessor\"通用节点中，让用户无需连接繁琐的节点链即可一键调用。此外，它直接对接 Hugging Face 模型库，确保了预处理器算法与官方 Co","comfyui_controlnet_aux 是专为 ComfyUI 设计的一套即插即用节点集合，旨在帮助用户快速生成 ControlNet 所需的“提示图像”（hint images）。在利用 AI 进行图像创作时，用户往往需要精确控制画面的线条、边缘或结构，而原始图片通常无法直接满足这一需求。comfyui_controlnet_aux 通过集成多种先进的预处理算法（如 Canny 边缘检测、HED 软边缘提取、各类线稿风格转换及涂鸦识别等），能自动将普通图片转化为适合 ControlNet 理解的结构图，从而让 AI 更精准地遵循用户的构图意图。\n\n这套工具特别适合使用 ComfyUI 进行创作的数字艺术家、设计师以及 AI 绘画爱好者。无论是想将照片转为动漫线稿，还是从草图生成精细成品，它都能大幅降低手动处理素材的门槛。其核心亮点在于高度的集成化：除了需要微调阈值的特殊场景外，绝大多数功能都被整合在一个\"AIO Aux Preprocessor\"通用节点中，让用户无需连接繁琐的节点链即可一键调用。此外，它直接对接 Hugging Face 模型库，确保了预处理器算法与官方 ControlNet 项目保持同步，为用户提供了稳定且丰富的创作辅助能力。","# ComfyUI's ControlNet Auxiliary Preprocessors\nPlug-and-play [ComfyUI](https:\u002F\u002Fgithub.com\u002Fcomfyanonymous\u002FComfyUI) node sets for making [ControlNet](https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FControlNet\u002F) hint images\n\n\"anime style, a protest in the street, cyberpunk city, a woman with pink hair and golden eyes (looking at the viewer) is holding a sign with the text \"ComfyUI ControlNet Aux\" in bold, neon pink\" on Flux.1 Dev\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FFannovel16_comfyui_controlnet_aux_readme_18b6a8afb656.jpg)\n\nThe code is copy-pasted from the respective folders in 
https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FControlNet\u002Ftree\u002Fmain\u002Fannotator and connected to [the 🤗 Hub](https:\u002F\u002Fhuggingface.co\u002Flllyasviel\u002FAnnotators).\n\nAll credit & copyright goes to https:\u002F\u002Fgithub.com\u002Flllyasviel.\n\n# Updates\nGo to the [Update page](.\u002FUPDATES.md) to follow updates.\n\n# Installation:\n## Using ComfyUI Manager (recommended):\nInstall [ComfyUI Manager](https:\u002F\u002Fgithub.com\u002Fltdrdata\u002FComfyUI-Manager) and follow the steps introduced there to install this repo.\n\n## Alternative:\nIf you're running on Linux, or using a non-admin account on Windows, you'll want to ensure `\u002FComfyUI\u002Fcustom_nodes` and `comfyui_controlnet_aux` have write permissions.\n\nThere is now an **install.bat** you can run to install into a portable setup if one is detected. Otherwise it will default to a system install and assume you followed ComfyUI's manual installation steps. \n\nIf you can't run **install.bat** (e.g. you are a Linux user), open the CMD\u002Fshell and do the following:\n  - Navigate to your `\u002FComfyUI\u002Fcustom_nodes\u002F` folder\n  - Run `git clone https:\u002F\u002Fgithub.com\u002FFannovel16\u002Fcomfyui_controlnet_aux\u002F`\n  - Navigate to your `comfyui_controlnet_aux` folder\n    - Portable\u002Fvenv:\n       - Run `path\u002Fto\u002FComfyUI\u002Fpython_embeded\u002Fpython.exe -s -m pip install -r requirements.txt`\n\t- With system Python\n\t   - Run `pip install -r requirements.txt`\n  - Start ComfyUI\n\n# Nodes\nPlease note that this repo only supports preprocessors that make hint images (e.g. stickman, canny edge, etc.).\nAll preprocessors except Inpaint are integrated into the `AIO Aux Preprocessor` node. \nThis node lets you quickly pick a preprocessor, but a preprocessor's own threshold parameters cannot be set from it.\nYou need to use the preprocessor's dedicated node directly to set thresholds.\n\n# Nodes (sections are categories in Comfy menu)\n## Line Extractors\n| Preprocessor Node           | sd-webui-controlnet\u002Fother |          ControlNet\u002FT2I-Adapter           |\n|-----------------------------|---------------------------|-------------------------------------------|\n| Binary Lines                | binary                    | control_scribble                          |\n| Canny Edge                  | canny                     | control_v11p_sd15_canny \u003Cbr> control_canny \u003Cbr> t2iadapter_canny |\n| HED Soft-Edge Lines         | hed                       | control_v11p_sd15_softedge \u003Cbr> control_hed |\n| Standard Lineart            | standard_lineart          | control_v11p_sd15_lineart                 |\n| Realistic Lineart           | lineart (or `lineart_coarse` if `coarse` is enabled) | control_v11p_sd15_lineart |\n| Anime Lineart               | lineart_anime             | control_v11p_sd15s2_lineart_anime         |\n| Manga Lineart               | lineart_anime_denoise     | control_v11p_sd15s2_lineart_anime         |\n| M-LSD Lines                 | mlsd                      | control_v11p_sd15_mlsd \u003Cbr> control_mlsd  |\n| PiDiNet Soft-Edge Lines     | pidinet                   | control_v11p_sd15_softedge \u003Cbr> control_scribble |\n| Scribble Lines              | scribble                  | control_v11p_sd15_scribble \u003Cbr> control_scribble |\n| Scribble XDoG Lines         | scribble_xdog             | control_v11p_sd15_scribble \u003Cbr> control_scribble |\n| Fake Scribble Lines         | scribble_hed              | control_v11p_sd15_scribble \u003Cbr> control_scribble |\n| TEED Soft-Edge Lines        | teed   
                   | [controlnet-sd-xl-1.0-softedge-dexined](https:\u002F\u002Fhuggingface.co\u002FSargeZT\u002Fcontrolnet-sd-xl-1.0-softedge-dexined\u002Fblob\u002Fmain\u002Fcontrolnet-sd-xl-1.0-softedge-dexined.safetensors) \u003Cbr> control_v11p_sd15_softedge (Theoretically)\n| Scribble PiDiNet Lines      | scribble_pidinet          | control_v11p_sd15_scribble \u003Cbr> control_scribble |\n| AnyLine Lineart             |                           | mistoLine_fp16.safetensors \u003Cbr> mistoLine_rank256 \u003Cbr> control_v11p_sd15s2_lineart_anime \u003Cbr> control_v11p_sd15_lineart |\n\n## Normal and Depth Estimators\n| Preprocessor Node           | sd-webui-controlnet\u002Fother |          ControlNet\u002FT2I-Adapter           |\n|-----------------------------|---------------------------|-------------------------------------------|\n| MiDaS Depth Map           | (normal) depth            | control_v11f1p_sd15_depth \u003Cbr> control_depth \u003Cbr> t2iadapter_depth |\n| LeReS Depth Map           | depth_leres               | control_v11f1p_sd15_depth \u003Cbr> control_depth \u003Cbr> t2iadapter_depth |\n| Zoe Depth Map             | depth_zoe                 | control_v11f1p_sd15_depth \u003Cbr> control_depth \u003Cbr> t2iadapter_depth |\n| MiDaS Normal Map          | normal_map                | control_normal                            |\n| BAE Normal Map            | normal_bae                | control_v11p_sd15_normalbae               |\n| MeshGraphormer Hand Refiner ([HandRefinder](https:\u002F\u002Fgithub.com\u002Fwenquanlu\u002FHandRefiner))  | depth_hand_refiner | [control_sd15_inpaint_depth_hand_fp16](https:\u002F\u002Fhuggingface.co\u002Fhr16\u002FControlNet-HandRefiner-pruned\u002Fblob\u002Fmain\u002Fcontrol_sd15_inpaint_depth_hand_fp16.safetensors) |\n| Depth Anything            |  depth_anything           | [Depth-Anything](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002FLiheYoung\u002FDepth-Anything\u002Fblob\u002Fmain\u002Fcheckpoints_controlnet\u002Fdiffusion_pytorch_model.safetensors) |\n| Zoe Depth Anything \u003Cbr> (Basically Zoe but the encoder is replaced with DepthAnything)       | depth_anything | [Depth-Anything](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002FLiheYoung\u002FDepth-Anything\u002Fblob\u002Fmain\u002Fcheckpoints_controlnet\u002Fdiffusion_pytorch_model.safetensors) |\n| Normal DSINE              |                           | control_normal\u002Fcontrol_v11p_sd15_normalbae |\n| Metric3D Depth            |                           | control_v11f1p_sd15_depth \u003Cbr> control_depth \u003Cbr> t2iadapter_depth |\n| Metric3D Normal           |                           | control_v11p_sd15_normalbae |\n| Depth Anything V2         |                           | [Depth-Anything](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002FLiheYoung\u002FDepth-Anything\u002Fblob\u002Fmain\u002Fcheckpoints_controlnet\u002Fdiffusion_pytorch_model.safetensors) |\n\n## Faces and Poses Estimators\n| Preprocessor Node           | sd-webui-controlnet\u002Fother |          ControlNet\u002FT2I-Adapter           |\n|-----------------------------|---------------------------|-------------------------------------------|\n| DWPose Estimator                 | dw_openpose_full          | control_v11p_sd15_openpose \u003Cbr> control_openpose \u003Cbr> t2iadapter_openpose |\n| OpenPose Estimator               | openpose (detect_body) \u003Cbr> openpose_hand (detect_body + detect_hand) \u003Cbr> openpose_faceonly (detect_face) \u003Cbr> openpose_full (detect_hand + detect_body + 
detect_face)    | control_v11p_sd15_openpose \u003Cbr> control_openpose \u003Cbr> t2iadapter_openpose |\n| MediaPipe Face Mesh         | mediapipe_face            | controlnet_sd21_laion_face_v2             | \n| Animal Estimator                 | animal_openpose           | [control_sd15_animal_openpose_fp16](https:\u002F\u002Fhuggingface.co\u002Fhuchenlei\u002Fanimal_openpose\u002Fblob\u002Fmain\u002Fcontrol_sd15_animal_openpose_fp16.pth) |\n\n## Optical Flow Estimators\n| Preprocessor Node           | sd-webui-controlnet\u002Fother |          ControlNet\u002FT2I-Adapter           |\n|-----------------------------|---------------------------|-------------------------------------------|\n| Unimatch Optical Flow       |                           | [DragNUWA](https:\u002F\u002Fgithub.com\u002FProjectNUWA\u002FDragNUWA) |\n\n### How to get OpenPose-format JSON?\n#### User-side\nThis workflow will save images to ComfyUI's output folder (the same location as output images). If you can't find the `Save Pose Keypoints` node, update this extension.\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FFannovel16_comfyui_controlnet_aux_readme_4f1f0306aec8.png)\n\n#### Dev-side\nAn array of [OpenPose-format JSON](https:\u002F\u002Fgithub.com\u002FCMU-Perceptual-Computing-Lab\u002Fopenpose\u002Fblob\u002Fmaster\u002Fdoc\u002F02_output.md#json-output-format) corresponding to each frame in an IMAGE batch can be obtained from DWPose and OpenPose using `app.nodeOutputs` on the UI or the `\u002Fhistory` API endpoint. JSON output from AnimalPose uses a roughly similar format to OpenPose JSON:\n```\n[\n    {\n        \"version\": \"ap10k\",\n        \"animals\": [\n            [[x1, y1, 1], [x2, y2, 1],..., [x17, y17, 1]],\n            [[x1, y1, 1], [x2, y2, 1],..., [x17, y17, 1]],\n            ...\n        ],\n        \"canvas_height\": 512,\n        \"canvas_width\": 768\n    },\n    ...\n]\n```\n\nFor extension developers (e.g. 
Openpose editor):\n```js\nconst poseNodes = app.graph._nodes.filter(node => [\"OpenposePreprocessor\", \"DWPreprocessor\", \"AnimalPosePreprocessor\"].includes(node.type))\nfor (const poseNode of poseNodes) {\n    const openposeResults = JSON.parse(app.nodeOutputs[poseNode.id].openpose_json[0])\n    console.log(openposeResults) \u002F\u002FAn array containing Openpose JSON for each frame\n}\n```\n\nFor API users:\nJavaScript\n```js\nimport fetch from \"node-fetch\" \u002F\u002FRemember to add \"type\": \"module\" to \"package.json\"\nasync function main() {\n    const promptId = '792c1905-ecfe-41f4-8114-83e6a4a09a9f' \u002F\u002FToo lazy to POST \u002Fqueue\n    let history = await fetch(`http:\u002F\u002F127.0.0.1:8188\u002Fhistory\u002F${promptId}`).then(re => re.json())\n    history = history[promptId]\n    const nodeOutputs = Object.values(history.outputs).filter(output => output.openpose_json)\n    for (const nodeOutput of nodeOutputs) {\n        const openposeResults = JSON.parse(nodeOutput.openpose_json[0])\n        console.log(openposeResults) \u002F\u002FAn array containing Openpose JSON for each frame\n    }\n}\nmain()\n```\n\nPython\n```py\nimport json, urllib.request\n\nserver_address = \"127.0.0.1:8188\"\nprompt_id = '' #Too lazy to POST \u002Fqueue\n\ndef get_history(prompt_id):\n    with urllib.request.urlopen(\"http:\u002F\u002F{}\u002Fhistory\u002F{}\".format(server_address, prompt_id)) as response:\n        return json.loads(response.read())\n\nhistory = get_history(prompt_id)[prompt_id]\nfor node_id in history['outputs']:  #iterate over every node that produced outputs\n    node_output = history['outputs'][node_id]\n    if 'openpose_json' in node_output:\n        print(json.loads(node_output['openpose_json'][0])) #A list containing Openpose JSON for each frame\n```\n## Semantic Segmentation\n| Preprocessor Node           | sd-webui-controlnet\u002Fother |          ControlNet\u002FT2I-Adapter           |\n|-----------------------------|---------------------------|-------------------------------------------|\n| OneFormer ADE20K Segmentor  | oneformer_ade20k          | control_v11p_sd15_seg                     |\n| OneFormer COCO Segmentor    | oneformer_coco            | control_v11p_sd15_seg                     |\n| UniFormer Segmentor         | segmentation              | control_sd15_seg \u003Cbr> control_v11p_sd15_seg |\n\n## T2IAdapter-only\n| Preprocessor Node           | sd-webui-controlnet\u002Fother |          ControlNet\u002FT2I-Adapter           |\n|-----------------------------|---------------------------|-------------------------------------------|\n| Color Pallete               | color                     | t2iadapter_color                          |\n| Content Shuffle             | shuffle                   | t2iadapter_style                          |\n\n## Recolor\n| Preprocessor Node           | sd-webui-controlnet\u002Fother |          ControlNet\u002FT2I-Adapter           |\n|-----------------------------|---------------------------|-------------------------------------------|\n| Image Luminance             | recolor_luminance         | [ioclab_sd15_recolor](https:\u002F\u002Fhuggingface.co\u002Flllyasviel\u002Fsd_control_collection\u002Fresolve\u002Fmain\u002Fioclab_sd15_recolor.safetensors) \u003Cbr> [sai_xl_recolor_256lora](https:\u002F\u002Fhuggingface.co\u002Flllyasviel\u002Fsd_control_collection\u002Fresolve\u002Fmain\u002Fsai_xl_recolor_256lora.safetensors) \u003Cbr> 
[bdsqlsz_controlllite_xl_recolor_luminance](https:\u002F\u002Fhuggingface.co\u002Fbdsqlsz\u002Fqinglong_controlnet-lllite\u002Fresolve\u002Fmain\u002Fbdsqlsz_controlllite_xl_recolor_luminance.safetensors) |\n| Image Intensity             | recolor_intensity         | Idk. Maybe same as above? |\n\n# Examples\n> A picture is worth a thousand words\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FFannovel16_comfyui_controlnet_aux_readme_ab238baf7e57.jpg)\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FFannovel16_comfyui_controlnet_aux_readme_58c8ae05a0b5.jpg)\n\n# Testing workflow\nhttps:\u002F\u002Fgithub.com\u002FFannovel16\u002Fcomfyui_controlnet_aux\u002Fblob\u002Fmain\u002Fexamples\u002FExecuteAll.png\nInput image: https:\u002F\u002Fgithub.com\u002FFannovel16\u002Fcomfyui_controlnet_aux\u002Fblob\u002Fmain\u002Fexamples\u002Fcomfyui-controlnet-aux-logo.png\n\n# Q&A:\n## Why don't some nodes appear after I installed this repo?\n\nThis repo has a new mechanism which will skip any custom node that can't be imported. If you hit this case, please create an issue on the [Issues tab](https:\u002F\u002Fgithub.com\u002FFannovel16\u002Fcomfyui_controlnet_aux\u002Fissues) with the log from the command line.\n\n## DWPose\u002FAnimalPose only uses the CPU and is very slow. How can I make it use the GPU?\nThere are two ways to speed up DWPose: TorchScript checkpoints (.torchscript.pt) or ONNXRuntime checkpoints (.onnx). The TorchScript way is a little slower than ONNXRuntime but doesn't require any additional library and is still far faster than the CPU. \n\nA TorchScript bbox detector is compatible with an ONNX pose estimator and vice versa.\n### TorchScript\nSet `bbox_detector` and `pose_estimator` according to this picture. You can try other bbox detectors ending with `.torchscript.pt` to reduce bbox detection time if your input images are ideal.\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FFannovel16_comfyui_controlnet_aux_readme_b22705d380e8.png)\n### ONNXRuntime\nIf onnxruntime is installed successfully and the checkpoint used ends with `.onnx`, it will replace the default cv2 backend to take advantage of the GPU. Note that if you are using an NVIDIA card, this method currently only works on CUDA 11.8 (ComfyUI_windows_portable_nvidia_cu118_or_cpu.7z) unless you compile onnxruntime yourself.\n\n1. Know your onnxruntime build:\n* * NVIDIA CUDA 11.x or below\u002FAMD GPU: `onnxruntime-gpu`\n* * NVIDIA CUDA 12.x: `onnxruntime-gpu --extra-index-url https:\u002F\u002Faiinfra.pkgs.visualstudio.com\u002FPublicPackages\u002F_packaging\u002Fonnxruntime-cuda-12\u002Fpypi\u002Fsimple\u002F`\n* * DirectML: `onnxruntime-directml`\n* * OpenVINO: `onnxruntime-openvino`\n\nNote that if this is your first time using ComfyUI, please test whether it can run on your device before doing the next steps.\n\n2. Add it to `requirements.txt`\n\n3. 
Run `install.bat` or pip command mentioned in Installation\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FFannovel16_comfyui_controlnet_aux_readme_c612474c8dc6.png)\n\n# Assets files of preprocessors\n* anime_face_segment:  [bdsqlsz\u002Fqinglong_controlnet-lllite\u002FAnnotators\u002FUNet.pth](https:\u002F\u002Fhuggingface.co\u002Fbdsqlsz\u002Fqinglong_controlnet-lllite\u002Fblob\u002Fmain\u002FAnnotators\u002FUNet.pth), [anime-seg\u002Fisnetis.ckpt](https:\u002F\u002Fhuggingface.co\u002Fskytnt\u002Fanime-seg\u002Fblob\u002Fmain\u002Fisnetis.ckpt)\n* densepose:  [LayerNorm\u002FDensePose-TorchScript-with-hint-image\u002Fdensepose_r50_fpn_dl.torchscript](https:\u002F\u002Fhuggingface.co\u002FLayerNorm\u002FDensePose-TorchScript-with-hint-image\u002Fblob\u002Fmain\u002Fdensepose_r50_fpn_dl.torchscript)\n* dwpose:  \n* * bbox_detector: Either [yzd-v\u002FDWPose\u002Fyolox_l.onnx](https:\u002F\u002Fhuggingface.co\u002Fyzd-v\u002FDWPose\u002Fblob\u002Fmain\u002Fyolox_l.onnx), [hr16\u002Fyolox-onnx\u002Fyolox_l.torchscript.pt](https:\u002F\u002Fhuggingface.co\u002Fhr16\u002Fyolox-onnx\u002Fblob\u002Fmain\u002Fyolox_l.torchscript.pt), [hr16\u002Fyolo-nas-fp16\u002Fyolo_nas_l_fp16.onnx](https:\u002F\u002Fhuggingface.co\u002Fhr16\u002Fyolo-nas-fp16\u002Fblob\u002Fmain\u002Fyolo_nas_l_fp16.onnx), [hr16\u002Fyolo-nas-fp16\u002Fyolo_nas_m_fp16.onnx](https:\u002F\u002Fhuggingface.co\u002Fhr16\u002Fyolo-nas-fp16\u002Fblob\u002Fmain\u002Fyolo_nas_m_fp16.onnx), [hr16\u002Fyolo-nas-fp16\u002Fyolo_nas_s_fp16.onnx](https:\u002F\u002Fhuggingface.co\u002Fhr16\u002Fyolo-nas-fp16\u002Fblob\u002Fmain\u002Fyolo_nas_s_fp16.onnx)\n* * pose_estimator: Either [hr16\u002FDWPose-TorchScript-BatchSize5\u002Fdw-ll_ucoco_384_bs5.torchscript.pt](https:\u002F\u002Fhuggingface.co\u002Fhr16\u002FDWPose-TorchScript-BatchSize5\u002Fblob\u002Fmain\u002Fdw-ll_ucoco_384_bs5.torchscript.pt), [yzd-v\u002FDWPose\u002Fdw-ll_ucoco_384.onnx](https:\u002F\u002Fhuggingface.co\u002Fyzd-v\u002FDWPose\u002Fblob\u002Fmain\u002Fdw-ll_ucoco_384.onnx)\n* animal_pose (ap10k):\n* * bbox_detector: Either [yzd-v\u002FDWPose\u002Fyolox_l.onnx](https:\u002F\u002Fhuggingface.co\u002Fyzd-v\u002FDWPose\u002Fblob\u002Fmain\u002Fyolox_l.onnx), [hr16\u002Fyolox-onnx\u002Fyolox_l.torchscript.pt](https:\u002F\u002Fhuggingface.co\u002Fhr16\u002Fyolox-onnx\u002Fblob\u002Fmain\u002Fyolox_l.torchscript.pt), [hr16\u002Fyolo-nas-fp16\u002Fyolo_nas_l_fp16.onnx](https:\u002F\u002Fhuggingface.co\u002Fhr16\u002Fyolo-nas-fp16\u002Fblob\u002Fmain\u002Fyolo_nas_l_fp16.onnx), [hr16\u002Fyolo-nas-fp16\u002Fyolo_nas_m_fp16.onnx](https:\u002F\u002Fhuggingface.co\u002Fhr16\u002Fyolo-nas-fp16\u002Fblob\u002Fmain\u002Fyolo_nas_m_fp16.onnx), [hr16\u002Fyolo-nas-fp16\u002Fyolo_nas_s_fp16.onnx](https:\u002F\u002Fhuggingface.co\u002Fhr16\u002Fyolo-nas-fp16\u002Fblob\u002Fmain\u002Fyolo_nas_s_fp16.onnx)\n* * pose_estimator: Either [hr16\u002FDWPose-TorchScript-BatchSize5\u002Frtmpose-m_ap10k_256_bs5.torchscript.pt](https:\u002F\u002Fhuggingface.co\u002Fhr16\u002FDWPose-TorchScript-BatchSize5\u002Fblob\u002Fmain\u002Frtmpose-m_ap10k_256_bs5.torchscript.pt), [hr16\u002FUnJIT-DWPose\u002Frtmpose-m_ap10k_256.onnx](https:\u002F\u002Fhuggingface.co\u002Fhr16\u002FUnJIT-DWPose\u002Fblob\u002Fmain\u002Frtmpose-m_ap10k_256.onnx)\n* hed:  [lllyasviel\u002FAnnotators\u002FControlNetHED.pth](https:\u002F\u002Fhuggingface.co\u002Flllyasviel\u002FAnnotators\u002Fblob\u002Fmain\u002FControlNetHED.pth)\n* leres:  
[lllyasviel\u002FAnnotators\u002Fres101.pth](https:\u002F\u002Fhuggingface.co\u002Flllyasviel\u002FAnnotators\u002Fblob\u002Fmain\u002Fres101.pth), [lllyasviel\u002FAnnotators\u002Flatest_net_G.pth](https:\u002F\u002Fhuggingface.co\u002Flllyasviel\u002FAnnotators\u002Fblob\u002Fmain\u002Flatest_net_G.pth)\n* lineart:  [lllyasviel\u002FAnnotators\u002Fsk_model.pth](https:\u002F\u002Fhuggingface.co\u002Flllyasviel\u002FAnnotators\u002Fblob\u002Fmain\u002Fsk_model.pth), [lllyasviel\u002FAnnotators\u002Fsk_model2.pth](https:\u002F\u002Fhuggingface.co\u002Flllyasviel\u002FAnnotators\u002Fblob\u002Fmain\u002Fsk_model2.pth)\n* lineart_anime:  [lllyasviel\u002FAnnotators\u002FnetG.pth](https:\u002F\u002Fhuggingface.co\u002Flllyasviel\u002FAnnotators\u002Fblob\u002Fmain\u002FnetG.pth)\n* manga_line:  [lllyasviel\u002FAnnotators\u002Ferika.pth](https:\u002F\u002Fhuggingface.co\u002Flllyasviel\u002FAnnotators\u002Fblob\u002Fmain\u002Ferika.pth)\n* mesh_graphormer:  [hr16\u002FControlNet-HandRefiner-pruned\u002Fgraphormer_hand_state_dict.bin](https:\u002F\u002Fhuggingface.co\u002Fhr16\u002FControlNet-HandRefiner-pruned\u002Fblob\u002Fmain\u002Fgraphormer_hand_state_dict.bin), [hr16\u002FControlNet-HandRefiner-pruned\u002Fhrnetv2_w64_imagenet_pretrained.pth](https:\u002F\u002Fhuggingface.co\u002Fhr16\u002FControlNet-HandRefiner-pruned\u002Fblob\u002Fmain\u002Fhrnetv2_w64_imagenet_pretrained.pth)\n* midas:  [lllyasviel\u002FAnnotators\u002Fdpt_hybrid-midas-501f0c75.pt](https:\u002F\u002Fhuggingface.co\u002Flllyasviel\u002FAnnotators\u002Fblob\u002Fmain\u002Fdpt_hybrid-midas-501f0c75.pt)\n* mlsd:  [lllyasviel\u002FAnnotators\u002Fmlsd_large_512_fp32.pth](https:\u002F\u002Fhuggingface.co\u002Flllyasviel\u002FAnnotators\u002Fblob\u002Fmain\u002Fmlsd_large_512_fp32.pth)\n* normalbae:  [lllyasviel\u002FAnnotators\u002Fscannet.pt](https:\u002F\u002Fhuggingface.co\u002Flllyasviel\u002FAnnotators\u002Fblob\u002Fmain\u002Fscannet.pt)\n* oneformer:  [lllyasviel\u002FAnnotators\u002F250_16_swin_l_oneformer_ade20k_160k.pth](https:\u002F\u002Fhuggingface.co\u002Flllyasviel\u002FAnnotators\u002Fblob\u002Fmain\u002F250_16_swin_l_oneformer_ade20k_160k.pth)\n* open_pose:  [lllyasviel\u002FAnnotators\u002Fbody_pose_model.pth](https:\u002F\u002Fhuggingface.co\u002Flllyasviel\u002FAnnotators\u002Fblob\u002Fmain\u002Fbody_pose_model.pth), [lllyasviel\u002FAnnotators\u002Fhand_pose_model.pth](https:\u002F\u002Fhuggingface.co\u002Flllyasviel\u002FAnnotators\u002Fblob\u002Fmain\u002Fhand_pose_model.pth), [lllyasviel\u002FAnnotators\u002Ffacenet.pth](https:\u002F\u002Fhuggingface.co\u002Flllyasviel\u002FAnnotators\u002Fblob\u002Fmain\u002Ffacenet.pth)\n* pidi:  [lllyasviel\u002FAnnotators\u002Ftable5_pidinet.pth](https:\u002F\u002Fhuggingface.co\u002Flllyasviel\u002FAnnotators\u002Fblob\u002Fmain\u002Ftable5_pidinet.pth)\n* sam:  [dhkim2810\u002FMobileSAM\u002Fmobile_sam.pt](https:\u002F\u002Fhuggingface.co\u002Fdhkim2810\u002FMobileSAM\u002Fblob\u002Fmain\u002Fmobile_sam.pt)\n* uniformer:  [lllyasviel\u002FAnnotators\u002Fupernet_global_small.pth](https:\u002F\u002Fhuggingface.co\u002Flllyasviel\u002FAnnotators\u002Fblob\u002Fmain\u002Fupernet_global_small.pth)\n* zoe:  [lllyasviel\u002FAnnotators\u002FZoeD_M12_N.pt](https:\u002F\u002Fhuggingface.co\u002Flllyasviel\u002FAnnotators\u002Fblob\u002Fmain\u002FZoeD_M12_N.pt)\n* teed:  [bdsqlsz\u002Fqinglong_controlnet-lllite\u002F7_model.pth](https:\u002F\u002Fhuggingface.co\u002Fbdsqlsz\u002Fqinglong_controlnet-lllite\u002Fblob\u002Fmain\u002FAnnotators\u002F7_model.pth)\n* 
depth_anything: Either [LiheYoung\u002FDepth-Anything\u002Fcheckpoints\u002Fdepth_anything_vitl14.pth](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002FLiheYoung\u002FDepth-Anything\u002Fblob\u002Fmain\u002Fcheckpoints\u002Fdepth_anything_vitl14.pth), [LiheYoung\u002FDepth-Anything\u002Fcheckpoints\u002Fdepth_anything_vitb14.pth](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002FLiheYoung\u002FDepth-Anything\u002Fblob\u002Fmain\u002Fcheckpoints\u002Fdepth_anything_vitb14.pth) or [LiheYoung\u002FDepth-Anything\u002Fcheckpoints\u002Fdepth_anything_vits14.pth](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002FLiheYoung\u002FDepth-Anything\u002Fblob\u002Fmain\u002Fcheckpoints\u002Fdepth_anything_vits14.pth)\n* diffusion_edge: Either [hr16\u002FDiffusion-Edge\u002Fdiffusion_edge_indoor.pt](https:\u002F\u002Fhuggingface.co\u002Fhr16\u002FDiffusion-Edge\u002Fblob\u002Fmain\u002Fdiffusion_edge_indoor.pt), [hr16\u002FDiffusion-Edge\u002Fdiffusion_edge_urban.pt](https:\u002F\u002Fhuggingface.co\u002Fhr16\u002FDiffusion-Edge\u002Fblob\u002Fmain\u002Fdiffusion_edge_urban.pt) or [hr16\u002FDiffusion-Edge\u002Fdiffusion_edge_natrual.pt](https:\u002F\u002Fhuggingface.co\u002Fhr16\u002FDiffusion-Edge\u002Fblob\u002Fmain\u002Fdiffusion_edge_natrual.pt)\n* unimatch: Either [hr16\u002FUnimatch\u002Fgmflow-scale2-regrefine6-mixdata.pth](https:\u002F\u002Fhuggingface.co\u002Fhr16\u002FUnimatch\u002Fblob\u002Fmain\u002Fgmflow-scale2-regrefine6-mixdata.pth), [hr16\u002FUnimatch\u002Fgmflow-scale2-mixdata.pth](https:\u002F\u002Fhuggingface.co\u002Fhr16\u002FUnimatch\u002Fblob\u002Fmain\u002Fgmflow-scale2-mixdata.pth) or [hr16\u002FUnimatch\u002Fgmflow-scale1-mixdata.pth](https:\u002F\u002Fhuggingface.co\u002Fhr16\u002FUnimatch\u002Fblob\u002Fmain\u002Fgmflow-scale1-mixdata.pth)\n* zoe_depth_anything: Either [LiheYoung\u002FDepth-Anything\u002Fcheckpoints_metric_depth\u002Fdepth_anything_metric_depth_indoor.pt](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002FLiheYoung\u002FDepth-Anything\u002Fblob\u002Fmain\u002Fcheckpoints_metric_depth\u002Fdepth_anything_metric_depth_indoor.pt) or [LiheYoung\u002FDepth-Anything\u002Fcheckpoints_metric_depth\u002Fdepth_anything_metric_depth_outdoor.pt](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002FLiheYoung\u002FDepth-Anything\u002Fblob\u002Fmain\u002Fcheckpoints_metric_depth\u002Fdepth_anything_metric_depth_outdoor.pt)\n# 2000 Stars 😄\n\u003Ca href=\"https:\u002F\u002Fstar-history.com\u002F#Fannovel16\u002Fcomfyui_controlnet_aux&Date\">\n  \u003Cpicture>\n    \u003Csource media=\"(prefers-color-scheme: dark)\" srcset=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FFannovel16_comfyui_controlnet_aux_readme_f435cec202eb.png&theme=dark\" \u002F>\n    \u003Csource media=\"(prefers-color-scheme: light)\" srcset=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FFannovel16_comfyui_controlnet_aux_readme_f435cec202eb.png\" \u002F>\n    \u003Cimg alt=\"Star History Chart\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FFannovel16_comfyui_controlnet_aux_readme_f435cec202eb.png\" \u002F>\n  \u003C\u002Fpicture>\n\u003C\u002Fa>\n\nThanks for yalls supports. 
I never thought the graph for stars would be linear lol.\n","# ComfyUI 的 ControlNet 辅助预处理器\n即插即用的 [ComfyUI](https:\u002F\u002Fgithub.com\u002Fcomfyanonymous\u002FComfyUI) 节点集，用于生成 [ControlNet](https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FControlNet\u002F) 提示图。\n\n“动漫风格，街头抗议场景，赛博朋克城市，一位粉色头发、金色眼睛的女性（正注视着观众）手持标语，上面用粗体霓虹粉字写着‘ComfyUI ControlNet Aux’”——基于 Flux.1 Dev 模型。\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FFannovel16_comfyui_controlnet_aux_readme_18b6a8afb656.jpg)\n\n代码直接从 https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FControlNet\u002Ftree\u002Fmain\u002Fannotator 中相应文件夹复制而来，并与 [Hugging Face Hub](https:\u002F\u002Fhuggingface.co\u002Flllyasviel\u002FAnnotators) 相连。\n\n所有版权及署名权归 https:\u002F\u002Fgithub.com\u002Flllyasviel 所有。\n\n# 更新\n请前往 [更新页面](.\u002FUPDATES.md) 查看最新动态。\n\n# 安装：\n## 使用 ComfyUI 管理器（推荐）：\n安装 [ComfyUI 管理器](https:\u002F\u002Fgithub.com\u002Fltdrdata\u002FComfyUI-Manager)，并按照其说明步骤安装本仓库。\n\n## 其他方式：\n如果您使用的是 Linux 系统，或在 Windows 上以非管理员账户运行，请确保 `\u002FComfyUI\u002Fcustom_nodes` 和 `comfyui_controlnet_aux` 文件夹具有写入权限。\n\n现在提供了一个 **install.bat** 脚本，若检测到便携式安装环境，则会自动执行便携式安装；否则将默认为系统安装，并假定您已按照 ComfyUI 的手动安装步骤操作。\n\n如果您无法运行 **install.bat**（例如，您是 Linux 用户），请打开终端或命令行，执行以下操作：\n  - 导航至您的 `\u002FComfyUI\u002Fcustom_nodes\u002F` 文件夹。\n  - 运行 `git clone https:\u002F\u002Fgithub.com\u002FFannovel16\u002Fcomfyui_controlnet_aux\u002F`。\n  - 进入 `comfyui_controlnet_aux` 文件夹：\n    - 对于便携式或虚拟环境：\n       - 运行 `path\u002Fto\u002FComfUI\u002Fpython_embeded\u002Fpython.exe -s -m pip install -r requirements.txt`。\n    - 对于系统 Python：\n       - 运行 `pip install -r requirements.txt`。\n  - 启动 ComfyUI。\n\n# 节点\n请注意，本仓库仅支持用于生成提示图的预处理器（如人体轮廓、Canny 边缘等）。除 Inpaint 外的所有预处理器均已集成到 `AIO 辅助预处理器` 节点中。该节点可快速获取预处理结果，但无法设置预处理器自身的阈值参数。如需调整阈值，需直接使用对应的预处理器节点。\n\n# 节点（分类按 Comfy 菜单中的类别划分）\n## 线条提取器\n| 预处理器节点           | sd-webui-controlnet\u002F其他 |          ControlNet\u002FT2I-Adapter           |\n|-----------------------------|---------------------------|-------------------------------------------|\n| 二值线条                | binary                    | control_scribble                          |\n| Canny 边缘                  | canny                     | control_v11p_sd15_canny \u003Cbr> control_canny \u003Cbr> t2iadapter_canny |\n| HED 软边缘线条         | hed                       | control_v11p_sd15_softedge \u003Cbr> control_hed |\n| 标准线稿            | standard_lineart          | control_v11p_sd15_lineart                 |\n| 写实线稿           | lineart（若启用 `coarse` 模式则为 `lineart_coarse`） | control_v11p_sd15_lineart |\n| 动漫线稿               | lineart_anime             | control_v11p_sd15s2_lineart_anime         |\n| 漫画线稿               | lineart_anime_denoise     | control_v11p_sd15s2_lineart_anime         |\n| M-LSD 线条                 | mlsd                      | control_v11p_sd15_mlsd \u003Cbr> control_mlsd  |\n| PiDiNet 软边缘线条     | pidinet                   | control_v11p_sd15_softedge \u003Cbr> control_scribble |\n| 涂鸦线条              | scribble                  | control_v11p_sd15_scribble \u003Cbr> control_scribble |\n| XDoG 涂鸦线条         | scribble_xdog             | control_v11p_sd15_scribble \u003Cbr> control_scribble |\n| 假涂鸦线条            | scribble_hed              | control_v11p_sd15_scribble \u003Cbr> control_scribble |\n| TEED 软边缘线条        | teed                      | [controlnet-sd-xl-1.0-softedge-dexined](https:\u002F\u002Fhuggingface.co\u002FSargeZT\u002Fcontrolnet-sd-xl-1.0-softedge-dexined\u002Fblob\u002Fmain\u002Fcontrolnet-sd-xl-1.0-softedge-dexined.safetensors) \u003Cbr> 
control_v11p_sd15_softedge（理论上）|\n| PiDiNet 涂鸦线条      | scribble_pidinet          | control_v11p_sd15_scribble \u003Cbr> control_scribble |\n| AnyLine 线稿             |                           | mistoLine_fp16.safetensors \u003Cbr> mistoLine_rank256 \u003Cbr> control_v11p_sd15s2_lineart_anime \u003Cbr> control_v11p_sd15_lineart |\n\n## 法线与深度估计器\n| 预处理器节点           | sd-webui-controlnet\u002F其他 |          ControlNet\u002FT2I-Adapter           |\n|-----------------------------|---------------------------|-------------------------------------------|\n| MiDaS 深度图           | （法线）depth            | control_v11f1p_sd15_depth \u003Cbr> control_depth \u003Cbr> t2iadapter_depth |\n| LeReS 深度图           | depth_leres               | control_v11f1p_sd15_depth \u003Cbr> control_depth \u003Cbr> t2iadapter_depth |\n| Zoe 深度图             | depth_zoe                 | control_v11f1p_sd15_depth \u003Cbr> control_depth \u003Cbr> t2iadapter_depth |\n| MiDaS 法线图          | normal_map                | control_normal                            |\n| BAE 法线图            | normal_bae                | control_v11p_sd15_normalbae               |\n| MeshGraphormer 手部精修器（[HandRefiner](https:\u002F\u002Fgithub.com\u002Fwenquanlu\u002FHandRefiner)）  | depth_hand_refiner | [control_sd15_inpaint_depth_hand_fp16](https:\u002F\u002Fhuggingface.co\u002Fhr16\u002FControlNet-HandRefiner-pruned\u002Fblob\u002Fmain\u002Fcontrol_sd15_inpaint_depth_hand_fp16.safetensors) |\n| Depth Anything            |  depth_anything           | [Depth-Anything](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002FLiheYoung\u002FDepth-Anything\u002Fblob\u002Fmain\u002Fcheckpoints_controlnet\u002Fdiffusion_pytorch_model.safetensors) |\n| Zoe Depth Anything \u003Cbr>（本质上是 Zoe，但编码器被 DepthAnything 替换）       | depth_anything | [Depth-Anything](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002FLiheYoung\u002FDepth-Anything\u002Fblob\u002Fmain\u002Fcheckpoints_controlnet\u002Fdiffusion_pytorch_model.safetensors) |\n| Normal DSINE              |                           | control_normal\u002Fcontrol_v11p_sd15_normalbae |\n| Metric3D 深度            |                           | control_v11f1p_sd15_depth \u003Cbr> control_depth \u003Cbr> t2iadapter_depth |\n| Metric3D 法线           |                           | control_v11p_sd15_normalbae |\n| Depth Anything V2         |                           | [Depth-Anything](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002FLiheYoung\u002FDepth-Anything\u002Fblob\u002Fmain\u002Fcheckpoints_controlnet\u002Fdiffusion_pytorch_model.safetensors) |\n\n## 人脸与姿态估计器\n| 预处理节点           | sd-webui-controlnet\u002Fother |          ControlNet\u002FT2I-Adapter           |\n|-----------------------------|---------------------------|-------------------------------------------|\n| DWPose 姿态估计器                 | dw_openpose_full          | control_v11p_sd15_openpose \u003Cbr> control_openpose \u003Cbr> t2iadapter_openpose |\n| OpenPose 姿态估计器               | openpose (detect_body) \u003Cbr> openpose_hand (detect_body + detect_hand) \u003Cbr> openpose_faceonly (detect_face) \u003Cbr> openpose_full (detect_hand + detect_body + detect_face)    | control_v11p_sd15_openpose \u003Cbr> control_openpose \u003Cbr> t2iadapter_openpose |\n| MediaPipe 人脸网格         | mediapipe_face            | controlnet_sd21_laion_face_v2             | \n| 动物姿态估计器                 | animal_openpose           | 
[control_sd15_animal_openpose_fp16](https:\u002F\u002Fhuggingface.co\u002Fhuchenlei\u002Fanimal_openpose\u002Fblob\u002Fmain\u002Fcontrol_sd15_animal_openpose_fp16.pth) |\n\n## 光流估计器\n| 预处理节点           | sd-webui-controlnet\u002Fother |          ControlNet\u002FT2I-Adapter           |\n|-----------------------------|---------------------------|-------------------------------------------|\n| Unimatch 光流       |                           | [DragNUWA](https:\u002F\u002Fgithub.com\u002FProjectNUWA\u002FDragNUWA) |\n\n### 如何获取 OpenPose 格式的 JSON？\n#### 用户端\n此工作流会将图像保存到 ComfyUI 的输出文件夹（与输出图像相同的位置）。如果您没有找到 `Save Pose Keypoints` 节点，请更新此扩展。\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FFannovel16_comfyui_controlnet_aux_readme_4f1f0306aec8.png)\n\n#### 开发者端\n对于 IMAGE 批次中的每一帧，可以使用 UI 上的 `app.nodeOutputs` 或 `\u002Fhistory` API 端点，从 DWPose 和 OpenPose 中获取对应于每帧的 [OpenPose 格式 JSON](https:\u002F\u002Fgithub.com\u002FCMU-Perceptual-Computing-Lab\u002Fopenpose\u002Fblob\u002Fmaster\u002Fdoc\u002F02_output.md#json-output-format) 数组。AnimalPose 的 JSON 输出格式与 OpenPose JSON 类似：\n```\n[\n    {\n        \"version\": \"ap10k\",\n        \"animals\": [\n            [[x1, y1, 1], [x2, y2, 1],..., [x17, y17, 1]],\n            [[x1, y1, 1], [x2, y2, 1],..., [x17, y17, 1]],\n            ...\n        ],\n        \"canvas_height\": 512,\n        \"canvas_width\": 768\n    },\n    ...\n]\n```\n\n对于扩展开发者（例如 Openpose 编辑器）：\n```js\nconst poseNodes = app.graph._nodes.filter(node => [\"OpenposePreprocessor\", \"DWPreprocessor\", \"AnimalPosePreprocessor\"].includes(node.type))\nfor (const poseNode of poseNodes) {\n    const openposeResults = JSON.parse(app.nodeOutputs[poseNode.id].openpose_json[0])\n    console.log(openposeResults) \u002F\u002F包含每帧 OpenPose JSON 的数组\n}\n```\n\n对于 API 用户：\nJavaScript\n```js\nimport fetch from \"node-fetch\" \u002F\u002F请确保在 \"package.json\" 中添加 \"type\": \"module\"\nasync function main() {\n    const promptId = '792c1905-ecfe-41f4-8114-83e6a4a09a9f' \u002F\u002F懒得 POST \u002Fqueue\n    let history = await fetch(`http:\u002F\u002F127.0.0.1:8188\u002Fhistory\u002F${promptId}`).then(re => re.json())\n    history = history[promptId]\n    const nodeOutputs = Object.values(history.outputs).filter(output => output.openpose_json)\n    for (const nodeOutput of nodeOutputs) {\n        const openposeResults = JSON.parse(nodeOutput.openpose_json[0])\n        console.log(openposeResults) \u002F\u002F包含每帧 OpenPose JSON 的数组\n    }\n}\nmain()\n```\n\nPython\n```py\nimport json, urllib.request\n\nserver_address = \"127.0.0.1:8188\"\nprompt_id = '' #懒得 POST \u002Fqueue\n\ndef get_history(prompt_id):\n    with urllib.request.urlopen(\"http:\u002F\u002F{}\u002Fhistory\u002F{}\".format(server_address, prompt_id)) as response:\n        return json.loads(response.read())\n\nhistory = get_history(prompt_id)[prompt_id]\nfor o in history['outputs']:\n    for node_id in history['outputs']:\n        node_output = history['outputs'][node_id]\n        if 'openpose_json' in node_output:\n            print(json.loads(node_output['openpose_json'][0])) #包含每帧 OpenPose JSON 的列表\n```\n## 语义分割\n| 预处理节点           | sd-webui-controlnet\u002Fother |          ControlNet\u002FT2I-Adapter           |\n|-----------------------------|---------------------------|-------------------------------------------|\n| OneFormer ADE20K 分割器  | oneformer_ade20k          | control_v11p_sd15_seg                     |\n| OneFormer COCO 分割器    | oneformer_coco            | control_v11p_sd15_seg                     |\n| UniFormer 分割器         | 
segmentation              |control_sd15_seg \u003Cbr> control_v11p_sd15_seg|\n\n## 仅 T2IAdapter\n| 预处理节点           | sd-webui-controlnet\u002Fother |          ControlNet\u002FT2I-Adapter           |\n|-----------------------------|---------------------------|-------------------------------------------|\n| 色彩调色板               | color                     | t2iadapter_color                          |\n| 内容打乱             | shuffle                   | t2iadapter_style                          |\n\n## 重新着色\n| 预处理节点           | sd-webui-controlnet\u002Fother |          ControlNet\u002FT2I-Adapter           |\n|-----------------------------|---------------------------|-------------------------------------------|\n| 图像亮度             | recolor_luminance         | [ioclab_sd15_recolor](https:\u002F\u002Fhuggingface.co\u002Flllyasviel\u002Fsd_control_collection\u002Fresolve\u002Fmain\u002Fioclab_sd15_recolor.safetensors) \u003Cbr> [sai_xl_recolor_256lora](https:\u002F\u002Fhuggingface.co\u002Flllyasviel\u002Fsd_control_collection\u002Fresolve\u002Fmain\u002Fsai_xl_recolor_256lora.safetensors) \u003Cbr> [bdsqlsz_controlllite_xl_recolor_luminance](https:\u002F\u002Fhuggingface.co\u002Fbdsqlsz\u002Fqinglong_controlnet-lllite\u002Fresolve\u002Fmain\u002Fbdsqlsz_controlllite_xl_recolor_luminance.safetensors) |\n| 图像强度             | recolor_intensity         | 不清楚。也许和上面一样？|\n\n# 示例\n> 一张图片胜过千言万语\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FFannovel16_comfyui_controlnet_aux_readme_ab238baf7e57.jpg)\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FFannovel16_comfyui_controlnet_aux_readme_58c8ae05a0b5.jpg)\n\n# 测试工作流\nhttps:\u002F\u002Fgithub.com\u002FFannovel16\u002Fcomfyui_controlnet_aux\u002Fblob\u002Fmain\u002Fexamples\u002FExecuteAll.png\n输入图像：https:\u002F\u002Fgithub.com\u002FFannovel16\u002Fcomfyui_controlnet_aux\u002Fblob\u002Fmain\u002Fexamples\u002Fcomfyui-controlnet-aux-logo.png\n\n# 问答：\n## 为什么安装了这个仓库后有些节点没有出现？\n\n该仓库采用了一种新机制，会跳过无法导入的自定义节点。如果遇到这种情况，请在 [Issues 标签页](https:\u002F\u002Fgithub.com\u002FFannovel16\u002Fcomfyui_controlnet_aux\u002Fissues) 上提交问题，并附上命令行日志。\n\n## DWPose\u002FAnimalPose 只使用 CPU，速度很慢。如何让它使用 GPU？\n加速 DWPose 的方法有两种：使用 TorchScript 检查点（.torchscript.pt）或 ONNXRuntime（.onnx）。TorchScript 方法比 ONNXRuntime 略慢，但不需要额外的库，而且仍然比 CPU 快得多。\n\nTorchScript 边界框检测器与 ONNX 姿态估计器是兼容的，反之亦然。\n\n### TorchScript\n按照这张图设置 `bbox_detector` 和 `pose_estimator`。如果输入图像质量较好，可以尝试使用以 `.torchscript.pt` 结尾的其他边界框检测器，以缩短边界框检测时间。\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FFannovel16_comfyui_controlnet_aux_readme_b22705d380e8.png)\n### ONNXRuntime\n如果 ONNXRuntime 已成功安装，并且所使用的检查点文件名以 `.onnx` 结尾，则会替换默认的 OpenCV 后端，从而利用 GPU 加速。请注意，如果您使用的是 NVIDIA 显卡，目前此方法仅适用于 CUDA 11.8 版本（ComfyUI_windows_portable_nvidia_cu118_or_cpu.7z），除非您自行编译 ONNXRuntime。\n\n1. 确认您的 ONNXRuntime 构建版本：\n   * NVIDIA CUDA 11.x 或更低版本\u002FAMD GPU：`onnxruntime-gpu`\n   * NVIDIA CUDA 12.x：`onnxruntime-gpu --extra-index-url https:\u002F\u002Faiinfra.pkgs.visualstudio.com\u002FPublicPackages\u002F_packaging\u002Fonnxruntime-cuda-12\u002Fpypi\u002Fsimple\u002F`\n   * DirectML：`onnxruntime-directml`\n   * OpenVINO：`onnxruntime-openvino`\n\n请注意，如果您是首次使用 ComfyUI，请先测试其是否能在您的设备上正常运行，再进行后续步骤。\n\n2. 将其添加到 `requirements.txt` 文件中。\n\n3. 
运行 `install.bat` 脚本，或按照安装说明中的 pip 命令进行安装。\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FFannovel16_comfyui_controlnet_aux_readme_c612474c8dc6.png)\n\n# 预处理工具的资产文件\n* anime_face_segment:  [bdsqlsz\u002Fqinglong_controlnet-lllite\u002FAnnotators\u002FUNet.pth](https:\u002F\u002Fhuggingface.co\u002Fbdsqlsz\u002Fqinglong_controlnet-lllite\u002Fblob\u002Fmain\u002FAnnotators\u002FUNet.pth), [anime-seg\u002Fisnetis.ckpt](https:\u002F\u002Fhuggingface.co\u002Fskytnt\u002Fanime-seg\u002Fblob\u002Fmain\u002Fisnetis.ckpt)\n* densepose:  [LayerNorm\u002FDensePose-TorchScript-with-hint-image\u002Fdensepose_r50_fpn_dl.torchscript](https:\u002F\u002Fhuggingface.co\u002FLayerNorm\u002FDensePose-TorchScript-with-hint-image\u002Fblob\u002Fmain\u002Fdensepose_r50_fpn_dl.torchscript)\n* dwpose:  \n* * bbox_detector: 可以是 [yzd-v\u002FDWPose\u002Fyolox_l.onnx](https:\u002F\u002Fhuggingface.co\u002Fyzd-v\u002FDWPose\u002Fblob\u002Fmain\u002Fyolox_l.onnx)、[hr16\u002Fyolox-onnx\u002Fyolox_l.torchscript.pt](https:\u002F\u002Fhuggingface.co\u002Fhr16\u002Fyolox-onnx\u002Fblob\u002Fmain\u002Fyolox_l.torchscript.pt)、[hr16\u002Fyolo-nas-fp16\u002Fyolo_nas_l_fp16.onnx](https:\u002F\u002Fhuggingface.co\u002Fhr16\u002Fyolo-nas-fp16\u002Fblob\u002Fmain\u002Fyolo_nas_l_fp16.onnx)、[hr16\u002Fyolo-nas-fp16\u002Fyolo_nas_m_fp16.onnx](https:\u002F\u002Fhuggingface.co\u002Fhr16\u002Fyolo-nas-fp16\u002Fblob\u002Fmain\u002Fyolo_nas_m_fp16.onnx) 或者 [hr16\u002Fyolo-nas-fp16\u002Fyolo_nas_s_fp16.onnx](https:\u002F\u002Fhuggingface.co\u002Fhr16\u002Fyolo-nas-fp16\u002Fblob\u002Fmain\u002Fyolo_nas_s_fp16.onnx)\n* * pose_estimator: 可以是 [hr16\u002FDWPose-TorchScript-BatchSize5\u002Fdw-ll_ucoco_384_bs5.torchscript.pt](https:\u002F\u002Fhuggingface.co\u002Fhr16\u002FDWPose-TorchScript-BatchSize5\u002Fblob\u002Fmain\u002Fdw-ll_ucoco_384_bs5.torchscript.pt) 或者 [yzd-v\u002FDWPose\u002Fdw-ll_ucoco_384.onnx](https:\u002F\u002Fhuggingface.co\u002Fyzd-v\u002FDWPose\u002Fblob\u002Fmain\u002Fdw-ll_ucoco_384.onnx)\n* animal_pose (ap10k):\n* * bbox_detector: 可以是 [yzd-v\u002FDWPose\u002Fyolox_l.onnx](https:\u002F\u002Fhuggingface.co\u002Fyzd-v\u002FDWPose\u002Fblob\u002Fmain\u002Fyolox_l.onnx)、[hr16\u002Fyolox-onnx\u002Fyolox_l.torchscript.pt](https:\u002F\u002Fhuggingface.co\u002Fhr16\u002Fyolox-onnx\u002Fblob\u002Fmain\u002Fyolox_l.torchscript.pt)、[hr16\u002Fyolo-nas-fp16\u002Fyolo_nas_l_fp16.onnx](https:\u002F\u002Fhuggingface.co\u002Fhr16\u002Fyolo-nas-fp16\u002Fblob\u002Fmain\u002Fyolo_nas_l_fp16.onnx)、[hr16\u002Fyolo-nas-fp16\u002Fyolo_nas_m_fp16.onnx](https:\u002F\u002Fhuggingface.co\u002Fhr16\u002Fyolo-nas-fp16\u002Fblob\u002Fmain\u002Fyolo_nas_m_fp16.onnx) 或者 [hr16\u002Fyolo-nas-fp16\u002Fyolo_nas_s_fp16.onnx](https:\u002F\u002Fhuggingface.co\u002Fhr16\u002Fyolo-nas-fp16\u002Fblob\u002Fmain\u002Fyolo_nas_s_fp16.onnx)\n* * pose_estimator: 可以是 [hr16\u002FDWPose-TorchScript-BatchSize5\u002Frtmpose-m_ap10k_256_bs5.torchscript.pt](https:\u002F\u002Fhuggingface.co\u002Fhr16\u002FDWPose-TorchScript-BatchSize5\u002Fblob\u002Fmain\u002Frtmpose-m_ap10k_256_bs5.torchscript.pt) 或者 [hr16\u002FUnJIT-DWPose\u002Frtmpose-m_ap10k_256.onnx](https:\u002F\u002Fhuggingface.co\u002Fhr16\u002FUnJIT-DWPose\u002Fblob\u002Fmain\u002Frtmpose-m_ap10k_256.onnx)\n* hed:  [lllyasviel\u002FAnnotators\u002FControlNetHED.pth](https:\u002F\u002Fhuggingface.co\u002Flllyasviel\u002FAnnotators\u002Fblob\u002Fmain\u002FControlNetHED.pth)\n* leres:  
[lllyasviel\u002FAnnotators\u002Fres101.pth](https:\u002F\u002Fhuggingface.co\u002Flllyasviel\u002FAnnotators\u002Fblob\u002Fmain\u002Fres101.pth), [lllyasviel\u002FAnnotators\u002Flatest_net_G.pth](https:\u002F\u002Fhuggingface.co\u002Flllyasviel\u002FAnnotators\u002Fblob\u002Fmain\u002Flatest_net_G.pth)\n* lineart:  [lllyasviel\u002FAnnotators\u002Fsk_model.pth](https:\u002F\u002Fhuggingface.co\u002Flllyasviel\u002FAnnotators\u002Fblob\u002Fmain\u002Fsk_model.pth), [lllyasviel\u002FAnnotators\u002Fsk_model2.pth](https:\u002F\u002Fhuggingface.co\u002Flllyasviel\u002FAnnotators\u002Fblob\u002Fmain\u002Fsk_model2.pth)\n* lineart_anime:  [lllyasviel\u002FAnnotators\u002FnetG.pth](https:\u002F\u002Fhuggingface.co\u002Flllyasviel\u002FAnnotators\u002Fblob\u002Fmain\u002FnetG.pth)\n* manga_line:  [lllyasviel\u002FAnnotators\u002Ferika.pth](https:\u002F\u002Fhuggingface.co\u002Flllyasviel\u002FAnnotators\u002Fblob\u002Fmain\u002Ferika.pth)\n* mesh_graphormer:  [hr16\u002FControlNet-HandRefiner-pruned\u002Fgraphormer_hand_state_dict.bin](https:\u002F\u002Fhuggingface.co\u002Fhr16\u002FControlNet-HandRefiner-pruned\u002Fblob\u002Fmain\u002Fgraphormer_hand_state_dict.bin), [hr16\u002FControlNet-HandRefiner-pruned\u002Fhrnetv2_w64_imagenet_pretrained.pth](https:\u002F\u002Fhuggingface.co\u002Fhr16\u002FControlNet-HandRefiner-pruned\u002Fblob\u002Fmain\u002Fhrnetv2_w64_imagenet_pretrained.pth)\n* midas:  [lllyasviel\u002FAnnotators\u002Fdpt_hybrid-midas-501f0c75.pt](https:\u002F\u002Fhuggingface.co\u002Flllyasviel\u002FAnnotators\u002Fblob\u002Fmain\u002Fdpt_hybrid-midas-501f0c75.pt)\n* mlsd:  [lllyasviel\u002FAnnotators\u002Fmlsd_large_512_fp32.pth](https:\u002F\u002Fhuggingface.co\u002Flllyasviel\u002FAnnotators\u002Fblob\u002Fmain\u002Fmlsd_large_512_fp32.pth)\n* normalbae:  [lllyasviel\u002FAnnotators\u002Fscannet.pt](https:\u002F\u002Fhuggingface.co\u002Flllyasviel\u002FAnnotators\u002Fblob\u002Fmain\u002Fscannet.pt)\n* oneformer:  [lllyasviel\u002FAnnotators\u002F250_16_swin_l_oneformer_ade20k_160k.pth](https:\u002F\u002Fhuggingface.co\u002Flllyasviel\u002FAnnotators\u002Fblob\u002Fmain\u002F250_16_swin_l_oneformer_ade20k_160k.pth)\n* open_pose:  [lllyasviel\u002FAnnotators\u002Fbody_pose_model.pth](https:\u002F\u002Fhuggingface.co\u002Flllyasviel\u002FAnnotators\u002Fblob\u002Fmain\u002Fbody_pose_model.pth), [lllyasviel\u002FAnnotators\u002Fhand_pose_model.pth](https:\u002F\u002Fhuggingface.co\u002Flllyasviel\u002FAnnotators\u002Fblob\u002Fmain\u002Fhand_pose_model.pth), [lllyasviel\u002FAnnotators\u002Ffacenet.pth](https:\u002F\u002Fhuggingface.co\u002Flllyasviel\u002FAnnotators\u002Fblob\u002Fmain\u002Ffacenet.pth)\n* pidi:  [lllyasviel\u002FAnnotators\u002Ftable5_pidinet.pth](https:\u002F\u002Fhuggingface.co\u002Flllyasviel\u002FAnnotators\u002Fblob\u002Fmain\u002Ftable5_pidinet.pth)\n* sam:  [dhkim2810\u002FMobileSAM\u002Fmobile_sam.pt](https:\u002F\u002Fhuggingface.co\u002Fdhkim2810\u002FMobileSAM\u002Fblob\u002Fmain\u002Fmobile_sam.pt)\n* uniformer:  [lllyasviel\u002FAnnotators\u002Fupernet_global_small.pth](https:\u002F\u002Fhuggingface.co\u002Flllyasviel\u002FAnnotators\u002Fblob\u002Fmain\u002Fupernet_global_small.pth)\n* zoe:  [lllyasviel\u002FAnnotators\u002FZoeD_M12_N.pt](https:\u002F\u002Fhuggingface.co\u002Flllyasviel\u002FAnnotators\u002Fblob\u002Fmain\u002FZoeD_M12_N.pt)\n* teed:  [bdsqlsz\u002Fqinglong_controlnet-lllite\u002F7_model.pth](https:\u002F\u002Fhuggingface.co\u002Fbdsqlsz\u002Fqinglong_controlnet-lllite\u002Fblob\u002Fmain\u002FAnnotators\u002F7_model.pth)\n* 
depth_anything: 可以是 [LiheYoung\u002FDepth-Anything\u002Fcheckpoints\u002Fdepth_anything_vitl14.pth](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002FLiheYoung\u002FDepth-Anything\u002Fblob\u002Fmain\u002Fcheckpoints\u002Fdepth_anything_vitl14.pth)、[LiheYoung\u002FDepth-Anything\u002Fcheckpoints\u002Fdepth_anything_vitb14.pth](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002FLiheYoung\u002FDepth-Anything\u002Fblob\u002Fmain\u002Fcheckpoints\u002Fdepth_anything_vitb14.pth) 或者 [LiheYoung\u002FDepth-Anything\u002Fcheckpoints\u002Fdepth_anything_vits14.pth](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002FLiheYoung\u002FDepth-Anything\u002Fblob\u002Fmain\u002Fcheckpoints\u002Fdepth_anything_vits14.pth)\n* diffusion_edge: 可以是 [hr16\u002FDiffusion-Edge\u002Fdiffusion_edge_indoor.pt](https:\u002F\u002Fhuggingface.co\u002Fhr16\u002FDiffusion-Edge\u002Fblob\u002Fmain\u002Fdiffusion_edge_indoor.pt)、[hr16\u002FDiffusion-Edge\u002Fdiffusion_edge_urban.pt](https:\u002F\u002Fhuggingface.co\u002Fhr16\u002FDiffusion-Edge\u002Fblob\u002Fmain\u002Fdiffusion_edge_urban.pt) 或者 [hr16\u002FDiffusion-Edge\u002Fdiffusion_edge_natrual.pt](https:\u002F\u002Fhuggingface.co\u002Fhr16\u002FDiffusion-Edge\u002Fblob\u002Fmain\u002Fdiffusion_edge_natrual.pt)\n* unimatch: 可以是 [hr16\u002FUnimatch\u002Fgmflow-scale2-regrefine6-mixdata.pth](https:\u002F\u002Fhuggingface.co\u002Fhr16\u002FUnimatch\u002Fblob\u002Fmain\u002Fgmflow-scale2-regrefine6-mixdata.pth)、[hr16\u002FUnimatch\u002Fgmflow-scale2-mixdata.pth](https:\u002F\u002Fhuggingface.co\u002Fhr16\u002FUnimatch\u002Fblob\u002Fmain\u002Fgmflow-scale2-mixdata.pth) 或者 [hr16\u002FUnimatch\u002Fgmflow-scale1-mixdata.pth](https:\u002F\u002Fhuggingface.co\u002Fhr16\u002FUnimatch\u002Fblob\u002Fmain\u002Fgmflow-scale1-mixdata.pth)\n* zoe_depth_anything: 可以是 [LiheYoung\u002FDepth-Anything\u002Fcheckpoints_metric_depth\u002Fdepth_anything_metric_depth_indoor.pt](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002FLiheYoung\u002FDepth-Anything\u002Fblob\u002Fmain\u002Fcheckpoints_metric_depth\u002Fdepth_anything_metric_depth_indoor.pt) 或者 [LiheYoung\u002FDepth-Anything\u002Fcheckpoints_metric_depth\u002Fdepth_anything_metric_depth_outdoor.pt](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002FLiheYoung\u002FDepth-Anything\u002Fblob\u002Fmain\u002Fcheckpoints_metric_depth\u002Fdepth_anything_metric_depth_outdoor.pt)\n\n# 2000 颗星 😄\n\u003Ca href=\"https:\u002F\u002Fstar-history.com\u002F#Fannovel16\u002Fcomfyui_controlnet_aux&Date\">\n  \u003Cpicture>\n    \u003Csource media=\"(prefers-color-scheme: dark)\" srcset=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FFannovel16_comfyui_controlnet_aux_readme_f435cec202eb.png&theme=dark\" \u002F>\n    \u003Csource media=\"(prefers-color-scheme: light)\" srcset=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FFannovel16_comfyui_controlnet_aux_readme_f435cec202eb.png\" \u002F>\n    \u003Cimg alt=\"星级历史图表\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FFannovel16_comfyui_controlnet_aux_readme_f435cec202eb.png\" \u002F>\n  \u003C\u002Fpicture>\n\u003C\u002Fa>\n\n感谢大家的支持！真没想到星星数量的增长会这么线性，哈哈。","# ComfyUI ControlNet Aux 快速上手指南\n\n`comfyui_controlnet_aux` 是 ComfyUI 的插件，提供了一系列即插即用的节点，用于生成 ControlNet 所需的提示图（如线稿、深度图、姿态图等）。其核心算法源自 ControlNet 官方仓库。\n\n## 环境准备\n\n*   **系统要求**：Windows、Linux 或 macOS。\n*   **前置依赖**：\n    *   已安装并配置好 [ComfyUI](https:\u002F\u002Fgithub.com\u002Fcomfyanonymous\u002FComfyUI)。\n    *   推荐安装 [ComfyUI Manager](https:\u002F\u002Fgithub.com\u002Fltdrdata\u002FComfyUI-Manager) 以简化插件管理。\n    *   Python 
环境（若使用便携版 ComfyUI 则无需单独配置系统 Python）。\n*   **网络建议**：首次运行时会自动从 Hugging Face 下载模型权重。国内用户若下载缓慢，建议配置本地代理或使用 Hugging Face 镜像源。\n\n## 安装步骤\n\n### 方法一：通过 ComfyUI Manager 安装（推荐）\n\n1.  启动 ComfyUI，点击右侧菜单的 **Manager** 按钮。\n2.  选择 **Install Custom Nodes**。\n3.  在搜索框输入 `controlnet aux`。\n4.  找到 `ComfyUI's ControlNet Auxiliary Preprocessors`，点击 **Install**。\n5.  安装完成后重启 ComfyUI。\n\n### 方法二：手动安装\n\n如果无法使用 Manager 或在 Linux\u002F无管理员权限环境下，请按以下步骤操作：\n\n1.  进入 ComfyUI 的自定义节点目录：\n    ```bash\n    cd \u002Fpath\u002Fto\u002FComfyUI\u002Fcustom_nodes\u002F\n    ```\n\n2.  克隆仓库：\n    ```bash\n    git clone https:\u002F\u002Fgithub.com\u002FFannovel16\u002Fcomfyui_controlnet_aux\u002F\n    ```\n\n3.  进入插件目录并安装依赖：\n\n    *   **如果是 ComfyUI 便携版 (Portable\u002Fvenv)**：\n        ```bash\n        cd comfyui_controlnet_aux\n        \u002Fpath\u002Fto\u002FComfyUI\u002Fpython_embeded\u002Fpython.exe -s -m pip install -r requirements.txt\n        ```\n        *(注：请将 `\u002Fpath\u002Fto\u002FComfyUI\u002F` 替换为你的实际安装路径)*\n\n    *   **如果是系统全局 Python 环境**：\n        ```bash\n        cd comfyui_controlnet_aux\n        pip install -r requirements.txt\n        ```\n\n4.  重启 ComfyUI。\n\n> **Windows 用户提示**：目录下包含 `install.bat`，双击运行可自动检测便携版环境并完成安装。\n\n## 基本使用\n\n安装成功后，重启 ComfyUI，你可以在节点菜单中找到 **ControlNet Preprocessors** 分类。\n\n### 核心节点说明\n\n*   **AIO Aux Preprocessor**：全能预处理节点。集成了除 Inpaint 外的所有预处理器，可通过下拉菜单快速切换类型（如 Canny, OpenPose, Depth 等）。\n    *   *优点*：调用快捷，无需查找特定节点。\n    *   *缺点*：无法调整该预处理器特有的阈值参数。\n*   **独立预处理器节点**：如需精细控制（例如调整 Canny 的高低阈值或 OpenPose 的检测模型），请直接搜索并使用对应的独立节点（如 `Canny Edge Preprocessor`, `DWPose Estimator` 等）。\n\n### 最简单工作流示例\n\n以下是一个生成“线稿控制图”的基础流程：\n\n1.  **加载图像**：添加 `Load Image` 节点，上传一张参考图片。\n2.  **预处理**：\n    *   添加 `Canny Edge Preprocessor` 节点（或在 `AIO Aux Preprocessor` 中选择 `canny`）。\n    *   将 `Load Image` 的输出连接到预处理节点的 `image` 输入端。\n3.  **应用 ControlNet**：\n    *   添加 `ControlNet Apply` 节点。\n    *   加载对应的 ControlNet 模型（例如 `control_v11p_sd15_canny`）。\n    *   将预处理节点的输出连接到 `ControlNet Apply` 的 `image` 输入端。\n4.  
**生成图像**：连接 KSampler 和 Checkpoint 进行绘图。\n\n**节点连接逻辑示意：**\n`Load Image` -> `[Preprocessor Node]` -> `ControlNet Apply (image)` -> `KSampler`\n\n### 进阶提示：获取 OpenPose JSON 数据\n\n如果你需要提取姿态数据的 JSON 文件用于其他开发：\n*   **用户侧**：使用 `Save Pose Keypoints` 节点，它会将 JSON 保存到 ComfyUI 的 `output` 文件夹中。\n*   **开发侧**：可通过 API `\u002Fhistory\u002F{prompt_id}` 获取输出结果中的 `openpose_json` 字段，解析后即可得到标准的 OpenPose 格式坐标数据。","一位独立游戏开发者正在为赛博朋克风格的项目批量生成角色概念图，需要严格保持人物姿势和线条结构的一致性。\n\n### 没有 comfyui_controlnet_aux 时\n- 开发者必须手动切换多个外部插件或脚本来提取线稿、边缘和骨架，工作流支离破碎且容易报错。\n- 无法在 ComfyUI 原生界面中精细调节 Canny 阈值或 HED 软边缘参数，导致生成的控制图噪点过多或细节丢失。\n- 针对动漫风格的角色，缺乏专用的 `Anime Lineart` 预处理节点，通用算法难以准确识别二次元特有的简洁线条。\n- 每次尝试不同风格的控制图（如从硬边缘切换到涂鸦风格）都需要重新搭建复杂的节点组，严重拖慢迭代速度。\n\n### 使用 comfyui_controlnet_aux 后\n- 所有预处理功能（如 Canny、HED、MLSD 等）均以原生节点形式集成，开发者可在同一工作流中一键调用，无需跳转外部环境。\n- 每个预处理器都提供独立的阈值调节滑块，能精准控制线条粗细与细节保留度，直接获得高质量的提示图像。\n- 内置专为二次元优化的 `Anime Lineart` 和 `Manga Lineart` 节点，完美适配粉色头发、金色眼睛等动漫特征的结构提取。\n- 通过 `AIO Aux Preprocessor` 节点可快速预览多种效果，或利用分类节点灵活组合，将原本数小时的调试过程缩短至几分钟。\n\ncomfyui_controlnet_aux 通过将专业的 ControlNet 预处理能力无缝融入 ComfyUI，让创作者能以前所未有的精度和效率掌控 AI 绘图的构图与结构。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FFannovel16_comfyui_controlnet_aux_18b6a8af.jpg","Fannovel16","H.D.Tài 🇻🇳","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FFannovel16_69e5a371.png",null,"https:\u002F\u002Fgithub.com\u002FFannovel16",[79,83],{"name":80,"color":81,"percentage":82},"Python","#3572A5",100,{"name":84,"color":85,"percentage":86},"Batchfile","#C1F12E",0,3902,350,"2026-04-06T18:06:37","Apache-2.0","Linux, Windows","未说明 (作为 ComfyUI 插件，通常依赖宿主环境的 GPU 配置以运行 ControlNet 模型)","未说明",{"notes":95,"python":96,"dependencies":97},"该工具是 ComfyUI 的自定义节点插件，需先安装 ComfyUI。推荐使用 ComfyUI Manager 进行安装。在 Windows 非管理员账户或 Linux 下需确保目录有写入权限。首次运行会自动从 Hugging Face 下载所需的预处理器模型文件。支持便携式（Portable）和系统 Python 环境安装。","未说明 (依赖 ComfyUI 内置或系统安装的 Python 环境)",[98,99,100,101,102,103,104,105],"torch","opencv-python","pillow","numpy","scipy","transformers","mediapipe","filterpy",[15,43],"2026-03-27T02:49:30.150509","2026-04-11T17:00:07.070268",[110,115,120,125,130,134],{"id":111,"question_zh":112,"answer_zh":113,"source_url":114},22372,"遇到警告 'Onnxruntime not found or doesn't come with acceleration providers' 导致运行缓慢怎么办？","这通常是因为同时安装了 `onnxruntime` 和 `onnxruntime-gpu` 导致冲突。解决方法是彻底移除非 GPU 版本并重新安装 GPU 版本。\n\n具体步骤：\n1. 手动删除虚拟环境站点包目录下的所有 onnxruntime 相关文件（例如：`rm -rf venv\u002Flib\u002Fpython3.10\u002Fsite-packages\u002Fonnxruntime*`，路径需根据你的 Python 版本调整）。\n2. 仅使用 pip 卸载可能不够，必须手动清理文件。\n3. 重新安装 GPU 版本：`pip install onnxruntime-gpu`。\n\n如果是 Windows 便携版用户，需在项目根目录运行：`.\\python_embeded\\python.exe -s -m pip install onnxruntime-gpu`，确保使用的是嵌入版 Python 进行安装。","https:\u002F\u002Fgithub.com\u002FFannovel16\u002Fcomfyui_controlnet_aux\u002Fissues\u002F75",{"id":116,"question_zh":117,"answer_zh":118,"source_url":119},22373,"执行 Metric3D 预处理器时出现 'Failed to find function' 或模型加载错误如何解决？","该错误通常是由于模型下载链接失效或变更导致的。请尝试使用以下更新后的有效链接手动下载模型文件：\nhttps:\u002F\u002Fhuggingface.co\u002FJUGGHM\u002FMetric3D\u002Fresolve\u002Fmain\u002Fmetric_depth_vit_giant2_800k.pth?download=true\n\n下载后将其放置在正确的缓存目录中即可解决找不到函数或模型的问题。","https:\u002F\u002Fgithub.com\u002FFannovel16\u002Fcomfyui_controlnet_aux\u002Fissues\u002F343",{"id":121,"question_zh":122,"answer_zh":123,"source_url":124},22374,"安装后运行节点出现 'ModuleNotFoundError: No module named controlnet_aux.xxx' 错误怎么办？","此问题通常发生在通过 ComfyUI-Manager 安装后，依赖项未正确加载或路径配置有误。\n\n解决方案：\n1. 尝试手动进入插件目录，运行 `pip install -r requirements.txt` 确保所有依赖已安装。\n2. 
如果问题依旧，可能是 Manager 安装的版本存在缺陷，建议删除该自定义节点文件夹，从 GitHub 源码直接克隆最新代码到 `custom_nodes` 目录。\n3. 重启 ComfyUI，检查启动日志确认模块是否加载成功。维护者已在后续提交中修复了部分路径导入问题，保持代码为最新版本至关重要。","https:\u002F\u002Fgithub.com\u002FFannovel16\u002Fcomfyui_controlnet_aux\u002Fissues\u002F3",{"id":126,"question_zh":127,"answer_zh":128,"source_url":129},22375,"运行 Zoe-DepthMapPreprocessor 或 MiDaS 时出现连接错误或无法找到缓存文件怎么办？","这通常是因为网络问题导致无法从 Hugging Face 自动下载模型，或者磁盘缓存损坏。\n\n解决方法：\n1. 检查网络连接，确保能访问 Hugging Face。\n2. 如果网络受限，请手动下载对应的模型权重文件。\n3. 将下载的模型文件放入 ComfyUI 的 annotator 缓存目录（通常位于 `ComfyUI\u002Fcustom_nodes\u002Fcomfyui_controlnet_aux\u002Fannotator_ckpts` 或类似路径）。\n4. 清除旧的损坏缓存文件后重试。维护者已针对部分模型加载逻辑进行了修复，更新至最新版本也可能解决此问题。","https:\u002F\u002Fgithub.com\u002FFannovel16\u002Fcomfyui_controlnet_aux\u002Fissues\u002F2",{"id":131,"question_zh":132,"answer_zh":133,"source_url":114},22376,"在 Windows 上使用 NVIDIA GPU 的便携版 ComfyUI 时，如何正确安装 onnxruntime-gpu？","在 Windows 便携版环境中，直接使用系统命令行的 pip 可能无法将包安装到嵌入版 Python 中，从而导致加速无效。\n\n正确做法是：\n1. 打开 PowerShell 或 CMD，进入 ComfyUI 项目根目录。\n2. 运行以下命令调用嵌入版 Python 进行安装：\n   `.\\python_embeded\\python.exe -s -m pip install onnxruntime-gpu`\n   \n注意：必须指定 `python_embeded\\python.exe` 路径，以确保依赖包安装在正确的环境中。",{"id":135,"question_zh":136,"answer_zh":137,"source_url":114},22377,"哪些其他扩展可能会导致 onnxruntime 冲突？","除了当前插件外，其他一些扩展也会自动安装 `onnxruntime`（CPU 版），从而与需要的 `onnxruntime-gpu` 产生冲突。已知会导致问题的扩展包括：\n- `rembg`（背景移除工具）\n- `ComfyUI-WD14-Tagger`（图像打标工具）\n\n如果你安装了这些扩展并遇到加速失效问题，请务必检查并清理 `site-packages` 目录下的 `onnxruntime` (CPU 版) 文件，只保留 `onnxruntime-gpu`。",[]]
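The ap10k AnimalPose JSON described in the Dev-side section of the README is plain nested lists, so it can be unpacked with a few lines of Python. The sketch below is illustrative and not part of the extension: it assumes a `frames` value already holds the parsed `openpose_json` payload (one dict per frame, each animal given as 17 `[x, y, confidence]` triples), and the `keypoints_by_frame` helper name is made up for this example.

```py
import json

def keypoints_by_frame(frames):
    """Yield (canvas_size, animals) per frame from ap10k-format AnimalPose JSON."""
    for frame in frames:
        size = (frame["canvas_width"], frame["canvas_height"])
        # Keep only joints whose confidence flag is non-zero.
        animals = [
            [(x, y) for x, y, conf in animal if conf > 0]
            for animal in frame["animals"]
        ]
        yield size, animals

# Tiny example using the structure shown in the README (keypoints truncated for brevity).
sample = json.loads(
    '[{"version": "ap10k", "animals": [[[10, 20, 1], [30, 40, 1]]], '
    '"canvas_height": 512, "canvas_width": 768}]'
)
for size, animals in keypoints_by_frame(sample):
    print(size, animals)  # (768, 512) [[(10, 20), (30, 40)]]
```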