[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-Chaoses-Ib--ComfyScript":3,"tool-Chaoses-Ib--ComfyScript":62},[4,18,26,35,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108322,2,"2026-04-10T11:39:34",[14,15,13],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":32,"last_commit_at":41,"category_tags":42,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[43,13,15,14],"插件",{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":10,"last_commit_at":50,"category_tags":51,"status":17},4487,"LLMs-from-scratch","rasbt\u002FLLMs-from-scratch","LLMs-from-scratch 是一个基于 PyTorch 的开源教育项目，旨在引导用户从零开始一步步构建一个类似 ChatGPT 的大型语言模型（LLM）。它不仅是同名技术著作的官方代码库，更提供了一套完整的实践方案，涵盖模型开发、预训练及微调的全过程。\n\n该项目主要解决了大模型领域“黑盒化”的学习痛点。许多开发者虽能调用现成模型，却难以深入理解其内部架构与训练机制。通过亲手编写每一行核心代码，用户能够透彻掌握 Transformer 架构、注意力机制等关键原理，从而真正理解大模型是如何“思考”的。此外，项目还包含了加载大型预训练权重进行微调的代码，帮助用户将理论知识延伸至实际应用。\n\nLLMs-from-scratch 特别适合希望深入底层原理的 AI 开发者、研究人员以及计算机专业的学生。对于不满足于仅使用 
API，而是渴望探究模型构建细节的技术人员而言，这是极佳的学习资源。其独特的技术亮点在于“循序渐进”的教学设计：将复杂的系统工程拆解为清晰的步骤，配合详细的图表与示例，让构建一个虽小但功能完备的大模型变得触手可及。无论你是想夯实理论基础，还是为未来研发更大规模的模型做准备",90106,"2026-04-06T11:19:32",[52,15,13,14],"语言模型",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":10,"last_commit_at":59,"category_tags":60,"status":17},4292,"Deep-Live-Cam","hacksider\u002FDeep-Live-Cam","Deep-Live-Cam 是一款专注于实时换脸与视频生成的开源工具，用户仅需一张静态照片，即可通过“一键操作”实现摄像头画面的即时变脸或制作深度伪造视频。它有效解决了传统换脸技术流程繁琐、对硬件配置要求极高以及难以实时预览的痛点，让高质量的数字内容创作变得触手可及。\n\n这款工具不仅适合开发者和技术研究人员探索算法边界，更因其极简的操作逻辑（仅需三步：选脸、选摄像头、启动），广泛适用于普通用户、内容创作者、设计师及直播主播。无论是为了动画角色定制、服装展示模特替换，还是制作趣味短视频和直播互动，Deep-Live-Cam 都能提供流畅的支持。\n\n其核心技术亮点在于强大的实时处理能力，支持口型遮罩（Mouth Mask）以保留使用者原始的嘴部动作，确保表情自然精准；同时具备“人脸映射”功能，可同时对画面中的多个主体应用不同面孔。此外，项目内置了严格的内容安全过滤机制，自动拦截涉及裸露、暴力等不当素材，并倡导用户在获得授权及明确标注的前提下合规使用，体现了技术发展与伦理责任的平衡。",88924,"2026-04-06T03:28:53",[14,15,13,61],"视频",{"id":63,"github_repo":64,"name":65,"description_en":66,"description_zh":67,"ai_summary_zh":67,"readme_en":68,"readme_zh":69,"quickstart_zh":70,"use_case_zh":71,"hero_image_url":72,"owner_login":73,"owner_name":74,"owner_avatar_url":75,"owner_bio":74,"owner_company":74,"owner_location":74,"owner_email":76,"owner_twitter":77,"owner_website":74,"owner_url":78,"languages":79,"stars":88,"forks":89,"last_commit_at":90,"license":91,"difficulty_score":32,"env_os":92,"env_gpu":93,"env_ram":94,"env_deps":95,"category_tags":101,"github_topics":102,"view_count":32,"oss_zip_url":74,"oss_zip_packed_at":74,"status":17,"created_at":109,"updated_at":110,"faqs":111,"releases":141},8723,"Chaoses-Ib\u002FComfyScript","ComfyScript","A Python frontend and library for ComfyUI","ComfyScript 是专为 ComfyUI 打造的 Python 前端与开发库，旨在将原本基于图形节点的工作流转化为人类可读的 Python 代码。它有效解决了复杂工作流难以版本管理、复用和批量生成的痛点，让用户不再受限于拖拽式界面，能够利用 Python 强大的编程能力（如循环、函数封装及逻辑控制）来构建和运行图像生成流程。\n\n这款工具特别适合开发者、AI 研究人员以及习惯代码操作的高级用户。对于研究者而言，ComfyScript 允许将 ComfyUI 节点作为函数库直接调用，便于进行机器学习实验、调试自定义节点及优化缓存策略；对于开发者，它支持通过脚本自动生成庞大的工作流，甚至利用大语言模型（LLM）直接编写工作流代码，极大提升了自动化效率。此外，它还提供了将现有图形工作流自动转译为 
Python 脚本的功能，并支持本地或远程服务器运行。无论是希望以更灵活方式掌控生成逻辑的技术人员，还是寻求将 AI 工作流集成到现有 Python 项目中的工程师，ComfyScript 都提供了一个高效、透明且易于扩展的解决方案。","# ComfyScript\n[![PyPI - Version](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Fcomfy-script)](https:\u002F\u002Fpypi.org\u002Fproject\u002Fcomfy-script) ![Python Version from PEP 621 TOML](https:\u002F\u002Fimg.shields.io\u002Fpython\u002Frequired-version-toml?tomlFilePath=https%3A%2F%2Fraw.githubusercontent.com%2FChaoses-Ib%2FComfyScript%2Fmain%2Fpyproject.toml) [![License](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fl\u002Fcomfy-script)](LICENSE.txt)\n\nA Python frontend and library for [ComfyUI](https:\u002F\u002Fgithub.com\u002Fcomfyanonymous\u002FComfyUI).\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FChaoses-Ib_ComfyScript_readme_c5831caf0417.png)\n\nIt has the following use cases:\n- Serving as a [human-readable format](https:\u002F\u002Fgithub.com\u002Fcomfyanonymous\u002FComfyUI\u002Fissues\u002F612) for ComfyUI's workflows.\n\n  This makes it easy to compare and reuse different parts of one's workflows.\n  \n  It is also possible to train LLMs to generate workflows, since many LLMs can handle Python code relatively well. This approach can be more powerful than just asking LLMs for some hardcoded parameters.\n\n  Scripts can be automatically translated from ComfyUI's workflows. See [transpiler](#transpiler) for details.\n\n- Directly running the script to generate images.\n\n  The main advantage of doing this over using the web UI is being able to mix Python code with ComfyUI's nodes, such as doing loops, calling library functions, and easily encapsulating custom nodes. This also makes adding interaction easier, since the UI and logic can both be written in Python. And some people may feel more comfortable with simple Python code than with a graph-based GUI.[^graph-gui]\n\n  See [runtime](#runtime) for details. 
Scripts can be executed locally or remotely with a ComfyUI server.\n\n- Using ComfyUI as a function library.\n\n  With ComfyScript, ComfyUI's nodes can be used as functions to do ML research, reuse nodes in other projects, debug custom nodes, and optimize caching to run workflows faster.\n\n  See runtime's [real mode](docs\u002FRuntime.md#real-mode) for details.\n\n- Generating ComfyUI's workflows with scripts.\n\n  Scripts can also be used to generate ComfyUI's workflows and then used in the web UI or elsewhere. This way, one can use loops and generate huge workflows where it would be time-consuming or impractical to create them manually. See [workflow generation](docs\u002FRuntime.md#workflow-generation) for details. It is also possible to load workflows from images generated by ComfyScript.\n\n- Retrieving any wanted information by running the script with some stubs.\n\n  See [workflow information retrieval](docs\u002FREADME.md#workflow-information-retrieval) for details.\n\n- Converting workflows from ComfyUI's web UI format to API format without the web UI.\n\n## [Documentation](docs\u002FREADME.md)\n- [Introduction](#installation) (this page)\n- [Runtime](docs\u002FRuntime.md)\n- [Images](docs\u002FImages\u002FREADME.md)\n- [Models](docs\u002FModels\u002FREADME.md)\n- Nodes\n  - [Node Compatibility](docs\u002FNodes\u002FCompatibility.md)\n  - [Additional Nodes](docs\u002FNodes\u002FAdditional.md)\n- [Transpiler](docs\u002FTranspiler.md)\n- UI\n  - [ipywidgets UI](docs\u002FUI\u002Fipywidgets.md)\n  - [Solara UI](docs\u002FUI\u002FSolara.md)\n- [Examples](examples\u002FREADME.md)\n- [Differences from ComfyUI-to-Python-Extension](docs\u002FREADME.md#differences-from-comfyui-to-python-extension)\n\n## Installation\n### Installing only ComfyScript package\nIf you only want to use ComfyScript with an external ComfyUI server,\nlike using cloud ComfyUI servers and developing apps\u002Flibraries:\n\n\u003Cdetails>\n\nInstall Python first.\n\nInstall\u002Fupdate 
ComfyScript:\n```sh\npython -m pip install -U \"comfy-script[default]\"\n```\n\nSave and run [the following code](examples\u002Fruntime.py) to test (e.g. `python examples\u002Fruntime.py`):\n```py\nfrom comfy_script.runtime import *\n# ComfyUI server\u002Fpath\n# or: load(r'path\u002Fto\u002FComfyUI')\nload('http:\u002F\u002F127.0.0.1:8188\u002F')\nfrom comfy_script.runtime.nodes import *\n\nwith Workflow(wait=True):\n    image = EmptyImage()\n    images = util.get_images(image, save=True)\n```\n\nOr without installing Python, directly use ComfyScript with [uv](https:\u002F\u002Fdocs.astral.sh\u002Fuv\u002F):\n```sh\nuv run examples\u002Fuv.py\n```\n[`examples\u002Fuv.py`](examples\u002Fuv.py):\n```python\n# \u002F\u002F\u002F script\n# requires-python = \">=3.9\"\n# dependencies = [\n#     \"comfy-script[default]\",\n# ]\n# \u002F\u002F\u002F\nfrom comfy_script.runtime import *\nload('http:\u002F\u002F127.0.0.1:8188\u002F')\nfrom comfy_script.runtime.nodes import *\n\nwith Workflow(wait=True):\n    image = EmptyImage()\n    images = util.get_images(image, save=True)\n```\n\nSee [installing only ComfyScript package](docs\u002FREADME.md#installing-only-comfyscript-package) for details.\n\n\u003C\u002Fdetails>\n\n### Installing with ComfyUI\nIf you have Python\u002FComfyUI installed:\n\n\u003Cdetails>\n\nIf you haven't installed ComfyUI, install it first.\nSee [ComfyUI Installing](https:\u002F\u002Fgithub.com\u002Fcomfyanonymous\u002FComfyUI#installing)\nor use [Comfy-Cli](https:\u002F\u002Fgithub.com\u002FComfy-Org\u002Fcomfy-cli) to install:\n```sh\npython -m pip install comfy-cli\ncomfy --here install\n```\nAnd then run the following commands to install ComfyScript:\n```sh\ncd ComfyUI\u002Fcustom_nodes\ngit clone https:\u002F\u002Fgithub.com\u002FChaoses-Ib\u002FComfyScript.git\ncd ComfyScript\npython -m pip install -e \".[default]\"\n```\n\nUpdate:\n```sh\ncd ComfyUI\u002Fcustom_nodes\u002FComfyScript\ngit pull\npython -m pip install -e 
\".[default]\"\n```\n\n`[default]` is necessary to install common dependencies. See [`pyproject.toml`](pyproject.toml) for other options. If no option is specified, ComfyScript will be installed without any dependencies.\n\n\u003C\u002Fdetails>\n\n### Installing with ComfyUI & uv venv\nIf you haven't installed Python\u002FComfyUI, you can use [uv](https:\u002F\u002Fdocs.astral.sh\u002Fuv\u002F), a fast Python package and project manager, to install ComfyUI and ComfyScript:\n\n\u003Cdetails>\n\n[Install uv](https:\u002F\u002Fdocs.astral.sh\u002Fuv\u002Fgetting-started\u002Finstallation\u002F) first. Then create a venv, install Comfy-Cli and ComfyUI:\n```sh\nmkdir ComfyUI\ncd ComfyUI\nuv venv --seed --python 3.12\nuv pip install comfy-cli\nuv run comfy --workspace . install\n\n# (Optional) Start ComfyUI to test\n# uv run main.py\n```\n\nInstall ComfyScript:\n```sh\ngit clone https:\u002F\u002Fgithub.com\u002FChaoses-Ib\u002FComfyScript.git .\u002Fcustom_nodes\u002FComfyScript\nuv pip install -e \".\u002Fcustom_nodes\u002FComfyScript[default]\"\n```\n\nUpdate ComfyScript:\n```sh\ngit -C \".\u002Fcustom_nodes\u002FComfyScript\" pull\nuv pip install -e \".\u002Fcustom_nodes\u002FComfyScript[default]\"\n```\n\n`[default]` is necessary to install common dependencies. See [`pyproject.toml`](pyproject.toml) for other options. If no option is specified, ComfyScript will be installed without any dependencies.\n\nNote uv can only discover the ComfyUI venv when the working directory is `ComfyUI` or `ComfyUI\u002F*`. 
To use the venv in other directories, like in `ComfyUI\u002Fcustom_nodes\u002FComfyScript` or your script directory, you need to activate it manually:\n```pwsh\ncd ComfyUI\n# Windows\n.\\.venv\\Scripts\\activate\n# Linux\nsource .venv\u002Fbin\u002Factivate\n```\n\nSee [VS Code](docs\u002FREADME.md#vs-code) if you have problems when using ComfyScript in VS Code.\n\u003C\u002Fdetails>\n\n### Installing with ComfyUI package\nIf you want to install ComfyUI as a pip package:\n\n\u003Cdetails>\n\nInstall [ComfyUI package](https:\u002F\u002Fgithub.com\u002Fhiddenswitch\u002FComfyUI) first:\n- If PyTorch is not installed:\n\n  ```sh\n  python -m pip install git+https:\u002F\u002Fgithub.com\u002Fhiddenswitch\u002FComfyUI.git\n  ```\n- If PyTorch is already installed (e.g. Google Colab):\n\n  ```sh\n  python -m pip install wheel\n  python -m pip install --no-build-isolation git+https:\u002F\u002Fgithub.com\u002Fhiddenswitch\u002FComfyUI.git\n  ```\n\nInstall\u002Fupdate ComfyScript:\n```sh\npython -m pip install -U \"comfy-script[default]\"\n```\n\n`[default]` is necessary to install common dependencies. See [`pyproject.toml`](pyproject.toml) for other options. 
If no option is specified, ComfyScript will be installed without any dependencies.\n\nIf there are problems with the latest ComfyUI package, one can use the last tested version:\n```sh\npython -m pip install --no-build-isolation git+https:\u002F\u002Fgithub.com\u002Fhiddenswitch\u002FComfyUI.git@95a12f42e2b0c78202af10f2337009bd769157a7\n```\n\u003C\u002Fdetails>\n\n### Containers\n- [Modal](examples\u002Fmodal.py) by @the-dream-machine (ComfyUI + Comfy-Cli)\n- [promeG\u002Fcomfyui: ComfyUI docker images](https:\u002F\u002Fgithub.com\u002FpromeG\u002Fcomfyui)\n\n### Others\nSee [troubleshooting](docs\u002FREADME.md#troubleshooting) and [VS Code](docs\u002FREADME.md#vs-code) if you encounter any problems.\nTo uninstall, see [uninstallation](docs\u002FREADME.md#uninstallation).\n\n## Transpiler\nThe transpiler can translate ComfyUI's workflows to ComfyScript.\n\nWhen ComfyScript is installed as custom nodes, `SaveImage` and similar nodes will be hooked to automatically save the script as the image's metadata. 
The script will also be printed to the terminal.\n\nIf you installed ComfyScript outside of ComfyUI, you can still use the transpiler by:\n- [CLI](docs\u002FTranspiler.md#cli)\n  ```sh\n  python -m comfy_script.transpile "workflow.json" --api http:\u002F\u002F127.0.0.1:8188\u002F\n  ```\n  Or without installing ComfyScript, directly with uv:\n  ```sh\n  uvx --from "comfy-script[default]" python -m comfy_script.transpile "workflow.json" --api http:\u002F\u002F127.0.0.1:8188\u002F\n  ```\n- [Python code](docs\u002FTranspiler.md#from-python-code)\n- Jupyter Notebook \u002F web: [MetadataViewer](#metadataviewer)\n\nFor example, here is a workflow in ComfyUI:\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FChaoses-Ib_ComfyScript_readme_d0d8e206dbb4.png)\n\nComfyScript translated from it:\n```python\nmodel, clip, vae = CheckpointLoaderSimple('v1-5-pruned-emaonly.ckpt')\nconditioning = CLIPTextEncode('beautiful scenery nature glass bottle landscape, , purple galaxy bottle,', clip)\nconditioning2 = CLIPTextEncode('text, watermark', clip)\nlatent = EmptyLatentImage(512, 512, 1)\nlatent = KSampler(model, 156680208700286, 20, 8, 'euler', 'normal', conditioning, conditioning2, latent, 1)\nimage = VAEDecode(latent, vae)\nSaveImage(image, 'ComfyUI')\n```\n\nIf there are two or more `SaveImage` nodes in one workflow, only the necessary inputs of each node will be translated to scripts. For example, here is a 2-pass txt2img (hires fix) workflow:\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FChaoses-Ib_ComfyScript_readme_ae0eb0d933b6.png)\n\nThe ComfyScript saved for each of the two saved images is, respectively:\n1. 
```python\n   model, clip, vae = CheckpointLoaderSimple('v2-1_768-ema-pruned.ckpt')\n   conditioning = CLIPTextEncode('masterpiece HDR victorian portrait painting of woman, blonde hair, mountain nature, blue sky', clip)\n   conditioning2 = CLIPTextEncode('bad hands, text, watermark', clip)\n   latent = EmptyLatentImage(768, 768, 1)\n   latent = KSampler(model, 89848141647836, 12, 8, 'dpmpp_sde', 'normal', conditioning, conditioning2, latent, 1)\n   image = VAEDecode(latent, vae)\n   SaveImage(image, 'ComfyUI')\n   ```\n2. ```python\n   model, clip, vae = CheckpointLoaderSimple('v2-1_768-ema-pruned.ckpt')\n   conditioning = CLIPTextEncode('masterpiece HDR victorian portrait painting of woman, blonde hair, mountain nature, blue sky', clip)\n   conditioning2 = CLIPTextEncode('bad hands, text, watermark', clip)\n   latent = EmptyLatentImage(768, 768, 1)\n   latent = KSampler(model, 89848141647836, 12, 8, 'dpmpp_sde', 'normal', conditioning, conditioning2, latent, 1)\n   latent2 = LatentUpscale(latent, 'nearest-exact', 1152, 1152, 'disabled')\n   latent2 = KSampler(model, 469771404043268, 14, 8, 'dpmpp_2m', 'simple', conditioning, conditioning2, latent2, 0.5)\n   image = VAEDecode(latent2, vae)\n   SaveImage(image, 'ComfyUI')\n   ```\n\nComparing scripts:\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FChaoses-Ib_ComfyScript_readme_f7e3ed34a6b4.png)\n\n\n## Runtime\nWith the runtime, one can run ComfyScript like this:\n```python\nfrom comfy_script.runtime import *\nload()\nfrom comfy_script.runtime.nodes import *\n\nwith Workflow():\n    model, clip, vae = CheckpointLoaderSimple('v1-5-pruned-emaonly.ckpt')\n    conditioning = CLIPTextEncode('beautiful scenery nature glass bottle landscape, , purple galaxy bottle,', clip)\n    conditioning2 = CLIPTextEncode('text, watermark', clip)\n    latent = EmptyLatentImage(512, 512, 1)\n    latent = KSampler(model, 156680208700286, 20, 8, 'euler', 'normal', conditioning, conditioning2, latent, 1)\n    image = 
VAEDecode(latent, vae)\n    SaveImage(image, 'ComfyUI')\n    \n    # To retrieve `image` instead of saving it, replace `SaveImage` with:\n    # images = util.get_images(image)\n    # `images` is of type `list[PIL.Image.Image]`\n```\n\nA Jupyter Notebook example is available at [`examples\u002Fruntime.ipynb`](examples\u002Fruntime.ipynb).\n\n- [Type stubs](https:\u002F\u002Ftyping.readthedocs.io\u002Fen\u002Flatest\u002Fsource\u002Fstubs.html) will be generated at `comfy_script\u002Fruntime\u002Fnodes.pyi` after loading. Mainstream code editors (e.g. [VS Code](https:\u002F\u002Fcode.visualstudio.com\u002Fdocs\u002Flanguages\u002Fpython)) can use them to help with coding:\n\n  | | |\n  | --- | --- |\n  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FChaoses-Ib_ComfyScript_readme_fbdf03cdd821.png) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FChaoses-Ib_ComfyScript_readme_e036a45e85ff.png) |\n\n  [Python enumerations](https:\u002F\u002Fdocs.python.org\u002F3\u002Fhowto\u002Fenum.html) are generated for all arguments providing the value list. So instead of copying and pasting strings like `'v1-5-pruned-emaonly.ckpt'`, you can use:\n  ```python\n  Checkpoints.v1_5_pruned_emaonly\n  # or\n  CheckpointLoaderSimple.ckpt_name.v1_5_pruned_emaonly\n  ```\n  \n  Embeddings can also be referenced as `Embeddings.my_embedding`, which is equivalent to `'embedding:my-embedding'`. See [enumerations](docs\u002FRuntime.md#enumerations) for details.\n\n  If type stubs are not working for you (cannot get results similar to the screenshot), see [Type stubs not working](docs\u002FRuntime.md#type-stubs-not-working).\n\n- The runtime is asynchronous by default. You can queue multiple tasks without waiting for the first one to finish. 
A daemon thread will watch and report the remaining tasks in the queue and the current progress, for example:\n  ```\n  Queue remaining: 1\n  Queue remaining: 2\n  100%|██████████████████████████████████████████████████| 20\u002F20\n  Queue remaining: 1\n  100%|██████████████████████████████████████████████████| 20\u002F20\n  Queue remaining: 0\n  ```\n  Some control functions are also available:\n  ```python\n  # Interrupt the current task\n  queue.cancel_current()\n  # Clear the queue\n  queue.cancel_remaining()\n  # Interrupt the current task and clear the queue\n  queue.cancel_all()\n  # Call the callback when the queue is empty\n  queue.when_empty(callback)\n\n  # With Workflow:\n  Workflow(cancel_remaining=True)\n  Workflow(cancel_all=True)\n  ```\n\nSee [differences from ComfyUI's web UI](docs\u002FRuntime.md#differences-from-comfyuis-web-ui) if you are a previous user of ComfyUI's web UI, and [runtime](docs\u002FRuntime.md) for the details of runtime.\n\n### [Examples](examples)\n#### [Plotting](examples\u002Fplotting.ipynb)\n```python\nwith Workflow():\n    seed = 0\n    pos = 'sky, 1girl, smile'\n    neg = 'embedding:easynegative'\n    model, clip, vae = CheckpointLoaderSimple(Checkpoints.AOM3A1B_orangemixs)\n    model2, clip2, vae2 = CheckpointLoaderSimple(Checkpoints.CounterfeitV25_25)\n    model2 = TomePatchModel(model2, 0.5)\n    for color in 'red', 'green', 'blue':\n        latent = EmptyLatentImage(440, 640)\n        latent = KSampler(model, seed, steps=15, cfg=6, sampler_name='uni_pc',\n                          positive=CLIPTextEncode(f'{color}, {pos}', clip), negative=CLIPTextEncode(neg, clip),\n                          latent_image=latent)\n        SaveImage(VAEDecode(latent, vae2), f'{seed} {color}')\n        latent = LatentUpscaleBy(latent, scale_by=2)\n        latent = KSampler(model2, seed, steps=15, cfg=6, sampler_name='uni_pc',\n                          positive=CLIPTextEncode(f'{color}, {pos}', clip2), negative=CLIPTextEncode(neg, 
clip2),\n                          latent_image=latent, denoise=0.6)\n        SaveImage(VAEDecode(latent, vae2), f'{seed} {color} hires')\n```\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FChaoses-Ib_ComfyScript_readme_c5831caf0417.png)\n\n#### Auto queue\nAutomatically queue new workflows when the queue becomes empty.\n\nFor example, one can use [comfyui-photoshop](https:\u002F\u002Fgithub.com\u002FNimaNzrii\u002Fcomfyui-photoshop) (currently a bit buggy) to automatically do img2img with the image in Photoshop when it changes:\n```python\ndef f(wf):\n    seed = 0\n    pos = '1girl, angry, middle finger'\n    neg = 'embedding:easynegative'\n    model, clip, vae = CheckpointLoaderSimple(Checkpoints.CounterfeitV25_25)\n    image, width, height = PhotoshopToComfyUI(wait_for_photoshop_changes=True)\n    latent = VAEEncode(image, vae)\n    latent = LatentUpscaleBy(latent, scale_by=1.5)\n    latent = KSampler(model, seed, steps=15, cfg=6, sampler_name='uni_pc',\n                        positive=CLIPTextEncode(pos, clip), negative=CLIPTextEncode(neg, clip),\n                        latent_image=latent, denoise=0.8)\n    PreviewImage(VAEDecode(latent, vae))\nqueue.when_empty(f)\n```\nScreenshot:\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FChaoses-Ib_ComfyScript_readme_c3b02d833101.png)\n\n#### Select and process\nFor example, to generate 3 images at once, and then let the user decide which ones they want to hires fix:\n```python\nimport ipywidgets as widgets\n\nqueue.watch_display(False)\n\nlatents = []\nimage_batches = []\nwith Workflow():\n    seed = 0\n    pos = 'sky, 1girl, smile'\n    neg = 'embedding:easynegative'\n    model, clip, vae = CheckpointLoaderSimple(Checkpoints.AOM3A1B_orangemixs)\n    model2, clip2, vae2 = CheckpointLoaderSimple(Checkpoints.CounterfeitV25_25)\n    for color in 'red', 'green', 'blue':\n        latent = EmptyLatentImage(440, 640)\n        latent = KSampler(model, seed, steps=15, cfg=6, 
sampler_name='uni_pc',\n                          positive=CLIPTextEncode(f'{color}, {pos}', clip), negative=CLIPTextEncode(neg, clip),\n                          latent_image=latent)\n        latents.append(latent)\n        image_batches.append(SaveImage(VAEDecode(latent, vae), f'{seed} {color}'))\n\ngrid = widgets.GridspecLayout(1, len(image_batches))\nfor i, image_batch in enumerate(image_batches):\n    image_batch = image_batch.wait()\n    image = widgets.Image(value=image_batch[0]._repr_png_())\n\n    button = widgets.Button(description=f'Hires fix {i}')\n    def hiresfix(button, i=i):\n        print(f'Image {i} is chosen')\n        with Workflow():\n            latent = LatentUpscaleBy(latents[i], scale_by=2)\n            latent = KSampler(model2, seed, steps=15, cfg=6, sampler_name='uni_pc',\n                            positive=CLIPTextEncode(pos, clip2), negative=CLIPTextEncode(neg, clip2),\n                            latent_image=latent, denoise=0.6)\n            image_batch = SaveImage(VAEDecode(latent, vae2), f'{seed} hires')\n        display(image_batch.wait())\n    button.on_click(hiresfix)\n\n    grid[0, i] = widgets.VBox(children=(image, button))\ndisplay(grid)\n```\nThis example uses [ipywidgets](https:\u002F\u002Fgithub.com\u002Fjupyter-widgets\u002Fipywidgets) for the GUI, but other GUI frameworks can be used as well.\n\nScreenshot:\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FChaoses-Ib_ComfyScript_readme_27a9eec1a9bc.png)\n\n## UI\n### [ipywidgets UI](docs\u002FUI\u002Fipywidgets.md)\n#### [ImageViewer](docs\u002FUI\u002Fipywidgets.md#imageviewer)\nA simple image viewer that can display multiple images with optional titles.\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FChaoses-Ib_ComfyScript_readme_e3f3b5571d7d.png)\n\n### [Solara UI](docs\u002FUI\u002FSolara.md)\nThese Solara widgets can be used in Jupyter Notebook and in web pages.\n\n#### [MetadataViewer](docs\u002FUI\u002FSolara.md#metadataviewer)\nA widget 
for viewing the metadata of an image generated by ComfyScript \u002F ComfyUI \u002F Stable Diffusion web UI. Workflow JSON files are supported too, including both the web UI format and the API format.\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FChaoses-Ib_ComfyScript_readme_cfe10e85dcc2.png)\n\n## Credits\nDate | Sponsor | Comment\n--- | --- | ---\n2026-01-27 | Nils Schuseil | Unknown project\n2025-04-20 | [@derhuebiii](https:\u002F\u002Fgithub.com\u002Fderhuebiii)\n\n## Projects using this library\n- [CarbBot: Simple Discord Bot for interfacing with ComfyUI and\u002For the Stability AI API for text2image generation using the SDXL model](https:\u002F\u002Fgithub.com\u002Fambocclusion\u002FComfyUI-SDXL-DiscordBot)\n- [comfy-character-app: A ComfyUI and ComfyScript Gradio-based app for generating characters using a multi-step process.](https:\u002F\u002Fgithub.com\u002FPraecordi\u002Fcomfy-character-app)\n- [io\\_comfyui: Let Blender work with ComfyUI by ComfyScript.](https:\u002F\u002Fgithub.com\u002Fgameltb\u002Fio_comfyui)\n- [Mea comfy wrap: Simple script for wraping comfy ui workflows for future usage as a micro services with gRPC interface](https:\u002F\u002Fgithub.com\u002Frhoninn11\u002Fmea_comfy)\n- [the-searcher-SD: proof of concept of a tool to enhance likeness of subjects in SDXL](https:\u002F\u002Fgithub.com\u002Fambocclusion\u002Fthe-searcher-SD)\n- [Randomize\\_ComfyScript: Randomizer script for ComfyUI using ComfyScript](https:\u002F\u002Fgithub.com\u002Flingondricka2\u002FRandomize_ComfyScript)\n- ~~[MaLoskins\u002FLineartApp](https:\u002F\u002Fgithub.com\u002FMaLoskins\u002FLineartApp)~~ (Flask)\n\n\n[^graph-gui]: [I hate nodes. 
(No offense comfyui) : StableDiffusion](https:\u002F\u002Fwww.reddit.com\u002Fr\u002FStableDiffusion\u002Fcomments\u002F15cr5xx\u002Fi_hate_nodes_no_offense_comfyui\u002F)","# ComfyScript\n[![PyPI - 版本](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fv\u002Fcomfy-script)](https:\u002F\u002Fpypi.org\u002Fproject\u002Fcomfy-script) ![Python版本（来自PEP 621 TOML）](https:\u002F\u002Fimg.shields.io\u002Fpython\u002Frequired-version-toml?tomlFilePath=https%3A%2F%2Fraw.githubusercontent.com%2FChaoses-Ib%2FComfyScript%2Fmain%2Fpyproject.toml) [![许可证](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fl\u002Fcomfy-script)](LICENSE.txt)\n\nComfyScript 是一个用于 [ComfyUI](https:\u002F\u002Fgithub.com\u002Fcomfyanonymous\u002FComfyUI) 的 Python 前端和库。\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FChaoses-Ib_ComfyScript_readme_c5831caf0417.png)\n\n它具有以下使用场景：\n- 作为 ComfyUI 工作流的 [人类可读格式](https:\u002F\u002Fgithub.com\u002Fcomfyanonymous\u002FComfyUI\u002Fissues\u002F612)。\n\n  这使得比较和重用工作流的不同部分变得容易。此外，由于许多大型语言模型能够较好地处理 Python 代码，因此也可以训练这些模型来生成工作流。这种方法比仅仅让 LLM 提供一些硬编码参数更为强大。\n\n  脚本可以自动从 ComfyUI 的工作流中转换而来。详情请参阅 [转译器](#transpiler)。\n\n- 直接运行脚本以生成图像。\n\n  与使用 Web UI 相比，这样做的一大优势是可以将 Python 代码与 ComfyUI 的节点混合使用，例如进行循环、调用库函数以及轻松封装自定义节点。这还使得添加交互变得更加容易，因为用户界面和逻辑都可以用 Python 编写。另外，有些人可能更习惯于简单的 Python 代码，而不是基于图的 GUI。[^graph-gui]\n\n  详情请参阅 [运行时](#runtime)。脚本可以在本地或通过远程 ComfyUI 服务器执行。\n\n- 将 ComfyUI 用作函数库。\n\n  使用 ComfyScript，可以将 ComfyUI 的节点当作函数来开展机器学习研究、在其他项目中重用节点、调试自定义节点，并优化缓存以加快工作流的运行速度。\n\n  详情请参阅运行时的 [真实模式](docs\u002FRuntime.md#real-mode)。\n\n- 使用脚本生成 ComfyUI 的工作流。\n\n  脚本还可以用来生成 ComfyUI 的工作流，然后在 Web UI 或其他地方使用。这样就可以利用循环生成庞大的工作流，而手动创建这些工作流可能会非常耗时或不切实际。详情请参阅 [工作流生成](docs\u002FRuntime.md#workflow-generation)。此外，还可以从 ComfyScript 生成的图像中加载工作流。\n\n- 通过运行带有某些桩代码的脚本获取所需信息。\n\n  详情请参阅 [工作流信息检索](docs\u002FREADME.md#workflow-information-retrieval)。\n\n- 在没有 Web UI 的情况下，将 ComfyUI 的 Web UI 格式的工作流转换为 API 格式。\n\n## [文档](docs\u002FREADME.md)\n- [简介](#installation)（本页）\n- [运行时](docs\u002FRuntime.md)\n- 
[图像](docs\u002FImages\u002FREADME.md)\n- [模型](docs\u002FModels\u002FREADME.md)\n- 节点\n  - [节点兼容性](docs\u002FNodes\u002FCompatibility.md)\n  - [附加节点](docs\u002FNodes\u002FAdditional.md)\n- [转译器](docs\u002FTranspiler.md)\n- 用户界面\n  - [ipywidgets 界面](docs\u002FUI\u002Fipywidgets.md)\n  - [Solara 界面](docs\u002FUI\u002FSolara.md)\n- [示例](examples\u002FREADME.md)\n- [与 ComfyUI-to-Python-Extension 的区别](docs\u002FREADME.md#differences-from-comfyui-to-python-extension)\n\n## 安装\n### 仅安装 ComfyScript 包\n如果您只想将 ComfyScript 与外部 ComfyUI 服务器一起使用，例如使用云端 ComfyUI 服务器并开发应用程序或库：\n\n\u003Cdetails>\n\n首先安装 Python。\n\n安装或更新 ComfyScript：\n```sh\npython -m pip install -U \"comfy-script[default]\"\n```\n\n保存并运行 [以下代码](examples\u002Fruntime.py) 进行测试（例如 `python examples\u002Fruntime.py`）：\n```py\nfrom comfy_script.runtime import *\n# ComfyUI 服务器\u002F路径\n# 或：load(r'path\u002Fto\u002FComfyUI')\nload('http:\u002F\u002F127.0.0.1:8188\u002F')\nfrom comfy_script.runtime.nodes import *\n\nwith Workflow(wait=True):\n    image = EmptyImage()\n    images = util.get_images(image, save=True)\n```\n\n或者，无需安装 Python，可以直接使用 [uv](https:\u002F\u002Fdocs.astral.sh\u002Fuv\u002F) 来运行 ComfyScript：\n```sh\nuv run examples\u002Fuv.py\n```\n[`examples\u002Fuv.py`](examples\u002Fuv.py)：\n```python\n# \u002F\u002F\u002F script\n# requires-python = \">=3.9\"\n# dependencies = [\n#     \"comfy-script[default]\",\n# ]\n# \u002F\u002F\u002F\nfrom comfy_script.runtime import *\nload('http:\u002F\u002F127.0.0.1:8188\u002F')\nfrom comfy_script.runtime.nodes import *\n\nwith Workflow(wait=True):\n    image = EmptyImage()\n    images = util.get_images(image, save=True)\n```\n\n详情请参阅 [仅安装 ComfyScript 包](docs\u002FREADME.md#installing-only-comfyscript-package)。\n\n\u003C\u002Fdetails>\n\n### 与 ComfyUI 一起安装\n如果您已经安装了 Python 和 ComfyUI：\n\n\u003Cdetails>\n\n如果您尚未安装 ComfyUI，请先安装它。请参阅 [ComfyUI 安装](https:\u002F\u002Fgithub.com\u002Fcomfyanonymous\u002FComfyUI#installing) 或使用 
[Comfy-Cli](https:\u002F\u002Fgithub.com\u002FComfy-Org\u002Fcomfy-cli) 进行安装：\n```sh\npython -m pip install comfy-cli\ncomfy --here install\n```\n\n然后运行以下命令安装 ComfyScript：\n```sh\ncd ComfyUI\u002Fcustom_nodes\ngit clone https:\u002F\u002Fgithub.com\u002FChaoses-Ib\u002FComfyScript.git\ncd ComfyScript\npython -m pip install -e \".[default]\"\n```\n\n更新：\n```sh\ncd ComfyUI\u002Fcustom_nodes\u002FComfyScript\ngit pull\npython -m pip install -e \".[default]\"\n```\n\n`[default]` 是安装常用依赖项所必需的。其他选项请参阅 [`pyproject.toml`](pyproject.toml)。如果未指定任何选项，则 ComfyScript 将在没有任何依赖的情况下安装。\n\n\u003C\u002Fdetails>\n\n### 与 ComfyUI 及 uv venv 一起安装\n如果您尚未安装 Python 或 ComfyUI，可以使用快速的 Python 包和项目管理工具 [uv](https:\u002F\u002Fdocs.astral.sh\u002Fuv\u002F) 来同时安装 ComfyUI 和 ComfyScript：\n\n\u003Cdetails>\n\n首先 [安装 uv](https:\u002F\u002Fdocs.astral.sh\u002Fuv\u002Fgetting-started\u002Finstallation\u002F)。然后创建一个虚拟环境，安装 Comfy-Cli 和 ComfyUI：\n```sh\nmkdir ComfyUI\ncd ComfyUI\nuv venv --seed --python 3.12\nuv pip install comfy-cli\nuv run comfy --workspace . 
install\n\n# （可选）启动 ComfyUI 进行测试\n# uv run main.py\n```\n\n安装 ComfyScript：\n```sh\ngit clone https:\u002F\u002Fgithub.com\u002FChaoses-Ib\u002FComfyScript.git .\u002Fcustom_nodes\u002FComfyScript\nuv pip install -e \".\u002Fcustom_nodes\u002FComfyScript[default]\"\n```\n\n更新 ComfyScript：\n```sh\ngit -C \".\u002Fcustom_nodes\u002FComfyScript\" pull\nuv pip install -e \".\u002Fcustom_nodes\u002FComfyScript[default]\"\n```\n\n`[default]` 是安装常用依赖项所必需的。其他选项请参阅 [`pyproject.toml`](pyproject.toml)。如果未指定任何选项，ComfyScript 将在没有任何依赖的情况下安装。\n\n请注意，uv 只能在工作目录为 `ComfyUI` 或 `ComfyUI\u002F*` 时发现 ComfyUI 的虚拟环境。要在其他目录中使用该虚拟环境，例如在 `ComfyUI\u002Fcustom_nodes\u002FComfyScript` 或您的脚本目录中，您需要手动激活它：\n```pwsh\ncd ComfyUI\n# Windows\n.\\.venv\\Scripts\\activate\n# Linux\nsource .venv\u002Fbin\u002Factivate\n```\n\n如果您在 VS Code 中使用 ComfyScript 时遇到问题，请参阅 [VS Code](docs\u002FREADME.md#vs-code)。\n\u003C\u002Fdetails>\n\n### 使用 ComfyUI 包进行安装\n如果您想将 ComfyUI 作为 pip 包安装：\n\n\u003Cdetails>\n\n首先安装 [ComfyUI 包](https:\u002F\u002Fgithub.com\u002Fhiddenswitch\u002FComfyUI)：\n- 如果尚未安装 PyTorch：\n\n  ```sh\n  python -m pip install git+https:\u002F\u002Fgithub.com\u002Fhiddenswitch\u002FComfyUI.git\n  ```\n- 如果已安装 PyTorch（例如在 Google Colab 中）：\n\n  ```sh\n  python -m pip install wheel\n  python -m pip install --no-build-isolation git+https:\u002F\u002Fgithub.com\u002Fhiddenswitch\u002FComfyUI.git\n  ```\n  \n安装或更新 ComfyScript：\n```sh\npython -m pip install -U \"comfy-script[default]\"\n```\n\n`[default]` 是必需的，用于安装常用依赖项。其他选项请参阅 [`pyproject.toml`](pyproject.toml) 文件。如果不指定任何选项，ComfyScript 将不带任何依赖项安装。\n\n如果最新的 ComfyUI 包出现问题，可以使用上次测试通过的版本：\n```\npython -m pip install --no-build-isolation git+https:\u002F\u002Fgithub.com\u002Fhiddenswitch\u002FComfyUI.git@95a12f42e2b0c78202af10f2337009bd769157a7\n```\n\u003C\u002Fdetails>\n\n### 容器\n- @the-dream-machine 提供的 [Modal](examples\u002Fmodal.py)（ComfyUI + Comfy-Cli）\n- [promeG\u002Fcomfyui：ComfyUI Docker 镜像](https:\u002F\u002Fgithub.com\u002FpromeG\u002Fcomfyui)\n\n### 
其他\n如遇到任何问题，请参阅 [故障排除](docs\u002FREADME.md#troubleshooting) 和 [VS Code](docs\u002FREADME.md#vs-code)。卸载方法请参考 [卸载指南](docs\u002FREADME.md#uninstallation)。\n\n## 转译器\n转译器可以将 ComfyUI 的工作流转换为 ComfyScript。\n\n当 ComfyScript 以自定义节点的形式安装时，`SaveImage` 等节点会被自动挂钩（hook），将脚本保存为图像的元数据，并同时在终端中打印该脚本。\n\n如果您是在 ComfyUI 外部安装了 ComfyScript，仍然可以通过以下方式使用转译器：\n- [CLI](docs\u002FTranspiler.md#cli)\n  ```sh\n  python -m comfy_script.transpile "workflow.json" --api http:\u002F\u002F127.0.0.1:8188\u002F\n  ```\n  或者无需安装 ComfyScript，直接使用 uv：\n  ```sh\n  uvx --from "comfy-script[default]" python -m comfy_script.transpile "workflow.json" --api http:\u002F\u002F127.0.0.1:8188\u002F\n  ```\n- [Python 代码](docs\u002FTranspiler.md#from-python-code)\n- Jupyter Notebook \u002F Web：[MetadataViewer](#metadataviewer)\n\n例如，以下是 ComfyUI 中的一个工作流：\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FChaoses-Ib_ComfyScript_readme_d0d8e206dbb4.png)\n\n由此转换得到的 ComfyScript 如下：\n```python\nmodel, clip, vae = CheckpointLoaderSimple('v1-5-pruned-emaonly.ckpt')\nconditioning = CLIPTextEncode('美丽的风景 自然 玻璃瓶 风景画, , 紫色星系瓶,', clip)\nconditioning2 = CLIPTextEncode('文字 水印', clip)\nlatent = EmptyLatentImage(512, 512, 1)\nlatent = KSampler(model, 156680208700286, 20, 8, 'euler', 'normal', conditioning, conditioning2, latent, 1)\nimage = VAEDecode(latent, vae)\nSaveImage(image, 'ComfyUI')\n```\n\n如果一个工作流中有两个或多个 `SaveImage` 节点，转译器会为每个节点分别生成脚本，且只翻译该节点所必需的输入。例如，以下是一个两步的 txt2img（高分辨率修复）工作流：\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FChaoses-Ib_ComfyScript_readme_ae0eb0d933b6.png)\n\n针对这两个保存图像分别生成的 ComfyScript 如下：\n1. 
```python\n   model, clip, vae = CheckpointLoaderSimple('v2-1_768-ema-pruned.ckpt')\n   conditioning = CLIPTextEncode('杰作 HDR 维多利亚风格女性肖像画，金发，山地自然，蓝天', clip)\n   conditioning2 = CLIPTextEncode('手部缺陷 文字 水印', clip)\n   latent = EmptyLatentImage(768, 768, 1)\n   latent = KSampler(model, 89848141647836, 12, 8, 'dpmpp_sde', 'normal', conditioning, conditioning2, latent, 1)\n   image = VAEDecode(latent, vae)\n   SaveImage(image, 'ComfyUI')\n   ```\n2. ```python\n   model, clip, vae = CheckpointLoaderSimple('v2-1_768-ema-pruned.ckpt')\n   conditioning = CLIPTextEncode('杰作 HDR 维多利亚风格女性肖像画，金发，山地自然，蓝天', clip)\n   conditioning2 = CLIPTextEncode('手部缺陷 文字 水印', clip)\n   latent = EmptyLatentImage(768, 768, 1)\n   latent = KSampler(model, 89848141647836, 12, 8, 'dpmpp_2m', 'simple', conditioning, conditioning2, latent, 1)\n   latent2 = LatentUpscale(latent, 'nearest-exact', 1152, 1152, 'disabled')\n   latent2 = KSampler(model, 469771404043268, 14, 8, 'dpmpp_2m', 'simple', conditioning, conditioning2, latent2, 0.5)\n   image = VAEDecode(latent2, vae)\n   SaveImage(image, 'ComfyUI')\n   ```\n\n脚本对比：\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FChaoses-Ib_ComfyScript_readme_f7e3ed34a6b4.png)\n\n## 运行时\n使用运行时，可以按如下方式运行 ComfyScript：\n```python\nfrom comfy_script.runtime import *\nload()\nfrom comfy_script.runtime.nodes import *\n\nwith Workflow():\n    model, clip, vae = CheckpointLoaderSimple('v1-5-pruned-emaonly.ckpt')\n    conditioning = CLIPTextEncode('美丽的风景 自然 玻璃瓶 风景画, , 紫色星系瓶,', clip)\n    conditioning2 = CLIPTextEncode('文字，水印', clip)\n    latent = EmptyLatentImage(512, 512, 1)\n    latent = KSampler(model, 156680208700286, 20, 8, 'euler', 'normal', conditioning, conditioning2, latent, 1)\n    image = VAEDecode(latent, vae)\n    SaveImage(image, 'ComfyUI')\n    \n    # 若要获取 `image` 而不是保存它，可将 `SaveImage` 替换为：\n    # images = util.get_images(image)\n    # `images` 的类型为 `list[PIL.Image.Image]`\n```\n\nJupyter Notebook 示例可在 [`examples\u002Fruntime.ipynb`](examples\u002Fruntime.ipynb) 中找到。\n\n- 
加载后，将在 `comfy_script\u002Fruntime\u002Fnodes.pyi` 中生成类型存根文件。主流代码编辑器（例如 [VS Code](https:\u002F\u002Fcode.visualstudio.com\u002Fdocs\u002Flanguages\u002Fpython)）可以利用这些存根文件来辅助编码：\n\n  | | |\n  | --- | --- |\n  | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FChaoses-Ib_ComfyScript_readme_fbdf03cdd821.png) | ![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FChaoses-Ib_ComfyScript_readme_e036a45e85ff.png) |\n\n  对于所有提供值列表的参数，都会生成 Python 枚举。因此，您无需再复制粘贴类似 `'v1-5-pruned-emaonly.ckpt'` 的字符串，而是可以直接使用：\n  ```python\n  Checkpoints.v1_5_pruned_emaonly\n  # 或\n  CheckpointLoaderSimple.ckpt_name.v1_5_pruned_emaonly\n  ```\n  \n  嵌入向量也可以通过 `Embeddings.my_embedding` 来引用，这等价于 `'embedding:my-embedding'`。有关详细信息，请参阅 [枚举](docs\u002FRuntime.md#enumerations)。\n\n  如果类型存根对您不起作用（无法获得与截图相似的效果），请参阅 [类型存根不工作](docs\u002FRuntime.md#type-stubs-not-working)。\n\n- 运行时默认是异步的。您可以将多个任务加入队列，而无需等待第一个任务完成。一个守护线程会监控并报告队列中剩余的任务以及当前进度，例如：\n  ```\n  队列剩余：1\n  队列剩余：2\n  100%|██████████████████████████████████████████████████| 20\u002F20\n  队列剩余：1\n  100%|██████████████████████████████████████████████████| 20\u002F20\n  队列剩余：0\n  ```\n  此外，还提供了一些控制功能：\n  ```python\n  # 中断当前任务\n  queue.cancel_current()\n  # 清空队列\n  queue.cancel_remaining()\n  # 中断当前任务并清空队列\n  queue.cancel_all()\n  # 当队列为空时调用回调函数\n  queue.when_empty(callback)\n\n  # 使用 Workflow 时：\n  Workflow(cancel_remaining=True)\n  Workflow(cancel_all=True)\n  ```\n\n如果您之前使用过 ComfyUI 的 Web UI，请参阅 [与 ComfyUI Web UI 的区别](docs\u002FRuntime.md#differences-from-comfyuis-web-ui)，更多关于运行时的详细信息请参阅 [运行时文档](docs\u002FRuntime.md)。\n\n### [示例](examples)\n#### [绘图](examples\u002Fplotting.ipynb)\n```python\nwith Workflow():\n    seed = 0\n    pos = '天空, 1位女孩, 微笑'\n    neg = 'embedding:easynegative'\n    model, clip, vae = CheckpointLoaderSimple(Checkpoints.AOM3A1B_orangemixs)\n    model2, clip2, vae2 = CheckpointLoaderSimple(Checkpoints.CounterfeitV25_25)\n    model2 = TomePatchModel(model2, 0.5)\n    for color in '红色', '绿色', '蓝色':\n        latent = 
EmptyLatentImage(440, 640)\n        latent = KSampler(model, seed, steps=15, cfg=6, sampler_name='uni_pc',\n                          positive=CLIPTextEncode(f'{color}, {pos}', clip), negative=CLIPTextEncode(neg, clip),\n                          latent_image=latent)\n        SaveImage(VAEDecode(latent, vae2), f'{seed} {color}')\n        latent = LatentUpscaleBy(latent, scale_by=2)\n        latent = KSampler(model2, seed, steps=15, cfg=6, sampler_name='uni_pc',\n                          positive=CLIPTextEncode(f'{color}, {pos}', clip2), negative=CLIPTextEncode(neg, clip2),\n                          latent_image=latent, denoise=0.6)\n        SaveImage(VAEDecode(latent, vae2), f'{seed} {color} 高分辨率')\n```\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FChaoses-Ib_ComfyScript_readme_c5831caf0417.png)\n\n#### 自动队列\n当队列为空时，自动将新工作流加入队列。\n\n例如，可以使用 [comfyui-photoshop](https:\u002F\u002Fgithub.com\u002FNimaNzrii\u002Fcomfyui-photoshop)（目前还有一些小问题）来实现：当 Photoshop 中的图像发生变化时，自动执行 img2img 操作：\n```python\ndef f(wf):\n    seed = 0\n    pos = '1位女孩, 生气, 中指'\n    neg = 'embedding:easynegative'\n    model, clip, vae = CheckpointLoaderSimple(Checkpoints.CounterfeitV25_25)\n    image, width, height = PhotoshopToComfyUI(wait_for_photoshop_changes=True)\n    latent = VAEEncode(image, vae)\n    latent = LatentUpscaleBy(latent, scale_by=1.5)\n    latent = KSampler(model, seed, steps=15, cfg=6, sampler_name='uni_pc',\n                        positive=CLIPTextEncode(pos, clip), negative=CLIPTextEncode(neg, clip),\n                        latent_image=latent, denoise=0.8)\n    PreviewImage(VAEDecode(latent, vae))\nqueue.when_empty(f)\n```\n截图：\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FChaoses-Ib_ComfyScript_readme_c3b02d833101.png)\n\n#### 选择与处理\n例如，一次生成 3 张图片，然后让用户决定哪些需要进行高分辨率修复：\n```python\nimport ipywidgets as widgets\n\nqueue.watch_display(False)\n\nlatents = []\nimage_batches = []\nwith Workflow():\n    seed = 0\n    pos = '天空, 1位女孩, 微笑'\n    neg = 
'embedding:easynegative'\n    model, clip, vae = CheckpointLoaderSimple(Checkpoints.AOM3A1B_orangemixs)\n    model2, clip2, vae2 = CheckpointLoaderSimple(Checkpoints.CounterfeitV25_25)\n    for color in '红色', '绿色', '蓝色':\n        latent = EmptyLatentImage(440, 640)\n        latent = KSampler(model, seed, steps=15, cfg=6, sampler_name='uni_pc',\n                          positive=CLIPTextEncode(f'{color}, {pos}', clip), negative=CLIPTextEncode(neg, clip),\n                          latent_image=latent)\n        latents.append(latent)\n        image_batches.append(SaveImage(VAEDecode(latent, vae), f'{seed} {color}'))\n\ngrid = widgets.GridspecLayout(1, len(image_batches))\nfor i, image_batch in enumerate(image_batches):\n    image_batch = image_batch.wait()\n    image = widgets.Image(value=image_batch[0]._repr_png_())\n\n    button = widgets.Button(description=f'高分辨率修复 {i}')\n    def hiresfix(button, i=i):\n        print(f'第 {i} 张图片被选中')\n        with Workflow():\n            latent = LatentUpscaleBy(latents[i], scale_by=2)\n            latent = KSampler(model2, seed, steps=15, cfg=6, sampler_name='uni_pc',\n                            positive=CLIPTextEncode(pos, clip2), negative=CLIPTextEncode(neg, clip2),\n                            latent_image=latent, denoise=0.6)\n            image_batch = SaveImage(VAEDecode(latent, vae2), f'{seed} 高分辨率')\n        display(image_batch.wait())\n    button.on_click(hiresfix)\n\n    grid[0, i] = widgets.VBox(children=(image, button))\ndisplay(grid)\n```\n本示例使用了 [ipywidgets](https:\u002F\u002Fgithub.com\u002Fjupyter-widgets\u002Fipywidgets) 来构建 GUI，当然也可以使用其他 GUI 框架。\n\n截图：\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FChaoses-Ib_ComfyScript_readme_27a9eec1a9bc.png)\n\n## UI\n### [ipywidgets UI](docs\u002FUI\u002Fipywidgets.md)\n#### 
[ImageViewer](docs\u002FUI\u002Fipywidgets.md#imageviewer)\n一个简单的图片查看器，可以显示多张图片，并可选添加标题。\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FChaoses-Ib_ComfyScript_readme_e3f3b5571d7d.png)\n\n### [Solara UI](docs\u002FUI\u002FSolara.md)\n这些 Solara 组件既可以在 Jupyter Notebook 中使用，也可以嵌入到网页中。\n\n#### [MetadataViewer](docs\u002FUI\u002FSolara.md#metadataviewer)\n用于查看由 ComfyScript \u002F ComfyUI \u002F Stable Diffusion Web UI 生成的图片元数据的组件。同时也支持工作流 JSON 文件，包括 Web UI 格式和 API 格式。\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FChaoses-Ib_ComfyScript_readme_cfe10e85dcc2.png)\n\n## 致谢\n日期 | 赞助者 | 备注\n--- | --- | ---\n2026-01-27 | Nils Schuseil | 未知项目\n2025-04-20 | [@derhuebiii](https:\u002F\u002Fgithub.com\u002Fderhuebiii)\n\n## 使用本库的项目\n- [CarbBot：用于对接 ComfyUI 和\u002F或 Stability AI API，以 SDXL 模型进行文生图的简单 Discord 机器人](https:\u002F\u002Fgithub.com\u002Fambocclusion\u002FComfyUI-SDXL-DiscordBot)\n- [comfy-character-app：基于 ComfyUI 和 ComfyScript 的 Gradio 应用程序，用于通过多步骤流程生成角色。](https:\u002F\u002Fgithub.com\u002FPraecordi\u002Fcomfy-character-app)\n- [io_comfyui：通过 ComfyScript 让 Blender 与 ComfyUI 协同工作。](https:\u002F\u002Fgithub.com\u002Fgameltb\u002Fio_comfyui)\n- [Mea comfy wrap：用于将 ComfyUI 工作流封装为微服务，并提供 gRPC 接口的简单脚本](https:\u002F\u002Fgithub.com\u002Frhoninn11\u002Fmea_comfy)\n- [the-searcher-SD：增强 SDXL 中人物相似度的概念验证工具](https:\u002F\u002Fgithub.com\u002Fambocclusion\u002Fthe-searcher-SD)\n- [Randomize_ComfyScript：使用 ComfyScript 的 ComfyUI 随机化脚本](https:\u002F\u002Fgithub.com\u002Flingondricka2\u002FRandomize_ComfyScript)\n- ~~[MaLoskins\u002FLineartApp](https:\u002F\u002Fgithub.com\u002FMaLoskins\u002FLineartApp)~~（Flask）\n\n\n[^graph-gui]: [我讨厌节点。（并非针对 ComfyUI）：StableDiffusion](https:\u002F\u002Fwww.reddit.com\u002Fr\u002FStableDiffusion\u002Fcomments\u002F15cr5xx\u002Fi_hate_nodes_no_offense_comfyui\u002F)","# ComfyScript 快速上手指南\n\nComfyScript 是 [ComfyUI](https:\u002F\u002Fgithub.com\u002Fcomfyanonymous\u002FComfyUI) 的 Python 前端和库。它允许你使用可读性强的 Python 代码来定义、运行和生成 
ComfyUI 工作流，支持循环、函数调用等编程特性，非常适合开发者进行自动化绘图、工作流复用及 ML 研究。\n\n## 环境准备\n\n*   **操作系统**: Windows, Linux, macOS\n*   **Python 版本**: >= 3.9 (推荐 3.10+)\n*   **前置依赖**:\n    *   方案 A：已安装并运行的 ComfyUI 服务（本地或远程）。\n    *   方案 B：完整的 ComfyUI 本地环境（用于作为自定义节点安装）。\n    *   可选工具：[uv](https:\u002F\u002Fdocs.astral.sh\u002Fuv\u002F) (推荐的快速包管理器)。\n\n> **国内加速提示**：安装 Python 依赖时，建议指定清华或阿里镜像源以提升下载速度。\n> 例如：`pip install -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple ...`\n\n## 安装步骤\n\n根据你的使用场景，选择以下任一安装方式：\n\n### 方案一：仅安装 ComfyScript 包（连接现有 ComfyUI 服务）\n适用于已有 ComfyUI 服务器（本地或云端），仅需通过 Python 脚本调用的场景。\n\n1.  **使用 pip 安装**：\n    ```bash\n    python -m pip install -U \"comfy-script[default]\" -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n    ```\n\n2.  **或使用 uv 直接运行（无需预先安装环境）**：\n    ```bash\n    uv run examples\u002Fuv.py\n    ```\n\n### 方案二：作为 ComfyUI 自定义节点安装\n适用于希望将 ComfyScript 集成到 ComfyUI 界面中，利用其自动转译工作流功能的场景。\n\n1.  **进入 ComfyUI 自定义节点目录**：\n    ```bash\n    cd ComfyUI\u002Fcustom_nodes\n    ```\n\n2.  **克隆仓库并安装**：\n    ```bash\n    git clone https:\u002F\u002Fgithub.com\u002FChaoses-Ib\u002FComfyScript.git\n    cd ComfyScript\n    python -m pip install -e \".[default]\" -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n    ```\n    *注：`[default]` 参数用于安装常用依赖，若省略则需手动管理依赖。*\n\n### 方案三：使用 uv 一键搭建完整环境\n适用于尚未安装 Python 或 ComfyUI，希望快速构建隔离开发环境的用户。\n\n1.  **创建项目目录并初始化虚拟环境**：\n    ```bash\n    mkdir ComfyUI && cd ComfyUI\n    uv venv --seed --python 3.12\n    ```\n\n2.  **安装 Comfy-Cli 并部署 ComfyUI**：\n    ```bash\n    uv pip install comfy-cli\n    uv run comfy --workspace . install\n    ```\n\n3.  **安装 ComfyScript**：\n    ```bash\n    git clone https:\u002F\u002Fgithub.com\u002FChaoses-Ib\u002FComfyScript.git .\u002Fcustom_nodes\u002FComfyScript\n    uv pip install -e \".\u002Fcustom_nodes\u002FComfyScript[default]\"\n    ```\n\n## 基本使用\n\n以下是最基础的 Python 脚本示例，展示如何加载模型、生成图像并保存。\n\n### 1. 
编写脚本\n创建文件 `test_comfy.py`，写入以下内容：\n\n```python\nfrom comfy_script.runtime import *\n\n# 加载 ComfyUI 服务地址\n# 如果是本地默认端口，可简写为 load()\nload('http:\u002F\u002F127.0.0.1:8188\u002F')\n\nfrom comfy_script.runtime.nodes import *\n\nwith Workflow(wait=True):\n    # 1. 加载大模型检查点\n    model, clip, vae = CheckpointLoaderSimple('v1-5-pruned-emaonly.ckpt')\n    \n    # 2. 编写提示词\n    conditioning = CLIPTextEncode('beautiful scenery nature glass bottle landscape, purple galaxy bottle', clip)\n    conditioning2 = CLIPTextEncode('text, watermark', clip)\n    \n    # 3. 创建潜在空间图像\n    latent = EmptyLatentImage(512, 512, 1)\n    \n    # 4. 采样生成\n    latent = KSampler(model, 156680208700286, 20, 8, 'euler', 'normal', conditioning, conditioning2, latent, 1)\n    \n    # 5. 解码并保存图像\n    image = VAEDecode(latent, vae)\n    SaveImage(image, 'ComfyUI')\n    \n    # 提示：若不保存文件而是获取 PIL Image 对象，可使用:\n    # images = util.get_images(image)\n```\n\n### 2. 运行脚本\n确保 ComfyUI 服务已在后台运行（默认端口 8188），然后执行：\n\n```bash\npython test_comfy.py\n```\n\n脚本执行完毕后，生成的图片将保存在 ComfyUI 的输出目录中。你也可以在 Jupyter Notebook 中直接运行上述代码进行交互式开发。","一位算法研究员需要批量测试不同提示词组合对生成图像质量的影响，并自动记录实验数据。\n\n### 没有 ComfyScript 时\n- 必须在 ComfyUI 网页端手动拖拽节点、连线，重复构建数十个相似的工作流，效率极低且容易出错。\n- 无法直接使用 Python 的 `for` 循环或逻辑判断来动态调整参数，只能硬编码每个工作流或依赖外部脚本调用 API，导致逻辑割裂。\n- 难以复用现有节点逻辑进行二次开发，调试自定义节点时需要频繁在图形界面和代码之间切换，上下文断裂。\n- 提取工作流中的中间数据（如潜空间特征）非常繁琐，通常需要手动添加保存节点并整理大量临时文件。\n- 团队协作时，图形化的工作流文件难以进行版本对比（Diff），无法清晰看出具体参数或结构的变更。\n\n### 使用 ComfyScript 后\n- 直接用 Python 代码定义工作流，通过简单的 `for` 循环即可自动生成并执行上百种参数组合的实验，大幅缩短研发周期。\n- 将 ComfyUI 节点作为普通 Python 函数调用，无缝混合使用原生库（如 NumPy 处理数据）与 AI 生成逻辑，代码结构清晰统一。\n- 利用“真实模式”直接导入节点进行单元测试或性能分析，无需启动完整的图形界面，调试过程更加轻量高效。\n- 在脚本中灵活插入数据提取逻辑，运行时自动捕获并结构化存储关键指标，彻底告别手动整理文件的痛苦。\n- 工作流即代码，天然支持 Git 版本管理，团队成员可以清晰地对比代码差异，快速定位优化点或回滚错误修改。\n\nComfyScript 通过将图形化工作流转化为可编程的 Python 代码，让复杂的 AI 
实验实现了自动化、版本化和工程化的高效闭环。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FChaoses-Ib_ComfyScript_c5831caf.png","Chaoses-Ib",null,"https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FChaoses-Ib_14d48e06.png","Chaos-es@outlook.com","Chaoses_Ib","https:\u002F\u002Fgithub.com\u002FChaoses-Ib",[80,84],{"name":81,"color":82,"percentage":83},"Python","#3572A5",99.9,{"name":85,"color":86,"percentage":87},"PowerShell","#012456",0.1,663,41,"2026-04-16T23:30:52","MIT","Linux, macOS, Windows","未说明 (取决于后端 ComfyUI 及所加载模型的需求，通常推荐 NVIDIA GPU)","未说明",{"notes":96,"python":97,"dependencies":98},"该工具是 ComfyUI 的 Python 前端和库。它本身不直接规定具体的硬件需求，而是依赖于后端的 ComfyUI 服务器或本地安装的 ComfyUI。因此，实际的 GPU、显存和内存需求完全取决于用户运行的具体工作流和加载的 AI 模型（如 Stable Diffusion 等）。支持多种安装方式：作为独立包连接远程\u002F本地 ComfyUI 服务器、作为 ComfyUI 自定义节点安装、或使用 uv 工具一键部署包含 ComfyUI 的完整环境。",">=3.9",[99,28,100],"comfy-script[default]","uv (可选，用于快速环境管理)",[15],[103,104,105,106,107,108],"comfyui","stable-diffusion","comfy-script","jupyter-notebook","ipython","jupyter","2026-03-27T02:49:30.150509","2026-04-18T09:19:39.355437",[112,117,122,127,132,137],{"id":113,"question_zh":114,"answer_zh":115,"source_url":116},39088,"如何在 Flask 应用中集成并运行 ComfyScript 代码？","可以在 Flask 应用中导入 `comfy_script.runtime` 并调用 `load()` 初始化环境。如果遇到问题，建议尝试禁用所有自定义节点以排查冲突，加载参数可设置为：`load(args=ComfyUIArgs('--disable-all-custom-nodes', '--force-fp16'))`。确保 ComfyUI 和 ComfyScript 均为最新版本，且 Python 环境配置正确（如测试通过的 Python 3.10 + Flask 3.0.3）。","https:\u002F\u002Fgithub.com\u002FChaoses-Ib\u002FComfyScript\u002Fissues\u002F64",{"id":118,"question_zh":119,"answer_zh":120,"source_url":121},39086,"在 Modal 等平台部署时遇到 \"ValueError: Host '127.0.0.1:8199' cannot contain ':'\" 错误如何解决？","该错误通常是由于依赖版本冲突引起的，具体是使用了新版本的 `yarl` 库搭配旧版本的 `aiohttp` 库。解决方案是降级 `yarl` 库的版本，或者确保 `aiohttp` 和 `yarl` 都是最新的兼容版本。维护者指出这是一个环境配置问题，而非 ComfyScript 本身的代码错误。","https:\u002F\u002Fgithub.com\u002FChaoses-Ib\u002FComfyScript\u002Fissues\u002F73",{"id":123,"question_zh":124,"answer_zh":125,"source_url":126},39087,"如何处理像 'Reroute 
(rgthree)' 这样仅包含前端逻辑（UI-only）而没有对应 Python 实现的节点？","这类节点因为没有对应的 Python 类定义，无法被 ComfyScript 的转译器直接识别，会抛出 KeyError。目前的官方建议是查阅兼容性文档（docs\u002FNodes\u002FCompatibility.md}）。对于不支持 API 格式的特定 UI 节点，可能需要等待官方添加专门支持，或者通过复杂的追踪机制来记录实际执行的工作流。暂时避免在纯脚本模式中使用此类纯前端节点。","https:\u002F\u002Fgithub.com\u002FChaoses-Ib\u002FComfyScript\u002Fissues\u002F42",{"id":128,"question_zh":129,"answer_zh":130,"source_url":131},39089,"为什么在 Jupyter 或实时模式下使用 PreviewImage 和 ImageViewer 时会报错？","在实时模式（real mode）下直接使用 `PreviewImage` 或 `ImageViewer` 可能会因为异步事件循环（asyncio）冲突导致 TypeError。作为替代方案，建议使用 PIL 库手动处理图像显示。例如，将 tensor 转换为 numpy 数组并使用 PIL 显示：\n```python\ni = 255. * image.squeeze(0).cpu().numpy()\nimg = Image.fromarray(np.clip(i, 0, 255).astype(np.uint8))\nimg\n```","https:\u002F\u002Fgithub.com\u002FChaoses-Ib\u002FComfyScript\u002Fissues\u002F88",{"id":133,"question_zh":134,"answer_zh":135,"source_url":136},39090,"是否可以直接在 VSCode 中运行 ipywidgets\u002Fgui 界面？","虽然用户询问是否可以直接在 VSCode 中获得 README 中展示的简单 UI，但维护者建议除非封装的函数边界清晰且足够灵活，否则应避免过度依赖此类封装，因为它们可能需要频繁更改并破坏现有代码。对于复杂的逻辑工作流，更推荐的做法是开发自己的复合节点（composite nodes），将大量运行时逻辑（如判断语句、静音操作等）组合在一起，以减少工作流的杂乱程度。","https:\u002F\u002Fgithub.com\u002FChaoses-Ib\u002FComfyScript\u002Fissues\u002F44",{"id":138,"question_zh":139,"answer_zh":140,"source_url":136},39091,"如何减少复杂工作流中的混乱（Spaghetti code）？","当工作流包含大量逻辑（如 if 语句、静音操作等）导致屏幕杂乱时，最佳实践是创建自定义的复合节点。通过将多个基础节点和运行时逻辑封装在一个新的自定义节点中，可以显著简化主工作流的视觉结构，使其更易于管理和维护。这是许多高级用户采用的方法。",[142,147,152,157,162,167,172,177,182,187,192,197,202,207,212,217,222,227,232,237],{"id":143,"version":144,"summary_zh":145,"released_at":146},315018,"v0.7.0a1","> [!NOTE]  \n> 若要从 PyPI 安装此预发布版本（而非本地 Git 仓库），需要使用 `--pre` 选项：`pip install -U --pre comfy-script[default]`\n\n- 转译器：支持将节点输入格式化为关键字参数 (#119, @longredzhong)\n\n  默认行为已更改为：当输入超过 2 个时，自动将其格式化为关键字参数。例如：\n  ```py\n  model, clip, vae = CheckpointLoaderSimple('v1-5-pruned-emaonly.ckpt')\n  conditioning = CLIPTextEncode('美丽的风景、自然、玻璃瓶、山水画风，，紫色星系瓶,', clip)\n  conditioning2 = CLIPTextEncode('文字、水印', clip)\n  latent = 
EmptyLatentImage(width=512, height=512, batch_size=1)\n  latent = KSampler(model=model, seed=156680208700286, steps=20, cfg=8, sampler_name='euler', scheduler='normal', positive=conditioning, negative=conditioning2, latent_image=latent, denoise=1)\n  image = VAEDecode(latent, vae)\n  SaveImage(image, 'ComfyUI')\n  ```\n  CLI 使用方法：\n  ```sh\n    --args [pos|pos2orkwd|kwd]  将节点输入格式化为位置参数或关键字参数。  [默认：Pos2OrKwd]\n  ```\n  例如：`python -m comfy_script.transpile --args kwd workflow.json`\n- 运行时：`util` 模块新增 `concat_latents`、`load_latent_from_path`、`save_latent_and_get_path` 函数 (#118, #29)\n\n  示例：\n  ```python\n  # 用于打印\n  queue.watch_display(False)\n  \n  with Workflow():\n      latent = EmptyLatentImage(batch_size=4)\n      latent_path = util.save_latent_and_get_path(latent)\n  print(latent_path)\n  \n  with Workflow():\n      latent = util.load_latent_from_path(latent_path)\n      # 对潜变量进行一些操作，例如：\n      SaveLatent(latent, 'latents\u002Floaded')\n  ```","2025-12-24T04:29:03",{"id":148,"version":149,"summary_zh":150,"released_at":151},315019,"v0.6.1","- 修复（运行时、客户端）：在 Python 3.12 及更高版本中关闭事件循环 (#117)","2025-11-20T14:39:51",{"id":153,"version":154,"summary_zh":155,"released_at":156},315020,"v0.6.0","> [!NOTE]\n> ComfyScript 是 ComfyUI 的 Python 前端和库。有关详细信息和示例，请参阅 [README](https:\u002F\u002Fgithub.com\u002FChaoses-Ib\u002FComfyScript)。\n\n### 新特性\n- 支持 Python 3.14\n\n  （Solara UI 目前尚不支持 Python 3.14：https:\u002F\u002Fgithub.com\u002Fwidgetti\u002Fsolara\u002Fpull\u002F1108）\n- 运行时\n  - 支持 ComfyUI v3 模式 (#113, #85)\n  - 添加 `util` 模块，包含一些实用函数 (#29, #47, #76)\n    - `get_int`、`get_float`、`get_str` 和 `get_images`\n    - `concat_images`\n    - `save_image` 和 `save_image_cwd`\n    - `save_image_and_get_paths` 以及 `load_image_from_paths`\n\n    示例：\n    ```py\n    latent = KSampler(...)\n    image = VAEDecode(latent, vae)\n    images = util.get_images(image)\n    # `images` 的类型为 `list[PIL.Image.Image]`\n    ```\n  - 添加 `node` 模块，包含 `nodes` 和 `get` 属性 (#17, #30, #59)\n\n    请参阅 `util` 
模块中的示例。\n  - 虚拟模式\n    - `Workflow` 只会将未排队的输出加入队列 (#29)\n    - 支持进度回调 (#36, #102)\n    - 当设置新输出时唤醒 `result()` (#108)\n  - 将 `StrEnum` 存根类型更改为 `StrEnum | str` (#71, @dwgrth)\n  - 如果在 ComfyUI 内部运行，则使用已加载的 ComfyUI (#104)\n  - 默认共享 ComfyUI 包的上下文变量\n\n### 文档\n- 安装\n  - 更新仅安装 ComfyScript 包的说明 (#3, #115, #95)\n\n    安装或更新 ComfyScript：\n    ```sh\n    python -m pip install -U \"comfy-script[default]\"\n    ```\n\n    保存并运行以下代码进行测试（例如 `python examples\u002Fruntime.py`）：\n    ```py\n    from comfy_script.runtime import *\n    # ComfyUI 服务器\u002F路径\n    # 或：load(r'path\u002Fto\u002FComfyUI')\n    load('http:\u002F\u002F127.0.0.1:8188\u002F')\n    from comfy_script.runtime.nodes import *\n\n    with Workflow(wait=True):\n        image = EmptyImage()\n        images = util.get_images(image, save=True)\n    ```\n\n    或者，无需安装 Python，可以直接使用 ComfyScript 和 [uv](https:\u002F\u002Fdocs.astral.sh\u002Fuv\u002F)：\n    ```sh\n    uv run examples\u002Fuv.py\n    ```\n    `examples\u002Fuv.py`：\n    ```python\n    # \u002F\u002F\u002F script\n    # requires-python = \">=3.9\"\n    # dependencies = [\n    #     \"comfy-script[default]\",\n    # ]\n    # \u002F\u002F\u002F\n    from comfy_script.runtime import *\n    load('http:\u002F\u002F127.0.0.1:8188\u002F')\n    from comfy_script.runtime.nodes import *\n\n    with Workflow(wait=True):\n        image = EmptyImage()\n        images = util.get_images(image, save=True)\n    ```\n  - 添加使用 ComfyUI 和 uv venv 的安装方法\n  - 添加 VS Code Notebook 内核列表故障排除\n  - 提升 ComfyUI 包的测试版本 (#33, #29)\n- 转译器：更新用法\n\n  如果您在 ComfyUI 外部安装了 ComfyScript，仍然可以通过以下方式使用转译器：\n  - CLI\n    ```sh\n    python -m comfy_script.transpile \"workflow.json\" --api http:\u002F\u002F127.0.0.1:8188\u002F\n    ```\n    或者，无需安装 ComfyScript，直接使用 uv：\n    ```sh\n    uvx --from \"comfy-script[default]\" python -m comfy_script.transpile \"workflow.json\" --api http:\u002F\u002F127.0.0.1:8188\u002F\n    ```\n  - Python 代码\n  - Jupyter Notebook \u002F 
w","2025-11-19T22:36:03",{"id":158,"version":159,"summary_zh":160,"released_at":161},315021,"v0.5.1","### 文档\n- 安装：添加 [Comfy-Cli](https:\u002F\u002Fgithub.com\u002FChaoses-Ib\u002FComfyScript#with-comfyui) 和 [Modal 示例](https:\u002F\u002Fgithub.com\u002FChaoses-Ib\u002FComfyScript\u002Fblob\u002Fmain\u002Fexamples\u002Fmodal.py) (#68, #69, @the-dream-machine)\n- 模型：添加[列出所有检查点的示例代码](https:\u002F\u002Fgithub.com\u002FChaoses-Ib\u002FComfyScript\u002Ftree\u002Fmain\u002Fdocs\u002FModels#checkpoints) (#65, #69)\n\n### 修复\n- 运行时：comfyui 包无法加载 [comfyui-legacy](https:\u002F\u002Fgithub.com\u002FChaoses-Ib\u002Fcomfyui-legacy) 节点（所有附加节点）(#68)","2024-09-07T20:45:41",{"id":163,"version":164,"summary_zh":165,"released_at":166},315022,"v0.5.0","> [!NOTE]  \n> ComfyScript 是 ComfyUI 的 Python 前端和库。详情和示例请参阅 [README](https:\u002F\u002Fgithub.com\u002FChaoses-Ib\u002FComfyScript)。\n\n### 新特性\n- 运行时\n  - 添加节点预览显示和回调函数 (#36)（实验性）\n\n    @ambocclusion 提供的使用示例：\n\n    https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002F1e4de7b6-a939-4dc3-9b38-91c368bc8f2b\n\n  - 使用 [tqdm](https:\u002F\u002Fgithub.com\u002Ftqdm\u002Ftqdm) 显示进度条，取代自定义的 `print()` 输出\n\n    ![image](https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002Fb6c05469-2a9e-4d16-9fe1-2afa5c14f5f5)\n\n  - 虚拟模式：`when_empty()` 接受无参数的回调函数 (#48)\n  - 实际模式：添加对节点缓存的支持 (#34)（实验性）\n  - 文档\n    - 添加 [类型注解无效的故障排除](https:\u002F\u002Fgithub.com\u002FChaoses-Ib\u002FComfyScript\u002Fblob\u002Fmain\u002Fdocs\u002FRuntime.md#type-stubs-not-working) (#44)\n    - 添加 [获取图像的示例](https:\u002F\u002Fgithub.com\u002FChaoses-Ib\u002FComfyScript\u002Fblob\u002Fmain\u002Fdocs\u002FImages\u002FREADME.md#to-output-directory) (#64, #36)\n    - 更新 `ComfyUIArgs` 文档 (#64)\n\n- 转译器\n  - 支持通过 CLI 将工作流 JSON 转换为可运行脚本 ([CLI 使用说明](https:\u002F\u002Fgithub.com\u002FChaoses-Ib\u002FComfyScript\u002Fblob\u002Fmain\u002Fdocs\u002FTranspiler.md#cli)) (#43)\n\n    使用方法：\n    ```sh\n    python -m comfy_script.transpile 
\"tests\\transpile\\default.json\" --runtime > script.py\n    ```\n\n  - 添加钩子设置 (#27, #38, #42)\n\n    `settings.toml` 文件内容如下：\n    ```toml\n    [transpile.hook]\n    # 当 ComfyScript 作为自定义节点安装时，`SaveImage` 等节点会被钩住，自动将脚本保存为图像的元数据，并在终端打印该脚本。\n    # 若要禁用某项功能，可将其值改为 `false`。\n    save_script = true\n    print_script = true\n    \n    # 尽可能使用 API 格式的工作流，而非 Web UI 格式。\n    # Web UI 格式包含更多信息，但也可能包含一些仅用于 UI 的虚拟节点（如自定义 Reroute、PrimitiveNode、Note 节点），这些节点无法被正确转译。API 格式兼容性更好，但输出可读性较低。\n    prefer_api_format = false\n    ```\n    这些设置也可以通过环境变量或 Python 代码进行覆盖，详细信息请参阅 [`settings.example.toml`](https:\u002F\u002Fgithub.com\u002FChaoses-Ib\u002FComfyScript\u002Fblob\u002Fmain\u002Fsettings.example.toml)。\n\n- 节点：添加 [civitai_comfy_nodes](https:\u002F\u002Fgithub.com\u002FChaoses-Ib\u002Fcivitai_comfy_nodes)，用于从 CivitAI 加载检查点和 LoRA 模型\n\n  示例：\n  ```python\n  model, clip, vae = CivitAICheckpointLoader('101055@128078')\n  model, clip, vae = CivitAICheckpointLoader('https:\u002F\u002Fcivitai.com\u002Fmodels\u002F101055?modelVersionId=128078')\n  model, clip, vae = CivitAICheckpointLoader('https:\u002F\u002Fcivitai.com\u002Fmodels\u002F101055\u002Fsd-xl?modelVersionId=128078')\n\n  model, clip = CivitAILoraLoader(model, clip, '350450@391994', strength_clip=1, strength_model=1)\n  model, clip = CivitAILoraLoader(model, clip, 'https:\u002F\u002Fcivitai.com\u002Fmodels\u002F350450?modelVersionId=391994', strength_clip=1, strength_model=1)\n  model, clip = Civ","2024-09-06T22:23:41",{"id":168,"version":169,"summary_zh":170,"released_at":171},315023,"v0.5.0a5","- 功能（运行时\u002F实际）：添加节点缓存 (#34)","2024-05-15T15:22:01",{"id":173,"version":174,"summary_zh":175,"released_at":176},315024,"v0.5.0a4","### 新特性\n- 运行时：使用 [tqdm](https:\u002F\u002Fgithub.com\u002Ftqdm\u002Ftqdm) 显示进度条，取代自定义的 `print()`。\n- 节点：新增 [civitai_comfy_nodes](https:\u002F\u002Fgithub.com\u002FChaoses-Ib\u002Fcivitai_comfy_nodes)，用于从 CivitAI 加载检查点和 LoRA 模型。\n\n示例：\n```python\nmodel, clip, vae = 
CivitAICheckpointLoader('101055@128078')\nmodel, clip, vae = CivitAICheckpointLoader('https:\u002F\u002Fcivitai.com\u002Fmodels\u002F101055?modelVersionId=128078')\nmodel, clip, vae = CivitAICheckpointLoader('https:\u002F\u002Fcivitai.com\u002Fmodels\u002F101055\u002Fsd-xl?modelVersionId=128078')\n\nmodel, clip = CivitAILoraLoader(model, clip, '350450@391994', strength_clip=1, strength_model=1)\nmodel, clip = CivitAILoraLoader(model, clip, 'https:\u002F\u002Fcivitai.com\u002Fmodels\u002F350450?modelVersionId=391994', strength_clip=1, strength_model=1)\nmodel, clip = CivitAILoraLoader(model, clip, 'https:\u002F\u002Fcivitai.com\u002Fmodels\u002F350450\u002Fsdxl-lightning-lora-2step?modelVersionId=391994', strength_clip=1, strength_model=1)\n```","2024-05-09T11:10:00",{"id":178,"version":179,"summary_zh":180,"released_at":181},315025,"v0.5.0a3","- 功能（转译）：添加钩子设置 (#27, #38)","2024-05-03T14:15:41",{"id":183,"version":184,"summary_zh":185,"released_at":186},315026,"v0.5.0a2","- 修复（客户端）：支持连接到 HTTPS ComfyUI 服务器 (#37，@lucak5s)","2024-04-29T21:21:05",{"id":188,"version":189,"summary_zh":190,"released_at":191},315027,"v0.5.0a1","- 功能（运行时）：添加节点预览显示和回调函数 (#36)\n\n注意：若要从 PyPI 安装预发布版本，而不是从本地 Git 仓库安装，需要使用 `--pre` 选项：`pip install -U --pre comfy-script[default]`","2024-04-21T21:42:54",{"id":193,"version":194,"summary_zh":195,"released_at":196},315028,"v0.4.6","### Fixes\r\n- Runtime:\r\n  - Virtual mode: Local ComfyUI server test of `load()` (v0.4.4+, #32)\r\n  - Standalone virtual mode \u002F real mode:\r\n    - Enum values of `tuple` type\r\n    - Restrict ComfyUI directory test","2024-04-13T17:18:35",{"id":198,"version":199,"summary_zh":200,"released_at":201},315029,"v0.4.5","### Fixes\r\n- Runtime: `comfyui` package breaking changes (#33)\r\n\r\n  Note that there are still other problems in `comfyui`: the built package doesn't include JSON configs needed by models (https:\u002F\u002Fgithub.com\u002Fhiddenswitch\u002FComfyUI\u002Fissues\u002F6). 
A workaround is to use an editable install:\r\n  ```sh\r\n  git clone https:\u002F\u002Fgithub.com\u002Fhiddenswitch\u002FComfyUI.git && cd ComfyUI && pip install --no-build-isolation -e .\r\n  ```","2024-04-13T15:03:59",{"id":203,"version":204,"summary_zh":205,"released_at":206},315030,"v0.4.4","### New features\r\n- Runtime: Virtual mode: Support custom [`ClientSession`](https:\u002F\u002Fdocs.aiohttp.org\u002Fen\u002Flatest\u002Fclient_reference.html#aiohttp.ClientSession) factory (#32)\r\n\r\n### Changes\r\n- Client: `endpoint` and `set_endpoint()` are removed (previously undocumented)","2024-04-09T17:40:50",{"id":208,"version":209,"summary_zh":210,"released_at":211},315031,"v0.4.3","### Fixes\r\n- Runtime\r\n  - Node outputs from nodes other than `SaveImage` nodes are now also converted to `Result` (#22)\r\n  - Add `wait()` method on `ImageBatchResult`\r\n  - Real mode: Unwrap args when calling node (#31)\r\n- Nodes\r\n  - `LoadImageFromPath`: Fix path validation with node outputs instead of literals (https:\u002F\u002Fgithub.com\u002FChaoses-Ib\u002FComfyUI_Ib_CustomNodes\u002Fissues\u002F1)","2024-04-05T11:15:13",{"id":213,"version":214,"summary_zh":215,"released_at":216},315032,"v0.4.2","### New features\r\n- Runtime: Virtual mode: `Path`\u002F`PurePath` can now be used as node arguments without `str()` (#24)\r\n\r\n### Fixes\r\n- Runtime: `comfyui` package `main()` is now async (#28)\r\n- Transpiler: Multiplexer node elimination with removed inputs (#25)","2024-02-26T11:17:37",{"id":218,"version":219,"summary_zh":220,"released_at":221},315033,"v0.4.1","### New features\r\n- Runtime: Standalone virtual mode: Join ComfyUI (wait for all tasks to be done) at process exit by default (#23)\r\n\r\n### Changes\r\n- Runtime\r\n  - String enums with members `{'true', 'false'}` and `{'yes', 'no'}` are now recognized as bool enums\r\n  - Limit the maximum number of enum values to 2000 by default (#23)\r\n\r\n### Fixes\r\n- Runtime: Standalone: Restore the original event loop 
after starting ComfyUI (#23)\r\n- Runtime\u002FTranspiler: Bugs related to empty id (#23, #22)","2024-02-13T10:11:41",{"id":223,"version":224,"summary_zh":225,"released_at":226},315034,"v0.4.0","### New features\r\n- Runtime\r\n  - Standalone [virtual mode](https:\u002F\u002Fgithub.com\u002FChaoses-Ib\u002FComfyScript\u002Fblob\u002Febf73c430b03e76dc6ccf71baa23f292bd7cbc85\u002Fdocs\u002FRuntime.md#virtual-mode)\r\n\r\n    Virtual mode can now start a ComfyUI server by itself. By default, if the server is not available at `http:\u002F\u002F127.0.0.1:8188\u002F`, virtual mode will try to start a new ComfyUI server. See `load()` for details.\r\n\r\n  - Global enums\r\n\r\n    Some enums are now also available in a shorter form, for example:\r\n    ```python\r\n    # Before\r\n    CheckpointLoaderSimple.ckpt_name.v1_5_pruned_emaonly\r\n    # After\r\n    Checkpoints.v1_5_pruned_emaonly\r\n    ```\r\n    Currently, these global enums include `Checkpoints`, `CLIPs`, `CLIPVisions`, `ControlNets`, `Diffusers`, `Embeddings`, `GLIGENs`, `Hypernetworks`, `Loras`, `PhotoMakers`, `StyleModels`, `UNETs`, `UpscaleModels`, `VAEs`, `Samplers` and `Schedulers`. See [global enums](https:\u002F\u002Fgithub.com\u002FChaoses-Ib\u002FComfyScript\u002Fblob\u002Febf73c430b03e76dc6ccf71baa23f292bd7cbc85\u002Fdocs\u002FRuntime.md#global-enums) for details.\r\n\r\n  - Node docstrings\r\n\r\n    Information about each node is now added to its type stub, including the display name, description, category, input range and rounding, and whether it is an output node. For example:\r\n\r\n    ![](https:\u002F\u002Fgithub.com\u002FChaoses-Ib\u002FComfyScript\u002Fblob\u002Febf73c430b03e76dc6ccf71baa23f292bd7cbc85\u002Fdocs\u002Fimages\u002FREADME\u002Ftype-stubs.png?raw=true)\r\n\r\n    In standalone virtual mode and real mode, each node's module name will also be added to its docstring. 
This can be used to figure out where a node came from.\r\n\r\n  - [Real mode](https:\u002F\u002Fgithub.com\u002FChaoses-Ib\u002FComfyScript\u002Fblob\u002Febf73c430b03e76dc6ccf71baa23f292bd7cbc85\u002Fdocs\u002FRuntime.md#real-mode)\r\n    - Workflow tracking (#21)\r\n\r\n      Now workflows will be automatically tracked and saved to images in real mode. But note that changes to inputs made by user code instead of nodes will not be tracked. This means the saved workflows are not guaranteed to be able to reproduce the same results. If user code changes have to be made, one can add some custom metadata to keep the results reproducible. See [real mode](https:\u002F\u002Fgithub.com\u002FChaoses-Ib\u002FComfyScript\u002Fblob\u002Febf73c430b03e76dc6ccf71baa23f292bd7cbc85\u002Fdocs\u002FRuntime.md#real-mode) for an example.\r\n\r\n    - The hidden inputs of nodes are now shown in type stubs (#21)\r\n\r\n      e.g. `extra_pnginfo`, `prompt` and `unique_id`.\r\n\r\n    - Improved compatibility with custom nodes by trying to simulate the real ComfyUI environment as far as possible\r\n\r\n  - Enums are now of `StrEnum` and `FloatEnum` types if applicable. 
Now their values can be used interchangeably with `str` and `float`.\r\n  \r\n    For example, they can now be used as real mode node arguments.\r\n\r\n- Nodes: New nodes for converting images between `Image` and `PIL.Image.Image`, mainly for use in real mode.\r\n\r\n  ```python\r\n  def ImageToPIL(\r\n      images: Image\r\n  ) -> PilImage\r\n\r\n  def PILToImage(\r\n      images: PilImage\r\n  ) -> Image\r\n\r\n  def PILToMask(\r\n      images: PilImage\r\n  ) -> Image\r\n  ```\r\n\r\n  See [images](https:\u002F\u002Fgithub.com\u002FChaoses-Ib\u002FComfyScript\u002Fblob\u002Febf73c430b03e76dc6ccf71baa23f292bd7cbc85\u002Fdocs\u002FImage\u002FREADME.md) for all nodes about loading\u002Fsaving images.\r\n\r\n### Changes\r\n- All dependencies are optional now\r\n\r\n  Now ComfyScript should be installed with `[default]`, for example:\r\n  ```sh\r\n  python -m pip install -e \".[default]\"\r\n  ```\r\n  `[default]` is necessary to install common dependencies. See [`pyproject.toml`](https:\u002F\u002Fgithub.com\u002FChaoses-Ib\u002FComfyScript\u002Fblob\u002Febf73c430b03e76dc6ccf71baa23f292bd7cbc85\u002Fpyproject.toml) for other options. If no option is specified, `comfy-script` will be installed without any dependencies.\r\n\r\n- Runtime\r\n  - VAEs enum member names are changed\r\n    \r\n    For example, `foo.vae.pt` and `foo.vae.safetensors`'s name will now be `foo` instead of `foo_vae`.\r\n\r\n  - Real mode: Nodes are now wrapped instead of being directly modified (#20)\r\n\r\n### Fixes\r\n- Runtime\r\n  - Ignore invalid str enum values (#22)\r\n\r\n    This is mainly for mitigating hacks made by [ComfyUI-VideoHelperSuite](https:\u002F\u002Fgithub.com\u002FKosinkadink\u002FComfyUI-VideoHelperSuite). 
See #22 for details.\r\n  - Node output name to id conversion","2024-02-11T12:54:19",{"id":228,"version":229,"summary_zh":230,"released_at":231},315035,"v0.3.2","### New features\r\n- Runtime: Real mode: Add support for [`comfyui` package](https:\u002F\u002Fgithub.com\u002Fcomfyanonymous\u002FComfyUI\u002Fpull\u002F298)\r\n\r\n### Changes\r\n- Nodes: Additional nodes are now installed as package dependencies instead of Git submodules (#7, #15)\r\n\r\n  See [comfyui-legacy](https:\u002F\u002Fgithub.com\u002FChaoses-Ib\u002Fcomfyui-legacy) for details.\r\n\r\n### Fixes\r\n- Runtime\r\n  - `get_nodes_info()` should not load `nodes` (#15)\r\n  - Real mode: Modify a class only once (#18)","2024-02-03T20:15:36",{"id":233,"version":234,"summary_zh":235,"released_at":236},315036,"v0.3.1","### New features\r\n- Transpiler: Ignore bypassed nodes (#12 @lingondricka2)\r\n- Runtime: Skip the node if failed to load it (#13 @ShagaONhan)\r\n\r\n### Fixes\r\n- Runtime\r\n  - Fix support for nodes with int and float enum types (#13)\r\n  - Real mode: Fix ComfyUI root locating (#14 @ShagaONhan)\r\n- Transpiler: Fix mapping optional inputs to positional arguments (#12)\r\n- Runtime\u002FTranspiler: String literal emitting with strings ending with `'`","2024-01-29T14:50:30",{"id":238,"version":239,"summary_zh":240,"released_at":241},315037,"v0.3.0","### New features\r\n- Add real mode (#6)\r\n\r\n  See below for details.\r\n  \r\n- Add `pyproject.toml` and [PyPI package](https:\u002F\u002Fpypi.org\u002Fproject\u002Fcomfy-script)\r\n\r\n  After installing the package by `python -m pip install -e .`, you can `import comfy_script` anywhere, no longer limited to the root directory of the repository.\r\n\r\n### Changes\r\n- The project layout was changed. 
You cannot `import script` at the repository root anymore.\r\n\r\n  Instead, you should install ComfyScript as a package by `python -m pip install -e .`, or add the `src` directory to `sys.path`:\r\n  ```python\r\n  import sys\r\n  sys.path.insert(0, 'src')\r\n  ```\r\n  And then replace all\r\n  ```python\r\n  from script.runtime import *\r\n  load()\r\n  from script.runtime.nodes import *\r\n  ```\r\n  with\r\n  ```python\r\n  from comfy_script.runtime import *\r\n  load()\r\n  from comfy_script.runtime.nodes import *\r\n  ```\r\n\r\n### Fixes\r\n- Runtime: Fix support for outputs of enum types (#9)\r\n\r\n\u003Chr \u002F>\r\n\r\n## Real mode\r\nIn the original mode, virtual mode, calling a node does not execute it. Instead, the entire workflow only gets executed when it is sent to ComfyUI's server, by generating workflow JSON from the workflow (`wf.api_format_json()`).\r\n\r\nIn real mode, calling a node will execute it directly:\r\n```python\r\nprint(CheckpointLoaderSimple('v1-5-pruned-emaonly.ckpt'))\r\n# (\u003Ccomfy.model_patcher.ModelPatcher object at 0x000002198721ECB0>, \u003Ccomfy.sd.CLIP object at 0x000002198721C250>, \u003Ccomfy.sd.VAE object at 0x000002183B128EB0>)\r\n```\r\n\r\nReal mode is thus more flexible and powerful than virtual mode. It can be used to:\r\n- Do ML research.\r\n\r\n- Reuse custom nodes in other projects.\r\n\r\n  Besides research projects and commercial products, it is also possible to integrate ComfyUI into sd-webui. This way, a feature can be implemented as a node once and then be used in both ComfyUI and sd-webui.\r\n\r\n- Make developing custom nodes easier.\r\n\r\n- Optimize caching to run workflows faster.\r\n\r\n  Because real mode executes the nodes directly, it cannot utilize ComfyUI's cache system. 
But if the lifetimes of variables are maintained carefully enough, it is possible to run workflows faster than ComfyUI, since ComfyUI's cache system uses a naive single-slot cache.\r\n\r\nDifferences from virtual mode:\r\n- Scripts cannot be executed through the API of a ComfyUI server.\r\n\r\n  However, it is still possible to run scripts on a remote machine without the API. For example, you can launch a [Jupyter Server](https:\u002F\u002Fjupyter-server.readthedocs.io\u002Fen\u002Flatest\u002F) and [connect to it remotely](https:\u002F\u002Fcode.visualstudio.com\u002Fdocs\u002Fdatascience\u002Fjupyter-notebooks#_connect-to-a-remote-jupyter-server).\r\n\r\n- As mentioned above, nodes will not cache their outputs themselves. It is the user's responsibility to avoid re-executing nodes with the same inputs.\r\n\r\n- The outputs of output nodes (e.g. `SaveImage`) are not converted to result classes (e.g. `ImageBatchResult`).\r\n\r\n  This may be changed in future versions.\r\n\r\nA complete example:\r\n```python\r\nfrom comfy_script.runtime.real import *\r\nload()\r\nfrom comfy_script.runtime.real.nodes import *\r\n\r\n# Or: with torch.inference_mode()\r\nwith Workflow():\r\n    model, clip, vae = CheckpointLoaderSimple('v1-5-pruned-emaonly.ckpt')\r\n    print(model, clip, vae, sep='\\n')\r\n    # \u003Ccomfy.model_patcher.ModelPatcher object at 0x000002198721ECB0>\r\n    # \u003Ccomfy.sd.CLIP object at 0x000002198721C250>\r\n    # \u003Ccomfy.sd.VAE object at 0x000002183B128EB0>\r\n\r\n    conditioning = CLIPTextEncode('beautiful scenery nature glass bottle landscape, , purple galaxy bottle,', clip)\r\n    conditioning2 = CLIPTextEncode('text, watermark', clip)\r\n    print(conditioning2)\r\n    # [[\r\n    #   tensor([[\r\n    #     [-0.3885, ..., 0.0674],\r\n    #     ...,\r\n    #     [-0.8676, ..., -0.0057]\r\n    #   ]]),\r\n    #   {'pooled_output': tensor([[-1.2670e+00, ..., -1.5058e-01]])}\r\n    # ]]\r\n\r\n    latent = EmptyLatentImage(512, 512, 1)\r\n    
print(latent)\r\n    # {'samples': tensor([[\r\n    #   [[0., ..., 0.],\r\n    #     ...,\r\n    #    [0., ..., 0.]],\r\n    #   [[0., ..., 0.],\r\n    #     ...,\r\n    #    [0., ..., 0.]],\r\n    #   [[0., ..., 0.],\r\n    #     ...,\r\n    #    [0., ..., 0.]],\r\n    #   [[0., ..., 0.],\r\n    #     ...,\r\n    #    [0., ..., 0.]]\r\n    # ]])}\r\n\r\n    latent = KSampler(model, 156680208700286, 20, 8, 'euler', 'normal', conditioning, conditioning2, latent, 1)\r\n\r\n    image = VAEDecode(latent, vae)\r\n    print(image)\r\n    # tensor([[\r\n    #   [[0.3389, 0.3652, 0.3428],\r\n    #     ...,\r\n    #     [0.4277, 0.3789, 0.1445]],\r\n    #   ...,\r\n    #   [[0.6348, 0.5898, 0.5270],\r\n    #     ...,\r\n    #     [0.7012, 0.6680, 0.5952]]\r\n    # ]])\r\n\r\n    print(SaveImage(image, 'ComfyUI'))\r\n    # {'ui': {'images': [\r\n    #   {'filename': 'ComfyUI_00001_.png',\r\n    #     'subfolder': '',\r\n    #     'type': 'output'}\r\n    # ]}}\r\n```\r\n\r\n## Naked mode\r\nIf you have ever gotten to know the internals of ComfyUI, you will realize that real mode is not completely real. Some changes were made to nodes to improve the development experience and keep the","2024-01-28T20:06:18"]