[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-sakalond--StableGen":3,"tool-sakalond--StableGen":64},[4,17,26,40,48,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,2,"2026-04-03T11:11:01",[13,14,15],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":23,"last_commit_at":32,"category_tags":33,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 
道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,34,35,36,15,37,38,13,39],"数据工具","视频","插件","其他","语言模型","音频",{"id":41,"name":42,"github_repo":43,"description_zh":44,"stars":45,"difficulty_score":10,"last_commit_at":46,"category_tags":47,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,38,37],{"id":49,"name":50,"github_repo":51,"description_zh":52,"stars":53,"difficulty_score":10,"last_commit_at":54,"category_tags":55,"status":16},519,"PaddleOCR","PaddlePaddle\u002FPaddleOCR","PaddleOCR 是一款基于百度飞桨框架开发的高性能开源光学字符识别工具包。它的核心能力是将图片、PDF 等文档中的文字提取出来，转换成计算机可读取的结构化数据，让机器真正“看懂”图文内容。\n\n面对海量纸质或电子文档，PaddleOCR 解决了人工录入效率低、数字化成本高的问题。尤其在人工智能领域，它扮演着连接图像与大型语言模型（LLM）的桥梁角色，能将视觉信息直接转化为文本输入，助力智能问答、文档分析等应用场景落地。\n\nPaddleOCR 适合开发者、算法研究人员以及有文档自动化需求的普通用户。其技术优势十分明显：不仅支持全球 100 多种语言的识别，还能在 Windows、Linux、macOS 等多个系统上运行，并灵活适配 CPU、GPU、NPU 等各类硬件。作为一个轻量级且社区活跃的开源项目，PaddleOCR 既能满足快速集成的需求，也能支撑前沿的视觉语言研究，是处理文字识别任务的理想选择。",74939,"2026-04-05T23:16:38",[38,14,13,37],{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":23,"last_commit_at":62,"category_tags":63,"status":16},2471,"tesseract","tesseract-ocr\u002Ftesseract","Tesseract 
是一款历史悠久且备受推崇的开源光学字符识别（OCR）引擎，最初由惠普实验室开发，后由 Google 维护，目前由全球社区共同贡献。它的核心功能是将图片中的文字转化为可编辑、可搜索的文本数据，有效解决了从扫描件、照片或 PDF 文档中提取文字信息的难题，是数字化归档和信息自动化的重要基础工具。\n\n在技术层面，Tesseract 展现了强大的适应能力。从版本 4 开始，它引入了基于长短期记忆网络（LSTM）的神经网络 OCR 引擎，显著提升了行识别的准确率；同时，为了兼顾旧有需求，它依然支持传统的字符模式识别引擎。Tesseract 原生支持 UTF-8 编码，开箱即用即可识别超过 100 种语言，并兼容 PNG、JPEG、TIFF 等多种常见图像格式。输出方面，它灵活支持纯文本、hOCR、PDF、TSV 等多种格式，方便后续数据处理。\n\nTesseract 主要面向开发者、研究人员以及需要构建文档处理流程的企业用户。由于它本身是一个命令行工具和库（libtesseract），不包含图形用户界面（GUI），因此最适合具备一定编程能力的技术人员集成到自动化脚本或应用程序中。",73286,"2026-04-03T01:56:45",[13,14],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":70,"readme_en":71,"readme_zh":72,"quickstart_zh":73,"use_case_zh":74,"hero_image_url":75,"owner_login":76,"owner_name":77,"owner_avatar_url":78,"owner_bio":79,"owner_company":77,"owner_location":80,"owner_email":81,"owner_twitter":76,"owner_website":77,"owner_url":82,"languages":83,"stars":88,"forks":89,"last_commit_at":90,"license":91,"difficulty_score":92,"env_os":93,"env_gpu":94,"env_ram":95,"env_deps":96,"category_tags":105,"github_topics":106,"view_count":23,"oss_zip_url":77,"oss_zip_packed_at":77,"status":16,"created_at":113,"updated_at":114,"faqs":115,"releases":146},2294,"sakalond\u002FStableGen","StableGen","Transform your 3D texturing workflow with the power of generative AI, directly within Blender!","StableGen 是一款专为 Blender 设计的开源 AI 插件，旨在将生成式人工智能无缝融入用户的 3D 创作流程。它解决了传统 3D 资产制作中建模与贴图耗时费力的痛点，让用户能够直接在 Blender 内部，通过单张参考图或文字提示快速生成带有完整贴图的 3D 网格模型，或对现有模型进行高质量纹理重绘。\n\n这款工具特别适合 3D 设计师、概念艺术家以及希望提升工作流的独立开发者使用。其核心技术亮点在于集成了微软的 TRELLIS.2 模型，支持从图像或文本直接生成高细节 3D 资产，并提供多种分辨率模式以适应不同需求。此外，StableGen 依托强大的 ComfyUI 后端，兼容 SDXL、FLUX.1-dev 等多种主流扩散模型，确保纹理生成的多样性与高质量。\n\n独具特色的是，StableGen 支持“场景级多网格同时贴图”，能够一次性为场景中的所有物体赋予协调统一的纹理风格，极大提升了复杂场景的概念设计效率。配合智能的多视角一致性算法和灵活的相机布局策略，它能有效避免贴图接缝问题，确保视觉效果自然流畅。无论是快速原型验证还是批量资产库构建，StableGen 都能为用户提供高效、智能的解决方案。","StableGen 是一款专为 Blender 设计的开源 AI 插件，旨在将生成式人工智能无缝融入用户的 3D 创作流程。它解决了传统 3D 资产制作中建模与贴图耗时费力的痛点，让用户能够直接在 Blender 内部，通过单张参考图或文字提示快速生成带有完整贴图的 3D 
网格模型，或对现有模型进行高质量纹理重绘。\n\n这款工具特别适合 3D 设计师、概念艺术家以及希望提升工作流的独立开发者使用。其核心技术亮点在于集成了微软的 TRELLIS.2 模型，支持从图像或文本直接生成高细节 3D 资产，并提供多种分辨率模式以适应不同需求。此外，StableGen 依托强大的 ComfyUI 后端，兼容 SDXL、FLUX.1-dev 等多种主流扩散模型，确保纹理生成的多样性与高质量。\n\n独具特色的是，StableGen 支持“场景级多网格同时贴图”，能够一次性为场景中的所有物体赋予协调统一的纹理风格，极大提升了复杂场景的概念设计效率。配合智能的多视角一致性算法和灵活的相机布局策略，它能有效避免贴图接缝问题，确保视觉效果自然流畅。无论是快速原型验证还是批量资产库构建，StableGen 都能为用户提供高效、智能的解决方案。","# StableGen: AI-Powered 3D Generation & Texturing in Blender ✨\n\n[![License: GPL v3](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLicense-GPLv3-blue.svg)](https:\u002F\u002Fwww.gnu.org\u002Flicenses\u002Fgpl-3.0)\n[![Blender Version](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FBlender-4.2+%20%7C%205.1%2B-orange.svg)](#system-requirements)\n[![GitHub All Releases](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fdownloads\u002Fsakalond\u002Fstablegen\u002Ftotal?color=brightgreen&label=Downloads)](https:\u002F\u002Fgithub.com\u002Fsakalond\u002Fstablegen\u002Freleases)\n\n**Create 3D assets from images and prompts, then texture and refine them - all inside Blender.**\n\nStableGen is an open-source Blender addon that brings generative AI into your 3D workflow. 
**Generate** fully textured 3D meshes from a single image or text prompt via [TRELLIS.2](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FTRELLIS.2), then **texture and refine** them - or any existing model - using SDXL, FLUX.1-dev, or Qwen Image Edit through a flexible [ComfyUI](https:\u002F\u002Fgithub.com\u002Fcomfyanonymous\u002FComfyUI) backend.\n\n---\n\n\u003Cdetails>\n\u003Csummary>\u003Cstrong>Table of Contents\u003C\u002Fstrong>\u003C\u002Fsummary>\n\n- [🌟 Key Features](#-key-features)\n- [🚀 Showcase Gallery](#-showcase-gallery)\n- [🛠️ How It Works](#️-how-it-works-a-glimpse)\n- [💻 System Requirements](#-system-requirements)\n- [⚙️ Installation](#️-installation)\n- [🚀 Quick Start Guide](#-quick-start-guide)\n  - [Texturing an Existing Model](#texturing-an-existing-model)\n  - [Generating a 3D Model with TRELLIS.2](#generating-a-3d-model-with-trellis2)\n- [📖 Usage & Parameters Overview](#-usage--parameters-overview)\n- [📁 Output Directory Structure](#-output-directory-structure)\n- [🤔 Troubleshooting](#-troubleshooting)\n- [🤝 Contributing](#-contributing)\n- [📜 License](#-license)\n- [🙏 Acknowledgements](#-acknowledgements)\n- [💡 List of planned features](#-list-of-planned-features)\n- [📧 Contact](#-contact)\n\n\u003C\u002Fdetails>\n\n---\n\n## 🌟 Key Features\n\nStableGen brings AI-powered 3D generation and texturing directly into Blender:\n\n* 🧊 **TRELLIS.2: Image & Prompt to 3D:**\n    * Generate fully textured 3D meshes from a single reference image or text prompt using Microsoft's [TRELLIS.2](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FTRELLIS.2) (4B-parameter model).\n    * **Multiple resolution modes:** 512, 1024, 1024 Cascade (recommended), and 1536 Cascade for maximum geometric detail.\n    * **Flexible texture pipeline:** Use TRELLIS.2's native PBR textures, or automatically texture the generated mesh with SDXL, FLUX.1-dev, or Qwen Image Edit for higher-quality diffusion textures.\n    * **Preview Gallery:** Generate multiple candidate images 
with different seeds and pick the best before committing to 3D generation.\n    * **Smart mesh handling:** Auto-recovery from mesh corruption, configurable decimation\u002Fremeshing, import scaling, and studio lighting setup.\n    * VRAM-conscious: disk offloading, configurable attention backend\n    * Powered by [ComfyUI-TRELLIS2](https:\u002F\u002Fgithub.com\u002FPozzettiAndrea\u002FComfyUI-TRELLIS2) (installable via `installer.py`).\n* 🌍 **Scene-Wide Multi-Mesh Texturing:**\n    * Don't just texture one mesh at a time! StableGen is designed to apply textures to **all mesh objects in your scene simultaneously** from your defined camera viewpoints. Alternatively, you can choose to texture only selected objects.\n    * Achieve a cohesive look across entire environments or collections of assets in a single generation pass.\n    * Ideal for concept art, look development for complex scenes, and batch-texturing asset libraries.\n* 🎨 **Multi-View Consistency:**\n    * **Sequential Mode:** Generates textures viewpoint by viewpoint on each mesh, using inpainting and visibility masks for high consistency across complex surfaces.\n    * **Grid Mode:** Processes multiple viewpoints for all meshes simultaneously for faster previews. 
Includes an optional refinement pass.\n    * Sophisticated weighted blending ensures smooth transitions between views.\n* 📷 **Advanced Camera Placement:**\n    * **7 placement strategies:** Orbit Ring, Fan Arc, Hemisphere, PCA-Axis, Normal-Weighted K-means, Greedy Occlusion Coverage, and Interactive Visibility-Weighted placement.\n    * **Per-camera optimal aspect ratios** - each camera gets its own resolution computed from the mesh's silhouette, so no pixels are wasted on letterboxing.\n    * **Unlimited cameras** - no more 8-camera limit.\n    * **Camera generation order** - drag-and-drop reorder list with 6 preset strategies to control the processing order in Sequential mode.\n    * Camera cloning, mirroring, and floating viewport prompt labels.\n* 🎯 **Local Edit Mode:**\n    * Point cameras at specific areas to modify - new texture blends seamlessly over the original using angle-based and vignette-based feathering.\n    * Separate angle ramp and silhouette edge feathering controls for precise blending.\n    * Works with all architectures (SDXL, Flux, Qwen Image Edit).\n* 📐 **Precise Geometric Control with ControlNet:**\n    * Leverage multiple ControlNet units (Depth, Canny, Normal) simultaneously to ensure generated textures respect your model's geometry.\n    * Fine-tune strength, start\u002Fend steps for each ControlNet unit.\n    * Supports custom ControlNet model mapping.\n* 🖌️ **Powerful Style Guidance with IPAdapter:**\n    * Use external reference images to guide the style, mood, and content of your textures with IPAdapter.\n    * Employ IPAdapter without a reference image for enhanced consistency in multi-view generation modes.\n    * Control IPAdapter strength, weight type, and active steps.\n* ⚙️ **Flexible ComfyUI Backend:**\n    * Connects to your existing ComfyUI installation, allowing you to use your preferred SDXL checkpoints, custom LoRAs, and the new Qwen Image Edit workflow alongside experimental FLUX.1-dev support.\n    * Offloads heavy 
computation to the ComfyUI server, keeping Blender mostly responsive.\n* ✨ **Advanced Inpainting & Refinement:**\n    * **Refine Mode (Img2Img):** Re-style, enhance, or add detail to existing textures (StableGen generated or otherwise) using an image-to-image process.\n    * **Local Edit Mode:** Selectively modify specific areas while preserving the rest, with independent angle and vignette feathering controls.\n    * **UV Inpaint Mode:** Intelligently fills untextured areas directly on your model's UV map using surrounding texture context.\n    * **Color Matching:** Match each generated view's colors to the current texture before blending, using multiple algorithms (MKL, Reinhard, Histogram, MVGD).\n* 🛠️ **Integrated Workflow Tools:**\n    * **Camera Setup:** Quickly add and arrange multiple cameras with 7 placement strategies, per-camera aspect ratios, interactive occlusion preview, and customizable generation order.\n    * **View-Specific Prompts:** Assign unique text prompts to individual camera viewpoints for targeted details.\n    * **Texture Baking:** Convert complex procedural StableGen materials into standard UV image textures. 
\"Flatten for Refine\" option lets you bake and continue editing.\n    * **Debug Tools:** Visualize projection coverage, UV alignment, and weight blending without running AI generation.\n    * **HDRI Setup, Modifier Application, Curve Conversion, GIF\u002FMP4 Export & Reproject.**\n* 📋 **Preset System:**\n    * Get started quickly with built-in presets for common scenarios (e.g., \"Default\", \"Characters\", \"Quick Draft\").\n    * Save and manage your own custom parameter configurations for repeatable workflows.\n\n---\n\n## 🚀 Showcase Gallery\n\n\u003Cdetails open>\n\u003Csummary>See what StableGen can do!\u003C\u002Fsummary>\n\n\u003Csub>Tip: Refresh the page to synchronize all GIF animations.\u003C\u002Fsub>\n\n---\n\n### Showcase 1: Text-to-3D (SDXL)\n\nAssets generated entirely from a text prompt using the TRELLIS.2 pipeline with SDXL-based texturing.\n\n| Dragon | Wizard | Hut |\n| :------: | :------: | :------: |\n| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_7e1e18c86cf7.gif\" alt=\"Fantasy dragon\" width=\"200\"> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_93e2940f89ee.gif\" alt=\"Wizard character\" width=\"200\"> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_dc2254c7b1c6.gif\" alt=\"Hut\" width=\"200\"> |\n| **Telescope** | **Robot** | **Cyber Ninja** |\n| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_3491aa5e4f2f.gif\" alt=\"Telescope\" width=\"200\"> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_9bb993fdb14d.gif\" alt=\"Robot\" width=\"200\"> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_c8989d321fb9.gif\" alt=\"Cyber Ninja\" width=\"200\"> |\n\n\u003Cdetails>\n\u003Csummary>Prompts used\u003C\u002Fsummary>\n\n1. 
**Dragon:** *\"fantasy dragon\"*\n2. **Wizard:** *\"wizard character, intricate embroidered purple and gold robes, pointed hat, wooden staff with glowing crystal, leather belt with pouches, fantasy character concept art, 4k\"*\n3. **Hut:** *\"house, small house, cozy, wooden, hut\"*\n4. **Telescope:** *\"antique brass telescope, tarnished patina with bright spots from handling, leather grip wrap, extended sections, mahogany tripod, product photography, 4k\"*\n5. **Robot:** *\"giant robot, mecha, cyberpunk style, sci-fi, white body, intricate details, neon accents\"*\n6. **Cyber Ninja:** *\"full body character, neutral pose, cyber-ninja, futuristic assassin, matte black carbon fiber stealth suit, hexagonal weave pattern, faceless helmet, glowing red neon visor slit, metallic silver shoulder armor, cyberpunk aesthetic, high contrast materials, unreal engine 5 render\"*\n\n\u003C\u002Fdetails>\n\n\n### Showcase 2: Text-to-3D (Qwen)\n\nText-to-3D via TRELLIS.2 with Qwen Image Edit texturing - well-suited for stylized objects and crisp details.\n\n| Barrel | Chest | Crate |\n| :------: | :------: | :------: |\n| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_466242068203.gif\" alt=\"Barrel\" width=\"200\"> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_80f5409b8131.gif\" alt=\"Chest\" width=\"200\"> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_4be9a7e8b6a5.gif\" alt=\"Crate\" width=\"200\"> |\n| **Obelisk** | **Robot** | **Tree Stump** |\n| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_bf043726010e.gif\" alt=\"Obelisk\" width=\"200\"> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_dc680c2c8f23.gif\" alt=\"Robot\" width=\"200\"> | \u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_ba95ac19d945.gif\" alt=\"Tree Stump\" width=\"200\"> |\n\n\u003Cdetails>\n\u003Csummary>Prompts used\u003C\u002Fsummary>\n\n1. **Barrel:** *\"A chunky, stylized wooden barrel bound by thick, oversized iron hoops. The wood has deep, exaggerated hand-carved grooves\"*\n2. **Chest:** *\"A highly detailed wooden treasure chest bound in heavy, dark iron. The chest is slightly open, revealing a pile of glowing gold coins inside. The wood is old and splintered, and the iron has patches of orange rust.\"*\n3. **Crate:** *\"A yellow industrial hazmat shipping crate. On the side, there is a large, highly legible warning label that says \\\"DANGER: BIOHAZARD\\\" in bold black letters. The crate has a digital keypad on the front and two red oxygen tanks strapped to the left side.\"*\n4. **Obelisk:** *\"An ancient, monolithic stone obelisk covered in glowing green runic carvings. The grey stone is deeply cracked from age and covered in patches of thick, fuzzy green moss.\"*\n5. **Robot:** *\"giant robot, mecha, cyberpunk style, sci-fi, white body, intricate details, neon accents\"*\n6. **Tree Stump:** *\"A mystical, ancient gnarled tree stump with exposed, twisting roots. Growing out of the top is a cluster of translucent, glowing bioluminescent blue mushrooms and delicate, thin fern leaves. Fantasy RPG asset, hand-painted texture style mixed with photorealism, highly detailed.\"*\n\n\u003C\u002Fdetails>\n\n\n### Showcase 3: PBR Comparison\n\nPBR material maps (roughness, metallic, normal) can be generated via Marigold decomposition. 
Each pair shows the same object without and with PBR materials.\n\n| House | House (PBR) | Wizard | Wizard (PBR) |\n| :------: | :------: | :------: | :------: |\n| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_5ba10392d418.gif\" alt=\"House (non-PBR)\" width=\"170\"> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_f36eab7d8146.gif\" alt=\"House (PBR)\" width=\"170\"> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_93e2940f89ee.gif\" alt=\"Wizard (non-PBR)\" width=\"170\"> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_51019b6522ce.gif\" alt=\"Wizard (PBR)\" width=\"170\"> |\n| **Chest** | **Chest (PBR)** | **Obelisk** | **Obelisk (PBR)** |\n| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_80f5409b8131.gif\" alt=\"Chest (non-PBR)\" width=\"170\"> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_ecee864a77d5.gif\" alt=\"Chest (PBR)\" width=\"170\"> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_bf043726010e.gif\" alt=\"Obelisk (non-PBR)\" width=\"170\"> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_0fc5e855b481.gif\" alt=\"Obelisk (PBR)\" width=\"170\"> |\n| **Lunar Habitat** | **Lunar Habitat (PBR)** | **Scavenger** | **Scavenger (PBR)** |\n| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_3b7f104a6e3b.gif\" alt=\"Lunar Habitat (non-PBR)\" width=\"170\"> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_5234a0ea43f8.gif\" alt=\"Lunar Habitat (PBR)\" width=\"170\"> | \u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_52399838dc5e.gif\" alt=\"Scavenger (non-PBR)\" width=\"170\"> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_bc8ca96e5122.gif\" alt=\"Scavenger (PBR)\" width=\"170\"> |\n| **Shaman** | **Shaman (PBR)** | **Cyberpunk Woman** | **Cyberpunk Woman (PBR)** |\n| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_5e90c36831ea.gif\" alt=\"Shaman (non-PBR)\" width=\"170\"> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_a8c9c67dcd77.gif\" alt=\"Shaman (PBR)\" width=\"170\"> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_31d08be99b0d.gif\" alt=\"Cyberpunk Woman (non-PBR)\" width=\"170\"> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_d0014a537859.gif\" alt=\"Cyberpunk Woman (PBR)\" width=\"170\"> |\n| **Crate** | **Crate (PBR)** | **Tree Stump** | **Tree Stump (PBR)** |\n| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_4be9a7e8b6a5.gif\" alt=\"Crate (non-PBR)\" width=\"170\"> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_f6435c27c70b.gif\" alt=\"Crate (PBR)\" width=\"170\"> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_ba95ac19d945.gif\" alt=\"Tree Stump (non-PBR)\" width=\"170\"> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_d1e4c029fb53.gif\" alt=\"Tree Stump (PBR)\" width=\"170\"> |\n\n\u003Cdetails>\n\u003Csummary>Prompts used\u003C\u002Fsummary>\n\n1. **House (Qwen):** *\"house, small house, cozy, wooden, hut\"*\n2. 
**Wizard (SDXL):** *\"wizard character, intricate embroidered purple and gold robes, pointed hat, wooden staff with glowing crystal, leather belt with pouches, fantasy character concept art, 4k\"*\n3. **Chest (Qwen):** *\"A highly detailed wooden treasure chest bound in heavy, dark iron. The chest is slightly open, revealing a pile of glowing gold coins inside. The wood is old and splintered, and the iron has patches of orange rust.\"*\n4. **Obelisk (Qwen):** *\"An ancient, monolithic stone obelisk covered in glowing green runic carvings. The grey stone is deeply cracked from age and covered in patches of thick, fuzzy green moss.\"*\n5. **Lunar Habitat (SDXL):** *\"futuristic lunar habitat module, domed cylinder base building, pristine white composite panels, high gloss reflections, gold foil wrapped pipes, circular metal airlock door, glowing blue exterior floodlights, sci-fi base architecture, clean PBR textures, hard surface modeling, 8k\"*\n6. **Scavenger (SDXL):** *\"full body character, A-pose, post-apocalyptic scavenger, oil-stained olive green military jacket, tattered clothing, rusty street sign armor, dirty leather belts, scratched welding mask, wasteland survivalist, grunge textures, heavy weathering, fallout style character asset\"*\n7. **Shaman (SDXL):** *\"full body character, A-pose, tribal shaman, rough woven brown wool, thick white animal fur, carved white bone mask, glowing purple magical runes, bare arms, fantasy RPG character class, organic textures, highly detailed displacement map, ZBrush sculpt style\"*\n8. **Cyberpunk Woman (Qwen):** *\"A futuristic cyberpunk female mercenary standing in a neutral pose. She has a robotic left arm made of black metal and glowing blue wires. She wears a tactical jacket made of synthetic material with glowing LED strips on the collar and futuristic sneakers.\"*\n9. **Crate (Qwen):** *\"A yellow industrial hazmat shipping crate. 
On the side, there is a large, highly legible warning label that says \\\"DANGER: BIOHAZARD\\\" in bold black letters. The crate has a digital keypad on the front and two red oxygen tanks strapped to the left side.\"*\n10. **Tree Stump (Qwen):** *\"A mystical, ancient gnarled tree stump with exposed, twisting roots. Growing out of the top is a cluster of translucent, glowing bioluminescent blue mushrooms and delicate, thin fern leaves. Fantasy RPG asset, hand-painted texture style mixed with photorealism, highly detailed.\"*\n\n\u003C\u002Fdetails>\n\n\n### Showcase 4: PBR Gallery\n\nA selection of assets with PBR materials enabled, demonstrating realistic surface response under varying lighting.\n\n| Pot of Gold | Astrolabe | Tree Stump |\n| :------: | :------: | :------: |\n| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_6ebfc9965c2e.gif\" alt=\"Pot of gold (PBR)\" width=\"200\"> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_ab32d74d78ba.gif\" alt=\"Astrolabe (PBR)\" width=\"200\"> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_d1e4c029fb53.gif\" alt=\"Tree stump (PBR)\" width=\"200\"> |\n| **Rabbit** | **Crate** | **Obelisk (Qwen)** |\n| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_b0a62515de51.gif\" alt=\"Rabbit (PBR)\" width=\"200\"> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_f6435c27c70b.gif\" alt=\"Crate (PBR)\" width=\"200\"> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_0fc5e855b481.gif\" alt=\"Obelisk (PBR)\" width=\"200\"> |\n\n\u003Cdetails>\n\u003Csummary>Prompts used\u003C\u002Fsummary>\n\n1. **Pot of Gold:** *\"pot of gold\"*\n2. **Astrolabe:** *\"A highly detailed, antique steampunk astrolabe resting on a rough-hewn wooden pedestal. 
The astrolabe features gleaming polished brass rings, tarnished copper gears, and a faceted glass crystal in the center. Studio lighting, photorealistic, 8k resolution, intricate mechanical details, isolated on a solid background.\"*\n3. **Tree Stump:** *\"A mystical, ancient gnarled tree stump with exposed, twisting roots. Growing out of the top is a cluster of translucent, glowing bioluminescent blue mushrooms and delicate, thin fern leaves. Fantasy RPG asset, hand-painted texture style mixed with photorealism, highly detailed.\"*\n4. **Rabbit:** *\"a white rabbit\"*\n5. **Crate:** *\"A yellow industrial hazmat shipping crate. On the side, there is a large, highly legible warning label that says \\\"DANGER: BIOHAZARD\\\" in bold black letters. The crate has a digital keypad on the front and two red oxygen tanks strapped to the left side.\"*\n6. **Obelisk (Qwen):** *\"An ancient, monolithic stone obelisk covered in glowing green runic carvings. The grey stone is deeply cracked from age and covered in patches of thick, fuzzy green moss.\"*\n\n\u003C\u002Fdetails>\n\n---\n\n### Showcase 5: Head Stylization (Texturing Only)\n\nTexturing an existing model using prompts and style guidance from an IPAdapter image reference.\n\n**3D Model Source:** \"Brown\" by ucupumar - Available at: [BlendSwap (Blend #15262)](https:\u002F\u002Fwww.blendswap.com\u002Fblend\u002F15262)\n\n\n\n| Untextured Model  | Generated | Generated  | Generated (with a reference image) |\n| :------: | :---------: | :----------: | :-----------------: |\n| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_55919634e53d.gif\" alt=\"Untextured Anime Head\" width=\"170\"> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_023ff35d8a2b.gif\" alt=\"Anime head with red hair\" width=\"170\">  | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_1f71cc0b4d54.gif\" 
alt=\"Anime head with Cyberpunk style\" width=\"170\">   |  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_e9f6f87201aa.gif\" alt=\"Anime head with Starry Night style\" width=\"170\">   | \n| *Base Untextured Model* | *Red Hair* | *Cyberpunk* | *Artistic Style* |\n\n\u003Cdetails>\n\u003Csummary>Prompts used\u003C\u002Fsummary>\n\n1. **Red Hair:** *\"anime girl head, red hair\"*\n2. **Cyberpunk:** *\"girl head, brown hair, cyberpunk style, realistic\"*\n3. **Artistic Style:** *\"anime girl head, artistic style\"* (style guided by IPAdapter reference image shown below)\n\n\u003C\u002Fdetails>\n\u003Cp align=\"left\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_e97b5186b0a1.jpg\" alt=\"The Starry Night - IPAdapter Reference\" width=\"250\">\n  \u003Cbr>\n  \u003Csmall>\u003Cem>Reference: \"The Starry Night\" by Vincent van Gogh (used to guide the \"Artistic Style\" variant)\u003C\u002Fem>\u003C\u002Fsmall>\n\u003C\u002Fp>\n\n\n### Showcase 6: Car Texturing (Texturing Only)\n\nTexturing a car model using different prompts to achieve various visual styles.\n\n**3D Model Source:** \"Pontiac GTO 67\" by thecali - Available at: [BlendSwap (Blend #13575)](https:\u002F\u002Fwww.blendswap.com\u002Fblend\u002F13575)\n\n| Untextured Model  | Generated | Generated | Generated |\n| :------: | :---------: | :----------: | :-----------------: |\n| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_7050e047a81b.gif\" alt=\"Untextured Car\" width=\"170\"> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_90ecab5ef6e8.gif\" alt=\"Green car\" width=\"170\">  | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_baef5f871a5a.gif\" alt=\"Steampunk style car\" width=\"170\">   |  \u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_8ed08563240e.gif\" alt=\"Stealth black car\" width=\"170\">   | \n| *Base Untextured Model* | *Green* | *Steampunk* | *Stealth Black* |\n\n\u003Cdetails>\n\u003Csummary>Prompts used\u003C\u002Fsummary>\n\n1. **Green:** *\"green car\"*\n2. **Steampunk:** *\"steampunk style car\"*\n3. **Stealth Black:** *\"stealth black car\"*\n\n\u003C\u002Fdetails>\n\n\n### Showcase 7: Scene Texturing (Texturing Only)\n\nTexturing a complex scene consisting of many mesh objects.\n\n**3D Model Source:** \"Subway Station Entrance\" by argonius - Available at: [BlendSwap (Blend #19305)](https:\u002F\u002Fwww.blendswap.com\u002Fblend\u002F19305)\n\n| Untextured Scene  | Generated | Generated | Generated |\n| :------: | :---------: | :----------: | :-----------------: |\n| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_ed3bcbe57c92.gif\" alt=\"Untextured Subway Scene\" width=\"170\"> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_044725360422.gif\" alt=\"Subway station\" width=\"170\">  | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_0c17e119fcb1.gif\" alt=\"Overgrown fantasy palace interior\" width=\"170\">   |  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_92c60fd57b4e.gif\" alt=\"Cyberpunk subway station\" width=\"170\">   | \n| *Base Untextured Scene* | *Subway Station* | *Fantasy Palace* | *Cyberpunk* |\n\n\u003Cdetails>\n\u003Csummary>Prompts used\u003C\u002Fsummary>\n\n1. **Subway Station:** *\"subway station\"*\n2. **Fantasy Palace:** *\"an overgrown fantasy palace interior, gold elements\"*\n3. 
**Cyberpunk:** *\"subway station, cyberpunk style, neon lit\"*\n\n\u003C\u002Fdetails>\n\n\u003C\u002Fdetails>\n\n---\n\n## 🛠️ How It Works (A Glimpse)\n\nStableGen acts as an intuitive interface within Blender that communicates with a ComfyUI backend.\n1.  You set up your scene and parameters in the StableGen panel.\n2.  StableGen prepares necessary data (like ControlNet inputs from camera views).\n3.  It constructs a workflow and sends it to your ComfyUI server.\n4.  ComfyUI processes the request using your selected diffusion models.\n5.  Generated images are sent back to Blender.\n6.  StableGen applies these images as textures to your models using sophisticated projection and blending techniques.\n\n---\n\n## 💻 System Requirements\n\n* **Blender:** Version 4.2 – 4.5 (OSL projection) or Blender 5.1+ (GPU-accelerated projection via native Raycast nodes). **Blender 5.0 is not supported** (OSL is broken and native Raycast was not yet available).\n* **Operating System:** Windows 10\u002F11, Linux, or macOS (Apple Silicon).\n* **GPU:** **NVIDIA GPU with CUDA is recommended** for ComfyUI. For further details, check ComfyUI's GitHub page: [https:\u002F\u002Fgithub.com\u002Fcomfyanonymous\u002FComfyUI](https:\u002F\u002Fgithub.com\u002Fcomfyanonymous\u002FComfyUI).\n    * At least 8 GB of VRAM is required to run SDXL at a usable speed; plan for 16 GB or more when running FLUX.1-dev or the Qwen-Image-Edit pipeline.\n* **ComfyUI:** A working installation of ComfyUI. 
StableGen uses this as its backend.\n* **Python:** Version 3.x (Blender bundles its own Python, but a separate Python 3 installation is needed to run the `installer.py` script).\n* **Git:** Required by the `installer.py` script.\n* **Disk Space:** Significant free space for ComfyUI, AI models (10 GB to 50 GB+), and generated textures.\n\n---\n\n## ⚙️ Installation\n\nSetting up StableGen involves installing ComfyUI, then StableGen's dependencies into ComfyUI using our installer script, and finally installing the StableGen plugin in Blender.\n\nFollow the step‑by‑step instructions below to install StableGen.\n\nIf you’d rather watch, Polynox provides a concise video walkthrough:  \n[StableGen Installation & Basic Usage Video Tutorial](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=EVNYAMnn_oQ)\n\n### Step 1: Install ComfyUI (If not already installed)\n\nStableGen relies on a working ComfyUI installation as its backend. This can be done on a separate machine if desired. \n\n*If you wish to use a separate machine for the backend, do steps 1 and 2 there.*\n* If you don't have ComfyUI, please follow the **official ComfyUI installation guide**: [https:\u002F\u002Fgithub.com\u002Fcomfyanonymous\u002FComfyUI#installing](https:\u002F\u002Fgithub.com\u002Fcomfyanonymous\u002FComfyUI#installing).\n    * Install ComfyUI in a dedicated directory. We'll refer to this as `\u003CYourComfyUIDirectory>`.\n    * Ensure you can run ComfyUI and that it's functioning correctly before proceeding.\n\n### Step 2: Install Dependencies (Custom Nodes & AI Models) - Automated (Recommended)\n\nThe `installer.py` script (found in this repository) automates the download and placement of required ComfyUI custom nodes and core AI models into your `\u003CYourComfyUIDirectory>`.\n\n**Prerequisites for the installer:**\n* Python 3.\n* Git installed and accessible in your system's PATH.\n* The path to your ComfyUI installation (`\u003CYourComfyUIDirectory>`).\n* Required Python packages for the script: `requests` and `tqdm`. 
Install them via pip:\n    ```bash\n    pip install requests tqdm\n    ```\n\n**Running the Installer:**\n1.  **Download\u002FLocate the Installer:** Get `installer.py` from this GitHub repository.\n2.  **Execute the Script:**\n    * Open your system's terminal or command prompt.\n    * Navigate to the directory containing `installer.py`.\n    * Run the script:\n        ```bash\n        python installer.py \u003CYourComfyUIDirectory>\n        ```\n        Replace `\u003CYourComfyUIDirectory>` with the actual path. If omitted, the script will prompt for it.\n3.  **Follow On-Screen Instructions:**\n    * The script will display a menu of installation packages. Choose the option(s) that match the features you need.\n    * It will download and place files into the correct subdirectories of `\u003CYourComfyUIDirectory>`.\n\n**Installer Packages Overview:**\n\n| # | Package | What it enables | Size |\n|---|---------|-----------------|------|\n| 1 | Minimal Core | Basic SDXL texturing (bring your own checkpoint + ControlNets) | ~7.3 GB |\n| 2 | Core + Preset Essentials | All built-in presets work out of the box | ~9.8 GB |\n| 3 | **Recommended** Full SDXL Setup | SDXL texturing + PBR decomposition (no checkpoint) | ~19.3 GB |\n| 4 | Complete SDXL + RealVisXL | Everything in #3 plus a ready-to-use checkpoint | ~26.3 GB |\n| 5 | Qwen Core | Qwen Image Edit texturing architecture | ~20.3 GB |\n| 6 | Qwen + Lightning LoRAs | Qwen with additional Lightning LoRAs | ~22.6 GB |\n| 7 | Qwen Nunchaku | Qwen with Int4 quantized Nunchaku model (lower VRAM) | ~33.0 GB |\n| 8 | TRELLIS.2 | Image\u002Ftext-to-3D mesh generation (~5 GB install + ~15.4 GB models on first use) | ~20.4 GB |\n| 9 | Marigold IID | PBR decomposition node (models auto-download on first use) | ~0.01 GB |\n| 10 | StableDelight | Specular-free albedo for PBR (includes model download) | ~3.3 GB |\n| 11 | FLUX.2 Klein *(experimental)* | Klein texturing architecture (~13 GB VRAM required) | ~12.4 GB |\n\n**Common 
setups:**\n- **Full 3D asset generation (SDXL):** Options 3 + 8 (or 4 + 8 with a checkpoint included)\n- **Full 3D asset generation (Qwen):** Options 6 + 8\n- **Texturing only (SDXL):** Option 3 (or 4)\n- **Texturing only (Qwen):** Option 5 (or 6\u002F7)\n- **Add PBR to any setup:** Options 9 + 10 (included in options 3 and 4)\n\n> **Note:** TRELLIS.2 and Marigold IID download additional models automatically on first use via Hugging Face. The sizes shown above include these first-use downloads. Expect the initial run to take longer.\n4.  **Restart ComfyUI:** If ComfyUI was running, restart it to load the new custom nodes.\n\n*(For manual dependency installation, including FLUX.1-dev and Qwen Image Edit setups, see `docs\u002FMANUAL_INSTALLATION.md`.)*\n\n### Step 3: Install StableGen Blender Plugin\n\n1.  Go to the [**Releases** page](https:\u002F\u002Fgithub.com\u002Fsakalond\u002Fstablegen\u002Freleases) of this repository.\n2.  Download the latest `StableGen.zip` file.\n3.  In Blender, go to `Edit > Preferences > Add-ons > Install...`.\n4.  Navigate to and select the downloaded `StableGen.zip` file.\n5.  Enable the \"StableGen\" addon (search for \"StableGen\" and check the box).\n\n### Step 4: Configure StableGen Plugin in Blender\n\n1.  In Blender, go to `Edit > Preferences > Add-ons`.\n2.  Find \"StableGen\" and expand its preferences.\n3.  Set the following paths:\n    * **Output Directory:** Choose a folder where StableGen will save generated images.\n    * **Server Address:** Ensure this matches your ComfyUI server (default `127.0.0.1:8188`).\n    * Review **ControlNet Mapping** if using custom-named ControlNet models.\n4.  Enable online access in Blender if not enabled already. Select `Edit -> Preferences` from the topbar of Blender. Then navigate to `System -> Network` and check the box `Enable Online Access`. 
While StableGen itself does not require internet access, Blender's add-on guidelines call for this setting because the add-on still makes network calls locally (to the ComfyUI server).\n\n---\n\n## 🚀 Quick Start Guide\n\n### Texturing an Existing Model\n\nHere’s how to get your first texture generated with StableGen:\n\n1.  **Start ComfyUI Server:** Make sure it's running in the background.\n2.  **Open Blender & Prepare Scene:**\n    * Have a mesh object ready (e.g., the default Cube).\n    * Ensure the StableGen addon is enabled and configured (see Step 4 above).\n3.  **Access StableGen Panel:** Press `N` in the 3D Viewport, go to the \"StableGen\" tab.\n4.  **Add Cameras (Recommended for Multi-View):**\n    * Select your object.\n    * In the StableGen panel, click \"**Add Cameras**\". Choose `Object` as the center type. Adjust interactively if needed, then confirm.\n5.  **Set Basic Parameters:**\n    * **Prompt:** Type a description (e.g., \"ancient stone wall with moss\").\n    * **Architecture:** Pick the diffusion family (`SDXL`, `Flux 1`, or `Qwen Image Edit`) that matches the workflow you set up.\n    * **Checkpoint:** Select a checkpoint or GGUF file suited to the chosen architecture (e.g., `sdxl_base_1.0` or `Qwen-Image-Edit-2509-Q3_K_M.gguf`).\n    * **Preset:** Choose a preset and apply it. `Default` or `Characters` are good starting points.\n6.  **Hit Generate!** Click the main \"**Generate**\" button.\n7.  **Observe:** Watch the progress in the panel and the ComfyUI console. Your object should update with the new texture! Output files will be in your specified \"Output Directory\".\n    * By default, the generated texture will only be visible in the Rendered viewport shading mode (Cycles render engine).\n\n### Generating a 3D Model with TRELLIS.2\n\nFollow these steps to generate a fully textured 3D mesh from a text prompt or reference image using the TRELLIS.2 pipeline:\n\n1.  
**Prerequisites:** Make sure you have the TRELLIS.2 dependencies installed (see [Installation - Step 2](#step-2-install-dependencies-custom-nodes--ai-models---automated-recommended)) and that your hardware meets the [System Requirements](#-system-requirements).\n2.  **Choose a Preset:** Select and apply one of the **(MESH + TEXTURE)** labeled presets:\n    * **SDXL** - best for creative, prompt-driven workflows.\n    * **Qwen Image Edit** - well-suited for stylized generations, legible text, and specific details. Particularly effective for image-to-3D workflows (turning a picture into a 3D model).\n    * Hover over any preset in Blender for a detailed description of what it does.\n    * Alternatively, use the **TRELLIS.2 (MESH ONLY)** preset if you only need the generated mesh without automatic texturing.\n3.  **Select Input Mode:** Set the **`Generate from`** field to **`Prompt`** for text-to-3D, or **`Image`** to use a reference image.\n4.  **Provide Input:** Write a descriptive prompt or load a reference image.\n5.  *(Optional)* **Enable PBR:** Turn on **PBR generation** under *Advanced Parameters → Output & Material Settings* to produce physically-based material maps (roughness, metallic, normal).\n6.  **Generate:** Click the main **Generate** button and wait for the process to complete.\n7.  *(Optional)* **Refine the Result:** Adjust per-camera prompts and regenerate specific views, or switch to **Local Edit** mode (a preset is available) for targeted touch-ups.\n\n**Exporting for a Game Engine:**\n\n8.  **Bake Textures:** You will most likely need to toggle UV unwrapping (within the `Bake Textures` operator) - the `Smart UV Project` mode works well in most cases.\n9.  **Export:** Use the built-in export tool `Export for Game Engine` or export manually from Blender.\n\n---\n\n## 📖 Usage & Parameters Overview\n\nStableGen provides a comprehensive interface for AI-powered 3D asset generation and texturing, from mesh creation to final PBR export. 
Here's an overview of the main sections and tools available in the StableGen panel:\n\n### Primary Actions & Scene Setup\n\nThese are the main operational buttons and initial setup tools, generally found near the top of the StableGen panel:\n\n* **Generate \u002F Cancel Generation (Main Button):** Starts either 3D mesh generation (TRELLIS.2 pipeline) or texture generation for existing mesh objects, depending on the current mode. While processing, the button changes to \"Cancel Generation.\" Progress bars (overall, phase, and per-step) appear below this button during generation.\n* **Bake Textures:** Converts the dynamic, multi-projection material into a single, standard UV-mapped image texture per object. Also bakes PBR maps (albedo, roughness, metallic, normal, height, AO, emission) if PBR decomposition was enabled. Defaults to Smart UV Project unwrapping. Essential for exporting to game engines.\n* **Add Cameras:** Set up multiple viewpoints using one of 7 placement strategies - from simple orbit rings to geometry-aware occlusion-optimized placement with per-camera aspect ratios. Use the interactive preview to fine-tune placement before confirming.\n* **Collect Camera Prompts:** Cycles through all cameras in your scene, allowing you to type a specific descriptive text prompt for each viewpoint (e.g., \"front view,\" \"close-up on face\"). 
These per-camera prompts are used in conjunction with the main prompt if `Use camera prompts` is enabled in `Viewpoint Blending Settings`.\n\n### Preset Management\n\n* Located prominently in the UI, this system allows you to:\n    * **Select a Preset:** Choose from 30+ built-in presets organized across 4 architecture groups (SDXL\u002FFLUX.1, Qwen Image Edit, FLUX.2 Klein, TRELLIS.2 Pipeline), or select `Custom` to use your current settings.\n    * **Preset Diff Preview:** When you hover over or select a preset, StableGen shows which parameters differ from your current settings and what they will change to.\n    * **Apply Preset:** If you modify a stock preset, this button re-applies its original values.\n    * **Save Preset \u002F Delete Preset:** Save your current configuration as a named preset or remove a custom preset. Include toggles for ControlNet and LoRA settings let you choose what gets saved.\n\n### Main Parameters\n\nThese are your primary controls for defining the generation:\n\n* **Prompt:** The main text description of the texture (or 3D asset) you want to generate.\n* **Checkpoint:** Select the base SDXL checkpoint (for SDXL\u002FFLUX architectures).\n* **Architecture:** Choose from the `SDXL`, `Flux 1`, `Qwen Image Edit`, and `FLUX.2 Klein` (experimental) model architectures. 
For 3D mesh generation, use the TRELLIS.2 pipeline presets.\n* **Generation Mode:** Defines the core strategy for texturing:\n    * `Generate Separately`: Each viewpoint generates independently.\n    * `Generate Sequentially`: Viewpoints generate one by one, using inpainting from previous views for consistency.\n    * `Generate Using Grid`: Combines all views into a grid for a single generation pass, with an optional refinement step.\n    * `Refine\u002FRestyle Texture (Img2Img)`: Uses the current texture as input for an image-to-image process.\n    * `Local Edit`: Selectively modify specific areas by pointing cameras at them - new texture blends over the original with feathered edges.\n    * `UV Inpaint Missing Areas`: Fills untextured areas on a UV map via inpainting.\n* **Target Objects:** Choose whether to texture all visible mesh objects or only selected ones.\n\n### Advanced Parameters (Collapsible Sections)\n\nClick the arrow next to each title to expand and access detailed settings:\n\n* **Core Generation Settings:** Control diffusion basics like Seed, Steps, CFG, Negative Prompt, Sampler, Scheduler and Clip Skip.\n* **LoRA Management:** Add and configure LoRAs (Low-Rank Adaptation) for additional style or content guidance. You can set the model and clip strength for each LoRA.\n* **Viewpoint Blending Settings:** Manage how textures from different camera views are combined, including camera-specific prompts, discard angles, blending weight exponents, camera generation order, and post-generation exponent reset.\n* **Output & Material Settings:** Define fallback color, material properties (BSDF), automatic resolution scaling, and options for baking textures during generation which enables generating with more than 8 viewpoints.\n* **Image Guidance (IPAdapter & ControlNet):** Configure IPAdapter for style transfer using external images and set up multiple ControlNet units (Depth, Canny, etc.) 
for precise structural control.\n* **Inpainting Options:** Fine-tune masking and blending for `Sequential` and `UV Inpaint` modes (e.g., differential diffusion, mask blurring\u002Fgrowing).\n* **Generation Mode Specifics:** Parameters unique to the selected Generation Mode, like refinement options for Grid mode or IPAdapter consistency settings for Sequential\u002FSeparate\u002FRefine modes.\n* **PBR Decomposition:** Enable PBR material extraction after texturing. Toggle individual map types (albedo, roughness, metallic, normal, height, AO, emission), choose albedo source, and configure tiled super-resolution. Only shown when the required Marigold\u002FStableDelight nodes are available on the server.\n* **TRELLIS.2 Settings:** Configure 3D mesh generation - resolution mode, decimation, remeshing, import scale, shading mode, texture mode (Native\u002FSDXL\u002FFLUX\u002FQwen\u002FKlein), preview gallery seed count, and camera placement strategy for texturing.\n\n### Integrated Workflow Tools (Bottom Section)\n\nA collection of utilities to further support your workflow:\n\n* **Scene Queue:** Queue multiple assets for unattended batch processing. Add items with prompt and label, reorder, retry on failure. Supports both texturing and TRELLIS.2 pipelines with optional auto GIF export after each item.\n* **Switch Material:** For selected objects with multiple material slots, quickly set a material at a specific index as the active one.\n* **Add HDRI Light:** Prompts for an HDRI image file and sets it up as the world lighting, providing realistic illumination for your scene.\n* **Apply All Modifiers:** Iterates through all mesh objects in the scene, applies their modifier stacks, and converts geometry instances into real mesh data. 
Helps prepare models for texturing.\n* **Convert Curves to Mesh:** Converts any selected curve objects into mesh objects, which is necessary before StableGen can texture them.\n* **Export Orbit GIF\u002FMP4:** Creates an animated GIF and MP4 of the active object with the camera orbiting around it. Configurable duration, FPS, resolution, render engine (Workbench\u002FEevee\u002FCycles), and HDRI environment modes.\n* **Reproject Images:** Re-applies previously generated textures using the latest Viewpoint Blending Settings. Allows tweaking texture blending without full regeneration.\n* **Mirror Reproject:** Mirrors the last projection camera and image across an axis, then reprojects. Useful for symmetric objects.\n\nExperiment with these settings and tools to achieve a vast range of effects and control! Remember that the optimal parameters can vary greatly depending on the model, subject matter, and desired artistic style.\n\n---\n\n## 📁 Output Directory Structure\n\nStableGen organizes the generated files within the `Output Directory` specified in your addon preferences. For each generation session, a new timestamped folder is created, helping you keep track of different iterations. 
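As a rough illustration of the session layout, a timestamped revision folder following the documented `<YYYY-MM-DDTHH-MM-SS>` pattern could be created like this (a hypothetical helper for clarity only, not StableGen's actual code):

```python
from datetime import datetime
from pathlib import Path

def make_revision_dir(output_dir: str, scene_name: str) -> Path:
    """Create <output_dir>/<scene_name>/<YYYY-MM-DDTHH-MM-SS>/ and return it."""
    # Hyphens are used in the time part (instead of colons) so the folder
    # name stays valid on Windows as well as Linux/macOS.
    stamp = datetime.now().strftime("%Y-%m-%dT%H-%M-%S")
    revision = Path(output_dir) / scene_name / stamp
    revision.mkdir(parents=True, exist_ok=True)
    return revision
```

Each generation run gets its own such folder, so earlier revisions are never overwritten.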
The structure for each session (revision) is as follows:\n\n* `\u003COutput Directory>\u002F`\n    * `\u003CSceneName>\u002F` *(Based on your `.blend` file name, or scene name if unsaved)*\n        * `\u003CYYYY-MM-DDTHH-MM-SS>\u002F` *(Timestamp of generation start - this is the main revision directory)*\n            * `generated\u002F` *(Main output textures from each camera\u002Fviewpoint before being applied or baked)*\n            * `controlnet\u002F` *(Intermediate ControlNet input images)*\n                * `depth\u002F` *(Depth pass renders)*\n                * `canny\u002F` *(Renders processed using the Canny edge detector)*\n                * `normal\u002F` *(Normal pass renders)*\n            * `baked\u002F` *(Textures baked onto UV maps using the standalone `Bake Textures` tool, exported `.glb` files from the `Export for Game Engine` tool)*\n            * `generated_baked\u002F` *(Textures baked as part of the generation process if \"Bake Textures While Generating\" is enabled)*\n            * `inpaint\u002F` *(Files related to inpainting processes, e.g., for `Sequential` mode)*\n                * `render\u002F` *(Renders of the previous state used as context for inpainting)*\n                * `visibility\u002F` *(Visibility masks used during inpainting)*\n            * `uv_inpaint\u002F` *(Files specific to the UV Inpaint mode)*\n                * `uv_visibility\u002F` *(Visibility masks generated on UVs for UV inpainting)*\n            * `misc\u002F` *(Other temporary or miscellaneous files, e.g., renders made for Canny edge detection input)*\n            * `.gif` \u002F `.mp4` *(If the `Export Orbit GIF\u002FMP4` tool is used, these files are saved directly into the timestamped revision directory)*\n            * `prompt.json` *(The most recently generated workflow, which can be loaded in ComfyUI)*\n\n---\n\n## 🤔 Troubleshooting\n\nEncountering issues? Here are some common fixes. 
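For connection problems specifically, it can help to first confirm the ComfyUI backend is reachable from outside Blender. The sketch below is a hypothetical check, not part of StableGen; it assumes ComfyUI's standard `/system_stats` HTTP endpoint and the same `host:port` string as the "Server Address" preference:

```python
import urllib.request

def comfyui_status_url(server_address: str = "127.0.0.1:8188") -> str:
    # Build the URL for ComfyUI's JSON status endpoint from the
    # host:port string configured in StableGen's preferences.
    return f"http://{server_address}/system_stats"

def comfyui_reachable(server_address: str = "127.0.0.1:8188",
                      timeout: float = 3.0) -> bool:
    """Return True if the ComfyUI server answers its status endpoint."""
    try:
        with urllib.request.urlopen(comfyui_status_url(server_address),
                                    timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False
```

If this returns `False` for the address configured in StableGen, the problem is the server or the network path, not the add-on.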
Always check the **Blender System Console** (Window > Toggle System Console) AND the **ComfyUI server console** for error messages.\n\n* **StableGen Panel Not Showing:** Ensure the addon is installed and enabled in Blender's preferences.\n* **\"Cannot generate...\" on Generate Button:** Check Addon Preferences: `Output Directory` and `Server Address` must be correctly set. The server also has to be reachable.\n* **Connection Issues with ComfyUI:**\n    * Make sure your ComfyUI server is running.\n    * Verify the `Server Address` in StableGen preferences.\n    * Check firewall settings.\n* **Models Not Found (Error in ComfyUI Console):**\n    * Run the `installer.py` script.\n    * Manually ensure models are in the correct subfolders of `\u003CYourComfyUIDirectory>\u002Fmodels\u002F` (e.g., `checkpoints\u002F`, `controlnet\u002F`, `loras\u002F`, `ipadapter\u002F`, `clip_vision\u002F`, `clip\u002F`, `vae\u002F`, `unet\u002F`).\n    * Restart ComfyUI after adding new models or custom nodes.\n* **GPU Out Of Memory (OOM):**\n    * Enable `Auto Rescale Resolution` in `Advanced Parameters` > `Output & Material Settings` if disabled.\n    * Try lower bake resolutions if baking.\n    * Close other GPU-intensive applications.\n* **Textures not visible after generation completes:**\n    * Switch to Rendered viewport shading (top right corner, fourth \"sphere\" icon)\n* **Textures not affected by your lighting setup:**\n    * Enable `Apply BSDF` in `Advanced Parameters > Output & Material Settings` and regenerate.\n* **Poor Texture Quality\u002FArtifacts:**\n    * Try using the provided presets.\n    * Adjust prompts and negative prompts.\n    * Experiment with different Generation Modes. `Sequential` with IPAdapter is often good for consistency.\n    * Ensure adequate camera coverage and appropriate `Discard-Over Angle`.\n    * Fine-tune ControlNet strength. 
Too low might ignore geometry; too high might yield flat results.\n    * For `Sequential` mode, check inpainting and visibility mask settings.\n* **All Visible Meshes Textured:** StableGen textures all visible mesh objects by default. You can set `Target Objects` to `Selected` to only texture selected objects.\n\n---\n\n## 🤝 Contributing\n\nWe welcome contributions! Whether it's bug reports, feature suggestions, code contributions, or new presets, please feel free to open an issue or a pull request.\n\n---\n\n## 📜 License\n\nStableGen is released under the **GNU General Public License v3.0**. See the `LICENSE` file for details.\n\n### Third-Party Licenses: TRELLIS.2 Image-to-3D\n\n> **Note:** This section applies **only** to the TRELLIS.2 Image-to-3D feature. StableGen's standard texturing pipelines (SDXL, FLUX.1-dev, Qwen Image Edit) do not use any of the libraries listed below and are unaffected by these licensing restrictions.\n\nThe TRELLIS.2 feature relies on several third-party components, each with its own license. **Users should be aware of these licenses, particularly the non-commercial restrictions on certain NVIDIA libraries used in the TRELLIS.2 textured output pipeline.**\n\n| Component | License | Commercial Use Permitted? 
|\n|---|---|---|\n| [TRELLIS.2](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FTRELLIS.2) (Microsoft) | MIT | ✅ Yes |\n| [TRELLIS.2-4B model weights](https:\u002F\u002Fhuggingface.co\u002Fmicrosoft\u002FTRELLIS.2-4B) | MIT | ✅ Yes |\n| [ComfyUI-TRELLIS2](https:\u002F\u002Fgithub.com\u002FPozzettiAndrea\u002FComfyUI-TRELLIS2) | MIT | ✅ Yes |\n| [DINOv3](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fdinov3) (Meta, image conditioning) | [DINOv3 License](https:\u002F\u002Fai.meta.com\u002Fresources\u002Fmodels-and-libraries\u002Fdinov3-license\u002F) | ✅ Yes |\n| [BiRefNet](https:\u002F\u002Fgithub.com\u002FZhengPeng7\u002FBiRefNet) (background removal) | MIT | ✅ Yes |\n| [FlexGEMM](https:\u002F\u002Fgithub.com\u002FJeffreyXiang\u002FFlexGEMM) (sparse convolutions) | MIT | ✅ Yes |\n| [CuMesh](https:\u002F\u002Fgithub.com\u002FJeffreyXiang\u002FCuMesh) (mesh operations) | MIT | ✅ Yes |\n| O-Voxel (voxel processing, part of TRELLIS.2) | MIT | ✅ Yes |\n| [nvdiffrast](https:\u002F\u002Fgithub.com\u002FNVlabs\u002Fnvdiffrast) (NVIDIA) | NVIDIA Source Code License | ❌ **Non-commercial only** |\n| [nvdiffrec](https:\u002F\u002Fgithub.com\u002FNVlabs\u002Fnvdiffrec) (NVIDIA) | NVIDIA Source Code License | ❌ **Non-commercial only** |\n\n**Important:** The NVIDIA libraries (`nvdiffrast` and `nvdiffrec`) are only used when the TRELLIS.2 **Texture Mode** is set to **\"Native (TRELLIS.2)\"** - specifically for UV rasterization and PBR texture baking. Their license restricts usage to *\"research or evaluation purposes only and not for any direct or indirect monetary gain\"* (Section 3.3). Only NVIDIA and its affiliates may use these libraries commercially.\n\n**All other TRELLIS.2 modes do not introduce licensing restrictions:**\n* **Shape-only mode (\"None\")** - does not use nvdiffrast\u002Fnvdiffrec. 
All other pipeline components are permissively licensed (MIT\u002FApache 2.0 + DINOv3 License).\n* **Projection-based texture modes (\"SDXL\", \"Qwen Image Edit\", ...)** - do not use nvdiffrast\u002Fnvdiffrec. The licensing terms of the selected diffusion model apply as usual (e.g., FLUX.1-dev has its own license terms separate from the TRELLIS.2 pipeline).\n\nIf you require commercial use of the \"Native (TRELLIS.2)\" texture mode, consider contacting NVIDIA regarding commercial licensing for nvdiffrast\u002Fnvdiffrec.\n\n---\n\n## 🙏 Acknowledgements\n\nStableGen builds upon the fantastic work of many individuals and communities. Our sincere thanks go to:\n\n* **Academic Roots:** This plugin originated as a Bachelor's Thesis by Ondřej Sakala at the Czech Technical University in Prague (Faculty of Information Technology), supervised by Ing. Radek Richtr, Ph.D.\n    * Full thesis available at: [https:\u002F\u002Fdspace.cvut.cz\u002Fhandle\u002F10467\u002F123567](https:\u002F\u002Fdspace.cvut.cz\u002Fhandle\u002F10467\u002F123567)\n* **Core Technologies & Communities:**\n    * **ComfyUI** by ComfyAnonymous ([GitHub](https:\u002F\u002Fgithub.com\u002Fcomfyanonymous\u002FComfyUI)) for the powerful and flexible backend.\n    * **ComfyUI-TRELLIS2** by PozzettiAndrea ([GitHub](https:\u002F\u002Fgithub.com\u002FPozzettiAndrea\u002FComfyUI-TRELLIS2)) for the TRELLIS.2 ComfyUI integration.\n    * The **Blender Foundation** and its community for the amazing open-source 3D creation suite.\n* **Inspired by the following Blender addons:**\n    * **Dream Textures** by Carson Katri et al. ([GitHub](https:\u002F\u002Fgithub.com\u002Fcarson-katri\u002Fdream-textures))\n    * **Diffused Texture Addon** by Frederik Hasecke ([GitHub](https:\u002F\u002Fgithub.com\u002FFrederikHasecke\u002Fdiffused-texture-addon))\n* **Pioneering Research:** We are indebted to the researchers behind key advancements that power StableGen. 
The following list highlights some of the foundational and influential works in diffusion models, AI-driven control, and 3D texturing (links to arXiv pre-prints):\n    * **Diffusion Models:**\n        * Ho et al. (2020), Denoising Diffusion Probabilistic Models - [2006.11239](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.11239)\n        * Rombach et al. (2022), Latent Diffusion Models (Stable Diffusion) - [2112.10752](https:\u002F\u002Farxiv.org\u002Fabs\u002F2112.10752)\n    * **AI Control Mechanisms:**\n        * Zhang et al. (2023), ControlNet - [2302.05543](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.05543)\n        * Ye et al. (2023), IP-Adapter - [2308.06721](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.06721)\n    * **Key 3D Texture Synthesis Papers:**\n        * Chen et al. (2023), Text2Tex - [2303.11396](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.11396)\n        * Richardson et al. (2023), TEXTure - [2302.01721](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.01721)\n        * Zeng et al. (2023), Paint3D - [2312.13913](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.13913)\n        * Le et al. (2024), EucliDreamer - [2311.15573](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.15573)\n        * Ceylan et al. (2024), MatAtlas - [2404.02899](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.02899)\n    * **Other Influential Works:**\n        * Siddiqui et al. (2022), Texturify - [2204.02411](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.02411)\n        * Bokhovkin et al. 
(2023), Mesh2Tex - [2304.05868](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.05868)\n        * Levin & Fried (2024), Differential Diffusion - [2306.00950](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.00950)\n\nThe open spirit of the AI and open-source communities is what makes projects like StableGen possible.\n\n---\n\n## 💡 List of planned features\n\nHere are some features we plan to implement in the future (in no particular order):\n* **Upscaling:** Support for upscaling generated textures.\n* **Custom VAE, CLIP model selection:** Ability to select custom VAE and CLIP models in addition to custom ControlNet and LoRA models.\n* **Refine mode improvements:** Features like brush based inpainting.\n* **Brush-based inpainting:** Paint masks directly on the viewport for targeted local edits.\n* **Better remeshing for TRELLIS.2:** Implementing more advanced remeshing techniques to improve the quality of generated meshes.\n\nIf you have any suggestions, please feel free to open an issue!\n\n---\n\n## 📧 Contact\n\nOndřej Sakala\n* Email: `sakalaondrej@gmail.com`\n* X\u002FTwitter: `@sakalond`\n\n---\n*Last Updated: March 5, 2026*\n","# StableGen：在Blender中实现AI驱动的3D生成与贴图✨\n\n[![许可证：GPL v3](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLicense-GPLv3-blue.svg)](https:\u002F\u002Fwww.gnu.org\u002Flicenses\u002Fgpl-3.0)\n[![Blender版本](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FBlender-4.2+%20%7C%205.1%2B-orange.svg)](#系统要求)\n[![GitHub所有发布](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Fdownloads\u002Fsakalond\u002Fstablegen\u002Ftotal?color=brightgreen&label=下载量)](https:\u002F\u002Fgithub.com\u002Fsakalond\u002Fstablegen\u002Freleases)\n\n**从图像和提示词创建3D资产，随后进行贴图与优化——全程在Blender内完成。**\n\nStableGen是一款开源的Blender插件，将生成式AI引入您的3D工作流。通过[TRELLIS.2](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FTRELLIS.2)，您可以**生成**由单张图像或文本提示词驱动的完整贴图3D网格；然后利用SDXL、FLUX.1-dev或Qwen Image 
Edit，并借助灵活的[ComfyUI](https:\u002F\u002Fgithub.com\u002Fcomfyanonymous\u002FComfyUI)后端，对这些模型或任何现有模型进行**贴图与优化**。\n\n---\n\n\u003Cdetails>\n\u003Csummary>\u003Cstrong>目录\u003C\u002Fstrong>\u003C\u002Fsummary>\n\n- [🌟 核心功能](#核心功能)\n- [🚀 展示图库](#展示图库)\n- [🛠️ 工作原理](#工作原理概览)\n- [💻 系统要求](#系统要求)\n- [⚙️ 安装](#安装)\n- [🚀 快速入门指南](#快速入门指南)\n  - [为现有模型贴图](#为现有模型贴图)\n  - [使用TRELLIS.2生成3D模型](#使用TRELLIS.2生成3D模型)\n- [📖 使用说明与参数概述](#使用说明与参数概述)\n- [📁 输出目录结构](#输出目录结构)\n- [🤔 故障排除](#故障排除)\n- [🤝 贡献](#贡献)\n- [📜 许可证](#许可证)\n- [🙏 致谢](#致谢)\n- [💡 计划中的功能列表](#计划中的功能列表)\n- [📧 联系方式](#联系方式)\n\n\u003C\u002Fdetails>\n\n---\n\n## 🌟 核心功能\n\nStableGen 将 AI 驱动的 3D 生成与贴图功能直接融入 Blender：\n\n* 🧊 **TRELLIS.2：图像与提示词转 3D：**\n    * 使用微软的 [TRELLIS.2](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FTRELLIS.2)（40亿参数模型），根据单张参考图像或文本提示生成完全贴图的 3D 网格。\n    * **多种分辨率模式：** 512、1024、1024 Cascade（推荐）以及 1536 Cascade，以获得最高级别的几何细节。\n    * **灵活的贴图流程：** 可使用 TRELLIS.2 原生的 PBR 贴图，也可自动将生成的网格用 SDXL、FLUX.1-dev 或 Qwen Image Edit 进行贴图，以获得更高质量的扩散贴图。\n    * **预览图库：** 生成多个不同种子的候选图像，在确定最终 3D 生成之前挑选最佳方案。\n    * **智能网格处理：** 自动修复网格损坏，支持可配置的简化\u002F重拓扑、导入缩放以及工作室灯光设置。\n    * 低显存优化：磁盘交换、可配置的注意力后端。\n    * 由 [ComfyUI-TRELLIS2](https:\u002F\u002Fgithub.com\u002FPozzettiAndrea\u002FComfyUI-TRELLIS2) 提供支持（可通过 `installer.py` 安装）。\n* 🌍 **场景级多网格贴图：**\n    * 不再局限于一次只贴一张网格！StableGen 专为从您定义的摄像机视角同时为**场景中的所有网格对象**应用贴图而设计。您也可以选择仅贴图选定的对象。\n    * 在一次生成过程中即可实现整个环境或资产集合的一致外观。\n    * 非常适合概念艺术创作、复杂场景的外观开发以及批量贴图资产库。\n* 🎨 **多视角一致性：**\n    * **顺序模式：** 按照每个网格的视角逐个生成贴图，利用修复填充和可见性遮罩技术，确保复杂表面上的高度一致性。\n    * **网格模式：** 同时处理所有网格的多个视角，以加快预览速度。包含可选的细化步骤。\n    * 精巧的加权混合算法确保各视角之间的平滑过渡。\n* 📷 **高级摄像机布局：**\n    * **7 种布局策略：** 轨道环形、扇形弧线、半球形、PCA 轴向、法线加权 K-means、贪婪遮挡覆盖率，以及交互式可见性加权布局。\n    * **每台摄像机的最佳长宽比**——每台摄像机都会根据网格轮廓计算出专属分辨率，避免因信箱格式浪费像素。\n    * **无限数量的摄像机**——不再受 8 台摄像机的限制。\n    * **摄像机生成顺序**——通过拖放排序列表，结合 6 种预设策略来控制顺序模式下的处理顺序。\n    * 支持摄像机克隆、镜像以及浮动视口提示标签。\n* 🎯 **局部编辑模式：**\n    * 将摄像机对准特定区域进行修改——新贴图会基于角度和晕影效果与原有贴图无缝融合。\n    * 分别控制角度渐变和轮廓边缘的羽化效果，实现精准融合。\n    * 兼容所有架构（SDXL、Flux、Qwen 
Image Edit）。\n* 📐 **借助 ControlNet 实现精确的几何控制：**\n    * 同时使用多个 ControlNet 单元（深度、Canny、法线），确保生成的贴图忠实于您的模型几何形状。\n    * 可精细调整每个 ControlNet 单元的强度及生效起止步数。\n    * 支持自定义 ControlNet 模型映射。\n* 🖌️ **借助 IPAdapter 实现强大的风格引导：**\n    * 使用外部参考图像，通过 IPAdapter 引导贴图的风格、氛围和内容。\n    * 在多视角生成模式中，即使不使用参考图像，IPAdapter 也能提升一致性。\n    * 可控制 IPAdapter 的强度、权重类型及生效步数。\n* ⚙️ **灵活的 ComfyUI 后端：**\n    * 可连接您现有的 ComfyUI 安装，让您在实验性的 FLUX.1-dev 支持之外，继续使用偏好的 SDXL 检查点、自定义 LoRA 以及全新的 Qwen Image Edit 工作流。\n    * 将繁重的计算任务卸载到 ComfyUI 服务器上，使 Blender 保持较高的响应速度。\n* ✨ **高级修复填充与细化：**\n    * **细化模式（Img2Img）：** 利用图像到图像处理方式，重新塑造风格、增强细节或将细节添加到现有贴图上（无论是 StableGen 生成的还是其他来源）。\n    * **局部编辑模式：** 选择性地修改特定区域，同时保留其余部分，并提供独立的角度和晕影羽化控制。\n    * **UV 修复模式：** 根据周围贴图上下文，智能填补模型 UV 图上未贴图的区域。\n    * **颜色匹配：** 在混合前，使用多种算法（MKL、Reinhard、直方图、MVGD）将每个生成视角的颜色与当前贴图进行匹配。\n* 🛠️ **集成式工作流程工具：**\n    * **摄像机设置：** 快速添加并排列多台摄像机，提供 7 种布局策略、每台摄像机的专属长宽比、交互式遮挡预览以及可定制的生成顺序。\n    * **视图专属提示词：** 为每个摄像机视角分配独特的文本提示，以实现针对性的细节控制。\n    * **贴图烘焙：** 将复杂的程序化 StableGen 材质转换为标准的 UV 图像贴图。“为细化而展平”选项允许您烘焙后继续编辑。\n    * **调试工具：** 无需运行 AI 生成，即可可视化投影覆盖范围、UV 对齐情况和权重混合效果。\n    * **HDRI 设置、修改器应用、曲线转换、GIF\u002FMP4 导出与重投影。**\n* 📋 **预设系统：**\n    * 通过内置预设快速上手常见场景（如“默认”、“角色”、“快速草稿”）。\n    * 保存并管理您自己的自定义参数配置，以实现可重复的工作流程。\n\n---\n\n## 🚀 展示图库\n\n\u003Cdetails open>\n\u003Csummary>看看 StableGen 能做什么！\u003C\u002Fsummary>\n\n\u003Csub>提示：刷新页面可同步所有 GIF 动画。\u003C\u002Fsub>\n\n---\n\n### 展示 1：文本转 3D（SDXL）\n\n完全由文本提示生成的资产，使用基于 SDXL 的贴图处理的 TRELLIS.2 流程。\n\n| 龙 | 巫师 | 小屋 |\n| :------: | :------: | :------: |\n| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_7e1e18c86cf7.gif\" alt=\"奇幻龙\" width=\"200\"> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_93e2940f89ee.gif\" alt=\"巫师角色\" width=\"200\"> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_dc2254c7b1c6.gif\" alt=\"小屋\" width=\"200\"> |\n| **望远镜** | **机器人** | **赛博忍者** |\n| \u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_3491aa5e4f2f.gif\" alt=\"望远镜\" width=\"200\"> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_9bb993fdb14d.gif\" alt=\"机器人\" width=\"200\"> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_c8989d321fb9.gif\" alt=\"赛博忍者\" width=\"200\"> |\n\n\u003Cdetails>\n\u003Csummary>使用的提示词\u003C\u002Fsummary>\n\n1. **龙：** *\"奇幻龙\"*\n2. **巫师：** *\"巫师角色，精致刺绣的紫金长袍，尖顶帽，镶嵌发光水晶的木制法杖，系有小袋的皮带，奇幻角色概念艺术，4k\"*\n3. **小屋：** *\"房子，小房子，温馨舒适，木质，小屋\"*\n4. **望远镜：** *\"古董黄铜望远镜，表面有因使用而留下的暗淡包浆与光亮痕迹，皮革包裹的手柄，可伸缩的镜筒，桃花心木三脚架，产品摄影，4k\"*\n5. **机器人：** *\"巨型机器人，机甲，赛博朋克风格，科幻，白色机身，细节丰富，带有霓虹灯点缀\"*\n6. **赛博忍者：** *\"全身角色，中立姿势，赛博忍者，未来刺客，哑光黑色碳纤维隐形战衣，六边形编织图案，无面头盔，红色荧光面罩缝隙，金属银色肩甲，赛博朋克美学，高对比度材质，虚幻引擎5渲染\"*\n\n\u003C\u002Fdetails>\n\n\n### 展示 2：文本转 3D（Qwen）\n\n通过 TRELLIS.2 结合 Qwen 图像编辑贴图技术实现的文本转 3D——非常适合风格化物体和清晰细腻的细节。\n\n| 桶 | 宝箱 | 箱子 |\n| :------: | :------: | :------: |\n| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_466242068203.gif\" alt=\"桶\" width=\"200\"> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_80f5409b8131.gif\" alt=\"宝箱\" width=\"200\"> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_4be9a7e8b6a5.gif\" alt=\"箱子\" width=\"200\"> |\n| **方尖碑** | **机器人** | **树桩** |\n| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_bf043726010e.gif\" alt=\"方尖碑\" width=\"200\"> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_dc680c2c8f23.gif\" alt=\"机器人\" width=\"200\"> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_ba95ac19d945.gif\" alt=\"树桩\" width=\"200\"> |\n\n\u003Cdetails>\n\u003Csummary>使用的提示词\u003C\u002Fsummary>\n\n1. 
**桶：** *\"一个粗犷、风格化的木桶，由厚重、超大号的铁箍紧紧束缚。木头上刻有深邃而夸张的手工凹槽\"*\n2. **宝箱：** *\"一个细节极其丰富的木制宝箱，被沉重的深色铁链束缚着。箱子微微打开，露出里面堆积如山的金色金币。木头陈旧且开裂，铁链上还布满了橙色锈斑。\"*\n3. **箱子：** *\"一个黄色的工业级危险品运输箱。侧面印有一块醒目的警告标签，用粗黑体字写着‘危险：生物危害’。箱子正面装有一个数字密码锁，左侧则绑着两个红色氧气罐。\"*\n4. **方尖碑：** *\"一座古老、巨大的石质方尖碑，表面布满了发着绿光的符文雕刻。灰色的石质因年久失修而深深开裂，上面还覆盖着厚厚的绿色绒毛状苔藓。\"*\n5. **机器人：** *\"巨型机器人，机甲，赛博朋克风格，科幻，白色机身，细节繁复，点缀着霓虹灯光\"*\n6. **树桩：** *\"一棵充满神秘感的古老扭曲树桩，根系盘根错节地裸露在外。树桩顶部簇生着半透明、泛着蓝光的生物发光蘑菇，以及纤细柔美的蕨类叶片。奇幻RPG场景资源，手绘纹理风格融合写实效果，细节极为丰富。\"*\n\n\u003C\u002Fdetails>\n\n### 展示 3：PBR 对比\n\nPBR 材质贴图（粗糙度、金属度、法线）可以通过 Marigold 分解生成。每对图片展示了同一物体在无 PBR 材质和有 PBR 材质情况下的对比效果。\n\n| 房子 | 房子 (PBR) | 巫师 | 巫师 (PBR) |\n| :------: | :------: | :------: | :------: |\n| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_5ba10392d418.gif\" alt=\"房子（非 PBR）\" width=\"170\"> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_f36eab7d8146.gif\" alt=\"房子（PBR）\" width=\"170\"> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_93e2940f89ee.gif\" alt=\"巫师（非 PBR）\" width=\"170\"> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_51019b6522ce.gif\" alt=\"巫师（PBR）\" width=\"170\"> |\n| **宝箱** | **宝箱 (PBR)** | **方尖碑** | **方尖碑 (PBR)** |\n| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_80f5409b8131.gif\" alt=\"宝箱（非 PBR）\" width=\"170\"> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_ecee864a77d5.gif\" alt=\"宝箱（PBR）\" width=\"170\"> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_bf043726010e.gif\" alt=\"方尖碑（非 PBR）\" width=\"170\"> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_0fc5e855b481.gif\" alt=\"方尖碑（PBR）\" width=\"170\"> |\n| **月球栖息地** | **月球栖息地 (PBR)** | **拾荒者** | **拾荒者 
(PBR)** |\n| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_3b7f104a6e3b.gif\" alt=\"月球栖息地（非 PBR）\" width=\"170\"> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_5234a0ea43f8.gif\" alt=\"月球栖息地（PBR）\" width=\"170\"> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_52399838dc5e.gif\" alt=\"拾荒者（非 PBR）\" width=\"170\"> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_bc8ca96e5122.gif\" alt=\"拾荒者（PBR）\" width=\"170\"> |\n| **萨满** | **萨满 (PBR)** | **赛博朋克女战士** | **赛博朋克女战士 (PBR)** |\n| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_5e90c36831ea.gif\" alt=\"萨满（非 PBR）\" width=\"170\"> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_a8c9c67dcd77.gif\" alt=\"萨满（PBR）\" width=\"170\"> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_31d08be99b0d.gif\" alt=\"赛博朋克女战士（非 PBR）\" width=\"170\"> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_d0014a537859.gif\" alt=\"赛博朋克女战士（PBR）\" width=\"170\"> |\n| **木箱** | **木箱 (PBR)** | **树桩** | **树桩 (PBR)** |\n| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_4be9a7e8b6a5.gif\" alt=\"木箱（非 PBR）\" width=\"170\"> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_f6435c27c70b.gif\" alt=\"木箱（PBR）\" width=\"170\"> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_ba95ac19d945.gif\" alt=\"树桩（非 PBR）\" width=\"170\"> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_d1e4c029fb53.gif\" alt=\"树桩（PBR）\" width=\"170\"> 
|\n\n\u003Cdetails>\n\u003Csummary>使用的提示词\u003C\u002Fsummary>\n\n1. **房子（Qwen）：** *\"房子，小房子，舒适，木质，小屋\"*\n2. **巫师（SDXL）：** *\"巫师角色，复杂刺绣的紫色和金色长袍，尖顶帽，带有发光水晶的木制法杖，系着小袋的皮带，奇幻角色概念艺术，4k\"*\n3. **宝箱（Qwen）：** *\"一个细节极其丰富的木制宝箱，用厚重的深色铁条加固。箱子微微打开，露出里面堆叠的发光金币。木材陈旧且开裂，铁条上布满了橙色锈斑。\"*\n4. **方尖碑（Qwen）：** *\"一座古老的单体石制方尖碑，表面布满发光的绿色符文雕刻。灰色的石质因年久失修而深深开裂，还覆盖着厚厚的绿色绒毛状苔藓。\"*\n5. **月球栖息地（SDXL）：** *\"未来感十足的月球栖息地模块，圆顶圆柱形底座建筑，洁白如新的复合材料面板，高光泽反射，包裹着金箔的管道，圆形金属气闸门，外侧泛着蓝光的探照灯，科幻基地建筑，干净的 PBR 质地，硬表面建模，8k\"*\n6. **拾荒者（SDXL）：** *\"全身角色，A 字站姿，后末日时代的拾荒者，油渍斑驳的橄榄绿军装夹克，破烂的衣服，生锈的街牌护甲，脏兮兮的皮带，划痕累累的焊接面罩，荒原生存者，垃圾风质感，严重风化，辐射风格的角色资产\"*\n7. **萨满（SDXL）：** *\"全身角色，A 字站姿，部落萨满，粗布棕色羊毛，厚重白色兽皮，雕刻的白色骨质面具，闪耀着紫色光芒的魔法符文，赤裸的双臂，奇幻 RPG 角色职业，有机质感，高度细节化的位移贴图，ZBrush 雕塑风格\"*\n8. **赛博朋克女战士（Qwen）：** *\"一位站立于中立姿势的未来感十足的赛博朋克女佣兵。她的左臂由黑色金属和蓝色发光电线构成的机械假肢，身穿合成材料制成的战术夹克，衣领处点缀着发光的 LED 灯带，脚踏未来感十足的运动鞋。\"*\n9. **木箱（Qwen）：** *\"一个黄色的工业级危险品运输箱。侧面有一块醒目的警告标签，用粗黑字写着‘危险：生物危害’。箱子正面装有数字密码锁，左侧绑着两个红色氧气罐。\"*\n10. **树桩（Qwen）：** *\"一棵神秘而古老的扭曲树桩，根部裸露并盘旋交错。树桩顶端长出一簇半透明、发出蓝光的生物荧光蘑菇，以及纤细的蕨类叶片。奇幻 RPG 资产，手绘纹理风格与写实相结合，细节极为丰富。\"*\n\n\u003C\u002Fdetails>\n\n\n### 展示 4：PBR 画廊\n\n一组启用了 PBR 材质的资产，展示了在不同光照条件下逼真的表面反应。\n\n| 金锅 | 星盘 | 树桩 |\n| :------: | :------: | :------: |\n| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_6ebfc9965c2e.gif\" alt=\"金锅（PBR）\" width=\"200\"> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_ab32d74d78ba.gif\" alt=\"星盘（PBR）\" width=\"200\"> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_d1e4c029fb53.gif\" alt=\"树桩（PBR）\" width=\"200\"> |\n| **兔子** | **木箱** | **方尖碑（Qwen）** |\n| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_b0a62515de51.gif\" alt=\"兔子（PBR）\" width=\"200\"> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_f6435c27c70b.gif\" alt=\"木箱（PBR）\" width=\"200\"> | \u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_0fc5e855b481.gif\" alt=\"方尖碑（PBR）\" width=\"200\"> |\n\n\u003Cdetails>\n\u003Csummary>使用的提示词\u003C\u002Fsummary>\n\n1. **金锅：** *\"金锅\"*\n2. **星盘：** *\"一个细节极其丰富的古董蒸汽朋克星盘，静置于粗糙的木制台座之上。星盘由闪亮的黄铜环、暗淡的铜质齿轮以及中央的多面玻璃水晶组成。影棚灯光，写实风格，8k 分辨率，精密的机械细节，独立于纯色背景之上。\"*\n3. **树桩：** *\"一棵神秘而古老的扭曲树桩，根部裸露并盘旋交错。树桩顶端长出一簇半透明、发出蓝光的生物荧光蘑菇，以及纤细的蕨类叶片。奇幻 RPG 资产，手绘纹理风格与写实相结合，细节极为丰富。\"*\n4. **兔子：** *\"一只白兔\"*\n5. **木箱：** *\"一个黄色的工业级危险品运输箱。侧面有一块醒目的警告标签，上面用粗黑字写着‘危险：生物危害’。箱子正面装有数字密码锁，左侧绑着两个红色氧气罐。\"*\n6. **方尖碑（Qwen）：** *\"一座古老的单体石制方尖碑，表面布满发光的绿色符文雕刻。灰色的石质因年久失修而深深开裂，还覆盖着厚厚的绿色绒毛状苔藓。\"*\n\n\u003C\u002Fdetails>\n\n---\n\n### 展示 5：头部风格化（仅贴图）\n\n使用提示词和 IPAdapter 图像参考的风格指导，为现有模型添加贴图。\n\n**3D 模型来源**：“Brown” by ucupumar - 可在以下链接获取：[BlendSwap (Blend #15262)](https:\u002F\u002Fwww.blendswap.com\u002Fblend\u002F15262)\n\n\n\n| 未贴图模型  | 生成结果 | 生成结果 | 生成结果（使用参考图像） |\n| :------: | :---------: | :----------: | :-----------------: |\n| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_55919634e53d.gif\" alt=\"未贴图动漫头\" width=\"170\"> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_023ff35d8a2b.gif\" alt=\"红发动漫头\" width=\"170\">  | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_1f71cc0b4d54.gif\" alt=\"赛博朋克风格动漫头\" width=\"170\">   |  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_e9f6f87201aa.gif\" alt=\"星夜风格动漫头\" width=\"170\">   | \n| *基础未贴图模型* | *红发* | *赛博朋克* | *艺术风格* |\n\n\u003Cdetails>\n\u003Csummary>使用的提示词\u003C\u002Fsummary>\n\n1. **红发**：“anime girl head, red hair”\n2. **赛博朋克**：“girl head, brown hair, cyberpunk style, realistic”\n3. 
**艺术风格**：“anime girl head, artistic style”（风格由下方所示的 IPAdapter 参考图像引导）\n\n\u003C\u002Fdetails>\n\u003Cp align=\"left\">\n  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_e97b5186b0a1.jpg\" alt=\"星夜 - IPAdapter 参考\" width=\"250\">\n  \u003Cbr>\n  \u003Csmall>\u003Cem>参考：文森特·梵高《星夜》（用于引导“艺术风格”变体）\u003C\u002Fem>\u003C\u002Fsmall>\n\u003C\u002Fp>\n\n\n### 展示 6：汽车贴图（仅贴图）\n\n使用不同的提示词为汽车模型添加贴图，以实现多种视觉风格。\n\n**3D 模型来源**：“Pontiac GTO 67” by thecali - 可在以下链接获取：[BlendSwap (Blend #13575)](https:\u002F\u002Fwww.blendswap.com\u002Fblend\u002F13575)\n\n| 未贴图模型  | 生成结果 | 生成结果 | 生成结果 |\n| :------: | :---------: | :----------: | :-----------------: |\n| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_7050e047a81b.gif\" alt=\"未贴图汽车\" width=\"170\"> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_90ecab5ef6e8.gif\" alt=\"绿色汽车\" width=\"170\">  | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_baef5f871a5a.gif\" alt=\"蒸汽朋克风格汽车\" width=\"170\">   |  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_8ed08563240e.gif\" alt=\"隐形黑色汽车\" width=\"170\">   | \n| *基础未贴图模型* | *绿色* | *蒸汽朋克* | *隐形黑色* |\n\n\u003Cdetails>\n\u003Csummary>使用的提示词\u003C\u002Fsummary>\n\n1. **绿色**：“green car”\n2. **蒸汽朋克**：“steampunk style car”\n3. 
**隐形黑色**：“stealth black car”\n\n\u003C\u002Fdetails>\n\n\n### 展示 7：场景贴图（仅贴图）\n\n为由多个网格对象组成的复杂场景添加贴图。\n\n**3D 模型来源**：“Subway Station Entrance” by argonius - 可在以下链接获取：[BlendSwap (Blend #19305)](https:\u002F\u002Fwww.blendswap.com\u002Fblend\u002F19305)\n\n| 未贴图场景  | 生成结果 | 生成结果 | 生成结果 |\n| :------: | :---------: | :----------: | :-----------------: |\n| \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_ed3bcbe57c92.gif\" alt=\"未贴图地铁场景\" width=\"170\"> | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_044725360422.gif\" alt=\"地铁站\" width=\"170\">  | \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_0c17e119fcb1.gif\" alt=\"长满杂草的奇幻宫殿内部\" width=\"170\">   |  \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_readme_92c60fd57b4e.gif\" alt=\"赛博朋克风格地铁站\" width=\"170\">   | \n| *基础未贴图场景* | *地铁站* | *奇幻宫殿* | *赛博朋克* |\n\n\u003Cdetails>\n\u003Csummary>使用的提示词\u003C\u002Fsummary>\n\n1. **地铁站**：“subway station”\n2. **奇幻宫殿**：“an overgrown fantasy palace interior, gold elements”\n3. **赛博朋克**：“subway station, cyberpunk style, neon lit”\n\n\u003C\u002Fdetails>\n\n\u003C\u002Fdetails>\n\n---\n\n## 🛠️ 工作原理（简要介绍）\n\nStableGen 是一个直观的 Blender 插件界面，负责与 ComfyUI 后端进行通信。\n1.  在 StableGen 面板中设置场景和参数。\n2.  StableGen 准备必要的数据（例如来自摄像机视图的 ControlNet 输入）。\n3.  构建工作流程并将其发送到您的 ComfyUI 服务器。\n4.  ComfyUI 使用您选择的扩散模型处理请求。\n5.  生成的图像会返回到 Blender。\n6.  
StableGen 使用复杂的投影和混合技术，将这些图像作为贴图应用到您的模型上。\n\n---\n\n## 💻 系统要求\n\n* **Blender**：版本 4.2–4.5（OSL 投影）或 Blender 5.1+（通过原生 Raycast 节点实现 GPU 加速投影）。**不支持 Blender 5.0**（因为 OSL 存在问题且原生 Raycast 尚未可用）。\n* **操作系统**：Windows 10\u002F11、Linux 或 macOS（Apple Silicon）。\n* **GPU**：建议使用带有 CUDA 的 NVIDIA 显卡来运行 ComfyUI。更多详细信息请参阅 ComfyUI 的 GitHub 页面：[https:\u002F\u002Fgithub.com\u002Fcomfyanonymous\u002FComfyUI](https:\u002F\u002Fgithub.com\u002Fcomfyanonymous\u002FComfyUI)。\n    * 至少需要 8 GB 显存才能以可接受的速度运行 SDXL；运行 FLUX.1-dev 或 Qwen-Image-Edit 流程时则需 16 GB 或以上显存。\n* **ComfyUI**：已安装并正常运行的 ComfyUI。StableGen 将其用作后端。\n* **Python**：版本 3.x（通常随 Blender 自带，但 `installer.py` 脚本需要 Python 3）。\n* **Git**：`installer.py` 脚本需要 Git。\n* **磁盘空间**：ComfyUI、AI 模型（10 GB 至 50 GB 以上）以及生成的贴图都需要大量可用空间。\n\n---\n\n## ⚙️ 安装\n\n安装 StableGen 分为三步：先安装 ComfyUI，再使用我们的安装脚本将所需的依赖项安装到 ComfyUI 中，最后在 Blender 中安装 StableGen 插件。\n\n请按照以下分步说明安装 StableGen。\n\n如果您更喜欢观看视频，Polynox 提供了一个简洁的安装与基本使用教程：  \n[StableGen 安装及基本使用视频教程](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=EVNYAMnn_oQ)\n\n### 步骤 1：安装 ComfyUI（如果尚未安装）\n\nStableGen 依赖一个可正常工作的 ComfyUI 安装作为后端。如有需要，ComfyUI 也可以安装在另一台机器上。\n\n*如果您希望使用另一台机器作为后端，请在该机器上执行步骤 1 和 2。*\n* 如果您尚未安装 ComfyUI，请遵循 **官方 ComfyUI 安装指南**：[https:\u002F\u002Fgithub.com\u002Fcomfyanonymous\u002FComfyUI#installing](https:\u002F\u002Fgithub.com\u002Fcomfyanonymous\u002FComfyUI#installing)。\n    * 请将 ComfyUI 安装在一个专用目录中，我们将其称为 `\u003CYourComfyUIDirectory>`。\n    * 在继续下一步之前，请确保 ComfyUI 能够正常启动并运行。\n\n### 步骤 2：安装依赖项（自定义节点与 AI 模型）——自动化方式（推荐）\n\n`installer.py` 脚本（位于本仓库中）可自动下载并将所需的 ComfyUI 自定义节点和核心 AI 模型放置到您的 `\u003CYourComfyUIDirectory>` 目录中。\n\n**安装脚本的先决条件：**\n* Python 3。\n* 已安装 Git，并且 Git 可在系统的 PATH 中访问。\n* 您的 ComfyUI 安装路径（`\u003CYourComfyUIDirectory>`）。\n* 脚本所需的 Python 包：`requests` 和 `tqdm`。请通过 pip 安装：\n    ```bash\n    pip install requests tqdm\n    ```\n\n**运行安装程序：**\n1.  **下载\u002F找到安装程序：** 从本 GitHub 仓库获取 `installer.py`。\n2.  
**执行脚本：**\n    * 打开您系统的终端或命令提示符。\n    * 导航到包含 `installer.py` 的目录。\n    * 运行脚本：\n        ```bash\n        python installer.py \u003CYourComfyUIDirectory>\n        ```\n        将 `\u003CYourComfyUIDirectory>` 替换为实际路径。如果省略，脚本会提示您输入。\n3.  **按照屏幕上的指示操作：**\n    * 脚本将显示一个安装包菜单。选择符合您需求的功能选项。\n    * 脚本会下载文件并将其放置到 `\u003CYourComfyUIDirectory>` 的正确子目录中。\n\n**安装包概览：**\n\n| 序号 | 包名                     | 功能描述                                   | 大小     |\n|------|--------------------------|--------------------------------------------|----------|\n| 1    | Minimal Core             | 基础 SDXL 纹理化（需自行提供检查点 + ControlNets） | ~7.3 GB  |\n| 2    | Core + Preset Essentials | 所有内置预设即插即用                       | ~9.8 GB  |\n| 3    | **推荐** 全套 SDXL 设置   | SDXL 纹理化 + PBR 分解（不含检查点）         | ~19.3 GB |\n| 4    | Complete SDXL + RealVisXL | 第 3 项的所有功能，外加一个即用检查点       | ~26.3 GB |\n| 5    | Qwen Core                | Qwen Image Edit 纹理架构                   | ~20.3 GB |\n| 6    | Qwen + Lightning LoRAs    | Qwen 配合额外的 Lightning LoRAs            | ~22.6 GB |\n| 7    | Qwen Nunchaku            | Qwen 使用 Int4 量化 Nunchaku 模型（降低显存占用） | ~33.0 GB |\n| 8    | TRELLIS.2                | 图像\u002F文本转 3D 网格生成（首次使用时约 5 GB 安装 + 约 15.4 GB 模型） | ~20.4 GB |\n| 9    | Marigold IID             | PBR 分解节点（首次使用时自动下载模型）       | ~0.01 GB |\n| 10   | StableDelight            | 无镜面反射的 PBR 反照率（包含模型下载）      | ~3.3 GB  |\n| 11   | FLUX.2 Klein *(实验性)*   | Klein 纹理架构（需约 13 GB 显存）           | ~12.4 GB |\n\n**常见配置：**\n- **完整 3D 资产生成（SDXL）：** 选项 3 + 8（或选项 4 + 8，含检查点）\n- **完整 3D 资产生成（Qwen）：** 选项 6 + 8\n- **仅纹理化（SDXL）：** 选项 3（或 4）\n- **仅纹理化（Qwen）：** 选项 5（或 6\u002F7）\n- **为任何设置添加 PBR：** 选项 9 + 10（已包含在选项 3 和 4 中）\n\n> **注意：** TRELLIS.2 和 Marigold IID 会在首次使用时通过 HuggingFace 自动下载额外模型。上述大小已包含这些首次使用的下载内容。首次运行可能需要更长时间。\n4.  **重启 ComfyUI：** 如果 ComfyUI 正在运行，请重启以加载新的自定义节点。\n\n*（如需手动安装依赖项，包括 FLUX.1-dev 和 Qwen Image Edit 的配置，请参阅 `docs\u002FMANUAL_INSTALLATION.md`。）*\n\n### 步骤 3：安装 StableGen Blender 插件\n\n1.  
访问本仓库的 [**发布页面**](https:\u002F\u002Fgithub.com\u002Fsakalond\u002Fstablegen\u002Freleases)。\n2.  下载最新的 `StableGen.zip` 文件。\n3.  在 Blender 中，前往 `编辑 > 首选项 > 插件 > 安装...`。\n4.  浏览并选择下载的 `StableGen.zip` 文件。\n5.  启用“StableGen”插件（搜索“StableGen”并勾选）。\n\n### 步骤 4：在 Blender 中配置 StableGen 插件\n\n1.  在 Blender 中，前往 `编辑 > 首选项 > 插件`。\n2.  找到“StableGen”，展开其设置。\n3.  设置以下路径：\n    * **输出目录：** 选择一个文件夹，用于保存 StableGen 生成的图像。\n    * **服务器地址：** 确保此地址与您的 ComfyUI 服务器匹配（默认为 `127.0.0.1:8188`）。\n    * 如果使用自定义命名的 ControlNet 模型，请检查 **ControlNet 映射**。\n4.  如果尚未启用，请在 Blender 中启用在线访问。从 Blender 的顶部栏选择 `编辑 -> 首选项`，然后导航到 `系统 -> 网络`，勾选 `启用在线访问`。尽管 StableGen 不需要互联网连接，但此举是为了遵守 Blender 插件的相关规定，因为插件仍会在本地进行网络调用。\n\n---\n\n## 🚀 快速入门指南\n\n### 为现有模型添加纹理\n\n以下是使用 StableGen 生成第一张纹理的方法：\n\n1.  **启动 ComfyUI 服务器：** 确保其已在后台运行。\n2.  **打开 Blender 并准备场景：**\n    * 准备好一个网格对象（例如默认的立方体）。\n    * 确保 StableGen 插件已启用并正确配置（参见步骤 4）。\n3.  **访问 StableGen 面板：** 在 3D 视口按下 `N` 键，进入“StableGen”选项卡。\n4.  **添加相机（建议用于多视角）：**\n    * 选择您的对象。\n    * 在 StableGen 面板中，点击“**添加相机**”。选择“对象”作为中心类型。如有需要可交互调整，然后确认。\n5.  **设置基本参数：**\n    * **提示词：** 输入描述（例如“长满苔藓的古老石墙”）。\n    * **架构：** 根据您设置的工作流程，选择扩散家族（`SDXL`、`Flux 1` 或 `Qwen Image Edit`）。\n    * **检查点：** 选择适合所选架构的检查点或 GGUF 文件（例如 `sdxl_base_1.0` 或 `Qwen-Image-Edit-2509-Q3_K_M.gguf`）。\n    * **预设：** 选择一个预设并应用。`默认` 或 `角色` 是不错的起点。\n6.  **开始生成！** 点击主“**生成**”按钮。\n7.  **观察结果：** 查看面板和 ComfyUI 控制台中的进度。您的对象应会更新为新纹理！输出文件将保存在您指定的“输出目录”中。\n    * 默认情况下，生成的纹理仅在渲染视图着色模式下可见（Cycles 渲染引擎）。\n\n### 使用 TRELLIS.2 生成 3D 模型\n\n按照以下步骤，您可以使用 TRELLIS.2 流水线根据文本提示或参考图像生成带有完整纹理的 3D 网格：\n\n1.  **先决条件：** 确保已安装 TRELLIS.2 的依赖项（参见[安装 - 第 2 步](#step-2-install-dependencies-custom-nodes--ai-models---automated-recommended)），并且您的硬件满足[系统要求](#-system-requirements)。\n2.  **选择预设：** 选择并应用带有 **(MESH + TEXTURE)** 标签的预设之一：\n    * **SDXL** - 最适合创意驱动的提示工作流。\n    * **Qwen Image Edit** - 非常适合风格化生成、可读文本和特定细节。尤其适用于从图像到 3D 模型的工作流。\n    * 将鼠标悬停在 Blender 中的任何预设上，即可查看其详细功能说明。\n    * 或者，如果您只需要生成的网格而不需要自动纹理，则可以使用 **TRELLIS.2 (MESH ONLY)** 预设。\n3.  
**选择输入模式：** 将 **`Generate from`** 字段设置为 **`Prompt`** 以进行文本到 3D 的转换，或设置为 **`Image`** 以使用参考图像。\n4.  **提供输入：** 输入描述性提示或加载参考图像。\n5.  *(可选)* **启用 PBR：** 在 *高级参数 → 输出与材质设置* 下开启 **PBR 生成**，以生成基于物理的材质贴图（粗糙度、金属度、法线）。\n6.  **生成：** 单击主 **Generate** 按钮，等待处理完成。\n7.  *(可选)* **优化结果：** 调整每个摄像机的提示并重新生成特定视角，或者切换到 **Local Edit** 模式（有相应预设可用）进行针对性调整。\n\n**导出至游戏引擎：**\n\n8.  **烘焙纹理：** 您很可能需要切换 UV 展开方式（在 `Bake Textures` 操作器中）——大多数情况下，`Smart UV Project` 模式效果良好。\n9.  **导出：** 使用内置的 `Export for Game Engine` 导出工具，或从 Blender 手动导出。\n\n---\n\n## 📖 使用与参数概览\n\nStableGen 提供了一个全面的界面，用于 AI 驱动的 3D 资产生成和纹理处理，从网格创建到最终的 PBR 导出。以下是 StableGen 面板中主要部分和工具的概述：\n\n### 主要操作与场景设置\n\n这些是主要的操作按钮和初始设置工具，通常位于 StableGen 面板的顶部附近：\n\n* **生成\u002F取消生成（主按钮）：** 根据当前模式，开始 3D 网格生成（TRELLIS.2 流水线）或现有网格对象的纹理生成。处理过程中，按钮会变为“取消生成”。生成期间，此按钮下方会显示总体、阶段和每步进度条。\n* **Bake Textures：** 将动态的多投影材质转换为每个对象的一张标准 UV 映射图像纹理。如果启用了 PBR 分解，还会烘焙 PBR 贴图（反照率、粗糙度、金属度、法线、高度、环境光遮蔽、自发光）。默认使用 Smart UV Project 展开方式。这是导出到游戏引擎所必需的步骤。\n* **Add Cameras：** 使用 7 种放置策略之一设置多个视点——从简单的轨道环到基于几何感知、针对遮挡优化且具有各摄像机不同宽高比的放置方式。使用交互式预览微调位置，然后再确认。\n* **Collect Camera Prompts：** 循环遍历场景中的所有摄像机，允许您为每个视点输入特定的描述性文本提示（例如，“正面视图”、“面部特写”）。如果在 `Viewpoint Blending Settings` 中启用了 `Use camera prompts`，则这些每个摄像机的提示将与主提示结合使用。\n\n### 预设管理\n\n* 该系统位于 UI 的显眼位置，允许您：\n    * **选择预设：** 从按 4 大架构分组的 30 多个内置预设中选择（SDXL\u002FFLUX.1、Qwen Image Edit、FLUX.2 Klein、TRELLIS.2 Pipeline），或选择 `Custom` 以使用当前设置。\n    * **预设差异预览：** 当您将鼠标悬停或选择某个预设时，StableGen 会显示哪些参数与您当前设置不同，以及它们将被更改为哪些值。\n    * **应用预设：** 如果您修改了某个默认预设，此按钮会将其恢复为原始值。\n    * **保存预设\u002F删除预设：** 将当前配置保存为命名预设，或删除自定义预设。ControlNet 和 LoRA 包含切换开关，可让您选择要保存的内容。\n\n### 主要参数\n\n这些是您定义生成过程的主要控制选项：\n\n* **Prompt：** 您希望生成的纹理（或 3D 资产）的主要文本描述。\n* **Checkpoint：** 选择基础 SDXL 检查点（适用于 SDXL\u002FFLUX 架构）。\n* **Architecture：** 在 `SDXL`、`Flux 1`、`Qwen Image Edit` 和 `FLUX.2 Klein`（实验性）模型架构之间进行选择。对于 3D 网格生成，请使用 TRELLIS.2 流水线预设。\n* **Generation Mode：** 定义纹理生成的核心策略：\n    * `Generate Separately`：每个视点独立生成。\n    * `Generate Sequentially`：视点逐个生成，利用前一视点的修复来保持一致性。\n    * `Generate Using 
Grid`：将所有视点组合成一个网格，进行一次生成，并可选进行细化步骤。\n    * `Refine\u002FRestyle Texture (Img2Img)`：将当前纹理作为输入，进行图像到图像的处理。\n    * `Local Edit`：通过将摄像机对准特定区域来有选择地修改，新纹理会以羽化边缘的方式与原有纹理融合。\n    * `UV Inpaint Missing Areas`：通过 inpainting 填充 UV 图上未纹理化的区域。\n* **Target Objects：** 选择是为所有可见的网格对象还是仅选定的对象添加纹理。\n\n### 高级参数（可折叠部分）\n\n点击每个标题旁边的箭头以展开并访问详细设置：\n\n* **核心生成设置：** 控制扩散的基本参数，如种子、步数、CFG、负面提示词、采样器、调度器和Clip跳过。\n* **LoRA管理：** 添加并配置LoRA（低秩适应），以获得额外的风格或内容指导。您可以为每个LoRA设置模型强度和CLIP强度。\n* **视点混合设置：** 管理来自不同摄像视角的纹理如何组合，包括特定于摄像机的提示词、丢弃角度、混合权重指数、摄像机生成顺序以及生成后的指数重置。\n* **输出与材质设置：** 定义回退颜色、材质属性（BSDF）、自动分辨率缩放，以及在生成过程中烘焙纹理的选项，这使得能够使用超过8个视角进行生成。\n* **图像引导（IPAdapter与ControlNet）：** 配置IPAdapter以使用外部图像进行风格迁移，并设置多个ControlNet单元（深度、Canny等），以实现精确的结构控制。\n* **修复选项：** 细化`顺序`和`UV修复`模式下的遮罩与混合（例如，差异扩散、遮罩模糊\u002F扩展）。\n* **生成模式特有参数：** 仅适用于所选生成模式的参数，例如网格模式的细化选项，或顺序\u002F分离\u002F细化模式下的IPAdapter一致性设置。\n* **PBR分解：** 在贴图完成后启用PBR材质提取。可以切换各个贴图类型（反照率、粗糙度、金属度、法线、高度、环境光遮蔽、自发光），选择反照率来源，并配置平铺超分辨率。仅当服务器上存在所需的Marigold\u002FStableDelight节点时才会显示。\n* **TRELLIS.2设置：** 配置3D网格生成——分辨率模式、简化、重新网格化、导入比例、着色模式、纹理模式（原生\u002FSDXL\u002FFLUX\u002FQwen\u002FKlein）、预览图库种子数量，以及用于贴图的摄像机放置策略。\n\n### 集成工作流工具（底部区域）\n\n一系列实用工具，进一步支持您的工作流程：\n\n* **场景队列：** 将多个资产加入队列，进行无人值守的批量处理。可添加带有提示词和标签的项目，重新排序，在失败时重试。支持贴图和TRELLIS.2流程，并可在每个项目后选择性地自动导出GIF。\n* **切换材质：** 对于具有多个材质槽位的选定对象，可快速将特定索引处的材质设为当前激活材质。\n* **添加HDRI光源：** 提示您选择一个HDRI图像文件，并将其设置为世界光照，为您的场景提供逼真的照明效果。\n* **应用所有修改器：** 遍历场景中的所有网格物体，应用其修改器堆栈，并将几何体实例转换为实际网格数据。有助于为贴图准备模型。\n* **将曲线转换为网格：** 将任何选定的曲线对象转换为网格对象，这是StableGen对其进行贴图前的必要步骤。\n* **导出环绕动画GIF\u002FMP4：** 创建活动对象的动画GIF和MP4，摄像机围绕该对象旋转。可配置持续时间、帧率、分辨率、渲染引擎（Workbench\u002FEevee\u002FCycles）以及HDRI环境模式。\n* **重投影图像：** 使用最新的视点混合设置重新应用之前生成的纹理。允许在不完全重新生成的情况下调整纹理混合。\n* **镜像重投影：** 沿某一轴镜像上次投影的摄像机和图像，然后重新投影。对于对称对象非常有用。\n\n请尝试这些设置和工具，以实现丰富多样的效果和控制！请记住，最佳参数会因模型、主题和期望的艺术风格而有很大差异。\n\n---\n\n## 📁 输出目录结构\n\nStableGen会根据您插件偏好中指定的`输出目录`来组织生成的文件。每次生成都会创建一个新的带时间戳的文件夹，帮助您跟踪不同的迭代版本。每个会话（修订版）的结构如下：\n\n* `\u003C输出目录>\u002F`\n    * `\u003C场景名称>\u002F` *(基于您的`.blend`文件名，或未保存场景的名称）*\n        * 
`\u003CYYYY-MM-DDTHH-MM-SS>\u002F` *(生成开始的时间戳——这是主要的修订目录)*\n            * `generated\u002F` *(各摄像机\u002F视角生成的主要输出纹理，尚未应用或烘焙)*\n            * `controlnet\u002F` *(中间的 ControlNet 输入图像)*\n                * `depth\u002F` *(深度通道渲染)*\n                * `canny\u002F` *(使用 Canny 边缘检测器处理后的渲染)*\n                * `normal\u002F` *(法线通道渲染)*\n            * `baked\u002F` *(使用独立的`烘焙纹理`工具烘焙到 UV 贴图上的纹理，以及使用`导出至游戏引擎`工具导出的`.glb`文件)*\n            * `generated_baked\u002F` *(如果启用了“生成时烘焙纹理”，则在此过程中烘焙的纹理)*\n            * `inpaint\u002F` *(与修复过程相关的文件，例如针对`顺序模式`的文件)*\n                * `render\u002F` *(用作修复上下文的先前状态渲染)*\n                * `visibility\u002F` *(修复过程中使用的可见性遮罩)*\n            * `uv_inpaint\u002F` *(专属于 UV 修复模式的文件)*\n                * `uv_visibility\u002F` *(为 UV 修复生成的 UV 可见性遮罩)*\n            * `misc\u002F` *(其他临时或杂项文件，例如用于 Canny 边缘检测输入的渲染)*\n            * `.gif` \u002F `.mp4` *(如果使用了`导出GIF\u002FMP4`工具，则这些文件会直接保存到带时间戳的修订目录中)*\n            * `prompt.json` *(最后生成的用于 ComfyUI 的工作流)*\n\n---\n\n## 🤔 故障排除\n\n遇到问题？以下是一些常见的解决方法。请务必同时检查 **Blender 系统控制台**（窗口 > 切换系统控制台）和 **ComfyUI 服务器控制台**，以查看错误信息。\n\n* **StableGen 面板未显示：** 确保插件已安装，并在 Blender 的偏好设置中启用。\n* **生成按钮显示“无法生成…”：** 检查插件偏好设置：`输出目录` 和 `服务器地址` 必须正确设置。同时，服务器必须可访问。\n* **与 ComfyUI 连接问题：**\n    * 确保您的 ComfyUI 服务器正在运行。\n    * 核实 StableGen 偏好设置中的 `服务器地址`。\n    * 检查防火墙设置。\n* **模型未找到（ComfyUI 控制台报错）：**\n    * 运行 `installer.py` 脚本。\n    * 手动确保模型位于 `\u003CYourComfyUIDirectory>\u002Fmodels\u002F` 的正确子文件夹中（例如：`checkpoints\u002F`、`controlnet\u002F`、`loras\u002F`、`ipadapter\u002F`、`clip_vision\u002F`、`clip\u002F`、`vae\u002F`、`unet\u002F`）。\n    * 添加新模型或自定义节点后，请重启 ComfyUI。\n* **GPU 内存不足 (OOM)：**\n    * 如果未启用，请在 `高级参数` > `输出与材质设置` 中启用 `自动缩放分辨率`。\n    * 烘焙时尝试降低烘焙分辨率。\n    * 关闭其他占用 GPU 资源的应用程序。\n* **生成完成后纹理不可见：**\n    * 切换到渲染视口着色模式（右上角第四个“球体”图标）。\n* **纹理不受光照设置影响：**\n    * 在 `高级参数 > 输出与材质设置` 中启用 `应用 BSDF`，然后重新生成。\n* **纹理质量差\u002F出现伪影：**\n    * 尝试使用提供的预设。\n    * 调整提示词和负面提示词。\n    * 尝试不同的生成模式。通常，带有 IPAdapter 的 `顺序` 模式在一致性方面表现较好。\n    * 确保相机覆盖范围足够，并适当设置 `丢弃超角度`。\n    * 微调 
ControlNet 强度。强度过低可能会忽略几何形状；强度过高则可能导致结果过于平坦。\n    * 对于 `顺序` 模式，检查 inpainting 和可见性遮罩设置。\n* **所有可见网格都被贴图：** 默认情况下，StableGen 会为所有可见的网格对象贴图。您可以将 `目标对象` 设置为 `选中`，以便仅对选定对象进行贴图。\n\n---\n\n## 🤝 贡献\n\n我们欢迎各种形式的贡献！无论是 bug 报告、功能建议、代码贡献，还是新的预设，请随时提交 issue 或 pull request。\n\n---\n\n## 📜 许可证\n\nStableGen 采用 **GNU 通用公共许可证 v3.0** 发布。详情请参阅 `LICENSE` 文件。\n\n### 第三方许可证：TRELLIS.2 图像转 3D\n\n> **注意：** 本节仅适用于 TRELLIS.2 图像转 3D 功能。StableGen 的标准贴图流程（SDXL、FLUX.1-dev、Qwen Image Edit）不使用下列任何库，因此不受这些许可限制的影响。\n\nTRELLIS.2 功能依赖于多个第三方组件，每个组件都有其自身的许可证。**用户应了解这些许可证，尤其是 TRELLIS.2 贴图输出流程中使用的某些 NVIDIA 库的非商业限制。**\n\n| 组件 | 许可证 | 是否允许商业使用？ |\n|---|---|---|\n| [TRELLIS.2](https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FTRELLIS.2)（微软） | MIT | ✅ 是 |\n| [TRELLIS.2-4B 模型权重](https:\u002F\u002Fhuggingface.co\u002Fmicrosoft\u002FTRELLIS.2-4B) | MIT | ✅ 是 |\n| [ComfyUI-TRELLIS2](https:\u002F\u002Fgithub.com\u002FPozzettiAndrea\u002FComfyUI-TRELLIS2) | MIT | ✅ 是 |\n| [DINOv3](https:\u002F\u002Fgithub.com\u002Ffacebookresearch\u002Fdinov3)（Meta，图像条件处理） | [DINOv3 许可证](https:\u002F\u002Fai.meta.com\u002Fresources\u002Fmodels-and-libraries\u002Fdinov3-license\u002F) | ✅ 是 |\n| [BiRefNet](https:\u002F\u002Fgithub.com\u002FZhengPeng7\u002FBiRefNet)（背景去除） | MIT | ✅ 是 |\n| [FlexGEMM](https:\u002F\u002Fgithub.com\u002FJeffreyXiang\u002FFlexGEMM)（稀疏卷积） | MIT | ✅ 是 |\n| [CuMesh](https:\u002F\u002Fgithub.com\u002FJeffreyXiang\u002FCuMesh)（网格操作） | MIT | ✅ 是 |\n| O-Voxel（体素处理，TRELLIS.2 的一部分） | MIT | ✅ 是 |\n| [nvdiffrast](https:\u002F\u002Fgithub.com\u002FNVlabs\u002Fnvdiffrast)（NVIDIA） | NVIDIA 源代码许可证 | ❌ **仅限非商业用途** |\n| [nvdiffrec](https:\u002F\u002Fgithub.com\u002FNVlabs\u002Fnvdiffrec)（NVIDIA） | NVIDIA 源代码许可证 | ❌ **仅限非商业用途** |\n\n**重要提示：** NVIDIA 的两个库（`nvdiffrast` 和 `nvdiffrec`）仅在将 TRELLIS.2 的 **贴图模式** 设置为 **“原生（TRELLIS.2）”** 时才会被使用——具体用于 UV 光栅化和 PBR 贴图烘焙。它们的许可证限制使用仅限于“研究或评估目的，不得用于任何直接或间接的经济利益”（第 3.3 条）。只有 NVIDIA 及其关联公司才能在商业环境中使用这些库。\n\n**其他所有 TRELLIS.2 模式均不涉及许可限制：**\n* **仅形状模式（“无”）** - 不使用 
nvdiffrast\u002Fnvdiffrec。其余管道组件均采用宽松许可（MIT\u002FApache 2.0 + DINOv3 许可证）。\n* **基于投影的贴图模式（“SDXL”、“Qwen Image Edit”等）** - 不使用 nvdiffrast\u002Fnvdiffrec。所选扩散模型的许可条款照常适用（例如，FLUX.1-dev 有其独立于 TRELLIS.2 流程的许可条款）。\n\n如果您需要在商业环境中使用“原生（TRELLIS.2）”贴图模式，请考虑联系 NVIDIA，咨询关于 nvdiffrast\u002Fnvdiffrec 的商业许可事宜。\n\n---\n\n## 🙏 致谢\n\nStableGen 基于众多个人和社区的卓越工作而构建。我们由衷地感谢以下各方：\n\n* **学术渊源：** 本插件最初是 Ondřej Sakala 在布拉格捷克理工大学（信息技术学院）完成的学士论文，导师为 Radek Richtr 博士（Ing., Ph.D.）。\n    * 论文全文可在：[https:\u002F\u002Fdspace.cvut.cz\u002Fhandle\u002F10467\u002F123567](https:\u002F\u002Fdspace.cvut.cz\u002Fhandle\u002F10467\u002F123567) 查阅\n* **核心技术与社区：**\n    * ComfyAnonymous 开发的 **ComfyUI**（[GitHub](https:\u002F\u002Fgithub.com\u002Fcomfyanonymous\u002FComfyUI)），提供了强大且灵活的后端支持。\n    * PozzettiAndrea 开发的 **ComfyUI-TRELLIS2**（[GitHub](https:\u002F\u002Fgithub.com\u002FPozzettiAndrea\u002FComfyUI-TRELLIS2)），实现了 TRELLIS.2 与 ComfyUI 的集成。\n    * **Blender 基金会**及其社区，提供了令人惊叹的开源 3D 创作套件。\n* **受以下 Blender 插件启发：**\n    * Carson Katri 等人开发的 **Dream Textures**（[GitHub](https:\u002F\u002Fgithub.com\u002Fcarson-katri\u002Fdream-textures)）\n    * Frederik Hasecke 开发的 **Diffused Texture Addon**（[GitHub](https:\u002F\u002Fgithub.com\u002FFrederikHasecke\u002Fdiffused-texture-addon)）\n* **开创性研究：** 我们深深感激那些推动 StableGen 核心技术发展的研究人员。以下列出了一些在扩散模型、AI 驱动控制以及 3D 纹理生成领域具有基础性和影响力的成果（附 arXiv 预印本链接）：\n    * **扩散模型：**\n        * Ho 等人（2020），去噪扩散概率模型 - [2006.11239](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.11239)\n        * Rombach 等人（2022），潜在扩散模型（Stable Diffusion）- [2112.10752](https:\u002F\u002Farxiv.org\u002Fabs\u002F2112.10752)\n    * **AI 控制机制：**\n        * Zhang 等人（2023），ControlNet - [2302.05543](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.05543)\n        * Ye 等人（2023），IP-Adapter - [2308.06721](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.06721)\n    * **关键的 3D 纹理合成论文：**\n        * Chen 等人（2023），Text2Tex - [2303.11396](https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.11396)\n        * Richardson 等人（2023），TEXTure - 
[2302.01721](https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.01721)\n        * Zeng 等人（2023），Paint3D - [2312.13913](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.13913)\n        * Le 等人（2024），EucliDreamer - [2311.15573](https:\u002F\u002Farxiv.org\u002Fabs\u002F2311.15573)\n        * Ceylan 等人（2024），MatAtlas - [2404.02899](https:\u002F\u002Farxiv.org\u002Fabs\u002F2404.02899)\n    * **其他有影响力的工作：**\n        * Siddiqui 等人（2022），Texturify - [2204.02411](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.02411)\n        * Bokhovkin 等人（2023），Mesh2Tex - [2304.05868](https:\u002F\u002Farxiv.org\u002Fabs\u002F2304.05868)\n        * Levin & Fried（2024），微分扩散 - [2306.00950](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.00950)\n\n正是 AI 和开源社区的开放精神，才使得像 StableGen 这样的项目成为可能。\n\n---\n\n## 💡 计划中的功能列表\n\n以下是我们未来计划实现的一些功能（不分先后顺序）：\n* **超分辨率：** 支持对生成的纹理进行超分辨率处理。\n* **自定义 VAE 和 CLIP 模型选择：** 除了自定义 ControlNet 和 LoRA 模型外，还能够选择自定义的 VAE 和 CLIP 模型。\n* **细化模式改进：** 例如基于画笔的修复功能。\n* **基于画笔的修复：** 可直接在视口中绘制遮罩，以进行有针对性的局部编辑。\n* **针对 TRELLIS.2 的更好重网格化：** 实现更先进的重网格化技术，以提升生成网格的质量。\n\n如果您有任何建议，请随时提交问题！\n\n---\n\n## 📧 联系方式\n\n奥德雷·萨卡拉\n* 邮箱：`sakalaondrej@gmail.com`\n* X\u002FTwitter：`@sakalond`\n\n---\n*最后更新日期：2026年3月5日*","# StableGen 快速上手指南\n\nStableGen 是一款开源 Blender 插件，旨在将生成式 AI 融入 3D 工作流。它支持通过文本或单张图片生成带贴图的 3D 模型（基于 TRELLIS.2），并能利用 SDXL、FLUX.1-dev 或 Qwen Image Edit 对现有模型进行高质量纹理生成与细化。\n\n## 环境准备\n\n在开始之前，请确保您的系统满足以下要求：\n\n### 系统要求\n*   **操作系统**: Windows, macOS, 或 Linux\n*   **Blender 版本**: 4.2+ 或 5.1+\n*   **GPU**: 推荐 NVIDIA 显卡（支持 CUDA），显存建议 8GB 以上（生成高分辨率模型需更多显存）。\n*   **磁盘空间**: 预留至少 20GB 空间用于安装 ComfyUI 后端及下载模型权重。\n\n### 前置依赖\nStableGen 依赖本地运行的 **ComfyUI** 作为计算后端。\n1.  **安装 Python**: 确保已安装 Python 3.10 或更高版本。\n2.  
**安装 ComfyUI**:\n    *   官方仓库：[ComfyUI GitHub](https:\u002F\u002Fgithub.com\u002Fcomfyanonymous\u002FComfyUI)\n    *   *国内加速建议*：若访问 GitHub 较慢，可使用 Gitee 镜像或国内源克隆。\n    ```bash\n    git clone https:\u002F\u002Fgithub.com\u002Fcomfyanonymous\u002FComfyUI.git\n    cd ComfyUI\n    pip install -r requirements.txt\n    # 国内用户推荐使用清华\u002F阿里源加速 pip 安装\n    # pip install -r requirements.txt -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n    ```\n3.  **安装必要节点**: StableGen 需要 `ComfyUI-TRELLIS2` 等自定义节点。插件安装脚本会自动处理部分依赖，但建议预先熟悉 ComfyUI 管理器 (ComfyUI Manager) 的使用。\n\n## 安装步骤\n\n### 1. 下载插件\n访问 [StableGen Releases 页面](https:\u002F\u002Fgithub.com\u002Fsakalond\u002Fstablegen\u002Freleases) 下载最新版本的 `.zip` 安装包。\n\n### 2. 在 Blender 中安装\n1.  打开 Blender，进入 `Edit` (编辑) > `Preferences` (偏好设置)。\n2.  选择左侧 `Add-ons` (插件) 标签页。\n3.  点击右上角 `Install...` 按钮，选择下载的 `.zip` 文件。\n4.  在列表中找到 **StableGen**，勾选复选框以启用插件。\n\n### 3. 配置后端连接\n1.  启动 ComfyUI 服务器（确保其监听地址允许本地连接，通常为 `http:\u002F\u002F127.0.0.1:8188`）。\n2.  在 Blender 右侧属性面板找到 **StableGen** 选项卡。\n3.  在 `ComfyUI Server URL` 字段中输入您的 ComfyUI 地址（默认即可）。\n4.  点击 `Connect` 测试连接状态。若显示成功，即可开始使用。\n\n> **提示**: 首次运行时，插件可能会提示运行 `installer.py` 脚本以自动安装缺失的 ComfyUI 自定义节点（如 TRELLIS2 相关节点），请按提示操作。\n\n## 基本使用\n\n### 场景一：从文本\u002F图片生成 3D 模型 (Text\u002FImage to 3D)\n\n此流程利用 TRELLIS.2 模型直接从零生成带贴图的网格。\n\n1.  **打开面板**: 在 Blender 右侧边栏 (`N` 键) 切换到 **StableGen** 标签。\n2.  **选择模式**: 在 `Generation Mode` 中选择 `Text to 3D` 或 `Image to 3D`。\n3.  **输入提示词\u002F图片**:\n    *   若选文本模式，在 `Prompt` 框输入描述（例如：`\"fantasy dragon, intricate details, 4k\"`）。\n    *   若选图片模式，上传一张参考图。\n4.  **配置参数**:\n    *   **Resolution**: 推荐选择 `1024 Cascade` 以平衡速度与细节。\n    *   **Texture Pipeline**: 可选择使用 TRELLIS 原生 PBR 贴图，或勾选 `Use Diffusion Texturing` 调用 SDXL\u002FFlux\u002FQwen 进行重绘以提升质感。\n5.  **生成**: 点击 `Generate` 按钮。\n    *   系统将先在 ComfyUI 中生成候选视图，确认后可自动构建 3D 网格并导入 Blender 场景。\n\n### 场景二：为现有模型添加纹理 (Texturing Existing Model)\n\n此流程适用于您已有的低模或未贴图模型，利用多视角一致性技术进行贴图。\n\n1.  
**准备模型**: 在场景中选中一个或多个需要贴图的 Mesh 物体。\n2.  **设置相机**:\n    *   在 StableGen 面板的 `Camera Setup` 部分，选择一种放置策略（如 `Orbit Ring` 或 `Hemisphere`）。\n    *   点击 `Add Cameras` 自动生成环绕模型的相机阵列。\n3.  **编写提示词**:\n    *   在全局 `Prompt` 中输入整体风格描述。\n    *   （可选）展开相机列表，为特定视角的相机输入独立的 `View-Specific Prompt` 以细化局部细节。\n4.  **配置生成引擎**:\n    *   在 `Backend Settings` 中选择底模（Checkpoint），如 `SDXL`, `FLUX.1-dev` 或 `Qwen Image Edit`。\n    *   开启 `ControlNet` (Depth\u002FNormal) 以确保纹理贴合几何结构。\n5.  **执行纹理化**:\n    *   选择 `Sequential Mode` (高质量，逐视角生成) 或 `Grid Mode` (快速预览)。\n    *   点击 `Start Texturing`。\n    *   插件将控制 ComfyUI 逐帧渲染视角，并通过智能混合算法将纹理投射回模型的 UV 上。\n\n完成后，您可以在材质编辑器中查看生成的图像纹理，或使用 `Texture Baking` 工具将其烘焙为标准 UV 贴图以便进一步编辑。","一位独立游戏开发者需要在周末前为即将演示的关卡快速制作一套风格统一的废弃工厂资产，包括生锈的管道、破损的墙壁和散落的机械零件。\n\n### 没有 StableGen 时\n- **建模与贴图割裂**：必须先在外部软件或网站生成基础模型，再导出导入 Blender，流程繁琐且容易丢失比例信息。\n- **手动贴图效率极低**：面对场景中十几个不同的网格物体，需要逐个展开 UV 并手绘纹理，确保锈迹和污渍在多个视角下自然衔接耗时数天。\n- **风格难以统一**：不同资产由不同参考图生成，导致光照方向和材质质感（如金属锈蚀程度）不一致，后期调整工作量巨大。\n- **多视角一致性差**：传统 AI 贴图往往只优化正面视角，旋转模型后发现侧面或背面出现严重的纹理拉伸或逻辑错误。\n\n### 使用 StableGen 后\n- **一站式生成工作流**：直接在 Blender 内输入“废弃金属管道”提示词，利用 TRELLIS.2 瞬间生成带完整 PBR 材质的 3D 网格，无需切换软件。\n- **场景级批量贴图**：选中场景中所有未贴图的机械模型，StableGen 能基于预设的摄像机位，一次性为所有物体生成风格连贯的高质量纹理。\n- **智能多视角融合**：通过顺序模式（Sequential Mode）结合修复掩码，自动处理复杂曲面的纹理过渡，确保无论玩家从哪个角度观察，锈迹和磨损都自然真实。\n- **快速迭代方案**：利用预览画廊功能，几分钟内即可对比不同种子生成的多种材质方案，迅速锁定最符合关卡氛围的效果。\n\nStableGen 将原本需要数天的资产制作与贴图周期压缩至几小时，让开发者能专注于创意验证而非重复劳动。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsakalond_StableGen_ce43a4fa.png","sakalond",null,"https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fsakalond_3dfefa21.png","Student at CTU in Prague, Czechia","Prague, Czechia","sakalaondrej@gmail.com","https:\u002F\u002Fgithub.com\u002Fsakalond",[84],{"name":85,"color":86,"percentage":87},"Python","#3572A5",100,721,58,"2026-04-03T01:08:43","GPL-3.0",4,"Windows, macOS, Linux","必需 NVIDIA GPU（用于 ComfyUI 后端及 TRELLIS.2 模型推理），支持显存卸载（VRAM-conscious disk offloading），具体显存大小取决于所选模型分辨率模式（512-1536）及是否使用 FLUX.1-dev 
等大模型，建议 8GB+，需安装兼容的 CUDA 版本以配合 PyTorch\u002FComfyUI","未说明（建议 16GB+ 以处理复杂场景多网格纹理）",{"notes":97,"python":98,"dependencies":99},"该工具是 Blender 插件，核心计算依赖外部 ComfyUI 服务。用户需自行安装并配置 ComfyUI 后端，连接后在 Blender 中调用。支持多种生成模式（如 1024 Cascade, 1536 Cascade），高分辨率模式对显存要求较高。插件具备显存优化功能（磁盘卸载、可配置注意力后端）。首次使用需通过 installer.py 安装 ComfyUI-TRELLIS2 节点，并下载相应的 AI 模型权重文件。","未说明（取决于 Blender 4.2+\u002F5.1+ 内置版本及 ComfyUI 环境要求）",[100,101,102,103,104],"Blender 4.2+ 或 5.1+","ComfyUI (本地部署)","ComfyUI-TRELLIS2","TRELLIS.2 (Microsoft)","SDXL \u002F FLUX.1-dev \u002F Qwen Image Edit 模型",[14],[107,108,109,110,111,112],"blender","controlnet","flux1-dev","ipadapter","stable-diffusion","texture","2026-03-27T02:49:30.150509","2026-04-06T08:15:55.806791",[116,121,126,131,136,141],{"id":117,"question_zh":118,"answer_zh":119,"source_url":120},10525,"在 Windows 上运行生成任务时遇到 \"[Errno 11001] getaddrinfo failed\" 错误怎么办？","该错误通常与 WSL (Windows Subsystem for Linux) 或 Docker 版本有关。解决方案是直接在 Windows 原生环境下安装 ComfyUI Portable 版本，而不是在 WSL 中运行。确认在 Windows 便携版上运行后，问题即可解决。","https:\u002F\u002Fgithub.com\u002Fsakalond\u002FStableGen\u002Fissues\u002F2",{"id":122,"question_zh":123,"answer_zh":124,"source_url":125},10526,"进行 UV 修复（UV Inpaint）时出现 \"object is not iterable\" 错误或烘焙功能异常如何处理？","正确的操作顺序至关重要：必须先进行烘焙，再进行 UV 修复。但在烘焙时，务必**禁用** \"Add Material\"（添加材质）选项，以保留原始材质。如果未预先烘焙，系统会自动使用 \"Pack Islands\" 方法进行烘焙，这可能不是最优解。若先尝试对未烘焙的纹理进行 UV 修复，然后再执行烘焙，通常可以避免此错误。","https:\u002F\u002Fgithub.com\u002Fsakalond\u002FStableGen\u002Fissues\u002F30",{"id":127,"question_zh":128,"answer_zh":129,"source_url":130},10527,"StableGen 无法连接到 ComfyUI 服务器，即使网络访问已开启且地址已修改？","这通常是因为服务器地址解析格式不兼容。维护者已在后续版本中改进了地址解析逻辑，支持多种变体格式。如果遇到此问题，请确保升级到最新版本。若仍无法连接，请检查 ComfyUI 桌面版显示的服务器地址是否与插件中填写的完全一致（包括端口号）。","https:\u002F\u002Fgithub.com\u002Fsakalond\u002FStableGen\u002Fissues\u002F42",{"id":132,"question_zh":133,"answer_zh":134,"source_url":135},10528,"Blender 中生成的纹理显示为黑色，但 ComfyUI 控制台报错提示缺少 \"Input\" 文件夹？","如果确认输出文件夹中确实存在生成的图片，那么黑色纹理问题很可能是由 Blender 版本引起的。在 Blender 4.5 
及以上版本中，旧的着色器节点连接方式不再适用。需要手动调整材质编辑器中的节点链接，以适配新版本的渲染逻辑。","https:\u002F\u002Fgithub.com\u002Fsakalond\u002FStableGen\u002Fissues\u002F29",{"id":137,"question_zh":138,"answer_zh":139,"source_url":140},10529,"生成完成后，图片或材质没有自动应用到模型上，并提示 \"Not enough UV map slots\"？","此问题通常是因为模型的 UV 贴图槽位不足。虽然添加一个 UV 槽位可能让生成过程完成，但材质仍可能无法应用。请确保模型拥有足够数量的 UV 映射槽位以匹配所有相机视角。此外，检查控制台日志中是否有 \"Applied modifier was not first\" 的警告，尝试在生成前应用或调整修改器顺序也可能有助于解决问题。","https:\u002F\u002Fgithub.com\u002Fsakalond\u002FStableGen\u002Fissues\u002F47",{"id":142,"question_zh":143,"answer_zh":144,"source_url":145},10530,"是否支持 Black Forest Labs 的 Depth 和 Canny 模型及 LoRA？","是的，FLUX.1-dev Depth LoRA 自 v0.0.8 版本起已得到支持。建议使用 LoRA 版本而非完整的控制模型，因为它们在显存（VRAM）和存储空间占用上更轻量，同时能提供可预测的输出效果。","https:\u002F\u002Fgithub.com\u002Fsakalond\u002FStableGen\u002Fissues\u002F10",[147,152,157,162,167,172,177,182,187,192,197,202,207],{"id":148,"version":149,"summary_zh":150,"released_at":151},71075,"v0.3.0","### StableGen v0.3.0: Full 3D Asset Generation, PBR Materials & Scene Queue\r\n\r\nGenerate complete 3D assets from text or image prompts, decompose textures into PBR material stacks, and batch-process many assets overnight - all inside Blender. This release transforms StableGen from a texturing tool into an end-to-end 3D asset creation pipeline built on Microsoft's TRELLIS.2, SDXL and Qwen-Image-Edit.\r\n\r\n> **Blender Compatibility:** StableGen v0.3.0 supports Blender 4.2 - 4.5 (using OSL shaders) and Blender 5.1+ (using native Raycast nodes with GPU acceleration). 
Blender 5.0 is not supported.\r\n\r\n---\r\n\r\n✨ **What's New in v0.3.0:**\r\n\r\n- [TRELLIS.2: Image & Text to 3D](#-trellis2-image--text-to-3d)\r\n- [PBR Material Decomposition](#-pbr-material-decomposition)\r\n- [Scene Queue](#-scene-queue)\r\n- [FLUX.2 Klein Architecture (Experimental)](#-flux2-klein-architecture-experimental)\r\n- [Improvements & Fixes](#-improvements--fixes)\r\n\r\n---\r\n\r\n### 🧊 TRELLIS.2: Image & Text to 3D\r\n\r\nGenerate fully textured 3D meshes from a single reference image or text prompt using Microsoft's TRELLIS.2 (4B-parameter model). Powered by [ComfyUI-TRELLIS2](https:\u002F\u002Fgithub.com\u002FPozzettiAndrea\u002FComfyUI-TRELLIS2).\r\n\r\n- **Two input modes:**\r\n  - **Image** - provide a reference image directly.\r\n  - **Prompt** - StableGen first generates a reference image using any supported architecture (SDXL, FLUX.1, Qwen, Klein), then feeds it to TRELLIS.2.\r\n- **Multiple resolution modes:** 512, 1024, 1024 Cascade (recommended), and 1536 Cascade for maximum geometric detail.\r\n- **Flexible texture pipeline:** Use TRELLIS.2's native PBR textures, or automatically re-texture the generated mesh with SDXL, FLUX.1, Qwen Image Edit, or FLUX.2 Klein for higher-quality diffusion textures.\r\n- **Preview Gallery:** Generate multiple candidate images with different seeds and pick the best before committing to 3D generation. 
GPU-rendered overlay with hover selection and seed labels.\r\n- **Separate texture prompt:** Provide a dedicated prompt for the texturing cameras, independent of the image generation prompt - useful when the texturing needs different emphasis than the concept image.\r\n- **Source image as reference:** Feed the TRELLIS.2 input image as an IPAdapter style reference or Qwen style reference during texturing, keeping the final texture faithful to the original concept.\r\n- **Mesh post-processing:** Configurable decimation (up to 5M polys), optional remeshing, import scale, and shading modes (Flat, Smooth, Auto Smooth). Post-processing master toggle to import raw meshes for manual retopology.\r\n- **Automatic camera placement:** 6 strategies (Orbit Ring, Sphere Coverage, Normal-Weighted K-means, PCA Axes, Greedy Coverage, Fan from Camera) with auto aspect ratio, occlusion handling, elevation clamping, and bottom-face exclusion.\r\n- **Auto view-direction prompts** for hands-free camera-specific prompting.\r\n- **3-tier progress tracking:** Overall progress, phase progress, and per-step detail - all shown in the UI. Cancel via the Escape key at any point.\r\n- **30+ built-in presets** organized across 4 architecture groups, including TRELLIS.2 pipeline presets (DEFAULT \u002F CHARACTERS \u002F ARCHITECTURE \u002F Mesh Only \u002F Qwen variants). Preset diff preview shows which parameters a preset will change before applying it.\r\n- **Export-ready pipeline:** Generate → Bake → Export. The built-in bake operator (now defaulting to Smart UV Project) flattens the multi-projection material into a single UV-mapped texture, ready for any game engine or DCC tool.\r\n\r\n**Installer:** Option 8 - TRELLIS.2 (~0.1 GB, models auto-download on first use). 
The installer applies 10 post-clone patches to ComfyUI-TRELLIS2 including a DinoV3 VRAM leak fix (5-9 GB savings) and comfy-env model registry cleanup (~7 GB savings).\r\n\r\n**Examples** (SDXL texturing):\r\n\r\n| Dragon | Robot | Wizard |\r\n|:---:|:---:|:---:|\r\n| \u003Cimg src=\"https:\u002F\u002Fraw.githubusercontent.com\u002Fsakalond\u002FStableGen\u002Fv0.3.0\u002Fdocs\u002Fimg\u002Ftrellis2\u002Fsdxl_dragon.gif\" alt=\"Dragon\" width=\"200\"> | \u003Cimg src=\"https:\u002F\u002Fraw.githubusercontent.com\u002Fsakalond\u002FStableGen\u002Fv0.3.0\u002Fdocs\u002Fimg\u002Ftrellis2\u002Fsdxl_robot.gif\" alt=\"Robot\" width=\"200\"> | \u003Cimg src=\"https:\u002F\u002Fraw.githubusercontent.com\u002Fsakalond\u002FStableGen\u002Fv0.3.0\u002Fdocs\u002Fimg\u002Ftrellis2\u002Fsdxl_wizard.gif\" alt=\"Wizard\" width=\"200\"> |\r\n\r\n\u003Cdetails>\r\n\u003Csummary>Prompts used\u003C\u002Fsummary>\r\n\r\n1. **Dragon:** *\"fantasy dragon\"*\r\n2. **Robot:** *\"giant robot, mecha, cyberpunk style, sci-fi, white body, intricate details, neon accents\"*\r\n3. 
**Wizard:** *\"wizard character, intricate embroidered purple and gold robes, pointed hat, wooden staff with glowing crystal, leather belt with pouches, fantasy character concept art, 4k\"*\r\n\r\n\u003C\u002Fdetails>\r\n\r\n**Examples** (Qwen texturing):\r\n\r\n| Chest | Robot | Obelisk |\r\n|:---:|:---:|:---:|\r\n| \u003Cimg src=\"https:\u002F\u002Fraw.githubusercontent.com\u002Fsakalond\u002FStableGen\u002Fv0.3.0\u002Fdocs\u002Fimg\u002Ftrellis2\u002Fqwen_chest.gif\" alt=\"Chest\" width=\"200\"> | \u003Cimg src=\"https:\u002F\u002Fraw.githubusercontent.com\u002Fsakalond\u002FStableGen\u002Fv0.3.0\u002Fdocs\u002Fimg\u002Ftrellis2\u002Fqwen_robot.gif\" alt=\"Robot\"","2026-03-05T17:35:22",{"id":153,"version":154,"summary_zh":155,"released_at":156},71076,"v0.2.0","### StableGen v0.2.0: Camera Overhaul, Local Edit Mode & Blender 5.1 Support\r\n\r\nThis is the biggest StableGen update yet - a ground-up rework of the camera system, a brand new Local Edit mode, Blender 5.1 support with GPU-accelerated projection, Apple Silicon support, and a suite of quality-of-life improvements across the board.\r\n\r\n> **⚠️ Blender Compatibility:** StableGen v0.2.0 supports **Blender 4.2 – 4.5** (using OSL shaders) and **Blender 5.1+** (using native Raycast nodes with GPU acceleration). **Blender 5.0 is not supported** - OSL is broken in 5.0 and the native Raycast node was not introduced until 5.1. 
Blender 5.1 is currently in beta but is expected to release soon.\r\n\r\n**✨ What's New in v0.2.0:**\r\n\r\n* **📷 Camera System Overhaul:**\r\n    * The camera placement system has been completely rewritten with **7 placement strategies** including fully automatic modes:\r\n        * **Orbit Ring** - the original circular arrangement.\r\n        * **Fan Arc** - cameras spread in an arc facing the subject.\r\n        * **Hemisphere** - even distribution across a hemisphere.\r\n        * **PCA-Axis** - cameras aligned to the mesh's principal axes.\r\n        * **Normal-Weighted K-means** - clusters camera directions based on surface normals, biasing toward faces that need coverage.\r\n        * **Greedy Occlusion** - iteratively picks directions that maximise visible, uncovered surface.\r\n        * **Interactive Visibility** - real-time scroll-to-adjust preview that lets you balance occlusion filtering live, with HUD and camera count preview.\r\n    * **Per-Camera Optimal Aspect Ratios:** Each camera now gets its own resolution computed from the mesh's silhouette extent in that viewing direction. No more wasted pixels on letterboxing - portraits get tall frames, landscapes get wide ones.\r\n    * **No More 8-Camera Limit:** The hardcoded limit has been removed - use as many cameras as you need. *(Thanks to hickVieira for the suggestion!)*\r\n    * **Camera Generation Order:** New reorder list in Viewpoint Blending Settings lets you control the exact order cameras are processed in Sequential mode. 
Includes 6 preset strategies: Alphabetical, Front→Back→Sides, Back→Front→Sides, Alternating Opposite, Top→Bottom, and Reverse.\r\n    * New camera operators:\r\n        * **Clone Camera** - duplicate a camera and immediately enter fly mode to reposition.\r\n        * **Mirror Camera** - mirror a camera across X\u002FY\u002FZ axis through the mesh center.\r\n        * **Toggle Camera Labels** - show floating per-camera prompt text in the viewport.\r\n\r\n   Here are some examples of the automatic camera placement. As you can see, the process is now much simpler:\r\n    * *restaurant model by Jellepostma from [Sketchfab](https:\u002F\u002Fsketchfab.com\u002F3d-models\u002Fjapanese-restaurant-inakaya-97594e92c418491ab7f032ed2abbf596)*\r\n    * *woman model by Patrix from [Sketchfab](https:\u002F\u002Fsketchfab.com\u002F3d-models\u002Fscifi-girl-v01-96340701c2ed4d37851c7d9109eee9c0)*\r\n\r\n\u003Cimg src=\"https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002F97004562-4353-4b8a-9474-b1e3dabac81a\" alt=\"camera showcase\" width=\"47%\">\r\n\u003Cimg src=\"https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002F6c173858-4965-4d0b-8c80-b6ce9ccbf34c\" alt=\"camera showcase\" width=\"50%\">\r\n\r\n* **🖥️ Blender 5.1 Support (GPU-Accelerated Projection):**\r\n    * StableGen now fully supports **Blender 5.1**, including all API changes to the compositor, UV selection, animation channels, and Eevee engine naming.\r\n    * On Blender 5.1+, projection uses **native Raycast shader nodes** instead of OSL scripts. 
This means projection and visibility rendering now run on **GPU (CUDA\u002FOptiX\u002FMetal)** instead of being locked to CPU - a major speed boost on complex scenes.\r\n    * Blender 4.2 – 4.5 remain fully supported using the original OSL pipeline.\r\n\r\n    \u003C!-- PLACEHOLDER: Blender 5.1 comparison GIF showing GPU vs CPU projection speed -->\r\n\r\n* **🎯 Local Edit Mode:**\r\n    * A brand new generation mode that replaces the old \"Preserve Original Texture\" toggle. Point cameras at specific areas you want to modify - the new generation **blends seamlessly over the original** using angle-based and vignette-based feathering, leaving untouched areas pristine. *(Based on the refine\u002Fmirror PR by ManglerFTW - thank you!)*\r\n    * Works with all architectures (SDXL, Flux, and Qwen Image Edit) - Qwen gets its own dedicated `local_edit` generation method.\r\n    * **Silhouette Edge Feathering:** New controls for softly blending projection boundaries based on screen-space frustum distance - prevents hard seams at projection edges. Powered by a new dedicated `feather.osl` shader (and its native Blender 5.1 equivalent).\r\n    * **Separate Angle & Vignette Controls:** Angle ramp (black\u002Fwhite point) and vignette (width, softness) are now independently tunable for precise control over where new texture fades into old.\r\n    * **Original Render IPAdapter mode**, which enables SDXL \u002F FLUX to match the styling of the previous generation for usecases such as improving detail \u002F quality in specific areas.\r\n\r\n* **🎨 Qwen Re","2026-02-15T22:01:41",{"id":158,"version":159,"summary_zh":160,"released_at":161},71077,"v0.1.1","### StableGen v0.1.1: Nunchaku Support & Qwen Polish\r\n\r\nThis update brings support for **Nunchaku**, a specialized inference backend for Qwen models, along with several key fixes for the Qwen-Image-Edit workflow to ensure a smoother experience. 
This should generally offer **much faster generation** with Qwen-Image-Edit.\r\n\r\n**✨ What's New in v0.1.1:**\r\n\r\n* **🥋 Nunchaku Support:**\r\n    * Added full integration for **Nunchaku**, enabling the use of **Qwen models** and **LoRAs** via Nunchaku nodes.\r\n    * Added dedicated presets: `QWEN EDIT PRECISE (NUNCHAKU)`, `QWEN EDIT SAFE (NUNCHAKU)`, and `QWEN EDIT ALT (NUNCHAKU)` for optimized 4-step workflows.\r\n    * *Using Nunchaku **requires downloading a separate checkpoint and additional custom nodes**. You can use the installer script, which has been updated, or refer to the manual installation instructions.*\r\n\r\n* **🛠️ Fixes & Improvements:**\r\n    * The \"Reproject Textures\" operator now functions correctly with Qwen architectures.\r\n    * Fixed an issue where manually cancelling a Qwen generation would throw an error code instead of stopping gracefully.\r\n    * **Guidance Prompt Fix:** Resolved a bug where the guidance prompt wasn't being fetched correctly when using an external style image combined with the \"Additional\" context render mode.\r\n\r\n* **⚠️ Important Note on Blender 5.0:**\r\n    * I am actively working on support for **Blender 5.0**. However, due to significant breaking changes in the new version, it will take some time to ensure full compatibility. **Blender 5.0 is NOT supported in this release.** Please continue using Blender 4.2+\r\n    \r\n---\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fsakalond\u002FStableGen\u002Fcompare\u002Fv0.1.0...v0.1.1","2025-11-25T23:24:47",{"id":163,"version":164,"summary_zh":165,"released_at":166},71078,"v0.1.0","### StableGen v0.1.0: Next-Gen Texturing with Qwen-Image-Edit\r\n\r\nThis major update introduces the **Qwen-Image-Edit** architecture, a powerful new model that enables high-fidelity, consistent texturing (including legible text!) using a novel image editing workflow. 
This release also includes a rollup of important bug fixes for object visibility, UV map handling, and FLUX workflows.\r\n\r\n**✨ What's New in v0.1.0:**\r\n\r\n* **🎨 Next-Gen Texturing with Qwen-Image-Edit:**\r\n    * Integrates the `Qwen-Image-Edit-2509` model, a new architecture that works without traditional ControlNet or IPAdapter to deliver outstanding consistency and **legible text generation**.\r\n    * Adds a **dedicated \"Qwen Image Edit\" architecture** to the main panel.\r\n    * Introduces a new **Qwen Guidance advanced parameters section** for precise control over the image editing workflow, including:\r\n        * **Guidance Map Control:** Use Depth or Normal maps as the structural driver for sequential projections.\r\n        * **Context Render Options:** Control how sequential views utilize the previous render (e.g., disable RGB context, swap style image, or feed as a reference).\r\n        * **External Style Imaging:** Apply a consistent art direction from a reference file, either for the first viewpoint only or for the entire generation.\r\n        * **Custom Prompt Templates:** Separate prompt fields for the initial shot and sequential steps (using `{main_prompt}` token).\r\n        * **Guidance Color Management:** New tools (dilation, fallback\u002Fbackground colors, hue\u002Fvalue cleanup) to eliminate magenta mask artifacts before projection.\r\n    * **Qwen LoRAs** are now fully integrated into the shared LoRA manager.\r\n    * New **Qwen-specific presets** are included for fast, high-quality results.\r\n    * *Note: **You will need to install additional requirements** for this new architecture. 
You can use the `installer.py` script, which has been updated with new Qwen related packages.*\r\n    \r\n    Here are some examples generated with the new architecture:\r\n    * *restaurant model by Jellepostma from [Sketchfab](https:\u002F\u002Fsketchfab.com\u002F3d-models\u002Fjapanese-restaurant-inakaya-97594e92c418491ab7f032ed2abbf596)*\r\n    * *woman model by Patrix from [Sketchfab](https:\u002F\u002Fsketchfab.com\u002F3d-models\u002Fscifi-girl-v01-96340701c2ed4d37851c7d9109eee9c0)*\r\n    \r\n    \u003Cimg src=\"https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002F28b4a25a-6312-455c-aece-2479a9d906fd\" alt=\"woman\" width=\"20%\">\r\n    \r\n    \u003Cimg src=\"https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002F5bb1fec5-4c20-4800-8579-23ea39a5b7d0\" alt=\"restaurant\" width=\"50%\">\r\n    \r\n    \u003Cimg src=\"https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002Febfa6c97-7a23-4e53-9b75-8fb0b29b93e4\" alt=\"woman\" width=\"20%\">\r\n\r\n\r\n* **🛠️ Fixes & Improvements:**\r\n    * Fixed a critical issue where **hidden objects** or objects excluded from the view layer could cause errors during generation.\r\n    * Resolved a bug with **non-StableGen UV maps** when \"Overwrite Material\" was enabled.\r\n    * Corrected the **FLUX IPAdapter and ControlNet** implementations, which were broken by the v0.0.9 remote server (API) refactor.\r\n    * Fixed a bug where generation would incorrectly cancel on **high-resolution renders** even when \"Auto Rescale\" was disabled.\r\n    * Improved server connectivity by adding more robust **server address parsing** to handle different formats (like with\u002Fwithout `http:\u002F\u002F`).\r\n    * Fixed multiple image data-blocks being created from the same image when more meshes are being textured.\r\n    * Fixed a bug where generation wouldn't start with 7 viewpoints and 1 pre-existing UV map.\r\n\r\n---\r\n\r\n**Full Changelog**: 
https:\u002F\u002Fgithub.com\u002Fsakalond\u002Fstablegen\u002Fcompare\u002Fv0.0.9...v0.1.0","2025-11-04T13:19:06",{"id":168,"version":169,"summary_zh":170,"released_at":171},71079,"v0.0.9","### StableGen v0.0.9: Remote Backend Support & Dynamic ControlNet Mapping\r\n\r\nThis release introduces major under-the-hood changes enabling support for remote ComfyUI backends, along with a completely revamped and user-friendly system for managing ControlNet models.\r\n\r\n**✨ What's New in v0.0.9:**\r\n\r\n* **🌐 Remote ComfyUI Backend Support:**\r\n    * StableGen now communicates with ComfyUI primarily through its API.\r\n    * **Input images** (ControlNet maps, IPAdapter references, img2img inputs) are uploaded via the `\u002Fupload\u002Fimage` endpoint instead of relying on shared file paths.\r\n    * **Model lists** (Checkpoints, LoRAs, ControlNets) are fetched directly from the ComfyUI server's API (`\u002Fmodels\u002F...`).\r\n    * This enables running ComfyUI on a separate machine from Blender (requires correct `Server Address` in preferences).\r\n    * It also means you no longer need to set ComfyUI's directory, since it doesn't even have to be on the same computer.\r\n\r\n* **🧠 Dynamic ControlNet Mapping:**\r\n    * Replaced the cumbersome JSON string in preferences with a dynamic list.\r\n    * The addon attempts to **auto-assign types** (Depth, Canny, Normal) based on filenames.\r\n    * Users can easily **override assignments** using checkboxes directly in the preferences UI.\r\n    * Correctly supports assigning **multiple types to Union models**.\r\n\r\n* **🛠️ Fixes & Improvements:**\r\n    * Added a server status check button next to the Server Address in preferences and the main panel, providing immediate feedback on connectivity.\r\n    * Fixed UI flicker where the per-image progress bar would jump from 100% back to 0% after the image index updated.\r\n\r\n---\r\n\r\n**Full Changelog**: 
https:\u002F\u002Fgithub.com\u002Fsakalond\u002FStableGen\u002Fcompare\u002Fv0.0.8...v0.0.9","2025-10-25T14:37:10",{"id":173,"version":174,"summary_zh":175,"released_at":176},71080,"v0.0.8","### StableGen v0.0.8: Viewpoint Regeneration & Expanded FLUX.1 support\r\n\r\nThis update introduces a powerful new viewpoint regeneration operator, adds support for the FLUX.1 Depth LoRA, and includes a host of important fixes for UV inpainting and more.\r\n\r\n**✨ What's New in v0.0.8:**\r\n\r\n* **🎯 Selective Viewpoint Regeneration:**\r\n    * You can now regenerate only the specific viewpoints you select. This new operator provides finer control over your multi-view generations, saving significant time and resources by letting you focus only on the angles that need adjustment.\r\n       * This new operator will replace the default `Generate` operator whenever any cameras are selected.\r\n\r\n* **🚀 Expanded FLUX.1 Support:**\r\n    * Added support for FLUX.1 models in `.gguf` format for more flexibility.\r\n    * Integrated support for the `flux.1-depth-dev` LoRA, which can now be used instead of ControlNet, saving VRAM.\r\n\r\n* **🛠️ Fixes & Improvements:**\r\n    * Added material compatibility for UV Inpainting and resolved multiple issues with object-specific prompts.\r\n    * Fixed an issue where the FLUX IPAdapter failed when using the `regenerate first image` option.\r\n    * Corrected a bug that prevented FLUX Comfy workflows from saving correctly.\r\n    * Resolved an issue where applied presets were not being detected properly.\r\n\r\n---\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fsakalond\u002FStableGen\u002Fcompare\u002Fv0.0.7...v0.0.8","2025-09-27T18:15:27",{"id":178,"version":179,"summary_zh":180,"released_at":181},71081,"v0.0.7","### StableGen v0.0.7: FLUX.1 IPAdapter & Checkpoint Selection\r\n\r\nThis update introduces enhancements for the FLUX.1 workflow, offering greater model flexibility and creative possibilities.\r\n\r\n**✨ What's New in 
v0.0.7:**\r\n\r\n* **🎯 FLUX.1 Checkpoint Selection:**\r\n    * The FLUX.1 model is no longer hardcoded. You can now select any FLUX.1 model directly from the UI.\r\n    * Place your FLUX.1 models in `\u003CYourComfyUIDirectory>\u002Fmodels\u002Funet\u002F` or another custom model directory to have them appear in the model list.\r\n\r\n* **🎨 FLUX.1 IPAdapter Support:**\r\n    * Integrated IPAdapter for the FLUX.1 model, enabling image-based prompting.\r\n    * This allows you to guide your generations with reference images for more precise control over the output style and content.\r\n    * You need to install additional dependencies. Please refer to the [manual installation guide](https:\u002F\u002Fgithub.com\u002Fsakalond\u002FStableGen\u002Fblob\u002Fmain\u002Fdocs\u002FMANUAL_INSTALLATION.md).\r\n    * Note that this will also require more VRAM.\r\n\r\n---\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fsakalond\u002FStableGen\u002Fcompare\u002Fv0.0.6...v0.0.7","2025-07-10T16:28:35",{"id":183,"version":184,"summary_zh":185,"released_at":186},71082,"v0.0.6","### StableGen v0.0.6: Enhanced Control & Workflow Flexibility 🚀\r\n\r\nThis update brings new options for more flexible and controlled texture generation.\r\n\r\n**✨ What's New in v0.0.6:**\r\n\r\n* **🎯 Selective Object Texturing:**\r\n    * Added an option to texture only the selected objects. 
Unselected objects won't be changed.\r\n    * This can improve performance in larger scenes and allows for targeted modifications to parts of a scene or multi-mesh models.\r\n\r\n* **🎨 New `Prioritize Initial Views` for Blending:**\r\n    * Located in `Advanced Parameters > Viewpoint Blending Settings`.\r\n    * This switch allows textures generated from earlier camera viewpoints to have more influence during the blending process.\r\n    * Its effect can be adjusted with the `Priority Strength` slider.\r\n    * For best results, ensure important details (like a character's face) are well-covered by the initial camera views.\r\n    \r\n    * Showcase (model available at [blendswap](https:\u002F\u002Fwww.blendswap.com\u002Fblend\u002F12450)): ![comparison_prioritize_initial_views](https:\u002F\u002Fgithub.com\u002Fuser-attachments\u002Fassets\u002Ff7440228-b002-4b5b-b744-b08be24ecbc8)\r\n\r\n\r\n* **⚙️ Additional Schedulers:**\r\n    * `Normal` and `Simple` schedulers have been added to the available options.\r\n\r\n* **🖌️ Inpainting Context Background Update:**\r\n    * RGB renders used for inpainting context (e.g., in Sequential mode) now use the `Fallback Color` (set in `Advanced Parameters > Output & Material Settings`) for their background, instead of the previous fixed gray.\r\n\r\n---\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fsakalond\u002FStableGen\u002Fcompare\u002Fv0.0.5...v0.0.6","2025-05-24T23:00:41",{"id":188,"version":189,"summary_zh":190,"released_at":191},71083,"v0.0.5","### StableGen v0.0.5: External Directory Support, Reprojection Tool & Fixes\r\n\r\nVersion 0.0.5 introduces more flexibility in how you manage your models, a handy new reprojection tool, and important stability improvements.\r\n\r\n**✨ What's New in v0.0.5:**\r\n\r\n* **📁 External Directory Support for Checkpoints & LoRAs**\r\n    * You can now specify *additional external directories* for your Checkpoints and LoRAs in the addon preferences, alongside your main 
ComfyUI directory.\r\n    * StableGen will scan these external locations (including subfolders) and add them to your model lists.\r\n    * **Important Note:** For ComfyUI to use models from these external paths during generation, you must *also configure ComfyUI itself* to recognize these directories (e.g., by editing its `extra_model_paths.yaml`).\r\n\r\n* **🔄 New \"Reproject Textures\" Operator**\r\n    * Find this new tool in the \"Tools\" tab.\r\n    * It allows you to re-apply previously generated textures to your models.\r\n    * Crucially, the reprojection process will respect your *current* \"Viewpoint Blending Settings\" (like Discard-Over Angle, Weight Exponent), allowing you to tweak how existing textures are blended without regenerating them from scratch.\r\n\r\n* **🔧 Bug Fix: Mixed ControlNet Stability**\r\n    * Resolved an issue where using a combination of \"Union\" type ControlNets alongside standard, non-union ControlNets could lead to broken workflows.\r\n\r\n---\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fsakalond\u002FStableGen\u002Fcompare\u002Fv0.0.4...v0.0.5","2025-05-21T01:18:27",{"id":193,"version":194,"summary_zh":195,"released_at":196},71084,"v0.0.4","### StableGen v0.0.4: Advanced LoRAs & Simpler Setup! 
🚀\r\n\r\nThis update brings powerful LoRA customization and a more streamlined workflow to StableGen!\r\n\r\n**✨ What's New:**\r\n\r\n* **Advanced LoRA System:**\r\n    * Chain any number of custom LoRAs for complex styles.\r\n    * Fine-tune each LoRA with individual `model strength` and `CLIP strength`.\r\n    * LoRAs are now auto-discovered from your ComfyUI `models\u002Floras` directory (including subfolders!).\r\n* **Simplified Model Management:**\r\n    * Just one `ComfyUI directory` to set in preferences!\r\n    * All your checkpoints (from `models\u002Fcheckpoints\u002F`) and LoRAs (from `models\u002Floras\u002F`) are automatically found, even in subdirectories.\r\n* **Presets & UI:** LoRA setups are now part of presets. New `LoRA Management` section is available under `Advanced Parameters`.\r\n\r\n---\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fsakalond\u002FStableGen\u002Fcompare\u002Fv0.0.3...v0.0.4\r\n\r\nHappy generating!","2025-05-19T23:10:20",{"id":198,"version":199,"summary_zh":200,"released_at":201},71085,"v0.0.3","Minor update:\r\n- Improved error handling for websocket, errors should now be more informative\r\n- No longer possible to start generation without valid model directory\r\n\r\n**Full Changelog**: https:\u002F\u002Fgithub.com\u002Fsakalond\u002FStableGen\u002Fcompare\u002Fv0.0.2...v0.0.3","2025-05-19T15:00:43",{"id":203,"version":204,"summary_zh":205,"released_at":206},71086,"v0.0.2","Updated to respect Blender's add-on guidelines:\r\n\r\n- Python dependencies are now bundled as wheels. Removed previous install script.\r\n- Respects Blender's \"Allow Online Access\".","2025-05-18T21:25:59",{"id":208,"version":209,"summary_zh":77,"released_at":210},71087,"v0.0.1","2025-05-18T03:14:24"]