[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"tool-bryanswkim--Chain-of-Zoom":3,"similar-bryanswkim--Chain-of-Zoom":73},{"id":4,"github_repo":5,"name":6,"description_en":7,"description_zh":8,"ai_summary_zh":8,"readme_en":9,"readme_zh":10,"quickstart_zh":11,"use_case_zh":12,"hero_image_url":13,"owner_login":14,"owner_name":15,"owner_avatar_url":16,"owner_bio":17,"owner_company":18,"owner_location":19,"owner_email":17,"owner_twitter":14,"owner_website":20,"owner_url":21,"languages":22,"stars":31,"forks":32,"last_commit_at":33,"license":34,"difficulty_score":35,"env_os":36,"env_gpu":37,"env_ram":36,"env_deps":38,"category_tags":49,"github_topics":17,"view_count":52,"oss_zip_url":17,"oss_zip_packed_at":17,"status":53,"created_at":54,"updated_at":55,"faqs":56,"releases":72},3755,"bryanswkim\u002FChain-of-Zoom","Chain-of-Zoom","[NeurIPS'25 Spotlight] Official repository for \"Chain-of-Zoom: Extreme Super-Resolution via Scale Autoregression and Preference Alignment\"","Chain-of-Zoom 是一款专为突破图像超分辨率极限而设计的开源框架，旨在让现有的超分模型无需重新训练，即可生成远超其原始训练范围的超高清晰度图像。传统模型在强行放大时往往会出现模糊、伪影，且针对新倍数重新训练成本高昂，Chain-of-Zoom 巧妙地将这一难题转化为“尺度自回归”过程：它像链条一样多次复用同一个基础模型，通过逐步放大的中间状态，将复杂的超分任务分解为可管理的子问题。\n\n该工具的独特亮点在于引入了“多尺度感知提示”机制。由于高倍放大下视觉细节容易丢失，Chain-of-Zoom 利用视觉语言模型（VLM）自动生成描述图像内容的文本提示来辅助每一步的放大，并可通过人类偏好对齐技术进一步优化生成效果，确保结果自然逼真且符合审美。此外，它还提供了显存优化选项，使高性能计算更加亲民。\n\nChain-of-Zoom 非常适合 AI 研究人员探索生成式超分的新范式，也适用于开发者将其集成到图像处理管线中，或是设计师用于需要极致细节的图片修复与增强场景。只要具备基础的深度学习环境配置能力，用户即可利用该工具释放现有模型的潜力，轻松实现从普通清晰到电影级画质的跨越。","# Chain-of-Zoom: Extreme Super-Resolution via Scale Autoregression and Preference Alignment (NeurIPS 2025 Spotlight)\n\nThis repository is the official implementation of [Chain-of-Zoom: Extreme Super-Resolution via Scale Autoregression and Preference Alignment](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.18600), led by\n\n[Bryan Sangwoo Kim](https:\u002F\u002Fbryanswkim.github.io\u002F), [Jeongsol Kim](https:\u002F\u002Fjeongsol.dev\u002F), [Jong Chul Ye](https:\u002F\u002Fbispl.weebly.com\u002Fprofessor.html)\n\n![main figure](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbryanswkim_Chain-of-Zoom_readme_723eb77bcae7.jpg)\n\n[![Project Website](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FProject-Website-blue)](https:\u002F\u002Fbryanswkim.github.io\u002Fchain-of-zoom\u002F)\n[![arXiv](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FarXiv-2505.18600-b31b1b.svg)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.18600)\n\n---\n## 🔥 Summary\n\nModern single-image super-resolution (SISR) models deliver photo-realistic results at the scale factors on which they are trained, but show notable drawbacks:\n\n1. **Blur and artifacts** when pushed to magnify beyond its training regime\n2. 
## 🗓️ News
- [Aug 2025] Additional code released.
- [Jun 2025] Check out the awesome 🤗 [Huggingface Space](https://huggingface.co/spaces/alexnasa/Chain-of-Zoom) by [@alexnasa](https://huggingface.co/alexnasa). Thanks for the awesome work!
- [May 2025] Code and paper released.

## 🛠️ Setup
First, create your environment. We recommend the following commands.

```
git clone https://github.com/bryanswkim/Chain-of-Zoom.git
cd Chain-of-Zoom

conda create -n coz python=3.10
conda activate coz
pip install -r requirements.txt
```

## ⏳ Models

|Models|Checkpoints|
|:---------|:--------|
|Stable Diffusion v3|[Hugging Face](https://huggingface.co/stabilityai/stable-diffusion-3-medium)|
|Qwen2.5-VL-3B-Instruct|[Hugging Face](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct)|
|RAM|[Hugging Face](https://huggingface.co/spaces/xinyu1205/recognize-anything/blob/main/ram_swin_large_14m.pth)|

## ⚡ Quick Inference
You can quickly check the results of using **CoZ** with the following example:
```
python inference_coz.py \
  -i samples \
  -o inference_results/coz_vlmprompt \
  --rec_type recursive_multiscale \
  --prompt_type vlm \
  --lora_path ckpt/SR_LoRA/model_20001.pkl \
  --vae_path ckpt/SR_VAE/vae_encoder_20001.pt \
  --vlm_lora_path ckpt/VLM_LoRA/checkpoint-10000 \
  --pretrained_model_name_or_path 'stabilityai/stable-diffusion-3-medium-diffusers' \
  --ram_ft_path ckpt/DAPE/DAPE.pth \
  --ram_path ckpt/RAM/ram_swin_large_14m.pth \
  --save_prompts;
```
This will produce a result like the one below:

![main figure](https://oss.gittoolsai.com/images/bryanswkim_Chain-of-Zoom_readme_e0249127158c.png)

## 🔬 Efficient Memory
Using `--efficient_memory` allows **CoZ** to run on a single GPU with 24GB VRAM, but substantially increases inference time due to offloading. \
We recommend using two GPUs.
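The slowdown comes from shuttling weights between host and device. As a rough illustration of the mechanism (an assumption about how such offloading typically works, not Chain-of-Zoom's actual implementation), stock `diffusers` exposes the same trade-off:

```python
# Sequential CPU offloading with diffusers (requires accelerate) -- illustrates
# the VRAM-for-speed trade-off behind flags like --efficient_memory.
import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    torch_dtype=torch.float16,
)
# Keeps only the currently active sub-model (text encoders, transformer, VAE)
# on the GPU; peak VRAM drops sharply, but every hand-off pays a PCIe transfer.
pipe.enable_model_cpu_offload()

image = pipe("a macro photo of a dragonfly wing", num_inference_steps=28).images[0]
image.save("offload_test.png")
```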
## 🌄 Full Image Super-Resolution
Although our main focus is zooming into local areas, **CoZ** can be easily applied to super-resolution of full images. Try out the code below!

```
python inference_coz_full.py \
  -i samples \
  -o inference_results/coz_full \
  --rec_type recursive_multiscale \
  --prompt_type vlm \
  --lora_path ckpt/SR_LoRA/model_20001.pkl \
  --vae_path ckpt/SR_VAE/vae_encoder_20001.pt \
  --vlm_lora_path ckpt/VLM_LoRA/checkpoint-10000 \
  --pretrained_model_name_or_path 'stabilityai/stable-diffusion-3-medium-diffusers' \
  --ram_ft_path ckpt/DAPE/DAPE.pth \
  --ram_path ckpt/RAM/ram_swin_large_14m.pth;
```

## 🚆 Training the SR Backbone Model
**Chain-of-Zoom** is model-agnostic and can be used with *any* pretrained text-aware SR model. In this repository we use OSEDiff trained with Stable Diffusion 3 Medium as the backbone model. This requires some additional installations:

```
pip install wandb opencv-python basicsr==1.4.2

pip install --no-deps --extra-index-url https://download.pytorch.org/whl/cu121 xformers==0.0.28.post1
```

Please refer to the [OSEDiff](https://github.com/cswry/OSEDiff) repository for training configurations (e.g., preparing training data). Then train the SR backbone model:
```
bash scripts/train/train_osediff_sd3.sh
```
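SR training data comes as paired low/high-resolution images. The sketch below shows only that layout, using plain bicubic downsampling; it is a hypothetical stand-in, not OSEDiff's degradation pipeline, which is documented in that repository.

```python
# Rough illustration: build LR-HR training pairs by bicubic downsampling.
# Real SR training pipelines use much richer degradations; this only shows the
# paired-data layout, not OSEDiff's recipe. All paths are illustrative.
from pathlib import Path
from PIL import Image


def make_pairs(hr_dir: str, out_dir: str, scale: int = 4) -> None:
    out = Path(out_dir)
    (out / "HR").mkdir(parents=True, exist_ok=True)
    (out / "LR").mkdir(parents=True, exist_ok=True)
    for p in Path(hr_dir).glob("*.png"):
        hr = Image.open(p).convert("RGB")
        lr = hr.resize((hr.width // scale, hr.height // scale), Image.BICUBIC)
        hr.save(out / "HR" / p.name)
        lr.save(out / "LR" / p.name)


make_pairs("data/hr_images", "data/pairs")
```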
## 📝 Citation
If you find our method useful, please cite us as below, or leave a star on this repository.

```
@article{kim2025chain,
  title={Chain-of-Zoom: Extreme Super-Resolution via Scale Autoregression and Preference Alignment},
  author={Kim, Bryan Sangwoo and Kim, Jeongsol and Ye, Jong Chul},
  journal={arXiv preprint arXiv:2505.18600},
  year={2025}
}
```

## 🤗 Acknowledgements
We thank the authors of [OSEDiff](https://github.com/cswry/OSEDiff) for sharing their awesome work!

---

# Chain-of-Zoom Quick Start Guide

Chain-of-Zoom (CoZ) is a model-agnostic framework for extreme super-resolution via scale autoregression and preference alignment. It lets existing SR models explore resolutions far beyond their original training range without retraining, countering the blur and artifacts that appear at high magnification.

## Requirements

*   **OS**: Linux (Ubuntu 20.04+ recommended)
*   **Python**: 3.10
*   **GPU**:
    *   Recommended: two GPUs for the best inference speed.
    *   Minimum: a single 24 GB GPU (requires `--efficient_memory`, at a significant cost in inference time).
*   **Prerequisites**: Conda (for environment management), Git

## Installation

1.  **Clone the repository and enter the directory**
    ```bash
    git clone https://github.com/bryanswkim/Chain-of-Zoom.git
    cd Chain-of-Zoom
    ```

2.  **Create and activate the Conda environment**
    ```bash
    conda create -n coz python=3.10
    conda activate coz
    ```

3.  **Install the base dependencies**
    ```bash
    pip install -r requirements.txt
    ```
    > **Tip**: If downloads are slow, a PyPI mirror can help, e.g. `pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple`

4.  **(Optional) Install the extra training dependencies**
    Skip this step if you only need inference. To train the backbone model, run:
    ```bash
    pip install wandb opencv-python basicsr==1.4.2
    pip install --no-deps --extra-index-url https://download.pytorch.org/whl/cu121 xformers==0.0.28.post1
    ```

5.  **Download the pretrained models**
    Download the weights from the links below and place them in the project's `ckpt` or cache directories (one way to script this is shown after this list):
    *   **Stable Diffusion v3**: [Hugging Face](https://huggingface.co/stabilityai/stable-diffusion-3-medium)
    *   **Qwen2.5-VL-3B-Instruct**: [Hugging Face](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct)
    *   **RAM**: [Hugging Face](https://huggingface.co/spaces/xinyu1205/recognize-anything/blob/main/ram_swin_large_14m.pth)
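The downloads can be scripted with `huggingface_hub`. This is a sketch under the assumption that you manage checkpoints yourself; the `local_dir` target is illustrative, so match it to the paths the inference commands expect, and note that gated models may require accepting their license and running `huggingface-cli login` first.

```python
# Fetch the required checkpoints with huggingface_hub (illustrative layout).
from huggingface_hub import hf_hub_download, snapshot_download

# Diffusers-format SD3 and the Qwen VLM are whole model repos.
snapshot_download("stabilityai/stable-diffusion-3-medium-diffusers")
snapshot_download("Qwen/Qwen2.5-VL-3B-Instruct")

# RAM ships as a single .pth file hosted inside a Hugging Face Space.
hf_hub_download(
    repo_id="xinyu1205/recognize-anything",
    repo_type="space",
    filename="ram_swin_large_14m.pth",
    local_dir="ckpt/RAM",
)
```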
## Basic Usage

The command below runs high-magnification super-resolution of local regions on the sample images, automatically extracting multi-scale text prompts to guide generation.

```bash
python inference_coz.py \
  -i samples \
  -o inference_results/coz_vlmprompt \
  --rec_type recursive_multiscale \
  --prompt_type vlm \
  --lora_path ckpt/SR_LoRA/model_20001.pkl \
  --vae_path ckpt/SR_VAE/vae_encoder_20001.pt \
  --vlm_lora_path ckpt/VLM_LoRA/checkpoint-10000 \
  --pretrained_model_name_or_path 'stabilityai/stable-diffusion-3-medium-diffusers' \
  --ram_ft_path ckpt/DAPE/DAPE.pth \
  --ram_path ckpt/RAM/ram_swin_large_14m.pth \
  --save_prompts;
```

**Arguments:**
*   `-i`: input image folder.
*   `-o`: output folder.
*   `--efficient_memory`: append this flag if VRAM is limited (a single 24 GB card) to enable the memory-saving mode, at reduced speed.
*   Adjust the other path arguments to wherever you placed the downloaded model weights.

When the run finishes, the super-resolution results are saved under `inference_results/coz_vlmprompt`; a quick way to inspect them follows.
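A small helper for checking what a run produced. This is illustrative only, since the exact layout under the output directory depends on the options you passed:

```python
# List generated images (and saved prompts, if --save_prompts was used).
from pathlib import Path

out_dir = Path("inference_results/coz_vlmprompt")
images = sorted(out_dir.rglob("*.png"))
prompts = sorted(out_dir.rglob("*.txt"))

for f in images:
    print("image :", f.relative_to(out_dir))
for f in prompts:
    print("prompt:", f.relative_to(out_dir))
if not images:
    print("No outputs found -- check the checkpoint paths and GPU memory.")
```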
## Use Case

A forensic technology team is working a cold case and needs to enlarge a blurry CCTV frame to identify a suspect's facial features and clothing texture.

### Without Chain-of-Zoom
- **Severe distortion**: forcing an 8×-plus enlargement through a conventional SR model yields smeared facial detail and strange artifacts, leaving the features unrecognizable.
- **Costly retraining**: higher-fidelity results would require collecting data and training a new model for each specific scale factor, taking weeks and substantial compute.
- **No semantic guidance**: the model guesses details from pixels alone, with no notion of semantics such as "police badge" or "a specific brand's shoe tread", so the reconstructed details defy real-world logic.
- **VRAM limits**: high-resolution inference quickly exhausts a single GPU, forcing the team onto expensive multi-GPU clusters just to run tests.

### With Chain-of-Zoom
- **Extreme clarity**: the autoregressive zoom chain breaks the enlargement into controllable steps, producing a photo-realistic 16× result without retraining, with pores and fabric weave clearly visible.
- **Zero-shot deployment**: the existing backbone SR model is reused directly through the scale-decomposition framework, turning a blurry frame into high-resolution evidence in minutes with no extra training.
- **Semantic enhancement**: the integrated vision-language model (VLM) extracts multi-scale prompts such as "a numbered police jacket", steering generation toward details consistent with human expectations rather than arbitrary fill-in.
- **Efficient memory use**: with the memory-efficient mode, Chain-of-Zoom runs extreme SR tasks on a single consumer GPU with 24 GB VRAM, lowering the hardware bar considerably.

By turning super-resolution into a semantically guided autoregressive zoom chain, Chain-of-Zoom lets an ordinary backbone recover extreme detail at no additional training cost.

## Project Info

*   **Repository**: bryanswkim/Chain-of-Zoom · ⭐ 766 · 🍴 76 · MIT license
*   **Owner**: Bryan Sangwoo Kim ([@bryanswkim](https://github.com/bryanswkim)), KAIST AI, Seoul, South Korea · [homepage](https://bryanswkim.github.io/)
*   **Languages**: Python 98.6%, Shell 1.4%
*   **Python**: 3.10
*   **GPU**: NVIDIA GPU required. Two GPUs recommended; a single card needs 24 GB VRAM with `--efficient_memory` (significantly slower inference). The install commands imply CUDA 12.1 (cu121).
*   **Key dependencies**: xformers==0.0.28.post1 (torch via the cu121 index), basicsr==1.4.2, opencv-python, wandb; transformers and diffusers are implied by SD3/Qwen2.5-VL.
*   **Notes**: built on Stable Diffusion v3, Qwen2.5-VL-3B, and RAM; the checkpoints must be downloaded manually. Training the backbone requires the extra OSEDiff dependencies.

## FAQ

**Q: Are there plans to release the training code?** \
A: Yes. The authors plan to release the training scripts and related details after some revision and cleanup. ([source](https://github.com/bryanswkim/Chain-of-Zoom/issues/1))

**Q: Is the VLM the base Qwen2.5-VL-3B-Instruct or a fine-tuned version?** \
A: A fine-tuned VLM has not been released yet, but in most cases the base Qwen2.5-VL-3B-Instruct model already performs very well. ([source](https://github.com/bryanswkim/Chain-of-Zoom/issues/5))

**Q: Will the project add an open-source license (e.g., MIT)?** \
A: The authors confirmed they would adopt the suggestion and add an open-source license. ([source](https://github.com/bryanswkim/Chain-of-Zoom/issues/2))