[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-Zhendong-Wang--Diffusion-GAN":3,"tool-Zhendong-Wang--Diffusion-GAN":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",150037,2,"2026-04-10T23:33:47",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108322,"2026-04-10T11:39:34",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 
人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[52,13,15,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":32,"last_commit_at":59,"category_tags":60,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":76,"owner_email":76,"owner_twitter":76,"owner_website":76,"owner_url":77,"languages":78,"stars":99,"forks":100,"last_commit_at":101,"license":102,"difficulty_score":10,"env_os":103,"env_gpu":104,"env_ram":105,"env_deps":106,"category_tags":117,"github_topics":76,"view_count":118,"oss_zip_url":76,"oss_zip_packed_at":76,"status":17,"created_at":119,"updated_at":120,"faqs":121,"releases":150},156,"Zhendong-Wang\u002FDiffusion-GAN","Diffusion-GAN","Official PyTorch implementation for paper: Diffusion-GAN: Training GANs with Diffusion","Diffusion-GAN 是一种结合扩散模型思想改进传统生成对抗网络（GAN）训练的新方法。它通过在判别器输入中注入由多步扩散过程构造的高斯混合噪声，让 GAN 训练更稳定、更高效。传统 GAN 虽然理论上可通过添加噪声提升稳定性，但在实践中效果有限；Diffusion-GAN 利用扩散链动态调整噪声强度，在不同训练阶段自适应控制噪声比例，从而有效缓解模式崩溃和训练震荡问题。\n\n这项技术最大的亮点在于“可微分的数据增强”——无需修改模型结构，即可作为插件式模块接入现有 GAN 框架，同时支持 timestep 相关的判别器设计，灵活适配多种架构与数据领域。实验表明，它在 CelebA、LSUN、AFHQ、FFHQ 等多个图像生成任务上均超越主流 GAN 基线，尤其擅长以较少数据生成逼真图像。\n\nDiffusion-GAN 主要面向 AI 研究人员与深度学习开发者，适合希望提升 GAN 训练稳定性、追求更高生成质量或探索扩散与对抗训练融合方向的用户。普通用户或设计师如需使用，建议等待封装好的应用或预训练模型。项目已开源 PyTorch 实现并提供预训练权重，便于快速复现与二次开发。","## Diffusion-GAN &mdash; Official PyTorch implementation\n\n![Illustration](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FZhendong-Wang_Diffusion-GAN_readme_04b15285b557.png)\n\n**Diffusion-GAN: Training GANs with Diffusion**\u003Cbr>\nZhendong Wang, Huangjie Zheng, Pengcheng He, Weizhu Chen and Mingyuan Zhou \u003Cbr>\nhttps:\u002F\u002Farxiv.org\u002Fabs\u002F2206.02262 \u003Cbr>\n\nAbstract: *For stable training of generative adversarial networks (GANs), injecting instance\nnoise into the input of the discriminator is considered as a theoretically sound\nsolution, which, however, has not yet delivered on its promise in practice. This\npaper introduces Diffusion-GAN that employs a Gaussian mixture distribution,\ndefined over all the diffusion steps of a forward diffusion chain, to inject instance\nnoise. A random sample from the mixture, which is diffused from an observed\nor generated data, is fed as the input to the discriminator. The generator is\nupdated by backpropagating its gradient through the forward diffusion chain,\nwhose length is adaptively adjusted to control the maximum noise-to-data ratio\nallowed at each training step. 
Theoretical analysis verifies the soundness of the\nproposed Diffusion-GAN, which provides model- and domain-agnostic differentiable\naugmentation. A rich set of experiments on diverse datasets show that Diffusion-GAN can\nprovide stable and data-efficient GAN training, bringing consistent\nperformance improvement over strong GAN baselines for synthesizing photorealistic images.*\n\n[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fdiffusion-gan-training-gans-with-diffusion\u002Fimage-generation-on-celeba-64x64)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Fimage-generation-on-celeba-64x64?p=diffusion-gan-training-gans-with-diffusion)\n[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fdiffusion-gan-training-gans-with-diffusion\u002Fimage-generation-on-stl-10)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Fimage-generation-on-stl-10?p=diffusion-gan-training-gans-with-diffusion)\n[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fdiffusion-gan-training-gans-with-diffusion\u002Fimage-generation-on-lsun-bedroom-256-x-256)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Fimage-generation-on-lsun-bedroom-256-x-256?p=diffusion-gan-training-gans-with-diffusion)\n[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fdiffusion-gan-training-gans-with-diffusion\u002Fimage-generation-on-afhq-wild)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Fimage-generation-on-afhq-wild?p=diffusion-gan-training-gans-with-diffusion)\n[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fdiffusion-gan-training-gans-with-diffusion\u002Fimage-generation-on-afhq-cat)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Fimage-generation-on-afhq-cat?p=diffusion-gan-training-gans-with-diffusion)\n[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fdiffusion-gan-training-gans-with-diffusion\u002Fimage-generation-on-afhq-dog)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Fimage-generation-on-afhq-dog?p=diffusion-gan-training-gans-with-diffusion)\n[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fdiffusion-gan-training-gans-with-diffusion\u002Fimage-generation-on-lsun-churches-256-x-256)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Fimage-generation-on-lsun-churches-256-x-256?p=diffusion-gan-training-gans-with-diffusion)\n[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fdiffusion-gan-training-gans-with-diffusion\u002Fimage-generation-on-ffhq-1024-x-1024)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Fimage-generation-on-ffhq-1024-x-1024?p=diffusion-gan-training-gans-with-diffusion)\n\n## ToDos\n- [x] Initial code release\n- [x] Providing pretrained models\n\n## Build your Diffusion-GAN\nHere, we explain how to train general GANs with diffusion. We provide two ways: \na. plug-in as simple as a data augmentation method; \nb. training GANs on diffusion chains with a timestep-dependent discriminator. \nSo far, we have not found significant empirical differences between the two approaches, \nwhile the second has stronger theoretical guarantees. We suspect that with a more advanced timestep-dependent structure in the discriminator, the second approach could become better; we leave that for future study. 
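\n\nBoth recipes rely on the same forward-diffusion noise injection. As a minimal sketch of that mechanism (illustrative only: it assumes a standard DDPM-style linear beta schedule, and the names below are hypothetical stand-ins for the repository's ```diffusion.py```), sampling the mixture noise looks like this:\n\n```python\nimport torch\n\n# Sketch of forward diffusion: q(y | x, t) = N(sqrt(a_bar_t) * x, (1 - a_bar_t) * I).\ndef make_alpha_bars(T_max=500, beta_min=1e-4, beta_max=2e-2):\n    betas = torch.linspace(beta_min, beta_max, T_max)  # assumed linear schedule\n    return torch.cumprod(1.0 - betas, dim=0)\n\ndef diffuse(x, T, alpha_bars):\n    # Pick a timestep per sample (uniform pi(t) for simplicity), then add matching noise.\n    t = torch.randint(0, T, (x.shape[0],), device=x.device)\n    a = alpha_bars.to(x.device)[t].view(-1, 1, 1, 1)\n    y = a.sqrt() * x + (1.0 - a).sqrt() * torch.randn_like(x)  # differentiable in x\n    return y, t\n```\n\nBecause `y` is a differentiable function of `x`, generator gradients can flow through the diffusion step, which is what makes it usable as a plug-in augmentation.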
\n\n### Simple Plug-in\n* Design a proper diffusion process based on the ```diffusion.py``` file ([example](https:\u002F\u002Fgithub.com\u002FZhendong-Wang\u002FDiffusion-GAN\u002Fblob\u002Fmain\u002Fdiffusion-stylegan2\u002Ftraining\u002Fdiffusion.py))\n* Apply diffusion to the discriminator inputs, \n```logits = Discriminator(Diffusion(gen\u002Freal_images))```\n* Make the diffusion adaptive inside your training iterations\n``` \nif update_diffusion:  # batch_idx % ada_interval == 0\n    r_d = np.mean(np.sign(Discriminator(real_images)))  # mean sign of D outputs on real images\n    adjust = np.sign(r_d - ada_target) * C  # C = (batch_size * ada_interval) \u002F (ada_kimg * 1000)\n    diffusion.p = (diffusion.p + adjust).clip(min=0., max=1.)\n    diffusion.update_T()\n```\n\n### Full Version\n* Add the diffusion timestep `t` as an input to the discriminator: `logits = Discriminator(images, t)`. \nYou may need to modify your discriminator architecture accordingly. \n* The other steps are the same as in Simple Plug-in. Note that since the discriminator depends on the timestep, \nyou need to collect `t`.\n```\ndiffused_images, t = Diffusion(images)\nlogits = Discriminator(diffused_images, t)\n```
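\n\nPutting the pieces together, one adaptation step could look like the following sketch (hedged pseudocode made runnable: `diffusion`, `D`, and the hyperparameter defaults are stand-ins for your own modules, not the repository's exact API):\n\n```python\nimport numpy as np\nimport torch\n\ndef adapt_diffusion(diffusion, D, real_images, ada_target=0.6,\n                    batch_size=64, ada_interval=4, ada_kimg=100):\n    # Adjust the noise level p from how confidently D classifies real images.\n    C = (batch_size * ada_interval) \u002F (ada_kimg * 1000)\n    with torch.no_grad():\n        diffused, t = diffusion(real_images)             # timestep-dependent variant\n        r_d = torch.sign(D(diffused, t)).mean().item()   # mean sign of D on reals\n    diffusion.p = float(np.clip(diffusion.p + np.sign(r_d - ada_target) * C, 0.0, 1.0))\n    diffusion.update_T()  # re-derive the maximum timestep T from the new p\n```\n\nCalled every `ada_interval` batches, this keeps the noise-to-data ratio in a range where the discriminator stays informative without overfitting.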
\n\n## Train our Diffusion-GAN\n\n### Requirements\nIn each folder, we provide an `environment.yml` for preparing the running environment. Note that to fit the newest PyTorch versions, I made some changes in the `torch_utils` folder; performance may show small perturbations across PyTorch versions. The provided checkpoints were trained in an old environment inherited from [StyleGAN2-ADA](https:\u002F\u002Fgithub.com\u002FNVlabs\u002Fstylegan2-ada-pytorch#requirements), which is detailed as follows: \n* 64-bit Python 3.7 and PyTorch 1.7.1\u002F1.8.1. See [https:\u002F\u002Fpytorch.org\u002F](https:\u002F\u002Fpytorch.org\u002F) for PyTorch install instructions.\n* CUDA toolkit 11.0 or later. \n* Python libraries: `pip install click requests tqdm pyspng ninja imageio-ffmpeg==0.4.3`.\n\n### Data Preparation\n\nIn our paper, we trained our model on [CIFAR-10 (32 x 32)](https:\u002F\u002Fwww.cs.toronto.edu\u002F~kriz\u002Fcifar.html), [STL-10 (64 x 64)](https:\u002F\u002Fcs.stanford.edu\u002F~acoates\u002Fstl10\u002F),\n[LSUN (256 x 256)](https:\u002F\u002Fgithub.com\u002Ffyu\u002Flsun), [AFHQ (512 x 512)](https:\u002F\u002Fgithub.com\u002Fclovaai\u002Fstargan-v2) and [FFHQ (1024 x 1024)](https:\u002F\u002Fgithub.com\u002FNVlabs\u002Fffhq-dataset).\nYou can download the datasets we used in our paper from their respective websites. \nTo prepare a dataset at the respective resolution, run, for example:\n```.bash\npython dataset_tool.py --source=~\u002Fdownloads\u002Flsun\u002Fraw\u002Fbedroom_lmdb --dest=~\u002Fdatasets\u002Flsun_bedroom200k.zip \\\n    --transform=center-crop --width=256 --height=256 --max_images=200000\n\npython dataset_tool.py --source=~\u002Fdownloads\u002Flsun\u002Fraw\u002Fchurch_lmdb --dest=~\u002Fdatasets\u002Flsun_church200k.zip \\\n    --transform=center-crop-wide --width=256 --height=256 --max_images=200000\n```\n\n### Training\n\nWe show the training commands that we used below. In most cases, the commands are similar across datasets, so we use the CIFAR-10 dataset\nas an example: \n\nFor Diffusion-GAN,\n```.bash\npython train.py --outdir=training-runs --data=\"~\u002Fcifar10.zip\" --gpus=4 --cfg cifar --kimg 50000 --aug no --target 0.6 --noise_sd 0.05 --ts_dist priority\n```\nFor Diffusion-ProjectedGAN,\n```.bash\npython train.py --outdir=training-runs --data=\"~\u002Fcifar10.zip\" --gpus=4 --batch 64 --batch-gpu=16 --cfg fastgan --kimg 50000 --target 0.45 --d_pos first --noise_sd 0.5\n```\nFor Diffusion-InsGen,\n```.bash\npython train.py --outdir=training-runs --data=\"~\u002Fafhq-wild.zip\" --gpus=8 --cfg paper512 --kimg 25000\n```\n\nWe follow the `config` settings from [StyleGAN2-ADA](https:\u002F\u002Fgithub.com\u002FNVlabs\u002Fstylegan2-ada-pytorch) \nand refer to that repository for more details. The other major hyperparameters are listed and discussed below:\n* `--target` the discriminator target, which balances the level of diffusion intensity.\n* `--aug` domain-specific image augmentation, such as ADA and Differentiable Augmentation, which is used to evaluate complementarity with diffusion. \n* `--noise_sd` the diffusion noise standard deviation, which is set to 0.05 in our case.\n* `--ts_dist` the `t` sampling distribution, $\\pi(t)$ in the paper. \n\nWe evaluated two `t` sampling distributions, `['priority', 'uniform']`,\nwhere `'priority'` denotes Equation (11) in the paper and `'uniform'` denotes uniform random sampling. In most cases, `'priority'` works slightly better, while in some cases, such as FFHQ,\n`'uniform'` is better. 
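\n\nFor illustration only (Equation (11)'s exact form is defined in the paper and implemented by the repository), the two choices can be sketched as follows: `'uniform'` draws `t` uniformly, while `'priority'` puts more probability mass on larger, noisier timesteps, approximated below by weighting each `t` proportionally to `t + 1`:\n\n```python\nimport numpy as np\n\ndef sample_t(T, batch, ts_dist='priority'):\n    # Hypothetical stand-in for the internal pi(t) sampler behind --ts_dist.\n    if ts_dist == 'uniform':\n        return np.random.randint(0, T, size=batch)\n    w = np.arange(1, T + 1, dtype=np.float64)  # 'priority': weight t by t + 1\n    return np.random.choice(T, size=batch, p=w \u002F w.sum())\n```\n\n## Sampling and Evaluation with our checkpoints\nWe provide our Diffusion-GAN checkpoints on our [Hugging Face](https:\u002F\u002Fhuggingface.co\u002Fzhendongw\u002Fdiffusion-gan\u002Ftree\u002Fmain\u002Fcheckpoints) page. We copy and paste the links below for quick access. 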
\n\n|            Model            |   Dataset    | Resolution |  FID  |                                                        Checkpoint                                                         |\n|:---------------------------:|:------------:|:----------:|:-----:|:-------------------------------------------------------------------------------------------------------------------------:|\n|     Diffusion-StyleGAN2     |   CIFAR-10   |   32x32    | 3.19  |     [download](https:\u002F\u002Fhuggingface.co\u002Fzhendongw\u002Fdiffusion-gan\u002Fresolve\u002Fmain\u002Fcheckpoints\u002Fdiffusion-stylegan2-cifar10.pkl)     |\n|     Diffusion-StyleGAN2     |    CelebA    |   64x64    | 1.69  |    [download](https:\u002F\u002Fhuggingface.co\u002Fzhendongw\u002Fdiffusion-gan\u002Fresolve\u002Fmain\u002Fcheckpoints\u002Fdiffusion-stylegan2-celeba64.pkl)     |\n|     Diffusion-StyleGAN2     |    STL-10    |   64x64    | 11.53 |      [download](https:\u002F\u002Fhuggingface.co\u002Fzhendongw\u002Fdiffusion-gan\u002Fresolve\u002Fmain\u002Fcheckpoints\u002Fdiffusion-stylegan2-stl10.pkl)      |\n|     Diffusion-StyleGAN2     | LSUN-Bedroom |  256x256   | 3.65  |  [download](https:\u002F\u002Fhuggingface.co\u002Fzhendongw\u002Fdiffusion-gan\u002Fresolve\u002Fmain\u002Fcheckpoints\u002Fdiffusion-stylegan2-lsun-bedroom.pkl)   |\n|     Diffusion-StyleGAN2     | LSUN-Church  |  256x256   | 3.17  |   [download](https:\u002F\u002Fhuggingface.co\u002Fzhendongw\u002Fdiffusion-gan\u002Fresolve\u002Fmain\u002Fcheckpoints\u002Fdiffusion-stylegan2-lsun-church.pkl)   |\n|     Diffusion-StyleGAN2     |     FFHQ     | 1024x1024  | 2.83  |      [download](https:\u002F\u002Fhuggingface.co\u002Fzhendongw\u002Fdiffusion-gan\u002Fresolve\u002Fmain\u002Fcheckpoints\u002Fdiffusion-stylegan2-ffhq.pkl)       |\n|   Diffusion-ProjectedGAN    |   CIFAR-10   |   32x32    | 2.54  |   [download](https:\u002F\u002Fhuggingface.co\u002Fzhendongw\u002Fdiffusion-gan\u002Fresolve\u002Fmain\u002Fcheckpoints\u002Fdiffusion-projectedgan-cifar10.pkl)    |\n|   Diffusion-ProjectedGAN    |    STL-10    |   64x64    | 6.91  |    [download](https:\u002F\u002Fhuggingface.co\u002Fzhendongw\u002Fdiffusion-gan\u002Fresolve\u002Fmain\u002Fcheckpoints\u002Fdiffusion-projectedgan-stl10.pkl)     |\n|   Diffusion-ProjectedGAN    | LSUN-Bedroom |  256x256   | 1.43  | [download](https:\u002F\u002Fhuggingface.co\u002Fzhendongw\u002Fdiffusion-gan\u002Fresolve\u002Fmain\u002Fcheckpoints\u002Fdiffusion-projectedgan-lsun-bedroom.pkl) |\n|   Diffusion-ProjectedGAN    | LSUN-Church  |  256x256   | 1.85  | [download](https:\u002F\u002Fhuggingface.co\u002Fzhendongw\u002Fdiffusion-gan\u002Fresolve\u002Fmain\u002Fcheckpoints\u002Fdiffusion-projectedgan-lsun-church.pkl)  |\n|      Diffusion-InsGen       |   AFHQ-Cat   |  512x512   | 2.40  |      [download](https:\u002F\u002Fhuggingface.co\u002Fzhendongw\u002Fdiffusion-gan\u002Fresolve\u002Fmain\u002Fcheckpoints\u002Fdiffusion-insgen-afhqcat.pkl)       |\n|     Diffusion-InsGen        |   AFHQ-Dog   |  512x512   | 4.83  |      [download](https:\u002F\u002Fhuggingface.co\u002Fzhendongw\u002Fdiffusion-gan\u002Fresolve\u002Fmain\u002Fcheckpoints\u002Fdiffusion-insgen-afhqdog.pkl)       |\n|      Diffusion-InsGen       |  AFHQ-Wild   |  512x512   | 1.51  |      [download](https:\u002F\u002Fhuggingface.co\u002Fzhendongw\u002Fdiffusion-gan\u002Fresolve\u002Fmain\u002Fcheckpoints\u002Fdiffusion-insgen-afhqwild.pkl)      |\n\n\nTo generate samples, run the following commands:\n\n```.bash\n# Generate FFHQ with pretrained 
Diffusion-StyleGAN2\npython generate.py --outdir=out --seeds=1-100 \\\n    --network=https:\u002F\u002Ftsciencescu.blob.core.windows.net\u002Fprojectshzheng\u002FDiffusionGAN\u002Fdiffusion-stylegan2-ffhq.pkl\n\n# Generate LSUN-Church with pretrained Diffusion-ProjectedGAN\npython gen_images.py --outdir=out --seeds=1-100 \\\n    --network=https:\u002F\u002Ftsciencescu.blob.core.windows.net\u002Fprojectshzheng\u002FDiffusionGAN\u002Fdiffusion-projectedgan-lsun-church.pkl\n```\n\nThe checkpoints can be replaced with any pre-trained Diffusion-GAN checkpoint path downloaded from the table above.\n\n\nSimilarly, the metrics can be calculated with the following commands:\n\n```.bash\n# Pre-trained network pickle: specify dataset explicitly, print result to stdout.\npython calc_metrics.py --metrics=fid50k_full --data=~\u002Fdatasets\u002Fffhq.zip --mirror=1 \\\n    --network=https:\u002F\u002Ftsciencescu.blob.core.windows.net\u002Fprojectshzheng\u002FDiffusionGAN\u002Fdiffusion-stylegan2-ffhq.pkl\n```\n\n## Citation\n\n```\n@article{wang2022diffusiongan,\n  title     = {Diffusion-GAN: Training GANs with Diffusion},\n  author    = {Wang, Zhendong and Zheng, Huangjie and He, Pengcheng and Chen, Weizhu and Zhou, Mingyuan},\n  journal   = {arXiv preprint arXiv:2206.02262},\n  year      = {2022},\n  url       = {https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.02262}\n}\n```\n\n## Acknowledgements\n\nOur code builds upon the awesome [StyleGAN2-ADA repo](https:\u002F\u002Fgithub.com\u002FNVlabs\u002Fstylegan2-ada-pytorch), [InsGen repo](https:\u002F\u002Fgithub.com\u002Fgenforce\u002Finsgen) and [ProjectedGAN repo](https:\u002F\u002Fgithub.com\u002Fautonomousvision\u002Fprojected_gan), respectively by Karras et al, Ceyuan Yang et al and Axel Sauer et al.\n","## Diffusion-GAN —— 官方 PyTorch 实现\n\n![示意图](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FZhendong-Wang_Diffusion-GAN_readme_04b15285b557.png)\n\n**Diffusion-GAN：使用扩散过程训练生成对抗网络（GANs）**\u003Cbr>\nZhendong Wang, Huangjie Zheng, Pengcheng He, Weizhu Chen 和 Mingyuan Zhou \u003Cbr>\nhttps:\u002F\u002Farxiv.org\u002Fabs\u002F2206.02262 \u003Cbr>\n\n摘要：*为了稳定训练生成对抗网络（Generative Adversarial Networks, GANs），向判别器输入中注入实例噪声（instance noise）被认为是一种理论上合理的解决方案，然而在实践中尚未充分兑现其潜力。本文提出 Diffusion-GAN，它采用一个定义在前向扩散链（forward diffusion chain）所有扩散步骤上的高斯混合分布（Gaussian mixture distribution）来注入实例噪声。从该混合分布中随机采样的样本（由真实或生成数据经扩散得到）将作为判别器的输入。生成器通过前向扩散链反向传播梯度进行更新，而扩散链长度会自适应调整，以控制每个训练步骤允许的最大噪声-数据比。理论分析验证了所提出的 Diffusion-GAN 的合理性，它提供了一种与模型和领域无关的可微增强方法。在多个数据集上丰富的实验表明，Diffusion-GAN 能够实现稳定且数据高效的 GAN 训练，在合成逼真图像任务上持续优于强大的 GAN 
基线模型。*\n\n[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fdiffusion-gan-training-gans-with-diffusion\u002Fimage-generation-on-celeba-64x64)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Fimage-generation-on-celeba-64x64?p=diffusion-gan-training-gans-with-diffusion)\n[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fdiffusion-gan-training-gans-with-diffusion\u002Fimage-generation-on-stl-10)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Fimage-generation-on-stl-10?p=diffusion-gan-training-gans-with-diffusion)\n[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fdiffusion-gan-training-gans-with-diffusion\u002Fimage-generation-on-lsun-bedroom-256-x-256)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Fimage-generation-on-lsun-bedroom-256-x-256?p=diffusion-gan-training-gans-with-diffusion)\n[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fdiffusion-gan-training-gans-with-diffusion\u002Fimage-generation-on-afhq-wild)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Fimage-generation-on-afhq-wild?p=diffusion-gan-training-gans-with-diffusion)\n[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fdiffusion-gan-training-gans-with-diffusion\u002Fimage-generation-on-afhq-cat)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Fimage-generation-on-afhq-cat?p=diffusion-gan-training-gans-with-diffusion)\n[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fdiffusion-gan-training-gans-with-diffusion\u002Fimage-generation-on-afhq-dog)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Fimage-generation-on-afhq-dog?p=diffusion-gan-training-gans-with-diffusion)\n[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fdiffusion-gan-training-gans-with-diffusion\u002Fimage-generation-on-lsun-churches-256-x-256)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Fimage-generation-on-lsun-churches-256-x-256?p=diffusion-gan-training-gans-with-diffusion)\n[![PWC](https:\u002F\u002Fimg.shields.io\u002Fendpoint.svg?url=https:\u002F\u002Fpaperswithcode.com\u002Fbadge\u002Fdiffusion-gan-training-gans-with-diffusion\u002Fimage-generation-on-ffhq-1024-x-1024)](https:\u002F\u002Fpaperswithcode.com\u002Fsota\u002Fimage-generation-on-ffhq-1024-x-1024?p=diffusion-gan-training-gans-with-diffusion)\n\n## 待办事项\n- [x] 初始代码发布\n- [x] 提供预训练模型\n\n## 构建你的 Diffusion-GAN\n\n在此，我们解释如何使用扩散过程训练通用 GAN。我们提供两种方式：\na. 作为数据增强方法简单插件式使用；\nb. 
在扩散链上训练 GAN，并使用依赖时间步（timestep-dependent）的判别器。\n目前，我们尚未发现这两种方法在经验上有显著差异，但第二种方法具有更强的理论保证。我们推测，当判别器中应用更先进的时间步相关结构时，第二种方法可能表现更优，这留待未来研究。\n\n### 简单插件式用法\n* 根据 `diffusion.py` 文件设计合适的扩散过程（[示例](https:\u002F\u002Fgithub.com\u002FZhendong-Wang\u002FDiffusion-GAN\u002Fblob\u002Fmain\u002Fdiffusion-stylegan2\u002Ftraining\u002Fdiffusion.py)）\n* 对判别器输入应用扩散：\n```logits = Discriminator(Diffusion(gen\u002Freal_images))```\n* 在训练迭代中加入扩散过程的自适应机制：\n``` \nif update_diffusion:  # batch_idx % ada_interval == 0\n    r_d = np.mean(np.sign(Discriminator(real_images)))  # 判别器在真实图像上输出符号的均值\n    adjust = np.sign(r_d - ada_target) * C  # C = (batch_size * ada_interval) \u002F (ada_kimg * 1000)\n    diffusion.p = (diffusion.p + adjust).clip(min=0., max=1.)\n    diffusion.update_T()\n```\n\n### 完整版本\n* 将扩散时间步 `t` 作为判别器的输入：`logits = Discriminator(images, t)`。\n你可能需要对判别器架构进行一些修改。\n* 其余步骤与“简单插件式”相同。注意由于判别器依赖于时间步，你需要收集 `t`。\n```\ndiffused_images, t = Diffusion(images)\nlogits = Discriminator(diffused_images, t)\n```\n\n## 训练我们的 Diffusion-GAN\n\n### 环境要求\n每个文件夹中我们都提供了 `environment.yml` 用于配置运行环境。注意，为适配最新版 PyTorch，我对 `torch_utils` 文件夹做了一些修改。由于 PyTorch 版本不同，性能可能会有轻微波动。所提供的检查点是在继承自 [StyleGAN2-ADA](https:\u002F\u002Fgithub.com\u002FNVlabs\u002Fstylegan2-ada-pytorch#requirements) 的旧环境中训练的，旧环境详情如下：\n* 64 位 Python 3.7 和 PyTorch 1.7.1\u002F1.8.1。PyTorch 安装说明请参见 [https:\u002F\u002Fpytorch.org\u002F](https:\u002F\u002Fpytorch.org\u002F)。\n* CUDA 工具包 11.0 或更高版本。\n* Python 库：`pip install click requests tqdm pyspng ninja imageio-ffmpeg==0.4.3`。\n\n### 数据准备\n\n在论文中，我们在以下数据集上训练了模型：[CIFAR-10 (32 x 32)](https:\u002F\u002Fwww.cs.toronto.edu\u002F~kriz\u002Fcifar.html)、[STL-10 (64 x 64)](https:\u002F\u002Fcs.stanford.edu\u002F~acoates\u002Fstl10\u002F)、\n[LSUN (256 x 256)](https:\u002F\u002Fgithub.com\u002Ffyu\u002Flsun)、[AFHQ (512 x 512)](https:\u002F\u002Fgithub.com\u002Fclovaai\u002Fstargan-v2) 和 [FFHQ (1024 x 1024)](https:\u002F\u002Fgithub.com\u002FNVlabs\u002Fffhq-dataset)。\n你可以从各自网站下载我们在论文中使用的数据集。要准备指定分辨率的数据集，例如运行：\n```.bash\npython dataset_tool.py --source=~\u002Fdownloads\u002Flsun\u002Fraw\u002Fbedroom_lmdb --dest=~\u002Fdatasets\u002Flsun_bedroom200k.zip \\\n    --transform=center-crop --width=256 --height=256 --max_images=200000\n\npython dataset_tool.py --source=~\u002Fdownloads\u002Flsun\u002Fraw\u002Fchurch_lmdb --dest=~\u002Fdatasets\u002Flsun_church200k.zip \\\n    --transform=center-crop-wide --width=256 --height=256 --max_images=200000\n```\n\n### 训练\n\n我们在下方展示了所使用的训练命令。在大多数情况下，训练命令类似，因此以下以 CIFAR-10 数据集为例：\n\n对于 Diffusion-GAN：\n```.bash\npython train.py --outdir=training-runs --data=\"~\u002Fcifar10.zip\" --gpus=4 --cfg cifar --kimg 50000 --aug no --target 0.6 --noise_sd 0.05 --ts_dist priority\n```\n\n对于 Diffusion-ProjectedGAN：\n```.bash\npython train.py --outdir=training-runs --data=\"~\u002Fcifar10.zip\" --gpus=4 --batch 64 --batch-gpu=16 --cfg fastgan --kimg 50000 --target 0.45 --d_pos first --noise_sd 0.5\n```\n\n对于 Diffusion-InsGen：\n```.bash\npython train.py --outdir=training-runs --data=\"~\u002Fafhq-wild.zip\" --gpus=8 --cfg paper512 --kimg 25000\n```\n\n我们遵循 [StyleGAN2-ADA](https:\u002F\u002Fgithub.com\u002FNVlabs\u002Fstylegan2-ada-pytorch) 的 `config` 设置，更多细节请参考其项目。其他主要超参数列举与说明如下：\n* `--target`：判别器目标值，用于平衡扩散强度。\n* `--aug`：领域特定的图像增强（如 ADA 和可微分增强），用于评估与扩散的互补性。\n* `--noise_sd`：扩散噪声标准差，在我们的实验中设为 0.05。\n* `--ts_dist`：时间步采样分布 $\\pi(t)$（论文中的符号）。\n\n我们评估了两种 `t` 采样分布：`['priority', 'uniform']`，\n其中 `'priority'` 表示论文中的公式 (11)，而 `'uniform'` 表示均匀随机采样。在多数情况下，`'priority'` 略优；但在某些数据集（如 FFHQ）上，`'uniform'` 表现更好。\n\n## 使用我们的检查点进行采样与评估\n\n我们在 [Hugging 
Face](https:\u002F\u002Fhuggingface.co\u002Fzhendongw\u002Fdiffusion-gan\u002Ftree\u002Fmain\u002Fcheckpoints) 页面提供了 Diffusion-GAN 的预训练检查点，下方为快速访问链接：\n\n|            模型             |    数据集     | 分辨率   |  FID  |                                                        检查点                                                         |\n|:---------------------------:|:-------------:|:--------:|:-----:|:-------------------------------------------------------------------------------------------------------------------------:|\n|     Diffusion-StyleGAN2     |   CIFAR-10    |  32x32   | 3.19  |     [下载](https:\u002F\u002Fhuggingface.co\u002Fzhendongw\u002Fdiffusion-gan\u002Fresolve\u002Fmain\u002Fcheckpoints\u002Fdiffusion-stylegan2-cifar10.pkl)     |\n|     Diffusion-StyleGAN2     |    CelebA     |  64x64   | 1.69  |    [下载](https:\u002F\u002Fhuggingface.co\u002Fzhendongw\u002Fdiffusion-gan\u002Fresolve\u002Fmain\u002Fcheckpoints\u002Fdiffusion-stylegan2-celeba64.pkl)     |\n|     Diffusion-StyleGAN2     |    STL-10     |  64x64   | 11.53 |      [下载](https:\u002F\u002Fhuggingface.co\u002Fzhendongw\u002Fdiffusion-gan\u002Fresolve\u002Fmain\u002Fcheckpoints\u002Fdiffusion-stylegan2-stl10.pkl)      |\n|     Diffusion-StyleGAN2     | LSUN-Bedroom  | 256x256  | 3.65  |  [下载](https:\u002F\u002Fhuggingface.co\u002Fzhendongw\u002Fdiffusion-gan\u002Fresolve\u002Fmain\u002Fcheckpoints\u002Fdiffusion-stylegan2-lsun-bedroom.pkl)   |\n|     Diffusion-StyleGAN2     | LSUN-Church   | 256x256  | 3.17  |   [下载](https:\u002F\u002Fhuggingface.co\u002Fzhendongw\u002Fdiffusion-gan\u002Fresolve\u002Fmain\u002Fcheckpoints\u002Fdiffusion-stylegan2-lsun-church.pkl)   |\n|     Diffusion-StyleGAN2     |     FFHQ      |1024x1024 | 2.83  |      [下载](https:\u002F\u002Fhuggingface.co\u002Fzhendongw\u002Fdiffusion-gan\u002Fresolve\u002Fmain\u002Fcheckpoints\u002Fdiffusion-stylegan2-ffhq.pkl)       |\n|   Diffusion-ProjectedGAN    |   CIFAR-10    |  32x32   | 2.54  |   [下载](https:\u002F\u002Fhuggingface.co\u002Fzhendongw\u002Fdiffusion-gan\u002Fresolve\u002Fmain\u002Fcheckpoints\u002Fdiffusion-projectedgan-cifar10.pkl)    |\n|   Diffusion-ProjectedGAN    |    STL-10     |  64x64   | 6.91  |    [下载](https:\u002F\u002Fhuggingface.co\u002Fzhendongw\u002Fdiffusion-gan\u002Fresolve\u002Fmain\u002Fcheckpoints\u002Fdiffusion-projectedgan-stl10.pkl)     |\n|   Diffusion-ProjectedGAN    | LSUN-Bedroom  | 256x256  | 1.43  | [下载](https:\u002F\u002Fhuggingface.co\u002Fzhendongw\u002Fdiffusion-gan\u002Fresolve\u002Fmain\u002Fcheckpoints\u002Fdiffusion-projectedgan-lsun-bedroom.pkl) |\n|   Diffusion-ProjectedGAN    | LSUN-Church   | 256x256  | 1.85  | [下载](https:\u002F\u002Fhuggingface.co\u002Fzhendongw\u002Fdiffusion-gan\u002Fresolve\u002Fmain\u002Fcheckpoints\u002Fdiffusion-projectedgan-lsun-church.pkl)  |\n|      Diffusion-InsGen       |   AFHQ-Cat    | 512x512  | 2.40  |      [下载](https:\u002F\u002Fhuggingface.co\u002Fzhendongw\u002Fdiffusion-gan\u002Fresolve\u002Fmain\u002Fcheckpoints\u002Fdiffusion-insgen-afhqcat.pkl)       |\n|     Diffusion-InsGen        |   AFHQ-Dog    | 512x512  | 4.83  |      [下载](https:\u002F\u002Fhuggingface.co\u002Fzhendongw\u002Fdiffusion-gan\u002Fresolve\u002Fmain\u002Fcheckpoints\u002Fdiffusion-insgen-afhqdog.pkl)       |\n|      Diffusion-InsGen       |  AFHQ-Wild    | 512x512  | 1.51  |      [下载](https:\u002F\u002Fhuggingface.co\u002Fzhendongw\u002Fdiffusion-gan\u002Fresolve\u002Fmain\u002Fcheckpoints\u002Fdiffusion-insgen-afhqwild.pkl)      |\n\n要生成样本，请运行以下命令：\n\n```.bash\n# 使用预训练的 Diffusion-StyleGAN2 生成 FFHQ 
样本\npython generate.py --outdir=out --seeds=1-100 \\\n    --network=https:\u002F\u002Ftsciencescu.blob.core.windows.net\u002Fprojectshzheng\u002FDiffusionGAN\u002Fdiffusion-stylegan2-ffhq.pkl\n\n# 使用预训练的 Diffusion-ProjectedGAN 生成 LSUN-Church 样本\npython gen_images.py --outdir=out --seeds=1-100 \\\n    --network=https:\u002F\u002Ftsciencescu.blob.core.windows.net\u002Fprojectshzheng\u002FDiffusionGAN\u002Fdiffusion-projectedgan-lsun-church.pkl\n```\n\n上述命令中的检查点路径可替换为表格中任意预训练 Diffusion-GAN 检查点。\n\n同样，可通过以下命令计算指标：\n\n```.bash\n# 预训练网络模型：显式指定数据集，结果输出至标准输出\npython calc_metrics.py --metrics=fid50k_full --data=~\u002Fdatasets\u002Fffhq.zip --mirror=1 \\\n    --network=https:\u002F\u002Ftsciencescu.blob.core.windows.net\u002Fprojectshzheng\u002FDiffusionGAN\u002Fdiffusion-stylegan2-ffhq.pkl\n```\n\n## 引用\n\n```\n@article{wang2022diffusiongan,\n  title     = {Diffusion-GAN: Training GANs with Diffusion},\n  author    = {Wang, Zhendong and Zheng, Huangjie and He, Pengcheng and Chen, Weizhu and Zhou, Mingyuan},\n  journal   = {arXiv preprint arXiv:2206.02262},\n  year      = {2022},\n  url       = {https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.02262}\n}\n```\n\n## 致谢\n\n我们的代码基于 Karras 等人的优秀 [StyleGAN2-ADA 仓库](https:\u002F\u002Fgithub.com\u002FNVlabs\u002Fstylegan2-ada-pytorch)、Ceyuan Yang 等人的 [InsGen 仓库](https:\u002F\u002Fgithub.com\u002Fgenforce\u002Finsgen) 以及 Axel Sauer 等人的 [ProjectedGAN 仓库](https:\u002F\u002Fgithub.com\u002Fautonomousvision\u002Fprojected_gan) 构建。","# Diffusion-GAN 快速上手指南\n\n## 环境准备\n\n### 系统要求\n- 64位 Python 3.7\n- PyTorch 1.7.1 或 1.8.1（推荐使用官方源或清华\u002F阿里云镜像加速）\n- CUDA Toolkit 11.0 或更高版本（需与 PyTorch 版本兼容）\n- NVIDIA GPU（建议至少 8GB 显存）\n\n### 前置依赖\n```bash\npip install click requests tqdm pyspng ninja imageio-ffmpeg==0.4.3\n```\n\n> 国内用户建议配置 pip 镜像源（如清华源）加速安装：\n```bash\npip config set global.index-url https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n## 安装步骤\n\n1. 克隆项目仓库：\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002FZhendong-Wang\u002FDiffusion-GAN.git\ncd Diffusion-GAN\n```\n\n2. 创建并激活 Conda 环境（推荐）：\n```bash\nconda env create -f environment.yml\nconda activate diffusion-gan\n```\n\n3. 编译必要组件（如有）：\n```bash\n# 根据具体子目录需求，例如进入 diffusion-stylegan2 目录后执行\ncd diffusion-stylegan2\npython setup.py build_ext --inplace\n```\n\n## 基本使用\n\n### 数据准备示例（以 LSUN 卧室数据集为例）：\n```bash\npython dataset_tool.py --source=~\u002Fdownloads\u002Flsun\u002Fraw\u002Fbedroom_lmdb --dest=~\u002Fdatasets\u002Flsun_bedroom200k.zip \\\n    --transform=center-crop --width=256 --height=256 --max_images=200000\n```\n\n### 训练命令示例（CIFAR-10）：\n```bash\npython train.py --outdir=training-runs --data=\"~\u002Fcifar10.zip\" --gpus=4 --cfg cifar --kimg 50000 --aug no --target 0.6 --noise_sd 0.05 --ts_dist priority\n```\n\n### 使用预训练模型生成图像（FFHQ 示例）：\n```bash\npython generate.py --outdir=out --seeds=1-100 \\\n    --network=https:\u002F\u002Ftsciencescu.blob.core.windows.net\u002Fprojectshzheng\u002FDiffusionGAN\u002Fdiffusion-stylegan2-ffhq.pkl\n```\n\n> 预训练模型也可从 Hugging Face 下载（国内用户可尝试代理或镜像加速）：\nhttps:\u002F\u002Fhuggingface.co\u002Fzhendongw\u002Fdiffusion-gan\u002Ftree\u002Fmain\u002Fcheckpoints","一位独立游戏开发者正在为新作《幻兽纪元》训练一个能自动生成奇幻生物角色的图像生成模型，用于快速填充游戏美术资源库。\n\n### 没有 Diffusion-GAN 时\n- 训练过程极不稳定，每隔几轮就出现模式崩溃，生成的怪物要么千篇一律，要么完全扭曲失真，不得不频繁手动重启训练。\n- 即使使用传统噪声注入方法增强判别器鲁棒性，也无法有效缓解梯度消失问题，导致生成质量长期停滞在“勉强可用”水平。\n- 需要大量标注数据和反复调参才能获得尚可的结果，小团队难以负担时间和算力成本，项目进度严重滞后。\n- 在低分辨率（64x64）下尚可运行，一旦尝试生成256x256以上高清图，模型几乎必然发散，无法满足游戏实际需求。\n- 不同数据集（如毛茸茸动物 vs. 
鳞甲怪兽）需要重新设计噪声策略，缺乏通用方案，复用性差。\n\n### 使用 Diffusion-GAN 后\n- 训练全程稳定收敛，无需人工干预，即使连续跑数百个 epoch 也不会崩溃，节省了 90% 以上的调试时间。\n- 利用扩散链中的高斯混合噪声自动调节噪声强度，生成图像细节更丰富、结构更合理，怪物设计自然多样，美术团队直接可用。\n- 在仅 1\u002F3 原始数据量下即达到超越原基线的效果，显著降低数据采集与标注压力，适合资源有限的小型开发组。\n- 成功在 AFHQ-Dog 等复杂数据集上稳定生成 512x512 高清图像，满足游戏贴图精度要求，且支持 FFHQ 级 1024x1024 高分辨率输出。\n- 一套配置通用于多种生物类型，无需针对毛发、鳞片或翅膀等特征单独优化，大幅提升开发效率。\n\nDiffusion-GAN 让小型团队也能稳定高效地训练高质量图像生成模型，把原本“玄学调参”的 GAN 训练变成了可预测、可复用的工程化流程。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FZhendong-Wang_Diffusion-GAN_07f764bf.png","Zhendong-Wang","Zhendong Wang","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FZhendong-Wang_c6da6523.png","I am a PhD student in the University of Texas at Austin. Currently, I focus on research in deep generative models, and reinforcement learning.",null,"https:\u002F\u002Fgithub.com\u002FZhendong-Wang",[79,83,87,91,95],{"name":80,"color":81,"percentage":82},"Python","#3572A5",82.4,{"name":84,"color":85,"percentage":86},"Cuda","#3A4E3A",12.4,{"name":88,"color":89,"percentage":90},"C++","#f34b7d",4.5,{"name":92,"color":93,"percentage":94},"HTML","#e34c26",0.4,{"name":96,"color":97,"percentage":98},"CSS","#663399",0.3,699,75,"2026-04-10T06:31:45","MIT","Linux","需要 NVIDIA GPU，CUDA toolkit 11.0 或更高版本","未说明",{"notes":107,"python":108,"dependencies":109},"建议使用 environment.yml 配置环境，部分功能依赖 StyleGAN2-ADA 的 torch_utils 修改版，数据需预处理为指定分辨率 ZIP 格式","3.7",[110,111,112,113,114,115,116],"torch==1.7.1\u002F1.8.1","click","requests","tqdm","pyspng","ninja","imageio-ffmpeg==0.4.3",[15,14],5,"2026-03-27T02:49:30.150509","2026-04-11T16:55:07.681110",[122,127,132,137,142,146],{"id":123,"question_zh":124,"answer_zh":125,"source_url":126},290,"恢复训练 Diffusion-StyleGAN2 时加载预训练权重报错怎么办？","该错误源于参数复制时梯度状态冲突。解决方案是修改 torch_utils\u002Fmisc.py 中的 copy_params_and_buffers 函数：先保存 requires_grad 状态，用 torch.no_grad() 包裹 copy_ 操作，再恢复梯度状态。具体补丁见评论中的 diff 代码。","https:\u002F\u002Fgithub.com\u002FZhendong-Wang\u002FDiffusion-GAN\u002Fissues\u002F6",{"id":128,"question_zh":129,"answer_zh":130,"source_url":131},291,"DCGAN\u002FSNGAN 实验中判别器是如何进行时间步 t 条件化的？","作者确认使用了将时间步 t 的嵌入向量与图像输入拼接的方式输入判别器，类似条件GAN的做法。具体实现可参考 issue 中展示的代码结构，作者也表示愿意通过邮件提供完整 DCGAN 代码（需主动联系）。","https:\u002F\u002Fgithub.com\u002FZhendong-Wang\u002FDiffusion-GAN\u002Fissues\u002F13",{"id":133,"question_zh":134,"answer_zh":135,"source_url":136},292,"Diffusion-GAN 相比 StyleGAN2 训练时间如何？","官方未直接对比训练时间，但指出扩散模块（diffusion）替代了原始 ADA 的增强管道（augment_pipe），功能相似。用户可自行添加 --Diffusion=false\u002Ftrue 开关来控制是否启用扩散，便于对比实验。","https:\u002F\u002Fgithub.com\u002FZhendong-Wang\u002FDiffusion-GAN\u002Fissues\u002F4",{"id":138,"question_zh":139,"answer_zh":140,"source_url":141},293,"训练命令中参数不一致或对象创建错误如何修正？","应将 --target 改为 --d_target；cfg 参数对 ProjectedGAN 应使用 fastgan 而非 cifar；Diffusion 类构造函数中应使用 t_min 而非 t_init。维护者已更新代码统一使用 target，并确认 fastgan 是 Projected GAN 的正确配置。","https:\u002F\u002Fgithub.com\u002FZhendong-Wang\u002FDiffusion-GAN\u002Fissues\u002F3",{"id":143,"question_zh":144,"answer_zh":145,"source_url":136},294,"如何将扩散模块与其他数据增强方法结合使用？","扩散模块替代了原始 augment_pipe，但原始 ADA 增强代码仍保留在 training\u002Fadaaug.py 中。可参考 training\u002Fdiffusion.py 第137-144行和第186行的代码结构，将扩散与其他增强方法组合使用。也可尝试将时间步 t 的嵌入加入判别器输入。",{"id":147,"question_zh":148,"answer_zh":149,"source_url":126},295,"Diffusion-InsGen 恢复训练时报错如何解决？","除修改 misc.py 中的参数复制逻辑外，还需检查 InsGen 的 Contrastive_Head 是否存在就地操作（in-place operation）问题。维护者提到曾有相关修复但被删除，建议用户尝试查找或重写相关部分避免 inplace 操作。",[]]