[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-POSTECH-CVLab--PyTorch-StudioGAN":3,"tool-POSTECH-CVLab--PyTorch-StudioGAN":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 
道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":75,"owner_avatar_url":76,"owner_bio":77,"owner_company":77,"owner_location":77,"owner_email":77,"owner_twitter":77,"owner_website":77,"owner_url":78,"languages":79,"stars":92,"forks":93,"last_commit_at":94,"license":95,"difficulty_score":10,"env_os":96,"env_gpu":97,"env_ram":96,"env_deps":98,"category_tags":111,"github_topics":112,"view_count":123,"oss_zip_url":77,"oss_zip_packed_at":77,"status":16,"created_at":124,"updated_at":125,"faqs":126,"releases":157},683,"POSTECH-CVLab\u002FPyTorch-StudioGAN","PyTorch-StudioGAN","StudioGAN is a Pytorch library providing implementations of representative Generative Adversarial Networks (GANs) for conditional\u002Funconditional image generation.","StudioGAN 是一个基于 PyTorch 的开源库，致力于提供代表性生成对抗网络（GAN）的实现方案，支持条件与非条件图像生成。它主要为了解决机器学习研究中不同 GAN 模型实现细节不一致、难以公平对比的问题。通过统一的环境和模块化设计，StudioGAN 让研究人员能专注于算法创新而非底层代码调试。\n\nStudioGAN 功能丰富，内置了 7 种主流 GAN 架构、9 种条件生成方法及多种评估指标，并提供了涵盖 GAN、自回归模型和扩散模型的大规模基准测试数据。用户只需通过 YAML 配置文件即可灵活组合不同的损失函数、正则化模块和增强策略，极大提升了实验效率。同时，StudioGAN 支持从单 GPU 到多节点分布式训练等多种加速方式，并开放了部分预训练模型和训练日志。\n\nStudioGAN 特别适合深度学习研究人员、算法工程师以及需要复现或改进图像生成模型的技术开发者。无论是进行新想法的快速验证，还是开展全面的模型性能分析，StudioGAN 都能提供稳定且可复现的实验环境，助力生成式 AI 领域的探索与发展。","\u003Cp align=\"center\">\n  \u003Cimg width=\"60%\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPOSTECH-CVLab_PyTorch-StudioGAN_readme_fb47e1680140.jpg\" \u002F>\n\u003C\u002Fp>\n\n--------------------------------------------------------------------------------\n\n**StudioGAN** is a Pytorch library providing implementations of representative Generative Adversarial Networks (GANs) for conditional\u002Funconditional image generation. StudioGAN aims to offer an identical playground for modern GANs so that machine learning researchers can readily compare and analyze a new idea.\n\n**Moreover**, StudioGAN provides an unprecedented-scale benchmark for generative models. 
The benchmark includes results from GANs (BigGAN-Deep, StyleGAN-XL), auto-regressive models (MaskGIT, RQ-Transformer), and diffusion models (LSGM++, CLD-SGM, ADM-G-U).

# News
- The StudioGAN paper was accepted at IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2023.
- We provide all the checkpoints we used: please visit [Hugging Face Hub](https://huggingface.co/Mingguksky/PyTorch-StudioGAN/tree/main).
- Our new paper "[StudioGAN: A Taxonomy and Benchmark of GANs for Image Synthesis](https://arxiv.org/abs/2206.09479)" is public on arXiv.
- StudioGAN provides implementations of 7 GAN architectures, 9 conditioning methods, 4 adversarial losses, 13 regularization modules, 3 differentiable augmentations, 8 evaluation metrics, and 5 evaluation backbones.
- StudioGAN supports both clean and architecture-friendly metrics (IS, FID, PRDC, IFID) with a comprehensive benchmark.
- StudioGAN provides wandb logs and pre-trained models (will be ready soon).

# Release Notes (v.0.4.0)
- We checked the reproducibility of the implemented GANs.
- We provide the Baby, Papa, and Grandpa ImageNet datasets, where images are processed using an anti-aliasing, high-quality resizer.
- StudioGAN provides a dedicated benchmark on standard datasets (CIFAR10, ImageNet, AFHQv2, and FFHQ).
- StudioGAN supports InceptionV3, ResNet50, SwAV, DINO, and Swin Transformer backbones for GAN evaluation.

# Features
- **Coverage:** StudioGAN is a self-contained library that provides 7 GAN architectures, 9 conditioning methods, 4 adversarial losses, 13 regularization modules, 6 augmentation modules, 8 evaluation metrics, and 5 evaluation backbones. Among these configurations, we formulate 30 GANs as representatives.
- **Flexibility:** Each modularized option is managed through a YAML-based configuration system, so users can train a large combination of GANs by mixing and matching distinct options (see the sketch after this list).
- **Reproducibility:** With StudioGAN, users can compare and debug various GANs in a unified computing environment without worrying about hidden details and tricks.
- **Plentifulness:** StudioGAN provides a large collection of pre-trained GAN models, training logs, and evaluation results.
- **Versatility:** StudioGAN supports 5 types of acceleration methods with synchronized batch normalization for training: single-GPU training, data-parallel training (DP), distributed data-parallel training (DDP), multi-node distributed data-parallel training (MDDP), and mixed-precision training.
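To make the mix-and-match idea concrete, here is a minimal sketch of consuming such a YAML configuration in Python. The keys (`MODEL.backbone`, `LOSS.adv_loss`, and so on) are illustrative placeholders, not StudioGAN's actual schema; see the configuration files under `src/configs` in the repository for the real options.

```python
# Minimal sketch of a YAML-driven experiment setup (hypothetical keys,
# not StudioGAN's actual schema).
import yaml

config_text = """
MODEL:
  backbone: big_resnet      # e.g. deep_conv, resnet, big_resnet, stylegan2
  g_cond_mtd: cBN           # generator conditioning method
  d_cond_mtd: PD            # discriminator conditioning method
LOSS:
  adv_loss: hinge           # vanilla, least_square, hinge, wasserstein
  apply_gp: false           # gradient-penalty regularization on/off
"""

cfg = yaml.safe_load(config_text)

# Swapping one line of YAML (e.g. adv_loss: wasserstein) changes the
# training objective without touching any model code.
print(cfg["MODEL"]["backbone"], cfg["LOSS"]["adv_loss"])
```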
# Implemented GANs

| Method | Venue | Architecture | GC | DC | Loss | EMA |
|:-----------|:-------------:|:-------------:|:-------------:|:-------------:|:-------------:|:-------------:|
| [**DCGAN**](https://arxiv.org/abs/1511.06434) | arXiv'15 | DCGAN/ResNetGAN<sup>[1](#footnote_1)</sup> | N/A | N/A | Vanilla | False |
| [**InfoGAN**](https://papers.nips.cc/paper/2016/hash/7c9d0b1f96aebd7b5eca8c3edaa19ebb-Abstract.html) | NIPS'16 | DCGAN/ResNetGAN<sup>[1](#footnote_1)</sup> | N/A | N/A | Vanilla | False |
| [**LSGAN**](https://arxiv.org/abs/1611.04076) | ICCV'17 | DCGAN/ResNetGAN<sup>[1](#footnote_1)</sup> | N/A | N/A | Least Square | False |
| [**GGAN**](https://arxiv.org/abs/1705.02894) | arXiv'17 | DCGAN/ResNetGAN<sup>[1](#footnote_1)</sup> | N/A | N/A | Hinge | False |
| [**WGAN-WC**](https://arxiv.org/abs/1701.04862) | ICLR'17 | ResNetGAN | N/A | N/A | Wasserstein | False |
| [**WGAN-GP**](https://arxiv.org/abs/1704.00028) | NIPS'17 | ResNetGAN | N/A | N/A | Wasserstein | False |
| [**WGAN-DRA**](https://arxiv.org/abs/1705.07215) | arXiv'17 | ResNetGAN | N/A | N/A | Wasserstein | False |
| **ACGAN-Mod**<sup>[2](#footnote_2)</sup> | - | ResNetGAN | cBN | AC | Hinge | False |
| [**PDGAN**](https://arxiv.org/abs/1802.05637) | ICLR'18 | ResNetGAN | cBN | PD | Hinge | False |
| [**SNGAN**](https://arxiv.org/abs/1802.05957) | ICLR'18 | ResNetGAN | cBN | PD | Hinge | False |
| [**SAGAN**](https://arxiv.org/abs/1805.08318) | ICML'19 | ResNetGAN | cBN | PD | Hinge | False |
| [**TACGAN**](https://arxiv.org/abs/1907.02690) | Neurips'19 | BigGAN | cBN | TAC | Hinge | True |
| [**LGAN**](https://arxiv.org/abs/1902.05687) | ICML'19 | ResNetGAN | N/A | N/A | Vanilla | False |
| [**Unconditional BigGAN**](https://arxiv.org/abs/1809.11096) | ICLR'19 | BigGAN | N/A | N/A | Hinge | True |
| [**BigGAN**](https://arxiv.org/abs/1809.11096) | ICLR'19 | BigGAN | cBN | PD | Hinge | True |
| [**BigGAN-Deep-CompareGAN**](https://github.com/POSTECH-CVLab/PyTorch-StudioGAN/blob/master/src/models/big_resnet_deep_legacy.py) | ICLR'19 | BigGAN-Deep CompareGAN | cBN | PD | Hinge | True |
| [**BigGAN-Deep-StudioGAN**](https://github.com/POSTECH-CVLab/PyTorch-StudioGAN/blob/master/src/models/big_resnet_deep_studiogan.py) | - | BigGAN-Deep StudioGAN | cBN | PD | Hinge | True |
| [**StyleGAN2**](https://arxiv.org/abs/1912.04958) | CVPR'20 | StyleGAN2 | cAdaIN | SPD | Logistic | True |
| [**CRGAN**](https://arxiv.org/abs/1910.12027) | ICLR'20 | BigGAN | cBN | PD | Hinge | True |
| [**ICRGAN**](https://arxiv.org/abs/2002.04724) | AAAI'21 | BigGAN | cBN | PD | Hinge | True |
| [**LOGAN**](https://arxiv.org/abs/1912.00953) | arXiv'19 | ResNetGAN | cBN | PD | Hinge | True |
| [**ContraGAN**](https://arxiv.org/abs/2006.12681) | Neurips'20 | BigGAN | cBN | 2C | Hinge | True |
| [**MHGAN**](https://arxiv.org/abs/1912.04216) | WACV'21 | BigGAN | cBN | MH | MH | True |
| [**BigGAN + DiffAugment**](https://arxiv.org/abs/2006.10738) | Neurips'20 | BigGAN | cBN | PD | Hinge | True |
| [**StyleGAN2 + ADA**](https://arxiv.org/abs/2006.06676) | Neurips'20 | StyleGAN2 | cAdaIN | SPD | Logistic | True |
| [**BigGAN + LeCam**](https://arxiv.org/abs/2104.03310) | CVPR'21 | BigGAN | cBN | PD | Hinge | True |
| [**ReACGAN**](https://arxiv.org/abs/2111.01118) | Neurips'21 | BigGAN | cBN | D2D-CE | Hinge | True |
| [**StyleGAN2 + APA**](https://arxiv.org/abs/2111.06849) | Neurips'21 | StyleGAN2 | cAdaIN | SPD | Logistic | True |
| [**StyleGAN3-t**](https://nvlabs.github.io/stylegan3/) | Neurips'21 | StyleGAN3 | cAdaIN | SPD | Logistic | True |
| [**StyleGAN3-r**](https://nvlabs.github.io/stylegan3/) | Neurips'21 | StyleGAN3 | cAdaIN | SPD | Logistic | True |
| [**ADCGAN**](https://arxiv.org/abs/2107.10060) | ICML'22 | BigGAN | cBN | ADC | Hinge | True |

GC/DC indicates how label information is injected into the Generator or Discriminator.

[EMA](https://openreview.net/forum?id=SJgw_sRqFQ): Exponential Moving Average update to the generator.
[cBN](https://arxiv.org/abs/1610.07629): conditional Batch Normalization.
[cAdaIN](https://arxiv.org/abs/1812.04948): conditional version of Adaptive Instance Normalization.
[AC](https://arxiv.org/abs/1610.09585): Auxiliary Classifier.
[PD](https://arxiv.org/abs/1802.05637): Projection Discriminator.
[TAC](https://arxiv.org/abs/1907.02690): Twin Auxiliary Classifier.
[SPD](https://arxiv.org/abs/1812.04948): modified PD for StyleGAN.
[2C](https://arxiv.org/abs/2006.12681): Conditional Contrastive loss.
[MH](https://arxiv.org/abs/1912.04216): Multi-Hinge loss.
[ADC](https://arxiv.org/abs/2107.10060): Auxiliary Discriminative Classifier.
[D2D-CE](https://arxiv.org/abs/2111.01118): Data-to-Data Cross-Entropy.
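For reference, EMA keeps a slowly moving copy of the generator weights that is used at evaluation time. A minimal PyTorch sketch of the update rule follows; the decay value and the plain `nn.Linear` stand-in are illustrative, not StudioGAN's internals.

```python
# Minimal sketch of an EMA generator update (illustrative, not
# StudioGAN's actual implementation).
import copy
import torch

@torch.no_grad()
def update_ema(g_ema, g, decay=0.999):
    """Blend the live generator's weights into the EMA copy."""
    for p_ema, p in zip(g_ema.parameters(), g.parameters()):
        p_ema.mul_(decay).add_(p, alpha=1.0 - decay)

# Usage: clone the generator once, then call update_ema() after every
# generator step; sample from the EMA copy for evaluation.
generator = torch.nn.Linear(128, 3 * 32 * 32)   # stand-in generator
generator_ema = copy.deepcopy(generator)
update_ema(generator_ema, generator)
```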
# Evaluation Metrics

| Method | Venue | Architecture |
|:-----------|:-------------:|:-------------:|
| [**Inception Score (IS)**](https://arxiv.org/abs/1606.03498) | Neurips'16 | InceptionV3 |
| [**Frechet Inception Distance (FID)**](https://arxiv.org/abs/1706.08500) | Neurips'17 | InceptionV3 |
| [**Improved Precision & Recall**](https://arxiv.org/abs/1904.06991) | Neurips'19 | InceptionV3 |
| [**Classifier Accuracy Score (CAS)**](https://arxiv.org/abs/1905.10887) | Neurips'19 | InceptionV3 |
| [**Density & Coverage**](https://arxiv.org/abs/2002.09797) | ICML'20 | InceptionV3 |
| **Intra-class FID** | - | InceptionV3 |
| [**SwAV FID**](https://openreview.net/forum?id=NeRdBeTionN) | ICLR'21 | SwAV |
| [**Clean metrics (IS, FID, PRDC)**](https://arxiv.org/abs/2104.11222) | CVPR'22 | InceptionV3 |
| [**Architecture-friendly metrics (IS, FID, PRDC)**](https://arxiv.org/abs/2206.09479) | arXiv'22 | Not limited to InceptionV3 |

# Training and Inference Techniques

| Method | Venue | Target Architecture |
| :------ | :----: | :------: |
| [**FreezeD**](https://arxiv.org/abs/2002.10964) | CVPRW'20 | Except for StyleGAN2 |
| [**Top-K Training**](https://arxiv.org/abs/2002.06224) | Neurips'20 | - |
| [**DDLS**](https://arxiv.org/abs/2003.06060) | Neurips'20 | - |
| [**SeFa**](https://arxiv.org/abs/2007.06600) | CVPR'21 | BigGAN |

# Reproducibility

We checked the reproducibility of the GANs implemented in StudioGAN by comparing their IS and FID with the values reported in the original papers. Our platform successfully reproduces most of the representative GANs, except for PD-GAN, ACGAN, LOGAN, SAGAN, and BigGAN-Deep. FQ means the Flickr-Faces-HQ dataset (FFHQ). The resolutions of the ImageNet, AFHQv2, and FQ datasets are 128, 512, and 1024, respectively.

<p align="center">
  <img width="50%" src="https://oss.gittoolsai.com/images/POSTECH-CVLab_PyTorch-StudioGAN_readme_33de70aa617b.png" />
</p>

# Requirements

First, install a PyTorch build that matches your environment (version 1.7 or later):
```bash
pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu116
```

Then, use the following command to install the rest of the libraries:
```bash
pip install tqdm ninja h5py kornia matplotlib pandas scikit-learn scipy seaborn wandb PyYaml click requests pyspng imageio-ffmpeg timm
```

With Docker, you can use the following image (updated 14/DEC/2022):
```bash
docker pull alex4727/experiment:pytorch113_cuda116
```

This is our command to create a container named "StudioGAN":

```bash
docker run -it --gpus all --shm-size 128g --name StudioGAN -v /path_to_your_folders:/root/code --workdir /root/code alex4727/experiment:pytorch113_cuda116 /bin/zsh
```
If your NVIDIA driver version doesn't satisfy the requirements, you can try adding the flag below to the command above:
```bash
--env NVIDIA_DISABLE_REQUIRE=true
```

# Dataset

* CIFAR10/CIFAR100: StudioGAN will automatically download the dataset once you execute ``main.py``.

* Tiny ImageNet, ImageNet, or a custom dataset:
  1. Download [Tiny ImageNet](https://gist.github.com/moskomule/2e6a9a463f50447beca4e64ab4699ac4), [Baby ImageNet](https://postechackr-my.sharepoint.com/:f:/g/personal/jaesik_postech_ac_kr/Es-M92IXeN1Dv_L6H_ScswEBxiUanxF9BVsWkH3GsazABQ?e=Bs5ROw), [Papa ImageNet](https://postechackr-my.sharepoint.com/:f:/g/personal/jaesik_postech_ac_kr/Es-M92IXeN1Dv_L6H_ScswEBxiUanxF9BVsWkH3GsazABQ?e=Bs5ROw), [Grandpa ImageNet](https://postechackr-my.sharepoint.com/:f:/g/personal/jaesik_postech_ac_kr/Es-M92IXeN1Dv_L6H_ScswEBxiUanxF9BVsWkH3GsazABQ?e=Bs5ROw), or [ImageNet](http://www.image-net.org), or prepare your own dataset.
  2. Make the folder structure of the dataset as follows:

```
data
└── ImageNet, Tiny_ImageNet, Baby ImageNet, Papa ImageNet, or Grandpa ImageNet
    ├── train
    │   ├── cls0
    │   │   ├── train0.png
    │   │   ├── train1.png
    │   │   └── ...
    │   ├── cls1
    │   └── ...
    └── valid
        ├── cls0
        │   ├── valid0.png
        │   ├── valid1.png
        │   └── ...
        ├── cls1
        └── ...
```
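This layout is the standard class-per-subfolder convention, so it can be sanity-checked with torchvision's generic loader. A minimal sketch follows; ``ImageFolder`` is a standard torchvision API, and the path is a placeholder.

```python
# Minimal sanity check of the class-per-subfolder layout using
# torchvision's generic ImageFolder loader (path is a placeholder).
from torchvision import datasets, transforms

train_set = datasets.ImageFolder(
    root="data/Tiny_ImageNet/train",          # one subfolder per class
    transform=transforms.ToTensor(),
)

# ImageFolder maps each subfolder name (cls0, cls1, ...) to a label index.
print(len(train_set), train_set.classes[:5])
```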
# Quick Start

Before starting, users should log in to wandb using their personal API key.

```bash
wandb login PERSONAL_API_KEY
```
From release 0.3.0, you can define which evaluation metrics to use through the ``-metrics`` option. Omitting the option defaults to calculating FID only; e.g., ``-metrics is fid`` calculates only IS and FID, and ``-metrics none`` skips evaluation.


* Train (``-t``) and evaluate IS, FID, Prc, Rec, Dns, Cvg (``-metrics is fid prdc``) of the model defined in ``CONFIG_PATH`` using GPU ``0``.
```bash
CUDA_VISIBLE_DEVICES=0 python3 src/main.py -t -metrics is fid prdc -cfg CONFIG_PATH -data DATA_PATH -save SAVE_PATH
```

* Preprocess images for training and evaluation using the PIL.LANCZOS filter (``--pre_resizer lanczos``). Then, train (``-t``) and evaluate friendly-IS, friendly-FID, friendly-Prc, friendly-Rec, friendly-Dns, friendly-Cvg (``-metrics is fid prdc --post_resizer clean``) of the model defined in ``CONFIG_PATH`` using GPU ``0``.
```bash
CUDA_VISIBLE_DEVICES=0 python3 src/main.py -t -metrics is fid prdc --pre_resizer lanczos --post_resizer clean -cfg CONFIG_PATH -data DATA_PATH -save SAVE_PATH
```

* Train (``-t``) and evaluate FID of the model defined in ``CONFIG_PATH`` through ``DataParallel`` using GPUs ``(0, 1, 2, 3)``. Evaluating FID does not require the ``-metrics`` argument!

```bash
CUDA_VISIBLE_DEVICES=0,1,2,3 python3 src/main.py -t -cfg CONFIG_PATH -data DATA_PATH -save SAVE_PATH
```

* Train (``-t``) and skip evaluation (``-metrics none``) of the model defined in ``CONFIG_PATH`` through ``DistributedDataParallel`` using GPUs ``(0, 1, 2, 3)``, ``Synchronized batch norm``, and ``Mixed precision``.
```bash
export MASTER_ADDR="localhost"
export MASTER_PORT=2222
CUDA_VISIBLE_DEVICES=0,1,2,3 python3 src/main.py -t -metrics none -cfg CONFIG_PATH -data DATA_PATH -save SAVE_PATH -DDP -sync_bn -mpc
```

Try ``python3 src/main.py`` to see the available options.

# Supported Training/Testing Techniques

* Load All Data in Main Memory (``-hdf5 -l``)
  ```bash
  CUDA_VISIBLE_DEVICES=0,...,N python3 src/main.py -t -hdf5 -l -cfg CONFIG_PATH -data DATA_PATH -save SAVE_PATH
  ```

* DistributedDataParallel (please refer to [here](https://yangkky.github.io/2019/07/08/distributed-pytorch-tutorial.html)) (``-DDP``)
  ```bash
  ### NODE_0, 4_GPUs, all ports are open to NODE_1
  ~/code>>> export MASTER_ADDR=PUBLIC_IP_OF_NODE_0
  ~/code>>> export MASTER_PORT=AVAILABLE_PORT_OF_NODE_0
  ~/code/PyTorch-StudioGAN>>> CUDA_VISIBLE_DEVICES=0,1,2,3 python3 src/main.py -t -DDP -tn 2 -cn 0 -cfg CONFIG_PATH -data DATA_PATH -save SAVE_PATH
  ```
  ```bash
  ### NODE_1, 4_GPUs, all ports are open to NODE_0
  ~/code>>> export MASTER_ADDR=PUBLIC_IP_OF_NODE_0
  ~/code>>> export MASTER_PORT=AVAILABLE_PORT_OF_NODE_0
  ~/code/PyTorch-StudioGAN>>> CUDA_VISIBLE_DEVICES=0,1,2,3 python3 src/main.py -t -DDP -tn 2 -cn 1 -cfg CONFIG_PATH -data DATA_PATH -save SAVE_PATH
  ```

* [Mixed Precision Training](https://arxiv.org/abs/1710.03740) (``-mpc``)
  ```bash
  CUDA_VISIBLE_DEVICES=0,...,N python3 src/main.py -t -mpc -cfg CONFIG_PATH -data DATA_PATH -save SAVE_PATH
  ```

* [Change Batch Normalization Statistics](https://arxiv.org/abs/2206.09479)
  ```bash
  # Synchronized batchNorm (-sync_bn)
  CUDA_VISIBLE_DEVICES=0,...,N python3 src/main.py -t -sync_bn -cfg CONFIG_PATH -data DATA_PATH -save SAVE_PATH

  # Standing statistics (-std_stat, -std_max, -std_step)
  CUDA_VISIBLE_DEVICES=0,...,N python3 src/main.py -std_stat -std_max STD_MAX -std_step STD_STEP -cfg CONFIG_PATH -ckpt CKPT -data DATA_PATH -save SAVE_PATH

  # Batch statistics (-batch_stat)
  CUDA_VISIBLE_DEVICES=0,...,N python3 src/main.py -batch_stat -cfg CONFIG_PATH -ckpt CKPT -data DATA_PATH -save SAVE_PATH
  ```

* [Truncation Trick](https://arxiv.org/abs/1809.11096) (a latent-truncation sketch follows this list)
  ```bash
  CUDA_VISIBLE_DEVICES=0,...,N python3 src/main.py --truncation_factor TRUNCATION_FACTOR -cfg CONFIG_PATH -ckpt CKPT -data DATA_PATH -save SAVE_PATH
  ```

* [DDLS](https://arxiv.org/abs/2003.06060) (``-lgv -lgv_rate -lgv_std -lgv_decay -lgv_decay_steps -lgv_steps``)
  ```bash
  CUDA_VISIBLE_DEVICES=0,...,N python3 src/main.py -lgv -lgv_rate LGV_RATE -lgv_std LGV_STD -lgv_decay LGV_DECAY -lgv_decay_steps LGV_DECAY_STEPS -lgv_steps LGV_STEPS -cfg CONFIG_PATH -ckpt CKPT -data DATA_PATH -save SAVE_PATH
  ```

* [Freeze Discriminator](https://arxiv.org/abs/2002.10964) (``-freezeD``)
  ```bash
  CUDA_VISIBLE_DEVICES=0,...,N python3 src/main.py -t --freezeD FREEZED -ckpt SOURCE_CKPT -cfg TARGET_CONFIG_PATH -data DATA_PATH -save SAVE_PATH
  ```
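The truncation trick trades diversity for fidelity by keeping latent samples close to the mode of the prior. A minimal sketch of BigGAN-style resampling follows; the threshold handling is illustrative and independent of how StudioGAN implements ``--truncation_factor``.

```python
# Minimal sketch of BigGAN-style latent truncation: resample any z
# component whose magnitude exceeds the threshold (illustrative only).
import torch

def truncated_z(batch, z_dim, threshold=0.5):
    z = torch.randn(batch, z_dim)
    while True:
        mask = z.abs() > threshold
        if not mask.any():
            return z
        # Redraw only the out-of-range components.
        z[mask] = torch.randn(int(mask.sum()))

z = truncated_z(batch=4, z_dim=128, threshold=0.5)
print(z.abs().max())  # always <= threshold
```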
# Analyzing Generated Images

StudioGAN supports ``image visualization, K-nearest neighbor analysis, linear interpolation, frequency analysis, TSNE analysis, and semantic factorization``. All results will be saved in ``SAVE_DIR/figures/RUN_NAME/*.png``.

* Image Visualization
```bash
CUDA_VISIBLE_DEVICES=0,...,N python3 src/main.py -v -cfg CONFIG_PATH -ckpt CKPT -save SAVE_DIR
```

<p align="center">
  <img width="95%" src="https://oss.gittoolsai.com/images/POSTECH-CVLab_PyTorch-StudioGAN_readme_82c993e2cc45.png" />
</p>

* K-Nearest Neighbor Analysis (we fix K=7; the images in the first column are generated images.)
```bash
CUDA_VISIBLE_DEVICES=0,...,N python3 src/main.py -knn -cfg CONFIG_PATH -ckpt CKPT -data DATA_PATH -save SAVE_PATH
```
<p align="center">
  <img width="95%" src="https://oss.gittoolsai.com/images/POSTECH-CVLab_PyTorch-StudioGAN_readme_a60c102d4c1d.png" />
</p>

* Linear Interpolation (applicable only to conditional Big ResNet models)
```bash
CUDA_VISIBLE_DEVICES=0,...,N python3 src/main.py -itp -cfg CONFIG_PATH -ckpt CKPT -save SAVE_DIR
```
<p align="center">
  <img width="95%" src="https://oss.gittoolsai.com/images/POSTECH-CVLab_PyTorch-StudioGAN_readme_d86f496a005a.png" />
</p>

* Frequency Analysis
```bash
CUDA_VISIBLE_DEVICES=0,...,N python3 src/main.py -fa -cfg CONFIG_PATH -ckpt CKPT -data DATA_PATH -save SAVE_PATH
```
<p align="center">
  <img width="60%" src="https://oss.gittoolsai.com/images/POSTECH-CVLab_PyTorch-StudioGAN_readme_7edde9ba9b61.png" />
</p>

* TSNE Analysis
```bash
CUDA_VISIBLE_DEVICES=0,...,N python3 src/main.py -tsne -cfg CONFIG_PATH -ckpt CKPT -data DATA_PATH -save SAVE_PATH
```
<p align="center">
  <img width="80%" src="https://oss.gittoolsai.com/images/POSTECH-CVLab_PyTorch-StudioGAN_readme_3e4b8f9b58f0.png" />
</p>

* Semantic Factorization for BigGAN
```bash
CUDA_VISIBLE_DEVICES=0,...,N python3 src/main.py -sefa -sefa_axis SEFA_AXIS -sefa_max SEFA_MAX -cfg CONFIG_PATH -ckpt CKPT -save SAVE_PATH
```
<p align="center">
  <img width="95%" src="https://oss.gittoolsai.com/images/POSTECH-CVLab_PyTorch-StudioGAN_readme_2fad6b931a05.png" />
</p>

# Training GANs

StudioGAN supports the training of 30 representative GANs, from DCGAN to StyleGAN3-r.

We use different scripts depending on the dataset and model, as follows:

### CIFAR10
```bash
CUDA_VISIBLE_DEVICES=0 python3 src/main.py -t -hdf5 -l -std_stat -std_max STD_MAX -std_step STD_STEP -metrics is fid prdc -ref "train" -cfg CONFIG_PATH -data DATA_PATH -save SAVE_PATH -mpc --post_resizer "friendly" --eval_backbone "InceptionV3_tf"
```

### CIFAR10 using StyleGAN2/3
```bash
CUDA_VISIBLE_DEVICES=0 python3 src/main.py -t -hdf5 -l -metrics is fid prdc -ref "train" -cfg CONFIG_PATH -data DATA_PATH -save SAVE_PATH -mpc --post_resizer "friendly" --eval_backbone "InceptionV3_tf"
```

### Baby/Papa/Grandpa ImageNet and ImageNet
```bash
CUDA_VISIBLE_DEVICES=0,1,2,3 python3 src/main.py -t -hdf5 -l -sync_bn -std_stat -std_max STD_MAX -std_step STD_STEP -metrics is fid prdc -ref "train" -cfg CONFIG_PATH -data DATA_PATH -save SAVE_PATH -mpc --pre_resizer "lanczos" --post_resizer "friendly" --eval_backbone "InceptionV3_tf"
```

### AFHQv2
```bash
export MASTER_ADDR="localhost"
export MASTER_PORT=8888
CUDA_VISIBLE_DEVICES=0,1,2,3 python3 src/main.py -t -metrics is fid prdc -ref "train" -cfg CONFIG_PATH -data DATA_PATH -save SAVE_PATH -mpc --pre_resizer "lanczos" --post_resizer "friendly" --eval_backbone "InceptionV3_tf"
```

### FFHQ
```bash
export MASTER_ADDR="localhost"
export MASTER_PORT=8888
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python3 src/main.py -t -metrics is fid prdc -ref "train" -cfg CONFIG_PATH -data DATA_PATH -save SAVE_PATH -mpc --pre_resizer "lanczos" --post_resizer "friendly" --eval_backbone "InceptionV3_tf"
```

# Metrics

StudioGAN supports Inception Score, Frechet Inception Distance, Improved Precision and Recall, Density and Coverage, Intra-Class FID, and Classifier Accuracy Score. Users can get ``Intra-Class FID`` and ``Classifier Accuracy Score`` using the ``-iFID`` option and the ``-GAN_train``/``-GAN_test`` options, respectively.

Users can change the evaluation backbone from InceptionV3 to ResNet50, SwAV, DINO, or Swin Transformer using the ``--eval_backbone ResNet50_torch, SwAV_torch, DINO_torch, or Swin-T_torch`` option.

In addition, users can calculate metrics with the clean or architecture-friendly resizer using the ``--post_resizer clean or friendly`` option.

### 1. Inception Score (IS)
Inception Score (IS) is a metric that measures how much a GAN generates high-fidelity and diverse images. Calculating IS requires the pre-trained InceptionV3 network. Note that we do not split the dataset into ten folds to calculate IS ten times.
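As a reference for what IS measures, here is a minimal NumPy sketch of the score, IS = exp(E_x KL(p(y|x) || p(y))), given softmax predictions from a classifier. The predictions array is a placeholder; the real computation runs InceptionV3 over generated images.

```python
# Minimal sketch of the Inception Score from pre-computed softmax
# predictions p(y|x) (placeholder input; real IS uses InceptionV3).
import numpy as np

def inception_score(preds, eps=1e-16):
    """preds: (N, num_classes) rows of p(y|x); returns exp(E_x KL(p(y|x) || p(y)))."""
    p_y = preds.mean(axis=0)                                  # marginal p(y)
    kl = (preds * (np.log(preds + eps) - np.log(p_y + eps))).sum(axis=1)
    return float(np.exp(kl.mean()))

rng = np.random.default_rng(0)
fake_preds = rng.dirichlet(np.ones(10), size=1000)            # stand-in predictions
print(inception_score(fake_preds))
```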
### 2. Frechet Inception Distance (FID)
FID is a widely used metric for evaluating the performance of a GAN model. Calculating FID requires the pre-trained InceptionV3 network, and modern approaches use the [TensorFlow-based FID](https://github.com/bioinf-jku/TTUR). StudioGAN utilizes the [PyTorch-based FID](https://github.com/mseitzer/pytorch-fid) to test GAN models in the same PyTorch environment. We show that the PyTorch-based FID implementation provides [almost the same results](https://github.com/POSTECH-CVLab/PyTorch-StudioGAN/blob/master/docs/figures/Table3.png) as the TensorFlow implementation (see Appendix F of the [ContraGAN paper](https://arxiv.org/abs/2006.12681)).
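For reference, FID is the Fréchet distance between two Gaussians fitted to real and fake feature statistics: ||mu_r − mu_f||² + Tr(C_r + C_f − 2(C_r C_f)^{1/2}). A minimal sketch from pre-computed feature matrices follows; the inputs are placeholders, and the real computation extracts InceptionV3 features first.

```python
# Minimal sketch of FID from pre-computed feature matrices
# (placeholder inputs; real FID uses InceptionV3 features).
import numpy as np
from scipy import linalg

def fid(feats_real, feats_fake):
    mu_r, mu_f = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_f = np.cov(feats_fake, rowvar=False)
    covmean = linalg.sqrtm(cov_r @ cov_f).real   # matrix square root
    diff = mu_r - mu_f
    return float(diff @ diff + np.trace(cov_r + cov_f - 2.0 * covmean))

rng = np.random.default_rng(0)
real = rng.normal(size=(2048, 64))               # stand-in feature matrices
fake = rng.normal(size=(2048, 64)) + 0.1
print(fid(real, fake))
```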
### 3. Improved Precision and Recall (Prc, Rec)
Improved precision and recall were developed to make up for the shortcomings of the original precision and recall. Like IS and FID, calculating improved precision and recall requires the pre-trained InceptionV3 model. StudioGAN uses the PyTorch implementation provided by the [developers of the density and coverage scores](https://github.com/clovaai/generative-evaluation-prdc).

### 4. Density and Coverage (Dns, Cvg)
The density and coverage metrics estimate the fidelity and diversity of generated images using the pre-trained InceptionV3 model. The metrics are known to be robust to outliers, and they can detect identical real and fake distributions. StudioGAN uses the [authors' official PyTorch implementation](https://github.com/clovaai/generative-evaluation-prdc) and follows the authors' suggestion for hyperparameter selection.
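Both metric families above come from the same upstream implementation, so they can also be computed standalone. A minimal sketch follows, assuming ``pip install prdc`` for the linked clovaai package and placeholder feature matrices.

```python
# Minimal standalone PRDC computation via the linked clovaai package
# (assumes `pip install prdc`; feature matrices are placeholders).
import numpy as np
from prdc import compute_prdc

rng = np.random.default_rng(0)
real_features = rng.normal(size=(1000, 64))
fake_features = rng.normal(size=(1000, 64))

metrics = compute_prdc(real_features=real_features,
                       fake_features=fake_features,
                       nearest_k=5)
print(metrics)  # {'precision': ..., 'recall': ..., 'density': ..., 'coverage': ...}
```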
# Benchmark

#### ※ We always welcome your contribution if you find a wrong implementation, bug, or misreported score.

We report the best IS, FID, Improved Precision & Recall, and Density & Coverage of the GANs.

To download all the checkpoints reported in StudioGAN, please [**click here**](https://huggingface.co/Mingguksky/PyTorch-StudioGAN/tree/main) (Hugging Face Hub).

You can evaluate a checkpoint by adding the ``-ckpt CKPT_PATH`` option together with the corresponding configuration path ``-cfg CORRESPONDING_CONFIG_PATH``.

### 1. GANs from StudioGAN

The resolutions of CIFAR10, Baby ImageNet, Papa ImageNet, Grandpa ImageNet, ImageNet, AFHQv2, and FQ are 32, 64, 64, 64, 128, 512, and 1024, respectively.

We use the same number of generated images as training images for the Frechet Inception Distance (FID), Precision, Recall, Density, and Coverage calculations. For the experiments using Baby/Papa/Grandpa ImageNet and ImageNet, we exceptionally use 50k fake images against the complete training set as real images.

All features and moments of the reference datasets can be downloaded via [**features**](https://postechackr-my.sharepoint.com/:f:/g/personal/jaesik_postech_ac_kr/ElbkH1fLidJDpzUvrZZiT6EBZgBUhi-t1xoOhnqCas2p9g?e=WfGdGT) and [**moments**](https://postechackr-my.sharepoint.com/:f:/g/personal/jaesik_postech_ac_kr/En88Meh2gJtKk-1tIM1b3YEBcUZlP_4ksAI-qAS9pja4Yw?e=3OWJ7E).

<p align="center">
  <img width="95%" src="https://oss.gittoolsai.com/images/POSTECH-CVLab_PyTorch-StudioGAN_readme_09bf7f8dbb69.png"/>
</p>

### 2. Other generative models

The resolutions of ImageNet-128 and ImageNet-256 are 128 and 256, respectively.

All images used for the benchmark can be downloaded via OneDrive (will be uploaded soon).

<p align="center">
  <img width="95%" src="https://oss.gittoolsai.com/images/POSTECH-CVLab_PyTorch-StudioGAN_readme_5c9502346b0b.png"/>
</p>

# Evaluating pre-saved image folders

* Evaluate IS, FID, Prc, Rec, Dns, Cvg (``-metrics is fid prdc``) of image folders (already preprocessed) saved in DSET1 and DSET2 using GPUs ``(0,...,N)``.

```bash
CUDA_VISIBLE_DEVICES=0,...,N python3 src/evaluate.py -metrics is fid prdc --dset1 DSET1 --dset2 DSET2
```

* Evaluate IS, FID, Prc, Rec, Dns, Cvg (``-metrics is fid prdc``) of an image folder saved in DSET2 using pre-computed features (``--dset1_feats DSET1_FEATS``), moments of dset1 (``--dset1_moments DSET1_MOMENTS``), and GPUs ``(0,...,N)``.

```bash
CUDA_VISIBLE_DEVICES=0,...,N python3 src/evaluate.py -metrics is fid prdc --dset1_feats DSET1_FEATS --dset1_moments DSET1_MOMENTS --dset2 DSET2
```

* Evaluate friendly-IS, friendly-FID, friendly-Prc, friendly-Rec, friendly-Dns, friendly-Cvg (``-metrics is fid prdc --post_resizer friendly``) of image folders saved in DSET1 and DSET2 through ``DistributedDataParallel`` using GPUs ``(0,...,N)``.

```bash
export MASTER_ADDR="localhost"
export MASTER_PORT=2222
CUDA_VISIBLE_DEVICES=0,...,N python3 src/evaluate.py -metrics is fid prdc --post_resizer friendly --dset1 DSET1 --dset2 DSET2 -DDP
```

## StudioGAN thanks the following repos for sharing their code

[[MIT license]](https://github.com/POSTECH-CVLab/PyTorch-StudioGAN/blob/master/src/sync_batchnorm/LICENSE) Synchronized BatchNorm: https://github.com/vacancy/Synchronized-BatchNorm-PyTorch

[[MIT license]](https://github.com/POSTECH-CVLab/PyTorch-StudioGAN/blob/master/src/utils/ops.py) Self-Attention module: https://github.com/voletiv/self-attention-GAN-pytorch

[[MIT license]](https://github.com/POSTECH-CVLab/PyTorch-StudioGAN/blob/master/src/utils/diffaug.py) DiffAugment: https://github.com/mit-han-lab/data-efficient-gans

[[MIT license]](https://github.com/POSTECH-CVLab/PyTorch-StudioGAN/blob/master/src/metrics/prdc.py) PyTorch Improved Precision and Recall: https://github.com/clovaai/generative-evaluation-prdc

[[MIT license]](https://github.com/POSTECH-CVLab/PyTorch-StudioGAN/blob/master/src/metrics/prdc.py) PyTorch Density and Coverage: https://github.com/clovaai/generative-evaluation-prdc

[[MIT license]](https://github.com/POSTECH-CVLab/PyTorch-StudioGAN/blob/master/src/utils/resize.py) PyTorch clean-FID: https://github.com/GaParmar/clean-fid

[[NVIDIA source code license]](https://github.com/POSTECH-CVLab/PyTorch-StudioGAN/blob/master/LICENSE-NVIDIA) StyleGAN2: https://github.com/NVlabs/stylegan2

[[NVIDIA source code license]](https://github.com/POSTECH-CVLab/PyTorch-StudioGAN/blob/master/LICENSE-NVIDIA) Adaptive Discriminator Augmentation: https://github.com/NVlabs/stylegan2

[[Apache License]](https://github.com/POSTECH-CVLab/PyTorch-StudioGAN/blob/master/src/metrics/fid.py) PyTorch FID: https://github.com/mseitzer/pytorch-fid
## License
PyTorch-StudioGAN is an open-source library under the MIT license (MIT). However, portions of the library are available under distinct license terms: StyleGAN2, StyleGAN2-ADA, and StyleGAN3 are licensed under the [NVIDIA source code license](https://github.com/POSTECH-CVLab/PyTorch-StudioGAN/blob/master/LICENSE-NVIDIA), and PyTorch-FID is licensed under the [Apache License](https://github.com/POSTECH-CVLab/PyTorch-StudioGAN/blob/master/src/metrics/fid.py).

## Citation
StudioGAN was established for the following research projects. Please cite our work if you use StudioGAN.
```bib
@article{kang2023StudioGANpami,
  title   = {{StudioGAN: A Taxonomy and Benchmark of GANs for Image Synthesis}},
  author  = {MinGuk Kang and Joonghyuk Shin and Jaesik Park},
  journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)},
  year    = {2023}
}
```

```bib
@inproceedings{kang2021ReACGAN,
  title     = {{Rebooting ACGAN: Auxiliary Classifier GANs with Stable Training}},
  author    = {Minguk Kang and Woohyeon Shim and Minsu Cho and Jaesik Park},
  booktitle = {Conference on Neural Information Processing Systems (NeurIPS)},
  year      = {2021}
}
```

```bib
@inproceedings{kang2020ContraGAN,
  title     = {{ContraGAN: Contrastive Learning for Conditional Image Generation}},
  author    = {Minguk Kang and Jaesik Park},
  booktitle = {Conference on Neural Information Processing Systems (NeurIPS)},
  year      = {2020}
}
```
---------------------------------------

<a name="footnote_1">[1]</a> Experiments on Tiny ImageNet are conducted using the ResNet architecture instead of CNN.

<a name="footnote_2">[2]</a> Our re-implementation of [ACGAN (ICML'17)](https://arxiv.org/abs/1610.09585) with slight modifications, which bring a strong performance enhancement in the CIFAR10 experiments.
Baby、Papa 和 Grandpa ImageNet 数据集，其中图像使用抗锯齿和高精度重采样器进行处理。\n- StudioGAN 在标准数据集（CIFAR10, ImageNet, AFHQv2, 和 FFHQ）上提供了专门建立的基准测试。\n- StudioGAN 支持 InceptionV3, ResNet50, SwAV, DINO, 和 Swin Transformer 骨干网络用于 GAN 评估。\n\n# 功能特性\n- **覆盖范围：** StudioGAN 是一个自包含的库，提供了 7 种 GAN 架构、9 种条件设置方法、4 种对抗损失、13 种正则化模块、6 种增强模块、8 种评估指标和 5 种评估骨干网络。基于这些配置，我们构建了 30 种代表性的 GANs。\n- **灵活性：** 每个模块化选项都通过配置文件系统管理，该系统通过 YAML 文件工作，因此用户可以通过混合搭配不同的选项来训练大量组合的 GANs。\n- **可复现性：** 使用 StudioGAN，用户可以在统一的计算环境中比较和调试各种 GANs，而无需担心隐藏的细节和技巧。\n- **丰富性：** StudioGAN 提供了大量的预训练 GAN 模型、训练日志和评估结果。\n- **多功能性：** StudioGAN 支持 5 种加速方法，配合同步批归一化（batch normalization）进行训练：单 GPU 训练、数据并行训练（data-parallel training, DP）、分布式数据并行训练（distributed data-parallel training, DDP）、多节点分布式数据并行训练（multi-node distributed data-parallel training, MDDP）和混合精度训练（mixed-precision training）。\n\n# 已实现的生成对抗网络 (GAN)\n\n| 方法 | 发表场合 | 架构 | GC (生成器条件) | DC (判别器条件) | 损失函数 | EMA (指数移动平均) |\n|:-----------|:-------------:|:-------------:|:-------------:|:-------------:|:-------------:|:-------------:|\n| [**DCGAN**](https:\u002F\u002Farxiv.org\u002Fabs\u002F1511.06434) | arXiv'15 | DCGAN\u002FResNetGAN\u003Csup>[1](#footnote_1)\u003C\u002Fsup> | N\u002FA | N\u002FA | 标准 | 否 |\n| [**InfoGAN**](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2016\u002Fhash\u002F7c9d0b1f96aebd7b5eca8c3edaa19ebb-Abstract.html) | NIPS'16 | DCGAN\u002FResNetGAN\u003Csup>[1](#footnote_1)\u003C\u002Fsup> | N\u002FA | N\u002FA | 标准 | 否 |\n| [**LSGAN**](https:\u002F\u002Farxiv.org\u002Fabs\u002F1611.04076) | ICCV'17 | DCGAN\u002FResNetGAN\u003Csup>[1](#footnote_1)\u003C\u002Fsup> | N\u002FA | N\u002FA | 最小二乘 | 否 |\n| [**GGAN**](https:\u002F\u002Farxiv.org\u002Fabs\u002F1705.02894) | arXiv'17 | DCGAN\u002FResNetGAN\u003Csup>[1](#footnote_1)\u003C\u002Fsup> | N\u002FA | N\u002FA | 铰链 | 否 |\n| [**WGAN-WC**](https:\u002F\u002Farxiv.org\u002Fabs\u002F1701.04862)              |  ICLR'17   |                 ResNetGAN                  |  N\u002FA   |  N\u002FA   | 瓦瑟斯坦  | 否 |\n| [**WGAN-GP**](https:\u002F\u002Farxiv.org\u002Fabs\u002F1704.00028)              |  NIPS'17   |                 ResNetGAN                  |  N\u002FA   |  N\u002FA   | 瓦瑟斯坦  | 否 |\n| [**WGAN-DRA**](https:\u002F\u002Farxiv.org\u002Fabs\u002F1705.07215)             |  arXiv'17  |                 ResNetGAN                  |  N\u002FA   |  N\u002FA   | 瓦瑟斯坦  | 否 |\n| **ACGAN-Mod**\u003Csup>[2](#footnote_2)\u003C\u002Fsup>                     |     -      |                 ResNetGAN                  |  cBN   |   AC   |    铰链     | 否 |\n| [**PDGAN**](https:\u002F\u002Farxiv.org\u002Fabs\u002F1802.05637)                |  ICLR'18   |                 ResNetGAN                  |  cBN   |   PD   |    铰链     | 否 |\n| [**SNGAN**](https:\u002F\u002Farxiv.org\u002Fabs\u002F1802.05957)                |  ICLR'18   |                 ResNetGAN                  |  cBN   |   PD   |    铰链     | 否 |\n| [**SAGAN**](https:\u002F\u002Farxiv.org\u002Fabs\u002F1805.08318)                |  ICML'19   |                 ResNetGAN                  |  cBN   |   PD   |    铰链     | 否 |\n| [**TACGAN**](https:\u002F\u002Farxiv.org\u002Fabs\u002F1907.02690)               | Neurips'19 |                   BigGAN                   |  cBN   |  TAC   |    铰链     | 是  |\n| [**LGAN**](https:\u002F\u002Farxiv.org\u002Fabs\u002F1902.05687)                 |  ICML'19   |                 ResNetGAN                  |  N\u002FA   |  N\u002FA   |   标准    | 否 |\n| [**Unconditional BigGAN**](https:\u002F\u002Farxiv.org\u002Fabs\u002F1809.11096) |  ICLR'19   |                   BigGAN                   
|  N\u002FA   |  N\u002FA   |    铰链     | 是  |\n| [**BigGAN**](https:\u002F\u002Farxiv.org\u002Fabs\u002F1809.11096) | ICLR'19 | BigGAN | cBN | PD | 铰链 | 是 |\n| [**BigGAN-Deep-CompareGAN**](https:\u002F\u002Fgithub.com\u002FPOSTECH-CVLab\u002FPyTorch-StudioGAN\u002Fblob\u002Fmaster\u002Fsrc\u002Fmodels\u002Fbig_resnet_deep_legacy.py) | ICLR'19 | BigGAN-Deep CompareGAN | cBN | PD | 铰链 | 是 |\n| [**BigGAN-Deep-StudioGAN**](https:\u002F\u002Fgithub.com\u002FPOSTECH-CVLab\u002FPyTorch-StudioGAN\u002Fblob\u002Fmaster\u002Fsrc\u002Fmodels\u002Fbig_resnet_deep_studiogan.py) | - | BigGAN-Deep StudioGAN | cBN | PD | 铰链 | 是 |\n| [**StyleGAN2**](https:\u002F\u002Farxiv.org\u002Fabs\u002F1912.04958)            |  CVPR' 20  |                 StyleGAN2                  | cAdaIN |  SPD   |   逻辑   | 是  |\n| [**CRGAN**](https:\u002F\u002Farxiv.org\u002Fabs\u002F1910.12027)                |  ICLR'20   |                   BigGAN                   |  cBN   |   PD   |    铰链     | 是  |\n| [**ICRGAN**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2002.04724)               |  AAAI'21   |                   BigGAN                   |  cBN   |   PD   |    铰链     | 是  |\n| [**LOGAN**](https:\u002F\u002Farxiv.org\u002Fabs\u002F1912.00953)                |  arXiv'19  |                 ResNetGAN                  |  cBN   |   PD   |    铰链     | 是  |\n| [**ContraGAN**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.12681)            | Neurips'20 |                   BigGAN                   |  cBN   |   2C   |    铰链     | 是  |\n| [**MHGAN**](https:\u002F\u002Farxiv.org\u002Fabs\u002F1912.04216)                |  WACV'21   |                   BigGAN                   |  cBN   |   MH   |      MH      | 是  |\n| [**BigGAN + DiffAugment**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.10738) | Neurips'20 |                   BigGAN                   |  cBN   |   PD   |    铰链     | 是  |\n| [**StyleGAN2 + ADA**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.06676)      | Neurips'20 |                 StyleGAN2                  | cAdaIN |  SPD   |   逻辑   | 是  |\n| [**BigGAN + LeCam**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.03310)       | CVPR'2021  |                   BigGAN                   |  cBN   |   PD   |    铰链     | 是  |\n| [**ReACGAN**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.01118) | Neurips'21 | BigGAN | cBN | D2D-CE | 铰链 | 是 |\n| [**StyleGAN2 + APA**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.06849) | Neurips'21 | StyleGAN2 | cAdaIN | SPD | 逻辑 | 是 |\n| [**StyleGAN3-t**](https:\u002F\u002Fnvlabs.github.io\u002Fstylegan3\u002F) | Neurips'21 | StyleGAN3 | cAaIN | SPD | 逻辑 | 是 |\n| [**StyleGAN3-r**](https:\u002F\u002Fnvlabs.github.io\u002Fstylegan3\u002F) | Neurips'21 | StyleGAN3 | cAaIN | SPD | 逻辑 | 是 |\n| [**ADCGAN**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2107.10060) | ICML'22 | BigGAN | cBN | ADC | 铰链 | 是 |\n\nGC\u002FDC 表示我们将标签信息注入生成器或判别器的方法。\n\n[EMA](https:\u002F\u002Fopenreview.net\u002Fforum?id=SJgw_sRqFQ): 对生成器进行指数移动平均更新。\n[cBN](https:\u002F\u002Farxiv.org\u002Fabs\u002F1610.07629) : 条件批归一化。\n[cAdaIN](https:\u002F\u002Farxiv.org\u002Fabs\u002F1812.04948): 自适应实例归一化的条件版本。\n[AC](https:\u002F\u002Farxiv.org\u002Fabs\u002F1610.09585) : 辅助分类器。\n[PD](https:\u002F\u002Farxiv.org\u002Fabs\u002F1802.05637) : 投影判别器。\n[TAC](https:\u002F\u002Farxiv.org\u002Fabs\u002F1907.02690): 双辅助分类器。\n[SPD](https:\u002F\u002Farxiv.org\u002Fabs\u002F1812.04948) : 针对 StyleGAN 修改的 PD。\n[2C](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.12681) : 条件对比损失。\n[MH](https:\u002F\u002Farxiv.org\u002Fabs\u002F1912.04216) : 
多铰链损失。\n[ADC](https:\u002F\u002Farxiv.org\u002Fabs\u002F2107.10060) : 辅助判别分类器。\n[D2D-CE](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.01118) : 数据到数据交叉熵。\n\n# 评估指标\n| 方法 | 发表场合 | 架构 |\n|:-----------|:-------------:|:-------------:|\n| [**Inception Score (IS)**](https:\u002F\u002Farxiv.org\u002Fabs\u002F1606.03498) | Neurips'16 | InceptionV3 |\n| [**Frechet Inception Distance (FID)**](https:\u002F\u002Farxiv.org\u002Fabs\u002F1706.08500) | Neurips'17 | InceptionV3 |\n| [**Improved Precision & Recall**](https:\u002F\u002Farxiv.org\u002Fabs\u002F1904.06991) | Neurips'19 |        InceptionV3         |\n| [**Classifier Accuracy Score (CAS)**](https:\u002F\u002Farxiv.org\u002Fabs\u002F1905.10887) | Neurips'19 |        InceptionV3         |\n| [**Density & Coverage**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2002.09797)   |  ICML'20   |        InceptionV3         |\n| **Intra-class FID**                                          |     -      |        InceptionV3         |\n| [**SwAV FID**](https:\u002F\u002Fopenreview.net\u002Fforum?id=NeRdBeTionN) | ICLR'21 | SwAV |\n| [**Clean metrics (IS, FID, PRDC)**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.11222) | CVPR'22 | InceptionV3 |\n| [**Architecture-friendly metrics (IS, FID, PRDC)**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.09479) | arXiv'22 | Not limited to InceptionV3 |\n\n# 训练与推理技术\n\n| 方法                                                 |    发表地     | 目标架构  |\n| :----------------------------------------------------- | :----------: | :------------------: |\n| [**FreezeD**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2002.10964)        |   CVPRW'20   | 除 StyleGAN2 外 |\n| [**Top-K Training**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2002.06224) | Neurips'2020 |          -           |\n| [**DDLS**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2003.06060)           | Neurips'2020 |          -           |\n| [**SeFa**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2007.06600)           |  CVPR'2021   |        BigGAN        |\n\n# 可复现性\n\n我们通过将 StudioGAN 中实现的生成对抗网络（GANs）的 IS（Inception Score）和 FID（Fréchet Inception Distance）指标与原始论文中的结果进行比较，来检查其可复现性。我们发现我们的平台成功复现了大多数代表性 GAN，但 PD-GAN、ACGAN、LOGAN、SAGAN 和 BigGAN-Deep 除外。FQ 指 Flickr-Faces-HQ 数据集（FFHQ）。ImageNet、AFHQv2 和 FQ 数据集的分辨率分别为 128、512 和 1024。\n\n\u003Cp align=\"center\">\n  \u003Cimg width=\"50%\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPOSTECH-CVLab_PyTorch-StudioGAN_readme_33de70aa617b.png\" \u002F>\n\u003C\u002Fp>\n\n# 环境要求\n\n首先，安装符合您环境的 PyTorch（至少 1.7 版本）：\n```bash\npip install torch torchvision torchaudio --extra-index-url https:\u002F\u002Fdownload.pytorch.org\u002Fwhl\u002Fcu116\n```\n\n然后，使用以下命令安装其余库：\n```bash\npip install tqdm ninja h5py kornia matplotlib pandas sklearn scipy seaborn wandb PyYaml click requests pyspng imageio-ffmpeg timm\n```\n\n使用 Docker，您可以使用（更新于 2022 年 12 月 14 日）：\n```bash\ndocker pull alex4727\u002Fexperiment:pytorch113_cuda116\n```\n\n这是我们创建名为 \"StudioGAN\" 容器的命令。\n\n```bash\ndocker run -it --gpus all --shm-size 128g --name StudioGAN -v \u002Fpath_to_your_folders:\u002Froot\u002Fcode --workdir \u002Froot\u002Fcode alex4727\u002Fexperiment:pytorch113_cuda116 \u002Fbin\u002Fzsh\n```\n如果您的 NVIDIA 驱动程序版本不满足要求，可以尝试在上面的命令中添加以下内容。\n```bash\n--env NVIDIA_DISABLE_REQUIRE=true\n```\n\n# 数据集\n\n* CIFAR10\u002FCIFAR100：一旦执行 ``main.py``，StudioGAN 将自动下载该数据集。\n\n* Tiny ImageNet、ImageNet 或自定义数据集：\n  1. 
下载 [Tiny ImageNet](https:\u002F\u002Fgist.github.com\u002Fmoskomule\u002F2e6a9a463f50447beca4e64ab4699ac4)、[Baby ImageNet](https:\u002F\u002Fpostechackr-my.sharepoint.com\u002F:f:\u002Fg\u002Fpersonal\u002Fjaesik_postech_ac_kr\u002FEs-M92IXeN1Dv_L6H_ScswEBxiUanxF9BVsWkH3GsazABQ?e=Bs5ROw)、[Papa ImageNet](https:\u002F\u002Fpostechackr-my.sharepoint.com\u002F:f:\u002Fg\u002Fpersonal\u002Fjaesik_postech_ac_kr\u002FEs-M92IXeN1Dv_L6H_ScswEBxiUanxF9BVsWkH3GsazABQ?e=Bs5ROw)、[Grandpa ImageNet](https:\u002F\u002Fpostechackr-my.sharepoint.com\u002F:f:\u002Fg\u002Fpersonal\u002Fjaesik_postech_ac_kr\u002FEs-M92IXeN1Dv_L6H_ScswEBxiUanxF9BVsWkH3GsazABQ?e=Bs5ROw)、[ImageNet](http:\u002F\u002Fwww.image-net.org)。准备您自己的数据集。\n  2. 按照以下方式构建数据集的文件夹结构：\n\n```\ndata\n└── ImageNet, Tiny_ImageNet, Baby ImageNet, Papa ImageNet, or Grandpa ImageNet\n    ├── train\n    │   ├── cls0\n    │   │   ├── train0.png\n    │   │   ├── train1.png\n    │   │   └── ...\n    │   ├── cls1\n    │   └── ...\n    └── valid\n        ├── cls0\n        │   ├── valid0.png\n        │   ├── valid1.png\n        │   └── ...\n        ├── cls1\n        └── ...\n```\n\n# 快速开始\n\n开始之前，用户应使用个人 API 密钥登录 wandb（Weights & Biases）。\n\n```bash\nwandb login PERSONAL_API_KEY\n```\n从 0.3.0 版本起，您可以通过 ``-metrics`` 选项定义要使用哪些评估指标。未指定选项时默认仅计算 FID。\n即 ``-metrics is fid`` 仅计算 IS 和 FID，而 ``-metrics none`` 跳过评估。\n\n* 使用 GPU ``0`` 训练（``-t``）并评估 ``CONFIG_PATH`` 中定义的模型的 IS、FID、Prc、Rec、Dns、Cvg（``-metrics is fid prdc``）。\n```bash\nCUDA_VISIBLE_DEVICES=0 python3 src\u002Fmain.py -t -metrics is fid prdc -cfg CONFIG_PATH -data DATA_PATH -save SAVE_PATH\n```\n\n* 使用 PIL.LANCZOS 过滤器（``--pre_resizer lanczos``）对图像进行预处理以用于训练和评估。然后，使用 GPU ``0`` 训练（``-t``）并评估 ``CONFIG_PATH`` 中定义的模型的友好型 IS、友好型 FID、友好型 Prc、友好型 Rec、友好型 Dns、友好型 Cvg（``-metrics is fid prdc --post_resizer clean``）。\n```bash\nCUDA_VISIBLE_DEVICES=0 python3 src\u002Fmain.py -t -metrics is fid prdc --pre_resizer lanczos --post_resizer clean -cfg CONFIG_PATH -data DATA_PATH -save SAVE_PATH\n```\n\n* 通过 ``DataParallel`` 使用 GPUs ``(0, 1, 2, 3)`` 训练（``-t``）并评估 ``CONFIG_PATH`` 中定义的模型的 FID。评估 FID 不需要（``-metrics``）参数！\n\n```bash\nCUDA_VISIBLE_DEVICES=0,1,2,3 python3 src\u002Fmain.py -t -cfg CONFIG_PATH -data DATA_PATH -save SAVE_PATH\n```\n\n* 通过 ``DistributedDataParallel`` 使用 GPUs ``(0, 1, 2, 3)``、``Synchronized batch norm``（同步批归一化）和 ``Mixed precision``（混合精度）训练（``-t``）并跳过评估（``-metrics none``）``CONFIG_PATH`` 中定义的模型。\n```bash\nexport MASTER_ADDR=\"localhost\"\nexport MASTER_PORT=2222\nCUDA_VISIBLE_DEVICES=0,1,2,3 python3 src\u002Fmain.py -t -metrics none -cfg CONFIG_PATH -data DATA_PATH -save SAVE_PATH -DDP -sync_bn -mpc \n```\n\n尝试运行 ``python3 src\u002Fmain.py`` 查看可用选项。\n\n# 支持的训练\u002F测试技术\n\n* 将全部数据加载到主内存中 (``-hdf5 -l``)\n  ```bash\n  CUDA_VISIBLE_DEVICES=0,...,N python3 src\u002Fmain.py -t -hdf5 -l -cfg CONFIG_PATH -data DATA_PATH -save SAVE_PATH\n  ```\n\n* 分布式数据并行 (DistributedDataParallel) (请参阅 [此处](https:\u002F\u002Fyangkky.github.io\u002F2019\u002F07\u002F08\u002Fdistributed-pytorch-tutorial.html)) (``-DDP``)\n  ```bash\n  ### NODE_0, 4_GPUs, All ports are open to NODE_1\n  ~\u002Fcode>>> export MASTER_ADDR=PUBLIC_IP_OF_NODE_0\n  ~\u002Fcode>>> export MASTER_PORT=AVAILABLE_PORT_OF_NODE_0\n  ~\u002Fcode\u002FPyTorch-StudioGAN>>> CUDA_VISIBLE_DEVICES=0,1,2,3 python3 src\u002Fmain.py -t -DDP -tn 2 -cn 0 -cfg CONFIG_PATH -data DATA_PATH -save SAVE_PATH\n  ```\n  ```bash\n  ### NODE_1, 4_GPUs, All ports are open to NODE_0\n  ~\u002Fcode>>> export MASTER_ADDR=PUBLIC_IP_OF_NODE_0\n  ~\u002Fcode>>> export 
MASTER_PORT=AVAILABLE_PORT_OF_NODE_0\n  ~\u002Fcode\u002FPyTorch-StudioGAN>>> CUDA_VISIBLE_DEVICES=0,1,2,3 python3 src\u002Fmain.py -t -DDP -tn 2 -cn 1 -cfg CONFIG_PATH -data DATA_PATH -save SAVE_PATH\n  ```\n  \n* [混合精度训练](https:\u002F\u002Farxiv.org\u002Fabs\u002F1710.03740) (``-mpc``)\n  ```bash\n  CUDA_VISIBLE_DEVICES=0,...,N python3 src\u002Fmain.py -t -mpc -cfg CONFIG_PATH -data DATA_PATH -save SAVE_PATH\n  ```\n  \n* [更改批归一化 (Batch Normalization) 统计信息](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.09479)\n  ```bash\n  # Synchronized batchNorm (-sync_bn)\n  CUDA_VISIBLE_DEVICES=0,...,N python3 src\u002Fmain.py -t -sync_bn -cfg CONFIG_PATH -data DATA_PATH -save SAVE_PATH\n  \n  # Standing statistics (-std_stat, -std_max, -std_step)\n  CUDA_VISIBLE_DEVICES=0,...,N python3 src\u002Fmain.py -std_stat -std_max STD_MAX -std_step STD_STEP -cfg CONFIG_PATH -ckpt CKPT -data DATA_PATH -save SAVE_PATH\n  \n  # Batch statistics (-batch_stat)\n  CUDA_VISIBLE_DEVICES=0,...,N python3 src\u002Fmain.py -batch_stat -cfg CONFIG_PATH -ckpt CKPT -data DATA_PATH -save SAVE_PATH\n  ```\n  \n* [截断技巧](https:\u002F\u002Farxiv.org\u002Fabs\u002F1809.11096)\n  ```bash\n  CUDA_VISIBLE_DEVICES=0,...,N python3 src\u002Fmain.py --truncation_factor TRUNCATION_FACTOR -cfg CONFIG_PATH -ckpt CKPT -data DATA_PATH -save SAVE_PATH\n  ```\n\n* [DDLS](https:\u002F\u002Farxiv.org\u002Fabs\u002F2003.06060) (``-lgv -lgv_rate -lgv_std -lgv_decay -lgv_decay_steps -lgv_steps``)\n  ```bash\n  CUDA_VISIBLE_DEVICES=0,...,N python3 src\u002Fmain.py -lgv -lgv_rate LGV_RATE -lgv_std LGV_STD -lgv_decay LGV_DECAY -lgv_decay_steps LGV_DECAY_STEPS -lgv_steps LGV_STEPS -cfg CONFIG_PATH -ckpt CKPT -data DATA_PATH -save SAVE_PATH\n  ```\n\n* [冻结判别器 (Discriminator)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2002.10964) (``-freezeD``)\n  ```bash\n  CUDA_VISIBLE_DEVICES=0,...,N python3 src\u002Fmain.py -t --freezeD FREEZED -ckpt SOURCE_CKPT -cfg TARGET_CONFIG_PATH -data DATA_PATH -save SAVE_PATH\n  ```\n\n# 分析生成的图像\n\nStudioGAN 支持 ``图像可视化、K 近邻分析、线性插值、频率分析、t-SNE 分析和语义因子分解``。所有结果将保存在 ``SAVE_DIR\u002Ffigures\u002FRUN_NAME\u002F*.png`` 中。\n\n* 图像可视化\n```bash\nCUDA_VISIBLE_DEVICES=0,...,N python3 src\u002Fmain.py -v -cfg CONFIG_PATH -ckpt CKPT -save SAVE_DIR\n```\n\n\u003Cp align=\"center\">\n  \u003Cimg width=\"95%\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPOSTECH-CVLab_PyTorch-StudioGAN_readme_82c993e2cc45.png\" \u002F>\n\u003C\u002Fp>\n\n\n* K 近邻分析 (我们固定 K=7，第一列中的图像为生成的图像。)\n```bash\nCUDA_VISIBLE_DEVICES=0,...,N python3 src\u002Fmain.py -knn -cfg CONFIG_PATH -ckpt CKPT -data DATA_PATH -save SAVE_PATH\n```\n\u003Cp align=\"center\">\n  \u003Cimg width=\"95%\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPOSTECH-CVLab_PyTorch-StudioGAN_readme_a60c102d4c1d.png\" \u002F>\n\u003C\u002Fp>\n\n* 线性插值 (仅适用于条件 Big ResNet 模型)\n```bash\nCUDA_VISIBLE_DEVICES=0,...,N python3 src\u002Fmain.py -itp -cfg CONFIG_PATH -ckpt CKPT -save SAVE_DIR\n```\n\u003Cp align=\"center\">\n  \u003Cimg width=\"95%\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPOSTECH-CVLab_PyTorch-StudioGAN_readme_d86f496a005a.png\" \u002F>\n\u003C\u002Fp>\n\n* 频率分析\n```bash\nCUDA_VISIBLE_DEVICES=0,...,N python3 src\u002Fmain.py -fa -cfg CONFIG_PATH -ckpt CKPT -data DATA_PATH -save SAVE_PATH\n```\n\u003Cp align=\"center\">\n  \u003Cimg width=\"60%\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPOSTECH-CVLab_PyTorch-StudioGAN_readme_7edde9ba9b61.png\" \u002F>\n\u003C\u002Fp>\n\n\n* t-SNE 分析\n```bash\nCUDA_VISIBLE_DEVICES=0,...,N 
\n# 分析生成的图像\n\nStudioGAN 支持 ``图像可视化、K 近邻分析、线性插值、频率分析、t-SNE 分析和语义因子分解``。所有结果将保存在 ``SAVE_DIR\u002Ffigures\u002FRUN_NAME\u002F*.png`` 中。\n\n* 图像可视化\n```bash\nCUDA_VISIBLE_DEVICES=0,...,N python3 src\u002Fmain.py -v -cfg CONFIG_PATH -ckpt CKPT -save SAVE_DIR\n```\n\n\u003Cp align=\"center\">\n  \u003Cimg width=\"95%\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPOSTECH-CVLab_PyTorch-StudioGAN_readme_82c993e2cc45.png\" \u002F>\n\u003C\u002Fp>\n\n\n* K 近邻分析 (我们固定 K=7，第一列中的图像为生成的图像。)\n```bash\nCUDA_VISIBLE_DEVICES=0,...,N python3 src\u002Fmain.py -knn -cfg CONFIG_PATH -ckpt CKPT -data DATA_PATH -save SAVE_PATH\n```\n\u003Cp align=\"center\">\n  \u003Cimg width=\"95%\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPOSTECH-CVLab_PyTorch-StudioGAN_readme_a60c102d4c1d.png\" \u002F>\n\u003C\u002Fp>\n\n* 线性插值 (仅适用于条件 Big ResNet 模型)\n```bash\nCUDA_VISIBLE_DEVICES=0,...,N python3 src\u002Fmain.py -itp -cfg CONFIG_PATH -ckpt CKPT -save SAVE_DIR\n```\n\u003Cp align=\"center\">\n  \u003Cimg width=\"95%\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPOSTECH-CVLab_PyTorch-StudioGAN_readme_d86f496a005a.png\" \u002F>\n\u003C\u002Fp>\n\n* 频率分析\n```bash\nCUDA_VISIBLE_DEVICES=0,...,N python3 src\u002Fmain.py -fa -cfg CONFIG_PATH -ckpt CKPT -data DATA_PATH -save SAVE_PATH\n```\n\u003Cp align=\"center\">\n  \u003Cimg width=\"60%\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPOSTECH-CVLab_PyTorch-StudioGAN_readme_7edde9ba9b61.png\" \u002F>\n\u003C\u002Fp>\n\n\n* t-SNE 分析\n```bash\nCUDA_VISIBLE_DEVICES=0,...,N python3 src\u002Fmain.py -tsne -cfg CONFIG_PATH -ckpt CKPT -data DATA_PATH -save SAVE_PATH\n```\n\u003Cp align=\"center\">\n  \u003Cimg width=\"80%\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPOSTECH-CVLab_PyTorch-StudioGAN_readme_3e4b8f9b58f0.png\" \u002F>\n\u003C\u002Fp>\n\n* BigGAN 的语义因子分解\n```bash\nCUDA_VISIBLE_DEVICES=0,...,N python3 src\u002Fmain.py -sefa -sefa_axis SEFA_AXIS -sefa_max SEFA_MAX -cfg CONFIG_PATH -ckpt CKPT -save SAVE_PATH\n```\n\u003Cp align=\"center\">\n  \u003Cimg width=\"95%\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPOSTECH-CVLab_PyTorch-StudioGAN_readme_2fad6b931a05.png\" \u002F>\n\u003C\u002Fp>\n\n\n\n# 训练 GAN (生成对抗网络)\n\nStudioGAN 支持从 DCGAN 到 StyleGAN3-r 的 30 种代表性 GAN 的训练。\n\n我们根据数据集和模型使用了不同的脚本，如下所示：\n\n### CIFAR10\n```bash\nCUDA_VISIBLE_DEVICES=0 python3 src\u002Fmain.py -t -hdf5 -l -std_stat -std_max STD_MAX -std_step STD_STEP -metrics is fid prdc -ref \"train\" -cfg CONFIG_PATH -data DATA_PATH -save SAVE_PATH -mpc --post_resizer \"friendly\" --eval_backbone \"InceptionV3_tf\"\n```\n\n### 使用 StyleGAN2\u002F3 的 CIFAR10\n```bash\nCUDA_VISIBLE_DEVICES=0 python3 src\u002Fmain.py -t -hdf5 -l -metrics is fid prdc -ref \"train\" -cfg CONFIG_PATH -data DATA_PATH -save SAVE_PATH -mpc --post_resizer \"friendly\" --eval_backbone \"InceptionV3_tf\"\n```\n\n### Baby\u002FPapa\u002FGrandpa ImageNet 和 ImageNet\n```bash\nCUDA_VISIBLE_DEVICES=0,1,2,3 python3 src\u002Fmain.py -t -hdf5 -l -sync_bn -std_stat -std_max STD_MAX -std_step STD_STEP -metrics is fid prdc -ref \"train\" -cfg CONFIG_PATH -data DATA_PATH -save SAVE_PATH -mpc --pre_resizer \"lanczos\" --post_resizer \"friendly\" --eval_backbone \"InceptionV3_tf\"\n```\n\n### AFHQv2\n```bash\nexport MASTER_ADDR=\"localhost\"\nexport MASTER_PORT=8888\nCUDA_VISIBLE_DEVICES=0,1,2,3 python3 src\u002Fmain.py -t -metrics is fid prdc -ref \"train\" -cfg CONFIG_PATH -data DATA_PATH -save SAVE_PATH -mpc --pre_resizer \"lanczos\" --post_resizer \"friendly\" --eval_backbone \"InceptionV3_tf\"\n```\n\n### FFHQ\n```bash\nexport MASTER_ADDR=\"localhost\"\nexport MASTER_PORT=8888\nCUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python3 src\u002Fmain.py -t -metrics is fid prdc -ref \"train\" -cfg CONFIG_PATH -data DATA_PATH -save SAVE_PATH -mpc --pre_resizer \"lanczos\" --post_resizer \"friendly\" --eval_backbone \"InceptionV3_tf\"\n```\n
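\n训练完成后（或从 Hugging Face Hub 下载官方检查点后），也可以不加 ``-t``、仅以评估模式运行同一脚本。下面是一个示意（CKPT_PATH 为检查点目录的占位符，具体可用选项请以 ``python3 src\u002Fmain.py`` 的帮助信息为准）：\n\n```bash\n# 仅评估：加载 -ckpt 指定的检查点并计算 IS、FID、Prc、Rec、Dns、Cvg\nCUDA_VISIBLE_DEVICES=0 python3 src\u002Fmain.py -metrics is fid prdc -ref \"train\" -cfg CONFIG_PATH -ckpt CKPT_PATH -data DATA_PATH -save SAVE_PATH\n```\n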
\n# 指标\n\nStudioGAN 支持 Inception Score（IS，Inception 评分）、Frechet Inception Distance（FID，弗雷歇 Inception 距离）、Improved Precision and Recall（Prc, Rec，改进的精确率和召回率）、Density and Coverage（Dns, Cvg，密度和覆盖率）、Intra-Class FID（类内 FID）、Classifier Accuracy Score（分类器准确率分数）。用户可以使用 ``-iFID`` 选项获取 ``Intra-Class FID（类内 FID）``，并使用 ``-GAN_train`` 和 ``-GAN_test`` 选项获取 ``Classifier Accuracy Score（分类器准确率分数）``。\n\n用户可以通过 ``--eval_backbone ResNet50_torch, SwAV_torch, DINO_torch, or Swin-T_torch`` 选项将评估 backbone（骨干网络）从 InceptionV3 更改为 ResNet50、SwAV、DINO 或 Swin Transformer。\n\n此外，用户可以使用 ``--post_resizer clean or friendly`` 选项，通过干净的或架构友好的 resizer（重采样器）来计算指标。\n\n### 1. Inception Score (IS)\nInception Score（IS，Inception 评分）是一种用于衡量 GAN 生成高保真度和多样性图像程度的指标。计算 IS 需要预训练的 Inception-V3 网络。注意，我们不会将数据集分成十折来计算十次 IS。\n\n### 2. Frechet Inception Distance (FID)\nFID（弗雷歇 Inception 距离）是广泛用于评估 GAN 模型性能的指标。计算 FID 需要预训练的 Inception-V3 网络，现代方法通常使用基于 [Tensorflow 的 FID](https:\u002F\u002Fgithub.com\u002Fbioinf-jku\u002FTTUR)。StudioGAN 利用基于 [PyTorch 的 FID](https:\u002F\u002Fgithub.com\u002Fmseitzer\u002Fpytorch-fid) 在相同的 PyTorch 环境中测试 GAN 模型。我们展示了基于 PyTorch 的 FID 实现提供了与 TensorFlow 实现 [几乎相同的结果](https:\u002F\u002Fgithub.com\u002FPOSTECH-CVLab\u002FPyTorch-StudioGAN\u002Fblob\u002Fmaster\u002Fdocs\u002Ffigures\u002FTable3.png)（参见 [ContraGAN 论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.12681) 的附录 F）。\n\n### 3. Improved Precision and Recall (Prc, Rec)\n改进的精确率和召回率是为弥补精确率和召回率的缺点而开发的。与 IS、FID 一样，计算改进的精确率和召回率需要预训练的 Inception-V3 模型。StudioGAN 使用 [密度和覆盖率分数的开发者提供的 PyTorch 实现](https:\u002F\u002Fgithub.com\u002Fclovaai\u002Fgenerative-evaluation-prdc)。\n\n### 4. Density and Coverage (Dns, Cvg)\n密度和覆盖率指标可以使用预训练的 Inception-V3 模型来估计生成图像的保真度和多样性。这些指标已知对异常值具有鲁棒性，并且可以检测真实和虚假分布是否相同。StudioGAN 使用 [作者的官方 PyTorch 实现](https:\u002F\u002Fgithub.com\u002Fclovaai\u002Fgenerative-evaluation-prdc)，并遵循作者关于超参数选择的建议。\n
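\n作为参考，IS 与 FID 的标准定义如下（p(y|x) 为 Inception 网络给出的类别后验，p_g 为生成分布；μ_r、Σ_r 与 μ_g、Σ_g 分别为真实\u002F生成图像 Inception 特征的均值与协方差）：\n\n```latex\n% Inception Score：对类别后验与边缘分布之间 KL 散度的期望取指数\n\\mathrm{IS} = \\exp\\Big( \\mathbb{E}_{x \\sim p_g} \\big[ D_{\\mathrm{KL}}\\big( p(y \\mid x) \\,\\Vert\\, p(y) \\big) \\big] \\Big)\n\n% FID：真实与生成特征各自高斯拟合之间的 Frechet 距离\n\\mathrm{FID} = \\lVert \\mu_r - \\mu_g \\rVert_2^2 + \\mathrm{Tr}\\big( \\Sigma_r + \\Sigma_g - 2 (\\Sigma_r \\Sigma_g)^{1\u002F2} \\big)\n```\n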
\n# 基准测试\n\n#### ※ 如果您发现任何错误的实现、漏洞或报告的分数有误，我们始终欢迎您的贡献。\n\n我们报告了 GAN 的最佳 IS、FID、改进的精确率 & 召回率，以及密度 & 覆盖率。\n\n要下载 StudioGAN 中报告的所有检查点（checkpoints），请 [**点击这里**](https:\u002F\u002Fhuggingface.co\u002FMingguksky\u002FPyTorch-StudioGAN\u002Ftree\u002Fmain)（Hugging Face Hub）。\n\n您可以通过添加 ``-ckpt CKPT_PATH`` 选项以及相应的配置路径 ``-cfg CORRESPONDING_CONFIG_PATH`` 来评估检查点。\n\n### 1. StudioGAN 中的 GAN\n\nCIFAR10、Baby ImageNet、Papa ImageNet、Grandpa ImageNet、ImageNet、AFHQv2 和 FFHQ 的分辨率分别为 32、64、64、64、128、512 和 1024。\n\n对于 Frechet Inception Distance (FID)、Precision、Recall、Density 和 Coverage 的计算，我们使用与训练图像数量相同的生成图像数量。对于使用 Baby\u002FPapa\u002FGrandpa ImageNet 和 ImageNet 的实验，我们例外地使用 50k 张假图像，并以完整的训练集作为真实图像。\n\n参考数据集的所有特征（features）和矩（moments）可以通过 [**特征**](https:\u002F\u002Fpostechackr-my.sharepoint.com\u002F:f:\u002Fg\u002Fpersonal\u002Fjaesik_postech_ac_kr\u002FElbkH1fLidJDpzUvrZZiT6EBZgBUhi-t1xoOhnqCas2p9g?e=WfGdGT) 和 [**矩**](https:\u002F\u002Fpostechackr-my.sharepoint.com\u002F:f:\u002Fg\u002Fpersonal\u002Fjaesik_postech_ac_kr\u002FEn88Meh2gJtKk-1tIM1b3YEBcUZlP_4ksAI-qAS9pja4Yw?e=3OWJ7E) 下载。\n\n\u003Cp align=\"center\">\n  \u003Cimg width=\"95%\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPOSTECH-CVLab_PyTorch-StudioGAN_readme_09bf7f8dbb69.png\"\u002F>\n\u003C\u002Fp>\n\n### 2. 其他生成模型\n\nImageNet-128 和 ImageNet-256 的分辨率分别为 128 和 256。\n\n用于基准测试的所有图像可以通过 OneDrive 下载（即将上传）。\n\n\u003Cp align=\"center\">\n  \u003Cimg width=\"95%\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPOSTECH-CVLab_PyTorch-StudioGAN_readme_5c9502346b0b.png\"\u002F>\n\u003C\u002Fp>\n\n# 评估预保存的图像文件夹\n\n* 使用 GPU ``(0,...,N)`` 评估保存在 DSET1 和 DSET2 中的图像文件夹（已预处理）的 IS、FID、Prc、Rec、Dns、Cvg（``-metrics is fid prdc``）。\n\n```bash\nCUDA_VISIBLE_DEVICES=0,...,N python3 src\u002Fevaluate.py -metrics is fid prdc --dset1 DSET1 --dset2 DSET2\n```\n\n* 使用预计算的特征（``--dset1_feats DSET1_FEATS``）、dset1 的矩（``--dset1_moments DSET1_MOMENTS``）以及 GPU ``(0,...,N)`` 评估保存在 DSET2 中的图像文件夹的 IS、FID、Prc、Rec、Dns、Cvg（``-metrics is fid prdc``）。\n\n```bash\nCUDA_VISIBLE_DEVICES=0,...,N python3 src\u002Fevaluate.py -metrics is fid prdc --dset1_feats DSET1_FEATS --dset1_moments DSET1_MOMENTS --dset2 DSET2\n```\n\n* 通过 ``DistributedDataParallel（分布式数据并行）`` 使用 GPU ``(0,...,N)`` 评估保存在 DSET1 和 DSET2 中的图像文件夹的 friendly-IS、friendly-FID、friendly-Prc、friendly-Rec、friendly-Dns、friendly-Cvg（``-metrics is fid prdc --post_resizer friendly``）。\n\n```bash\nexport MASTER_ADDR=\"localhost\"\nexport MASTER_PORT=2222\nCUDA_VISIBLE_DEVICES=0,...,N python3 src\u002Fevaluate.py -metrics is fid prdc --post_resizer friendly --dset1 DSET1 --dset2 DSET2 -DDP\n```\n
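\n例如（目录名仅为示意），比较 ``.\u002Freal_images`` 与 ``.\u002Ffake_images`` 两个文件夹并只计算 FID：\n\n```bash\n# dset1 为参考（真实）图像文件夹，dset2 为待评估（生成）图像文件夹\nCUDA_VISIBLE_DEVICES=0 python3 src\u002Fevaluate.py -metrics fid --dset1 .\u002Freal_images --dset2 .\u002Ffake_images\n```\n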
\n## StudioGAN 感谢以下仓库的代码共享\n\n[[MIT 许可证]](https:\u002F\u002Fgithub.com\u002FPOSTECH-CVLab\u002FPyTorch-StudioGAN\u002Fblob\u002Fmaster\u002Fsrc\u002Fsync_batchnorm\u002FLICENSE) 同步批归一化 (Synchronized BatchNorm): https:\u002F\u002Fgithub.com\u002Fvacancy\u002FSynchronized-BatchNorm-PyTorch\n\n[[MIT 许可证]](https:\u002F\u002Fgithub.com\u002FPOSTECH-CVLab\u002FPyTorch-StudioGAN\u002Fblob\u002Fmaster\u002Fsrc\u002Futils\u002Fops.py) 自注意力模块 (Self-Attention module): https:\u002F\u002Fgithub.com\u002Fvoletiv\u002Fself-attention-GAN-pytorch\n\n[[MIT 许可证]](https:\u002F\u002Fgithub.com\u002FPOSTECH-CVLab\u002FPyTorch-StudioGAN\u002Fblob\u002Fmaster\u002Fsrc\u002Futils\u002Fdiffaug.py) DiffAugment: https:\u002F\u002Fgithub.com\u002Fmit-han-lab\u002Fdata-efficient-gans\n\n[[MIT 许可证]](https:\u002F\u002Fgithub.com\u002FPOSTECH-CVLab\u002FPyTorch-StudioGAN\u002Fblob\u002Fmaster\u002Fsrc\u002Fmetrics\u002Fprdc.py) PyTorch 改进的精确率和召回率 (Improved Precision and Recall): https:\u002F\u002Fgithub.com\u002Fclovaai\u002Fgenerative-evaluation-prdc\n\n[[MIT 许可证]](https:\u002F\u002Fgithub.com\u002FPOSTECH-CVLab\u002FPyTorch-StudioGAN\u002Fblob\u002Fmaster\u002Fsrc\u002Fmetrics\u002Fprdc.py) PyTorch 密度和覆盖率 (Density and Coverage): https:\u002F\u002Fgithub.com\u002Fclovaai\u002Fgenerative-evaluation-prdc\n\n[[MIT 许可证]](https:\u002F\u002Fgithub.com\u002FPOSTECH-CVLab\u002FPyTorch-StudioGAN\u002Fblob\u002Fmaster\u002Fsrc\u002Futils\u002Fresize.py) PyTorch clean-FID: https:\u002F\u002Fgithub.com\u002FGaParmar\u002Fclean-fid\n\n[[NVIDIA 源代码许可证]](https:\u002F\u002Fgithub.com\u002FPOSTECH-CVLab\u002FPyTorch-StudioGAN\u002Fblob\u002Fmaster\u002FLICENSE-NVIDIA) StyleGAN2: https:\u002F\u002Fgithub.com\u002FNVlabs\u002Fstylegan2\n\n[[NVIDIA 源代码许可证]](https:\u002F\u002Fgithub.com\u002FPOSTECH-CVLab\u002FPyTorch-StudioGAN\u002Fblob\u002Fmaster\u002FLICENSE-NVIDIA) 自适应判别器增强 (Adaptive Discriminator Augmentation): https:\u002F\u002Fgithub.com\u002FNVlabs\u002Fstylegan2\n\n[[Apache 许可证]](https:\u002F\u002Fgithub.com\u002FPOSTECH-CVLab\u002FPyTorch-StudioGAN\u002Fblob\u002Fmaster\u002Fsrc\u002Fmetrics\u002Ffid.py) PyTorch FID: https:\u002F\u002Fgithub.com\u002Fmseitzer\u002Fpytorch-fid\n\n## 许可证\nPyTorch-StudioGAN 是一个基于 MIT 许可证 (MIT) 的开源库。然而，该库的部分内容遵循不同的许可条款：StyleGAN2、StyleGAN2-ADA 和 StyleGAN3 采用 [NVIDIA 源代码许可证](https:\u002F\u002Fgithub.com\u002FPOSTECH-CVLab\u002FPyTorch-StudioGAN\u002Fblob\u002Fmaster\u002FLICENSE-NVIDIA) 授权，而 PyTorch-FID 采用 [Apache 许可证](https:\u002F\u002Fgithub.com\u002FPOSTECH-CVLab\u002FPyTorch-StudioGAN\u002Fblob\u002Fmaster\u002Fsrc\u002Fmetrics\u002Ffid.py) 授权。\n\n## 引用\nStudioGAN 是为以下研究项目而建立的。如果您使用 StudioGAN，请引用我们的工作。\n```bib\n@article{kang2023StudioGANpami,\n  title   = {{StudioGAN: A Taxonomy and Benchmark of GANs for Image Synthesis}},\n  author  = {MinGuk Kang and Joonghyuk Shin and Jaesik Park},\n  journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)},\n  year    = {2023}\n}\n```\n\n```bib\n@inproceedings{kang2021ReACGAN,\n  title     = {{Rebooting ACGAN: Auxiliary Classifier GANs with Stable Training}},\n  author    = {Minguk Kang and Woohyeon Shim and Minsu Cho and Jaesik Park},\n  booktitle = {Conference on Neural Information Processing Systems (NeurIPS)},\n  year      = {2021}\n}\n```\n\n```bib\n@inproceedings{kang2020ContraGAN,\n  title     = {{ContraGAN: Contrastive Learning for Conditional Image Generation}},\n  author    = {Minguk Kang and Jaesik Park},\n  booktitle = {Conference on Neural Information Processing Systems (NeurIPS)},\n  year      = {2020}\n}\n```\n---------------------------------------\n\n\u003Ca name=\"footnote_1\">[1]\u003C\u002Fa> Tiny ImageNet 上的实验使用 ResNet 架构而非 CNN 进行。\n\n\u003Ca name=\"footnote_2\">[2]\u003C\u002Fa> 这是我们对 [ACGAN (ICML'17)](https:\u002F\u002Farxiv.org\u002Fabs\u002F1610.09585) 的重新实现，其中的轻微修改为使用 CIFAR10 的实验带来了显著的性能提升。","# PyTorch-StudioGAN 快速上手指南\n\n**StudioGAN** 是一个基于 PyTorch 的开源库，提供代表性生成对抗网络（GANs）的实现，支持条件\u002F无条件图像生成。它旨在为机器学习研究人员提供一个统一的实验平台，以便轻松比较和分析新想法，并提供大规模的生成模型基准测试。\n\n## 1. 环境准备\n\n*   **操作系统**: Linux \u002F macOS \u002F Windows (需支持 Docker 或 Python 环境)\n*   **Python**: 推荐 Python 3.x\n*   **深度学习框架**: PyTorch (版本至少 1.7)\n*   **硬件**: 建议使用 NVIDIA GPU 以利用加速功能（如混合精度训练、分布式训练）\n\n## 2. 安装步骤\n\n### 方式一：通过 pip 安装\n\n首先安装符合环境要求的 PyTorch：\n\n```bash\npip install torch torchvision torchaudio --extra-index-url https:\u002F\u002Fdownload.pytorch.org\u002Fwhl\u002Fcu116\n```\n\n然后安装其余依赖库（注意：PyPI 上的 ``sklearn`` 包已废弃，应安装 ``scikit-learn``）：\n\n```bash\npip install tqdm ninja h5py kornia matplotlib pandas scikit-learn scipy seaborn wandb PyYaml click requests pyspng imageio-ffmpeg timm\n```\n\n### 方式二：使用 Docker (可选)\n\n如果您希望使用预配置的容器环境，可以使用以下命令：\n\n拉取镜像：\n```bash\ndocker pull alex4727\u002Fexperiment:pytorch113_cuda116\n```\n\n启动容器（请根据实际情况修改挂载路径）：\n```bash\ndocker run -it --gpus all --shm-size 128g --name StudioGAN -v \u002Fpath_to_your_folders:\u002Froot\u002Fcode --workdir \u002Froot\u002Fcode alex4727\u002Fexperiment:pytorch113_cuda116 \u002Fbin\u002Fbash\n```\n
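\n安装完成后，可先确认 PyTorch 与 CUDA 是否就绪（这只是通用的 PyTorch 自检命令，并非 StudioGAN 专用）：\n\n```bash\n# 打印 PyTorch 版本以及 CUDA 是否可用\npython3 -c \"import torch; print(torch.__version__, torch.cuda.is_available())\"\n```\n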
\n## 3. 基本使用\n\nStudioGAN 采用模块化设计，所有选项均通过 **YAML 配置文件**进行管理。用户只需修改配置文件即可组合不同的 GAN 架构、损失函数和正则化模块进行训练。\n\n*   **配置训练**: 根据需求编辑 YAML 文件，选择支持的 GAN 架构（如 BigGAN, StyleGAN2 等）、条件方法、损失函数及评估指标。\n*   **数据集支持**: 支持标准数据集基准测试，包括 CIFAR10, ImageNet, AFHQv2, 和 FFHQ。\n*   **评估指标**: 内置多种评估指标，包括 IS, FID, PRDC, iFID 等，并支持 InceptionV3, ResNet50, SwAV, DINO, Swin Transformer 等多种评估骨干网络。\n*   **训练加速**: 支持单卡训练、数据并行 (DP)、分布式数据并行 (DDP)、多节点分布式 (MDDP) 以及混合精度训练。\n\n更多详细的预训练模型、日志及具体参数配置，请访问官方仓库或 Hugging Face Hub 获取。","某电商公司算法团队正在研发虚拟试衣系统，急需对比多种生成对抗网络在商品图合成任务上的表现。\n\n### 没有 PyTorch-StudioGAN 时\n- 需要从零手写多种 GAN 架构代码，工程师需花费数周时间重复造轮子。\n- 不同模型依赖库版本混乱，导致训练环境不一致，结果难以公平对比。\n- 缺乏标准化的评估流程，无法量化生成图像的真实度与多样性差异。\n- 调试过程中常因隐藏的技巧或参数设置问题，导致实验结果无法复现。\n\n### 使用 PyTorch-StudioGAN 后\n- 直接集成 7 种主流 GAN 架构实现，无需编写底层网络结构代码。\n- 通过 YAML 配置文件灵活组合模块，轻松切换不同变体进行横向对比。\n- 内置 IS、FID 等 8 种评估指标及多骨干网络，自动输出标准化性能报告。\n- 提供预训练模型与统一计算环境，确保实验结果高度可复现且训练高效。\n\nPyTorch-StudioGAN 将工程负担转化为配置选择，让研究人员能专注于算法创新本身，大幅缩短模型选型周期。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FPOSTECH-CVLab_PyTorch-StudioGAN_fb47e168.jpg","POSTECH-CVLab","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FPOSTECH-CVLab_9b09863a.png",null,"https:\u002F\u002Fgithub.com\u002FPOSTECH-CVLab",[80,84,88],{"name":81,"color":82,"percentage":83},"Python","#3572A5",84.7,{"name":85,"color":86,"percentage":87},"Cuda","#3A4E3A",11.6,{"name":89,"color":90,"percentage":91},"C++","#f34b7d",3.7,3491,343,"2026-04-04T15:25:48","NOASSERTION","未说明","需要 NVIDIA GPU，CUDA 11.6+, 显存未说明",{"notes":99,"python":96,"dependencies":100},"推荐使用官方提供的 Docker 镜像 (alex4727\u002Fexperiment:pytorch113_cuda116)，运行容器时建议设置共享内存 (--shm-size 128g)。预训练模型和日志可在 Hugging Face Hub 下载。",[101,102,103,104,105,106,107,108,109,110],"torch>=1.7","torchvision","kornia","timm","wandb","pandas","scikit-learn","scipy","matplotlib","PyYaml",[51,13,14],[113,114,115,116,117,118,119,120,121,122],"pytorch","deep-learning","generative-adversarial-network","biggan","stylegan2","machine-learning","stylegan2-ada","stylegan3","data-efficient-gan-training","clean-fid",4,"2026-03-27T02:49:30.150509","2026-04-06T07:01:02.231751",[127,132,137,142,147,152],{"id":128,"question_zh":129,"answer_zh":130,"source_url":131},2849,"训练总是在步骤 4000 后停止，如何解决？","请打印 self.num_eval 并输入适当的评估键。如果遇到 \"ref\" 属性未知的错误，请将代码中的 self.num_eval[self.RUN.ref_dataset] 修改为 self.num_eval[\"test\"]。","https:\u002F\u002Fgithub.com\u002FPOSTECH-CVLab\u002FPyTorch-StudioGAN\u002Fissues\u002F136",{"id":133,"question_zh":134,"answer_zh":135,"source_url":136},2850,"如何使用现有的预训练权重生成图像（不进行训练）？","使用 -ckpt 参数指定 checkpoint 目录。例如：python src\u002Fmain.py -t -metrics is fid prdc -cfg CONFIG_PATH -data DATA_PATH -save SAVE_PATH -ckpt CHECKPOINT_DIR。确保路径正确以便加载 .pth 文件。","https:\u002F\u002Fgithub.com\u002FPOSTECH-CVLab\u002FPyTorch-StudioGAN\u002Fissues\u002F185",{"id":138,"question_zh":139,"answer_zh":140,"source_url":141},2851,"运行时出现 AttributeError: module 'utils' has no attribute 'misc' 循环导入错误怎么办？","这是 Python 3.6 版本的已知问题。建议拉取最新主分支代码修复。临时解决方法是在 config.py, ckpt.py 等文件中添加 import sys 和 sys.path.append(\".\u002Fsrc\u002Futils\")。","https:\u002F\u002Fgithub.com\u002FPOSTECH-CVLab\u002FPyTorch-StudioGAN\u002Fissues\u002F119",{"id":143,"question_zh":144,"answer_zh":145,"source_url":146},2852,"ContraGAN 的 FID 分数有时不如 BigGAN，这正常吗？","结果可能因实验设置而异。有用户测试一致性正则化技巧得到 IS=9.316，FID=10.566。建议参考官方日志，不同模型在不同数据集上的表现会有波动。","https:\u002F\u002Fgithub.com\u002FPOSTECH-CVLab\u002FPyTorch-StudioGAN\u002Fissues\u002F1",{"id":148,"question_zh":149,"answer_zh":150,"source_url":151},2853,"sample_latents 中的截断技巧（Truncation 
Trick）实现是否有误？","原实现并非标准截断技巧（采样自截断正态分布）。维护者已确认该问题并进行了修复。","https:\u002F\u002Fgithub.com\u002FPOSTECH-CVLab\u002FPyTorch-StudioGAN\u002Fissues\u002F62",{"id":153,"question_zh":154,"answer_zh":155,"source_url":156},2854,"losses.py 中 transport_cost 未定义报错如何修复？","这是已知 bug，维护者正在检查修复。临时解决方案可参考用户建议，将 sample.py 第 31 行的 latents = truncated_normal(...) 改为 latents = torch.FloatTensor(truncated_normal(...))。","https:\u002F\u002Fgithub.com\u002FPOSTECH-CVLab\u002FPyTorch-StudioGAN\u002Fissues\u002F63",[158,163,168,173],{"id":159,"version":160,"summary_zh":161,"released_at":162},102323,"v.0.4.0","* We checked the reproducibility of implemented GANs.\r\n* We provide Baby, Papa, and Grandpa ImageNet datasets where images are processed using the anti-aliasing and high-quality resizer.\r\n* StudioGAN provides a dedicatedly established Benchmark on standard datasets (CIFAR10, ImageNet, AFHQv2, and FFHQ).\r\n* StudioGAN supports InceptionV3, ResNet50, SwAV, DINO, and Swin Transformer backbones for GAN evaluation.","2022-07-05T18:09:50",{"id":164,"version":165,"summary_zh":166,"released_at":167},102324,"v.0.3.0","- Add SOTA GANs: LGAN, TACGAN, StyleGAN2, MDGAN, MHGAN, ADCGAN, [ReACGAN (our new paper)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.01118).\r\n- Add five types of differentiable augmentation: CR, DiffAugment, ADA, SimCLR, BYOL.\r\n- Implement useful regularizations: Top-K training, Feature Matching, R1-Regularization, MaxGP\r\n- Add Improved Precision & Recall, Density & Coverage, iFID, and CAS for reliable evaluation.\r\n- Support Inception_V3 and SwAV backbones for GAN evaluation.\r\n- Verify the reproducibility of StyleGAN2 and BigGAN.\r\n- Fix bugs in FreezeD, DDP training, Mixed Precision training, and ADA.\r\n- Support Discriminator Driven Latent Sampling, Semantic Factorization for BigGAN evaluation.\r\n- Support Wandb logging instead of Tensorboard.","2021-11-05T16:08:22",{"id":169,"version":170,"summary_zh":171,"released_at":172},102325,"v0.2.0","## Second release of StudioGAN with following features\r\n\r\n- Fix minor bugs (slow convergence of training GAN + ADA models, tracking bn statistics during evaluation, etc.)\r\n- Add multi-node DistributedDataParallel (DDP) training.\r\n- Comprehensive benchmarks on CIFAR10, Tiny_ImageNet, and ImageNet datasets.\r\n- Provide pre-trained models and log files for the future research.\r\n- Add LARS optimizer and TSNE analysis. ","2021-02-23T14:33:14",{"id":174,"version":175,"summary_zh":176,"released_at":177},102326,"v0.1.0","## First StudioGAN release with following features\r\n\r\n- Extensive GAN implementations for Pytorch: From DCGAN to ADAGAN\r\n- Comprehensive benchmark of GANs using CIFAR10 dataset\r\n- Better performance and lower memory consumption than original implementations\r\n- Providing pre-trained models that are fully compatible with up-to-date PyTorch environment\r\n- Support Multi-GPU(both DP and DDP), Mixed precision, Synchronized Batch Normalization, and Tensorboard Visualization","2020-12-07T03:02:04"]