[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-nightrome--really-awesome-gan":3,"tool-nightrome--really-awesome-gan":64},[4,17,26,40,48,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,2,"2026-04-03T11:11:01",[13,14,15],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":23,"last_commit_at":32,"category_tags":33,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 
道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,34,35,36,15,37,38,13,39],"数据工具","视频","插件","其他","语言模型","音频",{"id":41,"name":42,"github_repo":43,"description_zh":44,"stars":45,"difficulty_score":10,"last_commit_at":46,"category_tags":47,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,38,37],{"id":49,"name":50,"github_repo":51,"description_zh":52,"stars":53,"difficulty_score":10,"last_commit_at":54,"category_tags":55,"status":16},519,"PaddleOCR","PaddlePaddle\u002FPaddleOCR","PaddleOCR 是一款基于百度飞桨框架开发的高性能开源光学字符识别工具包。它的核心能力是将图片、PDF 等文档中的文字提取出来，转换成计算机可读取的结构化数据，让机器真正“看懂”图文内容。\n\n面对海量纸质或电子文档，PaddleOCR 解决了人工录入效率低、数字化成本高的问题。尤其在人工智能领域，它扮演着连接图像与大型语言模型（LLM）的桥梁角色，能将视觉信息直接转化为文本输入，助力智能问答、文档分析等应用场景落地。\n\nPaddleOCR 适合开发者、算法研究人员以及有文档自动化需求的普通用户。其技术优势十分明显：不仅支持全球 100 多种语言的识别，还能在 Windows、Linux、macOS 等多个系统上运行，并灵活适配 CPU、GPU、NPU 等各类硬件。作为一个轻量级且社区活跃的开源项目，PaddleOCR 既能满足快速集成的需求，也能支撑前沿的视觉语言研究，是处理文字识别任务的理想选择。",74913,"2026-04-05T10:44:17",[38,14,13,37],{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":23,"last_commit_at":62,"category_tags":63,"status":16},2471,"tesseract","tesseract-ocr\u002Ftesseract","Tesseract 
是一款历史悠久且备受推崇的开源光学字符识别（OCR）引擎，最初由惠普实验室开发，后由 Google 维护，目前由全球社区共同贡献。它的核心功能是将图片中的文字转化为可编辑、可搜索的文本数据，有效解决了从扫描件、照片或 PDF 文档中提取文字信息的难题，是数字化归档和信息自动化的重要基础工具。\n\n在技术层面，Tesseract 展现了强大的适应能力。从版本 4 开始，它引入了基于长短期记忆网络（LSTM）的神经网络 OCR 引擎，显著提升了行识别的准确率；同时，为了兼顾旧有需求，它依然支持传统的字符模式识别引擎。Tesseract 原生支持 UTF-8 编码，开箱即用即可识别超过 100 种语言，并兼容 PNG、JPEG、TIFF 等多种常见图像格式。输出方面，它灵活支持纯文本、hOCR、PDF、TSV 等多种格式，方便后续数据处理。\n\nTesseract 主要面向开发者、研究人员以及需要构建文档处理流程的企业用户。由于它本身是一个命令行工具和库（libtesseract），不包含图形用户界面（GUI），因此最适合具备一定编程能力的技术人员集成到自动化脚本或应用程序中。",73286,"2026-04-03T01:56:45",[13,14],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":70,"readme_en":71,"readme_zh":72,"quickstart_zh":73,"use_case_zh":74,"hero_image_url":75,"owner_login":76,"owner_name":77,"owner_avatar_url":78,"owner_bio":79,"owner_company":80,"owner_location":81,"owner_email":82,"owner_twitter":82,"owner_website":83,"owner_url":84,"languages":82,"stars":85,"forks":86,"last_commit_at":87,"license":82,"difficulty_score":88,"env_os":89,"env_gpu":90,"env_ram":90,"env_deps":91,"category_tags":94,"github_topics":82,"view_count":23,"oss_zip_url":82,"oss_zip_packed_at":82,"status":16,"created_at":95,"updated_at":96,"faqs":97,"releases":128},3670,"nightrome\u002Freally-awesome-gan","really-awesome-gan","A list of papers on Generative Adversarial (Neural) Networks","really-awesome-gan 是一个专注于生成对抗网络（GAN）领域的精选资源库，由 Holger Caesar 维护。它并非一个可执行的软件程序，而是一份详尽的文献与学习指南，旨在帮助从业者系统性地掌握 GAN 技术。\n\n在 GAN 技术从前沿探索走向主流应用的过程中，相关论文数量呈爆炸式增长，研究者往往难以快速筛选出高质量的核心资料。really-awesome-gan 通过人工整理，解决了信息过载与检索困难的问题。它将资源科学分类为理论综述、计算机视觉应用、跨领域应用甚至趣味项目，并特别推荐了如 CycleGAN 等里程碑式的研究成果。此外，该库还汇集了来自 NIPS 等顶级会议的教程、博客文章、视频讲解以及开源代码链接，构建了从理论基础到生产实践完整的学习路径。\n\n这份资源非常适合人工智能研究人员、深度学习开发者以及对生成式模型感兴趣的学生使用。对于希望深入理解 GAN 原理、追踪最新学术动态或寻找项目灵感的专业人士而言，really-awesome-gan 提供了极高的参考价值。尽管维护者于 2017 年停止了更新，但其收录的经典文献和结构化知识体系至今仍是进入 GAN 领域不可或缺的入门基石。","really-awesome-gan 是一个专注于生成对抗网络（GAN）领域的精选资源库，由 Holger Caesar 维护。它并非一个可执行的软件程序，而是一份详尽的文献与学习指南，旨在帮助从业者系统性地掌握 GAN 技术。\n\n在 GAN 
技术从前沿探索走向主流应用的过程中，相关论文数量呈爆炸式增长，研究者往往难以快速筛选出高质量的核心资料。really-awesome-gan 通过人工整理，解决了信息过载与检索困难的问题。它将资源科学分类为理论综述、计算机视觉应用、跨领域应用甚至趣味项目，并特别推荐了如 CycleGAN 等里程碑式的研究成果。此外，该库还汇集了来自 NIPS 等顶级会议的教程、博客文章、视频讲解以及开源代码链接，构建了从理论基础到生产实践完整的学习路径。\n\n这份资源非常适合人工智能研究人员、深度学习开发者以及对生成式模型感兴趣的学生使用。对于希望深入理解 GAN 原理、追踪最新学术动态或寻找项目灵感的专业人士而言，really-awesome-gan 提供了极高的参考价值。尽管维护者于 2017 年停止了更新，但其收录的经典文献和结构化知识体系至今仍是进入 GAN 领域不可或缺的入门基石，同时也鼓励社区在此基础上继续拓展和完善。","# really-awesome-gan\nA list of papers and other resources on Generative Adversarial (Neural) Networks.\nThis site is maintained by Holger Caesar.\nTo complement or correct it, please contact me at holger-at-it-caesar.com or visit [it-caesar.com](http:\u002F\u002Fwww.it-caesar.com). Also check out [really-awesome-semantic-segmentation](https:\u002F\u002Fgithub.com\u002Fnightrome\u002Freally-awesome-semantic-segmentation) and our [COCO-Stuff dataset](https:\u002F\u002Fgithub.com\u002Fnightrome\u002Fcocostuff).\n\n**NOTE:** Despite the enormous interest in this site (~3000 visitors per month), I will no longer add new papers starting from November 2017. I feel that GANs have gone from an exotic topic to the mainstream, and an exhaustive list of all GAN papers is no longer feasible or useful. 
However, I invite other people to continue this effort and reuse my list.\n\n## Contents\n- [Recommendations](#recommendations)\n- [Tutorials & Workshops & Blogs](#tutorials--workshops--blogs)\n- [Books](#books)\n- [Videos](#videos)\n- [Code](#code)\n- [Papers](#papers)\n  - [Overview](#overview)\n  - [Theory & Machine Learning](#theory--machine-learning)\n  - [Applied Vision](#applied-vision)\n  - [Applied Other](#applied-other)\n  - [Humor](#humor)\n  \n## Recommendations\n\u003Cul>\n\u003Cli>Beyond Face Rotation: Global and Local Perception GAN for Photorealistic and Identity Preserving Frontal View Synthesis \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F1704.04086\">[arXiv]\u003C\u002Fa> \n\u003Cimg src=\"http:\u002F\u002Fit-caesar.com\u002Fgithub\u002Fbeyond-face-rotation.png\" alt=\"Beyond face rotation\">\u003C\u002Fli>\n\n\u003Cli>Pose Guided Person Image Generation \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F1705.09368\">[arXiv]\u003C\u002Fa> \n\u003Cimg src=\"http:\u002F\u002Fit-caesar.com\u002Fgithub\u002Fpose-guided-person.png\" alt=\"Pose guided person\">\u003C\u002Fli>\n\n\u003Cli>Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F1703.10593\">[arXiv]\u003C\u002Fa>  \n\u003Cimg src=\"http:\u002F\u002Fit-caesar.com\u002Fgithub\u002Fcycle-gan.png\" alt=\"Cycle GAN\">\u003C\u002Fli>\n\u003C\u002Ful>\n\n# Tutorials & Workshops & Blogs\n- Columbia Advanced Machine Learning Seminar\n  - New Progress on GAN Theory and Practice [[Blog]](https:\u002F\u002Fcasmls.github.io\u002Fgeneral\u002F2017\u002F04\u002F13\u002Fgan.html)\n  - Implicit Generative Models — What are you GAN-na do? [[Blog]](https:\u002F\u002Fcasmls.github.io\u002Fgeneral\u002F2017\u002F05\u002F24\u002Fligm.html)\n- How to Train a GAN? 
Tips and tricks to make GANs work [[Blog]](https:\u002F\u002Fgithub.com\u002Fsoumith\u002Fganhacks)\n- NIPS 2016 Tutorial: Generative Adversarial Networks [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1701.00160)\n- NIPS 2016 Workshop on Adversarial Training [[Web]](https:\u002F\u002Fsites.google.com\u002Fsite\u002Fnips2016adversarial\u002F) [[Blog]](http:\u002F\u002Fwww.inference.vc\u002Fmy-summary-of-adversarial-training-nips-workshop\u002F)\n- On the intuition behind deep learning & GANs — towards a fundamental understanding [[Blog]](https:\u002F\u002Fblog.waya.ai\u002Fintroduction-to-gans-a-boxing-match-b-w-neural-nets-b4e5319cc935)\n- OpenAI - Generative Models [[Blog]](https:\u002F\u002Fopenai.com\u002Fblog\u002Fgenerative-models\u002F)\n- SimGANs - a game changer in unsupervised learning, self driving cars, and more [[Blog]](https:\u002F\u002Fblog.waya.ai\u002Fsimgans-applied-to-autonomous-driving-5a8c6676e36b)\n- Deep Diving into GANs: from theory to production (EuroScipy 2018) [[GitHub]](https:\u002F\u002Fgithub.com\u002Fzurutech\u002Fgans-from-theory-to-production) \n\n# Books\n- GANs in Action: Deep learning with Generative Adversarial Networks [[Book]](https:\u002F\u002Fwww.manning.com\u002Fbooks\u002Fgans-in-action)\n\n# Videos\n- Generative Adversarial Networks by Ian Goodfellow [[Video]](https:\u002F\u002Fchannel9.msdn.com\u002FEvents\u002FNeural-Information-Processing-Systems-Conference\u002FNeural-Information-Processing-Systems-Conference-NIPS-2016\u002FGenerative-Adversarial-Networks)\n- Tutorial on Generative Adversarial Networks by Mark Chang [[Video]](https:\u002F\u002Fwww.youtube.com\u002Fplaylist?list=PLeeHDpwX2Kj5Ugx6c9EfDLDojuQxnmxmU)\n- Deep Diving into GANs: From Theory to Production (EuroSciPy 2018) by Michele De Simoni, Paolo Galeone [[Video]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=CePrdabdtxw) \n\n# Code\n- Cleverhans: A library for benchmarking vulnerability to adversarial examples 
[[Code]](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fcleverhans) [[Blog]](http:\u002F\u002Fcleverhans.io\u002F)\n- Generative Adversarial Networks (GANs) in 50 lines of code (PyTorch) [[Blog]](https:\u002F\u002Fmedium.com\u002F@devnag\u002Fgenerative-adversarial-networks-gans-in-50-lines-of-code-pytorch-e81b79659e3f) [[Code]](https:\u002F\u002Fgithub.com\u002Fdevnag\u002Fpytorch-generative-adversarial-networks)\n- Generative Models: Collection of generative models, e.g. GAN, VAE in Pytorch and Tensorflow [[Code]](https:\u002F\u002Fgithub.com\u002Fwiseodd\u002Fgenerative-models)\n- Reproduction of the GANs paper (MNIST) in 100 lines of PyTorch code  [[Blog]](https:\u002F\u002Fpapers-100-lines.medium.com\u002Fgenerative-adversarial-networks-in-100-lines-of-code-516f09d1790a) [[Code]](https:\u002F\u002Fgithub.com\u002FMaximeVandegar\u002FPapers-in-100-Lines-of-Code\u002Ftree\u002Fmain\u002FGenerative_Adversarial_Networks)\n- Reproduction of results from the paper *Conditional Generative Adversarial Nets* in 100 lines of PyTorch code  [[Code]](https:\u002F\u002Fgithub.com\u002FMaximeVandegar\u002FPapers-in-100-Lines-of-Code\u002Ftree\u002Fmain\u002FConditional_Generative_Adversarial_Nets)\n- Reproduction of results from the paper *Improved Techniques for Training GANs* in 100 lines of PyTorch code  [[Code]](https:\u002F\u002Fgithub.com\u002FMaximeVandegar\u002FPapers-in-100-Lines-of-Code\u002Ftree\u002Fmain\u002FImproved_Techniques_for_Training_GANs)\n- Reproduction of results from the *LSGAN* paper in 100 lines of PyTorch code  [[Code]](https:\u002F\u002Fgithub.com\u002FMaximeVandegar\u002FPapers-in-100-Lines-of-Code\u002Ftree\u002Fmain\u002FLeast_Squares_Generative_Adversarial_Networks)\n- Reproduction of results from the *WGAN* paper in 100 lines of PyTorch code  [[Code]](https:\u002F\u002Fgithub.com\u002FMaximeVandegar\u002FPapers-in-100-Lines-of-Code\u002Ftree\u002Fmain\u002FWasserstein_GAN)\n- Reproduction of results from the *pix2pix* paper in 100 lines of 
PyTorch code  [[Code]](https:\u002F\u002Fgithub.com\u002FMaximeVandegar\u002FPapers-in-100-Lines-of-Code\u002Ftree\u002Fmain\u002FImage_to_Image_Translation_with_Conditional_Adversarial_Nets)\n\n# Papers\n## Overview\n- Generative Adversarial Networks: An Overview [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1710.07035)\n\n## Theory & Machine Learning\n- A Classification-Based Perspective on GAN Distributions [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1711.00970)\n- A Connection between Generative Adversarial Networks, Inverse Reinforcement Learning, and Energy-Based Models [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1611.03852)\n- A General Retraining Framework for Scalable Adversarial Classification [[Paper]](https:\u002F\u002Fc4209155-a-62cb3a1a-s-sites.googlegroups.com\u002Fsite\u002Fnips2016adversarial\u002FWAT16_paper_2.pdf)\n- Activation Maximization Generative Adversarial Nets [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1703.02000)\n- AdaGAN: Boosting Generative Models [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1701.02386)\n- Adversarial Autoencoders [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1511.05644)\n- Adversarial Discriminative Domain Adaptation [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1702.05464)\n- Adversarial Generator-Encoder Networks [[arXiv]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1704.02304.pdf)\n- Adversarial Feature Learning [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1605.09782) [[Code]](https:\u002F\u002Fgithub.com\u002Fwiseodd\u002Fgenerative-models)\n- Adversarially Learned Inference [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1606.00704) [[Code]](https:\u002F\u002Fgithub.com\u002Fwiseodd\u002Fgenerative-models)\n- AE-GAN: adversarial eliminating with GAN [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1707.05474)\n- An Adversarial Regularisation for Semi-Supervised Training of Structured Output Neural Networks 
[[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1702.02382)\n- APE-GAN: Adversarial Perturbation Elimination with GAN [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1707.05474)\n- Associative Adversarial Networks [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1611.06953)\n- Autoencoding beyond pixels using a learned similarity metric [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1512.09300)\n- Bayesian Conditional Generative Adverserial Networks [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1706.05477)\n- Bayesian GAN [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1705.09558)\n- BEGAN: Boundary Equilibrium Generative Adversarial Networks [[Paper]](https:\u002F\u002Fc4209155-a-62cb3a1a-s-sites.googlegroups.com\u002Fsite\u002Fnips2016adversarial\u002FWAT16_paper_4.pdf) [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1703.10717) [[Code]](https:\u002F\u002Fgithub.com\u002Fwiseodd\u002Fgenerative-models)\n- Binary Generative Adversarial Networks for Image Retrieval [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1708.04150)\n- Boundary-Seeking Generative Adversarial Networks [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1702.08431) [[Code]](https:\u002F\u002Fgithub.com\u002Fwiseodd\u002Fgenerative-models)\n- CausalGAN: Learning Causal Implicit Generative Models with Adversarial Training [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1709.02023)\n- Class-Splitting Generative Adversarial Networks [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1709.07359)\n- Comparison of Maximum Likelihood and GAN-based training of Real NVPs [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1705.05263)\n- Conditional CycleGAN for Attribute Guided Face Image Generation [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1705.09966)\n- Conditional Generative Adversarial Nets [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1411.1784) [[Code]](https:\u002F\u002Fgithub.com\u002Fwiseodd\u002Fgenerative-models)\n- Connecting Generative 
Adversarial Networks and Actor-Critic Methods [[Paper]](https:\u002F\u002Fc4209155-a-62cb3a1a-s-sites.googlegroups.com\u002Fsite\u002Fnips2016adversarial\u002FWAT16_paper_1.pdf)\n- Continual Learning in Generative Adversarial Nets [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1705.08395)\n- C-RNN-GAN: Continuous recurrent neural networks with adversarial training [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1611.09904)\n- CM-GANs: Cross-modal Generative Adversarial Networks for Common Representation Learning [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1710.05106)\n- Cooperative Training of Descriptor and Generator Networks [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1609.09408)\n- Coupled Generative Adversarial Networks [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1606.07536) [[Code]](https:\u002F\u002Fgithub.com\u002Fwiseodd\u002Fgenerative-models)\n- Dualing GANs [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1706.06216)\n- Deep and Hierarchical Implicit Models [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1702.08896)\n- Energy-based Generative Adversarial Network [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1609.03126) [[Code]](https:\u002F\u002Fgithub.com\u002Fwiseodd\u002Fgenerative-models)\n- Enhancing GANs with MMD Neural Architecture Search, PMish Activation Function, and Adaptive Rank Decomposition [[Paper]](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F10732016) [[Code]](https:\u002F\u002Fgithub.com\u002FPrasannaPulakurthi\u002FMMD-PMish-NAS-GAN) [[Website]](https:\u002F\u002Fprasannapulakurthi.github.io\u002FMMD-PMish-NAS-GAN\u002F) [[YouTube]](https:\u002F\u002Fyoutu.be\u002FyejnLOO2VaI) [[Demo]](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fprasannareddyp\u002FMMD-PMish-NAS-GAN)\n- Enhancing GAN Performance Through Neural Architecture Search and Tensor Decomposition [[Paper]](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F10446488) 
[[PDF]](https:\u002F\u002Fprasannapulakurthi.github.io\u002Fpapers\u002FPDFs\u002F2024_ICASSP_GANs-Tensor-Decomposition.pdf) [[Code]](https:\u002F\u002Fgithub.com\u002FPrasannaPulakurthi\u002FMMD-AdversarialNAS-GAN)\n- Explaining and Harnessing Adversarial Examples [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1412.6572)\n- Flow-GAN: Bridging implicit and prescribed learning in generative models [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1705.08868)\n- f-GAN: Training Generative Neural Samplers using Variational Divergence Minimization [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1606.00709) [[Code]](https:\u002F\u002Fgithub.com\u002Fwiseodd\u002Fgenerative-models)\n- Gang of GANs: Generative Adversarial Networks with Maximum Margin Ranking [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1704.04865)\n- Generalization and Equilibrium in Generative Adversarial Nets (GANs) [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1703.00573)\n- Generating images with recurrent adversarial networks [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1602.05110)\n- Generative Adversarial Networks [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1406.2661) [[Code]](https:\u002F\u002Fgithub.com\u002Fgoodfeli\u002Fadversarial) [[Code]](https:\u002F\u002Fgithub.com\u002Fwiseodd\u002Fgenerative-models)\n- Generative Adversarial Networks as Variational Training of Energy Based Models [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1611.01799)\n- Generative Adversarial Networks with Inverse Transformation Unit [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1709.09354)\n- Generative Adversarial Parallelization [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1612.04021) [[Code]](https:\u002F\u002Fgithub.com\u002Fwiseodd\u002Fgenerative-models)\n- Generative Adversarial Residual Pairwise Networks for One Shot Learning [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1703.08033)\n- Generative Adversarial Structured Networks 
[[Paper]](https:\u002F\u002Fc4209155-a-62cb3a1a-s-sites.googlegroups.com\u002Fsite\u002Fnips2016adversarial\u002FWAT16_paper_14.pdf)\n- Generative Cooperative Net for Image Generation and Data Augmentation [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1705.02887)\n- Generative Moment Matching Networks [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1502.02761) [[Code]](https:\u002F\u002Fgithub.com\u002Fyujiali\u002Fgmmn)\n- Generative Semantic Manipulation with Contrasting GAN [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1708.00315)\n- Geometric GAN [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1705.02894)\n- Good Semi-supervised Learning that Requires a Bad GAN [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1705.09783)\n- Gradient descent GAN optimization is locally stable [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1706.04156)\n- How to Train Your DRAGAN [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1705.07215)\n- Image Quality Assessment Techniques Show Improved Training and Evaluation of Autoencoder Generative Adversarial Networks [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1708.02237)\n- Improved Semi-supervised Learning with GANs using Manifold Invariances [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1705.08850)\n- Improved Techniques for Training GANs [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1606.03498) [[Code]](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fimproved-gan)\n- Improved Training of Wasserstein GANs [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1704.00028) [[Code]](https:\u002F\u002Fgithub.com\u002Fwiseodd\u002Fgenerative-models)\n- InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1606.03657) [[Code]](https:\u002F\u002Fgithub.com\u002Fwiseodd\u002Fgenerative-models)\n- Inverting The Generator Of A Generative Adversarial Network 
[[Paper]](https:\u002F\u002Fc4209155-a-62cb3a1a-s-sites.googlegroups.com\u002Fsite\u002Fnips2016adversarial\u002FWAT16_paper_9.pdf)\n- It Takes (Only) Two: Adversarial Generator-Encoder Networks [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1704.02304)\n- KGAN: How to Break The Minimax Game in GAN [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1711.01744)\n- Learning in Implicit Generative Models [[Paper]](https:\u002F\u002Fc4209155-a-62cb3a1a-s-sites.googlegroups.com\u002Fsite\u002Fnips2016adversarial\u002FWAT16_paper_10.pdf)\n- Learning Loss for Knowledge Distillation with Conditional Adversarial Networks [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1709.00513)\n- Learning to Discover Cross-Domain Relations with Generative Adversarial Networks [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1703.05192) [[Code]](https:\u002F\u002Fgithub.com\u002Fwiseodd\u002Fgenerative-models)\n- Learning Texture Manifolds with the Periodic Spatial GAN [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1705.06566)\n- Least Squares Generative Adversarial Networks [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1611.04076) [[Code]](https:\u002F\u002Fgithub.com\u002Fwiseodd\u002Fgenerative-models)\n- Linking Generative Adversarial Learning and Binary Classification [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1709.01509)\n- Loss-Sensitive Generative Adversarial Networks on Lipschitz Densities [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1701.06264)\n- LR-GAN: Layered Recursive Generative Adversarial Networks for Image Generation [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1703.01560)\n- MAGAN: Margin Adaptation for Generative Adversarial Networks [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1704.03817) [[Code]](https:\u002F\u002Fgithub.com\u002Fwiseodd\u002Fgenerative-models)\n- Maximum-Likelihood Augmented Discrete Generative Adversarial Networks [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1702.07983)\n- McGan: Mean and 
Covariance Feature Matching GAN [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1702.08398)\n- Message Passing Multi-Agent GANs [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1612.01294)\n- MMD GAN: Towards Deeper Understanding of Moment Matching Network [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1705.08584)\n- Mode Regularized Generative Adversarial Networks [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1612.02136) [[Code]](https:\u002F\u002Fgithub.com\u002Fwiseodd\u002Fgenerative-models)\n- Multi-Agent Diverse Generative Adversarial Networks [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1704.02906)\n- Multi-Generator Gernerative Adversarial Nets [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1708.02556)\n- Objective-Reinforced Generative Adversarial Networks (ORGAN) for Sequence Generation Models [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1705.10843)\n- On Convergence and Stability of GANs [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1705.07215)\n- On the effect of Batch Normalization and Weight Normalization in Generative Adversarial Networks [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1704.03971)\n- On the Quantitative Analysis of Decoder-Based Generative Models [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1611.04273)\n- Optimizing the Latent Space of Generative Networks [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1707.05776)\n- Parametrizing filters of a CNN with a GAN [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1710.11386)\n- PixelGAN Autoencoders [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1706.00531)\n- Progressive Growing of GANs for Improved Quality, Stability, and Variation [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1710.10196) [[Code]](https:\u002F\u002Fgithub.com\u002Ftkarras\u002Fprogressive_growing_of_gans)\n- SegAN: Adversarial Network with Multi-scale L1 Loss for Medical Image Segmentation 
[[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1706.01805)\n- SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1609.05473)\n- Simple Black-Box Adversarial Perturbations for Deep Networks [[Paper]](https:\u002F\u002Fc4209155-a-62cb3a1a-s-sites.googlegroups.com\u002Fsite\u002Fnips2016adversarial\u002FWAT16_paper_11.pdf)\n- Softmax GAN [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1704.06191)\n- Stabilizing Training of Generative Adversarial Networks through Regularization [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1705.09367)\n- Stacked Generative Adversarial Networks [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1612.04357)\n- Statistics of Deep Generated Images [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1708.02688)\n- Structured Generative Adversarial Networks [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1711.00889)\n- Tensorizing Generative Adversarial Nets [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1710.10772)\n- The Cramer Distance as a Solution to Biased Wasserstein Gradients [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1705.10743)\n- Towards Understanding Adversarial Learning for Joint Distribution Matching [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1709.01215)\n- Training generative neural networks via Maximum Mean Discrepancy optimization [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1505.03906)\n- Triple Generative Adversarial Nets [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1703.02291)\n- Unrolled Generative Adversarial Networks [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1611.02163)\n- Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1511.06434) [[Code]](https:\u002F\u002Fgithub.com\u002FNewmu\u002Fdcgan_code) 
[[Code]](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fexamples\u002Ftree\u002Fmaster\u002Fdcgan) [[Code]](https:\u002F\u002Fgithub.com\u002Fcarpedm20\u002FDCGAN-tensorflow) [[Code]](https:\u002F\u002Fgithub.com\u002Fsoumith\u002Fdcgan.torch) [[Code]](https:\u002F\u002Fgithub.com\u002Fjacobgil\u002Fkeras-dcgan)\n- Wasserstein GAN [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1701.07875) [[Code]](https:\u002F\u002Fgithub.com\u002Fmartinarjovsky\u002FWassersteinGAN) [[Code]](https:\u002F\u002Fgithub.com\u002Fwiseodd\u002Fgenerative-models)\n\n## Applied Vision\n- 3D Object Reconstruction from a Single Depth View with Adversarial Learning [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1708.07969)\n- 3D Shape Induction from 2D Views of Multiple Objects [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1612.05872)\n- A step towards procedural terrain generation with GANs [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1707.03383) [[Code]](https:\u002F\u002Fgithub.com\u002Fchristopher-beckham\u002Fgan-heightmaps)\n- Abnormal Event Detection in Videos using Generative Adversarial Nets [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1708.09644)\n- Adversarial Generation of Training Examples for Vehicle License Plate Recognition [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1707.03124)\n- Adversarial nets with perceptual losses for text-to-image synthesis [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1708.09321)\n- Adversarial Networks for Spatial Context-Aware Spectral Image Reconstruction from RGB [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1709.00265)\n- Adversarial Networks for the Detection of Aggressive Prostate Cancer [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1702.08014)\n- Adversarial PoseNet: A Structure-aware Convolutional Network for Human Pose Estimation [[arXiv]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1705.00389.pdf)\n- Adversarial Training For Sketch Retrieval 
[[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1607.02748)\n- Aesthetic-Driven Image Enhancement by Adversarial Learning [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1707.05251)\n- Age Progression \u002F Regression by Conditional Adversarial Autoencoder [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1702.08423)\n- AlignGAN: Learning to Align Cross-Domain Images with Conditional Generative Adversarial Networks [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1707.01400)\n- Amortised MAP Inference for Image Super-resolution [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1610.04490)\n- Analyzing Perception-Distortion Tradeoff using Enhanced Perceptual Super-resolution Network [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1811.00344) [[Code]](https:\u002F\u002Fgithub.com\u002Fsubeeshvasu\u002F2018_subeesh_epsr_eccvw)\n- A Novel Approach to Artistic Textual Visualization via GAN [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1710.10553)\n- Anti-Makeup: Learning A Bi-Level Adversarial Network for Makeup-Invariant Face Verification [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1709.03654)\n- Arbitrary Facial Attribute Editing: Only Change What You Want [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1711.10678) [[Code]](https:\u002F\u002Fgithub.com\u002FLynnHo\u002FAttGAN-Tensorflow)\n- ARIGAN: Synthetic Arabidopsis Plants using Generative Adversarial Network [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1709.00938)\n- ArtGAN: Artwork Synthesis with Conditional Categorial GANs [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1702.03410)\n- Artificial Generation of Big Data for Improving Image Classification: A Generative Adversarial Network Approach on SAR Data [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1711.02010)\n- Auto-Encoder Guided GAN for Chinese Calligraphy Synthesis [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1706.08789)\n- Auto-painter: Cartoon Image Generation from Sketch by Using 
Conditional Generative Adversarial Networks [[arXiv]](https://arxiv.org/abs/1705.01908)
- Automatic Liver Segmentation Using an Adversarial Image-to-Image Network [[arXiv]](https://arxiv.org/abs/1707.08037)
- Beyond Face Rotation: Global and Local Perception GAN for Photorealistic and Identity Preserving Frontal View Synthesis [[arXiv]](https://arxiv.org/abs/1704.04086)
- CAN: Creative Adversarial Networks Generating “Art” by Learning About Styles and Deviating from Style Norms [[arXiv]](https://arxiv.org/abs/1706.07068)
- CompoNet: Learning to Generate the Unseen by Part Synthesis and Composition [[arXiv]](https://arxiv.org/abs/1811.07441) [[Code]](https://github.com/nschor/CompoNet)
- Compressed Sensing MRI Reconstruction with Cyclic Loss in Generative Adversarial Networks [[arXiv]](https://arxiv.org/abs/1709.00753)
- Conditional Adversarial Network for Semantic Segmentation of Brain Tumor [[arXiv]](https://arxiv.org/abs/1708.05227)
- Conditional generative adversarial nets for convolutional face generation [[Paper]](http://www.foldl.me/uploads/2015/conditional-gans-face-generation/paper.pdf)
- Conditional Image Synthesis with Auxiliary Classifier GANs [[Paper]](https://c4209155-a-62cb3a1a-s-sites.googlegroups.com/site/nips2016adversarial/WAT16_paper_7.pdf) [[arXiv]](https://arxiv.org/abs/1610.09585) [[Code]](https://github.com/wiseodd/generative-models)
- Contextual RNN-GANs for Abstract Reasoning Diagram Generation [[arXiv]](https://arxiv.org/abs/1609.09444)
- Controllable Generative Adversarial Network [[arXiv]](https://arxiv.org/abs/1708.00598)
- Creatism: A deep-learning photographer capable of creating professional work 
[[arXiv]](https://arxiv.org/abs/1707.03491)
- Crossing Nets: Combining GANs and VAEs with a Shared Latent Space for Hand Pose Estimation [[arXiv]](https://arxiv.org/abs/1702.03431)
- CVAE-GAN: Fine-Grained Image Generation through Asymmetric Training [[arXiv]](https://arxiv.org/abs/1703.10155)
- Data Augmentation in Classification using GAN [[arXiv]](https://arxiv.org/abs/1711.00648)
- Deep Generative Adversarial Compression Artifact Removal [[arXiv]](https://arxiv.org/abs/1704.02518)
- Deep Generative Adversarial Networks for Compressed Sensing (GANCS) Automates MRI [[arXiv]](https://arxiv.org/abs/1706.00051)
- Deep Generative Adversarial Neural Networks for Realistic Prostate Lesion MRI Synthesis [[arXiv]](https://arxiv.org/abs/1708.00129)
- Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks [[arXiv]](https://arxiv.org/abs/1506.05751) [[Code]](https://github.com/facebook/eyescream) [[Blog]](http://soumith.ch/eyescream/)
- Deep multi-scale video prediction beyond mean square error [[arXiv]](https://arxiv.org/abs/1511.05440) [[Code]](https://github.com/dyelax/Adversarial_Video_Generation)
- Deep Unsupervised Representation Learning for Remote Sensing Images [[arXiv]](https://arxiv.org/abs/1612.08879)
- DeLiGAN: Generative Adversarial Networks for Diverse and Limited Data [[arXiv]](https://arxiv.org/abs/1706.02071)
- Depth Structure Preserving Scene Image Generation [[arXiv]](https://arxiv.org/abs/1706.00212)
- DualGAN: Unsupervised Dual Learning for Image-to-Image Translation [[arXiv]](https://arxiv.org/abs/1704.02510) [[Code]](https://github.com/wiseodd/generative-models)
- Dual Motion GAN 
for Future-Flow Embedded Video Prediction [[arXiv]](https://arxiv.org/abs/1708.00284)
- Efficient Super Resolution For Large-Scale Images Using Attentional GAN [[arXiv]](https://arxiv.org/abs/1812.04821) [[Thesis]](https://digitalcommons.wpi.edu/etd-theses/1256/) [[Thesis]](https://www.wpi.edu/news/announcements/data-science-ms-thesis-presentation-xiaozhou-zou)
- ExprGAN: Facial Expression Editing with Controllable Expression Intensity [[arXiv]](https://arxiv.org/abs/1709.03842)
- Face Aging With Conditional Generative Adversarial Networks [[arXiv]](https://arxiv.org/abs/1702.01983)
- Face Transfer with Generative Adversarial Network [[arXiv]](https://arxiv.org/abs/1710.06090)
- Filmy Cloud Removal on Satellite Imagery with Multispectral Conditional Generative Adversarial Nets [[arXiv]](https://arxiv.org/abs/1710.04835)
- Freehand Ultrasound Image Simulation with Spatially-Conditioned Generative Adversarial Networks [[arXiv]](https://arxiv.org/abs/1707.05392)
- From source to target and back: symmetric bi-directional adaptive GAN [[arXiv]](https://arxiv.org/abs/1705.08824)
- Full Resolution Image Compression with Recurrent Neural Networks [[arXiv]](https://arxiv.org/abs/1608.05148)
- GANs for Biological Image Synthesis [[arXiv]](https://arxiv.org/abs/1708.04692)
- GeneGAN: Learning Object Transfiguration and Attribute Subspace from Unpaired Data [[arXiv]](https://arxiv.org/abs/1705.04932) [[Code]](https://github.com/Prinsphield/GeneGAN)
- Generate Identity-Preserving Faces by Generative Adversarial Networks [[arXiv]](https://arxiv.org/abs/1706.03227)
- Generate To Adapt: Aligning Domains using Generative Adversarial Networks 
[[arXiv]](https://arxiv.org/abs/1704.01705)
- Generative Adversarial Graph Convolutional Networks for Human Action Synthesis [[arXiv]](https://arxiv.org/abs/2110.11191) [[Code]](https://github.com/DegardinBruno/Kinetic-GAN)
- Generative Adversarial Models for People Attribute Recognition in Surveillance [[arXiv]](https://arxiv.org/abs/1707.02240)
- Generative Adversarial Network based on Resnet for Conditional Image Restoration [[arXiv]](https://arxiv.org/abs/1707.04881)
- Generative Adversarial Network-based Synthesis of Visible Faces from Polarimetric Thermal Faces [[arXiv]](https://arxiv.org/abs/1708.02681)
- Generative Adversarial Networks for Multimodal Representation Learning in Video Hyperlinking [[arXiv]](https://arxiv.org/abs/1705.05103)
- Generative Adversarial Text to Image Synthesis [[arXiv]](https://arxiv.org/abs/1605.05396) [[Code]](https://github.com/paarthneekhara/text-to-image)
- Generative Visual Manipulation on the Natural Image Manifold [[Project]](http://www.eecs.berkeley.edu/~junyanz/projects/gvm/) [[Youtube]](https://youtu.be/9c4z6YsBGQ0) [[Paper]](https://arxiv.org/abs/1609.03552) [[Code]](https://github.com/junyanz/iGAN)
- Global-to-Local Generative Model for 3D Shapes [[Project]](http://vcc.szu.edu.cn/research/2018/G2L) [[Code]](https://github.com/Hao-HUST/G2LGAN)
- GP-GAN: Gender Preserving GAN for Synthesizing Faces from Landmarks [[arXiv]](https://arxiv.org/abs/1710.00962)
- GP-GAN: Towards Realistic High-Resolution Image Blending [[arXiv]](https://arxiv.org/abs/1703.07195)
- Guiding InfoGAN with Semi-Supervision [[arXiv]](https://arxiv.org/abs/1707.04487)
- How 
to Fool Radiologists with Generative Adversarial Networks? A Visual Turing Test for Lung Cancer Diagnosis [[arXiv]](https://arxiv.org/abs/1710.09762)
- Hierarchical Detail Enhancing Mesh-Based Shape Generation with 3D Generative Adversarial Network [[arXiv]](https://arxiv.org/abs/1709.07581)
- High-Quality Face Image SR Using Conditional Generative Adversarial Networks [[arXiv]](https://arxiv.org/abs/1707.00737)
- High-Quality Facial Photo-Sketch Synthesis Using Multi-Adversarial Networks [[arXiv]](https://arxiv.org/abs/1710.10182)
- Image De-raining Using a Conditional Generative Adversarial Network [[arXiv]](https://arxiv.org/abs/1701.05957)
- Image Generation and Editing with Variational Info Generative Adversarial Networks [[arXiv]](https://arxiv.org/abs/1701.04568)
- Image-to-Image Translation with Conditional Adversarial Networks [[arXiv]](https://arxiv.org/abs/1611.07004) [[Code]](https://github.com/phillipi/pix2pix)
- Improved Adversarial Systems for 3D Object Generation and Reconstruction [[arXiv]](https://arxiv.org/abs/1707.09557) [[Code]](https://github.com/EdwardSmith1884/3D-IWGAN)
- Improving Heterogeneous Face Recognition with Conditional Adversarial Networks [[arXiv]](https://arxiv.org/abs/1709.02848)
- Improving image generative models with human interactions [[arXiv]](https://arxiv.org/abs/1709.10459)
- Imitating Driver Behavior with Generative Adversarial Networks [[arXiv]](https://arxiv.org/abs/1701.06699)
- Interactive 3D Modeling with a Generative Adversarial Network [[arXiv]](https://arxiv.org/abs/1706.05170)
- Intraoperative Organ Motion Models with an Ensemble of Conditional Generative Adversarial Networks 
[[arXiv]](https://arxiv.org/abs/1709.02255)
- Invertible Conditional GANs for image editing [[arXiv]](https://arxiv.org/abs/1611.06355) [[Paper]](https://c4209155-a-62cb3a1a-s-sites.googlegroups.com/site/nips2016adversarial/WAT16_paper_8.pdf)
- Joint Discriminative and Generative Learning for Person Re-identification [[Project]](http://zdzheng.xyz/DG-Net/) [[Paper]](https://arxiv.org/abs/1904.07223) [[YouTube]](https://www.youtube.com/watch?v=ubCrEAIpQs4) [[Bilibili]](https://www.bilibili.com/video/av51439240) [[Poster]](http://zdzheng.xyz/images/DGNet_poster.pdf) [[Code]](https://github.com/NVlabs/DG-Net)
- Label Denoising Adversarial Network (LDAN) for Inverse Lighting of Face Images [[arXiv]](https://arxiv.org/abs/1709.01993)
- Learning a Driving Simulator [[arXiv]](https://arxiv.org/abs/1608.01230)
- Learning a Generative Adversarial Network for High Resolution Artwork Synthesis [[arXiv]](https://arxiv.org/abs/1708.09533)
- Learning a Probabilistic Latent Space of Object Shapes via 3D Generative-Adversarial Modeling [[arXiv]](https://arxiv.org/abs/1610.07584)
- Learning from Simulated and Unsupervised Images through Adversarial Training [[arXiv]](https://arxiv.org/abs/1612.07828)
- Learning to Discover Cross-Domain Relations with Generative Adversarial Networks [[arXiv]](https://arxiv.org/abs/1703.05192)
- Learning to Generate Chairs with Generative Adversarial Nets [[arXiv]](https://arxiv.org/abs/1705.10413)
- Learning to Generate Time-Lapse Videos Using Multi-Stage Dynamic Generative Adversarial Networks [[arXiv]](https://arxiv.org/abs/1709.07592)
- Low Dose CT Image Denoising Using a Generative Adversarial Network 
with Wasserstein Distance and Perceptual Loss [[arXiv]](https://arxiv.org/abs/1708.00961)
- MARTA GANs: Unsupervised Representation Learning for Remote Sensing Image Classification [[arXiv]](https://arxiv.org/abs/1612.08879)
- Megapixel Size Image Creation using Generative Adversarial Networks [[arXiv]](https://arxiv.org/abs/1706.00082)
- Microscopy Cell Segmentation via Adversarial Neural Networks [[arXiv]](https://arxiv.org/abs/1709.05860)
- MoCoGAN: Decomposing Motion and Content for Video Generation [[arXiv]](https://arxiv.org/abs/1707.04993)
- Multi-view Generative Adversarial Networks [[Paper]](https://c4209155-a-62cb3a1a-s-sites.googlegroups.com/site/nips2016adversarial/WAT16_paper_13.pdf)
- Neural Photo Editing with Introspective Adversarial Networks [[Paper]](https://c4209155-a-62cb3a1a-s-sites.googlegroups.com/site/nips2016adversarial/WAT16_paper_15.pdf) [[arXiv]](https://arxiv.org/abs/1609.07093)
- Neural Stain-Style Transfer Learning using GAN for Histopathological Images [[arXiv]](https://arxiv.org/abs/1710.08543)
- Outline Colorization through Tandem Adversarial Networks [[arXiv]](https://arxiv.org/abs/1704.08834)
- Perceptual Adversarial Networks for Image-to-Image Transformation [[arXiv]](https://arxiv.org/abs/1706.09138)
- Perceptual Generative Adversarial Networks for Small Object Detection [[arXiv]](https://arxiv.org/abs/1706.05274)
- Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network [[arXiv]](https://arxiv.org/abs/1609.04802)
- Pose Guided Person Image Generation [[arXiv]](https://arxiv.org/abs/1705.09368)
- Precomputed Real-Time Texture Synthesis with Markovian Generative Adversarial Networks 
[[arXiv]](https://arxiv.org/abs/1604.04382)
- Recurrent Topic-Transition GAN for Visual Paragraph Generation [[arXiv]](https://arxiv.org/abs/1703.07022)
- RenderGAN: Generating Realistic Labeled Data [[arXiv]](https://arxiv.org/abs/1611.01331)
- Representation Learning and Adversarial Generation of 3D Point Clouds [[arXiv]](https://arxiv.org/abs/1707.02392)
- Retinal Vasculature Segmentation Using Local Saliency Maps and Generative Adversarial Networks For Image Super Resolution [[arXiv]](https://arxiv.org/abs/1710.04783)
- Retinal Vessel Segmentation in Fundoscopic Images with Generative Adversarial Networks [[arXiv]](https://arxiv.org/abs/1706.09318)
- SAD-GAN: Synthetic Autonomous Driving using Generative Adversarial Networks [[arXiv]](https://arxiv.org/abs/1611.08788)
- SalGAN: Visual Saliency Prediction with Generative Adversarial Networks [[arXiv]](https://arxiv.org/abs/1701.01081v2)
- SegAN: Adversarial Network with Multi-scale L1 Loss for Medical Image Segmentation [[arXiv]](https://arxiv.org/abs/1706.01805)
- SeGAN: Segmenting and Generating the Invisible [[arXiv]](https://arxiv.org/abs/1703.10239)
- Semantic Image Inpainting with Deep Generative Models [[arXiv]](https://arxiv.org/abs/1607.07539)
- EdgeConnect: Generative Image Inpainting with Adversarial Edge Learning [[arXiv]](https://arxiv.org/abs/1901.00212) [[Code]](https://github.com/knazeri/edge-connect)
- Semantic Image Synthesis via Adversarial Learning [[arXiv]](https://arxiv.org/abs/1707.06873)
- Semantic Segmentation using Adversarial Networks [[arXiv]](https://arxiv.org/abs/1611.08408)
- Semantically Decomposing the Latent Spaces of Generative Adversarial Networks 
[[arXiv]](https://arxiv.org/abs/1705.07904)
- Semi-Latent GAN: Learning to generate and modify facial images from attributes [[arXiv]](https://arxiv.org/abs/1704.02166)
- Semi-Supervised Learning with Context-Conditional Generative Adversarial Networks [[arXiv]](https://arxiv.org/abs/1611.06430)
- Sharpness-aware Low dose CT denoising using conditional generative adversarial network [[arXiv]](https://arxiv.org/abs/1708.06453)
- Simultaneously Color-Depth Super-Resolution with Conditional Generative Adversarial Network [[arXiv]](https://arxiv.org/abs/1708.09105)
- SingleGAN: Image-to-Image Translation by a Single-Generator Network using Multiple Generative Adversarial Learning [[arXiv]](https://arxiv.org/abs/1810.04991) [[Code]](https://github.com/Xiaoming-Yu/SingleGAN)
- Socially-compliant Navigation through Raw Depth Inputs with Generative Adversarial Imitation Learning [[arXiv]](https://arxiv.org/abs/1710.02543)
- StackGAN: Text to Photo-realistic Image Synthesis with Stacked Generative Adversarial Networks [[arXiv]](https://arxiv.org/abs/1612.03242)
- StackGAN++: Realistic Image Synthesis with Stacked Generative Adversarial Networks [[arXiv]](https://arxiv.org/abs/1710.10916)
- Style Transfer for Sketches with Enhanced Residual U-net and Auxiliary Classifier GAN [[arXiv]](https://arxiv.org/abs/1706.03319)
- Supervised Adversarial Networks for Image Saliency Detection [[arXiv]](https://arxiv.org/abs/1704.07242)
- Synthesis of Positron Emission Tomography (PET) Images via Multi-channel Generative Adversarial Networks (GANs) [[arXiv]](https://arxiv.org/abs/1707.09747)
- Synthesizing Filamentary Structured Images with GANs [[arXiv]](https://arxiv.org/abs/1706.02185)
- Synthetic 
Iris Presentation Attack using iDCGAN [[arXiv]](https://arxiv.org/abs/1710.10565)
- Synthetic Medical Images from Dual Generative Adversarial Networks [[arXiv]](https://arxiv.org/abs/1709.01872)
- TAC-GAN - Text Conditioned Auxiliary Classifier Generative Adversarial Network [[arXiv]](https://arxiv.org/abs/1703.06412)
- Temporal Generative Adversarial Nets with Singular Value Clipping [[arXiv]](https://arxiv.org/abs/1611.06624)
- TextureGAN: Controlling Deep Image Synthesis with Texture Patches [[arXiv]](https://arxiv.org/abs/1706.02823)
- Texture Synthesis with Spatial Generative Adversarial Networks [[arXiv]](https://arxiv.org/abs/1611.08207v3) [[Code]](https://github.com/ubergmann/spatial_gan)
- Text-Adaptive Generative Adversarial Networks: Manipulating Images with Natural Language [[arXiv]](https://arxiv.org/abs/1810.11919) [[Code]](https://github.com/woozzu/tagan)
- The Conditional Analogy GAN: Swapping Fashion Articles on People Images [[arXiv]](https://arxiv.org/abs/1709.04695)
- TopoAL: An Adversarial Learning Approach for Topology-Aware Road Segmentation [[Paper]](https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123720222.pdf)
- TopoGAN: A Topology-Aware Generative Adversarial Network [[Paper]](https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123480120.pdf)
- Towards Adversarial Retinal Image Synthesis [[arXiv]](https://arxiv.org/abs/1701.08974) [[Code]](https://github.com/costapt/vess2ret) [[Demo]](http://vess2ret.inesctec.pt/retina)
- Towards Diverse and Natural Image Descriptions via a Conditional GAN [[arXiv]](https://arxiv.org/abs/1703.06029)
- Towards the Automatic Anime Characters 
Creation with Generative Adversarial Networks [[arXiv]](https://arxiv.org/abs/1708.05509)
- UGAN: Enhancing Underwater Imagery using Generative Adversarial Networks [[arXiv]](https://arxiv.org/abs/1801.04011)
- Unlabeled Samples Generated by GAN Improve the Person Re-identification Baseline in vitro [[arXiv]](https://arxiv.org/abs/1701.07717) [[Code]](https://github.com/layumi/Person-reID_GAN)
- Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks [[arXiv]](https://arxiv.org/abs/1703.10593)
- Unsupervised and Semi-supervised Learning with Categorical Generative Adversarial Networks [[arXiv]](https://arxiv.org/abs/1511.06390)
- Unsupervised Anomaly Detection with Generative Adversarial Networks to Guide Marker Discovery [[arXiv]](https://arxiv.org/abs/1703.05921)
- Unsupervised Cross-Domain Image Generation [[arXiv]](https://arxiv.org/abs/1611.02200)
- Unsupervised Diverse Colorization via Generative Adversarial Networks [[arXiv]](https://arxiv.org/abs/1702.06674)
- Unsupervised Pixel-Level Domain Adaptation with Generative Adversarial Networks [[arXiv]](https://arxiv.org/abs/1612.05424)
- Unsupervised Visual Attribute Transfer with Reconfigurable Generative Adversarial Networks [[arXiv]](https://arxiv.org/abs/1707.09798)
- VIGAN: Missing View Imputation with Generative Adversarial Networks [[arXiv]](https://arxiv.org/abs/1708.06724)
- WaterGAN: Unsupervised Generative Network to Enable Real-time Color Correction of Monocular Underwater Images [[arXiv]](https://arxiv.org/abs/1702.07392)
- Weakly Supervised Generative Adversarial Networks for 3D Reconstruction [[arXiv]](https://arxiv.org/abs/1705.10904)
- TomoGAN: Low-Dose X-Ray Tomography with Generative Adversarial Networks [[Scholar]](https://scholar.google.ca/scholar?hl=en&as_sdt=0%2C5&q=TomoGAN%3A+Low-Dose+X-Ray+Tomography+with+Generative+Adversarial+Networks&btnG=) [[arXiv]](https://arxiv.org/abs/1902.07582)

## Applied Other
- Adversarial Generation of Natural Language [[arXiv]](https://arxiv.org/abs/1705.10929)
- Adversarial Ranking for Language Generation [[arXiv]](https://arxiv.org/abs/1705.11001)
- Adversarial Training Methods for Semi-Supervised Text Classification [[arXiv]](https://arxiv.org/abs/1605.07725) [[Paper]](https://c4209155-a-62cb3a1a-s-sites.googlegroups.com/site/nips2016adversarial/WAT16_paper_12.pdf)
- A Generative Model for Volume Rendering [[arXiv]](https://arxiv.org/abs/1710.09545)
- ChemGAN challenge for drug discovery: can AI reproduce natural chemical diversity? [[arXiv]](https://arxiv.org/abs/1708.08227)
- Generating Adversarial Malware Examples for Black-Box Attacks Based on GAN [[arXiv]](https://arxiv.org/abs/1702.05983)
- Generating Multi-label Discrete Electronic Health Records using Generative Adversarial Networks [[arXiv]](https://arxiv.org/abs/1703.06490)
- Language Generation with Recurrent Generative Adversarial Networks without Pre-training [[arXiv]](https://arxiv.org/abs/1706.01399)
- Learning to Protect Communications with Adversarial Neural Cryptography [[arXiv]](https://arxiv.org/abs/1610.06918) [[Blog]](https://blog.acolyer.org/2017/02/10/learning-to-protect-communications-with-adversarial-neural-cryptography/)
- Long Text Generation via Adversarial Training with Leaked Information [[arXiv]](https://arxiv.org/abs/1709.08624)
- MidiNet: A Convolutional Generative Adversarial Network for Symbolic-domain Music 
Generation using 1D and 2D Conditions [[arXiv]](https://arxiv.org/abs/1703.10847)
- MuseGAN: Symbolic-domain Music Generation and Accompaniment with Multi-track Sequential Generative Adversarial Networks [[arXiv]](https://arxiv.org/abs/1709.06298)
- Reconstruction of three-dimensional porous media using generative adversarial neural networks [[arXiv]](https://arxiv.org/abs/1704.03225) [[Code]](https://github.com/LukasMosser/PorousMediaGan)
- SEGAN: Speech Enhancement Generative Adversarial Network [[arXiv]](https://arxiv.org/abs/1703.09452)
- Semi-supervised Learning of Compact Document Representations with Deep Networks [[Paper]](http://www.cs.nyu.edu/~ranzato/publications/ranzato-icml08.pdf)
- SSGAN: Secure Steganography Based on Generative Adversarial Networks [[arXiv]](https://arxiv.org/abs/1707.01613)
- Steganographic Generative Adversarial Networks [[arXiv]](https://arxiv.org/abs/1703.05502)
- Towards Grounding Conceptual Spaces in Neural Representations [[arXiv]](https://arxiv.org/abs/1706.04825)

## Humor
- Stopping GAN Violence: Generative Unadversarial Networks [[arXiv]](https://arxiv.org/abs/1703.02528)

# really-awesome-gan
A list of papers and other resources on Generative Adversarial (Neural) Networks.
This site is maintained by Holger Caesar.
For additions or corrections, please contact me at holger-at-it-caesar.com or visit [it-caesar.com](http://www.it-caesar.com). Also check out [really-awesome-semantic-segmentation](https://github.com/nightrome/really-awesome-semantic-segmentation) and our [COCO-Stuff dataset](https://github.com/nightrome/cocostuff).

**Note:** Despite the attention this site receives (around 3000 visitors per month), as of November 2017 I will no longer add new papers. GANs have grown from a niche topic into one of the mainstream research directions, so trying to list every GAN-related paper is neither feasible nor necessary. However, I welcome others to continue this effort and to use my list freely.

## Contents
- [Recommendations](#recommendations)
- [Workshops](#workshops)
- [Tutorials / Workshops / Blogs](#tutorials--workshops--blogs)
- [Videos](#videos)
- [Code](#code)
- [Papers](#papers)
  - [Overview](#overview)
  - 
[Theory & Machine Learning](#theory--machine-learning)
  - [Applied Vision](#applied-vision)
  - [Applied Other](#applied-other)
  - [Humor](#humor)

## Recommendations
<ul>
<li>Beyond Face Rotation: Global and Local Perception GAN for Photorealistic and Identity Preserving Frontal View Synthesis <a href="https://arxiv.org/abs/1704.04086">[arXiv]</a> 
<img src="http://it-caesar.com/github/beyond-face-rotation.png" alt="Beyond face rotation"></li>

<li>Pose Guided Person Image Generation <a href="https://arxiv.org/abs/1705.09368">[arXiv]</a> 
<img src="http://it-caesar.com/github/pose-guided-person.png" alt="Pose guided person"></li>

<li>Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks <a href="https://arxiv.org/abs/1703.10593">[arXiv]</a>  
<img src="http://it-caesar.com/github/cycle-gan.png" alt="Cycle GAN"></li>
</ul>

# Tutorials / Workshops / Blogs
- Columbia Advanced Machine Learning Seminar
  - New advances in GAN theory and practice [[Blog]](https://casmls.github.io/general/2017/04/13/gan.html)
  - Implicit generative models: what are you GAN-na do? [[Blog]](https://casmls.github.io/general/2017/05/24/ligm.html)
- How to Train a GAN? Tips and tricks to make GANs work [[Blog]](https://github.com/soumith/ganhacks)
- NIPS 2016 Tutorial: Generative Adversarial Networks [[arXiv]](https://arxiv.org/abs/1701.00160)
- NIPS 2016 Workshop on Adversarial Training [[Web]](https://sites.google.com/site/nips2016adversarial/) [[Blog]](http://www.inference.vc/my-summary-of-adversarial-training-nips-workshop/)
- On the intuition behind deep learning & GANs: towards a fundamental understanding [[Blog]](https://blog.waya.ai/introduction-to-gans-a-boxing-match-b-w-neural-nets-b4e5319cc935)
- OpenAI - Generative Models [[Blog]](https://openai.com/blog/generative-models/)
- SimGANs: a game changer in unsupervised learning, self-driving cars, and more [[Blog]](https://blog.waya.ai/simgans-applied-to-autonomous-driving-5a8c6676e36b)
- Deep Diving into GANs: From theory to production (EuroScipy 2018) 
[[GitHub]](https://github.com/zurutech/gans-from-theory-to-production) 

# Books
- GANs in Action: Deep learning with Generative Adversarial Networks [[Book]](https://www.manning.com/books/gans-in-action)

# Videos
- Generative Adversarial Networks by Ian Goodfellow [[Video]](https://channel9.msdn.com/Events/Neural-Information-Processing-Systems-Conference/Neural-Information-Processing-Systems-Conference-NIPS-2016/Generative-Adversarial-Networks)
- Tutorial on Generative Adversarial Networks by Mark Chang [[Video]](https://www.youtube.com/playlist?list=PLeeHDpwX2Kj5Ugx6c9EfDLDojuQxnmxmU)
- Deep Diving into GANs: From theory to production, by Michele De Simoni and Paolo Galeone (EuroSciPy 2018) [[Video]](https://www.youtube.com/watch?v=CePrdabdtxw) 

# Code
- Cleverhans: A library for benchmarking vulnerability to adversarial examples [[Code]](https://github.com/openai/cleverhans) [[Blog]](http://cleverhans.io/)
- Generative Adversarial Networks (GANs) in 50 lines of code (PyTorch) [[Blog]](https://medium.com/@devnag/generative-adversarial-networks-gans-in-50-lines-of-code-pytorch-e81b79659e3f) [[Code]](https://github.com/devnag/pytorch-generative-adversarial-networks)
- Generative Models: Collection of generative models, e.g. GAN and VAE, in PyTorch and TensorFlow [[Code]](https://github.com/wiseodd/generative-models)
- Reproduction of the GAN paper on the MNIST dataset in only 100 lines of PyTorch [[Blog]](https://papers-100-lines.medium.com/generative-adversarial-networks-in-100-lines-of-code-516f09d1790a) [[Code]](https://github.com/MaximeVandegar/Papers-in-100-Lines-of-Code/tree/main/Generative_Adversarial_Networks)
- Reproduction of the results of the Conditional Generative Adversarial Nets paper, also in 100 lines of PyTorch [[Code]](https://github.com/MaximeVandegar/Papers-in-100-Lines-of-Code/tree/main/Conditional_Generative_Adversarial_Nets)
- Reproduction of the results of the Improved Techniques for Training GANs paper in 100 lines of PyTorch [[Code]](https://github.com/MaximeVandegar/Papers-in-100-Lines-of-Code/tree/main/Improved_Techniques_for_Training_GANs)
- Reproduction of the results of the LSGAN paper in 100 lines of PyTorch 
[[Code]](https://github.com/MaximeVandegar/Papers-in-100-Lines-of-Code/tree/main/Least_Squares_Generative_Adversarial_Networks)
- Reproduction of the results of the WGAN paper in 100 lines of PyTorch [[Code]](https://github.com/MaximeVandegar/Papers-in-100-Lines-of-Code/tree/main/Wasserstein_GAN)
- Reproduction of the results of the pix2pix paper in 100 lines of PyTorch [[Code]](https://github.com/MaximeVandegar/Papers-in-100-Lines-of-Code/tree/main/Image_to_Image_Translation_with_Conditional_Adversarial_Nets)

# Papers
## Overview
- Generative Adversarial Networks: An Overview [[arXiv]](https://arxiv.org/abs/1710.07035)

## Theory & Machine Learning
- A Classification-Based Perspective on GAN Distributions [[arXiv]](https://arxiv.org/abs/1711.00970)
- A Connection between Generative Adversarial Networks, Inverse Reinforcement Learning, and Energy-Based Models [[arXiv]](https://arxiv.org/abs/1611.03852)
- A General Retraining Framework for Scalable Adversarial Classification [[Paper]](https://c4209155-a-62cb3a1a-s-sites.googlegroups.com/site/nips2016adversarial/WAT16_paper_2.pdf)
- Activation Maximization Generative Adversarial Nets [[arXiv]](https://arxiv.org/abs/1703.02000)
- AdaGAN: Boosting Generative Models [[arXiv]](https://arxiv.org/abs/1701.02386)
- Adversarial Autoencoders [[arXiv]](https://arxiv.org/abs/1511.05644)
- Adversarial Discriminative Domain Adaptation [[arXiv]](https://arxiv.org/abs/1702.05464)
- Adversarial Generator-Encoder Networks [[arXiv]](https://arxiv.org/pdf/1704.02304.pdf)
- Adversarial Feature Learning [[arXiv]](https://arxiv.org/abs/1605.09782) [[Code]](https://github.com/wiseodd/generative-models)
- Adversarially Learned Inference [[arXiv]](https://arxiv.org/abs/1606.00704) [[Code]](https://github.com/wiseodd/generative-models)
- AE-GAN: Adversarial Eliminating with GAN [[arXiv]](https://arxiv.org/abs/1707.05474)
- An Adversarial Regularisation for Semi-Supervised Training of Structured Output Neural Networks [[arXiv]](https://arxiv.org/abs/1702.02382)
- APE-GAN: Adversarial Perturbation Elimination with GAN [[arXiv]](https://arxiv.org/abs/1707.05474)
- Associative Adversarial Networks [[arXiv]](https://arxiv.org/abs/1611.06953)
- 
Autoencoding beyond pixels using a learned similarity metric [[arXiv]](https://arxiv.org/abs/1512.09300)
- Bayesian Conditional Generative Adversarial Networks [[arXiv]](https://arxiv.org/abs/1706.05477)
- Bayesian GAN [[arXiv]](https://arxiv.org/abs/1705.09558)
- BEGAN: Boundary Equilibrium Generative Adversarial Networks [[Paper]](https://c4209155-a-62cb3a1a-s-sites.googlegroups.com/site/nips2016adversarial/WAT16_paper_4.pdf) [[arXiv]](https://arxiv.org/abs/1703.10717) [[Code]](https://github.com/wiseodd/generative-models)
- Binary Generative Adversarial Networks for Image Retrieval [[arXiv]](https://arxiv.org/abs/1708.04150)
- Boundary-Seeking Generative Adversarial Networks [[arXiv]](https://arxiv.org/abs/1702.08431) [[Code]](https://github.com/wiseodd/generative-models)
- CausalGAN: Learning Causal Implicit Generative Models with Adversarial Training [[arXiv]](https://arxiv.org/abs/1709.02023)
- Class-Splitting Generative Adversarial Networks [[arXiv]](https://arxiv.org/abs/1709.07359)
- Comparison of Maximum Likelihood and GAN-based training of Real NVPs [[arXiv]](https://arxiv.org/abs/1705.05263)
- Conditional CycleGAN for Attribute Guided Face Image Generation [[arXiv]](https://arxiv.org/abs/1705.09966)
- Conditional Generative Adversarial Nets [[arXiv]](https://arxiv.org/abs/1411.1784) [[Code]](https://github.com/wiseodd/generative-models)
- Connecting Generative Adversarial Networks and Actor-Critic Methods [[Paper]](https://c4209155-a-62cb3a1a-s-sites.googlegroups.com/site/nips2016adversarial/WAT16_paper_1.pdf)
- Continual Learning in Generative Adversarial Nets [[arXiv]](https://arxiv.org/abs/1705.08395)
- C-RNN-GAN: Continuous recurrent neural networks with adversarial training [[arXiv]](https://arxiv.org/abs/1611.09904)
- CM-GANs: Cross-modal Generative Adversarial Networks for Common Representation Learning [[arXiv]](https://arxiv.org/abs/1710.05106)
- Cooperative Training of Descriptor and Generator Networks [[arXiv]](https://arxiv.org/abs/1609.09408)
- Coupled Generative Adversarial Networks [[arXiv]](https://arxiv.org/abs/1606.07536) [[Code]](https://github.com/wiseodd/generative-models)
- Dualing GANs [[arXiv]](https://arxiv.org/abs/1706.06216)
- 
深层与分层隐式模型 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1702.08896)\n- 基于能量的生成对抗网络 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1609.03126) [[Code]](https:\u002F\u002Fgithub.com\u002Fwiseodd\u002Fgenerative-models)\n- 利用MMD神经架构搜索、PMish激活函数和自适应秩分解增强GAN [[Paper]](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F10732016) [[Code]](https:\u002F\u002Fgithub.com\u002FPrasannaPulakurthi\u002FMMD-PMish-NAS-GAN) [[Website]](https:\u002F\u002Fprasannapulakurthi.github.io\u002FMMD-PMish-NAS-GAN\u002F) [[YouTube]](https:\u002F\u002Fyoutu.be\u002FyejnLOO2VaI) [[Demo]](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fprasannareddyp\u002FMMD-PMish-NAS-GAN)\n- 通过神经架构搜索和张量分解提升GAN性能 [[Paper]](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F10446488) [[PDF]](https:\u002F\u002Fprasannapulakurthi.github.io\u002Fpapers\u002FPDFs\u002F2024_ICASSP_GANs-Tensor-Decomposition.pdf) [[Code]](https:\u002F\u002Fgithub.com\u002FPrasannaPulakurthi\u002FMMD-AdversarialNAS-GAN)\n- 解释与利用对抗样本 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1412.6572)\n- Flow-GAN：在生成模型中弥合隐式与显式学习 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1705.08868)\n- f-GAN：使用变分散度最小化训练生成式神经采样器 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1606.00709) [[Code]](https:\u002F\u002Fgithub.com\u002Fwiseodd\u002Fgenerative-models)\n- GAN团伙：采用最大间隔排序的生成对抗网络 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1704.04865)\n- 生成对抗网络（GAN）中的泛化与均衡 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1703.00573)\n- 利用递归对抗网络生成图像 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1602.05110)\n- 生成对抗网络 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1406.2661) [[Code]](https:\u002F\u002Fgithub.com\u002Fgoodfeli\u002Fadversarial) [[Code]](https:\u002F\u002Fgithub.com\u002Fwiseodd\u002Fgenerative-models)\n- 生成对抗网络作为基于能量模型的变分训练 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1611.01799)\n- 具有逆变换单元的生成对抗网络 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1709.09354)\n- 生成对抗并行化 
[[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1612.04021) [[Code]](https:\u002F\u002Fgithub.com\u002Fwiseodd\u002Fgenerative-models)\n- 用于一次学习的生成对抗残差成对网络 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1703.08033)\n- 生成对抗结构化网络 [[Paper]](https:\u002F\u002Fc4209155-a-62cb3a1a-s-sites.googlegroups.com\u002Fsite\u002Fnips2016adversarial\u002FWAT16_paper_14.pdf)\n- 用于图像生成和数据增强的生成合作网络 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1705.02887)\n- 生成矩匹配网络 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1502.02761) [[Code]](https:\u002F\u002Fgithub.com\u002Fyujiali\u002Fgmmn)\n- 利用对比GAN进行生成语义操纵 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1708.00315)\n- 几何GAN [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1705.02894)\n- 优秀的半监督学习需要一个糟糕的GAN [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1705.09783)\n- 梯度下降优化的GAN在局部是稳定的 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1706.04156)\n- 如何训练你的DRAGAN [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1705.07215)\n- 图像质量评估技术表明自编码器生成对抗网络的训练和评估有所改善 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1708.02237)\n- 利用流形不变性，通过GAN改进半监督学习 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1705.08850)\n- 改进GAN训练的技术 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1606.03498) [[Code]](https:\u002F\u002Fgithub.com\u002Fopenai\u002Fimproved-gan)\n- 改进Wasserstein GAN的训练 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1704.00028) [[Code]](https:\u002F\u002Fgithub.com\u002Fwiseodd\u002Fgenerative-models)\n- InfoGAN：通过信息最大化生成对抗网络实现可解释的表征学习 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1606.03657) [[Code]](https:\u002F\u002Fgithub.com\u002Fwiseodd\u002Fgenerative-models)\n- 逆转生成对抗网络的生成器 [[Paper]](https:\u002F\u002Fc4209155-a-62cb3a1a-s-sites.googlegroups.com\u002Fsite\u002Fnips2016adversarial\u002FWAT16_paper_9.pdf)\n- 只需两人：对抗生成-编码网络 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1704.02304)\n- KGAN：如何破解GAN中的极小极大博弈 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1711.01744)\n- 
隐式生成模型中的学习 [[Paper]](https:\u002F\u002Fc4209155-a-62cb3a1a-s-sites.googlegroups.com\u002Fsite\u002Fnips2016adversarial\u002FWAT16_paper_10.pdf)\n- 用于条件对抗网络知识蒸馏的学习损失 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1709.00513)\n- 学习使用生成对抗网络发现跨领域关系 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1703.05192) [[Code]](https:\u002F\u002Fgithub.com\u002Fwiseodd\u002Fgenerative-models)\n- 利用周期性空间GAN学习纹理流形 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1705.06566)\n- 最小二乘生成对抗网络 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1611.04076) [[Code]](https:\u002F\u002Fgithub.com\u002Fwiseodd\u002Fgenerative-models)\n- 将生成对抗学习与二分类联系起来 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1709.01509)\n- 对Lipschitz密度敏感的损失生成对抗网络 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1701.06264)\n- LR-GAN：用于图像生成的分层递归生成对抗网络 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1703.01560)\n- MAGAN：面向生成对抗网络的边缘适应 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1704.03817) [[Code]](https:\u002F\u002Fgithub.com\u002Fwiseodd\u002Fgenerative-models)\n- 最大似然增强的离散生成对抗网络 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1702.07983)\n- McGan：均值和协方差特征匹配GAN [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1702.08398)\n- 消息传递多智能体GAN [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1612.01294)\n- MMD GAN：迈向对矩匹配网络更深入的理解 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1705.08584)\n- 模式正则化的生成对抗网络 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1612.02136) [[Code]](https:\u002F\u002Fgithub.com\u002Fwiseodd\u002Fgenerative-models)\n- 多智能体多样化生成对抗网络 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1704.02906)\n- 多生成器生成对抗网络 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1708.02556)\n- 目标强化生成对抗网络（ORGAN）用于序列生成模型 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1705.10843)\n- 关于GAN的收敛性和稳定性 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1705.07215)\n- 批量归一化和权重归一化在生成对抗网络中的影响 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1704.03971)\n- 关于基于解码器的生成模型的定量分析 
[[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1611.04273)\n- 优化生成网络的潜在空间 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1707.05776)\n- 用GAN参数化CNN的滤波器 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1710.11386)\n- PixelGAN自编码器 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1706.00531)\n- 逐步增长GAN以提高质量、稳定性和多样性 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1710.10196) [[Code]](https:\u002F\u002Fgithub.com\u002Ftkarras\u002Fprogressive_growing_of_gans)\n- SegAN：具有多尺度L1损失的医学图像分割对抗网络 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1706.01805)\n- SeqGAN：带有策略梯度的序列生成对抗网络 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1609.05473)\n- 针对深度网络的简单黑盒对抗扰动 [[Paper]](https:\u002F\u002Fc4209155-a-62cb3a1a-s-sites.googlegroups.com\u002Fsite\u002Fnips2016adversarial\u002FWAT16_paper_11.pdf)\n- Softmax GAN [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1704.06191)\n- 通过正则化稳定生成对抗网络的训练 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1705.09367)\n- 堆叠式生成对抗网络 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1612.04357)\n- 深度生成图像的统计特性 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1708.02688)\n- 结构化生成对抗网络 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1711.00889)\n- 生成对抗网络的张量化 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1710.10772)\n- Cramer距离作为解决Wasserstein梯度偏斜问题的方法 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1705.10743)\n- 朝着理解用于联合分布匹配的对抗学习方向前进 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1709.01215)\n- 通过最大均值差异优化训练生成式神经网络 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1505.03906)\n- 三重生成对抗网络 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1703.02291)\n- 展开式生成对抗网络 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1611.02163)\n- 无监督表示学习与深度卷积生成对抗网络 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1511.06434) [[Code]](https:\u002F\u002Fgithub.com\u002FNewmu\u002Fdcgan_code) [[Code]](https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fexamples\u002Ftree\u002Fmaster\u002Fdcgan) 
[[Code]](https:\u002F\u002Fgithub.com\u002Fcarpedm20\u002FDCGAN-tensorflow) [[Code]](https:\u002F\u002Fgithub.com\u002Fsoumith\u002Fdcgan.torch) [[Code]](https:\u002F\u002Fgithub.com\u002Fjacobgil\u002Fkeras-dcgan)\n- Wasserstein GAN [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1701.07875) [[Code]](https:\u002F\u002Fgithub.com\u002Fmartinarjovsky\u002FWassersteinGAN) [[Code]](https:\u002F\u002Fgithub.com\u002Fwiseodd\u002Fgenerative-models)\n\n## 应用视觉\n- 基于对抗学习的单深度视图3D物体重建 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1708.07969)\n- 从多物体2D视图中推断3D形状 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1612.05872)\n- 使用GAN迈向程序化地形生成 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1707.03383) [[代码]](https:\u002F\u002Fgithub.com\u002Fchristopher-beckham\u002Fgan-heightmaps)\n- 利用生成对抗网络进行视频异常事件检测 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1708.09644)\n- 针对车牌识别的对抗性训练样本生成 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1707.03124)\n- 结合感知损失的文本到图像合成对抗网络 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1708.09321)\n- 基于RGB的光谱图像空间上下文感知重建对抗网络 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1709.00265)\n- 用于侵袭性前列腺癌检测的对抗网络 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1702.08014)\n- 对抗PoseNet：一种结构感知的人体姿态估计卷积网络 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1705.00389.pdf)\n- 基于对抗训练的草图检索 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1607.02748)\n- 基于对抗学习的美学驱动图像增强 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1707.05251)\n- 基于条件对抗自编码器的年龄增长\u002F退化模拟 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1702.08423)\n- AlignGAN：利用条件生成对抗网络学习跨域图像对齐 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1707.01400)\n- 用于图像超分辨率的折衷MAP推理 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1610.04490)\n- 借助增强型感知超分辨率网络分析感知与失真之间的权衡 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1811.00344) [[代码]](https:\u002F\u002Fgithub.com\u002Fsubeeshvasu\u002F2018_subeesh_epsr_eccvw)\n- 基于GAN的艺术化文本可视化新方法 
[[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1710.10553)\n- 反化妆：学习一种双层对抗网络以实现不受化妆影响的人脸验证 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1709.03654)\n- 任意面部属性编辑：只更改你想要的部分 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1711.10678) [[代码]](https:\u002F\u002Fgithub.com\u002FLynnHo\u002FAttGAN-Tensorflow)\n- ARIGAN：使用生成对抗网络合成拟南芥植物 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1709.00938)\n- ArtGAN：基于条件分类GAN的艺术作品合成 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1702.03410)\n- 为提升图像分类而人工生成大数据：基于SAR数据的生成对抗网络方法 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1711.02010)\n- 自编码器引导的GAN用于中国书法合成 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1706.08789)\n- 自动画家：利用条件生成对抗网络从草图生成卡通图像 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1705.01908)\n- 基于对抗图像到图像网络的自动肝脏分割 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1707.08037)\n- 超越人脸旋转：全局与局部感知GAN，用于逼真且保持身份特征的正面视图合成 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1704.04086)\n- CAN：通过学习风格并偏离风格规范来生成“艺术”的创意对抗网络 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1706.07068)\n- CompoNet：通过部件合成与组合学习生成未见之物 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1811.07441) [[代码]](https:\u002F\u002Fgithub.com\u002Fnschor\u002FCompoNet)\n- 在生成对抗网络中使用循环损失进行压缩感知MRI重建 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1709.00753)\n- 用于脑肿瘤语义分割的条件对抗网络 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1708.05227)\n- 用于卷积人脸生成的条件生成对抗网络 [[论文]](http:\u002F\u002Fwww.foldl.me\u002Fuploads\u002F2015\u002Fconditional-gans-face-generation\u002Fpaper.pdf)\n- 带辅助分类器GAN的条件图像合成 [[论文]](https:\u002F\u002Fc4209155-a-62cb3a1a-s-sites.googlegroups.com\u002Fsite\u002Fnips2016adversarial\u002FWAT16_paper_7.pdf) [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1610.09585) [[代码]](https:\u002F\u002Fgithub.com\u002Fwiseodd\u002Fgenerative-models)\n- 用于抽象推理图生成的上下文RNN-GAN [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1609.09444)\n- 可控生成对抗网络 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1708.00598)\n- 
创意主义：一位能够创作专业作品的深度学习摄影师 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1707.03491)\n- 网络交叉：将GAN和VAE结合，共享潜在空间用于手部姿态估计 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1702.03431)\n- CVAE-GAN：通过非对称训练进行细粒度图像生成 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1703.10155)\n- 使用GAN进行分类中的数据增强 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1711.00648)\n- 深度生成对抗网络去除压缩伪影 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1704.02518)\n- 用于压缩感知的深度生成对抗网络（GANCS）可自动化MRI扫描 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1706.00051)\n- 用于逼真前列腺病灶MRI合成的深度生成对抗神经网络 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1708.00129)\n- 基于对抗网络拉普拉斯金字塔的深度生成图像模型 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1506.05751) [[代码]](https:\u002F\u002Fgithub.com\u002Ffacebook\u002Feyescream) [[博客]](http:\u002F\u002Fsoumith.ch\u002Feyescream\u002F)\n- 超越均方误差的深度多尺度视频预测 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1511.05440) [[代码]](https:\u002F\u002Fgithub.com\u002Fdyelax\u002FAdversarial_Video_Generation)\n- 用于遥感图像的深度无监督表征学习 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1612.08879)\n- DeLiGAN：针对多样且有限数据的生成对抗网络 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1706.02071)\n- 保留深度结构的场景图像生成 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1706.00212)\n- DualGAN：用于图像到图像转换的无监督双向学习 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1704.02510) [[代码]](https:\u002F\u002Fgithub.com\u002Fwiseodd\u002Fgenerative-models)\n- 用于未来流嵌入式视频预测的双运动GAN [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1708.00284)\n- 使用注意力GAN高效实现大规模图像超分辨率 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1812.04821) [[论文]](https:\u002F\u002Fdigitalcommons.wpi.edu\u002Fetd-theses\u002F1256\u002F) [[新闻]](https:\u002F\u002Fwww.wpi.edu\u002Fnews\u002Fannouncements\u002Fdata-science-ms-thesis-presentation-xiaozhou-zou)\n- ExprGAN：可控表情强度的脸部表情编辑 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1709.03842)\n- 使用条件生成对抗网络进行人脸老化 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1702.01983)\n- 
使用生成对抗网络进行人脸迁移 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1710.06090)\n- 多光谱条件生成对抗网络在卫星影像上进行云层去除 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1710.04835)\n- 基于空间条件生成对抗网络的徒手超声图像仿真 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1707.05392)\n- 从源域到目标域再返回：对称双向自适应GAN [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1705.08824)\n- 基于循环神经网络的全分辨率图像压缩 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1608.05148)\n- 用于生物图像合成的GAN [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1708.04692)\n- GeneGAN：从非配对数据中学习对象变形和属性子空间 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1705.04932) [[代码]](https:\u002F\u002Fgithub.com\u002FPrinsphield\u002FGeneGAN)\n- 使用生成对抗网络生成保持身份特征的人脸 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1706.03227)\n- 生成以适应：利用生成对抗网络对齐领域 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1704.01705)\n- 用于人体动作合成的生成对抗图卷积网络 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.11191) [[代码]](https:\u002F\u002Fgithub.com\u002FDegardinBruno\u002FKinetic-GAN)\n- 用于监控中人物属性识别的生成对抗模型 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1707.02240)\n- 基于ResNet的条件图像恢复生成对抗网络 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1707.04881)\n- 基于生成对抗网络的偏振热像可见面合成 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1708.02681)\n- 用于视频超链接中多模态表征学习的生成对抗网络 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1705.05103)\n- 生成对抗文本到图像合成 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1605.05396) [[代码]](https:\u002F\u002Fgithub.com\u002Fpaarthneekhara\u002Ftext-to-image)\n- 自然图像流形上的生成式视觉操纵 [[项目]](http:\u002F\u002Fwww.eecs.berkeley.edu\u002F~junyanz\u002Fprojects\u002Fgvm\u002F) [[YouTube]](https:\u002F\u002Fyoutu.be\u002F9c4z6YsBGQ0) [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1609.03552) [[代码]](https:\u002F\u002Fgithub.com\u002Fjunyanz\u002FiGAN)\n- 3D形状的全局到局部生成模型 [[项目]](http:\u002F\u002Fvcc.szu.edu.cn\u002Fresearch\u002F2018\u002FG2L)[[代码]](https:\u002F\u002Fgithub.com\u002FHao-HUST\u002FG2LGAN)\n- GP-GAN：基于地标合成人脸的性别保持GAN 
[[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1710.00962)\n- GP-GAN：迈向逼真高分辨率图像融合 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1703.07195)\n- 半监督指导InfoGAN [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1707.04487)\n- 如何用生成对抗网络欺骗放射科医生？肺癌诊断的视觉图灵测试 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1710.09762)\n- 基于3D生成对抗网络的分层细节增强网格状形状生成 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1709.07581)\n- 使用条件生成对抗网络进行高质量人脸图像超分辨率 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1707.00737)\n- 使用多对抗网络进行高质量人脸照片到素描合成 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1710.10182)\n- 使用条件生成对抗网络进行图像去雨处理 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1701.05957)\n- 使用变分信息生成对抗网络进行图像生成和编辑 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1701.04568)\n- 使用条件生成对抗网络进行图像到图像转换 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1611.07004) [[代码]](https:\u002F\u002Fgithub.com\u002Fphillipi\u002Fpix2pix)\n- 改进用于3D物体生成和重建的对抗系统 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1707.09557) [[代码]](https:\u002F\u002Fgithub.com\u002FEdwardSmith1884\u002F3D-IWGAN)\n- 使用条件生成对抗网络改善异构人脸识别 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1709.02848)\n- 通过人类交互改进图像生成模型 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1709.10459)\n- 使用生成对抗网络模仿驾驶员行为 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1701.06699)\n- 使用生成对抗网络进行交互式3D建模 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1706.05170)\n- 使用条件生成对抗网络集合构建术中器官运动模型 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1709.02255)\n- 可逆条件GAN用于图像编辑 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1611.06355) [[论文]](https:\u002F\u002Fc4209155-a-62cb3a1a-s-sites.googlegroups.com\u002Fsite\u002Fnips2016adversarial\u002FWAT16_paper_8.pdf)\n- 用于行人重识别的联合判别与生成学习 [[项目]](http:\u002F\u002Fzdzheng.xyz\u002FDG-Net\u002F) [[论文]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1904.07223) [[YouTube]](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=ubCrEAIpQs4) [[哔哩哔哩]](https:\u002F\u002Fwww.bilibili.com\u002Fvideo\u002Fav51439240) 
[[海报]](http:\u002F\u002Fzdzheng.xyz\u002Fimages\u002FDGNet_poster.pdf) [[代码]](https:\u002F\u002Fgithub.com\u002FNVlabs\u002FDG-Net)\n- 用于人脸图像逆向光照的标签去噪对抗网络（LDAN）[[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1709.01993)\n- 学习驾驶模拟器 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1608.01230)\n- 学习用于高分辨率艺术品合成的生成对抗网络 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1708.09533)\n- 通过3D生成对抗建模学习对象形状的概率潜在空间 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1610.07584)\n- 通过对抗训练从模拟和无监督图像中学习 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1612.07828)\n- 使用生成对抗网络学习发现跨域关系 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1703.05192)\n- 学习使用生成对抗网络生成椅子 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1705.10413)\n- 学习使用多阶段动态生成对抗网络生成延时视频 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1709.07592)\n- 使用带有Wasserstein距离和感知损失的生成对抗网络进行低剂量CT图像去噪 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1708.00961)\n- MARTA GAN：用于遥感图像分类的无监督表征学习 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1612.08879)\n- 使用生成对抗网络创建百万像素尺寸图像 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1706.00082)\n- 基于对抗神经网络的显微镜细胞分割 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1709.05860)\n- MoCoGAN：分解运动与内容以生成视频 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1707.04993)\n- 多视角生成对抗网络 [[论文]](https:\u002F\u002Fc4209155-a-62cb3a1a-s-sites.googlegroups.com\u002Fsite\u002Fnips2016adversarial\u002FWAT16_paper_13.pdf)\n- 带有内省对抗网络的神经照片编辑 [[论文]](https:\u002F\u002Fc4209155-a-62cb3a1a-s-sites.googlegroups.com\u002Fsite\u002Fnips2016adversarial\u002FWAT16_paper_15.pdf) [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1609.07093)\n- 使用GAN进行组织病理学图像的染色风格迁移学习 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1710.08543)\n- 通过串联对抗网络进行轮廓着色 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1704.08834)\n- 用于图像到图像转换的感知对抗网络 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1706.09138)\n- 用于小目标检测的感知生成对抗网络 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1706.05274)\n- 使用生成对抗网络实现逼真单张图像超分辨率 
[[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1609.04802)\n- 姿势引导的人物图像生成 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1705.09368)\n- 使用马尔可夫生成对抗网络预计算实时纹理合成 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1604.04382)\n- 用于视觉段落生成的递归主题转换GAN [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1703.07022)\n- RenderGAN：生成逼真的标注数据 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1611.01331)\n- 3D点云的表征学习和对抗性生成 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1707.02392)\n- 使用局部显著性图和用于图像超分辨率的生成对抗网络进行视网膜血管分割 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1710.04783)\n- 使用生成对抗网络在眼底图像中进行视网膜血管分割 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1706.09318)\n- SAD-GAN：利用生成对抗网络进行自动驾驶模拟 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1611.08788)\n- SalGAN：利用生成对抗网络进行视觉显著性预测 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1701.01081v2)\n- SegAN：具有多尺度L1损失的医学图像分割对抗网络 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1706.01805)\n- SeGAN：分割并生成不可见的内容 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1703.10239)\n- 基于深度生成模型的语义图像修复 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1607.07539)\n- EdgeConnect：利用对抗边缘学习进行生成式图像修复 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1901.00212) [[代码]](https:\u002F\u002Fgithub.com\u002Fknazeri\u002Fedge-connect)\n- 基于对抗学习的语义图像合成 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1707.06873)\n- 使用对抗网络进行语义分割 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1611.08408)\n- 对生成对抗网络的潜在空间进行语义分解 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1705.07904)\n- 半潜伏GAN：学习从属性生成和修改人脸图像 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1704.02166)\n- 基于上下文条件生成对抗网络的半监督学习 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1611.06430)\n- 基于条件生成对抗网络的锐度感知低剂量CT去噪 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1708.06453)\n- 同时进行彩色和深度超分辨率的条件生成对抗网络 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1708.09105)\n- SingleGAN：通过单一生成器网络结合多种生成对抗学习实现图像到图像转换 
[[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1810.04991) [[代码]](https:\u002F\u002Fgithub.com\u002FXiaoming-Yu\u002FSingleGAN)\n- 基于原始深度输入的生成对抗模仿学习实现社会合规导航 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1710.02543)\n- StackGAN：利用堆叠生成对抗网络实现文本到逼真图像的合成 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1612.03242)\n- StackGAN++：利用堆叠生成对抗网络实现逼真图像合成 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1710.10916)\n- 使用增强型残差U-net和辅助分类器GAN对素描进行风格迁移 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1706.03319)\n- 用于图像显著性检测的监督对抗网络 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1704.07242)\n- 利用多通道生成对抗网络（GANs）合成正电子发射断层扫描（PET）图像 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1707.09747)\n- 使用GAN合成丝状结构图像 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1706.02185)\n- 使用iDCGAN进行合成虹膜演示攻击 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1710.10565)\n- 由双重生成对抗网络合成的医疗图像 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1709.01872)\n- TAC-GAN：文本条件辅助分类器生成对抗网络 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1703.06412)\n- 带有奇异值截断的时序生成对抗网络 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1611.06624)\n- TextureGAN：利用纹理贴片控制深度图像合成 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1706.02823)\n- 基于空间生成对抗网络的纹理合成 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1611.08207v3) [[代码]](https:\u002F\u002Fgithub.com\u002Fubergmann\u002Fspatial_gan)\n- 文本适应性生成对抗网络：用自然语言操纵图像 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1810.11919) [[代码]](https:\u002F\u002Fgithub.com\u002Fwoozzu\u002Ftagan)\n- 条件类比GAN：在人物图像上交换时尚单品 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1709.04695)\n- TopoAL：一种面向拓扑的道路分割对抗学习方法 [[论文]](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2020\u002Fpapers_ECCV\u002Fpapers\u002F123720222.pdf)\n- TopoGAN：一种面向拓扑的生成对抗网络 [[论文]](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2020\u002Fpapers_ECCV\u002Fpapers\u002F123480120.pdf)\n- 朝着对抗性视网膜图像合成迈进 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1701.08974) 
[[代码]](https:\u002F\u002Fgithub.com\u002Fcostapt\u002Fvess2ret) [[演示]](http:\u002F\u002Fvess2ret.inesctec.pt\u002Fretina)\n- 通过条件GAN迈向多样且自然的图像描述 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1703.06029)\n- 朝着利用生成对抗网络自动创作动漫角色迈进 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1708.05509)\n- UGAN：利用生成对抗网络增强水下图像 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1801.04011)\n- 由GAN生成的未标记样本可在体外提升行人重识别基线 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1701.07717) [[代码]](https:\u002F\u002Fgithub.com\u002Flayumi\u002FPerson-reID_GAN)\n- 使用循环一致对抗网络进行非配对图像到图像转换 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1703.10593)\n- 基于分类生成对抗网络的无监督和半监督学习 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1511.06390)\n- 利用生成对抗网络进行无监督异常检测以指导标记物发现 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1703.05921)\n- 无监督跨域图像生成 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1611.02200)\n- 通过生成对抗网络实现无监督多样化着色 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1702.06674)\n- 基于生成对抗网络的无监督像素级领域适应 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1612.05424)\n- 通过可重构生成对抗网络进行无监督视觉属性迁移 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1707.09798)\n- VIGAN：利用生成对抗网络填补缺失视图 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1708.06724)\n- WaterGAN：无监督生成网络使单目水下图像实现实时色彩校正 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1702.07392)\n- 用于3D重建的弱监督生成对抗网络 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1705.10904)\n- TomoGAN：基于生成对抗网络的低剂量X射线断层扫描 [[学者]](https:\u002F\u002Fscholar.google.ca\u002Fscholar?hl=en&as_sdt=0%2C5&q=TomoGAN%3A+Low-Dose+X-Ray+Tomography+with+Generative+Adversarial+Networks&btnG=) [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1902.07582)\n\n## 应用其他\n- 自然语言的对抗生成 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1705.10929)\n- 面向语言生成的对抗排序 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1705.11001)\n- 用于半监督文本分类的对抗训练方法 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1605.07725) 
[[论文]](https:\u002F\u002Fc4209155-a-62cb3a1a-s-sites.googlegroups.com\u002Fsite\u002Fnips2016adversarial\u002FWAT16_paper_12.pdf)\n- 体绘制的生成模型（A Generative Model for Volume Rendering）\n- ChemGAN药物发现挑战：AI能否再现天然化学多样性？[[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1708.08227)\n- 基于GAN生成针对黑盒攻击的对抗性恶意软件样本 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1702.05983)\n- 使用生成对抗网络生成多标签离散电子健康记录 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1703.06490)\n- 无需预训练的循环生成对抗网络语言生成 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1706.01399)\n- 学习通过对抗神经密码学保护通信 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1610.06918) [[博客]](https:\u002F\u002Fblog.acolyer.org\u002F2017\u002F02\u002F10\u002Flearning-to-protect-communications-with-adversarial-neural-cryptography\u002F)\n- 借助泄露信息的对抗训练生成长文本 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1709.08624)\n- MidiNet：基于一维和二维条件的符号域音乐生成卷积生成对抗网络 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1703.10847)\n- MuseGAN：使用多轨序列生成对抗网络进行符号域音乐生成与伴奏 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1709.06298)\n- 利用生成对抗神经网络重建三维多孔介质 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1704.03225) [[代码]](https:\u002F\u002Fgithub.com\u002FLukasMosser\u002FPorousMediaGan)\n- SEGAN：语音增强生成对抗网络 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1703.09452)\n- 基于深度网络的紧凑文档表示的半监督学习 [[论文]](http:\u002F\u002Fwww.cs.nyu.edu\u002F~ranzato\u002Fpublications\u002Franzato-icml08.pdf)\n- SSGAN：基于生成对抗网络的安全隐写术 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1707.01613)\n- 隐写生成对抗网络 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1703.05502)\n- 朝着将概念空间嵌入神经表征的方向发展 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1706.04825)\n\n## 幽默\n- 阻止GAN暴力：生成非对抗性网络 [[arXiv]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1703.02528)","# really-awesome-gan 快速上手指南\n\n**重要说明**：`really-awesome-gan` 并非一个可直接安装运行的软件库或框架，而是一个**生成对抗网络（GAN）领域的论文、教程、代码实现及资源汇总清单**。因此，本指南将指导你如何利用该列表中的资源，快速搭建环境并运行其中推荐的经典 GAN 代码示例。\n\n## 环境准备\n\n在开始之前，请确保你的开发环境满足以下要求：\n\n*   
**操作系统**：Linux (推荐), macOS, 或 Windows (需配置 WSL2)\n*   **Python 版本**：Python 3.6 或更高版本\n*   **深度学习框架**：推荐使用 **PyTorch** 或 **TensorFlow**（列表中大多数现代示例基于 PyTorch）\n*   **硬件加速**：建议配备 NVIDIA GPU 并安装对应的 CUDA 驱动，以加速模型训练\n\n### 前置依赖安装\n\n建议使用 `conda` 或 `venv` 创建虚拟环境。以下以 PyTorch 为例（使用国内清华源加速）：\n\n```bash\n# 创建虚拟环境\nconda create -n gan_env python=3.8\nconda activate gan_env\n\n# 安装 PyTorch (使用清华大学镜像源)\npip install torch torchvision torchaudio --index-url https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n\n# 安装其他常用依赖\npip install numpy matplotlib scipy tqdm --index-url https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n## 获取资源与代码\n\n由于本项目是资源列表，你需要从列表中选择一个具体的代码仓库进行克隆。列表中推荐了几个适合新手入门的极简实现（\"100 行代码系列\"）。\n\n以下以复现经典 **GAN (MNIST)** 为例：\n\n```bash\n# 克隆包含多个经典 GAN 复现的代码库\ngit clone https:\u002F\u002Fgithub.com\u002FMaximeVandegar\u002FPapers-in-100-Lines-of-Code.git\ncd Papers-in-100-Lines-of-Code\u002FGenerative_Adversarial_Networks\n```\n\n*注：如果 GitHub 连接缓慢，可尝试使用国内镜像站下载或使用代理加速。*\n\n## 基本使用\n\n进入目录后，你可以直接运行提供的 Python 脚本来训练模型并生成图像。\n\n### 1. 运行训练脚本\n\n在项目根目录下执行：\n\n```bash\npython main.py\n```\n\n*   该脚本将自动下载 MNIST 数据集。\n*   开始训练生成器（Generator）和判别器（Discriminator）。\n*   训练过程中会实时打印 Loss 信息。\n\n### 2. 
查看结果\n\n训练完成后（或训练过程中），脚本通常会在当前目录下生成输出文件夹（如 `results\u002F` 或 `images\u002F`），里面包含生成的假图片。\n\n你可以使用以下命令快速预览生成的图像（如果项目中未包含查看脚本，可使用 Python 直接查看）：\n\n```bash\n# 使用 Python 和 matplotlib 查看最新生成的图像\npython -c \"import matplotlib.pyplot as plt; import os; files = sorted(os.listdir('results')); img = plt.imread(f'results\u002F{files[-1]}'); plt.imshow(img); plt.show()\"\n```\n\n## 进阶探索\n\n`really-awesome-gan` 列表中还包含了更多高级应用的代码链接，你可以按照相同步骤克隆并运行：\n\n*   **DCGAN**: 深度卷积生成对抗网络\n*   **CycleGAN**: 无配对图像转换（如马变斑马）\n*   **Pix2Pix**: 有配对图像转换（如素描变照片）\n*   **WGAN**: 改进训练稳定性的 Wasserstein GAN\n\n无需重新克隆，只需在已克隆的仓库内切换到对应模型的子目录再运行即可。例如：\n```bash\ncd ..\u002FCycle_Consistent_Adversarial_Nets\npython main.py\n```\n\n建议阅读原仓库中的 `README.md` 以获取特定模型的详细参数调整说明。","某计算机视觉初创团队正致力于研发一款虚拟试衣应用，急需寻找能够根据人体姿态生成逼真服装图像的生成对抗网络（GAN）前沿方案。\n\n### 没有 really-awesome-gan 时\n- 研究人员需在 Google Scholar 和 arXiv 上盲目搜索海量论文，难以区分哪些是真正具有落地价值的核心成果，哪些仅是理论探索。\n- 面对 GAN 训练不稳定、模式崩溃等经典难题，团队缺乏系统性的调试指南和“避坑”技巧，导致大量时间浪费在反复试错上。\n- 难以快速定位到与“姿态引导图像生成”直接相关的开源代码库，往往找到的是只有理论公式而无实现细节的论文。\n- 团队内部缺乏统一的学习路径，新成员需要花费数周时间自行整理教程、视频和博客，严重拖慢项目启动进度。\n\n### 使用 really-awesome-gan 后\n- 团队直接通过分类列表锁定了《Pose Guided Person Image Generation》等关键论文，迅速明确了技术选型方向，节省了数周的文献调研时间。\n- 利用收录的\"How to Train a GAN?\"等实战博客和 NIPS 教程，工程师快速掌握了稳定训练的技巧，显著减少了模型调优周期。\n- 通过\"Code\"板块直接获取了经过验证的开源实现参考，将原本需要从头编写的核心算法模块缩短为几天的集成工作。\n- 借助汇总的视频教程和研讨会资源，团队成员在短时间内建立了从理论基础到生产部署的完整知识体系，实现了高效协作。\n\nreally-awesome-gan 通过将分散的 GAN 学术资源结构化，帮助开发者从茫茫论文海中精准导航，极大加速了从理论研究到工程落地的转化过程。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fnightrome_really-awesome-gan_9fbd65f3.png","nightrome","Holger Caesar","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fnightrome_e14613b8.png","Assistant Professor at TU Delft. 
Author of the COCO-Stuff and nuScenes datasets.","TU Delft","Delft",null,"http:\u002F\u002Fwww.it-caesar.com","https:\u002F\u002Fgithub.com\u002Fnightrome",3777,706,"2026-04-04T14:47:56",5,"","未说明",{"notes":92,"python":90,"dependencies":93},"该仓库（really-awesome-gan）并非一个可执行的 AI 工具或代码库，而是一个关于生成对抗网络（GAN）的论文、教程、博客和视频的资源列表（Awesome List）。README 中明确提到维护者自 2017 年 11 月起已停止更新此列表。因此，该项目本身没有运行环境、依赖库或硬件需求。列表中提到的个别代码示例（如 PyTorch 实现）需参考其各自链接的独立仓库获取具体环境要求。",[],[14,37],"2026-03-27T02:49:30.150509","2026-04-06T07:14:55.744870",[98,103,108,113,118,123],{"id":99,"question_zh":100,"answer_zh":101,"source_url":102},16810,"如何向该列表提交新的 GAN 论文或代码？","您可以直接在 GitHub 上创建一个新 Issue，在正文中提供论文的标题、摘要简介以及论文链接（如 arXiv）和代码仓库链接。维护者审核后会将其添加到列表中。例如，用户通过提供论文标题 \"SingleGAN\" 及相关链接成功被收录。","https:\u002F\u002Fgithub.com\u002Fnightrome\u002Freally-awesome-gan\u002Fissues\u002F12",{"id":104,"question_zh":105,"answer_zh":106,"source_url":107},16811,"列表中的论文是否会标注发表年份？","维护者认为添加年份是有用的，但目前尚未全面实施。关于按应用领域（如图像、文本等）分类的建议，维护者指出大多数 GAN 论文都涉及图像处理，且部分论文侧重于理论洞察而非特定模态的实际改进，因此难以进行严格的分类。","https:\u002F\u002Fgithub.com\u002Fnightrome\u002Freally-awesome-gan\u002Fissues\u002F18",{"id":109,"question_zh":110,"answer_zh":111,"source_url":112},16812,"Step-Up GAN 的完整论文标题是什么？","Step-Up GAN 并非传统期刊论文，而是一项关于数据延续方法的研究工作，其详细内容包含在两篇硕士论文中。相关资源可通过以下链接获取：1. https:\u002F\u002Fdigitalcommons.wpi.edu\u002Fetd-theses\u002F1256\u002F 2. 
https:\u002F\u002Fwww.wpi.edu\u002Fnews\u002Fannouncements\u002Fdata-science-ms-thesis-presentation-xiaozhou-zou","https:\u002F\u002Fgithub.com\u002Fnightrome\u002Freally-awesome-gan\u002Fissues\u002F17",{"id":114,"question_zh":115,"answer_zh":116,"source_url":117},16813,"是否有计划将 GAN 的元结构转化为可热插拔生成器和判别器的 Python 类？","虽然有用户提出了将 GAN 元结构形式化并转化为支持热插拔组件的 Python 类的构想，但项目维护者表示由于已有一段时间未深入研究 GAN，无法直接参与开发。不过，维护者对该想法持支持态度，并表示愿意在项目主页顶部链接到此类相关项目。","https:\u002F\u002Fgithub.com\u002Fnightrome\u002Freally-awesome-gan\u002Fissues\u002F23",{"id":119,"question_zh":120,"answer_zh":121,"source_url":122},16814,"AttGAN（任意面部属性编辑）的论文和代码在哪里可以找到？","AttGAN 的相关资源已被收录。论文标题为《Arbitrary Facial Attribute Editing: Only Change What You Want》，可在 arXiv 查看：https:\u002F\u002Farxiv.org\u002Fabs\u002F1711.10678；TensorFlow 版本的代码仓库地址为：https:\u002F\u002Fgithub.com\u002FLynnHo\u002FAttGAN-Tensorflow","https:\u002F\u002Fgithub.com\u002Fnightrome\u002Freally-awesome-gan\u002Fissues\u002F6",{"id":124,"question_zh":125,"answer_zh":126,"source_url":127},16815,"哪里可以找到使用生成对抗网络重建三维多孔介质的代码？","相关代码已开源，您可以访问以下 GitHub 仓库获取用于重建三维多孔介质的 GAN 代码：https:\u002F\u002Fgithub.com\u002FLukasMosser\u002FPorousMediaGan","https:\u002F\u002Fgithub.com\u002Fnightrome\u002Freally-awesome-gan\u002Fissues\u002F1",[]]