[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-SerialLain3170--AwesomeAnimeResearch":3,"tool-SerialLain3170--AwesomeAnimeResearch":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 
代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",151314,2,"2026-04-11T23:32:58",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108322,"2026-04-10T11:39:34",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 
协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[52,13,15,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":32,"last_commit_at":59,"category_tags":60,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":77,"owner_email":78,"owner_twitter":79,"owner_website":80,"owner_url":81,"languages":82,"stars":83,"forks":84,"last_commit_at":85,"license":82,"difficulty_score":86,"env_os":87,"env_gpu":88,"env_ram":88,"env_deps":89,"category_tags":92,"github_topics":93,"view_count":32,"oss_zip_url":82,"oss_zip_packed_at":82,"status":17,"created_at":98,"updated_at":99,"faqs":100,"releases":136},6747,"SerialLain3170\u002FAwesomeAnimeResearch","AwesomeAnimeResearch","Papers, repository and other data about anime or manga research. 
Please let me know if you have information that the list does not include.","AwesomeAnimeResearch 是一个专注于动漫与漫画领域学术研究的开源资源聚合平台。它系统性地整理了该方向的前沿论文、代码仓库及关键数据集，旨在解决研究人员在寻找高质量动漫数据时面临的分散与匮乏难题，为相关算法的开发与验证提供坚实基础。\n\n该项目特别适合计算机视觉、自然语言处理领域的科研人员、开发者以及关注二次元内容智能化的技术爱好者使用。无论是希望训练角色识别模型、探索漫画分镜理解，还是研究生成式动漫图像检测，都能在此找到对应的支持资源。\n\n其核心亮点在于收录了多个具有里程碑意义的数据集，涵盖从大规模 AI 生成动漫图像检测（AnimeDL-2M）、多模态漫画翻译，到精细化的角色对话标注（Manga109Dialog）及拟声词识别（COO）等细分场景。这些资源不仅规模庞大且标注专业，有效填补了通用数据集在动漫特定风格与叙事结构上的空白。通过一站式汇聚全球最新成果，AwesomeAnimeResearch 极大地降低了进入该垂直研究领域的门槛，推动了动漫智能分析技术的社区协作与创新。","# AwesomeAnimeResearch\n\nEverything related to Anime.\\\nFor the **Comics\u002FManga** papers, please refer to [🔥 Awesome Comics Understanding](https:\u002F\u002Fgithub.com\u002Femanuelevivoli\u002Fawesome-comics-understanding)\\\nFor the **2D cartoon video** research, please refer to [🚀 Awesome-Animation-Research](https:\u002F\u002Fgithub.com\u002Fzhenglinpan\u002FAwesome-Animation-Research)\n\n## 📂 Datasets\n\n  - \u003Cdetails>\n      \u003Csummary>Overview of Anime\u002FComic\u002FManga Datasets\u003C\u002Fsummary>\n\n      | **Year** | **Conference \u002F Journal** | **Title** | **Links** |\n      | ---- | ---- | ---- | ---- | \n      | 2025 | arXiv | [ComicScene154: A Scene Dataset for Comic Analysis](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2508.16190) | |\n      | 2025 | COLING | [AnimeDL-2M: Million-Scale AI-Generated Anime Image Detection and Localization in Diffusion Era](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.11015) | [HP](https:\u002F\u002Fflytweety.github.io\u002FAnimeDL2M\u002F) |\n      | 2025 | COLING | [Context-Informed Machine Translation of Manga using Multimodal Large Language Models](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2411.02589) | [GitHub](https:\u002F\u002Fgithub.com\u002Fplippmann\u002Fmultimodal-manga-translation) |\n      | 2024 | Arxiv | [Tails Tell Tales: Chapter-Wide Manga Transcriptions with Character Names](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.00298) | 
[GitHub](https:\u002F\u002Fgithub.com\u002Fragavsachdeva\u002Fmagi) |\n      | 2024 | Arxiv | [CoMix: A Comprehensive Benchmark for Multi-Task Comic Understanding](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.03550) | [GitHub](https:\u002F\u002Fgithub.com\u002Femanuelevivoli\u002Fcomix-dataset) |\n      | 2023 | CVPR | [Human-Art: A Versatile Human-Centric Dataset Bridging Natural and Artificial Scenes](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2023\u002Fpapers\u002FJu_Human-Art_A_Versatile_Human-Centric_Dataset_Bridging_Natural_and_Artificial_Scenes_CVPR_2023_paper.pdf) | [Github](https:\u002F\u002Fgithub.com\u002FIDEA-Research\u002FHumanArt) | \n      | 2023 | Arxiv | [Manga109Dialog: A Large-scale Dialogue Dataset for Comics Speaker Detection](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2306.17469.pdf) |  [Github](https:\u002F\u002Fgithub.com\u002Fmanga109\u002Fpublic-annotations) |\n      | 2023 | ACM-TG | [Semi-supervised reference-based sketch extraction using a contrastive learning framework](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1FELTVl73OrQ9Q0uBXN7jLbRStSsF-NgM\u002Fview?pli=1) |  [Github](https:\u002F\u002Fgithub.com\u002FChanuku\u002F4skst) |\n      | 2023 | ACM-TG |  [Parsing-Conditioned Anime Translation: A New Dataset and Method](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3585002) | [Github](https:\u002F\u002Fgithub.com\u002Fzsl2018\u002FStyleAnime) | \n      | 2022 | NeurIPS-DB | [AnimeRun: 2D Animation Visual Correspondence from Open Source 3D Movies](https:\u002F\u002Fopenreview.net\u002Fpdf?id=04OPxj0jGN_) |  [HP](https:\u002F\u002Flisiyao21.github.io\u002Fprojects\u002FAnimeRun) |\n      | 2022 | ECCV | [COO: Comic Onomatopoeia Dataset for Recognizing Arbitrary or Truncated Texts](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2207.04675.pdf) |   [Github](https:\u002F\u002Fgithub.com\u002Fku21fan\u002FCOO-Comic-Onomatopoeia) |\n      | 2022 | CVPR Workshop |[A Challenging Benchmark of Anime Style 
Recognition](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022W\u002FVDU\u002Fpapers\u002FLi_A_Challenging_Benchmark_of_Anime_Style_Recognition_CVPRW_2022_paper.pdf) |  | \n      | 2022 | ECCV | [AnimeCeleb: Large-Scale Animation CelebHeads Dataset for Head Reenactment](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2111.07640.pdf) |   [Github](https:\u002F\u002Fgithub.com\u002Fkangyeolk\u002FAnimeCeleb) |\n      | 2021 | Arxiv | [DAF:RE: A CHALLENGING, CROWD-SOURCED, LARGE-SCALE, LONG-TAILED DATASET FOR ANIME CHARACTER RECOGNITION](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2101.08674.pdf)  | [Github](https:\u002F\u002Fgithub.com\u002Farkel23\u002Fanimesion) |\n      | 2020 | ACM-MM | [Cartoon Face Recognition: A Benchmark Dataset](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1907.13394.pdf) |   [Github](https:\u002F\u002Fgithub.com\u002Fluxiangju-PersonAI\u002FiCartoonFace) |\n      | 2020 | ECCV Workshop | [Unconstrained Text Detection in Manga: a New Dataset and Baseline](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2009.04042.pdf) |  [Github](https:\u002F\u002Fgithub.com\u002Fjuvian\u002FManga-Text-Segmentation) |\n      | 2020 | ECCV | [DanbooRegion: An Illustration Region Dataset](https:\u002F\u002Flllyasviel.github.io\u002FDanbooRegion\u002Fpaper\u002Fpaper.pdf) |  [Github](https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FDanbooRegion) |\n      | 2020 | MMUL | [Building a Manga Dataset ”Manga109” with Annotations for Multimedia Applications](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2005.04425.pdf) |  [HP](http:\u002F\u002Fwww.manga109.org\u002Fja\u002Fdownload_s.html) |\n      | 2019 | CVPR | [Creative Flow+ Dataset](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fpapers\u002FShugrina_Creative_Flow_Dataset_CVPR_2019_paper.pdf) |  [HP](https:\u002F\u002Fwww.cs.toronto.edu\u002Fcreativeflow\u002F) |\n      | 2017 | CVPR | [The Amazing Mysteries of the Gutter: Drawing Inferences Between Panels in Comic Book 
Narratives](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1611.05118.pdf) | [Github](https:\u002F\u002Fgithub.com\u002Fmiyyer\u002Fcomics) |\n    \u003C\u002Fdetails>\n\n## 📜 Papers\n\n### Image Generation\n\n  - \u003Cdetails>\n      \u003Csummary>Generation\u003C\u002Fsummary>\n\n      | **Year** | **Conference \u002F Journal** | **Title** | **Links** |\n      | ---- | ---- | ---- | ---- | \n      | 2025 | Arxiv | [SakugaFlow: A Stagewise Illustration Framework Emulating the Human Drawing Process and Providing Interactive Tutoring for Novice Drawing Skills](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.08443) | | |\n      | 2025 | Arxiv | [Interactive Drawing Guidance for Anime Illustrations with Diffusion Model](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2507.09140) |  |\n      | 2022 | Arxiv | [Combating Mode Collapse in GANs via Manifold Entropy Estimation](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2208.12055.pdf) | | [Github](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2208.12055.pdf) |\n      | 2022 | SIGGRAPH | [StyleGAN-NADA: CLIP-Guided Domain Adaptation of Image Generators](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2108.00946.pdf)  | [Github](https:\u002F\u002Fgithub.com\u002Frinongal\u002FStyleGAN-nada) |\n      | 2021 | ICCV | [DisUnknown: Distilling Unknown Factors for Disentanglement Learning](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2109.08090.pdf)  | [Github](https:\u002F\u002Fgithub.com\u002Fstormraiser\u002Fdisunknown) |\n      | 2021 | Arxiv | [CoPE: Conditional image generation using Polynomial Expansions](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2104.05077.pdf) | |\n      | 2021 | Arxiv | [Efficient Continual Adaptation for Generative Adversarial Networks](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2103.04032.pdf) | |\n      | 2021 | ElConRus | [Generating \"Ideal\" Anime Opening Frames Using Neural Networks](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9396557) | |\n      | 2021 | CVPR | [HistoGAN: Controlling Colors of 
GAN-Generated and Real Images via Color Histograms](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2011.11731.pdf)  | [Github](https:\u002F\u002Fgithub.com\u002Fmahmoudnafifi\u002FHistoGAN) |\n      | 2020 |  | [Generating Full-Body Standing Figures of Anime Characters and Its Style Transfer by GAN](https:\u002F\u002Fwaseda.repo.nii.ac.jp\u002F?action=pages_view_main&active_action=repository_view_main_item_detail&item_id=58145&item_no=1&page_id=13&block_id=21) | |\n      | 2020 | NeurIPS | [GAN Memory with No Forgetting](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2006.07543.pdf) | [Github](https:\u002F\u002Fgithub.com\u002FMiaoyunZhao\u002FGANmemory_LifelongLearning) |\n      | 2020 | Arxiv | [Classification Representations Can be Reused for Downstream Generations](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2004.07543.pdf) | |\n      | 2020 | Arxiv | [Autoencoding Generative Adversarial Networks](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2004.05472.pdf) | | [Github](https:\u002F\u002Fgithub.com\u002FConorLazarou\u002FAEGAN-keras) |\n      | 2019 | IEEE Access | [An Adaptive Control Algorithm for Stable Training of Generative Adversarial Networks](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8936350) | |\n      | 2019 | Arxiv | [Overcoming Long-term Catastrophic Forgetting through Adversarial Neural Pruning and Synaptic Consolidation](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1912.09091.pdf) | |\n      | 2019 | EG | [Towards Diverse Anime Face Generation: Active Label Completion and Style Feature Network](https:\u002F\u002Fdiglib.eg.org\u002Fbitstream\u002Fhandle\u002F10.2312\u002Fegs20191016\u002F065-068.pdf?sequence=1&isAllowed=y) | |\n      | 2018 | IJCNN | [Generate Novel Image Styles using Weighted Hybrid Generative Adversarial Nets](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8489080) | |\n      | 2018 | ECCV Workshop | [Full-body High-resolution Anime Generation with Progressive Structure-conditional Generative Adversarial 
Networks](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1809.01890v1.pdf) | [HP](https:\u002F\u002Fdena.com\u002Fintl\u002Fanime-generation\u002F) |\n      | 2017 | Comiket92 | [Towards the Automatic Anime Characters Creation with Generative Adversarial Networks](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1708.05509.pdf) | [HP](https:\u002F\u002Fmake.girls.moe\u002F#\u002F) |\n  \u003C\u002Fdetails>\n  \n  \n  - \u003Cdetails>\n      \u003Csummary>Few-shot\u003C\u002Fsummary>\n\n      | **Year** | **Conference \u002F Journal** | **Title** | **Links** |\n      | ---- | ---- | ---- | ---- | \n      | 2022 | CAVW | [Controlling StyleGANs Using Rough Scribbles via One-shot Learning](http:\u002F\u002Fwww.cgg.cs.tsukuba.ac.jp\u002F~endo\u002Fprojects\u002FStyleGANSparseControl\u002FCAVW_endo22_preprint.pdf)  | [HP](http:\u002F\u002Fwww.cgg.cs.tsukuba.ac.jp\u002F~endo\u002Fprojects\u002FStyleGANSparseControl\u002F) |\n      | 2022 | WACV | [Data InStance Prior (DISP) in Generative Adversarial Networks](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FWACV2022\u002Fpapers\u002FMangla_Data_InStance_Prior_DISP_in_Generative_Adversarial_Networks_WACV_2022_paper.pdf) | |\n      | 2021 | Arxiv | [MineGAN++: Mining Generative Models for Efficient Knowledge Transfer to Limited Data Domains](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2104.13742.pdf) | |\n      | 2020 | Arxiv | [DATA INSTANCE PRIOR FOR TRANSFER LEARNING IN GANS](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2012.04256.pdf) | |\n      | 2020 | CVPR Workshop | [Freeze the Discriminator: a Simple Baseline for Fine-Tuning GANs](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2002.10964.pdf) | [Github](https:\u002F\u002Fgithub.com\u002Fsangwoomo\u002FFreezeD) |\n      | 2020 | CVPR | [MineGAN: effective knowledge transfer from GANs to target domains with few 
images](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FWang_MineGAN_Effective_Knowledge_Transfer_From_GANs_to_Target_Domains_With_CVPR_2020_paper.pdf)  | [Github](https:\u002F\u002Fgithub.com\u002Fyaxingwang\u002FMineGAN) |\n      | 2020 | Arxiv | [FEW-SHOT ADAPTATION OF GENERATIVE ADVERSARIAL NETWORKS](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2010.11943.pdf) | [Github](https:\u002F\u002Fgithub.com\u002Fe-271\u002Ffew-shot-gan) |\n      | 2019 | ICCV | [Image Generation From Small Datasets via Batch Statistics Adaptation](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fpapers\u002FNoguchi_Image_Generation_From_Small_Datasets_via_Batch_Statistics_Adaptation_ICCV_2019_paper.pdf)  | [Github](https:\u002F\u002Fgithub.com\u002Fnogu-atsu\u002Fsmall-dataset-image-generation) |\n  \u003C\u002Fdetails>\n\n  - \u003Cdetails>\n      \u003Csummary>Interpretability\u003C\u002Fsummary>\n\n      | **Year** | **Conference \u002F Journal** | **Title** | **Links** |\n      | ---- | ---- | ---- | ---- | \n      | 2022 | Big Data | [Unsupervised Discovery of Disentangled Interpretable Directions for Layer-Wise GAN](https:\u002F\u002Flink.springer.com\u002Fchapter\u002F10.1007\u002F978-981-19-8331-3_2) | |\n      | 2022 | AAAI | [Self-supervised Enhancement of Latent Discovery in GANs](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2112.08835.pdf) | | \n      | 2021 | ACM-MM | [Discovering Density-Preserving Latent Space Walks in GANs for Semantic Image Transformations](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3474085.3475293) | | |\n      | 2021 | CVPR | [Discovering Interpretable Latent Space Directions of GANs Beyond Binary Attributes](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FYang_Discovering_Interpretable_Latent_Space_Directions_of_GANs_Beyond_Binary_Attributes_CVPR_2021_paper.pdf) | |\n      | 2021 | Arxiv | [EigenGAN: Layer-Wise Eigen-Learning for 
GANs](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2104.12476.pdf) | | [Github](https:\u002F\u002Fgithub.com\u002FLynnHo\u002FEigenGAN-Tensorflow) |\n      | 2021 | CVPR | [Surrogate Gradient Field for Latent Space Manipulation](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2104.09065.pdf)  | |\n      | 2021 | Arxiv | [Do Generative Models Know Disentanglement? Contrastive Learning is All You Need](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2102.10543.pdf) | | [Github](https:\u002F\u002Fgithub.com\u002Fxrenaa\u002FDisCo) |\n      | 2020 | Arxiv | [Unsupervised Discovery of Disentangled Manifolds in GANs](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2011.11842.pdf) | | |\n      | 2021 | CVPR | [Closed-Form Factorization of Latent Semantics in GANs](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2007.06600.pdf) | [Github](https:\u002F\u002Fgithub.com\u002Fgenforce\u002Fsefa) |\n      | 2020 | ICML | [Unsupervised Discovery of Interpretable Directions in the GAN Latent Space](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2002.03754.pdf) | [Github](https:\u002F\u002Fgithub.com\u002Fanvoynov\u002FGANLatentDiscovery) |\n      | 2019 | Arxiv | [RPGAN: GANs Interpretability via Random Routing](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1912.10920.pdf) | | [Github](https:\u002F\u002Fgithub.com\u002Fanvoynov\u002FRandomPathGAN) |\n  \u003C\u002Fdetails>\n\n  - \u003Cdetails>\n      \u003Csummary>Montage\u003C\u002Fsummary>\n\n      | **Year** | **Conference \u002F Journal** | **Title** | **Links** |\n      | ---- | ---- | ---- | ---- | \n      | 2022 | ICPR | [MontageGAN: Generation and Assembly of Multiple Components by GANs](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2205.15577.pdf) | [Github](https:\u002F\u002Fgithub.com\u002Fuchidalab\u002Fdocker-montage-gan)|\n      | 2022 | TOG | [Sprite-from-Sprite: Cartoon Animation Decomposition with Self-supervised Sprite Estimation](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3550454.3555439) | |\n  \u003C\u002Fdetails>\n\n  - 
\u003Cdetails>\n      \u003Csummary>Text-to-Image\u003C\u002Fsummary>\n\n      | **Year** | **Conference \u002F Journal** | **Title** | **Links** |\n      | ---- | ---- | ---- | ---- | \n      | 2023 | Arxiv | [Adding Conditional Control to Text-to-Image Diffusion Models](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2302.05543.pdf) | [Github](https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FControlNet) |\n      | 2022 | Arxiv | [DreamArtist: Towards Controllable One-Shot Text-to-Image Generation via Positive-Negative Prompt-Tuning](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2211.11337.pdf) |  [Github](https:\u002F\u002Fgithub.com\u002F7eu7d7\u002FDreamArtist-stable-diffusion) |\n  \u003C\u002Fdetails>\n\n### Image 2 Image Translation\n\n  - \u003Cdetails>\n      \u003Csummary>Face 2 Anime\u003C\u002Fsummary>\n      \n      | **Year** | **Conference \u002F Journal** | **Title** | **Links** |\n      | ---- | ---- | ---- | ---- | \n      | 2023 | Arxiv | [StyO: Stylize Your Face in Only One-Shot](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2303.03231.pdf) | |\n      | 2022 | TVCG | [Appearance-preserved Portrait-to-anime Translation via Proxy-guided Domain Adaptation](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9982378) | |\n      | 2022 | Arxiv | [Neural Optimal Transport](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2201.12220.pdf) | |\n      | 2022 | CVPR Workshop | [Cross-Domain Style Mixing for Face Cartoonization](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2205.12450.pdf) | [HP](https:\u002F\u002Fwebtoon.github.io\u002FWebtoonMe\u002Fen)|\n      | 2021 | Arxiv | [A Domain Gap Aware Generative Adversarial Network for Multi-domain Image Translation](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2110.10837.pdf) | |\n      | 2021 | ACM-TG| [AgileGAN: Stylizing Portraits by Inversion-Consistent Transfer Learning](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3450626.3459771) | [Github](https:\u002F\u002Fgithub.com\u002FGuoxianSong\u002FAgileGAN) 
|\n      | 2021 | Arxiv | [FINE-TUNING STYLEGAN2 FOR CARTOON FACE GENERATION](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2106.12445.pdf) | [Github](https:\u002F\u002Fgithub.com\u002Fhappy-jihye\u002FCartoon-StyleGAN) |\n      | 2021 | Arxiv | [GANs N’ Roses: Stable, Controllable, Diverse Image to Image Translation (works for videos too!)](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2106.06561.pdf) | [Github](https:\u002F\u002Fgithub.com\u002Fmchong6\u002FGANsNRoses) |\n      | 2021 | JSAI | [Multi-CartoonGAN for Conditional Artistic Face Translation ](https:\u002F\u002Fwww.jstage.jst.go.jp\u002Farticle\u002Fpjsai\u002FJSAI2021\u002F0\u002FJSAI2021_2N1IS2a01\u002F_pdf) | |\n      | 2021 | Arxiv | [AniGAN: Style-Guided Generative Adversarial Networks for Unsupervised Anime Face Generation](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2102.12593.pdf) | [Github](https:\u002F\u002Fgithub.com\u002Fbing-li-ai\u002FAniGAN) |\n      | 2021 | ICCECE | [Turn Real People into Anime Cartoonization](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9342433)  | |\n      | 2020 | NeurIPS Workshop | [A Note on Data Biases in Generative Models](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2012.02516.pdf) |  |\n      |  2020 | Arxiv | [Unsupervised Image-to-Image Translation via Pre-trained StyleGAN2 Network](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2010.05713.pdf) | [Github](https:\u002F\u002Fgithub.com\u002FHideUnderBush\u002FUI2I_via_StyleGAN2) |\n      |  2020 | Arxiv | [Few-shot Knowledge Transfer for Fine-grained Cartoon Face Generation](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2007.13332.pdf) | | \n      | 2020 | Arxiv | [Auto-Encoding for Shared Cross Domain Feature Representation and Image-to-Image Translation](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2006.11404.pdf) | |\n      | 2019 | ICASERT | [Generating Anime from Real Human Image with Adversarial Training](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8934465) | |\n      | 2019 | Arxiv | [Landmark Assisted 
CycleGAN for Cartoon Face Generation](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1907.01424v1.pdf) | |\n      | 2018 | CVPR | [DA-GAN: Instance-level Image Translation by Deep Attention Generative Adversarial Networks](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FMa_DA-GAN_Instance-Level_Image_CVPR_2018_paper.pdf) | [Github](https:\u002F\u002Fgithub.com\u002FRongpeng-Lin\u002FA-DA-GAN-architecture) |\n      | 2018 | Arxiv | [Twin-GAN – Unpaired Cross-Domain Image Translation with Weight-Sharing GANs](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1809.00946.pdf) | [Github](https:\u002F\u002Fgithub.com\u002Fjerryli27\u002FTwinGAN) |\n      | 2018 | ECCV | [Improving Shape Deformation in Unsupervised Image-to-Image Translation](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fpapers\u002FAaron_Gokaslan_Improving_Shape_Deformation_ECCV_2018_paper.pdf)  | [Github](https:\u002F\u002Fgithub.com\u002Fbrownvc\u002Fganimorph) |\n  \u003C\u002Fdetails>\n\n  - \u003Cdetails>\n      \u003Csummary>Selfie 2 Anime\u003C\u002Fsummary>\n      \n      | **Year** | **Conference \u002F Journal** | **Title** | **Links** |\n      | ---- | ---- | ---- | ---- | \n      | 2022 | CVPR | [Alleviating Semantics Distortion in Unsupervised Low-Level Image-to-Image Translation via Structure Consistency Constraint](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fpapers\u002FGuo_Alleviating_Semantics_Distortion_in_Unsupervised_Low-Level_Image-to-Image_Translation_via_Structure_CVPR_2022_paper.pdf)  | |\n      | 2022 | ICIP | [Hyprogan: Breaking the Dimensional wall From Human to Anime](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9897973)  | |\n      | 2022 | MMUL | [Unpaired Image-to-Image Translation using Negative Learning for Noisy Patches](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9780547) | |\n      | 2022 | CVPR | [Unpaired Cartoon Image Synthesis via Gated Cycle 
Mapping](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fpapers\u002FMen_Unpaired_Cartoon_Image_Synthesis_via_Gated_Cycle_Mapping_CVPR_2022_paper.pdf) | |\n      | 2022 | Arxiv | [UVCGAN: UNET VISION TRANSFORMER CYCLE-CONSISTENT GAN FOR UNPAIRED IMAGE-TO-IMAGE TRANSLATION](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2203.02557.pdf) | |\n      | 2021 | ICCV | [Unaligned Image-to-Image Translation by Learning to Reweight](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2109.11736.pdf) | |\n      | 2021 | Arxiv | [Good Artists Copy, Great Artists Steal: Model Extraction Attacks Against Image Translation Generative Adversarial Networks](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2104.12623.pdf) | |\n      | 2021 | Arxiv | [SPatchGAN: A Statistical Feature Based Discriminator for Unsupervised Image-to-Image Translation](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2103.16219.pdf) | |\n      | 2020 | MAPR | [Interpolation based Anime Face Style Transfer](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9237764) | |\n      | 2020 | ICML | [Feature Quantization Improves GAN Training](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2004.02088.pdf) | [Github](https:\u002F\u002Fgithub.com\u002FYangNaruto\u002FFQ-GAN) |\n      | 2020 | ECCV | [Unpaired Image-to-Image Translation using Adversarial Consistency Loss](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2003.04858.pdf) | [Github](https:\u002F\u002Fgithub.com\u002Fhyperplane-lab\u002FACL-GAN) |\n      | 2019 | IJCNN | [AttentionGAN: Unpaired Image-to-Image Translation using Attention-Guided Generative Adversarial Networks](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1911.11897.pdf)  | [Github](https:\u002F\u002Fgithub.com\u002FHa0Tang\u002FAttentionGAN) |\n      | 2020 | CVPR | [Breaking the cycle—Colleagues are all you need](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1911.10538.pdf) | [Github](https:\u002F\u002Fgithub.com\u002FOnr\u002FCouncil-GAN) |\n      | 2020 | ICLR | [U-GAT-IT: Unsupervised 
Generative Attentional Networks with Adaptive Layer-Instance Normalization for Image-to-Image Translation](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1907.10830.pdf) | [Github](https:\u002F\u002Fgithub.com\u002Ftaki0112\u002FUGATIT) |\n  \u003C\u002Fdetails>\n\n  - \u003Cdetails>\n      \u003Csummary>Photo 2 Anime\u003C\u002Fsummary>\n\n      | **Year** | **Conference \u002F Journal** | **Title** | **Links** |\n      | ---- | ---- | ---- | ---- | \n      | 2024 | IEICE TIS | [A Novel Double-Tail Generative Adversarial Network for Fast Photo Animation](https:\u002F\u002Fwww.jstage.jst.go.jp\u002Farticle\u002Ftransinf\u002FE107.D\u002F1\u002FE107.D_2023EDP7061\u002F_pdf) |[HP](https:\u002F\u002Ftachibanayoshino.github.io\u002FAnimeGANv3\u002F) |\n      | 2023 | ICCV | [Scenimefy: Learning to Craft Anime Scene via Semi-Supervised Image-to-Image Translation](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.12968) | [Github](https:\u002F\u002Fgithub.com\u002FYuxinn-J\u002FScenimefy) |\n      | 2023 | CVPR | [Interactive Cartoonization with Controllable Perceptual Factors](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2023\u002Fpapers\u002FAhn_Interactive_Cartoonization_With_Controllable_Perceptual_Factors_CVPR_2023_paper.pdf)  | |\n      | 2022 | ACM-MM | [Cartoon-Flow: A Flow-Based Generative Adversarial Network for Arbitrary-Style Photo Cartoonization](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3503161.3548094)  | |\n      | 2022 | ICML | [Learning to Incorporate Texture Saliency Adaptive Attention to Image Cartoonization](https:\u002F\u002Fproceedings.mlr.press\u002Fv162\u002Fgao22k\u002Fgao22k.pdf)  | |\n      | 2022 | AAAI | [Unsupervised Coherent Video Cartoonization with Perceptual Motion Consistency](https:\u002F\u002Fwww.aaai.org\u002FAAAI22Papers\u002FAAAI-6861.ZhenahuanL.pdf) | |\n      | 2022 | MVIP | [ARGAN: Fast Converging GAN for Animation Style Transfer](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9738752)  
| [Github](https:\u002F\u002Fgithub.com\u002Famirzenoozi\u002FARGAN) |\n      | 2022 | ICCECE | [Transfer photo to anime with dual discriminators GAN](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9712766) | |\n      | 2021 | 3ICT | [Cartoonize Images using TinyML Strategies with Transfer Learning](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9581835)  | |\n      | 2021 | IEEE Access | [Pseudo-Supervised Learning for Semantic Multi-Style Transfer](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9316188)  |\n      | 2021 | TVCG | [GAN-based Multi-Style Photo Cartoonization](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9382902)  | |\n      | 2020 | ISICA | [AnimeGAN: a novel lightweight GAN for photo animation](https:\u002F\u002Fgithub.com\u002FTachibanaYoshino\u002FAnimeGAN\u002Fblob\u002Fmaster\u002Fdoc\u002FChen2020_Chapter_AnimeGAN.pdf) | [Github](https:\u002F\u002Fgithub.com\u002FTachibanaYoshino\u002FAnimeGAN) |\n      | 2020 | Arxiv | [Generative Adversarial Networks for photo to Hayao Miyazaki style cartoons](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2005.07702.pdf) | [Github](https:\u002F\u002Fgithub.com\u002FFilipAndersson245\u002Fcartoon-gan) |\n      | 2020 | CVPR | [Learning to Cartoonize Using White-box Cartoon Representations](https:\u002F\u002Fgithub.com\u002FSystemErrorWang\u002FWhite-box-Cartoonization\u002Fblob\u002Fmaster\u002Fpaper\u002F06791.pdf)  | [Github](https:\u002F\u002Fgithub.com\u002FSystemErrorWang\u002FWhite-box-Cartoonization) |\n      | 2020 | MMM | [CartoonRenderer: An Instance-based Multi-Style Cartoon Image Translator](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1911.06102.pdf) | |\n      | 2020 | Arxiv | [GANILLA: Generative adversarial networks for image to illustration translation](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2002.05638.pdf) | [Github](https:\u002F\u002Fgithub.com\u002Fgiddyyupp\u002Fganilla) |\n      | 2018 | Arxiv | [Comixify: Transform video into a 
comics](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1812.03473.pdf) | [Github](https:\u002F\u002Fgithub.com\u002Fmaciej3031\u002Fcomixify) |\n      | 2018 | CVPR | [CartoonGAN: Generative Adversarial Networks for Photo Cartoonization](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FChen_CartoonGAN_Generative_Adversarial_CVPR_2018_paper.pdf) | [Github](https:\u002F\u002Fgithub.com\u002Fznxlwm\u002Fpytorch-CartoonGAN) |\n  \u003C\u002Fdetails>\n\n  - \u003Cdetails>\n      \u003Csummary>Sketch 2 Anime\u003C\u002Fsummary>\n\n      | **Year** | **Conference \u002F Journal** | **Title** | **Links** |\n      | ---- | ---- | ---- | ---- | \n      | 2023 | ACM-TG | [AniFaceDrawing: Anime Portrait Exploration during Your Sketching](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.07476) | [HP](http:\u002F\u002Fwww.jaist.ac.jp\u002F~xie\u002FAniFaceDrawing.html) |\n      | 2022 | TNNLS | [PMSGAN: Parallel Multistage GANs for Face Image Translation](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F10014017)  | |\n      | 2022 | FDG | [SketchBetween: Video-to-Video Synthesis for Sprite Animation via Sketches](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2209.00185.pdf) | |\n      | 2021 | TVCG | [Deep Sketch-guided Cartoon Video Inbetweening](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2008.04149.pdf)  | |\n      | 2020 | NeurIPS | [How to train your conditional GAN: An approach using geometrically structured latent manifolds](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2011.13055.pdf) | |\n      | 2020 | ECCV | [Modeling Artistic Workflows for Image Generation and Editing](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2007.07238.pdf) | [Github](https:\u002F\u002Fgithub.com\u002Fhytseng0509\u002FArtEditing) |\n      | 2019 | Arxiv | [PI-REC: Progressive Image Reconstruction Network With Edge and Color Domain](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1903.10146.pdf) | [Github](https:\u002F\u002Fgithub.com\u002Fyouyuge34\u002FPI-REC) |\n      | 2019 | 
FITEE | [SmartPaint: a co-creative drawing system based on generative adversarial networks](https:\u002F\u002Flink.springer.com\u002Fcontent\u002Fpdf\u002F10.1631\u002FFITEE.1900386.pdf) | |\n  \u003C\u002Fdetails>\n\n  - \u003Cdetails>\n      \u003Csummary>Photo 2 Manga\u003C\u002Fsummary>\n      \n      | **Year** | **Conference \u002F Journal** | **Title** | **Links** |\n      | ---- | ---- | ---- | ---- | \n      | 2021 | AAAI | [MangaGAN: Unpaired Photo-to-Manga Translation Based on The Methodology of Manga Drawing](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2004.10634.pdf) | |\n  \u003C\u002Fdetails>\n\n  - \u003Cdetails>\n      \u003Csummary>Anime 2 Costume\u003C\u002Fsummary>\n\n      | **Year** | **Conference \u002F Journal** | **Title** | **Links** |\n      | ---- | ---- | ---- | ---- | \n      | 2020 | Arxiv | [Anime-to-Real Clothing: Cosplay Costume Generation via Image-to-Image Translation](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2008.11479.pdf) | |\n  \u003C\u002Fdetails>\n\n  - \u003Cdetails>\n      \u003Csummary>Style transfer\u003C\u002Fsummary>\n\n      | **Year** | **Conference \u002F Journal** | **Title** | **Links** |\n      | ---- | ---- | ---- | ---- | \n      | 2024 | IJCAI | [Diffutoon: High-Resolution Editable Toon Shading via Diffusion Models](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2401.16224) | [HP](https:\u002F\u002Fecnu-cilab.github.io\u002FDiffutoonProjectPage\u002F) |\n      | 2023 | CVPR | [LANIT: Language-Driven Image-to-Image Translation for Unlabeled Data](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2208.14889.pdf) | | [Github](https:\u002F\u002Fgithub.com\u002FKU-CVLAB\u002FLANIT) |\n      | 2022 | TCSVT | [HRInversion: High-Resolution GAN Inversion for Cross-Domain Image Synthesis](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9953153) | |\n      | 2022 | NeurIPS | [Towards Diverse and Faithful One-shot Adaption of Generative Adversarial Networks](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2207.08736.pdf) | 
[Github](https:\u002F\u002Fgithub.com\u002F1170300521\u002FDiFa) |\n      | 2022 | CVPR | [Pastiche Master: Exemplar-Based High-Resolution Portrait Style Transfer](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2203.13248.pdf) | [Github](https:\u002F\u002Fgithub.com\u002Fwilliamyang1991\u002FDualStyleGAN) |\n      | 2022 | ICLR | [Mind the Gap: Domain Gap Control for Single Shot Domain Adaptation for Generative Adversarial Networks](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2110.08398.pdf) | [Github](https:\u002F\u002Fgithub.com\u002FZPdesu\u002FMindTheGap) |\n      | 2022 | Arxiv | [Styleverse: Towards Identity Stylization across Heterogeneous Domains](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2203.00861.pdf) | |\n      | 2022 | ICCECE | [Unsupersived Image Texture Transfer Based On Generative Adversarial Network](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9712754) | |\n      | 2021 | VSIP | [Cross-modal and Semantics-Augmented Asymmetric CycleGAN for Data-Imbalanced Anime Style Face Translation](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002FfullHtml\u002F10.1145\u002F3503961.3503969) |  |\n      | 2021 | Arxiv | [JoJoGAN: One Shot Face Stylization](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2112.11641.pdf) | [Github](https:\u002F\u002Fgithub.com\u002Fmchong6\u002FJoJoGAN) |\n      | 2021 | Arxiv | [Fine-Grained Control of Artistic Styles in Image Generation](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2110.10278.pdf) | |\n      | 2021 | Arxiv | [Few-shot Semantic Image Synthesis Using StyleGAN Prior](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2103.14877.pdf) | [Github](https:\u002F\u002Fgithub.com\u002Fendo-yuki-t\u002FFewshot-SMIS) |\n      | 2020 | CSICC | [StarGAN Based Facial Expression Transfer for Anime Characters](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9050061) | |\n      | 2019 | IEEE Access | [RAG: Facial Attribute Editing by Learning Residual 
Attributes](https:\u002F\u002Fwww.researchgate.net\u002Fpublication\u002F334058885_RAG_Facial_Attribute_Editing_by_Learning_Residual_Attributes) | |\n      | 2019 | Arxiv| [Disentangling Style and Content in Anime Illustrations](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1905.10742v2.pdf) | [Github](https:\u002F\u002Fgithub.com\u002Fstormraiser\u002Fadversarial-disentangle) |\n      | 2018 | Arxiv  | [Anime Style Space Exploration Using Metric Learning and Generative Adversarial Networks](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1805.07997v1.pdf) | |\n  \u003C\u002Fdetails>\n\n  - \u003Cdetails>\n      \u003Csummary>Author Style transfer\u003C\u002Fsummary>\n      \n      | **Year** | **Conference \u002F Journal** | **Title** | **Links** |\n      | ---- | ---- | ---- | ---- | \n      | 2022 | ICIP | [Translation of Illustration Artist Style Using Sailormoonredraw Data](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9897787) |  |\n  \u003C\u002Fdetails>\n\n### Colorization\n\n  - \u003Cdetails>\n      \u003Csummary>NoHint\u003C\u002Fsummary>\n\n      | **Year** | **Conference \u002F Journal** | **Title** | **Links** |\n      | ---- | ---- | ---- | ---- | \n      | 2018 | ISCID | [Automatic Sketch Colorization with Tandem Conditional Adversarial Networks](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8695564) | |\n      | 2019 | IJNDC | [Do You Like Sclera? 
Sclera-region Detection and Colorization for Anime Character Line Drawings](https:\u002F\u002Fwww.atlantis-press.com\u002Fjournals\u002Fijndc\u002F125913573) | |\n      | 2019 | IJPE | [Colorization for Anime Sketches with Cycle-Consistent Adversarial Network](http:\u002F\u002Fwww.ijpe-online.com\u002FEN\u002Fabstract\u002Fabstract4089.shtml) | |\n      | 2021 | MDPI-AS | [Seg2pix: Few Shot Training Line Art Colorization with Segmented Image Data](https:\u002F\u002Fwww.mdpi.com\u002F2076-3417\u002F11\u002F4\u002F1464) | |\n      | 2021 | - | [Semi-automatic Manga Colorization Using Conditional Adversarial Networks](https:\u002F\u002Fwww.gwern.net\u002Fdocs\u002Fai\u002Fanime\u002F2021-golyadkin.pdf) | [Github](https:\u002F\u002Fgithub.com\u002Fqweasdd\u002Fmanga-colorization) |\n      | 2021 | ICPR | [Stylized-Colorization for Line Arts](https:\u002F\u002Fwww.gwern.net\u002Fdocs\u002Fai\u002Fanime\u002F2021-fang.pdf)  | |\n      | 2021 | Arxiv | [Generative Probabilistic Image Colorization](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2109.14518.pdf) | |\n      | 2022 | CHI | [FlatMagic: Improving Flat Colorization through AI-driven Design for Digital Comic Professionals](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3491102.3502075) | [Github](https:\u002F\u002Fcragl.cs.gmu.edu\u002Fflatmagic\u002F) |\n      | 2022 | ICCIR | [Attention-Based Unsupervised Sketch Colorization of Anime Avatar](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3548608.3559316) | |\n      | 2023 | IEEE Access| [Robust Manga Page Colorization via Coloring Latent Space](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F10278137)| |\n  \u003C\u002Fdetails>\n\n\n  - \u003Cdetails>\n      \u003Csummary>Atari\u003C\u002Fsummary>\n      \n      | **Year** | **Conference \u002F Journal** | **Title** | **Links** |\n      | ---- | ---- | ---- | ---- | \n      | 2023 | CVPR Workshop | [Diffusart: Enhancing Line Art Colorization with Conditional Diffusion 
Models](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2023W\u002FCVFAD\u002Fpapers\u002FCarrillo_Diffusart_Enhancing_Line_Art_Colorization_With_Conditional_Diffusion_Models_CVPRW_2023_paper.pdf) | |\n      | 2023 | WACV | [Guiding Users to Where to Give Color Hints for Efficient Interactive Sketch Colorization via Unsupervised Region Prioritization](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2210.14270.pdf)| |\n      | 2022 | IVCNZ | [StencilTorch: An Iterative and User-Guided Framework for Anime Lineart Colorization](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1007\u002F978-3-031-25825-1_1) | |\n      | 2022 | WACV | [Late-resizing: A Simple but Effective Sketch Extraction Strategy for Improving Generalization of Line-art Colorization](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FWACV2022\u002Fpapers\u002FKim_Late-Resizing_A_Simple_but_Effective_Sketch_Extraction_Strategy_for_Improving_WACV_2022_paper.pdf) | |\n      | 2021 | SIGGRAPH Asia | [Interactive Manga Colorization with Fast Flat Coloring](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002FfullHtml\u002F10.1145\u002F3476124.3488628) | |\n      | 2021 | TIP | [Dual Color Space Guided Sketch Colorization](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9515572) | |\n      | 2021 | ICCV | [Deep Edge-Aware Interactive Colorization against Color-Bleeding Effects](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2107.01619.pdf) | |\n      | 2021 | CVPR | [User-Guided Line Art Flat Filling with Split Filling Mechanism](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FZhang_User-Guided_Line_Art_Flat_Filling_With_Split_Filling_Mechanism_CVPR_2021_paper.pdf) | [HP](https:\u002F\u002Flllyasviel.github.io\u002FSplitFilling\u002F) |\n      | 2021 | CVPR Workshop | [Line Art Colorization with Concatenated Spatial 
Attention](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021W\u002FCVFAD\u002Fpapers\u002FYuan_Line_Art_Colorization_With_Concatenated_Spatial_Attention_CVPRW_2021_paper.pdf) | |\n      | 2020 | MDPI-AS | [Automatic Colorization of Anime Style Illustrations Using a Two-Stage Generator](https:\u002F\u002Fwww.mdpi.com\u002F2076-3417\u002F10\u002F23\u002F8699) | |\n      | 2020 | ICCST | [Cartoon image colorization based on emotion recognition and superpixel color resolution](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9262834) | |\n      | 2019 | CSAI | [User Guided Digital Artwork Colorization](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3374587.3374604) | |\n      | 2019 | IEEE Access | [Two-Stage Sketch Colorization With Color Parsing](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8944253)  | |\n      | 2019 | ICIP | [MANGAN: ASSISTING COLORIZATION OF MANGA CHARACTERS CONCEPT ART USING CONDITIONAL GAN](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8803667) | [Github](https:\u002F\u002Fgithub.com\u002Ffelipelodur\u002FManGAN) |\n      | 2019 | CISP-BMEI | [Semi-Auto Sketch Colorization Based on Conditional Generative Adversarial Networks](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8965999) | |\n      | 2019 | TAAI | [Interactive Anime Sketch Colorization with Style Consistency via a Deep Residual Neural Network](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8959911) | |\n      | 2019 | CVMP | [PaintsTorch: a User-Guided Anime Line Art Colorization Tool with Double Generator Conditional Adversarial Network](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3359998.3369401) | |\n      | 2019 | Engineering Letters | [Anime Sketch Coloring with Swish-gated Residual U-net and Spectrally Normalized GAN](http:\u002F\u002Fwww.engineeringletters.com\u002Fissues_v27\u002Fissue_3\u002FEL_27_3_01.pdf) | 
[Github](https:\u002F\u002Fgithub.com\u002Fpradeeplam\u002FAnime-Sketch-Coloring-with-Swish-Gated-Residual-UNet) |\n      | 2018 | ACM-TG | [Two-stage Sketch Colorization](https:\u002F\u002Fgithub.com\u002Flllyasviel\u002Fstyle2paints\u002Fblob\u002Fmaster\u002Fpapers\u002Fsa.pdf) | [Github](https:\u002F\u002Fgithub.com\u002Flllyasviel\u002Fstyle2paints) |\n      | 2018 | ACM-MC | [User-Guided Deep Anime Line Art Colorization with Conditional Adversarial Networks](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1808.03240.pdf) | [Github](https:\u002F\u002Fgithub.com\u002Forashi\u002FAlacGAN) |\n      | 2018 | Neurocomputing | [Auto-painter: Cartoon Image Generation from Sketch by Using Conditional Wasserstein Generative Adversarial Networks](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1705.01908.pdf) | [Github](https:\u002F\u002Fgithub.com\u002FirfanICMLL\u002FAuto_painter) |\n      | 2018 | EG | [A Fast and Efficient Semi-guided Algorithm for Flat Coloring Line-arts](https:\u002F\u002Fhal.archives-ouvertes.fr\u002Fhal-01891876\u002Fdocument)| |\n      | 2017 | Arxiv | [Outline Colorization through Tandem Adversarial Networks](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1704.08834.pdf) | |\n      | 2011 | NPAR | [TexToons: Practical Texture Mapping for Hand-drawn Cartoon Animations](https:\u002F\u002Fdcgi.fel.cvut.cz\u002Fhome\u002Fsykorad\u002FSykora11-NPAR.pdf) | |\n      | 2006 | ACM-TG | [Manga Colorization](http:\u002F\u002Fwww.cse.cuhk.edu.hk\u002F~ttwong\u002Fpapers\u002Fmanga\u002Fmanga.pdf) | |\n  \u003C\u002Fdetails>\n\n\n  - \u003Cdetails>\n      \u003Csummary>Reference\u003C\u002Fsummary>\n\n      | **Year** | **Conference \u002F Journal** | **Title** | **Links** |\n      | ---- | ---- | ---- | ---- |\n      | 2025 | Arxiv | [SSIMBaD: Sigma Scaling with SSIM-Guided Balanced Diffusion for AnimeFace Colorization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.04283) | 
[GitHub](https:\u002F\u002Fgithub.com\u002FGiventicket\u002FSSIMBaD-Sigma-Scaling-with-SSIM-Guided-Balanced-Diffusion-for-AnimeFace-Colorization) |\n      | 2025 | Arxiv | [Cobra: Efficient Line Art COlorization with BRoAder References](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.12240) | [HP](https:\u002F\u002Fzhuang2002.github.io\u002FCobra\u002F) |\n      | 2025 | Arxiv | [ColorizeDiffusion v2: Enhancing Reference-based Sketch Colorization Through Separating Utilities](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.06895) | [GitHub](https:\u002F\u002Fgithub.com\u002Ftellurion-kanata\u002FcolorizeDiffusion) |\n      | 2025 | Arxiv | [MangaNinja: Line Art Colorization with Precise Reference Following](https:\u002F\u002Farxiv.org\u002Fabs\u002F2501.08332) | [HP](https:\u002F\u002Fjohanan528.github.io\u002FMangaNinjia\u002F) |\n      | 2024 | Arxiv | [ColorFlow: Retrieval-Augmented Image Sequence Colorization](https:\u002F\u002Farxiv.org\u002Fabs\u002F2412.11815) | [Github](https:\u002F\u002Fgithub.com\u002FTencentARC\u002FColorFlow) |\n      | 2024 | Arxiv | [ColorizeDiffusion: Adjustable Sketch Colorization with Reference Image and Text](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2401.01456.pdf) | |\n      | 2024 | CVPR | [Learning Inclusion Matching for Animation Paint Bucket Coloriation](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2403.18342) | [HP](https:\u002F\u002Fykdai.github.io\u002Fprojects\u002FInclusionMatching) |\n      | 2023 | CGF | [Two-Step Training: Adjustable Sketch Colourization via Reference Image and Text Tag](https:\u002F\u002Fonlinelibrary.wiley.com\u002Fdoi\u002Fpdfdirect\u002F10.1111\u002Fcgf.14791) | [Github](https:\u002F\u002Fgithub.com\u002Fydk-tellurion\u002Fsketch_colorizer) |\n      | 2023 | Arxiv | [AnimeDiffusion: Anime Face Line Drawing Colorization via Diffusion Models](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2303.11137.pdf) | |\n      | 2022 | ICME  | [ATTENTION-AWARE ANIME LINE DRAWING 
COLORIZATION](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2212.10988.pdf) | |\n      | 2022 | NicoInt | [Semi-Automatic Colorization Pipeline for Anime Characters and its Evaluation in Production](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9848507) | |\n      | 2022 | ICASSP | [Improving Reference-Based Image Colorization For Line Arts Via Feature Aggregation And Contrastive Learning](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9746326) | |\n      | 2022 | ECCV | [Eliminating Gradient Conflict in Reference-based Line-Art Colorization](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2207.06095.pdf)| [Github](https:\u002F\u002Fgithub.com\u002Fkunkun0w0\u002FSGA) |\n      | 2022 | Mathematics | [Exemplar-Based Sketch Colorization with Cross-Domain Dense Semantic Correspondence](https:\u002F\u002Fwww.mdpi.com\u002F2227-7390\u002F10\u002F12\u002F1988\u002Fhtm) | |\n      | 2022 | CVM | [Reference-guided structure-aware deep sketch colorization for cartoons](https:\u002F\u002Flink.springer.com\u002Fcontent\u002Fpdf\u002F10.1007\u002Fs41095-021-0228-6.pdf) | |\n      | 2022 | TMM | [Multi-Density Sketch-to-Image Translation Network](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2006.10649.pdf) | |\n      | 2021 | SIGGRAPH Asia | [Anime Character Colorization using Few-shot Learning](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3478512.3488604) | |\n      | 2021 | IET-IP | [Disentangled and controllable sketch creation based ondisentangling the structure and color enhancement](https:\u002F\u002Fietresearch.onlinelibrary.wiley.com\u002Fdoi\u002Fepdf\u002F10.1049\u002Fipr2.12343) | |\n      | 2021 | ICIP | [PAINTING STYLE-AWARE MANGA COLORIZATION BASED ON GENERATIVE ADVERSARIAL NETWORKS](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2107.07943.pdf) | |\n      | 2021 | CGI | [Exploring Sketch-based Character Design Guided by Automatic Colorization](https:\u002F\u002Frawanmg.github.io\u002Fpdf\u002Fgi21.pdf) | |\n      | 2021 | ICME | [Anime Style 
Transfer With Spatially-Adaptive Normalization](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9428305)  | |\n      | 2021 | ICPR | [Anime Sketch Colorization by Component-based Matching using Deep Appearance Features and Graph Representation](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9412507) | |\n      | 2020 | EG | [Colorization of Line Drawings with Empty Pupils](https:\u002F\u002Fwww.researchgate.net\u002Fpublication\u002F347703757_Colorization_of_Line_Drawings_with_Empty_Pupils) | |\n      | 2020 | EG | [Deep-Eyes: Fully Automatic Anime Character Colorization with Painting of Details on Empty Pupils](https:\u002F\u002Fwww.gwern.net\u002Fdocs\u002Fai\u002Fanime\u002F2020-akita.pdf) | |\n      | 2020 | CVPR | [Reference-Based Sketch Image Colorization using Augmented-Self Reference and Dense Semantic Correspondence](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2005.05207.pdf) | [Github](https:\u002F\u002Fgithub.com\u002FJungjaewon\u002FReference_based_Skectch_Image_Colorization) |\n      | 2020 | TMM | [Semantic Example Guided Image-to-Image Translation](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1909.13028.pdf) | |\n      | 2019 | SIGGRAPH | [Graph Matching based Anime Colorization with Multiple References](https:\u002F\u002Fahcweb01.naist.jp\u002Fpapers\u002Fconference\u002F2019\u002F201907_SIGGRAPH2019_s-nakamura\u002F201907_SIGGRAPH_s-nakamura.slides.pdf) | |\n      | 2019 | SIGGRAPH | [Fully Automatic Colorization for Anime Character Considering Accurate Eye Colors](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3306214.3338585) | |\n      | 2018 | - | [Attentioned Deep Paint](https:\u002F\u002Fgithub.com\u002Fktaebum\u002FAttentionedDeepPaint\u002Fblob\u002Fmaster\u002Fposter.pdf) | [Github](https:\u002F\u002Fgithub.com\u002Fktaebum\u002FAttentionedDeepPaint) |\n      | 2017 | ACIS SNPD | [Automatic manga colorization with color style by generative adversarial 
nets](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8022768) | |\n      | 2017 | SIGGRAPH | [Comicolorization: Semi-Automatic Manga Colorization](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1706.06759.pdf) | [Github](https:\u002F\u002Fgithub.com\u002FDwangoMediaVillage\u002FComicolorization) |\n      | 2017 | ICDAR | [cGAN-based Manga Colorization Using a Single Training Image](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1706.06918.pdf) | [Github](https:\u002F\u002Fgithub.com\u002Fsudheerachary\u002FManga_Colorization) |\n      | 2017 | ACPR | [Style Transfer for Anime Sketches with Enhanced Residual U-net and Auxiliary Classifier GAN](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1706.03319.pdf) | |\n      | 2017 | IIAI | [Deep Manga Colorization with Color Style Extraction by Conditional Adversarially Learned Inference](http:\u002F\u002Fwww.iaiai.org\u002Fjournals\u002Findex.php\u002FIEE\u002Farticle\u002Fview\u002F214) | |\n      | 2014 | SIGGRAPH | [Reference-based Manga Colorization by Graph Correspondence Using Quadratic Programming](http:\u002F\u002Fyusukematsui.me\u002Fpdf\u002Fsato_sa2014.pdf) | |\n      | 2004 | NPAR | [Unsupervised Colorization of Black-and-White Cartoons](http:\u002F\u002Fciteseerx.ist.psu.edu\u002Fviewdoc\u002Fdownload?doi=10.1.1.95.2629&rep=rep1&type=pdf) | |\n  \u003C\u002Fdetails>\n\n  - \u003Cdetails>\n      \u003Csummary>Tag\u003C\u002Fsummary>\n\n      | **Year** | **Conference \u002F Journal** | **Title** | **Links** |\n      | ---- | ---- | ---- | ---- | \n      | 2021 | PG | [Line Art Colorization Based on Explicit Region Segmentation](https:\u002F\u002Fwww.sysu-imsl.com\u002Ffiles\u002FPG2021\u002Fline_art_colorization_pg2021_main.pdf) | |\n      | 2019 | ICCV | [Tag2Pix: Line Art Colorization Using Text Tag With SECat and Changing Loss](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fpapers\u002FKim_Tag2Pix_Line_Art_Colorization_Using_Text_Tag_With_SECat_and_ICCV_2019_paper.pdf)  | 
[Github](https:\u002F\u002Fgithub.com\u002Fblandocs\u002FTag2Pix) |\n  \u003C\u002Fdetails>\n\n  - \u003Cdetails>\n      \u003Csummary>Video\u003C\u002Fsummary>\n\n      | **Year** | **Conference \u002F Journal** | **Title** | **Links** |\n      | ---- | ---- | ---- | ---- | \n      | 2023 | PR | [Coloring anime line art videos with transformation region enhancement network](https:\u002F\u002Fwww.sciencedirect.com\u002Fscience\u002Farticle\u002Fpii\u002FS0031320323002625?casa_token=evjknkPkujoAAAAA:a0kjRw6hy3aaO9UAkINCtXYlELCDMDQu5RykR6k7qNeRPaYsaBfR8_PNSg0R-MsIs3vOCePOTfYh) |  |\n      | 2021 | ICCV | [The Animation Transformer: Visual Correspondence via Segment Matching](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2109.02614.pdf)  | [Video](https:\u002F\u002Fcadmium.app\u002F) |\n      | 2021 | WACV  | [Line Art Correlation Matching Feature Transfer Network for Automatic Animation Colorization](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FWACV2021\u002Fpapers\u002FZhang_Line_Art_Correlation_Matching_Feature_Transfer_Network_for_Automatic_Animation_WACV_2021_paper.pdf) | |  \n      | 2020 | TVCG | [Deep Line Art Video Colorization with a Few References](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2003.10685.pdf) | |\n      | 2019 | ICCV Workshop | [Artist-Guided Semiautomatic Animation Colorization](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCVW_2019\u002Fpapers\u002FCVFAD\u002FThasarathan_Artist-Guided_Semiautomatic_Animation_Colorization_ICCVW_2019_paper.pdf) | |\n      | 2019 | CCCRV | [Automatic Temporally Coherent Video Colorization](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1904.09527.pdf) | [Github](https:\u002F\u002Fgithub.com\u002FHarry-Thasarathan\u002FTCVC) |\n  \u003C\u002Fdetails>\n\n### Editing\n\n  - \u003Cdetails>\n      \u003Csummary>Lighting\u003C\u002Fsummary>\n\n      | **Year** | **Conference \u002F Journal** | **Title** | **Links** |\n      | ---- | ---- | ---- | ---- | \n      | 2022 | ICIP | [Automatic Illumination of 
Flat-Colored Drawings by 3D Augmentation of 2D Silhouettes](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9897386) | |\n      | 2021 | ICCV | [SmartShadow: Artistic Shadow Drawing Tool for Line Drawings](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fpapers\u002FZhang_SmartShadow_Artistic_Shadow_Drawing_Tool_for_Line_Drawings_ICCV_2021_paper.pdf) | |\n      | 2020 | CVPR | [Learning to Shadow Hand-drawn Sketches](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2002.11812.pdf) | [Github](https:\u002F\u002Fgithub.com\u002Fqyzdao\u002FShadeSketch) |\n      | 2020 | ACM-TG | [Generating Digital Painting Lighting Effects via RGB-space Geometry](https:\u002F\u002Flllyasviel.github.io\u002FPaintingLight\u002Ffiles\u002FTOG20PaintingLight.pdf) | [Github](https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FPaintingLight) |\n      | 2019 | CVMP | [Augmenting Hand-Drawn Art with Global Illumination Effects through Surface Inflation](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3359998.3369400) | [HP](https:\u002F\u002Fv-sense.scss.tcd.ie\u002Fresearch\u002Faugmenting-hand-drawn-art-with-global-illumination-effects-through-surface-inflation\u002F) |\n      | 2018 | NPAR | [2D shading for cel animation](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3229147.3229148) | [HP](https:\u002F\u002Fv-sense.scss.tcd.ie\u002Fresearch\u002Fvfx-animation\u002F2d-shading-for-cel-animation\u002F) |\n      | 2018 | NeurIPS Workshop | [Automatic Illumination Effects for 2D Characters](https:\u002F\u002Fnips2018creativity.github.io\u002Fdoc\u002FAutomatic_Illumination_Effects_for_2D_Characters.pdf) | |\n      | 2018 | ECCV Workshop | [Deep Normal Estimation for Automatic Shading of Hand-Drawn Characters](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCVW_2018\u002Fpapers\u002F11131\u002FHudon_Deep_Normal_Estimation_for_Automatic_Shading_of_Hand-Drawn_Characters_ECCVW_2018_paper.pdf) | 
[Github](https:\u002F\u002Fgithub.com\u002FV-Sense\u002FDeepNormals) |\n      | 2014 | ACM-TG | [Ink-and-Ray: Bas-Relief Meshes for Adding Global Illumination Effects to Hand-Drawn Characters](https:\u002F\u002Fdcgi.fel.cvut.cz\u002Fhome\u002Fsykorad\u002FSykora14-TOG.pdf) |  |\n  \u003C\u002Fdetails>\n\n  - \u003Cdetails>\n      \u003Csummary>Illustration Editing\u003C\u002Fsummary>\n      \n      | **Year** | **Conference \u002F Journal** | **Title** | **Links** |\n      | ---- | ---- | ---- | ---- | \n      | 2024 | Arxiv | [Re:Draw - Context Aware Translation as a Controllable Method for Artistic Production](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2401.03499.pdf) | | |\n      | 2023 | Arxiv | [DreamTuner: Single Image is Enough for Subject-Driven Generation](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2312.13691) | | |\n      | 2023 | Arxiv | [Instance-guided Cartoon Editing with a Large-scale Dataset](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2312.01943.pdf) | | |\n      | 2023 | Arxiv | [Reference-base Image Composition with Sketch via Structure-aware Diffusion Model](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2304.09748.pdf) | [Github](https:\u002F\u002Fgithub.com\u002Fkangyeolk\u002FPaint-by-Sketch) |\n      | 2023 | WACV | [DyStyle: Dynamic Neural Network for Multi-Attribute-Conditioned Style Editing](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2109.10737.pdf) | | |\n      | 2021 | NeurIPS | [Unsupervised Learning of Compositional Energy Concepts](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2111.03042.pdf) |  [Github](https:\u002F\u002Fgithub.com\u002Fyilundu\u002Fcomet) |\n      | 2021 | ICME | [Universal Face Restoration With Memorized Modulation](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2110.01033.pdf) | | |\n      | 2021 | ICIP | [Improving The Quality Of Illustrations: Transforming Amateur Illustrations To A Professional Standard](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9506615) | |\n      | 2021 | NicoInt | [Sketch-based Anime Hairstyle 
Editing with Generative Inpainting](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9515963) | |\n      | 2021 | TSP | [Deep Unfolding with Normalizing Flow Priors for Inverse Problems](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2107.02848.pdf) | |\n      | 2021 | CVPR | [L2M-GAN: Learning to Manipulate Latent Space Semantics for Facial Attribute Editing](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FYang_L2M-GAN_Learning_To_Manipulate_Latent_Space_Semantics_for_Facial_Attribute_CVPR_2021_paper.pdf)  | |\n      | 2021 | TVCG | [Cross-Domain and Disentangled Face Manipulation with 3D Guidance](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2104.11228.pdf) | [Github](https:\u002F\u002Fgithub.com\u002FcassiePython\u002Fcddfm3d) |\n      | 2020 | ECCV | [Erasing Appearance Preservation in Optimization-based Smoothing](https:\u002F\u002Flllyasviel.github.io\u002FAppearanceEraser\u002Fpaper\u002Fpaper.pdf)  | [Github](https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FAppearanceEraser) |\n      | 2018 | Arxiv | [Spatially Controllable Image Synthesis with Internal Representation Collaging](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1811.10153.pdf) | [Github](https:\u002F\u002Fgithub.com\u002Fquolc\u002Fneural-collage) |\n      | 2018 | PG | [Decomposing Images into Layers with Advanced Color Blending](https:\u002F\u002Fonlinelibrary.wiley.com\u002Fdoi\u002F10.1111\u002Fcgf.13577) | [Github](https:\u002F\u002Fgithub.com\u002Fyuki-koyama\u002Funblending) |\n  \u003C\u002Fdetails>\n\n  - \u003Cdetails>\n      \u003Csummary>Sketch Editing\u003C\u002Fsummary>\n      \n      | **Year** | **Conference \u002F Journal** | **Title** | **Links** |\n      | ---- | ---- | ---- | ---- | \n      | 2023 | ICMR |  [Joint Geometric-Semantic Driven Character Line Drawing Generation](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3591106.3592216) | \n      | 2023 | ACM-TG | [Semi-supervised reference-based sketch extraction using a contrastive 
learning framework](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1FELTVl73OrQ9Q0uBXN7jLbRStSsF-NgM\u002Fview?pli=1)  | [Github](https:\u002F\u002Fgithub.com\u002FChanuku\u002Fsemi_ref2sketch_code) |\n      | 2022 | ACM-TG | [Reference Based Sketch Extraction via Attention Mechanism](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3550454.3555504) | [Github](https:\u002F\u002Fgithub.com\u002Fref2sketch\u002Fref2sketch) |\n      | 2022 | AAAI | [End-to-End Line Drawing Vectorization](https:\u002F\u002Fojs.aaai.org\u002Findex.php\u002FAAAI\u002Farticle\u002Fdownload\u002F20379\u002F20138)  | |\n      | 2022 | CW |  [A Drawing Support System for Sketching Aging Anime Faces](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9937356) | |\n      | 2021 | NicoInt | [One-shot Line Extraction from Color Illustrations](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9515964) | |\n      | 2020 | ACM-MM | [SketchMan: Learning to Create Professional Sketches](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3394171.3413720)  | [Github](https:\u002F\u002Fgithub.com\u002FLCXCUC\u002FSketchMan2020) |\n      | 2020 | MDPI-AS | [Progressive Full Data Convolutional Neural Networks for Line Extraction from Anime-Style Illustrations](https:\u002F\u002Fwww.mdpi.com\u002F2076-3417\u002F10\u002F1\u002F41) | |\n      | 2019 | TVCG | [Perceptual-aware Sketch Simplification Based on Integrated VGG Layers](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F8771128) | |\n      | 2019 | SIGGRAPH | [Unpaired Sketch-to-Line Translation via Synthesis of Sketches](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3355088.3365163)  | |\n      | 2018 | ACM-TG| [Real-Time Data-Driven Interactive Rough Sketch Inking](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3197517.3201370) | [Github](https:\u002F\u002Fgithub.com\u002Fbobbens\u002Fline_thinning) |\n      | 2018 | ACM-TG| 
[Mastering Sketching: Adversarial Augmentation for Structured Prediction](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1703.08966.pdf) |  [Github](https:\u002F\u002Fgithub.com\u002Fbobbens\u002Fsketch_simplification) |\n      | 2017 | ACM-TG | [Deep Extraction of Manga Structural Lines](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3072959.3073675) | [Github](https:\u002F\u002Fgithub.com\u002Fljsabc\u002FMangaLineExtraction) |\n      | 2016 | ACM-TG | [Learning to Simplify: Fully Convolutional Networks for Rough Sketch Cleanup](https:\u002F\u002Fesslab.jp\u002F~ess\u002Fpublications\u002FSimoSerraSIGGRAPH2016.pdf) | [Github](https:\u002F\u002Fgithub.com\u002Fbobbens\u002Fsketch_simplification) |\n      | 2011 | NPAR | [Temporal Noise Control for Sketchy Animation](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F2024676.2024691)  | |\n  \u003C\u002Fdetails>\n\n  - \u003Cdetails>\n      \u003Csummary>Automatic Animation\u003C\u002Fsummary>\n      \n      | **Year** | **Conference \u002F Journal** | **Title** | **Links** |\n      | ---- | ---- | ---- | ---- | \n      | 2023 | MTA | [Automatic Animation Inbetweening](https:\u002F\u002Flink.springer.com\u002Farticle\u002F10.1007\u002Fs11042-023-17354-x) | |\n      | 2023 | ICCV | [Deep Geometrized Cartoon Line Inbetweening](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2023\u002Fpapers\u002FSiyao_Deep_Geometrized_Cartoon_Line_Inbetweening_ICCV_2023_paper.pdf) | [Github](https:\u002F\u002Fgithub.com\u002Flisiyao21\u002Fanimeinbet) |\n      | 2022 | ICIP | [Enhanced Deep Animation Video Interpolation](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2206.12657.pdf) | [Github](https:\u002F\u002Fgithub.com\u002Flaomao0\u002FAutoSktFI) |\n      | 2022 | ECCV | [Improving the Perceptual Quality of 2D Animation Interpolation](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2111.12792.pdf)  | [Github](https:\u002F\u002Fgithub.com\u002FShuhongChen\u002Feisai-anime-interpolator\u002F) |\n      | 2021 | CVPR | 
[Deep Animation Video Interpolation in the Wild](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2104.02495.pdf)  | [Github](https:\u002F\u002Fgithub.com\u002Flisiyao21\u002FAnimeInterp\u002F) |\n      | 2019 | ICIP | [Optical Flow Based Line Drawing Frame Interpolation Using Distance Transform to Support Inbetweenings](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8803506) | |\n      | 2017 | CMV | [DiLight: Digital light table – Inbetweening for 2D animations using guidelines](http:\u002F\u002Fgraphics.tudelft.nl\u002FPublications-new\u002F2017\u002FCMV17\u002Fpdf.pdf) | |\n  \u003C\u002Fdetails>\n\n  - \u003Cdetails>\n      \u003Csummary>Automatic Image Enhancement\u003C\u002Fsummary>\n      \n      | **Year** | **Conference \u002F Journal** | **Title** | **Links** |\n      | ---- | ---- | ---- | ---- | \n      | 2024 | CVPR | [APISR: Anime Production Inspired Real-World Anime Super-Resolution](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2403.01598.pdf) | [Github](https:\u002F\u002Fgithub.com\u002FKiteretsu77\u002FAPISR) |\n      | 2022 | NeurIPS | [AnimeSR: Learning Real-World Super-Resolution Models for Animation Videos](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2206.07038.pdf) | [Github](https:\u002F\u002Fgithub.com\u002FTencentARC\u002FAnimeSR) |\n      | 2022 | Sensors | [A Transformer-Based Model for Super-Resolution of Anime Image](https:\u002F\u002Fwww.mdpi.com\u002F1424-8220\u002F22\u002F21\u002F8126) | |\n      | 2021 | ICCV Workshop | [Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2107.10833.pdf) | [Github](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN) |\n      | 2021 | JSCI | [Enhancement of Anime Imaging Enlargement using Modified Super-Resolution CNN](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2110.02321.pdf)  | [Github](https:\u002F\u002Fgithub.com\u002FTanakitInt\u002FSRCNN-anime) |\n  \u003C\u002Fdetails>\n\n  - \u003Cdetails>\n    
\u003Csummary>Background Removal\u003C\u002Fsummary>\n    \n    | **Year** | **Conference \u002F Journal** | **Title** | **Links** |\n    | ---- | ---- | ---- | ---- | \n    | 2025 | arXiv | [ToonOut: Fine-tuned Background-Removal for Anime Characters](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2509.06839) |[Github](https:\u002F\u002Fgithub.com\u002FMatteoKartoon\u002FBiRefNet) |\n      \u003C\u002Fdetails>\n\n### Character Animating\n  - \u003Cdetails>\n      \u003Csummary>Character animation\u003C\u002Fsummary>\n      \n      | **Year** | **Conference \u002F Journal** | **Title** | **Links** |\n      | ---- | ---- | ---- | ---- | \n      | 2025 | arXiv | [CartoonAlive: Towards Expressive Live2D Modeling from Single Portraits](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2507.17327) |[HP](https:\u002F\u002Fhuman3daigc.github.io\u002FCartoonAlive_webpage\u002F) |\n      | 2024 | Arxiv | [AnimateDiff-Lightning: Cross-Model Diffusion Distillation](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2403.12706) | [HF](https:\u002F\u002Fhuggingface.co\u002FByteDance\u002FAnimateDiff-Lightning) |\n      | 2024 | TCSVT | [Hierarchical Feature Warping and Blending for Talking Head Animation](https:\u002F\u002Fgwern.net\u002Fdoc\u002Fai\u002Fanime\u002F2024-zhang.pdf) | |\n      | 2023 | TMM | [Language-Guided Face Animation by Recurrent StyleGAN-based Generator](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2208.05617.pdf) | [Github](https:\u002F\u002Fgithub.com\u002FTiankaiHang\u002Flanguage-guided-animation) |\n      | 2023 | IJCAI | [Collaborative Neural Rendering using Anime Character Sheets](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2207.05378.pdf) | [Github](https:\u002F\u002Fgithub.com\u002Fmegvii-research\u002FCONR) |\n      | 2020 | ACCV | [CPTNet: Cascade Pose Transform Network for Single Image Talking Head 
Animation](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FACCV2020\u002Fpapers\u002FZhang_CPTNet_Cascade_Pose_Transform_Network_for_Single_Image_Talking_Head_ACCV_2020_paper.pdf) | |\n      | 2020 | SIGGRAPH Asia | [MakeItTalk: Speaker-Aware Talking-Head Animation](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2004.12992.pdf) | |\n  \u003C\u002Fdetails>\n\n### Manga Application\n  - \u003Cdetails>\n      \u003Csummary>Classification\u003C\u002Fsummary>\n      \n      | **Year** | **Conference \u002F Journal** | **Title** | **Links** |\n      | ---- | ---- | ---- | ---- | \n      | 2023 | TIP | [Panel-Page-Aware Comic Genre Understanding](https:\u002F\u002Fieeexplore.ieee.org\u002Fstamp\u002Fstamp.jsp?arnumber=10112648) |  |\n  \u003C\u002Fdetails>\n\n  - \u003Cdetails>\n    \u003Csummary>Generation\u003C\u002Fsummary>\n\n    | **Year** | **Conference \u002F Journal** | **Title** | **Links** |\n    | ---- | ---- | ---- | ---- |\n    | 2025 | Arxiv | [DreamingComics: A Story Visualization Pipeline via Subject and Layout Customized Generation using Video Models](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2512.01686) |  |\n    | 2025 | Arxiv | [Panel-by-Panel Souls: A Performative Workflow for Expressive Faces in AI-Assisted Manga Creation](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2511.16038) |  |\n    | 2025 | Arxiv | [Retrieval Augmented Comic Image Generation](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2507.09140) |  |\n    | 2024 | Arxiv | [Sketch2Manga: Shaded Manga Screening from Sketch with Diffusion Models](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2403.08266) | [Github](https:\u002F\u002Fgithub.com\u002FdmMaze\u002Fsketch2manga) |\n    | 2021 | CVPR | [Generating Manga from Illustrations via Mimicking Manga Workflow](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FZhang_Generating_Manga_From_Illustrations_via_Mimicking_Manga_Creation_Workflow_CVPR_2021_paper.pdf) | 
[HP](https:\u002F\u002Flllyasviel.github.io\u002FMangaFilter\u002F) |\n    | 2020 | ICAART | [Hair Shading Style Transfer for Manga with cGAN](https:\u002F\u002Fwww.scitepress.org\u002FPapers\u002F2020\u002F89614\u002F89614.pdf) | |\n    | 2020 | SIGGRAPH | [Manga Filling Style Conversion with Screentone Variational Autoencoder](http:\u002F\u002Fwww.cse.cuhk.edu.hk\u002F~ttwong\u002Fpapers\u002Fscreenstyle\u002Fscreenstyle.pdf) | |\n    | 2019 | ISM | [Synthesis of Screentone Patterns of Manga Characters](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8959008)  | |\n  \u003C\u002Fdetails>\n\n  - \u003Cdetails>\n      \u003Csummary>Colorization\u003C\u002Fsummary>\n      \n      | **Year** | **Conference \u002F Journal** | **Title** | **Links** |\n      | ---- | ---- | ---- | ---- | \n      | 2025 | arXiv | [Region-Wise Correspondence Prediction between Manga Line Art Images](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2509.09501) |  |\n      | 2025 | arXiv | [MangaDiT: Reference-Guided Line Art Colorization with Hierarchical Attention in Diffusion Transformers](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2508.09709) |  |\n  \u003C\u002Fdetails>\n\n  - \u003Cdetails>\n      \u003Csummary>Restoration\u003C\u002Fsummary>\n      \n      | **Year** | **Conference \u002F Journal** | **Title** | **Links** |\n      | ---- | ---- | ---- | ---- | \n      | 2021 | CVPR | [Exploiting Aliasing for Manga Restoration](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FXie_Exploiting_Aliasing_for_Manga_Restoration_CVPR_2021_paper.pdf)  | [Github](https:\u002F\u002Fgithub.com\u002Fmsxie92\u002FMangaRestoration) |\n  \u003C\u002Fdetails>\n\n  - \u003Cdetails>\n      \u003Csummary>Understanding\u003C\u002Fsummary>\n      \n      | **Year** | **Conference \u002F Journal** | **Title** | **Links** |\n      | ---- | ---- | ---- | ---- | \n      | 2025 | arXiv | [Re:Verse - Can Your VLM Read a 
Manga?](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2508.08508)  | |\n  \u003C\u002Fdetails>\n\n  - \u003Cdetails>\n      \u003Csummary>Inpainting\u003C\u002Fsummary>\n      \n      | **Year** | **Conference \u002F Journal** | **Title** | **Links** |\n      | ---- | ---- | ---- | ---- | \n      | 2021 | TOG | [Seamless Manga Inpainting with Semantics Awareness](http:\u002F\u002Fwww.cse.cuhk.edu.hk\u002F~ttwong\u002Fpapers\u002Fmangainpaint\u002Fmangainpaint.pdf)  | |\n  \u003C\u002Fdetails>\n\n  - \u003Cdetails>\n      \u003Csummary>Editing\u003C\u002Fsummary>\n      \n      | **Year** | **Conference \u002F Journal** | **Title** | **Links** |\n      | ---- | ---- | ---- | ---- | \n      | 2023 | Arxiv | [Manga Rescreening with Interpretable Screentone Representation](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2306.04114.pdf) | |\n      | 2021 | SIGGRAPH Asia | [Comic Image Inpainting via Distance Transform](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3478512.3488607) | |\n  \u003C\u002Fdetails>\n\n  - \u003Cdetails>\n      \u003Csummary>Text detection\u003C\u002Fsummary>\n      \n      | **Year** | **Conference \u002F Journal** | **Title** | **Links** |\n      | ---- | ---- | ---- | ---- | \n      | 2011 | IJIP | [Method for Real Time Text Extraction of Digital Manga 
Comic](https:\u002F\u002Fwww.cscjournals.org\u002Fmanuscript\u002FJournals\u002FIJIP\u002FVolume4\u002FIssue6\u002FIJIP-290.pdf) | |\n  \u003C\u002Fdetails>\n\n  - \u003Cdetails>\n      \u003Csummary>Landmark detection\u003C\u002Fsummary>\n      \n      | **Year** | **Conference \u002F Journal** | **Title** | **Links** |\n      | ---- | ---- | ---- | ---- | \n      | 2018 | Arxiv | [Facial Landmark Detection for Manga Images](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1811.03214.pdf) | [GitHub](https:\u002F\u002Fgithub.com\u002Foaugereau\u002FFacialLandmarkManga) |\n  \u003C\u002Fdetails>\n\n  - \u003Cdetails>\n      \u003Csummary>Segmentation\u003C\u002Fsummary>\n      \n      | **Year** | **Conference \u002F Journal** | **Title** | **Links** |\n      | ---- | ---- | ---- | ---- |\n      | 2025 | CVPR | [Advancing Manga Analysis: Comprehensive Segmentation Annotations for the Manga109 Dataset](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2025\u002Fpapers\u002FXie_Advancing_Manga_Analysis_Comprehensive_Segmentation_Annotations_for_the_Manga109_Dataset_CVPR_2025_paper.pdf) | [HuggingFace](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002FMS92\u002FMangaSegmentation) |\n      | 2022 | ICPR | [Towards Content-Aware Pixel-Wise Comic Panel Segmentation](https:\u002F\u002Flink.springer.com\u002Fchapter\u002F10.1007\u002F978-3-031-37742-6_1) | |\n      | 2020 | ISM | [Extraction of Frame Sequences in the Manga Context](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9327968) | |\n  \u003C\u002Fdetails>\n\n  - \u003Cdetails>\n      \u003Csummary>Translation\u003C\u002Fsummary>\n      \n      | **Year** | **Conference \u002F Journal** | **Title** | **Links** |\n      | ---- | ---- | ---- | ---- | \n      | 2021 | AAAI | [Towards Fully Automated Manga Translation](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2012.14271.pdf)  | |\n  \u003C\u002Fdetails>\n\n  - \u003Cdetails>\n      \u003Csummary>Depth Estimation\u003C\u002Fsummary>\n      \n     
 | **Year** | **Conference \u002F Journal** | **Title** | **Links** |\n      | ---- | ---- | ---- | ---- | \n      | 2022 | WACV | [Estimating Image Depth in the Comics Domain](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2110.03575.pdf) | |\n  \u003C\u002Fdetails>\n\n  - \u003Cdetails>\n      \u003Csummary>Vectorization\u003C\u002Fsummary>\n      \n      | **Year** | **Conference \u002F Journal** | **Title** | **Links** |\n      | ---- | ---- | ---- | ---- | \n      | 2021 | Arxiv | [Raster Manga Vectorization via Primitive-wise Deep Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2110.04830.pdf) | |\n  \u003C\u002Fdetails>\n\n  - \u003Cdetails>\n      \u003Csummary>Re-Identification\u003C\u002Fsummary>\n      \n      | **Year** | **Conference \u002F Journal** | **Title** | **Links** |\n      | ---- | ---- | ---- | ---- | \n      | 2024 | ACM-MM | [Zero-Shot Character Identification and Speaker Prediction in Comics via Iterative Multimodal Fusion](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2404.13993) | |\n      | 2022 | Arxiv | [Unsupervised Manga Character Re-identification via Face-body and Spatial-temporal Associated Clustering](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2204.04621.pdf) | |\n  \u003C\u002Fdetails>\n\n### Representation Learning\n  - \u003Cdetails>\n      \u003Csummary>Representation Learning\u003C\u002Fsummary>\n      \n      | **Year** | **Conference \u002F Journal** | **Title** | **Links** |\n      | ---- | ---- | ---- | ---- | \n      | 2015 | SIGGRAPH | [Illustration2Vec: A Semantic Vector Representation of Illustrations](https:\u002F\u002Fwww.gwern.net\u002Fdocs\u002Fanime\u002F2015-saito.pdf) | [Github](https:\u002F\u002Fgithub.com\u002Frezoo\u002Fillustration2vec) |\n  \u003C\u002Fdetails>\n\n### Pose Estimation\n  - \u003Cdetails>\n      \u003Csummary>Pose Estimation\u003C\u002Fsummary>\n      \n      | **Year** | **Conference \u002F Journal** | **Title** | **Links** |\n      | ---- | ---- | ---- | ---- | \n      
| 2024 | Arxiv | [VLPose: Bridging the Domain Gap in Pose Estimation with Language-Vision Tuning](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2402.14456) | |\n      | 2022 | WACV | [Transfer Learning for Pose Estimation of Illustrated Characters](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2108.01819.pdf) | [Github](https:\u002F\u002Fgithub.com\u002FShuhongChen\u002Fbizarre-pose-estimator) |\n      | 2016 | MANPU | [Pose Estimation of Anime\u002FManga Characters: A Case for Synthetic Data](http:\u002F\u002Fwww.cs.cornell.edu\u002F~pramook\u002Fpapers\u002Fmanpu2016.pdf) | |\n  \u003C\u002Fdetails>\n\n### Image Retrieval\n  - \u003Cdetails>\n      \u003Csummary>Image Retrieval\u003C\u002Fsummary>\n      \n      | **Year** | **Conference \u002F Journal** | **Title** | **Links** |\n      | ---- | ---- | ---- | ---- | \n      | 2021 | Arxiv | [AugNet: End-to-End Unsupervised Visual Representation Learning with Image Augmentation](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2106.06250.pdf) |  [Github](https:\u002F\u002Fgithub.com\u002Fchenmingxiang110\u002FAugNet) |\n      | 2017 | MTA | [Sketch-based Manga Retrieval using Manga109 Dataset](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1510.04389.pdf) | |\n  \u003C\u002Fdetails>\n\n### Visual Correspondence\n  - \u003Cdetails>\n      \u003Csummary>Visual Correspondence\u003C\u002Fsummary>\n      \n      | **Year** | **Conference \u002F Journal** | **Title** | **Links** |\n      | ---- | ---- | ---- | ---- | \n      | 2016 | ACM-TG | [Globally Optimal Toon Tracking](http:\u002F\u002Fwww.cse.cuhk.edu.hk\u002F~ttwong\u002Fpapers\u002Ftoontrack\u002Ftoontrack.pdf)  | [HP](http:\u002F\u002Fwww.cse.cuhk.edu.hk\u002F~ttwong\u002Fpapers\u002Ftoontrack\u002Ftoontrack.html) |\n  \u003C\u002Fdetails>\n\n### Character Recognition\n  - \u003Cdetails>\n      \u003Csummary>Character Recognition\u003C\u002Fsummary>\n      \n      | **Year** | **Conference \u002F Journal** | **Title** | **Links** |\n      | ---- | ---- | 
---- | \n      | 2023 | IEEE Access | [Hierarchical Multi-Label Attribute Classification With Graph Convolutional Networks on Anime Illustration](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F10097719) | | \n      | 2022 | ICIP | [GCN-Based Multi-Modal Multi-Label Attribute Classification in Anime Illustration Using Domain-Specific Semantic Features](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9898071) | |\n      | 2022 | Arxiv | [AniWho : A Quick and Accurate Way to Classify Anime Character Faces in Images](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2208.11012.pdf) | |\n      | 2022 | ECCV | [Open-Vocabulary DETR with Conditional Matching](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2203.11876.pdf) | [Github](https:\u002F\u002Fgithub.com\u002Fyuhangzang\u002FOV-DETR) |\n      | 2022 | EG |  [CAST: Character Labeling in Animation Using Self-Supervision by Tracking](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2201.07619.pdf) | |\n      | 2021 | TIP | [Graph Jigsaw Learning for Cartoon Face Recognition](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2107.06532.pdf) | |\n      | 2020 | IJCAI | [ACFD: Asymmetric Cartoon Face Detector](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2007.00899.pdf)  | | \n      | 2020 | ACM-MM | [Learning from the Past: Meta-Continual Learning with Knowledge Embedding for Jointly Sketch, Cartoon, and Caricature Face Recognition](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fepdf\u002F10.1145\u002F3394171.3413892) | |\n      | 2020 | TST | [Deep Learning-Based Classification of the Polar Emotions of “Moe”-Style Cartoon Pictures](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9220754) | |\n      | 2019 | ICDAR Workshop | [CNN based Extraction of Panels\u002FCharacters from Bengali Comic Book Page Images](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8893046) | |\n      | 2019 | ACM-TURC | [Progressive Deep Feature Learning for Manga Character Recognition via Unlabeled Training 
Data](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3321408.3322624) | |\n      | 2018 | Arxiv | [Object Detection for Comics using Manga109 Annotations](https:\u002F\u002Farxiv.org\u002Fabs\u002F1803.08670) | |\n  \u003C\u002Fdetails>\n\n### 3D Character Creation\n  - \u003Cdetails>\n      \u003Csummary>3D character creation\u003C\u002Fsummary>\n      \n      | **Year** | **Conference \u002F Journal** | **Title** | **Links** |\n      | ---- | ---- | ---- | ---- | \n      | 2023 | NeurIPS | [DreamWaltz: Make a Scene with Complex 3D Animatable Avatars](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2305.12529.pdf)  | [Github](https:\u002F\u002Fgithub.com\u002FIDEA-Research\u002FDreamWaltz) |\n      | 2020 | ICCW | [Automatic Generation of 3D Natural Anime-like Non-Player Characters with Machine Learning](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9240508)  | |\n  \u003C\u002Fdetails>\n\n### Robotics\n  - \u003Cdetails>\n      \u003Csummary>Robotics\u003C\u002Fsummary>\n      \n      | **Year** | **Conference \u002F Journal** | **Title** | **Links** |\n      | ---- | ---- | ---- | ---- | \n      | 2020 | IROS | [Making Robots Draw A Vivid Portrait In Two Minutes](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2005.05526.pdf) | | \n  \u003C\u002Fdetails>\n\n### Speech Synthesis\n  - \u003Cdetails>\n      \u003Csummary>Speech Synthesis\u003C\u002Fsummary>\n      \n      | **Year** | **Conference \u002F Journal** | **Title** | **Links** |\n      | ---- | ---- | ---- | ---- | \n      | 2019 | ACM-TG | [Comic-Guided Speech Synthesis](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3355089.3356487)  | [HP](https:\u002F\u002Fbitwangyujia.github.io\u002Fresearch\u002Fproject\u002Fcomic2speech.html) |\n  \u003C\u002Fdetails>\n\n### Adult Content Detection\n  - \u003Cdetails>\n      \u003Csummary>Adult Content Detection\u003C\u002Fsummary>\n      \n      | **Year** | **Conference \u002F Journal** | **Title** | **Links** |\n      | ---- | 
---- | ---- | ---- | \n      | 2022 | IEEE Access | [A Deep Learning-Based Approach for Inappropriate Content Detection and Classification of YouTube Videos](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9696242)  |  |\n      | 2021 | IEEE Access | [An Evaluation of Traditional and CNN-Based Feature Descriptors for Cartoon Pornography Detection](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9371684) |  |\n      | 2019 | ACM SAC | [KidsGUARD: Fine Grained Approach for Child Unsafe Video Representation and Detection](https:\u002F\u002Fprecog.iiitd.edu.in\u002Fpubs\u002FKidsGuard-cam-ready.pdf) | [Github](https:\u002F\u002Fgithub.com\u002Fprecog-iiitd\u002Fkidsguard-sac) |\n  \u003C\u002Fdetails>\n\n### Survey & Review\n  - \u003Cdetails>\n      \u003Csummary>Survey & Review\u003C\u002Fsummary>\n      \n      | **Year** | **Conference \u002F Journal** | **Title** | **Links** |\n      | ---- | ---- | ---- | ---- | \n      | 2025 | arXiv | [Comparing Human and AI Performance in Visual Storytelling through Creation of Comic Strips: A Case Study](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2507.18641) ||\n      | 2024 | under review | [One missing piece in Vision & Language: A survey on Comics Understanding](https:\u002F\u002Fgithub.com\u002Femanuelevivoli\u002Fawesome-comics-understanding) | [GitHub](https:\u002F\u002Fgithub.com\u002Femanuelevivoli\u002Fawesome-comics-understanding) |\n      | 2023 | HSET | [Anime-like Character Face Generation: A Survey](https:\u002F\u002Fwww.researchgate.net\u002Fpublication\u002F374705438_Anime-like_Character_Face_Generation_A_Survey) |  |\n      | 2022 | IJCV | [Cartoon Image Processing: A Survey](https:\u002F\u002Flink.springer.com\u002Farticle\u002F10.1007\u002Fs11263-022-01645-1) | |\n      | 2020 | Arxiv | [Image Colorization: A Survey and Dataset](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2008.10774.pdf) | |\n      | 2019 | TOPS | [Computational Approaches to Comics 
Analysis](https:\u002F\u002Fpubmed.ncbi.nlm.nih.gov\u002F31705626\u002F) | |\n      | 2018 | - | [A Survey of Comics Research in Computer Science](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1804.05490.pdf) | |\n  \u003C\u002Fdetails>\n\n## Projects\nA summary of GitHub and other projects related to anime or manga, beyond the papers listed above.\n\n  - \u003Cdetails>\n      \u003Csummary>Repository\u003C\u002Fsummary>\n      \n      - [Awesome-Animation-Research](https:\u002F\u002Fgithub.com\u002Fzhenglinpan\u002FAwesome-Animation-Research)\n      \n  \u003C\u002Fdetails>\n\n  - \u003Cdetails>\n      \u003Csummary>Datasets\u003C\u002Fsummary>\n      \n      - [Layered Temporal Dataset for Anime Drawings](https:\u002F\u002Flayered-anime.github.io\u002F)\n      - [TRIGGER dataset](https:\u002F\u002Fwww.nii.ac.jp\u002Fdsc\u002Fidr\u002Ftrigger\u002F)\n      - [Anime Art](https:\u002F\u002Fwww.kaggle.com\u002Fdatasets\u002Fmuoncollider\u002Fdanbooru2020small)\n    \u003C\u002Fdetails>\n\n  - \u003Cdetails>\n      \u003Csummary>Representation Learning\u003C\u002Fsummary>\n\n      - [Classification and vectorization of key-frames and face characters of anime](https:\u002F\u002Fgithub.com\u002Fenmanuelmag\u002FAnimeClassificator)\n    \u003C\u002Fdetails>\n\n  - \u003Cdetails>\n      \u003Csummary>Image Generation\u003C\u002Fsummary>\n      \n      ### GANs\n      \n      - [makegirlsmoe](https:\u002F\u002Fgithub.com\u002Fmakegirlsmoe\u002Fmakegirlsmoe_web)\n      - [ANIME305\u002FAnime-GAN-tensorflow](https:\u002F\u002Fgithub.com\u002FANIME305\u002FAnime-GAN-tensorflow)\n      - [jayleicn\u002FAnimeGAN](https:\u002F\u002Fgithub.com\u002Fjayleicn\u002FanimeGAN)\n      - [FangYang970206\u002FAnime_GAN](https:\u002F\u002Fgithub.com\u002FFangYang970206\u002FAnime_GAN)\n      - [pavitrakumar78\u002FAnime-Face-GAN-Keras](https:\u002F\u002Fgithub.com\u002Fpavitrakumar78\u002FAnime-Face-GAN-Keras)\n      - 
[forcecore\u002FKeras-GAN-Animeface-Character](https:\u002F\u002Fgithub.com\u002Fforcecore\u002FKeras-GAN-Animeface-Character)\n      - [tdrussell\u002FIllustrationGAN](https:\u002F\u002Fgithub.com\u002Ftdrussell\u002FIllustrationGAN)\n      - [m516825\u002FConditional-GAN](https:\u002F\u002Fgithub.com\u002Fm516825\u002FConditional-GAN)\n      - [bchao1\u002FAnime-Generation](https:\u002F\u002Fgithub.com\u002Fbchao1\u002FAnime-Generation)\n      \n      ### Diffusion models\n      \n      - [harubaru\u002Fwaifu-diffusion](https:\u002F\u002Fgithub.com\u002Fharubaru\u002Fwaifu-diffusion)\n      - [DGSpitzer\u002FCyberpunk-Anime-Diffusion](https:\u002F\u002Fhuggingface.co\u002FDGSpitzer\u002FCyberpunk-Anime-Diffusion)\n      - [NovelAI](https:\u002F\u002Fnovelai.net\u002F)\n      - [Stable Diffusion Models](https:\u002F\u002Fcyberes.github.io\u002Fstable-diffusion-models\u002F)\n    \u003C\u002Fdetails>\n\n  - \u003Cdetails>\n      \u003Csummary>Image-to-Image Translation\u003C\u002Fsummary>\n\n      - [Aixile\u002Fchainer-cyclegan](https:\u002F\u002Fgithub.com\u002FAixile\u002Fchainer-cyclegan)\n      - [SystemErrorWang\u002FFacialCartoonization](https:\u002F\u002Fgithub.com\u002FSystemErrorWang\u002FFacialCartoonization)\n      - [experience-ml\u002Fcartoonize](https:\u002F\u002Fgithub.com\u002Fexperience-ml\u002Fcartoonize)\n      - [racinmat\u002Fanime-style-transfer](https:\u002F\u002Fgithub.com\u002Fracinmat\u002Fanime-style-transfer)\n      - [TachibanaYoshino\u002FAnimeGANv2](https:\u002F\u002Fgithub.com\u002FTachibanaYoshino\u002FAnimeGANv2)\n      - [XiaoSanGit\u002FReal2Animation-video-generation](https:\u002F\u002Fgithub.com\u002FXiaoSanGit\u002FReal2Animation-video-generation)\n      - [Avatar Artist Using GAN](http:\u002F\u002Fcs230.stanford.edu\u002Fprojects_winter_2020\u002Freports\u002F32639139.pdf)\n      - [Generating Cartoon Style Facial Expressions with 
StackGAN](https:\u002F\u002Fcs230.stanford.edu\u002Fprojects_fall_2019\u002Freports\u002F26242839.pdf)\n    \u003C\u002Fdetails>\n\n  - \u003Cdetails>\n      \u003Csummary>Automatic Line Art Colorization\u003C\u002Fsummary>\n\n      - [style2paints](https:\u002F\u002Fgithub.com\u002Flllyasviel\u002Fstyle2paints)\n      - [PaintsChainer](https:\u002F\u002Fgithub.com\u002Fpfnet\u002FPaintsChainer)\n      - [Ugness\u002FLine-Art-Colorization-SPADE](https:\u002F\u002Fgithub.com\u002FUgness\u002FLine-Art-Colorization-SPADE)\n      - [sanjay235\u002FSketch2Color-anime-translation](https:\u002F\u002Fgithub.com\u002Fsanjay235\u002FSketch2Color-anime-translation)\n      - [Pengxiao-Wang\u002FStyle2Paints_V3](https:\u002F\u002Fgithub.com\u002FPengxiao-Wang\u002FStyle2Paints_V3)\n      - [GANime: Generating Anime and Manga Character Drawings from Sketches](http:\u002F\u002Fcs230.stanford.edu\u002Fprojects_winter_2020\u002Fposters\u002F32226261.pdf)\n      - [Line Drawing Colorization](http:\u002F\u002Fcs231n.stanford.edu\u002Freports\u002F2017\u002Fpdfs\u002F425.pdf)\n    \u003C\u002Fdetails>\n\n  - \u003Cdetails>\n      \u003Csummary>Character Animating\u003C\u002Fsummary>\n\n      - [Talking Head Anime from a Single Image](https:\u002F\u002Fgithub.com\u002Fpkhungurn\u002Ftalking-head-anime-demo)\n      - [Talking Head Anime from a Single Image 2: More Expressive](https:\u002F\u002Fgithub.com\u002Fpkhungurn\u002Ftalking-head-anime-2-demo)\n      - [Talking Head(?) 
Anime from a Single Image 3: Now the Body Too](https:\u002F\u002Fgithub.com\u002Fpkhungurn\u002Ftalking-head-anime-3-demo)\n      - [Neural Rendering with Attention: An Incremental Improvement for Anime Character Animation](https:\u002F\u002Fgithub.com\u002Ftranspchan\u002FLive3D-v2)\n    \u003C\u002Fdetails>\n\n  - \u003Cdetails>\n      \u003Csummary>Super Resolution\u003C\u002Fsummary>\n\n      - [waifu2x](https:\u002F\u002Fgithub.com\u002Fnagadomi\u002Fwaifu2x)\n      - [Anime4K](https:\u002F\u002Fgithub.com\u002Fbloc97\u002FAnime4K)\n      - [goldhuang\u002FSRGAN-PyTorch](https:\u002F\u002Fgithub.com\u002Fgoldhuang\u002FSRGAN-PyTorch)\n      - [Real-CUGAN](https:\u002F\u002Fgithub.com\u002Fbilibili\u002Failab\u002Fblob\u002Fmain\u002FReal-CUGAN\u002FREADME_EN.md)\n    \u003C\u002Fdetails>\n\n  - \u003Cdetails>\n      \u003Csummary>Segmentation\u003C\u002Fsummary>\n\n    - [jerryli27\u002FAniSeg](https:\u002F\u002Fgithub.com\u002Fjerryli27\u002FAniSeg)\n    - [zymk9\u002FYet-Another-Anime-Segmenter](https:\u002F\u002Fgithub.com\u002Fzymk9\u002FYet-Another-Anime-Segmenter)\n    \u003C\u002Fdetails>\n\n  - \u003Cdetails>\n      \u003Csummary>Landmark Detection\u003C\u002Fsummary>\n\n      - [Anime face landmark detection by deep cascaded regression](https:\u002F\u002Fgithub.com\u002Fkanosawa\u002Fanime_face_landmark_detection)\n      - [Anime Face Detector using mmdet and mmpose](https:\u002F\u002Fgithub.com\u002Fhysts\u002Fanime-face-detector)\n    \u003C\u002Fdetails>\n\n### Venues\n\n- TIP: IEEE Transactions on Image Processing\n- TMM: IEEE Transactions on Multimedia\n- PR: Pattern Recognition\n- TSP: IEEE Transactions on Signal Processing\n- CVPR: IEEE\u002FCVF Conference on Computer Vision and Pattern Recognition\n- MMUL: IEEE MultiMedia\n- TVCG: IEEE Transactions on Visualization and Computer Graphics\n- MMM: International Conference on Multimedia Modeling\n- ECCV: European Conference on Computer Vision\n- NeurIPS: Conference on Neural 
Information Processing Systems\n- NeurIPS-DB: Conference on Neural Information Processing Systems, Datasets and Benchmarks Track\n- ACM-MM: ACM Multimedia\n- ACM-TG: ACM Transactions on Graphics\n- ACM-TURC: ACM Turing Celebration Conference\n- ISICA: International Symposium on Intelligence Computation and Applications\n- 3ICT: International Conference on Innovation and Intelligence for Informatics, Computing, and Technologies\n- MDPI-AS: MDPI Applied Sciences\n- EG: EUROGRAPHICS\n- CGF: Computer Graphics Forum\n- CGI: Conference on Graphics Interface\n- IET-IP: IET Image Processing\n- ICME: International Conference on Multimedia and Expo\n- CCCRV: Canadian Conference on Computer and Robot Vision\n- ICCW: International Conference on Cyberworlds\n- IROS: IEEE\u002FRSJ International Conference on Intelligent Robots and Systems\n- HSET: Highlights in Science Engineering and Technology\n- TOPS: Topics in Cognitive Science\n- COLING: Conference on Computational Linguistics\n","# Awesome Anime Research\n\nEverything related to anime.\\\nFor papers in the **Manga\u002FComics** domain, see [🔥 Awesome Comics Understanding](https:\u002F\u002Fgithub.com\u002Femanuelevivoli\u002Fawesome-comics-understanding)\\\nFor **2D animation video** research, see [🚀 Awesome Animation Research](https:\u002F\u002Fgithub.com\u002Fzhenglinpan\u002FAwesome-Animation-Research)\n\n## 📂 Datasets\n\n  - \u003Cdetails>\n      \u003Csummary>Overview of Anime\u002FManga\u002FComics Datasets\u003C\u002Fsummary>\n\n      | **Year** | **Conference \u002F Journal** | **Title** | **Links** |\n      | ---- | ---- | ---- | ---- | \n      | 2025 | arXiv | [ComicScene154: A Scene Dataset for Comic Analysis](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2508.16190) | |\n      | 2025 | COLING | [AnimeDL-2M: Million-Scale AI-Generated Anime Image Detection and Localization in Diffusion Era](https:\u002F\u002Farxiv.org\u002Fabs\u002F2504.11015) | [HP](https:\u002F\u002Fflytweety.github.io\u002FAnimeDL2M\u002F) |\n      | 2025 | COLING | [Context-Informed Machine Translation of Manga using Multimodal Large Language Models](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2411.02589) | [GitHub](https:\u002F\u002Fgithub.com\u002Fplippmann\u002Fmultimodal-manga-translation) |\n      | 2024 | Arxiv | 
[尾巴讲故事：带角色名称的整章漫画转录本](https:\u002F\u002Farxiv.org\u002Fabs\u002F2408.00298) | [GitHub](https:\u002F\u002Fgithub.com\u002Fragavsachdeva\u002Fmagi) |\n      | 2024 | Arxiv | [CoMix：一个多任务漫画理解的综合基准数据集](https:\u002F\u002Farxiv.org\u002Fabs\u002F2407.03550) | [GitHub](https:\u002F\u002Fgithub.com\u002Femanuelevivoli\u002Fcomix-dataset) |\n      | 2023 | CVPR | [Human-Art：一个跨越自然与人工场景的多功能以人为本数据集](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2023\u002Fpapers\u002FJu_Human-Art_A_Versatile_Human-Centric_Dataset_Bridging_Natural_and_Artificial_Scenes_CVPR_2023_paper.pdf) | [Github](https:\u002F\u002Fgithub.com\u002FIDEA-Research\u002FHumanArt) | \n      | 2023 | Arxiv | [Manga109Dialog：用于漫画说话人检测的大规模对话数据集](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2306.17469.pdf) |  [Github](https:\u002F\u002Fgithub.com\u002Fmanga109\u002Fpublic-annotations) |\n      | 2023 | ACM-TG | [利用对比学习框架进行半监督参考式草图提取](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1FELTVl73OrQ9Q0uBXN7jLbRStSsF-NgM\u002Fview?pli=1) |  [Github](https:\u002F\u002Fgithub.com\u002FChanuku\u002F4skst) |\n      | 2023 | ACM-TG |  [解析条件下的动漫翻译：一个新的数据集和方法](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3585002) | [Github](https:\u002F\u002Fgithub.com\u002Fzsl2018\u002FStyleAnime) | \n      | 2022 | NeurIPS-DB | [AnimeRun：从开源3D电影中获取的2D动画视觉对应关系](https:\u002F\u002Fopenreview.net\u002Fpdf?id=04OPxj0jGN_) |  [官网](https:\u002F\u002Flisiyao21.github.io\u002Fprojects\u002FAnimeRun) |\n      | 2022 | ECCV | [COO：用于识别任意或截断文本的漫画拟声词数据集](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2207.04675.pdf) |   [Github](https:\u002F\u002Fgithub.com\u002Fku21fan\u002FCOO-Comic-Onomatopoeia) |\n      | 2022 | CVPR研讨会 |[一项具有挑战性的动漫风格识别基准测试](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022W\u002FVDU\u002Fpapers\u002FLi_A_Challenging_Benchmark_of_Anime_Style_Recognition_CVPRW_2022_paper.pdf) |  | \n      | 2022 | ECCV | [AnimeCeleb：用于头部重演的大规模动画名人头像数据集](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2111.07640.pdf) 
|   [Github](https:\u002F\u002Fgithub.com\u002Fkangyeolk\u002FAnimeCeleb) |\n      | 2021 | Arxiv | [DAF:RE：一个具有挑战性、众包、大规模且长尾分布的动漫角色识别数据集](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2101.08674.pdf)  | [Github](https:\u002F\u002Fgithub.com\u002Farkel23\u002Fanimesion) |\n      | 2020 | ACM-MM | [卡通人脸识别：一个基准数据集](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1907.13394.pdf) |   [Github](https:\u002F\u002Fgithub.com\u002Fluxiangju-PersonAI\u002FiCartoonFace) |\n      | 2020 | ECCV研讨会 | [漫画中无约束文本检测：一个新的数据集和基线](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2009.04042.pdf) |  [Github](https:\u002F\u002Fgithub.com\u002Fjuvian\u002FManga-Text-Segmentation) |\n      | 2020 | ECCV | [DanbooRegion：插图区域数据集](https:\u002F\u002Flllyasviel.github.io\u002FDanbooRegion\u002Fpaper\u002Fpaper.pdf) |  [Github](https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FDanbooRegion) |\n      | 2020 | MMUL | [构建带有多媒体应用标注的漫画数据集“Manga109”](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2005.04425.pdf) |  [官网](http:\u002F\u002Fwww.manga109.org\u002Fja\u002Fdownload_s.html) |\n      | 2019 | CVPR | [Creative Flow+ 数据集](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fpapers\u002FShugrina_Creative_Flow_Dataset_CVPR_2019_paper.pdf) |  [官网](https:\u002F\u002Fwww.cs.toronto.edu\u002Fcreativeflow\u002F) |\n      | 2017 | CVPR | [漫画叙事中面板间推断的惊人奥秘](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1611.05118.pdf) | [Github](https:\u002F\u002Fgithub.com\u002Fmiyyer\u002Fcomics) |\n    \u003C\u002Fdetails>\n\n## 📜 论文\n\n### 图像生成\n\n  - \u003Cdetails>\n      \u003Csummary>生成\u003C\u002Fsummary>\n\n| **年份** | **会议\u002F期刊** | **标题** | **链接** |\n      | ---- | ---- | ---- | ---- | \n      | 2025 | Arxiv | [SakugaFlow: 一种模拟人类绘画过程并为初学者提供交互式辅导的分阶段插画框架](https:\u002F\u002Farxiv.org\u002Fabs\u002F2506.08443) | | |\n      | 2025 | Arxiv | [基于扩散模型的动漫插画交互式绘制指导](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2507.09140) |  |\n      | 2022 | Arxiv | [通过流形熵估计对抗GAN中的模式坍塌](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2208.12055.pdf) | | 
[Github](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2208.12055.pdf) |\n      | 2022 | SIGGRAPH | [StyleGAN-NADA: 基于CLIP引导的图像生成器领域适应](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2108.00946.pdf)  | [Github](https:\u002F\u002Fgithub.com\u002Frinongal\u002FStyleGAN-nada) |\n      | 2021 | ICCV | [DisUnknown: 用于解耦学习的未知因素蒸馏](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2109.08090.pdf)  | [Github](https:\u002F\u002Fgithub.com\u002Fstormraiser\u002Fdisunknown) |\n      | 2021 | Arxiv | [CoPE: 基于多项式展开的条件图像生成](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2104.05077.pdf) | |\n      | 2021 | Arxiv | [生成对抗网络的高效持续适应](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2103.04032.pdf) | |\n      | 2021 | ElConRus | [利用神经网络生成“理想”的动漫片头画面](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9396557) | |\n      | 2021 | CVPR | [HistoGAN: 通过颜色直方图控制GAN生成图像和真实图像的颜色](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2011.11731.pdf)  | [Github](https:\u002F\u002Fgithub.com\u002Fmahmoudnafifi\u002FHistoGAN) |\n      | 2020 |  | [利用GAN生成动漫角色全身站立姿势及其风格迁移](https:\u002F\u002Fwaseda.repo.nii.ac.jp\u002F?action=pages_view_main&active_action=repository_view_main_item_detail&item_id=58145&item_no=1&page_id=13&block_id=21) | |\n      | 2020 | NeurIPS | [无遗忘的GAN记忆](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2006.07543.pdf) | [Github](https:\u002F\u002Fgithub.com\u002FMiaoyunZhao\u002FGANmemory_LifelongLearning) |\n      | 2020 | Arxiv | [分类表示可用于下游生成任务](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2004.07543.pdf) | |\n      | 2020 | Arxiv | [自编码生成对抗网络](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2004.05472.pdf) | | [Github](https:\u002F\u002Fgithub.com\u002FConorLazarou\u002FAEGAN-keras) |\n      | 2019 | IEEE Access | [一种用于稳定训练生成对抗网络的自适应控制算法](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8936350) | |\n      | 2019 | Arxiv | [通过对抗性神经元剪枝和突触巩固克服长期灾难性遗忘](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1912.09091.pdf) | |\n      | 2019 | EG | 
[迈向多样化的动漫人脸生成：主动标签补全与风格特征网络](https:\u002F\u002Fdiglib.eg.org\u002Fbitstream\u002Fhandle\u002F10.2312\u002Fegs20191016\u002F065-068.pdf?sequence=1&isAllowed=y) | |\n      | 2018 | IJCNN | [使用加权混合生成对抗网络生成新颖的图像风格](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8489080) | |\n      | 2018 | ECCV Workshop | [基于渐进式结构条件生成对抗网络的高分辨率全身动漫生成](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1809.01890v1.pdf) | [官网](https:\u002F\u002Fdena.com\u002Fintl\u002Fanime-generation\u002F) |\n      | 2017 | Comiket92 | [迈向利用生成对抗网络自动创作动漫角色](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1708.05509.pdf) | [官网](https:\u002F\u002Fmake.girls.moe\u002F#\u002F) |\n  \u003C\u002Fdetails>\n  \n  \n  - \u003Cdetails>\n      \u003Csummary>少样本学习\u003C\u002Fsummary>\n\n      | **年份** | **会议\u002F期刊** | **标题** | **链接** |\n      | ---- | ---- | ---- | ---- | \n      | 2022 | CAVW | [通过一次学习利用粗略涂鸦控制StyleGAN](http:\u002F\u002Fwww.cgg.cs.tsukuba.ac.jp\u002F~endo\u002Fprojects\u002FStyleGANSparseControl\u002FCAVW_endo22_preprint.pdf)  | [官网](http:\u002F\u002Fwww.cgg.cs.tsukuba.ac.jp\u002F~endo\u002Fprojects\u002FStyleGANSparseControl\u002F) |\n      | 2022 | WACV | [生成对抗网络中的数据实例先验（DISP）](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FWACV2022\u002Fpapers\u002FMangla_Data_InStance_Prior_DISP_in_Generative_Adversarial_Networks_WACV_2022_paper.pdf) | |\n      | 2021 | Arxiv | [MineGAN++: 挖掘生成模型以实现向小数据域的高效知识迁移](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2104.13742.pdf) | |\n      | 2020 | Arxiv | [GAN中的数据实例先验用于迁移学习](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2012.04256.pdf) | |\n      | 2020 | CVPR研讨会 | [冻结判别器：微调GAN的简单基线](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2002.10964.pdf) | [Github](https:\u002F\u002Fgithub.com\u002Fsangwoomo\u002FFreezeD) |\n      | 2020 | CVPR | [MineGAN：从GAN向仅有少量图像的目标域进行有效知识迁移](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FWang_MineGAN_Effective_Knowledge_Transfer_From_GANs_to_Target_Domains_With_CVPR_2020_paper.pdf)  | 
[Github](https:\u002F\u002Fgithub.com\u002Fyaxingwang\u002FMineGAN) |\n      | 2020 | Arxiv | [生成对抗网络的少样本适应](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2010.11943.pdf) | [Github](https:\u002F\u002Fgithub.com\u002Fe-271\u002Ffew-shot-gan) |\n      | 2019 | ICCV | [通过批量统计适应从小型数据集生成图像](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fpapers\u002FNoguchi_Image_Generation_From_Small_Datasets_via_Batch_Statistics_Adaptation_ICCV_2019_paper.pdf)  | [Github](https:\u002F\u002Fgithub.com\u002Fnogu-atsu\u002Fsmall-dataset-image-generation) |\n  \u003C\u002Fdetails>\n\n  - \u003Cdetails>\n      \u003Csummary>可解释性\u003C\u002Fsummary>\n\n| **年份** | **会议\u002F期刊** | **标题** | **链接** |\n      | ---- | ---- | ---- | ---- | \n      | 2022 | 大数据 | [无监督发现逐层GAN的解耦可解释方向](https:\u002F\u002Flink.springer.com\u002Fchapter\u002F10.1007\u002F978-981-19-8331-3_2) | |\n      | 2022 | AAAI | [GAN中潜在空间发现的自监督增强](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2112.08835.pdf) | | \n      | 2021 | ACM-MM | [在GAN中发现用于语义图像变换的保密度潜在空间路径](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3474085.3475293) | | |\n      | 2021 | CVPR | [超越二值属性的GAN可解释潜在空间方向发现](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FYang_Discovering_Interpretable_Latent_Space_Directions_of_GANs_Beyond_Binary_Attributes_CVPR_2021_paper.pdf) | |\n      | 2021 | Arxiv | [EigenGAN：GAN的逐层特征值学习](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2104.12476.pdf) | | [Github](https:\u002F\u002Fgithub.com\u002FLynnHo\u002FEigenGAN-Tensorflow) |\n      | 2021 | CVPR | [用于潜在空间操控的代理梯度场](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2104.09065.pdf)  | |\n      | 2021 | Arxiv | [生成模型是否理解解耦？对比学习就够了](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2102.10543.pdf) | | [Github](https:\u002F\u002Fgithub.com\u002Fxrenaa\u002FDisCo) |\n      | 2020 | Arxiv | [GAN中无监督的解耦流形发现](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2011.11842.pdf) | | |\n      | 2021 | CVPR | 
[GAN中潜在语义的闭式分解](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2007.06600.pdf) | [Github](https:\u002F\u002Fgithub.com\u002Fgenforce\u002Fsefa) |\n      | 2020 | ICML | [GAN潜在空间中可解释方向的无监督发现](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2002.03754.pdf) | [Github](https:\u002F\u002Fgithub.com\u002Fanvoynov\u002FGANLatentDiscovery) |\n      | 2019 | Arxiv | [RPGAN：通过随机路由实现GAN可解释性](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1912.10920.pdf) | | [Github](https:\u002F\u002Fgithub.com\u002Fanvoynov\u002FRandomPathGAN) |\n  \u003C\u002Fdetails>\n\n  - \u003Cdetails>\n      \u003Csummary>蒙太奇\u003C\u002Fsummary>\n\n      | **年份** | **会议\u002F期刊** | **标题** | **链接** |\n      | ---- | ---- | ---- | ---- | \n      | 2022 | ICPR | [MontageGAN：GAN对多组件的生成与拼接](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2205.15577.pdf) | [Github](https:\u002F\u002Fgithub.com\u002Fuchidalab\u002Fdocker-montage-gan)|\n      | 2022 | TOG | [精灵图转精灵图：基于自监督精灵估计的卡通动画分解](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3550454.3555439) | |\n  \u003C\u002Fdetails>\n\n  - \u003Cdetails>\n      \u003Csummary>文本到图像\u003C\u002Fsummary>\n\n      | **年份** | **会议\u002F期刊** | **标题** | **链接** |\n      | ---- | ---- | ---- | ---- | \n      | 2023 | Arxiv | [为文本到图像扩散模型添加条件控制](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2302.05543.pdf) | [Github](https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FControlNet) |\n      | 2022 | Arxiv | [DreamArtist：通过正负提示调优实现可控的一次性文本到图像生成](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2211.11337.pdf) |  [Github](https:\u002F\u002Fgithub.com\u002F7eu7d7\u002FDreamArtist-stable-diffusion) |\n  \u003C\u002Fdetails>\n\n\n\n### 图像到图像翻译\n\n  - \u003Cdetails>\n      \u003Csummary>人脸转动漫\u003C\u002Fsummary>\n      \n      | **年份** | **会议\u002F期刊** | **标题** | **链接** |\n      | ---- | ---- | ---- | ---- | \n      | 2023 | Arxiv | [StyO：仅需一次即可风格化你的脸](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2303.03231.pdf) | |\n      | 2022 | TVCG | 
[通过代理引导的领域适应实现外观保留的人像到动漫转换](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9982378) | |\n      | 2022 | Arxiv | [神经最优传输](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2201.12220.pdf) | |\n      | 2022 | CVPR研讨会 | [跨域风格混合用于人脸卡通化](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2205.12450.pdf) | [HP](https:\u002F\u002Fwebtoon.github.io\u002FWebtoonMe\u002Fen)|\n      | 2021 | Arxiv | [一种考虑领域差距的生成对抗网络，用于多领域图像转换](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2110.10837.pdf) | |\n      | 2021 | ACM-TG| [AgileGAN：通过反演一致的迁移学习风格化人像](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3450626.3459771) | [Github](https:\u002F\u002Fgithub.com\u002FGuoxianSong\u002FAgileGAN) |\n      | 2021 | Arxiv | [FINE-TUNING STYLEGAN2 FOR CARTOON FACE GENERATION](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2106.12445.pdf) | [Github](https:\u002F\u002Fgithub.com\u002Fhappy-jihye\u002FCartoon-StyleGAN) |\n      | 2021 | Arxiv | [GANs N’ Roses：稳定、可控、多样化的图像到图像翻译（也适用于视频！)](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2106.06561.pdf) | [Github](https:\u002F\u002Fgithub.com\u002Fmchong6\u002FGANsNRoses) |\n      | 2021 | JSAI | [多卡通GAN用于条件性艺术化人脸转换 ](https:\u002F\u002Fwww.jstage.jst.go.jp\u002Farticle\u002Fpjsai\u002FJSAI2021\u002F0\u002FJSAI2021_2N1IS2a01\u002F_pdf) | |\n      | 2021 | Arxiv | [AniGAN：基于风格指导的生成对抗网络，用于无监督动漫人脸生成](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2102.12593.pdf) | [Github](https:\u002F\u002Fgithub.com\u002Fbing-li-ai\u002FAniGAN) |\n      | 2021 | ICCECE | [将真人转化为动漫卡通化效果](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9342433)  | |\n      | 2020 | NeurIPS研讨会 | [关于生成模型中数据偏见的一点说明](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2012.02516.pdf) |  |\n      |  2020 | Arxiv | [通过预训练的StyleGAN2网络进行无监督图像到图像翻译](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2010.05713.pdf) | [Github](https:\u002F\u002Fgithub.com\u002FHideUnderBush\u002FUI2I_via_StyleGAN2) |\n      |  2020 | Arxiv | [少量样本知识迁移用于细粒度卡通人脸生成](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2007.13332.pdf) | | \n      | 2020 
| Arxiv | [用于共享跨域特征表示和图像到图像翻译的自动编码](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2006.11404.pdf) | |\n      | 2019 | ICASERT | [利用对抗训练从真实人像生成动漫](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8934465) | |\n      | 2019 | Arxiv | [地标辅助CycleGAN用于卡通人脸生成](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1907.01424v1.pdf) | |\n      | 2018 | CVPR | [DA-GAN：基于深度注意力生成对抗网络的实例级图像转换](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FMa_DA-GAN_Instance-Level_Image_CVPR_2018_paper.pdf) | [Github](https:\u002F\u002Fgithub.com\u002FRongpeng-Lin\u002FA-DA-GAN-architecture) |\n      | 2018 | Arxiv | [Twin-GAN——具有权重共享GAN的非配对跨域图像转换](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1809.00946.pdf) | [Github](https:\u002F\u002Fgithub.com\u002Fjerryli27\u002FTwinGAN) |\n      | 2018 | ECCV | [改进无监督图像到图像翻译中的形状变形](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fpapers\u002FAaron_Gokaslan_Improving_Shape_Deformation_ECCV_2018_paper.pdf)  | [Github](https:\u002F\u002Fgithub.com\u002Fbrownvc\u002Fganimorph) |\n  \u003C\u002Fdetails>\n\n- \u003Cdetails>\n      \u003Csummary>自拍转动漫\u003C\u002Fsummary>\n      \n      | **年份** | **会议\u002F期刊** | **标题** | **链接** |\n      | ---- | ---- | ---- | ---- | \n      | 2022 | CVPR | [通过结构一致性约束缓解无监督低层图像到图像转换中的语义失真](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fpapers\u002FGuo_Alleviating_Semantics_Distortion_in_Unsupervised_Low-Level_Image-to-Image_Translation_via_Structure_CVPR_2022_paper.pdf)  | |\n      | 2022 | ICIP | [Hyprogan：突破人与动漫之间的维度壁垒](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9897973)  | |\n      | 2022 | MMUL | [利用负学习进行噪声补丁的非配对图像到图像转换](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9780547) | |\n      | 2022 | CVPR | [基于门控循环映射的非配对卡通图像合成](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2022\u002Fpapers\u002FMen_Unpaired_Cartoon_Image_Synthesis_via_Gated_Cycle_Mapping_CVPR_2022_paper.pdf) | |\n      | 2022 | Arxiv | 
[UVCGAN：用于非配对图像到图像转换的UNet视觉变换器循环一致性GAN](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2203.02557.pdf) | |\n      | 2021 | ICCV | [通过学习重新加权实现未对齐的图像到图像转换](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2109.11736.pdf) | |\n      | 2021 | Arxiv | [好的艺术家复制，伟大的艺术家剽窃：针对图像转换生成对抗网络的模型提取攻击](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2104.12623.pdf) | |\n      | 2021 | Arxiv | [SPatchGAN：一种基于统计特征的判别器，用于无监督图像到图像转换](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2103.16219.pdf) | |\n      | 2020 | MAPR | [基于插值的动漫人脸风格迁移](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9237764) | |\n      | 2020 | ICML | [特征量化提升GAN训练](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2004.02088.pdf) | [Github](https:\u002F\u002Fgithub.com\u002FYangNaruto\u002FFQ-GAN) |\n      | 2020 | ECCV | [使用对抗一致性损失的非配对图像到图像转换](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2003.04858.pdf) | [Github](https:\u002F\u002Fgithub.com\u002Fhyperplane-lab\u002FACL-GAN) |\n      | 2019 | IJCNN | [AttentionGAN：利用注意力引导的生成对抗网络进行非配对图像到图像转换](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1911.11897.pdf)  | [Github](https:\u002F\u002Fgithub.com\u002FHa0Tang\u002FAttentionGAN) |\n      | 2020 | CVPR | [打破循环——同事就是你所需要的](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1911.10538.pdf) | [Github](https:\u002F\u002Fgithub.com\u002FOnr\u002FCouncil-GAN) |\n      | 2020 | ICLR | [U-GAT-IT：用于图像到图像转换的无监督生成式注意力网络，带有自适应层实例归一化](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1907.10830.pdf) | [Github](https:\u002F\u002Fgithub.com\u002Ftaki0112\u002FUGATIT) |\n  \u003C\u002Fdetails>\n\n  - \u003Cdetails>\n      \u003Csummary>照片转动漫\u003C\u002Fsummary>\n\n      | **年份** | **会议\u002F期刊** | **标题** | **链接** |\n      | ---- | ---- | ---- | ---- | \n      | 2024 | IEICE TIS | [一种用于快速照片动画的新双尾生成对抗网络](https:\u002F\u002Fwww.jstage.jst.go.jp\u002Farticle\u002Ftransinf\u002FE107.D\u002F1\u002FE107.D_2023EDP7061\u002F_pdf) |[官网](https:\u002F\u002Ftachibanayoshino.github.io\u002FAnimeGANv3\u002F) |\n      | 2023 | ICCV | 
[Scenimefy：通过半监督图像到图像转换学习制作动漫场景](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.12968) | [Github](https:\u002F\u002Fgithub.com\u002FYuxinn-J\u002FScenimefy) |\n      | 2023 | CVPR | [具有可控感知因素的交互式卡通化](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2023\u002Fpapers\u002FAhn_Interactive_Cartoonization_With_Controllable_Perceptual_Factors_CVPR_2023_paper.pdf)  | |\n      | 2022 | ACM-MM | [Cartoon-Flow：一种基于流的生成对抗网络，用于任意风格的照片卡通化](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3503161.3548094)  | |\n      | 2022 | ICML | [学习将纹理显著性自适应注意力融入图像卡通化](https:\u002F\u002Fproceedings.mlr.press\u002Fv162\u002Fgao22k\u002Fgao22k.pdf)  | |\n      | 2022 | AAAI | [具有感知运动一致性的无监督连贯视频卡通化](https:\u002F\u002Fwww.aaai.org\u002FAAAI22Papers\u002FAAAI-6861.ZhenahuanL.pdf) | |\n      | 2022 | MVIP | [ARGAN：用于动画风格迁移的快速收敛GAN](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9738752)  | [Github](https:\u002F\u002Fgithub.com\u002Famirzenoozi\u002FARGAN) |\n      | 2022 | ICCECE | [使用双判别器GAN将照片转换为动漫](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9712766) | |\n      | 2021 | 3ICT | [利用迁移学习和TinyML策略对图像进行卡通化](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9581835)  | |\n      | 2021 | IEEE Access | [用于语义多风格迁移的伪监督学习](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9316188)  |\n      | 2021 | TVCG | [基于GAN的多风格照片卡通化](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9382902)  | |\n      | 2020 | ISICA | [AnimeGAN：一种用于照片动画的新轻量级GAN](https:\u002F\u002Fgithub.com\u002FTachibanaYoshino\u002FAnimeGAN\u002Fblob\u002Fmaster\u002Fdoc\u002FChen2020_Chapter_AnimeGAN.pdf) | [Github](https:\u002F\u002Fgithub.com\u002FTachibanaYoshino\u002FAnimeGAN) |\n      | 2020 | Arxiv | [用于将照片转化为宫崎骏风格卡通的生成对抗网络](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2005.07702.pdf) | [Github](https:\u002F\u002Fgithub.com\u002FFilipAndersson245\u002Fcartoon-gan) |\n      | 2020 | CVPR | 
[学习使用白盒卡通表示进行卡通化](https:\u002F\u002Fgithub.com\u002FSystemErrorWang\u002FWhite-box-Cartoonization\u002Fblob\u002Fmaster\u002Fpaper\u002F06791.pdf)  | [Github](https:\u002F\u002Fgithub.com\u002FSystemErrorWang\u002FWhite-box-Cartoonization) |\n      | 2020 | MMM | [CartoonRenderer：一种基于实例的多风格卡通图像翻译器](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1911.06102.pdf) | |\n      | 2020 | Arxiv | [GANILLA：用于图像到插图转换的生成对抗网络](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2002.05638.pdf) | [Github](https:\u002F\u002Fgithub.com\u002Fgiddyyupp\u002Fganilla) |\n      | 2018 | Arxiv | [Comixify：将视频转换为漫画](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1812.03473.pdf) | [Github](https:\u002F\u002Fgithub.com\u002Fmaciej3031\u002Fcomixify) |\n      | 2018 | CVPR | [CartoonGAN：用于照片卡通化的生成对抗网络](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FChen_CartoonGAN_Generative_Adversarial_CVPR_2018_paper.pdf) | [Github](https:\u002F\u002Fgithub.com\u002Fznxlwm\u002Fpytorch-CartoonGAN) |\n  \u003C\u002Fdetails>\n\n  - \u003Cdetails>\n      \u003Csummary>素描转动漫\u003C\u002Fsummary>\n\n| **年份** | **会议\u002F期刊** | **标题** | **链接** |\n      | ---- | ---- | ---- | ---- | \n      | 2023 | ACM-TG | [AniFaceDrawing: 在素描过程中探索动漫肖像](https:\u002F\u002Farxiv.org\u002Fabs\u002F2306.07476) | [主页](http:\u002F\u002Fwww.jaist.ac.jp\u002F~xie\u002FAniFaceDrawing.html) |\n      | 2022 | TNNLS | [PMSGAN：用于人脸图像转换的并行多阶段生成对抗网络](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F10014017)  | |\n      | 2022 | FDG | [SketchBetween：通过素描实现精灵动画的视频到视频合成](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2209.00185.pdf) | |\n      | 2021 | TVCG | [深度素描引导的卡通视频中间帧生成](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2008.04149.pdf)  | |\n      | 2020 | NeurIPS | [如何训练你的条件生成对抗网络：一种基于几何结构化潜在流形的方法](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2011.13055.pdf) | |\n      | 2020 | ECCV | [为图像生成与编辑建模艺术工作流程](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2007.07238.pdf) | [Github](https:\u002F\u002Fgithub.com\u002Fhytseng0509\u002FArtEditing) 
|\n      | 2019 | Arxiv | [PI-REC：具有边缘和颜色域的渐进式图像重建网络](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1903.10146.pdf) | [Github](https:\u002F\u002Fgithub.com\u002Fyouyuge34\u002FPI-REC) |\n      | 2019 | FITEE | [SmartPaint：基于生成对抗网络的协同创作绘画系统](https:\u002F\u002Flink.springer.com\u002Fcontent\u002Fpdf\u002F10.1631\u002FFITEE.1900386.pdf) | |\n  \u003C\u002Fdetails>\n\n  - \u003Cdetails>\n      \u003Csummary>照片转漫画\u003C\u002Fsummary>\n      \n      | **年份** | **会议\u002F期刊** | **标题** | **链接** |\n      | ---- | ---- | ---- | ---- | \n      | 2021 | AAAI | [MangaGAN：基于漫画绘制方法的无配对照片到漫画转换](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2004.10634.pdf) | |\n  \u003C\u002Fdetails>\n\n  - \u003Cdetails>\n      \u003Csummary>动漫转服装\u003C\u002Fsummary>\n\n      | **年份** | **会议\u002F期刊** | **标题** | **链接** |\n      | ---- | ---- | ---- | ---- | \n      | 2020 | Arxiv | [动漫转真实服装：通过图像到图像转换生成角色扮演服装](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2008.11479.pdf) | |\n  \u003C\u002Fdetails>\n\n  - \u003Cdetails>\n      \u003Csummary>风格迁移\u003C\u002Fsummary>\n\n      | **年份** | **会议\u002F期刊** | **标题** | **链接** |\n      | ---- | ---- | ---- | ---- | \n      | 2024 | IJCAI | [Diffutoon：基于扩散模型的高分辨率可编辑卡通渲染](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2401.16224) | [主页](https:\u002F\u002Fecnu-cilab.github.io\u002FDiffutoonProjectPage\u002F) |\n      | 2023 | CVPR | [LANIT：面向无标签数据的语言驱动图像到图像翻译](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2208.14889.pdf) | | [Github](https:\u002F\u002Fgithub.com\u002FKU-CVLAB\u002FLANIT) |\n      | 2022 | TCSVT | [HRInversion：用于跨域图像合成的高分辨率生成对抗网络反演](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9953153) | |\n      | 2022 | NeurIPS | [迈向生成对抗网络的多样且忠实的一次性适应](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2207.08736.pdf) | [Github](https:\u002F\u002Fgithub.com\u002F1170300521\u002FDiFa) |\n      | 2022 | CVPR | [Pastiche Master：基于范例的高分辨率人像风格迁移](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2203.13248.pdf) | [Github](https:\u002F\u002Fgithub.com\u002Fwilliamyang1991\u002FDualStyleGAN) 
|\n      | 2022 | ICLR | [Mind the Gap：针对生成对抗网络单次域适应的领域差距控制](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2110.08398.pdf) | [Github](https:\u002F\u002Fgithub.com\u002FZPdesu\u002FMindTheGap) |\n      | 2022 | Arxiv | [Styleverse：迈向异质领域间的身份风格化](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2203.00861.pdf) | |\n      | 2022 | ICCECE | [基于生成对抗网络的无监督图像纹理迁移](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9712754) | |\n      | 2021 | VSIP | [跨模态且语义增强的非对称循环生成对抗网络，用于数据不平衡的动漫风格人脸转换](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002FfullHtml\u002F10.1145\u002F3503961.3503969) |  |\n      | 2021 | Arxiv | [JoJoGAN：一次性人脸风格化](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2112.11641.pdf) | [Github](https:\u002F\u002Fgithub.com\u002Fmchong6\u002FJoJoGAN) |\n      | 2021 | Arxiv | [在图像生成中对艺术风格进行细粒度控制](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2110.10278.pdf) | |\n      | 2021 | Arxiv | [使用StyleGAN先验进行少量样本语义图像合成](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2103.14877.pdf) | [Github](https:\u002F\u002Fgithub.com\u002Fendo-yuki-t\u002FFewshot-SMIS) |\n      | 2020 | CSICC | [基于StarGAN的动漫角色面部表情迁移](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9050061) | |\n      | 2019 | IEEE Access | [RAG：通过学习残差属性进行面部属性编辑](https:\u002F\u002Fwww.researchgate.net\u002Fpublication\u002F334058885_RAG_Facial_Attribute_Editing_by_Learning_Residual_Attributes) | |\n      | 2019 | Arxiv | [解耦动漫插画中的风格与内容](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1905.10742v2.pdf) | [Github](https:\u002F\u002Fgithub.com\u002Fstormraiser\u002Fadversarial-disentangle) |\n      | 2018 | Arxiv | [利用度量学习和生成对抗网络探索动漫风格空间](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1805.07997v1.pdf) | |\n  \u003C\u002Fdetails>\n\n  - \u003Cdetails>\n      \u003Csummary>作者风格迁移\u003C\u002Fsummary>\n      \n      | **年份** | **会议\u002F期刊** | **标题** | **链接** |\n      | ---- | ---- | ---- | ---- | \n      | 2022 | ICIP | [使用Sailormoonredraw数据进行插画师风格的转换](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9897787) |  |\n  
  </details>

### Colorization

  - <details>
      <summary>No hint</summary>

      | **Year** | **Venue** | **Title** | **Links** |
      | ---- | ---- | ---- | ---- |
      | 2018 | ISCID | [Automatic Sketch Colorization with Tandem Conditional Adversarial Networks](https://ieeexplore.ieee.org/document/8695564) | |
      | 2019 | IJNDC | [Do You Like Sclera? Sclera-Region Detection and Colorization for Anime Character Line Drawings](https://www.atlantis-press.com/journals/ijndc/125913573) | |
      | 2019 | IJPE | [Anime Sketch Colorization with Cycle-Consistent Adversarial Networks](http://www.ijpe-online.com/EN/abstract/abstract4089.shtml) | |
      | 2021 | MDPI-AS | [Seg2pix: Few Shot Training Line Art Colorization with Segmented Image Data](https://www.mdpi.com/2076-3417/11/4/1464) | |
      | 2021 | - | [Semi-Automatic Manga Colorization Using Conditional Adversarial Networks](https://www.gwern.net/docs/ai/anime/2021-golyadkin.pdf) | [Github](https://github.com/qweasdd/manga-colorization) |
      | 2021 | ICPR | [Stylized-Colorization for Line Arts](https://www.gwern.net/docs/ai/anime/2021-fang.pdf) | |
      | 2021 | arXiv | [Generative Probabilistic Image Colorization](https://arxiv.org/pdf/2109.14518.pdf) | |
      | 2022 | CHI | [FlatMagic: Improving Flat Colorization through AI-Driven Design for Digital Comic Professionals](https://dl.acm.org/doi/10.1145/3491102.3502075) | [Github](https://cragl.cs.gmu.edu/flatmagic/) |
      | 2022 | ICCIR | [Attention-Based Unsupervised Sketch Colorization of Anime Avatars](https://dl.acm.org/doi/abs/10.1145/3548608.3559316) | |
      | 2023 | IEEE Access | [Robust Manga Page Colorization via Coloring Latent Space](https://ieeexplore.ieee.org/document/10278137) | |
    </details>

  - <details>
      <summary>Atari</summary>

      | **Year** | **Venue** | **Title** | **Links** |
      | ---- | ---- | ---- | ---- |
      | 2023 | CVPR Workshop | [Diffusart: Enhancing Line Art Colorization with Conditional Diffusion Models](https://openaccess.thecvf.com/content/CVPR2023W/CVFAD/papers/Carrillo_Diffusart_Enhancing_Line_Art_Colorization_With_Conditional_Diffusion_Models_CVPRW_2023_paper.pdf) | |
      | 2023 | WACV | [Guiding Users to Where to Give Color Hints for Efficient Interactive Sketch Colorization via Unsupervised Region Prioritization](https://arxiv.org/pdf/2210.14270.pdf) | |
      | 2022 | IVCNZ | [StencilTorch: An Iterative and User-Guided Framework for Anime Lineart Colorization](https://dl.acm.org/doi/abs/10.1007/978-3-031-25825-1_1) | |
      | 2022 | WACV | [Late-Resizing: A Simple but Effective Sketch Extraction Strategy for Improving Generalization of Line Art Colorization](https://openaccess.thecvf.com/content/WACV2022/papers/Kim_Late-Resizing_A_Simple_but_Effective_Sketch_Extraction_Strategy_for_Improving_WACV_2022_paper.pdf) | |
      | 2021 | SIGGRAPH Asia | [Interactive Manga Colorization with Fast Flat Coloring](https://dl.acm.org/doi/fullHtml/10.1145/3476124.3488628) | |
      | 2021 | TIP | [Dual Color Space Guided Sketch Colorization](https://ieeexplore.ieee.org/document/9515572) | |
      | 2021 | ICCV | [Deep Edge-Aware Interactive Colorization Against Color-Bleeding Effects](https://arxiv.org/pdf/2107.01619.pdf) | |
      | 2021 | CVPR | [User-Guided Line Art Flat Filling with Split Filling Mechanism](https://openaccess.thecvf.com/content/CVPR2021/papers/Zhang_User-Guided_Line_Art_Flat_Filling_With_Split_Filling_Mechanism_CVPR_2021_paper.pdf) | [HP](https://lllyasviel.github.io/SplitFilling/) |
      | 2021 | CVPR Workshop | [Line Art Colorization with Concatenated Spatial Attention](https://openaccess.thecvf.com/content/CVPR2021W/CVFAD/papers/Yuan_Line_Art_Colorization_With_Concatenated_Spatial_Attention_CVPRW_2021_paper.pdf) | |
      | 2020 | MDPI-AS | [Automatic Colorization of Anime Style Illustrations Using a Two-Stage Generator](https://www.mdpi.com/2076-3417/10/23/8699) | |
      | 2020 | ICCST | [Cartoon Image Colorization Based on Emotion Recognition and Superpixel Color Resolution](https://ieeexplore.ieee.org/document/9262834) | |
      | 2019 | CSAI | [User Guided Digital Artwork Colorization](https://dl.acm.org/doi/10.1145/3374587.3374604) | |
      | 2019 | IEEE Access | [Two-Stage Sketch Colorization with Color Parsing](https://ieeexplore.ieee.org/document/8944253) | |
      | 2019 | ICIP | [MANGAN: Assisting Colorization of Manga Characters Concept Art Using Conditional GAN](https://ieeexplore.ieee.org/document/8803667) | [Github](https://github.com/felipelodur/ManGAN) |
      | 2019 | CISP-BMEI | [Semi-Automatic Sketch Colorization Based on Conditional Generative Adversarial Networks](https://ieeexplore.ieee.org/document/8965999) | |
      | 2019 | TAAI | [Interactive Anime Sketch Colorization with Style Consistency via a Deep Residual Neural Network](https://ieeexplore.ieee.org/document/8959911) | |
      | 2019 | CVMP | [PaintsTorch: A User-Guided Anime Line Art Colorization Tool with Double Generator Conditional Adversarial Network](https://dl.acm.org/doi/10.1145/3359998.3369401) | |
      | 2019 | Engineering Letters | [Anime Sketch Coloring with Swish-Gated Residual U-Net and Spectrally Normalized GAN](http://www.engineeringletters.com/issues_v27/issue_3/EL_27_3_01.pdf) | [Github](https://github.com/pradeeplam/Anime-Sketch-Coloring-with-Swish-Gated-Residual-UNet) |
      | 2018 | ACM-TG | [Two-Stage Sketch Colorization](https://github.com/lllyasviel/style2paints/blob/master/papers/sa.pdf) | [Github](https://github.com/lllyasviel/style2paints) |
      | 2018 | ACM-MC | [User-Guided Deep Anime Line Art Colorization with Conditional Adversarial Networks](https://arxiv.org/pdf/1808.03240.pdf) | [Github](https://github.com/orashi/AlacGAN) |
      | 2018 | Neurocomputing | [Auto-Painter: Cartoon Image Generation from Sketch by Using Conditional Wasserstein Generative Adversarial Networks](https://arxiv.org/pdf/1705.01908.pdf) | [Github](https://github.com/irfanICMLL/Auto_painter) |
      | 2018 | EG | [A Fast and Efficient Semi-Guided Algorithm for Flat Coloring Line-Arts](https://hal.archives-ouvertes.fr/hal-01891876/document) | |
      | 2017 | arXiv | [Outline Colorization through Tandem Adversarial Networks](https://arxiv.org/pdf/1704.08834.pdf) | |
      | 2011 | NPAR | [TexToons: Practical Texture Mapping for Hand-Drawn Cartoon Animations](https://dcgi.fel.cvut.cz/home/sykorad/Sykora11-NPAR.pdf) | |
      | 2006 | ACM-TG | [Manga Colorization](http://www.cse.cuhk.edu.hk/~ttwong/papers/manga/manga.pdf) | |
    </details>

  - <details>
      <summary>Reference</summary>

      | **Year** | **Venue** | **Title** | **Links** |
      | ---- | ---- | ---- | ---- |
      | 2025 | arXiv | [SSIMBaD: Sigma Scaling with SSIM-Guided Balanced Diffusion for AnimeFace Colorization](https://arxiv.org/abs/2506.04283) | [Github](https://github.com/Giventicket/SSIMBaD-Sigma-Scaling-with-SSIM-Guided-Balanced-Diffusion-for-AnimeFace-Colorization) |
      | 2025 | arXiv | [Cobra: Efficient Line Art Colorization with Broader References](https://arxiv.org/abs/2504.12240) | [Website](https://zhuang2002.github.io/Cobra/) |
      | 2025 | arXiv | [ColorizeDiffusion v2: Enhancing Reference-Based Sketch Colorization Through Separating Utilities](https://arxiv.org/abs/2504.06895) | [Github](https://github.com/tellurion-kanata/colorizeDiffusion) |
      | 2025 | arXiv | [MangaNinja: Line Art Colorization with Precise Reference Following](https://arxiv.org/abs/2501.08332) | [Website](https://johanan528.github.io/MangaNinjia/) |
      | 2024 | arXiv | [ColorFlow: Retrieval-Augmented Image Sequence Colorization](https://arxiv.org/abs/2412.11815) | [Github](https://github.com/TencentARC/ColorFlow) |
      | 2024 | arXiv | [ColorizeDiffusion: Adjustable Sketch Colorization with Reference Image and Text](https://arxiv.org/pdf/2401.01456.pdf) | |
      | 2024 | CVPR | [Learning Inclusion Matching for Animation Paint Bucket Colorization](https://arxiv.org/pdf/2403.18342) | [Website](https://ykdai.github.io/projects/InclusionMatching) |
      | 2023 | CGF | [Two-Step Training: Adjustable Sketch Colourization via Reference Image and Text Tag](https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/cgf.14791) | [Github](https://github.com/ydk-tellurion/sketch_colorizer) |
      | 2023 | arXiv | [AnimeDiffusion: Anime Face Line Drawing Colorization via Diffusion Models](https://arxiv.org/pdf/2303.11137.pdf) | |
      | 2022 | ICME | [Attention-Aware Anime Line Drawing Colorization](https://arxiv.org/pdf/2212.10988.pdf) | |
      | 2022 | NicoInt | [Semi-Automatic Colorization Pipeline for Anime Characters and Its Evaluation in Production](https://ieeexplore.ieee.org/document/9848507) | |
      | 2022 | ICASSP | [Improving Reference-Based Sketch Image Colorization via Feature Aggregation and Contrastive Learning](https://ieeexplore.ieee.org/document/9746326) | |
      | 2022 | ECCV | [Eliminating Gradient Conflict in Reference-Based Line-Art Colorization](https://arxiv.org/pdf/2207.06095.pdf) | [Github](https://github.com/kunkun0w0/SGA) |
      | 2022 | Mathematics | [Exemplar-Based Sketch Colorization with Cross-Domain Dense Semantic Correspondence](https://www.mdpi.com/2227-7390/10/12/1988/htm) | |
      | 2022 | CVM | [Reference-Guided Structure-Aware Deep Sketch Colorization for Cartoons](https://link.springer.com/content/pdf/10.1007/s41095-021-0228-6.pdf) | |
      | 2022 | TMM | [Multi-Density Sketch-to-Image Translation Network](https://arxiv.org/pdf/2006.10649.pdf) | |
      | 2021 | SIGGRAPH Asia | [Anime Character Colorization Using Few-Shot Learning](https://dl.acm.org/doi/10.1145/3478512.3488604) | |
      | 2021 | IET-IP | [Disentangled and Controllable Sketch Creation Based on Disentangling the Structure and Color Enhancement](https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/ipr2.12343) | |
      | 2021 | ICIP | [Painting Style-Aware Manga Colorization Based on Generative Adversarial Networks](https://arxiv.org/pdf/2107.07943.pdf) | |
      | 2021 | CGI | [Exploring Sketch-Based Character Design Guided by Automatic Colorization](https://rawanmg.github.io/pdf/gi21.pdf) | |
      | 2021 | ICME | [Anime Style Transfer with Spatially-Adaptive Normalization](https://ieeexplore.ieee.org/document/9428305) | |
      | 2021 | ICPR | [Anime Sketch Colorization by Component-Based Matching Using Deep Appearance Features and Graph Representation](https://ieeexplore.ieee.org/document/9412507) | |
      | 2020 | EG | [Colorization of Line Drawings with Empty Pupils](https://www.researchgate.net/publication/347703757_Colorization_of_Line_Drawings_with_Empty_Pupils) | |
      | 2020 | EG | [Deep-Eyes: Fully Automatic Anime Character Colorization with Painting of Details on Empty Pupils](https://www.gwern.net/docs/ai/anime/2020-akita.pdf) | |
      | 2020 | CVPR | [Reference-Based Sketch Image Colorization Using Augmented-Self Reference and Dense Semantic Correspondence](https://arxiv.org/pdf/2005.05207.pdf) | [Github](https://github.com/Jungjaewon/Reference_based_Skectch_Image_Colorization) |
      | 2020 | TMM | [Semantic Example Guided Image-to-Image Translation](https://arxiv.org/pdf/1909.13028.pdf) | |
      | 2019 | SIGGRAPH | [Multi-Reference Anime Colorization Based on Graph Matching](https://ahcweb01.naist.jp/papers/conference/2019/201907_SIGGRAPH2019_s-nakamura/201907_SIGGRAPH_s-nakamura.slides.pdf) | |
      | 2019 | SIGGRAPH | [Fully Automatic Colorization for Anime Character Considering Accurate Eye Colors](https://dl.acm.org/doi/pdf/10.1145/3306214.3338585) | |
      | 2018 | - | [Attentioned Deep Paint](https://github.com/ktaebum/AttentionedDeepPaint/blob/master/poster.pdf) | [Github](https://github.com/ktaebum/AttentionedDeepPaint) |
      | 2017 | ACIS SNPD | [Automatic Manga Colorization with Color Style by Generative Adversarial Nets](https://ieeexplore.ieee.org/document/8022768) | |
      | 2017 | SIGGRAPH | [Comicolorization: Semi-Automatic Manga Colorization](https://arxiv.org/pdf/1706.06759.pdf) | [Github](https://github.com/DwangoMediaVillage/Comicolorization) |
      | 2017 | ICDAR | [cGAN-Based Manga Colorization Using a Single Training Image](https://arxiv.org/pdf/1706.06918.pdf) | [Github](https://github.com/sudheerachary/Manga_Colorization) |
      | 2017 | ACPR | [Style Transfer for Anime Sketches with Enhanced Residual U-Net and Auxiliary Classifier GAN](https://arxiv.org/pdf/1706.03319.pdf) | |
      | 2017 | IIAI | [Deep Manga Colorization with Color Style Extraction by Conditional Adversarially Learned Inference](http://www.iaiai.org/journals/index.php/IEE/article/view/214) | |
      | 2014 | SIGGRAPH | [Reference-Based Manga Colorization by Graph Correspondence Using Quadratic Programming](http://yusukematsui.me/pdf/sato_sa2014.pdf) | |
      | 2004 | NPAR | [Unsupervised Colorization of Black-and-White Cartoons](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.95.2629&rep=rep1&type=pdf) | |
    </details>

  - <details>
      <summary>Tag</summary>

      | 
**年份** | **会议\u002F期刊** | **标题** | **链接** |\n      | ---- | ---- | ---- | ---- | \n      | 2021 | PG | [基于显式区域分割的线稿上色](https:\u002F\u002Fwww.sysu-imsl.com\u002Ffiles\u002FPG2021\u002Fline_art_colorization_pg2021_main.pdf) | |\n      | 2019 | ICCV | [Tag2Pix: 使用文本标签结合SECat和变化损失的线稿上色](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fpapers\u002FKim_Tag2Pix_Line_Art_Colorization_Using_Text_Tag_With_SECat_and_ICCV_2019_paper.pdf)  | [Github](https:\u002F\u002Fgithub.com\u002Fblandocs\u002FTag2Pix) |\n  \u003C\u002Fdetails>\n\n- \u003Cdetails>\n      \u003Csummary>视频\u003C\u002Fsummary>\n\n      | **年份** | **会议\u002F期刊** | **标题** | **链接** |\n      | ---- | ---- | ---- | ---- | \n      | 2023 | PR | [基于变换区域增强网络的动漫线稿视频上色](https:\u002F\u002Fwww.sciencedirect.com\u002Fscience\u002Farticle\u002Fpii\u002FS0031320323002625?casa_token=evjknkPkujoAAAAA:a0kjRw6hy3aaO9UAkINCtXYlELCDMDQu5RykR6k7qNeRPaYsaBfR8_PNSg0R-MsIs3vOCePOTfYh) |  |\n      | 2021 | ICCV | [动画Transformer：通过片段匹配实现视觉对应](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2109.02614.pdf)  | [视频](https:\u002F\u002Fcadmium.app\u002F) |\n      | 2021 | WACV  | [用于自动动画上色的线稿相关性匹配特征迁移网络](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FWACV2021\u002Fpapers\u002FZhang_Line_Art_Correlation_Matching_Feature_Transfer_Network_for_Automatic_Animation_WACV_2021_paper.pdf) | |  \n      | 2020 | TVCG | [基于少量参考的深度线稿视频上色](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2003.10685.pdf) | |\n      | 2019 | ICCV Workshop | [艺术家引导的半自动动画上色](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCVW_2019\u002Fpapers\u002FCVFAD\u002FThasarathan_Artist-Guided_Semiautomatic_Animation_Colorization_ICCVW_2019_paper.pdf) | |\n      | 2019 | CCCRV | [自动的时序一致视频上色](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1904.09527.pdf) | [Github](https:\u002F\u002Fgithub.com\u002FHarry-Thasarathan\u002FTCVC) |\n  \u003C\u002Fdetails>\n\n\n\n### 编辑\n\n  - \u003Cdetails>\n      \u003Csummary>光照\u003C\u002Fsummary>\n\n      | **年份** | **会议\u002F期刊** | 
**标题** | **链接** |\n      | ---- | ---- | ---- | ---- | \n      | 2022 | ICIP | [通过二维轮廓的三维增强实现平面着色图画的自动光照](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9897386) | |\n      | 2021 | ICCV | [SmartShadow：面向线稿的艺术化阴影绘制工具](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2021\u002Fpapers\u002FZhang_SmartShadow_Artistic_Shadow_Drawing_Tool_for_Line_Drawings_ICCV_2021_paper.pdf) | |\n      | 2020 | CVPR | [学习为手绘草图添加阴影](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2002.11812.pdf) | [Github](https:\u002F\u002Fgithub.com\u002Fqyzdao\u002FShadeSketch) |\n      | 2020 | ACM-TG | [通过RGB空间几何生成数字绘画光照效果](https:\u002F\u002Flllyasviel.github.io\u002FPaintingLight\u002Ffiles\u002FTOG20PaintingLight.pdf) | [Github](https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FPaintingLight) |\n      | 2019 | CVMP | [通过表面膨胀为手绘艺术增添全局光照效果](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3359998.3369400) | [HP](https:\u002F\u002Fv-sense.scss.tcd.ie\u002Fresearch\u002Faugmenting-hand-drawn-art-with-global-illumination-effects-through-surface-inflation\u002F) |\n      | 2018 | NPAR | [赛璐珞动画的二维阴影处理](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3229147.3229148) | [HP](https:\u002F\u002Fv-sense.scss.tcd.ie\u002Fresearch\u002Fvfx-animation\u002F2d-shading-for-cel-animation\u002F) |\n      | 2018 | NeurIPS Workshop | [为二维角色自动生成光照效果](https:\u002F\u002Fnips2018creativity.github.io\u002Fdoc\u002FAutomatic_Illumination_Effects_for_2D_Characters.pdf) | |\n      | 2018 | ECCV Workshop | [深度法线估计用于手绘角色的自动阴影处理](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCVW_2018\u002Fpapers\u002F11131\u002FHudon_Deep_Normal_Estimation_for_Automatic_Shading_of_Hand-Drawn_Characters_ECCVW_2018_paper.pdf) | [Github](https:\u002F\u002Fgithub.com\u002FV-Sense\u002FDeepNormals) |\n      | 2014 | ACM-TG | [Ink-and-Ray：用于为手绘角色添加全局光照效果的浮雕网格](https:\u002F\u002Fdcgi.fel.cvut.cz\u002Fhome\u002Fsykorad\u002FSykora14-TOG.pdf) |  |\n  \u003C\u002Fdetails>\n\n  - \u003Cdetails>\n      
\u003Csummary>插画编辑\u003C\u002Fsummary>\n      \n      | **年份** | **会议\u002F期刊** | **标题** | **链接** |\n      | ---- | ---- | ---- | ---- | \n      | 2024 | Arxiv | [Re:Draw——上下文感知翻译作为一种可控的艺术创作方法](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2401.03499.pdf) | |\n      | 2023 | Arxiv | [DreamTuner：仅需一张图片即可进行主体驱动的生成](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2312.13691) | |\n      | 2023 | Arxiv | [基于大规模数据集的实例引导卡通编辑](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2312.01943.pdf) | |\n      | 2023 | Arxiv | [基于结构感知扩散模型，通过草图进行参考图像合成](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2304.09748.pdf) | [Github](https:\u002F\u002Fgithub.com\u002Fkangyeolk\u002FPaint-by-Sketch) |\n      | 2023 | WACV | [DyStyle：用于多属性条件风格编辑的动态神经网络](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2109.10737.pdf) | |\n      | 2021 | NeurIPS | [无监督学习构图能量概念](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2111.03042.pdf) |  [Github](https:\u002F\u002Fgithub.com\u002Fyilundu\u002Fcomet) |\n      | 2021 | ICME | [利用记忆调制实现通用人脸修复](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2110.01033.pdf) | |\n      | 2021 | ICIP | [提升插画质量：将业余插画转化为专业水准](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9506615) | |\n      | 2021 | NicoInt | [基于草图的动漫发型编辑与生成式修复](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9515963) | |\n      | 2021 | TSP | [结合归一化流先验的深度展开方法用于逆问题](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2107.02848.pdf) | |\n      | 2021 | CVPR | [L2M-GAN：学习操纵潜在空间语义以进行面部属性编辑](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FYang_L2M-GAN_Learning_To_Manipulate_Latent_Space_Semantics_for_Facial_Attribute_CVPR_2021_paper.pdf)  | |\n      | 2021 | TVCG | [借助3D引导进行跨域且解耦的脸部操控](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2104.11228.pdf) | [Github](https:\u002F\u002Fgithub.com\u002FcassiePython\u002Fcddfm3d) |\n      | 2020 | ECCV | [在基于优化的平滑中消除外观保留](https:\u002F\u002Flllyasviel.github.io\u002FAppearanceEraser\u002Fpaper\u002Fpaper.pdf)  | 
[Github](https:\u002F\u002Fgithub.com\u002Flllyasviel\u002FAppearanceEraser) |\n      | 2018 | Arxiv | [利用内部表征拼贴实现空间可控的图像合成](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1811.10153.pdf) | [Github](https:\u002F\u002Fgithub.com\u002Fquolc\u002Fneural-collage) |\n      | 2018 | PG | [使用高级色彩混合将图像分解为图层](https:\u002F\u002Fonlinelibrary.wiley.com\u002Fdoi\u002F10.1111\u002Fcgf.13577) | [Github](https:\u002F\u002Fgithub.com\u002Fyuki-koyama\u002Funblending) |\n  \u003C\u002Fdetails>\n\n- \u003Cdetails>\n      \u003Csummary>素描编辑\u003C\u002Fsummary>\n      \n      | **年份** | **会议\u002F期刊** | **标题** | **链接** |\n      | ---- | ---- | ---- | ---- | \n      | 2023 | ICMR | [基于几何语义联合驱动的角色线条绘制生成](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3591106.3592216) | |\n      | 2023 | ACM-TG | [利用对比学习框架的半监督参考驱动素描提取](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1FELTVl73OrQ9Q0uBXN7jLbRStSsF-NgM\u002Fview?pli=1)  | [Github](https:\u002F\u002Fgithub.com\u002FChanuku\u002Fsemi_ref2sketch_code) |\n      | 2022 | ACM-TG | [基于注意力机制的参考驱动素描提取](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3550454.3555504) | [Github](https:\u002F\u002Fgithub.com\u002Fref2sketch\u002Fref2sketch) |\n      | 2022 | AAAI | [端到端的线条绘制矢量化](https:\u002F\u002Fojs.aaai.org\u002Findex.php\u002FAAAI\u002Farticle\u002Fdownload\u002F20379\u002F20138)  | |\n      | 2022 | CW |  [用于绘制老化动漫角色面部的绘图辅助系统](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9937356) | |\n      | 2021 | NicoInt | [从彩色插画中进行单样本（one-shot）线条提取](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9515964) | |\n      | 2020 | ACM-MM | [SketchMan：学习创作专业素描](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3394171.3413720)  | [Github](https:\u002F\u002Fgithub.com\u002FLCXCUC\u002FSketchMan2020) |\n      | 2020 | MDPI-AS | [渐进式全数据卷积神经网络用于从动漫风格插画中提取线条](https:\u002F\u002Fwww.mdpi.com\u002F2076-3417\u002F10\u002F1\u002F41) | |\n      | 2019 | TVCG | 
[基于集成VGG层的感知意识素描简化](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F8771128) | |\n      | 2019 | SIGGRAPH | [通过合成素描实现无配对的素描到线条转换](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3355088.3365163)  | |\n      | 2018 | ACM-TG | [实时数据驱动的交互式粗略素描描线](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3197517.3201370) | [Github](https:\u002F\u002Fgithub.com\u002Fbobbens\u002Fline_thinning) |\n      | 2018 | ACM-TG | [掌握素描：面向结构化预测的对抗增强](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1703.08966.pdf) |  [Github](https:\u002F\u002Fgithub.com\u002Fbobbens\u002Fsketch_simplification) |\n      | 2017 | ACM-TG | [深度提取漫画结构线](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3072959.3073675) | [Github](https:\u002F\u002Fgithub.com\u002Fljsabc\u002FMangaLineExtraction) |\n      | 2016 | ACM-TG | [学习简化：用于清理粗略素描的全卷积网络](https:\u002F\u002Fesslab.jp\u002F~ess\u002Fpublications\u002FSimoSerraSIGGRAPH2016.pdf) | [Github](https:\u002F\u002Fgithub.com\u002Fbobbens\u002Fsketch_simplification) |\n      | 2011 | NPAR | [针对草图动画的时间噪声控制](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F2024676.2024691)  | |\n  \u003C\u002Fdetails>\n\n  - \u003Cdetails>\n      \u003Csummary>自动动画\u003C\u002Fsummary>\n      \n      | **年份** | **会议\u002F期刊** | **标题** | **链接** |\n      | ---- | ---- | ---- | ---- | \n      | 2023 | MTA | [自动动画中间帧生成](https:\u002F\u002Flink.springer.com\u002Farticle\u002F10.1007\u002Fs11042-023-17354-x) | |\n      | 2023 | ICCV | [深度几何化的卡通线条中间帧生成](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FICCV2023\u002Fpapers\u002FSiyao_Deep_Geometrized_Cartoon_Line_Inbetweening_ICCV_2023_paper.pdf) | [Github](https:\u002F\u002Fgithub.com\u002Flisiyao21\u002Fanimeinbet) |\n      | 2022 | ICIP | [增强型深度动画视频插帧](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2206.12657.pdf) | [Github](https:\u002F\u002Fgithub.com\u002Flaomao0\u002FAutoSktFI) |\n      | 2022 | ECCV | [提升2D动画插帧的感知质量](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2111.12792.pdf)  | 
[Github](https:\u002F\u002Fgithub.com\u002FShuhongChen\u002Feisai-anime-interpolator\u002F) |\n      | 2021 | CVPR | [野外环境下的深度动画视频插帧](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2104.02495.pdf)  | [Github](https:\u002F\u002Fgithub.com\u002Flisiyao21\u002FAnimeInterp\u002F) |\n      | 2019 | ICIP | [基于光流和距离变换的线条绘制帧插值，以支持中间帧生成](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8803506) | |\n      | 2017 | CMV | [DiLight：数字光台——使用引导线进行2D动画中间帧生成](http:\u002F\u002Fgraphics.tudelft.nl\u002FPublications-new\u002F2017\u002FCMV17\u002Fpdf.pdf) | |\n  \u003C\u002Fdetails>\n\n  - \u003Cdetails>\n      \u003Csummary>自动图像增强\u003C\u002Fsummary>\n      \n      | **年份** | **会议\u002F期刊** | **标题** | **链接** |\n      | ---- | ---- | ---- | ---- | \n      | 2024 | CVPR | [APISR：受动漫制作启发的真实世界动漫超分辨率](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2403.01598.pdf) | [Github](https:\u002F\u002Fgithub.com\u002FKiteretsu77\u002FAPISR) |\n      | 2022 | NeurIPS | [AnimeSR：为动画视频学习真实世界的超分辨率模型](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2206.07038.pdf) | [Github](https:\u002F\u002Fgithub.com\u002FTencentARC\u002FAnimeSR) |\n      | 2022 | Sensors | [基于Transformer的动漫图像超分辨率模型](https:\u002F\u002Fwww.mdpi.com\u002F1424-8220\u002F22\u002F21\u002F8126) | |\n      | 2021 | ICCV Workshop | [Real-ESRGAN：使用纯合成数据训练真实世界的盲超分辨率](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2107.10833.pdf) | [Github](https:\u002F\u002Fgithub.com\u002Fxinntao\u002FReal-ESRGAN) |\n      | 2021 | JSCI | [使用改进的超分辨率CNN增强动漫图像放大效果](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2110.02321.pdf)  | [Github](https:\u002F\u002Fgithub.com\u002FTanakitInt\u002FSRCNN-anime) |\n  \u003C\u002Fdetails>\n\n  - \u003Cdetails>\n    \u003Csummary>背景去除\u003C\u002Fsummary>\n    \n    | **年份** | **会议\u002F期刊** | **标题** | **链接** |\n    | ---- | ---- | ---- | ---- | \n    | 2025 | arXiv | [ToonOut：针对动漫角色的微调背景去除](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2509.06839) |[Github](https:\u002F\u002Fgithub.com\u002FMatteoKartoon\u002FBiRefNet) |\n      
\u003C\u002Fdetails>\n\n### 角色动画\n  - \u003Cdetails>\n      \u003Csummary>角色动画\u003C\u002Fsummary>\n      \n      | **年份** | **会议\u002F期刊** | **标题** | **链接** |\n      | ---- | ---- | ---- | ---- | \n      | 2025 | arXiv | [CartoonAlive: 基于单张肖像实现富有表现力的Live2D建模](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2507.17327) |[官网](https:\u002F\u002Fhuman3daigc.github.io\u002FCartoonAlive_webpage\u002F) |\n      | 2024 | Arxiv | [AnimateDiff-Lightning: 跨模型扩散蒸馏](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2403.12706) | [HF](https:\u002F\u002Fhuggingface.co\u002FByteDance\u002FAnimateDiff-Lightning) |\n      | 2024 | TCSVT | [用于说话头像动画的层次化特征变形与融合](https:\u002F\u002Fgwern.net\u002Fdoc\u002Fai\u002Fanime\u002F2024-zhang.pdf) | |\n      | 2023 | TMM | [基于循环StyleGAN生成器的语言引导人脸动画](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2208.05617.pdf) | [Github](https:\u002F\u002Fgithub.com\u002FTiankaiHang\u002Flanguage-guided-animation) |\n      | 2023 | IJCAI | [利用动漫角色素材进行协作式神经渲染](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2207.05378.pdf) | [Github](https:\u002F\u002Fgithub.com\u002Fmegvii-research\u002FCONR) |\n      | 2020 | ACCV | [CPTNet: 用于单张图像说话头像动画的级联姿态变换网络](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FACCV2020\u002Fpapers\u002FZhang_CPTNet_Cascade_Pose_Transform_Network_for_Single_Image_Talking_Head_ACCV_2020_paper.pdf) | |\n      | 2020 | SIGGRAPH Asia | [MakeItTalk: 具备演讲者感知的说话头像动画](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2004.12992.pdf) | |\n  \u003C\u002Fdetails>\n\n### 漫画应用\n  - \u003Cdetails>\n      \u003Csummary>分类\u003C\u002Fsummary>\n      \n      | **年份** | **会议\u002F期刊** | **标题** | **链接** |\n      | ---- | ---- | ---- | ---- | \n      | 2023 | TIP | [考虑分镜与整页信息的漫画类型理解](https:\u002F\u002Fieeexplore.ieee.org\u002Fstamp\u002Fstamp.jsp?arnumber=10112648) |  |\n  \u003C\u002Fdetails>\n\n  - \u003Cdetails>\n    \u003Csummary>生成\u003C\u002Fsummary>\n\n    | **年份** | **会议\u002F期刊** | **标题** | **链接** |\n    | ---- | ---- | ---- | ---- |\n    | 2025 | Arxiv | [DreamingComics: 
基于视频模型，通过主题和版面定制化生成的故事可视化流水线](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2512.01686) |  |\n    | 2025 | Arxiv | [逐格灵魂：AI辅助漫画创作中富有表现力的面部绘制流程](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2511.16038) |  |\n    | 2025 | Arxiv | [检索增强型漫画图像生成](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2507.09140) |  |\n    | 2024 | Arxiv | [Sketch2Manga: 利用扩散模型将草图转化为阴影处理的漫画画面](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2403.08266) | [Github](https:\u002F\u002Fgithub.com\u002FdmMaze\u002Fsketch2manga) |\n    | 2021 | CVPR | [通过模仿漫画制作流程从插画生成漫画](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FZhang_Generating_Manga_From_Illustrations_via_Mimicking_Manga_Creation_Workflow_CVPR_2021_paper.pdf) | [官网](https:\u002F\u002Flllyasviel.github.io\u002FMangaFilter\u002F) |\n    | 2020 | ICAART | [使用cGAN进行漫画头发阴影风格迁移](https:\u002F\u002Fwww.scitepress.org\u002FPapers\u002F2020\u002F89614\u002F89614.pdf) | |\n    | 2020 | SIGGRAPH | [利用屏幕网点变分自编码器进行漫画填充风格转换](http:\u002F\u002Fwww.cse.cuhk.edu.hk\u002F~ttwong\u002Fpapers\u002Fscreenstyle\u002Fscreenstyle.pdf) | |\n    | 2019 | ISM | [合成漫画人物的屏幕网点图案](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8959008)  | |\n  \u003C\u002Fdetails>\n\n  - \u003Cdetails>\n      \u003Csummary>上色\u003C\u002Fsummary>\n      \n      | **年份** | **会议\u002F期刊** | **标题** | **链接** |\n      | ---- | ---- | ---- | ---- | \n      | 2025 | arXiv | [漫画线稿图像之间的区域级对应关系预测](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2509.09501) |  |\n      | 2025 | arXiv | [MangaDiT: 基于参考的线稿上色，采用扩散Transformer中的层次化注意力机制](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2508.09709) |  |\n  \u003C\u002Fdetails>\n\n  - \u003Cdetails>\n      \u003Csummary>修复\u003C\u002Fsummary>\n      \n      | **年份** | **会议\u002F期刊** | **标题** | **链接** |\n      | ---- | ---- | ---- | ---- | \n      | 2021 | CVPR | [利用混叠效应进行漫画修复](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FXie_Exploiting_Aliasing_for_Manga_Restoration_CVPR_2021_paper.pdf)  | 
[Github](https:\u002F\u002Fgithub.com\u002Fmsxie92\u002FMangaRestoration) |\n  \u003C\u002Fdetails>\n\n  - \u003Cdetails>\n      \u003Csummary>理解\u003C\u002Fsummary>\n      \n      | **年份** | **会议\u002F期刊** | **标题** | **链接** |\n      | ---- | ---- | ---- | ---- | \n      | 2025 | arXiv | [Re:Verse - 你的视觉语言模型能读懂漫画吗？](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2508.08508)  | |\n  \u003C\u002Fdetails>\n\n  - \u003Cdetails>\n      \u003Csummary>补绘\u003C\u002Fsummary>\n      \n      | **年份** | **会议\u002F期刊** | **标题** | **链接** |\n      | ---- | ---- | ---- | ---- | \n      | 2021 | TOG | [语义感知的无缝漫画补绘](http:\u002F\u002Fwww.cse.cuhk.edu.hk\u002F~ttwong\u002Fpapers\u002Fmangainpaint\u002Fmangainpaint.pdf)  | |\n  \u003C\u002Fdetails>\n\n  - \u003Cdetails>\n      \u003Csummary>编辑\u003C\u002Fsummary>\n      \n      | **年份** | **会议\u002F期刊** | **标题** | **链接** |\n      | ---- | ---- | ---- | ---- | \n      | 2023 | Arxiv | [基于可解释屏幕网点表示的漫画重绘](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2306.04114.pdf) | |\n      | 2021 | SIGGRAPH Asia | [通过距离变换进行漫画图像补绘](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3478512.3488607) | |\n  \u003C\u002Fdetails>\n\n  - \u003Cdetails>\n      \u003Csummary>文本检测\u003C\u002Fsummary>\n      \n      | **年份** | **会议\u002F期刊** | **标题** | **链接** |\n      | ---- | ---- | ---- | ---- | \n      | 2011 | IJIP | [数字漫画文本的实时提取方法](https:\u002F\u002Fwww.cscjournals.org\u002Fmanuscript\u002FJournals\u002FIJIP\u002FVolume4\u002FIssue6\u002FIJIP-290.pdf) | |\n  \u003C\u002Fdetails>\n\n  - \u003Cdetails>\n      \u003Csummary>地标检测\u003C\u002Fsummary>\n      \n     
 | **年份** | **会议\u002F期刊** | **标题** | **链接** |\n      | ---- | ---- | ---- | ---- | \n      | 2018 | Arxiv | [面向漫画图像的人脸地标检测](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1811.03214.pdf) | [GitHub](https:\u002F\u002Fgithub.com\u002Foaugereau\u002FFacialLandmarkManga) |\n  \u003C\u002Fdetails>\n\n- \u003Cdetails>\n      \u003Csummary>分割\u003C\u002Fsummary>\n      \n      | **年份** | **会议\u002F期刊** | **标题** | **链接** |\n      | ---- | ---- | ---- | ---- |\n      | 2025 | Arxiv | [推进漫画分析：Manga109数据集的全面分割标注](https:\u002F\u002Fopenaccess.thecvf.com\u002F\u002Fcontent\u002FCVPR2025\u002Fpapers\u002FXie_Advancing_Manga_Analysis_Comprehensive_Segmentation_Annotations_for_the_Manga109_Dataset_CVPR_2025_paper.pdf) | [HuggingFace](https:\u002F\u002Fhuggingface.co\u002Fdatasets\u002FMS92\u002FMangaSegmentation) |\n      | 2022 | ICPR | [面向内容感知的像素级连环画分格分割](https:\u002F\u002Flink.springer.com\u002Fchapter\u002F10.1007\u002F978-3-031-37742-6_1) | |\n      | 2020 | ISM | [漫画语境下画面序列的提取](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9327968) | |\n  \u003C\u002Fdetails>\n\n  - \u003Cdetails>\n      \u003Csummary>翻译\u003C\u002Fsummary>\n      \n      | **年份** | **会议\u002F期刊** | **标题** | **链接** |\n      | ---- | ---- | ---- | ---- | \n      | 2021 | AAAI | [迈向全自动漫画翻译](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2012.14271.pdf)  | |\n  \u003C\u002Fdetails>\n\n  - \u003Cdetails>\n      \u003Csummary>深度估计\u003C\u002Fsummary>\n      \n      | **年份** | **会议\u002F期刊** | **标题** | **链接** |\n      | ---- | ---- | ---- | ---- | \n      | 2022 | WACV | [在漫画领域中估计图像深度](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2110.03575.pdf) | |\n  \u003C\u002Fdetails>\n\n  - \u003Cdetails>\n      \u003Csummary>矢量化\u003C\u002Fsummary>\n      \n      | **年份** | **会议\u002F期刊** | **标题** | **链接** |\n      | ---- | ---- | ---- | ---- | \n      | 2021 | Arxiv | [基于基元级深度强化学习的位图漫画矢量化](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2110.04830.pdf) | |\n  \u003C\u002Fdetails>\n\n  - \u003Cdetails>\n      
\u003Csummary>重识别\u003C\u002Fsummary>\n      \n      | **年份** | **会议\u002F期刊** | **标题** | **链接** |\n      | ---- | ---- | ---- | ---- | \n      | 2024 | ACM-MM | [通过迭代式多模态融合实现漫画中的零样本角色识别与说话人预测](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2404.13993) | |\n      | 2022 | Arxiv | [基于人脸-身体及时空关联聚类的无监督漫画角色重识别](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2204.04621.pdf) | |\n  \u003C\u002Fdetails>\n\n\n\n### 表征学习\n  - \u003Cdetails>\n      \u003Csummary>表征学习\u003C\u002Fsummary>\n      \n      | **年份** | **会议\u002F期刊** | **标题** | **链接** |\n      | ---- | ---- | ---- | ---- | \n      | 2015 | SIGGRAPH | [Illustration2Vec：插画的语义向量表示](https:\u002F\u002Fwww.gwern.net\u002Fdocs\u002Fanime\u002F2015-saito.pdf) | [Github](https:\u002F\u002Fgithub.com\u002Frezoo\u002Fillustration2vec) |\n  \u003C\u002Fdetails>\n\n### 姿势估计\n  - \u003Cdetails>\n      \u003Csummary>姿势估计\u003C\u002Fsummary>\n      \n      | **年份** | **会议\u002F期刊** | **标题** | **链接** |\n      | ---- | ---- | ---- | ---- | \n      | 2024 | Arxiv | [VLPose：通过语言-视觉调优弥合姿势估计领域的域间差距](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2402.14456) | |\n      | 2022 | WACV | [插画人物姿势估计的迁移学习](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2108.01819.pdf) | [Github](https:\u002F\u002Fgithub.com\u002FShuhongChen\u002Fbizarre-pose-estimator) |\n      | 2016 | MANPU | [动漫\u002F漫画人物的姿态估计：以合成数据为例](http:\u002F\u002Fwww.cs.cornell.edu\u002F~pramook\u002Fpapers\u002Fmanpu2016.pdf) | |\n  \u003C\u002Fdetails>\n\n### 图像检索\n  - \u003Cdetails>\n      \u003Csummary>图像检索\u003C\u002Fsummary>\n      \n      | **年份** | **会议\u002F期刊** | **标题** | **链接** |\n      | ---- | ---- | ---- | ---- | \n      | 2021 | Arxiv | [AugNet：结合图像增强的端到端无监督视觉表征学习](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2106.06250.pdf) |  [Github](https:\u002F\u002Fgithub.com\u002Fchenmingxiang110\u002FAugNet) |\n      | 2017 | MTA | [基于草图的Manga109数据集漫画检索](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1510.04389.pdf) | |\n  \u003C\u002Fdetails>\n\n### 视觉对应\n  - \u003Cdetails>\n      
\u003Csummary>视觉对应\u003C\u002Fsummary>\n      \n      | **年份** | **会议\u002F期刊** | **标题** | **链接** |\n      | ---- | ---- | ---- | ---- | \n      | 2016 | ACM-TG | [全局最优的卡通追踪](http:\u002F\u002Fwww.cse.cuhk.edu.hk\u002F~ttwong\u002Fpapers\u002Ftoontrack\u002Ftoontrack.pdf)  | [HP](http:\u002F\u002Fwww.cse.cuhk.edu.hk\u002F~ttwong\u002Fpapers\u002Ftoontrack\u002Ftoontrack.html) |\n  \u003C\u002Fdetails>\n\n### 角色识别\n  - \u003Cdetails>\n      \u003Csummary>角色识别\u003C\u002Fsummary>\n      \n      | **年份** | **会议\u002F期刊** | **标题** | **链接** |\n      | ---- | ---- | ---- | ---- | \n      | 2023 | IEEE Access | [基于图卷积网络的动漫插画层次化多标签属性分类](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F10097719) | | \n      | 2022 | ICIP | [利用领域特定语义特征、基于GCN的动漫插画多模态多标签属性分类](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9898071) | |\n      | 2022 | Arxiv | [AniWho：一种快速准确地对图像中的动漫角色面部进行分类的方法](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2208.11012.pdf) | |\n      | 2022 | ECCV | [具有条件匹配的开放词汇DETR](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2203.11876.pdf) | [Github](https:\u002F\u002Fgithub.com\u002Fyuhangzang\u002FOV-DETR) |\n      | 2022 | EG | [CAST：通过跟踪进行自监督的动画角色标注](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2201.07619.pdf) | |\n      | 2021 | TIP | [用于卡通面部识别的图拼图学习](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2107.06532.pdf) | |\n      | 2020 | IJCAI | [ACFD：非对称卡通面部检测器](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2007.00899.pdf)  | | \n      | 2020 | ACM-MM | [从过去中学习：结合知识嵌入的元持续学习，用于联合草图、卡通和讽刺画面部识别](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fepdf\u002F10.1145\u002F3394171.3413892) | |\n      | 2020 | TST | [基于深度学习的“萌”系卡通图片极性情感分类](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9220754) | |\n      | 2019 | ICDAR Workshop | [基于CNN的孟加拉语连环画页面图像中分格\u002F角色的提取](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8893046) | |\n      | 2019 | ACM-TURC | [通过无标注训练数据逐步进行深度特征学习以实现漫画角色识别](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3321408.3322624) | 
|\n      | 2018 | Arxiv | [使用Manga109标注进行漫画目标检测](https:\u002F\u002Farxiv.org\u002Fabs\u002F1803.08670) | |\n  \u003C\u002Fdetails>\n\n### 3D角色创建\n  - \u003Cdetails>\n      \u003Csummary>3D角色创建\u003C\u002Fsummary>\n      \n      | **年份** | **会议\u002F期刊** | **标题** | **链接** |\n      | ---- | ---- | ---- | ---- | \n      | 2023 | NeurIPS | [DreamWaltz：用复杂的可动画3D虚拟形象构建场景](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2305.12529.pdf)  | [Github](https:\u002F\u002Fgithub.com\u002FIDEA-Research\u002FDreamWaltz) |\n      | 2020 | ICCW | [利用机器学习自动生成3D自然风格的动漫类非玩家角色](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9240508)  | |\n  \u003C\u002Fdetails>\n\n### 机器人技术\n  - \u003Cdetails>\n      \u003Csummary>机器人技术\u003C\u002Fsummary>\n      \n      | **年份** | **会议\u002F期刊** | **标题** | **链接** |\n      | ---- | ---- | ---- | ---- | \n      | 2020 | IROS | [让机器人两分钟内绘制出生动的肖像](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2005.05526.pdf) | | \n  \u003C\u002Fdetails>\n\n### 语音合成\n  - \u003Cdetails>\n      \u003Csummary>语音合成\u003C\u002Fsummary>\n      \n      | **年份** | **会议\u002F期刊** | **标题** | **链接** |\n      | ---- | ---- | ---- | ---- | \n      | 2019 | ACM-TG | [漫画引导的语音合成](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3355089.3356487)  | [主页](https:\u002F\u002Fbitwangyujia.github.io\u002Fresearch\u002Fproject\u002Fcomic2speech.html) |\n  \u003C\u002Fdetails>\n\n### 成人内容检测\n  - \u003Cdetails>\n      \u003Csummary>成人内容检测\u003C\u002Fsummary>\n      \n      | **年份** | **会议\u002F期刊** | **标题** | **链接** |\n      | ---- | ---- | ---- | ---- | \n      | 2022 | IEEE Access | [基于深度学习的YouTube视频不当内容检测与分类方法](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9696242)  |  |\n      | 2021 | IEEE Access | [传统特征描述符与基于CNN的特征描述符在卡通色情内容检测中的评估](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9371684) |  |\n      | 2019 | ACM SAC | [KidsGUARD：针对儿童不安全视频的细粒度表征与检测方法](https:\u002F\u002Fprecog.iiitd.edu.in\u002Fpubs\u002FKidsGuard-cam-ready.pdf) | 
[GitHub](https:\u002F\u002Fgithub.com\u002Fprecog-iiitd\u002Fkidsguard-sac) |\n  \u003C\u002Fdetails>\n\n### 综述与评论\n  - \u003Cdetails>\n      \u003Csummary>综述与评论\u003C\u002Fsummary>\n      \n      | **年份** | **会议\u002F期刊** | **标题** | **链接** |\n      | ---- | ---- | ---- | ---- | \n      | 2025 | arXiv | [通过创作连环画比较人类与AI在视觉叙事中的表现：一项案例研究](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2507.18641) | | \n      | 2024 | 审稿中 | [视觉与语言领域中缺失的一环：关于漫画理解的综述](https:\u002F\u002Fgithub.com\u002Femanuelevivoli\u002Fawesome-comics-understanding) | [GitHub](https:\u002F\u002Fgithub.com\u002Femanuelevivoli\u002Fawesome-comics-understanding) |\n      | 2023 | HSET | [动漫风格角色面部生成：一项综述](https:\u002F\u002Fwww.researchgate.net\u002Fpublication\u002F374705438_Anime-like_Character_Face_Generation_A_Survey) |  |\n      | 2022 | IJCV | [卡通图像处理：一项综述](https:\u002F\u002Flink.springer.com\u002Farticle\u002F10.1007\u002Fs11263-022-01645-1) | |\n      | 2020 | Arxiv | [图像上色：综述与数据集](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2008.10774.pdf) | |\n      | 2019 | TOPS | [计算方法在漫画分析中的应用](https:\u002F\u002Fpubmed.ncbi.nlm.nih.gov\u002F31705626\u002F) | |\n      | 2018 | - | [计算机科学领域中漫画研究的综述](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1804.05490.pdf) | |\n  \u003C\u002Fdetails>\n\n## 项目\n除上述之外，与动漫或漫画相关的 GitHub 或其他类型的项目汇总。\n\n  - \u003Cdetails>\n      \u003Csummary>仓库\u003C\u002Fsummary>\n      \n      - [Awesome-Animation-Research](https:\u002F\u002Fgithub.com\u002Fzhenglinpan\u002FAwesome-Animation-Research)\n      \n  \u003C\u002Fdetails>\n\n  - \u003Cdetails>\n      \u003Csummary>数据集\u003C\u002Fsummary>\n      \n      - [动漫绘画的分层时序数据集](https:\u002F\u002Flayered-anime.github.io\u002F)\n      - [TRIGGER 数据集](https:\u002F\u002Fwww.nii.ac.jp\u002Fdsc\u002Fidr\u002Ftrigger\u002F)\n      - [动漫艺术](https:\u002F\u002Fwww.kaggle.com\u002Fdatasets\u002Fmuoncollider\u002Fdanbooru2020small)\n    \u003C\u002Fdetails>\n\n  - \u003Cdetails>\n      \u003Csummary>表示学习\u003C\u002Fsummary>\n\n      - 
[动漫关键帧和人脸角色的分类与向量化](https:\u002F\u002Fgithub.com\u002Fenmanuelmag\u002FAnimeClassificator)\n    \u003C\u002Fdetails>\n\n  - \u003Cdetails>\n      \u003Csummary>图像生成\u003C\u002Fsummary>\n      \n      ### GANs\n      \n      - [makegirlsmoe](https:\u002F\u002Fgithub.com\u002Fmakegirlsmoe\u002Fmakegirlsmoe_web)\n      - [ANIME305\u002FAnime-GAN-tensorflow](https:\u002F\u002Fgithub.com\u002FANIME305\u002FAnime-GAN-tensorflow)\n      - [jayleicn\u002FAnimeGAN](https:\u002F\u002Fgithub.com\u002Fjayleicn\u002FanimeGAN)\n      - [FangYang970206\u002FAnime_GAN](https:\u002F\u002Fgithub.com\u002FFangYang970206\u002FAnime_GAN)\n      - [pavitrakumar78\u002FAnime-Face-GAN-Keras](https:\u002F\u002Fgithub.com\u002Fpavitrakumar78\u002FAnime-Face-GAN-Keras)\n      - [forcecore\u002FKeras-GAN-Animeface-Character](https:\u002F\u002Fgithub.com\u002Fforcecore\u002FKeras-GAN-Animeface-Character)\n      - [tdrussell\u002FIllustrationGAN](https:\u002F\u002Fgithub.com\u002Ftdrussell\u002FIllustrationGAN)\n      - [m516825\u002FConditional-GAN](https:\u002F\u002Fgithub.com\u002Fm516825\u002FConditional-GAN)\n      - [bchao1\u002FAnime-Generation](https:\u002F\u002Fgithub.com\u002Fbchao1\u002FAnime-Generation)\n      \n      ### 扩散模型\n      \n      - [harubaru\u002Fwaifu-diffusion](https:\u002F\u002Fgithub.com\u002Fharubaru\u002Fwaifu-diffusion)\n      - [DGSpitzer\u002FCyberpunk-Anime-Diffusion](https:\u002F\u002Fhuggingface.co\u002FDGSpitzer\u002FCyberpunk-Anime-Diffusion)\n      - [NovelAI](https:\u002F\u002Fnovelai.net\u002F)\n      - [Stable Diffusion 模型](https:\u002F\u002Fcyberes.github.io\u002Fstable-diffusion-models\u002F)\n    \u003C\u002Fdetails>\n\n  - \u003Cdetails>\n      \u003Csummary>图像到图像转换\u003C\u002Fsummary>\n\n      - [Aixile\u002Fchainer-cyclegan](https:\u002F\u002Fgithub.com\u002FAixile\u002Fchainer-cyclegan)\n      - [SystemErrorWang\u002FFacialCartoonization](https:\u002F\u002Fgithub.com\u002FSystemErrorWang\u002FFacialCartoonization)\n      - 
[experience-ml\u002Fcartoonize](https:\u002F\u002Fgithub.com\u002Fexperience-ml\u002Fcartoonize)\n      - [racinmat\u002Fanime-style-transfer](https:\u002F\u002Fgithub.com\u002Fracinmat\u002Fanime-style-transfer)\n      - [TachibanaYoshino\u002FAnimeGANv2](https:\u002F\u002Fgithub.com\u002FTachibanaYoshino\u002FAnimeGANv2)\n      - [XiaoSanGit\u002FReal2Animation-video-generation](https:\u002F\u002Fgithub.com\u002FXiaoSanGit\u002FReal2Animation-video-generation)\n      - [使用 GAN 的头像艺术家](http:\u002F\u002Fcs230.stanford.edu\u002Fprojects_winter_2020\u002Freports\u002F32639139.pdf)\n      - [用 StackGAN 生成卡通风格的面部表情](https:\u002F\u002Fcs230.stanford.edu\u002Fprojects_fall_2019\u002Freports\u002F26242839.pdf)\n    \u003C\u002Fdetails>\n\n  - \u003Cdetails>\n      \u003Csummary>自动线稿上色\u003C\u002Fsummary>\n\n      - [style2paints](https:\u002F\u002Fgithub.com\u002Flllyasviel\u002Fstyle2paints)\n      - [PaintsChainer](https:\u002F\u002Fgithub.com\u002Fpfnet\u002FPaintsChainer)\n      - [Ugness\u002FLine-Art-Colorization-SPADE](https:\u002F\u002Fgithub.com\u002FUgness\u002FLine-Art-Colorization-SPADE)\n      - [sanjay235\u002FSketch2Color-anime-translation](https:\u002F\u002Fgithub.com\u002Fsanjay235\u002FSketch2Color-anime-translation)\n      - [Pengxiao-Wang\u002FStyle2Paints_V3](https:\u002F\u002Fgithub.com\u002FPengxiao-Wang\u002FStyle2Paints_V3)\n      - [GANime: 从草图生成动漫和漫画人物插画](http:\u002F\u002Fcs230.stanford.edu\u002Fprojects_winter_2020\u002Fposters\u002F32226261.pdf)\n      - [线稿上色](http:\u002F\u002Fcs231n.stanford.edu\u002Freports\u002F2017\u002Fpdfs\u002F425.pdf)\n    \u003C\u002Fdetails>\n\n  - \u003Cdetails>\n      \u003Csummary>角色动画\u003C\u002Fsummary>\n\n      - [单张图片驱动的会说话的动漫头部](https:\u002F\u002Fgithub.com\u002Fpkhungurn\u002Ftalking-head-anime-demo)\n      - [单张图片驱动的会说话的动漫头部 2：更具表现力](https:\u002F\u002Fgithub.com\u002Fpkhungurn\u002Ftalking-head-anime-2-demo)\n      - [单张图片驱动的会说话的动漫头部 
3：现在连身体也动起来了](https:\u002F\u002Fgithub.com\u002Fpkhungurn\u002Ftalking-head-anime-3-demo)\n      - [基于注意力机制的神经渲染：动漫角色动画的逐步改进](https:\u002F\u002Fgithub.com\u002Ftranspchan\u002FLive3D-v2)\n    \u003C\u002Fdetails>\n\n  - \u003Cdetails>\n      \u003Csummary>超分辨率\u003C\u002Fsummary>\n\n      - [waifu2x](https:\u002F\u002Fgithub.com\u002Fnagadomi\u002Fwaifu2x)\n      - [Anime4K](https:\u002F\u002Fgithub.com\u002Fbloc97\u002FAnime4K)\n      - [goldhuang\u002FSRGAN-PyTorch](https:\u002F\u002Fgithub.com\u002Fgoldhuang\u002FSRGAN-PyTorch)\n      - [Real-CUGAN](https:\u002F\u002Fgithub.com\u002Fbilibili\u002Failab\u002Fblob\u002Fmain\u002FReal-CUGAN\u002FREADME_EN.md)\n    \u003C\u002Fdetails>\n\n  - \u003Cdetails>\n      \u003Csummary>分割\u003C\u002Fsummary>\n\n    - [jerryli27\u002FAniSeg](https:\u002F\u002Fgithub.com\u002Fjerryli27\u002FAniSeg)\n    - [zymk9\u002FYet-Another-Anime-Segmenter](https:\u002F\u002Fgithub.com\u002Fzymk9\u002FYet-Another-Anime-Segmenter)\n    \u003C\u002Fdetails>\n\n  - \u003Cdetails>\n      \u003Csummary>关键点检测\u003C\u002Fsummary>\n\n      - [基于深度级联回归的动漫人脸关键点检测](https:\u002F\u002Fgithub.com\u002Fkanosawa\u002Fanime_face_landmark_detection)\n      - [使用 mmdet 和 mmpose 的动漫人脸检测器](https:\u002F\u002Fgithub.com\u002Fhysts\u002Fanime-face-detector)\n    \u003C\u002Fdetails>\n\n\n### 会议与期刊\n\n- TIP: IEEE 图像处理汇刊\n- TMM: IEEE 多媒体汇刊\n- PR: 模式识别（Pattern Recognition）期刊\n- TSP: IEEE 信号处理汇刊\n- CVPR: IEEE\u002FCVF 计算机视觉与模式识别大会\n- MMUL: IEEE 多媒体\n- TVCG: IEEE 可视化与计算机图形学汇刊\n- MMM: 国际多媒体建模会议\n- ECCV: 欧洲计算机视觉会议\n- NeurIPS: 神经信息处理系统大会\n- NeurIPS-DB: 神经信息处理系统大会——数据集与基准测试赛道\n- ACM-MM: ACM 多媒体\n- ACM-TG: ACM 图形学汇刊\n- ACM-TURC: ACM 图灵纪念大会\n- ISICA: 国际智能计算与应用研讨会\n- 3ICT: 国际信息、计算与技术领域的创新与智能大会\n- MDPI-AS: MDPI 应用科学\n- EG: 欧洲图形学协会\n- CGF: 计算机图形论坛\n- CGI: 国际计算机图形学大会（Computer Graphics International）\n- IET-IP: IET 图像处理\n- ICME: IEEE 国际多媒体与博览会大会\n- CCCRV: 加拿大计算机与机器人视觉会议\n- ICCW: 国际网络世界大会\n- IROS: IEEE 国际智能机器人与系统大会\n- HSET: 科学、工程与技术亮点\n- TOPS: 认知科学专题\n- COLING: 计算语言学大会","# AwesomeAnimeResearch 快速上手指南\n\n**AwesomeAnimeResearch** 
并非一个单一的可执行软件或 Python 库，而是一个**学术资源汇总列表（Awesome List）**。它整理了与动漫（Anime）、漫画（Manga）及卡通相关的最新数据集、研究论文和开源代码链接。\n\n本指南将指导开发者如何利用该列表查找资源，并快速运行其中收录的典型开源项目。\n\n## 1. 环境准备\n\n由于列表中包含多个不同的研究项目（主要基于深度学习），你需要准备通用的 AI 开发环境。大多数现代动漫生成与分析项目依赖以下配置：\n\n*   **操作系统**: Linux (推荐 Ubuntu 20.04\u002F22.04) 或 macOS。Windows 用户建议使用 WSL2。\n*   **硬件要求**: \n    *   NVIDIA GPU (推荐显存 ≥ 8GB，用于运行 GANs 或 Diffusion 模型)。\n    *   若仅进行数据浏览或轻量级推理，CPU 亦可。\n*   **核心依赖**:\n    *   Python 3.8+\n    *   PyTorch 或 TensorFlow (具体版本需参照选定项目的 `requirements.txt`)\n    *   Git\n    *   CUDA Toolkit (版本需与 PyTorch\u002FTensorFlow 匹配)\n\n**前置检查命令：**\n```bash\npython --version\nnvidia-smi\ngit --version\n```\n\n## 2. 安装与资源获取\n\n由于这是一个资源列表，\"安装\"过程实际上是**克隆仓库**并**选择特定项目进行部署**。\n\n### 步骤 1: 克隆资源列表\n首先获取完整的论文和数据集索引：\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002FSerialLain3170\u002FAwesomeAnimeResearch.git\ncd AwesomeAnimeResearch\n```\n\n### 步骤 2: 选择并部署具体项目\n浏览 `README.md` 中的 **Datasets** 或 **Papers** 表格，找到你感兴趣的项目（例如图像生成类的 `StyleGAN-NADA` 或数据集 `Manga109`）。\n\n以列表中提到的 **StyleGAN-NADA** (CLIP-Guided Domain Adaptation) 为例，安装步骤如下：\n\n1.  **访问项目主页**：点击表格中的 `[Github]` 链接跳转至具体代码库。\n2.  **克隆具体项目**：\n    ```bash\n    git clone https:\u002F\u002Fgithub.com\u002Frinongal\u002FStyleGAN-nada.git\n    cd StyleGAN-nada\n    ```\n3.  **创建虚拟环境** (推荐)：\n    ```bash\n    conda create -n anime_research python=3.8\n    conda activate anime_research\n    ```\n4.  **安装依赖**：\n    *注意：带 CUDA 的 PyTorch 轮子需从 PyTorch 官方索引安装；国内用户可使用清华源或阿里源加速其余依赖的安装。*\n    ```bash\n    # 安装 PyTorch (示例为 CUDA 11.8 版本，请根据实际显卡与驱动调整)\n    pip install torch torchvision torchaudio --index-url https:\u002F\u002Fdownload.pytorch.org\u002Fwhl\u002Fcu118\n\n    # 安装项目其他依赖（国内用户可通过 -i 指定清华镜像加速）\n    pip install -r requirements.txt -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n    ```\n\n## 3. 基本使用\n\n不同项目的用法差异较大，以下是基于列表中常见任务的通用操作模式。\n\n### 场景 A: 下载与研究数据集\n列表中包含了如 `Manga109`, `DanbooRegion`, `AnimeCeleb` 等数据集。\n1.  在 `README.md` 的 **Datasets** 部分找到目标数据集。\n2.  点击 `[HP]` (Homepage) 或 `[Github]` 链接。\n3.  按照该项目页面的指示下载数据（通常需要填写申请表或直接下载）。\n4.  
将数据放置在项目指定的目录（通常为 `.\u002Fdata` 或 `.\u002Fdatasets`）。\n\n### 场景 B: 运行图像生成\u002F风格迁移模型\n以克隆的 `StyleGAN-nada` 为例，执行简单的文本引导风格迁移：\n\n```bash\n# 确保已在项目根目录且环境已激活\n# 示例命令：将预训练的 StyleGAN2 人脸模型向 \"anime\" 风格域迁移\n# （具体参数名以该仓库 README 为准，此处仅为示意）\npython train.py \\\n    --frozen_gen_ckpt path\u002Fto\u002Fstylegan2-ffhq.pt \\\n    --source_class \"photo\" \\\n    --target_class \"anime\" \\\n    --output_dir .\u002Foutput\n```\n\n### 场景 C: 复现论文结果\n对于 **Papers** 部分列出的研究（如 `SakugaFlow`, `CoMix`）：\n1.  点击论文标题阅读 arXiv 原文理解算法原理。\n2.  点击对应的 `[GitHub]` 链接获取官方实现代码。\n3.  通常仓库根目录会提供 `demo.ipynb` 或 `inference.py`，直接运行即可体验最基础的功能：\n    ```bash\n    python inference.py --input path\u002Fto\u002Finput_image.png --output .\u002Fresults\n    ```\n\n---\n**提示**：该列表更新频繁（包含 2025 年最新论文），建议定期 `git pull` 更新本地列表，以获取最新的 SOTA（State-of-the-Art）模型链接。","某 AI 初创团队正致力于开发一款能自动识别漫画角色并翻译对话气泡的智能阅读助手，但在项目初期陷入了数据搜集的泥潭。\n\n### 没有 AwesomeAnimeResearch 时\n- **数据搜寻如大海捞针**：团队成员需在 arXiv、GitHub 及各大学术会议网站间手动翻阅，耗时数周仍难以找全针对“漫画拟声词”或“长尾角色识别”的专用数据集。\n- **领域边界模糊导致误用**：容易混淆通用卡通数据与日式动漫数据，甚至错误引用了 3D 动画或欧美漫画的研究成果，导致模型在特定画风下表现不佳。\n- **复现成本极高**：找到论文后，往往发现对应的代码仓库已失效或缺失，缺乏统一的入口去验证算法在动漫场景下的实际效果。\n- **前沿动态滞后**：难以及时获取如“AI 生成动漫图像检测”等 2025 年的最新研究方向，导致技术选型落后于社区进展。\n\n### 使用 AwesomeAnimeResearch 后\n- **一站式精准获取**：直接通过分类列表定位到 `Manga109Dialog` 用于说话人检测，或 `COO` 数据集处理拟声词，将数据准备周期从数周缩短至两天。\n- **领域资源严格区分**：清晰指引团队避开通用的 2D 卡通研究，转而聚焦于 `AnimeCeleb` 或 `DAF:RE` 等专为动漫角色定制的高质量基准，显著提升模型准确率。\n- **代码与论文无缝对接**：每个条目均附带有效的 GitHub 链接或主页，开发人员可立即拉取代码进行基线测试，大幅降低复现门槛。\n- **紧跟学术最前沿**：迅速捕捉到关于“多模态大模型翻译漫画”的最新论文，及时调整技术路线，确保产品具备行业领先的上下文理解能力。\n\nAwesomeAnimeResearch 将原本碎片化、高门槛的动漫科研资源整合为结构化知识库，让开发者能从繁琐的搜集工作中解脱，专注于核心算法的创新与落地。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FSerialLain3170_AwesomeAnimeResearch_5f0c2b8d.png","SerialLain3170","lento","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FSerialLain3170_61832e78.png","omnipresent machine learning researcher, novice low-level programmer","Tachibana General 
Laboratory","WIRED","crosssceneofwindff@gmail.com","NieA7_3170","https:\u002F\u002Fseriallain3170.github.io\u002F","https:\u002F\u002Fgithub.com\u002FSerialLain3170",null,1232,71,"2026-04-09T11:09:22",1,"","未说明",{"notes":90,"python":88,"dependencies":91},"该仓库（AwesomeAnimeResearch）是一个 curated list（精选列表），主要收集了与动漫研究相关的数据集、论文和外部项目链接，本身不是一个可独立运行的软件工具或代码库。因此，README 中未包含任何关于操作系统、硬件配置、Python 版本或依赖库的安装需求。用户若需运行列表中提到的具体算法（如 StyleGAN-NADA, EigenGAN 等），需前往各论文对应的独立 GitHub 仓库查看具体的环境要求。",[],[14],[94,95,96,97],"anime","machine-learning","awesome","deep-learning","2026-03-27T02:49:30.150509","2026-04-12T07:53:57.792025",[101,106,111,116,121,126,131],{"id":102,"question_zh":103,"answer_zh":104,"source_url":105},30427,"如何向该仓库提交关于自动动漫线稿上色的新论文？","您可以直接在 GitHub 上创建一个新的 Issue，提供论文的标题、链接或摘要。例如，有用户提交了题为《Automatic Colorization of Anime Style Illustrations Using a Two-Stage Generator》的论文，维护者会在确认后通过 Commit 将其添加到仓库中。此外，相关的扩展版本论文（如《Colorization of Line Drawings with Empty Pupils》）也可以一并推荐。","https:\u002F\u002Fgithub.com\u002FSerialLain3170\u002FAwesomeAnimeResearch\u002Fissues\u002F3",{"id":107,"question_zh":108,"answer_zh":109,"source_url":110},30428,"是否有推荐的图像上色综述论文或数据集？","有的。社区推荐了论文《Image Colorization: A Survey and Dataset》（arXiv:2008.10774），这是一篇关于图像上色的全面综述及相关数据集。维护者已将该资源收录到仓库中，供研究者参考。","https:\u002F\u002Fgithub.com\u002FSerialLain3170\u002FAwesomeAnimeResearch\u002Fissues\u002F2",{"id":112,"question_zh":113,"answer_zh":114,"source_url":115},30429,"哪里可以找到基于参考图的半监督草图提取最新研究及对应数据集？","您可以参考发表在 ACM TG 2023 (SIGGRAPH Journal Track) 上的论文《Semi-supervised reference-based sketch extraction using a contrastive learning framework》。该项目页面位于 https:\u002F\u002Fchanuku.github.io\u002FSemi_ref2sketch\u002F，代码开源在 https:\u002F\u002Fgithub.com\u002FChanuku\u002Fsemi_ref2sketch_code。同时，作者还发布了包含动漫图像及其四种不同风格配对草图的数据集 
(4skst)，地址为：https:\u002F\u002Fgithub.com\u002FChanuku\u002F4skst。","https:\u002F\u002Fgithub.com\u002FSerialLain3170\u002FAwesomeAnimeResearch\u002Fissues\u002F16",{"id":117,"question_zh":118,"answer_zh":119,"source_url":120},30430,"有没有解决无监督低级图像到图像翻译中语义失真问题的论文推荐？","推荐查阅 CVPR 2022 的论文《Alleviating Semantics Distortion in Unsupervised Low-Level Image-to-Image Translation via Structure...》。该论文探讨了如何通过结构信息缓解语义失真问题，维护者已将其收录至仓库列表中。","https:\u002F\u002Fgithub.com\u002FSerialLain3170\u002FAwesomeAnimeResearch\u002Fissues\u002F14",{"id":122,"question_zh":123,"answer_zh":124,"source_url":125},30431,"有哪些关于漫画全自动翻译、深度估计或矢量化的最新研究？","社区整理了三篇相关论文：1. AAAI 2021 的《Towards Fully Automated Manga Translation》；2. 《Estimating Image Depth in the Comics Domain》；3. 《Vectorization of Raster Manga by Deep Reinforcement Learning》。这些资源涵盖了漫画翻译、深度估计及光栅漫画矢量化等方向，均已被维护者添加到仓库中。","https:\u002F\u002Fgithub.com\u002FSerialLain3170\u002FAwesomeAnimeResearch\u002Fissues\u002F6",{"id":127,"question_zh":128,"answer_zh":129,"source_url":130},30432,"是否有针对卡通人脸检测与识别的基准数据集或论文？","有的。相关资源包括论文《ACFD: Asymmetric Cartoon Face Detector》以及基准数据集论文《Cartoon Face Recognition: A Benchmark Dataset》。这些文件通常以 PDF 形式附件上传或直接提供链接，维护者确认后会将其整合进收藏列表。","https:\u002F\u002Fgithub.com\u002FSerialLain3170\u002FAwesomeAnimeResearch\u002Fissues\u002F4",{"id":132,"question_zh":133,"answer_zh":134,"source_url":135},30433,"有哪些关于草图生成动漫（Sketch2anime）的早期代表性工作？","早期的代表性工作包括：1. 《SmartPaint: a co-creative drawing system based on generative adversarial networks》，这是一个基于 GAN 的协同绘画系统；2. 《PI-REC: Progressive Image Reconstruction Network With Edge and Color Domain》，提出了结合边缘和颜色域的渐进式图像重建网络。这两篇论文均已被收录。","https:\u002F\u002Fgithub.com\u002FSerialLain3170\u002FAwesomeAnimeResearch\u002Fissues\u002F1",[]]